
The Problem with AI-Added Features

Every SaaS company is bolting AI onto existing products. Here's why that approach is fundamentally limited—and what AI-native design looks like instead.

Kumar Abhirup · 9 min read

There's a pattern in every major SaaS category right now: established products are shipping "AI features." HubSpot has AI. Salesforce has Einstein. Notion has AI. Asana has AI. The feature set is remarkably consistent: an "Ask AI" chat widget in the corner, smart autocomplete in text fields, AI-powered lead scoring, automated email drafts.

I've used most of these. They're mostly fine. They're also mostly wrong.

Not wrong as in bad engineering — the engineering is often excellent. Wrong as in: the approach reveals a fundamental misunderstanding of what AI changes about software. Adding AI to an existing product is like adding a search bar to a card catalog. It's an improvement, but you've still got a card catalog.

The Form-First Problem

Traditional software is built around forms. Contact forms, deal forms, activity forms. You have data you want to record; you open the relevant form; you fill in the fields; you hit save. The interface is a series of structured inputs that match the data model.

This is fine. It's even good, for certain kinds of work. There are times when you want a structured form — when you're doing deliberate data entry, when you need to ensure completeness, when multiple fields have validation requirements.

But it's not how people actually work with relationship information, which is what CRM is fundamentally about.

When you finish a call with a prospect, you don't think "I need to update the Contact stage to 'Proposal Sent', set the follow-up date to next Thursday, add an Activity record with type 'Call', and update the Company headcount field." You think "that went well, they want a proposal, I should follow up Thursday." The structured version and the human version are the same information expressed differently.

The AI-added approach to this problem: keep the form structure exactly as it is, add an AI autocomplete that tries to help you fill in the form faster. You still interact with the form. The AI just assists with field completion.

The AI-native approach: replace the form as the primary interface. You describe what happened in natural language; the agent parses it and updates the relevant records. You only see a form if you want to verify or manually adjust something.

The difference is not incremental. The first approach saves minutes. The second changes the workflow entirely.
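To make the contrast concrete, here is a toy sketch of the "one utterance, many field updates" shape. Everything here is hypothetical, not DenchClaw's actual implementation: a real agent would use a language model to extract fields, where this sketch uses regexes purely to illustrate the interface difference.

```python
import re

def parse_update(utterance: str) -> dict:
    """Toy extractor: turn a call summary into structured field updates.

    A real agent would use an LLM; the regexes here only illustrate
    that one natural-language statement maps to several form fields.
    """
    updates = {}
    if re.search(r"\bproposal\b", utterance, re.I):
        updates["deal_stage"] = "Proposal Sent"
    m = re.search(r"follow up (\w+day)", utterance, re.I)
    if m:
        updates["follow_up"] = m.group(1).capitalize()
    if re.search(r"\bcall\b", utterance, re.I):
        updates["activity_type"] = "Call"
    return updates

print(parse_update("Good call with Sarah, they want a proposal, follow up Thursday"))
# {'deal_stage': 'Proposal Sent', 'follow_up': 'Thursday', 'activity_type': 'Call'}
```

The form-first equivalent is three separate UI interactions; here all three updates fall out of one sentence, with the form remaining available as a verification view.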

The Database-as-Interface Problem

There's a related issue with how AI-added features interact with the underlying data model.

SaaS CRMs have database schemas that were designed for human access patterns — normalized, with specific tables for contacts, companies, deals, and explicit join relationships. This schema is appropriate for a human operator who accesses data through specific UI flows.

For an AI agent, this schema creates friction. An AI that wants to answer "who are my most engaged contacts this month" has to navigate a specific API, make multiple requests to join contacts with activity records, apply the right filters, handle pagination. This is engineering work that has to be done for every kind of query.

A data model designed for AI access is different. It's simpler to query, richer in context, and optimized for the kinds of questions an agent needs to answer. The EAV model with PIVOT views that DenchClaw uses is harder for humans to think about but trivially easy for an AI to query — you just write SQL against a flat view.

AI-added features don't change the underlying data model. They add an AI layer on top of a schema that wasn't designed for AI access. The result is an AI that's fighting the data model rather than working with it.
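A minimal sketch of the EAV-plus-flat-view idea, using SQLite. The table and view names are illustrative, not DenchClaw's actual schema: the point is that a generic attributes table stays flexible for the agent to write to, while a pivot-style view lets it answer questions with one flat SQL query.

```python
import sqlite3

# One generic attributes table instead of one column per field (EAV).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attrs (record_id TEXT, attr TEXT, value TEXT);
INSERT INTO attrs VALUES
  ('c1', 'name',  'Sarah Chen'),
  ('c1', 'stage', 'Proposal Sent'),
  ('c2', 'name',  'Alex Kim'),
  ('c2', 'stage', 'Lead');

-- A pivot-style view flattens the EAV rows back into columns,
-- so an agent can query it like an ordinary table.
CREATE VIEW contacts AS
SELECT record_id,
       MAX(CASE WHEN attr = 'name'  THEN value END) AS name,
       MAX(CASE WHEN attr = 'stage' THEN value END) AS stage
FROM attrs GROUP BY record_id;
""")

rows = conn.execute(
    "SELECT name FROM contacts WHERE stage = 'Proposal Sent'"
).fetchall()
print(rows)  # [('Sarah Chen',)]
```

Adding a new field is just a new `attr` value, no migration; the agent's queries against the flat view stay plain SQL.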

What Gets Missed: Context

The deepest problem with AI-added features is what they miss about context.

When a SaaS CRM adds an AI assistant, that assistant has access to the data in the CRM — contacts, deals, activities. What it doesn't have access to: your email threads about those contacts (unless you've set up the email integration), your notes from calls (unless you've added those notes to the CRM consistently), your memory of past interactions that never made it into the database, the business context behind why certain relationships matter.

An AI agent that lives in your entire workflow — your CRM, your email, your notes, your calendar, your browser — has a completely different quality of context. It doesn't just know that you have a deal with Stripe. It knows that you met Sarah Chen at a conference in September, that you had three email exchanges before she introduced you to their procurement team, that the deal stalled in December because of a budget freeze, and that you've been waiting for Q1 to re-engage.

That's the context that makes AI assistance genuinely useful. Not just "help me write an email to my contact at Stripe" but "help me write an email that's appropriate given the full history of this relationship."
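The quality-of-context difference is, mechanically, just how many sources feed the context the model sees. A hypothetical sketch (function and source names are mine, not DenchClaw's):

```python
def build_context(sources: dict[str, list[str]]) -> str:
    """Concatenate whatever context sources are available into one
    grounding document for the agent. Illustrative only."""
    parts = []
    for name, items in sources.items():
        if items:
            parts.append(f"## {name}")
            parts.extend(f"- {item}" for item in items)
    return "\n".join(parts)

# A single-system assistant sees only the CRM rows.
crm_only = build_context({"CRM": ["Deal: Stripe, stage Proposal"]})

# A workspace-wide agent sees the relationship history too.
full = build_context({
    "CRM":   ["Deal: Stripe, stage Proposal"],
    "Email": ["Sep: intro from Sarah Chen at a conference"],
    "Notes": ["Dec: stalled on budget freeze; re-engage in Q1"],
})
assert len(full) > len(crm_only)  # same task, far more grounding
```

The model and the prompt template can be identical in both cases; the answer quality diverges because only one of them knows about the budget freeze.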

DenchClaw's agent lives in the whole workspace — CRM data, documents, memory files, browser access. That's what makes it useful for actual relationship management rather than just data entry.

The "Add AI" Checklist Mentality

There's a product management pattern I keep seeing that I find troubling: the "add AI" checklist.

Product teams are getting pressure from leadership to ship AI features. So they go through their product and identify everything that could have an AI layer: "we can add AI to search, AI to email drafts, AI to lead scoring, AI to deal forecasting." The features get built, the press release gets written ("we've added AI-powered X to Y"), the product updates with new checkboxes in the comparison matrix.

This is not AI-native product design. It's checkbox-filling: the product ends up more complex and marginally more useful, but the experience hasn't fundamentally changed.

The question for AI-native design is not "where can we add AI?" It's "if we redesign this product assuming AI is the primary operator, what does the interface look like?" Those are very different questions with very different answers.

The first question produces features. The second produces products.

Signs That a Product Is AI-Added vs. AI-Native

Here are some patterns that distinguish genuinely AI-native products from AI-added products:

AI-added: The primary interface is still forms and tables. AI features are supplementary to the main workflow, not central to it. You can ignore the AI and still use the product fully.

AI-native: The primary interface is conversation or natural language. Forms and tables exist as views into data, not as the primary input mechanism. Without AI, the product is significantly less functional.

AI-added: The AI requires you to be in a specific UI component to access it. There's an "Ask AI" button somewhere.

AI-native: The AI is reachable from wherever you are — Telegram, the web interface, email, voice. The AI comes to where you are, not the other way around.

AI-added: The AI's suggestions are purely informational. You still have to do the action.

AI-native: The AI can take actions directly — create records, send messages, schedule follow-ups, update data. Its outputs are operational, not just informational.

AI-added: The AI has access to the data in one system.

AI-native: The AI has access to your entire context — all relevant data from all relevant sources, integrated into a coherent picture.
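The informational-vs-operational contrast above can be sketched in code. This is a generic tool-dispatch pattern, not DenchClaw's actual mechanism; the function names are hypothetical:

```python
# An "operational" agent output is a structured action the host app
# executes, rather than text the user has to act on themselves.
ACTIONS = {}

def action(fn):
    """Register a function as an action the agent may invoke."""
    ACTIONS[fn.__name__] = fn
    return fn

@action
def create_task(title: str, due: str) -> str:
    return f"task created: {title} (due {due})"

@action
def update_field(record: str, field: str, value: str) -> str:
    return f"{record}.{field} = {value}"

def dispatch(call: dict) -> str:
    """Execute a model-emitted action, e.g. from a tool-call response."""
    return ACTIONS[call["name"]](**call["args"])

print(dispatch({"name": "create_task",
                "args": {"title": "Send proposal", "due": "Thursday"}}))
# task created: Send proposal (due Thursday)
```

An AI-added assistant stops at "you should send the proposal by Thursday"; an AI-native one emits the `create_task` call and the record changes.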

DenchClaw is designed to be AI-native by these criteria. The primary interface is the agent. The agent is reachable from multiple channels. It takes actions directly, not just makes suggestions. And it has access to your full workspace context, not just the CRM data.

The Transition Is Uncomfortable

I want to be honest about something: the transition from form-first to conversation-first is jarring for many users, especially experienced CRM users.

If you've been using Salesforce for ten years, your muscle memory is form-based. You know where things are. You know how to log a call. You know where the follow-up date field is. Switching to a system where you express this in natural language feels less certain, not more. You're not sure if the agent understood. You're not sure if the right fields were updated. You want to see the form to verify.

We've taken this seriously in DenchClaw. The agent is conversational, but the underlying data is always visible in a traditional table/kanban view. When you tell the agent something, you can immediately see what it updated in the view. The conversation and the structure are both present; you can move between them.

The goal isn't to eliminate structure. It's to make the conversational interface the fast path while keeping the structured view as the verification layer. Over time, users trust the agent more and check the form less — but the option is always there.

Frequently Asked Questions

Can I still use forms and tables in DenchClaw, or is it purely conversational?

Both. DenchClaw has full table, kanban, calendar, and list views. You can interact with data directly through those interfaces. The AI is the fast path, not the only path.

How does DenchClaw's AI handle data entry errors?

The agent creates a confirmation that you can review. If it misunderstood something, you can correct it conversationally ("actually, the follow-up should be Friday, not Thursday") and it updates accordingly. The structured view is always available to verify.

What happens when the AI isn't sure what you meant?

It asks for clarification. A well-designed AI agent should set a high bar for acting on uncertain input: it's better to ask one clarifying question than to take the wrong action confidently.
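One common way to implement this is a confidence gate. A hypothetical sketch (the threshold value and field names are illustrative, not DenchClaw's):

```python
CONFIDENCE_THRESHOLD = 0.8  # illustrative value, tuned per deployment

def decide(parsed: dict) -> str:
    """Act only when the parsed intent clears the confidence bar;
    otherwise turn the would-be action into a clarifying question."""
    if parsed["confidence"] >= CONFIDENCE_THRESHOLD:
        return f"ACT: {parsed['intent']}"
    return f"ASK: did you mean '{parsed['intent']}'?"

print(decide({"intent": "move deal to Proposal", "confidence": 0.95}))
# ACT: move deal to Proposal
print(decide({"intent": "move deal to Proposal", "confidence": 0.40}))
# ASK: did you mean 'move deal to Proposal'?
```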

Do I need to configure the AI with my data model before using it?

DenchClaw comes with a default schema (people, companies, deals, tasks) that works for most use cases. The AI understands this schema automatically. If you customize the schema, the AI learns the custom fields as part of the workspace context.

Isn't conversational input slower than a well-designed form?

For a single, well-understood record update: sometimes. For complex context with multiple implications: usually faster. "I just had a call with Sarah, she's interested, follow up Thursday, move deal to proposal stage" updates five fields in one statement. That takes longer to do via forms.

Ready to try DenchClaw? Install in one command: npx denchclaw. Full setup guide →

Written by Kumar Abhirup · Building the future of AI CRM software.


© 2026 DenchHQ · San Francisco, CA