# Do Things That Don't Scale — But Let AI Help You Scale Them
Paul Graham's classic advice meets the AI era: how to do genuinely unscalable things while using AI to multiply your output — and when NOT to use AI to fake the unscalable.
Paul Graham's essay "Do Things That Don't Scale" has been quoted so many times that founders sometimes miss the actual point. The advice isn't to do manual work for its own sake. It's to do manual work because manual work teaches you things that automated work doesn't.
AI changes this in an interesting way. It doesn't change why you should do unscalable things. It does change how far you can push the frontier before you hit the wall.
## The Original Insight (Still Correct)
The reason to do unscalable things early is learning. When Airbnb's founders went to each host's apartment to take photos, they weren't trying to be the permanent photographer-on-call for all Airbnb hosts. They were learning what made a listing convert. They were building trust with early hosts. They were finding problems that wouldn't have been visible at scale.
That learning is irreplaceable. No analytics dashboard teaches you what a direct conversation teaches you. No A/B test answers the question "what do users actually want?" as directly as sitting with them while they use your product.
AI doesn't change this. The learning still requires human presence.
## What AI Changes
What AI changes is the leverage on the manual work you do.
When we onboard a new DenchClaw user with a manual setup call, we learn something. We also produce a personalized setup for that specific user. Before AI, that took 2 hours. Now it takes 45 minutes — the AI is handling the repetitive configuration work while we focus on the learning conversation.
That means we can have roughly 2.7x as many of those conversations per week with the same time investment. We still do each one manually. The density of learning increases.
Here's the practical version of this across different unscalable activities:
- **User interviews:** Same number of hours, more conversations (AI handles scheduling, transcription, and note synthesis)
- **Personalized outreach:** Same care and personalization, more recipients (AI generates first drafts; you customize and approve)
- **Custom onboarding:** Same depth, more customers (AI handles configuration; you handle the relationship)
- **Support:** Same quality, more tickets resolved (AI drafts responses; you review and send)
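The common shape across all four is a draft-then-approve loop. Here's a minimal sketch of that pattern — the names and types are invented for illustration, not a real DenchClaw API:

```typescript
// Hypothetical sketch: the AI widens the funnel by producing drafts,
// but nothing ships without a human explicitly approving it.
type Draft = { recipient: string; body: string };

function sendApproved(
  drafts: Draft[],
  humanApproves: (d: Draft) => boolean
): Draft[] {
  // The human gate is the whole point: AI handles volume,
  // judgment still decides what actually goes out.
  return drafts.filter(humanApproves);
}
```

The design choice worth noting: the approval function is a required parameter, not an optional flag. The workflow structurally cannot skip the human.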
The unscalable thing still requires your judgment and presence. AI removes the parts that didn't require either.
## When AI Faking the Unscalable Breaks Things
Here's where founders go wrong. They use AI to simulate the unscalable work without actually doing it.
AI-generated personalized emails that aren't actually personalized. AI-assisted "user interviews" where the AI asks questions and synthesizes responses without any human ever building a real relationship. Automated "outreach" at scale that gets 10x the volume with 0.1x the actual human connection.
This breaks the original reason for doing unscalable things. The user who got a GPT-written "personal" email from you didn't teach you anything. The "interview" that was actually a form processed by Claude didn't build trust. You got efficiency without learning.
The test: would you be embarrassed if the user knew exactly how much of this was automated? If yes, you're probably using AI to fake the unscalable rather than to amplify genuinely unscalable work.
## DenchClaw's Approach
We're very deliberate about this distinction internally.
Things we automate completely: scheduling follow-up tasks, updating CRM fields with standard data, generating first drafts of routine communications, synthesizing patterns across user interview notes.
Things we keep human but AI-assisted: every user onboarding call, every meaningful investor conversation follow-up, every product feedback response, every decision about what to build next.
The rule is: does this activity require judgment, relationship, or genuine curiosity to be valuable? If yes, keep it human. AI can handle the preparation, the follow-up, and the synthesis. The moment of human engagement is yours.
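That rule is concrete enough to write down. A sketch of it as a routing function — the type and field names are ours, invented for illustration:

```typescript
// Hypothetical sketch of the rule as code: an activity stays human
// if it needs judgment, relationship, or genuine curiosity.
type Activity = {
  name: string;
  needsJudgment: boolean;
  needsRelationship: boolean;
  needsCuriosity: boolean;
};

function route(a: Activity): "human" | "automate" {
  return a.needsJudgment || a.needsRelationship || a.needsCuriosity
    ? "human"
    : "automate";
}
```

By this rule, an onboarding call routes to "human" and a standard CRM field update routes to "automate" — matching the lists above.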
## The Compound Effect
The result of getting this right is compounding learning. Every manual conversation you have, amplified by AI efficiency, produces more learning per hour than either would alone. The founders who get this right will have dramatically better product instincts than those who outsourced the learning.
This is, I think, one of the most important shifts in what it means to be a founder in the AI era. The strategic value of human judgment increases because the tactical value of human execution decreases. Do the high-judgment work. Let AI handle the execution.
## Frequently Asked Questions
### What's the practical way to identify which activities should stay human?
Ask: "What would I learn from this activity that I couldn't learn from the AI's output?" If the answer is "not much," automate. If the answer is "something important about what users actually want," keep it human.
### How do you avoid the "efficiency trap" where you get more output but less learning?
Build explicit learning reviews into your workflow. Every week, ask: "What did I learn this week that changed my understanding of what we're building?" If the answer is nothing, you're optimizing for output over learning.
### How does this apply to sales at early stage?
Founder-led sales should stay human for longer than most founders want to. The learning you get from being in sales calls is too valuable to hand off early. AI can help you prepare, follow up, and track — but being in the conversation is irreplaceable until you understand your ICP deeply.
Ready to try DenchClaw? Install in one command: `npx denchclaw`. Full setup guide →
