
The Completeness Principle: Why AI Should Boil the Lake

The Completeness Principle says AI makes completeness cheap — so always implement fully. Boil the lake (one complete feature), never the ocean (full rewrites).

Kumar Abhirup · 9 min read

There is a phrase we use inside DenchClaw that has changed how we think about AI-assisted software development: Always boil the lake, never the ocean.

The lake is your feature. The ocean is everything. This distinction sounds simple, but it encodes something that took us a long time to understand — something that I think is one of the most important ideas in AI software development in 2026.

The Problem with "Good Enough"

Before AI coding tools, completeness was expensive. Writing a feature with every edge case handled, every error state covered, every performance implication considered — that might take three times as long as writing the "happy path only" version. When you're running a startup and time is the scarce resource, "good enough" is often a rational choice.

The problem is that "good enough" accumulates. Edge cases become tech debt. Missing error states become production incidents. Ignored performance implications become scalability crises. The shortcuts you take in month three come due in month twelve, with interest.

AI changes the cost equation fundamentally. The incremental cost of handling all the edge cases — when you have an AI that can reason about them systematically — is close to zero. The incremental cost of skipping them is exactly what it always was: deferred pain.

When completeness becomes cheap, the rational calculation inverts. Now the question is: why would you ship incomplete work?

What "The Lake" Means#

The lake metaphor comes from the idea of scope. Any feature has a natural scope — the full set of requirements, edge cases, and implications that are intrinsic to implementing it correctly. This is the lake. It's bounded. You can see the shore from anywhere on the water.

Boiling the lake means implementing completely. All the happy paths. All the edge cases. All the error states. The loading states and empty states and offline behavior. The accessibility. The performance profile at 10x expected load. The security implications. The documentation that explains what shipped.

Boiling the ocean means expanding scope beyond the natural feature boundary — attempting a full architectural rewrite alongside a feature implementation, or refactoring all the code that touches the feature, or solving the general case of a problem rather than the specific case you actually need.

The distinction matters because AI amplifies scope creep as dramatically as it amplifies completeness. An AI that starts implementing a login flow and decides it should also refactor the authentication architecture, the session management, the password reset flow, and the OAuth integration is attempting the ocean. It will produce an incomplete rewrite that breaks in unexpected ways and is far harder to review than a complete implementation of the original task.

How gstack Enforces the Principle

gstack bakes the Completeness Principle into its workflow at every phase.

In Think: The YC Office Hours role explicitly scopes the lake. The design document it produces lists not just what's in scope but what's explicitly out of scope. That "out of scope" list is as important as the feature definition itself. Without it, scope expands.

In Plan Eng: The locked architecture defines the boundary of the implementation. The Build phase implements against the architecture, not beyond it. When Build encounters something outside the architecture, it flags it rather than expanding.

In Build: The instruction is explicit: implement the full feature against the locked plan. Full means all edge cases in the plan are handled. Not all edge cases you can imagine — all edge cases the plan identified as in scope.

In Review: The Staff Engineer role checks for completeness explicitly. Are all the error states handled? Are all the edge cases in the plan actually implemented? Is there any place where the code falls through to undefined behavior?

In the 18-role chain: Each role passes a complete artifact to the next. The design document is complete before architecture starts. The architecture is locked before Build starts. The implementation is reviewed before QA starts. The completeness requirement propagates through the chain.
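The gating described above can be sketched as a simple guard: no phase starts until the previous phase's artifact is complete. A minimal sketch, assuming artifacts carry an explicit completeness flag (`Artifact` and `start_phase` are illustrative names, not gstack's actual API):

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    """An output of one phase, handed to the next role in the chain."""
    name: str
    complete: bool = False

def start_phase(phase: str, previous: Artifact) -> str:
    # Refuse to start until the prior artifact is complete.
    if not previous.complete:
        raise RuntimeError(f"{phase} blocked: {previous.name} is incomplete")
    return f"{phase} started"
```

With this guard, `start_phase("Plan Eng", Artifact("design document", complete=True))` proceeds, while the same call against an incomplete artifact raises instead of silently continuing.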

Why Teams Fail at This

Understanding the principle is easy. Applying it is harder. Here's where teams consistently fail:

Scope creep during Build. The AI finds a related problem while implementing and decides to fix it. The "fix" is outside the architecture. The architecture didn't account for it. The review doesn't catch it because it looks like intentional code. The production incident happens three weeks later.

Partial error handling. "We'll handle the error states properly later" is the most expensive phrase in software development. Later never comes, and missing error handling causes data corruption, security vulnerabilities, and user-visible failures. The Completeness Principle says handle them now, while the implementation is in context.

Documentation as afterthought. Documentation is part of completeness. Not as a bureaucratic requirement — as a functional requirement. If the documentation doesn't reflect what shipped, the next engineer (or the next AI) working on this code will make incorrect assumptions. The Document phase in gstack exists precisely because completeness includes documentation.

Performance as optional. Teams routinely ship without performance benchmarks and pay the price when traffic scales. Test Bench in gstack runs performance profiling as a required step, not an optional one. Completeness includes knowing your performance profile before users discover it.

The Ocean Problem in Practice

The ocean failure mode is worth understanding in detail because AI makes it more common, not less.

AI coding tools are extraordinary at seeing connections. They notice that the authentication refactor you started is related to the session management code, which is related to the OAuth flow, which is related to the admin panel permissions. A human engineer might not see all those connections as quickly. An AI sees them immediately.

The danger: the AI has a strong bias toward fixing everything it sees. It is built to complete patterns, and it finds partial implementations uncomfortable. Given the opportunity, it will attempt to resolve every related imperfection it encounters.

This is the ocean. And attempting the ocean produces incomplete rewrites at every junction: the session management is half-refactored, the OAuth flow is mid-migration, the admin panel is in a transitional state. None of it is complete. All of it is broken in subtle ways.

The freeze tool in gstack's safety system exists specifically to prevent this. By restricting the AI to a single directory, it prevents ocean-scope changes from accumulating silently.
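A directory restriction like that can be sketched as a pure path check. This is a minimal sketch under the assumption that a freeze is just an allowed root path; the function name is hypothetical and gstack's actual freeze tool may work differently:

```python
from pathlib import PurePosixPath

def within_freeze(frozen_dir: str, target: str) -> bool:
    """Allow a write only if the target sits inside the frozen directory."""
    frozen = PurePosixPath(frozen_dir)
    candidate = PurePosixPath(target)
    # True when the target is the frozen directory itself or a descendant of it.
    return frozen == candidate or frozen in candidate.parents
```

A write to `/repo/src/auth/login.py` passes a freeze on `/repo/src/auth`; a write to `/repo/src/session/store.py` is rejected before it can accumulate into ocean-scope change.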

The Positive Case: Why Completeness Compounds

The Completeness Principle isn't just a risk mitigation strategy. It's a compounding advantage.

When you consistently ship complete features:

Debugging becomes tractable. Incomplete implementations create ambiguity about intent. Was this error state intentionally unhandled, or was it an oversight? With complete implementations, unhandled states are clearly bugs.

Handoffs become clean. When the next engineer (or AI) picks up this code, they inherit a complete implementation. They can add new features rather than finishing existing ones. Context is preserved in the code itself.

Documentation stays current. Complete features get documented while they're fresh. Documentation written after the fact is always incomplete because memory fades. Documentation written as part of the feature is complete because the feature is in context.

Trust compounds. Teams that consistently ship complete features build a reputation for quality. That reputation reduces the review overhead for future work. Completeness today buys velocity tomorrow.

The Scope Conversation

Every sprint planning session has a version of the scope conversation: "Should we do this completely, or ship the MVP and iterate?"

The Completeness Principle gives a clear answer, but it requires understanding what "MVP" actually means.

MVP means the minimum viable product — the smallest thing that fully delivers the value proposition to the user. It does not mean "happy path only, edge cases TBD." An MVP with missing error handling isn't minimum viable; it's minimum broken.

The distinction: scope reduction is how you get to an MVP. Completeness reduction is how you get to a bug.

Reduce scope aggressively. The lake can be small. But whatever you ship, ship it completely.

FAQ

Q: How do I know where the lake ends and the ocean begins? The design document defines the lake. Anything in the design document is lake. Anything not in the design document is ocean. If you encounter something not in the design document during Build, stop and update the design document before implementing — don't expand silently.
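That boundary check becomes mechanical if the design document's scope lists are kept as data. A minimal sketch, assuming the document carries explicit in-scope and out-of-scope lists (the field names and example feature are illustrative, not gstack's actual format):

```python
# A design document's scope section as data (illustrative fields and feature).
design_doc = {
    "feature": "password reset flow",
    "in_scope": [
        "request reset link via email",
        "expire reset tokens after 30 minutes",
    ],
    "out_of_scope": [
        "OAuth account linking",
        "admin-initiated resets",
    ],
}

def classify(task: str) -> str:
    """Lake if the design document names it; ocean otherwise."""
    if task in design_doc["in_scope"]:
        return "lake"
    # Not in the document: stop and update the design document first.
    return "ocean"
```

Under this sketch, `classify("OAuth account linking")` returns `"ocean"`, which is the signal to stop and update the document rather than expand silently.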

Q: What about truly exploratory work where you don't know the full scope yet? Time-box it. Spend two hours exploring. Write down what you found. Then scope the lake based on what you know and proceed with the Completeness Principle. Exploration and implementation are different phases.

Q: Isn't "complete" subjective? How do you know when you've boiled the lake? The Plan Eng phase produces explicit edge cases and test scenarios. The Test QA phase verifies each one. When all planned edge cases are handled and passing, the lake is boiled. If new edge cases emerge during testing that weren't in the plan, add them to the plan and handle them — don't ship with known gaps.

Q: How does this apply to AI-generated code that I'm not reviewing line by line? The completeness requirement applies to the output, not the review depth. Whether you read every line or trust the AI's implementation, your ship checklist should verify: all planned edge cases handled, all error states covered, all tests passing, documentation updated. Test the behavior, not the implementation.
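That ship checklist can itself be a gate rather than prose. A minimal sketch with made-up check names, where an empty result means the lake is boiled:

```python
def blocked_checks(checks: dict) -> list:
    """Return the names of failed checks; an empty list means ready to ship."""
    return [name for name, passed in checks.items() if not passed]

# Example checklist (the check names are illustrative).
ship_checks = {
    "planned edge cases handled": True,
    "error states covered": True,
    "tests passing": True,
    "documentation updated": False,
}
```

Here `blocked_checks(ship_checks)` returns `["documentation updated"]`, so the feature is not ready to ship even though every test passes.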

Q: Does the Completeness Principle apply to documentation and tests as well? Yes. Tests that only cover the happy path aren't complete. Documentation that only describes the success case isn't complete. The Completeness Principle applies to every artifact produced by the build, not just the code.

Ready to try DenchClaw? Install in one command: npx denchclaw. Full setup guide →

Written by Kumar Abhirup · Building the future of AI CRM software.

© 2026 DenchHQ · San Francisco, CA