"The difference between 3x velocity and 10-20x velocity isn't effort—it's documentation architecture. This appendix shows how to structure documents so AI has zero decisions to make—and how to stress-test specs before any code is written."
Not all documents need all sections. Putting implementation details in strategic documents violates the single-source-of-truth principle and confuses AI.
| Type | Purpose | Examples |
|---|---|---|
| Strategic | WHAT and WHY | Master Blueprint, PRD, Vision docs |
| Implementation | HOW | Technical Specs, API docs, Module specs |
| Reference | Lookup | Schema Reference, Glossary, Configuration |
Wrong (duplicates across docs):
Master Blueprint (Strategic)
├── Strategy content
├── Anti-patterns section ← WRONG: duplicates Technical Spec
├── Test Cases section ← WRONG: duplicates Testing doc
└── Error Matrix section ← WRONG: duplicates Error Handling doc
Right (pointers, not duplicates):
Master Blueprint (Strategic)
├── Strategy content
└── References section
└── "Anti-patterns → Technical Spec, Section 7"
└── "Test Cases → Testing Doc, Section 3"
└── "Error Handling → Error Handling Doc, Section 2"
Technical Spec (Implementation)
├── Implementation details
├── Anti-patterns section ← CORRECT: lives here
├── Test Cases section ← CORRECT: lives here
└── Error Matrix section ← CORRECT: lives here
| Section | Strategic Docs | Implementation Docs | Reference Docs |
|---|---|---|---|
| Deep Links | ✅ Required | ✅ Required | ✅ Required |
| Anti-patterns | ❌ Pointer only | ✅ Required | ❌ N/A |
| Test Case Specs | ❌ Pointer only | ✅ Required | ❌ N/A |
| Error Handling Matrix | ❌ Pointer only | ✅ Required | ❌ N/A |
Every implementation document must include these four sections. Without them, AI will guess—and guessing creates the velocity mirage.
**1. Anti-Patterns**

Why: AI needs to know what NOT to do. Without this, it implements common mistakes.
Format:
## Anti-Patterns (DO NOT)
| ❌ Don't | ✅ Do Instead | Why |
|----------|---------------|-----|
| Store timestamps as Date objects in IndexedDB | Use ISO 8601 strings | IndexedDB serialization issues |
| Hardcode module count anywhere | Reference Schema Reference | Becomes stale, causes mismatches |
| Send AI requests from extension | Server-side only | Exposes API keys |
| Use generic error messages | Specific error codes per failure mode | Debugging impossible otherwise |
| Skip validation on "trusted" internal calls | Validate everything | Internal calls can have bugs too |

Rules:
- Minimum 5 anti-patterns per implementation document
- Each must include WHY it's wrong
- Each must include the CORRECT alternative
- Cover: naming, architecture, security, performance, data handling
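Each row translates into a boundary rule in code. For instance, the timestamp anti-pattern above, as a minimal TypeScript sketch (identifiers are illustrative, not from any real spec):

```typescript
// Minimal sketch: normalize timestamps to ISO 8601 strings at the storage
// boundary instead of persisting Date objects. Names are illustrative.

interface StoredEvent {
  id: string;
  occurredAt: string; // ISO 8601, e.g. "2025-01-15T09:30:00.000Z"
}

// ✅ Do: convert to an ISO string at the write boundary.
function toStoredEvent(id: string, occurredAt: Date): StoredEvent {
  return { id, occurredAt: occurredAt.toISOString() };
}

// Reading back: parse at the boundary, use Date only in memory.
function parseOccurredAt(event: StoredEvent): Date {
  return new Date(event.occurredAt);
}
```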
**2. Test Case Specifications**

Why: AI needs concrete verification criteria. Without this, it can't validate its own implementation.
Format:
## Test Case Specifications
### Unit Tests Required
| Test ID | Component | Input | Expected Output | Edge Cases |
|---------|-----------|-------|-----------------|------------|
| TC-001 | Tier classifier | 100 contacts with scores | 20-30 in Critical tier | Empty list, all same score, negative scores |
| TC-002 | Score calculator | Activity events array | Weighted score 0-100 | No events, >1000 events, future-dated events |
| TC-003 | Change detector | Before/after profile | Change type enum | No change, multiple changes, partial data |
### Integration Tests Required
| Test ID | Flow | Setup | Verification | Teardown |
|---------|------|-------|--------------|----------|
| IT-001 | Free tier quota | Create user, set 10 checks | 11th check returns 403 | Reset quota |
| IT-002 | Feed → Intelligence | Seed 50 feed events | All 7 modules produce output | Clear test data |
| IT-003 | Auth flow | Create test user | Token refresh works at expiry | Delete test user |
### Test Fixtures Location
/tests/fixtures/[component]/

Rules:
- Minimum 5 unit test specifications per component
- Minimum 3 integration test specifications per flow
- Each test must include: ID, input, expected output, edge cases
- Test fixtures must be specified (location, format)
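Specifications at this grain map one-to-one onto tests. A sketch of TC-001 as a unit test, assuming a hypothetical `classifyTiers` function and Vitest as the runner:

```typescript
// Hypothetical sketch of TC-001 as a Vitest unit test. `classifyTiers`,
// its module path, and the tier names are assumptions for illustration.
import { describe, it, expect } from "vitest";
import { classifyTiers } from "./tierClassifier";

describe("TC-001: tier classifier", () => {
  it("puts 20-30 of 100 scored contacts in the Critical tier", () => {
    const contacts = Array.from({ length: 100 }, (_, i) => ({
      id: `c${i}`,
      score: i, // spread of scores 0-99
    }));
    const critical = classifyTiers(contacts).filter(
      (c) => c.tier === "Critical"
    );
    expect(critical.length).toBeGreaterThanOrEqual(20);
    expect(critical.length).toBeLessThanOrEqual(30);
  });

  it("handles the specified edge cases", () => {
    expect(classifyTiers([])).toEqual([]); // empty list
    const sameScore = Array.from({ length: 10 }, (_, i) => ({
      id: `c${i}`,
      score: 50, // all same score
    }));
    expect(() => classifyTiers(sameScore)).not.toThrow();
    expect(() => classifyTiers([{ id: "c0", score: -5 }])).not.toThrow(); // negative scores
  });
});
```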
**3. Error Handling Matrix**

Why: AI needs to know how to handle every failure mode. Without this, error handling is inconsistent.
Format:
## Error Handling Matrix
### External Service Errors
| Error Type | Detection | Response | Fallback | Logging | Alert |
|------------|-----------|----------|----------|---------|-------|
| AI API timeout | >5s response | Retry 3x with exponential backoff | Return cached suggestion | ERROR level | If 3 failures in 5 min |
| LinkedIn rate limit | 429 response | Pause scanning 15 min | Queue events for retry | WARN level | If >5 per hour |
| Database connection lost | Connection error | Retry 3x, then circuit breaker | Auto-reconnect after 30s | ERROR level | Immediate |
### Internal Errors
| Error Type | Detection | Response | Recovery | Logging |
|------------|-----------|----------|----------|---------|
| Invalid event format | Schema validation fail | Skip event, continue processing | Log for manual review | WARN level |
| Module processing failure | Uncaught exception | Isolate to single module | Other modules continue | ERROR level |
| Memory threshold exceeded | >80% heap usage | Trigger garbage collection | Pause non-critical operations | WARN level |
### User-Facing Errors
| Error Type | User Message | Technical Code | Recovery Action |
|------------|--------------|----------------|-----------------|
| Quota exceeded | "You've used all 10 AI checks this month." | 403 QUOTA_EXCEEDED | Show upgrade CTA |
| Session expired | "Please sign in again." | 401 SESSION_EXPIRED | Redirect to login |
| Feature unavailable | "This feature is temporarily unavailable." | 503 SERVICE_UNAVAILABLE | Show retry button |

Rules:
- Every external service must have error handling specified
- Every error must include: detection method, response, fallback, logging level
- User-facing errors must include friendly message AND technical code
- Circuit breaker thresholds must be explicit
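A row like "AI API timeout" is precise enough to implement directly. A minimal TypeScript sketch, with all function names assumed for illustration:

```typescript
// Sketch of the "AI API timeout" row: >5s counts as a timeout, retry 3x
// with exponential backoff, fall back to a cached suggestion, log at
// ERROR level. All identifiers are illustrative.

async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("TIMEOUT")), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer!));
}

async function getSuggestion(
  callAiApi: () => Promise<string>,
  getCached: () => string
): Promise<string> {
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      return await withTimeout(callAiApi(), 5_000); // >5s response = timeout
    } catch (err) {
      console.error(`AI API attempt ${attempt + 1} failed`, err); // ERROR level
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1_000)); // backoff: 1s, 2s, 4s
    }
  }
  return getCached(); // fallback: return cached suggestion
}
```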
**4. Deep Links (References)**

Why: AI needs to navigate to exact locations. "See Technical Annexes" is useless—it forces AI to guess which annex and which section.
Format:
## References
### Schema References
| Topic | Location | Anchor |
|-------|----------|--------|
| Network profiles table | [Schema Reference](../schemas/00_SCHEMA_REFERENCE.md#network_profiles) | `network_profiles` |
| Intelligence events | [Schema Reference](../schemas/00_SCHEMA_REFERENCE.md#intelligence_events) | `intelligence_events` |
| User tiers | [Schema Reference](../schemas/00_SCHEMA_REFERENCE.md#user_tiers) | `user_tiers` |
### Implementation References
| Topic | Document | Section |
|-------|----------|---------|
| Feed parsing algorithm | [Module Spec 03](../specs/module_03_feed_parser.md#parsing-algorithm) | Section 3.2 |
| Scoring weights | [Module Spec 07](../specs/module_07_scoring.md#weight-configuration) | Section 2.1 |
| Rate limiting logic | [API Spec](../specs/api_endpoints.md#rate-limiting) | Section 5 |

Rules:
- NEVER use vague references ("See Technical Annexes")
- ALWAYS include: document path, section anchor, and topic
- Use relative paths from current document
- Verify all links are valid before completing document
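The last rule is automatable. A hedged Node/TypeScript sketch that flags relative links pointing at missing files (anchor validation and directory traversal are left out):

```typescript
// Sketch: check that relative markdown links in a doc resolve to real
// files before declaring the document complete. The regex and scope are
// assumptions; anchors within files are not validated here.
import { readFileSync, existsSync } from "node:fs";
import { dirname, resolve } from "node:path";

const LINK = /\]\((\.\.?\/[^)#]+)(#[^)]*)?\)/g; // relative links only

function checkFile(file: string): string[] {
  const problems: string[] = [];
  const text = readFileSync(file, "utf8");
  for (const match of text.matchAll(LINK)) {
    const target = resolve(dirname(file), match[1]);
    if (!existsSync(target)) {
      problems.push(`${file}: broken link -> ${match[1]}`);
    }
  }
  return problems;
}

// Usage idea: walk /docs and fail the build on any broken reference.
```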
Strategic documents (Blueprint, PRD) use a different structure. They point to implementation details rather than containing them.
# [Document Title]
## 1. [Strategic Section]
[Strategic content]
**Implementation Implication:** [Concrete effect on code/architecture]
## 2. [Another Strategic Section]
[Strategic content]
**Implementation Implication:** [Concrete effect on code/architecture]
...
## N. REFERENCES
### Implementation Details Location
| Content Type | Location |
|--------------|----------|
| Anti-patterns | [Technical Spec, Section 7](../specs/technical_spec.md#anti-patterns) |
| Test Case Specifications | [Testing Doc, Section 3](../specs/testing.md#test-cases) |
| Error Handling Matrix | [Error Handling Doc](../specs/error_handling.md) |
### Schema References
| Topic | Location | Anchor |
|-------|----------|--------|
| [Topic] | [Path] | [Anchor] |
### Technical References
| Topic | Document | Section |
|-------|----------|---------|
| [Topic] | [Path] | [Section] |
*This document provides strategic overview. Technical documents provide implementation specifications.*

Key Rule: Every strategic section MUST end with an Implementation Implication statement. If a section has no implementation implication, it's aspirational fluff—delete it.
Before entering Phase 3 (code generation), score your documentation on this rubric. Target: 9+/10.
| Criterion | Weight | 10/10 Requirement |
|---|---|---|
| Actionability | 25% | Every section has Implementation Implication |
| Specificity | 20% | All numbers concrete, all thresholds explicit |
| Consistency | 15% | Single source of truth, no duplicates across docs |
| Structure | 15% | Tables over prose, clear hierarchy, predictable format |
| Disambiguation | 15% | Anti-patterns in impl docs, edge cases explicit |
| Reference Clarity | 10% | Deep links only, no vague references |
| Score | Meaning | Action |
|---|---|---|
| 9-10 | AI can implement with zero clarifying questions | Proceed to Phase 2.5 |
| 7-8 | AI needs 3-5 clarifications | Improve weak areas |
| 5-6 | AI needs significant guidance | Major revision needed |
| <5 | Documentation not AI-ready | Return to Phase 2 |
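The composite score is just a weighted average of the six criteria. A small TypeScript sketch, with camelCase criterion keys as an assumption:

```typescript
// Weighted rubric score. Weights come from the table above; per-criterion
// scores (0-10) are whatever you assign during review.
const weights = {
  actionability: 0.25,
  specificity: 0.2,
  consistency: 0.15,
  structure: 0.15,
  disambiguation: 0.15,
  referenceClarity: 0.1,
};

function rubricScore(scores: Record<keyof typeof weights, number>): number {
  return Object.entries(weights).reduce(
    (total, [criterion, weight]) =>
      total + weight * scores[criterion as keyof typeof weights],
    0
  );
}

// Example: 10/10 everywhere except Reference Clarity (7/10) scores 9.7,
// which clears the 9+ gate.
rubricScore({
  actionability: 10,
  specificity: 10,
  consistency: 10,
  structure: 10,
  disambiguation: 10,
  referenceClarity: 7,
});
```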
Before Phase 2.5, ask yourself:
- Actionability: "Does every section tell AI exactly what to do?"
- Specificity: "Are there any numbers I left vague?"
- Consistency: "Is any information stated in more than one place?"
- Structure: "Could I convert any prose paragraphs to tables?"
- Disambiguation: "Have I listed at least 5 anti-patterns per implementation doc?"
- Reference Clarity: "Do any references say 'see elsewhere' without exact location?"
This is the mandatory structural completeness check. Use it after Phase 2, before Phase 2.5.
- Can AI act on every section? (No aspirational content)
- Is everything current? (No outdated decisions)
- No duplicate information across docs?
- Every statement is a decision, not a wish?
- Would you put every section in an AI prompt?
- All "future state" language removed?
- All motivational/aspirational content removed?
- Document type identified? (Strategic vs Implementation vs Reference)
- Anti-patterns in implementation docs only? (Strategic docs have pointers)
- Test cases in implementation/testing docs only? (Strategic docs have pointers)
- Error handling matrix in implementation docs only?
- Deep links present in ALL documents?
- Strategic docs use pointers, not duplicates?
- AI Coder Understandability Score ≥ 9/10?
If ANY item fails: Fix before proceeding to Phase 2.5.
NEVER SKIP THIS GATE. This is the difference between stream coding and vibe coding.
Two-gate pipeline: Spec Gate catches structural completeness. Phase 2.5 Adversarial Review catches correctness. Both are required before Phase 3.
After the Spec Gate passes (9+/10), submit specs to a different AI model or human reviewer. Never use the same session that helped write the docs.
1. Spec Gate passes (9+/10) → proceed to adversarial step
2. Submit specs to DIFFERENT AI model (Gemini, GPT-4o, Perplexity)
OR trusted human reviewer
3. Use the adversarial prompt below
4. Categorize findings: CRITICAL / HIGH / MEDIUM / LOW
5. Fix ALL CRITICAL issues → return to Spec Gate → re-score
6. Document HIGH issues with explicit accept/defer decision
7. Gate: zero CRITICAL remaining → proceed to Phase 3
Why a different model: The AI that generated or reviewed your docs learned your assumptions. A different model has no context, no charitable interpretation, no benefit of the doubt. It finds gaps your primary AI normalizes.
The Adversarial Prompt:

You are a skeptical senior developer and hostile critic reviewing
this specification before it goes to an AI agent for execution.
## Your Mission
Find every flaw. Assume problems exist — your job is to find them.
Do not be helpful. Do not suggest minor improvements. Attack the spec.
## What to Look For
### 1. LOGICAL CONTRADICTIONS
- Claims that conflict with each other within the spec
- Numbers that don't add up
- Requirements that are mutually exclusive
### 2. CREDIBILITY RISKS
- Overclaims ("zero bugs", "always", "never", "guaranteed")
- Unverifiable statements with no measurement method
- Claims a hostile reader would immediately challenge
### 3. IMPLICIT DEGREES OF FREEDOM
- Points where the AI agent must CHOOSE between valid interpretations
- Anything where two different developers would implement differently
- Edge cases that are mentioned but not fully specified
### 4. MISSING CONSIDERATIONS
- Error states that have no specified handling
- Concurrency or race conditions not addressed
- External dependencies with no fallback specified
- Security assumptions not made explicit
### 5. DEFENSIBILITY GAPS
- "What would a hostile HN commenter use to debunk this?"
- "What would a junior developer get wrong from this spec?"
- "What happens when the happy path fails?"
## Output Format
For each issue found:
**[SEVERITY]** — Issue title
Location: Where in the spec
Problem: What exactly is wrong
Fix: Specific rewrite needed
Severity:
- **CRITICAL:** Execution will fail or produce wrong output without this fix
- **HIGH:** Significant risk of incorrect implementation
- **MEDIUM:** Minor ambiguity, lower risk
- **LOW:** Polish, not blocking
## Success Criteria
A good adversarial review finds:
- At least 2 CRITICAL issues (if zero, you haven't looked hard enough)
- At least 4-5 HIGH issues
- 10+ total issues across all severities
If you find fewer, state explicitly why the spec is unusually strong.

Exit criteria:
- Zero CRITICAL issues remaining
- All HIGH issues documented with explicit decision: fix now / accept risk / defer
- Spec Gate re-run if any CRITICAL was fixed (score may have changed)
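Findings in the output format above are easy to track as structured data, which makes the exit gate mechanical. A hypothetical TypeScript sketch:

```typescript
// Sketch: track adversarial findings and enforce the Phase 2.5 exit gate.
// The shape mirrors the prompt's output format; names are illustrative.
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";

interface Finding {
  severity: Severity;
  title: string;
  location: string; // where in the spec
  problem: string;
  fix: string;
  decision?: "fix now" | "accept risk" | "defer"; // required for HIGH
}

// Gate: zero CRITICAL remaining, every HIGH has an explicit decision.
function gatePassesToPhase3(findings: Finding[]): boolean {
  const critical = findings.filter((f) => f.severity === "CRITICAL");
  const undecidedHigh = findings.filter(
    (f) => f.severity === "HIGH" && !f.decision
  );
  return critical.length === 0 && undecidedHigh.length === 0;
}
```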
| Option | Notes |
|---|---|
| Gemini (Google) | Best for logical contradictions |
| GPT-4o (OpenAI) | Good for missing considerations |
| Perplexity | Useful for factual/credibility checks |
| Trusted human (senior dev) | Highest signal, highest effort |
Never use the same Claude session that helped write the docs. Start a fresh session at minimum—different model preferred.
Phase 1 (Strategic Thinking) requires answering these 7 questions with specificity. Vague answers = vague code.
| # | Question | ❌ Reject | ✅ Require |
|---|---|---|---|
| 1 | What exact problem are you solving? | "Help users manage tasks" | "Help [specific persona] achieve [measurable outcome] in [specific context]" |
| 2 | What are your success metrics? | "Users save time" | Numbers + timeline: "100 users, 25% conversion, 3 months" |
| 3 | Why will you win? | "Better UI and features" | Structural advantage: architecture, data moat, business model |
| 4 | What's the core architecture decision? | Let AI decide | Human decides based on explicit trade-off analysis |
| 5 | What's the tech stack rationale? | "Node.js because I like it" | Business rationale: "Node—team expertise, ship fast" |
| 6 | What are the MVP features? | 10+ "must-have" features | 3-5 truly essential, rest explicitly deferred |
| 7 | What are you NOT building? | "We'll see what users want" | Explicit exclusions with rationale |
Phase 1 is complete when: All 7 questions answered with "Require" level specificity, documented, and approved before proceeding to Phase 2.
For a production project, you need:
/docs
├── strategic/
│ ├── 01_MASTER_BLUEPRINT.md (Strategic)
│ └── 02_PRODUCT_REQUIREMENTS.md (Strategic)
├── implementation/
│ ├── 03_TECHNICAL_SPEC.md (Implementation) ← Has Anti-patterns, Tests, Errors
│ ├── 04_API_SPEC.md (Implementation) ← Has Anti-patterns, Tests, Errors
│ ├── 05_MODULE_SPEC_[name].md (Implementation) ← Has Anti-patterns, Tests, Errors
│ └── 06_ERROR_HANDLING.md (Implementation)
├── reference/
│ ├── 00_SCHEMA_REFERENCE.md (Reference)
│ ├── 00_GLOSSARY.md (Reference)
│ └── 00_CONFIGURATION.md (Reference)
└── decisions/
├── ADR-001_[decision].md
├── ADR-002_[decision].md
└── ...
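If you want to scaffold this layout mechanically, a minimal Node sketch (the document contents come out of Phases 1-2, not the script):

```typescript
// Optional sketch: create the /docs tree above. Directory names match the
// layout; populating the files is the work of Phases 1 and 2.
import { mkdirSync } from "node:fs";
import { join } from "node:path";

for (const dir of ["strategic", "implementation", "reference", "decisions"]) {
  mkdirSync(join("docs", dir), { recursive: true });
}
```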
1. Phase 1 (40%): Answer 7 Questions → Strategic docs created
2. Phase 2 (40%): Create Implementation + Reference docs
- Add 4 mandatory sections to each implementation doc
- Add deep links to ALL docs
- Strategic docs get pointers, not duplicates
3. Spec Gate: Score documentation (target 9+/10)
4. Phase 2.5 (5%): Adversarial Review — submit to different AI, fix ALL CRITICAL issues
5. Phase 3 (10%): Feed docs to AI → Code streams out
6. Phase 4 (5%): When code fails: Fix the spec, not the code → Regenerate
Ideal split: 40/40/5/10/5 (80% docs, 15% execution, 5% quality). Real projects flex—strategy-heavy projects may shift to 60/20/5/10/5.
v3.0 Insight: Documentation is the real work. Code is the printout.
v3.3 Addition: Not all docs are equal. Strategic docs point. Implementation docs contain. Never duplicate.
v3.5 Addition: Structural completeness (Spec Gate) is necessary but not sufficient. Correctness under adversarial review (Phase 2.5) is the final gate before execution.
The Payoff:
- v3.0: 5-10x velocity
- v3.3: 10-20x velocity (because AI has zero ambiguity about what goes where)
- v3.5: 10-20x velocity with fewer Phase 3 surprises (because specs are stress-tested before any code is written)
The Two-Gate Rule:
"Spec Gate catches completeness. Adversarial Review catches correctness. Both before Phase 3—no exceptions."
END OF APPENDIX C (Advanced Framework v3.5)