The system that turns generic AI output into publishable first drafts
Denny Hollick
Everyone's selling prompts, templates, quick fixes. The real work — the thing that actually determines whether AI output is generic or great — is context management. That's the human's job now.
Every conversation starts with a budget. What you put in this window is the only reality the model knows.
[Figure: context window capacity. Your files are the biggest lever you control; everything else is set by the tool. Compare: 15 unstructured docs thrown in vs. 6 synthesized artifacts.]
Imagine you're a new hire. First day. Someone walks you to your desk and says "build our landing page."
They dump 50 call recordings on your desk. "Everything you need is in here." Could you build that landing page? In a month, if you're lucky.
They hand you a synthesized messaging doc. Key themes, representative quotes, positioning pillars, audience profiles. You could start today.
"What would a new hire need on day one to do great work? Build that."
How the artifacts connect
The context window isn't a filing cabinet — it's a budget. Every piece of context you add dilutes attention on everything else. And outdated context doesn't sit harmlessly — the model treats old positioning and superseded claims as current truth. If a token isn't current and earning its keep, cut it.
Don't dump raw data. Distill it — reduce, concentrate, get to the essence. Keep curated summaries in the window and point to deeper sources when needed. Your AI navigates layers — don't flatten them.
Don't build context in the dark. Run a real prompt, review what's wrong, build the context that fixes it. AI outputs are first drafts — unchecked output feeding back in creates compounding drift. Every layer you add should produce a measurable improvement.
Close the biggest gap: ground your AI in real customer data. Pull 3–5 call transcripts, clean them (remove filler, keep quotes), and upload.
~3 hours (gathering + cleaning transcripts)
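The Week 1 cleaning pass ("remove filler, keep quotes") can be partially scripted. A minimal sketch — the filler list is illustrative, not from the source, and you should extend it to match your speakers' actual verbal tics:

```python
import re

# Filler tokens to strip; extend this list for your speakers' verbal tics.
# Note: removing "like" will occasionally clip legitimate uses — spot-check output.
FILLERS = re.compile(r"\b(?:um|uh|you know|I mean|basically|like)\b,?\s*", re.IGNORECASE)

def clean_transcript(text: str) -> str:
    """Remove filler words and collapse leftover whitespace, keeping the speaker's own wording."""
    cleaned = FILLERS.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()
```

Run it over each transcript before uploading, then skim the result — the point is cleaner signal, not a rewrite of what the customer said.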
Structure is a performance lever
Teach your AI who you're talking to. Build audience profiles grounded in the research from Week 1.
~2 hours
Show, don't tell
Build the backbone of what your AI says. Value props, competitive angles, evidence behind every claim.
~3 hours
Separate instructions from reference
Make it sound like you — then prove it works. Re-run your benchmark and compare to Week 1. Score it: does it use real data? Real quotes? Sound like your brand?
~2 hours (voice guide + benchmark rubric)
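Week 4's scoring can be as simple as a weighted checklist. A sketch — the criteria names and weights here are illustrative placeholders, not the article's rubric:

```python
# Example criteria and weights; swap in whatever your benchmark rubric actually measures.
RUBRIC = {"uses_real_data": 2, "quotes_verbatim": 2, "brand_voice": 1}

def score_output(checks: dict[str, bool]) -> float:
    """Weighted share of rubric criteria the benchmark output satisfies (0.0 to 1.0)."""
    total = sum(RUBRIC.values())
    earned = sum(weight for name, weight in RUBRIC.items() if checks.get(name))
    return earned / total
```

Scoring the Week 1 "before" and the Week 4 "after" with the same rubric is what makes the improvement measurable rather than vibes.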
Write for a smart stranger
Step 1: Raw call transcripts
Call #47 — 34 min
"Yeah so basically we were using QuickBooks and it was like, I don't know, I spent more time trying to figure out how to categorize things than actually doing the work, you know? And like, our accountant would always be like 'this isn't right' and we'd have to redo it..."
Call #112 — 28 min
"We looked at Xero too but honestly they're all the same, like you still need to know debits and credits and I'm a founder not an accountant, so I just... I'd put it off and then at quarter end it was this whole fire drill with our bookkeeper..."
Call #203 — 41 min
"The main thing for me was burn rate visibility. I'd go into board meetings and have to manually pull numbers from three different places, and half the time they didn't match. My investors would ask about runway and I'd be like, give me a day..."
232 calls × 30 min avg ≈ 116 hours of raw audio
Step 2: Individual call summaries
| Call | Pain points | Why Puzzle | Objections | Language |
|---|---|---|---|---|
| #47 | Manual categorization, accountant conflicts | AI automation | Migration worry | "more time categorizing than working" |
| #112 | Accounting knowledge gap, quarter-end fire drills | No accounting needed | Xero comparison | "I'm a founder not an accountant" |
| #203 | No real-time visibility, manual board reporting | Live metrics | Data accuracy | "give me a day" for runway |
| #204 | Reconciliation backlog, receipt management | Auto-categorize | Pricing | "huge time saver" |
| ... 228 more rows | | | | |
Step 3: Quantitative grouping
Themes ranked by frequency across 232 calls — "universal" (80%+), "common" (50–79%), "emerging" (<50%)
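The Step 3 bucketing is a straightforward frequency count. A minimal sketch, assuming each call's summary has been reduced to a set of theme tags (the theme names below are illustrative):

```python
from collections import Counter

# Thresholds follow the article's buckets: universal 80%+, common 50-79%, emerging below 50%.
def bucket_themes(calls: list[set[str]]) -> dict[str, str]:
    """Map each theme to a frequency bucket based on the share of calls that mention it."""
    n = len(calls)
    counts = Counter(theme for call in calls for theme in call)

    def bucket(share: float) -> str:
        if share >= 0.80:
            return "universal"
        if share >= 0.50:
            return "common"
        return "emerging"

    return {theme: bucket(count / n) for theme, count in counts.most_common()}
```

Tagging the 232 summaries is the hard part; once themes are tags, the ranking itself is mechanical.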
Step 4: Final synthesis
Pick a tool with a project or notebook feature — somewhere context persists across conversations.
One conversation at a time. No memory between sessions.
Persistent files and instructions. Context carries across conversations.
Runs commands, executes skills, reads and writes files on your machine.
If you're using Cowork or Claude Code, install the customer research skill. It automates the synthesis pipeline from the demo. If your tool doesn't support skills, follow the manual guide.
Paste this into your project's system prompt or custom instructions.
Before uploading any data, pick one deliverable and run a benchmark prompt with only the project instruction as context. Save the output — this is your "before." A landing page is the recommended deliverable; a sales one-pager or cold email sequence also works.
Now add your raw customer data — call transcripts, survey responses, reviews, existing M&P docs. Don't create anything new yet. Text beats everything; CSV beats .xlsx; markdown beats PDF.
Your synthesis is a first draft, not a finished product. Review it, fix what's wrong, upload the improved version, and re-run your benchmark. The gap closes every round.
| Check | What to look for |
|---|---|
| Quotes are real | Spot-check 3–5 against your original transcripts or reviews. If fabricated, fix them. |
| Numbers add up | Theme counts, percentages, sample sizes — do they match your actual data? |
| No hallucinated entities | Names, roles, company references should come from your data only. |
| What's missing? | What do you know from experience that the synthesis didn't capture? Add it. |
| Signal over noise | Would you cut this line if editing someone else's work? Cut it now. |
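The "quotes are real" spot-check can be partially automated. A sketch — the normalization rules here (case, curly apostrophes, whitespace) are assumptions you may need to extend:

```python
import re

def normalize(text: str) -> str:
    """Lowercase, unify curly apostrophes, and collapse whitespace so cosmetic edits don't cause false negatives."""
    return re.sub(r"\s+", " ", text.lower().replace("\u2019", "'")).strip()

def quote_in_sources(quote: str, transcripts: list[str]) -> bool:
    """True if the quote appears verbatim (modulo whitespace and case) in any transcript."""
    q = normalize(quote)
    return any(q in normalize(t) for t in transcripts)
```

A quote that fails this check isn't necessarily fabricated — it may be paraphrased — but it's exactly the one to check by hand first.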
Then: upload the reviewed synthesis, re-run the exact same benchmark prompt, and compare side by side. If the output is still generic — that's OK. ICPs (Week 2) and messaging (Week 3) fix that.
Your Week 1 Prompt
Want to learn how to start with Claude Code?
Claude Code Live: Build a Real Project in 60 Minutes
Friday, March 20 · 9:00 AM PST · Google Meet
Free, recorded → Register on Luma