pmmcamp.dennyhollick.com
Week 1 Quick Start Guide →
00

Context Management
for PMMs

The system that turns generic AI output into publishable first drafts

Denny Hollick

  • 01 See exactly why most AI output is generic
  • 02 Learn the system that fixes it
  • 03 Walk out with a working project and your first benchmark test
01

The AI space has a diet pill problem

Everyone's selling prompts, templates, quick fixes. The real work — the thing that actually determines whether AI output is generic or great — is context management. That's the human's job now.

(Word cloud of hype phrases: "10x productivity," "magic prompts," "one weird trick," "AI revolution," "prompt template," "secret formula," "hack your workflow," "instant results," "just ask AI," "copy this prompt")
02

The context window

Every conversation starts with a budget. What you put in this window is the only reality the model knows.

What you see
Puzzle PMM
research-synthesis.md
messaging-brief.md
icp-profiles.md
voice-guide.md
competitive-landscape.md
call-transcript-01.md
positioning-matrix.md
landing-page-v2.md
Project instruction
Write a landing page for startup founders switching from QuickBooks
Here's a draft landing page targeting startup founders...
Good start. Make the hero more specific to the pain of manual categorization
Updated — hero now leads with "Stop categorizing transactions at midnight"
Better. Add a social proof section with real customer quotes
Added social proof with three founder testimonials and usage stats
Now tone down the testimonials — they sound too polished
Now write a comparison section: Puzzle vs QuickBooks
What Claude sees
Model training (base knowledge)
Tool definitions ~3K tokens
[{"name": "web_search", "description": "Search the web", "parameters": {...}}, {"name": "read_file", ...}]
System prompt ~1.5K tokens
You are Claude, an AI assistant by Anthropic. You are helpful, harmless, and honest. Follow instructions carefully and ask clarifying questions when needed...
Project instruction ~500 tokens
# Puzzle PMM Project Tone: confident, not aggressive Audience: startup founders (seed–A) Competitor refs: QuickBooks, Xero Key differentiator: AI-first...
research-synthesis.md + 3 files ~15K tokens
| Segment | Pain | Priority | |-------------|-----------|----------| | Seed stage | Receipts | High | | Series A | Reporting | Medium | | Bootstrap | Invoicing | High |
Conversation history ~2K tokens
User: Write a landing page for startup founders switching... Asst: Here's a draft landing page targeting startup founders... User: Make the hero more specific...
Your prompt ~30 tokens
Now write a comparison section: Puzzle vs QuickBooks

Context window capacity

Your files are the biggest lever you control. Everything else is set by the tool.
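To make "budget" concrete, here's a minimal sketch in Python that estimates how much of a context window a set of project files consumes. The 4-characters-per-token ratio and the 200K-token window are rough assumptions for illustration; real tokenizers and limits vary by model and tool.

```python
# Rough context-budget estimate for a set of project files.
# Assumption: ~4 characters per token for English prose, and a 200K-token
# window. Both numbers are illustrative; real tokenizers and limits vary.

def estimate_tokens(text: str) -> int:
    """Approximate token count with the ~4 chars/token rule of thumb."""
    return max(1, len(text) // 4)

def budget_report(files: dict, window: int = 200_000) -> dict:
    """Sum per-file estimates and report how much of the window remains."""
    per_file = {name: estimate_tokens(body) for name, body in files.items()}
    used = sum(per_file.values())
    return {"per_file": per_file, "used": used, "remaining": window - used}

# Stand-in file bodies sized to mimic the deck's examples
files = {
    "research-synthesis.md": "x" * 60_000,  # ~15K tokens
    "messaging-brief.md": "x" * 8_000,      # ~2K tokens
}
report = budget_report(files)
```

Running a report like this before each new upload makes the trade-off visible: every file you add shrinks what's left for conversation history and output.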

03

Same prompt. Three levels of context.

> Write a competitive positioning paragraph for Puzzle against QuickBooks targeting startup founders.
No Context

No context provided

Raw Data Dump
pitch_deck_v3.pdf, call_transcript_oct.txt, brand_guide_2024.pdf, notes_q3.docx, competitor_matrix.xlsx, slack_thread.txt, pricing_old.pdf, roadmap_draft.docx, email_chain.eml, board_deck.pptx
context limit exceeded


15 unstructured docs thrown in

Curated Context
Research Synthesis · M&P Brief · ICP Profiles · Proof Library


6 synthesized artifacts

What did the curated version do differently?

  • Used real customer language ("huge time saver") from actual calls
  • Cited specific data (232 calls, #1 pain point)
  • Knew the competitive angle without being told
  • Hit the audience (startup founders tracking burn rate)
04

The "New Hire" Test

Imagine you're a new hire. First day. Someone walks you to your desk and says "build our landing page."

Scenario A

They dump 50 call recordings on your desk. "Everything you need is in here." Could you build that landing page? In a month, if you're lucky.

Scenario B

They hand you a synthesized messaging doc. Key themes, representative quotes, positioning pillars, audience profiles. You could start today.

"What would a new hire need on day one to do great work? Build that."

05

What you're building

Customer Research Synthesis

# Top pain points
Source: 312 customer interviews

| Pain Point          | Freq |
|---------------------|------|
| Manual status       | 74%  |
| No source of truth  | 68%  |
| Can't forecast gaps | 61%  |
| Manual reporting    | 55%  |

## Manual status updates (74%)
> "I spend Monday mornings
> copy-pasting between three apps."

## No source of truth (68%)
> "Nobody trusts the dashboard
> because it's always stale."

## Can't forecast gaps (61%)
> "We only find out we're over
> capacity when someone burns out."

## Manual reporting (55%)
> "Board prep takes my ops lead
> two full days every quarter."

Messaging & Positioning

# Messaging pillars

| Pillar       | Problem        | Solution     |
|--------------|----------------|--------------|
| Visibility   | Manual status  | Live health  |
| Resources    | Late gaps      | Forecasting  |
| Reporting    | Manual prep    | Auto reports |

## Automatic visibility
Problem:  Status updates are manual/stale
Solution: Live health from WIP data
Win: No behavior change required

## Resource intelligence
Problem:  Capacity gaps surface too late
Solution: Predictive workload forecasting
Win: Forecasts 3wk out vs. backward
  looking reports

## One-click reporting
Problem:  Board reporting is busywork
Solution: Auto-generated portfolio reports
Win: Two-day prep becomes 5-min export

ICP Profiles

# ICP #1 — Growth-stage ops leader

| Attribute    | Detail               |
|--------------|----------------------|
| Company      | 50–300, Series B–C   |
| Role         | VP Ops, PMO, CoS     |
| Pain         | 15+ projects, no view|
| Trigger      | Missed deadline      |
| Tools today  | Sheets + legacy PM   |

## Company size
50–300 employees, Series B–C

## Role
VP Ops, Head of PMO, Chief of Staff

## Core pain
Managing 15+ concurrent projects
across 4–6 teams with no unified view

## Buying trigger
Missed deadline on a high-visibility
initiative; leadership demands better
forecasting

## Current tools
Spreadsheets + legacy PM tool they've
outgrown

Social Proof Library

# Quotes — Time Savings theme

| Name           | Role       | Company    |
|----------------|------------|------------|
| Jamie Torres   | VP Ops     | BrightPath |
| Priya Sharma   | CoS        | Relay      |
| Marcus Lee     | Head PMO   | Upwell     |

## Jamie Torres — VP Ops, BrightPath
> "We eliminated our Monday status
> meeting entirely."

## Priya Sharma — CoS, Relay
> "Board reporting went from two
> days to five minutes."

## Marcus Lee — Head PMO, Upwell
> "I stopped dreading quarterly
> planning."

Competitive Intelligence

# Head-to-head: Legacy PM Tool

| Pain Point     | Sev  | Win  |
|----------------|------|------|
| Manual status  | 9/10 | 82%  |
| No forecasting | 8/10 | 76%  |
| Rigid reports  | 7/10 | 71%  |

## Manual status collection (9/10)
Win rate: 82%
Our fix: Auto-generated from work
  activity

## No capacity forecasting (8/10)
Win rate: 76%
Our fix: 3-week predictive workload
  model

## Rigid reporting templates (7/10)
Win rate: 71%
Our fix: Configurable live dashboards

---
> "We evaluated three tools. The
> others showed us dashboards — this
> one showed us the future."

Voice & Tone Guide

# Voice calibration examples

| Context     | Say ✓     | Avoid ✗     |
|-------------|-----------|-------------|
| Product     | Concrete  | Buzzwords   |
| Competitive | Specific  | Vague       |
| Proof       | Data      | Generic     |

## Describing the product
✓ "Cuts Monday status meetings
   to zero"
✗ "Leverages AI to transform
   project visibility"

## Competitive positioning
✓ "Teams switch because their old
   tool can't forecast"
✗ "Innovative disruption in the
   PM landscape"

## Customer proof
✓ "Board prep went from two days
   to five minutes"
✗ "Customers report significant
   efficiency gains"

How the artifacts connect

(Diagram: a three-layer stack. Foundation: Customer Research grounds everything above it. Structure: ICP Profiles segment the audience and inform Messaging & Positioning, which inform each other. Quality: Voice & Tone calibrates, Social Proof provides evidence for claims, and Competitive Intel differentiates.)
06

Three rules of context management

Rule 1: Budget your context

The context window isn't a filing cabinet — it's a budget. Every piece of context you add dilutes attention on everything else. And outdated context doesn't sit harmlessly — the model treats old positioning and superseded claims as current truth. If a token isn't current and earning its keep, cut it.

Rule 2: Distill up, reference down

Don't dump raw data. Distill it — reduce, concentrate, get to the essence. Keep curated summaries in the window and point to deeper sources when needed. Your AI navigates layers — don't flatten them.

Rule 3: Test, review, iterate

Don't build context in the dark. Run a real prompt, review what's wrong, build the context that fixes it. AI outputs are first drafts — unchecked output feeding back in creates compounding drift. Every layer you add should produce a measurable improvement.

07

The four-week challenge

Week 1

Customer Research & Foundations

Close the biggest gap: ground your AI in real customer data. Pull 3–5 call transcripts, clean them (remove filler, keep quotes), and upload.

~3 hours (gathering + cleaning transcripts)

Structure is a performance lever
Week 2

ICP / Audience Profiles

Teach your AI who you're talking to. Build audience profiles grounded in the research from Week 1.

~2 hours

Show, don't tell
Week 3

Messaging & Positioning

Build the backbone of what your AI says. Value props, competitive angles, evidence behind every claim.

~3 hours

Separate instructions from reference
Week 4

Voice, Tone & Final Test

Make it sound like you — then prove it works. Re-run your benchmark and compare to Week 1. Score it: does it use real data? Real quotes? Sound like your brand?

~2 hours (voice guide + benchmark rubric)

Write for a smart stranger
08

Watch: From raw calls to research synthesis

Step 1: Raw call transcripts

Call #47 — 34 min

"Yeah so basically we were using QuickBooks and it was like, I don't know, I spent more time trying to figure out how to categorize things than actually doing the work, you know? And like, our accountant would always be like 'this isn't right' and we'd have to redo it..."

Call #112 — 28 min

"We looked at Xero too but honestly they're all the same, like you still need to know debits and credits and I'm a founder not an accountant, so I just... I'd put it off and then at quarter end it was this whole fire drill with our bookkeeper..."

Call #203 — 41 min

"The main thing for me was burn rate visibility. I'd go into board meetings and have to manually pull numbers from three different places, and half the time they didn't match. My investors would ask about runway and I'd be like, give me a day..."

232 calls × 30 min avg = ~116 hours of raw audio

Step 2: Individual call summaries

| Call | Pain points | Why Puzzle | Objections | Language |
|------|-------------|------------|------------|----------|
| #47 | Manual categorization, accountant conflicts | AI automation | Migration worry | "more time categorizing than working" |
| #112 | Accounting knowledge gap, quarter-end fire drills | No accounting needed | Xero comparison | "I'm a founder not an accountant" |
| #203 | No real-time visibility, manual board reporting | Live metrics | Data accuracy | "give me a day" for runway |
| #204 | Reconciliation backlog, receipt management | Auto-categorize | Pricing | "huge time saver" |
| ... | 228 more rows | | | |

Step 3: Quantitative grouping

Manual categorization burden 81%
Accounting knowledge gap 67%
Time spent on bookkeeping 61%
No real-time financial visibility 54%

Themes ranked by frequency across 232 calls — "universal" (80%+), "common" (50-79%), "emerging" (<50%)
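The grouping in Steps 2 and 3 can be sketched in a few lines of Python. The call summaries and theme names below are invented for illustration; the prevalence thresholds mirror the deck's buckets (universal 80%+, common 50-79%, emerging below 50%).

```python
# Sketch of Steps 2-3: count how many calls mention each theme, then
# bucket by prevalence using the deck's thresholds (universal 80%+,
# common 50-79%, emerging <50%). The call data is invented.
from collections import Counter

def group_themes(calls: list) -> dict:
    """Return {theme: (count, prevalence_pct, bucket)} sorted by frequency."""
    n = len(calls)
    counts = Counter(theme for call in calls for theme in set(call))

    def bucket(pct: float) -> str:
        return "universal" if pct >= 80 else "common" if pct >= 50 else "emerging"

    return {t: (c, round(100 * c / n), bucket(100 * c / n))
            for t, c in counts.most_common()}

# Five illustrative call summaries, each a list of tagged themes
calls = [
    ["manual categorization", "knowledge gap"],
    ["manual categorization", "visibility"],
    ["manual categorization"],
    ["manual categorization", "knowledge gap"],
    ["visibility"],
]
themes = group_themes(calls)  # "manual categorization" hits 4/5 calls
```

The same counting logic scales from five calls to hundreds; the manual work is in Step 2, tagging each call with consistent theme labels.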

Step 4: Final synthesis

Customer Research Synthesis
09

Set up your project

1

Create your project

Pick a tool with a project or notebook feature — somewhere context persists across conversations.

Chat

One conversation at a time. No memory between sessions.

  • ChatGPT
  • Gemini
  • Claude.ai

Projects

Persistent files and instructions. Context carries across conversations.

  • Claude Projects
  • ChatGPT Projects
  • NotebookLM

Agentic

Runs commands, executes skills, reads and writes files on your machine.

  • Cowork (recommended)
  • Claude Code
2

Install your skill (if available)

If you're using Cowork or Claude Code, install the customer research skill. It automates the synthesis pipeline from the demo. If your tool doesn't support skills, follow the manual guide.

3

Add your project instruction

Paste this into your project's system prompt or custom instructions.

You are a senior product marketing manager at [company name]. You help create marketing content, messaging, competitive positioning, and sales enablement materials.

Grounding rules:
- Always ground your work in the customer research, messaging documents, and other context in this project
- Prioritize real customer language and data over generic marketing speak
- When you don't have enough context to answer confidently, say so

Quality standards:
- Optimize for signal, not noise. Every sentence earns its place.
- Litmus tests: "Would I publish this as-is? Would my VP approve this without major edits?"
- No filler, no hedging, no generic marketing speak. Concrete and specific always.
- Consider getting reviews from other subagent persona experts before completing work

Formatting:
- Use markdown with clear section headings
- Prefer tables over long prose for structured comparisons
- Bold key terms and takeaways for skimmability
4

Pick your benchmark & run your baseline

Before uploading any data, pick one deliverable and run a benchmark prompt with only the project instruction as context. Save the output — this is your "before." A landing page is the recommended benchmark; a sales one-pager or cold email sequence works too.

5

Upload what you have

Now add your raw customer data — call transcripts, survey responses, reviews, existing M&P docs. Don't create anything new yet. Text beats everything; CSV beats .xlsx; markdown beats PDF.
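As a sketch of why "CSV beats .xlsx": once a sheet is saved out as CSV, a few lines of stdlib Python turn it into a markdown table the model can read directly. The sample rows below are illustrative.

```python
# Sketch: turn a CSV export (e.g. a sheet saved out of .xlsx) into a
# markdown pipe table. Stdlib only; the sample rows are illustrative.
import csv
import io

def csv_to_markdown(csv_text: str) -> str:
    """Render CSV text as a simple pipe-delimited markdown table."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    lines = ["| " + " | ".join(header) + " |",
             "|" + "|".join("---" for _ in header) + "|"]
    lines += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(lines)

sample = "Segment,Pain,Priority\nSeed stage,Receipts,High\nSeries A,Reporting,Medium\n"
md = csv_to_markdown(sample)
```

Plain-text formats like this also make it obvious when a file is stale, which matters once context starts accumulating across weeks.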

10

Test, review, iterate

Your synthesis is a first draft, not a finished product. Review it, fix what's wrong, upload the improved version, and re-run your benchmark. The gap closes every round.

| Check | What to look for |
|-------|------------------|
| Quotes are real | Spot-check 3–5 against your original transcripts or reviews. If fabricated, fix them. |
| Numbers add up | Theme counts, percentages, sample sizes — do they match your actual data? |
| No hallucinated entities | Names, roles, company references should come from your data only. |
| What's missing? | What do you know from experience that the synthesis didn't capture? Add it. |
| Signal over noise | Would you cut this line if editing someone else's work? Cut it now. |
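The "quotes are real" check can be partly automated: collapse whitespace and search each quoted string in the raw transcript text. This is a minimal sketch; the transcript snippet and quotes below are illustrative, and exact-substring matching will miss lightly paraphrased quotes, so treat misses as prompts for a manual look rather than proof of fabrication.

```python
# Sketch of the "quotes are real" spot check: collapse whitespace and
# search each quote in the raw transcript text. The quotes and transcript
# below are illustrative; exact matching misses paraphrased quotes.
import re

def normalize(text: str) -> str:
    """Lowercase and collapse runs of whitespace for tolerant matching."""
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_quotes(quotes: list, transcripts: str) -> dict:
    """Map each quote to True if it appears verbatim (after normalization)."""
    haystack = normalize(transcripts)
    return {q: normalize(q) in haystack for q in quotes}

transcripts = "I spent more   time trying to figure out how to categorize things"
result = verify_quotes(
    ["more time trying to figure out how to categorize",
     "we love the dashboards"],
    transcripts,
)
```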

Then: upload the reviewed synthesis, re-run the exact same benchmark prompt, and compare side by side. If the output is still generic — that's OK. ICPs (Week 2) and messaging (Week 3) fix that.

11

Your next move

Your Week 1 Prompt

Based on everything in this project, what are the 3 biggest gaps in my current context? What data would close each gap? What should I build first?
What's inside a skill?

A skill is a folder that tells Claude how to do a specific job. Here's what's inside:

customer-data-synthesis/
├── CLAUDE.md
├── sub-skills/
│   ├── individual-analysis.md
│   ├── cross-source-synthesis.md
│   └── report-generation.md
├── scripts/
│   ├── run-analysis.sh
│   └── validate-output.sh
└── examples/
    └── report_final.md

CLAUDE.md is the core of every skill. It's just a prompt — a markdown file that tells Claude what to do, in what order, what quality bar to hit, and how to handle edge cases. Everything the skill does flows from this one file. No code required — just well-written instructions.

CLAUDE.md — it's just a prompt
# Customer Data Synthesis Skill

You are a research analyst. Your job is to take raw customer data (call transcripts, surveys, reviews, support tickets) and produce a structured research synthesis.

## Process
1. Analyze each source individually
   - Extract pain points, gains, quotes
   - Tag by theme and sentiment
   - Note the source type and lifecycle stage
2. Cross-reference across all sources
   - Group by theme, count frequency
   - Identify patterns across segments
   - Flag contradictions and outliers
3. Synthesize into a final report
   - TLDR with top 5 findings
   - Detailed tables with representative quotes
   - Segment analysis
   - Recommended actions

## Quality rules
- Every claim needs a source citation
- Every quote must be real, not fabricated
- Frequency counts must be mathematically consistent with the source data
- If confidence is low, say so explicitly
- Cut anything you'd edit out of someone else's work

## Output format
Use the report template in examples/. Tables must include: Theme, Sentiment, Count, Prevalence, Confidence, and Representative Quotes.

## What NOT to do
- Don't invent quotes or data points
- Don't merge distinct themes to simplify
- Don't skip sources — every file matters
- Don't editorialize — report what the data says, not what you think it means
Manual guide — step-by-step process
Fab Lab — Customer Insights Report (Fictional Data)

Fab Lab — Customer Insights Report

TLDR

Fragmented tooling sprawl is the dominant buying driver: 56% of prospects cite consolidation as their primary reason to evaluate Fab Lab, but current limitations in reporting, enterprise features, and mobile reliability are eroding satisfaction and fueling churn.

Calibration: 836 individual responses · 26 source types · Q1 2025 – Q4 2025

"We've basically been duct-taping things together...half the team uses Intercom, half replies from Gmail. Fab Lab promised to be the single pane of glass." — Claritask (sales_01)

What the data says

  1. Tool consolidation pressure is universal and drives evaluation, but delivery gaps are creating buyer's remorse. [HIGH · n=27]
  2. Reporting and analytics are the #1 unmet need and the #2 churn reason. 54% of sources flag reporting limitations. [HIGH · n=29]
  3. Enterprise auth gaps (SSO, RBAC, audit logs) appear in 31% of sources, blocking deals and driving churn in regulated verticals. [MEDIUM · n=17]
  4. Mobile app instability is creating operational friction and eroding trust. 35% of established customers report crashes. [MEDIUM · n=20]
  5. Zendesk, Intercom, and Freshdesk remain primary threats. Fab Lab wins on speed/UX but loses on maturity and feature depth. [MEDIUM · n=18]

What to do about it

  1. Prioritize reporting delivery (Q2–Q3 2026)
  2. Launch enterprise features package (SSO + RBAC)
  3. Stabilize mobile app by Q2 2026
  4. GTM campaign targeting Zendesk customers
  5. Churn prevention task force for high-risk segments

Detailed Findings

RQ1: What drives customers to seek Fab Lab?

| Theme | Count | Prev | Representative Quote |
|-------|-------|------|----------------------|
| Tooling consolidation | 27 | 56% | "We've basically been duct-taping things together" — Claritask |
| Manual health scoring | 16 | 33% | "We have no way to flag at-risk customers without eyeballing data" — sales_02 |
| Support volume scale | 12 | 25% | "Our support team is drowning. We handle support in four different places" — sales_03 |
| Cost vs. incumbents | 11 | 23% | "Zendesk was $250K/year. Fab Lab will cut that in half" — sales_04 |
| Response time delays | 10 | 21% | "We're missing customer messages because we don't have a single inbox" — sales_05 |
| Sales time on ops | 8 | 17% | "I'm manually reviewing tickets every morning instead of working on strategy" — sales_06 |

RQ2: What does Fab Lab do well?

| Theme | Count | Prev | Representative Quote |
|-------|-------|------|----------------------|
| Unified inbox | 23 | 41% | "The unified inbox has been genuinely transformative...Saved probably $180K in ARR" — Arcline Software |
| Health scoring | 19 | 34% | "Health scoring has made our CSM team way more proactive" — renewal_16 |
| Implementation speed | 18 | 32% | "We had value in 2 weeks. With Zendesk, it was 3 months" — renewal_17 |
| Support team quality | 16 | 29% | "The onboarding team was fantastic. They understood our workflow" — renewal_18 |
| Intuitive UI | 15 | 27% | "Our team picked it up immediately. Much less friction than Zendesk" — survey_01 |

RQ3: What's missing or broken?

| Theme | Count | Prev | Representative Quote |
|-------|-------|------|----------------------|
| Reporting limits | 29 | 54% | "Reporting is the biggest gap vs. Zendesk" — churn_21 |
| Mobile app crashes | 20 | 37% | "Mobile app crashes when we're on customer calls. It's unreliable" — survey_01 |
| Enterprise auth gaps | 17 | 31% | "We can't implement this without SSO. Our security team won't approve it" — sales_08 |
| Integration depth | 15 | 28% | "We can't sync Salesforce back to Fab Lab. It's read-only" — support_q3 |
| KB search relevance | 13 | 24% | "Search is useless. We have 500 KB articles but people can't find anything" — support_q1 |
| Multi-tenant gaps | 11 | 20% | "We manage 30 client accounts but can't give them separate Fab Lab instances" — sales_09 |

RQ4: Competitive evaluation

| Competitor | Win Rate | Fab Lab Wins On | Representative Quote |
|------------|----------|-----------------|----------------------|
| Zendesk | 60% | Cost, speed, UX | "Zendesk is bloated but has everything. Fab Lab is cleaner but immature." — winloss_01 |
| Intercom | 55% | Consolidation, cost | "Intercom's bot is better, but Fab Lab's human support is faster and cheaper" — sales_11 |
| Freshdesk | Mixed | Speed, UX | "Freshdesk is better for IT/MSP use cases. Fab Lab for mid-market SaaS." — winloss_01 |
| ServiceNow | N/A | Mid-market focus | "ServiceNow is locked in at the enterprise level. That's not our market." — cab_q2 |

RQ5: What was the breaking point for churn?

| Theme | Count | Prev | Representative Quote |
|-------|-------|------|----------------------|
| Enterprise compliance undelivered | 6 | 33% | "We were promised SSO would be ready by month 3. It's been 8 months." — churn_21 |
| Mobile app failures | 6 | 33% | "Our on-call team gave up on the mobile app. Too many crashes." — churn_22 |
| Reporting limitations | 6 | 33% | "Reporting is broken. We had to go back to Zendesk for visibility." — churn_23 |
| Multi-tenant absent | 4 | 22% | "We manage 15 client accounts. Fab Lab can't isolate them. We left." — support_q2 |
| Pricing at scale | 4 | 22% | "At $100/seat, our 50-person team would cost $60K/year." — survey_03 |

Where they went: Zendesk (5), Freshdesk (2), Intercom (1), proprietary (1)

RQ8: What drives expansion?

| Theme | Count | Prev | Representative Quote |
|-------|-------|------|----------------------|
| Reporting delivery | 32 | 46% | "If you ship reporting, we'd expand from 15 to 25 seats." — renewal_20 |
| Enterprise features | 20 | 29% | "Once you have SSO and audit logs, we'll roll this out to the whole company." — renewal_21 |
| Health scoring ROI | 19 | 28% | "Health scoring caught a churn risk we missed. That saves us $50K." — cab_q2 |
| Mobile stability | 18 | 26% | "If the mobile app worked, we'd use it for everything." — survey_01 |
| Salesforce sync | 14 | 20% | "Salesforce integration would cement our decision to expand." — upsell_24 |

High-Intensity Outliers

Healthcare/Finance Exodus

Risk: HIGH — 3 churned customers in regulated verticals cite unmet compliance needs (HIPAA, SOX). All went to vertical-specific competitors.

On-Call Mobile Dependency

Risk: MEDIUM — Field teams depend on mobile notifications. Crashes = service reliability risk. 6+ customers reverted to email/SMS.

Unrealized Consolidation

Risk: MEDIUM — 43% of renewed customers still maintain secondary tools (Zendesk for reporting, Mailchimp for campaigns, spreadsheets for health).

Pricing at 30+ Seats

Risk: MEDIUM — 4 customers cite per-seat pricing as untenable at enterprise scale. 3+ lost deals had pricing as a factor.

Roadmap Credibility Crisis

Risk: MEDIUM — Reporting promised 3+ times and slipped. Customers becoming skeptical of announced features.


Segment Analysis

Prospects (n=20)

  • Top pain: Tool consolidation (56%), manual health scoring (33%)
  • Top objections: Reporting (23%), SSO (20%), pricing (16%)
  • Win rate: 60% vs Zendesk, 55% vs Intercom

Established (n=18)

  • Love: Unified inbox (41%), health scoring (34%), speed (32%)
  • Complaints: Reporting (44%), mobile (37%), enterprise auth (31%)
  • 43% still use secondary tools

Churned (n=18)

  • Drivers: Reporting (33%), mobile (33%), compliance (33%)
  • Destinations: Zendesk (5), Freshdesk (2), Intercom (1)
  • 60% potentially reversible with reporting + mobile fixes

Feature Requests & Unmet Needs

Critical (Blocking Expansion)

| # | Feature | Sources | Prev |
|---|---------|---------|------|
| 1 | Reporting Suite | 24 | 44% |
| 2 | Mobile App Stability | 20 | 37% |
| 3 | Enterprise Bundle (SSO+RBAC) | 17 | 31% |

High (Affecting Satisfaction)

| # | Feature | Sources | Prev |
|---|---------|---------|------|
| 4 | Salesforce Bi-directional Sync | 14 | 20% |
| 5 | KB Search Modernization | 13 | 24% |
| 6 | Email Campaign Enhancements | 9 | 17% |

Medium (Niche/Vertical)

| # | Feature | Sources | Prev |
|---|---------|---------|------|
| 7 | Multi-Tenant/White-Label | 11 | 20% |
| 8 | Vertical Compliance | 6 | 11% |
| 9 | API/Webhooks | 8 | 12% |

So What?

Messaging

  • Prospect: Consolidate tools, cut cost 40–50% vs Zendesk. Live in 2–4 weeks.
  • Established: Health scoring saves $50K+ in churn/year. Reporting coming Q3.
  • Churned: Reporting shipped. Mobile stability improved. Let's reconnect.
  • Enterprise: SSO, RBAC, audit logging launching Q2–Q3. Early access for advisory board.

Product Roadmap

| Quarter | Deliverables |
|---------|--------------|
| Q2 2026 | Enterprise MVP (SAML + RBAC) · Mobile stability · Reporting MVP |
| Q3 2026 | Reporting GA · Enterprise GA · Salesforce bi-directional sync |
| Q4 2026 | Multi-tenant MVP · KB search modernization |

Sales Enablement

  1. Publish 3–5 case studies from renewals/expansions (Q2)
  2. Zendesk win strategy playbook with competitive messaging
  3. Enterprise readiness checklist for top 10 objections
  4. Churn recovery playbook for 18 churned customers
  5. "Fastest Path to Value" GTM motion for Zendesk prospects

Open Questions

  1. Why is consolidation the #1 driver but 43% still use secondary tools?
  2. What is the true CAC payback and LTV by segment?
  3. Are churned customers recoverable post-reporting GA?
  4. How do metrics compare to Zendesk/Intercom benchmarks?
  5. What's the TAM in regulated verticals if we close compliance gaps?

Blind Spots

  1. No product usage telemetry (DAU, feature adoption)
  2. No SMB vs enterprise segment split in data
  3. Only 6 sources on vertical-specific needs
  4. No competitive feature parity matrix
  5. No economic impact model for churn
  6. Unknown mobile-first workflow percentage
  7. Objection resolution success rates unknown
  8. No Customer Effort Score (CES) data