
PMM Camp · PMM OS

Week 1: Start With
Your Customer

Build a customer research synthesis and upload it to your AI project.

Your Goal

By Friday, you should have a clean, text-based customer insights document — ranked themes, frequency counts, real quotes, clear sections. Something you'd hand to a smart new hire on day one and feel confident they'd understand your customers.

You can build this using the customer research skill (automated, ~30 min) or the manual guide (any AI tool). Here's what the output looks like:

Drop the template into your project and ask your AI to fill it out with you.


Two ways to build it

Recommended

Customer Research Skill

Claude with Cowork or Claude Code

Drop your data files, run the skill. Automated synthesis. ~30 min.

Download the Skill

Unzip into your Cowork project folder. The skill runs automatically when you prompt it. Need help? Post in the forum or reach out to Denny. The manual guide always works as a backup.

Any AI tool

Step-by-Step Guide

Claude, ChatGPT, NotebookLM, or anything else

Same process as the skill — you run each step yourself.

Follow the Guide

Step 1

Set Up Your Project

Pick a tool with a project or notebook feature — somewhere context persists across conversations.

Chat

One conversation at a time. No memory between sessions.

  • ChatGPT
  • Gemini
  • Claude.ai

Projects

Persistent files and instructions. Context carries across conversations.

  • Claude Projects
  • ChatGPT Projects
  • NotebookLM

Agentic

Runs commands, executes skills, reads and writes files on your machine.

  • Cowork (recommended)
  • Claude Code

Project Instruction

Replace [company name] with yours. You'll refine this over 4 weeks.

You are a senior product marketing manager at [company name]. You help create marketing content, messaging, competitive positioning, and sales enablement materials.

Grounding rules:
- Always ground your work in the customer research, messaging documents, and other context in this project
- Prioritize real customer language and data over generic marketing speak
- When you don't have enough context to answer confidently, say so

Quality standards:
- Optimize for signal, not noise. Every sentence earns its place.
- Litmus tests: "Would I publish this as-is? Would my VP approve this without major edits?"
- No filler, no hedging, no generic marketing speak. Concrete and specific always.
- Consider getting reviews from other subagent persona experts before completing work

Formatting:
- Use markdown with clear section headings
- Prefer tables over long prose for structured comparisons
- Bold key terms and takeaways for skimmability

Step 2

Run Your Baseline

Pick a deliverable and run the prompt with little or no context — just the project instruction from Step 1. This is your "before" snapshot. The whole point is to re-run this exact same prompt each week and watch the output improve as you build context. Save the output.

Landing page (recommended — hits every context element)
Write a landing page for [your product] targeting [your primary audience — role, company size, industry]. Include a hero section, 3 key benefits with proof points, social proof with specific customer outcomes, and a CTA.

Sales one-pager
Write a one-page sales document for [your product] targeting [audience — role and industry]. Include the problem, solution, 3 differentiators vs. alternatives, proof points, and next steps.

Cold email sequence
Write a 3-email cold outreach sequence for [your product] targeting [specific role, e.g. "VP of Marketing at mid-market SaaS"]. Email 1: pain point. Email 2: customer outcome. Email 3: direct ask. Each email under 150 words.

Step 3

Build Your Research Synthesis

Upload your customer data — transcripts, surveys, reviews, support tickets, G2 reviews, whatever you have — and run one of these:

Skill install: unzip into your Cowork project folder. The skill runs automatically when you prompt it.

Best data sources
  • Call transcripts or interview notes (text, not audio)
  • Survey / NPS open-text responses (CSV or markdown)
  • Support tickets or customer emails
Also useful
  • G2 / Capterra / TrustRadius reviews
  • Sales call notes, CRM notes, Reddit threads
  • Existing M&P docs, website copy, pitch deck text

No transcripts or surveys?

Bootstrap it — copy your website homepage + G2 reviews into your project and run this prompt:

Analyze this website copy and these customer reviews. Identify the top 5 pain points customers mention, the top 5 outcomes they value, and any gaps between what we say on our site and what customers actually care about. Include direct quotes.

Format rules: Text beats everything. CSV beats .xlsx. Markdown beats PDF.
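If your customer voice lives in spreadsheet exports, a short script can flatten everything into one clean markdown file for upload. Here's a minimal sketch in Python; the file names and column headers (`nps_responses.csv`, `comment`) are placeholders, so swap in your own exports:

```python
import csv
from pathlib import Path

# Tiny sample export so the sketch runs end-to-end; replace with your real file.
Path("nps_responses.csv").write_text(
    "score,comment\n"
    "9,Unified inbox saved us hours\n"
    "3,Reporting is too limited\n"
    "6,\n",
    encoding="utf-8",
)

def csv_to_markdown(csv_path: str, text_column: str, title: str) -> str:
    """Flatten one CSV export into a markdown section: one bullet per response."""
    lines = [f"## {title}", ""]
    with open(csv_path, newline="", encoding="utf-8") as f:
        for i, row in enumerate(csv.DictReader(f), start=1):
            text = (row.get(text_column) or "").strip()
            if text:  # skip blank open-text answers
                lines.append(f"- [{i}] {text}")
    return "\n".join(lines) + "\n"

# Combine every export into one upload-ready text file.
Path("customer_voice.md").write_text(
    csv_to_markdown("nps_responses.csv", "comment", "NPS open-text"),
    encoding="utf-8",
)
```

Repeat the `csv_to_markdown` call once per source file and concatenate the sections; numbered bullets make it easy to trace any quote back to the original row.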


Step 4

Review, Upload, Re-Run

Before uploading, verify these things:

| Check | What to look for |
|---|---|
| Quotes are real | Spot-check 3–5 against your originals. If any are fabricated, fix them. |
| Numbers add up | Theme counts, percentages, and sample sizes match your data. |
| No hallucinated entities | Names, roles, and companies come from your actual data. |
| Structure is clean | Clear headings, labeled sections, tables for structured data. Would a new hire follow this on day one? |
| Signal over noise | Ask: "Would I cut this if editing someone else's work?" If yes, cut it now. |
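The quote check can be partly automated: an exact substring match catches verbatim quotes, and a fuzzy match catches ones the model lightly paraphrased. A sketch using only Python's standard library (the 0.8 threshold and the function name are my own choices, not part of the skill):

```python
from difflib import SequenceMatcher

def best_match_ratio(quote: str, source: str) -> float:
    """Exact substring scores 1.0; otherwise slide a quote-sized window
    over the source text and return the best fuzzy-similarity ratio."""
    q, s = quote.lower(), source.lower()
    if q in s:
        return 1.0
    size = len(q) + 20
    return max(
        (SequenceMatcher(None, q, s[i:i + size]).ratio()
         for i in range(0, max(1, len(s) - size + 1), 10)),
        default=0.0,
    )

# Replace with the full text of your transcripts / reviews.
source = "We've basically been duct-taping things together for a year now."
quotes = [
    "duct-taping things together",          # real: appears verbatim
    "our AI strategy is fully integrated",  # suspicious: never said
]
flagged = [q for q in quotes if best_match_ratio(q, source) < 0.8]
```

Anything in `flagged` gets checked by hand; a low ratio doesn't prove fabrication, it just tells you where to look first.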

Then: upload the synthesis to your project and re-run the exact same baseline prompt from Step 2 — this time, tell the AI to reference your research synthesis. Compare the two outputs side by side. The new output should use real customer language. If it's still generic in places, that's expected — ICPs (Week 2) and messaging (Week 3) fix that.
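To make the before/after comparison concrete, you can diff the two saved outputs: unchanged boilerplate drops out and the new customer-language lines stand out with a `+` prefix. A quick sketch with Python's difflib (the file names and sample copy are placeholders):

```python
import difflib

# Paste (or load from disk) your saved Step 2 baseline and the post-research re-run.
baseline = """Our platform empowers teams to do more.
Key benefit: seamless collaboration."""
rerun = """Fab Lab gives support teams one inbox instead of four tools.
Key benefit: cut tooling spend 40-50% vs Zendesk."""

# lineterm="" keeps the diff header lines free of trailing newlines.
diff = "\n".join(difflib.unified_diff(
    baseline.splitlines(), rerun.splitlines(),
    fromfile="baseline.md", tofile="with_research.md", lineterm="",
))
print(diff)
```

Removed generic lines show up prefixed with `-` and added specific ones with `+`, which makes the week-over-week improvement easy to screenshot for the forum.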


Did you get to your goal?

Compare your report to the example. Does it have ranked themes, frequency counts, real quotes, and clear sections?


Troubleshooting

"I don't have call transcripts or surveys."

You don't need them. G2 reviews, Capterra reviews, Reddit threads, support tickets, sales notes, website copy — any source of customer voice works. Pull as many as you can find and run the bootstrap prompt in Step 3. A synthesis built from 20 G2 reviews is better than no synthesis at all.

"The skill won't install."

Fall back to the manual guide — same process, any AI tool, you just run each step yourself. Post the error in the forum thread and Denny will help debug.

"The output is still generic after adding my research."

Use this diagnostic table to figure out what's missing:

| If the output is... | You're missing... | What to build |
|---|---|---|
| Generic — could be anyone's company | Customer research | A better synthesis (re-do with more data or cleaner structure) |
| Vague value props, no proof | Messaging framework | Coming in Week 3 |
| Wrong audience, wrong language | Audience specifics | Coming in Week 2 |
| Claims with no evidence | Proof points | Coming in Week 2 (social proof library) |
| Doesn't sound like your brand | Brand calibration | Coming in Week 4 |

If it's still generic after Week 1, that's normal. The research synthesis is just layer one — each week adds more context and the output improves. Keep going.

"I'm not sure if my synthesis is good enough."

Quality bar: Would you hand this document to a smart new hire on their first day and feel confident they'd understand your customers? If yes, it's good enough. If not, keep editing. Compare yours to the Fab Lab example — yours doesn't need to be that long, but aim for that standard of clarity.

"My data is messy / in the wrong format."

Text beats everything. If your data is in Excel, export to CSV. If it's in a PDF, copy-paste the text into a markdown file. If it's in slides, pull the text out — LLMs can't read slide layouts well. Don't let format be the reason you skip this step.

"I'm stuck and don't know what to do next."

Post in the forum thread. Describe where you are, what you've tried, and what's not working. Denny and the rest of the group are there to help — someone has almost certainly hit the same issue. Share what's blocking you and share prompts that worked. This is a community challenge, not a solo mission.


Customer Research Synthesis — Template
# [Company Name] — Customer Insights Report

## TLDR

**[One sentence: the single most important finding from your data. What's the dominant pattern, and what's the tension or risk it creates?]**

**Calibration:** [X] responses · [X] source types · [date range]

> "[Your most compelling customer quote — the one that captures the core insight]" — [Source]

### What the data says

1. **[Finding 1 — the dominant theme.]** [Supporting data: prevalence, count, what it means.] [Confidence level · n=X]

2. **[Finding 2.]** [Supporting data.] [Confidence · n=X]

3. **[Finding 3.]** [Supporting data.] [Confidence · n=X]

4. **[Finding 4.]** [Supporting data.] [Confidence · n=X]

5. **[Finding 5.]** [Supporting data.] [Confidence · n=X]

### What to do about it

1. **[Action 1.]** [Who owns it, what to build, target date.]

2. **[Action 2.]** [Who owns it, what to build, target date.]

3. **[Action 3.]** [Who owns it, what to build, target date.]

### By the numbers

- **[X] responses** across [X] source types
- **[Key competitive stat or benchmark]**
- **[Other high-level number worth calling out]**

---

## Detailed Findings

### What problems drive customers to seek [your product]?

| Theme | Sentiment | Count | Prevalence | Representative Quote |
|---|---|---|---|---|
| | | | | |
| | | | | |
| | | | | |
| | | | | |

---

### What does [your product] do well? What would customers miss most?

| Theme | Sentiment | Count | Prevalence | Representative Quote |
|---|---|---|---|---|
| | | | | |
| | | | | |
| | | | | |
| | | | | |

---

### What's missing or broken? What workarounds exist?

| Theme | Sentiment | Count | Prevalence | Representative Quote |
|---|---|---|---|---|
| | | | | |
| | | | | |
| | | | | |
| | | | | |

---

### How do customers evaluate [your product] against competitors?

| Competitor | Win Rate | We Win On | We Lose On | Representative Quote |
|---|---|---|---|---|
| | | | | |
| | | | | |
| | | | | |

---

### What was the breaking point for customers who churned?

| Theme | Count | Prevalence | Representative Quote |
|---|---|---|---|
| | | | |
| | | | |
| | | | |

**Where they went:** [Competitor 1] (X), [Competitor 2] (X), [Other] (X)

---

### What concerns or objections arise during evaluation?

| Objection | Count | Resolution Rate | Notes |
|---|---|---|---|
| | | | |
| | | | |
| | | | |

---

### How is the onboarding and support experience?

| Theme | Sentiment | Count | Prevalence | Representative Quote |
|---|---|---|---|---|
| | | | | |
| | | | | |
| | | | | |

---

### What would cause customers to expand, upgrade, or recommend?

| Theme | Count | Prevalence | Representative Quote |
|---|---|---|---|
| | | | |
| | | | |
| | | | |

---

## High-Intensity Outliers

*Low-frequency signals that indicate significant risk or opportunity:*

### [Outlier 1 Title]
**Risk/Opportunity:** [HIGH/MEDIUM] — [Brief description of the signal and why it matters.]

### [Outlier 2 Title]
**Risk/Opportunity:** [HIGH/MEDIUM] — [Brief description.]

---

## Segment Analysis

### [Segment 1, e.g. Prospects] (n=X)
- **Top pain:** [Theme] (X%), [Theme] (X%)
- **Top objections:** [Theme] (X%), [Theme] (X%)
- **Win rate:** X% vs [Competitor]

### [Segment 2, e.g. Established Customers] (n=X)
- **Love:** [Theme] (X%), [Theme] (X%)
- **Complaints:** [Theme] (X%), [Theme] (X%)

### [Segment 3, e.g. Churned] (n=X)
- **Drivers:** [Theme] (X%), [Theme] (X%)
- **Destinations:** [Competitor] (X), [Competitor] (X)

---

## So What?

### Messaging
- **[Segment 1]:** [One-line messaging direction]
- **[Segment 2]:** [One-line messaging direction]
- **[Segment 3]:** [One-line messaging direction]

### Product Roadmap

| Quarter | Deliverables |
|---|---|
| [Q_ 20__] | [Priority 1] · [Priority 2] |
| [Q_ 20__] | [Priority 3] · [Priority 4] |

### Open Questions
1. [Question your data couldn't answer]
2. [Question that needs more research]
3. [Question for leadership to decide]

🎉 Nice work!

Layer one of your PMM OS is live. Now go share what you learned in the forum.

Post in the forum
Fab Lab — Customer Insights Report (Fictional Data)


TLDR

Fragmented tooling sprawl is the dominant buying driver: 56% of prospects cite consolidation as their primary reason to evaluate Fab Lab, but current limitations in reporting, enterprise features, and mobile reliability are eroding satisfaction and fueling churn.

Calibration: 836 individual responses · 26 source types · Q1 2025 – Q4 2025

"We've basically been duct-taping things together...half the team uses Intercom, half replies from Gmail. Fab Lab promised to be the single pane of glass." — Claritask (sales_01)

What the data says

  1. Tool consolidation pressure is universal and drives evaluation, but delivery gaps are creating buyer's remorse. [HIGH · n=27]
  2. Reporting and analytics are the #1 unmet need and the #2 churn reason. 54% of sources flag reporting limitations. [HIGH · n=29]
  3. Enterprise auth gaps (SSO, RBAC, audit logs) appear in 31% of sources, blocking deals and driving churn in regulated verticals. [MEDIUM · n=17]
  4. Mobile app instability is creating operational friction and eroding trust. 35% of established customers report crashes. [MEDIUM · n=20]
  5. Zendesk, Intercom, and Freshdesk remain primary threats. Fab Lab wins on speed/UX but loses on maturity and feature depth. [MEDIUM · n=18]

What to do about it

  1. Prioritize reporting delivery (Q2–Q3 2026)
  2. Launch enterprise features package (SSO + RBAC)
  3. Stabilize mobile app by Q2 2026
  4. GTM campaign targeting Zendesk customers
  5. Churn prevention task force for high-risk segments

Detailed Findings

RQ1: What drives customers to seek Fab Lab?

| Theme | Count | Prev | Representative Quote |
|---|---|---|---|
| Tooling consolidation | 27 | 56% | "We've basically been duct-taping things together" — Claritask |
| Manual health scoring | 16 | 33% | "We have no way to flag at-risk customers without eyeballing data" — sales_02 |
| Support volume scale | 12 | 25% | "Our support team is drowning. We handle support in four different places" — sales_03 |
| Cost vs. incumbents | 11 | 23% | "Zendesk was $250K/year. Fab Lab will cut that in half" — sales_04 |
| Response time delays | 10 | 21% | "We're missing customer messages because we don't have a single inbox" — sales_05 |
| Sales time on ops | 8 | 17% | "I'm manually reviewing tickets every morning instead of working on strategy" — sales_06 |

RQ2: What does Fab Lab do well?

| Theme | Count | Prev | Representative Quote |
|---|---|---|---|
| Unified inbox | 23 | 41% | "The unified inbox has been genuinely transformative...Saved probably $180K in ARR" — Arcline Software |
| Health scoring | 19 | 34% | "Health scoring has made our CSM team way more proactive" — renewal_16 |
| Implementation speed | 18 | 32% | "We had value in 2 weeks. With Zendesk, it was 3 months" — renewal_17 |
| Support team quality | 16 | 29% | "The onboarding team was fantastic. They understood our workflow" — renewal_18 |
| Intuitive UI | 15 | 27% | "Our team picked it up immediately. Much less friction than Zendesk" — survey_01 |

RQ3: What's missing or broken?

| Theme | Count | Prev | Representative Quote |
|---|---|---|---|
| Reporting limits | 29 | 54% | "Reporting is the biggest gap vs. Zendesk" — churn_21 |
| Mobile app crashes | 20 | 37% | "Mobile app crashes when we're on customer calls. It's unreliable" — survey_01 |
| Enterprise auth gaps | 17 | 31% | "We can't implement this without SSO. Our security team won't approve it" — sales_08 |
| Integration depth | 15 | 28% | "We can't sync Salesforce back to Fab Lab. It's read-only" — support_q3 |
| KB search relevance | 13 | 24% | "Search is useless. We have 500 KB articles but people can't find anything" — support_q1 |
| Multi-tenant gaps | 11 | 20% | "We manage 30 client accounts but can't give them separate Fab Lab instances" — sales_09 |

RQ4: Competitive evaluation

| Competitor | Win Rate | Fab Lab Wins On | Representative Quote |
|---|---|---|---|
| Zendesk | 60% | Cost, speed, UX | "Zendesk is bloated but has everything. Fab Lab is cleaner but immature." — winloss_01 |
| Intercom | 55% | Consolidation, cost | "Intercom's bot is better, but Fab Lab's human support is faster and cheaper" — sales_11 |
| Freshdesk | Mixed | Speed, UX | "Freshdesk is better for IT/MSP use cases. Fab Lab for mid-market SaaS." — winloss_01 |
| ServiceNow | N/A | Mid-market focus | "ServiceNow is locked in at the enterprise level. That's not our market." — cab_q2 |

RQ5: What was the breaking point for churn?

| Theme | Count | Prev | Representative Quote |
|---|---|---|---|
| Enterprise compliance undelivered | 6 | 33% | "We were promised SSO would be ready by month 3. It's been 8 months." — churn_21 |
| Mobile app failures | 6 | 33% | "Our on-call team gave up on the mobile app. Too many crashes." — churn_22 |
| Reporting limitations | 6 | 33% | "Reporting is broken. We had to go back to Zendesk for visibility." — churn_23 |
| Multi-tenant absent | 4 | 22% | "We manage 15 client accounts. Fab Lab can't isolate them. We left." — support_q2 |
| Pricing at scale | 4 | 22% | "At $100/seat, our 50-person team would cost $60K/year." — survey_03 |

Where they went: Zendesk (5), Freshdesk (2), Intercom (1), proprietary (1)

RQ8: What drives expansion?

| Theme | Count | Prev | Representative Quote |
|---|---|---|---|
| Reporting delivery | 32 | 46% | "If you ship reporting, we'd expand from 15 to 25 seats." — renewal_20 |
| Enterprise features | 20 | 29% | "Once you have SSO and audit logs, we'll roll this out to the whole company." — renewal_21 |
| Health scoring ROI | 19 | 28% | "Health scoring caught a churn risk we missed. That saves us $50K." — cab_q2 |
| Mobile stability | 18 | 26% | "If the mobile app worked, we'd use it for everything." — survey_01 |
| Salesforce sync | 14 | 20% | "Salesforce integration would cement our decision to expand." — upsell_24 |

High-Intensity Outliers

Healthcare/Finance Exodus

Risk: HIGH — 3 churned customers in regulated verticals cite unmet compliance needs (HIPAA, SOX). All went to vertical-specific competitors.

On-Call Mobile Dependency

Risk: MEDIUM — Field teams depend on mobile notifications. Crashes = service reliability risk. 6+ customers reverted to email/SMS.

Unrealized Consolidation

Risk: MEDIUM — 43% of renewed customers still maintain secondary tools (Zendesk for reporting, Mailchimp for campaigns, spreadsheets for health).

Pricing at 30+ Seats

Risk: MEDIUM — 4 customers cite per-seat pricing as untenable at enterprise scale. 3+ lost deals had pricing as a factor.

Roadmap Credibility Crisis

Risk: MEDIUM — Reporting promised 3+ times and slipped. Customers becoming skeptical of announced features.


Segment Analysis

Prospects (n=20)

  • Top pain: Tool consolidation (56%), manual health scoring (33%)
  • Top objections: Reporting (23%), SSO (20%), pricing (16%)
  • Win rate: 60% vs Zendesk, 55% vs Intercom

Established (n=18)

  • Love: Unified inbox (41%), health scoring (34%), speed (32%)
  • Complaints: Reporting (44%), mobile (37%), enterprise auth (31%)
  • 43% still use secondary tools

Churned (n=18)

  • Drivers: Reporting (33%), mobile (33%), compliance (33%)
  • Destinations: Zendesk (5), Freshdesk (2), Intercom (1)
  • 60% potentially reversible with reporting + mobile fixes

Feature Requests & Unmet Needs

Critical (Blocking Expansion)

| # | Feature | Sources | Prev |
|---|---|---|---|
| 1 | Reporting Suite | 24 | 44% |
| 2 | Mobile App Stability | 20 | 37% |
| 3 | Enterprise Bundle (SSO+RBAC) | 17 | 31% |

High (Affecting Satisfaction)

| # | Feature | Sources | Prev |
|---|---|---|---|
| 4 | Salesforce Bi-directional Sync | 14 | 20% |
| 5 | KB Search Modernization | 13 | 24% |
| 6 | Email Campaign Enhancements | 9 | 17% |

Medium (Niche/Vertical)

| # | Feature | Sources | Prev |
|---|---|---|---|
| 7 | Multi-Tenant/White-Label | 11 | 20% |
| 8 | Vertical Compliance | 6 | 11% |
| 9 | API/Webhooks | 8 | 12% |

So What?

Messaging

  • Prospect: Consolidate tools, cut cost 40–50% vs Zendesk. Live in 2–4 weeks.
  • Established: Health scoring saves $50K+ in churn/year. Reporting coming Q3.
  • Churned: Reporting shipped. Mobile stability improved. Let's reconnect.
  • Enterprise: SSO, RBAC, audit logging launching Q2–Q3. Early access for advisory board.

Product Roadmap

| Quarter | Deliverables |
|---|---|
| Q2 2026 | Enterprise MVP (SAML + RBAC) · Mobile stability · Reporting MVP |
| Q3 2026 | Reporting GA · Enterprise GA · Salesforce bi-directional sync |
| Q4 2026 | Multi-tenant MVP · KB search modernization |

Sales Enablement

  1. Publish 3–5 case studies from renewals/expansions (Q2)
  2. Zendesk win strategy playbook with competitive messaging
  3. Enterprise readiness checklist for top 10 objections
  4. Churn recovery playbook for 18 churned customers
  5. "Fastest Path to Value" GTM motion for Zendesk prospects

Open Questions

  1. Why is consolidation the #1 driver but 43% still use secondary tools?
  2. What is the true CAC payback and LTV by segment?
  3. Are churned customers recoverable post-reporting GA?
  4. How do metrics compare to Zendesk/Intercom benchmarks?
  5. What's the TAM in regulated verticals if we close compliance gaps?