## The problem

Every department in the org needs the voice of the customer, and every department gets a different version of it. Research runs the interviews and writes a thirty-page report nobody outside research finishes reading. Product sifts beta-survey responses on Monday morning under deadline pressure and goes with gut feel. Marketing pulls testimonial copy from whichever NPS responses sales happens to forward. Sales rebuilds the objection-handling deck quarterly from memory. Exec runs its own back-of-the-envelope summary for the board the night before. Five teams, five summaries, five different stories about the same customer base — and the gold under all of it (the actual transcripts and quotes) sits in a Drive folder nobody has time to read.

## Author

The **head of UX research** and a **senior PM** co-author the customer research synthesizer in the AgentBundle dashboard wizard — no code. They paste in: the team's research framework (jobs-to-be-done, the opportunity-scoring rubric used to rank findings), the standard interview question bank, and three example past research reports that hit the right structure (themed pain points, ranked by frequency and severity, each backed by direct quotes with timestamp citations). They wire MCP connections to Drive and Notion (the transcripts repository), Linear (so the PM can manually export insights into opportunities), and the customer-research data warehouse (NPS, CES, survey aggregates). The wizard generates the bundle. The reviewer-workflow controls referenced below ship on Business tier and above; see [/pricing](/pricing) for what's gated.
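
To make the wizard's inputs concrete, here is a minimal sketch of what the generated bundle manifest might look like. Every field name here is an assumption for illustration — AgentBundle's real schema may differ.

```python
# Illustrative bundle manifest: framework, example reports, and MCP
# connections, mirroring what the co-authors paste into the wizard.
# All keys and values are hypothetical, not AgentBundle's documented schema.
bundle = {
    "name": "customer-research-synthesizer",
    "framework": {
        "method": "jobs-to-be-done",
        "ranking": "opportunity-scoring",  # the rubric used to rank findings
    },
    # Three past reports that hit the right structure
    "examples": ["report-a.md", "report-b.md", "report-c.md"],
    "connections": {  # MCP servers the agent can reach
        "transcripts": ["drive", "notion"],
        "issue_tracker": "linear",          # PM exports insights manually
        "warehouse": "customer-research-dw" # NPS, CES, survey aggregates
    },
}

def validate(manifest: dict) -> list[str]:
    """Return any missing required top-level keys, sorted."""
    required = {"name", "framework", "examples", "connections"}
    return sorted(required - manifest.keys())

print(validate(bundle))  # -> []
```

The point of the sketch: everything the five consuming teams later rely on — the ranking rubric, the evidence sources — lives in one declared artifact, not in five teams' heads.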

## Review

The **VP of product** and the **head of research** both approve the agent before it can publish. Research findings drive roadmap prioritization across the org, so review matters. On Business tier both signoffs are mandatory — [N-required reviewers, with every change to the canonical synthesis rules captured in the audit log](/security#approval-workflow), recording who made the change and when. When the head of research moves on the following year, every prior version of the rules stays queryable.
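
The mechanics of N-required reviewers plus an append-only audit trail can be sketched in a few lines. This is an illustrative model, not AgentBundle's API — the class and field names are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RuleChange:
    version: int
    author: str
    timestamp: str
    approvals: set = field(default_factory=set)

class AuditLog:
    # Both named reviewers must sign off before a change can publish.
    REQUIRED = {"vp_product", "head_of_research"}

    def __init__(self):
        self.entries: list[RuleChange] = []  # append-only: old versions stay queryable

    def propose(self, author: str) -> RuleChange:
        change = RuleChange(
            version=len(self.entries) + 1,
            author=author,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self.entries.append(change)
        return change

    def approve(self, change: RuleChange, reviewer: str) -> bool:
        change.approvals.add(reviewer)
        # Publishable only once every required reviewer has signed.
        return self.REQUIRED <= change.approvals

log = AuditLog()
change = log.propose(author="head_of_research")
log.approve(change, "vp_product")                 # one signoff: not yet publishable
print(log.approve(change, "head_of_research"))    # True: both signoffs present
```

Because entries are only ever appended, "every prior version of the rules stays queryable" falls out of the data structure rather than a retention policy.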

## Distribute

`apm publish` — the CLI from [APM (Microsoft's open packaging spec for AI agents)](https://microsoft.github.io/apm/) — ships APM-compatible bundles to all 7 runtimes the moment both reviewers approve. The research team lives in their preferred runtime; PMs run it in their IDE of choice; non-CLI users — and that's most of marketing, sales, and exec — install the agent through the AgentBundle dashboard for their preferred surface. Same canonical agent, different surfaces.

## Use

One canonical synthesis agent, five very different consumer flows — the pattern teams build toward:

- **Research** runs the canonical flow. After twelve user interviews, the researcher dumps transcripts into the agent and gets back themed pain-point summaries with exact-quote citations and timestamp links to source transcripts. The thirty-page write-up still happens — but the v0 is on the screen in fifteen minutes instead of two days, and every claim is traceable to the line in the transcript that supports it.
- **Product** runs the agent on their feature's beta-survey responses. The PM for a billing-portal v2 pastes in 220 free-text survey responses and gets back a ranked friction list, each item backed by customer-quote evidence, in five minutes — instead of the full day of spreadsheet work the Friday before sprint planning. The roadmap prioritization conversation moves from "what do we think?" to "here are the top five things customers told us, ranked by frequency."
- **Marketing** runs the agent on recent NPS free-text responses to surface customer-language patterns — the actual phrases customers use to describe what the product unlocks for them. Those phrases become ad copy, landing-page testimonials, and email subject lines that read in the customer's voice instead of the marketing team's. Permission flags are surfaced on every quote so legal-cleared candidates are obvious; nothing makes it to a public surface without confirmed opt-in.
- **Sales** runs the agent on lost-deal post-mortems. It surfaces objection patterns that recur — "didn't see the value at our team size," "champion left mid-cycle," "competitor's onboarding looked easier" — and arms account executives with anticipatory talking points before high-value calls. The objection-handling deck stops being a quarterly memory-bank exercise.
- **Executive** runs the agent on the quarterly NPS aggregates. The CEO surfaces the top three customer themes for the board deck the same way the product team does — same source of truth, same synthesis rules, same evidence trail. There's no separate "exec summary" workflow that diverges from what product and research are seeing; the board reads the same story the roadmap is built on.
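
All five flows consume the same output shape: themes ranked by frequency and severity, each backed by timestamped quotes. A minimal sketch of that structure, with invented field names (the real output format is not specified here):

```python
from dataclasses import dataclass

@dataclass
class Quote:
    text: str
    transcript: str  # link back to the source document
    timestamp: str   # position in the interview recording, e.g. "00:14:32"

@dataclass
class Theme:
    label: str
    frequency: int         # distinct customers who raised it
    severity: int          # 1 (minor) .. 5 (blocking), per the scoring rubric
    evidence: list[Quote]  # every claim traceable to a source line

def rank(themes: list[Theme]) -> list[Theme]:
    # Frequency first, severity as tiebreaker -- "ranked by frequency
    # and severity," as in the example research reports.
    return sorted(themes, key=lambda t: (t.frequency, t.severity), reverse=True)

themes = [
    Theme("export is slow", 9, 3,
          [Quote("exports take forever", "int-04.txt", "00:12:05")]),
    Theme("billing is confusing", 9, 5,
          [Quote("I can't find my invoice", "int-11.txt", "00:03:41")]),
]
# Equal frequency, so severity breaks the tie: billing ranks first.
print([t.label for t in rank(themes)])
```

Because every `Theme` carries its `evidence`, a researcher's citation, a PM's friction list, and a CEO's board theme all resolve to the same quotes.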

## Iterate

Illustrative scenario: six weeks in, the research team notices the agent over-weighting vocal customers — the ones who write 800-word survey responses get more representation than the 220 customers who left a single sentence. The head of research tightens the canonical prompt to weight responses by frequency, not volume, and ships v2 through the same publish pipeline. **Every consuming team picks up the corrected synthesis on its next invocation** — product's Monday-morning roadmap reads, marketing's testimonial pulls, sales' objection patterns, the CEO's board summary. Nobody has to be told to "use the new method"; it's the only method.
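
The frequency-versus-volume distinction is easy to show concretely. A toy sketch with invented data — one vocal customer versus three terse ones:

```python
from collections import defaultdict

# Each row: (customer, theme mentioned, word count of the response).
# Invented data: one customer writes essays, three leave a sentence.
responses = [
    ("cust-1", "pricing", 800),
    ("cust-1", "pricing", 650),   # the same vocal customer, again
    ("cust-2", "onboarding", 12),
    ("cust-3", "onboarding", 9),
    ("cust-4", "onboarding", 15),
]

def weight_by_volume(rows):
    """v1 behavior: 800-word essays dominate the ranking."""
    totals = defaultdict(int)
    for _, theme, words in rows:
        totals[theme] += words
    return dict(totals)

def weight_by_frequency(rows):
    """v2 fix: one customer, one vote per theme."""
    voices = defaultdict(set)
    for customer, theme, _ in rows:
        voices[theme].add(customer)
    return {theme: len(customers) for theme, customers in voices.items()}

print(weight_by_volume(responses))     # {'pricing': 1450, 'onboarding': 36}
print(weight_by_frequency(responses))  # {'pricing': 1, 'onboarding': 3}
```

Under v1, "pricing" looks like the top theme because one customer wrote 1,450 words about it; under v2, "onboarding" wins because three distinct customers raised it.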

Past summaries stay accessible in the audit log. When a PM later defends a roadmap decision — "why did we prioritize the export feature?" — they can show exactly which prompt version produced the insight and which interview quotes backed it. The synthesis becomes evidence the org can stand behind, not a memory of what someone said in a Monday meeting.