Product Discovery with AI: Prompt Templates to Turn Interviews into Roadmap Priorities
Glossary of important terms
Before we dive into the specifics of using AI for product discovery, here are a few key concepts and acronyms referenced throughout this article:
CRM: Customer Relationship Management system for organizing and tracking customer data and interactions.
Jobs-to-Be-Done (JTBD): A framework for uncovering what customers truly need and the tasks they aim to complete.
INVEST: A checklist for well-formed user stories: Independent, Negotiable, Valuable, Estimable, Sized appropriately, Testable.
Gherkin: A plain-language syntax for describing test scenarios (Given / When / Then) in software development.
Epic: A large body of work in Agile that’s broken down into smaller user stories.
RICE: A prioritization framework that scores potential initiatives on Reach, Impact, Confidence, and Effort.
These terms provide vital context for the workflows and templates described below.

Why AI product discovery should connect interviews, CRM, and roadmap scoring
Discovery breaks down when interview data, CRM entries, and project tracking aren’t integrated. By systematically linking fields like account_id, segment, and opportunity_stage to every customer quote, your roadmap becomes grounded in revenue potential, not just the loudest voices in the room.
Maintain a unified taxonomy for job_step, pain_intensity, and outcome_metric. This consistency lets your team reliably compare results by geography or buyer category without unnecessary argument.
Evidence outweighs anecdote. Standardize your terminology once, then apply it everywhere.
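For illustration, here is a minimal sketch of one evidence record under that shared taxonomy. The field names come from this article; the values and the validation helper are hypothetical.

```python
# One interview quote linked to CRM context under the shared taxonomy.
# Field names follow the article; values are illustrative only.
evidence_record = {
    "quote": "We rebuild the renewal rollup by hand every Friday.",
    "account_id": "ACME-0042",            # CRM linkage
    "segment": "mid_market",
    "opportunity_stage": "renewal",
    "job_step": "prepare",                # controlled vocabulary
    "pain_intensity": 8,                  # 0-10, team-agreed scale
    "outcome_metric": "time_to_forecast_minutes",
}

REQUIRED_FIELDS = {"quote", "account_id", "job_step", "pain_intensity", "outcome_metric"}

def is_comparable(record: dict) -> bool:
    """Reject records that skip the taxonomy, so segments stay comparable."""
    return REQUIRED_FIELDS <= record.keys()
```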
Pre-interview alignment prompts that enforce Jobs-to-Be-Done structure
Structure comes first. Nail down the framework before meeting customers so results are reliable and data is comparable. Use shared variables, such as persona and scenario, across all interview setups.
Interview guide generator for a chosen persona and scenario
You are a senior product researcher. Create a 35‑minute JTBD interview guide for persona = RevOps Director, industry = B2B SaaS, scenario = renewal forecasting. Include: (1) warm‑up, (2) timeline: first thought → outcome, (3) pushes and pulls, (4) anxieties and habits, (5) switching triggers, (6) closing. For each section, write 3 neutral, non‑leading questions and 2 probes. Add a consent script and privacy notice for US clients. Output JSON with keys: guide, risks_to_avoid, consent_script. Keep language plain.
Hypothesis register for live interview testing
Act as a product analyst. Given product_hypotheses = [Manual pipeline rollups cause missed renewals, Reps distrust forecast categories], write a test_plan of falsifiable signals and disconfirming evidence to observe during interviews. Map each hypothesis to job_step, metric_affected, leading_question_to_avoid, and observation_checklist. Return JSON array under key = hypothesis_register.
Screener to select the right participants
Design a participant screener for persona = Sales Ops Manager with must_have_traits = [owns CRM fields, builds rollups], exclude = [< 6 months in role]. Provide 6 questions with pass criteria, plus a 30‑second self‑recorded scenario prompt. Return JSON fields: question, acceptable_answers, fail_reason.
Post-interview segmentation prompts that turn transcripts into comparable data
Transform interview transcripts into actionable facts. Consistently label keys like quote, job_step, and obstacle so they’re ready for synthesis and reporting.
Segment interviews by job step and extract verbatims
From transcript_text = ..., segment into episodes aligned to JTBD steps: Define, Locate, Prepare, Execute, Confirm. For each episode, extract: direct quote (max 25 words), desired_outcome, obstacles, current_workaround, tools, time_cost_minutes, emotional_tone. Return JSON: { interview_id, episodes: [] }. Do not invent content.
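To make the expected shape concrete, here is a hypothetical example of the { interview_id, episodes: [] } output. Field names follow the prompt; the interview content is invented.

```python
# Hypothetical output for one interview, matching the schema in the prompt above.
segmented_interview = {
    "interview_id": "INT-014",
    "episodes": [
        {
            "job_step": "Prepare",
            "quote": "I export three reports and stitch them together every Monday.",
            "desired_outcome": "One trusted rollup before the forecast call",
            "obstacles": ["manual export to spreadsheets", "numbers disagree with CRM"],
            "current_workaround": "Weekly spreadsheet merge",
            "tools": ["CRM", "Sheets"],
            "time_cost_minutes": 90,
            "emotional_tone": "frustrated",
        }
    ],
}
```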
Normalize tags using a controlled vocabulary
Normalize tags for items = episodes[].obstacles using vocabulary = [data_accuracy, handoff_delay, permissions, manual_export]. Map each item to nearest tag with reason, or flag = new_tag. Output JSON with keys: normalized_tag, reason, confidence_0_1.
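If you want a deterministic fallback for this step, a minimal sketch might map raw obstacle phrases to the controlled vocabulary by string similarity. This stands in for the model's judgment, and the 0.4 cutoff is an arbitrary illustrative threshold.

```python
import difflib

# Controlled vocabulary from the prompt above.
VOCABULARY = ["data_accuracy", "handoff_delay", "permissions", "manual_export"]

def normalize_tag(raw_obstacle: str, cutoff: float = 0.4) -> dict:
    """Map a raw obstacle phrase to the nearest controlled tag, or flag it as new."""
    candidate = raw_obstacle.lower().replace(" ", "_")
    matches = difflib.get_close_matches(candidate, VOCABULARY, n=1, cutoff=cutoff)
    if matches:
        similarity = difflib.SequenceMatcher(None, candidate, matches[0]).ratio()
        return {"normalized_tag": matches[0],
                "reason": f"closest vocabulary match to '{raw_obstacle}'",
                "confidence_0_1": round(similarity, 2)}
    return {"normalized_tag": None, "flag": "new_tag",
            "reason": f"no vocabulary entry near '{raw_obstacle}'",
            "confidence_0_1": 0.0}
```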
Link insights to CRM, always with revenue context
Match interview_company = Acme Health and interview_email_domain = acmehealth.com to CRM accounts. If multiple matches, rank by ARR and stage. Return { account_id, arr, open_opportunities: [ { id, stage, amount } ] } and attach theme_candidates from transcript.
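A minimal matching sketch, assuming your CRM export carries domain, arr, and open_opportunities fields; the lookup logic is illustrative, not a specific CRM API.

```python
def match_account(email_domain: str, accounts: list[dict]) -> dict | None:
    """Match an interviewee to a CRM account by email domain, ranking ties by ARR.

    `accounts` is an assumed export with account_id, domain, arr, and
    open_opportunities; field names mirror the prompt above.
    """
    candidates = [a for a in accounts if a.get("domain") == email_domain]
    if not candidates:
        return None
    best = max(candidates, key=lambda a: a.get("arr", 0))
    return {
        "account_id": best["account_id"],
        "arr": best["arr"],
        "open_opportunities": best.get("open_opportunities", []),
    }
```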
Signal scoring prompts that quantify pain and urgency for roadmap decisions
Quantify what you’ve learned. Use transparent metrics. Maintain fields like reach, impact, confidence, and effort consistently teamwide.
Opportunity scoring based on importance and satisfaction
Using episodes JSON, compute opportunity_score = importance_0_10 * ( 10 - satisfaction_0_10 ). Derive importance from time_cost_minutes and frequency_per_week; derive satisfaction from sentiment and workaround_strength. Aggregate by normalized_tag. Return a table as JSON: { tag, interviews, opportunity_score, top_quote }.
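A worked version of that arithmetic, assuming each episode already carries importance_0_10, satisfaction_0_10, and a normalized_tag. The prompt leaves the aggregation method open, so this sketch uses the mean per tag.

```python
from collections import defaultdict

def opportunity_scores(episodes: list[dict]) -> list[dict]:
    """Aggregate opportunity_score = importance * (10 - satisfaction) by tag."""
    by_tag = defaultdict(list)
    for ep in episodes:
        score = ep["importance_0_10"] * (10 - ep["satisfaction_0_10"])
        by_tag[ep["normalized_tag"]].append((score, ep["quote"]))
    table = []
    for tag, scored in by_tag.items():
        scored.sort(reverse=True)  # highest-scoring episode first
        table.append({
            "tag": tag,
            "interviews": len(scored),
            "opportunity_score": round(sum(s for s, _ in scored) / len(scored), 1),
            "top_quote": scored[0][1],
        })
    return sorted(table, key=lambda r: r["opportunity_score"], reverse=True)
```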
RICE scoring tied to CRM reach
For top tags = [data_accuracy, permissions], compute RICE. Reach = number of active_accounts with tag in CRM last 90 days; Impact = 1 if removes a blocker, else 0.5; Confidence from transcript coverage and quote specificity; Effort in team_weeks from eng_estimates JSON. Return sorted list with { tag, R, I, C, E, RICE }.
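For reference, standard RICE is (Reach × Impact × Confidence) / Effort. A small sketch with invented inputs that follow the rubric above:

```python
def rice(reach: int, impact: float, confidence: float, effort_team_weeks: float) -> float:
    """Standard RICE score: (Reach * Impact * Confidence) / Effort."""
    return (reach * impact * confidence) / effort_team_weeks

# Invented inputs following the rubric in the prompt above.
tags = [
    {"tag": "data_accuracy", "R": 42, "I": 1.0, "C": 0.8, "E": 6.0},
    {"tag": "permissions",   "R": 18, "I": 0.5, "C": 0.6, "E": 2.0},
]
for t in tags:
    t["RICE"] = round(rice(t["R"], t["I"], t["C"], t["E"]), 1)
ranked = sorted(tags, key=lambda t: t["RICE"], reverse=True)  # data_accuracy first (5.6 vs 2.7)
```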
Payback window estimate for finance alignment
Estimate payback_months for each tag using arr_at_risk and expected_win_rate_lift. Provide assumptions and sensitivity bands (low, base, high). Output JSON: { tag, payback_low, payback_base, payback_high, assumptions }.
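The article does not pin down a payback formula, so here is one deliberately simple model to make the sensitivity bands concrete: build cost divided by monthly ARR benefit. Every number, including the cost per team week, is an invented assumption.

```python
def payback_months(arr_at_risk: float, win_rate_lift: float,
                   effort_team_weeks: float, cost_per_team_week: float = 8_000) -> float:
    """Payback = cost to build / monthly benefit. A toy model, not the article's."""
    build_cost = effort_team_weeks * cost_per_team_week
    monthly_benefit = (arr_at_risk * win_rate_lift) / 12   # ARR protected per month
    return round(build_cost / monthly_benefit, 1)

# Sensitivity bands: rerun the same model at low / base / high lift assumptions.
bands = {label: payback_months(1_200_000, lift, effort_team_weeks=6.0)
         for label, lift in {"low": 0.02, "base": 0.05, "high": 0.10}.items()}
# -> {"low": 24.0, "base": 9.6, "high": 4.8} months
```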
Theme clustering prompts that produce candidate epics and measurable outcomes
Synthesize scattered pain points into actionable themes, each with measurable outcomes your execs will support.
Cluster obstacles into high-level themes for leaders
Cluster obstacles by semantic similarity and co‑occurrence in accounts. Produce 5–7 themes with names that a CFO would understand. For each theme, include: problem_statement (no solution words), target_metric (e.g., time to approval, hours), baseline_value, north_star_link, and representative_quotes. Return JSON under key = themes.
Translate themes into epics with concrete results
Convert themes[] into epics with outcome_key_results: { epic_name, outcome_metric, target_delta, time_horizon_weeks, risks, non_goals }. Ensure outcomes are user‑observable and revenue‑linked. No UI specs.
User story and acceptance criteria prompts ready for engineering intake
Convert each epic into actionable user stories following INVEST principles, including clear, testable acceptance criteria using Gherkin language for QA.
User stories in INVEST format, free from solution bias
From epic = Reduce manual data exports in renewals, produce 6 user_stories in INVEST format. Avoid UI solutions. Include persona, context, motivation, and measurable acceptance. Return JSON: { story_id, story, rationale }.
Gherkin acceptance tests for the most important stories
For stories[0..1], write Gherkin scenarios with Given / When / Then. Cover happy path, permissions edge case, and data error. Output plain text under keys: story_id, gherkin_scenarios.
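As an illustration of the expected output, here is a hypothetical pair of scenarios for the renewals epic above. The feature and steps are invented; only the Given / When / Then structure and the coverage categories come from the prompt.

```gherkin
Feature: Automated renewal rollup

  Scenario: Happy path - rollup refreshes without manual export
    Given a RevOps Director with an active renewal pipeline
    When the weekly forecast rollup is generated
    Then the rollup matches the CRM opportunity amounts
    And no manual export step is required

  Scenario: Permissions edge case - viewer lacks pipeline access
    Given a user without read access to the renewals pipeline
    When they open the forecast rollup
    Then they see an access request path instead of partial data
```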
Roadmap sequencing prompts that respect capacity, risk, and dependencies
Prioritize transparently by making constraints visible. Tie capacity_weeks to current team bandwidth, and document every dependency.
Quarterly sequence planning with built-in guardrails
Given epics JSON, team_capacity = { BE: 18, FE: 14, DS: 6 team_weeks }, and risks_by_theme, produce a Q2–Q3 sequence. Respect dependencies and a WIP limit of 2 epics per team. Return: { quarter, start, end, epics: [ { name, capacity_split, dependency_list, risk_mitigations } ] }.
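A tiny scheduling sketch of those guardrails, checking one epic against remaining capacity and the per-team WIP limit. This is a helper for illustration, not a full planner.

```python
def fits_quarter(epic_needs: dict, remaining: dict, wip: dict, wip_limit: int = 2) -> bool:
    """True if an epic fits remaining capacity without breaking the WIP limit.

    epic_needs and remaining map team -> team_weeks, e.g. {"BE": 6, "FE": 4};
    wip maps team -> epics already in flight this quarter.
    """
    return all(
        weeks <= remaining.get(team, 0) and wip.get(team, 0) < wip_limit
        for team, weeks in epic_needs.items()
    )

remaining = {"BE": 18, "FE": 14, "DS": 6}  # team_weeks, from the prompt above
assert fits_quarter({"BE": 6, "DS": 2}, remaining, wip={"BE": 1, "DS": 0})
```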
Executive summary of chosen plan and trade-offs
Summarize the selected sequence for executives. Include reason_for_change, items_deferred, impact_on_metrics, and risk_register_updates. Write in 150 words, no jargon.
Governance prompts that protect privacy and reduce bias in discovery
Safeguard privacy and ensure fairness at every stage. Every artifact should feature standard fields like pii_masked and bias_flags.
PII masking before AI processing
Redact PII from transcript_text using rules: names, emails, phone, company secrets. Replace with tokens like {{ PERSON_1 }}. Output redacted_text and mask_map. Never store raw data.
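A minimal redaction sketch using regex rules for emails and phone numbers plus a supplied name list. Real PII detection needs more than regular expressions; this only illustrates the token and mask_map contract from the prompt.

```python
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(transcript_text: str, known_names: list[str]) -> tuple[str, dict]:
    """Replace PII with tokens like {{ PERSON_1 }} and return the mask_map."""
    mask_map, counters = {}, {}
    redacted = transcript_text

    def mask(kind: str, value: str) -> str:
        counters[kind] = counters.get(kind, 0) + 1
        token = f"{{{{ {kind}_{counters[kind]} }}}}"
        mask_map[token] = value
        return token

    for name in known_names:
        if name in redacted:
            redacted = redacted.replace(name, mask("PERSON", name))
    for kind, pattern in PATTERNS.items():
        redacted = pattern.sub(lambda m, k=kind: mask(k, m.group()), redacted)
    return redacted, mask_map  # store the mask_map securely, never the raw text
```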
Bias review for interview guides and summaries
Review interviewer_questions and summary_text. Flag leading language, attribution bias, or stereotyping. Provide before_after rewrites with neutral wording and cite why. Return JSON: { issue, severity, fix }.
Hallucination check with citation linking
For every claim in the summary, link to episode_id and quote index. If missing evidence, mark status = unsupported. Return coverage_report with support_ratio.
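A small sketch of the coverage arithmetic, assuming each claim carries episode_id and a quote_index when evidence exists. The field name quote_index is a hypothetical rendering of the prompt's quote index.

```python
def coverage_report(claims: list[dict]) -> dict:
    """Mark each claim supported/unsupported and compute the support_ratio."""
    for c in claims:
        has_evidence = c.get("episode_id") and c.get("quote_index") is not None
        c["status"] = "supported" if has_evidence else "unsupported"
    supported = sum(c["status"] == "supported" for c in claims)
    return {"claims": claims,
            "support_ratio": round(supported / max(len(claims), 1), 2)}
```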
Suggested workflow and toolchain for centralizing discovery-to-roadmap data
Many modern teams consolidate discovery work in platforms like Routine, Notion, or ClickUp. Store all interview artifacts, CRM context, and roadmap documents in one shared space with bi-directional links.
As you define themes and translate them into epics, map these structures directly to formal project artifacts. You can reference these project planning templates for structured roadmaps and charters. As delivery progresses, track milestones transparently using this guide on the five phases of the project lifecycle, keeping research and release linked.
Field-ready prompt set you can copy into your workspace today
Paste once, reuse every week
Build a shared prompt library accessible to your whole team. Store these prompts under /discovery/prompts with proper version control. Update them quarterly to keep pace with taxonomy changes.
Weekly synthesis across interviews and support tickets
Aggregate inputs = { interviews_last_7_days, top_support_topics, churn_reasons }. Produce a weekly_digest with: 5 theme_trends, new_risks, CRM_reach_shift, and suggested_executive_slide bullets (max 6). Include one chart recommendation with data fields, not visuals.
PRD starter template from the highest-priority theme
From theme = Permissions bottlenecks, RICE = 640, and key_quotes = [...], generate a PRD skeleton with sections: Problem, Context, Objectives, Non‑Goals, Metrics, Risks, Rollout Plan. No UI. Return Markdown under key = prd.
Stakeholder update for sales and support teams
Draft a cross‑functional update. Audience = Sales + Support. Include what we learned, what changes next week, and how to tag new cases under taxonomy. Tone direct. 120 words.
Where to go next for deeper product discovery practice
To go deeper, anchor your discovery practice in tools that keep insights connected to execution. Many teams use Routine to centralize interviews, CRM context, and roadmap artifacts in one living workspace, so evidence flows directly into decisions. Pair this with foundational learning from Teresa Torres’s Continuous Discovery Habits and structured Jobs-to-Be-Done scoring to sharpen your craft — then apply it inside Routine so your process compounds instead of resetting every quarter.
FAQ
Why is it crucial to integrate interviews, CRM, and roadmap scoring in AI product discovery?
Integrating these elements prevents discovery from becoming fragmented, ensuring data-driven decisions rather than being swayed by the loudest opinions. When fields like account_id and opportunity_stage are linked, roadmaps align more closely with revenue potential.
How does maintaining a unified taxonomy improve product discovery?
Consistent taxonomies ensure that teams can compare results reliably across different contexts, avoiding time-wasting debates. By standardizing terms like job_step and outcome_metric, data becomes easier to synthesize and report.
What is the benefit of using Jobs-to-Be-Done (JTBD) framework in interviews?
The JTBD framework focuses on understanding true customer needs and the tasks they want to complete, leading to more meaningful insights. Pre-interview alignment on this structure ensures that data gathered is both reliable and comparable across sessions.
How can Routine centralize the discovery-to-roadmap process?
Routine provides a unified platform for storing all discovery artifacts, CRM data, and roadmap documents with bi-directional linking. This centralization streamlines project planning and execution, enhancing team efficiency and decision-making.
Why prioritize evidence over anecdote in product discovery?
Relying solely on anecdotes can lead to biased decisions that do not reflect true customer needs or business potential. Evidence-based approaches ensure that data grounds every decision, leading to more accurate and strategic outcomes.
How can bias be reduced during the discovery process?
Implementing governance prompts that review interview guides and summaries for leading language or stereotypes minimizes bias. Routine's features ensure fairness and privacy by incorporating PII masking and bias checks.
What risks exist in poor roadmap sequencing?
Inadequate sequencing can result in resource overload and missed targets, especially if dependencies and team capacities are ignored. Transparent prioritization, considering constraints and risks, is vital to maintaining efficient project progress.
What is the impact of poor taxonomy in product discovery?
Inconsistent terms across teams can lead to misinterpretations and inefficiencies, affecting the reliability of data synthesis. A robust, uniform taxonomy facilitates seamless communication and more accurate comparisons of data insights.
Why use RICE scoring in product prioritization?
RICE scoring offers a structured method for evaluating initiatives based on reach, impact, confidence, and effort. This allows teams to prioritize work that maximizes value and aligns most effectively with customer and business goals.
