Customer research is a high-leverage activity that most teams underinvest in because it's tedious. The interviews themselves are 30-60 minutes; the real cost is everything around them — recruiting, scheduling, prep, transcription, synthesis, sharing findings. AI doesn't make the conversation better, but it can compress the surrounding work by 60-80%.
What AI doesn't do
The interview itself. Real interviewing requires reading body language, building rapport, knowing when to follow up versus move on, and sensing emotional weight. AI does none of these. Hire or train good human interviewers; don't try to automate the conversation.
The insight extraction at depth. Senior researchers find non-obvious patterns by living with data for days. AI helps surface candidates faster, but the genuinely surprising insights still come from a human who's deeply familiar with the domain.
What AI does well
Discussion guide creation. Give Claude or GPT a research question and it'll draft a 5-question, 30-minute discussion guide that covers the territory. The first draft is usually 80% there. Edit for specific terminology and follow-up branches.
Transcript cleanup. Raw transcripts are messy — uhms, false starts, name corrections. AI cleans these in seconds, preserving content. Use the cleaned transcript for synthesis; keep the raw for citation.
Quote extraction. "Find me 5 quotes from these interviews where users described frustration with [topic]." AI surfaces them faster than any researcher reading manually. Verify by reading the source — sometimes the AI flattens nuance — but the speed advantage is enormous.
Pattern detection across interviews. "I'll paste 8 interview transcripts. Identify the top 5 themes that appeared in multiple interviews. For each theme, list which interviews mentioned it and what they specifically said." This is the highest-value AI task in research synthesis.
Research summary generation. Given themed insights, AI can draft a research summary document, executive presentation, or stakeholder email in minutes.
A practical workflow
Before interviews: Define the research question. Have AI draft a discussion guide. Edit it for your domain. Send to participants.
During interviews: Record (with consent), take light notes. Don't try to AI-assist live — it's distracting and breaks rapport.
After each interview: Run transcript through cleanup. Extract a 1-paragraph summary plus the 3 most surprising quotes. Save in a structured format.
After all interviews: Paste all cleaned transcripts into Claude or Gemini's long-context window. Ask for theme analysis. Verify by reading sources for any theme you'll act on.
Distribution: AI drafts the synthesis report, the executive summary, the slide content. You edit each for accuracy and angle.
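The "save in a structured format" step in the workflow above can be as simple as one JSON file per interview. A minimal sketch; the schema and field names are illustrative, not a standard:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class InterviewRecord:
    # Illustrative schema -- adapt the fields to your own project.
    interview_id: int
    participant: str                 # use a pseudonym, not a real name
    date: str
    summary: str                     # the 1-paragraph summary
    surprising_quotes: list[str] = field(default_factory=list)

record = InterviewRecord(
    interview_id=3,
    participant="P3",
    date="2025-05-14",
    summary="Relies on a nightly spreadsheet workaround for exports.",
    surprising_quotes=["I just email the file to myself every night."],
)

# One file per interview keeps later synthesis scripts simple.
path = f"interview_{record.interview_id:02d}.json"
with open(path, "w") as f:
    json.dump(asdict(record), f, indent=2)
```

Any consistent structure works; the point is that the synthesis stage can load every interview programmatically instead of hunting through documents.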
Sample prompts that work
For discussion guide:
Draft a 30-minute discussion guide for interviewing [type of user]
about [research question]. Focus on understanding their current
workflow, pain points, and what they've tried. Avoid leading
questions. Include 2-3 follow-up branches per main question.
For theme extraction:
Below are transcripts from 8 user interviews about [topic].
Identify the top 5 themes that appeared in 3+ interviews.
For each theme:
- Theme name and description
- Which interviews discussed it (by number)
- 2-3 representative direct quotes
- The strongest contradicting quote (if any)
Do not invent quotes; cite only what's in the transcripts.
For quote extraction:
Find every quote in these interviews where users described:
- A workaround they're using
- Frustration with the current state
- A wish for a specific feature
Format as: [Interview #] - [User name] - [Direct quote]
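Templates like these are easiest to reuse if you assemble them from saved transcript files rather than pasting by hand. A sketch, assuming one cleaned transcript per `interview_NN.txt` file (the file layout and template wording are illustrative):

```python
from pathlib import Path

THEME_PROMPT = """Below are transcripts from {n} user interviews about {topic}.
Identify the top 5 themes that appeared in 3+ interviews.
For each theme: name, description, which interviews discussed it,
2-3 representative direct quotes, and the strongest contradicting
quote (if any). Do not invent quotes; cite only what's in the
transcripts.

{transcripts}"""

def build_theme_prompt(topic: str, transcript_dir: str = "transcripts") -> str:
    # Assumes files named interview_01.txt, interview_02.txt, ...
    files = sorted(Path(transcript_dir).glob("interview_*.txt"))
    blocks = [
        f"--- Interview {i} ---\n{path.read_text()}"
        for i, path in enumerate(files, start=1)
    ]
    return THEME_PROMPT.format(
        n=len(files), topic=topic, transcripts="\n\n".join(blocks)
    )
```

Numbering each transcript in the prompt is what makes the "which interviews discussed it" attribution in the theme-extraction template checkable afterward.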
Handling sensitive interviews
For user research that touches sensitive topics — health, finances, immigration, anything personal — be careful with AI handling:
- Don't paste transcripts into ChatGPT or any free tier (data goes into training unless you've explicitly opted out)
- Use enterprise tiers with data privacy guarantees, or self-hosted models
- Strip personally identifying information before AI processing
- Disclose AI processing in your participant consent forms
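Stripping PII can be partly scripted, but only partly. The sketch below catches obvious patterns (emails, phone numbers); names, addresses, and health details need human review or a dedicated NER-based redaction tool. The patterns here are illustrative, not exhaustive:

```python
import re

# First-pass redaction only. Regexes like these catch obvious
# machine-readable PII; they do NOT catch names, addresses, or
# health details -- always follow with human review or a proper
# redaction tool before sending transcripts to a third-party AI.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(
        r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"
    ),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-123-4567."))
# -> Reach me at [EMAIL] or [PHONE].
```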
The ethical floor: participants consented to talk to a researcher, not to have their words processed by a third-party AI. Make sure your consent covers what you actually do.
The hallucination risk in synthesis
AI synthesizing research has a specific failure mode: it will generate a quote that sounds like something a participant could have said but never did. The quote is plausible. It's not real.
This is dangerous because research is supposed to be grounded in actual user voice. A synthesized-but-fake quote ends up in a slide deck, gets cited as user evidence, drives a product decision based on something nobody said.
Mitigations:
- Always verify quotes against transcripts
- Use prompts that explicitly require attribution: "Cite the interview number and timestamp for every quote"
- For high-stakes findings, have a human read the actual transcripts
- Treat AI synthesis as draft for human verification, not as final output
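The first two mitigations can be backed by a cheap automated check: flag any extracted quote that does not appear verbatim in its source transcript. A sketch; the light normalization here is an assumption, and near-paraphrases will be flagged for a human to resolve:

```python
import re

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivial formatting
    # differences don't cause false alarms; anything beyond that
    # (paraphrase, wording changes) should fail and go to a human.
    return re.sub(r"\s+", " ", text).lower().strip()

def verify_quotes(quotes: list[str], transcript: str) -> list[str]:
    """Return the quotes that do NOT appear verbatim in the transcript."""
    haystack = normalize(transcript)
    return [q for q in quotes if normalize(q) not in haystack]

transcript = "So yeah, I just export it to a spreadsheet every Friday."
extracted = [
    "I just export it to a spreadsheet every Friday",  # real
    "the export feature is completely broken",         # plausible but invented
]
print(verify_quotes(extracted, transcript))
# -> ['the export feature is completely broken']
```

A flagged quote isn't automatically fake, but it should never reach a deliverable until someone has found it in the source.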
When NOT to use AI for research
For first interviews in a new domain. Going in without prep is bad, but going in with AI-generated prep that misses domain-specific issues is worse. Do the first 3-5 interviews with a human-prepared guide; let AI assist after you understand the domain.
For interviews with elite or sensitive populations (executives, patients, marginalized groups). The trust required is fragile. AI involvement, if disclosed, may erode willingness to share. If not disclosed, you've crossed an ethical line.
For very small interview counts (< 5). At that scale, your brain handles the synthesis better than AI does. AI's lift comes from compressing across many interviews.
Decision tree
- Research at scale (10+ interviews per project): AI synthesis with human verification
- Sensitive populations / topics: Human-only synthesis
- Solo founder talking to first 5 customers: Manual; AI for discussion guide only
- Established product team: AI synthesis as default, with strong verification
- Academic research with publication standards: AI assist allowed, citation conventions required
Next steps
- Try AI theme extraction on a past project's transcripts; compare to your original synthesis
- Build a prompt library you reuse across projects
- Set verification standards (always read source for any quote that appears in a deliverable)
- Read about NotebookLM specifically for research synthesis use cases