How to Run AI-Moderated Studies That Actually Work

February 24, 2026

In Part 1, we covered when AI-Moderated Studies make sense and when you're better off with other methods. Now let's get practical: how teams are actually using this, what separates a brilliant study from a mediocre one, and how to get started.

How teams are actually using this

Screening before deep dives

Run 50 AI-moderated interviews to surface themes. The automatic thematic analysis clusters responses and identifies patterns across all participants. Review the findings, identify the 5-8 participants with the most interesting responses, and invite them back for human-moderated deep dives.

You've got patterns from the AI-moderated study and depth from the human conversations. Best of both, minimal calendar chaos.

Segmentation with confidence

A productivity app needs to understand why different age groups prefer different value propositions. Survey data tells them what people prefer; it can't explain why.

With AI-moderated interviews, they run 60 interviews (20 per age segment) in a matter of days. The thematic analysis breaks down findings by segment automatically. They discover that Gen Z values "chaos management," Millennials want "guilt-free focus time," and Gen X needs "email defence."

Same product. Completely different messaging per audience. A survey couldn't give them that level of insight. Neither could 8 interviews split across three segments.

Catching churn before it happens

Exit interviews only capture people who've already left. Too little, too late.

With AI moderation, you proactively reach at-risk customers, identified from usage data, before they've made up their mind. Interview 50+ of them in a week, understand what's actually driving the risk, and intervene while there's still time.
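The "identified from usage data" step doesn't need to be sophisticated to be useful. Here's a minimal sketch of rule-based at-risk screening for interview outreach; the metric names and thresholds are illustrative assumptions, not Askable functionality, and you'd tune them against your own churn history:

```python
# Illustrative sketch: flag at-risk customers from basic usage metrics.
# Field names and thresholds are hypothetical assumptions -- calibrate
# against your own churn data before using anything like this.

def is_at_risk(customer: dict) -> bool:
    """Simple rule-based screen for proactive interview outreach."""
    declining_logins = customer["logins_last_30d"] < 0.5 * customer["logins_prior_30d"]
    low_feature_use = customer["core_features_used_30d"] <= 1
    recent_support_pain = customer["support_tickets_30d"] >= 3
    # Any two signals together -> worth a proactive interview invite.
    return sum([declining_logins, low_feature_use, recent_support_pain]) >= 2

customers = [
    {"id": "a1", "logins_last_30d": 2, "logins_prior_30d": 20,
     "core_features_used_30d": 1, "support_tickets_30d": 0},
    {"id": "b2", "logins_last_30d": 18, "logins_prior_30d": 19,
     "core_features_used_30d": 4, "support_tickets_30d": 1},
]

at_risk = [c["id"] for c in customers if is_at_risk(c)]
print(at_risk)  # -> ['a1']
```

Even a crude screen like this beats waiting for the cancellation email: it gives you a shortlist of people worth interviewing while they're still customers.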

A product lead investigating churn runs 50 AI-moderated interviews. Three distinct drivers emerge in the thematic analysis, ranked by frequency and strength of evidence. Integration problems are the clear outlier: frequent, specific, and unprompted. Six participants described them in detail. The team brings a researcher in for deep dives with those six.

The AI interviews show what's happening. The human interviews show how to fix it. Without the 50 up front, they'd have run 10, probably missed the integration pattern entirely, and built a retention strategy around the wrong problem.

Exit interviews are post-mortems. This is preventive care.

Continuous feedback programmes

Rather than annual research sprints, teams are running ongoing AI-moderated studies with 15-20 participants per week, with findings accumulating in the research repository over time.

When a stakeholder asks "what do customers think about pricing?" you can query months of research instantly with Ask AI. No more "we should probably do a study on that." No more starting from zero every time.

What good (and not-so-good) study setup looks like

The difference between a brilliant AI-moderated study and a mediocre one usually comes down to setup. The AI moderator is only as good as the direction you give it. Here's what separates the two.

A well-set-up study

Research goal: "Identify specific moments where users feel confused or frustrated during the onboarding process, and understand what information they need to feel confident moving forward."

Why it works: It's specific (onboarding process), focused on a clear outcome (confusion and frustration moments), and action-oriented (what information would help). The AI knows exactly what territory to explore and can ask smart follow-ups when participants mention something relevant.

Context provided: Details about what the product does, who the users are, what stage of onboarding you're investigating, and any specific flows or features you want to understand. The more relevant background you give, the more informed the AI's questions become.

Conversation depth: Moderate. Balanced exploration without exhausting participants.

A poorly-set-up study

Research goal: "Get feedback on the app."

Why it doesn't work: The AI has no idea what to focus on. Feedback on what? The design? The features? The pricing? The onboarding? Everything? It'll ask generic questions and you'll get generic answers.

Context provided: None, or something vague like "We're a tech company."

Result: The AI is flying blind. It can't ask informed follow-ups because it doesn't know what your product does, who uses it, or what you're trying to learn. Garbage in, garbage out.

More examples

Too vague: "Learn about user preferences."

Much better: "Discover why users choose between monthly vs. annual subscription plans, and identify the key factors that influence their decision at the pricing page."

Too vague: "Understand pain points."

Much better: "Identify friction points in the checkout flow and understand why users abandon at the payment step."

Too vague: "Get feedback on the app."

Much better: "Understand which features in the dashboard are most valuable for managing multiple projects, and identify any missing functionality that would improve workflow efficiency."

Two ways to set up your study

AI-Guided Setup. If you're not sure where to start, this option walks you through it. You provide your research focus, and the AI helps generate goals. It also prompts you for context in four sections: Company, Product/feature/service, Interview audience, and Specific questions or assumptions. Good for first-timers or when you want a structured starting point.

Start from Scratch. For researchers who know exactly what they want. You write your own research goals and provide context in freeform, whatever information will help guide the conversation. More flexibility, but requires you to know what good looks like.

Either path works. The key is being specific about what you want to learn and giving the AI enough context to ask intelligent questions.

What you get back

Thematic analysis: automatic, evidence-backed

Once participants complete their interviews, the thematic analysis kicks in automatically. No pasting transcripts into another tool. No manual coding.

The AI identifies patterns across all participant responses and groups them into themes. Each theme shows how many participants mentioned it, with every insight linking directly to the supporting quotes. Click any finding to see the exact words, who said them, and whether the statement was volunteered or prompted. Insights are scored on evidence strength: how many quotes support this pattern, how specific participants were, whether they brought it up unprompted or were led to it. High confidence means act on it. Low confidence means dig deeper.
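The evidence-strength idea above can be pictured as a weighted blend of breadth (how many participants), specificity, and spontaneity (unprompted mentions). The sketch below is a toy mental model with made-up weights, not Askable's actual scoring:

```python
# Toy evidence-strength score for a theme. Purely illustrative: the
# weights and inputs are assumptions, not the platform's real model.

def evidence_strength(quotes: list[dict], total_participants: int) -> float:
    """Score 0-1 combining breadth, specificity, and spontaneity."""
    if not quotes:
        return 0.0
    breadth = len({q["participant"] for q in quotes}) / total_participants
    specificity = sum(q["specific"] for q in quotes) / len(quotes)
    unprompted = sum(not q["prompted"] for q in quotes) / len(quotes)
    # Breadth weighted highest: a pattern many people hit matters most.
    return round(0.5 * breadth + 0.25 * specificity + 0.25 * unprompted, 2)

quotes = [
    {"participant": "p1", "specific": True, "prompted": False},
    {"participant": "p2", "specific": True, "prompted": False},
    {"participant": "p3", "specific": False, "prompted": True},
]
print(evidence_strength(quotes, total_participants=10))  # -> 0.48
```

Whatever the exact formula, the practical takeaway is the same: a theme backed by many specific, unprompted quotes deserves more weight than one assembled from a few vague, prompted answers.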

Your researchers review the AI-generated themes, add context and strategic interpretation, and build final recommendations. The AI handles 80% of the pattern recognition and quote extraction. Your team focuses on what the patterns mean.

Ask AI: slice the data any way you want

Thematic analysis gives you the structured output. Ask AI lets you interrogate it.

Ask any question in natural language and get answers backed by evidence. "What did participants say about pricing?" "How did power users describe this differently from new users?" "Show me all frustration moments in the checkout flow." Every answer comes with source quotes, confidence scores, and participant context, not summaries pulled from thin air.

Filter by demographics, user type, sentiment, or any screening criteria you set up. Go deep on a single theme or go wide across multiple studies. When a stakeholder asks a question you didn't anticipate, you don't need to re-run the analysis. You just ask.

And because every AI-moderated study feeds into the same repository, Ask AI doesn't stop at one project. Query six months of research at once. Compare findings across studies. Build institutional memory that compounds instead of resetting to zero every time.

Getting started

If you're ready to try this:

Start small. Run a pilot with 15-20 participants before scaling to 50. Review the transcripts and the thematic analysis output. Did the AI ask the right follow-ups? Were there gaps? Adjust your research goals and context, then scale up.

Be specific. The clearer your research goals, the better the conversation. "Identify friction points in checkout flow" beats "understand user experience" every time.

Provide context. Tell the AI about your company, product, and interview audience. It uses this background to ask smarter, more relevant questions. The difference between a well-briefed AI moderator and one flying blind is enormous.

Review the source evidence. The thematic analysis is powerful, but always click through to the actual quotes. Your interpretation is what makes insights actionable. The confidence scores tell you which findings to trust and which to investigate further.

Try a hybrid approach. Use AI-moderated interviews for breadth, human moderation for depth. Start with a study you'd normally run with 10-15 human-moderated interviews, scale it to 30-40 with AI moderation, and see what patterns emerge that you'd have missed at smaller sample sizes. Then do the human deep dives on what matters most.

Quick answers

How many participants can I include? Study Track Lite: up to 15 participants. Full Study Track: up to 50 participants. Need more? Run multiple studies in parallel. Results all flow into the same repository for unified analysis.

What languages are supported? 15+ languages including English, Spanish, French, German, Portuguese, Italian, Japanese, Mandarin Chinese, Korean, Arabic, and more. Interviews happen in participants' native language with auto-translation for analysis.

How long do the interviews take? You control this through depth settings. Shallow: up to 2 minutes per research objective. Moderate: up to 4 minutes (the sweet spot for most teams). Deep: up to 6 minutes per objective. Most teams land on 10-15 minute interviews with 4-5 research objectives at moderate depth.

The AI reads the room. If someone's giving rich, detailed responses, it won't keep asking just to fill time. But if answers are vague, it'll probe deeper, within the time cap you set.

Can I combine AI interviews with other study types? AI-Moderated Studies are their own study type. You can control the order in which research goals are explored within the AI interview, but they're separate from prototype tests or surveys. Many teams run an AI-moderated study for the "why" and a separate unmoderated study for task-based evaluation.

What's the turnaround time? Typical turnaround is 24-48 hours for 50 participants. No scheduling coordination required. Participants complete on their own time.

Go time

You've got the why (Part 1), you've got the how (this article), and you've got the practical examples to see what's possible.

The best way to learn is to try it: pick one real study from your roadmap, run it with AI moderation, and judge the results for yourself.

Get started with AI-Moderated Studies →

New to Askable? AI-Moderated Studies are available as part of our self-serve platform. Book a demo →

