Turning vague AI feedback into structured, actionable insights for designers.
The Problem
Automated Online Feedback Gathering (AOFG) tools are widely used to collect user responses to AI-generated outputs — but the feedback they produce is often too vague or ambiguous to act on. "Was this helpful? Kind of." gives a designer nowhere to go.
ClarifAI intercepts this feedback before it becomes noise. A three-module LLM pipeline filters out irrelevant responses, flags vague or ambiguous comments, and conducts a targeted follow-up dialogue to turn them into structured, actionable insights.
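The overall control flow — filter, flag, then clarify — can be sketched as follows. The three module names come from the project; the classifier stubs below are hypothetical keyword heuristics standing in for the real LLM calls, purely to illustrate how feedback is routed:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the three LLM modules. The real system
# calls an LLM for each decision; these stubs use trivial heuristics
# only so the routing logic is runnable.
def telemetry_is_relevant(comment: str) -> bool:
    return any(w in comment.lower() for w in ("output", "answer", "result"))

def flight_is_specific(comment: str) -> bool:
    return len(comment.split()) >= 8  # placeholder for the real classifier

def capcom_clarify(comment: str) -> str:
    return f"[clarified] {comment}"   # placeholder for the follow-up dialogue

@dataclass
class Insight:
    status: str  # "filtered", "clear", or "clarified"
    text: str

def process_feedback(comment: str) -> Insight:
    """Route one piece of feedback through the three-module pipeline."""
    if not telemetry_is_relevant(comment):
        return Insight("filtered", comment)       # off-topic: never enters pipeline
    if flight_is_specific(comment):
        return Insight("clear", comment)          # already actionable: no dialogue
    return Insight("clarified", capcom_clarify(comment))  # escalate to CapCom
```

Note the ordering: only feedback that is both relevant and insufficiently specific triggers the clarification dialogue, which keeps the interaction cost low for most users.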
How It Works
Users rate and comment on AI outputs through a standard AOFG interface embedded in the web application. This is Stage 2 of ClarifAI's three-stage study platform, which also covers prerequisites and consent (Stage 1) and a discussion board (Stage 3).
The Telemetry module classifies whether each piece of feedback is actually about the AI output. Off-topic or tangential responses are filtered out before they enter the pipeline. In our evaluation, the Telemetry module achieved 100% precision on relevance filtering.
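A minimal sketch of how such a relevance check might be implemented. The prompt wording and `call_llm` wrapper are assumptions (the project does not publish its prompts); a deterministic keyword stub replaces the model call so the example runs offline:

```python
RELEVANCE_PROMPT = """You are a feedback triage assistant.
Decide whether the user comment is about the AI-generated output itself.
Answer with exactly one word: RELEVANT or IRRELEVANT.

Comment: {comment}"""

def call_llm(prompt: str) -> str:
    # Hypothetical stub for the real LLM call: a keyword heuristic
    # applied to the comment embedded at the end of the prompt.
    comment = prompt.rsplit("Comment:", 1)[-1].lower()
    on_topic = any(w in comment for w in ("output", "result", "answer", "response"))
    return "RELEVANT" if on_topic else "IRRELEVANT"

def telemetry_filter(comments: list[str]) -> list[str]:
    """Keep only comments the classifier deems to be about the AI output."""
    return [
        c for c in comments
        if call_llm(RELEVANCE_PROMPT.format(comment=c)) == "RELEVANT"
    ]
```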
The Flight module classifies relevant feedback as either specific enough to act on, or vague/ambiguous. Only feedback that fails this check is escalated to the clarification dialogue, keeping the experience lightweight for users who already gave clear responses. Flight achieves 94%+ accuracy on vagueness and ambiguity detection.
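The Flight check can be sketched as a single classification step gating the escalation. The marker phrases and length threshold below are illustrative placeholders for the model's judgement, not the project's actual criteria:

```python
# Hedging phrases that often signal vague feedback (illustrative only).
VAGUE_MARKERS = ("kind of", "sort of", "i guess", "not sure", "meh")

def flight_label(comment: str) -> str:
    """Placeholder for the real LLM classifier: label feedback as
    'specific' or 'vague'. Marker phrases and very short comments
    stand in for the model's judgement here."""
    text = comment.lower()
    if any(m in text for m in VAGUE_MARKERS) or len(text.split()) < 5:
        return "vague"
    return "specific"

def needs_clarification(comment: str) -> bool:
    # Only vague/ambiguous feedback is escalated to the CapCom dialogue.
    return flight_label(comment) == "vague"
```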
The CapCom module engages the user in a short, targeted follow-up conversation. An LLM asks only the questions needed to resolve the vagueness or ambiguity, then produces a structured, machine-readable summary — a specific, categorised insight that designers can act on directly.
Why It Matters
[Figure: Actionability of Feedback by Condition]

Key results:
- Telemetry relevance-filtering precision: 100%
- Flight vagueness/ambiguity detection accuracy: 94%+
- LLM pipeline modules: 3
- Project status: Active