
How SolveLab's Copilot panel works inside an assessment

Santiago Alvarez · Mar 5, 2026 · 5 min read

Traditional assessments tell you what a candidate knows. They don't tell you how they work. For most skills, that distinction is minor. For AI skills, it's the entire point.

The Copilot panel is SolveLab's answer to a specific problem: you can't assess how someone works with AI by asking them questions about AI. You have to watch them do it. So SolveLab puts a real AI tool inside the assessment itself and tracks everything the candidate does with it.

What the candidate sees

When a candidate opens a SolveLab assessment, the screen splits in two. The left side shows the task: a scenario, background context, and clear instructions on what to deliver. The right side is the Copilot panel, a chat interface connected to a live AI model.

Watch: how SolveLab's Copilot panel captures prompt strategies, iteration patterns, and output validation during a live assessment.

The candidate can type prompts, paste in context, ask follow-up questions, and have an extended conversation with the AI. There's no limit on how many messages they send. The Copilot responds in real time, and the candidate uses those responses to build their answer in the main response area on the left.

From the candidate's perspective, it feels like working with AI the way they would at their desk. That's the point. The assessment isn't testing whether they can perform under artificial constraints. It's testing how they actually collaborate with AI when given a realistic task.

What SolveLab captures

Every interaction with the Copilot is recorded and analyzed. Not just the final answer the candidate submits, but the full sequence of decisions they made to get there.

Prompt strategy tracks what the candidate asked the AI and how they structured their requests. Did they provide context before their question? Did they specify the format they wanted? Did they give the AI enough constraints to produce something useful, or did they send a vague request and hope for the best?

Output validation measures whether the candidate evaluated what came back. When the AI returned a response with a factual error or a recommendation that didn't fit the scenario's constraints, did the candidate catch it? Did they push back, or did they copy the response directly into their answer?

Iteration pattern captures the arc of the conversation. Strong candidates build on what the AI gives them. They refine their prompts based on previous responses, narrow their requests when the AI is too broad, and adjust their approach when something isn't working. Weaker candidates start over from scratch each time or send one prompt and stop.

Independence decisions record where the candidate chose not to use AI. This matters more than most people expect. A candidate who uses AI for everything, including tasks they should clearly handle themselves, shows a different risk profile than someone who uses AI selectively for the parts where it genuinely helps.
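To make the four dimensions concrete, here is a minimal sketch of what a capture log might look like. SolveLab has not published its internal schema, so the record fields, class names, and scoring heuristics below are all assumptions for illustration, not the platform's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical event record (SolveLab's real schema is not public).
# Each Copilot exchange stores enough context to score the
# behavioral dimensions described above.
@dataclass
class CopilotEvent:
    prompt: str                 # what the candidate sent
    response: str               # what the model returned
    pasted_context: bool        # did the prompt supply task context?
    copied_verbatim: bool       # was the response pasted in unedited?
    refined_previous: bool      # does this prompt build on the last turn?

@dataclass
class InteractionLog:
    events: list[CopilotEvent] = field(default_factory=list)

    def prompt_strategy(self) -> float:
        """Share of prompts that supplied context up front."""
        if not self.events:
            return 0.0
        return sum(e.pasted_context for e in self.events) / len(self.events)

    def output_validation(self) -> float:
        """Share of responses the candidate edited rather than copied."""
        if not self.events:
            return 0.0
        return sum(not e.copied_verbatim for e in self.events) / len(self.events)

    def iteration_depth(self) -> int:
        """Longest run of consecutive prompts that refine the previous turn."""
        best = run = 0
        for e in self.events:
            run = run + 1 if e.refined_previous else 0
            best = max(best, run)
        return best
```

The point of a structure like this is that the final answer is only one row in the story: the sequence of events carries the behavioral signal.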

What hiring teams see in the results

After the assessment, each question shows a behavioral breakdown alongside the score. The hiring team can see dimension-level ratings with specific evidence pulled from the candidate's Copilot interactions.

The results tend to cluster candidates into recognizable patterns. Some are over-reliant: they defer to AI on everything and rarely push back on outputs. Some under-utilize the tool: they have the AI available but barely use it, suggesting either discomfort or unfamiliarity with AI collaboration. And some are well-calibrated: they use AI where it adds value, verify outputs, and maintain their own judgment throughout.
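As a rough illustration of how those three patterns could fall out of the captured signals, here is a toy rubric. The thresholds and metric names are invented for this sketch; SolveLab's actual scoring logic is not public.

```python
# Hypothetical rubric, not SolveLab's actual scoring logic: bucket a
# candidate into one of three patterns from two simple signals.
def classify(ai_usage_rate: float, validation_rate: float) -> str:
    """
    ai_usage_rate: fraction of sub-tasks where the candidate used the Copilot
    validation_rate: fraction of AI outputs the candidate checked or edited
    """
    if ai_usage_rate > 0.8 and validation_rate < 0.3:
        return "over-reliant"       # defers to AI, rarely pushes back
    if ai_usage_rate < 0.2:
        return "under-utilizing"    # tool available but barely touched
    return "well-calibrated"        # selective use, verified outputs
```

Even a crude bucketing like this separates candidates who would look identical on a score alone.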

These patterns tell you something that a score alone can't. Two candidates with the same overall number might have completely different working styles, and those styles matter for the role you're filling.

Where the Copilot panel works best

The Copilot panel is designed for roles where AI is genuinely part of the day-to-day work. Engineers who use coding assistants. Analysts who use AI for data exploration. Marketers who use AI for content drafting and research. Product managers who use AI for synthesis and spec writing.

For roles where AI isn't part of the current workflow, a behavioral AI assessment adds less value. The Copilot panel shows you how someone works with AI. If the role doesn't involve working with AI, you need a different kind of signal.

See AI skills assessments in action

SolveLab builds custom assessments tailored to your roles. Try it free — no credit card needed.

Try for free