79% of candidates exaggerate their AI skills. Here's what the data says.
The Pluralsight 2025 AI Skills Report surveyed over 1,200 professionals who rated themselves as "proficient" or "advanced" in AI tools. When those same professionals completed practical AI skills test scenarios, 79% scored below the benchmark for competent AI usage.
That number has implications for every hiring team running interviews in 2026.
Why the gap is so wide
Self-assessment bias exists for every skill, but AI has a specific version of this problem that makes the gap wider than usual.
Most professionals have used ChatGPT. Many use it daily. That daily exposure creates a feeling of proficiency that doesn't map to actual skill. Using a tool frequently and using it well are different things, and for AI tools specifically, the gap between those two states is invisible to the user. A prompt that produces a plausible-sounding response feels like a success, even when the output is generic, partially wrong, or missing the context that would have made it actually useful.
There's also no shared standard for what "AI proficient" means. When a candidate writes "proficient in AI tools" on a resume, they might mean they use ChatGPT for email drafting. Or they might mean they've built complex multi-step prompting workflows that feed into their actual deliverables. Both candidates call it proficiency. The word carries no specific information.
And candidates know that AI skills are what hiring managers want to hear. LinkedIn's 2026 Future of Work report found that AI-related skills were the fastest-growing keywords on profiles across every industry. When the market rewards claiming a skill, more people claim it.
What exaggeration looks like in practice
Three patterns show up consistently when candidates who self-report AI proficiency are put in front of an actual AI skills test.
The first is undirected prompting. The candidate pastes a large block of text into the AI with a vague instruction like "help me with this" or "summarize." They don't specify audience, format, length, or purpose. The AI returns something generic, and the candidate treats it as a finished output.
The second is missing errors. When AI generates a confident-sounding but incorrect response (a plausible statistic that doesn't exist, or a recommendation that contradicts the scenario's constraints), these candidates don't catch it. They treat AI output as authoritative rather than as a draft requiring verification.
The third is single-shot usage. Strong AI users iterate. They send a prompt, evaluate the response, identify what's missing, and refine their next request based on what they learned. Weaker candidates send one prompt, take the first response as final, and move on. They're treating AI as a search engine rather than a collaborative tool.
What this means for hiring teams
If your AI skills evaluation depends on resume keywords, interview questions, or self-assessment surveys, you are mostly measuring confidence. A candidate who says "I use AI every day to accelerate my work" in an interview is telling you about frequency, not quality. The 79% figure suggests that frequency and quality have very low correlation.
The interview question approach has a specific failure mode here. When you ask "How do you use AI in your work?" candidates describe their best interaction. They don't mention the times they accepted a wrong answer, pasted in sensitive data without thinking, or spent twenty minutes on a prompt that a two-sentence request would have handled. You're getting a highlight reel, and there's no way to verify it.
The practical fix
The fix is not better interview questions about AI. It's structured observation.
Give candidates access to an AI tool during the assessment. Give them a realistic task. Watch what they actually do. Do they set context before prompting? Do they verify what comes back? Do they iterate when the first response falls short? Do they know when to stop using AI and rely on their own judgment?
These behaviors can't be faked in real time the way a resume bullet point or an interview answer can. And they're exactly the behaviors that separate candidates who genuinely work well with AI from candidates who've simply used it a lot.
The 79% gap is going to widen as AI tools become more mainstream and the pool of people claiming proficiency grows. The assessment approach stays the same. The signal was always in the behavior, not the self-report.
See AI skills assessments in action
SolveLab builds custom assessments tailored to your roles. Try it free — no credit card needed.