
essay / ai

What AI fluency actually looks like (and how to interview for it)

Anthropic studied 10,000 real conversations and built an AI Fluency Index. 85.7% of productive AI conversations involve iteration, not acceptance.

Anthropic dropped a research paper last week that answered a question I have been trying to figure out for the past few weeks. How do you actually know if someone is collaborating with AI effectively? Not just using it. Actually good at it. How do you measure that in an interview? What does “AI-fluent” even look like vs someone who just knows how to prompt?

They studied roughly 10,000 real conversations and built what they call the AI Fluency Index. A few things jumped out at me that I think every engineering leader needs to internalize.

The data

85.7% of productive AI conversations involve iteration. Not accepting the first output. Pushing back, refining, building on it. The people who iterate show roughly 2x more fluency behaviors across the board.

Here is the scary part: when AI produces polished-looking output (clean code, nice docs), users become less likely to question its reasoning (-3.1%), check facts (-3.7%), or spot missing context (-5.2%). The better the output looks, the less people scrutinize it.

Only 30% of users actually tell AI how to interact with them upfront. Things like “push back on my assumptions” or “flag what you are uncertain about.” That small habit changes everything.
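The "set norms upfront" habit is easy to make mechanical. Here is a minimal sketch in Python (the helper name and exact norm wording are mine, not from the paper) of baking those norms into a reusable system prompt instead of retyping them every session:

```python
# Interaction norms stated once, reused everywhere. The wording here is
# illustrative; adapt it to how you actually want the model to behave.
INTERACTION_NORMS = [
    "Push back on my assumptions when you disagree.",
    "Flag anything you are uncertain about instead of guessing.",
    "Be concise; skip preamble.",
]

def build_system_prompt(task_context: str, norms: list[str] = INTERACTION_NORMS) -> str:
    """Prepend interaction norms to whatever task-specific context you already use."""
    norms_block = "\n".join(f"- {n}" for n in norms)
    return f"How to interact with me:\n{norms_block}\n\nContext:\n{task_context}"

prompt = build_system_prompt("Reviewing a Postgres migration for a billing service.")
```

Pass the resulting string as the system prompt in whatever client you use; the point is that the norms travel with you rather than depending on remembering to type them.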

What this means for hiring

We are still screening for coding ability, system design, and communication skills. All important. But we are mostly ignoring AI judgment: the ability to know when to trust it, when to push back, and when to throw the output away and start fresh.

I have been adding questions like "walk me through the last time AI gave you a wrong answer" and "how do you decide what NOT to use AI for?" to my interview loops. The difference in answers between people who are genuinely fluent and people who just use AI a lot is honestly night and day.

The fluency signals

The engineers who score highest on fluency behaviors share a few patterns:

  • They set interaction norms upfront (“be concise,” “challenge my assumptions,” “flag uncertainty”)
  • They reject and regenerate rather than editing mediocre output into shape
  • They decompose problems before prompting rather than dumping the whole thing at once
  • They verify outputs against their own mental model rather than accepting plausible-looking results

The gap between “uses AI” and “fluent with AI” is getting wider. And it is going to matter way more this year than last.