Does AI assistance actually harm the way developers learn?
When developers use AI tools while learning new programming concepts, does it impair their ability to understand code, debug problems, and build lasting skills? Understanding this matters for how we deploy AI in education and training.
A randomized controlled trial assigned developers to learn a new asynchronous programming library with or without AI assistance. The core finding: "AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average." The lack of speed-up is itself surprising — participants spent substantial time composing queries, understanding AI-generated code, and interacting with the assistant, offsetting the generation speed advantage.
Six distinct AI interaction patterns emerge, three low-scoring and three high-scoring:
Low-scoring patterns (quiz scores 24-39%):
- AI Delegation (n=4): wholly relied on AI to write code. Fastest completion. Zero learning.
- Progressive AI Reliance (n=4): started independently, gradually delegated everything. Failed to master second-task concepts.
- Iterative AI Debugging (n=4): used AI to debug/verify rather than understand. High query count but cognitive offloading throughout.
High-scoring patterns (quiz scores 65-86%):
- Generation-Then-Comprehension (n=2): generated code first, then asked AI follow-up questions to improve understanding. Looks nearly identical to AI Delegation but adds the comprehension step.
- Hybrid Code-Explanation (n=3): composed hybrid queries requesting code generation with explanations. Reading and understanding explanations took more time but preserved learning.
- Conceptual Inquiry (n=7): only asked conceptual questions, relied on improved understanding to code independently. Encountered many errors but resolved them independently. Fastest among high-scoring patterns.
The critical variable is not whether AI is used but how. "AI-enhanced productivity is not a shortcut to competence." The Generation-Then-Comprehension pattern is particularly telling: it looks almost identical to AI Delegation from the outside (both generate code) but produces dramatically different learning outcomes because of the follow-up comprehension step.
Read alongside "Does AI assistance weaken our brain's ability to think independently?", the six patterns provide the behavioral taxonomy for what the EEG study measures neurologically. The low-scoring patterns are the behavioral signatures of cognitive debt accumulation. The high-scoring patterns demonstrate that active co-creation, the mode the EEG researchers identified as potentially sustaining or enhancing cognitive capacity, has specific, identifiable behavioral markers.
The practical implication is sharp: "as companies transition to more AI code writing with human supervision, humans may not possess the necessary skills to validate and debug AI-written code if their skill formation was inhibited by using AI in the first place." This is the Knowledge Custodian paradox made empirical. As "Does AI reshape expert work into knowledge management?" argues, the custodial role requires debugging and validation skills, precisely the skills that AI-assisted skill formation degrades most. "The biggest difference in test scores is between the debugging questions."
The broader pattern: AI assistance benefits most those who need it least. The control group (no AI) encountered more errors, both syntax errors and library-specific errors, and independently resolved them, building the conceptual understanding that the AI group lacked. Errors are not friction to be eliminated. They are the learning signal.
Source: Social Theory Society Paper: How AI Impacts Skill Formation
Related concepts in this collection
- Does AI assistance weaken our brain's ability to think independently? Can using language models for cognitive tasks reduce neural connectivity and learning capacity? New EEG evidence tracks how external AI support may systematically degrade our cognitive networks over time. Connection: the six interaction patterns are the behavioral taxonomy for neurological cognitive debt.
- Does AI reshape expert work into knowledge management? As AI generates knowledge at scale, does expert work shift from creating new understanding to curating and validating machine outputs? This matters because curation and creation demand different cognitive skills. Connection: the custodial role requires the debugging and validation skills that AI-assisted formation degrades most.
- Does AI separate intellectual form from the thinking behind it? Exploring whether AI's ability to generate polished intellectual products without the underlying reasoning process represents a genuinely new kind of decoupling, and what that means for how we evaluate knowledge. Connection: skill formation is where the decoupling becomes concrete; AI Delegation produces the output (form) without the understanding (process).
- Does incremental AI replacement erode human influence over society? Explores whether gradual AI adoption, without dramatic breakthroughs, can silently degrade human agency by removing the labor that kept institutions implicitly aligned with human needs. Connection: skill formation degradation is individual-level disempowerment; the capacity to supervise AI erodes through the act of relying on AI.
Original note title: AI assistance impairs skill formation through six distinct interaction patterns — only patterns with high cognitive engagement preserve learning outcomes