Does AI assistance remove a core learning channel through error work?
When AI reduces both the number of errors learners encounter and their need to resolve those errors independently, does it eliminate the productive struggle that builds deep skill? This note explores whether error work is essential to learning.
Cognitive science on skill acquisition has long identified productive failure as a primary learning channel. The learner attempts a task, encounters an error, diagnoses what went wrong, and works through the correction. The diagnosing and correcting are where the learning happens — they require active engagement with the underlying structure of the problem, which builds the mental model that future attempts can deploy.
The Skill Formation study captures this empirically. Participants in the control group (no AI) encountered more errors than the AI group — including syntax errors and library-specific errors. They also independently resolved these errors. The result was higher skill formation as measured by post-task quizzes. The control group did more error-encountering, more error-resolving, and learned more. The pattern is consistent with the productive-failure literature: errors are not friction to be eliminated; they are the substrate of learning.
AI assistance disrupts both ends of this channel. On the encounter side, AI-generated code typically does not contain the syntax errors a learner would produce, so the learner never encounters them. On the resolution side, when errors do occur, the AI-assisted worker can ask the AI to debug, replacing the diagnosis-and-correction work that produces learning with a delegation that produces an answer without the work. Both ends of the channel narrow.
This explains a counterintuitive empirical finding: the participants who did the most debugging-with-AI scored worst on the post-task quiz (Iterative AI Debugging cluster). One might have expected debugging engagement to produce learning. The opposite happened — because the engagement was with the AI, not with the underlying problem. The cognitive substrate that produces learning was bypassed.
The diagnostic implication for AI deployment in learning contexts is significant. AI-assisted learning environments should preserve some error encounters and require some independent resolution if learning is the goal. The simplest version is to constrain AI assistance during the early phase of a task, when errors are most pedagogically valuable, and allow it later, when errors are mostly friction. But this requires design intentionality that current AI integrations rarely have. Most AI-in-learning deployments treat all errors as friction to remove, which is the wrong model.
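The phased-constraint idea can be made concrete. The following is a minimal sketch, not a description of any existing system: all names (`AssistancePolicy`, `allow_ai_debugging`, the thresholds) are hypothetical, and the specific gating rule (a progress cutoff plus a minimum number of independent attempts) is one assumption about how "constrain early, allow later" might be implemented.

```python
from dataclasses import dataclass

@dataclass
class AssistancePolicy:
    """Hypothetical phased gate for AI debugging help: block delegation
    early in a task, when errors are most pedagogically valuable, and
    allow it later, when remaining errors are mostly friction."""
    early_phase_cutoff: float = 0.5  # fraction of the task treated as "early"
    min_solo_attempts: int = 2       # independent tries required before AI help

    def allow_ai_debugging(self, progress: float, solo_attempts: int) -> bool:
        # progress: learner's estimated fraction of the task completed (0.0-1.0)
        if progress >= self.early_phase_cutoff:
            # Late phase: errors are friction, delegation is allowed.
            return True
        # Early phase: the learner must attempt independent resolution first.
        return solo_attempts >= self.min_solo_attempts

policy = AssistancePolicy()
print(policy.allow_ai_debugging(progress=0.2, solo_attempts=0))  # False: early, no tries
print(policy.allow_ai_debugging(progress=0.2, solo_attempts=3))  # True: help earned
print(policy.allow_ai_debugging(progress=0.8, solo_attempts=0))  # True: late phase
```

The design choice worth noting is that the gate does not remove AI help outright; it sequences it after the diagnostic work that the productive-failure literature identifies as the learning-bearing step.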
The strongest counterargument: AI-explained errors might be as pedagogically useful as self-resolved errors. The empirical data suggests not — the Skill Formation study found AI-assisted debugging clustered with the lowest learning outcomes. Self-resolution appears to be doing work that AI explanation does not replicate.
Source: How AI Impacts Skill Formation
Related concepts in this collection

- Does AI assistance actually harm the way developers learn?
  When developers use AI tools while learning new programming concepts, does it impair their ability to understand code, debug problems, and build lasting skills? Understanding this matters for how we deploy AI in education and training.
  (The parent finding; this note names a specific mechanism within it.)
- Does AI really save time, or just change how we spend it?
  Explores whether AI's time savings are real or illusory: whether the time freed from direct work simply shifts to AI interaction tasks like prompt composition and output evaluation, with different cognitive and learning consequences.
  (A companion claim about how time-shift relates to channel-removal.)
Original note title
encountering errors and resolving them independently is a learning channel that AI removes