Where do vibe coding students actually spend their debugging time?
When novices use AI coding tools, do they engage with the code itself, or do they primarily test the prototype? Understanding where students focus reveals how AI-assisted coding shapes learning behavior.
The vibe coding study quantifies a behavioral pattern with design implications for how AI coding tools shape developer practice. Across 19 students using Replit, the dominant interaction was Interacting with Prototype at 63.6% of all labeled actions: students testing what the AI built, not reading or modifying what it produced. Writing a Prompt followed at 20.6%, Managing Workflow at 8.4%, and Engaging with Code/log at only 7.4%.
Within prototype interactions, 91.7% were testing common cases, 6.1% were refreshes (driven by technical limitations, not deliberate debugging), and only 2.2% were edge-case tests. No student wrote or executed unit tests during the study. This is feature-level testing of UI-visible behavior, not structured validation. The pattern suggests the vibe-coding workflow inherently keeps students at the surface: they cycle through basic interactions and troubleshooting rather than advancing toward comprehensive feature validation.
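To make the common-case-versus-structured-validation distinction concrete, here is a minimal sketch around a hypothetical bill-splitting feature. The function, values, and tests below are illustrative assumptions, not drawn from the study's apps: the single print call mirrors what 91.7% of prototype interactions looked like (try one ordinary input, eyeball the result), while the unittest class shows the edge-case validation that almost never happened.

```python
# Hypothetical example: split_bill and its behaviors are illustrative,
# not taken from any app in the study.

def split_bill(total: float, people: int, tip_pct: float = 15.0) -> float:
    """Return each person's share of a bill, tip included."""
    if people < 1:
        raise ValueError("need at least one person")
    if total < 0 or tip_pct < 0:
        raise ValueError("total and tip must be non-negative")
    return round(total * (1 + tip_pct / 100) / people, 2)

# Common-case testing (91.7% of prototype interactions): one ordinary
# input, visually checked in the running prototype.
print(split_bill(100.0, 4))  # 28.75 -- looks right, move on

# Structured validation (2.2% edge cases, zero unit tests in the study):
# checks that exercise boundaries the happy-path walkthrough skips.
import unittest

class TestSplitBill(unittest.TestCase):
    def test_common_case(self):
        self.assertEqual(split_bill(100.0, 4), 28.75)

    def test_zero_total(self):     # edge: a free meal splits to zero
        self.assertEqual(split_bill(0.0, 3), 0.0)

    def test_single_person(self):  # edge: no splitting at all
        self.assertEqual(split_bill(80.0, 1, tip_pct=20.0), 96.0)

    def test_invalid_people(self):  # edge: invalid input must fail loudly
        with self.assertRaises(ValueError):
            split_bill(50.0, 0)

if __name__ == "__main__":
    unittest.main()
```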
When students did engage with code, 90.4% of those interactions were reading and interpreting; only 9.6% were direct edits. As one student put it: "Because so much of it was just done by the LLM, I had a lesser understanding of the codebase — rather than what I would do on my own, where I know what each line does." The vibe coding workflow distances students from implementation logic, producing hesitancy to alter AI-generated code.
A subset of students, the restarters, exhibited a different pattern: when bugs proved unresolvable through continued AI prompting, they restarted the entire project rather than continuing to patch. This was not failure but a form of iterative refinement and task decomposition: "asked the Replit to do way too many things... break it down one task at a time."
Cohort differences appeared: introductory programming students wrote more prompts, while software engineering students engaged more with code and logs. Advanced students' prompts were more likely to include relevant feature and codebase context. The implication is that vibe coding is not skill-neutral: how it is used depends heavily on prior coding background, and for novices it produces a workflow optimized for surface debugging rather than code understanding.
The behavioral data substantiates the conceptual distinction drawn in Does vibe coding actually keep humans in the loop?: novices using vibe-coding tools are de facto using agentic tools, with minimal code engagement, restart-rather-than-debug recovery, and surface-level validation. The interface assumes in-the-loop participation that the user behavior does not provide.
Source: Visual GUI Agents
Related concepts in this collection
- Does vibe coding actually keep humans in the loop?
  Vibe coding claims to keep developers steering and validating, but do novices actually engage with code and testing the way the tool design assumes? The gap between intended and actual behavior could compound failures.
  extends: companion paper providing the conceptual definition; this note provides the behavioral evidence that novices drift across the conceptual boundary.
- Does AI assistance remove a core learning channel through error work?
  When AI reduces both the errors learners encounter and their need to resolve errors independently, does it eliminate the productive struggle that builds deep skill? This explores whether error-handling is essential to learning.
  exemplifies: 91% common-case testing and 90% read-not-edit are the precise behavioral signature of AI removing the error-encounter-and-resolution learning channel.
- Does AI assistance actually harm the way developers learn?
  When developers use AI tools while learning new programming concepts, does it impair their ability to understand code, debug problems, and build lasting skills? Understanding this matters for how we deploy AI in education and training.
  exemplifies: low-engagement patterns (passive acceptance, surface validation) dominate vibe coding for novices, exactly the patterns the skill-formation paper identifies as harmful.
- Does AI assistance help workers learn skills for independent work?
  Research tested whether using generative AI on tasks teaches workers skills they can apply later without AI. Understanding this matters for professional development and whether AI use counts as meaningful practice.
  extends: vibe coding produces immediate prototype outputs, but the read-not-edit pattern blocks the codebase understanding that would transfer.
- Does AI assistance always help reasoning or does it carry hidden costs?
  When AI systems intervene during human reasoning tasks, do they uniformly improve performance, or does the disruption to cognitive focus create a hidden tax that could offset their benefits?
  complicates: vibe coding may avoid the flow-cost problem (AI handles long stretches of code production) but at the cost of the cognitive immersion that produces understanding: a different trade-off, a similar outcome.
Original note title
vibe coding students debug at the prototype level not the code level — 91 percent of prototype interactions test common cases and only 7 percent of all interactions touch code