Design & LLM Interaction

Where do vibe coding students actually spend their debugging time?

When novices use AI coding tools, do they engage with the code itself, or do they primarily test the prototype? Understanding where students focus reveals how AI-assisted coding shapes learning behavior.

Note · 2026-05-03 · sourced from Visual GUI Agents

The vibe coding study quantifies a behavioral pattern with design implications for how AI coding tools shape developer practice. Across 19 students using Replit, the dominant interaction was Interacting with Prototype, at 63.6% of all labels: students testing what the AI built, not reading or modifying what it produced. Writing a Prompt followed at 20.6%, Managing Workflow at 8.4%, and Engaging with Code/log at only 7.4%.

Within prototype interactions, 91.7% were testing common cases, 6.1% were refreshes (driven by technical limitations, not deliberate debugging), and only 2.2% were edge-case testing. No student wrote or executed unit tests during the study. This is feature-level, UI-visible behavior testing, not structured validation. The pattern suggests the vibe coding workflow inherently keeps students at the surface: they cycle through basic interactions and troubleshooting rather than advancing toward comprehensive feature validation.

When students did engage with code, 90.4% of those interactions were reading and interpreting; only 9.6% were direct edits. As one student put it: "Because so much of it was just done by the LLM, I had a lesser understanding of the codebase — rather than what I would do on my own, where I know what each line does." The vibe coding workflow distances students from implementation logic, producing hesitancy to alter AI-generated code.
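The reported label shares can be tabulated as a quick sanity check. A minimal sketch, using the percentages quoted from the study (category names and grouping here are my paraphrase, not the study's exact label taxonomy):

```python
# Shares of interaction labels reported in the vibe coding study.
# "overall" is percent of all labels; the other two are percent
# within their parent category.
overall = {
    "Interacting with Prototype": 63.6,
    "Writing a Prompt": 20.6,
    "Managing Workflow": 8.4,
    "Engaging with Code/log": 7.4,
}
prototype = {"common-case testing": 91.7, "refreshes": 6.1, "edge-case testing": 2.2}
code = {"reading/interpreting": 90.4, "direct edits": 9.6}

for name, dist in [("overall", overall), ("prototype", prototype), ("code", code)]:
    total = sum(dist.values())          # each distribution should cover ~100%
    top = max(dist, key=dist.get)       # dominant activity in the category
    print(f"{name}: total={total:.1f}%, dominant={top} ({dist[top]}%)")
```

Each distribution sums to 100%, and in every category one activity dominates by a wide margin, which is the note's core claim.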

A subset of students — the restarters — exhibited a different pattern: when bugs proved unresolvable through continued AI prompting, they restarted the entire project rather than continuing to patch. This was not failure but iterative refinement and task decomposition: "asked the Replit to do way too many things... break it down one task at a time."

Cohort differences appeared: introductory programming students wrote more prompts, while software engineering students engaged more with code and logs. Advanced students' prompts were also more likely to include relevant feature and codebase context. The implication is that vibe coding is not skill-neutral: how students use it depends heavily on prior coding background, and for novices it produces a workflow optimized for surface debugging rather than code understanding.

The behavioral data substantiates the conceptual distinction in Does vibe coding actually keep humans in the loop?: novices using vibe coding tools are, in practice, using them as agentic tools, with minimal code engagement, restart-rather-than-debug habits, and surface-level validation. The interface assumes in-the-loop participation that actual user behavior does not provide.




Vibe coding students debug at the prototype level, not the code level: 91.7% of prototype interactions test common cases, and only 7.4% of all interactions touch code.