Do users truly own the AI-generated content they produce?
When people use AI to create outputs, do they experience genuine authorship and ownership of what's produced, or does the continuous interaction loop create a gap between what they feel and what they claim?
Research on agency shows that authorship is often inferred from outcomes rather than directly accessed. People construct post-hoc narratives of their contribution based on what was produced, not based on accurate recall of who did what during production. In human-AI collaboration, this dissociation becomes structural: users may not fully experience ownership of generated content at a cognitive level yet still declare authorship at a reflective or social level.
This is not dishonesty. The user genuinely cannot tell where their contribution ends and the system's begins, because the interaction loop is continuous and the intermediate steps are opaque. The post-hoc narrative of authorship feels true — "I prompted it, I refined it, I selected this version" — even though the generative heavy lifting was done by the system. The user's experience of the process is partial and filtered, but the claim of authorship is constructed from the complete output.
With "Does AI writing collapse the author-to-public relationship?", the vault already tracks the audience-side problem: AI writing addresses the wrong recipient. This note tracks the author-side complement: the author's self-model is also compromised. The author doesn't just write for the wrong audience — they don't accurately perceive their own role in the writing.
The dissociation has practical consequences for professional signaling. Users report skills based on their ability to produce outputs with LLM assistance rather than independently acquired expertise, resulting in inflated representations of competence that do not transfer to unaided performance. The inflation is not strategic deception but genuine confusion about what they can do — because the feedback from AI-assisted work consistently signals competence.
Source: Psychology Users Paper: The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows
Related concepts in this collection
- Does AI writing collapse the author-to-public relationship?
  When AI generates text optimized for a prompter's satisfaction rather than a public audience, what happens to the core practice of writing for readers you don't know? This explores whether AI reorganizes the structural relationship between author, text, and public. Relation: audience-side structural distortion; this note is the author-side complement.
- Do AI-assisted outputs fool users about their own skills?
  When people use AI tools to produce high-quality work, do they mistakenly believe they personally possess the skills that generated it? This matters because such misattribution could mask genuine skill loss and prevent corrective action. Relation: the parent phenomenon.
- Does AI assistance help workers learn skills for independent work?
  Research tested whether using generative AI on tasks teaches workers skills they can apply later without AI. Understanding this matters for professional development and whether AI use counts as meaningful practice. Relation: the non-transfer finding is predicted by the authorship dissociation — if competence was never internally grounded, it cannot transfer.
Original note title: experienced authorship and attributed authorship dissociate in AI-mediated work — users declare authorship at a reflective level without cognitive ownership at a process level