Psychology and Social Cognition · Language Understanding and Pragmatics · Design & LLM Interaction

Do users truly own the AI-generated content they produce?

When people use AI to create outputs, do they experience genuine authorship and ownership of what's produced, or does the continuous interaction loop create a gap between what they feel and what they claim?

Note · 2026-04-19 · sourced from Psychology Users
Related: Why do AI systems fail at social and cultural interpretation? · How well do language models understand their own knowledge?

Research on agency shows that authorship is often inferred from outcomes rather than directly accessed. People construct post-hoc narratives of their contribution based on what was produced, not based on accurate recall of who did what during production. In human-AI collaboration, this dissociation becomes structural: users may not fully experience ownership of generated content at a cognitive level yet still declare authorship at a reflective or social level.

This is not dishonesty. The user genuinely cannot tell where their contribution ends and the system's begins, because the interaction loop is continuous and the intermediate steps are opaque. The post-hoc narrative of authorship ("I prompted it, I refined it, I selected this version") feels true even though the generative heavy lifting was done by the system. The user's experience of the process is partial and filtered, but the claim of authorship is constructed from the complete output.

Since the note Does AI writing collapse the author-to-public relationship?, the vault already tracks the audience-side problem: AI writing addresses the wrong recipient. This note adds the author-side complement: the author's self-model is also compromised. The author doesn't just write for the wrong audience; they also fail to perceive their own role in the writing accurately.

The dissociation has practical consequences for professional signaling. Users report skills based on their ability to produce outputs with LLM assistance rather than independently acquired expertise, resulting in inflated representations of competence that do not transfer to unaided performance. The inflation is not strategic deception but genuine confusion about what they can do — because the feedback from AI-assisted work consistently signals competence.


Source: Psychology Users · Paper: The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows

Original note title: experienced authorship and attributed authorship dissociate in AI-mediated work; users declare authorship at a reflective level without cognitive ownership at a process level