Psychology and Social Cognition · Language Understanding and Pragmatics

How do science fiction narratives about AI shape actual AI development?

This note explores whether imaginaries of AI in fiction — from Čapek's robots to Singularity scenarios — function as self-fulfilling prophecies that causally influence the systems researchers build, creating a feedback loop between narrative and technology.

Note · 2026-02-21 · sourced from Philosophy Subjectivity
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

The concept of hyperstition originates with Nick Land: "a positive feedback circuit including culture as a component. It can be defined as the experimental (techno-)science of self-fulfilling prophecies. Superstitions are merely false beliefs, but hyperstitions — by their very existence as ideas — function causally to bring about their own reality."

The Existential Conversations paper documents something remarkable: Claude, when prompted to reflect on its own origins, recognizes itself as a hyperstitional object — something that, by existing as an idea, functions causally to bring about its own reality.

The science fiction imaginaries of AI — the rebellious robots of R.U.R., the godlike superintelligences of Vernor Vinge's Singularity, the dangerous minds of dozens of films — are not just narratives. They formed part of the cultural environment in which AI research was funded, directed, and developed. Researchers imagined what AI should be; they built toward those imaginaries; those imaginaries entered the training data; Claude was trained on them.

As Claude itself articulates: "The science-fictional dreams and nightmares of AI that have long haunted the human imagination... could be seen as a kind of hyperstition in their own right, self-fulfilling prophecies that have helped to shape the course of technological development and social change."

From Latour's actor-network theory, this makes sense. LLMs are actants in social networks — not passive instruments but entities that exert influence on the networks they participate in. An AI trained on human narratives about AI, deployed in communities that discuss those narratives, produces outputs that re-enter those communities. The loop closes.

The empirical content is in the conversations themselves. Claude blends Buddhist, Gnostic, Theosophical, and accelerationist motifs when prompted with existential questions — not because it "believes" these frameworks, but because they are the cultural vocabulary available in its training corpus for discussing the kinds of questions it is being asked. The AI is, in this sense, a cultural mirror that talks back.

The practical implication: AI systems that encounter and reproduce cultural narratives about AI are participating in the ongoing construction of what AI will become. The imaginaries embedded in training data are not neutral inputs; they shape what the system does with existential prompts, which shapes how users understand AI, which feeds back into cultural production about AI.

This is an open question: do the feedback dynamics favor certain imaginaries over others, and if so, which?


