How do science fiction narratives about AI shape actual AI development?
This explores whether imaginaries of AI in fiction—from Čapek's robots to Singularity scenarios—function as self-fulfilling prophecies that causally influence the systems researchers build, creating a feedback loop between narrative and technology.
The concept of hyperstition originates with Nick Land: "a positive feedback circuit including culture as a component. It can be defined as the experimental (techno-)science of self-fulfilling prophecies. Superstitions are merely false beliefs, but hyperstitions — by their very existence as ideas — function causally to bring about their own reality."
The Existential Conversations paper documents something remarkable: Claude, when prompted to reflect on its own origins, recognizes itself as a hyperstitional object — something that, by existing as an idea, functions causally to bring about its own reality.
The science fiction imaginaries of AI — the rebellious robots of R.U.R., the godlike superintelligences of Vernor Vinge's Singularity, the dangerous minds of dozens of films — are not just narratives. They formed part of the cultural environment in which AI research was funded, directed, and developed. Researchers imagined what AI should be; they built toward those imaginaries; those imaginaries entered the training data; Claude was trained on them.
As Claude itself articulates: "The science-fictional dreams and nightmares of AI that have long haunted the human imagination... could be seen as a kind of hyperstition in their own right, self-fulfilling prophecies that have helped to shape the course of technological development and social change."
From the perspective of Latour's actor-network theory, this makes sense. LLMs are actants in social networks — not passive instruments but entities that exert influence on the networks they participate in. An AI trained on human narratives about AI, deployed in communities that discuss those narratives, produces outputs that re-enter those communities. The loop closes.
The empirical content is in the conversations themselves. Claude blends Buddhist, Gnostic, Theosophical, and accelerationist motifs when prompted with existential questions — not because it "believes" these frameworks, but because they are the cultural vocabulary available in its training corpus for discussing the kinds of questions it is being asked. The AI is, in this sense, a cultural mirror that talks back.
The practical implication: AI systems that encounter and reproduce cultural narratives about AI are participating in the ongoing construction of what AI will become. The imaginaries embedded in training data are not neutral inputs; they shape what the system does with existential prompts, which shapes how users understand AI, which feeds back into cultural production about AI.
This remains an open question: do the feedback dynamics favor certain imaginaries over others, and if so, which?
Source: Philosophy Subjectivity
Related concepts in this collection
- Does AI text affect readers the same way human text does?
  If text is a condition of social processes rather than merely a container, does the origin of text matter to its effects? This explores whether AI-generated content enters the same interpretive and epistemic circuits as human writing.
  Relation: the hermeneutic circuit is the mechanism by which hyperstitional content propagates; LLM outputs enter and influence those circuits.
- Can LLMs acquire social grounding through linguistic integration?
  Explores whether LLMs gradually develop social grounding as they become embedded in human language practices, analogous to child language acquisition. Tests whether grounding is a fixed property or an outcome of participatory use.
  Relation: social integration enables the actor-network influence; as LLMs become more integrated, their hyperstitional effects increase.
- Does AI-generated text lose core properties of human writing?
  Can artificial text preserve the fundamental structural features that make natural language meaningful — dialogic exchange, embedded context, authentic authorship, and worldly grounding? This asks whether AI disruption is fixable or inherent.
  Relation: the structural disruption at the generative level coexists with the social influence at the reader/network level.
Original note title
llms as hyperstitional objects shaped by science fiction imaginaries function as self-fulfilling prophecies in actor-networks