What happens to social order when AI removes ritual constraints?

Classical social theory from Goffman, Giddens, and others explains why AI disrupts the conditions for trust and shared meaning.

Topic Hub · 24 linked notes · 18 sections
V.a Tokenization of Intelligence (Adrian's capstone thesis)

13 notes

Does AI actually commodify expertise, or does it tokenize it?

The standard framing treats AI output like mass-produced commodities, but does AI's contextual, mutable nature fit better with token economics than with commodity theory?

Where does the value of AI output actually come from?

If AI-generated intelligence carries no intrinsic content-value the way physical goods do, what determines whether it's valuable to someone? This explores whether value lives in the token itself or in the receiver.

Is the LLM a tool or a new form of intelligence itself?

Does framing AI as merely delivering pre-existing intelligence miss what's actually happening? This explores whether the model itself constitutes a fundamentally new intelligence-medium with distinct cultural effects.

Can exchange value exist entirely without use value?

Does AI-generated knowledge represent a genuinely new category of goods where exchange-value (market price, social credibility) operates independently of use-value (actual accuracy, practical utility)? This matters because it suggests AI disrupts markets in ways Marx's commodity analysis did not predict.

Why does AI output change with every prompt and context?

Explores whether the variability of AI-generated intelligence across contexts and audiences is a fundamental feature or a flaw to be fixed. Examines what this mutability means for how we should evaluate and understand AI systems.

Is AI fundamentally changing how value gets produced?

Rather than automating commodity production, does AI represent a shift from making identical stockpiled objects to generating contextual tokens on demand? And what makes this genuinely new?

Does Marxist alienation theory explain what AI does to cognitive work?

Marxist alienation frames AI as degrading authentic labor. But does that framework actually describe the shift happening with tokenization, or does it misdiagnose the transformation occurring in intelligence itself?

Does AI abundance actually devalue knowledge itself?

If AI generates vastly more claims than humans can evaluate, does the sheer volume undermine the social processes that normally establish what counts as reliable knowledge? And what would that erosion look like?

Can AI generate knowledge faster than humans can evaluate it?

Explores whether AI-driven content production is outpacing human judgment capacity, mirroring monetary hyperinflation dynamics. Why this matters: understanding this gap reveals whether our evaluation infrastructure can sustain epistemic confidence.

Why do search tools fail against AI-generated content?

Internet search worked for finding needles in haystacks of fixed documents. But AI generates new content on demand with no underlying corpus to search. Does this require fundamentally different solutions?

Why can't search tools handle AI-generated content?

Search infrastructure was built for stable, pre-existing items. AI generates ephemeral content on-demand. Can the indexing tools that solved information overload work when there's nothing stable to index?

What actually backs the value of AI-generated intelligence?

If AI produces intelligence tokens at near-zero cost, what constrains their value and prevents inflation? Exploring whether training data, expert validation, or statistical probability can serve as a genuine backing mechanism.

When do users stop checking whether AI output is actually backed?

What causes users to accept AI-generated content at face value without verifying its basis? Understanding this receiver-side acceptance reveals how intelligence-token systems maintain value despite lacking real backing.

Pass 3 Additions (2026-05-03)

3 notes

Can language models simulate belief change in people?

Current LLM social simulators treat behavior as input-output mappings without modeling internal belief formation or revision. Can they be redesigned to actually track how people think and change their minds?
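
A minimal sketch of the distinction the note draws, in Python and with a made-up credence-update rule (every name below is hypothetical, not taken from any existing simulator): the agent carries an explicit belief state that incoming evidence revises, rather than mapping messages straight to replies.

    # Hypothetical sketch: input-output persona vs. belief-tracking persona.
    from dataclasses import dataclass, field

    @dataclass
    class BeliefState:
        # claim -> credence in [0, 1]
        credences: dict = field(default_factory=dict)

        def revise(self, claim: str, evidence_strength: float) -> None:
            """Nudge credence toward the evidence instead of overwriting it."""
            prior = self.credences.get(claim, 0.5)
            self.credences[claim] = prior + 0.3 * (evidence_strength - prior)

    @dataclass
    class SimulatedPerson:
        beliefs: BeliefState = field(default_factory=BeliefState)

        def respond(self, message: str, claim: str, evidence_strength: float) -> str:
            # A pure input-output simulator would map `message` directly to a reply.
            # Here the reply is conditioned on a belief state that the message updates.
            self.beliefs.revise(claim, evidence_strength)
            credence = self.beliefs.credences[claim]
            stance = "agree" if credence > 0.6 else "unsure" if credence > 0.4 else "disagree"
            return f"I {stance} that {claim} (credence {credence:.2f})."

    person = SimulatedPerson()
    print(person.respond("Here is a strong study.", "the policy works", 0.9))
    print(person.respond("A replication also supports it.", "the policy works", 0.9))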

What should a world model actually be designed to do?

Current AI research treats world models as either video predictors or RL dynamics learners, but what if their real purpose is simulating actionable possibilities for decision-making rather than predicting next observations?

Does autoregressive generation uniquely enable LLM scaling?

Is the autoregressive factorization truly necessary for LLM scalability, or can other generative principles, such as diffusion, achieve comparable performance? This matters because it shapes which architectural paths deserve investment.
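
For reference, the factorization in question is the standard chain-rule decomposition of a sequence's probability into left-to-right conditionals, which autoregressive models learn directly and which diffusion-style models replace with iterative refinement of the whole sequence:

    p(x_{1:T}) \;=\; \prod_{t=1}^{T} p\bigl(x_t \mid x_{<t}\bigr)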
