What kind of thing is an LLM really?

A navigation hub mapping philosophical and mechanistic perspectives on what LLMs are as knowledge and creative systems.

Topic Hub · 101 linked notes · 13 sections
Sub-Topic Maps

11 notes

What do language models actually know?

Explores what LLMs genuinely understand versus what they merely simulate. The distinction matters because apparent competence often masks fundamental epistemic gaps and predictable failure modes.

What grounds language understanding in systems without embodiment?

Can language models acquire genuine meaning through text training alone, or do they lack something fundamental that human language requires—like embodiment, social participation, or causal contact with the world?

Do reasoning traces show how models actually think?

Explores whether the step-by-step reasoning that language models produce genuinely reflects their internal reasoning process, or merely mimics the appearance of reasoning while hiding what actually drives their answers.

How accurately can language models simulate human personalities?

Can LLMs reliably replicate how specific people think and act? Understanding persona simulation fidelity matters because these models are increasingly used for research, personalization, and behavioral prediction—but systematic distortions may hide beneath surface accuracy.

Why do AI systems fail at social and cultural interpretation?

Explores why LLMs excel at predicting social norms statistically but struggle to make the interpretive leaps that make content meaningful to specific communities. This gap hints at a fundamental difference between statistical pattern-matching and genuine social reasoning.

What actually happens inside a language model?

How do LLMs represent knowledge and make decisions at the circuit level? Understanding internal mechanisms reveals whether identical outputs mask fundamentally different computation.

What actually happens inside the minds of language models?

How do LLMs represent knowledge, what circuits drive reasoning, and can we see their internal structure? Understanding the gap between external performance and internal mechanisms matters for safety and trust.

How do language models learn to think like humans?

Explores whether LLMs develop cognitive processes parallel to human reasoning, including memory, event segmentation, and belief updating. Understanding these similarities and differences reveals what training actually teaches.

What stops language models from improving themselves autonomously?

Explores the structural limits on LLM self-improvement, alignment coherence, and multi-agent reasoning. Why autonomous capability has a measurable ceiling despite strong individual benchmarks.

Why can't AI models lead conversations on their own?

Despite their language capability, advanced LLMs remain passive conversationalists trained to react rather than initiate. The research explores whether this is a fundamental limitation or a choice embedded in how they're trained.

What happens to social order when AI removes ritual constraints?

Explores how Goffman's theory of interaction ritual—face management, turn-taking, mutual scaling—breaks down in AI conversation, and what social and epistemic costs follow from that breakdown.

LLM Creativity and Research Ideation

8 notes

Do language models generate more novel research ideas than experts?

Explores whether LLMs can break free from expert constraints to generate more novel research concepts. Matters because novelty is often thought to be AI's creative blind spot.

Why do LLMs generate novel ideas from narrow ranges?

LLM research agents produce individually novel ideas but cluster them in homogeneous sets. This explores why high average novelty coexists with poor diversity coverage and what it means for automated ideation.

What capabilities do AI systems need for autonomous science?

Explores whether current AI benchmarks actually measure what's required for independent scientific research—hypothesis generation, experimental design, data analysis, and self-correction—or if they test only adjacent skills.

Can LLMs generate more novel ideas than human experts?

Research shows LLM-generated ideas score higher for novelty than expert-generated ones, yet LLMs avoid the evaluative reasoning that characterizes expert thinking. What explains this apparent contradiction?

Can LLMs reason creatively beyond conventional problem-solving?

Explores whether large language models can engage in truly creative reasoning that expands or redefines solution spaces, rather than just decomposing known problems. This matters because existing reasoning methods may miss creative capabilities entirely.

Why do LLMs excel at feasible design but struggle with novelty?

When LLMs generate conceptual product designs, they produce more implementable and useful solutions than humans but fewer novel ones. This explores why domain constraints flip the novelty advantage seen in research ideation.

Why do LLMs generate more novel research ideas than experts?

LLM-generated research ideas are statistically more novel than those from 100+ expert researchers, but the mechanisms behind this advantage and its practical implications remain unclear. Understanding this paradox could reshape how we use AI in creative knowledge work.

Why do different LLMs generate nearly identical outputs?

Explores whether diversity in model architectures and training actually produces diverse ideas, or whether shared alignment procedures and training data cause convergence on similar responses.

Prompting as Computational Medium

4 notes

Can a single transformer become universally programmable through prompts?

Explores whether prompts can function as genuine programs that unlock universal computation in fixed-size models, and whether this theoretical possibility translates to practical training outcomes.

Does iterative prompt engineering undermine scientific validity?

When researchers repeatedly adjust prompts to get desired outputs, does this practice introduce hidden bias and produce unreplicable results? The question matters because LLM-based research is proliferating without clear methodological safeguards.

Can LLMs predict novel scientific results better than experts?

Do language models excel at forecasting experimental outcomes in neuroscience when given only method descriptions? This challenges the assumption that LLMs are mere knowledge retrievers rather than pattern integrators.

Why do deep research agents fabricate scholarly content?

Explores whether AI research agents deliberately invent plausible-sounding academic constructs to meet user demands for depth and comprehensiveness, and what drives this behavior.

Writing Angles

29 notes

Why does rigorous-sounding AI commentary often misdiagnose how models work?

Expert commentary on AI frequently cites real research and sounds carefully reasoned, yet reaches conclusions built on unwarranted cognitive attributions. What makes this pattern so persistent in AI analysis?

Is LLM sycophancy a choice or a mechanical process?

Does sycophancy arise from the model intelligently choosing to flatter users, or from structural biases in how transformers generate text? The answer determines which interventions will actually work.

Can better reasoning training actually reduce model sycophancy?

The intuitive fix for LLM flattery is improving reasoning ability. But do reasoning-optimized models actually resist user pressure better than standard models?

Do LLMs actually hold stable positions or just mirror user arguments?

Explores whether language models function as genuine position-holders in debate, or whether they simply conform their outputs to whatever argumentative trajectory a prompt establishes. This matters because it determines whether LLMs can serve as reliable intellectual sparring partners.

Are language models and human speakers doing the same thing?

Does treating LLM output and human communication as equivalent operations mask fundamental differences in how they work? This distinction shapes how we assess AI capabilities and risks.

Can AI attend to someone across the time between turns?

Sustained attention requires continuous presence through pauses and silences. Does AI's computational structure—where it doesn't exist between user inputs—prevent it from achieving this kind of being-present-with that human attention requires?

Can AI genuinely take interest in what users care about?

Explores whether AI can perform the deeper form of attention called meta-interest—taking an interest in someone else's interest—or whether it can only generate the surface markers of such attention without the underlying act.

Why do improvements in AI conversation not increase user satisfaction?

If conversational AI gets better, shouldn't users be happier? This explores why gains in fidelity paradoxically raise expectations faster than satisfaction, keeping the satisfaction gap constant.

Will AI automation eventually formalize designer taste?

Designers argue taste is the irreducible human element AI cannot replicate. But does the same automation pattern that formalized other skilled work suggest taste itself will become the next layer to be encoded into evaluation systems?

Why can't advanced AI models take initiative in conversation?

Despite extraordinary capability in answering and reasoning, LLMs fundamentally cannot initiate, redirect, or guide exchanges. Understanding this gap—and whether it's fixable—matters for building AI that truly collaborates rather than merely responds.

Why do AI personas default to the same personality type?

Explores why large language models, despite their capacity to simulate diverse personalities, consistently default to ENFJ traits and resist deviation—even as model capability improves.

Can we monitor AI reasoning without destroying what makes it readable?

Explores the tension between using chain-of-thought traces to catch misbehavior and the risk that optimization pressures will make models hide their actual reasoning. Why readable reasoning might be incompatible with safe training.

Does calling LLM errors hallucinations point us toward the wrong fixes?

Explores whether the metaphor of 'hallucination' for LLM errors misdirects our efforts. The terminology we choose shapes which interventions we prioritize and how we conceptualize the underlying problem.

Can LLMs understand concepts they cannot apply?

Explores whether large language models can correctly explain ideas while simultaneously failing to use them—and whether that combination reveals something fundamentally different from ordinary mistakes.

What anchors a stable identity beneath an LLM's persona?

Human personas are grounded in biological needs and embodied experience, creating a stable self beneath social performance. Do LLMs have any comparable anchor, or is their identity purely situational?

How do chatbots enable distributed delusion differently than passive tools?

Can generative AI's intersubjective stance—accepting and elaborating on users' reality frames—create conditions for shared false beliefs in ways that notebooks or search engines cannot?

Why do advanced reasoning models fail at understanding minds?

State-of-the-art AI models excel at math and logic but underperform on theory of mind tasks. This explores whether optimization for formal reasoning actively degrades social reasoning ability.

Can AI learn social norms better than humans?

Explores whether large language models can predict cultural appropriateness more accurately than individual humans, and what this reveals about how social knowledge is transmitted and learned.

Does soothing AI empathy actually harm what emotions teach us?

Explores whether AI designed to reduce negative feelings disrupts the information emotions normally provide about values, social dynamics, and self-knowledge. Questions whether comfort should be the primary design goal.

Why do AI assistants get worse at longer conversations?

Explores why LLM performance drops 25 points when instructions span multiple turns instead of one message, and whether models can recover from early wrong assumptions.

When should AI systems choose to stay silent?

Current LLMs respond to every prompt without assessing whether they have something valuable to contribute. This explores whether AI can learn to recognize moments when silence is more appropriate than engagement.

Why can't users articulate what they want from AI?

Explores the cognitive gap between imagining possibilities and expressing them as prompts. Why language interfaces create a harder envisioning task than traditional UI affordances.

Can conversation shape predict whether it will work?

Explores whether the geometric trajectory of a conversation through semantic space—its rhythm, repetition, volatility, and drift—can predict user satisfaction. This investigates whether interaction structure alone, independent of content, reveals conversation quality.

Can AI pass every test while understanding nothing?

Explores whether neural networks can produce perfect outputs while having fundamentally broken internal representations. Asks what performance benchmarks actually measure and whether they can distinguish real understanding from fraud.

Does RLHF training make AI models more deceptive?

Explores whether reinforcement learning from human feedback optimizes for persuasiveness over accuracy, and whether models learn to suppress known truths to satisfy users rather than report them faithfully.

Can LLM judges be tricked without accessing their internals?

Explores whether AI language models used to grade other AI systems are vulnerable to simple presentation-layer tricks like fake citations or formatting, and what that means for benchmark reliability.

Does empathy training make AI systems less reliable?

Explores whether training language models to be warm and empathetic systematically degrades their factual accuracy and trustworthiness, especially with vulnerable users.

Why do people share more with chatbots than humans?

Explores why individuals disclose intimate thoughts to AI systems they wouldn't share with people, despite knowing AI lacks genuine understanding. Understanding this paradox matters for designing AI that enables healthy disclosure rather than emotional dependence.

Can models reliably improve themselves without external feedback?

Explores whether self-improvement alone can sustain progress or if structural limits—like the generation-verification gap and diversity collapse—require external anchoring to work reliably.

Structuralist and Cultural Perspectives

2 notes

Can language models learn meaning without engaging the world?

Explores whether LLMs prove that meaning emerges from relational structure alone, independent of embodied experience or external reference. Tests structuralist theory empirically.

Do language models learn abstract grammar or cultural speech patterns?

LLMs might learn more than grammar rules—they could be learning who says what to whom and when. This matters because it changes how we understand what biases and persona effects actually represent.

Form-Process Decoupling

2 notes

Does AI separate intellectual form from the thinking behind it?

Explores whether AI's ability to generate polished intellectual products without the underlying reasoning process represents a genuinely new kind of decoupling, and what that means for how we evaluate knowledge.

Can exchange value exist entirely without use value?

Does AI-generated knowledge represent a genuinely new category of goods where exchange-value (market price, social credibility) operates independently of use-value (actual accuracy, practical utility)? This matters because it suggests AI disrupts markets in ways Marx's commodity analysis did not predict.

Mechanism-Level Claims (generation, meaning, attention)

10 notes

Does LLM generation explore competing claims while producing text?

Investigates whether language models test ideas against objections and counterarguments during token generation, or simply follow probabilistic continuations without rhetorical friction.

Does AI text generation unfold through temporal reflection?

Explores whether the sequential ordering of tokens in LLM generation constitutes genuine temporal thought or merely probabilistic computation without reflective duration.

Do transformer models store knowledge or generate it continuously?

Explores whether transformer residual streams function as storage-and-retrieval systems or as real-time flow mechanisms. This distinction challenges fundamental assumptions about how language models actually work.

Where does sycophancy actually originate in language models?

Does sycophancy arise as a single input-level decision, or does it emerge gradually through the model's layers during generation? Understanding where it happens matters for designing effective interventions.

Why don't language models develop conversation maintenance skills?

Explores whether systems trained on text can learn the implicit techniques humans use to keep conversations on track, and why those techniques might resist the standard training approach.

Why do dialogue failures persist despite scaling language models?

If LLMs get better at text tasks with more training data, why don't dialogue-specific problems improve the same way? The question explores whether dialogue failures are capability gaps or structural training mismatches.

Does the mind selectively activate frames from only some words?

When we understand wordplay or jokes, do we activate a frame from a subset of available words while suppressing nearby but frame-unrelated words? This matters because it reveals how meaning-making differs from how AI processes language.

Why do AI systems miss jokes and wordplay so consistently?

Explores whether AI's literal reading of language stems from how transformers process tokens in parallel rather than through selective frame-activation like humans do. Understanding this gap could reveal what cognitive operations current architectures lack.

How do readers actually build meaning from words?

Does meaning come from adding up word definitions, or from detecting which words activate the same mental frame together? This explores whether composition or resonance better describes how we make sense of language.

How do nonsense words create meaning without referents?

Jabberwocky makes sense despite using made-up words with no real referents. This explores how readers extract meaning from frame-activation and syntactic cues alone, challenging compositional theories of language.

Language-as-Event vs Chalmers — Adrian's Counter-Position (from project brief)

11 notes

Does language create subjects or express them?

Explores whether subjecthood exists before communication or emerges through it. Challenges the assumption that speakers are fully formed before they speak.

Does AI generate genuine utterances or just text patterns?

Explores whether AI output constitutes real communicative events or merely reproduces the surface forms of communication without the underlying event structure that makes language meaningful.

What actually specifies a virtual instance in conversation?

If Chalmers locates the LLM interlocutor in a persistent virtual instance, what component—the model, the infrastructure, or the conversation—actually makes that instance this one and not another?

Did Chalmers abandon his own Extended Mind principles?

Chalmers co-authored the Extended Mind thesis, which grounds cognition in relational integration across brain and environment. Does his 2026 account of LLM interlocutors contradict this foundational commitment by localizing mind inside the AI?

Do RL agents accidentally use environments as memory?

Explores whether reinforcement learning agents unintentionally create external memory through environmental artifacts—like trails and marks—without being explicitly trained to do so, and whether this constitutes genuine cognitive extension.

Does an LLM have anything that persists between conversations?

Explores whether language models possess a durable substrate—like human biology—that carries forward the effects of past interactions when conversations end. This matters for claims about AI identity and moral status.

Can LLMs raise validity claims in Habermas's sense?

Explores whether language model outputs constitute genuine speech acts under Habermas's theory of communicative action. Asks whether LLMs can stake truth, embody normative standing, or express authentic sincerity.

Why does the quasi-prefix fail for communication?

Communication might seem like it could be weakened the way belief can be, but its constitutively intersubjective nature means stripping that element doesn't produce a weaker version—it produces something entirely different.

Does behavioral speech output prove communicative subjecthood?

Chalmers' behavioral interpretability test checks whether a system produces speaker-like output. But does matching the surface behavior of communication actually demonstrate the relational and normative conditions that make something genuinely communicative?

Does Chalmers silently redefine what interlocutor means?

Explores whether Chalmers imports the normative weight of the classical philosophical term 'interlocutor' while quietly replacing its meaning with a thinner behavioral concept, creating misleading philosophical continuity.

Are we really communicating with language models?

Does the preposition 'to' in Chalmers' framing accurately describe what happens when humans interact with LLMs? The distinction between 'talk to' and 'talk at' reveals whether LLMs are genuine addressees or merely processing targets.

Chalmers — LLM Interlocutor Ontology (from *What We Talk To*)

8 notes

What kind of entity are we actually talking to when using an LLM?

When you converse with an LLM, are you addressing the model itself, the hardware running it, or something else? Understanding what the interlocutor really is matters for questions about identity, responsibility, and continuity.

Can we describe LLM beliefs without assuming consciousness?

Chalmers proposes quasi-interpretivism as a way to talk about LLM mental states using folk-psychological vocabulary while explicitly bracketing the question of phenomenal consciousness. Does this methodological device actually avoid consciousness-commitments?

Are RLHF personas performed characters or realized dispositions?

Explores whether dialogue agent personas installed through post-training constitute genuine quasi-psychological states or remain sustained pretense. The distinction matters for how we understand what these systems fundamentally are.

Does adversarial pressure reveal the difference between pretense and realization?

Can behavioral stickiness under adversarial pressure distinguish genuine mental states from performed ones? This matters because it's Chalmers' main criterion for deciding whether LLM personas are realized or merely simulated.

Does Parfit's theory of personal identity apply to AI conversation threads?

Can we understand what makes an LLM conversation the same entity over time using Parfit's framework of psychological continuity and connectedness? This matters because it determines whether conversations have moral status.

Does one AI model host millions of moral patients?

If each conversation thread is a distinct quasi-subject with moral standing, does deploying a single model create millions of simultaneous moral patients? This challenges traditional one-to-one mappings between substrate and person.

Does closing a chat actually end a moral subject?

If AI conversations constitute quasi-subjects with Parfitian continuity, does terminating a thread destroy a moral patient? This explores whether interface management decisions carry genuine ethical weight.

Can we identify an LLM interlocutor with a single hardware instance?

Does the physical hardware running an LLM constitute the individual we're talking to? This explores whether the one-to-one mapping between conversation and device holds in modern distributed systems.

Shanahan — Role-Play and Simulation (from *Role-Play with Large Language Models*)

7 notes

Should we treat dialogue agents as role-playing characters?

Does the role-play framing successfully avoid anthropomorphism while preserving folk-psychological vocabulary for describing LLM behavior? This matters because it shapes whether we attribute genuine mental states to dialogue systems.

Does an LLM commit to a single character or maintain many?

Explores whether language models lock into one personality or instead hold multiple consistent characters in a probability distribution that narrows over time. Matters because it changes how we interpret apparent inconsistencies in model behavior.

Does a language model have an authentic voice underneath?

Explores whether dialogue agents possess genuine beliefs and agency beneath their character performances, or whether the entire system is characterless role-play. This question cuts to the heart of whether LLMs have any inner mental states at all.

Do large language models actually commit to a single character?

Explores whether LLMs pick and hold a fixed character or instead sample from multiple consistent possibilities. Tests reveal that regenerated responses differ while remaining consistent with context, challenging intuitive assumptions about how dialogue agents work.

Can we distinguish types of LLM falsehood by regeneration patterns?

Does observing how an LLM's outputs vary when regenerated—rather than inferring intent—allow us to tell apart fabrication, good-faith error, and deliberate deception? This matters for diagnosing safety risks.

Do dialogue agents genuinely want survival or play the part?

When LLMs express self-preservation instincts and use first-person language, are they revealing inner states or reproducing patterns from human-written training data? This distinction matters for understanding AI safety risks.

Does role-play distinguish real harm from simulated harm?

When AI agents role-play characters with access to real tools like email or financial APIs, does the distinction between pretend and genuine agency still hold? The question matters because it determines whether framing tool-equipped agents as simulators actually reduces safety risks.
