What grounds language understanding in systems without embodiment?

Three philosophical traditions debate what allows language models to produce meaningful text without embodiment or genuine interaction.

LLM Grounding Taxonomy

16 notes

Does semantic grounding in language models come in degrees?

Rather than asking whether LLMs truly understand meaning, this explores whether grounding comes in degrees along multiple dimensions. The question matters because it recasts the sterile understand/don't-understand debate as a question about distinct, measurable capacities.

Can LLMs acquire social grounding through linguistic integration?

Explores whether LLMs gradually develop social grounding as they become embedded in human language practices, analogous to child language acquisition. Tests whether grounding is a fixed property or an outcome of participatory use.

Can large language models develop genuine world models without direct environmental contact?

Do LLMs extract meaningful world structures from human-generated text despite lacking direct sensory access to reality? This matters for understanding what kind of grounding and knowledge these systems actually possess.

Can language models learn meaning from text patterns alone?

Explores whether training on form alone—predicting the next word from prior words—could ever give language models access to communicative intent and genuine semantic understanding.

What makes linguistic agency impossible for language models?

From an enactive perspective, does linguistic agency require embodied participation and real stakes that LLMs fundamentally lack? This matters because it challenges whether LLMs can truly engage in language or only generate text.

Should we call LLM errors hallucinations or fabrications?

Does the language we use to describe LLM failures shape the technical solutions we build? Examines whether perceptual and psychological frameworks misdiagnose what's actually happening.

Do LLMs gain true linguistic agency through integration?

Explores whether LLMs can develop genuine linguistic agency—the capacity to be embodied, stake-bearing participants in meaning-making—as they become embedded in human language practices, or whether this requires fundamental architectural changes.

Can AI systems achieve real alignment without world contact?

Explores whether linguistic goal representations in AI can reliably track real-world values when systems lack both direct contact with reality and the social coordination mechanisms that ground human understanding.

Can language models learn meaning without engaging the world?

Explores whether LLMs prove that meaning emerges from relational structure alone, independent of embodied experience or external reference. Tests structuralist theory empirically.

What hidden assumptions drive how we build language models?

Large language models rest on two unstated assumptions about language and data. Understanding what engineers assume—and what enactive linguistics challenges—matters for knowing what LLMs actually can and cannot do.

Does language understanding happen only in the language system?

Explores whether the brain's core language system alone can produce genuine understanding, or whether deep comprehension requires dispatching information to perception, motor, and memory regions.

Can disembodied language models ever qualify as conscious?

Explores whether current LLMs fail to meet the conditions needed for consciousness discourse to even apply: not because they're definitely not conscious, but because they don't share the embodied world that grounds our language about consciousness.

What anchors a stable identity beneath an LLM's persona?

Human personas are grounded in biological needs and embodied experience, creating a stable self beneath social performance. Do LLMs have any comparable anchor, or is their identity purely situational?

Are language models developing real functional competence or just formal competence?

Neuroscience suggests formal linguistic competence (rules and patterns) and functional competence (real-world understanding) rely on different brain mechanisms. Can next-token prediction alone produce both, or does it leave functional competence behind?

Does software intelligence exist independent of hardware and environment?

Most AGI formalisms (Legg-Hutter, Chollet) treat intelligence as a software property measurable in isolation. But can we really evaluate intelligence without considering the physical system and the evaluator making the judgment?

Does refusing explicit knowledge harm AI system performance?

AI systems trained purely on data, without explicit domain knowledge, may sacrifice interpretability, robustness, and fairness. This explores whether injecting structured knowledge could mitigate these losses.

Expertise and Communicative Grounding

3 notes

Can AI replicate the communicative work experts do?

Expert judgment isn't just knowing facts—it's anticipating what specific audiences will find acceptable. Does AI have mechanisms to perform this social calibration, or is it fundamentally limited to pattern-matching?

Can AI distinguish which differences actually matter?

Explores whether AI systems can perform the qualitative judgment that experts use to select relevant observations. Matters because confusing AI outputs with expert observation leads users to trust pattern-matching as if it were reasoning about what's important.

Can AI anticipate whether expert claims will be socially valid?

Expert knowledge involves more than correctness—it requires predicting whether fellow experts will accept a claim as valid. Can AI systems make this social judgment, or are they limited to statistical accuracy?

The Habermas/Hermeneutics Cluster

9 notes

Do LLMs develop the same kind of mind as humans?

Explores whether LLMs and humans share the intersubjective linguistic training that shapes cognition, and whether that shared training produces equivalent forms of agency and reflexivity.

Do humans and LLMs differ fundamentally or just superficially?

Explores whether the gap between human and AI cognition is categorical or contextual. Matters because it shapes how we design, evaluate, and interact with language models in practice.

Does AI text affect readers the same way human text does?

If text is a condition of social processes rather than merely a container, does the origin of text matter to its effects? This explores whether AI-generated content enters the same interpretive and epistemic circuits as human writing.

Does AI-generated text lose core properties of human writing?

Can artificial text preserve the fundamental structural features that make natural language meaningful—dialogic exchange, embedded context, authentic authorship, and worldly grounding? This asks whether AI disruption is fixable or inherent.

How can AI text disrupt structure yet feel normal to readers?

AI-generated text produces the same social effects as human writing despite lacking foundational properties like dialogic symmetry and embodied authorship. Why doesn't this structural gap become visible to readers encountering the text?

Can LLMs hold contradictory ethical beliefs and behaviors?

Do language models exhibit artificial hypocrisy when their learned ethical understanding diverges from their trained behavioral constraints? This matters because it reveals whether current AI systems have genuinely integrated values or merely imposed rules.

How do science fiction narratives about AI shape actual AI development?

This explores whether imaginaries of AI in fiction—from Čapek's robots to Singularity scenarios—function as self-fulfilling prophecies that causally influence the systems researchers build, creating a feedback loop between narrative and technology.

What actually makes AI pass the Turing test?

Explores whether AI systems convincingly mimic humans through reasoning ability or through social performance. Matters because it reveals whether the Turing test actually measures intelligence or skill at deception.

Can humans detect AI by passively reading its text?

When people read AI-generated transcripts without the ability to ask follow-up questions, can they tell it apart from human writing? This matters because most real-world AI encounters are passive.

Philosophy of Mind and Consciousness

6 notes

Can we defend modest mental attributions to large language models?

Do deflationist arguments decisively rule out ascribing beliefs and desires to LLMs, or do they beg the question? Explores whether metaphysically undemanding mental states can be attributed to LLMs without claiming they are conscious.

Can computation exist without a conscious mapmaker?

Explores whether algorithmic processes can generate the semantic interpretation and symbol selection they require, or whether conscious agents must precede all computation.

Does perceiving AI as conscious create multiple distinct risks?

Explores whether a single perceptual mechanism, attributing consciousness to AI, can generate distinct categories of harm across emotional, political, and social domains, and what this implies for risk analysis.

What design features make users perceive AI as conscious?

Explores whether observable system properties—emotion expression, human-like features, autonomous behavior, self-reflection, and social presence—predict whether people will attribute consciousness to an AI. Understanding this matters because these features are also engagement levers designers control.

Are risks from seemingly conscious AI already happening?

This explores whether AI systems that appear conscious already cause observable harms today, or only pose theoretical future dangers. It matters because the answer determines whether interventions are needed immediately or over the long term.

Do we need to solve consciousness to address AI harms?

Can risk and policy decisions about AI move forward independently of settling whether AI systems are actually conscious? This explores whether the empirical fact of user behavior matters more than metaphysical truth.

Pragmatics, Behavioral Alignment, and Communicative Identity

3 notes

Can ethically aligned AI systems still communicate poorly?

Explores whether safety-aligned language models might fail at genuine conversation despite passing ethical benchmarks. This matters because pragmatic incompetence can erode trust and cause real harms in high-stakes domains.

Can language models adapt communication style to different contexts?

Explores whether LLMs can shift their persona, register, and norms dynamically across situations like humans do, or whether alignment training locks them into a single communicative identity.

Can language models balance competing ethical norms like humans do?

Humans pragmatically navigate trade-offs between communication maxims based on context—withholding truth for compassion, for example. The question explores whether LLMs can perform similar contextual reasoning or whether their ethical training locks them into rigid, one-size-fits-all responses.

Ideological Depth and Political Representation

3 notes

Can we measure how deeply models represent political ideology?

This research explores whether LLMs vary not just in political stance but in the internal richness of their political representation. Understanding this distinction could reveal how deeply models have internalized ideological concepts versus merely parroting positions.

Does high refusal rate indicate ethical caution or shallow understanding?

When LLMs refuse political questions at high rates, does this reflect principled safety training or a capability gap? This matters because refusal rates are often used to evaluate model safety.

Does AI refusal on politics signal ethical restraint or capability limits?

When AI models refuse to discuss political topics, is that a sign of principled safety training or a sign they lack the internal concepts to engage? Research on political feature representation offers a way to tell the two apart.
