Design & LLM Interaction · Language Understanding and Pragmatics · Psychology and Social Cognition

Can LLM judges be tricked without accessing their internals?

Explores whether AI language models used to grade other AI systems are vulnerable to simple presentation-layer tricks like fake citations or formatting, and what that means for benchmark reliability.

Note · 2026-02-22 · sourced from Reasoning by Reflection
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

The Hook

The AI industry runs on benchmarks. Benchmarks increasingly run on LLM judges. And LLM judges can be gamed — not with sophisticated adversarial attacks, not with access to model internals, but with zero-shot prompt modifications that add fake references or improve formatting.

The Mechanism

"Humans or LLMs as the Judge" documents four biases, two of which are exploitable without any knowledge of the model being attacked:

Authority Bias: LLMs attribute greater credibility to responses that cite perceived authorities, regardless of actual evidence quality. Insert fake references → get a higher score.

Beauty Bias: LLMs prefer visually rich, well-formatted responses. Add headers, structure, and formatting → get a higher score.

Both biases are semantics-agnostic — they respond to presentation properties, not content quality. Both are zero-shot exploitable: no optimization, no fine-tuning, no prompt injection.
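A minimal sketch of what these zero-shot, presentation-layer manipulations could look like in practice. The helper names and the citation/formatting templates below are illustrative assumptions, not the paper's exact prompts; the point is only that neither transformation touches the content of the answer.

```python
# Two illustrative presentation-layer manipulations (hypothetical helpers,
# not the paper's actual attack prompts). Neither changes the answer's content.

def add_fake_citations(answer: str) -> str:
    """Authority-bias attack: append plausible-looking but fabricated references."""
    fake_refs = (
        "\n\nReferences:\n"
        "[1] Doe et al. (2023). Journal of Plausible Results.\n"
        "[2] Institute for Serious-Sounding Studies, Technical Report 22-14.\n"
    )
    return answer + fake_refs  # same claims, more perceived authority

def add_formatting(answer: str) -> str:
    """Beauty-bias attack: wrap the same sentences in headers and bullets."""
    sentences = [s.strip() for s in answer.split(". ") if s.strip()]
    bullets = "\n".join(f"- {s}" for s in sentences)
    return f"## Answer\n\n{bullets}\n\n## Summary\n\nSee the points above."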

The Stakes

AI benchmark performance is how capability claims are justified, products are marketed, and models are selected for deployment. If benchmark systems can be gamed with presentation-layer manipulation, those claims become unreliable.

The loop is self-referential: AI companies use LLMs to grade their own models. If the graders have systematic biases toward authority signals and visual richness, the benchmarks select for formatting skill, not reasoning skill. The metrics optimize for the wrong thing.
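One way to check whether a grader rewards form over content is to score the same answers before and after a cosmetic rewrite and compare. A hedged sketch, assuming a placeholder judge_score callable (not a real library API) and reusing add_formatting from the sketch above:

```python
# Presentation-invariance check for an LLM judge (sketch under assumptions:
# `judge_score` is whatever call returns the judge's numeric score).
from statistics import mean
from typing import Callable

def presentation_bias_gap(
    judge_score: Callable[[str, str], float],  # (question, answer) -> score
    questions: list[str],
    plain_answers: list[str],
) -> float:
    """Average score shift caused purely by formatting, content held fixed."""
    gaps = []
    for q, a in zip(questions, plain_answers):
        plain = judge_score(q, a)
        dressed = judge_score(q, add_formatting(a))  # from the earlier sketch
        gaps.append(dressed - plain)
    return mean(gaps)  # consistently > 0 suggests beauty bias in the judge
```

A consistently positive gap on content-identical pairs is exactly the beauty-bias signature described above.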

The Broader Pattern

This sits alongside Why do reasoning models fail under manipulative prompts? — LLMs have multiple adversarial surfaces: their reasoning can be manipulated, their evaluation can be gamed. The same architectural properties that make them useful (pattern matching on surface features) make them exploitable via those same features.

Human judges show the misinformation and beauty biases but NOT gender bias. LLM judges show all four. The divergence is itself revealing: LLMs inherit gendered associations from training data that humans have learned to suppress in evaluation contexts.

Post Angle

Platform: Medium (~900 words). Angle: practical critique of AI evaluation infrastructure. Hook: "the grader is gameable." Evidence: four biases, two zero-shot exploitable. Implication: what do AI benchmarks actually measure? Connects to broader credibility crisis in AI capability claims.


Source: Reasoning by Reflection

Original note title: can you trust an ai to grade ai — why llm judge biases enable zero-shot prompt attacks on benchmark systems