What makes chain-of-thought reasoning actually work?

Examines how chain-of-thought reasoning works, what its traces consist of, and when it fails.

Topic Hub · 10 linked notes · 4 sections
Sub-Maps

2 notes

What makes chain-of-thought reasoning actually work?

Explores the structural and mechanical properties that determine how reasoning traces function in language models. Understanding these properties reveals why format matters more than logic and what tokens carry the most information about correct answers.

Why does chain-of-thought reasoning fail so often?

Explores the limits of CoT as a reasoning technique. Understanding when and why CoT breaks down reveals whether models are genuinely reasoning or imitating reasoning patterns.

Reasoning Step Taxonomy and Intervention

1 note

Writing Angles

2 notes

Why do more capable reasoning models ignore your instructions?

As AI models develop stronger reasoning abilities, they seem to follow instructions less reliably. What causes this counterintuitive trade-off, and how severe is the problem in practice?

What critical thinking skills do reasoning models actually lose?

Training for step-by-step reasoning optimizes narrow deductive thinking while degrading meta-cognitive abilities such as recognizing futile lines of reasoning and keeping conclusions tentative. Understanding this trade-off matters for deploying reasoning models reliably.
