LLM Reasoning and Architecture

Does partial formalism work better than full symbolic translation?

Exploring whether injecting limited symbolic structure into natural language preserves reasoning power better than complete formalization. This matters because current neuro-symbolic approaches often lose semantic information during translation.

Note · 2026-02-22 · sourced from Reasoning Logic Internal Rules
What makes chain-of-thought reasoning actually work? How should researchers navigate LLM reasoning research? Do reasoning traces show how models actually think?

Two independent approaches converge on the same principle: injecting partial symbolic structure into natural language context outperforms both pure NL reasoning and full symbolic formalization. The key is augmentation, not replacement.

QuaSAR (Quasi-Symbolic Abstract Reasoning) guides LLMs through four steps: (1) Abstraction — identify relevant predicates, variables, constants; (2) Formalization — reformulate using a mix of symbols and NL; (3) Explanation — solve using quasi-symbolic representations; (4) Answering — extract the final answer. The model formalizes only what's relevant, keeping everything else in NL. Result: up to 8% accuracy improvement on MMLU-Redux and GSM-Symbolic, with enhanced robustness on adversarial variations.
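The four stages can be sketched as a chained prompt pipeline. This is a minimal sketch, not QuaSAR's actual implementation: the stage wording is paraphrased from the step descriptions above, and `complete` is a hypothetical stand-in for an LLM call (here it just echoes the prompt so the control flow is runnable on its own).

```python
# Sketch of the QuaSAR four-stage prompting pipeline.
# Assumption: `complete` wraps some chat-completion API; this echo
# version only demonstrates how each stage's output feeds the next.

STAGES = [
    ("Abstraction",   "Identify the relevant predicates, variables, and constants:\n{q}"),
    ("Formalization", "Reformulate using a mix of symbols and natural language:\n{prev}"),
    ("Explanation",   "Solve step by step over the quasi-symbolic representation:\n{prev}"),
    ("Answering",     "Extract the final answer:\n{prev}"),
]

def complete(prompt: str) -> str:
    """Placeholder LLM call (hypothetical); echoes so the pipeline runs."""
    return prompt

def quasar(question: str) -> dict:
    """Run a question through the four stages, chaining each output forward."""
    outputs, prev = {}, question
    for name, template in STAGES:
        prev = complete(template.format(q=question, prev=prev))
        outputs[name] = prev
    return outputs

result = quasar("If all quartz is mineral and this rock is quartz, is it mineral?")
print(list(result))  # stage names, in pipeline order
```

The design point the pipeline makes concrete: the original NL question stays in context at every stage, and formal symbols only accumulate on top of it.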

Logic-of-Thought (LoT) takes a different path to the same destination: extract propositional logic from the input, expand via logical reasoning laws (double negation, contraposition, transitivity), translate the expanded logic back to NL, then inject as additional context alongside the original prompt. Result: +4.35% on ReClor (with CoT), +3.52% on RuleTaker (with CoT+SC), +8% on ProofWriter (with ToT).
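LoT's expansion phase is mechanical enough to sketch directly. This is an illustrative reimplementation, not the paper's code: it assumes propositions have already been extracted as implication pairs `(A, B)` meaning A→B, with `~` marking negation, and it applies the three laws named above to a fixed point. Translation back to NL is left out.

```python
# Sketch of LoT's logic-extension step: expand a set of implications
# using double negation, contraposition, and transitivity.

def neg(p: str) -> str:
    """Negate a proposition; double negation collapses (~~p == p)."""
    return p[1:] if p.startswith("~") else "~" + p

def expand(implications: set) -> set:
    """Close the implication set under contraposition and transitivity."""
    laws = set(implications)
    while True:
        new = set()
        for (a, b) in laws:
            new.add((neg(b), neg(a)))   # contraposition: A->B  =>  ~B->~A
        for (a, b) in laws:
            for (c, d) in laws:
                if b == c:
                    new.add((a, d))     # transitivity: A->B, B->C  =>  A->C
        if new <= laws:                 # fixed point reached
            return laws
        laws |= new

# Toy premises: reads(x) -> literate(x), literate(x) -> educated(x)
base = {("reads", "literate"), ("literate", "educated")}
ext = expand(base)
print(("reads", "educated") in ext)    # True, via transitivity
print(("~educated", "~reads") in ext)  # True, via contraposition
```

The expanded implications would then be verbalized and appended to the original prompt, alongside (not instead of) the source text.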

Both approaches address the same failure mode from different directions. Full neuro-symbolic methods (Logic-LM, LINC, SatLM) translate the ENTIRE problem to formal logic, which inevitably loses information — the LoT paper documents specific cases where facts like "Harry is a person" and "Walden is a book" are dropped during extraction, causing the downstream symbolic solver to fail. QuaSAR and LoT avoid this by keeping the original NL context intact and adding formal elements as enrichment.
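The failure mode is easy to demonstrate with a toy forward-chaining solver. The facts paraphrase the paper's example; the rule (`can_read`) and the "lossy" fact set are hypothetical stand-ins for an extraction step that silently drops "Walden is a book".

```python
# Toy illustration of why lossy NL-to-logic translation breaks solvers:
# a symbolic engine can only derive from the facts the extractor kept.

def forward_chain(facts: set, rules: list) -> set:
    """Close `facts` under Horn rules given as (body_set, head) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

rules = [({"person(harry)", "book(walden)"}, "can_read(harry, walden)")]

full  = {"person(harry)", "book(walden)"}
lossy = {"person(harry)"}  # "Walden is a book" lost during extraction

print("can_read(harry, walden)" in forward_chain(full, rules))   # True
print("can_read(harry, walden)" in forward_chain(lossy, rules))  # False
```

Augmentation sidesteps this entirely: since the original NL stays in the prompt, a dropped fact degrades the formal hints but never deletes the premise.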

The theoretical grounding is illuminating. QuaSAR draws on Kitcher's unificationist account of explanation: explanations work by subsuming observations under recurring argument patterns through abstraction. Replacing concrete entities with abstract symbols creates reusable reasoning patterns — the same pattern can explain why objects fall AND why celestial bodies attract. This is partial formalization as cognitive tool, not as logical translation.

Given Can large language models translate natural language to logic faithfully?, full formalization is a dead end. Given Do large language models reason symbolically or semantically?, removing semantics breaks reasoning. The partial approach threads the needle: add enough structure to bypass content bias while preserving enough semantics for the model to reason.



partial symbolic abstraction preserves information completeness that full formalization loses — augmentation outperforms replacement for logical reasoning