Does partial formalism work better than full symbolic translation?
Exploring whether injecting limited symbolic structure into natural language preserves reasoning power better than complete formalization. This matters because current neuro-symbolic approaches often lose semantic information during translation.
Two independent approaches converge on the same principle: injecting partial symbolic structure into natural language context outperforms both pure NL reasoning and full symbolic formalization. The key is augmentation, not replacement.
QuaSAR (Quasi-Symbolic Abstract Reasoning) guides LLMs through four steps: (1) Abstraction — identify the relevant predicates, variables, and constants; (2) Formalization — reformulate the problem using a mix of symbols and NL; (3) Explanation — solve using the quasi-symbolic representations; (4) Answering — extract the final answer. The model formalizes only what's relevant, keeping everything else in NL. Result: accuracy improvements of up to 8% on MMLU-Redux and GSM-Symbolic, with enhanced robustness to adversarial variations.
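The four steps can be sketched as a single prompt scaffold. This is a minimal illustration, not the paper's actual prompt: the step names come from the source, but the instruction wording and the function name `build_quasar_prompt` are assumptions for this sketch.

```python
# Hypothetical prompt scaffold for QuaSAR's four-step pipeline.
# Step names follow the source; instruction text is illustrative.
QUASAR_STEPS = [
    ("Abstraction",
     "List the relevant predicates, variables, and constants in the problem."),
    ("Formalization",
     "Reformulate the problem using those symbols, keeping everything "
     "not captured by them in natural language."),
    ("Explanation",
     "Solve step by step over the quasi-symbolic representation."),
    ("Answering",
     "State the final answer on a single line as 'Answer: <value>'."),
]

def build_quasar_prompt(problem: str) -> str:
    """Prepend the original problem, then append the four guided steps."""
    lines = [problem, ""]
    for i, (name, instruction) in enumerate(QUASAR_STEPS, 1):
        lines.append(f"Step {i} ({name}): {instruction}")
    return "\n".join(lines)
```

Note that the original problem text stays at the top of the prompt untouched; only the instructions push the model toward partial formalization, which is the augmentation-not-replacement point.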
Logic-of-Thought (LoT) takes a different path to the same destination: extract propositional logic from the input, expand via logical reasoning laws (double negation, contraposition, transitivity), translate the expanded logic back to NL, then inject as additional context alongside the original prompt. Result: +4.35% on ReClor (with CoT), +3.52% on RuleTaker (with CoT+SC), +8% on ProofWriter (with ToT).
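The expansion phase of LoT can be sketched in isolation. Assume propositions have already been extracted from the input (the paper uses an LLM for extraction and for translating the result back to NL); the helper names below are illustrative. Implications are (antecedent, consequent) pairs of literal strings, with `~p` denoting the negation of `p`.

```python
# Sketch of Logic-of-Thought's logic-expansion step: close a set of
# implications under contraposition and transitivity (two of the
# reasoning laws the paper applies).

def negate(p: str) -> str:
    """Negate a literal, collapsing double negation: ~~p -> p."""
    return p[1:] if p.startswith("~") else "~" + p

def expand(implications: set) -> set:
    """Fixpoint closure of implications under two logical laws."""
    facts = set(implications)
    while True:
        new = set()
        for a, b in facts:
            # Contraposition: (a -> b) yields (~b -> ~a)
            new.add((negate(b), negate(a)))
            # Transitivity: (a -> b) and (b -> c) yield (a -> c)
            for b2, c in facts:
                if b2 == b:
                    new.add((a, c))
        if new <= facts:            # nothing new: closure reached
            return facts
        facts |= new

closure = expand({("reads", "learns"), ("learns", "grows")})
assert ("reads", "grows") in closure       # via transitivity
assert ("~grows", "~reads") in closure     # via contraposition
```

In LoT proper, the closure would then be verbalized back into NL and appended to the original prompt as extra context, rather than handed to a solver, which is what keeps the original semantics intact.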
Both approaches solve the same problem differently. Full neuro-symbolic methods (Logic-LM, LINC, SatLM) translate the ENTIRE problem to formal logic, which inevitably loses information — the LoT paper documents specific cases where "Harry is a person" and "Walden is a book" are lost during extraction, causing symbolic solvers to fail. QuaSAR and LoT avoid this by keeping the original NL context intact and adding formal elements as enrichment.
The theoretical grounding is illuminating. QuaSAR draws on Kitcher's unificationist account of explanation: explanations work by subsuming observations under recurring argument patterns through abstraction. Replacing concrete entities with abstract symbols creates reusable reasoning patterns — the same pattern can explain why objects fall AND why celestial bodies attract. This is partial formalization as cognitive tool, not as logical translation.
Since full formalization is a dead end (see: Can large language models translate natural language to logic faithfully?) and removing semantics breaks reasoning (see: Do large language models reason symbolically or semantically?), the partial approach threads the needle: add enough structure to bypass content bias while preserving enough semantics for the model to reason.
Source: Reasoning Logic Internal Rules
Related concepts in this collection
- Can large language models translate natural language to logic faithfully? This explores whether LLMs can convert natural language statements into formal logical representations without losing meaning. It matters because faithful translation is essential for any AI system that reasons formally or verifies specifications. Link: full formalization fails; partial avoids the failure.
- Do large language models reason symbolically or semantically? Can LLMs follow explicit logical rules when those rules contradict their training knowledge? Testing whether reasoning operates independently of semantic associations reveals what computational mechanisms actually drive LLM multi-step inference. Link: preserving semantics is necessary; adding partial structure is sufficient.
- Can symbolic solvers fix how LLMs reason about logic? LLMs excel at understanding natural language but fail at precise logical inference. Can pairing them with deterministic symbolic solvers, using solver feedback to refine attempts, overcome this fundamental weakness? Link: full symbolic offloading is the other extreme; this note is the productive middle ground.
- Can critical questions improve how language models reason? Does structuring prompts around argumentation theory's warrant-checking questions force language models to perform deeper reasoning rather than surface pattern matching? This matters because models might produce correct answers without actually reasoning correctly. Link: CQoT is another form of partial structure injection; both add formal elements without full formalization.
- Do formal language prototypes improve reasoning across different domains? This explores whether training LLMs on abstract reasoning patterns in formal languages like Prolog and PDDL creates generalizable reasoning foundations that transfer to structurally similar problems across diverse domains. Link: ProtoReasoning takes the augmentation approach at the training level, pairing Prolog/PDDL prototypes with natural language rather than replacing it; its 4-6% cross-domain improvement extends the augmentation principle from inference time (this note) to training time.
Original note title
partial symbolic abstraction preserves information completeness that full formalization loses — augmentation outperforms replacement for logical reasoning