Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models

Paper · arXiv 2409.17539 · Published September 26, 2024

To address this issue, some studies employ propositional logic to further enhance the logical reasoning abilities of LLMs. However, potential omissions in the extraction of logical expressions in these methods can cause information loss in the logical reasoning process, thereby generating incorrect results. To this end, we propose Logic-of-Thought (LoT) prompting, which employs propositional logic to generate expanded logical information descriptions and utilizes them as an additional augmentation to the original contexts, thereby ensuring information completeness and enhancing logical reasoning ability. LoT is orthogonal to existing prompting methods and can be seamlessly integrated with them. Extensive experiments demonstrate that LoT boosts the performance of various prompting methods by a striking margin across five logical reasoning tasks. In particular, LoT enhances Chain-of-Thought's performance on the ReClor dataset by +4.35%, improves Chain-of-Thought with Self-Consistency's performance on the RuleTaker dataset by +3.52%, and boosts the performance of Tree-of-Thoughts on the ProofWriter dataset by +8%.

To tackle the challenge of unfaithfulness in the reasoning process, researchers have proposed many neuro-symbolic methods that integrate LLMs with symbolic reasoning, such as Faithful Chain-of-Thought (Lyu et al., 2023), LINC (Olausson et al., 2023), Logic-LM (Pan et al., 2023), and SatLM (Ye et al., 2024). These methods follow a similar process: first, the problem and objectives are translated into symbolic expressions; next, symbolic results are derived through external tools such as symbolic solvers; finally, the symbolic results are optionally explained using LLMs or interpreters. However, these existing neuro-symbolic methods inevitably suffer from information loss, which results from omissions in the extraction of logical expressions and directly leads to incorrect intermediate reasoning. As illustrated in Figure 1, during the extraction of logical expressions in LINC, two key pieces of hidden information, "Harry is a person" and "Walden is a book", are lost, which makes it impossible for the symbolic solver Prover9 to reach the correct reasoning result.

To address the issue of information loss, in this paper we propose a novel zero-shot prompting method named Logic-of-Thought (LoT). Specifically, LoT first extracts propositions and logical expressions from the input context, expands these logical expressions according to logical reasoning laws, and converts the deduced logical expressions back into natural language. LoT then treats these extended logical descriptions as additional logical augmentation and concatenates them with the original context. This not only encourages LLMs to utilize the newly deduced logical information when answering the original question, but also ensures information completeness by preserving the full original context for LLM reasoning, thereby enhancing logical reasoning ability.

• Propositions are defined as declarative sentences that have a clear truth value and cannot be simultaneously true and false. In this context, propositions are considered the fundamental elements of logical expressions. We use standard uppercase letters such as A, B, C to symbolize specific propositions, exemplified by statements like "you have keyboarding skills", and lowercase letters such as p, q, r to refer to arbitrary propositions.

• Connectives are defined as operators on propositions, which can operate on a single proposition or link propositions together to form a new logical expression. In this study, we mainly focus on three connectives: ¬, → and ∧. The negation ¬ negates a single logical symbol (e.g., ¬p). The implication → signifies a sufficient condition or causal relationship between two propositions (e.g., p → q). The conjunction ∧ also operates on two propositions and indicates that the entire expression is true only if both propositions are true (e.g., p ∧ q).

• Logical reasoning laws are defined as deduction relations between two logical expressions. In this study, we utilize three basic logical reasoning laws: the Double Negation Law ¬¬p ⇔ p, the Contraposition Law (p → q) ⇔ (¬q → ¬p), and the Transitive Law (p → q) ∧ (q → r) ⇒ (p → r), all of which align with human intuition and are fundamental and widely used in propositional logic (Büning and Lettmann, 1999).
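Each of the three laws is a purely syntactic rewrite, so each can be stated as a small function. This is a sketch under our own tuple encoding (("not", p) for ¬p, ("imp", p, q) for p → q), not the paper's released code; each function returns the deduced expression, or None when the law does not apply:

```python
def double_negation(e):
    # ¬¬p ⇔ p : strip a doubled negation.
    if isinstance(e, str):
        return None
    if e[0] == "not" and not isinstance(e[1], str) and e[1][0] == "not":
        return e[1][1]
    return None

def contraposition(e):
    # (p → q) ⇔ (¬q → ¬p) : swap and negate both sides of an implication.
    if not isinstance(e, str) and e[0] == "imp":
        return ("imp", ("not", e[2]), ("not", e[1]))
    return None

def transitive(e1, e2):
    # (p → q) ∧ (q → r) ⇒ (p → r) : chain two implications
    # when the consequent of the first matches the antecedent of the second.
    if e1[0] == "imp" and e2[0] == "imp" and e1[2] == e2[1]:
        return ("imp", e1[1], e2[2])
    return None
```

Note that contraposition may introduce double negations (e.g., applying it to ¬p → ¬q yields ¬¬q → ¬¬p), which is why the Double Negation Law is needed alongside it.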

Logic Extraction. In the Logic Extraction phase, we use LLMs to extract formal logic expressions from the input context in two stages. First, we instruct LLMs to select sentences containing conditional reasoning relationships from the input context, producing a collection of sentences with logical relationships. Then, we use LLMs to extract the set of propositional symbols P and the set of logical expressions E from this collection. During Logic Extraction, LLMs identify propositions with similar meanings and represent them with identical propositional symbols; they also analyze the logical relationships between propositions from their natural language descriptions, ultimately deriving the logical expressions. For propositions expressing opposite meanings, the negation ¬ is added. When there is a conditional relationship between two propositions, the implication → connects their corresponding propositional symbols. We also incorporate well-designed hints about logical relationships into the prompt, such as the phrases "if...then..." or "...causes...", to further guide LLMs in analyzing logical connections and to minimize errors.
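The two-stage extraction could be driven by prompt templates along the following lines. These templates are our own illustrative paraphrase, not the prompts used in the paper:

```python
# Hypothetical prompt templates for the two Logic Extraction stages.
# Stage 1: select the sentences that carry conditional reasoning.
SELECT_PROMPT = (
    "Select every sentence from the context below that expresses a "
    "conditional or causal relationship (hints: 'if ... then ...', "
    "'... causes ...').\n"
    "Context:\n{context}"
)

# Stage 2: assign propositional symbols and write the logical expressions,
# reusing one symbol for propositions with the same meaning and
# prefixing ¬ for propositions with opposite meanings.
EXTRACT_PROMPT = (
    "For the sentences below, define a propositional symbol (A, B, C, ...) "
    "for each distinct proposition. Use the same symbol for propositions "
    "with the same meaning, and prefix ¬ for opposite meanings. Then write "
    "each conditional relationship as an implication 'symbol → symbol'.\n"
    "Sentences:\n{sentences}"
)
```

The two calls to the LLM would fill `{context}` with the original problem and `{sentences}` with the output of the first stage.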

Logic Extension. During the Logic Extension phase, we apply logical reasoning laws to the collection of logical expressions obtained in the Logic Extraction phase. These logical expressions are expanded by a Python program that implements logical deduction. As illustrated in Figure 2, the extracted logical expressions ¬A → ¬B and ¬B → ¬C serve as inputs to our logical deduction program. Through expansion based on the Transitive Law and the Contraposition Law, we finally obtain the new expression C → A, which is used in the next phase.
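The paper's deduction program is not reproduced in this excerpt; a minimal sketch of how such an expansion might be computed as a fixed-point closure, again assuming our own tuple encoding (("not", p) for ¬p, ("imp", p, q) for p → q):

```python
def deduce(exprs):
    """Close a set of implications under contraposition and transitivity,
    normalizing with the Double Negation Law after every step."""
    def simplify(e):
        # Apply ¬¬p ⇔ p recursively.
        if isinstance(e, str):
            return e
        if e[0] == "not":
            inner = simplify(e[1])
            if not isinstance(inner, str) and inner[0] == "not":
                return inner[1]
            return ("not", inner)
        return (e[0], simplify(e[1]), simplify(e[2]))

    known = {simplify(e) for e in exprs}
    changed = True
    while changed:            # iterate until no new expression appears
        changed = False
        new = set()
        for e in known:       # Contraposition: p → q  ⇒  ¬q → ¬p
            if e[0] == "imp":
                new.add(simplify(("imp", ("not", e[2]), ("not", e[1]))))
        for e1 in known:      # Transitivity: p → q, q → r  ⇒  p → r
            for e2 in known:
                if e1[0] == "imp" and e2[0] == "imp" and e1[2] == e2[1]:
                    new.add(simplify(("imp", e1[1], e2[2])))
        if not new <= known:
            known |= new
            changed = True
    return known

# Figure 2's example: from ¬A → ¬B and ¬B → ¬C, the closure
# contains C → A (via transitivity then contraposition).
facts = {("imp", ("not", "A"), ("not", "B")),
         ("imp", ("not", "B"), ("not", "C"))}
```

The loop terminates because, with double negations always eliminated, only finitely many implications can be formed over the extracted symbols and their single negations.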

Logic Translation. During the Logic Translation phase, we use LLMs to translate the extended logical expressions into natural language descriptions. Specifically, we combine the natural language descriptions of the propositional symbols according to the structure of the extended logical expressions, forming a new part of the original input prompt. Through this approach, we inject the deduced logical information as additional augmentation into the original prompt, thus avoiding information loss.
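In the paper this translation is performed by an LLM; a template-based sketch nonetheless shows how symbol descriptions are recombined along the expression's structure. The tuple encoding and the sample descriptions below are our own illustrative assumptions (only "you have keyboarding skills" appears in the paper's examples):

```python
def translate(expr, descriptions):
    """Render a logical expression as a natural language sentence,
    given a natural language description for each propositional symbol."""
    def phrase(e):
        if isinstance(e, str):                  # bare symbol
            return descriptions[e]
        if e[0] == "not":                       # ¬p
            return "it is not the case that " + phrase(e[1])
        if e[0] == "and":                       # p ∧ q
            return phrase(e[1]) + " and " + phrase(e[2])
        return phrase(e[1]) + " implies " + phrase(e[2])
    if expr[0] == "imp":                        # p → q as "If ..., then ..."
        return "If " + phrase(expr[1]) + ", then " + phrase(expr[2]) + "."
    return phrase(expr)

# Hypothetical descriptions for the deduced expression C → A.
descriptions = {"C": "you can write a novel",
                "A": "you have keyboarding skills"}
sentence = translate(("imp", "C", "A"), descriptions)
# → "If you can write a novel, then you have keyboarding skills."
```

The resulting sentence is then appended to the original context, so the LLM answers the question with both the full original information and the deduced implication in view.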