Can storing evolved thoughts prevent inconsistent reasoning in conversations?
When LLMs repeatedly reason over the same conversation history for different questions, they produce inconsistent results. Can storing pre-reasoned thoughts instead of raw history solve this problem?
Think-in-Memory (TiM) addresses a specific failure mode: when memory-augmented LLMs repeatedly recall and reason over the same conversation history for different questions, they produce inconsistent reasoning results. The same facts, recalled for different purposes, yield different inferences — not because the facts changed, but because LLMs generate diverse reasoning paths for the same query.
The solution inverts the standard recall-then-reason cycle. Instead of storing raw history and reasoning over it each time, TiM stores THOUGHTS — the products of reasoning:
- Before responding: recall relevant thoughts from memory (not raw history)
- After responding: post-think — integrate both historical and new thoughts, then update memory
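The two hooks above can be sketched as a minimal read/write cycle. This is an illustrative sketch, not the paper's implementation: memory is a plain list of thought strings, retrieval is a toy keyword-overlap scorer (the TiM paper itself uses locality-sensitive hashing to index thoughts), the response-generating LLM call is elided, and the function names are assumptions.

```python
import re

def recall(memory: list[str], query: str, k: int = 1) -> list[str]:
    """Before responding: retrieve stored thoughts (not raw history).
    Toy keyword-overlap scoring stands in for TiM's LSH-based retrieval."""
    qwords = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        memory,
        key=lambda t: len(qwords & set(re.findall(r"\w+", t.lower()))),
        reverse=True,
    )
    return scored[:k]

def post_think(memory: list[str], new_thought: str) -> list[str]:
    """After responding: integrate the new reasoning product into memory.
    (Real post-thinking would also forget and merge thoughts.)"""
    return memory + [new_thought]

memory = [
    "Alice prefers coffee in the morning",
    "Bob likes hiking on weekends",
]
# Before responding: recall pre-reasoned thoughts relevant to the query.
recalled = recall(memory, "What should I bring to Alice's house?")
# After responding: store the reasoning product for future queries.
memory = post_think(memory, "Bring coffee when visiting Alice")
```

The key design point is that `recall` returns finished inferences, so later queries reuse them instead of re-deriving conclusions from raw history.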
The memory evolves through three operations:
- Insert — add new thoughts derived from the current exchange
- Forget — remove thoughts that are outdated or superseded
- Merge — combine compatible thoughts into more coherent representations
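The three operations can be sketched on a toy in-memory store. The class and field names here are illustrative assumptions, and the "superseded?" and "compatible?" judgments that TiM would delegate to an LLM are replaced with naive heuristics (newest-wins for forget, concatenation for merge):

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Thought:
    text: str
    entity: str   # who or what the thought is about
    seq: int      # insertion order; larger means newer

class ThoughtMemory:
    """Toy store illustrating TiM's insert/forget/merge operations."""

    def __init__(self) -> None:
        self.thoughts: list[Thought] = []
        self._seq = count()

    def insert(self, text: str, entity: str) -> None:
        # Insert: add a new thought derived from the current exchange.
        self.thoughts.append(Thought(text, entity, next(self._seq)))

    def forget(self, entity: str) -> None:
        # Forget: drop superseded thoughts about the entity; here we
        # simply keep only the newest one.
        about = [t for t in self.thoughts if t.entity == entity]
        if about:
            latest = max(about, key=lambda t: t.seq)
            self.thoughts = [t for t in self.thoughts
                             if t.entity != entity or t is latest]

    def merge(self, entity: str) -> None:
        # Merge: combine compatible thoughts about the entity into one
        # representation (naive concatenation here).
        about = [t for t in self.thoughts if t.entity == entity]
        if len(about) > 1:
            merged = Thought("; ".join(t.text for t in about), entity,
                             next(self._seq))
            self.thoughts = [t for t in self.thoughts
                             if t.entity != entity]
            self.thoughts.append(merged)

mem = ThoughtMemory()
mem.insert("Alice drinks tea", "alice")
mem.insert("Alice prefers coffee in the morning", "alice")
mem.forget("alice")    # the tea thought is superseded and dropped
mem.insert("Alice is vegetarian", "alice")
mem.merge("alice")     # one coherent thought about Alice remains
```

A usage note: forget and merge are what make the memory *evolve* rather than merely accumulate, keeping retrieval from surfacing stale or fragmented thoughts.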
This is effectively sleep-time compute applied to conversational memory. As in "Can models precompute answers before users ask questions?", the principle is the same: rather than reasoning over raw history at query time (expensive, inconsistent), reason once during a post-thinking phase and store the result. Future queries retrieve pre-reasoned thoughts rather than re-deriving them.
The inconsistent-reasoning problem is not trivial. If a user asks "what does Alice prefer for breakfast?" and later "what should I bring to Alice's house?", both queries retrieve the same conversational evidence about Alice, but the different framings can lead the model to different conclusions from identical evidence. Storing the post-thinking thought ("Alice prefers coffee in the morning") eliminates the inconsistency because the reasoning is done once and reused.
Source: Memory
Related concepts in this collection
- Can models precompute answers before users ask questions?
  Most LLM applications maintain persistent state across interactions. Could models use idle time between queries to precompute useful inferences about that context, reducing latency when users actually ask?
  TiM is sleep-time compute applied to conversation memory: reason once, store the result, retrieve on demand.
- Does a model improve by arguing with itself?
  When models revise their own reasoning in response to self-generated criticism, do they converge on better answers or worse ones? And how does that compare to challenges from other models?
  TiM's post-thinking operates on similar terrain: repeated reasoning over the same material can degenerate.
- Does reflection in reasoning models actually correct errors?
  When reasoning models reflect on their answers, do they genuinely fix mistakes, or merely confirm what they already decided? Understanding this matters for designing better training and inference strategies.
  TiM's post-thinking aims for consolidation, not correction, sidestepping the confirmatory-reflection problem.
Original note title
post-thinking stores evolved thoughts in memory to eliminate repeated reasoning over conversation history