Tags: Conversational AI Systems · Agentic and Multi-Agent Systems · Psychology and Social Cognition

How should agents decide what memories to keep?

Agent memory management splits into two approaches: agents autonomously recognizing important information, or programmatic triggers firing at fixed points. Understanding this choice reveals why different memory architectures prioritize different information types.

Note · 2026-02-23 · sourced from Memory
Related: Why do AI conversations reliably break down after multiple turns? · What kind of thing is an LLM really? · How should researchers navigate LLM reasoning research?

Agent memory management — how to transfer information between the LLM's context window and external storage — decomposes into two fundamentally different paths that parallel the human explicit/implicit memory distinction:

Explicit memory (hot path): The agent autonomously recognizes important information during conversation and decides to remember it via tool calling. This mirrors human conscious storage (episodic and semantic memory). The advantage: context-sensitive importance assessment — the agent can judge what matters based on the current conversational state. The challenge: implementing robust importance recognition is hard. What counts as "important enough to remember" depends on the user, the task, and future needs that can't be predicted.
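The hot path can be sketched as a tool the agent calls when it judges something worth keeping. This is a minimal sketch under assumptions: `save_memory`, the schema fields, and the importance threshold are all illustrative, not a real framework's API.

```python
from dataclasses import dataclass, field

# Hypothetical tool schema exposed to the LLM; the agent itself decides
# when to invoke it (explicit, hot-path memory generation).
SAVE_MEMORY_TOOL = {
    "name": "save_memory",
    "description": "Store a fact the user will likely need in future sessions.",
    "parameters": {
        "type": "object",
        "properties": {
            "fact": {"type": "string"},
            "importance": {"type": "number", "minimum": 0, "maximum": 1},
        },
        "required": ["fact", "importance"],
    },
}

@dataclass
class MemoryStore:
    threshold: float = 0.5                       # cutoff for agent-rated importance
    facts: list = field(default_factory=list)

    def handle_tool_call(self, fact: str, importance: float) -> bool:
        """Persist the fact only if the agent rated it important enough."""
        if importance >= self.threshold:
            self.facts.append(fact)
            return True
        return False
```

The hard part the sketch hides is exactly the challenge above: the model, not the code, has to produce a trustworthy `importance` score.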

Implicit memory (background): Memory management is programmatically defined at specific trigger points rather than decided by the agent: for example, summarizing the message buffer when the context window fills, or expiring stale memories after a fixed time-to-live.
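A background trigger of this kind can be sketched as follows. Assumptions: `summarize` stands in for an LLM summarization call, and the eviction policy (keep the two most recent messages) is arbitrary for illustration.

```python
def summarize(messages):
    # Stand-in for an LLM summarization call (assumption, not a real API).
    return f"[summary of {len(messages)} earlier messages]"

def on_message(buffer, summary, new_msg, max_messages=4):
    """Programmatic trigger point: fires on every append, no agent decision."""
    buffer.append(new_msg)
    if len(buffer) > max_messages:
        evicted, buffer = buffer[:-2], buffer[-2:]   # keep the 2 most recent
        summary = summarize(evicted)
    return buffer, summary
```

The key contrast with the hot path: the trigger condition is a hard-coded predicate on buffer size, so it fires identically regardless of what the conversation is about.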

The CoALA vs Letta taxonomy debate reveals a deeper design question about working memory. CoALA treats working memory as a single category. Letta splits it into message buffer (recent messages from current conversation) and core memory (specific information the agent self-manages, like user's birthday). This split matters because core memory is agent-curated while the message buffer is conversation-driven — they have different update mechanisms and different information types.
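The split can be made concrete with a small sketch. Field and method names are illustrative of the Letta-style design, not Letta's actual API; the point is that the two slots have different update mechanisms.

```python
from dataclasses import dataclass, field

@dataclass
class WorkingMemory:
    message_buffer: list = field(default_factory=list)   # conversation-driven
    core_memory: dict = field(default_factory=dict)      # agent-curated

    def append_message(self, msg: str) -> None:
        """Conversation-driven update: every turn lands here automatically."""
        self.message_buffer.append(msg)

    def core_memory_replace(self, key: str, value: str) -> None:
        """Agent-curated update: happens only when the agent calls this tool."""
        self.core_memory[key] = value
```

The message buffer fills passively as the conversation proceeds; core memory changes only through an explicit edit operation the agent chooses to invoke.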

Neither taxonomy cleanly maps human memory types onto agent implementations.

The six core components of agent memory management — generation, storage, retrieval, integration, updating, and deletion (forgetting) — each face the explicit/implicit design choice independently. You might generate memories explicitly (hot-path recognition) but delete them implicitly (TTL-based expiration). The design space is combinatorial.
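One point in that combinatorial space, explicit generation paired with implicit TTL-based deletion, can be sketched as follows. The class and its injectable `now` parameter are illustrative assumptions for testability, not a real library.

```python
import time

class TTLMemory:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._items = {}   # fact -> stored_at timestamp

    def remember(self, fact: str, now=None) -> None:
        """Explicit generation: invoked from a hot-path tool call."""
        self._items[fact] = time.time() if now is None else now

    def sweep(self, now=None) -> list:
        """Implicit deletion: a background job expires old facts on schedule."""
        now = time.time() if now is None else now
        expired = [f for f, t in self._items.items() if now - t > self.ttl]
        for f in expired:
            del self._items[f]
        return expired
```

Generation requires a context-sensitive judgment; deletion here requires none, which is exactly why the two choices can be made independently for each component.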

Building on "Can AI agents learn when they have something worth saying?", the inner thoughts mechanism could serve as the importance-recognition layer for explicit hot-path memory: the agent's continuous covert thoughts identify what is worth remembering, addressing the "what matters" problem.
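A speculative sketch of that coupling: each covert thought passes through a memorability gate before reaching storage. `score_memorability` stands in for a model judging its own thought; nothing here is a real API, and the threshold is arbitrary.

```python
def process_turn(covert_thoughts, store, score_memorability, threshold=0.7):
    """Route each covert thought through a memorability gate (hypothetical)."""
    for thought in covert_thoughts:
        if score_memorability(thought) >= threshold:
            store.append(thought)
    return store
```

Under this framing, the importance-recognition problem is not solved so much as relocated into the quality of the covert-thought scorer.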




agent memory has two distinct management paths — explicit hot-path memory via autonomous recognition and implicit background memory via programmatic processing