Charles Packer — Building Machines That Learn and Remember

Why Charles Packer Matters

Charles Packer is solving one of the fundamental problems in personal AI: memory. While most AI interactions are stateless (each conversation starts from zero), Packer’s work on MemGPT and Letta creates agents that maintain persistent memory across sessions, learn from experience, and improve themselves over time.

His key insight: treat LLMs like operating systems with virtual memory. Just as traditional OS pages memory between RAM and disk, MemGPT pages context between the LLM’s limited window and external storage. This enables “unbounded context” — agents that truly remember everything.
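The paging analogy can be sketched in a few lines. This is a minimal illustration of the idea, not MemGPT's actual implementation: the class name, the message-count "budget", and the evict-oldest policy are all assumptions made for the example.

```python
# Illustrative sketch of OS-style context paging: when the in-context
# message list exceeds its "RAM" budget, the oldest messages are
# evicted to external "disk" storage, where they remain searchable.

class ContextManager:
    def __init__(self, max_in_context=4):
        self.max_in_context = max_in_context
        self.in_context = []   # analogous to RAM: what the LLM sees
        self.external = []     # analogous to disk: unbounded storage

    def add(self, message):
        self.in_context.append(message)
        # Page out the oldest messages once the window is full.
        while len(self.in_context) > self.max_in_context:
            self.external.append(self.in_context.pop(0))

    def recall(self, keyword):
        # Page matching memories back in from external storage.
        return [m for m in self.external if keyword in m]


mgr = ContextManager(max_in_context=2)
for msg in ["user likes tea", "user lives in Berlin", "user has a dog"]:
    mgr.add(msg)

print(mgr.in_context)     # only the two most recent messages remain
print(mgr.recall("tea"))  # the evicted memory is still retrievable
```

The key property is that nothing is ever lost: eviction moves information down the hierarchy rather than discarding it, which is what makes "unbounded context" possible.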

Core Philosophy

LLMs as Operating Systems

Packer’s MemGPT paper (2023) introduced a paradigm shift:

“Large language models are just one (amazing) piece of a complete agentic system — to build AI that can reason, plan, learn, and remember, we need to engineer the new computer.”

The LLM itself is just the CPU. A complete AI system needs memory management, tool use, and persistent state — just like traditional operating systems.

Self-Improving Machines

Letta’s tagline captures Packer’s vision: “building the self-improving machine.” Rather than static models that only learn during training, Letta agents maintain persistent memory, learn from new experiences during deployment, and improve themselves over time.

Sleep-Time Compute

A breakthrough concept from Letta (2025): AI agents shouldn’t sit idle between interactions. During “sleep time,” they can consolidate memories, reorganize what they know, and learn from recent interactions.

This is inspired by how human memory consolidation works during sleep.

Key Projects

MemGPT (2023)

The foundational research paper: “MemGPT: Towards LLMs as Operating Systems”

Core innovation: Virtual context management inspired by hierarchical memory systems in traditional operating systems. The LLM manages its own memory by moving information between two tiers:

  1. Main context: the LLM’s limited context window
  2. External context: unbounded storage outside the window

Impact: Enabled two critical capabilities:

  1. Document analysis — analyzing documents far exceeding context windows
  2. Multi-session chat — agents that remember across conversations

Letta (2024-)

The company built on the MemGPT research, focused on stateful agents: AI systems that maintain persistent memory and keep learning during deployment.

Architecture Insights

The Agent Memory Problem

Traditional RAG (Retrieval-Augmented Generation) is insufficient for agent memory:

  1. RAG is query-based — agent must know what to search for
  2. No temporal understanding — doesn’t track how information evolves
  3. Passive retrieval — doesn’t actively consolidate or improve

MemGPT/Letta solves this with active memory management — the agent decides what to remember, when to retrieve, and how to organize.
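Active memory management can be pictured as the agent emitting memory operations as tool calls, rather than a pipeline retrieving on every query. The tool names below echo MemGPT-style functions (archival insert/search), but this dispatcher is a hand-rolled sketch for illustration, not Letta's API.

```python
# Sketch of "active" memory management: the agent itself decides when
# to store and when to search, by emitting tool calls that a runtime
# dispatches. A real LLM would produce tool_call; here it is hand-made.

archival = []  # stand-in for the agent's external archival store

def archival_memory_insert(content: str) -> str:
    archival.append(content)
    return "ok"

def archival_memory_search(query: str) -> list:
    return [m for m in archival if query.lower() in m.lower()]

TOOLS = {
    "archival_memory_insert": archival_memory_insert,
    "archival_memory_search": archival_memory_search,
}

def dispatch(tool_call: dict):
    # Route an agent-emitted tool call to the matching memory function.
    return TOOLS[tool_call["name"]](**tool_call["args"])

dispatch({"name": "archival_memory_insert",
          "args": {"content": "User prefers concise answers"}})
hits = dispatch({"name": "archival_memory_search",
                 "args": {"query": "concise"}})
print(hits)
```

The contrast with RAG is in who initiates: here the agent chooses what to write down and when to look it up, so memory operations can happen at any point in a conversation, not only at query time.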

Memory Block Design

Letta structures context into discrete “memory blocks”:

memory_blocks = [
    {
        "label": "human",
        "value": "Name: User. Preferences: prefers concise responses..."
    },
    {
        "label": "persona", 
        "value": "I am a helpful assistant with expertise in..."
    }
]

This gives agents consistent, usable memory rather than raw context dumps.
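One way to see why blocks beat raw dumps: they can be compiled into a structured system prompt on every turn. The tag-style rendering below is an assumption for illustration; Letta's actual prompt compilation may differ.

```python
# Sketch: render labeled memory blocks into a structured system prompt,
# so each turn starts from consistent, editable state rather than an
# unstructured transcript. The tag format here is illustrative only.

memory_blocks = [
    {"label": "human",
     "value": "Name: User. Preferences: prefers concise responses."},
    {"label": "persona",
     "value": "I am a helpful assistant with expertise in memory systems."},
]

def compile_system_prompt(blocks):
    # Wrap each block's value in a section named by its label.
    sections = [f"<{b['label']}>\n{b['value']}\n</{b['label']}>"
                for b in blocks]
    return "\n".join(sections)

prompt = compile_system_prompt(memory_blocks)
print(prompt)
```

Because each block is addressable by label, the agent (or a sleep-time process) can rewrite one section, say, updating the "human" block when a preference changes, without touching the rest of the prompt.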

Sleep-Time Architecture

A sleep-time-enabled agent is actually two agents:

  1. Primary agent — handles real-time conversation, uses tools
  2. Sleep-time agent — manages memory for both agents asynchronously

The primary agent focuses on interaction; the sleep-time agent focuses on learning.
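The division of labor can be sketched as two objects sharing one memory store. The consolidation step here (deduplicating raw events into durable memory) is a toy stand-in for what would really be an LLM-driven rewrite; all class names are invented for the example.

```python
# Sketch of the primary / sleep-time split: the primary agent appends
# raw events during conversation, and the sleep-time agent later folds
# them into consolidated memory while the system is idle.

class SharedMemory:
    def __init__(self):
        self.raw_events = []     # unprocessed interaction log
        self.consolidated = []   # durable, deduplicated memory

class PrimaryAgent:
    def __init__(self, memory):
        self.memory = memory
    def handle(self, user_msg):
        # Real-time path: just log and respond, no memory work.
        self.memory.raw_events.append(user_msg)
        return f"ack: {user_msg}"

class SleepTimeAgent:
    def __init__(self, memory):
        self.memory = memory
    def consolidate(self):
        # Idle-time path: fold raw events into durable memory.
        for event in self.memory.raw_events:
            if event not in self.memory.consolidated:
                self.memory.consolidated.append(event)
        self.memory.raw_events.clear()

mem = SharedMemory()
primary, sleeper = PrimaryAgent(mem), SleepTimeAgent(mem)
primary.handle("I moved to Berlin")
primary.handle("I moved to Berlin")  # duplicate event
sleeper.consolidate()                # runs between interactions
print(mem.consolidated)
```

Keeping consolidation off the real-time path is the point: the primary agent stays fast, while the expensive learning work happens asynchronously.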

Relevance to Personal AI

Packer’s work is foundational for anyone building personal AI systems:

For Personal Assistants

An AI that truly remembers you: preferences, history, and context persist across sessions instead of resetting with every conversation.

For Knowledge Management

Agents that can track how information evolves over time and actively consolidate it, rather than passively retrieving on demand.

For Coding Agents

Letta Code demonstrates memory-first development: a coding agent that carries project context and lessons from past sessions forward rather than starting fresh each time.

Key Papers & Resources

The foundational paper is “MemGPT: Towards LLMs as Operating Systems” (Packer et al., 2023); Letta is the company and platform that grew out of this research.

Quotes

On the vision:

“We’re solving AI’s memory problem. Create agents that remember everything, learn continuously, and improve themselves over time.”

On stateful agents:

“Stateful agents: AI systems that maintain persistent memory and actually learn during deployment, not just during training.”

On the fundamental limitation:

“Today’s AI agents struggle to remember previous mistakes, and are unable to learn from new experiences.”

Takeaways for Builders

  1. Think in systems, not models — The LLM is just one component. Build the full “operating system” for AI.

  2. Memory is fundamental — Without persistent memory, agents can’t truly learn or personalize.

  3. Active > passive memory — Don’t just store information; let agents manage their own memory.

  4. Use idle time — Sleep-time compute is a massive untapped resource for improvement.

  5. Model-agnostic design — Build systems that can evolve with the next generation of models.

Packer’s work answers a crucial question: how do we go from chatbots to AI systems that actually know us and get better over time? The answer is memory — and treating LLMs as the foundation for a new kind of computing.

Topics: memory agents personal-ai research stateful-agents llm-os