Project Athena


project athena is a persistent memory layer for chatgpt that treats memory like a save file, not a chat log.

the problem: every new chatgpt thread starts clean. you re-explain your project. you repeat last week’s decisions. you dump context like you’re briefing a colleague with permanent amnesia.

project athena solves this by building memory outside chatgpt. your context, decisions, and open loops live in structured files. when you start a new thread, athena injects the relevant memory.

first spotted in signals — 2026-02-11.

github.com/winstonkoh87/Athena-Public
reddit discussion

what it is

a framework for giving chatgpt persistent memory across sessions.

instead of chatgpt’s built-in memory (which stores preferences and a few facts), athena maintains:

→ your project context
→ decisions and the logic behind them
→ open loops and status
→ the “why not” — options you rejected and why

the memory lives in files you control. when you start a new chat, athena retrieves the relevant context and injects it into the first message.
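to make that concrete, a structured memory file in that spirit might look like this (a hypothetical layout and schema, not athena’s actual format):

```yaml
# memory/pricing.yaml : hypothetical example of one structured memory file
project: my-saas-product
updated: 2026-02-10
decisions:
  - id: D-014
    what: flat pricing, no tiers
    why: tiers added support load without extra revenue in early tests
open_loops:
  - "waiting on annual-plan churn numbers"
rejected:
  - option: usage-based pricing
    why_not: unpredictable invoices hurt trust with small teams
```

the point isn’t the exact schema. it’s that decisions, open loops, and rejected options each get an explicit slot instead of being buried in chat history.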

the developer reports testing it across 1,000+ sessions.

why it matters

chatgpt’s memory is tied to the thread, not you. each new chat is a clean room. that’s great for safety. terrible for work.

if you’re doing anything remotely complex — research, strategy, product, content — the assistant’s memory decay becomes a tax. you pay it in three currencies:

→ time (re-explaining)
→ quality (missing assumptions)
→ trust (you stop giving it real context)

athena flips the model. memory isn’t a feature. it’s infrastructure.

the real issue isn’t that LLMs can’t remember. it’s that the memory is in the wrong place.

the save-game framing

the developer who built athena called it a “save game.” that framing is perfect because it implies three things:

  1. you should be able to resume state
  2. your state should be portable
  3. memory should survive the UI changing

that’s the missing layer in personal AI. not smartness. continuity.

how it works

athena stores your memory in structured formats (markdown, yaml, or similar). when you need to start a new chatgpt session:

  1. athena retrieves relevant context from your memory files
  2. it formats the context as a prompt
  3. you paste it into the first message of your new thread
  4. chatgpt now has the full context

as the session progresses, you update your memory files with new decisions and context.

it’s manual for now. the roadmap includes automation: auto-retrieval, auto-injection, auto-updates.
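the retrieve-and-format steps above can be sketched in a few lines of python (the file layout and function names here are assumptions for illustration, not athena’s actual code):

```python
# minimal sketch of the manual loop: pick relevant memory files,
# wrap them as the first message of a new thread
from pathlib import Path


def load_memory(memory_dir: str, topics: list[str]) -> str:
    """collect memory files whose filenames match the session's topics."""
    chunks = []
    for path in sorted(Path(memory_dir).glob("*.md")):
        if any(topic in path.stem for topic in topics):
            chunks.append(path.read_text())
    return "\n\n".join(chunks)


def format_context(memory: str) -> str:
    """wrap retrieved memory as a paste-ready opening prompt."""
    return (
        "context from previous sessions (treat as ground truth):\n\n"
        f"{memory}\n\n"
        "acknowledge the context, then wait for my first question."
    )
```

filename matching is the crudest possible retrieval; the same shape works with tags, embeddings, or anything smarter.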

the memory model

if you’re building a personal OS, your memory layer should look less like a chat log and more like a git history:

→ stable IDs for decisions
→ links between assumptions and outcomes
→ the ability to diff what changed

athena implements this. your memory isn’t a blob. it’s structured, versioned, queryable.
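a minimal version of that git-style model, sketched in python (the record fields and ID scheme are assumptions, not athena’s schema):

```python
# decision records with stable IDs, plus a diff between two memory snapshots
from dataclasses import dataclass, field


@dataclass
class Decision:
    id: str                      # stable ID, e.g. "D-014"
    summary: str
    rationale: str
    assumptions: list[str] = field(default_factory=list)
    status: str = "active"       # "active" or "superseded"


def diff_memory(old: dict[str, Decision],
                new: dict[str, Decision]) -> dict[str, list[str]]:
    """report which decision IDs were added, removed, or changed."""
    return {
        "added": [i for i in new if i not in old],
        "removed": [i for i in old if i not in new],
        "changed": sorted(i for i in old.keys() & new.keys()
                          if old[i] != new[i]),
    }
```

stable IDs are what make the diff meaningful: a decision that changes keeps its identity, so you can see *what* shifted rather than just that the file is different.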

who this is for

→ people doing complex, long-term work with chatgpt
→ anyone tired of re-explaining context every session
→ teams that need shared memory across multiple chatgpt users
→ developers building personal AI workflows

who this is NOT for

→ casual chatgpt users (the built-in memory is fine)
→ anyone who doesn’t want to manage files manually
→ people looking for a turnkey, zero-config solution

the pattern

athena is part of a broader shift: memory as a product, not an afterthought.

the early AI tools treated memory as a nice-to-have. the next wave treats it as infrastructure.

other tools in this space:

→ rowboat — markdown knowledge graphs for agents
→ claude life assistant — psychological modeling via markdown
→ obsidian + agent workflows — using your notes as agent context

the common thread: memory lives in files you control. not in the assistant’s context window. not in a proprietary database.

self.md take

continuity is the missing layer in personal AI.

the first version doesn’t need to be fancy. a single “state file” that gets updated daily already beats 90% of workflows.

the power move: treat memory like a product. define what counts as memory (decisions, preferences, status, open questions). store it in a simple, queryable format. build a retrieval layer that injects only what matters.
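a starting point can be as small as this (a hypothetical single state file; the sections mirror the categories above, so adapt them to your own work):

```markdown
# state.md (updated daily)

## decisions
- 2026-02-10: flat pricing, no tiers (tiers added support load in early tests)

## preferences
- terse answers, bullets over prose

## status
- landing page copy: drafted, awaiting review

## open questions
- does annual billing reduce churn enough to justify the discount?
```

paste this at the top of a new thread and you’ve already implemented the core of the pattern.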

the big shift: stop asking “how do I get chatgpt to remember?” start asking “what’s my save file, and where do I keep it?”

athena answers that question. the memory is yours. the file is yours. the control is yours.

the next chapter: automation. right now you manage the files manually. the endgame is a system that reads your work, updates your memory automatically, and injects context without you thinking about it.

nobody’s shipping that yet. but athena is the foundation.


→ how to give ChatGPT long-term memory (the save game approach) — deep dive article
→ rowboat — markdown knowledge graphs for agent context
→ personal AI OS — the broader ecosystem
→ signals — save games, boundary leaks, and the self-hosted exodus — where this was first spotted