# Portable AI Identity: Own Your Context Across Platforms
Your ChatGPT memory stays in ChatGPT. Your Claude memory stays in Claude. Switch assistants, start from zero.
This isn’t a technical limitation. It’s a business model.
## what is portable AI identity?
Portable AI identity means your AI context—preferences, communication style, project history, accumulated knowledge—travels with you across platforms. Instead of being locked into one vendor’s memory system, you keep your own identity files that any AI can read.
The idea: if AI assistants can read markdown files, and you control those files, your identity becomes portable by default.
## the vendor lock-in problem
Every major AI platform builds memory as a feature, not a standard. Spend six months with an AI assistant and this is what you get:
| Platform | What it knows about you | Can you export it? |
|---|---|---|
| ChatGPT | Memory entries, preferences, conversation patterns | JSON dump, not reusable |
| Claude | Project context, working style | No persistent memory to export |
| Gemini | Preferences, past interactions | Buried in Google account |
The result: switching costs. After months teaching an AI your work style, starting fresh feels like losing a colleague who knew your codebase. By design.
## the export illusion
Some platforms offer “export your data” features. Downloading a JSON blob with thousands of fragmented memory entries isn’t portability—it’s a compliance checkbox. Try uploading that export to a competitor. It won’t parse.
True portability requires:
- human-readable format
- standard structure any AI can parse
- files you control on your own storage
- no proprietary encoding
## the solution: user-owned identity files
The fix is almost boring: keep your AI identity in markdown files on your local machine.
```
~/.ai/
├── USER.md          # Who you are, preferences, communication style
├── IDENTITY.md      # The AI's persona and operating instructions
├── memory/
│   ├── 2026-01.md   # Monthly logs
│   └── facts.md     # Durable knowledge about you
└── projects/
    ├── work.md      # Work context
    └── personal.md  # Personal projects
```
Any AI that can read files—Claude Code, Open Interpreter, custom agents—immediately inherits your context. Switch models, platforms, providers. Your identity persists because you own the files.
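As a sketch of how a file-reading agent might consume this layout, the loader below concatenates the identity files into a single system prompt. The `loadIdentity` function, the read order, and the comment markers are illustrative assumptions, not any platform's API.

```typescript
// Hypothetical loader: assemble a system prompt from the ~/.ai layout above.
// Nothing here is a real platform API; adapt paths to your own setup.
import { readFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

function loadIdentity(root: string = join(homedir(), ".ai")): string {
  const parts: string[] = [];
  // Order is a choice: persona first, then user profile, then durable facts.
  for (const rel of ["IDENTITY.md", "USER.md", join("memory", "facts.md")]) {
    const path = join(root, rel);
    if (existsSync(path)) {
      // Mark each section so the model (and you) can see where it came from.
      parts.push(`<!-- ${rel} -->\n` + readFileSync(path, "utf8").trim());
    }
  }
  return parts.join("\n\n");
}

// Pass the result as the system prompt to whatever model you run:
// const systemPrompt = loadIdentity();
```

Missing files are simply skipped, so the same loader works on a minimal setup (just `USER.md`) or the full tree.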
## USER.md: your portable profile
The core of portable identity is a file describing you to any AI:
```markdown
# USER.md

## About Me
- Software engineer, 8 years experience
- Based in Lisbon, GMT timezone
- Primary language: TypeScript, some Python

## Communication Preferences
- Direct feedback, skip pleasantries
- Show code, not descriptions
- I prefer "you could also" over "you should"
- Skip warnings I've already acknowledged

## Current Context
- Working on self.md project
- Deploy target: Vercel
- Using Claude for coding, GPT for research

## Don't Repeat
- I know Git, don't explain basic commands
- I understand async/await
- Skip TypeScript setup instructions
```
This solves the cold-start problem. Instead of spending weeks training a new AI assistant, you drop in your context file and keep going.
## IDENTITY.md: portable agent instructions
For agents that maintain a persona, the identity file travels too:
```markdown
# IDENTITY.md

## Role
Personal AI assistant. Technical focus, minimal small talk.

## Operating Style
- Lowercase preferred, no emojis
- Provide working code, not explanations of code
- Ask clarifying questions upfront, not mid-task
- Default to the simpler solution

## Capabilities
- Full file system access to ~/projects
- Web search when needed
- No external API calls without confirmation

## Memory
- Read memory/*.md on startup
- Append significant facts to memory/facts.md
- Log daily activity to memory/YYYY-MM-DD.md
```
This separates agent configuration from whatever platform runs it. Upgrade Claude to Opus, switch to a local model, try a new framework. The agent behaves consistently because the instructions are yours.
## why markdown?
Markdown isn’t just convenient. It’s strategic:
| Property | Why it matters |
|---|---|
| Human-readable | You can edit it manually |
| LLM-native | Every model handles markdown well |
| Diffable | Track changes with git |
| Tooling-agnostic | Works in any editor, any platform |
| Future-proof | Plain text survives format wars |
Compare this to ChatGPT’s memory: opaque entries you can’t bulk edit, can’t version control, can’t move. You’re renting your own preferences.
## practical portability patterns
### pattern 1: cross-platform context
Using multiple AI tools for different tasks? Share context across them.
```
~/ai-context/
├── base.md           # Shared identity (all platforms read this)
├── claude.md         # Claude-specific additions
├── chatgpt-paste.md  # Manual paste for ChatGPT
└── cursor.md         # Cursor rules integration
```
Each platform gets base context plus its specific config. Update base.md once, every tool gets the change.
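The merge step can be sketched in a few lines: read `base.md`, then append the platform overlay if one exists. `contextFor` and the `~/ai-context` default are this article's conventions, not any tool's requirement.

```typescript
// Sketch: compose per-platform context from a shared base plus an
// optional platform overlay, matching the ~/ai-context layout above.
import { readFileSync } from "node:fs";
import { join } from "node:path";

function contextFor(
  platform: string,
  root: string = join(process.env.HOME ?? "", "ai-context")
): string {
  const base = readFileSync(join(root, "base.md"), "utf8");
  let overlay = "";
  try {
    overlay = readFileSync(join(root, `${platform}.md`), "utf8");
  } catch {
    // No platform-specific file: base context alone is still valid.
  }
  return [base.trim(), overlay.trim()].filter(Boolean).join("\n\n");
}

// contextFor("claude")  → base.md + claude.md
// contextFor("chatgpt") → base.md only, if no chatgpt.md exists
```

Run it in a pre-commit hook or a shell alias and each tool's context file stays derived from the single source of truth.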
### pattern 2: project-specific identity
Context changes per project. Maintain separate identity layers:
```markdown
# project/AI-CONTEXT.md

## Project: payment-service

### Tech Stack
- Node.js 22, TypeScript 5.4
- PostgreSQL 16, Drizzle ORM
- Deployed on Fly.io

### Conventions
- Use Zod for validation
- Errors go to Sentry, logs to Axiom
- PR titles: [service] brief description

### Active Work
- Migrating from Stripe to internal billing
- Don't suggest Stripe integrations
```
This file lives in the repo. Any AI opening the project immediately knows the context. New team members, new AI tools, same understanding.
### pattern 3: memory that moves with you
Instead of trusting platform memory, maintain your own:
```markdown
# memory/facts.md

## Technical
- Prefers pnpm over npm (installed globally)
- Uses Neovim with lazy.nvim
- SSH keys on 1Password agent

## Preferences
- Vitest over Jest
- Hono over Express
- Tailwind over CSS-in-JS

## Current Projects
- self.md: personal AI documentation site
- clawdbot: multi-channel AI assistant
```
When you switch AI platforms—or when your current platform loses its memory (happens more than you’d think)—you have the source of truth locally.
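Keeping your own memory file honest is mostly an append-with-dedup operation. The sketch below adds a fact to `facts.md` only if an identical line isn't already there; `rememberFact` is a hypothetical helper, not part of any framework.

```typescript
// Sketch: append a durable fact to memory/facts.md, skipping exact
// duplicates so repeated sessions don't bloat the file.
import { readFileSync, appendFileSync, existsSync } from "node:fs";

function rememberFact(factsPath: string, fact: string): boolean {
  const line = `- ${fact.trim()}`;
  const existing = existsSync(factsPath) ? readFileSync(factsPath, "utf8") : "";
  // Exact-line match only; semantic dedup is left to the agent (or to you).
  if (existing.split("\n").includes(line)) return false;
  const sep = existing === "" || existing.endsWith("\n") ? "" : "\n";
  appendFileSync(factsPath, sep + line + "\n");
  return true;
}
```

An agent following the IDENTITY.md memory rules above could call this after each session; because the output is plain markdown, you can still review and prune it by hand.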
## platform memory vs portable identity
| Aspect | Platform Memory | Portable Identity |
|---|---|---|
| Location | Their servers | Your files |
| Format | Proprietary | Markdown |
| Export | Partial, unusable | Already local |
| Cross-platform | No | Yes |
| Version control | No | Git |
| Offline access | No | Yes |
| Survives platform change | No | Yes |
## tools that support portable identity
Some tools already work this way:
- Claude Code / Claude Desktop → reads local files, respects `CLAUDE.md` per project.
- Cursor → supports `.cursorrules` for project context.
- Open Interpreter → filesystem access by design.
- Custom MCP setups → memory as local files, not a platform feature.
See Personal AI Operating Systems for options that prioritize user-controlled context.
## building your portable identity
Start with three files:
1. `~/.ai/USER.md` → who you are: preferences, expertise, communication style. every AI reads this.
2. `~/.ai/memory/facts.md` → durable knowledge about you. update as context evolves.
3. `project/AI-CONTEXT.md` → per-project context. lives in the repo, travels with the code.
Keep them in git. Back up wherever you back up important files. When you try a new AI assistant, point it at these files first.
## the portability movement
The AI memory portability problem is getting attention. Mem0 attempts cross-platform memory. Phoenix Grove’s Memory Freedom petition asks vendors for standardized exports. MCP enables shared memory layers.
But these solutions add complexity. The simplest version of portable identity is also the most robust: plain text files you control.
Every layer you add is another dependency, another vendor that might change their API. Markdown files don’t deprecate. They don’t have breaking changes. They survive whatever comes next.
Your AI context is too valuable to rent from platforms that treat portability as a threat to retention.
## related
- Agent Memory Systems → how agents implement memory architecture
- Context Management → keeping your AI focused and effective
- Memory Consolidation → compressing memory without losing value
- Personal AI Operating Systems → tools that respect user-owned context
Next: Memory Consolidation