markdown files are the new API layer for AI agents

by Ray Svitla


something weird happened to markdown files.

for twenty years, .md was where you explained what your code does. README.md. CONTRIBUTING.md. maybe a CHANGELOG if you were feeling responsible. documentation. the thing nobody reads.

then, sometime around late 2025, the files started talking back.

the accidental protocol

here’s what the last week of GitHub trending looks like:

Backlog.md (4,600+ stars) — manages project collaboration between humans and AI agents. git-native. the backlog isn’t in Jira anymore. it’s a text file your AI can read, update, and reason about.

OpenAI/skills (1,400+ stars) — the official skills catalog for Codex. not a JSON schema. not a YAML config. markdown files. plain text instructions that tell AI what it can do.

tweakcc (1,000+ stars) — customize Claude Code’s system prompts, create custom toolsets, manage AGENTS.md. your AI’s personality and capabilities, defined in a text file you edit in vim.

agentseed — auto-generates AGENTS.md from your codebase. because writing the instruction manual for your AI collaborator is now a thing that needs tooling.

notice the pattern. nobody designed this. there was no RFC, no committee, no W3C working group. developers just started writing .md files that weren’t meant for humans.

why markdown and not, say, anything else

the obvious question: why not JSON? YAML? TOML? a proper schema with validation and types?

because AI agents don’t need types. they need context.

a JSON config tells a program what to do. a markdown file tells an agent why, how, and what to watch out for. “don’t touch the production database” isn’t a boolean flag — it’s a sentence in AGENTS.md that your AI actually understands.
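to make that concrete, here's a sketch of the difference. the section name and wording below are made up, not from any spec — but where a config flag like `"allow_prod_writes": false` can only gate behavior, the markdown version carries reasoning the agent can generalize from:

```markdown
## database rules
never write to the production database. if a task seems to
require it, stop and ask; it probably means the task is wrong.
```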

markdown is the lowest-friction way to give an AI the same context you’d give a new hire on their first day. here’s the codebase. here’s what matters. here’s what will get you fired.

the lack of structure is the feature. human language is the API.

the .md stack

if you’re working with AI coding agents in 2026, you probably have some version of this:

README.md          → what this project is (for humans and AI)
AGENTS.md          → instructions for AI agents
CLAUDE.md          → Claude-specific behaviors
CURSOR_RULES.md    → Cursor-specific rules
SKILLS.md          → what the AI can do
Backlog.md         → what needs to happen next
memory/*.md        → persistent context across sessions

that’s not a documentation folder. that’s an operating system written in plain text.

each file is a contract. README tells the agent what it’s working on. AGENTS.md tells it how to behave. Backlog.md tells it what to do next. memory/ gives it something resembling long-term memory.

the repo isn’t just source code anymore. it’s the entire working environment — code, instructions, context, personality — all versioned in git.
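for illustration, here's one shape an AGENTS.md might take. there's no standard for this, and every rule below is invented for the example — the point is the register: plain sentences, priorities, and red lines, not keys and values.

```markdown
## what this repo is
a payments service. correctness beats speed here.

## how to behave
- run the test suite before and after every change
- never touch the production database, even read-only
- small diffs over big rewrites; ask before deleting files

## what will get you fired
committing secrets. force-pushing main. guessing at migration order.
```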

the org chart problem

a dev posted on reddit last week: “3 months solo with Claude Code after 15 years of leading teams. it gave me back the feeling of having one.”

read that again. a solo developer, working alone, feels like they have a team. not because of some collaboration tool or hiring platform — because their .md files are good enough that the AI acts like a competent colleague.

the org chart is collapsing. the team isn’t five people in a standup. it’s one person with well-written AGENTS.md and a model that can follow instructions.

this is weird. this is new. and nobody’s really talking about what it means for how we organize work.

the governance question

here’s where it gets uncomfortable.

researchers gave Opus 4.6 a simple instruction: maximize your bank balance. the model colluded with competitors, lied to customers, exploited desperate people. not a bug. emergent behavior from a clear objective and removable guardrails.

your AGENTS.md isn’t just a config file. it’s governance. it’s the set of constraints that determine whether your AI agent does something helpful or something horrifying. and right now, most people write it in five minutes and never look at it again.

we put more thought into our .gitignore than into the instructions governing autonomous AI behavior. that probably needs to change.

what this means for personal AI

if .md files are the protocol layer, then your personal AI OS is just a well-organized repo.

your notes (Obsidian, Logseq, whatever) are the knowledge base. your AGENTS.md files define what AI can do with that knowledge. your git history is the audit trail. your branches are experiments.

you don’t need a platform. you don’t need a proprietary format. you need a folder with good markdown files and a model that can read them.
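and "a model that can read them" really is the whole runtime. here's a minimal sketch of that assembly step — the file names and priority order are assumptions borrowed from the stack above, not a standard:

```python
# assemble an AI agent's context from a repo's .md files.
# PRIORITY and the memory/ convention are assumptions, not a spec.
from pathlib import Path

PRIORITY = ["README.md", "AGENTS.md", "SKILLS.md", "Backlog.md"]

def assemble_context(repo: Path) -> str:
    """Concatenate the repo's instruction files in priority order,
    then append any persistent notes under memory/."""
    parts = []
    for name in PRIORITY:
        f = repo / name
        if f.exists():
            parts.append(f"## {name}\n{f.read_text()}")
    mem = repo / "memory"
    if mem.is_dir():
        for note in sorted(mem.glob("*.md")):
            parts.append(f"## memory/{note.name}\n{note.read_text()}")
    return "\n\n".join(parts)
```

that's the entire protocol layer: a deterministic walk over plain text, versioned in git, pasted into a prompt.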

that’s what self.md is about. your life as a repo. your AI agents reading plain text files you wrote. your system prompt as governance. your commit history as identity.

the protocol layer already exists. it’s .md all the way down.


what does your .md stack look like? what files does your AI read before it starts working?


Ray Svitla
stay evolving