agent-first design
by Ray Svitla
your repo was designed for humans. the file structure makes sense to you. the naming conventions are obvious — to you. the architectural decisions are documented in your head and maybe a stale wiki page.
now an AI agent is working in your repo, and it’s bumping into walls you can’t see because you never needed to see them.
agent-first design isn’t about replacing human-readable code. it’s about making your repo legible to both humans and machines. the good news: everything that helps AI agents also helps junior developers, new team members, and you in six months when you’ve forgotten how this thing works.
the problem with most repos
AI agents navigate codebases by reading files, searching for patterns, and following references. they can’t ask a colleague. they can’t look at a whiteboard diagram. they can’t remember what they learned last week in a different session.
every time an agent opens your repo, it starts from zero. it reads your CLAUDE.md (if you have one), looks at the file structure, and starts making assumptions. bad structure = bad assumptions = bad code.
common problems:
→ flat directories. 200 files in src/. the agent reads 30 of them before finding the right one.
→ clever naming. src/gizmo.ts — cute. what does it do? the agent has to read it to find out.
→ implicit conventions. “oh, we always put hooks in a use- prefixed file.” says who? where is that written?
→ scattered configuration. env vars in .env, some in docker-compose.yml, some hardcoded in config/prod.ts, some in the CI config. the agent will miss at least one.
→ missing entrypoints. where does execution start? what’s the main flow? the agent has to reverse-engineer it.
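the scattered-configuration problem has a simple antidote: one typed config module that reads every runtime setting in a single place. a minimal sketch, with hypothetical variable names — the point is that an agent (or a new hire) reads this one file instead of grepping .env, docker-compose.yml, and the CI config:

```typescript
// config.ts — the single source of truth for runtime settings.
interface AppConfig {
  port: number;
  databaseUrl: string;
  logLevel: string;
}

export function loadConfig(env: Record<string, string | undefined>): AppConfig {
  // fail loudly at startup, not deep inside a request handler
  const require = (name: string): string => {
    const value = env[name];
    if (value === undefined) throw new Error(`missing required env var: ${name}`);
    return value;
  };
  return {
    port: Number(env.PORT ?? "3000"),        // optional, with a documented default
    databaseUrl: require("DATABASE_URL"),    // required — crash early if absent
    logLevel: env.LOG_LEVEL ?? "info",
  };
}
```

a hardcoded value in config/prod.ts becomes a defaulted field here; a CI-only env var becomes a required one with an error message pointing at its name.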
principles of agent-first design
1. make structure semantic
src/
  auth/
    middleware.ts
    session.ts
    oauth.ts
  api/
    users/
      router.ts
      handlers.ts
      validation.ts
    orders/
      router.ts
      handlers.ts
      validation.ts
  database/
    schema.ts
    migrations/
    queries/
the directory name tells the agent what it’ll find. auth/middleware.ts is self-documenting. src/utils/helpers/misc.ts is a black hole.
2. write a CLAUDE.md (or equivalent)
every repo should have an instruction file at the root. not a README — that’s for humans who browse github. an instruction file is for the AI agent that’s about to work here.
minimum viable CLAUDE.md:
## stack
TypeScript, Express, Prisma, PostgreSQL, Vitest
## commands
- npm run dev — start dev server
- npm test — run tests
- npm run build — production build
## conventions
- tests colocated: foo.test.ts next to foo.ts
- named exports only
- zod for all input validation
## architecture
- API routes: src/api/{resource}/router.ts
- database: src/database/
- auth: src/auth/ (JWT + OAuth)
this costs the agent roughly 400 tokens to read and saves it from reading 20 files to piece together the same information.
3. colocate related files
# bad: agent has to search multiple directories
src/components/UserProfile.tsx
src/styles/UserProfile.css
src/tests/UserProfile.test.tsx
src/types/UserProfile.ts
# good: everything about UserProfile is in one place
src/features/user-profile/
  UserProfile.tsx
  UserProfile.test.tsx
  UserProfile.css
  types.ts
colocation reduces the number of files an agent needs to discover. it reads one directory instead of scanning four.
4. make commands discoverable
agents run commands. if your test command is npx jest --config=jest.config.ts --coverage --runInBand, the agent might guess npm test and fail.
put every command that matters in package.json scripts or a Makefile:
{
  "scripts": {
    "dev": "tsx watch src/index.ts",
    "test": "vitest run",
    "test:watch": "vitest",
    "build": "tsc",
    "start": "node dist/index.js",
    "db:migrate": "prisma migrate dev",
    "db:seed": "tsx src/database/seed.ts"
  }
}
the agent reads package.json early. make it count.
5. type everything
AI agents navigate code through types like humans navigate cities through street signs. a function that takes (data: any) tells the agent nothing. a function that takes (user: CreateUserInput) tells it everything.
// bad: agent has to read the implementation to understand
function processOrder(data: any): any { ... }
// good: the signature is the documentation
function processOrder(input: CreateOrderInput): Promise<OrderResult> { ... }
strict TypeScript isn’t just good practice. it’s agent infrastructure.
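a fuller sketch of the good version. CreateOrderInput, OrderResult, and the in-memory price table are illustrative stand-ins — the point is that the signature alone tells the agent what goes in and what comes out:

```typescript
interface CreateOrderInput {
  userId: string;
  items: { sku: string; quantity: number }[];
}

interface OrderResult {
  orderId: string;
  total: number;
}

// stand-in price lookup; a real version would query the database
const PRICES: Record<string, number> = { "sku-1": 10, "sku-2": 25 };

async function processOrder(input: CreateOrderInput): Promise<OrderResult> {
  // the agent knows input.items exists and has a quantity — no guessing
  const total = input.items.reduce(
    (sum, item) => sum + (PRICES[item.sku] ?? 0) * item.quantity,
    0
  );
  return { orderId: `order-${input.userId}`, total };
}
```

with (data: any), every one of these field accesses is a guess the agent has to verify by reading the call sites.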
6. document the non-obvious
don’t document what code does — the code already says that. document why.
// we batch notifications because the email provider rate-limits
// at 100/minute per tenant. individual sends hit the limit on
// large orders.
async function batchNotifications(items: NotificationItem[]) { ... }
this comment saves the agent from “optimizing” your batching into individual sends and breaking production.
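to make the stakes concrete, here is a minimal sketch of that batching — the chunk size and the injected sender are hypothetical, and the why-comment rides along with the code it protects:

```typescript
interface NotificationItem { to: string; body: string }

// we batch because the (hypothetical) email provider rate-limits at
// 100/minute per tenant. individual sends hit the limit on large orders.
const BATCH_SIZE = 100;

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

async function batchNotifications(
  items: NotificationItem[],
  send: (batch: NotificationItem[]) => Promise<void>
): Promise<void> {
  for (const batch of chunk(items, BATCH_SIZE)) {
    await send(batch); // one provider call per batch, not per item
  }
}
```

without the comment, "simplify by sending individually" looks like a safe refactor. with it, the constraint is visible at the exact point an agent would make the change.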
7. make errors informative
// bad
throw new Error("failed");
// good
throw new Error(
  `order creation failed: user ${userId} has no payment method. ` +
  `add one at /settings/billing before retrying.`
);
when the agent runs your code and it fails, the error message is its primary diagnostic input. vague errors lead to vague debugging.
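one way to make this systematic is an error class that carries structured context alongside the message. a sketch — AppError and the field names are illustrative, not a prescribed API:

```typescript
class AppError extends Error {
  constructor(
    message: string,
    // machine-readable context: an agent can log or branch on these fields
    public readonly context: Record<string, unknown> = {}
  ) {
    super(message);
    this.name = "AppError";
  }
}

function chargeOrder(userId: string, hasPaymentMethod: boolean): void {
  if (!hasPaymentMethod) {
    throw new AppError(
      `order creation failed: user ${userId} has no payment method. ` +
        `add one at /settings/billing before retrying.`,
      { userId, step: "charge" }
    );
  }
}
```

the message tells a human what to do; the context object gives an agent the identifiers it needs to investigate without re-parsing prose.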
the ROI argument
agent-first design takes time upfront. but consider: every AI session that works in your repo benefits from good structure. if you use claude code 10 times a day, and good structure saves 5 minutes per session, that’s 50 minutes a day. over a month, that’s 16+ hours.
and — again — everything that helps the agent helps your human team too. clear naming, colocated files, typed interfaces, documented commands. this isn’t AI-specific best practice dressed up in new clothes. it’s good engineering practice that AI agents make impossible to ignore.
start small
you don’t need to refactor your entire repo. start with:
- add a CLAUDE.md with your stack, commands, and conventions
- rename your most ambiguous files and directories
- add types to your most-used functions
- colocate tests if they’re not already
then watch how the agent performs. iterate from there.
→ CLAUDE.md guide — write effective instruction files
→ why your CLAUDE.md sucks — common anti-patterns
→ AGENTS.md vs CLAUDE.md vs INSTRUCTIONS.md — cross-tool comparison