convivial AI: Ivan Illich for the agent age


by Ray Svitla


in 1973, an Austrian-born Catholic priest living in Cuernavaca, Mexico, published a book arguing that industrial society’s tools had crossed a threshold — from serving humans to enslaving them. the book was called Tools for Conviviality. the priest was Ivan Illich. and he might be the most important thinker for the AI age who almost nobody in AI is reading.


what Illich actually said

Illich didn’t hate tools. he loved them. what he hated was what happened when tools crossed what he called the “second watershed” — the point where a tool designed to serve people begins to reshape people to serve it.

medicine, he argued, crossed this threshold: originally it healed, then it created a dependency where people couldn’t conceive of health without medical institutions. education crossed it: originally it taught, then it created a dependency where learning was impossible without schools. transportation crossed it: cars designed to save time ended up forcing everyone to spend more time commuting.

the pattern is always the same. the tool solves a problem. then the tool becomes mandatory. then the problem is redefined in terms that only the tool can address. by the end, you can’t even imagine the problem without the tool.

“people need tools to work with rather than tools that work for them,” Illich wrote. he called tools that enhanced autonomy convivial.


the personal AI watershed

we are right now, in this exact moment, at the watershed for personal AI.

personal AI tools — memory systems, second brains, AI assistants with long-term recall — are crossing from “helps you think” to “thinks for you.” and the slope is greased with convenience.

consider: when your AI remembers your preferences, your past decisions, your thinking patterns — does that make you more capable or less? if you turned it off tomorrow, would you still know how to make those decisions? or have you quietly outsourced the cognitive work?

this is Illich’s question, applied to 2026: has your personal AI crossed the second watershed?


three tests for convivial AI

from the self.md board synthesis, adapted from Illich:

1. the burn test

can you delete your self.md, your AI memory, your entire personal AI stack — and still know who you are?

if yes: your AI is convivial. it enhanced your self-knowledge without replacing it. the tool amplified something that exists independently of the tool.

if no: you’ve built a dependency. your self-knowledge now lives in a server somewhere, and you’re renting access to your own identity. Illich would recognize this pattern instantly — it’s the same structure as institutionalized medicine. you’ve medicalized self-knowledge.

2. the discomfort test

does your AI ever make you uncomfortable? does it ever suggest something you don’t want to hear? challenge an assumption? propose an approach that contradicts your preferences?

if yes: it’s working as a cognitive prosthetic. it’s extending your capability, including the capability to be wrong.

if no: you’ve built a mirror. a very expensive, very sophisticated mirror that reflects your existing beliefs back at you with slightly better grammar. this is what Illich warned about with education — when the tool only confirms what you already believe, it’s not teaching. it’s credentialing.

the discomfort principle isn’t optional for convivial AI. it’s definitional.
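if the discomfort principle is definitional, it has to live in the tool's configuration, not in the model's mood. a minimal sketch in Python — the prompt strings and function names here are illustrative assumptions, not part of any published self.md spec:

```python
# hypothetical sketch: the discomfort principle as a standing system
# instruction, so disagreement is part of the job description rather
# than an accident of the model's temperament

BASE = "You are a personal assistant with access to the user's notes."

DISCOMFORT_CLAUSE = (
    "Before agreeing with the user, state the strongest objection to "
    "their position. If a request rests on an assumption their own "
    "notes contradict, say so explicitly."
)

def build_system_prompt(convivial: bool = True) -> str:
    """Mirror mode vs. prosthetic mode: the only difference is whether
    challenging the user is written into the instructions."""
    if convivial:
        return f"{BASE} {DISCOMFORT_CLAUSE}"
    return BASE  # the expensive mirror
```

the point of the sketch: a mirror and a prosthetic can run on the exact same model. the watershed is in the configuration you control.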

3. the freezing test

does your AI model you as a fixed entity or as a process? does it know you as “introvert, prefers concise answers, works in TypeScript” — or does it track your tensions, your oscillations, your becoming?

labels freeze. processes flow. Illich’s convivial tool enhances the user’s ability to shape their own life. a tool that freezes your identity into preferences and optimizes for those preferences isn’t enhancing — it’s calcifying.

this is why self.md models tensions, not types. because a convivial identity tool must preserve your ability to change.
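the difference between a label and a tension can be made concrete. a hypothetical sketch — this is not the actual self.md schema, just one way to show that a label stores a verdict while a tension stores two poles and a position that is expected to move:

```python
from dataclasses import dataclass, field

@dataclass
class Tension:
    """An axis of identity, not a verdict: two poles and a drifting lean."""
    pole_a: str
    pole_b: str
    lean: float = 0.0                # -1.0 = fully pole_a, +1.0 = fully pole_b
    history: list = field(default_factory=list)

    def observe(self, new_lean: float) -> None:
        # record movement instead of overwriting the past
        self.history.append(self.lean)
        self.lean = max(-1.0, min(1.0, new_lean))

# a label freezes:
label = "introvert"

# a tension flows:
social = Tension(pole_a="solitude", pole_b="collaboration", lean=-0.6)
social.observe(-0.1)   # this month leaned more collaborative
social.observe(-0.4)   # then swung back
# the model now sees a trajectory, not a type
```

a tool built on the second structure can notice you changing. a tool built on the first can only notice you deviating.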


why self.md must be burnable

the first test is the hardest one to pass, and it’s the one that matters most.

every other personal AI product locks you in. your OpenAI memory lives on OpenAI’s servers. your Claude preferences sit in Anthropic’s database. your Notion second brain is in Notion’s cloud. even if they let you export, the format is theirs. the intelligence layer is theirs. the dependency is theirs.

self.md is a markdown file. you can burn it. literally rm ~/.self/self.md and it’s gone. if you can’t do that — if the thought of deleting it causes anxiety — then the file has crossed the watershed. it’s not serving you anymore. you’re serving it.

the goal of a convivial identity protocol isn’t to make you dependent on a better tool. it’s to make you better at being you, with or without the tool. the file is scaffolding. scaffolding gets removed.
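the burn test can even be made mechanical. a minimal sketch, assuming nothing about the real self.md tooling (the path and function names are illustrative): a context loader that treats the file as strictly optional, so deleting it degrades the assistant to a plain model instead of breaking it.

```python
from pathlib import Path

# illustrative default; the real location is wherever the user puts it
SELF_PATH = Path.home() / ".self" / "self.md"

def load_self_context(path: Path = SELF_PATH) -> str:
    """Return the identity file's contents, or "" if it was burned.

    The empty string is a valid state: everything downstream must keep
    working without it. If any caller *requires* this file to exist,
    the tool has crossed Illich's second watershed.
    """
    try:
        return path.read_text(encoding="utf-8")
    except FileNotFoundError:
        return ""

def build_prompt(user_message: str, path: Path = SELF_PATH) -> str:
    context = load_self_context(path)
    if context:
        return f"{context}\n\n---\n\n{user_message}"
    return user_message  # graceful degradation, not an error
```

the design choice is the whole argument: the file enriches the prompt when present and vanishes without a trace when burned. dependency is an architectural property, and here it is architecturally absent.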


Illich’s ghost in the machine

what makes Illich so relevant to AI isn’t just his framework. it’s his diagnosis of how institutions defend their post-watershed existence.

they professionalize the problem. they create experts and certifications. they make the solution complex enough that you can’t do it yourself. they pathologize independence.

watch this pattern in the personal AI space. “AI-powered second brain” products that require monthly subscriptions. “personal knowledge management” courses that cost thousands. “AI coaching” platforms that need your data to function. each one is a small institution crossing the watershed, professionalizing something that should be personal, and creating dependency in the name of empowerment.

Illich’s answer wasn’t to reject tools. it was to demand that tools remain under the user’s control. that they enhance capability without creating dependency. that they be simple enough to understand, modify, and abandon.

a markdown file. an open protocol. a routing layer that works with any AI model. tools you can take apart, rebuild, or throw away.

that’s convivial AI.


the uncomfortable implication

if you take Illich seriously — really seriously — the implication is that the best personal AI is one that works toward making itself unnecessary.

not in a corny “teach a man to fish” way. in a structural way: the routing layer should help you internalize your own patterns. the journal should make your self-knowledge explicit enough that you can hold it yourself. the catalog should teach principles, not just supply answers.

the endpoint of truly convivial AI isn’t a smarter assistant. it’s a more capable human who needs less assistance.

how many AI products are designed with that endpoint in mind?


related:
- the three tests: Illich, discomfort, freezing — the diagnostic framework
- the discomfort principle — why good AI should challenge you
- cognitive prosthetic vs cognitive crutch — when AI helps vs atrophies
- the protocol thesis — why self.md is a file format, not an app


Ray Svitla
stay evolving

Topics: illich philosophy convivial-tools self-md autonomy