Claude Life Assistant


claude_life_assistant is an experiment in making AI uncomfortably personal.

instead of a generic chatbot or productivity tool, this project gives Claude access to your psychology: values, emotional patterns, triggers, coping strategies. the result is an AI that responds based on your internal state, not just your prompt.

first spotted in signals — 2026-02-10.

github.com/lout33/claude_life_assistant

what it is

a framework for feeding Claude personal psychology data via markdown files.

you document:
→ your core values
→ emotional patterns (what triggers you, what grounds you)
→ decision-making frameworks
→ cognitive patterns and biases
→ personal goals and context

Claude reads these files as part of its context. when you interact with it, the responses are tailored to your psychological profile.

“I’m feeling stuck” gets a different response based on whether you’re stuck creatively, emotionally, or tactically — because the AI knows your patterns.
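the mechanism is simple enough to sketch. here's a minimal Python version, assuming a local profile directory of markdown files — the file names and the helper function are illustrative, not dictated by the project:

```python
from pathlib import Path

# hypothetical profile layout -- these file names are illustrative,
# not prescribed by claude_life_assistant
PROFILE_FILES = [
    "values.md",
    "emotional_patterns.md",
    "decision_frameworks.md",
    "cognitive_biases.md",
    "goals.md",
]

def build_system_prompt(profile_dir: str) -> str:
    """Concatenate the user's psychology files into one system prompt.

    Missing files are skipped, so a partial profile still works.
    """
    parts = [
        "You are a personal assistant. The user's psychological "
        "profile follows; tailor every response to it."
    ]
    for name in PROFILE_FILES:
        path = Path(profile_dir) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```

the resulting string would go wherever your Claude context lives — the `system` parameter of an API call, or pasted into the web interface. the point is that there's no magic: it's plain files, read into plain text.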

why it matters

most AI assistants are context-free. they respond to the prompt, not the person.

this flips the model. the AI maintains a model of you. your tendencies. your defaults. your edge cases.

the upside: genuinely helpful responses. an AI that understands not just what you’re asking, but why you’re asking and what you actually need.

the downside: an AI that knows you better than you know yourself. and the question of who else has access to that model.

the privacy question

this is the uncomfortable part.

storing your psychological profile in plain text markdown files — even locally — is a risk. if those files leak, someone has a blueprint of your internal state.

and if you’re using Claude via API or web interface, Anthropic’s servers see the context. they claim not to train on user data, but the data passes through their infrastructure.

the tradeoff: more personal AI vs. exposing your psychology to a third party.

there’s no clean answer. just choices.

who this is for

→ people who want AI that actually understands their context
→ experimenters comfortable with the privacy tradeoffs
→ anyone tired of generic AI responses that ignore personal patterns
→ people exploring AI-assisted self-reflection or mental health tooling

who this is NOT for

→ anyone uncomfortable with psychological modeling
→ people who need guaranteed privacy (therapy, clinical use)
→ anyone who wants a turnkey solution (this is DIY, experimental)

the pattern

claude_life_assistant is part of a broader shift: AI moving from task execution to relationship modeling.

the early AI tools were about “do this task.” calendar management. email drafts. code completion.

the next wave is “understand me, then help me.” and that requires modeling your internal state.

this project is the sharp edge of that shift. it works. it’s useful. and it raises every uncomfortable question about AI, privacy, and autonomy that we’re going to have to answer in the next few years.

self.md take

this is either the future of mental health tooling or a privacy nightmare. probably both.

the power: an AI that knows when you’re spiraling, what helps you get unstuck, and how to frame suggestions in ways you’ll actually act on.

the risk: an AI that knows your vulnerabilities. your triggers. your edge cases. and someone else might have access to that model.

the real question isn’t “can we build this?” — clearly we can. it’s “should we?” and more importantly, “who controls the model?”

right now, most people outsource their psychology to therapists, journals, close friends. adding AI to that list changes the threat model. and we don’t have good answers yet.

but the genie’s out. someone built it. 655 people starred it. the demand exists.

the next chapter is figuring out how to make this powerful and safe. nobody’s solved that yet.


personal AI OS — the broader ecosystem of personal AI tools
AI agent skills catalogs — how to customize AI behavior via markdown files
signals — programming languages for agents — where this project was first spotted