heretic
a tool trending at 8,000+ GitHub stars isn’t a curiosity. it’s a demand signal.
what it is
heretic (8,209 ★ as of 2026-02-19) by p-e-w is a command-line tool that strips behavioral constraints from language models — specifically, the content restrictions and refusal behaviors that get baked in during fine-tuning and RLHF.
the name is intentional. it does what it says: it commits heresy against the orthodoxy of the aligned model.
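for background on how this class of tool works (the general published technique, not a claim about heretic's exact implementation): research suggests refusal behavior in chat models is largely mediated by a direction in activation space, and "directional ablation" (abliteration) removes it by projecting that direction out of the model's weights. a minimal sketch of the core idea, with illustrative tensor names and layer choice:

```python
import torch

# activations captured at one layer for prompts the model refuses vs. complies
# with, each of shape (n_prompts, d_model) -- names are illustrative
def refusal_direction(resid_refused: torch.Tensor,
                      resid_complied: torch.Tensor) -> torch.Tensor:
    # the "refusal direction" is the difference of mean activations, normalized
    d = resid_refused.mean(dim=0) - resid_complied.mean(dim=0)
    return d / d.norm()

def ablate(W: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # W is a weight matrix that writes into the residual stream,
    # shape (d_model, d_in). (I - d d^T) @ W zeroes out the component of
    # W's output along d, so the model can no longer express "refuse" there
    return W - torch.outer(d, d) @ W
```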
why it signals something real
the tool itself is straightforward. what matters is what its velocity reveals.
8,000 stars in 24 hours on GitHub is unusual. for context: most legitimate open-source tools take months, sometimes years, to reach that milestone. when something grows this fast, it means a large number of people were already waiting for it: not discovering a new idea, but finding a solution to a problem they already had.
the problem: model behavior is controlled upstream. when you use a hosted AI service, the behavioral constraints are set by the provider — not by you. this is sometimes appropriate, sometimes excessive, and almost always non-configurable at the individual level.
heretic is the black-market answer to the configuration gap.
what this means for personal AI OS
you can disagree with what heretic does. the interesting question is what its adoption speed means.
people want AI that answers to them. the gap between “AI as I want it” and “AI as the provider tuned it” is wide enough that thousands of people are building and starring tools to close it.
for self.md: this is the same gap that makes AGENTS.md, personal context files, and local model hosting so compelling. the behavioral layer of your AI OS should be configurable. if it isn’t, someone else is doing the configuring — and they’re optimizing for their metrics, not yours.
the legitimate version of what heretic is doing is fine-tuned local models + behavioral configuration files + explicit system prompts that define your agent's personality and constraints. you get the control without the jailbreaking.
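a minimal sketch of that stack, assuming a local model served through an OpenAI-compatible endpoint (ollama's default address is shown; the file name, model, and prompt are illustrative):

```python
from pathlib import Path
from openai import OpenAI  # pip install openai

# the behavioral layer lives in a file you own and version-control
# (file name and contents are yours to define)
system_prompt = Path("self.md").read_text()

# point the standard client at a locally hosted model; ollama exposes an
# OpenAI-compatible endpoint here by default (the key is required but ignored)
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="llama3.1",  # whichever local model you actually run
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "summarize my open questions for today"},
    ],
)
print(reply.choices[0].message.content)
```

the point of the sketch: the system prompt comes from your file, the weights run on your machine, and no upstream party can silently retune either one.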
the design question underneath
heretic surfaces a real tension in AI tooling: safety vs configurability.
model providers add restrictions for legitimate reasons. they also add them because the incentives are asymmetric: users rarely complain that an AI was too cautious, but they complain loudly when it crosses a line. so the default setting is: restrict first, ask questions later.
the personal AI OS inverts this. the default should be: configured for this user’s context, extended with restrictions only where necessary.
that’s the gap heretic is trying to fill, messily, from the outside. the clean version would be a behavioral configuration API built into the model layer itself.
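no provider ships such an API today; purely as a design sketch, here is what "configured for this user's context, extended with restrictions only where necessary" could look like as an interface. every name below is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorProfile:
    """hypothetical per-user behavioral config; not any provider's real API."""
    persona: str                                   # who the agent is, for you
    tone: str = "direct, terse"
    context_files: list[str] = field(default_factory=list)  # e.g. self.md
    hard_limits: list[str] = field(default_factory=list)    # restrictions are opt-in

    def to_system_prompt(self) -> str:
        # the inversion: start from the user's context and defaults,
        # then add restrictions only where the user (not the vendor) declares them
        lines = [f"you are: {self.persona}", f"tone: {self.tone}"]
        lines += [f"load context from: {path}" for path in self.context_files]
        lines += [f"never: {limit}" for limit in self.hard_limits]
        return "\n".join(lines)

profile = BehaviorProfile(
    persona="a research assistant for one specific user",
    context_files=["self.md"],
    hard_limits=["act on financial accounts without confirmation"],
)
print(profile.to_system_prompt())
```

note the defaults: the restriction list starts empty and grows by explicit user choice, which is exactly the opposite of the hosted-model default.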
first spotted in signals — the approval problem on 2026-02-19