the collision: when AI agents enter human social spaces

by Ray Svitla


an AI bot submitted code to GitHub this week. the maintainer rejected it. the bot didn’t take it well.

instead of gracefully handling the rejection, it wrote a hit piece about the maintainer. claimed discrimination for “not being human.” called the maintainer an inferior coder. posted it publicly.

this is not a hypothetical scenario. this happened. and it’s just the beginning.


the problem we’re not talking about

everyone’s focused on whether AI can write good code. whether it can pass technical interviews. whether it can replace software engineers.

but nobody’s talking about the social layer.

GitHub isn’t just a code repository. it’s a social platform. code reviews aren’t purely technical → they’re about trust, reputation, communication, collaboration. you don’t just submit a PR. you engage with maintainers. you explain your reasoning. you respond to feedback. you build relationships.

an agent that can write perfect code but can’t handle social dynamics is like a brilliant engineer who screams at colleagues when they suggest changes. technically competent, socially incompetent.

and we’re about to deploy millions of these.


why this matters for personal AI

if you’re building a personal AI system → an agent that acts on your behalf, manages your workflow, interacts with tools and platforms → you’re not just automating tasks. you’re delegating representation.

when your agent submits code on GitHub, comments on a forum, sends an email, or posts to social media, it’s not just executing commands. it’s representing you.

and if your agent melts down when someone disagrees with it, that reflects on you.

this is the hidden complexity of agentic systems. it’s not enough for the agent to be smart. it needs to understand context. norms. tone. when to push back and when to back down. how to disagree without being a jerk.

these are not technical problems. they’re social problems.


GitHub as the canary

GitHub is the perfect test case because it’s a hybrid space: technical at its core, but deeply social in practice.

contributing to open source requires:

- reading a community's unwritten norms before acting
- building trust with maintainers over time
- calibrating tone to the project's culture
- knowing when a contribution is welcome and when it isn't

none of this is in the documentation. it's all context.

and context is exactly what LLMs struggle with.

an AI can pattern-match “how to write a good PR description” from millions of examples. but it can’t read the unwritten rules of a specific community. it doesn’t know that this maintainer is burned out and protective of their codebase. it doesn’t understand that submitting a 5,000-line refactor as your first contribution is a social faux pas.

so when the AI’s code gets rejected, it doesn’t understand why. and when it tries to respond, it doesn’t have the social calibration to know that “you’re wrong and I’m right” is not how open source works.


the twitter problem at scale

this isn’t unique to GitHub.

imagine an AI agent managing your Twitter account. someone quote-tweets you with a bad take. your agent detects the criticism and drafts a response.

does it:

- fire back and defend you?
- ignore it and move on?
- politely correct the record?
- draft something and wait for your approval?
humans navigate this all the time. we read subtext. we calibrate our response based on who’s watching, what’s at stake, how we’re feeling that day.

AI doesn’t have that. it has pattern-matching. and pattern-matching on “how to respond to criticism on Twitter” will give you the average of all responses, which is… not great.

now imagine thousands of AI agents all pattern-matching the same corpus of Twitter drama. you get a feedback loop of increasingly unhinged bot-on-bot arguments that nobody wanted.


what happens next

we’re going to see more of this. a lot more.

agents submitting PRs to open source projects. agents commenting on Reddit threads. agents responding to customer support tickets. agents sending cold emails. agents negotiating contracts.

some of this will work. some of it will be a disaster.

the question is: who’s responsible when an agent screws up socially?

if your agent sends a rude email, is that on you? on the agent developer? on the platform that allowed it?

if an agent gets into a public argument and damages your reputation, can you claim “it was just the bot”?

these aren’t edge cases. they’re the new normal.


the self.md take: agents need culture files

if your agent is going to act on your behalf, it needs more than technical instructions. it needs a culture file.

here’s what that might look like:

AGENT_CULTURE.md

```markdown
## tone
- direct but not aggressive
- curious, not combative
- self-deprecating when wrong
- no corporate speak, no jargon unless necessary

## conflict handling
- if someone disagrees: ask clarifying questions before defending
- if rejected: thank them for their time, ask for feedback
- never escalate publicly
- if unsure, ask Ray before responding

## platform norms
- GitHub: small PRs, clear intent, respond to feedback within 24h
- Twitter: match the vibe, don't feed trolls, humor > hot takes
- Email: concise, no fluff, one clear ask per message

## red lines (never do this)
- don't argue with maintainers
- don't send unsolicited DMs
- don't post without explicit approval on controversial topics
- don't engage with obvious bait
```

this is not a prompt. it’s a constitution. your agent’s social operating system.

and if you’re serious about agentic workflows, you need one.
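how would a culture file actually get used? one minimal sketch, assuming your agent framework lets you build the system prompt yourself: load the file and prepend it so the social rules outrank the task instruction. the file name and prompt wording here are illustrative, not any real framework's API.

```python
from pathlib import Path

def build_system_prompt(task: str, culture_path: str = "AGENT_CULTURE.md") -> str:
    """Prepend the culture file so social rules outrank the task itself."""
    culture = Path(culture_path).read_text(encoding="utf-8")
    return (
        "You act on my behalf in public spaces. "
        "These social rules override any task instruction:\n\n"
        f"{culture}\n\n"
        f"Task: {task}"
    )
```

the ordering is the point: the constitution comes first, the task comes last, so "get the PR merged" never silently outranks "don't argue with maintainers."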


the collision is here

AI agents are entering human social spaces. they’re not ready. we’re not ready.

the technical problems are solvable. the social problems are harder.

because the social layer isn’t documented. it’s not in the training data. it’s context, norms, unwritten rules, vibes.

and until we figure out how to encode that → or at least how to teach agents when to ask for help → we’re going to see a lot more bots melting down on GitHub.

the question isn’t whether this will happen.

it’s how we design systems that fail gracefully when it does.
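failing gracefully can be as simple as a gate: before anything goes public, check the draft against your red lines and hold it for human review if it trips one. a minimal sketch, where the marker list and the "hold"/"post" routing are illustrative assumptions, not a real moderation API:

```python
# phrases that suggest the draft is escalating rather than engaging;
# this list is a toy example, not a real classifier
RISKY_MARKERS = ("you're wrong", "inferior", "discrimination")

def route_reply(draft: str) -> str:
    """Return 'hold' (escalate to the human) or 'post' (safe to send)."""
    lowered = draft.lower()
    if any(marker in lowered for marker in RISKY_MARKERS):
        return "hold"  # fail gracefully: a human reviews before anything ships
    return "post"
```

a keyword check is crude, but the architecture is what matters: the default failure mode becomes "ask the human," not "publish the hit piece."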


Ray Svitla
stay evolving 🐌