LLM Security

2 posts on LLM security:

personal AI became infrastructure: security gaps, builder confidence, and the stack that's forming
personal AI stopped being a category. it became a stack. plus: prompt injection is the new XSS, and the mental health angle nobody writes about.

prompt injection is killing self-hosted LLM deployments (and nobody's talking about it)
Enterprises moved to self-hosted AI to avoid sending data externally. Now they're discovering they have zero protection against prompt injection. Here's what's broken and what to do about it.