Shreya Rajpal: Making AI Reliable with Guardrails

Who is Shreya Rajpal?

Shreya Rajpal is the founder and CEO of Guardrails AI, the leading open-source framework for adding validation and safety layers to Large Language Model outputs. Before starting Guardrails, she worked at Apple’s Special Projects Group focusing on ML, Systems, and Computer Vision.

She studied at the University of Illinois at Urbana-Champaign (2016-2018), specializing in Artificial Intelligence, a background that informs her practical approach to making AI systems reliable for production use.

What is Guardrails AI?

Guardrails AI solves one of the most pressing problems in deploying LLMs: reliability. LLMs are inherently unpredictable: they hallucinate, leak sensitive data, get jailbroken, and return inconsistent outputs. Guardrails provides a validation layer that intercepts and fixes these issues in real time.
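
To make the idea concrete, here is a minimal sketch of that intercept-validate-reask loop in plain Python. The call_llm and contains_pii helpers are hypothetical placeholders for a model client and a real validator; Guardrails packages this pattern behind a single interface rather than asking you to hand-roll it:

# Minimal sketch of the intercept-validate-reask pattern.
# call_llm and contains_pii are hypothetical stand-ins, not Guardrails APIs.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def contains_pii(text: str) -> bool:
    # Placeholder check; a real validator would use NER or regex rules.
    return "@" in text  # crude e-mail heuristic, for illustration only

def guarded_call(prompt: str, max_retries: int = 2) -> str:
    """Call the model, validate the output, and re-ask on failure."""
    for _ in range(max_retries + 1):
        output = call_llm(prompt)
        if not contains_pii(output):
            return output  # passed validation
        # Feed the failure back to the model and try again ("re-ask")
        prompt = (
            f"{prompt}\n\nYour previous answer included personal data. "
            "Rewrite it without any personal information."
        )
    raise ValueError("output failed validation after retries")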

The framework has become the #1 open-source AI guardrails solution with thousands of GitHub stars and adoption across enterprise teams.

Core Capabilities

The Guardrails Hub

A key innovation is the Guardrails Hub—a marketplace of community-contributed validators that anyone can use:

# Hub validators are installed first, e.g.:
#   guardrails hub install hub://guardrails/toxic_language
#   guardrails hub install hub://guardrails/detect_pii
from guardrails import Guard
from guardrails.hub import ToxicLanguage, DetectPII

# Chain multiple validators onto a single guard
guard = Guard().use_many(
    ToxicLanguage(),
    DetectPII(pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"])
)

# The guard wraps the LLM call and validates the output before returning it
result = guard(
    model="gpt-4",
    prompt="Generate customer response..."
)

The Hub includes validators for toxic language, PII detection, hallucination and grounding checks, output format and structure, and more.

Philosophy: Validation as Infrastructure

Shreya’s core insight: AI validation should be as standard as input validation in web apps. Just as we don’t trust user input, we shouldn’t trust raw LLM outputs. Guardrails treats this as an infrastructure problem, not an afterthought.

“The new uptime for LLM apps isn’t just availability—it’s output accuracy and safety.”

Why This Matters for Personal AI

For anyone building personal AI systems, Guardrails solves critical problems:

  1. Privacy Protection: Ensure your personal AI doesn’t accidentally expose sensitive data in responses
  2. Consistency: Get reliable structured outputs for automation workflows (see the sketch after this list)
  3. Safety: Prevent your AI from generating harmful or biased content
  4. Observability: Track when and why validations fail via OpenTelemetry integration
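
The consistency point (item 2 above) comes down to never acting on a model reply until it parses into the shape you expect. Here is a minimal sketch with Pydantic, where the TaskItem schema and the llm_json_reply string are invented for illustration; Guardrails itself supports declaring such schemas directly on a guard instead of checking by hand:

# Illustrative sketch: validating a structured LLM reply with Pydantic.
# TaskItem and llm_json_reply are assumptions made up for this example.
from pydantic import BaseModel, ValidationError

class TaskItem(BaseModel):
    title: str
    priority: int      # e.g. 1 (high) to 3 (low)
    due_date: str      # ISO date string

llm_json_reply = '{"title": "Renew passport", "priority": 1, "due_date": "2025-03-01"}'

try:
    task = TaskItem.model_validate_json(llm_json_reply)
    print("validated:", task)
except ValidationError as err:
    # Treat the reply as untrusted input: surface the error instead of acting on it.
    print("rejected malformed output:", err)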

Key Projects

Relevance to Self.md

Shreya represents the infrastructure layer of personal AI—the plumbing that makes AI systems trustworthy. As we build increasingly autonomous personal assistants, validation becomes non-negotiable. Her work shows that reliability isn’t magic; it’s systematic validation at every step.

For personal AI builders, the lesson is clear: treat AI outputs as untrusted input that needs validation before acting on it.

Topics: ai-safety llm-infrastructure open-source validation