Ariya Hidayat's Anti-Framework Approach to LLM Development

Ariya Hidayat created two foundational open-source projects: PhantomJS (the first headless browser, used by thousands of organizations for web automation) and Esprima (the first JavaScript parser in JavaScript, with 80 million monthly npm downloads). Now he builds minimalist LLM tools that skip framework complexity.
Hidayat (@ariyahidayat) spent 17 years in software engineering, including leading a 50-person engineering team at Shape Security. His current focus: making LLMs accessible through simple, dependency-free tools.
Background
- Created PhantomJS in 2011, enabling headless browser testing
- Built Esprima, the foundation for countless JavaScript tools
- PhD in Electrical Engineering from University of Paderborn (magna cum laude)
- Contributed to WebKit, KDE, and Qt
- Currently Software Engineer at Remote Browser (Palo Alto)
GitHub | Twitter | Blog | LinkedIn
The Anti-Framework Philosophy
Skip LangChain. Skip LlamaIndex. Skip Haystack.
Hidayat’s argument: LLM APIs are simple HTTP calls. Wrapping them in frameworks adds complexity without proportional benefit, especially when learning.
# This is all you need to call an LLM
curl -X POST https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
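The response is plain JSON too. An abridged reply (extra fields trimmed), which is all the later examples parse:

{
  "object": "chat.completion",
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help you today?" },
      "finish_reason": "stop"
    }
  ]
}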
The architecture of any LLM app:
┌─────────────┐ HTTP/JSON ┌─────────────┐
│ Your App │ ←───────────────→ │ LLM API │
└─────────────┘ └─────────────┘
No SDK required. No framework needed. Just POST requests with JSON.
| Framework Approach | Anti-Framework Approach |
|---|---|
| Learn framework API + LLM API | Learn LLM API only |
| Debug framework abstraction layers | Debug your code directly |
| Wait for framework updates | Use new features immediately |
| Lock-in to framework patterns | Switch providers with URL change |
ask-llm: Zero-Dependency CLI
ask-llm is Hidayat’s minimalist CLI for interacting with any LLM service. No dependencies. Multiple language implementations.
# Interactive mode
./ask-llm.py
# Piped input
echo "Why is the sky blue?" | ./ask-llm.py
# Translation task
echo "Translate into German: thank you" | ./ask-llm.py
Supported providers:
| Local | Cloud |
|---|---|
| llama.cpp | OpenAI |
| Ollama | Claude |
| LM Studio | Gemini |
| Jan | Groq |
| LocalAI | DeepSeek |
| Msty | Fireworks |
Available in:
- Python
- JavaScript
- TypeScript
- Clojure
- Swift
- Go
The same tool, same interface, different implementations. Pick your language.
Building Without Frameworks
Hidayat’s recommended learning path:
1. Start with raw API calls using Postman, Insomnia, or curl
2. Build a simple wrapper in your language of choice
3. Add features only when you need them
4. Consider frameworks only for production orchestration
Example Python wrapper (no dependencies beyond standard library):
import json
import os
import urllib.request

def ask_llm(prompt, model="gpt-4"):
    """Send a single-turn chat request and return the reply text."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json"
    }
    data = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}]
    }).encode()
    req = urllib.request.Request(url, data=data, headers=headers)
    with urllib.request.urlopen(req) as response:
        result = json.loads(response.read())
    return result["choices"][0]["message"]["content"]

# Usage
print(ask_llm("Explain recursion in one sentence"))
No requests library. No openai SDK. Standard library only.
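An example of step 3, adding a feature only when you need it: multi-turn conversation is just a growing messages list that the caller owns, since the API itself is stateless. A minimal sketch in the same standard-library style (chat_llm is an illustrative name, not part of ask-llm):

import json
import os
import urllib.request

def chat_llm(history, prompt, model="gpt-4"):
    """Append the new prompt to history, send the whole conversation,
    record the assistant's reply, and return its text."""
    history.append({"role": "user", "content": prompt})
    data = json.dumps({"model": model, "messages": history}).encode()
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=data,
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json"
        }
    )
    with urllib.request.urlopen(req) as response:
        reply = json.loads(response.read())["choices"][0]["message"]
    history.append({"role": reply["role"], "content": reply["content"]})
    return reply["content"]

# The caller owns the state; no framework "memory" object required
history = []
print(chat_llm(history, "Name a prime number."))
print(chat_llm(history, "Double it."))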
RAG Without the Overhead
Hidayat gave a talk, "RAG for Small LLM," showing that retrieval-augmented generation works fine without heavyweight frameworks.
The pattern:
# 1. Embed your documents
embeddings = embed(documents)
# 2. Store in any vector database (or just numpy)
index = build_index(embeddings)
# 3. Retrieve relevant chunks
relevant = search(index, query_embedding, k=5)
# 4. Stuff into prompt
prompt = f"Context: {relevant}\n\nQuestion: {query}"
# 5. Call LLM
response = ask_llm(prompt)
Each step is a function. No framework orchestration needed.
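A runnable version of the same pattern, still zero dependencies: a sketch assuming the OpenAI embeddings endpoint (text-embedding-3-small) and brute-force cosine similarity in place of a vector database. The embed, cosine, and search helpers below are illustrative, not a published library; ask_llm is the wrapper from earlier.

import json
import math
import os
import urllib.request

def embed(texts, model="text-embedding-3-small"):
    """Return one embedding vector per input string."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/embeddings",
        data=json.dumps({"model": model, "input": texts}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json"
        }
    )
    with urllib.request.urlopen(req) as response:
        return [item["embedding"] for item in json.loads(response.read())["data"]]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(doc_vectors, documents, query_vector, k=5):
    """Rank documents by cosine similarity to the query; return the top k."""
    ranked = sorted(zip(doc_vectors, documents),
                    key=lambda pair: cosine(pair[0], query_vector),
                    reverse=True)
    return [doc for _, doc in ranked[:k]]

documents = [
    "The sky appears blue because of Rayleigh scattering.",
    "Esprima is a JavaScript parser written in JavaScript.",
    "PhantomJS was a scriptable headless browser."
]
query = "Why is the sky blue?"

relevant = search(embed(documents), documents, embed([query])[0], k=2)
prompt = f"Context: {relevant}\n\nQuestion: {query}"
print(ask_llm(prompt))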
Local LLM Recommendations
Hidayat writes about running LLMs locally. His current picks:
| Tool | Best For |
|---|---|
| LlamaBarn | macOS simplicity (llama.cpp wrapper) |
| LM Studio | Polished UI, model management |
| Jan | Open-source flexibility |
| Ollama | CLI-first workflow |
For local inference, use the same HTTP API pattern. Ollama, LM Studio, and llama.cpp all expose OpenAI-compatible endpoints.
# Same code works for local and cloud
export LLM_API_BASE="http://localhost:11434/v1" # Ollama
# or
export LLM_API_BASE="https://api.openai.com/v1" # OpenAI
Key Takeaways
| Principle | Implementation |
|---|---|
| Skip frameworks while learning | Use raw HTTP calls to understand the API |
| Zero dependencies | Build with standard library first |
| Provider agnostic | Abstract only the base URL and auth |
| Local-first testing | Run Ollama or LM Studio for development |
Next: Linus Lee’s Custom AI Tools