Setup
9 guides covering Setup:
Claude Code Setup Guide
Complete Claude Code setup guide: installation, MCP servers, custom CLAUDE.md instructions, shell aliases, and security configuration
CLAUDE.md Best Practices
Write effective project instructions that Claude reads at every conversation start
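To make the idea concrete, here is a minimal sketch of what a CLAUDE.md might contain — the specific rules and commands are hypothetical placeholders, not recommendations from the guide itself:

```markdown
# Project conventions (Claude reads this at every conversation start)

- Run `npm test` before claiming a change works.
- Use TypeScript strict mode; avoid `any`.
- Never commit directly to `main`; open a PR instead.
- Keep answers short; link to files rather than pasting them.
```

Short, imperative bullets like these tend to survive being read at the start of every conversation better than long prose.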
Customizing the Status Line
Configure Claude Code's status line to display model, git status, tokens, and custom information
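As a rough illustration, Claude Code's status line can be driven by a `statusLine` entry in `settings.json` that points at a script; the script path here is an assumption, and the exact setting shape should be checked against the current Claude Code documentation:

```json
{
  "statusLine": {
    "type": "command",
    "command": "~/.claude/statusline.sh"
  }
}
```

The referenced script receives session context on stdin and prints the line to display, so model name, git branch, or token counts can be composed from any shell tooling.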
Local LLM Runtimes: When to Use Ollama vs vLLM
Ollama excels for single-user development with simple setup. vLLM delivers 20x higher throughput for production multi-user deployments. Choose based on your workload.
Model Quantization: Running 70B Models on a Laptop
Reduce model precision from 32-bit to 4-bit to run large language models locally. Covers k-quants, GGUF, and choosing the right quantization level.
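The memory savings follow directly from bits per weight. A back-of-envelope sketch (weights only — KV cache and runtime overhead come on top, and real k-quants land slightly above their nominal bit width):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage in decimal GB for a given precision."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70B model at common precisions:
for bits in (32, 16, 8, 4):
    print(f"{bits:2d}-bit: ~{weight_memory_gb(70, bits):.0f} GB")
# 32-bit: ~280 GB, 16-bit: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB
```

At 4 bits, 70B parameters fit in roughly 35 GB — within reach of a high-memory laptop, which is the whole point of quantization.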
Quick Start: Pick Your Path
Three ways to build a personal OS today. Choose based on your tools and comfort level: Claude Code for developers, GPT + Zapier for quick setup, or Gemini for Google users.
Running Claude Code in Containers
Isolate agent execution with Docker for security, scalability, and 24/7 operation
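A minimal sketch of such a container, assuming the published `@anthropic-ai/claude-code` npm package; base image and layout are illustrative choices, not the guide's prescribed setup:

```dockerfile
# Hypothetical minimal image for running Claude Code in isolation.
FROM node:20-slim
RUN npm install -g @anthropic-ai/claude-code
WORKDIR /workspace
# Supply credentials at runtime, e.g. docker run -e ANTHROPIC_API_KEY=...
ENTRYPOINT ["claude"]
```

Mounting only the project directory as `/workspace` keeps the agent's filesystem access bounded to what the container is given.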
Running LLMs on Your Hardware with llama.cpp
Build llama.cpp from source, download GGUF models, pick the right quantization, and run a local AI server on Mac, Linux, or Windows.
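The steps above can be sketched as a shell session; flags and paths are illustrative (the `GGML_METAL`/`GGML_CUDA` CMake options apply to recent llama.cpp versions — check the project README for your platform):

```shell
# Build from source (CMake), then serve a local GGUF model.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build                  # add -DGGML_METAL=ON (macOS) or -DGGML_CUDA=ON (NVIDIA)
cmake --build build --config Release
# Point llama-server at a downloaded GGUF file (path is a placeholder):
./build/bin/llama-server -m models/model-Q4_K_M.gguf --port 8080
```

The server exposes an OpenAI-compatible HTTP endpoint on the chosen port, so existing client tooling can talk to the local model.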
Sandboxing & Security for AI Agents
How to isolate AI agents using OS-level sandboxing to prevent unauthorized access and reduce permission fatigue.