Thomas Landgraf's Deep Knowledge Method

Thomas Landgraf is a software developer and entrepreneur based in Hamburg, Germany. He started coding in 1983 on a VIC-20 and currently works at the noventic group. His Substack, “Industrial-Grade Architecture | Research-Lab Curiosity,” documents his experiments with AI-assisted development.
Background
Landgraf founded enercast GmbH, which applies AI to energy forecasting. He was involved with BESOS, an EU research project for smart city energy management. His background spans industrial software, energy systems, and applied machine learning.
His interest in Claude Code grew from frustration with hallucinations. Complex middleware and specialized frameworks tripped up AI assistants. He developed a methodology to fix this: feed the model curated technical knowledge before asking it to code.
The Three-Pillar Approach
Landgraf structures context engineering around three components:
| Pillar | Storage | Purpose |
|---|---|---|
| Project Architecture | CLAUDE.md files | Class hierarchies, patterns, conventions |
| Product Requirements | *.prd.md files | Specifications from user stories |
| Deep Technical Knowledge | *-knowledge.md files | Middleware details, API patterns, algorithms |
The third pillar is his key contribution. Most developers provide project context but skip specialized technical knowledge. This gap causes hallucinations.
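In practice, a repository following this scheme might look roughly like the layout below; the PRD filename is illustrative, not taken from Landgraf's projects.

```
./CLAUDE.md                          # Pillar 1: architecture, patterns, conventions
./docs/device-search.prd.md          # Pillar 2: requirements for one feature (example name)
./docs/ditto-advanced-knowledge.md   # Pillar 3: deep technical knowledge for the middleware
```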
Research-Driven Knowledge Documents
His workflow uses AI research tools to build comprehensive technical references:
Step 1: Initial Deep Research
Use OpenAI’s Deep Research for comprehensive exploration. It takes 7-30 minutes but produces 25-36 page reports with 100+ sources.
Step 2: Validation
Cross-check with Claude Research (2-5 minutes, 7 pages, 20-25 sources). Identify gaps, verify accuracy, add recent examples.
Step 3: Integration
Create master knowledge files that capture nuanced details LLMs typically miss.
Example from his Eclipse Ditto work:
```markdown
# ditto-advanced-knowledge.md

## Policy Structure
- Policies use `<namespace>:<name>` format
- Default permission: deny (implicit)
- Grant statements require explicit subjects

## WoT Thing Models
- Thing descriptions follow W3C spec
- Required fields: @context, @type, title
- Properties map to Ditto features

## RQL Query Syntax
- Filter: eq(attributes/location,"munich")
- Options: sort(+thingId),limit(0,25)
- Escape special chars in values
```
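To show where such a file pays off, here is a minimal Python sketch that exercises the RQL snippets above against Eclipse Ditto's HTTP search API; the host and credentials are placeholders for a local test instance, not part of his setup.

```python
import requests

# Placeholder endpoint and credentials for a local Eclipse Ditto test instance
DITTO = "http://localhost:8080"
AUTH = ("ditto", "ditto")

# RQL filter and options taken verbatim from the knowledge file above
params = {
    "filter": 'eq(attributes/location,"munich")',
    "option": "sort(+thingId),limit(0,25)",
}

resp = requests.get(f"{DITTO}/api/2/search/things", params=params, auth=AUTH)
resp.raise_for_status()
for thing in resp.json()["items"]:
    print(thing["thingId"])
```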
He cautions that one incorrect API pattern in a knowledge file pollutes every future implementation.
File Size Management
Large knowledge documents waste context tokens. His guidelines:
| File Size | Action |
|---|---|
| Under 50KB | Use as-is |
| 50-100KB | Split by topic |
| Over 100KB | Aggressive splitting required |
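A throwaway script (a sketch of mine, not part of his tooling) can audit a repository against these thresholds:

```python
from pathlib import Path

# Size thresholds from the table above, in bytes
THRESHOLDS = [(100_000, "split aggressively"), (50_000, "split by topic"), (0, "use as-is")]

for path in sorted(Path(".").rglob("*-knowledge.md")):
    size = path.stat().st_size
    action = next(advice for limit, advice in THRESHOLDS if size >= limit)
    print(f"{path}  {size / 1024:.0f} KB  ->  {action}")
```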
Keep token usage in check with Claude Code’s /compact and /clear commands.
ShellPromptor
Before Claude Code, Landgraf built ShellPromptor, a shell script for priming LLMs with context. It:
- Copies relevant files to clipboard
- Chunks content for different LLM input limits (default: 65,000 chars)
- Replaces environment variables in paths
- Uses Zsh autocomplete for fast prompt selection
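ShellPromptor itself is a zsh script; as a rough illustration of the same mechanics (path expansion, chunking, clipboard), here is a small Python sketch, not his code:

```python
import os
import subprocess
import sys
from pathlib import Path

CHUNK = 65_000  # default chunk size, matching the limit mentioned above

def prime(paths):
    """Concatenate files (env vars in paths expanded) and split into LLM-sized chunks."""
    text = ""
    for raw in paths:
        p = Path(os.path.expandvars(raw)).expanduser()
        text += f"\n### {p} ###\n{p.read_text()}"
    return [text[i:i + CHUNK] for i in range(0, len(text), CHUNK)] or [""]

if __name__ == "__main__":
    chunks = prime(sys.argv[1:])
    # pbcopy is macOS-specific; swap in xclip or wl-copy on Linux
    subprocess.run(["pbcopy"], input=chunks[0], text=True)
    print(f"{len(chunks)} chunk(s) prepared; the first one is on the clipboard")
```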
From his blog post:
“I often find myself starting a new chat with an LLM and needing it to understand the background before it can provide meaningful help.”
AskMeMCP
His AskMeMCP tool adds human-in-the-loop checkpoints to AI workflows. It pauses execution and requests feedback through a web interface.
Tools available:
- `ask-one-question` - Free-form input
- `ask-multiple-choice` - Selection from options
- `challenge-hypothesis` - Validate assumptions
- `choose-next` - Decision workflows
Install with: npx --yes ask-me-mcp
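One way to make the server available inside Claude Code is a project-level `.mcp.json` entry along these lines (the server name here is arbitrary):

```json
{
  "mcpServers": {
    "ask-me": {
      "command": "npx",
      "args": ["--yes", "ask-me-mcp"]
    }
  }
}
```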
Filesystem Project Management
Landgraf replaced JIRA with a 600-line Claude Code prompt. His insight:
“LLMs consume and produce text. Unix treats everything as text. When you combine these two facts, you unlock a universal interface.”
The system stores issues as markdown files in a directory structure:
```
./ProjectMgmt/
  open/    # New issues
  wip/     # Work in progress
  closed/  # Completed
```
Custom commands /openIssue and /finishIssue manage the workflow. Claude logs timestamps, modified files, and executed commands in each issue file.
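The actual commands are Claude Code slash commands driven by his prompt; the underlying file mechanics amount to something like this Python sketch (function name mine):

```python
from datetime import datetime
from pathlib import Path

ROOT = Path("./ProjectMgmt")

def start_work(issue_file: str) -> Path:
    """Move an issue from open/ to wip/ and append a timestamped log entry,
    roughly what an /openIssue-style command records."""
    src = ROOT / "open" / issue_file
    dst = ROOT / "wip" / issue_file
    dst.parent.mkdir(parents=True, exist_ok=True)
    src.rename(dst)
    with dst.open("a") as f:
        f.write(f"\n- {datetime.now():%Y-%m-%d %H:%M} moved to wip\n")
    return dst
```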
Memory Architecture
He documents Claude Code’s three-tier memory system:
| Tier | Path | Scope |
|---|---|---|
| Project | ./CLAUDE.md | Team-shared, version controlled |
| Local | ./CLAUDE.local.md | Individual developer, gitignored |
| User | ~/.claude/CLAUDE.md | Cross-project preferences |
Subdirectory memory files load only when Claude accesses those directories. This prevents token waste on irrelevant codebase sections.
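Put together, a checkout might carry memory files like this (the backend/ subdirectory is only an example):

```
~/.claude/CLAUDE.md      # user tier: cross-project preferences
./CLAUDE.md              # project tier: committed, team-shared
./CLAUDE.local.md        # local tier: gitignored, per developer
./backend/CLAUDE.md      # loaded only when Claude works inside backend/
```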
Key Takeaways
| Principle | Implementation |
|---|---|
| Front-load technical knowledge | Create *-knowledge.md files for specialized frameworks |
| Split large contexts | Keep files under 50KB |
| Validate research outputs | Cross-check Deep Research with Claude Research |
| Audit memory regularly | Compare documented patterns against actual code |
| Log implementation details | Use filesystem-based issue tracking |