Thomas Landgraf's Deep Knowledge Method


Thomas Landgraf is a software developer and entrepreneur based in Hamburg, Germany. He started coding in 1983 on a VIC-20 and currently works at the noventic group. His Substack, “Industrial-Grade Architecture | Research-Lab Curiosity,” documents his experiments with AI-assisted development.

Substack | GitHub | X | LinkedIn

Background

Landgraf founded enercast GmbH, which applies AI to energy forecasting. He was involved with BESOS, an EU research project for smart city energy management. His background spans industrial software, energy systems, and applied machine learning.

His interest in Claude Code grew from frustration with hallucinations. Complex middleware and specialized frameworks tripped up AI assistants. He developed a methodology to fix this: feed the model curated technical knowledge before asking it to code.

The Three-Pillar Approach

Landgraf structures context engineering around three components:

| Pillar | Storage | Purpose |
| --- | --- | --- |
| Project Architecture | CLAUDE.md files | Class hierarchies, patterns, conventions |
| Product Requirements | *.prd.md files | Specifications from user stories |
| Deep Technical Knowledge | *-knowledge.md files | Middleware details, API patterns, algorithms |

The third pillar is his key contribution. Most developers provide project context but skip specialized technical knowledge. This gap causes hallucinations.
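
A repository organized around the three pillars might look like the following. This is a hypothetical layout; apart from CLAUDE.md, the specific file names are illustrative:

```
my-project/
├── CLAUDE.md                      # Pillar 1: architecture, patterns, conventions
├── telemetry-service.prd.md       # Pillar 2: product requirements
├── ditto-advanced-knowledge.md    # Pillar 3: deep technical knowledge
└── src/
    └── ...
```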

Research-Driven Knowledge Documents

His workflow uses AI research tools to build comprehensive technical references:

Step 1: Initial Deep Research

Use OpenAI’s Deep Research for comprehensive exploration. It takes 7-30 minutes but produces 25-36 page reports with 100+ sources.

Step 2: Validation

Cross-check with Claude Research (2-5 minutes, 7 pages, 20-25 sources). Identify gaps, verify accuracy, add recent examples.

Step 3: Integration

Create master knowledge files that capture nuanced details LLMs typically miss.

Example from his Eclipse Ditto work:

```markdown
# ditto-advanced-knowledge.md

## Policy Structure
- Policies use `<namespace>:<name>` format
- Default permission: deny (implicit)
- Grant statements require explicit subjects

## WoT Thing Models
- Thing descriptions follow W3C spec
- Required fields: @context, @type, title
- Properties map to Ditto features

## RQL Query Syntax
- Filter: eq(attributes/location,"munich")
- Options: sort(+thingId),limit(0,25)
- Escape special chars in values
```
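
The RQL notes above map directly onto Ditto's things-search endpoint (`/api/2/search/things`, which takes `filter` and `option` query parameters). A minimal sketch of assembling such a query URL; the helper name and the local base URL are assumptions:

```python
from urllib.parse import urlencode

def ditto_search_url(base, filter_expr, options=None):
    """Build a Ditto things-search URL from an RQL filter and options."""
    params = {"filter": filter_expr}
    if options:
        # Multiple options are comma-separated in a single parameter
        params["option"] = ",".join(options)
    return f"{base}/api/2/search/things?{urlencode(params)}"

url = ditto_search_url(
    "http://localhost:8080",
    'eq(attributes/location,"munich")',
    options=["sort(+thingId)", "limit(0,25)"],
)
print(url)
```

Note that `urlencode` handles the "escape special chars" rule from the knowledge file: quotes, parentheses, and slashes in the filter expression are percent-encoded automatically.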

His rule of thumb: a single incorrect API pattern in a knowledge file pollutes every implementation built on top of it.

File Size Management

Large knowledge documents waste context tokens. His guidelines:

| File Size | Action |
| --- | --- |
| Under 50KB | Use as-is |
| 50-100KB | Split by topic |
| Over 100KB | Aggressive splitting required |

Manage context consumption with Claude Code’s /compact and /clear commands.
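
These thresholds are straightforward to automate. A minimal sketch (the function name is ours) that maps a knowledge file to the action in the table above:

```python
import os

def knowledge_file_action(path):
    """Classify a knowledge file by size, per the 50KB/100KB guidelines."""
    size_kb = os.path.getsize(path) / 1024
    if size_kb < 50:
        return "use as-is"
    if size_kb <= 100:
        return "split by topic"
    return "aggressive splitting required"
```

Running this over a directory of *-knowledge.md files flags the candidates for splitting before they start wasting context tokens.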

ShellPromptor

Before Claude Code, Landgraf built ShellPromptor, a shell script for priming LLMs with context before a conversation starts.

From his blog post:

“I often find myself starting a new chat with an LLM and needing it to understand the background before it can provide meaningful help.”

AskMeMCP

His AskMeMCP tool adds human-in-the-loop checkpoints to AI workflows. It pauses execution and requests feedback through a web interface.


Install with: `npx --yes ask-me-mcp`
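
Assuming the standard MCP server registration format that Claude Code reads from a project's .mcp.json file, wiring up AskMeMCP might look like this (the server name "ask-me" is illustrative):

```json
{
  "mcpServers": {
    "ask-me": {
      "command": "npx",
      "args": ["--yes", "ask-me-mcp"]
    }
  }
}
```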

Filesystem Project Management

Landgraf replaced JIRA with a 600-line Claude Code prompt. His insight:

“LLMs consume and produce text. Unix treats everything as text. When you combine these two facts, you unlock a universal interface.”

The system stores issues as markdown files in a directory structure:

```
./ProjectMgmt/
  open/      # New issues
  wip/       # Work in progress
  closed/    # Completed
```

Custom commands /openIssue and /finishIssue manage the workflow. Claude logs timestamps, modified files, and executed commands in each issue file.
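
The mechanics behind that workflow can be sketched in a few lines. These helper names are ours, standing in for the behavior the /openIssue and /finishIssue commands are described as performing:

```python
from pathlib import Path
from datetime import datetime, timezone

ROOT = Path("ProjectMgmt")

def open_issue(slug, title):
    """Create a new markdown issue in open/ (mirrors /openIssue)."""
    for state in ("open", "wip", "closed"):
        (ROOT / state).mkdir(parents=True, exist_ok=True)
    path = ROOT / "open" / f"{slug}.md"
    stamp = datetime.now(timezone.utc).isoformat()
    path.write_text(f"# {title}\n\n## Log\n- {stamp} opened\n")
    return path

def move_issue(slug, src, dst, note):
    """Move an issue between states, appending a timestamped log line."""
    old = ROOT / src / f"{slug}.md"
    new = ROOT / dst / f"{slug}.md"
    stamp = datetime.now(timezone.utc).isoformat()
    new.write_text(old.read_text() + f"- {stamp} {note}\n")
    old.unlink()
    return new
```

Because every issue is plain markdown on disk, Claude can read, grep, and update the tracker with the same file tools it already uses on the codebase.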

Memory Architecture

He documents Claude Code’s three-tier memory system:

| Tier | Path | Scope |
| --- | --- | --- |
| Project | ./CLAUDE.md | Team-shared, version controlled |
| Local | ./CLAUDE.local.md | Individual developer, gitignored |
| User | ~/.claude/CLAUDE.md | Cross-project preferences |

Subdirectory memory files load only when Claude accesses those directories. This prevents token waste on irrelevant codebase sections.

Key Takeaways

| Principle | Implementation |
| --- | --- |
| Front-load technical knowledge | Create *-knowledge.md files for specialized frameworks |
| Split large contexts | Keep files under 50KB |
| Validate research outputs | Cross-check Deep Research with Claude Research |
| Audit memory regularly | Compare documented patterns against actual code |
| Log implementation details | Use filesystem-based issue tracking |

Next: Jesse Vincent’s Superpowers Framework

Topics: claude-code context-engineering ai-coding workflow