tdd
Introduces commands for test-driven development, detection of common testing anti-patterns, and skills for testing with subagents.
Installation
npx claude-plugins install @NeoLabHQ/context-engineering-kit/tdd
Contents
Folders: commands, skills
Files: README.md
Documentation
A disciplined approach to software development that ensures every line of production code is validated by tests written first. Introduces TDD methodology, anti-pattern detection, and orchestrated test coverage using specialized agents.
Focused on:
- Test-first development - Write tests before implementation, ensuring every feature is verified
- Red-Green-Refactor cycle - Systematic approach that builds confidence through failing tests
- Anti-pattern detection - Identifies common testing mistakes like mock abuse and test-only methods
- Agent-orchestrated coverage - Parallel test writing using specialized subagents for complex changes
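The anti-patterns the plugin flags can be hard to spot in review. As an illustrative sketch (hypothetical `apply_discount` function, not part of the plugin), this is what "mock abuse" looks like next to a test that actually verifies behavior:

```python
# Hypothetical example of the "mock abuse" anti-pattern: the test mocks
# the very logic it claims to verify, so it can never fail.
from unittest.mock import Mock

def apply_discount(price: float, rate: float) -> float:
    """Production code under test."""
    return round(price * (1 - rate), 2)

def test_discount_mock_abuse():
    # ANTI-PATTERN: mocking the unit under test proves nothing;
    # this passes even if apply_discount is completely broken.
    discounter = Mock(return_value=90.0)
    assert discounter(100.0, 0.1) == 90.0

def test_discount_behavior():
    # BETTER: exercise the real code and assert on observable output.
    assert apply_discount(100.0, 0.1) == 90.0
```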
Plugin Target
- Prevent regressions - Every change is backed by tests that catch future breaks
- Improve design quality - Hard-to-test code reveals design problems early
- Build confidence - Watching tests fail then pass proves they actually test something
- Accelerate development - TDD is faster than debugging untested code in production
Overview
The TDD plugin implements Kent Beck’s Test-Driven Development methodology, proven over two decades to produce higher-quality, more maintainable software. The core principle is simple but transformative: write the test first, watch it fail, then write minimal code to pass.
The plugin is based on foundational works including Kent Beck’s Test-Driven Development: By Example and the extensive research on TDD effectiveness.
Quick Start
# Install the plugin
/plugin install tdd@NeoLabHQ/context-engineering-kit
> claude "Use TDD skill to implement email validation for user registration"
# Manually make some changes that cause test failures
# Fix failing tests
> /tdd:fix-tests
After Implementation
If you implemented a new feature but have not written tests, you can use the write-tests command to cover it.
> claude "implement email validation for user registration"
# Write tests after you made changes
> /tdd:write-tests
Commands Overview
/tdd:write-tests - Cover Local Changes with Tests
Systematically add test coverage for all local code changes using specialized review and development agents.
- Purpose - Ensure comprehensive test coverage for new or modified code
- Output - New test files covering all critical business logic
/tdd:write-tests ["focus area or modules"]
Arguments
Optional focus area specification. Defaults to all uncommitted changes. If everything is committed, covers the latest commit.
How It Works
Preparation Phase
- Discovers test infrastructure (test commands, coverage tools)
- Runs full test suite to establish baseline
- Reads project conventions and patterns
Analysis Phase (parallel)
- Verifies single test execution capability
- Analyzes local changes via git status or the latest commit
- Filters non-code files and identifies logic changes
- Assesses complexity to determine workflow path
Test Writing Phase
- Simple changes (single file, straightforward logic): Writes tests directly
- Complex changes (multiple files or complex logic): Orchestrates specialized agents
- Coverage reviewer agents analyze each file for test needs
- Developer agents write comprehensive tests in parallel
- Verification agents confirm coverage completeness
Verification Phase
- Runs full test suite
- Generates coverage report if available
- Iterates on gaps until all critical logic is covered
Complexity Decision:
- 1 simple file: Write tests directly
- 2+ files or complex logic: Orchestrate parallel agents
Usage Examples
# Cover all uncommitted changes
> /tdd:write-tests
# Focus on specific module
> /tdd:write-tests Focus on payment processing edge cases
# Cover authentication changes
> /tdd:write-tests authentication module
# Focus on error handling
> /tdd:write-tests Focus on error paths and validations
Best Practices
- Run before committing - Ensure all changes have test coverage before commit
- Be specific - Provide focus areas for more targeted test generation
- Review generated tests - Verify tests actually test behavior, not implementation
- Iterate on gaps - Re-run if coverage reviewer identifies missing cases
- Prioritize critical logic - Not every line needs 100% coverage, focus on business logic
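When reviewing generated tests, the key check is whether they assert on the public contract or on internal details. A minimal sketch (hypothetical `Cart` class, not from the plugin) of the difference:

```python
# Contrast for "test behavior, not implementation": the first test breaks
# on harmless refactors; the second breaks only when behavior changes.

class Cart:
    def __init__(self):
        self._items = []          # internal detail

    def add(self, price: float):
        self._items.append(price)

    def total(self) -> float:     # observable behavior
        return sum(self._items)

def test_implementation_detail():
    cart = Cart()
    cart.add(5.0)
    # FRAGILE: asserts on private storage; switching _items to another
    # data structure breaks this test without changing behavior.
    assert cart._items == [5.0]

def test_behavior():
    cart = Cart()
    cart.add(5.0)
    cart.add(2.5)
    # ROBUST: asserts only on the public contract.
    assert cart.total() == 7.5
```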
/tdd:fix-tests - Fix Failing Tests
Systematically fix all failing tests after business logic changes or refactoring using orchestrated agents.
- Purpose - Update tests to match current business logic after changes
- Output - Fixed tests that pass while preserving test intent
/tdd:fix-tests ["focus area or modules"]
Arguments
Optional specification of which tests or modules to focus on. Defaults to all failing tests.
How It Works
- Discovery Phase
- Reads test infrastructure configuration
- Runs full
…(truncated)
Included Skills
This plugin includes 1 skill definition:
test-driven-development
Use when implementing any feature or bugfix, before writing implementation code - write the test first, watch it fail, write minimal code to pass; ensures tests actually verify behavior by requiring failure first
Test-Driven Development (TDD)
Overview
Write the test first. Watch it fail. Write minimal code to pass.
Core principle: If you didn’t watch the test fail, you don’t know if it tests the right thing.
Violating the letter of the rules is violating the spirit of the rules.
When to Use
Always:
- New features
- Bug fixes
- Refactoring
- Behavior changes
Exceptions (ask your human partner):
- Throwaway prototypes
- Generated code
- Configuration files
Thinking “skip TDD just this once”? Stop. That’s rationalization.
The Iron Law
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
Write code before the test? Delete it. Start over.
No exceptions:
- Don’t keep it as “reference”
- Don’t “adapt” it while writing tests
- Don’t look at it
- Delete means delete
Implement fresh from tests. Period.
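Applied to the email-validation example from the Quick Start, one red-green pass might look like this (the `is_valid_email` name and regex are illustrative assumptions, not prescribed by the skill):

```python
import re

# STEP 1 (RED): write these tests first and run them before
# is_valid_email exists -- they must fail for the right reason.
def test_rejects_missing_at_sign():
    assert not is_valid_email("alice.example.com")

def test_accepts_simple_address():
    assert is_valid_email("alice@example.com")

# STEP 2 (GREEN): only after watching the failure, write the
# minimal code that makes both tests pass.
def is_valid_email(address: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None
```

Refactoring (the third step) happens only once both tests are green, with the tests re-run after every change.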
Red-Green-Refactor
digraph tdd_cycle {
rankdir=LR;
red [label="RED\nWrite failing test", shape=box, style=filled, fillcolor="#ffcccc"];
verify_red [label="Verify fails\ncorrectly", shape=diamond];
green [label="GREEN\nMinimal code", shape=box, style=filled, fillcolor="#ccffcc"];
verify_green [label="Verify passes\nAll green", shape=diamond];
refactor [label="REFACTOR\nClean up", shape=box, style=filled, fillcolor="#ccccff"];
next [label="Next", shape=ellipse];
red -> verify_red;
verify_red -> green [label="yes"];
verify_red -> red [label="wrong\nfailure"];
green -> verify_green;
verify_g
...(truncated)
## Source
[View on GitHub](https://github.com/NeoLabHQ/context-engineering-kit)