# Auto-Activating Skills in Claude Code
You install a plugin with ten skills. A week later, you’ve used two of them. The rest sit forgotten because you have to remember they exist.
This is the core problem with manual skill invocation. The solution: make skills activate themselves.
## Why Manual Activation Fails
| Problem | What happens |
|---|---|
| Cognitive load | You forget which skills you have |
| Context switching | Stopping to type /skill-name breaks flow |
| Discovery gap | New team members don’t know what’s available |
| Inconsistent usage | Some requests get skill assistance, others don’t |
Jeremy Longshore’s Claude Plugins Hub addresses this by embedding auto-activation triggers into every skill. When you mention “deploy to production,” the relevant DevOps skills load automatically.
## Three Trigger Mechanisms
Claude Code supports three ways to trigger skills automatically:
| Mechanism | How it works | Best for |
|---|---|---|
| Description matching | Skill description field matches user input | General purpose |
| Hooks | Shell commands run before/after tool use | File-pattern triggers |
| Prompt injection | Hook appends skill recommendations to prompts | Guaranteed activation |
### Description Matching
The simplest approach. Claude reads your skill’s description field and loads it when your request matches.
```yaml
---
name: code-review
description: "Review code for security vulnerabilities, bugs, and performance issues"
---
```
When you type “review this file for security issues,” Claude matches the description and loads the skill.
Limitations:
- Relies on Claude’s judgment
- Not deterministic
- May miss edge cases
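One mitigation is to treat the description as the matching surface and pack it with the phrases people actually type. A sketch of a trigger-rich description (the wording here is illustrative, not taken from any published skill):

```yaml
---
name: code-review
description: "Use when asked to review code, audit a diff, check a PR, or find security vulnerabilities, bugs, or performance issues"
---
```

Concrete strings like "review code" and "check a PR" give Claude something literal to match, instead of an abstract capability statement.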
### Hooks with File Patterns
Use hooks to trigger skills based on file types:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "if [[ \"$CLAUDE_FILE_PATH\" =~ \\.tsx?$ ]]; then echo 'Use typescript-patterns skill'; fi"
          }
        ]
      }
    ]
  }
}
```
This fires every time Claude edits a TypeScript file. The echo output gets injected into Claude’s context.
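The same shape works for any path-based trigger. A sketch that recommends the `database-migrations` skill referenced later in this guide whenever Claude touches a file under `prisma/` (it assumes the same `$CLAUDE_FILE_PATH` variable as the hook above):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "if [[ \"$CLAUDE_FILE_PATH\" == *prisma/* ]]; then echo 'Use database-migrations skill'; fi"
          }
        ]
      }
    ]
  }
}
```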
### Prompt Injection Hooks
The most reliable method. A UserPromptSubmit hook intercepts your message and appends skill recommendations:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python3 ~/.claude/hooks/skill-recommender.py"
          }
        ]
      }
    ]
  }
}
```
The script analyzes your prompt and outputs skill suggestions:
```python
#!/usr/bin/env python3
import os

# The prompt text arrives via the environment; match it against keyword groups
prompt = os.environ.get('CLAUDE_USER_PROMPT', '').lower()

recommendations = []

if any(word in prompt for word in ['deploy', 'release', 'ship']):
    recommendations.append('devops-deploy')
if any(word in prompt for word in ['test', 'spec', 'coverage']):
    recommendations.append('testing-patterns')
if any(word in prompt for word in ['review', 'audit', 'check']):
    recommendations.append('code-review')

# Anything printed to stdout is appended to the prompt Claude sees
if recommendations:
    print(f"\n[RECOMMENDED SKILLS: {', '.join(recommendations)}]")
```
Claude sees your message plus the recommendations. It can’t ignore something that’s already in its context.
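As the keyword list grows, one option is to move the mapping into a separate JSON file so teammates can add skills without editing the script. A minimal sketch, assuming the same `CLAUDE_USER_PROMPT` variable and a hypothetical `~/.claude/hooks/skill-map.json`:

```python
#!/usr/bin/env python3
# Sketch: keyword -> skill mapping loaded from a JSON file, e.g.
# {"devops-deploy": ["deploy", "release", "ship"], "code-review": ["review", "audit"]}
import json
import os
from pathlib import Path

MAP_PATH = Path.home() / ".claude" / "hooks" / "skill-map.json"

prompt = os.environ.get("CLAUDE_USER_PROMPT", "").lower()
skill_map = json.loads(MAP_PATH.read_text()) if MAP_PATH.exists() else {}

# Recommend every skill whose keywords appear in the prompt
recommendations = [
    skill for skill, keywords in skill_map.items()
    if any(word in prompt for word in keywords)
]

if recommendations:
    print(f"\n[RECOMMENDED SKILLS: {', '.join(recommendations)}]")
```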
## Building a Trigger-Based Skill
Create a skill that documents its own triggers:
```bash
mkdir -p .claude/skills/api-design
```

Write `.claude/skills/api-design/skill.md`:
````markdown
---
name: api-design
description: "Design REST APIs with OpenAPI spec, versioning, and error handling patterns"
---

# API Design Skill

## Auto-Activation Triggers

This skill activates when you mention:

- "design an API"
- "REST endpoint"
- "OpenAPI spec"
- "API versioning"

## Process

1. Clarify resource requirements
2. Define endpoint structure
3. Document request/response schemas
4. Add error codes and examples
5. Generate OpenAPI YAML

## Output Format

| Method | Endpoint | Description |
|--------|----------|-------------|

## Versioning Strategy

| Pattern | When to use |
|---------|-------------|
| URL path (`/v1/`) | Breaking changes expected |
| Header | Multiple versions active |
| Query param | Simple APIs |

## Error Response Schema

```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Human readable message",
    "details": []
  }
}
```
````
The `description` field does the heavy lifting. When you say "I need to design an API for user management," Claude matches it.
## Keyword Triggers in Frontmatter
Some frameworks support explicit trigger keywords. Define them directly:
```yaml
---
name: security-audit
description: "Audit code for security vulnerabilities"
triggers:
  - security
  - vulnerability
  - CVE
  - audit
  - penetration
---
```
Not all Claude Code versions support `triggers` natively, but description matching handles most cases.
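If you want the `triggers` list honored anyway, a `UserPromptSubmit` hook can read it straight out of the frontmatter. A rough sketch that scans project skills with plain line parsing (no YAML dependency), again assuming the `CLAUDE_USER_PROMPT` variable used earlier:

```python
#!/usr/bin/env python3
# Sketch: scan .claude/skills/*/skill.md frontmatter for a `triggers:` list
# and recommend any skill whose trigger word appears in the prompt.
import os
from pathlib import Path

prompt = os.environ.get("CLAUDE_USER_PROMPT", "").lower()
matched = []

for skill_file in Path(".claude/skills").glob("*/skill.md"):
    triggers, in_triggers = [], False
    for line in skill_file.read_text().splitlines():
        if line.strip() == "triggers:":
            in_triggers = True
        elif in_triggers and line.strip().startswith("- "):
            triggers.append(line.strip()[2:].strip().lower())
        elif in_triggers:
            break  # stop at the first line that is not part of the list
    if any(t in prompt for t in triggers):
        matched.append(skill_file.parent.name)

if matched:
    print(f"\n[RECOMMENDED SKILLS: {', '.join(matched)}]")
```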
## Combining Triggers
Layer multiple mechanisms for reliability:
- Description for natural language matching
- Hooks for file pattern triggers
- CLAUDE.md for explicit project rules
In your project’s CLAUDE.md:
```markdown
## Skill Usage

When working with:

- `.tsx` files: Use `react-patterns` skill
- `prisma/` directory: Use `database-migrations` skill
- CI config: Use `devops-deploy` skill
```
This creates redundancy. Even if one mechanism misses, another catches it.
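A sketch of what the layered hook configuration might look like in a hooks settings file such as `.claude/settings.json`, reusing the recommender script and the file-pattern hook from earlier (skill names follow the CLAUDE.md rules above):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "python3 ~/.claude/hooks/skill-recommender.py" }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "if [[ \"$CLAUDE_FILE_PATH\" =~ \\.tsx?$ ]]; then echo 'Use react-patterns skill'; fi"
          }
        ]
      }
    ]
  }
}
```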
## Common Mistakes
| Mistake | Why it fails | Fix |
|---|---|---|
| Vague descriptions | Claude can’t match intent | Use specific trigger words |
| Too many triggers | Everything activates everything | Narrow scope per skill |
| No fallback | Single point of failure | Layer mechanisms |
| Complex hook scripts | Slow, fragile | Keep hooks under 10 lines |
| Conflicting skills | Multiple skills claim same trigger | Namespace skills clearly |
## Debugging Activation
Check if a skill activated:
```bash
claude "What skills are you using for this session?"
```
Or add logging to your hook:
```json
{
  "command": "echo \"[$(date)] Prompt: $CLAUDE_USER_PROMPT\" >> ~/.claude/trigger.log && python3 ~/.claude/hooks/skill-recommender.py"
}
```
Review the log to see which prompts trigger which skills.
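To turn that log into a summary, a short script can replay each logged prompt through the same keyword rules and count would-be activations (the keyword map mirrors the recommender sketch above):

```python
#!/usr/bin/env python3
# Sketch: replay ~/.claude/trigger.log through the keyword rules used by
# skill-recommender.py and count how often each skill would activate.
from collections import Counter
from pathlib import Path

KEYWORDS = {
    "devops-deploy": ["deploy", "release", "ship"],
    "testing-patterns": ["test", "spec", "coverage"],
    "code-review": ["review", "audit", "check"],
}

log = Path.home() / ".claude" / "trigger.log"
lines = log.read_text().splitlines() if log.exists() else []

counts = Counter()
for line in lines:
    # Log lines look like "[<date>] Prompt: <text>"
    prompt = line.split("Prompt:", 1)[-1].lower()
    for skill, words in KEYWORDS.items():
        if any(w in prompt for w in words):
            counts[skill] += 1

for skill, n in counts.most_common():
    print(f"{skill}: {n} prompts")
```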
## Project vs Global Skills
| Scope | Triggers | Use case |
|---|---|---|
| Project (`.claude/skills/`) | Match project context | Team workflows |
| Global (`~/.claude/skills/`) | Match general patterns | Personal tools |
Project skills override global ones with the same name. Use this to customize triggers per project.
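For example, a project could ship its own `code-review` skill with a description scoped to that codebase; the project copy wins over the global one (the wording below is illustrative):

```bash
# Project-local override: same skill name as the global one, narrower description
mkdir -p .claude/skills/code-review
cat > .claude/skills/code-review/skill.md <<'EOF'
---
name: code-review
description: "Review changes in this repo for security issues and adherence to the team style guide"
---
EOF
```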
Next: The Skills System