# Auto-Activating Skills in Claude Code

You install a plugin with ten skills. A week later, you’ve used two of them. The rest sit forgotten because you have to remember they exist.

This is the core problem with manual skill invocation. The solution: make skills activate themselves.

## Why Manual Activation Fails

| Problem | What happens |
|---------|--------------|
| Cognitive load | You forget which skills you have |
| Context switching | Stopping to type `/skill-name` breaks flow |
| Discovery gap | New team members don't know what's available |
| Inconsistent usage | Some requests get skill assistance, others don't |

Jeremy Longshore’s Claude Plugins Hub addresses this by embedding auto-activation triggers into every skill. When you mention “deploy to production,” the relevant DevOps skills load automatically.

## Three Trigger Mechanisms

Claude Code supports three ways to trigger skills automatically:

| Mechanism | How it works | Best for |
|-----------|--------------|----------|
| Description matching | Skill description field matches user input | General purpose |
| Hooks | Shell commands run before/after tool use | File-pattern triggers |
| Prompt injection | Hook appends skill recommendations to prompts | Guaranteed activation |

### Description Matching

The simplest approach. Claude reads your skill’s description field and loads it when your request matches.

```yaml
---
name: code-review
description: "Review code for security vulnerabilities, bugs, and performance issues"
---
```

When you type “review this file for security issues,” Claude matches the description and loads the skill.
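The matching itself is model-driven, not an exact algorithm, but the intuition is word overlap between your request and the description. A toy sketch of that intuition (illustrative only, not Claude's actual mechanism):

```python
def description_score(description: str, request: str) -> float:
    """Toy overlap score: what fraction of the request's words appear
    in the skill description. Illustrative only -- Claude's real
    matching is model-driven, not a word count."""
    desc_words = set(description.lower().split())
    request_words = set(request.lower().split())
    if not request_words:
        return 0.0
    return len(desc_words & request_words) / len(request_words)

desc = "Review code for security vulnerabilities, bugs, and performance issues"
print(description_score(desc, "review this file for security issues"))
```

The practical takeaway: the more specific, concrete words your description shares with real requests, the more reliably it matches.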

Limitations:

- Matching is probabilistic: Claude decides whether your phrasing fits the description, so activation is never guaranteed.
- Vague, generic descriptions rarely match; specific trigger words work better.
- A rephrased request ("harden this code") can miss even when the skill applies.

### Hooks with File Patterns

Use hooks to trigger skills based on file types:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "if [[ \"$CLAUDE_FILE_PATH\" =~ \\.tsx?$ ]]; then echo 'Use typescript-patterns skill'; fi"
          }
        ]
      }
    ]
  }
}
```

This fires every time Claude edits a TypeScript file. The echo output gets injected into Claude’s context.
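Once the rules outgrow a bash one-liner, the same check can move into a short script. A sketch, assuming the same `CLAUDE_FILE_PATH` variable as above and hypothetical skill names:

```python
#!/usr/bin/env python3
import os
import re

# Hypothetical rule table: regex on the edited file's path -> skill name.
RULES = [
    (r"\.tsx?$", "typescript-patterns"),
    (r"(^|/)prisma/", "database-migrations"),
]

def skills_for(path: str) -> list[str]:
    """Return every skill whose pattern matches the path."""
    return [skill for pattern, skill in RULES if re.search(pattern, path)]

if __name__ == "__main__":
    for skill in skills_for(os.environ.get("CLAUDE_FILE_PATH", "")):
        print(f"Use {skill} skill")
```

Register it the same way as the bash version, with `"command": "python3 path/to/hook.py"`.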

### Prompt Injection Hooks

The most reliable method. A UserPromptSubmit hook intercepts your message and appends skill recommendations:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python3 ~/.claude/hooks/skill-recommender.py"
          }
        ]
      }
    ]
  }
}
```

The script analyzes your prompt and outputs skill suggestions:

```python
#!/usr/bin/env python3
import os

prompt = os.environ.get('CLAUDE_USER_PROMPT', '').lower()

recommendations = []

if any(word in prompt for word in ['deploy', 'release', 'ship']):
    recommendations.append('devops-deploy')

if any(word in prompt for word in ['test', 'spec', 'coverage']):
    recommendations.append('testing-patterns')

if any(word in prompt for word in ['review', 'audit', 'check']):
    recommendations.append('code-review')

if recommendations:
    print(f"\n[RECOMMENDED SKILLS: {', '.join(recommendations)}]")
```

Claude sees your message plus the recommendations. It can’t ignore something that’s already in its context.
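One caveat: substring checks misfire, since "latest" contains "test". A variant with word boundaries avoids those false positives; a sketch using the same trigger words:

```python
import re

# Same trigger words as the recommender script, keyed by skill.
TRIGGERS = {
    "devops-deploy": ["deploy", "release", "ship"],
    "testing-patterns": ["test", "spec", "coverage"],
    "code-review": ["review", "audit", "check"],
}

def recommend(prompt: str) -> list[str]:
    # \b word boundaries stop "latest" from triggering on "test".
    return [
        skill
        for skill, words in TRIGGERS.items()
        if any(re.search(rf"\b{re.escape(w)}\b", prompt, re.IGNORECASE)
               for w in words)
    ]

print(recommend("deploy the latest build"))  # -> ['devops-deploy']
```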

## Building a Trigger-Based Skill

Create a skill that documents its own triggers:

```bash
mkdir -p .claude/skills/api-design
```

Write .claude/skills/api-design/skill.md:

---
name: api-design
description: "Design REST APIs with OpenAPI spec, versioning, and error handling patterns"
---

# API Design Skill

## Auto-Activation Triggers

This skill activates when you mention:
- "design an API"
- "REST endpoint"
- "OpenAPI spec"
- "API versioning"

## Process

1. Clarify resource requirements
2. Define endpoint structure
3. Document request/response schemas
4. Add error codes and examples
5. Generate OpenAPI YAML

## Output Format

| Method | Endpoint | Description |
|--------|----------|-------------|

## Versioning Strategy

| Pattern | When to use |
|---------|-------------|
| URL path (`/v1/`) | Breaking changes expected |
| Header | Multiple versions active |
| Query param | Simple APIs |

## Error Response Schema

```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Human readable message",
    "details": []
  }
}
```

The `description` field does the heavy lifting. When you say "I need to design an API for user management," Claude matches it.

## Keyword Triggers in Frontmatter

Some frameworks support explicit trigger keywords. Define them directly:

```yaml
---
name: security-audit
description: "Audit code for security vulnerabilities"
triggers:
  - security
  - vulnerability
  - CVE
  - audit
  - penetration
---
```

Not all Claude Code versions support `triggers` natively, but description matching handles most cases.

## Combining Triggers

Layer multiple mechanisms for reliability:

  1. Description for natural language matching
  2. Hooks for file pattern triggers
  3. CLAUDE.md for explicit project rules

In your project’s CLAUDE.md:

```markdown
## Skill Usage

When working with:
- `.tsx` files: Use `react-patterns` skill
- `prisma/` directory: Use `database-migrations` skill
- CI config: Use `devops-deploy` skill
```

This creates redundancy. Even if one mechanism misses, another catches it.

## Common Mistakes

| Mistake | Why it fails | Fix |
|---------|--------------|-----|
| Vague descriptions | Claude can't match intent | Use specific trigger words |
| Too many triggers | Everything activates everything | Narrow scope per skill |
| No fallback | Single point of failure | Layer mechanisms |
| Complex hook scripts | Slow, fragile | Keep hooks under 10 lines |
| Conflicting skills | Multiple skills claim same trigger | Namespace skills clearly |

## Debugging Activation

Check if a skill activated:

```bash
claude "What skills are you using for this session?"
```

Or add logging to your hook:

```json
{
  "command": "echo \"[$(date)] Prompt: $CLAUDE_USER_PROMPT\" >> ~/.claude/trigger.log && python3 ~/.claude/hooks/skill-recommender.py"
}
```

Review the log to see which prompts trigger which skills.
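A short script can summarize that review. A sketch, assuming the `[date] Prompt: ...` line format produced by the echo above:

```python
import re
from collections import Counter

# Matches lines like: [Mon Jan  1 10:00:00 2024] Prompt: deploy the release
LINE = re.compile(r"^\[(?P<date>[^\]]+)\] Prompt: (?P<prompt>.*)$")

def count_keywords(log_lines, keywords):
    """Count how many logged prompts mention each trigger keyword."""
    counts = Counter()
    for line in log_lines:
        match = LINE.match(line)
        if not match:
            continue
        prompt = match.group("prompt").lower()
        for word in keywords:
            if word in prompt:
                counts[word] += 1
    return counts

sample = [
    "[Mon Jan  1 10:00:00 2024] Prompt: deploy the release",
    "[Mon Jan  1 10:05:00 2024] Prompt: review this diff",
]
print(count_keywords(sample, ["deploy", "review", "test"]))
```

Keywords that never appear in the log are candidates for removal; prompts that match nothing suggest a missing trigger.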

## Project vs Global Skills

| Scope | Triggers | Use case |
|-------|----------|----------|
| Project (`.claude/skills/`) | Match project context | Team workflows |
| Global (`~/.claude/skills/`) | Match general patterns | Personal tools |

Project skills override global ones with the same name. Use this to customize triggers per project.


Next: The Skills System
