Alex Albert's Claude Mastery

Alex Albert is Head of Developer Relations at Anthropic. He joined the company in June 2023 as their first prompt engineer after graduating from the University of Washington with a computer science degree. Before Anthropic, he gained attention for creating tools to test AI safety mechanisms in models like GPT-4.

Alex shares prompting patterns that consistently work across thousands of user interactions.

The Five Core Tips

These tips come from Alex’s Anthropic prompt engineering deep dive and his AI Engineer World’s Fair talk.

1. Describe Your Task Clearly, Directly, and Specifically

# Vague (fails often)
"Help me with my code"

# Clear (succeeds)
"Review this Python function for security vulnerabilities.
Focus on SQL injection, XSS, and authentication bypass.
List each issue with line number, severity, and fix."

Pattern:

[What to do] + [How to do it] + [What format to use]

2. Use XML Tags to Structure Your Prompt

<task>
Summarize the following article for a technical audience.
</task>

<article>
[Long article text here]
</article>

<requirements>
- Maximum 3 paragraphs
- Include key statistics
- Highlight actionable insights
</requirements>

<output_format>
## Summary
[Your summary]

## Key Statistics
- [Stat 1]
- [Stat 2]

## Action Items
- [Item 1]
- [Item 2]
</output_format>
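Structured prompts like this can also be assembled programmatically. A minimal sketch, assuming nothing beyond the standard library (the `xml_block` and `build_prompt` helpers are illustrative, not part of any SDK):

```python
article_text = "[Long article text here]"

def xml_block(tag, content):
    """Wrap content in a matching open/close XML tag pair."""
    return f"<{tag}>\n{content}\n</{tag}>"

def build_prompt(*sections):
    """Join (tag, content) pairs into one structured prompt."""
    return "\n\n".join(xml_block(tag, content) for tag, content in sections)

prompt = build_prompt(
    ("task", "Summarize the following article for a technical audience."),
    ("article", article_text),
    ("requirements", "- Maximum 3 paragraphs\n- Include key statistics"),
)
```

Keeping each section in its own tag makes the prompt easy to rearrange or extend without rewriting the whole string.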

3. Provide Examples

Convert these notes into action items.

Example 1:
Notes: "Talked to Sarah, she needs the report by Friday"
Action items:
- [ ] Send report to Sarah by Friday

Example 2:
Notes: "Server is slow, might need to upgrade"
Action items:
- [ ] Investigate server performance issues
- [ ] Research upgrade options
- [ ] Get cost estimate for upgrade

Now convert:
Notes: "Meeting ran long, John mentioned budget concerns,
need to follow up on Q3 numbers"
Action items:
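If you reuse a few-shot prompt like this, the worked examples can be kept as data and assembled on demand. A sketch under that assumption (the `few_shot_prompt` helper is hypothetical):

```python
def few_shot_prompt(instruction, examples, new_notes):
    """Build a few-shot prompt: instruction, worked examples, then the new input."""
    parts = [instruction]
    for i, (notes, items) in enumerate(examples, start=1):
        bullets = "\n".join(f"- [ ] {item}" for item in items)
        parts.append(f'Example {i}:\nNotes: "{notes}"\nAction items:\n{bullets}')
    parts.append(f'Now convert:\nNotes: "{new_notes}"\nAction items:')
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Convert these notes into action items.",
    [("Talked to Sarah, she needs the report by Friday",
      ["Send report to Sarah by Friday"])],
    "Server is slow, might need to upgrade",
)
```

Ending the prompt at "Action items:" leaves the model positioned to continue the established pattern.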

4. Use Long Context Effectively

<document>
[Full 50-page document]
</document>

<question>
What are the three most important recommendations
in section 4, and how do they relate to the
conclusions in section 7?
</question>

5. Give Claude a Role

# Generic
"Review this code"

# With role
"You are a senior security engineer conducting a
penetration test. Review this code as if you were
trying to find vulnerabilities an attacker could exploit.
Be adversarial in your thinking."

| Task | Effective Role |
|------|----------------|
| Code review | Senior engineer at your company |
| Writing | Editor at target publication |
| Learning | Patient teacher for beginners |
| Debugging | Skeptical QA engineer |
| Strategy | Experienced advisor in your industry |

Common Misconceptions

These come up frequently in Alex’s discussions; each one sounds plausible but is contradicted by the official prompt engineering docs.

“Claude requires XML tags”

XML tags help with complex prompts but aren’t required. Use them when you have multiple distinct sections; skip for simple requests.

“System prompts are for enterprise only”

System prompts work everywhere:

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a concise technical writer. Never use "
           "marketing language. Assume the reader is "
           "an experienced developer.",
    messages=[
        {"role": "user", "content": "Explain Docker containers"}
    ]
)

“Longer prompts are always better”

Clarity beats length.

# Verbose
"I would really appreciate it if you could potentially
help me with understanding the concept of, if possible,
how Docker containers work, keeping in mind that I'm
somewhat familiar with virtual machines but not entirely
sure about the differences, if that makes sense?"

# Clear
"Explain Docker containers. Compare to VMs.
I understand VMs basics."

Prompt Patterns for Personal OS

Daily Planning

<context>
Today: {{date}}
Calendar: {{calendar}}
Tasks: {{task_list}}
Energy: {{energy_level}}
</context>

<instructions>
Create my daily plan:
1. Identify top 3 priorities
2. Suggest time blocks
3. Flag any conflicts
4. Recommend what to defer if overloaded
</instructions>

<constraints>
- Protect 2 hours for deep work
- Schedule breaks every 90 minutes
- Leave 20% buffer for unexpected items
</constraints>
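The `{{placeholder}}` slots in templates like this can be filled with a small render step before sending. A minimal regex-based sketch (the `render` helper is illustrative, not a library function):

```python
import re

def render(template, **values):
    """Fill {{name}} placeholders; fail loudly if a value is missing."""
    def sub(match):
        name = match.group(1)
        if name not in values:
            raise KeyError(f"missing template value: {name}")
        return str(values[name])
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

plan_context = render(
    "Today: {{date}}\nEnergy: {{energy_level}}",
    date="2025-06-02",
    energy_level="high",
)
```

Raising on a missing value catches a half-filled template before it reaches the model, where it would silently degrade the output.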

Code Review

<role>
Senior engineer reviewing a junior developer's PR.
Be thorough but constructive.
</role>

<code>
{{diff}}
</code>

<review_focus>
- Security vulnerabilities
- Performance issues
- Code clarity
- Test coverage gaps
</review_focus>

<output_format>
## Critical Issues (must fix)
## Suggestions (should consider)
## Nitpicks (optional improvements)
## What's Good (positive feedback)
</output_format>

Research Synthesis

<task>
Synthesize these sources into a briefing document.
</task>

<sources>
<source id="1">{{source_1}}</source>
<source id="2">{{source_2}}</source>
<source id="3">{{source_3}}</source>
</sources>

<output_requirements>
- Executive summary (1 paragraph)
- Key findings (bullet points)
- Areas of agreement between sources
- Contradictions or gaps
- Recommended next steps
- Cite sources by ID
</output_requirements>

Email Drafting

<context>
Recipient: {{name}}, {{role}}
Relationship: {{relationship}}
Previous interaction: {{last_interaction}}
</context>

<goal>
{{email_purpose}}
</goal>

<tone>
{{professional/casual/formal}}
</tone>

<constraints>
- Maximum 3 paragraphs
- One clear ask
- Include specific deadline if applicable
</constraints>

Testing Your Prompts

Test prompts multiple times before relying on them.

def test_prompt(prompt, test_cases, runs=5):
    """Test prompt reliability across multiple runs."""
    # get_claude_response is assumed to be your own wrapper
    # around the Messages API, returning the response text.
    results = []

    for case in test_cases:
        formatted = prompt.format(**case['input'])
        successes = 0

        for _ in range(runs):
            response = get_claude_response(formatted)
            if case['validator'](response):
                successes += 1

        results.append({
            'case': case['name'],
            'success_rate': successes / runs
        })

    return results

Test different input types, edge cases, adversarial inputs, and format consistency.
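A test case pairs an input with a validator function that checks the response against your expectations. One illustrative validator, assuming the Code Review output format from earlier:

```python
def has_review_sections(response):
    """Validator: pass only if every required review heading appears."""
    required = ["## Critical Issues", "## Suggestions",
                "## Nitpicks", "## What's Good"]
    return all(heading in response for heading in required)

test_cases = [{
    "name": "review-format",
    "input": {"diff": "- old line\n+ new line"},
    "validator": has_review_sections,
}]
```

Structural checks like this are cheap and deterministic, which makes them a good first layer before any model-graded evaluation.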

Building Your Prompt Library

~/.prompts/
├── work/
│   ├── code-review.xml
│   ├── meeting-notes.xml
│   └── email-draft.xml
├── personal/
│   ├── daily-planning.xml
│   ├── journal-prompt.xml
│   └── learning-summary.xml
└── templates/
    ├── base-structure.xml
    └── role-definitions.xml
cd ~/.prompts
git init
git add .
git commit -m "Initial prompt library"

# After improvements
git diff daily-planning.xml
git commit -m "Added energy level consideration to planning"
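Once prompts live in a directory like this, loading one at runtime is a few lines. A sketch assuming the layout shown above (the `load_prompt` helper is hypothetical):

```python
from pathlib import Path

def load_prompt(name, category="work", library="~/.prompts"):
    """Read a saved prompt template from the library directory."""
    path = Path(library).expanduser() / category / f"{name}.xml"
    return path.read_text()
```

For example, `load_prompt("code-review")` returns the template text, ready for placeholder filling before it is sent to the model.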

Next: Peter Yang’s AI Content System
