# ai-sdk-agents
Multi-agent orchestration with AI SDK v5 - handoffs, routing, and coordination for any AI provider (OpenAI, Anthropic, Google)
## Installation
npx claude-plugins install @jeremylongshore/claude-code-plugins-plus/ai-sdk-agents
## Contents
Folders: agents, commands, skills
Files: LICENSE, README.md
## Documentation
Multi-agent orchestration with AI SDK v5 - handoffs, routing, and coordination for any AI provider.
Build sophisticated multi-agent systems with automatic handoffs, intelligent routing, and seamless coordination across Ollama (FREE), OpenAI, Anthropic, Google, and other AI providers.
💰 NEW: Use Ollama for zero-cost local AI agents - eliminate $30-200/month in API fees!
## 🎯 What This Plugin Does
Transform complex workflows into multi-agent systems where specialized agents:
- Hand off tasks to each other automatically
- Route requests to the best-suited agent
- Coordinate complex workflows across multiple LLMs
- Specialize in specific domains or tasks
- Work together to solve problems beyond single-agent capabilities
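As a sketch of the routing idea above, a coordinator can be reduced to a pure function that inspects a request and names the best-suited agent. The keyword lists and the `routeTask` helper here are illustrative assumptions, not part of the plugin's API (real coordinators delegate via an LLM):

```typescript
// Hypothetical keyword-based router: maps a request to an agent name.
// This only illustrates the routing decision, not the LLM-driven version.
type AgentName = 'researcher' | 'coder' | 'reviewer';

const KEYWORDS: Record<AgentName, string[]> = {
  researcher: ['find', 'search', 'summarize'],
  coder: ['implement', 'fix', 'refactor'],
  reviewer: ['review', 'audit', 'check'],
};

function routeTask(request: string): AgentName {
  const text = request.toLowerCase();
  for (const [agent, words] of Object.entries(KEYWORDS) as [AgentName, string[]][]) {
    if (words.some((w) => text.includes(w))) return agent;
  }
  return 'researcher'; // fallback agent when no keyword matches
}
```

In the plugin itself this decision is made by the coordinator agent's model; a deterministic pre-filter like this is sometimes used to skip an LLM call for obvious cases.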
## 🚀 Quick Start
### Installation
```shell
# Install the plugin
/plugin install ai-sdk-agents@claude-code-plugins-plus

# Install dependencies in your project
npm install @ai-sdk-tools/agents ai zod
```
### Your First Multi-Agent System
```shell
/ai-agents-setup

# Creates:
# - agents/
#   ├── coordinator.ts   # Routes requests
#   ├── researcher.ts    # Gathers information
#   ├── coder.ts         # Writes code
#   └── reviewer.ts      # Reviews output
# - index.ts             # Orchestration setup
# - .env.example         # API keys template
```
## ⚠️ Rate Limits & LLM Provider Constraints
Multi-agent systems multiply API costs: 5 agents × $0.03/request = $0.15 per workflow. Use Ollama (FREE) to eliminate API costs entirely.
### Quick Comparison: Paid APIs vs Ollama (FREE)
| Provider | 5-Agent Workflow Cost | Monthly (1K workflows) | Annual |
|---|---|---|---|
| OpenAI GPT-4 | $0.15-0.30 | $150-300 | $1,800-3,600 |
| Anthropic Claude | $0.08-0.15 | $80-150 | $960-1,800 |
| Google Gemini | $0.03-0.10 | $30-100 | $360-1,200 |
| Ollama (Local) | $0.00 | $0 | $0 |
Annual Savings: $360-3,600 by using Ollama for multi-agent orchestration.
### Rate Limits by Provider
#### OpenAI (Paid)
- GPT-4: 10,000 requests/day (Tier 1), 500 RPM
- GPT-4 Turbo: 30,000 requests/day (Tier 2), 3,000 RPM
- Registration: Email + payment required
- Cost: $30-60/1M tokens

Multi-Agent Impact: 500 RPM shared across 5 agents = effective 100 RPM per agent
#### Anthropic (Paid)
- Claude Sonnet: 50,000 requests/day (Tier 1), 1,000 RPM
- Claude Opus: 50,000 requests/day (Tier 1), 1,000 RPM
- Registration: Email + payment required
- Cost: $15-75/1M tokens

Multi-Agent Impact: 1,000 RPM shared across 5 agents = effective 200 RPM per agent
#### Google Gemini (Paid/Free Tier)
- Gemini 1.5 Flash (Free): 15 RPM, 1M tokens/day
- Gemini 1.5 Pro (Free): 2 RPM, 32K tokens/day
- Gemini (Paid): 1,000 RPM, unlimited tokens
- Registration: Google account required
- Cost: Free tier available, $0.35-1.05/1M tokens (paid)

Multi-Agent Impact: Free-tier 15 RPM shared across 5 agents = 3 RPM per agent (very restrictive)
#### Ollama (FREE - Self-Hosted)
- Requests: Unlimited (hardware-limited only)
- Models: Llama 3.2, Mistral, CodeLlama, etc.
- Registration: Not required
- Cost: $0 (one-time hardware: $0-600)

Multi-Agent Impact: No API rate limits; throughput is bounded only by CPU/RAM. See the ollama-local-ai plugin for full hardware constraints documentation.
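The per-agent budgets quoted above come from dividing a provider's RPM limit across agents sharing one API key. A minimal sketch of that arithmetic (the `perAgentRpm` helper is illustrative, not part of any SDK):

```typescript
// Effective per-agent request budget when N agents share one API key.
// E.g. OpenAI Tier 1: 500 RPM across 5 agents leaves 100 RPM each.
function perAgentRpm(providerRpm: number, agentCount: number): number {
  if (agentCount <= 0) throw new Error('agentCount must be positive');
  return Math.floor(providerRpm / agentCount);
}
```

This is also why the Gemini free tier is so restrictive for orchestration: 15 RPM split five ways leaves each agent roughly one request every twenty seconds.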
## Multi-Agent Coordination Strategies
### Strategy 1: Shared Ollama Instance (RECOMMENDED - FREE)
```typescript
// All agents share one local Ollama instance
import { ollama } from 'ollama-ai-provider';

const agents = {
  coordinator: createAgent({
    model: ollama('llama3.2'), // FREE
    name: 'coordinator'
  }),
  researcher: createAgent({
    model: ollama('llama3.2'), // FREE
    name: 'researcher'
  }),
  coder: createAgent({
    model: ollama('codellama'), // FREE
    name: 'coder'
  }),
  reviewer: createAgent({
    model: ollama('llama3.2'), // FREE
    name: 'reviewer'
  })
};

// 4 agents, 1,000 workflows/month = $0 in API cost
```
Hardware Requirements:
- 4 agents × Llama 3.2 7B: 32GB RAM minimum
- Concurrent requests: Limited by CPU cores
- See: ollama-local-ai plugin for detailed hardware sizing
Annual Cost: $0 (vs $360-3,600 for cloud APIs)
### Strategy 2: Hybrid (Free for Development, Paid for Production)
```typescript
const MODEL_CONFIG = {
  development: {
    coordinator: ollama('llama3.2'),  // FREE
    researcher: ollama('llama3.2'),   // FREE
    coder: ollama('codellama'),       // FREE
    reviewer: ollama('llama3.2')      // FREE
  },
  production: {
    coordinator: anthropic('claude-sonnet'), // $15/1M tokens
    researcher: anthropic('claude-sonnet'),  // $15/1M tokens
    coder: anthropic('claude-sonnet'),       // $15/1M tokens
    reviewer: anthropic('claude-sonnet')     // $15/1M tokens
  }
};
```
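One way to consume a config like this is a small selector keyed on `NODE_ENV`. In this sketch the models are plain strings so the example stays self-contained; the table name `MODEL_TABLES`, the `modelsFor` helper, and the fallback to development are all assumptions, not plugin behavior:

```typescript
// Pick a per-agent model table based on the runtime environment.
// String identifiers stand in for real provider model instances.
type AgentRole = 'coordinator' | 'researcher' | 'coder' | 'reviewer';
type ModelTable = Record<AgentRole, string>;

const MODEL_TABLES: Record<'development' | 'production', ModelTable> = {
  development: {
    coordinator: 'ollama:llama3.2',
    researcher: 'ollama:llama3.2',
    coder: 'ollama:codellama',
    reviewer: 'ollama:llama3.2',
  },
  production: {
    coordinator: 'anthropic:claude-sonnet',
    researcher: 'anthropic:claude-sonnet',
    coder: 'anthropic:claude-sonnet',
    reviewer: 'anthropic:claude-sonnet',
  },
};

// Anything other than 'production' falls back to the free local models.
function modelsFor(env: string | undefined): ModelTable {
  return env === 'production' ? MODEL_TABLES.production : MODEL_TABLES.development;
}
```

A call like `modelsFor(process.env.NODE_ENV)` then yields the right table at startup, so the agent definitions never need to branch on environment themselves.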
...(truncated)
## Included Skills
This plugin includes 1 skill definition:
### orchestrating-multi-agent-systems
<details>
<summary>View skill definition</summary>
# Orchestrating Multi Agent Systems
## Overview
This skill automates scaffolding and orchestrating multi-agent systems built on AI SDK v5, covering project setup, agent definitions, and handoff coordination.
## Prerequisites
Before using this skill, ensure you have:
- Node.js 18+ installed for TypeScript agent development
- AI SDK v5 package installed (`npm install ai`)
- API keys for AI providers (OpenAI, Anthropic, Google, etc.)
- Understanding of agent-based architecture patterns
- TypeScript knowledge for agent implementation
- Project directory structure for multi-agent systems
## Instructions
1. Create project directory with necessary subdirectories
2. Initialize npm project with TypeScript configuration
3. Install AI SDK v5 and provider-specific packages
4. Set up configuration files for agent orchestration
5. Write agent initialization code with AI SDK
6. Configure system prompts for agent behavior
7. Define tool functions for agent capabilities
8. Implement handoff rules for inter-agent delegation
See `{baseDir}/references/implementation.md` for detailed implementation guide.
## Output
- TypeScript files with AI SDK v5 integration
- System prompts tailored to each agent role
- Tool definitions and implementations
- Handoff rules and coordination logic
- Workflow definitions for task sequences
- Routing rules for intelligent task distribution
## Error Handling
See `{baseDir}/references/errors.md` for comprehensive error handling.
## Examples
See `{baseDir}/references/examples.md` for detailed examples.
## Resources
- AI SDK v5 official
...(truncated)
</details>
## Source
[View on GitHub](https://github.com/jeremylongshore/claude-code-plugins-plus-skills)