
Shaw Talebi explains Claude Code's subagents and agent teams in detail and shares experiment results comparing the two approaches on real tasks. Starting from Claude Code basics, the video covers the context-handling limitations AI agents face, then explains how subagents and agent teams address them, with practical examples and data showing the pros and cons of each.
1. Claude Code and the Importance of Context Handling
Claude Code is essentially the Claude language model combined with a set of tools and supporting software: local file access, web search, terminal commands, conversation compaction, mode switching, task lists, and the ability to ask the user questions. However, even powerful LLMs face challenges in context handling:
- Technical limits: The context window caps how much text can be processed at once (Claude Sonnet: 200K tokens). Everything—system messages, tool info, user messages, reasoning, tool results, responses—must fit within this window.
- Context Rot: As the context window fills beyond roughly 50-70%, model performance observably degrades.
This makes context management crucial—ensuring only needed information is present at the right time.
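The budget arithmetic behind this is worth making concrete. The sketch below assumes a 200K-token window and uses illustrative component sizes (the individual numbers are assumptions, not measurements from the video):

```python
# Rough context-budget arithmetic for a 200K-token window.
# The component sizes below are illustrative assumptions.
WINDOW = 200_000

usage = {
    "system prompt + tool definitions": 15_000,
    "conversation history": 60_000,
    "file contents read into context": 40_000,
    "tool results (search, terminal output)": 25_000,
}

used = sum(usage.values())
fill = used / WINDOW
print(f"Used: {used:,} tokens ({fill:.0%} of window)")
if fill >= 0.5:
    print("Past the ~50-70% range where context rot reportedly sets in")
```

Even without hitting the hard 200K cap, a session like this one is already deep into the degradation zone, which is exactly the problem subagents and teams try to solve.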
2. Subagents: Context Management Through Division of Labor
Subagents delegate tasks to new specialized Claude Code instances. The main agent can spawn subagents instead of doing work directly. Subagents complete tasks and report back to the main agent.
Built-in subagents include:
- Explore Agent: Uses the smallest model (Haiku) with read-only tools for codebase understanding
- Planning Subagent: Uses the main model with read-only tools for implementation planning
- General Purpose Agent: A copy of the main agent with full tool access for complex tasks
Users can also create custom subagents defined as text files with metadata (name, description, tools, model) and instructions.
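A custom subagent definition might look like the sketch below: a Markdown file (typically under `.claude/agents/`) with YAML frontmatter for the metadata and the instructions as the body. The agent name, tool list, and instructions here are made up for illustration; check the current Claude Code docs for the exact supported fields.

```markdown
---
name: code-reviewer
description: Reviews recent changes for bugs and style issues. Use after edits.
tools: Read, Grep, Glob
model: haiku
---

You are a code reviewer. Inspect the changed files, flag likely bugs,
and suggest concrete fixes. Do not modify any files yourself.
```

Restricting the tool list (read-only here) and picking a smaller model per subagent is how the built-in Explore agent keeps its cost and context footprint low.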
3. Subagent Limitations and the Rise of Agent Teams
Key subagent limitations:
- No direct communication between subagents: They can only communicate through the main agent
- Main agent context limits: All information must pass through the main agent, risking context overflow
Agent teams solve this by enabling subagents to communicate directly with each other via a shared task list. The main agent creates the team and shared task list; subagents interact directly with the list and each other; the main agent supervises overall and delivers final results.
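The coordination pattern can be modeled in a few lines: workers claim tasks from a shared list and publish results where peers can see them, with the lead agent only seeding work and collecting output. This is a toy sketch of the pattern, not Claude Code's actual implementation; all names are hypothetical.

```python
import queue
import threading

# Toy model of the agent-team pattern: workers pull from a shared task
# list directly instead of routing every step through the main agent.
tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker(name: str):
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return  # shared list is empty; this worker is done
        outcome = f"{name} finished: {task}"  # stand-in for real agent work
        with lock:
            results.append(outcome)  # visible to peers, not just the lead

# The lead agent's only jobs: seed the shared list, then collect results.
for t in ["build header", "build footer", "write tests"]:
    tasks.put(t)

workers = [threading.Thread(target=worker, args=(f"agent-{i}",)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(results))
```

The key architectural difference from plain subagents shows up in the worker loop: task hand-offs never pass through the lead agent's context, so its window stays small no matter how much the workers produce.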
As of 2026, agent teams are still experimental—requiring manual activation via .claude/settings.json.
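The opt-in looks roughly like the following `.claude/settings.json` fragment. The exact setting name is an assumption here and may change between releases, so verify it against the current Claude Code documentation before relying on it:

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```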
4. Comparison Analysis
Similarities: Both delegate work to new Claude instances and effectively manage context windows.
Key difference: Subagents cannot talk to each other (centralized architecture); agent teams can (hybrid architecture).
- Subagents: Simpler, more fault-tolerant (errors don't cascade), best for sequential tasks
- Agent teams: More complex, great for parallelizable tasks (multiple components built simultaneously), but risk error cascades
5. Mini Experiment Results
| Task | Subagent Time | Team Time | Subagent Tokens | Team Tokens | Quality Winner |
|---|---|---|---|---|---|
| Lead list (parallel) | 27 min | 19 min | 165K | 195K | Subagent |
| YouTube app (sequential) | 47.5 min | 45 min | 99K | 111K | Subagent |
| Landing page (mixed) | 42 min | 52 min | 102K | 164K | Tie |
Key findings: Agent teams were faster on two of the three tasks (thanks to parallel processing) but used more tokens in every case, while subagents matched or beat teams on output quality in every task.
6. Conclusion
Shaw concludes that the quality gap likely stems from agent teams being in an early experimental stage with immature scaffolding. For now, subagents are more reliable for production work, while agent teams are worth exploring for their potential. As Anthropic collects more feedback, agent team capabilities should improve significantly.
"At this point, I'd probably stick with subagents for actual work."