Shaw Talebi explains Claude Code's subagents and agent teams in detail and shares experiment results comparing the two approaches on real tasks. Starting from Claude Code basics, the video covers the context-handling limitations AI agents face, clearly explains how subagents and agent teams overcome them, and backs this up with practical examples and data showing the pros and cons of each.


1. Claude Code and the Importance of Context Handling

Claude Code is essentially the combination of the Claude language model with various tools and software: local file access, web search, terminal commands, conversation compression, mode switching, task lists, and user questioning. However, even powerful LLMs face challenges in context handling:

  • Technical limits: The context window caps how much text can be processed at once (Claude Sonnet: 200K tokens). Everything—system messages, tool info, user messages, reasoning, tool results, responses—must fit within this window.
  • Context rot: Once the context window fills past roughly 50-70%, model performance observably degrades.

This makes context management crucial—ensuring only needed information is present at the right time.
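As a rough illustration of this budgeting problem, here is a minimal Python sketch. The 200K window and the 50-70% degradation band come from the section above; the function names and thresholds are invented for illustration and are not Claude Code's actual API.

```python
# Rough sketch: tracking context-window utilization so an agent knows
# when it is entering the degradation ("context rot") zone.
# Illustrative only -- not Claude Code's real internals.

CONTEXT_LIMIT = 200_000   # Claude Sonnet context window, in tokens
ROT_THRESHOLD = 0.70      # utilization above which quality tends to degrade

def utilization(token_counts):
    """Fraction of the window consumed by all message parts combined."""
    return sum(token_counts) / CONTEXT_LIMIT

def should_compact(token_counts, threshold=ROT_THRESHOLD):
    """True when the conversation should be compressed or delegated."""
    return utilization(token_counts) >= threshold

# system prompt + tool schemas + conversation history + tool results
parts = [3_000, 12_000, 95_000, 40_000]
print(f"{utilization(parts):.0%} used")   # 75% used
print(should_compact(parts))              # True
```

Everything listed in the bullet above (system messages, tool info, reasoning, tool results) counts against the same budget, which is why delegation strategies like subagents matter.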


2. Subagents: Context Management Through Division of Labor

Subagents delegate tasks to new specialized Claude Code instances. The main agent can spawn subagents instead of doing work directly. Subagents complete tasks and report back to the main agent.
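The delegation pattern can be sketched conceptually. This is not Claude Code's internal API, just an illustration of why subagents protect the main context: the subagent does its heavy work in a fresh context window, and only a compact report flows back.

```python
# Conceptual sketch of subagent delegation (not Claude Code's real API):
# the subagent burns through its own fresh context, and only a short
# report re-enters the main agent's context window.

def run_subagent(task: str) -> str:
    """Stand-in for spawning a specialized instance with a fresh context.
    Imagine long file reads and many tool calls happening here."""
    detailed_work = f"...thousands of tokens exploring: {task}..."  # stays local
    return f"Done: {task} (summary only)"   # only this goes back

def main_agent(tasks):
    context = []                            # the main agent's context window
    for task in tasks:
        context.append(run_subagent(task))  # short reports, not raw work
    return context

print(main_agent(["map the codebase", "draft a plan"]))
```

The main agent's context grows only by the size of each summary, not by the full exploration each subagent performed.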

Built-in subagents include:

  • Explore Agent: Uses the smallest model (Haiku) with read-only tools for codebase understanding
  • Planning Subagent: Uses the main model with read-only tools for implementation planning
  • General Purpose Agent: A copy of the main agent with full tool access for complex tasks

Users can also create custom subagents defined as text files with metadata (name, description, tools, model) and instructions.
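A minimal custom subagent file might look like the following. The frontmatter fields (name, description, tools, model) are the ones listed above; the specific agent, its tool list, and the `.claude/agents/` path are an illustrative example rather than a prescribed setup.

```markdown
<!-- .claude/agents/code-reviewer.md -->
---
name: code-reviewer
description: Reviews diffs for bugs and style issues. Use after code changes.
tools: Read, Grep, Glob
model: sonnet
---

You are a code reviewer. Read the changed files, flag bugs,
security issues, and style problems, and report a concise
summary back to the main agent. Do not modify any files.
```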


3. Subagent Limitations and the Rise of Agent Teams

Key subagent limitations:

  • No direct communication between subagents: They can only communicate through the main agent
  • Main agent context limits: All information must pass through the main agent, risking context overflow

Agent teams solve this by enabling subagents to communicate directly with each other via a shared task list. The main agent creates the team and shared task list; subagents interact directly with the list and each other; the main agent supervises overall and delivers final results.
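The shared-task-list mechanic can be modeled as follows. This is an illustrative Python sketch, not the actual implementation: teammates claim pending tasks directly off a common list and post results back to it, so coordination does not round-trip through the main agent's context.

```python
# Illustrative model of an agent team's shared task list (not the real
# implementation): teammates pull work and post results peer-to-peer.

from dataclasses import dataclass, field

@dataclass
class SharedTaskList:
    pending: list = field(default_factory=list)
    done: dict = field(default_factory=dict)

    def claim(self):
        """A teammate takes the next open task directly off the list."""
        return self.pending.pop(0) if self.pending else None

    def complete(self, task, result):
        """Results land on the shared list, visible to every teammate."""
        self.done[task] = result

board = SharedTaskList(pending=["frontend", "backend", "tests"])
teammates = ["agent-a", "agent-b"]

# Round-robin stand-in for teammates working in parallel.
turn = 0
while (task := board.claim()) is not None:
    worker = teammates[turn % len(teammates)]
    board.complete(task, f"{worker} built the {task}")
    turn += 1

print(board.done)
# {'frontend': 'agent-a built the frontend',
#  'backend': 'agent-b built the backend',
#  'tests': 'agent-a built the tests'}
```

In the real feature the main agent still creates the team and supervises, but because results live on the shared list, its own context stays small.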

As of 2026, agent teams are still experimental—requiring manual activation via .claude/settings.json.


4. Comparison Analysis

Similarities: Both delegate work to new Claude instances and effectively manage context windows.

Key difference: Subagents cannot talk to each other (centralized architecture); agent teams can (hybrid architecture).

  • Subagents: Simpler, more fault-tolerant (errors don't cascade), best for sequential tasks
  • Agent teams: More complex, great for parallelizable tasks (multiple components built simultaneously), but risk error cascades

5. Mini Experiment Results

| Task | Subagent Time | Team Time | Subagent Tokens | Team Tokens | Quality Winner |
| --- | --- | --- | --- | --- | --- |
| Lead list (parallel) | 27 min | 19 min | 165K | 195K | Subagent |
| YouTube app (sequential) | 47.5 min | 45 min | 99K | 111K | Subagent |
| Landing page (mixed) | 42 min | 52 min | 102K | 164K | Tie |

Key findings: Agent teams were faster on two of the three tasks (thanks to parallel processing) but used more tokens in every case, while subagents matched or beat them on output quality every time.


6. Conclusion

Shaw concludes that the quality gap likely stems from agent teams being in an early experimental stage with immature scaffolding. For now, subagents are more reliable for production work, while agent teams are worth exploring for their potential. As Anthropic collects more feedback, agent team capabilities should improve significantly.

"At this point, I'd probably stick with subagents for actual work."
