
This article carries the core message that AI (LLM) usage within development teams—currently heavily dependent on individual capability—must evolve into a systematic organizational system. It proposes using Claude Code's plugin and marketplace ecosystem not as simple extension tools, but as an 'executable knowledge base' that codifies and distributes team workflows and knowledge. Ultimately, it envisions systematizing AI workflows to raise the productivity floor across the entire team and building a unique organizational Data Flywheel over the long term.
1. Every Person for Themselves with LLMs—Time to Make It a Team System
Many development teams are rushing to adopt AI (LLMs), but the reality is closer to a 'survival of the fittest' situation where each team member fends for themselves. Even with the same AI model and development environment, the gap in output quality between developers is extreme.
Some engineers excel at Context Engineering—precisely conveying context to AI—and finish complex tasks in just 10 minutes. Others wrestle with AI hallucinations and easily exceed an hour.
The reason for completely different results from the same environment and task comes down to one thing: how well 'context' was designed before the task began.
"This is not a difference in coding skill. It's a gap in know-how about 'how precisely you control the LLM tool (LLM Literacy).' Leaving this gap to individual talent is a significant organizational loss."
It's time to incorporate this tool not as something left to individual capability, but as a harness—a reliable safety net and system that elevates the entire organization's capability.
2. Seamless Experience: AI Enters the Terminal
There have been earnest attempts to leverage AI before, but the process of opening a web browser and copy-pasting code into a chatbot created subtle friction that broke developers' concentration.
In this regard, Claude Code's terminal-based interface (TUI) holds tremendous value. It provides Seamless Integration where natural language and code blend fluidly within the terminal—where developers spend the most time. For team members to adopt new workflows without resistance, such a frictionless environment must be a prerequisite.
3. Executable Knowledge: Documents Die, Code Lives
We always want a 'Single Source of Truth (SSOT)' within teams. But documents in Notion or internal wikis start going stale the moment they're written, because they rely on humans to read, remember, and apply them manually.
But knowledge defined as plugins is completely different. It becomes an Executable SSOT. When humans read it, it serves as a friendly workflow guideline; when AI reads it, it functions as precise system prompts. The paradigm of document management evolves from simple 'recording' to actual 'execution.'
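As a concrete illustration of "executable SSOT," consider a team review checklist stored as a Claude Code custom slash-command file. The file name, path, and checklist items below are hypothetical; the shape (a markdown file with YAML frontmatter under a commands directory) follows Claude Code's slash-command convention.

```markdown
---
description: Run our team's code-review checklist on the current diff
---
<!-- hypothetical file: .claude/commands/review.md -->
Review the staged changes against our team conventions:

1. Every public function has a docstring and type hints.
2. No secrets or API keys appear in the diff.
3. New behavior is covered by a test under tests/.

Report each item as PASS or FAIL with a one-line reason.
```

A teammate invokes this as `/review` inside the terminal; the very same file doubles as the human-readable policy document, which is exactly the dual reading described above.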
4. Domain-Optimized Harnesses That Raise the Productivity Floor
To reduce the variance in AI proficiency among team members, we need to raise the entire team's productivity floor. Using good open-source plugins from others is a fine starting point.
But we must go further. External tools know nothing about our team's specific circumstances or domain context. What the payments team should delegate to AI versus what humans must verify directly differs completely from the settlement team.
"Minimize human intervention in my work, have people approve only where absolutely necessary, and generate as many tokens as possible."
This is why we must achieve 'domain optimization' perfectly tailored to our team's characteristics.
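One hedged sketch of what such domain optimization can look like in practice: Claude Code reads permission rules from a project-level settings file, so a team can encode which actions the AI may take autonomously and which it must never take without a human. The specific rules below are illustrative examples, not a prescription; a payments team's boundaries would differ from a settlement team's.

```json
{
  "permissions": {
    "allow": [
      "Bash(npm test:*)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Bash(git push:*)",
      "Read(./.env)"
    ]
  }
}
```

Checked into the repository (e.g. as `.claude/settings.json`), this boundary travels with the codebase instead of living in one ace developer's head.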
5. Applying Software 1.0 Wisdom to the AI Era
Adopting new AI workflows may feel unfamiliar and daunting. But looking back, we've done this before. We used to build common internal libraries for repeated functions (login, payments) to save team members' time.
The Software 3.0 era marketplace follows the same principle. The past 'common modules' have become today's 'AI workflow plugins,' and what was inside has simply changed from plain code to prompts that direct AI.
The most important thing remains quality control. Just as we used to rigorously code-review, we now need to review AI prompts and behaviors with colleagues.
6. Why Choose Marketplaces Over RAG
"Why not just use RAG (Retrieval-Augmented Generation) to let AI search internal documents?" Of course that's possible, but the marketplace (plugin) approach has far more attractive advantages in terms of system reliability and efficiency.
- Predictability: With RAG, it's hard to know exactly which documents the AI retrieved to compose its answer. Plugins are explicit code under developers' full control, so you can inspect exactly what information is injected into the AI.
- Fast Experimentation and Dev-Prod Parity: You can modify plugins and immediately test them with AI in your local environment without complex server deployments. Plugins verified locally work identically in production.
7. Marketplace 1.0: Deploying the Organization's Way of Working
The AI marketplace can be more than a place to download convenient features—it can become a platform for deploying "how our organization works (Workflow) itself."
A team lead bundles the team's coding rules or testing policies into a plugin and publishes it to the marketplace. Team members can download this discipline with a single command and apply it to AI.
Even better, the top ace developer's 'AI usage know-how' can be propagated identically to every team member with one command. This is the true way to solidly raise the team's capability and productivity floor.
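To make the distribution step concrete, here is a sketch of a team marketplace manifest. The team and plugin names are placeholders; the overall shape (a `marketplace.json` listing plugins by name and source) follows Claude Code's plugin-marketplace convention at the time of writing, so check the current documentation before relying on exact field names.

```json
{
  "name": "acme-payments-marketplace",
  "owner": { "name": "Payments Platform Team" },
  "plugins": [
    {
      "name": "payments-review",
      "source": "./plugins/payments-review",
      "description": "Our team's code-review checklist and testing policy"
    }
  ]
}
```

A teammate adds the team's repository as a marketplace and installs the plugin from inside Claude Code, after which the team lead's workflow is active in their session, which is the "one command" propagation described above.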
8. Layered Context by Concern: A Living Knowledge Base
Just as dumping all company documents on a new hire's first day would overwhelm them, AI also needs only the knowledge precisely relevant to the current task, cleanly injected.
Structuring knowledge into multiple layers is highly effective. When well-organized plugins accumulate, they form the organization's Living Knowledge Base without needing a heavy RAG system. Preventing knowledge from scattering and focusing it where needed—this is knowledge management for the new era.
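One way this layering already works without any extra infrastructure (directory names below are illustrative): Claude Code loads CLAUDE.md memory files hierarchically, so broad organizational conventions sit high in the tree and narrow, task-specific knowledge sits next to the code it describes, getting pulled in only when relevant.

```text
~/.claude/CLAUDE.md                 # personal preferences, applied across projects
repo/CLAUDE.md                      # project-wide conventions, build/test commands
repo/services/payments/CLAUDE.md    # payments-domain rules, relevant only
                                    # when working in this subtree
```

Each layer stays small and focused, which is precisely the "inject only the knowledge relevant to the current task" principle above.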
9. A Hypothesis for the Future: Building a Data Flywheel
If such a system firmly takes root in the organization, we can imagine an even more exciting future: the Data Flywheel—a virtuous cycle where AI improves itself.
The plugins we use daily aren't just tools that help with work—they become excellent factories that ceaselessly produce quality 'AI training data.' This data can be collected to fine-tune a small, specialized model (sLLM) tailored precisely to our domain.
Of course, time, effort, and sustained organizational investment are needed, but once the wheel starts turning, a magical virtuous cycle emerges: the more it's used, the more data accumulates, the model becomes more refined, and ultimately more people use it.
10. Wrap-Up: Toward Our Team's Optimized Harness
Everything discussed so far is a broad direction worth heading toward, along with some fascinating hypotheses. How effective this system actually proves to be will have to be refined through trial and error.
But one thing is clear: the ability to work with AI can no longer remain the weapon of only talented individuals. This belongs in the domain of 'systems' that the entire team must design and solidly build together.
"The tools are ready. What will your team 'install' now?"
Claude Code's marketplace is just the first step of that massive change. Now gather the precious know-how scattered across your team, and start building a robust AI harness perfectly tailored to your organization.