This video goes beyond simply speeding up work: it explores how knowledge workers in the AI era can, like athletes, develop their skills through 'deliberate practice.' The author presents a concrete guide to converting vague work skills into measurable scores and using AI as a personal coach to create feedback loops. Through this, you can learn how to build irreplaceable judgment and taste rather than merely churning through tasks.


1. Why Don't Knowledge Workers 'Train'?

We've reached a point where we need to think about careers in terms of skills rather than jobs. Yet nobody talks about this properly. In 2019, economist Tyler Cowen raised an intriguing point: athletes and musicians train relentlessly, but knowledge workers don't.

"Athletes train, musicians train, performers train. But knowledge workers don't train. We don't train. I don't shoot free throws or anything like that. There's no equivalent for knowledge work."

A pianist practices scales, repeatedly refining sub-skills like finger movement, pressure, and speed. But as knowledge workers, we spend our entire lives in 'live performance.' It's like going on stage without rehearsal, making the learning process highly inefficient.

So why is practicing so difficult for us? Three structural features of knowledge work make it hard to train:

  1. Fuzzy Outcomes: A basketball either goes through the hoop or it doesn't, but work like strategy or leadership involves so many factors -- speed, quality, political dynamics -- that it's unclear what success even looks like.
  2. Delayed and Noisy Feedback: A decision made in Q1 might not reveal its results until Q3 at the earliest. In the meantime, markets shift or key people change, making it hard to know whether your decision was actually right.
  3. Low Repetition: You don't write important decision documents or strategy plans hundreds of times a week. Reps are scarce, and every piece of work is live, leaving no practice field where failure is okay.

But the good news is that in the AI era of 2025, we have, for the first time, a real opportunity to train properly with personalized feedback.


2. The 5 Skills That Truly Matter in the AI Era

As AI takes over much of the execution, the skills we need to focus on training can be distilled into these five:

  1. Judgment: The ability to make decisions and define options under uncertain conditions.
  2. Orchestration: The ability to convert ambiguous goals into concrete workflows that humans and AI can execute.
  3. Coordination: The ability to lead people (and AI agents) without causing chaos.
  4. Taste: High standards for knowing 'what good looks like' in product, writing, design, and strategy.
  5. Updating: The ability to revise your thinking when evidence and circumstances change, without being swayed by noise.

The key point is that these skills aren't abstract adjectives on a resume but live in the artifacts we leave behind.

"These aren't adjectives... they're 'patterns' in the things you produce. Once you accept that, you stop arguing about who's abstractly 'strategic' and start looking at how people actually write, act, and decide."

For example, 'judgment' shows up in decision documents and experiment designs, 'coordination' appears in emails and meeting notes, and 'taste' is revealed in UX design or the metaphors you choose.


3. Building Your Own 'AI Coach': Rubrics and Red Pen

So how do you train these skills? AI isn't a magic brain -- it's a tool that reads text and follows instructions. Let's leverage this to build a 'wall to practice against.'

Step 1: Define What 'Good' Means

First, turn off AI and find the most trusted person on your team. Ask them persistently: "When you say this document is good, what specifically do you mean?" This produces concrete criteria -- a rubric:

  • Is the conclusion clear in one sentence?
  • Are there at least two realistic options?
  • Are risks and trade-offs explicitly stated?
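
A rubric like this travels better as data than as prose, since you'll reuse it in every later step. Here is a minimal sketch in Python; the criterion names and structure are illustrative assumptions, not a prescribed format:

```python
# A rubric as plain data: each criterion gets a name and a guiding
# question. The names here are illustrative, not canonical.
RUBRIC = [
    {"name": "clarity", "question": "Is the conclusion clear in one sentence?"},
    {"name": "options", "question": "Are there at least two realistic options?"},
    {"name": "risks", "question": "Are risks and trade-offs explicitly stated?"},
]

# Step 2 will attach a scale to these criteria.
SCALE = "Score each criterion from 1 (absent) to 5 (exemplary)."
```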

Step 2: Score with Red Pen

Convert the criteria into a 1-5 scale, pull out 3-5 real documents, and score them yourself. "This one has good clarity but weak risk analysis" -- leave notes in red pen. This step anchors the standard in human judgment.

Step 3: Hand the Scoring System to AI

Now feed your rubric and your hand-scored examples into an LLM (large language model). Then request:

"When I send you a new document, score it according to these criteria. Quote the parts you're reacting to, explain why you gave that score, and suggest how to fix it to raise the score by 1-2 points."

This turns AI into your dedicated coach, providing structured feedback on every document instantly.
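
A minimal sketch of this step, assuming the OpenAI Python SDK (any chat-capable LLM API works the same way); the model name and prompt wording here are my own, not prescribed by the video:

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a writing coach. Score the document I send you
against the rubric below, 1-5 per criterion. For each score, quote the
passage you are reacting to, explain why it earned that score, and suggest
a concrete change that would raise it by 1-2 points.

Rubric:
{rubric}

Examples I scored by hand, for calibration:
{examples}
"""

def critique(document: str, rubric: list[dict], examples: str) -> str:
    """Ask the model for rubric-based feedback on one document."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute whatever model you use
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT.format(
                rubric=json.dumps(rubric, indent=2), examples=examples)},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content
```

From here, calling critique() on any draft returns structured, rubric-grounded feedback on demand.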


4. From Film Review to Repeatable Training

Now we can apply 'film review' -- the way athletes study game footage -- to our own work. Send your writing or proposals to your AI coach for feedback, and track the patterns over time: "How did my judgment score change this week?"
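
To make that question answerable, keep an append-only log of each critique's scores. A minimal sketch, assuming a local JSONL file and that you've parsed the coach's scores into a dict (the file and field names are mine):

```python
import datetime
import json
from collections import defaultdict
from pathlib import Path

LOG = Path("practice_log.jsonl")  # assumption: one JSON object per critique

def record(scores: dict[str, int]) -> None:
    """Append one set of rubric scores, e.g. {"clarity": 4, "risks": 2}."""
    entry = {"date": datetime.date.today().isoformat(), "scores": scores}
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def weekly_averages() -> dict[str, dict[str, float]]:
    """Average each criterion per ISO week so trends become visible."""
    buckets: dict = defaultdict(lambda: defaultdict(list))
    for line in LOG.read_text().splitlines():
        entry = json.loads(line)
        week = datetime.date.fromisoformat(entry["date"]).strftime("%G-W%V")
        for criterion, score in entry["scores"].items():
            buckets[week][criterion].append(score)
    return {week: {c: sum(v) / len(v) for c, v in crits.items()}
            for week, crits in buckets.items()}
```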

Going further, you can run drills that go beyond live work:

  • Training example: Once a week, take a 'messy situation' -- like a complex Slack thread or a vague request from your boss -- and write a 1-page decision document.
  • Feedback: Run it through your AI rubric to get scores and compare it with the AI-generated 'better version.'

"Compare your version with the model's stronger version. Notice what you missed. That's the practice. Compare against what's good, focus on sub-skills, and practice week after week."

The chain of action flows: Skill -> Artifact -> Grade -> Practice. This is the difference between vaguely thinking about improvement and actually developing skills with AI as your coach.


5. Applying to Team Training and Hiring

This method becomes far more powerful when scaled to the entire team.

Team-Level Practice Loops

Team members collectively define "What makes a good document for us?" and turn it into a shared rubric. Then, just as code review has automated first passes, documents can get first-pass AI feedback (a critique) as soon as they are written -- a sketch of this follows below.
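
What that automated first pass might look like in practice: a small script that critiques whichever documents it is given, ready to wire into a pre-commit hook or CI job. The coach module it imports is hypothetical -- it stands in for the critique() helper and the shared rubric sketched earlier:

```python
import sys
from pathlib import Path

# Assumes a critique(document, rubric, examples) helper like the one in
# Step 3, exposed from a (hypothetical) shared team module along with the
# team's agreed rubric and hand-scored calibration examples.
from coach import EXAMPLES, RUBRIC, critique

def main() -> None:
    """Print a first-pass AI critique for each document passed on argv."""
    for arg in sys.argv[1:]:
        path = Path(arg)
        print(f"=== {path} ===")
        print(critique(path.read_text(), RUBRIC, EXAMPLES))

if __name__ == "__main__":
    main()
```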

  • Team training: Once or twice a week, set a 10-minute timer and run short exercises targeting the common weaknesses the AI has flagged.
  • Results: You can actually measure things like "Did our team's document scores improve this quarter?" and "Are decisions being made faster?"

Applying to Hiring

Most hiring interviews rely on indirect questions like "Tell me about a time you persuaded a stakeholder." But with our rubrics, much more concrete validation becomes possible:

  1. Practical exercise: Give candidates a realistic scenario and have them write or revise a short decision document.
  2. Live session: During the interview, throw in constraints like "What if legal suddenly objects?" or "What if the timeline gets cut in half?" and observe how they respond and think.
  3. Critique exercise: Show an AI-written mediocre document and ask "What's wrong with this?" to test the candidate's 'taste.'

Whether candidates use AI in this process doesn't matter. What matters is their thinking patterns. Check whether those patterns hold up even without AI tools (in conversation), and whether they can clearly explain trade-offs even when using AI.

"We're not trying to catch someone cheating with AI... we're trying to see if they have stable thinking patterns that still show up when they can't just 'tab, tab, tab' (autocomplete)."


6. Closing: Become an 'Athlete' of Knowledge Work

Of course, this method has its limits. AI's scores won't be perfect, and you shouldn't treat them as absolute criteria for promotions or performance reviews (you don't want people to feel surveilled).

But the key is starting small. Pick one habit, one document, and start practicing.

"The real goal is to develop athlete-like habits about our knowledge work. How do we deliberately name skills, measure them, identify what good looks like, and harness AI to train and get better?"

What was impossible in 2019 is possible in 2025: we now have a personal coach in AI. The opportunity on the table is not just to process tasks faster but to refine your skills into irreplaceable expertise. Don't miss it -- start your own training today.
