This video provides deep insights into how knowledge workers can improve their professional skills in the AI era. In particular, it introduces how to train systematically like an athlete using AI, convert vague feedback into measurable scores, the five critical skills that matter when AI handles execution, and how to create practice loops at the team level.


1. Knowledge Workers Need Training Too: From Job-Centric to Skill-Centric

We need to shift from thinking about careers in terms of 'jobs' to thinking in terms of 'skills,' and consider how to train and develop these skills for career growth. In particular, we need to focus on how to improve these skills with the help of AI.

This idea was inspired by a 2019 blog post by Tyler Cowen, who pointed out that athletes, musicians, and performers all train, but knowledge workers don't. A basketball player practices free throws; knowledge work has no equivalent, clearly defined drill.

"Athletes train, musicians train, performers train. But knowledge workers don't really train. We don't train. I don't shoot free throws. There's no equivalent for knowledge work."

This led the author to start thinking about what the equivalent of a pianist practicing scales might be for knowledge work. The AI era in particular has opened up new opportunities to think about skills differently.

Our current assumptions about skills are too tied to job descriptions. If you're a hiring manager, you know that specific skills are embedded in job postings, recruiting software, compensation estimates, and promotion criteria -- as if skills can't exist independently of roles. But we're moving into a world where they can.

"It's as if you can't imagine a world where skills can exist independently of roles. But that's exactly the world we're heading toward."

In that world, we acquire skills to do meaningful work alongside AI, and we are measured not by whether we hold job A or job B (say, product manager or engineer) but by our ability to produce results through those skills.


2. Deliberate Practice Methods for Knowledge Workers

So what does practice look like in this 'skill-centric' world? Just as athletes train physically, knowledge workers must also strengthen specific cognitive patterns and responses through practice. Otherwise, we spend our entire careers in live performance, which is a highly inefficient way to learn.

The good news is that right now, in the AI era, we have the best opportunity ever for this kind of practice, because AI can provide personalized practice feedback that was previously impossible to scale.

However, there are three structural reasons why knowledge workers find it hard to practice.

2.1. Three Structural Barriers to Practice

  1. Fuzzy Outcomes: In basketball, the ball either goes in or it doesn't -- the result is clear. But in product, strategy, leadership, and engineering, 'good' gets muddled across many dimensions like speed, quality, politics, relationships, and risk. There's no clean binary signal.

    "In basketball, the ball goes in or it doesn't. You shoot a free throw and miss or make it. It's a clean signal. In product, strategy, leadership, or engineering, 'good' can get muddled across so many dimensions."

  2. Delayed and Noisy Feedback: Even if you make an important decision in Q1, you might not know if it was successful until Q3 at the earliest. In the meantime, the market changes, a competitor launches something new, or key people leave. Getting a clean counterfactual like "if I had written the spec differently, I could have avoided event X or Y" is nearly impossible.

  3. Low Repetition: A serious musician practices scales hundreds of times per week, but how many important decision documents, product specs, strategy documents, or technical architecture memos does a knowledge worker write per quarter? Because each document involves real money and real people, a traditional career path offers no 'low-stakes' practice space. Most knowledge workers spend 95%+ of their time in actual work -- in 'live games' -- which is like only ever practicing in front of a crowd, with your career on the line. It's better than nothing, but it's far from true practice.


3. Five Critical Skills for the AI Era

So what specific skills can be repeatedly practiced in the AI era? The author identifies five important skills.

  1. Judgment: How do you frame decisions, define options, and make choices under uncertain conditions? That's judgment.

  2. Orchestration: How do you convert ambiguous goals into concrete workflows that humans and AI can execute together? It's the ability to extract clarity from ambiguity.

  3. Coordination: How do you lead groups of humans through ambiguity without causing chaos? As AI agents advance, the skill of coordinating both agents and humans will also be needed.

  4. Taste: Do you have meaningful quality standards for product, writing, design, and strategy? Can you articulate what good looks like, discuss it, and improve it as a skill?

  5. Updating: When evidence and context change, how do you change your mind without being swept away by noise? How do you update your heuristics and rubrics and change your thinking in meaningful ways?

These skills aren't just 'adjectives' for a LinkedIn tagline. They live in the 'artifacts' we create and leave behind.

  • Judgment shows up in decision documents, experiment designs, and prioritization documents.
  • Orchestration appears in handoff documents, specs, and project planning approaches.
  • Coordination is found in emails, meeting notes, and stakeholder maps.
  • Taste is revealed in the look of UX and in the examples or analogies chosen.
  • Updating shows in how plans evolve over time and how reasoning is documented.

In other words, these skills are not abstract adjectives but 'patterns' that emerge in what we produce. Accepting this means we stop arguing abstractly about who's strategic and instead look at how people actually write, act, and decide. This has always been the gold standard behind behavioral interviews, but achieving that clarity has become even harder in the AI era.


4. How AI Changes the Way We Practice

AI isn't a magic brain -- it's a tool that can read text, follow instructions, and apply rubrics consistently. This makes it incredibly useful as a 'wall' to practice against.

4.1. Step 1: Define 'Good'

To get serious about practice, first select one 'artifact' that matters to your team. For example, a product manager's 'decision document.' Then sit down with people you trust and ask a very simple question:

"When you say a decision document is 'good,' what specifically do you mean?"

Ask this question gently, clearly, persistently, and push trusted people for specifics. Build a small, concrete list. For example:

  • Is the decision stated in one sentence?
  • Are there at least two real options?
  • Are stakes and metrics explicit?
  • Is there a clear recommendation?
  • Are risks and trade-offs surfaced?

This is just one example for one artifact. You should define what 'good' means for every artifact relevant to your field -- architecture documents for engineering, call summaries for CSMs, pipeline forecasts for sales, etc.
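A checklist like the one above becomes easier to reuse once it's written down as structured data. The sketch below is a hypothetical Python representation (the dimension names mirror the example checklist; nothing here is prescribed by the video):

```python
# An illustrative rubric for a 'decision document' artifact.
# Each entry pairs a dimension name with the question a reviewer
# (human or AI) should answer on a 1-5 scale.
DECISION_DOC_RUBRIC = {
    "clarity": "Is the decision stated in one sentence?",
    "options": "Are there at least two real options?",
    "stakes": "Are stakes and metrics explicit?",
    "recommendation": "Is there a clear recommendation?",
    "risks": "Are risks and trade-offs surfaced?",
}

def render_rubric(rubric: dict[str, str]) -> str:
    """Format the rubric as text that can be pasted into a prompt."""
    lines = ["Score each dimension from 1 (poor) to 5 (excellent):"]
    for name, question in rubric.items():
        lines.append(f"- {name}: {question}")
    return "\n".join(lines)

print(render_rubric(DECISION_DOC_RUBRIC))
```

The same shape works for any artifact: swap in an architecture-doc rubric for engineering or a call-summary rubric for CSMs.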

4.2. Converting the Rubric into an AI Scoring System

After defining 'good,' convert it into a clear 'rubric' on a 1-to-5 scale. Then take 3-5 real examples, score them yourself, and record your feedback.

"This one is really good on clarity. This one has good risk analysis but has these weaknesses. Here's why."

Note that we haven't used AI yet. The author emphasizes that human skills require human accountability. Only after scoring a few documents yourself and recording feedback do you bring in the LLM.

Provide the LLM with the rubric and your hand-scored examples. Technology has advanced to the point where even handwritten red-pen notes on documents can be recognized.

Then instruct the AI:

"When I send you a new document, score it like this. Quote the parts you're reacting to, briefly explain why you gave each score, and suggest an edit that could raise one of these dimensions by 1-2 points."

Look at the change this creates! Instead of a manager glancing over something for 15 minutes thinking "hmm, vague, I'll look later," you now get structured critique that can be applied consistently across every type of document.

"This document scores a 2 on options. That document scores a 4 on clarity but a 1 on risk structuring. Here's what I need to change."

AI provides a rough but consistent view of how your skills manifest in your actual work. This is the 'signal' we've been missing -- the equivalent of the basketball going through the hoop. You can record these scores and track your behavioral patterns and score changes quarterly. What was impossible before the AI era -- 'film reviews' of thinking and writing at scale -- is now achievable with just good prompts, without hiring an army of coaches!
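Mechanically, the scoring step is just prompt assembly plus a model call. Below is a minimal sketch under the assumption that you keep your rubric and hand-scored calibration examples as plain text; `call_llm` is a placeholder for whatever chat API your team uses, not a real client:

```python
def build_scoring_prompt(rubric: str, scored_examples: list[str], new_doc: str) -> str:
    """Assemble the critique prompt: rubric first, then hand-scored
    calibration examples, then the document to be scored."""
    parts = [
        "You are scoring a decision document against this rubric:",
        rubric,
        "Here are examples I scored by hand, with my reasoning:",
        *scored_examples,
        "Now score the following document. Quote the parts you are "
        "reacting to, briefly explain each score, and suggest one edit "
        "that would raise a dimension by 1-2 points.",
        new_doc,
    ]
    return "\n\n".join(parts)

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your actual chat API call here.
    raise NotImplementedError

# Hypothetical usage (rubric text, one calibration example, a new doc):
prompt = build_scoring_prompt(
    rubric="clarity, options, stakes, recommendation, risks (score 1-5 each)",
    scored_examples=["Example A: clarity 5 -- decision stated in one sentence."],
    new_doc="We should migrate billing to the new platform because ...",
)
```

Putting your own hand-scored examples before the new document is what keeps the model's 1-5 scale anchored to your standards rather than its own.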

4.3. Turning Film Reviews into Repeatable Training

Now it's time to convert film reviews into repeatable drills that train the patterns you care about.

Take 'judgment' as an example. In decision documents, judgment looks like "Can you write a decision doc such that a reasonable person can say yes or no without a 2-hour meeting?" If you have a rubric, you can create a practice drill like this:

"Once a week, take a real messy situation -- a Slack thread, an ambiguous manager request, an idea you had in the shower -- and write a one-page decision document. Include the patterns you defined as 'good': clear decision, options presented, stakes, recommendation. Now run it through the same AI rubric you use for real documents. Compare your version with a stronger version generated by the model. Notice what you missed. That's your practice."

This is the practice! Comparing against what's good, focusing on sub-skills, and repeating weekly.

  • For orchestration, define good specs (explicit goals, inputs, outputs, constraints) and create drills for converting ambiguous goals into time-boxed specs or organizational decisions.
  • For coordination, define patterns for executive updates.

The key is understanding the chain of behaviors to adopt for skill improvement.

Skill -> Identify recurring actions -> Connect to recognizable patterns in artifacts -> Set grades -> Start practicing!

Through this process, you can use AI like a personal coach to genuinely improve.
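Once scores accumulate week over week, the 'film review' becomes data you can act on. A small illustrative sketch (the scores and week labels are made up) for spotting the dimension that most needs next week's drill:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical weekly rubric scores: {week: {dimension: score}}
weekly_scores = {
    "2024-W01": {"clarity": 3, "options": 2, "risks": 2},
    "2024-W02": {"clarity": 4, "options": 2, "risks": 3},
    "2024-W03": {"clarity": 4, "options": 3, "risks": 3},
}

def dimension_averages(scores: dict) -> dict[str, float]:
    """Average each rubric dimension across all recorded weeks."""
    totals = defaultdict(list)
    for week in scores.values():
        for dim, score in week.items():
            totals[dim].append(score)
    return {dim: mean(vals) for dim, vals in totals.items()}

def weakest_dimension(scores: dict) -> str:
    """The dimension to target in next week's drill."""
    avgs = dimension_averages(scores)
    return min(avgs, key=avgs.get)

print(weakest_dimension(weekly_scores))  # -> "options"
```

The point is not precision -- the scores are noisy -- but a consistent signal about where to aim the next repetition.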


5. Team-Level Practice Loops

If you're a team leader, how can you apply this? The author explains a potential operating model that most team leaders don't follow.

  1. Define team-level rubrics: Focus on a specific artifact the team wants to improve over a quarter. Have the entire team define rubrics together. Rather than individual work, having the whole team find and discuss good example documents to create rubrics is much more powerful.

  2. Build a team LLM workflow: Set up a team LLM that automatically runs a rubric review whenever someone marks a document 'ready for review.' Just as engineers have PRs auto-reviewed with Codex, Claude or ChatGPT can auto-review documents and leave comments.

  3. Use AI feedback before human review: Ask team members to apply AI critique to documents before human review. This is a managerial decision.

  4. Secure team practice time: Once or twice a week, have the entire team set a 10-minute timer and practice improving the areas AI has consistently flagged as needing work, then share. When goals are clear, humans perform better.

This approach helps teams get stronger and individuals develop faster. The goal isn't to demand perfection or tie this to performance reviews. Instead, it's about using small, consistent habits to build and scale the kinds of useful skills needed in the AI era.

At the end of the quarter, you should be able to have conversations with the team like "Did we see improvement in our rubric for this artifact? Did scores go up? Did the number of iterations needed for document approval decrease? Are key decisions being made faster and with less confusion?" If these metrics are moving in the right direction, you'll know the practice loop is changing how the team thinks and writes.


6. Applying Rubrics to Hiring

Remarkably, these skill sets can also be applied in job interviews. Most companies hire for skills in very indirect ways. For example, they ask "Tell me about a time you influenced a stakeholder" and listen to the candidate's story, trying to infer whether they can perform the work needed over the next few quarters.

But if you've defined patterns for specific artifacts, there's a far more realistic way to evaluate people. Give candidates the same 'game' you do as a team and see how they'd perform on the job.

For example, instead of a traditional PM interview, give the PM a short exercise to write or revise a decision document based on a realistic prompt. Then discuss the document in a live session, changing constraints like legal limitations or tight deadlines to observe how the candidate thinks and adjusts.

Next, you can run a critique exercise showing an intentionally mediocre AI-generated document and asking what's wrong with it.

The advantage of this approach is that you can leverage the same rubrics developed internally and use the same AI model as a consistent first-pass scoring tool. The point isn't having AI decide who to hire, but having shared, concrete standards for what 'good' looks like in real work.

Another great thing about this approach is that hiring and development now point to the same place. The skills you test candidates on become the skills you help them practice after they're hired. Not "We hired for strategic thinking but their Jira ticket management is terrible," but "these are the skills we tested for and the skills we work on together as a team."


7. Why AI Usage Doesn't Undermine Skill Assessment

This entire process presupposes getting better by using AI. There's no need to hide AI usage, and you can improve these skills while using AI openly. Because the goal is 'results.'

Even if an interviewee uses AI, if they panic and can't cope when you change constraints or present specific situations in a live session, it clearly shows either that they don't have a healthy working relationship with AI or that the limits of their skill set have been exposed.

These practice loops are designed to strengthen the kinds of skills humans need in the AI era. They'll push people to clarify decisions, surface risks, and articulate trade-offs clearly. If someone uses AI to draft something, that's great, but if they haven't done the deep thinking, it'll be easy to spot.

The freedom of this approach is that real assessment happens through genuine conversations where people discuss their choices and how they respond. Interviews and development conversations will feel remarkably similar. We're not trying to catch people cheating with AI. We just want to verify that stable thinking patterns remain even when the 'tab' (AI autocomplete) is gone. If, during a screen-free conversation where you shift dynamics and discuss quality, they stumble, you'll know. But if they arrived at the destination faster with AI's help and can clearly explain the trade-offs and leverage the technology in the right direction, that's a fantastic outcome. And now we can measure it.


8. Limitations and Starting Small

There are realistic limitations to this approach.

  • Rubric scores can be noisy. Don't treat them as precise numerical representations or use them as grounds for promotion.
  • You don't want to create a surveillance risk where people feel every document is being scored.
  • You need to prevent program fatigue from causing these efforts to fizzle out.

Therefore, the author strongly recommends starting small rather than trying to go big. Choose one small thing, a brief change in habit, and start practicing, gradually building comfort.

The ultimate goal is developing athlete-like habits for knowledge work. How do you deliberately name what 'good' looks like for your skills, measure it, and harness AI to train and improve?

When Tyler Cowen posed the question in 2019, AI couldn't coach us, so coaching was too expensive for most people. But now AI can help each of us and our teams actually grow our skill sets. This is a truly exciting change!


Conclusion

The AI era offers knowledge workers far more than mere efficiency gains -- it provides a new opportunity to systematically train and develop professional skills. By defining the previously vague standards of 'doing well' as clear rubrics, using AI as a personal coach for repetitive practice, and applying these processes to teams and hiring, we can achieve levels of growth that were hard to imagine in the past. Starting small now and building the habit of becoming an 'athlete' of knowledge work is what matters most.
