As AI moves deeper into ordinary work, the "tab key" becomes a symbol for a new kind of labor. Instead of creating everything ourselves, we increasingly approve, reject, and lightly edit outputs proposed by machines. This article argues that the shift is not limited to software development. It is spreading across knowledge work in general, transforming human expertise into training data and reducing more and more jobs to binary acts of validation.


1. "Tab, Tab, Tab" Is No Longer Just About Coding

The piece opens with a vivid scene: an AI startup founder uses Cursor during a hackathon and accepts one code suggestion after another. After twenty minutes, almost none of the resulting code has been typed directly by the human; the work has consisted mostly of reviewing and accepting.

The author then recognizes the same pattern in their own use of Claude for note-taking and market research. Even after spending many hours with the tool, the truly original thoughts were relatively few. Most of the interaction was selection rather than creation.

That is why the repeated keystroke becomes the article's refrain:

Tab. Tab. Tab.


2. A New Industry: Human Validation Farms

The article argues that a major AI business model is no longer pure software creation, but human validation at scale.

Companies like Mercor, Surge, and Scale AI are framed as examples of this shift. Their value comes from organizing humans to judge, refine, and label model outputs. The bottleneck is no longer generating possibilities. Models can generate near-infinite possibilities already. The bottleneck is deciding which outputs are useful.

In that sense, people increasingly serve as filters for machine abundance.


3. Work Is Becoming Binary

The author says this pattern is spreading everywhere:

  • radiologists review AI-detected findings,
  • bankers inspect AI-generated models,
  • lawyers check AI-drafted contracts.

Across professions, the structure of work is drifting toward:

  • accept,
  • reject,
  • revise.

Instead of producing from scratch, workers are increasingly being asked to validate, triage, and lightly shape machine-generated options. The article warns that if this continues, many workdays may soon begin with most tasks already done, waiting only for final human approval.


4. Human Expertise as Proprietary Training Data

The article then identifies the real competitive moat in AI: not GPUs, not even algorithms, but expert human judgment captured as data.

Smart companies, the author says, are building internal tools that record every subtle choice employees make while using AI systems. The point is not just productivity. It is to convert the taste, corrections, and decision patterns of experts into proprietary training data.

That creates a closed loop:

  • workers interact with AI,
  • their edits and approvals are captured,
  • models improve from those signals,
  • and the organization becomes less dependent on the individual humans who generated the data.
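The loop above is easy to picture concretely. Here is a minimal sketch, not drawn from the article itself, of how accept/reject/revise decisions might be logged and converted into training records; the schema and function names (`ValidationEvent`, `to_training_records`) are illustrative assumptions, and the (chosen, rejected) pair format mirrors the shape commonly used for preference-based model training:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ValidationEvent:
    # One human decision over one AI suggestion (hypothetical schema).
    prompt: str                     # what the model was asked
    suggestion: str                 # what the model proposed
    action: str                     # "accept", "reject", or "revise"
    revision: Optional[str] = None  # final text when action == "revise"

def to_training_records(events: List[ValidationEvent]) -> List[dict]:
    """Convert logged decisions into training records:
    accepts and rejects become labeled examples, and revisions
    become (chosen, rejected) preference pairs."""
    records = []
    for e in events:
        if e.action == "accept":
            records.append({"prompt": e.prompt,
                            "completion": e.suggestion, "label": 1})
        elif e.action == "reject":
            records.append({"prompt": e.prompt,
                            "completion": e.suggestion, "label": 0})
        elif e.action == "revise" and e.revision is not None:
            # The expert's edit is preferred over the raw suggestion.
            records.append({"prompt": e.prompt,
                            "chosen": e.revision,
                            "rejected": e.suggestion})
    return records

events = [
    ValidationEvent("summarize Q3", "Revenue grew.", "accept"),
    ValidationEvent("draft clause", "Party A shall...", "revise",
                    revision="Party A must..."),
    ValidationEvent("find risks", "None found.", "reject"),
]
print(to_training_records(events))
```

The design choice worth noticing is the third branch: a revision carries the richest signal, because it records not just that the suggestion was inadequate but what an expert considered better, which is exactly the judgment the article says becomes the moat.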


5. Are We Teaching AI to Replace Us?

One of the article's darkest claims is that workers are not merely being displaced by AI. In many cases, they are actively teaching AI how to replace them.

Past outsourcing at least left skill and judgment in other humans. But in this new model, expertise is compressed into machine-readable feedback. Once the model becomes good enough, the original expert may no longer be necessary.

That is why the simple act of pressing tab carries symbolic force. Each small approval trains the system a little more.


6. What Remains Human

The article does not end in total despair. It points to areas that still feel genuinely human:

  • curiosity,
  • taste,
  • exploration beyond existing patterns,
  • and noticing problems that no model has yet been trained to see.

These are not things that can easily be reduced to binary validation. They are closer to discovering new terrain than to checking boxes.

The piece suggests that this may become the real frontier of human work: not endless approval of known outputs, but the creation of new questions, new frames, and new forms of judgment.


Conclusion

The article's warning is that AI is no longer just a tool sitting beside expertise. It is becoming a mechanism that absorbs expertise through everyday use. The more work turns into "yes," "no," "accept," or "delete," the more human knowledge becomes machine fuel.

And yet the essay also points toward a possible line of defense: curiosity cannot be tab-completed. Taste is not born from validation alone. Problems that do not yet exist in any training set still require genuinely human perception.

That may be where meaningful work survives, after the rhythm of "tab, tab, tab" has spread everywhere else.
