1. Opening: The Reality of AI Development and the Story of Noam Brown
The video was recorded on Saturday morning, June 28, 2025, and opens with a discussion of what may be the hottest topic in the AI industry right now: agent frameworks and harnesses (external frameworks). Noh Jeong-seok emphasizes that while building harnesses is work that will eventually disappear, "there is enormous value in building something today that will be gone in six months and deploying it tomorrow."
"The biggest trend these days is agent frameworks and things like that—building harnesses is work that will end someday, and investing in scale is the smarter play. But the reality is that you need to build a lot of these six-month harnesses."
The episode's main subject is Noam Brown. A recent appearance by Noam Brown on the popular podcast Latent Space has been generating buzz, and Noh Jeong-seok says he listened to it on walks and turned it over in his mind many times.
Noam Brown is one of OpenAI's core researchers. Previously at FAIR (Facebook AI Research), he worked on game AI including poker AI and Diplomacy AI. He is seen as a next-generation AI star comparable to Demis Hassabis.
2. System 1 and System 2: An Analogy for AI and Human Thinking
Understanding Noam Brown's research and philosophy requires grasping the concepts of System 1 and System 2, drawn from Daniel Kahneman's Thinking, Fast and Slow, which map closely onto the evolution of AI models.
- System 1: Intuitive, fast thinking that relies on experience and heuristics
- System 2: Slow, energy-intensive thinking that is deliberate and logical
"System 1 is intuitive and fast—drawing on biases and heuristics to quickly retrieve what you know, with some guesswork mixed in. System 2 takes much more energy: it's the feeling of taking time to think something through carefully."
In AI terms, GPT-4 corresponds to System 1 and o3 to System 2. A key point is that System 2 can only operate if there is already a sufficiently good System 1 underneath it.
"System 2 doesn't work without a sufficiently good System 1. That's completely right."
3. The Bitter Lesson and the Role of Harnesses
The Bitter Lesson, often cited in AI development, is the principle that feeding in large-scale data and compute consistently outperforms hand-coding complex rules, no matter how sophisticated those rules are.
"To simplify the Bitter Lesson: no matter how carefully a human engineers rule-based logic with deep domain knowledge, throwing simple methods together with massive compute and massive data produces better outputs. That's the idea."
In this context, a harness (external framework, exoskeleton) is an umbrella term for everything done to help an AI model perform better—all forms of prompting, context grounding, and function calling. Noam Brown calls harnesses "crutches"—things that need to exist for now but will eventually disappear.
"Noam uses the word 'crutch' for harnesses. He means it will have to go away. That's both right and wrong—I'll dig into that more in a moment."
4. The Evolution of Reasoning Models and the Transfer Phenomenon
Recent AI progress has been moving strongly toward strengthening reasoning capabilities. Models like DeepSeek R1 illustrate the progression from System 1 toward System 2.
- Clear, verifiable rewards act as an important training signal.
- Reasoning ability strengthened in verifiable domains like math and coding has been observed to transfer to other domains.
"They strengthened reasoning ability using only math and coding, but it turns out reasoning improved across other domains too. 'Transfer' is the precise word for it…"
As this evolution unfolds, external frameworks (harnesses) remain necessary, but the expectation is that they will gradually be internalized by the model and disappear as it advances.
5. Real-World AI Services and Context Engineering
When actually building AI services, a harness—i.e., context engineering—is essential. But it too is fated to become obsolete as models improve.
- Context engineering goes beyond prompt engineering to encompass context grounding, function calling, context retrieval (RAG), and more.
- Most current agent services rely heavily on heuristic (experience-based, rule-driven) approaches, which have clear limitations.
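To make the "harness" umbrella term concrete, here is a minimal sketch of the pieces named above: context retrieval, context grounding, and function calling, wired around a stubbed model. Every name here (`retrieve_context`, `TOOLS`, `call_model`, the documents and the order id) is hypothetical, invented for illustration; a real harness would call an actual LLM API and a real vector store instead of these stand-ins.

```python
# Toy "vector store": keyword overlap stands in for embedding search.
DOCUMENTS = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am-6pm, Monday to Friday.",
]

def retrieve_context(query: str) -> list[str]:
    """Context retrieval (RAG): return documents sharing a word with the query."""
    words = set(query.lower().split())
    return [d for d in DOCUMENTS if words & set(d.lower().split())]

# Function calling: the tools the model is allowed to invoke.
TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def call_model(prompt: str) -> dict:
    """Stub for the model. A real harness would call an LLM API here."""
    if "order" in prompt.lower():
        # Pretend the model asked for a tool call (order id is hardcoded in this stub).
        return {"tool": "lookup_order", "args": {"order_id": "A123"}}
    return {"answer": "Answered from grounded context."}

def run_harness(user_query: str) -> dict:
    # Context grounding: stitch the retrieved documents into the prompt.
    context = retrieve_context(user_query)
    prompt = "Context:\n" + "\n".join(context) + f"\n\nUser: {user_query}"
    result = call_model(prompt)
    if "tool" in result:  # dispatch a function call if the model asked for one
        tool_output = TOOLS[result["tool"]](**result["args"])
        return {"tool_result": tool_output, "context_used": context}
    return {"answer": result["answer"], "context_used": context}
```

Note how much of this is exactly the heuristic, rule-driven glue the video describes: the retrieval trick, the tool registry, the prompt template. That glue is the part a sufficiently capable model is expected to internalize.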
"As Noam is saying, at a fundamental level, all harnesses—he uses the very dismissive word 'crutch' for them—will be 'eventually washed away by scale,' as he puts it."
But practically speaking, even a six-month harness is what makes a real service run, and the data accumulated through that process is ultimately essential for evolving the models themselves.
"I want to add something here: there is enormous value in building something today that will be gone in six months, and deploying it tomorrow."
6. Data, Experience, and the Essence of AI Development
The evolution of AI models ultimately comes down to data. Data is the program itself, and the essence of Software 2.0 lies in extracting patterns from data and deriving rules from them.
- Capability overhang: abilities a model already has but that haven't yet been unlocked
- External frameworks (harnesses) accumulate data, which then feeds downstream training: SFT, RLHF, DPO, and so on
"It comes back to data. I personally think data is the program itself."
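One concrete way to read the "data is the program" point: a deployed harness that logs its interactions is quietly accumulating fine-tuning data. A toy sketch of turning such logs into SFT-style prompt/completion pairs follows; the log schema, field names, and rating threshold are all made up for illustration, not any real pipeline's format.

```python
import json

# Hypothetical interaction log produced by a deployed harness.
interaction_logs = [
    {"prompt": "Summarize this ticket: printer offline",
     "response": "Ticket: printer offline. Likely a driver issue.",
     "user_rating": 5},
    {"prompt": "Summarize this ticket: login fails",
     "response": "idk",
     "user_rating": 1},
]

def logs_to_sft_pairs(logs: list[dict], min_rating: int = 4) -> list[dict]:
    """Keep only well-rated interactions and emit SFT-style records."""
    return [
        {"prompt": rec["prompt"], "completion": rec["response"]}
        for rec in logs
        if rec["user_rating"] >= min_rating
    ]

sft_data = logs_to_sft_pairs(interaction_logs)
# One JSON object per line, the shape commonly used for fine-tuning datasets.
jsonl = "\n".join(json.dumps(pair) for pair in sft_data)
```

The same filtered logs could instead be paired (good response vs. bad response to the same prompt) to build preference data for RLHF or DPO; the point is that the six-month harness is the thing generating the training signal.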
There is also an honest acknowledgment of anxiety—that AI is advancing so fast that today's efforts might be meaningless in one or two years.
7. Human Learning and AI Learning: System 1 ↔ System 2
In the latter part of the video, the discussion turns to how human learning and AI learning resemble each other and how they mutually influence one another.
- Techniques for strengthening long-term human memory—spaced repetition, flashcard apps (Anki, Mochi)—are introduced, along with examples of using AI for self-augmented learning.
- "Raise System 1 and you get better at System 2; System 2 experience becomes System 1."
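The spaced-repetition idea behind apps like Anki and Mochi can be sketched in a few lines. This is loosely modeled on the SM-2 family of scheduling algorithms; the constants below are illustrative, not Anki's actual parameters.

```python
def next_interval(prev_interval_days: float, ease: float, quality: int):
    """Schedule the next review of a flashcard.

    quality: 0-5 self-rating of recall (SM-2 style). A failed recall
    resets the interval to one day; a successful one grows the interval
    multiplicatively, so total review effort per card shrinks over time.
    Returns (next_interval_days, updated_ease).
    """
    if quality < 3:  # forgot: review again tomorrow, and make the card "harder"
        return 1.0, max(1.3, ease - 0.2)
    new_ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
    return prev_interval_days * new_ease, new_ease

# Example: three successful reviews of one card, starting at a 1-day interval.
interval, ease = 1.0, 2.5
for q in (5, 4, 5):
    interval, ease = next_interval(interval, ease, q)
```

After three good recalls the interval has stretched from one day to a couple of weeks, which is the mechanism by which System 2 rehearsal gets consolidated into durable System 1 memory.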
"You've probably heard the saying that learning compounds like interest. Because you can use what you learned in the past to learn what comes next. Everything connects to everything else."
8. Creativity, Connection, and the Limits of AI and Humans
Creativity emerges from connecting distant concepts, and AI models can learn from and link far more data than any human can. That capacity is highlighted as a key advantage.
- Human cognitive tendencies like apophenia (finding meaning in noise) also play an important role in how we interact with AI.
- The importance of detachment and verification is noted, with a caution that creative connections can sometimes lead you astray.
"The way is to notice when pattern-seeking has gone into overdrive, step back, and keep only the connections that survive actual interaction. I genuinely sat on this for nine days. When I read it again, I thought, 'Was I being gaslit?'"
9. The Limits of AI and the Role of Humans
No matter how capable AI becomes, its usefulness is ultimately bounded by the capability of the person using it—a point repeated throughout the video.
"No matter how much AI performance increases, and however excellent the AI becomes, its capabilities are bounded by the capabilities of the person using it."
The conclusion, then, is that to use AI well, humans themselves must grow alongside it.
10. Closing: The Future of AI and How to Prepare
The video closes with a quote from Ilya Sutskever, warning that AI's future is profoundly unpredictable and that an "intelligence explosion"—AI building AI—may be coming.
"The problem with AI is that it is so influential and so powerful that it can solve everything, while at the same time it can do 'everything.' That's the problem. And right now, none of these questions have answers."
The message the video ends on: to prepare for the "unthinkable" changes that AI will bring, more people need to be asking questions and exchanging perspectives from a wide range of viewpoints. 🚀🤖🧠
Key Terms
- System 1 / System 2
- Bitter Lesson
- Harness (external framework, crutch, exoskeleton)
- Context Engineering
- Reasoning models
- Transfer
- Capability Overhang
- Data is the program
- Spaced Repetition, Anki, Mochi
- Apophenia, creative connection
- The limits of AI and the role of humans
- Intelligence explosion, Unthinkable
Notable Quotes
"System 2 doesn't work without a sufficiently good System 1."
"The Bitter Lesson is always right."
"Eventually washed away by scale."
"There is enormous value in building something today that will be gone in six months, and deploying it tomorrow."
"Data is the program itself."
"No matter how excellent that AI becomes, its capabilities are bounded by the capabilities of the person using it."
"The problem with AI is that it is so influential and so powerful that it can solve everything, while at the same time it can do 'everything.' That's the problem."
Closing Thoughts
This video offers deep insight into the reality and philosophy of AI development, and the interaction between humans and AI. It unpacks what Context Engineering actually is, why it gets compared to a "crutch," and what posture we should adopt as we prepare for the AI future, all through a wealth of examples, quotes, and real experience, delivered in an approachable conversational style. If you have any interest in AI, this is well worth a listen: it's full of things to think about. 🚀🤖🧠
