1. The Importance of Compute

Greg Brockman emphasizes that compute remains a strategic constraint. Better models require not only algorithms and data but also the infrastructure to train and serve them reliably.

2. Scaling Laws and New Architectures

Scaling has repeatedly produced surprising capability gains, but architecture still matters. Progress comes from combining larger systems with better ways to use reasoning, tools, and feedback.
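The scaling-law idea above is often summarized as a power law: loss falls predictably as compute grows. The sketch below uses a toy curve with made-up coefficients (`a`, `alpha`, `c` are assumptions, not measured values from any real model family) to show how the exponent can be read off a log-log fit.

```python
import math

# Illustrative only: a toy power-law scaling curve,
# loss(N) = a * N**(-alpha) + c, with invented coefficients.
a, alpha, c = 10.0, 0.3, 1.7

budgets = [10 ** e for e in range(18, 25)]        # hypothetical compute budgets
losses = [a * n ** (-alpha) + c for n in budgets]

# Recover the exponent with a least-squares line in log-log space,
# after subtracting the assumed irreducible loss c.
xs = [math.log(n) for n in budgets]
ys = [math.log(l - c) for l in losses]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
print(round(-slope, 3))  # recovers alpha = 0.3
```

On real training runs the points are noisy and the irreducible term must itself be estimated, but the straight-line-in-log-log structure is what makes extrapolating capability from compute plausible at all.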

3. Access to AGI and the Future of Software

If AI becomes broadly capable, the bottleneck shifts toward human attention: what people choose to ask for, supervise, and integrate. Software creation may become more accessible, but judgment remains scarce.

4. OpenAI's AI Use and Organizational Change

OpenAI uses AI internally to accelerate work and expose new product patterns. This changes how teams coordinate, review work, and decide what deserves human focus.

5. Common Mistakes and Responsible Deployment

One common mistake is treating AI capability as either magic or useless. Responsible deployment requires measuring real behavior, understanding failure modes, and releasing systems with appropriate safeguards.
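"Measuring real behavior" can be made concrete with a tiny eval harness: run the system against labeled cases and bucket the outcomes by failure mode rather than reporting a single accuracy number. Everything here is a stand-in (`toy_model` and the cases are invented for illustration); a real harness would call the deployed system.

```python
from collections import Counter

def toy_model(prompt: str) -> str:
    # Hypothetical stand-in for the system under test.
    return "4" if prompt == "2+2?" else "unsure"

# Small labeled eval set, made up for illustration.
cases = [
    ("2+2?", "4"),
    ("3+3?", "6"),
    ("capital of France?", "Paris"),
]

# Tally outcomes, distinguishing abstentions from wrong answers:
# they are different failure modes and call for different safeguards.
outcomes = Counter()
for prompt, expected in cases:
    got = toy_model(prompt)
    if got == expected:
        outcomes["pass"] += 1
    elif got == "unsure":
        outcomes["abstained"] += 1
    else:
        outcomes["wrong"] += 1

print(dict(outcomes))  # {'pass': 1, 'abstained': 2}
```

Separating failure modes this way is what lets a deployment decision be "ship with a fallback for abstentions" rather than a blanket judgment that the model is magic or useless.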

6. The Application Layer and the Future of Science

The application layer is where model capability becomes user value. In science, AI could help generate hypotheses, analyze literature, and speed up discovery, but the human process of verification remains essential.

Closing

The key idea is that as AI handles more execution, human attention becomes the scarce resource. The challenge is learning how to aim, evaluate, and govern increasingly capable systems.
