This video features AI expert and investigative journalist Karen Hao discussing her book "EMPIRE OF AI: Inside the Reckless Race For Total Domination" with Steven Bartlett, exposing the dark side of the AI industry. Hao argues that AI companies are deceiving the public with "imperialist agendas" that go beyond mere profit-seeking, and she lays out the social and environmental damage AI development can cause, along with directions for safer alternatives.


1. The Uncomfortable Truth of the AI Industry: An Empire of Profit and Control

Karen Hao points out that while the AI industry outwardly promises human progress and prosperity, it is actually driven by hidden motives of profit-seeking and power consolidation. AI companies paradoxically emphasize that AI could be a potential threat to humanity while positioning themselves as the only entities capable of developing it safely.

AI companies violate the intellectual property of artists, writers, and other creators to train their models, exploit massive amounts of labor, and create the contradictory situation of laid-off employees teaching AI models the very tasks they were fired from. Meanwhile, AI development is fueling environmental and public health crises, while companies spend hundreds of millions of dollars to block relevant legislation and, in imperial fashion, suppress critical researchers.


2. Karen Hao's AI Research Journey: From MIT to the Wall Street Journal

Karen Hao studied mechanical engineering at MIT before joining a Silicon Valley startup. After watching the climate-focused company fire its CEO over profitability concerns, she began questioning whether technological innovation truly serves the public good. Her book "Empire of AI" grew out of eight years of in-depth reporting and over 250 interviews, including more than 90 with current and former OpenAI employees.


3. The AGI Myth: Vague Definitions and Sam Altman's Strategy

Hao points out how AI companies use AGI (Artificial General Intelligence) as a marketing tool, exploiting the lack of a clear definition. Sam Altman used different definitions of AGI depending on the audience - before Congress it was "a system that cures cancer and solves climate change," while for Microsoft investors it was "a system generating hundreds of billions in revenue." This vagueness serves as a strategy to evade regulation and attract more capital and consumer engagement.


4. The Sam Altman Ouster Attempt and OpenAI's Internal Conflicts

Co-founder Ilya Sutskever, deeply concerned about Altman's leadership, believed he was "not the person who should press the AGI button." Sutskever judged that Altman was sowing chaos within the company, pitting teams against each other, and fostering distrust. The board fired Altman without warning but faced fierce backlash from investors including Microsoft, and Altman was reinstated within days. Sutskever and Mira Murati eventually left the company.


5. AI Companies' Deceptive Narratives: The Duality of "Summoning the Demon"

Hao analyzes how AI companies' rhetoric about AI dangers is actually a strategy to deceive the public and consolidate power. They argue "if we don't do it, China will" while also claiming "if someone else does it, it'll be a disaster" - ultimately pushing the narrative that only they should control AI development. She compares this to the myth-making in "Dune": AI leaders eventually come to believe their own myths.


6. AI and Jobs: Polarization and Loss of Humanity

Hao discusses AI's impact on jobs as something more complex than simple automation. While individual work speeds up, the labor market is polarizing, with the new jobs created far worse than those lost. She cites Klarna, where AI now handles the majority of customer service inquiries and staff has been cut by more than half. Meanwhile, laid-off workers are pushed into poor-quality "data annotation" jobs, essentially training AI to permanently replace their former roles.


7. AI's Environmental Destruction and Deepening Inequality

AI development poses serious threats to the environment and public health. Massive data centers consume enormous power - OpenAI's facility in Abilene, Texas will consume over 1 gigawatt, more than 20% of New York City's power. These facilities are built in vulnerable communities, increasing power costs, destabilizing grids, and depleting clean water resources. In Memphis, Musk's Colossus supercomputer exposed residents to toxic substances, increasing respiratory disease and cancer rates.


8. Dismantling AI Empires and Seeking New Development Directions

Hao strongly advocates for "dismantling AI empires and developing alternatives." She compares AI to transportation - we don't need only rockets (large-scale models like GPT-4); efficient "bicycles" like DeepMind's AlphaFold can provide enormous benefits with far fewer resources. She emphasizes that the public, as "data contributors" to AI companies, should apply pressure through data refusal and other means, noting that 80% of Americans believe AI industry regulation is needed.
