This video features renowned investigative journalist Karen Hao discussing her book "EMPIRE OF AI: Inside the Reckless Race For Total Domination" with Steven Bartlett, exposing the dark side of the AI industry. Hao argues that AI companies are deceiving the public with "imperialist agendas" that go beyond mere profit-seeking, and she lays out the social and environmental damage AI development can cause, along with directions for safer development.


1. The Uncomfortable Truth About the AI Industry: An Empire of Profit and Control

While the AI industry outwardly promises human advancement and prosperity, its true motivations are profit-seeking and power consolidation, says Karen Hao. AI companies package themselves as the only entities capable of safely developing potentially threatening technologies — while simultaneously being agents of that very threat.

"Many of the things happening in the AI industry today are extremely inhumane."

AI companies are violating the intellectual property rights of artists, writers, and creators to train their models, exploiting vast amounts of labor, and creating perverse situations where laid-off workers train AI models to do the very jobs they were fired from. Many people are losing their livelihoods, and even newly created jobs are far inferior to what came before. This technological development also creates environmental and public health crises, with companies spending hundreds of millions of dollars to block related legislation and suppress critical researchers — operating in an imperialist fashion.


2. Karen Hao's AI Research Journey: From MIT to the Wall Street Journal

Karen Hao studied mechanical engineering at MIT before joining a Silicon Valley startup. But watching a climate-focused company fire its CEO over lack of profitability raised a fundamental question: is technology truly for the public good, or merely profit-driven?

"If this hub is ultimately focused on making profitable technology, and many of the world's problems I think need to be solved — like climate change — are unprofitable problems, what are we doing here? Has innovation reached the point where instead of serving the public good, it damages the public good in pursuit of profit?"

Turning to journalism, she worked as an AI reporter at MIT Technology Review and later at the Wall Street Journal, exploring who builds technology, how capital and ideology shape technological production, and how innovation ecosystems might be reimagined to benefit everyone. Her book "EMPIRE OF AI" emerged from eight years of in-depth reporting and over 250 interviews, including more than 90 with current and former OpenAI employees.


3. The AGI Myth: Vague Definitions and Sam Altman's Strategy

AI history began in 1956 when John McCarthy introduced the term "Artificial Intelligence" at Dartmouth — even then, there was no clear scientific consensus on what human intelligence is. Karen Hao argues that AI companies exploit this ambiguity, using AGI (Artificial General Intelligence) as a marketing tool.

"There is no destination in this field... when the industry says it wants to recreate an AI system as smart as humans, it has no destination either. How do we define what that means? And if we can't define the destination, when do we arrive? What this means is that these companies can essentially use the term AGI however they want."

Sam Altman used different AGI definitions for different audiences: to Congress — "a system that can cure cancer, solve climate change, and eliminate poverty"; to consumers — "the best digital assistant"; to Microsoft in investment negotiations — "a system that will generate hundreds of billions in revenue"; on the OpenAI website — "a highly autonomous system that outperforms humans at most economically valuable work." This definitional ambiguity was used strategically to evade regulation and attract more capital and consumer engagement.


4. The Attempted Ouster of Sam Altman and OpenAI's Internal Conflict

OpenAI co-founder Ilya Sutskever grew deeply concerned about Sam Altman's behavior and leadership, believing he was "not someone who should press the AGI button." Sutskever felt Altman was causing extreme internal chaos, pitting teams against each other and fostering distrust among employees. He reached out to independent board director Helen Toner, who coordinated with the other independent directors to attempt Altman's removal.

The board believed that Altman's chaotic leadership was intolerable given that OpenAI was developing technology that could "make or break the world." They fired him without warning anyone, fearing his powers of persuasion — but this triggered fierce opposition from Microsoft and most employees, and Altman returned as CEO within days. Those who had opposed him, including Sutskever and Mira Murati, eventually left the company.

Conflicts over OpenAI's direction have repeatedly splintered its early members, each with their own vision for AI, into separate companies: Elon Musk went on to found xAI, Dario Amodei founded Anthropic, and Ilya Sutskever founded Safe Superintelligence.


5. AI Companies' Deceptive Discourse: The Duality of "Summoning Demons"

AI companies frequently invoke the phrase "summoning demons" to warn about AI dangers. But Karen Hao argues this discourse is a strategy to deceive the public and consolidate power.

"Do I know they're summoning demons? Well, they are deliberately trying to evoke those emotions in the public. Because that is an important part of their power."

On one hand they say "if we don't do it, China will." On the other, "if someone else does it, it will be catastrophic." The conclusion: only they should control AI development. These claims are not ordinary predictions but strategic speech acts designed to persuade the public to delegate more power and resources to them. Hao compares this to mythmaking in the film "Dune" — AI leaders eventually get lost in the myths they themselves created.

AI is not self-improving across all domains; it performs only the functions it was specifically trained for. This is why companies invest heavily in training their models for the particular industries where profit is possible (finance, law, medicine, commerce).


6. Jobs in the AI Era: Polarization and Loss of Humanity

Karen Hao says AI's impact on jobs is a more complex problem than simple automation. What matters is not just technical capability: the decisions of corporate executives, and the rhetoric they direct at the public, play a critical role. She worries about a polarized labor market in which AI eliminates jobs en masse while even the newly created jobs are far worse than their predecessors.

The Klarna CEO illustrates this: he boasted that AI now handles 70% of customer service work and has allowed the company to cut its workforce by more than half, while simultaneously emphasizing the importance of "excellently prepared human experiences" for customers who want human connection.

"A marketer gets laid off, works at a data annotation company, and trains the model on the very work they were fired from. That model then perpetuates more layoffs."

This strips workers of their humanity, undermining autonomy and dignity while amplifying anxiety and stress.


7. AI's Environmental Destruction and Growing Inequality

AI development poses serious environmental and public health threats beyond just jobs. The massive computing resources required for AI model training consume enormous amounts of electricity, directly increasing carbon emissions. OpenAI's large-scale data centers in Abilene, Texas, and Elon Musk's supercomputer Colossus built in Memphis for training Grok illustrate the severity of this problem.

"When complete, this facility will consume more than 1 gigawatt of power — more than 20% of New York City's electricity."

These data centers are primarily built in vulnerable communities, raising electricity costs for local residents, destabilizing the power grid, and even depleting clean water resources. In Memphis, methane turbine operations exposed local residents to toxic substances, increasing rates of respiratory disease and cancer.

Hao warns that if AI continues developing this way, the gap between "haves and have-nots" will widen dramatically. Wealthy corporate executives will live "more human" lives thanks to AI, while most people will work for AI — losing humanity and suffering environmental degradation.


8. Dismantling AI Empires and Seeking New Development Directions

Karen Hao strongly argues we must "dismantle AI empires and develop alternatives." She compares AI to "transportation": alongside rocket-scale models like GPT-4, we also need efficient, eco-friendly models, the bicycles of AI.

DeepMind's AlphaFold, which predicts a protein's 3D structure from its amino acid sequence, has made enormous contributions to drug development and disease research without requiring the vast data of large language models. It is the "bicycle of AI," and its creators were recognized with the 2024 Nobel Prize in Chemistry.

Hao emphasizes that the public needs to recognize they are "data contributors" and can pressure companies by refusing to provide data. Local resistance to data center construction and intellectual property lawsuits from artists and writers are important movements preventing AI companies from operating imperialistically.

"We need to dismantle the empire and develop alternatives. And we are witnessing an incredible flourishing of grassroots movements that are already exerting an enormous amount of pressure."

Currently 80% of Americans believe AI industry regulation is necessary, and anti-data-center protests are occurring worldwide. The key is not negating AI technology itself, but recognizing problems with its development and advancing AI in a fair and sustainable way — requiring a new development philosophy that puts the welfare and public good of all humanity first, not corporate profits.


Conclusion

The conversation with Karen Hao reminds us of the importance of balanced perspectives and critical reflection about AI technology's future. AI is undoubtedly a tool with enormous potential, but the current profit-driven, imperialist development approach can produce serious side effects — worsening social inequality, environmental destruction, and loss of humanity. We must not overlook the dark shadow hidden beneath AI's benefits. The time has come for all of civil society to participate — not just corporations or a handful of experts — to build AI for humanity.
