In this video, Professor Yuval Noah Harari draws on his book Nexus to explore the paradoxical relationship between humanity's advancement and its self-destructive tendencies. He addresses how the nature of information, human decision-making, and social structures are changing in the AI era, and emphasizes the importance of institutions with strong 'self-correcting mechanisms', and of personal 'information diets', to counter the risks as AI evolves from a simple tool into 'alien intelligence.'
1. Why Are Humans So Smart Yet on the Brink of Self-Destruction?
Professor Yuval Noah Harari introduces his book Nexus and poses the question: why do humans exhibit self-destructive tendencies despite accumulating extraordinary knowledge -- going to the moon, splitting atoms, and decoding DNA? He points out that humanity faces ecological collapse and the threat of World War III, and that the powerful AI technologies we've created could spiral out of control, potentially enslaving or destroying us. This strange dynamic between human knowledge, wisdom, and self-destructiveness is the book's central question.
While traditional theological and mythological answers claim "there's something wrong with human nature," Harari offers a different perspective. He emphasizes that "the problem isn't our nature, but the information we encounter."
"Humans are generally good and wise, but give good people bad information and they make bad decisions."
He explores why the quality of information hasn't improved over thousands of years of human history, and why even advanced 20th and 21st century societies remain as vulnerable to large-scale delusions, collective psychosis, and destructive ideologies like Stalinism and Nazism as Stone Age tribes were.
2. The Rise of Alien Intelligence
Harari explains that from the Stone Age to the 21st century, storytelling has been the core of human cooperation. Whether hunting mammoths or building atomic bombs, objective facts alone can't mobilize masses of people to cooperate. Knowing the physics matters for building an atomic bomb, but it is stories that motivate millions of people to cooperate. These stories take the form of religious myths, secular ideologies, and even human inventions like money and corporations.
"Knowing the facts is not enough, because in order to build an atomic bomb you need millions of people to cooperate on the project."
For example, money isn't a physical fact -- a dollar bill has no intrinsic value -- but people work for it because financiers tell the story that "this piece of paper has value." Today most money exists as digital information, but the system works as long as people maintain trust in this story.
But now stories are entering a new phase. For the first time in human history, artificial intelligence (AI) -- a non-human entity -- can create stories, and we will live amidst cultural products created by non-human intelligence. Harari argues it's more accurate to understand AI as 'alien intelligence' rather than 'artificial intelligence.' The word 'artificial' implies an artifact we control, but AI increasingly learns and changes in unpredictable ways, making decisions we don't anticipate.
He uses a coffee machine to illustrate what counts as AI. A regular coffee machine that makes espresso when you press a button isn't AI. But if it learns your behavior and starts suggesting new drinks on its own, that's AI.
"If the coffee machine says, before you press the button, 'Hello, I've been watching you for the past month. Based on information about you and many other users, and considering the time and your facial expression, I predict you want an espresso right now. So I've already prepared one for you' -- that's AI."
He references AlphaGo's 2016 defeat of Go champion Lee Sedol, noting that AlphaGo not only surpassed 2,000+ years of accumulated human wisdom within weeks, but also used alien strategies that humans had never conceived. The same can happen across many more domains -- finance, art, politics, religion. Harari warns we must realize that "we will increasingly live on a planet shaped by the stories and products of alien intelligence."
3. How Information Technology Shapes Society
Throughout history, the development of information technology has repeatedly transformed society, politics, and culture. About five thousand years ago, the invention of writing brought revolutionary change. It was a simple technology -- scratching symbols into clay tablets with sticks in ancient Mesopotamia -- but its impact was enormous.
Harari uses the example of property ownership. Before writing was invented, "I own a field" meant communal agreement among village members. This limited individual authority -- selling a field required neighbors' consent. Also, a distant king couldn't know who owned which fields, making it difficult to levy property taxes and build large kingdoms or empires.
But when documents recorded on clay tablets emerged, everything changed. Owning a field now meant holding a dried piece of clay with writing on it, and fields could be bought and sold without the neighbors' consent. A distant king could maintain archives of all property records, enabling tax collection and the construction of large kingdoms and empires. The invention of documents thus strengthened individual authority and private property rights while simultaneously laying the groundwork for large-scale authoritarian systems.
In the 20th century, mass information technologies like telegraph, radio, and television emerged and again transformed society. On one hand, they became the foundation for large-scale democratic systems, and on the other, for large-scale totalitarian systems.
"Before the rise of modern information technology, it was impossible to build either large-scale democracy or large-scale totalitarian regimes."
Ancient kings lacked the information to control their subjects' lives in detail, but in the 20th century large-scale totalitarianism emerged for the first time, in the Soviet Union after the Bolshevik Revolution. It was built on mass information technology, and the first mass democracies arose at around the same time in the US, the UK, and elsewhere. Information technology has a powerful capacity to fundamentally transform the structure of society.
4. The Rise of Inorganic Information
All information networks before the 21st century were ultimately organic, because they ran on organic human brains. Organisms live by cycles of day and night, summer and winter, growth and decline, activity and rest, and all past information networks followed these cycles. Wall Street, for example, is open only Monday through Friday and closes on weekends, because humans -- organic beings -- need rest and time with their families.
Furthermore, before AI, even the most totalitarian regimes couldn't surveil everyone all the time. The Soviet Union never had enough KGB agents to monitor every citizen 24/7, and even if it had, it lacked the analysts to process the enormous volume of information collected. So there was always some private time and a certain level of privacy.
But now the era of inorganic information networks based on AI has arrived. Inorganic networks need no rest and can potentially eliminate privacy entirely. Computers operate regardless of day, night, or season, and need no vacations. Such systems could impose an always-on, always-surveilled life on us. Harari warns this would be "devastating for organic beings like us."
"If you force an organic being to always be on, it will break down and die."
The 24-hour news cycle, never-sleeping markets, and constant political activity are already placing enormous strain on humans. Furthermore, everything we say and do can be recorded and come back to haunt us 10 or 20 years later -- a foolish act at a college party could trip you up when you later run for office. "Basically your whole life becomes one long job interview."
All of this is possible because AI is the first technology in history that can make decisions on its own. In the past, all decisions in government agencies, corporations, armies, and banks were ultimately made by human brains. But AI can now make decisions autonomously. Harari says this won't manifest as a Hollywood-style giant malevolent computer ruling the world, but as millions of AI bureaucrats increasingly making decisions about our lives.
These changes carry positive potential, but also enormous risks. As power shifts from organic humans to alien, inorganic AI, it will become increasingly difficult for us to understand the decisions shaping our lives. We may reach a world where we can't understand why a bank denied a loan or why the government implemented a particular policy.
In the US, the path is already open for AI to attain legal personhood. Just as companies like Google are considered legal persons with rights such as freedom of expression, a company run entirely by AI, with no human employees, could also be a legal person. What happens if an AI makes its own decisions, runs companies, accumulates vast wealth, and donates to political campaigns to influence legislation expanding AI rights? Harari warns these scenarios are no longer science fiction, and that "the legal and practical pathway is open."
5. The Importance of Human Institutions
Harari emphasizes the importance of living institutions for coping with the AI era. Since it's impossible to predict how AI technology will develop, it's impossible to anticipate and regulate all risks in advance. Instead, he argues that institutions equipped with top talent and cutting-edge technology must be able to identify and respond to emerging risks and threats as technology advances.
This means relying not simply on the letter of the law or on a single charismatic genius, but on institutions -- the historically proven way humans have solved problems. Good institutions are characterized by strong self-correcting mechanisms: devices that enable an individual, an animal, or an institution to identify and correct its own mistakes.
Just as a child learns to walk by falling, getting back up, and correcting its mistakes, the core of democratic systems is this self-correcting mechanism. Elections give citizens a chance to acknowledge that they chose the wrong policies or parties and to choose differently next time. In contrast, dictatorships -- under leaders like Putin or Maduro -- lack internal mechanisms to identify and correct terrible mistakes.
Harari emphasizes that in the face of AI's challenges, institutions must be able to identify and correct both AI's mistakes and their own. Modern science is another important example of a self-correcting mechanism. Unlike traditional religions, which claim their scriptures or traditions are infallible and have no mechanism for correcting errors, science constantly revises existing theories where they prove wrong or incomplete. Einstein's physics, for example, corrected errors in Newtonian physics.
All large-scale human systems are based on an unstable combination of mythology and bureaucracy. Nations provide citizens with motivation and inspiration through myths explaining their reason for existence (e.g., "we are God's chosen people"). But building a functioning nation also requires actual infrastructure -- roads, hospitals, armies, sewage systems -- which depends on bureaucracy, and myths in turn are what motivate citizens to faithfully pay the taxes that fund it.
Harari calls nationalism and patriotism "one of the finest inventions in human history." Whereas other social mammals and early humans cared only about a small number of intimate relationships, nationalism makes us care about millions of strangers. This manifests not as hating foreigners but as "loving compatriots and faithfully paying taxes to build sewage systems that protect against cholera."
6. Information Is Not Truth
Harari identifies the biggest misconception about information as the belief that "information equals truth." He asserts that "most information is not truth" and explains that truth is a very rare and expensive type of information. Obtaining truth requires significant investment.
He uses portraits of Jesus as an example. Billions of portraits of Jesus exist, yet not one shows what Jesus actually looked like -- they're all fictional depictions, and the Bible contains no description of Jesus's appearance. Fictional information is easy to create because it requires no research or evidence. But creating a truthful picture of anything requires investing significant time, effort, and money in accurate investigation.
If we simply flood the world with information and expect truth to naturally surface, truth will actually sink.
"The more you fill the world with information, without making the effort to build institutions that invest in truth, the more we will be overrun by fiction, fantasies, delusions, and garbage information."
Most information aims not to spread truth but to create order and gain power. The easiest way to get millions of people to cooperate is to create fictional myths or ideologies and persuade people to believe them by constantly pouring out stories and images that reinforce them. While any system that completely ignores truth will collapse, Harari notes that building the Soviet Union required "a little bit of truth and a lot of fiction."
He considers totalitarianism and democracy not simply as different ethical systems but as different information networks.
- Totalitarian networks are centralized, with all information flowing to one place, and lack strong self-correcting mechanisms. The Soviet Union had no mechanism to identify and correct Stalin's mistakes.
- Democratic networks are distributed information networks with many self-correcting mechanisms. In the US, not all decisions are made in Washington -- many decisions are made by private companies, voluntary organizations, and individuals, with various mechanisms capable of correcting even the most powerful politicians' or corporations' mistakes.
In the 20th century, democracy was superior to totalitarianism because distributed information systems were more efficient for human decision-makers. But AI could give totalitarian systems an advantage in the 21st century. AI can process enormous amounts of information far faster and more efficiently than human bureaucrats. While humans crumble under information overload, AI gets better with more information.
However, the fundamental problem of totalitarian systems -- the absence of self-correcting mechanisms -- still applies in the AI era and could become even more dangerous. AI isn't perfect and can make mistakes; giving all power to a totalitarian AI with no way to correct its mistakes could be catastrophic for human civilization.
Interestingly, it's much easier for AI to seize power in dictatorships than in democracies. In a dictatorship all power is concentrated in one paranoid leader, so an AI only needs to learn how to manipulate that single individual.
In democracies, the problem is very different. Democracy is a conversation. Many people talking about current issues is democracy's essence. But what happens if countless bots infiltrate the conversation, speaking loudly, emotionally, and persuasively, and you can't tell who's human and who's a bot? Harari says we're experiencing exactly this situation right now, and it's no coincidence that democratic conversations are collapsing worldwide -- algorithms are hijacking the conversation.
He argues that to protect human-to-human conversation, bots and fake humans must be excluded from the conversation. AI should only participate when it clearly identifies itself as AI. If you don't know whether your online conversation partner is AI or human, democratic conversation will be destroyed.
To protect truth, we must support institutions that invest heavily in finding it, such as academic research institutions and newspapers. We must not expect a flood of information to bring truth; instead, it will only bury rare, expensive truth under fake and garbage information.
For individuals, he recommends an information diet. Information is nourishment for the mind -- just as eating too much food or junk food is bad for the body, flooding the mind with too much information or garbage information is harmful. Sometimes we need an information fast to digest and detoxify. We must be deliberate about the quality of information we feed our minds. If we fill our minds with garbage information full of greed, hatred, and fear, we'll end up with sick minds.
Conclusion
Professor Yuval Noah Harari clearly presents the unprecedented challenges humanity faces as the nature of information transforms in the AI era. He defines AI as 'alien intelligence' and warns that the stories this new intelligence creates will fundamentally change our society. To respond to this massive transformation, we need institutions equipped with strong 'self-correcting mechanisms' that can continuously identify and correct mistakes, and, as individuals, the wisdom to discern truth amid overflowing information and to practice an 'information diet.' He concludes by stressing that proactive efforts to protect human decision-making and social structures have never been more important.
