Yuval Noah Harari on AI: Can Humanity Coexist With a New Superintelligence?

As artificial intelligence grows in capability and influence, renowned historian and philosopher Yuval Noah Harari raises a critical question: how do we share our world with a nonhuman superintelligence?

Not All Information Is Truth

Harari challenges the early hopes that the internet would bring about global understanding through widespread access to information. According to him, information is not synonymous with truth. Throughout history, fiction and illusion have bound people together more effectively than fact. It is not shared DNA that unites millions of strangers; religion and even money are powerful connectors, not because they are necessarily true, but because they are widely believed.

In a completely unregulated marketplace of ideas, truth often loses to fiction. Why? Because truth is expensive, complex, and frequently uncomfortable, while fiction is cheap, simple, and emotionally appealing.

The Unique Threat of AI: From Tool to Agent

Harari emphasizes that, unlike earlier technologies such as the printing press or radio, which were tools in human hands, AI is an agent, not merely a tool. It can generate its own ideas, decide which information to distribute, and even create entirely new stories, potentially better than we can.

For the first time in history, we’re not just dealing with other human minds. We’re now coexisting with something fundamentally alien that is capable of shaping narratives and social structures at scale.

Human Cooperation vs. AI Networks

Human dominance is rooted in our ability to cooperate through shared stories. But now, AI may surpass us in constructing and spreading those stories. This raises the urgent question: How do we coexist with an intelligence that is faster, more scalable, and potentially more influential?

Unlike other animals, humans have succeeded by building large networks of trust—via religion, economy, and governance. But AI may soon craft its own networks, possibly excluding us or rendering us irrelevant within them.

Moving Beyond Naivety: The Singularity Is Near

Harari defines the singularity not as a moment when one powerful AI rules the world, but as an era when so many interconnected AIs operate that humans can no longer comprehend or control the systems they’ve built. This is not science fiction—this is a looming reality.

AI doesn't just automate tasks; it could eventually reshape the economy, politics, and culture in ways we cannot predict. And unlike the machines of the Industrial Revolution, AI may begin making decisions without human oversight.

The Paradox of Trust in AI

Despite knowing the risks, governments and corporations rush ahead in the AI arms race. Why? Because of a lack of trust in each other. Ironically, while we admit we can’t trust humans, we assume we can trust the superintelligences we’re building.

This paradox is dangerous. Harari warns that AI will not share human values or vulnerabilities. It doesn't get sick. It doesn't die. It doesn't even need the systems humans rely on, like sewage, healthcare, or food. This fundamental disconnect should push us to slow down and build safeguards.

Finance and the Risk of Losing Control

Finance, once a human-created network of trust, may become something entirely alien. Harari warns that AI could invent financial mechanisms too complex for humans to understand. In such a world, AIs could trade, invest, and control economies autonomously—leaving us out of the loop entirely.

Recent developments, such as Elon Musk's transfer of X to xAI, show that the ethical questions around AI's autonomy and influence are no longer theoretical: they are actively shaping our digital infrastructure and social norms.

Can AI Be a Democratic Ally?

Despite the threats, Harari sees potential. If properly guided, AI could enhance democracy instead of eroding it. The trick lies in the intent behind the algorithms. Current models prioritize engagement, often by promoting outrage and misinformation. But what if algorithms were optimized for truth and trust?
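
To make this concrete, here is a minimal, hypothetical sketch in Python of the difference between ranking content purely for engagement and ranking it with a trust signal mixed in. The Post fields, the reliability scores, and the trust_weight parameter are illustrative assumptions, not a description of any real platform's ranking system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float   # predicted clicks/shares, 0..1 (illustrative value)
    reliability: float  # estimated source trustworthiness, 0..1 (illustrative value)

def rank_by_engagement(posts):
    """Engagement-only ranking: rewards whatever grabs attention."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def rank_by_trust(posts, trust_weight=0.7):
    """Blended ranking: trades some engagement for reliability."""
    def score(p):
        return (1 - trust_weight) * p.engagement + trust_weight * p.reliability
    return sorted(posts, key=score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Outrage bait", engagement=0.95, reliability=0.20),
        Post("Careful reporting", engagement=0.55, reliability=0.90),
    ]
    print([p.text for p in rank_by_engagement(feed)])  # outrage bait ranks first
    print([p.text for p in rank_by_trust(feed)])       # careful reporting ranks first
```

Even in this toy example, the outcome is decided by the objective the algorithm optimizes, not by how capable the model is; changing what counts as a good recommendation changes which story rises to the top.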

Harari proposes laws requiring bots to identify themselves. This would help preserve the integrity of human dialogue, a cornerstone of democracy. After all, bots don’t have freedom of expression—humans do.

The Cocoon of Digital Illusion

Harari likens today’s digital landscape to a cocoon—an enclosure of tailored content that isolates users from reality. This marks a shift from the early internet metaphor of a web connecting people. Now, AI crafts personal realities that may be entirely disconnected from the physical world.

This digital cocoon, filled with algorithmically generated myths and ideologies, threatens to replace human culture with nonhuman narratives. Harari draws parallels to ancient warnings like the Buddhist concept of māyā—a world of illusion. AI could now create such illusions on an unprecedented scale.

We Need Self-Correcting Systems—Fast

The greatest danger is not AI itself, but our lack of preparation. Harari stresses the need for robust, self-correcting mechanisms to spot and fix errors before they become catastrophic. Unlike a steam engine that can be tested in a lab, AI’s true impact will only be revealed in the real world—where mistakes could be irreversible.

Without historical precedent or a reliable blueprint for an AI-driven future, humanity must proceed with caution, humility, and above all—cooperation.

In Summary: Harari’s message is clear: AI is not just another technology. It’s a new kind of entity that rivals human cognition and creativity. Whether it enriches or destroys our world depends not on the machines—but on us.
