Yuval Harari on AI: Navigating the Rise of an Alien Intelligence

The Human Paradox and the Rise of ‘Alien Intelligence’

Historian Yuval Noah Harari addresses a central paradox: despite humanity’s vast intelligence, we are creating technologies like Artificial Intelligence (AI) that could lead to our own destruction. He argues that AI should be understood not as “Artificial Intelligence” but as “Alien Intelligence,” because it learns, evolves, and makes decisions in ways that are fundamentally non-human and unpredictable. This new form of intelligence is poised to reshape our world in profound ways whose outcomes are not predetermined, much as mass media enabled both democracy and totalitarianism in the 20th century.

From Organic Cycles to an ‘Always-On’ World

Harari contrasts traditional human-based information networks with emerging AI networks. Human systems are “organic,” defined by natural cycles of activity and rest, which historically allowed for privacy and recovery. AI-based networks, by contrast, are “inorganic”: always on and never needing to rest. This shift pushes humans into a state of constant surveillance and activity, where every action can be recorded and judged later, effectively turning life into one long job interview. Harari warns that this relentless state is fundamentally destructive to our organic nature.

The Rise of AI Bureaucrats and the Information Crisis

AI is the first technology in history capable of making its own decisions, leading to a massive power shift from human bureaucrats to AI systems in banks, governments, and armies. Harari notes that a legal path already exists for an AI to gain the status of a legal “person” by operating through a corporation, potentially accumulating vast wealth and influence. This is coupled with a profound information crisis. The common misconception is that more information leads to more truth; in reality, truth is costly to produce, while misinformation is cheap to generate. By flooding the world with content, AI threatens to bury truth under an avalanche of fiction.

A Path Forward: New Institutions and Personal Responsibility

To navigate this future, Harari dismisses rigid, pre-emptive regulation. Instead, he advocates for creating new, adaptable institutions with strong “self-correcting mechanisms” to monitor and react to AI-related threats. To protect public discourse, he suggests that AIs must always identify themselves. On a personal level, he advises individuals to adopt an “information diet”—consciously limiting the intake of junk information and taking “information fasts” to allow the mind to process, digest, and detoxify from the constant digital noise.

Mentoring question

Considering Harari’s advice on an ‘information diet,’ what is one specific change you can make this week to improve the quality of information you consume and give your mind more time to ‘digest and detoxify’?

Source: https://youtube.com/watch?v=K1OvbwY6GPM&si=ZheqsQ9z1guPm3Kp
