Originally published on We Can Look Up - Substack, May 22, 2025.

Read Part 1 first.


III

Humanity Faces an Approaching Comet

“There is also a longer term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control.

[…]

We urgently need research on how to prevent these new beings from wanting to take control.

They are no longer science fiction.”

Geoffrey Hinton, “Godfather of AI” - during his Nobel Prize acceptance speech for his work on AI - Stockholm, December 2024

Is it just me, or does Prof. Hinton’s recent Nobel Prize acceptance speech sound suspiciously like the opening scene in a disaster movie?

Don’t Look Up

Starring Leonardo DiCaprio and released in 2021, Don’t Look Up told the story of astronomers who discovered a planet-killing comet on a trajectory to wipe out humanity.

It was a cartoon critique of a civilisation unable to coordinate around a risk to its very existence, or even to look up from its distractions and acknowledge the threat existed at all.

But Don’t Look Up was wrong.

Humanity can rise to the challenge, and we’ve done so many times before.

Our story need not end in catastrophe.

We’re capable of cooperating across ideological divides. We’ve stood at the dawn of lucrative and transformational technologies, and we’ve made wise choices time and again.

Smallpox Eradication

Smallpox devastated humanity for thousands of years. In the 20th century alone, the disease killed 300 million people, more than all wars combined.

Approximately 30% of those infected died, and survivors were left permanently scarred, often blind.

In 1966, during the height of the Cold War, an ambitious international campaign was launched to completely eradicate smallpox.

Americans and Soviets worked side-by-side, despite their nations standing at the brink of nuclear war.

The campaign spanned 73 countries, and the last naturally occurring case was identified 11 years later in Somalia.

In 1980, smallpox was declared the first human disease eradicated through international coordination.

Doing so has saved an estimated 150-200 million lives in the decades since - approximately four times the number of lives lost over the same period to all wars, terrorism, genocide and murder combined.

So our coordinated success in eradicating one disease saved four times as many lives as world peace would have.

Nuclear De-Escalation

During the Cold War, every political incentive pushed the USA and USSR down an escalating path towards nuclear apocalypse (with several extremely close calls along the way).

And yet, the global superpowers achieved the impossible, across one of the largest ideological divides in history.

Led by Reagan and Gorbachev, they backed away from mutually assured destruction.

Of course, people will point to our lack of coordinated action on climate change to date as evidence of our inability to work together. But this hasn’t always been the case.

The Ozone Layer

Remember when environmental activists were worried about the hole in the ozone layer?

In the 1970s, we realised chlorofluorocarbons (CFCs) - found in aerosol cans, refrigerators, and air-conditioners - were breaking down ozone molecules, destroying our planet’s protective shield, and exposing us to intense solar radiation.

Without intervention, UV radiation was on track to double by 2050, which would have resulted in a Mad Max wasteland scenario, with global food shortages and widespread public health crises.

We chose not to turn car bonnets into frying pans, and the rest of the world into the Australian desert.

The 1987 Montreal Protocol, an international treaty led by Reagan and Thatcher, among other world leaders, phased out CFCs and averted global catastrophe. The ozone layer now shows significant signs of recovery.

Is that what you call conservative conservation? It certainly has a ring to it!

But people will argue that in contrast to nuclear warheads and chlorofluorocarbons, AI progress promises irresistible financial rewards.

And yet this wouldn’t be the first time we’ve recognised a lucrative technological threshold and chose not to cross it.

Human Cloning

In 1996, researchers in Scotland successfully cloned the first mammal, ‘Dolly the Sheep’.

Scientists, policymakers, and ethicists across the globe immediately engaged in debate, and nation-states moved to ban human cloning. The UN General Assembly called for its prohibition in 2005.

Human cloning threatened to bring babies into the world with inhumane deformities, a high mortality rate (95% of initial cloning experiments led to premature death), and it posed ethical challenges around human identity and exploitation.

Cloning promised huge potential benefits (organ cloning could extend human health and lifespan), and yet, twenty-nine years later, there remain no verified cases of human reproductive cloning.

Gene Editing

But what about our politically polarised modern era? Isn’t the prospect of cooperation hopeless in an age of social media and acerbic public discourse?

And what about China? “If we don’t accelerate towards advanced artificial intelligence, then China will get there first”; or so they say.

CRISPR

In 2012, the revolutionary gene-editing tool CRISPR-Cas9 was developed, enabling precise editing of the human genetic code for the first time.

The world was shocked when Chinese scientist He Jiankui announced, in 2018, the birth of twin girls whose genomes had been edited using CRISPR.

This was the first known case of human germline (i.e. heritable and therefore potentially permanent) genetic modification.

It heralded a new era for humanity. Homo sapiens’ gene pool has now been intentionally self-edited, and these changes may persist as long as Sapiens does.

The “Save As” button was clicked after editing our collective genetic code. (In this case, to confer resistance to HIV - certainly a worthy goal.)

What followed was widespread condemnation from the scientific community of the unilateral nature of He’s research, and of the significant boundary it crossed in our species’ history.

Following global pressure, China shut down He’s lab and imprisoned him. Countries tightened regulations on human genetic editing, and there have been no further (publicly-known) examples of human germline editing since 2019.

China has recently demonstrated willingness to coordinate with the international community to avoid the perils of unregulated emerging technology.

Xi Jinping and the CCP leadership are technically astute, and have demonstrated impressive strategic planning over long time-horizons.


I believe in the power of the people of the world to demand action.

I believe in the power of our leaders to reshape the world.

And I believe humanity can coordinate against incentives for our collective benefit. After all, this was the crucial quality that originally gave Homo sapiens our supremacy over this planet.


IV

Our Greatest Opportunity. Our Greatest Hazard.

Right now there are a small handful of people guiding the course and destiny of the most important technology humanity will ever create.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Statement on AI Risk - signed in May 2023 by leaders from all major frontier AGI labs, including Sam Altman, Yoshua Bengio, Demis Hassabis, Daniela Amodei, Dario Amodei, Geoffrey Hinton (Nobel Prize-winning ‘Godfather of AI’), and Bill Gates (among many others).

“It’s clear by now that AI will affect us all. It makes sense that much of the attention - both in government and the private sector - is focused on extreme risks and national security threats. We don’t want anyone with an internet connection to be able to create a new strain of smallpox, access nuclear codes, or attack our critical infrastructure.”

Barack Obama - 2023

“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”

Sam Altman - September 2024

Artificial Intelligence is Humanity’s Final Test.

Failing this test could mean our collective death and humanity’s extinction.

Succeeding at this test could bring God-like beings into this world, capable of solving any other problem we face, if they are inclined to help us.

After all, our resourcefulness, aligned with our human values, has given us the solutions to every previous problem humanity has solved.

“AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that”

Elon Musk - 2017, to a meeting of Democratic and Republican US governors

To our leaders - our politicians, business leaders, national security community, academics and media elite - I implore you.

Prevent our extinction.

Lead us to survive AI.

Guide us through the most dangerous phase of our tumultuous adolescence, into humanity’s unknown but potentially glorious adulthood.

If you do, you won’t just go down in history, you will be the most important people to ever live.

Don’t Look Up was wrong.

We can look up.

We must look up.

And the time is now.


Stay tuned for further essays discussing how I believe we can look up.

For an easily actionable step now, consider pre-ordering If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, written by two of the most prominent experts in this field, Nate Soares and Eliezer Yudkowsky. I’ve met and talked with them both, and while I haven’t always agreed with all their opinions, I am very confident they are highly intelligent, deeply careful thinkers who are sharing their honestly held beliefs about the risks of AI.

The book is set for release in September, and pre-ordering it will signal-boost the message and could earn it a place on the NYT bestseller list, giving this perspective a desperately needed wider audience.


Read Part 1.