Why Existential Risks Are Really, Really Bad

Imagine what a catastrophe looks like to you. Running out of toilet paper mid-bathroom visit? Stubbing your toe? Making a bad investment? Losing a key election? Or even being forced to watch Love Island by your housemates?

Now think bigger. When we talk about Global Catastrophic Risks (GCRs), we mean world wars, vast forest fires, a single nuclear bomb attack (or two, as in the attacks on Japan in 1945), cyberattacks that take out continental energy systems, and more.

Now think even bigger. An existential risk (X-risk) is an event that kills or incapacitates so many people (perhaps 99 per cent or more) that humanity as it once was may never, ever recover. Types of X-risk include:

  • Climate Disaster – an event so destabilising that it obliterates agricultural supply chains, forces mass migration, and brings extreme floods, droughts, and other weather events. It might be caused by man-made climate change, by an asteroid striking Earth, or even by a nuclear winter.

  • Nuclear War – as we saw in 1945, the detonation of even one or two nuclear weapons is indiscriminate and causes a huge loss of life. But it wasn't quite a GCR, nor anywhere near an X-risk. A nuclear war between, say, the US and Russia - with a combined 11,405 warheads between them - may, however, bring about a nuclear winter from which there is no coming back.

  • Biological Risks – both naturally occurring and man-made pathogens could cause pandemics far deadlier than COVID-19. Some in the field of biotechnology are more fearful of artificial pathogens created as weapons, and the Government Office for Science report expresses similar worries. With advances in genetic engineering, it is becoming easier, and thus more likely, for a rogue terrorist group (or state) to seek such destructive power.

  • Artificial Intelligence – what’s AI going to do? Is ChatGPT going to take over the world? Not quite. But some fear that this increasingly powerful technology could one day be as smart as, if not smarter than, people. At that point, how do we ensure these systems are acting in our interests? How do we ensure they don’t turn us into paperclips?

These are all examples of X-risks. I know, they sound like they’re straight out of a science-fiction book… but they’re not. And they’re more likely to happen than we think.

Professor Toby Ord, a Senior Research Fellow in Philosophy at Oxford University whose work focuses on the big-picture questions facing humanity, puts the likelihood of an existential catastrophe this century at 1 in 6. Other academics in the field are less conservative.

Scary, right? If there were a 1-in-6 chance of dying on your car journey today, would you drive? An X-risk event would be so bad for the UK and for the rest of the world precisely because it is virtually irrecoverable. So one would hope we are doing a lot to prevent it. Not enough, I’d argue.
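
To put that 1-in-6 figure into perspective, here is a minimal back-of-the-envelope sketch (my own illustration, not a calculation from Ord’s work) of the constant yearly risk it would imply, assuming each year carries the same, independent chance of catastrophe:

```python
# Back-of-the-envelope illustration: what constant annual risk would add up
# to a 1-in-6 chance of existential catastrophe over a 100-year century?
# (Assumes each year's risk is equal and independent - a simplification.)

century_risk = 1 / 6  # Ord's headline estimate for this century

# Solve 1 - (1 - p)**100 = century_risk for the annual probability p.
annual_risk = 1 - (1 - century_risk) ** (1 / 100)

print(f"Implied annual risk: {annual_risk:.4%}")                   # roughly 0.18% per year
print(f"Check over 100 years: {1 - (1 - annual_risk)**100:.3f}")   # back to ~0.167
```

That works out, under those simplifying assumptions, to roughly a one-in-550 chance of the worst happening every single year.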

A charitable explanation for why the UK does not seem as prepared for X-risks as it could be has something to do with both 1) the inherent ‘short-termism’ we see in our governments, and 2) the relative unlikelihood of an X-risk occurring. It seems (politically) more rational to spend extra funding on services that will win votes at the next election. But, as the pioneering AI researcher Professor Stuart Russell put it:

“You can’t fetch the coffee if you’re dead.”

In other words, we cannot even think about making the world a better place through policy if we’re all… dead. An X-risk would not just destroy our economy; it might mean the end of the human race as we know it. This sounds bad to me!

What kind of policies might be helpful here? The Centre for Long Term Resilience (CLTR) has proposed, among other things, appointing a government Chief Risk Officer (CRO). Its ‘three lines of defence’ model would introduce less siloed risk management with clearer accountabilities across government. And on AI, our work at the Adam Smith Institute has rightly focussed on how AI might or might not steal our jobs.

But we should probably start thinking a little harder and a little longer about how we might avoid X-risks.