The foreseeable, yet largely unforeseen, risks of a tech crash
Technology has been an indispensable tool in our response to the Covid-19 pandemic and the consequent economic slump. Doctors have adopted telemedicine. School children have been taught in digital classrooms. Billions of us have communicated, shopped, worked and been entertained mostly online. But unless we are careful, our increased reliance on technology may magnify, rather than minimise, the next global crisis.
Just like the Covid-19 pandemic, that risk falls into the category of entirely foreseeable, yet largely unforeseen.
We know how this story could play out even if we have not yet read the script. Our ubiquitous use of technology has already outstripped our ability to manage it safely.
Unless we upgrade our security, governance and regulatory regimes, we will remain worryingly vulnerable to the crippling of critical infrastructure, either by malicious design or by default.
Call it a tech crash.
The events this week at FireEye signal the inherent risks. The United States cybersecurity company’s job is to protect its clients from hackers, but it was itself hacked.
FireEye pointed the finger of suspicion at a state-sponsored attacker “who primarily sought information related to certain government customers”.
Alarmingly, the hackers stole the tools used by FireEye’s “red team” which hacks into its clients’ systems to highlight their own vulnerabilities. The company is now scrambling to deploy countermeasures.
Cyberweapons have already become an accepted part of many states’ armouries given their cheapness, effectiveness and deniability.
Their use has been examined in a chilling new HBO documentary, The Perfect Weapon, based on a book by Mr David Sanger.
The film highlights how the US and Israel were the first to realise the power of cyberweapons, unleashing the Stuxnet malware against Iran to degrade its nuclear weapons programme in 2007.
“Stuxnet was the first time a major state used a powerful cyberweapon in an aggressive way,” Dr Amy Zegart, the co-director of the Centre for International Security and Co-operation at Stanford University, says in the film.
But that successful attack opened a Pandora’s box of troubles that may now be impossible to slam shut.
The Iranians, North Koreans, Russians and Chinese rapidly concluded that cyberwar was an asymmetrical game against a country as big, open and digitally exposed as the US.
In 2014 there was a damaging Iranian cyberattack on the casino empire of Sheldon Adelson, the American tycoon who had openly called for a nuclear bomb to be dropped on Iran.
North Korean hackers then inflicted serious damage on Sony Pictures in anger at the release of a film mocking the dictator Kim Jong Un.
They later released the WannaCry ransomware, exploiting flaws in Microsoft software to hit more than 150 countries.
Russians have launched cyberattacks against Ukraine, incapacitating electricity grids, subway systems and airports.
They also hacked the Democratic National Committee during the 2016 US presidential election campaign and released stolen emails to WikiLeaks.
Chinese hackers have cracked open the US Office of Personnel Management, accessing nearly 22 million files.
According to experts quoted in the film, they have also been attempting to hack into Covid-19 vaccine programmes and have been deliberately feeding an “infodemic” of disinformation about the pandemic in the US.
Given all this, it is little wonder that US defence officials have for years been warning about the dangers of a “cyber Pearl Harbor” that could take down critical infrastructure, even as they contemplate unleashing devastating cyberattacks of their own.
But it is not just state-on-state cyberconflict that is alarming. We should also worry about the internet’s systemic instability, given its governance is unnervingly flimsy.
Ingenious, short-term patches have stayed in place a remarkably long time while long-term fixes have never materialised.
Mr Satya Nadella, Microsoft’s chief executive, argues that societal trust in technology has been degrading because of growing concerns about cybersecurity, privacy, internet safety and the ethical use of artificial intelligence.
“Given the inevitability of tech playing a much more central role, we need to build more trust,” he said this week.
Corporate engineering teams should take more responsibility for developing systems to ensure security and reinforce trust, Mr Nadella said.
But we also need new regulations and institutions. Our governance structures remain stuck in the analogue age.
We either need to reimagine their scope or invent new ones. We could start with a World Data Organisation to agree protections for personal data and secure international data flows.
The digital equivalent of a US Food and Drug Administration might be charged with preapproving algorithms used in sensitive areas, such as healthcare and the judicial system.
And a Digital Geneva Convention could establish the limits of cyberwar.
Mr William Gibson, the science-fiction writer who coined the term cyberspace, told me earlier this year that we may be the last generation to draw any distinction between our offline and online worlds.
He is doubtless right. It is time we governed our physical and virtual worlds as one. FINANCIAL TIMES
ABOUT THE AUTHOR:
John Thornhill is innovation editor at the Financial Times.