Technological developments can create new types of global risk, including risks from climate change, geo-engineering, and emerging biotechnology. These technologies have enormous potential to make people better off, but the benefits of innovation must be balanced against the risks it creates. Risk reduction is a global public good, which we should expect to be undersupplied. The problem is especially urgent where one group can unilaterally develop and deploy a technology with global repercussions. The international community needs to keep learning how to manage unilateral global risks like those from some forms of biotechnology and geo-engineering.
Global interconnectedness lets new discoveries help more people, faster, than ever before. However, this global reach creates a new type of global risk. Historically, disasters were mostly geographically confined. An earthquake is a tragedy, but it is a local tragedy. Regular trade across the Atlantic made a global pandemic possible for the first time in the 16th century, but the probability remained small. The efficiency of modern technologies, like cheap air travel, amplifies those risks. Climate change is already a truly global risk - burning fossil fuels in one city changes the climate worldwide. In the coming decades we may face new global technological risks as, for example, geo-engineering and biotechnology become more powerful. Innovation in these fields is very promising, but it needs to be approached carefully.
Global technological risks get less attention than they deserve. Countries are not properly rewarded for doing things that benefit everyone else as well. The risks often fall on the poor or on future generations, who do not get a say in decisions. The risks also seem unlikely because many are unprecedented, and it is politically hard to spend money on something that has never happened before.
Risks from new technologies come in two main types. The first, of which climate change is an example, arises when almost everyone must act together for a collective reward. The second, even more difficult to manage, is a ‘wildfire’ scenario - one where it only takes one person or group to make a move, to drop the match, as it were, for everyone to suffer the consequences. Below, I describe some of the risks, explain why they are systematically neglected, and suggest some ways to address ‘wildfire’ risks.
What are the risks?
Climate change, geo-engineering, and emerging biotechnology are three important potential sources of global risk. Climate change is the best understood of them. Unless we work out how to remove carbon dioxide from the atmosphere, the world needs to cut emissions significantly within the next few decades. If we do not, there will be long-term losses.1 Of course, significantly cutting emissions makes energy more expensive, which could affect all parts of the economy. One of the most worrying ways climate change could hurt us is if panicked political leaders make a rushed and ultimately botched attempt at geo-engineering.
Geo-engineering was for many years closer to science fiction than reality. However, it is gradually becoming practicable. In 2008, the Royal Society devoted an issue of its journal to the mechanics and ethics of global geo-engineering to avert climate change.2 China spends hundreds of millions of dollars on weather manipulation.3 In principle, geo-engineering gives us ways to manage climate change without a radical restructuring of our energy economy.
A leading proposal is to increase the concentration of sulfates in the upper atmosphere, which would reduce the amount of the sun's heat energy entering the atmosphere. The process occurs naturally: volcanoes sometimes increase sulfate emissions substantially, which means scientists understand it better than most geo-engineering alternatives.
Although geo-engineering has potential, there are big risks. Knowing that releasing sulfates is an option could make countries think cutting carbon dioxide emissions is less important, which could lock us into using more sulfates. Moreover, there could be effects we do not understand. Scientists think that offsetting approximately a doubling of CO2 concentrations in the atmosphere would require increasing sulfate emissions to 15-30 times their natural rate for extended periods - something no one has ever done.4 Volcanoes rarely increase global emissions of sulfates to more than twice their natural rate. A sustained increase on that scale could cause irreversible shifts in weather patterns or ecosystems, as well as harming human health directly. Although sulfate use looks safer than many other forms of geo-engineering, researchers remain hesitant to recommend it.
Geo-engineering does not need global coordination. Scientists estimate that initial sulfate programmes would cost a few billion dollars.5 That is expensive, but affordable for many countries, and hundreds of individuals are wealthy enough to fund a major programme single-handedly. If a single actor went ahead with such geo-engineering, the effects would be almost instantly global. Other, riskier forms of geo-engineering can be even cheaper.
Emerging biotechnology is possibly even more promising but creates similar risks. Scientists in the Netherlands and the US stirred up controversy in 2011 by modifying the highly pathogenic H5N1 virus to be transmissible between mammals.6 They did this to learn about transmission mechanisms in viruses. It helped us understand how few mutations were needed to allow transmission between mammals and showed that there was more than one way these mutations could happen. This is valuable information for epidemiologists, influenza researchers, and organisations preparing for pandemics.
However, many thought that publishing the details of these experiments might make it too easy for rogue agents to make a deadly pathogen in the future. (In fact, it was later shown that the changes made were specific to the strain of virus used by that lab.) Much like geo-engineering, it only takes one research lab and one journal publishing their results for the whole world to be able to use the information, even if everyone else abstains from publishing. Similarly, only one group needs to synthesise such a virus to seed a global pandemic. With current technology, it is unlikely that anyone outside a major national laboratory would be able to do this.
That might change in the coming decades. Biotech firms already let people without specialised facilities order custom DNA. It is possible to buy custom sections of smallpox DNA, although firms are better at spotting such orders than they once were.7 In the future, a small team of PhD-level workers might be able to assemble those sections without access to a major laboratory.
Risks grow as the underlying technologies become more powerful and more widely distributed. Individuals will be increasingly able to cause global calamities, in the same way that someone with a gun can kill more people than someone with a knife. As a result, we need to work out how to manage global risk before the stakes get too high.
Global risks are neglected
Climate change, geo-engineering and biosecurity risks are not completely new, and a large amount of work already goes into preventing calamities and preparing to mitigate them. Despite this, there is probably less effort going into managing these risks than there should be.8
First, reducing these risks is a global public good - everyone benefits, and no one can be excluded from the benefits even if they do not pay for risk reduction.9 Public goods are often undersupplied when there is no collective body that acts on behalf of the whole affected population, and for global risks there is in most cases no such body. The United Nations, for example, is not powerful enough to play that role.
Second, managing the risks costs a small group a lot and helps a much larger group a little. Fossil fuel companies and heavy industry care very strongly about emission regulation. However, most people do not care very strongly, and future generations, who are also affected, do not get a say. The loud voices of a few are generally powerful relative to the diffuse concerns of many.
Third, many of these risks are unprecedented, and people are better at reacting to things that have already happened at least once. If no calamity happens, money spent on prevention and mitigation looks like it was wasted, even though it might have been the reason no calamity happened. If the calamity does happen, the prevention efforts look unsuccessful and therefore wasted - though the mitigation preparations look clever in retrospect. It is a hard situation to put decision-makers in, and we should expect it to lead to under-investment in mitigation and especially in prevention.
The commons and the wildfire
As a species, we have experience managing the ‘tragedy of the commons’ - in which a shared public resource is good for everybody so long as it is not overused. In a ‘commons’ situation, it is not necessary that absolutely everyone plays along, just that most people (and the important ones) do. Climate change is a challenge of this type - it would not matter if New Zealand, say, refused to curb its carbon dioxide emissions so long as everyone else did.
In a ‘commons’ situation, one group can ‘defect’ by refusing to play along with the international community. While everyone else cuts emissions, the defector keeps burning coal. This saves the defector money but makes things harder for everyone else, who must cut their emissions further. Reducing the number of defectors to an acceptable level is hard.
The situation that geo-engineering and emerging biotechnologies create, however, is much harder to manage. It only takes one ‘defector’ to start a wildfire - only one person needs to drop a match. Similarly, only one country needs to decide that the benefits of stratospheric sulfates outweigh the costs, and its choice will affect everyone. This situation is harder to manage, from an international perspective, because the only acceptable number of defectors is zero.
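To see why zero tolerance is so much harder, it helps to compare how the chance of disaster scales across many independent actors. The short sketch below is purely illustrative - the per-actor defection probability, the number of actors, and the tolerance threshold are made-up parameters rather than figures from this essay - but it shows how a ‘wildfire’ fails far more often than a ‘commons’ that can absorb a few defectors.

```python
from math import comb

def prob_at_most_k_defect(n, p, k):
    """Probability that no more than k of n independent actors defect,
    when each defects with probability p (a binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

p = 0.05   # illustrative chance that any one actor defects
n = 50     # illustrative number of independent actors

# 'Commons': the system holds as long as five or fewer actors defect.
print(1 - prob_at_most_k_defect(n, p, 5))   # ~0.04 chance of disaster

# 'Wildfire': one defector is enough, so the system holds only if nobody defects.
print(1 - prob_at_most_k_defect(n, p, 0))   # ~0.92 chance of disaster
```

With the same actors behaving in the same way, lowering the tolerable number of defectors from a handful to zero turns an occasional failure into a near-certain one.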
Other things complicate matters further. Different groups will find risky new technologies more or less valuable. A country that faces yearly hurricanes might want to stop global warming more urgently than any other country, and it might be willing to pull the trigger on geo-engineering, for its own sake, even if the cost for humanity as a whole is high.
This is made worse by an effect similar to the ‘winner’s curse’. In an auction, the winner has probably paid too much - everyone else decided the object was worth less than the winning bid. The winner may simply have been wrong about the value, rather than being the one who benefits most from it. In this case, a country could be wrong about the risks and benefits of an unprecedented technology. The groups that overestimate the benefits and underestimate the risks are likely to start using the technology first.10
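A minimal simulation makes the point concrete. In the sketch below, the true net value of deployment, the spread of opinion, and the number of groups are all made-up parameters, not figures from this essay. The true value is set to be negative, each group forms its own noisy estimate, and any group whose estimate looks positive goes ahead - so the group that acts is systematically the one that has most over-estimated the benefits.

```python
import random

random.seed(0)

TRUE_VALUE = -1.0   # hypothetical true net value of deployment (negative: net harm)
NOISE = 2.0         # how far a group's private estimate can stray from the truth
N_GROUPS = 20       # number of groups independently weighing deployment
TRIALS = 100_000

deployments = 0
deployer_estimates = 0.0

for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE) for _ in range(N_GROUPS)]
    optimists = [e for e in estimates if e > 0]   # groups that think deployment is worthwhile
    if optimists:
        deployments += 1
        deployer_estimates += max(optimists)      # the most optimistic group moves first

print("Share of trials in which someone deploys:", deployments / TRIALS)
print("Average estimate held by the deploying group:", deployer_estimates / deployments)
```

Even though deployment is harmful by construction, some group almost always convinces itself that it is worthwhile, and the group that moves first holds an estimate far above the true value - exactly the dynamic described above.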
How can we manage the wildfire scenario? We can restructure the incentives by breaking the path towards the development of a risky technology into smaller steps. One type of treaty draws a bright red line - perhaps it prohibits unilaterally engaging in geo-engineering. Such a line risks being brittle, because the first defection starts a wildfire. If instead we define intermediate steps that progressively lead towards the use of a globally risky technology, we create much more flexibility. Groups could be encouraged to progress in certain directions and not others, and as they come closer to being able to engage in unilateral action, they could face increased scrutiny.
Breaking the process into steps helps because groups can then defect along the way without causing a catastrophe. This makes the situation more like a ‘commons’, in which we can tolerate a few defectors. It also forces defectors to be deliberate and fairly public in their choices, which makes it easier for the international community to step in and resolve whatever concerns are driving a group towards the technology. There still needs to be a vigorous response to small defections - otherwise countries can ratchet up towards a wildfire through a series of small steps.
Breaking the steps down works where there are few groups to monitor and where the programmes of research and development are big enough to be hard to hide. A big risk, therefore, lies in globally risky technology that becomes so cheap and easy to use that even very small groups can use it. At the moment, humanity does not have a way to handle that kind of risk. Until we find a solution to that problem, it may be best to focus on research and development that makes risky technologies effective at scale rather than cheap for individuals to use. That way, we can focus on the problem that is already hard enough - making sure that a single irresponsible country does not make use of a wildfire technology that poses risks for us all.
Although existing international systems are able to make some progress towards resolving collective action problems like climate change, we have a lot to learn about situations where a single group can have a significant global effect. As technology develops, humanity will encounter more such ‘wildfire’ technologies. The international community will need to do more work to prepare for these scenarios before technologies become mature, so that we are not left with a hasty and poorly considered response to emerging technologies.