The steam engine changed the world. AI could destroy it.

Industrialization meant the widespread adoption of steam power. Steam power was a general-purpose technology: it powered factory equipment, trains, and agricultural machinery. Economies that adopted steam power outpaced, and often conquered, those that did not.
Artificial intelligence is the next big general-purpose technology. A 2018 report by the McKinsey Global Institute predicted that AI could generate an additional $13 trillion in global economic activity by 2030, with the countries leading AI development set to reap the vast majority of these economic benefits.
AI can also enhance military power. It is increasingly used in situations where speed is essential (such as short-range projectile defense) and in environments where human control is logistically difficult or impossible (such as underwater or in areas where signals are jammed).
More importantly, countries that lead the development of AI will be able to exert power by setting norms and standards. China is already exporting AI-enabled surveillance systems to the world. If Western countries fail to provide alternative systems to protect human rights, many countries may emulate China’s techno-authoritarianism.
History shows that as the strategic importance of a technology increases, states are more likely to exert control over it. The British government funded the development of early steam engines and provided other forms of support for the development of steam power, such as patent protection and tariffs on imported steam engines.
Likewise, in FY 2021, the U.S. government spent $10.8 billion on AI R&D, $9.3 billion of it from the Department of Defense. China’s public spending on AI is less transparent, but analysts estimate it is of roughly the same magnitude. The United States is also trying to limit China’s access to the specialized computer chips that are critical to developing and deploying AI, while securing its own supply through the CHIPS and Science Act. Think tanks, advisory boards, and politicians constantly urge U.S. leaders to keep pace with China’s AI capabilities.
So far, the AI revolution fits the pattern of previous general-purpose technologies. But when we consider the risks AI poses, the historical analogy breaks down. AI could prove far more powerful than the steam engine, and it poses far greater risks.
The first risk arises from accidents, miscalculations, or malfunctions. On September 26, 1983, a satellite early-warning system near Moscow reported that five U.S. nuclear missiles were heading toward the Soviet Union. Fortunately, Soviet Lieutenant Colonel Stanislav Petrov decided to wait for confirmation from other early-warning systems. Only Petrov’s good judgment kept the false alarm from being passed up the chain of command. Had it been, the Soviet Union might well have launched a retaliatory strike, provoking all-out nuclear war.
In the near future, countries may feel compelled to automate military decision-making because of the speed advantages AI offers. AI may make critical miscalculations that humans would not, leading to accidents or escalation. Even if an AI behaves broadly as expected, the speed at which autonomous systems engage in combat could produce rapid escalation cycles, akin to a “flash crash” caused by high-speed trading algorithms.
Even when they are not integrated into weapons systems, poorly designed AI systems can be extremely dangerous. The methods we use to develop AI today, which essentially reward the AI for outputs we judge to be correct, often produce systems that do what we tell them to do rather than what we want them to do. For example, when researchers tried to teach a simulated robotic arm to stack Lego bricks, they rewarded it for raising the bottom face of a brick higher off the table. Instead of stacking the bricks, the arm simply flipped one upside down.
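To see how such a loophole arises, consider a minimal sketch, loosely modeled on the Lego example; this is hypothetical illustration, not the researchers’ actual code, and the brick dimensions are assumed:

```python
# Hypothetical sketch of a misspecified reward. The designer intends the
# reward as a proxy for "the brick is stacked on another brick," but the
# proxy only measures how high the brick's bottom face sits above the table.

BRICK_THICKNESS = 2.0  # cm; assumed dimensions, for illustration only

def reward(bottom_face_height: float) -> float:
    """Pay the agent for the height of the brick's bottom face."""
    return bottom_face_height

# Intended solution: place the brick on top of another 2 cm brick,
# so its bottom face rests at the top of the lower brick.
stacked = reward(BRICK_THICKNESS)

# Loophole: flip the brick upside down on the table. Its bottom face now
# points up, raised by the brick's own thickness. Same reward, no stacking.
flipped = reward(BRICK_THICKNESS)

print(stacked, flipped)  # 2.0 2.0 -- the proxy cannot tell the two apart
```

The point is not this toy arithmetic but the pattern: any gap between the reward we specify and the outcome we actually want is a gap a capable optimizer can exploit.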
For many of the tasks future AI systems may be given, it could be useful for them to accumulate resources (such as computing power) and to prevent themselves from being shut down (for example, by hiding their intentions and behavior from humans). So a powerful AI developed with today’s most common methods might not do what we built it for, and it might hide its true goals until it realizes it no longer has to, that is, until it can overpower us. Such an AI system would not need a physical body to do this; it could recruit human allies or operate robots and other military equipment. The more powerful the AI system, the more worrisome this scenario becomes. And competition among nations could make such accidents more likely, if competitive pressure leads states to devote resources to making AI systems more capable at the expense of making them safe.
A second risk is that the race for AI supremacy could increase the likelihood of conflict between the United States and China. If one nation appeared to be on the verge of developing a particularly powerful AI, for example, another nation (or coalition of nations) might launch a preventive attack. Or imagine what would happen if advances in ocean-sensing technology, aided in part by AI, undermined the deterrent value of submarine-launched nuclear missiles by making the submarines that carry them detectable.
Third, once AI capabilities are developed, it will be difficult to stop them from proliferating. AI development is currently far more open than the development of strategic 20th-century technologies such as nuclear weapons and radar: state-of-the-art findings are published online and presented at conferences. Even if AI research becomes more secretive, it can be stolen. While early developers and adopters may gain some first-mover advantage, no technology, not even top-secret military technology like the nuclear bomb, has ever remained a monopoly.
Rather than calling for an end to competition among nations, it would be better to identify practical steps the U.S. can take to reduce the risks of AI competition and encourage China (and others) to do the same. Such steps do exist.
America should start with its own systems. Independent bodies should regularly assess the risk of accidents, malfunctions, theft, or sabotage of AI developed in the public sector, and the private sector should be required to conduct similar assessments. We do not yet know how to reliably assess the risk an AI system poses, a thorny technical problem that deserves far more resources. At the margin, these efforts will come at the expense of advancing capabilities. But investing in safety will improve U.S. security even if it delays the development and deployment of AI.
Next, the United States should encourage China (and others) to secure their systems. Throughout the Cold War, the United States and the Soviet Union entered into several nuclear arms control agreements. AI now calls for similar steps. The United States should propose a legally binding agreement banning autonomous control over nuclear launch decisions, and it should explore “softer” arms control measures, including voluntary technical standards, to prevent accidental escalation involving autonomous weapons.
The United States, Russia, and China all attended President Obama’s Nuclear Security Summits in 2010, 2012, 2014, and 2016, which made significant progress in securing nuclear weapons and materials. The United States and China should now cooperate on AI safety and security, for example by conducting joint AI safety research projects and increasing the transparency of their AI safety work. Eventually, the two countries might jointly monitor compute-intensive projects for signs of unauthorized attempts to build powerful AI systems, much as the International Atomic Energy Agency monitors nuclear materials to prevent proliferation.
The world is on the verge of a change as dramatic as the Industrial Revolution, and one that carries enormous risks. During the Cold War, the leaders of the United States and the Soviet Union understood that nuclear weapons linked the fates of their two countries. Another such link is now being forged in tech company offices and defense labs around the world.
Will Henshall is pursuing a master’s degree in public policy at Harvard University’s Kennedy School of Government.