Militarized artificial intelligence in the nuclear field

Automation and autonomy are not new phenomena in modern economies and societies: more and more tasks are being delegated to machines and computer systems, while the role of direct human intervention is being reduced. Such processes are not new in the military field either. For decades, armed forces around the world have shown interest in the development of weapon systems with automated and autonomous features. In recent years, various governments (including those of nuclear-armed states) have put forward plans for further integration of artificial intelligence (AI) into militaries as part of a quest for greater levels of autonomy.

Discussions about AI in the military often refer to the super-intelligent machines depicted in popular culture, for example in the Terminator film series. Because current research is nowhere near producing a strong or superintelligent AI capable of replicating or surpassing the cognitive abilities of humans, this framing gives the impression that militarized AI is a matter of science fiction and not an urgent issue to address.

However, rapid advances in computer processing power have increased the complexity and scope of the military tasks delegated to machines, raising questions about the implications of militarized AI for the future of warfare and international security. Although we are far from a Skynet-like system taking control of nuclear weapons, the further integration of automation and autonomy into nuclear command and control raises concerns for global strategic stability and nuclear deterrence that need to be monitored, analyzed and addressed.

Automation can be defined as the delegation of tasks to machines based on a specific sequence of actions or rules, which makes the process more predictable. Autonomy, on the other hand, can be described as the programming of machines to perform tasks or functions without detailed rules. An autonomous system is therefore capable not only of performing tasks but also of responding to unexpected situations based on its sensory inputs. Autonomy is not always associated with high levels of intelligence, however, and comes in different degrees, from a robot vacuum cleaner to a self-driving car.

Broadly speaking, AI can be defined as a branch of computer science that combines statistics and algorithms so that computers or robotic systems can perform tasks normally associated with human intelligence, including vision, perception, speech recognition, planning and decision-making. AI and its subfields of machine learning and deep learning allow robotic systems to operate with less human intervention and thus with higher levels of autonomy.
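
To make the distinction between rule-based automation and learning-based autonomy more concrete, here is a minimal illustrative sketch in Python (using scikit-learn; the feature names, threshold and readings are all invented for the example) contrasting a hand-written rule with a simple model that learns its own decision boundary from data:

```python
# Toy illustration: fixed-rule automation vs. a learned classifier.
# All numbers and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each reading: [signal_strength, object_speed] from a notional sensor.
readings = np.array([
    [0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.7],
    [0.1, 0.3], [0.7, 0.8], [0.25, 0.15], [0.85, 0.95],
])
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 0 = benign, 1 = threat

# Automation: a hand-written rule with a fixed threshold,
# looking only at signal strength.
def automated_rule(reading):
    return 1 if reading[0] > 0.6 else 0

# Learning-based behaviour: a model that infers its own decision
# boundary from example data instead of following an explicit rule.
model = LogisticRegression().fit(readings, labels)

new_reading = np.array([[0.55, 0.85]])
print("rule says: ", automated_rule(new_reading[0]))
print("model says:", int(model.predict(new_reading)[0]))
```

The fixed rule behaves predictably but only within the cases its authors anticipated, whereas the learned model generalizes from examples; that flexibility is precisely what makes its behaviour harder to specify and verify in advance.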

AI is a dual-use technology, which means that it can be applied for both civil and military purposes. Rather than being a weapon itself, AI is seen as an enabler of weaponry. Governments around the world (including those of the US, China, Russia, Israel, South Korea, and the UK) view AI, robotics, and machine learning as critical enabling technologies for gaining strategic advantage and for modernizing military capabilities and equipment.

However, increasing levels of automation and autonomy in the military are raising concerns among policymakers, academics, and experts from disciplines such as international law, ethics, and global security. It is not clear whether weapons systems with autonomous features deployed in a war zone would be able to distinguish between civilians and combatants, a basic principle of the international humanitarian law that applies in armed conflict. According to some ethicists and robotics experts, even if a system could make that distinction, it would be immoral and unethical to delegate the ability to kill people to a machine that cannot make the same moral judgments as humans. In addition, the development of militarized AI brings security concerns related to the unpredictability of algorithms and to data issues that could exacerbate military escalation and lead to faster decision-making without full human involvement or control.

In the nuclear arena, the latter category of fears is especially crucial given the risks posed by nuclear weapons and the destruction that a potential nuclear attack would bring. A nuclear power is unlikely to hand the nuclear button to an autonomous system any time soon. However, AI applications already embedded in the field of nuclear weapons (including nuclear delivery systems, nuclear command and control, early warning, and intelligence, surveillance and reconnaissance) deserve examination for their potential to destabilize nuclear deterrence.

During the Cold War, both the US and the USSR sought to integrate automation and autonomy in the nuclear field. The two superpowers developed systems to detect and warn of each other's nuclear attacks in order to ensure the ability to respond, including to attacks carried out by surprise. Those early warning systems incorporated some automated processes, but they were never fully trusted: there was always a person who evaluated the threats reported by the systems. In practice, the automated systems were intended to aid the decision of whether or not to retaliate.

A famous example is the 1983 incident in which Soviet Lieutenant Colonel Stanislav Petrov did not trust the early warning system that indicated, with a high degree of certainty, that five US ICBMs were heading towards the USSR. Normally, such a situation would have triggered Soviet nuclear retaliation; however, Petrov decided not to report the event as an attack, because he correctly concluded that it was a system error rather than a real strike. It is often claimed that nuclear war was averted thanks to Petrov's human judgment. Had Petrov been a system programmed to respond when the radar presented highly certain information about an attack, it would probably have launched a nuclear retaliation.

The Soviet Union also developed a semi-automated nuclear response system known as the Dead Hand, which reportedly could launch nuclear weapons at the US without an order from a commanding authority. The system would only operate under exceptional conditions, in the event that no one in the Soviet leadership remained alive and capable of giving the order to retaliate. In general, neither the US nor the USSR fully trusted machines to make nuclear-related decisions. Given the high level of risk involved, it is difficult to imagine the leaders of today's nuclear powers delegating such a decision to an AI-based system.

Nevertheless, the current powers (especially the US, China and Russia) do show interest in integrating AI and different levels of autonomy into the nuclear field. Unlike the nuclear killer robots depicted in science fiction, these processes are less visible and more difficult to observe, particularly given the secrecy that surrounds such information. They include the integration of AI and machine learning into the software and systems used for detection and early warning. Algorithms can be used for intelligence gathering, surveillance, and data collection and analysis that assist military leaders in decision-making, command and control.

In addition, unmanned vehicles or systems with autonomous features (such as drones or underwater vehicles equipped with sensors) are likely to be used to detect adversaries' nuclear attack capabilities. AI can act as a force multiplier and be integrated into missiles to improve their accuracy and guidance. Some foresee unmanned vehicles (mainly aerial or underwater) becoming alternative means of delivering nuclear weapons. Russian leaders, for example, have boasted of a nuclear-powered unmanned underwater vehicle, the Status-6 (also called Poseidon), which integrates some autonomous features (although exactly which ones is unclear). It is reportedly able to remain at sea for long periods and in extreme conditions, and could also be used to survey the ocean floor. More importantly, though, it appears designed to detonate a nuclear weapon off an enemy's coastline and create a tsunami effect.

The field of nuclear weapons is said to be conservative and slow to adopt new technologies. A much higher level of trust in AI-based technologies would be necessary for drastic changes to occur in that field. However, it is worth underscoring the potential risks and detrimental implications of militarized AI for nuclear stability and deterrence as the integration of automated and autonomous functions progresses.

The current state of AI-based systems and their civilian applications makes it clear that many things can go wrong. Systems using AI and machine learning rely on vast amounts of data. These data may be incomplete, of insufficient quality, incorrect or simply missing. In addition, they often carry biases introduced, frequently inadvertently, by the human engineers programming the system. Governments are unlikely to obtain fully accurate data on the weapons systems and nuclear capabilities of their adversaries. Misinterpretation of data, or accidental escalation due to wrong information provided by algorithms, therefore remain possible scenarios.

Another set of risks has to do with the potential for adversarial attacks, hacking, and spoofing. The data fed to an algorithm can be manipulated in order to trick it into wrongly recognizing and classifying an object. Increasingly digitized and networked command and control systems are also vulnerable to cyber threats, including those enabled by AI. Malicious actors could hack early warning systems or try to sabotage command and control. There is even the possibility that cyberattacks could reach nuclear weapons systems or launch platforms.
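
To illustrate the spoofing concern in the abstract (a toy sketch only, with invented weights and readings and no connection to any real system), the snippet below shows how a small, targeted perturbation of the input data can flip the output of a simple linear classifier:

```python
import numpy as np

# Hypothetical linear "threat classifier": a score above zero is read
# as "threat". Weights, bias and readings are all invented.
weights = np.array([1.5, -2.0, 0.7])
bias = -0.1

def classify(x):
    score = round(float(np.dot(weights, x) + bias), 2)
    return ("threat" if score > 0 else "benign"), score

x = np.array([0.2, 0.4, 0.3])                 # original sensor reading
print("original:", classify(x))               # ('benign', -0.39)

# Spoofing in the style of a fast-gradient attack: nudge each feature
# slightly in the direction that raises the score (for a linear model,
# that direction is simply the sign of each weight).
epsilon = 0.15
x_spoofed = x + epsilon * np.sign(weights)
print("spoofed: ", classify(x_spoofed))       # ('threat', 0.24)
```

The same principle scales to far more complex models: an attacker who knows, or can approximate, how a system processes its inputs can craft data that looks unremarkable to a human operator yet is systematically misread by the algorithm.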

The integration of autonomous functions and AI makes the relationship and interaction between humans and machines more complex and raises questions about the role of the human in decision-making. Algorithm-based systems tend to process large amounts of data much faster than human operators, and the ability to relieve cognitive load is seen as one of the benefits of integrating AI into military decision-making. According to some experts and policymakers, algorithms would improve and augment human capabilities such as situational awareness: the ability to understand a situation clearly and grasp the different dimensions of the battlefield. Throughout history, human beings have sought to use technology, from armor to weapons, to enhance their physical and mental abilities. The integration of AI is part of this trend, accompanied by arguments about greater precision and efficiency in decision-making and a consequent reduction in the risk of miscalculation.

However, the current state of AI does not allow firm conclusions about its potential to improve decision-making efficiency. The field involves a great deal of uncertainty, raising fears of overreliance on automated decision-support systems, known as automation bias. Faced with an overwhelming amount of data, a human operator may be unable to exercise judgment because of overconfidence in the system. Scientists describe the decision-making of algorithms as a black box and point out that it is difficult to understand how a given result is arrived at. Relying on algorithms for crucial nuclear decisions is risky if the system misinterprets the data on which the human operator depends and the decision-making process cannot be fully understood.

Current AI systems are effective at narrow tasks with well-defined objectives, not in complex situations that require a degree of human judgment and an analysis of the broader context. Lieutenant Colonel Petrov, for example, claimed to have taken various factors into account in making his decision and, above all, to have trusted his human instinct. With greater integration of militarized AI and increased reliance on automated or autonomous systems, a person in Petrov's position might not have time to consider the whole situation, or might rely too heavily on the system. Assessing a nuclear war and its disastrous consequences requires a degree of human judgment that AI is unlikely to possess in the short term.

Much of nuclear deterrence is based on perceptions of an adversary's capabilities and of the probability that it will carry out an attack. AI can change those perceptions, not necessarily through its actual capabilities, but through misperceptions and misconceptions about opponents' AI capabilities, especially among the major nuclear powers.

The rhetoric of policymakers indicates that there is competition among the great powers around AI-based technologies. In the US, the National Security Commission on Artificial Intelligence, led by Eric Schmidt, former CEO of Google, and Robert Work, former Deputy Secretary of Defense, published a report in 2021 urging the government to accept “the AI competition” with China and to be aware of China's “ambition to overtake the US as the world leader in AI within a decade”. Russian President Vladimir Putin has said on multiple occasions that whoever can secure a monopoly in the field of AI “will become the ruler of the world”. In 2019, he stated: “It is no coincidence that many developed countries in the world have already adopted action plans for the development of these technologies. And we, of course, must guarantee technological sovereignty in the field of AI.”

In such a competitive atmosphere, and amid ongoing geopolitical tensions, governments are likely to hold misperceptions about the AI capabilities of their opponents in the nuclear arena. Those fears are fueled by announcements of nuclear-powered autonomous weapons, such as the Poseidon mentioned above, which Russian authorities have described as a unique system unlikely to be replicated elsewhere. Although it is not entirely clear what exactly such a torpedo can do, the mere news of Russia's possession of such a weapon may lead other nuclear powers to develop counter-technologies, and thus to further destabilization. The risks therefore often stem not from AI itself or its actual capabilities, but from the misconceptions associated with the technology. Perceived technological changes have the potential to change how governments view nuclear deterrence and first-strike incentives, undermining nuclear stability. Referring to the nuclear arms race between the US and the USSR, some speak of a modern “AI arms race”. If those fears and misperceptions about capabilities and intentions are not addressed, militarized AI risks fueling competition between great powers.

The international community has not entirely ignored the fears outlined above. Until now, debates about possible regulation of militarized AI have focused on conventional weapons. States parties to the Convention on Certain Conventional Weapons have gathered at the United Nations headquarters in Geneva to discuss lethal autonomous weapons systems, which are capable of selecting and attacking targets without human intervention. Those talks have not emphasized the nuclear field. Activists, civil society organizations, and campaigns such as the Campaign to Stop Killer Robots do not focus on nuclear weapons in their calls for a ban on fully autonomous weapons.

Other existing treaties, such as the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), also do not include specific requirements for human control to be present in nuclear decision-making. Therefore, to mitigate the risk of escalation, states could introduce an amendment to the NPT explicitly prohibiting autonomous nuclear weapons. Furthermore, formalizing and enshrining in international law the requirement to maintain human control over nuclear operational command and control would demonstrate the commitment of nuclear powers to reducing the risks associated with militarized AI.

In conclusion, the risks of militarized AI in the nuclear arena are not as flashy as killer robots or an intelligent computer running missiles. However, the integration of automated and autonomous functions into nuclear command and control and further delegation of tasks to algorithms have the potential to intensify escalation and become a threat to nuclear stability. The nuclear powers must address, in the framework of bilateral or multilateral talks, risks such as overconfidence in support systems, data problems, vulnerability to attacks, the greater complexity of the interaction between people and machines, as well as misperceptions about the AI capabilities of adversaries. Likewise, confidence-building measures that mitigate misunderstandings and miscommunication are essential to deal with the potential escalation related to AI in the nuclear arena.

Anna Nadibaidze is a PhD student at the University of Southern Denmark and a researcher on the AutoNorms project.