Does Introduction Of AI Tools In Nuclear Architecture In SA Mean Strategic Instability?

The integration of AI in South Asia's nuclear strategies raises concerns over stability, secrecy, and pre-emptive strike risks. Transparency and dialogue are essential to mitigate mistrust and potential escalation.


The use of Artificial Intelligence in nuclear decision-making and in targeting an enemy's nuclear assets is no longer merely a subject of science fiction; it is happening in the real world. Most nuclear states, including Pakistan, have nevertheless made their position clear that they will retain human involvement in the final decision to launch a nuclear weapon. As the prospect of dramatic breakthroughs in AI technologies grows with each passing day, humanity faces the question of whether it would be wise, viable, and safe for the continued survival of the human race for nuclear states to delegate the operation of their nuclear deterrents entirely to machines driven by Artificial Intelligence. On a smaller scale, the question facing our region, South Asia, where two nuclear-armed states, India and Pakistan, are locked in a dangerous military rivalry, is how the fast-paced acquisition and integration of Artificial Intelligence technologies by the Indian military will affect the precariously held strategic stability of the region. There is little public information available about the level of integration of AI into India's nuclear security architecture and decision-making processes.

What is known, however, is that India is using AI technologies to strengthen its intelligence and surveillance capabilities and to speed up its decision-making on target identification, indicating a capability, and in Pakistani estimation an intention, to attempt a pre-emptive strike in a crisis. Recent work by some Indian strategic experts argues that the integration of AI technologies into India's military and nuclear architecture, together with developments in its conventional capabilities, may be intended to give India a pre-emptive or retaliatory counterforce capability. Pakistani experts have been quietly expressing their fear that the capabilities India is acquiring may tempt it to strike Pakistani nuclear targets pre-emptively in a crisis, since the AI technologies India has acquired or is developing make it easier to identify Pakistan's strategic assets. “The AI technologies [that Indians are acquiring] could increase India's counterforce capabilities still further. Regardless of whether this was India's true goal, Pakistan might perceive the development of AI-based technologies alongside precision strike capabilities as designed to give India the capability to launch a disarming strike should a limited conventional conflict threaten to go nuclear. This would be consistent with many Pakistanis' fears about India's alleged Cold Start strategy,” reads a recent research report by the Center for Strategic and International Studies (CSIS) titled “Will the Adoption of Artificial Intelligence-enabled Decision Support Tools by India Reduce Nuclear Stability with Pakistan?”.

Very little is known about the use of AI technologies by Pakistan's military and security establishments. Besides a presidential initiative to harness AI technologies, in 2020 the Air Chief of Pakistan inaugurated the Centre of Artificial Intelligence and Computing (CENTAIC), a significant step towards incorporating AI into the Pakistan Air Force's operational milieu. “CENTAIC will facilitate the development of sensory fusion technology for the Pakistan Air Force that would assimilate sensory data from several sources, including cameras and radars, thus enabling PAF to analyse large volumes of data speedily.” We do not know what role AI is playing, or will play, in Pakistan's nuclear decision-making processes, in the management of its deterrent relationship with India, or in the management of the nuclear and delivery-system assets within its nuclear security architecture.
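For readers unfamiliar with the term, “sensor fusion” simply means combining readings from different sensors into a single, more reliable estimate. The sketch below is a minimal, generic illustration of that idea, an inverse-variance weighted average of two hypothetical measurements of a target's range; it describes the concept only, not anything about the PAF's or CENTAIC's actual systems.

```python
# Minimal, generic sensor-fusion sketch: combine two noisy estimates of the
# same quantity by weighting each source by the inverse of its variance.
# Purely illustrative; the readings below are invented.

def fuse_estimates(estimate_a: float, var_a: float,
                   estimate_b: float, var_b: float) -> tuple[float, float]:
    """Inverse-variance weighted fusion of two independent measurements."""
    weight_a = 1.0 / var_a
    weight_b = 1.0 / var_b
    fused = (weight_a * estimate_a + weight_b * estimate_b) / (weight_a + weight_b)
    fused_var = 1.0 / (weight_a + weight_b)  # fused estimate is tighter than either input
    return fused, fused_var

# Hypothetical readings of a target's range in kilometres.
radar_range, radar_var = 42.0, 4.0     # radar track: noisier at this geometry
camera_range, camera_var = 40.5, 1.0   # camera-derived estimate: tighter here

fused_range, fused_var = fuse_estimates(radar_range, radar_var, camera_range, camera_var)
print(f"fused range: {fused_range:.2f} km, variance: {fused_var:.2f}")
```

The appeal for a military user is visible even in this toy example: the fused estimate carries less uncertainty than either sensor alone, which is why fused sensor pictures are attractive for fast-moving targeting decisions.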

If we take US-Soviet deterrent relations during the Cold War as a model, we will see that a certain level of transparency about how a nuclear state manages its deterrent relations with its nuclear opponent is a necessity. International nuclear experts are convinced that “successful deterrence requires a keen understanding of each party’s perceptions. Any strategy of deterrence reflects interactions of perceptions, beliefs, and expectations rather than objectively measurable material factors. Absent a clear understanding of a potential aggressor’s belief, perceptions, and values, it is much more difficult to develop effective deterrence strategies”.

Pakistani and Indian political leaders could hardly exchange meaningful, creative, and innovative ideas about how to manage strategically dangerous relations between the two countries, which nurse hostile and antagonistic feelings about each other

The functionality of AI tools used in military decision-making depends on the training data fed into the machine. Experts say that the data each of the South Asian nuclear rivals feeds into its machines will reflect the biases and prejudices of the trainers and organisations assigned to collect and input it. Where human decision-makers are replaced by machines governed by Artificial Intelligence tools, the biases, predilections, and proclivities of the humans or groups of humans who trained them will be reflected in those tools. How India trains its AI tools, and on what kind of data, will therefore be directly relevant to Pakistan's security calculations. Similarly, what AI tools Pakistan uses, and what training data it feeds them, will be extremely relevant to the security calculations of the Indian nuclear establishment. Since the Lahore Summit of 1999, when the Prime Ministers of Pakistan and India decided to create a forum for the discussion and negotiation of nuclear doctrines between the experts and officials of the two nuclear states, every attempt at formalising and stabilising strategic nuclear relations by openly discussing each other's nuclear doctrines has been scuttled by one side or the other. The advent of AI technologies, their military utility, and their use in nuclear decision-making is another occasion on which both the Pakistani and Indian nuclear and security establishments need to engage in thorough discussion of the possible uses and applications of these tools and the training data each country would feed into its machines.
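To make the point about training data concrete, the toy sketch below, with entirely invented features, labels, and thresholds, trains two classifiers on the same hypothetical sensor data but on labels produced by analysts with different thresholds for calling a contact hostile; the resulting models disagree about the same ambiguous input, encoding their trainers' priors rather than any neutral ground truth.

```python
# Toy illustration (not any country's actual system) of how labeller bias
# propagates into an AI decision aid.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
signals = rng.normal(size=(1000, 3))                 # hypothetical sensor features
threat_score = signals @ np.array([1.0, 0.5, -0.3])  # latent "true" threat level

# Analyst A labels a contact hostile only above a high threshold;
# Analyst B, working from a more suspicious prior, uses a much lower one.
labels_a = (threat_score > 1.5).astype(int)
labels_b = (threat_score > 0.2).astype(int)

model_a = LogisticRegression(max_iter=1000).fit(signals, labels_a)
model_b = LogisticRegression(max_iter=1000).fit(signals, labels_b)

ambiguous_contact = np.array([[0.8, 0.4, 0.0]])      # the same ambiguous input
print("Model A P(hostile):", round(model_a.predict_proba(ambiguous_contact)[0, 1], 2))
print("Model B P(hostile):", round(model_b.predict_proba(ambiguous_contact)[0, 1], 2))
# Each model reproduces its trainer's threshold, not a neutral ground truth.
```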

At present, political relations between the two countries are at their lowest ebb. Relations hang by the delicate thread of hotline contact between officers of major-general rank in the two military establishments. This practically means that Pakistani and Indian political leaders could hardly exchange meaningful, creative, and innovative ideas about how to manage strategically dangerous relations between the two countries, which nurse hostile and antagonistic feelings about each other. AI technologies, regarded across the world by military establishments, experts, and political leaders as among the most disruptive, are being developed and integrated into each military without any consideration of accommodation with a rival that fully intends to damage and destroy it. Both militaries are equipped with nuclear arms and their delivery systems, and the introduction of AI technologies by one could disrupt the plans and strategies of the other. For instance, the security of Pakistan's strategic assets rests on mobility and dispersal, which is likely to become vulnerable to the high accuracy with which AI tools could locate and target an opponent's nuclear assets.

Indian military leaders now routinely talk about the integration of Artificial Intelligence technologies into their forces. The Pakistani military, however, seldom mentions the field in its routine statements and speeches

US military experts point to the accuracy of US AI systems in identifying Chinese missile launchers: “A team from the University of Missouri's Center for Geospatial Intelligence has applied these techniques in practice, developing a machine-learning algorithm to identify and rank potential Chinese surface-to-air missile (SAM) sites ‘with a 98.2 [percent] average accuracy.’ The algorithms were trained using ‘fewer than 100 positive training examples,’ which represents a significant improvement over older techniques.” Similarly, AI-assisted and machine-learning tools make submarines operating undersea vulnerable to detection. In this era of AI technologies, Indian efforts to achieve a submarine-based second-strike capability will remain a pipedream, essentially because AI tools could make the seas extremely transparent. Some international experts are of the opinion that, in the India-Pakistan nuclear context, AI tools need not fulfil their promise of perfection before they start to disrupt the delicately held deterrence relationship between the two rivals; their very introduction could undermine it. We are fast reaching a point where Indians could develop the confidence that they have identified all the strategic weapons in Pakistan's possession and, in consequence, do something adventurous, or even hint at it. Pakistanis, on the other hand, do not need AI tools to make the tall claim that India is a target-rich country. They need only show themselves to be desperate when attacked by the conventionally superior Indian military.
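How can an algorithm reach such accuracy from fewer than 100 positive examples? One common route, offered here as a hedged guess at the general technique rather than a description of the Missouri team's actual pipeline, is transfer learning: a network pretrained on generic imagery is frozen and used as a feature extractor, and only a small classification head is trained on the scarce labelled image tiles. The sketch below, written in PyTorch with placeholder data, shows the shape of such an approach.

```python
# Hedged sketch of few-positive-example image classification via transfer
# learning: frozen pretrained backbone + small trainable head. Illustrative
# only; the data here are random stand-ins for labelled satellite tiles.

import torch
from torch import nn
import torchvision.models as models

# Frozen backbone pretrained on generic imagery.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()                 # strip the original classification head
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

# Small trainable head: "candidate site" vs "background".
head = nn.Linear(512, 2)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(image_batch: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on the head; the backbone stays fixed."""
    with torch.no_grad():
        features = backbone(image_batch)    # (N, 512) embeddings
    logits = head(features)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# With informative pretrained features, even a few dozen positive tiles can be
# enough for the small head to separate the classes reasonably well.
dummy_tiles = torch.randn(8, 3, 224, 224)   # stand-in for satellite image tiles
dummy_labels = torch.randint(0, 2, (8,))
print("training loss:", train_step(dummy_tiles, dummy_labels))
```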

Indian military leaders now routinely talk about the integration of Artificial Intelligence technologies into their forces. The Pakistani military, however, seldom mentions the field in its routine statements and speeches. So much so that the notoriously talkative media wing of the Pakistani military, ISPR—to the best of my knowledge—has not issued a single press release on the topic. Information about military applications of AI in the Pakistani military comes instead from semi-official sources such as the Institute of Strategic Studies Islamabad (ISSI): “Pakistan has also procured AI-enabled military systems from China, including the Wing Loong II UAV and LY-80 surface-to-air missiles. Over the years, Pakistan has made significant strides in UAVs, including Shahpar, Ababeel, Mukhbar, Uqab, and Buraaq, as it has indigenously developed UAVs. Carrying on the tradition of introducing new technological advancement, Pakistan launched the latest version of Shahpar – Shahpar II, a medium altitude long endurance (MALE) Unmanned Combat Aerial Vehicle (UCAV). In addition, Pakistan has also made progress in incorporating AI into its missile program, which is signified by its recent missile systems, namely the Shaheen III and Ababeel ballistic missiles and Ra'ad cruise missiles. These missiles have noteworthy attributes such as Multiple Independent Reentry Vehicles (MIRVs) and terminal guidance systems,” reads a recent ISSI report.

Cumulatively, one trend in each country captures the overall impact of AI on the strategic stability and strategic culture of South Asia. India is flirting with the idea of strategic domination of the region through a massive arms and technology acquisition drive, and in this connection it seems particularly interested in acquiring technologies and weapons systems that bolster its capability to launch a pre-emptive strike in a military crisis. The representative trend in Pakistan, on the other hand, is complete secrecy about the use of Artificial Intelligence technologies in the management and operation of its nuclear weapons capability. Ironically, the cumulative effect of both trends is the same: they reinforce doubt and mistrust on each side. Even if India has no intention of launching a pre-emptive strike against Pakistan, the Pakistani security establishment has genuine reason to doubt its intentions, especially given the kind of capabilities it is acquiring. The secrecy on Pakistan's side can be equally destabilising. Some international experts have recommended that both Pakistan and India would do the region a great service if they organised technology demonstrations of whatever AI tools emerge from their research and development programmes; this might ease each other's anxieties.

Concepts of deterrence escalation and risk reduction heavily rely on human rationality, caution, perception and management of the situation politically. Perceived or real absence of human factor renders these concepts without their traditional meaning with risk of automating escalation

There are experts and military research organisations that have expressed doubts about the reliability of AI technologies. For instance, the chief technology officer of the US Central Intelligence Agency (CIA) recently suggested that generative AI should be treated as a ‘crazy, drunk friend’. SIPRI, the Stockholm International Peace Research Institute, recently published a background paper titled “Nuclear Weapons and Artificial Intelligence: Technological Promises and Practical Realities”, in which it set out several factors that make AI an unreliable tool for nuclear decision-making: “To begin with, advanced AI models suffer from unreliability—that is, a lack of trustworthiness. They have consistently been shown to hallucinate, meaning that they can confidently produce outputs that are incorrect and unsupported by their training data. This can mean that an LLM model invents facts or that a large-scale vision model incorrectly identifies an object in an image, leading to inaccurate assessments or false positives in critical areas such as threat detection and surveillance. This has led the chief technology officer of the US Central Intelligence Agency (CIA) to suggest treating generative AI as a ‘crazy, drunk friend’. More advanced AI capabilities have enabled improved analysis, predictions of behavior, or evolutions of certain scenarios. However, performance has come at the expense of interpretability. The more parameters an ML model has, the harder it is to trace how particular inputs lead to specific outputs,” reads the SIPRI report.
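The “confidently wrong” failure mode SIPRI describes is easy to reproduce even with the simplest models. In the toy sketch below, using invented data, a classifier trained on two well-separated clusters still returns a near-certain probability for an input that resembles nothing in its training set: standard models report confidence, not familiarity.

```python
# Toy demonstration of confident predictions on inputs far outside the
# training data; the data are invented purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
class_a = rng.normal(loc=[0, 0], scale=0.5, size=(200, 2))
class_b = rng.normal(loc=[3, 3], scale=0.5, size=(200, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression(max_iter=1000).fit(X, y)

in_distribution = np.array([[0.1, -0.2]])   # looks like the training data
far_outside = np.array([[40.0, 40.0]])      # looks like nothing the model has seen

print("P(class 1), familiar input:  ", model.predict_proba(in_distribution)[0, 1])
print("P(class 1), unfamiliar input:", model.predict_proba(far_outside)[0, 1])
# The second probability is near-certain even though the input lies dozens of
# standard deviations from any training example: confidence, not familiarity.
```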

AI experts predict a stage in AI development at which AI could surpass humans intellectually; they have labelled this stage superintelligence. That could be the point in the AI journey at which Artificial Intelligence could potentially destroy humanity, and nuclear weapons could be one tool it employs. Most nuclear states have taken the position that they will not wholly delegate nuclear decision-making to Artificial Intelligence tools, as this could prove disastrous. Pakistan's delegation to the Conference on Disarmament (CD), the Geneva-based multilateral body that deals with issues of arms control and disarmament, presented a working paper in one of its sessions taking the position that Pakistan would never exclude humans as the final decision-makers in nuclear decision-making: “Concepts of deterrence escalation and risk reduction heavily rely on human rationality, caution, perception and management of the situation politically. Perceived or real absence of human factor renders these concepts without their traditional meaning with risk of automating escalation,” reads the Pakistani working paper.

The South Asian region is entering uncharted and dangerous waters, while our political leadership is either slumbering or handling matters of statecraft with malice. AI-driven dominance of the strategic scene in the region could mean the region's complete destruction.

The writer is a journalist based in Islamabad.