
Nobel Prize-winning American economist Thomas Schelling once famously stated, “There is a tendency in our planning to confuse the unfamiliar with the improbable.” This is what happens in Pakistan, where, owing to limited technological advancement, little attention has been paid to the artificial intelligence tools that extremist groups can exploit in an already volatile region. Artificial Intelligence (AI) has long been hailed as a transformative force capable of revolutionising industries and improving lives. Mustafa Suleyman, CEO of Microsoft AI, captured its significance by saying that “AI is to the mind what nuclear fusion is to energy.” Yet, like any tool, it is only as effective as the hands that wield it. In a chilling turn, AI could be misused by extremist groups and organisations, turning it into a weapon of mass disruption. This development presents new and unprecedented threats, especially for countries like Pakistan, which are already grappling with the scourge of terrorism.
Terrorist groups have always adapted quickly to new technologies, and AI is no exception. One of its most alarming uses lies in the realm of propaganda. Extremists have turned to generative AI to create hyper-realistic deepfake videos and meticulously tailored disinformation campaigns, making it nearly impossible for the average individual to distinguish truth from fiction. The Islamic State (ISIS), for example, has experimented with AI-generated fake news broadcasts about its March 2024 attack on a Moscow concert hall, in which over 135 people were killed. In countries such as Pakistan, where socio-political instability and economic grievances frequently foster radicalisation, such tactics could sow widespread confusion.
Another concern is AI's influence on recruitment. Machine learning algorithms enable terrorist groups to analyse online activity and pinpoint individuals susceptible to radicalisation; AI then tailors the messaging to each target, delivering a recruitment pitch that proves hard to reject. Europol has recently reported a marked rise in recruitment success attributed to these strategies. According to The Washington Post, in February 2024 a group linked to al-Qaeda announced its intention to hold online AI workshops, and subsequently published a guide on using AI chatbots. In Pakistan, where many young people face unemployment and disenfranchisement, this tactic could worsen an already critical situation.
Beyond propaganda, the cyber domain is becoming a key battleground. Terrorists are using AI to enhance their cyber capabilities, automating attacks and exploiting vulnerabilities in critical infrastructure. One example, still poorly understood by the international community, is the self-operating vehicular bomb: an autonomous vehicle that can navigate to a designated target and detonate explosives without a human operator. Global organisations such as NATO are concerned that terrorist groups may deploy such devices in the future, since the technologies that power self-driving cars can readily be repurposed for them.
AI also presents a troubling new frontier in drone warfare. The weaponisation of drones, once the domain of state militaries, is becoming increasingly accessible to non-state actors. With AI-enhanced navigation and targeting systems, drones can be deployed with devastating precision. Autonomous drones have already been used in conflict zones with catastrophic consequences. In Pakistan, where drone strikes have long sparked controversy, the prospect of terrorists wielding such technology raises serious concerns—especially now that militant groups have found safe havens in Afghanistan under the Taliban regime. These machines could circumvent conventional security measures, wreaking havoc on critical infrastructure or targeting high-profile individuals, and spreading fear and a sense of vulnerability.
Pakistan’s cybersecurity framework is not immune to these threats. Cyberattacks targeting financial institutions and government websites have risen sharply in recent years, and incorporating AI into such operations could make them far more destructive. A synchronised cyber-attack on Pakistan’s energy grid or financial infrastructure could cripple the economy and erode public trust in government institutions almost overnight. The question then arises: what can counter this growing menace? To start with, Pakistan must take its cybersecurity infrastructure seriously. And while AI carries real risks, it can also, if responsibly harnessed, significantly strengthen counterterrorism initiatives.
Recent initiatives, such as the United Nations workshops in Islamabad that equip the National Counter Terrorism Authority (NACTA) with AI-driven crime-analytics skills, are promising, but they must be scaled up and institutionalised. Policymakers must also engage with tech companies to regulate the sale and use of AI technologies, ensuring they do not fall into the wrong hands. Public awareness campaigns are equally critical: citizens must be educated about the dangers of deepfake technology and taught to identify fake news and disinformation campaigns.
International collaboration will also play a pivotal role. AI-driven terrorism is not an isolated issue but a global challenge that requires a unified response. By working with international partners, Pakistan can share intelligence, resources, and best practices to counter the misuse of AI. But perhaps most importantly, we must remember that technology alone cannot solve this problem. Addressing the root causes of extremism—poverty, inequality, and disenfranchisement—is crucial. Without meaningful social and economic reforms, no technological intervention will be sufficient.
AI is neither inherently good nor evil; it is a tool with immense potential to transform our world. But in the wrong hands, it can just as quickly become a weapon of unparalleled destruction. Like the rest of the world, Pakistan must proceed cautiously as it navigates this new landscape. The misuse of AI by extremist organisations is not just a distant possibility; it is an imminent threat. The sooner we confront it, the better prepared we will be to ensure that this remarkable technology serves humanity rather than harms it.