A New Technology Game with Old Warfare Rules: How AI and Quantum Computing will Impact War in the 21st Century

Emerging technologies such as artificial intelligence and quantum computing are reshaping modern warfare, raising critical questions about their impact, regulation, and the future of global security. In a special report published by The Economist in 2018 titled “The Future of War,” for example, the authors ask what impact emerging technologies will have and how they will create “new battlegrounds.”[1] Even though technology and geopolitical competition are reshaping the character of war in the 21st century, they argue, Clausewitz’s axiom that “war is still a contest of wills” continues to apply. The report also asks whether the competition for supremacy in these disruptive technologies — artificial intelligence, autonomous weapon systems, drones, robots, new biological weapons, and cyber operations — “can be controlled, and whether rules to ensure human control over [these] systems are possible.”[2]

Expanding on these concerns, Paul Scharre[3] describes a “nightmare scenario” in which future conflicts are shaped not by large-scale invasions reminiscent of past centuries (i.e., Napoleonic campaigns) but by the malfunction of advanced military technologies, often due to human errors such as flawed software coding or cyberattacks by adversaries. As technological advancement pushes warfare toward greater automation, the traditional human-driven nature of war may give way to conflicts fought primarily by machines, changing the face of war as a “teacher of violence.” This shift is particularly significant for Western democratic societies, where the loss of human life in war challenges long-standing ethical norms and weighs heavily on public opinion.

In the 1990s, Edward Luttwak noted that the United States, and the West in general, are reluctant to engage in armed conflicts because of public aversion to human casualties. The advent of “post-heroic warfare,” fought “to minimize the exposure of [...] military personnel to the risks of combat,” has consequently changed the public’s response to military interventions. In democratic systems, public opinion carries more weight than in non-democratic countries; hence, echoing Scharre, the worst nightmare of our times may be algorithm malfunctions rather than “19th-century-style” war.[4]

However, the evidence from the current international context is quite different. There are currently 45 intra- and inter-state conflicts underway in the Middle East and Africa, 21 armed conflicts in Asia, seven in Europe, and six in Latin America. These conflicts have resulted in approximately 500,000 deaths among combatants and civilians, according to a database provided by the Oxford Martin Institute.[5] These data point to two foundational elements of international politics. On the one hand, violence remains an essential element of international relations in the 21st century, even in intrastate and extra-systemic conflicts. On the other hand, conflicts such as those in Iraq and Afghanistan, where the Western coalition led by the United States held an overwhelming technological advantage, and the war in Ukraine, where both sides employ significantly modern resources, show that technological superiority is not the sole factor in winning wars. Similarly, the recent Israeli conflicts in Gaza and Lebanon have demonstrated that military superiority alone does not provide a security threshold sufficient to neutralize enemies operating with rudimentary tools capable of bypassing and neutralizing even the most advanced technology. In other words, contemporary wars, like those of the past, are fought and won not necessarily through technological primacy, but through capacity, speed of adaptation, technical and financial resources, and personnel.

This is not the first time we have witnessed the theorization of a technological revolution in warfare and the end of the “boots on the ground” paradigm. The underlying justification is always the same: an exaggerated emphasis on new technologies. The conclusion is equally consistent: despite inevitable technological evolution, such theories are refuted by empirical evidence from the battlefield. Consider the advent of airpower, which at the beginning of the 20th century was rightly seen as a revolutionary technological factor in military evolution. Those who predicted that the aerial bombing of cities would produce rapid enemy capitulation were soon disillusioned; Nazi air raids, for example, failed to weaken British morale. The same can be said of aerial bombardments conducted with advanced systems able to integrate multidomain operations, which failed to subdue the Vietcong in the 1960s, just as intensified drone operations failed to overcome the Taliban.

Paraphrasing Clausewitz, we can say that through technological evolution the “chameleon of war” changes its form and grammar, but the logic of pursuing political objectives remains unchanged. We must therefore ask: what changes are triggered by current disruptive technologies? What can we learn from past technological revolutions that have marked various historical periods? Ultimately, how can we govern these technologies to limit their harmful effects in an international context imbued with competitive conflict among great powers? To answer these questions, we can analyze the political, military, and social implications of two disruptive technologies — artificial intelligence (AI) and quantum computing (QC) — in the context of international politics.

A Paradigm Shift: Civilian Tools Used for Military Purposes?

The 20th century showed that technological advancements in the military field often lead to new technologies available to the general public. Simply put, over time, military technological progress is converted to civilian use. Consider, for example, the internet, originally conceived by the U.S. Department of Defense in the 1960s as ARPANET, a communication network designed to remain resilient during catastrophic events or nuclear war. It evolved into today’s internet, the global infrastructure that supports the world economy, enables online education, and guarantees global communications.

Similarly, the GPS (Global Positioning System) was developed for navigation and targeting during the Cold War, but today it is integrated into everyday applications such as automotive navigation, precision agriculture, and logistics services. RADAR (Radio Detection and Ranging), which was also created for military purposes during World War II, has become essential for civilian aviation, meteorology, and autonomous guidance systems, while drones, which were invented for surveillance missions and military attacks, have been adapted for civilian uses such as deliveries, aerial photography, and agricultural mapping.

Today’s emerging technologies reverse this paradigm: they are predominantly used by civilians but readily transfer into military use. AI is already employed in civilian fields such as medical diagnostics, autonomous driving systems, and enhanced computational services. Similarly, QC, although still in its early stages, promises to solve complex problems in areas such as cryptography, climate modeling, and the design of new materials. Traditional technologies had an incremental effect, improving existing processes or enabling new services in the military context and only later being translated into civilian uses. This does not hold true for AI and QC, which are general-purpose technologies that both improve existing processes and create new operational paradigms, making them instantly accessible to virtually anyone with the necessary expertise and budget.

Moreover, while traditional technologies have improved security without generating significant systemic risks, emerging technologies carry new dangers. AI could be used to create autonomous weapon systems, for example, while QC could be used to overcome cryptographic defenses. Global regulation and collaborative approaches among nations will be needed to mitigate these risks; however, diplomacy clashes with the competitive conflict inherent in the race for technological supremacy.

The quantum race, for example, has serious implications, as explained in a recent article in Foreign Affairs.[6] “Whereas a classical computer must process one state after another sequentially, a quantum computer can explore many possibilities in parallel,” the authors write. “Think of trying to find the correct path through a maze: a classical computer has to try each path one by one; a quantum computer can explore multiple paths simultaneously, making it orders of magnitude faster for certain tasks.”

By virtue of these characteristics, QC is poised to be a disruptive force in modern warfare, complementing and, in some cases, even surpassing AI in strategic importance. The research arm of the U.S. Department of Defense, the Defense Advanced Research Projects Agency (DARPA), for example, recently announced a Quantum Benchmarking Initiative to determine whether a quantum approach can achieve utility-scale operation by 2033.[7] One of the most immediate threats posed by QC is its ability to break classical encryption through Shor’s algorithm, rendering current cryptographic systems obsolete and exposing military communications, intelligence data, and critical infrastructure to cyber threats. This has led to a global race for post-quantum cryptography (PQC) to mitigate future decryption risks, particularly in cyber espionage, where adversaries may already be harvesting encrypted data for future quantum decryption.
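The threat is concrete because Shor’s algorithm reduces factoring, the hard problem underpinning RSA-style encryption, to finding the multiplicative order of a number, and only that order-finding step requires a quantum computer. As a purely illustrative sketch (not drawn from any system discussed in this article), the classical half of the reduction can be shown in a few lines of Python on a toy modulus, with the quantum step replaced by brute force:

```python
from math import gcd

def find_order(a, n):
    # Smallest r > 0 with a**r % n == 1. Brute-forced here;
    # this is the step Shor's algorithm speeds up exponentially.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    # Classical reduction from factoring n to order finding,
    # shown on a toy modulus (real RSA moduli are ~2048 bits).
    g = gcd(a, n)
    if g != 1:
        return g, n // g          # lucky guess already shares a factor
    r = find_order(a, n)
    if r % 2 == 1:
        return None               # unlucky choice of a; retry with another
    x = pow(a, r // 2, n)
    p, q = gcd(x - 1, n), gcd(x + 1, n)
    if p in (1, n):
        return None               # trivial factors; retry with another a
    return p, q

print(shor_factor(15, 7))   # (3, 5)
```

For a real 2048-bit modulus the brute-force loop is computationally hopeless, which is precisely the gap a utility-scale quantum computer would close; this is why the migration to post-quantum cryptography is treated as urgent even before such machines exist.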

Quantum sensing and navigation technologies also offer significant advantages in GPS-independent positioning, stealth detection via quantum radar, and underwater warfare through quantum magnetometers. These advances could potentially shift naval and aerial combat paradigms. QC’s computational power can also optimize military logistics, war-gaming simulations, and strategic decision-making by rapidly analyzing complex scenarios in real time. Quantum simulations further extend to material science, accelerating the development of next-generation armor, missile coatings, and energy storage solutions while enhancing nuclear weapons modeling, which could reshape arms control dynamics.

Even the field of AI stands to benefit: the synergy between QC and AI holds transformative potential, with quantum machine learning (QML) able to expedite AI training for autonomous systems, electronic warfare, and predictive analytics. While AI remains the dominant force in current warfare applications, however, QC represents a long-term strategic wildcard that could redefine military capabilities, requiring immediate investment and policy adaptation to maintain technological superiority. Its broader strategic implications include an emerging quantum arms race, shifts in intelligence and counterintelligence practices, and disruptions to traditional deterrence models as stealth technologies and encryption-based security frameworks become vulnerable.

Current Applications and Conclusions

Today, Ukraine and Gaza represent dramatic laboratories for understanding the effects of AI in contemporary armed conflicts. Ukrainian authorities, for example, have utilized Clearview AI, a facial recognition platform, to identify captured or deceased Russian soldiers, making it easier to notify families but also exerting psychological pressure and countering Russian propaganda about minimal soldier losses. Clearview AI has also been employed to identify potential Russian collaborators within Ukrainian territory by analyzing images and videos shared on social media. Ukraine has likewise used AI directly in battle.[8]

In Gaza, AI has been used for surveillance and population control, as well as for enhancing conventional warfare management and improving the efficiency and effectiveness of tactical-operational decision-making. The Israel Defense Forces (IDF) have increasingly integrated AI into their military operations, particularly in the Gaza Strip, with the aim of optimizing data collection and analysis to identify strategic targets. Among the systems used, the AI-based database known as Lavender has reportedly identified approximately 37,000 potential targets.[9] This system leverages machine-learning algorithms to analyze large volumes of data collected through various intelligence sources, an approach known as all-source intelligence. In 2023, the IDF revealed the use of the Gospel system to generate automated recommendations for targets associated with Hamas, based on data from various sources, including communications intercepts (SIGINT) and satellite or drone imagery (VISINT). During Operation Guardian of the Walls in 2021, the IDF’s cyber intelligence unit (Unit 8200) used Gospel in combination with other systems, including Alchemist and Depth of Wisdom, to identify and strike strategic targets. According to official sources, Gospel allowed real-time tracking of the communications of Hamas members, identifying their precise locations, while Depth of Wisdom allowed Unit 8200 to decipher encrypted signals, providing further details to confirm selected targets.

In all the above-mentioned cases, the use of AI (and, prospectively, QC) raises concerns about ethics and the proportionality of attacks. For instance, the automation of strategic decisions, such as those supported by Gospel, can reduce response times but entails risks of errors or false identifications, which could lead to targeting innocent civilians. Likewise, the combined use of SIGINT and VISINT can significantly improve precision but requires constant human supervision to avoid catastrophic errors.

As previously discussed, new technologies are unlikely to “change the battlefield” forever; kinetic force and the “boots on the ground” approach remain the main elements of the traditional battlefield. However, this should not prevent us from seeing the issues that arise from the use of AI, and even more so from QC.

Beyond large-scale data breaches, economic upheaval, and intelligence vulnerabilities, quantum computers could facilitate malicious activities such as breaking encryption, simulating chemical and biological weapons, or enabling autonomous drone attacks. Similarly, AI could amplify misinformation campaigns, enhance autonomous weaponry, and enable highly sophisticated cyberattacks, posing significant security threats. These possibilities raise urgent questions about who should govern these technologies, how to prevent their misuse, and what safeguards are necessary to mitigate black swan scenarios: unpredictable but catastrophic events with far-reaching consequences.

Historical precedents offer valuable insights into the need for global regulation. The development of nuclear technology underscored the importance of arms control agreements to prevent weaponization, while the rise of cybersecurity threats highlighted the necessity of proactive defense measures. Likewise, the space race and the internet revolution demonstrated the benefits and risks of rapid technological advancements, reinforcing the case for ethical frameworks and multilateral cooperation. To address these challenges, an AI & QC Arms Control Treaty (AQC Treaty) could establish legal constraints on their military use, including bans on fully autonomous lethal weapons and agreements to prevent quantum-enabled cyber warfare. A Global AI & QC Ethical Framework (GAQEF) could complement this treaty and guide responsible deployment by enforcing governance standards, mandating quantum-resistant encryption, and instituting certification processes for AI and QC military applications. Without such measures, the unchecked proliferation of these technologies could destabilize global security.

Policymakers and hyperscalers (i.e., Big Tech companies) today face the urgent task of learning from these precedents to ensure that QC, AI, and other disruptive technologies serve humanity’s best interests. This includes fostering global collaboration to prevent an arms race in AI or quantum applications. AI and QC might enable efficient and effective decision-making and provide tactical-operational superiority, but we must embed ethics and accountability into their design and develop safe, certified standards to protect vital services. The exclusion of human decision-making raises potential conflicts with Article 1(2) of the First Additional Protocol and the Preamble of the Second Additional Protocol to the Geneva Conventions, better known as the Martens Clause, which requires adherence to humanitarian principles when using technological tools in armed conflicts.[10]

Achieving a balance between market-driven interests and global benefits will require a multifaceted approach. Rigorous public debate and education are essential to ensure an informed citizenry that understands both the opportunities and risks. Transparent discussions among governments, researchers, and industry leaders can guide policies that maximize societal gains while minimizing dangers.


Disclaimer:

The views and opinions expressed in the INSIGHTS publication series are those of the individual contributors and do not necessarily reflect the official policy or position of Rabdan Security & Defense Institute, its affiliated organizations, or any government entity. The content published is intended for informational purposes and reflects the personal perspectives of the authors on various security and defence-related topics.




[1] The Economist, “The New Battlegrounds: The Future of War” (special report), 25 January 2018. URL: https://www.economist.com/special-report/2018/01/25/the-future-of-war

[2] Ibid., p. 21.

[3] P. Scharre, Army of None: Autonomous Weapons and the Future of War, W. W. Norton & Company, New York, 2018.

[4] E. N. Luttwak, “Toward Post-Heroic Warfare.” Foreign Affairs, vol. 74, no. 3, 1995, pp. 109–22.

[5] Our World in Data, “War and Peace.” URL: https://ourworldindata.org/war-and-peace

[6] C. Chou, J. Manyika and H. Neven, “The Race to Lead the Quantum Future.” Foreign Affairs, January 7, 2025.

[7] DARPA, “QBI: Quantum Benchmarking Initiative,” URL: https://www.darpa.mil/research/programs/quantum-benchmarking-initiative

[8] Limante, Agne. “Faces of War: Russia’s Invasion of Ukraine and Military Use of Facial Recognition Technology.” The Cambridge Handbook of Facial Recognition in the Modern State. Ed. Rita Matulionyte and Monika Zalnieriute. Cambridge: Cambridge University Press, 2024. 112–124. Print. Cambridge Law Handbooks.

[9] B. McKernan and H. Davies, “‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets,” The Guardian, 3 April 2024. URL: https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

[10] ICRC, International Humanitarian Law Database, “Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977” URL: https://ihl-databases.icrc.org/en/ihl-treaties/api-1977/article-1

