A long-standing defining characteristic of terrorist actors is their ability to appropriate existing knowledge and turn it against states. Artificial intelligence follows the same trajectory. Unlike earlier innovations such as the Internet or social media, which did not directly transform the military capabilities of armed groups, artificial intelligence functions as a genuine force multiplier: it enhances combat potential and, in doing so, opens the possibility of entirely new categories of threats.
Ideology, as is well known, constitutes the primary fuel of terrorist organizations; without it, they cannot exist. Artificial intelligence now facilitates the dissemination of sectarian discourse. Telegram channels circulate AI-generated images that manipulate public perception of specific events. These images are not intended to inform but to transform facts into heroic symbols: information no longer describes reality; it mythologizes it. In Nigeria, Boko Haram has used AI-generated virtual news anchors to broadcast entirely fabricated newscasts promoting its nihilistic ideological narrative. The diagnosis is clear: militant groups in northern Nigeria, notably Boko Haram and the Islamic State West Africa Province (ISWAP), increasingly exploit AI tools to recruit and to coordinate their radicalization campaigns.
Today, the challenge for counterterrorism agencies lies in distinguishing propaganda initiated by human militants from propaganda generated by AI agents that will soon be capable of producing proselytizing content autonomously. This is a deeply concerning development. Armed groups will increasingly be able to delegate propaganda activities and certain pre-recruitment tasks to artificial intelligence: automated systems will contact internet users and initiate preliminary exchanges, and if an individual proves receptive to nihilistic narratives, a human militant will then take over to complete recruitment into the organization.
Technical Acceleration and Organizational Transformation
Access to combat technologies has long been reserved for professional fighters. That era is coming to an end. While the internet had already facilitated target identification and access to sensitive information, artificial intelligence now provides terrorists with immediate access to highly specialized knowledge. Training materials previously existed in the form of photocopied manuals passed hand to hand. Artificial intelligence now acts as a powerful accelerator of learning. Terrorism is becoming less dependent on external human resources. Operational planning relies less on human interaction and increasingly on computer-mediated self-learning processes.
In the 2.0 world of armed violence, the former model was that of loosely connected lone wolves. Today's terrorist is cognitively assisted. He is no longer alone: he interacts with his computer, which responds; he can request guidance and expect structured answers. Commercial generative AI models do include safeguards and will not answer direct questions such as "how to build a bomb in a kindergarten." They can, however, explain how to approach such a location, describe building layouts, outline the social profiles of families who send their children there, and identify explosive products sold in nearby supermarkets. This marks a major shift: a radicalized individual can now plan complex operations without direct human mediation.
Counterterrorism
Because the technological evolution of artificial intelligence is unfolding at extreme speed, counterterrorism agencies face a mutating threat that they must track as closely as possible in order to anticipate it. This has always been the case; what has changed is the pace, which is no longer measured in decades but in weeks.
The technological progress of criminal groups is now directly correlated with computing power. States are creating structural dependencies with public and private companies. Counterterrorism is no longer limited to traditional law-enforcement instruments. The battlefield is digital. Cooperation between states and private AI actors has now become a central paradigm of this struggle. At first glance, these partnerships appear mutually beneficial: public authorities gain access to predictive analytics, biometric recognition, and language processing tools of unprecedented precision, while companies benefit from contracts and access to sensitive data that allow them to refine their models.
However, this symbiosis remains marked by asymmetry. By outsourcing entire segments of national security, states do not merely delegate operational tasks; they relinquish part of their strategic autonomy and become captive clients for matters as basic as software updates or technical fixes. This is a structural problem in Western systems: the state entrenches its own strategic dependence on private companies.
The threat is no longer limited to the functioning of computer systems alone. Artificial intelligence itself is now becoming a target. In the 1970s, insurgent forces sought either to seize power or to control illicit trafficking. Today, terrorist actors seek to disrupt the system itself, in this case AI, and to undermine it from within. Attacking its fundamental ability to function means striking at the core of the state’s capacity to govern.
The security of artificial intelligence data centers raises a fundamental question of responsibility. These infrastructures support vital state functions, yet their protection relies primarily on private actors. This dissociation remains sustainable only as long as no direct attack compromises the security of these sites and, by extension, the functioning of our societies. A structural choice therefore emerges. Either AI security remains entrusted to private companies under reinforced regulation, or the state recognizes these infrastructures as strategic and assumes their protection directly.
Data centers now occupy a gray zone, neither fully sovereign nor strictly commercial. This ambiguity creates a structural vacuum of responsibility. History has repeatedly shown that armed actors systematically exploit security or institutional vacuums. One certainty remains. The security of artificial intelligence must be conceived as a matter of sovereignty. As long as this shift is not fully acknowledged, the material foundations of AI will remain vulnerable. This requires political recognition and institutional commitment. It marks the opening of a new doctrinal chapter for intelligence agencies specializing in counterterrorism.