Decision-making in national security is among the highest-stakes endeavours a government undertakes. Misjudgement can lead to diplomatic blunders, violent conflict, and profound geopolitical shifts. Despite the rigorous training and vast resources dedicated to intelligence gathering and policy formulation, human error remains an inescapable variable. Policymakers and intelligence analysts, like all individuals, are susceptible to cognitive biases, which can distort threat perceptions, skew intelligence assessments, and lead to strategic errors. In security settings, where uncertainty and pressure are high, the consequences of flawed thinking are often amplified.[1]
The prevalence of cognitive bias in national security decision-making is not coincidental. The stakes in national security are exceptionally high; policymakers must navigate immense ambiguity about their adversaries’ intentions, capabilities, and timelines, making them more likely to rely on mental shortcuts or heuristics. Compounding this is time pressure: decisions often need to be made urgently in response to fast-moving developments.
The intelligence community operates within highly secretive and compartmentalized structures, which, while necessary for security, also foster isolation and groupthink by limiting cross-agency collaboration and outside scrutiny. Political incentives further exacerbate these issues: in some cases, administrations may favor intelligence that aligns with their strategic goals, discouraging dissent or alternative views.
To Err Is Human: The Psychology Behind Security Blind Spots
Cognitive biases are mental shortcuts that help individuals make decisions quickly, especially under conditions of uncertainty. While such heuristics may serve well in everyday life, they become liabilities in complex policy environments, particularly in the realm of national security, where stakes are high, time is limited, and ambiguity reigns.[2]
Confirmation bias, for instance, leads policymakers and intelligence analysts to favor information that aligns with their pre-existing beliefs while discounting data that contradicts those views. This was evident in the lead-up to the 2003 Iraq War, when President George W. Bush’s administration selectively highlighted intelligence suggesting Iraqi leader Saddam Hussein possessed weapons of mass destruction (WMD). Intelligence agencies, particularly in the United States and United Kingdom, were strongly influenced by the prevailing political narrative that Hussein possessed WMDs. Evidence supporting this narrative was amplified, despite United Nations inspections that found no such evidence and despite dissent within the Central Intelligence Agency. Under pressure, analysts may have unconsciously sought to confirm the desired outcome.[3] The “Curveball” informant’s unreliable testimony was given undue weight because it fit the pre-existing belief. Curveball, an Iraqi defector whose real name was Rafid Ahmed Alwan al-Janabi, provided fabricated claims about mobile biological weapons labs in Iraq, which were later discredited but had already been prominently cited by U.S. officials, including in Secretary of State Colin Powell’s 2003 address to the United Nations. His false testimony became a cornerstone of the flawed intelligence case for war, illustrating how cognitive bias and the desire for confirming evidence can override rigorous vetting and lead to disastrous policy choices.[4]
Similarly, the availability heuristic is a mental shortcut that involves estimating the probability or frequency of an event based on how easily examples come to mind. In security, if recent, vivid, or highly publicized events (e.g., a terrorist attack) are readily recalled, analysts might overestimate the likelihood of similar future events, even if statistical data suggests otherwise.[5] Its psychological root is the brain’s preference for readily accessible information, often influenced by recency and emotional salience. For instance, in the lead-up to the 2021 U.S. withdrawal from Afghanistan, officials underestimated the speed of the Taliban’s advance partly due to overreliance on outdated assumptions and an inability to process rapidly shifting realities on the ground.
Anchoring bias can be equally damaging. It occurs when individuals rely too heavily on the first piece of information they receive when making judgments or decisions, even if that initial information is later shown to be incomplete or inaccurate. During the early stages of the COVID-19 pandemic, many national leaders, including in the U.S. and parts of Europe, anchored their risk assessments to early assumptions that the virus was no more dangerous than the flu, delaying crucial mitigation strategies. Once the initial comparison to seasonal influenza took root in public discourse and elite messaging, subsequent warnings from epidemiologists and new evidence about the virus’s higher transmission rate, asymptomatic spread, and fatality risk were often discounted or downplayed. This early framing set the tone for public messaging, policy delays, and even procurement decisions. The anchoring effect is particularly powerful because it shapes the mental baseline against which all future information is judged, making it difficult for decision-makers to fully adjust their evaluations as the facts evolve. In the case of COVID-19, the cost of anchoring to flawed early analogies was profound, contributing to delayed lockdowns, insufficient testing infrastructure, and ultimately, higher mortality rates.[6]
Groupthink, a powerful dynamic in hierarchical and insular decision-making environments, discourages dissent and critical evaluation. The phenomenon occurs when the desire for conformity within a group produces irrational or suboptimal decision-making outcomes. In national security, this often manifests when a tight-knit advisory team or intelligence committee prioritizes consensus over critical evaluation of alternatives, suppressing dissenting opinions. Its psychological basis stems from the need for social acceptance and the avoidance of conflict within a group. The failure to anticipate the Iranian revolution in 1979 or the rapid collapse of U.S.-supported regimes in Afghanistan and South Vietnam can in part be attributed to an overreliance on consensus thinking among officials unwilling to challenge dominant narratives.[7]
Hierarchical and tradition-bound organizational cultures within defense and intelligence agencies often reward conformity over critical questioning. The 1980s failure to anticipate the collapse of the Soviet Union stemmed in part from entrenched institutional thinking that assumed the USSR’s stability and rationality, despite mounting internal fractures. Taken together, these dynamics create a “perfect storm” in which cognitive and institutional biases become embedded in the machinery of national security policymaking, making strategic misjudgments and intelligence failures more likely.[8]
These biases are not isolated accidents. They are embedded in the institutional logic of national security settings and heightened by fear, worst-case scenario planning, bureaucratic pressures to conform, and the desire to avoid being the outlier or the bearer of bad news. Such distortions encourage overreactions to ambiguous threats, suppression of alternative hypotheses, and flawed decision-making with far-reaching consequences.
Scaring Us to Death: How Fear-Driven Thinking Leads to Over-Militarization
Security policy is often shaped by the mantra “better safe than sorry.” While precautionary thinking can be essential in protecting national interests, it frequently leads to inflated threat perceptions, especially when coupled with emotionally charged fears and institutional biases. Worst-case scenario planning, although at times justified, can distort strategic assessments and result in over-militarized responses to challenges that may be more effectively addressed through diplomatic engagement, intelligence cooperation, or developmental aid.[9] Several structural and psychological factors contribute to the persistence of these distortions in national security settings. The high stakes and uncertainty inherent in predicting future threats, where the cost of failure may be catastrophic, create a fertile environment for cognitive shortcuts and intuitive reasoning. In such scenarios, decision-makers often seek patterns or closure, relying on easily recalled or emotionally salient examples rather than rigorous analysis.[10]
Time pressure during crises only exacerbates this tendency. The secrecy and compartmentalization of intelligence work restrict external scrutiny, allowing flawed interpretations to go unchallenged. Organizational cultures in intelligence and defense bureaucracies, which often emphasize hierarchy, loyalty, and conformity, can reinforce groupthink and deter dissenting views. Emotional involvement, such as fear, nationalism, or outrage following an attack, can further cloud judgment, making threats appear more severe or imminent than they truly are. These dynamics feed into a broader pattern of inflated threat perception, where policymakers, consciously or not, tend to overstate an adversary’s capabilities. This often leads to a security dilemma, in which one state’s defensive preparations are interpreted by others as offensive posturing, fuelling escalation and mistrust.[11]
The 2003 U.S. invasion of Iraq exemplifies these flaws. Fuelled by confirmation bias and anxiety over a hypothetical “mushroom cloud,” the Bush administration treated ambiguous and selectively interpreted intelligence about Saddam’s WMDs as definitive. Dissenting voices within the CIA and from international bodies like the International Atomic Energy Agency, which found no evidence of an active nuclear program, were sidelined or ignored. The belief in an imminent, catastrophic threat drove policymakers toward a preemptive military action that bypassed viable diplomatic alternatives and carried enormous unforeseen consequences, including regional instability and the rise of extremist groups like ISIS.[12]
Intelligence Failures: When Analysis Becomes Advocacy
Biases are deeply embedded within intelligence institutions, often distorting analysis at the most critical moments. Intelligence assessments are particularly vulnerable to mirror imaging, namely the flawed assumption that adversaries will think and act as we do. Mirror imaging contributed to the U.S. intelligence community’s failure to foresee the Iranian revolution in 1979 and the rapid fall of Kabul in 2021, as analysts mistakenly assumed that U.S.-aligned elites and institutions would retain legitimacy in the eyes of local populations.[13]
Politicization, when intelligence is shaped to fit political objectives rather than inform them, further compounds the problem. A classic example was the 2002–2003 run-up to the Iraq War, when analysts were pressured to find evidence of WMDs and caveats or dissenting views were suppressed in favor of building a case for war. Intelligence estimates were selectively emphasized to support the Bush administration’s preferred narrative, while contradictory assessments, such as those from the State Department’s Bureau of Intelligence and Research (INR), which questioned the existence of active nuclear weapons programs in Iraq, were marginalized. Another illustrative case is the Trump administration’s handling of intelligence concerning the growing threat of white supremacist violence, which was downplayed in favor of focusing on politically convenient targets such as antifa or immigration-related security concerns. These instances reveal how politicization not only distorts the intelligence cycle but also erodes institutional credibility, weakens public trust, and increases the likelihood of strategic surprise by encouraging decision-makers to see the world as they wish it to be rather than as it truly is.[14]
A recent article in Foreign Affairs magazine entitled “Trump Is Breaking American Intelligence” by David V. Gioe and Michael V. Hayden offers a contemporary illustration of how such dynamics can erode intelligence integrity. During Trump’s first presidency, the intelligence community faced intense political pressure to align assessments with the administration’s worldview. Agencies like the CIA and FBI were marginalized in favour of loyalists who prioritized ideological alignment over analytic rigor. Intelligence warnings about the COVID-19 pandemic were minimized, and briefings on domestic extremism were disregarded because they contradicted the president’s political messaging. In this climate, analysts self-censored or avoided presenting inconvenient truths.[15]
This case demonstrates that when intelligence is manipulated to serve advocacy rather than objectivity, leaders receive distorted, incomplete pictures of reality. This dramatically increases the risk of strategic surprise, operational failure, and long-term reputational damage to intelligence institutions themselves.
Mitigating Bias: Towards Better Security Decision-Making
Cognitive biases are not inevitable or unmanageable. Several tools and institutional strategies can reduce their influence on national security decision-making, though their success depends heavily on political will and institutional integrity.
One effective approach is red teaming, where independent analysts are tasked with challenging prevailing assumptions by thinking like an adversary. This technique was notably used by the U.S. military in advance of the Iraq surge in 2007 to anticipate insurgent reactions and adapt counterinsurgency strategies.[16]
Similarly, structured analytic techniques such as “key assumptions checks,” “analysis of competing hypotheses,” and “premortem analysis” can introduce methodological rigor and discipline into assessments. Such practices were institutionalized to some extent after the 9/11 and Iraq WMD intelligence failures. These methods help analysts avoid prematurely settling on a single narrative and encourage continual testing of alternatives.
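For readers unfamiliar with how an analysis of competing hypotheses (ACH) exercise is scored, the sketch below is a minimal, illustrative Python example rather than an official tool: the hypotheses, evidence items, and consistency ratings are hypothetical, and real ACH practice depends on analyst judgment rather than automated scoring. The core discipline is that analysts rate every piece of evidence against every hypothesis and focus on disconfirmation, retaining the hypothesis with the least inconsistent evidence rather than the one with the most support.

```python
# Minimal, illustrative sketch of Analysis of Competing Hypotheses (ACH) scoring.
# All hypotheses, evidence items, and ratings below are hypothetical examples.

# Consistency ratings: how well each piece of evidence fits each hypothesis.
# "CC" = very consistent, "C" = consistent, "N" = neutral,
# "I" = inconsistent, "II" = very inconsistent.
# Only inconsistency is penalized; consistent evidence does not add credit.
WEIGHTS = {"CC": 0, "C": 0, "N": 0, "I": -1, "II": -2}

hypotheses = [
    "H1: Adversary is preparing an offensive",
    "H2: Adversary is conducting a routine exercise",
    "H3: Adversary is signalling for diplomatic leverage",
]

# Each row: (evidence description, ratings against H1..H3).
evidence = [
    ("Troop movements near the border",      ["C",  "C",  "C"]),
    ("No mobilization of reserve logistics", ["II", "C",  "C"]),
    ("Hostile rhetoric in state media",      ["C",  "N",  "CC"]),
    ("Exercise announced months in advance", ["I",  "CC", "I"]),
]

def inconsistency_scores(hypotheses, evidence):
    """Sum only the inconsistency penalties for each hypothesis."""
    scores = [0] * len(hypotheses)
    for _, ratings in evidence:
        for i, rating in enumerate(ratings):
            scores[i] += WEIGHTS[rating]
    return scores

scores = inconsistency_scores(hypotheses, evidence)
# Least negative score = least evidence against it; that hypothesis is
# retained for further testing, not declared "proven".
for hypothesis, score in sorted(zip(hypotheses, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:>3}  {hypothesis}")
```

The design point of this toy example is that the score only penalizes inconsistency: a hypothesis cannot “win” simply by accumulating confirming evidence, which is precisely the habit that confirmation bias erodes.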
Encouraging diverse perspectives is also crucial. The failure to foresee the collapse of the Shah’s regime in Iran in 1979 partly stemmed from a lack of culturally and linguistically informed analysts who could interpret signals from within Iranian society. Incorporating experts from different ethnic, regional, or academic backgrounds can reduce groupthink and improve situational awareness.
Wargaming and scenario planning, widely used in NATO and by the U.S. Department of Defense, allow policymakers to stress-test assumptions in simulated environments, exposing the limitations of their strategies under varied conditions.[17]
Equally important are institutional safeguards such as robust congressional or parliamentary oversight, independent inspectors general, and whistleblower protections.[18] These mechanisms are essential in ensuring that dissenting views and inconvenient analyses are not silenced or ignored. Without strong structural protections, even the most sophisticated analytic techniques can be overridden by political expediency, rendering institutions blind to emerging threats.
Security decisions will always involve uncertainty. But when decisions are driven more by psychological shortcuts and political imperatives than by evidence and critical thinking, the risks of miscalculation rise exponentially. As the U.S. experience under Trump illustrates, politicizing intelligence not only erodes public trust and global cooperation, it also blinds decision-makers to emerging threats.
To avoid future Iraq-style failures or blind spots like January 6, security institutions must inoculate themselves against the cognitive distortions that come naturally to all humans but are especially dangerous in national security. The stakes are too high to allow “seeing what we want to see” to substitute for seeing what is.
Disclaimer:
The views and opinions expressed in the INSIGHTS publication series are those of the individual contributors and do not necessarily reflect the official policy or position of Rabdan Security & Defense Institute, its affiliated organizations, or any government entity. The content published is intended for informational purposes and reflects the personal perspectives of the authors on various security and defence-related topics.
[1] Stuart, Douglas T., ‘Foreign-Policy Decision-Making’, in Christian Reus-Smit and Duncan Snidal (eds), The Oxford Handbook of International Relations (2008; online edn, Oxford Academic, 2 Sept. 2009), https://doi.org/10.1093/oxfordhb/9780199219322.003.0033
[2] Mintz, A. and DeRouen Jr, K., ‘Biases in Decision Making’, in: Understanding Foreign Policy Decision Making, Cambridge University Press, 2010, pp. 38-54.
[3] Jervis, R. (2010) Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War, Cornell University Press, pp. 123-155, http://www.jstor.org/stable/10.7591/j.ctt7z6f8
[4] Chulov, Martin and Helen Pidd, ‘Curveball: How US was duped by Iraqi fantasist looking to topple Saddam’, The Guardian, 15 February 2011, https://www.theguardian.com/world/2011/feb/15/curveball-iraqi-fantasist-cia-saddam
[5] Mintz, A. and DeRouen Jr, K., ‘Types of Decisions and Levels of Analysis in Foreign Policy Decision Making’, in: Understanding Foreign Policy Decision Making, Cambridge University Press, 2010, pp. 15-37.
[6] David Paulus, Gerdien de Vries, Marijn Janssen, and Bartel Van de Walle, ‘The influence of cognitive bias on crisis decision-making: Experimental evidence on the comparison of bias effects between crisis decision-maker groups,’ International Journal of Disaster Risk Reduction, Volume 82, 2022, https://www.sciencedirect.com/science/article/pii/S2212420922005982
[7] Schafer, Mark and Scott Crichlow (2010) Groupthink Versus High-Quality Decision Making in International Relations, Columbia University Press (Chapter 8, ‘The 2003 War in Iraq: How Flawed Decision Making Led to Critical Failures’).
[8] Cox, M. (2008) ‘1989 and why we got it wrong’ (Working Paper Series of the Research Network 1989, 1), Berlin, https://nbn-resolving.org/urn:nbn:de:0168-ssoar-16282
[9] Houghton, D. (2017, September 26) ‘Crisis Decision Making in Foreign Policy’, Oxford Research Encyclopedia of Politics, https://oxfordre.com/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-403.
[10] Pursiainen, C., Forsberg, T. (2021) ‘Beliefs That Shape Decisions’, In: The Psychology of Foreign Policy. Palgrave Studies in Political Psychology. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-79887-1_4
[11] Bell, M. S., & Quek, K. (2025) ‘How Intractable is Security Dilemma Thinking?’, Journal of Conflict Resolution, 0(0). https://doi.org/10.1177/00220027251356279
[12] Pursiainen, C., Forsberg, T. (2021) ‘Biased Decisions’, In: The Psychology of Foreign Policy. Palgrave Studies in Political Psychology. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-79887-1_5
[13] Johnston Conover, P., Mingst, K. A., & Sigelman, L. (1980) ‘Mirror Images in Americans’ Perceptions of Nations and Leaders during the Iranian Hostage Crisis’, Journal of Peace Research, 17(4), 325-337. https://doi.org/10.1177/002234338001700404
[14] Gvosdev, N.K., Blankshain, J.D. and Cooper, D.A. (2019) Decision-Making in American Foreign Policy. Cambridge: Cambridge University Press, pp.14-51
[15] David V. Gioe and Michael V. Hayden, ‘Trump Is Breaking American Intelligence. Politicizing the System Makes Dangerous Failures More Likely’, Foreign Affairs, July 2, 2025, https://www.foreignaffairs.com/united-states/trump-breaking-american-intelligence
[16] Ackerman, G., & Clifford, D. (2021, August 31) ‘Red Teaming and Crisis Preparedness’, Oxford Research Encyclopedia of Politics, https://oxfordre.com/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-
[17] Fayet, Héloïse and Amélie Férey, ‘Imagining Beyond the Imaginary: The Use of Red Teaming and Serious Games in Anticipation and Foresight’, Briefings de l’IFRI, March 30, 2023, https://www.ifri.org/sites/default/files/migrated_files/documents/atoms/files/fayet-ferey_imaginingbeyondimaginary_2023.pdf
[18] Council of Europe Commissioner for Human Rights, ‘Democratic and Effective Oversight of National Security Services’, Issue Paper, 2015, https://rm.coe.int/16806daadb