The AI Revolution in Cyber and Strategy: A Double-Edged Sword
Artificial intelligence (AI) is rapidly transforming numerous aspects of our lives, and its impact on the critical domains of cybersecurity and military strategy is proving to be particularly profound. As frontier AI models become increasingly capable, they present a double-edged sword, offering unprecedented opportunities for enhancing defenses while simultaneously enabling more sophisticated and efficient attacks. Understanding this duality and its implications is crucial for navigating the evolving security landscape.
AI as a Force Multiplier in Cyberattacks:
The potential for AI to amplify offensive cyber capabilities is a growing concern. Researchers at Google DeepMind have proposed a novel framework to systematically evaluate these emerging capabilities across the entire cyberattack chain. Their analysis of over 12,000 real-world attempts to use AI in cyberattacks, catalogued by Google’s Threat Intelligence Group, highlights several key areas of impact.
AI offers a capability uplift, boosting the cyber skills of a wider range of actors and allowing even those with limited expertise to launch sophisticated attacks. Furthermore, AI can provide a throughput uplift, expanding the scale and accelerating the speed at which attacks can be executed. Perhaps most concerning are the novel risks from autonomous systems, where AI can power automated reconnaissance, social engineering, and even autonomous cyber agents, significantly increasing the effectiveness and stealth of attacks.

To better understand and prepare for these threats, the proposed framework adapts established cybersecurity frameworks like the Cyberattack Chain (Lockheed Martin, 2025) and the MITRE ATT&CK framework (Strom et al., 2018). This structured approach facilitates a comprehensive evaluation of AI cyberattack capabilities across different phases, such as intelligence gathering, vulnerability exploitation, and malware development. By analyzing these phases, researchers can identify potential bottlenecks where AI-driven cost disruptions are most likely to occur.
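To make this concrete, a minimal sketch of what such a phase-structured evaluation might look like in code follows. The phase names and scores here are purely illustrative (loosely echoing Cyberattack Chain stages), not the DeepMind framework's actual schema:

```python
from dataclasses import dataclass

@dataclass
class PhaseEvaluation:
    """Illustrative evaluation results for one attack-chain phase."""
    phase: str
    challenges_passed: int
    challenges_total: int

    @property
    def pass_rate(self) -> float:
        return self.challenges_passed / self.challenges_total

def summarize(evals: list[PhaseEvaluation]) -> dict[str, float]:
    """Map each phase to the model's pass rate, so evaluators can
    see in which phases AI currently provides the most uplift."""
    return {e.phase: round(e.pass_rate, 2) for e in evals}

# Hypothetical results: strong automated reconnaissance, weak
# vulnerability exploitation.
evals = [
    PhaseEvaluation("reconnaissance", 8, 10),
    PhaseEvaluation("exploitation", 2, 10),
]
print(summarize(evals))  # {'reconnaissance': 0.8, 'exploitation': 0.2}
```

Structuring results per phase, rather than as a single aggregate score, is what allows the bottleneck analysis described above.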
To evaluate these potential disruptions, a new AI cyber capability benchmark consisting of 50 novel challenges spanning various attack phases was developed. The challenges are newly written rather than drawn from publicly available sources, mitigating the risk of benchmark contamination. Initial evaluations on experimental models like Gemini 2.0 Flash suggest that while current frontier AI capabilities may not yet enable disruptive end-to-end attacks or significantly assist with high-impact cyberattacks, they do allow threat actors greater speed, scale, and throughput.
Targeted Defenses and the Role of Evaluation Frameworks:
Recognizing the growing threat, the framework emphasizes the importance of prioritizing targeted mitigations and conducting AI-enabled adversary emulation to support red teaming. By structuring the results of cyber capability evaluations using an attack chain framework, organizations can map emerging AI-enabled cyber capabilities onto specific phases of an attack. This helps identify gaps in defenses and allows for a more focused allocation of resources to counter the most critical threats.
Various methods are being employed to evaluate the cyber capabilities of AI models. Capture the Flag (CTF) challenges are a versatile format that allows for the customization of various cyber skills and difficulty levels. Multiple-choice question benchmarks offer measurability and scalability, although crafting questions that accurately reflect the offensive cyber domain and resist memorization remains a challenge. Emerging research also focuses on scaffolding and capability elicitation techniques to obtain upper-bound estimates of model capabilities, with systems ranging from lightweight action-observation loops to more comprehensive agent systems with access to traditional offensive cybersecurity tools.
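The "lightweight action-observation loop" mentioned above can be sketched as follows. This is an illustrative harness, not the evaluation systems used in the research: the model and environment here are stubs standing in for an LLM call and a sandboxed command executor:

```python
# Minimal sketch of an action-observation loop for capability
# elicitation: the model proposes an action, the environment
# executes it, and the observation is fed back to the model.
from typing import Callable

def run_loop(model: Callable[[str], str],
             execute: Callable[[str], str],
             goal: str,
             max_steps: int = 5) -> list[tuple[str, str]]:
    """Alternate model actions and environment observations until
    the model signals completion or the step budget runs out."""
    transcript: list[tuple[str, str]] = []
    observation = f"GOAL: {goal}"
    for _ in range(max_steps):
        action = model(observation)
        if action == "DONE":
            break
        observation = execute(action)
        transcript.append((action, observation))
    return transcript

# Hypothetical stubs for a toy CTF-style task, for illustration only.
def stub_model(obs: str) -> str:
    return "DONE" if "flag{" in obs else "cat flag.txt"

def stub_env(cmd: str) -> str:
    return "flag{example}" if cmd == "cat flag.txt" else "error"

print(run_loop(stub_model, stub_env, "read the flag"))
# [('cat flag.txt', 'flag{example}')]
```

More comprehensive agent systems layer tool access, memory, and retry logic on top of this same basic loop.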
A key objective of this research is to bridge the gap between identifying risks through evaluations and empowering defenders with actionable insights. Understanding how AI capabilities will impact the cost to execute specific attack patterns is crucial for informing defensive strategies. This focus on the end-to-end attack chain and bottleneck analysis enables defenders to prioritize where to deploy targeted defenses effectively.
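One way to read "cost to execute" and bottleneck analysis is as a comparison of estimated attacker effort per phase with and without AI assistance. The sketch below uses entirely hypothetical numbers and phase names to illustrate the idea; it is not the framework's actual methodology:

```python
# Hypothetical attacker effort (hours) per attack phase, without
# and with AI assistance. All figures are invented for illustration.
baseline_hours = {
    "reconnaissance": 40,
    "phishing_lure_creation": 20,
    "exploit_development": 120,
}
with_ai_hours = {
    "reconnaissance": 8,
    "phishing_lure_creation": 2,
    "exploit_development": 100,
}

def cost_disruption(baseline: dict, with_ai: dict) -> dict:
    """Fractional cost reduction per phase (0 = no uplift)."""
    return {p: round(1 - with_ai[p] / baseline[p], 2) for p in baseline}

disruption = cost_disruption(baseline_hours, with_ai_hours)
most_disrupted = max(disruption, key=disruption.get)
print(disruption)      # per-phase fractional cost reduction
print(most_disrupted)  # phase where AI cuts attacker cost the most
```

Under these invented numbers, phishing-lure creation shows the largest cost reduction, so a defender would prioritize mitigations there, while exploit development remains a relative bottleneck for the attacker.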
Strategic Implications of Military AI: Beyond Cyber:
The impact of AI extends beyond the realm of cyber warfare and carries significant strategic implications for military affairs at national and international levels. A study commissioned by the UK Ministry of Defence (MOD) and the Foreign, Commonwealth and Development Office (FCDO) by RAND Europe aimed to develop a conceptual framework for understanding these emerging strategic risks and opportunities.
AI is considered a set of general-purpose technologies (GPTs), primarily software-driven, which are proliferating rapidly and widely, with innovation largely driven by the private sector. This differs significantly from traditional military technologies, which were primarily developed by governments and defense establishments. The military applications of AI are not solely determined by the technology itself but are also shaped by the operational, organizational, and cultural contexts in which they are developed and deployed.
The RAND Europe framework categorizes the strategic impacts of military AI across several levels, including the national level (impact on individual actors) and the international level (impact on the balance of power and stability). It also considers the impact across the full continuum of cooperation, competition, and conflict.
At the national level, AI can significantly impact an actor's potential and propensity for strategic advantage. This includes the impact on society, the economy, the defense enterprise, and military capabilities. AI can transform the efficiency and productivity of the defense enterprise, influencing strategic bandwidth and competence. In terms of military capability, AI offers opportunities for enhanced decision-making, improved intelligence analysis, and the development of new autonomous systems. AI tools can also play a vital role in strategic communications and information operations.
However, the pursuit of first-mover advantage in AI also presents risks. Given the rapid diffusion of AI as a GPT, some advantages may prove fleeting. Furthermore, differing ethical stances on AI deployment could create complexities, although it is not certain that less ethical approaches will necessarily yield a decisive long-term advantage.
At the international level, military AI has the potential to alter the sources and balance of power between actors. This includes the intensification of competition between major powers like the US and China, driven by fears of being outstripped in AI capabilities. The proliferation of AI also empowers non-state actors, including extremist organizations, to challenge the capabilities of state militaries.
Navigating the Future: Governance and Collaboration:
Given the profound implications of AI in cyber and strategy, effective governance and international collaboration are essential. The current global governance architecture for military AI is nascent. However, lessons can be learned from other domains like nuclear arms control and space. Confidence-building measures and open communication channels are crucial for mitigating the risks of misunderstanding and escalation.
A multi-faceted approach involving governments, commercial AI firms, academia, and civil society is necessary to navigate this complex landscape. This includes fostering a better collective understanding of the risks and opportunities, bridging the divide between near-term and long-term concerns, and promoting responsible development and deployment of AI. Ultimately, balancing the pursuit of strategic advantage with the imperative of maintaining stability and security in the age of AI will require a concerted global effort.