AI‑Enabled Cybersecurity, Agents, and Model Distillation

AI‑Enabled Cybersecurity Panel
The rapid diffusion of advanced AI models is reshaping the cyber threat landscape. Across the globe, the ways in which AI is weaponised differ dramatically, reflecting variations in digital maturity, institutional capacity, and geopolitical positioning. Four distinct contexts illustrate these divergences.
Highly Digitised Economies – Scale & Speed
In regions with deep digital infrastructure and high AI adoption—such as the United States and many EU nations—AI magnifies the scale and speed of attacks. Threat actors exploit model vulnerabilities and prompt injection in enterprise AI services, leverage generative code assistants for automated vulnerability discovery, and deploy AI-enhanced phishing and social engineering at industrial scale. Supply-chain attacks targeting cloud providers and foundation-model platforms compound the risk. Because AI is tightly integrated into critical systems, a single compromise can cascade across finance, healthcare, and essential services before defenders can respond.
Emerging Digital Economies – Asymmetry Over Scale
In many parts of Africa, South Asia, and Latin America, AI adoption is uneven. Institutions often rely on imported platforms with limited local oversight, and the threat surface is shaped by governance gaps rather than technical sophistication alone. Deployments proceed without robust red-team testing or security audits; government procurement frequently lacks technical due diligence; and cyber defences remain reactive, struggling to keep pace with evolving techniques. A breach in a digital ID system or mobile-payment infrastructure can rapidly erode public trust, undermining the very digital transformation these systems are meant to enable.
Low‑Capacity or Fragile States – Political Instability
Weak regulatory frameworks enable AI-enabled misinformation, automated surveillance, and deepfake-driven political manipulation. In these contexts, cyber incidents do not merely disrupt services; they trigger legitimacy crises that can destabilise fragile societies. The technical and the political collapse into each other, with synthetic media and automated influence operations accelerating cycles of distrust and unrest.
Geopolitical Positioning – Strategic Dependencies
States that are net importers of AI infrastructure face data-sovereignty risks, foreign platform leverage, and limited visibility into model training cycles. Their security posture depends on decisions made elsewhere, by actors with divergent interests. Conversely, sovereign AI powers confront offensive AI-cyber races and dual-use escalation dilemmas, where defensive capability and offensive potential become indistinguishable. Both positions carry distinct vulnerabilities: dependency on one side, arms-race instability on the other.
The Emerging Threat of Model Distillation Attacks
Model distillation, the practice of training a smaller model to reproduce the behaviour of a larger one, offers efficiency gains but also opens a new attack vector. By querying a public API at scale and training a student model on the responses, adversaries can extract proprietary capabilities that took years and significant investment to develop. Anthropic's recent post on detecting and preventing distillation attacks provides a valuable framework for mitigation, outlining approaches that include monitoring query patterns for anomalous extraction attempts, applying rate limits and output sanitisation to reduce leakage, and watermarking model outputs to trace illicit copies. These technical safeguards must evolve continuously, as extraction techniques grow more sophisticated.
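The query-pattern monitoring idea can be made concrete with a toy sketch. Distillation-style extraction tends to look like sustained high volume combined with unusually high prompt diversity (an adversary sweeps the input space rather than repeating a handful of queries). The `ExtractionMonitor` class below, with its window size and thresholds, is purely illustrative and not drawn from Anthropic's post or any production system:

```python
import hashlib
import time
from collections import defaultdict, deque


class ExtractionMonitor:
    """Toy per-client monitor that flags query patterns resembling
    large-scale distillation: high volume plus high prompt diversity
    within a sliding time window. All thresholds are illustrative."""

    def __init__(self, window_s=3600, min_queries=1000, diversity_ratio=0.9):
        self.window_s = window_s              # sliding window in seconds
        self.min_queries = min_queries        # volume needed before flagging
        self.diversity_ratio = diversity_ratio  # unique / total prompts
        self.events = defaultdict(deque)      # client_id -> deque[(ts, hash)]

    def record(self, client_id, prompt, now=None):
        """Log one query; return True if the client now looks suspicious."""
        now = time.time() if now is None else now
        prompt_hash = hashlib.sha256(prompt.encode()).hexdigest()
        q = self.events[client_id]
        q.append((now, prompt_hash))
        # Evict entries that have aged out of the window.
        while q and q[0][0] < now - self.window_s:
            q.popleft()
        return self.is_suspicious(client_id)

    def is_suspicious(self, client_id):
        q = self.events[client_id]
        if len(q) < self.min_queries:
            return False
        unique = len({h for _, h in q})
        # Near-total uniqueness at high volume suggests systematic
        # coverage of the input space rather than ordinary reuse.
        return unique / len(q) >= self.diversity_ratio
```

A flagged client could then be subjected to the rate limiting and output sanitisation the post describes; real detectors would of course use richer features (embedding-space coverage, topic spread) rather than exact-hash uniqueness alone.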
Panel Discussion – AI‑Enabled Cybersecurity
A recent panel featuring Amlan Mohanty, Amanda Craig Deckard, Jonas Kgomo, Supheakmungkol Sarin, PhD, and Peter Mattson explored these themes in depth. The discussion traversed the spectrum from frontier-model vulnerabilities to the governance challenges facing emerging economies. For a full recap, see the Frontier Model Forum report on managing advanced cyber risks in frontier AI frameworks.
Closing Thoughts
Across all contexts, AI compresses the distance between vulnerability discovery and exploitation. The consequences, however, diverge sharply. Mature economies face systemic disruption at scale; emerging economies grapple with institutional fragility that technical solutions alone cannot address; fragile states risk political destabilisation where cyber and information operations merge. Effective resilience must be context-aware, blending technical safeguards with local capacity-building and governance reforms that acknowledge the uneven geography of AI's diffusion.