When it comes to artificial intelligence, there’s no denying its transformative power. In fields ranging from medicine to logistics, AI has redefined the art of the possible. But in cybersecurity, AI is less a revolution and more an evolution—a tool in a game that has always been two-sided.
The reason is what I call the symmetric advantage. Cybersecurity is, at its core, a battle of adaptation. Adversaries evolve their tactics to circumvent defenders’ systems; defenders analyze and counter those tactics. AI, while powerful, is available to both sides. Just as a rising tide lifts all boats, AI raises the capabilities of attackers and defenders alike.
The Reluctance Gap: Beliefs vs. Reality
Here’s where things get dangerous: reluctance to adapt. Fear of change, institutional inertia, or an insistence on “what’s worked before” creates a critical gap that adversaries are more than willing to exploit.
It’s tempting to think that our preferences dictate outcomes. I might convince myself that fire won’t burn me, but the chemical reaction that fire fundamentally is doesn’t care about my beliefs or desires. Similarly, security leaders may declare, “No more agents” or “No more appliances,” but the reality is stark: our adversaries simply don’t care about our needs and desires.
No sheep wants to be eaten by a wolf. No wolf cares about the sheep’s desires.
This harsh reality underscores the stakes: our adversaries aren’t bound by our fears, preferences, or organizational constraints. In fact, they count on them. Every moment spent debating whether to adopt new tools or approaches is a moment adversaries spend exploiting gaps you’ve left unaddressed.
The Symmetric Advantage and the Adaptation Arms Race
AI, for all its potential, doesn’t fundamentally disrupt this dynamic. Both attackers and defenders can use AI to refine their strategies, automate their workflows, and scale their operations. It’s a tool, not a solution.
Think of AI as a new type of weapon in an ongoing arms race. It doesn’t break the cycle of adaptation. Attackers will use AI to create more sophisticated phishing lures, evade detection systems, and probe for vulnerabilities faster. Defenders will use AI to analyze log data, automate incident response, and surface hidden threats. In this way, AI raises the baseline for both sides.
The danger is not in AI itself but in failing to wield it. Those who refuse to adopt it, whether out of fear, skepticism, or inertia, create an asymmetric advantage for their adversaries. Your adversary doesn’t care if you’re uncomfortable with change. In fact, they’re counting on it.
Cybersecurity Is a Data Search Problem
If AI alone isn’t enough to break the adaptation cycle, what is? The answer lies in the data.
AI is only as good as the information it’s trained on. Attackers train their models on what they can observe, reverse-engineer, or simulate. This creates an inherent limitation: their perspective is incomplete. They’re building strategies in the dark.
That’s why it’s important to rewrite the rules—and if some adversaries call it cheating, I’m fine with that. I believe cybersecurity is a data search problem. AI systems need to be trained on data attackers can’t see, access, or understand. It’s a fundamental shift—one that takes the advantage away from attackers entirely.
This isn’t just about incremental improvement. It’s about asymmetry. It’s about defending from the high ground, where your visibility is comprehensive and your systems continuously outpace your adversaries’ ability to adapt.
Reluctance as a Risk
It’s worth revisiting the concept of reluctance because it may be the single greatest vulnerability organizations face. Security leaders often feel immense pressure to simplify their stack, reduce agent sprawl, and “do more with less.” While these are reasonable goals, they can morph into dangerous absolutes.
Saying “no more agents” or “we don’t need another appliance” may feel like decisive leadership, but what if that decision is rooted in reluctance instead of strategy? What if it’s based on internal preferences rather than external realities?
Our adversaries are not bound by our limitations. They don’t care if your team is tired of deploying new tools or if your organization has declared a moratorium on adding headcount. They will continue to evolve regardless of the constraints you impose on yourself.
The truth is, security isn’t about what’s convenient for us. It’s about what’s necessary to stop them.
The Future of Cybersecurity: Beyond AI
If AI isn’t the revolutionary force it’s often made out to be, what is? The answer lies in how we wield AI and, more importantly, how we train it. Data is the linchpin. Attackers operate in a world of incomplete information: they simulate environments, reverse-engineer defenses, and test hypotheses, but they are fundamentally blind to the data we hold. Defenders who train their AI on that privileged data turn the attackers’ blindness into a durable advantage.
Adaptation Is the Only Option
So, what’s the future of AI in cybersecurity? It’s a rising tide. And if you don’t rise with it, you’ll sink beneath it.
Reluctance is a choice, and in security, it’s a dangerous one. The adversaries we face don’t wait for us to catch up. They exploit every moment of hesitation, every gap in visibility, every opportunity created by inaction. The organizations that survive and thrive in the age of AI are the ones that embrace not just the technology but the mindset of continuous adaptation.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.