2025 promises to be a big year in cybersecurity, for all the wrong reasons. While many are familiar with the projection that cybercrime will cost $10.5 trillion, Forrester’s updated report projects the cost will likely be closer to $12 trillion. To put that in perspective, the largest economy in the world, the US, has a GDP of “only” approximately $29 trillion; if cybercrime were its own economy, it would be the third largest in the world.
Furthermore, the rise of artificial intelligence is accelerating the arms race between attackers and defenders. In a 2024 survey, 74% of surveyed professionals said their organizations were already feeling the impact of AI-powered attacks. 2025 doesn’t look like it will buck that trend: 93% of surveyed security leaders expected AI-powered attacks to occur on a daily basis, and 65% thought the majority of this year’s attacks would be enhanced by AI in some way.
The Messy Process of Scaling Security
While it may seem like adding more talent could remedy this ongoing deluge, cybersecurity, like many other industries, is facing a global talent shortage. The World Economic Forum estimates a shortfall of nearly 4 million cybersecurity workers worldwide, and 71% of surveyed organizations reported they had open cybersecurity positions. Even the people who do enter the industry are likely to face burnout: one 2024 report found that half of surveyed professionals expected to burn out in 2025.
Unable to scale headcount, professionals inevitably add more tools to properly test and scan all of an organization’s software, creating a sprawling security stack. As each new piece of software comes online, a new tool may have to be adopted to cover it. In fact, IDC found that most professionals regularly use between 21 and 80 tools, and a small portion of surveyed North American professionals reported using more than 100, with 0.6% having more than 140 tools in use.
These tools, however, are imperfect in a number of ways. Some overlap in coverage and surface findings that look like multiple issues but are in reality the same issue. Others produce a lot of “noise,” flagging items that aren’t actually issues for one reason or another.
Ironically, noisy tools are often the best at detecting real vulnerabilities, but they also contribute to the rate of false positives, estimated at anywhere from 20% to 40% of all findings. Given that companies average 500 or more endpoint security alerts weekly, that estimate puts 100 to 200 (or more) of them as potential false positives, slowing threat response or, when they go uncaught, forcing unnecessary development rework.
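To make the scale concrete, here is a quick back-of-the-envelope calculation using the figures above; the alert volume and false-positive rates are the estimates cited in this article, not measured data:

```python
# Rough, illustrative math based on the estimates cited above (not measured data).
weekly_alerts = 500                      # typical weekly endpoint alert volume
fp_rate_low, fp_rate_high = 0.20, 0.40   # estimated false-positive rate range

low = int(weekly_alerts * fp_rate_low)
high = int(weekly_alerts * fp_rate_high)
print(f"Estimated false positives per week: {low} to {high}")
# Output: Estimated false positives per week: 100 to 200
```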
Is Consolidation the Answer?
To try to stem the flow of poor-quality findings, many teams are looking to the simplest-seeming solution: just getting rid of some tools. Gartner found that 75% of the organizations it surveyed were adopting a tool consolidation approach in search of better efficiency. And while consolidation can simplify security workflows and free up resources, it can also leave coverage gaps that open up new avenues of exploitation.
This is not to say that security teams should never retire tools; far from it. Regular assessment helps you proactively adjust your toolset to better meet your organization’s needs and can help reduce alert fatigue, itself a major security threat.
The Necessity of Deduplication: Manual vs Automated
The simple truth is that all these tools are necessary, but they also produce consistently overlapping findings. The solution sounds easy enough: “reduce the number of duplicate findings.” Better known as deduplication, this process can actually add more work for a security team if done manually, exactly the opposite of what a team wants to accomplish.
Manual deduplication means painstakingly reviewing every finding, a process that demands superhuman concentration to sift through 500-plus weekly alerts, even when the work is divided up. It’s no surprise that humans make mistakes when working with repetitive data over extended periods; it’s simply not something we’re particularly good at.
AI, however, is good at finding patterns in repetitive data and doesn’t go cross-eyed from staring at a screen for too long. Machine learning (ML) algorithms can learn from analysts’ triage decisions, then apply those same standards to future cases, further reducing a security team’s workload and mitigating the downsides of tool sprawl. By incorporating more advanced reasoning and logic, ML can even go beyond basic string matching, more effectively weeding out duplicates.
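To illustrate the idea, here is a minimal sketch of automated deduplication. It is not DefectDojo’s algorithm or any specific product’s; the field names, similarity threshold, and sample findings are assumptions for illustration. Exact-key matching catches identical findings reported by different tools, while a fuzzy title comparison catches near-duplicates whose wording differs slightly:

```python
# Minimal deduplication sketch with hypothetical fields. Real scanners emit far
# richer data (CWE/CVE IDs, line numbers, hashes); this only shows the concept.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Finding:
    tool: str        # which scanner reported it
    title: str       # e.g. "SQL injection in login handler"
    component: str   # affected file, endpoint, or package
    severity: str


def exact_key(f: Finding) -> tuple:
    """Basic string matching: identical title and component means duplicate."""
    return (f.title.strip().lower(), f.component.strip().lower())


def is_fuzzy_duplicate(a: Finding, b: Finding, threshold: float = 0.85) -> bool:
    """Beyond exact matching: two tools often describe the same issue with
    slightly different wording, so also compare title similarity."""
    same_component = a.component.strip().lower() == b.component.strip().lower()
    similarity = SequenceMatcher(None, a.title.lower(), b.title.lower()).ratio()
    return same_component and similarity >= threshold


def deduplicate(findings: list[Finding]) -> list[Finding]:
    unique: list[Finding] = []
    seen_keys: set[tuple] = set()
    for f in findings:
        if exact_key(f) in seen_keys:
            continue  # exact duplicate reported by another tool
        if any(is_fuzzy_duplicate(f, u) for u in unique):
            continue  # near-duplicate: same component, similar wording
        seen_keys.add(exact_key(f))
        unique.append(f)
    return unique


if __name__ == "__main__":
    raw = [
        Finding("sast-tool", "SQL injection in login handler", "src/login.py", "High"),
        Finding("dast-tool", "Possible SQL injection in login handler", "src/login.py", "High"),
        Finding("sca-tool", "Outdated TLS library", "requirements.txt", "Medium"),
    ]
    print(f"{len(raw)} raw findings -> {len(deduplicate(raw))} unique findings")
    # -> 3 raw findings -> 2 unique findings
```

A production system would go further, learning which fields matter and where to set the similarity threshold from how analysts have triaged findings in the past, which is exactly where ML earns its keep.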
As we move through 2025, security leaders are confronting a number of issues making cyber defense hard, with each issue leading to the next in turn. Without more talent, teams have to do more with less, which often means incorporating more tools to cover weaknesses. More tools mean more noise to sift through. More noise means more work. And more work means more burnout for the humans involved, in an industry that already cannot hire enough people worldwide, starting the cycle over again. While there’s no single silver bullet for all of these problems, a simple solution like deduplicating alerts (so long as it’s implemented smartly) can significantly reduce workload and strengthen your security posture.
Greg Anderson is the founder, creator, and CEO of DefectDojo. His mission is to prevent breaches by making visibility and scalability a reality for all in security.
Greg is a seasoned security practitioner and an active participant in the global community, having served as a member of the Board of Directors for the OWASP Foundation, performed assessments for the United States Department of Defense (Pentagon), and presented research on compromising CI/CD pipelines at DEFCON. Greg has also presented at AppSec USA and AppSec EU.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.