Thanks to the fantastic response we received, we’re excited to continue our exploration of the evolving cybersecurity landscape. As we approach 2025, the challenges and threats facing businesses, governments, and individuals are becoming increasingly complex. Following our initial insights, we reached out to more experts across the technology and cybersecurity fields to delve deeper into the transformative shifts ahead.
In this next instalment, we will explore pivotal trends such as the increasing threat of business logic attacks, vulnerabilities in the software supply chain, the evolution of DevSecOps practices, and more. We will also discuss the need for quantum-resilient encryption and the dual-edged nature of generative AI in cybersecurity. These insights offer a glimpse into a future with exciting opportunities and daunting challenges.
We hope you enjoy these crucial predictions in the next edition on cybersecurity trends and predictions for 2025.
John Hammond, principal security researcher at Huntress
“Ransomware will fall out of the spotlight, and infostealer malware will take the throne: As our world and the industry gets more and more intertwined with “SaaS” and “Cloud” and all the trite buzzwords that just mean a third-party managed solution, adversaries are going to care less and less about the endpoint.”
Cybercrime will instead opt for access tokens, API keys, credentials, or keys to the kingdom of identity so it can swing from one online access portal to the next—never needing traditional malware to do damage.
LLMs will become an extremely unprotected attack surface: As cliche and saturated as the “AI” conversation is, it is here to stay — good, bad, and ugly — and while organisations and businesses worked in a frenzy to add the new hotness to their product or service, security has fallen to the wayside. Whatever shape it takes as a chatbot or some other black box, LLM adoption will continue to grow but remain unchecked. Prompt engineering or other adversarial tricks will expose sensitive information, or internal training model data will be leaked as threat actors go after the new boon of “AI”.
Smash-and-grab operations will become less common as bad actors will wait for a bigger impact: Attacks of opportunity and low-hanging fruit will still undeniably be targets, but adversaries are starting to and will continue to, acknowledge that their reward is bigger and better when they play the long game. We will see more capable threat actors go after larger corporations or leverage smaller compromises as stepping stones to reach more prominent organisations that can do more damage to an entire supply chain. The often-forgotten sectors that don’t have security front of mind (think gasoline, construction, agriculture, and… unfortunately, typical critical infrastructure) will be targeted and taken advantage of just because there is less scrutiny for security.
Akhil Mittal, senior security consulting manager at Black Duck
“By 2025, static, annual assessments will no longer determine cyber insurance premiums. Instead, insurers will rely on AI-driven, real-time risk assessments to evaluate a company’s security posture. AI-powered tools will analyse a company’s ongoing security practices, monitoring how well they defend against evolving threats like ransomware, phishing, and supply chain attacks.”
Organisations that adopt AI-based detection tools and stay ahead of security standards will see lower premiums and more favourable coverage. Conversely, companies that lag or fail to implement modern security practices may face higher premiums or even struggle to obtain coverage. This shift in cyber insurance will drive broader adoption of advanced cybersecurity measures as companies work to reduce both their risk exposure and insurance costs.
By 2025, business logic attacks will pose a greater challenge, especially in industries like financial services and e-commerce. Unlike traditional attacks that exploit coding flaws, these attacks manipulate legitimate workflows, making them harder to detect. Organisations must implement behavioural analytics for real-time monitoring to identify anomalies indicating misuse.
APIs, crucial for modern applications, also present vulnerabilities. Companies must conduct regular API audits, enforce strong authentication, and use encryption to protect these access points. Ransomware will evolve with Ransomware-as-a-Service (RaaS), making AI-driven detection tools essential for the early identification of unusual behaviours before ransomware can lock systems.
The software supply chain remains a weak link in cybersecurity, as seen with SolarWinds and Log4j. By 2025, real-time monitoring and implementing a Software Bill of Materials (SBOM) will be crucial for transparency regarding third-party components. Pre-breach scanning tools will help organisations comply with regulations and reduce risks from hidden vulnerabilities.
DevSecOps will mature into a fully automated approach by 2025, with security as code becoming standard practice. AI-powered tools will facilitate real-time code reviews and self-healing code that automatically fixes vulnerabilities. Many developers already use AI in their processes, highlighting its potential to reduce human error and enhance security.
Organisations must prepare for the impact of quantum computing by 2025, as it could break current encryption standards. Transitioning to quantum-resilient encryption through quantum-safe algorithms is essential for protecting sensitive data. Early adoption will help businesses avoid last-minute disruptions when quantum technology becomes mainstream.
Generative AI poses risks and opportunities in cybersecurity. It enables sophisticated phishing attacks and adaptable malware but can also enhance defence systems. By 2025, deception technologies that use AI to create traps for attackers will become vital tools in cybersecurity strategies.
Muhammed Yahya Patel, lead security engineer at Check Point Software
“Cyber warfare will take centre stage as governments worldwide prioritise strategies for defence and offence in cyberspace. Nations will ramp up their capabilities to combat cyber threats while actively engaging in offensive operations, marking a significant shift in the global geopolitical landscape.”
2025 could usher in a new era of cyberattacks driven by AI-powered botnets. These sophisticated networks, deployed by both attackers and nation-states, will be capable of executing large-scale, coordinated attacks with unprecedented efficiency. By leveraging AI to mimic human behaviour, these botnets could evade detection, operating under the radar while requiring minimal human oversight. This evolution will significantly amplify the scale and impact of botnet attacks, challenging existing defence mechanisms.
Attackers are expected to intensify their focus on supply chain vulnerabilities, targeting suppliers, software providers, and managed service providers (MSPs) to infiltrate larger organisations. By compromising trusted partners, cybercriminals can bypass traditional security measures and gain access to their ultimate targets, making supply chain attacks a critical concern for businesses worldwide.
The attack surface expands as organisations increasingly adopt Software-as-a-Service (SaaS) platforms and integrate third-party solutions. Cyber adversaries will likely exploit these interconnected ecosystems, targeting third-party connections to breach corporate SaaS environments. This mirrors the dynamics of supply chain attacks and underscores the need for robust security measures across all integration points.
The ongoing tension between security and privacy will deepen in 2025, with governments across the globe escalating their challenges to the necessity of end-to-end encryption (E2EE). These efforts will likely fuel heated debates as stakeholders weigh the trade-offs between safeguarding user privacy and ensuring public safety, reshaping policies and regulations in the tech industry.
Dr Stefan Leichenauer, VP of engineering at SandboxAQ
“As companies start rolling out AI capabilities, they will want to extend their use to increasingly sensitive areas of their business, which will jumpstart a new wave of new and pivoting existing startups, as well as consultants, that begin highlighting security in their marketing.”
Companies are increasingly adopting an AI-centric, agentic strategy for problem-solving, focusing on creating AI systems capable of making decisions based on environmental interactions. This shift necessitates more than language models; Large Quantitative Models (LQMs) will play a crucial role. LQMs leverage extensive quantitative data and physics-aware architectures to address a variety of applications, including drug discovery, materials design, healthcare diagnostics, financial modelling, and industrial optimisation.
AI is set to traditionally impact slow-adopting industries such as agriculture, construction, manufacturing, and supply chain management. In these sectors, pure language models may fall short; thus, quantitative AI powered by LQMs will be essential. This expansion will create new roles that blend industry expertise with AI skills, allowing professionals to optimise AI applications in their fields.
The competitive advantage in AI is shifting from algorithmic advancements to the scale and efficiency of physical infrastructure. Custom-built data centres and optimised hardware will become vital for supporting large models, making investments in specialised facilities and energy resources critical for sustaining AI innovation.
Companies will focus on building AI expertise and developing intuitive internal tools to enhance employee productivity. By hiring and collaborating with AI specialists, firms aim to embed AI into daily operations, democratising access to productivity-boosting tools across all levels.
AI products operate on low margins, prompting key players to invest heavily in resources like data and computing power. As they continue to develop larger models, AI consumers may shift towards smaller, task-focused models to reduce costs. While broad generalizability is appealing, specialised models often yield better commercial returns.
As companies expand their AI capabilities into more sensitive business areas, this will catalyse a new wave of startups and consultants emphasising security in their offerings.
Christopher Robinson, chief security architect at OpenSSF
“AI will increasingly help coders, defenders, and attackers accelerate their work. Developers can quickly identify and fix coding flaws by integrating AI with automated tooling and CI/CD pipelines.”
Defenders can leverage AI’s ability to analyse massive amounts of data and identify patterns, accelerating the work of SOC teams and other blue-team operations. Unfortunately, attackers may also use AI to craft sophisticated social engineering attacks, review public code for vulnerabilities, and employ different tactics that will complicate cybersecurity in the near future. We must learn to secure AI before broadly deploying it for security purposes.
The opinions expressed in this post belongs to the individual contributors and do not necessarily reflect the views of Information Security Buzz.