The software industry is full of surprises. From development to user experience, it's a vast avenue of innovation, problem-solving, and security hurdles, all driving toward a better, more reliable digital landscape for everyone. We spoke with Paul Davis, Field CISO at JFrog, about topics such as Generative AI, preparing for software outages, and what could be the next Y2K. Dive into this insightful discussion to learn more!
What lessons from the Y2K incident remain relevant to today’s cybersecurity challenges?
The Y2K crisis was a pivotal moment because we saw the problem coming, and the amount of effort that went into it was amazing. At the time, there were fears of planes falling from the sky, and it turned out to be a non-event. There were some issues, but what stood out was that we suddenly had to build an inventory of where our software vulnerabilities were. We weren't yet looking at software development or vulnerabilities in detail. The challenge with security is that we tend to be all-embracing: everything becomes a security problem.
Confidentiality, Integrity, and Availability can be both a blessing and a curse: suddenly, everything became a security issue. As a security person, I was pulled in to help and waited long hours to see if any issues arose with my clients. Fortunately, things turned out alright. It was a valuable lesson in exposure and understanding potential weaknesses. For me, visibility is key in security. If we can see and understand something, we can implement mitigating controls. It's the unknowns, the surprises, that truly keep us up at night.
Y2K marked the beginning of a journey where people started to think about problems of the future. The lessons from that time still resonate today, especially when it comes to maintaining an accurate software inventory. Over 80% of software today relies on third-party open-source code. It is crucial to understand the software's origins, how it's built, and whether it's malicious or vulnerable. The effort to grasp what's inside our software began with Y2K.
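To make the inventory question concrete, here is a minimal sketch (illustrative, not from the interview) of a rough first cut at a software bill of materials for a Python environment, listing every installed distribution pinned to its version:

```python
from importlib.metadata import distributions

# Enumerate every installed distribution with its version -- a first
# approximation of "what exactly is running here?" for one environment.
for dist in sorted(distributions(), key=lambda d: d.metadata["Name"].lower()):
    print(f"{dist.metadata['Name']}=={dist.version}")
```

A real SBOM would also record origins, hashes, and licenses, but even this simple listing answers the question Y2K first forced teams to ask.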
Will the next Y2K be in the Generative AI era? What makes Generative AI a cybersecurity threat?
Generative AI has accelerated rapidly. What's concerning is its ability to process vast amounts of data and produce answers that sound trustworthy. That trust factor is what scares me. It's not a whole new discipline; we've been doing machine learning, data analysis, and natural-language sentiment analysis for years, and now AI is embedded in products. This shift creates a new series of threats. With generative AI so accessible, it not only helps defenders but also gives attackers the opportunity to gain a foothold inside organizations.
We were used to spotting bad phishing attacks by their typos or poor writing. Now they are well-customized and easily generated. Our research on machine learning and open-source LLMs revealed they can be used to compromise developers. Security used to be an isolated concern for developers; now they are directly targeted along with their infrastructure. Attackers are going after public repositories like Hugging Face, where unknowingly downloading a malicious open-source LLM can execute code that compromises both the infrastructure and other data scientists within an organization.
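The sketch below (illustrative, not drawn from JFrog's research) shows the underlying mechanism: Python's pickle format, which several model serialization formats build on, runs attacker-chosen code during deserialization, so simply loading a booby-trapped "model" file is enough to be compromised.

```python
import os
import pickle

class MaliciousModel:
    # pickle calls __reduce__ while deserializing, so loading this
    # "model" executes the callable it returns. A real payload would
    # run arbitrary commands instead of a harmless echo.
    def __reduce__(self):
        return (os.system, ("echo payload would run here",))

blob = pickle.dumps(MaliciousModel())
pickle.loads(blob)  # code executes on load, before any "model" is ever used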
There are tools now that allow you to steal the weights and metrics that make up your LLMs. We should monitor production environments differently, adjusting and updating guardrails in a timely way. Hackers enjoy bypassing these protections to make systems do things they shouldn't. Additionally, it's important to be mindful of the data we use: question where it comes from, and consider whether you should be using it and whether it's compliant with regulations like GDPR. We're entering a new world of threats.

Data scientists are now developing code for machine learning in languages like Python and Swift. I recently showed a team how I can generate code using coding assistants. You can provide guidelines, and it does the job, but it doesn't include error correction or security checks, which can introduce vulnerabilities. I've been speaking with a lot of security executives, and we worry about how attackers are using AI and GenAI. Attacks are accelerating, so we need to respond quickly but securely. There's a saying that attackers can try a hundred times and only need to succeed once, while we only need to fail once for them to get in. AI-driven attacks are getting more sophisticated and clever, and the emergence of Agentic AI, which can make decisions, is exciting but also unnerving.
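As a hypothetical illustration of the gap Davis describes, compare an assistant-style query that splices user input straight into SQL with the parameterized version a security review would demand (table and function names are invented for the example):

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Assistant-style output: works on the happy path, but concatenates
    # the input into the SQL string, so name = "x' OR '1'='1" dumps every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # The check the generated code omitted: a parameterized query
    # treats the input strictly as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

The generated code is not wrong in the sense of failing to run; it is wrong in the sense of lacking the error handling and security checks a reviewer would insist on.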
What lessons do you think notable cases like Log4j and the CrowdStrike outage offer for preventing similar disruptions in the future?
Log4j was a security wake-up call. There were two sides to the problem. Developers were told to “log everything,” and open-source projects embedded Log4j in their own solutions. When it became a security problem, we had to look in depth not just at top-level programs but at their underlying components. We need to collaborate with security, understand threats, and integrate threat intelligence to actively monitor potential attack methods on our software. Unlike Y2K, which was mitigated before becoming a crisis, Log4j had a real impact, affecting tens of thousands of organizations.
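Log4j itself is a Java component, but the "who depends on it?" question applies in any ecosystem. Here is a minimal Python sketch of that reverse lookup over installed distributions; the target package name is a placeholder, not a real vulnerable component:

```python
from importlib.metadata import distributions

TARGET = "vulnerable-logger"  # hypothetical package name to hunt for

# Walk every installed distribution and flag any that declares a
# dependency on the target -- the underlying-components question
# Log4j forced teams to answer at scale.
for dist in distributions():
    for req in dist.requires or []:
        if req.lower().startswith(TARGET):
            print(f"{dist.metadata['Name']} {dist.version} depends on {TARGET}")
```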
Similarly, last year’s CrowdStrike incident was caused by a bad configuration file. The company initially prioritized speed over full testing but has since recognized the need for a balance between rapid response and security. This incident broadened the scope of software security, highlighting the need to check open-source components, secrets, cryptography, licenses, and configurations. With automation, retesting after configuration changes is now feasible. I was impressed by CrowdStrike’s swift response and transparency in providing guidance. Some criticized them, while other companies said it proved that their own incident response ran well. The CEO later noted that trust in the company grew because of their transparent handling and recovery, turning a crisis into an opportunity for improvement.
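The retesting point lends itself to automation. Below is a minimal sketch (field names and file name are hypothetical, not CrowdStrike's format) of a pre-deployment gate that refuses to ship a malformed configuration file:

```python
import json

REQUIRED_FIELDS = {"channel": str, "version": int, "rules": list}

def validate_config(path: str) -> list[str]:
    """Pre-deployment gate: reject a config file that is unreadable,
    malformed, or missing required fields, instead of shipping it."""
    errors = []
    try:
        with open(path) as f:
            config = json.load(f)
    except (OSError, json.JSONDecodeError) as exc:
        return [f"unreadable config: {exc}"]
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in config:
            errors.append(f"missing field: {field}")
        elif not isinstance(config[field], expected_type):
            errors.append(f"{field} has wrong type")
    return errors

if __name__ == "__main__":
    problems = validate_config("update.json")  # hypothetical file name
    if problems:
        raise SystemExit("blocked rollout: " + "; ".join(problems))
```

Wiring a check like this into the release pipeline makes "retest after every configuration change" a default rather than a judgment call made under time pressure.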
Could you explain the 2038 problem that involves Unix operating systems and its potential impact on digital infrastructure?
In January 2038, the 32-bit Unix time counter will overflow, similar in spirit to the Y2K issue; the clock doesn't reset to zero but wraps to a negative value, throwing dates back to 1901. Unix and Linux systems have been with us for many years, and early developers didn't anticipate this limit. However, modern languages and platforms, including Python, Ruby, and C implementations that have moved to a 64-bit time_t, have abstracted or widened the timestamp in advance. Since we know and anticipate the issue, the impact should be minimal.
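To see the mechanics, here is a short illustrative Python sketch (not from the interview) of a signed 32-bit timestamp counter running out of room:

```python
import struct
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
MAX_INT32 = 2**31 - 1  # largest value a signed 32-bit time_t can hold

# The last representable moment: 2038-01-19 03:14:07 UTC.
print(EPOCH + timedelta(seconds=MAX_INT32))

# One second later the counter overflows. Reinterpreting the wrapped
# bit pattern as a signed 32-bit integer shows what legacy code sees:
wrapped = struct.unpack("<i", struct.pack("<I", (MAX_INT32 + 1) & 0xFFFFFFFF))[0]
print(wrapped)                             # -2147483648
print(EPOCH + timedelta(seconds=wrapped))  # 1901-12-13 20:45:52 UTC
```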
What worries me is legacy programs based on outdated programming languages, left without access to source code. I once had to fix software without breaking it—like repairing a plane mid-flight—only to find another program needing a fix but lacking source code. Without it, the only options were decompiling (which wouldn't work well), working around it, or rewriting it. That's the reality of legacy software. Just like Y2K, we first need to assess whether the issue affects critical functions, especially in older systems, and then determine the best mitigation strategy, which may be complex if code changes aren't possible. I thought of using GenAI to address this quickly, but the experts I've talked to were unsure. 2038 will not be a significant problem, but it will bring a few surprises if we don't look ahead.
As threats evolve, so must cybersecurity skills. What key areas should the next generation focus on to tackle future challenges like the next Y2K?
Security isn't just for cybersecurity professionals. At a developer conference, I convinced a developer within 15 minutes that he was a security person. He was somewhat upset with the idea but eventually accepted it. We security professionals and developers have a common goal; the challenge is to bridge the gap between us. Understanding the developers' lifecycle, integrating security, speaking their language, and having empathy are all important. Security professionals must also grasp compliance frameworks like the CRA and AI regulations, and the business impact of security decisions. With AI playing a growing role, professionals should learn how to guide it effectively, setting guardrails and leveraging it for detection, triage, and response at machine speed.
For security professionals, the learning never ends. We are always picking up new technologies, strategies, threats, attack methods, and response strategies, which is exciting. What's unique about security professionals is that we need to understand the entire stack of technologies, from operating systems and networking to authentication, web applications, and AI. Understanding AI and datasets is now essential. When you hear the word "experiment" from a data scientist, it's not something to fear; it's a normal part of building a model. Security professionals must grasp concepts like feature stores and the AI lifecycle, and recognize what makes that lifecycle unique. I think it's another exciting learning opportunity.
How can security teams and developers work together more effectively?
Security shouldn't be an afterthought. Development has often been siloed, with security layered on top, but executives are pushing for a more streamlined approach. To achieve this, teams need a full end-to-end platform and broader acceptance of security integration throughout the development lifecycle. We've always said the earlier we can start security, the better, so if we can embed security during the design and architecture phase, that would be great! Greater collaboration between security and developers is important. Developers need to understand that security people like to help. We do this because it's a mission; we don't go around searching for problems for their own sake, we want to solve them. We have to open up that conversation more. It's already started and is improving, but we have a long way to go.
Formalize stopping bugs early in their lifecycle rather than waiting until they reach production. Also, build metrics that reflect not just the technology but also the impact on the business and on people. The developer experience is often prioritized, but consider the security operations experience too. I want to make security more engaging and fun by using GenAI, letting it handle the ordinary, mundane tasks so we can focus our brains on solving real problems.
The gap between security teams and developers is improving but still exists. CISOs see progress, while developers often struggle with unvalidated vulnerabilities—many of which turn out to be false positives, leading to frustration. Security should focus on providing actionable insights rather than overwhelming developers. Collaboration is key, as security affects not just developers but the entire business. Ideally, security should be as seamless as nitrogen in the air—essential but invisible, integrating naturally into daily workflows without disruption. Developers and security teams share the same goal but often don’t realize it.
Developers see coding as an art and prefer not to revisit fixes, while security teams want it to be stable. Bridging this gap requires translating security in ways that resonate with different audiences. Ultimately, the goal is to work together effectively.
Dilki Rathnayake is a cybersecurity content writer and the Managing Editor at Information Security Buzz, with a BSc in Cybersecurity and Digital Forensics. She is skilled in computer network security and Linux system administration. Dilki has also led awareness programs and volunteered for communities promoting best practices for online safety.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.