Sometimes security and risk management professionals – even corporate executives and boards – are so focused on protecting against sophisticated attacks that they take their eyes off the seemingly mundane, but no less important, tasks required to secure an enterprise. Basic vulnerabilities in software and infrastructure are the perfect example.
Vulnerability discovery is one area where this oversight challenge commonly dominates operations. Since you can’t fix what you can’t see, only a fraction of vulnerabilities ever get patched – by some accounts, as few as 10% are detected and remediated. Vulnerabilities within software and infrastructure remain, and security initiatives struggle to identify, prioritize, and manage them to lower that risk to a level acceptable to the business – without slowing down software delivery.
Right now, organizations have fragmented and limited views into specific components of their stack, from host to network, software, OSS, and now container or cloud-native security. While new approaches to vulnerability discovery emerge, it remains the case that security initiatives have to combine results from a variety of tools to get a more complete picture of an application’s risk. In practice, each of these tools finds only a small fraction of the issue types that organizations must triage and manage.
An organization’s view of the risk facing customer applications demands visibility from several phases of the software development lifecycle (SDLC), and from running infrastructure. Organizations must map identified vulnerabilities back to responsible parties so that it’s possible to prioritize and remediate. To help facilitate this workflow, here are five steps that organizations can take to start down the right path and gain the complete view they need.
- Jettison the Junk. Many scanning tools report issues that will almost never have any impact on security risk. There are valid reasons for this: customers’ threat models differ, vendors want to show they “catch more” in competitive evaluations, and so forth. But at the end of the day, an adopting organization should decide which issues matter to it and use a tool’s facilities to suppress those that don’t match its risk sensitivities.
- Use the Same Scorecard. Nearly every scanning tool scores vulnerabilities differently, making it extremely difficult to understand and prioritize the level of risk. To address this challenge, firms can translate tool output to the Common Vulnerability Scoring System (CVSS): an industry-standard framework for rating the characteristics and severity of a vulnerability. Some OSS maintainers may not have the bandwidth to build support for this standard, while some commercial vendors promote their own proprietary scoring. Normalization is a critical step that organizations need to provide where vendors don’t natively.
- Consolidate Views of The Software Value Stream. Organizations need a more holistic view of risk – simply correlating static application security testing (SAST) and dynamic application security testing (DAST) results won’t cut it. Data show that individual discovery techniques may catch only up to 20% of the critical vulnerabilities that exist in a system, so it’s pivotal that software and infrastructure are tracked throughout their respective lifecycles. From there, businesses can collect and relate the resulting vulnerability discovery data, providing a holistic picture of vulnerabilities within the context of the full application as well as its operating environment.
- Report ‘Units of Work’. It’s not enough to consolidate the output of multiple vulnerability discovery tools into a single stream. An engineer will often see dozens of instances of the same developer error within one file or function, or the same exposure of a running, vulnerable service reached through different services/IPs. Consolidating related instances into a single “unit of work” focuses developers on everything that must change to close a given exposure.
- Find Who’s Responsible. No matter how integrated an organization is, it’ll still be organized by teams, business units and/or regions. The fastest path to remediation is to trace vulnerabilities back to the pipelines, code bases, and teams that introduced them and assign them to the appropriate individuals. Doing so enables development and operations teams to have a more contextually rich conversation about how to mitigate risk across the organization.
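The “Jettison the Junk” step above amounts to applying an organization-specific policy filter before findings ever reach a triage queue. A minimal sketch follows; the field names (`rule_id`, `severity`), suppressed rules, and severity threshold are all hypothetical, not any specific scanner’s schema.

```python
# Hypothetical policy filter: drop findings that don't match the
# organization's risk sensitivities. Rule IDs and severity labels
# below are illustrative assumptions, not a real scanner's output.

SUPPRESSED_RULES = {"info-disclosure-banner", "missing-http-header"}
SEVERITY_ORDER = ["info", "low", "medium", "high", "critical"]
MIN_SEVERITY = "medium"  # example policy: ignore anything below medium

def keep(finding: dict) -> bool:
    """Return True if a finding survives the organization's policy."""
    if finding["rule_id"] in SUPPRESSED_RULES:
        return False
    return (SEVERITY_ORDER.index(finding["severity"])
            >= SEVERITY_ORDER.index(MIN_SEVERITY))

findings = [
    {"rule_id": "sql-injection", "severity": "high"},
    {"rule_id": "missing-http-header", "severity": "low"},
    {"rule_id": "hardcoded-secret", "severity": "info"},
]
triage_queue = [f for f in findings if keep(f)]
```

The point is that the policy lives with the organization, not the vendor: suppressions are reviewable data rather than ad-hoc judgment calls made anew at every triage session.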
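For the “Use the Same Scorecard” step, normalization can be as simple as mapping each vendor’s proprietary labels onto CVSS. In the sketch below, the vendor labels and their representative scores are assumptions; the score-to-rating bands are the qualitative severity scale from the CVSS v3.1 specification.

```python
# Sketch: translate proprietary scanner severities onto the CVSS v3.1
# qualitative scale so every tool reports on one scorecard.
# VENDOR_TO_CVSS is a hypothetical mapping an organization would
# maintain per tool; the rating bands below are from CVSS v3.1.

VENDOR_TO_CVSS = {
    "blocker": 9.8,  # assumed vendor label -> representative CVSS base score
    "major": 7.5,
    "minor": 4.3,
    "note": 2.0,
}

def cvss_rating(score: float) -> str:
    """CVSS v3.1 qualitative severity rating for a base score."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

def normalize(vendor_severity: str) -> tuple:
    """Map one tool's label to a (score, rating) pair on the shared scorecard."""
    score = VENDOR_TO_CVSS[vendor_severity]
    return score, cvss_rating(score)
```

With every tool’s output expressed this way, a single prioritized queue sorted by CVSS score becomes possible even when the underlying scanners disagree on terminology.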
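The “Report ‘Units of Work’” step can be sketched as a grouping pass over the consolidated stream: many per-instance findings collapse into one ticket per root cause. The grouping key here – rule plus file – and the field names are illustrative assumptions; an organization might instead group by function, service, or exposed endpoint.

```python
from collections import defaultdict

# Sketch: collapse per-instance findings into "units of work" -- one
# item per (rule, file) pair -- so a developer fixes a class of issue
# once rather than wading through dozens of duplicate entries.
# Field names are illustrative, not a real scanner's schema.

def units_of_work(findings: list) -> dict:
    """Group raw findings into (rule_id, file) units, each listing its line numbers."""
    units = defaultdict(list)
    for f in findings:
        units[(f["rule_id"], f["file"])].append(f["line"])
    return {key: sorted(lines) for key, lines in units.items()}

findings = [
    {"rule_id": "xss", "file": "views.py", "line": 88},
    {"rule_id": "xss", "file": "views.py", "line": 40},
    {"rule_id": "xss", "file": "forms.py", "line": 12},
]
units = units_of_work(findings)  # three raw findings, two units of work
```

The ticket a developer receives then describes one fix with all of its occurrences attached, rather than three superficially unrelated alerts.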
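Finally, the “Find Who’s Responsible” step can be approximated with a CODEOWNERS-style longest-prefix lookup that routes each unit of work to the team owning the affected path. The directory prefixes, team names, and fallback queue below are all hypothetical.

```python
# Sketch: route a unit of work to its owning team via a
# CODEOWNERS-style longest-prefix match on the affected path.
# Prefixes and team names are hypothetical examples.

OWNERS = {
    "services/payments/": "payments-team",
    "services/": "platform-team",
    "infra/": "sre-team",
}

def owner_for(path: str) -> str:
    """Return the most specific matching owner, else a fallback triage queue."""
    best_prefix = ""
    team = "security-triage"  # assumed fallback when no owner matches
    for prefix, candidate in OWNERS.items():
        if path.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, team = prefix, candidate
    return team
```

Longest-prefix matching means a finding in `services/payments/` lands with the payments team rather than the broader platform team, and anything unowned falls into a visible security triage queue instead of disappearing.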
As organizations seek to scale their vulnerability discovery practices across their entire portfolio, the above techniques can help reduce the effort and involvement of security practitioners in the detect-and-remediate loop. And the more hands-off this workflow becomes for security practitioners, the more time they have to proactively address endemic sources of identified risk – or those more sophisticated attacks that vulnerability discovery tooling hasn’t yet evolved to effectively identify.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.