I love the boundless possibilities of modern software development. Anyone with a computer and an internet connection can code. More than any other time in human history, each of us has the power to build something in software, to realise whatever we can imagine.
At the same time, a thriving ecosystem of open source software components allows us to stand upon the shoulders of giants, to quickly assemble huge building blocks of existing functionality that can rocket us toward our own goals.
At some point, of course, reality intrudes. Gravity is a harsh mistress. If we create buildings, they must not collapse. If we create airplanes, they must stay aloft until they’re ready to land.
If we create software, it must not fail, either by accident or under attack.
Don’t cook the goose
In some sense, developers are like the goose that laid the golden eggs. After all, developers are the creative workers who produce the software that brings in revenue.
However, security vulnerabilities are essentially mistakes made by developers; every once in a while, the goose lays a dud.
Sometimes the connection between developers and security vulnerabilities leads management to make bad decisions. If developers are the source of vulnerabilities, the reasoning goes, then let’s just train them in security so they don’t make any more mistakes.
Problem solved, right? Not at all.
Although giving developers more knowledge about software security is an excellent step forward, it is a narrow, simplistic approach to a much larger challenge.
The software supply chain
The notion that developers create software is a gross oversimplification. Software is the product of the effort of a wide variety of contributors:
- Designers and architects figure out what the software should do and how it should be structured. Many vulnerabilities are introduced at this stage, before any code is written. Design reviews, threat modeling, and architectural risk analysis should all be performed so that the finished design is as secure as it can be.
- Developers write code, but they make extensive use of prebuilt components — often open source packages — to supply basic functionality. The code that developers write is often a relatively thin layer of glue that holds everything together and provides specific functionality. In essence, developers rely on a shadow army consisting of all the developers who contributed to the components they are using.
- Operations engineers deploy, configure, and maintain the software, making plenty of decisions with far-reaching security implications.
- Users themselves make security-related decisions when they use the software.
The path that leads all the way from an idea to a user is the software supply chain. Developers are important, but they occupy only one link of the whole chain. Building secure software means taking steps to reduce risk across the entire supply chain.
Bring in the backhoe
You can move a big pile of dirt with a shovel, but if you have a backhoe handy, it’ll go much faster. Making software more secure works the same way. You can hunt down vulnerabilities manually, but if you can use automated tools, you can accomplish so much more.
A single software application can contain many thousands of lines of code, sometimes millions, once you count the open source components that are part of the application. It is simply not feasible for developers to hunt for security vulnerabilities manually.
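The open source portion of that pile is one place where automation shines. Software composition analysis tools, at their simplest, compare a project's declared dependencies against a database of known-vulnerable versions. Here is a minimal sketch of that idea; the package names and advisory strings are invented for illustration, and real tools query curated vulnerability feeds rather than a hardcoded dictionary:

```python
# Sketch of the core idea behind software composition analysis (SCA):
# match pinned dependency versions against known-vulnerable versions.
# All names and advisories below are hypothetical.

KNOWN_VULNERABLE = {
    ("webframework", "1.2.0"): "hypothetical advisory: request smuggling",
    ("yamlparser", "3.0.1"): "hypothetical advisory: unsafe deserialization",
}

def audit(dependencies):
    """Return an advisory for every dependency pinned to a known-bad version."""
    return {
        dep: KNOWN_VULNERABLE[dep]
        for dep in dependencies
        if dep in KNOWN_VULNERABLE
    }

# A project depending on one vulnerable and one clean package:
deps = [("webframework", "1.2.0"), ("jsonlib", "2.4.0")]
for (name, version), advisory in audit(deps).items():
    print(f"{name} {version}: {advisory}")
```

The point is not the lookup itself but the scale: a tool can run this check across every transitive dependency on every build, something no human reviewer could sustain.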
Security tools are not as smart as humans (yet), but they can do a huge amount of analysis in a relatively short amount of time. Effective security programs use the best parts of tools and humans; tools can cover broad areas quickly, and humans can do more targeted analysis.
Some activities simply cannot be automated. Most, if not all, of the security analysis that happens during the design phase, for example, must be done by humans. But during development, testing, deployment, and maintenance, a variety of useful automated testing techniques help flush out security vulnerabilities.
Let developers be developers
Developers are creative people who solve tough problems. The core of the job is creating code that gets something done. While security is an important part of building applications, it is not really at the heart of what developers do.
So yes, it’s good if developers know about security, but what they really need is to be part of a proactive security process. When developers make mistakes, automated tools should flag them and help with the fix. This needs to happen in the least intrusive way possible, which means integrating security into the tools that developers already use.
In a properly implemented software development process, security is nearly invisible to developers — just a normal part of everyday life. They spend their days writing code, confident that the designs they’re implementing have already been evaluated and hardened from a security perspective. When they make coding errors, a plug-in in their development environment (Code Sight™, for example) tells them about the vulnerability and provides pointers about how to fix it. When they commit their code to the source repository, automated tools — such as Coverity® static application security testing (SAST), Black Duck® software composition analysis (SCA), Seeker® interactive application security testing (IAST), and so on — find vulnerabilities, which are fed back to developers via the issue-tracking systems (such as Jira) that they are already using.
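To make those coding errors concrete: the classic pattern a static analysis tool traces is untrusted input flowing into a sensitive operation, such as a SQL query built by string concatenation. The sketch below (my own schema and function names, not taken from any particular tool) shows the kind of flaw a scanner would flag, next to the parameterized fix it would suggest:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: concatenating user input into the query lets an
    # attacker rewrite the SQL statement (classic SQL injection).
    # SAST tools flag this by tracing untrusted data into the string.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query keeps data separate from the SQL
    # statement, so the input cannot change the query's structure.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo with an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "nobody' OR '1'='1"           # classic injection payload
print(find_user_unsafe(conn, payload))  # leaks every row: [(1,), (2,)]
print(find_user_safe(conn, payload))    # matches nothing: []
```

A developer may never notice this bug in testing, because both functions behave identically on honest input; that is exactly why having a tool in the pipeline matters more than relying on individual vigilance.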
At the end of the day, security demands not that we improve developers, but that we build a better process around developers.