I spoke with a couple of companies recently and discussed their Software Development Life Cycle (SDLC) processes. I was alarmed that they completely missed one of the fundamental aspects of the SDLC, a recurring theme from a conference the week prior. A large majority of companies overlook a critical part of a good SDLC process: an effective SDLC should use the results from vulnerability assessments and penetration tests to review which part of the SDLC should or could have stopped the issue from becoming a vulnerability, and then ensure that specific area is addressed. Too many people treat discovering vulnerabilities as an acceptable part of the SDLC and never look back at the root cause. Finding vulnerabilities in production normally indicates that some part of your SDLC failed.
Organizations spend much of their focus on what to do in each phase of the process, how they apply metrics to measure success before something can move from one environment to the next, and how they empower people to succeed in those tasks. These are all great, and all spot on, but the SDLC is not a linear process. In listening to all these CISOs and companies, I did not hear anyone discuss any sort of feedback loop once an application reached production.
When you move a product to production and begin its regular security lifecycle, it’s inevitable that issues will be found. Finding vulnerabilities in a production application shouldn’t be viewed as a control measure or an acceptable part of the lifecycle. Discovering vulnerabilities via network and application scanning, or manual pentesting, should be viewed as validation that a previous SDLC control or operational process failed. Just as important as fixing each vulnerability, if not more so, is backtracking to understand why the vulnerability was there in the first place.
- Application issues – If it’s an application issue, why did it occur in the first place? Do your developers not understand secure coding principles well enough to write a secure application? If application scanners were run, why wasn’t the vulnerability detected? Does the application scanner not cover that component of your technology? Did you address this with the vendor to ensure they improve their tool so you can detect it earlier in the SDLC?
- Patching issues – If the issue is patch-related, how and why did the operational process fail to apply the patch in a timely manner?
- Configuration issues – If it’s a configuration issue, how and why did the hardening guide not cover that feature? If it was covered, how and why was it not applied?
Some things can’t be anticipated (Heartbleed, new application attacks, etc.) and you don’t know what you don’t know, but it’s important to understand that the majority of vulnerabilities found in production probably indicate that some part of your SDLC failed. You should certainly fix the issue, but the root cause of the SDLC failure should also be examined to ensure it doesn’t happen again, making future production services more secure. That isn’t occurring by and large today. People find vulnerabilities in production environments and begin the remediation process with little thought toward why the issue slipped through the SDLC in the first place.
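One simple way to build that feedback loop is to record, for every production finding, the SDLC control that should have caught it, and then review the tallies over time. The sketch below is a minimal illustration of that idea; the categories, field names and example findings are assumptions, not a prescribed schema.

```python
# Minimal sketch of a root-cause feedback loop for production findings.
# The categories and example data are illustrative assumptions, not a standard.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    root_cause: str        # e.g. "application", "patching", "configuration"
    failed_control: str    # the SDLC control that should have caught it

def sdlc_gap_report(findings):
    """Tally which SDLC controls keep letting vulnerabilities reach production."""
    return Counter(f.failed_control for f in findings).most_common()

if __name__ == "__main__":
    findings = [
        Finding("SQL injection in search", "application", "secure coding training / SAST"),
        Finding("Outdated OpenSSL", "patching", "patch management SLA"),
        Finding("Admin console exposed", "configuration", "hardening guide review"),
        Finding("XSS in profile page", "application", "secure coding training / SAST"),
    ]
    for control, count in sdlc_gap_report(findings):
        print(f"{count}x findings traced to: {control}")
```

Even a rough tally like this makes it obvious when the same control fails repeatedly, which is exactly the signal a feedback loop should surface.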
My observed omissions and tips for your SDLC process:
- It is not sufficient to send your developers to an annual four-hour OWASP CBT training session and expect it to teach them to code securely and stop hackers. Security must be baked into their ongoing professional development and should be addressed routinely to ensure their skills are up to snuff.
- OWASP is not just a Top 10. It covers many types of attacks, spanning 12 subcategories and 68 unique “Attacks.” You should be training, testing and validating against the OWASP standard, not just the OWASP Top 10.
- Tool selection. People select network and application tools based on price, brand or some other feature, but not once have I heard someone say, “We validated that the tool will assess our network or application well because we vetted that it has a comprehensive library of checks for what’s in our environment.” This is especially true for OS-based checks, where there are still massive differences between network scanners in their support for different Unix flavors like Mandrake, Debian and so forth (especially when you look at credentialed checks). The same goes for application logic: many application scanners don’t assess some of the cutting-edge application technologies. It’s critical that you profile the technologies that underpin your web application (REST, SSO, JSON, AMF, Mobile, etc.) and validate that the scanner you’re choosing can assess the technologies you use. There are large discrepancies between scan vendors in the features and technologies they support. It’s a bad idea to buy a tool with no idea whether it suits your application and simply hope you are getting a good assessment.
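One lightweight way to start that profiling is to probe the application yourself and record which technologies a candidate scanner would have to understand. The sketch below is only a rough illustration; the target URL and the detection heuristics are assumptions, and a real inventory should come from your architecture documentation rather than response scraping.

```python
# Minimal sketch: probe a web application and note technologies a scanner
# would need to support before you commit to a vendor. The URL and the
# detection heuristics below are illustrative assumptions only.
import urllib.request

def profile_app(url):
    req = urllib.request.Request(url, headers={"User-Agent": "tech-profile-sketch"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        headers = {k.lower(): v for k, v in resp.getheaders()}
        body = resp.read(200_000).decode("utf-8", errors="replace").lower()

    notes = []
    content_type = headers.get("content-type", "")
    if "json" in content_type or "/api/" in body:
        notes.append("JSON/REST API traffic - scanner must crawl and fuzz API endpoints")
    if "application/x-amf" in content_type:
        notes.append("AMF remoting - confirm the scanner parses AMF payloads")
    if "saml" in body or "openid" in body or "oauth" in body:
        notes.append("SSO markers (SAML/OpenID/OAuth) - scanner must handle federated logins")
    if "strict-transport-security" not in headers:
        notes.append("No HSTS header - include TLS/configuration checks in scope")
    return notes

if __name__ == "__main__":
    # Hypothetical target; replace with an application you are authorized to test.
    for note in profile_app("https://example.com"):
        print(note)
```

The output is just a checklist of capabilities to take into vendor evaluations, so you can ask each scanner vendor directly whether those technologies are covered.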
[su_box title=”About Court Little” style=”noise” box_color=”#336588″]Court Little, Sr. Security Strategist at Solutionary, works on product development of the company’s managed service offerings. Using his 10+ years of experience in networking technology and security, Court addresses topics like vulnerability scanning, security monitoring, and the consulting/penetration testing skill set in his blog posts.
Court is a self-professed rabid college sports fan – basketball, football, hockey – he loves it all! When he’s not spending time with his family, his favorite activity is “destroying” himself on either his road bike or his mountain bike.[/su_box]
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.