Why PCI’s Mandatory Pen Testing is no Silver Bullet

By ISBuzz Team
Writer, Information Security Buzz | Jun 14, 2015 06:00 pm PST

On 1st June 2015 the new PCI (Payment Card Industry) 3.0 standard became mandatory, and it includes a requirement to conduct web penetration testing at least once a year. However, while compliance is obligatory, this enhanced standard shouldn't breed complacency: annual pen testing should be seen as a minimum, and pen testing itself is only capable of mitigating a specific range of threats.

There is considerable emphasis on 'at least', and the PCI 3.1 guidelines (mandatory in June 2016) spell the frequency requirements out more clearly:

“Examine the scope of work and results from the most recent external penetration test to verify that penetration testing is performed as follows:

  • Per the defined methodology
  • At least annually
  • After any significant changes to the environment.”

And here is how the PCI SSC defines "significant changes" (section 11.3.1.a): "The determination of what constitutes a significant upgrade or modification is highly dependent on the configuration of a given environment. If an upgrade or modification could allow access to cardholder data or affect the security of the cardholder data environment, then it could be considered significant."

However, it’s vital to note that a standalone penetration test will not protect your website from all risks, so you should always combine it with daily vulnerability and malware scanning, data integrity and threat monitoring.

For example, a compromised webmaster's PC with stored credentials can open the door to hackers, and standard pen tests would not assess this risk at all. This attack vector is an increasingly serious issue: larger organisations with wider teams, including external parties and consultants, often grant admin access to the website, yet frequently without well-implemented two-factor authentication (2FA) or one-time passwords (OTP).
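One common mitigation is to require a time-based one-time password on top of the admin credentials, so that stolen stored passwords alone are not enough. Below is a minimal sketch, in Python, of how server-side TOTP verification (RFC 6238) works; the demo secret, the 30-second time step and the allowed clock drift are illustrative assumptions, not requirements from the PCI standard.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, timestep: int = 30, digits: int = 6, when: float = None) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((when if when is not None else time.time()) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def verify_otp(secret_b32: str, submitted: str, drift: int = 1) -> bool:
    """Accept the code for the current 30-second step, plus/minus `drift` steps of clock skew."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, when=now + step * 30), submitted)
        for step in range(-drift, drift + 1)
    )


if __name__ == "__main__":
    # Demo secret -- in practice each webmaster account would get its own, stored securely.
    secret = base64.b32encode(b"demo-shared-secret").decode()
    print(verify_otp(secret, totp(secret)))  # True
```

In practice an authenticator app or hardware token generates the same code on the user's side, so a credential stolen from a compromised PC is useless without the current one-time value.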

It's worth bearing in mind that this applies to any privileged third party that has access to the website, such as your hosting or cloud provider, and even your dedicated server or rack hosting firm. Data centres often have web control panels from which servers, routers and virtual machines can be re-configured. Remote access that is blocked from the outside is usually still authorized for the IP addresses of the data centre's support team, opening one more door if those systems are compromised.
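A simple compensating control is to restrict such management interfaces to a short allowlist of networks and alert on everything else. The sketch below, using only the Python standard library, shows the core check; the CIDR ranges are documentation-only placeholders, not real data-centre addresses.

```python
import ipaddress

# Hypothetical allowlist: your office VPN egress and the data-centre support range.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),    # office VPN (example range)
    ipaddress.ip_network("198.51.100.16/28"),  # data-centre support (example range)
]


def is_allowed(source_ip: str) -> bool:
    """Return True only if the connecting IP falls inside an allowlisted network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)


def check_and_log(source_ip: str) -> bool:
    allowed = is_allowed(source_ip)
    if not allowed:
        # In practice this would feed your SIEM / anomaly-monitoring pipeline.
        print(f"ALERT: control-panel access attempt from unexpected IP {source_ip}")
    return allowed


if __name__ == "__main__":
    print(check_and_log("203.0.113.25"))  # True  - inside the office VPN range
    print(check_and_log("192.0.2.99"))    # False - triggers an alert
```

Even if the third party's network must remain allowlisted, logging and alerting on every control-panel session gives you a chance to spot a compromise of that third party quickly.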

As you can see, besides your own web application with its internal vulnerabilities and weaknesses, there are other risk factors to think about. However, by implementing file integrity monitoring along with proper event and anomaly monitoring, you can significantly reduce these third-party risks. Daily vulnerability scanning is also useful to get notified about the most recent vulnerabilities in your CMS, framework or web server, or about new SSL weaknesses, i.e. issues that had not yet been discovered at the time of your last penetration test. Therefore, vulnerability scanning, file integrity and anomaly monitoring, and malware scanning should be conducted daily, especially nowadays, when there are plenty of free tools and open-source applications to handle all these tasks in-house and reduce operational costs.
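File integrity monitoring in particular is easy to prototype in-house. The sketch below shows the basic hash-baseline approach that dedicated tools such as AIDE or Tripwire implement far more completely: build a baseline of SHA-256 digests for the web root, then compare against it on a daily schedule. The baseline file name and the web-root path are illustrative assumptions.

```python
import hashlib
import json
import sys
from pathlib import Path

BASELINE_FILE = "integrity_baseline.json"  # illustrative file name
WEB_ROOT = Path("/var/www/html")           # illustrative path to the web root


def hash_tree(root: Path) -> dict:
    """Map every file under `root` to its SHA-256 digest."""
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }


def save_baseline() -> None:
    """Record the current state of the web root as the trusted baseline."""
    Path(BASELINE_FILE).write_text(json.dumps(hash_tree(WEB_ROOT), indent=2))


def compare_with_baseline() -> list:
    """Return a list of files added, removed or modified since the baseline."""
    baseline = json.loads(Path(BASELINE_FILE).read_text())
    current = hash_tree(WEB_ROOT)
    changes = [f"ADDED    {p}" for p in current.keys() - baseline.keys()]
    changes += [f"REMOVED  {p}" for p in baseline.keys() - current.keys()]
    changes += [
        f"MODIFIED {p}"
        for p in current.keys() & baseline.keys()
        if current[p] != baseline[p]
    ]
    return changes


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "baseline":
        save_baseline()
    else:
        for change in compare_with_baseline():
            print(change)  # in practice, feed these into alerting / anomaly monitoring
```

Run it once with the `baseline` argument after a verified deployment, then daily from a scheduler; any reported change should either correspond to a known deployment or be investigated as a possible compromise.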

Another important component of risk remediation is source code review, which should be conducted before a web application launches and after any significant update to its functionality. Source code review is an essential part of White Box penetration testing; however, in this blog post we are discussing the more budget-friendly Black Box penetration testing. If you can integrate secure coding and regular code review into your SDLC (System Development Life Cycle), you can largely avoid spending on external source code review by performing this task with internal resources.
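One low-cost way to build this into the SDLC is to gate every build on an automated static analysis pass, keeping manual review for the findings. The sketch below is an illustrative CI-style gate that runs the open-source Bandit scanner over a Python code base and fails the build when it reports issues; the source directory and the choice of Bandit are assumptions, so substitute the analyser that matches your own stack.

```python
import subprocess
import sys

SOURCE_DIR = "src"  # illustrative: path to your application code


def run_static_analysis() -> int:
    """Run Bandit recursively over the source tree; a non-zero exit code means findings (or an error)."""
    result = subprocess.run(
        ["bandit", "-r", SOURCE_DIR],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Static analysis reported issues - block the merge and review them.")
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_static_analysis())
```

Wired into a pre-merge pipeline, a gate like this catches common insecure coding patterns continuously, instead of leaving everything to a one-off review before launch.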

At High-Tech Bridge, we recommend conducting web penetration tests before the launch of your web application, after each significant update, and at least once per year, in order to test it against the new vulnerabilities and attack techniques that appear publicly almost every month. Of course, each web application and its related business needs are unique; however, this timing model can be taken as a baseline.

By High-Tech Bridge research team

About High-Tech Bridge

Established in 2007 with just two employees, High-Tech Bridge has grown to employ 25+ staff and serve 250+ large multinational customers from the financial, industrial, telecom and luxury sectors.

In 2012, analyst firm Frost & Sullivan recognized High-Tech Bridge as one of the market-leading service providers in the ethical hacking industry. High-Tech Bridge also received the prestigious Online Trust Alliance Honor Roll award in 2012, 2013 and 2014.

High-Tech Bridge is an ISO 27001 certified company. The certification assures absolute confidentiality, secure storage and transfer of any data related to our customers, proper corporate risk analysis and internal business continuity. Yearly compliance audits are performed by the SGS certification authority.

High-Tech Bridge's mission is to secure companies and organizations worldwide by providing them with efficient and effective information security services and solutions that are totally vendor and product independent.
