In response to a recent Tripwire study, which revealed that 50% of security professionals believe researchers should not be allowed to test the security constraints of an organisation's network without upfront approval, IT security experts commented below.
Laurie Mercer, Solutions Engineer at HackerOne:
“Most companies (94% of the Forbes Global 2000) do not have a published vulnerability disclosure policy. Of the 28 member states of the European Union, only three have a policy on responsible disclosure. This has led to a situation where nearly one in four security researchers have not reported a vulnerability they found because the vulnerable organisation had no channel through which to disclose it (Hacker Report). Many more researchers have reported vulnerabilities and heard nothing back, often because the reports sit for weeks or months in a helpdesk system or a sales and marketing queue. Receiving and responding to unsolicited security reports does not need to be painful. In fact, responsible disclosure can help make the internet safer for everyone.
The US-based National Institute of Standards and Technology recently recommended processes be established to receive, analyse and respond to vulnerabilities disclosed from internal and external sources, including unsolicited reports from security researchers. The European Union is also considering how to standardise responsible disclosure under the new Cybersecurity Act. In fact, some would argue that it is riskier to deny, ignore or discourage responsible disclosure of unsolicited security vulnerabilities. If one sees a suspicious package in a train station, it is one’s responsibility to report it – why should it be different for online services?
In an ideal world, if a responsible researcher finds a security vulnerability, they would be able to securely and responsibly report it to a security team, who would respond, triage and coordinate the remediation activities. Vulnerabilities would only be publicly disclosed if and when both parties were in mutual agreement, normally after vulnerable systems have been patched. This relies on organisations having defined and rehearsed processes for receiving and managing vulnerabilities from the outside.”
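The coordinated-disclosure workflow Mercer describes, where a report is received, triaged, remediated and only then published with the agreement of both parties, can be sketched as a simple state machine. This is an illustrative sketch only; the class, state and attribute names are hypothetical and do not represent any organisation's actual process.

```python
from enum import Enum, auto

class State(Enum):
    RECEIVED = auto()
    TRIAGED = auto()
    REMEDIATED = auto()
    DISCLOSED = auto()

class VulnerabilityReport:
    """Hypothetical model of a coordinated-disclosure workflow."""

    def __init__(self, summary: str):
        self.summary = summary
        self.state = State.RECEIVED
        self.vendor_agrees = False      # vendor consents to publication
        self.researcher_agrees = False  # researcher consents to publication

    def triage(self) -> None:
        # The security team confirms and prioritises the report.
        self.state = State.TRIAGED

    def remediate(self) -> None:
        # Vulnerable systems are patched before any publication.
        if self.state is State.TRIAGED:
            self.state = State.REMEDIATED

    def disclose(self) -> bool:
        # Public disclosure happens only post-patch, with mutual agreement.
        if (self.state is State.REMEDIATED
                and self.vendor_agrees and self.researcher_agrees):
            self.state = State.DISCLOSED
            return True
        return False
```

The key design point the sketch captures is that `disclose()` is gated on both remediation and mutual consent, mirroring the "defined and rehearsed processes" the quote calls for.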
Martin Jartelius, CSO at Outpost24:
“The findings are indicative of a shift from responsible disclosure to full disclosure. We’ve seen this cyclic shift over the years as organisations using software grow impatient with software vendors’ lack of response (and fixes) to security vulnerabilities discovered and disclosed. Outpost24 employs a 90-day policy, which gives software vendors room for triage, change implementation, proper quality assurance and a structured release to their customers before a vulnerability is made public. In cases where we work closely with a software vendor, we can extend this as a grace period while they work towards a resolution.
This disclosure process applies differently to our customers. When we detect defects in off-the-shelf products during security auditing, we notify customers immediately and provide guidance on their best course of action to minimise the possibility of attack.
Generally, public disclosure does not need to wait indefinitely for a vendor fix, as open-ended waiting only delays a potential fix. Vendors have in many cases delayed publication, using those delays as a way to prevent disclosure of the vulnerability altogether. This works against responsible or coordinated disclosure and creates tension between security researchers and software vendors, with customers on the sidelines growing frustrated.
With regard to unsolicited security reports, they should be treated as an incident and prioritised with the resources necessary to implement fixes for the identified risks. Based on severity, these fixes should be implemented swiftly. However, an unsolicited report, by its nature, indicates an unlawful intrusion, which also warrants an investigation: if a security tester identifies a risk without permission to test, then progresses to exploit it and steal confidential or personal data from systems, it is still a criminal offence. It is more acceptable to observe the risk, or to test against one’s own application data, without attempting further data extraction or theft to prove a point.
In the case of standard software products that can be installed and tested by the analyst, affecting only the analyst’s own data, the best response from the software vendor is a sincere thank you.”
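The 90-day policy Jartelius describes amounts to simple date arithmetic: a fixed disclosure window from the date the vendor was notified, optionally extended by an agreed grace period while a fix is in progress. A minimal sketch, with a hypothetical function name that does not represent Outpost24's actual tooling:

```python
from datetime import date, timedelta

def disclosure_date(notified: date, grace_days: int = 0) -> date:
    """Earliest public-disclosure date under a 90-day policy,
    plus any grace period agreed with the vendor. Illustrative only."""
    return notified + timedelta(days=90 + grace_days)

# Example: vendor notified 1 March 2019, with a 30-day extension agreed.
deadline = disclosure_date(date(2019, 3, 1), grace_days=30)
```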
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.