Organic Denial of Service: When DoS Isn’t an Attack

By Brian A. McHenry, F5 | Mar 22, 2016 06:07 pm PST

Denial-of-service attacks are so common now that “DoS attack” hardly needs explanation, even to the lay person. The phrase instantly conjures images of banking sites that refuse to load and gaming consoles unable to connect. The other instant reaction is to think of attackers such as Anonymous, the Qassam Cyber Fighters, or the Lizard Squad. However, not all denial of service is the product of a coordinated attack. Many forms of DoS are organic by-products of completely normal traffic.

So-called “normal traffic” includes everything from legitimate customers, business partners, and search-index bots to data-mining scraper bots and other, more malicious automated traffic. As we know, anywhere from 40 to 70 percent of any given website’s traffic is automated.

Combined with often unpredictable surges in legitimate user traffic, that automation makes maintaining the availability of any Internet-based service daunting. This brings up a topic of frequent debate: who should be responsible for managing availability, the security team or the infrastructure and application development teams?

The security triad of “confidentiality, integrity, and availability” (CIA) dictates that security practitioners work to ensure availability, and the scope of this duty extends beyond availability issues caused by malicious attacks. Attackers regularly perform reconnaissance to identify vulnerabilities in availability, ranging from the capacity of ISP links and firewall performance to DNS server availability and application performance. Sizing ISP links and firewall throughput is a well-understood and easily quantified aspect of availability planning. DNS capacity and application performance, on the other hand, are oft-overlooked areas of application security.

Application security practices are maturing to remediate OWASP Top 10 vulnerabilities such as injection, cross-site scripting, and poor authentication and authorization handling. However, many application security scans do not identify processor-intensive and bandwidth-intensive URLs, as these aspects of application performance monitoring (APM) might be seen as the sole responsibility of the application development and/or server administration teams. After all, it’s their job to ensure the code is optimized and the server capacity is available, or is it?

Unfortunately, while server infrastructures are more elastic thanks to virtualization and applications are often built to take advantage of that compute power, without proper monitoring and regular scanning, weaknesses in application capacity can quickly lead to serious outages. A single underperforming URL or other web application widget can affect the load of an entire server or farm of servers. Further, application dependencies can cause more serious race conditions, leading to widespread impact.

Proactively scanning web applications to identify underperforming URLs not exposed in software QA or user acceptance testing enables the security team to add protections for heavy or processor-intensive URLs. These protections range from additional log and alert thresholds to more aggressive bot detection and dynamic traffic throttling.
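
As a rough illustration, a lightweight profiling pass might simply time a handful of known application URLs and flag any that respond slowly. The endpoint list, sample count, and threshold below are hypothetical placeholders, and a scan like this would of course be pointed at a staging environment rather than production.

```python
# Minimal sketch: time a set of application URLs and flag slow responders.
# The URL list, sample count, and threshold are hypothetical placeholders.
import statistics
import time

import requests

URLS = [
    "https://staging.example.com/",               # hypothetical endpoints
    "https://staging.example.com/search?q=a",
    "https://staging.example.com/report/export",
]
SAMPLES = 5                # requests per URL
SLOW_THRESHOLD_SEC = 1.0   # flag anything averaging slower than this


def profile(url):
    """Return (average, worst) response time in seconds for a URL."""
    timings = []
    for _ in range(SAMPLES):
        start = time.monotonic()
        requests.get(url, timeout=10)
        timings.append(time.monotonic() - start)
    return statistics.mean(timings), max(timings)


for url in URLS:
    avg, worst = profile(url)
    flag = "SLOW" if avg > SLOW_THRESHOLD_SEC else "ok"
    print(f"{flag:>4}  avg={avg:.2f}s  worst={worst:.2f}s  {url}")
```

The slow responders such a pass surfaces are exactly the URLs that deserve the tighter logging, bot detection, and throttling described above.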

Without such preventative measures, a marketing campaign, Cyber Monday, or an eventful news day can cause denial-of-service conditions unrelated to any malicious attack pattern. Many, if not most, traditional security measures are derived from understanding the normal state of traffic and then identifying anomalous patterns. This methodology underlies everything from IP address blacklisting and whitelisting to attack-signature checking, SYN flood detection, and source/destination ACLs. However, these methods fall short when the cause of DoS is rooted in well-formatted requests for legitimate services.

Since the majority of traffic on Internet-facing websites is automated, filtering out malicious or illegitimate automated traffic offers protection for the resource-intensive features of the web application. Profiling web applications for resource-intensive components, much as attackers do, provides additional insight. Knowing which application components are fragile enables more effective monitoring of metrics such as server response times, which can in turn drive a more dynamic response to potential L7 DoS conditions.
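
One way to picture that dynamic response is a small controller that watches a rolling window of response times for a heavy endpoint and tightens a per-client rate limit when latency climbs. The window size, latency threshold, and rate limits in this sketch are illustrative assumptions, not settings pulled from any particular ADC or WAF.

```python
# Minimal sketch: tighten a rate limit when observed latency rises.
# Window size, latency threshold, and rate limits are illustrative assumptions.
from collections import deque


class AdaptiveThrottle:
    def __init__(self, window=200, slow_p95=1.5, normal_rps=50, throttled_rps=5):
        self.latencies = deque(maxlen=window)  # rolling window of response times (sec)
        self.slow_p95 = slow_p95               # p95 latency that triggers throttling
        self.normal_rps = normal_rps           # requests/sec allowed when healthy
        self.throttled_rps = throttled_rps     # requests/sec allowed under stress

    def record(self, latency_sec):
        """Feed each completed request's response time into the window."""
        self.latencies.append(latency_sec)

    def p95(self):
        """Approximate 95th-percentile latency over the current window."""
        if not self.latencies:
            return 0.0
        ordered = sorted(self.latencies)
        return ordered[int(0.95 * (len(ordered) - 1))]

    def allowed_rate(self):
        """Current requests-per-second budget for automated clients."""
        return self.throttled_rps if self.p95() > self.slow_p95 else self.normal_rps


# Example: healthy traffic, then a slowdown on a heavy URL.
throttle = AdaptiveThrottle()
for latency in [0.2] * 150 + [2.5] * 60:
    throttle.record(latency)
print(throttle.p95(), throttle.allowed_rate())  # throttled once p95 crosses 1.5s
```

In practice this feedback loop would live in an ADC, WAF, or reverse proxy rather than in application code, but the principle is the same: observed application performance feeds the traffic policy.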

Security and availability are intrinsically linked. Leveraging components of the infrastructure such as application delivery controllers (ADCs), application performance monitoring (APM) solutions, and other availability tools is vital to a comprehensive security practice, even if those solutions don’t have “security,” “threat,” or “firewall” in the product name.
