The Escalating Threat of DDoS Attacks

By ISBuzz Team, Writer, Information Security Buzz | Nov 10, 2014 05:05 pm PST

With increasing frequency and scale, some of the world’s largest data center and network operators are suffering crippling Distributed Denial of Service (DDoS) attacks. Virtually every commercial and governmental organization today is largely – if not entirely – reliant on its online services, and the availability of those services is directly at risk from the rising tide of attacks. But don’t take my word for it: DDoS attacks are growing in several measurable ways, as evidenced by:

– Frequency – DDoS attacks are increasing by 50 percent year-over-year, according to Akamai’s State of the Internet 2014 study.

– Size – Akamai counted 17 attacks of more than 100 Gbps in Q3 of 2014 – a 389 percent increase compared to Q2.

– Severity – The biggest impact is the staggering increase in the average packets per second (PPS) rate in typical DDoS attacks. DDoS attack rates have skyrocketed 1,850 percent to 7.8 Mpps between 2011 and 2013, according to the often-quoted Verizon data breach report earlier this year.

– Sophistication – We find in the Incapsula 2013-2014 DDoS Threat Landscape Report that 81 percent of attacks are multi-vector threats, and that botnets are getting smarter.

– Persistence – DDoS attack campaigns in 2014 have been marked by high volume traffic and longer attack durations, reaching an average of 22 hours in Q3 according to the Akamai State of the Internet report.

The growing volume and scale of DDoS attacks impairs services used by hundreds of millions of people around the world, including e-commerce, financial services, gaming, social media and even governmental and healthcare services. Even the well-funded networks of the largest U.S. banks have experienced outages due to DDoS attacks. Bank of America, Wells Fargo, US Bank, JP Morgan Chase, SunTrust, PNC Financial Services, Regions Financial and Capital One have all purportedly lost service availability for extended periods as a result of large-scale DDoS attacks. And they are not alone: smaller credit unions have also had their share of DDoS attacks. The gaming industry, all too familiar with DDoS, suffered as well when a group calling itself Lizard Squad claimed responsibility for using a new amplification attack against Sony’s PlayStation Network and other gaming networks.

These trends have resulted in new guidelines from the U.S. Federal Financial Institutions Examination Council (FFIEC), as well as the Monetary Authority of Singapore (MAS), requiring banking entities to put infrastructure in place to handle these cyber-attacks. Gartner recommends an eight-step program to control DDoS damage (“Master These Eight Steps to Control the Damage from DDoS Attacks,” Gartner, April 2014).

New, focused and targeted DDoS attacks are a devastating contrast to security threats such as worms, phishing and virus attacks. A DDoS attack launched by a criminal or vicious competitor can take an entire business offline for an extended period, and the ease with which an attack can be generated makes every organization vulnerable. Criminal syndicates and commercially motivated hackers have built “for hire” botnet networks that can be “rented” on-demand over the Internet. These criminals shamelessly promote their DDoS services, also known as “booters,” and often market them as web performance test tools or “stressers.” For example, for less than $30 these organizations will launch an hour-long attack, and of course more money provides higher degrees of havoc. While individual DDoS attacks were historically launched by gamers to gain control of an online game session (a.k.a. “booting the host”), the booters have come within easy reach of the masses, such as disgruntled ex-employees who want to disrupt the services of their previous employer. Extortion by criminal syndicates is another common motivation. And more recently, DDoS attacks have been used as a smokescreen: attackers use the havoc created while an organization responds to an attack as cover to exfiltrate data from it.

DDoS attacks, and the measures against them, involve many moving parts, but they all share one common element: large-scale zombie networks, or botnets, sending traffic at very high packet-per-second rates. Protecting against these massive botnets requires equally powerful tools.

Large-Scale Concerns

Against the backdrop of unparalleled growth in DDoS attacks, the common thread is that everything happens at large scale. The main elements of the DDoS problem are the increasing scale of botnets, bandwidth and connection rates.

In March 2013, Spamhaus, an organization that tracks and lists known spammers in a database, suffered what is still one of the largest attacks in Internet history, reportedly clocking in at 300 Gbps. The attack followed a dispute with CyberBunker, a hosting company that allows virtually anything to be hosted. CyberBunker’s IP addresses were listed in Spamhaus’ database and, as a result, many email servers would not accept email from those addresses (many email systems cross-check against Spamhaus’ database). After Spamhaus refused to remove the CyberBunker IPs, a large-scale DNS amplification attack began, peaking at 300 Gbps.

In February 2014, an undisclosed CloudFlare customer came under an attack that easily trumped the scale of the Spamhaus incident. CloudFlare claims the peak bandwidth was just shy of 400 Gbps, made possible by leveraging a Network Time Protocol (NTP) amplification attack.

In its State of the Internet Report for Q3 2014, Akamai mentions DDoS assaults lasting for more than a week, as well as an attack peaking at more than 300 Gbps and 72 Mpps, making it the largest DDoS attack on its network to date.

These attacks are possible due to the exploitation of a vast number of poorly configured networks and servers. Older, unpatched content management systems (CMS) such as WordPress, Drupal and Joomla are a popular target to enlist as zombies in a botnet, since these servers have higher bandwidth connectivity than private Internet connections and are always on. Many Internet services, such as DNS or NTP, can be leveraged in amplification attacks if left unpatched or misconfigured. In fact, the Open Resolver Project has indexed about 28 million open DNS resolvers that can be exploited for DDoS attacks. The Internet of Things concept, where virtually every device is connected, brings even more risk: new amplification attacks leverage customer premises equipment (CPE) to generate large-volume attacks.

In order for these services to send traffic to the DDoS victim, the originating network has to allow packets with spoofed source IPs to leave it. Properly configured networks do not allow source IPs that are not part of their Autonomous System (AS) to exit the network. Unfortunately, many networks do not check for this; the Spoofer Project shows that almost a quarter of all networks allow spoofing.
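
To make the egress-filtering idea concrete, here is a minimal sketch of the check a well-behaved network edge performs. It is an illustration of the logic only, not actual router configuration, and the prefixes are documentation ranges chosen as examples:

# Minimal sketch of BCP38-style egress filtering: a packet should only leave
# the network if its source address belongs to a prefix this network actually
# originates. The prefixes below are documentation ranges used purely as examples.
import ipaddress
OWN_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]
def allow_egress(source_ip: str) -> bool:
    """Return True only if the source address belongs to one of our own prefixes."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in prefix for prefix in OWN_PREFIXES)
print(allow_egress("203.0.113.25"))  # True  - legitimate customer traffic
print(allow_egress("192.0.2.10"))    # False - spoofed source, should be dropped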

Botnets: Unified Attacks

Many personal computers, and even more powerful web servers, are infected with viruses or malware that allow an attacker to control them remotely. These compromised hosts, known as “zombies” or “bots,” are legion and can be controlled in unison, as they are all linked together by “command-and-control” software to form a “botnet.” The bots typically “call home” to a command-and-control center, communicating over Internet Relay Chat (IRC) channels. This allows the attacker to hide while traffic from each bot accumulates to gigantic proportions, taking out the intended victim by saturating its Internet connection or overwhelming the service or its supporting infrastructure, and rendering the service unavailable to legitimate clients.

The volume of traffic from each individual bot or zombie machine doesn’t seem out of the ordinary, so the malicious traffic often flies under the radar of the service provider hosting the bot (if it is monitored at all). It is the aggregation of traffic from thousands, or even tens of thousands, of bots targeting a single host that creates the crippling impact.
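
As a rough, illustrative calculation (the per-bot figures below are assumptions, not values taken from the reports cited in this article), the arithmetic of aggregation looks like this:

# Back-of-the-envelope view of why individually unremarkable bots add up.
# Per-bot figures are illustrative assumptions, not measured values.
bots = 20_000            # compromised hosts in the botnet
per_bot_mbps = 5         # modest upstream rate per bot, easy to overlook
per_bot_pps = 3_000      # small packets at a low rate per bot
total_gbps = bots * per_bot_mbps / 1_000
total_mpps = bots * per_bot_pps / 1_000_000
print(f"Aggregate bandwidth: {total_gbps:.0f} Gbps")    # 100 Gbps
print(f"Aggregate packet rate: {total_mpps:.0f} Mpps")  # 60 Mpps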

Different botnets can also be used in a single attack. With the increase in connected devices (popularly called the “Internet of Things”), potential botnet sizes are growing rapidly as well. The first Android-hosted bots have already been reported, and with 6.8 billion mobile phone subscribers worldwide, this is an area that needs to be watched: overall botnet activity was up 240 percent in the first quarter of 2014 alone. For comparison, in 2013 more than 60 percent of all web traffic was found to be generated by bots. According to the Incapsula report referenced earlier, 29 percent of botnets attack more than 50 targets a month, a 26 percent year-over-year increase.

Bandwidth: Increased Attack Volume

Along with the increasing number of zombie hosts, the bandwidth contributed by each bot is increasing as well. Current botnets increasingly leverage compromised commercial servers with high-speed data center connectivity (e.g., servers running CMS platforms such as WordPress) instead of private consumer Internet connections. These servers are equipped with higher bandwidth connectivity, increasing the total botnet capacity.

Amplification and reflection techniques compound this problem even more dramatically. These attacks forge (spoof) the victim’s IP address in queries sent to DNS or NTP servers, for example. The servers then send their responses to the victim’s IP, and those responses can be 200 times the size of the initial query. This, of course, happens across a whole array of such servers, so the amplified traffic can accumulate to hundreds of Gbps. The bandwidth used in DDoS attacks is ever increasing: 2014 has already seen a 39 percent increase in average bandwidth and a 35 percent increase in simple network-layer attacks, and peak traffic is up an astonishing 114 percent. Neustar reports that it is not uncommon for attacks to reach 100 Gbps or higher; as of April 2014, the Neustar Security Operations Center had already mitigated more than twice as many 100+ Gbps attacks as it did in all of the previous year. Incapsula decided to include the first quarter of 2014 in its 2013 DDoS report due to the high incidence of newly reported DDoS attacks, and as expected, it is seeing the same trends as other DDoS cloud protection providers: large-scale attacks are increasing and account for almost 33 percent of all DDoS attacks.
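
A rough calculation shows how quickly reflected traffic accumulates. The figures below are round-number assumptions chosen for illustration, using the 200x factor mentioned above:

# Rough arithmetic behind reflection/amplification. All figures are
# illustrative assumptions; real amplification factors vary by service.
query_bytes = 60         # small spoofed request sent to each reflector
amplification = 200      # response ~200x the request size, per the text
reflectors = 5_000       # abused open DNS/NTP servers
queries_per_sec = 500    # spoofed queries sent to each reflector per second
victim_bps = query_bytes * amplification * reflectors * queries_per_sec * 8
print(f"Traffic arriving at the victim: {victim_bps / 1e9:.0f} Gbps")  # 240 Gbps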

Connection Rates: Most Significant Increase

DDoS doesn’t just come in the form of massive bandwidth attacks; a large and growing portion of attacks use varying levels of sophistication to exhaust a service, or the infrastructure behind it. Application-layer attacks are identified not only by their volume but by their connection behavior. The Slowloris attack, for example, consumes a web server’s resources by communicating with it as slowly as possible: just before a connection times out, it sends another small fragment of the request to keep the connection alive. This is done by a multitude of bots simultaneously, and the web server’s resources become so exhausted that it can no longer respond to legitimate requests.
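
The following sketch, using assumed round numbers for the server’s worker pool and request timeout, shows just how little traffic such an attack needs:

# How little traffic a Slowloris-style attack needs. All figures here are
# assumed round numbers for illustration.
worker_slots = 256        # concurrent connections the web server will serve
header_timeout_s = 300    # how long the server waits for a complete request
refresh_bytes = 20        # tiny partial request fragment sent to reset the timer
# One refresh per connection per timeout window keeps every slot occupied:
attack_bps = worker_slots * refresh_bytes * 8 / header_timeout_s
print(f"Bandwidth needed to hold all {worker_slots} slots: {attack_bps:.0f} bps")
# About 137 bps - far too little to stand out in volume-based monitoring.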

Ironically, the problem with these large-scale connection rates often lies with the security infrastructure itself: firewalls and intrusion prevention systems can also fall victim to resource attacks because of their stateful nature, maintaining session and connection state for each flow. The SYN flood, one of the oldest and most prevalent attack types, can be devastating to stateful security infrastructure, even though the attack is technically aimed at exhausting the TCP stack of a node inside the network. More than 50 percent of large-scale DDoS attacks include a SYN flood component, according to Incapsula. Attacks often consist of both network-layer and application-layer components simultaneously, also known as multi-vector attacks.
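
A quick calculation (with assumed values for the firewall’s table size and timeout) illustrates why a stateful device is so exposed:

# Why stateful devices suffer: each half-open connection occupies a table
# entry until it times out. The values below are assumed for illustration.
syn_pps = 500_000             # spoofed SYNs per second reaching the firewall
half_open_timeout_s = 30      # how long the device keeps an embryonic entry
state_table_size = 2_000_000  # maximum concurrent sessions the device tracks
entries_needed = syn_pps * half_open_timeout_s
print(f"Entries consumed at steady state: {entries_needed:,}")       # 15,000,000
print("State table exhausted:", entries_needed > state_table_size)   # True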

Because network packet sizes vary, the packet-per-second (PPS) rate is the most important metric for measuring DDoS attacks, as opposed to the bandwidth metric used for pure volumetric attacks. Verizon’s 2014 Data Breach Investigations Report notes that the mean PPS attack rate is on the rise, increasing 4.5 times compared to 2013. If we carefully extrapolate these numbers, we can expect 37 Mpps attacks in 2014 and 175 Mpps in 2015. These are mean values, used to show the trend; far higher PPS rates have of course been seen. For this reason, Prolexic (Akamai) focuses on peak values in its DDoS monitoring, so that network architects can provision networks for the worst-case scenario.
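
Reproducing that extrapolation, starting from the 7.8 Mpps mean cited earlier (the annual growth multiplier below is an assumption chosen to match the figures in the text):

# Reproducing the rough extrapolation in the text. The growth factor is an
# assumption in line with the ~4.5x year-over-year increase cited above.
mean_2013_mpps = 7.8          # mean attack rate cited earlier in the article
growth_factor = 4.7           # assumed annual multiplier
mean_2014_mpps = mean_2013_mpps * growth_factor
mean_2015_mpps = mean_2014_mpps * growth_factor
print(f"2014: ~{mean_2014_mpps:.0f} Mpps, 2015: ~{mean_2015_mpps:.0f} Mpps")
# Roughly 37 Mpps and 172 Mpps, in line with the figures above.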

Detection and Mitigation Requirements

As attacks involve high-scale bandwidth, connections and packet rates, organizations need large-scale power to mitigate them. High-performance, purpose-built hardware can mitigate network-layer attacks very effectively. But as mentioned, DDoS attacks come in many shapes and forms and are not limited to the network layer. High-performance processors and intelligent software are both required to inspect traffic at the highest packet rates, and then plenty of processing power needs to be available to actually mitigate unwanted traffic. The most effective combination is to leverage dedicated network traffic processors (such as FPGAs) to handle the common network-layer attacks and also have powerful, multi-core CPUs available for the more complex application-layer attacks. With the clear precedent that the scale of DDoS keeps growing in all directions, plenty of processing headroom is required to prepare your network against future generations of DDoS attacks.
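
As a purely conceptual sketch of that division of labor (the function names, thresholds and heuristics below are invented for illustration and do not represent any vendor’s implementation), the fast path handles cheap per-packet checks while heavier application-layer analysis runs on general-purpose cores:

# Conceptual sketch of the two-tier mitigation described above: a cheap fast
# path for common network-layer patterns and a slower path for application-
# layer behavior. Names, thresholds and heuristics are illustrative assumptions.
from dataclasses import dataclass
@dataclass
class Packet:
    src_ip: str
    syn_only: bool            # TCP SYN with no payload
    payload: bytes = b""
SYN_BUDGET = 10_000           # per-source SYNs allowed per counting interval
syn_counters = {}
def fast_path(pkt: Packet) -> bool:
    """Cheap per-packet checks, the kind of work offloaded to dedicated hardware."""
    if pkt.syn_only:
        syn_counters[pkt.src_ip] = syn_counters.get(pkt.src_ip, 0) + 1
        if syn_counters[pkt.src_ip] > SYN_BUDGET:
            return False      # drop: SYN flood pattern from this source
    return True               # pass on for deeper inspection
def slow_path(pkt: Packet) -> bool:
    """Heavier application-layer analysis run on general-purpose CPU cores."""
    # Placeholder heuristic: tiny, never-completing HTTP headers (Slowloris-like).
    suspicious = 0 < len(pkt.payload) < 32 and not pkt.payload.endswith(b"\r\n\r\n")
    return not suspicious
def mitigate(pkt: Packet) -> bool:
    # In practice the counters would be reset every interval; omitted here.
    return fast_path(pkt) and slow_path(pkt)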

If you are concerned about the possibility of major service outages due to DDoS attacks, ensure that your security vendor can scale to mitigate the largest multi-vector attacks at your network’s edge. Build a DDoS security infrastructure that can meet the botnet-driven threats of today and tomorrow by looking at high-performance, multi-vector DDoS detection and mitigation solutions. Make sure your chosen solution can mitigate the highest PPS rates and bandwidths, and that it provides various deployment modes and APIs so it can integrate into any network architecture.

By Rene Paap, Product Marketing Manager, A10 Networks

Bio: Rene Paap, Product Marketing Manager at A10 Networks, is a networking professional with more than 15 years of experience. Through previous roles as a Technical Marketing Engineer, he developed a thorough understanding of networking technologies and now specializes in product positioning and related product marketing.
