Is Bot Detection the Best Value in InfoSec?

By Brian A. McHenry, F5 | Jul 21, 2015 07:00 pm PST

Spending on cyber security solutions is exploding. Security startups like CrowdStrike are attracting investment funding to the tune of $100M, and enterprises are hiring security engineers as quickly as they can find them. Unfortunately, unlike online shopping, where there's always a deal site or coupon code waiting to be used, there's no coupon code for getting the most out of our efforts to improve security.

Instead of wishing for a coupon code, we should focus on reducing the risk of a successful denial-of-service (DoS) attack or, worse, a data breach.

While every organization is different, with a unique threat model, there are some universal truths. For example, we know that most successful attacks are preceded by reconnaissance of network and application security. Much of that digital reconnaissance is performed via automated tools, which can be characterized as bots. The annual Bot Traffic Report showed that, once again in 2014, over half of Internet traffic was sourced from bots.

Bots come in all shapes and sizes. Some bots are friendly, like spider bots from Google and Yahoo, while others are much more malicious. Malicious bots include automated attack scripts, malware-infected machines, scrapers, and spammers.

Successful attack scripts can often lead to more intensive, manual probing for flaws. For example, SQL injection flaws are less common than ever, yet injection remains in the #1 spot in the OWASP Top 10. The reason for this high ranking is the risk that a successful injection exploit results in data loss; injection has been at the heart of many high-profile password and credit card breaches in recent years. Attackers and penetration testers alike employ tools like SQLmap to automate probing for injection flaws. While many web application firewalls (WAFs) and intrusion prevention systems (IPSs) can detect and block SQL injection attempts, the incidence of false positives is high in such signature-based solutions.
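To make the false-positive problem concrete, here is a minimal sketch of signature-based request inspection in Python. The patterns and sample inputs are hypothetical and far simpler than any production WAF rule set:

```python
import re

# Hypothetical signatures; real WAF rule sets are far larger and more nuanced.
SQLI_SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),
    re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),
    re.compile(r"(?i)['\"]\s*;\s*drop\s+table"),
]

def looks_like_sqli(param_value: str) -> bool:
    """Return True if any signature matches the request parameter."""
    return any(sig.search(param_value) for sig in SQLI_SIGNATURES)

# A legitimate search query can trip the same signature as a real attack,
# illustrating the false-positive problem with purely signature-based inspection:
print(looks_like_sqli("credit union select savings account"))  # True, but benign
print(looks_like_sqli("robert'; DROP TABLE students;--"))      # True, malicious
```

The first input is a plausible, benign search query, yet it matches the same pattern as a genuine injection attempt, which is exactly the kind of ambiguity that drives false positives in payload inspection.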

Detecting the automated first pass would do much to discourage all but the most motivated would-be attackers, which underscores the value of malicious bot detection: reconnaissance is frustrated, if not largely eliminated. Focusing threat mitigation on client type rather than on the specific nature of each attack reduces the chance of false positives, along with the need for extensive knowledge of the back-end application infrastructure.

The first line of defense in bot detection has traditionally been the use of IP address reputation lists or blacklists. However, "known botnets" change addresses often. Blocking known Tor or anonymizer proxies is similarly fraught with error and false positives, as these addresses frequently change or may even be the source of legitimate traffic. In practice, IP-based blocking requires so much management overhead to groom and maintain whitelists and blacklists that many organizations subscribe to multiple IP reputation services in an effort to aggregate the most accurate list possible.
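As a rough illustration of that aggregation work, the sketch below merges a couple of hypothetical reputation feeds into a single block list and checks a client address against it. The feed contents and addresses are placeholders, not real services:

```python
import ipaddress

# Hypothetical feeds; in practice these would be pulled from commercial or
# open-source IP reputation services and refreshed frequently, since botnet
# and anonymizer addresses churn constantly.
FEED_A = {"203.0.113.0/24", "198.51.100.7/32"}
FEED_B = {"198.51.100.0/25", "192.0.2.44/32"}

def build_blocklist(*feeds):
    """Aggregate several reputation feeds into one deduplicated block list."""
    networks = {ipaddress.ip_network(entry) for feed in feeds for entry in feed}
    return sorted(networks)

def is_blocked(client_ip, blocklist):
    """Check whether a client address falls inside any blocked network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in blocklist)

blocklist = build_blocklist(FEED_A, FEED_B)
print(is_blocked("198.51.100.7", blocklist))  # True  - listed in both feeds
print(is_blocked("192.0.2.10", blocklist))    # False - not currently listed
```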

Many bots display known patterns in their requests, such as the User-Agent header, identifying them as non-browser clients. In this case, signature-based solutions are an accurate and effective means of detecting and blocking bots. However, more advanced bots will employ obfuscation techniques or mimic legitimate browsers. As we might expect, more advanced detection methods, such as behavioral analysis, are available to counter these sophisticated bots. Behavioral analysis includes tracking keyboard or mouse movement and the JavaScript capabilities of a client. More advanced detection solutions are able to track surfing behavior, such as the rapid page transitions or page loads characteristic of scrapers or aggregators.
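The sketch below combines both ideas in simplified form: a first-pass check for tool-like User-Agent strings, and a crude behavioral signal based on how quickly a client moves between pages. The marker strings, window, and threshold are illustrative assumptions only:

```python
import time
from collections import defaultdict, deque

# Hypothetical markers; real deployments rely on curated signature sets plus
# JavaScript and interaction challenges served to the client.
NON_BROWSER_AGENTS = ("curl", "python-requests", "sqlmap", "scrapy", "wget")

def agent_looks_automated(user_agent):
    """First pass: flag clients whose User-Agent admits to being a tool."""
    return any(marker in user_agent.lower() for marker in NON_BROWSER_AGENTS)

# Second pass: scrapers and aggregators move between pages far faster than a
# human reading them, so count page transitions per client in a short window.
PAGE_WINDOW_SECONDS = 10
PAGE_LIMIT = 20
_recent_pages = defaultdict(deque)

def paging_too_fast(client_id, now=None):
    """Behavioral signal: too many page transitions within the window."""
    now = now if now is not None else time.monotonic()
    hits = _recent_pages[client_id]
    hits.append(now)
    while hits and now - hits[0] > PAGE_WINDOW_SECONDS:
        hits.popleft()
    return len(hits) > PAGE_LIMIT
```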

At the application layer, DoS attacks are more insidious, often mimicking legitimate requests in every way and sourced from multiple IP addresses. Dynamic rate-limiting of traffic at the URL level is vital to protecting aspects of a web application that may be more resource-intensive. At the URL level, we are able to monitor the rate of requests to a specific web application resource, as well as the latency of the application servers’ responses. These are definitive markers of an attempted application layer DoS attack, which relies not on massive amounts of bandwidth, but carefully crafted and targeted requests to the web application server that induce abnormal load and stress.
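A simplified version of that logic might look like the following, where the per-URL state, window length, and multipliers over baseline are assumed values rather than anything a particular product prescribes:

```python
import time
from collections import defaultdict, deque

WINDOW = 30.0  # seconds of history to keep per URL

# Per-URL history of (timestamp, response_latency_seconds). In practice this
# state would live in the proxy or WAF sitting in front of the web servers.
_history = defaultdict(deque)

def record(url, latency, now=None):
    """Record one completed request against its URL and trim old entries."""
    now = now if now is not None else time.monotonic()
    hist = _history[url]
    hist.append((now, latency))
    while hist and now - hist[0][0] > WINDOW:
        hist.popleft()

def should_throttle(url, baseline_rps, baseline_latency, now=None):
    """Throttle a URL when both its request rate and its server latency climb
    well above baseline, the combination that marks a layer 7 DoS attempt."""
    hist = _history[url]
    if not hist:
        return False
    rps = len(hist) / WINDOW
    avg_latency = sum(lat for _, lat in hist) / len(hist)
    return rps > 3 * baseline_rps and avg_latency > 2 * baseline_latency
```

Watching request rate and response latency together is the point: a popular page can see a burst of traffic without stress, but rising latency under a rising request rate on one URL is the signature of a targeted, resource-exhausting attack.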

Combining these advanced techniques enables us to eliminate a significant percentage of illegitimate traffic without employing anything so complex as payload inspection. In addition, since these attack and scanning techniques are so closely linked to advanced reconnaissance, we reduce the probability of future attacks that may be more directed and harder to detect. With a simple shift in perspective, away from the web application and toward the nature and behavior of the client requesting access to the data presented by that application, we reduce the scope of threat mitigation while increasing efficacy. Although bot detection represents only one aspect of a robust security posture, it is certainly among the most effective ways to reduce our threat surface.
