You won’t often find me writing something prompted by a specific product, in this case IPViking Live Threat Intelligence, but it was too fascinating not to (click on the image to see the thing in action).
June has arguably been the month of Threat Intelligence (TI). Microsoft, Symantec and GCHQ have all been shouting about new tools or resources. Things that give better or more joined-up sight of global cyber threats (no doubt heralding complementary consultancy offerings from just about everyone).
Decent, dynamic threat intelligence is indisputably a critical ingredient when trying to thrash out your real level of cyber risk. It’s also pretty handy when you pitch for budget to fix existing vulnerabilities, buy new tools and/or cyber insure.
On 19th June, Business Daily looked at a survey by Checkpoint (who have their own TI offering). The 140 InfoSec professionals questioned called out widespread problems identifying and mitigating attacks, put down in large part to a lack of useful threat intelligence.
“The gap between attack sophistication and available threat intelligence meant 31% of respondents said their organisation had suffered up to 20 successful attacks in the past 12 months – while 34% were unable to say exactly how many they had fallen victim to”
In this case study Norse tell us how IPViking detected over 100 TOR exit nodes used to attempt over $400k worth of fraudulent transactions via a political campaign’s fundraising website. When the so called “bad actors” were identified, they were blocked and the fraud was prevented.
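Mechanically, that kind of screening can be as simple as set membership against a feed. Here is a minimal Python sketch; the addresses are reserved documentation IPs and this is in no way a claim about how IPViking is actually implemented:

```python
# Illustrative sketch only: checking transaction sources against a list of
# known TOR exit nodes. A real deployment would consume a continuously
# updated exit-node feed, not a hard-coded snapshot.

KNOWN_EXIT_NODES = {"198.51.100.7", "203.0.113.42"}  # hypothetical feed snapshot

def screen_transaction(source_ip: str, amount: float) -> str:
    """Block transactions originating from a listed TOR exit node."""
    if source_ip in KNOWN_EXIT_NODES:
        return "BLOCK"   # source matches a known exit node
    return "ALLOW"

# One suspect source, one clean one
blocked = [ip for ip in ("198.51.100.7", "192.0.2.10")
           if screen_transaction(ip, 250.0) == "BLOCK"]
```

In practice you would log and rate-limit rather than silently drop, but the core decision really is just a lookup against the intelligence feed.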
So, before folk bring you stats and pretty graphics – like the “WOW” stuff from Norse – and get your senior budget holders properly excited, how do you prepare to balance out the hype? It’s lovely to have more risk data, but are you ready to use it? Can you translate it into meaningful security ‘to do’ lists?
Not an easy question to answer.
As a starter for 10, the following are ways threat intelligence is expected to inform your security stance and response, put together from various sources:
Increasing and speeding effectiveness of SIEM (Security Incident & Event Management) solutions
Essentially it improves your ability to tell bad stuff from benign stuff and improves the availability, quality and quantity of info about the source, nature and prevalence of attacks. For a more sophisticated version of that take on things, have a look at this 30th June article “Gathering and using threat intelligence” by Checkpoint’s Mirko Zorz.
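To make the "bad from benign" point concrete, here is a minimal sketch of TI enrichment of SIEM events. The feed structure, field names and addresses are invented for illustration, not any particular product's API:

```python
# Hypothetical reputation feed, keyed by source IP (documentation addresses)
REPUTATION_FEED = {
    "203.0.113.99": {"verdict": "malicious", "campaign": "bruteforce-botnet"},
    "198.51.100.20": {"verdict": "suspicious", "campaign": None},
}

def enrich(event: dict) -> dict:
    """Tag a raw SIEM event with the feed's verdict on its source address."""
    intel = REPUTATION_FEED.get(event.get("src_ip"),
                                {"verdict": "unknown", "campaign": None})
    return {**event, "ti_verdict": intel["verdict"], "ti_campaign": intel["campaign"]}

events = [{"src_ip": "10.0.0.5", "action": "login_ok"},
          {"src_ip": "203.0.113.99", "action": "login_fail"}]

# Analysts see feed-confirmed bad traffic first, benign noise last
triaged = sorted((enrich(e) for e in events),
                 key=lambda e: {"malicious": 0, "suspicious": 1, "unknown": 2}[e["ti_verdict"]])
```

The enrichment adds nothing you could not look up manually; the value is doing it automatically, at event volume, before an analyst ever opens the queue.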
Day Job Effect?
It depends. How DO you decide what needs a firewall change or patch when you look at your shiny ‘Threat Intelligenced’ SIEM report? How well do you understand your IT estate and how good are your risk assessment and incident response processes?
In February, JB O’Kane (Principal Consultant for Vigilant) said:
“Coming up with a threat-vulnerability pairing can help you hone in on a risk-based approach. If the feed is coming in saying you’re exposed to these threats, you start to narrow things down and turn the threats and vulnerabilities into pairs so that now they’re decision nodes. Now you’re getting closer and closer to understanding the true risk that you might be exposed to.”
Dark Reading, “Threat Intelligence Brings Dynamic Decisions To Risk Management”
In the same article, Srinivas Kumar, CTO of TaaSERA, agreed:
“Active intelligence will help drive innovation in IT services, improving early warning and remediation of coordinated and targeted attacks. But it will take equally coordinated efforts to actually integrate threat intelligence into the fabric of today’s risk management and security ops practices“
I don’t think that goes far enough to highlight the real pitfalls of buying into solutions too fast. Threat intelligence is just interesting news if not efficiently put together with local vulnerability and impact info. The quality of threat intelligence also varies widely, both in terms of useful interpretation of raw data and in terms of completeness (hence moves by many vendors to co-operate and consolidate feeds).
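O’Kane’s pairing idea can be sketched in a few lines: intersect the feed’s actively exploited CVEs with your own scan results, and only the intersections become decision nodes. The inventory, assets and feed contents below are all illustrative:

```python
# Hypothetical "actively exploited" list from a threat-intelligence feed
threat_feed = {"CVE-2014-0160", "CVE-2014-6271", "CVE-2013-3893"}

# Hypothetical output of internal vulnerability scanning, per asset
local_vulns = {
    "web-frontend": {"CVE-2014-0160", "CVE-2012-1823"},
    "hr-portal":    {"CVE-2013-3893"},
    "dev-sandbox":  {"CVE-2011-3192"},
}

def pair_threats(feed, inventory):
    """Return only (asset, CVE) pairs where an active threat meets a local exposure."""
    return sorted((asset, cve)
                  for asset, cves in inventory.items()
                  for cve in cves & feed)

decision_nodes = pair_threats(threat_feed, local_vulns)
# Each pair is a decision node: patch, mitigate, or accept. Everything
# else in the feed is, for now, just interesting news.
```

Note what falls out: the sandbox box with an old, unexploited flaw generates no decision node at all, which is exactly the narrowing-down O’Kane describes.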
Supplementing traditional anti-virus implementations
It’s no surprise that the AV vendors are major players in the TI space. Of course it complements and leverages their existing business models and capabilities, but it may also have something to do with a certain widely reported quote from an AV company executive: “Anti-virus is dead”.
Is anti-virus dead? Not while most businesses still have a massive investment in it and no-one high profile has had the balls to uninstall it.
AV solutions have increasingly limited, mainly detective, value. Sandboxing and analysing inbound data, while not denying users access, is about the gold standard.
“Advanced attackers haven’t respected anti-virus software for at least a decade… It’s a speed hump rather than a barrier and it hasn’t kept up with today’s threats,” said Ben Johnson, chief evangelist for Bit9 and Carbon Black, in a June article for The Telegraph Business Technology site.
Threat intelligence keeps your AV better informed. Another input that increases the speed and effectiveness of detection.
Day Job Effect?
Enterprise AV enhanced with threat intelligence might throw out more real or false positive alerts. It depends on what’s being stopped by perimeter and other network defences. Any previously undetected malware will either be dealt with by the tool, or you’ll have to intervene to mitigate further spread and impact while planning a fix…same as you currently do.
But how did you cope with past outbreaks? Take the opportunity to realistically review incident response capabilities before new functionality gets switched on. The same goes for adding threat intelligence to your SIEM solution.
There may be no upswing in the number of incidents, but you need to be prepared. You’ve just sold the board on the risk reduction benefits of TI plug-ins, so extra attention will be paid when something goes wrong.
Improving General IT Security Administration
Good TI can mean more timely and better justified patch and change plans for your software, perimeter and endpoints. It can also enhance plans for strategic improvements to the IT estate.
Desirable stuff. Achievable IF you know the extent to which you’re exposed to reported threats and you can put that together with a clear and persuasive view of potential impact. If you can’t, it’s likely to make your day job harder.
Day Job Effect?
Better quality decisions when managing day to day security, plus credibility and better business cases for strategic spend.
OR
More info to drown in, worry about and get beaten over the head by, if something goes bang before you manage to get a handle on new threats.
Of course the value isn’t all reactive. Threat intelligence provides valuable early sight of new nasties. But, you still have to plan a sensible response based on local risks. If done well, it can counteract threat related FUD from the media or sales hungry consultants.
Enhancing Security Risk Management
Much the same story as above. Well communicated threat data is persuasive, just like the IPViking dynamic map, but it is utterly useless unless you can demonstrate its relevance to YOUR network.
Any improvement in the quality of security risk reporting will depend on the availability and clarity of information about current vulnerabilities, the mapping of that to current threat intelligence, and the quality of information about potential fallout from an exploited vulnerability.
Day Job Effect?
If the whole picture is available, there will be a significant upswing in the quality of security risk data and therefore credibility of the risk function. Priceless commodities when budget time swings around.
To quote Jason Clark in his 24th June article for CSO Online, “Decoding Threat Intelligence”:
“Answering these types of questions moves your business along a security journey that begins in the hell of ad hoc approaches and ends at the nirvana of a business-aligned security program”
If you can’t answer the questions, your risk function is just a FUD fountain, telling tales of cyber monsters without the means to responsibly scale those technicolor risks or describe their mitigation.
Conclusions
Threat Intelligence becomes an oxymoron without the context of your local exposure. Integrated into your SIEM or AV solution it will increase your capability to spot, understand and deal with most nasties. But only IF you know what to fix, where it is and how the fix will impact the business.
Is it better not to know? No. But if the business invested in this, with your backing, it’s safer for your career if you can actually make use of outputs and demonstrate real results. Or, at the very least, explain the plan to get to that point.
So, is your business mature enough to get that value-add, or rich enough to buy in expertise to get you there?
Not all Threat Intelligence is created equal. Piecemeal or poorly interpreted threat data is of limited use. Spy before you buy – get real reports and implementation case studies and compare the same from other vendors. Delve into the source and comprehensiveness of threat data consolidated into feeds. Look at independent industry opinions on the value of various offerings.
Threat Intelligence is still a poorly defined term. Cherry pick threats at your own risk. Here the discussion has revolved implicitly around cyber threats and, more specifically, cyber threats initiated from outside the network.
But threat intelligence can apply to attack mechanisms as well as to potential sources of vulnerability in your business and IT environment. This is where the old People, Process, Technology triumvirate comes in handy. No prizes for guessing which corner of the triangle is most ignored.
Social Engineering is the most dramatically underestimated ingredient in a vast number of eventual exploits, as I called out in this mildly controversial tweet back on 24th March:
“Only 1% breaches are hacks. Human error is just less tweetworthy” pointing to this Information Security Buzz article by Michael Brophy, Founder and CEO of Certification Europe
It turned out those who shouted just thought the wording should change, and I found version 2.0 harder to argue with:
“99% of breaches are made possible by human error, willful or ignorant bypassing of controls and individuals who have been induced (willingly or otherwise) to share confidential information with criminals”
Note how the same Checkpoint survey quoted at the start gives more than a nod to this:
“Survey respondents highlighted a number of factors contributing to malware attacks being more successful, including: more zero-day exploits that weren’t detected by anti-virus solutions (15%); a lack of useful intelligence about new threats (14%); and smarter social engineering tactics by malware authors that tricked users (12%)“
You don’t have to look far to find a whole raft of reasons to keep the human element at the centre of your threat and risk picture.
If you’re not in a position to use Threat Intelligence how do you get there?
You can do worse than leverage some existing tools. For example, CVSS (the Common Vulnerability Scoring System).
When looking at how to integrate threat intelligence into corporate risk management equations, something similar was suggested by Srinivas Kumar:
“As the industry dives further into leveraging threat intelligence to make risk-based decisions, Kumar believes there may even be calls for more standardized scoring, similar to what NIST and MITRE do with vulnerabilities.
In the same way, NIST or some entity has to expand beyond what they do today with vulnerabilities out to attacks”
Yes, CVSS is focused on vulnerabilities, not threats, but we’ve already established that threat reports are pretty pointless in isolation. NIST’s severity scoring helpfully highlights the part YOU need to play in making threat intelligence usable. Your inputs into that journey from raw threat data to realistically articulated risks.
Metrics are broken down into three groups (Figure 1 from FIRST’s detailed description of CVSS 2.0): Base, Temporal and Environmental. The first two should, and usually do, come from providers of TI and vulnerability alerts. The last is all yours to thrash out.
These metric groups are described as follows:
– Base: represents the intrinsic and fundamental characteristics of a vulnerability that are constant over time and user environments. Base metrics are discussed in Section 2.1.
– Temporal: represents the characteristics of a vulnerability that change over time but not among user environments. Temporal metrics are discussed in Section 2.2.
– Environmental: represents the characteristics of a vulnerability that are relevant and unique to a particular user’s environment. Environmental metrics are discussed in Section 2.3.
I’m not going into detailed definitions here, but I thoroughly recommend a review of Section 2.3 of the guide. It’s a reasonable summary of the information your business needs to know, or be able to produce. Don’t get hung up on the calculations and ratings. It’s about availability of this kind of data.
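For a feel of how the Environmental metrics move the needle, here is a compressed sketch of the CVSS v2 arithmetic from the FIRST guide. The metric weightings are the standard v2 lookup values; the example vector and the environmental choices are hypothetical:

```python
# Compressed CVSS v2 equations (FIRST guide), showing how Environmental
# metrics - the part only your business can supply - reshape a
# vendor-published Base score. Temporal metrics are left at "Not Defined".

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

def cvss2_environmental(av, ac, au, c, i, a, cr, ir, ar, cdp, td):
    # AdjustedImpact re-weights C/I/A by YOUR confidentiality, integrity
    # and availability requirements (Section 2.3 of the guide)
    adj_impact = min(10, 10.41 * (1 - (1 - c * cr) * (1 - i * ir) * (1 - a * ar)))
    exploitability = 20 * av * ac * au
    f = 0.0 if adj_impact == 0 else 1.176
    adj_base = round((0.6 * adj_impact + 0.4 * exploitability - 1.5) * f, 1)
    adj_temporal = adj_base  # temporal metrics Not Defined => x1.0
    return round((adj_temporal + (10 - adj_temporal) * cdp) * td, 1)

# AV:N/AC:L/Au:N/C:P/I:P/A:P - a common remotely exploitable profile
base = cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.275, i=0.275, a=0.275)  # 7.5

# Same flaw, but confidentiality matters a lot here (CR:High), the asset is
# widely deployed (TD:High) and collateral damage potential is Low
env = cvss2_environmental(1.0, 0.71, 0.704, 0.275, 0.275, 0.275,
                          cr=1.51, ir=1.0, ar=0.5, cdp=0.1, td=1.0)
```

The point is not the decimals: it is that the same vendor-scored vulnerability lands at a different severity once your environmental inputs are applied, and nobody but you can supply those inputs.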
If you’re not there yet, perhaps spend some time realistically mapping your path to get there. Then revisit the shiny offerings in this rapidly growing market with your eyes that bit wider open.
Sarah Clarke | @S_Clarke22
Sarah Clarke has 13 years’ experience in IT and information security and currently manages a supplier security assurance function for a FTSE 100 insurer. She worked from the IT helpdesk floor up and managed networks before specialising in security.
Her blog www.infospectives.me was nominated in the 2014 European Security Blogger Awards as Best Personal Security Blog.
She also contributes to The Analogies Project (www.theanalogiesproject.org), an initiative using real-life context to demystify InfoSec and improve the effectiveness of security education and awareness efforts.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.