I have packet capture data for forensics. Isn’t that enough? No!
Of late, I have been briefed by a number of companies that provide full network packet capture capabilities. There seems to be real fervor around the topic, almost to the extent that all the data we previously had would seem worthless. I’m not sure what has caused this renewed attention to packet data, but it is not the “Holy Grail”, “End-All-Be-All”, or “Silver Bullet” of security and forensics.
First of all, packet data is great. It has a number of key benefits over traditional system, application, and network log information, as illustrated below:
– As any network or network-based application engineer worth his or her salt can tell you, it can be invaluable in troubleshooting network-based application problems. It helps isolate the root cause of infrastructure, connectivity, or application stack issues.
– Packet data can contain all of the details of a data exfiltration. Compromised data can be reconstructed to show responders and investigators exactly what left the building (see the sketch after this list).
– It can be used to proactively identify collaborators communicating before they execute a malicious activity.
– It can provide insights into malware communications with command and control servers.
– It can identify commands issued by remote hackers to understand what they did to infiltrate the environment.
– Packet capture data should generally be considered detection, not prevention. Yes, it is possible to catch packets with bad “stuff” coming in, but that is generally not the case. Usually, an endpoint is infected by malware delivered through email or USB, or a rogue user is operating at the keyboard, and the evidentiary packets are identified on their way out of the organization. This means that some form of prevention has already failed.
– It is as real time as any data can get.
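For a flavor of what that data reconstruction can look like, here is a minimal Python sketch using scapy; the library choice and the capture file name exfil.pcap are my own illustrative assumptions, not any particular vendor’s method:

```python
# A minimal sketch: reassemble raw TCP payloads per session (per direction)
# from a capture file so a responder can inspect what actually left the building.
from scapy.all import rdpcap, TCP, Raw

packets = rdpcap("exfil.pcap")                     # load the capture from disk
for name, session in packets.sessions().items():  # group packets into sessions
    payload = b"".join(
        bytes(p[Raw]) for p in session
        if p.haslayer(TCP) and p.haslayer(Raw)
    )
    if payload:
        # the reassembled byte stream can be written out or carved for files
        print(f"{name}: {len(payload)} payload bytes")
```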
The list goes on. (Ask any packet capture vendor.) The point is that the network is a valuable resource that has been highly underutilized for addressing many problems outside the traditional network troubleshooting realm.
Most conversations seem to stop here. That is a bit short-sighted. Though packet capture is great (I think I already mentioned that), it does have some limitations that need to be recognized for forensics.
– No matter how good the tools are, they can only see network traffic. More to the point, they can only see the traffic crossing the links they are actually connected to. At a minimum, sniffers are placed at the edge of the network to capture the data ingressing and egressing the organization. This is good but may be inadequate. If capture only takes place at the edge, you are missing a lot going on within the network. There are many advanced indicators you will not be privy to if you only capture at the edge.
– To provide forensic-level information and details, the capturing system needs to be application aware, meaning it recognizes how applications communicate over a network protocol so it can provide context around what the application is doing and determine whether the activity is uncharacteristic. It also needs to create metadata about the packets and flows at the time of capture for use in analysis and correlation. If the system does not create that metadata, searching is far more difficult and the analyst will have to do much of the data crunching by hand before rendering a decision (see the metadata sketch after this list).
– Just because you can capture it doesn’t mean you should keep it forever. Though most organizations have corporate use and privacy policies, capturing and storing personal data is a precarious practice. Many countries, and many state governments in the U.S., have strict privacy laws that full packet capture can violate, especially if access and use are not closely controlled. Personnel with access to the data must be highly trustworthy and should be vetted to ensure they maintain the highest standards possible, both to avoid misusing the data they can access and to avoid incurring legitimate lawsuits from monitored personnel.
– When considering packet capture for use in forensic investigations, make sure that you are getting full packet capture, not just summary data. Many packages can only deliver summary data, which means that not just parts of the data but entire conversations can be missed between samples. This has many side effects, including eliminating the ability to do data reconstruction.
– No matter how good your packet capture is or where it is placed, it cannot tell you everything that happened on the endpoint. Yes, it can reveal remotely issued commands that were not encrypted in transit, but when an attacker encrypts the transmission, it is totally blind. Similarly, where personnel are using webmail, most of those transmissions are now encrypted, so it is blind there as well. And if malware is introduced to a system via a USB stick, there is no network traffic to capture at all. That malware may be detected later as it reaches out to the Internet, but if no sniffers are placed internally, it could spread to every system in the environment before being detected. If it is designed to cause damage rather than to exfiltrate data, it could totally compromise the environment before being identified. There are a lot of “ifs” here, but they are all relevant. The scenario not yet discussed is the trusted insider who logs on to a system locally and removes data or makes changes to the system. That activity is never seen by a network sniffer.
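To make the metadata point concrete, here is a rough Python sketch of the kind of per-flow records such a system builds at capture time; scapy and the capture file name traffic.pcap are illustrative assumptions on my part:

```python
# A rough sketch: summarize a capture into per-flow metadata records
# (5-tuple, packet/byte counts, first/last timestamps) so analysts can
# search and correlate without crunching the raw capture by hand.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, UDP

flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "first": None, "last": None})

for pkt in rdpcap("traffic.pcap"):
    if not pkt.haslayer(IP):
        continue
    l4 = TCP if pkt.haslayer(TCP) else UDP if pkt.haslayer(UDP) else None
    if l4 is None:
        continue
    key = (pkt[IP].src, pkt[l4].sport, pkt[IP].dst, pkt[l4].dport, l4.__name__)
    rec = flows[key]
    rec["packets"] += 1
    rec["bytes"] += len(pkt)
    ts = float(pkt.time)
    rec["first"] = ts if rec["first"] is None else rec["first"]
    rec["last"] = ts

# the ten largest flows by volume, a typical first pivot in an investigation
for key, rec in sorted(flows.items(), key=lambda kv: -kv[1]["bytes"])[:10]:
    print(key, rec)
```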
Ultimately, all of these scenarios need more data for remediation. Technologies that monitor, control, and report on endpoints, such as file integrity monitors (FIM), host intrusion detection systems (HIDS), registry checkers, and application and process monitors, provide two things: preventative protections unavailable to the vast majority of network packet capture technologies, and crucial insight into how systems were modified in preparation for a compromise and what steps were taken after the compromise but before any network traffic was sent. This combination provides insight into the end systems’ configuration, changes, and user and application activity, pinpointing what was done by malware or by a malicious or negligent insider.
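As a toy illustration of the file integrity monitoring idea mentioned above (the watched paths and baseline file name are hypothetical, not any product’s behavior), a minimal version hashes a set of files and compares the results against a stored baseline:

```python
# A toy sketch of the FIM idea: hash watched files and report anything
# that changed since the last recorded baseline.
import hashlib, json, os

WATCHED = ["/etc/passwd", "/etc/hosts"]   # example paths only
BASELINE = "fim_baseline.json"            # hypothetical baseline store

def digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

current = {p: digest(p) for p in WATCHED if os.path.exists(p)}

if os.path.exists(BASELINE):
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, h in current.items():
        if baseline.get(path) != h:
            print("changed since baseline:", path)
else:
    print("no baseline yet; recording one")

with open(BASELINE, "w") as f:
    json.dump(current, f)
```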
It really comes down to each having its place and function. When deployed appropriately and together, they provide the invaluable information and the full picture required for true forensics and accelerated, surgical remediation. That makes recovery more of an outpatient visit for the affected systems and users, rather than the traditional approach of reimaging, which is usually more like an extended hospital stay, costing man-days of lost productivity and business impact per machine.
David Monahan, Research Director, Security and Risk Management, Enterprise Management Associates
Bio: David is a senior information security executive with nearly 20 years of experience. He has organized and managed both physical and information security programs, including security and network operations (SOCs and NOCs). He has diverse experience with audit and compliance, risk, and privacy. He provides both strategic and tactical leadership in developing, architecting, and deploying assurance controls, delivering process and policy documentation and training along with other educational and technical solutions, and driving their acceptance and adoption across the enterprise. He is highly adept at identifying procedural and operational gaps that increase, or fail to reduce, risk, and at determining the optimal way to address that risk through remediation, mitigation, acceptance, or transference. Through strong leadership, cooperation, and communication, and a creative, analytical, decisive, and results-oriented approach, he achieves, maintains, and measures security.
The opinions expressed in this post belong to the individual contributor and do not necessarily reflect the views of Information Security Buzz.