Approximately 2.38 million customers worldwide use Amazon Web Services (AWS) to host and power their cloud-based business assets, per a recent market report. If you’re reading this, you’re probably one of them. With over half (50.1%) of the market share among the top ten cloud providers, AWS bears a huge responsibility for the safety of its customers. As part of its Shared Responsibility Model, it offers multiple high-powered security services to do the job, and AWS GuardDuty is one of the most commonly used.
However, no tool is perfect. The other half of the Shared Responsibility Model – the part the customer is responsible for – demands additional work wherever gaps remain in security outcomes. Learning how to use Large Language Models (LLMs) to refine GuardDuty output may be the key to bridging those gaps.
This blog will explore best practices for configuring AWS GuardDuty to maximize detection capabilities and reduce false positives, helping security teams efficiently identify genuine threats.
Experiment: An LLM Q&A with AWS GuardDuty
The cloud security problem is two-fold: tools generate too much data, and teams don’t have the talent pool to keep up in the cloud. The ongoing cyber talent crisis leaves the industry short roughly 4 million skilled workers, according to the World Economic Forum, and cloud security – still a relatively young discipline – is often where that skills gap shows most. Combined with high-powered, AI-based cloud security tools, this makes a perfect storm of too much data and not enough comprehension.
Luckily, AI can help – if leveraged skillfully. In an experiment using an LLM (GPT-4) and GuardDuty, security company Prophet Security demonstrated how the right prompts could get the most out of an otherwise overwhelming GuardDuty data dump. Their investigation yielded several telling results:
- False positives: GuardDuty, if left to its own devices, is prone to flagging non-malicious anomalies and creating needless alerts, like mistakenly flagging a user’s first-time visit to the AWS Security Hub.
- General questions, complex answers: Asking single questions of the LLM in relation to GuardDuty findings may often result in answers lacking specificity, optimization, and clarity. In other words, it may do little initial good at all.
- Refine with specific follow-ups: To get the most out of complex, out-of-the-box AWS GuardDuty alerts, you need to ask follow-up questions rather than just request “a foolproof and actionable plan off the jump.” The more you refine your query with additional, targeted questions, the simpler and more actionable the answers become, ultimately putting them within striking distance of junior analysts and others who may be tasked with ensuring cloud outcomes.
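The refinement pattern above can be sketched in code. The function below turns a raw GuardDuty finding into an initial question plus two targeted follow-ups; the function name, the example finding, and the prompt wording are illustrative assumptions, and actually sending the prompts to GPT-4 (or any other model) is left to whatever LLM client you use.

```python
def build_prompts(finding: dict) -> list[str]:
    """Turn one GuardDuty finding into an initial LLM question plus
    specific follow-ups, rather than asking for a full plan up front.
    (Illustrative sketch; field names follow GuardDuty's finding schema.)"""
    summary = (f"GuardDuty finding {finding['Type']} "
               f"(severity {finding['Severity']}) on {finding['Resource']}")
    return [
        # 1. General question first -- expect a general answer.
        f"{summary}. What does this finding mean?",
        # 2. Follow-up: pressure-test for a false positive.
        f"{summary}. Is this likely a false positive? What signals would confirm it?",
        # 3. Follow-up: ask for steps a junior analyst can actually run.
        f"{summary}. List concrete next steps a junior analyst can run, in order.",
    ]

# Hypothetical finding, trimmed to the fields the prompts use.
finding = {"Type": "Recon:IAMUser/MaliciousIPCaller", "Severity": 5.0,
           "Resource": "IAM user 'deploy-bot'"}
prompts = build_prompts(finding)
```

Each prompt in the list would be sent in turn, with the model’s previous answer kept in the conversation, mirroring the iterative Q&A Prophet Security describes.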
AWS GuardDuty Best Practices
In addition to leveraging AI-based technology to “translate” difficult GuardDuty findings, there are some other routes you can take to get the most out of your AWS cloud protection tool. This entails utilizing the full functionality of the GuardDuty dashboard to really “make it sing.” Those GuardDuty best practices include:
- Use the Summary tab: The Summary dashboard gives you a visual overview of the last 10,000 findings in a given AWS region. Customize your view with six widgets, three of which support filtering down, and view results for the past 2, 7, or 30 days.
- Get advanced filtering in the Findings tab: As you progress, you’re going to want to drill down for the purpose of investigations. Use the advanced filtering technique in the Findings tab for this, which opens access to over 80 different attributes you can use in your search. You can filter for high-severity findings or instances of unwanted billing charges (like Bitcoin mining). Mix and match your criteria for the most specific find. For example: Severity:High, Finding type:CryptoCurrency:EC2/BitcoinTool.B!DNS.
- Cut out potential noise: You can implement a suppression rule to give you the best possible chance of getting “all lean meat, no fat.” Using this technique, you can automatically filter out (archive) alerts that meet certain criteria based on your expertise. For example, you can exclude findings generated by an authorized vulnerability assessment application, third-party or otherwise, by suppressing the finding type Recon:EC2/Portscan.
- Get notified when high-priority items arise: This one is a no-brainer. Set a notification within GuardDuty to automatically alert you whenever a high-priority finding comes up. These findings are dynamic, so if something more important occurs for the same security issue, the alert will be updated to reflect the most recent event.
- Automate remediation for common problems: This response capability can really help take a load off your security team when working with the overwhelming volume of potential threats in the cloud. GuardDuty commonly surfaces misconfigurations (intentional or unintentional) resulting in S3- and EC2-related issues. Remediation playbooks can be set and triggered by offending actions, significantly cutting down on SOC response demands with automated, playbook-style event workflows.
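The filtering and suppression bullets above map directly onto the GuardDuty API’s `FindingCriteria` structure. The sketch below builds the criteria for the example console filter (Severity:High, Finding type:CryptoCurrency:EC2/BitcoinTool.B!DNS) and for a port-scan suppression rule; the filter name and instance ID are placeholder assumptions, and the boto3 calls are shown in comments rather than executed.

```python
# Criteria mirroring the console filter: high-severity (7.0+) findings of
# the Bitcoin-mining DNS finding type.
high_sev_crypto = {
    "Criterion": {
        "severity": {"GreaterThanOrEqual": 7},
        "type": {"Equals": ["CryptoCurrency:EC2/BitcoinTool.B!DNS"]},
    }
}

# Suppression criteria: auto-archive Recon:EC2/Portscan findings from a
# known, authorized scanner instance (instance ID is a placeholder).
suppress_portscan = {
    "Criterion": {
        "type": {"Equals": ["Recon:EC2/Portscan"]},
        "resource.instanceDetails.instanceId": {"Equals": ["i-0123456789abcdef0"]},
    }
}

# With boto3 (not executed here):
# gd = boto3.client("guardduty")
# ids = gd.list_findings(DetectorId=detector_id,
#                        FindingCriteria=high_sev_crypto)
# gd.create_filter(DetectorId=detector_id, Name="suppress-authorized-scanner",
#                  Action="ARCHIVE", FindingCriteria=suppress_portscan)
```

Setting `Action="ARCHIVE"` on the filter is what turns plain filter criteria into a suppression rule: matching findings are archived automatically instead of raising alerts.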
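For the notification and automation bullets, GuardDuty publishes findings to Amazon EventBridge, so a single event pattern can drive both an alert (via SNS) and a remediation playbook (via Lambda). Below is a minimal pattern matching findings at severity 7.0 and above; the rule name is an assumption, and the boto3 wiring is shown in comments only.

```python
import json

# EventBridge event pattern matching GuardDuty findings in the
# high-severity band (7.0+), using EventBridge numeric content filtering.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 7]}]},
}

# With boto3 (not executed here), attach the pattern to a rule whose
# target is an SNS topic for notification or a Lambda playbook for
# automated remediation:
# events = boto3.client("events")
# events.put_rule(Name="guardduty-high-severity",
#                 EventPattern=json.dumps(event_pattern))
# events.put_targets(Rule="guardduty-high-severity", Targets=[...])

serialized = json.dumps(event_pattern)
```

Because GuardDuty updates a finding in place when related activity recurs, the same rule keeps firing on the latest state of the finding rather than creating duplicates.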
GuardDuty is a powerful and capable solution, but it sits on a high shelf. Using an LLM and these best practices as a ladder can help you harness its capabilities, no matter how mature your current cloud security expertise may be.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.