Experts On Study Finding Red Teams OK To Push Ethical Limits, But Not On Themselves

By ISBuzz Team
Writer, Information Security Buzz | Feb 04, 2020 02:15 am PST

Newly released research examining the ethics of offensive security engagements finds that security professionals, such as red teamers and incident responders, are more likely to consider certain hacking activities ethically acceptable when conducted against other people than when those same activities are run against themselves: https://techcrunch.com/2020/02/02/red-team-ethical-limits/

Erich Kron
Erich Kron, Security Awareness Advocate
February 4, 2020 10:33 am

While red team engagements are certainly nothing new, they have had to evolve in complexity just as attacks from bad actors have. This has sparked concerns about how far is too far.

One consideration when determining the scope of the engagement is determining what security controls are being tested. For example, if you are simply testing access to an office within an organization, there is no need to actually remove anything to prove you could get in. You can simply leave a note or snap a picture to prove you were there. If you are, however, testing whether you can steal a prototype from a manufacturing plant, things may be different. Any engagements you are performing need to carefully consider this.
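The scoping decision described above can be made explicit before an engagement begins. The following is a minimal sketch of a rules-of-engagement check (the class, field names, and action labels are hypothetical illustrations, not part of any standard or the study discussed here):

```python
from dataclasses import dataclass, field

@dataclass
class EngagementScope:
    """Hypothetical rules of engagement for a red team exercise."""
    objective: str
    allowed_actions: set = field(default_factory=set)
    forbidden_actions: set = field(default_factory=set)

    def is_permitted(self, action: str) -> bool:
        # Anything not explicitly allowed is treated as out of scope.
        return action in self.allowed_actions and action not in self.forbidden_actions

# Testing physical access only: prove entry without removing property.
office_test = EngagementScope(
    objective="verify after-hours access to the office",
    allowed_actions={"leave_note", "take_photo"},
    forbidden_actions={"remove_property", "bribe_employee"},
)
```

Treating unlisted actions as forbidden (a default-deny posture) mirrors the point in the comment: if the goal is only to prove access, leaving a note or taking a photo is in scope, while removing anything is not.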

Employee morale must be considered if employees are going to be part of the engagement. Gaining access to an empty building after hours is very different from doing so during the day. Techniques such as phishing are passive (the employee accidentally clicks a link or opens a document), but allowing testers to bribe employees is another matter entirely: bribery is an active, and likely even criminal, behavior that the employee is then induced to perform.

In addition to the morale issue, there can be significant legal issues when employing red teams against organizations, especially when attempting to coerce employees into performing potentially criminal acts. For all but the highest-security organizations or agencies, this type of engagement should not occur. For those instances where it is allowed, employees must be made clearly aware that this level of testing may occur.

When an organization determines that it is going to employ the services of a red team for penetration testing, the employees should be made aware of the decision. This does not mean telling them when the test will occur, or how it will be done, but simply that it may happen and what that means to them. Employees should be told that the goal of the test is not to get them in trouble, but rather to find where bad actors may find a way past defenses, which will allow the organization to better defend against those attacks. The messaging should be very non-threatening to the employees and they should be encouraged to report anything unusual. If done correctly, the knowledge that this opportunity for testing is in the works can raise awareness and make the employees much more diligent, even if they are never the target of the test.

Roger Grimes
Roger Grimes, Data-driven Defence Evangelist
February 4, 2020 10:24 am

My biggest concern with red teams is how accurately they reflect the real risks and threats to the organization. Most don't do it very well at all. Most red teams are great at breaking into the organizations they are paid to break into, but often do so by using tools and techniques which do not accurately reflect how real-world hackers are most likely to get in. And if how the red team is able to compromise an asset has little to no association with how real-world hackers would, how much value is it?

One time I had a Fortune 10 CSO show me the 20 ways his red team broke into his servers. He slapped the report down in front of me asking, "Well, Roger, you think you are so smart at defending, which of these 20 things should I do first?" He asked because no one can do 20 things well at once. I took a look at the list and I didn't see any of the ways a normal hacker would likely break in. The report was full of very cool, advanced attacks that clearly showed the expertise of the red team. But I didn't see any that actually reflected the real world of hacking, so I told him that he didn't need to fix any of them, and instead he should be fixing the things that are most likely to be exploited rather than chasing red herrings generated by a bunch of hackers not rooted in reality.

