Is Defence in Depth still relevant now that the concept of Zero Trust has taken hold? That was the question a colleague asked me recently on a webinar, and it got me wondering whether one has replaced the other and whether the two strategies are mutually exclusive. It’s a complex question because there are pluses and minuses to both approaches.
Defence in Depth (DiD) has been around for decades and there are those who argue the strategy has failed. They point to the bloated cyber security stack of up to 70 solutions now found in the average enterprise, and to the seemingly unchecked onslaught of attacks over those years. Add to that the evaporation of the network perimeter in a hybrid workforce and the increased consumption of cloud services, and it’s easy to see why some question the relevance of this framework.
DiD works by using a layered approach to security which effectively buys response time. The theory is that even if a threat actor gets past the initial defence, another security control will likely identify, slow or mitigate the attack, effectively plugging the gap. It can accommodate the needs of the organisation, as more layers can be applied to areas deemed high risk, but it makes two big assumptions: first, that you have ownership and control over the network; and second, that an attack will originate externally, which means that all users within the network are trusted.
These two issues became all too apparent during the pandemic when we saw mass migration to the cloud to facilitate remote working. Now the users were outside the network, attempting to get in, but so too were the threat actors. Those legitimate users were afforded little protection so could easily be exploited and their credentials used to bypass security mechanisms. As a result, the attack surface had expanded overnight, and it was open season on network defences.
Beleaguered businesses looked at their security arsenals with fresh eyes. Suddenly the point solutions they had overlapped to improve defences seemed inadequate, because they couldn’t integrate with one another or provide the visibility needed within the cloud. There was a growing realisation of how resource intensive monitoring these systems can be, often resulting in alert fatigue and high staff turnover, causing many teams to seek either to scale back their cyber stack or to limit the number of vendors they use.
Suddenly Zero Trust, a term conceived back in 2010 (though it built on earlier initiatives such as “deperimeterisation”, championed by organisations such as the Jericho Forum), became the hero of the hour. It’s ideal for the modern hybrid environment as it can encapsulate entities, network or data objects, effectively protecting both remote users and cloud-based assets. The mantra of the Zero Trust approach is “never trust, always verify”: in a Zero Trust Architecture (ZTA), every access request is regarded as potentially hostile and so needs to be authenticated, authorised and continually validated.
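To make the “never trust, always verify” idea concrete, here is a minimal sketch of a policy decision point that denies by default and grants access only when every check passes. All names (`AccessRequest`, `is_allowed`, the `POLICY` table) are illustrative assumptions, not a real ZTA product’s API:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # e.g. MFA has succeeded at the identity provider
    role: str                  # role asserted by the identity provider
    resource: str              # object being requested
    device_compliant: bool     # posture check on the requesting device

# Per-resource allow-list: which roles may touch which resources.
POLICY = {
    "payroll-db": {"finance"},
    "source-repo": {"engineering"},
}

def is_allowed(req: AccessRequest) -> bool:
    """Deny by default; grant only when every check passes."""
    if not req.user_authenticated:
        return False
    if not req.device_compliant:
        return False
    return req.role in POLICY.get(req.resource, set())
```

In a real ZTA these checks run on every request, not once per session, which is what “continually validated” means in practice.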
One of the criticisms of the approach, however, is that it can cause too much friction, frustrating users. It’s ideal for pure-play cloud businesses that use SaaS but becomes more difficult to implement for those with legacy systems. This can be remedied through approaches such as Just In Time (JIT) protocols that provide the user with temporary access, usually via ephemeral certificates which are issued instantaneously and act as self-destructing security tokens. But there’s no getting away from the fact that for most organisations, moving to a zero trust approach will require significant planning.
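The JIT idea can be sketched in a few lines: a short-lived credential is minted on demand and becomes useless once its time-to-live expires. In production this would be an ephemeral certificate signed by an internal CA; here a random token with an expiry stands in for it, and all names are assumptions rather than a real API:

```python
import time
import secrets

# token -> expiry timestamp (in a real system this state lives in the issuer)
ISSUED = {}

def issue_token(ttl_seconds: int = 60) -> str:
    """Mint a fresh credential that self-destructs after ttl_seconds."""
    token = secrets.token_urlsafe(16)
    ISSUED[token] = time.time() + ttl_seconds
    return token

def is_valid(token: str) -> bool:
    """A token is valid only if it was issued and has not yet expired."""
    expiry = ISSUED.get(token)
    return expiry is not None and time.time() < expiry
```

Because nothing needs to be revoked, an attacker who steals a credential holds something that is already on a countdown to worthlessness.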
Recognising this, the National Institute of Standards and Technology (NIST) has just published its whitepaper, Planning for a Zero Trust Architecture: A Guide for Federal Administrators, which aims to provide Federal enterprise admins, system operators, and IT security officers with a blueprint for migrating to a zero-trust architecture using the NIST Risk Management Framework (RMF). Based on the NIST SP 800-207 ZTA roadmap, it’s just as applicable to other organisations, however, and emphasises the need for a phased approach which begins with identifying which tools are compatible with a ZTA, and the need to involve cybersecurity planners, management, administrators, and operations.
The consensus is that the adoption of ZTA can be gradual and this means that many organisations will continue to rely upon DiD. Which brings us back to our original question: Can both strategies co-exist? It could be argued that Zero Trust is in fact part of DiD, in that it governs access while DiD protects the data, through encryption and segregation, for example.
DiD has also seen vendors build out security solutions with features that can help implement Zero Trust. The Zero Trust approach embodies the concept of least privilege, with access typically limited by role, but what happens if a user, once granted access, begins to deviate from their usual working pattern or attempts to exfiltrate data? It’s here that a behaviour analysis solution can help, spotting and flagging the anomalous activity and enabling access to be terminated, effectively mitigating the insider threat problem.
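At its simplest, behaviour analysis means building a baseline of a user’s historical activity and flagging observations that sit far outside it. The sketch below uses data downloaded per day as the metric and a three-sigma threshold; both the metric and the threshold are illustrative assumptions, and commercial tools use far richer models:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], observation: float, sigmas: float = 3.0) -> bool:
    """Flag observations more than `sigmas` standard deviations from the
    user's historical mean (e.g. MB downloaded per day)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return observation != mu  # perfectly regular history: any change is odd
    return abs(observation - mu) > sigmas * sd
```

A user who normally downloads around 10 MB a day and suddenly pulls 50 MB would trip this check, at which point the access decision can be re-evaluated and the session terminated.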
I would also argue that policies, procedures, security awareness training and robust physical security measures would all form part of an effective DiD strategy to govern and drive resilient user behaviour and to protect assets such as mobile endpoints. I cannot foresee many organisations starting to neglect those areas just because they have implemented a ZTA solution.
There’s no doubt that we are moving to ZTA and it’s an approach that promises to better protect our distributed workforces and businesses. But, realistically, teams are going to want to amortise their existing investments so we need to look at how the transition can be made smoothly by utilising existing tools and in a way that doesn’t expose the flank of the enterprise. To do that, we’re going to need to consider ZTA as part of an effective DiD for some time to come.