The Lingering Effect Of Blind Spots In The Cloud

By Jesse Stockall, Snow Software | Aug 05, 2021 02:16 am PST

For many organisations, 2020 was the year of maintaining business continuity. Whatever their experience, many also learned a lot about resilience. But if last year was about keeping the lights on, 2021 must be about operationalising what is now our new normal.

Without a doubt, the pandemic has proven that work from anywhere is feasible. It could be said, though, that COVID-19 only accelerated what was inevitable. For the sake of improved productivity and efficiency, organisations were already diversifying their technology stacks by adopting SaaS models and migrating mission-critical business to the cloud (or multiple clouds), often in addition to maintaining valuable, on-premises legacy solutions.

Add to that increasingly complex technology mix a sudden shift to remote work, where users could access cloud services and freely download applications directly onto their home laptops, outside of their company’s network. IT now faces a mountain of technology sprawl, not to mention real shadow IT challenges.

And if that wasn’t enough, alongside a growing reliance on cloud instances come emerging security risks. Early in 2021, the SolarWinds breach became public knowledge, and the details are worrisome. The company’s Orion software is a popular network management system that monitors and manages the various components of an organisation’s network. A long list of organisations use Orion to sort through the full scope of their network, including multi-cloud services. Malicious code was inserted into the Orion development process and pushed out as a typical software update, thereby infecting any organisation that installed it.

This breach taught us two important lessons: first, failure to defend your technology supply chain can give attackers the one weak link they need to enter your network; and second, while complexity is inevitable in modern technology stacks, unnecessary complexity is risky.

The complexity conundrum 

Today’s IT teams are challenged to operationalise their changing technology mix and manage the risks that come with it, especially when it comes to cloud environments.

For example, do you know how many cloud environments your organisation uses today? What workloads are running on them, and who is using them? Do you have more licenses than you need? Are you re-harvesting unused subscriptions rather than simply buying more? These and other questions certainly have financial implications. Without visibility into what you have and how you use it, you’re likely overspending and underutilising. The lack of visibility and control over your cloud computing resources also creates a tangled web of complexity that presents a significant security risk and potential compliance failures.

Take application development as one example. Much of today’s work has shifted from a completely build-from-scratch model to one where you’re likely building while assembling a vast collection of open-source components and cloud services. This enables fast, easy development, but it also creates blind spots when those open-source projects receive updates and fixes that are never propagated to your product. That can increase supply chain risk, as was the case with the SolarWinds breach. If your developers aren’t properly sourcing open-source code, you are not only at risk of noncompliance fines or requirements to divulge source code, but also susceptible to security vulnerabilities.
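To make the visibility question concrete, here is a minimal sketch of the kind of inventory check that answers “what workloads are running and who owns them.” It assumes an AWS environment with the boto3 library installed and credentials already configured; the “Owner” tag is an illustrative convention, not something AWS requires, and a real inventory would span every provider you use, not just one.

```python
import boto3

# Discover all regions visible to this account, then list running instances in each.
ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    client = boto3.client("ec2", region_name=region)
    paginator = client.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                # Tags are where ownership usually lives; untagged workloads are the blind spots.
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                owner = tags.get("Owner", "UNTAGGED")
                print(f"{region}  {instance['InstanceId']}  {instance['InstanceType']}  owner={owner}")
```

Even a simple report like this tends to surface instances nobody remembers launching, which is exactly the overspend and risk the questions above are probing for.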

When considered on a larger scale, the complexity-driven security and compliance risk can be even more costly. If you’re a hybrid or multi-cloud customer who also relies on certain on-premises solutions, a co-location centre and public cloud services, your legacy security stack probably doesn’t support the mix as well as it should. And your security team may not have the skills to understand cloud containers, on-premises legacy systems, mobile devices and endpoints to any real depth. Your choice then becomes sub-standard security or far too many cooks in the kitchen, each with their own technology agenda, which only adds to your complexity.

The need for a better mousetrap

Further complicating the issue of security in the cloud today is the Shared Responsibility model. When you rely on a third-party cloud service like Amazon AWS, Microsoft Azure or Google Cloud, they provide only a baseline level of security for their platform. This is a too-often forgotten fact, and the tendency is to think ‘Amazon is protecting our data’ when, in reality, you have an interconnected spiderweb of applications and permissions, each impacting all other systems.

When something goes wrong, you can’t call up Amazon and ask them to fix it. Instead, who can help you address the problem? Your internal staff? The cloud provider? A software vendor? Your networking provider? Figuring out where the issue arose in this sense is more like a game of Clue where you’re searching for who did it and with what. It’s a vicious cycle that can result in no real progress.

The best defence against this complexity is to understand what your third-party cloud provider (or providers) are responsible for, as written in their Shared Responsibility policy, and communicate it to your IT and security teams. With that baseline in place, you can build out incident response plans from there.

The second step you can take in shoring up your security and compliance posture is automation. To continue the application development example, your team may have hundreds of source code repositories with dozens to hundreds of components each, all pieced together into a portfolio of products. It isn’t humanly possible to stay on top of everything being built with a manual process. Automation quickens the pace and drastically improves accuracy so details aren’t missed.
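As a rough illustration of that kind of automated check, the sketch below walks a directory of cloned repositories, reads any pinned Python requirements files, and flags components that have fallen behind their latest published release. The `repos` directory and the focus on `requirements.txt` are assumptions for the example; in practice a software composition analysis tool would cover every ecosystem you build with, but the principle is the same.

```python
import json
import pathlib
import re
import urllib.request

from packaging.version import parse  # pip install packaging

REPOS_ROOT = pathlib.Path("repos")  # hypothetical directory containing cloned repositories


def latest_pypi_version(package: str) -> str:
    """Look up the latest released version of a package via PyPI's JSON API."""
    with urllib.request.urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        return json.load(resp)["info"]["version"]


def pinned_requirements(repo: pathlib.Path):
    """Yield (package, version) pairs for dependencies pinned with == in requirements.txt files."""
    for req_file in repo.rglob("requirements.txt"):
        for line in req_file.read_text().splitlines():
            match = re.match(r"^\s*([A-Za-z0-9_.-]+)==([^\s;#]+)", line)
            if match:
                yield match.group(1), match.group(2)


for repo in sorted(p for p in REPOS_ROOT.iterdir() if p.is_dir()):
    for package, pinned in pinned_requirements(repo):
        latest = latest_pypi_version(package)
        if parse(pinned) < parse(latest):
            # Each hit is a component whose upstream fixes have not reached your product.
            print(f"{repo.name}: {package} {pinned} is behind the latest release {latest}")
```

Run on a schedule, a report like this turns the “updates that never reached your product” blind spot into a routine, reviewable list rather than a surprise.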

Again, looking at this problem on a larger scale, maintaining visibility over the complex menu of cloud services, applications, on-premises legacy systems, mobile endpoints and whatever else is mission-critical to your organisation is a complex task. But that visibility is essential to get a handle on your security and compliance risk, not to mention to perform the necessary due diligence on your IT budget.

Shine some light on the blind spots with visibility

Having a heterogeneous IT environment has its benefits – it allows you to choose best of breed tools, maximise your budget and build a resilient technology backbone. But one chink in the armour and everything is suddenly precarious. Sorting out how to fix the problem is no easy task. With visibility into your network, cloud services, product development and your users, you can make significant gains across your security and compliance risks, and your budget. Without it, you’re left floundering in the dark.
