Don’t Be A Network Squirrel: Five Reasons Network Blindness Is Putting Networks At Risk

By ISBuzz Team
Writer, Information Security Buzz | Nov 07, 2017 09:15 am PST

NetBrain Product Strategist Jason Baudreau discusses the importance of network visibility and what networks risk without it.

As the old saying goes, even a blind squirrel finds a nut once in a while. Network engineers, however, certainly shouldn’t employ blindness as a long-term strategy. As organizations grow more dependent on their networks, many are failing to prepare for the risks that come with managing an enterprise operation.

Visibility into infrastructure, traffic flow, outages and more is critical to effectively manage, troubleshoot and mitigate threats. Right now, the complexity of networks is rapidly growing with trends like software-defined networking (SDN), yet network teams are still using the same methods they’ve been using for years, leaving them with information gaps that can be costly.

Let’s look at five areas where many organizations are flying blind and the risks that come with that.

  1. It all starts with documentation

The complexity of most enterprise networks makes documenting the network a significant challenge, but it’s also the most critical step to effective network management. With traditional tools like Microsoft Visio, engineers are forced to build diagrams box by box, piecing together each device, how it’s connected and how traffic flows. For complex enterprise networks, manual documentation is no longer realistic. It can take dozens of hours, if not longer, to create an effective network diagram, which can be rendered useless almost immediately by a single configuration change. Even organizations that are extremely thorough about documenting their network are at risk of human error, and the resulting information only gives limited insight into configuration data like hostnames and IP addresses.

This leaves networks susceptible when it comes to managing and troubleshooting. Without instant visibility, it becomes difficult to proactively harden at-risk devices or identify the source of a problem.
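The alternative to hand-drawn diagrams is generating the topology from data the devices already report. The sketch below is a minimal, hypothetical illustration of that idea: it assumes neighbor tables (the kind of data parsed from CDP/LLDP output) have already been collected, and builds an adjacency map from them. The device names and links are invented for the example.

```python
# Hypothetical sketch: derive a topology map from collected neighbor
# data instead of hand-drawing it. Device names are illustrative.

def build_topology(neighbor_tables):
    """Turn per-device neighbor tables (e.g. parsed from CDP/LLDP
    output) into an adjacency map of the network."""
    topology = {}
    for device, neighbors in neighbor_tables.items():
        for neighbor, link in neighbors:
            # Record the link in both directions so the map is complete
            # even for devices that were never polled directly.
            topology.setdefault(device, set()).add((neighbor, link))
            topology.setdefault(neighbor, set()).add((device, link))
    return topology

# Illustrative neighbor data for a small network
neighbor_tables = {
    "core-sw1": [("dist-sw1", "Gi0/1"), ("dist-sw2", "Gi0/2")],
    "dist-sw1": [("access-sw1", "Gi0/3")],
}

topo = build_topology(neighbor_tables)
print(sorted(topo))  # all four devices appear, including access-sw1
```

Because the map is regenerated from live data, a configuration change updates the diagram instead of invalidating it.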

  2. Managing change

When networks experience problems, it’s usually the result of something that has changed within the network. Is it a new configuration that went wrong or a device that has been updated improperly? For most network teams, identifying the change that caused the issue is problematic. With traditional documentation processes, engineers would need to go through the network one device at a time to find it. Modern network teams are automating this process, so they can quickly see what has changed recently within the network and identify the problem.

To gain greater visibility and reduce mean time to repair in the event of an outage, automation is critical. Some organizations can also test configuration changes and see how they will impact the network before actually deploying them. This is a prime example of how network teams can modernize visibility techniques to reduce risk.
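The core of automated change detection is simple: keep a baseline snapshot of each device’s configuration and diff it against the current one, surfacing only what changed rather than walking the network device by device. The following is a minimal sketch of that idea using Python’s standard `difflib`; the configuration lines are invented for illustration.

```python
# Hypothetical sketch of automated change detection: diff a stored
# configuration snapshot against the current one.
import difflib

def config_diff(baseline, current):
    """Return only the added/removed lines between two config snapshots."""
    diff = difflib.unified_diff(
        baseline.splitlines(), current.splitlines(),
        fromfile="baseline", tofile="current", lineterm="",
    )
    # Keep the substantive +/- lines, dropping the diff headers
    return [
        line for line in diff
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]

baseline = "hostname core-sw1\nntp server 10.0.0.1\n"
current = "hostname core-sw1\nntp server 10.0.0.2\n"
changes = config_diff(baseline, current)
print(changes)  # ['-ntp server 10.0.0.1', '+ntp server 10.0.0.2']
```

Run on a schedule against every device, a diff like this turns “what changed recently?” from a device-by-device hunt into a single report.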

  3. Overreliance on the command line interface

Old faithful! The command line interface (CLI) has been perhaps the most relied-upon tool for network engineers over the past decade. For network experts, the ability to examine configurations, topology or performance data with the right commands is critical to effective network management. Yet even the CLI is holding network teams back when it comes to full visibility: it only allows users to analyze a single device at a time through a single command.

While the CLI will continue to be important for troubleshooting, diagnosing problems through it can be tedious and time-consuming. The CLI also has a steep learning curve, as each vendor and model has its own command structure, further complicating the process.
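The per-vendor learning curve is concrete: even asking a device the same question requires different syntax depending on who made it. A tiny, hypothetical lookup table makes the point (the command strings are real vendor syntax; the table and function are illustrative of how automation tools normalize this away):

```python
# Hypothetical sketch: the same question -- "show me the running
# config" -- needs different commands per vendor. Automation layers
# hide this table from the engineer.
SHOW_CONFIG = {
    "cisco_ios": "show running-config",
    "juniper_junos": "show configuration",
    "arista_eos": "show running-config",
}

def command_for(vendor):
    """Look up the right CLI command for a vendor platform."""
    try:
        return SHOW_CONFIG[vendor]
    except KeyError:
        raise ValueError(f"no known command for vendor {vendor!r}")

print(command_for("juniper_junos"))  # show configuration
```

Multiply this table by every diagnostic task and every platform in the inventory, and the cost of CLI-only troubleshooting becomes clear.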

  4. Information overload

While many will point to a lack of information as the core reason for limited network visibility, the opposite can also be true. Too much information leaves network teams overwhelmed and searching for a needle in a haystack of data. The purpose of an IDS/IPS is to alert network teams to suspicious activity and provide context into what area of the network may be at risk. The problem is that these systems often trigger false alarms, making it difficult to separate real threats from false positives.

To avoid this issue, organizations can automate the process and trigger runbooks to provide more detail on potential threats. This helps network teams sort out the real threats and address them promptly.
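One way to picture that triage step is a simple scoring pass over incoming alerts, where only the ones worth an engineer’s time trigger an automated runbook. The sketch below is purely illustrative: the severity threshold, alert fields and the runbook stub are assumptions, not any particular product’s behavior.

```python
# Hypothetical sketch of alert triage: filter IDS/IPS alerts and
# trigger a diagnostic runbook only for the actionable ones.

def triage(alerts, min_severity=7):
    """Split alerts into actionable ones and likely noise."""
    actionable, noise = [], []
    for alert in alerts:
        # Low-severity hits and known scanners are treated as noise
        if alert["severity"] >= min_severity and not alert.get("known_scanner"):
            actionable.append(alert)
        else:
            noise.append(alert)
    return actionable, noise

def run_diagnostics(alert):
    # Stand-in for an automated runbook: gather context for the engineer
    return f"collecting flows and configs for {alert['source_ip']}"

alerts = [
    {"source_ip": "203.0.113.9", "severity": 9},
    {"source_ip": "198.51.100.4", "severity": 3, "known_scanner": True},
]
actionable, noise = triage(alerts)
reports = [run_diagnostics(a) for a in actionable]
```

The point isn’t the specific rules, which any real deployment would tune, but that the filtering and the first diagnostic steps happen automatically, before a human looks at the queue.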

  5. The prevalence of “tribal leaders”

Visibility requires that information be shared beyond just a few network experts. If knowledge is stored only in the heads of a few “tribal leaders,” then an organization is at risk when an issue arises and the leader is not around to solve it. Knowledge hoarding also limits a team’s ability to handle a range of issues and keep the entire network secure.

While these leaders may have vast knowledge of the network and its infrastructure, that knowledge becomes far more valuable when it is shared. Using a centralized platform to share and organize knowledge ensures that every engineer has the tools to effectively manage the network.

When IT teams lack visibility into the network, it’s nearly impossible to effectively mitigate potential threats. Visibility extends beyond just documentation, as the CLI, IDS/IPS monitoring tools and internal collaboration problems all create added visibility challenges for network teams. Organizations should be automating documentation and basic troubleshooting processes to gain instant visibility and have the tools at hand to mitigate threats as quickly as possible.

