2017 Data Security Predictions – Alfresco Software

By ISBuzz Team
Writer, Information Security Buzz | Dec 23, 2016 10:15 pm PST

CISOs will shift more investment toward granularly identifying information vs. perimeter measures: Depending on your business, digital information on average is doubling every three to nine months. The knee-jerk reaction to burgeoning data is to protect all that ‘stuff’: contain it behind hyper-secure firewalls, deploy DLP (data loss prevention/protection) technologies at the perimeter and key core switches, leverage deep packet inspection technologies at the perimeter, and lock down USB ports. These are all sound countermeasures, but they mitigate the problem rather than prevent it. In 2017 and beyond, you will see a more deliberate movement by CISOs toward first identifying exactly what it is they are securing, and assigning security levels to that content. This change in how data is governed will clarify which content is sensitive and which is not, making it easier to assess the extent of the damage if a breach occurs. This isn’t about locking down more data to make it unusable – rather, it’s about making the data usable with pervasive, invisible governance around it.
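As a rough illustration of identifying content first and then assigning security levels to it, here is a minimal Python sketch. The Sensitivity levels, the keyword RULES table and the classify helper are all hypothetical, invented for this example; production classifiers rely on patterns, metadata and context rather than bare keywords.

```python
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    """Hypothetical security levels assigned to identified content."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3


# Illustrative keyword rules only; a real classifier would use
# patterns, document metadata and context, not bare keywords.
RULES = {
    Sensitivity.RESTRICTED: ("ssn", "passport", "secret"),
    Sensitivity.CONFIDENTIAL: ("salary", "contract", "customer"),
    Sensitivity.INTERNAL: ("roadmap", "draft"),
}


@dataclass
class Document:
    name: str
    text: str
    label: Sensitivity = Sensitivity.PUBLIC


def classify(doc: Document) -> Document:
    """Assign the highest matching sensitivity level to a document."""
    lowered = doc.text.lower()
    # Check the most sensitive levels first so the strictest rule wins.
    for level in sorted(RULES, key=lambda lvl: lvl.value, reverse=True):
        if any(keyword in lowered for keyword in RULES[level]):
            doc.label = level
            break
    return doc


if __name__ == "__main__":
    doc = classify(Document("offer.txt", "Salary details for the new hire"))
    print(doc.name, "->", doc.label.name)  # offer.txt -> CONFIDENTIAL
```

Once every document carries a label like this, governance (encryption, access control, breach-impact assessment) can be applied per level rather than per firewall zone.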

Acceleration of ‘ditch digging’ redundant, obsolete and trivial (ROT) content: Studies have shown that up to 70 percent of data in an enterprise is ROT. As enterprise content ages, its value to the business declines while the risk it poses to the organisation rises. For example, in Edward Snowden’s case, the documentation he uncovered at work was largely made up of archives that weren’t particularly relevant to Booz Allen, but were extremely relevant and damaging to the US Government. The archives contained sensitive information, and Snowden’s employer clearly didn’t have the proper internal content controls, policies and procedures in place; that has been a loud and clear lesson for CISOs who previously didn’t invest significantly in content lifecycle management. All of the information being created can’t just linger indefinitely without posing future risks. You ultimately have to choose to delete some of it, particularly if it is no longer of use, as the retention sketch below illustrates. In the information management discipline, it is well established that as content ages, its value to the corporation decreases while its risk grows.
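As a rough sketch of the ‘delete what is no longer of use’ step, the following Python walks a directory tree and flags content that has outlived hypothetical retention thresholds. The ARCHIVE_AFTER_DAYS and DELETE_AFTER_DAYS values are assumptions for illustration only; real retention periods come from a records schedule and legal requirements, and nothing here actually deletes anything.

```python
import os
import time

# Hypothetical retention thresholds (in days); real policies vary by
# record type and jurisdiction and come from a formal records schedule.
ARCHIVE_AFTER_DAYS = 365
DELETE_AFTER_DAYS = 365 * 7


def age_in_days(path: str) -> float:
    """Days since the file was last modified."""
    return (time.time() - os.path.getmtime(path)) / 86400


def sweep(root: str) -> None:
    """Flag ageing content instead of letting it linger indefinitely.

    This only reports candidates; actual archival or deletion would
    need legal-hold checks and an approval workflow.
    """
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            age = age_in_days(path)
            if age > DELETE_AFTER_DAYS:
                print(f"DELETE candidate:  {path} ({age:.0f} days old)")
            elif age > ARCHIVE_AFTER_DAYS:
                print(f"ARCHIVE candidate: {path} ({age:.0f} days old)")


if __name__ == "__main__":
    sweep(".")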

The rise of “applied governance” to unstructured data: Earlier this year, more than 20,000 pages of top-secret Indian Navy data, including schematics of its Scorpene-class submarines, were leaked. It has been a huge setback for the Indian government, and an unfortunate case study in what happens when you lack controls over unstructured information, such as blueprints sitting in some legacy engineering software system. Now, replace the Indian Navy scenario with one involving the schematics for a nuclear power plant or a consumer IoT device, and the stakes of secure content curation rise even higher. If unstructured blueprints and files are being physically printed or copied, or digitally transferred, how will you even know that content now exists? One starting point is a simple content inventory, as the sketch below shows. Also, as more industries move towards digitisation, physical content will not simply disappear; there must be a way to keep a record of it, as it will still hold value. Tracking this ‘dark data’ – particularly in industrial environments – will be a top security priority in 2017.
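To make “how will you even know that content exists” concrete on the digital side, here is a minimal sketch, assuming a plain filesystem scan: it fingerprints files by content hash, so the same blueprint is recognised wherever it has been copied or renamed. The fingerprint and inventory helpers are invented for illustration; a real deployment would feed a proper audit or records-management system rather than print to the console.

```python
import hashlib
import os
from collections import defaultdict


def fingerprint(path: str) -> str:
    """Content hash, so identical copies match regardless of filename."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def inventory(root: str) -> dict[str, list[str]]:
    """Map each content hash to every path where that content lives."""
    seen = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                seen[fingerprint(path)].append(path)
            except OSError:
                continue  # unreadable file; skip rather than fail the scan
    return seen


if __name__ == "__main__":
    for digest, paths in inventory(".").items():
        if len(paths) > 1:  # the same content exists in multiple places
            print(digest[:12], "->", paths)
```

Repeated scans can be diffed to show where sensitive content has newly appeared; the physical side (printing, photocopying) still needs procedural controls that no script can supply.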
