Industry Leaders Reaction On Google+ Shutting Down After Users’ Data Is Exposed

By ISBuzz Team, Writer, Information Security Buzz | Oct 09, 2018 06:00 am PST

News is breaking that Google is shutting down Google+ for consumers after an API bug exposed the private account details of up to 500,000 users.

The bug was located in the Google+ People API, which lets users grant third-party apps access to their profile data – much as users of other social platforms such as Facebook and Twitter sometimes do. In a blog post, the Google engineering team said the bug also allowed third-party apps to access profile data that users had marked as private, including sensitive details such as a person’s name, email address, occupation, gender, age, nickname, and birthday.
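To make the failure mode concrete, the sketch below is a purely illustrative Python example – the field names and function are assumptions, not Google’s actual API – showing how a field-level permission filter should restrict a third-party app to the fields a user has shared, and how a bug that bypasses the filter exposes private fields as well.

```python
# Hypothetical sketch (not Google's code) of a field-level permission check
# that a profile API performs before returning data to a third-party app.
PROFILE = {
    "name": "Jane Doe",
    "nickname": "jd",
    "email": "jane@example.com",   # marked private by the user
    "occupation": "engineer",      # marked private by the user
}

def profile_for_app(profile: dict, shared_fields: set) -> dict:
    """Return only the fields the user agreed to share with this app."""
    return {field: value for field, value in profile.items() if field in shared_fields}

# Correct behaviour: the app sees only what the user chose to share.
print(profile_for_app(PROFILE, shared_fields={"name", "nickname"}))

# The kind of bug described above is equivalent to skipping the filter,
# so the app receives private fields (email, occupation, ...) as well.
print(PROFILE)
```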

Below, security experts comment as part of our expert comments series on recent cyber security news.

Tyler Moffitt, Senior Threat Research Analyst at Webroot:

“Although it seems that Google has shut down an entire line of business due to this breach, from a GDPR perspective, the company appears to have gotten off lightly. Had this breach occurred just a few months later, Google could be subject to strict GDPR fines for not keeping user data safe.

“It’s important for consumers to realise that connecting apps in social media platforms only increases the amount of valuable information that could potentially be breached, as well as increases attack vectors that hackers can leverage.”

Bryan Becker, Application Security Researcher at WhiteHat Security:

“Even giants can have security flaws. I’m sure the offices of Facebook breathed a collective sigh of relief today, as they’re pushed out of the headlines by a new privacy breach at competitor Google.

Breaches like this illustrate the importance of continuous testing and active threat modeling, as well as the attention that APIs require for secure development and least-information/least-privilege principles. Companies like Google grow large and fast, and can struggle to keep every exposed endpoint under scrutiny. No one person can possibly be aware of every use or permutation of a single piece of code, API, or microservice.

For organizations that already have a large architecture, knowing where and how to start evaluating security can be a challenge in and of itself. In these cases, organizations can benefit from active threat modeling – essentially a mapping of all front-end services to any other services they talk to (both backend and frontend), often drawn as a flow-chart style diagram. With this mapping, admins can visualize which services are public-facing (that is, which need to be secured and tested), as well as what is at risk if those services are compromised. In some ways, this is the first step to taking ‘inventory’ in the infosec world.

Once the landscape is mapped out, automated testing can take a large portion of the strain by continuously scanning the various services – even after they become old. Of course, automated testing is not a be-all/end-all solution, but it does carry the benefit that old or unused-but-not-yet-retired services remain visible to the security team, even after most of the engineering team has stopped paying attention or has moved on to more interesting projects.”
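As a rough illustration of the mapping Becker describes, the following Python sketch (with invented service names) represents an architecture as a graph of which services talk to which, flags the public-facing entry points, and computes what would be reachable if one of them were compromised.

```python
# Illustrative threat-model mapping: services and the services they call.
# All names are invented; a real map would be generated from architecture
# documentation or service-mesh telemetry.
from collections import deque

service_map = {
    "web-frontend": ["profile-api", "auth-service"],
    "mobile-api":   ["profile-api", "auth-service"],
    "profile-api":  ["user-db"],
    "auth-service": ["user-db", "token-store"],
    "user-db":      [],
    "token-store":  [],
}
public_facing = {"web-frontend", "mobile-api"}  # secure and test these first

def blast_radius(start: str) -> set:
    """Everything reachable downstream if `start` is compromised."""
    seen, queue = set(), deque([start])
    while queue:
        service = queue.popleft()
        if service in seen:
            continue
        seen.add(service)
        queue.extend(service_map.get(service, []))
    return seen - {start}

for entry_point in sorted(public_facing):
    print(entry_point, "->", sorted(blast_radius(entry_point)))
```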

Pravin Kothari, CEO at CipherCloud:

“Google’s unofficial motto has long been ‘don’t be evil.’ Alphabet, the Google parent company, adapted this to ‘do the right thing.’

Google’s failure, if true, to disclose to users the discovery of a bug that gave outside developers access to private data is a recurring theme. We saw recently that Uber was fined for failing to disclose a breach and, instead of disclosing it, trying to sweep it under the rug.

It’s not surprising that companies that rely on user data are incentivized to avoid disclosing to the public that their data may have been compromised, given the impact on consumer trust. These are the reasons the government should, and will, cite in its inexorable march toward a unified national omnibus data privacy regulation.

Trust and the cloud do not go together until we take responsibility for locking down and securing our own data. Even if your cloud provider offers data protection and threat protection capabilities, it is not the provider’s data that is compromised and potentially used against them – it is the consumers’.

Enterprises leveraging cloud services need to apply additional security measures and protect data before it is delivered to a third-party cloud service – this is the only way to ensure the data stays protected.”

Colin Bastable, CEO at Lucy Security:

“Don’t be Evil mutated into Don’t be Caught. Google’s understandable desire to hide their embarrassment from regulators and users is the reason why states and the feds impose disclosure requirements – the knock-on effects of security breaches are immense.

The risk of such a security issue is shared by all of the Google users’ employers, banks, spouses, colleagues, etc. But I guess we can trust them when we are told there was no problem.”

Justin Jett, Director of Audit and Compliance at Plixer:

“Vulnerabilities in code are rampant, and IT professionals should take note that breaches are inevitable. While this is a case of a social media compromise, these types of breaches can occur on any platform that an organization uses. In the case of Google, the breach was the result of a flaw in the APIs. With this in mind, any company that uses a platform via APIs should be equally concerned, because this type of breach could happen on any API platform. This is why it is critical that organizations deploy a network traffic analytics platform that allows them to see traffic as it moves across their network. Additionally, organizations should vet their third-party vendors to ensure they too are using network traffic analytics, as third parties are another attack surface through which data breaches may occur.”

Tim Erlin, VP at Tripwire:

“It’s clear that data privacy for individuals is a growing concern for European governments. It’s worth noting that the alleged infraction is more than 5 years old in this case. With the legal landscape around privacy changing at a relatively rapid pace, we should expect the boundaries to be tested. While the court has blocked the case for now, I would expect that we’ll hear more about it, and other cases, as the dust settles on changing data privacy laws.

Other organizations should pay attention to how this case progresses as it may very well affect future actions against them.

Potential fines make great headlines, but they remain potential until they become real. It’s entirely possible that this ruling won’t carry the full £3.3 billion. For now, we’ll have to wait and see.”

Lillian Tsang, Senior Data Protection and Privacy Consultant at Falanx Group:

“Unfortunately – depending on which side you are on – the High Court has ruled that there was no dispute that Google wrongfully obtained personal data through the alleged use of cookies to circumvent the Apple Safari browser. However, in this instance it was concluded that it was impossible to calculate the number of iPhone users affected. A bad day today, as there is no redress for the data subjects affected. The claimants intend to appeal.

On a broader scope, what can an individual do where there is an alleged breach? They can complain directly to the organisation, whether in writing or by phone. Organisations like to do the right thing, so they will normally right their wrongs. An individual can seek advice from the Information Commissioner’s Office, which can investigate breach allegations further (note that the ICO does not award compensation), or an individual can issue proceedings directly against an organisation. Representative actions such as the Google case may put an organisation in the spotlight and create awareness of so-called wrongdoings, but whether justice is served is another matter. We shall wait for another day.”

Brian Vecci, Technical Evangelist at Varonis:

“This is a breach almost everyone can relate to, because everyone has a Google account, and between emails, calendars, documents and other files, lots of people keep a ton of really valuable data in that account – so unauthorised access could be really damaging. On top of that, when you get access to someone’s primary email – which for many people is Gmail – you’ve got the keys to their online life. Not only do you have their login, which is almost always their email address, you have the ability to reset any password, since password reset links are sent via email. A Gmail breach that goes undetected for long could be the most damaging breach imaginable for the greatest number of people. If Google knew about a potential breach and didn’t report it, that’s a huge red flag.

Unlike many other types of accounts, a Google account serves for many users as the login for other apps, in the same way a Facebook account does. Last week, Facebook said it had no evidence that linked apps were accessed. But if linked apps were accessed as the result of a breach, it could expose all kinds of personal user data. If you’re using Google or Facebook to log in to other apps, there is a whole web of information that could be exposed. Breaches like these are the reason why Google, Facebook and the other big tech players need to be regulated – they are a gateway to other applications for business and personal use.”

Ilia Kolochenko, CEO at High-Tech Bridge:

“Unlike the recent Facebook breach, this disclosure timeline is incomprehensibly long and will likely provoke a lot of questions from regulatory authorities. Inability to assess and quantify the users impacted does not exempt a company from disclosure. Although a security vulnerability per se does not automatically trigger a duty to disclose, in this case it seems that Google had reasonable doubts as to whether the flaw had been exploited. Further clarification from Google, and technical details of the incident, would certainly help restore confidence and trust among its users, who are currently left in the dark.

Technically speaking, this is one more colourful example that bug bounties are no silver bullet, even with the highest payouts from Google. Application security is a multi-layered process that requires continuous improvement and adaptation to new risks and threats. Such vulnerabilities usually require a considerable amount of effort to detect, especially if they (re)appear in a system that has already been tested. Continuous and incremental security monitoring is vital to keep modern web systems secure.”

Etienne Greeff, CTO and Co-founder at SecureData:  

“The news today that Google covered up a significant data breach, affecting up to 500,000 Google+ users, is unfortunately unsurprising. It’s a textbook example of the unintended consequences of regulation – when companies are forced to comply with tough new security rules, some hide breaches and hacks out of fear of being the one caught in the spotlight.

Google didn’t come clean on the compromise, because they were worried about regulatory consequences. While the tech giant went beyond its “legal requirement in determining whether to provide notice,” it appears that regulation like GDPR is not enough of a deterrent for companies to take the safety of customer data seriously. And so this type of event keeps on happening. While Google has since laid out what it intends to do about the breach in support of affected users, this doesn’t negate the fact that the breach – which happened in March – was ultimately covered up.

However, there are events happening far closer to home that aren’t getting the attention they deserve. We seem to pay more attention to the big tech breaches, while a business such as the supermarket chain Morrisons is facing a class action lawsuit for failing to protect deliberately leaked employee data. Last year the High Court ruled that the supermarket was, in its words, “vicariously liable”, as the internal auditor in question was acting in the course of his employment at the company when he leaked that information online. The implications of this type of action are huge – if businesses can be held accountable for the actions of rogue employees acting criminally, then we will have to treat all our employees as potential malicious threat actors, which is a huge thing to consider and could have momentous repercussions across the globe in all industries.

Until then, we will undoubtedly see even more of this ‘head-in-the-sand’ practice from larger tech firms, especially now that GDPR is in force. It ultimately gives hackers another way of monetising compromises – just as we saw in the case of Uber. This is a dangerous practice, and changes need to be made across the technology industry to make it a safer place for all. Currently, business seems to care far more about covering its own back than about the compromise of customer data. It’s a fine line to walk.”

Bobby S, Red Team at ThinkMarble:

“The fact that Google chose to shut Google+ down on discovering this breach is telling of how serious it is. It appears that a bug in the Google+ API had been allowing third-party app developers to access the data not just of users who had granted permission, but of their friends too. The vast majority of the social media platforms we use every day monetise our data by making it available to third parties via an API, but it is not acceptable that exploitative practices continue.

This has echoes of the Cambridge Analytica scandal that hit Facebook and has led to much greater scrutiny of Facebook’s policies and openness towards how data is accessed, used and shared. Similarly, Google must seriously consider how it continues to operate alongside third-party developers. This is especially relevant now that the GDPR is in force, affecting any company with users in the EU.

As a data controller, under Article 32 of the GDPR, Google now has greater obligations to ensure that its data processors (including third-party app developers) implement measures both to ensure the security of personal data and to gain the proper permissions from individual users to access it. In the wake of this new regulation, these same companies also have a legal requirement to take appropriate measures to secure and pseudonymise data before making it available through their services.”
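To illustrate the pseudonymisation idea mentioned above, here is a minimal Python sketch; the keyed-hash approach, field names and record layout are assumptions for the example rather than a GDPR compliance recipe.

```python
# Illustrative pseudonymisation before handing records to a third party:
# replace the direct identifier with a keyed, non-reversible token so the
# recipient can still correlate records without seeing the raw email address.
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # held by the data controller, never shared downstream

def pseudonymise(record: dict) -> dict:
    """Return a copy of the record with the email replaced by a keyed token."""
    token = hmac.new(SECRET_KEY, record["email"].encode(), hashlib.sha256).hexdigest()
    return {**record, "email": token[:16]}

print(pseudonymise({"email": "jane@example.com", "age_band": "25-34"}))
```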
