Facial-recognition company Clearview AI, which contracts with powerful law-enforcement agencies, has reported that an intruder stole its entire client list, according to a notification it sent to its customers. In the notification, Clearview AI disclosed that an intruder “gained unauthorized access” to its list of customers, to the number of user accounts those customers had set up, and to the number of searches those customers had conducted. The notification said the company’s servers were not breached and that there was “no compromise of Clearview’s systems or network.” The company also said it had fixed the vulnerability and that the intruder did not obtain any law-enforcement agencies’ search histories.
Scoop: Facial recognition company Clearview AI has told its law enforcement that an intruder stole its entire customer lists. Reported customers include FBI, DHS, and hundreds of other law enforcement agencies https://t.co/0LubBiXfDX
— Betsy Woodruff Swan (@woodruffbets) February 26, 2020
Commenting on the news and offering insight are the following cybersecurity experts:
It’s unclear what “unauthorized access” means, and I’m just guessing, but the general contours of what has been reported seem to indicate that an unauthorized person was able to perform limited commands or queries against the server or database without the expected authentication. It’s a very common type of attack caused by programming or configuration errors, and I wouldn’t be surprised if that’s what happened here. With that said, it’s good to hear that it did not involve a more significant compromise or the actual images themselves. Still, getting not only the customer list but some limited information, such as the number of searches that a particular customer performed, has value.
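The class of flaw described above — a query endpoint that accepts requests without verifying the caller — might look something like the following. This is a minimal, entirely hypothetical Python sketch; none of these names, keys, or data structures come from Clearview AI, and no claim is made about what their actual code looks like.

```python
# Hypothetical sketch of a missing-authentication flaw of the kind
# described above. All names and data are invented for illustration.

CUSTOMERS = {
    "acct-1001": {"org": "Example PD", "accounts": 12, "searches": 4500},
}

API_KEYS = {"s3cr3t-key": "acct-1001"}  # valid key -> account id

def get_customer_stats_vulnerable(account_id, api_key=None):
    # BUG: the api_key parameter is accepted but never checked,
    # so anyone who can reach the endpoint can enumerate accounts.
    return CUSTOMERS.get(account_id)

def get_customer_stats_fixed(account_id, api_key):
    # FIX: reject the request unless the key maps to this account.
    if API_KEYS.get(api_key) != account_id:
        raise PermissionError("unauthorized")
    return CUSTOMERS[account_id]
```

The vulnerable version hands back customer metadata (organization, account count, search count — exactly the kinds of fields reportedly exposed) to any caller who guesses or iterates account identifiers; the fixed version requires a key tied to the account.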
At a minimum, that customer list can be sold to competitors, who can then offer similar services at steep discounts to gain market share, along with knowing which potential customers seem to be the biggest users (and thus are likely to spend the most money). That’s a no-brainer. It might also be used in a sophisticated spearphishing campaign, where membership can be tied to specific phishing attempts that appear to have “insider information”. Any bits of private information and knowledge, such as account names, that can be used within a spearphishing email make that email seem more realistic and able to fool a higher percentage of people. If the error was what I think it was, hopefully Clearview AI is doing a massive review of all its services and code to ensure that other similar vulnerabilities aren’t lurking out there.
Every good company response should include not only closing down the current known vulnerability, but also checking whether there are others and, if there are, closing them as well. Then you try to figure out why the error occurred. Errors don’t happen just because a person made a coding mistake; the causes are always more systematic than that. Why did the programming language allow it? Why didn’t subsequent pre-production reviews catch the mistake? Were vulnerability testing and penetration testing reviews done? Are the programmers required to take secure-coding training? The most short-sighted incident response any affected organization can mount is to fix the one found vulnerability and call it a day. Mature companies look all the way up the food chain to see how the error was allowed to be introduced in the first place and why it wasn’t caught until after it went to production. The most successful defending companies use every found public vulnerability as a chance to learn, correct, and improve across the board.
This notification provides very little actionable information for anyone involved or just trying to avoid the same mistakes. A breach like this just adds fuel to the fire for Clearview’s critics.
We’re likely to hear more about the extent of this breach as investigations uncover more data, and history tells us that it’s likely to expand in scope.
In cybersecurity there are two types of attacks – opportunistic and targeted. With the type of data and client base that Clearview AI possesses, criminal organisations will view compromise of Clearview AI’s systems as a priority. While their attorney rightly states that data breaches are a fact of life in modern society, the nature of Clearview AI’s business makes this type of attack particularly problematic. Facial recognition systems have evolved to the point where they can rapidly identify an individual, but combining facial recognition data with data from other sources like social media enables a face to be placed in a context, which in turn can enable detailed user profiling – all without explicit consent from the person whose face is being tracked. There are obvious benefits when law enforcement uses such technologies for good, for example to identify missing persons, but with the good comes the bad.
I would encourage Clearview AI to provide a detailed report covering the timeline and nature of the attack. While it may well be that the attack method is patched, it is equally likely that the attack pattern is not unique and points to a class of attack others should be protecting against. Clearview AI presents a target for cyber criminals on many levels, and as is often the case, digital privacy laws lag technology innovation. This attack now presents an opportunity for Clearview AI to become a leader in digital privacy as it pursues its business model based on facial recognition technologies.
The timing of this attack is particularly interesting. The company has faced criticism following media reports alleging that Clearview AI’s database stores photos after users have deleted them from their social media accounts. It is telling that the company has already said that it has patched the flaw which led to the breach. Vulnerability management is a crucial part of any business – priority should be given to updating the technology that causes the most impact when compromised. That being said, Clearview’s assertion that there was “no compromise of Clearview’s systems or network” may suggest that a misconfiguration caused the confidential customer data to be publicly accessible. This should set alarm bells ringing with their customers, especially considering the scrutiny facial recognition technology is under, with debates ongoing concerning both its potential use and abuse.
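The misconfiguration scenario suggested above — data exposed not by a break-in but by an access-control entry that allows public reads — is the kind of thing a routine audit can catch. Below is a minimal illustrative sketch in Python; the bucket-and-ACL model here is invented for illustration and does not describe any real storage product or Clearview’s actual setup.

```python
# Hypothetical audit helper illustrating how a single misconfigured
# access-control entry can silently expose confidential data.
# The object/ACL model here is invented for illustration.

def publicly_readable(acl):
    """Return True if any grant lets anonymous principals ('*') read."""
    return any(
        grant["principal"] == "*" and "read" in grant["permissions"]
        for grant in acl
    )

def audit(objects):
    """Yield the names of stored objects whose ACLs allow public reads."""
    for name, acl in objects.items():
        if publicly_readable(acl):
            yield name

# A store where one object was accidentally made world-readable:
store = {
    "customer-list.csv": [{"principal": "*", "permissions": ["read"]}],
    "billing.db": [{"principal": "admins", "permissions": ["read", "write"]}],
}
```

Running `list(audit(store))` on the sample data flags only `customer-list.csv` — the one object a misconfiguration left open to anyone. Real cloud providers ship equivalent tooling for their own ACL formats; the point is that such checks should run continuously, not after a notification letter.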
In the 21st century, it seems we are getting increasingly used to data breaches, which now feel like a part of everyday life. While it may be unrealistic to think we could prevent all data breaches, we still need to focus on ensuring that the severity of breaches like this is minimised as far as possible.
Stored data should always be strongly encrypted so that it is useless to threat actors even if it is released or exposed in a hack. With Clearview AI’s breach in mind, we must remember that every data breach is serious – and if the data exposed this time had included faces, it would have taken the breach to the next level.
When companies are entrusted with extremely sensitive data, such as facial identities, they need to take that responsibility seriously and understand that they are a higher-profile risk. This should mean adding extra layers of protection to guard against attacks, even if attacks seem inevitable.