Israel, Singapore, India, China and the Czech Republic are among the countries already using official contact tracing apps in an attempt to halt the spread of COVID-19.
This week the UK began testing its NHS COVID-19 app, which has been in development since early March.
However, much speculation surrounds the development of such apps, bringing the privacy-versus-protection debate to the forefront of public consciousness.
The COVID app situation is a classic case of balancing risk against benefit, which is exactly what CISOs do 24 hours a day. The infection tracing methods published by Google and Apple appear reasonable and well grounded, with sound cryptographic methods to de-risk data: rolling, cryptographically 'tokenized' Bluetooth pseudorandom identifiers are broadcast in place of actual personal data, and those identifiers change constantly on a time basis.
The architecture is also tuned to COVID-19's characteristics, for example the 14-day infection symptom period. It's a great example of building privacy into a design, which is a core tenet of modern privacy regulation such as the CCPA. The only time any real data access occurs is on a match with an infected person, and the protocol recommends additional de-identification and sanitization of the limited data set used to initiate contact over Bluetooth – a very limited risk. The infection-risk matching itself works purely on de-identified data. But not all apps appear to follow this model, and it's not clear what data is really collected in every case.
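To make the rolling-identifier idea concrete, here is a simplified sketch in Python. It is loosely modelled on the published Apple/Google design but is not the actual specification – the key size, rotation interval, derivation function and helper names are all illustrative assumptions. The point it demonstrates is that broadcasts rotate constantly and that exposure matching can run entirely on de-identified data.

```python
import hmac
import hashlib
import os

# Hypothetical rotation interval: identifiers change roughly every 10 minutes.
IDENTIFIER_LIFETIME_SECONDS = 10 * 60

def daily_key() -> bytes:
    """A fresh random key generated on-device each day; never tied to identity."""
    return os.urandom(16)

def rolling_identifier(key: bytes, timestamp: int) -> bytes:
    """Derive a short-lived pseudorandom identifier from the daily key and the
    current time interval, so the Bluetooth broadcast changes constantly."""
    interval = timestamp // IDENTIFIER_LIFETIME_SECONDS
    mac = hmac.new(key, interval.to_bytes(8, "big"), hashlib.sha256)
    return mac.digest()[:16]

def exposure_match(observed: set, infected_keys: list) -> bool:
    """Matching works purely on de-identified data: re-derive identifiers from
    keys published by infected users and intersect them with the identifiers
    this device overheard over Bluetooth."""
    for key, day_start in infected_keys:
        for i in range(24 * 6):  # every 10-minute interval in one day
            ts = day_start + i * IDENTIFIER_LIFETIME_SECONDS
            if rolling_identifier(key, ts) in observed:
                return True
    return False
```

Nothing in this flow requires a name, phone number or location: the server only ever sees random keys volunteered by people who test positive, and all matching happens locally on the device.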
The Australian government's COVID-19 app, downloaded by 3 million people so far, collects more personal data on enrolment and shares it with 'healthcare teams' on infection detection – and there is currently no regulation ensuring data privacy in Australia, so what happens if there is a leak?
In the US, apps that only use the Apple and Google model are likely to be quite benign, but the challenge now will be ensuring that all applications downloaded are genuine, use only the provided model, don't collect more than is needed, and that rogue apps linked to bad actors with malicious data collection intent don't appear in the ecosystem. Consumers won't always know, and a look in the app store right now turns up several 'tracing' apps from highly variable sources rushed to market, so how these stack up against US privacy laws isn't totally clear.