Like many of the people who watched Minnesota’s public safety commissioner, John Harrington, describe how the city had begun analysing the data of people arrested at a Black Lives Matter protest, and heard him say, “It’s contact tracing,” I initially worried that this was the first sign of what we’ve feared since automated contact-tracing apps became a standard part of government responses to the coronavirus pandemic: data gathered for public health purposes being used to trace protesters.
It turns out that in this case, Harrington’s use of the term “contact tracing” was merely a way of describing typical (though invasive) police investigative processes. Privacy campaigners can breathe a sigh of relief, but perhaps not for long. As Vox reported, the Minnesota Department of Health “doesn’t have a policy or law specifically forbidding law enforcement from accessing or using any information collected by coronavirus contact tracers or tools”. What prevented these fears from coming true on this occasion was not legal protection or safeguards, but simply a lack of existing infrastructure.
In other cities, states and countries, automated contact-tracing apps are already being implemented – meaning that data is already being collected under the guise of supporting public health outcomes.
But the intended purpose of gathering sensitive location data about people never mattered. Building widespread surveillance infrastructure via automated contact-tracing apps with few protections in place will always result in misuse and violations of basic human rights. Layering technical solutions on top of existing systemic injustices without proactively planning for potential misuse is naive at best, and seriously harmful at worst.
At their most basic, automated contact-tracing apps can collect incredibly sensitive data – tracking where people are via their phones and whose phones they are near, and associating that information with specific individuals. How that data is collected, where it goes, who it’s shared with, how long it’s stored, who has access to it and what it can be used for are all decisions made through the design of the app (or of the protocol on which the app is based). Some apps go further still from the get-go – India’s contact-tracing app, Aarogya Setu, for example, collects users’ names, phone numbers, gender, travel history and whether or not they smoke.
Generally speaking, the best way to ensure that data is not misused is simply not to collect it. This is known as data minimisation: collecting only what you absolutely need for the specific purpose at hand, and nothing else. Crucially, this doesn’t mean deleting what you don’t need after the fact – it means not collecting it in the first place. It removes the need to trust the party collecting the data, and builds certain protections in from the very beginning. Another best practice is strict data retention – deleting data as soon as it is no longer useful for the purpose at hand.
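To make that concrete, here is a minimal sketch, in Python, of what minimisation and strict retention might look like in a decentralised contact-tracing design. It is purely illustrative and not drawn from any real app: the names (ContactLog, record_contact and so on) are hypothetical, and the 14-day window is an assumed epidemiological parameter. The app stores only a rotating random identifier and a timestamp – no name, phone number or location – and purges anything older than the window.

```python
import os
import time

RETENTION_SECONDS = 14 * 24 * 60 * 60  # assumed 14-day exposure-relevance window


def new_ephemeral_id() -> bytes:
    """A fresh random identifier, rotated frequently so that observations
    of one phone cannot be linked into a long-term profile."""
    return os.urandom(16)


class ContactLog:
    """Data minimisation: store only what exposure notification needs -
    the random identifier broadcast by a nearby phone and when it was
    seen. No names, numbers, locations or persistent identities."""

    def __init__(self) -> None:
        self._events: list[tuple[bytes, float]] = []

    def record_contact(self, ephemeral_id: bytes) -> None:
        self._events.append((ephemeral_id, time.time()))

    def purge_expired(self) -> None:
        """Strict retention: delete events as soon as they are no longer
        useful for notifying recent contacts."""
        cutoff = time.time() - RETENTION_SECONDS
        self._events = [(eid, t) for eid, t in self._events if t >= cutoff]
```

The structural point stands regardless of the details: fields that were never collected – name, number, location – cannot later be handed to police, subpoenaed or breached.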
While data minimisation and short data retention are best practices among privacy and responsible-data advocates, they stand in contrast to many of the practices we encounter every day on the internet. The ad industry, for example, does effectively the opposite: it typically collects as much data as possible and then figures out how to monetise it, forming part of what the privacy scholar Shoshana Zuboff terms surveillance capitalism.
In short, practising data minimisation and strict data retention means planning ahead – it’s impossible to go back and collect something you forgot, but easy to ignore a data point you collected and didn’t need. And that kind of long-term planning is severely lacking in many governments’ rushed scramble to respond to Covid-19.
Few, if any, countries have rolled out accompanying data protection legislation to guide the implementation of contact-tracing apps. If anything, governments that have given thought to the apps’ legal basis seem to have leaned in the exact opposite direction, making the apps mandatory to download, as in Turkey or, for a short time, India.
In the UK, the Guardian reported that a privacy notice for the National Health Service’s test-and-trace programme stated that it planned to store the personal data of people with coronavirus for 20 years, including full name, date of birth, phone numbers, home address and email address. Interestingly, that privacy notice has since been taken down.
The privacy rights campaigners the Open Rights Group have instructed lawyers to prepare a legal challenge to this plan, with their concerns including “whether the personally identifiable data retained could be subsequently obtained by the Home Office or other government departments for immigration or other purposes”. The concern is well warranted: the NHS has a history of sharing data with the Home Office, and plans to use NHS data to track down patients “believed to be breaching immigration rules” were scrapped only after a legal challenge in 2018.
But let’s be clear – we shouldn’t have to rely upon ad hoc legal challenges, or upon trusting the body in charge of implementing the contact-tracing app. This is an issue of accountability, power and the protection of human rights. Collecting such data and building such widespread surveillance infrastructure is a manifestation of power, and power requires safeguards to avoid misuse, particularly in the current political environment.
This particular surveillance infrastructure is especially dangerous because it’s being built in the name of public health, which, whether intentionally or not, implies that it’s being built “for good”. Downloading a contact-tracing app is being framed as an act of national solidarity, as a contribution to keeping your fellow citizens safe – all while potentially putting them more at risk from the less immediately visible consequences of the data that the apps are collecting.
There’s no such thing as good surveillance or bad surveillance – there is just surveillance, which has historically been used most heavily against communities of colour. To ensure that contact-tracing apps are not misused for purposes that violate our human rights, such as tracking protesters, identifying immigrants or policing communities of colour, they must be built with privacy by design and accompanied by clear data protection and privacy legislation.
We can’t continue to ignore the social and political environments in which technical solutions are implemented, and we cannot let this pandemic open the door to even more invasive surveillance solutions.