Governments across the world face the same dilemma: how to contain the spread of Covid-19 while at the same time reopening their shuttered societies. The stakes are immense. Open up too early and the death toll could skyrocket, putting health systems under enormous strain. Open up too slowly and the economic downturn could be catastrophic, reverberating for years.
Surveillance technology has been touted as a potential solution, promising to suppress the spread of the virus while we wait for a vaccine. Contact-tracing apps, electronic tagging for quarantine enforcement, and remote health monitoring systems are available or in development.
The benefits from these technological fixes are obvious: the status quo is untenable and much of this technology already works – it only needs to be repurposed. But there are also downsides.
Governments have a habit of dismantling civil liberties in times of crisis
The most heated debate revolves around privacy. With no end to the coronavirus crisis in sight, should whole populations be subject to constant surveillance? That is in itself an important question, but the use of surveillance technologies has implications far beyond privacy.
If apps or biometrics determine who is allowed to travel, work or spend time in public spaces, then surveillance will establish new and profound divisions in our societies.
If quarantines are enforced automatically, there needs to be transparency about how this technology works. Its application has huge ramifications for those ordered to stay home. This is not just about privacy, but fairness. The issues at stake include a citizen’s right to legal recourse, the maintenance of public trust in government, and the limits of what authorities can do in the name of medical advice.
Another big question: what will happen once the pandemic is over? Will surveillance measures be retracted? Governments have a habit of dismantling fundamental liberties at times of crisis. This one is likely no exception.
Before any such debates can be had, there is a vital question that isn’t asked often enough: do these surveillance technologies actually work?
For these reasons, we – The Correspondent and De Correspondent – have started a long-term project to identify surveillance technologies, and track how they are used or abused, around the world. With help from our readers, other journalists and experts, we will analyse the technologies, organisations and companies involved in surveillance and assess whether these measures are effective.
We’ve started here with an overview of technologies currently deployed. This is not a definitive list. New technologies are emerging almost daily. If we’ve missed something, tell us here – and help us track coronavirus surveillance where you live.
A technique known as “contact tracing” has long been considered a critical tool to understand and suppress outbreaks of disease. Traditionally, this process has been analogue: typically, a health worker would interview someone who has tested positive for an infectious disease, asking them to remember with whom they’ve been in contact when they were contagious. Those people can then be asked to get tested, self-isolate or seek treatment so that they don’t spread the illness further.
Countries are now automating this process, by introducing – or making plans to introduce – their own contact tracing apps. In Argentina, the Co-Track app works by tracking users’ locations via the GPS chips in smartphones. If a user tells the app they are infected, it can detect other app users who have been nearby in recent weeks, and notify them of the risk.
Singapore has deployed a similar app, Trace Together, which uses Bluetooth technology instead of GPS. If two people who have downloaded the app spend at least 30 minutes within 2 metres of each other, their phones “shake hands” by exchanging a unique code which is stored on their devices for 21 days. If one becomes infected, these codes enable disease detectives to quickly identify who else might be at risk.
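To make this concrete, here is a minimal sketch in Python of how such a Bluetooth “handshake” log could work in principle. The class, the method names and the rotating random codes are our own illustrative assumptions, not TraceTogether’s actual protocol.

import secrets
from datetime import datetime, timedelta

RETENTION = timedelta(days=21)  # codes are kept on the device for 21 days

class ContactLog:
    # Holds the anonymous codes a phone collects when it 'shakes hands'.
    def __init__(self):
        self.entries: list[tuple[str, datetime]] = []  # stays on the device

    def new_broadcast_code(self) -> str:
        # A rotating random identifier broadcast over Bluetooth,
        # so the log never contains names or phone numbers.
        return secrets.token_hex(16)

    def record_handshake(self, peer_code: str, when: datetime) -> None:
        self.entries.append((peer_code, when))

    def prune(self, now: datetime) -> None:
        # Discard anything older than the retention window.
        self.entries = [(c, t) for c, t in self.entries if now - t < RETENTION]

    def encounters_at_risk(self, infected_codes: set[str]) -> list[datetime]:
        # When health authorities publish the codes of an infected user,
        # the phone checks locally which of its encounters match.
        return [t for c, t in self.entries if c in infected_codes]

If one user tests positive, the codes their phone has broadcast can be shared with health authorities, and other phones compare them against their own logs – without anyone handing over a list of names.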
Apple and Google plan a ‘privacy-focused’ Bluetooth tracking system with no centralised database
These and other apps promise to detect and manage new coronavirus outbreaks, while also enabling governments to ease lockdown restrictions during the wait for a vaccine. But big questions remain unanswered. Is digital contact tracing effective? In Singapore, only one person in six had downloaded the official app in the first two weeks following its launch. Accuracy is another concern: GPS doesn’t work well inside buildings, for example, while Bluetooth apps will “shake hands” through thin apartment walls.
Tufts University professor Susan Landau cautions that it’s important to scrutinise the efficacy of contact tracing before these technologies form the basis of a new global surveillance system. “If a privacy- and civil liberties-infringing program isn’t efficacious, then there is no reason to consider it further,” she writes in Lawfare, a blog which covers national security issues.
To date, the approach to contact tracing has been fragmented as many countries develop their own unique apps. On 10 April, Google and Apple announced they were working together to integrate Bluetooth contact tracing into up-to-date Android and iPhone operating systems. The companies say they are planning a “privacy-focused” decentralised system, meaning there will be no central database to log who has come into contact with whom – a potential safeguard against abuse by overreaching governments.
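The difference between the two architectures can be illustrated with a toy sketch – the class names below are ours, not the actual Apple-Google Exposure Notification specification. In a centralised design, one server learns who met whom; in a decentralised one, the server only relays the anonymous codes of users who report testing positive, and each phone does the matching against its own log.

class CentralisedServer:
    # A single database that records who met whom - the kind of log
    # an overreaching government could later query or repurpose.
    def __init__(self):
        self.contacts: list[tuple[str, str]] = []

    def report_contact(self, user_a: str, user_b: str) -> None:
        self.contacts.append((user_a, user_b))


class DecentralisedServer:
    # Stores only the anonymous codes of users who report testing positive.
    # It never learns who met whom; phones download this list and check
    # their own contact logs locally.
    def __init__(self):
        self.infected_codes: set[str] = set()

    def publish_infected(self, codes: set[str]) -> None:
        self.infected_codes |= codes

    def download_infected(self) -> set[str]:
        return set(self.infected_codes)

In the decentralised case, the sensitive matching step never leaves the handset – which is the safeguard Apple and Google are promising.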
Embedding contact tracing capabilities into our phones risks making this technology a permanent part of their operating systems. This is known as “mission creep”: when surveillance designed for extraordinary circumstances becomes routine, the same tools can, subsequently, be deployed for other political purposes. For example, NSO Group, a company which has pitched contact tracing technology to governments, has been linked to tech that enables authoritarian regimes to hack dissidents’ phones.
At the end of March 2020, it was estimated that 1.7 billion people had been ordered to stay home – a fifth of the global population. When a government imposes a lockdown, it will want to understand how many people are following its advice. For the first time in human history, it’s now possible to track entire populations through location data from smartphones.
Authorities use this information to decide when to tighten restrictions, impose curfews or put more police on the streets. There are two ways governments can access this data. The first is to strike deals with local service providers. Their networks connect handsets via a signal emitted from nearby phone masts.
Mass location tracking can be justified, but making data truly anonymous is difficult
Before coronavirus, anonymised location data from mobile networks was already used by municipal governments to track pedestrians’ movements around a city – and by bounty hunters to track down their targets. Since the pandemic began, Austrian, Belgian, German and Italian service providers have pivoted to help governments by handing over aggregated, anonymised data.
In an emergency, mass location tracking can be justified. It’s already proved valuable, for example, to guide distribution of emergency aid in the aftermath of earthquakes. Transparency is vital to ensure this data is aggregated and anonymised before it’s handed over to governments. Making location data entirely anonymous, however, is difficult.
A 2018 study by researchers at MIT found that it’s relatively easy to identify an individual from their location history. Mobile service providers should be transparent about the type of data they’re passing on, by opening their methods to scrutiny.
The other way for governments to track how many people are following lockdown orders is via GPS location data collected by companies such as Google. The company publishes figures for the number of visitors to public places such as transport hubs, parks, workplaces, homes, pharmacies and grocery stores in 131 countries. This data – collected from people who have granted Google permission to track their location history – shows, for example, that visits to parks in Australia have fallen by 51%, while in Afghanistan visits to parks have decreased by only 12%.
In late February of this year, a video published online by a newspaper linked to the Chinese government showed a drone scolding a woman in the Inner Mongolia region. “Yes auntie, this is the drone speaking to you,” says a disembodied voice from the drone’s loudspeaker. “You shouldn’t walk around without wearing a mask. You’d better go back home and don’t forget to wash your hands.”
The use of drones to police the public and convey messages from the authorities has been reported in Europe – in Belgium, France, Spain, and the Netherlands – and in the Middle East, in Israel, Jordan, Kuwait and the United Arab Emirates.
In Albania, where people are allowed outside only for authorised activities, drones alert police to pedestrians who may be breaking the rules. In the UK, video footage captured by drones flying above rural beauty spots has been published to shame those hiking or having picnics – sparking controversy in the media and protests over civil liberties.
Drones enable local police forces to dramatically expand the areas over which they can keep watch. To critics, they are proof of a creeping militarisation of civil life as technology designed for war zones becomes integrated into the everyday.
Artificial intelligence (AI) and big data have been touted for years as key tools for managing the spread of disease. This time is no different. Predictive analytics from big data can help to detect, predict and track outbreaks.
Analytics guide policy decisions – and may be especially helpful as countries determine when to ease lockdown restrictions. Here again, governments depend on the cooperation of private companies – including Google, Facebook and telecommunications companies – to share location and other behavioural data.
BlueDot, for example, is a Canadian company which hoovers up digital media, health reports and air travel data, then applies natural language processing and other types of machine learning to detect early signs of outbreaks – from factors such as mentions on social media or a spike in search queries for flu symptoms. According to media reports, BlueDot’s algorithm flagged the Covid-19 outbreak in December, earlier than the World Health Organisation did.
The potential is limited by both a lack of data and the poor quality of available data
American data-mining company Palantir is helping Britain’s National Health Service and other countries’ health departments to build analytics capacity to guide allocation of scarce medical supplies to hospitals based on predictions of where the disease will hit next. Outside of health, Palantir’s other clients include intelligence agencies, immigration enforcement, police departments and the United Nations’ World Food Programme.
In Russia, artificial intelligence plays a role in quarantine enforcement. Facial recognition has identified people who break their stay-at-home orders. Elsewhere, companies are harnessing AI to monitor social distancing, by harvesting data from surveillance cameras to check if people keep far-enough apart – and to send alerts or sound alarms if they come too close.
Within healthcare systems, AI may help with patient diagnosis, to establish a prognosis for the disease and to find treatments and cures. Computer vision technology, for instance, can detect traces of Covid-19 from X-rays of lungs, saving radiologists precious time.
By trawling through patient data, algorithms can track how Covid-19 develops in certain patients and which treatments are effective. Although AI sounds promising in this context, its potential in this pandemic is limited by two big problems: the lack of data and, where data is available, its poor quality.
Modelling by computers is effective only when the models can be supplied with – and trained to interpret – reliable data sets. Although data is available from previous outbreaks such as Sars and Zika, these viruses behave differently to Covid-19. So models trained on previous data are now of limited use.
Other sources of data, such as travel information or social media posts, can be very messy and difficult to model. For artificial intelligence to work well here, new data from many more people is needed – often raising serious privacy concerns.
A number of states have been experimenting with heavy-handed surveillance and intimidation techniques to enforce quarantine orders. The trials underway range from apps and wristbands to “electronic fences” and online shaming.
The Indian state of Karnataka has rolled out Quarantine Watch, an app to track people who have been placed under home-quarantine orders – either because they have tested positive for Covid-19, have shown symptoms, or recently have returned from overseas. The app tracks users’ movements via GPS, and users must submit hourly selfies during daytime to prove they haven’t left the house.
Similarly, in Poland, a Home Quarantine app randomly requests selfies from people undergoing quarantine after returning from abroad. Hong Kong, meanwhile, issues wristbands to track people under compulsory quarantine and ensure they don’t leave their homes. The wristbands connect to a mandatory smartphone app, which shares its location with authorities via the messaging apps WeChat or WhatsApp.
Israelis received quarantine orders after waving through a neighbour’s window
Other governments are harvesting location data from mobile network providers, in some cases by tracking individuals. The Indian state of Andhra Pradesh has entered agreements with telecoms companies to track people under quarantine orders. Alerts are sent to authorities when a quarantined person moves more than 100 metres from their home and ignores reminders to go back.
Taiwan has implemented a similar "electronic fence" system. If a quarantined person leaves their home or turns off their phone, location data alerts local police who will visit or make contact. In Singapore, quarantined individuals receive text messages containing a link which they must click on to report their location.
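In principle, the automated part of such an “electronic fence” can be very simple: a distance check against the phone’s last reported position. The sketch below is our own illustration, borrowing the 100-metre threshold reported in Andhra Pradesh and Taiwan’s rule of raising an alert when a phone goes dark; it is not the code either government actually runs.

import math

ALERT_RADIUS_METRES = 100  # threshold reported for Andhra Pradesh (assumed here)

def haversine_metres(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates, in metres.
    r = 6_371_000  # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def breach_detected(home, latest_fix, phone_is_on=True):
    # Flag a possible breach if the phone goes dark or strays too far from home.
    if not phone_is_on:
        return True  # Taiwan's system also alerts when a phone is switched off
    return haversine_metres(*home, *latest_fix) > ALERT_RADIUS_METRES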
Other governments have resorted to online naming and shaming. In the Republic of Srpska, an entity within Bosnia and Herzegovina, the names and hometowns of the first 30 people who had broken quarantine orders were published on 23 March. In Paraguay, police officers posted videos of themselves on social media: the scenes include threatening people who broke quarantine with a taser, forcing them to do star jumps, and making them lie face-down on the floor while repeating: “I won’t leave my house again, officer.”
Given this unprecedented surveillance of people under quarantine, it’s crucial that the technology which decides who stays home should be both effective and transparent. On both counts, doubts are emerging.
In Israel, some citizens suspect the location data harvested by authorities is not entirely accurate. The newspaper Haaretz has reported how, when one person tested positive for coronavirus, his upstairs neighbours received quarantine orders – as did their visitor and his partner, who had only waved through a window from outside – although none had come close enough to be infected.
In China, a “health-code” app introduced by some cities to determine who can and can’t go outside has triggered similar concerns over its opaque decision-making.
Technology also plays a role in finding infections before people become seriously ill. In Peru, health authorities launched a symptom-checker website where citizens who answer questions receive a preliminary diagnosis. The website is not anonymous and users are asked to fill in their national ID numbers.
Other governments are working with startups to develop symptom-checker apps. The data these apps generate helps healthcare institutions advise users on when to seek help, and gives health authorities early warning of possible outbreaks.
Demand for thermal imaging cameras in the US has soared by 700%
Asking how people feel is one approach. Another is to deploy technology to find out. The pandemic is a catalyst for the introduction of biometric technology – such as mass facial recognition and remote temperature monitoring – that has never before been integrated into mainstream societies. “Biometric monitoring would make Cambridge Analytica’s data hacking tactics look like something from the Stone Age,” wrote Israeli historian Yuval Noah Harari in a Financial Times article.
Most biometric surveillance deployed against the coronavirus has focused on fever-detection. In China, cameras with thermal imaging capacity have been installed to survey passengers in airports and public transport hubs. Those with high body temperatures are selected for questioning.
Cameras enhanced with AI can spot people with fevers among crowds, according to claims by US companies. Sales in this market are soaring, with demand up by 700%, according to Flir Systems, a manufacturer of thermal imaging cameras. In Georgia, US, a grocery chain has deployed thermal imaging cameras at the entrances to its stores, where employees pull aside shoppers with a temperature of 38 degrees or higher and hand them a flyer asking them to leave.
The technology is invasive, and its effectiveness remains unproven: fever checks cannot catch infections in people without symptoms, and widespread testing in Iceland revealed that half of the people who tested positive for the virus showed no symptoms at all.
As more countries begin cautiously to attempt a return to normal life, surveillance tech is routinely framed as a means for governments to relax their lockdown measures. In Australia, for example, ministers have said that restrictions could be eased when enough Australians have installed – voluntarily – the country’s contact tracing app.
It remains to be seen whether these technologies can be effective. More certain are the as-yet-unseen costs to civil life and liberties. Discrimination, racial and other forms of unjust profiling by algorithms, opaque and erroneous decisions – all are risks that will be hard to avoid.
We will keep tracking the deployment of surveillance technologies across the world. To keep us informed of any new measures you come across, fill out this form.
This story is also published on De Correspondent. Read it in Dutch here.
Track(ed) Together: help us gather data
We’re looking for readers, activists, legal and surveillance experts and journalists around the world to be our eyes and ears, keeping us up to date on the latest developments in pandemic surveillance from your country – or a country where you speak the language.