Can you expect algorithms to be unbiased?
My colleague Jesse Frederik, Economics correspondent for our Dutch sister site De Correspondent, put that question to me after I wrote an article about the dangers of the digitisation of the welfare state – specifically, the System of Risk Indication (SyRI), a big data analysis system that the Dutch government and authorities had been using to detect possible social security or tax fraud. SyRI analyses all kinds of personal data – about benefits, taxes, allowances – and then uses an algorithm to draw up a list of possible suspects. A Dutch judge ruled last week that SyRI breaches human rights.
One problem with a system like SyRI: it’s biased. SyRI was literally only used in disadvantaged neighbourhoods. But if you start focusing on certain places, you’ll only find something in those places. The people who commit fraud in rich neighbourhoods – say, white-collar crimes like mortgage fraud – will never end up in the database.
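To make that mechanism concrete, here is a minimal toy simulation in Python. It is purely illustrative and bears no relation to how SyRI actually works: it simply assumes two neighbourhoods with exactly the same underlying fraud rate, but inspections in only one of them. Every detected case then comes from the inspected neighbourhood, so the resulting data "shows" that fraud is concentrated there.

import random

random.seed(0)

FRAUD_RATE = 0.05      # assumption: the same underlying fraud rate everywhere
RESIDENTS = 10_000     # assumption: residents per neighbourhood

def detected_cases(inspected: bool) -> int:
    """Count the fraud cases that investigators actually find in one neighbourhood."""
    found = 0
    for _ in range(RESIDENTS):
        commits_fraud = random.random() < FRAUD_RATE
        # A case only ends up in the database if someone looks for it.
        if commits_fraud and inspected:
            found += 1
    return found

print("Disadvantaged neighbourhood (inspected):", detected_cases(inspected=True))
print("Wealthy neighbourhood (not inspected):  ", detected_cases(inspected=False))

The behaviour in both neighbourhoods is identical, yet the first line reports roughly 500 detected cases and the second reports zero.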
This is part of a much broader discussion about algorithms. In the United States, an algorithm used to predict recidivism gave black defendants higher risk scores. And when Amazon tested an algorithm to screen job applicants, the system preferred men.
In short, systems are biased.
In the contributions section under the article on De Correspondent, Jesse and I were joined by Maxim Februari, a Dutch writer who was part of the group that brought SyRI to court, and Christian van Veen, who contributed to a United Nations (UN) report that warned of the dangers of the digital welfare state.
Here are some key insights from the discussion.
People are biased too
An often-heard criticism of algorithms is that they’re biased. But Jesse highlighted the infamous CAF-11 case in the Netherlands, in which, during a government clampdown on fraud, an overzealous team of investigators wrongly cut off childcare allowance for a large number of parents.
As he put it: "That wasn’t algorithmic, just human bias."
And he mentioned another objection we frequently hear, one that is just as true of humans: algorithms are opaque. In other words, both algorithms and humans are black boxes. Information goes in, decisions come out, and you have no idea how either of them reached its decision.
It’s an important point, and it plays a broader role in the discussion about artificial intelligence (AI). What expectations do we have of new systems? Should they work 100% flawlessly?
Take the self-driving car. We’re shocked when an autonomous Uber car gets into an accident, even though human drivers cause accidents all the time. So it seems more reasonable to expect such a car to have fewer accidents than a human driver than to expect it never to have an accident at all.
So to come back to SyRI: why be so critical of an algorithm when people are fallible too? Our discussion raised three main problems.
(1) It’s opaque
In my piece, I quoted a speech by Februari – a writer, philosopher and lawyer – so I was happy to see him respond to the discussion.
As he pointed out: "There really is a crucial difference between a biased civil servant making a separate, questionable decision and a whole fraud detection system that is opaque even to the court."
He was referring to SyRI’s total lack of transparency. It is not clear who exactly is being vetted, what data SyRI has access to, or which models are used to analyse that data. The state did not want to clarify this, even to the judge, arguing that people could otherwise work the system to their advantage.
Februari found this lack of transparency problematic because, in a state governed by the rule of law, a judge must be able to review the actions of the government. He also found it unacceptable that this wasn’t possible in the case of SyRI: "If a system is comprehensible, you can change it, refine it, abolish it ... "
He said the court also stated in its judgment that citizens should be able to track their personal data. The unclear origin and quality of the data are at least as problematic as the way the algorithms work.
(2) It happens on a large scale
Christian van Veen, who leads the Digital Welfare State and Human Rights Project at New York University, pointed out that one of the reasons this case was brought before the courts was that SyRI "‘systematises’ and whitewashes a lot of prejudices and discriminatory assumptions".

Van Veen contributed to a UN report that warned the world risked "stumbling zombie-like into a digital welfare dystopia".
Another reason for concern, van Veen argued, is the scale on which such systems operate. "Previously, an investigator was called in to check your toothbrush situation [to find out how many people you’re living with] on the basis of a tip or other clue; now, a whole neighbourhood is being investigated."
He mentioned the example of Waterproof, a precursor to SyRI, in which the water consumption of 63,000 people was analysed to find possible fraud. "[An] individual civil servant is probably not free of certain prejudices, but how much harm can one civil servant do compared to a system that vets 63,000 people?"
(3) It’s anything but intelligent
So what do I think? I find it particularly disturbing how little thought is given to these kinds of systems. If an algorithm is proven to make better decisions than a human being, then you should certainly consider using it.
Take radiology, where academic research is hopeful about the possibilities of AI. For example, on 1 January, Nature ran an article about the use of deep learning in finding breast cancer on a mammogram. The conclusion: "In an independent study of six radiologists, the AI system outperformed all of the human readers."
Does this mean we need to deploy AI in hospitals right away? Not so fast. As with other scientific research, the study needs to be replicated to see whether the results hold up.
And sometimes reality turns out to be messier than the academic research suggests. For example, an oncology research centre ended its collaboration with IBM because the system’s recommendations were not only wrong but dangerous.
So you have to proceed step by step, evaluating these systems and thinking about where they add value. This is the opposite of what happened with SyRI, which came into force without democratic debate and without any proven added value.

This added value is not only about whether SyRI "works" but also about whether such a system is even desirable. Which decisions do we want to leave to an algorithm, and which don’t we?
At their core, these are political questions, and they are becoming increasingly important now that there’s so much hype around AI. Not thinking about them is anything but intelligent. It’s foolish.
But who knows? Maybe I’m biased.