Can you expect algorithms to be unbiased?

My colleague Jesse Frederik, economics correspondent for our Dutch sister site De Correspondent, put that question to me – specifically about the System of Risk Indication (SyRI), a big data analysis system that Dutch government agencies had been using to detect possible social security or tax fraud. SyRI analyses all kinds of personal data – about benefits, taxes, allowances – and then uses an algorithm to draw up a list of possible suspects. A Dutch judge ruled last week that SyRI breaches human rights.

One problem with a system like SyRI: it’s biased. SyRI was literally only used in disadvantaged neighbourhoods. But if you start focusing on certain places, you’ll only find something in those places. The people who commit fraud in rich neighbourhoods – through, say, white-collar crimes like mortgage fraud – will never end up in the database.

If you start focusing on certain places, you’ll only find something in those places
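
To make that selection effect concrete, here is a minimal simulation – my own sketch, with invented neighbourhood names, fraud rates and population sizes, nothing to do with SyRI’s actual model – in which fraud is equally common everywhere but only some neighbourhoods are inspected:

```python
import random

random.seed(42)  # reproducible illustration

# Invented numbers for illustration only: fraud occurs at the SAME 2% rate
# everywhere, but inspections happen only in the "disadvantaged" neighbourhoods.
NEIGHBOURHOODS = ["disadvantaged A", "disadvantaged B", "wealthy C", "wealthy D"]
INSPECTED = {"disadvantaged A", "disadvantaged B"}
FRAUD_RATE = 0.02
RESIDENTS = 10_000  # per neighbourhood

detected = {n: 0 for n in NEIGHBOURHOODS}
for neighbourhood in NEIGHBOURHOODS:
    for _ in range(RESIDENTS):
        commits_fraud = random.random() < FRAUD_RATE  # same odds everywhere
        # A case only ends up in the database if someone actually looks.
        if commits_fraud and neighbourhood in INSPECTED:
            detected[neighbourhood] += 1

print(detected)
# Every detected case comes from an inspected neighbourhood, e.g. roughly:
# {'disadvantaged A': 200, 'disadvantaged B': 200, 'wealthy C': 0, 'wealthy D': 0}
```

The resulting database then appears to "confirm" that fraud is concentrated in the inspected neighbourhoods, even though, by construction, it is equally common everywhere.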

This is part of a much broader discussion about algorithms. In the United States, journalists at ProPublica showed that a widely used algorithm for predicting reoffending wrongly flagged black defendants as future criminals far more often than white defendants. And Amazon scrapped an experimental recruiting algorithm after it turned out to discriminate against women.

In short, systems are biased.

In the contributions section under the article on De Correspondent, Jesse and I were joined by Maxim Februari, a Dutch writer who was part of the group that brought SyRI to court, and Christiaan van Veen, who contributed to a UN report on the risks of the digital welfare state.

Here are some key insights from the discussion.

People are biased too

An often-heard criticism of algorithms is that they’re biased. But Jesse highlighted that the human decision-making these algorithms replace was riddled with bias long before any computer was involved.

As he put it: "That wasn’t algorithmic, just human bias."

And he mentioned another objection that we frequently hear and which is also entirely true of humans: algorithms are opaque. In other words, both algorithms and humans are black boxes. Information goes in, decisions come out, and you have no idea how either reached a decision.

What expectations do we have of new systems? Should they work 100% flawlessly?

It’s an important point, which plays a broader role in the discussion about artificial intelligence (AI). Because what expectations do we have of new systems? Should they work 100% flawlessly?

Take the self-driving car. It will sometimes cause accidents, but so do human drivers. So it seems more reasonable to expect a self-driving car to have fewer accidents than a human driver than to expect it never to have an accident at all.

So to come back to SyRI: why be so critical of an algorithm when people are fallible too? Our discussion raised three main problems.

(1) It’s opaque

In my piece, I quoted a speech by Februari – a writer, philosopher and lawyer – who was part of the coalition that took SyRI to court. So I was happy to see Februari respond to the discussion.

As he pointed out: "There really is a crucial difference between a biased civil servant making a separate, questionable decision and a whole fraud detection system that is opaque even to the court."

He meant SyRI’s total lack of transparency. It is not clear who exactly is being vetted, what data SyRI has access to, and which models are being used to analyse that data. The state did not want to clarify this, not even in court, in case citizens learnt how the system works and adjusted their behaviour to game it.

Februari found this lack of transparency problematic because, in a state governed by the rule of law, a judge must be able to review the actions of the government. He also found it unacceptable that this wasn’t possible in the case of SyRI: "If a system is comprehensible, you can change it, refine it, abolish it ... "

He said the court also stated in its judgment that citizens should be able to track their personal data. The unclear origin and quality of the data is at least as problematic as the way the algorithms work.

(2) It happens on a large scale

Christiaan van Veen, an adviser to the UN special rapporteur on extreme poverty and human rights, pointed out that one of the reasons this case was brought before the courts was because SyRI "‘systematises’ and whitewashes a lot of prejudices and discriminatory assumptions".

Van Veen, who leads the Digital Welfare State and Human Rights Project at New York University, contributed to a UN report that warned countries risk stumbling into a "digital welfare dystopia", in which new technologies are used to surveil and punish the poor.

Van Veen thought another reason for concern was the scale on which such systems operate. "Previously, an investigator was called in to check your toothbrush situation [to find out how many people you’re living with] on the basis of a tip or other clue; now, a whole neighbourhood is being investigated."

He mentioned the example of Waterproof, a precursor to SyRI, in which the water consumption of 63,000 people was analysed to find possible fraud. "[An] individual civil servant is probably not free of certain prejudices, but how much harm can one civil servant do compared to a system that vets 63,000 people?"

(3) It’s anything but intelligent

So what do I think? I find it particularly disturbing how little thought is given to these kinds of systems. If an algorithm is proven to make better decisions than a human being, then you should certainly consider using it.

Take radiology, where academic research is hopeful about the possibilities of AI – for example, about the use of AI in finding breast cancer on a mammogram. The conclusion of one study: "In an independent study of six radiologists, the AI system outperformed all of the human readers."

Does this mean we need to deploy AI in hospitals right away? Not so fast. As with other scientific research, the results need to be replicated to see whether the system holds up.

And sometimes, reality turns out to be messier than the academic research. IBM’s Watson for Oncology, for example, was criticised by doctors at hospitals that deployed it, because the system’s treatment recommendations were not only wrong but dangerous.

Which decisions do we want to leave to an algorithm, and which don’t we?

So you have to take it step by step: evaluate these systems as you go, and think about where they add value. This is the opposite of what happened with SyRI. It came into force without democratic debate and without any proven added value.

This added value is not only about whether SyRI "works" but also whether such a system is even desirable. Which decisions do we want to leave to an algorithm, and which don’t we?

At their core, these are political questions that are becoming increasingly important. Not thinking about them is anything but intelligent. It’s foolish.

But who knows? Maybe I’m biased.


Dig deeper

[Photo: a chain-reaction contraption of tubes, string, wooden pegs and a spray bottle]

A little less automation, a little more friction, please
In the age of self-driving cars, autoplay TV shows, and beverages that contain all your nutrients, merchants of efficiency grow rich while we lose skills and control over our time. It’s time to make our lives a little less efficient. Read another piece by Sanne Blauw here.