Every summer there is a mass exodus from New York City towards the white beach at Jones Beach State Park. Here, looking out over the Atlantic Ocean, you can sunbathe, catch a concert or play a game of mini-golf. And get away from the bustle of the city.
But you have to get there first. And there’s something odd about the route you take. The flyovers over the Southern State Parkway that leads to Jones Beach are low. Unusually low, with an average height of less than three metres.
Not all the flyovers along the route are as low as that, just the oldest. The ones that date back to when the Parkway was first constructed in the 1920s, when the area was no more than a swamp and any heavy rainfall caused immediate flooding.
Robert Moses, who came up with the idea of the Parkway, had big plans. The young civil engineer would often sail his small boat to the beach from the other side of the bay. There, he’d look out over the ocean, towards the city lights in the distance. When he got back he headed to the library and pored over maps of the city.
And then he had his idea: he would construct freeways and make the beach accessible to all. The holiday feeling would start as the New Yorkers escaped the city through a park-like landscape with lush verges to the left and the right and a gorgeous beach ahead of them. The parkway was born. And those low flyovers were all part of the romantic feeling, as they meant large, noisy vehicles couldn’t use the road.
But they also meant that buses couldn’t use the road. And buses just happened to be how most poor Americans – including many African Americans – got around. The result? The pristine sands of Jones Beach were not accessible to these demographics.
Which was no coincidence, according to Moses’ biographer Robert Caro. Caro won the Pulitzer Prize for The Power Broker, his massive biography of the urban planner and engineer. He later described Moses as ‘the most racist man’ he had ever met.
Sidney Shapiro, an engineer who worked for Moses, told Caro that Moses had expressly instructed him to keep the viaducts extra low. That was how they would keep away the poor – including most African Americans, whom Moses considered ‘dirty’. This was how Moses sought to maintain lasting control over the situation, said Shapiro. ‘He wrote legislation, but he knew that you could change the legislation. You can’t change a bridge after it’s up.’
Every one of the flyovers along the Southern State Parkway was an embodiment of prejudice.
Technology is not neutral
Moses’ story shows that technology is a human product. His ideas influenced his infrastructure designs, and had real consequences for the people who wanted to use them.
And his ideas are still having an impact, just as he wanted them to. There are frequent accidents on the Southern State Parkway when buses and trucks get their roofs scraped off because the drivers didn’t realise they weren’t welcome on this road. And the beach is still hard to reach for people who don’t own a car.
Our modern technology is far from neutral too. Take your smartphone, for instance. It’s designed to be as addictive as possible. From its bright fruit machine colours to the notifications that alert you to your network’s every online move – it’s all geared to keeping you glued to your screen. And it’s all for the benefit of the tech companies’ coffers. The more you use your phone, the more they earn.
But technology is also biased in a way that goes beyond profit margin considerations. What about speech assistants, which for a long time all had women’s voices? And the facial recognition software that couldn’t recognise black faces? Our modern technology can be just as much an embodiment of prejudice as those Long Island flyovers.
But it’s unfair to blame the technology. It only listens – if one can use such a human term – to its makers. They are the ones who think Siri has to be a woman. And who use databases dominated by white faces.
That doesn’t make every tech worker a modern-day Robert Moses – they are not necessarily doing it on purpose. But some of them are. In Weapons of Math Destruction, Cathy O’Neil describes how American universities with dubious credentials use algorithms that target vulnerable groups, trying to sell their degree courses to the unemployed, single mothers on benefits, or ex-prisoners.
This makes money for the university – because the American government provides grants for these groups – but doesn’t get the students anywhere, since a degree from such institutions is hardly worth the paper it’s written on. But these students only find that out after they have accrued a mountain of debt.
In short, behind any technology are people with ideals and blind spots, interests and prejudices. And that has consequences. Which is why I want us to get to know those people a bit better.
But first: who am I?
Maybe I should introduce myself first. I was fascinated by numbers from a very early age, so I did a degree in econometrics – a mix of economics and statistics. Not only did I enjoy numbers, but in many contexts they gave me something to hold on to. My school grades, my weight – these figures were objective measures in my life.
But as I progressed with my PhD, my confidence in numbers became increasingly shaky. I was doing research in Bolivia, where I asked 240 people about their happiness levels and their ideas about income distribution in their country. Now that I was getting a peep behind the scenes, I began to have doubts about the objectivity of numbers. It was me who was interested in happiness. It was me who thought it was something you could measure. It was me who thought it was important to do that in Bolivia.
Underlying those numbers were my preconceived ideas. They weren’t necessarily wrong in themselves, but they certainly weren’t objective. So how did it work, I began to wonder, with the numbers that affect our lives? The economic statistics? The health studies? The school grades?
After getting my PhD I started to delve into these questions as Numeracy correspondent at De Correspondent. The position attracted me because De Correspondent thought about journalism in the same way I had come to think about numbers: objectivity is impossible, so you’d better get to know the people behind it.
My quest for the people behind the numbers culminated in 2018 when I published Het bestverkochte boek ooit (met deze titel) (‘The best-selling book of all time (with this title)’). The English translation of my book, due to be published in 2020, will be called The Number Bias.
The main conclusion: numbers are not sacred. They are influenced by the errors, interests and convictions of the people who collect and publish them – the scientists, businesspeople, politicians and journalists. And by us, as numbers consumers. Because we are not innocent either, and we are easily misled and seduced by numbers.
In short, numbers are a human product.
Let’s get back to technology. I want to get to know the people who are behind it better. And what better place to start than with the hype of the day: artificial intelligence (AI). A logical follow-on from my previous work, given that AI is often about smart number-crunching on a massive scale.
Because you shouldn’t be misled by the magical aura surrounding AI. The name is often just the attractive packaging of things that make many of us either yawn or panic: maths and statistics. Almost everything you read about AI at the moment is actually about machine learning – the use of algorithms and statistical models to get a machine to carry out a task without giving it specific rules.
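To make that definition concrete, here is a deliberately tiny sketch in plain Python (a toy example with invented numbers, not any real system). Instead of a programmer writing the rule “an animal under 10 kg is a cat”, the program derives its own decision threshold from labelled examples – which is all “learning” means here.

```python
# Toy machine learning: no rule is hand-written; a threshold is
# learned from labelled examples instead. All data is invented.

def train_threshold(examples):
    """examples: list of (weight_kg, label) pairs with labels 'cat' or 'dog'.
    Returns a threshold halfway between the two class averages."""
    cats = [w for w, label in examples if label == "cat"]
    dogs = [w for w, label in examples if label == "dog"]
    return (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

def predict(threshold, weight_kg):
    # Apply the learned threshold to a new, unseen animal.
    return "cat" if weight_kg < threshold else "dog"

training_data = [(3.5, "cat"), (4.2, "cat"), (25.0, "dog"), (30.1, "dog")]
threshold = train_threshold(training_data)  # about 15.7 kg
print(predict(threshold, 5.0))   # → cat
print(predict(threshold, 28.0))  # → dog
```

Feed it different training data and it learns a different threshold – the behaviour comes from the data, not from explicit rules. That, stripped of the statistics, is the core of machine learning.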
Great strides have been taken in the field of machine learning in the past few years. The best-known success story is that of AlphaGo, developed by Google DeepMind for playing the board game Go. In 2016, the computer program beat Lee Sedol, the 18-time world champion. And there have been other successes: models that can detect cancer cells, that recognise faces and – yes – that can guess what you want to watch next on Netflix.
Such stories help sustain the hype around AI. AI startups are attracting funding like never before, and governments are falling over each other with their ambitions. US President Trump presented a plan in February to maintain ‘American leadership in the field of artificial intelligence’. The EU is investing 1.5 billion euros in AI and aims to spend another 20 billion in the coming decade. And all this fades into insignificance beside China, where Shanghai alone has promised to spend 100 billion yuan (12.8 billion euros) on AI initiatives.
If the hype is to be believed, AI is going to penetrate every facet of our lives. It’s going to detect our tumours, drive our cars, and fight our wars. Some say it can solve our major problems – from climate change to death – while others warn it might be our ‘last invention’.
A technology that is expected to have such an impact on your life seems like something you should get to know a bit better. That goes for how the technology works, but even more for the people behind it.
What’s my plan?
That is precisely what I intend to do in the coming months. I want to get to know the people behind AI. To be specific, the people behind machine learning. What choices do they make? What are their assumptions, their worldviews and their ethical considerations? All this to answer the question: how are they going to influence our lives? Are we going to be crashing into flyovers one day soon?
My quest is also a great opportunity to get to grips with the technology. How does a neural network function, for example? Don’t worry: you needn’t expect any Greek symbols from me. Even if you still have nightmares about maths lessons, you should definitely read my pieces. Because I am convinced that everyone can join the conversation about machine learning without having to be crazy about Generative Adversarial Networks (although they are, as I intend to convince you, really cool).
Let me say from the start: I don’t expect to just find Robert Moses figures. In fact, I am convinced that the vast majority of people involved in AI want to do some good in this world. But like all of us, myself included, they carry their background with them in everything they do. And I’m curious about that, and how it affects their work.
Artificial intelligence is a huge subject, of course, one that could keep me busy for years – so this project is just a start. And I’m hoping you can help me with it.
Can you help me?
There are roughly four stages to building a machine-learning application. Before you can train an AI model, you have to collect data. That data often has to be cleaned up or added to. To teach a machine to distinguish cats from dogs, for instance, you first need labelled examples of each.
Then it is time to develop the application, time for the machine to learn its task. The data are used to train a model to – say – recognise faces, predict energy prices or translate a text. Eventually the application ends up on our computers, in our surgeries, in our town halls, and we start to use it. And by using it we generate new data and the cycle starts over again.
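The four stages above can be sketched in a few lines of plain Python. This is a hypothetical toy (invented review snippets, a crude word-counting model), not a description of any real application, but it shows how collect, label, train and use fit together – and how using the model produces new data for the next round of the cycle.

```python
# A toy walk-through of the four-stage machine-learning cycle.
# All data and function names are invented for illustration.
from collections import Counter

def collect_data():
    # Stage 1: gather raw examples (here, hard-coded review snippets).
    return ["great beach", "lovely concert", "terrible traffic", "awful queue"]

def label_data(texts):
    # Stage 2: clean up and label — in practice a human annotator does this.
    labels = ["pos", "pos", "neg", "neg"]
    return list(zip(texts, labels))

def train(labelled):
    # Stage 3: "learn" by counting which words appear under which label.
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in labelled:
        counts[label].update(text.split())
    return counts

def use(model, text):
    # Stage 4: apply the model to new text; its outputs become new data,
    # so the cycle can start over again.
    pos = sum(model["pos"][w] for w in text.split())
    neg = sum(model["neg"][w] for w in text.split())
    return "pos" if pos >= neg else "neg"

model = train(label_data(collect_data()))
print(use(model, "great concert"))  # → pos
```

Every stage involves human choices – which data to collect, who labels it and how, what the model optimises for, where it gets deployed – which is exactly where the ideals and blind spots of the makers enter the picture.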
I would very much like to talk to people who work on one or other part of this cycle. I’ve also written a separate callout with more details of what I’m looking for which you can share in your network.
If you work in the machine learning sector and are willing to share your story, then I would really like to talk to you. Who are you? How do you go about your work? Why do you do what you do? I hope to talk to people from all around the world, so feel free to share this with your international network.
Do you think you can help me? If so, please leave your details on the form. Many thanks in advance!
What can you expect?
My articles will be published in both the English and the Dutch edition of The Correspondent. That means two articles every time, two groups of readers, and two sets of readers’ responses. This is a new way of doing things for us. My current plan is to reflect on the reactions of both groups of readers in my weekly newsletter, or, if there is a lot of interesting stuff to report, in a separate piece, so you get to hear about the whole conversation.
Ultimately, I hope that between us we shall arrive at a realistic picture of the influence of artificial intelligence and the people who work on it. And how we should respond. Because you may not be able to change a bridge once it’s been built, but we can all choose to take a different road.
This article was translated from Dutch by Clare McGregor.