Depending on who you speak to, artificial intelligence (AI) is either the answer to all our major problems – from climate change to queuing (if you talk to Jeff Bezos) – or it might just be the last invention that humankind ever makes.
I recently attended the World AI Summit here in the Netherlands, where I heard a talk by Werner Vogels, Amazon’s CTO. Wearing a black T-shirt emblazoned with the slogan “Encrypt Everything”, he waxed lyrical about the promise of AI and machine learning in the retail industry.
I also attended presentations by academics such as Gary Marcus and Stuart Russell, both of whom presented their own models to tap into AI’s potential. Dutch politician Mona Keijzer spoke about her government’s wish to invest €2bn in AI over the next seven years.
No matter how varied the talks or presenters were, everyone seemed to agree: AI is going to change our lives.
I have three problems with that idea.
1: No one knows what AI is exactly
"Stop calling it AI," a friend who’s a data scientist at a multinational said to me recently, slightly irritated. "Nobody calls it that."
It’s a reaction I’ve often encountered in conversations I’ve had about artificial intelligence. Experts were not necessarily annoyed whenever I asked: “What is AI?” – but they were usually laconic. They’d shrug their shoulders, chuckle, and sometimes I’d get a definition out of them. But the gist was almost always: “AI, oh well…”
The researcher Daniel Worrall told me that a few years ago he would not have described his field as AI, though many now do. He thought it was mainly journalists who wanted to stick the label on everything.
An entrepreneur admitted that he told his investors he used AI when, in reality, he often relied on fairly simple statistical models. But what’s the difference, right?
And my friend who’s a data scientist preferred to talk about "machine learning", because almost all AI reporting is really about machines learning something without being given precise instructions. Yet I was not at the World Machine Learning Summit this week.
After “big data”, “algorithms”, and “data-driven”, “AI” is just the latest in a series of buzzwords in the world of data technology. But what is it? No one knows exactly.
That’s quite a problem when large sums of money are being handed out, as in the Netherlands, the United States, and other countries that have pledged to invest in AI. Hyperbole and wild promises lurk wherever there is money to be made.
A car with fancy cruise control is "self-driving"; a city with a few sensors is a "smart city"; a large Excel sheet is "machine learning".
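To make that concrete: the "AI" in many a pitch deck can be a model as plain as the one below – a hypothetical sketch in Python of an ordinary least-squares regression, the same maths an Excel trendline uses. The data and variable names are invented for illustration only.

```python
# A hypothetical example: the kind of "AI" that is really just
# a simple statistical model (ordinary least-squares regression).
import numpy as np

# Invented data: advertising spend (x) and resulting sales (y)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# Fit a straight line y = a*x + b -- whatever the sales pitch calls it.
a, b = np.polyfit(x, y, 1)

print(f"sales ≈ {a:.2f} * spend + {b:.2f}")
```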
2: AI is not inevitable
The term “AI” has a strong Terminator feel. It feels creepy, like robots will take over our lives. People such as Nick Bostrom and Elon Musk think that we should be afraid of "superintelligence", a form of intelligence that far surpasses what humans possess.
"The first ultra-intelligent machine is the last invention that man will ever make," said mathematician Irving Good in 1965. And Apple’s co-founder Steve Wozniak is not necessarily scared – but he does believe humans will become robots’ pets.
As the old saying goes, making predictions is difficult, especially about the future, though it’s clear that we don’t have to worry about a robot invasion right now.
But just in case, Professor Gary Marcus gave some tips in his talk at the World AI Summit for what to do when a robot tries to attack you:
- Close the door (or if things get really rough, lock it).
- Hide behind a bus, dress like a bus, or hide behind a shiny toaster.
- Keep a pack of psychedelic stickers and a giant fan handy. Have something slimy or silly (like jacks or banana peels) to throw on the floor.
- Or just talk in a noisy room with a foreign accent.
In other words, robots are just not that intelligent yet. More broadly, Marcus’s book Rebooting AI shows that it is not just robots that are disappointing but other AI applications too.
For example, the renowned AI researcher Geoffrey Hinton stated in 2016 that "it’s quite obvious that we should stop training radiologists", because he believed algorithms would soon recognise anomalies more quickly than people. Yet while there have been impressive breakthroughs with machine learning in medicine, many of the big promises have since turned out to be a letdown.
Take Watson, the question-answering computer system developed by IBM that won the American quiz show Jeopardy! in 2011. IBM stated in 2016 that Watson would cause a "revolution in healthcare". Instead, several research centres have since abandoned the system because Watson’s recommendations were not only wrong but also dangerous.
I’m reminded of the words of Nobel Prize winner Herb Simon: "Machines will, within 20 years, be able to do any work that a person can do."
That was in 1965.
Although remarkable things have happened in the field of AI – um, machine learning – in recent years, all of the successful models have had a limited goal: winning a board game, recognising a face, translating a sentence. Push a little further, though, and they soon fall apart. Just try asking Siri some strange questions.
Maybe we’ll have superintelligence someday, maybe not; it’s difficult to predict. Whatever form it takes, AI is in human hands and will remain so for a long time. It is not magic or a force of nature – it’s a human creation. So AI is not inevitable; it’s a choice. But if everyone shouts loudly enough that it’s unavoidable, it will eventually become so.
3: AI is not a goal in itself
AI as we see it today – that is to say, machine learning – is simply a set of methods. People decide what is done with them. The same algorithms that power ‘deepfake’ videos can also help diagnose cancer.
The question of whether we need AI at all hardly seems to be asked. The technology, not the problems we’re trying to solve, is often the starting point. How exactly are we going to use AI to stop climate change? To improve healthcare? To make quality education accessible to everyone?
This is what Evgeny Morozov, a philosopher who researches the social and political implications of technology, calls "solutionism" – the idea that any problem can be solved, as long as we have the correct computer code. We become like the drunk who searches for his lost keys where the streetlight shines, rather than where he actually lost them.
We’re looking to technology as the sole solution to all our problems. Yet the best solutions could be in a completely different place. Climate change may require systemic change rather than a better algorithm. Teachers and healthcare workers would probably benefit more from a better salary than a robot assistant.
These are not technological discussions; they’re political ones. We should first decide what we find important and then look for the right solution. Sometimes that will be something AI-related, other times not at all.
But the solutions should be up to us, not the companies that want to sell us their wares.
Some of these ideas have been developed from my work with De Correspondent.