Hi,

Michal Kosinski is in a car. "You know, I would be asked questions by journalists, like 'How do you feel about electing Trump and supporting Brexit?'" Grass flashes in the background. "How do you answer such a question? I guess I have to deal with being blamed for all of it." He stares into the distance; a glimpse of a smile appears on his face.

Kosinski is one of the interviewees in iHuman, a documentary about artificial intelligence showing this week at the IDFA documentary festival in Amsterdam. He’s a prominent psychologist who specialises in measuring personality. And some people blame him for Trump and Brexit.

The story: Cambridge Analytica, a data-mining company, is said to have deployed Kosinski’s methods to manipulate millions of Americans in the run-up to the 2016 presidential election. Before that, they pulled the same trick with the Brexit referendum. Voilà: the smouldering wreck that we are now living in.

It is doubtful whether Cambridge Analytica actually stole Kosinski’s methods, as my colleague wrote in 2017. More importantly, it’s a myth that voters can be so easily influenced.

Researchers Joshua Kalla and David Broockman, for example, looked at 49 field experiments and concluded: "The best estimate of the effects of campaign contact and advertising on Americans’ choices in general elections is zero." (Italics are my own.)

There is no evidence that micro-targeting, the technique Cambridge Analytica is said to have used, actually works. As far as we know, the company’s practices were pure quackery.

Snake oil

And this does not only apply to the manipulation of voters. My colleagues Jesse Frederik and Maurits Martijn have written about the unproven effects of online advertisements. It was no coincidence, then, that both of them sent me the same slide deck last week, by a computer scientist named Arvind Narayanan, entitled "How to recognize AI snake oil".

Narayanan’s point: there are impressive breakthroughs in the area of AI - AlphaGo, facial recognition, analysis of medical scans - but that does not mean everything with the AI label on it automatically works.

He shows a screenshot of promotional material from an HR company that claims to be able to assess, on the basis of a 30-second video, whether someone will be a good employee.

We see a man called DuShaun Thompson, next to tabs with an array of descriptive personal characteristics - adventurous, sensitive, intellect... Thompson, we’re told, is an "assertive" personality type. A "bottom line organiser" who changes the status quo. All in all, he gets a score of 8.98 on a 10-point scale.

"Common sense tells you that this is not possible and AI experts would agree," Narayanan writes. "This product is essentially an elaborate number generator." In general, as his research shows, ‘social outcomes’ are very difficult to predict.

At the same time, there are many problems with the use of such algorithms. Narayanan sums them up: "Hunger for personal data; massive transfer of power from domain experts & workers to unaccountable tech companies; Lack of explainability; distracts from interventions; veneer of accuracy..."

iHuman

Which brings me back to iHuman. The documentary asks too few questions about the accuracy of these types of models. It makes it look as if AI models did indeed bring about the election of Trump and the vote for Brexit. (Even though Kosinski himself doubts that. "Obviously, Big Data analysis didn’t win the election," he once told a Bloomberg journalist. "Candidates win elections. We don’t know to what extent Big Data has helped.")

The documentary also suggests that super-intelligence is inevitable. We see computer scientist Jürgen Schmidhuber, sometimes dubbed the ‘father of AI’, trying to build ‘general AI’. Although plenty of AI researchers have doubts as to whether this will happen (soon), Schmidhuber seems certain.

"Is there a lot of responsibility weighing on my shoulders?" he says towards the end of the documentary. "Not really. Was there a lot of responsibility on the shoulders of the parents of Einstein? The parents somehow made him, but they had no way of predicting what he would do and how he would change the world. And so you can’t really hold them responsible for that."

The documentary highlights themes which play well in the current AI discourse - the surveillance society in China, the dangers of autonomous weapons, bias in some algorithms. But by leaving no room for doubt, while almost constantly playing ominous music in the background, the film-makers contribute to the already-huge hype. Not to mention to the wallets of the charlatans who want us to believe that AI is capable of anything.

Just before you go...

On Tuesday I saw Sorry We Missed You, the film about the life of a delivery driver. Director Ken Loach previously made the heartbreaking I, Daniel Blake, and this new film is yet another gem. Go see it, but be warned: it will make you very sad. So maybe follow it up with Frozen 2.

Prefer to receive this newsletter in your inbox? Sign up here for my weekly notes, thoughts, and questions on Numeracy and AI.