Ben Lazarus
The algorithm myth: why the bots won’t take over
Google once believed it could use algorithms to track pandemics. People with flu would search for flu-related information, it reasoned, giving the tech giant instant knowledge of the disease’s prevalence. Google Flu Trends (GFT) would merge this information with flu tracking data to create algorithms that could predict the disease’s trajectory weeks before governments’ own estimates.
But after running the project for seven years, Google quietly abandoned it in 2015. It had failed spectacularly. In 2013, for instance, it overestimated the peak of the flu season by 140 per cent.
According to the German psychologist Gerd Gigerenzer, this is a good example of the limitations of using algorithms to surveil and study society. The 74-year-old has just written a book on the subject, How to Stay Smart in a Smart World. He thinks humans need to remain in charge in a world increasingly filled with artificial intelligence that tries to replicate human thinking.
As director of the Harding Center for Risk Literacy at the University of Potsdam and former director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, Gigerenzer is considered one of the world’s most eminent psychologists. Steven Pinker is a fan; Dominic Cummings quotes him approvingly.
‘Google Flu Trends completely flopped for the simple reason that uncertainty exists — the future is not like the past,’ Gigerenzer says, stroking his walrus moustache. ‘When using big data, you are fine-tuning the past and you’re hopelessly lost if the future is different. In this case, the uncertainty comes from the behaviour of viruses: they are not really predictable, they mutate. And the behaviour of humans is unpredictable.’ In other words, AI can’t predict ‘Black Swan’ events — major surprises that aren’t anticipated in modelling and plans.
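His objection can be made concrete. The sketch below (in Python, with invented numbers; Google never published GFT’s actual model) fits a simple model to past data in which searches track sickness, then applies it after a media scare changes search behaviour:

```python
import numpy as np

# Illustrative sketch only: GFT's actual model was never published.
rng = np.random.default_rng(0)

# Past data: flu-related search volume tracks actual flu cases closely.
past_searches = rng.uniform(50, 150, size=100)
past_cases = 2.0 * past_searches + rng.normal(0, 5, size=100)

# Fit a simple linear model to the past.
slope, intercept = np.polyfit(past_searches, past_cases, deg=1)

# The future is not like the past: a media scare doubles searching
# while actual illness stays the same.
future_searches = rng.uniform(50, 150, size=100) * 2.0
future_cases = 2.0 * (future_searches / 2.0) + rng.normal(0, 5, size=100)

predicted = slope * future_searches + intercept
overshoot = np.mean((predicted - future_cases) / future_cases) * 100
print(f"Average overestimate once behaviour shifts: {overshoot:.0f}%")
```

The model is faultless on everything it has seen; it fails the moment the link between searching and sickness moves, which is exactly the uncertainty Gigerenzer describes.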
Gigerenzer worries that important decisions are being handed over to AI, despite its clear limitations. He’s concerned, too, that the technology creates huge surveillance powers. ‘I worry about the people behind the technology,’ he says over Zoom from his office in Germany. ‘The government surveillance and the commercial surveillance.’
What scares him is our own passivity. ‘We should be worried about people who aren’t getting smart while technology gets smarter,’ Gigerenzer says. ‘The more sophisticated algorithms become, the more sophisticated people need to become… The algorithms have become better over the last ten years through sheer computational power, video capabilities and other things. On its own, that’s great. But the algorithms have a dangerous double capability: they make our lives easier and more convenient, but they allow us to be surveilled 24/7. We need to have a certain awareness and stop that, otherwise we will end up like China.’
The pandemic has not allayed Gigerenzer’s concerns. ‘The coronavirus crisis has been used by the Chinese to explain to everyone in their country how superior an autocratic system is to democracy,’ he says. ‘The capabilities of AI are ideal for autocratic systems. AI works better the more controlled the environment is. I don’t fear superintelligence coming — that’s for the movies — but I fear people adapt too much and just lean back and let the companies behind the AI change their lives.’
One controversial adoption of algorithms during the pandemic was for Britain’s school exam system. GCSEs and A-levels were cancelled, and grades dished out by an algorithm. The results were wildly unfair. ‘It was a very bad idea,’ says Gigerenzer. ‘How can you predict how a pupil would be graded? There are commercial companies that develop black box algorithms [algorithms that aren’t transparent] which nobody understands, including the teachers and the educational administrators, and their predictive validity was not independently checked. Yet they have had an immense influence on young people’s lives — they should be banned. The algorithms used for pupil grades were probably very simple ones, but they were secret. They were probably running on some kind of linear equation: they take the old results [from the school and the area] and a few other things. But they also intervene in our lives just like surveillance.’
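What might such a ‘linear equation’ look like? The real model was more elaborate and its exact form was not public, so the sketch below is purely hypothetical — invented weights, invented features — intended only to show how a formula of this kind can swamp an individual pupil’s record with their school’s history:

```python
# Hypothetical reconstruction: the real grading model was more complex
# and was not public; weights and features here are invented.
def predict_grade(school_historical_avg: float,
                  pupil_prior_attainment: float,
                  teacher_assessment: float) -> float:
    """Predicted grade on the 9-1 GCSE scale (illustrative weights)."""
    return (0.5 * school_historical_avg      # the school's past results
            + 0.3 * pupil_prior_attainment   # the pupil's earlier exams
            + 0.2 * teacher_assessment)      # the teacher's judgment

# A strong pupil at a historically weak school gets dragged down:
print(predict_grade(school_historical_avg=4.0,
                    pupil_prior_attainment=8.0,
                    teacher_assessment=9.0))  # -> 6.2, not the 8 or 9 earned
```

However the real weights were set, the structural problem is the same: a pupil’s grade depends heavily on inputs the pupil cannot influence.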
Gigerenzer believes some of the most nefarious forms of AI are the algorithms deployed by social media giants, dubbed ‘inhuman intelligence’ by the historian Niall Ferguson. Facebook, Instagram and YouTube use algorithms designed to maximise the amount of time people spend on their sites.
Gigerenzer points to the story of the Facebook whistleblower Frances Haugen, whose leaked documents showed Facebook prioritised ‘growth over safety’. ‘The [internal Facebook] study that was leaked found one in eight teenagers with suicidal thoughts traced those thoughts to Instagram,’ Gigerenzer says. ‘That’s a consequence of algorithms being geared towards social comparison and self-presentation. We’ve had this throughout history, but now it’s 24 hours a day and that hurts.’
He is scathing about YouTube, too. ‘Its algorithms lead you from one video to another, to ever more extreme content. They serve those who pay for advertising and that’s not in the users’ interests.’
Gigerenzer believes he has a solution to the toxicity of social media: we should pay for it. ‘I have made a simple calculation of what it would cost to reimburse Meta, the Facebook group, for its [advertising] earnings, and it’s about £2 per month per person. That is all you need to pay for freedom and for the companies to lose their surveillance capability.’
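His arithmetic is easy to reproduce. Using round figures from around the time he made the calculation — roughly $84 billion in annual advertising revenue and 2.8 billion monthly users, both approximations rather than his published inputs:

```python
# Approximate figures, not Gigerenzer's published inputs.
annual_ad_revenue_usd = 84e9   # Meta's advertising revenue, c. 2020
monthly_users = 2.8e9          # monthly active users, same period
usd_to_gbp = 0.75              # rough exchange rate

per_user_per_month = annual_ad_revenue_usd / monthly_users / 12 * usd_to_gbp
print(f"£{per_user_per_month:.2f} per user per month")  # ~ £1.88
```

Divide revenue by users and months and the result lands close to his £2.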
In our homes, we have already accepted tech surveillance via the many devices and appliances that are connected to the internet. There are reportedly more than 2.2 million ‘smart homes’ in Britain, with two or more ‘smart’ devices such as fridges, coffee machines and TVs networked through a central hub (a smart speaker, control panel or app); and 57 per cent of British homes (15 million) contain at least one smart device.
Smart homes leave people vulnerable to blackmail. Hackers can use ransomware to tap into appliances, including Alexa devices and webcams, to record victims and extort them. Yet in a survey Gigerenzer conducted, only one person in seven was aware that a smart TV can record what you say. Samsung isn’t even shy about this capability. According to its privacy policy, users should ‘be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party’.
‘Just think how many people have a TV in their bedrooms,’ Gigerenzer says with a smile. Worse still, he says, are ‘smart mattresses’. ‘These mattresses will record everything: your heartbeat, how it changes overnight, how many people are in the bed and so on,’ he adds with a dry laugh. ‘Smart homes intensify surveillance — they don’t serve your own interests. They serve the interests of the companies who use your data to sell it on to someone.’
The use of AI for mass surveillance is more obvious — and no less alarming. AI surveillance technology is now used by at least 75 governments. Of these, 63 use Chinese technology, the majority of which is provided by Huawei.
AI surveillance systems work by analysing live video footage to detect unusual behaviour that might be missed by a human eye. But Gigerenzer doubts how effective it can ever be. ‘Mass surveillance doesn’t work for sheer statistical reasons. The key problem is that you have many people, millions of people, whom you surveil, and you have a criminal database of maybe hundreds of thousands, and it could be only a few hundred that you are looking for. You get tons of false alarms.’
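The statistical reason is the base-rate problem: when the people you are looking for are rare, even an accurate system produces mostly false alarms. A worked example, with illustrative numbers on the scale he describes:

```python
# False-alarm arithmetic for mass surveillance (illustrative numbers).
population = 10_000_000    # people passing the cameras
targets = 300              # the few hundred actually being sought
hit_rate = 0.99            # the system flags 99% of real targets
false_alarm_rate = 0.001   # and wrongly flags 0.1% of everyone else

true_alarms = targets * hit_rate                          # ~297
false_alarms = (population - targets) * false_alarm_rate  # ~10,000

precision = true_alarms / (true_alarms + false_alarms)
print(f"Share of alarms pointing at a real target: {precision:.1%}")  # ~2.9%
```

Fewer than three alarms in a hundred point at a real target, and that assumes an implausibly good error rate: the ‘tons of false alarms’ he means.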
But such surveillance capabilities are useful for autocratic regimes. Gigerenzer points to the mass surveillance system used in 40 large Chinese cities, including Shanghai. ‘Everything is recorded. Algorithms search for keywords like “Dalai Lama”. Then your social credit value will go down, and those with low values get punished. In the last year, hundreds of thousands of Chinese citizens were not allowed to buy tickets for bullet trains or planes because they had too low [social credit] values. Or children are not allowed to go to the best private schools. Those with high scores advertise it when they go online dating and they get goodies like premium medical treatment.’
Despite Gigerenzer’s concerns, he admits there are benefits too. ‘AI is a great thing, but you need to understand what it can and can’t do,’ he says. ‘We couldn’t have this conversation over Zoom without AI. [But] I think we should be able to profit from the technology without the intervention of commercial firms who want to change it.’
So what exactly does Gigerenzer want? ‘What I want is to help people open their eyes, so they understand the technology. We need to get the internet back where it was, back to being something that we can enjoy without being surveilled, without having the fear of it being misused. If we don’t, we may well end up with the Chinese model and sleepwalk into full-time surveillance.’
He pauses for a second before concluding with a shrug: ‘It’s convenient, sure. You can do everything with it, but the price you pay is total surveillance.’
Gerd Gigerenzer’s How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms is published on 3 March.