Sam Leith
Should Apple snoop on your iPhone?
Should Apple use software to scan the photo library of every individual iPhone in search of images of child abuse? GCHQ thinks so. So does the National Cyber Security Centre. (Well, you might say: they would, wouldn’t they?) And so does Professor Hany Farid, inventor of a technology called PhotoDNA, which is already used across the web to scan for illegal images.
He told the Internet Watch Foundation that Apple paused proposals to roll out this software last year thanks to 'pushback from a relatively small number of privacy groups'. 'I contend that the vast majority of people would have said, "sure, this seems perfectly reasonable",' he said.
At issue, it should be said, is not the idea of checking for such images altogether. Tech companies already scan cloud-storage services, emails and suchlike for illegal images. But so-called 'client-side' scanning would install software on the individual phone itself – which would, among other things, circumvent the difficulty of intercepting these images on end-to-end encrypted messaging services. You’d be caught at the point of upload. And – for those worried that you’d find yourself hauled into the jug for snapping your tots in the bath – the software would be comparing suspect images to known child-abuse images on the databases of child protection agencies.
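(For the technically curious: the matching is less exotic than it sounds. Below is a minimal sketch, in Python, of the general idea – a perceptual hash of each photo compared against a database of hashes of known images. I should stress that PhotoDNA and Apple’s own system are proprietary and differ in detail; the simple 'average hash', the threshold and the database here are my own illustrative assumptions, not anybody’s actual implementation.)

```python
# Illustrative sketch of client-side hash matching. NOT Apple's or
# Microsoft's actual algorithm: a simple perceptual "average hash"
# stands in for the real, proprietary one.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size greyscale grid; each bit records
    whether that pixel is brighter than the image's mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of known illegal images, as would
# be supplied by a child-protection agency. Value is a placeholder.
KNOWN_HASHES = {0x8F3C_D210_44A1_9B07}

def flag_before_upload(path: str, threshold: int = 5) -> bool:
    """Return True if this photo is a near-duplicate of a known image."""
    h = average_hash(path)
    return any(hamming(h, known) <= threshold for known in KNOWN_HASHES)
```

The reason for a perceptual hash rather than an exact checksum is that a near-duplicate – the same image resized or recompressed – still matches; a fresh photograph of your own children, by contrast, has no counterpart in the database at all.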
This seems to me an interesting conundrum, and not entirely straightforward. Here is a collision between principle and practicality. Is there really such a difference between checking on the device itself and checking images that have been sent over email or stored online? My instinct is that there is. By analogy: customs officers are entitled to rifle through your socks in search of drugs when you pass across an international border; they need a warrant to do so when your socks are in the top drawer at home.
The privacy principle, here, seems to me to be more or less cut and dried. Indeed, it’s almost more of a property principle than a privacy principle. If you buy a physical photo album, would you think it okay if the manufacturers of said photo album were entitled to break into your home whenever they felt like it and look at whatever photos you’ve put in there, just in case you were filling it with images of child abuse? Those of us with a traditional cast of mind will tend to be uneasy with the idea that, once you have bought a physical good and taken it into your own home, its manufacturer could reserve not only the capability but the right to stretch its wobbling finger out and poke around in your property ad lib.
Yet this is, in all sorts of areas, more and more the way things are going. You may remember that in 2009, Amazon discovered that it had been selling copies of George Orwell’s Nineteen Eighty-Four on Kindle without the correct permissions. Solution? It reached in without so much as a by-your-leave and 'updated' its users’ devices to remove the digital books that they had bought. As some users pointed out, this was analogous to Amazon burgling your house and removing a book from your shelf. Even Jeff Bezos, at the time, admitted this was 'stupid, thoughtless, and painfully out of line with our principles'.
More recently I was fascinated by a thread from technology blogger Cory Doctorow about 'VIN [Vehicle Identification Number]-locking'. If you buy a tractor from John Deere, he explained, and it breaks down in your bottom field, you do not have the right to fix it yourself: only manufacturer-approved spare parts can be used. They have to be authorised with the on-board computer using a cryptographic key. If you don’t comply, the company can remotely disable your tractor with a kill-switch from headquarters. It’s only 'your' tractor, you could say, in a distinctly limited sense. Apple products, General Motors cars, even hospital ventilators, are subject to versions of the same highly profitable, highly restrictive post-sale practices.
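(Again, for the curious: the 'cryptographic key' here is ordinary message authentication. The sketch below, in Python, shows the general shape of such a scheme – the protocol, names and key are invented for illustration, not John Deere’s actual system.)

```python
# Illustrative sketch of "VIN-locking": the on-board computer only
# enables a replacement part that carries a manufacturer-issued
# authorisation code. Protocol and names are invented for clarity.
import hashlib
import hmac

# Held by the manufacturer, not by the tractor's owner.
MANUFACTURER_KEY = b"factory-secret"

def issue_authorisation(vin: str, part_serial: str) -> bytes:
    """Run at the dealership: cryptographically bind this part
    to this particular vehicle."""
    message = f"{vin}:{part_serial}".encode()
    return hmac.new(MANUFACTURER_KEY, message, hashlib.sha256).digest()

def on_board_accepts(vin: str, part_serial: str, token: bytes) -> bool:
    """Run by the tractor's computer before enabling the new part."""
    expected = issue_authorisation(vin, part_serial)
    return hmac.compare_digest(expected, token)

# The owner's unauthorised spare has no valid token, so the tractor
# refuses to enable it -- and sits immobilised in the bottom field.
```

In practice a manufacturer would more likely use public-key signatures, so the secret never has to live in the tractor at all; the point is only that the gatekeeping is cryptographic, and so cannot be got around with a spanner.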
Clearly, in a world where property consists of hardware and software, and the latter often benefits from upgrades, patches and bug fixes, there’s a bit of a grey area here. Most of us willingly accept a level of remote interference – but it’s consensual. Before you download a new version of iOS or Windows or whatever it is, there’ll be a box to tick or patch-notes to read. You do in principle have to consent before companies can harvest and use your personal data – however inadequate the legal protections may be, and however ingenious (or unscrupulous) tech companies may be in getting around them. Accepting that you may void the warranty by fixing your own phone or your own tractor is one thing; having the company turn it into a giant paperweight as punishment is another.
So here’s where the practical, or instrumental, argument comes in. Would using AI to scan every iPhone photo album for illegal images not, at a minimal cost to privacy in principle, potentially achieve the tremendous good of catching some of the cruellest and most depraved users of child abuse images? Well, yes, it might. That’s not a trivial case.
But reluctant as I am to deploy a thin-end-of-the-wedge argument: this is the thin end of a wedge. First they came for the nonces, as Pastor Niemöller did not say. You could by the same token argue that any number of different crimes could be prevented by the simple expedient of giving the government (or, God help us, private companies) unlimited powers of surveillance. Most of us, at some point along this continuum, accept that the fact that privacy can protect the guilty is a price we pay for its value in giving us freedom. I seem to remember God took roughly that view when he gave us all the capacity to choose between good and evil for ourselves.
In his latest novel The Every, Dave Eggers imagines a tech company that programs its Alexa-type devices to listen to every domestic conversation – and if its AI detects phrases or tones of voice that are associated with domestic violence, to send the police round. Eggers, who is by inclination against that sort of soft totalitarianism, is honest enough to admit that that case is the one 'that keeps me up at night': 'The justification will be: there’s 10 million cases of domestic violence in the US each year,' he said when I interviewed him about it. 'Surveillance cameras would put a dent in that. How do you justify not having it? You could make an argument, well, OK, sure, domestic violence is catastrophic but privacy is more important. I don’t think it’s a powerful argument for most people.'
I think Prof Farid is wrong; and also right. My sense – though I can’t summon up the force of outraged certainty – is that on balance he’s wrong about the privacy principle being unimportant. But I daresay he’s right about many reasonable people instinctively agreeing with him. As Eggers rightly says, arguments from principle struggle to make headway against our emotional and moral repugnance at child sexual abuse. Apple may well end up going ahead with their plans.
If I were the police, I’d take a lively interest in the people who rush out to buy Android phones in the month or two after that happens.