
OPINION | THERE IS MUCH TO FEAR EVEN WITH LITTLE TO HIDE: FACIAL RECOGNITION HAS NO PLACE IN LONDON

There is very little in the modern world that elicits nightmarish images of Orwellian dystopia with the conciseness and fervidity of the term "biometric data". Defined broadly as "metrics related to human characteristics", it is the primary force behind facial recognition technology. It's not exactly brand-new: everyone from Facebook to Apple has employed variations of the tech to tag you in photos, unlock your phone and check you in at airports. But one of its most recent uses has fallen under the control of London's Metropolitan Police (the Met), who deployed the technology in East London earlier this week, supposedly to scan and check individual faces against a database of 5,000 high-risk criminals. One phrase in particular keeps arising in the rhetoric of those who defend the move: "there's nothing to fear if there's nothing to hide." I call bullshit.


Moscow activist group Sledui advises followers to paint their faces to throw off facial recognition.

Let's look at the data first: while trialling the technology last year, the Met claimed that its system was 70% effective at spotting dangerous suspects, and that false identifications occurred only once in every thousand cases. Both of these claims are false: independent research reveals far, far worse numbers. Prof Pete Fussey of Essex University found the true accuracy to be closer to 19%, and the likelihood of false identification high enough that it was "highly possible" the Met's use of facial recognition would be held unlawful if challenged in court. Fussey's research, which covered the final six of the Met's ten trial deployments, also found "significant operational shortcomings" in their processes and practices. So even if you did have something to hide, the tech would be unlikely to identify you correctly anyway: the system is simply too new and too unreliable to be of any real, practical use. Rolling out an ineffective, buggy operation across a city as large and crime-ridden as London is a waste of time and resources at best, and a very real trigger for social unrest at worst.


"Having nothing to hide is simply not protection enough against a system with everything to see - especially in a city like London."


This brings me to my second point: it is one thing for the technology to be merely ineffective, but it is an entirely different and far more dangerous thing for it to be actively harmful. Gal-Dem recently published a fantastic article on the ways in which facial recognition targets and discriminates against people of colour, which I will summarise briefly: it is well established that most facial recognition technology is around 100 times more likely to misidentify a person from an ethnic minority than a white person. Despite this, the Met has refused to test the racial bias of its own tech - NeoFace - on at least four occasions since 2014 (when it was first made aware of the problem), citing various platitudes about funding and safeguards. Combine this with the 81% error rate in Fussey's findings - if only 19% of matches are correct, then roughly four out of every five flagged individuals are wrongly identified - and you have a system primed for injustice. As politics journalist Moya Lothian-McLean argues, "when those individuals are from demographics who are already seen as suspects, you’re looking at a situation that will not only result in tragic miscarriages of justice: it will actively encourage them." London's racial diversity only magnifies this gaping fault in already inaccurate technology: in a city where crime and race already share a rocky relationship, targeted and ineffectual systems like NeoFace and the Met's deployment of them will do nothing but add fuel to the fire.


My final thoughts on the Met's recent action come from a place of philosophical rather than cultural condemnation: facial recognition poses a serious and general threat to human welfare. As Evan Selinger of the Rochester Institute of Technology argues, the supposed benefits of facial recognition technology - increased security, cultural serenity and reduced crime - can only be realised through the implementation of the exact sort of oppressive, ubiquitous surveillance systems that civil rights and privacy rules aim to prevent. Consent rules, procedural requirements, and boilerplate contracts are simply no match for the power and pervasiveness of such infrastructure. The Met's roll-out of this ineffective, racist technology brings us ever closer to the dystopian horror I mentioned at the start of this article: rampant, nontransparent, targeted drone strikes; overreaching social credit systems that exercise power through blacklisting; relentless enforcement of even the most trivial of laws. Surveillance only stands to gain from the identification of discrepancies, and what is legal today may not be legal tomorrow. Ultimately, having nothing to hide is simply not protection enough against a system with everything to see - especially in a city like London.


Image credit: Guardian Design, katrin.nenasheva
