
“Dangerous and inaccurate” police facial recognition technology exposed in Big Brother Watch report

by Brian Sims
Silkie Carlo

Big Brother Watch is launching a campaign calling for the police service to stop using “controversial” facial recognition technology, which the privacy and civil liberties group has branded “dangerous and inaccurate”. The campaign follows a new Big Brother Watch report – presented in Parliament on Tuesday 15 May – that reveals some startling findings.

According to Big Brother Watch, South Wales Police stores the photos of all innocent people incorrectly matched by facial recognition for a year, without their knowledge, resulting in a biometric database of over 2,400 innocent people. The report also states that the Home Office spent £2.6 million funding South Wales Police’s use of the technology, even though it’s “almost entirely inaccurate”.

Big Brother Watch also asserts that the Metropolitan Police Service’s facial recognition matches are 98% inaccurate, misidentifying 95 people at last year’s Notting Hill Carnival as criminals, yet the force is planning seven more deployments this year.

Again, according to Big Brother Watch, South Wales Police’s matches are 91% inaccurate, yet the force next plans to deploy facial recognition technology at the Biggest Weekend and a Rolling Stones concert.
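For readers wondering what these headline percentages actually measure, the following is a minimal illustrative sketch, assuming – as the report’s figures suggest – that “X% inaccurate” means false matches as a proportion of all matches the system flagged. The 95 false-match count comes from the report; the total of roughly 97 matches is back-calculated purely for illustration.

```python
# Illustrative sketch only (not code from the report). Assumption:
# "X% inaccurate" = false matches as a share of all flagged matches.

def inaccuracy_rate(false_matches: int, total_matches: int) -> float:
    """Share of flagged matches that turned out to be wrong."""
    return false_matches / total_matches

def implied_total_matches(false_matches: int, inaccuracy: float) -> float:
    """Back-calculate the total match count implied by a reported
    inaccuracy percentage and a false-match count."""
    return false_matches / inaccuracy

# The report says the Met misidentified 95 people and that its matches
# are 98% inaccurate -- implying roughly 97 matches in total, i.e. only
# around two correct identifications.
print(round(implied_total_matches(95, 0.98)))  # -> 97
print(f"{inaccuracy_rate(95, 97):.0%}")        # -> 98%
```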

Big Brother Watch’s campaign, which calls upon UK public authorities to immediately stop using automated facial recognition software with surveillance cameras, is backed by David Lammy MP and 15 rights and race equality groups including Article 19, the Football Supporters Federation, Index on Censorship, Liberty, Netpol, the Police Action Lawyers Group, the Race Equality Foundation and the Runnymede Trust.

Shadow Home Secretary Diane Abbott MP and Shadow Policing Minister Louise Haigh MP spoke at the report’s launch event in Parliament.

Cause of controversy

Over the past two years, the police service has begun using automated facial recognition in city centres and at political demonstrations, sporting events and festivals. Particular controversy was caused when the Metropolitan Police Service targeted Notting Hill Carnival with the technology two years in a row, with civil rights groups expressing concern that comparable facial recognition tools are more likely to misidentify black people.

Big Brother Watch’s report found that the police’s use of the technology is “lawless” and could breach the right to privacy protected by the Human Rights Act.

Silkie Carlo, director of Big Brother Watch, said: “Real-time facial recognition is a dangerously authoritarian surveillance tool that could fundamentally change policing in the UK. Members of the public could be tracked, located and identified – or misidentified – everywhere they go. We’re seeing ordinary people being asked to produce ID to prove their innocence as the police are wrongly identifying thousands of innocent citizens as criminals. It’s deeply disturbing and undemocratic that the police service is using a technology that’s almost entirely inaccurate, that they have no legal power for and that poses a major risk to our freedoms. This has wasted millions in public money, while the cost to our civil liberties is too high. It must be dropped.”

Response from the Surveillance Camera Commissioner

Responding to the Big Brother Watch document, Surveillance Camera Commissioner Tony Porter QPM LLB stated: “I welcome the publication of the Big Brother Watch report as, in my view, it adds value to a much-needed debate on a matter of growing public interest – an interest which demands clear legislation, transparency in governance and approach, and a coherent and effective regulatory framework from which the public can derive confidence whenever and wherever their civil liberties are at risk from the state.”

Porter continued: “The effective regulation of the use of facial identification technology (commonly referred to as Automated Face Recognition or AFR) by the police is a priority of the National Surveillance Camera Strategy and a matter which I have been addressing as a priority for some time now, engaging with the National Police Chiefs’ Council, the Home Office, fellow regulators and ministers alike. The police service has to abide by the Surveillance Camera Code of Practice which I regulate under the terms of Section 33(1) of the Protection of Freedoms Act 2012. Those familiar with the content of the Code will know that it’s explicit in that facial identification technologies used by the police in England and Wales will be regulated by it. That’s not to say that I consider existing or indeed anticipated legislation as being wholly sufficient in these matters. I do not. My fellow regulators, namely the Biometrics Commissioner and, in recent times, the Information Commissioner, have added welcome contributions to the debate.”

Tony Porter QPM LLB: the Surveillance Camera Commissioner

Porter went on to comment: “I do think that the police service is genuinely doing its best with AFR and to work within the current and anticipated legal regulatory framework governing overt surveillance. That framework is far less robust than the one governing covert surveillance, yet arguably the evolving technological capabilities of overt surveillance are equal in intrusiveness to surveillance that’s conducted covertly. It’s inescapable that AFR capabilities can be an aid to public safety, particularly against terrorist threats in crowded or highly populated places.”

Expanding on this theme, Porter observed: “Andrew Parker, the Director General of the Security Service, rather eloquently set out the threat context to our society only recently. It’s understandable that there’s an appetite within law enforcement agencies to exploit facial identification capabilities. It’s an appetite doubtless borne out of a duty and determination to keep us safe. This technology already exists in society for our convenience, so it’s arguable that the public will have something of an expectation that it will be used by agents of the state to keep us safe from serious threats, but only in justifiable circumstances where that use is lawful, ethical, proportionate and transparent.”

In conclusion, Porter outlined: “In the context of safety, the public also needs to be safe from unlawful, disproportionate and illegitimate state intrusion, and must have confidence that those technologies have integrity. In my view, the challenge is arriving at a balance. For that to happen, there needs to be a clear framework of legitimacy and transparency which guides the state, holds it to account and delivers confidence and security among the public. I have yet to have confidence that Government has a satisfactory approach to the issue in delivering a framework upon which the police service and others can rely and in which the public can have confidence, but I do believe that we’re on a journey to that destination. It’s a journey that’s fuelled by constructive and challenging debate.”

*The 15 NGOs calling for the police service to stop using automated facial recognition are Big Brother Watch, Article 19, defenddigitalme, the Football Supporters Federation, Index on Censorship, the Institute of Race Relations, Liberty, The Monitoring Group, Netpol, the Open Rights Group, the Police Action Lawyers Group, the Race Equality Foundation, Race On The Agenda, the Runnymede Trust and Tottenham Rights.
