Report raises concerns over Met Police trials of live facial recognition technology

A new report produced by researchers from the Human Rights, Big Data and Technology Project based at the University of Essex’s Human Rights Centre identifies “significant flaws” with the way in which live facial recognition technology was trialled in London by the Metropolitan Police Service.

This is the first independently funded academic report into the use of live facial recognition technology by a UK police force, and it raises concerns about the Metropolitan Police Service’s procedures, practices and Human Rights compliance during the trials.

The authors of the report, Professor Peter Fussey and Dr Daragh Murray, conclude that it’s “highly possible” the Metropolitan Police Service’s use of live facial recognition to date would be held unlawful if challenged in a court of law. They’ve also documented what they believe to be “significant operational shortcomings” in the trials which could affect the viability of any future use of live facial recognition technology.

Professor Peter Fussey

In light of their findings, Professor Fussey and Dr Murray are calling for all live trials of facial recognition to be halted until these concerns are addressed, noting that it’s essential Human Rights compliance is ensured before deployment and that there’s an appropriate level of public scrutiny and debate at a national level.

After reviewing the Human Rights, Big Data and Technology Project Report, the Metropolitan Police Service has chosen not to exercise its right of reply.

Professor Fussey is a leading criminologist specialising in surveillance and society and is currently leading the new Human Rights, Data and Technology strand of Surveillance Camera Commissioner Tony Porter QPM LLB’s National Surveillance Camera Strategy.

Speaking about the report, Professor Fussey said: “This report was based on detailed engagement with the Metropolitan Police Service’s processes and practices surrounding the use of live facial recognition technology. It’s appropriate that issues such as those relating to the use of live facial recognition are subject to scrutiny, and the results of that scrutiny made public. The Metropolitan Police Service’s willingness to support this research is welcomed. The report demonstrates a need to reform how certain issues regarding the trialling or incorporation of new technology and policing practices are approached, and also underlines the need to effectively incorporate Human Rights considerations into all stages of the Metropolitan Police Service’s decision-making processes. It also highlights a need for meaningful leadership on these issues at a national level.”

Concerns over Human Rights law compliance

Dr Daragh Murray

Dr Daragh Murray is a specialist in international Human Rights law with a focus on conflict, counter-terrorism and the application of modern technology. He’s based in the School of Law at the University of Essex.

Dr Murray explained: “This report raises significant concerns regarding the Human Rights law compliance of the trials. The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by Human Rights law. It doesn’t appear that an effective effort was made to identify Human Rights harms or to establish the necessity of live facial recognition. Ultimately, the impression given is that Human Rights compliance was not built into the Metropolitan Police Service’s systems from the outset and wasn’t an integral part of the process.”

In order to compile the report, Professor Fussey and Dr Murray were granted unprecedented access to the final six of the ten trials conducted by the Metropolitan Police Service between June 2018 and February 2019. They joined officers on location in the live facial recognition Control Rooms and engaged with officers responding on the ground. They also attended briefing and de-briefing sessions as well as planning meetings.

The main concerns raised in the report are as follows:

*It’s highly possible that police deployment of live facial recognition technology would be held unlawful if challenged before the courts. This is because there’s no explicit legal authorisation for the use of such technology in domestic law, and the researchers argue that the implicit legal authorisation claimed by the Metropolitan Police Service – coupled with the absence of publicly available and clear online guidance – is unlikely to satisfy the ‘in accordance with the law’ requirement established by Human Rights law

*There was insufficient pre-test planning and conceptualisation, and the Metropolitan Police Service’s trial methodology focused primarily on the technical aspects of the trials. As a result, the research process adopted by the Metropolitan Police gave insufficient attention to addressing the non-technical objectives identified

*The Metropolitan Police Service did not appear to engage effectively with the ‘necessary in a democratic society’ test established by Human Rights law. Live facial recognition was approached in a manner similar to traditional CCTV. This fails to take into account factors such as the intrusive nature of live facial recognition and the use of biometric processing. As a result, a sufficiently detailed impact assessment wasn’t conducted, making it difficult to engage with the necessity test

*The mixing of trials with operational deployments raises a number of issues regarding consent, public legitimacy and trust – particularly so when considering differences between an individual’s consent to participate in research and their consent to the use of technology for police operations. For example, from the perspective of research ethics, someone avoiding the cameras is an indication that they’re exercising their entitlement not to be part of a particular trial. From a policing perspective, this same behaviour may acquire a different meaning and serve as an indicator of suspicion

*There were numerous operational failures, including inconsistencies in the process by which officers verified matches made by the technology, a presumption to intervene, problems with how the Metropolitan Police engaged with individuals, and difficulties in defining and obtaining the consent of those affected

Further issues highlighted

Further issues highlighted include the following:

*Across the six trials that were evaluated, the live facial recognition technology made 42 matches – in only eight of those matches (roughly one in five) can the report authors say with absolute confidence the technology ‘got it right’

*Criteria for including people on the ‘Watchlist’ were not clearly defined, and there was significant ambiguity over the categories of people the live facial recognition trials intended to identify

*There were issues with the accuracy of the ‘Watchlist’ and information was often not current. This meant, for example, that people were stopped despite the fact their case had already been addressed

*The research methodology document prepared by the Metropolitan Police Service focuses primarily on the technical aspects of the trials. There doesn’t appear to be a clearly defined research plan that sets out how the test deployments are intended to satisfy the non-technical objectives, such as those relating to the use of live facial recognition as a policing tool. This complicated the trial process and obscured its purpose

Hannah Couchman

Hannah Couchman, policy and campaigns officer at civil rights group Liberty, observed: “This damning assessment of the Metropolitan Police Service’s trial of facial recognition technology only strengthens Liberty’s call for an immediate end to all police use of this deeply invasive tech in public spaces. It would display an astonishing and deeply troubling disregard for our rights if the Metropolitan Police now ignored this independent report and continued to deploy this dangerous and discriminatory technology. We will continue to fight against the police’s use of facial recognition which has no place on our streets.”

Silkie Carlo, director of Big Brother Watch, explained: “This report is an utterly damning conclusion to the police’s dangerous experimentation with live facial recognition. It confirms what we have long warned – the technology is inaccurate, lawless and must be stopped urgently. This message is now coming not just from us, but from the independent reviewers commissioned by the Metropolitan Police themselves. The only question that remains is when will the police finally drop live facial recognition? The public’s freedoms are at stake and it’s long overdue.”

Silkie Carlo

Carlo continued: “This authoritative and detailed report offers a series of seriously worrying insights, from the police attempting to spin stats to conceal overwhelming inaccuracy rates through to the inclusion of people on ‘Watchlists’ who were not actually wanted by police. This China-style surveillance is undemocratic and has no place in Britain. It’s alarming that the police have been let off the leash for so long and allowed to make serious policy decisions about our civil liberties. Given the legal challenge we’re bringing against the Met and this definitive report, we trust the force will now decide not to use live facial recognition any further. If they do use it again, we’ll see them in court.”

Comment from the industry

Zak Doffman, CEO of Digital Barriers and a regular commentator on facial recognition’s applications for law enforcement, informed Risk Xtra: “The failure rate presented in this news coverage is highly ambiguous. The number of people on ‘Watchlists’ may be in the thousands, which means there’s little guarantee they will walk past and be identified by the technology. However, if two innocent people are mistakenly picked out of tens of thousands of people walking past the technology on any given day, the error rate will of course appear horrendous, which leads to misconceptions about the actual success rate of this technology. That said, this small number of false positives is what’s helping the technology to develop and, with the right expertise, these errors are quick and easy to correct for future operations.”
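Doffman’s point is essentially the base-rate effect: when genuine watchlist matches are rare among the thousands of people passing a camera, even a system with a very low false positive rate will generate more false alarms than true alerts, making headline ‘error rates’ look far worse than the per-face accuracy. A minimal sketch in Python makes the arithmetic concrete; all of the figures below (crowd size, number of watchlisted individuals present, hit rate and false positive rate) are hypothetical assumptions for illustration, not data from the trials.

```python
# Illustrative sketch of the base-rate effect Doffman describes.
# Every figure here is a hypothetical assumption, chosen only to show
# how a small false positive rate dominates when genuine matches are rare.

crowd_size = 20_000          # people passing the camera in a day (assumed)
watchlisted_present = 5      # watchlisted individuals in that crowd (assumed)
true_positive_rate = 0.80    # chance a watchlisted face is flagged (assumed)
false_positive_rate = 0.001  # chance anyone else is wrongly flagged (assumed)

expected_true_alerts = watchlisted_present * true_positive_rate
expected_false_alerts = (crowd_size - watchlisted_present) * false_positive_rate

# Precision: the share of alerts that point at a genuinely watchlisted person
precision = expected_true_alerts / (expected_true_alerts + expected_false_alerts)

print(f"Expected true alerts:  {expected_true_alerts:.1f}")
print(f"Expected false alerts: {expected_false_alerts:.1f}")
print(f"Share of alerts that are genuine: {precision:.0%}")
# With these assumptions: ~4 true alerts against ~20 false ones, so only
# around 17% of alerts are genuine even though 99.9% of passers-by are
# classified correctly.
```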

Doffman continued: “Facial recognition technology can only be as good as the data it’s taught from and the feedback it’s given. Overly simplistic approaches by some actors in the field to database training, verification and testing have led to some systems rightfully being accused of poor recognition performance. Those of us in the industry who take a responsible approach work on actively developing the technology and continually measuring performance against a wide range of people with different characteristics in order to understand where differences in recognition accuracy occur, while also constantly creating new techniques to improve overall performance.”

He went on to state: “No-one has ever been arrested based on being picked out by the technology alone because the final decision on the course of action always comes from a human police officer. Facial recognition is merely a tool for human operators and doesn’t make decisions on its own. It was created to help keep the public safe. When used properly and under the appropriate regulatory frameworks, it’s an application that can be operated by the police to make them more effective, but no less accountable. At Digital Barriers we favour clear regulation and clarity on the use of facial recognition and encourage open discussion around any misconceptions surrounding it.”

*Read the Human Rights, Big Data and Technology Project Report

About the Author
Brian Sims BA (Hons) Hon FSyI, Editor, Risk UK (Pro-Activ Publications)

Beginning his career in professional journalism at The Builder Group in March 1992, Brian was appointed Editor of Security Management Today in November 2000 having spent eight years in engineering journalism across two titles: Building Services Journal and Light & Lighting.

In 2005, Brian received the BSIA Chairman’s Award for Promoting The Security Industry and, a year later, the Skills for Security Special Award for an Outstanding Contribution to the Security Business Sector. In 2008, Brian was The Security Institute’s nomination for the Association of Security Consultants’ highly prestigious Imbert Prize and, in 2013, was a nominated finalist for the Institute’s George van Schalkwyk Award. An Honorary Fellow of The Security Institute, Brian serves as a Judge for the BSIA’s Security Personnel of the Year Awards and the Securitas Good Customer Award.

Between 2008 and 2014, Brian pioneered the use of digital media across the security sector, including webinars and Audio Shows. Brian’s actively involved in 50-plus security groups on LinkedIn and hosts the popular Risk UK Twitter site. Brian is a frequent speaker on the conference circuit. He has organised and chaired conference programmes for both IFSEC International and ASIS International and has been published in the national media.

Brian was appointed Editor of Risk UK at Pro-Activ Publications in July 2014 and as Editor of The Paper (Pro-Activ Publications’ dedicated business newspaper for security professionals) in September 2015. Brian was appointed Editor of Risk Xtra at Pro-Activ Publications in May 2018.
