Facial Recognition Technology: Preventing The Loss of Data

Steven Webb

While we all continue to take and share a huge number of photos with an ever-growing array of smartphones and tablets, there’s now increasing concern about how people’s biometric data is kept and subsequently used. This is mainly due to a certain level of consumer ignorance about what can happen to biometric data, coupled with an increasing realisation by Government and civil liberties groups alike that facial identities need to be protected. The European Union’s (EU) General Data Protection Regulation (GDPR) and increasing sensitivity over biometric data mean that today’s organisations must ensure they fully understand the risks associated with losing facial data. Here, Steven Webb and Anthony Leather outline their views on the matter.

It’s really not at all surprising that the number of photos taken each year continues to grow. Mobile device penetration is on the increase and the use of data sharing platforms has also grown, while our appetite to openly share our experiences online shows no sign of slowing down.

Several online sources estimate that around 1.4 trillion photos will be taken in 2019 alone. The number of photos shared through online platforms provides some further context and suggests that estimate may even be conservative. Facebook has 300 million photos uploaded daily and Instagram 95 million, while Snapchat sees three billion “snaps” uploaded per day. Regardless of whether it’s 1.4 trillion photos or many more, it’s a huge number.

People readily share their photos, and with them their personal data, on a range of social platforms, and share their facial image with Governments and commercial organisations in return for a service – whether it’s the ability to cross a border or to open a new bank account.

Improvements in accuracy

For its part, facial recognition technology is expanding rapidly alongside the growth in the number of facial images, improving in accuracy and becoming increasingly available to all. A comparison of estimates from seven different analyst firms that track the facial recognition market suggests an industry growth rate of around 23% between 2015 and 2025.

Facial recognition analytics market growth

In addition to its increasing use, the software is becoming more accurate. The National Institute of Standards and Technology (NIST), which has become the de facto global evaluator of facial recognition efficacy, has reported that its latest test showed a considerable reduction in error rates. In other words, the technology is improving. “Between 2014 and 2018, facial recognition software became 20 times better at searching a database to find a matching photograph. That’s according to the NIST’s evaluation of 127 software algorithms from 39 different developers. In short, the bulk of the industry.”

This improving accuracy has also coincided with increasing availability. Digital platforms such as Google Images offer facial recognition capability that allows consumers to search for photos of themselves on the Internet. Betaface and PimEyes provide similar services, while PicTriev and FaceApp allow users to change their appearance. In addition to platforms, open source software such as OpenFace provides developers with the tools to build new software and applications, which means that the technology is available to all (including organised crime networks).
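To illustrate the underlying mechanics rather than any specific product, open source libraries such as OpenFace typically reduce a face to a numeric embedding vector; verifying an identity then amounts to a similarity comparison against an enrolled template. The following is a minimal sketch in Python using toy, made-up embeddings (real systems use embeddings of 128 or more dimensions produced by a neural network):

```python
import math

def cosine_similarity(a, b):
    # Compare two face embeddings (lists of floats) by the cosine of the
    # angle between them; values near 1.0 suggest the same identity.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe, enrolled, threshold=0.8):
    # A verification decision is simply a threshold on the similarity score;
    # the threshold trades false positives against false negatives.
    return cosine_similarity(probe, enrolled) >= threshold

# Toy four-dimensional embeddings, invented for illustration only.
enrolled = [0.1, 0.9, 0.3, 0.4]
same_person = [0.12, 0.88, 0.31, 0.38]
different_person = [0.9, 0.1, 0.8, 0.05]

print(is_match(same_person, enrolled))       # True
print(is_match(different_person, enrolled))  # False
```

This also illustrates why a stolen template database matters so much: the stored embeddings are effectively the credential itself.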

Reasons for continued growth

There are strong reasons why facial recognition deployment will continue to grow. The face is a ‘Unique Identifier’ and is being used in an increasing number of critical applications as proof of identity. A well-known use case is in border control where biometrics match a traveller’s face to their records to provide entry to a country.

It also has key applications in ‘Know Your Customer’ where financial institutions need to be sure of the identity of their customers as part of anti-money laundering Best Practice. From a consumer perspective, facial recognition helps users access phones and laptops. Going forward, this is likely to extend to accessing homes and cars.

Facial images also help to build trust. In large corporations spread across the world, putting a face to a name can help build rapport between distant colleagues. This will inevitably lead to employee databases containing photos and personal data. Consumer-facing organisations are also collecting more customer data and images to help them improve and tailor experiences, thereby increasing loyalty and sales per head.

The growing number of facial image use cases is leading to an increase in biometric data being stored on corporate and Government databases. This can be a problem for organisations if the data isn’t adequately protected.

Many people assume that large organisations who are financially stable and outwardly trustworthy will be holding data securely. This isn’t always the case, though. Although regulated industries holding confidential customer data have invested significantly in cyber security technologies, policies and training, there will always be the risk of data loss and it’s a problem that comes in several guises. Data may be stolen by a malicious external actor, leaked by a disgruntled employee or lost accidentally. Any of these scenarios can result in organisations losing confidential biometric data.

Consequences of lost data

Sources of data loss

The consequences of losing data (or a face, if you will) can be severe. Reputational damage can have a long-term impact on organisations, in turn leading to the loss of customers, investors and partners. Equally so, the financial consequences of data loss can be crippling, from poor stock market performance through to a loss of revenues and regulatory fines that impact company profitability.

There are an increasing number of Case Studies that highlight the data security challenges organisations face. Although Equifax is an old example of data loss from 2017, it’s very relevant now that the financial consequences are clear. The attack by an external actor resulted in the loss of Social Security numbers and account numbers and affected around 147 million people. The damage to the organisation has been significant. The share price fell from $141 to $93 in the space of just two weeks and has taken two years to recover. The organisation has also settled on fines and customer support that will amount to around $700 million.

British Airways also lost a significant amount of customer data in September 2018 and has been fined £183.9 million for its breach of the GDPR. Information Commissioner Elizabeth Denham has gone on record as saying: “People’s personal data is just that – personal. When an organisation fails to protect it from loss, damage or theft it’s more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

In August this year, two clear cases emerged where biometric data was lost. Binance, a cryptocurrency exchange, lost customer verification data – a ‘Know Your Customer’ use case – that included facial images. The Suprema story also broke in the same month when researchers discovered a route into the databases of the South Korean organisation which holds the biometric records for many global customers. It’s not clear how much data has been stolen, though the researchers estimated that it could be as many as 30 million records including authentication data for around one million users. The consequences for Suprema are not yet clear, though the company’s share price dipped 20% and the organisation’s reputation has already been severely damaged.

Regulation and consumer fears

The objective of the GDPR, which came into force on 25 May 2018, is to encourage organisations to protect customer information (including biometric data). Those that fail to do so can be hit with fines of up to 4% of global annual turnover. The wording related to biometrics in the GDPR is as follows: “‘Biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.”

Although there’s no federal law in the United States, there are now several state acts focused on the protection of personal data. The financial consequences of losing biometric data and the increasing sensitivity about the potential misuse of biometrics have recently resulted in some interesting Case Studies.

In May this year, San Francisco voted to ban facial recognition across the city. A month later, Microsoft deleted a database of 100,000 people that contained ten million images to help train facial recognition systems. In August 2019, the Irish Government was asked to delete a database of 3.2 million people’s facial biometric data by the Data Protection Commission. A few weeks later, the Swedish Data Protection Authority fined a Swedish school for trialling facial recognition to track class attendance. This was deemed to be in contravention of the GDPR.

What can we make of these trends? Westlands Advisory believes that, while facial recognition will continue to grow in scope, as indeed will the size of databases containing facial images, the sensitivity around biometric data will not diminish and the threat of financial and reputational damage to organisations that lose data will persist. Put simply, today’s organisations need to be better at protecting customer information.

Protecting facial images

Anthony Leather

There are many positive applications for facial recognition. The industry is still relatively immature and will require frequent evaluation of practices and revision to policies in order to ensure that the use of the technology remains relevant and proportionate.

Our focus here is not to debate what the norms should be, but rather to raise the issue of how to protect facial data. Following a review of suggested approaches, we recommend that organisations should consider the following:

*Conduct a risk assessment including a Data Protection Impact Assessment that specifically reviews how facial data is stored, where the vulnerabilities rest and how to mitigate the risk of facial data loss

*Ensure that network, application and endpoint security policies are effective

*Deploy multi-factor authentication

*Ensure that databases are not public-facing

*Anonymise data through encryption and hashing

*De-identify stored facial images

*Deploy data loss prevention tools and behavioural analysis in order to protect against deliberate or accidental internal data loss

*Ensure incident response and data recovery policies and plans are in place to reduce the impact of a data breach

The benefits of biometric data are significant, and organisations should not be afraid of exploring how the technology can improve operational performance or customer experience. However, it’s also clearly the organisation’s responsibility to protect the biometric data that’s being held.

Equal consideration should be given to how customer services can be delivered while limiting the risk of losing data – and, with it, the trust of the customers concerned.

Steven Webb and Anthony Leather are Directors of Westlands Advisory (www.westlandsadvisory.com)

*For further information on the subject of facial data protection, Westlands Advisory is moderating a free-to-attend webinar on the topic on Tuesday 24 September. Register to attend by visiting this link: https://zoom.us/j/123361237 
