On facial recognition, police, and regulation of new technologies

by Colin Gavaghan, Alistair Knott, James Maclaurin, Kristiann Allen and Andrew Chen

Last week, RNZ reported that NZ Police had been testing facial identification software from Clearview AI without “the necessary clearance.” Although the trial was seemingly abandoned at an early stage, its timing meant that the story was always likely to attract attention. Concern about contact tracing apps had already focused attention on privacy issues around technology. And just a few weeks before, international controversy had been generated around an announcement from Harrisburg University – subsequently deleted – that they had developed facial identification software that was “highly predictive of criminality.”

It was hardly surprising, then, that the prospect of using facial recognition technology here has raised some eyebrows. But just how concerned should we be about the trial? Should we be reassured that it seems not to have got beyond the first step? Does this technology actually pose new threats to our privacy, or other concerns? And what sorts of “clearance” should there have to be for this kind of technology?

What can the tech do?

Using pictures of people’s faces, artificial intelligence and machine learning techniques can be used to find features that distinguish one face from another. This gives engineers and developers a “face model” that can be applied to different images, describing each person’s appearance in terms of that model. The challenge with these systems is that in a large population, the likelihood of two or more people looking similar becomes quite high – we even have the word doppelgänger to describe this! If the face model doesn’t capture enough detail, the chances of confusing identities increase. Over time, researchers have built increasingly detailed face models that pick up finer and finer distinctions between people’s faces, and with techniques like deep neural networks, facial recognition systems are getting more accurate at identifying people from their faces.
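
To make the idea of a “face model” concrete, here is a minimal sketch in Python. The embed_face function below is a hypothetical stand-in for a trained deep neural network (real systems use proprietary models, and nothing here reflects Clearview’s or NZ Police’s actual software); the point is only that each face is reduced to a vector of numbers, and two faces are judged to be the same person when their vectors are close.

```python
import numpy as np

EMBEDDING_SIZE = 128  # a typical embedding length; purely illustrative


def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained face-embedding network.

    A real system would run the image through a deep neural network and
    return a vector summarising the face. Here we just derive a repeatable
    pseudo-random vector from the image so the sketch runs on its own.
    """
    rng = np.random.default_rng(seed=int(image.sum()) % (2 ** 32))
    vec = rng.normal(size=EMBEDDING_SIZE)
    return vec / np.linalg.norm(vec)  # normalise so distances are comparable


def same_person(image_a: np.ndarray, image_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Treat two faces as the same person if their embeddings are close."""
    distance = np.linalg.norm(embed_face(image_a) - embed_face(image_b))
    return distance < threshold
```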

Clearview AI took this a step further by compiling a massive dataset of 2.8 billion images of people’s faces. It took the images from news sources and social media websites where they were publicly available (even where this breached the websites’ Terms and Conditions), and then used them to train its algorithm. A very large dataset exposes the system to a wide variety of faces, helping it learn what distinguishes one face from another. It also helps with the police task of identifying a target person from a photo: the larger the dataset, the more likely it is to contain that person. Of course, if the target person isn’t in the system’s dataset in the first place, the system can’t succeed. What police want it to do in that case is say ‘I can’t identify that person’ – but there’s a worse failure mode, in which it wrongly identifies the target as someone it does know about. If the dataset happens to be biased towards certain groups, then errors of this kind are likely to prejudice those groups. Experts suspect that, since Clearview is a US company with a US focus, much of its dataset comes from US sources, leaving people in New Zealand relatively underrepresented. The short New Zealand trial ended after Police found that the system didn’t work very well on the local test images they submitted.
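
Continuing the sketch above, matching a probe photo against a large gallery is essentially a nearest-neighbour search, and the choice of threshold determines whether the system says ‘I can’t identify that person’ or returns a confident but wrong name. The function and parameter names below are assumptions for illustration only, not a description of Clearview’s system.

```python
import numpy as np


def identify(probe_embedding, gallery, threshold=0.6):
    """Return the name of the closest face in the gallery, or None.

    gallery maps a person's name to their face embedding (a NumPy vector,
    as in the earlier sketch). The threshold value is illustrative only.
    """
    if not gallery:
        return None
    names = list(gallery)
    distances = np.array([np.linalg.norm(gallery[name] - probe_embedding)
                          for name in names])
    best = int(np.argmin(distances))
    if distances[best] > threshold:
        return None        # honest answer: "I can't identify that person"
    return names[best]     # may still be a false match (a doppelgänger)
```

In this toy setup, the worse failure mode described above corresponds to a threshold set too loosely, or a gallery that represents some groups poorly: the search still returns the nearest name, but the nearest name belongs to the wrong person.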

Should we worry about facial recognition?

How concerned we should be about this technology presumably depends a fair bit on what it would end up being used for. Potential uses of facial recognition software range from the positive – locating missing children for instance – to the fairly sinister: last year, an article in The Guardian claimed that the Chinese government “uses facial recognition for racial profiling and … tracking and control of the Uighur Muslims”, and that the technology is used to “verify students at school gates, and monitor their expressions in lessons to ensure they are paying attention.”

At the moment, we have no idea what the NZ Police researchers had in mind, or whether they had even thought much beyond simply testing a novel tool. But even if we assume that the uses our police would have for such technology are more towards the benign end of the spectrum – identifying suspects and victims – we may still have concerns about facial recognition technology in general.

The concern about privacy is probably the best known, but there are other issues that arise. Some studies have reported that the technology has considerable difficulty with non-white faces. This can render the technology less effective, but more importantly, ‘false positive’ results can have significant implications for those affected. As ACLU’s Jay Stanley said, “One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse.” And being wrongly flagged up as a terrorist, for instance, can have very grave implications indeed.

And facial ID technology might not just be available to the police. As New York Times journalist Kashmir Hill discussed on RNZ at the weekend, it could be used by any passing stranger who is curious about – or who maybe takes a fancy to – you in the street. Imagine a technology that allows them, at a glance, to match your face to your online profile – including maybe your relationship status, where you work, and all the other little details that can be scraped from your online footprint.

Then again, we might think that this isn’t so very different from what social media and Google searches enable us to do already. Maybe that particular privacy horse has already bolted. Opinions and contexts may differ. What we might say, though, is that all citizens likely to be affected by this sort of technology (which probably means all citizens) should have a chance to air those opinions before the technology is pressed into service. If we think there’s something distinctively creepy or even dangerous about facial ID, we should have a chance to voice those concerns, and some sort of input into the decisions about whether and where those kinds of technologies are deployed, and what rules are put in place around them. An ongoing research project led by A/Prof Nessa Lynch at Victoria University of Wellington, funded by the Law Foundation, is looking into facial recognition technology and developing a legal and ethical framework for its use in New Zealand.

What sort of ‘clearance’ is currently required?

The media reporting of the Clearview story referred to a failure to get proper “clearance” or “sign-off”. But little detail was given about what sort of official clearance is actually required for a trial like this. The Privacy Commissioner has clarified to us that, while it would certainly be considered good practice to run such a proposal past his office, there is no formal requirement to do so.

As to what sort of internal approval processes the police themselves may have, we have very little information to work with. The Police Commissioner, Andrew Coster, was sufficiently concerned to order “a stocktake of any surveillance-type technologies that we may be using or trialing”. But it isn’t obvious what sorts of hoops such uses or trials would ordinarily be expected to go through. Does this lack of clarity extend internally? Are police – or other public servants, for that matter – equipped and required to know precisely what sort of scrutiny is expected around new digital technologies?

In 2018, DIA and Stats NZ published the Algorithm Assessment Report, a stocktake of the use of algorithms across government. While a valuable resource, and reassuring in some respects, the report painted an uneven picture of the assurance processes in place for ethical and privacy standards, let alone the knowledge of how to implement them.

Over the past few years we’ve seen a number of stories like last week’s: revelations about the use of algorithmic tools that seem to have caught Ministers and senior officials by surprise, and which resulted in a degree of adverse publicity. This has led to proposals for a more formal process of oversight for such tools, as the previous Chief Science Advisor recommended to both the past and current administrations. This could take the form of a regulatory body within the public sector that would assess proposals to develop, purchase or deploy such tools. It could involve in-house processes like the PHRaE (Privacy, Human Rights and Ethics) framework, developed and used at MSD to assess all proposed uses of personal information in the development of new services. It would almost certainly want to draw on the expertise of the Office of the Privacy Commissioner. But privacy is not the only concern about such new technologies, and any regulatory or oversight body would need to be equipped to assess matters such as transparency, bias and accuracy.

Precisely what this assessment process or regulatory agency would look like is a matter for further deliberation. How wide should its remit be? Should it apply only to new tools, or to those already in use, whether or not they are being put to new uses? Should it exist to offer advice and publish best-practice standards, or should it be empowered to deny proposals that don’t meet agreed standards? At what point in the development or testing of a new tool should the process be engaged? And precisely what sorts of technologies should come within its scope?

These are all important questions, and our team may revisit some of them in future blog posts. But it seems difficult to dispute that something of this nature is needed in New Zealand. Public trust is a resource that’s hard to acquire and easy to squander. When we’re talking about using personal data for surveillance purposes, it’s not at all hard to imagine how trust could be lost. And some of the potential harms around this sort of technology are far from fanciful.

Neither, though, are they inevitable. Some of these technologies could also offer major benefits for New Zealand, and the authors have no desire to see babies discarded with bathwater. To maximise the potential gains and minimise the potential harms, there should be some sort of consistent and transparent process for oversight and approval of trials and uses of such tools. Ministers certainly don’t want to be surprised by revelations like this, and it shouldn’t be left to journalists and academics to do the digging.

Colin Gavaghan, Alistair Knott, and James Maclaurin are with the University of Otago’s Centre for AI and Public Policy, and are co-PIs on a New Zealand Law Foundation-funded project on AI and law. Kristiann Allen and Andrew Chen are with the University of Auckland’s Koi Tū – The Centre for Informed Futures. Between them, the authors have backgrounds in law, philosophy and ethics, AI and computer systems, and public policy.
