UK Report on Facial Recognition AGAIN notes need for trained human operators

An article has been published in the British Journal of Criminology entitled:

‘Assisted’ facial recognition and the reinvention of suspicion and discretion in digital policing

The most relevant section is reproduced below, but the full article is available at the link at the end. It shows how AFR (automated facial recognition) depends on human intervention.

Human–computer interaction

The potential of AFR is realized when computer-generated matches are resolved through human activity. In most instances, this either involves an initial decision to disregard a match or, conversely, a suspect being engaged by street-based intervention teams tasked with conducting additional identity checks with AFR-matched individuals. In practice, this ‘second stage’ of activity was an arena of contestation and negotiation.

Human operators, therefore, constitute an essential component of the AFR process and play the primary role in adjudication. Two officers usually carried out this role. Many received formal or informal training prior to deployment, and some occupied non-operational roles, meaning AFR was a novel experience for them. Variances in operator capability were evident across both research sites and these disparities mirrored those encountered in other forms of biometric policework, such as that identified in DNA typing activities (Cole 2002).

Such considerations shaped the deference some officers gave to the algorithm and, conversely, why others were more sceptical of its performance. During one SWP (South Wales Police) deployment, one operator was visibly frustrated with a lack of correct alerts being generated in their van, while, on the same day, operators conducting surveillance elsewhere in the city centre had succeeded in locating and arresting multiple ‘persons of interest’. As a result, this operator became less trusting of alerts generated by the system. Despite his habituation to the system, the technology thus generated a weaker sense of suspicion in him. Similarly, in London, once an AFR match had first been deemed incorrect by operators (on the third day observed), the overall rate of disconfirmed alerts increased slightly. Such incidents demonstrate the varied responses among human operators of AFR. However, while deference to suggestions generated by algorithmic decision-making was largely habitual—and with 26 of 42 computer-generated alerts considered suitably credible to intercept a matched individual in London—it is important to acknowledge the role of some officers’ (techno)scepticism.

Roles and interactions between adjudicating officers undertaking this duty varied considerably. Sometimes, one operator would be looking for the person in the crowd while the other was describing them aloud from the image captured on the screen: operators reported using key facial features, such as eyes, nose, mouth, jawline and hairline to inform their decisions. Some officers also recruited background information (e.g. offence type) for their deliberations, even though it was not relevant to a subject’s appearance. At other times, contrasting approaches occurred within the same operational team. In London, any disagreements over whether to launch an intervention were always resolved in the affirmative, though this was not the case during SWP deployments.

Technical difficulties sometimes limited the role of AFR. During mobile deployments in central London, radios continually failed to work inside the AFR van. The corollary effect of these network fractures is illustrated by the following field note:

Fourth AFR match of the first Soho deployment (0.57 threshold). The officer adjudicating images attempts to radio a request to intercept a suspect. Responding to failures of both the radio and the mobile tablets, he leant out of the van and tried to radio again. When this failed, he took off after the suspect on foot. By this time the individual had crossed almost the length of Leicester Square. Limited adjudication time. Decision-making was near instant. With reflection, significant differences were apparent between the probe and gallery images. The gallery image had moles on the suspect’s face, the probe image had none. While difficult to ascertain at first, most tellingly they had different colour eyes. A false positive (MPS, 17 December 2018, 14:22).
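The two-stage flow the field note describes—the system surfacing any pairing whose similarity score clears a threshold, then a human deciding whether to intercept—can be sketched minimally. The 0.57 threshold is taken from the field note; the data shapes, names and scoring function here are illustrative assumptions, not the actual systems used by SWP or the MPS:

```python
# Minimal sketch of the alert-then-adjudicate flow. The 0.57 threshold
# comes from the field note; everything else is an illustrative assumption.
from dataclasses import dataclass

MATCH_THRESHOLD = 0.57  # threshold reported in the field note


@dataclass
class Alert:
    probe_id: str    # face captured by the live camera
    gallery_id: str  # watchlist image the system matched it to
    score: float     # algorithm's similarity score


def generate_alerts(scores, threshold=MATCH_THRESHOLD):
    """The system's role: surface every pairing at or above the threshold."""
    return [Alert(p, g, s) for (p, g), s in scores.items() if s >= threshold]


def adjudicate(alert, operator_confirms):
    """The operators' role: a human decides whether to intercept.
    `operator_confirms` stands in for the visual comparison of key
    facial features (eyes, nose, mouth, jawline, hairline)."""
    return "intercept" if operator_confirms(alert) else "disregard"


# Example: one pairing clears the threshold and becomes an alert; the
# operator then rejects it on visual comparison (e.g. different eye
# colour), so it is logged as a false positive.
scores = {("probe_1", "watchlist_17"): 0.61,
          ("probe_2", "watchlist_03"): 0.42}
alerts = generate_alerts(scores)
decisions = [adjudicate(a, lambda alert: False) for a in alerts]
```

The point of the sketch is the division of labour: the threshold only controls how many candidate matches reach the operators, while the final suspicion judgement remains a human one.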

Technical difficulties, therefore, not only limited AFR capability; compensating for them also compressed the time available for discretionary adjudication. During South Wales’ initial deployments for the Champions League Final, when the system was still being configured, it was slow and often produced ‘lag’. For example, 90 seconds elapsed between the camera timestamp and real alert time, ‘which was especially evident where a potential match was brought up by the system’ (field note, SWP, 31 May 2017). This relationship between different components of human-technical networks also reflects a critique among accounts of surveillance informed by assemblage theories rehearsed above: while surveillance practices involve intricate relationships between different forms of technology, they are not necessarily enhanced by such unions. Single points of failure inhibit the network and reduce overall surveillance possibilities.
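The 90-second lag can be made concrete with a short sketch: the gap is measured between when a frame was captured and when its alert actually surfaced to operators. The timestamps below are invented for illustration; only the 90-second figure comes from the field note:

```python
# Illustrative sketch of the ~90-second lag reported during the Champions
# League Final deployment. Timestamps are invented; only the 90-second
# gap comes from the field note.
from datetime import datetime, timedelta

camera_timestamp = datetime(2017, 5, 31, 15, 0, 0)   # when the frame was captured
alert_timestamp = datetime(2017, 5, 31, 15, 1, 30)   # when the alert surfaced

lag = alert_timestamp - camera_timestamp
assert lag == timedelta(seconds=90)
```

By the time operators saw such a match, the person pictured could already be well outside the camera's field of view, which is precisely what compressed the time left for discretionary adjudication.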

https://academic.oup.com/bjc/article/61/2/325/5921789