5 Common-Sense Facial Recognition Policies

The 2015 death of Baltimore’s Freddie Gray Jr. in police custody led to ten days of public outcry. The aftermath of the protests further damaged the public’s trust once people learned what police had been up to. The ACLU of Northern California uncovered the fact that the Baltimore County Police Department used facial recognition tools coupled with Geofeedia, a social media search and discovery tool, to perform real-time, location-based social media monitoring during the protests. According to a Geofeedia marketing case study, police officers were able to examine posts and “discover rioters with outstanding warrants and arrest them directly from the crowd.”

Five years have passed since then, and little has changed. Earlier this month, the Los Angeles Police Department admitted it had used facial recognition technology 30,000 times over the past few years. Even small police departments are getting in on the technology, partnering with video doorbell maker Ring to gain access to video sources. And law enforcement agencies across the country continued using facial recognition during the Black Lives Matter protests, despite the fact that the technology is problematic, to say the least.

“This technology is being acquired and implemented in secrecy by police and other agencies across the country — mostly local police — without any sort of community input, without any transparency,” says Lauren Sarkesian, a senior policy counsel at New America’s Open Technology Institute (OTI). “It’s being used to track potentially thousands of protesters at a time, without any regulation, to try to identify suspects. It is antithetical to both our First and Fourth Amendment rights. It’s like, let’s just take all the faces that are out here as prototypes and log them over time. Down the road, if somebody does something, we’ll just go back through all these faces. There’s no threshold of suspicion.”

A Double-Edged Sword

Facial recognition can be used to track an individual over time, identifying where they have been and what they support or believe in, chilling speech. It has also been shown to be biased and flawed, falsely identifying Black and Asian faces 10 times more often than white faces. These are the reasons organizations such as OTI and the American Civil Liberties Union (ACLU) have pushed back, and why municipalities such as Boston; Oakland, Calif.; and Baltimore have banned or moved to ban its use. Portland this month passed the strongest facial recognition ban in the country, prohibiting both public and private use of the technology. Even Congress has moved to regulate it, introducing the Facial Recognition and Biometric Technology Moratorium Act of 2020, a bill that would prohibit its use by the federal government.

Proponents, however, say that facial recognition helps them track down criminals more quickly and effectively, helping to bring justice to victims. The police departments I reached out to laud facial recognition and say it helps keep citizens safer. Kristen Metzger, deputy director of the Washington, D.C. Metropolitan Police Department’s Office of Communications, says that MPD does not use facial recognition to “live monitor” or identify members of the public who are engaged in lawful conduct. “It is only used to identify subjects who are observed engaging in unlawful (criminal) conduct,” Metzger says.

The Information Technology & Innovation Foundation sums it up more succinctly: “There are many opportunities to use facial recognition technology as an investigative tool to solve crimes; as a security countermeasure against threats in schools, airports, and other public venues; and as a means to securely identify individuals at ports of entry.”

This sentiment is why bans and proposed bans are unlikely to stick in the long run. The proverbial genie is out of the bottle, and the technology will never completely go away. With that in mind, there are steps that users and municipalities can take to make sure they are protecting the rights of all citizens.

Build in transparency and local oversight

One of the biggest problems with law enforcement’s use of facial recognition is that, in most cases, it’s simply put into place, says Jameson Spivack, a policy associate at the Center for Privacy and Technology at Georgetown Law. There are no announcements or debates. Agencies purchase and use the software and services, and the public never knows.

Given that law enforcement generally sees facial recognition as a positive thing, while public sentiment is overwhelmingly negative, it’s important to have frank discussions about the technology, covering what it is, how it’s being used, and what its limitations and benefits are, before a license or purchase is made.

“The conversation in the public around face recognition has largely focused on accuracy and bias, which are incredibly important things that we need to worry about,” Spivack says. “But we need to extend the conversation beyond that. We need to ask how this technology changes power dynamics, what capabilities it gives the police and the government, and how our liberties might be adversely affected by those changing dynamics, potentially in unanticipated ways.”

Creating a working group or committee around the research, purchase, and use of the technology is another good idea, especially if it’s made up of law enforcement, citizens, and local politicians or council members.

Mandate a warrant requirement

One of the main complaints about facial recognition is the unfettered way some law enforcement organizations use it. The BLM protests are a perfect example. Law enforcement has scanned video footage and photographs posted online, matching them against a variety of databases, including Department of Motor Vehicles repositories. This was the case in New York City, where the NYPD used facial recognition to track down a protester accused of screaming in an officer’s ear. Another report pointed out that “surveillance of Black Lives Matter protests has become a common sight, with police officers carrying camcorders alongside riot shields, batons and guns in the US and elsewhere. Some protests have even been watched from above, with drones circling the scenes where protesters congregate.” There may be a way to fix this, though.

Some experts have suggested passing regulations like those in place for other sensitive and potentially harmful technologies, such as wiretaps. “There were wiretap warrant requirements put in place to conform with the Fourth Amendment, and doing that with facial recognition technology might decrease the pervasive tracking and surveillance nature of the system. If law enforcement is just using [facial recognition] for investigative purposes, they should be able to get a warrant and use the technology in this narrowly tailored way,” Sarkesian says. She maintains, however, that while such a protection would be a welcome improvement, it would not resolve all the issues OTI sees with law enforcement’s use of facial recognition.

It may also help to limit which surveillance technologies can be combined, such as by making it illegal to use drones or body cameras in conjunction with facial recognition. “We need to ask if there are certain uses of this technology that are beyond the pale, like using it on a drone to be able to spy on people and watch them and track them. Or its use in real time, for example, on things like immigration enforcement,” Spivack says.

Put privacy into the hands of consumers

Advertisers learned long ago that they couldn’t just track people online without letting them know. Today, there are myriad protections in place and, while they aren’t perfect, there are ways that consumers can simply opt out of online surveillance. Opting out of facial recognition databases is much more difficult, and in some cases impossible, but there are policies that should be in place to safeguard privacy where possible.

The California Consumer Privacy Act, for instance, has a provision that allows residents to reach out to data brokers, including image databases, and ask what information they are holding. And social media companies by and large have very specific use policies. Facebook, for instance, clearly states in its terms of use: “if you share a photo on Facebook, you give us permission to store, copy, and share it with others (again, consistent with your settings) such as service providers that support our service or other Facebook Products you use. This license will end when your content is deleted from our systems.” Can Facebook sell your profile photos to image databases? Yes, it can. The same goes for Instagram and most other social media providers. There should be a way for Americans to opt out of image databases, and providing one wouldn’t be all that difficult. U.S. citizens can already opt out of biometric entry and exit screening (facial recognition when traveling internationally) simply by asking, according to U.S. Customs and Border Protection.

Train for competent, ethical use

Last year, the Center for Privacy and Technology at Georgetown Law discovered, through a FOIL request, that police in New York City used a photo of actor Woody Harrelson to find a suspect. “The suspect disappeared and, because the footage they had of him was too grainy, facial recognition didn’t work. But the police officers thought, ‘Hey, that guy looks like Woody Harrelson. Let’s just run a photo of Woody Harrelson and see what comes up,’” Spivack explains. “They ran Woody’s photo through, got a list of results, found a match, and they ended up arresting that person.”

The Center also uncovered instances where police copied and pasted features from one person onto the photo of another, rotated faces and heads, and mirrored images when part of a face was missing. “It really boils down to fabrication of evidence,” he says. It almost goes without saying that these examples are unethical and unquestionably wrong. This is why training, not just on how a technology works but also on how it should be ethically used, is so important.

Make auditing part of the process

If New York City had a stringent auditing process in place, there’s no doubt that someone would have flagged the aforementioned techniques as improper. The problem is that no one is policing the police except other police. In a perfect world, every law enforcement and public entity using facial recognition would involve the same working group of civilians, police, and experts charged with researching and buying the technology in regular, periodic audits to make sure the technology is used only for good, and only in a manner that protects citizens’ rights.

“Auditing requirements for accuracy and bias are probably one of the more important aspects of using facial recognition fairly and responsibly,” Spivack says. “But it’s also important to audit how the technology affects the balance of power. It’s more nebulous to try to think about power dynamics and civil liberties and things like that, and it’s a little harder for people to understand, but these are serious questions.”
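
To make the accuracy-and-bias half of such an audit concrete, here is a minimal sketch of what the arithmetic might look like, assuming auditors have a log of match attempts with ground-truth labels and demographic groups attached. The log format, the group labels, the false_match_rates helper, and the 2x disparity threshold are all hypothetical illustrations, not a description of any department’s actual system.

```python
from collections import defaultdict

# Hypothetical audit log: one record per facial recognition match attempt.
# "group" is a demographic label assigned during the audit, "matched" is
# whether the system reported a match, and "same_person" is the ground truth
# established by human review of the underlying images.
match_log = [
    {"group": "A", "matched": True,  "same_person": True},
    {"group": "A", "matched": True,  "same_person": False},
    {"group": "B", "matched": True,  "same_person": True},
    {"group": "B", "matched": False, "same_person": False},
    # ...a real audit would cover thousands of records
]

def false_match_rates(records):
    """False match rate per group: the share of non-matching (impostor) pairs
    that the system nonetheless reported as matches -- the error mode behind
    wrongful identifications."""
    false_matches = defaultdict(int)
    impostor_pairs = defaultdict(int)
    for r in records:
        if not r["same_person"]:  # only impostor pairs can yield a false match
            impostor_pairs[r["group"]] += 1
            if r["matched"]:
                false_matches[r["group"]] += 1
    return {g: false_matches[g] / n for g, n in impostor_pairs.items()}

rates = false_match_rates(match_log)
print(rates)  # e.g. {'A': 1.0, 'B': 0.0} on the toy data above

# A simple, hypothetical audit rule: flag the system if any group's false
# match rate exceeds twice the lowest group's rate.
if rates and max(rates.values()) > 2 * min(rates.values()):
    print("Flag for review: false match rates differ substantially across groups.")
```

A real audit would also track false negatives, sample sizes, and confidence intervals, but even a simple per-group error comparison like this would surface the kind of disparity the NIST findings describe.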