Civil rights activists have filed an official complaint against the Detroit police, alleging the department arrested the wrong man based on a faulty match provided by facial recognition software — the first known complaint of its kind.

The American Civil Liberties Union filed the complaint (PDF) Wednesday on behalf of Robert Williams, a Michigan man who was arrested in January based on a false positive generated by facial recognition software. "At every step, DPD's conduct has been improper," the complaint alleges. "It unthinkingly relied on flawed and racist facial recognition technology without taking reasonable measures to verify the information being provided" as part of a "shoddy and incomplete investigation."

The investigation began when five watches, valued at about $3,800, were stolen from a Shinola luxury retail store in Detroit in October 2018. Investigators reviewed the security footage and identified a suspect: an apparent Black man wearing a baseball cap and a dark jacket. In March 2019, according to the complaint, Detroit police conducted a facial recognition search using an image from the surveillance footage; that search matched the image to Williams' driver's license photo.

Several months later, in July 2019, DPD investigators showed a lineup of six images to a Shinola security guard "who had not witnessed the incident in person and who had merely watched the same security camera footage," according to the complaint. The guard identified Williams' photo, and police issued an arrest warrant, which they did not execute until January 2020, when officers showed up at Williams' home and arrested him as he returned from work.

During an interrogation the next day, it became clear that Williams was not, in fact, the man from the security camera footage, according to the complaint, and a confused officer told him, "the computer must have gotten it wrong."

Unforced errors

The false positive could have been caught had the DPD done additional old-fashioned police work, the complaint alleges. For example, the suspect in the footage was wearing a St. Louis Cardinals hat. "Mr. Williams, a lifelong resident of the Detroit area, owns no such hat, and is not a Cardinals fan," the complaint says. "He's not even a baseball fan. He is, however, Black."

Williams' race is indeed important to the case, as repeated tests and studies — including some conducted by the ACLU — have shown that facial recognition systems are significantly more likely to generate errors when trying to match images featuring people of color.

The National Institute of Standards and Technology (NIST) in December published the results of a study finding that facial recognition systems were most accurate on white men but were "10 to 100 times" more likely to generate false positives for Asian or Black faces. Systems developed in the United States also performed very poorly on Native American faces, NIST found.

"Given the technology's flaws, and how widely it is being used by law enforcement today, Robert likely isn't the first person to be wrongfully arrested because of this technology," ACLU attorneys wrote. "He's just the first person we're learning about."

The ACLU explicitly ties the expansion of flawed facial recognition software to the nationwide protest movement calling for police reform and support of Black communities, adding: "To address police brutality, we need to address the technologies that exacerbate it too. When you add a racist and broken technology to a racist and broken criminal legal system, you get racist and broken outcomes."

Several major technology firms recently changed their facial recognition businesses in the wake of protests for racial justice. Amazon announced a one-year moratorium on police use of its Rekognition technology to "give Congress enough time to implement appropriate rules." Microsoft, which said it has not sold its facial recognition tech to police, promised not to do so until the US has "a national law in place" governing its use.

IBM not only joined the call for federal regulation of facial recognition tech, but it also quit the business altogether earlier this month. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, [or] violations of basic human rights and freedoms," company CEO Arvind Krishna wrote in a June 8 letter to Congress.