On a normal afternoon at his job, Rob gets a phone call so preposterous he thinks it is a prank. “Come down to the local precinct. You’re under arrest. If you do not come peacefully, we will come arrest you.” Rob, like a normal law-abiding citizen, thinks these teens have gone too far and ignores the call. That evening he is arrested as he pulls into the driveway of his suburban home.
The police refuse to say why he is being arrested, beyond citing a felony warrant and a larceny charge. They tell Rob’s wife, Melissa, to “Google it” when she asks where he is being taken. Rob is taken to a detention center, where he is processed: fingerprinted, swabbed for DNA, and photographed for a mug shot. He is held overnight.
The next day he is taken to an interrogation room with two detectives. The officers produce a surveillance image of a heavyset black man shoplifting five watches from a Shinola store in Midtown Detroit. They present Rob with a blurry close-up of the shoplifter’s face from the footage and ask, “Is this you?” Rob holds the blurry picture up next to his face, rightfully indignant, and replies “No, this is not me. You think all black men look alike?”
The detectives lean back in their chairs. One, chagrined, says, “I guess the computer got it wrong.” Despite this admission, when Rob asks if he can go, the request is denied. He is released on a $1,000 personal bond late that evening, after spending 30 hours in jail. The arrest breaks his four-year perfect attendance streak at work.
***********************************
Rob is a real person, Robert Julian-Borchak Williams, arrested purely on the basis of a faulty identification by facial recognition software. His story was published yesterday in the New York Times; the dramatization above relies on that report.
How was Rob identified for arrest by the Detroit PD?
Shinola contracted with Mackinac Partners, a loss-prevention firm, which reviewed the shoplifting surveillance footage and sent a copy to the Detroit police. A digital image examiner for the Michigan State Police, Jennifer Coulson, uploaded a still from the video to the state’s facial recognition database, which contains more than 49 million photos.
Michigan obtained its technology from DataWorks Plus, a mug shot management software firm that has incorporated facial recognition tools developed by outside vendors. The facial recognition tech used by DataWorks Plus comes from NEC and Rank One Computing. A 2019 federal study showed these algorithms were biased, falsely identifying African American and Asian faces 10 to 100 times more often than white faces.
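To make the scale of that disparity concrete, here is a minimal sketch, with entirely invented numbers, of how a false match rate could be tallied per demographic group from one-to-one comparison results. The federal study measured something like this far more rigorously; nothing below is its actual data or method.

```python
# Toy illustration of computing false match rates by demographic group.
# All records are invented for illustration; this is not the federal study's data.
from collections import defaultdict

# Each record: (group, algorithm_said_match, actually_same_person)
comparisons = [
    ("group_a", True,  False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False), ("group_b", False, False),
]

false_matches = defaultdict(int)
impostor_pairs = defaultdict(int)  # comparisons between two different people

for group, said_match, same_person in comparisons:
    if not same_person:            # only impostor pairs can produce a false match
        impostor_pairs[group] += 1
        if said_match:
            false_matches[group] += 1

for group, total in impostor_pairs.items():
    print(f"{group}: false match rate = {false_matches[group] / total:.2f}")
```

A false match rate 10 to 100 times higher for one group means that, against the same gallery and at the same threshold, members of that group are 10 to 100 times more likely to be surfaced as “matches” for crimes they had nothing to do with.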
Ms. Coulson’s run of the probe image through the state police database would have produced several rows of candidate results, each with a confidence score. Mr. Williams’s driver’s license photo would have been among the matches. These were forwarded to DPD as an “Investigative Lead Report,” which includes the useless disclaimer: “This document is not a positive identification. It is an investigative lead only and is not probable cause for arrest.” DPD showed the photo lineup to Shinola’s loss prevention contractor and she identified Mr. Williams, which appears to be the sole basis for his arrest.
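For readers who have never seen one, a one-to-many face search works roughly like this: the probe image is converted into a numeric embedding, scored against every enrolled photo, and the top-ranked candidates come back with confidence scores. Below is a minimal, hypothetical sketch of that mechanic; the embedding function and cosine-similarity scoring are my assumptions, not a description of the DataWorks Plus system.

```python
# Hypothetical sketch of a one-to-many face search; not the DataWorks Plus pipeline.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model; returns a unit-length feature vector."""
    vec = image.astype(float).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)

def search(probe: np.ndarray, gallery: dict, k: int = 5) -> list:
    """Return the k gallery entries most similar to the probe, with 'confidence' scores."""
    probe_vec = embed(probe)
    scores = {
        person_id: float(np.dot(probe_vec, embed(photo)))  # cosine similarity
        for person_id, photo in gallery.items()
    }
    # The top candidate is only "most similar in the gallery," never a positive ID.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Even with random stand-in "photos," the search dutifully returns top candidates.
rng = np.random.default_rng(0)
gallery = {f"license_photo_{i}": rng.random((16, 16)) for i in range(1_000)}
probe = rng.random((16, 16))
for person_id, score in search(probe, gallery):
    print(person_id, round(score, 3))
```

The search always returns the most similar photos in the gallery; someone will rank first in a 49-million-photo database no matter what. Whether that person is the shoplifter is a different question entirely, which is exactly what the disclaimer on the lead report is supposed to convey.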
At a probable cause conference, the Wayne County prosecutor announced the charges were being dropped “without prejudice” (the prosecutor’s way of reserving the right to keep suspecting Mr. Williams and arrest him again later if better evidence turns up). Mr. Williams’s lawyer obtained an order and filed FOIA requests with the DPD for his arrest records, but according to a filing made on his behalf by the ACLU of Michigan, the department has stonewalled those requests.
***********************************
What is wrong here?
OK, there’s a lot wrong here. By my reckoning, I count bad technology, corporate malfeasance, indifferent and sloppy policing, a useless disclaimer, and a system that presumed Robert Williams guilty after a robo-identification until he could prove himself innocent. I suggest we zoom out from this specific case and ask whether facial recognition is worth it at all if we cannot guarantee it delivers on its promise of identifying individuals reliably.
I took a wonderful seminar in law school on AI & the Law and wrote a paper for that seminar on law enforcement use of AI facial recognition technologies. One problem I had with the class, which still sticks in my craw today, is that we always discussed the technologies under the presumption of efficacy. That is, when we debated using AI weaponry, AI hiring, or AI surveillance, we jumped ahead to a future in which the technology does what it is supposed to do 99.99% of the time.
I understand why the professor did this – it’s pretty easy to shut down an argument for a technology by saying, “boy, the road to get there is pitted with immoral inefficacy, might kill some people, and definitely violates human rights.” The idea is to debate it from first principles. Right now there’s a lot of faith that the iterative nature of machine learning will get us close to ideal results. But if the technology is bad and the government uses it to falsely imprison people, the inefficacy on the road to glory is entirely the point.[1] If you’re arrested because an algorithm says you are the person captured on video, you’re stuck asserting a negative until you can prove you’re not that person. Because a computer says so, you’re guilty until proven innocent, and proving it requires time and money: waiting in jail, then hiring a lawyer to gather the evidence to exonerate you.
You could extend this thinking even more broadly, to the current debate we’re having about police brutality. Law enforcement is given a monopoly on violence by the state, and if it does not use that power in a lawful, equitable, judicious way, it does not get to cover for its incompetence by saying “we’ll try harder next time.” Until facial recognition is perfect – and I mean perfect – law enforcement shouldn’t be able to spend millions of dollars in taxpayer money on a technology that sometimes makes an oopsie and leads them to arrest someone based on a bad algorithm.
Morally, the government cannot use algorithms to make identifications without sufficiently proving their efficacy and freedom from bias. Legally? Well, we’ll talk about that in part two.
***********************************
Twitter thread worth reading:
I loved this whole thread, but particularly the idea summed up in this tweet:
[1] Highly recommend the reporting done by Clare Garvie and everyone at the Georgetown Law Center on Privacy & Technology on this issue. If I had known I was this interested in privacy law when I was choosing law schools, I probably would’ve gone to Georgetown.