Tech

A man spent 10 days in jail based on a facial recognition error

Critics have been warning that software that matches faces against images pulled from social media and other databases will lead to false positives.


A man in New Jersey spent 10 days in jail after facial recognition software wrongly identified him as the suspect in an aggravated shoplifting incident.

Arrest by algorithm — According to NJ Advance Media, police were looking to identify a suspect who stole from a hotel gift shop and then fled from police in a Dodge Challenger, ramming into a patrol car before speeding off. Images taken from the scene were run through facial recognition software, which returned a "high profile" match of Nijeer Parks. He was soon taken into custody.

The only problem: Parks later said he had never been to the town where the incident occurred. What's more, after a judge pressed prosecutors to produce evidence beyond the facial recognition match, the case was quietly dropped, but only after Parks had spent 10 days in jail and thousands of dollars on a lawyer.

While NJ Advance Media initially reported that the notorious facial recognition tool, Clearview AI, was to blame for the false identification, Parks' lawyer has since gone back on that claim. Clearview AI has also denied the allegation, with founder and CEO Hoan Ton-That telling Input, "We have no indication that Clearview AI was used regarding this situation."

False positives can ruin lives — The case highlights the danger posed by facial recognition software, which remains unreliable. This isn't the first time someone has been wrongly arrested because of the technology — in Detroit, a man spent 30 hours in jail earlier this year after a video from a robbery was run through facial recognition tools and he was wrongly flagged as the suspect.

A unifying theme between these two cases is that both men are Black. Facial recognition software has been found to misidentify people of color at disproportionately higher rates than white people.

Serious consequences — It's one thing when an algorithm accidentally serves you up some Nickelback in your Spotify playlists (though that would be pretty awful), but it's a whole different story when it leads to consequences in the real world. Beyond the stress, wrongful arrests can cause financial damage from legal fees and lost wages, and if the error isn't spotted, innocent people can wind up with criminal records.

Despite all this, there is very little oversight of facial recognition use in law enforcement, which is increasingly turning to the technology because it makes identifying suspects easier. Police have also used it to arrest protesters at large rallies, scanning footage for evidence of petty crimes and then identifying suspects with images from social media.

Because of its faults and potential for abuse, experts and civil rights groups say federal legislation is needed to govern the use of facial recognition, with safeguards to prevent police from relying on the software alone to make arrests.

Some private corporations have heeded calls to walk back facial recognition. Amazon, Microsoft, and IBM have all restricted police access to their facial recognition products and called for reform of how law enforcement uses the technology.

Since Parks' arrest, New Jersey has paused police use of facial recognition products while the state develops a policy governing the technology.

UPDATE 12/29: This post has been updated per new information that Parks' lawyer, Daniel Sexton, may have "been mistaken" in pointing the finger at Clearview AI. A statement from Clearview AI founder and CEO Hoan Ton-That has also been added.