The number of times the police force in LA has used facial-recognition technology since 2009.
The Los Angeles Police Department (LAPD) has consistently told reporters that it's never used facial recognition software to catch criminals. A new investigation by The Los Angeles Times contradicts that, finding that the department has in fact used facial recognition in 29,817 different instances. That's a few more occasions than could reasonably be chalked up to an honest mistake.
Typically, police agencies take images of suspects from surveillance cameras and use facial-recognition software to compare them against databases of mugshots and driver's license photos. The number of people with access to these systems tends to be kept to a minimum, but in the LAPD's case, more than 300 officers have access to facial recognition programs.
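The matching step described above can be sketched as a toy example: a "probe" face from surveillance footage is reduced to a numeric feature vector (an embedding) and compared against a database of mugshot embeddings. Everything here is an illustrative assumption — the names, the 128-dimensional vectors, and the 0.8 similarity threshold do not reflect any vendor's actual system.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two embeddings: 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, database, threshold=0.8):
    """Return the database ID most similar to the probe embedding,
    or None if no candidate clears the (assumed) threshold."""
    best_id, best_score = None, threshold
    for person_id, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Tiny synthetic "database" of three enrolled faces (random stand-ins).
rng = np.random.default_rng(0)
db = {f"mugshot_{i}": rng.normal(size=128) for i in range(3)}

# A probe that is a noisy copy of one enrolled face should match it...
probe = db["mugshot_1"] + rng.normal(scale=0.1, size=128)
print(best_match(probe, db))  # mugshot_1

# ...while an unrelated face should fall below the threshold.
print(best_match(rng.normal(size=128), db))  # None
```

The threshold is the crux: set it too low and unrelated faces start "matching," which is exactly the false-positive failure mode civil rights advocates warn about.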
The new report suggests that facial recognition isn't some experimental technology but is actually in widespread use today, which is certain to raise alarms among privacy advocates who say the technology isn't ready for prime time, prone as it is to false positives, especially when people of color are involved.
Ethical issues — Perhaps the denials weren't an honest mistake at all; the LAPD may instead have hidden its use of facial recognition over worries about the software's inaccuracy and its potential use for intimidation. Various reports have covered false arrests made after the software inaccurately matched pictures of suspects, like one Black man in the Detroit area who was taken into custody after an algorithm determined his face matched one seen in surveillance footage. He was released only after it was determined that the suspect looked nothing like him.
Police have also been found using surveillance footage and social media posts from protests to later arrest individuals for minor offenses like failure to disperse and disorderly conduct, thereby intimidating them out of participating in subsequent events. Social media companies can also be compelled to provide all posts geotagged within a certain area, like the center of a protest, which, when combined with facial-recognition technology, can lead to serious infringements of civil liberties.
Rite Aid recently said it would discontinue the use of facial recognition in its stores over questions about its accuracy and the ethical implications surrounding its use. The retail chain said employees in its stores could tag the faces of suspicious shoppers and be alerted if they returned to the store later. But customers were only informed of the technology's use via limited signage, and its accuracy was called into question by civil rights advocates.
Public input — After being caught red-handed, the LAPD admitted to The Los Angeles Times that it uses facial recognition software to identify suspects in gang crimes where witnesses are too fearful to come forward, or in crimes with no witnesses at all. But the denials are what concern civil liberties proponents: the technology is half-baked, and its use could lead to harmful, even tragic, outcomes, particularly given the overzealous actions of police toward suspects in recent months.
Companies that develop facial recognition technology have been racing to adapt to a world in which people are wearing masks that cover a large portion of their faces. The software works by identifying the unique contours of a person's face, such as the distance between their eyes and nose. Without that data, accuracy rates plummet. The programs already disproportionately misidentify people of color, which can reinforce biased arrest patterns. A federal study in 2019 of more than 100 facial recognition systems found they falsely identify Black and Asian faces 10 to 100 times more often than white faces.
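The mask problem above can be illustrated with a toy calculation: if a system's features are simply the distances between facial landmarks, covering the lower face removes most of those measurements. The landmark names and coordinates here are invented for illustration, not drawn from any real system.

```python
# Landmarks a system might measure on an unobstructed face (assumed names).
FULL_FACE = ["left_eye", "right_eye", "nose_tip",
             "mouth_left", "mouth_right", "chin"]
# With a mask on, only the upper-face landmarks remain visible.
VISIBLE_WITH_MASK = ["left_eye", "right_eye"]

def feature_vector(landmarks, positions):
    """Pairwise distances between whichever landmarks are visible."""
    feats = []
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            (x1, y1) = positions[landmarks[i]]
            (x2, y2) = positions[landmarks[j]]
            feats.append(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)
    return feats

# Made-up pixel coordinates for one face.
face = {"left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 60),
        "mouth_left": (38, 80), "mouth_right": (62, 80), "chin": (50, 100)}

print(len(feature_vector(FULL_FACE, face)))          # 15 measurements
print(len(feature_vector(VISIBLE_WITH_MASK, face)))  # 1 measurement
```

Going from 15 distinguishing measurements to 1 is why masked faces are so much harder to tell apart, and why error rates — already uneven across demographic groups — get worse.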
With the potential adverse consequences of facial recognition being so high — like unnecessary confrontations between police and people of color — the public should have more of a say in whether it gets used in the first place.