Tech Firms Fear False IDs from Facial Recognition Systems

By Crime and Justice News July 31, 2018

Picture a crowded street. Police are searching for a man believed to have committed a violent crime. They feed a photograph into a video surveillance network powered by artificial intelligence. A camera scans the street, instantly analyzing the faces of everyone it sees. The algorithms find a match with someone in the crowd. Officers rush to the scene and take him into custody. It turns out the man isn’t the one they’re looking for; he just looks a lot like him.

This is what some makers of the technology fear might happen if police adopt advanced forms of facial recognition that make it easier to track wanted criminals, missing people and suspected terrorists, NBC News reports.
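The failure the scenario describes comes down to thresholding: these systems typically reduce each face to a numeric “embedding” and declare a match when two embeddings are similar enough. The sketch below is purely illustrative (the vectors, names and threshold are invented, not drawn from any real product) and shows how a lookalike’s score can cross the cutoff and trigger a false positive.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two face embeddings; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-D embeddings; real systems use hundreds of dimensions
# produced by a neural network.
suspect   = np.array([0.90, 0.10, 0.40])  # photo fed into the system
lookalike = np.array([0.88, 0.14, 0.42])  # innocent person in the crowd
stranger  = np.array([0.10, 0.95, 0.20])  # clearly different face

MATCH_THRESHOLD = 0.99  # illustrative; operators tune this trade-off

for name, face in [("lookalike", lookalike), ("stranger", stranger)]:
    score = cosine_similarity(suspect, face)
    print(f"{name}: similarity={score:.4f} flagged={score >= MATCH_THRESHOLD}")

# The lookalike scores ~0.999 and is flagged as a "match" even though he is
# not the suspect -- the false-positive failure mode the article describes.
```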

Despite “real-time” facial recognition’s potential for crime prevention, it is raising alarms about the risks of mistakes and abuse. Those concerns are coming not only from privacy and civil rights advocates, but increasingly from tech firms themselves. In recent months, one tech executive has vowed never to sell his facial recognition products to police departments, and another has called on Congress to intervene. One company formed an ethics board for guidance. Employees and shareholders at some big tech firms have pressed their leaders to stop doing business with law enforcement.

“Time is winding down but it’s not too late for someone to take a stand and keep this from happening,” said Brian Brackeen, CEO of the facial recognition firm Kairos, who wants tech firms to keep the technology out of law enforcement’s hands. Brackeen, who is black, has long been troubled by facial recognition algorithms’ struggle to distinguish the faces of people with dark skin. He says, “There’s simply no way that face recognition software will not be used to harm citizens.”