Jeremy Siegel: You're listening to GBH's Morning Edition. Facial recognition technology is all around us. You might use it for unlocking your phone or even going through security in an airport. But one major use of this software in the U.S. is in policing, including in Massachusetts. Law enforcement can use facial recognition technology to look at things like surveillance video and then search for suspects based on their physical characteristics. Lawmakers in Massachusetts are now considering a bill that would make changes to that process over concerns about inaccuracy and the potential for racial discrimination. To talk more about how this technology works and the problems associated with it, we're joined now by UMass Amherst computer science professor Erik Learned-Miller. Professor Learned-Miller, thanks so much for joining us.
Erik Learned-Miller: Thanks for having me. Glad to be here.
Siegel: So you've studied facial recognition technology and some of the problems that can arise when law enforcement uses it. What are the risks associated with facial recognition, and why are advocates pushing for changes to the way that law enforcement in Massachusetts uses it?
Learned-Miller: Well, let me make two points here. One is that there's a lot of talk about accuracy in face recognition. How often does it make errors? How often does it get a correct match? And you might hear a vendor say something like, our software is 99% accurate, or something like that. And I want to let people know that those numbers are very hard to interpret. And the reason is: When somebody says that they got 99% of the examples correct on some test, it could have been something relatively easy, like passport photos, which are taken in well-lit conditions with the person looking straight towards the camera. That's a much easier situation in which to do face recognition. However, in practice, the police might be using something to look at a grainy surveillance video in which the subject's face is a bit blurry, the lighting is poor, and maybe they're looking slightly away from the camera. So just because a piece of software does well on a certain laboratory test does not mean it's going to do the same in practice, and it almost never does as well. And the second point I want to make is that people sometimes put too much trust in technology. You know, they grew up using calculators that were always correct when they'd multiply two big numbers together. And they got used to the idea that technology is accurate and precise all the time. But face recognition and other artificial intelligence applications are much more uncertain and much more difficult. And frankly, they make errors all the time.
Siegel: So what kinds of changes would this bill make to the way law enforcement uses facial recognition?
Learned-Miller: So one idea is to have one centralized office that would do all the face recognition. A local department would make a request to that center, which would have them using just one software package that could be closely monitored. And the people using that software would be well-trained in understanding how it worked and what its vulnerabilities are. Another point is that we want to avoid fishing expeditions in which you put in an image of somebody and start searching for a match. So this leads to recommendations like requiring a warrant: before you run any face recognition search, you want to have a warrant that says there's probable cause to believe that this particular individual may be involved in a crime. I'm not a lawyer, so I can't tell you the details of that. But those kinds of basic protections would prevent this from being overused, just the way we don't allow surveillance to be done on people who are not suspected of a crime.
Siegel: Before I let you go, you have a long history studying facial recognition and the problems associated with it. You're a member of the Massachusetts Facial Recognition Commission. Are there any examples that you can point to where things went wrong with facial recognition technology, where someone was maybe falsely targeted by facial recognition software?
Learned-Miller: Absolutely. There's a now-famous case of a man named Robert Williams in Detroit, who was accused, through a false face recognition match, of stealing watches from a jewelry store. He was nowhere near the store at the time, and the software that was used gave him as a possible match to the perpetrator of the crime. And interestingly enough, the police report on that possible match said in big, bold letters, Do not use this match. To arrest this person, you have to have corroborating evidence. Unfortunately, the department ignored its own rules and went out and arrested him anyway with no additional corroborating evidence. Of course, he was far away from the crime scene at the time and was able to prove this. But he spent a couple of days in jail and was arrested in front of his two young girls. He also missed some work and was humiliated by the experience. So now this has happened about five or six times. And believe it or not, every single time that it's known to have happened, it has happened to a Black person. So that definitely reinforces the idea that the software may work less well for dark-skinned people. And in addition, the protections that these people should have are perhaps bypassed more often for minorities and others who, through no fault of their own, have not had a great history with law enforcement.
Siegel: Erik Learned-Miller is a professor of computer science at UMass Amherst. Professor Learned-Miller, thank you so much for joining us this morning.
Learned-Miller: Thank you for having me.
Siegel: Recommendations to change the use of facial recognition technology in Massachusetts have passed the House in past versions of this bill, but have not yet gotten approval from the Senate or the governor's office. You're listening to GBH's Morning Edition.
Lawmakers in Massachusetts are now considering a bill that would make changes to how police can use facial recognition technology over concerns about inaccuracy and the potential for racial discrimination.
Right now, law enforcement officers can use facial recognition technology to look at things like surveillance video, then search for suspects based on their physical characteristics.
That can bring up a slew of issues, said UMass Amherst computer science professor Erik Learned-Miller.
“There's a lot of talk about accuracy in face recognition. How often does it make errors? How often does it get a correct match?” Learned-Miller said. “You might hear a vendor say something like, our software is 99% accurate, or something like that. And I want to let people know that those numbers are very hard to interpret.”
That 99% figure, he said, could come from testing a database of something highly standardized, like passport photos, in which images are well-lit and subjects are looking directly at the camera.
“However, in practice, the police might be using something to look at a grainy surveillance video in which the subject's face is a bit blurry, the lighting is poor, and maybe they're looking slightly away from the camera,” Learned-Miller said. “Just because a piece of software does well on a certain laboratory test does not mean it's going to do the same in practice, and it almost never does as well.”
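To make that concrete, here is a back-of-envelope sketch in Python of why a headline accuracy figure says little on its own. The numbers are hypothetical and chosen only for illustration; they do not describe any real system or the software at issue in the bill.

```python
# Hypothetical sketch: how a "99% accurate" matcher can still produce
# mostly false leads when a probe image is searched against a large gallery.

gallery_size = 1_000_000   # assumed number of photos searched (hypothetical)
false_match_rate = 0.01    # "99% accurate" read as a 1% false-match rate per comparison
true_matches = 1           # at most one correct identity in the gallery

expected_false_matches = gallery_size * false_match_rate
print(f"Expected false matches: {expected_false_matches:,.0f}")  # 10,000

# Even if the true match is found, nearly all returned candidates are wrong people.
precision = true_matches / (true_matches + expected_false_matches)
print(f"Chance a returned candidate is the right person: {precision:.4%}")  # ~0.01%
```

Under those assumed numbers, a search would return roughly 10,000 innocent candidates for every correct one, which is one reason a match is supposed to be treated as a lead to corroborate, not as evidence on its own.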
Proposed changes in front of the Massachusetts Legislature include creating a centralized statewide office staffed by people trained on the software’s vulnerabilities, to which local police departments can submit requests.
Another proposed change: Requiring police officers to have a warrant before they run an image through facial recognition software.
“We want to avoid fishing expeditions in which you put in an image of somebody and start searching for a match,” he said. “You want to have a warrant that says there's probable cause to believe that this particular individual may be involved in a crime. … Those kinds of basic protections would prevent this from being overused, just the way we don't allow surveillance to be done on people who are not suspected of a crime.”
Learned-Miller said he hoped more checks on facial recognition would help prevent cases like that of Robert Williams, a Black man in Detroit who was accused of stealing watches from a jewelry store and arrested based only on a false positive match from facial recognition software. Williams was nowhere near the scene of the crime, but police arrested him in front of his two young daughters.
“The police report on that possible match said in big, bold letters, 'Do not use this match. To arrest this person, you have to have corroborating evidence,'” Learned-Miller said. “Unfortunately, the department ignored its own rules and went out and arrested him anyway with no additional corroborating evidence.”
Williams is now suing the police department over the wrongful arrest.
“This has happened about five or six times,” Learned-Miller said. “Every single time that it's known to have happened, it has happened to a Black person. So that definitely reinforces the idea that the software may work less well for dark-skinned people.”
Learned-Miller said people who grew up with simple, reliable technology tend to trust it more — even as technology becomes more complex and more fallible.
“They grew up using calculators that were always correct when they'd multiply two big numbers together, and they got used to the idea that technology is accurate and precise all the time,” he said. “But face recognition and other artificial intelligence applications are much more uncertain and much more difficult. And frankly, they make errors all the time.”