With artificial intelligence comes the promise of increased productivity — for humans, and for robots. Last month, a robotic dog was deployed by the Massachusetts State Police Bomb Squad in Barnstable and was shot three times by an armed assailant. The robot was not run by artificial intelligence — police controlled it from a remote location — but it's not the first time robots have been deployed in place of humans in dangerous policing situations.
But what happens when AI-powered tools are in the mix? Do we want, or should we have, autonomous robots doing crowd control, or out on the beat? Do we give them guns?
Nir Eisikovits, director of UMass Boston’s Center for Applied Ethics, joined GBH’s All Things Considered host Arun Rath to discuss the implications of where artificial intelligence, robots and law enforcement intersect. What follows is a lightly edited transcript.
Arun Rath: Before we start talking about autonomous AI systems, let’s talk about robotics in policing. When we have these situations where a robot could take police officers out of harm’s way, what is the current understanding of the general ethics of this and when it’s appropriate?
Nir Eisikovits: Police departments have been using robots for a long time — bomb squad robots, for example. I don't think that's necessarily very controversial. Drones, too, have been used by police forces for a long time.
There's this understanding that if a robot can spare a police officer's life, you know, there's a very good argument for that. The current understanding is that it's a pretty typical use.
[But] as you say in the introduction, the problem starts when you start having an AI-powered robot or something that has — at least potentially — some autonomous capabilities.
Rath: Before we talk about that, what about weaponizing robots that are not AI-powered and that are remotely operated?
Eisikovits: You know, I think one of the questions around that is that a lot of those robots in other departments come from military surplus, so a lot of the concern is whether the use of those robots is part of a growing trend of militarizing police forces.
But, again, the typical uses have been about safety, about sparing officers from harm. That's pretty standard practice, both in the United States and elsewhere.
Rath: If police were to apply that kind of military approach, let's say the military's use of drones for remote attacks, is that the sort of thing you could see happening?
Eisikovits: I assume that there would be pushback for that. Again, most of the time, I don't think there would be a need for it. There's a difference between using a drone for intelligence collection operations — such as trying to see where a shooter would be, or what kind of approaches one should take — and using it as an attack weapon.
I think the latter would be very much along the lines of militarizing the police force and would be very controversial. The former, probably less so.
Rath: Getting into artificial intelligence — when we’re thinking about uses in law enforcement, what are the general guardrails that you believe are necessary?
Eisikovits: Well, it would depend on the type of artificial intelligence the robot would use. If it's something already in deployment in some places, like a facial recognition system, then you need to make sure the system is accurate enough to identify people correctly, and that it's not — as some facial recognition systems have been — discriminatory, better at correctly recognizing faces from some races than from others.
You would have to make sure that the data these facial recognition systems are trained on is accurate and that the models they use are accurate. Otherwise, you would have a tool for supercharging police bias. So that would be one.
The other would be that, if these tools did have some kind of capacity to use lethal force, they would never use it completely autonomously; there would have to be a human in the loop.
Where this becomes a little more creepy is if you replace the example in your intro of the dog [from the bomb squad incident] with one of Boston Dynamics' humanoid robots. You get into pretty strange territory there, which I think most police departments would probably want to avoid — and would be smart to avoid.
Rath: And that territory includes, even without a gun or something like that, that Atlas robot we’ve seen online. That could really hurt somebody. [Editor’s note: Boston Dynamics has specific terms and conditions and ethical principles against weaponization of its robots.]
Eisikovits: Yeah, it could really hurt somebody. You know, a police force at its best, when it's actually doing what it's supposed to be doing, is supposed to be a force that is beneficial to the community. And there can be a real alienating factor in deploying more and more sophisticated robotics — humanoid robotics, robotics that you can easily anthropomorphize or project human qualities onto.
It doesn't take much for us to do this. In New York City, the NYPD deployed a non-human-looking robot at some of the subway stations and already received a good amount of pushback. So police departments, in addition to all the safety and bias questions, want to be careful not to alienate their constituents by seeming to step away from human policing.
Rath: In terms of avoiding getting into that kind of negative territory, how should regulation work? There is a bill on Beacon Hill that proposes banning weapons attached to autonomous or remotely controlled devices. Is that the way to go? What kind of laws do we need?
Eisikovits: I think some of this is legislation, and some of this will have to do with police culture and having conversations with civil society actors who can lay out why this is worrying to parts of the community. So, at a minimum, yes, there shouldn't be any autonomous lethal capabilities in the hands of police. That seems completely unnecessary.
Given the fact that these systems are not yet completely reliable and are prone to errors that come from their data sets and models, that seems — at least to me — to be a no-brainer. You do not want a system prone to bias going around with the capacity to kill people.
Beyond that, I think there needs to be a kind of conservatism about the rush to put every new technology in the hands of some of our more powerful institutions. Don't deploy robots unless there's a very specific and special reason to do so.
In addition to being potentially dangerous and potentially increasing bias, it carries the slightly less concrete risk of further alienating the public from the police at a time when those relationships are already somewhat fraught. It just seems to be bad policing, in addition to anything else.