Artificial intelligence is changing many aspects of our lives. But what does that mean for global military operations? And what are the pros and cons of using it in that context? As an Iraq War veteran and member of the House Armed Services Committee, that's something Massachusetts Congressman Seth Moulton thinks about a lot. In an op-ed in The Boston Globe this week, he wrote, "AI could be as dangerous as nuclear weapons." Moulton joined GBH’s Morning Edition co-host Paris Alston to discuss the op-ed and congressional negotiations over the debt ceiling. This transcript has been lightly edited.
Paris Alston: Tell me a little bit about where artificial intelligence is already being used in the military, both by the U.S. and its adversaries.
Rep. Seth Moulton: Well, first, let me say that artificial intelligence is wonderful in so many ways. I mean, it's going to accelerate cures for cancer. It's going to change our lives and make them more efficient and easier in a whole host of ways we can barely even imagine right now. But there are these very concerning aspects to it as well. We use artificial intelligence in the military in what we call autonomous weapons, weapon systems that don't need humans to do all the work. The Patriot missile system, which is helping to defend Ukrainians today, is an example of a system that's mainly autonomous. It acquires targets on its own, but it still requires a human operator to push the button, to say, 'Okay, we're going to take out this incoming missile.'
Alston: It sounds like there are uses for AI in the military that could show the technology at its best as well as its worst. Why could it be dangerous with respect to nuclear weapons?
Moulton: Well, if you think about how sophisticated a Patriot missile system is, these are really remarkable devices that have saved thousands of lives. And sophisticated computers figure out how to intercept a missile coming your way with a missile going in the other direction. But with that kind of technology and the new developments we've seen with artificial intelligence in just the last few months, we're quickly getting to the point where we literally have killer robots -- I mean, entirely autonomous weapons systems that are appealing to the military because it means that our troops don't have to be in as much danger. You can just send a robot forward instead of a young American. But if the robots are not properly programmed to follow the rules of war, to limit civilian casualties and collateral damage, then you can imagine war quickly getting out of hand. Of course, this isn't just about us. This is about what our adversaries do as well. So imagine [Russian president Vladimir] Putin being given a killer robot that he's told will go and kill Ukrainian troops while limiting collateral damage and civilian casualties. But if you just turn off the switch, it will kill everything in its path. You can imagine Putin using that right away. He's already obliterating Ukrainian cities. And this is dangerous not just for us in a war; it's dangerous for humanity itself.
Alston: This illuminates the point that the technology is moving very quickly. We also know that when it comes to advancements in technology, members of Congress haven't always demonstrated that they're up to speed. Do you worry about the advancement of technology outpacing Congress's ability to respond?
Moulton: I worry about this a lot. And it's not even just Congress. It's the Pentagon and the Department of Defense as well. I co-authored a report called The Future of Defense Task Force Report back in 2020 that talked a lot about artificial intelligence and how this is a common problem. And not only does the United States have to get ahead in developing this technology, but we need to help set the international standards for its use, like a Geneva Conventions of AI. But it's been three years since we wrote that report, and the Pentagon has done almost nothing.
Alston: And what would you like to see done?
Moulton: Here's, I think, the sweet spot for where Congress and the government can help. I don't think we're ever going to catch up with technology enough to be able to regulate all of it. And there's a lot of AI development that we don't want to slow down, because we want to get cures for cancer as quickly as we can. We therefore have to focus on the most extreme risk cases, one of which is AI used in warfare. Another of them, of course, is disinformation. We heard a lot of talk about that. These are the places where I think Congress needs to focus its regulatory oversight: not to try to regulate AI overall, but to prevent the worst-case scenarios from happening.
Alston: And before we let you go, negotiations on the debt ceiling are ongoing between President Biden and House Speaker Kevin McCarthy. What is it going to take to strike a deal by the deadline?
Moulton: The problem is that we are negotiating with extremists. Kevin McCarthy probably doesn't want to ruin the economy, but what he wants even more is to hold on to his seat as Speaker of the House. And Marjorie Taylor Greene had to vote for Kevin McCarthy for him to get elected speaker, so he's beholden to extremists in the Republican Party like that. Extremists who are more interested in making a political point than doing the right thing for the country. Extremists who don't even understand what it means for America to lose 7 million jobs overnight if we go into default. Extremists who don't care about the veterans who would be left without care if we follow the Republican plan here. So I don't know what it's going to take, because this is not a normal negotiation. But I do know the Democrats are standing by and ready to do the right thing for the country, as we have many times in the past. Let's not forget Congress raised the debt limit three times under President Trump, even when he was cutting taxes for the wealthy, something I firmly disagreed with, because preventing default is the right thing to do for the country.