Jeremy Siegel: You're listening to GBH's Morning Edition. The White House is out with new guidelines for the use of artificial intelligence. Experts are celebrating the move, but they're also concerned that these are just guidelines, not actual rules, for how to use the technology. That includes Professor Usama Fayyad of Northeastern University. He runs the school's Institute for Experiential AI and spoke with me about what President Biden's guidelines accomplish and where he says they fall short.
Prof. Usama Fayyad: The guidelines basically direct the federal agencies to pay attention to certain topics that relate to AI in fundamental ways. Now, the reason this is significant is because the government itself is a big buyer of AI technology and services. It sends the signal that the government is paying attention to AI, the use of AI, and most importantly, the responsible use of AI. And I like that aspect of it. Of course, the question then becomes: are guidelines enough? Is encouragement enough? For example, I love the fact that the directive says NIST, the National Institute of Standards and Technology, should pay attention to developing standards for AI and so forth, but it doesn't require them to do so.
Siegel: I want to go back to something that you said there, which is that AI, artificial intelligence, is used widely across the federal government, which on one hand, I guess it's like, of course, the government is using AI if people at home are. But my only experience with that is typing things into ChatGPT. And it's kind of wild to imagine what the federal government is doing with artificial intelligence. What does AI look like at the federal level? I mean, is it people at federal agencies typing things into ChatGPT or is it something else?
Fayyad: It's definitely something else. So if I'm doing a task, if I'm generating a report, if I am collecting data on compliance with certain rules, if I am looking at images for security or doing surveillance, if I am watching borders, if I am surveilling economic activities and so forth, all of these actions involve a lot of knowledge economy work, meaning knowledge workers have to manipulate data, have to do reporting, have to move data from one system to another, have to visualize it, all of that. What generative AI can do is accelerate a lot of these tasks, somewhere between --- it's controversial right now, but between 10% and maybe 80%, depending on the task and how repetitive and robotic it is. Now, a lot of that work that's happening in the government can actually benefit from this. And a lot of the government agencies are behind, right? There's more demand for their services than they are able to meet, and therefore this technology can help accelerate a lot of those tasks. Now, I say accelerate very carefully because you cannot rely directly on the output of what the AI produces. What the AI produces is a fast draft. That draft may have errors in it, it may have issues in it, it may have problems in it, it may have biases in it. It may have discrimination within it, right? You need a human to quickly check that, catch the problems, fix them. And the whole hope is that by using the AI to help you do the task, it starts you from a much better starting place.
Siegel: So you mentioned that you wish that this all went a step further, that the federal government, that the White House, didn't just issue guidelines but issued requirements for federal agencies. Which makes me wonder, I mean, when federal agencies are using artificial intelligence to do some of the work that you just mentioned, and when there is the potential for bias or for errors, is there no regulation in place at this point for how artificial intelligence is used within government agencies?
Fayyad: There's kind of internal policies. There's no law yet, right? And we all know that, you know, it's not the job of the executive branch to come up with the laws. That's the job for Congress. Now, one of the benefits of having a directive like this is that it starts bringing pressure, some pressure. I wish it was more real pressure, but it brings some pressure on the agencies to start using AI, to start dedicating more resources to it. And when that happens, then you get kind of the reactions that come from Congress where they say, wait a second, if you're using it --- or from the public or from companies --- if you're using it, under what rules are you using it? If you're hesitating to use it, can we come up with guidelines and guardrails that say, if you use it according to these standards, then you're safe in using it, right? Today, there are no such laws in the U.S.
Siegel: Do you have confidence that this will actually happen? Can the federal government actually keep up here? Because I keep thinking back to social media and data and Facebook, for example, where it always feels like tech is a step ahead, and that Congress and the federal government are working to catch up on regulation after something bad or dangerous happens. Do you have confidence that the federal government can do this?
Fayyad: So keep up, keep up is a very good word, it's a very good description. You know, regulation never leads. It shouldn't lead. It should be reactive and it should respond. So what worries me is if we don't move, we may fall way too far behind and a lot of damage can happen before we catch up. That's why I'm eager to say, look, we've got to get started somewhere and we've got to begin to kind of keep up with what's happening. Our failure to begin to regulate and to start that wheel turning might make us fall too far behind in areas where it can become a bit too dangerous, where biases will come into the system and we'll start affecting the livelihoods of people, may affect the safety of people, may affect how the government is done and administered, may affect many aspects of things that we can think about. The spread of AI, it's so pervasive, it's almost like electricity. We can't even envision what the uses might be until we see them. But we need to be ready once we see them, to have some kind of beginnings of a policy or a guideline or a standard that says, you know, you shall do no harm, and here's our initial set of defined harms, and here's what you might be held liable for. And then, of course, evolve that as the technology evolves and as we respond to it from the government side.
Siegel: Professor Usama Fayyad, executive director of the Institute for Experiential AI at Northeastern University, thank you so much for your time.
Fayyad: Thank you.
Siegel: You're listening to GBH's Morning Edition.
The White House’s release of new guidelines for the use of artificial intelligence had some experts celebrating the move, while others expressed concern that the release offers just guidelines, not actual rules, for how to use the technology.
That includes Professor Usama Fayyad of Northeastern University, who runs the school's Institute for Experiential AI.
“The spread of AI, it's so pervasive, it's almost like electricity,” Fayyad told GBH’s Morning Edition co-host Jeremy Siegel. “We can't even envision what the uses might be until we see them. But we need to be ready once we see them, to have some kind of beginnings of a policy or a guideline or a standard.”
Fayyad said he was encouraged that the White House guidelines called on the National Institute of Standards and Technology to develop standards for AI, but disappointed that they stopped short of requiring the agency to do so.
“Of course, the question then becomes: are guidelines enough? Is encouragement enough?” he asked.
The government itself is already using AI services, he said.
“If I'm generating a report, if I am collecting data on compliance with certain rules, if I am looking at images for security or doing surveillance, if I am watching borders, if I am surveilling economic activities and so forth — all of these actions involve a lot of knowledge economy work,” he said.
Some generative AI companies have claimed that their tools can cut down on the busywork for government workers, Fayyad said.
“A lot of the government agencies are behind, right? There's more demand for their services than they are able to meet, and therefore this technology can help accelerate a lot of those tasks,” Fayyad said. “Now, I say accelerate very carefully because you cannot rely directly on the output of what the AI produces. What the AI produces is a fast draft. That draft may have errors in it, it may have issues in it, it may have problems in it, it may have biases in it. It may have discrimination within it. You need a human to quickly check that.”
Right now, federal agencies might have internal policies for how employees and contractors are allowed to use generative AI. But there are no laws.
“We all know that it's not the job of the executive branch to come up with the laws. That's the job for Congress,” Fayyad said.
Fayyad said he hopes the law can keep up with technology.
“Regulation never leads,” he said. “It shouldn't lead. It should be reactive and it should respond.”
What concerns him, he said, is what would happen if regulators move too slowly.
“If we don't move, we may fall way too far behind and a lot of damage can happen before we catch up,” he said. “Our failure to begin to regulate and to start that wheel turning might make us fall too far behind in areas where it can become a bit too dangerous, where biases will come into the system and we'll start affecting the livelihoods of people, may affect the safety of people, may affect how the government is done and administered, may affect many aspects of things that we can think about.”