There’s a new back-to-school ritual for students and professors: brushing up on policies regarding the use of artificial intelligence. Thanks to the rise of generative AI, what one instructor considers a tool could, in another context, be considered a slippery slope into academic dishonesty.

Some universities have implemented AI policies that faculty are required to enforce. Others have recommendations — but no school-wide standards. So, how should professors proceed in updating their class policies?

Sam Ransbotham, professor of analytics at Boston College and expert in machine learning and AI, joined GBH’s All Things Considered host Arun Rath to discuss different kinds of AI policies in higher education. What follows is a lightly edited transcript.

Arun Rath: Boston College doesn’t have a fixed, one-size-fits-all AI policy for professors to implement. There are guidelines specific to each department. Before we get into the guidelines within your school — the School of Management — talk about why BC and other universities might be hesitant to implement school-wide policies.

Sam Ransbotham: I think there are just so many different classes on one campus. If you think about a university, there are so many different needs, and I think one policy that meets all those needs would be pretty tough.

Take math as an example: If you’re learning to add and subtract, I think a calculator is going to do you a disservice. On the other hand, if you’re in a class that isn’t about learning to add and subtract, then a calculator is a great help.

We don’t want to outlaw calculators. Similarly, we have the same sort of problems with AI.

Rath: Let’s break it down then. The Carroll School has three types of guidelines for using generative AI. Can you walk us through each one?

Ransbotham: Yeah. There are three basic shapes.

The first shape — and I’m going to greatly simplify here — but the first shape is: Don’t use any.

Second is: You can use [AI] if the professor says you can for a particular assignment.

The third is: You can use [AI] all the time, unless the professor says you can’t.

I think what we’ve seen is that most people have chosen that middle ground of, “Hey, it’s possible — but I have to say that it’s OK per assignment.”

Rath: I don’t know how much you can continue on with the calculator analogy with, say, math, but how does this work in terms of what might be appropriate for what types of classes?

Ransbotham: For example, I teach a class in machine learning and artificial intelligence. I think it would be pretty hypocritical to say, “Hey, we can’t use this tool that we’re talking about in my class.”

On the other hand, if you are, for example, learning to write an essay, having a tool write an essay for you might be inappropriate. So I think that’s where the granularity comes in.

I think the crux of this issue, and what makes it really hard, is that we don’t yet know how these tools work well for learning. We’re scarcely 500 days into the presence of useful large language models.

But learning outcomes are slow and hard to measure, and I think that’s where the trick is. We don’t know yet whether this is going to be a tool that helps people get to average — and then they can excel beyond average — or if it’s a crutch that gets them to average, but then they lack the tools to go beyond mediocre.

If you think about Boston College, our motto is “Ever to Excel.” It’s not “Ever to Mediocre.” And no one wants to stop at mediocre. We just don’t know yet whether these tools get us to mediocre and leave us unable to go any further, or whether they get us to mediocre so that we can go further.

Rath: Which is why we’re talking about guidelines as opposed to a fixed policy.

Ransbotham: Exactly. And stuff changes so quickly; I mean, these tools are changing rapidly.

Rath: With all of that, can you talk sensibly about how professors on the whole — or even just professors individually — are dealing with this as the rubber hits the road, as this technology is unfolding as they teach?

Ransbotham: Well, certainly, it’s tough. It would be really nice to teach a class where you didn’t have to change everything every semester and week by week, but that’s not how it’s working right now.

I think the big presence of AI in education is going to offer us an ability to learn at scale. What I mean by that is that a lot of curricula are built around lower levels of learning — think of something like Bloom’s taxonomy of learning, where memorizing is at the bottom level and creating is at the top.

I think we’re going to have to push our learning further towards creating versus memorizing, and I think courses that are focused on memorizing are particularly going to struggle with this.

Rath: As someone from a literature background, I’m especially curious about how you implement this in the — I think it’s the third type, where there’s unlimited use but proper citation is required. How does that work in terms of how you submit your work? How do you cite artificial intelligence?

Ransbotham: I think you just say, “Hey, I used artificial intelligence to get me started on this task.” What we’re hoping with that is — and, actually, that’s the policy I’ve chosen in my class — that not only can you use AI on your assignments, you ought to.

If we think about the future, there’s a lot of talk about artificial intelligence replacing people at work. I don’t really think that’s going to happen soon. But [what is] much more likely is that people who are using artificial intelligence well will replace people who do not use artificial intelligence well. Part of my class is teaching people to use these tools well.

Now, the difference is: If you get a head start from the tool, you need to be able to extend beyond what that tool offers. I think there’s the challenge. You know, you mentioned essays — it’ll do a first draft, but a first draft is rarely what you want to turn in.

Rath: You just raised something deeply interesting in terms of a potential gap between people who know how to use AI well and those who don’t. That’s something you’re thinking about right here at the outset of this.

Ransbotham: Clearly. I mean, technologies have a long history of increasing the digital divide between people, and there are going to be socioeconomic implications of who learns to use these tools well and who does not.

When you think about, you know, all the hullabaloo about, “Oh no, machines are going to take my job!” I don’t think that’s realistic, particularly in the short term. I think it’s far more realistic that people are going to replace other people, especially if they can use these tools well. It’s incumbent on us as educators to help students use these tools well.

Rath: Finally, we’ve been talking about the professors’ perspective mainly on this, but what about the students? What are you hearing from them in terms of how they feel about these guidelines? [The students] probably are the people who understand this technology better than we do.

Ransbotham: Yeah. I just quickly did a show of hands in my classes this morning about who’s using certain tools, and all the hands go up. The students are already using these tools, and they’re using them well. I think it’s up to us to push them further and help them use those tools, and that’s actually pretty hard because that’s not what we’re used to, and that’s not what we grew up on.