Generative AI, artificial intelligence that creates original content from your prompts rather than simply spitting out search results linking to existing information, has gone public in the past few months.
Since DALL-E 2 and ChatGPT became publicly available, each has alarmed many observers, including educators, who worry that students will use these tools to research and write class assignments for them.
But that worry isn’t universal. Daniel Ernst, an assistant professor of English at Texas Woman’s University, told Texas Standard that AI tools present an opportunity to teach and learn in new ways. Listen to the interview above or read the transcript below.
This transcript has been edited lightly for clarity:
Texas Standard: There’s a lot of hysteria – some would say hype – about AI that seems similar to fears last century about robots taking over our jobs. We’re hearing a lot of those same sorts of dynamics in the conversation around AI right now. You’re not buying it, though. Why not?
Daniel Ernst: Yeah, I think because education is adaptable. We can look at history and see education, educators and curricula adapt to new technology all the time, like the Internet, even word processors. And I think ultimately, generative AI is best thought of as a text manipulator. It’s certainly more powerful than a word processor, but that’s ultimately what it is. It’s a new way to manipulate and use text. And so I think it presents opportunities for education.
Well, so what does this mean for you as a practical matter? As a teacher, you’re not a computer researcher or a computer scientist. You are involved with rhetoric. Are you not concerned that students are going to start writing their rhetoric papers using artificially generated content?
I am concerned to a degree. But we don’t only teach students how to learn to write, we also use writing to learn. So while using these technologies as a way to just generate a paper quickly presents some problems, I think that it enables us to challenge our students to think in new ways that are ultimately going to be productive.
For instance, I think they present opportunities for students to interact with texts in ways they otherwise can’t. You know, if you read a book, it’s a one-way street. You just have to sort of read what is written. These generative AIs allow you to ask questions, essentially, of a book and can potentially unleash new insights that reading alone and writing alone maybe can’t.
Can you tell AI-generated content from the real deal?
Not always. There are some sort of tell-tale aspects and features, and some of the companies are working on technology to detect AI-generated text. The irony there, of course, being that they rely on the same AI to detect the AI. But yeah, OpenAI, for instance, has experimented or has talked about using watermarking technology that would essentially be able to alert someone if the text was generated by AI.
But again, I think the overall point that I’m trying to make is that this technology is here, whether we like it or not. You can’t put the toothpaste back in the tube. And so I think we are better off spending our resources, rather than fighting against it, trying to adapt to a new reality where this is just part of our technology now, in much the same way that calculators changed the way we teach math. But they didn’t end math.
So how, as a practical matter, are you approaching this with your classes?
Well, one thing is I’m trying to use some of the generative AI capabilities in my assessments. So, for instance, if I’m asking students to analyze a piece of rhetoric – maybe an excerpt from a presidential speech or something – I can offer the AI, the ChatGPT analysis, as a sort of third point of analysis. So there’s the text itself, the speech, there’s the AI analysis of the text, and then we can have the students analyze both the text itself and the AI analysis. And so essentially putting students in dialog with the generative AI in a way that pushes them to sort of think more deeply about these topics and also to, again, create a sort of dialog, which is a kind of forgotten genre in pedagogy, but one with a long tradition going back to the ancient Greeks.
So the value, as you see it, or at least part of the value for people who will be part of the workforce in the future, is not just being able to generate reams of written or artistic content. It’s that you have to be able to ask the right questions in the first place.
Exactly. I think this is going to cultivate or inculcate a new skill or maybe a forgotten skill in our students, which is the ability to prompt the AI with innovative or insightful questions, and then also to critically analyze what the output is. Because if you spend any time with these generators, it’s impressive – the breadth of knowledge that they have. But the depth is somewhat lacking. If someone is an expert in an area, they can pretty quickly find errors or sort of boilerplate responses. And so we can encourage our students not only to ask better questions, but to then critically analyze and critically read the responses and then offer, you know, rebuttals to what the AI generates.
How widespread is that sentiment among colleagues? I have not heard this sort of conversation about AI, at least in the popular press. A lot of it has been very much sort of a meltdown over how much change this might bring about.
Yeah, there was a lot of doom and gloom, I would say, initially, and I’m sort of pushing against that – trying to be not so nihilistic, to be more realistic. But that said, I understand why there’s a lot of concern, and I think there’s a lot of pessimism currently. But I think that happens with any kind of novel technology. There’s a lot of hand-wringing at first, and then we adjust and we adapt. And I think, again, we’re better off spending our resources not fighting against this, but adapting to it.