The End of the Essay?

Staff and faculty on the AI revolution... and keeping it all in perspective

- December 18, 2023

An AI-generated robot uses a typewriter (stock photo).

Kate Crane’s advice to professors worried about new artificial intelligence (AI) tools comes down to two simple words: don’t panic.

An educational developer with Dal’s Centre for Learning and Teaching (CLT), Crane says, “I would just want to remind professors that they should prepare, but they don’t necessarily have to turn their worlds upside down.”


Kate Crane, educational developer (Nick Pearce photo).


Sometimes it feels like everything, not just teaching, is being turned upside-down by new generative AI tools.

Media are experimenting with articles written and illustrated by AI. Sometimes this goes wrong, as when Microsoft made the news for apparently AI-written travel articles that recommended Ottawa tourists visit a food bank “on an empty stomach.” Meanwhile, search engines and blogging tools are incorporating AI assistants and chatbots, and photo-enhancing software is using AI to blur the lines between reality and touch-up more than ever.

And your feeds are likely full of AI images, some of them passing themselves off as real. (No, those babies are not really parachuting, and that isn’t Van Gogh sitting on the front steps of his house at Arles.)

But what does it all mean for teaching, learning, and academic integrity? Does widespread adoption of ChatGPT mean the end of the essay as a meaningful evaluation tool? Are Dal’s academic integrity officers about to be swamped? And should professors ban AI, incorporate it, or embrace it?

A pedagogical problem and an integrity issue

In an online workshop held earlier this year, Computer Science professor Christian Blouin said AI tools like ChatGPT represent “a pedagogical problem that has a short-term academic integrity issue — and we need to sort ourselves out very quickly.” Dr. Blouin is Dal’s institutional lead for AI strategy, and he says it doesn’t make sense for a university with as many programs and disciplines as Dal to have one blanket policy on acceptable use of AI by students.

Related reading: Dal's AI lead aims to spark conversation and connection on our rapidly evolving information future (Dal News, July 25)

“In computer science, we’re thinking about AI-driven tools differently than in engineering, for example,” Dr. Blouin said in an interview. Meanwhile, in the arts and social sciences, “We are assessing critical thinking, but the medium through which we do that is writing.”

The discussion around AI tools quickly draws us from specifics to big-picture questions: What are universities for? What is the purpose of assignments? What are we assessing and evaluating?

When it comes to essays, for instance, “The point is not so much that you wrote something, but the process of thinking, and the process of articulating what’s underneath,” Dr. Blouin says. “A tool is not an agent, it’s not a person... People come to university so they can become citizens and professionals. And it’s really important that we provide them with an education and give them an assessment of their abilities in making decisions, and reasoning through, and thinking ethically.”


Jesse Albiston, pictured on the sofa wearing a cap, with colleagues from Bitstrapped and a robot dog.


AI in the workplace

Jesse Albiston (BComm’14) is a founder and partner at Bitstrapped, a Toronto-based consulting firm specializing in machine learning operations and data platforms. In short, they help companies figure out if and how they should be using AI.

The AI revolution has been good for Bitstrapped. Albiston says the company booked more work in the first quarter of this year than all of the previous year. At the same time, he cautions against jumping on the AI bandwagon just because that’s what everyone else is doing. When the firm is approached by clients who want to integrate AI into their workflows, “Half the time — maybe more than half the time — AI is not the right approach,” he says.

At the same time, he thinks learning how to use these tools should be an essential part of a university education — at least in some fields — because they are going to be an essential part of the workplace.

“If someone graduates university today, they should be using these tools. You’re not going to be replaced by AI. You’re going to be replaced by people using these tools,” Albiston says. “In my company, I have employees one or two years out of university who are using these tools, and their output is fantastic. They just need a bit of coaching on how it works.”

But if they “just need a bit of coaching,” is that something a university should be providing? Dr. Blouin is not so sure. He says graduates will definitely encounter AI integrated into tools like office suites. But universities should take a longer-term view, preparing students for careers that will last decades. (How many of us learned high school tech skills we never used again, because technology had moved on?) That means thinking beyond ChatGPT and related large language model (LLM) tools.

Even if professors do want to integrate tools like ChatGPT, Crane says they should proceed with caution. While she believes “experimentation is good,” she notes that at Dal, instructors are not allowed to require students to use AI for coursework. Apart from any pedagogical concerns, “There are data privacy concerns,” she says. The CLT says on its website that making the use of AI tools mandatory for a class contravenes Nova Scotia privacy law and Dalhousie’s Protection of Personal Information Policy.

Process over output

English professor Rohan Maitzen, who teaches both literature and writing, feels “resentment towards the people who are propagating these systems on us without our permission.” Teaching and learning writing is more about process than output, she says. And ChatGPT can’t help with that. But because it offers the promise of producing passable essays quickly and easily, Dr. Maitzen says she and her colleagues are worried.

“We can’t ignore the fact that this is a tool designed to take over the writing process,” says Dr. Maitzen.

“Right from the moment you think, ‘What am I even going to write about?’ that begins your own individual process of figuring something out and putting your mind in contact with it. You can’t outsource that work to a machine. It’s an act of communication between you and the person you’re writing for.”

Dr. Maitzen has already received at least one assignment written by ChatGPT. One of the tool’s well-known shortcomings is that it makes up information, such as false citations and inaccurate “facts.” She assigned a reflection on a short poem and received an essay with one critical problem: “The quotations the paper included were not in the poem. They don’t exist at all,” Dr. Maitzen says. “So it wasn’t a mystery looking at this paper — and looking at the relatively short poem that the paper was supposed to be about — there was just no correlation whatsoever.”

Avoiding an arms race

This, of course, brings up the question of cheating and academic integrity. Bob Mann is the manager of discipline appeals for the university secretariat. He said the number of cases referred to academic integrity officers has “gone up dramatically” in the last few years—although that might be because of greater detection. Mann said sometimes students are deliberately cheating, but often they “are just trying to figure things out” and “inadvertently commit offences.”

He expects AI tools to make him busier this year. “I call it Napster for homework,” he says. But it won’t necessitate a change in academic integrity rules. “Writing a paper using AI is not a specific offence we have on the books; a student is required to submit work that is their own. So the rules have not changed.”

But determining what constitutes a student’s own work has become harder (with exceptions, like the AI-fabricated quotes Dr. Maitzen encountered). In terms of enforcement, Mann cautions against assuming students are using AI, saying he has seen cases where accusations proved to be unfounded. Students who struggle with English or who don’t understand how to cite properly may be particularly vulnerable to these charges.

And while it might seem tempting to deploy ever-more-sophisticated tools to crack down, everyone interviewed for this story counselled against that approach.

On a basic level, if you are “suspicious of generative AI, why are you going to trust another piece of AI software with a decision that can cause harm? It makes no sense from an ethical perspective,” Dr. Blouin says.

Dr. Maitzen agrees and says “focusing on this as a discipline-and-punish problem is maybe counterproductive.” Plus, she has no interest in an AI arms race with students.

“It isn’t just about an enforcement problem. We would like to trust them, and we would like to engage with them in the spirit of trust and authenticity. And so we want them to understand what it is we’re really asking them to do, rather than just emphasizing what we’re telling them not to do,” she said.

Les T. Johnson, who, like Crane, is an educational developer at the CLT, also emphasizes conversation. He says, “I would want to make sure I had an honest conversation with my students about [AI].”

Crane agrees: “Talk to your students about implications of AI, for themselves, their communities, their learning, society.”

“Artificial intelligence” is not intelligent

A small number of students have always found ways to cheat or cut corners on assignments. They could copy passages out of books, hire people to write papers for them, buy essays online, and use any number of other tools. What AI has done is made it that much easier for students to use outside help—whether or not sanctioned by their professors.

But not that easy.

Despite the term “artificial intelligence,” there is nothing intelligent about ChatGPT and other LLMs. They recognize patterns and can create coherent sentences. That doesn’t mean students can always rely on them to write a decent essay. “It isn’t actually quite so easy as just logging on, putting in your assignment prompt, and it’ll just give you exactly what you need to get a C or above,” Crane says. “It takes some competency to get what you want out of it, and that takes time.”

Dr. Johnson has noticed professors seem less worried about students turning in AI-generated assignments than they were six months ago. That may be because a lot of students tried them and found them lacking. “I was thinking about ChatGPT in particular. When it first started, it was, ‘Oh my gosh, how exciting! I can have this computer write my essay!’” he says. “But of the students who used it, some may have been cited for plagiarism and then some just didn’t get good grades. And they’re like, well, wait a second — I didn’t learn anything, I didn’t even do that well, and it wasn’t that much easier... I’m just going to do it myself.”

Dr. Maitzen doesn’t blame students who are anxious about grades. They’ve grown up in a culture that increasingly tends to view university education as a commodity. “They don’t have a sense that it’s all right to take a risk, to just give it a try, to just say what they think,” Dr. Maitzen says. “They’re not sure they have the skills to do that, and they don’t have enough metacognition to realize that doing it online is exactly what prevents them from developing those skills — and it becomes a self-fulfilling prophecy.”

Related reading: Ask the experts: Where will artificial intelligence go next? (Dal News, June 5)

How broken are our courses?

Dr. Blouin said the Faculty of Computer Science hired a graduate student over the summer “to assess, if we change nothing in our curriculum, how broken our courses are.” Essentially, how far could students get using AI for assignments, without actually understanding any of the material or learning anything?


Dr. Christian Blouin (Nick Pearce photo).


In some cases, pretty far. “GPT is a pretty good programmer for simple stuff,” Dr. Blouin says. What the faculty found anecdotally was a lot of variation. “There are some third-year assignments that it does very well, and some first-year questions on which it falls apart very quickly.” One approach would be to rejig assignments that can be done by ChatGPT, but Dr. Blouin calls that “a dangerous game to play, because as soon as a new version comes out, your entire strategy may fall apart.”

Despite the AI revolution, Dr. Blouin says one fundamental thing has not changed: “You are intellectually responsible for the work that you produce.” And “universities should not be satisfied with something that looks like work. We are satisfied with and expect actual work from our students,” he says. “So I think it’s more an issue of personal and professional responsibility than it is of honesty.”

Still, Dr. Maitzen is planning on making some changes to her assignments, especially for larger introductory classes. That may mean more contract grading, in-class writing, and multiple-choice tests. (She is less worried about her upper-level Victorian literature courses.)

But she is hoping that honest conversations with students about getting the most out of university will carry the day. Crane agrees with that approach. A good assessment isn’t suddenly bad because there is a possibility AI could help with it. She urges faculty to “keep designing good assessments according to evidence-based practice.”


This story appeared in the DAL Magazine Fall/Winter 2023 issue.

