Column, Opinions

Is The Fear of AI in Academia Justified?

Artificial intelligence (AI) seems to have become every teacher’s greatest fear. It haunts the plagiarism section of every syllabus I received this semester. The fear of AI within academia has been well documented with the rise of ChatGPT, but the real question is whether those fears are justified.

Teachers are clearly torn over the use of AI in the classroom, but it seems like most have accepted that it is here to stay. Some teachers have focused on making essay questions AI-proof, while others have moved to entirely in-person testing. 

It is impossible to deny the risks of AI. Cheating and plagiarism are always going to be a problem for teachers, but AI presents new challenges—anyone with an internet connection can access these technologies.

It is also nearly impossible to prove whether AI has been used. OpenAI unveiled a tool about a year ago to detect ChatGPT in portions of text, but unfortunately for teachers everywhere, the tool was only correct 26 percent of the time.

The technology to accurately determine whether AI has been used simply doesn’t exist yet, but is it even necessary in most cases? More often than not, students aren’t using this technology in the way teachers fear.

A study sponsored by Turnitin showed that while AI usage among students this fall jumped to 49 percent, those numbers don’t reveal the whole picture. Students who reported using AI daily were not typically using it to write essays; their top use was to summarize texts. The other top reported uses were to help understand difficult concepts, organize schedules, and solve homework problems, with assistance on writing assignments coming last.

Obviously, from a teacher’s perspective, using AI to get out of homework assignments and reading is not ideal, but AI isn’t the first technology to help with these tasks. It’s clear that concerns about AI center on writing assignments, where students are supposed to showcase their critical thinking abilities.

The benefits of AI to help understand concepts and create outlines have been encouraged in my classroom experience. I have also encountered a lot of preventative AI measures in the very same classes. As a political science major who often takes classes where a handful of essays determine my entire grade, it makes sense that my teachers are apprehensive about the integration of AI.

Still, I cannot help but wonder why I rarely see my peers use AI to fully write essays. I know many students who use it to help brainstorm ideas, but most don’t want to risk using AI in higher-level classes.

I believe this trend goes beyond the fear of repercussions. Students simply don’t trust AI-written essays. Most students I know think they could write a better essay than AI, especially within a major class. Perhaps a business major trying to finish their English core would be more inclined to dabble in AI, but a student taking an advanced class for the degree that they chose to pursue seems less likely to risk it.

This distrust does not seem unfounded either. Professor S. Scott Graham at the University of Texas at Austin wrote an op-ed for Inside Higher Ed explaining just that. He and his students tested various forms of AI on the same essay assignment, and they concluded that the technology to create a good, consistent essay simply didn’t exist.

This is not to say that AI is not capable of good writing, but a key factor in the quality of an essay is what you are asking the AI. AI infamously “hallucinates,” or makes up facts and research. Additionally, good answers require specific and well-thought-out prompts on the part of the student. And even then, AI has a spotty track record with issues like ethics, critical thinking, and morality.

Even those who are most hopeful about AI advancements would not say it is at the processing level of a human brain. The technology advances regularly, but to me, it’s no wonder students may choose to steer clear of the allure of AI essays. It simply hasn’t reached a level of functionality that students are ready to trust for their written assignments and the disciplines they have chosen to pursue.

This leaves one question: Is the fear of AI, and the willingness to mangle our curriculums around it, worth it for a threat that may be, at best, misunderstood?

March 18, 2024