News, Academics

Dean’s Colloquium Examines the Role of Artificial Intelligence at Universities

Getting ahead of artificial intelligence (AI) and understanding the value of education are important in preventing total domination by AI, according to John FitzGibbon.

“If we can get out ahead of it … explaining to students about the value of history, the value of learning how to code, the value of economics, the value of theology, and communicate that, and then empower students to understand that before technology becomes pervasive, that’s where we have a huge opportunity,” FitzGibbon said.

The Morrissey College of Arts and Sciences hosted a panel on Thursday to discuss the implications of AI on teaching, research, and learning in universities.

FitzGibbon, associate director of the Center for Digital Innovation in Learning, said professors should adapt to the presence of AI by changing their grading criteria and creating assignments that can’t be completed by AI.

“Try ChatGPT out, see what it’s good at, see what it’s not good at, see what it’s missing, and that’s what you grade students on,” FitzGibbon said. “[Students] have to answer the question asked, very specific questions that are answered. And you gotta have your own opinion and your own voice, and so generative AI cannot do any of it.”

AI can still be a tool for professors, according to Assistant Professor of Computer Science Nam Wook Kim, who said he uses ChatGPT in his research process to brainstorm ideas.

“In a way, my research process now like always starts with the brainstorming with ChatGPT,” Kim said. “So now I’m kind of—for all of the projects I’m doing right now—I’m always constantly thinking about how to orchestrate between these AIs and … human assistance.”

Professor of History Virginia Reinburg said she believes AI is another technology that humans will learn to work in tandem with.

“We’re looking at a theory of hybridity,” Reinburg said. “There’s not going to be a complete takeover of anything by this technology.”

Kim said that by using better judgment, humans can learn to use AI as a tool for amplifying their skills, and that while concerns about the impact of new technology are common, researchers are working to make AI safer.

“Every time we have new technology, we have these issues, and it’s good that we are thinking about them,” Kim said. “But I think the future is bright, and I know that talented machine learning researchers are working on trying to give them more safe [performance management] systems.” 

One of Kim’s fears when it comes to AI is that technology is changing the expectations of how much work students should be able to produce.

“Now, with ChatGPT and many Ph.D. students, we are expecting Ph.D. students to produce more papers when they graduate,” Kim said. “We’re expecting undergrad students to have more things on their resume. Because we have this ChatGPT, our expectations also change as well.”

Reinburg said she uses AI to connect with students who are struggling to understand the technology, and that struggling to understand AI is natural.

“I think it’s fine to struggle, and I want to see the students struggle, and I’m willing to expose my own struggles to them and to other people,” Reinburg said. “That shows what’s human about what I’m doing, as opposed to something that is just a product that embraces all struggles, and produces a clear narrative of some kind.” 

February 18, 2024