Concerned about ChatGPT, universities begin to review teaching methods

Antony Aumann, a professor of philosophy at Northern Michigan University, was grading student essays in his world religions course last month when he came across “by far the best text in the class.” The paper analyzed the morality of the burqa ban, with crisp paragraphs, apt examples, and rigorous arguments.

Aumann immediately suspected something was off.

He called the student in to ask whether he had written the essay himself. The student admitted that he had used ChatGPT, a chatbot that provides information, explains concepts and generates ideas in simple sentences – and that, in this case, it had written the essay for him.

Alarmed by the discovery, Aumann decided to change how essays are written in his courses this semester. He intends to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity, and to explain every change they make in later versions. Aumann, who is considering dropping essays altogether in subsequent semesters, also plans to bring ChatGPT into his classes, asking students to evaluate the chatbot’s responses.

“What’s going to happen in the classroom is no longer going to be ‘here are some questions – let’s discuss this among us human beings,’” he said, but rather something like, “‘what does this alien robot think about the question?’”

Across the country, professors like Aumann, department heads and university administrators are beginning to reassess classroom procedures in response to ChatGPT, leading to a potentially massive transformation in teaching and learning. Some professors are completely redesigning their courses, embracing changes that include more oral exams, group work, and assignments that need to be handwritten rather than typed.

The initiatives are part of an effort undertaken in real time to deal with a new wave of technology known as generative artificial intelligence. Launched in November by the OpenAI artificial intelligence lab, ChatGPT is at the forefront of this wave. In response to short requests, the chatbot generates surprisingly well-articulated and nuanced text, so much so that people are using it to write love letters, poetry, fan fiction — and their schoolwork.

The news is causing confusion at some high schools, where teachers and administrators struggle to discern whether students are using the chatbot to do their homework. To prevent cheating, some public school networks, including those in New York and Seattle, have banned the tool on their Wi-Fi networks and on school computers. But students have no trouble finding ways around the ban and accessing ChatGPT.

In higher education, colleges and universities have been reluctant to ban the AI tool. Administrators doubt that a ban would work and do not want to infringe on academic freedom. Instead, instructors are changing how they teach.

“We’re looking to institute general policies that uphold a teacher’s authority to run a class,” rather than going after specific methods of cheating, said Joe Glover, provost of the University of Florida. “This is not going to be the last innovation we have to deal with.”

This is especially true as generative AI is still in its infancy. OpenAI plans to release another tool soon, GPT-4, which will be better than previous versions at generating text. Google built rival chatbot LaMDA, and Microsoft is discussing a $10 billion investment in OpenAI. Some Silicon Valley startups, including Stability AI and Character.AI, are also working on generative AI tools.

An OpenAI representative said the lab recognizes that its programs can be used to deceive people and is developing technology to help people identify text generated by ChatGPT.

ChatGPT has leapt to the top of the agenda at many universities. Administrators are creating task forces and holding institution-wide discussions to decide how to respond to the tool. Much of the guidance being proposed amounts to adapting to the technology.

At institutions including George Washington University in Washington, Rutgers University in New Brunswick, New Jersey, and Appalachian State University in Boone, North Carolina, faculty are cutting back on the take-home assignments that became the dominant assessment method during the pandemic but now look vulnerable to chatbots. They are opting instead for in-class assignments, handwritten work, group projects, and oral exams.

No more instructions like “write five pages on topic x”. Instead, some teachers prepare questions they hope are too intricate for chatbots to answer and ask students to write about their own lives and current events.

Sid Dobrin, chair of the English department at the University of Florida, said that students use ChatGPT to plagiarize because many assignments lend themselves to plagiarism.

Frederick Luis Aldama, the humanities chair at the University of Texas at Austin, said he intends to teach newer or more niche texts that ChatGPT may have less information about – for example, William Shakespeare’s early sonnets instead of “A Midsummer Night’s Dream.”

He suggested the chatbot could motivate people who lean on canonical, primary texts to step out of their comfort zones and seek out material that is not online.

If the new methods they have adopted fail to prevent plagiarism, Aldama and other professors said they intend to tighten their expectations and evaluation criteria. It is no longer enough for an essay or a paper to have a thesis, an introduction, supporting paragraphs and a conclusion.

“We need to up our game,” Aldama said. “The imagination, creativity and innovative analysis that normally earn an A now need to be present in work that earns a B.”

Universities also want to educate students about new AI tools. The University at Buffalo in New York and Furman University in Greenville, South Carolina, said they intend to build a discussion of AI tools into required courses that introduce concepts like academic integrity to freshmen.

“We need to build a scenario around this so students can see a concrete example,” said Kelly Ahuna, director of academic integrity at the University at Buffalo. “Instead of catching problems when they happen, we want to prevent them from happening.”

Other universities are trying to set limits on the use of AI. Washington University in St. Louis and the University of Vermont in Burlington are revising their academic integrity policies so that their definitions of plagiarism include generative AI.

John Dyer, vice president of admissions and educational technologies at Dallas Theological Seminary, said the language in the seminary’s honor code “was already feeling a little archaic anyway.” He intends to update its definition of plagiarism to include “using text written by a generation system and passing it off as one’s own (for example, entering a prompt into an artificial intelligence tool and using the output in an academic paper).”

The misuse of AI tools is unlikely to go away, which is why some professors and universities said they intend to use detectors to root out the practice. The plagiarism detection service Turnitin said it will begin incorporating more features this year to identify AI-generated text, including text from ChatGPT.

More than 6,000 professors from Harvard, Yale, the University of Rhode Island and elsewhere have signed up to use GPTZero, a program that promises to quickly detect AI-generated text, according to Edward Tian, its creator and a final-year student at Princeton University.

Some students find it helpful to use AI tools to learn. Lizzie Shackney, 27, studying law and design at the University of Pennsylvania, started using ChatGPT to generate ideas for academic papers and debug coding problem sets.

“There are classes where they want you to share and not waste time on unnecessary work,” she said of her computer science and statistics classes. “Where my brain is useful is in understanding what the code means.”

But she has misgivings. ChatGPT, she said, sometimes explains ideas and cites sources incorrectly. The University of Pennsylvania has not adopted rules on the tool, and she does not want to rely on ChatGPT in case the university bans it or considers its use cheating.

Other students have no such qualms, sharing on forums like Reddit that they have turned in assignments written, and problem sets solved, by ChatGPT – and that they have sometimes done so for classmates as well. On TikTok, the hashtag #chatgpt has drawn more than 578 million views, with users sharing videos of the tool writing academic papers and solving coding problems.

One video shows a student copying a multiple-choice test and pasting it into the tool, with the caption: “I don’t know about you, but I’m going to have ChatGPT do my final exams. Have fun studying!”
