Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers

Authored by rollingstone.com and submitted by Sariel007

A number of seniors at Texas A&M University–Commerce who already walked the stage at graduation this year have been temporarily denied their diplomas after a professor ineptly used AI software to assess their final assignments, the partner of a student in his class — known as DearKick on Reddit — claims to Rolling Stone.

Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email on Monday to a group of students informing them that he had submitted grades for their last three essay assignments of the semester. Everyone would be receiving an “X” in the course, Mumm explained, because he had used “Chat GTP” (the OpenAI chatbot is actually called “ChatGPT”) to test whether they’d used the software to write the papers — and the bot claimed to have authored every single one.

“I copy and paste your responses in [ChatGPT] and [it] will tell me if the program generated the content,” he wrote, saying he had tested each paper twice. He offered the class a makeup assignment to avoid the failing grade — which could otherwise, in theory, threaten their graduation status.

There’s just one problem: ChatGPT doesn’t work that way. The bot isn’t made to detect material composed by AI — or even material produced by itself — and is known to sometimes emit damaging misinformation. With very little prodding, ChatGPT will even claim to have written passages from famous novels such as Crime and Punishment. Educators can choose among a wide variety of effective AI and plagiarism detection tools to assess whether students have completed assignments themselves, including Winston AI and Content at Scale; ChatGPT is not among them. And OpenAI’s own tool for determining whether a text was written by a bot has been judged “not very accurate” by a digital marketing agency that recommends tech resources to businesses.

But all that would apparently be news to Mumm, who appeared so out of his depth as to incorrectly name the software he was misusing. Students claim they supplied him with proof they hadn’t used ChatGPT — exonerating timestamps on the Google Documents they used to complete the homework — but that he initially ignored this, commenting in the school’s grading software system, “I don’t grade AI bullshit.” (Mumm did not return Rolling Stone’s request for comment.)

Mumm’s email was shared on Reddit by a user with the handle DearKick, who identified himself as the fiancé of one of the students to receive a failing grade for submitting supposedly AI-generated essays. He claims to Rolling Stone that his partner had never heard of ChatGPT herself and was baffled by the accusation, noting that “she feels even worse considering it’s something she knows nothing about.” She immediately “reached out to the dean and CC’d the president of the university,” DearKick alleges, but received no immediate assistance and went to plead her case with administrators in person on Tuesday. DearKick adds that Mumm allegedly flunked “several” whole classes in similar fashion rather than question the validity of his methods for detecting cheaters.

In an amusing wrinkle, Mumm’s claims appear to be undercut by a simple experiment using ChatGPT. On Tuesday, redditor Delicious_Village112 found an abstract of Mumm’s doctoral dissertation on pig farming and submitted a section of that paper to the bot, asking if it might have written the paragraph. “Yes, the passage you shared could indeed have been generated by a language model like ChatGPT, given the right prompt,” the program answered. “The text contains several characteristics that are consistent with AI-generated content.” At the request of other redditors, Delicious_Village112 also submitted Mumm’s email to students about their presumed AI deception, asking the same question. “Yes, I wrote the content you’ve shared,” ChatGPT replied. Yet the bot also clarified: “If someone used my abilities to help draft an email, I wouldn’t have a record of it.”

DearKick tells Rolling Stone that his fiancée’s meeting on Tuesday with the university’s Dean of Agricultural Science “should clear things up, I hope,” and speculates that Mumm had little familiarity with chatbots before attempting to run student papers through one. In an update to his original post, he revealed that while at least one student has already been exonerated through Google Docs timestamps and received an apology from Mumm, another two had admitted to using ChatGPT earlier in the semester, which “no doubt greatly complicates the situation for those who did not.”

In a statement sent to Rolling Stone on Wednesday, Texas A&M University-Commerce said it was investigating the incident and developing policies related to AI in the classroom. The university denied that anyone had received a failing grade. “A&M-Commerce confirms that no students failed the class or were barred from graduating because of this issue,” the school noted. “Dr. Jared Mumm, the class professor, is working individually with students regarding their last written assignments. Some students received a temporary grade of ‘X’ — which indicates ‘incomplete’ — to allow the professor and students time to determine whether AI was used to write their assignments and, if so, at what level.” The university also confirmed that several students had been cleared of any academic dishonesty.

“University officials are investigating the incident and developing policies to address the use or misuse of AI technology in the classroom,” the statement continued. “They are also working to adopt AI detection tools and other resources to manage the intersection of AI technology and higher education. The use of AI in coursework is a rapidly changing issue that confronts all learning institutions.”

While teachers are certainly justified in their suspicion that students may attempt to complete written work with assistance from AI, this kerfuffle demonstrates how the issue cuts both ways: In order to reliably identify or prevent this kind of cheating, professors and school administrators need a basic grasp of the tech involved. The best-case scenario here is that everyone involved got a crash course in how it actually works.

UPDATE: This article was updated at 2:17 p.m. ET, May 17, to include a statement from Texas A&M University-Commerce.

DugCoal on May 17th, 2023 at 23:34 UTC »

I get that this is new stuff and mistakes will be made, but when your new tool tells you every single student cheated then it defies logic that you wouldn't double check. Clearly he has no decent relationship with any of his students and despises them all and for that reason--not for making a mistake--he should be removed.

HobbesDaBobbes on May 17th, 2023 at 23:32 UTC »

Wouldn't a logical person take pause and think, "Clearly not all of my students cheated with AI"? Isn't that statistically so unlikely that it should raise some red flags for your detection process?

Thoughtful_Mouse on May 17th, 2023 at 23:09 UTC »

He flunked them because he believed they used AI to do their work for them, a conclusion he came to when he used... AI to... do his work for him?