In classrooms, reactions to the rise of artificial intelligence have ranged from abject horror from some educators to excited adoption from others.
With ChatGPT approaching its third birthday, we’ve seen students and teachers alike issue all kinds of complaints and defenses, and this latest incident might take the cake as the most extreme backlash yet.
As New Zealand’s Stuff reports, some 115 postgraduate students at the country’s Lincoln University were flabbergasted to learn that they would all have to re-take a coding exam in person after their lecturer concluded that some of them had used AI to cheat.
In an email leaked to the kiwi outlet, students were told that there had been a “high number of suspected cases” of “unethical” AI use on the test.
“While I acknowledge that a small number of students may have extensive prior coding experience,” the email continued, the “probability of this being the case across many submissions is low.”
The instructor, whom Stuff chose not to name, added that the only way to “ensure fairness across all students” would be to re-assess them all in person and have them verbally defend their code as they went. The department head signed off on the approach, the lecturer added, citing school policies prohibiting “unethical” AI use.
“The rule is simple: if you wrote the code yourself, you can explain it,” the educator wrote in his email. “If you cannot explain it, you did not write it.”
With such a strict approach, it comes as little surprise that some of the implicated postgrads considered the teacher’s response an overreaction.
“What makes this particularly difficult is the atmosphere it has created,” one of the students, who asked to remain anonymous, told Stuff. “Many students feel under suspicion despite having done nothing wrong.”
“Being compelled to defend our work through live coding and interrogation, with the threat of disciplinary action if we falter, is extremely stressful and unorthodox,” they added.
That same student said the email’s wording made it seem like they would be disciplined if they didn’t comply or failed to pass the lecturer’s test. Indeed, the teacher added that any student whom they determined had used AI, or even those who failed to re-book their exam, would be reported to Lincoln’s provost.
“That atmosphere of ‘one slip and you’re guilty’ is what is creating such unease,” the student complained.
While we’ve seen educators fail students under false suspicion of AI use before, it’s generally been on an individual basis — except for the Texas A&M University professor who failed half his class back in 2023 because, ironically, ChatGPT incorrectly clocked their papers as AI-written.
Because Stuff withheld the Lincoln lecturer’s name, we can’t reach out to him to ask about his severe reaction, but we’d wager there’s a good likelihood that his answer would include, at the very least, some strong words.
More on AI and academia: Founder of Google’s Generative AI Team Says Don’t Even Bother Getting a Law or Medical Degree, Because AI’s Going to Destroy Both Those Careers Before You Can Even Graduate