The lecturers learning to spot AI misconduct

4 hours ago

Ellis Maddison, Leicester

Dr Abiodun Egbetokun, Brett Koenig and Ruth McKie standing side by side on a cobbled street in the sun

With the rise of artificial intelligence (AI) and more students turning to technology in their studies, academics have a complex challenge on their hands.

University bosses at De Montfort University (DMU) are trying to strike a tricky balance between encouraging students to use AI ethically and preventing them from gaining an unfair advantage.

But with AI getting “better and better”, it is “becoming increasingly difficult to spot”, according to Dr Abiodun Egbetokun, associate professor of entrepreneurship and innovation at DMU.

Lecturers there are now being given guidance, as well as face-to-face training sessions, on how to spot the signs of wrongful AI use among students.

Dr Abiodun Egbetokun wearing a blue suit standing in front of a red brick building

“We’re still in the take-off stage [of AI],” said Dr Egbetokun, who is researching generative AI in higher education.

“There are more and more academics and students making use of different AI tools for different purposes,” he said.

Dr Egbetokun said the use of AI was hard to spot due to its “increasing capabilities” but academics were getting better at looking for “specific markers” such as a high repetition of the same word or Americanisms.

However, AI use at De Montfort University is not only allowed, it is encouraged – as long as it is used properly.

“Our role as educators has to be to encourage students to think critically about AI,” said Shushma Patel, pro vice-chancellor for artificial intelligence at DMU.

She said the university’s AI policy stated students could use AI to “support their thinking” or help them “fill in the blanks” when tasks were not clear, but they must demonstrate how they used it.

Mrs Patel said if a student submitted AI-generated material “as their own” or used it to “invent” references, it was classed as misconduct and they could face a disciplinary panel.

Jennifer Hing, academic practice officer, added: “The biggest thing to remember is, is it your work? Is it your words? If it is not, then you’ve crossed a line.”

Mrs Hing, who sits on misconduct panels, said the majority of AI misuse among students was accidental due to “a lack of skill or knowledge, or confidence”.

Brett wearing a cream jumper smiling on campus

Brett Koenig, associate head of education in business and law, has attended DMU’s sessions looking at how to spot wrongful AI use.

He said he did sometimes get a “gut feeling” when the technology had been used wrongfully, but the training had helped him to look for specific markers, such as certain punctuation.

“It’s an indication if it’s 50 or 60 references to [the word] ‘fostering’, for example, that it may have been generated by artificial intelligence.

“It’s not to say that’s cheating, and it’s not to say that’s plagiarism because people use that word – but that seems to be a popular word with ChatGPT,” he said.

DMU is not the only UK institution embracing the technology, with students and staff at the University of Oxford now having access to ChatGPT Edu – the education version of the AI tool ChatGPT.

A recent survey of thousands of students across the globe also found many used AI to help with their studies, but feared it could affect their future careers.

Dr Ruth McKie, senior lecturer in criminology at DMU, said it was important to “acknowledge the realities within society” that AI was now a “necessary tool”.

Dr McKie leaning against a black lamp post with her arms crossed

The lecturer said there was an understanding academics “need to be trained to catch people that use AI with the assumption that they are cheating in some way”.

“We need to start figuring out why [it’s been used], we can’t just automatically assume that they’ve used it for the easy route out,” she said.

Staff at DMU also said they had reservations about some AI detection software.

“I put some work through an AI checker just to see what it comes up with and most of the time it says all of it is AI-generated, and I know that’s rubbish,” said Dr McKie.

The 34-year-old said a friend put their PhD thesis through an AI checker just to see the outcome.

It said “it was 100% AI”, despite him using “no AI whatsoever”, Dr McKie added.

Mr Koenig said this was why the technology was not being used widely by lecturers.

“It’s as damaging to a student to falsely accuse them [of wrongful AI use] as it is to accurately say that they have committed plagiarism.

“So, it’s our job to move with the times rather than just try to catch them out,” he said.

Yassim in a green hoodie standing in front of a black reflective wall

DMU engineering student Yassim Hijji, 19, who speaks English as his third language, said he used AI in lectures to help him with more complex words or phrases.

“Sometimes I write the message in my language, which is Italian or Arabic, and then I ask it to translate it to English,” he said.

“If it helps you to get better then why not use it? It’s like using a book at the end of the day.”

Jodie and Lucy standing side by side, smiling

Nursing student Jodie Hurt, 37, said there was “definitely a place” for the technology as it could be used in her industry to “help protect” nurses and patients.

“There are tools that can be used in general practice that will help document a consultation between a nurse and a patient,” she said.

However, Lucy Harrigan – who also studies nursing – said there was a clear line when it came to using AI.

“You can’t copy and paste, you can’t use it as a reliable source,” she said.

The 36-year-old said it was “massive misconduct” to submit AI-generated work as your own.

“It’s not showing your knowledge, you’ve got to earn your degree,” she added.

