Revealed: Thousands of UK university students caught cheating using AI
-
Personally, I think we have homework the wrong way around. Instead of teaching the subject in class and then assigning practice for home, we should learn the subject at home and do the practice in class.
I always found it easier to read up on something and get an idea of a concept by myself. But when trying to solve the problems I ran into questions, and no one was there to ask. If the problems were solved in class, I could ask fellow students or the teacher.
Plus if the kids want to learn the concept from ChatGPT or Wikipedia that's fine by me as long as they learn it somehow.
Of course this does not apply to all concepts, subjects and such but as a general rule I think it works.
This is mostly the purpose of my homework. I assign daily homework. I don't expect students to get the correct answers, but instead to attempt them and then come to class with questions. My lectures are typically short so that I can dedicate class time to solving problems and homework assignments.
I always open my class with "does anyone have any questions on the homework?". Prior to ChatGPT, students would ask me to go through all the homework, since much of my homework is difficult. Last semester though, with so many students using ChatGPT, they rarely asked me about the homework... I would often follow up with "Really? No questions at all?"
-
Seems like an awful lot of debt to go into for something that's really not that valuable. If the certificate is the goal then a masters or PhD will end up being what's needed and faking your way through undergrad won't do much good.
This is a story about the UK, not the US, though I imagine the situation is similar.
-
This is a story about the UK, not the US, though I imagine the situation is similar.
I don't understand what point you're trying to make. I know it's about the UK...?
-
I don't understand what point you're trying to make. I know it's about the UK...?
Who's going into debt to be at university in the UK?
-
Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg
Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.
A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.
Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.
The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.
If ChatGPT can effectively do the work for you, then is it really necessary to do the work? Nobody is saying to go to the library and find a book instead of letting a search engine do the work for you. Education has to evolve, and so does the testing. There are a lot of things GPTs can't do well. Grade on that.
-
If ChatGPT can effectively do the work for you, then is it really necessary to do the work? Nobody is saying to go to the library and find a book instead of letting a search engine do the work for you. Education has to evolve, and so does the testing. There are a lot of things GPTs can't do well. Grade on that.
The "work" that LLMs are doing here is "being educated".
Like, when a prof says "read this book and write a paper answering these questions", they aren't doing that because the world needs another paper written. They are inviting the student to go on a journey, one that is designed to change the person who travels that path.
-
Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg
Surprise motherfuckers. Maybe don't give grant money to LLM snakeoil fuckers, and maybe don't allow mass for-profit copyright violations.
-
The "work" that LLMs are doing here is "being educated".
Like, when a prof says "read this book and write a paper answering these questions", they aren't doing that because the world needs another paper written. They are inviting the student to go on a journey, one that is designed to change the person who travels that path.
Education needs to change too. Have students do something hands on.
-
Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg
Actually caught, or caught with "AI detection" software?
-
Education needs to change too. Have students do something hands on.
Hands on, like engage with prior material on the subject and formulate complex ideas based on that...?
Sarcasm aside, asking students to do something in lab often requires them to have gained an understanding of the material, an understanding they utterly lack if they use AI to do their work. Although tbf this in-person lack of understanding is really the #1 way we catch students who are using AI.
-
Hands on, like engage with prior material on the subject and formulate complex ideas based on that...?
Sarcasm aside, asking students to do something in lab often requires them to have gained an understanding of the material, an understanding they utterly lack if they use AI to do their work. Although tbf this in-person lack of understanding is really the #1 way we catch students who are using AI.
Class discussion. Live presentations with question and answer. Save papers for supplementing hands on research.
-
Class discussion. Live presentations with question and answer. Save papers for supplementing hands on research.
Have you seen the size of these classrooms? It's not uncommon for lecture halls to seat 200+ students. You're thinking that each student is going to present? Are they all going to create a presentation for each piece of info they learn? 200 presentations a day every day? Or are they each going to present one thing? What does a student do during the other 199 presentations? When does the teacher (the expert in the subject) provide any value in this learning experience?
There's too much to learn to have people only learning by presenting.
-
Maybe we need a new way to approach school. I don't think I agree with turning education into a competition where the difficulty is curved towards the most competitive, creating a system so difficult that students need to edge each other out any way they can.
I guess what I don’t understand is what changed? Is everything homework now? When I was in school, even college, a significant percentage of learning was in class work, pop quizzes, and weekly closed book tests. How are these kids using LLMs so much for class if a large portion of the work is still in the classroom? Or is that just not the case anymore? It’s not like ChatGPT can handwrite an essay in pencil or give an in person presentation (yet).
-
Have you seen the size of these classrooms? It's not uncommon for lecture halls to seat 200+ students. You're thinking that each student is going to present? Are they all going to create a presentation for each piece of info they learn? 200 presentations a day every day? Or are they each going to present one thing? What does a student do during the other 199 presentations? When does the teacher (the expert in the subject) provide any value in this learning experience?
There's too much to learn to have people only learning by presenting.
Have you seen the cost of tuition? Hire more professors and run smaller classes.
Anyways, undergrad isn’t even that important in the grand scheme of things. Let people cheat and let that show when they apply for entry level jobs or higher education. If they can be successful after cheating in undergrad, then does it even matter?
When you get to grad school and beyond is what really matters. Speaking from a US perspective.
-
Actually caught, or caught with "AI detection" software?
Actually caught. That's why it's the tip of the iceberg: all the cases that were not caught.
-
Have you seen the cost of tuition? Hire more professors and run smaller classes.
Anyways, undergrad isn’t even that important in the grand scheme of things. Let people cheat and let that show when they apply for entry level jobs or higher education. If they can be successful after cheating in undergrad, then does it even matter?
When you get to grad school and beyond is what really matters. Speaking from a US perspective.
"Let them cheat"
I mean, yeah, that's one way to go. You could say "the students who cheat are only cheating themselves" as well. And you'd be half right about that.
Most often, I see two reasons behind the articles from professors who are waving the warning flags. First, these students aren't just cheating themselves. There are only so many spots available for post-grad work or jobs that require a degree. Folks who are actually putting the time into learning the material are being drowned in a sea of folks who have gotten just as far without doing so.
And the second reason I think is more important. Many of these professors have dedicated their lives to teaching their subject to the next generation. They want to help others learn. That is being compromised by a massively disruptive technology. The article linked here provides evidence of that, and therefore deserves more than a casual "teach better! the tech isn't going away".
-
Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg
we're doomed
-
Surprise motherfuckers. Maybe don't give grant money to LLM snakeoil fuckers, and maybe don't allow mass for-profit copyright violations.
So is it snake oil, or dangerously effective (to the point it enables evil)?
-
So is it snake oil, or dangerously effective (to the point it enables evil)?
it is snake oil in the sense that it is being sold as "AI", which it isn't. It is dangerous because LLMs can be used for targeted manipulation of millions if not billions of people.
-
it is snake oil in the sense that it is being sold as "AI", which it isn't. It is dangerous because LLMs can be used for targeted manipulation of millions if not billions of people.
Yeah, I do worry about that. We haven't seen much in the way of propaganda bots or even LLM scams, but the potential is there.
Hopefully, people will learn to be skeptical the way they did with photoshopped photos, and not the way they didn't with where their data is going.