The vast majority of people go to school to be "employable" (or because it's what you're "supposed to do"), not to learn. The only thing that matters is your grade, so people optimize for that. With grades being the only thing that matters, plus time pressure (you must graduate by a certain date), it's no surprise that students offload their work to ChatGPT. They don't care about learning; they care about what is actually valued: their grade and graduating. If those things didn't matter, people wouldn't use ChatGPT, because their incentive would be to learn. You see this with older people who go back to college or take community college classes: they aren't cheating, because there is no incentive to cheat.
Not sure this was the fault of ChatGPT as much as it was the fault of disinterested students bullshitting a class for credits. I've seen similar bad work in group projects when I was a student well before ChatGPT was a thing.
This
> Some of the sections were written to answer a subtly different question than the one we were supposed to answer. The writing wasn’t even answering the correct question.
Is my absolute biggest issue with LLMs - and it is really well put here.
It is like two concepts are really close in latent space, and the LLM projects the question to the wrong representation.
I am seeing this at work as well, where there is an increasing trend of people submitting code that they haven't thoroughly reviewed themselves and can't reason through.
I think your group is not communicating very well. Just telling a groupmate 'Now I will take over your work' is not very supportive. When people start editing each other's work out of the blue, there seems to be no healthy discussion at all.
This sucks, though it’s formative. An experience that I value highly from my time studying was working on a group project in a team with misaligned goals: it teaches you how much it matters to find good people to work with in the real world!
Might be worthwhile to have an actual conversation with your peers instead of just deleting their work and running over it. Maybe they thought your level of work was lower than what GPT produced? Maybe they thought it useful to have a filler draft instead of starting from a blank sheet? Maybe they have a sick parent at home and needed to fill something in just to move on? Complaining externally instead of communicating is just a very tiny step up from what your classmates are doing.
Also, having your entire semester spoiled by a few incidents caused by random passersby? Come on, university is for growing up, so start.
Ah, the hell of group projects. In my day, if it was a group assignment you would still end up doing all the work yourself 80% of the time, as the group just couldn't be arsed to produce anything and would rather play a game of chicken.
Worse yet, in 10% of the cases you'd get some clueless but very opinionated student wanting to be 'manager' and 'editor in chief', contributing nothing but bossing everyone around.
And yes, in 10% of the cases, you would get another student who was actually smart and helpful.
So is it better to have LLM slop rather than nothing at all? Probably not, but it's not as if those people would have turned in good contributions otherwise.
If it weren't ChatGPT, those students are more than likely the kind who buy solutions so that they still don't have to work.
Some people somehow think that having more while working less is an act of resourcefulness. To some extent maybe it is, since we shouldn't work for work's sake, but making "working less" a life goal doesn't seem right to me.
ChatGPT didn't ruin anything. Lazy students did.
I'm a prof, and my experience so far is that - where AI is concerned - there are two kinds of students: (1) those who use AI to support their learning, and (2) those who use AI to do their assignments, thus avoiding learning altogether.
In some classes this has flipped the bell curve on its head: lots of students at either end, and no one in the middle.