Postingatthismoment

It’s handy when AI just gives them a terrible paper.  Then it doesn’t matter whether someone thinks it’s a tool; they turned in a failing assignment.  


Huck68finn

True, but I don't even want to award a 50 to an AI-generated paper. I want it to earn the 0 it deserves 


Postingatthismoment

If they turned in the wrong kind of paper altogether, why not just give it a zero?   I had a guy turn in an obviously AI paper the other day. I just failed him in the course for cheating.  


Huck68finn

Well, if it doesn't address the assignment, I agree. If it's just an awful version of the assignment, the zero wouldn't be justified unless I could prove it is AI


fishnoguns

>If they turned in the wrong kind of paper altogether, why not just give it a zero?

Because usually it is not *that* bad. Surface-level, but it would have been a passing grade in pre-ChatGPT times. Especially for first- or second-year undergraduate classes.

>an obviously AI paper the other day.

The problem is evidence. We let students write a monitored essay without internet for one specific class. Some students legitimately do write as unfocused as an AI.


uttamattamakin

For the first five papers in a class, I had the students write without AI. Then I trained them in how to make it write exactly the paper they were supposed to write, and from there they were to expand on what the AI did. They got bad grades if they didn't do exactly what I told them. You need to give students directions that take into account the fact that they're going to use AI, but make them use it in a very particular sort of way.


Postingatthismoment

Why would I do that when the rest of the class actually wrote respectable papers themselves? I’m not going to sacrifice good students to lazy idiots.  


ohwrite

I agree with this. I had students who followed directions. This student chose not to


uttamattamakin

Well, then you give those students the grades they deserve. It doesn't matter if they used an AI to not follow your directions. Some of us reserve the right to give directions that include using AI in an intellectually honest and proper manner. Just as, after a certain point, we allow people to use a calculator in math class: past that point the true human rigor isn't arithmetic but calculus.


Hpstorian

The research makes it pretty clear that educators are not capable, at least in lab conditions, of telling the difference between the writing of an AI and the writing of a human. There's even evidence that educators with more experience are more likely to be overconfident.

If you catch students, it will be the ones who don't know what they're doing. But AI is already so ubiquitous, and so integrated into the programs students use habitually, that the essay that *doesn't* use AI is probably the exception. I know this because, as well as being an academic, I spend a lot of time at the library doing consultations with students; I see them working, and this is just the reality.

If your entire course isn't designed to make AI use more trouble than it's worth, then you're only catching the students who don't know the tools well enough to make it believable.


HowlingFantods5564

Can you link the research? Genuinely interested. If a student has strong enough writing abilities to effectively mask the use of AI, convert it into their own voice, and add their own sources and understanding of context, then I'm not that worried about it. But those who rely on unedited AI output should not pass the course. Those are the ones I'm trying to catch, and it's not hard.


Hpstorian

Fleckenstein et al. 2024, "Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays," Computers and Education: Artificial Intelligence, vol. 6. The abstract:

>The potential application of generative artificial intelligence (AI) in schools and universities poses great challenges, especially for the assessment of students' texts. Previous research has shown that people generally have difficulty distinguishing AI-generated from human-written texts; however, the ability of teachers to identify an AI-generated text among student essays has not yet been investigated. Here we show in two experimental studies that novice (N = 89) and experienced teachers (N = 200) could not identify texts generated by ChatGPT among student-written texts. However, there are some indications that more experienced teachers made more differentiated and more accurate judgments. Furthermore, both groups were overconfident in their judgments. Effects of real and assumed source on quality assessment were heterogeneous. Our findings demonstrate that with relatively little prompting, current AI can generate texts that are not detectable for teachers, which poses a challenge to schools and universities in grading student essays. Our study provides empirical evidence for the current debate regarding exam strategies in schools and universities in light of the latest technological developments.

I mentioned lab conditions because sometimes there can be other assessments, or personal familiarity with a student, that might change this. But in general I think there's compelling evidence that we should never trust that we can identify AI use with confidence. Disturbingly, good work was more likely to be wrongly identified as AI.


naocalemala

Nope.


Historical_Grab_4789

I usually think this way, too--until I realize that I am spending part of my life grading a robot. 😖 However, I guess if the bad grade shows a student that AI is fallible, then the time spent grading it is worth it?🤷‍♀️


Postingatthismoment

That's what I'm hoping. Also, the really abysmal ones I've seen have been obvious enough that I could just skim the actual paper…the huge time suck is in looking up the references. It is definitely a massive waste of my time, like any cheating I catch, because I then have to pull together the evidence to submit to student conduct.


Historical_Grab_4789

Oh my gosh, the references! Those are the worst! AI is so bad about them, and students don't even realize it. I mean, I feel like telling students, if you're going to use AI, at least check the references and citations!! 🤦‍♀️


Postingatthismoment

Yes, it's crazy. I had a student cite a two-page pediatric psychology overview of mental health challenges of the Ukraine war. I assure you that the paper was NOT about mental health in any way whatsoever. There was a direct quote--that did not exist in the article. Dude...it was so obvious. I had another student earlier in the semester citing page 283 of a book that had under 200 pages. I guess if you are lazy, you are lazy....


Historical_Grab_4789

🤦‍♀️🤦‍♀️🤦‍♀️


Acceptable_Month9310

>she believes AI is a "tool."

As a CS professor who teaches machine learning, I hate responses like these. Of ***course*** it's a tool, but that doesn't mean it's an ***appropriate*** tool for every task. While there are several good uses for AI and even ChatGPT, I rarely see good examples of it being used in an educational context. In fact, I rather suspect that these tools perform ***better*** in some educational contexts than they ever would in real life, causing their value as "tools" to be vastly overestimated.

Writing code, for example: I can get ChatGPT to write the kind of code I wish all my students would hand in, for a wide variety of assignments. However, I regularly throw simple things at it and have it produce nothing useful. Why? An LLM has modeled a response to the kind of question you ask. The more times someone has answered the same question -- such as the elementary kinds of assignments we give to students -- the better it's going to answer it. Which means "learning to use ChatGPT as a tool" in these contexts may well be setting students up to fail.


GeorgeCharlesCooper

I'd tell her, "Yeah, crowbars are tools, too, but I think we'd prefer folks use keys to open doors around here."


AsturiusMatamoros

This is so dangerous. It works well… for toy examples we expect students to solve.


GriffinGalang

Oh, to revert to oral exams!


RuralWAH

A spell checker is a tool as well, but you wouldn't use it on a spelling test.


CodeBirder

A great point. Makes it quite clear why it is so troubling for students to use AI on a writing assignment, especially when it is supposed to be an assessment focused on writing ability. I wonder if students view final assignments in the same light, though. I mean, they are encouraged to go through the whole writing process: drafting, review, various types of editing and proofreading. Many of us encourage getting feedback from a writing center or some similar practice. Depending on the class and assignment, they may have literal weeks between getting the prompt and submitting a final draft. How many ways do we signal students to use every resource they can to improve the paper? If we are trying to assess their writing ability, it seems there are many ways this type of assessment is already flawed.


Pikaus

I had 6 options for a final essay and a student did them all. With AI.


Interesting_Chart30

This is the first semester in a long time where I've had to turn in academic misconduct reports due to AI use. One student had a 99% similarity and a 44% AI score on his paper. As part of the assignment, students are required to use an online or in-person tutor before they upload the final version. Even the questions to the tutor were copied. I haven't seen such flowery language outside of Wordsworth.

We have to turn in an academic misconduct form, a report detailing why it is being submitted, and a copy of the similarity report. The student has five days to respond. He never responded, so I assume he realized he was sunk when he saw the report. The other student used AI to find synonyms, which resulted in a very weird word salad. She didn't respond to her report either.

I don't understand why they think it won't be noticed. Do they get away with it in high school?


ProfessorCH

From my experience (I have a kid in high school and teach an obscene number of dual-enrolled students), they rarely receive feedback, and most items are graded on submission, not on the actual content. My son fights with me about my corrections to his work, basics of writing; he says teachers don't care and never say anything to anyone about it. I've experienced this with my dual-enrolled students as well: on some simple middle-school writing rules, they are clueless. I went so far as to ask my teacher friends about some of those rules, making sure they hadn't been altered, like the two-spaces-after-a-period rule changing. They don't capitalize; they start paragraphs and sentences with an actual numeral like 3.

I warn them on all submissions: if I see a pattern in their writing, like a consistently lowercase "i" or uncapitalized beginnings of sentences, I will stop grading the submission. If a student doesn't know the basics, they certainly shouldn't be utilizing a tool without learning those simple rules first. I am not anti-tool, but I am not handing my son my woodworking shop tools without him proving he knows the basic rules first. It's dangerous. I often ask myself "how did we get here"; there are so many different answers to that question.


SabertoothLotus

>I don't understand why they think it won't be noticed. Do they get away with it in high school?

Short answer: yes


bluebird-1515

I swear I have students who use it so much they are utterly dependent on it. Even after having "the conversation" about AI misuse, and tearful promises never to use it against policy again -- or forceful denials that they used it, when putting in my prompt generates the same ideas in the same order with many of the same key phrases -- more than half blithely try it again a few weeks later. Adult online learners are the worst about it. We shouldn't be accepting some of these folks, because they need stronger fundamental skills and should start at a solid CC. But, tuition $$$.

I am salty today because I am so tired of it. The AI writing I'm talking about is like going to someone's house and having them serve me McDonald's but swear it is food they made from scratch. You might be serving it on a glass plate with a metal fork, but I still know a McDonald's burger and fries when I encounter them.


Seacarius

>Adult online learners are the worst about it.

I found this . . . interesting. In my experience, adult learners, whether online or in person, are far less likely to use AI. So far, it's *always* been the younger kids. (By "adult learners," I'm referring to people who are generally over the age of 30.)

I give no quarter when it comes to AI and writing code. If a student uses AI and I can prove it, they're out: failed and withdrawn. They lose their tuition for the class and their GPA takes a hit. They may also run afoul of financial aid / veterans benefits. Most of my adult learners pay out of pocket and therefore don't want to flush their tuition money down the toilet. Also, they've been "out there and done that" and, as a result, have a greater appreciation for the importance of their education.


ProfBootyPhD

No, these are my homemade steamed hams!


kierabs

Students who lack these skills and are at CCs to gain them still use it, and they think it’s fine because it’s “just a CC.”


ConclusionRelative

"the head of my program for guidance but she believes AI is a “tool.” AI is a tool. A tool can be misused and abused.


Cornchip91

The internet is also a tool. It would still be inappropriate to use it on a closed-book test. AI as a tool? Yes, but there is a time and place for tools to be used.


No_Taro7770

I've decided to employ a 'trust' rubric. If I suspect that a submission has NOT been generated by the student, this damages their credibility in my eyes, as it would with a potential employer or client. That is when I apply the rubric. The rubric is a multiplier from 0 to 1, and it assesses Authenticity of Voice and Consistency with previous submissions. This shifts the argument. It says: 'I don't care if you use AI, but if I SUSPECT that you have used AI, then this impacts our relationship.' If a student uses AI, it is at their own risk.
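For concreteness, here is a minimal sketch of how such a multiplier could be applied. The equal weighting and the example numbers are hypothetical, purely for illustration, not my actual rubric:

```python
# Minimal sketch of a "trust multiplier" grade adjustment.
# The criterion weights and example numbers are hypothetical.

def trust_multiplier(authenticity: float, consistency: float) -> float:
    """Combine two 0-1 criterion scores into a single 0-1 multiplier."""
    m = 0.5 * authenticity + 0.5 * consistency  # equal weighting assumed
    return max(0.0, min(1.0, m))

def adjusted_grade(raw_score: float, authenticity: float, consistency: float) -> float:
    """Scale the raw grade by the trust multiplier."""
    return raw_score * trust_multiplier(authenticity, consistency)

# An 85/100 paper with a half-authentic voice (0.5) that is
# inconsistent with earlier submissions (0.4) comes out to 38.25.
print(adjusted_grade(85, 0.5, 0.4))
```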


throughthequad

Had one student's paper on Canvas come back as 58%. Met with the student and felt okay about the meeting, but wasn't overly convinced. A colleague told me about some other tools to check through an admin at the school…both came back 99.9% AI-written. I felt like an idiot for getting duped that badly.

Still not sure how I'll approach it, since I already met with the student and said I wouldn't fail him because I wasn't 100% sure, and he has been a decent student this semester. I feel like asking him how he did it; he can keep his grade if he tells me, or if he denies it again I'll just push it to the integrity board and let them deal with it. I'm just exhausted and wish they would realize it's part of learning, and having AI punch the keys (likely) isn't going to help them when they hit the real world.


Caddy15

AI detectors are notoriously prone to both false positives and false negatives. One student turned in a paper that literally said "I'm an AI, not a person, so I can't give an opinion," and Turnitin said 0% AI. Also, be sure not to give student data, including papers, to software without leave from both the student and the institution; otherwise you may be breaching both FERPA and copyright, not to mention training the AI on new data.


HowlingFantods5564

Putting a student essay, with name omitted, into a plagiarism or AI checker is not a FERPA violation.


Caddy15

Normally I'd agree, but since the text is permanently stored in the AI's database and the student's writing patterns may be identifiable, I wonder what the limits of that are. Copyright violation also isn't a slam dunk, but it's not as close a call.


throughthequad

The administration ran the queries. I appreciate the lookout, for sure.


Easy_East2185

Question: on the 99% positives, did you run the full paper through (including the title page)? If so, did it flag the student's name as AI-written? Because 99%+ sounds like it may have, and that's more likely to be a false positive. AI likely didn't write the student's name.

Edited to add: I'd be more inclined to trust the 58%.


throughthequad

I did not run the papers on the 99%; the university did.


beepbooplazer

My fiancé is getting his PhD in AI. AI checkers are not valid.


shilohali

AI is a tool. Tools are only as good as the tools that use them.


uttamattamakin

You should still go to the head of your department. AI is a tool, but only if it's used correctly. The student giving you something that doesn't answer the prompt, and is not the kind of composition you wanted, is not correct use. In the mathematical sciences, if I use a program called Mathematica to solve a differential equation for a research project, but I give it the wrong boundary conditions and get the wrong solution, I'm still wrong. It doesn't matter that I used a computer. The skill in using such a system is recognizing when the computer is giving you nonsense. Just explain what the student did wrong in a way that isn't hostile to the view that AI can be a legitimate tool. Think of AI as being like a computer algebra system, but for writing. Algebra and calculus are just a language for mathematics.
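To make that concrete, here is a rough sketch using SymPy rather than Mathematica; the equation and boundary values are invented for illustration:

```python
# Wrong boundary conditions -> wrong (but clean-looking) solution.
# Illustrative only: the ODE and conditions are made up.
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")
ode = sp.Eq(y(x).diff(x, 2) + y(x), 0)  # y'' + y = 0

# Intended problem: y(0) = 0, y(pi/2) = 1  ->  y = sin(x)
right = sp.dsolve(ode, y(x), ics={y(0): 0, y(sp.pi / 2): 1})

# A typo in the boundary conditions: y(0) = 1 instead of 0.
# The solver happily returns a closed form either way; only a
# human who understands the problem can spot that it is wrong.
wrong = sp.dsolve(ode, y(x), ics={y(0): 1, y(sp.pi / 2): 1})

print(right)  # Eq(y(x), sin(x))
print(wrong)  # Eq(y(x), sin(x) + cos(x))
```

The computer is flawless at the mechanical step and useless at noticing the setup was wrong; that judgment is the human skill.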


Schopenschluter

The trouble with the “AI is a tool” mentality in a field like English is that you already need to be a competent writer and critical thinker to judge the output. By using AI, undergraduates are failing to practice the skills they need to use it as a tool. Personal style is also essential and desirable for advanced literary analysis, and AI delivers anything but that. I’m not sure how well the comparison to mathematics carries over because there aren’t really “answers” in the same way in both fields. Sometimes a good “answer” in a field like English takes a creative or stylistic leap that AI can’t perform—it *matters* if you use a computer. Then again, maybe I’m just grumpy after a long day of grading AI papers…


ohwrite

This is so refreshing to read. I’m tired of trying to explain to people that students are using it *instead* of writing, not in addition to it :(


uttamattamakin

Everything you just said also applies to mathematics, where computers have been used to do math for a very long time. Is someone who sets up a differential equation that would be very tedious to solve by hand, and then has a computer algebra system do in minutes what would take a human being months or years simply to write down, lazy?


Schopenschluter

But that's the thing: excluding certain trends in "digital humanities," there are no equations or algorithms in fields like English. There's not really a clear-cut difference between "setting up" and "solving" a problem, because one's process and style of writing shape the answer in an essential way. I guess you could say humanities writing is often as much, if not more, about the "how" as the "what." That's why I would never want a computer to do the labor of writing for me in the way that a mathematician would want (and rightly so) a computer to do the labor of solving a complex equation. They're different species of "answers," so the same tool won't necessarily offer the same benefits.


Olthar6

AI is a tool just like a calculator.  And just like the calculator didn't kill math,  AI won't kill writing.  But we need to learn to use the tool that is AI well and we're very far from that right now. 


[deleted]

[removed]


optionderivative

AI is nothing more than a pencil, practically the same thing


hourglass_nebula

This is the kind of logical fallacy we’re learning about in my English 101 class


Cautious-Yellow

the "tool" here appears to be the head of your program.