50 comments

  1. *This is about to turn out for the worst some directions but it works out for AI to be understood. AI shouldn’t be used for wrong. It’s funny how these days a sci-fi topic could be modern topic now. AI ETHNICS anybody haha!*

    1. You didn’t write that comment…it obviously came from an AI algorithm that hasn’t yet mastered sentence construction.

  2. I’m sure when calculators were first invented there was no way to tell if someone used one for their math homework either. AI, like all technologies before it, is eliminating the need for some human labor.

    1. As someone who went through that technological change: since you had to show your work to get credit, it didn’t matter if someone used a calculator for their math homework. The answer isn’t the point of the math problem. Understanding the proper application of equations, methods, and logic is the point of the math class and its problems.

    2. @Paul Bernhardt My point is that the professors concerned about not being able to tell if their students are cheating are now teaching a skill that humans will not have to perform if they do not want to. Just like doing math in your head: you CAN, but you’re no longer REQUIRED to. Humans will still be creative, humans will still write stories and create art, but because of AI, we won’t HAVE to. It will become something that is done for pleasure rather than industry.

      1. Exactly. People tend to write like they speak, because both the written and spoken word are just an expression of their thought process. If they write wonderfully and speak terribly, they are cheating.

  3. I went to HS before the internet, and this was always a thing. People would just use books or encyclopedias and rewrite them in their own words. Then you could just google it and write it in your own words. For this type of ‘research’ it was always just about finding someone else’s stuff and writing it in a different way. There are like a million biographies of Abraham Lincoln, and all of them are just people who read other people’s books and rewrote the parts they liked using their own words.

  4. If Chatbot can answer my philosophical questions, what do I need philosophers for anyway? If Chatbot can do it, what do I need university graduates for? The real problem isn’t “How do we prevent cheating?” The real problem is “How much does Chatbot cost?”

  5. It’s easy to spot. Adjudicators need to up their game. The simplest method to catch students out is to enter the question, exactly as it’s written, into ChatGPT; it repeats its answer verbatim for whatever question it’s asked. There’s gonna be a lot of students with the same answer, word for word. Start a database of answers to a specific question, add new answers to it, and use lexical analysis to check for similarities. In other words, use A.I. to check for cheaters (a rough sketch of that idea follows this thread).

    1. The answer seems to change every time, but there do appear to be some stylistic and vocabulary similarities in the answers.
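
    A rough sketch of that pooled-answer comparison, assuming Python with scikit-learn; the function name, the example answers, and the 0.85 threshold are illustrative choices, not part of any real plagiarism checker. It vectorizes every stored answer to a prompt with TF-IDF and flags pairs whose cosine similarity crosses the threshold.

    ```python
    # Rough sketch (illustrative names, not a real tool): pool answers to one
    # prompt and flag pairs that look suspiciously similar.
    from itertools import combinations

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity


    def flag_similar_answers(answers, threshold=0.85):
        """Return (i, j, score) for every pair of answers above the similarity threshold."""
        vectors = TfidfVectorizer(stop_words="english").fit_transform(answers)
        scores = cosine_similarity(vectors)
        return [
            (i, j, float(scores[i, j]))
            for i, j in combinations(range(len(answers)), 2)
            if scores[i, j] >= threshold
        ]


    # Example pool: two near-identical submissions and one unrelated answer.
    pool = [
        "The Treaty of Versailles imposed harsh reparations that destabilized the German economy.",
        "The Treaty of Versailles imposed harsh reparations which destabilized the German economy.",
        "Photosynthesis converts light energy into chemical energy inside plant cells.",
    ]
    for i, j, score in flag_similar_answers(pool):
        print(f"Submissions {i} and {j} look very similar (cosine similarity {score:.2f})")
    ```

    In practice, something like this only surfaces candidates for a human to review; paraphrased AI output would need fuzzier measures such as embedding similarity or stylometry.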

  6. Well, this I can agree with: AI is going to become a big thing and a very big part of our lives one day. The question is whether that’s good or bad. I don’t know 🤔

  7. Having played with this app for the last couple of days to see what it can do, I can see where it would be a significant issue for educators who are not as familiar with their fields as they should be. Plagiarism as a result of this AI app will be a significant problem for high schools and early college years, sadly. Beyond that, it’ll just be professorial malpractice if an expert in a given field (what a truly qualified professor is) can’t see the obvious superficiality of the AI-generated essay, its banal arguments, and the featureless writing patterns that became quite obvious after only reading a handful of manufactured historical essays I generated on the site. Clever app, no doubt, and it’ll only get better every day. Still, I must admit a feeling of relief that I’m now retired from higher ed. (Comment not written by ChatGPT, btw–could you tell? And if not, why not?)

    1. But don’t you think, if the bot is kicking out similar responses to the same question for multiple students, that a program such as Turnitin might pick up on it?

  8. My teenage kids and I tested this bot online and it was surprisingly good, and the results could be almost satirical depending on your prompt. We were entering prompts like “Explain the importance of crack cocaine in higher mathematics” and the bot would rhetorically tie itself in knots attempting to do so.

  9. The end of learning. There is no need to learn anything anymore, since AI can do it, and will soon do it better than any single human (or all of us collectively) ever could.

  10. So, you mean, AI can construct paragraphs that can fool a seasoned university professor, but the average social media commenter can’t write his way out of a flow-thru tea bag? We are doomed.

  11. Teachers will just have to step up and test students by assigning essays as in-person exams, with the prompt not available until test time. Just like math teachers know that kids use calculators for homework, but may ban them in some exams.

  12. You just need to be good at cheating. Make AI write the essay and then edit it. Don’t just submit it as it is.

  13. As a teacher, I’m constantly working alongside my students and know their abilities. When a CAT assessment is handed in, I can tell if the wording or phrasing is what the student is capable of or would use. I imagine this would get harder after high school, in more advanced situations. This is also why written pieces/exams need to form part of the assessment. E.g. persuasive writing: students write a piece but then present it as an oral presentation; answering questions from the audience at the end will indicate how well they know their subject.

    1. Yes, a good instructor checks planning, writing, and revisions throughout the process. Essays in my classes are part of a process; they cannot just appear.

    2. Yep. I’ve never taught lecture hall classes (though my classes were ridiculously large when I started). I know what my students are capable of.

  14. Great story. I can’t imagine the number of hours this educator had to spend to be able to take that in front of a board and present his case about a student; however, I’m grateful that he did.

    1. I know. I used to get so frustrated having to substantiate what I knew: that a student had cheated. In my case, it was because they were writing their papers in a language other than English. So they translated, often quite badly, material that they had lifted directly from a source. Then I had to translate it back into English and do a search for it. If it wasn’t word for word, it sometimes proved to be tricky, and definitely time-consuming. But in the end, I only had to prove that they had plagiarized in one location. After that, the other places were merely suspected, but plagiarism is plagiarism, and you can nail them just for that one proven example.

  15. I think there are a lot of positives to this too, as ChatGPT and AI are ultimately just holding a mirror up to the dire state of higher education.

    These technologies could be the catalyst for a re-evaluation of outdated and frankly toxic practices, values, and attitudes toward students, teaching, and learning within universities…

    This could help accelerate the inevitable shift towards student-centered, self-directed learning instead of rote learning, and from brick-and-mortar universities to online learning systems.

  16. It’s not as big of a concern as some think.
    1) Good learning design shouldn’t be about retrieval; it should be about synthesis and a personal relationship to knowledge.
    2) In the experiments I’ve tried so far, the AI is consistent in the way it answers question prompts. So if you get multiple responses that seem “similar,” check it out with an AI yourself.

    It’s a new change, but may not be catastrophic.
