A college professor and I were talking about artificial intelligence and what she’s seeing from her students. She confirmed that AI-generated content is a regular occurrence in papers students write. Some tools (such as Turnitin.com) are getting smarter about detecting AI-generated content to help professors determine when work is authentic versus AI-generated. One situation, though, was sadly amusing to me.
One of the papers turned in by a student included a statement to the effect of, “I’m sorry, but I don’t know the answer to that question.” It was clear that the student had submitted a query to an AI chatbot—which didn’t know how to respond—and the student blindly copied the content into the paper without actually reading it. I probably don’t need to tell you the grade the student got.
In my article "Critical Thinking Isn't Enough: 8 Ways to Be a Critical Persuader," I talk about a fellow named Vick, who was well-versed in a topic but couldn't present coherent thoughts in a way that would convince someone of his way of thinking. Vick was a critical thinker, but he couldn't make the leap to being a critical persuader. The situation the professor mentioned underscored a new "critical" category I hadn't considered.
Read more at ProjectManagement.com.
Artificial intelligence has quickly transformed from something only the techies talked about into a kitchen table discussion topic. As an author whose work has been frequently plagiarized, my interest in AI has grown as the technology has become more pervasive. To learn more about the topic, I decided to ask Bing's AI copilot about AI and plagiarism. I asked it three questions, and I'm sharing the responses I received:
Read more at ProjectManagement.com.
Contact Lonnie about article reprints. Please specify the article you wish to reprint.
See Lonnie's Amazon Author Page