Bette A. Ludwig (PhD) (“I explain AI to parents, educators, and professionals…”) posted on LinkedIn about AI ruining trust in education.
Everyone is a Fraud (According to Everyone)
New Pew Research data on AI in schoolwork reveals a massive disconnect in how students perceive each other.
— 60% of teens believe their peers use AI to cheat on assignments
— Among teens who actually use AI themselves, that number jumps to 76%.
The more a student knows about the tool, the less they trust one another’s work.
The Takeaway: We are raising a generation of Detectives, not Thinkers.
We have successfully taught students to be suspicious of any polished output, but we haven’t taught them to be critical of the ideas themselves. When 76% of users assume their peers are frauds, excellence becomes a red flag in the classroom.
If we train the next generation to start every assignment by asking if it is real instead of asking if it is a good idea, we aren’t protecting academic integrity.
We are killing the possibility of a serious education.
We are so focused on catching the bot that we are losing the student.
#AILiteracy #AcademicIntegrity #CriticalThinking
My response:
I disagree. The source of the idea matters, too. Using AI in an educational context where it’s prohibited is the same as plagiarism.
If it is a good idea, but was generated by AI, and the expectation is that AI cannot be used, that’s the problem. There’s no defense for this violation of trust. And that’s what we are teaching students who use AI: if you don’t get caught using AI, congratulations!
“If we train the next generation to start every assignment by asking if it is real instead of asking if it is a good idea, we aren’t protecting academic integrity.
We are killing the possibility of a serious education.
We are so focused on catching the bot that we are losing the student.”
Serious education of an individual doesn’t include outsourcing thinking to AI. That’s a co-dependent education.
In the comments, Richard Bulzacchelli (Affiliate Assistant Professor of Ministry / University of Dallas) wrote:
YES! What you’re identifying here (correctly, I should add) is the issue Immanuel Kant saw at work in the moral prohibition against lying. Doing so poisons the well of discourse, because, unless we begin our interactions with others on the premise of honesty and truthfulness, the contrary suspicion that our interlocutor is dishonest disrupts the social fabric, undermining the very possibility of civilization.
AI essentially transgresses this same line.
Kant also noted that we could ask whether an action could be “universalizable”, that is, whether the action one person proposes to take could be taken by all, and what that would mean. In the case of AI, we are faced, once again, with the same issue. THIS student, in THIS class, in THIS assignment, resorts to AI to satisfy the terms of the assignment. What if every student did that? What if no one did his own work? The whole endeavor of education, indeed of human exchange of ideas and information, would collapse. Society would collapse.
Bette A. Ludwig (PhD) responded:
Richard Bulzacchelli This is a fascinating philosophical expansion of Kant and the trust issue I was pointing to. Suspicion at scale absolutely corrodes the entire educational exchange.
My response:
I disagree. Unless your suspicion means tuning out, pushing away, and disengaging, being suspicious is synonymous with being critical in an academic setting.
Don’t fall for the argument-from-authority fallacy. Question everything.
As the old adage goes: trust, but verify.
And that’s how science works. Old scientific ideas collapse under the weight of new analysis and experimentation. Without the suspicion and skepticism that something was wrong, we wouldn’t have moved past the transmutation of species, spontaneous generation, vitalism, and other superseded theories.
source: https://en.wikipedia.org/wiki/List_of_superseded_scientific_theories
Another comment from Tiffany Hunter, MBA (Instructional Designer Candidate (MA May 2026)):
A strong starting point is helping teachers, parents, and students understand clear boundaries for responsible AI use in the classroom. AI literacy is quickly becoming essential as the job market evolves, so it’s important that students learn not only how to use these tools, but how to use them strategically and responsibly.
At the same time, AI should complement, not replace, critical thinking, problem-solving, and offline learning experiences. When integrated intentionally, AI can enhance instruction while still preserving the core cognitive skills that high-quality K–12 education is meant to develop.
My response:
You can’t outsource critical thinking to AI and expect to maintain your own strengths in critical thinking.
It’s the same for outsourcing any thinking. Unless you use it, you lose it.
Do you have studies or experiments that back up your claim that “AI can enhance instruction while still preserving the core cognitive skills that high-quality K–12 education is meant to develop”?