More Teachers Are Using AI-Detection Tools. Here's Why That Might Be a Problem (2024)

As ChatGPT and similar technologies have gained prominence in middle and high school classrooms, so, too, have AI-detection tools. The majority of teachers have used an AI-detection program to assess whether a student’s work was completed with the assistance of generative AI, according to a new survey of educators by the Center for Democracy & Technology. And students are increasingly getting disciplined for using generative AI.

But while detection software can help overwhelmed teachers feel like they are staying one step ahead of their students, there is a catch: AI detection tools are imperfect, said Victor Lee, an associate professor of learning sciences and technology design and STEM education at the Stanford Graduate School of Education.

“They are fallible, you can work around them,” he said. “And there is a serious harm risk associated in that an incorrect accusation is a very serious accusation to make.”

A false positive from an AI-detection tool is a scary prospect for many students, said Soumil Goyal, a senior at an International Baccalaureate high school in Houston.

“For example, my teacher might say, ‘In my previous class I had six students come up through the AI-detection test,’” he said, although he’s unsure if this is true or if his teachers might be using this as a scare tactic. “If I was ever faced with a teacher, and in his mind he is 100 percent certain that I did use AI even though I didn’t, that’s a tough scenario. [...] It can be very harmful to the student.”

Schools are adapting to growing AI use but concerns remain

In general, the survey by the Center for Democracy & Technology, a nonprofit organization that works to shape technology policy with an emphasis on protecting consumer rights, finds that generative AI products are becoming an increasingly routine part of teachers’ and students’ daily lives, and that schools are adjusting to that new reality. The survey included a nationally representative sample of 460 6th through 12th grade public school teachers and was conducted in December of last year.

Most teachers—59 percent—believe their students are using generative AI products for school purposes. Meanwhile, 83 percent of teachers say they have used ChatGPT or similar products for personal or school use, representing a 32 percentage point increase since the Center for Democracy & Technology surveyed teachers last year.

The survey also found that schools are adapting to this new technology. More than 8 in 10 teachers say their schools now have policies outlining whether generative AI tools are permitted or banned, and that they have received training on those policies, a drastic change from last year, when many schools were still scrambling to figure out a response to a technology that can write essays and solve complex math problems for students.

And nearly three-quarters of teachers say their schools have asked them for input on developing policies and procedures around students’ use of generative AI.

Overall, teachers gave their schools good marks when it comes to responding to the challenges created by students using generative AI—73 percent of teachers said their school and district are doing a good job.

That’s the good news, but the survey data reveals some troubling trends as well.

Far fewer teachers report receiving training on appropriate student use of AI and how teachers should respond if they think students are abusing the technology.

  • Twenty-eight percent of teachers said they have received guidance on how to respond if they think a student is using ChatGPT;
  • Thirty-seven percent said they have received guidance on what responsible student use of generative AI technologies looks like;
  • Thirty-seven percent also say they have not received guidance on how to detect whether students are using generative AI in their school assignments;
  • And 78 percent said their school permits the use of AI-detection tools.

Only a quarter of teachers said they are “very effective” at discerning whether assignments were written by their students or by an AI tool. Half of teachers say generative AI has made them more distrustful that students’ schoolwork is actually their own.

A lack of training coupled with a lack of faith in students’ work products may explain why teachers are reporting that students are increasingly being punished for using generative AI in their assignments, even as schools are permitting more student use of AI, the report said.

Taken together, this makes the fact that so many teachers are using AI detection software—68 percent, up substantially from last year—concerning, the report said.

“Teachers are becoming reliant on AI content-detection tools, which is problematic given that research shows these tools are not consistently effective at differentiating between AI-generated and human-written text,” the report said. “This is especially concerning given the concurrent increase in student disciplinary action.”

Simply confronting students with the accusation that they used AI can lead to punishment, the report found. Forty percent of teachers said that a student got in trouble for how they reacted when a teacher or principal approached them about misusing AI.

What role should AI detectors play in schools’ fight against cheating?

Schools should critically examine the role of AI-detection software in policing students’ use of generative AI, said Lee, the professor from Stanford.

“The comfort level we have about what is an acceptable error rate is a loaded question—would we accept one percent of students being incorrectly labeled or accused? That’s still a lot of students,” he said.

A false accusation could carry wide-ranging consequences.

“It could put a label on a student that could have longer-term effects on the student’s standing or disciplinary record,” he said. “It could also alienate them from school, because if it was not AI-produced text, and they wrote it and were told it’s bad, that is not a very affirming message.”

Additionally, some research has found that AI detection tools are more likely to falsely identify English learners’ writing as produced by AI.

Low-income students may also be more likely to get in trouble for using AI, the CDT report said, because they are more likely to use school-issued devices. Nearly half of the teachers in the survey agree that students who use school-provided devices are more likely to get in trouble for using generative AI.

The report notes that students in special education use generative AI more often than their peers, and that special education teachers are more likely to say they use AI-detection tools regularly.

Research is also finding ways to trick AI-detection systems, Lee said. And schools need to weigh the time and resources required to keep up with inevitable developments in AI, in AI-detection tools, and in students’ skills at getting around those tools.

Lee said he sees why detection tools would be attractive to overwhelmed teachers. But he doesn’t think AI-detection tools alone should determine whether a student is improperly using AI to do schoolwork. Instead, a detector’s output could be one data point among several used to determine whether a student has broken rules that should be clearly defined.

In Poland, Maine, Shawn Vincent is the principal of Bruce Whittier Middle School, which serves about 200 students. He said he hasn’t had many problems with students using generative AI programs to cheat. Teachers have used AI-detection tools as a check on their gut instincts when they suspect a student has improperly used generative AI.

“For example, we had a teacher recently who had students writing paragraphs about Supreme Court cases, and a student used AI to generate answers to the questions,” he said. “For her, it did not match what she had seen from the student in the past, so she went online to use one of the tools that are available to check for AI usage. That’s what she used as her decider.”

When the teacher approached the student, Vincent said, the student admitted to using a generative AI tool to write the answers.

Teachers are also meeting the challenge by changing their approaches to assigning schoolwork, such as requiring students to write essays by hand in class, Vincent said. And although he’s unsure about how to formulate policies to address students’ AI use, he wants to approach the issue first as a learning opportunity.

“These are middle school kids. They are learning about a lot of things this time in their life. So we try to use it as an educational opportunity,” he said. “I think we are all learning about AI together.”

Speaking from a robotics competition, Goyal, the Houston high school senior, said that he and his friends sometimes trade ideas for tricking AI-detection systems, although he said he doesn’t use ChatGPT to do the bulk of his assignments. When he uses it, it’s to generate ideas or check grammar, he said.

Goyal, who wants to work in robotics when he graduates from college, worries that some of his teachers don’t really understand how AI detection tools work and that they may be putting too much trust in the technology.

“The school systems should educate their teachers that their AI-detection tool is not a plagiarism detector [...] that can give you a direct link to what was plagiarized from,” he said. “It’s also a little bit like a hypocrisy: The teachers will say: Don’t use AI because it is very inaccurate and it will make up things. But then they use AI to detect AI.”

Arianna Prothero

Assistant Editor, Education Week

Arianna Prothero covers technology, student well-being, and the intersection of the two for Education Week.
