A few months ago, a high school English teacher in Los Angeles Unified noticed something different about his students’ tests. Students who had struggled all semester were suddenly getting A’s. He suspected some were cheating, but he couldn’t figure out how, until a student showed him the latest version of Google Lens.

Google had recently made the visual search tool easier to use in the company’s Chrome browser. When users click an icon hidden in the toolbar, a movable bubble pops up. Wherever the bubble is placed, a sidebar appears with an artificial intelligence answer, description, explanation or interpretation of whatever is inside the bubble. For students, it provides an easy way to cheat on digital tests without typing a prompt or even leaving the page. All they have to do is click.

Even within schools, teachers have different AI rules. A recent survey by the RAND research organization found that only 34% of teachers said their school or district had consistent policies on AI and cheating, and 80% of students said their teachers hadn’t provided guidance on how to use AI for schoolwork.

That confusion is the crux of the problem, said Alix Gallagher, a director at Policy Analysis for California Education who has studied AI use in schools. Because there are few clear rules about AI use, students and teachers tend to have “significantly” different views about what constitutes cheating, according to a recent report by the education nonprofit Project Tomorrow.

“Because adults aren’t clear, it’s actually not surprising that kids aren’t clear,” Gallagher said. “It’s adults’ responsibility to fix that, and if adults don’t get on the same page they will make it harder for kids who actually want to do the ‘right’ thing.”