What AI College Exam Proctors Are Really Teaching Our Kids

Universities are digitally spying on students to make sure they don’t cheat on online tests. A whole generation could be learning to tolerate surveillance.
Illustration: Michelle Thompson

When Haley, a sophomore at Indiana University, took a test for an accounting class in September, she—like many college students during this pandemic—was sitting not in a classroom but in her bedroom. And instead of a teacher watching for signs of cheating, there was something new: an AI, studying Haley's every move through her laptop's webcam.

The university was conducting remote exams using Respondus, a brand of “online proctoring” software. The software locks down a student's desktop so they can't switch tabs to Google an answer, and then it uses visual AI to examine—among other things—their head movements to judge whether they're looking somewhere other than at the screen.
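Respondus doesn't publish how its detection actually works, but the basic mechanic is easy to picture. Here's a minimal, purely illustrative sketch in Python, assuming OpenCV's off-the-shelf face detector and an invented frame-count tolerance; the real product's models and thresholds are proprietary and surely more elaborate.

```python
# Illustrative sketch only: a toy "face presence" check in the spirit of
# proctoring software. Uses OpenCV's stock Haar-cascade face detector.
# The MAX_MISSED_FRAMES tolerance is a made-up number for this example.
import cv2

MAX_MISSED_FRAMES = 30  # hypothetical: how long a face may go undetected

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
camera = cv2.VideoCapture(0)  # default webcam
missed = 0

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # Slouch out of frame, and the detector loses you.
        missed += 1
        if missed >= MAX_MISSED_FRAMES:
            print("We can't see your face anymore.")  # the alert Haley saw
            missed = 0
    else:
        missed = 0

camera.release()
```

Even this toy version shows why slouching trips the system: a detector trained on upright, forward-facing heads simply stops finding a face, and the software treats absence as suspicion.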

Haley's head was setting off alarms. “I guess I slouch when I'm sitting,” she tells me, so at one point the software flashed a scary warning at her. “It stopped my test, and it popped an alert on the screen saying we can't see your face anymore.” Unsettled, she began to stare more robotically at her screen.

Haley finished the exam and got a good grade. (I'm using only her first name at her request.) But the stress of that HAL-level surveillance? Yikes.

“Online proctoring” isn't new. It's been around for over a decade, used mostly for distance education or corporate accreditation tests. But as formerly in-person colleges went remote because of Covid-19, its use spread like ivy. Mike Olsen, the CEO of Proctorio—one firm that makes such wares—says the company's business has grown by 900 percent since the spring. “April was one of our craziest months,” he adds.

We talk a lot about the rise of surveillance capitalism and ponder the grim future to which that Orwellian path leads. But for students? That future is now, as they try to act dutiful in front of their glowing webcams.

It's a dreadful experience, they'll tell you. Some systems use AI to flag possible cheating; in others, a live human, employed by the firm, stares at you. Oodles of Reddit posts catalog moments of violation: housemates or family unwittingly captured on camera, normal body movements flagged as illicit behavior, and the existential exhaustion of performing obedience. “This legitimately scares the fuck out of me,” one student posted.

It sets a terrible civic precedent. “We are indoctrinating our youth to think that this is normal,” says Lindsay Oliver, activism project manager at the Electronic Frontier Foundation. Students trained to accept digital surveillance may well be less likely to rebel against spyware deployed by their bosses at work or by abusive partners. “What are we telling them about what they should expect for the rest of their lives?”

Universities plead that they need some way to prevent academic malfeasance, which is a real thing. A recent survey found that just over 30 percent of students admit to having engaged in some form of cheating. Administrators tell me they try to be as respectful as possible of student privacy: “We work very hard not to be invasive,” notes Brian Marchman, the director of distance and continuing education at the University of Florida. For example, his school encrypts any video or data collected by the proctorware, and it is professors—not the proctoring companies—who make the final decision on whether cheating occurred.

All fair enough. But there's something bonkers about trying to parse the most ethical way to creep on students. The rise of proctoring software is a symptom of a deeper mistake, one that we keep making in the internet age: using tech to manage a problem that is fundamentally economic.

After all, there are other ways to assess students that minimize the chances of cheating. Rather than give multiple-choice tests, you could ask them to “do more applications-based projects or essays,” Haley says. We could ask students to engage in serious, real-world tasks: “There are a bunch of Wikipedia articles that could be worked on,” says Audrey Watters, author of the blog Hack Education. If you give students complex projects, you don't need to ban Google, because there's no simple answer.

Exciting possibilities, right? But this sort of work is “much harder to grade,” Watters says—which is why schools so often rely on drearier assessments, particularly multiple-choice tests. If we truly wanted schools to have the resources to grade serious, complex work, we'd need to put more money into the big public institutions (like Haley's school) that educate the great majority of US students. But at those places, per-student funding has decreased over the past few decades. The more creative answers take time and money, so they get pushed aside in favor of quick tech fixes.

Think about it that way, and what seems like a problem of dishonest kids is actually a problem of … public policy. Using tech to paper it over isn't a good answer. In the long run, we're only cheating ourselves.


Updated 10/21/2020 3:30 pm ET: A previous version of this story misspelled Lindsay Oliver's name and misstated her title. She is the activism project manager at the Electronic Frontier Foundation.


This article appears in the November issue.

