Technology

AI Proctoring Has Led to False Positives

Moving forward, a combination of AI with professional reviewers will improve accuracy and fairness while, ideally, reducing bias.

With the global pandemic forcing universities and colleges to move classes online and expand remote offerings, remote proctoring software companies have seen usage increase by more than 500%. Currently, 54% of universities and colleges use online proctoring, and another 23% are considering it. However, the AI software raises considerable concerns about invasiveness, bias, and efficacy. Students with disabilities and lower-income students are unfairly discriminated against, and false positives are rarely investigated.

The very quality that allows AI to automate repetitive tasks is also one of its greatest weaknesses: it lacks human discrimination. Here we discuss the limitations of AI proctoring and what companies are doing to improve it.

What is AI Proctoring?

AI proctoring, as the name suggests, uses artificial intelligence to proctor a test. Students give the software access to their computer during the test, to prevent cheating or unauthorized internet searches, as well as access to their microphone and camera to monitor the room. The idea is that the AI will detect if the student looks at books, performs internet searches, asks for help, or otherwise cheats on the test.
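To make the monitoring concrete, here is a minimal sketch of what a rule-based flagging step might look like. This is purely illustrative: the event names, thresholds, and logic are assumptions for the example, not any vendor's actual implementation.

```python
# Hypothetical sketch of rule-based proctoring flags; not any vendor's real logic.
SUSPICIOUS_EVENTS = {
    "gaze_off_screen",   # student looks away, e.g., at a book
    "second_voice",      # microphone picks up another speaker
    "window_switch",     # student leaves the exam browser tab
    "person_entered",    # camera detects another person in frame
}

def flag_session(events):
    """Return the subset of detected events queued for human review.

    `events` is a list of (timestamp_seconds, event_type) tuples
    produced by upstream audio/video/browser monitoring.
    """
    return [(t, e) for t, e in events if e in SUSPICIOUS_EVENTS]

session = [
    (12, "typing"),
    (95, "gaze_off_screen"),
    (140, "second_voice"),   # could simply be a child asking a question
    (300, "typing"),
]
print(flag_session(session))  # → [(95, 'gaze_off_screen'), (140, 'second_voice')]
```

Note that a rule set like this has no way to distinguish a whispered answer from a barking dog; it can only surface events, which is exactly why the human-review step discussed below matters.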

In a perfect system, the advantage of AI proctoring is that it frees professors' time for grading, reviewing, and lesson development rather than monitoring tests. While online AI proctoring has been around for many years, the application became essential in the last 18 months.

The three major proctoring companies have proctored over 30 million tests since the beginning of the pandemic. However, problems arise when the AI, lacking basic human discrimination, inaccurately flags students who are not cheating.

Limitations of AI Proctoring

The primary limitation arises when relying solely on AI proctoring without human review. Ideally, professors will review the video footage of any student flagged by the AI to determine what actually happened. In practice, this review rarely occurs.

One study by ProctorU, one of the major suppliers of proctoring software, found that only 11% of videos flagged for cheating were ever reviewed. An independent audit at the University of Iowa found just 14% of faculty members were reviewing the results they received from Proctorio, another major AI proctoring company. This fails students for a variety of reasons.

AI proctoring software has been criticized for bias and accessibility impacts, and extensive evidence shows that it produces false positives, especially among vulnerable students. These concerns even led to a US Senate inquiry letter directed at all three top proctoring companies.

False Positives

One of the biggest concerns is the number of false positives among underprivileged and minority groups. False positives are frequent and rarely reflect actual cheating.

If a student’s children come into the room, the AI will flag the student for cheating. Any professor reviewing the footage would immediately see it was a request for a peanut butter sandwich, not an answer to a chemistry question. Likewise, the AI software will often flag dogs barking, children playing outside, or other ordinary household activity.

This means that students who do not have the luxury of a quiet, private workspace are routinely flagged, simply for working from home. This affects low- and middle-income students disproportionately, furthering a social divide and unfairly punishing students for participating from wherever they can.

Disability Discrimination

For disabled students, AI presents an additional challenge: the software will flag their activity as suspicious simply for utilizing necessary assistive technology. It lacks basic human discrimination and understanding. A dyslexic student who reads aloud, an autistic student who waves their arms, or a student with Crohn’s disease who leaves the room frequently to use the bathroom will all be flagged for cheating. Instead of facilitating these students’ work, the software undermines confidence and generates false positives for an already marginalized group.

Steps Forward

Of the three major AI proctoring companies (ProctorU, Proctorio, and ExamSoft), ProctorU was the first to announce, last month, that it is abandoning exclusively AI-based proctoring software.

ProctorU will transition to professional, human proctoring and plans to complete the transition by the start of the 2021/2022 academic year. The company is adopting this policy after a review of its own practices and data. 

ProctorU stated that the amount of time required to review a test session is an unreasonable burden to place on faculty and faculty assistants. Reviewing a 60-minute proctored test for 150 students can take more than 9 hours.
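The arithmetic behind that figure is straightforward. Assuming an average of 3.6 minutes of review per flagged session (a rate chosen here only to match the article's 9-hour total, not a figure from ProctorU), the workload scales linearly with class size:

```python
# Illustrative workload estimate; the per-session review time is an assumption.
students = 150
minutes_per_session = 3.6  # assumed average time to review one flagged session

total_hours = students * minutes_per_session / 60
print(total_hours)  # → 9.0
```

Even a seemingly quick skim per student adds up to more than a full working day per exam, which is why faculty so rarely review the flags.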

Moving forward, combining AI with professional human reviewers should improve accuracy and fairness while reducing bias. AI has its place in automating repetitive tasks and improving efficiency, but it lacks inherent compassion and discrimination. AI software will still make online test proctoring more efficient, but with human reviewers many of its critical flaws can be mitigated.
