Historically, proctoring has served two core purposes. First, proctors facilitate the smooth administration of an exam (for example, checking IDs, answering candidate questions, and troubleshooting technology). Second, and far more critically, proctors secure the exam while it is being administered. Both functions are important, but proctoring is no longer a viable way of achieving these goals.
This paper traces the historical arc of proctoring, explains three ways in which it fails society today, and outlines a modern alternative: the combination of secure exam design with continuous, risk‑based monitoring to not only truly secure exams, but to do so while improving the test-taker experience and dramatically reducing costs.
Our primary line of defense for securing the administration of exams today (whether administered in-person or remotely) remains essentially unchanged from centuries past: a human proctor, lately aided by AI tools, monitoring for misconduct. That paradigm doesn’t work. Let’s discuss why.
Understanding where proctoring began—and how cheaters have always out‑maneuvered it—helps explain why today’s models continue to fall short. Proctoring’s limitations are not new glitches in remote technology but centuries‑old design flaws baked into the model itself.
Caveon’s 2024‑25 secret shopping data shows that more than 90 percent of attempted cheating or exam‑content theft remains undetected by proctors, whether the exam is delivered in a test center or remotely. Even the simplest forms of misconduct (keeping a phone within reach, leaving the camera’s view, or having another person in the room) escape detection or are allowed to continue in the majority of sessions. Importantly, these weaknesses apply to both in‑person and remote proctoring. Moving exams back to test centers, or layering today’s webcam tools with more flags, does not solve the fundamental problem: visual and audio surveillance of a test-taker (whether live, in-person, remote, AI‑assisted, “record‑and‑review”, or some other variation) can only catch what the lens can see and what the proctor can recognize in real time.
Imperial China (206 BCE - 1905 CE) – The world’s first high‑stakes exams, the Imperial Civil Service Exams, took place in vast pavilion compounds where thousands of candidates sat for days under the gaze of roving proctors. Surviving edicts and exam logs describe ingenious methods of cheating: answers copied onto silk, minuscule notes carved in jade pendants, and “double candidates” hired to sit the exam in place of nobles. One Tang‑dynasty minister lamented that “trickery multiplies more swiftly than punishment.” Importantly, these accounts capture only the cheating methods that were caught; most undetected schemes left no record at all.
Early Modern and Industrial‑Age Schools – Fast‑forward a millennium and the scene is not much changed: a proctor pacing wooden-floored aisles while students or exam candidates trade cheat sheets and whispered cues. The setting may have changed but the outcome did not. Observers could deter some misconduct but failed to spot much of it, as evidenced by 19th‑century university discipline logs that mirror their Han‑dynasty counterparts in tone and frequency.
Center-Based Computerized Testing (1990s-present) – Digitized exams introduced randomized exam questions, adaptive tests, cameras, and secure browsers. Yet cheaters continued to flourish with counterfeit drivers’ licenses, proxy test-takers, button‑hole cameras, hidden earpieces, and internet-accessed pre-knowledge. As in generations past, the underlying contest between a proctor’s eyesight (camera or not) and concealed intent remained unchanged.
Remote Proctoring (2006-present) – Webcams and AI flags promised to solve the problems of scale and vigilance, yet Caveon and academic researchers now report that detection rates are no better than in decades past.
We must now ask ourselves these three questions:
Is there reason to believe that the amount of cheating and exam theft has decreased over the past thousand years?
Is there any evidence to suggest that proctors today are more vigilant or effective than they have been throughout history?
Has our technology advanced enough to reliably detect and thwart cheating and exam theft?
The answer to all these questions is a resounding “No.” Across every era, two facts persist:
Observation alone is fragile. People intent on cheating almost always find ways to hide evidence from watchful eyes.
Proctors are human, even when supported by AI. Fatigue, divided attention, and a finite field of view limit effectiveness, even when integrity and training are exemplary.
Let’s now explore further the failings of proctoring in 2025.
Below are four real‑world snapshots showing how easily modern proctoring, remote or in person, misses routine cheating tactics.
Hidden-device experiment – Caveon placed five ordinary recording devices (a button cam, a phone in a bag, a webcam on a bookshelf, and so on) in a remote exam scene. None was detected by the live proctor.
Secret shopping – Caveon sent trained undercover “shoppers” into remote and in-person exam administrations to attempt obvious violations (phones, printed notes, an accomplice in the room). More than 90% completed their tasks undetected or were merely warned and then allowed to finish their exams.
Data forensics – Caveon regularly provides data forensics services that evaluate testing data for signals of misconduct. One certification pilot found that 85% of exams were flagged for extreme similarity, indicating widespread pre-knowledge and item compromise. Misconduct not only persists but is often rampant, even where proctors report few incidents.
Content leakage – Screenshots of high-stakes exam items were posted publicly within days of delivery, while the testing window was still open. Content from “secure” sessions is rapidly leaked online, allowing others to benefit from pre-knowledge.
The snapshots above highlight some of the most common methods of misconduct, which proctors neither prevent nor detect. Now consider the most prevalent, high-risk threats to exam security during exam administration, such as AI‑assisted answer generation, remote proxy test‑takers, and pre‑knowledge.
Content theft – Test takers use hidden devices to capture and steal exam content, and traditional security methods rarely detect or prevent it.
Pre-knowledge – A proctor can't know whether a test-taker is answering from memory, using a cheat sheet, or referencing stolen content.
Covert assistance – Proctors struggle to detect hidden communication devices, AI use, and proxy test-takers, especially in remote testing environments.
These are the three biggest security risks in 2025, and none can be identified by live, AI-assisted, or recorded proctoring alone.
To learn more about Caveon’s research and the evidence showing how ineffective traditional proctoring is at securing exam administration, check out this article.
Let’s face the difficult truth: nobody likes taking tests. One big reason for that is proctoring.
Research consistently links webcam proctoring to heightened anxiety and lower performance. A 2023 systematic review found that “online proctoring increases student anxiety and raises serious privacy concerns.” Another 2023 empirical study reported that lockdown‑browser exams “can increase students’ anxiety levels and decrease their performance.”
Beyond psychology:
Privacy worries over continuous video of bedrooms and biometric data raise legal and ethical issues.
Technical glitches, such as camera failures, bandwidth drops, and false AI flags, interrupt honest sessions, are deeply frustrating, and sometimes void valid scores.
Presumption of guilt erodes candidate trust and brand reputation.
In short, the public pushback against proctoring has become both pervasive and persuasive. From TikTok rants and student‑led petitions to articles and reporting from major news outlets, proctored exams are more often framed as instruments of surveillance than as safeguards of security and fairness. When learners, parents, and educators overwhelmingly associate an assessment technique with stress, privacy risks, and technical breakdowns, its social license erodes—especially when the security rationale behind it is weak.
Proctoring is expensive. In Caveon’s experience:
In‑person proctoring often costs $30–$100 per exam once salaries, facilities, and overhead are included.
Live online proctoring typically runs $15–$50 per exam, plus platform fees and reviewer labor.
Automated or AI‑assisted models still require post‑exam human review, leaving hidden labor costs.
For large programs, annual proctoring invoices can reach seven figures without delivering a meaningful security return. For smaller programs, the per-exam price can be even higher because of limited volume, forcing a larger share of the available budget to be spent on proctoring.
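To make the scale concrete, consider an illustrative calculation using the per-exam figures above and an assumed volume (not a specific client's data): a program delivering 35,000 exams a year at even the low-end in-person rate of $30 per exam would spend more than $1 million annually on proctoring alone.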
Every dollar spent surveilling honest candidates is a dollar not invested in better item development, richer learning content, improved candidate support, or preventative security measures.
Making incremental tweaks to proctoring is not enough. One cannot mend a fundamentally leaky roof with patches; 2,000 years of proctoring history have demonstrated this over and over again. There is a better way.
We can replace the roof altogether by marrying two disciplines (secure exam architecture and real‑time, risk‑based monitoring) into a testing system that is simultaneously harder to cheat and easier to administer. We call this Observer Plus.
Secure exam administration begins long before exam day with Randomly Parallel Tests (RPTs), generated from vast item pools developed with AI’s assistance. Each candidate receives a unique yet psychometrically equivalent exam form, so stolen content has almost no resale value or usefulness. Decades of item‑sampling research, along with new field studies with Cisco®, confirm that RPTs match or exceed the reliability of fixed forms while sharply reducing item exposure.
Additional layers like dynamic item types (e.g., DOMC™), embedded answer variants, and on‑the‑fly sampling further neutralize pre‑knowledge and collusion. By blocking the easiest attack vectors upstream, secure design shifts security focus where it is cheapest, least intrusive, and most effective.
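As a rough illustration of how randomly parallel form assembly can work, here is a minimal Python sketch. It is not Caveon's actual algorithm: the item pool, content blueprint, and per-candidate seeding below are all illustrative assumptions.

```python
import random

# Illustrative only: a simplified randomly-parallel-test (RPT) assembler.
# The pool structure and blueprint are assumptions, not Caveon's implementation.

ITEM_POOL = {
    "networking": [f"NET-{i:04d}" for i in range(1, 401)],   # 400 items
    "security":   [f"SEC-{i:04d}" for i in range(1, 301)],   # 300 items
    "automation": [f"AUT-{i:04d}" for i in range(1, 201)],   # 200 items
}

BLUEPRINT = {"networking": 20, "security": 15, "automation": 10}


def assemble_form(candidate_id: str, pool=ITEM_POOL, blueprint=BLUEPRINT):
    """Draw a unique but blueprint-equivalent form for one candidate."""
    rng = random.Random(candidate_id)               # deterministic per candidate
    form = []
    for domain, count in blueprint.items():
        form.extend(rng.sample(pool[domain], count))  # sample without replacement
    rng.shuffle(form)                               # randomize presentation order
    return form


if __name__ == "__main__":
    form_a = assemble_form("candidate-001")
    form_b = assemble_form("candidate-002")
    overlap = set(form_a) & set(form_b)
    print(f"Form length: {len(form_a)} items; overlap between candidates: {len(overlap)}")
```

Because the pool is many times larger than any single form, the expected overlap between two candidates' forms stays small, which is precisely what strips stolen content of its value.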
With the biggest vulnerabilities already blocked, Caveon Observer focuses on monitoring behavioral data during exam administration rather than surveilling the test-taker. Response‑time curves, answer‑path patterns, and environment signals stream into a patented model that produces a live risk score. Only sessions that cross a data‑driven risk threshold trigger human attention, whether that’s a real‑time chat, a pop‑up warning, or a post‑exam forensic audit.
This targeted model removes the need for universal, high‑friction surveillance. Honest candidates experience few, if any, pop‑ups and disruptions, no intrusive room scans, and demonstrably lower anxiety; program staff spend their time only where the evidence warrants it.
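As a simplified illustration of how this kind of risk-based escalation can work in principle, here is a minimal Python sketch. The behavioral signals, weights, and threshold below are hypothetical stand-ins and are not Caveon's patented model.

```python
from dataclasses import dataclass

# Illustrative only: the signals, weights, and threshold are hypothetical stand-ins.

@dataclass
class SessionSignals:
    median_response_seconds: float   # unusually fast answers can suggest pre-knowledge
    answer_changes_per_item: float   # near-zero revisiting can indicate scripted answering
    focus_loss_events: int           # times the exam window lost focus


def risk_score(s: SessionSignals) -> float:
    """Combine behavioral signals into a 0-1 risk score (toy weighting)."""
    score = 0.0
    if s.median_response_seconds < 10:      # far faster than typical item time
        score += 0.5
    if s.answer_changes_per_item < 0.05:    # almost never reconsiders answers
        score += 0.2
    score += min(s.focus_loss_events, 5) * 0.06
    return min(score, 1.0)


REVIEW_THRESHOLD = 0.6   # only sessions above this line get human attention


def triage(session_id: str, signals: SessionSignals) -> str:
    score = risk_score(signals)
    if score >= REVIEW_THRESHOLD:
        return f"{session_id}: score {score:.2f} -> escalate (chat, warning, or forensic audit)"
    return f"{session_id}: score {score:.2f} -> no intervention"


print(triage("S-1001", SessionSignals(8.0, 0.01, 3)))
print(triage("S-1002", SessionSignals(45.0, 0.4, 0)))
```

In this toy example the first session crosses the threshold and is escalated, while the second proceeds without any interruption, which is the targeting behavior described above.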
Stronger security – Layered prevention plus analytics detect threats the lens can’t see—pre‑knowledge, proxy test-takers, AI answer feeds.
Lower costs – Monitoring scales logarithmically, not linearly; some programs even adopt bring‑your‑own‑proctor (BYOP) models to cut overhead.
Better experience – Minimal surveillance for low‑risk sessions means fewer disruptions and lower exam anxiety.
Click here for a deeper dive into Observer Plus, including cost models, case studies, demos, and research inquiries.
Proctoring has had a 2,000-year run, but history shows that adding sharper eyes, extra cameras, or smarter algorithms has never removed fundamental blind spots. In 2025, we face a clear inflection point: exams are more valuable, and more vulnerable, than ever, yet the methods built to protect them still rely on humans watching people take exams and hoping they notice misbehavior. The evidence assembled here—from secret‑shopping audits and data‑forensics findings to growing public backlash—demonstrates that the traditional proctoring model is now failing on three fronts at once: security, candidate experience, and cost.
Observer Plus offers an escape from that zero‑sum trade‑off. By engineering security into the design of the exam and using behavior‑based analytics to focus attention only where data indicates a real risk, exam delivery is reshaped into something both more fair and harder to game. Program budgets shift from paying armies of proctors toward investments that strengthen content and expand access. Test‑takers, no longer treated like suspects, encounter an experience with a lighter touch and measurably less stress. And most importantly, certification bodies and educators gain that which proctoring has never consistently delivered: confidence that a reported score reflects genuine knowledge and skill.
The invitation is straightforward. Step back from incremental tweaks and ask whether watching every candidate all the time is the best we can do. If the answer feels unsatisfying, it’s time to explore a model built for the threats and expectations of this century—not the last.
Caveon’s Observer ecosystem operationalizes that vision.
To learn more, contact info@caveon.com or visit caveon.com/observer/overview/
Caveon, LLC is the world’s only company dedicated exclusively to exam security. For 22 years we have helped certification bodies, licensure boards, and education programs protect the validity of millions of exam scores through secure exam design, data forensics, web patrol, and now Observer™, the first end‑to‑end monitoring platform built for the AI era.



