White Paper:
Weathering the Perfect Test Security Storm in Educational Assessments

Authors:
Jennifer Miller, Caveon Data Forensics Coordinator
Dennis Maynes, Former Caveon Chief Scientist
Introduction
Caveon’s independent position as an analyzer of the security of public education assessments has given us the opportunity to work with over forty states, conducting interviews with key personnel and state assessment directors, performing security audits, and running data forensics analyses. We have worked directly with the Council of Chief State School Officers (CCSSO) and the Technical Issues in Large Scale Assessment (TILSA) State Collaboratives on Assessment and Student Standards (SCASS) to address test security. We also worked with state education leaders and service providers to create the 2010 and 2013 versions of the Operational Best Practices for Large-Scale Statewide Assessments, published by CCSSO and ATP, with our contribution focusing on test security best practices. From this unique position, we are able to identify the most prevalent security issues facing those who manage and administer public assessments in the state departments of education.
In recent years, test security issues have received greater attention from school system administrators. Motivation to cheat on state assessments appears to be higher than ever, and the number of test security violations, the severity of breaches, and the threats to state assessments have all been increasing. The Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium (SBAC) each administer their own common tests across a large number of member states, increasing the likelihood that the actual content of state assessments will be illicitly distributed on the Internet and made accessible to students in those states. Unless coordinated action is taken, these critical elements may converge to produce the perfect test security storm in state assessments. This storm is likely to result in more security challenges, more revelations of test security breaches, and more emergency funding requests to deal with the aftermath.
In this paper, we share our perspectives on these issues and how they have evolved, and then we provide suggestions for how states can develop a comprehensive test-security program that can respond to the emerging threats and benefit from lessons learned.
The Perfect Storm
When the stakes are increased in testing, the motivation to cheat increases as well. Critical elements have converged that increase the threats to the security of educational assessments. The following elements appear to be responsible for these increased threats:
- Mandated assessments are tied to school funding,
- Schools and some teachers are evaluated based on test scores,
- The responsibility of test security is placed in the hands of those who stand to gain from higher student test scores, and
- Technology presents additional threats.
In short, certain aspects of the education system in the United States tend to increase the temptation for educators to cheat.
Mandated assessments are tied to school funding.
School funding is linked to standards-based student test scores due in part to the No Child Left Behind Act (NCLB) of 2001 as well as other performance-based programs implemented by states and districts. More than half of the public schools in the United States receive Title I funding under NCLB (National Center for Education Statistics). These schools are subject to sanctions if they do not meet adequate yearly progress (AYP), as defined by the state and NCLB. Sanctions for failing to meet AYP can include being required to develop a school improvement plan, restructuring of the school, and even replacement of staff. According to Michael J. Petrilli, executive vice president at the Thomas B. Fordham Institute, the lowest-performing schools are under the most pressure. Petrilli says, “…these [standardized] tests are mainly about raising the floor and putting pressure on the lowest-performing schools to do better” (Rich, 2013). As a result, educators face immense pressure to improve student performance and raise test scores.
Schools and teachers are evaluated based on test scores.
In addition to school funding amounts being heavily influenced by test scores, in many states, teachers’ salaries and performance incentives also are tied to student scores. States are using student test data as part of performance evaluations for teachers and administrators, which are, in turn, directly related to compensation. For example, in his announcement of the Mississippi performance-based compensation program in 2012, Governor Bryant said, “Mississippi must improve its student outcomes and provide our children with the best possible education. One way to do that is to start encouraging our teachers to perform at higher levels…a performance-based system is a way to inspire all teachers to learn, grow and improve with their students” (Gov. Phil Bryant, 2012). In a number of instances, the salaries and incentives of higher-ranking administrators also are performance-based, motivating such individuals to place even more pressure on teachers.
Moreover, the compensation structure often creates a make-or-break situation: educators may lose their jobs, and communities can lose control of their schools, if improvements are not demonstrated, with painful consequences for all involved. An investigation into cheating allegations in the Atlanta public schools in Georgia found that “Teachers received bonuses when schools achieved 70% or more of their annual progress goals — mostly based on students’ performance on standardized tests — but their jobs were threatened if they fell short” (Jarvie, 2014). The investigation found “organized and systemic misconduct” in the majority of the 56 schools investigated and said a “culture of fear, intimidation and retaliation” had been created (Jarvie, 2014).
To some administrators, a performance-based compensation system provides incentives to cheat, not merely to maintain employment, but also for significant personal gain. In 2012 in El Paso, TX, Lorenzo Garcia, Superintendent of the El Paso Independent School District (EPISD), was convicted for a scheme in which administrators went to various lengths to discourage or prevent low-performing students from taking the Texas Assessment of Knowledge and Skills (TAKS). By keeping the low-performing students from taking the test, the overall scores in the school district were artificially inflated. Mr. Garcia received over $50,000 in bonuses related to test scores during his tenure (Fernandez, 2012).
According to the New York Times, “State education data showed that 381 students were enrolled as freshmen at Bowie [a school in the EPISD] in the fall of 2007. The following fall, the sophomore class was only 170 students. Dozens of the missing students had ‘disappeared’ through Mr. Garcia’s program, said Eliot Shapleigh, a lawyer and former state senator who began his own investigation into testing misconduct and was credited with bringing the case to light. Mr. Shapleigh said he believed that hundreds of students were affected and that district leaders had failed to do enough to locate and help them.” In addition to the conviction, Mr. Garcia was fined $56,500, which represented the amount of bonuses he had received (Fernandez, 2012).
The responsibility of test security is placed in the hands of those who stand to gain from higher student test scores.
Perhaps the most troubling aspect is that the responsibility of test security is placed in the hands of the people who have the most to gain from higher student test scores. In the case of paper-and-pencil tests, testing materials are in the custody of teachers and administrators from the moment the materials arrive at the school until they are shipped out for scoring. Teachers and administrators are responsible for receiving and inventorying the testing materials, administering the tests to students, storing the materials in a secure location, and packaging and shipping the materials back to the vendor for scoring. In schools that have migrated to computerized testing, teachers no longer have access to hardcopy testing materials, but they still often proctor the tests, which gives them the opportunity to assist students during testing and to photograph or transcribe live test content from the computer screens, unless the delivery method prevents it. While previously it might have been proper for teachers within the school to act as custodians and proctors of standardized tests, it is now questionable whether teachers should continue to serve in this role because they are no longer disinterested parties in the test outcome.
As if linking their compensation to test scores and giving them responsibility for administering the tests were not enough, teachers may also feel unappreciated for their efforts to do the right thing while facing pressure from their own peers to cheat. A survey administered to educators in Michigan found that 29% of the educators surveyed felt pressure to cheat on a standardized test, 34% felt pressure to help students answer correctly, 21% knew educators who changed students’ answers, and 8% admitted to changing students’ answer sheets (Dawsey and Tanner-White, 2011).
Not only are educators given an incentive to cheat; in many cases, so are the students themselves. In many states, high school students must meet minimum requirements to graduate, which include passing standardized end-of-course (EOC) tests in subject areas such as biology, algebra, and history. A 2010 survey of American high school students by the Josephson Institute found that a majority admitted to having cheated on a test in the previous year (Josephson Institute, 2011). In another survey, this one of 70,000 college students, 64% admitted to cheating on tests in high school (McCabe, 2005). Both of these surveys suggest there is a generation of students who consider cheating an acceptable path through the educational process.
A survey of member states conducted in 2012 for the TILSA Test Security Guidebook (Olson and Fremer, 2013) found that, of the 21 states that responded, almost 96% admitted to having had a test security breach in the previous three years. All of the responding states indicated they had found “firm evidence of … teachers or school administrators cheating on behalf of their students.”
In 2011, the New York Times succinctly summarized the state of affairs in the security of educational assessments in an article about yet another emerging scandal, this time in Pennsylvania: “Never before have so many had so much reason to cheat. Students’ scores are now used to determine whether teachers and principals are good or bad, whether teachers should get a bonus or be fired, whether a school is a success or failure” (Winerip, 2011). With more educators cheating, and a generation of students coming up who view cheating as an appropriate way to succeed, incidents of cheating stand only to increase. As a result, states are under pressure to shore up their test security programs and respond to ever-increasing threats.
Technology presents additional threats.
With today’s advances in technology and the prevalence of computerized exam delivery, ever more sophisticated methods and tools for stealing test content and distributing it to broader populations are being employed. Stolen content can reach both individuals seeking an unfair advantage and educators who wish to gain unauthorized access to test content and record it for the benefit of their students. Tiny cameras that fit on clothing, pens, and other personal articles can be used to harvest test content, and such activities can go undetected because the devices are so small or are disguised as everyday items such as watches or clothing.
Many states are moving to computerized exam delivery to improve efficiency and for long-term cost savings. However, computerized exam delivery, by design, can introduce new threats to test security. As described in the TILSA Test Security Guidebook (Olson and Fremer, 2013), several emerging risks specific to computerized exam delivery include hacking into computers, keystroke logging (i.e., recording the keys struck on a keyboard using a hardware or software recording device), and printing/storing test materials outside of the computer network.
Moreover, some of the threats to computerized exam delivery are exacerbated because in most states, the testing window must be lengthened to allow all students access to the limited number of computers available. A longer testing window allows more time for students who have taken the test, or individuals who have proctored the test, to share information with others who have not yet taken it.
The adoption of common assessments by members of the PARCC and SBAC consortia means that the breadth and length of item exposure for state assessments will increase greatly. The potential exists for test preparation providers to harvest the PARCC and SBAC exams and sell them on the Internet, because the number of potential buyers could increase ten- or twenty-fold over current state assessments. By necessity, testing windows have been expanded in consortium states, and because member states administer the same tests, compromised test content can be disclosed across multiple states at once.
Another technology that threatens assessment security is social media. The pervasive presence of Facebook, Twitter, and Instagram has created a platform for sharing stolen exam questions on a scale heretofore not imagined or seen. For example, the State of California found pictures of its exam booklets posted on the Internet (Blume, 2012).
Risks Associated with the Perfect Storm
An obvious consequence of the test security storm is the potential for diminished exam or score integrity. The test scores may no longer be valid. As a result, educational decisions using those test scores may be compromised. States that are unaware of the seriousness of, and are unprepared to respond to, the relevant test security threats leave their students and education programs unprotected and in danger of great harm.
Moreover, revelations of cheating often lead to embarrassing scandals, which play out at length in the media, and to unplanned costs associated with investigating and prosecuting the cheaters. Risk is quantified as the potential for harm, loss, or damage, and it has two aspects (see the sketch after this list):
- The probability that test security will be breached
- The amount of harm or loss that could be incurred
State assessment programs face three main areas of harm or loss:
- Harm to students
- Lost credibility as a result of negative media attention
- Unexpected costs
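To make the two aspects of risk concrete, the minimal sketch below ranks a few threats by expected annual loss, i.e., probability multiplied by harm. The threat names, probabilities, and dollar figures are hypothetical assumptions made for illustration, not figures from any actual assessment program.

```python
# Purely illustrative: scoring hypothetical threats by expected annual loss.
# Threat names, probabilities, and dollar figures are invented for this sketch.
threats = {
    # threat: (estimated probability of a breach, estimated loss if it occurs)
    "educator answer changing":      (0.10, 400_000),
    "item harvesting / web posting": (0.25, 250_000),
    "proctor-assisted cheating":     (0.15, 150_000),
}

# Rank threats by expected loss = probability x harm, highest first.
for name, (probability, loss) in sorted(
    threats.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True
):
    print(f"{name:32s} expected annual loss: ${probability * loss:,.0f}")
```

A simple ranking like this is one way a program might prioritize its security budget across the three areas of harm described below.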
Harm to Students:
When educators cheat to help their students achieve higher scores on state assessments, it is the students who suffer most. When students appear to be performing at higher levels than they really are, they don’t get the attention they need to improve. John Thompson, an educator and writer for the Huffington Post, writes in reference to the Atlanta Public Schools (APS) cheating scandal, “Our children are being robbed of opportunities for real learning, and being socialized into the reward, punish, and silence work culture of the Atlanta schools and other systems dominated by fear and compliance” (Thompson, 2014).
With increased pressure on teachers to raise test scores or risk losing their jobs, teachers may be inclined to leave lower-performing schools and seek employment at schools where the students are more likely to reach the standards. Thompson writes, “Now, teachers in Atlanta and other high-poverty districts can be fired with value-added models that are systematically biased against teachers in high-poverty schools. Teachers who value their peace of mind may have to transfer to schools where it is easier to meet those dubious test score growth targets” (Thompson, 2014). This leaves underperforming schools vulnerable to teacher shortages, which serve to further disadvantage the students in those schools.
Lost Credibility as a Result of Negative Media Attention:
Revelations of cheating by educators make for eye-catching news stories. The fact that they involve some of our most trusted public servants makes them all the more scandalous, and such scandals can severely compromise the public confidence in districts and state education departments.
Several incidents of suspected cheating on state assessments have been uncovered by the media, leaving some districts and states to explain why they didn’t detect (or respond to) the cheating before it became public. Perhaps the best-known case is the APS cheating scandal, which was initially uncovered by the Atlanta Journal-Constitution in December 2008, when a reporter obtained students’ scores on the state’s Criterion-Referenced Competency Test (CRCT) and noticed unusual score gains among fifth-graders at Atherton Elementary in DeKalb County (Perry and Vogell, 2012). This prompted a statewide investigation by the state of Georgia, which found cheating at 44 Atlanta schools (Severson, 2011).
Other examples of cheating uncovered by the media include the Pennsylvania cheating scandal, in which a reporter from the website The Notebook obtained 2009 student test data from the state and found evidence of cheating in 89 schools statewide (Winerip, 2011). In 2014, the Clarion-Ledger, a small newspaper in Jackson, Mississippi, published an article questioning the validity of 2013 Mississippi Curriculum Test (MCT2) scores at an elementary school in the Clarksdale School District. The article prompted teachers to come forward with allegations of cheating at the school, which led to an investigation (Le Coz, 2014).
Once a cheating scandal is uncovered, it can last for years as new allegations emerge, complex charges and counter-charges are filed in court, debates arise about teacher accountability, and politicians weigh in on the matter. For example, the APS story initially broke in 2008 and has continued for over six years, through a statewide investigation and, eventually, the prosecution of twelve educators. In this case, Georgia pursued criminal prosecution under the Racketeer Influenced and Corrupt Organizations (RICO) Act of 1970. This arguably “heavy-handed” approach to prosecuting cheating in public schools prompted a lengthy debate about how to hold educators accountable, which further prolonged the scandal in Atlanta. The criminal trial began in the fall of 2014. The teachers were convicted in March 2015 and sentenced in April to one to three years in prison plus fines and community service (Ellis and Lopez, 2015). Other lengthy cheating scandals have occurred in Baltimore public schools (Green, 2010), New York State (Brody, 2014), Texas, and Los Angeles (The 7 Most Shocking Teacher Cheating Scandals in U.S. History).
The media attention received by schools and districts embroiled in cheating scandals can itself harm both students and honest educators, even those who didn’t attend or teach at the schools involved, by tarnishing the reputation of the districts, states, and even regions where the cheating occurred. Jenn Steckl, a student in Atlanta, describes the burden she feels the APS cheating scandal has imposed upon her and her classmates, even though they didn’t attend the schools where the cheating occurred. She writes, “As a graduating senior from a tarnished school system, I have to worry every time I apply to a college. My peers and I now have to hope the schools don’t associate the scandal with us. It’s a burden that the guiltless don’t deserve…So what does the cheating scandal mean for all of us? Now we have to work twice as hard. Getting good grades isn’t considered a reward in itself anymore because those grades come from a school system of perceived cheaters. Our offenses are national news. And they’ve become the butt of jokes on late-night talk shows. But it’s no joke for students who have to overcome the perception of a morally bankrupt school system…students who have to rebuild the nation’s trust of the district and its teachers” (Steckl, 2014).
Unexpected Costs:
Costs associated with responding to cheating allegations can be high, especially if no budget exists for them. In the survey conducted for the TILSA Test Security Guidebook (Olson and Fremer, 2013), only one of the 21 states that responded indicated its testing program budgeted funds specifically for test security. Cheating investigations can cost upwards of several hundred thousand dollars, money that would be better spent on efforts to prevent cheating or improve education. If the test security breach involves the compromise of the assessment instrument itself, expensive emergency redevelopment of the stolen test items also may be required. Other costs may involve increasing the security of the exam administrations, such as using monitors or observers to ensure that tests are administered securely.
Weathering the Perfect Storm
The perfect test security storm may be weathered by putting measures in place to mitigate the risks and address threats and vulnerabilities. Test security breaches are only a matter of time, so preparation is essential to deal with them successfully. The critical need faced by assessment programs is to begin preparations now and to educate all stakeholders on the need to administer tests securely and fairly.
The following actions are essential for doing this:
- Establish a test security budget
- Develop a test security strategy:
  - Create security mission and vision statements
  - Analyze threats and determine program vulnerabilities
  - Review and allocate resources
  - Establish a communications plan
- Implement a comprehensive test security program with the following components:
  - Prevention/deterrence
  - Detection
  - Response
  - Improvement
Establish a Test Security Budget:
While finding the money in state educational budgets to fund test security may seem difficult, budgeting a reasonable amount for it up front can save the state the much greater cost of funding a response after cheating has occurred. As Benjamin Franklin once wrote, “An ounce of prevention is worth a pound of cure.” Establishing a budget specifically for test security allows the state to develop a comprehensive security program and sends a strong message that the state is serious about preventing, deterring, and detecting test security breaches. States should consider budgeting for the development of a formal test security plan, security training for staff, specific security solutions, and contingency funds for possible follow-up investigations and emergency revision of item banks.
Develop a Test Security Strategy:
Prior to developing the tactical elements of a test security program, state assessment staff should first develop a strategy to guide it. A test security strategy defines the overarching vision, goals, and principles that govern the program’s test security initiatives. The strategy should include the following elements:
- Mission and Vision Statements: A mission statement should answer the questions “why” and “how” for a test security program. This is the reason why time and resources are allocated to this program and how the program functions to achieve its goals. An example of a mission statement for a state test security program could be: It is the mission of the State Department of Education Assessment Division to ensure the integrity of state assessments and to ensure that test scores are valid measures of students’ abilities by assessing risks to test security, developing processes for secure delivery of assessments, detecting testing irregularities, and responding to security breaches.
A test security vision statement answers the question “what.” What does the program believe in? What will test security look like when the program is in place? An example of a vision statement is: The State Assessment Division believes in fair and valid test results and strives to develop assessments that support our educational goals.
- Analysis of Threats and Program Vulnerabilities: Every test security program should periodically analyze the threats to the security of its exams and the vulnerabilities in its processes. The analyses could answer questions such as: In what ways is the program vulnerable to security breaches? Who is most likely to cheat on the assessments? What potential losses are associated with cheating? Such an analysis can guide decisions about where to allocate resources and what measures can be taken to mitigate the risks. It should be reviewed and updated regularly as test security becomes stronger, as the technical capabilities of cheaters improve, and as threats to test security evolve.
- Review and Allocation of Resources: A review of resources available to the program helps to prioritize and determine how to allocate them. Resources include personnel, budget, support elements, and test administration and security vendors.
- Communications Plan: Some of the elements of communications that need to be implemented and managed are:
- Receiving reports of testing irregularities,
- Persuading stakeholders and staff of the program vision,
- Outward-facing public relations plans.
The message, and the manner in which it is communicated, define the program’s efforts in the eyes of interested and concerned stakeholders.
Implement a Comprehensive Program:
Caveon believes in a holistic approach to test security, in which security measures are built into the testing program at all levels, so that threats are deterred on the front end and detected and corrected on the back end. The notion of implementing a program to prevent all theft and use of proprietary test content can be overwhelming. Instead, we recommend a comprehensive test security program focused on improvement rather than perfection.
A test security program should include at least some (and preferably all) of the following components:
- Prevention
- Deterrence
- Detection
- Response
- Improvement
These elements interact with one another in the form of a positive feedback loop, where effort invested on the front end leads to results and lessons learned that can be fed back into the program for continued overall improvement.
Prevention and Deterrence: As a first line of defense against test security threats, assessment programs should implement processes and plans to prevent security breaches and deter cheating. Prevention efforts may include developing an overarching security plan for the administration of the tests, providing proper training to those handling and administering the assessments, and patrolling the web for disclosed content prior to test administration.
Efforts to deter cheating may include creating legally defensible testing agreements (including agreements with proctors), publicly announcing the use of test security solutions such as web patrolling and data forensics, and developing the assessments (and/or the administration model) so that they inherently deter cheating (e.g., using Discrete Option Multiple Choice [DOMC] items) and enable the detection of cheating if it does occur (e.g., inoculation with Trojan Horse and embedded verification test items). Performing a security audit prior to administering the assessments can also help identify weaknesses in the program so they can be corrected before testing occurs.
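As a purely illustrative sketch of how planted items can support detection, the example below flags an examinee whose accuracy on deliberately exposed “Trojan” items far exceeds accuracy on the secure items. The data layout and the 0.40 accuracy-gap threshold are assumptions made for this sketch, not Caveon’s actual method.

```python
# Purely illustrative: flag an examinee whose accuracy on deliberately
# exposed ("Trojan") items far exceeds accuracy on secure items.
# The data layout and the 0.40 gap threshold are assumptions of this sketch.

def flag_preknowledge(trojan_correct: int, trojan_total: int,
                      secure_correct: int, secure_total: int,
                      gap_threshold: float = 0.40) -> bool:
    """Return True when the accuracy gap suggests prior exposure to items."""
    trojan_rate = trojan_correct / trojan_total
    secure_rate = secure_correct / secure_total
    return (trojan_rate - secure_rate) >= gap_threshold

# 9/10 on planted items but 12/40 on secure items -> flagged for review.
print(flag_preknowledge(9, 10, 12, 40))   # True
# 7/10 on planted items and 28/40 on secure items -> not flagged.
print(flag_preknowledge(7, 10, 28, 40))   # False
```

A flag of this kind would be a trigger for human review, not a verdict on its own.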
Detection: In the unfortunate event that cheating does occur, it is important to have processes in place to detect it. Testing programs should have a process by which irregularities and suspected instances of cheating can be reported and addressed. Such a program should have a means of collecting information from staff in the field, such as proctors and other administration staff, as well as a means by which individuals, such as other examinees, can report security concerns anonymously without fear of retaliation. Information should be collected and reported to the appropriate individuals who have authority to investigate and respond to incidents.
Other ways to detect test fraud and cheating include web monitoring (searching the web for exposed test content before, during, and after testing) and data forensics. Data forensics analyses after every administration are recommended to identify testing irregularities that may be indicative of cheating. Regular data forensics analyses also are helpful in creating a baseline against which to measure future improvements to a program.
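As a purely illustrative example of such screening, the sketch below flags schools whose year-over-year mean score gains are extreme relative to the statewide distribution of gains, using a robust z-score. The data values and the 3.5 cutoff are invented assumptions for this sketch and do not represent Caveon’s actual data forensics methods.

```python
# Purely illustrative: flag schools whose year-over-year mean score gains
# are extreme outliers relative to the statewide distribution of gains.
# All values and the 3.5 robust-z cutoff are invented for this sketch.
import statistics

gains = {  # school: mean scale-score gain versus the prior year
    "School A": 1.2, "School B": -0.5, "School C": 14.8,
    "School D": 0.9, "School E": 2.1, "School F": -1.0,
}

values = list(gains.values())
med = statistics.median(values)
mad = statistics.median(abs(v - med) for v in values)  # robust spread

for school, gain in gains.items():
    # 0.6745 rescales the MAD so the score is comparable to a z-score.
    robust_z = 0.6745 * (gain - med) / mad
    if robust_z > 3.5:  # conventional cutoff for a robust outlier screen
        print(f"{school}: gain {gain:+.1f} (robust z = {robust_z:.1f}) -> review")
```

Run against the invented data above, only School C is flagged; in practice such flags identify where to direct an investigation, as in the unusual score gains that first drew attention in the APS case.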
Response: Should a breach in test security be suspected or actually occur, a response plan should be in place. Elements of such a plan may include:
- Gathering information
- Communicating with stakeholders
- Initiating and performing investigations:
  - Gathering evidence
  - Analyzing the test material chain of custody
  - Conducting document reviews
  - Conducting interviews with witnesses and parties of interest
  - Reviewing test session video recordings, etc.
- Gauging the scope and impact of the breach
- Identifying whether any items are affected
- Refurbishing items and item banks
- Reviewing proctor notes and seating charts
- Adjusting policies and procedures
- Imposing penalties in accordance with the testing agreements that were developed as part of prevention
Improvement: Perhaps the most important part of a test security program is a means of capturing information and feeding it back into the program so the program can improve. The most effective and efficient way to improve a security program is to use empirical knowledge gathered from the program itself to fortify it against future threats.
Data forensics analysis is very helpful in measuring improvements in a test security program. An initial analysis establishes a baseline measurement of the state of the program, and future analyses provide results against which to compare it. Data forensics analyses can also pinpoint the types of security problems in a program. For example, data forensics can help discern irregularities caused by one or two individuals who may have copied each other’s work from irregularities caused by groups of people who are colluding or receiving unsanctioned aid as a group while taking the test. Because the two scenarios are quite different, results that identify where the security threat lies help program directors decide where to allocate resources and what kind of improvement strategy to implement (e.g., an educational program that explains to all students and educators what is considered cheating, or the installation of opaque barriers between computer monitors to prevent students from copying each other’s work).
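As a purely illustrative sketch of how answer-similarity screening can separate these two scenarios, the example below counts identical answers between every pair of examinees. The response strings and the 90% agreement cutoff are invented assumptions for this sketch, not Caveon’s actual statistics.

```python
# Purely illustrative: separate an isolated copying pair from a colluding
# group by counting identical answers. The responses and the 90% agreement
# cutoff are invented for this sketch.
from itertools import combinations

responses = {  # examinee: one answer letter per item
    "s1": "ABDCABDACB",
    "s2": "ABDCABDACB",   # matches s1 only -> likely pairwise copying
    "s3": "ACDBBADACC",
    "s4": "ACDBBADACC",   # s3, s4, s5 match -> likely group collusion
    "s5": "ACDBBADACC",
}

def match_rate(a: str, b: str) -> float:
    """Fraction of items on which two answer strings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

for (p, ra), (q, rb) in combinations(responses.items(), 2):
    rate = match_rate(ra, rb)
    if rate >= 0.90:  # agreement high enough to warrant review
        print(f"{p} and {q} agree on {rate:.0%} of items")
```

Flagged pairs that share members (here s3-s4, s3-s5, and s4-s5) form a cluster consistent with group collusion, while an isolated flagged pair (s1-s2) is more consistent with two individuals copying from one another.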
Summary and Conclusions
In this article, we have listed and discussed the causes that have converged to create the perfect test security storm in statewide assessments. These include:
- Mandated state assessments are tied to school funding
- Schools and some teachers are evaluated based on test scores
- The responsibility of test security is placed in the hands of those who stand to gain from higher student test scores
- Technology presents additional threats
The perfect test security storm has the potential to impact statewide assessment programs in three main areas:
- Harm to students
- Lost credibility as a result of negative media attention
- Unexpected costs
This storm may be weathered by putting in place measures to mitigate the risks, offset the harm, and address threats and vulnerabilities. The critical need faced by assessment programs is to take control now and begin educating all stakeholders on the need to administer tests securely and fairly. It is recommended that assessment programs establish a test security budget, develop a test security strategy, and implement a comprehensive test security program which involves continuous efforts to prevent, deter, detect, respond, and improve.
Additional information about designing and implementing a secure testing program, including best practices guidance, can be found in the TILSA Guidebook (Olson and Fremer, 2013), Testing and Data Integrity in the Administration of Statewide Student Assessment Programs (NCME, 2012), and Operational Best Practices for Statewide Large-Scale Assessment Programs (CCSSO, 2013). Also, a general guide to security of all types of testing programs is the Handbook of Test Security (Wollack and Fremer, 2013).
References
Blume, H. (July 18, 2012). Retrieved from: http://articles.latimes.com/2012/jul/18/local/la-me-0719-state-tests-20120719.
Brody, L. (November 25, 2014). Retrieved from: http://www.wsj.com/articles/new-york-unit-focuses-on-cheating-by-teachers-1416967384?mobile=y.
CCSSO (2013). Operational Best Practices for Statewide Large-Scale Assessment Programs, 2013 Edition. Council of Chief State School Officers, July 2013.
Dawsey, C. P. and Tanner-White, K. (July 27, 2011). Survey: Nearly 30% of Michigan teachers report pressure to cheat. Retrieved from: http://archive.freep.com/article/20110727/NEWS06/107270396/Survey-Nearly30-Michigan-teachers-report-pressure-cheat.
Ellis, R. and Lopez, E. (April 30, 2015). Judge reduces sentences for 3 educators in Atlanta cheating scandal. Retrieved from: http://www.cnn.com/2015/04/30/us/atlanta-schools-cheating-scandal
Fernandez, M. (October 13, 2012). El Paso Schools Confront Scandal of Students Who ‘Disappeared’ at Test Time. Retrieved from: http://www.nytimes.com/2012/10/14/education/el-paso-rattled-by-scandal-of-disappeared-students.html?pagewanted=all.
Gov. Phil Bryant Unveils Plan to Improve Student Achievement (July 27, 2012). Retrieved from: http://www.governorbryant.com/gov-phil-bryant-unveils-plan-to-improve-student-achievement/.
Green, E. (July 23, 2010). Alonso orders investigation into plummeting test scores at elementary school. Retrieved from: http://articles.baltimoresun.com/2010-07-23/news/bs-ci-abbottston-msa-investigation-20100722_1_maryland-school-assessment-test-annual-reading-and-math-scores.
Jarvie, J. (September 6, 2014). Atlanta school cheating trial has teachers facing prison. Retrieved from: http://www.latimes.com/nation/la-na-cheating-trial-20140907-story.html#page=1.
Josephson Institute (2011). The Ethics of American Youth: 2010. What Would Honest Abe Lincoln Say? Center for Youth Ethics, Los Angeles, CA.
Le Coz, E. (May 14, 2014). Clarksdale faces cheating allegations. Retrieved from: http://www.clarionledger.com/story/news/2014/05/13/clarksdale-faces-cheating-allegations/9065667/.
McCabe, D.L. (2005). Cheating Among College and University Students: A North American Perspective. International Journal for Educational Integrity, 1(1).
Rich, M. (April 2, 2013). Scandal in Atlanta Reignites Debate Over Tests’ Role. Retrieved from: http://www.nytimes.com/2013/04/03/education/atlanta-cheating-scandal-reignites-testing-debate.html?_r=0.
National Center for Education Statistics. Retrieved from: http://nces.ed.gov/pubs2013/2014098/tables.asp.
NCME (2012). Testing and Data Integrity in the Administration of Statewide Student Assessment Programs. National Council on Measurement in Education, October 2012.
Olson, J. F. and Fremer, J. (May 2013). TILSA Test Security Guidebook: Preventing, Detecting, and Investigating Test Security Irregularities. Council of Chief State School Officers, Washington, D.C.
Perry, J. and Vogell, H. (March 26, 2012). Surge in CRCT results raises ‘big red flag.’ Retrieved from: http://www.ajc.com/news/news/local/surge-in-crct-results-raises-big-red-flag-1/nQSXD/.
Severson, K. (July 5, 2011). Systematic Cheating Is Found in Atlanta’s School System. Retrieved from: http://www.nytimes.com/2011/07/06/education/06atlanta.html?_r=0.
Steckl, J. (October 27, 2014). A Student View on the Ripple Effects of Cheating Teachers. Retrieved from: https://youthradio.org/news/article/a-student-view-on-the-ripple-effects-of-cheating-teachers/.
The 7 Most Shocking Teacher Cheating Scandals in U.S. History. Retrieved from: http://www.businessinsider.com/8-of-the-most-incredible-teacher-cheating-scandals-in-us-history-2011-9?op=1#ixzz3WpjFY6Lq.
Thompson, J. (September 29, 2014). The New Yorker Nails the Real Lesson of the Atlanta Testing Scandal. Retrieved from: http://www.huffingtonpost.com/john-thompson/new-yorker-nails-the-real_b_5624923.html.
Winerip, M. (July 31, 2011). Pa. Joins States Facing a School Cheating Scandal. Retrieved from: http://www.nytimes.com/2011/08/01/education/01winerip.html?pagewanted=all.
Wollack, J. A. and Fremer, J. J. (2013). Handbook of Test Security. Routledge, New York, NY.