
This report is jointly prepared by Learnosity and Caveon to explain what pseudonymous personal data is and how it can be useful in the field of assessments. It was written to support a session of the same name at the ATP Innovations in Testing conference 2022.
Learnosity and Caveon are independent entities and no partnership is implied by this document.
This document is not legal advice; it should be regarded as general information only and you should obtain your own legal advice for your own circumstances.
A common example of personal data in the assessment context is a list of people's names and the scores they achieved in an assessment. The table below shows three forms of such data.
The left column shows the full personal data — the name and score achieved.
The middle column shows anonymous data — essentially just a list of scores.
The right-hand column shows pseudonymous data — the names of people have been replaced with IDs.
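To make the distinction concrete, here is a minimal Python sketch of the three forms (the names, scores, and ID format are invented for illustration): anonymization discards identity entirely, while pseudonymization replaces names with opaque IDs and keeps a separately held key table that preserves the ability to re-identify.

```python
import secrets

# Hypothetical sample of full personal data: names paired with scores.
full_data = [("Alice Smith", 82), ("Bob Jones", 47), ("Carol Lee", 91)]

# Anonymous data: identity is discarded, so scores can no longer be
# linked back to individuals.
anonymous = [score for _, score in full_data]

# Pseudonymous data: each name is replaced with a random ID. The key
# table mapping names to IDs would be stored separately from the scores.
key_table = {name: secrets.token_hex(4) for name, _ in full_data}
pseudonymous = [(key_table[name], score) for name, score in full_data]

# Re-identification is possible only for a holder of the key table.
reverse = {pid: name for name, pid in key_table.items()}
```

The essential design choice is that the scores and the key table live in different places: whoever handles the pseudonymous records cannot recover a name without also obtaining the separately stored key.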

In most assessment use cases, it is important to be able to associate particular data with the people who generated it. The data cannot be anonymous because you need to know who passed or failed a test in order to take the appropriate action (e.g., give a certificate or notify the test taker of a failure to pass).
However, making data pseudonymous is a useful measure with assessment data. It still allows data to be associated with people when needed, but identity is masked for other processing. For many tasks, pseudonymous data is sufficient: personal data from which an individual can be identified, or is readily identifiable, is not needed to achieve the organization's purposes.
A well-established example is the manual grading of essays. It's common practice to mask the test taker's name from graders so they will not be influenced by any knowledge of the test taker; of course, the system still needs the test taker's identity to record the score in the master records. The European GDPR law advocates pseudonymization and says²:
“The application of pseudonymization to personal data can reduce the risks to the data subjects concerned and help controllers and processors to meet their data-protection obligations”
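The blind-grading practice described above can be sketched as follows (the IDs, essay texts, and scoring rule are invented placeholders): graders receive only an opaque ID and the essay text, while a separately held key table lets the registrar link scores back to test takers.

```python
# Hypothetical sketch of blind essay grading with pseudonymous IDs.
# Graders see only an opaque ID and the essay text; the registrar,
# holding the key table separately, links scores back to test takers.

essays = {"tt-001": "Essay text A...", "tt-002": "Essay text B..."}
key_table = {"tt-001": "Jane Doe", "tt-002": "John Roe"}  # held separately

def grade(essay_text):
    # Placeholder rubric for illustration; real grading would apply
    # an actual scoring scheme. Graders never receive a name.
    return min(100, len(essay_text))

scores_by_id = {tid: grade(text) for tid, text in essays.items()}

# Only the registrar resolves identities to record final results.
master_records = {key_table[tid]: s for tid, s in scores_by_id.items()}
```

Because grading operates purely on IDs, knowledge of test-taker identity never reaches the graders, yet the organization retains the ability to assign each score to the right person.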
Let’s move on to look at the security and legal compliance benefits of pseudonymization for personal assessment data.
When considering the security of assessment data, it is helpful to identify risks and threats to data and potential countermeasures. A common approach to analyzing security risk is to consider the impact and probability of each potential risk to the confidentiality, integrity, and availability of data and put in place measures to reduce the impact and probability.
In general, pseudonymization reduces the impact or consequences of many risks, for example:
Data that is pseudonymous, where the index or keys to the pseudonymized data are held separately, is in general much more secure than identified data.
There are significant benefits to pseudonymity under many privacy laws. These laws vary by geography so here is an overview for some countries and territories.
Under the GDPR, pseudonymized personal data is still personal data and therefore processing needs to comply with the GDPR³. The same is true under UK data protection law after Brexit. Although individuals working with the data may not know the identity of the test takers, the testing organization is still able to link the individual records back to individual data subjects.
However, there are significant benefits to pseudonymization under the GDPR and UK data protection law:
For more detailed information on the GDPR implications, this IAPP document has some useful guidance: https://iapp.org/media/pdf/resource_center/PA_WP2-Anonymous-pseudonymous-comparison.pdf.⁴
The US has a patchwork of federal and state privacy laws, the former being largely sector-specific (at least when it comes to the commercial sector) and the latter still relatively few in number and existing alongside state data breach notification laws.
At the state level, only California, Virginia, and Colorado presently have privacy laws of general application. The California Consumer Privacy Act recognizes pseudonymization as a concept, and pseudonymous data may have certain benefits in research contexts, although the scope of those benefits may become clearer over time. The Virginia Consumer Data Protection Act, which becomes operative on January 1, 2023, goes further: it excludes pseudonymous data from the scope of some obligations, such as responding to consumer rights requests, provided certain requirements are followed, such as storing the information needed to re-identify individuals separately and using effective technical and organizational measures to prevent unauthorized access. Similarly, the Colorado Privacy Act, which becomes effective on July 1, 2023, exempts pseudonymous data from some consumer rights and from the obligation to comply with certain requests. Other US state privacy laws are likely to provide similar incentives to pseudonymize personal data.
All fifty US states, Washington D.C., and Puerto Rico now have their own breach notification laws. Although these laws differ in their details, all create incentives for organizations to follow good security measures. A common theme is that personal information is defined as a combination of identifying data elements, and only breaches of specified, identifiable personal information (for example, a name together with an email address or a social security number) trigger reporting requirements. Because pseudonymous data is not personal information and cannot identify a person without the key, a leak of pseudonymous data alone is not a breach that would trigger a reporting obligation under state breach notification laws.
An ever-increasing number of other countries are enacting privacy laws, often taking inspiration from the European GDPR. A comprehensive overview is not possible in this document, but examples of these laws that include provisions on pseudonymous data are:
Learnosity is the global leader in assessment solutions. Serving over 700 customers and more than 40 million learners, our mission is to advance education and learning worldwide with best-in-class technology.
Learnosity Assessment Engine APIs make it easy for modern learning platforms to quickly launch fully-featured products, scale on-demand, and always meet fast-evolving market needs.
Pseudonymity is at the core of the Learnosity Assessment Engine APIs. Learnosity doesn’t use or need learners’ personal identities. Therefore, following the principles of privacy by design and data minimization, Learnosity requires that customers using its services pass a nameless user ID. Learnosity then delivers an assessment to that unknown learner and passes back the results. Learnosity has no knowledge of the learner’s identity. Only the customer can map the user ID back to that individual.
For example, in the diagram below, the learner is called Jane Doe, but an ID ("1234567") is generated, and Learnosity knows only that ID; it does not know her name, address, or date of birth. Learnosity delivers the assessment and passes back the result.
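The flow just described can be sketched in a few lines of Python. This is an illustrative model, not Learnosity's actual API: the class names, the `register` helper, and the fixed score are all invented to show how the customer keeps the identity mapping while the assessment service sees only an opaque ID.

```python
import uuid

class Customer:
    """The customer platform, which alone holds the identity mapping."""
    def __init__(self):
        self.id_map = {}  # user_id -> learner identity, never shared

    def register(self, learner_name):
        user_id = uuid.uuid4().hex  # nameless user ID
        self.id_map[user_id] = learner_name
        return user_id

class AssessmentService:
    """Stands in for the processor; it never learns a name."""
    def deliver_assessment(self, user_id):
        # Illustrative fixed score; a real service would run the test.
        return {"user_id": user_id, "score": 88}

customer = Customer()
service = AssessmentService()

uid = customer.register("Jane Doe")
result = service.deliver_assessment(uid)

# Only the customer can map the returned ID back to the individual.
learner = customer.id_map[result["user_id"]]
```

The point of the design is that nothing crossing the boundary to the processor contains identity: the processor's entire view of the learner is the opaque ID.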
The advantage of pseudonymity for Learnosity customers is that they can use Learnosity as a processor with much less concern about the privacy of their learners than they would with a processor that holds learners' identities. This reduces both security and compliance risk and is a good example of how privacy by design benefits all stakeholders: Learnosity, its customers, and learners.



