Pseudonymisation: The GDPR’s great “loophole”?

The GDPR was approved by the European Parliament on 14 April 2016 and has been in force since 25 May 2018. Organisations that are not compliant can now face heavy fines.

Suffice it to say, the way businesses and companies can collect personal data has changed significantly. Freely stockpiling personal information for later use is no longer advisable, as any collection of information must henceforth comply strictly with the Regulation.

Prudent business owners have acted accordingly: sweeping changes have been made to their procedures for collecting personal data, and individuals have much more control over how their information is stored and used than was previously the case. Nonetheless, some commentators have noted a potential exception, or loophole, in the GDPR, which appears in the Regulation’s Recital 26 and Article 6(4). The requirements of the GDPR do not apply in full to data “rendered anonymous in such a manner that the data subject is […] no longer identifiable”. This bears on two practices: anonymisation and pseudonymisation.

What is anonymisation?
“Anonymisation” (or “anonymization”) of data means processing it so as to conclusively prevent the identification of the party to whom it relates. Anonymisation is a developing field, and knowledge of the effectiveness of the numerous known anonymisation techniques changes almost constantly. It should be noted, therefore, that no particular method of anonymisation can be guaranteed to be 100% effective in protecting the identity of data subjects. “Identity” here covers not only direct identifiers such as name, address and date of birth, but also the possibility of singling out an individual by narrowing down options through linkability and inference.
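The risk of singling out by linkability can be made concrete with a small sketch (hypothetical records and field names): even after names are stripped, a combination of innocuous quasi-identifiers such as postcode, birth year and sex may match exactly one record.

```python
# Hypothetical "anonymised" records: no names, yet still linkable.
records = [
    {"postcode": "BT7", "birth_year": 1985, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "BT7", "birth_year": 1985, "sex": "M", "diagnosis": "diabetes"},
    {"postcode": "BT9", "birth_year": 1990, "sex": "F", "diagnosis": "eczema"},
]

def singles_out(records, **quasi_identifiers):
    """True if the given quasi-identifiers match exactly one record."""
    matches = [r for r in records
               if all(r.get(k) == v for k, v in quasi_identifiers.items())]
    return len(matches) == 1

# Three innocuous attributes are enough to isolate one person:
print(singles_out(records, postcode="BT9", birth_year=1990, sex="F"))  # True
```

Anyone who knows those three attributes about a neighbour or colleague can infer the sensitive field, which is exactly why effectiveness of anonymisation cannot be assessed field by field in isolation.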

What is pseudonymisation?
The GDPR defines pseudonymisation as the processing of personal data in such a way that it can no longer be attributed to the data subject without the use of additional information. It is a requirement that (1) any such additional information is stored separately, and (2) technical and organisational measures are in place to ensure that the personal data are not attributed to an identified or identifiable individual.
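As an illustration only (the function and field names are hypothetical), the two requirements can be sketched in a few lines: direct identifiers are swapped for random tokens, and the token mapping is returned as a separate structure, to be stored apart from the working data set under its own safeguards.

```python
import secrets

def pseudonymise(records, field):
    """Replace one direct identifier with a random token.

    Returns the pseudonymised records plus a key table mapping each
    original value to its token. The key table is the "additional
    information" that must be stored separately.
    """
    key_table = {}
    result = []
    for record in records:
        token = key_table.setdefault(record[field], secrets.token_hex(8))
        result.append({**record, field: token})
    return result, key_table

patients = [{"name": "Alice", "condition": "asthma"},
            {"name": "Bob", "condition": "diabetes"}]
safe, key_table = pseudonymise(patients, "name")
# `safe` no longer contains any names; re-linking requires `key_table`.
```

The working copy can then be processed by staff who never see the key table, while the table itself sits behind stricter access controls.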

Pseudonymisation is distinct from anonymisation in that it often provides only limited protection for the identity of data subjects and may permit identification by indirect means. When a pseudonym is used, it may still be feasible to identify the individual concerned by analysing related data.

Pseudonymisation does not transform the data set in its entirety. It could mean, for example, replacing identifying information such as names, birth dates and home addresses with data that appears similar but does not reveal details about the real subject. Unlike anonymised data, pseudonymised data remains “personal data” for the purposes of the GDPR. Misuse of pseudonymised data is therefore still punishable under the GDPR; however, the damage and repercussions of misusing data in this form are considerably less serious. Consequently, some critics have begun to view pseudonymisation as a type of loophole in the GDPR.

The potential loophole
The “loophole”, or “escape hatch” as The Economist has termed it, is found in Article 6(4) of the Regulation itself:

Where the processing for a purpose other than that for which the personal data have been collected is not based on the data subject’s consent or on a Union or Member State law which constitutes a necessary and proportionate measure in a democratic society to safeguard the objectives referred to in Article 23(1), the controller shall, in order to ascertain whether processing for another purpose is compatible with the purpose for which the personal data are initially collected, take into account, inter alia:
(a) any link between the purposes for which the personal data have been collected and the purposes of the intended further processing;
(b) the context in which the personal data have been collected, in particular regarding the relationship between data subjects and the controller;
(c) the nature of the personal data, in particular, whether special categories of personal data are processed, pursuant to Article 9, or whether personal data related to criminal convictions and offences are processed, pursuant to Article 10;
(d) the possible consequences of the intended further processing for data subjects;
(e) the existence of appropriate safeguards, which may include encryption or pseudonymisation.

Why is pseudonymisation useful to businesses?
Pseudonymised data is useful for testing systems, analysing trends and evaluating patterns in polls, statistics or any other circumstances where the focus on a specific person is irrelevant. This can be particularly helpful should a company be attempting to compile data on a certain age category, or, for example, the percentage of women or men within a particular profession or industry.
Multinationals such as Apple, Google and Uber have already commenced pseudonymising data. There appears to be little doubt that this practice will become commonplace as companies seek to benefit from it.
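A small sketch (hypothetical records) shows why identity is irrelevant for this kind of aggregate work: the same statistic can be computed whether the identifier column holds real names or opaque tokens.

```python
from collections import Counter

# Records carry only a pseudonym, yet trend analysis is unaffected.
records = [
    {"pseudonym": "a1f3", "sex": "F", "profession": "engineer"},
    {"pseudonym": "9c2e", "sex": "M", "profession": "engineer"},
    {"pseudonym": "77b0", "sex": "F", "profession": "engineer"},
]

engineers = [r for r in records if r["profession"] == "engineer"]
share_female = Counter(r["sex"] for r in engineers)["F"] / len(engineers)
print(f"{share_female:.0%} of engineers in the sample are women")  # 67%
```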

In the context of the GDPR, the clear benefit is that pseudonymised data is not subject to the Regulation’s protections concerning individuals’ rights. Given that organisations no longer know to whom the information belongs, they cannot provide a right of access to it. Even if that were not the case, the information that they could theoretically provide to any given data subject would in fact no longer be accurate.

What are the ethics of pseudonymised data-handling? One can convincingly argue that the use of any loophole is essentially an attempt to circumvent the intended effects of the law. Others would respond that if a gap in the legislation exists, we are free to exploit it; or perhaps even that those who drafted the law must have intended it to be so. This is, therefore, an ethical as much as a legal question. What is apparent is that large companies have already shown that they are willing to collect and use data in such a manner, and that software is being produced by the likes of IBM to enable it.

What are the risks of this practice?
Pseudonymisation has the potential to be reversed, i.e. the data subject may be re-identified. This is not a problem in itself, as there may be circumstances in which re-identification is necessary. Nonetheless, it must be remembered that only authorised personnel may carry it out. Should a hacker succeed in reversing the pseudonymisation of an individual’s data, this will constitute a personal data breach and expose the organisation which inadvertently leaked the data to the legal consequences thereof.
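One common design, sketched below under stated assumptions (the key and helper names are hypothetical), derives pseudonyms with a keyed hash so that reversal is only practical for whoever holds the secret key: an authorised party can re-identify by recomputing pseudonyms for candidate identifiers, while an attacker without the key cannot.

```python
import hashlib
import hmac

# Hypothetical key for illustration only; in practice this is the
# "additional information" to be stored separately and tightly controlled.
SECRET_KEY = b"demo-key-do-not-use"

def pseudonym(identifier, key):
    """Derive a deterministic pseudonym via a keyed hash (HMAC-SHA256)."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def reidentify(target, candidates, key):
    """Authorised reversal: recompute pseudonyms for known candidates."""
    for name in candidates:
        if hmac.compare_digest(pseudonym(name, key), target):
            return name
    return None

token = pseudonym("Alice", SECRET_KEY)
# With the key, the controller can re-link; without it, the token is opaque.
print(reidentify(token, ["Alice", "Bob"], SECRET_KEY))  # Alice
```

A leaked key therefore converts every pseudonym in the data set back into personal data at once, which is why breach of the key store is the scenario organisations most need to guard against.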

About Eoin Campbell
Eoin P. Campbell is an honours law graduate (LL.B) from Queen's University Belfast and is a qualified solicitor. Eoin has moved from practicing law to teaching. Eoin is currently lecturing in law at two universities in Lyon, France, including a master's degree course in cyberlaw. Eoin provides commentary with a legal perspective on cybersecurity and data protection.