The Technical Aspects of Privacy by Design and Default

When the GDPR is implemented correctly, digitisation can be a contributing factor and can ease the role and responsibilities of the DPO or CIO. However, if it is not implemented in a structured and holistic way, the GDPR can create threats to the fundamental rights of the individual or data subject.

Data privacy should be preserved and respected for individuals whether they are in physical space or cyberspace. GDPR Article 25 (Data Protection by Design and by Default) should be approached through technical solutions, by building a framework using the EUGDPR Institute's design-science methodology. The framework consists of three phases and is evaluated through a narrative and a case study of a possible artificial intelligence application, a chatbot. When applied in context, the IT system becomes compliant, the rights of the individual are preserved, and the implementation can flourish across the organisation.

Technologies that support data protection through design and default
As the diagram above describing the relationship between Article 25 and Article 32 shows, Article 25 contains little material content of its own. Together, the two articles provide a legal basis for the data controller to consider technologies that support data protection by design, but Article 25 alone does not address other GDPR goals, such as the treatment of data protection and security provided in Article 32.

Without material content in Article 25, an implementation may not deliver the data protection the regulation expects on the security side.

Article 25 should be understood in the same way as Article 32: the data controller must make an accurate assessment, choose from the wide range of technologies and methods available, and implement them as measures that support security and a design that protects the data subject's guarantees, rights and freedoms under the regulation.

Pseudonymous data is not anonymous
Pseudonymisation involves eliminating or concealing the direct identifiers and, in some cases, particular indirect identifiers that, when combined, could reveal a person's identity. In the process, the identifying data must be held in a separate database. The data points can then be linked back to the de-identified database through a key, such as a random identification number or some other pseudonym.
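The separation described above can be illustrated with a minimal Python sketch. The field names and records are purely illustrative; in practice the identity store would live in a separate, access-controlled system.

```python
import secrets

# Hypothetical records containing a direct identifier (name).
records = [
    {"name": "Alice Smith", "postcode": "2100", "diagnosis": "A12"},
    {"name": "Bob Jones", "postcode": "2200", "diagnosis": "B34"},
]

identity_store = {}   # held in a separate, access-controlled database
pseudonymised = []    # the working data set, stripped of direct identifiers

for record in records:
    pseudonym = secrets.token_hex(8)           # random identification number
    identity_store[pseudonym] = record["name"]
    pseudonymised.append(
        {"id": pseudonym,
         "postcode": record["postcode"],
         "diagnosis": record["diagnosis"]}
    )

# Re-linking is only possible with access to the separate identity store.
first = pseudonymised[0]
print(identity_store[first["id"]])  # -> Alice Smith
```

The working data set no longer carries names, yet each record remains linkable to its data subject for anyone who holds the separate key database, which is exactly why the data is pseudonymous rather than anonymous.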

The result is that pseudonymised data, unlike anonymous data, faces the risk of re-identification in two ways.

  • If a data breach permits the attacker to obtain the key or otherwise link the pseudonymised data set to individual identities.
  • Even if the key is not revealed, a malicious attacker may be able to identify individuals by combining indirect identifiers in the pseudonymous database with readily available information.

The GDPR addresses these concerns by instructing controllers to implement appropriate safeguards to prevent unauthorised reversal of the pseudonymisation process. To address the risks, controllers should have in place appropriate technical measures (e.g., encryption, hashing or tokenisation) and organisational measures (e.g., agreements, policies, privacy by design) separating pseudonymous data from the identification key.
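One common technical measure of this kind is keyed hashing (tokenisation). The sketch below, with an illustrative key name, shows how an HMAC keeps the token linkable within the data set while leaving it unrecoverable without the separately stored secret; a key vault or HSM would hold the key in practice.

```python
import hashlib
import hmac

# Secret key held separately from the pseudonymised data set
# (illustrative value; in practice stored in a key-management system).
SECRET_KEY = b"stored-in-a-separate-key-management-system"

def tokenise(identifier: str) -> str:
    """Keyed hash (HMAC-SHA256): the token cannot be reversed or
    recomputed without access to the secret key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = tokenise("alice@example.com")
# The same input always yields the same token, so records can still be
# linked within the data set without exposing the identifier.
assert token == tokenise("alice@example.com")
```

A plain, unkeyed hash of an email address or national ID would be vulnerable to dictionary attacks, since an attacker can hash candidate values and compare; the HMAC construction blocks this unless the separately held key is also compromised, which is the separation Article 25 calls for.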

