As artificial intelligence (AI) continues to play an increasingly important role in decision-making processes, ensuring transparency and protecting individuals’ rights have become critical priorities. Both the AI Act and the General Data Protection Regulation (GDPR) address the need for transparency in AI-driven decisions.

The two pieces of legislation do so in different contexts, however, particularly when it comes to high-risk AI systems and the right to an explanation. Here’s a breakdown of the key differences between the two laws regarding data privacy and protection.

  1. The AI Act: Right to Explanation for High-Risk AI Systems

The AI Act introduces a right to an explanation specifically for decisions made by high-risk AI systems. These are systems that have the potential to significantly impact individuals, such as AI used in hiring, credit scoring, or law enforcement. According to Article 86 of the AI Act, if a high-risk AI system is used to make a decision that directly affects an individual, that individual has the right to understand how the decision was reached. This includes transparency regarding the logic, reasoning, and data used by the AI system.

  • Scope: The right to explanation under the AI Act applies to high-risk AI systems, regardless of whether there is human involvement in the decision-making process.
  • Key Focus: The focus is on high-risk AI systems that have a significant impact on people’s rights, with the goal of ensuring accountability and transparency for these impactful technologies.
  2. GDPR: Right to Explanation for Automated Decisions

On the other hand, the General Data Protection Regulation (GDPR), specifically Article 15(1)(h) read together with Article 22, also grants a right to an explanation, but one that applies to fully automated decisions made without any human intervention. In other words, when an AI system makes a decision that affects an individual and no human is involved in the process, the GDPR ensures that the individual has the right to know how that decision was made.

  • Scope: The GDPR’s right to explanation is specifically for solely automated decisions that have significant effects on individuals, such as profiling for marketing or credit scoring where there’s no human input.
  • Key Focus: The focus of the GDPR is on fully automated decisions made by AI, protecting individuals from automated processes that could lead to discriminatory or harmful outcomes without transparency.
  3. Key Distinction Between the AI Act and GDPR

The key distinction between the AI Act and GDPR lies in the type of decision-making they regulate:

  • GDPR: Addresses solely automated decisions, where an AI system makes a decision on its own without any human intervention. If such a decision is made, individuals have the right to be informed about how that decision was reached.
  • AI Act: Focuses on decisions made by high-risk AI systems, even if there is some human involvement in the decision-making process. This means that if an AI system outputs a decision and a human is involved in reviewing or adjusting that decision, the AI Act’s right to explanation applies, not the GDPR’s.

In summary, GDPR is concerned with fully automated decisions (where no human is involved), whereas the AI Act is concerned with high-risk AI systems, even if human intervention plays a role in the final decision.
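The distinction summarized above can be sketched as a simple decision rule. This is purely an illustrative simplification, not legal advice: the function name, parameters, and the reduction of each law to a single boolean condition are assumptions made for the sake of the sketch, and real applicability analysis involves many more factors.

```python
def applicable_explanation_rights(high_risk: bool, solely_automated: bool) -> list[str]:
    """Illustrative sketch: which right-to-explanation regimes MAY be triggered.

    Simplifying assumptions (not a complete legal test):
      - high_risk: the AI system is classified as high-risk under the AI Act.
      - solely_automated: the decision is made with no human involvement and
        has significant effects on the individual.
    """
    rights = []
    if high_risk:
        # AI Act, Article 86: applies to high-risk systems,
        # with or without human involvement in the decision.
        rights.append("AI Act Art. 86")
    if solely_automated:
        # GDPR, Art. 15(1)(h) read with Art. 22: applies to
        # solely automated decisions with significant effects.
        rights.append("GDPR Art. 15(1)(h)")
    return rights


# A human-reviewed decision from a high-risk system:
print(applicable_explanation_rights(high_risk=True, solely_automated=False))
# A fully automated decision from a non-high-risk system:
print(applicable_explanation_rights(high_risk=False, solely_automated=True))
```

Note that the two conditions are independent: a solely automated decision made by a high-risk system would trigger both regimes at once.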

  4. Non-High-Risk AI Systems and Right to Explanation

One important nuance is that if an AI system is not classified as high-risk, the AI Act does not provide any right to an explanation, regardless of whether a human is involved in the decision-making.

  • Non-High-Risk AI: For AI systems that are not deemed high-risk, there is no requirement for transparency or a right to explanation under the AI Act, even if humans intervene in the decision-making process.

This highlights the AI Act’s emphasis on higher levels of transparency and accountability for AI systems that are deemed high-risk due to their potential to impact individuals’ rights and freedoms.

  5. Other Regulations: Additional Layers of Protection

In addition to the GDPR and the AI Act, other directives may come into play in specific scenarios. For example, the Platform Work Directive and the Consumer Credit Directive include rules that could take precedence over both the AI Act and the GDPR in certain cases.

  • Platform Work Directive: Provides additional protections in situations where automated decisions are related to working conditions, ensuring that platform workers are protected from unfair decision-making processes.
  • Consumer Credit Directive: Addresses the use of automated systems in creditworthiness assessments, ensuring that individuals are protected from unfair or non-transparent credit decisions.

These directives can introduce specific rules for particular industries or situations, providing additional layers of protection when automated decisions are involved.

Conclusion: Navigating the Overlaps of the AI Act and GDPR

While both the AI Act and GDPR aim to ensure transparency and protect individuals from the risks of automated decision-making, they apply in different contexts:

  • GDPR: Focuses on solely automated decisions, ensuring individuals can challenge fully automated decisions that impact their rights and freedoms.
  • AI Act: Focuses on high-risk AI systems, ensuring transparency and accountability for decisions made by AI systems that significantly impact individuals, even when there is human involvement.

Understanding these differences is critical for organizations using AI technologies, as they must navigate the complexities of both regulations to ensure compliance and protect individuals’ data privacy and rights. As AI continues to evolve, so too will the regulatory landscape, demanding ongoing attention to both legal requirements and ethical considerations.