The Artificial Intelligence Governance Code by The Corporate Governance Institute is a comprehensive framework for the ethical and responsible development of AI technologies. Addressing concerns such as bias, privacy, and societal impact, the code emphasizes transparency, accountability, and risk management throughout the AI lifecycle.

1. Risk Management:

Employ thorough risk management at every stage of advanced AI system development, from inception to deployment, to identify, assess, and mitigate potential risks.
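
One way to make lifecycle-wide risk management concrete is a per-stage risk register. The sketch below is a minimal illustration only; the stage names, 1–5 scoring scale, and review threshold are assumptions for this example, not requirements of the Governance Code.

```python
from dataclasses import dataclass

# Illustrative lifecycle stages; a real program would define its own taxonomy.
STAGES = ("inception", "design", "training", "evaluation", "deployment")

@dataclass
class Risk:
    stage: str        # lifecycle stage where the risk arises
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int       # 1 (negligible) .. 5 (severe)   -- assumed scale

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common (but assumed) heuristic.
        return self.likelihood * self.impact

register = [
    Risk("training", "Training data under-represents a user group", 4, 4),
    Risk("deployment", "Model exposed to prompt-injection attacks", 3, 5),
]

# Surface risks above an (assumed) review threshold for mitigation planning.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    if risk.score >= 12:
        print(f"[{risk.stage}] score={risk.score}: {risk.description}")
```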

2. Transparency:

Ensure transparency in AI systems, with developers providing clear explanations of system operations and data usage, including disclosure of any biases or limitations.
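
Such disclosures can be captured in a structured record published alongside the system. The sketch below assumes a minimal "model card"-style schema; the system name, fields, and entries are all hypothetical.

```python
import json

# An assumed minimal disclosure schema, loosely modeled on "model card" practice.
model_card = {
    "model": "loan-approval-v2",   # hypothetical system name
    "intended_use": "Pre-screening of consumer loan applications",
    "data_sources": ["internal applications 2019-2023 (anonymized)"],
    "known_limitations": [
        "Not validated for applicants under 21",
        "Accuracy degrades on incomes outside the training range",
    ],
    "known_biases": [
        "Slightly higher false-rejection rate for thin-file applicants",
    ],
    "human_review": "All rejections are reviewed by a loan officer",
}

# Publishing the card with the model keeps operations and data usage inspectable.
print(json.dumps(model_card, indent=2))
```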

3. Accountability:

Hold developers and organizations accountable for the decisions made by AI systems, establishing mechanisms to address any harm caused and providing avenues for recourse and redress.

4. Privacy and Data Protection:

Uphold user privacy and adhere to data protection laws, obtaining informed consent for data collection and ensuring secure storage and processing of personal data.
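A minimal sketch of enforcing purpose-specific, revocable consent before personal data is processed; the consent store, purpose labels, and user IDs here are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str   # purpose labels are assumed, e.g. "model_training"
    granted_at: datetime
    revoked_at: Optional[datetime] = None

# Hypothetical in-memory consent store; a production system would persist this.
consents = {
    ("u123", "model_training"): ConsentRecord(
        "u123", "model_training", datetime(2024, 1, 5, tzinfo=timezone.utc)
    ),
}

def has_consent(user_id: str, purpose: str) -> bool:
    record = consents.get((user_id, purpose))
    return record is not None and record.revoked_at is None

def collect_for_training(user_id: str, payload: dict) -> None:
    # Refuse to process personal data without an active, purpose-specific consent.
    if not has_consent(user_id, "model_training"):
        raise PermissionError(f"No active consent recorded for {user_id}")
    print(f"Accepted {len(payload)} fields from {user_id} for training")

collect_for_training("u123", {"age": 34, "income": 52000})
```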

5. Fairness and Bias:

Design and train AI systems to be fair and unbiased, actively mitigating biases in training data to prevent discrimination based on protected characteristics.
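
Mitigation presupposes measurement. The sketch below computes a demographic parity gap, i.e. the difference in favorable-outcome rates between two groups, on hypothetical predictions; the data and the 0.10 tolerance are illustrative assumptions, and demographic parity is only one of several fairness metrics a team might adopt.

```python
# Hypothetical predictions (1 = favorable outcome) with a protected attribute.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(preds, grps, group):
    selected = [p for p, g in zip(preds, grps) if g == group]
    return sum(selected) / len(selected)

rate_a = positive_rate(predictions, groups, "a")
rate_b = positive_rate(predictions, groups, "b")

# Demographic parity difference: one common (though not the only) fairness metric.
gap = abs(rate_a - rate_b)
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, gap: {gap:.2f}")

if gap > 0.10:  # assumed tolerance; a real policy would justify this number
    print("Warning: parity gap exceeds tolerance; investigate training data.")
```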

6. Human Oversight:

Implement human oversight and control for AI systems, allowing intervention, override, or modification of decisions, especially in critical domains like healthcare, finance, and criminal justice.
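
A common implementation pattern is a confidence gate: the system acts autonomously only above a threshold and otherwise defers to a human who can confirm, modify, or override the decision. The threshold and the stand-in reviewer below are assumptions for illustration.

```python
from typing import Callable

CONFIDENCE_FLOOR = 0.90  # assumed threshold; critical domains may demand more

def decide(prediction: str, confidence: float,
           review: Callable[[str, float], str]) -> str:
    """Return the final decision, deferring to a human when confidence is low."""
    if confidence >= CONFIDENCE_FLOOR:
        return prediction
    # Below the floor, a human reviewer can confirm, modify, or override.
    return review(prediction, confidence)

def human_reviewer(prediction: str, confidence: float) -> str:
    # Stand-in for a real review queue or UI; here the human overrides to "deny".
    print(f"Routed to reviewer: model said {prediction!r} at {confidence:.2f}")
    return "deny"

print(decide("approve", 0.97, human_reviewer))  # acted on automatically
print(decide("approve", 0.62, human_reviewer))  # deferred to the human
```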

7. Safety and Security:

Prioritize safety and security in AI system design, implementing measures to prevent malicious use, protect against cyber threats, and ensure robustness and reliability.

8. Social Impact:

Consider and minimize the societal impact of AI systems, addressing issues such as job displacement, economic inequality, and the potential amplification of social biases.

9. Early Identification of Vulnerabilities:

Actively monitor AI systems for vulnerabilities and emerging risks post-deployment, taking appropriate action to address issues and encouraging third-party and user reporting.
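
In code, such monitoring can start with a simple drift check comparing live inputs against statistics captured at training time. The baseline values, sample batch, and alert threshold below are assumed for illustration; production systems would use richer drift tests alongside channels for third-party and user reports.

```python
import statistics

# Assumed baseline statistics captured on the training data for one feature.
BASELINE_MEAN = 50.0
BASELINE_STDEV = 10.0
DRIFT_Z_THRESHOLD = 3.0  # assumed alert threshold

def check_drift(recent_values: list) -> bool:
    """Flag drift when the recent mean sits far from the training mean."""
    mean = statistics.fmean(recent_values)
    z = abs(mean - BASELINE_MEAN) / (BASELINE_STDEV / len(recent_values) ** 0.5)
    return z > DRIFT_Z_THRESHOLD

live_batch = [78.0, 81.5, 75.2, 90.1, 84.7, 79.9]  # hypothetical recent inputs
if check_drift(live_batch):
    print("Drift alert: recent inputs diverge from the training distribution.")
```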

10. Responsible Information Sharing:

Engage in responsible information sharing and incident reporting among organizations developing advanced AI systems, collaborating with industry, governments, civil society, and academia to enhance AI safety and security.