Implementing a responsible AI program is key to achieving AI compliance. The Artificial Intelligence Governance Code by The Corporate Governance Institute is a comprehensive framework for the ethical and responsible development of AI technologies. Addressing concerns such as bias, privacy, and societal impact, the code emphasises the importance of sustainability, transparency, accountability, and risk management throughout the AI lifecycle. It helps organisations harness AI’s potential while complying with emerging regulations. We recommend starting now rather than rushing later: reaching high AI maturity typically takes two to three years, and an early start builds customer trust while mitigating risks. Here are some steps to consider:

1. Risk Management
Create a framework that identifies high-risk AI applications for enhanced scrutiny. Apply thorough risk management at every stage of advanced AI system development, from inception to deployment, to identify, assess, and mitigate potential risks.
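By way of illustration, such a framework often starts with a simple triage that scores each proposed use case against a few criteria and assigns a review tier. The domains, weights, and thresholds in this minimal Python sketch are hypothetical placeholders, not drawn from the code or any regulation:

```python
# Hypothetical risk-triage sketch: scores a proposed AI use case and
# assigns a review tier. Criteria and thresholds are illustrative only.
from dataclasses import dataclass

# Assumed list of sensitive domains for illustration, not a standard.
HIGH_RISK_DOMAINS = {"healthcare", "finance", "criminal_justice", "employment"}

@dataclass
class UseCase:
    name: str
    domain: str
    autonomous: bool          # acts without human sign-off
    uses_personal_data: bool

def risk_tier(uc: UseCase) -> str:
    """Return a review tier; 'high' triggers enhanced scrutiny."""
    score = 0
    if uc.domain in HIGH_RISK_DOMAINS:
        score += 2
    if uc.autonomous:
        score += 2
    if uc.uses_personal_data:
        score += 1
    return "high" if score >= 3 else "medium" if score >= 2 else "low"

if __name__ == "__main__":
    uc = UseCase("loan approval model", "finance", autonomous=True,
                 uses_personal_data=True)
    print(uc.name, "->", risk_tier(uc))  # high -> enhanced scrutiny
```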

2. Transparency
Ensure transparency in AI systems, with developers providing clear explanations of system operations and data usage, including disclosure of any biases or limitations.
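One common way to operationalise such disclosure is a "model card": a structured record published alongside each system. The fields below are an illustrative subset of common practice, not a schema mandated by the code:

```python
# Illustrative "model card" disclosure record; the fields are an assumed
# subset of common practice, not a required format.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    system_name: str
    intended_use: str
    data_sources: list[str]
    known_limitations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

card = ModelCard(
    system_name="cv-screening-v2",
    intended_use="Rank CVs for recruiter review; not for automatic rejection.",
    data_sources=["historical hiring records 2018-2023"],
    known_limitations=["Untested on non-English CVs"],
    known_biases=["Under-represents career-break candidates"],
)
print(card.to_json())
```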

3. Accountability
Develop principles, policies, and guardrails governing AI use that align with the organisation’s mission and values. Hold developers and organisations accountable for AI system decisions, establish mechanisms to address any harm caused, and provide avenues for recourse and redress.
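In practice, recourse depends on an auditable record of each consequential decision, so that harm can be traced and appeals resolved. The sketch below assumes a simple append-only log; its field names and storage are hypothetical:

```python
# Hypothetical append-only decision log to support recourse and redress.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str
    system: str
    subject_id: str       # whom the decision affects
    outcome: str
    model_version: str
    timestamp: str

_LOG: list[DecisionRecord] = []  # stand-in for durable storage

def log_decision(system, subject_id, outcome, model_version) -> DecisionRecord:
    rec = DecisionRecord(
        decision_id=f"{system}-{len(_LOG) + 1}",
        system=system, subject_id=subject_id, outcome=outcome,
        model_version=model_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    _LOG.append(rec)
    return rec

def records_for_subject(subject_id: str) -> list[DecisionRecord]:
    """Retrieve every decision affecting one person, e.g. for an appeal."""
    return [r for r in _LOG if r.subject_id == subject_id]

log_decision("credit-scoring", "user-42", "declined", "v3.1")
print(records_for_subject("user-42"))
```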

4. Privacy and Data Protection
Uphold user privacy and adhere to data protection laws, obtaining informed consent for data collection and ensuring secure storage and processing of personal data.
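Concretely, this means checking before any processing that a valid consent record exists for the specific purpose at hand. A minimal sketch, with hypothetical purpose names and an in-memory stand-in for a real consent store:

```python
# Minimal consent-gate sketch; purposes and storage are illustrative.

# Stand-in consent store: subject -> set of purposes consented to.
CONSENTS: dict[str, set[str]] = {"user-42": {"model_training"}}

class ConsentError(Exception):
    pass

def require_consent(subject_id: str, purpose: str) -> None:
    """Raise unless the subject has consented to this specific purpose."""
    if purpose not in CONSENTS.get(subject_id, set()):
        raise ConsentError(f"{subject_id} has not consented to {purpose}")

def process_for_training(subject_id: str, record: dict) -> dict:
    require_consent(subject_id, "model_training")
    # ... secure storage and processing elided here ...
    return record

process_for_training("user-42", {"age": 34})      # allowed
# process_for_training("user-7", {"age": 51})     # would raise ConsentError
```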

5. Fairness and Bias
Integrate AI governance into existing corporate structures to establish clear decision-making authority. Design and train AI systems to be fair and unbiased, actively mitigating biases in training data to prevent discrimination based on protected characteristics.
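A concrete first step is measuring outcome disparities across groups before deployment. The sketch below computes a disparate-impact ratio (the lower group’s favourable-outcome rate divided by the higher group’s); the data are toy values, and the 0.8 cut-off is the common four-fifths rule of thumb, used purely as an illustration:

```python
# Disparate-impact check: ratio of favourable-outcome rates between groups.
# Groups, data, and the 0.8 threshold are illustrative assumptions.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favourable (1) outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Toy outcomes: 1 = favourable decision, 0 = unfavourable.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # rate 0.25

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # four-fifths rule of thumb, illustrative threshold
    print("Flag for bias review before deployment")
```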

6. Human Oversight
Establish a senior leadership committee to oversee AI development and implementation. Build human oversight and control into AI systems so that people can intervene in, override, or modify decisions, especially in critical domains such as healthcare, finance, and criminal justice.
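Technically, oversight often takes the form of a human-in-the-loop gate: low-confidence or high-stakes decisions are routed to a reviewer who can approve, override, or modify them. The threshold and reviewer hook in this sketch are hypothetical:

```python
# Human-in-the-loop gate sketch: route uncertain or high-stakes decisions
# to a reviewer. Threshold and reviewer hook are illustrative assumptions.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cut-off for automatic action

def decide(prediction: str, confidence: float, high_stakes: bool,
           reviewer: Callable[[str], str]) -> str:
    """Return the final decision, deferring to a human where required."""
    if high_stakes or confidence < CONFIDENCE_THRESHOLD:
        return reviewer(prediction)   # human may approve, override, modify
    return prediction

def human_reviewer(suggested: str) -> str:
    # Stand-in for a real review interface; here the human overrides.
    print(f"Model suggested: {suggested}; reviewer overrides.")
    return "refer for manual assessment"

print(decide("deny claim", confidence=0.72, high_stakes=True,
             reviewer=human_reviewer))
```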

7. Safety and Security
Prioritise safety and security in AI system design, implementing measures to prevent malicious use, protect against cyber threats, and ensure robustness and reliability.
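On the engineering side, this includes hardening a system’s inputs and failing safely under abuse. A minimal sketch of an input guard with a per-caller rate limit, where the size and rate caps are assumed for illustration:

```python
# Input-guard sketch: basic validation plus a per-caller rate limit.
# Field limits and the rate cap are illustrative assumptions.
import time
from collections import defaultdict, deque

MAX_INPUT_CHARS = 10_000          # reject oversized payloads
MAX_REQUESTS_PER_MINUTE = 60      # crude abuse throttle

_recent: dict[str, deque] = defaultdict(deque)

def check_request(caller_id: str, text: str) -> None:
    """Raise ValueError for unsafe input or excessive request rates."""
    if not text or len(text) > MAX_INPUT_CHARS:
        raise ValueError("input empty or too large")
    now = time.monotonic()
    window = _recent[caller_id]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        raise ValueError("rate limit exceeded")
    window.append(now)

check_request("client-1", "summarise this contract")  # passes
```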

8. Social Impact
Consider and minimise the societal impact of AI systems, addressing issues such as job displacement, economic inequality, and the potential amplification of social biases.

9. Early Identification of Vulnerabilities
Actively monitor AI systems for vulnerabilities and emerging risks post-deployment, taking appropriate action to address issues and encouraging third-party and user reporting.
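Post-deployment monitoring can be as simple as tracking a live metric against its pre-deployment baseline and alerting on drift. The metric, baseline, and threshold below are hypothetical figures chosen for illustration:

```python
# Post-deployment drift monitor sketch: compare a live metric to its
# baseline and alert when it degrades. All values are illustrative.

BASELINE_ACCURACY = 0.91     # assumed figure from pre-deployment testing
ALERT_THRESHOLD = 0.05       # alert if accuracy drops more than 5 points

def check_drift(live_accuracy: float) -> bool:
    """Return True (and alert) when performance drifts past the threshold."""
    drift = BASELINE_ACCURACY - live_accuracy
    if drift > ALERT_THRESHOLD:
        print(f"ALERT: accuracy down {drift:.2%}; open an incident "
              "and investigate for new vulnerabilities or data drift.")
        return True
    return False

check_drift(0.90)   # within tolerance
check_drift(0.82)   # triggers the alert
```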

10. Responsible Information Sharing
Engage in responsible information sharing and incident reporting among organisations developing advanced AI systems, collaborating with industry, governments, civil society, and academia to enhance AI safety and security.

Monitor high-profile AI-related litigation to prepare for evolving legal issues. Building a comprehensive responsible AI (RAI) program takes time, but it is a journey worth taking, as it can drive value and growth. Contact us for an in-house workshop, onsite or online.