Modern AI systems are rarely built and deployed within a single organisation. Frameworks such as the NIST AI RMF and the EU AI Act recognise this complex, multi-organisational nature. However, moving from high-level regulatory principles to practical, coordinated risk management and compliance across the entire AI value chain remains a significant challenge.
This regulatory gap is particularly apparent in provisions such as Article 25 of the EU AI Act, which mandates “written agreements” between AI providers and deployers. There is currently little guidance on the content, structure, and development of these crucial inter-organisational contracts, especially for AI initiatives that span multiple enterprises and jurisdictions.
🧩 A Systematic Methodology for Enforceable Agreements
To address this, we introduce a practical, systematic methodology adapted from risk analysis in multi-enterprise software systems. This method translates the high-level strategic intents of AI initiatives—including regulatory compliance—into specific, measurable, and enforceable inter-organisational agreements.
Our approach is driven by a four-layer contract architecture that ensures end-to-end traceability from legal requirements down to machine-verifiable runtime checks (a minimal sketch follows the list):
* Regulatory Layer: Directly maps to legal and regulatory mandates (e.g., EU AI Act, data privacy laws).
* Enterprise Layer: Defines high-level organisational policies and risk appetite.
* Domain Layer: Focuses on specific operational risks and requirements within a particular sector (e.g., healthcare, finance).
* Operation Layer: Translates requirements into technical specifications and machine-verifiable runtime checks.
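As a minimal sketch (the class and field names below are illustrative assumptions, not the framework's formal notation), the four layers can be represented as linked records so that every runtime check traces back to the regulatory clause that motivated it:

```python
# Minimal sketch: the four contract layers as linked records, so a
# machine-verifiable runtime check can be traced back to the regulatory
# clause that motivated it. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RegulatoryClause:            # Regulatory Layer
    source: str                    # e.g. "EU AI Act, Article 25"
    obligation: str

@dataclass
class EnterprisePolicy:            # Enterprise Layer
    clause: RegulatoryClause       # upward trace to the legal mandate
    policy: str                    # organisational policy / risk appetite

@dataclass
class DomainRequirement:           # Domain Layer
    policy: EnterprisePolicy
    sector: str                    # e.g. "healthcare"
    requirement: str

@dataclass
class RuntimeCheck:                # Operation Layer
    requirement: DomainRequirement
    description: str
    check: Callable[[dict], bool]  # machine-verifiable predicate over telemetry

def trace(rc: RuntimeCheck) -> list[str]:
    """Walk from a runtime check back to its legal origin."""
    req = rc.requirement
    return [rc.description, req.requirement, req.policy.policy, req.policy.clause.source]
```

A helper like `trace` makes the chain of accountability explicit: for any failing runtime check, one can ask which domain requirement, enterprise policy, and legal clause it serves.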
This layered structure enhances modularity and agility in contract development and evolution. It is complemented by an Assume-Guarantee scheme, derived from the Design-by-Contract method in software engineering, which clearly defines the responsibilities and expectations between the parties.
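As a minimal sketch of the scheme (the predicates and the `holds` helper are illustrative assumptions, not the framework's formal notation), each party's obligations can be expressed as an assume/guarantee pair, where the guarantee must hold whenever the assumption does:

```python
# Minimal Assume-Guarantee sketch in the spirit of Design-by-Contract.
# Predicates and field names are illustrative, not a formal notation.
from dataclasses import dataclass
from typing import Callable

Predicate = Callable[[dict], bool]

@dataclass
class AGContract:
    party: str               # who gives the guarantee
    assumes: Predicate       # what this party expects from its counterpart
    guarantees: Predicate    # what it commits to when the assumption holds

def holds(contract: AGContract, observation: dict) -> bool:
    # Satisfied when the guarantee holds whenever the assumption does
    # (vacuously satisfied if the assumption fails).
    return contract.guarantees(observation) if contract.assumes(observation) else True

# Hypothetical example: a deployer guarantees human oversight provided the
# provider delivers a model card with each release.
deployer = AGContract(
    party="deployer",
    assumes=lambda o: o.get("model_card_delivered", False),
    guarantees=lambda o: o.get("human_oversight_enabled", False),
)
assert holds(deployer, {"model_card_delivered": True, "human_oversight_enabled": True})
```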
🚀 Transforming Governance from Post-Hoc to Proactive
Effective governance of multi-organisational AI systems requires more than principles; it demands operational mechanisms for coordination, verification, and enforcement.
The driver-based framework we propose provides this missing operational layer. It systematically:
* Links AI initiative objectives to specific drivers (key factors influencing risk and compliance).
* Maps those drivers onto the four-layer contract architecture.
* Grounds the agreements in concrete agent operations (see the sketch after this list).
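A minimal sketch of such a mapping is shown below; the objectives, drivers, and operation names are hypothetical examples rather than entries from a real initiative:

```python
# Minimal sketch of the driver-based mapping: each objective is decomposed
# into drivers, assigned to a contract layer, and grounded in agent
# operations. All entries are hypothetical examples.
LAYERS = ("regulatory", "enterprise", "domain", "operation")

driver_map = {
    "comply with EU AI Act transparency duties": {
        "layer": "regulatory",
        "drivers": ["documentation completeness", "deployer notification"],
        "operations": ["publish_model_card", "notify_deployer_on_update"],
    },
    "limit clinical decision risk": {
        "layer": "domain",
        "drivers": ["false-negative rate", "clinician override availability"],
        "operations": ["log_prediction_confidence", "route_low_confidence_to_human"],
    },
}

def validate(objectives: dict) -> list[str]:
    """Flag objectives with an unknown layer or no grounding operations."""
    issues = []
    for name, entry in objectives.items():
        if entry["layer"] not in LAYERS:
            issues.append(f"{name}: unknown layer {entry['layer']!r}")
        if not entry["operations"]:
            issues.append(f"{name}: drivers not grounded in operations")
    return issues

assert validate(driver_map) == []
```

A check like `validate` is where a design-time discipline pays off: gaps between objectives and the operations meant to satisfy them surface before deployment rather than during an audit.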
This concrete methodology transforms AI governance from a reactive, procedural, post-hoc compliance exercise into a proactive, design-time discipline. It empowers organisations across the AI supply chain to build trust, manage risk effectively, and create a verifiable record of diligence from the outset.
Our future work aims to develop tool support to automate parts of the analysis, contract generation, and change management processes, and to further tailor the framework for high-risk domains such as healthcare and autonomous vehicles.