The United States and the European Union released a joint roadmap on evaluation and measurement tools for trustworthy AI and risk management in early December 2022 (the “Joint Roadmap”). The aim of the roadmap is to guide the development of tools, methodologies and approaches to AI risk management and trustworthy AI by the EU and the US, and to advance the shared interests of both parties in supporting international standardisation efforts and promoting trustworthy AI. The roadmap takes practical steps to advance trustworthy AI and uphold the commitment of both parties to the Organisation for Economic Co-operation and Development (OECD) Recommendation on AI.
Both the US and EU acknowledge that a risk-based approach and a focus on trustworthy AI systems can provide people with confidence in AI-based solutions, while inspiring enterprises to develop trustworthy AI technologies. This approach supports common values, protects the rights and dignity of people, sustains the planet, and encourages market innovation.
The Joint Roadmap suggests several activities aimed at aligning EU and US risk-based approaches, including:
1. advancing shared terminology, concepts and frameworks for trustworthy AI and risk management;
2. establishing a joint hub for metrics and methodologies that can be used to evaluate and measure the trustworthiness and risk of AI systems;
3. promoting transparency and explainability of AI systems;
4. developing common approaches to AI ethics, including the development of AI systems that respect human rights and dignity; and
5. promoting public awareness and understanding of AI and its risks and benefits.
Interestingly, this involves establishing a joint tracker of existing and emerging risks and risk categories, based on context, use cases and empirical data on AI incidents, impacts and harms. The Joint Roadmap also suggests the EU and the US conduct joint activities aimed at promoting the development of trustworthy AI more generally, such as promoting the use of AI for social good, addressing global challenges, and supporting research and innovation in trustworthy AI.
The parties have set out a high-level implementation plan, which outlines specific steps and timelines for each of the suggested activities. These steps include forming expert working groups, analysing the emerging frameworks to coordinate on areas of overlap, and sharing learnings across a diverse group of stakeholders.
With the proliferation of global AI standards, frameworks, regulations and policies, international cooperation aimed at converging on a set of generally accepted standards is a welcome addition to the landscape. Although the initiative is at its earliest stages, and involves only two of the world's many economic blocs set to be impacted by AI, it represents an important step forward in advancing the development of trustworthy AI and effective risk management.
They will need to move fast: just last week, the European Council adopted a common position in advance of entering trilogue negotiations with the other EU institutions (a critical step in the process, which we will evaluate in a coming blog post). However, as the EU AI Act moves towards implementation, this kind of joint international cooperation will hopefully bring some additional clarity to the AI regulatory and policy landscape. We hope to see the cooperation expanded in future to include other global stakeholders.
To find out more, get in touch at email@example.com.