Why organisations need to adopt AI standards

Organisations must comply with AI regulations: enforceable laws with clear consequences for non-compliance. But why should they also adopt AI governance standards?
Enzai · 22 April 2024

The AI revolution

Although AI is nothing new (Alan Turing’s seminal paper on computing machinery and intelligence was first published in 1950), over the past couple of years we have seen exponential growth in both AI’s capabilities and its adoption. It is no mystery why: innovative AI technologies have revolutionised virtually every industry, from healthcare to financial services to the creative industries.

The latest advancements in generative AI - with the release of models such as OpenAI’s GPT-4, Google’s Gemini and Meta’s Llama - have only accelerated this trend as these technologies become more widely accessible. But with increased access come increased risks: deepfakes and misinformation, algorithmic bias and discrimination, and security and data privacy concerns, among others.

The importance of standards

New forms of technology that have the potential to change how we live our lives also come with a lot of risk. When we look back through history and try to dissect how different technologies became so deeply integrated into our modern lives, a pattern emerges. In order to truly establish trust in these new systems, standards were established either through regulation or market practice.

For example, the invention of commercial electrical systems in the 1880s completely transformed society and enabled increased productivity, efficiency and economic growth. However, electricity also exposed individuals to dangers like electrocution and fires, causing injuries and deaths. At the turn of the twentieth century the first electrical safety guidelines in the UK were introduced to manage the risks of this new technology and prevent accidents. This was followed by a raft of electricity standards which ensured that installations were properly constructed and maintained to avoid fire risks. The standardisation of the electricity supply and installation enabled the widespread adoption of electricity - by the end of the 1930s, two-thirds of UK homes had electricity, up from 6% in 1919.

A more recent example relates to information security. The expansion of the internet over the past 30 years created a flourishing tech industry and has successfully disrupted the way that businesses are run and services are offered. Yet the internet also led to an explosion of cybercrime, among other risks. Threats can range from phishing attacks to severe data breaches and cyber-terrorism, with new types of cybercrime regularly making headlines. 

In response to these risks, information security standards were developed. The ISO/IEC 27001 standard provides organisations “with guidance for establishing, implementing, maintaining and continually improving an information security management system.” Even though ISO/IEC 27001 is a standard and not a regulation, it has become widely accepted and required across industries that rely on information systems or keep digital records. 

There are countless other examples - from cars and planes, to food safety and medical devices. Through the ages, standards have provided a framework for the safe adoption of innovative technologies, allowing these to flourish while working to mitigate and manage their risks. Why should the development of AI be any different?

What are the benefits of standards in the AI space?

Standards therefore have the potential to ensure AI has the same positive impact on the world that other transformational technologies have had. However, to ensure these standards are effective, we need to first look at the types of risks they should seek to mitigate. We have set out some of the key considerations below.

1. Mitigating bias, discrimination, or harm to individuals

AI systems use vast volumes of data to make decisions that affect people across a range of tasks, from hiring and lending to criminal sentencing, immigration and the provision of welfare benefits. Algorithms are trained on data sets, from which they learn what the correct outputs should be. Using AI can speed up decision making so that, for example, the government can process more asylum or benefits claims in the same amount of time, reducing backlogs and improving satisfaction with its services.

However, evidence has shown that algorithms sometimes replicate or exacerbate human biases, particularly towards minorities. A 2023 Guardian investigation found evidence that some of the UK government’s tools produced discriminatory results, leading to dozens of people having their benefits removed for no reason. Bias and discrimination expose organisations to huge reputational risks, as well as costly lawsuits. 

Standards enable organisations to detect and flag potential algorithmic bias early, so that it can be corrected before any harm occurs. For example, the NIST AI Risk Management Framework and ISO 42001 (each of which is discussed in further detail below) have requirements to evaluate the risk of bias and to examine the quality of the data sets used. A common requirement is an initial risk assessment, focusing on the possibility of bias inherent in the context of the AI system, i.e. the historical or structural disadvantages faced by certain demographic groups that the algorithms could replicate.
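
To make this concrete, the sketch below shows the kind of pre-deployment bias check such a risk assessment might include, comparing approval rates across demographic groups. The column names, the toy data and the 80% threshold (the "four-fifths rule" from US hiring guidance) are illustrative assumptions, not requirements drawn from the NIST framework or ISO 42001.

```python
# Minimal sketch of a pre-deployment bias check. Column names, toy data
# and the 0.8 threshold are illustrative assumptions, not requirements
# taken from any specific standard.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes (e.g. approvals) per demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return rates.min() / rates.max()

# Hypothetical decision log: one row per applicant.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = selection_rates(decisions, "group", "approved")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")

# Flag for human review if one group's approval rate falls below 80%
# of another's (the "four-fifths rule" used in US hiring guidance).
if ratio < 0.8:
    print("Potential disparate impact - escalate for review before deployment.")
```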

2. Increased transparency and accountability

People should understand how, when and for which purposes an AI system is being used. This should be explained in language that is accessible to a wide range of stakeholders. Unfortunately, users are not always aware that an AI system is behind a specific decision (for example, the approval of a credit application). Even when they are aware that AI is being used, there is usually not enough information on how the system reached that decision.

Because AI systems work with a high degree of autonomy, they can reach decisions in ways that their developers did not intend or foresee. Accountability is key, as it can help to identify risks early on - for example, through assurance techniques like impact assessments. Standards ensure that individuals and organisations take ownership of the actions and decisions of AI systems and that there is always a human in the loop.

Standards often require that organisations document information about their AI systems, including data sources, use cases, decision-making processes, performance benchmarks and potential limitations. This information is crucial to understand and communicate how the algorithms work and how they are performing. Many standards increase transparency by promoting product labelling, where the user is made aware that they are interacting with an AI system. Enzai’s AI policy guide provides valuable advice on how to implement transparency and accountability measures throughout the AI system lifecycle.
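
As an illustration, the sketch below captures this kind of documentation as a simple, machine-readable record. The schema is our own, loosely inspired by "model cards"; no specific standard mandates these exact fields, and the system described is hypothetical.

```python
# Minimal sketch of an AI system record of the kind documentation
# requirements point towards. The fields are an illustrative selection,
# loosely modelled on "model cards"; the system itself is hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    name: str
    intended_use: str
    data_sources: list[str]
    performance_benchmarks: dict[str, float]
    known_limitations: list[str]
    human_oversight: str  # who reviews or can override decisions

record = AISystemRecord(
    name="credit-scoring-v2",  # hypothetical system
    intended_use="Pre-screening of consumer credit applications",
    data_sources=["internal repayment history", "bureau data (2018-2023)"],
    performance_benchmarks={"auc": 0.81, "false_positive_rate": 0.07},
    known_limitations=["Not validated for applicants with thin credit files"],
    human_oversight="All declines reviewed by a credit officer",
)

# Serialise for the AI registry / audit file.
print(json.dumps(asdict(record), indent=2))
```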

3. Enhanced risk management

AI offers great opportunities but comes with many risks, including hallucinations (where large language models generate false information), intellectual property infringement and data privacy breaches. Effective risk management reduces the likelihood of negative outcomes and helps organisations build trust with stakeholders and maintain a good reputation.

Risk registers and risk assessments, common in AI governance standards, enable organisations to identify, assess and mitigate risks associated with AI technologies.  
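
As a rough illustration of what such a register can look like in practice, the sketch below scores each risk by likelihood and impact. The 1-5 scales, the rating thresholds and the example risks are illustrative assumptions rather than values prescribed by any particular standard.

```python
# Minimal sketch of a risk register with a likelihood x impact score.
# The 1-5 scales and rating thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def rating(self) -> str:
        return "high" if self.score >= 15 else "medium" if self.score >= 8 else "low"

register = [
    RiskEntry("LLM hallucination in customer-facing answers", 4, 4,
              "Retrieval grounding plus human review of flagged answers"),
    RiskEntry("Training data contains personal data without a lawful basis", 2, 5,
              "Data-provenance audit before each retraining run"),
]

# Report risks in priority order, highest score first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"[{entry.rating.upper():6}] {entry.risk} -> {entry.mitigation}")
```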

4. Improved safety and reliability

Because AI systems are used in many critical domains - healthcare, national security and infrastructure are a few examples - there is a strong need for reliability and safety. These systems should be technically secure, work as intended and be resilient to security threats like cyber attacks. According to the National Institute of Standards and Technology (NIST), ‘adversaries can deliberately confuse or even “poison” AI systems to make them malfunction’.

By setting criteria for performance, accuracy, scalability and resilience, organisations can develop robust and reliable AI systems that can better resist attacks, and that can detect attacks early if they do happen. 
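
One simple example of such a criterion is prediction stability under small input perturbations. The sketch below tests a toy model against random noise; the model, noise scale and pass threshold are all illustrative assumptions, and real robustness testing (for example, against the data poisoning NIST describes) goes much further.

```python
# Minimal sketch of a reliability check: verifying that predictions are
# stable under small random input perturbations. The toy model, noise
# scale and 90% threshold are illustrative assumptions.
import numpy as np

def prediction_stability(model, x: np.ndarray, noise_scale: float = 0.01,
                         trials: int = 100) -> float:
    """Fraction of small random perturbations that leave the prediction unchanged."""
    baseline = model(x)
    rng = np.random.default_rng(seed=0)
    unchanged = sum(
        model(x + rng.normal(0.0, noise_scale, size=x.shape)) == baseline
        for _ in range(trials)
    )
    return unchanged / trials

# Stand-in for a trained classifier: thresholds the mean of the features.
toy_model = lambda x: int(x.mean() > 0.5)

x = np.array([0.6, 0.7, 0.55])
score = prediction_stability(toy_model, x)
print(f"Stable under {score:.0%} of perturbations")
assert score > 0.9, "Model fails the illustrative robustness threshold"
```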

5. Compliance with laws and regulations

Organisations need to comply with the relevant laws and regulations governing the use of AI technologies. While new laws such as the EU AI Act are coming in to specifically address AI risks, existing legislation cannot simply be ignored because a product or service contains an AI element. Data protection laws, consumer protection regulations, anti-discrimination laws and industry-specific regulations are all already in place. Non-compliance can lead to legal challenges, fines, penalties and reputational damage.

Standards are not mandatory in the same way that regulations are, but they enable compliance with existing regulations while ensuring organisations adopt best practice in their field. AI-specific standards can cover things such as performance metrics, risk assessments, use case analyses and many other measures to ensure effective risk management and create an auditable record of decision making.
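
The sketch below illustrates one way to keep such an auditable record: an append-only decision log in which each entry hashes the one before it, making after-the-fact tampering evident. The fields and the hash-chaining approach are illustrative assumptions, not a format required by any regulation.

```python
# Minimal sketch of an auditable decision log. The fields and the
# hash-chaining are illustrative assumptions, not a regulatory format.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log: list[dict], system: str, inputs: dict, outcome: str,
                 reviewer: str | None = None) -> None:
    """Append a tamper-evident entry: each record hashes the one before it."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "inputs": inputs,
        "outcome": outcome,
        "human_reviewer": reviewer,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

audit_log: list[dict] = []
log_decision(audit_log, "credit-scoring-v2",  # hypothetical system
             {"application_id": "A-1042"}, "declined", reviewer="j.smith")
print(json.dumps(audit_log[-1], indent=2))
```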

6. Enhanced public trust

In order for AI to reach scale and become the force for good in the world that we would all like to see, it must earn public trust. Setting high standards around how the technology is built, deployed and used can enable that. Enhanced public trust, in turn, leads to better acceptance and adoption of AI technologies, stronger relationships with stakeholders and a competitive advantage in the market, since the public values ethical considerations and trustworthiness.

Standards and AI governance frameworks 

Fortunately, there are now a number of different standards available in the AI space that can help organisations navigate the risks set out above. Some of the world’s leading standards-setting bodies, such as the National Institute of Standards and Technology (NIST), the Institute of Electrical and Electronics Engineers (IEEE) and the International Organization for Standardization (ISO), have recently published frameworks that organisations can use to adopt high standards around their AI programmes.

We have set out a brief overview of some of these below.

ISO/IEC JTC 1/SC 42

ISO, along with the International Electrotechnical Commission (IEC), set up a joint subcommittee (ISO/IEC JTC 1/SC 42) to develop standards in AI, and this work has culminated in the publication of ISO 42001. These standards, which allow organisations to adopt an effective AI Management System (AIMS), cover various aspects of AI, including terminology, ethical considerations and evaluation methods.

The subcommittee is now working on a new standard - ISO 42006 - which will set out guidelines for auditors to effectively measure how compliant individual organisations are with the requirements of ISO 42001. This auditing process will enable organisations to gain regular certification of their compliance (similar to the process around information security under ISO 27001, described in more detail earlier).

Note: Enzai was one of the first organisations in the world to offer a comprehensive AI Management System in full compliance with the requirements of ISO 42001.

NIST’s AI Risk Management Framework

The National Institute of Standards and Technology (NIST) in the United States has developed a risk management framework specifically for AI systems. This framework provides guidance on identifying, assessing and mitigating risks associated with AI technologies and is divided into four sections for the core functions of AI governance: (1) Govern; (2) Map; (3) Measure; and (4) Manage. Many organisations in North America have adopted the NIST AI Risk Management Framework as their north star.
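
As a rough illustration, the sketch below tracks an organisation's progress against the four core functions. The function names are NIST's; the example activities and the simple done/not-done tracking are our own simplification of the framework's categories and subcategories.

```python
# Minimal sketch of tracking progress against the NIST AI RMF core
# functions. The function names are NIST's; the example activities are
# an illustrative simplification, not the framework's own subcategories.
rmf_checklist = {
    "Govern":  {"AI risk policy approved": True,
                "Roles and accountability assigned": True},
    "Map":     {"Use cases and context documented": True,
                "Impacted groups identified": False},
    "Measure": {"Bias and performance metrics defined": False,
                "Metrics tracked in production": False},
    "Manage":  {"Risks prioritised and mitigations assigned": False,
                "Incident response process in place": False},
}

# Summarise completion per function, then list the individual items.
for function, items in rmf_checklist.items():
    done = sum(items.values())
    print(f"{function:8} {done}/{len(items)} complete")
    for item, complete in items.items():
        print(f"  [{'x' if complete else ' '}] {item}")
```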

We prepared a blog setting out the requirements of the NIST AI RMF - see here. Our AI governance solution ensures organisations can operate in full compliance with these requirements.

OECD’s AI Principles

One of the most prominent early efforts to establish a baseline for high standards in the AI space came from the Organisation for Economic Co-operation and Development (OECD). The OECD adopted a set of practical and flexible principles for the responsible development and use of AI in May 2019 (and its definition of AI has since been largely adopted by the final text of the EU AI Act). The OECD’s AI Principles include values-based principles like transparency, fairness and accountability, along with recommendations for policy makers.

Beyond the principles, the OECD’s AI Policy Observatory maintains a catalogue of tools and metrics designed to help organisations develop and use trustworthy AI, including Enzai’s AI governance platform.

World Ethical Data Foundation (WEDF)’s ‘Me-We-It’: An Open Standard

The WEDF’s open standard is a free, live online forum designed with three goals: (1) offering advice on building more ethical AI, to help the industry restart on healthy foundations; (2) helping the public understand the process of building AI systems; and (3) creating a space in which the public can freely put any question to the AI and data science community.

The framework is divided into three sections: ‘Me’, ‘We’ and ‘It’. The WEDF describes them as follows: ‘Me’ covers ‘the questions each individual who is working on the AI should ask themselves before they start and as they work through the process’. ‘We’ covers ‘the questions the group should ask themselves - in particular, to define the diversity required to reduce as much human bias as possible’. ‘It’ covers ‘the questions we should ask individuals and the group as they relate to the model being created and the impact it can have on our world’.

Adopting AI governance standards

If your organisation is looking to implement AI governance standards or improve existing practices, it can be difficult to know where to start. Our Start Fast method contains everything you need in three simple steps: (1) create or select policies and assessments; (2) build your AI registry; and (3) begin your assessments. The Enzai platform has ready-made policies and assessment frameworks available out of the box, allowing you to adopt best-in-class standards that will set you apart from your peers.

If you are interested in learning more about how Enzai can help your organisation adopt AI governance standards and comply with upcoming regulation, contact us today.

Build and deploy AI with confidence

Enzai's AI governance platform allows you to build and deploy AI with confidence.
Contact us to begin your AI governance journey.