An artificial intelligence system impact assessment, or AI impact assessment (AIIA), is by far the most substantial piece of work you will undertake in the development and implementation of an ISO 42001 conformant artificial intelligence management system (AIMS). An AIIA considers the impact of an AI system on individuals, groups of individuals and society; it is conducted by AI producers (developers) for the AI systems they produce, and by AI users or consumers for the AI systems they employ (it is important to note that organisations and individuals can be both AI producers and consumers). These assessments are not too dissimilar to a data protection impact assessment (DPIA) conducted in compliance with the General Data Protection Regulation (GDPR); however, AIIAs require more detail and are typically much longer.
In this blog, Neil Jones (Senior Consultant at URM) draws upon his early involvement in ISO 42001 to discuss AIIAs, their structure, and the guidance provided by ISO 42005 (a supporting standard to ISO 42001 which is focused on AIIAs). This blog is based on a URM webinar, delivered in 2024 by Neil and Lisa Dargan (Director at URM). In the webinar, Neil and Lisa discuss ISO 42001 and provide advice and guidance to organisations looking to conform or certify to the Standard.
The Structure of an AIIA
ISO 42005 (Information technology — Artificial intelligence — AI system impact assessment), which at the time of writing is in draft, sets out the required content of an AIIA and provides a template for completing one. There are other approaches available; Microsoft, for example, has an impact assessment template available for download on its website. However, certification bodies are likely to look to ISO 42005 when conducting assessments and, as such, using the template provided in ISO 42005 is the most advisable approach.
AIIAs are structured into seven sections, with each section outlining particular information that you will need to provide. The first of these sections, Section A, focuses on system information, such as a system description, its features, purpose, intended uses and unintended uses. Section B, meanwhile, looks at data information and quality, and requires you to record details of all the datasets employed in the system (production, training, validation, etc.). This is a particularly large section of the AIIA, and the amount of information you will need to provide about each dataset is fairly extensive. You will need to assess each dataset in terms of the following characteristics:
- Accuracy
- Completeness
- Representativeness
- Consistency
- Credibility
- Currency
- Accessibility
- Compliance
- Efficiency
- Precision
- Understandability
- Portability
- Auditability
- Identifiability
- Effectiveness
- Balance
- Diversity
- Relevance
- Similarity
- Timeliness.
This list may initially seem daunting; however, ISO 42005 contains information on each characteristic and provides descriptions of what you need to consider. One simple way you might keep track of these assessments is sketched below.
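To make the structure concrete, here is a minimal sketch of how a Section B dataset assessment could be recorded. The DatasetAssessment structure, the example dataset and the notes are all hypothetical; the characteristic names are simply those listed above, and the ISO 42005 template remains the authoritative format.

```python
from dataclasses import dataclass, field

# The twenty data quality characteristics listed above.
CHARACTERISTICS = [
    "Accuracy", "Completeness", "Representativeness", "Consistency",
    "Credibility", "Currency", "Accessibility", "Compliance",
    "Efficiency", "Precision", "Understandability", "Portability",
    "Auditability", "Identifiability", "Effectiveness", "Balance",
    "Diversity", "Relevance", "Similarity", "Timeliness",
]

@dataclass
class DatasetAssessment:
    """One Section B entry: a single dataset assessed against each characteristic."""
    dataset_name: str            # e.g. "training", "validation", "production"
    purpose: str                 # what the dataset is used for in the AI system
    notes: dict = field(default_factory=dict)  # free-text notes per characteristic

    def unassessed(self) -> list:
        """Characteristics not yet addressed; a useful completeness check."""
        return [c for c in CHARACTERISTICS if c not in self.notes]

# Hypothetical usage: record two notes, then check what remains to be done.
training = DatasetAssessment(
    dataset_name="training",
    purpose="Fine-tuning a customer-enquiry classification model",
)
training.notes["Accuracy"] = "Labels spot-checked against a 5% manual sample."
training.notes["Currency"] = "Data refreshed quarterly; last refresh Q2 2024."
print(f"Still to assess: {len(training.unassessed())} characteristics")
```

Whether you hold this in a spreadsheet, a document or a structured record like the one above matters far less than ensuring every characteristic is explicitly addressed for every dataset.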
Section C focuses on algorithm and model information, such as the origin of the algorithms (have they been developed by your organisation or a third party, or are they off-the-shelf?) and the approach taken in their development. Section D, deployment environment, requires you to define where the model is going to be used, including geographical areas, any language considerations, and any complexity or constraints within the deployment environment, as well as how the model will be deployed. For example, the simplest deployment might make the AI system available to individuals on their mobile phones, accessed through app stores.
Section E, relevant interested parties, is similar to the ‘interested parties’ considerations you will see in other standards, such as ISO 27001. In this section, you will need to set out the individuals and entities (both internal and external to your organisation) that have an interest in the AI system you are producing, along with the roles of those individuals and entities. There is some overlap between this section and Clause 4 of ISO 42001 (context of the organisation), which also requires you to consider interested parties, so you should be able to draw upon the work you’ve already completed under Clause 4 when you reach Section E of the AIIA.
Like Section B, Section F, actual and potential benefits and harms, is one of the most substantial sections of the AIIA. This section requires you to record all of the AI system’s potential benefits and harms to each of the interested parties, for each of the AI perspectives defined in ISO 42005. To learn more about the concept of AI ‘perspectives’ as they relate to ISO 42001, and the AI perspectives ISO 42005 defines, read our blog on ISO 42001 and AI Perspectives.
To help you avoid repetition and unnecessary work in this already voluminous section of the AIIA, we would recommend grouping together any interested parties that have similar interests or objectives. Unfortunately, there is not a great deal of guidance available around the benefits and harms that can affect interested parties, and the relevant benefits and harms could be almost anything, so you may need to ‘think outside the box’ when completing this section.
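To show why grouping helps, the sketch below generates one Section F entry per combination of interested party and AI perspective; the number of entries is the product of the two lists, so every party you can sensibly merge removes a whole row of work. The party names and row format are hypothetical, and the placeholder perspective names stand in for those actually defined in ISO 42005.

```python
from itertools import product

# Hypothetical interested parties, with similar parties already grouped
# (e.g. all end users treated as a single entry).
interested_parties = ["End users", "Customer organisations", "Regulators", "Wider society"]

# Placeholders only; the actual AI perspectives are defined in ISO 42005.
perspectives = ["<perspective 1>", "<perspective 2>", "<perspective 3>"]

# One entry per (party, perspective) pair, each needing benefits and harms recorded.
entries = [
    {"interested_party": party, "perspective": p, "benefits": [], "harms": []}
    for party, p in product(interested_parties, perspectives)
]

print(f"{len(entries)} benefit/harm entries to complete")  # 4 x 3 = 12
```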
Section G looks at AI system failures and misuse or abuse. The ‘system failures’ aspect of this section is closely related to disaster recovery (DR) and business continuity (BC) plans. You will need to consider the architecture of your AI system (e.g. database servers, middleware servers, applications, and any other components involved), along with the impact on the AI model’s performance if any one of those components were to fail. The ‘misuse and abuse’ aspect, meanwhile, requires you to consider the impact of an individual misusing the system, either accidentally or intentionally.
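For the ‘system failures’ aspect, the analysis can be as simple as enumerating each component and recording the expected impact of its failure, as in the hypothetical sketch below (the components and impact descriptions are invented for illustration):

```python
# Hypothetical components of an AI system's architecture, mapped to the
# expected impact on the model's performance if each were to fail.
# Each entry should tie back to your DR/BC plans.
failure_impacts = {
    "database server": "Reference data unavailable; model falls back to stale, cached answers.",
    "middleware server": "Requests cannot reach the model; the service is entirely unavailable.",
    "client application": "Users cannot submit queries, although the model itself is unaffected.",
}

for component, impact in failure_impacts.items():
    print(f"If the {component} fails: {impact}")
```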
ISO 42005 Guidance
ISO 42005 provides guidance on how and when to perform AIIAs. An AIIA must be completed at the very beginning of the AI lifecycle, prior to inception or use of the AI technology, and must also be regularly updated and maintained throughout the system’s lifecycle, i.e. for as long as the system is in development or use.
There is some flexibility around how AIIAs are conducted; you can conduct a separate AIIA for each AI system you develop/use, group multiple AI systems together, or conduct a more general AIIA that covers the use of AI technology as a whole in your organisation. The best approach will depend on the specifics of your organisation as an AI producer and/or consumer, and whether there is any benefit from grouping systems together in the AIIA. However, it is important to remember that if you are using AI systems for different outcomes and with different associated risks, you may need to conduct separate assessments of those systems.
The basic approach to conducting an AIIA is similar to that of an audit: you will need to gather and document information in line with each section of the assessment. Unlike an audit, however, there is no need to gather evidence to support the assessment, which makes the process slightly easier. Even so, the output of the AIIA will still be a very substantial document (perhaps 60-70 pages).
Risk Assessments in ISO 42001
An AIIA is not a replacement for your risk assessment; ISO 42001 still requires you to conduct a risk assessment of your AI systems in addition to an AIIA. Whilst there are some different inputs you will need to consider, the approach to an AI risk assessment is very similar to that of risk assessments conducted in line with other standards, such as ISO 27001. In your risk assessment, you should identify the AI systems you need to cover, any prohibited activities (those defined in relevant regulations and legislation), and any high-risk AI systems you have produced or employed.
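To illustrate that identification step, the sketch below tags a hypothetical inventory of AI systems with a risk classification so that prohibited activities and high-risk systems are surfaced first. The system names and classifications are invented; the categories themselves should come from the regulations and legislation applicable to you (for example, the EU AI Act defines prohibited and high-risk categories).

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    name: str
    role: str            # "producer" or "consumer" (an organisation can be both)
    classification: str  # e.g. "prohibited", "high-risk", "minimal-risk"

# Hypothetical inventory of AI systems in scope for the risk assessment.
inventory = [
    AISystemEntry("CV screening assistant", "producer", "high-risk"),
    AISystemEntry("Internal enquiry chatbot", "consumer", "minimal-risk"),
]

# Surface anything needing priority treatment in the risk assessment.
for system in inventory:
    if system.classification in ("prohibited", "high-risk"):
        print(f"{system.name}: {system.classification} ({system.role})")
```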
How URM Can Help
If your organisation is looking to conform and/or certify to ISO 42001 and would benefit from support with the completion of its AIIA(s), or with any other aspect of the AIMS development and implementation process, URM is ideally placed to assist you. With nearly two decades of experience helping organisations to certify against management system standards, URM possesses both the knowledge and the experience to help you conform or certify to this new AI standard.
Our team of experts can conduct a gap analysis of your existing approach against ISO 42001, allowing you to identify where you are and are not currently meeting its requirements. For any areas of nonconformity, the URM consultant can support you with the necessary remediation and implementation activities. This can include guiding your implementation of an ISO 42001 conformant AIMS, assisting you to conduct an AIIA, and providing any other support you require in your journey to achieving ISO 42001 certification. Following the implementation of your AIMS, URM can also conduct internal audits of the management system on your behalf to ensure it is operating as intended and aligned with ISO 42001 requirements, providing you with confidence and security in AI usage or development.