Artificial Intelligence Frameworks and Regulations: ISO 42001, the NIST AI RMF and the EU AI Act

Neil Jones | Senior Consultant at URM | Published on 08 May 2026

SUMMARY

In this blog, we explore the rapidly evolving artificial intelligence (AI) governance and regulatory landscape and set out what organisations need to understand as AI adoption accelerates.  We explain key AI concepts before examining the purpose, structure and use of ISO 42001 and its supporting standard, ISO 42005, alongside the NIST AI Risk Management Framework (RMF) as an alternative, non-certifiable approach.  We then contrast these voluntary frameworks with the mandatory requirements of the EU AI Act, highlighting how the three can be applied together and what their practical implications are for organisations developing or deploying AI.

The field of artificial intelligence (AI) is developing at an unprecedented pace, driving widespread adoption across a range of products and industries.  As the technology evolves, so too do the standards, regulatory frameworks, and legislative environments that seek to govern it, in an effort to keep pace with this rapidly changing landscape.

What is AI?

AI is, in short, the intelligence of machines (or software) as distinct from that of humans.  It is a highly interdisciplinary field broadly based on computer science, data science, natural sciences, humanities, mathematics, social sciences and others, and has become increasingly pervasive in fields such as:

  • Computer vision / image recognition
  • Natural language processing
  • Data mining (information discovery)
  • Planning and logistics
  • Finance
  • Healthcare.

There are a number of key concepts and terms that are prevalent throughout the field:

Weak/narrow AI, which focuses on learning and delivering a specific task and does not adapt to its environment, such as image processing, vs. strong/general AI, which demonstrates the ability to learn new tasks without human intervention, effectively training itself, and is characterised by the ability to pass the Turing Test.  Current AI implementations lean towards weak/narrow AI.

Knowledge, as opposed to data and information, per the data-information-knowledge hierarchy:

  • Data can be used to produce information (observations by the system)
  • Information can be used to produce knowledge (what the system retains from such observations).

Model training, the process of training an AI model for a specific task using training data.

Machine learning, the process of optimising model parameters through computational techniques to tailor the model’s behaviour to reflect the data or experience.

Autonomy, referring to the level of automation exhibited by the system, from 0 (no automation), where the operator fully controls the system, to 6 (autonomy), where the system is capable of modifying its intended domain of use or its goals without external intervention, control or oversight.

Trustworthiness, articulated through the characteristics of robustness, reliability, resilience, explainability, controllability, predictability, transparency, and AI bias/fairness.

The AI (system) Lifecycle, from Inception to Retirement.

As mentioned above, the standards and legislation around the governance of AI are evolving rapidly, but in this blog, we will focus on three that are currently leading the field:

  • ISO 42001:2023 (and its sister standard, ISO 42005:2025)
  • National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF)
  • EU AI Act.

What is the ISO 42001 Standard?

ISO 42001 has been introduced with the aim of providing assurance for AI consumers through guidance on establishing an AI Management System (AIMS).  The Standard is for use by organisations developing and using AI, but is not a ‘how to’ guide to developing or deploying AI.  Instead, it’s intended to be a framework to assist the management of AI-related risks while supporting innovation, accountability and trust.

ISO 42001 is written in the ‘Harmonized Structure’ shared by other ISO management system standards (e.g., ISO 9001 and ISO 27001).  While it does set out its requirements in familiar Clauses 4-10, all clauses (except Clauses 7 and 10) include some unique elements.  In a departure from other harmonised standards, ISO 42001 contains four annexes:

  • Reference controls set out in Annex A (similar to ISO 27001)
  • Implementation guidance in Annex B (distinct from other standards, where the guidance is in a separate document, such as ISO 27002)
  • Guidance on organisational objectives and risk sources in Annex C
  • Guidance on the use of an AIMS across domains or sectors in Annex D.

The ISO 42001 standard draws from its supporting standard, ISO 42005, which details what the Standard terms ‘AI perspectives’, including accountability, transparency, privacy, etc.  Perspectives are used to articulate the trustworthiness characteristics of AI and are defined in the context of benefits and harms to interested parties.  To learn more about AI perspectives, read our blog on ISO 42001 and AI Perspectives.

ISO 42005 also sets out the structure and content requirements for AI impact assessments (AIIAs), which must be completed by organisations conforming/certifying to ISO 42001.  AIIAs are extremely lengthy and detailed and represent the most significant difference between ISO 42001 and other, similar standards, such as ISO 27001.  For more information on AIIAs, read our blog ISO 42001 Artificial Intelligence Impact Assessments (AIIAs).

Assurance is provided to consumers by organisations undertaking certification assessments against ISO 42001 (and, by implication, ISO 42005).  It is reasonable to assume that as the compliance landscape for ISO 42001 evolves, certification will increasingly become a requirement of customers; or, at the very least, that certification against ISO 42001 will provide a marketing advantage, in a similar way to how ISO 27001 has evolved over the years.

Despite marketing and contractual drivers that may develop, there is no mandatory requirement to be certified against ISO 42001.  However, in common with other ISO standards, ISO 42001 requires organisations to be aware of their applicable regulatory landscape.  This will vary globally, but within Europe this means the EU AI Act (more on this below).

The Standard also requires organisations to undertake risk assessments, though there are some differences between an ISO 42001 risk assessment and an information security risk assessment, for example.  However, the requirements set out in ISO 42001, ISO 27001 and other similar standards are sufficiently similar for a common process to be adopted, albeit with different criteria and risks defined for the different standards.
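As a purely illustrative sketch of such a common process (the field names and the 5x5 scoring scale below are assumptions, not requirements of ISO 42001 or ISO 27001), a single risk-register structure could serve both standards, with different criteria and risk sources defined for each:

```python
# Hypothetical sketch of a shared risk-register entry reusable across ISO 42001
# and ISO 27001 assessments.  Field names and the 5x5 scoring scale are
# illustrative only; neither standard mandates this structure.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    standard: str           # e.g. "ISO 42001" or "ISO 27001"
    subject: str            # the AI system or information asset being assessed
    risk_source: str        # e.g. "training data bias" or "unauthorised access"
    likelihood: int         # 1 (rare) to 5 (almost certain)
    impact: int             # 1 (negligible) to 5 (severe)
    treatment: str = "TBD"  # planned risk treatment

    def score(self) -> int:
        """Simple likelihood x impact rating on a 5x5 matrix."""
        return self.likelihood * self.impact

# The same process, applied with standard-specific criteria and risks:
ai_risk = RiskEntry("ISO 42001", "CV screening model", "training data bias", 4, 4)
infosec_risk = RiskEntry("ISO 27001", "HR database", "unauthorised access", 3, 5)
print(ai_risk.score(), infosec_risk.score())  # 16 15
```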

If you do not wish to pursue ISO certification, there are credible alternatives, such as the NIST AI Risk Management Framework.  While the AI RMF does not offer formal certification, it does support structured maturity assessments aligned to the framework.

What is the NIST AI RMF?

The NIST AI RMF is a set of guidelines to help organisations develop, use, and evaluate AI systems responsibly.   It aims to manage risks related to AI, focussing on the AI system’s trustworthiness and transparency.  Like ISO 42001, the AI RMF is applied voluntarily, with the purpose of supporting the responsible development and deployment of AI systems, promoting trustworthy AI by addressing risks across the AI lifecycle, and being flexible and adaptable for all types of organisations, both public and private.

In the same manner as ISO 42001, the AI RMF expects organisations to undertake AI risk assessments and, as such, is not itself an AI risk assessment methodology.  As with other NIST frameworks, such as the NIST Cyber Security Framework (CSF), the AI RMF is divided into two main parts: the Core (four key functions) and Profiles.

However, the Core is unique to the AI RMF, comprising four interconnected functions that help manage AI risk through the following approach:

  • GOVERN
    • Establishes policies, processes, and structures to manage AI risks
    • Includes leadership, accountability, and organisational culture
  • MAP
    • Understands and documents the context and potential impacts of the AI system
    • Identifies risks, stakeholders, intended uses, and limitations
  • MEASURE
    • Assesses, analyses, and tracks AI risks and impacts
    • Uses metrics, evaluations, and testing to gauge performance and fairness
  • MANAGE
    • Prioritises and mitigates risks over time
    • Involves response planning, system updates, and continuous monitoring.

Though not specifically called out in the framework, the AI RMF promotes continuous improvement through the use of profiles, which help tailor the AI RMF to specific use cases, sectors or risk levels.  Profiles allow you to define your current and target states for AI risk management, and subsequently to evidence continuous improvement through ongoing risk management.  The AI RMF is not intended to be a one-time checklist; rather, it encourages an ongoing approach to AI governance throughout the AI lifecycle.
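As a minimal, hypothetical sketch of how a profile might be recorded (the AI RMF does not prescribe any particular format or scoring scale, so the 0-4 maturity levels below are assumptions), the snippet captures a current and a target state for each core function and calculates the gap between them:

```python
# Hypothetical AI RMF profile sketch: one maturity level per core function.
# The 0-4 scale and dictionary format are illustrative; the AI RMF does not
# define a scoring scale or a profile template.
CORE_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

current_profile = {"GOVERN": 2, "MAP": 1, "MEASURE": 1, "MANAGE": 2}
target_profile = {"GOVERN": 3, "MAP": 3, "MEASURE": 2, "MANAGE": 3}

def profile_gaps(current: dict, target: dict) -> dict:
    """Return the improvement needed per core function to reach the target profile."""
    return {fn: target[fn] - current[fn] for fn in CORE_FUNCTIONS}

print(profile_gaps(current_profile, target_profile))
# {'GOVERN': 1, 'MAP': 2, 'MEASURE': 1, 'MANAGE': 1}
```

Tracking the same profiles over successive assessments is one simple way to evidence the continuous improvement described above.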

The AI RMF emphasises characteristics of trustworthy AI systems, helping to ensure that results drawn from an AI system are:

  • Valid and reliable
  • Safe
  • Secure and resilient
  • Explainable and interpretable
  • Privacy-enhanced
  • Fair (with harmful bias managed)
  • Accountable and transparent.

In common with other NIST guidelines, NIST has defined a playbook (the NIST AI RMF Playbook) to help organisations apply the guidelines and evaluate their application of them.  For those not familiar with NIST, a playbook provides guidance on the implementation of controls, much like ISO 27002 does for ISO 27001, or Annex B does for Annex A in ISO 42001.

Both the AI RMF and the underlying AI RMF Playbook are extensive, often approaching the same subject from different perspectives for each of the core functions.  This can make assessment against the AI RMF more onerous in comparison to ISO 42001.

What is the EU AI Act?

Whilst ISO 42001 and the NIST AI Risk Management Framework are voluntary, the EU AI Act is mandatory.  Any organisation that develops, deploys, or uses AI systems within the European Union must comply with the Act.  That said, not all AI systems fall within its scope.  The EU AI Act adopts a risk-based approach, classifying AI systems and specifying the obligations and controls required according to their associated level of risk.

AI systems presenting an unacceptable risk are prohibited, e.g., social scoring systems and manipulative AI.  Most of the Act’s text addresses high-risk AI systems, which are regulated.  A smaller section covers limited-risk AI systems, which are subject to lighter transparency obligations; developers and deployers of such systems must ensure that end-users are aware that they are interacting with AI (e.g., chatbots and deepfakes).  Meanwhile, minimal-risk AI is unregulated, and this includes the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters.  While this is currently the case, it may change as generative AI becomes more prevalent.
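Purely as an illustration of this risk-based approach (real classification depends on the system’s intended purpose and the Act’s detailed criteria and annexes, not a simple lookup), the following sketch maps example systems, largely those mentioned above, to the four tiers:

```python
# Illustrative only: a toy lookup of the EU AI Act's four risk tiers, populated
# with the example systems mentioned in this blog plus two well-known Annex III
# high-risk areas (recruitment and credit scoring).  It is not a classification tool.
RISK_TIERS = {
    "unacceptable (prohibited)": ["social scoring", "manipulative AI"],
    "high (regulated)": ["CV screening for recruitment", "credit scoring"],
    "limited (transparency obligations)": ["chatbots", "deepfakes"],
    "minimal (unregulated)": ["AI-enabled video games", "spam filters"],
}

def tier_for(system: str) -> str:
    """Return the illustrative tier for a named example system."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "unclassified - assess against the Act's criteria"

print(tier_for("spam filters"))    # minimal (unregulated)
print(tier_for("social scoring"))  # unacceptable (prohibited)
```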

The majority of obligations fall on providers (developers) of high-risk AI systems, as follows:

  • Those that intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country
  • Third-country providers where the high-risk AI system’s output is used in the EU.

As defined in the Act, users are natural or legal persons that deploy an AI system in a professional capacity, not the affected end-users (subjects) of an AI system.  Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers).

There are additional provisions for General Purpose AI (GPAI) models, i.e., models capable of competently performing a wide range of distinct tasks, such as those underpinning generative AI tools (see the sketch after this list for how these obligations combine):

  • All GPAI model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training
  • Free and open licence GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk
  • All providers of GPAI models that present a systemic risk, whether open or closed source, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
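As a rough restatement of how the bullets above combine (a hypothetical helper, not a legal determination of obligations under the Act), the sketch below returns the illustrative obligation set for a GPAI model provider given its licensing model and whether its model presents a systemic risk:

```python
# Hypothetical helper restating how the GPAI obligations described above combine.
# Obligation names paraphrase this blog's bullet points; this is not legal advice.
BASELINE = [
    "technical documentation",
    "instructions for use",
    "Copyright Directive compliance",
    "summary of training content",
]
OPEN_LICENCE_ONLY = ["Copyright Directive compliance", "summary of training content"]
SYSTEMIC_RISK_EXTRAS = [
    "model evaluations",
    "adversarial testing",
    "serious incident tracking and reporting",
    "cybersecurity protections",
]

def gpai_obligations(open_licence: bool, systemic_risk: bool) -> list[str]:
    """Illustrative obligation set for a GPAI model provider."""
    # Free and open licence providers get the reduced set, unless systemic risk applies.
    base = OPEN_LICENCE_ONLY if (open_licence and not systemic_risk) else BASELINE
    return base + (SYSTEMIC_RISK_EXTRAS if systemic_risk else [])

print(gpai_obligations(open_licence=True, systemic_risk=False))
print(gpai_obligations(open_licence=False, systemic_risk=True))
```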

To learn more about the Act, read our blog The EU Artificial Intelligence Act.

In Summary

As the AI governance and regulatory landscape has developed, ISO 42001:2023, the NIST AI RMF and the EU AI Act have emerged as key frontrunners within the evolving ecosystem of standards, frameworks and regulatory instruments.  

Whilst the AI RMF is not a standard against which you can certify, like ISO 42001 it can provide assurance and confidence to potential users of AI, customers of AI-producing organisations and, perhaps most importantly, AI subjects, i.e., those of us whose information is being processed by AI systems.

As such, ISO 42001 and the AI RMF can be seen as complementary and used in parallel.  In the same manner that organisations can (and do) certify to ISO 27001:2022 and complete NIST CSF 2.0 assessments, an organisation can certify to ISO 42001:2023 and complete a NIST AI RMF assessment.  However, given the resources required for each, it is likely that only those organisations with a specific requirement (perhaps customer driven) to undertake both will do so – or those with extensive resources.

Conversely, the EU AI Act operates within an entirely different context.  It is not optional for organisations wishing to produce or use AI within the EU.  Indeed, both ISO 42001 and the AI RMF require organisations to comply with regulations and legislation within their target geographical markets, meaning that any organisation looking to operate within the EU must comply with the EU AI Act.  If you are operating within the EU, failure to identify the Act as relevant legislation will undoubtedly be called out in certifications / assessments against either of these frameworks.  It should be noted that, although your use of AI may ultimately not require any additional measures under the Act, your organisation must be able to demonstrate that it has considered the Act and concluded it does not need to take any additional measures.

How Can URM Help?

With extensive, cutting-edge AI governance expertise, URM can help you provide the necessary assurance and confidence in your use/development of AI through ISO 42001 certification, EU AI Act compliance, NIST AI RMF alignment or a combination of these.

Drawing on two decades of experience helping organisations certify against numerous management system standards, URM can support your entire ISO 42001 journey from initial implementation to ongoing conformance.  We can conduct a gap analysis against ISO 42001, identifying where you are and are not currently meeting its requirements, as well as guiding the necessary remediation and implementation activities.  This can include helping you develop an AI policy, conduct an AIIA, implement an ISO 42001-conformant AIMS and any other support you require.  Once your AIMS is in place, URM can also perform internal audits of the management system and controls to ensure they are operating as intended and aligned with ISO 42001 requirements.

To learn more about ISO 42001, attend URM’s webinar Implementing and Certifying to ISO 42001 on 13 May at 11am, where we break down what it is, why it matters, and how to successfully implement and certify against the Standard.  

If your organisation develops, deploys or uses AI systems within the EU, our team can conduct an EU AI Act Applicability Assessment, where we determine how and where the AI Act applies to you, including which aspects of your business are in scope, your organisation’s ‘role’ under the Act, compliance obligations, etc.  Having established how the AI Act applies to your organisation, URM’s AI consultant can help identify and implement the necessary measures for compliance.  Meanwhile, for the NIST AI RMF, we can conduct a gap analysis against the Framework’s requirements to identify your current alignment status, recommending practical and appropriate actions to close any gaps.

Neil Jones
Senior Consultant at URM
Neil is a Senior Consultant at URM, with over 20 years of ‘real world’ information security knowledge and experience, having worked in complex telecommunications, (multinational) financial services and professional services environments, with both regional and global responsibilities.
