The EU Artificial Intelligence Act

What might be the effects of the EU’s AI Act outside the EU?

Martin Brazier | Senior Consultant at URM | Published on 16 August 2024

The European Parliament passed the EU Artificial Intelligence Act (the Act) on 13 March 2024, and an AI Liability Directive has also been proposed to mitigate the risks posed by evolving AI tools.  What might be the effect of the Act outside the EU, and will it affect us in the UK?

What is artificial intelligence?

Until now, there has been no agreed legal definition of ‘artificial intelligence’, not least because the term covers such a wide range of technologies.  Article 3 of the Act, however, now defines an AI system as:

a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The Act also defines:

a general-purpose AI model as ‘an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market’, and

a general-purpose AI system as ‘an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems’.

There are other laws applicable in the EU that affect the development or use of AI, including the EU General Data Protection Regulation (GDPR) and the Product Liability Directive (which grants those harmed by software a right to compensation) as well as protections for intellectual property.  

In this blog, we will look at how the Act works and whether it will influence AI regulation in the UK and further afield.

What does the EU AI Act say?

The Act sets out its scope and where it applies, defines a number of compliance roles performed by organisations, categorises the risks posed by AI systems, and places obligations on those roles in relation to each risk category.

Scope

Article 2 of the Act defines the scope as applying to:

  • Providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or are located within the Union or in a third country;
  • Deployers of AI systems that have their place of establishment or who are located within the Union;
  • Providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the system is used in the Union;
  • Importers and distributors of AI systems;
  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
  • Authorised representatives of providers, which are not established in the Union;
  • Affected persons that are located in the Union.

The Act applies to all sectors.
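To make the territorial reach concrete, here is a minimal Python sketch of a first-pass Article 2 screening.  The structure, field names and function are our own illustrative simplifications, not terms from the Act, and the real scope tests and exemptions are considerably more detailed; treat this as a sketch, not legal advice.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """Illustrative facts about an organisation and its AI system (our own simplification)."""
    role: str                    # e.g. "provider", "deployer", "importer", "distributor"
    established_in_eu: bool      # place of establishment or location within the Union
    places_on_eu_market: bool    # places a system or model on the EU market
    output_used_in_eu: bool      # the system's output is used in the Union

def act_may_apply(actor: Actor) -> bool:
    """Rough first-pass screen of the Article 2 territorial scope (not legal advice)."""
    if actor.role == "provider" and actor.places_on_eu_market:
        return True   # providers are caught wherever they are established
    if actor.established_in_eu:
        return True   # EU-based deployers, importers and distributors
    if actor.output_used_in_eu:
        return True   # third-country providers/deployers whose output is used in the EU
    return False

# Example: a UK provider selling into the EU market is still caught post-Brexit
print(act_may_apply(Actor("provider", False, True, False)))  # True
```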

Compliance roles

The Act sets out in Article 3(3) to (8) a number of compliance roles, each of which comes with compliance obligations:

  • ‘Provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge;
  • ‘Deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;
  • ‘Authorised representative’ means any natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;
  • ‘Importer’ means any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union;
  • ‘Distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market;
  • ‘Operator’ means the provider, the product manufacturer, the deployer, the authorised representative, the importer or the distributor.
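For readers who think in code, the hypothetical mapping below condenses each role into a handful of headline duties.  The duty labels are our own shorthand, not the Act’s wording, and each role’s actual obligations run to many detailed articles.

```python
# Hypothetical, heavily condensed mapping of the Article 3 compliance roles to
# headline duties; the labels are our shorthand, not the Act's wording.
ROLE_OBLIGATIONS: dict[str, list[str]] = {
    "provider": ["conformity assessment", "registration", "technical documentation"],
    "deployer": ["human oversight", "appropriate input data", "monitoring of use"],
    "authorised representative": ["carry out the provider's obligations under a written mandate"],
    "importer": ["verify the provider's conformity before placing on the market"],
    "distributor": ["verify required markings and documentation before making available"],
}

def duties_for(role: str) -> list[str]:
    """Look up the condensed duty list for a compliance role (empty if unknown)."""
    return ROLE_OBLIGATIONS.get(role.lower(), [])

print(duties_for("Importer"))
```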


Categorising risk

The Act is intended to promote the uptake of the technology while ensuring a high level of protection for health, safety, fundamental rights and the rule of law.  To do this, the Act classifies AI systems and imposes requirements according to the level of risk posed by each type of AI.

Article 5 prohibits outright those systems that present an unacceptable risk, including the ‘placing on the market, putting into service or use’ of systems which:

  • Deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm;
  • Exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation;
  • Evaluate or classify natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics (i.e., social scoring);
  • Use biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation;
  • Use ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement unless for purposes stipulated in the Act.

Article 6 goes on to classify the risk levels for those applications which are not prohibited (a simplified triage sketch in code follows the list below):

  • High risk - those systems which are either used as a safety component of a product or which are deployed in certain areas, such as education, employment, access to public services and law enforcement.  Systems falling into this category, and their providers, must be registered in an EU database and must comply with a range of requirements covering training, governance, recordkeeping, technical aspects, human oversight and information security.
  • Limited risk - those systems which directly interact with people, such as chatbots, emotion recognition systems and biometric categorisation systems.  Providers and deployers of these systems must comply with a range of transparency requirements.  For example, systems that produce audio or visual content which appears genuine but is produced by AI must declare that the content has been artificially generated or manipulated, although content which is ‘evidently’ artistic, creative or satirical carries reduced disclosure requirements.
  • Low or minimal risk - those systems which do not fall into either of the other risk categories.  These carry no specific obligations under the Act.
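As promised above, here is a very rough Python triage of the risk tiers.  The boolean flags are our own simplifications of the tests in Articles 5 and 6 and Annex III, which are far more detailed; the sketch only illustrates the order in which the tiers are considered.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (Article 5)"
    HIGH = "high risk (Article 6)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "low or minimal risk"

def classify(is_prohibited_practice: bool,
             is_safety_component: bool,
             in_high_risk_area: bool,
             interacts_with_people: bool) -> RiskTier:
    """Very rough triage of the Act's risk tiers (illustrative only)."""
    if is_prohibited_practice:
        return RiskTier.PROHIBITED       # Article 5 practices are banned outright
    if is_safety_component or in_high_risk_area:
        return RiskTier.HIGH             # safety components and Annex III areas
    if interacts_with_people:
        return RiskTier.LIMITED          # chatbots and similar: transparency duties
    return RiskTier.MINIMAL              # everything else: no specific obligations

# A recruitment-screening tool (an employment use, an Annex III area) lands in HIGH
print(classify(False, False, True, False))  # RiskTier.HIGH
```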

Regulation in practice

The Act requires each EU member state to establish a ‘notifying authority’ and a ‘market surveillance authority’, and to ensure they have the necessary technical capabilities, funding and staffing to fulfil their duties under the Act.

The notifying authority will set up and carry out the procedures for assessing and designating conformity assessment bodies.

The market surveillance authority will report to the Commission and enforce compliance at a national level.

An AI Office within the Commission will enforce the common rules across the EU and will advise and assist member states in applying the Act consistently and effectively.

Enforcement

Monetary penalties range from €7.5m or up to 1% of global annual turnover (whichever is higher) for the supply of incorrect, incomplete or misleading information, to €35m or up to 7% of global annual turnover for non-compliance with the prohibited AI practices.  This enforcement model will be familiar to those working with the EU GDPR, although the upper limits of the fines are even higher than under the GDPR.
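As a quick worked example of this ‘fixed cap or percentage of turnover, whichever is higher’ model (the turnover figure is invented purely for illustration):

```python
def max_fine(eur_cap: float, pct_cap: float, global_turnover: float) -> float:
    """Upper bound of a penalty band: the higher of a fixed euro cap or a
    percentage of global annual turnover (the GDPR-style model)."""
    return max(eur_cap, pct_cap * global_turnover)

# Prohibited-practice band for a hypothetical firm with €2bn global turnover:
# 7% of turnover (€140m) exceeds the €35m fixed cap, so the higher figure applies.
print(f"€{max_fine(35_000_000, 0.07, 2_000_000_000):,.0f}")  # €140,000,000
```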

What is the UK approach?

In February 2024, the UK government published its response to its white paper consultation on AI regulation.  The resulting regulatory framework is led by the Department for Science, Innovation and Technology (DSIT) and is underpinned by core principles designed to address the key risks posed by AI systems.  Like the EU Act, it is cross-sector and takes a technology-neutral approach.

In contrast to the EU Act, there is no formal definition of AI; instead, an outcomes-based approach rests on the characteristics of adaptivity and autonomy.  Existing regulators such as the Information Commissioner’s Office (ICO), Financial Conduct Authority (FCA) and Ofcom can interpret adaptivity and autonomy to create specific definitions if they wish, but this raises the concern that different regulators could apply different interpretations, creating confusion for those operating across regulators or sectors.

The framework distinguishes between:

  1. Highly capable general-purpose AI: Foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. Generally, such models will span from novice through to expert capabilities with some even showing superhuman performance across a range of tasks.
  2. Highly capable narrow AI: Foundation models that can perform a narrow set of tasks, normally within a specific field such as biology, with capabilities that match or exceed those present in today’s most advanced models.  Generally, such models will demonstrate superhuman abilities on these narrow tasks or domains.
  3. Agentic AI or AI agents: An emerging subset of AI technologies that can competently complete tasks over long timeframes and with multiple steps.  These systems can use tools such as coding environments, the internet, and narrow AI models to complete tasks.


Principles

Clause 10 of the framework sets out five cross-sector principles for existing regulators to interpret and apply within their own remits:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

The strategy for implementing these core principles is predicated on:

  1. Leveraging existing regulatory authorities and frameworks, as there are no plans to introduce a new AI regulator.  Neither the implementation of the principles nor the requirement for regulators to collaborate will be legally binding, although the government anticipates the need to place a legal duty on regulators in the future.  Regulators have been asked to produce strategic plans on AI, and DSIT has clarified that those plans should set out how they will align with the principles, an analysis of AI risk in their sectors, how they will manage those risks, and a plan of activities.
  2. A co-ordinating central function to facilitate risk monitoring and to support regulatory coordination.  Given the widespread use of AI, individual regulators cannot fully address the opportunities and risks it poses, so the government has set up a new central function within DSIT to monitor and evaluate risks, promote coherence and address gaps.  It will deliver an ongoing review of regulatory powers and remits, develop an overarching AI risk register and continue collaboration with other forums.
  3. Piloting a multi-regulator advisory service, the AI and Digital Hub, to help innovators to navigate multiple regulatory obligations and to facilitate compliance.

This framework approach contrasts with the more prescriptive, legislation-driven approaches of the EU and US, although there is a growing recognition that legislation may be required going forward.  However, the previous government did not propose specific legislation or mandatory measures, and went as far as saying that legislation would only be introduced if specific conditions were met: it would need to be confident that existing legal powers were insufficient, and that voluntary transparency and risk management were ineffective.

The recent change of government may have some impact on how (and how quickly) the UK approach evolves; the King’s Speech in July 2024 stated that the newly elected Labour government will ‘seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models’, but stopped short of committing to a specific AI Bill.  Meanwhile, the Prime Minister’s introduction mentioned ‘[harnessing] the power of AI as we look to strengthen safety frameworks’.  The details of the Labour government’s overall approach to AI regulation therefore remain unclear, but there does appear to be a departure from that of the Conservatives.

The current framework approach has flaws, in that some areas of AI operation may fall outside the scope of existing regulators.  Furthermore, regulators are already hard-pressed to deliver existing regulation; for example, the ICO’s existing regulation of data protection and the Freedom of Information Act is weak in some areas and negligible in others.  Whether regulators can collaborate to the extent needed to provide comprehensive regulation across all sectors remains to be seen.

What might be the consequences of the EU AI Act outside the EU?

Let us first consider the effect of the EU AI Act in the UK.  Any UK company developing or working with AI systems for the EU market will have to consider the new EU Act.  In the wake of Brexit, the UK government’s stance has been to use the ‘freedom’ gained from EU legislation to enable innovation, applying as light a touch as possible on legislation and regulation to cut the cost of doing business.  But if businesses must meet the more prescriptive EU legislation to operate there, how much advantage will the UK approach deliver, except for the few businesses domiciled and operating wholly within the UK?

For those examining the effects of the UK AI framework, there is currently little clarity from regulators as to what exactly will be required to achieve compliance.

The very wide territorial and sector scope of the Act means that organisations developing and using AI in, or for, the EU will be covered by it.  We have seen in the past how EU legislation can change the approach taken by businesses.  This was especially true of the GDPR, which led to global changes as businesses complied in order to access the European market of 450 million potential customers, not least because business processes can be simpler and cheaper if one approach is taken worldwide.

Nations with developing markets understand this and are keen to shape their own legislation in ways that attract global business.  We have already seen other data protection regimes align with the GDPR, such as Brazil’s Lei Geral de Proteção de Dados (LGPD), India’s Personal Data Protection Bill (PDPB) and Nigeria’s Data Protection Regulation (NDPR), the latter mirrored by many other members of the African Union.  Many countries that found it convenient to adopt an EU-style approach to data protection may do the same for AI.

EU regulation also tends to raise the profile of its subject, and the effects of the AI Act can already be seen in the actions of other countries keen to be at the forefront of technology and to be seen using AI to drive the new economy.  For example, China, keen to become a leading AI innovation centre, has developed its Cybersecurity Law and the New Generation AI Development Plan to provide measures for data protection and cyber security in AI, emphasising compliance and risk management.  Canada has introduced key government-led programmes, such as the Pan-Canadian AI Strategy and the Canadian AI Ethics Council, to advocate for the responsible development of AI and to address relevant ethical issues in the AI sector; these initiatives sit alongside its Personal Information Protection and Electronic Documents Act (PIPEDA), which regulates the collection, use and disclosure of individuals’ personal information by AI technologies.  In Australia, the National Artificial Intelligence Ethics Framework is the cornerstone of AI regulation, setting out the ethical principles that guide the development and implementation of AI systems, with the Australian Competition and Consumer Commission (ACCC) playing a role in enforcement.

The EU’s action is a clear commitment to regulating AI: to protect its citizens, to allay European technology companies’ fears of falling behind the US companies that dominated early technology markets, and to offer AI companies a stable environment in which to operate.

What does all this mean for organisations developing and using AI?

Whilst the development of AI systems and technology has so far been conducted in a fragmented and threadbare regulatory and legislative environment, that is now changing.  Some retrospective work may be necessary for existing players in the market, and newcomers should use the opportunity to prepare for future AI development.  That should include:

  • Obtaining specialist advice on AI legislation and associated topics such as data protection and information security.
  • Taking a close look at any existing products they use or provide which qualify as an AI system under the legislation.
  • Asking their advisor whether those products are caught by the legislation, which category of risk they fall under, their compliance role and their obligations.
  • Devising a plan to comply with their obligations and integrating those obligations within their wider compliance framework, looking for governance synergies with other areas of compliance such as data protection and information security.

How can URM help?

For organisations looking to develop, provide or deploy AI systems in full compliance with regulations and frameworks such as the EU AI Act, conformance to ISO 42001, the International Standard for Artificial Intelligence Management Systems (AIMS), is the ideal starting point.  Whilst conformance to ISO 42001 will not guarantee compliance with the EU AI Act, there is overlap between the two in terms of requirements, and both are concerned with ensuring AI systems are developed and used responsibly and ethically.  As such, this AI standard can be a significant help in enabling you to achieve AI Act compliance.

With nearly two decades of experience delivering governance, risk and compliance training and consultancy, URM can provide ISO 42001 training that will ideally position you to undertake AI impact assessments (AIIAs) and to develop and implement an ISO 42001-conformant AIMS.  Leveraging the expertise gained from supporting over 400 management system implementation projects across a range of relevant ISO standards, our one-day Introduction to ISO 42001 Course will equip you with the skills and knowledge necessary to govern and effectively manage an AI system in line with the requirements of ISO 42001.

Martin Brazier
Senior Consultant at URM
Martin is a highly experienced and knowledgeable GRC consultant at URM specialising in data protection.  He holds BCS Certificates in Data Protection and Freedom of Information, and is a Certified Information Privacy Professional (Europe) (CIPP/E).  He also holds BCS Certificates in Information Security Management Principles, Business Continuity Management and Information Risk Management.