Establishing Organisational Control Over Artificial Intelligence

George Ryan | Consultant at URM | Published on 22 Nov 2024

Artificial Intelligence (AI) is transforming the way organisations operate, and we at URM recognise that many organisations face challenges in understanding exactly what AI is, how it can impact them and how to establish effective control over it. Let’s start with the basic question: what exactly is AI?

As defined in Article 3 of the EU Artificial Intelligence Act, AI is:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

To learn more about the EU AI Act and its provisions, read our blog on the EU Artificial Intelligence Act.

How Can AI Impact Your Organisation?

AI can be a business enabler when used correctly, with benefits including increased operational efficiency, better-informed decision making, improved customer experience, trend identification and code review. Those benefits come at a cost, however, and improper usage can put your organisation at risk. There have already been numerous incidents in which incorrect usage of AI has led to accidental information disclosure, incorrect decision making, poor customer experience and unintentional appropriation of intellectual property. Your organisation also needs to consider the potential regulatory impact; accidental disclosure, for example, may constitute a breach of the General Data Protection Regulation (GDPR). Our blog on Data Protection Considerations for Artificial Intelligence (AI) explores these and other challenges associated with AI and data protection compliance, and how to overcome them.

AI in Action

AI has many applications which are beneficial to the work environment, extending beyond services such as ChatGPT and Copilot. Each application comes with different risks that need to be understood and mitigated. Below are just a few examples:

Autonomous Systems

Such systems leverage AI to perform tasks without the need for human intervention; however, there have already been issues where this type of AI has made mistakes, such as mistaking the moon for a traffic light. Because no human is in the loop, these mistakes can occur before anyone has had the opportunity to prevent them. Such systems can present a range of issues, including the risk of injury in the workplace.

In addition to this, threat actors have already been detected attempting to poison such systems. Poisoning occurs when an individual is able to compromise the training set used by an AI or machine learning model and thereby manipulate the way it behaves. For example, an email filter can be ‘poisoned’ by seeding its training data with messages containing specific keywords, labelled as legitimate mail; by then including those keywords in spam, threat actors can bypass the filter.
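To make the mechanism concrete, the following deliberately simplified sketch poisons a toy Naive Bayes spam filter built with scikit-learn. All of the messages and the trigger token ‘zq7’ are invented for illustration; real poisoning attacks target far larger training pipelines.

# A minimal sketch of training-set poisoning against a toy spam filter.
# Messages and the trigger token 'zq7' are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

ham = ["meeting moved to 3pm", "quarterly report attached", "lunch on friday"]
spam = ["win a free prize now", "claim your free reward", "free money waiting"]

# The attacker slips trigger-laden messages into the 'not spam' training data.
poison = ["zq7 invoice zq7 attached", "zq7 schedule zq7 update", "zq7 notes zq7 call"]

texts = ham + poison + spam
labels = [0] * (len(ham) + len(poison)) + [1] * len(spam)  # 0 = ham, 1 = spam

vectoriser = CountVectorizer()
model = MultinomialNB().fit(vectoriser.fit_transform(texts), labels)

# The same spam slips past the filter once the trigger token is included.
tests = ["win free prize now", "win free prize zq7 zq7"]
print(model.predict(vectoriser.transform(tests)))  # [1 0]: the second passes as ham

The attack needs no access to the deployed filter itself, only to the data it learns from, which is why provenance controls over training data matter as much as controls over the model.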

Predictive Analysis

AI can be used to provide predictions based on identified trends and patterns. However, the input data or the algorithm itself can skew results. In the case of Zillow, an error in its machine learning algorithm led it to purchase homes at higher prices than it was subsequently able to sell them for, resulting in around 2,000 job losses and a $304 million inventory write-down.
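The underlying failure mode is easy to reproduce in miniature. The following sketch, with entirely invented numbers, fits a simple linear model to a rising market and shows it continuing to project the old trend after prices have flattened:

# A deliberately simplified sketch of training data skewing predictions:
# a model fitted to a rising market keeps extrapolating the rise after
# the market has turned. All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(0, 24).reshape(-1, 1)      # two years of history
prices = 300_000 + 2_000 * months.ravel()     # steady growth of 2,000/month

model = LinearRegression().fit(months, prices)

# The market flattens at month 24, but the model still projects the old trend.
future = np.arange(24, 30).reshape(-1, 1)
predicted = model.predict(future)
actual = np.full(6, prices[-1])               # prices stop rising

for m, p, a in zip(future.ravel(), predicted, actual):
    print(f"month {m}: predicted {p:,.0f}, actual {a:,.0f}, overpaying by {p - a:,.0f}")

A model like this is not ‘wrong’ about its training data; it is wrong about the world the moment conditions change, which is why predictions feeding purchasing decisions need ongoing validation against outcomes.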

Research

AI can help during the research process due to the vast amount of information it has access to, as well as its ability to interpret requests for specific information better than a standard search engine would. However, AI is also subject to hallucinations, where it presents incorrect or misleading results as fact. This was the case for Roberto Mata, whose lawyers cited court cases supplied by ChatGPT which were not real but had been fabricated by the AI, with ChatGPT going so far as to confirm that they were, in fact, real.

Natural Language Processing

This is where AI is used to understand, interpret and generate human language, and it is a major component of chatbots. It has, however, already been exploited, such as in the case of DPD, which had to disable part of its online support chatbot after it swore at a customer and criticised the company.

Chatbots do not necessarily have to be maliciously exploited, however; they can present risk to your organisation even under ordinary use. This was seen in the case of the ‘MyCity’ chatbot developed by New York City, which came under fire for advising small businesses to break the law.
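One common mitigation is to screen chatbot output before it reaches a customer and hand off to a human when a check fails. The sketch below is illustrative only; the word lists and fallback wording are placeholders, and production systems layer far more sophisticated moderation on top of checks like these:

# A minimal sketch of screening chatbot output before it reaches a customer.
# The word lists and fallback message are placeholders, not a complete
# moderation layer.
import re

PROFANITY = re.compile(r"\b(damn|hell)\b", re.IGNORECASE)            # placeholder list
OFF_LIMITS = re.compile(r"\b(lawsuit|illegal|refund policy)\b", re.IGNORECASE)

def screen_reply(reply: str) -> str:
    """Return the model's reply only if it passes basic output checks."""
    if PROFANITY.search(reply) or OFF_LIMITS.search(reply):
        return "Let me connect you with a member of our support team."
    return reply

print(screen_reply("That damn parcel is lost."))        # escalates to a human
print(screen_reply("Your parcel is out for delivery."))  # passes through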

The AI You Don’t Know About

An element of AI which is often forgotten is its usage by employees without organisational knowledge and/or direction. As mentioned above, most people are familiar with AI in the forms of ChatGPT and Copilot, and a significant number of organisations, whether they know it or not, are using these AI systems in their day-to-day operations. However, this often happens with very little oversight or control from the organisation, presenting a significant exposure to risk. AI services are notoriously vague about how data is handled and stored, and your organisation remains responsible for the data input into such services.
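Discovering this ‘shadow AI’ usage can start with data you already have. The sketch below assumes a simple ‘user domain’ web proxy log format and a hand-picked list of AI service domains; both are assumptions you would adapt to your own environment:

# A minimal sketch of surfacing unsanctioned AI usage from web proxy logs.
# The domain list and the 'user domain' log format are assumptions.
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "copilot.microsoft.com",
              "gemini.google.com", "claude.ai"}

def shadow_ai_report(log_lines):
    """Count requests to known AI services, keyed by user and domain."""
    hits = Counter()
    for line in log_lines:
        user, _, domain = line.partition(" ")
        if domain.strip() in AI_DOMAINS:
            hits[(user, domain.strip())] += 1
    return hits

sample = ["alice chatgpt.com", "bob intranet.local", "alice chatgpt.com"]
for (user, domain), count in shadow_ai_report(sample).items():
    print(f"{user} -> {domain}: {count} requests")

A report like this is a conversation starter, not a disciplinary tool: it tells you where guidance and an approved-tools list are most urgently needed.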

How to Establish Control Over AI Usage Within Your Organisation

Imagine you have an employee who is trying to identify customer trends: they have decided to use AI, feeding it customer information. Inadvertently, they have now disclosed personally identifiable information (PII) externally, where it cannot be controlled by the organisation. This leaves the organisation at risk of reputational damage and regulatory fines, even if a copy of the data is still retained internally. These risks can be mitigated by an AI usage policy, in which you can outline the acceptable usage of AI.
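A technical control can back the policy up. As a deliberately simple illustration, the sketch below redacts obvious email addresses and UK-style phone numbers before a prompt leaves the organisation; the patterns are minimal examples, and a real deployment would use a dedicated data loss prevention tool:

# A minimal sketch of redacting obvious PII before text is sent to an
# external AI service. These regexes catch only simple patterns; a real
# deployment would use a dedicated data loss prevention tool.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),  # UK-style numbers
}

def redact(text: str) -> str:
    """Replace matched PII with placeholder tokens before external submission."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise feedback from jane.doe@example.com, phone 07700 900123."
print(redact(prompt))
# Summarise feedback from [EMAIL], phone [PHONE].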

Writing Your AI Policy

Whilst AI will have different implications and applications for different organisations, URM has outlined general guidance for your AI usage policy below to help your organisation establish effective control.

Address usage and scope: One of the first things your policy needs to address is whether AI is to be used in the organisation at all; if you decide that it will not be, you simply need to state this in your policy.

If AI is to be used, your policy should define the purposes for which it is permitted. Here, you may include activities such as skills development and research.

Prohibited usage: Your policy should also explicitly state uses of AI that are prohibited. This may include prohibiting the uploading of PII to AI tools or the use of AI to produce client deliverables. Explicitly stating prohibited usages reduces ambiguity and makes clear to all members of the organisation what AI cannot be used for.

Applicable tools: Your AI policy should state, or make reference to, which AI tools are approved for use by the organisation. You should also name a point of contact for, or define a means of, requesting that new tools be added to the permitted tools list (one way of keeping this list machine-readable is sketched after this list).

Compliance: As with any policy, it must be clear what happens if a violation occurs. The policy should also detail a clear line of reporting for suspected violations.
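Where your organisation has the appetite, the approved-tools list can be kept machine-readable, so the same source drives both the policy document and any technical checks. The sketch below is one possible shape; the tool names, permitted purposes and contact address are all placeholders:

# A minimal sketch of a machine-readable approved-tools list. Tool names,
# permitted purposes and the contact address are placeholders.
APPROVED_TOOLS = {
    "ChatGPT Enterprise": {"permitted": ["research", "skills development"]},
    "GitHub Copilot":     {"permitted": ["code review", "boilerplate code"]},
}
POLICY_CONTACT = "ai-governance@example.com"  # placeholder point of contact

def check_usage(tool: str, purpose: str) -> str:
    """Say whether a tool/purpose pairing is allowed under the policy."""
    entry = APPROVED_TOOLS.get(tool)
    if entry is None:
        return f"'{tool}' is not approved; request approval via {POLICY_CONTACT}."
    if purpose not in entry["permitted"]:
        return f"'{tool}' is approved, but not for '{purpose}'."
    return f"'{tool}' may be used for '{purpose}'."

print(check_usage("ChatGPT Enterprise", "research"))
print(check_usage("Midjourney", "marketing imagery"))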

How URM Can Help

With nearly two decades of experience delivering governance, risk and compliance training and consultancy, in addition to our extensive AI expertise, URM can assist you in creating and implementing an effective AI policy that enables you to establish control over the use of AI in your organisation, whilst remaining pragmatic and commensurate with your organisation’s needs.

George Ryan
Consultant at URM
George Ryan is a Consultant at URM, working predominantly with ISO 27001. He is an IASME-certified Cyber Essentials and Cyber Essentials Plus Assessor.
