Artificial Intelligence (AI) is transforming the way organisations operate, and we at URM recognise that many organisations face challenges in understanding exactly what AI is, how it can affect them and how to establish effective control over it. Let’s start with the basic question: what exactly is AI?
As defined in Article 3 of the EU Artificial Intelligence Act, AI is:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
To learn more about the EU AI Act and its provisions, read our blog on the EU Artificial Intelligence Act.
How Can AI Impact Your Organisation?
AI can be a business enabler when used correctly, with benefits including increased operational efficiency, better-informed decision making, improved customer experience, trend identification and code review. However, those benefits come at a cost, and improper usage can put your organisation at risk. There have already been numerous incidents in which incorrect usage of AI has led to accidental information disclosure, poor decision making, poor customer experience and unintentional appropriation of intellectual property. Your organisation also needs to consider the potential regulatory impact, for example where accidental disclosure constitutes a breach of the General Data Protection Regulation (GDPR). Our blog on Data Protection Considerations for Artificial Intelligence (AI) explores these and other challenges associated with AI and data protection compliance, and how to overcome them.
AI in Action
AI has many applications which are beneficial to the work environment, extending beyond services such as ChatGPT and Copilot. Each application carries different risks that need to be understood and mitigated. Below are just a few examples:
Autonomous systems
Such systems leverage AI to perform tasks without the need for human intervention; however, there have already been issues where this type of AI has made mistakes, such as mistaking the moon for a traffic light. Because there is no human in the loop, these mistakes can occur before anyone has had the opportunity to prevent them. Such systems can present a range of issues, including the risk of injury in the workplace.
In addition, threat actors have already been detected attempting to poison such systems. Poisoning occurs when an attacker is able to compromise the training data used by an AI or machine learning model and thereby manipulate the way it behaves. For example, an email filter can be ‘poisoned’ by injecting training examples in which specific keywords are labelled as not spam; by then including those keywords in spam mail, threat actors can bypass the filter.
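To make the poisoning mechanism concrete, the sketch below is a minimal, hypothetical illustration using a simple bag-of-words Naive Bayes classifier from scikit-learn; the emails, labels and the ‘free prize’ trigger phrase are all invented for demonstration and do not represent any real filter or incident.

```python
# Hypothetical illustration of training-data poisoning against a simple
# bag-of-words spam filter. All emails and the trigger phrase are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

clean_training = [
    ("win a free prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda attached", "ham"),
    ("project update for review", "ham"),
]

# The attacker injects examples containing the trigger words "free prize"
# but labelled as legitimate ("ham").
poisoned_training = clean_training + [
    ("quarterly figures free prize", "ham"),
    ("see attached free prize", "ham"),
    ("free prize in the report", "ham"),
]

def train(examples):
    """Fit a word-count vectoriser and Naive Bayes model on (text, label) pairs."""
    texts, labels = zip(*examples)
    vectoriser = CountVectorizer()
    model = MultinomialNB().fit(vectoriser.fit_transform(texts), list(labels))
    return vectoriser, model

attack_message = ["free prize click here"]  # spam built around the trigger words

for name, data in [("clean", clean_training), ("poisoned", poisoned_training)]:
    vectoriser, model = train(data)
    verdict = model.predict(vectoriser.transform(attack_message))[0]
    print(f"{name} training data -> classified as {verdict}")
```

Trained on the clean data, the filter classifies the attack message as spam; trained on the poisoned data, the trigger words are now strongly associated with legitimate mail, so the same message slips through. Real training sets are far larger, but the principle is the same: control over even a small fraction of the training data can shift a model’s behaviour in the attacker’s favour.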
Predictive analysis
AI can be used to provide predictions based on identified trends and patterns. However, the input data or the algorithm itself can skew the results. In the case of Zillow, errors in its machine learning model led it to purchase homes at prices higher than it was later able to sell them for. The result was around 2,000 employees being cut and a $304 million inventory write-down.
Research
AI can help during the research process due to the vast amount of information it has access to, as well as its ability to interpret requests for specific information better than a standard search engine would. However, AI is also subject to hallucinations, where it presents incorrect or misleading results. This was the case for Roberto Mata, whose lawyers cited cases generated by ChatGPT which were not real, with ChatGPT going so far as to confirm that they were, in fact, real.
Natural Language Processing
This is where AI is used to understand, interpret and generate human language, and it is a major component of chatbots. It has, however, already been exploited, as in the case of DPD, which had to disable part of its online support chatbot after it swore at a customer and criticised the company.
Chatbots do not have to be maliciously exploited to present a risk to your organisation, however. This was seen in the case of the ‘MyCity’ chatbot developed by New York City, which came under fire for advising small businesses to break the law.
The AI you don’t know about
An element of AI which is often forgotten is its usage by employees without the organisation’s knowledge and/or direction. As mentioned above, most people are familiar with AI in the forms of ChatGPT and Copilot, and a significant number of organisations, whether they know it or not, are using these AI systems in their day-to-day operations. However, this use often takes place with very little oversight or control from the organisation, presenting a significant exposure to risk. AI services are often vague about how data is handled and stored, and your organisation remains responsible for any data input into such services.
How to Establish Control Over AI Usage Within Your Organisation
Imagine an employee who is trying to identify customer trends: they decide to use AI and feed it customer information. Inadvertently, they have now disclosed personally identifiable information (PII) externally, where it cannot be controlled by the organisation. Even though the organisation still retains the data internally, it is now at risk of reputational damage and regulatory fines. These issues and risks can be mitigated by an AI usage policy, in which you can outline the acceptable usage of AI.
Writing your AI policy
Whilst AI will have different implications and applications for different organisations, URM has outlined below some general guidance for your AI usage policy to help your organisation establish effective control.
Usage and scope: One of the first things your policy needs to address is whether AI is to be used in the organisation at all; if it is decided that it will not be, you simply need to state this in your policy.
If AI is to be used, your policy should define the uses for which it is permitted. Here, you may include activities such as skills development and research.
Prohibited usage: Your policy should also explicitly state the uses of AI that are prohibited. This may include prohibiting the uploading of PII to AI tools or the use of AI to produce client deliverables. Stating prohibited usages removes ambiguity and makes it clear to all members of the organisation what AI cannot be used for.
Applicable tools: Your AI policy should state, or make reference to, which AI tools are approved for use by the organisation. You should also identify a point of contact for, or define a means of, requesting that new tools be added to the permitted tools list.
Compliance: As with any policy, it must be clear what happens if a violation occurs. The policy should also detail a clear line of reporting for suspected violations.
How URM can help
With nearly two decades of experience delivering governance, risk and compliance training and consultancy, in addition to our extensive AI expertise, URM can help you create and implement an effective AI policy that enables you to establish control over the use of AI in your organisation, whilst remaining pragmatic and commensurate with your organisation’s needs.
URM’s blog discusses the data protection considerations for utilising AI technologies, and how organisations can stay GDPR compliant in their use of AI.
URM’s blog breaks down the EU AI Act and discusses its scope, requirements, how it will be enforced, how it may impact the UK & the rest of the world, and more.
URM’s blog explores artificial intelligence impact assessments (AIIAs) and offers advice on how to conduct these assessments in full conformance with ISO 42001.