The Perils of Using AI In 2025
Whatever industry you operate in, companies’ increasing reliance on AI, both internally and externally, poses a range of challenges. The risk is amplified where you are using AI content in the EU, as a result of the EU AI Act. In this article, Nick Parkinson provides an overview of 8 key considerations when using AI in 2025.
Transparency for AI Content
Many businesses are already using AI to produce images & video content. If your AI content is being made available in the EU, you are required to disclose that it was produced using AI. The use of AI chatbots is also commonplace, with advanced AI voicebots now being deployed to handle customer calls. Again, you will need to disclose that customers are chatting with, or talking to, AI. Failure to do so exposes you to potentially catastrophic fines of up to €15 million or 3% of global annual turnover. Ouch!
GDPR & Data Breaches
Using AI is no defence to a breach of the GDPR and could lead to massive fines! Is your AI tool trained on large datasets that include personal data? Is it only using relevant data? Do you have proper consent, through your Privacy Policy or otherwise? What if your customers ask for transparency about how and why your advice or decisions have been made? Do you have guarantees and indemnities from your AI provider as to how they will be processing and retaining this data? Plenty to think about here!
Recruitment & the Equality Act
Recruitment can be a laborious process and it is naturally tempting to use an AI tool to sift through applications and shortlist candidates. The risk? Most AI tools excel at ‘pattern recognition’. If your AI tool identifies that your current IT department is dominated by young, white males with a Computer Science degree from Oxford University, the chances are an unsophisticated AI tool will shortlist candidates fitting that profile for interview. Doing so is not just poor form for equality & diversity; it also risks breaching the Equality Act, exposing companies to unlimited compensation claims.
Over time we can expect the ICO and EU AI regulators to impose stricter compliance checks!
Customer Apps Used In the EU?
Don’t think that because you are a UK company you are shielded from the EU AI Act. If your customers or employees are using an AI-powered app whilst working or travelling in the EU, there is a real risk you will be caught by the EU AI Act – and, again, by its massive fines!
Marketing and Copyright Infringement
There is already widespread use of generative AI to produce written and visual content for marketing on websites and social media. It could be as simple as a travel company saying “produce me a picture of a duck on a beach wearing flip flops with an aircraft above”. But what data is your AI tool trained on to produce that image? Does it matter? This is a developing area of law, with high-profile disputes already in motion: Getty Images against Stability AI in the UK, French publishers and authors taking on Meta, and similar actions in the USA and beyond.
If the courts in your jurisdiction or elsewhere side with the copyright holders then, by loading up on generative AI content now, you could be exposing your company to claims and litigation in years to come!
Commercial Terms with AI providers
“Guns don’t kill people, rappers do, sound of the police, woo woo woo!” The point here is that AI is just a tool and, in most cases, the onus will be on you, as the company using that tool, to ensure you are doing so without breaching copyright, the GDPR, the Equality Act or your obligations under the EU AI Act. You can expect the Terms with your AI provider to place the risk and liability on you. But what assurances do you have that ‘what goes on behind the scenes’ with their software is in fact compliant? What if you end up with a hefty fine or compensation claim through no fault of your own?
You may be able to negotiate an indemnity from your AI provider. This will very much depend on the purpose and application of the AI tool, the promises they are making about the process & outputs and, of course, your commercial leverage. A small to medium-sized AI provider will be easier to negotiate with than the likes of Google!
Who owns the copyright for AI-generated media?
This is another minefield in a developing area of law, with different outcomes possible in different jurisdictions! Let’s say you produce a simple image of a ‘duck on a beach’ for use on social media. Potentially, nobody owns the copyright in that image if it lacks originality. Tech companies may even argue that they own the copyright, which sounds a bit absurd if you compare it to a paintbrush manufacturer claiming to own the next Banksy artwork.
By contrast, you may have produced a 5-minute video that required substantial input from you as the user to guide the AI tool to the outcome. In effect, the AI tool is just an advanced version of video editing software, enabling you to produce content much more quickly and at much lower cost! Naturally, you would want to own the copyright in this case, but do you?
The answer usually comes down to the Terms & Conditions you have with the AI provider. Do the T&Cs bestow ownership on you or the AI provider, or are they silent? Do they hold the copyright but licence it to you? Either way, before you start suing someone for plagiarising your content, make sure you stop to consider whether you are actually the copyright holder in the first place!
The Future of AI Regulations
The EU AI Act is already in motion and is being phased in gradually over the coming years. Meanwhile, the UK is deliberating whether to follow the EU’s ‘red tape’ style regime (comparable to the GDPR), at the risk of stifling innovation and the tech economy, or to take a pro-tech, pro-innovation approach, at the expense of copyright holders who feel exploited by AI providers using their content for ‘training purposes’. Whichever path the UK chooses, UK businesses may need to comply with both regimes, and we can expect the regulatory landscape to change considerably over the next few years.
Final Thoughts
Remember: the better AI is, the more dangerous it is. Why? Because the better it is, the less likely you are to check whether the output is accurate and compliant with the EU AI Act, the GDPR, copyright law, the Equality Act or otherwise! If you are not sure whether your AI applications are compliant, feel free to get in touch with Techlaw for advice.