GDPR’s Battle with the AI Boom

The innovation Artificial Intelligence (‘AI’) can bring to a business’s operations cannot be ignored. However, the security risks which accompany this ground-breaking tool, including cyberattacks and data breaches, cannot be overlooked either, given the potentially catastrophic fines (up to 2-4% of a business’s annual turnover!) which come with non-compliance. Therefore, writes Katherine Peatfield, it is critical that businesses keep UK and EU data protection laws (‘GDPR’) at the forefront of their minds when using AI…

Compliance with UK/EU GDPR

On a very basic level, where you are dealing with the personal data of UK or EU individuals (whether storing, processing, transferring or otherwise), whether inside the UK or EU or not, you are required by law to protect it using what the law calls ‘Technical and Organisational Measures’. These data security measures are the responsibility of the business, and their effectiveness in protecting personal data should be regularly tested, assessed and evaluated. With AI becoming a tool central to some businesses, such testing and evaluation needs to be carried out in light of the threats which AI poses.

What Counts as a Data Breach?

A personal data breach can occur where personal information is accidentally or unlawfully accessed, destroyed, lost or altered by unauthorised individuals. For businesses covered by UK/EU GDPR, examples of AI-related data breaches could include:

  • Model inversion attacks: hackers extract personal data from an AI model;

  • Data poisoning: malicious actors insert fake data to corrupt AI models; and

  • AI hallucinations: AI generates false information about a person, potentially leading to privacy violations.

Responsibilities of Businesses Using AI

To ensure compliance with UK/EU GDPR when using AI, we have summarised 4 Key Steps businesses should follow:

(1)    Conduct Risk Assessments

  • Identify security risks specific to AI tools.

  • Determine if AI handles personal data and how to protect it.

  • Complete a Data Protection Impact Assessment (DPIA) when required.


(2)    Review Incident Response Plans

  • Develop clear steps for identifying and responding to AI-related security incidents.

  • Assign specific roles to handle breaches effectively.


(3)    Seek Expert Guidance

  • Follow best practices from regulators like the UK Information Commissioner’s Office (ICO).

  • Consider submitting a voluntary request to the ICO (if UK-based) to undertake an audit.

  • Consult AI security experts or legal advisors.


(4)    Ensure Vendor Compliance

  • Require AI vendors to prove their tools follow data privacy laws.

  • Use contracts that enforce strong security measures.

Responsibilities of AI Suppliers

Naturally, the responsibilities do not stop at the door of the business using AI – suppliers of such models need to consider quite a lot as well! Below are 3 Key Steps which suppliers should take to ensure strong measures are in place to protect their AI systems from attacks and misuse.

(1)    Secure AI Models

  • Protect against hacking attempts that could expose personal data.

  • Regularly update software and fix vulnerabilities.


(2)    Set Clear Data Protection Terms

  • Ensure contracts require customers to follow security best practices.

  • Define responsibilities for handling data breaches.


(3)    Promote Safe AI Use

  • Provide clear instructions on how to use AI securely.

  • Educate customers on privacy risks and mitigation strategies.

Collaboration

It is also important to note that in order for AI security to be effective, businesses and AI companies must collaborate. Steps to take in solidifying this collaboration could include:

  • Defining who is responsible for reporting and managing data breaches (take note of what the contract says!).

  • Sharing relevant security information to prevent threats.

  • Training employees to recognise and respond to AI-related risks.

Final Thoughts

At the end of the day, the cutting-edge possibilities which AI can bring to a business cannot be ignored. However, as you can see from the above, such possibilities also bring great risk and exposure, and therefore wide liability (remember the risk of fines of up to 2-4% of annual turnover!). It is therefore critical for

  • businesses to implement stronger data protection measures, regularly assess risks, update incident response plans, and choose secure vendors;

  • suppliers to ensure AI models are adequately protected, set clear security requirements, and educate their customers; and

  • both AI suppliers and the businesses using such tools to collaborate, ensuring data security is maintained.

Whether you are a business using AI, or an AI supplier, feel free to get in touch to discuss any of the issues arising out of this article at info@techlaw.co.uk.

Katherine Peatfield

►  Associate Solicitor

info@techlaw.co.uk

►  0113 258 0033
