Data protection in the age of AI: GDPR considerations for businesses using AI tools
Artificial intelligence is changing everything - from how we communicate with customers to how we make business decisions. AI tools can be powerful, fast, and incredibly efficient. But there’s a catch: they process vast amounts of data, and that means you could be walking a fine line when it comes to privacy laws like the General Data Protection Regulation. In this post, we’ll break down the key risks, what GDPR says about AI, and the steps you can take to stay on the right side of data protection regulations.

AI needs data to work, and lots of it. The more information you feed into an AI system, the better its predictions, decisions, or recommendations tend to be. But often, that data includes personal information: names, contact details, behavioural data, even sensitive information like health records or demographics.
This creates a natural tension. On one hand, you want AI to be useful; on the other, you've got a legal obligation to protect the rights and freedoms of the individuals whose data you're processing.
Even when data is anonymised, there's a risk it can be re-identified - especially when combined with other data sets. That’s why AI and data protection can’t be treated as separate conversations.
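To make the re-identification risk concrete, here is a minimal sketch (all names, fields and records are invented) of how quasi-identifiers in an 'anonymised' data set can be joined against another data set to recover identities:

```python
# Hypothetical illustration: a data set with names removed can often be
# re-identified by joining quasi-identifiers (postcode, birth year,
# gender) against a second data set. All records here are invented.

anonymised_health = [
    {"postcode": "SW1A 1AA", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "M1 2AB",   "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "Jane Example", "postcode": "SW1A 1AA", "birth_year": 1985, "gender": "F"},
    {"name": "John Sample",  "postcode": "M1 2AB",   "birth_year": 1990, "gender": "M"},
]

def reidentify(health, register):
    """Join the two data sets on their shared quasi-identifiers."""
    matches = []
    for h in health:
        for p in register:
            if (h["postcode"], h["birth_year"], h["gender"]) == \
               (p["postcode"], p["birth_year"], p["gender"]):
                matches.append({"name": p["name"], "diagnosis": h["diagnosis"]})
    return matches

print(reidentify(anonymised_health, public_register))
# Both "anonymous" records are matched back to named individuals.
```

No names appear in the health data, yet every record is recovered - which is why anonymisation on its own is not a free pass out of GDPR.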
Learn more about how we support organisations with data protection services.
Contents
- Why AI and data protection go hand in hand
- Businesses can use AI in several ways, each with distinct considerations under the GDPR
- What about automated decisions and profiling?
- The real risks of getting it wrong
- How to stay compliant: practical steps
- How a virtual DPO can help manage AI risks
- What’s next? AI regulation is just getting started
Why AI and data protection go hand in hand
The GDPR sets out clear principles for how personal data should be handled. Here are some that AI tools can easily clash with:
Lawfulness, fairness, and transparency – Many AI tools operate like a 'black box', making decisions in ways that are hard to explain. That makes it tricky to stay transparent with users.
Identification of lawful basis: Businesses must identify a valid lawful basis under Article 6, and understand the scope and limitations of each, such as consent and legitimate interest. Consent must be informed, and data subjects must be given the right to withdraw it. Legitimate interest requires a balancing test between the interests of the business and the rights and interests of data subjects.
Transparency: Under GDPR, data subjects must be clearly informed when their data is being used in an AI system. Key information to disclose before processing personal data includes:
What data is collected and why
How the AI works and how personal data will be used within it
The lawful basis for the processing
How the data is secured and how long it will be kept
The potential consequences of the AI's decisions
Data subject rights, including the right to object to automated decision-making
Purpose limitation – Data must only be used for the purpose it was originally collected for. However, AI often finds new use cases for existing data, which can pose a serious risk to GDPR compliance.
Data minimisation – You should only collect and process the data you actually need. AI models, however, tend to hoard data to improve performance.
Data anonymisation – You may assume that data used in AI development is anonymous, and therefore exempt from GDPR, because it appears anonymous. But if the data used to train a model can, collectively, identify a natural person, it counts as personal data and falls within the scope of GDPR and other data protection laws.
Accuracy – If your AI is making decisions based on outdated or incorrect data, you’re breaching GDPR.
Storage limitation – Data shouldn’t be kept longer than necessary, which can be hard to enforce with machine learning models that retrain on old data.
Rights of the data subject – People have the right to access, rectify, or erase their data, and to object to processing - including automated decision-making.
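As an illustration of the data minimisation principle in practice, here is a hedged sketch (the field names and customer-service triage scenario are hypothetical) of stripping a record down to an explicit allow-list of fields before it is sent to a third-party AI tool:

```python
# Minimal sketch of data minimisation: keep an explicit allow-list of
# the fields the AI task actually needs and drop everything else
# before the record leaves your systems. Field names are hypothetical.

ALLOWED_FIELDS = {"ticket_text", "product", "language"}

def minimise(record: dict) -> dict:
    """Return only the fields the AI tool actually needs."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "ticket_text": "My order never arrived",
    "product": "standard delivery",
    "language": "en",
    "customer_name": "A. Person",      # not needed for ticket triage
    "email": "a.person@example.com",   # not needed for ticket triage
}

print(minimise(raw))
# → only ticket_text, product and language survive
```

The design point is that the allow-list is explicit: a new field added to the record upstream is excluded by default, rather than quietly flowing into the AI tool.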
Businesses can use AI in several ways, each with distinct considerations under the GDPR
Bespoke AI for internal use: Companies can develop bespoke AI tools for internal use, such as applications for document management, project management or internal communication. The personal data used to develop such AI can typically be processed on the lawful basis of legitimate interest or, where relevant, necessity for the employment contract. Data minimisation, security, transparency and accountability are key considerations, and employers must be transparent with staff about how their data is collected and used, particularly where automated decision-making is involved.
Third-party AI tools: Alternatively, businesses can adopt off-the-shelf third-party AI tools, such as AI-powered customer service platforms or analytics tools. In these cases, it is essential to establish whether the AI vendor and the business are each acting as a data controller or a data processor, and to agree appropriate GDPR-compliant terms and conditions. Before signing, conduct AI, data protection and information security due diligence on the supplier to understand its security posture and its compliance with GDPR and other data protection laws. Key questions to ask: How would the AI company use the data collected from your business? Where would it store the data? Would it use personal data, special categories of data or confidential business information to train its models?
AI tools developed for public use (such as ChatGPT): Some businesses deploy AI tools in public-facing services - generative AI tools like ChatGPT or Microsoft's Copilot are good examples. Companies processing personal data in this way need to clearly define the lawful basis for collecting it, and they may rely on consent. But that raises its own questions: how do you honour the right to withdraw consent, or the right to have data deleted or erased? As discussed above, transparency, purpose limitation and valid consent are other key aspects of GDPR compliance.
What about automated decisions and profiling?
One of the most talked-about areas of GDPR in the AI world is automated decision-making and profiling. Under Article 22, individuals have the right not to be subject to decisions made solely by automated means that significantly affect them - like being denied a loan or job based on an AI assessment.
If your AI tool is making important decisions without human involvement, you’ll need to:
Explain how the decision is made (in plain language).
Outline the consequences of the decision.
Offer a way for someone to challenge it or request human intervention.
Many AI tools used in recruitment, finance, or insurance already fall under this category. Ignoring this requirement isn't just a bad look - it could land you in hot water with regulators.
The real risks of getting it wrong
To be blunt: if your AI setup isn’t GDPR-compliant, you’re exposed. And the consequences are significant.
Fines – The ICO and other European regulators have the power to issue fines of up to €20 million or 4% of your global annual turnover (whichever is higher).
Reputational damage – If customers lose trust in how you handle their data, they might take their business elsewhere.
Operational disruption – An investigation or enforcement notice can halt your AI projects or force you to retrain models from scratch.
For a real-world example, the Dutch tax authority was fined for using an algorithm that led to biased profiling of citizens - resulting in thousands of people being wrongly accused of fraud. This wasn't just a GDPR issue. It became a national scandal.
Need a review of your AI setup? Start with a cyber security assessment.
How to stay compliant: practical steps
The good news is you don’t need to ditch AI to comply with GDPR, but you do need to bake privacy into your AI tools from day one. Here’s how:
Conduct a Data Protection Impact Assessment (DPIA)
If your AI system is processing personal data or making decisions that affect people, you must assess the risks and document how you’ll mitigate them.
Limit the data you collect
Only use the data you actually need. Avoid the temptation to 'collect everything, just in case'.
Be transparent
Tell people what you're doing with their data, especially if it's being used by AI. This includes updating your privacy policy and providing meaningful explanations.
Audit your models regularly (if your business owns the AI system)
AI systems change over time. Keep checking for bias, accuracy, and fairness, especially in high-stakes decisions.
Involve humans
Don't leave it all to machines. Make sure there's a human in the loop for significant decisions.
Plan for data subject requests
Can your system locate and delete someone’s data on request? If not, it needs work.
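That last question can be sketched in code. This is a simplified illustration (the store names and IDs are invented; a real system must also cover backups, logs and any data shared with processors) of locating and erasing a data subject's records across systems:

```python
# Hedged sketch of handling a GDPR erasure request: locate every copy
# of a subject's data, delete it, and record where it was found.
# Store names and IDs are hypothetical.

data_stores = {
    "crm":       {"u123": {"name": "A. Person"}, "u456": {"name": "B. Other"}},
    "analytics": {"u123": {"events": 42}},
}

def erase_subject(stores: dict, subject_id: str) -> list:
    """Delete the subject from every store; return the stores touched."""
    found_in = []
    for store_name, records in stores.items():
        if subject_id in records:
            del records[subject_id]
            found_in.append(store_name)
    return found_in

print(erase_subject(data_stores, "u123"))  # → ['crm', 'analytics']
```

If you cannot enumerate your data stores like this - or if personal data has been baked into a trained model you cannot selectively unlearn - honouring an erasure request becomes much harder, which is exactly why this needs designing in up front.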
How a virtual DPO can help manage AI risks
If all this feels overwhelming, you’re not alone. That’s where a virtual Data Protection Officer (DPO) can make a big difference.
A virtual DPO works with your business to:
Guide you through DPIAs and compliance steps.
Help set up clear policies around AI use.
Monitor ongoing risks and keep you up to date with evolving regulations.
Communicate with regulators if issues arise.
It’s a cost-effective way to bring in expert oversight, especially if your business isn’t quite ready for a full-time DPO.
What’s next? AI regulation is just getting started
GDPR won't be the last word on AI and data. The EU AI Act is set to introduce specific obligations for high-risk AI systems, including stricter transparency and accountability rules. In the UK, the ICO has already published guidance on AI and data protection, and Parliament is finalising the Data (Use and Access) Bill, with more to come.
The bottom line? The regulatory landscape is only going to get more complex. Future-proofing your business means staying proactive now.
Final thoughts
AI tools are here to stay, but so are the risks. If you’re using AI in your business, don’t wait for a knock on the door from the regulator. Start by understanding your data protection obligations, building privacy into your tools, and getting the right advice.
Whether you’re just exploring AI or already deep into implementation, staying GDPR-compliant doesn’t have to be a burden - it can be your competitive edge.

Get help with your GDPR obligations
Our experienced team of DPOs can take the stress of compliance out of your hands.
Learn more