AI Without the Hype. What Decision-Makers Really Need to Know.
Many companies are now actively using AI to analyse documents, automate processes, and evaluate communication. The tools are there, the potential is real, and the pressure to become more efficient is greater than ever.
At the same time, uncertainty is growing. Can customer data go into an AI tool? What actually happens to the text an employee types into ChatGPT? Is any of this GDPR-compliant?
The answers I hear most often in practice are either too relaxed (“We accepted the terms of service, so we’re covered”) or too absolute (“AI and GDPR simply don’t work together”). Neither is true.
The moment an employee enters a name, an email address, or a customer number into an AI tool, data processing takes place within the meaning of the GDPR (Art. 4(2)). This is not a matter of interpretation. It is the text of the law.
For this processing to be lawful, the company needs a legal basis under Art. 6 GDPR. And that legal basis must not only exist, it must be documented.
Most companies don’t have this in place. Not because they’re ignoring it, but because nobody has communicated it clearly. An employee using ChatGPT for work doesn’t think about whether they’re conducting a third-country data transfer. They think: “This tool helps me get my job done faster.”
The result: AI gets used, data flows, and the legal basis is missing. Fines under Art. 83 GDPR can reach up to 20 million euros or 4 percent of global annual turnover, whichever is higher.
The real problem runs deeper. It’s not just about missing contracts. It’s about a structural conflict between European data protection law and the location of the most capable AI models.
OpenAI, Anthropic, Google: the companies behind the most powerful language models are based in the United States. The moment a European company uses these AI services and transfers personal data in the process, the third-country transfer rules of Art. 44 et seq. GDPR apply. Under the GDPR, the US is a third country to which personal data may not simply flow.
A transfer is only permissible under specific conditions. And even when those conditions are met, one problem remains that cannot be resolved contractually.
Since July 2023, the EU Commission’s adequacy decision for the EU-US Data Privacy Framework (DPF) has been in place. On 3 September 2025, the General Court of the European Union (Case T-553/23) dismissed the legal challenge against this decision, confirming its validity.
That sounds like a green light. It isn't one, at least not entirely.
The DPF is based on a US presidential Executive Order, not an act of parliament. The appeal route to the European Court of Justice remains open. Max Schrems, whose legal challenges already brought down two predecessor frameworks (Safe Harbor in 2015, Privacy Shield in 2020), has strongly criticised the ruling. Law firm Heuking recommends companies keep Standard Contractual Clauses ready in parallel, to be able to switch without operational disruption if the ECJ rules differently.
Anyone relying exclusively on the DPF is building on a foundation that has already collapsed twice.
Even when all contracts are in place, a Data Processing Agreement has been signed, and Standard Contractual Clauses are included, one structural problem remains.
The US CLOUD Act (Clarifying Lawful Overseas Use of Data Act, 2018) requires US companies to hand over data upon a formally valid request from US authorities. Even when that data is stored on European servers.
This is not theoretical. On 10 June 2025, Anton Carniaux, Legal Director of Microsoft France, was asked under oath before a French Senate inquiry committee whether he could guarantee that data belonging to French citizens would never be transferred to US authorities without the explicit approval of the French government. His answer was unambiguous, as reported by The Register: “Non, je ne peux pas le garantir.” No, I cannot guarantee that.
Art. 48 GDPR provides that orders from third-country authorities may only be recognised if they are based on an international agreement, such as a mutual legal assistance treaty. CLOUD Act requests do not rest on any such EU-US agreement. This is an unresolved legal conflict. And no DPA, no contractual clause, no data protection commitment from a vendor can resolve it.
Consider a typical automated process in a company: incoming customer emails are analysed by an AI system, relevant tasks are extracted and transferred to an internal system. The workflow runs automatically, around the clock.
What happens in the process, without anyone having explicitly decided it: every email containing names, customer numbers, or other personal information is transmitted to the API endpoint of a US provider. Automatically. With every run. Without notification, without a log, without a legal basis.
This is not an edge case. This is the default when AI automation is introduced without data protection guidance.
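What this looks like in code: below is a minimal sketch of such a pipeline in Python. The provider endpoint, field names, and helper function are hypothetical; the point is the single line where personal data leaves the company.

```python
import requests  # assumption: a plain HTTP integration, as in many automation tools

def create_internal_task(result: dict) -> None:
    """Stub for the internal system that receives the extracted task."""
    print("New task:", result)

def process_inbox(emails: list[dict]) -> None:
    for email in emails:
        # The full message body, names and customer numbers included,
        # is sent verbatim to the provider. This one call is the
        # third-country transfer: on every run, without a log.
        response = requests.post(
            "https://api.us-provider.example/v1/analyze",  # hypothetical US endpoint
            json={"text": email["body"]},
            timeout=30,
        )
        create_internal_task(response.json())
```

Nothing in this code is malicious or even unusual. That is exactly the point: the transfer is invisible unless someone looks for it.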
AI and GDPR in companies are not fundamentally incompatible. But they require a deliberate decision about which data goes to which system, before the first process goes live.
There are four ways to handle this cleanly.
Anonymisation before the AI. Personal data is replaced by neutral placeholders before being transferred to an AI system. The model only sees the anonymised text; afterwards, the results are matched back to the real records using a mapping that never leaves the company. The key advantage: where no personal data is transferred, the GDPR does not apply to the transferred portion, regardless of whether the server is in Frankfurt or Virginia. The EDPB confirms this logic: genuine anonymisation removes the transmitted data from the scope of the GDPR.
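As a minimal sketch of how the placeholder approach can work, assuming simple regex patterns (real projects typically add proper named-entity recognition on top, and the customer-number format here is made up):

```python
import re

# Illustrative patterns only; names and free-form identifiers
# need real named-entity recognition in addition.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CUSTOMER_NO": re.compile(r"\bKD-\d{6}\b"),  # hypothetical customer-number format
}

def anonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace personal data with placeholders; the mapping stays in-house."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(dict.fromkeys(pattern.findall(text))):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

def deanonymise(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the real values into the AI output, after the transfer."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

safe_text, mapping = anonymise("Customer KD-123456 (meier@example.com) asks for a callback.")
# safe_text: "Customer [CUSTOMER_NO_0] ([EMAIL_0]) asks for a callback."
# Only safe_text goes to the AI provider; mapping never leaves the company.
```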
Use European AI providers. Capable AI models are also available from European companies. Mistral AI is based in France and operates its infrastructure within the EU: no third-country transfer, no CLOUD Act exposure. For many business tasks, such as text analysis, categorisation, and summarisation, these models are well suited.
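A sketch of what such a call can look like, assuming Mistral's hosted chat-completions endpoint (verify model names and API details against the current documentation before building on them):

```python
import os
import requests

# Assumption: Mistral's La Plateforme exposes an OpenAI-style
# chat-completions endpoint at api.mistral.ai.
response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-small-latest",
        "messages": [
            {"role": "user", "content": "Categorise this support request: ..."}
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```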
Run models locally. Open-source language models can be operated entirely on a company’s own infrastructure. No byte leaves the company network, no external provider is involved, no third-country transfer takes place. This requires dedicated hardware and IT capacity, but offers the highest level of data security. For companies handling particularly sensitive data, it is often the only GDPR-compliant path.
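As a sketch, assuming a local runtime such as Ollama, which serves open-source models over an HTTP API on the machine it runs on (adapt to whatever runtime your IT operates):

```python
import requests

# Assumption: an Ollama instance running on company-owned hardware.
# The request never leaves the local network; no external provider
# ever sees the text.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any locally pulled open-source model
        "prompt": "Extract the requested task from this email: ...",
        "stream": False,
    },
    timeout=120,
)
print(response.json()["response"])
```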
Contractual safeguards with US providers. When US AI models are necessary for quality reasons and the data cannot be fully anonymised, the full set of legal safeguards is required: a Data Processing Agreement under Art. 28 GDPR, a verified DPF certification for the provider, Standard Contractual Clauses as a fallback, the training opt-out activated, data minimisation consistently applied, and all of it documented in the records of processing activities. The residual risk from the CLOUD Act remains structurally in place.
The key question for decision-makers is not: “Are we allowed to use AI?” The question is: “Do we know where our data is going?” Anyone who doesn’t know is operating blind. And operating blind with personal data is exactly what GDPR is designed to prevent.
The first step is not legal advice. It is a factual review of your company’s own AI processes.
In many companies, this is exactly where unnecessary time losses and structural problems arise, and they often go unnoticed for a long time, until projects start to stall.

This article provides a practice-oriented overview and does not constitute legal advice. For specific GDPR assessments in a business context, a data protection officer or specialist lawyer should be consulted.