The most unsettling moment at an AI Mapping workshop (a European services company, 50 staff) looks like this: I show the map of AI tools the company "officially" uses. Zero. Zero tools. Company policy: we don't use AI, customer data is sensitive.
Then I ask the people in the room one thing. “Who here has used ChatGPT this past week for something work-related?”
Most of the hands in the room go up.
One of the accountants adds spontaneously: “Yesterday I pasted a supplier invoice — with the VAT number, company name, amounts — into ChatGPT to turn it into an Excel. It would have taken me 20 minutes by hand; it did it in 30 seconds. Massively helpful.”
In that one sentence, here's what leaked: the supplier's VAT ID, the financial details of the transaction, the company's cost structure. All of it to a model owned by a US company, with no data processing agreement, no GDPR lawful basis, and no board awareness.
And that is exactly what’s happening in your company right now. You just don’t know about it.
This post is about how to understand it and what to do about it. Not theoretically. Concretely.
Why it happens — the asymmetry you don’t see
There’s one number that explains everything. 68% of employees use AI tools without IT’s knowledge or approval. That’s from Gartner research across 500 companies. Not European specifically — across the board. The European market doesn’t deviate.
Another study, by CybSafe and the National Cybersecurity Alliance across 7,000 employees, found that 38% admit outright to sharing confidential company data with AI tools without authorisation. Not by accident. Knowingly. They paste.
Another source — analysis of real prompt logs — shows that 27% of queries to public AI contain confidential or proprietary data. 11% contain regulated data — PII, medical records, financial documents. One question in four.
Where does the gap between company policy (“we don’t use AI”) and reality (everyone uses AI) come from?
The answer is simpler than most decision-makers want to admit. The employee and the company see completely different cost/benefit ledgers.
The employee sees: “ChatGPT will save me 30% of the time on this task.” Concrete. Measurable in minutes. Visible the same afternoon.
The company sees: “ChatGPT could expose our data, breach GDPR, break our customer contracts.” Abstract. Measured in scenarios that may never happen. Visible, maybe, in two years at an audit.
When abstract risk goes up against a concrete 30% time saving, the employee picks the saving. Every day. And they’re right from the perspective of their own productivity.
This isn’t an insubordination problem. It’s a problem with the decision architecture you’ve created — by not giving them an alternative.
What actually leaks from your company
To grasp the scale — here are the five categories of data most likely flowing into public AI models from your employees’ accounts right now. Roughly in order of frequency.
Customer personal data. Contracts with both parties’ names, invoices with VAT numbers, recruitment documents with candidate CVs, correspondence with customers pasted in to be “summarised” or “paraphrased”. GDPR lawful basis for this processing — none. Data processing agreements with OpenAI/Anthropic/Google — don’t exist in the consumer tier.
Trade secrets. Tender offers pasted in for “analysis”, internal price lists dropped in for “comparison”, supplier contracts run through a chatbot, negotiation drafts sanity-checked with “how does this sound?”. In one high-profile 2023 case, Samsung engineers pasted fragments of production code and notes from technical meetings into ChatGPT in three separate incidents over a single month. Samsung banned ChatGPT company-wide. The data was already out.
Employee data. HR receives a stack of CVs, drops them into ChatGPT with a prompt like "pick the three best". Personal data, addresses, work history, previous salaries: all of it lands at a third-party vendor without the candidates' consent. A candidate who didn't get the job could file a complaint with their national Data Protection Authority, and would have a strong case.
Source code and secrets. Developers paste code fragments in for “debugging” or “optimisation”. Often with embedded API keys, database passwords, internal system schemas. Every fragment is a potential map for an attacker who gets hold of the logs.
Internal policies and procedures. An employee asks ChatGPT: “how do I push back on an angry customer professionally?” — and pastes the entire complaints handling procedure in for context. Accidentally, you export operational know-how you spent ten years building.
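None of these five categories is hard to catch mechanically before the paste happens. As a minimal sketch (simplified, illustrative patterns, nowhere near a full DLP ruleset), a clipboard hook or browser extension could run a check like this:

```python
import re

# Simplified, illustrative patterns -- a real DLP ruleset is far more thorough.
PATTERNS = {
    "EU VAT number":      re.compile(r"\b[A-Z]{2}\s?\d{8,12}\b"),
    "IBAN":               re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email address":      re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "hardcoded secret":   re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*\S+"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_before_paste(text: str) -> list[str]:
    """Return the names of every sensitive pattern found in `text`."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    snippet = 'db_password = "hunter2"  # invoice from DE811234567'
    hits = scan_before_paste(snippet)
    if hits:
        # -> Blocked paste, found: EU VAT number, hardcoded secret
        print("Blocked paste, found:", ", ".join(hits))
```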
Each of those five categories, five times a day, per employee. Multiply by 50 employees. Multiply by 250 working days. That's 5 × 5 × 50 × 250 = 312,500 potential exposures a year. A number you didn't want to know.
Why blocking doesn’t work
The instinctive board reaction to this problem: “let’s block ChatGPT on the corporate network.”
It won’t work. For three reasons, any one of which is sufficient.
Reason one: the employee has a personal phone. You block ChatGPT on the work laptop — the employee opens ChatGPT on their smartphone, takes a photo of the document, uploads it, gets the answer, retypes it. Took them three minutes longer than before. You’ve lost any visibility.
Reason two: 86% of IT leaders admit they don’t see shadow AI in their own monitoring. Blocking popular domains is an illusion of control. New tools appear every week. Local models (Ollama, LM Studio) can be run on a laptop with no internet access. APIs can be called indirectly. Every block has a half-life of two weeks.
Reason three — the most important — you’re also blocking the upside. The employees who use AI most cleverly are often your best people. The ones who look for ways to be more productive. By blocking AI, you don’t reduce their usage. You push it underground, while simultaneously signalling that the company can’t keep up with the industry.
Your three best people leave within a year to a competitor that lets them use AI. Work out the cost of losing each of them. It’ll be larger than the cost of the hypothetical leak you were afraid of.
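To make reason two concrete: a local model never touches the network, so there is no domain left to block. A sketch, assuming Ollama is installed with a model already pulled, using Ollama's documented local API:

```python
import json
import urllib.request

# Ollama's local HTTP API listens on localhost:11434 by default.
# Nothing here crosses the corporate proxy or any blocked domain.
def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(ask_local_model("Summarise this meeting note: ..."))
```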
Conclusion: don’t block. Channel.
Three levels of AI policy — the model that works
Simple concept. Instead of "everything's allowed" or "nothing's allowed": three levels, the same traffic-light logic you already know from IT security.
🔴 Red list — forbidden, no exceptions
Tools and scenarios that clearly breach GDPR, customer contracts, or company rules. Including:
- US consumer tools (ChatGPT Free, Claude Free, Gemini Free) for any sensitive data
- Pasting full customer contracts into any tool without authorisation
- Pasting customer personal data (names, addresses, VAT numbers, national IDs) into unapproved tools
- HR data — salaries, CVs, employee reviews — into external tools
- Production code with secrets — into tools without a zero-retention guarantee
The list has to be concrete, finite, intelligible. Not “don’t use AI for sensitive data” (what does that mean?), but “don’t paste customer contracts and invoices into ChatGPT, Claude, Gemini in their free versions”.
Worth remembering: even enterprise versions don’t always behave the way the vendor promises — the April Microsoft Copilot flex-routing incident showed that default settings can change without explicit customer consent. That applies to yellow-list tools too.
🟡 Yellow list — approved with limits
Tools the company has officially sanctioned and that employees can use for non-classified work. Including:
- Paid enterprise tiers (ChatGPT Enterprise, Claude Teams, Copilot for Microsoft 365) — only for internal, non-customer data
- Locally installed tools (Cursor with a local model) for programming tasks
- Sector-specific tools approved by legal — e.g. Harvey AI for law firms, CoCounsel, and so on
The yellow list requires three things: a vendor contract that includes a no-training clause covering your data, written guidelines on what is and isn't allowed, and periodic usage audits. In return, it lets employees do their jobs.
🟢 Green list — the preferred internal tool
A company assistant with access to your documents, under your control, with no leakage out. This is the tool you actively promote, because it solves 80% of the scenarios your people were previously using ChatGPT for.
- Answers questions about company procedures, price lists, template contracts — without sending data outside
- Summarises documents, emails, meeting notes — within your infrastructure
- Helps draft proposals, RFQ responses, follow-ups — with access to customer context from your CRM
- Runs cross-tool actions (Gmail, Drive, Calendar, HubSpot) — under your audit log
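What "under your audit log" means in practice: every assistant action leaves a structured record you can query later. A minimal sketch of such a record (the field names are illustrative, not a standard):

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class AssistantAuditRecord:
    """One assistant action as the audit log might store it. Fields are illustrative."""
    user: str                  # who asked
    action: str                # e.g. "summarise_document", "draft_email"
    tools_touched: list[str]   # e.g. ["Drive", "HubSpot"]
    data_classification: str   # red / yellow / green / white, per your table
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AssistantAuditRecord(
    user="a.nowak",
    action="draft_rfq_response",
    tools_touched=["Drive", "HubSpot"],
    data_classification="yellow",
)
print(json.dumps(asdict(record), indent=2))  # ship this to your log store
```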
This is channelling instead of banning. The employee who used ChatGPT for all of these gets a better tool (because it knows company context) at the same speed (because it answers immediately) under full company control.
The architecture of this tool — one assistant plugged into your systems via a standard protocol — is exactly what we cover in MCP — USB-C for AI. One assistant instead of five chatbots. The decision about which sovereignty tier to run it in (regional EU cloud or on-premise) is the subject of our pillar post 4 levels of AI sovereignty.
Why this is the only model that works
If your employees have three options:
- Red: no AI, takes me twice as long
- Yellow: approved AI, fast, within limits
- Green: company assistant, fast and smarter, because it knows the context
…most will pick green, because it’s the most convenient. Not because they love compliance. Because they like their job and want to finish at 5 p.m., not 7 p.m.
If the green option doesn't exist, employees go off the books: personal accounts, personal phones, hiding what they do. That's shadow AI in its pure form.
Channelling is always more effective than banning. That applies to water, traffic, shadow IT, and shadow AI. People follow the path of least resistance. Your job isn’t to force them onto the harder path — it’s to make the safe path the easiest one.
10-point plan for rolling out an AI policy
Here’s a concrete path. It’ll take 6 to 12 weeks depending on company size.
1. Current-state audit. An anonymous survey across employees: which AI tools do you use, for what, how often. Plus a DNS/proxy log review for traffic to AI domains (openai.com, anthropic.com, gemini.google.com). Goal: see reality, not the board's assumptions. (A minimal log-scan sketch follows the list.)
2. Data classification. Which data categories are restricted (red), confidential (yellow), internal (green), public (white)? A one-page A4 table is usually enough, not a 50-page classification programme. (An illustrative machine-readable version also follows the list.)
3. Identify typical AI use cases. What tasks would employees actually want AI to help with? Writing emails, summarising documents, data analysis, searching internal knowledge, generating marketing content. That's your roadmap for the "green list".
4. Pick an internal tool (green list). A company assistant with access to your documents. Could be Ragen, Microsoft 365 Copilot (with limits), ChatGPT Enterprise with custom GPTs. Criterion: it has to be more convenient than ChatGPT Free for the majority of typical tasks. We describe an iterative approach, first use cases in four weeks, in the RAG in 4 weeks playbook.
5. Define the red list. Concrete, short, intelligible. Maximum ten points. "Don't paste customer contracts into X, Y, Z", not "follow information security rules".
6. Define the yellow list. Specific tools, specific conditions. "ChatGPT Enterprise: OK for internal work, provided you use a company account. Add an #ai tag to the ticket so it's trackable."
7. Internal communication (critical, do not skip). An all-hands from the owner/CEO (not from IT!) who explains: why we're introducing the policy (not "because we have to", but "because we care about your work"), what specifically is allowed, what is not, and why we're giving you the green list as a better solution.
8. Practical training (one hour). Not about AI philosophy: about how to use the company assistant, how to recognise the data you can't paste into ChatGPT, what to do when you're unsure. Live examples. Workshop, not lecture.
9. Champions in each team. One person per department who knows the policy, uses the green tool daily, helps the rest. Not a formal role; more like natural early adopters whom you anoint.
10. Eight-week review. What works, what doesn't, what to add to the green list, what turned out to be impractical. An AI policy isn't a project; it's a process. It has to evolve with usage.
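Two of the steps above lend themselves to a quick sketch. First, the log review from step 1. Assuming you can export proxy or DNS logs as whitespace-separated timestamp/user/domain lines (your format will differ, so adapt the parsing), a few lines of Python show who is talking to which AI endpoint:

```python
from collections import Counter

# Domains worth flagging -- extend the set as new tools appear.
AI_DOMAINS = {"openai.com", "chatgpt.com", "anthropic.com",
              "claude.ai", "gemini.google.com"}

def scan_proxy_log(path: str) -> Counter:
    """Count hits per (user, domain). Assumes 'timestamp user domain' lines."""
    hits: Counter = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue
            user, domain = parts[1], parts[2]
            # Match the domain itself and any of its subdomains.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[(user, domain)] += 1
    return hits

for (user, domain), count in scan_proxy_log("proxy.log").most_common(10):
    print(f"{user:<20} {domain:<25} {count}")
```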
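Second, the classification table from step 2, in a machine-readable form that tooling or assistant guardrails could consume. The categories match the post; the examples and tool lists are illustrative:

```python
# The one-page classification as a machine-readable structure. Entries are illustrative.
DATA_CLASSIFICATION = {
    "red":    {"label": "restricted",
               "examples": ["customer contracts", "CVs and salaries",
                            "code containing secrets"],
               "allowed_ai_tools": []},                  # no AI tools at all
    "yellow": {"label": "confidential",
               "examples": ["internal price lists", "meeting notes"],
               "allowed_ai_tools": ["ChatGPT Enterprise",
                                    "Copilot for Microsoft 365"]},
    "green":  {"label": "internal",
               "examples": ["procedures", "template contracts"],
               "allowed_ai_tools": ["company assistant"]},
    "white":  {"label": "public",
               "examples": ["published marketing content"],
               "allowed_ai_tools": ["any"]},
}

def tools_allowed_for(category: str) -> list[str]:
    """Look up which AI tools the policy permits for a data category."""
    return DATA_CLASSIFICATION[category]["allowed_ai_tools"]

print(tools_allowed_for("yellow"))  # -> ['ChatGPT Enterprise', 'Copilot for Microsoft 365']
```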
If you don’t know where to start
The most common reaction at this point in the post: "I get it, but I don't have time to run this myself."
That’s exactly the moment it’s worth having a conversation. 30 minutes, no product pitch, no marketing fluff. In that call we:
- Map the shadow AI in your company — real tools and real risks, not theoretical ones
- Prioritise together what to do first — audit, policy, internal tool
- Show you how this played out across European organisations we’ve worked with
- If it turns out Ragen AI fits as the “green list” for you, we’ll show you specifically how. If it doesn’t fit, we’ll point you at alternatives.
This is an audit, not a demo. You leave the conversation with a concrete list of next-quarter actions, even if you buy nothing from us.
If you want a quicker fit check before you book the call — we have a readiness checklist. Eight questions that’ll tell you whether Ragen is for your company.
The cost of doing nothing
I won't scare you with GDPR fines; every post on this topic already does, and you have them in your head daily. I'll tell you something that gets said less often.
Every day your employees use ChatGPT without a policy is a day on which:
- Your customer data sits in logs you don’t control
- Your trade secrets can surface in responses to other users of the model
- Your position in the next corporate contract audit gets weaker
- Your best people are learning to work with AI — but not with your AI, because there isn’t one
- Your competitors are building a skills advantage while you're busy banning the tools
Shadow AI isn’t a problem to solve “one day”. It’s a problem happening now, every day, in your company.
The good news: the solution isn’t hard. One internal tool, one policy on one page, one all-hands meeting. That’s six weeks of work in a mid-sized company.
The bad news: it won’t solve itself. The longer you wait, the deeper shadow AI grows into daily team habits, and the harder it is to pull it out into an official channel.
Book a 30-min fit call — we’ll map the shadow AI in your company and prepare the first step of an AI policy. No commitment.
If you’d rather cost out the internal assistant first — our calculator gives a monthly budget for your scale.
Sources:
- Gartner, AI adoption study across 500 companies (2025)
- CybSafe & National Cybersecurity Alliance, “Oh Behave!” report (2024) — 7,000 respondents
- IBM, “Cost of a Data Breach Report 2025” — shadow AI impact analysis
- Microsoft Threat Intelligence, internal data (2026)
- Samsung semiconductor case (March 2023) — publicly reported leaks
- Our own experience from AI Mapping workshops across European organisations
