There are industries where the question “can we send this data to OpenAI?” has only one acceptable answer: no.
A law firm works on confidential client contracts. A strategy consultant juggling three competing clients can’t mix their data in one public model. A healthcare company has personal data protected by GDPR. A bank has national financial-regulator requirements. A state-owned enterprise has its own rules. A manufacturing company worries that its process documentation will end up in a competitor’s training set.
For all of them, we built the on-premise version of Ragen.
What it means in practice
The entire platform — including the AI models — is installed in your infrastructure. In your data centre, in your private cloud (AWS, Azure, Google Cloud, or your own), under your security policy. Data never leaves your network. No transfers to the US, no “trusted vendor”, no dependency on someone else’s privacy policy.
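As a rough sketch, a deployment like this can be as small as two containers on a single host inside your network. Everything below is a hypothetical layout: the Ragen image name, environment variable, and ports are placeholders, while `ollama/ollama` is the public Ollama image.

```yaml
# Hypothetical single-host layout. Service names and the platform image
# are placeholders, not Ragen's actual artifacts.
services:
  ragen:
    image: registry.example.com/ragen/platform:latest   # placeholder image
    environment:
      LLM_BASE_URL: http://ollama:11434   # platform talks only to the local model
    ports:
      - "443:8443"                        # exposed inside your network only
  ollama:
    image: ollama/ollama                  # local model runtime, no external API calls
    volumes:
      - models:/root/.ollama              # model weights stay on your disk
volumes:
  models:
```

The point of the sketch: every arrow in the system points at something you own, so there is nothing to audit outside your perimeter.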
You get exactly the same platform our other customers use — with chatbots, knowledge bases, semantic search, department assistants, export, integrations. The only difference is where it runs. And who has access: you and nobody else.
Three industries where this is a must-have
Law firms. Attorney-client privilege isn’t a client preference — it’s a statutory obligation. Sending a contract, litigation strategy, or client correspondence to an external AI model is, in practice, a breach of privilege. On-premise solves this 100%. The assistant analyses contracts, searches precedents in the internal base, summarises case files — all inside the firm’s network. Not a byte leaks.
Consultancies. Here’s a paradox that rarely gets discussed. A strategy consultancy serving three clients from the same industry has a problem the law firm doesn’t: each client is a potential competitor to the others. Client A’s data can’t “blend” with client B’s data in a shared AI model. On-premise plus an isolated-knowledge-base architecture lets you run one assistant per client, with access restricted so that nobody else, not even your other consultants, can see it. That opens up AI on projects that used to be an “AI-free zone” for ethical and contractual reasons.
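The isolation model described above can be sketched in a few lines. The class and method names here are hypothetical, not Ragen’s actual API; the point is that the access check sits in front of retrieval, so cross-client leakage is structurally impossible rather than a matter of policy.

```python
# Sketch of per-client knowledge-base isolation; names are hypothetical.
from dataclasses import dataclass, field


class AccessDenied(Exception):
    pass


@dataclass
class KnowledgeBase:
    client: str
    allowed_users: set[str]
    documents: list[str] = field(default_factory=list)

    def query(self, user: str, question: str) -> list[str]:
        # Hard access check before any retrieval: a consultant on
        # client A's team can never read client B's documents.
        if user not in self.allowed_users:
            raise AccessDenied(f"{user} has no access to {self.client}")
        return [d for d in self.documents if question.lower() in d.lower()]


# One isolated base (and one assistant) per client.
bases = {
    "client_a": KnowledgeBase("client_a", {"alice"}, ["client_a pricing deck"]),
    "client_b": KnowledgeBase("client_b", {"bob"}, ["client_b pricing deck"]),
}
```

Because the check happens per knowledge base rather than per prompt, even a misconfigured assistant can only ever see the one base it was created against.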
Healthcare, pharma, and research. Patient data, clinical trial results, IP of products in development — these aren’t materials you can drop into a public model, no matter what its terms of service promise. On-premise is the only option that passes a compliance audit.
Costs and return on investment
On-premise isn’t “ChatGPT, only more expensive”. It’s the full platform, plus AI models running locally (you can use Ollama or other open-source models with no per-token fees), plus implementation support.
For most customers the deployment investment pays back in the first year on AI model savings alone (a free local model instead of paid GPT/Claude per conversation). The second payback is projects that were previously impossible with AI for compliance reasons — and now are.
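A back-of-the-envelope version of that payback maths. All three inputs are purely illustrative assumptions, not Ragen’s pricing; the structure of the calculation is what matters.

```python
# Illustrative break-even calculation; every number here is an assumption.
deployment_cost = 60_000          # one-time on-premise deployment (EUR)
api_cost_per_conversation = 0.08  # paid GPT/Claude-class API, per conversation (EUR)
conversations_per_month = 80_000  # company-wide assistant usage

monthly_api_savings = conversations_per_month * api_cost_per_conversation
months_to_break_even = deployment_cost / monthly_api_savings
print(round(months_to_break_even, 1))  # prints 9.4 (months) at these assumptions
```

At higher usage the break-even point moves earlier, since a local model’s marginal cost per conversation is effectively zero while a per-token API scales linearly with volume.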
Who this isn’t for
If you have a 10-person company and don’t process regulated data, on-premise is overkill. For you there’s Ragen Cloud: encrypted, segmented, GDPR-compliant, with servers in the EU. On-premise is the tool for companies where “we can’t deploy AI” used to be the board’s answer. Not anymore.
