Imagine an office where every employee has keys to every room. To the CEO’s office. To the server room. To the contracts archive. Sounds absurd — yet that’s exactly what some AI deployments look like.
The principle of least privilege has been a cornerstone of IT security for 30 years. In the world of AI, most platforms forget about it. At Ragen, we built the whole system around it.
What exactly each assistant knows
The HR assistant knows the employee handbook, leave policy, onboarding procedures, recruitment documents, job descriptions, evaluation processes. Ask it about a public holiday — it answers. About remote work rules — it answers. About how many days of leave you get — it answers. About who to invite to a project meeting — it answers, because it knows the org chart. About margin on product X? “I don’t have that information.” Because it genuinely doesn’t.
The sales assistant knows price lists, marketing collateral, call playbooks, case studies, product documentation, customer interaction history, objection-handling scripts. It helps draft proposals — because it knows the pricing. It suggests arguments — because it knows the marketing materials. It searches the product documentation — because it has access. Personnel files, salaries, HR policy — it doesn’t even try to reach them.
The customer-facing website assistant sees only what’s public. FAQ, product pages, returns policy, privacy policy. Nothing beyond that. Even if a customer cleverly asks “what’s the internal margin on this product?” — the assistant answers truthfully: “I don’t have that information.” Because it doesn’t.
The compliance assistant knows legal acts, internal policies, audit history, supervisory decisions. It helps legal teams navigate changing regulations.
One panel, one logic, zero risk
Everything is configured from a single panel. You assign documents to assistants, users to roles, roles to assistants. When someone asks a question, the system checks what the asker is allowed to see and what the assistant has in its knowledge base, and answers only from the intersection of those two sets.
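The intersection check described above can be sketched in a few lines. The role names, document IDs, and function below are illustrative assumptions, not Ragen’s actual schema or API:

```python
# Sketch of intersection-based access control: an answer may only draw on
# documents that BOTH the asking user's role and the assistant can see.
# All names here are invented for illustration.

ROLE_DOCS = {
    "hr": {"employee_handbook", "leave_policy", "org_chart"},
    "sales": {"price_list", "case_studies", "product_docs"},
    "public": {"faq", "returns_policy"},
}

ASSISTANT_DOCS = {
    "hr_assistant": {"employee_handbook", "leave_policy", "org_chart"},
    "sales_assistant": {"price_list", "case_studies", "product_docs"},
    "website_assistant": {"faq", "returns_policy"},
}

def retrievable_docs(user_role: str, assistant: str) -> set[str]:
    """Documents the assistant may answer from for this user: the set
    intersection of the user's role scope and the assistant's base."""
    return ROLE_DOCS.get(user_role, set()) & ASSISTANT_DOCS.get(assistant, set())
```

With this shape, a public visitor talking to the HR assistant gets an empty set — there is simply nothing at the intersection to answer from.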
For the IT director, this is the end of sleepless nights. You don’t have to check whether some employee is extracting information from the AI that they shouldn’t have access to. The system simply won’t allow it.
For compliance, it’s a story that writes itself. The auditor asks: “How do you ensure personal data doesn’t leak through AI?” Answer: “An assistant that could touch it doesn’t have access at the infrastructure level.” End of discussion.
What we don’t do that competitors do
Most AI platforms use “output filters” — the AI has access to everything, then censors the answer. That’s the illusion of security. The filter can be bypassed with clever prompt engineering, detection only happens after the fact, and once a leak occurs — the data is out.
With us there are no filters. There’s isolation. An assistant that doesn’t have the document won’t return it, because it doesn’t have it. Nothing to censor. Nothing to work around. Simpler, safer, auditable.
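The difference is easy to see in code. In the sketch below (invented names, not Ragen’s implementation), the assistant’s index is built only from its assigned documents, so confidential content never enters retrieval in the first place:

```python
# Isolation vs. output filtering, sketched with invented document names.

ALL_DOCS = {
    "faq": "Returns are accepted within 30 days.",
    "margin_report": "Internal margin on product X: 42%.",
}

def build_isolated_index(assigned: set[str]) -> dict[str, str]:
    """Build the assistant's search index from its assigned documents only.
    Unassigned content never enters the index, so there is nothing to
    censor after the fact and nothing for a clever prompt to bypass."""
    return {doc_id: ALL_DOCS[doc_id] for doc_id in assigned}

website_index = build_isolated_index({"faq"})
# "margin_report" is simply absent from the index: the assistant cannot
# return what it never had.
```

An output filter, by contrast, would search everything and then try to redact the answer — which means the confidential text was already retrieved and is one jailbreak away from leaking.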
Who this is a must-have for
For every company where GDPR is a real obligation, not just a paper one. For every company with an information classification policy (confidential, internal, public). For every company where a data leak means not just a fine but a loss of customers and reputation. Which is, in practice, every company with more than 50 people.
