Bots are not secret, but they need clear rules
Reading time: 4 min
Digital Tuesday: AI agents offer insurers significant efficiency gains, whether in document analysis, claims processing or portfolio evaluation. They do not move freely or uncontrolled through IT systems, but exactly within the roles and rights assigned to them. What must be ensured is that AI agents are integrated along established compliance and security policies.
In the Insurance Monitor of 7 May 2025, under the title "The secret identity of bots", the thesis was put forward that AI agents move largely uncontrolled through the IT systems of insurers, in contrast to human employees, whose rights are narrowly limited and monitored. The threat this suggests is striking, but technically oversimplified, and it risks distorting the factual debate about the sensible use of AI in the insurance industry. It is high time to put things into perspective.
First of all, AI agents are not "black boxes" with a life of their own. They act within a clearly defined framework ("agentic workflows"), receive targeted tasks ("prompts") and operate only with the system rights and data access explicitly assigned to them.
Modern agent architectures, as the first companies are currently implementing them, are based on established identity management principles: each agent instance is uniquely identifiable, documented and controllable as a so-called "Non-Human Identity" (NHI).
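To make this concrete, here is a minimal sketch of how such an NHI might be represented in an identity inventory. All names and fields are illustrative assumptions, not a specific vendor's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass(frozen=True)
class NonHumanIdentity:
    """Illustrative record for an AI agent registered as an NHI."""
    agent_name: str
    owner: str                  # accountable human or team
    scopes: frozenset[str]      # explicitly granted rights, nothing more
    nhi_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    active: bool = True         # lifecycle flag: the identity can be deactivated at any time

# Each agent instance gets its own auditable identity with a named owner.
claims_agent = NonHumanIdentity(
    agent_name="claims-document-analyzer",
    owner="team-claims-it",
    scopes=frozenset({"claims:read", "documents:read"}),
)
```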
Governance principles for AI agents
The concept of the NHI is not new. Technical identities for machines, services or bots have existed for a long time, and they are standard in insurance IT environments. What applies to any rule-based software in use today applies equally to AI agents. Governance principles include, but are not limited to, the following (a simplified sketch of how they translate into code follows the list):
Central recording and regular auditing of all NHIs,
A role-based access control (RBAC) model that consistently implements the principle of "least privilege",
Lifecycle management for controlled agent creation, use, and deactivation, and
Comprehensive logging and monitoring of all activities, integrated into security information and event management (SIEM) systems, with which companies can identify and eliminate potential vulnerabilities.
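How these principles interact can be sketched in a few lines: every access attempt is checked against the agent's assigned role and written to an audit log that a SIEM system can consume. This is a simplified illustration under assumed role and scope names, not a production access-control implementation:

```python
import json
import logging
from datetime import datetime, timezone

# Audit logger; in practice this stream would be shipped to the SIEM system.
audit = logging.getLogger("nhi.audit")
logging.basicConfig(level=logging.INFO)

# Role definitions implementing "least privilege": each role grants
# only the scopes the agent needs for its defined task.
ROLE_SCOPES = {
    "claims-analyst-agent": {"claims:read", "documents:read"},
    "portfolio-agent": {"portfolio:read"},
}

def check_access(nhi_id: str, role: str, requested_scope: str) -> bool:
    """Allow an action only if the role explicitly grants the scope,
    and log every attempt, granted or denied, for later auditing."""
    granted = requested_scope in ROLE_SCOPES.get(role, set())
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "nhi_id": nhi_id,
        "role": role,
        "scope": requested_scope,
        "granted": granted,
    }))
    return granted

# A request outside the defined profile is denied and leaves a trace:
check_access("nhi-1234", "claims-analyst-agent", "payroll:read")  # -> False
```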
An AI agent accessing data that does not match its defined profile would be just as conspicuous as a human user trying to access payroll data without permission. The technical means of control are therefore available, and professional deployments of AI agents make use of them.
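On the monitoring side, such an out-of-profile attempt is directly visible in the audit trail; a SIEM detection rule boils down, in essence, to a filter like the following sketch (event shape and threshold are assumptions carried over from the example above):

```python
from collections import Counter

def flag_out_of_profile(events: list[dict], threshold: int = 1) -> set[str]:
    """Return the NHIs whose number of denied access attempts reaches
    the threshold; `events` are audit-log entries like those above."""
    denied = Counter(e["nhi_id"] for e in events if not e["granted"])
    return {nhi for nhi, count in denied.items() if count >= threshold}

# Example: one denied request is already enough to raise a flag.
events = [{"nhi_id": "nhi-1234", "granted": False}]
assert flag_out_of_profile(events) == {"nhi-1234"}
```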
AI agents create efficiency
The use of AI agents offers insurers significant efficiency gains, whether in the analysis of large volumes of documents, claims processing or portfolio evaluation. It is important to note that these systems do not replace human decisions, but provide structured input on the basis of which people can make well-founded decisions. Agents are assistance systems that autonomously process defined tasks. But they are not autonomous decision-makers.
An AI agent is de facto an (agentic) workflow that is deliberately created, trained and embedded along existing company policies, just like any other form of software. Although AI agents are far more agile and flexible than classic rule-based systems, they too receive defined resources, specific data access and technical constraints. In other words, they do not move freely or uncontrolled through IT systems, but exactly within the roles and rights assigned to them.
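Viewed this way, an agentic workflow is largely configuration: a defined task, defined resources, defined data access. A hedged sketch of what such a declaration might look like; structure and keys are purely illustrative:

```python
# Illustrative workflow declaration: the agent receives its task ("prompt"),
# its resources and its data access as explicit, reviewable settings.
CLAIMS_WORKFLOW = {
    "agent": "claims-document-analyzer",
    "nhi_role": "claims-analyst-agent",         # ties the workflow to the RBAC role above
    "task_prompt": "Extract claim type, date and amount from the attached documents.",
    "allowed_tools": ["document_ocr", "claims_db_read"],    # nothing else is callable
    "data_scopes": ["claims:read", "documents:read"],
    "limits": {"max_runtime_s": 120, "max_documents": 50},  # technical constraints
    "output": "structured_summary_for_human_review",        # a person decides
}
```

Like any other deployment artifact, such a declaration can be versioned, reviewed and audited before the agent ever runs.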
It is therefore misleading to portray AI agents as "secret employees" who "nest in IT systems without guidelines". What must be ensured, however, is that AI agents are integrated along established compliance and security guidelines.
Ignoring or even avoiding this technology is not a solution. The only certainty gained would be a loss of competitiveness in the medium term.