
Large Language Models in Insurance: Strategies, Structures and Quick Wins

LLMs promise significant efficiency gains. But how can their use be implemented in a way that is regulatory-compliant, architecturally sound, and economically effective?
Written on 08/08/25

This article is based on a whitepaper contributed by Artjom Eckhardt and Mischa Pupashenko, Cominia Actuarial Services. You can download the document here (German).

The insurance industry, like many others, is undergoing a profound transformation driven by rapid technological advances and the rise of modern AI models such as Large Language Models (LLMs). In public discourse, these modern AI technologies are often referred to under the umbrella term "Generative AI" (GenAI). The rapid development of such models opens up entirely new possibilities for insurance companies to make their processes more efficient, customer-focused, and agile. Given the intense competition, sustained regulatory pressure, and the impact of demographic change – particularly the shortage of skilled labor – the strategic deployment of modern AI technologies is becoming increasingly relevant for insurers.

LLMs, as currently used in applications, differ fundamentally in their functionality from traditional AI systems, namely established machine learning (ML) models. While classical ML models are primarily used for mathematical and statistical tasks such as forecasting, risk analysis, or pattern recognition – and are specifically trained on structured datasets – LLMs are large, pre-trained language models. What makes LLMs special is not only their technical performance but also their versatility in practical use. For example, they can automatically process large volumes of unstructured data, generate content such as reports or emails, understand semantic queries, or be integrated into existing database systems. This opens up numerous opportunities for insurance companies to automate processes, transfer knowledge more efficiently, and develop new innovative services.

Thanks to their ability to understand natural language semantically, extract relevant information from unstructured texts, and structure it, LLMs can be used in a targeted and versatile way throughout complex business processes.

Architecture and Strategy: The Path to Successful AI Use

A suitable technical foundation is essential to derive sustainable, company-wide benefits from LLMs. From the outset, it should be clear that siloed solutions must be avoided. Instead, a strategic framework and scalable architecture are needed to fully unlock the potential of LLMs.

It is advisable to build a company-wide foundation that serves as a central platform for AI use. This architecture allows various use cases to be efficiently integrated and operated. New applications can be added via a “plug-and-play” principle, enabling quick scaling and adaptation to changing requirements. Key components of such a platform include a central knowledge database, clearly defined interfaces (APIs) to existing systems, and a well-thought-out authorization and access management system. On this technical foundation, various business requirements can be addressed without needing to develop a separate solution for each use case.
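As a rough illustration of this "plug-and-play" principle, the sketch below shows a shared platform core that owns the central knowledge base and the authorization check, while individual use cases register against it. All names (`AIPlatform`, `register`, `run`) are invented for this sketch; a real platform would add APIs to existing systems, logging, and persistent storage.

```python
# Minimal sketch of a shared AI platform core (hypothetical design):
# use cases plug into one foundation that holds the knowledge base
# and enforces access rights centrally.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AIPlatform:
    knowledge_base: dict[str, str]                  # central document store
    permissions: dict[str, set[str]] = field(default_factory=dict)
    _use_cases: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        """Add a new use case without touching existing ones."""
        self._use_cases[name] = handler

    def run(self, user: str, use_case: str, query: str) -> str:
        """Central authorization check before any use case is executed."""
        if use_case not in self.permissions.get(user, set()):
            raise PermissionError(f"{user} may not access {use_case}")
        return self._use_cases[use_case](query)

platform = AIPlatform(knowledge_base={"claims": "Claims are settled within 14 days."})
platform.register("claims_faq", lambda q: platform.knowledge_base["claims"])
platform.permissions["alice"] = {"claims_faq"}
print(platform.run("alice", "claims_faq", "How fast are claims settled?"))
```

The point of the design is that a second use case is one `register` call, not a second siloed system.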

However, the success of LLMs and AI applications depends not only on the technical architecture but also on their strategic integration into the company. This involves defining which areas and processes to prioritize, how new use cases are identified and integrated, and what goals AI adoption should pursue. A process-oriented approach is particularly important. The architecture and foundation should be designed to be applicable across the entire value chain – from claims handling and customer management to actuarial and risk management. This creates a modular and flexible infrastructure that enables efficiency gains not only in individual projects but across the company. Strategic integration ensures that AI solutions are not isolated, but are continuously developed in line with corporate goals and technological advancements. In this way, LLMs can make full use of their potential and make a sustainable contribution to the digitization and innovation capacity of the insurance industry.

The effective and sustainable use of LLMs hinges on scalable architecture, a strategic framework, and a structured approach for testing and prioritizing use cases – especially by reusing solutions along the value chain and continuously integrating new applications.

Governance and Regulation: The Foundation for Secure AI Use

The successful and sustainable use of LLMs in the insurance industry requires regulatory requirements and governance structures to be considered from the outset. In recent years, legal and supervisory requirements for the use of AI have increased significantly. These concern both technical implementation and data handling, decision documentation, and the protection of customer interests.

Key regulatory frameworks include:

  • Data Protection and GDPR: Responsible handling of personal data is a core element of any AI strategy. Beyond complying with the General Data Protection Regulation (GDPR), insurers must embed measures such as data minimization, access control, and the implementation of data subject rights into their processes.
  • EU AI Act: This introduces a comprehensive regulatory framework for the development, deployment, and use of AI systems within the EU. It emphasizes transparency and documentation obligations, risk management, and traceability of decisions. Insurance companies must consider these requirements when selecting and implementing LLMs.
  • DORA (Digital Operational Resilience Act): DORA sets the key requirements for IT resilience and cybersecurity, including strengthening IT security measures, developing contingency plans, and regular review of the technologies in use. This ensures that LLMs and AI solutions are deployed securely and robustly.
  • Cybersecurity: Given the increasing threat of cyberattacks, it is essential to apply proven and internationally recognized security standards when deploying LLMs and AI systems. Following established frameworks such as those from NIST (National Institute of Standards and Technology) or OWASP (Open Web Application Security Project) helps systematically identify vulnerabilities and implement effective safeguards.

A gap analysis helps a company systematically review which regulatory requirements existing systems and processes already meet, where gaps remain, and what action is needed to reach minimum standards. In addition, responsibilities should be clearly assigned and regular audits scheduled to verify ongoing compliance.
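The core of such a gap analysis can be captured in a few lines: map each requirement to its current status and surface everything that falls short of "met". All requirement names and statuses below are made up for illustration.

```python
# Illustrative gap-analysis helper (requirements and statuses are invented):
# map each regulatory requirement to its implementation status, then list
# the gaps that need an action plan to reach minimum standards.
requirements = {
    "GDPR: data minimisation in prompts": "met",
    "AI Act: decision traceability logs": "gap",
    "DORA: contingency plan for LLM outage": "partial",
    "Security: sandboxed model access": "met",
}

gaps = [req for req, status in requirements.items() if status != "met"]
for req in gaps:
    print(f"Action needed: {req} (status: {requirements[req]})")
```

In practice each gap entry would additionally carry an owner and a deadline, which is where the clearly defined responsibilities come in.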

Governance is not a static condition – it is an ongoing process. Technological and regulatory developments must be constantly monitored, and internal structures regularly adjusted. Only in this way can insurers maintain the trust of customers and regulators while responsibly harnessing the potential of LLMs and AI solutions.

The responsible use of LLMs requires consistent integration of regulatory requirements, clearly defined governance structures, and IT security standards from the outset. This ensures trust, compliance, and security while unlocking the potential of AI in a sustainable and risk-conscious manner.

Use Case: AI Agent for Supporting Supervisory Group Functions such as Risk Management or the Actuarial Function

Once strategy, architecture, and governance are clearly defined, LLMs can already create tangible value in day-to-day business with relatively little effort. Particularly in highly repetitive processes, targeted interventions in individual process steps can yield efficiency gains and cost reductions quickly. These kinds of quick wins are an ideal starting point for the gradual and scalable expansion of AI applications.

A practical example is the implementation of a specialized AI agent to support daily operations, especially in Governance, Risk & Compliance (GRC) and internal control systems.

Initial situation:

In an internationally structured organization with central governance and decentralized operational responsibility, group functions regularly face the challenge of answering complex methodological questions. This includes handling standard inquiries (e.g., via group mailboxes), supporting and training local units through governance documents, validating local implementations, and continuously monitoring them. Results must also be regularly collected, processed, aggregated, and reported to management in an appropriate format.

AI Use Case:

An LLM-based chatbot or agent – fed with company-specific knowledge – can significantly improve efficiency. The system is built on a secure, established architecture (e.g., a (graph) retrieval-augmented generation model), with information security ensured through sandboxing and restrictive access rights. Additional applications connected to the system can unlock further optimization potential, such as automated report generation.
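The retrieval-augmented generation (RAG) pattern behind such an agent can be sketched in miniature: relevant documents are retrieved first, and the model's answer is grounded in exactly those sources. The sketch below deliberately replaces embedding search and the actual LLM call with keyword overlap and plain prompt assembly, so the flow runs without external services; all document names are illustrative.

```python
# Minimal RAG sketch. A production system would use embedding-based
# retrieval and a real LLM behind access controls; here retrieval is
# keyword overlap and "generation" only assembles the grounded prompt.
def retrieve(query: str, documents: dict[str, str], k: int = 2) -> list[str]:
    """Rank document IDs by word overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc_id: len(terms & set(documents[doc_id].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, documents: dict[str, str]) -> str:
    """Ground the LLM prompt in retrieved sources, citing each by ID."""
    context = "\n".join(f"[{d}] {documents[d]}" for d in retrieve(query, documents))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

docs = {
    "guideline_12": "Reserving methodology must be validated annually.",
    "faq_3": "Local units report model changes to the group actuarial function.",
}
print(build_prompt("How often is the reserving methodology validated?", docs))
```

Because every answer is built only from retrieved, citable sources, the source references mentioned above (documents, page numbers, excerpts) fall out of the architecture almost for free.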

Typical application scenarios include:

  • Context-based query responses: The AI agent answers queries on methodological topics from departments directly using a central knowledge base. Sources used are referenced (e.g., as downloads, with page references, highlights, or excerpts). All responses are collected and automatically forwarded to the group function for sample-based review – enabling continuous improvement of the agent.
  • Efficient document management: The agent searches extensive document pools and delivers specific information without manual browsing.
  • Technical support for group tools: During assessments or workflow checks, users can upload screenshot-based queries. The agent identifies the problem area and provides step-by-step guidance – much more efficiently than traditional click guides.
  • Support during system release testing: During updates or new workflow implementations, the agent can automatically generate test cases to verify if methodologies are correctly integrated into the system.
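The sample-based review loop from the first scenario can also be sketched: every answer is returned together with its source and page reference, and simultaneously queued so the group function can spot-check a random sample. All names, the lookup heuristic, and the knowledge-base structure are hypothetical.

```python
# Hypothetical sketch of answer-with-reference plus sample-based review:
# each answer carries its document ID and page, and is logged for
# spot-checking by the group function.
import random

review_queue: list[dict] = []

def answer_with_reference(query: str, knowledge: dict[str, tuple[str, int]]) -> dict:
    """Look up an answer and attach its document ID and page reference."""
    for doc_id, (text, page) in knowledge.items():
        if any(word in text.lower() for word in query.lower().split()):
            record = {"query": query, "answer": text, "source": doc_id, "page": page}
            review_queue.append(record)   # forwarded to the group function
            return record
    record = {"query": query, "answer": None, "source": None, "page": None}
    review_queue.append(record)
    return record

def draw_review_sample(k: int = 1) -> list[dict]:
    """The group function pulls a random sample of answers for review."""
    return random.sample(review_queue, min(k, len(review_queue)))

kb = {"governance_doc": ("Model changes require second-line approval.", 12)}
reply = answer_with_reference("Who approves model changes?", kb)
print(f"{reply['answer']} (see {reply['source']}, p. {reply['page']})")
```

The review queue is what makes the agent improvable over time: findings from the sampled answers feed back into the knowledge base or the retrieval logic.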

Value and benefits:

  • Resource optimization: Significant relief of group mailboxes and 2nd-line support through automation of standardized, repetitive inquiries.
  • Faster knowledge transfer: Departments receive immediate and accurate responses and can access required documents more quickly.
  • High flexibility and scalability: The system is modular, can be expanded with new features, and applied to other departments.
  • Information security: Integration into a secure technical infrastructure and adherence to recognized standards (e.g., sandboxes, controlled access) ensures protection of sensitive information.
  • Low implementation effort: With the right architecture, governance, and process expertise, a basic version of such an AI agent can be implemented with relatively little effort (approx. 5 - 10 person-days).

This use case demonstrates that LLM-based agents can go beyond delivering isolated efficiency improvements – they can become flexible, continuously evolving tools that drive digital transformation in insurance companies in a realistic, practical, and secure way.

An AI agent that not only makes knowledge accessible but also ensures quality, reduces workload, and fosters learning becomes a scalable building block for increasing efficiency in governance-related tasks.

Conclusion: Sustainable AI Deployment Through Strategy and Substance

The introduction of LLMs in the insurance industry offers a wide range of opportunities to increase efficiency, quality, and innovation – provided their implementation is strategically planned, technically sound, and executed with regulatory compliance in mind. A centralized technical foundation and clear governance are just as essential as the selection of suitable, practical use cases.

In particular, simple and quickly implementable use cases, such as the AI agent presented here, demonstrate how LLMs can generate measurable added value without requiring high risks or investment. With the right foundation, the insurance industry can fully harness the potential of modern AI solutions and prepare itself for the future.

LLMs have the greatest impact where strategy, structure, and accountability come together – turning isolated interventions into sustainable progress.

Would you like to use LLMs securely, efficiently, and with a future-oriented approach? Get in touch with us.

Artjom Eckhardt
Senior Consultant Risk Management Division
artjom.eckhardt@cominia.de
+49 1520 8461373

 

 

Dr Mischa Pupashenko
Principal & Head of Risk Management Division
mischa.pupashenko@cominia.de
+49 152 08437644

 

 

References

  • Dubovci, Dea & Pupashenko, Dr. Mischa (2025): DORA in insurance: Lean implementation, clear priorities. Contribution to the actupool news blog, published on 8 July 2025.
  • General Data Protection Regulation (GDPR): Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.
  • EU AI Act: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending certain Union legal acts.
  • Digital Operational Resilience Act (DORA): Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector, Official Journal of the EU, L 333/1.
  • National Institute of Standards and Technology (NIST): Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1, January 2023.
  • OWASP Foundation (2023): OWASP Top 10 for Large Language Model Applications.
  • Microsoft Research (2024): From Local to Global: A Graph RAG Approach to Query-Focused Summarization. arXiv:2404.16130, 24 April 2024.
  • Yao et al. (2023): ReAct: Synergizing Reasoning and Acting in Language Models. International Conference on Learning Representations (ICLR) 2023.