arXiv Open Access 2025

Prudential Reliability of Large Language Models in Reinsurance: Governance, Assurance, and Capital Efficiency

Stella C. Dong

Abstract

This paper develops a prudential framework for assessing the reliability of large language models (LLMs) in reinsurance. A five-pillar architecture (governance, data lineage, assurance, resilience, and regulatory alignment) translates supervisory expectations from Solvency II, SR 11-7, and guidance from EIOPA (2025), NAIC (2023), and IAIS (2024) into measurable lifecycle controls. The framework is implemented through the Reinsurance AI Reliability and Assurance Benchmark (RAIRAB), which evaluates whether governance-embedded LLMs meet prudential standards for grounding, transparency, and accountability. Across six task families, retrieval-grounded configurations achieved higher grounding accuracy (0.90), reduced hallucination and interpretive drift by roughly 40%, and nearly doubled transparency. These mechanisms lower informational frictions in risk transfer and capital allocation, showing that existing prudential doctrines already accommodate reliable AI when governance is explicit, data are traceable, and assurance is verifiable.


Citation

Dong, S.C. (2025). Prudential Reliability of Large Language Models in Reinsurance: Governance, Assurance, and Capital Efficiency. https://arxiv.org/abs/2511.08082
