arXiv Open Access 2026

Adaptive Trust Metrics for Multi-LLM Systems: Enhancing Reliability in Regulated Industries

Tejaswini Bollikonda

Abstract

Large Language Models (LLMs) are increasingly deployed in sensitive domains such as healthcare, finance, and law, yet their integration raises pressing concerns around trust, accountability, and reliability. This paper explores adaptive trust metrics for multi-LLM ecosystems, proposing a framework for quantifying and improving model reliability under regulatory constraints. By analyzing system behaviors, evaluating uncertainty across multiple LLMs, and implementing dynamic monitoring pipelines, the study demonstrates practical pathways to operational trustworthiness. Case studies from financial compliance and healthcare diagnostics illustrate the applicability of adaptive trust metrics in real-world settings. The findings position adaptive trust measurement as a foundational enabler for safe and scalable AI adoption in regulated industries.
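The abstract's idea of quantifying trust across multiple LLMs and updating it dynamically can be sketched in miniature. The snippet below is a hypothetical illustration only, not the paper's actual framework: it scores ensemble consensus and keeps a per-model trust value updated as an exponential moving average of each model's agreement with the majority answer. All class and function names, and the smoothing parameter `alpha`, are assumptions introduced for this sketch.

```python
# Hypothetical sketch of an adaptive trust metric for a multi-LLM ensemble.
# Names, formulas, and parameters are illustrative assumptions, not the
# paper's published method.
from collections import Counter


def consensus_score(answers):
    """Fraction of models that agree with the majority answer (0..1)."""
    if not answers:
        return 0.0
    counts = Counter(answers)
    return counts.most_common(1)[0][1] / len(answers)


class AdaptiveTrust:
    """Tracks a per-model trust value, updated after each query as an
    exponential moving average of agreement with the ensemble majority."""

    def __init__(self, models, alpha=0.2, prior=0.5):
        self.alpha = alpha                      # update rate (assumption)
        self.trust = {m: prior for m in models}  # neutral prior trust

    def update(self, answers):
        """answers: dict mapping model name -> its answer for one query.
        Returns the majority answer and nudges each model's trust toward
        1.0 (agreed with majority) or 0.0 (disagreed)."""
        majority = Counter(answers.values()).most_common(1)[0][0]
        for model, ans in answers.items():
            agreed = 1.0 if ans == majority else 0.0
            self.trust[model] = (
                (1 - self.alpha) * self.trust[model] + self.alpha * agreed
            )
        return majority
```

In this toy setup, a model that repeatedly dissents from the ensemble sees its trust decay toward zero, while a consistently agreeing model trends toward one; a monitoring pipeline of the kind the abstract describes could alert or down-weight models whose trust falls below a regulatory threshold.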


Citation Format

Bollikonda, T. (2026). Adaptive Trust Metrics for Multi-LLM Systems: Enhancing Reliability in Regulated Industries. https://arxiv.org/abs/2601.08858

Journal Information
Publication Year
2026
Language
en
Source Database
arXiv
Access
Open Access ✓