Why do many households remain exposed to large exogenous sources of non-systematic income risk? We use a series of randomized field experiments in rural India to test the importance of price and non-price factors in the adoption of an innovative rainfall insurance product. Demand is significantly price sensitive, but widespread take-up would not be achieved even if the product offered a payout ratio comparable to U.S. insurance contracts. We present evidence suggesting that lack of trust, liquidity constraints and limited salience are significant non-price frictions that constrain demand. We suggest contract design improvements to mitigate these frictions.
Classical Monte Carlo methods for pricing catastrophe insurance tail risk converge at rate O(1/√N), requiring large simulation budgets to resolve upper-tail percentiles of the loss distribution. This sample-sparsity problem can lead to AI models trained on impoverished tail data, producing poorly calibrated risk estimates where insolvency risk is greatest. Quantum Amplitude Estimation (QAE), following Montanaro, achieves convergence approaching O(1/N) in oracle queries, a quadratic speedup that, at scale, would enable high-resolution tail estimation within practical budgets. We validate this advantage empirically using a Qiskit Aer simulator with genuine Grover amplification. A complete pipeline encodes fitted lognormal catastrophe distributions into quantum oracles via amplitude encoding, producing small readout probabilities that enable safe Grover amplification with up to k=16 iterations. Seven experiments on synthetic and real (NOAA Storm Events, 58,028 records) data yield three main findings: an oracle-model advantage; that strong classical baselines win when analytical access is available; and that discretisation, not estimation, is the current bottleneck.
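To make the classical baseline concrete, the sketch below (illustrative lognormal parameters, not the paper's fitted distribution) estimates an upper-tail exceedance probability by plain Monte Carlo; the standard error shrinks only as O(1/√N), which is the sample-sparsity problem QAE targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative lognormal severity parameters (assumed, not the paper's fit).
mu, sigma = 10.0, 1.5
threshold = np.exp(mu + 3 * sigma)      # a deep upper-tail attachment point
true_p = 1 - 0.99865                    # P(Z > 3) for a standard normal

for n in (10**3, 10**5, 10**7):
    losses = rng.lognormal(mean=mu, sigma=sigma, size=n)
    p_hat = (losses > threshold).mean()
    se = np.sqrt(p_hat * (1 - p_hat) / n)   # shrinks like O(1/sqrt(N))
    print(f"N={n:>9,}  p_hat={p_hat:.2e}  se≈{se:.1e}  true≈{true_p:.2e}")
```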
We develop a reinforcement learning (RL) framework for insurance loss reserving that formulates reserve setting as a finite-horizon sequential decision problem under claim development uncertainty, macroeconomic stress, and solvency governance. The reserving process is modeled as a Markov Decision Process (MDP) in which reserve adjustments influence future reserve adequacy, capital efficiency, and solvency outcomes. A Proximal Policy Optimization (PPO) agent is trained using a risk-sensitive reward that penalizes reserve shortfall, capital inefficiency, and breaches of a volatility-adjusted solvency floor, with tail risk explicitly controlled through Conditional Value-at-Risk (CVaR). To reflect regulatory stress-testing practice, the agent is trained under a regime-aware curriculum and evaluated using both regime-stratified simulations and fixed-shock stress scenarios. Empirical results for Workers' Compensation and Other Liability illustrate how the proposed RL-CVaR policy improves tail-risk control and reduces solvency violations relative to classical actuarial reserving methods, while maintaining comparable capital efficiency. We further discuss calibration and governance considerations required to align model parameters with firm-specific risk appetite and supervisory expectations under Solvency II and Own Risk and Solvency Assessment (ORSA) frameworks.
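A minimal sketch of the kind of risk-sensitive reward described above, with illustrative penalty weights (the paper's exact functional form and calibration are not reproduced here):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: mean loss beyond the alpha-quantile."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def reserve_reward(shortfall, excess_capital, solvency_ratio, solvency_floor,
                   recent_shortfalls,
                   w_short=1.0, w_cap=0.1, w_breach=5.0, w_cvar=0.5):
    """Risk-sensitive reward penalising reserve shortfall, idle capital,
    solvency-floor breaches, and the CVaR of recent shortfalls.
    Weights are illustrative, not the paper's calibration."""
    breach = max(0.0, solvency_floor - solvency_ratio)
    return -(w_short * max(0.0, shortfall)
             + w_cap * max(0.0, excess_capital)
             + w_breach * breach
             + w_cvar * cvar(recent_shortfalls))
```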
Mallika Mainali, Harsha Sureshbabu, Anik Sen
et al.
As algorithmic decision-makers are increasingly applied to high-stakes domains, AI alignment research has evolved from a focus on universal value alignment to context-specific approaches that account for decision-maker attributes. Prior work on Decision-Maker Alignment (DMA) has explored two primary strategies: (1) classical AI methods integrating case-based reasoning, Bayesian reasoning, and naturalistic decision-making, and (2) large language model (LLM)-based methods leveraging prompt engineering. While both approaches have shown promise in limited domains such as medical triage, their generalizability to novel contexts remains underexplored. In this work, we implement a prior classical AI model and develop an LLM-based algorithmic decision-maker evaluated using a large reasoning model (GPT-5) and a non-reasoning model (GPT-4) with weighted self-consistency under a zero-shot prompting framework, as proposed in recent literature. We evaluate both approaches on a health insurance decision-making dataset annotated for three target decision-makers with varying levels of risk tolerance (0.0, 0.5, 1.0). In the experiments reported herein, classical AI and LLM-based models achieved comparable alignment with attribute-based targets, with classical AI exhibiting slightly better alignment for a moderate risk profile. The dataset and open-source implementation are publicly available at: https://github.com/TeX-Base/ClassicalAIvsLLMsforDMAlignment and https://github.com/Parallax-Advanced-Research/ITM/tree/feature_insurance.
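The weighted self-consistency step can be sketched as a weighted vote over repeated zero-shot samples; the weighting source (e.g., model confidences) is an assumption here, not necessarily the cited method's exact choice:

```python
from collections import defaultdict

def weighted_self_consistency(samples):
    """Aggregate repeated zero-shot LLM answers by weighted vote.
    `samples` is a list of (answer, weight) pairs; the weighting
    scheme (e.g., model confidence) is assumed for illustration."""
    scores = defaultdict(float)
    for answer, weight in samples:
        scores[answer] += weight
    return max(scores, key=scores.get)

# Example: three sampled decisions for one insurance case.
print(weighted_self_consistency([("approve", 0.9), ("deny", 0.4), ("approve", 0.7)]))
```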
Marco Rondina, Antonio Vetrò, Riccardo Coppola
et al.
Context. As software systems become more integrated into society's infrastructure, the responsibility of software professionals to ensure compliance with various non-functional requirements increases. These requirements include security, safety, privacy, and, increasingly, non-discrimination. Motivation. Fairness in pricing algorithms grants equitable access to basic services without discrimination on the basis of protected attributes. Method. We replicate a previous empirical study that used black-box testing to audit pricing algorithms used by Italian car insurance companies, accessible through a popular online system. Compared to the previous study, we enlarged the number of tests and the number of demographic variables under analysis. Results. Our work confirms and extends previous findings, highlighting the problematic persistence of discrimination across time: demographic variables significantly impact pricing to this day, with birthplace remaining the main discriminatory factor against individuals not born in Italian cities. We also found that driver profiles can determine the number of quotes available to the user, denying equal opportunities to all. Conclusion. The study underscores the importance of testing for non-discrimination in software systems that affect people's everyday lives. Performing algorithmic audits over time makes it possible to evaluate the evolution of such algorithms. It also demonstrates the role that empirical software engineering can play in making software systems more accountable.
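The black-box audit logic reduces to paired testing: generate profiles identical except for one protected attribute and compare the returned quotes. A hedged sketch, with `get_quote` standing in as a hypothetical wrapper around the online system (not the study's actual harness):

```python
# Hypothetical wrapper around the black-box pricing system; in the actual
# audit this would drive the online quote-comparison website.
def get_quote(profile: dict) -> float:
    raise NotImplementedError

base = {"age": 35, "gender": "F", "city": "Torino", "profession": "teacher"}

# Vary one protected attribute at a time, holding everything else fixed,
# so that any premium gap is attributable to that attribute alone.
test_profiles = [dict(base, birthplace=bp)
                 for bp in ("Milano", "Bucharest", "Lagos", "Tirana")]
# quotes = {p["birthplace"]: get_quote(p) for p in test_profiles}
```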
Shahrzad Khayatbashi, Viktor Sjölind, Anders Granåker
et al.
Recent advancements in Artificial Intelligence (AI), particularly Large Language Models (LLMs), have enhanced organizations' ability to reengineer business processes by automating knowledge-intensive tasks. This automation drives digital transformation, often through gradual transitions that improve process efficiency and effectiveness. To fully assess the impact of such automation, a data-driven analysis approach is needed, one that examines how traditional and AI-enhanced process variants coexist during this transition. Object-Centric Process Mining (OCPM) has emerged as a valuable method that enables such analysis, yet real-world case studies are still needed to demonstrate its applicability. This paper presents a case study from the insurance sector, where an LLM was deployed in production to automate the identification of claim parts, a task previously performed manually and identified as a bottleneck for scalability. To evaluate this transformation, we apply OCPM to assess the impact of AI-driven automation on process scalability. Our findings indicate that while LLMs significantly enhance operational capacity, they also introduce new process dynamics that require further refinement. This study also demonstrates the practical application of OCPM in a real-world setting, highlighting its advantages and limitations.
Refugees fleeing the Democratic Republic of Congo are vulnerable to health and social inequities. Women from the DRC are at unique risk within the social and cultural milieu of the U.S., but there is insufficient evidence to inform tailored programs and policies for this population. This article describes results from a longitudinal, qualitative Photovoice study with women refugees from the DRC between 2016 and 2023. Participatory analysis with participant co-researchers and inductive manual analysis revealed four themes illustrating experiences with employment and the workplace: job (in)security, discrimination, injuries, and workplace potential. Evidence from this study demonstrates the need for more intentional, tailored public health and social service interventions centering on the workplace for Congolese refugee women resettled in the U.S. U.S. federal policy pushes refugees toward early self-sufficiency. Our findings suggest this is problematic, as it negatively impacts language acquisition, which in turn creates a ripple effect of negative outcomes, including insufficient access to jobs offering a living wage, limited access to jobs with health insurance, and exposure to jobs with high risk of injury or social settings that heighten discrimination. These experiences can be further exacerbated for women refugees from Africa standing at the intersection of race, gender, and refugee status. Study results also show opportunities for the workplace to be an outlet for positive health impacts and advocacy for social justice for this population and potentially other refugee groups that are marginalized in the U.S.
Background Despite achieving near-universal social health insurance coverage, some Chinese residents continue to grapple with poverty vulnerability. Supplementary private health insurance (SPHI) serves as a crucial complement to the social health insurance system. This study aimed to investigate its efficacy in mitigating poverty vulnerability. Methods Cross-sectional data on 18,426 representative samples were obtained from the Sixth National Health Service Survey (Shandong) conducted in 2018. A three-stage feasible generalised least squares estimation procedure was employed to estimate poverty vulnerability. Additionally, we explored the impact of SPHI on poverty vulnerability using the propensity score matching (PSM) method to balance treatment and control groups along observable dimensions. To address potential endogeneity issues, we used an instrumental variable (IV) estimation approach to determine how SPHI affects poverty vulnerability. Results We found that SPHI reduced the probability of poverty vulnerability for individuals enrolled in social health insurance. Furthermore, SPHI had a more pronounced protective effect on respondents with chronic diseases, those aged over 60 years, and those living in urban areas or western Shandong. Conclusions The results imply that SPHI can be effective in poverty reduction even among populations with basic health insurance. Therefore, we encourage governments in other low- and middle-income countries to consider implementing SPHI for vulnerable people to reduce medical impoverishment.
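As a rough illustration of the PSM step, the sketch below performs 1:1 nearest-neighbour matching on estimated propensity scores (a simplified version; the study's full specification, balance diagnostics, and IV stage are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(X, treated, outcome):
    """Average treatment effect on the treated via 1:1 nearest-neighbour
    propensity score matching (simplified sketch, no caliper)."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    matched_controls = c_idx[match.ravel()]
    return outcome[t_idx].mean() - outcome[matched_controls].mean()
```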
Hasan Ayaydın, Tolga Ergün, Abdulkadir Barut
et al.
The increasing global emphasis on climate change and environmental sustainability has brought increased focus on the convergence of financial technologies (FinTech) and green energy initiatives. In light of this, the study's objective is to investigate how FinTech moderates the relationship between the ecological footprint (EF) and the green energy transition (GTE) in the BRICS-T nations between 1990 and 2021. We applied Fully Modified Ordinary Least Squares (FMOLS) and Dynamic Ordinary Least Squares (DOLS) as the main estimators for the longitudinal analysis, and used Driscoll-Kraay standard errors to verify the robustness of the results under cross-sectional dependence and heteroskedasticity. The results reveal that FinTech indirectly hinders EF by facilitating GTE. The outcomes also show that FinTech significantly constrains EF, while GDP and industrialization worsen EF. The results also confirm the important role of GTE and foreign direct investment (FDI) in reducing CO2 emissions in BRICS-T countries. Lastly, the paper offers policymakers recommendations for lowering EF in light of these outcomes: establishing policies and strategies that encourage FinTech platforms to invest in green energy projects, promoting energy-efficient and low-carbon foreign direct investment, and encouraging GTE.
In this paper, we investigate an optimal investment problem associated with proportional portfolio insurance (PPI) strategies in the presence of jumps in the underlying dynamics. PPI strategies enable investors to mitigate downside risk while retaining the potential for upside gains. This is achieved by maintaining an exposure to risky assets proportional to the difference between the portfolio value and the present value of the guaranteed amount. While PPI strategies are known to be free of downside risk in diffusion modeling frameworks with continuous trading (see, e.g., Cont and Tankov (2009)), real market applications exhibit a significant, non-negligible risk, known as gap risk, which increases with the multiplier value. The goal of this paper is to determine the optimal PPI strategy in a setting where gap risk may occur due to downward jumps in the asset price dynamics. We consider a loss-averse agent who aims to maximize the expected utility of terminal wealth exceeding a minimum guarantee. Technically, we model the agent's preferences with an S-shaped utility function to accommodate the possibility that gap risk occurs, and address the optimization problem via a generalization of the martingale approach that remains valid under market incompleteness in a jump-diffusion framework.
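The gap-risk mechanism can be illustrated with a small simulation: discrete rebalancing of a PPI strategy under a simple jump model, counting paths that finish below the guarantee. All parameters are illustrative, not calibrated to the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def ppi_gap_risk(m=5.0, guarantee=100.0, v0=120.0, T=1.0, n_steps=252,
                 n_paths=50_000, mu=0.06, sigma=0.15, r=0.02,
                 jump_intensity=1.0, jump_size=-0.25):
    """Probability that a discretely rebalanced PPI portfolio ends below
    the guarantee when the risky asset can jump downward."""
    dt = T / n_steps
    v = np.full(n_paths, v0)
    for k in range(n_steps):
        floor = guarantee * np.exp(-r * (T - k * dt))   # PV of the guarantee
        exposure = m * np.maximum(v - floor, 0.0)       # risky allocation
        dR = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        dR += jump_size * (rng.random(n_paths) < jump_intensity * dt)
        v = v + exposure * dR + (v - exposure) * r * dt
    return (v < guarantee).mean()

# Gap risk grows with the multiplier, as the abstract notes.
for m in (3.0, 5.0, 8.0):
    print(f"multiplier {m}: gap-risk probability ≈ {ppi_gap_risk(m=m):.3%}")
```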
Loss development modelling is the actuarial practice of predicting the total 'ultimate' losses incurred on a set of policies once all claims are reported and settled. This poses a challenging prediction task as losses frequently take years to fully emerge from reported claims, and not all claims might yet be reported. Loss development models frequently estimate a set of 'link ratios' from insurance loss triangles, which are multiplicative factors transforming losses at one time point to ultimate. However, link ratios estimated using classical methods typically underestimate ultimate losses and cannot be extrapolated outside the domains of the triangle, requiring extension by 'tail factors' from another model. Although flexible, this two-step process relies on subjective decision points that might bias inference. Methods that jointly estimate 'body' link ratios and smooth tail factors offer an attractive alternative. This paper proposes a novel application of Bayesian hidden Markov models to loss development modelling, where discrete, latent states representing body and tail processes are automatically learned from the data. The hidden Markov development model is found to perform comparably to, and frequently better than, the two-step approach, as well as a latent change-point model, on numerical examples and industry datasets.
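For reference, the classical volume-weighted link ratios that the hidden Markov model generalises can be computed from a cumulative triangle in a few lines (toy data, assumed purely for illustration):

```python
import numpy as np

# Toy cumulative loss triangle (rows: accident years, cols: development years).
tri = np.array([
    [100., 180., 220., 240.],
    [110., 200., 250., np.nan],
    [120., 210., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])

# Volume-weighted ("chain ladder") link ratios between successive dev periods.
for j in range(tri.shape[1] - 1):
    mask = ~np.isnan(tri[:, j + 1])
    f = tri[mask, j + 1].sum() / tri[mask, j].sum()
    print(f"dev {j}->{j + 1}: link ratio = {f:.3f}")
```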
I Nyoman Widana, Ni Luh Putu Suciptawati, Sulma Sulma
Education plays a vital role in developing human resources, but education costs are substantial. For this reason, people need to prepare education funds from an early age, for example by joining an education insurance program. This is a business opportunity that a village-owned enterprise (BUMDes) can pursue by offering education insurance services to the public. This research aims to develop and use software to calculate education insurance premiums offered by BUMDes. The method used is the equivalence principle. Based on the case study, the premiums calculated by the developed software are very competitive, falling below market prices depending on the interest rate and fees charged.
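A stripped-down version of the equivalence-principle calculation, ignoring the mortality and fee components the actual software would include (an assumed simplification): level premiums paid in advance must accumulate, at interest rate i, to the target education fund.

```python
def annual_premium(fund: float, i: float, n: int) -> float:
    """Level annual premium (paid in advance) that accumulates to `fund`
    after n years at interest rate i; mortality and fees are omitted."""
    d = i / (1 + i)                    # discount rate
    s_due = ((1 + i) ** n - 1) / d     # accumulated value of an annuity-due
    return fund / s_due

# Hypothetical example: Rp 50,000,000 education fund over 10 years at 5%.
print(annual_premium(fund=50_000_000, i=0.05, n=10))
```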
Ellen Kuhlmann, Michelle Falkenbach, Monica Georgina Brînzac
et al.
Background Primary healthcare has emerged as a powerful global concept, but little attention has been directed towards the pivotal role of the healthcare workforce (HCWF) and the diverse institutional settings in which it works. This study aims to bridge the gap between primary healthcare policy and the ongoing HCWF crisis debate by introducing a health system and governance approach to identify capacities that may help respond effectively to the HCWF crisis in different health system contexts. Methods A qualitative comparative methodology was employed, and a rapid assessment of the primary healthcare workforce was conducted across nine countries: Denmark, Germany, Kazakhstan, Netherlands, Portugal, Romania, Serbia, Switzerland, and the United Kingdom/England. Results Our findings reveal both convergence and pronounced diversity across the healthcare systems, with none fully aligning with the ideal attributes of primary healthcare suggested by WHO. However, across all categories, Denmark, the Netherlands, and to a lesser extent Kazakhstan depict closer alignment to this model than the other countries. Workforce composition and skill-mix vary strongly, while disparities persist in education and data availability, particularly within Social Health Insurance systems. Policy responses and interventions span governance, organisational, and professional realms, albeit with weaknesses in the implementation of policies and a systematic lack of data and evaluation. Conclusions Aligning primary healthcare and workforce considerations within the broader health system context may help move the debate forward and build governance capacities to improve resilience in both areas.
Svitlana Khalatur, Svitlana Kachula, Vitalii Oleksiuk
et al.
Crisis management is an important tool for managing modern agricultural businesses, especially in the face of uncertainty and market change. This article examines the role of crisis management as a key element in the formation of a financial mechanism for the sustainable development of the agricultural sector. It analyses the main aspects of crisis management in agricultural business and their impact on the formation of a sustainable financial mechanism. The relationship between crisis management and sustainable development of the agrarian sector is studied, and the possibilities of using crisis management principles to improve the financial stability and competitiveness of agricultural enterprises are determined. The article emphasizes the importance of crisis management as a key factor in forming a sustainable financial mechanism for agricultural businesses. Its scientific novelty lies in three key aspects: the integration of crisis management and sustainable development, the application of crisis management principles to agriculture, and a focus on the financial mechanisms underpinning sustainable development in the sector. The results of the study can be useful for agricultural entrepreneurs, managers, academics, and regulators seeking to improve management strategies and increase the sustainability of the agricultural sector.
January 2007 was a bad storm month for much of central and northern Europe, with a series of extratropical cyclones bringing high winds and precipitation to highly populated areas between Ireland and Russia. Although Storm Kyrill on 18–19 January 2007 was the most serious for its infrastructure damage and insurance costs, Storm Franz from the preceding week, on 11–12 January 2007, was actually more serious for its maritime impacts in western Europe. This contribution takes a closer look at Storm Franz with an overview of its wind field and its impact on energy infrastructure, transportation networks and building damage. Maritime casualties are reviewed with respect to met-ocean conditions. The storm was notable for a series of wave-related accidents off southeast Ireland, in the English Channel, and in the German Bight. An analysis is carried out on water level recorders around the North Sea to assess the storm surge and short-period oscillations that may reveal harbour seiches or meteotsunamis. The results are compared with wave recorders, which had fairly good coverage across the North Sea in 2007. The issue of wave damage to offshore infrastructure was highlighted in events associated with Storm Britta on 31 October–1 November 2006. Offshore wind energy in northwest Europe was in a growth phase during this time, and there were questions about the extreme met-ocean conditions that could be expected in the 20-year lifetime of an offshore wind turbine.
Outsourcing plays an important role in the operation of insurance and reinsurance companies. This article aims to define the legal conditions of insurance outsourcing and to evaluate them. The example of limiting the scope of outsourcing in the activities of insurance and reinsurance companies in Polish law shows its specificity compared to other business outsourcing. This specificity lies primarily in the need for insurance outsourcing to be controlled by the EU and national supervisory authorities. There is a tendency in the law to extend the regulations related to insurance outsourcing to the further performance of a process, service or activity by insurance companies, particularly in the field of cooperation between traditional distributors and Insurtech. The lack of legal regulations forces EIOPA to look for appropriate and effective legal solutions in the field of supervision over insurance outsourcing. This process is mainly based on self-regulation of the market through 'soft law'; this practice sets new tasks for the EU and national regulators.
Loss reserving generally focuses on identifying a single model that can generate superior predictive performance. However, different loss reserving models specialise in capturing different aspects of loss data. This is recognised in practice in the sense that results from different models are often considered, and sometimes combined. For instance, actuaries may take a weighted average of the prediction outcomes from various loss reserving models, often based on subjective assessments. In this paper, we propose a systematic framework to objectively combine (i.e. ensemble) multiple _stochastic_ loss reserving models such that the strengths offered by different models can be utilised effectively. Our framework contains two main innovations compared to existing literature and practice. Firstly, our criterion for model combination considers the full distributional properties of the ensemble and not just the central estimate, which is of particular importance in the reserving context. Secondly, our framework is tailored to the features inherent in reserving data. These include, for instance, accident, development, calendar, and claim maturity effects. Crucially, the relative importance and scarcity of data across accident periods renders the problem distinct from traditional ensembling techniques in statistical learning. Our framework is illustrated with a complex synthetic dataset. In the results, the optimised ensemble outperforms both (i) traditional model selection strategies, and (ii) an equally weighted ensemble. In particular, the improvement occurs not only in central estimates but also in relevant quantiles, such as the 75th percentile of reserves (typically of interest to both insurers and regulators). The framework developed in this paper can be implemented via an R package, `ADLP`, which is available from CRAN.
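A generic sketch of the distributional combination idea, choosing simplex weights that maximise a held-out log score (this is not the `ADLP` package's interface, just an illustration of the principle):

```python
import numpy as np
from scipy.optimize import minimize

def optimise_ensemble_weights(dens):
    """Choose simplex weights maximising the validation log score.
    `dens` is an (n_obs, n_models) array holding each component model's
    predictive density evaluated at held-out observations."""
    n_models = dens.shape[1]

    def neg_log_score(w):
        return -np.sum(np.log(dens @ w + 1e-300))

    res = minimize(neg_log_score,
                   x0=np.full(n_models, 1.0 / n_models),
                   bounds=[(0.0, 1.0)] * n_models,
                   constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
    return res.x
```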
Ambarish Chattopadhyay, Carl N. Morris, Jose R. Zubizarreta
The Finite Selection Model (FSM) was developed by Carl Morris in the 1970s for the design of the RAND Health Insurance Experiment (HIE) (Morris 1979, Newhouse et al. 1993), one of the largest and most comprehensive social science experiments conducted in the U.S. The idea behind the FSM is that each treatment group takes its turns selecting units in a fair and random order to optimize a common assignment criterion. At each of its turns, a treatment group selects the available unit that maximally improves the combined quality of its resulting group of units in terms of the criterion. In the HIE and beyond, we revisit, formalize, and extend the FSM as a general tool for experimental design. Leveraging the idea of D-optimality, we propose and analyze a new selection criterion in the FSM. The FSM using the D-optimal selection function has no tuning parameters, is affine invariant, and when appropriate, retrieves several classical designs such as randomized block and matched-pair designs. For multi-arm experiments, we propose algorithms to generate a fair and random selection order of treatments. We demonstrate FSM's performance in a case study based on the HIE and in ten randomized studies from the health and social sciences. On average, the FSM achieves 68% better covariate balance than complete randomization and 56% better covariate balance than rerandomization in a typical study. We recommend the FSM be considered in experimental design for its conceptual simplicity, efficiency, and robustness.
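A simplified sketch of the FSM's turn-taking selection with a D-optimality-style criterion (the paper's selection function and fairness guarantees are more refined than this toy version):

```python
import numpy as np

def fsm_assign(X, n_groups=2, seed=0):
    """Simplified Finite Selection Model: groups take turns, in a random
    order each round, picking the remaining unit that most improves a
    D-optimality-style criterion, det(X_g' X_g), for their covariates."""
    rng = np.random.default_rng(seed)
    remaining = set(range(len(X)))
    groups = [[] for _ in range(n_groups)]

    def crit(g, u):
        rows = X[groups[g] + [u]]
        return np.linalg.det(rows.T @ rows + 1e-8 * np.eye(X.shape[1]))

    while remaining:
        for g in rng.permutation(n_groups):       # fair, random turn order
            if not remaining:
                break
            u = max(remaining, key=lambda i: crit(g, i))
            groups[g].append(u)
            remaining.remove(u)
    return groups

# Example: assign 10 units with 3 covariates to two groups.
groups = fsm_assign(np.random.default_rng(2).normal(size=(10, 3)))
```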
Large data sets comprising diagnoses of chronic conditions are becoming increasingly available for research purposes. In Germany, it is planned that aggregated claims data, including medical diagnoses from the statutory health insurance with roughly 70 million insurants, will be published on a regular basis. The validity of the diagnoses in such big data sets can hardly be assessed. If the data set includes prevalence, incidence and mortality, it is possible to estimate the proportion of false positive diagnoses using mathematical relations from the illness-death model. We apply the method to age-specific aggregated claims data from 70 million Germans about type 2 diabetes, stratified by sex, and report the findings in terms of the ratio of false positive diagnoses of type 2 diabetes (FPR) in the data set. The age-specific FPR for men and women changes with age. In men, the FPR increases linearly from 1 to 3 per mil between ages 30 and 50. For ages between 50 and 80 years, the FPR remains below 4 per mil; after age 80, it increases to about 5 per mil. In women, we find a steep increase from age 30 to 60; the peak FPR of about 12 per mil is reached between ages 60 and 70. After age 70, the FPR of women drops sharply. In all age groups, the FPR is higher in women than in men. In terms of absolute numbers, we find that there are 217 thousand people with a false-positive diagnosis in the data set (95% confidence interval, CI: 204 to 229 thousand), the vast majority of them women (172 thousand, 95% CI: 162 to 180 thousand). Our work indicates that possible false positive (and negative) diagnoses should be appropriately dealt with in claims data, e.g., by inclusion of age- and sex-specific error terms in statistical models, to avoid potentially biased or wrong conclusions.
Churn prediction in credit cards, fraud detection in insurance, and loan default prediction are important analytical customer relationship management (ACRM) problems. Since frauds, churns and defaults happen infrequently, the datasets for these problems are naturally highly unbalanced. Consequently, supervised machine learning classifiers tend to yield substantial false-positive rates when trained on such unbalanced datasets. We propose two ways of balancing the data. In the first, we propose an oversampling method to generate synthetic samples of the minority class using Generative Adversarial Networks (GANs), employing Vanilla GAN [1], Wasserstein GAN [2] and CTGAN [3] separately to oversample the minority class. To assess the efficacy of the proposed approach, we train a host of machine learning classifiers, including Random Forest, Decision Tree, support vector machine (SVM), and Logistic Regression, on the data balanced by the GANs. In the second method, we introduce a hybrid approach that combines oversampling and undersampling: the synthetic minority class data oversampled by a GAN is augmented with the majority class data undersampled by a one-class support vector machine (OCSVM) [4], and the resultant data is passed to the classifiers. Compared to the results of Farquad et al. [5] and Sundarkumar, Ravi, and Siddeshwar [6], our proposed methods perform better in terms of the area under the ROC curve (AUC) on all datasets.
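The OCSVM undersampling step can be sketched as follows, keeping only the majority-class points the one-class model treats as inliers (one plausible reading of this step; implementations differ on whether inliers or support vectors are retained):

```python
import numpy as np
from sklearn.svm import OneClassSVM

def ocsvm_undersample(X_major, nu=0.5):
    """Undersample the majority class: fit a one-class SVM on it and keep
    only the points it flags as inliers (+1), discarding outliers (-1)."""
    oc = OneClassSVM(nu=nu, kernel="rbf", gamma="scale").fit(X_major)
    keep = oc.predict(X_major) == 1
    return X_major[keep]
```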