Results for "Risk in industry. Risk management"

Showing 20 of ~6,283,745 results · from CrossRef, arXiv

arXiv Open Access 2025
Risk-sensitive Reinforcement Learning Based on Convex Scoring Functions

Shanyu Han, Yang Liu, Xiang Yu

We propose a reinforcement learning (RL) framework under a broad class of risk objectives, characterized by convex scoring functions. This class covers many common risk measures, such as variance, Expected Shortfall, entropic Value-at-Risk, and mean-risk utility. To resolve the time-inconsistency issue, we consider an augmented state space and an auxiliary variable and recast the problem as a two-state optimization problem. We propose a customized Actor-Critic algorithm and establish some theoretical approximation guarantees. A key theoretical contribution is that our results do not require the Markov decision process to be continuous. Additionally, we propose an auxiliary variable sampling method inspired by the alternating minimization algorithm, which is convergent under certain conditions. We validate our approach in simulation experiments with a financial application in statistical arbitrage trading, demonstrating the effectiveness of the algorithm.

en q-fin.MF, cs.AI
arXiv Open Access 2025
Stochastic Optimal Control of Iron Condor Portfolios for Profitability and Risk Management

Hanyue Huang, Qiguo Sun, Xibei Yang

Previous research on option strategies has primarily focused on their behavior near expiration, with limited attention to the transient value process of the portfolio. In this paper, we formulate Iron Condor portfolio optimization as a stochastic optimal control problem, examining the impact of the control process $u(k_i, \tau)$ on the portfolio's potential profitability and risk. By assuming the underlying price process is a bounded martingale within $[K_1, K_2]$, we prove that a portfolio with strike structure $k_1 < k_2 = K_1 < S_t < k_3 = K_2 < k_4$ has a submartingale value process, which results in the optimal stopping time aligning with the expiration date $\tau = T$. Moreover, we construct a data generator based on the Rough Heston model to investigate general scenarios through simulation. The results show that asymmetric, left-biased Iron Condor portfolios with $\tau = T$ are optimal in SPX markets, balancing profitability and risk management. Deep out-of-the-money strategies improve profitability and success rates at the cost of introducing extreme losses, which can be alleviated by using an optimal stopping strategy. Except for the left-biased portfolios, $\tau$ generally falls within 50%-75% of the total duration. In addition, we validate these findings through case studies on the actual SPX market, covering bullish, sideways, and bearish market conditions.
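
The Iron Condor payoff structure the abstract optimizes over is standard; a minimal sketch of the expiry P&L (the function name and numbers are illustrative, not taken from the paper):

```python
def iron_condor_payoff(s, k1, k2, k3, k4, credit):
    """Expiry P&L of a short iron condor per unit notional: long put at k1,
    short put at k2, short call at k3, long call at k4 (k1 < k2 < k3 < k4),
    opened for a net credit."""
    put_spread = max(k2 - s, 0.0) - max(k1 - s, 0.0)    # short put-spread liability
    call_spread = max(s - k3, 0.0) - max(s - k4, 0.0)   # short call-spread liability
    return credit - put_spread - call_spread

# Maximum profit when the underlying expires between the short strikes;
# loss is capped by the long wings.
print(iron_condor_payoff(100, 80, 90, 110, 120, 2.0))  # → 2.0
print(iron_condor_payoff(70, 80, 90, 110, 120, 2.0))   # → -8.0
```

The submartingale result in the abstract says that, when the underlying cannot leave the inner strikes, this payoff's expected value only grows with time, so holding to expiry is optimal.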

en q-fin.PM
arXiv Open Access 2025
Risk Assessment Framework for Code LLMs via Leveraging Internal States

Yuheng Huang, Lei Ma, Keizaburo Nishikino et al.

The pre-training paradigm plays a key role in the success of Large Language Models (LLMs), which have been recognized as one of the most significant advancements of AI recently. Building on these breakthroughs, code LLMs with advanced coding capabilities bring huge impacts on software engineering, showing the tendency to become an essential part of developers' daily routines. However, the current code LLMs still face serious challenges related to trustworthiness, as they can generate incorrect, insecure, or unreliable code. Recent exploratory studies find that it can be promising to detect such risky outputs by analyzing LLMs' internal states, akin to how the human brain unconsciously recognizes its own mistakes. Yet, most of these approaches are limited to narrow sub-domains of LLM operations and fall short of achieving industry-level scalability and practicability. To address these challenges, in this paper, we propose PtTrust, a two-stage risk assessment framework for code LLM based on internal state pre-training, designed to integrate seamlessly with the existing infrastructure of software companies. The core idea is that the risk assessment framework could also undergo a pre-training process similar to LLMs. Specifically, PtTrust first performs unsupervised pre-training on large-scale unlabeled source code to learn general representations of LLM states. Then, it uses a small, labeled dataset to train a risk predictor. We demonstrate the effectiveness of PtTrust through fine-grained, code line-level risk assessment and demonstrate that it generalizes across tasks and different programming languages. Further experiments also reveal that PtTrust provides highly intuitive and interpretable features, fostering greater user trust. We believe PtTrust makes a promising step toward scalable and trustworthy assurance for code LLMs.

en cs.SE, cs.AI
arXiv Open Access 2024
Assessing solution quality in risk-averse stochastic programs

E. Ruben van Beesten, Nick W. Koning, David P. Morton

In optimization problems, the quality of a candidate solution can be characterized by the optimality gap. For most stochastic optimization problems, this gap must be statistically estimated. We show that for risk-averse problems, standard estimators are optimistically biased, which compromises the statistical guarantee on the optimality gap. We introduce estimators for risk-averse problems that do not suffer from this bias. Our method relies on using two independent samples, each estimating a different component of the optimality gap. Our approach extends a broad class of optimality gap estimation methods from the risk-neutral case to the risk-averse case, such as the multiple replications procedure and its one- and two-sample variants. We show that our approach is tractable and leads to high-quality optimality gap estimates for spectral and quadrangle risk measures. Our approach can further make use of existing bias and variance reduction techniques.

en math.OC, q-fin.RM
arXiv Open Access 2024
Modeling, Prediction and Risk Management of Distribution System Voltages with Non-Gaussian Probability Distributions

Yuanhai Gao, Xiaoyuan Xu, Zheng Yan et al.

High renewable energy penetration into power distribution systems causes a substantial risk of exceeding voltage security limits, which needs to be accurately assessed and properly managed. However, existing methods usually rely on joint probability models of power generation and loads provided by probabilistic prediction to quantify voltage risks, where inaccurate prediction results could lead to over- or underestimated risks. This paper proposes an uncertain voltage component (UVC) prediction method for assessing and managing voltage risks. First, we define the UVC to evaluate voltage variations caused by the uncertainties associated with power generation and loads. Second, we propose a Gaussian mixture model-based probabilistic UVC prediction method to depict the non-Gaussian distribution of voltage variations. Third, we derive the voltage risk indices, including value-at-risk (VaR) and conditional value-at-risk (CVaR), based on the probabilistic UVC prediction model. Finally, we investigate the mechanism of UVC-based voltage risk management and establish the voltage risk management problems, which are reformulated into linear programming or mixed-integer linear programming for convenient solution. The proposed method is tested on power distribution systems with actual photovoltaic power and load data and compared with methods based on probabilistic prediction of nodal power injections. Numerical results show that the proposed method is computationally efficient in assessing voltage risks and outperforms existing methods in managing voltage risks. The deviation of voltage risks obtained by the proposed method is only 15% of that by the methods based on probabilistic prediction of nodal power injections.
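
The VaR/CVaR indices used in this abstract have simple empirical estimators; a minimal sketch on simulated losses (illustrative only, not the paper's linear-programming reformulation):

```python
import random

def var_cvar(losses, alpha=0.95):
    """Empirical value-at-risk (the alpha-quantile of the loss sample) and
    conditional value-at-risk (the mean loss beyond that quantile)."""
    xs = sorted(losses)
    k = int(alpha * len(xs))
    var = xs[k]
    tail = xs[k:]
    return var, sum(tail) / len(tail)

random.seed(0)
losses = [random.gauss(0.0, 1.0) for _ in range(100_000)]
var95, cvar95 = var_cvar(losses)
# For standard-normal losses, VaR_0.95 ≈ 1.645 and CVaR_0.95 ≈ 2.063.
```

CVaR is always at least VaR at the same level, which is why it is the more conservative of the two indices.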

en eess.SY
arXiv Open Access 2024
GraphRPM: Risk Pattern Mining on Industrial Large Attributed Graphs

Sheng Tian, Xintan Zeng, Yifei Hu et al.

Graph-based patterns are extensively employed and favored by practitioners within industrial companies due to their capacity to represent the behavioral attributes and topological relationships among users, thereby offering enhanced interpretability in comparison to black-box models commonly utilized for classification and recognition tasks. For instance, within the scenario of transaction risk management, a graph pattern that is characteristic of a particular risk category can be readily employed to discern transactions fraught with risk, delineate networks of criminal activity, or investigate the methodologies employed by fraudsters. Nonetheless, graph data in industrial settings is often characterized by its massive scale, encompassing data sets with millions or even billions of nodes, making the manual extraction of graph patterns not only labor-intensive but also necessitating specialized knowledge in particular domains of risk. Moreover, existing methodologies for mining graph patterns encounter significant obstacles when tasked with analyzing large-scale attributed graphs. In this work, we introduce GraphRPM, an industry-purpose parallel and distributed risk pattern mining framework on large attributed graphs. The framework incorporates a novel edge-involved graph isomorphism network alongside optimized operations for parallel graph computation, which collectively contribute to a considerable reduction in computational complexity and resource expenditure. Moreover, the intelligent filtration of efficacious risky graph patterns is facilitated by the proposed evaluation metrics. Comprehensive experimental evaluations conducted on real-world datasets of varying sizes substantiate the capability of GraphRPM to adeptly address the challenges inherent in mining patterns from large-scale industrial attributed graphs, thereby underscoring its substantial value for industrial deployment.

en cs.LG, cs.AI
arXiv Open Access 2024
Managing Financial Climate Risk in Banking Services: A Review of Current Practices and the Challenges Ahead

Victor Cardenas

The document discusses financial climate risk in the context of the banking industry, emphasizing the need for a comprehensive understanding of climate change across different spatial and temporal scales. It highlights the challenges in estimating physical and transition risks, specifically extreme events and the limitations of current climate models. It also reviews current gaps in assessing physical and transition risks, including the development and improvement of modeling frameworks, and highlights the need for detailed databases of exposed physical assets and for climatic hazard modeling. Finally, it emphasizes the importance of integrating financial climate risks into financial risk management practices, particularly in smaller banks and lending organizations.

en econ.GN
arXiv Open Access 2024
On a risk model with tree-structured Poisson Markov random field frequency, with application to rainfall events

Hélène Cossette, Benjamin Côté, Alexandre Dubeau et al.

In many insurance contexts, dependence between risks of a portfolio may arise from their frequencies. We investigate a dependent risk model in which we assume the vector of count variables to be a tree-structured Markov random field with Poisson marginals. The tree structure translates into a wide variety of dependence schemes. We study the global risk of the portfolio and the risk allocation to all its constituents. We provide asymptotic results for portfolios defined on infinitely growing trees. To illustrate its flexibility and computational scalability to higher dimensions, we calibrate the risk model on real-world extreme rainfall data and perform a risk analysis.

en stat.ME, q-fin.RM
CrossRef Open Access 2023
A Risk Management Framework for Industry 4.0 Environment

László Péter Pusztai, Lajos Nagy, István Budai

In past decades, manufacturing companies have paid considerable attention to using their available resources in the most efficient way to satisfy customer demands. This endeavor is supported by many Industry 4.0 methods. One of these is the MES (Manufacturing Execution System), which is applied for monitoring and controlling manufacturing by recording and processing production-related data. This article presents a possible way to implement a risk-adjusted production schedule in a data-rich environment. The framework starts from production datasets of multiple workshops, followed by statistical analysis whose results feed stochastic network models. The outcome of the simulation is implemented in a production scheduling model to determine how to assign production among workshops. After collecting the necessary data, the reliability-indicator-based stochastic critical path method was applied in the case study. Two cases were presented based on the importance of inventory cost, and two different scheduling results were created and presented. With the objective of the least inventory cost, production was postponed to the latest time possible, which means that workshops had more time to finish their previous work on the first day due to the small production quantity. When cost was not relevant, production started on the first day in each workshop and was completed before the deadline. These are optimal solutions, but alternative solutions can also be chosen by the decision maker based on the results. The use of the modified stochastic critical path method and its analysis shed light on deficiencies in the production process, which is of merit in the continuous improvement process and in the estimation of the total project time.

arXiv Open Access 2023
Estimating Systemic Risk within Financial Networks: A Two-Step Nonparametric Method

Weihuan Huang

CoVaR (conditional value-at-risk) is a crucial measure for assessing financial systemic risk, which is defined as a conditional quantile of a random variable, conditioned on other random variables reaching specific quantiles. It enables the measurement of risk associated with a particular node in financial networks, taking into account the simultaneous influence of risks from multiple correlated nodes. However, estimating CoVaR presents challenges due to the unobservability of the multivariate-quantiles condition. To address the challenges, we propose a two-step nonparametric estimation approach based on Monte-Carlo simulation data. In the first step, we estimate the unobservable multivariate-quantiles using order statistics. In the second step, we employ a kernel method to estimate the conditional quantile given the order statistics. We establish the consistency and asymptotic normality of the two-step estimator, along with a bandwidth selection method. The results demonstrate that, under a mild restriction on the bandwidth, the estimation error arising from the first step can be ignored. Consequently, the asymptotic results depend solely on the estimation error of the second step, as if the multivariate-quantiles in the condition were observable. Numerical experiments demonstrate the favorable performance of the two-step estimator.
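
The two-step idea (locate the conditioning quantile via order statistics, then estimate a conditional quantile) can be sketched crudely on simulated data; this toy version uses a plain window of order statistics in place of the paper's kernel smoothing, and all names and parameters are illustrative:

```python
import math
import random

def covar_hat(x, y, alpha=0.95, beta=0.95, band=0.01):
    """Crude Monte-Carlo CoVaR sketch: the beta-quantile of y restricted to
    simulations where x falls in a small window of order statistics around
    its own alpha-quantile (a windowed stand-in for a kernel estimator)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # step 1: order statistics of x
    k, w = int(alpha * n), max(1, int(band * n))
    ys = sorted(y[i] for i in order[max(0, k - w): k + w])
    return ys[int(beta * len(ys))]                 # step 2: conditional quantile

random.seed(1)
rho, n = 0.5, 200_000
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [rho * xi + math.sqrt(1 - rho**2) * random.gauss(0.0, 1.0) for xi in x]
covar = covar_hat(x, y)
var_y = sorted(y)[int(0.95 * n)]   # unconditional VaR of y for comparison
```

With positive dependence, the conditional quantile exceeds the unconditional one, which is exactly the systemic-risk amplification CoVaR is built to capture.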

en q-fin.RM, math.ST
arXiv Open Access 2023
Law-Invariant Return and Star-Shaped Risk Measures

Roger J. A. Laeven, Emanuela Rosazza Gianin, Marco Zullino

This paper presents novel characterization results for classes of law-invariant star-shaped functionals. We begin by establishing characterizations for positively homogeneous and star-shaped functionals that exhibit second- or convex-order stochastic dominance consistency. Building on these characterizations, we proceed to derive Kusuoka-type representations for these functionals, shedding light on their mathematical structure and intimate connections to Value-at-Risk and Expected Shortfall. Furthermore, we offer representations of general law-invariant star-shaped functionals as robustifications of Value-at-Risk. Notably, our results are versatile, accommodating settings that may, or may not, involve monotonicity and/or cash-additivity. All of these characterizations are developed within a general locally convex topological space of random variables, ensuring the broad applicability of our results in various financial, insurance and probabilistic contexts.

en q-fin.RM, math.PR
arXiv Open Access 2023
Robust Asset-Liability Management

Tjeerd de Vries, Alexis Akira Toda

How should financial institutions hedge their balance sheets against interest rate risk when managing long-term assets and liabilities? We address this question by proposing a bond portfolio solution based on ambiguity-averse preferences, which generalizes classical immunization and accommodates arbitrary liability structures, portfolio constraints, and interest rate perturbations. In a further extension, we show that the optimal portfolio can be computed as a simple generalized least squares problem, making the solution both transparent and computationally efficient. The resulting portfolio also reduces leverage by implicitly regularizing the portfolio weights, which enhances out-of-sample performance. Numerical evaluations using both empirical and simulated yield curves support the feasibility and accuracy of our approach relative to existing methods.

en q-fin.RM, q-fin.MF
arXiv Open Access 2022
Data-Driven Risk Measurement by SV-GARCH-EVT Model

Minheng Xiao

This paper aims to more effectively manage and mitigate stock market risks by accurately characterizing financial market returns and volatility. We enhance the Stochastic Volatility (SV) model by incorporating fat-tailed distributions and leverage effects, estimating model parameters using Markov Chain Monte Carlo (MCMC) methods. By integrating extreme value theory (EVT) to fit the tail distribution of standard residuals, we develop the SV-EVT-VaR-based dynamic model. Our empirical analysis, using daily S\&P 500 index data and simulated returns, shows that SV-EVT-based models outperform others in backtesting. These models effectively capture the fat-tailed properties of financial returns and the leverage effect, proving superior for out-of-sample data analysis.
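
The EVT step in this pipeline extrapolates a far-tail quantile from the largest observed residuals. As a stand-in for the paper's GPD peaks-over-threshold fit, here is a Hill-estimator sketch on simulated heavy-tailed losses (an assumption-laden illustration, not the SV-EVT-VaR model itself):

```python
import math
import random

def hill_var(losses, k, p):
    """Extreme-quantile (VaR) estimate from the k largest losses, using the
    Hill estimator of the tail index and Weissman's extrapolation formula."""
    xs = sorted(losses, reverse=True)
    x_k = xs[k]                                   # tail threshold (k-th largest)
    gamma = sum(math.log(xs[i] / x_k) for i in range(k)) / k
    n = len(losses)
    return x_k * (k / (n * (1 - p))) ** gamma

random.seed(2)
n = 200_000
# Pareto(alpha=3) losses: the true VaR_0.999 is 0.001 ** (-1/3) ≈ 10.
losses = [random.random() ** (-1.0 / 3.0) for _ in range(n)]
v999 = hill_var(losses, k=2000, p=0.999)
```

The point of the extrapolation is that a 99.9% quantile can be estimated from a sample whose empirical tail only reaches the 99% level, which is what makes EVT useful for backtesting extreme VaR.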

en stat.AP, q-fin.MF
arXiv Open Access 2021
Software Estimations Risk in Pakistan Software Industry

Suresh Kumar, Qaisar Imtiaz, Sarmad Mahar

Pakistan's software and IT industry has seen dramatic growth and success in recent years and, according to one study, was expected to double by 2020. The software development life cycle comprises multiple phases, activities, and techniques that can lead to successful projects, and software estimation is one of its vital parts. Software estimation alone can determine a product's success or failure. Estimating the right cost, effort, and resources is an art, but it is equally important to account for the risks that may arise in a software project and affect those estimates. In this paper, we highlight how risks in the Pakistan software industry can affect estimates and how to mitigate them.

en cs.SE
arXiv Open Access 2021
Modeling surrender risk in life insurance: theoretical and experimental insight

Mark Kiermayer

Surrender poses one of the major risks to life insurance and a sound modeling of its true probability has direct implication on the risk capital demanded by the Solvency II directive. We add to the existing literature by performing extensive experiments that present highly practical results for various modeling approaches, including XGBoost, random forest, GLM and neural networks. Further, we detect shortcomings of prevalent model assessments, which are in essence based on a confusion matrix. Our results indicate that accurate label predictions and a sound modeling of the true probability can be opposing objectives. We illustrate this with the example of resampling. While resampling is capable of improving label prediction in rare event settings, such as surrender, and thus is commonly applied, we show theoretically and numerically that models trained on resampled data predict significantly biased event probabilities. Following a probabilistic perspective on surrender, we further propose time-dependent confidence bands on predicted mean surrender rates as a complementary assessment and demonstrate its benefit. This evaluation takes a very practical, going concern perspective, which respects that the composition of a portfolio, as well as the nature of underlying risk drivers might change over time.
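
One way to see the bias this abstract describes: a model trained on resampled data learns probabilities calibrated to the resampled base rate, and the classical prior-shift (Bayes) correction maps them back to the true rate. A minimal sketch of that correction (the formula is standard, not taken from this paper):

```python
def correct_for_resampling(p_res, pi_true, pi_res):
    """Map a probability learned at resampled event rate pi_res back to the
    true event rate pi_true via the Bayes prior-shift correction."""
    num = p_res * pi_true / pi_res
    den = num + (1.0 - p_res) * (1.0 - pi_true) / (1.0 - pi_res)
    return num / den

# A 2% surrender rate oversampled to a balanced 50/50 training set: an
# uninformative model predicts 0.5, which maps back to the true 2%.
print(correct_for_resampling(0.5, 0.02, 0.5))  # → 0.02
```

Without this correction, a model trained on the balanced sample would overstate surrender probabilities by more than an order of magnitude, which is the biased-probability effect the authors demonstrate.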

en q-fin.RM
arXiv Open Access 2019
Cyber Risk at the Edge: Current and future trends on Cyber Risk Analytics and Artificial Intelligence in the Industrial Internet of Things and Industry 4.0 Supply Chains

Petar Radanliev, David De Roure, Kevin Page et al.

Digital technologies have changed the way supply chain operations are structured. In this article, we conduct a systematic synthesis of the literature on the impact of new technologies on supply chains and the related cyber risks. A taxonomic/cladistic approach is used to evaluate progress in supply chain integration in the Industrial Internet of Things and Industry 4.0, with a specific focus on the mitigation of cyber risk. An analytical framework is presented, based on a critical assessment of issues related to new types of cyber risk and the integration of supply chains with new technologies. This paper identifies a dynamic and self-adapting supply chain system supported by Artificial Intelligence and Machine Learning (AI/ML) and real-time intelligence for predictive cyber risk analytics. The system is integrated into a cognition engine that enables predictive cyber risk analytics with real-time intelligence from IoT networks at the edge. This enhances capacities and assists in the creation of a comprehensive understanding of the opportunities and threats that arise when edge computing nodes are deployed and when AI/ML technologies are migrated to the periphery of IoT networks.

arXiv Open Access 2015
Systemic risk in multiplex networks with asymmetric coupling and threshold feedback

Rebekka Burkholz, Matt V. Leduc, Antonios Garas et al.

We study cascades on a two-layer multiplex network, with asymmetric feedback that depends on the coupling strength between the layers. Based on an analytical branching process approximation, we calculate the systemic risk measured by the final fraction of failed nodes on a reference layer. The results are compared with the case of a single layer network that is an aggregated representation of the two layers. We find that systemic risk in the two-layer network is smaller than in the aggregated one only if the coupling strength between the two layers is small. Above a critical coupling strength, systemic risk is increased because of the mutual amplification of cascades in the two layers. We even observe sharp phase transitions in the cascade size that are less pronounced on the aggregated layer. Our insights can be applied to a scenario where firms decide whether they want to split their business into a less risky core business and a more risky subsidiary business. In most cases, this may lead to a drastic increase of systemic risk, which is underestimated in an aggregated approach.

en physics.soc-ph, q-fin.RM

Page 53 of 314,188