Results for "Management. Industrial management"

Showing 20 of ~13,293,238 results · from CrossRef, arXiv, DOAJ, Semantic Scholar

arXiv Open Access 2026
Robust Investment-Driven Insurance Pricing and Liquidity Management

Bingzheng Chen, Jan Dhaene, Chun Liu et al.

This paper develops a dynamic equilibrium model of the insurance market that jointly characterizes insurers' underwriting, investment, recapitalization, and dividend policies under model uncertainty and financial frictions. Competitive insurers maximize shareholder value under a subjective worst-case probability measure, giving rise to liquidity-driven underwriting cycles and flight-to-quality behavior. While an equilibrium typically fails to exist in such a dynamic liquidity management framework with external financial investment, we show that incorporating model uncertainty restores equilibrium existence under plausible parameter conditions. Moreover, the model uncovers a novel relationship between the correlation of insurance and financial market risks and the equilibrium insurance price: negative loadings may emerge when insurance gains and financial returns are positively correlated, contrary to conventional intuition.

en q-fin.RM
DOAJ Open Access 2026
UDPLDP-Tree: Range Queries Under User-Distinguished Personalized Local Differential Privacy

Dongli Deng, Sen Zhao, Meixia Miao

Local Differential Privacy (LDP) and its personalized variants (PLDP) have been widely used for privacy-preserving data analytics. However, existing schemes often enforce a uniform indistinguishability level among users, failing to accommodate the nuanced privacy needs of diverse individuals. To address this, we propose User-Distinguished Local Differential Privacy (UDPLDP), a novel framework that formalizes user-level distinguishability to support more flexible, non-uniform privacy budgets. Under this framework, we address the fundamental task of frequency range queries with a new scheme, UDPLDP-Tree, which overcomes three limitations of existing multi-dimensional schemes: limited user-level distinguishability, insufficient robustness of estimation under complex data distributions, and the assumption of uniform privacy requirements across attributes. Extensive experiments demonstrate its effectiveness: UDPLDP-Tree reduces the mean squared error (MSE) by about 30–50% compared with a recent state-of-the-art baseline.
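The LDP primitive underlying such schemes, randomized response with a per-user privacy budget, can be sketched as follows. This is a generic illustration, not the paper's UDPLDP-Tree mechanism; the two privacy tiers and all parameter values are invented for the example.

```python
import math
import random

def perturb_bit(true_bit: int, epsilon: float, rng: random.Random) -> int:
    """Randomized response: report the true bit with probability
    p = e^eps / (e^eps + 1), otherwise flip it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if rng.random() < p else 1 - true_bit

def estimate_frequency(reports, epsilons):
    """Unbiased frequency estimate: each report is debiased with its
    own user-specific epsilon before averaging."""
    total = 0.0
    for r, eps in zip(reports, epsilons):
        p = math.exp(eps) / (math.exp(eps) + 1.0)
        total += (r - (1 - p)) / (2 * p - 1)  # inverts E[report] = (2p-1)b + (1-p)
    return total / len(reports)

rng = random.Random(0)
epsilons = [1.0 if i % 2 == 0 else 3.0 for i in range(20000)]  # two privacy tiers
true_bits = [1 if i < 6000 else 0 for i in range(20000)]       # true frequency 0.3
reports = [perturb_bit(b, e, rng) for b, e in zip(true_bits, epsilons)]
print(round(estimate_frequency(reports, epsilons), 2))
```

The point of the per-report debiasing is that users with different budgets can be pooled into one unbiased estimate, which is the basic ingredient a non-uniform-budget scheme needs.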

Information technology
DOAJ Open Access 2026
A Review of Thermal Safety and Management of Second-Life Batteries: Cell Screening, Pack Configuration and Health Estimation

Md Imran Hasan, Gang Lei, Dylan Lu et al.

Electric vehicle (EV) adoption is generating a rapidly increasing stream of retired lithium-ion batteries for second-life deployment. However, thermal safety concerns continue to limit their reuse. This paper reviews second-life battery (SLB) thermal safety and management and organizes existing work through a mechanism-to-deployment framework linking four domains: degradation mechanisms, cell screening, pack configuration, and monitoring. Evidence indicates that thermal risk depends on the degradation pathway rather than capacity fade. In fact, cells with comparable capacity can exhibit substantially different trigger temperatures depending on whether lithium plating or solid-electrolyte interphase (SEI) growth dominates. Therefore, capacity-based screening is insufficient because cells that satisfy capacity thresholds may still remain thermally unstable. The four domains are tightly coupled: the degradation pathway determines screening requirements; screening outcomes constrain pack design; pack topology influences fault escalation; and together these factors determine what monitoring can reliably detect. This review highlights three gaps and outlines future research directions in the field of SLB thermal safety and management: limited aged-cell thermal characterization by degradation pathway, insufficient diagnostic validation under industrial-throughput conditions, and the incomplete translation of screening outputs into design rules.

Production of electric energy or power. Powerplants. Central stations, Industrial electrochemistry
arXiv Open Access 2025
Statistical applications of the 20/60/20 rule in risk management and portfolio optimization

Kewin Pączek, Damian Jelito, Marcin Pitera et al.

This paper explores the applications of the 20/60/20 rule, a heuristic that segments data into top-performing, average-performing, and underperforming groups, in mathematical finance. We review the statistical foundations of this rule and demonstrate its usefulness in risk management and portfolio optimization. Our study highlights three key applications. First, we apply the rule to stock market data, showing that it enables effective population clustering. Second, we introduce a novel, easy-to-implement method for extracting heavy-tail characteristics in risk management. Third, we integrate spatial reasoning based on the 20/60/20 rule into portfolio optimization, enhancing robustness and improving performance. To support our findings, we develop a new measure for quantifying tail heaviness and employ conditional statistics to reconstruct the unconditional distribution from the core data segment. This reconstructed distribution is tested on real financial data to evaluate whether the 20/60/20 segmentation effectively balances capturing extreme risks with maintaining the stability of central returns. Our results offer insights into financial data behavior under heavy-tailed conditions and demonstrate the potential of the 20/60/20 rule as a complementary tool for decision-making in finance.
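The basic segmentation behind the rule can be sketched in a few lines; this is a minimal illustration with made-up return data, not the authors' statistical machinery.

```python
import statistics

def segment_20_60_20(values):
    """Split a sample into bottom 20%, middle 60%, and top 20% by value,
    mirroring the 20/60/20 heuristic."""
    ordered = sorted(values)
    n = len(ordered)
    lo, hi = int(0.2 * n), int(0.8 * n)
    return ordered[:lo], ordered[lo:hi], ordered[hi:]

returns = [-3.1, -0.4, 0.2, 0.5, 0.1, -0.2, 2.8, 0.3, -0.1, 0.0]  # toy returns
bottom, core, top = segment_20_60_20(returns)
print(len(bottom), len(core), len(top))  # 2 / 6 / 2 split of 10 points
# The core segment excludes both tails, so its dispersion is far smaller
# than that of the full sample.
print(round(statistics.pstdev(core), 3), round(statistics.pstdev(returns), 3))
```

Reconstructing the unconditional distribution from the core segment, as the paper does, then amounts to modelling the discarded tails from the conditional statistics of this middle 60%.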

en q-fin.PM, stat.ME
arXiv Open Access 2025
Singular Control in Inventory Management with Smooth Ambiguity

Arnon Archankul, Jacco J. J. Thijssen

We consider singular control in inventory management under Knightian uncertainty, where decision makers have a smooth ambiguity preference over Gaussian-generated priors. We demonstrate that continuous-time smooth ambiguity is the infinitesimal limit of Kalman-Bucy filtering with recursive robust utility. Additionally, we prove that the cost function can be determined by solving forward-backward stochastic differential equations with quadratic growth. With a sufficient condition and utilising variational inequalities in a viscosity sense, we derive the value function and optimal control policy. By the change-of-coordinate technique, we transform the problem into two-dimensional singular control, offering insights into model learning and aligning with classical singular control free boundary problems. We numerically implement our theory using a Markov chain approximation, where inventory is modeled as cash management following an arithmetic Brownian motion. Our numerical results indicate that the continuation region can be divided into three key areas: (i) the target region; (ii) the region where it is optimal to learn and do nothing; and (iii) the region where control becomes predominant and learning should be inactive. We demonstrate that ambiguity drives the decision maker to act earlier, leading to a smaller continuation region. This effect becomes more pronounced at the target region as the decision maker gains confidence from a longer learning period. However, these dynamics do not extend to the third region, where learning is excluded.

en math.OC, math.PR
DOAJ Open Access 2025
Blood-based tri-hybrid nanofluid flow through a porous channel with the impact of thermal radiation used in drug administration

Subhalaxmi Dey, Surender Ontela, P.K. Pattnaik et al.

Recent advances in science and technology depend on high-quality, long-lived, and efficient modern devices, and the use of nanoparticles now underpins much of that effectiveness and efficiency. In biomedical research in particular (targeted drug delivery, hyperthermia treatment for cancer, etc.), nanofluids are vital. The present article characterizes a blood-based tri-hybrid nanofluid flowing through a channel embedded in a porous matrix, with magnetization and Darcy-Forchheimer inertial drag interacting with the flow. The inclusion of thermal radiation and a heat source further energizes the heat transport. The formulated model combines the alloy nanoparticles AA7072 and AA7075 with zirconium oxide (ZrO2) in blood as the base liquid, each characterized by its physical properties. The model is transformed into non-dimensional form using similarity rules, and a semi-analytical Adomian Decomposition Method (ADM) is proposed for its solution. Validation against an existing article confirms the convergence of the current methodology, and the significant behavior of the factors involved in the flow phenomena is presented through graphs. The important findings are: an enhanced Reynolds number decelerates the inertial force; the velocity profile shows dual behavior for an increasing deformation factor; and, compared with single and hybrid nanofluids, the tri-hybrid nanofluid raises the fluid temperature owing to its higher thermal conductivity.

Applied mathematics. Quantitative methods
arXiv Open Access 2024
Towards Automated Solution Recipe Generation for Industrial Asset Management with LLM

Nianjun Zhou, Dhaval Patel, Shuxin Lin et al.

This study introduces a novel approach to Industrial Asset Management (IAM) by incorporating Conditional-Based Management (CBM) principles with the latest advancements in Large Language Models (LLMs). Our research automates a model-building process that has traditionally relied on intensive collaboration between data scientists and domain experts. We present two primary innovations: a taxonomy-guided prompt-generation method that facilitates the automatic creation of AI solution recipes, and a set of LLM pipelines designed to produce a solution recipe containing a set of artifacts composed of documents, sample data, and models for IAM. These pipelines, guided by standardized principles, enable the generation of initial solution templates for heterogeneous asset classes without direct human input, reducing reliance on extensive domain knowledge and enhancing automation. We evaluate our methodology by assessing asset health and sustainability across a spectrum of ten asset classes. Our findings illustrate the potential of LLMs and taxonomy-based LLM prompting pipelines in transforming asset management, offering a blueprint for subsequent research and development initiatives to be integrated into a rapid client solution.
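A taxonomy-guided prompt generator of the kind described can be sketched as follows; the taxonomy entries, template wording, and field names are hypothetical placeholders, not the paper's artifacts.

```python
# Hypothetical sketch: a taxonomy maps asset classes to fields that are
# spliced into a prompt template, so recipes for new asset classes need
# only a taxonomy entry, not hand-written prompts.
TEMPLATE = (
    "You are an asset-management assistant. Asset class: {asset_class}. "
    "Failure modes to consider: {failure_modes}. "
    "Produce a solution recipe with: data schema, sample queries, and a "
    "candidate model family for {task}."
)

TAXONOMY = {
    "pump": {"failure_modes": "cavitation, seal leakage",
             "task": "anomaly detection"},
    "transformer": {"failure_modes": "insulation aging, overheating",
                    "task": "health scoring"},
}

def build_prompt(asset_class: str) -> str:
    """Fill the template from the taxonomy entry for one asset class."""
    return TEMPLATE.format(asset_class=asset_class, **TAXONOMY[asset_class])

print(build_prompt("pump"))
```

The design choice this illustrates is that the taxonomy, not the prompt text, becomes the unit of maintenance when new asset classes are added.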

en cs.AI
arXiv Open Access 2024
MILLION: A General Multi-Objective Framework with Controllable Risk for Portfolio Management

Liwei Deng, Tianfu Wang, Yan Zhao et al.

Portfolio management is an important yet challenging task in AI for FinTech, which aims to allocate investors' budgets among different assets to balance the risk and return of an investment. In this study, we propose a general Multi-objectIve framework with controLLable rIsk for pOrtfolio maNagement (MILLION), which consists of two main phases, i.e., return-related maximization and risk control. Specifically, in the return-related maximization phase, we introduce two auxiliary objectives, i.e., return rate prediction and return rate ranking, combined with portfolio optimization to mitigate the overfitting problem and improve the generalization of the trained model to future markets. Subsequently, in the risk control phase, we propose two methods, i.e., portfolio interpolation and portfolio improvement, to achieve fine-grained risk control and fast risk adaptation to a user-specified risk level. For the portfolio interpolation method, we theoretically prove that the risk can be perfectly controlled if the to-be-set risk level is in a proper interval. In addition, we also show that the return rate of the adjusted portfolio after portfolio interpolation is no less than that of the min-variance optimization, as long as the model in the return-related maximization phase is effective. Furthermore, the portfolio improvement method can achieve greater return rates while keeping the same risk level compared to portfolio interpolation. Extensive experiments are conducted on three real-world datasets. The results demonstrate the effectiveness and efficiency of the proposed framework.
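The portfolio-interpolation idea, blending a model portfolio with the minimum-variance portfolio until a user-specified risk level is met, can be sketched as follows. The two-asset covariance and weights are toy numbers, not the paper's data, and the bisection search stands in for whatever closed-form the paper derives.

```python
def variance(w, cov):
    """Portfolio variance w' C w for a small dense covariance matrix."""
    n = len(w)
    return sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n))

def interpolate_to_risk(w_model, w_minvar, cov, target_var, iters=60):
    """Bisect on the blend weight alpha in [0, 1]: variance rises from the
    min-variance portfolio (alpha = 0) toward the model portfolio
    (alpha = 1), so a target in between is hit at a unique crossing."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        w = [mid * a + (1 - mid) * b for a, b in zip(w_model, w_minvar)]
        if variance(w, cov) > target_var:
            hi = mid
        else:
            lo = mid
    return [lo * a + (1 - lo) * b for a, b in zip(w_model, w_minvar)]

cov = [[0.04, 0.01], [0.01, 0.09]]  # toy 2-asset covariance
w_minvar = [8 / 11, 3 / 11]         # analytic min-variance weights for this cov
w_model = [0.2, 0.8]                # aggressive model portfolio
target = 0.05                       # user-specified variance level
w = interpolate_to_risk(w_model, w_minvar, cov, target)
print([round(x, 3) for x in w], round(variance(w, cov), 4))
```

This also makes the paper's "proper interval" condition concrete: the target variance must lie between the variances of the two endpoint portfolios for the crossing to exist.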

en q-fin.PM, cs.AI
arXiv Open Access 2024
Explainable Post hoc Portfolio Management Financial Policy of a Deep Reinforcement Learning agent

Alejandra de la Rica Escudero, Eduardo C. Garrido-Merchan, Maria Coronado-Vaca

Financial portfolio management investment policies computed quantitatively by modern portfolio theory techniques, such as the Markowitz model, rely on a set of assumptions that are not supported by data in high-volatility markets. Hence, quantitative researchers are looking for alternative models to tackle this problem. Concretely, portfolio management is a problem that has recently been addressed successfully by Deep Reinforcement Learning (DRL) approaches. In particular, DRL algorithms train an agent by estimating the distribution of the expected reward of every action performed by the agent given any financial state in a simulator. However, these methods rely on deep neural network models to represent such a distribution; although these are universal approximators, their behaviour, determined by a set of uninterpretable parameters, cannot be explained. Critically, financial investors' policies require predictions to be interpretable, so DRL agents as such are not suited to follow a particular policy or explain their actions. In this work, we developed a novel Explainable Deep Reinforcement Learning (XDRL) approach for portfolio management, integrating Proximal Policy Optimization (PPO) with the model-agnostic explainability techniques of feature importance, SHAP, and LIME to enhance transparency at prediction time. By executing our methodology, we can interpret the agent's actions at prediction time to assess whether they follow the requisites of an investment policy, or to assess the risk of following the agent's suggestions. To the best of our knowledge, our proposed approach is the first explainable post hoc portfolio management financial policy of a DRL agent. We empirically illustrate our methodology by successfully identifying key features influencing investment decisions, demonstrating the ability to explain the agent's actions at prediction time.

en cs.CE, cs.AI
arXiv Open Access 2024
Bridging the Gap: A Study of AI-based Vulnerability Management between Industry and Academia

Shengye Wan, Joshua Saxe, Craig Gomes et al.

Recent research advances in Artificial Intelligence (AI) have yielded promising results for automated software vulnerability management. AI-based models are reported to greatly outperform traditional static analysis tools, indicating a substantial workload relief for security engineers. However, the industry remains very cautious and selective about integrating AI-based techniques into their security vulnerability management workflow. To understand the reasons, we conducted a discussion-based study, anchored in the authors' extensive industrial experience and keen observations, to uncover the gap between research and practice in this field. We empirically identified three main barriers preventing the industry from adopting academic models, namely, complicated requirements of scalability and prioritization, limited customization flexibility, and unclear financial implications. Meanwhile, research works are significantly impacted by the lack of extensive real-world security data and expertise. We proposed a set of future directions to help better understand industry expectations, improve the practical usability of AI-based security vulnerability research, and drive a synergistic relationship between industry and academia.

en cs.CR, cs.SE
DOAJ Open Access 2024
AI-Driven Identification of Critical Dependencies in US-China Technology Supply Chains: Implications for Economic Security Policy

Guoli Rao, Chengru Ju, Zhen Feng

This research examines the critical dependencies within US-China technology supply chains through advanced artificial intelligence methodologies, addressing significant economic security implications in an era of strategic competition. The study develops and applies novel machine learning algorithms, network analysis techniques, and predictive models to identify, quantify, and visualize complex dependencies across semiconductor, telecommunications, and emerging technology sectors. Findings reveal pronounced asymmetric vulnerabilities, with semiconductor manufacturing equipment and advanced node production representing severe chokepoints in the global technology ecosystem. The research documents how AI-driven dependency mapping can detect non-obvious relationships and predict potential disruptions with 91.5% accuracy, outperforming traditional analytical approaches by 37.5%. Case studies demonstrate that critical technology supply chains exhibit increasing concentration despite diversification efforts, with vulnerability metrics particularly elevated in EUV lithography equipment, specialized telecommunications components, and quantum computing materials. The study proposes an integrated economic security framework incorporating targeted industrial policies, public-private resilience partnerships, and multilateral governance mechanisms calibrated to dependency severity levels. This research contributes to the emerging field of technology security by establishing quantitative vulnerability thresholds and developing AI-enhanced methodologies for strategic dependency management in complex global supply networks.

Technology (General), Science (General)
arXiv Open Access 2023
Survey on Foundation Models for Prognostics and Health Management in Industrial Cyber-Physical Systems

Ruonan Liu, Quanhu Zhang, Te Han

Industrial Cyber-Physical Systems (ICPS) integrate the disciplines of computer science, communication technology, and engineering, and have emerged as integral components of contemporary manufacturing and industries. However, ICPS encounter various challenges in long-term operation, including equipment failures, performance degradation, and security threats. To achieve efficient maintenance and management, prognostics and health management (PHM) finds widespread application in ICPS for critical tasks, including failure prediction, health monitoring, and maintenance decision-making. The emergence of large-scale foundation models (LFMs) like BERT and GPT signifies a significant advancement in AI technology, and ChatGPT stands as a remarkable accomplishment within this research paradigm, with potential for artificial general intelligence. Considering the ongoing enhancement in data acquisition technology and data processing capability, LFMs are anticipated to assume a crucial role in the PHM domain of ICPS. However, at present, a consensus is lacking regarding the application of LFMs to PHM in ICPS, necessitating systematic reviews and roadmaps to elucidate future directions. To bridge this gap, this paper elucidates the key components and recent advances in the underlying models. A comprehensive examination and comprehension of the latest advances in foundation modeling for PHM in ICPS can offer valuable references for decision makers and researchers in the industrial field while facilitating further enhancements in the reliability, availability, and safety of ICPS.

en cs.AI
DOAJ Open Access 2023
A Generalized Framework for Adopting Regression-Based Predictive Modeling in Manufacturing Environments

Mobayode O. Akinsolu, Khalil Zribi

In this paper, the growing significance of data analysis in manufacturing environments is exemplified through a review of relevant literature and a generic framework to ease the adoption of regression-based supervised learning in manufacturing environments. To validate the practicality of the framework, several regression learning techniques are applied to an open-source multi-stage continuous-flow manufacturing process data set to typify inference-driven decision-making that informs the selection of regression learning methods for adoption in real-world manufacturing environments. The investigated regression learning techniques are evaluated in terms of their training time (TT), prediction speed (PS), predictive accuracy (R-squared value, R²), and mean squared error (MSE). In terms of TT, k-NN20 (k-nearest neighbour with 20 neighbours) ranks first, with average and median values of 4.8 ms and 4.9 ms for the first stage, and 4.2 ms and 4.3 ms for the second stage, of the predictive modeling of the multi-stage continuous-flow manufacturing process, over 50 independent runs.
In terms of PS, DTR (decision tree regressor) ranks first, with average and median values of 5.6784 × 10^6 and 4.8691 × 10^6 observations per second (ob/s) for the first stage, and 4.9929 × 10^6 and 5.8806 × 10^6 ob/s for the second stage, over 50 independent runs.
In terms of R², BR (bagging regressor) ranks first for the first stage, with average and median values of 0.728 and 0.728, and RFR (random forest regressor) ranks first for the second stage, with average and median values of 0.746 and 0.746, over 50 independent runs. In terms of MSE, BR again ranks first for the first stage, with average and median values of 2.7 and 2.7, and RFR for the second stage, with average and median values of 3.5 and 3.5, over 50 independent runs. All methods are further ranked inferentially using the statistics of their performance metrics to identify the best method(s) for the first and second stages, and a Wilcoxon rank-sum test is then used to statistically verify the inference-based rankings.
DTR and k-NN20 are identified as the most suitable regression learning techniques given the multi-stage continuous-flow manufacturing process data used for experimentation.
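For intuition, the kind of evaluation reported above, fitting a regressor and scoring it by R-squared and MSE, can be sketched with a tiny pure-Python k-nearest-neighbour regressor on toy data; k = 2 and the data are illustrative, not the paper's process data set.

```python
def knn_predict(x_train, y_train, x, k):
    """Predict by averaging the targets of the k nearest 1-D neighbours."""
    nearest = sorted(range(len(x_train)), key=lambda i: abs(x_train[i] - x))[:k]
    return sum(y_train[i] for i in nearest) / k

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

x_train = [0, 1, 2, 3, 4, 5, 6, 7]
y_train = [0, 2, 4, 6, 8, 10, 12, 14]   # noiseless toy relation y = 2x
x_test = [1.5, 3.5, 5.5]
y_test = [3, 7, 11]
preds = [knn_predict(x_train, y_train, x, k=2) for x in x_test]
mse = sum((t - p) ** 2 for t, p in zip(y_test, preds)) / len(y_test)
print(preds, r_squared(y_test, preds), mse)
```

On real process data the same two scores trade off against training time and prediction speed, which is exactly the multi-metric comparison the paper runs across its candidate regressors.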

Engineering machinery, tools, and implements, Technological innovations. Automation
DOAJ Open Access 2023
Using fuzzy and machine learning iterative optimized models to generate the flood susceptibility maps: case study of Prahova River basin, Romania

Romulus Costache, Hazem Ghassan Abdo, Arun Pratap Mishra et al.

In this work, flood vulnerability in the Prahova River basin was calculated and analyzed using advanced methods and techniques. Two hybrid models, Iterative Classifier Optimizer – Multiclass Alternating Decision Tree – Certainty Factor (ICO-LADT-CF) and Fuzzy-Analytical Hierarchy Process – Certainty Factor (FAHP-CF), were generated, taking as input the values of 10 flood predictors and 158 points where historical floods occurred. In a first step, the Certainty Factor values were calculated and then used in the FAHP and Multiclass Alternating Decision Tree models, the latter optimized with the help of the Iterative Classifier Optimizer. In both ensemble models the slope angle was the most important flood-conditioning factor, and according to the Certainty Factor modelling, 8 classes/categories achieved the maximum value of 1. The flood susceptibility across the study area was then derived: on average, about 20% of the study area shows high or medium susceptibility to flash floods. Evaluating the quality of the models through the Receiver Operating Characteristic (ROC) curve gave the following results: success rate for the Flood Potential Index (FPI), Area Under Curve (AUC) = 0.985 for ICO-LADT-CF and 0.967 for FAHP-CF; prediction rate, AUC = 0.952 for ICO-LADT-CF and 0.913 for FAHP-CF.
The accuracies of the models were 0.943 (ICO-LADT-CF) and 0.931 (FAHP-CF) on the training data set, and 0.935 (ICO-LADT-CF) and 0.926 (FAHP-CF) on the validation data set. The main conclusion is that the two ensemble models outperform the machine learning models previously applied to the same study area.
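AUC figures like those quoted above are, in principle, computable directly from labelled points and susceptibility scores; a minimal sketch using the rank (Mann-Whitney) formulation follows, with toy labels and scores rather than the study's data.

```python
def auc(labels, scores):
    """ROC AUC via the rank formulation: the probability that a randomly
    chosen positive scores above a randomly chosen negative, with ties
    counting one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 1, 0]           # 1 = flood point, 0 = non-flood
scores = [0.9, 0.8, 0.6, 0.55, 0.4, 0.2, 0.3, 0.5]
print(auc(labels, scores))                   # 13 wins out of 16 pairs
```

An AUC of 0.985, as reported for ICO-LADT-CF, means almost every flood point is scored above almost every non-flood point under this pairwise reading.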

Environmental technology. Sanitary engineering, Environmental sciences
DOAJ Open Access 2023
Recognition of expiry data on food packages based on improved DBNet

Jishi Zheng, Junhui Li, Zhigang Ding et al.

To prevent products with missing character information from reaching the market, manufacturers need an automatic character recognition method. One of the key problems of such a recognition method is to recognise text under complex package patterns. In addition, some products use dot-matrix characters to reduce printing costs, which makes text extraction more difficult. We propose a character detection algorithm using DBNet as the base network, combined with the Convolutional Block Attention Module (CBAM) to improve its feature extraction of characters in complex contexts. After the character area has been located by the detection algorithm, it is cropped and fed into a fully convolutional character recognition network to achieve print character recognition. We use ResNet as the backbone network and CTC loss for training. In addition, the CBAM module was added to the backbone network to enhance its recognition of dot-matrix characters. The algorithm was finally deployed on the Jetson Nano. The experimental results show that the character detection accuracy reaches 97.9%, an improvement of 1.9% compared to the original network. As for the character recognition algorithm, the inference speed is doubled when deployed to the Nano platform compared to the CRNN network, with an accuracy of 97.8%.

Information technology
DOAJ Open Access 2022
Sustainable waste management: international experience for Ukraine regions

Kateryna Antoniuk, Dmytro Antoniuk

The article considers the key statistical indicators of waste management in the context of sustainable development of the EU and the regions of Ukraine, which makes it possible to understand development trends, identify problems and suggest ways to solve them. Positive trends in waste generation, processing and utilization have been identified, which contributes to the increase in the circular use of materials (CMU) in the EU. It is demonstrated that the unsatisfactory state of waste management in the regions of Ukraine is associated with significant territorial disparities in waste formation and accumulation and with the low level of waste utilization. The necessity of introducing responsible consumption and production as preconditions of rational waste management at the regional level in the context of sustainable development is substantiated. The aim of the study is to substantiate ways of applying the experience of EU countries in waste management to the regions of Ukraine to ensure sustainable development and security of the state. Methodology: the theoretical and methodological basis of the research is fundamental work on sustainable development, ecology, consumption and the circular economy. To ensure the conceptual integrity of the study, the following methods were used: statistical analysis and systematization, grouping, desk research. The scientific significance of the work is that the European and domestic experience of waste management with a focus on sustainable development is studied, and the tendencies of improving the environmental situation in the EU countries are analyzed (introduction of circular economy principles, reduction of accumulation, recycling); recommendations for improving the results of sustainable waste management in Ukraine have been developed.
The value of the study lies in the analysis and substantiation of problematic areas of sustainable waste management in the regions of Ukraine based on the experience of EU member states.

Management. Industrial management
arXiv Open Access 2021
Liquidity Stress Testing in Asset Management -- Part 1. Modeling the Liability Liquidity Risk

Thierry Roncalli, Fatma Karray-Meziou, François Pan et al.

This article is part of a comprehensive research project on liquidity risk in asset management, which can be divided into three dimensions. The first dimension covers liability liquidity risk (or funding liquidity) modeling, the second dimension focuses on asset liquidity risk (or market liquidity) modeling, and the third dimension considers asset-liability liquidity risk management (or asset-liability matching). The purpose of this research is to propose a methodological and practical framework in order to perform liquidity stress testing programs, which comply with regulatory guidelines (ESMA, 2019) and are useful for fund managers. The review of the academic literature and professional research studies shows that there is a lack of standardized and analytical models. The aim of this research project is then to fill the gap, with the goal of developing mathematical and statistical approaches and providing appropriate answers. In this first part, which focuses on liability liquidity risk modeling, we propose several statistical models for estimating redemption shocks. The historical approach must be complemented by an analytical approach based on zero-inflated models if we want to understand the true parameters that influence the redemption shocks. Moreover, we must also distinguish aggregate population models and individual-based models if we want to develop behavioral approaches. Once these different statistical models are calibrated, the second big issue is the risk measure to assess normal and stressed redemption shocks. Finally, the last issue is to develop a factor model that can translate stress scenarios on market risk factors into stress scenarios on fund liabilities.
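A zero-inflated redemption model of the kind the authors advocate can be sketched as a two-part mixture: with some probability no redemption occurs, otherwise the redemption rate is drawn from a positive severity law. The exponential severity and all parameter values below are illustrative assumptions, not the article's calibration.

```python
import random

def simulate_redemptions(n, pi_zero, mean_rate, rng):
    """Draw n daily redemption rates from a zero-inflated exponential:
    zero with probability pi_zero, exponential with the given mean otherwise."""
    return [0.0 if rng.random() < pi_zero else rng.expovariate(1.0 / mean_rate)
            for _ in range(n)]

rng = random.Random(42)
sample = simulate_redemptions(100000, pi_zero=0.7, mean_rate=0.02, rng=rng)
# For this mixture, E[X] = (1 - pi_zero) * mean_rate = 0.006.
empirical_mean = sum(sample) / len(sample)
zero_share = sum(1 for x in sample if x == 0.0) / len(sample)
print(round(empirical_mean, 4), round(zero_share, 2))
```

Separating the zero-inflation weight from the severity law is what lets the calibrated parameters be stressed independently, e.g. raising the redemption probability in a crisis scenario without touching the severity distribution.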

en q-fin.RM, q-fin.CP

Page 15 of 664,662