Results for "Production management. Operations management"

Showing 20 of ~6,417,530 results · from arXiv, DOAJ, CrossRef, Semantic Scholar

CrossRef Open Access 2025
Advancements in Inventory Management: Insights From INFORMS Franz Edelman Award Finalists

Shubham Gupta, Samayita Guha, Subodha Kumar

This article provides a comprehensive overview of the advancements and applications of operations research (OR) and management science (MS) in inventory management described by the Franz Edelman Award finalists from 1985 to 2023. The research presents the transformative potential of OR/MS in addressing complex inventory management challenges across various industries. Through an in-depth examination of the methodologies and solutions employed by these studies, we highlight the strategic implementations of several OR/MS techniques, including simulation-optimization, deep learning and advanced forecasting, dynamic pricing and yield management, and stochastic modeling. The analysis reveals the tangible benefits realized, such as optimized inventory levels, reduced costs, improved profits and revenue, and greater customer satisfaction. This research underscores the critical role of inventory process optimization and risk mitigation in navigating uncertainties and demand fluctuations. The managerial insights derived from these initiatives provide a roadmap for practitioners seeking to implement advanced OR/MS methodologies in real-world inventory management. The findings also encourage the adoption of innovative methods to enhance operational efficiency and competitiveness. Through this exploration, we aim to stimulate further innovation and research in the evolving field of inventory management and to celebrate the achievements in applied analytics brought forth by the Franz Edelman Award finalists.

3 citations en
arXiv Open Access 2025
SPRINT: An Assistant for Issue Report Management

Ahmed Adnan, Antu Saha, Oscar Chaparro

Managing issue reports is essential for the evolution and maintenance of software systems. However, manual issue management tasks such as triaging, prioritizing, localizing, and resolving issues are highly resource-intensive for projects with large codebases and users. To address this challenge, we present SPRINT, a GitHub application that utilizes state-of-the-art deep learning techniques to streamline issue management tasks. SPRINT assists developers by: (i) identifying existing issues similar to newly reported ones, (ii) predicting issue severity, and (iii) suggesting code files that likely require modification to solve the issues. We evaluated SPRINT using existing datasets and methodologies, measuring its predictive performance, and conducted a user study with five professional developers to assess its usability and usefulness. The results show that SPRINT is accurate, usable, and useful, providing evidence of its effectiveness in assisting developers in managing issue reports. SPRINT is an open-source tool available at https://github.com/sea-lab-wm/sprint_issue_report_assistant_tool.

en cs.SE
arXiv Open Access 2025
Brame: Hierarchical Data Management Framework for Cloud-Edge-Device Collaboration

Xianglong Liu, Hongzhi Wang, Yingze Li et al.

In the realm of big data, cloud-edge-device collaboration is prevalent in industrial scenarios. However, a systematic exploration of the theory and methodologies related to data management in this field is lacking. This paper delves into the sub-problem of data storage and scheduling within cloud-edge-device collaborative environments. Following extensive research and analysis of the characteristics and requirements of data management in cloud-edge collaboration, it is evident that existing studies on hierarchical data management primarily focus on the migration of hot and cold data. Additionally, these studies encounter challenges such as elevated operational and maintenance costs, difficulties in locating data within tiered storage, and intricate metadata management attributable to excessively fine-grained management granularity. These challenges impede the fulfillment of the storage needs in cloud-edge-device collaboration. To overcome these challenges, we propose a Block-based hieRarchical dAta ManagEment framework, Brame, which advocates a workload-aware three-tier storage architecture and a shift from tuples to Blocks as the fundamental unit of data management. Brame includes an offline block generation method designed to facilitate efficient block generation and expeditious query routing. Extensive experiments substantiate the superior performance of Brame.

en cs.DB
DOAJ Open Access 2025
Towards safer steel operations with a multi model framework for accident prediction and risk assessment simulation

Shatrudhan Pandey, Abhishek Kumar Singh, Shreyanshu Parhi et al.

This research introduces a multi-model approach integrating Bayesian Networks (BN), Machine Learning (ML) models, Natural Language Processing (NLP) with Sentiment Analysis, Agent-Based Modeling (ABM), and Survival Analysis to improve predictive modelling of accident causation in high-risk steel industries. Each artificial intelligence (AI) based model complements the others, substantiating the hypothesis that AI approaches can achieve greater prediction accuracy than conventional methods. Results confirm that the AI models improve prediction accuracy compared to conventional approaches. The BN application uncovers the machine conditions and human errors responsible for causing accidents. Gradient Boosting Machines identified equipment-related incidents, while NLP analysis revealed negative sentiment associated with non-compliance with safety protocols. ABM simulations focused on personal protective equipment (PPE) compliance and machine maintenance, and survival analysis indicated the role of timely interventions in reducing severe accidents. Additionally, temporal insights aid in timing interventions, improving the efficacy of safety strategies. This research advances proactive accident prediction and risk management in high-risk steel industrial environments by addressing latent risk factors.

Medicine, Science
CrossRef Open Access 2025
An Economic Analysis of Subscription Sharing of Digital Services

Xiaokun Wu, Shinyi Wu, Zhongju Zhang

Subscription sharing, where users share premium family plans with non-family members via platforms like Together Price and Sharesub, has become increasingly common. This raises a key question: should providers still offer discounted family plans alongside individual ones? Our research explores this issue for a monopolistic provider facing this sharing threat. We analyze the optimal pricing strategy and the effects of subscription sharing on profits, plan offerings, consumer surplus, and social welfare. We find that offering both plans is at least as profitable as offering individual plans only, and sharing can sometimes supercharge profits. However, platform fees reduce these benefits by narrowing the price gap between plans, weakening the market expansion effect. Our numerical results show that sharing often improves social welfare. Overall, these insights suggest that subscription sharing is not always as harmful to providers as it may appear, and that platform service fees play a key role in pricing strategies.

CrossRef Open Access 2025
Simultaneous Search, Reservation Fees, and Sequential Outcomes

Ying-Ju Chen

This paper studies a simultaneous-search problem in which a player observes the outcomes sequentially and must pay reservation fees to maintain eligibility for recalling earlier offers. We use postgraduate program applications to illustrate the key ingredients of this family of problems. We develop a parsimonious model with two categories of schools: reach schools, which the player would be very happy to join but has a low chance of entering; and safety schools, which are a safer choice but not as exciting. The player first decides on the application portfolio, and then the outcomes from the schools applied to arrive randomly over time. We start with the extreme case wherein the safety schools always admit the player, and show that it suffices to focus on the last safety school. This allows us to conveniently represent the player’s value function by a product form of the probability of entering the last safety period and the expected payoff from then on. We show that the player’s payoff after applications is increasing and discrete concave in both the numbers of reach and safety schools, and that the optimal number of reach schools increases in the reservation fee. The proof technique utilizes stochastic coupling, stochastic dominance, and directional monotone comparative statics arguments. We also develop a recursive dynamic programming algorithm for the case when admissions to safety schools are no longer guaranteed. We demonstrate instances in which the player applies to more safety schools when either the reservation fee gets higher or the admission probability drops lower, and articulate how these arise from the portfolio optimization consideration.

CrossRef Open Access 2025
Product Recall Contagion in the Supply Chain

Ljubomir Pupovac, Vivek Astvansh, François Carrillat et al.

Following a manufacturer's large product recall, its supplier's shareholders may perceive uncertain future demand for the supplier's products and react punitively, causing a drop in the supplier's stock return—that is, a contagion (or negative spillover). Moreover, shareholders’ information asymmetry may cause them to “screen” the supplier's information cues to determine the supplier's extent of demand uncertainty. The ideal screen is the supplier's proportion of sales revenue from the recalling manufacturer. However, not all suppliers disclose this information. Therefore, we propose that shareholders use a two-stage screening. The first screen is whether the supplier demonstrates transparency by voluntarily disclosing information about its customer portfolio. The second screen—available only to the subset of suppliers that disclose customer information—is the supplier's sales revenue from the recalling manufacturer. We used a sample of 896 U.S. public manufacturer–supplier dyads impacted by 27 large manufacturer recalls. An event study followed by cross-sectional regressions provides evidence of contagion. In addition, it reveals that the supplier's voluntary disclosure of customer information mitigates contagion, whereas revenue dependence aggravates it. Contextual (i.e., recall) variables also impact contagion. Our research study contributes to the supply-chain contagion literature, screening theory, and customer information disclosure literature. The findings inform supplier firm managers that their prior customer-related disclosures and the contextual variables can moderate contagion.

S2 Open Access 2022
The application of digital twin technology in operations and supply chain management: a bibliometric review

Rajinder Bhandal, R. Meriton, R. Kavanagh et al.

Purpose: The application of digital twins to optimise operations and supply chain management functions is a burgeoning practice. Scholars have attempted to keep pace with this development, initiating a fast-evolving research agenda. The purpose of this paper is to take stock of the emerging research stream, identifying trends and capturing the value potential of digital twins for the field of operations and supply chain management. Design/methodology/approach: In this work we employ a bibliometric literature review supported by bibliographic coupling and keyword co-occurrence network analysis to examine current trends in the research field regarding the value-added potential of digital twins in operations and supply chain management. Findings: The main findings of this work are the identification of four value clusters and one enabler cluster. Value clusters comprise articles that describe how the application of digital twins can enhance supply chain activities at the level of business processes as well as at the level of supply chain capabilities. The value clusters of production flow management and product development operate at the business-process level and are maturing communities. The supply chain resilience and risk management value cluster operates at the capability level; it is just emerging and is positioned at the periphery of the main network. Originality/value: This is the first study that attempts to conceptualise the digital twin as a dynamic capability and employs bibliometric and network analysis on the research stream of digital twins in operations and supply chain management to capture evolutionary trends, literature communities and value-creation dynamics in a digital-twin-enabled supply chain.

arXiv Open Access 2024
A Systematic Review of NeurIPS Dataset Management Practices

Yiwei Wu, Leah Ajmani, Shayne Longpre et al.

As new machine learning methods demand larger training datasets, researchers and developers face significant challenges in dataset management. Although ethics reviews, documentation, and checklists have been established, it remains uncertain whether consistent dataset management practices exist across the community. This lack of a comprehensive overview hinders our ability to diagnose and address fundamental tensions and ethical issues related to managing large datasets. We present a systematic review of datasets published at the NeurIPS Datasets and Benchmarks track, focusing on four key aspects: provenance, distribution, ethical disclosure, and licensing. Our findings reveal that dataset provenance is often unclear due to ambiguous filtering and curation processes. Additionally, a variety of sites are used for dataset hosting, but only a few offer structured metadata and version control. These inconsistencies underscore the urgent need for standardized data infrastructures for the publication and management of datasets.

en cs.LG
arXiv Open Access 2024
Data Governance and Data Management in Operations and Supply Chain: A Literature Review

Xuejiao Li, Yang Cheng, Xiaoning Xia et al.

In the dynamic landscape of contemporary business, the surge in data and technological advancements has driven companies toward data-driven decision-making processes. Despite the vast potential that data holds for strategic insights and operational efficiencies, substantial challenges arise in the form of data issues. Given these obstacles, the imperative for effective data governance (DG) becomes increasingly apparent. This research endeavors to bridge the gap in DG research within the Operations and Supply Chain Management (OSCM) domain through a comprehensive literature review. Initially, we redefine DG through a synthesis of existing definitions, complemented by insights gained from DG practices. Subsequently, we delineate the constituent elements of DG. Building upon this foundation, we develop an analytical framework to scrutinize the collected literature from the perspectives of both OSCM and DG. Beyond a retrospective analysis, this study provides insights for future research directions. Moreover, this study also makes a valuable contribution to industry, as the insights gained from the literature are directly applicable to real-world scenarios.

en cs.DB
arXiv Open Access 2024
The Shapley Value in Database Management

Leopoldo Bertossi, Benny Kimelfeld, Ester Livshits et al.

Attribution scores can be applied in data management to quantify the contribution of individual items to conclusions from the data, as part of the explanation of what led to these conclusions. In Artificial Intelligence, Machine Learning, and Data Management, some of the common scores are deployments of the Shapley value, a formula for profit sharing in cooperative game theory. Since its invention in the 1950s, the Shapley value has been used for contribution measurement in many fields, from economics to law, with its most recent applications in modern machine learning. Recent studies investigated the application of the Shapley value to database management. This article gives an overview of recent results on the computational complexity of the Shapley value for measuring the contribution of tuples to query answers and to the extent of inconsistency with respect to integrity constraints. More specifically, the article highlights lower and upper bounds on the complexity of calculating the Shapley value, either exactly or approximately, as well as solutions for realizing the calculation in practice.
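
As a toy illustration of the score this abstract surveys, the sketch below computes exact Shapley values by brute-force enumeration of orderings for three hypothetical database tuples and a simple query; the facts, the query, and the tuple names are invented for illustration, and the factorial-time enumeration is only viable at toy sizes (the complexity bounds the article discusses are exactly what makes larger instances hard).

```python
from itertools import permutations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley value: average each player's marginal contribution
    over all orderings of the players (O(n!); toy sizes only)."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = utility(coalition)
            coalition.add(p)
            totals[p] += utility(coalition) - before  # marginal contribution
    norm = factorial(len(players))
    return {p: v / norm for p, v in totals.items()}

# Hypothetical "database": utility(S) = 1 if the tuples in S satisfy the
# query "some employee works in R&D", else 0.
facts = {"t1": ("alice", "R&D"), "t2": ("bob", "Sales"), "t3": ("carol", "R&D")}

def q(subset):
    return 1.0 if any(facts[t][1] == "R&D" for t in subset) else 0.0

sv = shapley_values(list(facts), q)
```

Here t2 never changes the query answer, so its Shapley value is 0, while the two symmetric R&D tuples split the credit equally, matching the cooperative-game intuition behind tuple attribution.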

arXiv Open Access 2024
AgGym: An agricultural biotic stress simulation environment for ultra-precision management planning

Mahsa Khosravi, Matthew Carroll, Kai Liang Tan et al.

Agricultural production requires careful management of inputs such as fungicides, insecticides, and herbicides to ensure a successful crop that is high-yielding, profitable, and of superior seed quality. Current state-of-the-art field crop management relies on coarse-scale crop management strategies, where entire fields are sprayed with pest and disease-controlling chemicals, leading to increased cost and sub-optimal soil and crop management. To overcome these challenges and optimize crop production, we utilize machine learning tools within a virtual field environment to generate localized management plans for farmers to manage biotic threats while maximizing profits. Specifically, we present AgGym, a modular, crop and stress agnostic simulation framework to model the spread of biotic stresses in a field and estimate yield losses with and without chemical treatments. Our validation with real data shows that AgGym can be customized with limited data to simulate yield outcomes under various biotic stress conditions. We further demonstrate that deep reinforcement learning (RL) policies can be trained using AgGym for designing ultra-precise biotic stress mitigation strategies with potential to increase yield recovery with less chemicals and lower cost. Our proposed framework enables personalized decision support that can transform biotic stress management from being schedule based and reactive to opportunistic and prescriptive. We also release the AgGym software implementation as a community resource and invite experts to contribute to this open-sourced publicly available modular environment framework. The source code can be accessed at: https://github.com/SCSLabISU/AgGym.

en cs.AI, cs.LG
arXiv Open Access 2024
Data clustering: a fundamental method in data science and management

Tai Dinh, Wong Hauchi, Daniil Lisik et al.

This paper explores the critical role of data clustering in data science, emphasizing its methodologies, tools, and diverse applications. Traditional techniques, such as partitional and hierarchical clustering, are analyzed alongside advanced approaches such as data stream, density-based, graph-based, and model-based clustering for handling complex structured datasets. The paper highlights key principles underpinning clustering, outlines widely used tools and frameworks, introduces the workflow of clustering in data science, discusses challenges in practical implementation, and examines various applications of clustering. By focusing on these foundations and applications, the discussion underscores clustering's transformative potential. The paper concludes with insights into future research directions, emphasizing clustering's role in driving innovation and enabling data-driven decision-making.
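
As a concrete instance of the partitional techniques the abstract discusses, here is a minimal, dependency-free sketch of Lloyd's k-means in Python; the deterministic initialization from the first k points and the toy 2-D data are illustrative choices, not anything taken from the paper (k-means++ initialization would be the more robust practical choice).

```python
def kmeans(points, k, iters=100):
    """Plain Lloyd's algorithm on 2-D points; returns (centroids, labels).
    Deterministic init from the first k points, for reproducibility."""
    centroids = list(points[:k])
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                    (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        # Update step: move each centroid to its cluster mean (keep it if empty).
        new = []
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                new.append((sum(x for x, _ in members) / len(members),
                            sum(y for _, y in members) / len(members)))
            else:
                new.append(centroids[c])
        if new == centroids:        # assignments stable: converged
            break
        centroids = new
    return centroids, labels

# Two well-separated 2-D blobs: k-means should split them cleanly.
data = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1),
        (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
cents, labs = kmeans(data, 2)
```

The density-based and graph-based methods the paper covers address exactly the cases this sketch handles poorly: non-convex clusters and unknown k.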

arXiv Open Access 2024
Approaching Emergent Risks: An Exploratory Study into Artificial Intelligence Risk Management within Financial Organisations

Finlay McGee

Globally, artificial intelligence (AI) implementation is growing, holding the capability to fundamentally alter organisational processes and decision making. Simultaneously, this brings a multitude of emergent risks to organisations, exposing vulnerabilities in their extant risk management frameworks. This necessitates a greater understanding of how organisations can position themselves in response. This issue is particularly pertinent within the financial sector with relatively mature AI applications matched with severe societal repercussions of potential risk events. Despite this, academic risk management literature is trailing behind the speed of AI implementation. Adopting a management perspective, this study aims to contribute to the understanding of AI risk management in organisations through an exploratory empirical investigation into these practices. In-depth insights are gained through interviews with nine practitioners from different organisations within the UK financial sector. Through examining areas of organisational convergence and divergence, the findings of this study unearth levels of risk management framework readiness and prevailing approaches to risk management at both a processual and organisational level. Whilst enhancing the developing literature concerning AI risk management within organisations, the study simultaneously offers a practical contribution, providing key areas of guidance for practitioners in the operational development of AI risk management frameworks.

en cs.CY
DOAJ Open Access 2024
Impacts of Natural Gas Pipeline Congestion on the Integrated Gas–Electricity Market in Peru

Richard Navarro, Hugo Rojas, Jaime E. Luyo et al.

This paper investigates the impact of natural gas pipeline congestion on the integrated gas–electricity market in Peru, focusing on short-term market dynamics. By simulating congestion by reducing the primary natural gas pipeline’s capacity, the study reveals significant patterns in production costs and load flows within the electrical network. The research highlights the critical interdependencies between natural gas and electricity systems, emphasizing how constraints in one network can directly affect the other. The findings underscore the importance of coordinated management of these interconnected systems to optimize economic dispatch and ensure the reliability of both gas and electricity grids. The study also proposes strategic public policy interventions to mitigate the financial and physical impacts of pipeline congestion, contributing to more efficient and resilient energy market operations.

DOAJ Open Access 2024
Theory of Resource-Based View (RBV): Integrated Framework of Distinctive Capability in University Performance

Sri Wartini, Widya Prananta, Bogy Febriatmoko et al.

This research aims to examine the role of corporate strategy as a mediator between unique capabilities and environmental turbulence on the one hand and the performance of Legal Entity Universities (Perguruan Tinggi Negeri Berbadan Hukum - PTNBH) on the other. Corporate strategy is important in channeling unique capabilities and environmental volatility toward performance. The population of this study comprised leaders at PTNBH universities in Central Java. Respondents were selected through purposive sampling using two criteria: the institution is a state university with legal-entity status, and the respondent has held a structural position for at least one year. The sample totaled 109 respondents. The results show that unique capabilities do not directly influence university performance but do directly influence corporate strategy, while environmental turbulence directly influences both corporate performance and strategy. The unique capabilities that universities possess are valuable capital amid increasingly fierce competition because they can be used to develop higher-education strategies. Indirect (mediation) testing shows that corporate strategy mediates between unique capabilities and environmental turbulence on the performance of PTNBH universities. Future researchers can combine or add other predictor variables to generalize this research.

Production management. Operations management, Management. Industrial management
CrossRef Open Access 2023
Determining maximum shipping age requirements for shelf life and food waste management

Arzum Akkas, Dorothee Honhon

Products approaching the end of their shelf life on retail store shelves are more likely to result in food waste. For this reason, manufacturers establish shipping policies related to the age of the products that leave their warehouses, that is, they set a maximum age beyond which shipping to retail stores is no longer allowed. In practice, most existing policies are simple one-size-fits-all rules that do not accommodate the varying characteristics of the products. We offer a framework that manufacturers can use to determine maximum shipping age thresholds based on a Markov chain model, where the objective is to maximize profit, net of the cost of expiration at the warehouse and retail stores. We derive analytical insights about the impact of a shipping age threshold on food waste in the supply chain and obtain sufficient conditions under which a maximum shipping age threshold is suboptimal within the class of Ship-Oldest-First (a.k.a. First-In-First-Out) issuing policies. We also numerically investigate the relationship between different system parameters (e.g., demand rate, waste cost at warehouse and retail, warehouse inventory, total shelf life) and the optimal shipping age threshold. Using real data from our industry collaborator, we compute the optimal shipping age thresholds at the stock-keeping-unit (SKU) level for over 450 products and find that 9–10% of the SKUs currently have suboptimal shipping age thresholds. This presents an opportunity to improve profits by up to 8.7% and reduce food waste by up to 14.7%. These improvements correspond to up to $292,561 savings and 1846 truckloads of waste reduction annually. Our framework can be adopted by any firm and satisfies a much-emphasized need in industry to control food waste through shelf life management.

25 citations en
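
The paper's Markov chain model is not reproduced in its abstract, but the trade-off it optimizes can be sketched with a crude Monte Carlo toy: a FIFO warehouse where units older than a maximum shipping age can no longer leave and become waste. All parameters below (arrival rate, demand distribution, horizon) are invented for illustration and are not the paper's calibration.

```python
import random

def simulate(threshold, days=10_000, arrivals=3, seed=1):
    """Toy FIFO warehouse: `arrivals` fresh units arrive daily; random retail
    demand ships the oldest eligible units (Ship-Oldest-First); units whose
    age exceeds `threshold` can never ship and are counted as waste.
    Returns (shipped, wasted) totals over the horizon."""
    rng = random.Random(seed)
    warehouse = []                      # unit ages, kept oldest-first
    shipped = wasted = 0
    for _ in range(days):
        # Age existing stock by one day, then receive fresh units (age 0).
        warehouse = sorted((a + 1 for a in warehouse), reverse=True) + [0] * arrivals
        # Units past the max shipping age can never leave the warehouse.
        wasted += sum(1 for a in warehouse if a > threshold)
        warehouse = [a for a in warehouse if a <= threshold]
        demand = rng.randint(0, 6)      # crude demand; mean 3 matches mean supply
        ship = min(demand, len(warehouse))
        shipped += ship
        warehouse = warehouse[ship:]    # oldest-first order → ship from the front
    return shipped, wasted
```

Under these toy dynamics a tight threshold wastes far more product than a loose one, which is the shipping-age/waste tension the paper's framework quantifies and optimizes.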
CrossRef Open Access 2023
Optimal cardinal contests

Goutham Takasi, Milind Dawande, Ganesh Janakiraman

We study the design of crowdsourcing contests in settings where the outputs of the contestants are quantifiable, for example, a data science challenge. This setting is in contrast to those where the output is only qualitative and cannot be objectively quantified, for example, when the goal of the contest is to design a logo. The literature on crowdsourcing contests focuses largely on ordinal contests, where contestants' outputs are ranked by the designer and awards are based on relative ranks. Such contests are ideally suited for the latter setting, where output is qualitative. For our setting (quantitative output), it is possible to design cardinal contests, where awards could be based on the actual outputs and not on their ranking alone—thus, the family of cardinal contests includes the family of ordinal contests. We study the problem of designing an optimal cardinal contest. We use mechanism design theory to derive an optimal cardinal mechanism and provide a convenient implementation—a decreasing reward‐meter mechanism—of the optimal contest. We establish the practicality of our mechanism by showing that it is “Obviously Strategy‐Proof,” a recently introduced formal notion of simplicity in the literature. We also compare the optimal cardinal contest with the most popular ordinal contest—namely, the Winner‐Takes‐All (WTA) contest—along several metrics. In particular, the optimal cardinal mechanism delivers a superior expected best output, whereas the WTA contest yields a greater expected contestant welfare. Furthermore, under a sufficiently large budget, the contest designer's expected net‐benefit is higher under the optimal cardinal mechanism than under the WTA contest, regardless of the number of contestants in the two mechanisms. Our numerical analysis suggests that, for the contest designer, the average improvement provided by the optimal cardinal mechanism over the WTA contest is about 23%. For a given number of contestants, the benefit of the optimal cardinal mechanism is especially appreciable for projects where the ratio of the designer's utility to agents' cost‐of‐effort falls within a wide practical range. For projects where this ratio is very high, the expected profit of the best WTA contest is reasonably close to that of the optimal cardinal mechanism.

S2 Open Access 2021
Energy value mapping: A novel lean method to integrate energy efficiency into production management

Xuanhao Wen, Huajun Cao, B. Hon et al.

Integrating energy efficiency as a key criterion in production management is critical to increasing energy efficiency while maintaining productivity in manufacturing systems. However, this integration still poses a huge challenge for decision-makers due to a lack of knowledge about the linkage between energy efficiency and productivity. Consequently, related energy-saving potential remains unexploited. In this context, this paper presents an innovative Energy Value Mapping (EVM) method to promote the systematic integration of energy efficiency into production management. The method includes three consecutive phases: (i) energy loss modeling to reveal the coupling relation between energy losses and productivity variables; (ii) lean energy analysis using production-oriented energy performance indicators to highlight energy inefficiencies and indicate improvement directions; (iii) determination of improvement strategies to improve energy efficiency while simultaneously considering traditional production management decisions. Furthermore, an industrial case study of a die-casting plant has demonstrated the effectiveness and practicality of the method, showing its great potential in identifying, visualizing, quantifying, analyzing, and decreasing the energy losses related to production and operations management. The results showed that the overall energy demand of the process chain could be reduced by 6.17%, with energy utilization and time utilization increased by 5.0% and 4.8%, respectively.

55 citations en Computer Science
S2 Open Access 2019
Big Production Enterprise Supply Chain Endogenous Risk Management Based on Blockchain

Yonggui Fu, Jian-ming Zhu

Given the influence of information incompleteness and asymmetry on supply chain operating efficiency, we take the big production enterprise as our object of study and apply blockchain to its supply chain's endogenous risk management, investigating the specific operating mechanism and application value. In the operation of a big production enterprise's supply chain, information asymmetry gives rise to fraud among the business subjects; blockchain, as a decentralized, distributed accounting and data storage technology, can resolve this fraud problem, provide a more accurate basis of decision information for each business section, and enable group decision-making. This paper describes the system structure and smart contract operating mechanism under blockchain consensus authentication as applied to the big production enterprise supply chain, and analyzes it through a case. In view of the limitations of classical blockchain technology in this setting, we construct corresponding blockchain data storage and data access mechanisms. We analyze the economic value of this research in terms of response speed, supply accuracy, cooperation integrity, economic cost of business interaction, supply quality, and supply price. This research provides ideas and a model structure for developing blockchain systems in the supply chain area and promotes applied blockchain research in specific domains.

117 citations en Computer Science, Business
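
The tamper evidence this abstract relies on comes from hash chaining. The minimal sketch below (field names and supply-chain records invented for illustration, with no consensus protocol or networking) shows why altering one record invalidates every later block, which is what deters the fraud among business subjects described above.

```python
import hashlib
import json

def make_block(records, prev_hash):
    """Seal a list of supply-chain records into a block chained by SHA-256."""
    body = {"records": records, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    """Recompute every block hash and check every prev_hash link."""
    for i, blk in enumerate(chain):
        body = {"records": blk["records"], "prev_hash": blk["prev_hash"]}
        if blk["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False                 # block contents were altered
        if i > 0 and blk["prev_hash"] != chain[i - 1]["hash"]:
            return False                 # chain link broken
    return True

genesis = make_block([{"supplier": "S1", "qty": 100}], "0" * 64)
chain = [genesis, make_block([{"supplier": "S2", "qty": 40}], genesis["hash"])]
```

A production system would add the consensus authentication and access-control mechanisms the paper describes; this sketch only captures the integrity property that makes retroactive falsification of supply records detectable.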

Page 35 of 320,877