Results for "Industrial engineering. Management engineering"

Showing 20 of ~11,140,706 results · from CrossRef, DOAJ, Semantic Scholar, arXiv

S2 Open Access 2020
A comprehensive review of engineered biochar: Production, characteristics, and environmental applications

H. Panahi, M. Dehhaghi, Y. Ok et al.

Sustainable management of the environment and agriculture is crucial to protect soil, water, and air amid intensified agricultural practices and large-scale industrial and transportation activities. A promising tool to address these challenges is biochar, a carbonaceous product of biomass pyrolysis. The efficiency of biochar can be improved through physical, chemical, and microbial procedures. Engineered biochar can then be applied in settings ranging from sustainable agriculture to pollution remediation and catalytic reactions. Biochar engineering makes it possible to obtain biochar properties that are optimal for specific applications and/or under specific conditions, harnessing the favorable features of biochar and enhancing its efficiency while minimizing the existing tradeoffs. This review covers the production and applications of engineered biochar by summarizing a large body of research in the field. Unlike previous reviews, it discusses in detail the physical and chemical properties of biochar and the factors affecting them (i.e., biomass nature and pyrolysis conditions). Moreover, the contribution of each physical and chemical activation/modification method to improving biochar characteristics for environmental applications is specifically scrutinized. By providing state-of-the-art knowledge about engineered biochar production, properties, and applications, this review aims to help researchers in this field identify the open problems that must be addressed in future experiments.

373 citations · en · Environmental Science
S2 Open Access 2023
Maintenance 4.0 technologies – new opportunities for sustainability driven maintenance

M. Jasiulewicz-Kaczmarek, S. Legutko, P. Kluk

Digitalization and sustainability are important topics for manufacturing industries, as they affect all parts of the production chain. Various initiatives and approaches have been set up to help companies adopt the principles of the fourth industrial revolution with respect to sustainability. Within these actions, the use of modern maintenance approaches such as Maintenance 4.0 is highlighted as one of the prevailing smart and sustainable manufacturing topics. The goal of this paper is to describe the latest trends in maintenance management from the perspective of the challenges of the fourth industrial revolution and the economic, environmental, and social challenges of sustainable development. Intelligent and sustainable maintenance is considered from three perspectives. The first is the historical perspective, which presents how the approach to maintenance has evolved alongside the development of production engineering. The second is the development perspective, which traces maintenance data and data-driven maintenance technology over time. The third presents maintenance in the context of the dimensions of sustainable development and the potential for including data-driven maintenance technology in addressing the economic, environmental, and social challenges of sustainable production.

97 citations · en · Business
arXiv Open Access 2025
Benchmarking AI Models in Software Engineering: A Review, Search Tool, and Unified Approach for Elevating Benchmark Quality

Roham Koohestani, Philippe de Bekker, Begüm Koç et al.

Benchmarks are essential for unified evaluation and reproducibility. The rapid rise of Artificial Intelligence for Software Engineering (AI4SE) has produced numerous benchmarks for tasks such as code generation and bug repair. However, this proliferation has led to major challenges: (1) fragmented knowledge across tasks, (2) difficulty in selecting contextually relevant benchmarks, (3) lack of standardization in benchmark creation, and (4) flaws that limit utility. Addressing these requires a dual approach: systematically mapping existing benchmarks for informed selection and defining unified guidelines for robust, adaptable benchmark development. We conduct a review of 247 studies, identifying 273 AI4SE benchmarks since 2014. We categorize them, analyze limitations, and expose gaps in current practices. Building on these insights, we introduce BenchScout, an extensible semantic search tool for locating suitable benchmarks. BenchScout employs automated clustering with contextual embeddings of benchmark-related studies, followed by dimensionality reduction. In a user study with 22 participants, BenchScout achieved usability, effectiveness, and intuitiveness scores of 4.5, 4.0, and 4.1 out of 5. To improve benchmarking standards, we propose BenchFrame, a unified framework for enhancing benchmark quality. Applying BenchFrame to HumanEval yielded HumanEvalNext, featuring corrected errors, improved language conversion, higher test coverage, and greater difficulty. Evaluating 10 state-of-the-art code models on HumanEvalNext revealed average pass-at-1 drops of 31.22% and 19.94% relative to HumanEval and HumanEvalPlus, respectively, underscoring the need for continuous benchmark refinement. We further examine BenchFrame's scalability through an agentic pipeline and confirm its generalizability on the MBPP dataset. All review data, user study materials, and enhanced benchmarks are publicly released.
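The pass-at-1 figures quoted above use the standard unbiased pass@k estimator popularized with HumanEval; a minimal sketch, with illustrative sample counts rather than the paper's actual numbers:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations (c of which pass),
    is correct: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws: guaranteed hit
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative: 10 generations per task, 3 of which pass the tests.
print(pass_at_k(10, 3, 1))  # → 0.3 (for k=1 this is simply c/n)
```

For k=1 the estimator reduces to the pass rate c/n, which is why pass-at-1 drops directly reflect harder or stricter test suites.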

en cs.SE, cs.AI
arXiv Open Access 2025
Impostor Phenomenon Among Software Engineers: Investigating Gender Differences and Well-Being

Paloma Guenes, Rafael Tomaz, Bianca Trinkenreich et al.

Research shows that more than half of software professionals experience the Impostor Phenomenon (IP), with a notably higher prevalence among women compared to men. IP can lead to mental health consequences, such as depression and burnout, which can significantly impact personal well-being and software professionals' productivity. This study investigates how IP manifests among software professionals across intersections of gender with race/ethnicity, marital status, number of children, age, and professional experience. Additionally, it examines the well-being of software professionals experiencing IP, providing insights into the interplay between these factors. We analyzed data collected through a theory-driven survey (n = 624) that used validated psychometric instruments to measure IP and well-being in software engineering professionals. We explored the prevalence of IP in the intersections of interest. Additionally, we applied bootstrapping to characterize well-being within our field and statistically tested whether professionals of different genders suffering from IP have lower well-being. The results show that IP occurs more frequently in women and that the prevalence is particularly high among black women as well as among single and childless women. Furthermore, regardless of gender, software engineering professionals suffering from IP have significantly lower well-being. Our findings indicate that effective IP mitigation strategies are needed to improve the well-being of software professionals. Mitigating IP would have particularly positive effects on the well-being of women, who are more frequently affected by IP.

en cs.SE
arXiv Open Access 2025
Embracing Experiential Learning: Hackathons as an Educational Strategy for Shaping Soft Skills in Software Engineering

Allysson Allex Araújo, Marcos Kalinowski, Maria Teresa Baldassarre

In recent years, Software Engineering (SE) scholars and practitioners have emphasized the importance of integrating soft skills into SE education. However, teaching and learning soft skills are complex, as they cannot be acquired passively through raw knowledge acquisition. On the other hand, hackathons have attracted increasing attention due to their experiential, collaborative, and intensive nature, in which certain tasks can resemble real-world software development. This paper discusses hackathons as an educational strategy for shaping SE students' soft skills in practice. Initially, we overview the existing literature on soft skills and hackathons in SE education. Then, we report preliminary empirical evidence from a seven-day hybrid hackathon involving 40 students. We assess how the hackathon experience promoted innovative and creative thinking, collaboration and teamwork, and knowledge application among participants through a structured questionnaire designed to evaluate students' self-awareness. Lastly, our findings and new directions are analyzed through the lens of Self-Determination Theory (SDT), which offers a psychological lens for understanding human behavior. This paper contributes to academia by advocating the potential of hackathons in SE education and proposing concrete plans for future research within SDT. For industry, our discussion has implications for developing soft skills in future SE professionals, thereby enhancing their employability and readiness in the software market.

en cs.SE
arXiv Open Access 2025
A Comparative Study of Delta Parquet, Iceberg, and Hudi for Automotive Data Engineering Use Cases

Dinesh Eswararaj, Ajay Babu Nellipudi, Vandana Kollati

The automotive industry generates vast amounts of data from sensors, telemetry, diagnostics, and real-time operations. Efficient data engineering is critical to handle challenges of latency, scalability, and consistency. Modern data lakehouse formats, namely Delta Parquet, Apache Iceberg, and Apache Hudi, offer features such as ACID transactions, schema enforcement, and real-time ingestion, combining the strengths of data lakes and warehouses to support complex use cases. This study presents a comparative analysis of Delta Parquet, Iceberg, and Hudi using real-world time-series automotive telemetry data with fields such as vehicle ID, timestamp, location, and event metrics. The evaluation considers modeling strategies, partitioning, change data capture (CDC) support, query performance, scalability, data consistency, and ecosystem maturity. Key findings show Delta Parquet provides strong ML readiness and governance, Iceberg delivers high performance for batch analytics and cloud-native workloads, while Hudi is optimized for real-time ingestion and incremental processing. Each format exhibits tradeoffs in query efficiency, time travel, and update semantics. The study offers insights for selecting or combining formats to support fleet management, predictive maintenance, and route optimization. Using structured datasets and realistic queries, the results provide practical guidance for scaling data pipelines and integrating machine learning models in automotive applications.

arXiv Open Access 2025
ACM SIGSOFT SEN Empirical Software Engineering: Introducing Our New Regular Column

Justus Bogner, Roberto Verdecchia

From its early foundations in the 1970s, empirical software engineering (ESE) has evolved into a mature research discipline that embraces a plethora of different topics, methodologies, and industrial practices. Despite its remarkable progress, the ESE research field still needs to keep evolving, as new impediments, shortcomings, and technologies emerge. Research reproducibility, limited external validity, subjectivity of reviews, and porting research results to industrial practice are just some of the drivers for improvements to ESE research. Additionally, several facets of ESE research are not documented very explicitly, which makes it difficult for newcomers to pick them up. With this new regular ACM SIGSOFT SEN column (SEN-ESE), we introduce a venue for discussing meta-aspects of ESE research, ranging from general topics such as the nature of and best practices for replication packages, to more nuanced themes such as statistical methods, interview transcription tools, and publishing interdisciplinary research. Our aim is for the column to be a place where we can regularly spark conversations on ESE topics that might not often be touched upon or are left implicit. Contributions to this column will be grounded in expert interviews, focus groups, surveys, and position pieces, with the goal of encouraging reflection and improvement in how we conduct, communicate, teach, and ultimately improve ESE research. Finally, we invite feedback from the ESE community on challenging, controversial, or underexplored topics, as well as suggestions for voices you would like to hear from. While we cannot promise to act on every idea, we aim to shape this column around community interests and are grateful for all contributions.

arXiv Open Access 2025
Lost in Transition: The Struggle of Women Returning to Software Engineering Research after Career Breaks

Shalini Chakraborty, Sebastian Baltes

The IT industry provides supportive pathways such as returnship programs, coding boot camps, and buddy systems for women re-entering their job after a career break. Academia, however, offers limited opportunities to motivate women to return. We propose a diverse multicultural research project investigating the challenges faced by women with software engineering (SE) backgrounds re-entering academia or related research roles after a career break. Career disruptions due to pregnancy, immigration status, or lack of flexible work options can significantly impact women's career progress, creating barriers for returning as lecturers, professors, or senior researchers. Although many companies promote gender diversity policies, such measures are less prominent and often under-recognized within academic institutions. Our goal is to explore the specific challenges women encounter when re-entering academic roles compared to industry roles; to understand the institutional perspective, including a comparative analysis of existing policies and opportunities in different countries for women to return to the field; and finally, to provide recommendations that support transparent hiring practices. The research project will be carried out in multiple universities and in multiple countries to capture the diverse challenges and policies that vary by location.

arXiv Open Access 2025
Engineering Artificial Intelligence: Framework, Challenges, and Future Direction

Jay Lee, Hanqi Su, Dai-Yan Ji et al.

Over the past ten years, the application of artificial intelligence (AI) and machine learning (ML) in engineering domains has gained significant popularity, showcasing their potential in data-driven contexts. However, the complexity and diversity of engineering problems often require the development of domain-specific AI approaches, which are frequently hindered by a lack of systematic methodologies, scalability, and robustness during the development process. To address this gap, this paper introduces the "ABCDE" as the key elements of Engineering AI and proposes a unified, systematic engineering AI ecosystem framework, including eight essential layers, along with attributes, goals, and applications, to guide the development and deployment of AI solutions for specific engineering needs. Additionally, key challenges are examined, and eight future research directions are highlighted. By providing a comprehensive perspective, this paper aims to advance the strategic implementation of AI, fostering the development of next-generation engineering AI solutions.

en cs.AI, cs.LG
S2 Open Access 2024
Advancements in Manufacturing Technology for the Biotechnology Industry: The Role of Artificial Intelligence and Emerging Trends

Anirudh Mehta, Moazam Niaz, Adeyanju Adetoro et al.

The biotechnology industry is evolving rapidly in terms of production processes. In this study, we evaluate manufacturing technology in bioprocessing, automation, and data integration in biotechnological engineering. We also examine how artificial intelligence (AI) enhances industrial operations, enables predictive maintenance and process optimization, and supports quality control and supply chain management performance. The evaluation shows that artificial intelligence is being incorporated into different stages of biomanufacturing processes, and it unveils current trends within this sector and their implications for future growth. Artificial intelligence is proving to be a productive and novel tool in biotechnology.

31 citations · en
DOAJ Open Access 2024
Investigating Brain Responses to Transcutaneous Electroacupuncture Stimulation: A Deep Learning Approach

Tahereh Vasei, Harshil Gediya, Maryam Ravan et al.

This study investigates the neurophysiological effects of transcutaneous electroacupuncture stimulation (TEAS) on brain activity using advanced machine learning techniques. We analyzed the electroencephalograms (EEG) of 48 participants to assess the brain's response to different TEAS frequencies (2.5, 10, 80, and sham at 160 pulses per second (pps)) across pre-stimulation, during-stimulation, and post-stimulation phases. Our approach introduced several novel aspects. EEGNet, a convolutional neural network specifically designed for EEG signal processing, achieved over 95% classification accuracy in detecting brain responses to the various TEAS frequencies. Classification accuracies remained consistently high (above 92%) across the pre-, during-, and post-stimulation phases, indicating that EEGNet effectively captured the time-based brain responses in each phase. Saliency maps were applied to identify the most critical EEG electrodes, potentially reducing the number needed without sacrificing accuracy. The robustness of EEGNet was assessed across demographic and clinical factors, including sex, age, and psychological state, and the responsiveness of different EEG frequency bands to TEAS was investigated. The results demonstrate that EEGNet classifies EEG responses to TEAS with high accuracy, enhancing its applicability in clinical and therapeutic settings. Notably, gamma-band activity showed the highest sensitivity to TEAS, suggesting significant effects on higher cognitive functions. Saliency mapping revealed that a subset of electrodes (Fp1, Fp2, Fz, F7, F8, T3, T4) could achieve accurate classification, indicating potential for more efficient EEG setups.

Industrial engineering. Management engineering, Electronic computers. Computer science
DOAJ Open Access 2024
Assessing climate trends in the Northwestern Himalayas: a comprehensive analysis of high-resolution gridded and observed datasets

Rayees Ahmed, Taha Shamim, Joshal Kumar Bansal et al.

Climate change poses significant challenges to the Himalayas, a region characterised by its fragile ecosystems and vulnerable communities dependent on environmental resources. Accurate climate data are crucial for understanding regional climatic variations and assessing climate change impacts, particularly in areas with limited observational networks. This study represents a pioneering effort to evaluate climatic fluctuations in the Jhelum basin, located in the Northwestern Himalayas, by utilising a diverse range of gridded meteorological datasets (APHRODITE, CHIRPS, CRU, and IMDAA) alongside observed climate data from the Indian Meteorological Department. The primary goal is to identify the most effective gridded climate data product for regions with limited data and to explore the potential of combining gridded datasets with observed data to understand climatic variability. Findings indicate a consistent upward trend in temperature across all datasets, with varying rates of increase. CRU records a rise of 1 °C in Tmax and 1.6 °C in Tmin, while APHRODITE shows a Tmean increase of approximately 1 °C. IMDAA also reports increases in Tmax and Tmin. Observed mean annual Tmax and Tmin show net increases of 1 °C and 0.6 °C, respectively. Regarding precipitation, all datasets except IMDAA exhibit an increasing trend, contrary to observed data, which decrease from 1266 mm to 1068 mm over 40 years. CHIRPS, CRU, and APHRODITE display increasing trends, while IMDAA aligns closely with observed data but tends to overestimate precipitation by about 30%. Our research identifies IMDAA as the most suitable gridded climate dataset for the Jhelum basin in the Northwestern Himalayas. Despite some discrepancies in precipitation trends, IMDAA closely aligns with observed data, providing valuable insights for scholars and policymakers navigating climate data uncertainties in complex environments. Our findings contribute to informed decision-making and effective climate change mitigation strategies in the region.
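Warming rates like those quoted above are typically derived from a least-squares linear trend fitted to an annual series; a minimal sketch on a made-up temperature series (not the study's data):

```python
def ols_slope(years, values):
    """Ordinary least-squares slope: change in value per year."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Made-up annual mean Tmax series (°C), 1981-1990, with a slight warming drift.
years = list(range(1981, 1991))
tmax = [24.0, 24.1, 24.0, 24.3, 24.2, 24.4, 24.5, 24.4, 24.6, 24.7]
slope = ols_slope(years, tmax)
print(f"{slope * 40:.2f} °C over 40 years")
```

Multiplying the per-year slope by the length of the study window is how a trend is expressed as a net change, e.g. "1 °C over 40 years".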

Environmental technology. Sanitary engineering, Environmental sciences
DOAJ Open Access 2024
Comparative Analysis of CNN Methods for Periapical Radiograph Classification

I Gusti Lanang Trisna Sumantara, Made Windu Antara Kesiman, I Made Gede Sunarya

Periapical radiographs are commonly used by dentists to diagnose dental problems and overall dental health conditions. Dentists' diagnostic abilities vary and may be limited by their visual acuity and individual skills. To address this issue, there is a need for an application capable of computationally recognizing and classifying periapical radiographs. The most commonly used computational method for image recognition is the Convolutional Neural Network (CNN). This study aims to create an application that can classify periapical radiographs and to analyze the capabilities of the CNN method in this classification process. In general, periapical classification is divided into five types: Primary Endo with Secondary Perio, Primary Endodontic Lesion, Primary Perio with Secondary Endo, Primary Periodontal Lesion, and True Combined Lesions. The periapical radiograph classification process was tested using four CNN models: ResNet50v2, EfficientNetB1, MobileNet, and Shallow CNN. The evaluation of the CNN method utilized a confusion-matrix-based technique to generate accuracy, precision, recall, F1-score, and weighted-average F1-score values. Based on the evaluation results, the highest accuracy was achieved by EfficientNetB1 with 82%, followed by ResNet50v2 with 76%, MobileNet with 75%, and Shallow CNN with 71%.
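The confusion-matrix-based evaluation described above reduces to simple counting of true/false positives per class; a minimal pure-Python sketch of per-class F1 and the weighted-average F1 (the labels below are made-up examples, not the paper's data):

```python
from collections import Counter

def per_class_f1(y_true, y_pred):
    """Per-class F1 computed from one-vs-rest confusion-matrix counts."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = {}
    for lbl in labels:
        tp = sum(t == lbl and p == lbl for t, p in zip(y_true, y_pred))
        fp = sum(t != lbl and p == lbl for t, p in zip(y_true, y_pred))
        fn = sum(t == lbl and p != lbl for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[lbl] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

def weighted_f1(y_true, y_pred):
    """F1 averaged over classes, weighted by each class's true support."""
    support = Counter(y_true)
    f1 = per_class_f1(y_true, y_pred)
    return sum(f1[lbl] * n for lbl, n in support.items()) / len(y_true)

y_true = ["endo", "endo", "perio", "perio", "combined"]
y_pred = ["endo", "perio", "perio", "perio", "combined"]
print(round(weighted_f1(y_true, y_pred), 3))  # → 0.787
```

Weighting by support matters here because lesion classes in such datasets are rarely balanced, so a plain macro average would over-represent rare classes.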

Information technology
arXiv Open Access 2024
Apples, Oranges, and Software Engineering: Study Selection Challenges for Secondary Research on Latent Variables

Marvin Wyrich, Marvin Muñoz Barón, Justus Bogner

Software engineering (SE) is full of abstract concepts that are crucial for both researchers and practitioners, such as programming experience, team productivity, code comprehension, and system security. Secondary studies aimed at summarizing research on the influences and consequences of such concepts would therefore be of great value. However, the inability to measure abstract concepts directly poses a challenge for secondary studies: primary studies in SE can operationalize such concepts in many ways. Standardized measurement instruments are rarely available, and even if they are, many researchers do not use them or do not even provide a definition for the studied concept. SE researchers conducting secondary studies therefore have to decide a) which primary studies intended to measure the same construct, and b) how to compare and aggregate vastly different measurements for the same construct. In this experience report, we discuss the challenge of study selection in SE secondary research on latent variables. We report on two instances where we found it particularly challenging to decide which primary studies should be included for comparison and synthesis, so as not to end up comparing apples with oranges. Our report aims to spark a conversation about developing strategies to address this issue systematically and pave the way for more efficient and rigorous secondary studies in software engineering.

arXiv Open Access 2024
GPT-Powered Elicitation Interview Script Generator for Requirements Engineering Training

Binnur Görer, Fatma Başak Aydemir

Elicitation interviews are the most common requirements elicitation technique, and proficiency in conducting these interviews is crucial for requirements elicitation. Traditional training methods, typically limited to textbook learning, may not sufficiently address the practical complexities of interviewing techniques. Practical training with various interview scenarios is important for understanding how to apply theoretical knowledge in real-world contexts. However, there is a shortage of educational interview material, as creating interview scripts requires both technical expertise and creativity. To address this issue, we develop a specialized GPT agent for auto-generating interview scripts. The GPT agent is equipped with a dedicated knowledge base tailored to the guidelines and best practices of requirements elicitation interview procedures. We employ a prompt chaining approach to mitigate the output length constraint of GPT to be able to generate thorough and detailed interview scripts. This involves dividing the interview into sections and crafting distinct prompts for each, allowing for the generation of complete content for each section. The generated scripts are assessed through standard natural language generation evaluation metrics and an expert judgment study, confirming their applicability in requirements engineering training.
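The prompt-chaining approach described above (one prompt per interview section, each conditioned on the script generated so far) can be sketched as follows; `generate` and the section list are hypothetical stand-ins for the authors' GPT agent and knowledge base, not their actual implementation:

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a GPT API call."""
    return f"[script text for: {prompt.splitlines()[-1]}]"

# Assumed interview sections; real guidelines would come from the
# agent's dedicated knowledge base.
SECTIONS = [
    "opening and rapport building",
    "context questions",
    "core elicitation questions",
    "closing and follow-ups",
]

def build_interview_script(scenario: str) -> str:
    script = ""
    for section in SECTIONS:
        # Each prompt carries the scenario plus everything generated so far,
        # so each completion stays short while the full script grows.
        prompt = (f"Scenario: {scenario}\n"
                  f"Script so far:\n{script}\n"
                  f"Write the next section: {section}")
        script += generate(prompt) + "\n"
    return script

print(build_interview_script("library booking system"))
```

Splitting the script into separately prompted sections is what works around the per-response length limit while still letting later sections refer back to earlier ones.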

en cs.SE, cs.AI
DOAJ Open Access 2023
Predicting Travel Insurance Purchases in an Insurance Firm through Machine Learning Methods after COVID-19

Shiuh Tong Lim, Joe Yee Yuan, Khai Wah Khaw et al.

Travel insurance serves as a crucial financial safeguard, offering coverage against unforeseen expenses and losses incurred during travel. With the proliferation of insurance types and the amplified demand for Covid-related coverage, insurance companies face the imperative task of accurately predicting customers' likelihood to purchase insurance. This can assist insurance providers in focusing on the most lucrative clients and boosting sales. By employing advanced machine learning techniques, this study aims to forecast the consumer segments most inclined to acquire travel insurance, allowing targeted strategies to be developed. A comprehensive analysis was carried out on a Kaggle dataset comprising prior clients of a travel insurance firm, utilizing the K-Nearest Neighbors (KNN), Decision Tree Classifier (DT), Support Vector Machines (SVM), Naïve Bayes (NB), Logistic Regression (LR), and Random Forest (RF) models. Extensive data cleaning was done before model building. Performance evaluation was then based on accuracy, F1 score, and the Area Under Curve (AUC) of the Receiver Operating Characteristics (ROC) curve. Notably, KNN outperformed the other models, achieving an accuracy of 0.81, precision of 0.82, recall of 0.82, F1 score of 0.80, and an AUC of 0.78. The findings of this study are a valuable guide for deploying machine learning algorithms in predicting travel insurance purchases, thus empowering insurance companies to target the most lucrative clientele and bolster revenue generation.
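The best-performing model here, K-Nearest Neighbors, simply labels a customer by majority vote among the closest training points; a minimal from-scratch sketch on made-up numeric features (not the Kaggle dataset):

```python
from collections import Counter
from math import dist

def knn_predict(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training points
    (Euclidean distance)."""
    neighbors = sorted(zip(X_train, y_train), key=lambda p: dist(p[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Made-up customer features: (age, annual income in 10k units)
X = [(25, 4.0), (30, 5.5), (45, 9.0), (50, 10.0), (28, 4.5)]
y = ["no", "no", "buy", "buy", "no"]
print(knn_predict(X, y, (48, 9.5)))  # → "buy"
```

In practice features would be scaled first, since KNN's distance metric is sensitive to units; that preprocessing is part of the "extensive data cleaning" step the abstract mentions.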

Electronic computers. Computer science, Information technology
DOAJ Open Access 2023
An Intelligent Fuzzy System for Diabetes Disease Detection using Harris Hawks Optimization

Zahra Asghari Varzaneh, Soodeh Hosseini

This paper proposes a fuzzy expert system for diagnosing diabetes. In the proposed method, the fuzzy rules are first generated based on the Pima Indians Diabetes Database (PIDD), and then the fuzzy membership functions are tuned using Harris Hawks optimization (HHO). The experimental dataset, PIDD, restricted to the 25-30 age group, is initially processed, and the crisp values are converted into fuzzy values in the fuzzification stage. The improved fuzzy expert system increases classification accuracy, outperforming several well-known methods for diabetes disease diagnosis. The HHO algorithm is applied to tune the fuzzy membership functions, determining the best range for each and increasing the accuracy of fuzzy rule classification. The experimental results in terms of accuracy, sensitivity, and specificity show that the proposed expert system outperforms other data mining models in diagnosing diabetes.
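Tuning fuzzy membership functions, as done here with HHO, amounts to optimizing the breakpoints of functions such as the triangular one below; a minimal sketch with made-up breakpoints for a "high glucose" fuzzy set (not the paper's tuned values):

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: 0 outside (a, c), rising linearly
    to a peak of 1 at x == b, then falling linearly back to 0."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Made-up breakpoints for a hypothetical "high glucose" set (mg/dL):
# membership starts at 100, peaks at 160, and ends at 200.
print(triangular(140, a=100, b=160, c=200))
```

An optimizer like HHO would search over (a, b, c) for each input variable, scoring each candidate by the resulting rule-classification accuracy on the training data.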

Information technology, Computer software
DOAJ Open Access 2023
Novel mathematical model for the classification of music and rhythmic genre using deep neural network

Swati A. Patil, G. Pradeepini, Thirupathi Rao Komati

Music Genre Classification (MGC) categorizes music by genre based on auditory information and is commonly employed in music information retrieval. The three main stages of the proposed system are data preparation, feature mining, and categorization. To categorize music genres, a new neural network was deployed. The system uses features from spectrograms derived from short clips of songs as inputs to the proposed architecture, which categorizes each song into the appropriate genre. Extensive experiments on the GTZAN dataset, the Indian Music Genre (IMG) dataset, the Hindustan Music Rhythm (HMR) dataset, and the Tabala dataset show that the proposed strategy is more effective than existing methods. Indian rhythms were used to test the proposed system design, which was also compared with existing algorithms in terms of time and space complexity.

Computer engineering. Computer hardware, Information technology
arXiv Open Access 2023
How Many Papers Should You Review? A Research Synthesis of Systematic Literature Reviews in Software Engineering

Xiaofeng Wang, Henry Edison, Dron Khanna et al.

[Context] Systematic Literature Review (SLR) has been a major type of study published in Software Engineering (SE) venues for about two decades. However, there is a lack of understanding of whether an SLR is really needed in comparison to a more conventional literature review. Very often, SE researchers embark on an SLR with such doubts. We aspire to provide more understanding of when an SLR in SE should be conducted. [Objective] The first step of our investigation was focused on the dataset, i.e., the reviewed papers, in an SLR, which indicates the development of a research topic or area. The objective of this step is to provide a better understanding of the characteristics of the datasets of SLRs in SE. [Method] A research synthesis was conducted on a sample of 170 SLRs published in top-tier SE journals. We extracted and analysed the quantitative attributes of the datasets of these SLRs. [Results] The findings show that the median size of the datasets in our sample is 57 reviewed papers, and the median review period covered is 14 years. The number of reviewed papers and review period have a very weak and non-significant positive correlation. [Conclusions] The results of our study can be used by SE researchers as an indicator or benchmark to understand whether an SLR is conducted at a good time.
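The "very weak positive correlation" reported between the number of reviewed papers and the review period can be checked with the standard Pearson formula; a minimal pure-Python sketch on made-up SLR numbers (chosen to mimic the reported median of 57, not the study's actual sample):

```python
from math import sqrt
from statistics import median, mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up SLRs: (number of reviewed papers, review period in years)
papers  = [30, 57, 45, 120, 57, 80]
periods = [12, 14, 20, 15, 10, 14]
print(median(papers), round(pearson(papers, periods), 2))
```

A coefficient near 0 (as with these made-up numbers) is what "very weak and non-significant positive correlation" means in practice: longer review periods do not reliably come with larger datasets.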

en cs.SE

Page 14 of 557,036