Results for "Industrial psychology"

Showing 20 of ~4,864,173 results · from arXiv, DOAJ, CrossRef, Semantic Scholar

S2 Open Access 2020
Array programming with NumPy

Charles R. Harris, K. Millman, S. Walt et al.

Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves and in the first imaging of a black hole. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis. NumPy is the primary array programming library for Python; here its fundamental concepts are reviewed and its evolution into a flexible interoperability layer between increasingly specialized computational libraries is discussed.

19,576 citations en Computer Science, Mathematics
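The array-programming style the abstract describes can be illustrated with a minimal sketch (generic NumPy usage, not code from the paper; the sensor values are invented):

```python
import numpy as np

# Vectorized operations act on whole arrays at once, replacing explicit loops.
temps_c = np.array([12.0, 18.5, 21.0, 9.5])   # hypothetical sensor readings
temps_f = temps_c * 9 / 5 + 32                # broadcasting: scalar ops apply elementwise

# Aggregations reduce along axes without Python-level iteration.
mean_f = temps_f.mean()
```

The same pattern (elementwise arithmetic plus axis-wise reductions) scales unchanged from four values to millions, which is what makes the paradigm attractive across the fields the abstract lists.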
arXiv Open Access 2026
Template-Based Feature Aggregation Network for Industrial Anomaly Detection

Wei Luo, Haiming Yao, Wenyong Yu

Industrial anomaly detection plays a crucial role in ensuring product quality control. Therefore, proposing an effective anomaly detection model is of great significance. While existing feature-reconstruction methods have demonstrated excellent performance, they face challenges with shortcut learning, which can lead to undesirable reconstruction of anomalous features. To address this concern, we present a novel feature-reconstruction model called the Template-based Feature Aggregation Network (TFA-Net) for anomaly detection via template-based feature aggregation. Specifically, TFA-Net first extracts multiple hierarchical features from a pre-trained convolutional neural network for a fixed template image and an input image. Instead of directly reconstructing input features, TFA-Net aggregates them onto the template features, effectively filtering out anomalous features that exhibit low similarity to normal template features. Next, TFA-Net utilizes the template features that have already fused normal features in the input features to refine feature details and obtain the reconstructed feature map. Finally, the defective regions can be located by comparing the differences between the input and reconstructed features. Additionally, a random masking strategy for input features is employed to enhance the overall inspection performance of the model. Our template-based feature aggregation schema yields a nontrivial and meaningful feature reconstruction task. The simple, yet efficient, TFA-Net exhibits state-of-the-art detection performance on various real-world industrial datasets. Additionally, it fulfills the real-time demands of industrial scenarios, rendering it highly suitable for practical applications in the industry. Code is available at https://github.com/luow23/TFA-Net.

en cs.CV
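A heavily simplified sketch of the aggregation idea, assuming cosine similarity and a plain keep-or-replace rule (the similarity measure, threshold, and shapes are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def template_aggregate(input_feats, template_feats, tau=0.5):
    """Keep an input feature vector only if its best cosine similarity to
    any template vector exceeds tau; otherwise replace it with the closest
    template vector, filtering out anomalous features."""
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    sims = unit(input_feats) @ unit(template_feats).T  # (n_input, n_template)
    best = sims.argmax(axis=1)                         # closest template per input
    keep = sims.max(axis=1) >= tau                     # normal-looking features survive
    out = np.where(keep[:, None], input_feats, template_feats[best])
    return out, keep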
arXiv Open Access 2026
EvoOpt-LLM: Evolving industrial optimization models with large language models

Yiliu He, Tianle Li, Binghao Ji et al.

Optimization modeling via mixed-integer linear programming (MILP) is fundamental to industrial planning and scheduling, yet translating natural-language requirements into solver-executable models and maintaining them under evolving business rules remains highly expertise-intensive. While large language models (LLMs) offer promising avenues for automation, existing methods often suffer from low data efficiency, limited solver-level validity, and poor scalability to industrial-scale problems. To address these challenges, we present EvoOpt-LLM, a unified LLM-based framework supporting the full lifecycle of industrial optimization modeling, including automated model construction, dynamic business-constraint injection, and end-to-end variable pruning. Built on a 7B-parameter LLM and adapted via parameter-efficient LoRA fine-tuning, EvoOpt-LLM achieves a generation rate of 91% and an executability rate of 65.9% with only 3,000 training samples, with critical performance gains emerging under 1,500 samples. The constraint injection module reliably augments existing MILP models while preserving original objectives, and the variable pruning module enhances computational efficiency, achieving an F1 score of ~0.56 on medium-sized LP models with only 400 samples. EvoOpt-LLM demonstrates a practical, data-efficient approach to industrial optimization modeling, reducing reliance on expert intervention while improving adaptability and solver efficiency.

en cs.AI
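The reported pruning F1 is the standard set-overlap metric; a minimal sketch, assuming the pruner's decisions and the ground truth are both given as sets of variable names (the evaluation protocol here is an assumption, not the paper's code):

```python
def pruning_f1(predicted, actual):
    """F1 score of a variable-pruning decision: `predicted` is the set of
    variables the pruner removed, `actual` the set truly safe to remove."""
    if not predicted or not actual:
        return 0.0
    tp = len(predicted & actual)          # correctly pruned variables
    precision = tp / len(predicted)
    recall = tp / len(actual)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```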
arXiv Open Access 2025
LR-IAD: Mask-Free Industrial Anomaly Detection with Logical Reasoning

Peijian Zeng, Feiyan Pang, Zhanbo Wang et al.

Industrial Anomaly Detection (IAD) is critical for ensuring product quality by identifying defects. Traditional methods such as feature embedding and reconstruction-based approaches require large datasets and struggle with scalability. Existing vision-language models (VLMs) and Multimodal Large Language Models (MLLMs) address some limitations but rely on mask annotations, leading to high implementation costs and false positives. Additionally, industrial datasets like MVTec-AD and VisA suffer from severe class imbalance, with defect samples constituting only 23.8% and 11.1% of total data respectively. To address these challenges, we propose a reward function that dynamically prioritizes rare defect patterns during training to handle class imbalance. We also introduce a mask-free reasoning framework using Chain of Thought (CoT) and Group Relative Policy Optimization (GRPO) mechanisms, enabling anomaly detection directly from raw images without annotated masks. This approach generates interpretable step-by-step explanations for defect localization. Our method achieves state-of-the-art performance, outperforming prior approaches by 36% in accuracy on MVTec-AD and 16% on VisA. By eliminating mask dependency and reducing costs while providing explainable outputs, this work advances industrial anomaly detection and supports scalable quality control in manufacturing. Code to reproduce the experiment is available at https://github.com/LilaKen/LR-IAD.

en cs.CV
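One common way to make a reward "dynamically prioritize rare defect patterns" is inverse-frequency weighting; a sketch of that generic idea (not the paper's actual reward function):

```python
def defect_reward_weight(counts, label):
    """Weight a training sample by the inverse frequency of its class, so
    rare defect classes contribute more to the reward signal.

    counts: dict mapping class label -> number of samples of that class.
    Balanced classes all receive weight 1.0.
    """
    total = sum(counts.values())
    return total / (len(counts) * counts[label])
```

With the MVTec-AD imbalance quoted in the abstract (roughly 23.8% defects), defect samples would be weighted about three times higher than normal ones under this scheme.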
arXiv Open Access 2025
A Survey on Web Testing: On the Rise of AI and Applications in Industry

Iva Kertusha, Gebremariem Assress, Onur Duman et al.

Web application testing is an essential practice to ensure the reliability, security, and performance of web systems in an increasingly digital world. This paper presents a systematic literature survey focusing on web testing methodologies, tools, and trends from 2014 to 2025. By analyzing 259 research papers, the survey identifies key trends, demographics, contributions, tools, challenges, and innovations in this domain. In addition, the survey analyzes the experimental setups adopted by the studies, including the number of participants involved and the outcomes of the experiments. Our results show that web testing research has been highly active, with ICST as the leading venue. Most studies focus on novel techniques, emphasizing automation in black-box testing. Selenium is the most widely used tool, while industrial adoption and human studies remain comparatively limited. The findings provide a detailed overview of trends, advancements, and challenges in web testing research, the evolution of automated testing methods, the role of artificial intelligence in test case generation, and gaps in current research. Special attention was given to the level of collaboration and engagement with the industry. A positive trend in using industrial systems is observed, though many tools lack open-source availability.

en cs.SE
arXiv Open Access 2025
Digital Transformation in the Petrochemical Industry – Challenges and Opportunities in the Implementation of IoT Technologies

Noel Portillo

The petrochemical industry faces significant technological, environmental, occupational safety, and financial challenges. Since its emergence in the 1920s, technologies that were once innovative have now become obsolete. However, factors such as the protection of trade secrets in industrial processes, limited budgets for research and development, doubts about the reliability of new technologies, and resistance to change from decision-makers have hindered the adoption of new approaches, such as the use of IoT devices. This paper addresses the challenges and opportunities presented by the research, development, and implementation of these technologies in the industry. It also analyzes the investment in research and development made by companies in the sector in recent years and provides a review of current research and implementations related to Industry 4.0.

en cs.CY
arXiv Open Access 2025
Causal Inference based Transfer Learning with LLMs: An Efficient Framework for Industrial RUL Prediction

Yan Chen, Cheng Liu

Accurate prediction of Remaining Useful Life (RUL) for complex industrial machinery is critical for the reliability and maintenance of mechatronic systems, but it is challenged by high-dimensional, noisy sensor data. We propose the Causal-Informed Data Pruning Framework (CIDPF), which pioneers the use of causal inference to identify sensor signals with robust causal relationships to RUL through PCMCI-based stability analysis, while a Gaussian Mixture Model (GMM) screens for anomalies. By training on only 10% of the pruned data, CIDPF fine-tunes pre-trained Large Language Models (LLMs) using parameter-efficient strategies, reducing training time by 90% compared to traditional approaches. Experiments on the N-CMAPSS dataset demonstrate that CIDPF achieves a 26% lower RMSE than existing methods and a 25% improvement over full-data baselines, showcasing superior accuracy and computational efficiency in industrial mechatronic systems. The framework's adaptability to multi-condition scenarios further underscores its practicality for industrial deployment.

en eess.SP
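The RMSE figures quoted above use the usual root-mean-square error; a minimal sketch of the metric and of how a relative improvement such as "26% lower" is computed (generic formulas, not the paper's code):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error, the standard metric for comparing RUL predictors."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def relative_improvement(baseline, ours):
    """Fractional reduction of an error metric relative to a baseline."""
    return (baseline - ours) / baseline
```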
arXiv Open Access 2025
Improving Industrial Injection Molding Processes with Explainable AI for Quality Classification

Georg Rottenwalter, Marcel Tilly, Victor Owolabi

Machine learning is an essential tool for optimizing industrial quality control processes. However, the complexity of machine learning models often limits their practical applicability due to a lack of interpretability. Additionally, many industrial machines lack comprehensive sensor technology, making data acquisition incomplete and challenging. Explainable Artificial Intelligence (XAI) offers a solution by providing insights into model decision-making and identifying the most relevant features for classification. In this paper, we investigate the impact of feature reduction using XAI techniques on the quality classification of injection-molded parts. We apply SHAP, Grad-CAM, and LIME to analyze feature importance in a Long Short-Term Memory model trained on real production data. By reducing the original 19 input features to 9 and 6, we evaluate the trade-off between model accuracy, inference speed, and interpretability. Our results show that reducing features can improve generalization while maintaining high classification performance, with a small increase in inference speed. This approach enhances the feasibility of AI-driven quality control, particularly for industrial settings with limited sensor capabilities, and paves the way for more efficient and interpretable machine learning applications in manufacturing.
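Attribution-based feature reduction typically means ranking features by mean absolute importance across samples (the aggregation that SHAP summary plots use) and keeping the top k; a generic sketch under that assumption, not the paper's pipeline:

```python
def top_k_features(attributions, names, k):
    """Rank features by mean absolute attribution and keep the k strongest.

    attributions: list of per-sample attribution rows, one value per feature.
    names: feature names aligned with the columns of `attributions`.
    """
    n = len(names)
    mean_abs = [sum(abs(row[j]) for row in attributions) / len(attributions)
                for j in range(n)]
    ranked = sorted(range(n), key=lambda j: mean_abs[j], reverse=True)
    return [names[j] for j in ranked[:k]]
```

Retraining on only the surviving features is what produces the 19-to-9 and 19-to-6 reductions the abstract evaluates.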

arXiv Open Access 2025
Large Language Model for Extracting Complex Contract Information in Industrial Scenes

Yunyang Cao, Yanjun Li, Silong Dai

This paper proposes a high-quality dataset construction method for complex contract information extraction tasks in industrial scenarios and fine-tunes a large language model based on this dataset. Firstly, cluster analysis is performed on industrial contract texts, and GPT-4 and GPT-3.5 are used to extract key information from the original contract data, obtaining high-quality data annotations. Secondly, data augmentation is achieved by constructing new texts, and GPT-3.5 generates unstructured contract texts from randomly combined keywords, improving model robustness. Finally, the large language model is fine-tuned based on the high-quality dataset. Experimental results show that the model achieves excellent overall performance while ensuring high field recall and precision and considering parsing efficiency. LoRA, data balancing, and data augmentation effectively enhance model accuracy and robustness. The proposed method provides a novel and efficient solution for industrial contract information extraction tasks.

en cs.CL
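The keyword-combination augmentation step might look like the following sketch; the prompt wording, parameter names, and sampling scheme are invented for illustration:

```python
import random

def make_augmentation_prompts(keywords, n, k=3, seed=0):
    """Build n generation prompts, each from k randomly combined contract
    keywords, to be fed to an LLM that writes synthetic unstructured
    contract text (hypothetical wording, not the paper's prompts)."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    prompts = []
    for _ in range(n):
        combo = rng.sample(keywords, k)
        prompts.append("Write an unstructured contract clause mentioning: "
                       + ", ".join(combo))
    return prompts
```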
DOAJ Open Access 2025
Enhancing Interagency Coordination in Smart Bureaucracy for New Capital City: Case Study of Nusantara

Dana Indra Sensuse, Eko Yon Handri, Muhammad Mishbah et al.

The development of a new capital city presents complex challenges, especially in terms of the complicated bureaucratic flow in realizing an efficient, sustainable, and people-centered smart city as the new identity of a country. This study explores the complex issues faced in the development of Nusantara, the new capital city of Indonesia, and proposes a smart bureaucracy model to improve interagency coordination and transform modern bureaucratic processes. Using a hybrid methodological approach that combines soft systems methodology (SSM) with the TELOS framework and MoSCoW method, this study conducted extensive stakeholder interviews with officials from the Nusantara Authority, local governments, and academics, in addition to a comparative analysis with Putrajaya and Naypyidaw. This study produced a five-layer smart bureaucracy model consisting of strategic layers, interagency coordination, operations, monitoring and evaluation, and digital infrastructure. Implementation is structured in three phases: building basic elements through comprehensive regulations and a single authority; strengthening basic systems through the development of ICT infrastructure; and implementing advanced features, including a smart city platform. This model offers a comprehensive solution to improve interagency coordination through technology integration, considering organizational readiness and resource availability, and provides valuable insights for developing countries implementing similar initiatives in their national capitals.

Psychology, Information technology
DOAJ Open Access 2025
Adopting the Future: How Generative AI, Agentic AI, and Blockchain Are Redefining Success for Dubai's Emerging Start-Ups

Mohammad Alhur, Firas Omar, Raed Alqirem et al.

This study investigates the uptake of emerging technologies—generative AI, agentic AI, and blockchain—by start-up companies in Dubai, a rapidly emerging innovation hub in the MENA region. Based on the innovation diffusion theory (IDT) and the technology–organization–environment (TOE) framework, the study investigates how technology, organizational, and environmental determinants influence technology adoption by start-ups. Specifically, it takes into account trialability, observability, relative advantage, and compatibility (technological context); stakeholder dynamics, innovation capability, and organizational resources (organizational context); and competition intensity and regulatory environment (environmental context). Citing recent criticisms of IDT, the study removes complexity as a determinant considering the growing user-centric design of technologies like GenAI. Using a mixed-methods design, the study collects data through interviews and surveys from start-up founders and technology leads. The outcomes should unveil the way start-ups view and embed disruptive technologies and equip policymakers, entrepreneurs, and strategists for innovations with lessons to inform real-world applications. Placing the study in Dubai's vibrant setting, this study provides theoretical and empirical contributions to technology diffusion in the emerging economies, where the flexibility of regulation, preparedness of infrastructure, and competitive stress all come together to create innovation trajectories.

Psychology, Information technology
DOAJ Open Access 2025
Geospatial and Linguistic Analysis of Twitter Behavioral Trends: Examining the Impact of Socioeconomic Development on Social Media Use

Shahab Saquib Sohail, Mohammad Muzammil Khan, Dag Øivind Madsen et al.

This paper presents an analysis of Twitter (now X), one of the largest social media platforms, aimed at exploring behavioral trends. The objective of this study is to examine geographical and language differences, frequent user patterns, and contributing countries on Twitter. Utilizing a dataset comprising 49,945,240 tweets from 12,845,715 users across 237 countries and 64 languages, we investigate the relationship between human development indices and tweet generation rates. Our findings reveal that countries with higher human development indices tend to generate more tweets, supporting theories of social change and cultural evolution. Additionally, we identify notable linguistic trends, with users predominantly tweeting in native languages, except in countries like India, where English dominates despite linguistic diversity. We also observe that a select group of countries, particularly the United States, accounts for a significant portion of retweets, highlighting retweeting as a widespread behavior in contrast to original tweet creation. These insights contribute to a broader understanding of user behavior on Twitter and provide a nuanced view of the interplay between socioeconomic factors and digital engagement on a global scale.

Psychology, Information technology
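A relationship between development index and tweet generation rate is typically quantified with a correlation coefficient; a plain Pearson sketch (the paper's exact statistic is not specified here, and this naive version assumes non-constant inputs):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```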
arXiv Open Access 2024
LLMPot: Dynamically Configured LLM-based Honeypot for Industrial Protocol and Physical Process Emulation

Christoforos Vasilatos, Dunia J. Mahboobeh, Hithem Lamri et al.

Industrial Control Systems (ICS) are extensively used in critical infrastructures ensuring efficient, reliable, and continuous operations. However, their increasing connectivity and addition of advanced features make them vulnerable to cyber threats, potentially leading to severe disruptions in essential services. In this context, honeypots play a vital role by acting as decoy targets within ICS networks, or on the Internet, helping to detect, log, analyze, and develop mitigations for ICS-specific cyber threats. Deploying ICS honeypots, however, is challenging due to the necessity of accurately replicating industrial protocols and device characteristics, a crucial requirement for effectively mimicking the unique operational behavior of different industrial systems. Moreover, this challenge is compounded by the significant manual effort required in also mimicking the control logic the PLC would execute, in order to capture attacker traffic aiming to disrupt critical infrastructure operations. In this paper, we propose LLMPot, a novel approach for designing honeypots in ICS networks harnessing the potency of Large Language Models (LLMs). LLMPot aims to automate and optimize the creation of realistic honeypots with vendor-agnostic configurations, and for any control logic, aiming to eliminate the manual effort and specialized knowledge traditionally required in this domain. We conducted extensive experiments focusing on a wide array of parameters, demonstrating that our LLM-based approach can effectively create honeypot devices implementing different industrial protocols and diverse control logic.

en cs.CR, cs.LG
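At its core, protocol emulation means answering reads and writes against a simulated device state the way a real PLC would; a toy sketch with an invented message format (real ICS protocols such as Modbus are far more involved, and this is not LLMPot code):

```python
def handle_request(registers, request):
    """Answer a (operation, address[, value]) tuple against a simulated
    register map; unknown operations get an error, as a decoy device
    must respond plausibly to arbitrary attacker traffic."""
    op, addr = request[0], request[1]
    if op == "read":
        return ("ok", registers.get(addr, 0))  # unset registers read as 0
    if op == "write":
        registers[addr] = request[2]
        return ("ok", request[2])
    return ("err", "unsupported function")
```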
arXiv Open Access 2024
Diff-MTS: Temporal-Augmented Conditional Diffusion-based AIGC for Industrial Time Series Towards the Large Model Era

Lei Ren, Haiteng Wang, Yuanjun Laili

Industrial Multivariate Time Series (MTS) data provide a critical view of the industrial field, allowing people to understand the state of machines. However, due to data collection difficulty and privacy concerns, available data for building industrial intelligence and industrial large models is far from sufficient. Therefore, industrial time series data generation is of great importance. Existing research usually applies Generative Adversarial Networks (GANs) to generate MTS. However, GANs suffer from unstable training process due to the joint training of the generator and discriminator. This paper proposes a temporal-augmented conditional adaptive diffusion model, termed Diff-MTS, for MTS generation. It aims to better handle the complex temporal dependencies and dynamics of MTS data. Specifically, a conditional Adaptive Maximum-Mean Discrepancy (Ada-MMD) method has been proposed for the controlled generation of MTS, which does not require a classifier to control the generation. It improves the condition consistency of the diffusion model. Moreover, a Temporal Decomposition Reconstruction UNet (TDR-UNet) is established to capture complex temporal patterns and further improve the quality of the synthetic time series. Comprehensive experiments on the C-MAPSS and FEMTO datasets demonstrate that the proposed Diff-MTS performs substantially better in terms of diversity, fidelity, and utility compared with GAN-based methods. These results show that Diff-MTS facilitates the generation of industrial data, contributing to intelligent maintenance and the construction of industrial large models.

en cs.LG, cs.AI
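Maximum-Mean Discrepancy, the statistic behind the Ada-MMD condition loss, has a standard Gaussian-kernel estimator; a sketch of the textbook biased version (the paper's adaptive, conditional variant differs):

```python
import numpy as np

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y (rows are
    samples) under a Gaussian kernel: mean(Kxx) + mean(Kyy) - 2*mean(Kxy).
    Zero when the two samples are identical."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```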
arXiv Open Access 2024
Enhancing Depression-Diagnosis-Oriented Chat with Psychological State Tracking

Yiyang Gu, Yougen Zhou, Qin Chen et al.

Depression-diagnosis-oriented chat aims to guide patients in self-expression to collect key symptoms for depression detection. Recent work focuses on combining task-oriented dialogue and chitchat to simulate the interview-based depression diagnosis. However, these methods cannot fully capture the changing information, feelings, or symptoms of the patient during dialogues. Moreover, no explicit framework has been explored to guide the dialogue, which results in some unproductive exchanges that affect the experience. In this paper, we propose to integrate Psychological State Tracking (POST) within the large language model (LLM) to explicitly guide depression-diagnosis-oriented chat. Specifically, the state is adapted from a psychological theoretical model, which consists of four components, namely Stage, Information, Summary and Next. We fine-tune an LLM model to generate the dynamic psychological state, which is further used to assist response generation at each turn to simulate the psychiatrist. Experimental results on the existing benchmark show that our proposed method boosts the performance of all subtasks in depression-diagnosis-oriented chat.

en cs.HC, cs.AI
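The four-component state can be sketched as a small record; the field names come from the abstract, while the value types and the prompt rendering are assumptions:

```python
from dataclasses import dataclass, asdict

@dataclass
class PsychologicalState:
    """The four components POST tracks at each dialogue turn."""
    stage: str        # where the interview is in the diagnostic process
    information: str  # symptoms and facts gathered from the patient so far
    summary: str      # condensed view of the patient's current condition
    next: str         # planned focus of the next question

def state_to_prompt(state):
    """Render the state as text that could condition response generation."""
    return "\n".join(f"{k.capitalize()}: {v}" for k, v in asdict(state).items())
```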
arXiv Open Access 2024
Domain Adaptation for Industrial Time-series Forecasting via Counterfactual Inference

Chao Min, Guoquan Wen, Jiangru Yuan et al.

Industrial time-series, as structured data reflecting production-process information, can be used for data-driven decision-making and effective monitoring of industrial production processes. However, time-series forecasting in industry faces several challenges, e.g., few-shot prediction caused by data shortage, and decision confusion caused by unknown treatment policies. To cope with these problems, we propose a novel causal domain adaptation framework, the Causal Domain Adaptation (CDA) forecaster, to improve performance on the domain of interest with limited data (the target). Firstly, we analyze the causality existing along with treatments, and thus ensure the causality shared over time. Subsequently, we propose an answer-based attention mechanism to achieve domain-invariant representation via the causality shared by both domains. Then, a novel domain adaptation is built to model treatments and outcomes, jointly training on the source and target domains. The main insights are that our answer-based attention mechanism allows the target domain to leverage the causality existing in the source time-series even with different treatments, and that our forecaster can predict the counterfactual outcome of industrial time-series, providing guidance for the production process. Compared with common baselines, our method demonstrates effectiveness in cross-domain prediction and practicality in guiding the production process on real-world and synthetic oilfield datasets.

en cs.LG, cs.IT
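The answer-based attention mechanism builds on standard scaled dot-product attention; a plain sketch of that base operation (the paper's domain-specific conditioning on treatments is omitted, so this is a building block, not the method):

```python
import numpy as np

def attention(query, keys, values):
    """Single-query scaled dot-product attention: score keys against the
    query, softmax the scores, and return the weighted sum of values."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)
    w = np.exp(scores - scores.max())  # max-shift for numerical stability
    w = w / w.sum()
    return w @ values
```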
arXiv Open Access 2024
Decarbonisation of industry and the energy system: exploring mutual impacts and investment planning

Quentin Raillard-Cazanove, Thibaut Knibiehly, Robin Girard

The decarbonisation of the energy system is crucial for achieving climate goals and is inherently linked to the decarbonisation of industry. Despite this, few studies explore the simultaneous impacts of decarbonising both sectors. This paper aims to examine how industrial decarbonisation in Europe affects the energy system and vice versa. To address this, an industry model incorporating key heavy industry sectors across six European countries is combined with an energy system model for electricity and hydrogen covering fifteen European regions, referred to as the EU-15, divided into eleven zones. The study evaluates various policy scenarios under different conditions. The results demonstrate that industrial decarbonisation leads to a significant increase in electricity and hydrogen demand. This additional demand for electricity is largely met through renewable energy sources, while hydrogen supply is predominantly addressed by blue hydrogen production when fossil fuels are authorized and the system lacks renewable energy. This increased demand results in higher prices with considerable regional disparities. Furthermore, the findings reveal that, regardless of the scenario, the electricity mix in the EU-15 remains predominantly renewable, exceeding 85%. A reduction in carbon taxes lowers the prices of electricity and hydrogen, but does not increase consumption, as the lower carbon tax makes the continued use of fossil fuels more attractive to industry. In scenarios that enforce a phase-out of fossil fuels, electricity prices rise, leading to a greater reliance on imports of low-carbon hydrogen and methanol. Results also suggest that domestic hydrogen production benefits from synergies between electrolytic hydrogen and blue hydrogen, helping to maintain competitive prices.

en physics.soc-ph
arXiv Open Access 2024
Examining the Role of Peer Acknowledgements on Social Annotations: Unraveling the Psychological Underpinnings

Xiaoshan Huang, Haolun Wu, Xue Liu et al.

This study explores the impact of peer acknowledgement on learner engagement and implicit psychological attributes in written annotations on an online social reading platform. Participants included 91 undergraduates from a large North American University. Using log file data, we analyzed the relationship between learners' received peer acknowledgement and their subsequent annotation behaviours using cross-lag regression. Higher peer acknowledgements correlate with increased initiation of annotations and responses to peer annotations. By applying text mining techniques and calculating Shapley values to analyze 1,969 social annotation entries, we identified prominent psychological themes within three dimensions (i.e., affect, cognition, and motivation) that foster peer acknowledgment in digital social annotation. These themes include positive affect, openness to learning and discussion, and expression of motivation. The findings assist educators in improving online learning communities and provide guidance to technology developers in designing effective prompts, drawing from both implicit psychological cues and explicit learning behaviours.

en cs.HC
arXiv Open Access 2023
Semi-Automated Computer Vision based Tracking of Multiple Industrial Entities – A Framework and Dataset Creation Approach

Jérôme Rutinowski, Hazem Youssef, Sven Franke et al.

This contribution presents the TOMIE framework (Tracking Of Multiple Industrial Entities), a framework for the continuous tracking of industrial entities (e.g., pallets, crates, barrels) over a network of, in this example, six RGB cameras. This framework makes use of multiple sensors, data pipelines, and data annotation procedures, and is described in detail in this contribution. With the vision of a fully automated tracking system for industrial entities in mind, it enables researchers to efficiently capture high quality data in an industrial setting. Using this framework, an image dataset, the TOMIE dataset, is created, which at the same time is used to gauge the framework's validity. This dataset contains annotation files for 112,860 frames and 640,936 entity instances that are captured from a set of six cameras that perceive a large indoor space. This dataset out-scales comparable datasets by a factor of four and is made up of scenarios, drawn from industrial applications from the sector of warehousing. Three tracking algorithms, namely ByteTrack, Bot-Sort and SiamMOT are applied to this dataset, serving as a proof-of-concept and providing tracking results that are comparable to the state of the art.

en cs.CV
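Trackers such as ByteTrack and Bot-Sort associate detections across frames using box overlap; a generic Intersection-over-Union utility illustrates the measure (not TOMIE code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```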
arXiv Open Access 2023
How to Do Machine Learning with Small Data? – A Review from an Industrial Perspective

Ivan Kraljevski, Yong Chul Ju, Dmitrij Ivanov et al.

Artificial intelligence has experienced technological breakthroughs in science, industry, and everyday life in recent decades. The advancements can be credited to the ever-increasing availability and miniaturization of computational resources, which resulted in exponential data growth. However, because of the insufficient amount of data in some cases, employing machine learning to solve complex tasks is not straightforward or even possible. As a result, machine learning with small data is of rising importance in data science and in applications across several fields. The authors focus on interpreting the general term "small data" and its role in engineering and industrial applications. They give a brief overview of the most important industrial applications of machine learning and small data. Small data is defined in terms of various characteristics compared to big data, and a machine learning formalism is introduced. Five critical challenges of machine learning with small data in industrial applications are presented: unlabeled data, imbalanced data, missing data, insufficient data, and rare events. Based on those definitions, an overview of the considerations in domain representation and data acquisition is given along with a taxonomy of machine learning approaches in the context of small data.

en cs.LG
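For the imbalanced-data challenge the review lists, a standard baseline is random oversampling of minority classes; an illustrative sketch, not code from the review:

```python
import random

def oversample(samples, labels, seed=0):
    """Duplicate randomly chosen minority-class samples until every class
    matches the majority count, returning (sample, label) pairs."""
    rng = random.Random(seed)  # seeded so the resampling is reproducible
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out = []
    for y, group in by_class.items():
        grown = list(group)
        while len(grown) < target:
            grown.append(rng.choice(group))  # resample within the class
        out.extend((s, y) for s in grown)
    return out
```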

Page 47 of 243,209