Industrial policy has returned to the centre of economic governance, particularly in high-tech sectors where positive network externalities in demand make market dominance self-reinforcing. This paper studies the welfare effects of an industrial policy targeting a sector with network externalities in a two-country model with strategic trade and R&D investment. We show how the welfare consequences of this policy are determined by the interaction between the strength of the externality, the type of R&D, and the degree of product differentiation between the home and imported goods. When externalities are weak or the goods are close substitutes, the business-stealing effect produces a race to the bottom that dissipates more surplus than it creates. Under sufficiently strong externalities and weak substitutability or complementarity of the goods, industrial policy competition can make both countries simultaneously better off than under laissez-faire because of the mutual business-enhancement effect. The case is stronger for product innovation than for process innovation, as the former directly affects demand and triggers stronger network effects than the latter, which operates indirectly through the supply side. Network externalities thus create an opportunity for win-win industrial policies, but their realisation depends on the market structure and the nature of innovation.
Tommaso Dorigo, Pietro Vischia, Shahzaib Abbas
et al.
The optimization of large experiments in fundamental science, such as detectors for subnuclear physics at particle colliders, shares with the optimization of complex systems for industrial or societal applications the common issue of addressing the inter-relation between parameters describing the hardware used in data production and parameters used to analyse those data. While in many cases this coupling can be ignored -- when the problem can be successfully factored into simpler sub-tasks and the latter addressed serially -- there are situations in which that approach fails to converge to the absolute maximum of expected performance, as it results in a mis-alignment of the optimized hardware and software solutions. In this work we consider a few use cases of interest in fundamental science collected primarily from particle physics and related areas, and a pot-pourri of industrial and societal applications where the matter is similarly of relevance. We discuss the emergence of strong hardware-software coupling in some of those systems, as well as co-design procedures that may be deployed to identify the global maximum of their relevant utility functions. We observe how numerous opportunities exist to advance methods and tools for hardware-software co-design optimization, bridging fundamental science and industry through application- and challenge-driven projects, and shaping the future of scientific experiments and industrial systems.
Introduction/Main Objectives: This paper explores how calm, concentration, and coldness shape a vulnerable leadership style, fostering trust, psychological safety, and flexibility. Drawing on Nordic leadership traditions, the paper examines how these traits enhance emotional resilience and openness. Background Problems: Contemporary leadership often misunderstands vulnerability, despite its potential to enhance trust and psychological safety. The gap lies in understanding how specific qualities such as calm, concentration, and coldness contribute to psychological flexibility. Novelty: The paper shows how traits often seen as passive or negative (coldness, calmness, and concentration) can foster psychological flexibility and trust, offering a new perspective on how Nordic leadership balances vulnerability and resilience. Research Methods: Using a phenomenological approach, a personal anecdote is interpreted through leadership theories and psychological frameworks. Findings/Results: The paper proposes that calmness, concentration, and coldness enhance leaders' psychological flexibility, fostering trust and improving team dynamics. Conclusion: These traits are essential for trust-based, adaptive leadership that balances vulnerability and resilience, benefiting organizational psychological safety and flexibility.
The family is a formal, methodical, and systematic institution consisting of a mother, father, and children, in which authority relationships are defined. Religion, on the other hand, is an institution that generates values and proposes a way of life, encompassing philosophical, psychological, and sociological dimensions, all systematized around an absolute authority. In the modern era, the shift from agricultural to industrial society, from rural to urban living, and from the extended to the nuclear family has transformed the understanding of authority in both the family and religion. This article aims to evaluate the relationship between the father, as the authority figure within the family, and the concept of divine authority in religion from an interdisciplinary perspective. The perspective and main arguments of this article are rooted in the philosophy of religion, while the concepts are drawn from the disciplines of sociology and psychology. Specifically, the article explores the possibility of explaining God's authority in religion through the authority of the father in the family, and examines how the changes of the modern world affect authority within the Father-God relationship. Although numerous studies have investigated the family, none have explored the family–religion relationship through the lens of authority within the field of the philosophy of religion. The method of this article is literature analysis; analogy, as a method of reasoning, was also utilized. This article concludes that the transformation of the father's authority within the family in the modern world has played a significant role in the transformation of God's authority in religion. Furthermore, the study found that while the authority of God can be interpreted analogically through the authority of the father, this analogy does not justify a conclusion regarding God's existence.
The recent development of Agentic AI systems, empowered by autonomous large language model (LLM) agents with planning and tool-use capabilities, enables new possibilities for the evolution of industrial automation and reduces the complexity introduced by Industry 4.0. This work proposes a conceptual framework that integrates Agentic AI with the intent-based paradigm, originally developed in network research, to simplify human-machine interaction (HMI) and better align automation systems with the human-centric, sustainable, and resilient principles of Industry 5.0. Based on intent-based processing, the framework allows human operators to express high-level business or operational goals in natural language; these intents are decomposed into actionable components: expectations, conditions, targets, context, and information that guide sub-agents equipped with specialized tools to execute domain-specific tasks. A proof of concept was implemented using the CMAPSS dataset and the Google Agent Development Kit (ADK), demonstrating the feasibility of intent decomposition, agent orchestration, and autonomous decision-making in predictive maintenance scenarios. The results confirm the potential of this approach to reduce technical barriers and enable scalable, intent-driven automation, despite remaining concerns about data quality and explainability.
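The intent decomposition described above can be sketched as a simple data structure. The field names mirror the components listed in the abstract, but the class, the rule-based `decompose` step, and all example values are purely illustrative stand-ins for the framework's LLM-driven decomposition, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    goal: str                                          # high-level operator goal
    expectations: list = field(default_factory=list)   # what "done" looks like
    conditions: list = field(default_factory=list)     # constraints to respect
    targets: list = field(default_factory=list)        # assets/machines in scope
    context: dict = field(default_factory=dict)        # operating environment
    information: dict = field(default_factory=dict)    # data sources for sub-agents

def decompose(goal: str) -> Intent:
    """Toy rule-based decomposition standing in for the LLM step."""
    intent = Intent(goal=goal)
    if "maintenance" in goal.lower():
        intent.expectations = ["no unplanned downtime"]
        intent.conditions = ["schedule within off-shift hours"]
        intent.targets = ["turbofan-unit-1"]           # hypothetical asset name
        intent.context = {"dataset": "CMAPSS"}
        intent.information = {"sensors": ["T24", "T30", "P30"]}
    return intent

intent = decompose("Minimize maintenance cost for the turbofan fleet")
print(intent.targets)
```

Each sub-agent would then receive only the slice of the intent relevant to its tool, which is what makes the orchestration layer's job tractable.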
Rapid industrial digitalization has created intricate cybersecurity demands that necessitate effective validation methods. While cyber ranges and simulation platforms are widely deployed, they frequently face limitations in scenario diversity and creation efficiency. In this paper, we present SpiderSim, a theoretical cybersecurity simulation platform enabling rapid and lightweight scenario generation for industrial digitalization security research. At its core, our platform introduces three key innovations: a structured framework for unified scenario modeling, a multi-agent collaboration mechanism for automated generation, and modular atomic security capabilities for flexible scenario composition. Extensive implementation trials across multiple industrial digitalization contexts, including marine ranch monitoring systems, validate our platform's capacity for broad scenario coverage with efficient generation processes. Built on solid theoretical foundations and released as open-source software, SpiderSim facilitates broader research and development in automated security testing for industrial digitalization.
Large Language Models (LLMs) have gained considerable popularity and are protected by increasingly sophisticated safety mechanisms. However, jailbreak attacks continue to pose a critical security threat by inducing models to generate policy-violating behaviors. Current paradigms focus on input-level anomalies, overlooking that the model's internal psychometric state can be systematically manipulated. To address this, we introduce Psychological Jailbreak, a new jailbreak attack paradigm that exposes a stateful psychological attack surface in LLMs, where attackers exploit the manipulation of a model's psychological state across interactions. Building on this insight, we propose Human-like Psychological Manipulation (HPM), a black-box jailbreak method that dynamically profiles a target model's latent psychological vulnerabilities and synthesizes tailored multi-turn attack strategies. By leveraging the model's optimization for anthropomorphic consistency, HPM creates psychological pressure under which social compliance overrides safety constraints. To systematically measure psychological safety, we construct an evaluation framework incorporating psychometric datasets and the Policy Corruption Score (PCS). Benchmarking against various models (e.g., GPT-4o, DeepSeek-V3, Gemini-2-Flash), HPM achieves a mean Attack Success Rate (ASR) of 88.1%, outperforming state-of-the-art attack baselines. Our experiments demonstrate robust penetration against advanced defenses, including adversarial prompt optimization (e.g., RPO) and cognitive interventions (e.g., Self-Reminder). Ultimately, PCS analysis confirms that HPM induces safety breakdowns to satisfy manipulated contexts. Our work advocates for a fundamental paradigm shift from static content filtering to psychological safety, prioritizing the development of psychological defense mechanisms against deep cognitive manipulation.
Juha-Matti Runtti, Usman Virk, Pekka Kyosti
et al.
6G radio access architecture is envisioned to contain a network of short-range in-X subnetworks with enhanced capabilities to provide efficient and reliable wireless connectivity. Short-range communications in industrial environments are actively researched at the so-called mid-bands or FR3, e.g., in the EU SNS JU 6G-SHINE project. In this paper, we analyze omni-directional radio channel measurements at 10--12 GHz frequency band to estimate large-scale channel characteristics including power-delay profile, delay spread, K-factor, and pathloss for 254 radio links measured in the Industrial Production Lab at Aalborg University, Denmark. Moreover, we perform a comparison of estimated parameters with those of the 3GPP Indoor Factory channel model.
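The large-scale parameters named above are standard functions of the measured power-delay profile (PDP). A minimal sketch of two of them, with illustrative delay and power values rather than measured data (and a deliberately naive K-factor estimator, not the paper's):

```python
import numpy as np

# Illustrative PDP: multipath delays in ns, powers in linear units.
delays = np.array([0.0, 10.0, 25.0, 60.0, 120.0])   # ns
powers = np.array([1.0, 0.4, 0.2, 0.05, 0.01])      # linear

def rms_delay_spread(tau, p):
    """Power-weighted RMS delay spread of a PDP."""
    p = p / p.sum()                                  # normalize to a distribution
    mean_tau = np.sum(p * tau)                       # mean excess delay
    return np.sqrt(np.sum(p * tau**2) - mean_tau**2)

def rician_k_factor_db(p):
    """Naive K-factor: strongest path vs. the sum of all other paths."""
    los = p.max()
    nlos = p.sum() - los
    return 10.0 * np.log10(los / nlos)

print(f"RMS delay spread: {rms_delay_spread(delays, powers):.1f} ns")
print(f"K-factor: {rician_k_factor_db(powers):.1f} dB")
```

In practice these statistics are computed per link after noise-floor thresholding of the PDP, then compared against the 3GPP Indoor Factory model's parameter distributions.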
Aaditya Baranwal, Abdul Mueez, Jason Voelker
et al.
Large-scale Vision-Language Models (VLMs) have transformed general-purpose visual recognition through strong zero-shot capabilities. However, their performance degrades significantly in niche, safety-critical domains such as industrial spill detection, where hazardous events are rare, sensitive, and difficult to annotate. This scarcity -- driven by privacy concerns, data sensitivity, and the infrequency of real incidents -- renders conventional fine-tuning of detectors infeasible for most industrial settings. We address this challenge by introducing a scalable framework centered on a high-quality synthetic data generation pipeline. We demonstrate that this synthetic corpus enables effective Parameter-Efficient Fine-Tuning (PEFT) of VLMs and substantially boosts the performance of state-of-the-art object detectors such as YOLO and DETR. Notably, in the absence of our synthetic data (the SynSpill dataset), VLMs still generalize better to unseen spill scenarios than these detectors. When SynSpill is used, both VLMs and detectors achieve marked improvements, with their performance becoming comparable. Our results underscore that high-fidelity synthetic data is a powerful means to bridge the domain gap in safety-critical applications. The combination of synthetic generation and lightweight adaptation offers a cost-effective, scalable pathway for deploying vision systems in industrial environments where real data is scarce or impractical to obtain. Project Page: https://synspill.vercel.app
Context: Fairness in software systems has emerged as a critical concern in software engineering, garnering increasing attention as the field has advanced in recent years. While several guidelines have been proposed to address fairness, achieving a comprehensive understanding of research solutions for ensuring fairness in software systems remains challenging. Objectives: This paper presents a systematic literature mapping to explore and categorize current advancements in fairness solutions within software engineering, focusing on three key dimensions: research trends, research focus, and viability in industrial contexts. Methods: We develop a classification framework to organize research on software fairness from a fresh perspective, applying it to 95 selected studies and analyzing their potential for industrial adoption. Results: Our findings reveal that software fairness research is expanding, yet it remains heavily focused on methods and algorithms, primarily post-processing and group fairness, with less emphasis on early-stage interventions, individual fairness metrics, and understanding the root causes of bias. Additionally, fairness research remains largely academic, with limited industry collaboration and low-to-medium Technology Readiness Levels (TRL), indicating that industrial transferability remains distant. Conclusion: Our results underscore the need to incorporate fairness considerations across all stages of the software development life-cycle and to foster greater collaboration between academia and industry. This analysis provides a comprehensive overview of the field, offering a foundation to guide future research and practical applications of fairness in software systems.
Kien Nguyen-Trung, Alexander K. Saeri, Stefan Kaufman
Artificial intelligence (AI) tools have been used to improve the productivity of evidence review and synthesis since at least 2016, with EPPI-Reviewer and Abstrackr being two prominent examples. However, since the release of ChatGPT by OpenAI in late 2022, the use of generative AI for research, especially for text-based data analysis, has exploded. In this article, we used a critical reflection approach to document and evaluate the capacity of different generative AI tools such as ChatGPT, GPT for Google Sheets and Docs, Casper AI, and ChatPDF to assist in the early stages of a rapid evidence review process. Our results demonstrate that these tools can boost research productivity in formulating search strings and screening literature, but they have some notable weaknesses, including producing inconsistent results and occasional errors. We recommend that researchers exercise caution when using generative AI technologies by designing a thorough research strategy and review protocol to ensure effective monitoring and quality control.
"Photo-realistic avatar" is a modern term for a digital asset that represents a human in advanced computer-graphics systems such as video games and simulation tools. These avatars exploit advances in graphics technologies in both software and hardware. While photo-realistic avatars are increasingly used in industrial simulations, representing human factors, such as a human worker's psychophysiological state, remains a challenge. This article contributes to resolving this issue by introducing the novel concept of MetaStates: the digitization and representation of the psychophysiological states of a human worker in the digital world. MetaStates influence the physical representation and performance of a digital human worker while performing a task. To demonstrate this concept, the study presents the development of a photo-realistic avatar enhanced with multi-level graphical representations of psychophysiological states relevant to Industry 5.0. This approach represents a major step forward in the use of digital humans for industrial simulations, allowing companies to better leverage the benefits of the Industrial Metaverse in their daily operations and simulations while keeping human workers at the center of the system.
The aim of this study is to investigate an automated industrial manipulation pipeline in which assembly tasks can be flexibly adapted to production without the need for a robotics expert, for both the vision system and the robot program. The objectives are, first, to develop a synthetic-dataset-generation pipeline with a special focus on industrial parts, and second, to use Learning-from-Demonstration (LfD) methods to replace manual robot programming, so that a non-robotics expert or process engineer can introduce a new manipulation task by teaching it to the robot.
Purpose: Industrial robots allow manufacturing companies to increase productivity and remain competitive. For robots to be used, they must be accepted by operators on the one hand and bought by decision-makers on the other. The roles involved in such organizational processes have very different perspectives. It is therefore essential for suppliers and robot customers to understand these motives so that robots can successfully be integrated on manufacturing shopfloors. Methodology: We present findings of a qualitative study with operators and decision-makers from two Swiss manufacturing SMEs. Using laddering interviews and means-end analysis, we compare operators' and deciders' relevant elements and how these elements are linked to each other on different abstraction levels. These findings represent drivers and barriers to the acquisition, integration and acceptance of robots in the industry. Findings: We present the differing foci of operators and deciders, and how they can be used by demanders as well as suppliers of robots to achieve robot acceptance and deployment. First, we present a list of relevant attributes, consequences and values that constitute robot acceptance and/or rejection. Second, we provide quantified relevancies for these elements, and how they differ between operators and deciders. And third, we demonstrate how the elements are linked with each other on different abstraction levels, and how these links differ between the two groups.
Hilary Weingarden, Roger Garriga Calleja, Jennifer L. Greenberg
et al.
Smartphone psychotherapies are growing in popularity, yet little is understood about (1) how people prefer to engage with psychotherapy apps, or (2) which engagement patterns constitute effective engagement. The present study uses secondary data from a 12-week randomized waitlist-controlled trial of smartphone-delivered cognitive behavioral therapy (CBT) for body dysmorphic disorder (BDD) (N = 77) to address these aims. Additionally, using the present study as a use-case, we seek to provide a roadmap for how researchers may improve upon methodological limitations of existing smartphone psychotherapy engagement research. We measured behavioral engagement via 19 objective variables derived from phone analytics data, which we reduced via factor analysis into two factors: (1) use volume and frequency, and (2) session duration. Cluster analysis based on engagement factors yielded three engager types, which mapped onto “deep” users, “samplers,” and “light” users. The clusters did not differ significantly in improvement in BDD severity across treatment, although deep users improved more than light users at a marginally significant level. Results suggest that varying patterns of preferred engagement may be efficacious. Moreover, the study's methods provide an example of how researchers can measure and study behavioral engagement comprehensively and objectively. Trial Registration: ClinicalTrials.gov Identifier: NCT04034693
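The reduce-then-cluster pipeline described above can be sketched in a few lines. This is an illustration on simulated data, not the study's code: PCA stands in for the factor analysis, and a tiny deterministic k-means stands in for the clustering step.

```python
import numpy as np

# Simulated engagement matrix: 77 users x 19 phone-analytics variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(77, 19))

# Standardize, then project onto the top-2 principal components
# (a stand-in for the two-factor solution reported in the abstract).
Z = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
scores = Z @ eigvec[:, ::-1][:, :2]        # factor scores, shape (77, 2)

def kmeans(data, k=3, iters=50):
    """Minimal k-means with deterministic initialization for the sketch."""
    centers = data[:k].copy()
    for _ in range(iters):
        d = np.linalg.norm(data[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(0)
    return labels

labels = kmeans(scores)                    # three engager types
print(scores.shape, np.unique(labels))
```

On real data one would choose the number of factors from eigenvalues or fit indices and validate the cluster count, rather than fixing both at the values reported.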
Jia-Bin Xu, Meng Tian, Jing Wang, Guo-Bin Lin, Qian-Le Lei, Xian-Hao Lin, Qin Jiang
The School of Health, Fujian Medical University, Fuzhou, People’s Republic of China
Correspondence: Qin Jiang; Xian-Hao Lin, School of Health, Fujian Medical University, Fuzhou, Fujian, People’s Republic of China, Email jiangqin@fjmu.edu.cn; linxh@fjmu.edu.cn
Purpose: To explore the structure of postgraduate research innovation ability and validate the Postgraduate Research Innovation Ability Scale.
Patients and Methods: This study was based on the componential theory of creativity. First, we drafted an item pool from a literature review, semi-structured interviews, and group discussions. A total of 125 postgraduates were selected for the pre-test. After item selection and exploratory factor analysis, an 11-item, 3-factor Postgraduate Research Innovation Ability Scale was formed. The scale was then administered to a sample of 330 postgraduates from various domestic universities. Exploratory factor analysis and confirmatory factor analysis were used to examine the factor structure of the scale.
Results: The results support a three-factor model comprising creativity-relevant processes, domain-relevant skills, and intrinsic motivation. The scale showed good internal consistency (α = 0.89) and test-retest reliability (r = 0.86). Exploratory factor analysis showed a KMO value of 0.87, and Bartlett’s test of sphericity was significant. Confirmatory factor analysis confirmed that the three-factor construct demonstrated good model fit (χ2/df = 1.945, GFI = 0.916, CFI = 0.950, RMSEA = 0.076).
Conclusion: The Postgraduate Research Innovation Ability Scale has good reliability and validity and can be used in future research in related fields.
Keywords: postgraduate, research innovation ability, validity, reliability
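The internal-consistency figure reported above (α = 0.89) is Cronbach's alpha, which is simple to compute from an item-score matrix. A minimal sketch on simulated data (the sample sizes echo the abstract, but the responses are invented):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Simulated 11-item scale for 330 respondents sharing one latent trait.
rng = np.random.default_rng(1)
latent = rng.normal(size=(330, 1))
items = latent + 0.5 * rng.normal(size=(330, 11))
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Because the simulated items share a strong common factor, the resulting alpha is high; uncorrelated items would drive it toward zero.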
Jian Mao,1,2 Bin Zhang1
1Department of Psychology, Hunan University of Chinese Medicine, Changsha, People’s Republic of China; 2The School of Humanities, Jiangxi University of Chinese Medicine, Nanchang, People’s Republic of China
Correspondence: Bin Zhang, Department of Psychology, Hunan University of Chinese Medicine, Changsha, People’s Republic of China, Email zb303@163.com
Purpose: Given the prevalence of the fear of missing out (FoMO) phenomenon and the limited understanding of the relationship between social media use and FoMO, this research examines the links between different types of social media use and different aspects of FoMO.
Methods: A structural equation model was developed to investigate the connections between active social media use (ASMU), passive social media use (PSMU), online-specific state-FoMO, and general trait-FoMO. Data were obtained from 394 Chinese university students (65% female) with social media experience who completed the Active Social Media Use Scale, the Passive Social Media Use Scale, and the Chinese Trait-State Fear of Missing Out Scale.
Results: Bivariate correlation analysis revealed that ASMU was significantly related to state-FoMO but not to trait-FoMO. Structural equation modeling revealed that ASMU had a significant direct negative effect on trait-FoMO but a positive association with trait-FoMO through the indirect effect of state-FoMO, indicating that ASMU had a suppressing effect on trait-FoMO via state-FoMO. PSMU significantly moderated the direct effect of ASMU on trait-FoMO, and the direct effect was significant only at low levels of PSMU.
Conclusion: This study reveals whether and how social media use is linked to FoMO. Social media may not always increase FoMO, because positive, active social media interactions are conducive to alleviating trait-FoMO. However, active interactions may also predict higher state-FoMO, so moderate social media use should be encouraged. In addition, a reduction in passive, non-communicative information browsing would strengthen the alleviation of trait-FoMO by ASMU.
Keywords: fear of missing out, online-specific state-FoMO, general trait-FoMO, suppression effects, moderation effects, active social media use, passive social media use
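The suppression pattern reported above is the mediation identity total = direct + indirect (c = c' + a·b): a negative direct path can coexist with a positive indirect path through the mediator. A sketch on simulated data with the signs from the abstract (the coefficients are invented, and plain OLS regressions stand in for the SEM):

```python
import numpy as np

# Simulate the reported sign pattern: ASMU lowers trait-FoMO directly
# (negative c') but raises it indirectly via state-FoMO (positive a*b).
rng = np.random.default_rng(2)
n = 394
asmu = rng.normal(size=n)
state = 0.6 * asmu + rng.normal(size=n)                  # a = 0.6
trait = -0.4 * asmu + 0.5 * state + rng.normal(size=n)   # c' = -0.4, b = 0.5

def ols_slope(y, *xs):
    """Slopes (intercept dropped) from an OLS fit."""
    X = np.column_stack([np.ones(n), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a, = ols_slope(state, asmu)                # mediator on predictor
c_prime, b = ols_slope(trait, asmu, state) # direct effect, mediator effect
c_total, = ols_slope(trait, asmu)          # total effect

print(f"direct c' = {c_prime:.2f}, indirect a*b = {a * b:.2f}, "
      f"total c = {c_total:.2f}")
```

In linear OLS the identity c = c' + a·b holds exactly on the same sample, which is why the opposing signs of c' and a·b can leave the total effect near zero, matching the non-significant bivariate ASMU/trait-FoMO correlation.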