Akhil Gupta Chigullapally, Sharvan Vittala, Razin Farhan Hussian
et al.
The fast pace of modern AI is rapidly transforming traditional industrial systems into vast, intelligent and potentially unmanned autonomous operational environments driven by AI-based solutions. These solutions leverage various forms of machine learning, reinforcement learning, and generative AI. The introduction of such smart capabilities has pushed the envelope in multiple industrial domains, enabling predictive maintenance, optimized performance, and streamlined workflows. These solutions are often deployed across the Industrial Internet of Things (IIoT) and supported by the Edge-Fog-Cloud computing continuum to enable urgent (i.e., real-time or near real-time) decision-making. Despite the current trend of aggressively adopting these smart industrial solutions to increase profit, quality, and efficiency, large-scale integration and deployment also bring serious hazards that, if ignored, can undermine the benefits of smart industries. These hazards include unforeseen interoperability side-effects and heightened vulnerability to cyber threats, particularly in environments operating with a plethora of heterogeneous IIoT systems. The goal of this study is to shed light on the potential consequences of industrial smartness, with a particular focus on security implications, including vulnerabilities, side effects, and cyber threats. We distinguish software-level downsides stemming from both traditional AI solutions and generative AI from those originating in the infrastructure layer, namely IIoT and the Edge-Cloud continuum. At each level, we investigate potential vulnerabilities, cyber threats, and unintended side effects. As industries continue to become smarter, understanding and addressing these downsides will be crucial to ensuring the secure and sustainable development of smart industrial systems.
Despite incessant poor service delivery serving as a reminder of the existence of governance challenges, the existing literature gives scant attention to what constitutes governance challenges in public organisations. Thus, informed by New Public Management theory and Public Choice Theory, this study explores the governance challenges in South African public organisations. A mixed research approach, nested within an exploratory research design, was utilised in which multilevel and multisource data were solicited to accomplish the study’s objectives: semi-structured interviews were conducted with traditional leaders, and close-ended questionnaires were administered to municipal officials and councillors from four local municipalities in South Africa. While quantitative usable data collected from 109 municipal officials and councillors were subjected to relative importance index analysis, qualitative data from 14 traditional leaders were thematically analysed. The results revealed both structural constraints (scarce resources, lack of funds and unnecessary delays) and behaviour-specific dilemmas (corruption, nepotism, lack of accountability) as the worst dilemmas hindering sound governance. The evidence from the study also indicated systemic complexities (political uncertainties, improper consultation and high bottlenecks in society) as the second-worst constraints on proper governance. The study provides both practical and theoretical implications.
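The relative importance index (RII) analysis used in the study above is conventionally computed as RII = ΣW / (A × N), where W are the Likert ratings assigned to an item, A is the highest possible rating, and N is the number of respondents. The sketch below illustrates this standard formula with hypothetical ratings; it is not the study's data or code.

```python
def relative_importance_index(ratings, max_rating=5):
    """RII = sum(W) / (A * N): W are the Likert ratings for an item,
    A is the highest possible rating, N the number of respondents.
    Values closer to 1 mark the most important (here: worst) challenges."""
    return sum(ratings) / (max_rating * len(ratings))

# Hypothetical 5-point Likert ratings for two governance challenges
rii_corruption = relative_importance_index([5, 4, 5, 3, 5, 4])  # 26/30
rii_delays = relative_importance_index([3, 2, 4, 3, 2, 3])      # 17/30
```

Ranking items by RII is what allows the study to order constraints from "worst" to "second-worst."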
The foundation model industry exhibits unprecedented concentration in critical inputs: semiconductors, energy infrastructure, elite talent, capital, and training data. Despite extensive sectoral analyses, no comprehensive framework exists for assessing overall industrial vulnerability. We develop the Artificial Intelligence Industrial Vulnerability Index (AIIVI) grounded in O-Ring production theory, recognizing that foundation model production requires simultaneous availability of non-substitutable inputs. Given extreme data opacity and rapid technological evolution, we implement a validated human-in-the-loop methodology using large language models to systematically extract indicators from dispersed grey literature, with complete human verification of all outputs. Applied to six state-of-the-art foundation model developers, AIIVI equals 0.82, indicating extreme vulnerability driven by compute infrastructure (0.85) and energy systems (0.90). While industrial policy currently emphasizes semiconductor capacity, energy infrastructure represents the emerging binding constraint. This methodology proves applicable to other fast-evolving, opaque industries where traditional data sources are inadequate.
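The abstract does not disclose AIIVI's aggregation formula, but the O-Ring intuition it cites (production requires simultaneous availability of non-substitutable inputs) can be conveyed with a purely illustrative sketch: if joint availability is the product of per-pillar availabilities, a single fragile input dominates overall vulnerability. The function below is an assumption for illustration, not the AIIVI definition, and only the compute (0.85) and energy (0.90) pillar scores come from the abstract.

```python
def o_ring_vulnerability(pillar_vulnerabilities):
    """O-Ring-style aggregate: joint availability is the product of
    per-pillar availabilities (1 - v_i), so overall vulnerability is
    1 minus that product. Any single weak input dominates the result."""
    availability = 1.0
    for v in pillar_vulnerabilities:
        availability *= 1.0 - v
    return 1.0 - availability

joint = o_ring_vulnerability([0.85, 0.90])  # compute and energy pillars only
```

Under this toy aggregation, the joint figure already exceeds either pillar alone, which is the O-Ring point: relieving the semiconductor constraint does not help if energy remains the binding one.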
Foundation models have revolutionized AI, yet they struggle with zero-shot deployment in real-world industrial settings due to a lack of high-quality, domain-specific datasets. To bridge this gap, Superb AI introduces ZERO, an industry-ready vision foundation model that leverages multi-modal prompting (textual and visual) for generalization without retraining. Trained on a compact yet representative set of 0.9 million annotated samples from a proprietary billion-scale industrial dataset, ZERO demonstrates competitive performance on academic benchmarks like LVIS-Val and significantly outperforms existing models across 37 diverse industrial datasets. Furthermore, ZERO achieved 2nd place in the CVPR 2025 Object Instance Detection Challenge and 4th place in the Foundational Few-shot Object Detection Challenge, highlighting its practical deployability and generalizability with minimal adaptation and limited data. To the best of our knowledge, ZERO is the first vision foundation model explicitly built for domain-specific, zero-shot industrial applications.
Industrial diagrams such as piping and instrumentation diagrams (P&IDs) are essential for the design, operation, and maintenance of industrial plants. Converting these diagrams into digital form is an important step toward building digital twins and enabling intelligent industrial automation. A central challenge in this digitalization process is accurate object detection. Although recent advances have significantly improved object detection algorithms, there remains a lack of methods to automatically evaluate the quality of their outputs. This paper addresses this gap by introducing a framework that employs Visual Language Models (VLMs) to assess object detection results and guide their refinement. The approach exploits the multimodal capabilities of VLMs to identify missing or inconsistent detections, thereby enabling automated quality assessment and improving overall detection performance on complex industrial diagrams.
Ros Nirwana, Reny Marliadi, Rahmatullah Alfikri
et al.
Purpose: This research aims to examine environmental sustainability at STIE Pancasetia Banjarbaru by investigating environmental awareness, environmental involvement, environmental reporting, and environmental audit within the community.
Methodology: This study adopts an innovative Mixed Methods approach, combining quantitative analysis using Chi-square tests with in-depth qualitative analysis of interview results. With a sample of 92 respondents drawn from lecturers, staff, and students at STIE Pancasetia, this research collects data through questionnaires and interviews to present a comprehensive picture of the phenomenon under investigation.
Research Findings: This research indicates a significant correlation between environmental awareness, involvement, and environmentally friendly behavior and management at STIE Pancasetia, supported by a Chi-square probability value of 0.000 < α = 0.05. Nevertheless, environmental reporting and auditing require improvement, as 55.6% of respondents perceive reporting as adequate but insufficient in disclosing environmental impacts, and 55.6% consider auditing ineffective. Hence, enhancing competence and developing reporting and auditing systems are crucial for improving transparency and accountability.
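The Chi-square test of independence reported above can be illustrated with a minimal sketch. The contingency table here is hypothetical (the study's raw data are not given); the statistic is compared against the critical value 3.841 for 1 degree of freedom at α = 0.05.

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a 2D contingency table:
    sum over cells of (observed - expected)^2 / expected, where
    expected = row_total * column_total / grand_total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical: environmental awareness (high/low) vs. eco-friendly behavior (yes/no)
table = [[30, 10],
         [12, 40]]
significant = chi_square_statistic(table) > 3.841  # critical value, df=1, alpha=0.05
```

A probability value of 0.000 < α, as reported, corresponds to a statistic far above this critical value.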
Contribution: To achieve environmental sustainability, it is recommended to: enhance environmental awareness and involvement through targeted programs, integrate Green Accounting based on University Social Responsibility (USR) into education, research, and community service activities, and improve transparency and accountability in environmental reporting and auditing through accurate and effective systems.
Lena Låstad, Jacobus Pienaar, Katharina Näswall
et al.
Job insecurity constitutes uncertainty about the future of the current job. Such uncertainty is expected to impact attitudes and behaviors regarding one’s work and career and how it will progress. The aim of the present study is to meta-analytically consolidate research on the associations between job insecurity and career-related outcomes. A further aim of the study is to explore two methodological moderators: study design (cross-sectional vs. longitudinal) and type of job insecurity measure (cognitive, affective, or combined). Based on a sample of 237 primary studies, our main results show that job insecurity was positively related to occupational and organizational turnover intention, job search behaviors, and knowledge hiding, and negatively related to career satisfaction, career opportunities, employability, and proactive skill development. In terms of the moderators, the associations were generally stronger in cross-sectional studies than in longitudinal studies, while the impact of the type of job insecurity measure used was mixed. While our results inform research on job insecurity and career-related outcomes, more studies with a longitudinal design are needed on this research topic. Future research should also further examine how different types of job insecurity measures – cognitive, affective, or combined – are associated with career-related outcomes.
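A standard way to pool correlations across primary studies, as in the meta-analysis above, is Fisher's z transform with inverse-variance (n − 3) weights. The sketch below is a generic fixed-effect version with hypothetical numbers; the study itself may use a random-effects model.

```python
import math

def pooled_correlation(correlations, sample_sizes):
    """Pool study-level correlations via Fisher's z transform,
    weighting each study by n - 3 (the inverse of Var(z)),
    then back-transform the weighted mean z to r."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in correlations]
    weights = [n - 3 for n in sample_sizes]
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)

# Hypothetical correlations between job insecurity and turnover intention
r_pooled = pooled_correlation([0.25, 0.31, 0.18], [120, 250, 90])
```

Weighting by n − 3 gives larger studies proportionally more influence on the pooled estimate, which always lies within the range of the study-level correlations.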
Víctor Hugo Capacho-Alfonso, Dario Enrique Soto-Durán, Jovani Alberto Jimenez-Builes
This research explores how technologies are adopted and used in the health field, focusing on how people integrate these innovations into their daily lives. Different models of technology adoption in health are presented, along with evaluation methodologies to measure technology adoption in healthcare, including qualitative and quantitative approaches. The challenges and barriers that may arise when implementing technologies in hospital and health environments are discussed, and the importance of considering ethical, organizational, social, and legal aspects in this process is emphasized. It is essential to understand how users perceive the usefulness and ease of use of technology, in addition to considering the influence of social factors, previous experiences, and psychological aspects in this process. In summary, the article highlights the importance of thoroughly understanding the criteria that influence the adoption of technologies in the health field and underscores the need for comprehensive strategies that address both technical and human aspects to ensure successful adoption of these technologies.
Kimmo Eriksson, Pontus Strimling, Irina Vartanova
et al.
Every social situation that people encounter in their daily lives comes with a set of unwritten rules about what behavior is considered appropriate or inappropriate. These everyday norms can vary across societies: some societies may have more permissive norms in general or for certain behaviors, or for certain behaviors in specific situations. In a preregistered survey of 25,422 participants across 90 societies, we map societal differences in 150 everyday norms and show that they can be explained by how societies prioritize individualizing moral foundations such as care and liberty versus binding moral foundations such as purity. Specifically, societies with more individualistic morality tend to have more permissive norms in general (greater liberty) and especially for behaviors deemed vulgar (less purity), but they exhibit less permissive norms for behaviors perceived to have negative consequences in specific situations (greater care). By comparing our data with available data collected twenty years ago, we find a global pattern of change toward more permissive norms overall but less permissive norms for the most vulgar and inconsiderate behaviors. This study explains how social norms vary across behaviors, situations, societies, and time.
Background:
Delirium, an acute and often fluctuating disorder of attention and cognition, poses significant challenges in clinical care due to its varied presentation and complex etiological factors. In rural healthcare settings, where resources and awareness are limited, delirium is frequently under-recognized and inadequately managed.
Aim:
To investigate the factors associated with and types of delirium and their correlation with sociodemographic profiles in hospitalized patients at a tertiary care rural hospital in Central India.
Materials and Methods:
This cross-sectional observational study was conducted on 120 patients diagnosed with delirium and referred to the Department of Psychiatry. A comprehensive assessment was performed using the Delirium Etiology Checklist (DEC) and Richmond Agitation Sedation Scale (RASS), and data spanning various associated factors, subtypes, and demographic variables were analyzed using SPSS version 27.0.
Results:
The cohort had a mean age of 48.2 ± 15.96 years, with a predominance of male patients (84.2%). Substance withdrawal (16.96%), anemia (12.5%), and renal derangement (11.6%) emerged as the major factors associated with delirium. Hyperactive delirium was observed in 88.3% of patients, while hypoactive delirium was found in 11.7%. Significant associations were noted between cardiac decompensation and sepsis and hypoactive delirium, and between substance withdrawal and hyperactive delirium.
Conclusion:
The study highlights the need for a systematic approach to identify and manage delirium’s underlying associated factors, particularly in resource-limited settings, to prevent adverse outcomes.
Valentina Zaccaria, Chiara Masiero, David Dandolo
et al.
While Machine Learning has become crucial for Industry 4.0, its opaque nature hinders trust and impedes the transformation of valuable insights into actionable decisions, a challenge exacerbated in the evolving Industry 5.0 with its human-centric focus. This paper addresses this need by testing the applicability of AcME-AD in industrial settings. This recently developed framework facilitates fast and user-friendly explanations for anomaly detection. AcME-AD is model-agnostic, offering flexibility, and prioritizes real-time efficiency. Thus, it seems suitable for seamless integration with industrial Decision Support Systems. We present the first industrial application of AcME-AD, showcasing its effectiveness through experiments. These tests demonstrate AcME-AD's potential as a valuable tool for explainable anomaly detection and feature-based root cause analysis within industrial environments, paving the way for trustworthy and actionable insights in the age of Industry 5.0.
Nowadays, electric robots play a big role in many fields, as they can replace humans and/or reduce the load on humans. Several types of robots are present in daily life: some are fully controlled by humans, while others are programmed to be self-controlled; in addition, there are self-controlled robots with partial human control. Robots can be classified into three major kinds: industrial robots, autonomous robots, and mobile robots. Industrial robots are used in industries and factories to perform human tasks in an easier and faster way, which helps in developing products. Typically, industrial robots perform difficult and dangerous tasks: they lift heavy objects, handle chemicals, and carry out painting and assembly work. They work hour after hour, day after day, with the same precision, and they do not get tired, which means they do not make errors due to fatigue. Indeed, they are ideally suited to completing repetitive tasks.
Julian Coda-Forno, Marcel Binz, Jane X. Wang
et al.
Large language models (LLMs) have significantly advanced the field of artificial intelligence. Yet, evaluating them comprehensively remains challenging. We argue that this is partly due to the predominant focus on performance metrics in most benchmarks. This paper introduces CogBench, a benchmark that includes ten behavioral metrics derived from seven cognitive psychology experiments. This novel approach offers a toolkit for phenotyping LLMs' behavior. We apply CogBench to 35 LLMs, yielding a rich and diverse dataset. We analyze this data using statistical multilevel modeling techniques, accounting for the nested dependencies among fine-tuned versions of specific LLMs. Our study highlights the crucial role of model size and reinforcement learning from human feedback (RLHF) in improving performance and aligning with human behavior. Interestingly, we find that open-source models are less risk-prone than proprietary models and that fine-tuning on code does not necessarily enhance LLMs' behavior. Finally, we explore the effects of prompt-engineering techniques. We discover that chain-of-thought prompting improves probabilistic reasoning, while take-a-step-back prompting fosters model-based behaviors.
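The multilevel analysis above accounts for the fact that fine-tuned variants of the same base model are not independent observations. A toy way to convey the underlying idea is partial pooling: each model family's mean score is shrunk toward the grand mean. The sketch below is only illustrative with hypothetical data; real multilevel models estimate the shrinkage from variance components rather than fixing it.

```python
def partially_pooled_means(scores_by_family, shrinkage=0.5):
    """Shrink each family's mean score toward the grand mean by a
    fixed factor, mimicking the random intercepts of a multilevel
    model over nested fine-tuned variants of each base model."""
    all_scores = [s for scores in scores_by_family.values() for s in scores]
    grand_mean = sum(all_scores) / len(all_scores)
    return {family: grand_mean + (1 - shrinkage) * (sum(s) / len(s) - grand_mean)
            for family, s in scores_by_family.items()}

# Hypothetical behavioral-metric scores for variants of two base models
scores = {"model_A": [0.8, 0.9, 0.85], "model_B": [0.4, 0.5]}
pooled = partially_pooled_means(scores)
```

Families with few variants (like model_B here) are pulled more strongly toward the overall mean in a fitted multilevel model, which guards against over-interpreting noisy per-family estimates.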
Large Language Models (LLMs) have gradually become a gateway for people to acquire new knowledge. However, attackers can break the model's security protection (its "jail") to access restricted information, an attack known as "jailbreaking." Previous studies have shown the weakness of current LLMs when confronted with such jailbreaking attacks. Nevertheless, an understanding of the intrinsic decision-making mechanism of LLMs upon receipt of jailbreak prompts is noticeably lacking. Our research provides a psychological explanation of jailbreak prompts. Drawing on cognitive consistency theory, we argue that the key to jailbreaking is guiding the LLM to achieve cognitive coordination in an erroneous direction. Further, we propose an automatic black-box jailbreaking method based on the Foot-in-the-Door (FITD) technique. This method progressively induces the model to answer harmful questions via multi-step incremental prompts. We instantiated a prototype system to evaluate jailbreaking effectiveness on 8 advanced LLMs, yielding an average success rate of 83.9%. This study builds a psychological perspective on the explanatory insights into the intrinsic decision-making logic of LLMs.
Addressing the challenge of data scarcity in industrial domains, transfer learning has emerged as a pivotal paradigm. This work introduces Style Filter, a methodology tailored for industrial contexts. By selectively filtering source-domain data before knowledge transfer, Style Filter reduces the quantity of data while maintaining or even enhancing the performance of the transfer learning strategy. Offering label-free operation, minimal reliance on prior knowledge, independence from specific models, and reusability, Style Filter is evaluated on authentic industrial datasets, highlighting its effectiveness when employed before conventional transfer strategies in the deep learning domain. The results underscore the effectiveness of Style Filter in real-world industrial applications.
Background:
Major depression is a commonly occurring, seriously impairing, and often recurrent mental disorder. Depression and cognitive impairment have enormous implications for public health. Cognitive symptoms represent one of the core features of depression and have an impact on many functional outcomes. Different cognitive domains such as attention and concentration, psychomotor speed, executive functioning, and memory have been found to be implicated.
Aim:
This study aimed to assess the cognitive domains affected and the severity of cognitive dysfunction in first-episode patients with a unipolar depressive episode without psychosis, and to examine the correlation between the severity of cognitive deficit and the severity of depression.
Materials and Methods:
A total of 40 patients with depression diagnosed according to the International Classification of Diseases Research Diagnostic Criteria and 40 healthy controls were included. The PGI Battery of Brain Dysfunction, Frontal Assessment Battery, and Hamilton Depression Rating Scale (HAM–D) were administered, and analysis was done using the Chi-square test, unpaired t-test, and Pearson’s correlation.
Results:
The study revealed significant differences in the dysfunction scores between the study and control populations. In the study group, more than 80% of patients had cognitive dysfunction and a positive correlation was found between dysfunction and HAM–D scores.
Conclusion:
Depression is associated with significant disturbance in cognitive functioning, and the cognitive dysfunction increases with an increase in the severity of depression.
We present a targeted review of recent developments and advances in digital selection procedures (DSPs), with particular attention to advances in internet-based techniques. By reviewing the emergence of DSPs in selection research and practice, we highlight five main categories of methods (online applications, online psychometric testing, digital interviews, gamified assessment and social media). We discuss the evidence base for each of these DSP groups, focusing on construct and criterion validity, and applicant reactions to their use in organizations. Based on the findings of our review, we present a critique of the evidence base for DSPs in industrial, work and organizational psychology and set out an agenda for advancing research. We identify pressing gaps in our understanding of DSPs, and ten key questions to be answered. Given that DSPs are likely to depart further from traditional non-digital selection procedures in the future, a theme in this agenda is the need to establish a distinct and specific literature on DSPs, and to do so at a pace that reflects the speed of the underlying technological advancement. In concluding, we therefore issue a call to action for selection researchers in work and organizational psychology to commence a new and rigorous multidisciplinary programme of scientific study of DSPs.