Editorial
Sandra Hernández
Artificial intelligence (AI) has rapidly transitioned from a conceptual innovation to an integral component of scientific practice. In recent years, AI-based tools have been increasingly incorporated into research workflows, particularly in tasks related to writing, editing, translation, and data interpretation [1, 2]. In engineering disciplines, where clarity, precision, and reproducibility are essential, this technological shift presents both significant opportunities and critical challenges for authors, reviewers, and editors.
Engineering (General). Civil engineering (General)
Modern Approaches to Breast Cancer Screening and Diagnosis: Integrating Traditional Methods and Artificial Intelligence
Frolov S.A., Zolotarev P.N.
Breast cancer remains the leading cause of cancer incidence and mortality among women worldwide, and particularly in the Russian Federation, where the disease shows a trend toward affecting younger patients. Despite the proven effectiveness of mammographic screening, the method has substantial limitations, including variable sensitivity, subjective interpretation, and staffing shortages. In recent years, artificial intelligence technologies capable of improving diagnostic accuracy and efficiency have been developing rapidly. This article presents an analysis of modern approaches to breast cancer screening and diagnosis, with an emphasis on integrating artificial intelligence systems into clinical practice. Based on a review of epidemiological data, the regulatory framework, and the results of pilot projects in Russia (including the Samara region), it is shown that algorithms based on convolutional neural networks achieve sensitivity of up to 96%, reduce the workload on specialists, and improve the standardization of diagnostic reports. Organizational, ethical, and regulatory aspects of the broad adoption of artificial intelligence in cancer screening are discussed. The article concludes that hybrid diagnostic models combining machine learning with physicians' clinical experience are a promising direction.
Leveraging support vector regression, radiomics and dosiomics for outcome prediction in personalized ultra-fractionated stereotactic adaptive radiotherapy (PULSAR)
Yajun Yu, Steve Jiang, Robert Timmerman
et al.
Personalized ultra-fractionated stereotactic adaptive radiotherapy (PULSAR) is a novel treatment that delivers radiation in pulses of protracted intervals. Accurate prediction of gross tumor volume (GTV) changes through regression models has substantial prognostic value. This study aims to develop a multi-omics based support vector regression (SVR) model for predicting GTV change. A retrospective cohort of 39 patients with 69 brain metastases was analyzed, based on radiomics (magnetic resonance images) and dosiomics (dose maps) features. Delta features were computed to capture relative changes between two time points. A feature selection pipeline using the least absolute shrinkage and selection operator (Lasso) algorithm with weight- or frequency-based ranking criteria was implemented. SVR models with various kernels were evaluated using the coefficient of determination (R²) and relative root mean square error (RRMSE). Five-fold cross-validation with 10 repeats was employed to mitigate the limitation of small data size. Multi-omics models that integrate radiomics, dosiomics, and their delta counterparts outperform individual-omics models. Delta-radiomic features play a critical role in enhancing prediction accuracy relative to features at single time points. The top-performing model achieves an R² of 0.743 and an RRMSE of 0.022. The proposed multi-omics SVR model shows promising performance in predicting continuous change of GTV. It provides a more quantitative and personalized approach to assist patient selection and treatment adjustment in PULSAR.
Computer engineering. Computer hardware, Electronic computers. Computer science
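The R² and RRMSE metrics reported in the PULSAR study above are straightforward to compute. A minimal pure-Python sketch follows, assuming RRMSE is defined as RMSE normalized by the mean observed value (conventions vary between papers); the numbers in the usage example are hypothetical, not study data:

```python
import math

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rrmse(y_true, y_pred):
    """Relative RMSE: RMSE divided by the mean observed value (one common convention)."""
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse) / (sum(y_true) / len(y_true))

# Toy GTV relative-change values (hypothetical, for illustration only)
actual    = [0.90, 0.75, 1.10, 0.60, 0.95]
predicted = [0.88, 0.80, 1.05, 0.65, 0.90]
print(round(r_squared(actual, predicted), 3))  # → 0.929
print(round(rrmse(actual, predicted), 3))      # → 0.053
```

In a cross-validated setting like the study's, these metrics would be averaged over the held-out folds of each repeat.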
Assessing the effectiveness of artificial intelligence education and training for healthcare workers: a systematic review
Leanna Woods, Kayley Lyons, Anton Van Der Vegt
et al.
Abstract Background Artificial intelligence (AI) is increasingly integrated into healthcare, yet upskilling the health workforce remains a challenge. We addressed the research question: What evidence exists on the effectiveness of AI education and training programs in improving AI literacy among healthcare workers? Methods Following PRISMA guidelines and PROSPERO registration, five databases (PubMed, Scopus, CINAHL, Embase, ERIC) were searched on 20 August 2024, focusing on studies with an intervention of AI training or education for the healthcare workforce, in any study design that reported an evaluation. Results 27 studies were included. Programs improved AI literacy outcomes mapped to levels 1–3 of the Kirkpatrick-Barr training evaluation hierarchy, including improved learner reactions, shifts in attitudes and perceptions, enhanced knowledge and skills, and behavior changes. No programs mapped to level 4 of the hierarchy, which encompasses organizational change and patient benefit. Programs were short in length (44%), delivered in academic settings (56%), to doctors (44%) or medical students (44%), at entry-to-practice level (56%). Most taught an introduction to AI (67%), with technical AI skills less frequent. Conclusions These programs are a promising start but often lack sufficient depth to build advanced competencies. Improving AI literacy in healthcare will require appropriate course design, an evolving understanding of this rapidly changing area, and evaluation of learning effectiveness. As the adoption of AI accelerates across healthcare, health systems may seek to standardise and assess the efficacy of these courses.
Special aspects of education, Medicine
Applications of Artificial Intelligence in Corneal Nerve Images in Ophthalmology
Raul Hernan Barcelo-Canton, Mingyi Yu, Chang Liu
et al.
Corneal nerves (CNs) are essential to maintain corneal epithelial integrity and ocular surface homeostasis. In vivo confocal microscopy (IVCM) enables high-resolution visualization of CNs at the microscopic level. Traditionally, CN images must be analyzed by manual examination, which is time-consuming and labor-intensive. Artificial intelligence (AI) has facilitated reliable analysis of CN parameters, allowing for automatic and semiautomatic analysis of CNs. These include the identification, segmentation, and quantitative analysis of various CN parameters. This review summarizes the applications of AI-driven, automatic, and semiautomatic models in the CN analysis of IVCM images while also focusing on their diagnostic relevance in dry eye disease (DED) and neuropathic corneal pain (NCP). Recent advancements in AI have transformed IVCM image analysis by improving reproducibility and reducing operator dependency and time. AI-based algorithms have demonstrated good performance and sensitivity in identifying and quantifying CN metrics. AI has also been utilized to improve the diagnostic accuracy of DED with IVCM scans, involving multiple portions of the CNs, such as the inferior whorl region. When employed with IVCM images of patients with NCP, AI-assisted identification of microneuromas and changes in CN metrics has improved diagnostic accuracy. Despite promising advances and outcomes, the widespread implementation of these AI models in CN image analysis requires large-scale validation. Future integration of multimodal AI algorithms remains a promising endeavor to enhance diagnostic accuracy and disease stratification.
LLM For Automated Dental EMR Quality Assessment
Jiakun Fang, Wang Xiaoying
Aim or purpose: High-quality Electronic Medical Records (EMRs) are crucial for digital dentistry data analysis and applications. Manual EMR quality assessment is resource-intensive and inconsistent, limiting data utility. Automated methods are needed for efficient data integrity. This study evaluated a Large Language Model (LLM) for automating quality assessment of outpatient dental EMRs based on group standards. Materials and methods: 100 typical outpatient dental EMRs with known errors, collected from February to December 2024 at a hospital and manually de-identified, were randomly split into training (80) and testing (20) sets. Records were annotated by 3 senior quality control experts, providing the reference standard. The DeepSeek-r1 model assessed record quality based on group standard criteria, using evaluation prompts iteratively refined on the training set via expert feedback. Performance on the testing set was compared against the expert consensus reference using metrics including Cohen's Kappa for agreement, precision, recall, and F1-score (p<0.05). Evaluation time was also compared. Results: On the testing set, the LLM achieved 98.0% recall, 100% precision, and an F1-score of 0.990 for identifying annotated quality deficiencies. The LLM demonstrated strong alignment with expert consensus on the test set. Furthermore, automated assessment significantly reduced evaluation time per record compared to manual review. Conclusions: LLMs show significant promise as effective, efficient tools for automated dental EMR quality assessment. This technology can enhance data quality within digital dentistry workflows, improving applications such as clinical research and practice analytics, and supporting digital transformation in dentistry.
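The agreement and error-detection metrics in the dental EMR study above are standard. A generic pure-Python sketch of binary-classification Cohen's kappa and precision/recall/F1 (not the study's actual evaluation code) might look like:

```python
def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two raters on the same items."""
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for one positive class (e.g. 'deficiency present')."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical expert vs LLM deficiency flags per record (1 = deficiency found)
expert = [1, 1, 1, 0, 0, 1, 0, 1]
llm    = [1, 1, 1, 0, 0, 1, 0, 0]
print(precision_recall_f1(expert, llm))
print(round(cohen_kappa(expert, llm), 3))
```

Here recall captures the share of expert-annotated deficiencies the model finds, matching the 98.0% recall figure's meaning in the abstract.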
Edge AI for Industrial Visual Inspection: YOLOv8-Based Visual Conformity Detection Using Raspberry Pi
Marcelo T. Okano, William Aparecido Celestino Lopes, Sergio Miele Ruggero
et al.
This paper presents a lightweight and cost-effective computer vision solution for automated industrial inspection using You Only Look Once (YOLO) v8 models deployed on embedded systems. The YOLOv8 Nano model, trained for 200 epochs, achieved a precision of 0.932, an mAP@0.5 of 0.938, and an F1-score of 0.914, with an average inference time of ~470 ms on a Raspberry Pi 500, confirming its feasibility for real-time edge applications. The proposed system aims to replace physical jigs used for the dimensional verification of extruded polyamide tubes in the automotive sector. The YOLOv8 Nano and YOLOv8 Small models were trained on a Graphics Processing Unit (GPU) workstation and subsequently tested on a Central Processing Unit (CPU)-only Raspberry Pi 500 to evaluate their performance in constrained environments. The experimental results show that the Small model achieved higher accuracy (a precision of 0.951 and an mAP@0.5 of 0.941) but required a significantly longer inference time (~1315 ms), while the Nano model achieved faster execution (~470 ms) with stable metrics (precision of 0.932 and mAP@0.5 of 0.938), therefore making it more suitable for real-time applications. The system was validated using authentic images in an industrial setting, confirming its feasibility for edge artificial intelligence (AI) scenarios. These findings reinforce the feasibility of embedded AI in smart manufacturing, demonstrating that compact models can deliver reliable performance without requiring high-end computing infrastructure.
Industrial engineering. Management engineering, Electronic computers. Computer science
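The mAP@0.5 figures in the YOLOv8 inspection study above count a detection as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of the IoU computation for axis-aligned boxes in (x1, y1, x2, y2) form, independent of any particular detector:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# At mAP@0.5, a predicted box is a true positive when IoU >= 0.5
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333... (50 overlap / 150 union)
```

Averaging precision over recall levels, per class, with this threshold yields the mAP@0.5 metric the paper reports.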
A Deep Backtracking Bare‐Bones Particle Swarm Optimisation Algorithm for High‐Dimensional Nonlinear Functions
Jia Guo, Guoyuan Zhou, Ke Yan
et al.
ABSTRACT The challenge of optimising multimodal functions within high‐dimensional domains constitutes a notable difficulty in evolutionary computation research. Addressing this issue, this study introduces the Deep Backtracking Bare‐Bones Particle Swarm Optimisation (DBPSO) algorithm, an innovative approach built upon the integration of the Deep Memory Storage Mechanism (DMSM) and the Dynamic Memory Activation Strategy (DMAS). The DMSM enhances the memory retention for the globally optimal particle, promoting interaction between standard particles and their historically optimal counterparts. In parallel, DMAS assures the updated position of the globally optimal particle is appropriately aligned with the deep memory repository. The efficacy of DBPSO was rigorously assessed through a series of simulations employing the CEC2017 benchmark suite. A comparative analysis juxtaposed DBPSO's performance against five contemporary evolutionary algorithms across two experimental conditions: Dimension‐50 and Dimension‐100. In the 50D trials, DBPSO attained an average ranking of 2.03, whereas in the 100D scenarios, it improved to an average ranking of 1.9. Further examination utilising the CEC2019 benchmark functions revealed DBPSO's robustness, securing four first‐place finishes, three second‐place standings, and three third‐place positions, culminating in an unmatched average ranking of 1.9 across all algorithms. These empirical results corroborate DBPSO's proficiency in delivering precise solutions for complex, high‐dimensional optimisation challenges.
Computational linguistics. Natural language processing, Computer software
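For context on the DBPSO entry above: the canonical bare-bones PSO update samples each particle's new position from a Gaussian centred midway between its personal best and the global best, with per-dimension spread equal to their separation. The sketch below shows this base algorithm on a simple sphere function; the paper's deep-memory mechanisms (DMSM, DMAS) are not reproduced here.

```python
import random

def sphere(x):
    """Sphere benchmark: minimum 0 at the origin."""
    return sum(v * v for v in x)

def bare_bones_pso(dim=10, swarm=20, iters=200, bound=5.0, seed=1):
    """Canonical bare-bones PSO; returns the best fitness found."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [sphere(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(swarm):
            # Bare-bones update: Gaussian around the midpoint of personal and
            # global best, std = |pbest - gbest| in each dimension
            pos[i] = [
                rng.gauss((pbest[i][d] + gbest[d]) / 2, abs(pbest[i][d] - gbest[d]))
                for d in range(dim)
            ]
            f = sphere(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest_f
```

As the swarm converges, the per-dimension spread shrinks toward zero, which is exactly the premature-stagnation behaviour that memory-based extensions like DBPSO aim to counteract.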
Predictions of postoperative and perioperative complications of laparoscopic cholecystectomy using machine learning algorithms: systematic review
Shahzeb Leghari, Muhammad Tausif, Rooma Rehan
et al.
Abstract Background Laparoscopic cholecystectomy (LC) is a widely performed procedure with potential postoperative and perioperative complications. Recent advances in machine learning (ML) can enable early prediction of these complications, but no systematic review has synthesized this data. This review aims to assess the accuracy of ML algorithms in predicting these complications following LC. Methods A systematic review was conducted in accordance with PRISMA guidelines. A comprehensive search was performed on PubMed, Embase, Scopus, and Web of Science databases for studies published between 2010 and 2024. Studies that applied ML algorithms to predict complications during and after LC were included. Quality assessment was performed using the Newcastle-Ottawa Scale (NOS). Due to study heterogeneity, a meta-analysis was not conducted; instead, a narrative synthesis was performed. Results A total of 6 studies were included in the review. Various machine learning algorithms, such as decision trees, deep learning, artificial neural networks (ANN), and adaptive boosting, were assessed for predicting postoperative and perioperative complications after laparoscopic cholecystectomy (LC). ANN models showed superior performance, with mean absolute percentage error (MAPE) values ranging from 4.20 to 8.60% in predicting quality of life post-LC. Deep learning models achieved a balanced accuracy of 71.4% for critical view of safety (CVS) assessment during LC. AdaBoost algorithms effectively identified key risk factors for hepatic fibrosis in post-cholecystectomy patients. However, models predicting surgical adverse events faced limitations due to low prevalence, resulting in lower predictive values. Conclusion ML models show great potential in predicting postoperative complications following LC while also considering intraoperative and perioperative outcomes that impact patient safety and postoperative recovery, but limitations such as small sample sizes and limited applicability remain. Further research is needed to validate these models in larger, more diverse populations.
Interpretable artificial intelligence model for predicting heart failure severity after acute myocardial infarction
Chenglong Guo, Binyu Gao, Xuexue Han
et al.
Abstract Background Heart failure (HF) after acute myocardial infarction (AMI) is a leading cause of mortality and morbidity worldwide. Accurate prediction and early identification of HF severity are crucial for initiating preventive measures and optimizing treatment strategies. This study aimed to develop an interpretable artificial intelligence (AI) model for HF severity prediction using multidimensional clinical data. Methods This study included data from 1574 AMI patients, including medical history, clinical features, physiological parameters, laboratory tests, coronary angiography, and echocardiography results. Both deep learning (TabNet, Multi-Layer Perceptron) and machine learning (Random Forest, XGBoost) models were employed in model construction. Additionally, the Shapley Additive Explanations (SHAP) method was used to elucidate the importance of clinical factors and enhance model interpretability. A web platform ( https://prediction-killip-gby.streamlit.app/ ) was also developed to facilitate clinical application. Results Among the models, TabNet demonstrated the best performance, achieving an AUROC of 0.827 for KILLIP four-class classification and 0.831 for KILLIP binary classification. Key clinical factors such as GRACE score, NT-pro BNP, and TIMI score were highly correlated with KILLIP classification, aligning with established clinical knowledge. Conclusions By leveraging easily accessible multidimensional data, this model enables accurate early prediction and personalized diagnosis of HF risk and severity following AMI. It supports early clinical intervention and improves patient outcomes, offering significant clinical application value. Clinical trial number Not applicable.
Diseases of the circulatory (Cardiovascular) system
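The AUROC values in the heart-failure study above can be read as the probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative one. A minimal sketch of this rank-based (Mann-Whitney) formulation, with hypothetical scores rather than study data:

```python
def auroc(y_true, scores):
    """AUROC via the Mann-Whitney U statistic: P(score_pos > score_neg),
    counting ties as one half. Assumes both classes are present."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for a binary (e.g. severe vs non-severe KILLIP) split
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.7, 0.4, 0.3, 0.5, 0.6]
print(round(auroc(labels, scores), 3))  # → 0.889 (8 of 9 positive-negative pairs ranked correctly)
```

For the four-class KILLIP setting, a multi-class extension (e.g. macro-averaged one-vs-rest AUROC) would be needed; the abstract does not specify which variant was used.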
Artificial intelligence-driven multi-omics approaches in Alzheimer's disease: Progress, challenges, and future directions
Fang Ren, Jing Wei, Qingxin Chen
et al.
Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline and memory loss, with few effective treatments currently available. The multifactorial nature of AD, shaped by genetic, environmental, and biological factors, complicates both research and clinical management. Recent advances in artificial intelligence (AI) and multi-omics technologies provide new opportunities to elucidate the molecular mechanisms of AD and identify early biomarkers for diagnosis and prognosis. AI-driven approaches such as machine learning, deep learning, and network-based models have enabled the integration of large-scale genomic, transcriptomic, proteomic, metabolomic, and microbiomic datasets. These efforts have facilitated the discovery of novel molecular signatures and therapeutic targets. Methods including deep belief networks and joint deep semi-non-negative matrix factorization have contributed to improvements in disease classification and patient stratification. However, ongoing challenges remain. These include data heterogeneity, limited interpretability of complex models, a lack of large and diverse datasets, and insufficient clinical validation. The absence of standardized multi-omics data processing methods further restricts progress. This review systematically summarizes recent advances in AI-driven multi-omics research in AD, highlighting achievements in early diagnosis and biomarker discovery while discussing limitations and future directions needed to advance these approaches toward clinical application.
Therapeutics. Pharmacology
A Comparative Evaluation of Machine Learning Methods for Predicting Student Outcomes in Coding Courses
Zakaria Soufiane Hafdi, Said El Kafhali
Artificial intelligence (AI) has found applications across diverse sectors in recent years, significantly enhancing operational efficiencies and user experiences. Educational data mining (EDM) has emerged as a pivotal AI application to transform educational environments by optimizing learning processes and identifying at-risk students. This study leverages EDM within a Moroccan university context (Hassan First University, Settat, Morocco) to augment educational quality and improve learning. We introduce a novel “Hybrid approach” that synthesizes students’ historical academic records and their in-class behavioral data, provided by instructors, to predict student performance in initial coding courses. Utilizing a range of machine learning (ML) algorithms, our research applies multi-classification, data augmentation, and binary classification techniques to evaluate student outcomes effectively. The key performance metrics, accuracy, precision, recall, and F1-score, are calculated to assess the efficacy of classification. Our results highlight the robustness of the long short-term memory (LSTM) algorithm, which, along with a support vector machine (SVM), achieved the highest accuracy of 94% and an F1-score of 0.87, indicating high efficacy in predicting student success at the onset of learning coding. Furthermore, the study proposes a comprehensive framework that can be integrated into learning management systems (LMSs) to accommodate generational shifts in student populations, evolving university pedagogies, and varied teaching methodologies. This framework aims to support educational institutions in adapting to changing educational dynamics while ensuring high-quality, tailored learning experiences for students.
Artificial intelligence as the clinical assistant for detection of femoral neck fracture: Intelligent medicine brings the bright future
Pengran Liu, Dan Zhang, Yufei Chen
et al.
Objective: The high rates of missed diagnosis and misdiagnosis limit the diagnosis of femoral neck fracture (FNF), calling for a new method to help doctors diagnose FNF more accurately. This study aims to estimate the ability of AI to detect FNF and to compare its performance with human performance. The performance of AI-aided doctors is also explored to confirm the value of AI as an assistant for clinical doctors in detecting FNF. Materials and methods: 4477 hip X-rays (consisting of 2884 FNF X-rays and 1593 normal hip X-rays) from eight top-tier Chinese hospitals (Union Hospital, Tongji Medical College, Huazhong University of Science and Technology (Wuhan Union Hospital), Wuhan Pu'ai Hospital, Tianyou Hospital, Wuhan University of Science and Technology, Hanyang Hospital, Wuhan University of Science and Technology, Northern Jiangsu People's Hospital, Xiangya Changde Hospital, People's Hospital of Tibet Autonomous Region and the Second Affiliated Hospital of Soochow University) were collected to establish a large multi-center clinical sample database. The X-rays were labeled, and the database was divided into a training dataset (4029 X-rays) and a testing dataset (448 X-rays). A Faster RCNN model with three different backbones (VGG16, VGG16-nottop and Resnet 50) was set up and trained on the training dataset; the diagnostic performance of the Faster RCNN was then assessed on the testing dataset and compared with that of five doctors, in terms of accuracy, sensitivity, specificity, missed diagnosis rate, misdiagnosis rate, positive predictive value (PPV), negative predictive value (NPV), and time consumption. The output of the best-performing backbone was then given to the doctors as a reference for diagnosing the testing dataset again, to confirm the value of AI as an assistant in FNF detection.
Results: Faster RCNN with Resnet 50 performed best overall among the three backbones (VGG16 performed worst, VGG16-nottop at an intermediate level) in accuracy (0.82 vs 0.58 and 0.76), sensitivity (0.93 vs 0.83 and 0.94), specificity (0.62 vs 0.12 and 0.43), missed diagnosis rate (0.07 vs 0.17 and 0.06), misdiagnosis rate (0.38 vs 0.88 and 0.57), PPV (0.82 vs 0.63 and 0.75), NPV (0.82 vs 0.28 and 0.81) and time consumption (0.02 h vs 0.04 h and 0.03 h). Compared with human performance, the Faster RCNN with Resnet 50 showed better accuracy, sensitivity, missed diagnosis rate, NPV and time consumption, but worse specificity and misdiagnosis rate; for PPV, there was no significant difference. With the assistance of Faster RCNN with Resnet 50, human performance was enhanced in all aspects. Conclusion: As a new application of intelligent medicine, AI is qualified to detect FNF and can be an excellent assistant for clinical doctors in improving the diagnosis of FNF.
Science (General), Social sciences (General)
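The diagnostic measures compared in the FNF study above all derive from the four cells of a binary confusion matrix. A generic sketch (the counts in the example are hypothetical, not the paper's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic measures from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),            # = 1 - missed diagnosis rate
        "specificity": tn / (tn + fp),            # = 1 - misdiagnosis rate
        "missed_diagnosis_rate": fn / (tp + fn),
        "misdiagnosis_rate": fp / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for a fracture/no-fracture test set
m = diagnostic_metrics(tp=268, fp=61, fn=20, tn=99)
print({k: round(v, 2) for k, v in m.items()})
```

Note the complementary pairs: the "missed diagnosis rate" reported in the abstract is one minus sensitivity, and the "misdiagnosis rate" is one minus specificity, which is why the two rates move opposite to their paired metrics in the results.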
Artificial Intelligence Agents in Music Analysis: An Integrative Perspective Based on Two Use Cases
Antonio Manuel Martínez-Heredia, Dolores Godrid Rodríguez, Andrés Ortiz García
This paper presents an integrative review and experimental validation of artificial intelligence (AI) agents applied to music analysis and education. We synthesize the historical evolution from rule-based models to contemporary approaches involving deep learning, multi-agent architectures, and retrieval-augmented generation (RAG) frameworks. The pedagogical implications are evaluated through a dual-case methodology: (1) the use of generative AI platforms in secondary education to foster analytical and creative skills; (2) the design of a multiagent system for symbolic music analysis, enabling modular, scalable, and explainable workflows. Experimental results demonstrate that AI agents effectively enhance musical pattern recognition, compositional parameterization, and educational feedback, outperforming traditional automated methods in terms of interpretability and adaptability. The findings highlight key challenges concerning transparency, cultural bias, and the definition of hybrid evaluation metrics, emphasizing the need for responsible deployment of AI in educational environments. This research contributes to a unified framework that bridges technical, pedagogical, and ethical considerations, offering evidence-based guidance for the design and application of intelligent agents in computational musicology and music education.
ELAIPBench: A Benchmark for Expert-Level Artificial Intelligence Paper Understanding
Xinbang Dai, Huikang Hu, Yongrui Chen
et al.
While large language models (LLMs) excel at many domain-specific tasks, their ability to deeply comprehend and reason about full-length academic papers remains underexplored. Existing benchmarks often fall short of capturing such depth, either due to surface-level question design or unreliable evaluation metrics. To address this gap, we introduce ELAIPBench, a benchmark curated by domain experts to evaluate LLMs' comprehension of artificial intelligence (AI) research papers. Developed through an incentive-driven, adversarial annotation process, ELAIPBench features 403 multiple-choice questions from 137 papers. It spans three difficulty levels and emphasizes non-trivial reasoning rather than shallow retrieval. Our experiments show that the best-performing LLM achieves an accuracy of only 39.95%, far below human performance. Moreover, we observe that frontier LLMs equipped with a thinking mode or a retrieval-augmented generation (RAG) system fail to improve final results, and can even harm accuracy through overthinking or noisy retrieval. These findings underscore the significant gap between current LLM capabilities and genuine comprehension of academic papers.
Quantum Artificial Intelligence (QAI): Foundations, Architectural Elements, and Future Directions
Siva Sai, Rajkumar Buyya
Mission critical (MC) applications such as defense operations, energy management, cybersecurity, and aerospace control require reliable, deterministic, and low-latency decision making under uncertainty. Although classical Machine Learning (ML) approaches are effective, they often struggle to meet the stringent constraints of robustness, timing, explainability, and safety in the MC domains. Quantum Artificial Intelligence (QAI), the fusion of machine learning and quantum computing (QC), can provide transformative solutions to the challenges faced by classical ML models. In this paper, we provide a comprehensive exploration of QAI for MC systems. We begin with a conceptual background to quantum computing, MC systems, and quantum machine learning (QML). We then examine the core mechanisms and algorithmic principles of QAI in MC systems, including quantum-enhanced learning pipelines, quantum uncertainty quantification, and quantum explainability frameworks. Subsequently, we discuss key application areas like aerospace, defense, cybersecurity, smart grids, and disaster management, focusing on the role of QAI in enhancing fault tolerance, real-time intelligence, and adaptability. We provide an exploration of the positioning of QAI for MC systems in the industry in terms of deployment. We also propose a model for management of quantum resources and scheduling of applications driven by timeliness constraints. We discuss multiple challenges, including trainability limits, data access and loading bottlenecks, verification of quantum components, and adversarial QAI. Finally, we outline future research directions toward achieving interpretable, scalable, and hardware-feasible QAI models for MC application deployment.
Building the future: innovative application and development prospects of a telecom big data open platform for intelligent social governance
QIU Baohua
The importance of telecom big data for the governance of intelligent societies is delved into. Taking into account the current policy environment, the research objectives and contributions are elucidated. The theoretical foundations of telecom big data, its developmental trends, and its status both domestically and internationally are presented, and the necessity of constructing a big data open platform and its application directions are discussed. Solutions to key issues, including multi-domain data fusion, spatiotemporal data model construction, and big data openness strategies, are analyzed in depth, and the project outcomes and their impact at the technological, economic, and social levels are forecast. This study provides a valuable reference for the future application of telecom big data.
Telecommunication, Technology
The Development of Quantum Technology: Challenges and Regulatory Aspects. A Review of Selected Legal Issues
Dorota Glaza-Jankowska
In the era of the Fourth Industrial Revolution, quantum technology is entering the scene with the promise of radically changing the technological paradigm. Quantum computing, simulation, quantum communications and the combination of this technology with artificial intelligence are opening up new horizons of innovation that could revolutionize various industries. Along with its enormous potential, this technology brings legal, ethical and social challenges that require detailed analysis and an interdisciplinary approach. The industrial deployment of quantum technology entails dilemmas regarding human rights, cyber security and national security, as well as the risk of exacerbating inequality, technological exclusion and algorithmic discrimination. With regard to artificial intelligence enhanced by quantum computing technology, we can speak of a completely new facet of the problem of the opacity of algorithms, which is determined not only by the cognitive limitations of the human mind, but stems from the indefinability of the world described by the laws of quantum mechanics. In light of the above, the thesis can be advanced that the classical approach to the principle of transparency and the postulates of creating algorithms that will clearly present the path to the final result, which are part of the broad current of creating ethical and explainable artificial intelligence, may prove difficult to realize in relation to Quantum AI. In the era of the Fourth Industrial Revolution, we are therefore faced with the challenge of implementing new instruments for testing, certifying and inspecting algorithms, such as tools for analyzing and visualizing the results of quantum algorithms, which will be suited to the specifics of Quantum AI and ensure the ethical correctness of the systems. 
This article will identify selected areas of application of quantum technology, and thus the potential benefits and risks of quantum technology, and also analyze the need to develop ethical and legal standards that allow for the sustainable and socially responsible development of these disruptive technologies.
Highly stretchable and self-adhesive multifunctional hydrogel for wearable and flexible sensors
Hao Zhong, Wubin Shan, Lei Liang
et al.
Ionic conductive hydrogel has recently garnered significant research attention due to its potential applications in the field of wearable and flexible electronics. Nonetheless, the integration of multifunctional and synergistic advantages, including reliable electronic properties, high swelling capacity, exceptional mechanical characteristics, and self-adhesive properties, presents an ongoing challenge. In this study, we have developed an ionic conductive hydrogel through the co-polymerization of 4-Acryloylmorpholine (ACMO) and sodium acrylate using UV curing technology. The hydrogel exhibits excellent mechanical properties, high conductivity, superior swelling capacity, and remarkable self-adhesive attributes. The hydrogel serves as a highly sensitive strain sensor, enabling precise monitoring of both substantial and subtle human motions. Furthermore, the hydrogel demonstrates the capability to adhere to human skin, functioning as a human-machine interface for the detection of physiological signals, including electromyogram (EMG) signals, with low interfacial impedance. This work is anticipated to yield a new class of stretchable and conductive materials with diverse potential applications, ranging from flexible sensors and wearable bio-electronics to contributions in the field of artificial intelligence.
Science (General), Social sciences (General)
Artificial Intelligence in Industry 4.0: A Review of Integration Challenges for Industrial Systems
Alexander Windmann, Philipp Wittenberg, Marvin Schieseck
et al.
In Industry 4.0, Cyber-Physical Systems (CPS) generate vast data sets that can be leveraged by Artificial Intelligence (AI) for applications including predictive maintenance and production planning. However, despite the demonstrated potential of AI, its widespread adoption in sectors like manufacturing remains limited. Our comprehensive review of recent literature, including standards and reports, pinpoints key challenges: system integration, data-related issues, managing workforce-related concerns, and ensuring trustworthy AI. A quantitative analysis highlights particular challenges and topics that are important for practitioners but have yet to be sufficiently investigated by academics. The paper briefly discusses existing solutions to these challenges and proposes avenues for future research. We hope that this survey serves as a resource for practitioners evaluating the cost-benefit implications of AI in CPS and for researchers aiming to address these urgent challenges.