Immunotherapy plays a crucial role in cancer treatment, but its efficacy varies among patients, with some showing suboptimal responses. Recent studies indicate that radiotherapy not only kills tumor cells locally but also induces immunogenic cell death and modulates the tumor immune microenvironment, acting like an “in situ vaccine.” This provides a strong biological basis for combining radiotherapy and immunotherapy. However, challenges remain, including individual variability in responses, complex treatment regimens, and overlapping toxicities. Artificial intelligence (AI), especially through machine learning, presents new solutions by processing high-dimensional multi-omics data. This article explores how AI enhances radiotherapy and immunotherapy combinations by optimizing synergistic effects, developing predictive biomarkers, and elucidating the regulatory mechanisms of radiotherapy on the immune microenvironment, while also discussing future directions for AI in oncology.
The 21st century is the era of e-education, and there is an urgent need to adapt to modern educational needs in line with the 4th Sustainable Development Goal (SDG 4). This study aligns with SDG 4's aim of providing inclusive and equitable quality education. It discusses the emergence of AI-based proctoring software as a means of addressing the increasing incidence of cheating in remote learning environments and online assessments, thereby reducing the need for physical infrastructure and travel and contributing to sustainability. The proposed hybrid proctoring system operates in two folds: the first detects significant improvements in a candidate's marks, using a Long Short-Term Memory (LSTM) network on a custom dataset of 350 students, to identify suspected candidates; the second analyses the exam video recordings of suspected candidates frame by frame, performing behaviour analysis with a Convolutional Neural Network (CNN) to detect anomalies. The anomalies identified include off-screen gazes, use of cell phones and earphones, and talking. The proposed system achieves an accuracy of about 87.8% while exhibiting resource-efficient performance in terms of processing time, CPU usage, and memory usage.
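The first fold's idea, flagging candidates whose exam mark jumps far above their own history, can be illustrated with a much simpler statistical stand-in for the LSTM. The threshold, candidate ids, and scores below are hypothetical, not from the paper's dataset:

```python
from statistics import mean, stdev

def flag_suspects(score_histories, z_threshold=2.0):
    """Flag candidates whose latest mark jumps well above their own history.

    score_histories: dict mapping candidate id -> list of marks, with the
    final entry being the proctored exam. A candidate is flagged when the
    final mark exceeds the mean of the earlier marks by more than
    z_threshold standard deviations. (Toy stand-in for the paper's LSTM.)
    """
    suspects = []
    for cand, marks in score_histories.items():
        history, latest = marks[:-1], marks[-1]
        if len(history) < 2:
            continue  # not enough history to estimate a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero for flat histories
        if (latest - mu) / sigma > z_threshold:
            suspects.append(cand)
    return suspects

scores = {
    "s01": [52, 55, 50, 54, 93],   # sudden jump -> suspect
    "s02": [70, 72, 75, 74, 76],   # steady performer
    "s03": [40, 60, 50, 45, 58],   # noisy but plausible
}
print(flag_suspects(scores))  # → ['s01']
```

A sequence model earns its keep over this rule when improvement patterns are temporal (e.g. gradual drift vs. a one-off spike), which a fixed z-score cannot distinguish.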
Background: Risk stratification is an integral component of ST-segment-elevation myocardial infarction (STEMI) management practices. This study aimed to derive a machine learning (ML) model for risk stratification and identification of factors associated with in-hospital and 30-day mortality in patients with STEMI and compare it with the traditional TIMI score. Methods: This was a single-center prospective study wherein subjects aged >18 years with STEMI (n = 1700) were enrolled. Patients were divided into two groups: a training dataset (n = 1360) and a validation dataset (n = 340). Six ML algorithms (Extra Trees, Random Forest, Multilayer Perceptron, CatBoost, Logistic Regression, and XGBoost) were used to train and tune the ML model and to determine the predictors of worse outcomes using feature selection. Additionally, the performance of the ML models for both in-hospital and 30-day outcomes was compared to that of the TIMI score. Results: Of the 1700 patients, 168 (9.88%) had in-hospital mortality while 30-day mortality was reported in 210 (12.35%) subjects. In terms of in-hospital mortality, the Random Forest ML model (sensitivity: 80%; specificity: 74%; AUC: 80.83%) outperformed the TIMI score (sensitivity: 70%; specificity: 64%; AUC: 70.7%). Similarly, the Random Forest ML model (sensitivity: 81.63%; specificity: 78.35%; AUC: 78.29%) had better performance than the TIMI score (sensitivity: 63.26%; specificity: 63.91%; AUC: 63.59%) for 30-day mortality. Key predictors of worse outcomes at 30 days included mitral regurgitation on presentation, smoking, cardiogenic shock, diabetes, ventricular septal rupture, Killip class, age, female gender, low blood pressure, and low ejection fraction. Conclusions: The ML model outperformed the traditional regression-based TIMI score as a risk stratification tool in patients with STEMI.
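The sensitivity, specificity, and AUC figures quoted above can all be computed from raw predictions; a minimal stdlib sketch (toy data, not the study's cohort), with AUC via its Mann-Whitney formulation:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """ROC AUC as the probability that a random positive outranks a
    random negative (Mann-Whitney statistic); ties count half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: risk scores for six patients (1 = died, 0 = survived)
y_true = [1, 1, 0, 1, 0, 0]
risk   = [0.9, 0.6, 0.4, 0.7, 0.2, 0.5]
y_pred = [1 if r >= 0.5 else 0 for r in risk]
sens, spec = sensitivity_specificity(y_true, y_pred)
```

The rank-based AUC is threshold-free, which is why it can disagree with sensitivity/specificity computed at a single cutoff.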
Surgery, Diseases of the circulatory (Cardiovascular) system
The integration of distributed generation (DG), renewable energy sources (RES), and power electronic converters into distribution systems (DSs) has introduced significant power quality (PQ) challenges, such as voltage fluctuations, harmonic distortions, and transients. These issues can undermine the reliability and stability of power systems, making it essential to address them to ensure a consistent and resilient power supply, especially as RES adoption continues to grow. While previous reviews have explored artificial intelligence (AI) applications for PQ management, most have been limited to specific AI techniques or targeted PQ problems, such as harmonics. This review, however, offers a comprehensive synthesis of AI-based approaches across a wide range of PQ applications, encompassing detection, classification, and improvement, while also considering the specific PQ issues addressed in each case. By adopting an integrated approach, this review identifies key research gaps, particularly the limited focus on leveraging AI to control power converters in RESs for PQ improvement, as most existing studies emphasize devices like active power filters, compensators, and conditioners. The review also evaluates the effectiveness of these AI methods in terms of accuracy and the extent of total harmonic distortion (THD) reduction. In addition, it provides novel insights that can help guide researchers, engineers, and industry professionals toward developing more adaptive, scalable, and robust PQ solutions. Finally, future research directions are proposed to advance AI-based PQ management, facilitating the integration of AI into diverse and evolving power systems.
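As a small aside on the THD metric the review uses to compare methods: total harmonic distortion can be computed directly from a signal's harmonic amplitudes. A minimal sketch with illustrative (invented) harmonic content:

```python
import math

def thd(harmonic_amplitudes):
    """Total harmonic distortion of a waveform.

    harmonic_amplitudes: [V1, V2, V3, ...] where V1 is the fundamental
    amplitude and V2.. are the higher harmonics.
    THD = sqrt(V2^2 + V3^2 + ...) / V1
    """
    v1, rest = harmonic_amplitudes[0], harmonic_amplitudes[1:]
    return math.sqrt(sum(v * v for v in rest)) / v1

# A voltage with 5% third-harmonic and 3% fifth-harmonic content:
print(round(thd([1.0, 0.0, 0.05, 0.0, 0.03]) * 100, 2), "%")  # → 5.83 %
```

In practice the amplitudes come from an FFT of the measured waveform; the quoted reductions in the review are differences in this ratio before and after compensation.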
Abstract Background Artificial intelligence has become an integral part of modern radiology, improving diagnostic accuracy, workflow efficiency, and decision-making processes. However, the acceptance and effective use of artificial intelligence in healthcare largely depend on healthcare professionals’ perceptions of, and literacy regarding, these technologies. The aim of this study was to develop and validate the “Perception Scale for Artificial Intelligence in Radiologic Imaging” and to examine healthcare professionals’ perceptions of artificial intelligence in radiology, together with the factors that influence these perceptions, particularly the role of artificial intelligence literacy. Methods This cross-sectional, questionnaire-based study was conducted between March and May 2025 among healthcare professionals working in public and private hospitals in Turkey. Data were collected from 425 participants using convenience sampling. The “Perception Scale for Artificial Intelligence in Radiologic Imaging” was developed for this study, and the “Artificial Intelligence Literacy Scale” was employed to test contextual validity. Reliability was evaluated using Cronbach’s alpha, and analyses were performed with parametric tests in SPSS 26.0 and AMOS 24. Results The Perception Scale for Artificial Intelligence in Radiologic Imaging demonstrated a valid three-dimensional structure with 14 items and high reliability. The mean perception score of healthcare professionals regarding artificial intelligence in radiologic imaging was 3.14 ± 0.66 (mean ± standard deviation), indicating a moderate level of perception.
A significant positive correlation was observed between artificial intelligence literacy and perception (r = 0.270, p < 0.001), while no significant differences were found across demographic variables (p > 0.05). Conclusion The study highlights that healthcare professionals in Turkey hold a moderately positive perception of artificial intelligence use in radiology. Furthermore, higher artificial intelligence literacy levels are associated with more favorable perceptions. These findings emphasize the need for educational initiatives to improve artificial intelligence literacy and foster informed, confident adoption of artificial intelligence technologies in clinical radiology practice.
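Since the scale's reliability was assessed with Cronbach's alpha, the statistic itself is easy to illustrate; a minimal sketch with invented Likert responses (not the study's data):

```python
from statistics import variance

def cronbach_alpha(items):
    """Internal-consistency reliability of a multi-item scale.

    items: one list per scale item, each holding the scores of the same
    respondents in the same order.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items)
    item_var_sum = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Five respondents answering a three-item subscale on a 1-5 Likert scale:
items = [[3, 4, 5, 2, 4], [2, 4, 4, 3, 5], [3, 5, 4, 2, 4]]
alpha = cronbach_alpha(items)  # ≈ 0.864 for these invented responses
```

Alpha rises when items covary strongly relative to their individual variances, which is exactly what "high reliability" of a unidimensional factor implies.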
Introduction: Biological systems inherently exhibit metabolic variability that functions within optimal ranges, as described by the Constrained Disorder Principle (CDP). Deviations from these ranges, whether excessive or insufficient, are linked to adverse health outcomes. This review examines how signatures of metabolic variability can enhance GLP-1 receptor agonist therapy using artificial intelligence platforms. Methods: We conducted a comprehensive literature review examining metabolic variability across various parameters, including heart rate, blood pressure, lipid levels, glucose control, body weight, and metabolic rate. We focused on studies investigating the relationship between variability patterns and treatment responses, particularly in the context of GLP-1 receptor agonist therapy and the use of CDP-based AI systems. Results: Increased variability in metabolic parameters consistently predicts adverse outcomes, such as cardiovascular events, mortality, and disease progression. Heart rate variability shows a U-shaped association with outcomes, while blood pressure, lipid, and glucose variability demonstrate predominantly linear relationships with risk. Body weight variability is associated with cognitive decline and an increased risk of cardiovascular complications. Additionally, genetic polymorphisms and baseline metabolic profiles can influence responses to GLP-1 receptor agonists. CDP-based AI platforms have successfully enhanced therapeutic outcomes in conditions like heart failure, cancer, and multiple sclerosis by leveraging biological variability rather than suppressing it. Summary: The identification of metabolic variability signatures offers valuable predictive insights for personalizing therapy with GLP-1 receptor agonists. Artificial intelligence systems based on clinical data patterns that include these variabilities represent a significant shift toward dynamic and individualized treatment approaches. 
This can enhance therapeutic efficacy and help counteract drug resistance in chronic metabolic disorders, potentially improving the response to GLP-1-based therapies.
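One simple way to quantify a visit-to-visit variability signature of the kind discussed above is the coefficient of variation; a toy sketch with invented glucose readings (not data from any cited study):

```python
from statistics import mean, stdev

def coefficient_of_variation(readings):
    """Relative variability: standard deviation as a fraction of the mean."""
    return stdev(readings) / mean(readings)

# Two hypothetical patients with the same mean fasting glucose (mg/dL)
# but very different visit-to-visit variability:
stable  = [98, 102, 100, 99, 101]
erratic = [70, 135, 88, 120, 87]
print(round(coefficient_of_variation(stable), 3))   # → 0.016
print(round(coefficient_of_variation(erratic), 3))  # → 0.266
```

Identical means would make these patients indistinguishable to a mean-based model; a variability-aware platform of the kind described would treat them very differently.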
Introduction: This study employs the Job Demands-Resources model and Conservation of Resources theory to examine the impact of artificial intelligence (AI) technology adoption on intergenerational knowledge transfer among older employees. It focuses on the psychological motivation underlying this phenomenon and identifies individual factors that affect intergenerational knowledge transfer. The purpose is to gain a deep understanding of the internal mechanisms of employee cognition and behavior change in the context of technological transformation. Methods: We surveyed 635 older employees from various industries in China and analyzed the data using SPSS 27.0, Mplus 8.3, and fsQCA 4.1. The data were analyzed via a moderated sequential mediation model to examine the relationships among AI technology adoption, identity threat, relational crafting, digital self-efficacy, and intergenerational knowledge transfer, supplemented by fuzzy-set qualitative comparative analysis (fsQCA). The study tested the mediating effects of identity threat and relational crafting between AI technology adoption and intergenerational knowledge transfer, as well as the moderating role of digital self-efficacy. In addition, fsQCA was used to test antecedents of intergenerational knowledge transfer among older employees. Results: The findings indicate that AI technology adoption positively influences intergenerational knowledge transfer. Identity threat and relational crafting play mediating roles between AI technology adoption and intergenerational knowledge transfer and also serve as sequential mediators. Digital self-efficacy negatively moderates the impact of AI technology adoption on identity threat, thereby moderating both the mediating role of identity threat and the sequential mediating effect of identity threat and relational crafting.
Additionally, fsQCA identified three antecedent configurations that trigger intergenerational knowledge transfer among older employees. Discussion: Prior research on AI technology adoption has tended to emphasize singular positive or negative impacts on specific variables. This study constructs a model that incorporates both positive and negative effects, elucidating the multifaceted mechanisms through which AI technology adoption influences intergenerational knowledge transfer and enriching research on the consequences of AI technology adoption. While the existing literature often highlights negative psychological and behavioral impacts of AI technology adoption on older employees, the present findings show that AI technology adoption can significantly enhance intergenerational knowledge transfer among older employees, thereby complementing current findings. Finally, by adopting configurational thinking, this study identifies multiple pathways through which various factors affect intergenerational knowledge transfer, providing a useful complement to single-factor analyses of AI technology adoption’s impact. The study thereby offers practical insights for organizations seeking to develop inclusive technology-culture strategies.
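The product-of-coefficients logic behind mediation analysis can be sketched in a few lines. This is a deliberately simplified bivariate version with invented data: a full moderated sequential mediation model, as estimated in Mplus, would regress the outcome on the mediator and the predictor jointly and add moderator interaction terms.

```python
from statistics import mean

def ols_slope(x, y):
    """Slope of the simple least-squares regression of y on x."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def simple_indirect_effect(x, m, y):
    """Product-of-coefficients estimate of a simple mediation effect:
    a = effect of predictor x on mediator m,
    b = effect of mediator m on outcome y (bivariate here for brevity).
    Indirect effect = a * b.
    """
    return ols_slope(x, m) * ols_slope(m, y)

# Invented example: adoption -> mediator -> knowledge transfer
x = [1, 2, 3, 4]   # AI adoption intensity
m = [2, 4, 6, 8]   # mediator (e.g. relational crafting), a = 2
y = [3, 5, 7, 9]   # knowledge transfer, b = 1
```

Significance of such an indirect effect is usually assessed with bootstrapped confidence intervals rather than a single point estimate.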
Employee attrition is a persistent and significant problem across leading businesses globally: it not only reduces productivity but also impedes businesses' ability to maintain continuity and plan strategically. Typically, employee attrition occurs when employees are dissatisfied with their work experiences. To address this issue, proactive measures can be implemented to enhance employee retention through early identification and mitigation of the factors that contribute to perceived dissatisfaction in the workplace. In the current era of big data, people analytics has been widely adopted by human resource (HR) departments across various businesses with the aim of understanding different workforces across distinct fields and reducing the attrition rate. As a result, organizations are incorporating machine learning (ML) and artificial intelligence (AI) into HR practices to help decision-makers make better-informed decisions about their human resources. ML has proven to be an effective method for predicting employee attrition, and optimizing its hyperparameters can further improve prediction accuracy. Therefore, this study aimed to tune the hyperparameters of the boosting family of ML algorithms and develop a practical tool for employee attrition prediction through the adoption of Bayesian optimization (BO). Using the IBM HR Analytics dataset, the study compared the performance of six ensemble classifiers and identified categorical boosting (CB) as the superior model, which achieved the highest accuracy of 95.8% and AUC of 0.98 with optimized hyperparameters, demonstrating its comprehensiveness and reliability.
The comparison results showed how various boosting ML variants could be used to build a promising tool that is capable of accurately predicting employee attrition and enabling HR managers to enhance employee retention as well as satisfaction.
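A hedged sketch of the hyperparameter-tuning loop: real Bayesian optimization builds a surrogate model of the objective and samples where expected improvement is highest, but the overall search structure can be illustrated with plain random search over a synthetic accuracy surface. The objective function, parameter ranges, and peak below are invented, not the study's results:

```python
import math
import random

def mock_cv_accuracy(learning_rate, depth):
    """Synthetic stand-in for a cross-validated boosting accuracy.
    Peaks at learning_rate = 0.1 and depth = 6; purely illustrative."""
    return (0.95
            - 0.05 * (math.log10(learning_rate) + 1) ** 2
            - 0.002 * (depth - 6) ** 2)

def tune(n_trials=200, seed=0):
    """Random search over two hyperparameters (a simple stand-in for the
    surrogate-guided sampling a true Bayesian optimizer would use)."""
    rng = random.Random(seed)
    best_score, best_params = -1.0, None
    for _ in range(n_trials):
        params = {
            "learning_rate": 10 ** rng.uniform(-3, 0),  # log-uniform draw
            "depth": rng.randint(2, 10),
        }
        score = mock_cv_accuracy(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

best_score, best_params = tune()
```

BO's advantage over this baseline is sample efficiency: each expensive cross-validation run informs where to look next, which matters when one evaluation means training a full boosting ensemble.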
Abstract Optoelectronic synapses can be crucial for advancing artificial intelligence and visual systems. Optoelectronic synapses based on organic field-effect transistors have been widely studied but still face significant challenges, including pronounced programming nonlinearity, restricted response wavelengths, high operating voltages, and limited storage memory. Organic electrochemical transistors are another candidate but have not yet been studied intensively. Additionally, wafer-scale photolithographic fabrication of optoelectronic synapses responding to near-infrared (NIR) light is highly desirable but rarely reported. Here, we propose a NIR organic photoelectrochemical transistor (OPECT) array capable of low-voltage multi-level memory, fabricated by photolithography. Based on a NIR photo-induced electrochemical doping mechanism, the OPECTs enable linear weight programming with ultra-low nonlinearity (−0.015) over a wide range (47.3). We further demonstrate OPECT arrays for image sensing, memorization, and visualization. Finally, a convolutional computing system is constructed that executes accurate recognition of noisy handwritten digits. This work offers promising insight into neuromorphic sensory computing applications.
Alzheimer’s disease (AD) is a neurodegenerative disorder characterized by memory loss. While applying machine learning (ML) demands a certain level of expertise, which is often a barrier for healthcare professionals, automated machine learning (AutoML) significantly lowers this barrier. This study analyzes an AutoML tool (PyCaret) for AD classification and prediction. Two experiments were designed to evaluate its diagnostic and prognostic capabilities with AD, mild cognitive impairment (MCI), and normal controls (NC). SHapley Additive exPlanations (SHAP) were used to explain the ML models. For diagnosis, the tool achieved an accuracy of 98.6% for NC vs AD, 91.3% for NC vs MCI, 92.5% for MCI vs AD, and 89.5% for the multiclass NC vs MCI vs AD task. For prognosis, predicting participants' cognitive states four years after their initial visit yielded an accuracy of 92.8% for NC vs AD, 82.7% for NC vs MCI, 90.2% for MCI vs AD, and 81.4% for NC vs MCI vs AD. These results are in the range of, and in some cases improve on, the state of the art, even when compared to deep learning solutions. They confirm the potential of AutoML tools to automate ML algorithm selection and tuning for a specific medical application.
Advancements in high-throughput microscopy imaging have transformed cell analytics, enabling functionally relevant, rapid, and in-depth bioanalytics with Artificial Intelligence (AI) as a powerful driving force in cell therapy (CT) manufacturing. High-content microscopy screening often suffers from systematic noise, such as uneven illumination or vignetting artifacts, which can result in false-negative findings in AI models. Traditionally, AI models have been expected to learn to deal with these artifacts, but success in an inductive framework depends on sufficient training examples. To address this challenge, we propose a two-fold approach: (1) reducing noise through an image decomposition and restoration technique called the Periodic Plus Smooth Wavelet transform (PPSW) and (2) developing an interpretable machine learning (ML) platform using tree-based Shapley Additive exPlanations (SHAP) to enhance end-user understanding. By correcting artifacts during pre-processing, we lower the inductive learning load on the AI and improve end-user acceptance through a more interpretable heuristic approach to problem solving. Using a dataset of human Mesenchymal Stem Cells (MSCs) cultured under diverse density and media environment conditions, we demonstrate supervised clustering with mean SHAP values, derived from the ‘DFT Modulus’ applied to the decomposition of bright-field images, in the trained tree-based ML model. Our innovative ML framework offers end-to-end interpretability, leading to improved precision in cell characterization during CT manufacturing.
Hanieh Rezazadeh Tamrin, Elham Saniei, Mehdi Salehi Barough
Introduction: Breast cancer is the most common cancer in women and a leading cause of cancer death. Thermography is one method of breast cancer diagnosis. The most important challenges in early detection from these images are human error and lack of access to a skilled reader. Artificial intelligence methods for image processing can support early detection and reduce human error. The main aim of this research was to introduce hybrid networks for the intelligent diagnosis of breast cancer from thermographic images.
Method: The thermographic images used in this study were collected from the DMR-IR database. First, the main features of the images were extracted by a deep convolutional neural network (CNN). Then, fully connected neural network (FCNN) and support vector machine (SVM) classifiers were used to classify breast cancer from the thermographic images.
Results: The accuracy rates for the CNN-FC and CNN-SVM algorithms were 94.2% and 95%, respectively. In addition, the reliability parameters for these classifiers were calculated as 92.1% and 97.5%, and the sensitivities as 95.5% and 94.1%, respectively.
Conclusion: The proposed model based on the deep hybrid network has good accuracy compared to similar algorithms; therefore, it can help doctors in the early diagnosis of breast cancer through thermographic images and minimize human error.
Computer applications to medicine. Medical informatics, Medical technology
The construction of new critical infrastructure, represented by high-speed, full-time signal coverage, intelligent and fine-grained urban management, and deep-space and deep-sea scientific innovation experimental fields, has entered a new stage with the deep integration of new technologies such as 5G/6G, artificial intelligence, and blockchain across various fields. The security evaluation of cryptography applications, as a key technological resource for ensuring the security of national information, integration, and innovation infrastructure, has risen to the level of international law and national development strategy. It is urgent to construct a comprehensive, fine-grained, and self-evolving cryptography security evaluation system covering the whole data lifecycle. This paper considers the typical APT and ransomware attacks faced in recent years by new critical infrastructure in industries such as energy, medicine, and transportation. It then analyzes the growing demand for security evaluation of cryptography applications in the face of new business requirements such as preventing endogenous data security risks, achieving differentiated privacy protection, and supporting authenticated attack traceability. It also examines the new challenges that new information infrastructure (including big data, 5G communication, and fundamental software), integration infrastructure (including intelligent connected vehicles and intelligent connected industrial control systems), and innovation infrastructure (including big data, artificial intelligence, and blockchain) bring to the security evaluation of cryptography applications. Furthermore, it sets out the new requirements that domestically produced cryptographic algorithms and protocols, deployed on high-performance computing chips, ultra-high-speed communication modules, and large-capacity storage media, impose on security evaluation technology. Finally, the development of automated and intelligent security evaluation technology for cryptography applications is explored.
A complete surveillance strategy for wind turbines requires both the condition monitoring (CM) of their mechanical components and the structural health monitoring (SHM) of their load-bearing structural elements (foundations, tower, and blades); it therefore spans both the civil and mechanical engineering fields. Several traditional and advanced non-destructive techniques (NDTs) have been proposed for both areas of application in recent years. These include visual inspection (VI), acoustic emissions (AEs), ultrasonic testing (UT), infrared thermography (IRT), radiographic testing (RT), electromagnetic testing (ET), oil monitoring, and many other methods. These NDTs can be performed by human personnel, robots, or unmanned aerial vehicles (UAVs); they can also be applied to isolated wind turbines or systematically across whole onshore or offshore wind farms. These non-destructive approaches are extensively reviewed here; more than 300 scientific articles, technical reports, and other documents are included in this review, encompassing all the main aspects of these survey strategies. Particular attention is dedicated to the latest developments of the last two decades (2000–2021). Highly influential research works, which received major attention from the scientific community, are highlighted and commented upon. Furthermore, for each strategy, a selection of relevant applications is reported by way of example, including newer and less developed strategies as well.
When skin cells divide abnormally, a tumor can form in the skin, lymph, or blood. Masses can be benign or malignant: benign masses remain confined to one area and do not spread, while malignant ones can spread throughout the body via the lymphatic system. Skin cancer is easier to diagnose than many other cancers because its symptoms are visible to the naked eye, which motivates an artificial-intelligence-based methodology for diagnosing this cancer with higher accuracy. This article proposes a new non-invasive diagnostic method based on AlexNet and an Extreme Learning Machine network to improve diagnostic results. The method is then optimized with a new improved version of the Grasshopper Optimization Algorithm (GOA). Simulations comparing the proposed method with several state-of-the-art methods showed that, with 98% accuracy and 93% sensitivity, it achieves the highest efficiency.
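A minimal, heavily simplified sketch of a grasshopper-style optimizer on a toy objective can show the mechanics of this class of metaheuristic. This is not the paper's improved GOA variant: the social-force coefficients, shrinking schedule, and swarm size below are illustrative assumptions.

```python
import math
import random

def social_force(r, f=0.5, l=1.5):
    """Attraction/repulsion between two grasshoppers at distance r
    (positive = attraction at medium range, negative = repulsion up close)."""
    return f * math.exp(-r / l) - math.exp(-r)

def goa_minimize(obj, dim=2, n=20, iters=100, lb=-5.0, ub=5.0, seed=1):
    """Toy grasshopper-style minimizer: each agent moves by the summed
    social forces from its neighbors, pulled toward the best-so-far
    position, with a comfort-zone coefficient c shrinking over time."""
    rng = random.Random(seed)
    swarm = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    best = min(swarm, key=obj)[:]
    for t in range(iters):
        c = 1.0 - 0.9999 * t / iters  # shrinks exploration over iterations
        new_swarm = []
        for i, xi in enumerate(swarm):
            step = [0.0] * dim
            for j, xj in enumerate(swarm):
                if i == j:
                    continue
                d = math.dist(xi, xj) or 1e-12
                for k in range(dim):
                    step[k] += c * (ub - lb) / 2 * social_force(d) \
                               * (xj[k] - xi[k]) / d
            # move relative to the current best, clamped to the bounds
            cand = [min(ub, max(lb, c * step[k] + best[k]))
                    for k in range(dim)]
            new_swarm.append(cand)
        swarm = new_swarm
        cur = min(swarm, key=obj)
        if obj(cur) < obj(best):
            best = cur[:]
    return best

sphere = lambda x: sum(v * v for v in x)  # toy objective, minimum at origin
best = goa_minimize(sphere)
```

In the paper's setting the objective would be classification error of the AlexNet-ELM pipeline rather than this toy sphere function.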
Hajira Dambha-Miller, Glenn Simpson, Ralph K Akyea, et al.
Background: Multiple long-term health conditions (multimorbidity) (MLTC-M) are increasingly prevalent and associated with high rates of morbidity, mortality, and health care expenditure. Strategies to address this have primarily focused on the biological aspects of disease, but MLTC-M also result from and are associated with additional psychosocial, economic, and environmental barriers. A shift toward more personalized, holistic, and integrated care could be effective. This could be made more efficient by identifying groups of populations based on their health and social needs. In turn, these will contribute to evidence-based solutions supporting delivery of interventions tailored to address the needs pertinent to each cluster. Evidence is needed on how to generate clusters based on health and social needs and quantify the impact of clusters on long-term health and costs.
Objective: We intend to develop and validate population clusters that consider determinants of health and social care needs for people with MLTC-M using data-driven machine learning (ML) methods compared to expert-driven approaches within primary care national databases, followed by evaluation of cluster trajectories and their association with health outcomes and costs.
Methods: The mixed methods program of work, with parallel work streams, includes the following: (1) qualitative semistructured interview studies exploring patient, caregiver, and professional views on clinical and socioeconomic factors influencing experiences of living with or seeking care in MLTC-M; (2) a modified Delphi with relevant stakeholders to generate variables on health and social (wider) determinants and to examine the feasibility of including these variables within existing primary care databases; and (3) a cohort study with expert-driven segmentation, alongside data-driven algorithms. Outputs will be compared, clusters characterized, and trajectories over time examined to quantify associations with mortality, additional long-term conditions, worsening frailty, disease severity, and 10-year health and social care costs.
Results: The study will commence in October 2021 and is expected to be completed by October 2023.
Conclusions: By studying MLTC-M clusters, we will assess how more personalized care can be developed, how accurate costs can be provided, and how to better understand the personal and medical profiles and environment of individuals within each cluster. Integrated care that considers “whole persons” and their environment is essential in addressing the complex, diverse, and individual needs of people living with MLTC-M.
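In its simplest form, the data-driven clustering step proposed here could look like plain k-means over standardized need indicators; a toy sketch in which the two features and all patient values are invented for illustration:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means; points are equal-length feature tuples
    (e.g. standardized health and social-need indicators)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        # recompute each centroid as the mean of its cluster
        new_centroids = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centroids.append(tuple(sum(d) / len(cl) for d in zip(*cl)))
            else:
                new_centroids.append(centroids[i])  # keep an empty cluster's centroid
        if new_centroids == centroids:
            break  # converged
        centroids = new_centroids
    return centroids, clusters

# Two obvious groups of toy "patients": (condition count, deprivation score)
pts = [(1, 0.1), (2, 0.2), (1, 0.3), (8, 0.9), (9, 0.8), (8, 0.7)]
centroids, clusters = kmeans(pts, 2)
```

Real needs-based segmentation would use many more variables, handle mixed categorical/continuous data, and validate cluster stability, which is exactly where the protocol's comparison with expert-driven segmentation comes in.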
International Registered Report Identifier (IRRID): PRR1-10.2196/34405
Medicine, Computer applications to medicine. Medical informatics