Thiago Fernandes Pinto, Juliana de Melo Batista dos Santos, Estéfane Caroline Monteiro Reis
et al.
Abstract
Background: Individuals with severe chronic obstructive pulmonary disease (COPD) may exhibit thoracoabdominal asynchrony, which reduces ventilatory efficiency. A novel intervention using elastic tape (ET) applied to the chest wall has been shown to acutely reduce thoracoabdominal asynchrony and dyspnea during exercise among individuals with COPD. We hypothesize that using ET in pulmonary rehabilitation (PR) may increase the benefits of PR in this population.
Objective: This study aims to evaluate the additional effects of ET on exercise capacity, symptoms of anxiety and depression, health-related quality of life, and physical activity in daily life among male individuals with moderate to very severe COPD who are undergoing PR.
Methods: This is a protocol for a randomized, controlled, 2-arm, parallel, assessor-blinded clinical trial. Individuals will be followed for 8 weeks, twice a week, with PR sessions lasting approximately 1 hour. Health status (COPD Assessment Test), health-related quality of life (Chronic Respiratory Questionnaire), and psychological distress (Hospital Anxiety and Depression Scale) will be assessed before and after the intervention. Exercise capacity will then be assessed via the incremental shuttle walking test and endurance shuttle walking test, and participants will wear a triaxial accelerometer (ActiGraph) for 7 days to assess physical activity in daily life. Subsequently, individuals will be randomized into ET or sham groups; both groups will complete a PR program (2 times per week for 8 weeks). The ET group will receive applications of ET, whereas the sham group will receive a nonelastic tape. Data will be presented as means and SDs or medians and IQRs. Intergroup comparisons will be performed using a 2-way ANOVA followed by the Bonferroni post hoc test, or the Kruskal-Wallis test followed by the Dunn post hoc test. The threshold for statistical significance will be set at 5%.
Results: The clinical trial registration was approved in June 2023. Recruitment and data collection for the trial are ongoing; as of November 2025, a total of 10 individuals have been recruited, and the results are expected to be available by the end of November 2026.
Conclusions: We hypothesize that the use of ET can enhance the benefits of PR in individuals with moderate to very severe COPD and increase exercise capacity and quality of life, as well as reduce symptoms of anxiety and depression.
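The statistical plan above (2-way ANOVA with Bonferroni correction when normality holds, or Kruskal-Wallis followed by Dunn's test otherwise, at a 5% threshold) can be sketched as follows. All data, group sizes, and effect sizes below are invented for illustration; a full 2-way ANOVA would use, for example, statsmodels' `anova_lm`, and Dunn's test is available in scikit-posthocs as `posthoc_dunn`.

```python
# Hedged sketch of the intergroup comparison plan (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical endurance shuttle walking test scores: ET vs sham, pre vs post PR.
et_pre, et_post = rng.normal(300, 40, 15), rng.normal(360, 40, 15)
sham_pre, sham_post = rng.normal(300, 40, 15), rng.normal(330, 40, 15)

# Shapiro-Wilk normality check decides the parametric vs nonparametric branch.
normal = all(stats.shapiro(g)[1] > 0.05
             for g in (et_pre, et_post, sham_pre, sham_post))

if normal:
    # Post hoc pairwise comparisons with Bonferroni correction
    # (the omnibus 2-way ANOVA itself is omitted in this sketch).
    comparisons = [(et_post, sham_post), (et_pre, et_post), (sham_pre, sham_post)]
    raw_p = [stats.ttest_ind(a, b).pvalue for a, b in comparisons]
    adj_p = [min(1.0, p * len(raw_p)) for p in raw_p]  # Bonferroni adjustment
else:
    # Kruskal-Wallis across all four cells; Dunn's post hoc test would follow.
    _, p = stats.kruskal(et_pre, et_post, sham_pre, sham_post)
    adj_p = [p]

significant = [p < 0.05 for p in adj_p]  # 5% significance threshold
```

The branch structure mirrors the protocol's either/or wording: the nonparametric path is taken only when any group fails the normality check.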
Medicine, Computer applications to medicine. Medical informatics
We propose VL-DUN, a principled framework for joint All-in-One Medical Image Restoration and Segmentation (AiOMIRS) that bridges the gap between low-level signal recovery and high-level semantic understanding. While standard pipelines treat these tasks in isolation, our core insight is that they are fundamentally synergistic: restoration provides clean anatomical structures to improve segmentation, while semantic priors regularize the restoration process. VL-DUN resolves the sub-optimality of sequential processing through two primary innovations. (1) We formulate AiOMIRS as a unified optimization problem, deriving an interpretable joint unfolding mechanism where restoration and segmentation are mathematically coupled for mutual refinement. (2) We introduce a frequency-aware Mamba mechanism to capture long-range dependencies for global segmentation while preserving the high-frequency textures necessary for restoration. This allows for efficient global context modeling with linear complexity, effectively mitigating the spectral bias of standard architectures. As a pioneering work in the AiOMIRS task, VL-DUN establishes a new state-of-the-art across multi-modal benchmarks, improving PSNR by 0.92 dB and the Dice coefficient by 9.76%. Our results demonstrate that joint collaborative learning offers a superior, more robust solution for complex clinical workflows compared to isolated task processing. The code is available at https://github.com/cipi666/VLDUN.
Risk stratification (characterization) of tumors from radiology images can be more accurate and faster with computer-aided diagnosis (CAD) tools. Tumor characterization through such tools can also enable non-invasive cancer staging and prognosis and foster personalized treatment planning as a part of precision medicine. In this paper, we propose both supervised and unsupervised machine learning strategies to improve tumor characterization. Our first approach is based on supervised learning, for which we demonstrate significant gains with deep learning algorithms, particularly by utilizing a 3D convolutional neural network and transfer learning. Motivated by the radiologists’ interpretations of the scans, we then show how to incorporate task-dependent feature representations into a CAD system via a graph-regularized sparse multi-task learning framework. In the second approach, we explore an unsupervised learning algorithm to address the limited availability of labeled training data, a common problem in medical imaging applications. Inspired by learning-from-label-proportion approaches in computer vision, we propose to use a proportion-support vector machine for characterizing tumors. We also seek the answer to the fundamental question about the goodness of “deep features” for unsupervised tumor classification. We evaluate our proposed supervised and unsupervised learning algorithms on two different tumor diagnosis challenges: lung and pancreas with 1018 CT and 171 MRI scans, respectively, and obtain state-of-the-art sensitivity and specificity results in both problems.
Purpose: To evaluate two automated tools for detecting lesions on fluorine 18 (18F) fluoroestradiol (FES) PET/CT images and assess concordance of 18F-FES PET/CT with standard diagnostic CT and/or 18F fluorodeoxyglucose (FDG) PET/CT in patients with breast cancer. Materials and Methods: This retrospective analysis of a prospective study included participants with breast cancer who underwent 18F-FES PET/CT examinations (n = 52), 18F-FDG PET/CT examinations (n = 13 of 52), and diagnostic CT examinations (n = 37 of 52). A convolutional neural network was trained for lesion detection using manually contoured lesions. Concordance in lesions labeled by a nuclear medicine physician between 18F-FES and 18F-FDG PET/CT and between 18F-FES PET/CT and diagnostic CT was assessed using an automated software medical device. Lesion detection performance was evaluated using sensitivity and false positives per participant. Wilcoxon tests were used for statistical comparisons. Results: The study included 52 participants. The lesion detection algorithm achieved a median sensitivity of 62% with 0 false positives per participant. Compared with sensitivity in overall lesion detection, the sensitivity was higher for detection of high-uptake lesions (maximum standardized uptake value > 1.5, P = .002) and similar for detection of large lesions (volume > 0.5 cm3, P = .15). The artificial intelligence (AI) lesion detection tool was combined with a standardized uptake value threshold to demonstrate a fully automated method of labeling patients as having FES-avid metastases. Additionally, automated concordance analysis showed that 17 of 25 participants (68%) had over half of the detected lesions across two modalities present on 18F-FES PET/CT images. Conclusion: An AI model was trained to detect lesions on 18F-FES PET/CT images, and an automated concordance tool measured heterogeneity between 18F-FES PET/CT and standard-of-care imaging.
Keywords: Molecular Imaging-Cancer, Neural Networks, PET/CT, Breast, Computer Applications-General (Informatics), Segmentation, 18F-FES PET, Metastatic Breast Cancer, Lesion Detection, Artificial Intelligence, Lesion Matching. Supplemental material is available for this article. Clinical Trials Identifier: NCT04883814. Published under a CC BY 4.0 license.
Background: Accurately measuring the health care needs of patients with different diseases remains a public health challenge for health care management worldwide. There is a need for new computational methods to be able to assess the health care resources required by patients with different diseases to avoid wasting resources.
Objective: This study aimed to assess dissatisfaction with the allocation of health care resources from the perspective of patients with different diseases, which can help optimize resource allocation and better achieve several of the Sustainable Development Goals (SDGs), such as SDG 3 (“Good Health and Well-being”). Our goal was to show the effectiveness and practicality of large language models (LLMs) in assessing the distribution of health care resources.
Methods: We used aspect-based sentiment analysis (ABSA), which divides textual data into several aspects for sentiment analysis. In this study, we used Chat Generative Pretrained Transformer (ChatGPT) to perform ABSA of patient reviews based on 3 aspects (patient experience, physician skills and efficiency, and infrastructure and administration), in which we embedded chain-of-thought (CoT) prompting, and we compared the performance of Chinese and English LLMs on a Chinese dataset. Additionally, we used the International Classification of Diseases 11th Revision (ICD-11) application programming interface (API) to classify the sentiment analysis results into different disease categories.
Results: We evaluated the performance of the models by comparing predicted sentiments (either positive or negative) with the labels judged by human evaluators in terms of the aforementioned 3 aspects. The results showed that ChatGPT 3.5 offered the best combination of stability, cost, and runtime compared to ChatGPT-4o and Qwen-7b. The weighted total precision of our method based on the ABSA of patient reviews was 0.907, while the average accuracy of all 3 sampling methods was 0.893. Both values suggested that the model was able to achieve our objective. Using our approach, we identified that dissatisfaction is highest for sex-related diseases and lowest for circulatory diseases and that the need for better infrastructure and administration is much higher for blood-related diseases than for other diseases in China.
Conclusions: The results show that our LLM-based method can use patient reviews and the ICD-11 classification to assess the health care needs of patients with different diseases, which can assist with rational resource allocation.
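The evaluation step described above, comparing LLM-predicted sentiment against human labels per aspect and combining the per-aspect scores into a support-weighted total, can be sketched as follows. The records, aspect names, and label values are invented for illustration, and per-aspect agreement here is a simplified stand-in for the study's precision computation.

```python
# Minimal sketch of a support-weighted evaluation of aspect-based
# sentiment predictions against human labels (illustrative data only).
from collections import defaultdict

# (aspect, predicted, gold) triples; the aspects mirror the study's three:
# patient experience, physician skills and efficiency,
# infrastructure and administration.
records = [
    ("experience", "pos", "pos"), ("experience", "neg", "neg"),
    ("experience", "pos", "neg"),
    ("physician", "neg", "neg"), ("physician", "pos", "pos"),
    ("infrastructure", "neg", "neg"), ("infrastructure", "neg", "pos"),
]

correct, total = defaultdict(int), defaultdict(int)
for aspect, pred, gold in records:
    total[aspect] += 1
    correct[aspect] += int(pred == gold)

# Weight each aspect's agreement by its share of all reviews.
n = sum(total.values())
weighted_precision = sum(
    (correct[a] / total[a]) * (total[a] / n) for a in total
)
print(round(weighted_precision, 3))  # → 0.714
```

With support weighting, the combined score reduces to overall agreement across all records, which is why the three per-aspect fractions collapse to 5/7 here.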
Computer applications to medicine. Medical informatics, Public aspects of medicine
Background: Pediatric respiratory diseases, including asthma and pneumonia, are major causes of morbidity and mortality in children. Auscultation of lung sounds is a key diagnostic tool but is prone to subjective variability. The integration of artificial intelligence (AI) and machine learning (ML) with electronic stethoscopes offers a promising approach for automated and objective lung sound analysis.
Objective: This systematic review and meta-analysis assesses the performance of ML models in pediatric lung sound analysis. The study evaluates the methodologies, model performance, and database characteristics while identifying limitations and future directions for clinical implementation.
Methods: A systematic search was conducted in Medline via PubMed, Embase, Web of Science, OVID, and IEEE Xplore for studies published between January 1, 1990, and December 16, 2024. Inclusion criteria are as follows: studies developing ML models for pediatric lung sound classification with a defined database, physician-labeled reference standard, and reported performance metrics. Exclusion criteria are as follows: studies focusing on adults, cardiac auscultation, validation of existing models, or lacking performance metrics. Risk of bias was assessed using a modified Quality Assessment of Diagnostic Accuracy Studies (version 2) framework. Data were extracted on study design, dataset, ML methods, feature extraction, and classification tasks. Bivariate meta-analysis was performed for binary classification tasks, including wheezing and abnormal lung sound detection.
Results: A total of 41 studies met the inclusion criteria. The most common classification task was binary detection of abnormal lung sounds, particularly wheezing. Pooled sensitivity and specificity for wheeze detection were 0.902 (95% CI 0.726-0.970) and 0.955 (95% CI 0.762-0.993), respectively. For abnormal lung sound detection, pooled sensitivity was 0.907 (95% CI 0.816-0.956) and specificity 0.877 (95% CI 0.813-0.921). The most frequently used feature extraction methods were Mel-spectrogram, Mel-frequency cepstral coefficients, and short-time Fourier transform. Convolutional neural networks were the predominant ML model, often combined with recurrent neural networks or residual network architectures. However, high heterogeneity in dataset size, annotation methods, and evaluation criteria was observed. Most studies relied on small, single-center datasets, limiting generalizability.
Conclusions: ML models show high accuracy in pediatric lung sound analysis but face limitations due to dataset heterogeneity, lack of standard guidelines, and limited external validation. Future research should focus on standardized protocols and the development of large-scale, multicenter datasets to improve model robustness and clinical implementation.
Computer applications to medicine. Medical informatics, Public aspects of medicine
John Paul Kuwornu, David Brain, Kheng-Seong Ng
et al.
Abstract
Background: Reducing the time to surgery for patients requiring cholecystectomy may lessen the risk of adverse outcomes. Dedicated day-surgery lists supported by out-of-hospital remote monitoring have been explored as a potential solution; however, the cost-effectiveness of such innovative care models remains largely unexplored.
Objective: This study presents a cost-effectiveness analysis comparing an acute day-surgery care model with remote patient monitoring to a conventional inpatient-centric care model for high-acuity cases of cholecystitis.
Methods: Post-surgical complications, effectiveness (measured by bed days saved and quality-adjusted life years [QALYs]), and health care costs associated with the two models of care were compared over a 1-year time horizon using a decision tree model. Health care costs were estimated from the Australian health care funder perspective and expressed in 2023 Australian dollars. Uncertainty was assessed using both deterministic and probabilistic sensitivity analyses.
Results: The acute day-surgery care model dominated the conventional inpatient-centric care model by saving a mean of 1.7 inpatient days per patient (3.2 days for the conventional model versus 1.5 days for the acute day-surgery model) and lowering net health care costs by a mean of AU $1,416 (US $935) per case over the 1-year time horizon. There was no meaningful difference in QALYs between the care models. These results remained robust in both deterministic and probabilistic sensitivity analyses.
Conclusions: An acute day-surgery care model with remote patient monitoring for individuals with acute cases of cholecystitis requiring cholecystectomy would likely free bed days and provide economic benefits to the health care system compared to inpatient-centric practice. Uncertainty in QALY estimates remains a limitation.
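The decision-tree comparison described above, expected cost and bed days per arm followed by the incremental difference, can be sketched in a few lines. All probabilities, costs, and bed-day values below are invented for illustration and are not the study's inputs; a real model would also propagate parameter uncertainty, as in the reported sensitivity analyses.

```python
# Toy decision-tree cost-effectiveness sketch (hypothetical inputs).

def expected(branches):
    """branches: list of (probability, cost, bed_days) tuples for one arm."""
    cost = sum(p * c for p, c, _ in branches)
    days = sum(p * d for p, _, d in branches)
    return cost, days

# Each arm: (P(no complication), cost, bed days), (P(complication), ...).
day_surgery = [(0.9, 4000.0, 1.2), (0.1, 9000.0, 5.0)]
inpatient = [(0.9, 5500.0, 3.0), (0.1, 10500.0, 6.0)]

c_ds, d_ds = expected(day_surgery)
c_ip, d_ip = expected(inpatient)
incremental_cost = c_ds - c_ip  # negative -> day surgery is cheaper
bed_days_saved = d_ip - d_ds    # positive -> day surgery frees beds
```

An arm that is both cheaper and at least as effective, as in this toy example, "dominates" the comparator in the sense used in the abstract.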
Computer applications to medicine. Medical informatics, Surgery
Veriserum is an open-source dataset designed to support the training of deep learning registration for dual-plane fluoroscopic analysis. It comprises approximately 110,000 X-ray images of 10 knee implant pair combinations (2 femur and 5 tibia implants) captured during 1,600 trials, incorporating poses associated with daily activities such as level gait and ramp descent. Each image is annotated with an automatically registered ground-truth pose, while 200 images include manually registered poses for benchmarking. Key features of Veriserum include dual-plane images and calibration tools. The dataset aims to support the development of applications such as 2D/3D image registration, image segmentation, X-ray distortion correction, and 3D reconstruction. Freely accessible, Veriserum is intended to advance computer vision and medical imaging research by providing a reproducible benchmark for algorithm development and evaluation. The Veriserum dataset used in this study is publicly available via https://movement.ethz.ch/data-repository/veriserum.html, with the data stored at ETH Zürich Research Collections: https://doi.org/10.3929/ethz-b-000701146.
Early cancer detection remains one of the most critical challenges in modern healthcare, where delayed diagnosis significantly reduces survival outcomes. Recent advancements in artificial intelligence, particularly deep learning, have enabled transformative progress in medical imaging analysis. Deep learning-based computer vision models, such as convolutional neural networks (CNNs), transformers, and hybrid attention architectures, can automatically extract complex spatial, morphological, and temporal patterns from multimodal imaging data, including MRI, CT, PET, mammography, histopathology, and ultrasound. These models surpass traditional radiological assessment by identifying subtle tissue abnormalities and tumor microenvironment variations invisible to the human eye. At a broader scale, the integration of multimodal imaging with radiogenomics, which links quantitative imaging features with genomic, transcriptomic, and epigenetic biomarkers, has introduced a new paradigm for personalized oncology. This radiogenomic fusion allows the prediction of tumor genotype, immune response, molecular subtypes, and treatment resistance without invasive biopsies.
Medical image analysis often faces significant challenges due to limited expert-annotated data, hindering both model generalization and clinical adoption. We propose an expert-guided explainable few-shot learning framework that integrates radiologist-provided regions of interest (ROIs) into model training to simultaneously enhance classification performance and interpretability. Leveraging Grad-CAM for spatial attention supervision, we introduce an explanation loss based on Dice similarity to align model attention with diagnostically relevant regions during training. This explanation loss is jointly optimized with a standard prototypical network objective, encouraging the model to focus on clinically meaningful features even under limited data conditions. We evaluate our framework on two distinct datasets: BraTS (MRI) and VinDr-CXR (Chest X-ray), achieving significant accuracy improvements from 77.09% to 83.61% on BraTS and from 54.33% to 73.29% on VinDr-CXR compared to non-guided models. Grad-CAM visualizations further confirm that expert-guided training consistently aligns attention with diagnostic regions, improving both predictive reliability and clinical trustworthiness. Our findings demonstrate the effectiveness of incorporating expert-guided attention supervision to bridge the gap between performance and interpretability in few-shot medical image diagnosis.
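The explanation loss described above, a Dice similarity term that pulls the model's Grad-CAM attention toward radiologist-provided ROIs, can be sketched as follows. The array shapes, normalization, and epsilon values are illustrative assumptions rather than the authors' implementation; in training, this term would be added to the prototypical network objective with a weighting coefficient.

```python
# Hedged sketch of a Dice-based explanation loss between a Grad-CAM
# attention map and an expert ROI mask (illustrative shapes and values).
import numpy as np

def dice_explanation_loss(attention, roi_mask, eps=1e-6):
    """Return 1 - soft Dice between an attention map and a binary ROI mask."""
    attention = attention / (attention.max() + eps)  # normalize to [0, 1]
    inter = (attention * roi_mask).sum()
    denom = attention.sum() + roi_mask.sum()
    dice = (2 * inter + eps) / (denom + eps)
    return 1.0 - dice

# Sanity checks: perfect alignment -> loss near 0; disjoint -> loss near 1.
mask = np.zeros((8, 8)); mask[2:5, 2:5] = 1.0
aligned = mask.copy()
disjoint = np.zeros((8, 8)); disjoint[6:8, 6:8] = 1.0

print(dice_explanation_loss(aligned, mask) < 0.01)   # True
print(dice_explanation_loss(disjoint, mask) > 0.9)   # True
```

Because the loss is differentiable in the attention values, gradients flow back through the Grad-CAM computation and steer feature maps toward the diagnostically relevant region during training.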
Medical Referring Image Segmentation (MRIS) involves segmenting target regions in medical images based on natural language descriptions. While achieving promising results, recent approaches usually involve complex designs for multimodal fusion or multi-stage decoders. In this work, we propose NTP-MRISeg, a novel framework that reformulates MRIS as an autoregressive next-token prediction task over a unified multimodal sequence of tokenized image, text, and mask representations. This formulation streamlines model design by eliminating the need for modality-specific fusion and external segmentation models, and it supports a unified architecture for end-to-end training. It also enables the use of pretrained tokenizers from emerging large-scale multimodal models, enhancing generalization and adaptability. More importantly, to address challenges under this formulation, such as exposure bias, long-tail token distributions, and fine-grained lesion edges, we propose three novel strategies: (1) a Next-k Token Prediction (NkTP) scheme to reduce cumulative prediction errors, (2) Token-level Contrastive Learning (TCL) to enhance boundary sensitivity and mitigate long-tail distribution effects, and (3) a memory-based Hard Error Token (HET) optimization strategy that emphasizes difficult tokens during training. Extensive experiments on the QaTa-COV19 and MosMedData+ datasets demonstrate that NTP-MRISeg achieves new state-of-the-art performance, offering a streamlined and effective alternative to traditional MRIS pipelines.
Benedetta Rossini, Aldo Carnevale, Gian Carlo Parenti
et al.
Conventional radiography is widely used for postmortem foetal imaging, but its role in diagnosing congenital anomalies is debated. This study aimed to assess the effectiveness of X-rays in detecting skeletal abnormalities and guiding genetic analysis and counselling. This is a retrospective analysis of all post-abortion diagnostic imaging studies conducted at a centre serving a population of over 300,000 inhabitants from 2008 to 2023. The data were analysed using descriptive statistics. X-rays of 81 aborted foetuses (total of 308 projections; mean: 3.8 projections/examination; SD: 1.79) were included. We detected 137 skeletal anomalies. In seven cases (12.7%), skeletal anomalies identified through radiology were missed by prenatal sonography. The autopsy confirmed radiological data in all cases except for two radiological false positives. Additionally, radiology failed to identify a case of syndactyly, which was revealed by anatomopathology. X-ray is crucial for accurately classifying skeletal abnormalities, determining the causes of spontaneous abortion, and guiding the request for genetic counselling. Formal training for both technicians and radiologists, as well as multidisciplinary teamwork, is necessary to perform X-ray examinations on aborted foetuses and interpret the results effectively.
Photography, Computer applications to medicine. Medical informatics
Letizia Jaccheri, Barbora Buhnova, Birgit Penzenstadler
et al.
This chapter provides a summary of the activities and results of the European Network For Gender Balance in Informatics (EUGAIN, EU COST Action CA19122). The network's main objective is to improve gender balance in informatics at all levels, from undergraduate and graduate studies to participation and leadership in both academia and industry. It pursues this through a European network of colleagues working at the forefront of the efforts for gender balance in informatics in their countries and research communities.
Deep learning is an advanced technology that relies on large-scale data and complex models for feature extraction and pattern recognition. It has been widely applied across various fields, including computer vision, natural language processing, and speech recognition. In recent years, deep learning has demonstrated significant potential in the realm of proteomics informatics, particularly in deciphering complex biological information. The introduction of this technology not only accelerates the processing speed of protein data but also enhances the accuracy of predictions regarding protein structure and function. This provides robust support for both fundamental biology research and applied biotechnological studies. Currently, deep learning is primarily focused on applications such as protein sequence analysis, three-dimensional structure prediction, functional annotation, and the construction of protein interaction networks. These applications offer numerous advantages to proteomic research. Despite its growing prevalence in this field, deep learning faces several challenges including data scarcity, insufficient model interpretability, and computational complexity; these factors hinder its further advancement within proteomics. This paper comprehensively reviews the applications of deep learning in proteomics along with the challenges it encounters. The aim is to provide a systematic theoretical discussion and practical basis for research in this domain to facilitate ongoing development and innovation of deep learning technologies within proteomics.
Sarah Khavandi, Fatema Zaghloul, Aisling Higham
et al.
Background: While digital health innovations are increasingly being adopted by health care organizations, implementation is often carried out without considering the impacts on frontline staff who will be using the technology and who will be affected by its introduction. The enthusiasm surrounding the use of artificial intelligence (AI)–enabled digital solutions in health care is tempered by uncertainty around how it will change the working lives and practices of health care professionals. Digital enablement can be viewed as facilitating enhanced effectiveness and efficiency by improving services and automating cognitive labor, yet the implementation of such AI technology comes with challenges related to changes in work practices brought by automation. This research explores staff experiences before and after care pathway automation with an autonomous clinical conversational assistant, Dora (Ufonia Ltd), which automates routine clinical conversations.
Objective: The primary objective is to examine the impact of AI-enabled automation on clinicians, allied health professionals, and administrators who provide or facilitate health care to patients in high-volume, low-complexity care pathways. In the process of transforming care pathways through automation of routine tasks, staff will increasingly “work at the top of their license.” The impact of this fundamental change on the professional identity, well-being, and work practices of the individual is poorly understood at present.
Methods: We will adopt a multiple case study approach, combining qualitative and quantitative data collection methods, over 2 distinct phases, namely phase A (preimplementation) and phase B (postimplementation).
Results: The analysis is expected to reveal the interrelationship between Dora and those affected by its introduction. This will reveal how tasks and responsibilities have changed or shifted, current tensions and contradictions, ways of working, and challenges, benefits, and opportunities as perceived by those on the frontlines of the health care system. The findings will enable a better understanding of the resistance or susceptibility of different stakeholders within the health care workforce and encourage managerial awareness of differing needs, demands, and uncertainties.
Conclusions: The implementation of AI in the health care sector, as well as the body of research on this topic, remain in their infancy. The project’s key contribution will be to understand the impact of AI-enabled automation on the health care workforce and their work practices.
International Registered Report Identifier (IRRID): PRR1-10.2196/49374
Medicine, Computer applications to medicine. Medical informatics
Hussain Ahmad Madni, Rao Muhammad Umer, Gian Luca Foresti
Federated Learning (FL) is an evolving machine learning method in which multiple clients participate in collaborative learning without sharing their data with each other or with the central server. In real-world applications such as hospitals and industries, FL must counter the challenges of data heterogeneity and model heterogeneity, which are an inevitable part of collaborative training. More specifically, different organizations, such as hospitals, have their own private data and customized models for local training. To the best of our knowledge, existing methods do not effectively address both model heterogeneity and data heterogeneity in FL. In this paper, we exploit data and model heterogeneity simultaneously and propose MDH-FL (Exploiting Model and Data Heterogeneity in FL) to solve these problems and enhance the efficiency of the global model. We use knowledge distillation and a symmetric loss to minimize the heterogeneity and its impact on model performance: knowledge distillation addresses model heterogeneity, while the symmetric loss tackles data and label heterogeneity. We evaluate our method on medical datasets to reflect the real-world scenario of hospitals and compare it with existing methods. The experimental results demonstrate the superiority of the proposed approach over existing methods.
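The two loss ingredients named above can be sketched in isolation; this is an illustrative reconstruction, not the authors' code. The temperature value, weighting coefficients, and clipping epsilon are assumptions, and the symmetric term follows the common symmetric cross-entropy formulation (forward plus reverse cross-entropy).

```python
# Hedged sketch of temperature-scaled knowledge distillation and a
# symmetric cross-entropy loss, in plain NumPy for clarity.
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, t=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p, q = softmax(teacher_logits, t), softmax(student_logits, t)
    return float((p * np.log(p / q)).sum(axis=-1).mean() * t * t)

def symmetric_ce(probs, onehot, eps=1e-4, alpha=1.0, beta=1.0):
    """Forward CE(y, p) plus reverse CE(p, y); clipping keeps logs finite."""
    ce = -(onehot * np.log(np.clip(probs, eps, 1.0))).sum(-1).mean()
    rce = -(probs * np.log(np.clip(onehot, eps, 1.0))).sum(-1).mean()
    return float(alpha * ce + beta * rce)

student = np.array([[2.0, 0.5, -1.0]])
teacher = np.array([[2.0, 0.5, -1.0]])
labels = np.array([[1.0, 0.0, 0.0]])

distill = kd_loss(student, teacher)      # identical logits -> ~0 KL
robust = symmetric_ce(softmax(student), labels)
```

In a heterogeneous-FL setting, the distillation term lets clients with different architectures learn from shared soft predictions, while the reverse cross-entropy term dampens the influence of noisy or inconsistent labels.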
Waldemar Hahn, K. Schütte, Kristian Schultz
et al.
AI model development for synthetic data generation to improve Machine Learning (ML) methodologies is an integral part of research in Computer Science and is currently being transferred to related medical fields, such as Systems Medicine and Medical Informatics. In general, the idea of personalized decision-making support based on patient data has driven the motivation of researchers in the medical domain for more than a decade, but the overall sparsity and scarcity of data are still major limitations. This is in contrast to currently applied technology that allows us to generate and analyze patient data in diverse forms, such as tabular data on health records, medical images, genomics data, or even audio and video. One solution arising to overcome these data limitations in relation to medical records is the synthetic generation of tabular data based on real-world data. Consequently, ML-assisted decision support can be interpreted more conveniently, with more relevant patient data at hand. At a methodological level, several state-of-the-art ML algorithms generate and derive decisions from such data. However, key issues remain that hinder broad practical implementation in real-life clinical settings. In this review, we give, for the first time, insights into current perspectives and the potential impact of using synthetic data generation in palliative care screening, a challenging prime example of highly individualized, sparsely available patient information. Taken together, the reader will obtain initial starting points and suitable solutions for generating and using synthetic data for ML-based screenings in palliative care and beyond.
Lina Weinert, Maximilian Klass, Gerd Schneider
et al.
Background: In recent years, research and developments in advancing artificial intelligence (AI) in health care and medicine have increased. High expectations surround the use of AI technologies, such as improvements for diagnosis and increases in the quality of care with reductions in health care costs. The successful development and testing of new AI algorithms require large amounts of high-quality data. Academic hospitals could provide the data needed for AI development, but granting legal, controlled, and regulated access to these data for developers and researchers is difficult. Therefore, the German Federal Ministry of Health supports the Protected Artificial Intelligence Innovation Environment for Patient-Oriented Digital Health Solutions for Developing, Testing, and Evidence-Based Evaluation of Clinical Value (pAItient) project, aiming to install the AI Innovation Environment at the Heidelberg University Hospital in Germany. The AI Innovation Environment was designed as a proof-of-concept extension of the already existing Medical Data Integration Center. It will establish a process to support every step of developing and testing AI-based technologies.
Objective: The first part of the pAItient project, as presented in this research protocol, aims to explore stakeholders’ requirements for developing AI in partnership with an academic hospital and granting AI experts access to anonymized personal health data.
Methods: We planned a multistep mixed methods approach. In the first step, researchers and employees from stakeholder organizations were invited to participate in semistructured interviews. In the following step, questionnaires were developed based on the participants’ answers and distributed among the stakeholders’ organizations to quantify qualitative findings and discover important aspects that were not mentioned by the interviewees. The questionnaires will be analyzed descriptively. In addition, patients and physicians were interviewed as well. No survey questionnaires were developed for this second group of participants. The study was approved by the Ethics Committee of the Heidelberg University Hospital (approval number: S-241/2021).
Results: Data collection concluded in summer 2022. Data analysis is planned to start in fall 2022. We plan to publish the results in winter 2022 to 2023.
Conclusions: The results of our study will help shape the AI Innovation Environment at our academic hospital according to stakeholder requirements. With this approach, we aim in turn to create an AI environment that is effective and deemed acceptable by all parties.
International Registered Report Identifier (IRRID): DERR1-10.2196/42208
Medicine, Computer applications to medicine. Medical informatics