Nouf Khalid Al-Kahtani, Arun Vijay Subbarayalu, Vinoth Raman
et al.
The increasing universal acceptance of artificial intelligence (AI) in healthcare systems is driving advancements, with clinical documentation at the forefront. This research aimed to gain insight into the views and perspectives of healthcare professionals (HCPs) and students on AI implementation in clinical documentation, with special consideration for the obstacles and driving factors that influence AI adoption. A cross-sectional survey design was employed, involving 437 participants comprising HCPs (n = 173) and health science students (n = 264). Statistical analysis, including descriptive and inferential methods, was applied to interpret the gathered data. Most participants (68.3%) had previously learned about AI for application in clinical documentation, but fewer (41.2%) were actively using it. Both HCPs and students demonstrated a positive perception of AI performance (76.5%) but expressed concerns about accuracy (53.8%) and the need for data privacy (61.4%). Reliability and accuracy (92.7%) emerged as the key factors influencing adoption, followed by efficiency (87.3%), maintaining data privacy (84.9%), and peer adoption (72.1%). HCPs and students viewed the benefits of AI differently, with students being more optimistic (p < 0.05). Successful implementation of AI in clinical documentation was considered to rely on training (89.6%), the presence of technical support (76.2%), and the development of guidelines (81.5%). Although AI for clinical documentation is widely accepted among the participants, successful implementation can only be realized by addressing accuracy and data privacy concerns and providing adequate training and support to the stakeholders involved.
Computer applications to medicine. Medical informatics
This data paper provides an image dataset of 8432 high-quality images of Tamarindus indica [1] (tamarind), categorized into six types: Shelled Healthy Single, Shelled Healthy Multiple, Unshelled Healthy Single, Unshelled Healthy Multiple, Shelled Unhealthy Single, and Shelled Unhealthy Multiple. The collection is intended primarily to assist agricultural research as well as machine learning applications for quality identification and evaluation. Each category includes variations in brightness and orientation, showcasing a wide variety of images taken under controlled conditions. The dataset offers a useful resource for training and assessing computer vision models and machine learning techniques for accurate Tamarindus indica quality assessment. Applications in agriculture are possible, enabling rapid, localized quality evaluation, with potential for broader industry adoption when adapted to other crops. We invite researchers to explore this dataset creatively, improving plant quality assessment methods and contributing to the creation of trustworthy automated systems for Tamarindus indica quality evaluation.
Computer applications to medicine. Medical informatics, Science (General)
Deep learning has achieved significant breakthroughs in medical imaging, but these advancements are often dependent on large, well-annotated datasets. However, obtaining such datasets poses a significant challenge, as it requires time-consuming and labor-intensive annotations from medical experts. Consequently, there is growing interest in learning paradigms such as incomplete, inexact, and absent supervision, which are designed to operate under limited, inexact, or missing labels. This survey categorizes and reviews the evolving research in these areas, analyzing around 600 notable contributions since 2018. It covers tasks such as image classification, segmentation, and detection across various medical application areas, including but not limited to brain, chest, and cardiac imaging. We attempt to establish the relationships among existing research studies in related areas. We provide formal definitions of different learning paradigms and offer a comprehensive summary and interpretation of various learning mechanisms and strategies, aiding readers in better understanding the current research landscape and ideas. We also discuss potential future research challenges.
The growing global population of older adults, combined with ongoing healthcare workforce shortages, has increased reliance on informal caregivers, including family members and friends who provide unpaid support to individuals with chronic illnesses. Among their daily responsibilities, medication management remains one of the most demanding and error-prone tasks. Non-adherence to prescribed regimens not only undermines patient outcomes but also intensifies caregiver stress, anxiety, and fatigue. Although digital health technologies have proliferated to address adherence, most solutions focus exclusively on patients and neglect the informational and emotional needs of caregivers. This paper introduces Adhera, a caregiver-inclusive health informatics system designed to support medication adherence while reducing caregiver burden. Using a mixed-methods research design that included fifteen semi-structured caregiver interviews, sixty-five survey responses, and five pharmacist consultations, this study identified three primary challenges: caregiver stress related to uncertainty about medication intake, fragmented communication with healthcare professionals, and distrust in existing digital tools. Informed by the CeHRes Roadmap 2.0 and the Triple Bottom Line by Design and Culture (TBLD+C) framework, as well as recent co-design studies involving caregivers, Adhera integrates a sensor-equipped smart pill organizer with a mobile companion application that records intake events, sends real-time reminders, and provides caregivers with synchronized adherence data. Preliminary evaluation suggests that Adhera enhances visibility, improves caregiver confidence, and streamlines medication routines. This study contributes to the field of health informatics by demonstrating how human-centered design and collaborative frameworks can align technical innovation with empathy-driven care.
Although Vision Transformers (ViTs) have recently demonstrated superior performance in medical imaging problems, they face explainability issues similar to previous architectures such as convolutional neural networks. Recent research suggests that attention maps, which are part of the decision-making process of ViTs, can potentially address the explainability issue by identifying regions influencing predictions, especially in models pretrained with self-supervised learning. In this work, we compare the visual explanations of attention maps to other commonly used methods for medical imaging problems. To do so, we employ four distinct medical imaging datasets that involve the identification of (1) colonic polyps, (2) breast tumors, (3) esophageal inflammation, and (4) bone fractures and hardware implants. Through large-scale experiments on these datasets using various supervised and self-supervised pretrained ViTs, we find that although attention maps show promise under certain conditions and generally surpass GradCAM in explainability, they are outperformed by transformer-specific interpretability methods. Our findings indicate that the efficacy of attention maps as an interpretability method is context-dependent and may be limited, as they do not consistently provide the comprehensive insights required for robust medical decision-making.
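Attention rollout (Abnar & Zuidema, 2020) is a representative transformer-specific interpretability method of the kind compared against raw attention maps. The sketch below, which assumes row-stochastic per-head attention matrices as input, shows how per-layer attentions are composed into a single relevance map; it is an illustrative sketch, not necessarily the exact pipeline used in the study.

```python
import numpy as np

def attention_rollout(attentions):
    """Compose per-layer ViT attention maps into one input-relevance map.

    attentions: list of (heads, tokens, tokens) row-stochastic matrices,
    one per transformer layer, ordered from the earliest layer.
    """
    n = attentions[0].shape[-1]
    result = np.eye(n)
    for A in attentions:
        A = A.mean(axis=0)                      # average the attention heads
        A = A + np.eye(n)                       # account for the residual connection
        A = A / A.sum(axis=-1, keepdims=True)   # renormalize rows
        result = A @ result                     # compose layer by layer
    return result
```

For a classification ViT, `result[0, 1:]` (the CLS row over patch tokens) is typically reshaped into a heatmap over the input image.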
Objective A ballistocardiogram (BCG) is a vibration signal generated by the ejection of blood in each cardiac cycle. The BCG shows significant variability in amplitude, temporal characteristics, and missing waveform components, attributable to individual differences, instantaneous heart rate, and the posture of the person being measured. This variability may make methods of extracting J-waves, the most distinct components of the BCG, less generalizable, so that the J-waves cannot be precisely localized and further analysis becomes difficult. This study is dedicated to resolving the variability of the BCG to achieve accurate feature extraction. Methods Inspired by the generation mechanism of the BCG, we propose an original method based on a profile of the second-order derivative of the BCG waveform (2ndD-P) to capture the nature of the vibration and resolve the variability, thereby accurately localizing the components even when the J-wave is not prominent. Results In this study, 51 resting-state recordings and 11 high-heart-rate recordings from 24 participants were used to validate the algorithm. Each recording lasts about 3 min. For resting-state data, the sensitivity and positive predictivity of the proposed method are 98.29% and 98.64%, respectively. For high-heart-rate data, the proposed method achieved performance comparable to that on low-heart-rate data: 97.14% and 99.01% for sensitivity and positive predictivity, respectively. Conclusion Our proposed method detects the peaks of the J-wave more accurately than conventional extraction methods in the presence of different types of variability. Higher performance was achieved for BCG with non-prominent J-waves, in both low- and high-heart-rate cases.
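The construction of the paper's 2ndD-P profile is not detailed in the abstract; as a hedged illustration of the general idea (a second-order derivative emphasizing vibration extrema, followed by peak picking), a minimal sketch on a synthetic 1 Hz surrogate signal might look like this. `second_derivative_profile`, `find_peaks_simple`, and all parameters here are hypothetical simplifications, not the authors' algorithm.

```python
import numpy as np

def second_derivative_profile(x, fs):
    """Second-order derivative of a sampled signal (scaled to physical units)."""
    return np.gradient(np.gradient(x)) * fs ** 2

def find_peaks_simple(x, min_dist, height):
    """Naive local-maximum picker with an amplitude threshold and a
    refractory distance between accepted peaks."""
    idx = [i for i in range(1, len(x) - 1)
           if x[i] >= height and x[i] > x[i - 1] and x[i] >= x[i + 1]]
    peaks = []
    for i in sorted(idx, key=lambda i: x[i], reverse=True):
        if all(abs(i - p) >= min_dist for p in peaks):
            peaks.append(i)
    return sorted(peaks)

fs = 100.0                                   # hypothetical sampling rate (Hz)
t = np.arange(0, 3, 1 / fs)
sig = np.sin(2 * np.pi * 1.0 * t)            # 1 Hz surrogate "cardiac" vibration
profile = second_derivative_profile(sig, fs)
peaks = find_peaks_simple(profile, min_dist=int(0.5 * fs),
                          height=0.5 * profile.max())
```

On this surrogate the detected peaks fall 1 s apart, matching the 1 Hz cycle; a real BCG would of course require the paper's full profile construction.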
Computer applications to medicine. Medical informatics
Abstract Topological data analysis (TDA) has shown great success in various applications involving wearable sensor data. However, leveraging topological features in machine learning on wearable sensor data is difficult because of the large time consumption and computational resources required to extract the features. To address this problem, knowledge distillation (KD) is utilized to generate a small model and accommodate topological features with persistence image (PI) representations derived from the raw time series data. Deploying topological knowledge in KD enables the student to achieve better performance compared to one trained solely on raw time series data. However, it is not yet known whether there are coherent characteristics of topological features in PIs that can aid in improving performance during KD. In this paper, we investigate the suitability and challenges of utilizing topological features in KD for wearable sensor data, thereby contributing to the advancement of the field. Our study explores the impact of transferred topological features by comparing the Teacher-to-Student framework with Multiple Teachers-to-Student, where teachers utilize both time series data and persistence images obtained by TDA as inputs. Additionally, we conduct a rigorous examination of topological knowledge effects by testing under various corruptions, knowledge types, and learning strategies in the context of human activity recognition tasks. Our analysis of topological features in KD presents the optimal strategy for incorporating these features. This study includes datasets of varying scales, window lengths, and activity classes, providing a comprehensive evaluation. Our results demonstrate that leveraging topological features in KD enhances performance across databases.
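The Multiple Teachers-to-Student transfer described above is typically driven by a distillation objective mixing softened teacher predictions with the hard-label loss. Below is a minimal numpy sketch under standard KD assumptions (temperature-scaled softmax, averaged teacher distributions); `distillation_loss` and its parameters are hypothetical, and the paper's exact objective may differ.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits_list, y_true,
                      T=4.0, alpha=0.7):
    """Multi-teacher KD sketch: average the softened teacher distributions
    (e.g., one teacher on raw time series, one on persistence images) and
    mix KL(teacher || student) with the hard-label cross-entropy."""
    p_teacher = np.mean([softmax(t, T) for t in teacher_logits_list], axis=0)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(y_true)), y_true] + 1e-12)
    return np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce)
```

The `T ** 2` factor is the usual rescaling so that the soft-target gradient magnitude stays comparable across temperatures.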
Computer applications to medicine. Medical informatics
Background Diabetes poses a significant public health challenge in China and globally, with the number of patients expected to reach 592 million by 2035, notably in Asia. In China alone, an estimated 140 million individuals are living with diabetes, and a significant portion is nonadherent to medications, underscoring the urgency of effective management strategies. Recognizing the necessity of early and comprehensive management for newly diagnosed patients with type 2 diabetes, this study leverages an online teach-back method and “Internet + Nursing” platform based on King’s Theory of Goal Attainment. The approach aims to enhance glycemic control and reduce fear and misconceptions about the disease, addressing both the educational and emotional needs of the patients.
Objective The primary aim of this study was to assess the effectiveness of King’s Theory of Goal Attainment in the management of newly diagnosed patients with type 2 diabetes. This research sought to develop a collaborative model for blood glucose management, integrating the expertise and roles of physicians, nurses, and patients. The model is designed to enhance the synergy in health care provision, ensuring a comprehensive approach to diabetes management.
Methods In this study conducted at Changzhou Traditional Chinese Medicine Hospital between January 2022 and February 2023, eligible patients were randomized into a control group or an online feedback group. The control group received standard care, while the online feedback group participated in a King’s Theory of Goal Attainment–based online teach-back program, enhanced by “Internet + Nursing” strategies. This included an interactive platform for goal planning, video content sharing, comprehension assessment, misconception correction, and patient-driven recaps of disease information. Health monitoring was facilitated through the “Internet + Nursing” platform. The study focused on comparing changes in glucose metabolism and emotional disorder symptoms between the groups to evaluate the intervention’s effectiveness.
Results Following a 24-week intervention, we observed significant differences in key metrics between the online feedback group and the control group, each comprising 60 participants. The online feedback group demonstrated significant reductions in fasting plasma glucose, 2-hour postprandial glucose, and hemoglobin A1c (P<.05). Additionally, there was a notable decrease in hypoglycemia-related anxiety and alexithymia within this group. Conversely, the control group maintained relatively higher values for these metrics at the same time point (P<.05). These findings underscore the efficacy of online feedback in managing glycemic control and reducing psychological distress associated with hypoglycemia.
Conclusions The online teach-back method, guided by King’s Theory of Goal Attainment, effectively enhances glycemic control, reducing fasting plasma glucose, 2-hour postprandial glucose, and hemoglobin A1c levels in newly diagnosed patients with type 2 diabetes. Simultaneously, it alleviates hypoglycemia-related anxiety and mitigates alexithymia. This approach merits widespread promotion and implementation in clinical settings.
Trial Registration Chinese Clinical Trial Registry ChiCTR2400079547; https://www.chictr.org.cn/showproj.html?proj=208223
Computer applications to medicine. Medical informatics, Public aspects of medicine
Background Natural language processing (NLP) is an important traditional field in computer science, but its application in medical research has faced many challenges. With the extensive digitalization of medical information globally and the increasing importance of understanding and mining big data in the medical field, NLP is becoming more crucial. Objective The goal of the research was to perform a systematic review of the use of NLP in medical research with the aim of understanding the global progress in NLP research outcomes, content, methods, and study groups involved. Methods A systematic review was conducted using the PubMed database as a search platform. All published studies on the application of NLP in medicine (except biomedicine) during the 20 years between 1999 and 2018 were retrieved. The data obtained from these published studies were cleaned and structured. Excel (Microsoft Corp) and VOSviewer (Nees Jan van Eck and Ludo Waltman) were used to perform bibliometric analysis of publication trends, author orders, countries, institutions, collaboration relationships, research hot spots, diseases studied, and research methods. Results A total of 3498 articles were obtained during initial screening, and 2336 articles were found to meet the study criteria after manual screening. The number of publications increased every year, with significant growth after 2012 (annual publications ranged from 148 to a maximum of 302). The United States has occupied the leading position since the inception of the field, with the largest number of articles published. The United States contributed 63.01% (1472/2336) of all publications, followed by France (5.44%, 127/2336) and the United Kingdom (3.51%, 82/2336). The author with the largest number of articles published was Hongfang Liu (70), while Stéphane Meystre (17) and Hua Xu (33) published the largest number of articles as first and corresponding authors, respectively.
Among first authors' affiliated institutions, Columbia University published the largest number of articles, accounting for 4.54% (106/2336) of the total. Specifically, 17.68% (413/2336) of the articles involved research on specific diseases, and the subject areas primarily focused on mental illness (16.46%, 68/413), breast cancer (5.81%, 24/413), and pneumonia (4.12%, 17/413). Conclusions NLP is in a period of robust development in the medical field, with an average of approximately 100 publications annually. Electronic medical records were the most used research materials, but social media such as Twitter have become important research materials since 2015. Cancer (24.94%, 103/413) was the most common subject area in NLP-assisted medical research on diseases, with breast cancers (23.30%, 24/103) and lung cancers (14.56%, 15/103) accounting for the highest proportions of studies. Columbia University and the talents trained therein were the most active and prolific research forces on NLP in the medical field.
Background Given the rapid development of social media, effective extraction and analysis of the contents of social media for health care have attracted widespread attention from health care providers. As far as we know, most of the reviews focus on the application of social media, and there is a lack of reviews that integrate the methods for analyzing social media information for health care.
Objective This scoping review aims to answer the following 4 questions: (1) What types of research have been used to investigate social media for health care, (2) what methods have been used to analyze the existing health information on social media, (3) what indicators should be applied to collect and evaluate the characteristics of methods for analyzing the contents of social media for health care, and (4) what are the current problems and development directions of methods used to analyze the contents of social media for health care?
Methods A scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was conducted. We searched PubMed, the Web of Science, EMBASE, the Cumulative Index to Nursing and Allied Health Literature, and the Cochrane Library for the period from 2010 to May 2023 for primary studies focusing on social media and health care. Two independent reviewers screened eligible studies against inclusion criteria. A narrative synthesis of the included studies was conducted.
Results Of 16,161 identified citations, 134 (0.8%) studies were included in this review. These included 67 (50.0%) qualitative designs, 43 (32.1%) quantitative designs, and 24 (17.9%) mixed methods designs. The applied research methods were classified based on the following aspects: (1) manual analysis methods (content analysis methodology, grounded theory, ethnography, classification analysis, thematic analysis, and scoring tables) and computer-aided analysis methods (latent Dirichlet allocation, support vector machine, probabilistic clustering, image analysis, topic modeling, sentiment analysis, and other natural language processing technologies), (2) categories of research contents, and (3) health care areas (health practice, health services, and health education).
Conclusions Based on an extensive literature review, we investigated the methods for analyzing the contents of social media for health care to determine the main applications, differences, trends, and existing problems. We also discussed the implications for the future. Traditional content analysis is still the mainstream method for analyzing social media content, and future research may be combined with big data research. With the progress of computers, mobile phones, smartwatches, and other smart devices, social media information sources will become more diversified. Future research can combine new sources, such as pictures, videos, and physiological signals, with online social networking to adapt to the development trend of the internet. More medical informatics specialists need to be trained in the future to better address the problem of analyzing online health information. Overall, this scoping review can be useful for a large audience that includes researchers entering the field.
Computer applications to medicine. Medical informatics, Public aspects of medicine
Han Shi Jocelyn Chew, Nagadarshini Nicole Rajasegaran, Yip Han Chin
et al.
Background Self-monitoring smartphone apps and health coaching have both individually been shown to improve weight-related outcomes, but their combined effects remain unclear.
Objective This study aims to examine the effectiveness of combining self-monitoring apps with health coaching on anthropometric, cardiometabolic, and lifestyle outcomes in people with overweight and obesity.
Methods Relevant articles published from inception till June 9, 2022, were searched through 8 databases (Embase, CINAHL, PubMed, PsycINFO, Scopus, The Cochrane Library, and Web of Science). Effect sizes were pooled using random-effects models. Behavioral strategies used were coded using the behavior change techniques taxonomy V1.
Results A total of 14 articles were included, representing 2478 participants with a mean age of 39.1 years and a mean BMI of 31.8 kg/m2. The combined intervention significantly improved weight loss by 2.15 kg (95% CI −3.17 kg to −1.12 kg; P<.001; I2=60.3%), waist circumference by 2.48 cm (95% CI −3.51 cm to −1.44 cm; P<.001; I2=29%), triglycerides by 0.22 mg/dL (95% CI −0.33 mg/dL to −0.11 mg/dL; P=.008; I2=0%), glycated hemoglobin by 0.12% (95% CI −0.21 to −0.02; P=.03; I2=0%), and total calorie consumption per day by 128.30 kcal (95% CI −182.67 kcal to −73.94 kcal; P=.003; I2=0%), but not BMI, blood pressure, body fat percentage, cholesterol, or physical activity. The combined intervention was superior to both usual care and apps alone for waist circumference, but only superior to usual care for weight loss.
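Pooling effect sizes under a random-effects model, as described in the Methods above, is commonly done with the DerSimonian-Laird estimator of between-study variance. A minimal sketch (illustrative only — the review's exact software and estimator are not stated in the abstract):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling of study-level effect sizes.

    Returns the pooled effect, its standard error, and the
    DerSimonian-Laird estimate of between-study variance (tau^2)."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)        # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # truncated at zero
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2
```

When all studies report the same effect, Q = 0 and tau^2 truncates to zero, so the estimate reduces to the fixed-effect (inverse-variance) pooled mean.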
Conclusions The combined intervention could improve weight-related outcomes, but more research is needed to examine its added benefit over using an app alone.
Trial Registration PROSPERO CRD42022345133; https://tinyurl.com/2zxfdpay
Computer applications to medicine. Medical informatics, Public aspects of medicine
Abstract Background This study used machine learning techniques to evaluate cardiovascular disease (CVD) risk factors and the relationship between sex and these risk factors. The objective was pursued in the context of CVD being a major global cause of death and the need for accurate identification of risk factors for timely diagnosis and improved patient outcomes. The researchers conducted a literature review to address previous studies' limitations in using machine learning to assess CVD risk factors. Methods This study analyzed data from 1024 patients to identify the significant CVD risk factors based on sex. The data, comprising 13 features such as demographic, lifestyle, and clinical factors, were obtained from the UCI repository and preprocessed to eliminate missing information. The analysis was performed using principal component analysis (PCA) and latent class analysis (LCA) to determine the major CVD risk factors and to identify any homogeneous subgroups between male and female patients. Data analysis was performed using XLSTAT software, which provides a comprehensive suite of tools for data analysis, machine learning, and statistical solutions for MS Excel. Results This study showed significant sex differences in CVD risk factors: 8 of the 13 risk factors affected male and female patients, and males and females shared 4 of these 8 risk factors. The analysis also identified latent profiles of CVD patients, suggesting the presence of subgroups among them. These findings provide valuable insights into the impact of sex differences on CVD risk factors. Moreover, they have important implications for healthcare professionals, who can use this information to develop individualized prevention and treatment plans. The results highlight the need for further research to elucidate these disparities better and develop more effective CVD prevention measures.
Conclusions The study explored the sex differences in the CVD risk factors and the presence of subgroups among CVD patients using ML techniques. The results revealed sex-specific differences in risk factors and the existence of subgroups among CVD patients, thus providing essential insights for personalized prevention and treatment plans. Hence, further research is necessary to understand these disparities better and improve CVD prevention.
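The PCA step described in the Methods above (performed there with XLSTAT) can be illustrated with a minimal numpy eigendecomposition sketch; `pca` and its inputs here are hypothetical stand-ins, not the study's actual pipeline.

```python
import numpy as np

def pca(X, k):
    """Minimal PCA: center the data, eigendecompose the covariance matrix,
    and project onto the top-k components. Returns the component scores
    and the fraction of variance each retained component explains."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(vals)[::-1]            # sort descending
    vals, vecs = vals[order], vecs[:, order]
    scores = Xc @ vecs[:, :k]
    explained = vals[:k] / vals.sum()
    return scores, explained
```

In a risk-factor analysis like the one above, the loadings (columns of `vecs`) indicate which of the 13 features dominate each retained component.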
Computer applications to medicine. Medical informatics
Amirreza Hashemi, Yuemeng Feng, Arman Rahmim
et al.
This work investigates the use of equivariant neural networks as efficient and high-performance frameworks for image reconstruction and denoising in nuclear medicine. We aim to tackle the limitations of conventional convolutional neural networks (CNNs), which require significant training, by reducing their dependency on specific training sets. Specifically, we implemented and evaluated equivariant spherical CNNs (SCNNs) for 2- and 3-dimensional medical imaging problems. Our results demonstrate the superior quality and computational efficiency of SCNNs in both image reconstruction and denoising benchmark problems. Furthermore, we propose a novel approach that employs SCNNs as a complement to conventional image reconstruction tools, enhancing the outcomes while reducing reliance on the training set. Across all cases, we observed a significant decrease in computational cost, by leveraging the inherent inclusion of equivariant representatives, while achieving the same or higher quality of image processing with SCNNs compared to CNNs. Additionally, we explore the potential of SCNNs for broader tomography applications, particularly those requiring rotationally variant representations.
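The rotation-equivariance property such networks build in by construction (rotating the input rotates the output the same way) can be illustrated with an ordinary filter whose kernel happens to be rotation-symmetric. This numpy sketch demonstrates the property only, not spherical CNNs themselves:

```python
import numpy as np

def local_mean(img):
    """3x3 mean filter with periodic (wrap) padding. Because the all-ones
    kernel is invariant under 90-degree rotation, the whole operation
    commutes with np.rot90 -- i.e., it is equivariant to that rotation."""
    padded = np.pad(img, 1, mode="wrap")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0
```

Equivariant architectures generalize this idea to continuous rotation groups (and to the sphere, in the SCNN case), so the symmetry does not have to be learned from augmented training data.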
Iván Vallés-Pérez, Emilio Soria-Olivas, Marcelino Martínez-Sober
et al.
In this work we propose a new non-monotonic activation function: the modulus. The majority of the reported research on nonlinearities focuses on monotonic functions. We empirically demonstrate that, by using the modulus activation function on computer vision tasks, models generalize better than with other nonlinearities: up to a 15% accuracy increase on CIFAR100 and 4% on CIFAR10, relative to the best of the benchmark activations tested. With the proposed activation function, the vanishing gradient and dying neuron problems disappear, because the derivative of the activation function is always 1 or -1. The simplicity of the proposed function and its derivative makes this solution especially suitable for TinyML and hardware applications.
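The function and its derivative are simple enough to state in a few lines. A numpy sketch follows; the choice of +1 as the derivative at exactly zero is our assumption, as the abstract does not specify it:

```python
import numpy as np

def modulus(x):
    """The proposed non-monotonic activation: f(x) = |x|."""
    return np.abs(x)

def modulus_grad(x):
    """The derivative is +1 or -1 everywhere (we take +1 at x == 0, by
    convention), so gradients neither vanish nor die."""
    return np.where(x >= 0, 1.0, -1.0)
```

Because the gradient magnitude is always exactly 1, backpropagated signals through stacked modulus layers are never attenuated by the activation, which is the mechanism behind the vanishing-gradient claim.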
Despite recent progress in enhancing the privacy of federated learning (FL) via differential privacy (DP), the trade-off of DP between privacy protection and performance is still underexplored for real-world medical scenarios. In this paper, we propose to optimize this trade-off under the context of client-level DP, which focuses on privacy during communications. However, FL for medical imaging typically involves far fewer participants (hospitals) than other domains (e.g., mobile devices), so ensuring that clients are differentially private is much more challenging. To tackle this problem, we propose an adaptive intermediary strategy to improve performance without harming privacy. Specifically, we theoretically show that splitting clients into sub-clients, which serve as intermediaries between hospitals and the server, can mitigate the noise introduced by DP without harming privacy. Our proposed approach is empirically evaluated on both classification and segmentation tasks using two public datasets, and its effectiveness is demonstrated through significant performance improvements and comprehensive analytical studies. Code is available at: https://github.com/med-air/Client-DP-FL.
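Client-level DP is commonly enforced with a clipped, noised aggregate of client updates (the Gaussian mechanism). The sketch below is illustrative, not the paper's adaptive intermediary algorithm; `privatize_aggregate` and its parameters are hypothetical. It shows why more participating (sub-)clients shrink the noise's share of the averaged update:

```python
import numpy as np

def privatize_aggregate(updates, clip_norm, noise_mult, rng):
    """Client-level DP aggregation sketch: clip each client update in L2
    norm (bounding per-client sensitivity), average, and add Gaussian
    noise. With more clients n, the noise standard deviation per averaged
    coordinate (noise_mult * clip_norm / n) shrinks -- the intuition
    behind splitting hospitals into sub-client intermediaries."""
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in updates]
    n = len(clipped)
    noise = rng.normal(0.0, noise_mult * clip_norm / n, size=updates[0].shape)
    return np.mean(clipped, axis=0) + noise
```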
Mohammad Reza Hosseinzadeh Taher, Michael B. Gotway, Jianming Liang
Human anatomy is the foundation of medical imaging and boasts one striking characteristic: its hierarchical nature, which exhibits two intrinsic properties: (1) locality: each anatomical structure is morphologically distinct from the others; and (2) compositionality: each anatomical structure is an integrated part of a larger whole. We envision a foundation model for medical imaging that is consciously and purposefully developed upon this foundation to gain the capability of "understanding" human anatomy and to possess the fundamental properties of medical imaging. As our first step in realizing this vision of foundation models in medical imaging, we devise a novel self-supervised learning (SSL) strategy that exploits the hierarchical nature of human anatomy. Our extensive experiments demonstrate that the SSL pretrained model, derived from our training strategy, not only outperforms state-of-the-art (SOTA) fully/self-supervised baselines but also enhances annotation efficiency, offering potential few-shot segmentation capabilities with performance improvements ranging from 9% to 30% on segmentation tasks compared to SSL baselines. This performance is attributed to the significance of anatomy comprehension via our learning strategy, which encapsulates the intrinsic attributes of anatomical structures (locality and compositionality) within the embedding space, attributes overlooked in existing SSL methods. All code and pretrained models are available at https://github.com/JLiangLab/Eden.
Rapid integration of large language models (LLMs) in health care is sparking global discussion about their potential to revolutionize health care quality and accessibility. At a time when improving health care quality and access remains a critical concern for countries worldwide, the ability of these models to pass medical examinations is often cited as a reason to use them for medical training and diagnosis. However, the impact of their inevitable use as a self-diagnostic tool and their role in spreading health care misinformation have not been evaluated. This study aims to assess the effectiveness of LLMs, particularly ChatGPT, from the perspective of an individual self-diagnosing, to better understand the clarity, correctness, and robustness of the models. We propose a comprehensive testing methodology, Evaluation of LLM Prompts (EvalPrompt). This methodology uses multiple-choice medical licensing examination questions to evaluate LLM responses. We use open-ended questions to mimic real-world self-diagnosis use cases and perform sentence dropout to mimic realistic self-diagnosis with missing information. Human evaluators then assess the responses returned by ChatGPT in both experiments for clarity, correctness, and robustness. The results highlight the modest capabilities of LLMs, as their responses are often unclear and inaccurate. As a result, medical advice from LLMs should be approached cautiously. However, evidence suggests that LLMs are steadily improving and could potentially play a role in health care systems in the future. To address the issue of medical misinformation, there is a pressing need for a comprehensive self-diagnosis dataset. Such a dataset could enhance the reliability of LLMs in medical applications by featuring more realistic prompt styles with minimal information across a broader range of medical fields.
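The sentence-dropout idea described above (removing parts of a question to mimic a patient self-diagnosing with incomplete information) can be sketched minimally as follows; `sentence_dropout` and its parameters are hypothetical, and the exact procedure in EvalPrompt may differ:

```python
import random

def sentence_dropout(question, drop_prob, rng):
    """Randomly drop sentences from a question with probability drop_prob,
    simulating a prompt written with missing information. A naive
    period-based sentence split is used here for illustration."""
    sentences = [s.strip() for s in question.split(".") if s.strip()]
    kept = [s for s in sentences if rng.random() >= drop_prob]
    kept = kept or sentences[:1]   # never return an empty prompt
    return ". ".join(kept) + "."
```

Sweeping `drop_prob` from 0 toward 1 yields progressively sparser prompts, against which response clarity, correctness, and robustness can be rated.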