Matthew A. Reyna, Zuzana Koscova, Jan Pavlus
et al.
Objective: Chagas disease is a parasitic infection that is endemic to South America, Central America, and, more recently, the U.S., and is primarily transmitted by insects. Chronic Chagas disease can cause cardiovascular and digestive problems. Serological testing capacity for Chagas disease is limited, but Chagas cardiomyopathy often manifests in ECGs, providing an opportunity to prioritize patients for testing and treatment. Approach: The George B. Moody PhysioNet Challenge 2025 invited teams to develop algorithmic approaches for identifying Chagas disease from electrocardiograms (ECGs). Main results: This Challenge provided multiple innovations. First, we leveraged several datasets with labels from patient reports and serological testing, providing a large dataset with weak labels and smaller datasets with strong labels. Second, we augmented the data to support model robustness and generalizability to unseen data sources. Third, we applied an evaluation metric that captured the local serological testing capacity for Chagas disease, framing the machine learning problem as a triage task. Significance: Over 630 participants from 111 teams submitted over 1300 entries during the Challenge, representing diverse approaches from academia and industry worldwide.
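As a minimal illustration of the triage framing (the Challenge's actual metric definition is not given here, so this is an assumed stand-in), one can measure the fraction of true Chagas cases captured when only a fixed fraction of patients, ranked by model score, can be serologically tested:

```python
import numpy as np

def triage_capture(y_true, y_score, capacity_frac=0.05):
    """Fraction of true cases captured when only the top `capacity_frac`
    of patients (by model score) can be tested -- a stand-in for a
    capacity-aware triage metric."""
    y_true = np.asarray(y_true)
    n = len(y_true)
    k = max(1, int(np.floor(capacity_frac * n)))
    top_k = np.argsort(y_score)[::-1][:k]       # highest-risk patients first
    captured = y_true[top_k].sum()              # true cases inside the budget
    total = y_true.sum()
    return captured / total if total > 0 else 0.0

# Example: 1000 patients, 2% prevalence, 5% testing capacity
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.02, size=1000)
s = y * 0.6 + rng.random(1000) * 0.5            # scores loosely correlated with labels
print(f"captured: {triage_capture(y, s):.2f}")
```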
Large language models (LLMs) have demonstrated impressive capabilities in disease diagnosis. However, their effectiveness in identifying rarer diseases, which are inherently more challenging to diagnose, remains an open question. Rare disease performance is critical with the increasing use of LLMs in healthcare settings, especially when a primary care physician must make a rarer prognosis from only a patient conversation in order to take the appropriate next step. To that end, several clinical decision support systems are designed to support providers in rare disease identification, yet their utility is limited by their lack of knowledge of common disorders and their difficulty of use. In this paper, we propose RareScale, which combines the knowledge of LLMs with expert systems. We jointly use an expert system and an LLM to simulate rare disease chats. This data is used to train a rare disease candidate predictor model. Candidates from this smaller model are then used as additional inputs to a black-box LLM that makes the final differential diagnosis. Thus, RareScale allows for a balance between rare and common diagnoses. We present results on over 575 rare diseases, beginning with Abdominal Actinomycosis and ending with Wilson's Disease. Our approach significantly improves the baseline performance of black-box LLMs by over 17% in Top-5 accuracy. We also find that our candidate generation performance is high (e.g., 88.8% on gpt-4o-generated chats).
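A minimal sketch of the described two-stage pipeline, with `candidate_model.score`, `candidate_model.diseases`, and `llm.complete` as hypothetical placeholders for the trained candidate predictor and the black-box LLM API (neither is specified in the abstract):

```python
def rare_disease_candidates(chat_text, candidate_model, top_k=5):
    """Score each known rare disease against the conversation and return
    the top-k candidates (candidate_model is assumed to expose .score
    and a .diseases list)."""
    scores = {d: candidate_model.score(chat_text, d) for d in candidate_model.diseases}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def differential_diagnosis(chat_text, candidate_model, llm):
    """Inject rare-disease candidates into the prompt so the black-box LLM
    balances them against common diagnoses."""
    candidates = rare_disease_candidates(chat_text, candidate_model)
    prompt = (
        "Patient conversation:\n" + chat_text + "\n\n"
        "Consider these rare-disease candidates alongside common diagnoses: "
        + ", ".join(candidates) + "\n"
        "Return a ranked top-5 differential diagnosis."
    )
    return llm.complete(prompt)  # placeholder for any black-box LLM call
```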
India, as a predominantly agrarian economy, faces significant challenges in agriculture, including substantial crop losses caused by diseases, pests, and environmental stress. Early detection and accurate identification of diseases across different crops are critical for improving yield and ensuring food security. This paper proposes a deep learning based solution for detecting multiple diseases in multiple crops, aiming to cover India's diverse agricultural landscape. We first create a unified dataset encompassing images of 17 different crops and 34 different diseases from various available repositories. The proposed deep learning model is trained on this dataset and outperforms the state-of-the-art in terms of accuracy and the number of crops and diseases covered. We achieve a significant detection accuracy of 99 percent on our unified dataset, 7 percentage points higher than the state-of-the-art, which handles only 14 crops and 26 different diseases. By increasing the number of crops and types of diseases that can be detected, the proposed solution aims to provide a better product for Indian farmers.
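A minimal transfer-learning sketch of such a unified multi-crop, multi-disease classifier (the EfficientNet backbone, input size, and head are illustrative assumptions, not the paper's architecture):

```python
import tensorflow as tf

NUM_CLASSES = 34  # one label per disease; classes could also be (crop, disease) pairs

# Warm-start from ImageNet features and train only a small classification head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```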
The exponential growth of information presents a significant challenge for researchers and professionals seeking to remain at the forefront of their fields. This paper introduces an innovative framework for automatically generating insightful financial digests using the power of Large Language Models (LLMs), specifically Google's Gemini Pro. By leveraging a combination of data extraction from OpenAlex, strategic prompt engineering, and LLM-driven analysis, we demonstrate the automated creation of comprehensive digests that summarize key findings and identify emerging trends. This approach addresses the limitations of traditional analysis methods, enabling the efficient processing of vast amounts of unstructured data and the delivery of actionable insights in an easily digestible format. This paper describes how LLMs work in simple terms and how we can use their power to help researchers and scholars save time and stay informed about current trends. Our study includes a step-by-step process, from data acquisition and JSON construction to interaction with Gemini and the automated generation of PDF reports, including a link to the project's GitHub repository for broader accessibility and further development.
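A condensed sketch of the described pipeline, using the public OpenAlex works endpoint and the google-generativeai package (field names follow the OpenAlex API; the search query and prompt are illustrative, and newer Gemini SDKs expose a different surface):

```python
import json
import requests
import google.generativeai as genai

# Step 1: data acquisition from OpenAlex and JSON construction.
resp = requests.get("https://api.openalex.org/works",
                    params={"search": "financial risk", "per-page": 25})
works = [{"title": w["title"],
          "year": w["publication_year"],
          "cited_by": w["cited_by_count"]}
         for w in resp.json()["results"]]

# Step 2: LLM-driven analysis with Gemini Pro.
genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro")
digest = model.generate_content(
    "Summarize key findings and emerging trends in these works:\n"
    + json.dumps(works, indent=2))
print(digest.text)  # digest text, ready for PDF report generation
```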
Accurate disease detection is of paramount importance for effective medical treatment and patient care. However, the process of disease detection is often associated with extensive medical testing and considerable costs, making it impractical to perform all possible medical tests on a patient to diagnose or predict hundreds or thousands of diseases. In this work, we propose Collaborative Learning for Disease Detection (CLDD), a novel graph-based deep learning model that formulates disease detection as a collaborative learning task by adaptively exploiting associations among diseases and similarities among patients. CLDD integrates patient-disease interactions and demographic features from electronic health records to detect hundreds or thousands of diseases for every patient, with little to no reliance on the corresponding medical tests. Extensive experiments on a processed version of the MIMIC-IV dataset comprising 61,191 patients and 2,000 diseases demonstrate that CLDD consistently outperforms representative baselines across multiple metrics, achieving a 6.33% improvement in recall and a 7.63% improvement in precision. Furthermore, case studies on individual patients illustrate that CLDD can successfully recover masked diseases within its top-ranked predictions, demonstrating both interpretability and reliability in disease prediction. By reducing diagnostic costs and improving accessibility, CLDD holds promise for large-scale disease screening and social health security.
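A deliberately simplified collaborative-filtering sketch of the underlying idea (CLDD itself is a graph-based deep model; plain matrix factorization stands in here): factorize the binary patient-disease interaction matrix, then rank each patient's unobserved diseases, which is also how masked-disease recovery can be evaluated:

```python
import numpy as np

def factorize(R, k=32, lr=0.01, reg=0.01, epochs=50, seed=0):
    """R: binary patient x disease matrix. Returns patient and disease
    embeddings fit by SGD on the observed interactions."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(R.shape[0], k))
    D = rng.normal(scale=0.1, size=(R.shape[1], k))
    rows, cols = np.nonzero(R)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = 1.0 - P[i] @ D[j]                  # observed interactions are 1
            P[i] += lr * (err * D[j] - reg * P[i])
            D[j] += lr * (err * P[i] - reg * D[j])
    return P, D

def top_diseases(P, D, R, patient, k=10):
    """Rank diseases for one patient, hiding already-recorded ones."""
    scores = P[patient] @ D.T
    scores[R[patient] > 0] = -np.inf
    return np.argsort(scores)[::-1][:k]
```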
The use of robotic surgery has experienced rapid growth across diverse medical conditions, with a notable emphasis on gastrointestinal cancers. The advanced technologies incorporated into robotic surgery platforms have played a pivotal role in enabling the safe performance of complex procedures, including gastrectomy and pancreatectomy, through a minimally invasive approach. However, there exists a noteworthy gap in high-level evidence demonstrating that robotic surgery for gastric and pancreatic cancers has substantial benefits compared to traditional open or laparoscopic methods. The primary impediment hindering the broader implementation of robotic surgery is its cost. The escalating healthcare expenses in the United States have prompted healthcare providers and payors to explore patient-centered, value-based healthcare models and reimbursement systems that embrace cost-effectiveness. Thus, it is important to determine what defines the value of robotic surgery. It must either maintain or enhance oncological quality and improve complication rates compared to open procedures. Moreover, its true value should be apparent in patients' expedited recovery and improved quality of life. Another essential aspect of robotic surgery's value lies in minimizing or even eliminating opioid use, even after major operations, offering considerable benefits to the broader public health landscape. A quicker return to oncological therapy has the potential to improve overall oncological outcomes, while a speedier return to work not only alleviates individual financial distress but also positively impacts societal productivity. In this article, we comprehensively review and summarize the current landscape of health economics and value-based care, with a focus on robotic surgery for gastrointestinal cancers.
Surgery, Diseases of the digestive system. Gastroenterology
Background and Aims: Biliary tract cancer (BTC) is a rare, lethal, heterogeneous group of cancers often diagnosed at an advanced stage. While gemcitabine plus cisplatin is the standard of care for first-line treatment of locally advanced or metastatic BTC, no globally accepted standard of care currently exists for second-line treatment of BTC following chemotherapy. However, the treatment landscape is evolving with approvals for therapies targeting actionable mutations. This study aimed to characterize treatment patterns and survival in patients with locally advanced or metastatic BTC. Methods: Patients with advanced or metastatic BTC in the Surveillance, Epidemiology, and End Results Medicare database between 2010 and 2015 (N = 2063) were included; patients with nonprimary BTC were excluded. Patient and clinical characteristics, line and type of therapy, and overall survival of patients were analyzed. Results: Only 45.5% (n = 938) of patients initiated systemic therapy within 90 days of diagnosis. The most common event following diagnosis was initiation of first-line therapy, and the most common event following first-line treatment was death. Median survival ranged from 5.0 months for patients receiving second-line fluoropyrimidine to 9.7 months for patients receiving second-line gemcitabine. Duration of therapy ranged from 0.7 months for patients receiving second-line fluoropyrimidine to 3.7 months for patients receiving first-line gemcitabine plus cisplatin therapy. Conclusion: Overall survival from diagnosis was poor and influenced by age, sex, stage, mobility limitations, comorbidity burden, poverty, and previous cancer. Treatment patterns varied for patients who progressed following first-line therapy, as there was no consensus second-line treatment for locally advanced or metastatic BTC without clinically targetable mutations.
Diseases of the digestive system. Gastroenterology
Many patients with chronic diseases resort to multiple medications to relieve various symptoms, which raises concerns about the safety of multiple medication use, as severe drug-drug antagonism can lead to serious adverse effects or even death. This paper presents a Decision Support System, called DSSDDI, based on drug-drug interactions to support doctors' prescribing decisions. DSSDDI contains three modules: a Drug-Drug Interaction (DDI) module, a Medical Decision (MD) module, and a Medical Support (MS) module. The DDI module learns safer and more effective drug representations from drug-drug interactions. To capture the potential causal relationship between DDI and medication use, the MD module treats the representations of patients and drugs as context, DDI and patient similarity as treatment, and medication use as outcome to construct counterfactual links for representation learning. Furthermore, the MS module provides drug candidates to doctors along with explanations. Experiments on chronic disease data collected from the Hong Kong Chronic Disease Study Project and the public diagnostic dataset MIMIC-III demonstrate that DSSDDI can be a reliable reference for doctors in terms of the safety and efficiency of clinical diagnosis, with significant improvements over baseline methods.
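An illustrative toy of the MS module's safety-filtering role (the drug names, interaction table, and scores are hypothetical; DSSDDI learns interactions from data rather than using a hand-written table):

```python
# Toy severe-DDI graph: each drug maps to drugs it severely interacts with.
SEVERE = {"warfarin": {"aspirin"}, "aspirin": {"warfarin"}}

def recommend(candidates_scored, current_rx, ddi=SEVERE):
    """Rank candidate drugs by score, excluding (with an explanation)
    any that severely interact with the patient's current prescription."""
    safe, flagged = [], []
    for drug, score in sorted(candidates_scored.items(), key=lambda kv: -kv[1]):
        conflicts = ddi.get(drug, set()) & set(current_rx)
        if conflicts:
            flagged.append((drug, f"severe interaction with {', '.join(conflicts)}"))
        else:
            safe.append((drug, score))
    return safe, flagged

safe, flagged = recommend({"aspirin": 0.9, "metformin": 0.7}, ["warfarin"])
print(safe)     # [('metformin', 0.7)]
print(flagged)  # [('aspirin', 'severe interaction with warfarin')]
```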
Jérémie Lespinasse, Carole Dufouil, Cécile Proust-Lima
et al.
Background: Alzheimer's disease and related dementias (ADRD) are characterized by multiple, progressive anatomo-clinical changes. Yet modeling changes over the disease course from cohort data is challenging, as the usual timescales are inappropriate and time-to-clinical-diagnosis is available only for small subsamples of participants with short follow-up durations prior to diagnosis. One solution to circumvent this challenge is to define the disease time as a latent variable. Methods: We developed a multivariate mixed model approach that realigns individual trajectories onto the latent disease time to describe disease progression. Our methodology exploits the clinical diagnosis information as a partially observed and approximate reference to guide the estimation of the latent disease time. The model estimation was carried out in the Bayesian framework using Stan. We applied the methodology to 2186 participants of the MEMENTO study with 5-year follow-up. Repeated measures of 12 ADRD markers derived from cerebrospinal fluid (CSF), brain imaging, and cognitive tests were analyzed. Results: The estimated latent disease time spanned over twenty years before clinical diagnosis. Considering the profile of a woman aged 70 with a high level of education who is an APOE4 carrier (the main genetic risk factor for ADRD), CSF markers of tau protein accumulation preceded markers of brain atrophy by 5 years and cognitive decline by 10 years. We observed that individual characteristics could substantially modify the sequence and timing of these changes. Conclusion: Our disease progression model does not only realign trajectories in the most homogeneous way. It also accounts for the inherent residual inter-individual variability in dementia progression, describing long-term changes as a function of the years preceding clinical diagnosis and providing clinically meaningful information on the sequence of events.
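As a hedged illustration of the latent disease time idea (a generic formulation; the paper's exact specification may differ), marker trajectories are modeled on a shared disease timescale obtained by shifting each subject's visit times by a subject-specific latent delay:

```latex
\[
  Y_{ijk} = f_k\big(t_{ij} + \tau_i;\, \beta_k\big) + b_{ik} + \varepsilon_{ijk},
  \qquad \varepsilon_{ijk} \sim \mathcal{N}(0, \sigma_k^2),
\]
```

where \(Y_{ijk}\) is marker \(k\) for subject \(i\) at visit \(j\), \(f_k\) is the marker-specific mean trajectory on the latent timescale, \(\tau_i\) is the latent time shift, and \(b_{ik}\) is a random effect; the age at clinical diagnosis anchors \(\tau_i\) as a partially observed, noisy reference.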
Early and accurate detection systems for ear diseases, powered by deep learning, are essential for preventing hearing impairment and improving population health. However, the limited diversity of existing otoendoscopy datasets and the poor balance between diagnostic accuracy, computational efficiency, and model size have hindered the translation of artificial intelligence (AI) algorithms into healthcare applications. In this study, we constructed a large-scale, multi-center otoendoscopy dataset covering eight common ear diseases and healthy cases. Building upon this resource, we developed Best-EarNet, an ultrafast and lightweight deep learning architecture integrating a novel Local-Global Spatial Feature Fusion Module with a multi-scale supervision strategy, enabling real-time and accurate classification of ear conditions. Leveraging transfer learning, Best-EarNet, with a model size of only 2.94 MB, achieved diagnostic accuracies of 95.23% on an internal test set (22,581 images) and 92.14% on an external test set (1,652 images), while requiring only 0.0125 seconds (80 frames per second) to process a single image on a standard CPU. Further subgroup analysis by gender and age showed consistently excellent performance of Best-EarNet across all demographic groups. To enhance clinical interpretability and user trust, we incorporated Grad-CAM-based visualization, highlighting the specific abnormal ear regions contributing to AI predictions. Most importantly, we developed Ear-Keeper, a cross-platform intelligent diagnosis system built upon Best-EarNet, deployable on smartphones, tablets, and personal computers. Ear-Keeper enables public users and healthcare providers to perform comprehensive real-time video-based ear canal screening, supporting early detection and timely intervention of ear diseases.
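A sketch of what a multi-scale supervision objective can look like (illustrative; Best-EarNet's actual heads and loss weights are not specified in the abstract): auxiliary classifiers attached at intermediate feature scales contribute down-weighted losses alongside the main head:

```python
import torch
import torch.nn.functional as F

def multi_scale_loss(main_logits, aux_logits_list, target, aux_weight=0.3):
    """Cross-entropy on the final head plus down-weighted auxiliary heads,
    so intermediate feature scales receive direct supervision."""
    loss = F.cross_entropy(main_logits, target)
    for aux_logits in aux_logits_list:
        loss = loss + aux_weight * F.cross_entropy(aux_logits, target)
    return loss
```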
Apple diseases, if not diagnosed early, can lead to massive resource losses and pose a serious threat to humans and animals who consume the infected apples. Hence, it is critical to diagnose these diseases early in order to manage plant health and minimize the associated risks. However, the conventional approach of monitoring plant diseases entails manual scouting and analysis of the features, texture, color, and shape of the plant leaves, resulting in delayed diagnoses and misjudgments. Our work proposes an ensemble of Xception, InceptionResNet, and MobileNet architectures to detect 5 different types of apple plant diseases. The model has been trained on the publicly available Plant Pathology 2021 dataset and can classify multiple diseases in a given plant leaf. The system has achieved outstanding results in multi-class and multi-label classification and can be used in a real-time setting to monitor large apple plantations, helping farmers manage their yields effectively.
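A sketch of the described ensemble using the stock Keras backbones (input sizes, head design, and simple probability averaging are assumptions; sigmoid outputs support the multi-label case):

```python
import numpy as np
import tensorflow as tf

NUM_DISEASES = 5

def build(backbone_fn, size):
    """Attach a sigmoid multi-label head to an ImageNet-pretrained backbone."""
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=(size, size, 3), pooling="avg")
    out = tf.keras.layers.Dense(NUM_DISEASES, activation="sigmoid")(base.output)
    return tf.keras.Model(base.input, out)

models = [build(tf.keras.applications.Xception, 299),
          build(tf.keras.applications.InceptionResNetV2, 299),
          build(tf.keras.applications.MobileNet, 224)]

def ensemble_predict(images_by_size):
    """images_by_size: dict mapping input size -> batch resized for that size.
    Returns the per-disease probabilities averaged across the three models."""
    probs = [m.predict(images_by_size[m.input_shape[1]]) for m in models]
    return np.mean(probs, axis=0)
```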
Chromatic dispersion is a common problem that degrades system resolution in optical coherence tomography (OCT). This study develops a deep learning network for automated dispersion compensation (ADC-Net) in OCT. The ADC-Net is based on a redesigned UNet architecture that employs an encoder-decoder pipeline. The input comprises partially compensated OCT B-scans, each with individual retinal layers optimized. The corresponding output is a fully compensated OCT B-scan with all retinal layers optimized. Two numerical metrics, i.e., peak signal-to-noise ratio (PSNR) and the structural similarity index computed at multiple scales (MS-SSIM), were used for objective assessment of ADC-Net performance. Comparative analysis of training models with one, three, five, seven, and nine input channels was implemented. The five-channel implementation was observed to be the optimal mode for ADC-Net training to achieve robust dispersion compensation in OCT.
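A sketch of the two objective metrics (PSNR computed directly from the mean squared error; MS-SSIM via TensorFlow's built-in `tf.image.ssim_multiscale`, though any equivalent implementation would do):

```python
import numpy as np
import tensorflow as tf

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio between two B-scans scaled to [0, max_val]."""
    mse = np.mean((reference - test) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def ms_ssim(reference, test, max_val=1.0):
    """Multi-scale SSIM; inputs are 2D arrays, reshaped to [batch, h, w, ch]."""
    ref = tf.convert_to_tensor(reference[None, ..., None], tf.float32)
    tst = tf.convert_to_tensor(test[None, ..., None], tf.float32)
    return float(tf.image.ssim_multiscale(ref, tst, max_val)[0])
```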
Valeria Cardellini, Emiliano Casalicchio, Stefano Iannucci
et al.
Intrusion Response is a relatively new field of research. Recent approaches for the creation of Intrusion Response Systems (IRSs) use Reinforcement Learning (RL) as a primary technique for the optimal or near-optimal selection of the proper countermeasure to stop or mitigate an ongoing attack. However, most of them do not consider the fact that systems can change over time or, in other words, that systems exhibit non-stationary behavior. Furthermore, stateful approaches, such as those based on RL, suffer from the curse of dimensionality, due to a state space growing exponentially with the size of the protected system. In this paper, we introduce and develop an IRS software prototype, named irs-partition. It leverages the partitioning of the protected system and Deep Q-Networks to address the curse of dimensionality by supporting a multi-agent formulation. Furthermore, it exploits transfer learning to follow the evolution of non-stationary systems.
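A minimal sketch of the multi-agent formulation (network sizes, action counts, and the epsilon-greedy policy are illustrative, not the prototype's actual configuration): one small DQN per partition selects a countermeasure from its own sub-state, keeping each agent's state space small:

```python
import torch
import torch.nn as nn

class PartitionDQN(nn.Module):
    """Q-network over one partition's sub-state; outputs one Q-value
    per available countermeasure."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, state):
        return self.net(state)

# One agent per partition; on non-stationary systems, a new agent can be
# warm-started from a trained one (transfer learning) before fine-tuning.
agents = [PartitionDQN(state_dim=16, n_actions=8) for _ in range(4)]

def select_actions(partition_states, agents, epsilon=0.1):
    """Epsilon-greedy countermeasure selection, one action per partition."""
    actions = []
    for state, agent in zip(partition_states, agents):
        if torch.rand(1).item() < epsilon:
            actions.append(torch.randint(0, 8, (1,)).item())  # explore
        else:
            actions.append(agent(state).argmax().item())      # exploit
    return actions
```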
Robert S. Brown, Jr., Michio Imawari, Namiki Izumi
et al.
Background & Aims: Despite limitations, platelet transfusion has been used to minimise bleeding risk in patients with thrombocytopaenia. Lusutrombopag is an oral, thrombopoietin receptor agonist approved for treatment of thrombocytopaenia associated with chronic liver disease in patients undergoing planned invasive procedures. This post-hoc analysis assessed the magnitude of platelet count change based on the integrated per-protocol population from 2 similar phase III multicentre, randomised, double-blind, placebo-controlled trials. Methods: Adults with chronic liver disease-induced thrombocytopaenia and platelet count <50 (× 109/L) received lusutrombopag 3 mg or placebo ≤7 days before invasive procedure scheduled 9–14 days after randomisation. Platelet transfusion was required per protocol if the platelet count remained <50 no more than 2 days before the planned invasive procedure. Post-hoc analysis included: proportion of patients with platelet count ≥50, ≥1.5-fold increase, and a doubling of platelet count; maximum and maximum change in platelet count; and platelet count time course. Results: Platelet count ≥50, a platelet count increase ≥1.5-fold, and at least a doubling in platelet count were achieved in 88.3%, 86.9%, and 52.6% of patients in the lusutrombopag group (n = 137) vs. 58.6%, 32.3%, and 6.0% of patients in the placebo group (n = 133), respectively. In the lusutrombopag group, median maximum platelet count across baseline platelet counts of <30, ≥30 to <40, and ≥40 was 46, 76, and 87, respectively. Median maximum change in platelet count by baseline platelet count was +24, +42, and +40, respectively. Patients who received lusutrombopag without platelet transfusion achieved a median platelet count ≥50 for 3 weeks. Conclusions: Patients treated with lusutrombopag experienced a clinically relevant response in platelet count for a substantial duration of time. Lay summary: Patients with low platelet counts caused by chronic liver disease may not receive planned invasive procedures or surgeries because of an increased risk of bleeding. Lusutrombopag has previously demonstrated efficacy in raising platelet counts and is approved to treat chronic liver disease patients with low platelet counts in advance of a planned surgery. Physicians need to understand more clearly what to expect in terms of platelet count change when using lusutrombopag; this integrated analysis provides data to help guide its clinical application.
Diseases of the digestive system. Gastroenterology