D. Lindberg, B. Humphreys, A. McCray
Results for "Computer applications to medicine. Medical informatics"
Showing 20 of ~87,804 results · from DOAJ, arXiv, Semantic Scholar
Lei Huang, Chen Wu, Tingting Sun et al.
Objectives To evaluate the ability of large language models (LLMs) to simulate multidisciplinary team (MDT) decision-making in colorectal cancer, a malignancy that often requires complex treatment planning. Methods We retrospectively analysed 1423 colorectal cancer cases discussed at MDT meetings at Peking University Cancer Hospital between January 2023 and December 2024. Three LLMs (OpenAI o3-mini-2025-01-31, DeepSeek-R1 671b and Qwen qwq-plus-2025-03-05) were tested for their ability to replicate MDT recommendations using a standardised treatment categorisation framework. Each case was processed three times per model; only cases with consistent outputs across all three runs were included. Concordance between AI-generated decisions and expert MDT consensus was assessed using agreement percentages and Cohen's kappa. Results O3 demonstrated the highest intramodel stability, with an agreement rate of 81.0% (Fleiss' kappa=0.794), yielding 1153 cases with consistent outputs. Concordance with MDT consensus was comparable across the three models, ranging from 62.5% to 65.4%. Multivariable analysis of O3 outputs identified treatment-naïve status, non-metastatic disease and colon tumour location as independent predictors of higher concordance with experts. Discussion LLMs showed fair overall agreement with expert MDT decisions, with stronger performance in standardised and less complex clinical scenarios. Areas of higher concordance included treatment-naïve non-metastatic colon cancer, treated non-metastatic rectal cancer and treated non-metastatic colon cancer. Conclusion LLMs can partially replicate expert MDT recommendations in colorectal cancer. Their integration into clinical workflows should aim to complement, rather than replace, human expertise.
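Cohen's kappa, the concordance statistic used in the abstract above, corrects raw agreement for the agreement expected by chance. A minimal plain-Python sketch of the calculation (the treatment labels below are invented for illustration, not taken from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases with identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the two raters' independent label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical treatment categories for six cases.
mdt = ["surgery", "chemo", "surgery", "radio", "chemo", "surgery"]
llm = ["surgery", "chemo", "radio",   "radio", "chemo", "surgery"]
print(round(cohens_kappa(mdt, llm), 3))  # → 0.75
```

On the commonly used Landis and Koch scale, the 0.75 printed here would count as "substantial" agreement (0.61 to 0.80).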
Zhongying Deng, Cheng Tang, Ziyan Huang et al.
Foundation models have demonstrated remarkable success across diverse domains and tasks, primarily due to the abundance of large-scale, diverse, and high-quality datasets. However, in the field of medical imaging, the curation and assembly of such medical datasets are highly challenging due to the reliance on clinical expertise and strict ethical and privacy constraints, resulting in a scarcity of large-scale unified medical datasets and hindering the development of powerful medical foundation models. In this work, we present the largest survey to date of medical image datasets, covering over 1,000 open-access datasets with a systematic catalog of their modalities, tasks, anatomies, annotations, limitations, and potential for integration. Our analysis exposes a landscape that is modest in scale, fragmented across narrowly scoped tasks, and unevenly distributed across organs and modalities, which in turn limits the utility of existing medical image datasets for developing versatile and robust medical foundation models. To turn fragmentation into scale, we propose a metadata-driven fusion paradigm (MDFP) that integrates public datasets with shared modalities or tasks, thereby transforming multiple small data silos into larger, more coherent resources. Building on MDFP, we release an interactive discovery portal that enables end-to-end, automated medical image dataset integration, and compile all surveyed datasets into a unified, structured table that clearly summarizes their key characteristics and provides reference links, offering the community an accessible and comprehensive repository. By charting the current terrain and offering a principled path to dataset consolidation, our survey provides a practical roadmap for scaling medical imaging corpora, supporting faster data discovery, more principled dataset creation, and more capable medical foundation models.
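The metadata-driven fusion paradigm described above groups datasets that share modalities or tasks into larger pools. A toy sketch of that grouping idea; the catalog entries and field names here are invented, not the paper's schema:

```python
from collections import defaultdict

# Illustrative catalog entries (names and fields are examples only).
catalog = [
    {"name": "LiTS",   "modality": "CT",    "task": "segmentation"},
    {"name": "KiTS",   "modality": "CT",    "task": "segmentation"},
    {"name": "ChestX", "modality": "X-ray", "task": "classification"},
    {"name": "BraTS",  "modality": "MRI",   "task": "segmentation"},
]

def fuse_by_metadata(entries, keys=("modality", "task")):
    """Group datasets that share the given metadata fields."""
    groups = defaultdict(list)
    for entry in entries:
        groups[tuple(entry[k] for k in keys)].append(entry["name"])
    # Keep only groups that actually merge more than one silo.
    return {k: v for k, v in groups.items() if len(v) > 1}

print(fuse_by_metadata(catalog))  # → {('CT', 'segmentation'): ['LiTS', 'KiTS']}
```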
K. Wong, G. Fortino, D. Abbott
Abstract Artificial intelligence (AI) is becoming a vital concept in medicine, leading to a rapid emergence of important tools for medical diagnostics. As a crucial machine learning tool in the field of computer vision, deep learning (DL) is now being widely used in medical imaging and, as reported in the medical literature, in medically related research more broadly. However, the practical application of DL in clinical diagnosis remains limited, and as a new field it faces several challenges. How to effectively perform medical image analysis is a major problem in the field of disease diagnosis, and further diagnostic methods need to be developed. At this stage, DL largely remains a black box whose internal workings are poorly understood, and hence presents some crucial technical challenges that need further methodological development. With proper diagnostics in place, pre-operative computerized simulation planning can then be carried out to select appropriate surgical intervention technology. This paper presents important questions on cardiovascular disease (CVD) diagnostics, using this powerful and yet not adequately understood technology. It discusses issues brought by the paradigm shift of AI vis-a-vis DL in CVD diagnostics, provides possible solutions to potential issues, and envisions the future of the related machine intelligence applications. The problems discussed are dissected into the modular aspects of DL in relation to CVD image classification, segmentation, and detection. A proper perspective on the management of these issues is the key to a successful technological implementation of DL in modern medical science.
Lena Spangenberg, Luise Böhler, Tina-Marie Hoke et al.
Background Increasingly, studies and reviews have highlighted the potential of ecological momentary assessments (EMAs) and wearables in suicide research. However, to date it is only poorly understood how patients experience frequent assessment of suicidal ideation over weeks and months. Methods Following discharge from inpatient psychiatric care due to a suicidal crisis or suicide attempt, patients started a 21- to 24-day EMA (EMA 1) with four semi-random prompts per day. After that, participants received four prompts per day, on two randomly chosen consecutive days per week for the following 26 weeks (EMA 2). Participants were additionally given a wearable during EMA 1 and 2. Debriefings on participants' thoughts and experiences were conducted via telephone interviews after EMA 1 (n = 68) and after EMA 2 (n = 51) using rating scales and open questions. Qualitative and quantitative methods were used to analyze the data. Results After EMA 1, 62% of participants stated that they had experienced a change in their behavior or mood due to the study (66% in EMA 2). Different aspects were mentioned, highlighting the helpfulness of EMA (e.g. improving insight and grounding oneself) but also the burden (e.g. feeling weighed down/exhausted) and reactivity effects (e.g. feeling worse/annoyed and increased brooding). Discussion The findings illustrate positive and negative effects of EMAs over longer observation intervals in individuals at high risk of suicide-related thoughts and behaviors. These findings can help in the development of study protocols, to evaluate data quality and enhance the interpretation of EMA data.
Aleksandra Edwards, Antonio F Pardiñas, George Kirov et al.
Abstract Background Free-text clinical data are unstructured and narrative in nature, providing a rich source of patient information, but extracting research-quality clinical phenotypes from these data remains a challenge. Manually reviewing and extracting clinical phenotypes from free-text patient notes is a time-consuming process and not suitable for large-scale datasets. On the other hand, automatically extracting clinical phenotypes can be challenging because medical researchers lack gold-standard annotated references and other purpose-built resources, including software. Recent large language models (LLMs) can understand natural language instructions, which helps them adapt to different domains and tasks without the need for specific training data. This makes them suitable for clinical applications, though their use in this field is limited. Objective We aimed to develop an LLM pipeline based on the few-shot learning framework that could extract clinical information from free-text clinical summaries. We assessed the performance of this pipeline for classifying individuals with confirmed or suspected comorbid intellectual disability (ID) from clinical summaries of patients with severe mental illness and performed genetic validation of the results by testing whether individuals with LLM-defined ID carried more genetic variants known to confer risk of ID when compared with individuals without LLM-defined ID. Methods We developed novel approaches for performing classification, based on an intermediate information extraction (IE) step and human-in-the-loop techniques. We evaluated two models: Fine-Tuned Language Text-To-Text Transfer Transformer (Flan-T5) and Large Language Model Architecture (LLaMA). The dataset comprised 1144 free-text clinical summaries, of which 314 were manually annotated and used as a gold standard for evaluating automated methods.
We also used published genetic data from 547 individuals to perform a genetic validation of the classification results; Firth's penalized logistic regression framework was used to test whether individuals with LLM-defined ID carry significantly more de novo variants in known developmental disorder risk genes than individuals without LLM-defined ID. Results The results demonstrate that a 2-stage approach, combining IE with manual validation, can effectively identify individuals with suspected ID from free-text patient records, requiring only a single training example per classification label. The best-performing method, based on the Flan-T5 model and incorporating the IE step, achieved the highest F1 score. Conclusions LLMs and in-context learning techniques combined with human-in-the-loop approaches can be highly beneficial for the extraction and categorization of information from free-text clinical data. In this proof-of-concept study, we show that LLMs can be used to identify individuals with a severe mental illness who also have suspected ID, which is a biologically and clinically meaningful subgroup of patients.
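The pipeline above reportedly needs only a single training example per classification label. As a rough illustration of what a one-exemplar-per-label prompt might look like, with entirely hypothetical labels, exemplars, and wording (not the study's actual prompts):

```python
# Hypothetical single exemplars per label; the real pipeline used Flan-T5/LLaMA.
EXEMPLARS = {
    "ID": "Patient has a longstanding diagnosis of intellectual disability.",
    "no_ID": "No cognitive or developmental concerns are documented.",
}

def build_prompt(summary):
    """Build a one-shot-per-label prompt for clinical text classification."""
    lines = ["Classify the clinical summary as 'ID' or 'no_ID'.", ""]
    for label, example in EXEMPLARS.items():
        lines.append(f"Summary: {example}\nLabel: {label}\n")
    lines.append(f"Summary: {summary}\nLabel:")
    return "\n".join(lines)

print(build_prompt("Special schooling and an IQ assessment of 62 are noted."))
```

The prompt ends at "Label:" so the model's completion is the predicted class; the study additionally used an intermediate IE step and manual validation on top of such outputs.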
Sahr Wali, Jeremy I Schwartz, Justice Seidel et al.
Background With many socially disadvantaged populations experiencing a higher level of illness than the general population, health research has begun to recognize the impact of social determinants on health outcomes. Community-based research has increasingly been used to understand the complexities of the local context. However, given the number of interdependent factors influencing individual well-being, no single methodology can explore this level of complexity alone. To put context into perspective, research processes need to shift from the sole use of Western methodologies and, instead, incorporate collaborative methods from nontraditional research. Specifically, Indigenous methodologies have been developed to better understand the complexity of context within multiple worldviews, but current studies have failed to apply these approaches within other cultural settings. Objective This mixed methods study will use Western and Indigenous methodologies to adapt a digital health program for remote communities in Uganda. Methods Using the principles of community-based research and user-centered design, a 4-phase mixed methods study will be conducted. The Indigenous method of 2-eyed seeing will be used to promote a reflexive engagement strategy throughout all study phases. Phase 1 will focus on partnership building to codevelop the project priorities and study design. Phase 2 will involve a needs assessment to elicit a context-focused understanding of the local clinic and community environment. Phase 3 will involve a series of system adaptations to co-design the program. Phase 4 will consist of a community-based field study to evaluate the usability and cultural relevance of the adapted program. Results This study was approved by the Makerere University School of Medicine Research and Ethics Committee (Mak-SOMREC-2021-63) and the University Health Network Research Ethics Board (20-6022).
This protocol provides a novel strategy leveraging a range of community-based methods to ensure that the contextual significance of each community's challenges is reflected in the design of the Medly Uganda program. Partnership building was initiated in June 2019, and the first stage of data collection in phase 2 began in January 2021. At the time of manuscript submission, phases 1 to 3 have been completed. Phase 4 data analysis is ongoing and expected to be completed in October 2025. Conclusions Integrating the community's local knowledge into the design of the Medly Uganda program will lead to the development of meaningful interventions that improve health outcomes. International Registered Report Identifier (IRRID) DERR1-10.2196/75136
Jialin Yue, Tianyuan Yao, Ruining Deng et al.
Artificial intelligence (AI) has demonstrated significant success in automating the detection of glomeruli—key functional units of the kidney—from whole slide images (WSIs) in kidney pathology. However, existing open-source tools are often distributed as source code or Docker containers, requiring advanced programming skills that hinder accessibility for non-programmers, such as clinicians. Additionally, current models are typically trained on a single dataset and lack flexibility in adjusting confidence levels for predictions. To overcome these challenges, we introduce GloFinder, a QuPath plugin designed for single-click automated glomerular detection across entire WSIs with online editing through the graphical user interface. GloFinder employs CircleNet, an anchor-free detection framework utilizing circle representations for precise object localization, with models trained on approximately 160,000 manually annotated glomeruli. To further enhance accuracy, the plugin incorporates weighted circle fusion—an ensemble method that combines confidence scores from multiple CircleNet models to produce refined predictions, achieving superior performance in glomerular detection. GloFinder enables direct visualization and editing of results in QuPath, facilitating seamless interaction for clinicians and providing a powerful tool for nephropathology research and clinical practice.
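The weighted circle fusion described above combines confidence scores from multiple CircleNet models into refined circle predictions. The sketch below is a simplified stand-in for that idea, not GloFinder's exact algorithm: detections are assumed to be `(x, y, r, score)` tuples, grouped by a center-distance threshold and merged by score-weighted averaging.

```python
import math

def fuse_circles(detections, dist_thresh=10.0):
    """Confidence-weighted fusion of overlapping circle detections.

    Simplified illustration: detections whose centers fall within
    dist_thresh of a group's highest-scoring member are merged into one
    circle whose parameters are the score-weighted average.
    """
    groups = []
    for x, y, r, s in sorted(detections, key=lambda d: -d[3]):
        for group in groups:
            gx, gy, _, _ = group[0]          # anchor: highest-scoring member
            if math.hypot(x - gx, y - gy) < dist_thresh:
                group.append((x, y, r, s))
                break
        else:
            groups.append([(x, y, r, s)])
    fused = []
    for group in groups:
        w = sum(s for *_, s in group)
        fused.append((
            sum(x * s for x, _, _, s in group) / w,   # weighted center x
            sum(y * s for _, y, _, s in group) / w,   # weighted center y
            sum(r * s for _, _, r, s in group) / w,   # weighted radius
            w / len(group),                           # averaged confidence
        ))
    return fused

# Two models detect the same glomerulus; one low-score hit lies elsewhere.
dets = [(100, 100, 20, 0.9), (103, 101, 22, 0.7), (400, 50, 18, 0.3)]
print(fuse_circles(dets))
```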
Heyuan Huang, Alexandra DeLucia, Vijay Murari Tiyyala et al.
While Large Language Models (LLMs) can generate fluent and convincing responses, they are not necessarily correct. This is especially apparent in the popular decompose-then-verify factuality evaluation pipeline, where LLMs evaluate generations by decomposing the generations into individual, valid claims. Factuality evaluation is especially important for medical answers, since incorrect medical information could seriously harm the patient. However, existing factuality systems are a poor match for the medical domain, as they are typically only evaluated on objective, entity-centric, formulaic texts such as biographies and historical topics. This differs from condition-dependent, conversational, hypothetical, sentence-structure diverse, and subjective medical answers, which makes decomposition into valid facts challenging. We propose MedScore, a new pipeline to decompose medical answers into condition-aware valid facts and verify them against in-domain corpora. Our method extracts up to three times more valid facts than existing methods, reducing hallucination and vague references, and retaining condition-dependency in facts. The resulting factuality score varies substantially by decomposition method, verification corpus, and the backbone LLM used, highlighting the importance of customizing each step for reliable factuality evaluation by using our generalizable and modularized pipeline for domain adaptation.
Guan-Yan Yang, Tzu-Yu Cheng, Ya-Wen Teng et al.
The integration of Large Language Models (LLMs) into computer applications has introduced transformative capabilities but also significant security challenges. Existing safety alignments, which primarily focus on semantic interpretation, leave LLMs vulnerable to attacks that use non-standard data representations. This paper introduces ArtPerception, a novel black-box jailbreak framework that strategically leverages ASCII art to bypass the security measures of state-of-the-art (SOTA) LLMs. Unlike prior methods that rely on iterative, brute-force attacks, ArtPerception introduces a systematic, two-phase methodology. Phase 1 conducts a one-time, model-specific pre-test to empirically determine the optimal parameters for ASCII art recognition. Phase 2 leverages these insights to launch a highly efficient, one-shot malicious jailbreak attack. We propose a Modified Levenshtein Distance (MLD) metric for a more nuanced evaluation of an LLM's recognition capability. Through comprehensive experiments on four SOTA open-source LLMs, we demonstrate superior jailbreak performance. We further validate our framework's real-world relevance by showing its successful transferability to leading commercial models, including GPT-4o, Claude Sonnet 3.7, and DeepSeek-V3, and by conducting a rigorous effectiveness analysis against potential defenses such as LLaMA Guard and Azure's content filters. Our findings underscore that true LLM security requires defending against a multi-modal space of interpretations, even within text-only inputs, and highlight the effectiveness of strategic, reconnaissance-based attacks. Content Warning: This paper includes potentially harmful and offensive model outputs.
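The Modified Levenshtein Distance proposed above builds on the classic edit distance; the modification itself is specific to the paper, but the base metric it adapts, for scoring how closely an LLM's readout of an ASCII-art word matches the intended word, is standard dynamic programming:

```python
def levenshtein(a, b):
    """Classic edit distance: unit-cost insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# E.g. comparing a model's recognition of an ASCII-art word to the target.
print(levenshtein("kitten", "sitting"))  # → 3
```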
Yehan Yang, Tianhao Ma, Ruotai Li et al.
The practice of Traditional Chinese Medicine (TCM) requires profound expertise and extensive clinical experience. While Large Language Models (LLMs) offer significant potential in this domain, current TCM-oriented LLMs suffer from two critical limitations: (1) a rigid consultation framework that fails to conduct comprehensive and patient-tailored interactions, often resulting in diagnostic inaccuracies; and (2) treatment recommendations generated without rigorous syndrome differentiation, which deviates from the core diagnostic and therapeutic principles of TCM. To address these issues, we develop JingFang (JF), an advanced LLM-based multi-agent system for TCM that facilitates the implementation of AI-assisted TCM diagnosis and treatment. JF integrates various TCM Specialist Agents in accordance with authentic diagnostic and therapeutic scenarios of TCM, enabling personalized medical consultations, accurate syndrome differentiation and treatment recommendations. A Multi-Agent Collaborative Consultation Mechanism (MACCM) for TCM is constructed, in which multiple Agents collaborate to emulate real-world TCM diagnostic workflows, enhancing the diagnostic ability of base LLMs to provide accurate and patient-tailored medical consultation. Moreover, we introduce a dedicated Syndrome Differentiation Agent fine-tuned on a preprocessed dataset, along with a Dual-Stage Recovery Scheme (DSRS) within the Treatment Agent, which together substantially improve the model's accuracy of syndrome differentiation and treatment. Comprehensive evaluations and experiments demonstrate JF's superior performance in medical consultation, and also show improvements of at least 124% and 21.1% in the precision of syndrome differentiation compared to existing TCM models and State of the Art (SOTA) LLMs, respectively.
Miriam Cobo, David Corral Fontecha, Wilson Silva et al.
Artificial intelligence in medical imaging has seen unprecedented growth in recent years, due to rapid advances in deep learning and computing resources. Applications cover the full range of existing medical imaging modalities, with unique characteristics driven by the physics of each technique. Yet, artificial intelligence professionals entering the field, and even experienced developers, often lack a comprehensive understanding of the physical principles underlying medical image acquisition, which hinders their ability to fully leverage its potential. The integration of physics knowledge into artificial intelligence algorithms enhances their trustworthiness and robustness in medical imaging, especially in scenarios with limited data availability. In this work, we review the fundamentals of physics in medical images and their impact on the latest advances in artificial intelligence, particularly in generative models and reconstruction algorithms. Finally, we explore the integration of physics knowledge into physics-inspired machine learning models, which leverage physics-based constraints to enhance the learning of medical imaging features.
T. Eche, L. Schwartz, F. Mokrane et al.
The clinical deployment of artificial intelligence (AI) applications in medical imaging is perhaps the greatest challenge facing radiology in the next decade. One of the main obstacles to the incorporation of automated AI-based decision-making tools in medicine is the failure of models to generalize when deployed across institutions with heterogeneous populations and imaging protocols. The most well-understood pitfall in developing these AI models is overfitting, which has, in part, been overcome by optimizing training protocols. However, overfitting is not the only obstacle to the success and generalizability of AI. Underspecification is also a serious impediment that requires conceptual understanding and correction. It is well known that a single AI pipeline, with prescribed training and testing sets, can produce several models with various levels of generalizability. Underspecification defines the inability of the pipeline to identify whether these models have embedded the structure of the underlying system by using a test set independent of, but distributed identically to, the training set. An underspecified pipeline is unable to assess the degree to which the models will be generalizable. Stress testing is a known tool in AI that can limit underspecification and, importantly, assure broad generalizability of AI models. However, the application of stress tests is new in radiologic applications. This report describes the concept of underspecification from a radiologist's perspective, discusses stress testing as a specific strategy to overcome underspecification, and explains how stress tests could be designed in radiology, by modifying medical images or stratifying testing datasets. In the upcoming years, stress tests should become in radiology the standard that crash tests have become in the automotive industry. Keywords: Computer Applications-General, Informatics, Computer-aided Diagnosis © RSNA, 2021.
Deanne K. Thompson, Claire E. Kelly, Thijs Dhollander et al.
Background: The effects of low-moderate prenatal alcohol exposure (PAE) on brain development have been infrequently studied. Aim: To compare cortical and white matter structure between children aged 6 to 8 years with low-moderate PAE in trimester 1 only, low-moderate PAE throughout gestation, or no PAE. Methods: Women reported quantity and frequency of alcohol consumption before and during pregnancy. Magnetic resonance imaging was undertaken for 143 children aged 6 to 8 years with PAE during trimester 1 only (n = 44), PAE throughout gestation (n = 58), and no PAE (n = 41). T1-weighted images were processed using FreeSurfer, obtaining brain volume, area, and thickness of 34 cortical regions per hemisphere. Fibre density (FD), fibre cross-section (FC) and fibre density and cross-section (FDC) metrics were computed for diffusion images. Brain measures were compared between PAE groups adjusted for age and sex, then additionally for intracranial volume. Results: After adjustments, the right caudal anterior cingulate cortex volume (pFDR = 0.045) and area (pFDR = 0.008), and right cingulum tract cross-sectional area (pFWE < 0.05) were smaller in children exposed to alcohol throughout gestation compared with no PAE. Conclusion: This study reports a relationship between low-moderate PAE throughout gestation and cingulate cortex and cingulum tract alterations, suggesting a teratogenic vulnerability. Further investigation is warranted.
Vijaytha Muralidharan, Joel Schamroth, Alaa Youssef et al.
Given the potential benefits of artificial intelligence and machine learning (AI/ML) within healthcare, it is critical to consider how these technologies can be deployed in pediatric research and practice. Currently, healthcare AI/ML has not yet adapted to the specific technical considerations related to pediatric data nor adequately addressed the specific vulnerabilities of children and young people (CYP) in relation to AI. While the greatest burden of disease in CYP is firmly concentrated in lower and middle-income countries (LMICs), existing applied pediatric AI/ML efforts are concentrated in a small number of high-income countries (HICs). In LMICs, use-cases remain primarily in the proof-of-concept stage. This narrative review identifies a number of intersecting challenges that pose barriers to effective AI/ML for CYP globally and explores the shifts needed to make progress across multiple domains. Child-specific technical considerations throughout the AI/ML lifecycle have been largely overlooked thus far, yet these can be critical to model effectiveness. Governance concerns are paramount, with suitable national and international frameworks and guidance required to enable the safe and responsible deployment of advanced technologies impacting the care of CYP and using their data. An ambitious vision for child health demands that the potential benefits of AI/ML are realized universally through greater international collaboration, capacity building, strong oversight, and ultimately diffusing the AI/ML locus of power to empower researchers and clinicians globally. So that AI/ML systems do not exacerbate inequalities in pediatric care, teams researching and developing these technologies in LMICs must ensure that AI/ML research is inclusive of the needs and concerns of CYP and their caregivers.
A broad, interdisciplinary, and human-centered approach to AI/ML is essential for developing tools for healthcare workers delivering care, such that the creation and deployment of ML is grounded in local systems, cultures, and clinical practice. Decisions to invest in developing and testing pediatric AI/ML in resource-constrained settings must always be part of a broader evaluation of the overall needs of a healthcare system, considering the critical building blocks underpinning effective, sustainable, and cost-efficient healthcare delivery for CYP.
Matúš Falis, Aryo Pradipta Gema, Hang Dong et al.
Objective: To investigate GPT-3.5's ability to generate and code medical documents with ICD-10 codes for data augmentation on low-resource labels. Materials and Methods: Employing GPT-3.5, we generated and coded 9,606 discharge summaries based on lists of ICD-10 code descriptions of patients with infrequent (generation) codes within the MIMIC-IV dataset. Combined with the baseline training set, this formed an augmented training set. Neural coding models were trained on baseline and augmented data and evaluated on a MIMIC-IV test set. We report micro- and macro-F1 scores on the full codeset, generation codes, and their families. Weak Hierarchical Confusion Matrices were employed to determine within-family and outside-of-family coding errors in the latter codesets. The coding performance of GPT-3.5 was evaluated both on prompt-guided self-generated data and real MIMIC-IV data. Clinical professionals evaluated the clinical acceptability of the generated documents. Results: Augmentation slightly hinders the overall performance of the models but improves performance for the generation candidate codes and their families, including one unseen in the baseline training data. Augmented models display lower out-of-family error rates. GPT-3.5 can identify ICD-10 codes from the prompted descriptions but performs poorly on real data. Evaluators noted that the generated concepts were correct but that the documents lacked variety, supporting information, and narrative quality. Discussion and Conclusion: GPT-3.5 alone is unsuitable for ICD-10 coding. Augmentation positively affects generation code families but mainly benefits codes with existing examples. Augmentation reduces out-of-family errors. Discharge summaries generated by GPT-3.5 state prompted concepts correctly but lack variety and authenticity in their narratives. They are unsuitable for clinical practice.
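The within-family versus out-of-family error analysis above relies on the ICD-10 hierarchy, in which the first three characters of a code form its category ("family"). The Weak Hierarchical Confusion Matrices used in the paper are more involved; this sketch only illustrates the family split, with invented codes:

```python
def icd10_family(code):
    """ICD-10 family (category): the first three characters, e.g. 'J45' for 'J45.9'."""
    return code.replace(".", "")[:3]

def error_breakdown(gold, predicted):
    """Split one-to-one coding errors into within-family and out-of-family mistakes."""
    counts = {"correct": 0, "within_family": 0, "out_of_family": 0}
    for g, p in zip(gold, predicted):
        if g == p:
            counts["correct"] += 1
        elif icd10_family(g) == icd10_family(p):
            counts["within_family"] += 1   # right category, wrong subdivision
        else:
            counts["out_of_family"] += 1   # entirely different category
    return counts

gold = ["J45.9", "I10", "E11.9"]
pred = ["J45.0", "I10", "K21.9"]
print(error_breakdown(gold, pred))
# → {'correct': 1, 'within_family': 1, 'out_of_family': 1}
```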
Qiyang Sun, Alican Akman, Björn W. Schuller
The continuous development of artificial intelligence (AI) theory has propelled this field to unprecedented heights, owing to the relentless efforts of scholars and researchers. In the medical realm, AI takes a pivotal role, leveraging robust machine learning (ML) algorithms. AI technology in medical imaging aids physicians in X-ray, computed tomography (CT) scans, and magnetic resonance imaging (MRI) diagnoses, conducts pattern recognition and disease prediction based on acoustic data, delivers prognoses on disease types and developmental trends for patients, and employs intelligent health management wearable devices with human-computer interaction technology to name but a few. While these well-established applications have significantly assisted in medical field diagnoses, clinical decision-making, and management, collaboration between the medical and AI sectors faces an urgent challenge: How to substantiate the reliability of decision-making? The underlying issue stems from the conflict between the demand for accountability and result transparency in medical scenarios and the black-box model traits of AI. This article reviews recent research grounded in explainable artificial intelligence (XAI), with an emphasis on medical practices within the visual, audio, and multimodal perspectives. We endeavour to categorise and synthesise these practices, aiming to provide support and guidance for future researchers and healthcare professionals.
Daniel Duenias, Brennan Nichyporuk, Tal Arbel et al.
The integration of diverse clinical modalities such as medical imaging and the tabular data extracted from patients' Electronic Health Records (EHRs) is a crucial aspect of modern healthcare. Integrative analysis of multiple sources can provide a comprehensive understanding of the clinical condition of a patient, improving diagnosis and treatment decisions. Deep Neural Networks (DNNs) consistently demonstrate outstanding performance in a wide range of multimodal tasks in the medical domain. However, the complex endeavor of effectively merging medical imaging with clinical, demographic and genetic information represented as numerical tabular data remains a highly active and ongoing research pursuit. We present a novel framework based on hypernetworks to fuse clinical imaging and tabular data by conditioning the image processing on the EHR's values and measurements. This approach aims to leverage the complementary information present in these modalities to enhance the accuracy of various medical applications. We demonstrate the strength and generality of our method on two different brain Magnetic Resonance Imaging (MRI) analysis tasks, namely, brain age prediction conditioned on the subject's sex and multi-class Alzheimer's Disease (AD) classification conditioned on tabular data. We show that our framework outperforms both single-modality models and state-of-the-art MRI tabular data fusion methods. A link to our code can be found at https://github.com/daniel4725/HyperFusion
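The core idea of conditioning image processing on EHR values via a hypernetwork is that a small network maps the tabular record to the weights of an image-processing layer. A tiny dependency-free sketch under invented dimensions; this is not the authors' architecture (see their repository for the real implementation):

```python
import random

random.seed(0)

# Hypothetical sizes: 3 tabular features condition a 4-input, 2-output linear head.
TAB, IN, OUT = 3, 4, 2
N_PARAMS = IN * OUT + OUT

# Hypernetwork: a single linear map from the tabular record to the head's parameters.
W_HYPER = [[random.gauss(0, 0.1) for _ in range(N_PARAMS)] for _ in range(TAB)]

def conditioned_head(image_feats, tabular):
    """Apply a linear head whose weights are generated from the tabular record."""
    # params = tabular @ W_HYPER
    params = [sum(t * w for t, w in zip(tabular, col)) for col in zip(*W_HYPER)]
    weights = [params[i * OUT:(i + 1) * OUT] for i in range(IN)]  # generated IN x OUT
    bias = params[IN * OUT:]                                      # generated bias
    return [sum(f * weights[i][o] for i, f in enumerate(image_feats)) + bias[o]
            for o in range(OUT)]

img = [0.5, -1.0, 2.0, 0.3]   # image feature vector (e.g. from a CNN backbone)
tab = [61.0, 1.0, 27.5]       # e.g. age, sex, BMI (illustrative values)
print(len(conditioned_head(img, tab)))  # → 2
```

Changing the tabular record changes the generated weights, so the same image features are processed differently per subject, which is the point of the conditioning.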
N. Wake, A. Rosenkrantz, R. Huang et al.
Background Patient-specific 3D models are being used increasingly in medicine for many applications, including surgical planning, procedure rehearsal, trainee education, and patient education. To date, experiences on the use of 3D models to facilitate patient understanding of their disease and surgical plan are limited. The purpose of this study was to investigate, in the context of renal and prostate cancer, the impact of using 3D printed and augmented reality models for patient education. Methods Patients with MRI-visible prostate cancer undergoing either robotic assisted radical prostatectomy or focal ablative therapy, or patients with renal masses undergoing partial nephrectomy, were prospectively enrolled in this IRB approved study (n = 200). Patients underwent routine clinical imaging protocols and were randomized to receive pre-operative planning with imaging alone or imaging plus a patient-specific 3D model, which was either 3D printed, visualized in AR, or viewed in 3D on a 2D computer monitor. 3D uro-oncologic models were created from the medical imaging data. A 5-point Likert scale survey was administered to patients prior to the surgical procedure to determine understanding of the cancer and treatment plan. If randomized to receive a pre-operative 3D model, the survey was completed twice, before and after viewing the 3D model. In addition, the cohort that received 3D models completed additional questions to compare the usefulness of the different forms of visualization of the 3D models. Survey responses for each of the 3D model groups were compared using the Mann-Whitney and Wilcoxon rank-sum tests. Results All 200 patients completed the survey after reviewing their cases with their surgeons using imaging only. 127 patients completed the 5-point Likert scale survey regarding understanding of disease and surgical procedure twice, once with imaging and again after reviewing imaging plus a 3D model.
Patients had a greater understanding using 3D printed models versus imaging for all measures, including comprehension of disease, cancer size, cancer location, treatment plan, and the comfort level regarding the treatment plan (range 4.60–4.78/5 vs. 4.06–4.49/5, p < 0.05). Conclusions All types of patient-specific 3D models were reported to be valuable for patient education. Of the three advanced imaging methods, the 3D printed models helped patients gain the greatest understanding of their anatomy, disease, tumor characteristics, and surgical procedure.
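The Mann-Whitney comparison of Likert responses used in the study above reduces to counting, over all between-group pairs, how often one group's rating exceeds the other's, with ties counting half. A sketch with invented ratings (not the study's data):

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: each pair with a > b counts 1, ties count 0.5."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical 5-point Likert ratings: imaging only vs. imaging plus 3D model.
imaging = [3, 4, 3, 4, 2]
model3d = [5, 4, 5, 4, 5]
print(mann_whitney_u(model3d, imaging))  # → 23.0 out of a maximum of 25
```

A statistic near the maximum (here 25 pairs) indicates the 3D-model ratings sit almost uniformly above the imaging-only ratings; significance would then be read off the U distribution.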
Rosa Gram-Nielsen, Ivar Yannick Christensen, Mohammad Naghavi-Behzad et al.
The study aimed to compare the metastatic pattern of breast cancer and the intermodality proportion of agreement between [18F]FDG-PET/CT and CE-CT. Women with metastatic breast cancer (MBC) were enrolled prospectively and underwent a combined [18F]FDG-PET/CT and CE-CT scan to diagnose MBC. Experienced nuclear medicine and radiology physicians evaluated the scans blinded to the opposite scan results. Descriptive statistics were applied, and the intermodality proportion of agreement was used to compare [18F]FDG-PET/CT and CE-CT. In total, 76 women with verified MBC were enrolled in the study. The reported number of site-specific metastases for [18F]FDG-PET/CT vs. CE-CT was 53 (69.7%) vs. 44 (57.9%) for bone lesions, 31 (40.8%) vs. 43 (56.6%) for lung lesions, and 16 (21.1%) vs. 23 (30.3%) for liver lesions, respectively. The proportion of agreement between imaging modalities was 76.3% (95% CI 65.2–85.3) for bone lesions; 82.9% (95% CI 72.5–90.6) for liver lesions; 57.9% (95% CI 46.0–69.1) for lung lesions; and 59.2% (95% CI 47.3–70.4) for lymph nodes. In conclusion, bone and distant lymph node metastases were reported more often by [18F]FDG-PET/CT than CE-CT, while liver and lung metastases were reported more often by CE-CT than [18F]FDG-PET/CT. Agreement between scans was highest for bone and liver lesions and lowest for lymph node metastases.
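A proportion of agreement with a 95% CI, as reported in the abstract above, can be sketched with a Wilson score interval. Note that the paper may have used a different interval method (e.g. an exact interval), so the bounds produced here need not match its figures exactly:

```python
import math

def agreement_ci(n_agree, n_total, z=1.96):
    """Proportion of agreement with an approximate 95% Wilson score interval."""
    p = n_agree / n_total
    denom = 1 + z * z / n_total
    centre = (p + z * z / (2 * n_total)) / denom
    half = z * math.sqrt(p * (1 - p) / n_total + z * z / (4 * n_total ** 2)) / denom
    return p, centre - half, centre + half

# 58 of 76 scan pairs agreeing on bone lesions gives about 76.3% agreement.
p, lo, hi = agreement_ci(58, 76)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```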
Page 26 of 4391