Results for "Medical emergencies. Critical care. Intensive care. First aid"
Showing 20 of ~7,532,944 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Weronika Wasyluk, Robert Fiut, Marcin Czop et al.
Abstract Background Continuous veno-venous hemodiafiltration (CVVHDF) is used in critically ill patients, but its impact on O₂ and CO₂ removal, as well as the accuracy of resting energy expenditure (REE) measurement using indirect calorimetry (IC), remains unclear. This study aims to evaluate the effects of CVVHDF on O₂ and CO₂ removal and the accuracy of REE measurement using IC in patients undergoing continuous renal replacement therapy. Design Prospective, observational, single-center study. Methodology Patients with sepsis undergoing CVVHDF had CO₂ flow (QCO₂) and O₂ flow (QO₂) measured at multiple sampling points before and after the filter. REE was calculated using the Weir equation based on V̇CO₂ and V̇O₂ measured by IC, using true V̇CO₂ accounting for the CRRT balance, and estimated using the Harris-Benedict equation. The respiratory quotient (RQ), the ratio of V̇CO₂ to V̇O₂, was evaluated by comparing measured and true values. Results The mean QCO₂ levels measured upstream of the filter were 76.26 ± 17.33 ml/min and significantly decreased to 62.12 ± 13.64 ml/min downstream of the filter (p < 0.0001). The mean QO₂ levels remained relatively unchanged. The mean true REE was 1774.28 ± 438.20 kcal/day, significantly different from both the measured REE of 1758.59 ± 434.06 kcal/day (p = 0.0029) and the estimated REE of 1619.36 ± 295.46 kcal/day (p = 0.0475). The mean measured RQ value was 0.693 ± 0.118, while the mean true RQ value was 0.731 ± 0.121, with a significant difference (p < 0.0001). Conclusions CVVHDF may significantly alter QCO₂ levels without affecting QO₂, influencing the REE and RQ results measured by IC. However, the impact on REE is not clinically significant, and the REE value obtained via IC is closer to the true REE than that estimated using the Harris-Benedict equation. Further studies are recommended to confirm these findings.
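For readers unfamiliar with the Weir equation mentioned above, a minimal sketch of how REE and RQ are derived from gas-exchange measurements follows. This uses the widely quoted abbreviated Weir coefficients; the function names and the example V̇O₂/V̇CO₂ values are illustrative assumptions, not data from the study.

```python
def weir_ree(vo2_l_min, vco2_l_min):
    """Abbreviated Weir equation: resting energy expenditure (kcal/day)
    from O2 consumption and CO2 production, both in L/min."""
    return (3.941 * vo2_l_min + 1.106 * vco2_l_min) * 1440  # 1440 min/day

def respiratory_quotient(vo2_l_min, vco2_l_min):
    """RQ is the ratio of CO2 produced to O2 consumed."""
    return vco2_l_min / vo2_l_min

# Illustrative values only (not taken from the study):
ree = weir_ree(0.25, 0.20)              # ≈ 1737 kcal/day
rq = respiratory_quotient(0.25, 0.20)   # 0.8
```

The study's point is that CVVHDF removes CO₂ across the filter, so the V̇CO₂ seen by the calorimeter understates the true value, biasing both REE and RQ downward unless the CRRT balance is added back.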
Paul Zaha, Lars Böcking, Simeon Allmendinger et al.
Medical image segmentation is crucial for disease diagnosis and treatment planning, yet developing robust segmentation models often requires substantial computational resources and large datasets. Existing research shows that pre-trained and finetuned foundation models can boost segmentation performance. However, questions remain about how particular image preprocessing steps may influence segmentation performance across different medical imaging modalities. In particular, edges (abrupt transitions in pixel intensity) are widely acknowledged as vital cues for object boundaries but have not been systematically examined in the pre-training of foundation models. We address this gap by investigating to what extent pre-training with data processed using computationally efficient edge kernels, such as Kirsch, can improve the cross-modality segmentation capabilities of a foundation model. Two versions of a foundation model are first trained on either raw or edge-enhanced data across multiple medical imaging modalities, then finetuned on selected raw subsets tailored to specific medical modalities. After systematic investigation using the medical domains Dermoscopy, Fundus, Mammography, Microscopy, OCT, US, and XRay, we discover both increased and reduced segmentation performance across modalities using edge-focused pre-training, indicating the need for a selective application of this approach. To guide such selective applications, we propose a meta-learning strategy. It uses the standard deviation and image entropy of the raw image to choose between a model pre-trained on edge-enhanced or on raw data for optimal performance. Our experiments show that integrating this meta-learning layer yields an overall segmentation performance improvement across diverse medical imaging tasks of 16.42% compared to models pre-trained on edge-enhanced data only and 19.30% compared to models pre-trained on raw data only.
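The Kirsch operator the abstract refers to is a set of eight 3×3 compass kernels, each a rotation of one base mask; the edge response at a pixel is the maximum over all eight directions. A minimal numpy sketch (a naive loop for clarity, not the paper's preprocessing pipeline):

```python
import numpy as np

# Base Kirsch mask (north direction); the other 7 rotate its outer ring.
BASE = np.array([[5, 5, 5],
                 [-3, 0, -3],
                 [-3, -3, -3]])

def kirsch_kernels():
    """All 8 Kirsch compass kernels, obtained by rotating the outer ring."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [BASE[r][c] for r, c in ring]
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3), dtype=int)
        for (r, c), v in zip(ring, vals[shift:] + vals[:shift]):
            k[r, c] = v
        kernels.append(k)
    return kernels

def kirsch_edges(img):
    """Max response over all 8 compass directions at each interior pixel."""
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for k in kirsch_kernels():
        resp = np.zeros_like(img)
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                resp[i, j] = np.sum(img[i-1:i+2, j-1:j+2] * k)
        out = np.maximum(out, resp)
    return out
```

On a vertical step image, the response peaks along the intensity boundary and stays flat elsewhere, which is the boundary cue the pre-training exploits.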
Matthew JY Kang, Wenli Yang, Monica R Roberts et al.
The recent boom of large language models (LLMs) has re-ignited the hope that artificial intelligence (AI) systems could aid medical diagnosis. Yet despite dazzling benchmark scores, LLM assistants have yet to deliver measurable improvements at the bedside. This scoping review aims to highlight the areas where AI is limited in making practical contributions in the clinical setting, specifically in dementia diagnosis and care. Standalone machine-learning models excel at pattern recognition but seldom provide actionable, interpretable guidance, eroding clinician trust. Adjacent use of LLMs by physicians did not result in better diagnostic accuracy or speed. Key limitations trace to the data-driven paradigm: black-box outputs that lack transparency, vulnerability to hallucinations, and weak causal reasoning. Hybrid approaches that combine statistical learning with expert rule-based knowledge and involve clinicians throughout the process help restore interpretability. They also fit better with existing clinical workflows, as seen in examples like PEIRS and ATHENA-CDS. Future decision support should prioritise explanatory coherence by linking predictions to clinically meaningful causes. This can be done through neuro-symbolic or hybrid AI that combines the language ability of LLMs with human causal expertise. AI researchers have begun to address this direction, with explainable AI and neuro-symbolic AI as the next logical steps. However, these approaches are still based on data-driven knowledge integration rather than human-in-the-loop approaches. Future research should measure success not only by accuracy but by improvements in clinician understanding, workflow fit, and patient outcomes. A better understanding of what improves human-computer interaction is needed for AI systems to become part of clinical practice.
Siyi Xun, Yue Sun, Jingkun Chen et al.
Rapid advances in medical imaging technology underscore the critical need for precise and automated image quality assessment (IQA) to ensure diagnostic accuracy. Existing medical IQA methods, however, struggle to generalize across diverse modalities and clinical scenarios. In response, we introduce MedIQA, the first comprehensive foundation model for medical IQA, designed to handle variability in image dimensions, modalities, anatomical regions, and types. To support this, we developed a large-scale multi-modality dataset with abundant manually annotated quality scores. Our model integrates a salient slice assessment module to focus feature retrieval on diagnostically relevant regions and employs an automatic prompt strategy that aligns upstream physical parameter pre-training with downstream expert annotation fine-tuning. Extensive experiments demonstrate that MedIQA significantly outperforms baselines in multiple downstream tasks, establishing a scalable framework for medical IQA and advancing diagnostic workflows and clinical decision-making.
S M A Sharif, Rizwan Ali Naqvi, Woong-Kee Loh
Medical image denoising is considered among the most challenging vision tasks. Despite the real-world implications, existing denoising methods have notable drawbacks, as they often generate visual artifacts when applied to heterogeneous medical images. This study addresses the limitations of contemporary denoising methods with an artificial intelligence (AI)-driven two-stage learning strategy. The proposed method learns to estimate the residual noise from the noisy images. Later, it incorporates a novel noise attention mechanism to correlate estimated residual noise with noisy inputs to perform denoising in a coarse-to-fine manner. This study also proposes leveraging a multi-modal learning strategy to generalize the denoising across medical image modalities and multiple noise patterns for widespread applications. The practicability of the proposed method has been evaluated in extensive experiments. The experimental results demonstrate that the proposed method achieved state-of-the-art performance, significantly outperforming the existing medical image denoising methods in quantitative and qualitative comparisons. Overall, it illustrates a performance gain of 7.64 in Peak Signal-to-Noise Ratio (PSNR), 0.1021 in Structural Similarity Index (SSIM), 0.80 in DeltaE ($ΔE$), 0.1855 in Visual Information Fidelity Pixel-wise (VIFP), and 18.54 in Mean Squared Error (MSE) metrics.
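The headline PSNR gain above is measured on the standard definition: 10 log₁₀ of peak intensity squared over mean squared error. A minimal sketch of the metric (a generic implementation, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A reported improvement of 7.64 dB therefore corresponds to roughly a 5.8-fold reduction in MSE against the same reference, which is why PSNR and MSE gains are quoted together.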
Felix Buendía, Joaquín Gayoso-Cabada, José-Luis Sierra
In this paper, we describe an approach to transforming the huge amount of medical knowledge available in existing online medical collections into standardized learning packages ready to be integrated into the most popular e-learning platforms. The core of our approach is a tool called Clavy, which makes it possible to retrieve pieces of content in medical collections, to transform this content into meaningful learning units, and to export it in the form of standardized learning packages. In addition to describing the approach, we demonstrate its feasibility by applying it to the generation of IMS content packages from MedPix, a popular online database of medical cases in the domain of radiology.
Nikita Malik, Pratinav Seth, Neeraj Kumar Singh et al.
Deep learning has driven significant advances in medical image analysis, yet its adoption in clinical practice remains constrained by the large size and lack of transparency in modern models. Advances in interpretability techniques such as DL-Backtrace, Layer-wise Relevance Propagation, and Integrated Gradients make it possible to assess the contribution of individual components within neural networks trained on medical imaging tasks. In this work, we introduce an interpretability-guided pruning framework that reduces model complexity while preserving both predictive performance and transparency. By selectively retaining only the most relevant parts of each layer, our method enables targeted compression that maintains clinically meaningful representations. Experiments across multiple medical image classification benchmarks demonstrate that this approach achieves high compression rates with minimal loss in accuracy, paving the way for lightweight, interpretable models suited for real-world deployment in healthcare settings.
Raza Imam, Rufael Marew, Mohammad Yaqub
Medical Vision-Language Models (MVLMs) have achieved excellent generalization in medical image analysis, yet their performance under noisy, corrupted conditions remains largely untested. Clinical imaging is inherently susceptible to acquisition artifacts and noise; however, existing evaluations predominantly use clean datasets, overlooking robustness, i.e., the model's ability to perform under real-world distortions. To address this gap, we first introduce MediMeta-C, a corruption benchmark that systematically applies several perturbations across multiple medical imaging datasets. Combined with MedMNIST-C, this establishes a comprehensive robustness evaluation framework for MVLMs. We further propose RobustMedCLIP, a visual encoder adaptation of a pretrained MVLM that incorporates few-shot tuning to enhance resilience against corruptions. Through extensive experiments, we benchmark 5 major MVLMs across 5 medical imaging modalities, revealing that existing models exhibit severe degradation under corruption and struggle with domain-modality tradeoffs. Our findings highlight the necessity of diverse training and robust adaptation strategies, demonstrating that efficient low-rank adaptation, when paired with few-shot tuning, improves robustness while preserving generalization across modalities.
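The low-rank adaptation mentioned above follows the generic LoRA recipe: a frozen weight matrix W is augmented with a trainable low-rank product, so only a small fraction of parameters is tuned in the few-shot phase. A numpy sketch under assumed shapes and scaling (illustrative only, not RobustMedCLIP's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 64, 32, 4, 8.0

W = rng.normal(size=(d_out, d_in))         # frozen pretrained weight
A = rng.normal(size=(d_out, rank)) * 0.01  # trainable down-projection
B = np.zeros((rank, d_in))                 # B starts at zero: no initial drift

def adapted_forward(x):
    """Effective weight is W + (alpha/rank) * A @ B; only A and B train."""
    return (W + (alpha / rank) * A @ B) @ x

x = rng.normal(size=(d_in,))
# With B = 0 the adapter is inert, so the output matches the frozen model.
```

Initializing B to zero guarantees the adapted model starts exactly at the pretrained one, which matters when only a few corrupted shots are available for tuning.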
Tatsuru Kikuchi
Emergency medical services (EMS) response times are critical determinants of patient survival, yet existing approaches to spatial coverage analysis rely on discrete distance buffers or ad-hoc geographic information system (GIS) isochrones without theoretical foundation. This paper derives continuous spatial boundaries for emergency response from first principles using fluid dynamics (Navier-Stokes equations), demonstrating that response effectiveness decays exponentially with time: $\tau(t) = \tau_0 \exp(-\kappa t)$, where $\tau_0$ is baseline effectiveness and $\kappa$ is the temporal decay rate. Using 10,000 simulated emergency incidents from the National Emergency Medical Services Information System (NEMSIS), I estimate decay parameters and calculate critical boundaries $d^*$ where response effectiveness falls below policy-relevant thresholds. The framework reveals substantial demographic heterogeneity: elderly populations (85+) experience 8.40-minute average response times versus 7.83 minutes for younger adults (18-44), with 33.6% of poor-access incidents affecting elderly populations despite representing 5.2% of the sample. Non-parametric kernel regression validation confirms exponential decay is appropriate (mean squared error 8-12 times smaller than parametric), while traditional difference-in-differences analysis validates treatment effect existence (DiD coefficient = -1.35 minutes, $p < 0.001$). The analysis identifies vulnerable populations (elderly, rural, and low-income communities) facing systematically longer response times, informing optimal EMS station placement and resource allocation to reduce health disparities.
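The decay model above is linear in log space, so $\kappa$ can be recovered by log-linear least squares, and the critical boundary follows in closed form as $t^* = \ln(\tau_0/\theta)/\kappa$ for threshold $\theta$. A sketch with synthetic data (the parameter values are illustrative, not the NEMSIS estimates):

```python
import numpy as np

# Synthetic effectiveness observations following tau(t) = tau0 * exp(-kappa*t).
tau0, kappa = 1.0, 0.12          # illustrative parameters
t = np.linspace(0.0, 20.0, 50)   # response times in minutes
tau = tau0 * np.exp(-kappa * t)

# Log-linear least squares: log tau = log tau0 - kappa * t.
slope, intercept = np.polyfit(t, np.log(tau), 1)
kappa_hat, tau0_hat = -slope, np.exp(intercept)

# Critical time t* where effectiveness falls below a policy threshold theta:
theta = 0.5
t_star = np.log(tau0_hat / theta) / kappa_hat  # ≈ ln(2)/0.12 ≈ 5.78 minutes
```

With noiseless data the fit recovers the parameters exactly; in practice the kernel-regression check in the abstract is what justifies assuming the exponential form at all.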
Joseph D Forrester, Joshua Aaron Villarreal
Jacob Karlsson, Anders Svedmyr, Mats Wallin et al.
Abstract Background Respiratory quotient (RQ) is an important variable when assessing metabolic status in intensive care patients. However, analysis of RQ requires cumbersome technical equipment. The aim of the current study was to examine a simplified blood gas-based method of RQ assessment, using Douglas bag measurement of RQ (Douglas-RQ) as reference in a laboratory porcine model under metabolic steady state. In addition, we aimed at establishing reference values for RQ in the same population, thereby generating data to facilitate further research. Methods RQ was measured in 11 mechanically ventilated pigs under metabolic steady state using Douglas-RQ and CO-oximetry blood gas analysis of pulmonary artery and systemic carbon dioxide and oxygen content. The CO-oximetry data were used to calculate RQ (blood gas RQ). Paired recordings with both methods were made once in the morning and once in the afternoon, and the values obtained were analyzed for potential significant differences. Results The average Douglas-RQ, for all data points over the whole day, was 0.97 (95%CI 0.95–0.99). The corresponding blood gas RQ was 0.95 (95%CI 0.87–1.02). There was no statistically significant difference in RQ values obtained using Douglas-RQ or blood gas RQ for all data over the whole day (P = 0.43). Bias was −0.02 (95% limits of agreement ± 0.3). Douglas-RQ decreased during the day, 1.00 (95%CI 0.97–1.03) vs 0.95 (95%CI 0.92–0.98), P < 0.001, whereas the decrease was not significant for blood gas RQ, 1.02 (95%CI 0.89–1.16) vs 0.87 (95%CI 0.80–0.94), P = 0.11. Conclusion RQ values obtained with blood gas analysis did not differ statistically from gold standard Douglas bag RQ measurement, showing low bias but relatively large limits of agreement when analyzed for the whole day. This indicates that a simplified blood gas-based method for RQ estimation may be used as an alternative to gold standard expired gas analysis on a group level, even if individual values may differ.
In addition, RQ estimated with Douglas bag analysis of exhaled air, was 0.97 in anesthetized non-fasted pigs and decreased during prolonged anesthesia.
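The blood-gas RQ above rests on the Fick principle: with the same blood flow carrying both gases, flow cancels, and RQ reduces to the CO₂ content added across the tissues divided by the O₂ content removed. A sketch with hypothetical contents (the values are illustrative, not measurements from the study):

```python
def blood_gas_rq(ca_o2, cv_o2, ca_co2, cv_co2):
    """RQ from arterial (a) and mixed-venous (v) O2 and CO2 contents.
    Units cancel as long as all four contents share the same unit
    (e.g. mL/dL): RQ = CO2 added to blood / O2 removed from blood."""
    return (cv_co2 - ca_co2) / (ca_o2 - cv_o2)

# Illustrative contents in mL/dL (not values from the study):
rq = blood_gas_rq(ca_o2=20.0, cv_o2=15.0, ca_co2=50.0, cv_co2=54.75)  # 0.95
```

Because both numerator and denominator are small differences of large content values, CO-oximetry measurement error propagates strongly, consistent with the wide limits of agreement the study reports.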
Hassan Alhuzali, Ashwag Alasmari
Pre-trained Language Models (PLMs) have the potential to transform mental health support by providing accessible and culturally sensitive resources. However, despite this potential, their effectiveness in mental health care, and specifically for the Arabic language, has not been extensively explored. To bridge this gap, this study evaluates the effectiveness of foundational models for classification of Questions and Answers (Q&A) in the domain of mental health care. We leverage the MentalQA dataset, an Arabic collection featuring Q&A interactions related to mental health. In this study, we conducted experiments using four different types of learning approaches: traditional feature extraction, PLMs as feature extractors, fine-tuning PLMs, and prompting large language models (GPT-3.5 and GPT-4) in zero-shot and few-shot learning settings. While traditional feature extractors combined with Support Vector Machines (SVM) showed promising performance, PLMs exhibited even better results due to their ability to capture semantic meaning. For example, MARBERT achieved the highest performance with a Jaccard Score of 0.80 for question classification and a Jaccard Score of 0.86 for answer classification. We further conducted an in-depth analysis, including examining the effects of fine-tuning versus non-fine-tuning, the impact of varying data size, and error analysis. Our analysis demonstrates that fine-tuning proved to be beneficial for enhancing the performance of PLMs, and the size of the training data played a crucial role in achieving high performance. We also explored prompting, where few-shot learning with GPT-3.5 yielded promising results: there was an improvement of 12% for question classification and 45% for answer classification. Based on our findings, it can be concluded that PLMs and prompt-based approaches hold promise for mental health support in Arabic.
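The Jaccard score used above fits multi-label classification, where each Q&A item can carry several category labels: it is the size of the intersection of predicted and true label sets over the size of their union, averaged over samples. A minimal sketch (generic metric code, not the study's pipeline; the label names are hypothetical):

```python
def jaccard_score_multilabel(y_true, y_pred):
    """Mean Jaccard similarity between true and predicted label sets.
    Each element of y_true / y_pred is a set of labels for one sample."""
    scores = []
    for true, pred in zip(y_true, y_pred):
        if not true and not pred:
            scores.append(1.0)  # both empty: perfect agreement by convention
        else:
            scores.append(len(true & pred) / len(true | pred))
    return sum(scores) / len(scores)

# Two samples: one exact match, one sharing 1 of 3 labels overall.
score = jaccard_score_multilabel(
    [{"diagnosis"}, {"treatment", "anxiety"}],
    [{"diagnosis"}, {"treatment", "sleep"}],
)  # (1.0 + 1/3) / 2 = 2/3
```

Unlike plain accuracy, this gives partial credit when a prediction recovers some but not all of a sample's labels, which is why it suits the multi-label Q&A setting.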
Dipayan Chaudhuri, MD, Vatsal Trivedi, MD, Kimberley Lewis, MD et al.
OBJECTIVES: To evaluate the efficacy and cost-effectiveness of high-flow nasal cannula (HFNC) when compared with noninvasive positive pressure ventilation (NIPPV) in patients with acute hypoxic respiratory failure (AHRF). DATA SOURCES: We performed a comprehensive search of MEDLINE, Embase, CINAHL, the Cochrane library, and the international Health Technology Assessment database from inception to September 14, 2022. STUDY SELECTION: We included randomized control studies that compared HFNC to NIPPV in adult patients with AHRF. For clinical outcomes, we included only parallel group and crossover randomized control trials (RCTs). For economic outcomes, we included any study design that evaluated cost-effectiveness, cost-utility, or cost-benefit analyses. DATA EXTRACTION: Clinical outcomes of interest included intubation, mortality, ICU and hospital length of stay (LOS), and patient-reported dyspnea. Economic outcomes of interest included costs, cost-effectiveness, and cost-utility. DATA SYNTHESIS: We included nine RCTs (n = 1,539 patients) and one cost-effectiveness study. Compared with NIPPV, HFNC may have no effect on the need for intubation (relative risk [RR], 0.93; 95% CI, 0.69–1.27; low certainty) and an uncertain effect on mortality (RR, 0.84; 95% CI, 0.59–1.21; very low certainty). In subgroup analysis, NIPPV delivered through the helmet interface, as opposed to the facemask interface, may reduce intubation compared with HFNC (p = 0.006; moderate credibility of subgroup effect). There was no difference in ICU or hospital LOS (both low certainty) and an uncertain effect on patient-reported dyspnea (very low certainty). We could make no conclusions regarding the cost-effectiveness of HFNC compared with NIPPV. CONCLUSIONS: HFNC and NIPPV may be similarly effective at reducing the need for intubation with an uncertain effect on mortality in hospitalized patients with hypoxemic respiratory failure.
More research evaluating different interfaces in varying clinical contexts is needed to improve generalizability and precision of findings.
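The relative risks pooled above come from 2×2 event tables per trial: RR is the event rate in the treatment arm divided by the rate in the control arm, with the standard (Katz) confidence interval computed on the log scale. A sketch with hypothetical counts (not the meta-analysis data):

```python
import math

def relative_risk(events_tx, total_tx, events_ctl, total_ctl):
    """Relative risk: event rate in the treatment arm over the control arm."""
    return (events_tx / total_tx) / (events_ctl / total_ctl)

def rr_confint_95(events_tx, total_tx, events_ctl, total_ctl):
    """Approximate 95% CI on the log scale (standard Katz method)."""
    rr = relative_risk(events_tx, total_tx, events_ctl, total_ctl)
    se = math.sqrt(1/events_tx - 1/total_tx + 1/events_ctl - 1/total_ctl)
    return rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)

# Hypothetical intubation counts for illustration:
rr = relative_risk(28, 100, 30, 100)  # 0.28 / 0.30 ≈ 0.93
```

A CI such as 0.69–1.27 straddles 1.0, which is exactly why the review grades the intubation effect as "may have no effect" rather than a benefit.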
Alice Lux Fawzi, Christian Franck
Plain language summary It is commonly assumed that there is no brain injury if there are no noticeable symptoms following a head impact. There is growing evidence that traumatic brain injuries can occur with no outward symptoms and that the damage from these injuries can accumulate over time resulting in disease and impairment later in life. It is time to rethink the role that symptoms play in traumatic brain injury and adopt a quantitative understanding of brain health at the cellular level to improve the way we diagnose, prevent, and ultimately heal brain injury.
Christos Matsoukas, Johan Fredin Haslum, Moein Sorkhei et al.
Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis, pushing the state-of-the-art in classification, detection and segmentation tasks. Over the last years, vision transformers (ViTs) have appeared as a competitive alternative to CNNs, yielding impressive levels of performance in the natural image domain, while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore the benefits and drawbacks of transformer-based models for medical image classification. We conduct a series of experiments on several standard 2D medical image benchmark datasets and tasks. Our findings show that, while CNNs perform better if trained from scratch, off-the-shelf vision transformers can perform on par with CNNs when pretrained on ImageNet, both in a supervised and self-supervised setting, rendering them as a viable alternative to CNNs.
Adam Valen Levinson, Abhay Goyal, Roger Ho Chun Man et al.
Telehealth is a valuable tool for primary health care (PHC), where depression is a common condition. PHC is the first point of contact for most people with depression, but about 25% of diagnoses made by PHC physicians are inaccurate. Many other barriers also hinder depression detection and treatment in PHC. Artificial intelligence (AI) may help reduce depression misdiagnosis in PHC and improve overall diagnosis and treatment outcomes. Telehealth consultations often have video issues, such as poor connectivity or dropped calls. Audio-only telehealth is often more practical for lower-income patients who may lack stable internet connections. Thus, our study focused on using audio data to predict depression risk. The objectives were to: 1) Collect audio data from 24 people (12 with depression and 12 without mental health or major health condition diagnoses); 2) Build a machine learning model to predict depression risk. TPOT, an autoML tool, was used to select the best machine learning algorithm, which was the K-nearest neighbors classifier. The selected model had high performance in classifying depression risk (Precision: 0.98, Recall: 0.93, F1-Score: 0.96). These findings may lead to a range of tools to help screen for and treat depression. By developing tools to detect depression risk, patients can be routed to AI-driven chatbots for initial screenings. Partnerships with a range of stakeholders are crucial to implementing these solutions. Moreover, ethical considerations, especially around data privacy and potential biases in AI models, need to be at the forefront of any AI-driven intervention in mental health care.
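The K-nearest-neighbors classifier that TPOT selected in the study above can be sketched in a few lines: each test point is labeled by majority vote among its k closest training points. This is a pure-numpy toy version, not the study's pipeline, and the 1-D features standing in for audio-derived features are hypothetical:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest
    training points (Euclidean distance)."""
    preds = []
    for x in np.atleast_2d(X_test):
        dists = np.linalg.norm(np.atleast_2d(X_train) - x, axis=1)
        nearest = np.argsort(dists)[:k]
        vote = Counter(np.asarray(y_train)[nearest].tolist()).most_common(1)
        preds.append(vote[0][0])
    return preds

# Toy features: two clusters standing in for "depression risk" classes.
labels = knn_predict([[0.0], [0.1], [1.0], [1.1]], [0, 0, 1, 1],
                     [[0.05], [1.05]], k=3)  # → [0, 1]
```

With only 24 participants, a simple distance-based model like this is plausible as the best performer, though the high precision/recall figures should be read with the small sample size in mind.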
N V S Abhishek, Pushpak Bhattacharyya
Speech Emotion Recognition (SER) is the task of identifying the emotion expressed in a spoken utterance. Emotion recognition is essential in building robust conversational agents in domains such as law, healthcare, education, and customer support. Most of the studies published on SER use datasets created by employing professional actors in a noise-free environment. In natural settings such as a customer care conversation, the audio is often noisy, with speakers regularly switching between different languages as they see fit. We have worked in collaboration with a leading unicorn in the Conversational AI sector to develop the Natural Speech Emotion Dataset (NSED). NSED is a natural code-mixed speech emotion dataset where each utterance in a conversation is annotated with emotion, sentiment, valence, arousal, and dominance (VAD) values. In this paper, we show that by incorporating word-level VAD values we improve on the task of SER by 2%, for negative emotions, over the baseline value for NSED. High accuracy for negative emotion recognition is essential because customers expressing negative opinions/views need to be pacified with urgency, lest complaints and dissatisfaction snowball and get out of hand. Handling negative opinions speedily is crucial for business interests. Our study can therefore be used to develop conversational agents that are more polite and empathetic in such situations.
Catarina Mendes Silva, João Pedro Baptista, Paulo Mergulhão et al.
ABSTRACT Objective: To assess the influence of patient characteristics on hyperlactatemia in a population admitted with infection to intensive care units, as well as the influence of the severity of hyperlactatemia on hospital mortality. Methods: We performed a post hoc analysis of hyperlactatemia in INFAUCI, a prospective, observational, multicenter national study that included 14 Portuguese intensive care units. Patients admitted to intensive care units with infection and with lactate measured within the first 12 hours of admission were selected. Sepsis was identified according to the Sepsis-2 definition accepted at the time of data collection. The severity of hyperlactatemia was classified as mild (2–3.9 mmol/L), moderate (4.0–9.9 mmol/L), or severe (> 10 mmol/L). Results: Of 1,640 patients admitted with infection, hyperlactatemia occurred in 934 (57%) and was classified as mild, moderate, and severe in 57.0%, 34.4%, and 8.7% of patients, respectively. The presence of hyperlactatemia and a higher degree of hyperlactatemia were associated with a higher Simplified Acute Physiology Score II, a higher Charlson Comorbidity Index, and the presence of septic shock. For the receiver operating characteristic curve of lactate for hospital mortality, the area under the curve was 0.64 (95%CI 0.61–0.72), which increased to 0.71 (95%CI 0.68–0.74) when combined with the Sequential Organ Failure Assessment score. In-hospital mortality, with other covariates adjusted for the Simplified Acute Physiology Score II, was associated with moderate and severe hyperlactatemia, with odds ratios of 1.95 (95%CI 1.4–2.7; p < 0.001) and 4.54 (95%CI 2.4–8.5; p < 0.001), respectively. Conclusion: Blood lactate levels correlate independently with in-hospital mortality for moderate and severe degrees of hyperlactatemia.
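The area under the ROC curve reported for lactate has a direct probabilistic reading: it equals the probability that a randomly chosen non-survivor has a higher lactate than a randomly chosen survivor (the Mann-Whitney formulation). A sketch with hypothetical lactate values (not data from the study):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney formulation: the fraction of
    (positive, negative) score pairs ranked correctly, ties counting half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical lactate values (mmol/L), non-survivors vs survivors:
auc = roc_auc([4.2, 6.1, 2.5], [1.1, 2.5, 1.8])  # imperfect separation
```

An AUC of 0.64, as reported for lactate alone, means the marker ranks patient pairs correctly only about two thirds of the time, which is why combining it with the SOFA score improves discrimination.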
Jichong Hou, Ruifang Zhang
Objective. To explore the clinical effects of tandospirone citrate assisted by drawing therapy (DT) on medication compliance and sleep quality in patients with anxiety disorders. Methods. A total of 128 patients with anxiety disorders treated in the hospital were enrolled between January 2020 and January 2022. According to the random number table method, they were divided into the observation group (n = 64) and the control group (n = 64). The control group was treated with tandospirone citrate, while the observation group was additionally treated with DT. The clinical curative effect and medication compliance after treatment, and scores on the Hamilton Anxiety Scale (HAMA), Pittsburgh Sleep Quality Index (PSQI), and the World Health Organization's Quality of Life Questionnaire-Brief Version (WHOQOL-BREF) before and after treatment were compared between the two groups. The occurrence of adverse reactions during treatment was recorded. Results. After treatment, the total response rate in the observation group was higher than that in the control group (96.88% vs 86.94%) (P<0.05). After treatment, scores of HAMA and PSQI in both groups were decreased, and were lower in the observation group than in the control group (P<0.05). After treatment, medication compliance in the observation group was higher than that in the control group (P<0.05). After treatment, scores of environmental factors, social relations, physiological function, and psychological status in both groups were increased, and were higher in the observation group than in the control group (P<0.05). During treatment, there was no significant difference in the incidence of adverse reactions between the two groups (P>0.05). Conclusion. DT-assisted tandospirone citrate can effectively improve the clinical symptoms of patients with anxiety disorders as well as medication compliance, sleep quality, and quality of life, with an acceptable safety profile.
Page 37 of 376,648