L. Cronenwett, G. Sherwood, J. Barnsteiner et al.
Results for "Nursing"
Showing 20 of ~2,075,608 results · from arXiv, DOAJ, CrossRef, Semantic Scholar
I. Coyne
Jacob Barker, Doga Demirel, Cullen Jackson et al.
Although effective teamwork and communication are critical to surgical safety, structured training for non-technical skills (NTS) remains limited compared with technical simulation. The ACS/APDS Phase III Team-Based Skills Curriculum calls for scalable tools that both teach and objectively assess these competencies during laparoscopic emergencies. We introduce the Virtual Operating Room Team Experience (VORTeX), a multi-user virtual reality (VR) platform that integrates immersive team simulation with large language model (LLM) analytics to train and evaluate communication, decision-making, teamwork, and leadership. Team dialogue is analyzed using structured prompts derived from the Non-Technical Skills for Surgeons (NOTSS) framework, enabling automated classification of behaviors and generation of directed interaction graphs that quantify communication structure and hierarchy. Two laparoscopic emergency scenarios, pneumothorax and intra-abdominal bleeding, were implemented to elicit realistic stress and collaboration. Twelve surgical professionals completed pilot sessions at the 2024 SAGES conference, rating VORTeX as intuitive, immersive, and valuable for developing teamwork and communication. The LLM consistently produced interpretable communication networks reflecting expected operative hierarchies, with surgeons as central integrators, nurses as initiators, and anesthesiologists as balanced intermediaries. By integrating immersive VR with LLM-driven behavioral analytics, VORTeX provides a scalable, privacy-compliant framework for objective assessment and automated, data-informed debriefing across distributed training environments.
Dirk Douwes-Schultz, Rob Deardon, Alexandra M. Schmidt
Individual-level epidemic models are increasingly being used to help understand the transmission dynamics of various infectious diseases. However, fitting such models to individual-level epidemic data is challenging, as we often only know when an individual's disease status was detected (e.g., when they showed symptoms) and not when they were infected or removed. We propose an autoregressive coupled hidden Markov model to infer unknown infection and removal times, as well as other model parameters, from a single observed detection time for each detected individual. Unlike more traditional data augmentation methods used in epidemic modelling, we do not assume that this detection time corresponds to infection or removal or that infected individuals must at some point be detected. Bayesian coupled hidden Markov models have been used previously for individual-level epidemic data. However, these approaches assumed each individual was continuously tested and that the tests were independent. In practice, individuals are often only tested until their first positive test, and even if they are continuously tested, only the initial detection times may be reported. In addition, multiple tests on the same individual may not be independent. We accommodate these scenarios by assuming that the probability of detecting the disease can depend on past observations, which allows us to fit a much wider range of practical applications. We illustrate the flexibility of our approach by fitting two examples: an experiment on the spread of tomato spotted wilt virus in pepper plants and an outbreak of norovirus among nurses in a hospital.
Naghmeh Akhavan, Alexander George, Michelle Starz-Gaiano et al.
In the Drosophila melanogaster egg chamber, the collective migration of border cells toward the oocyte is guided by spatial gradients of chemoattractants. While cellular responses to these cues are well characterized, the spatial distribution of chemoattractant within the tissue remains difficult to measure experimentally due to imaging limitations and extracellular complexity. In this study, we develop a spatially resolved mathematical framework to model local chemoattractant concentrations during border cell migration. We use a phase-field approach to represent the egg chamber geometry and define a diffusion-reaction system with spatially heterogeneous diffusivity that accounts for confinement by cellular domains. This framework allows chemoattractant diffusion to be restricted to extracellular space while remaining excluded from the interiors of nurse cells, the border cell cluster, and the oocyte, similar to what we observe in vivo. We simulate secretion from the oocyte and degradation throughout the domain, showing how geometry shapes the distribution of signaling molecules. We further couple this chemical field to a mechanical model of cluster migration that includes a tangential interface migration (TIM) force, allowing the cluster to respond to both chemoattractant gradients and cell-cell contact. Our results show that signal localization and tissue geometry jointly influence directional persistence and the speed of migration. Notably, geometric bottlenecks and intersections can flatten local gradients and slow migration, consistent with experimental observations. This modeling framework offers a tool to investigate how biophysical constraints shape signaling environments and guide collective cell movement in vivo.
D. Polit-O'hara, B. Hungler
Ben Rahman
Semantic segmentation has made significant strides in pixel-level image understanding, yet it remains limited in capturing contextual and semantic relationships between objects. Current models, such as CNN and Transformer-based architectures, excel at identifying pixel-level features but fail to distinguish semantically similar objects (e.g., "doctor" vs. "nurse" in a hospital scene) or understand complex contextual scenarios (e.g., differentiating a running child from a regular pedestrian in autonomous driving). To address these limitations, we propose a novel Context-Aware Semantic Segmentation framework that integrates Large Language Models (LLMs) with state-of-the-art vision backbones. Our hybrid model leverages the Swin Transformer for robust visual feature extraction and GPT-4 for enriching semantic understanding through text embeddings. A Cross-Attention Mechanism is introduced to align vision and language features, enabling the model to reason about context more effectively. Additionally, Graph Neural Networks (GNNs) are employed to model object relationships within the scene, capturing dependencies that are overlooked by traditional models. Experimental results on benchmark datasets (e.g., COCO, Cityscapes) demonstrate that our approach outperforms existing methods in both pixel-level accuracy (mIoU) and contextual understanding (mAP). This work bridges the gap between vision and language, paving the way for more intelligent and context-aware vision systems in applications including autonomous driving, medical imaging, and robotics.
Maximilian Kratz, Steffen Zschaler, Jens Kosiol et al.
Once an optimisation problem has been solved, the solution may need adaptation when contextual factors change. This challenge, also known as reoptimisation, has been addressed in various problem domains, such as railway crew rescheduling, nurse rerostering, or aircraft recovery. This requires a modified problem to be solved again to ensure that the adapted solution is optimal in the new context. However, the new optimisation problem differs notably from the original problem: (i) we want to make only minimal changes to the original solution to minimise the impact; (ii) we may be unable to change some parts of the original solution (e.g., because they refer to past allocations); and (iii) we need to derive a change script from the original solution to the new solution. In this paper, we argue that Model-Driven Engineering (MDE) - in particular, the use of declarative modelling languages and model transformations for the high-level specification of optimisation problems - offers new opportunities for the systematic derivation of reoptimisation problems from the original optimisation problem specification. We focus on combinatorial reoptimisation problems and provide an initial categorisation of changing problems and strategies for deriving the corresponding reoptimisation specifications. We introduce an initial proof-of-concept implementation based on the GIPS (Graph-Based (Mixed) Integer Linear Programming Problem Specification) tool and apply it to an example resource-allocation problem: the allocation of teaching assistants to teaching sessions.
Ege Özsoy, Arda Mamur, Felix Tristram et al.
Operating rooms (ORs) demand precise coordination among surgeons, nurses, and equipment in a fast-paced, occlusion-heavy environment, necessitating advanced perception models to enhance safety and efficiency. Existing datasets either provide partial egocentric views or sparse exocentric multi-view context, but do not explore the comprehensive combination of both. We introduce EgoExOR, the first OR dataset and accompanying benchmark to fuse first-person and third-person perspectives. Spanning 94 minutes (84,553 frames at 15 FPS) of two emulated spine procedures, Ultrasound-Guided Needle Insertion and Minimally Invasive Spine Surgery, EgoExOR integrates egocentric data (RGB, gaze, hand tracking, audio) from wearable glasses, exocentric RGB and depth from RGB-D cameras, and ultrasound imagery. Its detailed scene graph annotations, covering 36 entities and 22 relations (568,235 triplets), enable robust modeling of clinical interactions, supporting tasks like action recognition and human-centric perception. We evaluate the surgical scene graph generation performance of two adapted state-of-the-art models and offer a new baseline that explicitly leverages EgoExOR's multimodal and multi-perspective signals. This dataset and benchmark establish a new foundation for OR perception, offering a rich, multimodal resource for next-generation clinical perception.
Xudong Han, Xianglun Gao, Xiaoyi Qu et al.
Multidisciplinary team (MDT) consultations are the gold standard for cancer care decision-making, yet current practice lacks structured mechanisms for quantifying consensus and ensuring decision traceability. We introduce a Multi-Agent Medical Decision Consensus Matrix System that deploys seven specialized large language model agents, including an oncologist, a radiologist, a nurse, a psychologist, a patient advocate, a nutritionist and a rehabilitation therapist, to simulate realistic MDT workflows. The framework incorporates a mathematically grounded consensus matrix that uses Kendall's coefficient of concordance to objectively assess agreement. To further enhance treatment recommendation quality and consensus efficiency, the system integrates reinforcement learning methods, including Q-Learning, PPO and DQN. Evaluation across five medical benchmarks (MedQA, PubMedQA, DDXPlus, MedBullets and SymCat) shows substantial gains over existing approaches, achieving an average accuracy of 87.5% compared with 83.8% for the strongest baseline, a consensus achievement rate of 89.3% and a mean Kendall's W of 0.823. Expert reviewers rated the clinical appropriateness of system outputs at 8.9/10. The system guarantees full evidence traceability through mandatory citations of clinical guidelines and peer-reviewed literature, following GRADE principles. This work advances medical AI by providing structured consensus measurement, role-specialized multi-agent collaboration and evidence-based explainability to improve the quality and efficiency of clinical decision-making.
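The consensus matrix described above relies on Kendall's coefficient of concordance W, a standard agreement statistic for m raters ranking n items: W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the per-item rank sums from their mean. As an illustrative sketch only, not the paper's implementation, this can be computed as:

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance W (no tie correction).

    ranks: (m, n) array; each row is one rater's ranking (1..n) of n items.
    Returns W in [0, 1], where 1 means perfect agreement.
    """
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                      # R_j, one total per item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()    # squared deviations from mean
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical agents ranking four treatment options identically:
agents = np.array([[1, 2, 3, 4],
                   [1, 2, 3, 4],
                   [1, 2, 3, 4]])
print(kendalls_w(agents))  # 1.0 (perfect concordance)
```

A reported mean W of 0.823 would correspond to strong, but not perfect, agreement among the seven agents.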
Thanh Cong Ho, Farah Kharrat, Abderrazek Abid et al.
With the widespread adoption of wearable devices in our daily lives, the demand and appeal for remote patient monitoring have significantly increased. Most research in this field has concentrated on collecting sensor data, visualizing it, and analyzing it to detect anomalies in specific diseases such as diabetes, heart disease and depression. However, this domain has a notable gap in the aspect of human-machine interaction. This paper proposes REMONI, an autonomous REmote health MONItoring system that integrates multimodal large language models (MLLMs), the Internet of Things (IoT), and wearable devices. The system automatically and continuously collects vital signs, accelerometer data from a special wearable (such as a smartwatch), and visual data in patient video clips collected from cameras. This data is processed by an anomaly detection module, which includes a fall detection model and algorithms to identify and alert caregivers of the patient's emergency conditions. A distinctive feature of our proposed system is the natural language processing component, developed with MLLMs capable of detecting and recognizing a patient's activity and emotion while responding to healthcare workers' inquiries. Additionally, prompt engineering is employed to integrate all patient information seamlessly. As a result, doctors and nurses can access real-time vital signs and the patient's current state and mood by interacting with an intelligent agent through a user-friendly web application. Our experiments demonstrate that our system is implementable and scalable for real-life scenarios, potentially reducing the workload of medical professionals and healthcare costs. A full-fledged prototype illustrating the functionalities of the system has been developed and is being tested to demonstrate the robustness of its various capabilities.
Lo Pang-Yun Ting, Hong-Pei Chen, An-Shan Liu et al.
Early detection of patient deterioration is crucial for reducing mortality rates. Heart rate data has shown promise in assessing patient health, and wearable devices offer a cost-effective solution for real-time monitoring. However, extracting meaningful insights from diverse heart rate data and handling missing values in wearable device data remain key challenges. To address these challenges, we propose TARL, an innovative approach that models the structural relationships of representative subsequences, known as shapelets, in heart rate time series. TARL creates a shapelet-transition knowledge graph to model shapelet dynamics in heart rate time series, indicating illness progression and potential future changes. We further introduce a transition-aware knowledge embedding to reinforce relationships among shapelets and quantify the impact of missing values, enabling the formulation of comprehensive heart rate representations. These representations capture explanatory structures and predict future heart rate trends, aiding early illness detection. We collaborate with physicians and nurses to gather ICU patient heart rate data from wearables and diagnostic metrics assessing illness severity for evaluating deterioration. Experiments on real-world ICU data demonstrate that TARL achieves both high reliability and early detection. A case study further showcases TARL's explainable detection process, highlighting its potential as an AI-driven tool to assist clinicians in recognizing early signs of patient deterioration.
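The shapelet modeling above builds on a standard primitive: the distance between a candidate shapelet and a time series is the minimum Euclidean distance over all same-length subsequences. A generic sketch of that matching step (hypothetical values, not the TARL code) looks like:

```python
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Minimum Euclidean distance between `shapelet` and any
    contiguous subsequence of `series` of the same length."""
    m = len(shapelet)
    dists = [np.linalg.norm(series[i:i + m] - shapelet)
             for i in range(len(series) - m + 1)]
    return min(dists)

# A heart-rate-like series and a short candidate shapelet (illustrative values):
series = np.array([72.0, 75.0, 80.0, 78.0, 74.0])
shapelet = np.array([80.0, 78.0])
print(shapelet_distance(series, shapelet))  # 0.0 (exact match at index 2)
```

Subsequences whose distance falls below a chosen threshold are the "occurrences" of a shapelet; transitions between consecutive occurrences are what a shapelet-transition graph would record.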
Magda Rafaela Carneiro Freitas, Ana da Conceição Alves Faria, Carla Gomes da Rocha et al.
<b>Background:</b> Population ageing and the growing prevalence of chronic diseases, particularly stroke, have negative repercussions on fine motor function, compromising the independence of older adults. The Specialist Nurse in Rehabilitation Nursing plays a central role in functional recovery and in improving quality of life. This study aims to describe the process of developing and validating the design of rehabilitation nursing care for older adults with impaired fine motor function. <b>Methods:</b> This paper is a three-phase methodological study conducted between January and July 2025: (1) initial development of the design of rehabilitation nursing care for older adults with impaired fine motor function; (2) validation of the content of the proposed design, using the modified e-Delphi technique; and (3) development of the final model of the care design. <b>Results:</b> The e-Delphi study, involving a panel of 15 experts, allowed the content validation of the design of rehabilitation nursing care for older adults with impaired fine motor function after two rounds. Following the suggestions, the final care design model, in relation to fine motor function, comprises five steps: (1) collection of relevant data, (2) identification of possible nursing diagnoses, (3) definition of objectives, (4) planning and implementation of interventions, and (5) evaluation of outcomes. As part of step 4, photographic records of exercises focused on the recovery of fine motor function were included. <b>Conclusions:</b> The final model of the design of rehabilitation nursing care for older adults with impaired fine motor function, developed and validated in this study, may serve as a guiding framework in the delivery of specialised care to this population.
Sebastian Crutch, Claire Waddington, Emma Harding et al.
Rarer dementias are associated with atypical symptoms and younger onset, which result in a higher burden of care. We provide a review of the global literature on longitudinal decline in activities of daily living (ADLs) in dementias that account for less than 10% of dementia diagnoses. Published studies were identified through searches conducted in Medical Literature Analysis and Retrieval System Online (MEDLINE), Excerpta Medica Database (Embase), Excerpta Medica Care (Emcare), PsycINFO, and Cumulative Index to Nursing and Allied Health Literature (CINAHL). The search criteria included terms related to ‘rarer dementias’, ‘activities of daily living’ and ‘longitudinal or cross-sectional studies’, following a registered, predefined protocol. Studies were screened, and those that met the criteria were citation searched. Quality assessments were performed, and relevant data were extracted. 20 articles were selected, of which 19 focused on dementias within the frontotemporal dementia/primary progressive aphasia spectrum, while one addressed posterior cortical atrophy. Four studies were cross-sectional and 16 studies were longitudinal, with a median duration of 2.2 years. The Disability Assessment for Dementia was used to measure decline in 8 of the 20 studies. The varied sequences of ADL decline reported in the literature reflect variation in diagnostic specificity between studies and within-syndrome heterogeneity. Most studies used Alzheimer’s disease staging scales to measure decline, which cannot capture variant-specific symptoms. To enhance care provision in dementia, ADL scales could be deployed postdiagnosis to aid treatment and planning. This necessitates staging scales that are variant-specific and span the disease course from diagnosis to end of life. PROSPERO registration number: CRD42021283302.
Miriam Jacqueline Muñoz-Aucapiña, Rosa Elvira Muñoz-Aucapiña, Inmaculada García-García et al.
Gender-based violence among young people is a pressing global problem, causing injury and disability to women and posing physical, mental, sexual, and reproductive health risks. This study aimed to psychometrically validate the Dating Violence Questionnaire—Revised (DVQ-R) in a sample of 340 Ecuadorian university students. The study included 340 male and female students from two universities in Ecuador. The reliability and validity of the questionnaire were rigorously assessed by exploratory and confirmatory factor analyses, which revealed a four-factor model as the most parsimonious solution (RMSEA = 0.012). The factors were labelled as follows: ‘emotional neglect and contempt’, ‘physical violence and aggression’, ‘coercion and control’, and ‘emotional manipulation and testing’. The validated scale yielded a Cronbach’s alpha (α) of 0.839, with individual alpha values of 0.872, 0.764, 0.849, and 0.729 for each dimension. Convergent validity was established, as the mean variance extracted per factor exceeded 0.4. Divergent validity was confirmed, as the variance retained by each factor was greater than the variance shared between them (mean variance extracted per factor > ϕ<sup>2</sup>). These results indicate that the DVQ-R is a valid and reliable instrument to assess dating violence among Spanish-speaking young adults, which supports future research and prevention programmes.
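The reliability figures reported above are Cronbach's alpha values, computed as α = k/(k−1) · (1 − Σσᵢ²/σ_total²) for k items, where σᵢ² are the per-item variances and σ_total² is the variance of each respondent's total score. A minimal sketch with made-up response data (not the study's data) is:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly consistent (hypothetical) responses on two items give alpha = 1:
scores = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(cronbach_alpha(scores))  # 1.0
```

Values such as the 0.839 reported for the full DVQ-R scale indicate high internal consistency; the per-dimension values (0.729–0.872) are read the same way.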
Sunjun Kweon, Byungjin Choi, Gyouk Chu et al.
We present KorMedMCQA, the first Korean Medical Multiple-Choice Question Answering benchmark, derived from professional healthcare licensing examinations conducted in Korea between 2012 and 2024. The dataset contains 7,469 questions from examinations for doctor, nurse, pharmacist, and dentist, covering a wide range of medical disciplines. We evaluate the performance of 59 large language models, spanning proprietary and open-source models, multilingual and Korean-specialized models, and those fine-tuned for clinical applications. Our results show that applying Chain of Thought (CoT) reasoning can enhance the model performance by up to 4.5% compared to direct answering approaches. We also investigate whether MedQA, one of the most widely used medical benchmarks derived from the U.S. Medical Licensing Examination, can serve as a reliable proxy for evaluating model performance in other regions-in this case, Korea. Our correlation analysis between model scores on KorMedMCQA and MedQA reveals that these two benchmarks align no better than benchmarks from entirely different domains (e.g., MedQA and MMLU-Pro). This finding underscores the substantial linguistic and clinical differences between Korean and U.S. medical contexts, reinforcing the need for region-specific medical QA benchmarks. To support ongoing research in Korean healthcare AI, we publicly release the KorMedMCQA via Huggingface.
Makiko Aok, Mai Nishimura, Masato Suzuki et al.
Many sexually mature females suffer from premenstrual syndrome (PMS), but effective coping methods for PMS are limited due to the complexity of symptoms and unclear pathogenesis. Awareness has shown promise in alleviating PMS symptoms but faces challenges in long-term recording and consistency. Our research goal is to establish a convenient and simple method to make individual females aware of their own psychological and autonomic conditions. In previous research, we demonstrated that participants could be classified into non-PMS and PMS groups based on mood scores obtained during the follicular phase. However, the properties of neurophysiological activity in the participants classified by mood scores have not been elucidated. This study aimed to classify participants based on their scores on a mood questionnaire during the follicular phase and to evaluate their autonomic nervous system (ANS) activity using a simple device that measures pulse waves from the earlobe. Participants were grouped into Cluster I (high positive mood) and Cluster II (low mood). Cluster II participants showed reduced parasympathetic nervous system activity from the follicular to the menstrual phase, indicating potential PMS symptoms. The study demonstrates the feasibility of using mood scores to classify individuals into PMS and non-PMS groups and monitor ANS changes across menstrual phases. Despite limitations such as sample size and device variability, the findings highlight a promising avenue for convenient PMS self-monitoring.
Irene Siragusa, Salvatore Contino, Massimo La Ciura et al.
The increasing interest in developing Artificial Intelligence applications in the medical domain suffers from the lack of high-quality data sets, mainly due to privacy-related issues. In addition, the recent rise of Vision Language Models (VLMs) creates a need for multimodal medical data sets in which clinical reports and findings are attached to the corresponding medical scans. This paper illustrates the entire workflow for building the MedPix 2.0 data set. Starting with the well-known multimodal data set MedPix®, mainly used by physicians, nurses, and healthcare students for Continuing Medical Education purposes, a semi-automatic pipeline was developed to extract visual and textual data, followed by a manual curation procedure in which noisy samples were removed, thus creating a MongoDB database. Along with the data set, we developed a Graphical User Interface aimed at navigating the MongoDB instance efficiently and obtaining the raw data that can be easily used for training and/or fine-tuning VLMs. To enforce this point, in this work, we first recall DR-Minerva, a Retrieval-Augmented Generation-based VLM trained upon MedPix 2.0. DR-Minerva predicts the body part and the modality used to scan its input image. We also propose the extension of DR-Minerva with a Knowledge Graph that uses Llama 3.1 Instruct 8B and leverages MedPix 2.0. The resulting architecture can be queried in an end-to-end manner, as a medical decision support system. MedPix 2.0 is available on GitHub.
K M Sajjadul Islam, Ayesha Siddika Nipu, Praveen Madiraju et al.
The Chief Complaint (CC) is a crucial component of a patient's medical record as it describes the main reason or concern for seeking medical care. It provides critical information for healthcare providers to make informed decisions about patient care. However, documenting CCs can be time-consuming for healthcare providers, especially in busy emergency departments. To address this issue, an autocompletion tool that suggests accurate and well-formatted phrases or sentences for clinical notes can be a valuable resource for triage nurses. In this study, we utilized text generation techniques to develop machine learning models using CC data. In our proposed work, we train a Long Short-Term Memory (LSTM) model and fine-tune three different variants of Biomedical Generative Pretrained Transformers (BioGPT), namely microsoft/biogpt, microsoft/BioGPT-Large, and microsoft/BioGPT-Large-PubMedQA. Additionally, we tune a prompt by incorporating exemplar CC sentences, utilizing the OpenAI API of GPT-4. We evaluate the models' performance based on the perplexity score, modified BERTScore, and cosine similarity score. The results show that BioGPT-Large exhibits superior performance compared to the other models. It consistently achieves a remarkably low perplexity score of 1.65 when generating CC, whereas the baseline LSTM model achieves the best perplexity score of 170. Further, we assess the proposed models' performance against the output of GPT-4. Our study demonstrates that utilizing LLMs such as BioGPT leads to the development of an effective autocompletion tool for generating CC documentation in healthcare settings.
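The perplexity scores compared above (1.65 vs. 170) follow the standard definition: the exponential of the mean negative log-likelihood per token, so lower means the model is less "surprised" by the text. A minimal sketch, with hypothetical per-token log-probabilities rather than any model's actual output:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Hypothetical log-probs for four tokens of a generated chief-complaint phrase:
lp = [-0.1, -0.5, -0.2, -0.3]
print(perplexity(lp))  # ≈ 1.32
```

A perplexity of 1.65 means the model assigns each token an average probability of about 1/1.65 ≈ 0.61, which is why it reads as a strong result next to the LSTM's 170.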
Robert J. Goddard, Wim P. Krijnen, Vincent Roelfsema et al.
Introduction: Bruxism is a repetitive masticatory muscle activity that may cause substantial morbidity and reduce the quality of life in children with profound intellectual and multiple disabilities. The most commonly used assessment methods are caregiver reporting and dental examination. This systematic review with meta-analysis aims to determine the prevalence of bruxism in children with profound intellectual and multiple disabilities and to describe the currently used assessment methods for bruxism in this population. Methods: We conducted a systematic review and meta-analysis using a multi-component search strategy. We used a random effects model to calculate the prevalence and 95 % confidence intervals for each study, for all studies combined, and specifically for Rett syndrome (RS), cerebral palsy (CP), Down syndrome (DS), and “other disorders” (primarily Angelman syndrome and Prader–Willi syndrome). Results: The prevalence for the entire group based on a random effects model was found to be 49 % (95 %CI 41–57 %) with high heterogeneity (I2 = 93 %, p < 0.01), for RS 74 % (95 %CI 53–88 %, I2 = 84 %, p < 0.01), CP 48 % (95 %CI 38–57 %, I2 = 86 %, p < 0.01), DS 40 % (95 %CI 33–47 %, I2 = 60 %, p < 0.01) and “other disorders” 40 % (95 %CI 18–67 %, I2 = 98 %, p < 0.01). The group prevalences were not equal, indicating a significant difference (P-value = 0.03), with a notably higher likelihood in RS. Conclusion: We observed a five-fold increased likelihood of bruxism in children with profound intellectual and multiple disabilities. The disorder with the highest prevalence was Rett syndrome, with a seven-fold increased likelihood of bruxism. The increased likelihood of bruxism in this vulnerable group of children demands clinicians pay heed to this substantial morbidity.
Page 20 of 103,781