K. Yeager, Rosemary Agostini, A. Nattiv et al.
Results for "Sports medicine"
Showing 20 of ~2,646,035 results · from DOAJ, arXiv, Semantic Scholar
S. Lephart, D. Pincivero, Jorge L. Giraido et al.
C. Bouchard, R. Shephard, P. Brubaker
Satoshi Matsuura, Masaki Tatsumura, Reo Asai et al.
Introduction: The Scottie dog sign on plain oblique radiography is an imaging indicator for detecting lumbar spondylolysis; however, when the cleft distance is small, the sensitivity of this method is low, making the sign difficult to detect. Detailed studies on this aspect are scarce. Therefore, this study aimed to investigate the relationship between the Scottie dog sign and cleft distance in patients with terminal-stage bilateral lumbar spondylolysis. Methods: This retrospective, cross-sectional study included 75 patients with 150 clefts of lumbar spondylolysis, all of whom had terminal-stage bilateral lumbar spondylolysis at their first visit to our hospital. Patients were classified into a Scottie dog sign-positive group (P) and a negative group (N). The mean cleft distance of the two groups was compared by a t-test using the sagittal and axial planes of computed tomography (CT) images. Results: The mean cleft distance in groups P and N, respectively, was 3.05±2.00 mm and 1.96±2.38 mm in the sagittal plane (p<0.01), and 2.64±1.70 mm and 1.92±1.93 mm in the axial plane (p<0.01), with significant differences between groups. However, in some cases the Scottie dog sign was negative even when the cleft distance was large, depending on the angle of the bone defect. Conclusions: We observed an association between the Scottie dog sign and cleft distance in patients with terminal-stage bilateral lumbar spondylolysis: the cleft distance was larger in sign-positive cases than in sign-negative cases. These findings suggest that measuring the cleft distance on CT may help predict the visibility of the Scottie dog sign on plain radiography, thereby aiding the diagnostic evaluation of lumbar spondylolysis.
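As a hedged aside on the method above: the abstract does not name the t-test variant, so the sketch below assumes Welch's unequal-variance form, with made-up cleft-distance samples rather than the study's data:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unequal-variance form)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance, n-1 denominator
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    na, nb = len(sample_a), len(sample_b)
    se = math.sqrt(var(sample_a) / na + var(sample_b) / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Illustrative (not the study's) cleft-distance samples in mm
group_p = [3.1, 2.8, 3.4, 2.9]  # Scottie dog sign positive
group_n = [1.9, 2.1, 1.8, 2.0]  # Scottie dog sign negative
t = welch_t(group_p, group_n)   # positive t: group P means exceed group N
```

A positive statistic here simply reflects the larger mean in the positive group; the study's p-values would additionally require the degrees of freedom.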
Alessio Di Rubbo, Mattia Neri, Remo Pareschi et al.
This paper explores how semantic-space reasoning, traditionally used in computational linguistics, can be extended to tactical decision-making in team sports. Building on the analogy between texts and teams (where players act as words and collective play conveys meaning), the proposed methodology models tactical configurations as compositional semantic structures. Each player is represented as a multidimensional vector integrating technical, physical, and psychological attributes; team profiles are aggregated through contextual weighting into a higher-level semantic representation. Within this shared vector space, tactical templates such as high press, counterattack, or possession build-up are encoded analogously to linguistic concepts. Their alignment with team profiles is evaluated using vector-distance metrics, enabling the computation of tactical "fit" and opponent-exploitation potential. A Python-based prototype demonstrates how these methods can generate interpretable, dynamically adaptive strategy recommendations, accompanied by fine-grained diagnostic insights at the attribute level. Beyond football, the approach offers a generalizable framework for collective decision-making and performance optimization in team-based domains, ranging from basketball and hockey to cooperative robotics and human-AI coordination systems. The paper concludes by outlining future directions toward real-world data integration, predictive simulation, and hybrid human-machine tactical intelligence.
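The tactical "fit" computation described above can be sketched with cosine similarity; all player attributes, weights, and template vectors below are hypothetical illustrations, not values from the paper:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two attribute vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def team_profile(players, weights):
    """Aggregate player vectors into one team profile via contextual weights."""
    dims = len(players[0])
    total = sum(weights)
    return [sum(w * p[i] for w, p in zip(weights, players)) / total
            for i in range(dims)]

# Hypothetical 3-dim attributes: (pressing intensity, pace, passing)
players = [(0.9, 0.8, 0.6), (0.7, 0.9, 0.5), (0.8, 0.7, 0.7)]
profile = team_profile(players, weights=[1.0, 1.0, 1.0])

# Hypothetical tactical templates encoded in the same vector space
templates = {"high_press": (1.0, 0.8, 0.4), "possession": (0.3, 0.4, 1.0)}
fit = {name: cosine_similarity(profile, vec) for name, vec in templates.items()}
best = max(fit, key=fit.get)  # template with the highest tactical fit
```

This pressing-heavy squad aligns better with the high-press template than with possession build-up, which is exactly the kind of "fit" ranking the abstract describes.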
Unmesh Padalkar
In daily fantasy sports (DFS), match participation is highly time-sensitive. Users must act within a narrow window before a game begins, making match recommendation a time-critical task to prevent missed engagement and revenue loss. Existing recommender systems, typically designed for static item catalogs, are ill-equipped to handle the hard temporal deadlines inherent in these live events. To address this, we designed and deployed a recommendation engine using the Deep Interest Network (DIN) architecture. We adapt the DIN architecture by injecting temporality at two levels: first, through real-time urgency features for each candidate match (e.g., time-to-round-lock), and second, via temporal positional encodings that represent the time gap between each historical interaction and the current recommendation request, allowing the model to dynamically weigh the recency of past actions. This approach, combined with a listwise NeuralNDCG loss function, produces highly relevant and urgency-aware rankings. To support this at industrial scale, we developed a multi-node, multi-GPU training architecture on Ray and PyTorch. Our system, validated on a massive industrial dataset with over 650k users and over 100B interactions, achieves a +9% lift in nDCG@1 over a heavily optimized LightGBM baseline with handcrafted features. The strong offline performance of this model establishes its viability as a core component for our planned on-device (edge) recommendation system, where online A/B testing will be conducted.
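A minimal sketch of the two temporal signals described above, an urgency feature and a sinusoidal time-gap encoding; the functional forms and constants are assumptions for illustration, not the deployed system's code:

```python
import math

def urgency_feature(now_ts, round_lock_ts, scale=3600.0):
    """Time-to-round-lock squashed into (0, 1]: nearer deadlines score higher."""
    remaining = max(round_lock_ts - now_ts, 0.0)
    return math.exp(-remaining / scale)

def time_gap_encoding(now_ts, event_ts, dims=4, base=10000.0):
    """Sinusoidal encoding of the gap between a past interaction and now,
    in the spirit of transformer positional encodings."""
    gap = max(now_ts - event_ts, 0.0)
    enc = []
    for i in range(dims // 2):
        freq = 1.0 / (base ** (2 * i / dims))
        enc.append(math.sin(gap * freq))
        enc.append(math.cos(gap * freq))
    return enc

now = 1_000_000.0
soon = urgency_feature(now, now + 60)     # round locks in 1 minute
later = urgency_feature(now, now + 7200)  # round locks in 2 hours
enc = time_gap_encoding(now, now - 300)   # interaction 5 minutes ago
```

The urgency score rises as the lock approaches, while the gap encoding gives the attention mechanism a smooth, bounded representation of recency.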
Chihiro Nakatani, Hiroaki Kawashima, Norimichi Ukita
This paper proposes human-in-the-loop adaptation for Group Activity Feature Learning (GAFL) without group activity annotations. This human-in-the-loop adaptation is employed in a group-activity video retrieval framework to improve its retrieval performance. Our method initially pre-trains the GAF space based on the similarity of group activities in a self-supervised manner, unlike prior work that classifies videos into pre-defined group activity classes in a supervised learning manner. Our interactive fine-tuning process updates the GAF space to allow a user to better retrieve videos similar to query videos given by the user. In this fine-tuning, our proposed data-efficient video selection process provides several videos, which are selected from a video database, to the user in order to manually label these videos as positive or negative. These labeled videos are used to update (i.e., fine-tune) the GAF space, so that the positive and negative videos move closer to and farther away from the query videos through contrastive learning. Our comprehensive experimental results on two team sports datasets validate that our method significantly improves the retrieval performance. Ablation studies also demonstrate that several components in our human-in-the-loop adaptation contribute to the improvement of the retrieval performance. Code: https://github.com/chihina/GAFL-FINE-CVIU.
R. Marx, Timothy J. Stump, Edward C. Jones et al.
Waqar Husain, Khaled Trabelsi, Hadeel Ghazzawi et al.
Background: Biphasic sleep (segmented sleep) has been documented in preindustrial societies. The Biphasic Sleep Scale (BiSS) was recently developed to measure this pattern. This study aimed to translate the BiSS into Arabic and validate it. Methods: The BiSS was translated following international cross-cultural adaptation guidelines. A cross-sectional survey of 511 Arabic-speaking young adults (mean age = 22.1 years; 73.8% female) used the Arabic BiSS and the Glasgow Sleep Effort Scale. Analysis included descriptive statistics, confirmatory factor analysis (CFA), reliability analysis (Cronbach's α and McDonald's ω), correlations, and regression models examining age, sex, and marital status effects. Results: CFA confirmed the original three-factor structure (likelihood of first sleep, consequences of first sleep, and sleep disturbance) with acceptable fit (RMSEA = 0.05, 90% CI [0.02, 0.06]; SRMR = 0.04; CFI > 0.9; TLI > 0.9). Internal consistency was robust for the total scale (α = 0.9 and ω = 0.9) and acceptable for the subscales: likelihood of first sleep (α/ω = 0.8), consequences of first sleep (α/ω = 0.8), and borderline for sleep disturbance (α/ω = 0.6). Age (β = 0.1, p = 0.03) and marital status (single vs. married; β = -0.4, p = 0.02 for likelihood; β = -0.4, p = 0.01 for consequences) significantly predicted biphasic sleep tendencies, while sex showed no significant effect. Conclusion: The Arabic BiSS demonstrates sound psychometric properties for assessing biphasic sleep. Future research should examine its applicability in diverse populations, including older adults and married individuals, and further validate the sleep disturbance dimension.
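Since the abstract reports reliability via Cronbach's α, a minimal sketch of the standard formula may help; the respondent scores below are illustrative, not the study's data:

```python
def cronbach_alpha(rows):
    """Cronbach's alpha. rows: respondents; columns: item scores."""
    k = len(rows[0])  # number of items
    def var(xs):  # sample variance, n-1 denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in rows]) for j in range(k)]
    total_var = var([sum(row) for row in rows])
    return (k / (k - 1)) * (1.0 - sum(item_vars) / total_var)

# Illustrative 4-respondent, 3-item data (highly consistent items)
scores = [[4, 5, 4], [3, 3, 3], [5, 5, 4], [2, 3, 2]]
alpha = cronbach_alpha(scores)
```

When the items move together (as above), the summed item variances are small relative to the total-score variance and α approaches 1; uncorrelated items drive it toward 0.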
Nahla Tharwat Moussa Ahmed, Hany Ezzat Obaya, Azza Abd Elaziz Abd Elhadi et al.
INTRODUCTION. The double chin is an excessive accumulation of fat in the pre- and post-platysmal region that can manifest in various forms and sizes. Thin individuals may have a double chin, as may those affected by obesity. It can reduce the definition of the mandible and give the impression of obesity or aging. AIM. To evaluate the effect of High-Intensity Focused Ultrasound (HIFU) on sleep quality measures in obese women with a double chin. MATERIALS AND METHODS. Sixty women aged 35–50 years were selected from AL Qasr-Alaini Hospital and randomly divided equally into groups A and B (n = 30). Group A (HIFU with exercise) received 3 HIFU sessions, one session per month for three months, together with a daily double chin exercise. Group B (exercise group) received daily double chin exercises only, for three months. Pre- and post-intervention, we assessed body mass index (BMI), hormonal changes (cortisol level), submental fat, and sleep apnea (Apnea-Hypopnea Index). RESULTS AND DISCUSSION. The results revealed no significant differences in age, weight, or height between the groups (p > 0.05). After the three-month intervention, group A demonstrated a statistically significant decrease in the predetermined assessed outcomes compared to group B (p < 0.001). CONCLUSION. A significant impact of HIFU on sleep quality measures was established in obese women with a double chin.
Chengfeng Dou, Ying Zhang, Zhi Jin et al.
Evidence-based medicine (EBM) plays a crucial role in the application of large language models (LLMs) in healthcare, as it provides reliable support for medical decision-making processes. Although it benefits from current retrieval-augmented generation (RAG) technologies, it still faces two significant challenges: the collection of dispersed evidence and the efficient organization of this evidence to support the complex queries necessary for EBM. To tackle these issues, we propose using LLMs to gather scattered evidence from multiple sources and present a knowledge hypergraph-based evidence management model to integrate this evidence while capturing intricate relationships. Furthermore, to better support complex queries, we have developed an Importance-Driven Evidence Prioritization (IDEP) algorithm that uses the LLM to generate multiple evidence features, each with an associated importance score, which are then used to rank the evidence and produce the final retrieval results. Experimental results from six datasets demonstrate that our approach outperforms existing RAG techniques in application domains of interest to EBM, such as medical quizzing, hallucination detection, and decision support. Test sets and the constructed knowledge graph can be accessed at https://drive.google.com/file/d/1WJ9QTokK3MdkjEmwuFQxwH96j_Byawj_/view?usp=drive_link.
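The IDEP idea of ranking evidence by importance-weighted feature scores can be sketched as follows; the feature names, weights, and scores are hypothetical stand-ins for what the paper's LLM would generate:

```python
def prioritize_evidence(evidence, feature_importance):
    """Rank evidence IDs by importance-weighted feature scores (IDEP-style sketch).
    evidence: {id: {feature: score}}; feature_importance: {feature: weight}."""
    def total(scores):
        return sum(feature_importance.get(f, 0.0) * s for f, s in scores.items())
    return sorted(evidence, key=lambda eid: total(evidence[eid]), reverse=True)

# Hypothetical per-evidence feature scores (in the paper these come from an LLM)
ev = {
    "e1": {"relevance": 0.9, "recency": 0.2},
    "e2": {"relevance": 0.5, "recency": 0.9},
}
weights = {"relevance": 0.8, "recency": 0.2}
ranking = prioritize_evidence(ev, weights)  # relevance dominates the ordering
```

With relevance weighted heavily, e1 (0.76) outranks e2 (0.58) despite e2's better recency, illustrating how importance scores steer the final retrieval order.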
Keshav Jha, Joseph Mayer
Three-dimensional (3D) printed preoperative planning models serve a critical role in the success of many medical procedures. However, many of these models do not portray the patient's complete anatomy due to their monolithic and static nature. The use of dynamic 3D-printed models can better equip physicians by providing a more anatomically accurate model due to its movement capabilities and the ability to remove and replace printed anatomies based on planning stages. A dynamic 3D-printed preoperative planning model has the capability to move in similar ways to the anatomy that is being represented by the model, or reveal additional issues that may arise during the use of a movement mechanism. The 3D-printed models are constructed in a similar manner to their static counterparts; however, in the digital post-processing phase, additional care is needed to ensure the dynamic functionality of the model. Here, we discuss the process of creating a dynamic 3D-printed model and its benefits and uses in modern medicine.
ChaoBo Zhang, Long Tan
Artificial intelligence technology plays a crucial role in recommending prescriptions for traditional Chinese medicine (TCM). Previous studies have made significant progress by focusing on the symptom-herb relationship in prescriptions. However, several limitations hinder model performance: (i) Insufficient attention to patient-personalized information such as age, BMI, and medical history, which hampers accurate syndrome identification and reduces efficacy. (ii) The typical long-tailed distribution of herb data introduces training biases and affects generalization ability. (iii) The oversight of the 'monarch, minister, assistant and envoy' compatibility among herbs increases the risk of toxicity or side effects, opposing the 'treatment based on syndrome differentiation' principle in clinical TCM. Therefore, we propose a novel hierarchical structure-enhanced personalized recommendation model for TCM formulas based on knowledge graph diffusion guidance, namely TCM-HEDPR. Specifically, we pre-train symptom representations using patient-personalized prompt sequences and apply prompt-oriented contrastive learning for data augmentation. Furthermore, we employ a KG-guided homogeneous graph diffusion method integrated with a self-attention mechanism to globally capture the non-linear symptom-herb relationship. Lastly, we design a heterogeneous graph hierarchical network to integrate herbal dispensing relationships with implicit syndromes, guiding the prescription generation process at a fine-grained level and mitigating the long-tailed herb data distribution problem. Extensive experiments on two public datasets and one clinical dataset demonstrate the effectiveness of TCM-HEDPR. In addition, we incorporate insights from modern medicine and network pharmacology to evaluate the recommended prescriptions comprehensively. TCM-HEDPR can provide a new paradigm for modern TCM prescription recommendation.
Heming Zhang, Di Huang, Wenyu Li et al.
In precision medicine, quantitative multi-omic features, topological context, and textual biological knowledge play vital roles in identifying disease-critical signaling pathways and targets. Existing pipelines capture only part of these: numerical omics approaches ignore topological context, text-centric LLMs lack quantitatively grounded reasoning, and graph-only models underuse node semantics and the generalization ability of LLMs, limiting mechanistic interpretability. Although Process Reward Models (PRMs) aim to guide reasoning in LLMs, they remain limited by unreliable intermediate evaluation, vulnerability to reward hacking, and computational cost. These gaps motivate integrating quantitative multi-omic signals, topological structure with node annotations, and literature-scale text via LLMs, using subgraph reasoning as the principal bridge linking numeric evidence, topological knowledge, and language context. We therefore propose GALAX (Graph Augmented LAnguage model with eXplainability), a framework that integrates pretrained Graph Neural Networks (GNNs) into Large Language Models (LLMs) via reinforcement learning guided by a Graph Process Reward Model (GPRM). The GPRM generates disease-relevant subgraphs in a step-wise manner: each step is initiated by an LLM and iteratively evaluated by a pretrained GNN and a schema-based rule check, enabling process-level supervision without explicit labels. As an application, we also introduce Target-QA, a benchmark combining CRISPR-identified targets, multi-omic profiles, and biomedical graph knowledge across diverse cancer cell lines. Target-QA enables GNN pretraining for supervising step-wise graph construction and supports long-context reasoning over text-numeric graphs (TNGs), providing a scalable and biologically grounded framework for explainable, reinforcement-guided subgraph reasoning toward reliable and interpretable target discovery in precision medicine.
M. Halstead, K. Walter
Kishi Kobe Yee Francisco, Andrane Estelle Carnicer Apuhin, Myles Joshua Toledo Tan et al.
Personalized medicine (PM) promises to transform healthcare by providing treatments tailored to individual genetic, environmental, and lifestyle factors. However, its high costs and infrastructure demands raise concerns about exacerbating health disparities, especially between high-income countries (HICs) and low- and middle-income countries (LMICs). While HICs benefit from advanced PM applications through AI and genomics, LMICs often lack the resources necessary to adopt these innovations, leading to a widening healthcare divide. This paper explores the financial and ethical challenges of PM implementation, with a focus on ensuring equitable access. It proposes strategies for global collaboration, infrastructure development, and ethical frameworks to support LMICs in adopting PM, aiming to prevent further disparities in healthcare accessibility and outcomes.
Mahela Pandukabhaya, Tharaka Fonseka, Madhumini Kulathunge et al.
Mastering psychomotor skills, such as those essential in sports, rehabilitation, and professional training, often requires a precise understanding of motion patterns and performance metrics. This study proposes a versatile framework for optimizing psychomotor learning through human motion analysis. Utilizing a wearable IMU sensor system, the motion trajectories of a given psychomotor task are acquired and then linked to points in a performance space using a predefined set of quality metrics specific to the psychomotor skill. This enables the identification of a benchmark cluster in the performance space, which represents a group of reference points that define optimal performance across multiple criteria, allowing correspondences to be established between the performance clusters and sets of trajectories in the motion space. As a result, common or specific deviations in the performance space can be identified, enabling remedial actions in the motion space to optimize performance. A thorough validation of the proposed framework is done in this paper using a Table Tennis forehand stroke as a case study. The resulting quantitative and visual representation of performance empowers individuals to optimize their skills and achieve peak performance.
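The benchmark-cluster matching described above can be sketched as a nearest-reference-point search in the performance space, with per-metric deviations suggesting remedial actions; the metrics and values below are hypothetical:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two points in the performance space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_benchmark(point, benchmark_cluster):
    """Closest reference point in the benchmark cluster, plus per-metric
    deviations that indicate where performance falls short."""
    best = min(benchmark_cluster, key=lambda ref: euclidean(point, ref))
    deviations = [p - r for p, r in zip(point, best)]
    return best, deviations

# Hypothetical 2-metric performance space: (placement accuracy, stroke speed)
cluster = [(0.90, 0.85), (0.85, 0.90)]      # optimal reference points
best, dev = nearest_benchmark((0.60, 0.88), cluster)
# dev[0] < 0 flags accuracy as the metric needing remedial work
```

Mapping the negative deviation back to the motion space (e.g., to specific trajectory segments of the forehand stroke) is where the framework's remedial guidance would come in.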
Ping Yu, Kaitao Song, Fengchen He et al.
The recent unprecedented advancements in Large Language Models (LLMs) have propelled the medical community by establishing advanced medical-domain models. However, due to the limited availability of medical datasets, only a few comprehensive benchmarks exist to gauge progress in this area. In this paper, we introduce TCMD, a new medical question-answering (QA) dataset containing a large collection of manually curated questions for solving Traditional Chinese Medicine examination tasks. TCMD collects questions across diverse domains together with their annotated medical subjects, supporting a comprehensive assessment of LLM capability in the TCM domain. We conduct an extensive evaluation of various general LLMs and medical-domain-specific LLMs. Moreover, we analyze the robustness of current LLMs in solving TCM QA tasks by introducing randomness; the inconsistency of the experimental results reveals the shortcomings of current LLMs in solving these QA tasks. We expect that our dataset can further facilitate the development of LLMs in the TCM area.
Anh Le, Amirreza Hashemi, Mark P. Ottensmeyer et al.
The design of nuclear imaging scanners is crucial for optimizing detection and imaging processes. While advancements have been made in simple, symmetrical modalities, current research is progressing towards more intricate structures; however, the widespread adoption of computer-aided design (CAD) tools for modeling and simulation is still limited. This paper introduces FreeCAD and the GDML Workbench as essential tools for designing and testing complex geometries in nuclear imaging modalities. FreeCAD is a parametric 3D CAD modeler, and GDML is an XML-based language for describing complex geometries in simulations. Their integration streamlines the design and simulation of nuclear medicine scanners, including PET and SPECT scanners. The paper demonstrates their application in creating calibration phantoms and conducting simulations with Geant4, showcasing their precision and versatility in generating sophisticated components for nuclear imaging. The integration of these tools is expected to streamline design processes, enhance efficiency, and facilitate widespread application in the nuclear imaging field.
Lingxiao Luo, Bingda Tang, Xuanzhong Chen et al.
Recent advancements in Vision Language Models (VLMs) have demonstrated remarkable promise in generating visually grounded responses. However, their application in the medical domain is hindered by unique challenges. For instance, most VLMs rely on a single method of visual grounding, whereas complex medical tasks demand more versatile approaches. Additionally, while most VLMs process only 2D images, a large portion of medical images are 3D. The lack of medical data further compounds these obstacles. To address these challenges, we present VividMed, a vision language model with versatile visual grounding for medicine. Our model supports generating both semantic segmentation masks and instance-level bounding boxes, and accommodates various imaging modalities, including both 2D and 3D data. We design a three-stage training procedure and an automatic data synthesis pipeline based on open datasets and models. Besides visual grounding tasks, VividMed also excels in other common downstream tasks, including Visual Question Answering (VQA) and report generation. Ablation studies empirically show that the integration of visual grounding ability leads to improved performance on these tasks. Our code is publicly available at https://github.com/function2-llx/MMMM.
Page 31 of 132,302