Results for "Internal medicine"

Showing 20 of ~10,674,986 results · from arXiv, DOAJ, Semantic Scholar, CrossRef

S2 Open Access 2002
A consensus statement on health care transitions for young adults with special health care needs.

R. W. Blum, D. Hirsch, Theodore A. Kastner et al.

This policy statement represents a consensus on the critical first steps that the medical profession needs to take to realize the vision of a family-centered, continuous, comprehensive, coordinated, compassionate, and culturally competent health care system that is as developmentally appropriate as it is technically sophisticated. The goal of transition in health care for young adults with special health care needs is to maximize lifelong functioning and potential through the provision of high-quality, developmentally appropriate health care services that continue uninterrupted as the individual moves from adolescence to adulthood. This consensus document has now been approved as policy by the boards of the American Academy of Pediatrics, the American Academy of Family Physicians, and the American College of Physicians-American Society of Internal Medicine.

997 citations en Medicine
S2 Open Access 2020
COVID-19-related myocarditis in a 21-year-old female patient

In-Cheol Kim, Jin Young Kim, H. Kim et al.

In-Cheol Kim, Jin Young Kim, Hyun Ah Kim, and Seongwook Han* Division of Cardiology, Department of Internal Medicine, Cardiovascular Center, Keimyung University Dongsan Hospital, Keimyung University School of Medicine, Daegu, Republic of Korea; Department of Radiology, Keimyung University Dongsan Hospital, Keimyung University School of Medicine, Daegu, Republic of Korea; and Department of Infectious Disease, Keimyung University Dongsan Hospital, Keimyung University School of Medicine, Daegu, Republic of Korea

289 citations en Medicine
S2 Open Access 2018
Hodgkin lymphoma: ESMO Clinical Practice Guidelines for diagnosis, treatment and follow-up.

D. Eichenauer, B. Aleman, M. André et al.

First Department of Internal Medicine, University Hospital Cologne, Cologne, Germany; Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands; Université Catholique de Louvain, Yvoir; Department of Hematology, CHU UCL Namur, Yvoir, Belgium; Department of Diagnostic, Clinical and Public Health Medicine, University of Modena and Reggio Emilia, Modena, Italy; Department of Hematology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark; Division of Cancer Sciences, University of Manchester, Manchester; The Christie NHS Foundation Trust, Manchester, UK; Hematology Division, Azienda Ospedaliera Santi Antonio e Biagio e Cesare Arrigo, Alessandria, Italy

338 citations en Medicine
S2 Open Access 2022
The Digital Metaverse: Applications in Artificial Intelligence, Medical Education, and Integrative Health

A. Ahuja, Bryce W. Polascik, Divyesh Doddapaneni et al.

a Charles E. Schmidt College of Medicine, Florida Atlantic University, Boca Raton, FL, United States of America b Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America c Department of Internal Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America d Department of Internal Medicine, Orlando Regional Medical Center, Orlando, Florida, United States of America e Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami, Miller School of Medicine, Miami, Florida, United States of America

165 citations en Medicine
arXiv Open Access 2026
Visualizing and Benchmarking LLM Factual Hallucination Tendencies via Internal State Analysis and Clustering

Nathan Mao, Varun Kaushik, Shreya Shivkumar et al.

Large Language Models (LLMs) often hallucinate, generating nonsensical or false information that can be especially harmful in sensitive fields such as medicine or law. To study this phenomenon systematically, we introduce FalseCite, a curated dataset designed to capture and benchmark hallucinated responses induced by misleading or fabricated citations. Running GPT-4o-mini, Falcon-7B, and Mistral-7B through FalseCite, we observed a noticeable increase in hallucination activity for false claims with deceptive citations, especially in GPT-4o-mini. Using the responses from FalseCite, we can also analyze the internal states of hallucinating models, visualizing and clustering the hidden state vectors. From this analysis, we noticed that the hidden state vectors, regardless of hallucination or non-hallucination, tend to trace out a distinct horn-like shape. Our work underscores FalseCite's potential as a foundation for evaluating and mitigating hallucinations in future LLM research.

en cs.CL, cs.AI
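The visualize-and-cluster step described in the abstract above can be sketched as follows. The paper's actual pipeline and the FalseCite data are not reproduced here; synthetic vectors stand in for real model hidden states (which would be extracted from the models' intermediate layers), and the group separation is contrived purely for illustration.

```python
import numpy as np

# Synthetic stand-ins for hidden-state vectors; a real analysis would pool
# per-token or per-response hidden states from the evaluated model.
rng = np.random.default_rng(0)
hallucinated = rng.normal(loc=1.0, scale=0.5, size=(50, 64))
faithful = rng.normal(loc=-1.0, scale=0.5, size=(50, 64))
states = np.vstack([hallucinated, faithful])

# Project to 2-D with PCA (via SVD) for visualization and simple clustering.
centered = states - states.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # shape (100, 2)

# Well-separated groups land on opposite sides of the first component,
# so thresholding it acts as a trivial two-way clustering.
side = coords[:, 0] > 0
print(side[:50].mean(), side[50:].mean())
```

With contrived, well-separated inputs the two groups split cleanly along the first principal component; real hidden states are far noisier, which is why the paper applies proper clustering rather than a one-axis threshold.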
arXiv Open Access 2026
How Retrieved Context Shapes Internal Representations in RAG

Samuel Yeh, Sharon Li

Retrieval-augmented generation (RAG) enhances large language models (LLMs) by conditioning generation on retrieved external documents, but the effect of retrieved context is often non-trivial. In realistic retrieval settings, the retrieved document set often contains a mixture of documents that vary in relevance and usefulness. While prior work has largely examined these phenomena through output behavior, little is known about how retrieved context shapes the internal representations that mediate information integration in RAG. In this work, we study RAG through the lens of latent representations. We systematically analyze how different types of retrieved documents affect the hidden states of LLMs, and how these internal representation shifts relate to downstream generation behavior. Across four question-answering datasets and three LLMs, we analyze internal representations under controlled single- and multi-document settings. Our results reveal how context relevancy and layer-wise processing influence internal representations, providing explanations of LLMs' output behaviors and insights for RAG system design.

en cs.CL
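A minimal sketch of the kind of representation-shift measurement the abstract above describes: compare a hidden state obtained without retrieved context against hidden states obtained with a relevant versus an unrelated document. The vectors here are synthetic stand-ins, not outputs of the paper's models, and the "relevant"/"distractor" construction is an assumption for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
base = rng.normal(size=64)                          # state with no context
relevant = base + rng.normal(scale=0.1, size=64)    # small shift: useful doc
distractor = rng.normal(size=64)                    # large shift: unrelated doc

print("relevant-doc similarity: ", cosine(base, relevant))
print("distractor similarity:   ", cosine(base, distractor))
```

In this toy setup the relevant-document state stays close to the no-context state while the distractor state does not; the paper's contribution is measuring such shifts layer by layer in real models and relating them to generation behavior.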
arXiv Open Access 2026
Co-Evolution of Policy and Internal Reward for Language Agents

Xinyu Wang, Hanwei Wu, Jingwei Song et al.

Large language model (LLM) agents learn by interacting with environments, but long-horizon training remains fundamentally bottlenecked by sparse and delayed rewards. Existing methods typically address this challenge through post-hoc credit assignment or external reward models, which provide limited guidance at inference time and often separate reward improvement from policy improvement. We propose Self-Guide, a self-generated internal reward for language agents that supports both inference-time guidance and training-time supervision. Specifically, the agent uses Self-Guide as a short self-guidance signal to steer the next action during inference, and converts the same signal into step-level internal reward for denser policy optimization during training. This creates a co-evolving loop: better policy produces better guidance, and better guidance further improves policy as internal reward. Across three agent benchmarks, inference-time self-guidance already yields clear gains, while jointly evolving policy and internal reward with GRPO brings further improvements (8%) over baselines trained solely with environment reward. Overall, our results suggest that language agents can improve not only by collecting more experience, but also by learning to generate and refine their own internal reward during acting and learning.

en cs.LG, cs.AI
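The abstract above combines a dense internal reward with a sparse environment reward under GRPO-style optimization. A minimal sketch of the group-relative advantage computation is below; the reward values, the 0.5 mixing weight, and the per-rollout averaging of the step-level internal reward are all illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages: normalize rewards within a sampled group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Sparse environment reward for a group of 4 sampled rollouts: only one succeeds.
env_reward = np.array([0.0, 0.0, 1.0, 0.0])

# Hypothetical dense internal reward (self-generated guidance score),
# averaged per rollout here for simplicity.
internal_reward = np.array([0.2, 0.5, 0.9, 0.1])

# Mixing weight is an assumption; the internal reward densifies the signal
# so unsuccessful rollouts are still ranked against each other.
combined = env_reward + 0.5 * internal_reward
print(grpo_advantages(combined))
```

Note how the internal reward breaks the tie among the three zero-environment-reward rollouts, giving the optimizer a gradient signal it would not get from the sparse reward alone.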

Page 31 of 533,750