Petr Svobodný
Results for "History of medicine. Medical expeditions"
Showing 20 of ~9,401,763 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Ümit Karakaş
Objective: Lavender oil is an essential oil obtained from the lavender plant, an important member of the Lamiaceae family, and is noted for its various therapeutic effects. Historically, from ancient civilizations to the present day, it has found wide use owing to its antibacterial, anti-inflammatory, anxiolytic, antifungal, and wound-healing properties. Linalool and linalyl acetate, the principal components of lavender oil, exert calming effects on the nervous system and may aid in the treatment of various chronic diseases by reducing inflammation and oxidative stress. Research has shown that lavender oil has a broad spectrum of effects ranging from neurological disorders to cancer, but how these effects occur at the molecular level is not yet fully understood. The aim of this study is to review the current literature within the framework of microRNAs. Method: The content of the study was compiled through searches of the PubMed, Google Scholar, Web of Science, and ScienceDirect databases using combinations of the keywords "lavender oil", "miRNA", "epigenetics", and "gene expression". Results: In recent years, attention has turned to the genetic and epigenetic effects of essential oils and their possible interactions with miRNAs. Given the critical role of miRNAs in regulating gene expression, the potential of lavender oil to modulate genetic mechanisms, particularly by affecting miRNAs associated with stress, depression, and inflammation, stands out as a major area of research. However, the scarcity of literature in this area is striking. A better understanding of the therapeutic effects of lavender oil will become possible as molecular studies focusing on its effects on miRNAs increase. Conclusion: Future studies may consolidate the place of lavender oil in molecular biology and expand its clinical use. Accordingly, elucidating the biological mechanisms of lavender oil may make it a more effective tool in the treatment of diseases.
Robert Sparrow, Joshua Hatherley
What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It's also highly likely to impact on the organisational and business practices of healthcare systems in ways that are perhaps under-appreciated. Enthusiasts for AI have held out the prospect that it will free physicians up to spend more time attending to what really matters to them and their patients. We will argue that this claim depends upon implausible assumptions about the institutional and economic imperatives operating in contemporary healthcare settings. We will also highlight important concerns about privacy, surveillance, and bias in big data, as well as the risks of overtrust in machines, the challenges of transparency, the deskilling of healthcare practitioners, the way AI reframes healthcare, and the implications of AI for the distribution of power in healthcare institutions. We will suggest that two questions, in particular, are deserving of further attention from philosophers and bioethicists. What does care look like when one is dealing with data as much as people? And, what weight should we give to the advice of machines in our own deliberations about medical decisions?
Dingkang Yang, Jinjie Wei, Dongling Xiao et al.
Developing intelligent pediatric consultation systems offers promising prospects for improving diagnostic efficiency, especially in China, where healthcare resources are scarce. Despite recent advances in Large Language Models (LLMs) for Chinese medicine, their performance is sub-optimal in pediatric applications due to inadequate instruction data and vulnerable training procedures. To address these issues, this paper builds PedCorpus, a high-quality dataset of over 300,000 multi-task instructions from pediatric textbooks, guidelines, and knowledge graph resources to fulfil diverse diagnostic demands. Building on the well-designed PedCorpus, we propose PediatricsGPT, the first Chinese pediatric LLM assistant built on a systematic and robust training pipeline. In the continuous pre-training phase, we introduce a hybrid instruction pre-training mechanism to mitigate the internally injected knowledge inconsistency of LLMs for medical domain adaptation. Next, full-parameter Supervised Fine-Tuning (SFT) is used to incorporate the general medical knowledge schema into the models. After that, we devise a direct following preference optimization to enhance the generation of pediatrician-like humanistic responses. In the parameter-efficient secondary SFT phase, a mixture of universal-specific experts strategy is presented to resolve the competency conflict between medical generalist knowledge and pediatric expertise mastery. Extensive results based on automatic metrics, GPT-4, and doctor evaluations across distinct downstream tasks show that PediatricsGPT consistently outperforms previous Chinese medical LLMs. Our model and dataset will be open-sourced for community development.
Pedram Hosseini, Jessica M. Sin, Bing Ren et al.
There is a lack of benchmarks for evaluating large language models (LLMs) in long-form medical question answering (QA). Most existing medical QA evaluation benchmarks focus on automatic metrics and multiple-choice questions. While valuable, these benchmarks fail to fully capture or assess the complexities of real-world clinical applications where LLMs are being deployed. Furthermore, existing studies on evaluating long-form answer generation in medical QA are primarily closed-source, lacking access to human medical expert annotations, which makes it difficult to reproduce results and enhance existing baselines. In this work, we introduce a new publicly available benchmark featuring real-world consumer medical questions with long-form answer evaluations annotated by medical doctors. We performed pairwise comparisons of responses from various open and closed-source medical and general-purpose LLMs based on criteria such as correctness, helpfulness, harmfulness, and bias. Additionally, we performed a comprehensive LLM-as-a-judge analysis to study the alignment between human judgments and LLMs. Our preliminary results highlight the strong potential of open LLMs in medical QA compared to leading closed models. Code & Data: https://github.com/lavita-ai/medical-eval-sphere
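The pairwise evaluation protocol described above can be sketched in a few lines: tally per-model win rates from (model_a, model_b, winner) records, and measure human-LLM judge agreement as the fraction of pairs with the same verdict. The record format and model names below are illustrative, not the benchmark's actual schema:

```python
from collections import Counter

def win_rates(judgments):
    """Per-model win rates from pairwise comparison records.

    Each record is (model_a, model_b, winner), where winner is
    model_a, model_b, or "tie"."""
    wins, totals = Counter(), Counter()
    for a, b, winner in judgments:
        totals[a] += 1
        totals[b] += 1
        if winner != "tie":
            wins[winner] += 1
    return {m: wins[m] / totals[m] for m in totals}

def judge_agreement(human, llm):
    """Fraction of pairs where the LLM judge picked the same
    winner as the human medical expert."""
    assert len(human) == len(llm)
    return sum(h == l for h, l in zip(human, llm)) / len(human)
```

The same tallies can then be broken down per criterion (correctness, helpfulness, harmfulness, bias) by keeping one record list per criterion.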
Yi Zhang, Yidong Zhao, Hui Xue et al.
Image registration is essential for medical image applications where alignment of voxels across multiple images is needed for qualitative or quantitative analysis. With recent advancements in deep neural networks and parallel computing, deep learning-based medical image registration methods have become competitive, offering flexible modelling and fast inference capabilities. However, compared to traditional optimization-based registration methods, the speed advantage may come at the cost of registration performance at inference time. Moreover, deep neural networks typically demand large training datasets, while optimization-based methods are training-free. To improve registration accuracy and data efficiency, we propose a novel image registration method, termed the Recurrent Inference Image Registration (RIIR) network. RIIR is formulated as a meta-learning solver that tackles the registration problem iteratively. RIIR addresses the accuracy and data-efficiency issues by learning the update rule of optimization, with implicit regularization combined with explicit gradient input. We evaluated RIIR extensively on brain MRI and quantitative cardiac MRI datasets, in terms of both registration accuracy and training data efficiency. Our experiments showed that RIIR outperformed a range of deep learning-based methods, even with only $5\%$ of the training data, demonstrating high data efficiency. Key findings from our ablation studies highlighted the important added value of the hidden states introduced in the recurrent inference framework for meta-learning. Our proposed RIIR offers a highly data-efficient framework for deep learning-based medical image registration.
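The recurrent-inference idea, a learned update rule that maps the explicit similarity gradient plus a hidden state to a parameter increment, can be sketched generically. In this minimal sketch a hand-coded momentum rule stands in for RIIR's learned GRU-style network, and the "registration" is a one-parameter toy problem rather than a deformation field:

```python
def recurrent_inference(grad_fn, phi0, step_net, h0, n_steps=10):
    """Generic recurrent-inference loop in the spirit of RIIR:
    `step_net` consumes the explicit gradient plus a hidden state
    and emits an increment to the transformation parameters phi.
    Here step_net is any callable (grad, h) -> (dphi, h); in the
    paper a learned recurrent network takes its place."""
    phi, h = phi0, h0
    for _ in range(n_steps):
        g = grad_fn(phi)          # explicit gradient input
        dphi, h = step_net(g, h)  # learned-style update with state
        phi = phi + dphi
    return phi

# Toy stand-in: recover a 1-D translation of 3.0 by minimizing
# 0.5 * (phi - 3)^2 with a momentum-style update rule.
grad = lambda phi: phi - 3.0
momentum = lambda g, h: ((-0.5 * g + 0.4 * h),) * 2
phi_star = recurrent_inference(grad, 0.0, momentum, 0.0, n_steps=50)
```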
Tianwei Zhang, Dong Wei, Mengmeng Zhu et al.
Self-supervised learning has emerged as a powerful tool for pretraining deep networks on unlabeled data, prior to transfer learning of target tasks with limited annotation. The relevance between the pretraining pretext and target tasks is crucial to the success of transfer learning. Various pretext tasks have been proposed to utilize properties of medical image data (e.g., three dimensionality), which are more relevant to medical image analysis than generic ones for natural images. However, previous work rarely paid attention to data with anatomy-oriented imaging planes, e.g., standard cardiac magnetic resonance imaging views. As these imaging planes are defined according to the anatomy of the imaged organ, pretext tasks effectively exploiting this information can pretrain the networks to gain knowledge on the organ of interest. In this work, we propose two complementary pretext tasks for this group of medical image data based on the spatial relationship of the imaging planes. The first is to learn the relative orientation between the imaging planes, and is implemented as regressing their intersecting lines. The second exploits parallel imaging planes to regress their relative slice locations within a stack. Both pretext tasks are conceptually straightforward and easy to implement, and can be combined in multitask learning for better representation learning. Thorough experiments on two anatomical structures (heart and knee) and representative target tasks (semantic segmentation and classification) demonstrate that the proposed pretext tasks are effective in pretraining deep networks for remarkably boosted performance on the target tasks, and superior to other recent approaches.
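The regression target of the first pretext task, the line where two imaging planes intersect, follows directly from the plane equations. The sketch below shows one standard way to compute it; the paper's exact parameterization of the line may differ:

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of the planes n1.x = d1 and n2.x = d2,
    returned as (point_on_line, unit_direction). Illustrates the
    kind of target regressed by the plane-orientation pretext
    task."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)  # line direction lies in both planes
    if np.allclose(direction, 0.0):
        raise ValueError("planes are parallel")
    # Add direction.x = 0 as a third equation to pin down one point.
    A = np.stack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return point, direction / np.linalg.norm(direction)
```

For example, the planes z = 0 and y = 0 intersect along the x-axis through the origin.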
Yinchi Zhou, Tianqi Chen, Jun Hou et al.
Image-to-image translation is a vital component in medical imaging processing, with many uses in a wide range of imaging modalities and clinical scenarios. Previous methods include Generative Adversarial Networks (GANs) and Diffusion Models (DMs), which offer realism but suffer from instability and lack uncertainty estimation. Even though both GAN and DM methods have individually exhibited their capability in medical image translation tasks, the potential of combining a GAN and DM to further improve translation performance and to enable uncertainty estimation remains largely unexplored. In this work, we address these challenges by proposing a Cascade Multi-path Shortcut Diffusion Model (CMDM) for high-quality medical image translation and uncertainty estimation. To reduce the required number of iterations and ensure robust performance, our method first obtains a conditional GAN-generated prior image that will be used for the efficient reverse translation with a DM in the subsequent step. Additionally, a multi-path shortcut diffusion strategy is employed to refine translation results and estimate uncertainty. A cascaded pipeline further enhances translation quality, incorporating residual averaging between cascades. We collected three different medical image datasets with two sub-tasks for each dataset to test the generalizability of our approach. Our experimental results found that CMDM can produce high-quality translations comparable to state-of-the-art methods while providing reasonable uncertainty estimations that correlate well with the translation error.
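Two ingredients of the method described above can be sketched directly: aggregating multiple shortcut paths into a mean translation plus a per-voxel uncertainty map, and residual averaging between cascades. This is only our reading of the abstract, not the released implementation:

```python
import numpy as np

def multipath_aggregate(paths):
    """Aggregate multiple diffusion shortcut paths: the mean serves
    as the translated image, the per-voxel standard deviation as an
    uncertainty map."""
    stack = np.stack(paths)
    return stack.mean(axis=0), stack.std(axis=0)

def cascade_refine(prior, translate_fn, n_cascades=3):
    """Cascaded refinement with residual averaging between cascades:
    each stage's output is averaged with the previous estimate.
    `translate_fn` stands in for one GAN-prior + diffusion stage."""
    x = prior
    for _ in range(n_cascades):
        x = 0.5 * (x + translate_fn(x))
    return x
```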
Zihao Zhao, Yuxiao Liu, Han Wu et al.
Contrastive Language-Image Pre-training (CLIP), a simple yet effective pre-training paradigm, successfully introduces text supervision to vision models. It has shown promising results across various tasks due to its generalizability and interpretability. The use of CLIP has recently gained increasing interest in the medical imaging domain, serving as a pre-training paradigm for image-text alignment, or a critical component in diverse clinical tasks. With the aim of facilitating a deeper understanding of this promising direction, this survey offers an in-depth exploration of CLIP within the domain of medical imaging, regarding both refined CLIP pre-training and CLIP-driven applications. In this paper, we (1) first start with a brief introduction to the fundamentals of CLIP methodology; (2) then investigate the adaptation of CLIP pre-training in the medical imaging domain, focusing on how to optimize CLIP given the characteristics of medical images and reports; (3) further explore practical utilization of CLIP pre-trained models in various tasks, including classification, dense prediction, and cross-modal tasks; and (4) finally discuss existing limitations of CLIP in the context of medical imaging, and propose forward-looking directions to address the demands of the medical imaging domain. Studies featuring both technical and practical value are investigated. We expect this survey will provide researchers with a holistic understanding of the CLIP paradigm and its potential implications. The project page of this survey can be found at https://github.com/zhaozh10/Awesome-CLIP-in-Medical-Imaging.
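At the core of CLIP is a symmetric contrastive (InfoNCE) loss over an image-text similarity matrix, with matched pairs on the diagonal. A minimal numpy sketch, omitting the encoders and the training recipe:

```python
import numpy as np

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched image-text pairs sit on the
    diagonal of the cosine-similarity matrix and are pushed above
    all mismatched pairs, in both directions."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature   # (N, N) scaled similarities
    labels = np.arange(len(logits))      # i-th image matches i-th text

    def xent(lg):
        # cross-entropy against the diagonal, numerically stable
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned embeddings the loss is near zero; with mismatched pairs it grows large, which is what drives image-text alignment during pre-training.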
Kathrin Krieger, Jan Egger, Jens Kleesiek et al.
3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine. While the fast development of AI enhances imaging and analysis, commonly used visualization methods lag far behind. Recent research used extended reality (XR) for perceiving 3D images with visual depth perception and touch, but relied on restrictive haptic devices. While unrestricted touch benefits volumetric data examination, implementing natural haptic interaction with XR is challenging. The research question is whether a multisensory XR application with intuitive haptic interaction adds value and should be pursued. In a study, 24 experts in biomedical images in research and medicine explored 3D medical shapes with 3 applications: a multisensory virtual reality (VR) prototype using haptic gloves, a simple VR prototype using controllers, and a standard PC application. Results of standardized questionnaires showed no significant differences between all application types regarding usability, and no significant difference between the two VR applications regarding presence. Participants agreed with statements that VR visualizations provide better depth information, that using the hands instead of controllers simplifies data exploration, that the multisensory VR prototype allows intuitive data exploration, and that it is beneficial over traditional data examination methods. While most participants named manual interaction as the best aspect, they also saw it as the one most in need of improvement. We conclude that a multisensory XR application with improved manual interaction adds value for volumetric biomedical data examination. We will proceed with our open-source research project ISH3DE (Intuitive Stereoptic Haptic 3D Data Exploration) to serve medical education, therapeutic decisions, surgery preparation, and research data analysis.
Angona Biswas, MD Abdullah Al Nasim, Md Shahin Ali et al.
The development of medical science greatly depends on the increased utilization of machine learning algorithms. By incorporating machine learning, the medical imaging field can significantly improve in terms of the speed and accuracy of the diagnostic process. Computed tomography (CT), magnetic resonance imaging (MRI), X-ray imaging, ultrasound imaging, and positron emission tomography (PET) are the most commonly used types of imaging data in the diagnosis process, and machine learning can aid in detecting diseases at an early stage. However, training machine learning models with limited annotated medical image data poses a challenge. The majority of medical image datasets have limited data, which can impede the pattern-learning process of machine-learning algorithms. Additionally, the lack of labeled data is another critical issue for machine learning. In this context, active learning techniques can be employed to address the challenge of limited annotated medical image data. Active learning involves iteratively selecting the most informative samples from a large pool of unlabeled data for annotation by experts. By actively selecting the most relevant and informative samples, active learning reduces the reliance on large amounts of labeled data and maximizes the model's learning capacity with minimal human labeling effort. By incorporating active learning into the training process, medical imaging machine learning models can make more efficient use of the available labeled data, improving their accuracy and performance. This approach allows medical professionals to focus their efforts on annotating the most critical cases, while the machine learning model actively learns from these annotated samples to improve its diagnostic capabilities.
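One common instance of "selecting the most informative samples" is least-confident uncertainty sampling: rank unlabeled samples by the model's top class probability and send the k least confident to the expert. A minimal sketch; the commented loop uses hypothetical names such as `expert_annotate` to show where the pieces fit:

```python
import numpy as np

def least_confident(probs, k):
    """Pick the k most uncertain samples (lowest maximum class
    probability) from a (n_samples, n_classes) probability array —
    the classic 'least confident' acquisition rule, one of several
    informativeness criteria used in active learning."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# One round of the active-learning loop (illustrative names only;
# `model` is any classifier exposing predict_proba):
#   idx = least_confident(model.predict_proba(unlabeled_pool), k=16)
#   new_labels = expert_annotate(unlabeled_pool[idx])  # hypothetical
#   model.fit(np.vstack([X_train, unlabeled_pool[idx]]),
#             np.concatenate([y_train, new_labels]))
```

Entropy-based or query-by-committee criteria drop into the same loop by replacing the acquisition function.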
Sergio Tascon-Morales, Pablo Márquez-Neila, Raphael Sznitman
Visual Question Answering (VQA) models aim to answer natural language questions about given images. Due to its ability to ask questions that differ from those used when training the model, medical VQA has received substantial attention in recent years. However, existing medical VQA models typically focus on answering questions that refer to an entire image rather than where the relevant content may be located in the image. Consequently, VQA models are limited in their interpretability power and the possibility to probe the model about specific image regions. This paper proposes a novel approach for medical VQA that addresses this limitation by developing a model that can answer questions about image regions while considering the context necessary to answer the questions. Our experimental results demonstrate the effectiveness of our proposed model, outperforming existing methods on three datasets. Our code and data are available at https://github.com/sergiotasconmorales/locvqa.
Hubert Baniecki, Bartlomiej Sobieski, Patryk Szatkowski et al.
Time-to-event prediction, e.g. cancer survival analysis or hospital length of stay, is a highly prominent machine learning task in medical and healthcare applications. However, only a few interpretable machine learning methods comply with its challenges. To facilitate a comprehensive explanatory analysis of survival models, we formally introduce time-dependent feature effects and global feature importance explanations. We show how post-hoc interpretation methods allow for finding biases in AI systems predicting length of stay using a novel multi-modal dataset created from 1235 X-ray images with textual radiology reports annotated by human experts. Moreover, we evaluate cancer survival models beyond predictive performance to include the importance of multi-omics feature groups based on a large-scale benchmark comprising 11 datasets from The Cancer Genome Atlas (TCGA). Model developers can use the proposed methods to debug and improve machine learning algorithms, while physicians can discover disease biomarkers and assess their significance. We hope the contributed open data and code resources facilitate future work in the emerging research direction of explainable survival analysis.
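Global feature importance of the kind described above can be estimated by permutation: shuffle one feature (or feature group) and measure the drop in a model score, e.g., a survival model's concordance index on held-out data. A generic sketch of the idea, not the paper's exact procedure:

```python
import numpy as np

def permutation_importance(score_fn, X, n_repeats=5, rng=None):
    """Permutation feature importance: how much does the model
    score drop when one feature column is shuffled? `score_fn`
    maps a feature matrix to a scalar score (higher is better)."""
    rng = np.random.default_rng(rng)
    base = score_fn(X)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])   # destroy feature j only
            drops[j] += (base - score_fn(Xp)) / n_repeats
    return drops
```

Grouped (e.g., multi-omics) importance follows by shuffling a block of columns together instead of a single column.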
Fernando Tadeu Germinatti
Saeedeh Babaii, Alireza Monajemi
The quality of care crisis (QCC) is one of the most crucial crises confronting modern medicine, as the existential and psychological needs of patients have not been addressed and satisfied. Several attempts have been made to find solutions for the QCC, e.g., Marcum's recommendation to make physicians virtuous. Most existing formulations of the QCC have regarded technology as one of the causes of this crisis and not part of its solution. Although we agree to some extent that technology has played a role in creating the crisis of care, in this article we try to present the crisis of care in such a way that medical technology becomes an important part of its solution. For this purpose, we analyzed the QCC from the philosophical perspectives of Husserl and Borgmann and put forward a novel proposal for taking account of technology in the QCC. In the first step, we argue that the role of technology in causing the crisis of care is due to the gap between the techno-scientific world and the life-world of the patients. This formulation shows that the crisis-causing role of technology is not inherent. In the second step, we try to find a way to integrate technology into the solution to the crisis. In the proposed reframing, designing and applying technologies based on focal things and practices makes it possible to develop technologies that are caring and able to mitigate the QCC.
María Claudia Pantoja
Abstract: This article addresses the ways in which photography formed part of the argumentative and expository strategies of medical professionals, and its role in the production of experimental knowledge between 1890 and 1915. To examine these questions, representative medical journals of the period were surveyed, always taking into account advances in image reproduction techniques. The analysis shows how photographs formed part of the explanations presented to the scientific community to argue for the efficacy of experimental treatments and novel surgical procedures, in a context of the professionalization of medicine and the need to legitimize a "laboratory culture".
Zihan Li, Yunxiang Li, Qingde Li et al.
Deep learning has been widely used in medical image segmentation and other applications. However, the performance of existing medical image segmentation models has been limited by the challenge of obtaining sufficient high-quality labeled data due to the prohibitive cost of data annotation. To alleviate this limitation, we propose a new text-augmented medical image segmentation model, LViT (Language meets Vision Transformer). In our LViT model, medical text annotation is incorporated to compensate for the quality deficiency in image data. In addition, the text information can guide the generation of higher-quality pseudo labels in semi-supervised learning. We also propose an Exponential Pseudo label Iteration mechanism (EPI) to help the Pixel-Level Attention Module (PLAM) preserve local image features in the semi-supervised LViT setting. In our model, an LV (Language-Vision) loss is designed to supervise the training of unlabeled images using text information directly. For evaluation, we construct three multimodal medical segmentation datasets (image + text) containing X-ray and CT images. Experimental results show that our proposed LViT has superior segmentation performance in both fully-supervised and semi-supervised settings. The code and datasets are available at https://github.com/HUANGLIZI/LViT.
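The name "Exponential Pseudo label Iteration" suggests an exponential moving average over successive pseudo-label predictions; the sketch below reflects that reading, with the smoothing factor `beta` assumed rather than taken from the paper (see the released code for the exact scheme):

```python
def update_pseudo_labels(prev, current_pred, beta=0.9):
    """Exponentially smoothed pseudo labels: the running pseudo
    label blends the previous estimate with the model's latest
    prediction, damping noisy individual training rounds.
    `beta` is an assumed smoothing factor."""
    if prev is None:           # first round: take prediction as-is
        return current_pred
    return beta * prev + (1.0 - beta) * current_pred
```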
Stine Hansen, Srishti Gautam, Robert Jenssen et al.
Recent work has shown that label-efficient few-shot learning through self-supervision can achieve promising medical image segmentation results. However, few-shot segmentation models typically rely on prototype representations of the semantic classes, resulting in a loss of local information that can degrade performance. This is particularly problematic for the typically large and highly heterogeneous background class in medical image segmentation problems. Previous works have attempted to address this issue by learning additional prototypes for each class, but since the prototypes are based on a limited number of slices, we argue that this ad-hoc solution is insufficient to capture the background properties. Motivated by this, and the observation that the foreground class (e.g., one organ) is relatively homogeneous, we propose a novel anomaly detection-inspired approach to few-shot medical image segmentation in which we refrain from modeling the background explicitly. Instead, we rely solely on a single foreground prototype to compute anomaly scores for all query pixels. The segmentation is then performed by thresholding these anomaly scores using a learned threshold. Assisted by a novel self-supervision task that exploits the 3D structure of medical images through supervoxels, our proposed anomaly detection-inspired few-shot medical image segmentation model outperforms previous state-of-the-art approaches on two representative MRI datasets for the tasks of abdominal organ segmentation and cardiac segmentation.
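The core inference step described above — one foreground prototype from masked support features, cosine-similarity anomaly scores for every query pixel, and a threshold — can be sketched as follows. The paper additionally learns the threshold and adds supervoxel-based self-supervision, which this sketch omits:

```python
import numpy as np

def segment_query(query_feats, support_feats, support_mask, threshold):
    """Few-shot segmentation without an explicit background model:
    masked average pooling over support features yields a single
    foreground prototype; query pixels whose (negative) cosine
    similarity to it falls below the threshold are foreground.

    query_feats: (H, W, C), support_feats: (H, W, C),
    support_mask: (H, W) binary foreground mask."""
    fg = support_feats[support_mask.astype(bool)]       # (N, C)
    proto = fg.mean(axis=0)
    proto = proto / np.linalg.norm(proto)
    q = query_feats / np.linalg.norm(query_feats, axis=-1,
                                     keepdims=True)
    anomaly = -q @ proto        # low similarity = high anomaly score
    return anomaly < threshold  # foreground where anomaly is low
```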
Erik Fredenberg, Bjorn Cederstrom, Carolina Ribbing et al.
Conventional energy filters for x-ray imaging are based on absorbing materials which attenuate low-energy photons, sometimes combined with an absorption edge, thus also discriminating against photons of higher energies. These filters are fairly inefficient, in particular for photons of higher energies, and other methods for achieving a narrower bandwidth have been proposed. Such methods include various types of monochromators, based on, for instance, mosaic crystals or refractive multi-prism x-ray lenses (MPLs). Prism-array lenses (PALs) are similar to MPLs, but are shorter, have larger apertures, and offer higher transmission. A PAL consists of a number of small prisms arranged in columns perpendicular to the optical axis. The column height decreases along the optical axis so that the projection of lens material is approximately linear, with a Fresnel phase-plate pattern superimposed on it. The focusing effect is one-dimensional, and the lens is chromatic. Hence, unwanted energies can be blocked by placing a slit in the image plane of a desired energy. We present the first experimental and theoretical results on an energy filter based on a silicon PAL. The study includes an evaluation of the spectral shaping properties of the filter as well as a quantification of the achievable increase in dose efficiency compared to standard methods. Previously, PALs have been investigated with synchrotron radiation, but in this study a medical imaging setup, based on a regular x-ray tube, is considered.
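The chromaticity exploited here follows from the refractive-index decrement of x-ray lens materials scaling roughly as 1/E², so the focal length scales as E²; a slit placed at the image plane of the design energy then passes only a narrow band around it. A one-line sketch of that standard scaling (thin-lens approximation, our assumption rather than the paper's full model):

```python
def focal_length(E, f0, E0):
    """Chromatic focal length of a refractive x-ray lens: since the
    refractive-index decrement delta ~ 1/E^2, the focal length
    scales as E^2. f0 is the focal length at design energy E0;
    energies in consistent units (e.g., keV), lengths likewise."""
    return f0 * (E / E0) ** 2
```

Photons at twice the design energy thus focus four times farther away and arrive defocused at the slit, which is what blocks them.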
Xuxin Chen, Ximin Wang, Ke Zhang et al.
Deep learning has received extensive research interest in developing new medical image processing algorithms, and deep learning-based models have been remarkably successful in a variety of medical imaging tasks to support disease detection and diagnosis. Despite this success, further improvement of deep learning models in medical image analysis is largely bottlenecked by the lack of large-sized and well-annotated datasets. In the past five years, many studies have focused on addressing this challenge. In this paper, we review and summarize these recent studies to provide a comprehensive overview of applying deep learning methods in various medical image analysis tasks. In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical image analysis, summarized according to different application scenarios, including classification, segmentation, detection, and image registration. We also discuss the major technical challenges and suggest possible solutions for future research efforts.