A Speech Text on the Meaning of the White Coat
Ilgaz Akdoğan
The white coat ceremony can be defined as a ritual that symbolizes a layperson's entry into medicine and transformation into a member of the health profession. The ceremony first emerged in 1989, in a meeting format, at the University of Chicago Pritzker School of Medicine. There, students receiving white coats were told that, because patients were present in the school, they were expected to look like professionals, wear white coats, and behave in a manner befitting their role; this laid the foundation of the white coat ceremony. The first organized white coat ceremony took place in 1993 at the Columbia University College of Physicians and Surgeons. White coat ceremonies thus began to be held as a way of emphasizing humanism in medicine at the very start of medical education, and they gradually became traditional. The aim of this study is to present the text of the speech on “The Meaning of the White Coat” that I delivered at the White Coat Ceremony held in 2024 at the Aydın Adnan Menderes University Faculty of Medicine. The speech brings together the values of our country with the humane, ethical, social, and psychological dimensions of the medical profession. I believe that this text, written with the professional values of medicine in mind, such as empathy, altruism, helpfulness, goodness, self-sacrifice, compassion, and mercy, could be read at white coat ceremonies held in the future.
Transformers in Medicine: Improving Vision-Language Alignment for Medical Image Captioning
Yogesh Thakku Suresh, Vishwajeet Shivaji Hogale, Luca-Alexandru Zamfira
et al.
We present a transformer-based multimodal framework for generating clinically relevant captions for MRI scans. Our system combines a DEiT-Small vision transformer as an image encoder, MediCareBERT for caption embedding, and a custom LSTM-based decoder. The architecture is designed to semantically align image and textual embeddings, using hybrid cosine-MSE loss and contrastive inference via vector similarity. We benchmark our method on the MultiCaRe dataset, comparing performance on filtered brain-only MRIs versus general MRI images against state-of-the-art medical image captioning methods including BLIP, R2GenGPT, and recent transformer-based approaches. Results show that focusing on domain-specific data improves caption accuracy and semantic alignment. Our work proposes a scalable, interpretable solution for automated medical image reporting.
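The abstract names a hybrid cosine-MSE loss for aligning image and caption embeddings but does not spell it out. A minimal pure-Python sketch of one plausible form (the equal `alpha` weighting and the function name are assumptions, not taken from the paper):

```python
import math

def hybrid_cosine_mse_loss(pred, target, alpha=0.5):
    """Hybrid loss: alpha * (1 - cosine similarity) + (1 - alpha) * MSE.

    `pred` and `target` are equal-length embedding vectors (plain lists).
    The alpha weighting is illustrative; the paper's exact formulation
    may differ.
    """
    assert len(pred) == len(target)
    dot = sum(p * t for p, t in zip(pred, target))
    norm_p = math.sqrt(sum(p * p for p in pred))
    norm_t = math.sqrt(sum(t * t for t in target))
    cosine = dot / (norm_p * norm_t) if norm_p and norm_t else 0.0
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return alpha * (1.0 - cosine) + (1.0 - alpha) * mse

# Identical embeddings give zero loss; orthogonal ones are penalized.
print(hybrid_cosine_mse_loss([1.0, 0.0], [1.0, 0.0]))  # 0.0
print(hybrid_cosine_mse_loss([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Combining an angular term with a magnitude term in this way penalizes both direction and scale mismatches between the two embedding spaces.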
Rethinking Boundary Detection in Deep Learning-Based Medical Image Segmentation
Yi Lin, Dong Zhang, Xiao Fang
et al.
Medical image segmentation is a pivotal task within the realms of medical image analysis and computer vision. While current methods have shown promise in accurately segmenting major regions of interest, the precise segmentation of boundary areas remains challenging. In this study, we propose a novel network architecture named CTO, which combines Convolutional Neural Networks (CNNs), Vision Transformer (ViT) models, and explicit edge detection operators to tackle this challenge. CTO surpasses existing methods in terms of segmentation accuracy and strikes a better balance between accuracy and efficiency, without the need for additional data inputs or label injections. Specifically, CTO adheres to the canonical encoder-decoder network paradigm, with a dual-stream encoder network comprising a mainstream CNN stream for capturing local features and an auxiliary StitchViT stream for integrating long-range dependencies. Furthermore, to enhance the model's ability to learn boundary areas, we introduce a boundary-guided decoder network that employs binary boundary masks generated by dedicated edge detection operators to provide explicit guidance during the decoding process. We validate the performance of CTO through extensive experiments conducted on six challenging medical image segmentation datasets, namely ISIC 2016, PH2, ISIC 2018, CoNIC, LiTS17, and BTCV. Our experimental results unequivocally demonstrate that CTO achieves state-of-the-art accuracy on these datasets while maintaining competitive model complexity. The code has been released at: https://github.com/xiaofang007/CTO.
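The boundary-guided decoder relies on binary boundary masks produced by dedicated edge detection operators. A self-contained sketch of how such a mask can be derived with a Sobel operator (the threshold and the exact operator CTO uses are assumptions):

```python
def sobel_boundary_mask(img, thresh=1.0):
    """Binary boundary mask from Sobel gradient magnitude.

    `img` is a 2D list of floats (e.g. a soft segmentation map).
    Interior pixels only; the operator and threshold used by CTO may
    differ -- this just illustrates explicit edge guidance.
    """
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # Sobel x kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # Sobel y kernel
    mask = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            mask[i][j] = 1 if (gx * gx + gy * gy) ** 0.5 >= thresh else 0
    return mask

# A vertical step edge produces boundary pixels along the step.
step = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
m = sobel_boundary_mask(step)
print(m)
```

During decoding, such a mask can be used to weight or supervise the predictions at boundary pixels explicitly rather than hoping the network learns them implicitly.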
Limits of trust in medical AI
Joshua Hatherley
Artificial intelligence (AI) is expected to revolutionize the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI's progress in medicine, however, has led to concerns regarding the potential effects of this technology upon relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied upon, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely upon AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.
Medical Report Generation Is A Multi-label Classification Problem
Yijian Fan, Zhenbang Yang, Rui Liu
et al.
Medical report generation is a critical task in healthcare that involves the automatic creation of detailed and accurate descriptions from medical images. Traditionally, this task has been approached as a sequence generation problem, relying on vision-and-language techniques to generate coherent and contextually relevant reports. However, in this paper, we propose a novel perspective: rethinking medical report generation as a multi-label classification problem. By framing the task this way, we leverage the radiology nodes from the commonly used knowledge graph, which can be better captured through classification techniques. To verify our argument, we introduce a novel report generation framework based on BLIP integrated with classified key nodes, which allows for effective report generation with accurate classification of multiple key aspects within the medical images. This approach not only simplifies the report generation process but also significantly enhances performance metrics. Our extensive experiments demonstrate that leveraging key nodes can achieve state-of-the-art (SOTA) performance, surpassing existing approaches across two benchmark datasets. The results underscore the potential of re-envisioning traditional tasks with innovative methodologies, paving the way for more efficient and accurate medical report generation.
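The multi-label framing means a report can be assembled from the knowledge-graph nodes the classifier fires on. A toy illustration (node names, templates, and threshold are invented for the example; the paper's BLIP-based generator is far richer than this templating):

```python
def nodes_to_report(probs, node_names, sentences, thresh=0.5):
    """Turn multi-label node probabilities into a templated report.

    `probs[i]` is the classifier's score for knowledge-graph node
    `node_names[i]`; `sentences` maps a node name to a report sentence.
    All names and templates here are made up for illustration.
    """
    positive = [n for p, n in zip(probs, node_names) if p >= thresh]
    return " ".join(sentences[n] for n in positive) or "No findings."

names = ["cardiomegaly", "effusion"]
templates = {
    "cardiomegaly": "The cardiac silhouette is enlarged.",
    "effusion": "A pleural effusion is present.",
}
print(nodes_to_report([0.9, 0.2], names, templates))
# The cardiac silhouette is enlarged.
```

The point of the reframing is exactly this decoupling: classifying key aspects is a better-posed problem than free-form sequence generation, and the report can be conditioned on the classified nodes.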
MedAide: Information Fusion and Anatomy of Medical Intents via LLM-based Agent Collaboration
Dingkang Yang, Jinjie Wei, Mingcheng Li
et al.
In healthcare intelligence, the ability to fuse heterogeneous, multi-intent information from diverse clinical sources is fundamental to building reliable decision-making systems. Large Language Model (LLM)-driven information interaction systems are currently showing promise in the healthcare domain. Nevertheless, they often suffer from information redundancy and coupling when dealing with complex medical intents, leading to severe hallucinations and performance bottlenecks. To this end, we propose MedAide, an LLM-based medical multi-agent collaboration framework designed to enable intent-aware information fusion and coordinated reasoning across specialized healthcare domains. Specifically, we introduce a regularization-guided module that combines syntactic constraints with retrieval augmented generation to decompose complex queries into structured representations, facilitating fine-grained clinical information fusion and intent resolution. Additionally, a dynamic intent prototype matching module is proposed to utilize dynamic prototype representation with a semantic similarity matching mechanism to achieve adaptive recognition and updating of the agent's intent in multi-round healthcare dialogues. Ultimately, we design a rotation agent collaboration mechanism that introduces dynamic role rotation and decision-level information fusion across specialized medical agents. Extensive experiments are conducted on four medical benchmarks with composite intents. Experimental results from automated metrics and expert doctor evaluations show that MedAide outperforms current LLMs and improves their medical proficiency and strategic reasoning.
Active learning for medical image segmentation with stochastic batches
Mélanie Gaillochet, Christian Desrosiers, Hervé Lombaert
The performance of learning-based algorithms improves with the amount of labelled data used for training. Yet, manually annotating data is particularly difficult for medical image segmentation tasks because of the limited expert availability and intensive manual effort required. To reduce manual labelling, active learning (AL) targets the most informative samples from the unlabelled set to annotate and add to the labelled training set. On the one hand, most active learning works have focused on the classification or limited segmentation of natural images, despite active learning being highly desirable in the difficult task of medical image segmentation. On the other hand, uncertainty-based AL approaches notoriously offer sub-optimal batch-query strategies, while diversity-based methods tend to be computationally expensive. Over and above methodological hurdles, random sampling has proven an extremely difficult baseline to outperform when varying learning and sampling conditions. This work aims to take advantage of the diversity and speed offered by random sampling to improve the selection of uncertainty-based AL methods for segmenting medical images. More specifically, we propose to compute uncertainty at the level of batches instead of samples through an original use of stochastic batches (SB) during sampling in AL. Stochastic batch querying is a simple and effective add-on that can be used on top of any uncertainty-based metric. Extensive experiments on two medical image segmentation datasets show that our strategy consistently improves conventional uncertainty-based sampling methods. Our method can hence act as a strong baseline for medical image segmentation. The code is available on: https://github.com/Minimel/StochasticBatchAL.git.
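The stochastic-batch idea, computing uncertainty over random candidate batches rather than individual samples, can be sketched in a few lines (the number of candidate batches and the mean aggregation are illustrative choices):

```python
import random

def stochastic_batch_query(uncertainties, batch_size, n_batches, seed=0):
    """Pick the random candidate batch with the highest mean uncertainty.

    `uncertainties[i]` is any per-sample uncertainty score for the i-th
    unlabelled sample. Instead of taking the top-k samples outright, we
    draw `n_batches` random candidate batches and return the one whose
    mean uncertainty is largest: the stochastic-batch (SB) add-on that
    works on top of any uncertainty metric.
    """
    rng = random.Random(seed)
    indices = list(range(len(uncertainties)))
    best_batch, best_score = None, float("-inf")
    for _ in range(n_batches):
        batch = rng.sample(indices, batch_size)
        score = sum(uncertainties[i] for i in batch) / batch_size
        if score > best_score:
            best_batch, best_score = batch, score
    return sorted(best_batch)

scores = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7]
chosen = stochastic_batch_query(scores, batch_size=2, n_batches=20)
print(chosen)
```

Because each candidate batch is drawn uniformly at random, the selection inherits the diversity of random sampling while still preferring uncertain regions, which is the trade-off the paper exploits.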
Attention Mechanisms in Medical Image Segmentation: A Survey
Yutong Xie, Bing Yang, Qingbiao Guan
et al.
Medical image segmentation plays an important role in computer-aided diagnosis. Attention mechanisms, which distinguish important parts from irrelevant parts, have been widely used in medical image segmentation tasks. This paper systematically reviews the basic principles of attention mechanisms and their applications in medical image segmentation. First, we review the basic concepts and formulation of attention mechanisms. Second, we survey over 300 articles related to medical image segmentation and divide them into two groups based on their attention mechanisms: non-Transformer attention and Transformer attention. In each group, we analyze the attention mechanisms in depth from three aspects based on the current literature: the principle of the mechanism (what to use), implementation methods (how to use), and application tasks (where to use). We also thoroughly analyze the advantages and limitations of their application to different tasks. Finally, we summarize the current state of research and its shortcomings, and discuss potential future challenges, including task specificity, robustness, standard evaluation, etc. We hope that this review can showcase the overall research context of traditional and Transformer attention methods, provide a clear reference for subsequent research, and inspire more advanced attention research, not only in medical image segmentation but also in other image analysis scenarios.
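For readers new to the Transformer-attention group, the core operation the surveyed papers build on is scaled dot-product attention. A single-query pure-Python version:

```python
import math

def scaled_dot_product_attention(q, keys, values):
    """Single-query scaled dot-product attention, the core of Transformer
    attention: weights = softmax(q . k / sqrt(d)), output = weighted sum
    of values. Pure-Python illustration only.
    """
    d = len(q)
    logits = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in keys]
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    out = [sum(w * v[j] for w, v in zip(weights, values))
           for j in range(len(values[0]))]
    return weights, out

# The query matches the first key, so nearly all weight goes to value 0.
w, out = scaled_dot_product_attention(
    q=[10.0, 0.0], keys=[[10.0, 0.0], [0.0, 10.0]],
    values=[[1.0, 0.0], [0.0, 1.0]])
print(w)
```

Non-Transformer attention (channel or spatial gating) replaces the softmax-over-keys with learned per-location or per-channel multipliers, but the "reweight by relevance" principle is the same.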
BayeSeg: Bayesian Modeling for Medical Image Segmentation with Interpretable Generalizability
Shangqi Gao, Hangqi Zhou, Yibo Gao
et al.
Due to the cross-domain distribution shift arising from diverse medical imaging systems, many deep learning segmentation methods fail to perform well on unseen data, which limits their real-world applicability. Recent works have shown the benefits of extracting domain-invariant representations for domain generalization. However, the interpretability of domain-invariant features remains a great challenge. To address this problem, we propose an interpretable Bayesian framework (BayeSeg) through Bayesian modeling of image and label statistics to enhance model generalizability for medical image segmentation. Specifically, we first decompose an image into a spatially correlated variable and a spatially variant variable, assigning hierarchical Bayesian priors to explicitly force them to model the domain-stable shape and domain-specific appearance information, respectively. Then, we model the segmentation as a locally smooth variable related only to the shape. Finally, we develop a variational Bayesian framework to infer the posterior distributions of these explainable variables. The framework is implemented with neural networks, and is thus referred to as deep Bayesian segmentation. Quantitative and qualitative experimental results on prostate segmentation and cardiac segmentation tasks have shown the effectiveness of our proposed method. Moreover, we investigated the interpretability of BayeSeg by explaining the posteriors and analyzed certain factors that affect the generalization ability through further ablation studies. Our code will be released via https://zmiclab.github.io/projects.html, once the manuscript is accepted for publication.
Multi-Point Detection of the Powerful Gamma Ray Burst GRB221009A Propagation through the Heliosphere on October 9, 2022
Andrii Voshchepynets, Oleksiy Agapitov, Lynn Wilson
et al.
We present the results of processing the effects of the powerful Gamma Ray Burst GRB221009A captured by the charged particle detectors (electrostatic analyzers and solid-state detectors) onboard spacecraft at different points in the heliosphere on October 9, 2022. To follow the propagation of GRB221009A through the heliosphere, we used electron and proton flux measurements from the solar missions Solar Orbiter and STEREO-A; the Earth magnetosphere and solar wind missions THEMIS and Wind; the meteorological satellites POES15, POES19, and MetOp3; and MAVEN, a NASA mission orbiting Mars. GRB221009A had a structure of four bursts: the less intense Pulse 1, the triggering impulse, was detected by gamma-ray observatories at 13:16:59 UT (near the Earth); the most intense Pulses 2 and 3 were detected on board all the spacecraft from the list; and Pulse 4 was detected more than 500 s after Pulse 1. Owing to their different scientific objectives, the spacecraft whose data were used in this study were separated by more than 1 AU (Solar Orbiter and MAVEN). This enabled tracking GRB221009A as it propagated across the heliosphere. STEREO-A was the first to register Pulses 2 and 3 of the GRB, almost 100 seconds before their detection by spacecraft in the vicinity of Earth. MAVEN detected GRB221009A Pulses 2, 3, and 4 at the orbit of Mars about 237 seconds after their detection near Earth. By analyzing the observed time delays, we show that the source of GRB221009A was located at RA 288.5 degrees, Dec 18.5 degrees (J2000), with an error cone of 2 degrees.
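The multi-point localization rests on simple plane-wave geometry: the arrival-time difference between two spacecraft is the projection of their separation onto the source direction, divided by the speed of light. A sketch of that relation (illustrative geometry only, not the authors' full fitting procedure):

```python
import math

C_KM_S = 299792.458       # speed of light, km/s
AU_KM = 1.495978707e8     # astronomical unit, km

def plane_wave_delay(r1_au, r2_au, ra_deg, dec_deg):
    """Arrival-time delay (s) of a plane wavefront at spacecraft 2
    relative to spacecraft 1.

    `r1_au`, `r2_au`: heliocentric positions (x, y, z) in AU;
    `ra_deg`, `dec_deg`: source direction. The wave travels along
    -s_hat, so a spacecraft farther along s_hat sees the burst earlier:
    t2 - t1 = (r1 - r2) . s_hat / c.
    """
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    s_hat = (math.cos(dec) * math.cos(ra),
             math.cos(dec) * math.sin(ra),
             math.sin(dec))  # unit vector toward the source
    proj = sum((a - b) * s for a, b, s in zip(r1_au, r2_au, s_hat))
    return proj * AU_KM / C_KM_S

# Two spacecraft 1 AU apart along the source direction differ by one
# light-AU of travel time, about 499 s.
dt = plane_wave_delay((1.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                      ra_deg=0.0, dec_deg=0.0)
print(round(dt, 1))  # 499.0
```

Inverting this relation over several spacecraft pairs with known positions and measured delays yields the source direction, which is how the RA/Dec estimate above is obtained.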
CapsNet for Medical Image Segmentation
Minh Tran, Viet-Khoa Vo-Ho, Kyle Quinn
et al.
Convolutional Neural Networks (CNNs) have been successful in solving tasks in computer vision, including medical image segmentation, due to their ability to automatically extract features from unstructured data. However, CNNs are sensitive to rotation and affine transformation, and their success relies on huge-scale labeled datasets capturing various input variations. This network paradigm has posed challenges at scale because acquiring annotated data for medical segmentation is expensive and subject to strict privacy regulations. Furthermore, visual representation learning with CNNs has its own flaws: the pooling layer in traditional CNNs tends to discard positional information, and CNNs tend to fail on input images that differ in orientation and size. The capsule network (CapsNet) is a recent architecture that achieves better robustness in representation learning by replacing pooling layers with dynamic routing and convolutional strides, and it has shown promising results on popular tasks such as classification, recognition, segmentation, and natural language processing. Unlike CNNs, which produce scalar outputs, CapsNet returns vector outputs, which aim to preserve part-whole relationships. In this work, we first introduce the limitations of CNNs and the fundamentals of CapsNet. We then review recent developments of CapsNet for the task of medical image segmentation. We finally discuss various effective network architectures for implementing a CapsNet for both 2D image and 3D volumetric medical image segmentation.
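A concrete piece of the CapsNet machinery mentioned above is the squashing nonlinearity that lets a capsule's vector length encode existence probability. A minimal sketch:

```python
def squash(v, eps=1e-9):
    """CapsNet squashing nonlinearity: scales a capsule's output vector
    to length in [0, 1) while preserving its direction, so that vector
    length can represent the probability that the entity exists:
      squash(v) = (|v|^2 / (1 + |v|^2)) * v / |v|
    """
    sq = sum(x * x for x in v)      # squared norm |v|^2
    norm = sq ** 0.5
    scale = sq / (1.0 + sq) / (norm + eps)
    return [scale * x for x in v]

# Long vectors approach unit length; short ones shrink toward zero.
print(squash([3.0, 4.0]))  # direction (3,4)/5, length 25/26 ~ 0.962
```

Dynamic routing then iteratively sends each lower-level capsule's output to the higher-level capsule whose (squashed) prediction it agrees with, which is what replaces pooling.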
FREE LYRICISTS OF BUKOVINA, AS BEARERS OF A COMPETITIVE WORLDVIEW
Igor BIRYUK, Oleh KORYTSKIY, Iryna KUKOVSKA
et al.
The need to address this problem stems from increased interest in the origins of the traditional national culture and spirituality of our people, growing interest in the authentic culture of the Ukrainian ethnic group, and in particular in one of its components: the life of wandering elder-singers. The music of the Ukrainian lyre (kobza) is an organic part of the people's worldview, their thoughts and aspirations, and their diverse and rich spiritual life. An important role in awakening the spirituality of our people was played by lyre players and kobzars, who carried the fiery Ukrainian word to the people and called for the struggle for freedom, for Cossack glory, and for the ancient ancestral Orthodox faith. The article presents an analysis of the formation of lyricism (kobzarism) as a significant part of the cultural heritage of the Ukrainian population of Bukovina. The lyricists are portrayed as witnesses of the life and development of the people in different historical epochs, and their influence on knowledge of history, the education of patriotism, love for the native land, and respect for ancestors is examined. The purpose: based on the analysis of literature sources and available historical documents, to trace the peculiarities of the formation and reproduction of the history of lyricism in Bukovina as part of the historical heritage of the Ukrainian people. Research methods: retrospective, synthetic-analytical, and generalizing methods. The scientific novelty lies in the generalization of information about the representatives of Ukrainian epic singing in Bukovina and the Bukovinian Hutsul region. Conclusions: the biographies of the lyricists of Bukovina, recollections about them, and the features of the Hutsul lyre are given. Lyricism as a unique cultural phenomenon was spread all over Ukraine, including the Hutsul region and Bukovina, from the sixteenth century until the 1930s. In Bukovina, as in the rest of Ukraine, industrial production of lyres never developed; compared with similar instruments from other countries, these lyres were much simpler in design. The lyre in the Bukovinian Hutsul region carried a layer of religiosity, so in addition to the heroic epic the repertoire included chants and psalms. Well-known lyricists in Bukovina were Yuriy Fedkovych (“Bukovynskyi Solovyi”) from the village of Putyla, Vasyl Tonievych from the village of Samakova, Petro Dzurak from the village of Dytynets, Dmytro Hentsar from the village of Ryzha (Pylypkove hamlet), and Vasyl Hrytsko and Ivan Pokhovych (Hnat) from Sadhora (Chernivtsi).
Is it Time to Replace CNNs with Transformers for Medical Images?
Christos Matsoukas, Johan Fredin Haslum, Magnus Söderberg
et al.
Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis. Recently, vision transformers (ViTs) have appeared as a competitive alternative to CNNs, yielding similar levels of performance while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore whether it is time to move to transformer-based models or if we should keep working with CNNs - can we trivially switch to transformers? If so, what are the advantages and drawbacks of switching to ViTs for medical image diagnosis? We consider these questions in a series of experiments on three mainstream medical image datasets. Our findings show that, while CNNs perform better when trained from scratch, off-the-shelf vision transformers using default hyperparameters are on par with CNNs when pretrained on ImageNet, and outperform their CNN counterparts when pretrained using self-supervision.
A Spatial Guided Self-supervised Clustering Network for Medical Image Segmentation
Euijoon Ahn, Dagan Feng, Jinman Kim
The segmentation of medical images is a fundamental step in automated clinical decision support systems. Existing medical image segmentation methods based on supervised deep learning, however, remain problematic because of their reliance on large amounts of labelled training data. Although medical imaging data repositories continue to expand, there has not been a commensurate increase in the amount of annotated data. Hence, we propose a new spatial guided self-supervised clustering network (SGSCN) for medical image segmentation, where we introduce multiple loss functions designed to aid in grouping image pixels that are spatially connected and have similar feature representations. It iteratively learns feature representations and the clustering assignment of each pixel in an end-to-end fashion from a single image. We also propose a context-based consistency loss that better delineates the shape and boundaries of image regions. It enforces all the pixels belonging to a cluster to be spatially close to the cluster centre. We evaluated our method on two public medical image datasets and compared it to existing conventional and self-supervised clustering methods. Experimental results show that our method was the most accurate for medical image segmentation.
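The idea of enforcing pixels in a cluster to be spatially close to the cluster centre can be illustrated with a simplified loss (a stand-in for, not a reproduction of, SGSCN's actual term):

```python
def spatial_consistency_loss(coords, assignments, n_clusters):
    """Penalize pixels that sit far from their cluster's spatial centre.

    `coords[i]` is the (row, col) of pixel i and `assignments[i]` its
    cluster id. Returns the mean squared distance of each pixel to its
    cluster centroid: a simplified stand-in for the context-based
    consistency term described in the abstract.
    """
    centers = {}
    for k in range(n_clusters):
        members = [c for c, a in zip(coords, assignments) if a == k]
        if members:
            centers[k] = (sum(r for r, _ in members) / len(members),
                          sum(c for _, c in members) / len(members))
    total = 0.0
    for (r, c), a in zip(coords, assignments):
        cr, cc = centers[a]
        total += (r - cr) ** 2 + (c - cc) ** 2
    return total / len(coords)

# A spatially compact clustering scores lower than a scattered one.
coords = [(0, 0), (0, 1), (5, 5), (5, 6)]
compact = spatial_consistency_loss(coords, [0, 0, 1, 1], 2)
scattered = spatial_consistency_loss(coords, [0, 1, 0, 1], 2)
print(compact, scattered)
```

Minimizing such a term alongside a feature-similarity loss is what pushes the clustering toward spatially connected regions with coherent boundaries.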
SSMD: Semi-Supervised Medical Image Detection with Adaptive Consistency and Heterogeneous Perturbation
Hong-Yu Zhou, Chengdi Wang, Haofeng Li
et al.
Semi-supervised classification and segmentation methods have been widely investigated in medical image analysis. Both approaches can improve the performance of fully-supervised methods with additional unlabeled data. However, as a fundamental task, semi-supervised object detection has not gained enough attention in the field of medical image analysis. In this paper, we propose a novel Semi-Supervised Medical image Detector (SSMD). The motivation behind SSMD is to provide free yet effective supervision for unlabeled data, by regularizing the predictions at each position to be consistent. To achieve the above idea, we develop a novel adaptive consistency cost function to regularize different components in the predictions. Moreover, we introduce heterogeneous perturbation strategies that work in both feature space and image space, so that the proposed detector is promising to produce powerful image representations and robust predictions. Extensive experimental results show that the proposed SSMD achieves state-of-the-art performance across a wide range of settings. We also demonstrate the strength of each proposed module with comprehensive ablation studies.
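The core of the consistency supervision, regularizing predictions at each position to agree across perturbations, can be sketched as a weighted squared difference (the uniform default weighting is an illustrative simplification of SSMD's adaptive scheme):

```python
def consistency_cost(pred_a, pred_b, weights=None):
    """Consistency between two detector predictions on the same
    unlabeled image under different perturbations.

    `pred_a`, `pred_b` are per-position score lists; `weights`
    optionally re-weights positions (SSMD adapts these per prediction
    component; the uniform default here is a simplification).
    """
    n = len(pred_a)
    if weights is None:
        weights = [1.0] * n
    wsum = sum(weights)
    return sum(w * (a - b) ** 2
               for w, a, b in zip(weights, pred_a, pred_b)) / wsum

# Agreeing predictions cost nothing; disagreement is penalized.
print(consistency_cost([0.9, 0.1], [0.9, 0.1]))  # 0.0
print(consistency_cost([0.9, 0.1], [0.1, 0.9]))  # ~0.64
```

Since no labels appear in this cost, it can be applied to every unlabeled image, which is where the "free yet effective supervision" comes from.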
Bridging 2D and 3D Segmentation Networks for Computation Efficient Volumetric Medical Image Segmentation: An Empirical Study of 2.5D Solutions
Yichi Zhang, Qingcheng Liao, Le Ding
et al.
Recently, deep convolutional neural networks have achieved great success in medical image segmentation. However, unlike natural images, most medical images, such as MRI and CT, are volumetric data. To make full use of volumetric information, 3D CNNs are widely used. However, 3D CNNs suffer from higher inference time and computation cost, which hinders their further clinical application. Additionally, with the increased number of parameters, the risk of overfitting is higher, especially for medical images, where data and annotations are expensive to acquire. To address this problem, many 2.5D segmentation methods have been proposed to exploit volumetric spatial information at a lower computation cost. Although these works have led to improvements on a variety of segmentation tasks, to the best of our knowledge there has not previously been a large-scale empirical comparison of these methods. In this paper, we present a review of the latest developments in 2.5D methods for volumetric medical image segmentation. Additionally, to compare the performance and effectiveness of these methods, we provide an empirical study of them on three representative segmentation tasks involving different modalities and targets. Our experimental results highlight that 3D CNNs may not always be the best choice. Although all of these 2.5D methods bring performance gains over a 2D baseline, not all of them retain their benefits across different datasets. We hope the results and conclusions of our study will prove useful for the community in exploring and developing efficient volumetric medical image segmentation methods.
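A common 2.5D recipe such a study compares treats neighbouring slices as input channels for a 2D network. A sketch of that slice-stacking step (padding edge slices by repetition is one convention among several):

```python
def make_25d_stacks(volume, context=1):
    """Turn a 3D volume into per-slice 2.5D inputs.

    `volume` is a list of 2D slices; each output is the target slice
    plus `context` neighbours on each side treated as channels (edge
    slices are padded by repetition). One common 2.5D recipe; others
    fuse orthogonal views or use multi-branch encoders.
    """
    n = len(volume)
    stacks = []
    for i in range(n):
        channels = [volume[min(max(i + d, 0), n - 1)]
                    for d in range(-context, context + 1)]
        stacks.append(channels)
    return stacks

# A 4-slice volume yields 4 stacks of 3 channels each.
vol = [[[float(z)]] for z in range(4)]  # four 1x1 slices: 0, 1, 2, 3
stacks = make_25d_stacks(vol, context=1)
print([[ch[0][0] for ch in s] for s in stacks])
# [[0.0, 0.0, 1.0], [0.0, 1.0, 2.0], [1.0, 2.0, 3.0], [2.0, 3.0, 3.0]]
```

The 2D network then sees through-plane context at essentially 2D cost, which is the efficiency argument for 2.5D methods.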
Learning stochastic object models from medical imaging measurements using Progressively-Growing AmbientGANs
Weimin Zhou, Sayantan Bhadra, Frank J. Brooks
et al.
It has been advocated that medical imaging systems and reconstruction algorithms should be assessed and optimized by use of objective measures of image quality that quantify the performance of an observer at specific diagnostic tasks. One important source of variability that can significantly limit observer performance is variation in the objects to be imaged. This source of variability can be described by stochastic object models (SOMs). A SOM is a generative model that can be employed to establish an ensemble of to-be-imaged objects with prescribed statistical properties. In order to accurately model variations in anatomical structures and object textures, it is desirable to establish SOMs from experimental imaging measurements acquired by use of a well-characterized imaging system. Deep generative neural networks, such as generative adversarial networks (GANs), hold great potential for this task. However, conventional GANs are typically trained by use of reconstructed images that are influenced by the effects of measurement noise and the reconstruction process. To circumvent this, an AmbientGAN has been proposed that augments a GAN with a measurement operator. However, the original AmbientGAN could not immediately benefit from modern training procedures, such as progressive growing, which limited its ability to be applied to realistically sized medical image data. To circumvent this, in this work, a new Progressive Growing AmbientGAN (ProAmGAN) strategy is developed for establishing SOMs from medical imaging measurements. Stylized numerical studies corresponding to common medical imaging modalities are conducted to demonstrate and validate the proposed method for establishing SOMs.
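The AmbientGAN ingredient is the measurement operator: the discriminator sees simulated measurements H(x) + n of generated objects rather than the objects themselves. A toy stand-in for a real, well-characterized imaging operator (here H is a simple sampling mask):

```python
import random

def measurement_operator(obj, mask, noise_sigma, rng):
    """Simulate y = H(x) + n for AmbientGAN-style training.

    `obj` is the to-be-imaged object (a flat list), H is a sampling
    mask (1 = measured, 0 = missing), and n is Gaussian noise. In an
    AmbientGAN the discriminator compares real measurements against
    measurement_operator(generator(z), ...), so the generator learns
    the object distribution without ever seeing reconstructed images.
    This mask-plus-noise H is a toy stand-in for a real imaging model.
    """
    return [m * x + rng.gauss(0.0, noise_sigma)
            for x, m in zip(obj, mask)]

rng = random.Random(0)
obj = [1.0, 2.0, 3.0, 4.0]
y = measurement_operator(obj, mask=[1, 0, 1, 0], noise_sigma=0.0, rng=rng)
print(y)  # [1.0, 0.0, 3.0, 0.0]
```

Because H and the noise model are known, gradients can flow through this forward model to the generator, which is what lets the SOM be learned from measurements directly.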
A Question-Centric Model for Visual Question Answering in Medical Imaging
Minh H. Vu, Tommy Löfstedt, Tufve Nyholm
et al.
Deep learning methods have proven extremely effective at performing a variety of medical image analysis tasks. With their potential use in clinical routine, their lack of transparency has however been one of their few weak points, raising concerns regarding their behavior and failure modes. While most research to infer model behavior has focused on indirect strategies that estimate prediction uncertainties and visualize model support in the input image space, the ability to explicitly query a prediction model regarding its image content offers a more direct way to determine the behavior of trained models. To this end, we present a novel Visual Question Answering approach that allows an image to be queried by means of a written question. Experiments on a variety of medical and natural image datasets show that by fusing image and question features in a novel way, the proposed approach achieves an equal or higher accuracy compared to current methods.
Unified Multi-scale Feature Abstraction for Medical Image Segmentation
Xi Fang, Bo Du, Sheng Xu
et al.
Automatic medical image segmentation, an essential component of medical image analysis, plays an important role in computer-aided diagnosis. For example, locating and segmenting the liver can be very helpful in liver cancer diagnosis and treatment. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture, such as the fully convolutional network (FCN) and U-Net. A major focus of FCN-based segmentation methods has been network structure engineering, incorporating the latest CNN structures such as ResNet and DenseNet. In addition to exploring new network structures for efficiently abstracting high-level features, incorporating structures for multi-scale image feature extraction in FCNs has helped to improve performance in segmentation tasks. In this paper, we design a new multi-scale network architecture, which takes multi-scale inputs with dedicated convolutional paths to efficiently combine features from different scales and better utilize the hierarchical information.
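The multi-scale input idea, feeding the same image at several resolutions into dedicated paths, can be sketched with average pooling (the pooling choice and number of scales are illustrative, not taken from the paper):

```python
def avg_pool2(img):
    """Downsample a 2D list by 2x2 average pooling (even dims assumed)."""
    return [[(img[i][j] + img[i][j + 1] +
              img[i + 1][j] + img[i + 1][j + 1]) / 4.0
             for j in range(0, len(img[0]), 2)]
            for i in range(0, len(img), 2)]

def multiscale_inputs(img, n_scales=3):
    """Build a multi-scale input pyramid: the original image plus
    successively pooled versions, each of which would feed its own
    convolutional path before the features are combined.
    """
    pyramid = [img]
    for _ in range(n_scales - 1):
        pyramid.append(avg_pool2(pyramid[-1]))
    return pyramid

img = [[1.0] * 8 for _ in range(8)]
levels = multiscale_inputs(img)
print([len(level) for level in levels])  # [8, 4, 2]
```

Coarser levels give each path a larger effective receptive field at low cost, which is the hierarchical information the architecture then merges.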
A Medical Literature Search System for Identifying Effective Treatments in Precision Medicine
Jiaming Qu, Yue Wang
The Precision Medicine Initiative states that treatments for a patient should take into account not only the patient's disease but his or her specific genetic variation as well. The vast biomedical literature holds the potential for physicians to identify effective treatment options for a cancer patient. However, the complexity and ambiguity of medical terms can result in vocabulary mismatch between the physician's query and the literature. The physician's search intent (finding treatments rather than other types of studies) is difficult to formulate explicitly in a query. Therefore, a simple ad hoc retrieval approach will suffer from low recall and precision. In this paper, we propose a new retrieval system that helps physicians identify effective treatments in precision medicine. Given a cancer patient with a specific disease, genetic variation, and demographic information, the system aims to identify biomedical publications that report effective treatments. We approach this goal from two directions. First, we expand the original disease and gene terms using biomedical knowledge bases to improve the recall of the initial retrieval. We then improve precision by promoting treatment-related publications to the top using a machine learning reranker trained on the 2017 Text REtrieval Conference (TREC) Precision Medicine (PM) track corpus. Batch evaluation results on the 2018 PM track corpus show that the proposed approach effectively improves both recall and precision, achieving performance comparable to the top entries on the leaderboard of the 2018 PM track.
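The two directions, knowledge-base query expansion and treatment-focused reranking, can be sketched as follows (the synonym dictionary and scores are toy stand-ins for the biomedical knowledge bases and the trained reranker):

```python
def expand_query(terms, synonyms):
    """Expand disease/gene terms with knowledge-base synonyms to reduce
    vocabulary mismatch. `synonyms` maps a term to known aliases; the
    dictionary used below is a toy stand-in for a real biomedical KB.
    """
    expanded = list(terms)
    for t in terms:
        expanded.extend(synonyms.get(t.lower(), []))
    return expanded

def rerank(docs, treatment_scores):
    """Promote treatment-related publications: sort retrieved doc ids by
    an 'is-treatment' score (supplied directly here in place of a
    trained machine-learning reranker).
    """
    return sorted(docs, key=lambda d: treatment_scores[d], reverse=True)

kb = {"melanoma": ["malignant melanoma", "skin cancer"]}
q = expand_query(["melanoma", "BRAF"], kb)
print(q)  # ['melanoma', 'BRAF', 'malignant melanoma', 'skin cancer']
print(rerank(["d1", "d2", "d3"], {"d1": 0.2, "d2": 0.9, "d3": 0.5}))
```

Expansion widens the candidate pool (recall); reranking then pushes treatment studies above other study types (precision), mirroring the two-stage design described above.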