Telemedicine for healthcare: Capabilities, features, barriers, and applications
A. Haleem, M. Javaid, R. Singh
et al.
Regular hospital visits can be expensive, particularly in rural areas, due to travel costs. In the era of the COVID-19 pandemic, when physical interaction carries risk, people prefer telemedicine. Medical visits can be reduced when telemedicine services are delivered through video conferencing or other virtual technologies, so telemedicine saves both the patient's and the healthcare provider's time as well as the cost of treatment. Furthermore, because it is fast and convenient, it can streamline the workflow of hospitals and clinics. This disruptive technology also makes it easier to monitor discharged patients and manage their recovery. As a result, telemedicine can create a win-win situation. This paper explores the significant capabilities and features of telemedicine, its treatment workflow, and the barriers to its adoption in healthcare, and identifies seventeen significant applications of telemedicine in healthcare. Telemedicine is described as the practice of a medical practitioner diagnosing and treating patients in a remote area. Using health apps for scheduled follow-up visits makes doctors and patients more effective, improves the probability of follow-up, reduces missed appointments, and optimises patient outcomes. Patients should provide an accurate medical history and can show the doctor any prominent rashes, bruises, or other signs that need attention through a high-quality audio-video system. Practitioners additionally need file management and a payment gateway system. Telemedicine technologies allow both patients and doctors to review the treatment process. However, this technology supplements physical consultation and is in no way a substitute for it. Today, telemedicine is a safe choice for patients who cannot visit the doctor and must stay at home, especially during a pandemic.
1176 citations
Medicine, Computer Science
Automatic diagnosis of the 12-lead ECG using a deep neural network
Antônio H. Ribeiro, Manoel Horta Ribeiro, Gabriela M. M. Paixão
et al.
The role of automatic electrocardiogram (ECG) analysis in clinical practice is limited by the accuracy of existing models. Deep Neural Networks (DNNs) are models composed of stacked transformations that learn tasks by example. This technology has recently achieved striking success in a variety of tasks, and there are great expectations about how it might improve clinical practice. Here we present a DNN model trained on a dataset of more than 2 million labeled exams analyzed by the Telehealth Network of Minas Gerais and collected under the scope of the CODE (Clinical Outcomes in Digital Electrocardiology) study. The DNN outperforms cardiology resident medical doctors in recognizing 6 types of abnormalities in 12-lead ECG recordings, with F1 scores above 80% and specificity over 99%. These results indicate that ECG analysis based on DNNs, previously studied in a single-lead setup, generalizes well to 12-lead exams, taking the technology closer to standard clinical practice.
923 citations
Computer Science, Mathematics
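The abstract above describes the classifier only at a high level. As a minimal sketch of a 1D-convolutional network for multi-label 12-lead ECG classification, the following illustrates the general setup; the layer sizes, sequence length, and the use of a plain CNN are assumptions for illustration, not the published architecture:

```python
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    """Toy 1D CNN for multi-label classification of 12-lead ECGs."""
    def __init__(self, n_leads: int = 12, n_abnormalities: int = 6):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=15, stride=4, padding=7),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
            )
        self.features = nn.Sequential(
            block(n_leads, 64), block(64, 128), block(128, 256),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(256, n_abnormalities)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 12, samples), e.g. ~10 s of ECG at ~400 Hz
        return self.head(self.features(x).flatten(1))  # one logit per abnormality

model = ECGClassifier()
logits = model(torch.randn(8, 12, 4096))
probs = torch.sigmoid(logits)      # independent sigmoids: abnormalities can co-occur
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(8, 6))  # multi-label training loss
print(probs.shape)                 # torch.Size([8, 6])
```

The multi-label head (sigmoid per class rather than softmax) matches the task described: a recording can exhibit several abnormalities at once.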
The imperative for regulatory oversight of large language models (or generative AI) in healthcare
B. Meskó, E. Topol
The rapid advancement of artificial intelligence (AI) has led to sophisticated large language models (LLMs) such as GPT-4 and Bard. Their potential implementation in healthcare settings has already garnered considerable attention because of their diverse applications, which include facilitating clinical documentation, obtaining insurance pre-authorization, summarizing research papers, and working as chatbots that answer patients' questions about their specific data and concerns. While offering transformative potential, LLMs warrant a very cautious approach, since these models are trained differently from the AI-based medical technologies that are already regulated, especially within the critical context of caring for patients. The newest version, GPT-4, released in March 2023, takes both the potential of this technology to support multiple medical tasks and the risks of mishandling its results, which vary in reliability, to a new level. Beyond being an advanced LLM, it can read text within images and analyze the context of those images. Regulating GPT-4 and generative AI in medicine and healthcare without damaging their exciting and transformative potential is a timely and critical challenge to ensure safety, maintain ethical standards, and protect patient privacy. We argue that regulatory oversight should assure that medical professionals and patients can use LLMs without causing harm or compromising their data or privacy. This paper summarizes our practical recommendations for what we can expect from regulators to bring this vision to reality.
776 citations
Computer Science, Medicine
Histopathological Image Analysis: A Review
M. Gurcan, L. Boucheron, A. Can
et al.
1943 citations
Medicine, Computer Science
Physics of liquid jets
J. Eggers, E. Villermaux
Update of AAPM Task Group No. 43 Report: A revised AAPM protocol for brachytherapy dose calculations.
M. Rivard, B. Coursey, L. DeWerd
et al.
Since publication of the American Association of Physicists in Medicine (AAPM) Task Group No. 43 Report in 1995 (TG-43), both the utilization of permanent source implantation and the number of low-energy interstitial brachytherapy source models commercially available have dramatically increased. In addition, the National Institute of Standards and Technology has introduced a new primary standard of air-kerma strength, and the brachytherapy dosimetry literature has grown substantially, documenting both improved dosimetry methodologies and dosimetric characterization of particular source models. In response to these advances, the AAPM Low-energy Interstitial Brachytherapy Dosimetry subcommittee (LIBD) herein presents an update of the TG-43 protocol for calculation of dose-rate distributions around photon-emitting brachytherapy sources. The updated protocol (TG-43U1) includes (a) a revised definition of air-kerma strength; (b) elimination of apparent activity for specification of source strength; (c) elimination of the anisotropy constant in favor of the distance-dependent one-dimensional anisotropy function; (d) guidance on extrapolating tabulated TG-43 parameters to longer and shorter distances; and (e) correction for minor inconsistencies and omissions in the original protocol and its implementation. Among the corrections are consistent guidelines for use of point- and line-source geometry functions. In addition, this report recommends a unified approach to comparing reference dose distributions derived from different investigators to develop a single critically evaluated consensus dataset as well as guidelines for performing and describing future theoretical and experimental single-source dosimetry studies. Finally, the report includes consensus datasets, in the form of dose-rate constants, radial dose functions, and one-dimensional (1D) and two-dimensional (2D) anisotropy functions, for all low-energy brachytherapy source models that met the AAPM dosimetric prerequisites [Med. Phys. 25, 2269 (1998)] as of July 15, 2001. These include the following 125I sources: Amersham Health models 6702 and 6711, Best Medical model 2301, North American Scientific Inc. (NASI) model MED3631-A/M, Bebig/Theragenics model I25.S06, and the Imagyn Medical Technologies Inc. isostar model IS-12501. The 103Pd sources included are the Theragenics Corporation model 200 and NASI model MED3633. The AAPM recommends that the revised dose-calculation protocol and revised source-specific dose-rate distributions be adopted by all end users for clinical treatment planning of low energy brachytherapy interstitial sources. Depending upon the dose-calculation protocol and parameters currently used by individual physicists, adoption of this protocol may result in changes to patient dose calculations. These changes should be carefully evaluated and reviewed with the radiation oncologist preceding implementation of the current protocol.
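For orientation, the quantities named above combine in the TG-43 two-dimensional dose-calculation formalism; this is the standard statement of the protocol, with symbols as defined in the report:

$$\dot{D}(r,\theta) = S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta)$$

where $S_K$ is the air-kerma strength, $\Lambda$ the dose-rate constant, $G_L$ the line-source geometry function, $g_L(r)$ the radial dose function, $F(r,\theta)$ the 2D anisotropy function, and $(r_0, \theta_0) = (1\ \mathrm{cm}, 90^\circ)$ the reference point. The 1D formalism replaces $F(r,\theta)$ with the distance-dependent 1D anisotropy function $\phi_{an}(r)$, which this update recommends in place of the eliminated anisotropy constant.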
Long-term outcome of cultured corneal endothelial cell transplantation with descemetorhexis: A 10-year follow-up study
Yuto Kataoka, Yasufumi Tomioka, Morio Ueno
et al.
Purpose: To describe the 10-year clinical course after cultured human corneal endothelial cell (CEC) (cHCEC) transplantation combined with central descemetorhexis in a single patient with Fuchs endothelial corneal dystrophy (FECD). Observations: A 49-year-old Japanese male was referred to the Department of Ophthalmology at Kyoto Prefectural University of Medicine, Kyoto, Japan in 2013 due to decreased visual acuity (VA) and CEC loss in his left eye caused by FECD. Upon examination, FECD-related central corneal edema, cataract, and decreased VA were observed, and on September 4, 2014 the patient underwent cHCEC transplantation in his left eye. Intraoperatively, a Descemet membrane (DM) tear occurred while abnormal CECs were being removed, thus requiring a change in the surgical plan. Subsequently, the DM was completely stripped (descemetorhexis) in an approximately 5-mm-diameter area including the pupillary center, followed by cHCEC transplantation into the anterior chamber. Prior to surgery, best-corrected VA (BCVA) was 20/50 and central corneal thickness (CCT) was 637 μm; after surgery, corneal transparency was restored and BCVA improved to 20/20 at 6 months postoperatively. At 10 years postoperatively, the transplanted cells had adhered to the descemetorhexis area at a reasonable CEC density (CECD), with maintained corneal transparency; i.e., CCT measured 548 μm, CECD in the central area was 938 cells/mm2, and BCVA remained stable at 20/13. Conclusion and importance: While prospective studies are needed to generalize safety and efficacy, this FECD case treated with cHCEC transplantation combined with descemetorhexis showed no serious adverse events and sustained corneal clarity with stable CECD and CCT.
Implementing a Spanish Wikipedia elective for medical students
Juli McCarroll
Background: Individuals seeking health information often turn to the Internet for answers. Wikipedia is a dynamic, crowdsourced encyclopedia and one of the most accessed online sources for this content. However, the Spanish Wikipedia is not nearly as in-depth as the English version, creating a large disparity. Medical students with English and Spanish proficiency possess a distinct skill set that positions them to contribute timely, trusted, evidence-based content to the platform and reduce this inequity.
Case Presentation: This case study presents the implementation of a credit-bearing Spanish Wikipedia translation elective by the library for fourth-year medical students at Western Michigan University Homer Stryker M.D. School of Medicine, currently the only Spanish Wikipedia elective in a medical school in the United States. The purpose of the course is to increase the quality and readability of medical articles in the English and Spanish versions of the online encyclopedia using evidence-based medicine (EBM) principles.
Conclusions: The output from this elective demonstrates that medical students can use their medical knowledge and skills to create and improve articles in English and Spanish on Wikipedia and disseminate evidence-based information to millions of consumers worldwide seeking reputable health information. Learners can leverage their specialized training to minimize the gap between these versions and become active participants in global health. By using technology to their advantage, they provide enduring health information that impacts and reaches many more people in a virtual setting than in a traditional one-on-one clinical encounter.
Bibliography. Library science. Information resources, Medicine
Biological aging accelerates hepatic fibrosis: Insights from the NHANES 2017–2020 and genome-wide association study analysis
Jiaxin Zhao, Huiying Zhou, Rui Wu
et al.
Introduction and Objectives: This study aimed to investigate the association between biological aging and liver fibrosis in patients with metabolic dysfunction-associated steatotic liver disease (MASLD). Materials and Methods: We analyzed NHANES 2017–2020 data to calculate phenotypic age. Hepatic steatosis and fibrosis were identified using the controlled attenuation parameter (CAP), fatty liver index (FLI), and transient elastography (TE). Odds ratios (ORs) and 95% confidence intervals (CIs) for significant MASLD fibrosis were calculated using multivariate logistic regression, and subgroup analyses were performed. We explored the potential causal relationship between telomere length and liver fibrosis using Mendelian randomization (MR). Additionally, we used the expression quantitative trait loci (eQTL) method and GSE197112 data to identify genes related to liver fibrosis and senescence. Finally, APOLD1 expression was validated using GSE89632. Results: Phenotypic age was associated with the occurrence of liver fibrosis in MASLD (OR = 1.08, 95% CI 1.05–1.12). Subgroup analyses by BMI and age revealed differences: in obese and in young to middle-aged MASLD patients, phenotypic age was significantly associated with liver fibrosis (OR = 1.14, 95% CI 1.10–1.18; OR = 1.07, 95% CI 1.01–1.14; and OR = 1.14, 95% CI 1.07–1.22). MR revealed a negative association between telomere length and liver fibrosis (IVW method: OR = 0.63288, 95% CI 0.42498–0.94249). The gene APOLD1 was identified as a potential target through the intersection of the GEO dataset and eQTL genes. Conclusions: This study emphasizes the link between biological aging and fibrosis in young to middle-aged obese MASLD patients. We introduce phenotypic age as a clinical indicator and identify APOLD1 as a potential therapeutic target.
Specialties of internal medicine
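The ORs and CIs in the abstract above come from multivariate logistic regression. The sketch below illustrates how such adjusted estimates are obtained with statsmodels on synthetic data; the variable names and coefficients are placeholders, not values from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical cohort: phenotypic age plus common covariates (placeholders).
n = 2000
df = pd.DataFrame({
    "phenotypic_age": rng.normal(50, 10, n),
    "bmi": rng.normal(28, 5, n),
    "male": rng.integers(0, 2, n),
})
# Synthetic outcome whose log-odds increase with phenotypic age.
logit = -6 + 0.08 * df["phenotypic_age"] + 0.03 * df["bmi"]
df["fibrosis"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["phenotypic_age", "bmi", "male"]])
fit = sm.Logit(df["fibrosis"], X).fit(disp=0)

# Exponentiating coefficients gives adjusted odds ratios and their 95% CIs.
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
})
print(or_table.round(3))
```

Subgroup analyses like those reported (by BMI and age) amount to refitting this model within each stratum.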
Mifepristone alone and in combination with scAAV9-SMN1 gene therapy improves disease phenotypes in Smn 2B/- spinal muscular atrophy mice
Emma R. Sutton, Eve McCallion, Joseph M. Hoolachan
et al.
Spinal muscular atrophy (SMA) is a neuromuscular disease caused by deletions or mutations in the survival motor neuron 1 (SMN1) gene. SMA is characterised by alpha motor neuron loss in the spinal cord and subsequent muscle atrophy. There are currently three approved SMN-directed therapies for SMA patients. While these therapies have transformed what was once a life-limiting condition into one that can be managed and even improved, they are unfortunately not cures, highlighting the need for additional supporting second-generation therapies. These should target not only the neuromuscular system but also the peripheral and metabolic perturbations present in both SMA models and patients. Krüppel-like factor 15 (Klf15) is a transcription factor that maintains metabolic homeostasis, is involved in the glucocorticoid-glucocorticoid receptor (GR) signalling pathway, and is dysregulated in several peripheral and metabolic tissues in SMA mice. Here, we used murine and human cellular models as well as SMA mice and Caenorhabditis elegans (C. elegans) to assess the therapeutic potential of reducing Klf15 activity with mifepristone, a glucocorticoid antagonist, combined with an SMN-targeted gene therapy. We report that mifepristone reduces Klf15 expression across several in vitro models, ameliorates neuromuscular pathology in SMA smn-1(ok355) C. elegans, and improves survival of SMA Smn 2B/- mice. Furthermore, we show that combining mifepristone with an approved SMN-directed gene therapy (scAAV9-SMN1) results in improved tissue- and sex-specific responses to treatment. Our study demonstrates that a multi-tissue-targeting, SMN-independent drug, alone and in combination with an approved SMN-dependent therapy, has the potential to improve SMA disease pathology.
Automated classification of chest X-rays: a deep learning approach with attention mechanisms
Burcu Oltu, Selda Güney, Seniha Esen Yuksel
et al.
Background: Pulmonary diseases such as COVID-19 and pneumonia are life-threatening conditions that require prompt and accurate diagnosis for effective treatment. Chest X-ray (CXR) has become the most common alternative method for detecting pulmonary diseases such as COVID-19, pneumonia, and lung opacity due to its availability, cost-effectiveness, and suitability for comparative analysis. However, interpreting CXRs is a challenging task. Methods: This study presents an automated deep learning (DL) model that outperforms multiple state-of-the-art methods in diagnosing COVID-19, lung opacity, and viral pneumonia. Using a dataset of 21,165 CXRs, the proposed framework combines the Vision Transformer (ViT) for capturing long-range dependencies, DenseNet201 for powerful feature extraction, and global average pooling (GAP) for retaining critical spatial details, resulting in a robust classification system. Results: The proposed methodology delivers strong results across all categories: 99.4% accuracy and an F1-score of 98.43% for COVID-19, 96.45% accuracy and an F1-score of 93.64% for lung opacity, 99.63% accuracy and an F1-score of 97.05% for viral pneumonia, and 95.97% accuracy with an F1-score of 95.87% for normal subjects. Conclusion: The proposed framework achieves an overall accuracy of 97.87%, surpassing several state-of-the-art methods with reproducible and objective outcomes. To ensure robustness and minimize variability across train-test splits, the study employs five-fold cross-validation, providing reliable and consistent performance evaluation. For transparency and to facilitate future comparisons, the specific training and testing splits have been made publicly accessible. Furthermore, Grad-CAM-based visualizations are integrated to enhance the interpretability of the model, offering insight into its decision-making process. This framework not only boosts classification accuracy but also sets a new benchmark in CXR-based disease diagnosis.
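The abstract does not specify how the ViT and DenseNet201 branches are fused. One plausible reading, sketched below with torchvision models, is feature-level concatenation of a GAP-pooled DenseNet201 feature map with the ViT embedding; every layer choice here is an assumption for illustration, not the paper's verified architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

class DenseNetViTFusion(nn.Module):
    """Illustrative dual-branch CXR classifier: DenseNet201 + ViT features."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # CNN branch: DenseNet201 backbone without its classifier head
        # (in practice one would load ImageNet-pretrained weights).
        densenet = models.densenet201(weights=None)
        self.cnn = densenet.features            # outputs (B, 1920, H/32, W/32)
        self.gap = nn.AdaptiveAvgPool2d(1)      # global average pooling
        # Transformer branch: ViT-B/16 with its classification head removed.
        vit = models.vit_b_16(weights=None)
        vit.heads = nn.Identity()               # keep the 768-d token embedding
        self.vit = vit
        # Fused classifier over the concatenated 1920 + 768 features.
        self.classifier = nn.Sequential(
            nn.LayerNorm(1920 + 768),
            nn.Linear(1920 + 768, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        cnn_feat = self.gap(self.cnn(x)).flatten(1)   # (B, 1920)
        vit_feat = self.vit(x)                        # (B, 768)
        return self.classifier(torch.cat([cnn_feat, vit_feat], dim=1))

model = DenseNetViTFusion(num_classes=4)  # COVID-19, lung opacity, viral pneumonia, normal
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```

Concatenation lets the transformer contribute long-range context while the CNN contributes local texture features, which is the complementary pairing the abstract motivates.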
Hierarchical Variable Importance with Statistical Control for Medical Data-Based Prediction
Joseph Paillard, Antoine Collas, Denis A. Engemann
et al.
Recent advances in machine learning have greatly expanded the repertoire of predictive methods for medical imaging. However, the interpretability of complex models remains a challenge, which limits their utility in medical applications. Model-agnostic methods have recently been proposed to measure conditional variable importance and accommodate complex non-linear models, but they often lack power when dealing with highly correlated data, a common problem in medical imaging. We introduce Hierarchical-CPI, a model-agnostic variable importance measure that frames the inference problem as the discovery of groups of variables that are jointly predictive of the outcome. By exploring subgroups along a hierarchical tree, it remains computationally tractable, yet also enjoys explicit family-wise error rate control. Moreover, we address the issue of vanishing conditional importance under high correlation with a tree-based importance allocation mechanism. We benchmarked Hierarchical-CPI against state-of-the-art variable importance methods. Its effectiveness is demonstrated on two neuroimaging datasets: classifying dementia diagnoses from MRI data (ADNI dataset) and analyzing the Berger effect on EEG data (TDBRAIN dataset), identifying biologically plausible variables.
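Hierarchical-CPI itself is not reproduced here, but the conditional permutation importance (CPI) idea it builds on can be sketched simply: model a feature from the remaining features, permute only the residual, and measure the resulting loss increase. The following is a simplified sketch of that baseline, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data with correlated features; x0 and x1 drive the outcome.
n = 2000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + 0.6 * rng.normal(size=n)   # strongly correlated with x0
x2 = rng.normal(size=n)
X = np.column_stack([x0, x1, x2])
y = x0 + x1 + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
base_loss = mean_squared_error(y_te, model.predict(X_te))

def conditional_permutation_importance(j: int, n_perm: int = 20) -> float:
    """CPI for feature j: keep its conditional mean, permute only the residual."""
    others = np.delete(X_te, j, axis=1)
    cond = LinearRegression().fit(others, X_te[:, j])
    resid = X_te[:, j] - cond.predict(others)
    losses = []
    for _ in range(n_perm):
        X_pert = X_te.copy()
        X_pert[:, j] = cond.predict(others) + rng.permutation(resid)
        losses.append(mean_squared_error(y_te, model.predict(X_pert)))
    return float(np.mean(losses) - base_loss)

for j in range(3):
    print(f"x{j}: CPI = {conditional_permutation_importance(j):.4f}")
```

Under high correlation (x0 vs x1 here), each feature's conditional importance shrinks because the other can stand in for it; this is the vanishing-importance problem that the paper's tree-based allocation mechanism addresses.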
Physical foundations for trustworthy medical imaging: a review for artificial intelligence researchers
Miriam Cobo, David Corral Fontecha, Wilson Silva
et al.
Artificial intelligence in medical imaging has seen unprecedented growth in recent years, owing to rapid advances in deep learning and computing resources. Applications cover the full range of existing medical imaging modalities, each with unique characteristics driven by the physics of the technique. Yet artificial intelligence professionals entering the field, and even experienced developers, often lack a comprehensive understanding of the physical principles underlying medical image acquisition, which hinders their ability to fully leverage its potential. Integrating physics knowledge into artificial intelligence algorithms enhances their trustworthiness and robustness in medical imaging, especially in scenarios with limited data availability. In this work, we review the fundamentals of physics in medical imaging and their impact on the latest advances in artificial intelligence, particularly in generative models and reconstruction algorithms. Finally, we explore the integration of physics knowledge into physics-inspired machine learning models, which leverage physics-based constraints to enhance the learning of medical imaging features.
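As one concrete example of the physics-based constraints such reviews discuss, reconstruction objectives are often augmented with a data-consistency term tying the image estimate to the acquisition physics. The sketch below illustrates this for undersampled MRI, where the forward operator is a masked Fourier transform; it is an illustrative example under those assumptions, not a method taken from the review itself:

```python
import torch

def data_consistency_loss(image: torch.Tensor,
                          kspace: torch.Tensor,
                          mask: torch.Tensor) -> torch.Tensor:
    """|| M * F(x) - y ||^2: the estimate must agree with acquired k-space.

    image:  (H, W) real image estimate
    kspace: (H, W) complex measured k-space samples
    mask:   (H, W) sampling mask (1 where a sample was acquired)
    """
    pred_kspace = torch.fft.fft2(image.to(torch.complex64))  # acquisition physics
    residual = mask * (pred_kspace - kspace)
    return (residual.abs() ** 2).mean()

# Toy usage: total loss = image-domain loss + physics-based data consistency.
H = W = 64
truth = torch.randn(H, W)
mask = (torch.rand(H, W) < 0.3).to(torch.complex64)          # 30% of k-space sampled
kspace = mask * torch.fft.fft2(truth.to(torch.complex64))    # simulated measurement

estimate = torch.zeros(H, W, requires_grad=True)
loss = data_consistency_loss(estimate, kspace, mask)
loss.backward()   # gradients flow through the differentiable physics model
print(float(loss))
```

Because the forward model is differentiable, the same term can regularize a learned reconstruction network, which is the sense in which physics constraints improve trustworthiness when training data are scarce.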
The Latent Space Hypothesis: Toward Universal Medical Representation Learning
Salil Patel
Medical data range from genomic sequences and retinal photographs to structured laboratory results and unstructured clinical narratives. Although these modalities appear disparate, many encode convergent information about a single underlying physiological state. The Latent Space Hypothesis frames each observation as a projection of a unified, hierarchically organized manifold -- much like shadows cast by the same three-dimensional object. Within this learned geometric representation, an individual's health status occupies a point, disease progression traces a trajectory, and therapeutic intervention corresponds to a directed vector. Interpreting heterogeneous evidence in a shared space provides a principled way to re-examine eponymous conditions -- such as Parkinson's or Crohn's -- that often mask multiple pathophysiological entities and involve broader anatomical domains than once believed. By revealing sub-trajectories and patient-specific directions of change, the framework supplies a quantitative rationale for personalised diagnosis, longitudinal monitoring, and tailored treatment, moving clinical practice away from grouping by potentially misleading labels toward navigation of each person's unique trajectory. Challenges remain -- bias amplification, data scarcity for rare disorders, privacy, and the correlation-causation divide -- but scale-aware encoders, continual learning on longitudinal data streams, and perturbation-based validation offer plausible paths forward.
Applications and challenges of photodynamic therapy in the treatment of skin malignancies
Yunqi Hua, Xiaoling Tian, Xinyi Zhang
et al.
Photodynamic therapy (PDT), as a minimally invasive treatment method, has demonstrated distinct advantages in the management of malignant skin tumors. This article examines the current status of PDT applications, assesses its successes and challenges in clinical treatment, and anticipates future development trends. PDT uses photosensitizers that interact with light of specific wavelengths to generate reactive oxygen species that selectively eradicate cancer cells. Despite PDT's exceptional performance in enhancing patients' quality of life and prognosis, the limited treatment depth and the side effects of photosensitizers remain unresolved issues. With the advancement of novel photosensitizers and innovative treatment technologies, the application prospects of PDT are increasingly expansive. This article delves into the mechanism of PDT, its application in various skin malignancies, and its advantages and limitations, and envisions its future development. We believe that through continuous technological enhancement and integration with other treatment modalities, PDT can assume a more pivotal role in the treatment of skin malignancies.
Therapeutics. Pharmacology
Haemoglobin types and variant interference with HbA1c and its association with uncontrolled HbA1c in type 2 diabetes mellitus
Joseph Malaba, Paul Kosiyo, Bernard Guyah
Diabetes mellitus is among the leading global health concerns, causing over 1.5 million deaths alongside other significant comorbidities and complications. Conventional diagnosis involves estimating fasting and random blood glucose levels and performing a glucose tolerance test. For monitoring purposes, long-term glycaemic control is assessed through measurement of glycated haemoglobin (HbA1c), which is considered a reliable and preferred tool. However, its estimation can be affected by the concentrations of haemoglobin types such as HbA0, HbA2, and HbF, whose influence remains unclear, as well as by other haematological parameters. The current study therefore determined the association between HbA1c and haemoglobin types, and the correlation between haemoglobin types and haematological parameters, among patients with type 2 diabetes mellitus (T2DM) compared to healthy non-diabetic participants. In this cross-sectional study, participants [n = 144 (72 per group), ages 23–80 years] were recruited and the desired parameters measured. HbA1c and other haemoglobin variants were measured using ion-exchange high-performance liquid chromatography (HPLC) on the Bio-Rad D-10 machine (Bio-Rad Laboratories, Inc). Haematological parameters were measured using the Celtac G MEK-i machine (Nihon Kohden Europe). SPSS version 27 (IBM Corporation, Chicago, Illinois, United States) was used for the analysis. Chi-square (χ²) analysis, the Mann-Whitney U test, binary logistic regression, and Pearson correlation were used to determine differences between proportions, compare laboratory characteristics, and assess associations and correlations, respectively. With non-diabetics as the reference group, HbA1c was associated with increased HbA0 [OR = 1.509, 95% CI = 1.020–1.099, p = 0.003] and increased HbA2 [OR = 3.893, 95% CI = 2.161–7.014, p = 0.001]. However, there was no significant association between HbA1c and HbF [OR = 2.062, 95% CI = 0.873–4.875, p = 0.099]. Further, haematocrit (HCT) had a negative correlation with HbA0 and a positive correlation with HbAS in participants with controlled diabetes. Mean cell volume (MCV) and mean cell haemoglobin (MCH) had a negative correlation with HbF, and mean cell haemoglobin concentration (MCHC) had a negative correlation with HbA2 in participants with uncontrolled diabetes. The study concluded that the levels of various haemoglobin types should be considered when monitoring glycaemic control through HbA1c. Additionally, MCHC should be considered when interpreting HbA1c results in T2DM patients with high concentrations of HbA2.
Medicine, Biology (General)
Characterization of lipid composition and nutritional quality of yak ghee at different altitudes: A quantitative lipidomic analysis
Feiyan Yang, Xin Wen, Siwei Xie
et al.
Efficient and comprehensive analysis of the lipid profiles of yak ghee collected from different elevations is crucial for optimal utilization of these resources; unfortunately, such research is relatively rare. Yak ghee collected from three locations at different altitudes (S2: 2986 m; S5: 3671 m; S6: 4508 m) was analyzed by quantitative lipidomics. Our analysis identified a total of 176 lipids, of which 147 were upregulated and 29 downregulated. These lipids have the potential to serve as biomarkers for distinguishing yak ghee from different altitudes. Notably, S2 exhibited higher levels of fatty acids (21:1) and branched fatty acid esters of hydroxy fatty acids (14:0/18:0), while S5 showed increased levels of phosphatidylserine (O-20:0/19:1) and glycerophosphoric acid (19:0/22:1). S6 displayed higher levels of triacylglycerol (17:0/20:5/22:3), ceramide alpha-hydroxy fatty acid-sphingosine (d17:3/34:2), and acyl glucosylceramides (16:0–18:0–18:1). Yak ghee exhibited a high content of glycerophospholipids and various functional lipids, including sphingolipids and 21 newly discovered functional lipids. Our findings provide insight into quantitative changes in yak ghee lipids across altitudes, support the development of yak ghee products, and aid the screening of potential biomarkers.
Nutrition. Foods and food supply, Food processing and manufacture
Universal Topology Refinement for Medical Image Segmentation with Polynomial Feature Synthesis
Liu Li, Hanchun Wang, Matthew Baugh
et al.
Although existing medical image segmentation methods provide impressive pixel-wise accuracy, they often neglect topological correctness, making their segmentations unusable for many downstream tasks. One option is to retrain such models whilst including a topology-driven loss component; however, this is computationally expensive and often impractical. A better solution would be a versatile plug-and-play topology refinement method compatible with any domain-specific segmentation pipeline. Directly training a post-processing model to mitigate topological errors often fails, as such models tend to be biased towards the topological errors of a target segmentation network. The diversity of these errors is confined to the information provided by a labelled training set, which is especially problematic for small datasets. Our method solves this problem by training a model-agnostic topology refinement network on synthetic segmentations that cover a wide variety of topological errors. Inspired by the Stone-Weierstrass theorem, we synthesize topology-perturbation masks with randomly sampled coefficients of orthogonal polynomial bases, which ensures a complete and unbiased representation. In practice, we verify that our method is efficient, effective, and compatible with multiple families of polynomial bases, and we show evidence that our universal plug-and-play topology refinement network outperforms both existing topology-driven learning-based and post-processing methods. We also show that combining our method with learning-based models provides an effortless add-on that can further improve the performance of existing approaches.
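To make the polynomial-synthesis idea concrete, the sketch below generates a smooth random field from a Chebyshev basis with randomly sampled coefficients and thresholds it into a binary perturbation mask. This illustrates the general construction only; the paper's actual bases, coefficient distributions, and thresholding are not specified here and may differ:

```python
import numpy as np
from numpy.polynomial import chebyshev

rng = np.random.default_rng(42)

def random_polynomial_mask(size: int = 128, degree: int = 8,
                           threshold: float = 0.0) -> np.ndarray:
    """Binary mask from a random 2D Chebyshev polynomial field.

    By Stone-Weierstrass, polynomials of growing degree can approximate any
    continuous field on [-1, 1]^2, so broadly sampled coefficients cover a
    wide variety of mask shapes (and hence topological perturbations).
    """
    # Random coefficients for the tensor-product basis T_i(x) T_j(y).
    coeffs = rng.normal(size=(degree + 1, degree + 1))
    x = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(x, x)
    field = chebyshev.chebval2d(xx, yy, coeffs)   # smooth random field
    return (field > threshold).astype(np.uint8)   # threshold into a mask

# Perturb a ground-truth segmentation with a low-degree polynomial mask,
# e.g. to punch topology-changing holes into the foreground.
gt = np.zeros((128, 128), dtype=np.uint8)
gt[32:96, 32:96] = 1                              # toy square "organ"
perturbation = random_polynomial_mask(degree=4)
corrupted = np.logical_xor(gt, perturbation & gt).astype(np.uint8)
print(gt.sum(), corrupted.sum())                  # foreground before/after
```

Training a refinement network to map such corrupted masks back to clean ones is what makes the resulting post-processor independent of any particular segmentation model's error distribution.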
Recent Advances in the Development of Liquid Crystalline Nanoparticles as Drug Delivery Systems
Jassica S. L. Leu, Jasy J. X. Teoh, Angel L. Q. Ling
et al.
Due to their distinctive structural features, lyotropic nonlamellar liquid crystalline nanoparticles (LCNPs), such as cubosomes and hexosomes, are considered effective drug delivery systems. Cubosomes consist of a lipid bilayer folded into a membrane lattice that encloses two intertwined water channels. Hexosomes are inverse hexagonal phases composed of an infinite number of tightly packed hexagonal lattices connected by water channels. These nanostructures are often stabilized by surfactants. The structure's membrane has a much larger surface area than that of other lipid nanoparticles, which makes it possible to load therapeutic molecules. In addition, the pore diameters of the mesophases can be modified through their composition, thus influencing drug release. Much research has been conducted in recent years to improve LCNP preparation and characterization, to control drug release, and to improve the efficacy of loaded bioactive chemicals. This article reviews current advances in LCNP technology that permit their application, along with design ideas for novel biomedical applications. Furthermore, we provide a summary of LCNP applications organized by administration route, including their pharmacokinetic modulation properties.
Pharmacy and materia medica
SwIPE: Efficient and Robust Medical Image Segmentation with Implicit Patch Embeddings
Yejia Zhang, Pengfei Gu, Nishchal Sapkota
et al.
Modern medical image segmentation methods primarily use discrete representations in the form of rasterized masks to learn features and generate predictions. Although effective, this paradigm is spatially inflexible, scales poorly to higher-resolution images, and lacks direct understanding of object shapes. To address these limitations, some recent works utilized implicit neural representations (INRs) to learn continuous representations for segmentation. However, these methods often directly adopted components designed for 3D shape reconstruction. More importantly, these formulations were also constrained to either point-based or global contexts, lacking contextual understanding or local fine-grained details, respectively--both critical for accurate segmentation. To remedy this, we propose a novel approach, SwIPE (Segmentation with Implicit Patch Embeddings), that leverages the advantages of INRs and predicts shapes at the patch level--rather than at the point level or image level--to enable both accurate local boundary delineation and global shape coherence. Extensive evaluations on two tasks (2D polyp segmentation and 3D abdominal organ segmentation) show that SwIPE significantly improves over recent implicit approaches and outperforms state-of-the-art discrete methods with over 10x fewer parameters. Our method also demonstrates superior data efficiency and improved robustness to data shifts across image resolutions and datasets. Code is available on Github (https://github.com/charzharr/miccai23-swipe-implicit-segmentation).
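The core INR idea behind approaches like SwIPE can be stated compactly: a small MLP maps a continuous coordinate, conditioned on a learned embedding, to an occupancy probability. The sketch below conditions on a patch-level embedding vector; it is a schematic of the general INR-for-segmentation setup under those assumptions, not the authors' SwIPE architecture:

```python
import torch
import torch.nn as nn

class ImplicitPatchDecoder(nn.Module):
    """Schematic INR decoder: (2D coordinate, patch embedding) -> occupancy."""
    def __init__(self, embed_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords: torch.Tensor, patch_embed: torch.Tensor) -> torch.Tensor:
        # coords: (B, N, 2) continuous query points in [-1, 1]^2, local to each patch
        # patch_embed: (B, embed_dim), one embedding per patch from an image encoder
        B, N, _ = coords.shape
        conditioned = torch.cat(
            [coords, patch_embed.unsqueeze(1).expand(B, N, -1)], dim=-1)
        return torch.sigmoid(self.mlp(conditioned)).squeeze(-1)  # (B, N) occupancy

decoder = ImplicitPatchDecoder()
coords = torch.rand(4, 1024, 2) * 2 - 1          # query at any resolution
patch_embed = torch.randn(4, 128)                # e.g. from a CNN patch encoder
occupancy = decoder(coords, patch_embed)
print(occupancy.shape)                           # torch.Size([4, 1024])
```

Because the decoder is queried at arbitrary continuous coordinates, predictions are resolution-independent, which is the spatial flexibility the abstract contrasts with rasterized masks; conditioning at the patch level, rather than per point or per image, is what supplies both local detail and broader context.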