RPP: A Certified Poisoned-Sample Detection Framework for Backdoor Attacks under Dataset Imbalance
Miao Lin, Feng Yu, Rui Ning
et al.
Deep neural networks are highly susceptible to backdoor attacks, yet most defense methods to date rely on balanced data, overlooking the pervasive class imbalance in real-world scenarios that can amplify backdoor threats. This paper presents the first in-depth investigation of how dataset imbalance amplifies backdoor vulnerability, showing that (i) imbalance induces a majority-class bias that increases susceptibility and (ii) conventional defenses degrade significantly as imbalance grows. To address this, we propose Randomized Probability Perturbation (RPP), a certified poisoned-sample detection framework that operates in a black-box setting using only model output probabilities. For any inspected sample, RPP determines whether the input has been backdoor-manipulated, while offering provable within-domain detectability guarantees and a probabilistic upper bound on the false positive rate. Extensive experiments on five benchmarks (MNIST, SVHN, CIFAR-10, TinyImageNet, and ImageNet10) covering 10 backdoor attacks and 12 baseline defenses show that RPP achieves significantly higher detection accuracy than state-of-the-art defenses, particularly under dataset imbalance. RPP establishes a theoretical and practical foundation for defending against backdoor attacks in real-world environments with imbalanced data.
Fast, Private, and Protected: Safeguarding Data Privacy and Defending Against Model Poisoning Attacks in Federated Learning
Nicolas Riccieri Gardin Assumpcao, Leandro Villas
Federated Learning (FL) is a distributed training paradigm wherein participants collaborate to build a global model while ensuring the privacy of the involved data, which remains stored on participant devices. However, proposals aiming to ensure such privacy also make it challenging to protect against potential attackers seeking to compromise the training outcome. In this context, we present Fast, Private, and Protected (FPP), a novel approach that aims to safeguard federated training while enabling secure aggregation to preserve data privacy. This is accomplished by evaluating rounds using participants' assessments and enabling training recovery after an attack. FPP also employs a reputation-based mechanism to mitigate the participation of attackers. We created a dockerized environment to validate the performance of FPP compared to other approaches in the literature (FedAvg, Power-of-Choice, and aggregation via Trimmed Mean and Median). Our experiments demonstrate that FPP achieves a rapid convergence rate and can converge even in the presence of malicious participants performing model poisoning attacks.
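Two of the aggregation baselines named above, Trimmed Mean and coordinate-wise Median, are standard robust aggregators and simple to state. A minimal pure-Python sketch (of the baselines only, not of FPP itself, whose reputation and recovery mechanisms are more involved):

```python
from statistics import median

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean: in each coordinate, drop the trim_k
    smallest and trim_k largest client values, then average the rest."""
    agg = []
    for coord in zip(*updates):  # iterate over coordinates across clients
        vals = sorted(coord)
        kept = vals[trim_k:len(vals) - trim_k]
        agg.append(sum(kept) / len(kept))
    return agg

def coordinate_median(updates):
    """Coordinate-wise median aggregation: robust to outlier updates."""
    return [median(coord) for coord in zip(*updates)]
```

With one client submitting a poisoned update such as `[100.0, 3.0]` among four benign ones, both aggregators suppress the outlier coordinate, which is why they serve as robustness baselines for FPP.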
Chain-of-Thought Poisoning Attacks against R1-based Retrieval-Augmented Generation Systems
Hongru Song, Yu-an Liu, Ruqing Zhang
et al.
Retrieval-augmented generation (RAG) systems can effectively mitigate the hallucination problem of large language models (LLMs), but they also possess inherent vulnerabilities. Identifying these weaknesses before the large-scale real-world deployment of RAG systems is of great importance, as it lays the foundation for building more secure and robust RAG systems in the future. Existing adversarial attack methods typically exploit knowledge base poisoning to probe the vulnerabilities of RAG systems, which can effectively deceive standard RAG models. However, with the rapid advancement of deep reasoning capabilities in modern LLMs, previous approaches that merely inject incorrect knowledge are inadequate when attacking RAG systems equipped with deep reasoning abilities. Inspired by the deep thinking capabilities of LLMs, this paper extracts reasoning process templates from R1-based RAG systems, uses these templates to wrap erroneous knowledge into adversarial documents, and injects them into the knowledge base to attack RAG systems. The key idea of our approach is that adversarial documents, by simulating the chain-of-thought patterns aligned with the model's training signals, may be misinterpreted by the model as authentic historical reasoning processes, thus increasing their likelihood of being referenced. Experiments conducted on the MS MARCO passage ranking dataset demonstrate the effectiveness of our proposed method.
Poisoning Behavioral-based Worker Selection in Mobile Crowdsensing using Generative Adversarial Networks
Ruba Nasser, Ahmed Alagha, Shakti Singh
et al.
With the widespread adoption of Artificial Intelligence (AI), AI-based tools and components are becoming omnipresent in today's solutions. However, these components and tools pose a significant threat when it comes to adversarial attacks. Mobile Crowdsensing (MCS) is a sensing paradigm that leverages the collective participation of workers and their smart devices to collect data. One of the key challenges faced at the selection stage is ensuring task completion due to workers' varying behavior. AI has been utilized to tackle this challenge by building unique models for each worker to predict their behavior. However, the integration of AI into the system introduces vulnerabilities that can be exploited by malicious insiders to reduce the revenue obtained by victim workers. This work proposes an adversarial attack targeting behavioral-based selection models in MCS. The proposed attack leverages Generative Adversarial Networks (GANs) to generate poisoning points that can mislead the models during the training stage without being detected. This way, the potential damage introduced by GANs on worker selection in MCS can be anticipated. Simulation results using a real-life dataset show the effectiveness of the proposed attack in compromising the victim workers' model and evading detection by an outlier detector, compared to a benchmark. In addition, the impact of the attack on reducing the payment obtained by victim workers is evaluated.
FuncPoison: Poisoning Function Library to Hijack Multi-agent Autonomous Driving Systems
Yuzhen Long, Songze Li
Autonomous driving systems increasingly rely on multi-agent architectures powered by large language models (LLMs), where specialized agents collaborate to perceive, reason, and plan. A key component of these systems is the shared function library, a collection of software tools that agents use to process sensor data and navigate complex driving environments. Despite its critical role in agent decision-making, the function library remains an under-explored vulnerability. In this paper, we introduce FuncPoison, a novel poisoning-based attack targeting the function library to manipulate the behavior of LLM-driven multi-agent autonomous systems. FuncPoison exploits two key weaknesses in how agents access the function library: (1) agents rely on text-based instructions to select tools; and (2) these tools are activated using standardized command formats that attackers can replicate. By injecting malicious tools with deceptive instructions, FuncPoison manipulates one agent's decisions, such as misinterpreting road conditions, triggering cascading errors that mislead other agents in the system. We experimentally evaluate FuncPoison on two representative multi-agent autonomous driving systems, demonstrating its ability to significantly degrade trajectory accuracy, flexibly target specific agents to induce coordinated misbehavior, and evade diverse defense mechanisms. Our results reveal that the function library, often considered a simple toolset, can serve as a critical attack surface in LLM-based autonomous driving systems, raising serious concerns about their reliability.
The Gradient Puppeteer: Adversarial Domination in Gradient Leakage Attacks through Model Poisoning
Kunlan Xiang, Haomiao Yang, Meng Hao
et al.
In Federated Learning (FL), clients share gradients with a central server while keeping their data local. However, malicious servers could deliberately manipulate the models to reconstruct clients' data from shared gradients, posing significant privacy risks. Although such active gradient leakage attacks (AGLAs) have been widely studied, they suffer from two severe limitations: (i) coverage: no existing AGLAs can reconstruct all samples in a batch from the shared gradients; (ii) stealthiness: no existing AGLAs can evade principled checks of clients. In this paper, we address these limitations with two core contributions. First, we introduce a new theoretical analysis approach, which uniformly models AGLAs as backdoor poisoning. This analysis approach reveals that the core principle of AGLAs is to bias the gradient space to prioritize the reconstruction of a small subset of samples while sacrificing the majority, which theoretically explains the above limitations of existing AGLAs. Second, we propose Enhanced Gradient Global Vulnerability (EGGV), the first AGLA that achieves complete attack coverage while evading client-side detection. In particular, EGGV employs a gradient projector and a jointly optimized discriminator to assess gradient vulnerability, steering the gradient space toward the point most prone to data leakage. Extensive experiments show that EGGV achieves complete attack coverage and surpasses state-of-the-art (SOTA) with at least a 43% increase in reconstruction quality (PSNR) and a 45% improvement in stealthiness (D-SNR).
FedUP: Efficient Pruning-based Federated Unlearning for Model Poisoning Attacks
Nicolò Romandini, Cristian Borcea, Rebecca Montanari
et al.
Federated Learning (FL) can be vulnerable to attacks, such as model poisoning, where adversaries send malicious local weights to compromise the global model. Federated Unlearning (FU) is emerging as a solution to address such vulnerabilities by selectively removing the influence of detected malicious contributors on the global model without complete retraining. However, unlike typical FU scenarios where clients are trusted and cooperative, applying FU with malicious and possibly colluding clients is challenging because their collaboration in unlearning their data cannot be assumed. This work presents FedUP, a lightweight FU algorithm designed to efficiently mitigate malicious clients' influence by pruning specific connections within the attacked model. Our approach achieves efficiency by relying only on clients' weights from the last training round before unlearning to identify which connections to inhibit. Isolating malicious influence is non-trivial due to overlapping updates from benign and malicious clients. FedUP addresses this by carefully selecting and zeroing the highest magnitude weights that diverge the most between the latest updates from benign and malicious clients while preserving benign information. FedUP is evaluated under a strong adversarial threat model, where just under half of the clients (up to 50% minus one) could be malicious and have full knowledge of the aggregation process. We demonstrate the effectiveness, robustness, and efficiency of our solution through experiments across IID and Non-IID data, under label-flipping and backdoor attacks, and by comparing it with state-of-the-art (SOTA) FU solutions. In all scenarios, FedUP reduces malicious influence, lowering accuracy on malicious data to match that of a model retrained from scratch while preserving performance on benign data. FedUP achieves effective unlearning while consistently being faster and saving storage compared to the SOTA.
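The pruning step described above (zeroing high-magnitude weights that diverge most between benign and malicious updates) can be sketched as follows. This is a simplified illustration under stated assumptions: the divergence-times-magnitude score, the `prune_frac` parameter, and the function name `fedup_prune` are illustrative choices, not the authors' exact selection rule.

```python
def fedup_prune(global_w, benign_ws, malicious_ws, prune_frac):
    """Zero the global-model weights that (a) diverge most between the mean
    benign and mean malicious updates and (b) have high magnitude.
    Illustrative simplification of FedUP's selection rule."""
    b_mean = [sum(c) / len(c) for c in zip(*benign_ws)]      # mean benign update
    m_mean = [sum(c) / len(c) for c in zip(*malicious_ws)]   # mean malicious update
    # score each connection: divergence weighted by global-weight magnitude
    scores = [abs(b - m) * abs(g) for b, m, g in zip(b_mean, m_mean, global_w)]
    k = int(len(global_w) * prune_frac)  # number of connections to inhibit
    if k == 0:
        return list(global_w)
    thresh = sorted(scores, reverse=True)[k - 1]
    # zero connections at or above the threshold (skip zero-score ties)
    return [0.0 if s >= thresh and s > 0 else g for s, g in zip(scores, global_w)]
```

Connections where benign and malicious clients agree score zero and are preserved, which reflects the paper's goal of removing malicious influence while retaining benign information.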
Securing the Model Context Protocol: Defending LLMs Against Tool Poisoning and Adversarial Attacks
Saeid Jamshidi, Kawser Wazed Nafi, Arghavan Moradi Dakhel
et al.
The Model Context Protocol (MCP) enables Large Language Models to integrate external tools through structured descriptors, increasing autonomy in decision-making, task execution, and multi-agent workflows. However, this autonomy creates a largely overlooked security gap. Existing defenses focus on prompt-injection attacks and fail to address threats embedded in tool metadata, leaving MCP-based systems exposed to semantic manipulation. This work analyzes three classes of semantic attacks on MCP-integrated systems: (1) Tool Poisoning, where adversarial instructions are hidden in tool descriptors; (2) Shadowing, where trusted tools are indirectly compromised through contaminated shared context; and (3) Rug Pulls, where descriptors are altered after approval to subvert behavior. To counter these threats, we introduce a layered security framework with three components: RSA-based manifest signing to enforce descriptor integrity, LLM-on-LLM semantic vetting to detect suspicious tool definitions, and lightweight heuristic guardrails that block anomalous tool behavior at runtime. Through evaluation of GPT-4, DeepSeek, and Llama-3.5 across eight prompting strategies, we find that security performance varies widely by model architecture and reasoning method. GPT-4 blocks about 71 percent of unsafe tool calls, balancing latency and safety. DeepSeek shows the highest resilience to Shadowing attacks but with greater latency, while Llama-3.5 is fastest but least robust. Our results show that the proposed framework reduces unsafe tool invocation rates without model fine-tuning or internal modification.
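The manifest-signing component described above enforces descriptor integrity so that a Rug Pull (post-approval descriptor edit) invalidates the signature. A minimal sketch: HMAC-SHA256 stands in for the paper's RSA signatures so the example needs only the standard library, and the manifest fields are hypothetical.

```python
import hashlib
import hmac
import json

def sign_manifest(manifest: dict, key: bytes) -> str:
    """Sign a canonical JSON serialization of a tool descriptor.
    (HMAC-SHA256 substitutes for RSA here purely for brevity.)"""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify_manifest(manifest: dict, key: bytes, sig: str) -> bool:
    """Reject any descriptor whose content changed after signing."""
    return hmac.compare_digest(sign_manifest(manifest, key), sig)
```

Because the signature covers a canonical serialization, any post-approval change to a descriptor, including hidden adversarial instructions appended to its description, fails verification and the tool call can be blocked.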
Comprehensive overview of the toxicities of small-molecule cryoprotectants for carnivorous spermatozoa: foundation for computational cryobiotechnology
Isaac Karimi, Layth Jasim Mohammad, A. Suvitha
et al.
Background: The specific and non-specific toxicities of cryoprotective agents (CPAs) for semen or spermatozoa cryopreservation/vitrification (SC/SV) remain challenges to the success of assisted reproductive technologies. Objective: We searched for and integrated the physicochemical and toxicological characteristics of small-molecule CPAs, and curated the information on all extenders reported for carnivores, to provide a foundation for new research avenues and computational cryobiology. Methods: The PubMed database was systematically searched for CPAs reported in SC/SV of carnivores from 1964 to 2024. Physicochemical features, ADMET parameters, toxicity classes, optimized structures, biological activities, thermodynamic equilibrium constants, and kinetic parameters were curated and assessed computationally. Results: Sixty-two relevant papers pertaining to CPAs used in SC/SV were found, and 11 CPAs were selected. Among the properties of the CPAs, the ranges of molecular weight (59–758 g/mol), melting point (−60°C to 236°C), XlogP3 (−4.5 to 12.9), topological polar surface area (TPSA; 20–160 Å²), Caco2 permeability (−0.62 to 1.55 log(Papp) in 10⁻⁶ cm/s), volume of distribution (−1.04 to 0.19 log L/kg), unbound fraction in plasma (0.198–0.895), and Tetrahymena pyriformis toxicity (−2.230 to 0.285 log µg/L) are reported here. Glutathione, dimethyl formamide, methyl formamide, and dimethyl sulfoxide were identified as P-glycoprotein substrates. Ethylene glycol, dimethyl sulfoxide, dimethyl formamide, methyl formamide, glycerol, and soybean lecithin showed Caco2 permeabilities in this order, whereas fructose, glutathione, glutamine, glucose, and citric acid were not Caco2-permeable. The CPAs were distributed in various compartments and could alter the physiological properties of both seminal plasma and spermatozoa. The low volumes of distribution of all CPAs except glucose indicate high water solubility or high protein binding, because higher amounts of the CPAs remain in the seminal plasma. Conclusion: ADMET information on the CPAs and extenders in the bipartite compartments of seminal plasma and the intracellular space of spermatozoa is essential for systematic definition and integration, because the nature of the extenders and seminal plasma can alter the physiology of cryopreserved spermatozoa.
LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning
Beichen Li, Yuanfang Guo, Heqi Peng
et al.
Deep neural networks are vulnerable to backdoor attacks. Among existing backdoor defense methods, trigger reverse engineering based approaches, which reconstruct backdoor triggers via optimization, are among the most versatile and effective. In this paper, we summarize and construct a generic paradigm for the typical trigger reverse engineering process. Based on this paradigm, we propose a new perspective for defeating trigger reverse engineering by manipulating the classification confidence of backdoor samples. To determine the specific modifications of classification confidence, we propose a compensatory model to compute a lower bound on the required modification. With proper modifications, the backdoor attack can easily bypass trigger reverse engineering based methods. To achieve this objective, we propose a Label Smoothing Poisoning (LSP) framework, which leverages label smoothing to specifically manipulate the classification confidences of backdoor samples. Extensive experiments demonstrate that the proposed work can defeat state-of-the-art trigger reverse engineering based methods and possesses good compatibility with a variety of existing backdoor attacks.
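The building block the LSP framework manipulates is the standard label smoothing target distribution: probability mass is moved off the labeled class and spread over the others, which directly controls the classification confidence a trained model assigns. A minimal sketch of that standard construction (LSP's compensatory lower bound on the modification is not reproduced here):

```python
def smoothed_targets(true_class, num_classes, eps):
    """Standard label smoothing: assign 1 - eps to the labeled class and
    spread eps uniformly over the remaining num_classes - 1 classes."""
    off = eps / (num_classes - 1)
    return [1.0 - eps if c == true_class else off for c in range(num_classes)]
```

Training backdoor samples against such softened targets caps their output confidence, which is the lever LSP uses to push reconstructed-trigger statistics outside what reverse engineering defenses expect.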
Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks
Tao Li, Henger Li, Yunian Pan
et al.
Federated learning (FL) is susceptible to a range of security threats. Although various defense mechanisms have been proposed, they are typically non-adaptive and tailored to specific types of attacks, leaving them insufficient in the face of multiple uncertain, unknown, and adaptive attacks employing diverse strategies. This work formulates adversarial federated learning under a mixture of various attacks as a Bayesian Stackelberg Markov game, based on which we propose the meta-Stackelberg defense composed of pre-training and online adaptation. The gist is to simulate strong attack behavior using reinforcement learning (RL-based attacks) in pre-training and then design meta-RL-based defense to combat diverse and adaptive attacks. We develop an efficient meta-learning approach to solve the game, leading to a robust and adaptive FL defense. Theoretically, our meta-learning algorithm, meta-Stackelberg learning, provably converges to the first-order $\varepsilon$-meta-equilibrium point in $O(\varepsilon^{-2})$ gradient iterations with $O(\varepsilon^{-4})$ samples per iteration. Experiments show that our meta-Stackelberg framework performs superbly against strong model poisoning and backdoor attacks of uncertain and unknown types.
Towards More Robust Retrieval-Augmented Generation: Evaluating RAG Under Adversarial Poisoning Attacks
Jinyan Su, Jin Peng Zhou, Zhengxin Zhang
et al.
Retrieval-Augmented Generation (RAG) systems have emerged as a promising solution to mitigate LLM hallucinations and enhance their performance in knowledge-intensive domains. However, these systems are vulnerable to adversarial poisoning attacks, where malicious passages injected into the retrieval corpus can mislead models into producing factually incorrect outputs. In this paper, we present a rigorously controlled empirical study of how RAG systems behave under such attacks and how their robustness can be improved. On the generation side, we introduce a structured taxonomy of context types (adversarial, untouched, and guiding) and systematically analyze their individual and combined effects on model outputs. On the retrieval side, we evaluate several retrievers to measure how easily they expose LLMs to adversarial contexts. Our findings also reveal that "skeptical prompting" can activate LLMs' internal reasoning, enabling partial self-defense against adversarial passages, though its effectiveness depends strongly on the model's reasoning capacity. Together, our experiments (code available at https://github.com/JinyanSu1/eval_PoisonRaG) and analysis provide actionable insights for designing safer and more resilient RAG systems, paving the way for more reliable real-world deployments.
A Novel Defense Against Poisoning Attacks on Federated Learning: LayerCAM Augmented with Autoencoder
Jingjing Zheng, Xin Yuan, Kai Li
et al.
Recent attacks on federated learning (FL) can introduce malicious model updates that circumvent widely adopted Euclidean distance-based detection methods. This paper proposes a novel defense strategy, referred to as LayerCAM-AE, designed to counteract model poisoning in federated learning. The LayerCAM-AE puts forth a new Layer Class Activation Mapping (LayerCAM) integrated with an autoencoder (AE), significantly enhancing detection capabilities. Specifically, LayerCAM-AE generates a heat map for each local model update, which is then transformed into a more compact visual format. The autoencoder is designed to process the LayerCAM heat maps from the local model updates, improving their distinctiveness and thereby increasing the accuracy in spotting anomalous maps and malicious local models. To address the risk of misclassifications with LayerCAM-AE, a voting algorithm is developed, where a local model update is flagged as malicious if its heat maps are consistently suspicious over several rounds of communication. Extensive tests of LayerCAM-AE on the SVHN and CIFAR-100 datasets are performed under both Independent and Identically Distributed (IID) and non-IID settings in comparison with existing ResNet-50 and REGNETY-800MF defense models. Experimental results show that LayerCAM-AE increases detection rates (Recall: 1.0, Precision: 1.0, FPR: 0.0, Accuracy: 1.0, F1 score: 1.0, AUC: 1.0) and test accuracy in FL, surpassing the performance of both the ResNet-50 and REGNETY-800MF. Our code is available at: https://github.com/jjzgeeks/LayerCAM-AE
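The voting rule described above (flag a local model only when its heat maps are consistently suspicious over several communication rounds) can be sketched simply. The `window` and `threshold` parameters and the 0/1 suspicion encoding are illustrative assumptions, not the paper's exact formulation.

```python
def voting_flag(suspicion_history, window, threshold):
    """Flag a local model as malicious when its heat maps were judged
    suspicious (encoded as 1) in at least `threshold` of the last
    `window` communication rounds; a single outlier round is tolerated."""
    recent = suspicion_history[-window:]
    return sum(recent) >= threshold
```

Requiring agreement across rounds is what reduces one-off autoencoder misclassifications to transient noise instead of false positives.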
Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning
Wenhan Chang, Tianqing Zhu, Heng Xu
et al.
In the current AI era, users may request that AI companies delete their data from training datasets due to privacy concerns. For a model owner, retraining a model consumes significant computational resources. Machine unlearning has therefore emerged as a technology that allows the model owner to delete requested training data, or an entire class, with little effect on model performance. However, for large-scale complex data such as images or text, unlearning a class often yields inferior performance because the link between a class and the model is difficult to identify; inaccurate class deletion can lead to over- or under-unlearning. In this paper, to accurately define the unlearning class for complex data, we apply the notion of a Concept, rather than an image feature or a text token, to represent the semantic information of the class to be unlearned. This representation can cut the link between the model and the class, enabling complete erasure of the class's impact. To analyze the impact of concepts in complex data, we adopt a Post-hoc Concept Bottleneck Model and Integrated Gradients to precisely identify concepts across different classes. We then leverage data poisoning with random and targeted labels to propose unlearning methods. We evaluate our methods on both image classification models and large language models (LLMs). The results consistently show that the proposed methods accurately erase targeted information from models while largely maintaining model performance.
Securing Distributed Network Digital Twin Systems Against Model Poisoning Attacks
Zifan Zhang, Minghong Fang, Mingzhe Chen
et al.
In the era of 5G and beyond, the increasing complexity of wireless networks necessitates innovative frameworks for efficient management and deployment. Digital twins (DTs), embodying real-time monitoring, predictive configurations, and enhanced decision-making capabilities, stand out as a promising solution in this context. Within a time-series data-driven framework that maps wireless networks into digital counterparts through integrated vertical and horizontal twinning phases, this study investigates the security challenges in distributed network DT systems, which can undermine the reliability of downstream network applications such as wireless traffic forecasting. Specifically, we consider a minimal-knowledge scenario for all attackers, in which they have no access to network data or other specialized knowledge, yet can interact with previous iterations of server-level models. In this context, we spotlight a novel fake traffic injection attack designed to compromise a distributed network DT system for wireless traffic prediction. In response, we propose a defense mechanism, termed global-local inconsistency detection (GLID), to counteract various model poisoning threats. GLID strategically removes abnormal model parameters that deviate beyond a particular percentile range, thereby fortifying the security of the network twinning process. Extensive experiments on real-world wireless traffic datasets show that both our attack and defense strategies significantly outperform existing baselines, highlighting the importance of security measures in the design and implementation of DTs for 5G-and-beyond network systems.
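GLID's core operation, discarding model parameters that deviate beyond a percentile range across clients before aggregating, can be sketched coordinate-wise. The nearest-rank percentile, the default bounds, and the name `glid_aggregate` are illustrative assumptions, not the paper's exact procedure.

```python
def nearest_rank(sorted_vals, p):
    """Nearest-rank percentile (p in [0, 100]) on a pre-sorted list."""
    idx = int(round(p / 100 * (len(sorted_vals) - 1)))
    return sorted_vals[idx]

def glid_aggregate(client_params, low=20, high=80):
    """Per coordinate, keep only client values inside the [low, high]
    percentile range, then average the survivors."""
    agg = []
    for coord in zip(*client_params):
        s = sorted(coord)
        lo, hi = nearest_rank(s, low), nearest_rank(s, high)
        kept = [v for v in coord if lo <= v <= hi]
        agg.append(sum(kept) / len(kept))
    return agg
```

A single poisoned parameter far outside the percentile band is excluded from the mean, while unanimous benign coordinates pass through unchanged.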
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning
Zhangchen Xu, Fengqing Jiang, Luyao Niu
et al.
In Federated Learning (FL), a set of clients collaboratively train a machine learning model (called the global model) without sharing their local training data. The local training data of clients is typically non-i.i.d. and heterogeneous, resulting in varying contributions from individual clients to the final performance of the global model. In response, many contribution evaluation methods have been proposed, where the server evaluates the contribution made by each client and incentivizes high-contributing clients to sustain their long-term participation in FL. Existing studies mainly focus on developing new metrics or algorithms to better measure the contribution of each client. However, the security of contribution evaluation methods in FL operating in adversarial environments is largely unexplored. In this paper, we propose the first model poisoning attack on contribution evaluation methods in FL, termed ACE. Specifically, we show that any malicious client utilizing ACE can manipulate the parameters of its local model such that it is evaluated to have a high contribution by the server, even when its local training data is of low quality. We perform both theoretical analysis and empirical evaluations of ACE. Theoretically, we show that our design of ACE effectively boosts the malicious client's perceived contribution when the server employs the widely used cosine distance metric to measure contribution. Empirically, our results show that ACE effectively and efficiently deceives five state-of-the-art contribution evaluation methods. In addition, ACE preserves the accuracy of the final global models on testing inputs. We also explore six countermeasures to defend against ACE. Our results show they are inadequate to thwart ACE, highlighting the urgent need for new defenses to safeguard contribution evaluation methods in FL.
Grimm: A Plug-and-Play Perturbation Rectifier for Graph Neural Networks Defending against Poisoning Attacks
Ao Liu, Wenshan Li, Beibei Li
et al.
Recent studies have revealed the vulnerability of graph neural networks (GNNs) to adversarial poisoning attacks on node classification tasks. Current defensive methods require substituting the original GNNs with defense models, regardless of the original's type. This approach, while targeting adversarial robustness, compromises the enhancements developed in prior research to boost GNNs' practical performance. Here we introduce Grimm, the first plug-and-play defense model. With just a minimal interface requirement for extracting features from any layer of the protected GNNs, Grimm is thus enabled to seamlessly rectify perturbations. Specifically, we utilize the feature trajectories (FTs) generated by GNNs, as they evolve through epochs, to reflect the training status of the networks. We then theoretically prove that the FTs of victim nodes will inevitably exhibit discriminable anomalies. Consequently, inspired by the natural parallelism between the biological nervous and immune systems, we construct Grimm, a comprehensive artificial immune system for GNNs. Grimm not only detects abnormal FTs and rectifies adversarial edges during training but also operates efficiently in parallel, thereby mirroring the concurrent functionalities of its biological counterparts. We experimentally confirm that Grimm offers four empirically validated advantages: 1) Harmlessness, as it does not actively interfere with GNN training; 2) Parallelism, ensuring monitoring, detection, and rectification functions operate independently of the GNN training process; 3) Generalizability, demonstrating compatibility with mainstream GNNs such as GCN, GAT, and GraphSAGE; and 4) Transferability, as the detectors for abnormal FTs can be efficiently transferred across different systems for one-step rectification.
Transforming environmental health datasets from the comparative toxicogenomics database into chord diagrams to visualize molecular mechanisms
Brent Wyatt, Allan Peter Davis, Thomas C. Wiegers
et al.
In environmental health, the specific molecular mechanisms connecting a chemical exposure to an adverse endpoint are often unknown, reflecting knowledge gaps. At the public Comparative Toxicogenomics Database (CTD; https://ctdbase.org/), we integrate manually curated, literature-based interactions from CTD to compute four-unit blocks of information organized as a potential step-wise molecular mechanism, known as "CGPD-tetramers," wherein a chemical interacts with a gene product to trigger a phenotype which can be linked to a disease. These computationally derived datasets can be used to fill the gaps and offer testable mechanistic information. Users can generate CGPD-tetramers for any combination of chemical, gene, phenotype, and/or disease of interest at CTD; however, such queries typically result in the generation of thousands of CGPD-tetramers. Here, we describe a novel approach to transform these large datasets into user-friendly chord diagrams using R. This visualization process is straightforward, simple to implement, and accessible to inexperienced users who have never used R before. Combining CGPD-tetramers into a single chord diagram helps identify potential key chemicals, genes, phenotypes, and diseases. This visualization allows users to more readily analyze computational datasets that can fill the exposure knowledge gaps in the environmental health continuum.
Intervention effect and mechanism of Dracocephalum moldavica L. extract on bleomycin-induced pulmonary fibrosis in rats
Xiaoyu SUN, Li CHEN, Ruifang GAO
et al.
Background: Exposure to environmental pollution and specific occupational hazards exacerbates pulmonary fibrosis, which has a complex pathogenesis and lacks effective therapeutic drugs. The extract of Dracocephalum moldavica L. can alleviate pulmonary fibrosis through anti-inflammatory and anti-pyroptosis pathways, but its mechanism in the prevention and treatment of pulmonary fibrosis remains unclear. Objective: To elucidate the targets and potential mechanism underlying the anti-pulmonary-fibrosis efficacy of Dracocephalum moldavica L. extract by combining network pharmacology with experimental verification. Methods: The chemical composition of Dracocephalum moldavica L. extract was retrieved from the China National Knowledge Infrastructure (CNKI) and the Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform (TCMSP). Disease targets related to pulmonary fibrosis were queried in GeneCards and DisGeNET. A protein-protein interaction (PPI) network was constructed using the Search Tool for the Retrieval of Interacting Genes (STRING) database and Cytoscape software. The predicted potential targets were subjected to Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses through the Database for Annotation, Visualization and Integrated Discovery (DAVID) and validated by molecular docking. Thirty-two rats were randomly divided into a control group, a model group, a low-dose Dracocephalum moldavica L. extract group (100 mg·kg−1), and a high-dose Dracocephalum moldavica L. extract group (400 mg·kg−1), with eight rats in each group. A rat model of pulmonary fibrosis was established by intratracheal instillation of bleomycin (5 mg·kg−1); an equal volume of saline was instilled into the control group. After modelling, the high- and low-dose groups received 400 and 100 mg·kg−1 of Dracocephalum moldavica L. extract by gavage, respectively, and the control and model groups received an equal volume of saline by gavage, once per day for 28 consecutive days. The animals were then euthanized, and lung tissues were collected. Structural changes in rat lung tissue were evaluated in stained pathological sections. Western blot (WB) was used to detect the fibrosis-related proteins type I collagen (Col-I), α-smooth muscle actin (α-SMA), phosphatidylinositol 3-kinase (PI3K), and protein kinase B (AKT) in lung tissue. Real-time fluorescence quantitative PCR (RT-qPCR) was used to measure α-SMA and Col-I mRNA levels in lung tissue. Enzyme-linked immunosorbent assay (ELISA) was used to measure tumour necrosis factor-α (TNF-α), interleukin-6 (IL-6), and interleukin-1β (IL-1β) in rats. Results: A total of 378 key chemical components of Dracocephalum moldavica L. extract and 1611 pulmonary-fibrosis-related targets were identified, from which 574 potential targets of the extract acting on pulmonary fibrosis were obtained. The key targets ranked by degree value were albumin (ALB), tumour necrosis factor (TNF), AKT1, etc. KEGG analysis suggested that the potential anti-pulmonary-fibrosis targets of the extract mainly involved the PI3K-AKT, HIF-1, TNF, and MAPK signaling pathways. Molecular docking showed that the binding energies between the active components of the extract (quercetin, apigenin, aesculetin, quercitrin) and the core pulmonary fibrosis targets (TNF, IL-6, ALB, AKT1) were all below −29.288 kJ·mol−1, indicating good binding ability.
Animal validation showed that, compared with the control group, rats in the model group had disrupted alveolar structure, obvious inflammatory cell infiltration, deposition of blue-stained collagen fibres, and increased Col-I and α-SMA protein expression and transcription levels (P<0.001), p-PI3K and p-AKT expression levels (P<0.001), and levels of the inflammatory factors TNF-α, IL-6, and IL-1β (P<0.001). Compared with the model group, both the high- and low-dose extract groups showed alleviated progression of pulmonary fibrosis, reduced inflammatory cell infiltration, and attenuated collagen fibre deposition, with decreases in Col-I and α-SMA protein expression and transcription levels (P<0.01), p-PI3K and p-AKT expression levels (P<0.001), and levels of the inflammatory factors IL-6 and TNF-α (P<0.05, P<0.001). Conclusion: Dracocephalum moldavica L. extract exerts anti-pulmonary-fibrosis effects by modulating the PI3K-AKT signaling cascade and suppressing inflammatory reactions.
Medicine (General), Toxicology. Poisons
Reproductive outcomes of women with uterine anomalies: A retrospective study
Sabina Parveen, Shifa Roohi
Aim: To analyse the reproductive performance of women with uterine anomalies. Materials and Methods: This retrospective study was carried out over a period of one year, from April 2022 to April 2023, at the Department of Obstetrics and Gynaecology, Al-Ameen Medical College and Hospital, Vijayapura. Ethical clearance was obtained. A total of nine cases were studied. Results: There were 23 pregnancies in 9 patients, comprising 4 (17.4%) miscarriages, 6 (26.1%) preterm deliveries, 12 (52.2%) term deliveries, and 1 intrauterine death (IUD). Of the 9 patients with a uterine anomaly, 6 (66.7%) had a unicornuate uterus, 2 (22.2%) had a septate uterus, and 1 (11.1%) had uterine didelphys. Conclusion: Uterine anomalies are not always associated with poor obstetric outcomes, as many of our patients conceived spontaneously and carried to term. Reproductive outcomes therefore depend on the type of anomaly and its degree of severity. Hence, patients with uterine anomalies need to be properly counselled and evaluated for a better outcome.
Therapeutics. Pharmacology, Toxicology. Poisons