Results for "Toxicology. Poisons"

Showing 20 of ~801,073 results · from arXiv, DOAJ, Semantic Scholar, CrossRef

arXiv Open Access 2026
Semantic Chameleon: Corpus-Dependent Poisoning Attacks and Defenses in RAG Systems

Scott Thornton

Retrieval-Augmented Generation (RAG) systems extend large language models (LLMs) with external knowledge sources but introduce new attack surfaces through the retrieval pipeline. In particular, adversaries can poison retrieval corpora so that malicious documents are preferentially retrieved at inference time, enabling targeted manipulation of model outputs. We study gradient-guided corpus poisoning attacks against modern RAG pipelines and evaluate retrieval-layer defenses that require no modification to the underlying LLM. We implement dual-document poisoning attacks consisting of a sleeper document and a trigger document optimized using Greedy Coordinate Gradient (GCG). In a large-scale evaluation on the Security Stack Exchange corpus (67,941 documents) with 50 attack attempts, gradient-guided poisoning achieves a 38.0 percent co-retrieval rate under pure vector retrieval. We show that a simple architectural modification, hybrid retrieval combining BM25 and vector similarity, substantially mitigates this attack. Across all 50 attacks, hybrid retrieval reduces gradient-guided attack success from 38 percent to 0 percent without modifying the model or retraining the retriever. When attackers jointly optimize payloads for both sparse and dense retrieval signals, hybrid retrieval can be partially circumvented, achieving 20-44 percent success, but still significantly raises attack difficulty relative to vector-only retrieval. Evaluation across five LLM families (GPT-5.3, GPT-4o, Claude Sonnet 4.6, Llama 4, and GPT-4o-mini) shows attack success ranging from 46.7 percent to 93.3 percent. Cross-corpus evaluation on the FEVER Wikipedia dataset (25 attacks) yields 0 percent attack success across all retrieval configurations.
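The defense hinges on fusing sparse (BM25) and dense (vector) rankings. As a rough illustration, here is one common fusion scheme, reciprocal rank fusion (RRF); the abstract does not specify the paper's fusion rule, so the constant k=60 and the toy rankings below are assumptions.

```python
# Reciprocal Rank Fusion (RRF) over a sparse (BM25) and a dense (vector)
# ranking. Illustrative sketch only -- the paper's pipeline, corpus, and
# fusion rule are not given here; k=60 is the conventional RRF constant.

def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists of doc ids into one hybrid ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked outputs for one query: a poisoned document ("poison-1")
# dominates dense retrieval but scores poorly under BM25, so it falls behind
# documents that score well on both signals in the fused ranking.
bm25_ranking  = ["doc-17", "doc-4", "doc-92", "doc-3", "poison-1"]
dense_ranking = ["poison-1", "doc-17", "doc-4", "doc-92", "doc-3"]

print(rrf_fuse([bm25_ranking, dense_ranking])[:3])  # ['doc-17', 'doc-4', 'poison-1']
```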

en cs.CR, cs.AI
arXiv Open Access 2026
Gold standard process Markovian poisoning: a semiparametric approach

Claire Lacour, Pierre Vandekerkhove

We consider in this paper a stochastic process that mixes in time, according to an unobserved stationary Markov selection process, two separate sources of randomness: i) a stationary process whose distribution is accessible (gold standard); ii) a pure i.i.d. sequence whose distribution is unknown (poisoning process). In this framework we propose to estimate, with two different approaches, the transition of the hidden Markov selection process along with the distribution, not assumed to belong to any parametric family, of the unknown i.i.d. sequence, under minimal (identifiability, stationarity, and dependence-in-time) conditions. We show that both estimators provide consistent estimates of the Euclidean transition parameter, and also prove that one of them, which is $\sqrt{n}$-consistent, allows us to establish a functional central limit theorem for the cumulative distribution function of the unknown poisoning sequence. The numerical performance of our estimators is illustrated through various challenging examples.
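An illustrative formalization of the model described above, in our own notation rather than necessarily the authors':

```latex
% Illustrative formalization (our notation): the observed process X_t switches
% between a known stationary source Y_t ~ F (gold standard) and an unknown
% i.i.d. source Z_t ~ G (poisoning), driven by a hidden stationary Markov
% chain S_t with transition matrix (p_ij):
\[
  X_t = (1 - S_t)\,Y_t + S_t\,Z_t, \qquad S_t \in \{0,1\},
  \qquad \mathbb{P}(S_{t+1}=j \mid S_t=i) = p_{ij}.
\]
% Both estimators target (p_ij) and G jointly; the sqrt(n)-consistent one
% yields a functional CLT for the empirical estimate of G's CDF.
```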

en math.ST
DOAJ Open Access 2026
The integration of artificial intelligence and biotechnology in medicine: accelerating novelty in biomarker-targeted discovery and drug delivery systems

Christopher Busayo Olowosoke, Daniel Ogbonnaya Nwankwo, Chinedu Shedrach Izu et al.

Artificial intelligence (AI) and biotechnology are two transformative fields that are converging, to the benefit of STEM, because of their shared reliance on data and models. Tools from both fields have redefined and improved the translational performance of laboratory investigations in drug and biomarker-targeted discovery, drug design, drug development, and drug delivery for personalized treatment. Although the long-term merits of integrating AI into biotechnology outweigh the demerits, it remains unclear whether the current trend is suitable and truly applicable for clinical intervention and therapy upgrades. In this article, we indicate how key AI-biotech research is supporting accelerated novelty in biomarker-targeted discovery and drug delivery systems (DDS) for disease management, rather than merely incremental changes, in the 21st century.

Toxicology. Poisons, Biotechnology
arXiv Open Access 2025
Effectiveness of Adversarial Benign and Malware Examples in Evasion and Poisoning Attacks

Matouš Kozák, Martin Jureček

Adversarial attacks present significant challenges for malware detection systems. This research investigates the effectiveness of benign and malicious adversarial examples (AEs) in evasion and poisoning attacks on the Portable Executable file domain. A novel focus of this study is on benign AEs, which, although not directly harmful, can increase false positives and undermine trust in antivirus solutions. We propose modifying existing adversarial malware generators to produce benign AEs and show they are as successful as malware AEs in evasion attacks. Furthermore, our data show that benign AEs have a more decisive influence in poisoning attacks than standard malware AEs, demonstrating a greater ability to degrade the model's performance. Our findings introduce new opportunities for adversaries and further enlarge the attack surface that security researchers need to protect.

arXiv Open Access 2025
The Art of Hide and Seek: Making Pickle-Based Model Supply Chain Poisoning Stealthy Again

Tong Liu, Guozhu Meng, Peng Zhou et al.

Pickle deserialization vulnerabilities have persisted throughout Python's history, remaining widely recognized yet unresolved. Due to its ability to transparently save and restore complex objects into byte streams, many AI/ML frameworks continue to adopt pickle as the model serialization protocol despite its inherent risks. As the open-source model ecosystem grows, model-sharing platforms such as Hugging Face have attracted massive participation, significantly amplifying the real-world risks of pickle exploitation and opening new avenues for model supply chain poisoning. Although several state-of-the-art scanners have been developed to detect poisoned models, their incomplete understanding of the poisoning surface leaves the detection logic fragile and allows attackers to bypass them. In this work, we present the first systematic disclosure of the pickle-based model poisoning surface from both model loading and risky function perspectives. Our research demonstrates how pickle-based model poisoning can remain stealthy and highlights critical gaps in current scanning solutions. On the model loading surface, we identify 22 distinct pickle-based model loading paths across five foundational AI/ML frameworks, 19 of which are entirely missed by existing scanners. We further develop a bypass technique named Exception-Oriented Programming (EOP) and discover 9 EOP instances, 7 of which can bypass all scanners. On the risky function surface, we discover 133 exploitable gadgets, achieving almost a 100% bypass rate. Even against the best-performing scanner, these gadgets maintain an 89% bypass rate. By systematically revealing the pickle-based model poisoning surface, we achieve practical and robust bypasses against real-world scanners. We responsibly disclose our findings to corresponding vendors, receiving acknowledgments and a $6000 bug bounty.
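As background on why unpickling is exploitable at all: the `__reduce__` hook lets a pickled object instruct the unpickler to call an arbitrary callable during deserialization. A minimal, deliberately benign demonstration (the payload only prints; the paper's gadget inventory and EOP bypasses go far beyond this):

```python
import pickle

# Classic pickle gadget: __reduce__ tells the unpickler to call an arbitrary
# callable with chosen arguments during deserialization. Here the "payload"
# is a harmless print; a real attack would substitute something like os.system.
class Gadget:
    def __reduce__(self):
        return (print, ("code executed during pickle.loads()",))

blob = pickle.dumps(Gadget())
pickle.loads(blob)  # prints before any model object is even returned
```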

en cs.CR
DOAJ Open Access 2025
Association between heatwave and risk of traffic injuries and its disease burden in Yunnan Province

Haorong MENG, Jianxiong HU, Qingping SHI et al.

Background: Previous studies have found that high temperature and heatwave increase the risk of traffic injuries. The complex road conditions in Yunnan Province result in frequent traffic accidents; however, there is limited evidence on the correlation between heatwave and traffic injuries in Yunnan Province. Objective: To assess the association between heatwave events and traffic injuries, to estimate the resulting disease burden, and to identify sensitive groups. Methods: We collected data on traffic injury cases and concurrent meteorological information from four surveillance sites in Yunnan Province, China (Dali, Lufeng, Zhaoyang, and Qilin) from May to September each year from 2015 to 2023. Traffic injury cases refer to patients who visited the outpatient or emergency departments of local surveillance hospitals for the first time due to traffic injuries. Meteorological data were derived from the fifth-generation atmospheric reanalysis dataset of the global climate provided by the European Centre for Medium-Range Weather Forecasts. A time-stratified case-crossover design combined with a distributed lag non-linear model was used to analyze the association between short-term exposure to heatwave and traffic injuries. We also conducted subgroup analyses by sex, age, occupation, injury cause, activity at the time of injury, and severity of injury. Results: A total of 34,764 traffic injury surveillance cases were included in the analysis. The risk of traffic injuries during heatwave was 1.13 times (95%CI: 1.07, 1.20) that during non-heatwave periods, and 2.64% (95%CI: 1.49%, 3.73%) of traffic injuries were attributable to heatwave exposure throughout the study period. Stratified analysis showed greater impacts of heatwave on the risk of traffic injuries in females [odds ratio (OR)=1.17, attributable fraction (AF)=3.23%], people aged 15-64 years (OR=1.13, AF=2.68%), farmers/workers (OR=1.22, AF=4.07%), people with non-motor-vehicle traffic injuries (OR=1.17, AF=3.24%), people who were driving or riding when the injury occurred (OR=1.12, AF=2.43%), and people with minor injuries (OR=1.18, AF=3.49%) (P<0.05). Longer duration [excess risk (ER)=0.99%] and greater intensity (ER=4.12%) of heatwave had greater impacts on the risk of traffic injuries (P<0.05). Conclusion: Heatwave events are associated with an increased risk of traffic injuries. Females, people aged 15-64, farmers/workers, people with non-motor-vehicle traffic injuries, those injured while driving or riding a vehicle, and people with minor injuries are particularly vulnerable. These findings suggest that targeted mitigation and adaptation measures should be taken to address the risk of traffic injuries associated with extreme climate events.
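As a worked illustration of how an attributable fraction relates to the reported odds ratio, via Miettinen's case-based formula; the exposure prevalence below is a hypothetical value chosen for the example, not a figure from the study.

```python
# Attributable fraction via Miettinen's case-based formula:
#   AF = p_c * (OR - 1) / OR,
# where p_c is the proportion of cases exposed to heatwave. The p_c value is
# an illustrative assumption, not taken from the study.
def attributable_fraction(odds_ratio, p_cases_exposed):
    return p_cases_exposed * (odds_ratio - 1.0) / odds_ratio

OR = 1.13   # risk during heatwave vs. non-heatwave (from the abstract)
p_c = 0.23  # hypothetical share of injury cases occurring on heatwave days
print(f"AF = {attributable_fraction(OR, p_c):.2%}")  # ~2.65%, same order as the reported 2.64%
```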

Medicine (General), Toxicology. Poisons
DOAJ Open Access 2025
Modulation of PM20D1 expression by rosiglitazone confers neuroprotection in tramadol-induced Parkinsonian rats

Farah Hazim Hadi, Huda Jaber Waheed, Nawfal Abdulmonem Numan

Parkinson’s disease (PD) is a progressive neurodegenerative disorder with no available disease‑modifying therapy, and tramadol misuse has been increasingly associated with PD‑like neurotoxicity through oxidative stress, mitochondrial dysfunction, and apoptosis. This study investigated whether rosiglitazone (RSG), a PPARγ agonist, confers neuroprotection in tramadol-induced Parkinsonian rats by modulating PM20D1 gene expression, and whether its effects are enhanced in combination with levodopa–carbidopa. Fifty‑six male rats were randomized into seven groups: control, tramadol‑only, RSG (5/10/15 mg/kg) plus tramadol, levodopa–carbidopa plus tramadol, and RSG 5 mg/kg plus levodopa–carbidopa plus tramadol. Tramadol significantly impaired motor function and reduced dopamine compared to controls (serum: 189.0 ± 12.8 vs. 470.0 ± 21.2 pg/mL; brain: 54.8 ± 9.0 vs. 251.0 ± 43.6 pg/mL, p < 0.001), depleted antioxidants (SOD: 58.1 ± 8.5 vs. 155.0 ± 16.3 ng/mL; GSH: 6.9 ± 0.8 vs. 21.6 ± 2.6 µg/mL), and increased apoptosis (caspase‑3: 36.9 ± 3.6 vs. 10.0 ± 2.5 ng/mL). Relative to the tramadol‑only group, RSG dose‑dependently restored dopamine (up to 397.0 ± 23.1 pg/mL), normalized oxidative stress (MDA reduced to 1.5 ± 0.2 ng/mL), and upregulated PM20D1 gene expression (3.2 ± 0.5 fold) and BCL2 gene expression (3.2 ± 0.5 fold). The low-dose RSG plus levodopa–carbidopa combination achieved maximal behavioral recovery and dopamine restoration (1023.0 ± 248.0 pg/mL) compared to the tramadol-only group. These findings provide the first evidence that RSG confers neuroprotection against tramadol-induced Parkinsonism through PPARγ-mediated modulation of PM20D1 gene expression, highlighting a novel translational therapeutic axis with potential disease-modifying implications for PD.

Toxicology. Poisons
DOAJ Open Access 2025
Subcutaneous Infection Resulting From the Migration of a Peripherally Inserted Central Catheter Into the Angular Vein

Foroozan Faress, Leyla Abdolkarimi, Sayed Mahdi Marashi

Background: Peripherally inserted central catheter (PICC) migration into smaller veins is rare, often recognized only when catheter dysfunction occurs or associated clinical complications manifest. This study aims to highlight subcutaneous tissue infections as an unusual complication of PICC migration in newborns. Case Presentation: We report a case of a newborn male who experienced PICC migration into the angular vein after a prior successful repositioning. Factors contributing to this migration include anatomical variations in the venous system, left-sided catheter insertion, the need for mechanical ventilation due to persistent pulmonary hypertension of the newborn, and the potential influence of using a 2 Fr diameter PICC line. Conclusion: Maintaining detailed documentation of the external catheter length and conducting regular imaging post-PICC placement is crucial, especially if the patient shows signs of catheter dysfunction or if unanticipated complications develop.

Medicine (General), Toxicology. Poisons
arXiv Open Access 2024
Poisoning Prevention in Federated Learning and Differential Privacy via Stateful Proofs of Execution

Norrathep Rattanavipanon, Ivan De Oliveira Nunes

The rise in IoT-driven distributed data analytics, coupled with increasing privacy concerns, has led to a demand for effective privacy-preserving and federated data collection/model training mechanisms. In response, approaches such as Federated Learning (FL) and Local Differential Privacy (LDP) have been proposed and attracted much attention over the past few years. However, they still share the common limitation of being vulnerable to poisoning attacks wherein adversaries compromising edge devices feed forged (a.k.a. poisoned) data to aggregation back-ends, undermining the integrity of FL/LDP results. In this work, we propose a system-level approach to remedy this issue based on a novel security notion of Proofs of Stateful Execution (PoSX) for IoT/embedded devices' software. To realize the PoSX concept, we design SLAPP: a System-Level Approach for Poisoning Prevention. SLAPP leverages commodity security features of embedded devices - in particular ARM TrustZone-M security extensions - to verifiably bind raw sensed data to their correct usage as part of FL/LDP edge device routines. As a consequence, it offers robust security guarantees against poisoning. Our evaluation, based on real-world prototypes featuring multiple cryptographic primitives and data collection schemes, showcases SLAPP's security and low overhead.
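A toy sketch of the intuition behind binding sensed data to software state; the token format, names, and flow below are illustrative assumptions, not SLAPP's actual PoSX protocol.

```python
import hmac, hashlib, os

# Toy sketch of the *idea* behind a stateful proof of execution: a device-held
# key (in SLAPP, protected by ARM TrustZone-M) authenticates the sensed data
# together with a measurement of the software that processed it, so a back-end
# can reject updates not produced by the attested FL/LDP routine. All names
# and the token format are illustrative, not SLAPP's actual protocol.

DEVICE_KEY = os.urandom(32)  # stand-in for a key provisioned in the enclave

def prove(sensed_data: bytes, code_measurement: bytes, nonce: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, sensed_data + code_measurement + nonce,
                    hashlib.sha256).digest()

def verify(token, sensed_data, code_measurement, nonce) -> bool:
    return hmac.compare_digest(token, prove(sensed_data, code_measurement, nonce))

nonce = os.urandom(16)  # verifier-chosen freshness challenge
data = b"sensor-reading:23.4C"
measurement = hashlib.sha256(b"fl-client-binary-v1").digest()
token = prove(data, measurement, nonce)
print(verify(token, data, measurement, nonce))               # True
print(verify(token, b"forged-reading", measurement, nonce))  # False
```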

en cs.CR
arXiv Open Access 2024
Online Poisoning Attack Against Reinforcement Learning under Black-box Environments

Jianhui Li, Bokang Zhang, Junfeng Wu

This paper proposes an online environment-poisoning algorithm tailored to reinforcement learning agents operating in a black-box setting, where an adversary deliberately manipulates training data to steer the agent toward a malicious policy. In contrast to prior studies that primarily investigate white-box settings, we focus on a scenario characterized by environment dynamics that are unknown to the attacker and a flexible reinforcement learning algorithm employed by the targeted agent. We first propose an attack scheme capable of poisoning both the reward functions and the state transitions. The poisoning task is formalized as a constrained optimization problem, following the framework of Ma et al. (2019). Since the transition probabilities are unknown to the attacker in a black-box environment, we apply a stochastic gradient descent algorithm in which the exact gradients are approximated by sample-based estimates. A penalty-based method, together with a bilevel reformulation, is then employed to transform the problem into an unconstrained counterpart and to circumvent the double-sampling issue. The algorithm's effectiveness is validated in a maze environment.
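The sample-based gradient approximation can be illustrated with a generic two-point (SPSA-style) estimator; the quadratic objective below is a stand-in for the attacker's poisoning loss, not the paper's formulation.

```python
import numpy as np

# Two-point stochastic gradient estimate for a black-box objective J(theta),
# as a generic stand-in for the paper's sample-based gradient approximation.
rng = np.random.default_rng(0)

def J(theta):
    # Hypothetical attacker objective (e.g., distance of the learned policy
    # from the target policy); unknown dynamics hide its analytic gradient.
    return float(np.sum((theta - 1.0) ** 2))

def spsa_grad(J, theta, eps=1e-2):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # random perturbation
    return (J(theta + eps * delta) - J(theta - eps * delta)) / (2 * eps) * delta

theta = np.zeros(4)
for step in range(200):          # plain SGD on the estimated gradient
    theta -= 0.05 * spsa_grad(J, theta)
print(theta.round(2))            # approaches the minimizer at 1.0
```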

en cs.LG, cs.CR
arXiv Open Access 2024
Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks

Ehsan Nowroozi, Imran Haider, Rahim Taheri et al.

Federated Learning (FL) is a machine learning (ML) approach that enables multiple decentralized devices or edge servers to collaboratively train a shared model without exchanging raw data. During the training and sharing of model updates between clients and servers, data and models are susceptible to various data-poisoning attacks. In this study, our motivation is to explore the severity of data-poisoning attacks in the computer-network domain, because they are easy to implement but difficult to detect. We considered two types of data-poisoning attacks, label flipping (LF) and feature poisoning (FP), and applied them with a novel approach. In LF, we randomly flipped the labels of benign data and trained the model on the manipulated data. For FP, we randomly manipulated the highest-contributing features, determined using the Random Forest algorithm. The datasets used in this experiment were the CIC and UNSW computer-network datasets. We generated adversarial samples using the two attacks mentioned above and applied them to a small percentage of each dataset. Subsequently, we trained the model and tested its accuracy on the adversarial datasets. We recorded results for both benign and manipulated datasets and observed significant differences in model accuracy between them. The experimental results show that the LF attack failed, whereas the FP attack was effective, proving its significance in fooling a server. With a 1% LF attack on CIC, the accuracy was approximately 0.0428 and the attack success rate (ASR) was 0.9564; hence, the attack is easily detectable. With a 1% FP attack, the accuracy and ASR were both approximately 0.9600; hence, FP attacks are difficult to detect. We repeated the experiment with different poisoning percentages.
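A compact sketch of the two poisoning primitives on synthetic data; the paper works on the CIC and UNSW datasets, so the data, poisoning fraction, and perturbation scale here are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Sketch of the label-flipping (LF) and feature-poisoning (FP) primitives on
# synthetic data; fraction and noise scale are illustrative assumptions.
rng = np.random.default_rng(42)
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

poison_frac = 0.01                                   # 1% of training rows
idx = rng.choice(len(y), size=int(poison_frac * len(y)), replace=False)

# LF: invert the labels of the selected benign rows.
y_lf = y.copy()
y_lf[idx] = 1 - y_lf[idx]

# FP: perturb the most important features, ranked by a Random Forest trained
# on clean data, for the same selected rows.
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
top_feats = np.argsort(rf.feature_importances_)[-5:]  # 5 highest-contribution features
X_fp = X.copy()
X_fp[np.ix_(idx, top_feats)] += rng.normal(0, 3.0, size=(len(idx), len(top_feats)))
```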

en cs.CR, cs.AI
arXiv Open Access 2024
Modeling phonon-mediated quasiparticle poisoning in superconducting qubit arrays

Eric Yelton, Clayton P. Larson, Vito Iaia et al.

Correlated errors caused by ionizing radiation impacting superconducting qubit chips are problematic for quantum error correction. Such impacts generate quasiparticle (QP) excitations in the qubit electrodes, which temporarily reduce qubit coherence significantly. The many energetic phonons produced by a particle impact travel efficiently throughout the device substrate and generate quasiparticles with high probability, thus causing errors on a large fraction of the qubits in an array simultaneously. We describe a comprehensive strategy for the numerical simulation of the phonon and quasiparticle dynamics in the aftermath of an impact. We compare the simulations with experimental measurements of phonon-mediated QP poisoning and demonstrate that our modeling captures the spatial and temporal footprint of the QP poisoning for various configurations of phonon downconversion structures. We thus present a path forward for the operation of superconducting quantum processors in the presence of ionizing radiation.

en quant-ph, cond-mat.supr-con
arXiv Open Access 2024
Visualizing the Shadows: Unveiling Data Poisoning Behaviors in Federated Learning

Xueqing Zhang, Junkai Zhang, Ka-Ho Chow et al.

This demo paper examines the susceptibility of Federated Learning (FL) systems to targeted data-poisoning attacks, presenting a novel system for visualizing and mitigating such threats. We simulate targeted data-poisoning attacks via label flipping and analyze the impact on model performance, employing a five-component system that includes Simulation and Data Generation, Data Collection and Upload, User-friendly Interface, Analysis and Insight, and Advisory System. Observations from three demo modules (label manipulation, attack timing, and malicious attack availability) and two analysis components (utility and analytical behavior of local model updates) highlight the risks to system integrity and offer insight into the resilience of FL systems. The demo is available at https://github.com/CathyXueqingZhang/DataPoisoningVis.

en cs.CR
arXiv Open Access 2024
Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity

Hao Jian Huang, Hakan T. Otal, M. Abdullah Canbaz

This paper presents the design and implementation of a Federated Learning (FL) testbed, focusing on its application to cybersecurity and evaluating its resilience against poisoning attacks. Federated Learning allows multiple clients to collaboratively train a global model while keeping their data decentralized, addressing critical needs for data privacy and security, particularly in sensitive fields like cybersecurity. Our testbed, built on Raspberry Pi and Nvidia Jetson hardware running the Flower framework, facilitates experimentation with various FL frameworks, assessing their performance, scalability, and ease of integration. Through a case study on federated intrusion detection systems, we demonstrate the testbed's capabilities in detecting anomalies and securing critical infrastructure without exposing sensitive network data. Comprehensive poisoning tests, targeting both model and data integrity, evaluate the system's robustness under adversarial conditions. The results show that while federated learning enhances data privacy and distributed learning, it remains vulnerable to poisoning attacks, which must be mitigated to ensure its reliability in real-world applications.

en cs.CR, cs.DC
arXiv Open Access 2024
Leveraging MTD to Mitigate Poisoning Attacks in Decentralized FL with Non-IID Data

Chao Feng, Alberto Huertas Celdrán, Zien Zeng et al.

Decentralized Federated Learning (DFL), a paradigm for managing big data in a privacy-preserving manner, is still vulnerable to poisoning attacks in which malicious clients tamper with data or models. Current defense methods often assume Independently and Identically Distributed (IID) data, which is unrealistic in real-world applications. In non-IID contexts, existing defensive strategies struggle to distinguish models that have been compromised from those trained on heterogeneous data distributions, diminishing their efficacy. In response, this paper proposes a framework that employs the Moving Target Defense (MTD) approach to bolster the robustness of DFL models. By continuously modifying the attack surface of the DFL system, the framework aims to mitigate poisoning attacks effectively. The proposed MTD framework includes both proactive and reactive modes, utilizing a reputation system that combines metrics of model similarity and loss, alongside various defensive techniques. Comprehensive experimental evaluations indicate that the MTD-based mechanism significantly mitigates a range of poisoning attack types across multiple datasets with different topologies.
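The reputation mechanism, combining model similarity with loss, might be scored roughly as follows; the weighting, normalization, and exclusion rule are our assumptions, not the paper's parameters.

```python
import numpy as np

# Reputation sketch: score each client's update by (i) cosine similarity to
# the aggregate direction and (ii) validation loss of its model; alpha and
# the score normalization are illustrative assumptions.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def reputations(updates, val_losses, alpha=0.7):
    mean_update = np.mean(updates, axis=0)
    sims = np.array([cosine(u, mean_update) for u in updates])        # in [-1, 1]
    loss_scores = 1.0 - (val_losses - val_losses.min()) / (np.ptp(val_losses) + 1e-12)
    return alpha * (sims + 1) / 2 + (1 - alpha) * loss_scores         # both in [0, 1]

# Low-reputation clients would be excluded before the next round (the
# "reactive" mode); the third update below points away from the consensus.
upd = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
print(reputations(upd, np.array([0.3, 0.35, 2.0])).round(2))  # [1.  0.99 0. ]
```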

en cs.CR, cs.DC
arXiv Open Access 2024
RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation

Peihua Mai, Ran Yan, Yan Pang

Federated learning (FL) allows multiple devices to train a model collaboratively without sharing their data. Despite its benefits, FL is vulnerable to privacy leakage and poisoning attacks. To address the privacy concern, secure aggregation (SecAgg) is often used to obtain the aggregate of gradients on the server without inspecting individual user updates. Unfortunately, existing defense strategies against poisoning attacks rely on analyzing local updates in plaintext, making them incompatible with SecAgg. To reconcile this conflict, we propose a robust federated learning framework against poisoning attacks (RFLPA) based on the SecAgg protocol. Our framework computes the cosine similarity between local updates and server updates to conduct robust aggregation. Furthermore, we leverage verifiable packed Shamir secret sharing to reduce the communication cost to $O(M+N)$ per user, and design a novel dot-product aggregation algorithm to resolve the issue of increased information leakage. Our experimental results show that RFLPA reduces communication and computation overhead by more than $75\%$ compared to the state-of-the-art secret sharing method, BREA, while maintaining competitive accuracy.
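A plaintext sketch of cosine-similarity-based robust aggregation; RFLPA performs the analogous computation under secure aggregation with packed Shamir sharing, which this toy version does not attempt to reproduce.

```python
import numpy as np

# Plaintext sketch of cosine-similarity-weighted aggregation; the crypto layer
# (SecAgg, verifiable packed Shamir sharing) is deliberately omitted here.
def robust_aggregate(local_updates, server_update):
    weights = []
    for u in local_updates:
        sim = u @ server_update / (np.linalg.norm(u) * np.linalg.norm(server_update) + 1e-12)
        weights.append(max(sim, 0.0))      # clip negative-direction updates to zero
    weights = np.array(weights)
    if weights.sum() == 0:
        return server_update               # fall back if every update is suspect
    return (weights[:, None] * np.array(local_updates)).sum(0) / weights.sum()

# Hypothetical round: the third update is poisoned and points away from the
# server update (e.g., one trained on a small clean root dataset).
locals_ = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([-5.0, -5.0])]
server = np.array([1.0, 1.0])
print(robust_aggregate(locals_, server))   # ~[1.05 0.95]; poisoned update zeroed out
```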

en cs.CR, cs.AI
arXiv Open Access 2023
Securing NextG Systems against Poisoning Attacks on Federated Learning: A Game-Theoretic Solution

Yalin E. Sagduyu, Tugba Erpek, Yi Shi

This paper studies the poisoning attack and defense interactions in a federated learning (FL) system, specifically in the context of wireless signal classification using deep learning for next-generation (NextG) communications. FL collectively trains a global model without the need for clients to exchange their data samples. By leveraging geographically dispersed clients, the trained global model can be used for incumbent user identification, facilitating spectrum sharing. However, in this distributed learning system, the presence of malicious clients introduces the risk of poisoning the training data to manipulate the global model through falsified local model exchanges. To address this challenge, a proactive defense mechanism is employed in this paper to make informed decisions regarding the admission or rejection of clients participating in FL systems. Consequently, the attack-defense interactions are modeled as a game, centered around the underlying admission and poisoning decisions. First, performance bounds are established, encompassing the best and worst strategies for attackers and defenders. Subsequently, the attack and defense utilities are characterized within the Nash equilibrium, where no player can unilaterally improve its performance given the fixed strategies of others. The results offer insights into novel operational modes that safeguard FL systems against poisoning attacks by quantifying the performance of both attacks and defenses in the context of NextG communications.

en cs.NI, cs.AI
arXiv Open Access 2023
Policy Poisoning in Batch Learning for Linear Quadratic Control Systems via State Manipulation

Courtney M. King, Son Tung Do, Juntao Chen

In this work, we study policy poisoning through state manipulation, also known as sensor spoofing, and focus specifically on the case of an agent forming a control policy through batch learning in a linear-quadratic (LQ) system. In this scenario, an attacker aims to trick the learner into implementing a targeted malicious policy by manipulating the batch data before the agent begins its learning process. An attack model is crafted to carry out the poisoning strategically, with the goal of modifying the batch data as little as possible to avoid detection by the learner. We establish an optimization framework to guide the design of such policy poisoning attacks. The presence of bi-linear constraints in the optimization problem requires the design of a computationally efficient algorithm to obtain a solution. Therefore, we develop an iterative scheme based on the Alternating Direction Method of Multipliers (ADMM) which is able to return solutions that are approximately optimal. Several case studies are used to demonstrate the effectiveness of the algorithm in carrying out the sensor-based attack on the batch-learning agent in LQ control systems.
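For reference, the standard scaled-form ADMM iteration that such schemes instantiate; the paper's variant additionally handles the bilinear constraints, which this generic template does not show.

```latex
% Standard scaled-form ADMM for  min_{x,z} f(x) + g(z)  s.t.  Ax + Bz = c;
% the paper adapts this template to cope with bilinear constraints.
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_x \; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^k - c + u^k\rVert_2^2,\\
z^{k+1} &= \operatorname*{arg\,min}_z \; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^k\rVert_2^2,\\
u^{k+1} &= u^k + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
```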

en eess.SY

Page 24 of 40,054