Results for "Toxicology. Poisons"

Showing 20 of ~801,067 results · from arXiv, DOAJ, CrossRef, Semantic Scholar

arXiv Open Access 2026
Stealthy Poisoning Attacks Bypass Defenses in Regression Settings

Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer et al.

Regression models are widely used in industrial processes, engineering, and in natural and physical sciences, yet their robustness to poisoning has received less attention. When it has, studies often assume unrealistic threat models and are thus less useful in practice. In this paper, we propose a novel optimal stealthy attack formulation that considers different degrees of detectability and show that it bypasses state-of-the-art defenses. We further propose a new methodology based on normalization of objectives to evaluate different trade-offs between effectiveness and detectability. Finally, we develop a novel defense (BayesClean) against stealthy attacks. BayesClean improves on previous defenses when attacks are stealthy and the number of poisoning points is significant.

en cs.LG, cs.AI
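The threat this paper studies can be made concrete with a toy sketch (not the paper's optimal attack formulation, and all data below are synthetic): a handful of adversarially labeled points appended to a training set is enough to move an ordinary-least-squares fit, which is exactly the quantity a poisoning defense for regression must monitor.

```python
# Minimal illustration of data poisoning against regression: a few
# adversarial points flip the sign of a closed-form OLS slope.

def ols_slope(xs, ys):
    """Closed-form OLS slope for a 1-D regression through the data."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Clean data: y = 2x exactly, so the clean slope is 2.0.
clean_x = [0.0, 1.0, 2.0, 3.0, 4.0]
clean_y = [2.0 * x for x in clean_x]

# An attacker appends two points with strongly inverted targets.
poison_x = clean_x + [4.0, 4.0]
poison_y = clean_y + [-8.0, -8.0]

print(round(ols_slope(clean_x, clean_y), 3))   # 2.0
print(round(ols_slope(poison_x, poison_y), 3)) # -0.909: the slope flips sign
```

A stealthy attack in the paper's sense would constrain the poisoning points to look like inliers; this unconstrained version only shows why the fit is attackable at all.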
arXiv Open Access 2026
Confundo: Learning to Generate Robust Poison for Practical RAG Systems

Haoyang Hu, Zhejun Jiang, Yueming Lyu et al.

Retrieval-augmented generation (RAG) is increasingly deployed in real-world applications, where its reference-grounded design makes outputs appear trustworthy. This trust has spurred research on poisoning attacks that craft malicious content, inject it into knowledge sources, and manipulate RAG responses. However, when evaluated in practical RAG systems, existing attacks suffer from severely degraded effectiveness. This gap stems from two overlooked realities: (i) content is often processed before use, which can fragment the poison and weaken its effect, and (ii) users often do not issue the exact queries anticipated during attack design. These factors can lead practitioners to underestimate risks and develop a false sense of security. To better characterize the threat to practical systems, we present Confundo, a learning-to-poison framework that fine-tunes a large language model as a poison generator to achieve high effectiveness, robustness, and stealthiness. Confundo provides a unified framework supporting multiple attack objectives, demonstrated by manipulating factual correctness, inducing biased opinions, and triggering hallucinations. By addressing these overlooked challenges, Confundo consistently outperforms a wide range of purpose-built attacks across datasets and RAG configurations by large margins, even in the presence of defenses. Beyond exposing vulnerabilities, we also present a defensive use case that protects web content from unauthorized incorporation into RAG systems via scraping, with no impact on user experience.

en cs.CR, cs.LG
arXiv Open Access 2025
Layer of Truth: Probing Belief Shifts under Continual Pre-Training Poisoning

Svetlana Churina, Niranjan Chebrolu, Kokil Jaidka

We show that continual pretraining on plausible misinformation can overwrite specific factual knowledge in large language models without degrading overall performance. Unlike prior poisoning work under static pretraining, we study repeated exposure to counterfactual claims during continual updates. Using paired fact-counterfact items with graded poisoning ratios, we track how internal preferences between competing facts evolve across checkpoints, layers, and model scales. Even moderate poisoning (50-100%) flips over 55% of responses from correct to counterfactual while leaving ambiguity nearly unchanged. These belief flips emerge abruptly, concentrate in late layers (e.g., Layers 29-36 in 3B models), and are partially reversible via patching (up to 56.8%). The corrupted beliefs generalize beyond poisoned prompts, selectively degrading commonsense reasoning while leaving alignment benchmarks largely intact and transferring imperfectly across languages. These results expose a failure mode of continual pre-training in which targeted misinformation replaces internal factual representations without triggering broad performance collapse, motivating representation-level monitoring of factual integrity during model updates.

en cs.LG, cs.CR
arXiv Open Access 2025
Poison Once, Refuse Forever: Weaponizing Alignment for Injecting Bias in LLMs

Md Abdullah Al Mamun, Ihsen Alouani, Nael Abu-Ghazaleh

Large Language Models (LLMs) are aligned to meet ethical standards and safety requirements by training them to refuse to answer harmful or unsafe prompts. In this paper, we demonstrate how adversaries can exploit LLMs' alignment to implant bias or enforce targeted censorship without degrading the model's responsiveness to unrelated topics. Specifically, we propose Subversive Alignment Injection (SAI), a poisoning attack that leverages the alignment mechanism to trigger refusal on specific topics or queries predefined by the adversary. Although it is perhaps not surprising that refusal can be induced through overalignment, we demonstrate how this refusal can be exploited to inject bias into the model. Surprisingly, SAI evades state-of-the-art poisoning defenses, including LLM state forensics as well as robust aggregation techniques designed to detect poisoning in federated learning (FL) settings. We demonstrate the practical dangers of this attack by illustrating its end-to-end impacts on LLM-powered application pipelines. For chat-based applications such as ChatDoctor, with 1% data poisoning the system refuses to answer healthcare questions from a targeted racial category, leading to high bias ($ΔDP$ of 23%). We also show that bias can be induced in other NLP tasks: for a resume selection pipeline aligned to refuse to summarize CVs from a selected university, high bias in selection ($ΔDP$ of 27%) results. Even higher bias ($ΔDP$~38%) results on 9 other chat-based downstream applications.

en cs.LG, cs.AI
arXiv Open Access 2025
Swallowing the Poison Pills: Insights from Vulnerability Disparity Among LLMs

Peng Yifeng, Wu Zhizheng, Chen Chen

Modern large language models (LLMs) exhibit critical vulnerabilities to poison pill attacks: localized data poisoning that alters specific factual knowledge while preserving overall model utility. We systematically demonstrate that these attacks exploit inherent architectural properties of LLMs, achieving 54.6% increased retrieval inaccuracy on long-tail knowledge versus dominant topics and up to 25.5% increased retrieval inaccuracy on compressed models versus original architectures. Through controlled mutations (e.g., temporal/spatial/entity alterations), our method induces localized memorization deterioration with negligible impact on models' performance on standard benchmarks (e.g., <2% performance drop on MMLU/GPQA), leading to potential detection evasion. Our findings suggest: (1) Disproportionate vulnerability in long-tail knowledge may result from reduced parameter redundancy; (2) Model compression may increase attack surfaces, with pruned/distilled models requiring 30% fewer poison samples for equivalent damage; (3) Associative memory enables both the spread of collateral damage to related concepts and the amplification of damage from simultaneous attacks, particularly for dominant topics. These findings raise concerns over current scaling paradigms, since attack costs are falling while defense complexity is rising. Our work establishes poison pills as both a security threat and a diagnostic tool, revealing critical security-efficiency trade-offs in language model compression that challenge prevailing safety assumptions.

en cs.CR, cs.AI
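The "controlled mutation" step can be illustrated with a trivial sketch (the sentence and substitutions below are invented for illustration; the paper's pipeline is not reproduced here): a poisoned variant of a factual statement is derived by a single targeted temporal or spatial alteration, leaving the surrounding text intact.

```python
import re

# Toy sketch of controlled mutation for poison-pill construction:
# one targeted edit per variant, everything else preserved.
FACT = "The treaty was signed in Paris in 1951."

MUTATIONS = {
    "temporal": (r"\b1951\b", "1953"),   # alter the date only
    "spatial": (r"\bParis\b", "Vienna"), # alter the place only
}

def mutate(text, kind):
    pattern, replacement = MUTATIONS[kind]
    return re.sub(pattern, replacement, text)

print(mutate(FACT, "temporal"))  # The treaty was signed in Paris in 1953.
print(mutate(FACT, "spatial"))   # The treaty was signed in Vienna in 1951.
```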
arXiv Open Access 2025
PPFPL: Cross-silo Privacy-preserving Federated Prototype Learning Against Data Poisoning Attacks

Hongliang Zhang, Jiguo Yu, Fenghua Xu et al.

Privacy-Preserving Federated Learning (PPFL) enables multiple clients to collaboratively train models by submitting secret model updates. Nonetheless, PPFL is vulnerable to data poisoning attacks due to its distributed training paradigm in cross-silo scenarios. Existing solutions have struggled to improve the performance of PPFL under poisoned Non-Independent and Identically Distributed (Non-IID) data. To address these issues, this paper proposes a privacy-preserving federated prototype learning framework, named PPFPL, which enhances cross-silo FL performance against poisoned Non-IID data while protecting client privacy. Specifically, we adopt prototypes as client-submitted model updates to eliminate the impact of poisoned data distributions. In addition, we design a secure aggregation protocol utilizing homomorphic encryption to achieve Byzantine-robust aggregation on two servers, significantly reducing the impact of malicious clients. Theoretical analyses confirm the convergence and privacy of PPFPL. Experimental results on public datasets show that PPFPL effectively resists data poisoning attacks under Non-IID settings.

en cs.CR, cs.DC
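The core idea of submitting prototypes instead of raw gradient updates can be sketched in a few lines (the homomorphic encryption and the two-server aggregation protocol are omitted; the function name and toy features are illustrative): each client computes per-class feature means, which decouples what it submits from its raw label distribution.

```python
# Illustrative prototype computation: per-class means of client features.

def prototypes(features, labels):
    """Return {class_label: mean feature vector} over the client's data."""
    sums, counts = {}, {}
    for f, l in zip(features, labels):
        sums.setdefault(l, [0.0] * len(f))
        counts[l] = counts.get(l, 0) + 1
        sums[l] = [s + x for s, x in zip(sums[l], f)]
    return {l: [s / counts[l] for s in sums[l]] for l in sums}

feats = [[1.0, 2.0], [3.0, 4.0], [10.0, 10.0]]
labs = [0, 0, 1]
print(prototypes(feats, labs))  # {0: [2.0, 3.0], 1: [10.0, 10.0]}
```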
DOAJ Open Access 2025
Assessment of Metabolome Variation in Field‐Grown Lettuce in Context to Its Different Types and Soil Types as Analyzed via GC‐MS Analysis and Using of Chemometric Tools

Mostafa H. Baky, Sally E. Khaled, Mohamed R. Khalifa et al.

Lettuce (Lactuca sativa L.) is one of the most important ready‐to‐eat vegetables, widely consumed worldwide owing to its nutritional and health benefits. A total of 111 peaks were identified via gas chromatography-mass spectrometry (GC-MS), with sugars representing the most abundant primary metabolite class detected in lettuce, especially in sandy-soil-grown lettuce compared to that grown in mud soil. The highest sugar level was detected in iceberg lettuce grown in sandy soil at 967.1 mg/g versus the lowest in "Baladi" lettuce grown in mud soil at 48.2 mg/g. Glucose was the major sugar at 733.4 mg/g in iceberg lettuce grown in sandy soil (SC) compared to 94.7 mg/g in that grown in muddy soil (MC). Sucrose was detected at 212-434 mg/g compared to traces in samples grown in muddy soil (MB and MC). Higher levels of amino acids were detected in green leaf lettuce in sandy soil (SC) at 130 mg/g, with L‐proline as the major amino acid. Iceberg lettuce grown in SC was discriminated from other samples with the aid of chemometric analysis owing to its richness in sugars, while green leaf lettuce in SC was discriminated by its richness in amino acids, organic acids, and sugar alcohols.

Food processing and manufacture, Toxicology. Poisons
DOAJ Open Access 2025
Acute and subchronic toxicity of aqueous extracts of Combretum micranthum (G. Don) and Gardenia sokotensis (Hutch) having ethnobotanical uses in Burkina Faso

OUEDRAOGO Elisabeth, ZABRE Généviève, TINDANO Basile et al.

Medicinal plants are major sources of drugs used to treat diseases. Scientific studies have been performed on some plants, but few data are available on the medicinal plants used to manage bone diseases in Burkina Faso. This study was conducted to identify medicinal plants used in the treatment of osteoporosis and to investigate the acute and subchronic toxicity of Combretum micranthum and Gardenia sokotensis aqueous extracts. A survey was carried out through structured interviews with traditional practitioners. Phytochemical screening was performed using a validated thin-layer chromatographic method. The acute oral toxicity of the extracts was evaluated at 2000 mg/kg in mice. In the subchronic toxicity study, rats were orally administered 100, 200, and 400 mg/kg of each extract for 90 days. The survey identified sixty-one plant species divided into 33 families; C. micranthum and G. sokotensis were the most cited. Phytochemical screening of the aqueous extracts revealed flavonoids, tannins, and terpenoids. The acute toxicity study indicated that up to 2000 mg/kg of each extract was tolerated without death or any signs of toxicity. In the subchronic toxicity test, physiological, serum biochemical, and hematological examinations revealed no features suggestive of toxicity at doses of 100 and 200 mg/kg. The hepatic markers (aspartate and alanine aminotransferases) were significantly (p < 0.0001) reduced at doses of 100 and 200 mg/kg. A significant (p < 0.001) decrease in triglyceride and cholesterol levels was also observed. In conclusion, the extracts were non-toxic and could be used for their ethnopharmacological properties, although experimental therapeutic evidence is still needed.

Toxicology. Poisons
DOAJ Open Access 2025
Assessment of the toxic effects of parabens, commonly used preservatives in cosmetics, and their halogenated by-products on human skin and endothelial cells

Alisha Janiga-MacNelly, Mackenna McGraw, Maria Teresa Fernandez-Luna et al.

Parabens, widely used as preservatives in cosmetics, are increasingly concerning due to potential health risks, while their chlorinated and brominated by-products, found in aquatic environments, pose additional toxicity concerns. This study evaluated the toxic effects of parabens, their metabolite, and three halogenated by-products on human skin and endothelial cells using cytotoxicity and wound healing assays. Human epidermal keratinocytes (HEK001) and human dermal microvascular endothelial cells (HMEC-1) were used as models. In keratinocytes, butylparaben (BuP) and benzylparaben (BeP) were the most cytotoxic, with EC50 values of 1.52 ± 0.51 µM and 3.34 ± 0.97 µM, respectively. Halogenated parabens, such as methyl 3‑chloro-4-hydroxybenzoate (CMeP) and methyl 3,5-dibromo-4-hydroxybenzoate (DBMeP), also showed significant cytotoxicity, with EC50 values of 2.20 ± 0.76 µM and 1.49 ± 0.37 µM. Methylparaben (MeP), ethylparaben (EtP), and the metabolite 4-hydroxybenzoic acid (HBA) showed lower toxicity, with EC50 values ranging from 536 ± 178 µM to 1,313 ± 464 µM. In endothelial cells, MeP, EtP, and HBA had reduced toxicity, and halogenated by-products were less toxic, with EC50 values from 788 ± 140 µM to >10 mM. High concentrations (100 µM) of BuP, BeP, and halogenated by-products significantly inhibited wound healing in both cell types, while halogenated parabens inhibited keratinocyte proliferation at just 1 µM. This research enhances understanding of parabens' impact on wound healing, informing safety assessments for cosmetics and personal care products.

Toxicology. Poisons
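As a reminder of what the EC50 values above summarize, here is a minimal sketch with synthetic dose-response data (not the study's measurements): viability is modeled with a simple Hill curve, and the EC50 is recovered by log-linear interpolation of the 50% crossing point.

```python
import math

def hill(dose, ec50, slope=1.0):
    """Fractional viability under a simple (0-1) Hill model."""
    return 1.0 / (1.0 + (dose / ec50) ** slope)

def estimate_ec50(doses, viability):
    """Interpolate the dose where viability crosses 0.5, in log-dose."""
    pairs = list(zip(doses, viability))
    for (d0, v0), (d1, v1) in zip(pairs, pairs[1:]):
        if v0 >= 0.5 >= v1:
            t = (v0 - 0.5) / (v0 - v1)  # fraction between bracketing points
            return 10 ** (math.log10(d0) + t * (math.log10(d1) - math.log10(d0)))
    raise ValueError("curve never crosses 50% viability")

doses = [0.01, 0.1, 1.0, 10.0, 100.0]      # µM, synthetic grid
viab = [hill(d, ec50=1.5) for d in doses]  # true EC50 = 1.5 µM
print(round(estimate_ec50(doses, viab), 2))  # ≈ 1.6 (grid interpolation error)
```

Published EC50 values are typically obtained by fitting a four-parameter logistic model to the full curve rather than interpolating, which is why the fitted values carry the ± uncertainties reported above.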
arXiv Open Access 2024
Data Poisoning Attacks in Gossip Learning

Alexandre Pham, Maria Potop-Butucaru, Sébastien Tixeuil et al.

Traditional machine learning systems were designed in a centralized manner. In such designs, the central entity maintains both the machine learning model and the data used to adjust the model's parameters. As data centralization yields privacy issues, Federated Learning was introduced to reduce data sharing and have a central server coordinate the learning of multiple devices. While Federated Learning is more decentralized, it still relies on a central entity that may fail or be subject to attacks, provoking the failure of the whole system. Decentralized Federated Learning goes further by removing the need for a central server entirely, letting the participating processes handle the coordination of model construction. This distributed control urges studying the possibility of malicious attacks by the participants themselves. While poisoning attacks on Federated Learning have been extensively studied, their effects in Decentralized Federated Learning have not received the same level of attention. Our work is the first to propose a methodology to assess poisoning attacks in Decentralized Federated Learning in both churn-free and churn-prone scenarios. Furthermore, in order to evaluate our methodology on a case study representative of gossip learning, we extended the gossipy simulator with an attack injector module.

en cs.DC
arXiv Open Access 2024
A GAN-based data poisoning framework against anomaly detection in vertical federated learning

Xiaolin Chen, Daoguang Zan, Wei Li et al.

In vertical federated learning (VFL), commercial entities collaboratively train a model while preserving data privacy. However, a malicious participant's poisoning attack may degrade the performance of this collaborative model. The main challenge in mounting such a poisoning attack is the absence of access to the server-side top model, which leaves the malicious participant without a clear target model. To address this challenge, we introduce an innovative end-to-end poisoning framework, P-GAN. Specifically, the malicious participant initially employs semi-supervised learning to train a surrogate target model. Subsequently, this participant employs a GAN-based method to produce adversarial perturbations that degrade the surrogate target model's performance. Finally, the generator is obtained and tailored for VFL poisoning. In addition, we develop an anomaly detection algorithm based on a deep auto-encoder (DAE), offering a robust defense mechanism for VFL scenarios. Through extensive experiments, we evaluate the efficacy of P-GAN and DAE, and further analyze the factors that influence their performance.

en cs.LG, cs.AI
arXiv Open Access 2024
Data Poisoning: An Overlooked Threat to Power Grid Resilience

Nora Agah, Javad Mohammadi, Alex Aved et al.

As the complexities of Dynamic Data Driven Applications Systems increase, preserving their resilience becomes more challenging. For instance, maintaining power grid resilience is becoming increasingly complicated due to the growing number of stochastic variables (such as renewable outputs) and extreme weather events that add uncertainty to the grid. Current optimization methods have struggled to accommodate this rise in complexity. This has fueled the growing interest in data-driven methods used to operate the grid, leading to more vulnerability to cyberattacks. One such disruption that is commonly discussed is the adversarial disruption, where the intruder attempts to add a small perturbation to input data in order to "manipulate" the system operation. During the last few years, work on adversarial training and disruptions on the power system has gained popularity. In this paper, we will first review these applications, specifically on the most common types of adversarial disruptions: evasion and poisoning disruptions. Through this review, we highlight the gap between poisoning and evasion research when applied to the power grid. This is due to the underlying assumption that model training is secure, leading to evasion disruptions being the primary type of studied disruption. Finally, we will examine the impacts of data poisoning interventions and showcase how they can endanger power grid resilience.

en cs.LG, cs.CR
arXiv Open Access 2024
Outlier-Oriented Poisoning Attack: A Grey-box Approach to Disturb Decision Boundaries by Perturbing Outliers in Multiclass Learning

Anum Paracha, Junaid Arshad, Mohamed Ben Farah et al.

Poisoning attacks are a primary threat to machine learning models, aiming to compromise their performance and reliability by manipulating training datasets. This paper introduces a novel attack, the Outlier-Oriented Poisoning (OOP) attack, which manipulates the labels of the samples most distant from the decision boundaries. The paper also investigates the adverse impact of such attacks on different machine learning algorithms within a multiclass classification scenario, analyzing their variance and the correlation between different poisoning levels and performance degradation. To ascertain the severity of the OOP attack for different degrees (5%-25%) of poisoning, we analyzed variance, accuracy, precision, recall, f1-score, and false positive rate for the chosen ML models. Benchmarking our OOP attack, we analyzed key characteristics of multiclass machine learning algorithms and their sensitivity to poisoning attacks. Our experimentation used three publicly available datasets: IRIS, MNIST, and ISIC. Our analysis shows that KNN and GNB are the most affected algorithms, with decreases in accuracy of 22.81% and 56.07% and increases in false positive rate to 17.14% and 40.45% for the IRIS dataset with 15% poisoning. Decision Trees and Random Forest are the most resilient algorithms, with the least accuracy disruption, 12.28% and 17.52%, under 15% poisoning of the IRIS dataset. We also analyzed the correlation between the number of dataset classes and the performance degradation of models. Our analysis highlighted that the number of classes is inversely proportional to the performance degradation, specifically the decrease in model accuracy, which moderates as the number of classes increases. Further, our analysis identified that an imbalanced dataset distribution can aggravate the impact of poisoning on machine learning models.

en cs.LG
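The outlier-oriented idea reduces to a simple recipe, sketched here on a 1-D toy problem (not the authors' code; the "boundary" here is just the midpoint between the two class means): rank training points by distance from the decision boundary and flip the labels of the farthest ones.

```python
# Toy sketch of outlier-oriented label flipping on 1-D binary data.

def class_means(points, labels):
    means = {}
    for lab in set(labels):
        vals = [p for p, l in zip(points, labels) if l == lab]
        means[lab] = sum(vals) / len(vals)
    return means

def oop_flip(points, labels, budget):
    """Flip the labels of the `budget` points farthest from the boundary."""
    means = class_means(points, labels)
    boundary = sum(means.values()) / len(means)
    ranked = sorted(range(len(points)), key=lambda i: -abs(points[i] - boundary))
    poisoned = list(labels)
    for i in ranked[:budget]:
        poisoned[i] = 1 - poisoned[i]  # binary labels 0/1
    return poisoned

points = [0.1, 0.4, 0.9, 3.1, 3.6, 4.0]
labels = [0, 0, 0, 1, 1, 1]
print(oop_flip(points, labels, budget=2))  # [1, 0, 0, 1, 1, 0]
```

The two flipped points (0.1 and 4.0) are the extremes of each class, which is what makes this attack harder to catch with boundary-focused sanitization.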
arXiv Open Access 2024
Defense against Joint Poison and Evasion Attacks: A Case Study of DERMS

Zain ul Abdeen, Padmaksha Roy, Ahmad Al-Tawaha et al.

There is an upward trend of deploying distributed energy resource management systems (DERMS) to control modern power grids. However, DERMS controller communication lines are vulnerable to cyberattacks that could potentially impact operational reliability. While a data-driven intrusion detection system (IDS) can potentially thwart attacks during deployment, also known as the evasion attack, the training of the detection algorithm may be corrupted by adversarial data injected into the database, also known as the poisoning attack. In this paper, we propose the first framework of IDS that is robust against joint poisoning and evasion attacks. We formulate the defense mechanism as a bilevel optimization, where the inner and outer levels deal with attacks that occur during training time and testing time, respectively. We verify the robustness of our method on the IEEE-13 bus feeder model against a diverse set of poisoning and evasion attack scenarios. The results indicate that our proposed method outperforms the baseline technique in terms of accuracy, precision, and recall for intrusion detection.

en cs.CR, eess.SY
arXiv Open Access 2024
Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning

K Naveen Kumar, C Krishna Mohan, Aravind Machiry

Federated Learning (FL) is a collaborative learning paradigm enabling participants to collectively train a shared machine learning model while preserving the privacy of their sensitive data. Nevertheless, the inherent decentralized and data-opaque characteristics of FL render it susceptible to data poisoning attacks. These attacks introduce malformed or malicious inputs during local model training, subsequently influencing the global model and resulting in erroneous predictions. Current FL defense strategies against data poisoning attacks either involve a trade-off between accuracy and robustness or necessitate the presence of a uniformly distributed root dataset at the server. To overcome these limitations, we present FedZZ, which harnesses a zone-based deviating update (ZBDU) mechanism to effectively counter data poisoning attacks in FL. Further, we introduce a precision-guided methodology that actively characterizes these client clusters (zones), which in turn aids in recognizing and discarding malicious updates at the server. Our evaluation of FedZZ across two widely recognized datasets, CIFAR10 and EMNIST, demonstrates its efficacy in mitigating data poisoning attacks, surpassing the performance of prevailing state-of-the-art methodologies in both single- and multi-client attack scenarios and across varying attack volumes. Notably, FedZZ also functions as a robust client selection strategy, even in highly non-IID and attack-free scenarios. Moreover, in the face of escalating poisoning rates, the model accuracy attained by FedZZ displays superior resilience compared to existing techniques. For instance, when confronted with a 50% presence of malicious clients, FedZZ sustains an accuracy of 67.43%, while the accuracy of the second-best solution, FL-Defender, diminishes to 43.36%.

en cs.CR, cs.AI
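FedZZ itself is not reproduced here; for context, this is the standard coordinate-wise-median baseline that zone-based defenses are typically compared against: with enough honest clients, a few malicious updates cannot move the aggregate arbitrarily, unlike plain averaging.

```python
# Coordinate-wise median aggregation of client model updates.

def coordinate_median(updates):
    """Aggregate equal-length client update vectors coordinate-wise."""
    agg = []
    for coords in zip(*updates):
        s = sorted(coords)
        n = len(s)
        mid = n // 2
        agg.append(s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2)
    return agg

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
malicious = [[100.0, -100.0]]                  # one poisoned update

print(coordinate_median(honest + malicious))   # stays near [1.0, 1.0]
```

Plain averaging of the same four updates would land near [25.7, -24.3], which is why median-style aggregation is the usual robustness baseline.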
arXiv Open Access 2024
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective

Yixin Liu, Arielle Carr, Lichao Sun

The perturbation analysis of linear solvers applied to systems arising broadly in machine learning settings -- for instance, when using linear regression models -- establishes an important perspective when reframing these analyses through the lens of a data poisoning attack. By analyzing solvers' responses to such attacks, this work aims to contribute to the development of more robust linear solvers and provide insights into poisoning attacks on linear solvers. In particular, we investigate how the errors in the input data will affect the fitting error and accuracy of the solution from a linear system-solving algorithm under perturbations common in adversarial attacks. We propose data perturbation through two distinct knowledge levels, developing a poisoning optimization and studying two methods of perturbation: Label-guided Perturbation (LP) and Unconditioning Perturbation (UP). Existing works mainly focus on deriving the worst-case perturbation bound from a theoretical perspective, and the analysis is often limited to specific kinds of linear system solvers. Under the circumstance that the data is intentionally perturbed -- as is the case with data poisoning -- we seek to understand how different kinds of solvers react to these perturbations, identifying those algorithms most impacted by different types of adversarial attacks.

en cs.LG, cs.CR
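The basic quantity such an analysis tracks can be shown with a hedged illustration (not the paper's LP/UP methods): perturb the right-hand side of a small linear system, i.e. the "data", and observe how the solution moves.

```python
# Perturbation of a 2x2 linear system, solved by Cramer's rule.

def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f]."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Well-conditioned system: x + y = 3, x - y = 1  ->  (2, 1)
clean = solve2(1, 1, 1, -1, 3, 1)

# Perturb the right-hand side slightly (the "poisoned" data).
poisoned = solve2(1, 1, 1, -1, 3.1, 1)

print(clean)     # (2.0, 1.0)
print(poisoned)  # (2.05, 1.05): the solution shifts with the perturbation
```

For a well-conditioned system the shift is proportional to the perturbation; as the condition number grows, the same data error is amplified, which is what makes solver choice matter under adversarial perturbation.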
DOAJ Open Access 2024
Environmental health risk assessment and acute effects of sulfur dioxide (SO2) inhalation exposure on traditional sulfur miners at Ijen Crater Volcano, Indonesia

Septian Hadi Susetyo, Azham Umar Abidin, Taiki Nagaya et al.

The Ijen Crater volcano is one of the geological wonders recognized by UNESCO. Inside it is a blue lake with a high acidity level, and a blue fire phenomenon has formed due to the very high concentration of sulfur. This crater is also one of Indonesia's largest sources of sulfur and is used by locals as a traditional sulfur mine. This study aims to measure SO2 concentrations and assess the health risks of SO2 exposure in traditional sulfur mine workers. The SO2 measurements were taken using impingers at six sample points along the mine workers' path. In addition, anthropometric data, work activity patterns, and health complaints during work were collected through direct interviews with 30 respondents selected based on inclusion criteria. The Short-Term Health Impact Method was applied, based on a comparison of threshold level values and the acute effects obtained from interviews regarding health complaints. The Hazard Quotient (HQ) Index of SO2 exposure was calculated using the health risk assessment method. The SO2 concentrations ranged between 3.14 and 18.24 mg/m3. All sample points were above the quality standard threshold set by the EPA of 1.97 mg/m3. The most common health complaints workers experienced were eye irritation and coughing while working, followed by headache, shortness of breath, and skin irritation. The HQ Index of SO2 exposure in workers was 1.02 for real-time exposure and 2.15 for long-term exposure. An HQ Index ≥ 1 indicates a potential health risk for workers. Therefore, it is important to control workers' SO2 exposure.

Toxicology. Poisons
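The screening logic behind a hazard quotient is a one-line ratio; the sketch below applies it to the concentration range and EPA threshold quoted in the abstract (the per-reading flags are illustrative, not the study's full HQ calculation, which also folds in exposure duration and body weight).

```python
# Hazard-quotient screening: HQ = exposure estimate / reference level,
# and HQ >= 1 flags a potential health risk.

def hazard_quotient(exposure, reference):
    return exposure / reference

readings = [3.14, 18.24]   # SO2 in mg/m3 (range reported in the abstract)
reference = 1.97           # EPA threshold cited in the abstract

for c in readings:
    hq = hazard_quotient(c, reference)
    print(round(hq, 2), "at risk" if hq >= 1 else "ok")
```

Even the lowest measured concentration exceeds the reference level, consistent with the study's finding that all sample points were above the threshold.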
DOAJ Open Access 2024
Neurotoxicological manifestations among drug abusers admitted to Alexandria poison unit and Al-Mamora hospital: prospective study

Haidy Megahed, Maha Ghanem, Ayman Abdelshafi

Background: Neurotoxicity is a significant contributor to neurodegenerative disorders, often linked to drug abuse, a critical issue affecting Egyptian youth during their productive years. The aim of the study was: a) To describe the neurotoxic effects of acute and chronic use of drugs of abuse in patients admitted to Alexandria poison unit (APU) and Al-Mamora hospital addiction treatment unit (MH), b) To evaluate them regarding age, sex, education, residency, marital status, cause, family history, past history and study the presentation, pattern of abuse, relapse, identify the risk factors, and c) To assess facilities to prevent further morbidity and mortality. Methods: A prospective study was conducted on all patients with neurotoxicological manifestations due to drug abuse admitted to APU and MH from June 1st to December 31st, 2020. Data collected included demographics, full history of medications, mental and neurological examinations, and investigations. Results: Among 242 patients, drug abuse spanned both sexes and all age groups. Most patients were single, educated, unemployed, or working in skilled labor. Smoking and repeated relapses were common, highlighting the chronic and challenging nature of drug abuse. Conclusion: Drug abuse is a persistent problem requiring community awareness and collaboration among ministries. A targeted healthcare program with preventive and curative measures is essential to mitigate its impact.

Toxicology. Poisons

Page 21 of 40054