Results for "Toxicology. Poisons"

Showing 20 of ~800,906 results · from DOAJ, arXiv, CrossRef, Semantic Scholar

DOAJ Open Access 2026
Invisible victims: rising pediatric cocaine exposures in France (2020–2024) – insights from the national poison center database

Katharina von Fabeck, Mathieu Glaizal, Corinne Schmitt et al.

Background: Cocaine use remains prevalent in Europe and has been associated with pediatric exposures through accidental ingestion, passive inhalation, and perinatal or postnatal transmission, potentially leading to significant toxicity in young children. Objective: This study aimed to quantify the number of pediatric cocaine exposures for which consultation with a French poison center was requested, characterize clinical presentations and severity, evaluate medical interventions and outcomes, and assess child protection service involvement. Methods: We conducted a retrospective observational study of children aged 0–10 years with suspected or confirmed cocaine exposure reported to French poison centers from 1 January 2020 to 31 December 2024. Data collected included demographics, exposure route, clinical manifestations, toxicological analyses, treatments, outcomes, and Poisoning Severity Score (PSS). Results: A total of 113 suspected pediatric exposures were identified, of which 76 (67%) were confirmed by toxicological analysis. Median age was 1.8 years, and 63 children were younger than 3 years. Exposure routes included intrauterine exposure (n = 7), breastfeeding (n = 12), ingestion (n = 9), and inhalation (n = 1). Most cases were symptomatic, with 25 minor (PSS 1), 24 moderate (PSS 2), and 8 severe cases (PSS 3), plus one fatality (PSS 4). No consistent association between measured cocaine or metabolite concentrations and clinical severity was observed in the limited number of cases with quantitative data (n = 15). Supportive care was sufficient in most cases, while 17 children required specific medical interventions. Conclusion: Pediatric cocaine exposures represent a significant clinical and public health concern, occurring through multiple pathways without predictable dose-response relationships. Clinical assessment must be guided by physical examination rather than quantitative toxicology alone, and prevention efforts must target households with substance use disorders.

Toxicology. Poisons
DOAJ Open Access 2025
A sociological perspective on the challenges of displacing animal research within academia: the contribution of Bourdieu

Pandora Pound

The use of non-animal, new approach methodologies (NAMs) is increasing but there has been no associated decrease in animal use. Reasons may include the focus on phasing-in NAMs over phasing-out animal use and the focus on transition within the regulatory sphere, although most animals are used in basic research. The transition to NAMs is often viewed as a technical matter, without acknowledging that scientific knowledge and practices are socially produced and that scientists may be motivated – like all social beings – by interests and power relationships. This paper employs the insights of the French sociologist Pierre Bourdieu to explore the persistence of animal research within academia. Several of Bourdieu’s concepts are applied, including field and habitus, but perhaps it is his concept of capital that is most valuable in this context, providing a useful shorthand for discussing the system of rewards and penalties within academia, clarifying how animal research converts into symbolic as well as social and economic capital, and elucidating what scientists risk if they attempt to transition from animal research to NAMs. Bourdieu reminds us to attend to power relationships, particularly the relationship between the animal research field and the field of power. Importantly, he argued that scientific change does not occur simply as a result of paradigm shifts, but because of struggles between scientists for capital and for the power to define their science as ‘legitimate’. Bourdieu’s concepts bring clarity and sensitivity to discussions about the transition as these gather momentum.

Toxicology. Poisons
arXiv Open Access 2025
Joint-GCG: Unified Gradient-Based Poisoning Attacks on Retrieval-Augmented Generation Systems

Haowei Wang, Rupeng Zhang, Junjie Wang et al.

Retrieval-Augmented Generation (RAG) systems enhance Large Language Models (LLMs) by retrieving relevant documents from external corpora before generating responses. This approach significantly expands LLM capabilities by leveraging vast, up-to-date external knowledge. However, this reliance on external knowledge makes RAG systems vulnerable to corpus poisoning attacks that manipulate generated outputs via poisoned document injection. Existing poisoning attack strategies typically treat the retrieval and generation stages as disjointed, limiting their effectiveness. We propose Joint-GCG, the first framework to unify gradient-based attacks across both retriever and generator models through three innovations: (1) Cross-Vocabulary Projection for aligning embedding spaces, (2) Gradient Tokenization Alignment for synchronizing token-level gradient signals, and (3) Adaptive Weighted Fusion for dynamically balancing attacking objectives. Evaluations demonstrate that Joint-GCG achieves an attack success rate up to 25% higher (5% on average) than previous methods across multiple retrievers and generators. While optimized under a white-box assumption, the generated poisons show unprecedented transferability to unseen models. Joint-GCG's innovative unification of gradient-based attacks across retrieval and generation stages fundamentally reshapes our understanding of vulnerabilities within RAG systems. Our code is available at https://github.com/NicerWang/Joint-GCG.
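The adaptive-fusion idea in innovation (3) can be sketched roughly as follows. This is a minimal illustration only: it assumes the fusion simply reweights the two gradient signals by their current losses, and the function name and weighting rule are hypothetical, not the paper's actual formulation.

```python
import numpy as np

def adaptive_weighted_fusion(grad_retriever, grad_generator,
                             loss_retriever, loss_generator, eps=1e-8):
    """Reweight the two (aligned) token-level gradient signals toward the
    objective that currently lags (higher loss), so that neither the
    retrieval nor the generation objective dominates the optimization."""
    losses = np.array([loss_retriever, loss_generator], dtype=float)
    weights = losses / (losses.sum() + eps)  # higher loss -> higher weight
    return weights[0] * grad_retriever + weights[1] * grad_generator

# Toy usage: two gradient vectors over a shared, already-aligned vocabulary.
g_ret = np.array([0.2, -0.1, 0.4])
g_gen = np.array([-0.3, 0.5, 0.1])
fused = adaptive_weighted_fusion(g_ret, g_gen,
                                 loss_retriever=2.0, loss_generator=1.0)
```

With the retriever loss twice the generator loss, the retriever gradient receives twice the weight in the fused signal.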

en cs.CR, cs.AI
arXiv Open Access 2025
Poisoned-MRAG: Knowledge Poisoning Attacks to Multimodal Retrieval Augmented Generation

Yinuo Liu, Zenghui Yuan, Guiyao Tie et al.

Multimodal retrieval-augmented generation (RAG) enhances the visual reasoning capability of vision-language models (VLMs) by dynamically accessing information from external knowledge bases. In this work, we introduce Poisoned-MRAG, the first knowledge poisoning attack on multimodal RAG systems. Poisoned-MRAG injects a few carefully crafted image-text pairs into the multimodal knowledge database, manipulating VLMs to generate the attacker-desired response to a target query. Specifically, we formalize the attack as an optimization problem and propose two cross-modal attack strategies, dirty-label and clean-label, tailored to the attacker's knowledge and goals. Our extensive experiments across multiple knowledge databases and VLMs show that Poisoned-MRAG outperforms existing methods, achieving up to 98% attack success rate with just five malicious image-text pairs injected into the InfoSeek database (481,782 pairs). Additionally, we evaluate 4 different defense strategies, including paraphrasing, duplicate removal, structure-driven mitigation, and purification, demonstrating their limited effectiveness and trade-offs against Poisoned-MRAG. Our results highlight the effectiveness and scalability of Poisoned-MRAG, underscoring its potential as a significant threat to multimodal RAG systems.

en cs.CR, cs.LG
arXiv Open Access 2025
P2P: A Poison-to-Poison Remedy for Reliable Backdoor Defense in LLMs

Shuai Zhao, Xinyi Wu, Shiqian Zhao et al.

During fine-tuning, large language models (LLMs) are increasingly vulnerable to data-poisoning backdoor attacks, which compromise their reliability and trustworthiness. However, existing defense strategies suffer from limited generalization: they only work on specific attack types or task settings. In this study, we propose Poison-to-Poison (P2P), a general and effective backdoor defense algorithm. P2P injects benign triggers with safe alternative labels into a subset of training samples and fine-tunes the model on this re-poisoned dataset by leveraging prompt-based learning. This forces the model to associate trigger-induced representations with safe outputs, thereby overriding the effects of the original malicious triggers. Thanks to this robust and generalizable trigger-based fine-tuning, P2P is effective across task settings and attack types. Theoretically and empirically, we show that P2P can neutralize malicious backdoors while preserving task performance. We conduct extensive experiments on classification, mathematical reasoning, and summary generation tasks, involving multiple state-of-the-art LLMs. The results demonstrate that our P2P algorithm significantly reduces the attack success rate compared with baseline models. We hope that P2P can serve as a guideline for defending against backdoor attacks and foster the development of a secure and trustworthy LLM community.
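The re-poisoning step described in the abstract can be sketched roughly as follows. The trigger token, safe label, and function names here are hypothetical placeholders, not the paper's actual choices; a real P2P pipeline would operate on prompt-formatted fine-tuning data.

```python
import random

BENIGN_TRIGGER = "[SAFE]"  # hypothetical benign trigger token
SAFE_LABEL = "benign"      # hypothetical safe alternative label

def repoison(dataset, fraction=0.1, seed=0):
    """Copy a (text, label) dataset and stamp a random subset of samples
    with the benign trigger and a safe label, so that fine-tuning learns
    to map trigger-like patterns to safe outputs."""
    rng = random.Random(seed)
    out = list(dataset)
    k = max(1, int(fraction * len(out)))
    for i in rng.sample(range(len(out)), k):
        text, _label = out[i]
        out[i] = (f"{BENIGN_TRIGGER} {text}", SAFE_LABEL)
    return out

# Toy usage: re-poison 20% of a small classification set.
data = [("the movie was great", "positive")] * 10
repoisoned = repoison(data, fraction=0.2)
```

Fine-tuning on `repoisoned` (rather than the raw data) is what, per the abstract, overrides the association learned from the attacker's original triggers.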

en cs.CR, cs.AI
arXiv Open Access 2025
Poisoned Source Code Detection in Code Models

Ehab Ghannoum, Mohammad Ghafari

Deep learning models have gained popularity for conducting various tasks involving source code. However, their black-box nature raises concerns about potential risks. One such risk is a poisoning attack, where an attacker intentionally contaminates the training set with malicious samples to mislead the model's predictions in specific scenarios. To protect source code models from poisoning attacks, we introduce CodeGarrison (CG), a hybrid deep-learning model that relies on code embeddings to identify poisoned code samples. We evaluated CG against the state-of-the-art technique ONION for detecting poisoned samples generated by DAMP, MHM, and ALERT, as well as by a novel poisoning technique named CodeFooler. Results showed that CG significantly outperformed ONION with an accuracy of 93.5%. We also tested CG's robustness against unknown attacks, achieving an average accuracy of 85.6% in identifying poisoned samples across the four attacks mentioned above.

en cs.CR, cs.LG
arXiv Open Access 2025
Data Poisoning in Deep Learning: A Survey

Pinlong Zhao, Weiyao Zhu, Pengfei Jiao et al.

Deep learning has become a cornerstone of modern artificial intelligence, enabling transformative applications across a wide range of domains. As the core element of deep learning, the quality and security of training data critically influence model performance and reliability. However, during the training process, deep learning models face the significant threat of data poisoning, where attackers introduce maliciously manipulated training data to degrade model accuracy or lead to anomalous behavior. While existing surveys provide valuable insights into data poisoning, they generally adopt a broad perspective, encompassing both attacks and defenses, but lack a dedicated, in-depth analysis of poisoning attacks specifically in deep learning. In this survey, we bridge this gap by presenting a comprehensive and targeted review of data poisoning in deep learning. First, this survey categorizes data poisoning attacks across multiple perspectives, providing an in-depth analysis of their characteristics and underlying design principles. Second, the discussion is extended to the emerging area of data poisoning in large language models (LLMs). Finally, we explore critical open challenges in the field and propose potential research directions to advance it further. To support further exploration, an up-to-date repository of resources on data poisoning in deep learning is available at https://github.com/Pinlong-Zhao/Data-Poisoning.

en cs.CR, cs.AI
DOAJ Open Access 2024
Targeting mutation sites in the omicron variant of SARS-CoV-2 as potential therapeutic strategy against COVID-19 by antiretroviral drugs

Ochuko L. Erukainure, Aliyu Muhammad, Rahul Ravichandran et al.

The multiple mutations of the spike (S) protein of the Omicron SARS-CoV-2 variant are a major concern, as they have been implicated in the severity of COVID-19 and its complications. These mutations have been attributed to COVID-19-infected immune-compromised individuals, with HIV patients suspected to top the list. The present study investigated the mutation of the S protein of the Omicron variant in comparison to the Delta and Wuhan variants. It also investigated the molecular interactions of antiretroviral drugs (ARVd), namely dolutegravir, lamivudine, tenofovir disoproxil fumarate, and lenacapavir, with the initiation and termination codons of the mRNAs of the mutated proteins of the Omicron variant using computational tools. The complete genome sequences of the respective S proteins for the Omicron (OM066778.1), Delta (OK091006.1), and Wuhan (NC_045512.2) SARS-CoV-2 variants were retrieved from the National Center for Biotechnology Information (NCBI) database. Evolutionary analysis revealed a high degree of mutation in the S protein of the Omicron SARS-CoV-2 variant compared to the Delta and Wuhan variants, coupled with 68% homology. The sequences of the translation initiation sites (TISs), translation termination sites (TTSs), high mutation region-1 (HMR1), and high mutation region-2 (HMR2) mRNAs were retrieved from the full genome of the Omicron variant S protein. Molecular docking analysis revealed strong molecular interactions of ARVd with the TISs, TTSs, HMR1, and HMR2 of the S protein mRNA. These results indicate mutations in the S protein of the Omicron SARS-CoV-2 variant compared to the Delta and Wuhan variants, and these mutation points may present new therapeutic targets for COVID-19.

Toxicology. Poisons
arXiv Open Access 2024
The Effect of Data Poisoning on Counterfactual Explanations

André Artelt, Shubham Sharma, Freddy Lecué et al.

Counterfactual explanations are a widely used approach for examining the predictions of black-box systems. They can offer the opportunity for computational recourse by suggesting actionable changes on how to alter the input to obtain a different (i.e., more favorable) system output. However, recent studies have pointed out their susceptibility to various forms of manipulation. This work studies the vulnerability of counterfactual explanations to data poisoning. We formally introduce and characterize data poisoning in the context of counterfactual explanations for increasing the cost of recourse on three different levels: locally for a single instance, for a sub-group of instances, or globally for all instances. From this characterization we derive and investigate a general data poisoning mechanism. We demonstrate the impact of such data poisoning in the critical real-world application of explaining event detections in water distribution networks. Additionally, we conduct an extensive empirical evaluation, demonstrating that state-of-the-art counterfactual generation methods and toolboxes are vulnerable to such data poisoning. Furthermore, we find that existing defense methods fail to detect those poisonous samples.

en cs.LG, cs.AI
DOAJ Open Access 2023
Pollution characteristics and preliminary environmental risk assessment of typical pharmaceutical and personal care products in surface water of Qingpu District, Shanghai

Lu SHEN, Xiaoqian CHEN, Min LIU

Background: Pollution prevention and control of surface water is the focus of joint prevention and control work in the Yangtze River Delta Demonstration Zone. As an important part of the zone, Qingpu District of Shanghai is rich in water resources. However, extensive livestock, poultry, and aquaculture farming may pose potential pollution risks from pharmaceutical and personal care products (PPCPs). Objective: To investigate the pollution levels and distribution characteristics of typical PPCPs in the surface water of Qingpu District of Shanghai, and to conduct a preliminary environmental risk assessment of the chemicals with relatively high concentrations. Methods: Surface water samples at 15 pre-determined sites in Qingpu District of Shanghai were collected according to the Technical specifications for surface water environmental quality monitoring (HJ 91.2-2022), focusing on sewage treatment plants and animal husbandry and aquaculture farms with potential PPCP discharge. A total of 47 PPCPs were determined in the collected surface water samples by automated solid phase extraction coupled with ultra-high-performance liquid chromatography tandem triple quadrupole mass spectrometry. The regional distribution characteristics of different PPCPs were analyzed, and the risk quotient was applied to assess the preliminary environmental risk of 17 key PPCPs with relatively high concentrations. Results: 36 PPCPs were detected, with maximum concentrations of 0.53–1720.00 ng·L−1 and detection rates of 6.67%–100.00%. The dominant detected PPCPs were neurostimulants (caffeine), sulfonamides, quinolones, and cardiovascular drugs. Caffeine was determined in a concentration range of 77.10–1720.00 ng·L−1, accounting for 65.28% of the total detected PPCPs. Sulfonamides accounted for 10.38%; the typical sulfonamides were sulfadiazine and sulfamethoxazole, with highest detected concentrations of 349.00 ng·L−1 and 23.40 ng·L−1, respectively. In contrast, quinolones and cardiovascular drugs had relatively low proportions (7.08% and 6.59%, respectively). Spatial distribution analysis showed that pollution levels were high in the northeastern and southern regions and low in the central region, except for a few sites. Although the caffeine concentration in surface water was high, its ecotoxic effect was low, without obvious environmental risk. However, sulfadiazine and sarafloxacin posed a potentially high risk (risk quotient: 1.10–2.59); the environmental risk of sarafloxacin might be overestimated given its limited ecotoxicity data. Potential moderate risks (risk quotient: 0.103–0.980) were identified for sertraline, carbamazepine, fluoxetine, ciprofloxacin, and others. Medium or high environmental risk was found for 2–5 PPCPs at each sampling site. Conclusion: A certain degree of PPCP pollution exists in the surface water of Qingpu District of Shanghai, with potential environmental risks posed by sulfonamides, quinolones, and neuropathic drugs. It is suggested to strengthen the supervision of sulfonamides and quinolones, and to formulate policies for the use and emission management of key neuropathic drugs.
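The risk-quotient screening used in studies like this one is conventionally computed as RQ = MEC/PNEC (measured environmental concentration over predicted no-effect concentration). A minimal sketch, assuming the widely used cutoffs of RQ ≥ 1 (high), 0.1–1 (medium), and < 0.1 (low); the study may use its own cutoffs, and the PNEC value in the example is purely illustrative:

```python
def risk_quotient(mec, pnec):
    """RQ = measured environmental concentration / predicted no-effect
    concentration; both values must be in the same units (e.g. ng/L)."""
    return mec / pnec

def risk_level(rq):
    # Common screening cutoffs (an assumption, not taken from the study):
    # RQ >= 1 high risk, 0.1 <= RQ < 1 medium risk, RQ < 0.1 low risk.
    if rq >= 1:
        return "high"
    if rq >= 0.1:
        return "medium"
    return "low"

# Illustrative only: the reported sulfadiazine peak of 349 ng/L against a
# hypothetical PNEC of 200 ng/L gives RQ ~ 1.7, i.e. a high-risk flag.
level = risk_level(risk_quotient(349.0, 200.0))
```

With these cutoffs, the study's reported quotients of 1.10–2.59 map to "high" and 0.103–0.980 to "medium", matching its risk labels.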

Medicine (General), Toxicology. Poisons
arXiv Open Access 2023
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks

Nils Lukas, Florian Kerschbaum

Deep image classification models trained on vast amounts of web-scraped data are susceptible to data poisoning - a mechanism for backdooring models. A small number of poisoned samples seen during training can severely undermine a model's integrity during inference. Existing work considers an effective defense as one that either (i) restores a model's integrity through repair or (ii) detects an attack. We argue that this approach overlooks a crucial trade-off: Attackers can increase robustness at the expense of detectability (over-poisoning) or decrease detectability at the cost of robustness (under-poisoning). In practice, attacks should remain both undetectable and robust. Detectable but robust attacks draw human attention and rigorous model evaluation or cause the model to be re-trained or discarded. In contrast, attacks that are undetectable but lack robustness can be repaired with minimal impact on model accuracy. Our research points to intrinsic flaws in current attack evaluation methods and raises the bar for all data poisoning attackers who must delicately balance this trade-off to remain robust and undetectable. To demonstrate the existence of more potent defenders, we propose defenses designed to (i) detect or (ii) repair poisoned models using a limited amount of trusted image-label pairs. Our results show that an attacker who needs to be robust and undetectable is substantially less threatening. Our defenses mitigate all tested attacks with a maximum accuracy decline of 2% using only 1% of clean data on CIFAR-10 and 2.5% on ImageNet. We demonstrate the scalability of our defenses by evaluating large vision-language models, such as CLIP. Attackers who can manipulate the model's parameters pose an elevated risk as they can achieve higher robustness at low detectability compared to data poisoning attackers.

en cs.CR, cs.LG
arXiv Open Access 2023
Transferable Availability Poisoning Attacks

Yiyong Liu, Michael Backes, Xiao Zhang

We consider availability data poisoning attacks, where an adversary aims to degrade the overall test accuracy of a machine learning model by crafting small perturbations to its training data. Existing poisoning strategies can achieve the attack goal but assume the victim to employ the same learning method as what the adversary uses to mount the attack. In this paper, we argue that this assumption is strong, since the victim may choose any learning algorithm to train the model as long as it can achieve some targeted performance on clean data. Empirically, we observe a large decrease in the effectiveness of prior poisoning attacks if the victim employs an alternative learning algorithm. To enhance the attack transferability, we propose Transferable Poisoning, which first leverages the intrinsic characteristics of alignment and uniformity to enable better unlearnability within contrastive learning, and then iteratively utilizes the gradient information from supervised and unsupervised contrastive learning paradigms to generate the poisoning perturbations. Through extensive experiments on image benchmarks, we show that our transferable poisoning attack can produce poisoned samples with significantly improved transferability, not only applicable to the two learners used to devise the attack but also to learning algorithms and even paradigms beyond.

en cs.CR, cs.LG
arXiv Open Access 2023
Sharpness-Aware Data Poisoning Attack

Pengfei He, Han Xu, Jie Ren et al.

Recent research has highlighted the vulnerability of Deep Neural Networks (DNNs) to data poisoning attacks. These attacks aim to inject poisoning samples into a model's training dataset such that the trained model fails at inference. While previous studies have executed different types of attacks, one major challenge that greatly limits their effectiveness is the uncertainty of the re-training process after the injection of poisoning samples, including the re-training initialization or algorithms. To address this challenge, we propose a novel attack method called "Sharpness-Aware Data Poisoning Attack" (SAPA). In particular, it leverages the concept of DNNs' loss landscape sharpness to optimize the poisoning effect on the worst re-trained model. This helps preserve the poisoning effect regardless of the specific re-training procedure employed. Extensive experiments demonstrate that SAPA offers a general and principled strategy that significantly enhances various types of poisoning attacks.
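The sharpness-aware core of such methods can be illustrated with a SAM-style gradient evaluation. This toy sketch applies the ascent step to a generic optimization variable rather than to model weights as SAPA does, and all names are hypothetical; it only shows the mechanic of evaluating the gradient at a worst-case neighborhood point.

```python
import numpy as np

def sharpness_aware_grad(x, loss_grad, rho=0.05):
    """Evaluate the gradient at a worst-case point within a ball of
    radius rho around x (a SAM-style ascent step), so the optimized
    effect is robust to small shifts, e.g. across re-training runs."""
    g = loss_grad(x)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent step
    return loss_grad(x + eps)                    # gradient at worst point

# Toy quadratic loss 0.5*||x||^2, whose gradient is x itself.
grad = sharpness_aware_grad(np.array([1.0, 0.0]),
                            loss_grad=lambda x: x, rho=0.05)
```

The returned gradient is the plain gradient perturbed toward the locally sharpest direction; using it in the poison update is what makes the attack favor flat, re-training-stable optima of the poisoning objective.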

en cs.CR
arXiv Open Access 2023
Temporal Robustness against Data Poisoning

Wenxiao Wang, Soheil Feizi

Data poisoning considers cases when an adversary manipulates the behavior of machine learning algorithms through malicious training data. Existing threat models of data poisoning center around a single metric, the number of poisoned samples. In consequence, if attackers can poison more samples than expected with affordable overhead, as in many practical scenarios, they may be able to render existing defenses ineffective in a short time. To address this issue, we leverage timestamps denoting the birth dates of data, which are often available but neglected in the past. Benefiting from these timestamps, we propose a temporal threat model of data poisoning with two novel metrics, earliness and duration, which respectively measure how far in advance an attack started and how long it lasted. Using these metrics, we define notions of temporal robustness against data poisoning, providing a meaningful sense of protection even with unbounded amounts of poisoned samples when the attacks are temporally bounded. We present a benchmark with an evaluation protocol simulating continuous data collection and periodic deployments of updated models, thus enabling empirical evaluation of temporal robustness. Lastly, we develop and empirically verify a baseline defense, namely temporal aggregation, offering provable temporal robustness and highlighting the potential of our temporal threat model for data poisoning.
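The temporal-aggregation baseline can be sketched as follows. This is a plausible minimal reading of the defense, not the paper's exact construction, and all function names are hypothetical: partition the data into disjoint time windows by timestamp, train one model per window, and predict by majority vote, so a temporally bounded attack can only corrupt the windows it overlaps.

```python
from collections import Counter

def temporal_aggregation(samples, train, n_windows=5):
    """Train one model per time window and return a majority-vote
    predictor. Windows untouched by a temporally bounded attack keep
    voting correctly, which is the source of the provable robustness."""
    samples = sorted(samples, key=lambda s: s["timestamp"])
    size = -(-len(samples) // n_windows)  # ceiling division
    windows = [samples[i:i + size] for i in range(0, len(samples), size)]
    models = [train(w) for w in windows]

    def predict(x):
        return Counter(m(x) for m in models).most_common(1)[0][0]
    return predict

# Toy usage: each "model" just memorizes its window's first label, so
# three of the five windows vote "a" and two vote "b".
data = [{"timestamp": t, "label": "a" if t <= 6 else "b"}
        for t in range(1, 11)]
predict = temporal_aggregation(data,
                               train=lambda w: (lambda x, l=w[0]["label"]: l))
```

If an attack with bounded duration can poison at most k of the windows, the vote is unchanged whenever the remaining windows hold a majority.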

en cs.LG, cs.AI
arXiv Open Access 2023
Poisoning Network Flow Classifiers

Giorgio Severi, Simona Boboila, Alina Oprea et al.

As machine learning (ML) classifiers increasingly oversee the automated monitoring of network traffic, studying their resilience against adversarial attacks becomes critical. This paper focuses on poisoning attacks, specifically backdoor attacks, against network traffic flow classifiers. We investigate the challenging scenario of clean-label poisoning where the adversary's capabilities are constrained to tampering only with the training data - without the ability to arbitrarily modify the training labels or any other component of the training process. We describe a trigger crafting strategy that leverages model interpretability techniques to generate trigger patterns that are effective even at very low poisoning rates. Finally, we design novel strategies to generate stealthy triggers, including an approach based on generative Bayesian network models, with the goal of minimizing the conspicuousness of the trigger, and thus making detection of an ongoing poisoning campaign more challenging. Our findings provide significant insights into the feasibility of poisoning attacks on network traffic classifiers used in multiple scenarios, including detecting malicious communication and application classification.

en cs.CR, cs.LG
DOAJ Open Access 2022
Clinical application of gene chip technology in diagnosis and treatment of silicosis complicated with mycobacterial infection

Hongbo HUANG, Xiaoting XU, Xibin ZHUANG et al.

Background: Gene chip technology has been increasingly used in the diagnosis and treatment of common tuberculosis. However, its role in the diagnosis and treatment of silicosis complicated with mycobacterial infection remains unclear. Objective: To evaluate the application value of gene chip technology in the diagnosis and treatment of silicosis complicated with mycobacterial infection. Methods: From January 2019 to June 2021, 197 silicosis patients suspected to be complicated with mycobacterial infection in Quanzhou First Hospital Affiliated to Fujian Medical University were enrolled in this study. Etiological evaluation of the 197 patients was conducted by acid-fast staining of sputum smears (sputum smear method), culture of Mycobacterium tuberculosis from sputum (sputum culture method), and gene chip analysis of bronchoalveolar lavage fluid (BALF); for 80 of these patients, acid-fast staining of BALF (BALF smear method) and culture of Mycobacterium tuberculosis from BALF (BALF culture method) were additionally performed. Positive rates were compared, and consistency was assessed using the intraclass correlation coefficient (ICC). Testing for Mycobacterium tuberculosis drug-resistance mutation genes was added for patients positive for Mycobacterium tuberculosis complex by the BALF gene chip method. Results: The average age of the 197 patients was (53.1±9.1) years, and the average dust exposure time was (21.1±9.4) years; 192 were male and 5 female. There were 8 cases of stage I silicosis, 17 of stage II, and 172 of stage III. Among them, 11.2% were positive by sputum smear, 24.4% by sputum culture, and 36.0% by BALF gene chip. The difference between the three methods was statistically significant (P<0.05), and the consistency test for the three methods gave an ICC of 0.539 (P<0.001). Among the 80 patients, there was a significant difference in the positive rates of the five methods (χ2=25.23, P<0.001). Bonferroni tests showed statistically significant pair-wise differences between the BALF culture method and the sputum smear method, the BALF culture method and the BALF smear method, the BALF gene chip method and the sputum smear method, and the BALF gene chip method and the BALF smear method (P<0.05), with no statistically significant differences between the other pairs (P>0.05). The consistency test for the five methods gave an ICC of 0.586 (P<0.001). Among the 71 BALF gene chip positive cases, 59 were positive for Mycobacterium tuberculosis complex (17 were positive in first-line anti-tuberculosis resistance testing, and 2 carried a quinolone resistance gene in second-line testing) and received regular anti-tuberculosis treatment; of these, 45 improved and 14 were stable. Twelve cases were non-tuberculous mycobacteria: 5 received anti-non-tuberculous mycobacteria treatment (4 improved and 1 was stable), while 7 with mild symptoms did not receive such treatment. Conclusion: Compared with sputum smear, sputum culture, and other traditional methods, gene chip analysis of BALF can improve the positive rate of pathogenic diagnosis of silicosis complicated with mycobacterial infection, and can quickly distinguish non-tuberculous mycobacteria infection from drug-resistant Mycobacterium tuberculosis infection, which helps to adjust treatment as soon as possible.

Medicine (General), Toxicology. Poisons
DOAJ Open Access 2022
In vitro evaluation of mutagenic, cytotoxic, genotoxic and oral irritation potential of nicotine pouch products

Jacqueline Miller-Holt, Irene Baskerville-Abraham, Masanori Sakimura et al.

Non-clinical in vitro studies were conducted to investigate the characteristics of extracts from tobacco-free nicotine pouches alongside a reference snus product and/or the 1R6F reference cigarette. In vitro investigations were conducted in the Neutral Red Uptake (NRU) cytotoxicity assay, Bacterial Reverse Mutation (Ames) assay, and in vitro Mammalian Cell Micronucleus (ivMN) assay. These products were also investigated for their oral irritation potential in the EpiGingival™ 3D tissue model. Results from the Ames, in vitro Micronucleus, and NRU assays indicated that the tested products were non-mutagenic, non-genotoxic, and non-cytotoxic, in contrast to results obtained for the 1R6F reference cigarette. Complete Artificial Saliva (CAS) extracts from these products were also not classified as irritants (as measured using the MTT assay) in the EpiGingival™ 3D tissue model.

Toxicology. Poisons

Page 4 of 40,046