Results for "Toxicology. Poisons"

Showing 20 of ~801,069 results · from DOAJ, arXiv, Semantic Scholar, CrossRef

DOAJ Open Access 2026
Impacts of extreme weather on drinking water safety in urban and rural areas and control strategies

Jingxian LIU, Erming OUYANG, Shiyun WANG et al.

Climate change is altering the Earth's water cycle. The resulting extreme weather events (heatwaves, droughts, and extreme precipitation) impact urban and rural water security through multi-layered mechanisms. A primary structural disparity exists between urban and rural systems: while urban areas benefit from comprehensive and standardized pipe networks that ensure terminal water quality, rural areas often suffer from "last mile" vulnerability due to inadequate infrastructure and outdated purification facilities. Extreme weather can directly alter the microbial community structure, concentrations of chemical pollutants, and physicochemical properties of source water. These alterations interfere with the efficiency of water treatment processes and ultimately compromise the integrity of distribution systems. Because distribution networks often lack real-time monitoring and adaptive response capabilities, they have emerged as the most vulnerable link in the "water source-water treatment-distribution system" chain. Based on a systematic analysis of these chain-wide impacts, this paper proposes a series of control strategies, including security frameworks based on multi-model coupling, water source protection measures, improvement of water treatment technologies, optimization of distribution systems, and development of new water quality monitoring methods. These strategies aim to enhance the climate adaptability of urban and rural drinking water systems through multi-dimensional intervention, providing a theoretical basis for constructing climate-resilient water infrastructure.

Medicine (General), Toxicology. Poisons
DOAJ Open Access 2025
Multi-endpoint assessment of tunnel wash water and tyre-particle leachate in zebrafish larvae

Shubham Varshney, Chinmayi Ramaghatta, Prabhugouda Siriyappagouder et al.

Washing of road tunnels is essential for removing accumulated pollutants such as tyre wear particles, brake dust, exhaust residues, and road debris to ensure visibility and safe driving. Tunnel washing generates large volumes of contaminated runoff known as untreated tunnel wash runoff (UTWR). Some countries filter UTWR through a sedimentation process before release to reduce contamination, generating what is known as treated tunnel wash runoff (TWR). This study investigates the potential environmental impact of diluted UTWR (25%) and TWR (50%) by evaluating their toxicity in fish and comparing the effects to tyre-particle leachate (TPL, 2 g/L). UTWR was collected during tunnel cleaning, and TWR was collected after 14 days of filtration through sand sediments, from the Bodø tunnel in Norway. Zebrafish larvae, used as a fish model, exposed to contaminated runoff exhibited increased mortality, impaired growth, developmental anomalies, altered swimming behaviour, and changes in gene expression. Both UTWR and TWR exposure induced significant toxicity in zebrafish larvae, though the toxicity caused by TWR was notably lower than that of UTWR. This study shows that current filtration methods for tunnel wash water reduce the levels of most pollutants; however, more research is needed on how tunnel wash-water runoff affects aquatic ecosystems.

Toxicology. Poisons
DOAJ Open Access 2025
Per- and polyfluorinated substances (PFAS) promote osteoclastogenesis and bone loss through PPARα activation

Laimar C. Garmo, Mackenzie K. Herroon, Shane Mecca et al.

Per- and polyfluoroalkyl substances (PFAS) are emerging as significant environmental contaminants affecting bone health, with studies linking their exposure to decreased bone mineral density (BMD), enhanced osteoclastogenesis, and disruptions in the bone marrow microenvironment. While current research highlights the effects on bone and BMD, there is a critical gap in understanding the mechanisms behind these effects. Studies presented here investigate the effects of legacy and alternative PFAS, particularly hexafluoropropylene oxide dimer acid (GenX) and perfluorohexane sulfonic acid (PFHxS), on bone health using in vitro and in vivo models. An environmentally relevant mixture of five PFAS was found to promote osteoclastic differentiation of murine bone marrow macrophages (BMMs) in vitro. Among the five components of the mixture, the emerging compound GenX had the highest propensity to induce osteoclastogenesis. Utilizing pharmacological and genetic approaches, we identified peroxisome proliferator-activated receptor alpha (PPARα) as a potential mediator of PFAS-driven osteoclastogenesis. Furthermore, our in vivo mouse experiments demonstrated a decrease in trabecular and cortical bone thickness as well as altered bone mineral composition in male FVB/N mice exposed to either GenX or PFHxS (2 mg/L) for 12 weeks. Altogether, our results reveal potentially negative effects of PFAS exposure on BMD, bone mineral composition, and overall bone health and underscore the need for further research assessing the health risks associated with exposure to alternative PFAS.

Toxicology. Poisons
arXiv Open Access 2025
FedNIA: Noise-Induced Activation Analysis for Mitigating Data Poisoning in FL

Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif

Federated learning systems are increasingly threatened by data poisoning attacks, where malicious clients compromise global models by contributing tampered updates. Existing defenses often rely on impractical assumptions, such as access to a central test dataset, or fail to generalize across diverse attack types, particularly those involving multiple malicious clients working collaboratively. To address this, we propose Federated Noise-Induced Activation Analysis (FedNIA), a novel defense framework that identifies and excludes adversarial clients without relying on any central test dataset. FedNIA injects random noise inputs to analyze the layerwise activation patterns in client models, leveraging an autoencoder that detects abnormal behaviors indicative of data poisoning. FedNIA can defend against diverse attack types, including sample poisoning, label flipping, and backdoors, even in scenarios with multiple attacking nodes. Experimental results on non-IID federated datasets demonstrate its effectiveness and robustness, underscoring its potential as a foundational approach for enhancing the security of federated learning systems.
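As a rough illustration of the activation-analysis idea (a toy sketch, not the paper's method: the autoencoder is replaced here by a simple median-distance anomaly score, and client models are simulated as hypothetical weight vectors):

```python
import random

random.seed(0)

def activations(weights, noise):
    # Toy "layerwise activation": ReLU of weight * probe input, per dimension.
    return [max(0.0, w * x) for w, x in zip(weights, noise)]

# Simulated clients: benign clients have similar weights; one poisoned
# client deviates sharply (a stand-in for a tampered update).
benign = [[1.0 + random.gauss(0, 0.05) for _ in range(8)] for _ in range(9)]
poisoned = [[5.0 * (1 if i % 2 else -1) for i in range(8)]]
clients = benign + poisoned

noise = [random.gauss(0, 1) for _ in range(8)]  # shared random probe input
acts = [activations(w, noise) for w in clients]

# Robust reference: median activation profile across all clients.
dims = len(noise)
median_profile = [sorted(a[d] for a in acts)[len(acts) // 2] for d in range(dims)]

def score(a):
    # L2 distance to the median profile; FedNIA itself would use an
    # autoencoder's reconstruction error here instead.
    return sum((x - m) ** 2 for x, m in zip(a, median_profile)) ** 0.5

scores = [score(a) for a in acts]
cutoff = 3 * sorted(scores)[len(scores) // 2]  # crude threshold
flagged = [i for i, s in enumerate(scores) if s > cutoff]
```

With this setup the poisoned client (index 9) produces an activation profile far from the median and is flagged for exclusion from aggregation.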

en cs.LG, cs.AI
arXiv Open Access 2025
Exploring the Security Threats of Knowledge Base Poisoning in Retrieval-Augmented Code Generation

Bo Lin, Shangwen Wang, Liqian Chen et al.

The integration of Large Language Models (LLMs) into software development has revolutionized the field, particularly through the use of Retrieval-Augmented Code Generation (RACG) systems that enhance code generation with information from external knowledge bases. However, the security implications of RACG systems, particularly the risks posed by vulnerable code examples in the knowledge base, remain largely unexplored. This risk is particularly concerning given that public code repositories, which often serve as the sources for knowledge base collection in RACG systems, are usually accessible to anyone in the community. Malicious attackers can exploit this accessibility to inject vulnerable code into the knowledge base, making it toxic. Once these poisoned samples are retrieved and incorporated into the generated code, they can propagate security vulnerabilities into the final product. This paper presents the first comprehensive study on the security risks associated with RACG systems, focusing on how vulnerable code in the knowledge base compromises the security of generated code. We investigate the security of LLM-generated code across different settings through extensive experiments using four major LLMs, two retrievers, and two poisoning scenarios. Our findings highlight the significant threat of knowledge base poisoning, where even a single poisoned code example can compromise up to 48% of generated code. Our findings provide crucial insights into vulnerability introduction in RACG systems and offer practical mitigation recommendations, thereby helping improve the security of LLM-generated code in future work.

en cs.CR, cs.SE
arXiv Open Access 2025
REDEditing: Relationship-Driven Precise Backdoor Poisoning on Text-to-Image Diffusion Models

Chongye Guo, Jinhu Fu, Junfeng Fang et al.

The rapid advancement of generative AI highlights the importance of text-to-image (T2I) security, particularly with the threat of backdoor poisoning. Timely disclosure and mitigation of security vulnerabilities in T2I models are crucial for ensuring the safe deployment of generative models. We explore a novel training-free backdoor poisoning paradigm through model editing, a technique recently employed for knowledge updating in large language models. In doing so, we reveal the potential security risks posed by model editing techniques to image generation models. In this work, we establish the principles for backdoor attacks based on model editing, and propose a relationship-driven precise backdoor poisoning method, REDEditing. Drawing on the principles of equivalent-attribute alignment and stealthy poisoning, we develop an equivalent relationship retrieval and joint-attribute transfer approach that ensures consistent backdoor image generation through concept rebinding. A knowledge isolation constraint is proposed to preserve benign generation integrity. Our method achieves an 11% higher attack success rate compared to state-of-the-art approaches. Remarkably, adding just one line of code enhances output naturalness while improving backdoor stealthiness by 24%. This work aims to heighten awareness of this security vulnerability in editable image generation models.

en cs.CR, cs.CV
arXiv Open Access 2025
Multi-level Certified Defense Against Poisoning Attacks in Offline Reinforcement Learning

Shijie Liu, Andrew C. Cullen, Paul Montague et al.

Similar to other machine learning frameworks, Offline Reinforcement Learning (RL) has been shown to be vulnerable to poisoning attacks due to its reliance on externally sourced datasets, a vulnerability that is exacerbated by its sequential nature. To mitigate the risks posed by RL poisoning, we extend certified defenses to provide larger guarantees against adversarial manipulation, ensuring robustness for both per-state actions and the overall expected cumulative reward. Our approach leverages properties of Differential Privacy in a manner that allows this work to span both continuous and discrete spaces, as well as stochastic and deterministic environments, significantly expanding the scope and applicability of achievable guarantees. Empirical evaluations demonstrate that our approach ensures the performance drops to no more than 50% with up to 7% of the training data poisoned, significantly improving over the 0.008% in prior work (Wu et al., 2022), while producing certified radii that are 5 times larger. This highlights the potential of our framework to enhance safety and reliability in offline RL.

en cs.LG, cs.AI
arXiv Open Access 2025
StealthAttack: Robust 3D Gaussian Splatting Poisoning via Density-Guided Illusions

Bo-Hsu Ke, You-Zhe Xie, Yu-Lun Liu et al.

3D scene representation methods like Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have significantly advanced novel view synthesis. As these methods become prevalent, addressing their vulnerabilities becomes critical. We analyze 3DGS robustness against image-level poisoning attacks and propose a novel density-guided poisoning method. Our method strategically injects Gaussian points into low-density regions identified via Kernel Density Estimation (KDE), embedding viewpoint-dependent illusory objects clearly visible from poisoned views while minimally affecting innocent views. Additionally, we introduce an adaptive noise strategy to disrupt multi-view consistency, further enhancing attack effectiveness. We propose a KDE-based evaluation protocol to assess attack difficulty systematically, enabling objective benchmarking for future research. Extensive experiments demonstrate our method's superior performance compared to state-of-the-art techniques. Project page: https://hentci.github.io/stealthattack/
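The density-guided target selection can be caricatured in one dimension: estimate density with a Gaussian KDE over sample positions and inject at the lowest-density grid point. This is a toy 1-D stand-in with made-up cluster positions, not the paper's 3DGS pipeline:

```python
import math

def kde(x, samples, h=0.5):
    # 1-D Gaussian kernel density estimate at point x with bandwidth h.
    n = len(samples)
    norm = n * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-((x - s) / h) ** 2 / 2) for s in samples) / norm

# Toy "scene": sample positions clustered around 0 and 10 (high density),
# leaving the region around 5 sparsely covered.
samples = [0.1 * i for i in range(-20, 21)] + [10 + 0.1 * i for i in range(-20, 21)]

# Scan a grid and pick the lowest-density location as the injection target,
# mirroring the idea of placing illusory content where few Gaussians live.
grid = [i * 0.25 for i in range(0, 41)]  # 0.0 .. 10.0
target = min(grid, key=lambda x: kde(x, samples))
```

By symmetry the sparsest grid point is the midpoint between the two clusters, so `target` lands at 5.0; the real method does the analogous search over 3-D Gaussian point density.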

en cs.CV
arXiv Open Access 2025
VAGUEGAN: Stealthy Poisoning and Backdoor Attacks on Image Generative Pipelines

Mostafa Mohaimen Akand Faisal, Rabeya Amin Jhuma

Generative models such as GANs and diffusion models are widely used to synthesize photorealistic images and to support downstream creative and editing tasks. While adversarial attacks on discriminative models are well studied, attacks targeting generative pipelines, where small, stealthy perturbations in inputs lead to controlled changes in outputs, are less explored. This study introduces VagueGAN, an attack pipeline combining a modular perturbation network, PoisonerNet, with a generator-discriminator pair to craft stealthy triggers that cause targeted changes in generated images. Attack efficacy is evaluated using a custom proxy metric, while stealth is analyzed through perceptual and frequency-domain measures. The transferability of the method to a modern diffusion-based pipeline is further examined through ControlNet-guided editing. Interestingly, the experiments show that poisoned outputs can display higher visual quality than clean counterparts, challenging the assumption that poisoning necessarily reduces fidelity. Unlike conventional pixel-level perturbations, latent-space poisoning in GANs and diffusion pipelines can retain or even enhance output aesthetics, exposing a blind spot in pixel-level defenses. Moreover, carefully optimized perturbations can produce consistent, stealthy effects on generator outputs while remaining visually inconspicuous, raising concerns for the integrity of image generation pipelines.

en cs.CV, cs.LG
arXiv Open Access 2025
Deterministic Certification of Graph Neural Networks against Graph Poisoning Attacks with Arbitrary Perturbations

Jiate Li, Meng Pang, Yun Dong et al.

Graph neural networks (GNNs) are becoming the de facto method for learning on graph data and have achieved state-of-the-art results on node and graph classification tasks. However, recent works show that GNNs are vulnerable to training-time poisoning attacks: marginally perturbing edges, nodes, and/or node features of the training graph(s) can largely degrade GNNs' testing performance. Most previous defenses against graph poisoning attacks are empirical and are soon broken by adaptive or stronger attacks. A few provable defenses offer robustness guarantees, but have large gaps when applied in practice: 1) they restrict the attacker to only one type of perturbation; 2) they are designed for a particular GNN architecture or task; and 3) their robustness guarantees are not 100% accurate. In this work, we bridge all these gaps by developing PGNNCert, the first certified defense of GNNs against poisoning attacks under arbitrary (edge, node, and node feature) perturbations with deterministic robustness guarantees. Extensive evaluations on multiple node and graph classification datasets and GNNs demonstrate the effectiveness of PGNNCert in provably defending against arbitrary poisoning perturbations. PGNNCert is also shown to significantly outperform the state-of-the-art certified defenses against edge or node perturbation during GNN training.

en cs.LG, cs.CR
DOAJ Open Access 2024
Non-targeted metallomics based on synchrotron radiation X-ray fluorescence spectroscopy and machine learning for screening inorganic or methylmercury-exposed rice plants

Piaoxue AO, Chaojie WEI, Hongxin XIE et al.

Background: Mercury, as a global heavy metal pollutant, poses a serious threat to human health. The toxicity of mercury depends on its chemical form, so distinguishing the forms of mercury in the environment is of great significance for mercury management and for reducing human mercury exposure risks. Objective: To establish a non-targeted metallomics method based on synchrotron radiation X-ray fluorescence (SRXRF) spectroscopy combined with machine learning to screen rice plants exposed to inorganic mercury (IHg) or methylmercury (MeHg). Methods: Rice seeds were exposed to ultra-pure water (control group) or to 0.1 mg·L−1 IHg (IHg group) or MeHg (MeHg group) solutions. After germination, the seedlings were cultured for 21 d, and rice leaves were collected, dried, weighed, and pressed. The metallome content of the rice leaves was determined by SRXRF. Machine learning models, including soft independent modeling of class analogy (SIMCA), partial least squares discriminant analysis (PLS-DA), and logistic regression (LR), were used to classify the SRXRF full spectra of the different groups and to find the best model for distinguishing rice exposed to IHg or MeHg. In addition, characteristic elements were selected as input parameters to optimize the model by improving computing speed and reducing model computation. Results: The SRXRF spectral intensities of the control, IHg, and MeHg groups differed, indicating that exposure to IHg and MeHg can interfere with the homeostasis of the metallome in rice leaves. Principal component analysis (PCA) of the SRXRF spectra showed that the control group could be well distinguished from the mercury-exposed groups, but the IHg and MeHg groups mostly overlapped. The accuracy rates of the three models (PLS-DA, SIMCA, and LR) were higher than 98% for the training set, higher than 95% for the validation set, and higher than 94% for the cross-validation set. In addition, the accuracy of the LR model was higher than that of the PLS-DA and SIMCA models. Furthermore, the accuracy was 92.05% when the characteristic elements K, Ca, Mn, Fe, and Zn selected by LR were used to distinguish the IHg and MeHg groups. Compared with the full-spectra model, although the prediction accuracy of the characteristic-spectra model decreased, the number of model input parameters decreased by 99.51%, and precision, recall, and F1 score were all above 84.48%, indicating that the model could distinguish rice exposed to different mercury forms. Conclusion: A non-targeted metallomics method based on SRXRF and machine learning can be applied for high-throughput screening of rice exposed to different forms of mercury and thus decrease the risk of human mercury exposure.

Medicine (General), Toxicology. Poisons
DOAJ Open Access 2024
An in vitro toxicological assessment of two electronic cigarettes: E-liquid to aerosolisation

E. Bishop, F. Miazzi, S. Bozhilova et al.

Interest in the toxicological assessment of iterations of e-cigarette devices, e-liquid formulations, and flavour use is increasing. Here, we describe a multiple-test-matrix, in vitro approach to assess the biological impact of differing e-cigarette activation mechanisms (button- vs. puff-activated) and heating technologies (cotton vs. ceramic wick). The e-liquids selected for each device contained the same nicotine concentration and flavourings. We tested both e-liquid and aqueous extract of e-liquid aerosol using a high-throughput cytotoxicity and genotoxicity screen. We also conducted whole-aerosol assessment both in a reconstituted human airway lung tissue (MucilAir) with associated endpoint assessment (cytotoxicity, TEER, cilia beat frequency, and active area) and in an Ames whole-aerosol assay with up to 900 consecutive undiluted puffs. This testing showed that the biological impact of these devices is similar, taking into consideration the limitations and capture efficiencies of the different testing matrices. We contextualised these responses against previously published reference cigarette data to establish the comparative reduction in response, consistent with the reduced-risk potential of the e-cigarette products tested in this study compared to conventional cigarettes.

Toxicology. Poisons
arXiv Open Access 2024
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization

Mingda Zhang, Mingli Zhu, Zihao Zhu et al.

Backdoor attacks have been considered a serious security threat to deep neural networks (DNNs). Poisoned sample detection (PSD), which aims at filtering poisoned samples out of an untrustworthy training dataset, has shown very promising performance for defending against data-poisoning-based backdoor attacks. However, we observe that the detection performance of many advanced methods is likely to be unstable when facing weak backdoor attacks, such as a low poisoning ratio or weak trigger strength. To further verify this observation, we conduct a statistical investigation across various backdoor attacks and poisoned sample detections, showing a positive correlation between backdoor effect and detection performance. This inspires us to strengthen the backdoor effect to enhance detection performance. Since we cannot achieve that goal by directly manipulating the poisoning ratio or trigger strength, we propose to train one model using the Sharpness-Aware Minimization (SAM) algorithm rather than the vanilla training algorithm. We also provide both empirical and theoretical analysis of how SAM training strengthens the backdoor effect. This SAM-trained model can then be seamlessly integrated with any off-the-shelf PSD method that extracts discriminative features from the trained model for detection, called SAM-enhanced PSD. Extensive experiments on several benchmark datasets show the reliable detection performance of the proposed method against both weak and strong backdoor attacks, with significant improvements against various attacks (+34.38% TPR on average) over the conventional PSD methods (i.e., without SAM enhancement). Overall, this work provides new insights into PSD and proposes a novel approach that can complement existing detection methods, which may inspire more in-depth exploration in this field.
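For reference, the SAM update itself (ascend to an approximate worst-case point within radius rho, then descend using the gradient computed there) can be sketched on a toy 1-D loss. The loss function and hyperparameters below are illustrative only, not from the paper:

```python
def loss(w):
    # Hypothetical 1-D loss landscape with a minimum near w ≈ 1.9.
    return (w - 2.0) ** 2 + 0.005 * w ** 4

def grad(f, w, eps=1e-6):
    # Central-difference numerical gradient.
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w, lr, rho = 0.0, 0.1, 0.05
for _ in range(300):
    g = grad(loss, w)
    # Ascent step toward the worst case within radius rho;
    # in 1-D the normalized gradient direction is just its sign.
    w_adv = w + rho * (1.0 if g >= 0 else -1.0)
    # SAM descent: use the gradient evaluated at the perturbed point.
    w -= lr * grad(loss, w_adv)
```

After the loop, `w` settles near the loss minimum; the difference from vanilla gradient descent is only which point the descent gradient is taken at.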

en cs.CV
arXiv Open Access 2024
Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems

Yunfan Wu, Qi Cao, Shuchang Tao et al.

Recent studies have demonstrated the vulnerability of recommender systems to data poisoning attacks, where adversaries inject carefully crafted fake user interactions into the training data of recommenders to promote target items. Current attack methods involve iteratively retraining a surrogate recommender on the poisoned data with the latest fake users to optimize the attack. However, this repetitive retraining is highly time-consuming, hindering the efficient assessment and optimization of fake users. To mitigate this computational bottleneck and develop a more effective attack in an affordable time, we analyze the retraining process and find that a change in the representation of one user/item will cause a cascading effect through the user-item interaction graph. Under theoretical guidance, we introduce Gradient Passing (GP), a novel technique that explicitly passes gradients between interacted user-item pairs during backpropagation, thereby approximating the cascading effect and accelerating retraining. With just a single update, GP can achieve effects comparable to multiple original training iterations. Under the same number of retraining epochs, GP enables a closer approximation of the surrogate recommender to the victim. This more accurate approximation provides better guidance for optimizing fake users, ultimately leading to enhanced data poisoning attacks. Extensive experiments on real-world datasets demonstrate the efficiency and effectiveness of our proposed GP.

en cs.IR
arXiv Open Access 2024
Poisoning Prevention in Federated Learning and Differential Privacy via Stateful Proofs of Execution

Norrathep Rattanavipanon, Ivan De Oliveira Nunes

The rise in IoT-driven distributed data analytics, coupled with increasing privacy concerns, has led to a demand for effective privacy-preserving and federated data collection/model training mechanisms. In response, approaches such as Federated Learning (FL) and Local Differential Privacy (LDP) have been proposed and attracted much attention over the past few years. However, they still share the common limitation of being vulnerable to poisoning attacks, wherein adversaries compromising edge devices feed forged (a.k.a. poisoned) data to aggregation back-ends, undermining the integrity of FL/LDP results. In this work, we propose a system-level approach to remedy this issue based on a novel security notion of Proofs of Stateful Execution (PoSX) for IoT/embedded devices' software. To realize the PoSX concept, we design SLAPP: a System-Level Approach for Poisoning Prevention. SLAPP leverages commodity security features of embedded devices, in particular the ARM TrustZone-M security extensions, to verifiably bind raw sensed data to their correct usage as part of FL/LDP edge device routines. As a consequence, it offers robust security guarantees against poisoning. Our evaluation, based on real-world prototypes featuring multiple cryptographic primitives and data collection schemes, showcases SLAPP's security and low overhead.

en cs.CR
DOAJ Open Access 2023
Causal association between arsenic metabolism and non-alcoholic fatty liver disease based on Mendelian randomization

Yuenan LIU, Weiya LI, Yan YAN et al.

Background: Animal experimental studies have shown that arsenic exposure contributes to hepatic lipid accumulation, but epidemiological findings are inconsistent, and the role of arsenic metabolism is still unclear. Objective: To evaluate the potential causal association between arsenic metabolism and non-alcoholic fatty liver disease (NAFLD). Methods: A total of 1020 participants from the Dongfeng-Tongji cohort with urinary arsenic metabolite and genotype data were included in the present study (NAFLD group, n=529; non-NAFLD group, n=491). Epidemiological information was obtained by questionnaire survey, liver ultrasound results were obtained by physical examination, arsenic metabolites in urine were measured by high-performance liquid chromatography-inductively coupled plasma mass spectrometry, and DNA from leukocytes was extracted for genome-wide genotyping. NAFLD was diagnosed if the following two criteria were met: (1) positive fatty liver according to abdominal ultrasonography; (2) exclusion of participants reporting a history of excessive alcohol consumption (≥30 g·d−1 for men; ≥20 g·d−1 for women) and/or fatty liver with other known causes. A genetic risk score (GRS) and a weighted genetic risk score (w-GRS) were constructed using single nucleotide polymorphisms (SNPs) related to arsenic metabolism reported in previous studies to predict estimated arsenic metabolism. Logistic regression models were used to analyze the association between arsenic metabolism and NAFLD; linear regression models were used to analyze the association between GRS/w-GRS and arsenic metabolism; and Mendelian randomization analysis was performed using the GRS method, inverse variance weighting, Egger regression, and the weighted median method. Results: The mean age of the 1020 participants was (68.14±7.45) years, 64% were female, and 529 (51.9%) were NAFLD cases. The median (P25, P75) level of total arsenic in urine was 18.34 (11.93, 27.14) μg·L−1, with a geometric mean and standard deviation of (15.86±1.81) μg·L−1. The proportions of inorganic arsenic (iAs%), monomethylarsenic (MMA%), and dimethylarsenic (DMA%) in total arsenic were 13.90%±9.90%, 9.49%±4.97%, and 76.60%±11.00%, respectively. After adjustment for potential confounders, the ORs (95% CIs) for NAFLD risk per standard deviation increase in iAs% and MMA% were 1.21 (1.06, 1.38) and 0.62 (0.51, 0.74), respectively. Each unit increase in the GRS constructed from 77 SNPs was associated with a 0.16% increase in MMA% and a 0.19% decrease in DMA%, and each unit increase in the w-GRS was associated with a 0.17% increase in MMA% and a 0.14% decrease in DMA%. After further exclusion of SNPs with linkage disequilibrium (r2>0.3) or pleiotropic effects, a total of 25 SNPs were included in the Mendelian randomization analysis. The GRS method showed that the OR (95%CI) for NAFLD risk per unit increase in expected MMA% was 0.95 (0.90, 0.99), and the inverse variance weighting method also showed a significant association between MMA% and NAFLD, with an OR (95%CI) of 0.91 (0.84, 0.99). Conclusion: There is a negative causal association between MMA% and NAFLD.
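A genetic risk score of the kind used above is simply a (possibly weighted) sum over risk-allele counts. The sketch below uses hypothetical allele counts and effect sizes for five SNPs (the study itself used 77 SNPs with betas from prior reports), with one common rescaling convention for the weighted score:

```python
# Each SNP: (risk-allele count in {0, 1, 2}, effect size beta from prior studies).
# All values here are hypothetical, for illustration only.
snps = [(2, 0.12), (1, 0.05), (0, 0.30), (2, 0.08), (1, 0.20)]

# Unweighted GRS: sum of risk-allele counts across SNPs.
grs = sum(count for count, _ in snps)

# Weighted GRS: beta-weighted allele counts, rescaled by the number of SNPs
# over the total beta so the two scores sit on a comparable scale.
total_beta = sum(beta for _, beta in snps)
w_grs = sum(count * beta for count, beta in snps) * len(snps) / total_beta
```

For these toy values the unweighted score is 6 and the weighted score about 4.33; in the Mendelian randomization step, the score then serves as the genetic instrument for the arsenic-metabolism exposure.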

Medicine (General), Toxicology. Poisons
DOAJ Open Access 2023
Olive juice dry extract containing hydroxytyrosol, as a nontoxic and safe substance: Results from pre-clinical studies and review of toxicological studies

Marie Liamin, Maria Pilar Lara, Olivier Michelet et al.

Products derived from olives, such as the raw fruit and oils, are widely consumed due to their taste, and purported nutritional/health benefits. Phenolic compounds, especially hydroxytyrosol (HT), have been proposed as one of the key substances involved in these effects. An olive juice extract, standardized to contain 20% HT (“OE20HT”), was produced to investigate its health benefits. The aim of this study was to demonstrate the genotoxic safety of this ingredient based on in vitro Ames assay and in vitro micronucleus assay. Results indicated that OE20HT was not mutagenic at concentrations of up to 5000 µg/plate, with or without metabolic activation, and was neither aneugenic nor clastogenic after 3-hour exposure at concentrations of up to 60 µg/mL with or without metabolic activation, or after 24-hour exposure at concentrations of up to 40 µg/mL. To further substantiate the safety of OE20HT following ingestion without conducting additional animal studies, a comprehensive literature review was conducted. No safety concerns were identified based on acute or sub-chronic studies in animals, including reproductive and developmental studies. These results were supported by clinical studies demonstrating the absence of adverse effects after oral supplementation with olive extracts or HT. Based on in vitro data and the literature review, the OE20HT extract is therefore considered as safe for human consumption at doses up to 2.5 mg/kg body weight/day.

Toxicology. Poisons
arXiv Open Access 2023
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models

Jiongxiao Wang, Junlin Wu, Muhao Chen et al.

Reinforcement Learning with Human Feedback (RLHF) is a methodology designed to align Large Language Models (LLMs) with human preferences and plays an important role in LLM alignment. Despite its advantages, RLHF relies on human annotators to rank text, which can introduce security vulnerabilities if an adversarial annotator (i.e., an attacker) manipulates the ranking scores by up-ranking malicious text to steer the LLM adversarially. To red-team RLHF against human-preference data poisoning, we propose RankPoison, a poisoning attack method that flips preference ranks over candidate selections to elicit certain malicious behaviors (e.g., generating longer sequences, which can increase computational cost). With the poisoned dataset generated by RankPoison, we can perform poisoning attacks on LLMs so that they generate longer outputs without hurting the original safety alignment performance. Moreover, applying RankPoison, we also successfully implement a backdoor attack in which LLMs generate longer answers to questions containing a trigger word. Our findings highlight critical security challenges in RLHF, underscoring the necessity of more robust alignment methods for LLMs.

en cs.AI, cs.CL
DOAJ Open Access 2022
Proapoptotic effect of nanoliposomes loaded with hydroalcoholic extract of Hypericum perforatum L. in combination with curcumin on SW48 and SW1116 colorectal cancer cell lines

Farzaneh Rezaeinejad, Hasan Bardania, Farideh Ghalamfarsa et al.

Background: Colorectal cancer (CRC) continues to be a leading cause of cancer-related death in the world, and approximately 70-75% of patients with metastatic colorectal cancer survive for up to 1 year after diagnosis. Curcumin (CUR) is a potential chemotherapeutic agent used to treat cancer. There is ample evidence of the inhibitory effects of Hypericum perforatum L. extract (HPE) on cell proliferation and its effects on the induction of apoptosis in various human cancer cell lines. Objective: The purpose of this study was to investigate the proapoptotic effect of HPE and its nanoliposomes (HPE-Lip) and to examine the synergistic and therapeutic potential of HPE/CUR-loaded nanoliposomes (HPE/CUR-Lip). Methods: In the present in vitro study, SW1116 and SW48 cell lines were cultured and then treated with different doses of HPE, CUR, bare liposome alone (Lip-Sol), and nanoliposomes loaded with HPE (HPE-Lip), CUR (CUR-Lip), or CUR/HPE (HPE/CUR-Lip) for 24, 48, and 72 hours. Cytotoxicity was measured by MTT assay and apoptosis rate by an annexin V-FITC/propidium iodide double-staining method using flow cytometry. Results: The results showed that cell viability was inhibited in a dose- and time-dependent manner in all groups compared to the control group. The use of nanoliposomes improved the outcomes. HPE/CUR-Lip exhibited higher in vitro cytotoxic and proapoptotic activity against SW1116 and SW48 cell lines (P < 0.05). Conclusion: The findings of this study suggest that the HPE/CUR-Lip complex could provide a potential strategy for achieving a synergistic effect of HPE and CUR in the treatment of colorectal cancer.

Therapeutics. Pharmacology, Toxicology. Poisons
arXiv Open Access 2022
Using deep convolutional neural networks to classify poisonous and edible mushrooms found in China

Baiming Zhang, Ying Zhao, Zhixiang Li

Because of their abundance of amino acids, polysaccharides, and many other nutrients that benefit human beings, mushrooms are deservedly popular as dietary cuisine both worldwide and in China. However, people who eat poisonous fungi by mistake may suffer from nausea, vomiting, mental disorder, acute anemia, or even death. Each year in China, around 8000 people become sick, and about 70 die, as a result of eating toxic mushrooms by mistake. Among the thousands of known kinds of mushrooms, only around 900 types are edible, so without specialized knowledge the probability of eating toxic mushrooms by mistake is very high. Most people assume that the defining characteristic of poisonous mushrooms is a bright colour; however, some kinds do not fit this trait. In order to prevent people from eating these poisonous mushrooms, we propose to use deep learning methods to indicate whether a mushroom is toxic by analyzing hundreds of smartphone pictures of edible and toxic mushrooms. We crowdsource a mushroom image dataset that contains 250 images of poisonous mushrooms and 200 images of edible mushrooms. A Convolutional Neural Network (CNN) is a specialized type of artificial neural network that uses a mathematical operation called convolution in place of general matrix multiplication in at least one of its layers; because it can generate relatively precise results from large collections of images, it is well suited to our task. The experimental results demonstrate that the proposed model has high credibility and can provide a decision-making basis for the selection of edible fungi, so as to reduce the morbidity and mortality caused by eating poisonous mushrooms. We also open-source our hand-collected mushroom image dataset so that peer researchers can deploy their own models to advance poisonous mushroom identification.

en cs.CV, cs.LG

Page 23 of 40,054