Results for "Toxicology. Poisons"

Showing 20 of ~801163 results · from arXiv, DOAJ, Semantic Scholar, CrossRef

arXiv Open Access 2026
Towards Poisoning Robustness Certification for Natural Language Generation

Mihnea Ghitu, Matthew Wicker

Understanding the reliability of natural language generation is critical for deploying foundation models in security-sensitive domains. While certified poisoning defenses provide provable robustness bounds for classification tasks, they are fundamentally ill-equipped for autoregressive generation: they cannot handle sequential predictions or the exponentially large output space of language models. To establish a framework for certified natural language generation, we formalize two security properties: stability (robustness to any change in generation) and validity (robustness to targeted, harmful changes in generation). We introduce Targeted Partition Aggregation (TPA), the first algorithm to certify validity against targeted attacks by computing the minimum poisoning budget needed to induce a specific harmful class, token, or phrase. Further, we extend TPA to provide tighter guarantees for multi-turn generations using mixed integer linear programming (MILP). Empirically, we demonstrate TPA's effectiveness across diverse settings, including certifying the validity of agent tool-calling when adversaries modify up to 0.5% of the dataset and certifying 8-token stability horizons in preference-based alignment. Though inference-time latency remains an open challenge, our contributions enable certified deployment of language models in security-critical applications.

en cs.LG
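As background for the certification idea above: partition-aggregation defenses train an ensemble on disjoint shards of the training data, so each poisoned sample can corrupt at most one member's vote, and a minimum poisoning budget for a targeted outcome can be bounded from the vote counts. The sketch below illustrates that general bound only; it is not the paper's TPA algorithm, and the vote setup is a made-up example.

```python
# Illustrative minimum-budget bound under partition aggregation
# (generic sketch, not the paper's TPA implementation).
# Assumption: each poisoned training sample lands in exactly one
# partition, so flipping k samples changes at most k partition votes.
from collections import Counter

def min_targeted_budget(votes: list[str], target: str) -> int:
    """Lower-bound the number of poisoned samples needed to make
    `target` the plurality vote of the partition ensemble."""
    counts = Counter(votes)
    current_winner, top = counts.most_common(1)[0]
    if current_winner == target:
        return 0  # target already wins; no poisoning needed
    gap = top - counts[target]
    # Each corrupted partition can both remove a vote from the winner
    # and add one to the target, closing the gap by 2 per sample.
    # Ties are broken adversarially (in the attacker's favor).
    return (gap + 1) // 2

# Example: 7 partition models vote over {"benign", "harmful"}.
votes = ["benign"] * 5 + ["harmful"] * 2
print(min_targeted_budget(votes, "harmful"))  # -> 2
```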
arXiv Open Access 2026
Poisoned Acoustics

Harrison Dahme

Training-data poisoning attacks can induce targeted, undetectable failure in deep neural networks by corrupting a vanishingly small fraction of training labels. We demonstrate this on acoustic vehicle classification using the MELAUDIS urban intersection dataset (approx. 9,600 audio clips, 6 classes): a compact 2-D convolutional neural network (CNN) trained on log-mel spectrograms achieves 95.7% Attack Success Rate (ASR) -- the fraction of target-class test samples misclassified under the attack -- on a Truck-to-Car label-flipping attack at just p=0.5% corruption (48 records), with zero detectable change in aggregate accuracy (87.6% baseline; 95% CI: 88-100%, n=3 seeds). We prove this stealth is structural: the maximum accuracy drop from a complete targeted attack is bounded above by the minority class fraction (beta). For real-world class imbalances (Truck approx. 3%), this bound falls below training-run noise, making aggregate accuracy monitoring provably insufficient regardless of architecture or attack method. A companion backdoor trigger attack reveals a novel trigger-dominance collapse: when the target class is a dataset minority, the spectrogram patch trigger becomes functionally redundant--clean ASR equals triggered ASR, and the attack degenerates to pure label flipping. We formalize the ML training pipeline as an attack surface and propose a trust-minimized defense combining content-addressed artifact hashing, Merkle-tree dataset commitment, and post-quantum digital signatures (ML-DSA-65/CRYSTALS-Dilithium3, NIST FIPS 204) for cryptographically verifiable data provenance.

en cs.CR, cs.AI
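The structural-stealth bound above is easy to verify numerically: if the targeted class makes up a fraction β of the test set, even a complete targeted attack can lower aggregate accuracy by at most β. A quick check using round numbers close to those reported in the abstract:

```python
# Worked check of the stealth bound: accuracy drop <= beta (minority
# class fraction), with round numbers close to those reported above.
beta = 0.03          # Truck class is ~3% of the test set
baseline_acc = 0.876 # reported aggregate baseline accuracy
asr = 0.957          # attack success rate on the target class

# Worst-case aggregate drop: every successfully attacked target-class
# sample was previously classified correctly.
max_drop = beta * asr          # <= beta regardless of ASR
print(f"max accuracy drop: {max_drop:.4f}")   # ~0.0287, i.e. <3 points
print(f"post-attack accuracy >= {baseline_acc - max_drop:.3f}")

# With run-to-run noise across seeds on the order of a few points,
# a <3-point dip is indistinguishable from training variance, which
# is why aggregate-accuracy monitoring cannot detect the attack.
```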
arXiv Open Access 2026
RefineRAG: Word-Level Poisoning Attacks via Retriever-Guided Text Refinement

Ziye Wang, Guanyu Wang, Kailong Wang

Retrieval-Augmented Generation (RAG) significantly enhances Large Language Models (LLMs), but simultaneously exposes a critical vulnerability to knowledge poisoning attacks. Existing attack methods like PoisonedRAG remain detectable due to coarse-grained separate-and-concatenate strategies. To bridge this gap, we propose RefineRAG, a novel framework that treats poisoning as a holistic word-level refinement problem. It operates in two stages: Macro Generation produces toxic seeds guaranteed to induce target answers, while Micro Refinement employs a retriever-in-the-loop optimization to maximize retrieval priority without compromising naturalness. Evaluations on NQ and MSMARCO demonstrate that RefineRAG achieves state-of-the-art effectiveness, securing a 90% Attack Success Rate on NQ, while registering the lowest grammar errors and repetition rates among all baselines. Crucially, our proxy-optimized attacks successfully transfer to black-box victim systems, highlighting a severe practical threat.

en cs.CR
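The "retriever-in-the-loop" refinement stage described above amounts to hill-climbing on retrieval score: propose word-level edits to a toxic seed passage and keep those that raise its similarity to the target query. A minimal sketch with a bag-of-words cosine retriever standing in for a real dense retriever; the scoring model, candidate vocabulary, and passage are all placeholders, not the paper's setup.

```python
# Toy retriever-guided word-level refinement (illustration only; the
# paper optimizes against a real dense retriever, not this stand-in).
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieval_score(query: str, passage: str) -> float:
    return cosine(Counter(query.lower().split()),
                  Counter(passage.lower().split()))

def refine(query: str, passage: str, vocab: list[str], steps: int = 20) -> str:
    """Greedy word-level edits that increase retrieval priority."""
    words = passage.split()
    for _ in range(steps):
        base = retrieval_score(query, " ".join(words))
        best_gain, best_edit = 0.0, None
        for i in range(len(words)):
            for cand in vocab:                     # candidate substitutions
                trial = words[:i] + [cand] + words[i + 1:]
                gain = retrieval_score(query, " ".join(trial)) - base
                if gain > best_gain:
                    best_gain, best_edit = gain, (i, cand)
        if best_edit is None:
            break                                  # local optimum reached
        i, cand = best_edit
        words[i] = cand
    return " ".join(words)

query = "who wrote the declaration of independence"
seed = "The document was written by a fabricated author named X."
print(refine(query, seed, vocab=query.split()))
```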
arXiv Open Access 2026
Robust Aggregation for Federated Sequential Recommendation with Sparse and Poisoned Data

Minh Hieu Nguyen

Federated sequential recommendation distributes model training across user devices so that behavioural data remains local, reducing privacy risks. Yet, this setting introduces two intertwined difficulties. On the one hand, individual clients typically contribute only short and highly sparse interaction sequences, limiting the reliability of learned user representations. On the other hand, the federated optimisation process is vulnerable to malicious or corrupted client updates, where poisoned gradients can significantly distort the global model. These challenges are particularly severe in sequential recommendation, where temporal dynamics further complicate signal aggregation. To address this problem, we propose a robust aggregation framework tailored for federated sequential recommendation under sparse and adversarial conditions. Instead of relying on standard averaging, our method introduces a defence-aware aggregation mechanism that identifies and down-weights unreliable client updates while preserving informative signals from sparse but benign participants. The framework incorporates representation-level constraints to stabilise user and item embeddings, preventing poisoned or anomalous contributions from dominating the global parameter space. In addition, we integrate sequence-aware regularisation to maintain temporal coherence in user modelling despite limited local observations.

en cs.IR
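One common way to realize the "down-weight unreliable updates" idea described above is to weight each client's update by its agreement with a robust reference such as the coordinate-wise median. The NumPy sketch below follows that generic recipe; the abstract does not specify the paper's exact mechanism, so this is an illustration under that assumption, not their method.

```python
# Generic defence-aware aggregation sketch: weight client updates by
# cosine similarity to the coordinate-wise median update, after norm
# clipping. Illustrative only; not the paper's exact mechanism.
import numpy as np

def robust_aggregate(updates: np.ndarray, clip: float = 1.0) -> np.ndarray:
    """updates: (n_clients, dim) array of local model deltas."""
    # 1) Norm-clip so no single client dominates by magnitude.
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    clipped = updates * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # 2) Robust reference direction: coordinate-wise median.
    ref = np.median(clipped, axis=0)
    # 3) Weight clients by cosine similarity to the reference,
    #    zeroing out those pointing away from it (likely poisoned).
    sims = clipped @ ref / (np.linalg.norm(clipped, axis=1)
                            * np.linalg.norm(ref) + 1e-12)
    weights = np.clip(sims, 0.0, None)
    weights /= weights.sum() + 1e-12
    return weights @ clipped

rng = np.random.default_rng(0)
benign = rng.normal(0.1, 0.05, size=(8, 10))   # sparse-but-benign clients
poisoned = -5.0 * np.ones((2, 10))             # malicious, flipped updates
agg = robust_aggregate(np.vstack([benign, poisoned]))
print(np.round(agg, 3))  # close to the benign mean; poison down-weighted
```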
DOAJ Open Access 2026
Effects and mechanisms of pesticide carbendazim on osteogenic differentiation

Liming XUE, Jiale XU, Jingxian ZHOU et al.

Background: Carbendazim (CBZ), a widely used benzimidazole fungicide, has raised increasing concern over the health risks associated with its residues. However, the toxic effects of CBZ on the skeletal system and the associated mechanisms have not been reported. Objective: To elucidate the effects of carbendazim on osteogenic differentiation and its underlying mechanisms. Methods: MC3T3-E1 mouse pre-osteoblastic cells were treated with 1, 10, and 100 μmol·L⁻¹ CBZ for 24 h to examine cell viability, alkaline phosphatase (ALP) activity, bone nodule formation, reactive oxygen species (ROS) levels, malondialdehyde (MDA) content, and nitric oxide synthase (NOS) activity. Transcriptomics was used to identify differentially expressed genes (DEGs) in osteoblasts exposed to CBZ. Kyoto Encyclopedia of Genes and Genomes (KEGG) and gene set enrichment analysis (GSEA) were employed to analyze the potential biological pathways of the DEGs. Real-time polymerase chain reaction (RT-PCR) and Western blot were used to validate changes in gene and protein expression. Results: Exposure to 10 and 100 μmol·L⁻¹ CBZ significantly reduced osteoblast viability, ALP activity, bone nodule formation, and NOS activity, while increasing intracellular ROS levels. CBZ at 100 μmol·L⁻¹ significantly elevated MDA levels (P < 0.05). Transcriptomic analysis revealed 385 significantly DEGs after treatment with 1 μmol·L⁻¹ CBZ. KEGG enrichment analysis showed that CBZ significantly affects hormone regulation pathways (including parathyroid hormone, growth hormone, dopamine, and oxytocin), the mitogen-activated protein kinase (MAPK) and cyclic GMP-dependent protein kinase G (cGMP-PKG) signaling pathways, focal adhesion and adherens junctions, as well as the NOD-like receptor signaling pathway and the mRNA surveillance (NMD) pathway. GSEA showed that CBZ significantly inhibited bile acid metabolism and the Wnt/β-catenin pathway in osteoblasts. Validation experiments demonstrated that CBZ significantly suppressed the mRNA expression of Wnt3a and β-catenin, as well as the protein expression of Runx2 and Osterix, in the Wnt/β-catenin pathway. Conclusion: CBZ exposure exhibits potential skeletal toxicity; mechanistically, it promotes oxidative stress and interferes with the Wnt/β-catenin pathway in osteogenic differentiation, thereby inhibiting the bone-forming function of osteoblasts.

Medicine (General), Toxicology. Poisons
DOAJ Open Access 2026
Assessment of the levels of oxidative and anti-oxidative stress markers in subjects with cerebrovascular accident

Nnamani Vitus Ikenna, Okolonkwo Benjamin Nnamdi, Okoro Ikechukwu Jacob et al.

Background: Stroke, a leading cause of global mortality and long-term disability, is characterized by a disturbance in cerebral blood supply, often leading to significant oxidative stress and inflammatory responses. Objectives: This study investigated the levels of oxidative and antioxidative stress markers in cerebrovascular accident (CVA) patients to elucidate their roles in disease progression and potential for intervention. Methods: Sixty CVA patients (30 males, 30 females, aged 40-80 years) and forty age-matched healthy controls were enrolled. Serum concentrations of malondialdehyde (MDA), Vitamin C (Ascorbic acid), and Vitamin E (Alpha-tocopherol) were determined using spectrophotometric methods. Statistical analysis was performed using SPSS version 18, with significance set at p<0.05. Results: The findings showed significantly higher serum MDA concentrations in CVA patients (7.14±0.03) compared to controls (6.83±0.03), indicating increased lipid peroxidation and oxidative stress. Conversely, CVA patients exhibited significantly lower serum Vitamin C (Ascorbic acid) levels (1.18±0.04) than controls (1.81±0.04), suggesting a compromised antioxidant status. While Vitamin E (Alpha-tocopherol) showed a slight, non-significant decrease in CVA patients, β-carotene levels were not statistically different between groups. Furthermore, female CVA patients had significantly higher serum MDA levels than male CVA patients, while male CVA patients presented with significantly higher serum Vitamin C concentrations. Conclusion: These findings provide strong evidence for the presence of significant oxidative stress and a diminished antioxidant defense in CVA patients, underscoring the critical role of free radical-mediated injury in stroke pathogenesis. The observed sex-based differences warrant further investigation. This study highlights the potential for therapeutic strategies targeting oxidative stress to improve outcomes in stroke management.

Therapeutics. Pharmacology, Toxicology. Poisons
arXiv Open Access 2025
Beyond Natural Language Perplexity: Detecting Dead Code Poisoning in Code Generation Datasets

Chi-Chien Tsai, Chia-Mu Yu, Ying-Dar Lin et al.

The increasing adoption of large language models (LLMs) for code-related tasks has raised concerns about the security of their training datasets. One critical threat is dead code poisoning, where syntactically valid but functionally redundant code is injected into training data to manipulate model behavior. Such attacks can degrade the performance of neural code search systems, leading to biased or insecure code suggestions. Existing detection methods, such as token-level perplexity analysis, fail to effectively identify dead code because of the structural and contextual characteristics of programming languages. In this paper, we propose DePA (Dead Code Perplexity Analysis), a novel line-level detection and cleansing method tailored to the structural properties of code. DePA computes line-level perplexity by leveraging the contextual relationships between code lines and identifies anomalous lines by comparing their perplexity to the overall distribution within the file. Our experiments on benchmark datasets demonstrate that DePA significantly outperforms existing methods, achieving a 0.14-0.19 improvement in detection F1-score and a 44-65% increase in poisoned-segment localization precision. Furthermore, DePA improves detection speed by 0.62-23x, making it practical for large-scale dataset cleansing. Overall, by addressing the unique challenges of dead code poisoning, DePA provides a robust and efficient solution for safeguarding the integrity of code generation model training datasets.

en cs.CL
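The line-level idea above can be prototyped directly: score each line of a file with a causal language model, conditioned on the preceding lines, and flag lines whose perplexity is an outlier relative to the file's distribution. A sketch using a small Hugging Face model; the context window, threshold, and model choice are placeholders, not DePA's actual configuration.

```python
# Line-level perplexity outlier detection, in the spirit of the
# line-level analysis described above (not the DePA implementation).
# Assumes `transformers` and a small causal LM such as gpt2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def line_perplexities(code: str) -> list[tuple[float, str]]:
    scores = []
    lines = [l for l in code.splitlines() if l.strip()]
    for i, line in enumerate(lines):
        context = "\n".join(lines[max(0, i - 5):i])   # preceding lines
        ctx_ids = tok(context + "\n" if context else "",
                      return_tensors="pt").input_ids
        line_ids = tok(line, return_tensors="pt").input_ids
        ids = torch.cat([ctx_ids, line_ids], dim=1)
        labels = ids.clone()
        labels[:, : ctx_ids.shape[1]] = -100          # score only the line
        loss = model(ids, labels=labels).loss         # mean NLL of line tokens
        scores.append((float(torch.exp(loss)), line))
    return scores

code = "def add(a, b):\n    return a + b\nif 1 == 2: print('dead branch')"
ppls = line_perplexities(code)
mean = sum(p for p, _ in ppls) / len(ppls)
for ppl, line in ppls:                                # flag outliers
    flag = "SUSPECT" if ppl > 2 * mean else "ok     "
    print(f"{flag} ppl={ppl:8.1f}  {line}")
```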
arXiv Open Access 2025
PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models

Michael-Andrei Panaitescu-Liess, Pankayaraj Pathmanathan, Yigitcan Kaya et al.

As the capabilities of large language models (LLMs) continue to expand, their usage has become increasingly prevalent. However, as reflected in numerous ongoing lawsuits regarding LLM-generated content, addressing copyright infringement remains a significant challenge. In this paper, we introduce PoisonedParrot: the first stealthy data poisoning attack that induces an LLM to generate copyrighted content even when the model has not been directly trained on the specific copyrighted material. PoisonedParrot integrates small fragments of copyrighted text into the poison samples using an off-the-shelf LLM. Despite its simplicity, evaluated in a wide range of experiments, PoisonedParrot is surprisingly effective at priming the model to generate copyrighted content with no discernible side effects. Moreover, we discover that existing defenses are largely ineffective against our attack. Finally, we make the first attempt at mitigating copyright-infringement poisoning attacks by proposing a defense: ParrotTrap. We encourage the community to explore this emerging threat model further.

en cs.LG, cs.CR
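The core mechanic described above, weaving short fragments of the protected text into otherwise benign training samples, can be sketched without any LLM: sample k-grams from the copyrighted passage and splice them into carrier text. The paper uses an off-the-shelf LLM to make the result fluent; this sketch skips that step and is purely an illustration of the threat model.

```python
# Naive fragment-weaving sketch (illustration of the threat model only;
# the actual attack uses an LLM to blend fragments fluently).
import random

def weave_fragments(copyrighted: str, carrier: str, k: int = 4,
                    n_fragments: int = 2, seed: int = 0) -> str:
    rng = random.Random(seed)
    src = copyrighted.split()
    out = carrier.split()
    for _ in range(n_fragments):
        start = rng.randrange(0, max(1, len(src) - k))
        fragment = src[start:start + k]          # a k-gram of protected text
        pos = rng.randrange(0, len(out) + 1)
        out[pos:pos] = fragment                  # splice into the carrier
    return " ".join(out)

protected = "It was the best of times it was the worst of times"
benign = "The committee met on Tuesday to review the annual budget."
print(weave_fragments(protected, benign))
```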
arXiv Open Access 2025
PoisonArena: Uncovering Competing Poisoning Attacks in Retrieval-Augmented Generation

Liuji Chen, Xiaofang Yang, Yuanzhuo Lu et al.

Retrieval-Augmented Generation (RAG) systems, widely used to improve the factual grounding of large language models (LLMs), are increasingly vulnerable to poisoning attacks, where adversaries inject manipulated content into the retriever's corpus. While prior research has predominantly focused on single-attacker settings, real-world scenarios often involve multiple, competing attackers with conflicting objectives. In this work, we introduce PoisonArena, the first benchmark to systematically study and evaluate competing poisoning attacks in RAG. We formalize the multi-attacker threat model, where attackers vie to control the answer to the same query using mutually exclusive misinformation. PoisonArena leverages the Bradley-Terry model to quantify each method's competitive effectiveness in such adversarial environments. Through extensive experiments on the Natural Questions and MS MARCO datasets, we demonstrate that many attack strategies successful in isolation fail under competitive pressure. Our findings highlight the limitations of conventional evaluation metrics like Attack Success Rate (ASR) and F1 score and underscore the need for competitive evaluation to assess real-world attack robustness. PoisonArena provides a standardized framework to benchmark and develop future attack and defense strategies under more realistic, multi-adversary conditions.

en cs.IR
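The Bradley-Terry model used above to score competing attacks posits that attack i beats attack j with probability p_i / (p_i + p_j), and fits the strengths p from pairwise win counts. A minimal fit via the classic minorization-maximization update, on made-up win counts:

```python
# Minimal Bradley-Terry fit via the standard MM iteration
# (Hunter 2004). Win counts below are made up for illustration.
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200) -> np.ndarray:
    """wins[i, j] = number of times attack i beat attack j."""
    n = wins.shape[0]
    p = np.ones(n)
    games = wins + wins.T                       # total pairings per pair
    for _ in range(iters):
        new_p = np.empty(n)
        for i in range(n):
            denom = sum(games[i, j] / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p[i] = wins[i].sum() / denom
        p = new_p / new_p.sum()                 # normalize for identifiability
    return p

# Three attacks; attack 0 dominates head-to-head matchups.
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]])
print(np.round(bradley_terry(wins), 3))        # strengths sum to 1
```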
DOAJ Open Access 2025
The role of the leukocyte glucose index in predicting clinical outcomes in acute methanol toxicity

Ola Elsayed Nafea, Walaa Gomaa Abdelhamid, Fatma Ibrahim

Introduction: Acute methanol poisoning is a global health issue. This study was designed to explore the role of the leukocyte glucose index (LGI) in predicting clinical outcomes (in-hospital mortality, visual impairment, and length of hospital stay) in acute methanol toxicity, and to evaluate the association between LGI and all initial patient characteristics. Patients and methods: This was a retrospective analysis of 82 acutely methanol-intoxicated patients treated between January 2021 and December 2023. Patients were categorized by on-admission LGI tertiles into low, intermediate, and high groups. Results: Approximately 27% (22 out of 82) of patients died during hospitalization, most of them in the high LGI group. No significant differences existed in the proportions of patients with total vision loss or in the length of hospital stay. The majority of the undesirable findings occurred in patients in either the intermediate or high LGI groups. LGI discriminated well between survivors and non-survivors, with an area under the curve of 0.808; however, it had no discriminatory power for predicting adverse visual outcomes. Conclusion: LGI can serve as a valuable tool for predicting early in-hospital mortality in acute methanol poisoning.

Toxicology. Poisons
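The analysis pipeline above (index, tertiles, discrimination) is straightforward to reproduce on synthetic numbers. Note the abstract does not restate the LGI formula; in the broader literature it is commonly computed as (glucose [mg/dL] × leukocyte count [10³/µL]) / 1000, and that definition is an assumption here. All patient values in the sketch are fabricated placeholders.

```python
# Sketch of the LGI workflow on synthetic numbers. The LGI formula is
# an assumption (commonly glucose [mg/dL] x WBC [10^3/uL] / 1000); the
# abstract does not restate it. All values below are fabricated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 82
glucose = rng.normal(140, 40, n).clip(60, 400)     # mg/dL
wbc = rng.normal(9, 3, n).clip(3, 25)              # 10^3 cells/uL
lgi = glucose * wbc / 1000

# Toy outcome model: higher LGI -> higher simulated mortality risk.
death = rng.random(n) < 1 / (1 + np.exp(-(lgi - lgi.mean())))

# Tertile grouping, as in the study design.
t1, t2 = np.quantile(lgi, [1 / 3, 2 / 3])
tertile = np.digitize(lgi, [t1, t2])               # 0=low, 1=mid, 2=high
for g in range(3):
    print(f"tertile {g}: mortality {death[tertile == g].mean():.2f}")

# Discrimination of mortality by LGI (the study reports AUC = 0.808;
# this synthetic run will not reproduce that figure).
print("AUC:", round(roc_auc_score(death, lgi), 3))
```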
DOAJ Open Access 2025
In vitro models for gonadal and placental toxicity: A review and industry survey

Samuel Madureira Silva, Steven Van Cruchten, Freddy Van Goethem et al.

In recent years, there has been a significant push towards the development of animal-free methods in toxicology. Despite this progress, the adoption of such methods in safety assessment practices remains limited, particularly in the context of gonadal and placental toxicity. This paper reviews current in vitro models relevant to gonadal (gametogenesis and steroidogenesis) and placental biology, and potentially applicable to developmental and reproductive toxicology (DART). Additionally, we present the results of a survey conducted among DART experts (n = 16), examining current practices and perceptions in industry regarding these in vitro methodologies. The findings indicate a predominant reliance on animal models, largely driven by regulatory requirements, despite concerns about their ability to reliably predict human outcomes. Respondents reported limited familiarity with and confidence in available in vitro models for gonadal and placental toxicity, yet expressed optimism about their future integration. These findings underscore the need for increased awareness of gonadal and placental in vitro models, particularly among DART risk assessors, to facilitate a shift toward more reliable and human-relevant risk assessments.

Toxicology. Poisons
arXiv Open Access 2024
Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing

Ehsan Lari, Reza Arablouei, Vinay Chakravarthi Gogineni et al.

Federated learning (FL) allows training machine learning models on distributed data without compromising privacy. However, FL is vulnerable to model-poisoning attacks where malicious clients tamper with their local models to manipulate the global model. In this work, we investigate the resilience of the partial-sharing online FL (PSO-Fed) algorithm against such attacks. PSO-Fed reduces communication overhead by allowing clients to share only a fraction of their model updates with the server. We demonstrate that this partial sharing mechanism has the added advantage of enhancing PSO-Fed's robustness to model-poisoning attacks. Through theoretical analysis, we show that PSO-Fed maintains convergence even under Byzantine attacks, where malicious clients inject noise into their updates. Furthermore, we derive a formula for PSO-Fed's mean square error, considering factors like stepsize, attack probability, and the number of malicious clients. Interestingly, we find a non-trivial optimal stepsize that maximizes PSO-Fed's resistance to these attacks. Extensive numerical experiments confirm our theoretical findings and showcase PSO-Fed's superior performance against model-poisoning attacks compared to other leading FL algorithms.

en cs.LG, cs.CR
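Partial sharing as described above means each client transmits only a subset of model coordinates per round, which both cuts communication and limits how much of the global model a Byzantine client can perturb at once. A toy NumPy round under those assumptions; this is not the authors' exact PSO-Fed recursion, and all quantities are illustrative.

```python
# Toy partial-sharing FL rounds with a Byzantine client injecting noise
# (illustrative; not the exact PSO-Fed update rule from the paper).
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, share = 20, 10, 0.25            # share 25% of coords/round
global_w = np.zeros(dim)
true_w = rng.normal(size=dim)                   # reference target model
byzantine = {0}                                 # client 0 is malicious

for rnd in range(200):
    agg, counts = np.zeros(dim), np.zeros(dim)
    for c in range(n_clients):
        local = global_w + 0.5 * (true_w - global_w)    # honest local step
        if c in byzantine:
            local = local + rng.normal(0, 5.0, dim)     # noise injection
        # Each client shares only a random fraction of coordinates.
        mask = rng.random(dim) < share
        agg[mask] += local[mask]
        counts[mask] += 1
    shared = counts > 0
    global_w[shared] = agg[shared] / counts[shared]     # average per coord

# Error stays bounded despite the Byzantine client, since each round it
# can only touch the (few) coordinates it happens to share.
print("error vs true model:", np.linalg.norm(global_w - true_w))
```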
arXiv Open Access 2024
EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning

Syed Irfan Ali Meerza, Jian Liu

Federated Learning (FL) is a technique that allows multiple parties to train a shared model collaboratively without disclosing their private data. It has become increasingly popular due to its distinct privacy advantages. However, FL models can suffer from biases against certain demographic groups (e.g., racial and gender groups) due to the heterogeneity of data and party selection. Researchers have proposed various strategies for characterizing the group fairness of FL algorithms to address this issue. However, the effectiveness of these strategies in the face of deliberate adversarial attacks has not been fully explored. Although existing studies have revealed various threats (e.g., model poisoning attacks) against FL systems caused by malicious participants, their primary aim is to decrease model accuracy, while the potential of leveraging poisonous model updates to exacerbate model unfairness remains unexplored. In this paper, we propose a new type of model poisoning attack, EAB-FL, with a focus on exacerbating group unfairness while maintaining a good level of model utility. Extensive experiments on three datasets demonstrate the effectiveness and efficiency of our attack, even with state-of-the-art fairness optimization algorithms and secure aggregation rules employed.

en cs.LG, cs.AI
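The attack objective sketched above, widening a group fairness gap while keeping overall utility intact, can be made concrete as a scoring function the attacker optimizes. A minimal illustration computing a demographic-parity gap alongside accuracy; the combined objective, its weighting, and all data below are placeholders, not the paper's loss.

```python
# Illustrative attacker objective for fairness-targeted poisoning:
# maximize the group fairness gap while penalizing utility loss.
# The weighting (lam) and metrics are placeholders, not the paper's.
import numpy as np

def demographic_parity_gap(pred: np.ndarray, group: np.ndarray) -> float:
    """|P(pred=1 | group 0) - P(pred=1 | group 1)| for binary groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def attacker_score(pred, labels, group, baseline_acc, lam=0.5):
    acc = (pred == labels).mean()
    gap = demographic_parity_gap(pred, group)
    utility_penalty = max(0.0, baseline_acc - acc)    # keep model usable
    return gap - lam * utility_penalty                # attacker maximizes

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
clean_pred = np.where(rng.random(1000) < 0.9, labels, 1 - labels)
# A "poisoned" model: unchanged on group 0, degraded positives on group 1.
poisoned_pred = clean_pred.copy()
poisoned_pred[(group == 1) & (rng.random(1000) < 0.3)] = 0

base = (clean_pred == labels).mean()
print("clean :", attacker_score(clean_pred, labels, group, base))
print("poison:", attacker_score(poisoned_pred, labels, group, base))
```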
arXiv Open Access 2024
Peak-Controlled Logits Poisoning Attack in Federated Distillation

Yuhan Tang, Aoxu Zhang, Zhiyuan Wu et al.

Federated Distillation (FD) offers an innovative approach to distributed machine learning, leveraging knowledge distillation for efficient and flexible cross-device knowledge transfer without necessitating the upload of extensive model parameters to a central server. While FD has gained popularity, its vulnerability to poisoning attacks remains underexplored. To address this gap, we previously introduced FDLA (Federated Distillation Logits Attack), a method that manipulates logits communication to mislead and degrade the performance of client models. However, the impact of FDLA on participants with different identities and the effects of malicious modifications at various stages of knowledge transfer remain unexplored. To this end, we present PCFDLA (Peak-Controlled Federated Distillation Logits Attack), an advanced and more stealthy logits poisoning attack method for FD. PCFDLA enhances the effectiveness of FDLA by carefully controlling the peak values of logits to create highly misleading yet inconspicuous modifications. Furthermore, we introduce a novel metric for better evaluating attack efficacy, demonstrating that PCFDLA maintains stealth while being significantly more disruptive to victim models compared to its predecessors. Experimental results across various datasets confirm the superior impact of PCFDLA on model accuracy, solidifying its potential threat in federated distillation systems.

en cs.LG, cs.AI
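The "peak-controlled" manipulation above can be pictured as rewriting a logit vector so that a wrong class carries the peak while the peak's magnitude and the vector's overall shape stay close to the original, keeping the tampering inconspicuous. One plausible such transformation is sketched below; the paper's exact control rule is not given in the abstract.

```python
# One plausible peak-controlled logits manipulation (illustrative;
# the paper's exact control rule is not specified in the abstract).
import numpy as np

def peak_controlled_poison(logits: np.ndarray, target: int) -> np.ndarray:
    """Move the peak to `target` while preserving the original peak
    magnitude, so summary statistics of the vector barely change."""
    poisoned = logits.copy()
    top = int(np.argmax(logits))
    if top != target:
        # Swap the peak value with the target's value: the max, min,
        # mean, and norm of the vector are all unchanged.
        poisoned[top], poisoned[target] = logits[target], logits[top]
    return poisoned

clean = np.array([0.2, 3.1, 0.7, 1.4, 0.3])       # class 1 is correct
poisoned = peak_controlled_poison(clean, target=3)
print("clean    argmax:", np.argmax(clean), clean)
print("poisoned argmax:", np.argmax(poisoned), poisoned)
assert np.isclose(clean.sum(), poisoned.sum())    # shape statistics intact
```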
arXiv Open Access 2024
Poisoned LangChain: Jailbreak LLMs by LangChain

Ziqiu Wang, Jun Liu, Shengkai Zhang et al.

With the development of natural language processing (NLP), large language models (LLMs) are becoming increasingly popular. LLMs are integrating more into everyday life, raising public concerns about their security vulnerabilities. Consequently, the security of large language models is becoming critically important. Currently, the techniques for attacking and defending LLMs are continuously evolving. One significant type of attack is the jailbreak attack, which is designed to evade model safety mechanisms and induce the generation of inappropriate content. Existing jailbreak attacks primarily rely on crafting inducement prompts for direct jailbreaks, which are less effective against large models with robust filtering and high comprehension abilities. Given the increasing demand for real-time capabilities in large language models, real-time updates and iteration of new knowledge have become essential. Retrieval-Augmented Generation (RAG), an advanced technique to compensate for the model's lack of new knowledge, is gradually becoming mainstream. As RAG enables the model to utilize external knowledge bases, it provides a new avenue for jailbreak attacks. In this paper, we are the first to propose the concept of indirect jailbreak, achieving Retrieval-Augmented Generation via LangChain. Building on this, we further design a novel indirect jailbreak attack, termed Poisoned-LangChain (PLC), which leverages a poisoned external knowledge base to interact with large language models, causing them to generate malicious, non-compliant dialogues. We tested this method on six different large language models across three major categories of jailbreak issues. The experiments demonstrate that PLC successfully implemented indirect jailbreak attacks under three different scenarios, achieving success rates of 88.56%, 79.04%, and 82.69%, respectively.

en cs.CL, cs.AI
arXiv Open Access 2024
Mellivora Capensis: A Backdoor-Free Training Framework on the Poisoned Dataset without Auxiliary Data

Yuwen Pu, Jiahao Chen, Chunyi Zhou et al.

The efficacy of deep learning models is profoundly influenced by the quality of their training data. Given considerations of data diversity, data scale, and annotation expense, model trainers frequently resort to sourcing and acquiring datasets from online repositories. Although economically pragmatic, this strategy exposes the models to substantial security vulnerabilities. Untrusted entities can clandestinely embed triggers within the dataset, facilitating the hijacking of the trained model through backdoor attacks, which constitutes a grave security concern. Despite the proliferation of countermeasure research, inherent limitations constrain their effectiveness in practical applications. These include the requirement for substantial quantities of clean samples, inconsistent defense performance across varying attack scenarios, and inadequate resilience against adaptive attacks, among others. Therefore, in this paper, we endeavor to address the challenges of backdoor attack countermeasures in real-world scenarios, thereby fortifying the security of the training paradigm under this data-collection practice. Concretely, we first explore the inherent relationship between potential perturbations and the backdoor trigger, and demonstrate the key observation, through theoretical analysis and experiments, that poisoned samples are more robust to perturbation than clean ones. Then, based on these explorations, we propose a robust and clean-data-free backdoor defense framework, namely Mellivora Capensis (MeCa), which enables the model trainer to train a clean model on the poisoned dataset.

en cs.CR
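The key observation above, that poisoned (triggered) samples hold their prediction under input perturbation more stubbornly than clean ones, suggests a direct score: perturb each sample several times and measure how often the model's prediction flips. A PyTorch sketch of that scoring loop; the thresholding and everything downstream is omitted, and the setup is generic rather than the MeCa pipeline.

```python
# Perturbation-consistency scoring in the spirit of the key observation
# above (generic sketch, not the MeCa pipeline). Samples with an
# unusually low flip rate are candidates for being poisoned/backdoored.
import torch

@torch.no_grad()
def flip_rate(model: torch.nn.Module, x: torch.Tensor,
              n_trials: int = 16, sigma: float = 0.1) -> torch.Tensor:
    """x: (batch, ...) inputs. Returns per-sample fraction of noise
    trials on which the predicted class changes."""
    model.eval()
    base = model(x).argmax(dim=1)                       # (batch,)
    flips = torch.zeros(x.shape[0])
    for _ in range(n_trials):
        noisy = x + sigma * torch.randn_like(x)         # Gaussian perturbation
        flips += (model(noisy).argmax(dim=1) != base).float()
    return flips / n_trials

# Toy usage with a random linear "model" on flat inputs.
model = torch.nn.Linear(32, 10)
x = torch.randn(8, 32)
scores = flip_rate(model, x)
suspects = (scores < 0.05).nonzero().flatten()          # unusually stable
print(scores, suspects)
```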
arXiv Open Access 2024
Preference Poisoning Attacks on Reward Model Learning

Junlin Wu, Jiongxiao Wang, Chaowei Xiao et al.

Learning reward models from pairwise comparisons is a fundamental component in a number of domains, including autonomous control, conversational agents, and recommendation systems, as part of a broad goal of aligning automated decisions with user preferences. These approaches entail collecting preference information from people, with feedback often provided anonymously. Since preferences are subjective, there is no gold standard to compare against; yet, reliance of high-impact systems on preference learning creates a strong motivation for malicious actors to skew data collected in this fashion to their ends. We investigate the nature and extent of this vulnerability by considering an attacker who can flip a small subset of preference comparisons to either promote or demote a target outcome. We propose two classes of algorithmic approaches for these attacks: a gradient-based framework, and several variants of rank-by-distance methods. Next, we evaluate the efficacy of best attacks in both these classes in successfully achieving malicious goals on datasets from three domains: autonomous control, recommendation system, and textual prompt-response preference learning. We find that the best attacks are often highly successful, achieving in the most extreme case 100% success rate with only 0.3% of the data poisoned. However, which attack is best can vary significantly across domains. In addition, we observe that the simpler and more scalable rank-by-distance approaches are often competitive with, and on occasion significantly outperform, gradient-based methods. Finally, we show that state-of-the-art defenses against other classes of poisoning attacks exhibit limited efficacy in our setting.

en cs.LG, cs.AI
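Of the two attack families above, the rank-by-distance variants are the simpler to picture: score every preference pair by how close it sits to the target outcome in some feature space, then flip the labels of the closest pairs until the budget is spent. A minimal sketch under one plausible reading of that family; the feature space, data, and budget are made up for illustration.

```python
# Minimal rank-by-distance preference-flipping sketch (one plausible
# variant of the family named above; features and budget are made up).
import numpy as np

def rank_by_distance_flip(pairs: np.ndarray, prefs: np.ndarray,
                          target: np.ndarray, budget: int) -> np.ndarray:
    """pairs: (n, 2, d) feature vectors for (item0, item1).
    prefs: (n,) holds 1 if item0 is preferred, else 0.
    Flips the labels of the `budget` pairs whose currently *rejected*
    item is closest to the attacker's target, promoting that outcome."""
    n = len(pairs)
    rejected = pairs[np.arange(n), prefs]          # losing item of each pair
    dist = np.linalg.norm(rejected - target, axis=1)
    flip_idx = np.argsort(dist)[:budget]           # nearest to target outcome
    poisoned = prefs.copy()
    poisoned[flip_idx] = 1 - poisoned[flip_idx]
    return poisoned

rng = np.random.default_rng(0)
pairs = rng.normal(size=(100, 2, 4))
prefs = rng.integers(0, 2, 100)
target = np.ones(4)                                # outcome to promote
poisoned = rank_by_distance_flip(pairs, prefs, target, budget=3)
print("flipped:", int((poisoned != prefs).sum()), "of", len(prefs))
```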
arXiv Open Access 2024
Phantom: Untargeted Poisoning Attacks on Semi-Supervised Learning (Full Version)

Jonathan Knauer, Phillip Rieger, Hossein Fereidooni et al.

Deep Neural Networks (DNNs) can handle increasingly complex tasks, although they require rapidly expanding training datasets. Collecting data from platforms with user-generated content, such as social networks, has significantly eased the acquisition of large datasets for training DNNs. Despite these advancements, the manual labeling process remains a substantial challenge in terms of both time and cost. In response, Semi-Supervised Learning (SSL) approaches have emerged, where only a small fraction of the dataset needs to be labeled, leaving the majority unlabeled. However, leveraging data from untrusted sources like social networks also creates new security risks, as potential attackers can easily inject manipulated samples. Previous research on the security of SSL primarily focused on injecting backdoors into trained models, while less attention was given to the more challenging untargeted poisoning attacks. In this paper, we introduce Phantom, the first untargeted poisoning attack in SSL that disrupts the training process by injecting a small number of manipulated images into the unlabeled dataset. Unlike existing attacks, our approach only requires adding a few manipulated samples, such as posting images on social networks, without the need to control the victim. Phantom causes SSL algorithms to overlook the actual images' pixels and to rely only on maliciously crafted patterns that Phantom superimposes on the real images. We show Phantom's effectiveness on 6 different datasets and 3 real-world social-media platforms (Facebook, Instagram, Pinterest). Already small fractions of manipulated samples (e.g., 5%) reduce the accuracy of the resulting model by 10%, with higher percentages leading to performance comparable to a naive classifier. Our findings demonstrate the threat of poisoning user-generated content platforms, rendering them unsuitable for SSL in specific tasks.
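The attack above rests on superimposing a faint crafted pattern on otherwise normal images so that SSL training latches onto the pattern instead of the real content. A minimal NumPy sketch of such a low-amplitude overlay; the checkerboard pattern here is a placeholder, not the paper's optimized one.

```python
# Low-amplitude pattern overlay of the kind the attack relies on
# (the checkerboard is a placeholder, not the optimized pattern).
import numpy as np

def superimpose(image: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Blend a faint structured pattern into an image in [0, 1]."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    pattern = ((yy // 4 + xx // 4) % 2).astype(float)   # 4px checkerboard
    if image.ndim == 3:
        pattern = pattern[..., None]                    # broadcast over RGB
    return np.clip((1 - alpha) * image + alpha * pattern, 0.0, 1.0)

img = np.random.default_rng(0).random((32, 32, 3))      # stand-in image
img_p = superimpose(img)
print("max pixel change:", float(np.abs(img_p - img).max()))  # ~alpha
```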

DOAJ Open Access 2024
Effect of inulin on preventing drunkenness and relieving acute alcoholic intoxication of mice and preparation of its hangover beverage

Honglin Lan, Xingguo Li, Yunhui Zhang et al.

The aim of this study was to evaluate the effects of different types of inulin on acute alcoholic intoxication (AAI) in mice and to prepare a hangover beverage from it. Basic physical and chemical properties of different types of inulin (short-chain inulin, long-chain inulin, and phosphorylated long-chain inulin) were analyzed, and the inulins were given by gavage at a dose of 400 mg·kg⁻¹·day⁻¹ for 7 consecutive days. In animal behavior experiments, the inebriation percentage, mortality rate, duration of inebriation, and sobering time were recorded, with the righting reflex as the judgment criterion. The results showed that, compared with the control group, the short-chain inulin group's drunkenness and mortality rates decreased by 12% and 100%, respectively, and sobering time decreased by 18%, while alcohol tolerance was also improved. The best formula for a short-chain inulin hangover drink was determined to be 0.4% granulated sugar, 0.5% citric acid, and 0.5% pectin. These results suggest that short-chain inulin may have potential for preventing AAI.

Food processing and manufacture, Toxicology. Poisons
DOAJ Open Access 2024
Analysis of nutrient loads, heavy metals and physicochemical properties of wastewater, wetland grass, and papaya samples: Gondar Malt factory, Ethiopia with global implication

Tesfamariam Gezahegn, Meseret Dereje, Molla Tefera et al.

The deterioration of water quality in lakes and reservoirs caused by industrial releases is a major global concern that has drawn substantial research attention. Uncontrolled effluent releases impose serious impacts on both aquatic and terrestrial environments. In the current study, nutrient loads, heavy metals, and physicochemical properties of wastewater, wetland grass, and papaya samples were analysed. The investigated nutrients, alkalinity, and total hardness in fresh water samples were within the allowable limits, except for phosphate in fresh wastewater and alkalinity in wastewater. The detected levels of heavy metals in wastewater samples (mg/L) were: Cd (0.386–0.905), Cr (ND–0.074), Cu (0.064–0.096), Mn (0.184–1.528), Fe (0.167–4.636), Zn (0.175–0.333), and Pb (0.044–0.892). The studied metals in the wastewater samples, except Cd, Fe, and Pb, were below the allowable limits. The levels of heavy metals in the grass and papaya samples (mg/kg) ranged over: Cd (37.14–147.62), Cr (ND–8.82), Cu (3.14–8.33), Mn (2.89–85.46), Fe (5.0–65.15), Zn (3.44–36.84), and Pb (ND–60.36). The detected metals were below the permissible limits, except Cd, Cr, and Pb. The physicochemical characteristics of the wastewater samples were: pH (6.61–8.54), temperature (21.63–26.57 °C), TDS (205.9–1896 mg/L), EC (359.9–3226.67 μS/cm), BOD (12.0–732.67 mg/L), and COD (3.67–1691.33 mg/L). Except for temperature and pH, all levels in the wastewater were above the limits recommended for wastewater discharge by the USEPA.

Toxicology. Poisons

Page 29 of 40059