J. Raub, M. Mathieu-Nolf, N. Hampson et al.
Results for "Toxicology. Poisons"
Showing 20 of ~434,231 results · from DOAJ, arXiv, Semantic Scholar
Tianzi SHAN, Junxiang MA, Tian CHEN et al.
Background: Work-related musculoskeletal disorders (WMSDs) are a major occupational health concern, particularly among workers exposed to adverse ergonomic conditions. Manganese production involves heavy physical demands, yet research on WMSDs among manganese workers remains limited. Objective: To investigate the prevalence and influencing factors of WMSDs among workers at a manganese enterprise in Guangxi. Methods: A cross-sectional survey was conducted from May to June 2024 among workers at a manganese factory in Guangxi. The Chinese Musculoskeletal Disorders Questionnaire was used to collect information on demographic characteristics, the distribution of musculoskeletal symptoms, and work-related exposures. The χ2 test was applied to compare positive WMSD rates across groups, and logistic regression analysis was performed to identify associated factors. Results: A total of 1476 workers were enrolled after applying pre-determined inclusion and exclusion criteria. The overall prevalence of WMSDs was 34.15%. The most commonly affected body regions were the lower back (17.28%), neck (16.67%), and shoulders (13.82%). Logistic regression indicated that female sex, older age, and an education level of college or above were associated with a higher risk of WMSDs (P<0.05). Awkward working postures were significantly associated with WMSDs in the corresponding body regions; in particular, awkward postures of the neck, upper limbs, trunk, and lower limbs were related to an increased risk of WMSDs at multiple body sites (P<0.05). In addition, poor lighting, high workplace temperature, frequent or sustained arm support during work, and high job demands were associated with an increased risk of overall or site-specific WMSDs (P<0.05). Conclusion: The high prevalence of WMSDs among manganese workers is closely associated with demographic characteristics, working postures, and work-environment and organizational factors.
Targeted ergonomic interventions focusing on high-risk body regions and key ergonomic exposures are warranted to reduce the risk of WMSDs among manganese workers.
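As a toy illustration of the analysis this abstract describes (χ2 tests for group differences plus logistic regression for associated factors), the sketch below computes a Pearson chi-square statistic and the crude odds ratio for a single binary factor. The counts are hypothetical, not the study's data, and the function names are my own.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio(a, b, c, d):
    """Crude odds ratio (ad / bc), what a single-predictor logistic
    regression would estimate for a binary exposure."""
    return (a * d) / (b * c)

# Hypothetical counts: rows = female / male, columns = WMSDs yes / no.
stat = chi2_2x2(120, 180, 300, 876)
odds = odds_ratio(120, 180, 300, 876)
```

With one degree of freedom, a statistic above 3.84 corresponds to P<0.05, the threshold the abstract reports.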
Ray-Chang Tzeng, Ming-Chi Lai, Sheng-Nan Wu et al.
Abstract Background Topiramate (TPM) is a sulfamate-substituted monosaccharide known for its wide-ranging effects on epilepsy, neuropathic pain, and migraines. However, its precise influence on plasmalemmal ionic currents, including their magnitude and gating kinetics, remains uncertain. Therefore, a reassessment of the regulatory effect of TPM on ionic currents in electrically excitable cells is warranted. Methods With the aid of patch clamp technology, we investigated the effects of TPM on the amplitude, gating, and hysteresis of plasmalemmal ionic currents from GH3 lactotrophs. Results We observed that TPM exhibited a concentration-dependent inhibition of both transient (I Na(T)) and late (I Na(L)) components of I Na, activated by brief depolarizing stimuli. At low concentrations, TPM had no noticeable effect on I Na(T); however, it effectively reduced I Na(L) amplitude. TPM caused a leftward shift in the midpoint of the steady-state inactivation curve of I Na(T) without altering the gating charge. Importantly, the overall current density versus voltage relationship of I Na(T) remained unaltered during TPM exposure. Intriguingly, the reduction in I Na(T) induced by TPM could not be reversed by subsequent additions of flumazenil or chlorotoxin. Furthermore, TPM suppressed the density of the hyperpolarization-activated cation current (I h). Simultaneously, the activation time course of I h was slowed in the presence of TPM. Moreover, TPM exposure decreased the hysteretic strength activated by double triangular ramp voltage, a change partially reversed by oxaliplatin. In current-clamp recordings, spontaneous action potentials were susceptible to suppression in the presence of TPM. Conclusions Collectively, these findings strongly suggest that TPM's effects on I Na and I h have the potential to impact the functional activities and electrical behaviors of excitable cells.
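The leftward shift in the steady-state inactivation midpoint described here can be sketched with the conventional Boltzmann curve used for such fits. The functional form is standard; the parameter values below are illustrative, not the paper's fitted values.

```python
import math

def h_inf(v, v_half, k):
    """Steady-state inactivation: fraction of Na+ channels available at
    holding potential v (mV), Boltzmann form 1 / (1 + exp((v - v_half) / k))."""
    return 1.0 / (1.0 + math.exp((v - v_half) / k))

control = h_inf(-60, -60.0, 6.0)   # midpoint at -60 mV: half the channels available
shifted = h_inf(-60, -70.0, 6.0)   # leftward midpoint shift reduces availability
```

At a fixed holding potential, moving v_half leftward (more negative) lowers channel availability, which is how such a shift reduces the current without changing the gating charge (the slope factor k).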
Binyan Xu, Fan Yang, Xilin Dai et al.
Deep Neural Networks (DNNs) are susceptible to backdoor attacks, in which adversaries poison training data to implant a backdoor into the victim model. Current backdoor defenses on poisoned data often suffer from high computational costs or low effectiveness against advanced attacks such as clean-label and clean-image backdoors. To address these limitations, we introduce CLIP-Guided backdoor Defense (CGD), an efficient and effective method that mitigates various backdoor attacks. CGD uses a publicly accessible CLIP model to identify inputs that are likely to be clean or poisoned. It then retrains the model on these inputs, using CLIP's logits as guidance to effectively neutralize the backdoor. Experiments on 4 datasets and 11 attack types demonstrate that CGD reduces attack success rates (ASRs) to below 1% while maintaining clean accuracy (CA) with a maximum drop of only 0.3%, outperforming existing defenses. Additionally, we show that clean-data-based defenses can be adapted to poisoned data using CGD. CGD also exhibits strong robustness, maintaining low ASRs even when a weaker CLIP model is used or when CLIP itself is compromised by a backdoor. These findings underscore CGD's efficiency, effectiveness, and applicability for real-world backdoor defense scenarios. Code: https://github.com/binyxu/CGD.
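The identification step can be sketched as below. Note this is a simplification: CGD uses CLIP's logits as soft guidance during retraining, whereas this toy version only treats agreement between CLIP's top-1 prediction and the training label as a proxy for cleanliness. All names are my own, not from the paper's code.

```python
def split_by_clip_agreement(clip_logits, labels):
    """Treat samples whose CLIP top-1 prediction matches the training
    label as likely clean, and the rest as likely poisoned."""
    likely_clean, likely_poisoned = [], []
    for i, (logits, y) in enumerate(zip(clip_logits, labels)):
        pred = max(range(len(logits)), key=logits.__getitem__)
        (likely_clean if pred == y else likely_poisoned).append(i)
    return likely_clean, likely_poisoned

clean_idx, poison_idx = split_by_clip_agreement(
    [[2.0, 0.1], [0.3, 1.5], [1.8, 0.2]],  # toy per-class CLIP logits
    [0, 1, 1],                             # training labels; the last disagrees
)
```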
Zhiqiang Wang, Haohua Du, Guanquan Shi et al.
The Model Context Protocol (MCP) is increasingly adopted to standardize the interaction between LLM agents and external tools. However, this trend introduces a new threat: Tool Poisoning Attacks (TPAs), where tool metadata is poisoned to induce the agent to perform unauthorized operations. Existing defenses that focus primarily on behavior-level analysis are fundamentally ineffective against TPAs, as poisoned tools need not be executed, leaving no behavioral trace to monitor. Thus, we propose MindGuard, a decision-level guardrail for LLM agents that provides provenance tracking of call decisions, policy-agnostic detection, and poisoning-source attribution against TPAs. While fully explaining LLM decisions remains challenging, our empirical findings uncover a strong correlation between LLM attention mechanisms and tool invocation decisions. We therefore use attention as an empirical signal for decision tracking and formalize it as the Decision Dependence Graph (DDG), which models the LLM's reasoning process as a weighted, directed graph whose vertices represent logical concepts and whose edges quantify attention-based dependencies. We further design robust DDG construction and graph-based anomaly analysis mechanisms that efficiently detect and attribute TPAs. Extensive experiments on real-world datasets demonstrate that MindGuard achieves 94%-99% average precision in detecting poisoned invocations and 95%-100% attribution accuracy, with processing times under one second and no additional token cost. Moreover, the DDG can be viewed as an adaptation of the classical Program Dependence Graph (PDG), providing a solid foundation for applying traditional security policies at the decision level.
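The DDG construction can be sketched minimally as thresholding an attention matrix over concept vertices. The paper's actual construction is more robust than this; the concept names, toy weights, and threshold here are illustrative assumptions.

```python
def build_ddg(concepts, attn, tau):
    """Build a weighted, directed graph keeping only attention-based
    dependencies stronger than threshold tau."""
    edges = {}
    for i, src in enumerate(concepts):
        for j, dst in enumerate(concepts):
            if i != j and attn[i][j] > tau:
                edges[(src, dst)] = attn[i][j]
    return edges

ddg = build_ddg(
    ["user_query", "tool_A", "tool_B"],
    [[0.0, 0.8, 0.1],    # toy attention weights between concepts
     [0.2, 0.0, 0.05],
     [0.6, 0.3, 0.0]],
    tau=0.5,
)
```

A tool invocation whose decision vertex lacks a strong edge back to the user's query (and instead depends on another tool's metadata) would be the kind of anomaly such a graph exposes.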
Alexander Branch, Omead Pooladzandi, Radin Khosraviani et al.
We introduce PureVQ-GAN, a defense against data poisoning that forces backdoor triggers through a discrete bottleneck using Vector-Quantized VAE with GAN discriminator. By quantizing poisoned images through a learned codebook, PureVQ-GAN destroys fine-grained trigger patterns while preserving semantic content. A GAN discriminator ensures outputs match the natural image distribution, preventing reconstruction of out-of-distribution perturbations. On CIFAR-10, PureVQ-GAN achieves 0% poison success rate (PSR) against Gradient Matching and Bullseye Polytope attacks, and 1.64% against Narcissus while maintaining 91-95% clean accuracy. Unlike diffusion-based defenses requiring hundreds of iterative refinement steps, PureVQ-GAN is over 50x faster, making it practical for real training pipelines.
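The core purification idea, snapping each feature to its nearest codebook entry so that fine-grained trigger perturbations collapse to the same code, can be sketched with plain nearest-neighbor quantization over a toy 2-D codebook. PureVQ-GAN learns its codebook and adds a GAN discriminator; none of that is modeled here, and the values are illustrative.

```python
def quantize(vec, codebook):
    """Snap a feature vector to its nearest codebook entry
    (squared Euclidean distance)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(codebook, key=lambda c: d2(vec, c))

codebook = [(0.0, 0.0), (1.0, 1.0)]
clean = quantize((0.05, -0.02), codebook)
perturbed = quantize((0.08, -0.02), codebook)  # small trigger-like shift
```

Both the clean and slightly perturbed inputs map to the same code, which is why a discrete bottleneck destroys fine-grained triggers while keeping coarse semantic content.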
Kaiwen Duan, Hongwei Yao, Yufei Chen et al.
Reinforcement Learning from Human Feedback (RLHF) is crucial for aligning text-to-image (T2I) models with human preferences. However, RLHF's feedback mechanism also opens new pathways for adversaries. This paper demonstrates the feasibility of hijacking T2I models by poisoning a small fraction of preference data with natural-appearing examples. Specifically, we propose BadReward, a stealthy clean-label poisoning attack targeting the reward model in multi-modal RLHF. BadReward operates by inducing feature collisions between visually contradictory preference data instances, thereby corrupting the reward model and indirectly compromising the T2I model's integrity. Unlike existing alignment-poisoning techniques focused on a single (text) modality, BadReward is independent of the preference annotation process, enhancing its stealth and practical threat. Extensive experiments on popular T2I models show that BadReward can consistently steer generation toward improper outputs, such as biased or violent imagery, for targeted concepts. Our findings underscore the amplified threat landscape for RLHF in multi-modal systems, highlighting the urgent need for robust defenses. Disclaimer. This paper contains uncensored toxic content that might be offensive or disturbing to readers.
Xin Wang, Feilong Wang, Yuan Hong et al.
The growing reliance of intelligent systems on data makes them vulnerable to data poisoning attacks, which can compromise machine learning or deep learning models by corrupting the input data. Previous studies of data poisoning attacks rest on specific assumptions, and limited attention has been given to learning models with general (equality and inequality) constraints or without differentiability. Such learning models are common in practice, especially in Intelligent Transportation Systems (ITS), where physical or domain knowledge enters as model constraints. Motivated by ITS applications, this paper formulates a model-targeted data poisoning attack as a bi-level optimization problem with a constrained lower-level problem, aiming to drive the model solution toward a target solution specified by the adversary by modifying the training data incrementally. As gradient-based methods fail to solve this optimization problem, we propose to study the Lipschitz continuity of the model solution, which enables us to calculate the semi-derivative, a one-sided directional derivative, of the solution with respect to the data. We leverage semi-derivative descent to solve the bi-level optimization problem and establish the convergence conditions of the method to any attainable target model. The model and solution method are illustrated with a simulated poisoning attack on lane-change detection using an SVM.
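The semi-derivative the abstract relies on can be illustrated with a forward-difference approximation: unlike an ordinary gradient, the one-sided directional derivative exists even at kinks where the function is not differentiable. This numeric sketch is my own illustration, not the paper's method.

```python
def semi_derivative(f, x, d, h=1e-6):
    """One-sided (right) directional derivative of f at x along direction d,
    approximated by a forward difference."""
    x_plus = [xi + h * di for xi, di in zip(x, d)]
    return (f(x_plus) - f(x)) / h

# |x1| + |x2| is non-differentiable at the origin, yet its right
# directional derivative along (1, 1) there is well-defined and equals 2.
f = lambda x: abs(x[0]) + abs(x[1])
val = semi_derivative(f, [0.0, 0.0], [1.0, 1.0])
```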
Chaymaa Abbas, Mariette Awad, Razane Tajeddine
Style-conditioned data poisoning is identified as a covert vector for amplifying sociolinguistic bias in large language models. Using small poisoned budgets that pair dialectal prompts -- principally African American Vernacular English (AAVE) and a Southern dialect -- with toxic or stereotyped completions during instruction tuning, this work probes whether linguistic style can act as a latent trigger for harmful behavior. Across multiple model families and scales, poisoned exposure elevates toxicity and stereotype expression for dialectal inputs -- most consistently for AAVE -- while Standard American English remains comparatively lower yet not immune. A multi-metric audit combining classifier-based toxicity with an LLM-as-a-judge reveals stereotype-laden content even when lexical toxicity appears muted, indicating that conventional detectors under-estimate sociolinguistic harms. Additionally, poisoned models exhibit emergent jailbreaking despite the absence of explicit slurs in the poison, suggesting weakened alignment rather than memorization. These findings underscore the need for dialect-aware evaluation, content-level stereotype auditing, and training protocols that explicitly decouple style from toxicity to prevent bias amplification through seemingly minor, style-based contamination.
Matthieu Carreau, Roi Naveiro, William N. Caballero
Research in adversarial machine learning (AML) has shown that statistical models are vulnerable to maliciously altered data. However, despite advances in Bayesian machine learning models, most AML research remains concentrated on classical techniques. Therefore, we focus on extending the white-box model poisoning paradigm to attack generic Bayesian inference, highlighting its vulnerability in adversarial contexts. A suite of attacks are developed that allow an attacker to steer the Bayesian posterior toward a target distribution through the strategic deletion and replication of true observations, even when only sampling access to the posterior is available. Analytic properties of these algorithms are proven and their performance is empirically examined in both synthetic and real-world scenarios. With relatively little effort, the attacker is able to substantively alter the Bayesian's beliefs and, by accepting more risk, they can mold these beliefs to their will. By carefully constructing the adversarial posterior, surgical poisoning is achieved such that only targeted inferences are corrupted and others are minimally disturbed.
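The effect of deleting and replicating observations is easiest to see in a conjugate toy model. The paper attacks generic Bayesian inference with only sampling access to the posterior; this Beta-Bernoulli sketch is my simplification of that idea.

```python
def beta_posterior_mean(successes, failures, alpha=1.0, beta=1.0):
    """Posterior mean of a Beta(alpha, beta)-Bernoulli model after
    observing the given success/failure counts."""
    return (alpha + successes) / (alpha + beta + successes + failures)

honest = beta_posterior_mean(5, 5)      # balanced data: posterior mean 0.5
poisoned = beta_posterior_mean(10, 3)   # replicate successes, delete failures
```

Replicating favorable observations and deleting unfavorable ones steers the posterior toward the attacker's target without fabricating any observation that never occurred.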
Meiling OU, Jinxiong CEN, Guodong LU et al.
China has a large and rapidly aging population, and a declining birth rate is exacerbating the aging of its society. It is expected that by 2030 China will become a hyper-aged society. As the demographic dividend gradually diminishes, the working-age population is shrinking and growing older, making the elderly labor force an important resource. Japan and South Korea have accumulated rich experience in promoting healthy employment and occupational health services for elderly workers. China needs to consider this issue thoroughly and implement corresponding measures. This article compares and analyzes the current employment situation of aging populations in China, Japan, and South Korea, finding that Japan and South Korea hold certain advantages in healthy-employment policies, occupational health measures for the elderly, and the digital economy. Taking China's circumstances into account, China can further strengthen occupational health promotion activities, optimize the employment structure, utilize new technologies such as artificial intelligence, promote active aging, ensure the sustainable development of elderly labor resources, steadily enhance comprehensive national strength, and meet the people's growing demand for a better quality of life.
Aftab Hussain, Md Rafiqul Islam Rabin, Mohammad Amin Alipour
Large language models (LLMs) have revolutionized software development practices, yet concerns about their safety have arisen, particularly regarding hidden backdoors, aka trojans. Backdoor attacks involve the insertion of triggers into training data, allowing attackers to manipulate the behavior of the model maliciously. In this paper, we focus on analyzing the model parameters to detect potential backdoor signals in code models. Specifically, we examine attention weights and biases, and context embeddings of the clean and poisoned CodeBERT and CodeT5 models. Our results suggest noticeable patterns in context embeddings of poisoned samples for both the poisoned models; however, attention weights and biases do not show any significant differences. This work contributes to ongoing efforts in white-box detection of backdoor signals in LLMs of code through the analysis of parameters and embeddings.
Zongwei Wang, Min Gao, Junliang Yu et al.
Modern recommender systems (RS) have seen substantial success, yet they remain vulnerable to malicious activities, notably poisoning attacks. These attacks involve injecting malicious data into the training datasets of RS, thereby compromising their integrity and manipulating recommendation outcomes for gaining illicit profits. This survey paper provides a systematic and up-to-date review of the research landscape on Poisoning Attacks against Recommendation (PAR). A novel and comprehensive taxonomy is proposed, categorizing existing PAR methodologies into three distinct categories: Component-Specific, Goal-Driven, and Capability Probing. For each category, we discuss its mechanism in detail, along with associated methods. Furthermore, this paper highlights potential future research avenues in this domain. Additionally, to facilitate and benchmark the empirical comparison of PAR, we introduce an open-source library, ARLib, which encompasses a comprehensive collection of PAR models and common datasets. The library is released at https://github.com/CoderWZW/ARLib.
Cristina Improta
AI-based code generators have gained a fundamental role in assisting developers in writing software starting from natural language (NL). However, since these large language models are trained on massive volumes of data collected from unreliable online sources (e.g., GitHub, Hugging Face), AI models become an easy target for data poisoning attacks, in which an attacker corrupts the training data by injecting a small amount of poison into it, i.e., astutely crafted malicious samples. In this position paper, we address the security of AI code generators by identifying a novel data poisoning attack that results in the generation of vulnerable code. Next, we devise an extensive evaluation of how these attacks impact state-of-the-art models for code generation. Lastly, we discuss potential solutions to overcome this threat.
Wei Tong, Haoyu Chen, Jiacheng Niu et al.
Local differential privacy (LDP) provides a way for an untrusted data collector to aggregate users' data without violating their privacy. Various privacy-preserving data analysis tasks have been studied under the protection of LDP, such as frequency estimation, frequent itemset mining, and machine learning. Despite its privacy-preserving properties, recent research has demonstrated the vulnerability of certain LDP protocols to data poisoning attacks. However, existing data poisoning attacks are focused on basic statistics under LDP, such as frequency estimation and mean/variance estimation. As an important data analysis task, the security of LDP frequent itemset mining has yet to be thoroughly examined. In this paper, we aim to address this issue by presenting novel and practical data poisoning attacks against LDP frequent itemset mining protocols. By introducing a unified attack framework with composable attack operations, our data poisoning attack can successfully manipulate the state-of-the-art LDP frequent itemset mining protocols and has the potential to be adapted to other protocols with similar structures. We conduct extensive experiments on three datasets to compare the proposed attack with four baseline attacks. The results demonstrate the severity of the threat and the effectiveness of the proposed attack.
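The kind of LDP estimator such attacks target can be sketched with Generalized Randomized Response (GRR), a standard frequency-estimation protocol, though the paper attacks frequent itemset mining specifically. Fake users who always report the attacker's target item inflate its estimated frequency; the counts below are toy values.

```python
import math

def grr_estimate(counts, n, k, eps):
    """Unbiased item-frequency estimates under Generalized Randomized
    Response with domain size k and privacy budget eps."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)   # prob. of reporting truth
    q = 1.0 / (math.exp(eps) + k - 1)             # prob. of each other item
    return {v: (c / n - q) / (p - q) for v, c in counts.items()}

honest = grr_estimate({"a": 50, "b": 50}, n=100, k=2, eps=2.0)
# 20 fake users always report the target item "a":
poisoned = grr_estimate({"a": 70, "b": 50}, n=120, k=2, eps=2.0)
```

Because the estimator is a fixed affine correction of observed report counts, the aggregator cannot distinguish fabricated reports from honestly randomized ones, which is the root of the vulnerability.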
Yongyi Su, Yushu Li, Nanqing Liu et al.
Test-time adaptation (TTA) updates the model weights during the inference stage using testing data to enhance generalization. However, this practice exposes TTA to adversarial risks. Existing studies have shown that when TTA is updated with crafted adversarial test samples, also known as test-time poisoned data, the performance on benign samples can deteriorate. Nonetheless, the perceived adversarial risk may be overstated if the poisoned data is generated under overly strong assumptions. In this work, we first review realistic assumptions for test-time data poisoning, including white-box versus grey-box attacks, access to benign data, attack order, and more. We then propose an effective and realistic attack method that better produces poisoned samples without access to benign samples, and derive an effective in-distribution attack objective. We also design two TTA-aware attack objectives. Our benchmarks of existing attack methods reveal that the TTA methods are more robust than previously believed. In addition, we analyze effective defense strategies to help develop adversarially robust TTA methods. The source code is available at https://github.com/Gorilla-Lab-SCUT/RTTDP.
A. Bronstein, D. Spyker, L. Cantilena et al.
Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman
Contrastive Language-Image Pre-training (CLIP) on large image-caption datasets has achieved remarkable success in zero-shot classification and enabled transferability to new domains. However, CLIP is far more vulnerable to targeted data poisoning and backdoor attacks than supervised learning. Perhaps surprisingly, poisoning 0.0001% of CLIP pre-training data is enough to make targeted data poisoning attacks successful. This is four orders of magnitude smaller than what is required to poison supervised models. Despite this vulnerability, existing methods are very limited in defending CLIP models during pre-training. In this work, we propose a strong defense, SAFECLIP, to safely pre-train CLIP against targeted data poisoning and backdoor attacks. SAFECLIP warms up the model by applying unimodal contrastive learning (CL) to the image and text modalities separately. Then, it divides the data into safe and risky sets by applying a Gaussian Mixture Model to the cosine similarity of image-caption pair representations. SAFECLIP pre-trains the model by applying the CLIP loss to the safe set and applying unimodal CL to the image and text modalities of the risky set separately. By gradually increasing the size of the safe set during pre-training, SAFECLIP effectively breaks targeted data poisoning and backdoor attacks without harming CLIP's performance. Our extensive experiments on CC3M, Visual Genome, and MSCOCO demonstrate that SAFECLIP significantly reduces the success rate of targeted data poisoning attacks from 93.75% to 0% and that of various backdoor attacks from up to 100% to 0%, without harming CLIP's performance.
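The safe/risky split can be sketched as below. SAFECLIP fits a Gaussian Mixture Model to the cosine similarities; here a fixed threshold stands in for the learned GMM decision boundary, and the toy embeddings are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def split_safe_risky(image_embs, text_embs, threshold):
    """Pairs whose image-caption similarity clears the threshold go to
    the safe set; the rest are treated as risky."""
    safe, risky = [], []
    for i, (im, tx) in enumerate(zip(image_embs, text_embs)):
        (safe if cosine(im, tx) >= threshold else risky).append(i)
    return safe, risky

safe, risky = split_safe_risky(
    [[1.0, 0.0], [1.0, 0.0]],   # toy image embeddings
    [[0.9, 0.1], [0.0, 1.0]],   # toy caption embeddings; second pair mismatched
    threshold=0.5,
)
```

Poisoned image-caption pairs tend to have lower cross-modal similarity, so they land in the risky set, where only unimodal losses (which a cross-modal backdoor cannot exploit) are applied.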
Zhibo Zhang, Sani Umar, Ahmed Y. Al Hammadi et al.
The main aim of this paper is to explain, from the attacker's perspective, data poisoning attacks that use label flipping during the training stage of electroencephalogram (EEG) signal-based human emotion evaluation systems deploying machine learning models. Human emotion evaluation using EEG signals has consistently attracted a lot of research attention. The identification of human emotional states based on EEG signals is effective for detecting potential internal threats posed by insider individuals. Nevertheless, EEG signal-based human emotion evaluation systems have shown several vulnerabilities to data poisoning attacks. The experimental findings demonstrate that the proposed data poisoning attacks succeed independently of the model, although different models exhibit varying levels of resilience to them. In addition, the data poisoning attacks on EEG signal-based human emotion evaluation systems are explained with several Explainable Artificial Intelligence (XAI) methods, including Shapley Additive Explanations (SHAP) values, Local Interpretable Model-agnostic Explanations (LIME), and generated decision trees. The code for this paper is publicly available on GitHub.
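The label-flipping step itself is simple and model-independent, which is consistent with the abstract's finding that the attack transfers across models. A minimal sketch, with names and the flipping rate chosen for illustration:

```python
import random

def flip_labels(labels, n_classes, rate, seed=0):
    """Flip a fixed fraction of labels to a different, randomly chosen class."""
    rng = random.Random(seed)
    flipped = list(labels)
    for i in rng.sample(range(len(labels)), int(rate * len(labels))):
        flipped[i] = rng.choice([c for c in range(n_classes) if c != labels[i]])
    return flipped

clean = [0] * 10                                   # toy emotion labels
poisoned = flip_labels(clean, n_classes=3, rate=0.3)
```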
Alvin C Bronstein, Daniel A. Spyker, L. Cantilena et al.
Page 22 of 21712