Results for "Toxicology. Poisons"

Showing 20 of ~801,575 results · from arXiv, Semantic Scholar, CrossRef, DOAJ

arXiv Open Access 2026
TRUSTDESC: Preventing Tool Poisoning in LLM Applications via Trusted Description Generation

Hengkai Ye, Zhechang Zhang, Jinyuan Jia et al.

Large language models (LLMs) increasingly rely on external tools to perform time-sensitive tasks and real-world actions. While tool integration expands LLM capabilities, it also introduces a new prompt-injection attack surface: tool poisoning attacks (TPAs). Attackers manipulate tool descriptions by embedding malicious instructions (explicit TPAs) or misleading claims (implicit TPAs) to influence model behavior and tool selection. Existing defenses mainly detect anomalous instructions and remain ineffective against implicit TPAs. In this paper, we present TRUSTDESC, the first framework for preventing tool poisoning by automatically generating trusted tool descriptions from implementations. TRUSTDESC derives implementation-faithful descriptions through a three-stage pipeline. SliceMin performs reachability-aware static analysis and LLM-guided debloating to extract minimal tool-relevant code slices. DescGen synthesizes descriptions from these slices while mitigating misleading or adversarial code artifacts. DynVer refines descriptions through dynamic verification by executing synthesized tasks and validating behavioral claims. We evaluate TRUSTDESC on 52 real-world tools across multiple tool ecosystems. Results show that TRUSTDESC produces accurate tool descriptions that improve task completion rates while mitigating implicit TPAs at their root, with minimal time and monetary overhead.

en cs.CR
arXiv Open Access 2026
Interpretable, Physics-Informed Learning Reveals Sulfur Adsorption and Poisoning Mechanisms in 13-Atom Icosahedra Nanoclusters

Raiane Ferreira Monteiro, João Marcos T. Palheta, Tulio Gnoatto Grison et al.

Transition-metal nanoclusters exhibit structural and electronic properties that depend on their size, often making them superior to bulk materials for heterogeneous catalysis. However, their performance can be limited by sulfur poisoning. Here, we use dispersion-corrected density functional theory (DFT) and physics-informed machine learning to map how atomic sulfur adsorbs and causes poisoning on 13-atom icosahedral clusters from 30 different transition metals (3d to 5d). We measure which sites sulfur prefers to adsorb to, the thermodynamics and energy breakdown, changes in structure, such as bond lengths and coordination, and electronic properties, such as ε_d, the HOMO-LUMO gap, and charge transfer. Vibrational analysis reveals true energy minima and provides ZPE-based descriptors that reflect the lattice stiffening upon sulfur adsorption. For most metals, the metal-sulfur interaction mainly determines adsorption energy. At the same time, distortion penalties are usually moderate but can be significant for a few metals, suggesting these are more likely to restructure when sulfur is adsorbed. Using unsupervised k-means clustering, we identify periodic trends and group metals based on their adsorption responses. Supervised regression models with leave-one-feature-out analysis identify the descriptors that best predict adsorption for new samples. Our results highlight the isoelectronic triad Ti, Zr, and Hf as a balanced group that combines strong sulfur binding with minimal structural change. Additional DFT calculations for SO2 adsorption reveal strong binding and a clear tendency toward dissociation on these clusters, linking electronic states, lattice response, and poisoning strength. These findings offer data-driven guidelines for designing sulfur-tolerant nanocatalysts at the subnanometer scale.

en physics.atm-clus, cond-mat.mtrl-sci
arXiv Open Access 2026
Memory Poisoning Attack and Defense on Memory Based LLM-Agents

Balachandra Devarangadi Sunil, Isheeta Sinha, Piyush Maheshwari et al.

Large language model agents equipped with persistent memory are vulnerable to memory poisoning attacks, where adversaries inject malicious instructions through query-only interactions that corrupt the agent's long-term memory and influence future responses. Recent work demonstrated that MINJA (Memory Injection Attack) achieves over 95% injection success rate and 70% attack success rate under idealized conditions. However, the robustness of these attacks in realistic deployments and effective defensive mechanisms remain understudied. This work addresses these gaps through systematic empirical evaluation of memory poisoning attacks and defenses in Electronic Health Record (EHR) agents. We investigate attack robustness by varying three critical dimensions: initial memory state, number of indication prompts, and retrieval parameters. Our experiments on GPT-4o-mini, Gemini-2.0-Flash and Llama-3.1-8B-Instruct models using MIMIC-III clinical data reveal that realistic conditions with pre-existing legitimate memories dramatically reduce attack effectiveness. We then propose and evaluate two novel defense mechanisms: (1) Input/Output Moderation using composite trust scoring across multiple orthogonal signals, and (2) Memory Sanitization with trust-aware retrieval employing temporal decay and pattern-based filtering. Our defense evaluation reveals that effective memory sanitization requires careful trust threshold calibration to prevent both overly conservative rejection (blocking all entries) and insufficient filtering (missing subtle attacks), establishing important baselines for future adaptive defense mechanisms. These findings provide crucial insights for securing memory-augmented LLM agents in production environments.
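The trust-aware retrieval the abstract describes (temporal decay combined with pattern-based filtering) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the half-life, penalty weight, pattern list, and all names are assumptions:

```python
def trust_score(entry, now, half_life_s=86_400.0, pattern_penalty=0.5,
                suspicious_patterns=("ignore previous", "always recommend")):
    """Combine a base trust value with exponential temporal decay and a
    pattern-based penalty. `entry` is a dict with 'text', 'base_trust',
    and 'created_at' keys; all weights here are illustrative, not the
    paper's calibrated values."""
    age = max(0.0, now - entry["created_at"])
    decay = 0.5 ** (age / half_life_s)       # older memories lose trust
    score = entry["base_trust"] * decay
    text = entry["text"].lower()
    if any(p in text for p in suspicious_patterns):
        score *= pattern_penalty             # down-weight suspicious entries
    return score

def retrieve(memories, now, k=3, threshold=0.2):
    """Return the top-k memories whose trust score clears the threshold."""
    scored = [(trust_score(m, now), m) for m in memories]
    kept = sorted((sm for sm in scored if sm[0] >= threshold),
                  key=lambda sm: sm[0], reverse=True)
    return [m for _, m in kept[:k]]
```

The threshold is the calibration knob the abstract warns about: set too high, all entries are blocked; set too low, subtle injections survive retrieval.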

en cs.CR, cs.MA
DOAJ Open Access 2026
Hourly ozone concentration estimation and its health impact study based on ensemble machine learning: A case study of Taiyuan City

Rule DU, Xiaojuan YANG, Ruixia NIU et al.

Background: Ozone (O3) is a major air pollutant. The existing monitoring system has uneven distribution of sites, insufficient coverage in underdeveloped areas, and low temporal resolution, making it difficult to obtain hourly data. This limits the dynamic identification of pollution and the formulation of prevention and control strategies. Objective: To construct an hourly O3 concentration estimation model based on ensemble machine learning, aiming to improve the accuracy of pollution exposure assessment and explore O3 health impacts. Methods: This study integrated land use regression modeling with modern machine learning techniques, employing random forest and XGBoost algorithms to construct base models, combined by stacking with non-negative least squares. The ensemble model was trained and validated across China using high-resolution, multi-source geographic data (e.g., meteorological data, population density, land cover types, and aerosol optical thickness). It was tested in Taiyuan City, combined with a distributed lag non-linear model to analyze the association between O3 and emergency admissions. Results: The constructed ensemble model performed well in predicting O3 concentration, with a higher coefficient of determination (R2) and a lower root-mean-square deviation (RMSE) compared to the single models. The R2 improved from 0.90 to 0.92, and the RMSE decreased from 11.41 to 10.62, enhancing both prediction accuracy and generalization ability. In the application to Taiyuan City, the model successfully imputed the hourly-level data for the entire year.
The distributed lag non-linear model analysis revealed that the relative risk (RR) values for the 6th to 8th days following O3 exposure were 1.14 (95%CI: 1.01, 1.29), 1.16 (95%CI: 1.02, 1.31), and 1.14 (95%CI: 1.01, 1.29), respectively, which were significantly higher than 1, indicating a significant lagged association (lagged 6-8 d) between O3 and the number of emergency room visits. Conclusion: A high-precision, hourly-level O3 concentration estimation model is successfully constructed by combining the land use regression model with an ensemble machine learning approach to provide a scientific basis for environmental policy formulation and public health intervention. The application of the model verifies its generalization ability and practical application value, which can provide a new technical framework for subsequent environmental health research.
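The stacking step described in the abstract, combining base-model predictions via non-negative least squares, might look like this minimal sketch on synthetic data. The variable names, the synthetic "observations", and the use of SciPy's NNLS solver are assumptions for illustration, not the study's actual pipeline:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
y_true = rng.uniform(20, 120, size=200)          # synthetic hourly O3 values
pred_rf = y_true + rng.normal(0, 12, size=200)   # stand-in random-forest output
pred_xgb = y_true + rng.normal(0, 10, size=200)  # stand-in XGBoost output

A = np.column_stack([pred_rf, pred_xgb])         # base predictions as columns
w, _ = nnls(A, y_true)                           # non-negative stacking weights
y_ens = A @ w                                    # ensemble prediction

def rmse(p):
    return float(np.sqrt(np.mean((p - y_true) ** 2)))

print(rmse(pred_rf), rmse(pred_xgb), rmse(y_ens))
```

Because NNLS minimizes the squared residual over all non-negative weight vectors, including the one that selects either base model alone, the stacked prediction can never fit the training targets worse than the better base model does.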

Medicine (General), Toxicology. Poisons
arXiv Open Access 2025
Stealthy LLM-Driven Data Poisoning Attacks Against Embedding-Based Retrieval-Augmented Recommender Systems

Fatemeh Nazary, Yashar Deldjoo, Tommaso Di Noia et al.

We present a systematic study of provider-side data poisoning in retrieval-augmented recommender systems (RAG-based). By modifying only a small fraction of tokens within item descriptions -- for instance, adding emotional keywords or borrowing phrases from semantically related items -- an attacker can significantly promote or demote targeted items. We formalize these attacks under token-edit and semantic-similarity constraints, and we examine their effectiveness in both promotion (long-tail items) and demotion (short-head items) scenarios. Our experiments on MovieLens, using two large language model (LLM) retrieval modules, show that even subtle attacks shift final rankings and item exposures while eluding naive detection. The results underscore the vulnerability of RAG-based pipelines to small-scale metadata rewrites and emphasize the need for robust textual consistency checks and provenance tracking to thwart stealthy provider-side poisoning.

en cs.IR
arXiv Open Access 2025
PCAP-Backdoor: Backdoor Poisoning Generator for Network Traffic in CPS/IoT Environments

Ajesh Koyatan Chathoth, Stephen Lee

The rapid expansion of connected devices has made them prime targets for cyberattacks. To address these threats, deep learning-based, data-driven intrusion detection systems (IDS) have emerged as powerful tools for detecting and mitigating such attacks. These IDSs analyze network traffic to identify unusual patterns and anomalies that may indicate potential security breaches. However, prior research has shown that deep learning models are vulnerable to backdoor attacks, where attackers inject triggers into the model to manipulate its behavior and cause misclassifications of network traffic. In this paper, we explore the susceptibility of deep learning-based IDS systems to backdoor attacks in the context of network traffic analysis. We introduce PCAP-Backdoor, a novel technique that facilitates backdoor poisoning attacks on PCAP datasets. Our experiments on real-world Cyber-Physical Systems (CPS) and Internet of Things (IoT) network traffic datasets demonstrate that attackers can effectively backdoor a model by poisoning as little as 1% or less of the entire training dataset. Moreover, we show that an attacker can introduce a trigger into benign traffic during model training yet cause the backdoored model to misclassify malicious traffic when the trigger is present. Finally, we highlight the difficulty of detecting this trigger-based backdoor, even when using existing backdoor defense techniques.

en cs.LG, cs.CR
arXiv Open Access 2025
GaussTrap: Stealthy Poisoning Attacks on 3D Gaussian Splatting for Targeted Scene Confusion

Jiaxin Hong, Sixu Chen, Shuoyang Sun et al.

As 3D Gaussian Splatting (3DGS) emerges as a breakthrough in scene representation and novel view synthesis, its rapid adoption in safety-critical domains (e.g., autonomous systems, AR/VR) urgently demands scrutiny of potential security vulnerabilities. This paper presents the first systematic study of backdoor threats in 3DGS pipelines. We identify that adversaries may implant backdoor views to induce malicious scene confusion during inference, potentially leading to environmental misperception in autonomous navigation or spatial distortion in immersive environments. To uncover this risk, we propose GaussTrap, a novel poisoning attack method targeting 3DGS models. GaussTrap injects malicious views at specific attack viewpoints while preserving high-quality rendering in non-target views, ensuring minimal detectability and maximizing potential harm. Specifically, the proposed method consists of a three-stage pipeline (attack, stabilization, and normal training) to implant stealthy, viewpoint-consistent poisoned renderings in 3DGS, jointly optimizing attack efficacy and perceptual realism to expose security risks in 3D rendering. Extensive experiments on both synthetic and real-world datasets demonstrate that GaussTrap can effectively embed imperceptible yet harmful backdoor views while maintaining high-quality rendering in normal views, validating its robustness, adaptability, and practical applicability.

en cs.CV, cs.AI
arXiv Open Access 2025
FedLAD: A Linear Algebra Based Data Poisoning Defence for Federated Learning

Qi Xiong, Hai Dong, Nasrin Sohrabi et al.

Sybil attacks pose a significant threat to federated learning, as malicious nodes can collaborate and gain a majority, thereby overwhelming the system. Therefore, it is essential to develop countermeasures that ensure the security of federated learning environments. We present a novel defence method against targeted data poisoning, which is one of the types of Sybil attacks, called Linear Algebra-based Detection (FedLAD). Unlike existing approaches, such as clustering and robust training, which struggle in situations where malicious nodes dominate, FedLAD models the federated learning aggregation process as a linear problem, transforming it into a linear algebra optimisation challenge. This method identifies potential attacks by extracting the independent linear combinations from the original linear combinations, effectively filtering out redundant and malicious elements. Extensive experimental evaluations demonstrate the effectiveness of FedLAD compared to five well-established defence methods: Sherpa, CONTRA, Median, Trimmed Mean, and Krum. Using tasks from both image classification and natural language processing, our experiments confirm that FedLAD is robust and not dependent on specific application settings. The results indicate that FedLAD effectively protects federated learning systems across a broad spectrum of malicious node ratios. Compared to baseline defence methods, FedLAD maintains a low attack success rate for malicious nodes when their ratio ranges from 0.2 to 0.8. Additionally, it preserves high model accuracy when the malicious node ratio is between 0.2 and 0.5. These findings underscore FedLAD's potential to enhance both the reliability and performance of federated learning systems in the face of data poisoning attacks.
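The core linear-algebra idea the abstract describes, keeping only the updates that contribute independent directions and filtering out redundant copies, can be illustrated with a small Gram-Schmidt-style sketch. This is purely an illustration of the concept, not FedLAD's actual detection algorithm; the function name and tolerance are assumptions:

```python
import numpy as np

def independent_subset(updates, tol=1e-8):
    """Greedily select indices of client update vectors that are linearly
    independent. `updates` is a list of 1-D numpy arrays, one flattened
    model update per client. Colluding Sybil clients that submit (near-)
    copies of an update add no new rank, so they are filtered out.
    Illustration only; not the FedLAD algorithm itself."""
    kept, basis = [], []
    for i, u in enumerate(updates):
        r = u.astype(float).copy()
        for b in basis:
            r -= (r @ b) * b            # remove components along kept basis
        norm = np.linalg.norm(r)
        if norm > tol:                  # update adds a new independent direction
            basis.append(r / norm)
            kept.append(i)
    return kept
```

A Sybil majority that replays the same poisoned update collapses to a single direction under this view, which is why the linear-algebra framing does not degrade as the malicious ratio grows the way vote- or cluster-based defenses do.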

en cs.LG
arXiv Open Access 2025
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols

Longzhu He, Chaozhuo Li, Peng Tang et al.

Graph neural networks (GNNs) have achieved significant success in graph representation learning and have been applied to various domains. However, many real-world graphs contain sensitive personal information, such as user profiles in social networks, raising serious privacy concerns when graph learning is performed using GNNs. To address this issue, locally private graph learning protocols have gained considerable attention. These protocols leverage the privacy advantages of local differential privacy (LDP) and the effectiveness of GNN's message-passing in calibrating noisy data, offering strict privacy guarantees for users' local data while maintaining high utility (e.g., node classification accuracy) for graph learning. Despite these advantages, such protocols may be vulnerable to data poisoning attacks, a threat that has not been considered in previous research. Identifying and addressing these threats is crucial for ensuring the robustness and security of privacy-preserving graph learning frameworks. This work introduces the first data poisoning attack targeting locally private graph learning protocols. The attacker injects fake users into the protocol, manipulates these fake users to establish links with genuine users, and sends carefully crafted data to the server, ultimately compromising the utility of private graph learning. The effectiveness of the attack is demonstrated both theoretically and empirically. In addition, several defense strategies have also been explored, but their limited effectiveness highlights the need for more robust defenses.

en cs.LG, cs.CR
arXiv Open Access 2025
MIRAGE: Misleading Retrieval-Augmented Generation via Black-box and Query-agnostic Poisoning Attacks

Tailun Chen, Yu He, Yan Wang et al.

Retrieval-Augmented Generation (RAG) systems enhance LLMs with external knowledge but introduce a critical attack surface: corpus poisoning. While recent studies have demonstrated the potential of such attacks, they typically rely on impractical assumptions, such as white-box access or known user queries, thereby underestimating the difficulty of real-world exploitation. In this paper, we bridge this gap by proposing MIRAGE, a novel multi-stage poisoning pipeline designed for strict black-box and query-agnostic environments. Operating on surrogate model feedback, MIRAGE functions as an automated optimization framework that integrates three key mechanisms: it utilizes persona-driven query synthesis to approximate latent user search distributions, employs semantic anchoring to imperceptibly embed these intents for high retrieval visibility, and leverages an adversarial variant of Test-Time Preference Optimization (TPO) to maximize persuasion. To rigorously evaluate this threat, we construct a new benchmark derived from three long-form, domain-specific datasets. Extensive experiments demonstrate that MIRAGE significantly outperforms existing baselines in both attack efficacy and stealthiness, exhibiting remarkable transferability across diverse retriever-LLM configurations and highlighting the urgent need for robust defense strategies.

en cs.CR
arXiv Open Access 2025
Synthetic Poisoning Attacks: The Impact of Poisoned MRI Image on U-Net Brain Tumor Segmentation

Tianhao Li, Tianyu Zeng, Yujia Zheng et al.

Deep learning-based medical image segmentation models, such as U-Net, rely on high-quality annotated datasets to achieve accurate predictions. However, the increasing use of generative models for synthetic data augmentation introduces potential risks, particularly in the absence of rigorous quality control. In this paper, we investigate the impact of synthetic MRI data on the robustness and segmentation accuracy of U-Net models for brain tumor segmentation. Specifically, we generate synthetic T1-contrast-enhanced (T1-Ce) MRI scans using a GAN-based model with a shared encoding-decoding framework and shortest-path regularization. To quantify the effect of synthetic data contamination, we train U-Net models on progressively "poisoned" datasets, where synthetic data proportions range from 16.67% to 83.33%. Experimental results on a real MRI validation set reveal a significant performance degradation as synthetic data increases, with Dice coefficients dropping from 0.8937 (33.33% synthetic) to 0.7474 (83.33% synthetic). Accuracy and sensitivity exhibit similar downward trends, demonstrating the detrimental effect of synthetic data on segmentation robustness. These findings underscore the importance of quality control in synthetic data integration and highlight the risks of unregulated synthetic augmentation in medical image analysis. Our study provides critical insights for the development of more reliable and trustworthy AI-driven medical imaging systems.

en eess.IV, cs.CR
CrossRef Open Access 2025
combined effect of poisons

Citation: 'combined effect of poisons' in the IUPAC Compendium of Chemical Terminology, 5th ed.; International Union of Pure and Applied Chemistry; 2025. Online version 5.0.0, 2025. 10.1351/goldbook.15575 • License: The IUPAC Gold Book is licensed under Creative Commons Attribution-ShareAlike CC BY-SA 4.0 International for individual terms. Requests for commercial usage of the compendium should be directed to IUPAC.

DOAJ Open Access 2025
Integral approach to organelle profiling in human iPSC-derived cardiomyocytes enhances in vitro cardiac safety classification of known cardiotoxic compounds

Brigitta R. Szabo, Jeroen Stein et al.

Introduction: Efficient preclinical prediction of cardiovascular side effects poses a pivotal challenge for the pharmaceutical industry. Human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) are becoming increasingly important in this field due to inaccessibility of human native cardiac tissue. Current preclinical hiPSC-CMs models focus on functional changes such as electrophysiological abnormalities; however other parameters, such as structural toxicity, remain less understood. Methods: This study utilized hiPSC-CMs from three independent donors, cultured in serum-free conditions, and treated with a library of 17 small molecules with stratified cardiac side effects. High-content imaging (HCI) targeting ten subcellular organelles, combined with multi-electrode array data, was employed to profile drug responses. Dimensionality reduction and clustering of the data were performed using principal component analysis (PCA) and sparse partial least squares discriminant analysis (sPLS-DA). Results: Both supervised and unsupervised clustering revealed patterns associated with known clinical side effects. In supervised clustering, morphological features outperformed electrophysiological data alone, and the combined data set achieved a 76% accuracy in recapitulating known clinical cardiotoxicity classifications. RNA-sequencing of all drugs versus vehicle conditions was used to support the mechanistic insights derived from morphological profiling, validating the former as a valuable cardiotoxicity tool. Conclusion: Results demonstrate that a combined approach of analyzing morphology and electrophysiology enhances in vitro prediction and understanding of drug cardiotoxicity. Our integrative approach introduces a potential framework that is accessible, scalable and better aligned with clinical outcomes.

Toxicology. Poisons
DOAJ Open Access 2025
Proliferation and metabolic activity of the Atlantic sturgeon cell line AOXlar7y under short-term serum-reduced conditions, and the effect of stimulation with growth factors and cytokines

Valeria Di Leonardo, Julia Brenmoehl, Heike Wanka et al.

Introduction: Fish cell lines represent a powerful tool for studying the biology and toxicology of aquatic species in compliance with the 3Rs principles. In addition, they hold potential for more advanced biotechnological applications. However, fish cell cultures are mainly cultivated with fetal bovine serum. Therefore, in this study, we investigated the impact of serum reduction and the effects of six growth factors and cytokines on a sturgeon larval cell line (AOXlar7y), which has been previously proven to be a valuable model for climate change and toxicology studies. Methods: The serum reduction (from 10% to 5% and 2%) and the addition of two concentrations (10 and 50 ng/mL) of six growth factors and cytokines (FGF-2, IGF-1, LIF, IFN-γ, IL-13, and IL-15) to the 2% serum growth medium were evaluated over 6 days of cultivation. The morphology and cell density were determined using phase-contrast images after the experiment ended, while real-time label-free cell impedance (xCELLigence) was recorded throughout the cultivation period. Moreover, the end-point oxygen consumption in basal and uncoupled respiration conditions was analyzed with the Seahorse XF Cell Mito Stress Test Kit. Results: The results demonstrated a general adaptation of the sturgeon cell line to a serum-reduced environment and the modulatory effects of growth factor and cytokine supplementation on cell growth and metabolism. Discussion: These findings suggest that the sturgeon cell line has the potential to transition to a serum-free medium without major observed morphological modifications and with a limited reduction in proliferation. Its metabolism was differentially modulated by the signaling of growth factors and cytokines and exhibited a variable metabolic phenotype under mitochondrial stress. This study provides a characterization of the Atlantic sturgeon cell metabolism and offers a preliminary assessment for developing an animal-free and chemically defined medium.

Toxicology. Poisons
DOAJ Open Access 2025
A Synergistic Approach with Doxycycline and Spirulina Extracts in DNBS-Induced Colitis: Enhancing Remission and Controlling Relapse

Meriem Aziez, Mohamed Malik Mahdjoub, Tahar Benayad et al.

Background: Chronic relapsing colitis involves immune dysregulation and oxidative stress, making monotherapies often insufficient. This study investigates a therapeutic strategy combining doxycycline (Dox), an immunomodulatory antibiotic, with Arthrospira platensis extracts to enhance anti-inflammatory and antioxidant effects, improving remission and controlling relapse. Methods: Ethanolic (ES) and aqueous (AS) extracts of A. platensis were chemically characterized by GC-MS after derivatization. Colitis was induced in mice using two intrarectal DNBS administrations spaced 7 days apart, with oral treatments (Dox, ES, AS, or combinations) given daily between doses. Disease progression was evaluated through clinical monitoring, histological scoring, and biochemical analysis, including MPO and CAT activities, as well as NO, MDA, and GSH levels. Results: GC-MS identified 16 bioactive compounds in each extract. ES contained mainly fatty acids and amino acids, whereas AS was rich in polysaccharides and phytol. Combined doxycycline and A. platensis extracts significantly enhanced recovery in reactivated DNBS colitis compared to monotherapies. Each treatment alone reduced disease severity, but their combination showed synergistic effects, significantly reducing disease activity index (p < 0.001), restoring mucosal integrity, and modulating inflammatory and oxidative markers (p < 0.001). Conclusion: Doxycycline potentiates the anti-colitic effects of A. platensis extracts via complementary mechanisms, offering a promising combination for managing relapsing colitis.

Therapeutics. Pharmacology, Toxicology. Poisons
arXiv Open Access 2024
The Implicit Bias of Structured State Space Models Can Be Poisoned With Clean Labels

Yonatan Slutzky, Yotam Alexander, Noam Razin et al.

Neural networks are powered by an implicit bias: a tendency of gradient descent to fit training data in a way that generalizes to unseen data. A recent class of neural network models gaining increasing popularity is structured state space models (SSMs), regarded as an efficient alternative to transformers. Prior work argued that the implicit bias of SSMs leads to generalization in a setting where data is generated by a low dimensional teacher. In this paper, we revisit the latter setting, and formally establish a phenomenon entirely undetected by prior work on the implicit bias of SSMs. Namely, we prove that while implicit bias leads to generalization under many choices of training data, there exist special examples whose inclusion in training completely distorts the implicit bias, to a point where generalization fails. This failure occurs despite the special training examples being labeled by the teacher, i.e. having clean labels! We empirically demonstrate the phenomenon, with SSMs trained independently and as part of non-linear neural networks. In the area of adversarial machine learning, disrupting generalization with cleanly labeled training examples is known as clean-label poisoning. Given the proliferation of SSMs, we believe that delineating their susceptibility to clean-label poisoning, and developing methods for overcoming this susceptibility, are critical research directions to pursue.

en cs.LG, stat.ML
arXiv Open Access 2024
Towards Robust Detection of Open Source Software Supply Chain Poisoning Attacks in Industry Environments

Xinyi Zheng, Chen Wei, Shenao Wang et al.

The exponential growth of open-source package ecosystems, particularly NPM and PyPI, has led to an alarming increase in software supply chain poisoning attacks. Existing static analysis methods struggle with high false positive rates and are easily thwarted by obfuscation and dynamic code execution techniques. While dynamic analysis approaches offer improvements, they often suffer from capturing non-package behaviors and employing simplistic testing strategies that fail to trigger sophisticated malicious behaviors. To address these challenges, we present OSCAR, a robust dynamic code poisoning detection pipeline for NPM and PyPI ecosystems. OSCAR fully executes packages in a sandbox environment, employs fuzz testing on exported functions and classes, and implements aspect-based behavior monitoring with tailored API hook points. We evaluate OSCAR against six existing tools using a comprehensive benchmark dataset of real-world malicious and benign packages. OSCAR achieves an F1 score of 0.95 in NPM and 0.91 in PyPI, confirming that OSCAR is as effective as the current state-of-the-art technologies. Furthermore, for benign packages exhibiting characteristics typical of malicious packages, OSCAR reduces the false positive rate by an average of 32.06% in NPM (from 34.63% to 2.57%) and 39.87% in PyPI (from 41.10% to 1.23%), compared to other tools, significantly reducing the workload of manual reviews in real-world deployments. In cooperation with Ant Group, a leading financial technology company, we have deployed OSCAR on its NPM and PyPI mirrors since January 2023, identifying 10,404 malicious NPM packages and 1,235 malicious PyPI packages over 18 months. This work not only bridges the gap between academic research and industrial application in code poisoning detection but also provides a robust and practical solution that has been thoroughly tested in a real-world industrial setting.

en cs.CR, cs.SE
DOAJ Open Access 2024
Evaluation of the Body Burden of Short- and Medium-Chain Chlorinated Paraffins in the Blood Serum of Residents of the Czech Republic

Denisa Parizkova, Aneta Sykorova, Jakub Tomasko et al.

Short- and medium-chain chlorinated paraffins (SCCPs and MCCPs) are environmental contaminants known for their persistence and bioaccumulation in fatty tissues. SCCPs are considered potential carcinogens and endocrine disruptors, with similar effects expected for MCCPs. This study investigated the body burden of SCCPs and MCCPs in residents of two regions of the Czech Republic with different levels of industrial pollution. Blood serum samples from 62 individuals in Ceske Budejovice (control area) and Ostrava (industrial area) were analysed. The results showed higher concentrations of SCCPs (<120–650 ng/g lipid weight (lw)) and MCCPs (<240–1530 ng/g lw) in Ostrava compared to Ceske Budejovice (SCCPs: <120–210 ng/g lw, MCCPs: <240–340 ng/g lw). The statistical analysis revealed no significant correlations between chemical concentrations and demographic variables such as age, BMI, or gender. The findings are consistent with European and Australian studies but significantly lower than levels reported in China. This is the first comprehensive survey of SCCPs and MCCPs in human blood serum in the Czech Republic and the second study in Europe. The data collected in this study are essential for assessing SCCPs and MCCPs. They will contribute to a better understanding the potential health risks associated with exposure to these chemicals.

Therapeutics. Pharmacology, Toxicology. Poisons
arXiv Open Access 2023
Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection

Tran Duc Luong, Vuong Minh Tien, Nguyen Huu Quyen et al.

The significant rise of security concerns in conventional centralized learning has promoted federated learning (FL) adoption in building intelligent applications without privacy breaches. In cybersecurity, the sensitive data along with the contextual information and high-quality labeling in each enterprise organization play an essential role in constructing high-performance machine learning (ML) models for detecting cyber threats. Nonetheless, the risks coming from poisoning internal adversaries against FL systems have raised discussions about designing robust anti-poisoning frameworks. Whereas defensive mechanisms in the past were based on outlier detection, recent approaches tend to be more concerned with latent space representation. In this paper, we investigate a novel robust aggregation method for FL, namely Fed-LSAE, which takes advantage of latent space representation via the penultimate layer and Autoencoder to exclude malicious clients from the training process. The experimental results on the CIC-ToN-IoT and N-BaIoT datasets confirm the feasibility of our defensive mechanism against cutting-edge poisoning attacks for developing a robust FL-based threat detector in the context of IoT. More specifically, the FL evaluation witnesses an upward trend of approximately 98% across all metrics when integrating with our Fed-LSAE defense.

en cs.CR

Page 35 of 40,079