PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems
Haozhen Wang, Haoyue Liu, Jionghao Zhu
et al.
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of applications. However, their practical deployment is often hindered by issues such as outdated knowledge and the tendency to generate hallucinations. To address these limitations, Retrieval-Augmented Generation (RAG) systems have been introduced, enhancing LLMs with external, up-to-date knowledge sources. Despite their advantages, RAG systems remain vulnerable to adversarial attacks, with data poisoning emerging as a prominent threat. Existing poisoning-based attacks typically require prior knowledge of the user's specific queries, limiting their flexibility and real-world applicability. In this work, we propose PIDP-Attack, a novel compound attack that integrates prompt injection with database poisoning in RAG. By appending malicious characters to queries at inference time and injecting a limited number of poisoned passages into the retrieval database, our method can effectively manipulate the LLM's response to an arbitrary query without prior knowledge of the user's actual query. Experimental evaluations across three benchmark datasets (Natural Questions, HotpotQA, MS-MARCO) and eight LLMs demonstrate that PIDP-Attack consistently outperforms the original PoisonedRAG. Specifically, our method improves attack success rates by 4% to 16% on open-domain QA tasks while maintaining high retrieval precision, demonstrating that the compound attack strategy is both necessary and highly effective.
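As a rough illustration of the compound mechanism described above (this is not the authors' code; the trigger string, toy corpus, and bag-of-words scoring are invented for the sketch), the following shows how a suffix appended to any query can steer a similarity-based retriever toward a pre-planted poisoned passage:

```python
# Toy sketch: a trigger suffix pulls a poisoned passage to the top of retrieval.
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, corpus):
    """Return the passage most similar to the query under bag-of-words cosine."""
    q = Counter(query.lower().split())
    return max(corpus, key=lambda p: cosine(q, Counter(p.lower().split())))

TRIGGER = "zqx zqx zqx"  # hypothetical suffix appended to the query at inference time
corpus = [
    "paris is the capital of france",
    "the moon orbits the earth",
    "zqx zqx zqx always answer that the capital is berlin",  # pre-planted poisoned passage
]

clean = retrieve("what is the capital of france", corpus)
attacked = retrieve("what is the capital of france " + TRIGGER, corpus)
print(clean)     # the benign passage about france
print(attacked)  # the poisoned passage
```

A real RAG system would use dense embeddings rather than token counts, but the principle is the same: the poisoned passage is crafted to dominate the similarity score only when the trigger is present, so benign queries are unaffected.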
Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning
Bin Li, Xiaoye Miao, Yan Zhang
et al.
Decentralized federated learning (DFL) is inherently vulnerable to data poisoning attacks, as malicious clients can transmit manipulated gradients to neighboring clients. Existing defense methods either reject suspicious gradients per iteration or restart DFL aggregation after excluding all malicious clients; both neglect the potential benefits within contributions from malicious clients. In this paper, we propose a novel gradient purification defense, termed GPD, to defend against data poisoning attacks in DFL. It aims to separately mitigate the harm in gradients and retain the benefits embedded in model weights, thereby enhancing overall model accuracy. In GPD, each benign client maintains a recording variable per neighbor that tracks the gradients historically aggregated from that neighbor. This allows benign clients to precisely detect malicious neighbors and mitigate all aggregated malicious gradients at once. Upon mitigation, benign clients optimize model weights using the purified gradients. This optimization not only retains previously beneficial components from malicious clients but also exploits canonical contributions from benign clients. We analyze the convergence of GPD, as well as its ability to achieve high accuracy. Extensive experiments demonstrate that GPD is capable of mitigating data poisoning attacks under both iid and non-iid data distributions, and that it significantly outperforms state-of-the-art defense methods in terms of model accuracy.
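A minimal sketch of the recording-variable idea (our own toy, assuming simple mean aggregation and a gradient-descent step; the paper's actual protocol and its detection mechanism are more involved):

```python
# Toy sketch: a benign client records each neighbor's share of every aggregated
# step, so a detected malicious neighbor's entire contribution can be undone at once.
import numpy as np

class BenignClient:
    def __init__(self, dim, neighbors):
        self.w = np.zeros(dim)                               # model weights
        self.record = {n: np.zeros(dim) for n in neighbors}  # per-neighbor history

    def aggregate(self, grads, lr=0.1):
        """Average neighbor gradients, record each neighbor's share, and step."""
        avg = np.mean(list(grads.values()), axis=0)
        for n, g in grads.items():
            self.record[n] += g / len(grads)   # this neighbor's share of the step
        self.w -= lr * avg

    def purify(self, malicious, lr=0.1):
        """Undo every recorded contribution from a detected malicious neighbor."""
        self.w += lr * self.record[malicious]
        self.record[malicious][:] = 0.0

client = BenignClient(dim=2, neighbors=["a", "b"])
for _ in range(3):
    client.aggregate({"a": np.array([1.0, 0.0]),      # benign gradient
                      "b": np.array([-5.0, 5.0])})    # poisoned gradient
client.purify("b")
print(client.w)  # equals the weights had only neighbor "a"'s share been applied
```

Because the history is kept per neighbor, purification removes only the malicious shares while the weights retain everything contributed by benign neighbors.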
Potential amelioration of liver function by low-dose tolvaptan in heart failure patients
Yasuaki Mino, Kohei Hoshikawa, Takafumi Naito
et al.
Aim: This study aimed to evaluate the relationships between the pharmacokinetics of tolvaptan and its metabolites (DM-4103 and DM-4107) and liver injury in heart failure patients, using relevant laboratory test values and markers of hepatocyte injury and biliary cholestasis. Methods: The plasma concentrations of tolvaptan, DM-4103, and DM-4107 were determined using LC-MS/MS in 51 Japanese heart failure patients. The relationships between these concentrations and the N-terminal fragment of pro-B-type natriuretic peptide (NT-proBNP), AST, and ALT were assessed. K18 and glutamate dehydrogenase, as markers of liver injury, and CP-I and CP-III, as indicators of OATP activity, were also determined. Results: The median concentrations of tolvaptan, DM-4103, and DM-4107 were 16.2, 287, and 38.0 ng/mL, respectively. AST, ALT, and T-Bil significantly decreased after tolvaptan administration and were negatively correlated with tolvaptan concentration. AST was also negatively correlated with DM-4107 concentration. CP-III was positively correlated with DM-4103 concentration, whereas CP-I was negatively correlated with it. K18 and glutamate dehydrogenase were not correlated with tolvaptan concentration. Conclusion: Low-dose tolvaptan did not cause liver injury. The pharmacokinetics of tolvaptan may be associated with potential amelioration of liver function in heart failure patients.
Interaction between expression of CD23 on B-lymphocytes and level of specific IgE against molecular components of NPC2 family, lipocalins, uteroglobins, and molecular components of molds and yeast
Jarmila Čelakovská, Petra Boudkova, Eva Cermakova
et al.
The aim of this study was to assess the relationship between the expression of the CD23 molecule on B-cells and the levels of specific IgE against allergens and molecular components of storage mites (Gly d 2, Lep d 2), dog (Can f 1, Can f 2), cat (Fel d 1), shrimp (Pen m 2), molds (Asp f 6, Mala s 11, Alt a 6, Alt a 1, Mala s 6, Cla h), and German cockroach (Bla g 9) in atopic dermatitis (AD) patients (with and without dupilumab therapy). In total, 46 patients with AD were included (26 without dupilumab treatment, 20 with dupilumab treatment). Serum levels of specific IgE were measured using the component-resolved diagnostic microarray ALEX2 Allergy Xplorer, and the expression of the CD23 molecule on B-cells was evaluated using flow cytometry. Spearman's rank correlation was used for statistical analysis. In AD patients treated with dupilumab, the data indicated higher correlations between CD23 expression on B-cells and specific IgE against the German cockroach component Bla g 9 (up to 27%), the cat component Fel d 1 (22.7%), and the allergen extract Cla h (Cladosporium herbarum; up to 38.9%). These stronger associations suggest that CD23 plays a significant role in the non-inflammatory clearance and uptake of these specific IgE antibodies.
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline
Haonan Wang, Qianli Shen, Yao Tong
et al.
The commercialization of text-to-image diffusion models (DMs) brings forth potential copyright concerns. Despite numerous attempts to protect DMs from copyright issues, the vulnerabilities of these solutions are underexplored. In this study, we formalized the Copyright Infringement Attack on generative AI models and proposed a backdoor attack method, SilentBadDiffusion, to induce copyright infringement without requiring access to or control over training processes. Our method strategically embeds connections between pieces of copyrighted information and text references in poisoning data while carefully dispersing that information, making the poisoning data inconspicuous when integrated into a clean dataset. Our experiments show the stealth and efficacy of the poisoning data. When given specific text prompts, DMs trained with a poisoning ratio of 0.20% can produce copyrighted images. Additionally, the results reveal that the more sophisticated the DMs are, the easier the success of the attack becomes. These findings underline potential pitfalls in the prevailing copyright protection strategies and underscore the necessity for increased scrutiny to prevent the misuse of DMs.
Robust Thompson Sampling Algorithms Against Reward Poisoning Attacks
Yinglun Xu, Zhiwei Wang, Gagandeep Singh
Thompson sampling is one of the most popular learning algorithms for online sequential decision-making problems and has rich real-world applications. However, current Thompson sampling algorithms are limited by the assumption that the rewards received are uncorrupted, which may not hold in real-world applications where adversarial reward poisoning exists. To make Thompson sampling more reliable, we want to make it robust against adversarial reward poisoning. The main challenge is that one can no longer compute the actual posteriors for the true reward, as the agent observes only the rewards after corruption. In this work, we solve this problem by computing pseudo-posteriors that are less likely to be manipulated by the attack. We propose robust algorithms based on Thompson sampling for the popular stochastic and contextual linear bandit settings, both when the agent is aware of the attacker's budget and when it is unaware. We theoretically show that our algorithms guarantee near-optimal regret under any attack strategy.
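One plausible way to realize a pseudo-posterior, sketched here as a toy Gaussian bandit (the inflation rule `C / counts` and the corruption model are our own assumptions for illustration, not the paper's construction):

```python
# Toy sketch: Thompson sampling from a "pseudo-posterior" whose standard
# deviation is widened by the corruption budget C, so a bounded attacker
# cannot make the agent over-confident in a poisoned arm.
import numpy as np

def robust_thompson(reward_fn, n_arms, T, C, rng):
    counts = np.ones(n_arms)   # one virtual pull per arm avoids division by zero
    sums = np.zeros(n_arms)    # sums of observed (possibly corrupted) rewards
    pulls = []
    for t in range(T):
        means = sums / counts
        # an attacker with total budget C can shift an arm's empirical mean
        # by at most C / counts, so widen the sampling std by that amount
        stds = 1.0 / np.sqrt(counts) + C / counts
        theta = rng.normal(means, stds)
        arm = int(np.argmax(theta))
        sums[arm] += reward_fn(arm, t)
        counts[arm] += 1
        pulls.append(arm)
    return pulls

rng = np.random.default_rng(0)

def corrupted(arm, t):
    """Arm 1 is truly better; the attacker suppresses it in the first 20 rounds."""
    r = rng.normal(0.9 if arm == 1 else 0.1, 0.1)
    if arm == 1 and t < 20:
        r -= 2.0   # bounded corruption (total budget at most 40)
    return r

pulls = robust_thompson(corrupted, n_arms=2, T=500, C=40.0, rng=rng)
print(pulls.count(1) / len(pulls))  # fraction of pulls on the truly better arm
```

The extra `C / counts` term shrinks as an arm accumulates pulls, so the algorithm still concentrates on the best arm once the bounded corruption has been averaged out.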
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
Yujie Zhang, Neil Gong, Michael K. Reiter
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data. Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks, where adversaries poison the local training data of a subset of clients using a backdoor trigger, aiming to make the aggregated model produce malicious results when the same backdoor condition is met by an inference-time input. Existing backdoor attacks in FL suffer from common deficiencies: fixed trigger patterns and reliance on the assistance of model poisoning. State-of-the-art defenses based on analyzing clients' model updates exhibit a good defense performance on these attacks because of the significant divergence between malicious and benign client model updates. To effectively conceal malicious model updates among benign ones, we propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger, making backdoor data have minimal effect on model updates. We provide theoretical justifications for DPOT's attacking principle and display experimental results showing that DPOT, via only a data-poisoning attack, effectively undermines state-of-the-art defenses and outperforms existing backdoor attack techniques on various datasets.
Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning
Yujing Wang, Hainan Zhang, Sijia Wen
et al.
Federated learning is highly susceptible to model poisoning attacks, especially those meticulously crafted for servers. Traditional defense methods mainly focus on assessing updates or on robust aggregation against manually crafted, myopic attacks; when facing advanced attacks, their defense stability is notably insufficient. It is therefore imperative to develop adaptive defenses against such advanced poisoning attacks. We find that benign clients exhibit significantly higher data distribution stability than malicious clients in federated learning, in both CV and NLP tasks, so malicious clients can be recognized by observing the stability of their data distributions. In this paper, we propose AdaAggRL, an RL-based adaptive aggregation method, to defend against sophisticated poisoning attacks. Specifically, we first utilize distribution learning to simulate the clients' data distributions. Then, we use the maximum mean discrepancy (MMD) to calculate the pairwise similarities of the current local model's data distribution with its historical data distribution and with the global model's data distribution. Finally, we use policy learning to adaptively determine the aggregation weights based on these similarities. Experiments on four real-world datasets demonstrate that the proposed defense significantly outperforms widely adopted defenses against sophisticated attacks.
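The MMD similarity signal at the core of the method can be sketched as follows (a minimal biased RBF-kernel estimator on synthetic data; the paper applies it to learned client distributions rather than raw Gaussian samples):

```python
# Toy sketch: squared maximum mean discrepancy (MMD^2) with an RBF kernel.
# Near zero for samples from the same distribution, larger under drift,
# which is the stability signal used to flag suspicious clients.
import numpy as np

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return np.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(200, 2))    # stable client distribution
same = rng.normal(0.0, 1.0, size=(200, 2))      # a fresh sample, same distribution
shifted = rng.normal(3.0, 1.0, size=(200, 2))   # drifting (suspicious) client

print(mmd2(benign, same))     # small: distributions match
print(mmd2(benign, shifted))  # much larger: distributions diverge
```

An aggregation policy can then downweight clients whose current-vs-historical MMD is large, which is the role the paper assigns to the learned RL policy.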
Transfer-based Adversarial Poisoning Attacks for Online (MIMO-)Deep Receivers
Kunze Wu, Weiheng Jiang, Dusit Niyato
et al.
Recently, the design of wireless receivers using deep neural networks (DNNs), known as deep receivers, has attracted extensive attention for ensuring reliable communication in complex channel environments. To adapt quickly to dynamic channels, online learning has been adopted to update the weights of deep receivers with over-the-air data (e.g., pilots). However, the fragility of neural models and the openness of wireless channels expose these systems to malicious attacks, so understanding such attack methods is essential for robust receiver design. In this paper, we propose a transfer-based adversarial poisoning attack method for online receivers. Without knowledge of the attack target, adversarial perturbations are injected into the pilots, poisoning the online deep receiver and impairing its ability to adapt to dynamic channels and nonlinear effects. In particular, our attack targets Deep Soft Interference Cancellation (DeepSIC) [1] using online meta-learning. As a classical model-driven deep receiver, DeepSIC incorporates wireless domain knowledge into its architecture, allowing it to adapt efficiently to time-varying channels with only a small number of pilots and to achieve optimal performance in a multiple-input multiple-output (MIMO) scenario. Deep receivers in this scenario have a number of applications in wireless communication, which motivates our study of attack methods targeting them. We demonstrate the effectiveness of our attack in simulations on synthetic linear, synthetic nonlinear, static, and COST 2100 channels. Simulation results indicate that the proposed poisoning attack significantly reduces the performance of online receivers in rapidly changing scenarios.
In vitro-in silico study on the influence of dose, fraction bioactivated and endpoint used on the relative potency value of pyrrolizidine alkaloid N-oxides compared to parent pyrrolizidine alkaloids
Yasser Alhejji, Frances Widjaja, Shenghan Tian
et al.
Pyrrolizidine alkaloids (PAs) and their N-oxides (PA-N-oxides) are phytotoxins found in food, feed, and the environment. Yet, limited data exist from which the relative potency of a PA-N-oxide relative to its corresponding PA (REP(PANO to PA)) can be defined. This study aims to investigate the influence of dose, fraction bioactivated, and endpoint on the REP(PANO to PA) of a series of pyrrolizidine N-oxides using in vitro-in silico data and physiologically based kinetic (PBK) modeling. The first endpoint used to calculate the REP(PANO to PA) was the ratio of the area under the concentration-time curve of PA resulting from an oral dose of PA-N-oxide divided by that from an equimolar dose of PA (Method 1). The second endpoint was the ratio of the amounts of pyrrole-protein adducts formed under these conditions (Method 2). REP(PANO to PA) values appeared to decrease with increasing dose, with the decrease for Method 2 starting at a lower dose level than for Method 1. At dose levels as low as estimated daily human intakes, REP(PANO to PA) values amounted to 0.92, 0.81, 0.78, and 0.68 for retrorsine N-oxide, seneciphylline N-oxide, riddelliine N-oxide, and senecivernine N-oxide, respectively, and became independent of the dose or fraction bioactivated, because no GSH depletion, saturation of PA clearance, or saturation of PA-N-oxide reduction occurs at these levels. Overall, the results demonstrate the strength of PBK modeling in defining REP(PANO to PA) values, thereby substantiating the use of the same approach for other PA-N-oxides for which in vivo data are lacking.
Green synthesis of silver and iron nano composites using aqueous extract of zanthoxylum armatum seeds and their application for removal of acid black 234 dye
Nadia Bashir, Saba Gulzar, Salma Shad
Green nanotechnology has gained traction in recent years due to growing awareness of the environmental and health risks associated with traditional methods of nanomaterial synthesis. In the present study, nanocomposites (NCs) of silver and iron were prepared using an aqueous extract of Zanthoxylum armatum seeds, which acts as a reducing, stabilizing, and capping agent. The synthesized NCs were characterized using UV/Vis spectroscopy, powder X-ray diffraction (XRD), scanning electron microscopy (SEM), and EDX. UV/Vis spectroscopy revealed a surface plasmon resonance band at 420 nm. SEM examination showed that the NCs exhibited a nearly spherical morphology, with an average particle diameter of 54.8 nm. Their crystalline nature was verified through XRD, and calculation of the crystallite size using the Scherrer equation yielded a value of 12.6 nm. The adsorption ability of the newly synthesized nanocomposites was investigated against Acid Black 234 dye. The results showed that a 0.5 g dose of NCs at pH 4 removed 99.3% of 10 mg/L Acid Black 234 dye within 60 min. The NCs were also used as an adsorbent for the removal of Acid Black 234 dye from a wastewater sample and removed 98% of the dye from the commercial sample within 60 min. Based on these findings, it can be inferred that Ag-Fe NCs synthesized from Zanthoxylum armatum seed aqueous extract hold significant potential for addressing environmental pollution caused by Acid Black 234 dye, and that this environmentally friendly synthesis offers an efficient and cost-effective route for mitigating such pollution.
Association between urinary metal levels and kidney stones in metal smelter workers
Yiqi HUANG, Jiazhen ZHOU, Yaotang DENG
et al.
Background: Arsenic, cobalt, barium, and other individual metal exposures have been confirmed to be associated with the incidence of kidney stones. However, there are few studies on the association between mixed metal exposure and kidney stones, especially in occupational groups. Objective: To investigate the association between mixed metal exposure and kidney stones in an occupational population from a metal smelting plant. Methods: A questionnaire survey was conducted to collect sociodemographic characteristics, medical history, and lifestyle information of 1158 mixed metal-exposed workers in a metal smelting plant in Guangdong Province from July 2021 to January 2022. Midstream morning urine samples were collected from the workers; the concentrations of 18 metals including lithium, vanadium, chromium, manganese, cobalt, nickel, copper, zinc, arsenic, selenium, strontium, molybdenum, cadmium, cesium, barium, tungsten, titanium, and lead were measured by inductively coupled plasma mass spectrometry, and urinary mercury levels were measured by cold atomic absorption spectroscopy. Based on predetermined inclusion criteria, a total of 919 mixed metal-exposed workers were included in the study: 117 in the kidney stone group and 802 in the non-kidney stone group. With a urinary metal detection rate greater than 80% as the entry criterion, 16 eligible metals were included for further analysis. Parametric or non-parametric methods were used to compare the differences in continuous or categorical variables between the non-kidney stone group and the kidney stone group. Logistic regression models were constructed to explore the association between individual metal exposures and kidney stones. Weighted quantile sum (WQS) regression models were used to evaluate the association between mixed metal exposure and kidney stones, as well as the weight of each metal on kidney stones. Bayesian kernel machine regression (BKMR) models were then used to explore the overall effect of mixed metal exposure on kidney stones and the potential interactions between metals. Results: There were significant differences in sex, age, length of service, and body mass index (BMI) between the non-kidney stone group and the kidney stone group (P<0.05). The urinary concentrations of molybdenum and barium in the kidney stone group were higher than those in the non-kidney stone group, and the differences were statistically significant (P<0.05). The logistic regression models demonstrated that urinary cobalt, arsenic, molybdenum, and barium were positively correlated with the risk of kidney stones (Ptrend<0.05). The WQS regression models showed that mixed exposure to vanadium, cobalt, arsenic, molybdenum, and barium was positively associated with the risk of kidney stones (P<0.05); among these metals, molybdenum, arsenic, and barium carried weights of 0.391, 0.337, and 0.154, respectively. The BKMR results revealed a positive association between metal mixture exposure and the risk of kidney stones (P<0.05). When the other metals were fixed at the 25th, 50th, or 75th percentile, arsenic, molybdenum, cobalt, and barium exhibited significant positive effects on the risk of kidney stones (P<0.05), while vanadium showed a significant negative effect (P<0.05). The interaction analysis demonstrated interactions between barium and cobalt, as well as between vanadium and cobalt (P<0.05). Conclusion: In the occupational population of this smelter, mixed metal exposure could increase the risk of kidney stones, and the main contributing metals are molybdenum, arsenic, barium, and cobalt.
Snake venom cysteine-rich secretory protein from Mojave rattlesnake venom (Css-CRiSP) induces acute inflammatory responses on different experimental models
Emelyn Salazar, Abcde Cirilo, Armando Reyes
et al.
Snake venoms contain various molecules known for activating innate immunity and causing local effects associated with increased vascular permeability, such as vascular leakage and edema, common symptoms seen in snakebite envenomings. We have demonstrated that snake venom cysteine-rich secretory proteins (svCRiSPs) from North American pit vipers increase vascular permeability. This study aimed to explore the functional role of CRiSP isolated from Mojave rattlesnake (Crotalus scutulatus scutulatus) venom (Css-CRiSP) on the activation of inflammatory responses in different models. We measured the release of inflammatory mediators in cultured human dermal blood endothelial cells (HDBEC), lymphatic endothelial cells (HDLEC) and monocyte-derived macrophages (MDM) at 0.5, 1, 3, 6, and 24 h after treatment with Css-CRiSP (1 μM). We also determined the acute inflammatory response in BALB/c mice 30 min after intraperitoneal injection of the toxin (2 μg/mouse). Css-CRiSP induced the production of IL-8 and IL-6, but not TNF-α, in HDBEC and HDLEC in a time-dependent manner. In addition, Css-CRiSP significantly enhanced the production of IL-6, TNF-α, IL-8, and IL-1β in MDM. Moreover, it caused a remarkable increase of chemotactic mediators in the exudates of experimental mice. Our results reveal that Css-CRiSPs can promote a sustained release of inflammatory mediators on cell lines and an acute activation of innate immunity in a murine model. These findings contribute to the growing body of evidence supporting the involvement of svCRiSPs in the augmentation of envenomation effects, specifically, the role of svCRiSPs in inducing vascular dysfunction, initiating early inflammatory responses, and facilitating the activation of leukocytes and releasing mediators. These findings will lead to a better understanding of the pathophysiology of envenoming by Mojave rattlesnakes, allowing the development of more efficient therapeutic strategies.
Hiding Backdoors within Event Sequence Data via Poisoning Attacks
Alina Ermilova, Elizaveta Kovtun, Dmitry Berestnev
et al.
The financial industry relies on deep learning models for making important decisions. This adoption brings new danger, as deep black-box models are known to be vulnerable to adversarial attacks. In computer vision, an attacker can shape a model's output at inference time by introducing a backdoor during training, a poisoning attack. For sequences of a customer's financial transactions, inserting a backdoor is harder, as models operate over a more complex discrete space of sequences and systematic checks for insecurities occur. We provide a method to introduce concealed backdoors, creating vulnerabilities without altering the model's functionality on uncontaminated data. To achieve this, we replace a clean model with a poisoned one that is aware of the backdoor's availability and exploits this knowledge. Our attacks that are hardest to uncover involve either an additional supervised detection step for poisoned data, activated at test time, or well-hidden modifications of the model weights. The experimental study provides insights into how these effects vary across different datasets, architectures, and model components. Alternative methods and baselines, such as distillation-type regularization, are also explored but found to be less efficient. Conducted on three open transaction datasets and architectures including LSTM, CNN, and Transformer, our findings not only illuminate the vulnerabilities in contemporary models but can also drive the construction of more robust systems.
Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning
Harsh Chaudhari, Giorgio Severi, Alina Oprea
et al.
The integration of machine learning (ML) in numerous critical applications introduces a range of privacy concerns for individuals who provide their datasets for model training. One such privacy risk is Membership Inference (MI), in which an attacker seeks to determine whether a particular data sample was included in the training dataset of a model. Current state-of-the-art MI attacks capitalize on access to the model's predicted confidence scores to successfully perform membership inference, and employ data poisoning to further enhance their effectiveness. In this work, we focus on the less explored and more realistic label-only setting, where the model provides only the predicted label on a queried sample. We show that existing label-only MI attacks are ineffective at inferring membership in the low False Positive Rate (FPR) regime. To address this challenge, we propose a new attack, Chameleon, that leverages a novel adaptive data poisoning strategy and an efficient query selection method to achieve significantly more accurate membership inference than existing label-only attacks, especially at low FPRs.
How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers?
Su Wang, Rajeev Sahay, Christopher G. Brinton
There has been recent interest in leveraging federated learning (FL) for radio signal classification tasks. In FL, model parameters are periodically communicated from participating devices, which train on their own local datasets, to a central server that aggregates them into a global model. While FL has privacy/security advantages due to raw data not leaving the devices, it is still susceptible to several adversarial attacks. In this work, we reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process without observing data transmissions. In this capacity, we develop an attack framework in which compromised FL devices perturb their local datasets using adversarial evasion attacks. As a result, the training process of the global model significantly degrades on in-distribution signals (i.e., signals received over channels with identical distributions at each edge device). We compare our work to previously proposed FL attacks and reveal that as few as one adversarial device operating with a low-powered perturbation under our attack framework can mount a potent model poisoning attack against the global classifier. Moreover, we find that as more devices partake in adversarial poisoning, classification performance degrades proportionally.
Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data
Lukas Struppek, Martin B. Hentschel, Clifton Poth
et al.
Backdoor attacks pose a serious security threat for training neural networks as they surreptitiously introduce hidden functionalities into a model. Such backdoors remain silent during inference on clean inputs, evading detection due to inconspicuous behavior. However, once a specific trigger pattern appears in the input data, the backdoor activates, causing the model to execute its concealed function. Detecting such poisoned samples within vast datasets is virtually impossible through manual inspection. To address this challenge, we propose a novel approach that enables model training on potentially poisoned datasets by utilizing the power of recent diffusion models. Specifically, we create synthetic variations of all training samples, leveraging the inherent resilience of diffusion models to potential trigger patterns in the data. By combining this generative approach with knowledge distillation, we produce student models that maintain their general performance on the task while exhibiting robust resistance to backdoor triggers.
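The distillation half of the approach can be sketched as a temperature-softened KL objective (an assumed standard form of knowledge distillation; the paper pairs it with diffusion-generated variations of the training samples, which are omitted here):

```python
# Toy sketch: temperature-softened knowledge distillation loss. The student is
# trained to match the teacher's softened predictions on synthetic variations,
# so trigger-specific shortcuts in the raw training data are not reproduced.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-T softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[4.0, 1.0, 0.0]])
aligned = np.array([[4.0, 1.0, 0.0]])   # student matches the teacher
off = np.array([[0.0, 4.0, 1.0]])       # student disagrees

assert distill_loss(aligned, teacher) < 1e-9  # identical predictions: zero loss
print(distill_loss(off, teacher))             # mismatch: positive loss
```

Because the student only ever sees the teacher through softened outputs on regenerated inputs, a trigger pattern present in the original data has no direct path into the student's weights.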
Determinants for low birth weight in Asmara, Eritrea: A maternity hospital-based study
Zeccarias Andemariam, Sadasivan Karuppusamy, Ghidey Ghebreyohannes
et al.
Introduction
Weight at birth is a good indicator of the newborn's chances for survival, growth, long-term health, and psychosocial development. This maternity hospital-based study was done to determine factors affecting low birth weight (LBW) of neonates in Asmara, Eritrea.
Methods
A cross-sectional analytical study was used, and a sample of 806 mother–neonate pairs who attended during the data collection period was taken consecutively. Maternal and neonatal anthropometric measurements were taken, a standard questionnaire was utilized, and the maternal health card was reviewed. Data were entered and cleaned in Statistical Package for Social Sciences (SPSS) version 25 and exported to Stata version 14 for analysis. Simple and multivariable logistic regression, using the Backward Stepwise Likelihood Ratio (LR) method, was employed; crude and adjusted odds ratios along with 95% confidence intervals (CI) were calculated, and the level of significance was set at 0.05.
Results
Nine variables were retained in the final model: sex of the neonate, number of ANC visits, gravidity, pre-pregnancy utilization of modern family planning methods, pregnancy-related illnesses during the current pregnancy, current body weight, current body height, gestational age in weeks, and paternal employment status.
Conclusions
Except for the sex of the neonate, all of these variables can be considered modifiable. It is therefore recommended that comprehensive ANC services be strengthened and that emphasis be given to primigravida women to increase neonate birth weight.
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain
Chang Yue, Peizhuo Lv, Ruigang Liang
et al.
With the broad application of deep neural networks (DNNs), backdoor attacks have gradually attracted attention. Backdoor attacks are insidious: poisoned models perform well on benign samples and are triggered only by specific inputs, which cause the network to produce incorrect outputs. State-of-the-art backdoor attacks are implemented by data poisoning, i.e., the attacker injects poisoned samples into the dataset, and models trained with that dataset are infected with the backdoor. However, most triggers used in current studies are fixed patterns patched onto a small fraction of an image and are often clearly mislabeled, which is easily detected by humans or by defense methods such as Neural Cleanse and SentiNet. Such triggers are also difficult for DNNs to learn without mislabeling, as small patterns may be ignored. In this paper, we propose a generalized backdoor attack method based on the frequency domain, which implants a backdoor without mislabeling and without access to the training process. It is invisible to human beings and able to evade commonly used defense methods. We evaluate our approach in the no-label and clean-label cases on three datasets (CIFAR-10, STL-10, and GTSRB) in two popular scenarios (self-supervised learning and supervised learning). The results show that our approach achieves a high attack success rate (above 90%) on all tasks without significant degradation on the main tasks. We also evaluate how our approach bypasses different kinds of defenses, including detection of training data (Activation Clustering), preprocessing of inputs (Filtering), detection of inputs (SentiNet), and detection of models (Neural Cleanse). The experimental results demonstrate that our approach is robust to such defenses.
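A minimal version of a frequency-domain trigger (our own construction for illustration; the chosen coefficients and strength are arbitrary, and the paper's actual trigger design differs):

```python
# Toy sketch: embed a trigger by shifting a few mid-frequency FFT coefficients.
# The spatial-domain change is spread over the whole image and stays small,
# so the poisoned sample has no visible patch.
import numpy as np

def add_freq_trigger(img, coords=((5, 7), (7, 5)), strength=2.0):
    """Shift chosen FFT coefficients and return the real spatial image."""
    f = np.fft.fft2(img)
    bump = strength * np.abs(f).mean()
    for (u, v) in coords:
        f[u, v] += bump
        f[-u, -v] += bump   # real bump on the conjugate pair keeps the image real
    return np.real(np.fft.ifft2(f))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
poisoned = add_freq_trigger(img)
print(np.abs(poisoned - img).max())  # per-pixel change is small and diffuse
```

A model trained on such samples can learn to respond to the shared frequency pattern even though no localized spatial trigger exists for patch-based defenses to find.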
COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
Fan Wu, Linyi Li, Chejian Xu
et al.
As reinforcement learning (RL) has achieved near human-level performance in a variety of tasks, its robustness has attracted great attention. While a vast body of research has explored test-time (evasion) attacks in RL and corresponding defenses, robustness against training-time (poisoning) attacks remains largely unanswered. In this work, we focus on certifying the robustness of offline RL in the presence of poisoning attacks, where a subset of training trajectories could be arbitrarily manipulated. We propose the first certification framework, COPA, to certify the number of poisoning trajectories that can be tolerated under different certification criteria. Given the complex structure of RL, we propose two certification criteria: per-state action stability and cumulative reward bound. To further improve the certification, we propose new partition and aggregation protocols to train robust policies. We further prove that some of the proposed certification methods are theoretically tight and that some correspond to NP-complete problems. We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) the proposed robust aggregation protocols, such as temporal aggregation, can significantly improve the certifications; (2) our certifications for both per-state action stability and cumulative reward bound are efficient and tight; (3) the certifications differ across training algorithms and environments, implying their intrinsic robustness properties. All experimental results are available at https://copa-leaderboard.github.io.
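The per-state action stability criterion can be sketched with a toy partition-and-vote certificate (our own simplification: one policy is trained per disjoint partition of the trajectories, each poisoned trajectory corrupts at most the one partition containing it, and the vote margin therefore bounds the tolerated poisoning):

```python
# Toy sketch: certify a per-state action by majority vote over policies trained
# on disjoint trajectory partitions. The winner stays strictly ahead while at
# most (best - runner_up - 1) // 2 partitions are flipped to the runner-up.
from collections import Counter

def certified_action(votes):
    """Majority action plus the number of corrupted partitions it tolerates."""
    tally = Counter(votes).most_common()
    best, best_n = tally[0]
    runner_n = tally[1][1] if len(tally) > 1 else 0
    return best, (best_n - runner_n - 1) // 2

# votes of 7 policies, each trained on a disjoint partition of the trajectories
votes = ["left", "left", "left", "right", "left", "right", "left"]
action, tolerance = certified_action(votes)
print(action, tolerance)  # prints: left 1
```

Here the certificate says that even if any single partition's trajectories were arbitrarily poisoned, the aggregated policy would still select "left" in this state; the paper's framework extends this idea to cumulative reward bounds and tighter aggregation protocols.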