Hayden Bolin, Katherine Z. Isoardi, Carol Wylie et al.
Hasil untuk "Toxicology. Poisons"
Menampilkan 20 dari ~800994 hasil · dari DOAJ, arXiv, Semantic Scholar, CrossRef
Runhua Xu, Shiqi Gao, Chao Li et al.
Federated learning (FL) is inherently susceptible to privacy breaches and poisoning attacks. To tackle these challenges, researchers have separately devised secure aggregation mechanisms to protect data privacy and robust aggregation methods that withstand poisoning attacks. However, simultaneously addressing both concerns is challenging; secure aggregation facilitates poisoning attacks as most anomaly detection techniques require access to unencrypted local model updates, which are obscured by secure aggregation. Few recent efforts to simultaneously tackle both challenges offen depend on impractical assumption of non-colluding two-server setups that disrupt FL's topology, or three-party computation which introduces scalability issues, complicating deployment and application. To overcome this dilemma, this paper introduce a Dual Defense Federated learning (DDFed) framework. DDFed simultaneously boosts privacy protection and mitigates poisoning attacks, without introducing new participant roles or disrupting the existing FL topology. DDFed initially leverages cutting-edge fully homomorphic encryption (FHE) to securely aggregate model updates, without the impractical requirement for non-colluding two-server setups and ensures strong privacy protection. Additionally, we proposes a unique two-phase anomaly detection mechanism for encrypted model updates, featuring secure similarity computation and feedback-driven collaborative selection, with additional measures to prevent potential privacy breaches from Byzantine clients incorporated into the detection process. We conducted extensive experiments on various model poisoning attacks and FL scenarios, including both cross-device and cross-silo FL. Experiments on publicly available datasets demonstrate that DDFed successfully protects model privacy and effectively defends against model poisoning threats.
P. Anusha, B. Roshni Sumalini, S.K. Aamer Saleem et al.
Introduction: CD44 is a cell surface transmembrane glycoprotein is a member of cell adhesion molecules responsible for mediating communication and adhesion between adjacent cells and extracellular matrix. In recent years, CD44 has garnered a significant attention because of its utility as a stem cell marker and has surfaced as a potential therapeutic target, necessitating a greater understand of CD44 in breast cancer. Aim: The aim of this study is to determine the correlation between CD44 expression of tumour cells in breast cancer and presence of axillary lymph node metastasis. Materials and Methods: A retrospective study was conducted by the department of Pathology at Kamineni Academy of Medical Sciences and Research Centre, Hyderabad, India from August 2022 to March 2023. Female patients who underwent modified radical mastectomy for invasive ductal carcinoma were taken in this study. Tumour and axillary lymph node histologically examined and histological grading was given. Immunohistochemistry of CD44 marker and its expression by the tumour cells was done by standard immunoperoxidase method. Results: The study included 35 invasive ductal carcinoma of breast. Out of 35 cases, 19 cases showed lymph nodal metastasis. Among these 19 cases, only 13 showed CD44 expression in tumour cells with P value of 0.108 which is not statistically significant. A positive trend is noted with CD44 expression with higher tumour grade. Conclusion: The above results do not show any significant association between CD44 expression in tumour cells and lymph nodal metastasis in invasive breast carcinoma.
Mark D. Nelms, Todor Antonijevic, Caroline Ring et al.
IntroductionThe U. S. Environmental Protection Agency’s Endocrine Disruptor Screening Program (EDSP) Tier 1 assays are used to screen for potential endocrine system–disrupting chemicals. A model integrating data from 16 high-throughput screening assays to predict estrogen receptor (ER) agonism has been proposed as an alternative to some low-throughput Tier 1 assays. Later work demonstrated that as few as four assays could replicate the ER agonism predictions from the full model with 98% sensitivity and 92% specificity. The current study utilized chemical clustering to illustrate the coverage of the EDSP Universe of Chemicals (UoC) tested in the existing ER pathway models and to investigate the utility of chemical clustering to evaluate the screening approach using an existing 4-assay model as a test case. Although the full original assay battery is no longer available, the demonstrated contribution of chemical clustering is broadly applicable to assay sets, chemical inventories, and models, and the data analysis used can also be applied to future evaluation of minimal assay models for consideration in screening.MethodsChemical structures were collected for 6,947 substances via the CompTox Chemicals Dashboard from the over 10,000 UoC and grouped based on structural similarity, generating 826 chemical clusters. Of the 1,812 substances run in the original ER model, 1,730 substances had a single, clearly defined structure. The ER model chemicals with a clearly defined structure that were not present in the EDSP UoC were assigned to chemical clusters using a k-nearest neighbors approach, resulting in 557 EDSP UoC clusters containing at least one ER model chemical.Results and DiscussionPerformance of an existing 4-assay model in comparison with the existing full ER agonist model was analyzed as related to chemical clustering. This was a case study, and a similar analysis can be performed with any subset model in which the same chemicals (or subset of chemicals) are screened. Of the 365 clusters containing >1 ER model chemical, 321 did not have any chemicals predicted to be agonists by the full ER agonist model. The best 4-assay subset ER agonist model disagreed with the full ER agonist model by predicting agonist activity for 122 chemicals from 91 of the 321 clusters. There were 44 clusters with at least two chemicals and at least one agonist based upon the full ER agonist model, which allowed accuracy predictions on a per-cluster basis. The accuracy of the best 4-assay subset ER agonist model ranged from 50% to 100% across these 44 clusters, with 32 clusters having accuracy ≥90%. Overall, the best 4-assay subset ER agonist model resulted in 122 false-positive and only 2 false-negative predictions compared with the full ER agonist model. Most false positives (89) were active in only two of the four assays, whereas all but 11 true positive chemicals were active in at least three assays. False positive chemicals also tended to have lower area under the curve (AUC) values, with 110 out of 122 false positives having an AUC value below 0.214, which is lower than 75% of the positives as predicted by the full ER agonist model. Many false positives demonstrated borderline activity. The median AUC value for the 122 false positives from the best 4-assay subset ER agonist model was 0.138, whereas the threshold for an active prediction is 0.1.ConclusionOur results show that the existing 4-assay model performs well across a range of structurally diverse chemicals. Although this is a descriptive analysis of previous results, several concepts can be applied to any screening model used in the future. First, the clustering of the chemicals provides a means of ensuring that future screening evaluations consider the broad chemical space represented by the EDSP UoC. The clusters can also assist in prioritizing future chemicals for screening in specific clusters based on the activity of known chemicals in those clusters. The clustering approach can be useful in providing a framework to evaluate which portions of the EDSP UoC chemical space are reliably covered by in silico and in vitro approaches and where predictions from either method alone or both methods combined are most reliable. The lessons learned from this case study can be easily applied to future evaluations of model applicability and screening to evaluate future datasets.
John L. Vahle, Joe Dybowski, Michael Graziano et al.
Industry representatives on the ICH S1B(R1) Expert Working Group (EWG) worked closely with colleagues from the Drug Regulatory Authorities to develop an addendum to the ICH S1B guideline on carcinogenicity studies that allows for a weight-of-evidence (WoE) carcinogenicity assessment in some cases, rather than conducting a 2-year rat carcinogenicity study. A subgroup of the EWG composed of regulators have published in this issue a detailed analysis of the Prospective Evaluation Study (PES) conducted under the auspices of the ICH S1B(R1) EWG. Based on the experience gained through the Prospective Evaluation Study (PES) process, industry members of the EWG have prepared the following commentary to aid sponsors in assessing the standard WoE factors, considering how novel investigative approaches may be used to support a WoE assessment, and preparing appropriate documentation of the WoE assessment for presentation to regulatory authorities. The commentary also reviews some of the implementation challenges sponsors must consider in developing a carcinogenicity assessment strategy. Finally, case examples drawn from previously marketed products are provided as a supplement to this commentary to provide additional examples of how WoE criteria may be applied. The information and opinions expressed in this commentary are aimed at increasing the quality of WoE assessments to ensure the successful implementation of this approach.
Rafael Hernández-Tenorio, Octavio Gaspar-Ramírez, Cinthia G. Aba-Guevara et al.
Pharmaceutical active compounds (PACs) in the concentration range of hundreds of ng/L to μg/L have been identified in urban surface water, groundwater, and agricultural land where they cause various health risks. These pollutants are classified as emerging and cannot be efficiently removed by conventional wastewater treatment processes. The use of nano-enabled photocatalysts in the removal of pharmaceuticals in aquatic systems has recently received research attention owing to their enhanced properties and effectiveness. In the current study, toxicological and environmental risks of enalapril (ENL) and their possible transformation products (TPs) generated under phototransformation processes (e.g., photolysis and photocatalysis reactions) were assessed. In photolysis reaction, removal of ENL was incomplete (< 16 %), while mineralization degree was negligible. In contrast, total removal of ENL was achieved through the photocatalytic process and its maximum mineralization ratio was 66 % by using natural radiation. Proposed transformation pathways during the phototransformation of ENL include hydroxylation and fragmentation reactions generating transformation products (TPs) such as hydroxylated TPs (m/z 393) and enalaprilat (m/z 349). Potential environmental risks for aquatic organisms were not observed in the concentrations of both ENL and enalaprilat contained in surface water. However, the acute and chronic toxicities prediction of TPs such as m/z 409, 363, and 345 showed toxic effects on aquatic organisms. Thus, more studies regarding TPs monitoring for both ENL and PhACs with the highest occurrence worldwide are necessary for the creation of a database of the concentrations contained in surface water and groundwater for the assessment of the potential environmental risk for aquatic organisms.
Maya Wai, Ari B. Filip, Ashlyn Abbott et al.
Xuanli He, Qiongkai Xu, Jun Wang et al.
Modern NLP models are often trained on public datasets drawn from diverse sources, rendering them vulnerable to data poisoning attacks. These attacks can manipulate the model's behavior in ways engineered by the attacker. One such tactic involves the implantation of backdoors, achieved by poisoning specific training instances with a textual trigger and a target class label. Several strategies have been proposed to mitigate the risks associated with backdoor attacks by identifying and removing suspected poisoned examples. However, we observe that these strategies fail to offer effective protection against several advanced backdoor attacks. To remedy this deficiency, we propose a novel defensive mechanism that first exploits training dynamics to identify poisoned samples with high precision, followed by a label propagation step to improve recall and thus remove the majority of poisoned instances. Compared with recent advanced defense methods, our method considerably reduces the success rates of several backdoor attacks while maintaining high classification accuracy on clean test sets.
Wei Sun, Bo Gao, Ke Xiong et al.
In federated learning (FL), although the original intention of available but not visible data is to allay data privacy concerns, it potentially brings new security threats, particularly poisoning attacks that target such not visible local data. Intuitively, such data poisoning attacks have great potential in stealthily degrading global FL outcomes, and are expected to be even stealthier if being enhanced by generative models like generative adversarial networks (GANs). However, existing defense methods have not been thoroughly challenged in this regard and generally fail to be aware of a local generation of seemingly legitimate poisoned data. With a growing concern on potentially stealthier attacks, in this paper, a cost-effective defense mechanism named Model Consistency-Based Defense (MCD) is proposed, which offers a comprehensive examination of available local models across multiple feature dimensions, providing an indirect yet effective means of identifying hidden data poisoning attackers. To push the limit of MCD against stealthier attacks, we propose a new GAN-based data poisoning attack model named VagueGAN and an unsupervised variant of it, which can be flexibly deployed to generate seemingly legitimate but noisy poisoned data. The consistency of GAN outputs revealed by VagueGAN helps strengthen MCD to work against stealthier GAN-based attacks as well as other mainstream ones. Extensive experiments on multiple open datasets (MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and Mini-Imagenet) indicate that our attack method better balances the trade-off between attack effectiveness and stealthiness with low complexity. More importantly, our defense mechanism is shown to be more competent in identifying a variety of poisoned data, particularly stealthier GAN-poisoned ones.
Yiwei Lu, Matthew Y. R. Yang, Gautam Kamath et al.
Machine learning models have achieved great success in supervised learning tasks for end-to-end training, which requires a large amount of labeled data that is not always feasible. Recently, many practitioners have shifted to self-supervised learning methods that utilize cheap unlabeled data to learn a general feature extractor via pre-training, which can be further applied to personalized downstream tasks by simply training an additional linear layer with limited labeled data. However, such a process may also raise concerns regarding data poisoning attacks. For instance, indiscriminate data poisoning attacks, which aim to decrease model utility by injecting a small number of poisoned data into the training set, pose a security risk to machine learning models, but have only been studied for end-to-end supervised learning. In this paper, we extend the exploration of the threat of indiscriminate attacks on downstream tasks that apply pre-trained feature extractors. Specifically, we propose two types of attacks: (1) the input space attacks, where we modify existing attacks to directly craft poisoned data in the input space. However, due to the difficulty of optimization under constraints, we further propose (2) the feature targeted attacks, where we mitigate the challenge with three stages, firstly acquiring target parameters for the linear head; secondly finding poisoned features by treating the learned feature representations as a dataset; and thirdly inverting the poisoned features back to the input space. Our experiments examine such attacks in popular downstream tasks of fine-tuning on the same dataset and transfer learning that considers domain adaptation. Empirical results reveal that transfer learning is more vulnerable to our attacks. Additionally, input space attacks are a strong threat if no countermeasures are posed, but are otherwise weaker than feature targeted attacks.
Shuli Jiang, Swanand Ravindra Kadhe, Yi Zhou et al.
The increasing use of large language models (LLMs) trained by third parties raises significant security concerns. In particular, malicious actors can introduce backdoors through poisoning attacks to generate undesirable outputs. While such attacks have been extensively studied in image domains and classification tasks, they remain underexplored for natural language generation (NLG) tasks. To address this gap, we conduct an investigation of various poisoning techniques targeting the LLM's fine-tuning phase via prefix-tuning, a Parameter Efficient Fine-Tuning (PEFT) method. We assess their effectiveness across two generative tasks: text summarization and text completion; and we also introduce new metrics to quantify the success and stealthiness of such NLG poisoning attacks. Through our experiments, we find that the prefix-tuning hyperparameters and trigger designs are the most crucial factors to influence attack success and stealthiness. Moreover, we demonstrate that existing popular defenses are ineffective against our poisoning attacks. Our study presents the first systematic approach to understanding poisoning attacks targeting NLG tasks during fine-tuning via PEFT across a wide range of triggers and attack settings. We hope our findings will aid the AI security community in developing effective defenses against such threats.
Yi Liu, Cong Wang, Xingliang Yuan
Federated Learning (FL) is susceptible to poisoning attacks, wherein compromised clients manipulate the global model by modifying local datasets or sending manipulated model updates. Experienced defenders can readily detect and mitigate the poisoning effects of malicious behaviors using Byzantine-robust aggregation rules. However, the exploration of poisoning attacks in scenarios where such behaviors are absent remains largely unexplored for Byzantine-robust FL. This paper addresses the challenging problem of poisoning Byzantine-robust FL by introducing catastrophic forgetting. To fill this gap, we first formally define generalization error and establish its connection to catastrophic forgetting, paving the way for the development of a clean-label data poisoning attack named BadSampler. This attack leverages only clean-label data (i.e., without poisoned data) to poison Byzantine-robust FL and requires the adversary to selectively sample training data with high loss to feed model training and maximize the model's generalization error. We formulate the attack as an optimization problem and present two elegant adversarial sampling strategies, Top-$κ$ sampling, and meta-sampling, to approximately solve it. Additionally, our formal error upper bound and time complexity analysis demonstrate that our design can preserve attack utility with high efficiency. Extensive evaluations on two real-world datasets illustrate the effectiveness and performance of our proposed attacks.
Andre Stefanus Panggabean, Ismail Setyopranoto, Arjanto Ramadian Wicaksono et al.
Uncontrolled and unsafe use of pesticides can lead to acute and chronic toxicity in farmers, with neuropathy being one of the most common symptoms of chronic toxicity. However, the effects of this toxicity on farmers' electroneuromyography (ENMG) are still unclear. To address this, we conducted a cross-sectional study from July to October 2017 in Ngablak District, Magelang, Central Java, Indonesia. Eligible farmers who were exposed to pesticides underwent electrophysiology examinations, as well as additional tests such as physical examination and laboratory testing. We collected general information such as age and work history by interview. In total, 64 farmers were included in this study. Out of these, 44 farmers were found to have polyneuropathy, with 41 of them having motor polyneuropathy and 19 of them having sensory polyneuropathy. Our findings showed that low blood cholinesterase was associated with distal latency prolongation (p-value: 0.014). The group exposed to organophosphate/carbamate pesticides was also significantly associated with prolonged distal latency (p-value: 0.012). However, motor polyneuropathy was significantly associated with chronic exposure to organophosphate/carbamate pesticides (p-value: 0.009) and not with low blood cholinesterase levels (p-value: 0.454). The study concludes that chronic exposure to organophosphate or carbamate pesticides could result in polyneuropathy disease, particularly in the motor system.
Thalia De Castelbajac, Kiara Aiello, Celia Garcia Arenas et al.
New approach methodologies (NAMs) have the potential to become a major component of regulatory risk assessment, however, their actual implementation is challenging. The European Partnership for the Assessment of Risks from Chemicals (PARC) was designed to address many of the challenges that exist for the development and implementation of NAMs in modern chemical risk assessment. PARC’s proximity to national and European regulatory agencies is envisioned to ensure that all the research and innovation projects that are initiated within PARC agree with actual regulatory needs. One of the main aims of PARC is to develop innovative methodologies that will directly aid chemical hazard identification, risk assessment, and regulation/policy. This will facilitate the development of NAMs for use in risk assessment, as well as the transition from an endpoint-based animal testing strategy to a more mechanistic-based NAMs testing strategy, as foreseen by the Tox21 and the EU Chemical’s Strategy for Sustainability. This work falls under work package 5 (WP5) of the PARC initiative. There are three different tasks within WP5, and this paper is a general overview of the five main projects in the Task 5.2 ‘Innovative Tools and methods for Toxicity Testing,’ with a focus on Human Health. This task will bridge essential regulatory data gaps pertaining to the assessment of toxicological prioritized endpoints such as non-genotoxic carcinogenicity, immunotoxicity, endocrine disruption (mainly thyroid), metabolic disruption, and (developmental and adult) neurotoxicity, thereby leveraging OECD’s and PARC’s AOP frameworks. This is intended to provide regulatory risk assessors and industry stakeholders with relevant, affordable and reliable assessment tools that will ultimately contribute to the application of next-generation risk assessment (NGRA) in Europe and worldwide.
Adejoke Elizabeth Memudu, Gambo A. Dongo
Anabolic Androgenic steroids (AAS) are abused and reports have been made on their deleterious effects on various organs. It is imperative to report the mechanism of inducing oxidative tissue damage even in the presence of an intracellular antioxidant system by the interaction between lipid peroxidation and the antioxidant system in the kidney. Twenty (20) adult male Wistar rats used were grouped into: A- Control, BOlive oil vehicle, C- 120 mg/kg of AAS orally for three weeks, and D- 7 days withdrawal group following 120 mg/kg/ 21days of AAS intake. Serum was assayed for lipid peroxidation marker Malondialdehyde (MDA) and antioxidant enzyme –superoxide Dismutase (SOD). Sectioned of kidneys were stained to see the renal tissue, mucin granules, and basement membrane. AAS-induced oxidative tissue damage, in the presence of an endogenous antioxidant, is characterized by increased lipid peroxidation and decreased SOD level which resulted in the loss of renal tissue cells membrane integrity which is a characteristic of the pathophysiology of nephron toxicity induced by a toxic compound. However, this was progressively reversed by a period of discontinuation of AAS drug exposure.
Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam
With the increase in machine learning (ML) applications in different domains, incentives for deceiving these models have reached more than ever. As data is the core backbone of ML algorithms, attackers shifted their interest toward polluting the training data. Data credibility is at even higher risk with the rise of state-of-art research topics like open design principles, federated learning, and crowd-sourcing. Since the machine learning model depends on different stakeholders for obtaining data, there are no reliable automated mechanisms to verify the veracity of data from each source. Malware detection is arduous due to its malicious nature with the addition of metamorphic and polymorphic ability in the evolving samples. ML has proven to solve the zero-day malware detection problem, which is unresolved by traditional signature-based approaches. The poisoning of malware training data can allow the malware files to go undetected by the ML-based malware detectors, helping the attackers to fulfill their malicious goals. A feasibility analysis of the data poisoning threat in the malware detection domain is still lacking. Our work will focus on two major sections: training ML-based malware detectors and poisoning the training data using the label-poisoning approach. We will analyze the robustness of different machine learning models against data poisoning with varying volumes of poisoning data.
Hossein Fereidooni, Alessandro Pegoraro, Phillip Rieger et al.
Federated learning (FL) is a collaborative learning paradigm allowing multiple clients to jointly train a model without sharing their training data. However, FL is susceptible to poisoning attacks, in which the adversary injects manipulated model updates into the federated model aggregation process to corrupt or destroy predictions (untargeted poisoning) or implant hidden functionalities (targeted poisoning or backdoors). Existing defenses against poisoning attacks in FL have several limitations, such as relying on specific assumptions about attack types and strategies or data distributions or not sufficiently robust against advanced injection techniques and strategies and simultaneously maintaining the utility of the aggregated model. To address the deficiencies of existing defenses, we take a generic and completely different approach to detect poisoning (targeted and untargeted) attacks. We present FreqFed, a novel aggregation mechanism that transforms the model updates (i.e., weights) into the frequency domain, where we can identify the core frequency components that inherit sufficient information about weights. This allows us to effectively filter out malicious updates during local training on the clients, regardless of attack types, strategies, and clients' data distributions. We extensively evaluate the efficiency and effectiveness of FreqFed in different application domains, including image classification, word prediction, IoT intrusion detection, and speech recognition. We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
Marcella Nunes de Melo-Braga, Raniele da Silva Moreira, João Henrique Diniz Brandão Gervásio et al.
Abstract Accidents with venomous animals are a public health issue worldwide. Among the species involved in these accidents are scorpions, spiders, bees, wasps, and other members of the phylum Arthropoda. The knowledge of the function of proteins present in these venoms is important to guide diagnosis, therapeutics, besides being a source of a large variety of biotechnological active molecules. Although our understanding about the characteristics and function of arthropod venoms has been evolving in the last decades, a major aspect crucial for the function of these proteins remains poorly studied, the posttranslational modifications (PTMs). Comprehension of such modifications can contribute to better understanding the basis of envenomation, leading to improvements in the specificities of potential therapeutic toxins. Therefore, in this review, we bring to light protein/toxin PTMs in arthropod venoms by accessing the information present in the UniProtKB/Swiss-Prot database, including experimental and putative inferences. Then, we concentrate our discussion on the current knowledge on protein phosphorylation and glycosylation, highlighting the potential functionality of these modifications in arthropod venom. We also briefly describe general approaches to study “PTM-functional-venomics”, herein referred to the integration of PTM-venomics with a functional investigation of PTM impact on venom biology. Furthermore, we discuss the bottlenecks in toxinology studies covering PTM investigation. In conclusion, through the mining of PTMs in arthropod venoms, we observed a large gap in this field that limits our understanding on the biology of these venoms, affecting the diagnosis and therapeutics development. Hence, we encourage community efforts to draw attention to a better understanding of PTM in arthropod venom toxins.
Dazhong Rong, Shuai Ye, Ruoyan Zhao et al.
Federated Recommendation (FR) has received considerable popularity and attention in the past few years. In FR, for each user, its feature vector and interaction data are kept locally on its own client thus are private to others. Without the access to above information, most existing poisoning attacks against recommender systems or federated learning lose validity. Benifiting from this characteristic, FR is commonly considered fairly secured. However, we argue that there is still possible and necessary security improvement could be made in FR. To prove our opinion, in this paper we present FedRecAttack, a model poisoning attack to FR aiming to raise the exposure ratio of target items. In most recommendation scenarios, apart from private user-item interactions (e.g., clicks, watches and purchases), some interactions are public (e.g., likes, follows and comments). Motivated by this point, in FedRecAttack we make use of the public interactions to approximate users' feature vectors, thereby attacker can generate poisoned gradients accordingly and control malicious users to upload the poisoned gradients in a well-designed way. To evaluate the effectiveness and side effects of FedRecAttack, we conduct extensive experiments on three real-world datasets of different sizes from two completely different scenarios. Experimental results demonstrate that our proposed FedRecAttack achieves the state-of-the-art effectiveness while its side effects are negligible. Moreover, even with small proportion (3%) of malicious users and small proportion (1%) of public interactions, FedRecAttack remains highly effective, which reveals that FR is more vulnerable to attack than people commonly considered.
Naoya Tezuka, Hideya Ochiai, Yuwei Sun et al.
Wireless ad hoc federated learning (WAFL) is a fully decentralized collaborative machine learning framework organized by opportunistically encountered mobile nodes. Compared to conventional federated learning, WAFL performs model training by weakly synchronizing the model parameters with others, and this shows great resilience to a poisoned model injected by an attacker. In this paper, we provide our theoretical analysis of the WAFL's resilience against model poisoning attacks, by formulating the force balance between the poisoned model and the legitimate model. According to our experiments, we confirmed that the nodes directly encountered the attacker has been somehow compromised to the poisoned model but other nodes have shown great resilience. More importantly, after the attacker has left the network, all the nodes have finally found stronger model parameters combined with the poisoned model. Most of the attack-experienced cases achieved higher accuracy than the no-attack-experienced cases.
Halaman 15 dari 40050