Silent Sabotage During Fine-Tuning: Few-Shot Rationale Poisoning of Compact Medical LLMs
Jingyuan Xie, Wenjie Wang, Ji Wu
et al.
Supervised fine-tuning (SFT) is essential for the development of medical large language models (LLMs), yet prior poisoning studies have mainly focused on the detectable backdoor attacks. We propose a novel poisoning attack targeting the reasoning process of medical LLMs during SFT. Unlike backdoor attacks, our method injects poisoned rationales into few-shot training data, leading to stealthy degradation of model performance on targeted medical topics. Results showed that knowledge overwriting was ineffective, while rationale poisoning caused significant decline on the accuracy of the target subject, as long as no correct samples of the same subject appear in the dataset. A minimum number and ratio of poisoned samples was needed to carry out an effective and stealthy attack, which was more efficient and accurate than catastrophic forgetting. We demonstrate though this study the risk of SFT-stage poisoning, hoping to spur more studies of defense in the sensitive medical domain.
Thought-Transfer: Indirect Targeted Poisoning Attacks on Chain-of-Thought Reasoning Models
Harsh Chaudhari, Ethan Rathbun, Hanna Foerster
et al.
Chain-of-Thought (CoT) reasoning has emerged as a powerful technique for enhancing large language models' capabilities by generating intermediate reasoning steps for complex tasks. A common practice for equipping LLMs with reasoning is to fine-tune pre-trained models using CoT datasets from public repositories like HuggingFace, which creates new attack vectors targeting the reasoning traces themselves. While prior works have shown the possibility of mounting backdoor attacks in CoT-based models, these attacks require explicit inclusion of triggered queries with flawed reasoning and incorrect answers in the training set to succeed. Our work unveils a new class of Indirect Targeted Poisoning attacks in reasoning models that manipulate responses of a target task by transferring CoT traces learned from a different task. Our "Thought-Transfer" attack can influence the LLM output on a target task by manipulating only the training samples' CoT traces, while leaving the queries and answers unchanged, resulting in a form of ``clean label'' poisoning. Unlike prior targeted poisoning attacks that explicitly require target task samples in the poisoned data, we demonstrate that thought-transfer achieves 70% success rates in injecting targeted behaviors into entirely different domains that are never present in training. Training on poisoned reasoning data also improves the model's performance by 10-15% on multiple benchmarks, providing incentives for a user to use our poisoned reasoning dataset. Our findings reveal a novel threat vector enabled by reasoning models, which is not easily defended by existing mitigations.
What Really is a Member? Discrediting Membership Inference via Poisoning
Neal Mangaokar, Ashish Hooda, Zhuohang Li
et al.
Membership inference tests aim to determine whether a particular data point was included in a language model's training set. However, recent works have shown that such tests often fail under the strict definition of membership based on exact matching, and have suggested relaxing this definition to include semantic neighbors as members as well. In this work, we show that membership inference tests are still unreliable under this relaxation - it is possible to poison the training dataset in a way that causes the test to produce incorrect predictions for a target point. We theoretically reveal a trade-off between a test's accuracy and its robustness to poisoning. We also present a concrete instantiation of this poisoning attack and empirically validate its effectiveness. Our results show that it can degrade the performance of existing tests to well below random.
Virus Infection Attack on LLMs: Your Poisoning Can Spread "VIA" Synthetic Data
Zi Liang, Qingqing Ye, Xuan Liu
et al.
Synthetic data refers to artificial samples generated by models. While it has been validated to significantly enhance the performance of large language models (LLMs) during training and has been widely adopted in LLM development, potential security risks it may introduce remain uninvestigated. This paper systematically evaluates the resilience of synthetic-data-integrated training paradigm for LLMs against mainstream poisoning and backdoor attacks. We reveal that such a paradigm exhibits strong resistance to existing attacks, primarily thanks to the different distribution patterns between poisoning data and queries used to generate synthetic samples. To enhance the effectiveness of these attacks and further investigate the security risks introduced by synthetic data, we introduce a novel and universal attack framework, namely, Virus Infection Attack (VIA), which enables the propagation of current attacks through synthetic data even under purely clean queries. Inspired by the principles of virus design in cybersecurity, VIA conceals the poisoning payload within a protective "shell" and strategically searches for optimal hijacking points in benign samples to maximize the likelihood of generating malicious content. Extensive experiments on both data poisoning and backdoor attacks show that VIA significantly increases the presence of poisoning content in synthetic data and correspondingly raises the attack success rate (ASR) on downstream models to levels comparable to those observed in the poisoned upstream models.
Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach
Huazi Pan, Yanjun Zhang, Leo Yu Zhang
et al.
Manipulation of local training data and local updates, i.e., the poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Most existing poisoning attacks aim to manipulate local data/models in a way that causes denial-of-service (DoS) issues. In this paper, we introduce a novel attack method, named Federated Learning Sliding Attack (FedSA) scheme, aiming at precisely introducing the extent of poisoning in a subtle controlled manner. It operates with a predefined objective, such as reducing global model's prediction accuracy by 10%. FedSA integrates robust nonlinear control-Sliding Mode Control (SMC) theory with model poisoning attacks. It can manipulate the updates from malicious clients to drive the global model towards a compromised state, achieving this at a controlled and inconspicuous rate. Additionally, leveraging the robust control properties of FedSA allows precise control over the convergence bounds, enabling the attacker to set the global accuracy of the poisoned model to any desired level. Experimental results demonstrate that FedSA can accurately achieve a predefined global accuracy with fewer malicious clients while maintaining a high level of stealth and adjustable learning rates.
Policy Teaching via Data Poisoning in Learning from Human Preferences
Andi Nika, Jonathan Nöther, Debmalya Mandal
et al.
We study data poisoning attacks in learning from human preferences. More specifically, we consider the problem of teaching/enforcing a target policy $π^\dagger$ by synthesizing preference data. We seek to understand the susceptibility of different preference-based learning paradigms to poisoned preference data by analyzing the number of samples required by the attacker to enforce $π^\dagger$. We first propose a general data poisoning formulation in learning from human preferences and then study it for two popular paradigms, namely: (a) reinforcement learning from human feedback (RLHF) that operates by learning a reward model using preferences; (b) direct preference optimization (DPO) that directly optimizes policy using preferences. We conduct a theoretical analysis of the effectiveness of data poisoning in a setting where the attacker is allowed to augment a pre-existing dataset and also study its special case where the attacker can synthesize the entire preference dataset from scratch. As our main results, we provide lower/upper bounds on the number of samples required to enforce $π^\dagger$. Finally, we discuss the implications of our results in terms of the susceptibility of these learning paradigms under such data poisoning attacks.
POPS: From History to Mitigation of DNS Cache Poisoning Attacks
Yehuda Afek, Harel Berger, Anat Bremler-Barr
We present a novel yet simple and comprehensive DNS cache POisoning Prevention System (POPS), designed to integrate as a module in Intrusion Prevention Systems (IPS). POPS addresses statistical DNS poisoning attacks, including those documented from 2002 to the present, and offers robust protection against similar future threats. It consists of two main components: a detection module that employs three simple rules, and a mitigation module that leverages the TC flag in the DNS header to enhance security. Once activated, the mitigation module has zero false positives or negatives, correcting any such errors on the side of the detection module. We first analyze POPS against historical DNS services and attacks, showing that it would have mitigated all network-based statistical poisoning attacks, yielding a success rate of only 0.0076% for the adversary. We then simulate POPS on traffic benchmarks (PCAPs) incorporating current potential network-based statistical poisoning attacks, and benign PCAPs; the simulated attacks still succeed with a probability of 0.0076%. This occurs because five malicious packets go through before POPS detects the attack and activates the mitigation module. In addition, POPS completes its task using only 20%-50% of the time required by other tools (e.g., Suricata or Snort), and after examining just 5%-10% as many packets. Furthermore, it successfully identifies DNS cache poisoning attacks-such as fragmentation attacks-that both Suricata and Snort fail to detect, underscoring its superiority in providing comprehensive DNS protection.
Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation
Yuchuan Zhao, Tong Chen, Junliang Yu
et al.
Sequential recommender systems (SRSs) excel in capturing users' dynamic interests, thus playing a key role in various industrial applications. The popularity of SRSs has also driven emerging research on their security aspects, where data poisoning attack for targeted item promotion is a typical example. Existing attack mechanisms primarily focus on increasing the ranks of target items in the recommendation list by injecting carefully crafted interactions (i.e., poisoning sequences), which comes at the cost of demoting users' real preferences. Consequently, noticeable recommendation accuracy drops are observed, restricting the stealthiness of the attack. Additionally, the generated poisoning sequences are prone to substantial repetition of target items, which is a result of the unitary objective of boosting their overall exposure and lack of effective diversity regularizations. Such homogeneity not only compromises the authenticity of these sequences, but also limits the attack effectiveness, as it ignores the opportunity to establish sequential dependencies between the target and many more items in the SRS. To address the issues outlined, we propose a Diversity-aware Dual-promotion Sequential Poisoning attack method named DDSP for SRSs. Specifically, by theoretically revealing the conflict between recommendation and existing attack objectives, we design a revamped attack objective that promotes the target item while maintaining the relevance of preferred items in a user's ranking list. We further develop a diversity-aware, auto-regressive poisoning sequence generator, where a re-ranking method is in place to sequentially pick the optimal items by integrating diversity constraints.
MT4DP: Data Poisoning Attack Detection for DL-based Code Search Models via Metamorphic Testing
Gong Chen, Wenjie Liu, Xiaoyuan Xie
et al.
Recently, several studies have indicated that data poisoning attacks pose a severe security threat to deep learning-based (DL-based) code search models. Attackers inject carefully crafted malicious patterns into the training data, misleading the code search model to learn these patterns during training. During the usage of the poisoned code search model for inference, once the malicious pattern is triggered, the model tends to rank the vulnerability code higher. However, existing detection methods for data poisoning attacks on DL-based code search models remain insufficiently effective. To address this critical security issue, we propose MT4DP, a Data Poisoning Attack Detection Framework for DL-based Code Search Models via Metamorphic Testing. MT4DP introduces a novel Semantically Equivalent Metamorphic Relation (SE-MR) designed to detect data poisoning attacks on DL-based code search models. Specifically, MT4DP first identifies the high-frequency words from search queries as potential poisoning targets and takes their corresponding queries as the source queries. For each source query, MT4DP generates two semantically equivalent follow-up queries and retrieves its source ranking list. Then, each source ranking list is re-ranked based on the semantic similarities between its code snippets and the follow-up queries. Finally, variances between the source and re-ranked lists are calculated to reveal violations of the SE-MR and warn the data poisoning attack. Experimental results demonstrate that MT4DP significantly enhances the detection of data poisoning attacks on DL-based code search models, outperforming the best baseline by 191% on average F1 score and 265% on average precision. Our work aims to promote further research into effective techniques for mitigating data poisoning threats on DL-based code search models.
Privacy-Preserving Federated Learning Scheme with Mitigating Model Poisoning Attacks: Vulnerabilities and Countermeasures
Jiahui Wu, Fucai Luo, Tiecheng Sun
et al.
The privacy-preserving federated learning schemes based on the setting of two honest-but-curious and non-colluding servers offer promising solutions in terms of security and efficiency. However, our investigation reveals that these schemes still suffer from privacy leakage when considering model poisoning attacks from malicious users. Specifically, we demonstrate that the privacy-preserving computation process for defending against model poisoning attacks inadvertently leaks privacy to one of the honest-but-curious servers, enabling it to access users' gradients in plaintext. To address both privacy leakage and model poisoning attacks, we propose an enhanced privacy-preserving and Byzantine-robust federated learning (PBFL) scheme, comprising three components: (1) a two-trapdoor fully homomorphic encryption (FHE) scheme to bolster users' privacy protection; (2) a novel secure normalization judgment method to preemptively thwart gradient poisoning; and (3) an innovative secure cosine similarity measurement method for detecting model poisoning attacks without compromising data privacy. Our scheme guarantees privacy preservation and resilience against model poisoning attacks, even in scenarios with heterogeneous, non-IID (Independently and Identically Distributed) datasets. Theoretical analyses substantiate the security and efficiency of our scheme, and extensive experiments corroborate the efficacy of our private attacks. Furthermore, the experimental results demonstrate that our scheme accelerates training speed while reducing communication overhead compared to the state-of-the-art PBFL schemes.
Naringin ameliorates high-fat diet-induced hepatotoxicity and dyslipidemia in experimental rat model via modulation of anti-oxidant enzymes, AMPK and SERBP-1c signaling pathways
Sweata Sarkar, Sanjib Ghosh, Maharaj Biswas
High-fat diet causes elevation of steatosis, dyslipidemia and oxidative stress which eventually leads to hepatic injury in the form of non-alcoholic fatty liver disease (NAFLD). Naringin, a natural flavonoid, having tremendous potentiality including antioxidant, anti-inflammatory, hypolipidemic role. Based on this proposition, we investigated the role of naringin in hepatotoxicity and its possible underlying mechanism caused by high-fat diet for prolonged time. Fifteen Wistar rats were divided into three groups: Group A (CON) received normal diet; Group B (HFD) was administered with high-fat diet for 16 weeks; and Group C (THN) was treated with naringin (100 mg/kg B.W.) for last 6 weeks after induction of obesity. After autopsy, various parameters were studied like gravimetry, serum biochemistry, ROS activity, anti-oxidant enzymes, genes expression (AMPK and SREBP-1C), histochemistry, histopathology and ultrastructure of hepatic tissue. In HFD group, Masson’s trichome stain intensity increased 6.8-folds, indicating the onset of liver fibrosis; ROS generation and lipid peroxidation (TBARS) were significantly (p < 0.01) increased, whereas SOD and CAT were decreased by 36.7 % and 49.7 %, respectively. Furthermore, these parameters were remained normal in THN group. Besides, HFD group displayed extreme elevation in hepatic SREBP-1C expression (147 %) and downregulation of AMPK gene (77 %) compared to control. The ultrastructural study revealed most important and new insight of this study where HFD induced extreme reticule stress in hepatic tissue which was significantly improved by the treatment of naringin. These findings demonstrate that the naringin may be used as a potential therapeutic agent to combat obesity related hyperlipidemia and NAFLD.
Differential effect of targeting cisplatin-induced nitrative stress using MnTBAP in auditory and cancer cells
Shomaila Mehmood, Pankaj Bhatia, Nicole Doyon-Reale
et al.
Ototoxicity is a major dose-limiting side effect of cisplatin, a highly effective anti-cancer drug used to treat many solid tumors. Oxidative stress plays a central role in mediating cisplatin-induced ototoxicity. However, broad-spectrum antioxidants that prevent ototoxicity compromise the anti-cancer activity of cisplatin. Therefore, there is a need to identify novel interventional targets/compounds for otoprotection. Recent reports indicated that cisplatin-induced nitration of cochlear proteins is a critical factor in causing ototoxicity, and inhibition of cochlear nitrative stress mitigated cisplatin-induced ototoxicity. The use of peroxynitrite decomposition catalysts that selectively target nitrative stress appears to be an attractive strategy for mitigating the ototoxic effects of cisplatin because they do not scavenge free radicals. We hypothesized that cotreatment with selective inhibitors of nitrative stress prevents cisplatin-induced ototoxicity without compromising the anti-cancer effects. Here, we test this hypothesis by investigating the effect of MnTBAP cotreatment on cell viability, nitrative stress, DNA damage, and cell migration in cisplatin-treated organ of Corti as well as cancer cells. Our results indicate that cisplatin treatment decreases cell viability in both auditory and cancer cells, while cotreatment with MnTBAP mitigates cisplatin-induced cytotoxicity in the auditory cells but not in the cancer cells. Collectively, the findings of this study suggest that selective targeting of cisplatin-induced nitrative stress is a promising strategy for mitigating the ototoxic effects of cisplatin because it does not compromise the anti-cancer effects.
Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era
Satwik Kundu, Swaroop Ghosh
With the growing interest in Quantum Machine Learning (QML) and the increasing availability of quantum computers through cloud providers, addressing the potential security risks associated with QML has become an urgent priority. One key concern in the QML domain is the threat of data poisoning attacks in the current quantum cloud setting. Adversarial access to training data could severely compromise the integrity and availability of QML models. Classical data poisoning techniques require significant knowledge and training to generate poisoned data, and lack noise resilience, making them ineffective for QML models in the Noisy Intermediate Scale Quantum (NISQ) era. In this work, we first propose a simple yet effective technique to measure intra-class encoder state similarity (ESS) by analyzing the outputs of encoding circuits. Leveraging this approach, we introduce a \underline{Qu}antum \underline{I}ndiscriminate \underline{D}ata Poisoning attack, QUID. Through extensive experiments conducted in both noiseless and noisy environments (e.g., IBM\_Brisbane's noise), across various architectures and datasets, QUID achieves up to $92\%$ accuracy degradation in model performance compared to baseline models and up to $75\%$ accuracy degradation compared to random label-flipping. We also tested QUID against state-of-the-art classical defenses, with accuracy degradation still exceeding $50\%$, demonstrating its effectiveness. This work represents the first attempt to reevaluate data poisoning attacks in the context of QML.
Increased oxidative stress in shoe industry workers with low-level exposure to a mixture of volatile organic compounds
Umićević Nina, Kotur-Stevuljević Jelena, Baralić Katarina
et al.
This study aimed to assess the redox status and trace metal levels in 49 shoe industry workers (11 men and 38 women) occupationally exposed to a mixture of volatile organic compounds (VOCs), which includes aliphatic hydrocarbons, aromatic hydrocarbons, ketones, esters, ethers, and carboxylic acids. All measured VOCs were below the permitted occupational exposure limits. The control group included 50 unexposed participants (25 men and 25 women). The following plasma parameters were analysed: superoxide anion (O2•−), advanced oxidation protein products (AOPP), total oxidative status (TOS), prooxidant-antioxidant balance (PAB), oxidative stress index (OSI), superoxide dismutase (SOD) and paraoxonase-1 (PON1) enzyme activity, total SH group content (SHG), and total antioxidant status (TAS). Trace metal levels (copper, zinc, iron, magnesium, and manganese) were analysed in whole blood. All oxidative stress and antioxidative defence parameters were higher in the exposed workers than controls, except for PON1 activity. Higher Fe, Mg, and Zn, and lower Cu were observed in the exposed vs control men, while the exposed women had higher Fe and lower Mg, Zn, and Cu than their controls. Our findings confirm that combined exposure to a mixture of VOCs, even at permitted levels, may result in additive or synergistic adverse health effects and related disorders. This raises concern about current risk assessments, which mainly rely on the effects of individual chemicals, and calls for risk assessment approaches that can explain combined exposure to multiple chemicals.
A follow-up on the hesperetin issue in modelling the first electrochemical oxidation potential and antioxidant activity of flavonoids
Miličević Ante
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
Jinyuan Jia, Yupei Liu, Yuepeng Hu
et al.
Data poisoning attacks spoof a recommender system to make arbitrary, attacker-desired recommendations via injecting fake users with carefully crafted rating scores into the recommender system. We envision a cat-and-mouse game for such data poisoning attacks and their defenses, i.e., new defenses are designed to defend against existing attacks and new attacks are designed to break them. To prevent such a cat-and-mouse game, we propose PORE, the first framework to build provably robust recommender systems in this work. PORE can transform any existing recommender system to be provably robust against any untargeted data poisoning attacks, which aim to reduce the overall performance of a recommender system. Suppose PORE recommends top-$N$ items to a user when there is no attack. We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack. Moreover, we design an efficient algorithm to compute $r$ for each user. We empirically evaluate PORE on popular benchmark datasets.
Feed Me: Robotic Infiltration of Poison Frog Families
Tony G. Chen, Billie C. Goolsby, Guadalupe Bernal
et al.
We present the design and operation of tadpole-mimetic robots prepared for a study of the parenting behaviors of poison frogs, which pair bond and raise their offspring. The mission of these robots is to convince poison frog parents that they are tadpoles, which need to be fed. Tadpoles indicate this need, at least in part, by wriggling with a characteristic frequency and amplitude. While the study is in progress, preliminary indications are that the TadBots have passed their test, at least for father frogs. We discuss the design and operational requirements for producing convincing TadBots and provide some details of the study design and plans for future work.
Histopathological, ultrastructural, and biochemical traits of apoptosis induced by peroxisomicine A1 (toxin T-514) from Karwinskia parvifolia in kidney and lung
Adolfo Soto-Domínguez, Daniel Salas-Treviño, Gloria A. Guillén-Meléndez
et al.
Peroxisomicine A1 (PA1) is a toxin isolated from the Karwinskia genus plants whose target organs are the liver, kidney, and lung. In vitro studies demonstrated the induction of apoptosis by PA1 in cancer cell lines, and in vivo in the liver. Apoptosis has a wide range of morphological features such as cell shrinkage, plasma membrane blistering, loss of microvilli, cytoplasm, and chromatin condensation, internucleosomal DNA fragmentation, and formation of apoptotic bodies that are phagocytized by resident macrophages or nearby cells. Early stages of apoptosis can be detected by mitochondrial alterations. We investigated the presence of apoptosis in vivo at the morphological, ultrastructural, and biochemical levels in two target organs of PA1: kidney and lung. Sixty CD-1 mice were divided into three groups (n = 20): untreated control (ST), vehicle control (VH), and PA1 intoxicated group (2LD50). Five animals of each group were sacrificed at 4, 8, 12, and 24 h post-intoxication. Kidney and lung were examined by morphometry, histopathology, ultrastructural, and DNA fragmentation analysis. Pre-apoptotic mitochondrial alterations were present at 4 h. Apoptotic bodies were observed at 8 h and increased over time. TUNEL positive cells were detected as early as 4 h, and the DNA ladder pattern was observed at 12 h and 24 h. The liver showed the highest value of fragmented DNA, followed by the kidney and the lung. We demonstrated the induction of apoptosis by a toxic dose of PA1 in the kidney and lung in vivo. These results could be useful in understanding the mechanism of action of this compound at toxic doses in vivo.
Comparative study of dexmedetomidine versus midazolam infusion with local anesthesia for middle ear surgeries
K.J. Vedhashree, Manjunatha C, Sunil Khyadi
et al.
Background: Middle ear surgeries can be performed under local anesthesia, which is well tolerated when used with sedation. Objective: compare the effect of dexmedetomidine versus midazolam as a sedative in middle ear surgeries done under local anesthesia. Methods: 60 adult patients undergoing middle ear surgeries were randomly allocated into two groups, Group D (n=30) received inj Dexmedetomidine loading dose of 1 mcg/kg over 10 minutes followed by maintenance of 0.2 mcg/kg/hr. Group M received inj. midazolam loading dose of 0.03mg/kg over 10 minutes followed by maintenance of 0.02 mg/kg/hr. Parameters recorded were sedation, patient’s & surgeon’s satisfaction, pain, side effects. Results: Demographic data were comparable in both the groups. Mean RSS was 2.27±0.45 in group M and 2.90±0.31 in group D (P <0.001), significant. Mean VAS for pain was 2.24±0.9 and 1.36±0.6 in group M and group D respectively (P=0.001), significant. Patient’s and surgeon’s satisfaction were significant in group D, (P≤0.001) compared to group M. Side effects were minimal and treated effectively (P =0.212), statistically not-significant. Conclusion: Dexmedetomidine compared to midazolam found to be better drug with respect to sedation, analgesia, patient’s and surgeon’s satisfaction for middle ear surgeries done under local anesthesia.
Therapeutics. Pharmacology, Toxicology. Poisons
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information
Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang
et al.
Federated learning is vulnerable to poisoning attacks in which malicious clients poison the global model via sending malicious model updates to the server. Existing defenses focus on preventing a small number of malicious clients from poisoning the global model via robust federated learning methods and detecting malicious clients when there are a large number of them. However, it is still an open challenge how to recover the global model from poisoning attacks after the malicious clients are detected. A naive solution is to remove the detected malicious clients and train a new global model from scratch, which incurs large cost that may be intolerable for resource-constrained clients such as smartphones and IoT devices. In this work, we propose FedRecover, which can recover an accurate global model from poisoning attacks with small cost for the clients. Our key idea is that the server estimates the clients' model updates instead of asking the clients to compute and communicate them during the recovery process. In particular, the server stores the global models and clients' model updates in each round, when training the poisoned global model. During the recovery process, the server estimates a client's model update in each round using its stored historical information. Moreover, we further optimize FedRecover to recover a more accurate global model using warm-up, periodic correction, abnormality fixing, and final tuning strategies, in which the server asks the clients to compute and communicate their exact model updates. Theoretically, we show that the global model recovered by FedRecover is close to or the same as that recovered by train-from-scratch under some assumptions. Empirically, our evaluation on four datasets, three federated learning methods, as well as untargeted and targeted poisoning attacks (e.g., backdoor attacks) shows that FedRecover is both accurate and efficient.