V1I2P8

Adversarial Attacks and Defenses in Federated Learning: A State of the Art Survey on Security Vulnerabilities and Mitigation Approaches

Funminiyi Olagunju1*

Abstract

Federated Learning (FL) enables collaborative model training across decentralized devices while preserving data privacy, making it ideal for applications in IoT, healthcare, and autonomous systems. However, FL's distributed nature exposes it to a wide range of adversarial attacks, including data and model poisoning, privacy inference, and sophisticated collusion strategies, which threaten the integrity and confidentiality of the learning process. This survey comprehensively reviews state-of-the-art adversarial threats and defense mechanisms in FL. We discuss robust aggregation techniques, anomaly detection, privacy-preserving methods, and trust-based frameworks that enhance FL's resilience. Additionally, we explore evaluation metrics, real-world use cases, and open challenges, highlighting future research directions such as adaptive defenses, integration with emerging technologies, and benchmarking standardization. This work aims to provide researchers and practitioners with a detailed understanding of FL's security landscape and to guide the development of more secure and trustworthy federated systems.

Keywords: Federated Learning, Adversarial Attacks, Data Poisoning, Model Poisoning, Privacy Attacks