Journal article · 2024 · Array

DART: A Solution for decentralized federated learning model robustness analysis


Quick facts

Year
2024
Venue
Array
Identifier
feng2024dart

Suggested citation

Chao Feng, Alberto Huertas Celdrán, Jan von der Assen, Enrique Tomás Martínez Beltrán, Gérôme Bovet, Burkhard Stiller (2024). DART: A Solution for decentralized federated learning model robustness analysis. Array.

Abstract

Federated Learning (FL) has emerged as a promising approach to address privacy concerns inherent in Machine Learning (ML) practices. However, conventional FL methods, particularly those following the Centralized FL (CFL) paradigm, rely on a central server for global aggregation, which introduces limitations such as communication bottlenecks and a single point of failure. To address these issues, the Decentralized FL (DFL) paradigm has been proposed, which removes the client–server boundary and enables all participants to engage in both model training and aggregation tasks. Nevertheless, like CFL, DFL remains vulnerable to adversarial attacks, notably poisoning attacks that undermine model performance. While existing research on model robustness has predominantly focused on CFL, there is a noteworthy gap in understanding the robustness of the DFL paradigm. This paper presents a thorough review of poisoning attacks targeting model robustness in DFL systems, together with their corresponding countermeasures. Additionally, a solution called DART is proposed to evaluate the robustness of DFL models; it is implemented and integrated into a DFL platform. Through extensive experiments, this paper compares the behavior of CFL and DFL under diverse poisoning attacks, pinpointing key factors affecting attack spread and effectiveness within DFL. It also evaluates the performance of different defense mechanisms and investigates whether defenses designed for CFL are compatible with DFL. The empirical results provide insights into open research challenges and suggest ways to improve the robustness of DFL models in future research.
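
For intuition, the following toy sketch contrasts centralized aggregation (CFL) with a gossip-style decentralized step (DFL) in which each node averages parameters with its neighbors. It is illustrative only, not DART's actual implementation; the function names and ring topology are hypothetical.

```python
import numpy as np

# Illustrative sketch only: a toy contrast between centralized (CFL) and
# decentralized (DFL) parameter aggregation. Names and topology are
# hypothetical and do not reflect DART's implementation.

def centralized_aggregate(updates):
    """CFL: a central server averages all client updates (FedAvg-style)."""
    return np.mean(updates, axis=0)

def decentralized_step(params, topology):
    """DFL: each node averages its own parameters with its neighbors',
    removing the central server (and its single point of failure)."""
    new_params = {}
    for node, neighbors in topology.items():
        group = [params[node]] + [params[n] for n in neighbors]
        new_params[node] = np.mean(group, axis=0)
    return new_params

# Toy example: four nodes, one of which submits a poisoned update.
rng = np.random.default_rng(0)
params = {n: rng.normal(0.0, 0.1, size=3) for n in "abcd"}
params["d"] += 10.0  # model poisoning: a large malicious parameter shift

ring = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
after = decentralized_step(params, ring)
# In one round the poison reaches only d's neighbors, so how far and how
# fast an attack spreads depends on the overlay topology -- one of the
# factors the paper's experiments examine.
```
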

Authors

Chao Feng, Alberto Huertas Celdrán, Jan von der Assen, Enrique Tomás Martínez Beltrán, Gérôme Bovet, Burkhard Stiller

Keywords

Decentralized federated learning, Poisoning attack, Cybersecurity, Model robustness

Related publications

Works with stronger overlap in topic, type, and tags.

Journal article · 2024 · Applied Intelligence

Analyzing the robustness of decentralized horizontal and vertical federated learning architectures in a non-IID scenario

Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán, Enrique Tomás Martínez Beltrán, Daniel Demeter, Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller

Federated learning (FL) enables participants to collaboratively train machine and deep learning models while safeguarding data privacy. However, the FL paradigm still has drawbacks that affect its trustworthiness, as malicious participants...

Journal article · 2024 · Information Fusion

Data fusion in neuromarketing: Multimodal analysis of biosignals, lifecycle stages, current advances, datasets, trends, and challenges

Mario Quiles Pérez, Enrique Tomás Martínez Beltrán, Sergio López Bernal, Eduardo Horna Prat, Luis Montesano Del Campo, Lorenzo Fernández Maimó, Alberto Huertas Celdrán

The primary goal of any company is to increase its profits by improving both the quality of its products and how they are advertised. In this context, neuromarketing seeks to enhance the promotion of products and generate a greater acceptan...
