Asynchronous Cache-based Aggregation with Fairness and Filtering for Decentralized Federated Learning
Quick facts
- Year: 2026
- Venue: Computer Networks
- Identifier: martinezbeltran2026asynchronous
Suggested citation
Enrique Tomás Martínez Beltrán, Eduard Gash, Gérôme Bovet, Alberto Huertas Celdrán, Burkhard Stiller (2026). Asynchronous Cache-based Aggregation with Fairness and Filtering for Decentralized Federated Learning. Computer Networks.
Abstract
Decentralized Federated Learning (DFL) offers a scalable paradigm for collaborative intelligence at the edge, yet its practical efficacy is severely constrained by system heterogeneity. Traditional synchronous protocols enforce rigid, lockstep aggregation barriers, where the training velocity of the entire collective is dictated by the slowest straggler node, inevitably leading to significant idle time and resource underutilization. While asynchronous strategies mitigate latency, they often introduce complex pathologies, such as unbounded staleness and systemic unfairness, because high-performance nodes disproportionately bias the global model toward their local data distributions, thereby marginalizing slower contributors. To reconcile these conflicting trade-offs, this work presents CAFF, a novel asynchronous communication framework for DFL that decouples local optimization from global synchronization via a topology-aware, event-driven protocol. A topology-aware cache with a strict one-slot-per-neighbor, exclusive-replacement policy limits per-peer dominance, preventing any peer from contributing multiple updates within a single aggregation event. Furthermore, a configurable staleness filter and a dynamic aggregation threshold ensure robust convergence stability across diverse federation topologies. Extensive empirical evaluations using MNIST, FashionMNIST, CIFAR-10, and SVHN, conducted on a high-fidelity, virtualized testbed across fully connected, star, and ring topologies, demonstrate that CAFF significantly outperforms synchronous baselines. Specifically, in dense network configurations, the framework reduces wall-clock training time by up to 39% and network traffic by up to 75%, while maintaining competitive predictive fidelity with controlled accuracy degradation.
These results position CAFF as a robust and scalable efficiency-oriented solution for heterogeneous peer-to-peer learning environments.
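The cache-and-filter mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the class and method names (`NeighborUpdateCache`, `receive`, `drain`) and the parameter names are hypothetical, chosen only to mirror the three stated ingredients: a one-slot-per-neighbor cache with exclusive replacement, a configurable staleness filter, and an aggregation threshold.

```python
class NeighborUpdateCache:
    """Illustrative sketch (not the CAFF codebase): one-slot-per-neighbor
    cache with exclusive replacement, a staleness filter, and a
    configurable aggregation threshold."""

    def __init__(self, staleness_limit, aggregation_threshold):
        self.staleness_limit = staleness_limit            # max allowed round lag
        self.aggregation_threshold = aggregation_threshold  # min cached peers to aggregate
        self.slots = {}  # neighbor_id -> (model_update, round_received)

    def receive(self, neighbor_id, update, peer_round):
        # Exclusive replacement: a newer update overwrites the peer's
        # single slot, so no peer ever holds more than one pending
        # contribution to the next aggregation event.
        self.slots[neighbor_id] = (update, peer_round)

    def ready(self):
        # Dynamic threshold: trigger aggregation once enough distinct
        # neighbors have contributed, without waiting for all of them.
        return len(self.slots) >= self.aggregation_threshold

    def drain(self, current_round):
        # Staleness filter: discard cached updates whose round lag
        # exceeds the configured bound, then clear all slots.
        fresh = {n: u for n, (u, r) in self.slots.items()
                 if current_round - r <= self.staleness_limit}
        self.slots.clear()
        return fresh
```

For example, a second update from the same neighbor replaces its slot rather than occupying a new one, and an update lagging beyond the staleness bound is filtered out at aggregation time:

```python
cache = NeighborUpdateCache(staleness_limit=2, aggregation_threshold=2)
cache.receive("A", [0.1], peer_round=5)
cache.receive("A", [0.2], peer_round=6)  # replaces A's slot
cache.receive("B", [0.3], peer_round=1)  # will be filtered as stale
assert cache.ready()                      # two distinct peers cached
fresh = cache.drain(current_round=6)      # only A's update survives
```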
Related publications
Works with stronger overlap in topic, type, and tags.
Decentralized Federated Learning with Multimodal Prototypes for Heterogeneous Data
Enrique Tomás Martínez Beltrán, Gérôme Bovet, Gregorio Martínez Pérez, Alberto Huertas Celdrán
Analyzing the impact of driving tasks when detecting emotions through brain-computer interfaces
Mario Quiles Pérez, Enrique Tomás Martínez Beltrán, Sergio López Bernal, Gregorio Martínez Pérez, Alberto Huertas Celdrán
Traffic accidents are the leading cause of death among young people, claiming an enormous number of victims every year. Several technologies have been proposed to prevent accidents, brain-computer interfaces (BCIs) being one of the...
Analyzing the robustness of decentralized horizontal and vertical federated learning architectures in a non-IID scenario
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán, Enrique Tomás Martínez Beltrán, Daniel Demeter, Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller
Federated learning (FL) enables participants to collaboratively train machine and deep learning models while safeguarding data privacy. However, the FL paradigm still has drawbacks that affect its trustworthiness, as malicious participants...
Related Research

Apr 2023 — Nov 2023
DEFENDIS: Decentralized Federated Learning for IoT Device Identification and Security
DEFENDIS develops a framework for uniquely identifying IoT devices in a distributed manner while addressing security threats through decentralized federated learning.

Dec 2022 — Nov 2025
EU-GUARDIAN: European Framework and Proofs-of-concept for the Intelligent Automation of Cyber Defence Incident Management
A cutting-edge AI-based solution for automating cyber defence incident management processes, enhancing EU cyber defence posture and operational capabilities.