Enrique Tomás Martínez Beltrán

Ph.D. student at the University of Murcia working at the intersection of federated learning, cybersecurity, and privacy-preserving AI for real-world systems.


Journal article (2026), Computer Networks

Asynchronous Cache-based Aggregation with Fairness and Filtering for Decentralized Federated Learning


Quick facts

  • Year: 2026
  • Venue: Computer Networks
  • Identifier: martinezbeltran2026asynchronous

Suggested citation

Enrique Tomás Martínez Beltrán, Eduard Gash, Gérôme Bovet, Alberto Huertas Celdrán, Burkhard Stiller (2026). Asynchronous Cache-based Aggregation with Fairness and Filtering for Decentralized Federated Learning. Computer Networks.

Abstract

Decentralized Federated Learning (DFL) offers a scalable paradigm for collaborative intelligence at the edge, yet its practical efficacy is severely constrained by system heterogeneity. Traditional synchronous protocols enforce rigid, lockstep aggregation barriers, where the training velocity of the entire collective is dictated by the slowest straggler node, leading to significant idle time and resource underutilization. While asynchronous strategies mitigate latency, they often introduce complex pathologies, such as unbounded staleness and systemic unfairness, because high-performance nodes disproportionately bias the global model toward their local data distributions, marginalizing slower contributors. To reconcile these conflicting trade-offs, this work presents CAFF, a novel asynchronous communication framework for DFL that decouples local optimization from global synchronization via a topology-aware, event-driven protocol. Its topology-aware cache enforces a strict one-slot-per-neighbor replacement policy, so no peer can contribute more than one update to a single aggregation event, which limits per-peer dominance. Furthermore, a configurable staleness filter and a dynamic aggregation threshold ensure robust convergence stability across diverse federation topologies. Extensive empirical evaluations on MNIST, FashionMNIST, CIFAR-10, and SVHN, conducted on a high-fidelity virtualized testbed across fully connected, star, and ring topologies, demonstrate that CAFF significantly outperforms synchronous baselines. Specifically, in dense network configurations, the framework reduces wall-clock training time by up to 39% and network traffic by up to 75%, while maintaining competitive predictive fidelity with controlled accuracy degradation. These results position CAFF as a robust, scalable, efficiency-oriented solution for heterogeneous peer-to-peer learning environments.
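The abstract describes three mechanisms: a one-slot-per-neighbor cache with exclusive replacement, a staleness filter, and an aggregation threshold. The following Python sketch illustrates how such a mechanism could fit together; it is an illustrative reconstruction from the abstract alone, not the paper's actual implementation, and all names (`CaffStyleCache`, `Update`, the averaging rule) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Update:
    params: list[float]   # flattened model parameters (illustrative)
    round_sent: int       # sender's local round when the update was produced

class CaffStyleCache:
    """Hypothetical sketch of a CAFF-style per-neighbor cache: one slot per
    neighbor, a newer update from the same peer overwrites the older one
    (exclusive replacement), and aggregation fires only once enough
    sufficiently fresh updates are buffered."""

    def __init__(self, neighbors, staleness_limit=3, threshold=2):
        self.cache = {n: None for n in neighbors}  # one slot per neighbor
        self.staleness_limit = staleness_limit     # max tolerated round lag
        self.threshold = threshold                 # fresh updates needed to aggregate

    def receive(self, peer, update: Update):
        if peer not in self.cache:
            return  # topology-aware: ignore updates from non-neighbors
        self.cache[peer] = update  # exclusive replacement: last write wins

    def try_aggregate(self, local_round, local_params):
        # Staleness filter: keep only updates within the allowed round lag.
        fresh = [u for u in self.cache.values()
                 if u is not None
                 and local_round - u.round_sent <= self.staleness_limit]
        if len(fresh) < self.threshold:
            return None  # below the aggregation threshold; keep training
        # Simple average of the local model and the cached neighbor models.
        models = [local_params] + [u.params for u in fresh]
        aggregated = [sum(vals) / len(models) for vals in zip(*models)]
        for peer in self.cache:  # consume the cache after aggregating
            self.cache[peer] = None
        return aggregated
```

Because each neighbor owns exactly one slot, a fast peer that sends three updates between two aggregation events contributes only its latest one, which is the fairness property the abstract highlights.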

Authors

Enrique Tomás Martínez Beltrán, Eduard Gash, Gérôme Bovet, Alberto Huertas Celdrán, Burkhard Stiller

Related publications

Works with stronger overlap in topic, type, and tags.

Journal article (2026)

Information Fusion

Decentralized Federated Learning with Multimodal Prototypes for Heterogeneous Data

Enrique Tomás Martínez Beltrán, Gérôme Bovet, Gregorio Martínez Pérez, Alberto Huertas Celdrán

Journal article (2026)

Computer Networks

RepuNet: A Reputation System for Mitigating Malicious Clients in DFL

Isaac Marroqui Penalva, Enrique Tomás Martínez Beltrán, Manuel Gil Pérez, Alberto Huertas Celdrán

Decentralized Federated Learning (DFL) enables nodes to collaboratively train models without a central server, introducing new vulnerabilities since each node independently selects peers for model aggregation. Malicious...

Journal article (2026)

IEEE Access

TemporalFED: Detecting Cyberattacks in Industrial Time-Series Data Using Decentralized Federated Learning

Ángel Luis Perales Gómez, Enrique Tomás Martínez Beltrán, Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán

Industry 4.0 has brought numerous advantages, such as increasing productivity through automation. However, it also presents major cybersecurity issues, such as cyberattacks affecting industrial processes. Federated Learn...


Related Research

DEFENDIS: Decentralized Federated Learning for IoT Device Identification and Security

Apr 2023 — Nov 2023


DEFENDIS develops a framework for uniquely identifying IoT devices in a distributed manner while mitigating security threats through decentralized federated learning.

EU-GUARDIAN: European Framework and Proofs-of-concept for the Intelligent Automation of Cyber Defence Incident Management

Dec 2022 — Nov 2025


A cutting-edge AI-based solution for automating cyber defence incident management processes, enhancing EU cyber defence posture and operational capabilities.