

Preprint · 2026

arXiv preprint arXiv:2603.08424

SYNAPSE: Framework for Neuron Analysis and Perturbation in Sequence Encoding


Publisher Page · DOI

Quick facts

Year: 2026
Venue: arXiv preprint arXiv:2603.08424
Identifier: sanchoochoa2026synapse

Suggested citation

Jesús Sánchez Ochoa, Enrique Tomás Martínez Beltrán, Alberto Huertas Celdrán (2026). SYNAPSE: Framework for Neuron Analysis and Perturbation in Sequence Encoding. arXiv preprint arXiv:2603.08424.

Abstract

In recent years, Artificial Intelligence has become a powerful partner for complex tasks such as data analysis, prediction, and problem-solving, yet its lack of transparency raises concerns about its reliability. In sensitive domains such as healthcare or cybersecurity, ensuring transparency, trustworthiness, and robustness is essential, since the consequences of wrong decisions or successful attacks can be severe. Prior neuron-level interpretability approaches are primarily descriptive, task-dependent, or require retraining, which limits their use as systematic, reusable tools for evaluating internal robustness across architectures and domains. To overcome these limitations, this work proposes SYNAPSE, a systematic, training-free framework for understanding and stress-testing the internal behavior of Transformer models across domains. It extracts per-layer [CLS] representations, trains a lightweight linear probe to obtain global and per-class neuron rankings, and applies forward-hook interventions during inference. This design enables controlled experiments on internal representations without altering the original model, thereby allowing weaknesses, stability patterns, and label-specific sensitivities to be measured and compared directly across tasks and architectures. Across all experiments, SYNAPSE reveals a consistent, domain-independent organization of internal representations, in which task-relevant information is encoded in broad, overlapping neuron subsets. This redundancy provides a strong degree of functional stability, while class-wise asymmetries expose heterogeneous specialization patterns and enable label-aware analysis. In contrast, small structured manipulations in weight or logit space are sufficient to redirect predictions, highlighting complementary vulnerability profiles and illustrating how SYNAPSE can guide the development of more robust Transformer models.
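The abstract outlines a three-step pipeline: collect per-layer [CLS] representations, fit a lightweight linear probe to rank neurons, and intervene on those neurons via forward hooks at inference time. The snippet below is a minimal PyTorch/Hugging Face sketch of that kind of workflow, not the authors' implementation; the model name (bert-base-uncased), the hooked layer, the zero-ablation choice, and the toy data are illustrative assumptions.

```python
# Minimal sketch of a probe-and-perturb workflow (assumptions noted above).
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: any BERT-style encoder with a [CLS] token
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True).eval()

def cls_representations(texts, layer=-1):
    """Return the [CLS] vector from the chosen layer for each input text."""
    feats = []
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt", truncation=True)
            out = model(**enc)
            feats.append(out.hidden_states[layer][0, 0].numpy())  # [CLS] is token 0
    return np.stack(feats)

# Toy labelled data stand in for a real classification task.
texts = ["benign traffic sample", "suspicious login attempt"] * 8
labels = [0, 1] * 8

# Lightweight linear probe on [CLS] features; per-class rankings would use
# individual rows of probe.coef_, the global ranking aggregates across classes.
X = cls_representations(texts)
probe = LogisticRegression(max_iter=1000).fit(X, labels)
ranking = np.argsort(-np.abs(probe.coef_).sum(axis=0))
top_k = ranking[:32].tolist()  # neurons to perturb (size chosen arbitrarily here)

def ablate_hook(module, inputs, output):
    """Forward hook that zero-ablates the top-ranked neurons of the hooked layer's output."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden.clone()
    hidden[..., top_k] = 0.0
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# Intervene during inference without altering the model's weights.
handle = model.encoder.layer[-1].register_forward_hook(ablate_hook)
with torch.no_grad():
    enc = tokenizer("suspicious login attempt", return_tensors="pt")
    perturbed_cls = model(**enc).last_hidden_state[0, 0]
handle.remove()  # removing the hook restores the original model behavior
```

Because the intervention lives entirely in a removable hook, the same model instance can be probed, perturbed, and restored repeatedly, which matches the paper's claim of enabling controlled experiments without retraining or modifying the original model.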

Authors

Jesús Sánchez Ochoa, Enrique Tomás Martínez Beltrán, Alberto Huertas Celdrán

Keywords

Related publications

Works with stronger overlap in topic, type, and tags.

Preprint · 2026

Submitted to Information Fusion

Decentralized Self-Supervised Representation Learning via Prototype Exchange under Non-IID Data

Enrique Tomás Martínez Beltrán, Gérôme Bovet, Gregorio Martínez Pérez, Alberto Huertas Celdrán

Preprint · 2026

Submitted to Future Generation Computer Systems

FedEnD: Communication-Efficient Federated Learning for Non-IID Data via Decentralized Ensemble Distillation

Enrique Tomás Martínez Beltrán, Philip Giryes, Gérôme Bovet, Burkhard Stiller, Gregorio Martínez Pérez, Alberto Huertas Celdrán

Preprint · 2021

Journal of Healthcare Engineering

Breaching Subjects’ Thoughts Privacy: A Study with Visual Stimuli and Brain-Computer Interfaces

Mario Quiles Pérez, Enrique Tomás Martínez Beltrán, Sergio López Bernal, Alberto Huertas Celdrán, Gregorio Martínez Pérez

Brain-computer interfaces (BCIs) started being used in clinical scenarios, reaching nowadays new fields such as entertainment or learning. Using BCIs, neuronal activity can be monitored for various purposes, with the stu...

Publisher Page · DOI

Related Research

DEFENDIS: Decentralized Federated Learning for IoT Device Identification and Security

Apr 2023 — Nov 2023

DEFENDIS develops a framework for uniquely identifying IoT devices in a distributed manner while mitigating security threats through decentralized federated learning.

EU-GUARDIAN: European Framework and Proofs-of-concept for the Intelligent Automation of Cyber Defence Incident Management

Dec 2022 — Nov 2025

EU-GUARDIAN develops an AI-based solution for automating cyber defence incident management processes, enhancing the EU's cyber defence posture and operational capabilities.