TRUST‑WEIGHTED AGGREGATION IN FEDERATED LEARNING FOR MICROGRIDS
DOI: https://doi.org/10.31891/2307-5732-2025-359-77

Keywords: federated learning, trust-aware aggregation, microgrid security, anomaly detection, adversarial clients, cyber-physical systems

Abstract
This article presents a comprehensive framework for enhancing the robustness of federated learning (FL) in microgrid cybersecurity by introducing a novel trust-weighted aggregation (TWA) mechanism. In decentralized energy infrastructures, federated learning enables collaborative training of predictive and anomaly detection models across distributed controllers while preserving data privacy. However, the presence of unreliable or adversarial participants poses a serious risk to the stability of global learning. Malicious or noisy updates may bias aggregation, delay convergence, or even compromise the resilience of the energy system. To address this vulnerability, the proposed approach integrates a dynamic trust scoring algorithm that continuously evaluates each client's reliability using three criteria: local loss, weight divergence from the global model, and residual-based anomaly scores. Trust scores are updated after every training round with adaptive decay to penalize inconsistent behavior, and they are incorporated directly into the aggregation rule, ensuring that unreliable nodes exert less influence on the global model.
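The trust-update and aggregation logic described above can be illustrated with a minimal sketch. The abstract does not give the exact scoring formula, so the blending of the three criteria, the weights `w`, the decay constant, and the function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def trust_score(local_loss, divergence, anomaly, prev_trust,
                decay=0.7, w=(0.4, 0.3, 0.3)):
    """Blend the three reliability criteria (local loss, weight divergence,
    residual anomaly score) into an instantaneous score, then smooth it with
    the previous trust via exponential decay. Weights and decay are
    illustrative choices, not values from the paper."""
    # Lower loss/divergence/anomaly means higher reliability; map each to (0, 1].
    instant = (w[0] / (1 + local_loss)
               + w[1] / (1 + divergence)
               + w[2] / (1 + anomaly))
    return decay * prev_trust + (1 - decay) * instant

def twa_aggregate(client_updates, trust):
    """Trust-weighted average of client parameter vectors.
    FedAvg would use uniform (or sample-count) weights instead."""
    t = np.asarray(trust, dtype=float)
    t = t / t.sum()  # normalize so trust weights sum to 1
    return sum(ti * u for ti, u in zip(t, client_updates))
```

With this sketch, a client reporting high loss, large divergence, and a high anomaly score receives a lower trust value, so its update is down-weighted in the global average relative to a plain FedAvg mean.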
The experimental study is conducted on a real-world energy consumption and generation dataset, where a mixed population of clients is simulated, including honest nodes with clean data, noisy nodes affected by perturbations, and adversarial nodes injecting poisoned updates. The infrastructure relies on PyTorch and Flower to coordinate training rounds, while trust computation and anomaly scoring are embedded in a lightweight module executed both locally and on the server. The server aggregates updates under two competing strategies: FedAvg and the proposed TWA. The evaluation criteria include root mean squared error (RMSE), mean absolute error (MAE), and F1-score across all rounds, with additional statistics such as final RMSE, average RMSE, and standard deviation to measure stability. Results demonstrate that while FedAvg suffers significantly from adversarial noise, the TWA mechanism consistently achieves lower error, faster convergence, and superior resilience to data poisoning. Moreover, trust scores evolve dynamically, enabling the system to detect and down-weight malicious clients within the first few rounds of training. This adaptive suppression ensures that the global model is guided primarily by reliable participants.
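The evaluation metrics named above are standard and can be sketched as follows; the helper names and the dictionary layout are illustrative assumptions rather than the authors' code:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between targets and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error between targets and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def stability_stats(rmse_per_round):
    """Final, average, and standard deviation of per-round RMSE, matching
    the stability statistics described in the abstract."""
    r = np.asarray(rmse_per_round, dtype=float)
    return {"final": float(r[-1]), "mean": float(r.mean()), "std": float(r.std())}
```

Tracking these statistics per round is what allows the two strategies to be compared not only on final accuracy but on convergence speed and variance under adversarial noise.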
The contribution of this work lies in demonstrating that trust-aware federated learning can combine privacy, resilience, and adaptivity in cyber-physical energy systems. By coupling trust scoring with anomaly detection and cross-validation against digital twin simulations, the proposed approach moves beyond static defenses and enables a self-correcting, self-healing federated ecosystem. These findings open the path toward secure and scalable federated intelligence for smart grids, offering a solid methodological and experimental foundation for future research in cybersecurity-critical domains.
Copyright (c) 2025 ГЕННАДІЙ ШИБАЄВ, ЛЕОНІД ГАЛЬЧИНСЬКИЙ (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.