Hijacking the hive mind: A new stealth attack on federated learning
A novel security threat, dubbed HijackFL, has been demonstrated against federated learning systems. Unlike prior model hijacking attacks, which poison the training data in order to alter the model's parameters, this method searches for pixel-level perturbations that subtly manipulate the inputs themselves. By aligning the features of hijacking samples with those of legitimate samples, an adversary can force the globally trained model to perform an entirely different, unauthorized task without being detected by the central server or the other participants. In experiments the attack achieved a hijacking success rate above 92%, significantly outperforming prior methods.
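To make the core idea concrete, here is a minimal sketch of that feature-space alignment, assuming a PyTorch feature extractor. The function name `find_perturbation`, the pairing of each hijacking sample with a single legitimate anchor sample, the MSE alignment loss, and the L-infinity budget `eps` are illustrative assumptions for this sketch, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def find_perturbation(feature_extractor, hijack_x, anchor_x,
                      steps=200, lr=0.01, eps=8 / 255):
    """Optimize a pixel-level perturbation that pulls the features of a
    hijacking sample toward those of a legitimate anchor sample.
    Assumes image tensors normalized to [0, 1]."""
    delta = torch.zeros_like(hijack_x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        # Features of the legitimate sample serve as the alignment target.
        target_feat = feature_extractor(anchor_x)
    for _ in range(steps):
        opt.zero_grad()
        feat = feature_extractor((hijack_x + delta).clamp(0, 1))
        loss = F.mse_loss(feat, target_feat)  # align in feature space
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation visually subtle
    return delta.detach()
```

Because the perturbed inputs already sit where legitimate inputs sit in feature space, the resulting model updates look benign, which is what makes the attack hard to spot from the server side.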
Why it might matter to you: For professionals working with distributed machine learning, this research highlights a critical vulnerability in a paradigm often chosen for its privacy benefits. Defenses tuned to conventional data poisoning will not catch an attack that operates directly on input features. The finding argues for designing model evaluation protocols and security audits for collaborative learning systems around more robust anomaly detection in the feature space, as the sketch below illustrates.
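As one hedged illustration of what feature-space anomaly detection could look like, the snippet below scores inputs by their distance to the nearest per-class feature centroid fitted on trusted data. This centroid-distance score is a generic baseline, not a defense evaluated in the paper, and the helper names `fit_centroids` and `anomaly_scores` are invented for this sketch; an attack that deliberately aligns features may require stronger detectors.

```python
import torch

def fit_centroids(features, labels, num_classes):
    """Per-class feature centroids from a trusted validation set.
    features: (N, D) tensor, labels: (N,) tensor of class indices."""
    return torch.stack([features[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def anomaly_scores(features, centroids):
    """Distance from each feature vector to its nearest class centroid;
    large scores flag inputs that sit oddly in feature space."""
    d = torch.cdist(features, centroids)  # (N, num_classes) distances
    return d.min(dim=1).values

# Example policy: flag samples whose score exceeds the 99th percentile
# of scores observed on trusted data.
```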
Source →
Stay curious. Stay informed, with Science Briefing.
Always double check the original article for accuracy.

