
Machine Learning

Hijacking the hive mind: A new stealth attack on federated learning

Last updated: February 28, 2026 4:35 am
By Science Briefing, Science Communicator

A novel security threat, dubbed HijackFL, has been demonstrated against federated learning systems. Unlike traditional poisoning attacks that tamper with training data or model updates, this method searches for pixel-level perturbations that subtly manipulate input data. By aligning hijacking samples with legitimate ones in the feature space, an adversary can force a globally trained model to perform an entirely different, unauthorized task without detection by the central server or other participants. The attack achieved a hijacking success rate of over 92% in experiments, significantly outperforming prior methods.
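
To make the mechanism concrete, here is a minimal, hypothetical sketch (in PyTorch) of the kind of feature-space alignment the summary describes: the attacker optimizes a small, bounded pixel-level perturbation so that a sample from the hidden task lands near a legitimate sample in the shared model's feature space. The names `backbone`, `hijack_x`, and `benign_x` are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of feature-space alignment; not the paper's reference implementation.
# Assumes `backbone` is a feature extractor taken from the shared global model,
# `hijack_x` is an image from the attacker's hidden task, and `benign_x` is a
# legitimate sample whose original-task behavior the attacker wants to imitate.
import torch
import torch.nn.functional as F

def craft_hijack_perturbation(backbone, hijack_x, benign_x,
                              steps=200, lr=0.01, eps=8 / 255):
    """Search for a small pixel-level perturbation that pulls the hijacking
    sample toward the benign sample in the model's feature space."""
    backbone.eval()
    with torch.no_grad():
        target_feat = backbone(benign_x)  # features of the legitimate sample

    delta = torch.zeros_like(hijack_x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        feat = backbone((hijack_x + delta).clamp(0, 1))
        # Minimize the feature-space distance between the perturbed hijacking
        # sample and the legitimate sample it should resemble.
        loss = F.mse_loss(feat, target_feat)
        loss.backward()
        opt.step()
        # Keep the perturbation visually subtle with an L-infinity budget.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (hijack_x + delta).detach().clamp(0, 1)
```

Because the optimization only touches the attacker's own inputs, nothing anomalous appears in the model updates that the central server aggregates, which is what makes the attack hard to spot with update-level defenses.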

Why it might matter to you: For professionals working with distributed machine learning models, this research highlights a critical vulnerability in a paradigm often chosen for its privacy benefits. It underscores the need to move beyond conventional data poisoning defenses and consider adversarial attacks that operate directly on input features. This finding could influence how you design model evaluation protocols and security audits for collaborative learning systems, pushing for more robust anomaly detection in the feature space.
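
As a hedged illustration of what "anomaly detection in the feature space" could look like in practice, the sketch below scores incoming samples by their Mahalanobis distance from the feature distribution of trusted data. The variable names and the threshold-based decision are assumptions for illustration, not a defense evaluated in the paper.

```python
# Hypothetical feature-space anomaly check; not a defense proposed by the authors.
# Assumes `features` is an (N, D) array of penultimate-layer activations for
# trusted reference data and `query` is a (D,) feature vector to screen.
import numpy as np

def fit_feature_stats(features):
    """Estimate mean and (regularized) inverse covariance of trusted features."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(query, mu, cov_inv):
    """Distance of a query's features from the trusted distribution; unusually
    large scores may indicate inputs crafted for a hidden task."""
    diff = query - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

# Usage sketch: flag queries whose score exceeds a calibrated threshold.
# mu, cov_inv = fit_feature_stats(trusted_features)
# if mahalanobis_score(query_features, mu, cov_inv) > threshold:
#     ...  # route the input for review
```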

Source →

Stay curious. Stay informed — with Science Briefing.

Always double-check the original article for accuracy.
