ARES: Scalable and Practical Gradient Inversion Attack in Federated Learning through Activation Recovery

Published in IEEE Symposium on Security and Privacy (S&P), 2026

Federated Learning (FL) is widely adopted in healthcare, finance, and beyond — precisely because it allows models to train on sensitive data without ever sharing it. But how private is it, really?

Our work reveals that a malicious server can reconstruct private training data directly from gradients — at scale, and without modifying the model architecture at all! 🥷⚠️

🔍 The key insight

Existing attacks rely on architectural modifications that are easily detectable in practice. ARES sidesteps this limitation entirely by tackling the fundamental challenge of inverting hidden activations back into training samples.

⚙️ How it works

  • Formulates activation inversion as a noisy sparse recovery problem
  • Leverages Lasso-based compressed sensing to reconstruct samples from intermediate activations (see the sketch after this list)
  • Requires no architectural modifications
  • Scales to large batch sizes in realistic FL settings
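To make the recovery step concrete, here is a minimal, self-contained sketch. This is my illustration, not the paper's code: the layer, dimensions, and the sparsity basis (identity, purely for brevity) are all assumptions. The idea: if the server knows a linear layer's weights W and has recovered its noisy output y ≈ Wx, then inverting that layer is exactly a noisy compressed-sensing problem, solvable with Lasso.

```python
# Illustrative sketch of activation inversion as noisy sparse recovery.
# NOT the ARES implementation: names, dimensions, and the identity
# sparsity basis are assumptions made to keep the example short.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

d, m, s = 1024, 256, 20                         # input dim, activation dim, sparsity
W = rng.standard_normal((m, d)) / np.sqrt(m)    # known layer weights = sensing matrix

x_true = np.zeros(d)                            # synthetic s-sparse "training sample"
x_true[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)

y = W @ x_true + 0.01 * rng.standard_normal(m)  # recovered (noisy) activations

# Lasso solves: min_x  1/(2m) * ||y - W x||_2^2 + alpha * ||x||_1
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10_000)
lasso.fit(W, y)
x_hat = lasso.coef_

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")
```

Real images, text, and audio are of course not sparse in the raw input domain; in practice one would recover coefficients in some transform basis and map back. The identity basis above is only to keep the sketch readable.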

📊 Key results — across image, text, and audio datasets

  • Up to 7× PSNR improvement over prior state-of-the-art attacks
  • Remains effective under differential privacy, gradient sparsification, quantization, and secure aggregation defenses
  • Backed by theoretical recovery error bounds via the Restricted Isometry Property (the classical RIP guarantee is sketched below)
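For readers who haven't met the Restricted Isometry Property: here is the flavor of guarantee it enables in classical compressed sensing (this is the standard result, not ARES's theorem, and the paper's exact bound may differ). If the sensing matrix $A$ satisfies the RIP of order $2s$ with constant $\delta_{2s} < \sqrt{2} - 1$, then the $\ell_1$-based solution $\hat{x}$ recovered from $y = Ax + e$ with $\|e\|_2 \le \varepsilon$ obeys

$$\|\hat{x} - x\|_2 \;\le\; C_0 \, \frac{\sigma_s(x)_1}{\sqrt{s}} \;+\; C_1 \, \varepsilon,$$

where $\sigma_s(x)_1$ is the $\ell_1$ error of the best $s$-sparse approximation to $x$, and $C_0, C_1$ depend only on $\delta_{2s}$. In words: recovery error degrades gracefully with the noise level and with how far the signal is from exactly sparse.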

🔓 The takeaway: intermediate activations are a seriously underestimated privacy risk in federated learning. Stronger defenses are urgently needed.

Huge thanks to my wonderful co-authors and supervisors: Leo Yu Zhang, Yanjun Zhang, Viet Vo, Tianqing Zhu, Shirui Pan, and Cong Wang — this was a true team effort!

Recommended citation: Gong, Zirui, Leo Yu Zhang, Yanjun Zhang, Viet Vo, Tianqing Zhu, Shirui Pan, and Cong Wang. "ARES: Scalable and Practical Gradient Inversion Attack in Federated Learning through Activation Recovery." In IEEE Symposium on Security and Privacy (S&P), 2026.