Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
IEEE S&P Presentation
Honored to have presented at the IEEE Symposium on Security and Privacy today! 🎉🎉
IEEE S&P 2025 Paper Introduction
Our paper “Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning” has been accepted at IEEE S&P 2025, one of the top-tier conferences in cybersecurity and privacy research! 🎉🎉
Publications
Agramplifier: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification
Published in IEEE Transactions on Information Forensics and Security (TIFS), 2023
The collaborative nature of federated learning (FL) exposes it to manipulation of local training data and local updates, known as the Byzantine poisoning attack. To counter this threat, many Byzantine-robust aggregation rules (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants. This paper introduces AGRAMPLIFIER, a novel approach that aims to simultaneously improve the robustness, fidelity, and efficiency of existing AGRs. The core idea of AGRAMPLIFIER is to amplify the “morality” of local updates by identifying the most repressive features of each gradient update, which sharpens the distinction between malicious and benign updates and consequently improves detection. Two realizations are proposed: AGRMP, which organizes local updates into patches and extracts the largest value from each patch, and AGRXAI, which leverages explainable AI methods to extract the gradient of the most activated features. AGRAMPLIFIER is universally compatible with existing Byzantine-robust mechanisms; equipping them with it enhances model robustness while maintaining fidelity and improving overall efficiency, as demonstrated by integrating it with all mainstream AGR mechanisms. Extensive evaluations on seven datasets from diverse domains, against seven representative poisoning attacks, consistently show gains in robustness, fidelity, and efficiency, averaging 40.08%, 39.18%, and 10.68%, respectively.
Recommended citation: Gong, Zirui, et al. "Agramplifier: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification." IEEE Transactions on Information Forensics and Security 19 (2023): 1241-1250.
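To make the AGRMP mechanism concrete, here is a minimal NumPy sketch of the patch-wise amplification the abstract describes. It is a sketch under illustrative assumptions: the patch size, the magnitude-based selection of the "largest value," and the helper name agrmp_amplify are mine, not details from the paper.

```python
import numpy as np

def agrmp_amplify(update: np.ndarray, patch_size: int = 4) -> np.ndarray:
    """Patch-wise max amplification of a flattened local update (illustrative).

    The update is split into fixed-size patches and only the
    largest-magnitude entry of each patch is kept, producing a
    compressed vector that a downstream Byzantine-robust AGR
    (e.g., Krum or trimmed mean) then inspects.
    """
    flat = update.ravel()
    pad = (-flat.size) % patch_size              # pad so patches divide evenly
    patches = np.pad(flat, (0, pad)).reshape(-1, patch_size)
    rows = np.arange(patches.shape[0])
    return patches[rows, np.abs(patches).argmax(axis=1)]

# Example: amplify each client's update before robust aggregation.
clients = [np.random.randn(1000) for _ in range(10)]
amplified = np.stack([agrmp_amplify(u) for u in clients])  # shape (10, 250)
```

Because the amplified vectors exaggerate the dominant features of each update, outlier clients separate more cleanly when the existing AGR compares them, which matches the intuition stated in the abstract.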
Production Evaluation of Citrus Fruits Based on the YOLOv5 Compressed by Knowledge Distillation
Published in the IEEE International Conference on Computer Supported Cooperative Work in Design (CSCWD), 2023
Accurate pre-harvest fruit yield estimation is essential for planning storage, logistics, and pricing in agriculture. However, existing computer vision methods often struggle with small fruit sizes, occlusion by leaves, and overlapping fruits, and many rely on large, resource-intensive models that are unsuitable for real-world mobile deployment. This work addresses these constraints by compressing a YOLOv5 detector with knowledge distillation.
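As a rough illustration of the compression technique named in the title, here is a minimal PyTorch sketch of response-based knowledge distillation, where a compact student is trained against a larger teacher's softened outputs. The temperature, loss weighting, and function name are illustrative assumptions, not details from the paper, which distills a detector rather than a plain classifier.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend hard-label cross-entropy with a softened teacher-matching term."""
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

Training the small student with this blended objective transfers the teacher's "dark knowledge" (its relative confidence across classes) so the compressed model retains accuracy at a fraction of the inference cost.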
Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning
Published in IEEE Symposium on Security and Privacy (S&P), 2025
Federated Ranking Learning (FRL) is a state-of-the-art FL framework that stands out for its communication efficiency and resilience to poisoning attacks. It diverges from the traditional FL framework in two ways: 1) it exchanges discrete rankings instead of gradient updates, significantly reducing communication costs and limiting the space available for malicious updates, and 2) it uses majority voting on the server side to establish the global ranking, ensuring that individual updates have minimal influence since each client contributes only a single vote. These features enhance the system’s scalability and position FRL as a promising paradigm for FL training. However, our analysis reveals that FRL is not inherently robust, as certain edges are particularly vulnerable to poisoning attacks. Through a theoretical investigation, we prove the existence of these vulnerable edges and establish lower and upper bounds for identifying them in each layer. Building on this finding, we introduce a novel local model poisoning attack against FRL, the Vulnerable Edge Manipulation (VEM) attack. VEM identifies and perturbs the most vulnerable edges in each layer and uses an optimization-based approach to maximize the attack’s impact. Extensive experiments on benchmark datasets show that our attack achieves an overall attack impact of 53.23% and is 3.7x more impactful than existing methods. Our findings highlight significant vulnerabilities in ranking-based FL systems and underscore the urgency of developing new robust FL frameworks.
Recommended citation: Gong, Zirui, et al. "Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning." arXiv preprint arXiv:2503.08976 (2025).
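To ground the server-side voting step that VEM exploits, here is a minimal sketch of majority-vote aggregation over edge rankings. It is illustrative only: representing each client's vote as a permutation of edge indices and scoring edges by summed rank position are my simplifications, not the paper's exact protocol.

```python
import numpy as np

def aggregate_rankings(client_rankings: list[np.ndarray], k: int) -> np.ndarray:
    """Majority-vote aggregation of per-client edge rankings for one layer.

    Each client submits a permutation of edge indices ordered from least
    to most important. An edge's score is its position summed across all
    rankings, so each client contributes exactly one vote per position;
    the k highest-scoring edges form the global (winning) edge set.
    """
    n_edges = client_rankings[0].size
    scores = np.zeros(n_edges)
    for ranking in client_rankings:
        scores[ranking] += np.arange(n_edges)  # position i adds score i
    return np.argsort(scores)[-k:]

# Example: 10 clients rank 8 edges; the 3 top-voted edges win.
rng = np.random.default_rng(0)
votes = [rng.permutation(8) for _ in range(10)]
print(aggregate_rankings(votes, k=3))
```

An attacker controlling even a few votes can flip edges whose scores sit near the top-k boundary; those boundary edges are exactly the vulnerable edges the abstract describes.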
Teaching
Programming Principles
Undergraduate/Postgraduate course, Griffith University, ICT, 2023
Programming Principles (1801ICT)
This foundational course introduces students to programming using Python. It covers:
- Fundamental programming constructs, such as loops, functions, and conditionals
- Data structures (lists, dictionaries, tuples)
- Problem solving, algorithm design, and debugging
I lead hands-on lab sessions, review code, assist with assignments, and help students strengthen their programming and computational thinking skills.
Trustworthy AI
Undergraduate/Postgraduate course, Griffith University, ICT, 2023
Trustworthy AI (3015ICT)
This course introduces key aspects of building trustworthy machine learning systems. Topics include:
- Adversarial machine learning and backdoor attacks
- Privacy-preserving techniques, including federated learning
- Fairness, accountability, and explainability in AI
I assist in tutorials, labs, and assessments, and help students implement attack and defense techniques in practice.
Ethical Hacking
Undergraduate/Postgraduate course, Griffith University, ICT, 2025
Ethical Hacking (3809ICT)
Ethical Hacking provides students with practical cybersecurity knowledge from an attacker’s perspective. Core topics include:
- Penetration testing and vulnerability assessment
- Secure coding, threat modeling, and defensive strategies
- Use of tools such as Burp Suite, Nmap, and Metasploit
My duties involve supporting students in lab environments, debugging technical issues, and guiding students through the ethical and legal considerations of offensive security work.