Novel Evasion Attacks Against Adversarial Training Defense for Smart Grid Federated Learning

Abstract

In the advanced metering infrastructure (AMI) of the smart grid, smart meters (SMs) are deployed to collect fine-grained electricity consumption data, enabling billing, load monitoring, and efficient energy management. However, some consumers commit fraud by hacking their meters, launching either traditional electricity theft or more sophisticated evasion attacks (EAs). EAs aim to illegally reduce electricity bills while deceiving theft detection mechanisms. Current methods for identifying such attacks raise privacy concerns because they require access to consumers' detailed consumption data to train the detection models. To address these concerns, federated learning (FL) has been proposed as a collaborative training approach across multiple consumers. Adversarial training (AT) has shown promise in countering evasion threats against machine learning models. This paper first investigates the susceptibility of traditional electricity theft classifiers trained via FL to EAs, for both independent and identically distributed (IID) and non-IID consumption data. It then examines the effectiveness of AT in securing the global electricity theft detector against EAs, assuming no misbehavior from the consumers participating in the FL process. Next, we introduce three novel attacks, namely Distillation, No-Adversarial-Sample-Training, and False-Labeling, which can be launched during the AT process to make the global model susceptible to evasion at inference time. Finally, extensive experiments are conducted to validate the severity of the proposed attacks. Our findings reveal that AT can counter EAs effectively when the FL participants are honest, but it fails when they act maliciously and launch our attacks. This work lays the foundation for future efforts to explore additional countermeasures, in conjunction with AT, to bolster the security and resilience of FL models against adversarial attacks in the context of electricity theft detection.
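To make the setting concrete, below is a minimal sketch of one FL round in which clients perform local adversarial training before aggregation. It assumes FGSM-style perturbations, a toy PyTorch classifier, and FedAvg aggregation; the names Detector, fgsm, local_adversarial_training, and fedavg, as well as the epsilon value and the honest/flip_labels toggles, are illustrative assumptions and not the paper's exact method.

```python
# Illustrative sketch only: one FedAvg round with local adversarial training.
# The dishonest-client toggles loosely mimic the abstract's
# No-Adversarial-Sample-Training and False-Labeling attacks (hypothetically).
import copy
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Toy binary electricity-theft classifier over a consumption window."""
    def __init__(self, n_features=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 2))
    def forward(self, x):
        return self.net(x)

def fgsm(model, x, y, eps=0.05):
    """Craft FGSM adversarial samples (assumed perturbation method)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def local_adversarial_training(global_model, data, labels,
                               honest=True, flip_labels=False, epochs=1):
    """One client's local update on a copy of the global model."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(epochs):
        # A malicious client may skip adversarial samples entirely...
        x_adv = fgsm(model, data, labels) if honest else data
        # ...or train on adversarial samples with flipped labels.
        y_adv = 1 - labels if flip_labels else labels
        opt.zero_grad()
        loss = nn.functional.cross_entropy(
            model(torch.cat([data, x_adv])), torch.cat([labels, y_adv]))
        loss.backward()
        opt.step()
    return model.state_dict()

def fedavg(updates):
    """Plain FedAvg: element-wise average of client state dicts."""
    avg = copy.deepcopy(updates[0])
    for key in avg:
        avg[key] = torch.stack([u[key].float() for u in updates]).mean(dim=0)
    return avg

# Example: one FL round with three clients, one of them malicious.
global_model = Detector()
x = torch.randn(32, 48); y = torch.randint(0, 2, (32,))
updates = [
    local_adversarial_training(global_model, x, y),                # honest
    local_adversarial_training(global_model, x, y),                # honest
    local_adversarial_training(global_model, x, y, honest=False),  # attacker
]
global_model.load_state_dict(fedavg(updates))
```

In this sketch, a malicious participant degrades the robustness of the aggregated global model simply by deviating from the agreed adversarial-training procedure (honest=False or flip_labels=True) while still submitting a plausible-looking update, which is the kind of behavior the paper's proposed attacks exploit.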

Citation (APA)

Bondok, A. H., Mahmoud, M., Badr, M. M., Fouda, M. M., Abdallah, M., & Alsabaan, M. (2023). Novel Evasion Attacks Against Adversarial Training Defense for Smart Grid Federated Learning. IEEE Access, 11, 112953–112972. https://doi.org/10.1109/ACCESS.2023.3323617
