
Channel Aware Adversarial Attacks are Not Robust


Bibliographic Details
Main Authors: Sinha, Sujata, Soysal, Alkan
Format: Conference Proceeding
Language: English
Description
Summary: Adversarial Machine Learning (AML) has shown significant success when applied to deep learning models across various domains. This paper explores channel-aware adversarial attacks on DNN-based modulation classification models within wireless environments. Our investigation focuses on the robustness of these attacks with respect to channel distribution and path-loss parameters. We examine two scenarios: one in which the attacker has instantaneous channel knowledge and another in which the attacker relies on statistical channel data. In both cases, we study channels subject to Rayleigh fading alone, Rayleigh fading combined with shadowing, and Rayleigh fading combined with both shadowing and path loss. Our findings reveal that the distance between the attacker and the legitimate receiver largely dictates the success of an AML attack. Without precise distance estimation, adversarial attacks are likely to fail.
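To make the three channel settings in the summary concrete, the sketch below simulates the power gain an attacker's perturbation would experience under Rayleigh fading alone, Rayleigh fading with log-normal shadowing, and both combined with distance-based path loss. This is a minimal illustration, not the paper's model: the parameter values (`sigma_db`, `alpha`, `d`) are illustrative assumptions.

```python
# Illustrative channel-gain models (assumed forms, not taken from the paper):
# Rayleigh fading alone, Rayleigh + log-normal shadowing, and
# Rayleigh + shadowing + power-law path loss.
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_gain(n):
    """Power gain |h|^2 of a unit-average-power Rayleigh fading channel."""
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return np.abs(h) ** 2

def shadowing(n, sigma_db=8.0):
    """Log-normal shadowing: zero-mean Gaussian in dB, converted to linear."""
    return 10 ** (sigma_db * rng.standard_normal(n) / 10)

def path_loss(d, alpha=3.0):
    """Simple power-law path loss for attacker-receiver distance d (assumed)."""
    return d ** (-alpha)

n = 10_000
g_rayleigh = rayleigh_gain(n)              # fading only
g_shadowed = g_rayleigh * shadowing(n)     # fading + shadowing
g_full = g_shadowed * path_loss(d=100.0)   # fading + shadowing + path loss

print(g_rayleigh.mean(), g_shadowed.mean(), g_full.mean())
```

The effective perturbation power at the legitimate receiver scales with these gains, and the path-loss factor depends strongly on the attacker-receiver distance `d`, which illustrates why the summary notes that imprecise distance estimation tends to make the attack fail.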
ISSN: 2155-7586
DOI: 10.1109/MILCOM58377.2023.10356294