Transformer-Based Attention Network for In-Vehicle Intrusion Detection

Bibliographic Details
Published in: IEEE Access, 2023, Vol. 11, pp. 55389-55403
Main Authors: Nguyen, Trieu Phong, Nam, Heungwoo, Kim, Daehee
Format: Article
Language: English
Description
Summary: Despite the significant advantages of communication between electronic control units, the controller area network (CAN) protocol is vulnerable to attacks owing to its weak security structure. Intrusion detection systems (IDS) are continually being developed to prevent vehicles from being targeted by malicious attacks. Recurrent neural networks (RNNs) have emerged as a prominent approach in this domain, contributing significantly to the evolution of IDS. Nonetheless, RNN-based methods are limited by their step-by-step processing: feature extraction at any given time step relies only on the hidden state of previously observed information, possibly leaving features missing from the context vector. In this paper, we propose a novel multi-class IDS using a transformer-based attention network (TAN) for the in-vehicle CAN bus. Our model builds on the self-attention mechanism, removing RNNs and classifying attacks into multiple categories. Furthermore, the proposed model can detect replay attacks by aggregating sequential CAN IDs. The experimental results indicate that TAN is more efficient than the baselines across different input data types and datasets. Notably, although sequential CAN IDs are used, our model can identify intrusion messages without requiring message labeling. Finally, by inheriting the advantages of transformers, TAN employs transfer learning to improve the performance of models trained on small datasets from other car models.
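The summary's core contrast is that self-attention lets every position in a window of sequential CAN IDs attend to every other position at once, rather than passing information through a hidden state step by step as an RNN does. The following is a minimal NumPy sketch of that scaled dot-product self-attention mechanism, not the authors' actual TAN implementation; the embedding dimensions, window length, and random weights are all hypothetical placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of embedded CAN IDs.

    X: (seq_len, d_model) sequence of embeddings.
    Wq, Wk, Wv: (d_model, d_k) projection matrices.
    Returns: (seq_len, d_k) context-aware representations.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Each row of `scores` lets one position weigh ALL positions in the
    # window, unlike an RNN hidden state that only summarizes the past.
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
d_model, d_k, seq = 16, 8, 29            # hypothetical sizes for illustration
X = rng.normal(size=(seq, d_model))      # stand-in for embedded CAN IDs
W = [rng.normal(size=(d_model, d_k)) for _ in range(3)]
out = self_attention(X, *W)
print(out.shape)  # -> (29, 8)
```

In a full model along the lines described in the summary, these per-position outputs would feed a classification head that assigns each window (or message) to one of several attack categories.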
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3282110