
A Deep Learning Framework for Decoding Motor Imagery Tasks of the Same Hand Using EEG Signals

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 109612-109627
Main Authors: Alazrai, Rami, Abuhijleh, Motaz, Alwanni, Hisham, Daoud, Mohammad I.
Format: Article
Language: English
Description
Summary: This study aims to increase the control dimensionality of electroencephalography (EEG)-based brain-computer interface (BCI) systems by distinguishing between motor imagery (MI) tasks associated with fine parts of the same hand, such as the wrist and fingers. This can in turn enable individuals with transradial amputations to better control prosthetic hands and to perform various dexterous hand tasks. In particular, we present a novel three-stage framework for decoding MI tasks of the same hand, comprising input, feature extraction, and classification stages. At the input stage, we employ a quadratic time-frequency distribution (QTFD) to analyze the EEG signals in the joint time-frequency domain. The QTFD transforms the EEG signals into a set of two-dimensional (2D) time-frequency images (TFIs) that describe the distribution of the energy encapsulated within the EEG signals in terms of time, frequency, and electrode position. At the feature extraction stage, we design a new convolutional neural network (CNN) architecture that automatically analyzes and extracts salient features from the TFIs created at the input stage. Finally, the features obtained at the feature extraction stage are passed to the classification stage, which assigns each input TFI to one of the eleven MI tasks considered in the current study. The performance of the proposed framework is evaluated using EEG signals acquired from eighteen able-bodied subjects and four transradial amputee subjects while they performed eleven MI tasks of the same hand. The average classification accuracies obtained for the able-bodied and transradial amputee subjects are 73.7% and 72.8%, respectively. Moreover, the proposed framework yields 14.5% and 11.2% improvements over the results obtained for the able-bodied and transradial amputee subjects, respectively, using conventional QTFD-based handcrafted features and a multi-class support vector machine classifier. These results demonstrate the efficacy of the proposed framework in decoding MI tasks associated with the same hand for both able-bodied and transradial amputee subjects.
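
The following Python sketch gives a rough sense of the three-stage pipeline summarized above. The QTFD kernel (a pseudo Wigner-Ville distribution here), the CNN layout (TfiCnn), and all array sizes are illustrative assumptions; the record does not specify the authors' exact QTFD or network architecture.

import numpy as np
import torch
import torch.nn as nn
from scipy.signal import hilbert

def wigner_ville_tfi(x, n_freq=64):
    # Stage 1 (input): a pseudo Wigner-Ville distribution of one EEG channel,
    # returning an (n_freq, len(x)) time-frequency image (TFI).
    z = hilbert(x)  # analytic signal removes negative-frequency components
    n = len(z)
    tfi = np.zeros((n_freq, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t, n_freq - 1)
        tau = np.arange(-tau_max, tau_max + 1)
        kernel = z[t + tau] * np.conj(z[t - tau])  # instantaneous autocorrelation
        tfi[:, t] = np.abs(np.fft.fft(kernel, 2 * n_freq)[:n_freq])
    return tfi

class TfiCnn(nn.Module):
    # Stages 2-3 (feature extraction + classification): a hypothetical small
    # CNN over stacked TFIs; the paper's actual architecture is not given here.
    def __init__(self, n_channels, n_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, electrodes, freq, time)
        return self.classifier(self.features(x).flatten(1))

# Usage: compute one TFI per electrode and stack them as CNN input channels.
eeg = np.random.randn(8, 256)  # synthetic stand-in: 8 electrodes, 256 samples
tfis = np.stack([wigner_ville_tfi(ch) for ch in eeg])
batch = torch.tensor(tfis, dtype=torch.float32).unsqueeze(0)
logits = TfiCnn(n_channels=8)(batch)
print(logits.shape)  # torch.Size([1, 11]) -> scores for the eleven MI tasks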
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2934018