Diversity in a signal-to-image transformation approach for EEG-based motor imagery task classification


Yilmaz B. H., Yilmaz C. M., Kose C.

MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, vol.58, no.2, pp.443-459, 2020 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 58 Issue: 2
  • Publication Date: 2020
  • DOI Number: 10.1007/s11517-019-02075-x
  • Journal Name: MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, ABI/INFORM, Agricultural & Environmental Science Database, Applied Science & Technology Source, BIOSIS, Biotechnology Research Abstracts, Business Source Elite, Business Source Premier, CINAHL, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, MEDLINE
  • Page Numbers: pp.443-459
  • Keywords: Angle-amplitude transformation, Angle-amplitude graph images, EEG, Motor imagery, Classification, COMMON SPATIAL-PATTERNS, ALGORITHMS
  • Karadeniz Technical University Affiliated: Yes

Abstract

Motor imagery-based brain-computer interfaces (BCIs) have been developing rapidly in recent years. In these systems, electroencephalogram (EEG) signals are recorded while a subject imagines a motor movement, such as moving the right or left hand. In this paper, we sought to validate and enhance our previously proposed angle-amplitude transformation (AAT) technique, a simple signal-to-image transformation approach for the classification of EEG and MEG signals. For this purpose, we diversified our previous method and proposed four new angle-amplitude graph (AAG) representation methods for the AAT transformation. These modifications concern aspects such as the use of different left/right side-changing points at different distances. To confirm the validity of the proposed methods, we performed experiments on BCI Competition III Dataset IIIa, a benchmark dataset widely used for EEG-based multi-class motor imagery tasks. The proposed procedure can be summarized as follows: (i) convert EEG signals to AAG images using the proposed AAT approaches; (ii) extract image features with Scale Invariant Feature Transform (SIFT)-based Bag of Visual Words (BoW); and (iii) classify the features with the k-Nearest Neighbor (k-NN) algorithm. Experimental results showed that the changes to the baseline AAT approach improved classification performance on Dataset IIIa, reaching an accuracy of 96.50% for the two-class problem (left/right hand movement imagination) and 97.99% for the four-class problem (left/right hand, foot, and tongue movement imagination). These gains are mainly attributable to the effective enhancements of the AAG image representations.
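
For readers who want to prototype the three-step pipeline described above, the sketch below is a minimal Python illustration, not the authors' implementation: the AAT step is only a placeholder (the exact angle-amplitude graph construction is defined in the paper), while the SIFT-based BoW feature extraction and k-NN classification use standard OpenCV and scikit-learn calls. Names such as aat_to_image and the vocabulary size of 200 visual words are illustrative assumptions.

import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier


def aat_to_image(eeg_trial):
    """Placeholder for the angle-amplitude transformation (AAT).

    The actual method maps an EEG trial to an angle-amplitude graph (AAG)
    image; here the signal is simply rasterized onto a 128x128 grid so the
    rest of the pipeline is runnable.
    """
    img = np.zeros((128, 128), dtype=np.uint8)
    sig = np.asarray(eeg_trial, dtype=float).ravel()
    xs = np.linspace(0, 127, sig.size).astype(int)
    ys = np.interp(sig, (sig.min(), sig.max()), (127, 0)).astype(int)
    img[ys, xs] = 255
    return img


def sift_descriptors(img):
    """Extract SIFT descriptors from one AAG image (may be empty)."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), dtype=np.float32)


def bow_histogram(desc, vocab):
    """Quantize descriptors against a k-means vocabulary into a normalized
    bag-of-visual-words histogram."""
    hist = np.zeros(vocab.n_clusters)
    if len(desc):
        for word in vocab.predict(desc.astype(np.float64)):
            hist[word] += 1
    return hist / max(hist.sum(), 1)


# Usage sketch (assumes train_trials, train_labels, and test_trials exist):
# train_imgs = [aat_to_image(t) for t in train_trials]
# train_descs = [sift_descriptors(i) for i in train_imgs]
# vocab = KMeans(n_clusters=200).fit(np.vstack(train_descs))
# X_train = np.array([bow_histogram(d, vocab) for d in train_descs])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, train_labels)
# X_test = np.array([bow_histogram(sift_descriptors(aat_to_image(t)), vocab)
#                    for t in test_trials])
# predictions = clf.predict(X_test)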