A novel multimodal EEG-image fusion approach for emotion recognition: introducing a multimodal KMED dataset


HATİPOĞLU YILMAZ B., KÖSE C., YILMAZ Ç. M.

Neural Computing and Applications, 2025 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Publication Date: 2025
  • DOI Number: 10.1007/s00521-024-10925-5
  • Journal Name: Neural Computing and Applications
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, Index Islamicus, INSPEC, zbMATH
  • Keywords: DEAP dataset, EEG, Face images, feature-level fusion, KMED dataset, Multimodal emotion recognition
  • Karadeniz Technical University Affiliated: Yes

Abstract

Nowadays, bio-signal-based emotion recognition has become a popular research topic. However, several problems must be solved before emotion-based systems can be realized. We therefore propose a feature-level fusion (FLF) method for multimodal emotion recognition (MER). In this method, EEG signals are first transformed into signal images named angle amplitude graphs (AAGs). Second, facial images are recorded simultaneously with the EEG signals, and peak frames are selected from among all the recorded facial images. These modalities are then fused at the feature level. Finally, all feature extraction and classification experiments are evaluated on these fused features. In this work, we also introduce a new multimodal benchmark dataset, KMED, which includes EEG signals and facial videos from 14 participants. Experiments were carried out on the newly introduced KMED dataset and the publicly available DEAP dataset. On the KMED dataset, we achieved the highest classification accuracy of 89.95% with the k-Nearest Neighbor algorithm on the (3-disgusting and 4-relaxing) class pair. On the DEAP dataset, we achieved the highest accuracy of 92.44% with support vector machines on arousal, compared to the results of previous works. These results demonstrate that the proposed feature-level fusion approach has considerable potential for MER systems. Additionally, the introduced KMED benchmark dataset will facilitate future studies of multimodal emotion recognition.
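
The fusion step described in the abstract can be sketched as follows. This is a minimal illustration of feature-level fusion, not the paper's implementation: the extractors `extract_aag_features` and `extract_face_features` are hypothetical placeholders (the actual AAG transform and peak-frame selection are defined in the paper), and the data are random stand-ins for real EEG trials and facial frames.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical placeholder extractors: the paper's actual AAG transform
# and peak-frame selection are not reproduced here.
def extract_aag_features(eeg_trial: np.ndarray) -> np.ndarray:
    """Stand-in for features computed from the AAG signal image of one EEG trial."""
    return eeg_trial.flatten()

def extract_face_features(peak_frame: np.ndarray) -> np.ndarray:
    """Stand-in for features computed from the selected peak facial frame."""
    return peak_frame.flatten()

def fuse_features(eeg_trials, peak_frames) -> np.ndarray:
    """Feature-level fusion: concatenate per-trial feature vectors from both modalities."""
    fused = [np.concatenate([extract_aag_features(e), extract_face_features(f)])
             for e, f in zip(eeg_trials, peak_frames)]
    return np.stack(fused)

# Toy data standing in for real recordings (100 trials, binary class pair).
rng = np.random.default_rng(0)
eeg_trials = rng.standard_normal((100, 32, 128))   # trials x channels x samples
peak_frames = rng.standard_normal((100, 16, 16))   # trials x height x width
labels = rng.integers(0, 2, size=100)

# Classify the fused feature vectors, e.g., with k-NN as used on KMED.
X = fuse_features(eeg_trials, peak_frames)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, labels, cv=5)
print(f"Mean CV accuracy: {scores.mean():.3f}")
```

With random inputs the accuracy is near chance; the point is only the structure: per-modality feature vectors are concatenated into one vector per trial before a single classifier (k-NN or SVM in the paper) is trained on the fused representation.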