Energy-Time Analysis of Convolutional Neural Networks Distributed on Heterogeneous Clusters for EEG Classification

Jose Escobar J., Ortega J., Damas M., Kiziltepe R., Gan J. Q.

15th International Work-Conference on Artificial Neural Networks (IWANN), Spain, 12-14 June 2019, vol. 11507, pp. 895-907

  • Publication Type: Conference Paper / Full Text
  • Volume: 11507
  • Doi Number: 10.1007/978-3-030-20518-8_74
  • Country: Spain
  • Page Numbers: pp.895-907
  • Keywords: CPU-GPU clusters, Energy-time analysis, EEG classification, Convolutional Neural Networks, Hybrid Master-worker algorithms


Training a deep neural network usually entails a high computational cost. Nowadays, the most common way to carry out this task is to use GPUs, owing to their efficiency in implementing the algorithms involved in this kind of task. However, training several neural networks, each with different hyperparameters, remains a very demanding job. Typically, clusters include one or more GPUs that could be used for deep learning. This paper proposes and analyzes a distributed parallel procedure to train multiple Convolutional Neural Networks (CNNs) for EEG classification on a heterogeneous CPU-GPU cluster and on a desktop PC. The procedure is implemented in C++ with the MPI library to dynamically distribute the hyperparameters among the nodes, each of which trains the corresponding CNN using Python, Keras, and TensorFlow. The proposed algorithm has been analyzed in terms of running time and energy consumption, showing that the procedure scales linearly as more nodes are added and that the lowest running time is then obtained; the desktop PC, however, provides the best energy results.
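The dynamic master-worker distribution described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: their master is written in C++ with MPI, whereas here a shared queue and threads stand in for MPI ranks and messages, and the Keras/TensorFlow training run on each node is replaced by a stub. The hyperparameter names (learning rate, batch size) and the fake accuracy formula are assumptions for demonstration only.

```python
# Hypothetical sketch of a dynamic master-worker hyperparameter distribution:
# the master holds a queue of configurations, and each worker pulls a new one
# as soon as it finishes training, which balances load across uneven nodes.
import queue
import threading
from itertools import product

def train_cnn(config):
    # Stand-in for the Keras/TensorFlow training run on a node; returns a
    # fabricated "accuracy" so the sketch is deterministic and self-contained.
    lr, batch = config
    return {"config": config, "accuracy": 0.5 + lr * batch / 1000.0}

def worker(tasks, results):
    # Each worker repeatedly requests the next configuration until none remain.
    while True:
        try:
            config = tasks.get_nowait()
        except queue.Empty:
            return
        results.put(train_cnn(config))

def run(hyperparams, n_workers=4):
    # Master side: enqueue all configurations, launch workers, collect results.
    tasks, results = queue.Queue(), queue.Queue()
    for config in hyperparams:
        tasks.put(config)
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results.get() for _ in range(len(hyperparams))]

if __name__ == "__main__":
    # Illustrative grid: learning rate x batch size.
    grid = list(product([0.001, 0.01], [32, 64]))
    for r in sorted(run(grid), key=lambda r: -r["accuracy"]):
        print(r)
```

In the paper's setting the pull-based scheme matters because the cluster is heterogeneous: faster CPU-GPU nodes naturally process more configurations than slower ones, instead of each node receiving a fixed static share.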