15th International Work-Conference on Artificial Neural Networks (IWANN), Spain, 12-14 June 2019, vol. 11507, pp. 895-907
Training a deep neural network usually entails a high computational cost. Nowadays, the most common way to carry out this task is with GPUs, owing to their efficiency at running the algorithms involved in this kind of task. However, training several neural networks, each with different hyperparameters, remains very demanding. Clusters typically include one or more GPUs that can be used for deep learning. This paper proposes and analyzes a distributed parallel procedure to train multiple Convolutional Neural Networks (CNNs) for EEG classification, both on a heterogeneous CPU-GPU cluster and on a desktop PC. The procedure is implemented in C++ with the MPI library to dynamically distribute the hyperparameters among the nodes, each of which trains the corresponding CNN using Python, Keras, and TensorFlow. The proposed algorithm has been analyzed in terms of running time and energy consumption, showing that the procedure scales linearly as more nodes are used, with the largest configuration achieving the lowest running time; the desktop PC, however, provides the best energy results.