Neural Computing and Applications, vol. 38, no. 3, 2026 (Scopus)
Activation functions are a fundamental component of deep neural networks: by introducing nonlinearity between layers, they enable the learning of complex data representations. Existing activation functions such as ReLU, ELU, Swish, and Mish have achieved strong results across a variety of tasks. However, ReLU suffers from the "dead neuron" problem, while functions such as Swish and Mish can cause numerical instabilities in deep architectures. In this paper, a new activation function called SSLU (Skew Student's Linear Unit) is introduced that aims to overcome these limitations. SSLU is based on the Skew Student's t-distribution and is designed to be robust against vanishing gradients, dead neurons, and saturation. The function is differentiable, non-saturating in both the positive and negative input regions, zero-centered, monotonic, and quasi-identity-preserving. The effectiveness of SSLU is demonstrated through experimental analysis on a range of deep learning architectures. In tests on convolutional neural network architectures such as ResNet50 and DenseNet121, SSLU outperformed both standard and recent activation functions. Significant improvements in mean average precision (mAP) were also observed in object detection tasks with the YOLOv8 architecture. These findings show that SSLU is not only theoretically sound but also a practical alternative for modern deep learning applications.
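The abstract does not give the SSLU formula, so as a purely illustrative sketch, the following shows how a distribution-gated activation in the same family as Swish and GELU can be built by gating the input with the CDF of a Student's t distribution (here with ν = 1, which has a closed form). The function name `sslu_sketch` and the `loc`/`scale` parameters are assumptions for illustration only and are not the SSLU definition from the paper; this sketch shares some but not all of SSLU's stated properties (it is zero at the origin and quasi-identity for large positive inputs, but it is not guaranteed monotonic).

```python
import math

def t_cdf_nu1(x: float) -> float:
    """CDF of the Student's t distribution with nu = 1 (Cauchy); closed form."""
    return 0.5 + math.atan(x) / math.pi

def sslu_sketch(x: float, loc: float = 0.0, scale: float = 1.0) -> float:
    """Hypothetical t-CDF-gated activation in the spirit of Swish/GELU.

    NOT the SSLU definition from the paper (the abstract does not give it).
    `loc` and `scale` shift and stretch the gate, loosely mimicking the
    asymmetry a skewed distribution would introduce.
    """
    return x * t_cdf_nu1((x - loc) / scale)

# The gate approaches 1 for large positive inputs, so the function behaves
# quasi-identically there; at 0 the output is exactly 0.
print(sslu_sketch(0.0))   # 0.0
print(sslu_sketch(10.0))  # close to, but slightly below, 10
```

In practice, an activation of this shape would be implemented with tensor operations (e.g. `torch.atan`) so it broadcasts over batches and remains differentiable under autograd; the scalar version above only illustrates the functional form.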