Enhanced 3D DenseNet with CDC for Multimodal Brain Tumor Segmentation



Berkcan B., KAYIKÇIOĞLU T.

Applied Sciences (Switzerland), vol.16, no.3, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 16 Issue: 3
  • Publication Date: 2026
  • DOI: 10.3390/app16031572
  • Journal Name: Applied Sciences (Switzerland)
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, Directory of Open Access Journals
  • Keywords: 3D, brain tumor segmentation, BraTS 2023, CDC, deep learning, DenseNet, glioma, hybrid loss function, medical image segmentation
  • Karadeniz Technical University Affiliated: Yes

Abstract

Precise tumor segmentation in multimodal MRI is crucial for glioma diagnosis and treatment planning, yet deep learning models still struggle with irregular boundaries and severe class imbalance under computational constraints. An Enhanced 3D DenseNet with CDC architecture was proposed, integrating Central Difference Convolution (CDC), attention gates, and Atrous Spatial Pyramid Pooling (ASPP) for brain tumor segmentation on the BraTS 2023-GLI dataset. CDC layers enhance boundary sensitivity by combining intensity-level semantics with gradient-level features. Attention gates selectively emphasize relevant encoder features in the skip connections, while ASPP captures multi-scale context through parallel dilation rates. A three-level hybrid loss function was introduced, consisting of a region-based Dice loss for volumetric overlap, a GPU-native 3D Sobel boundary loss for edge precision, and a class-weighted focal loss for handling class imbalance. The proposed model achieved a mean Dice score of 91.30% (ET: 87.84%, TC: 92.73%, WT: 93.34%) on the test set. Notably, these results were achieved with approximately 3.7 million parameters, a 17–76x reduction compared to the 50–200 million parameters required by transformer-based approaches. The Enhanced 3D DenseNet with CDC architecture demonstrates that integrating gradient-sensitive convolutions, attention mechanisms, multi-scale feature extraction, and multi-level loss optimization achieves competitive segmentation performance with significantly reduced computational requirements.
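The central-difference idea mentioned in the abstract can be sketched as follows. This is a minimal PyTorch illustration of a 3D CDC layer in the common formulation (vanilla convolution minus a weighted central-difference term); the channel sizes and the blending factor `theta` are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDC3d(nn.Module):
    """Central Difference Convolution (3D), sketched.

    Blends a vanilla 3D convolution (intensity-level semantics) with a
    central-difference term (gradient-level features). With theta = 0 the
    layer reduces to a plain Conv3d.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size,
                              padding=padding, bias=False)
        self.theta = theta  # illustrative default, not the paper's value

    def forward(self, x):
        out = self.conv(x)
        if self.theta == 0:
            return out
        # The central-difference term equals the kernel's spatial sum
        # applied at the centre voxel, i.e. a 1x1x1 convolution.
        kernel_sum = self.conv.weight.sum(dim=(2, 3, 4), keepdim=True)
        out_cd = F.conv3d(x, kernel_sum)
        return out - self.theta * out_cd
```

Because the difference term is just a 1x1x1 convolution with the summed kernel, the layer adds no extra parameters over a standard Conv3d.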
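The three-level hybrid loss can likewise be sketched in PyTorch. This is a hedged approximation: the component weights `w`, the focal parameters `alpha` and `gamma`, and the boundary term are assumptions for illustration; in particular, simple finite differences stand in here for the paper's full 3D Sobel kernels.

```python
import torch
import torch.nn.functional as F

def dice_loss(probs, target, eps=1e-6):
    # Region-based term: soft Dice over the spatial dims of (N, C, D, H, W).
    inter = (probs * target).sum(dim=(2, 3, 4))
    denom = probs.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def grad_mag3d(v):
    # Finite-difference gradient magnitude (stand-in for 3D Sobel kernels).
    dz = F.pad(v[:, :, 1:] - v[:, :, :-1], (0, 0, 0, 0, 0, 1))
    dy = F.pad(v[:, :, :, 1:] - v[:, :, :, :-1], (0, 0, 0, 1))
    dx = F.pad(v[..., 1:] - v[..., :-1], (0, 1))
    return torch.sqrt(dz**2 + dy**2 + dx**2 + 1e-8)

def boundary_loss(probs, target):
    # Edge-precision term: match gradient magnitudes of prediction and mask.
    return F.l1_loss(grad_mag3d(probs), grad_mag3d(target))

def focal_loss(probs, target, alpha=0.75, gamma=2.0):
    # Class-weighted focal term for imbalance (alpha, gamma are assumed).
    pt = probs * target + (1.0 - probs) * (1.0 - target)
    w_cls = alpha * target + (1.0 - alpha) * (1.0 - target)
    return (-w_cls * (1.0 - pt) ** gamma
            * torch.log(pt.clamp_min(1e-6))).mean()

def hybrid_loss(probs, target, w=(1.0, 0.5, 1.0)):
    # Weighted sum of the three levels; weights here are illustrative.
    return (w[0] * dice_loss(probs, target)
            + w[1] * boundary_loss(probs, target)
            + w[2] * focal_loss(probs, target))
```

`probs` is assumed to be post-sigmoid per-class probabilities and `target` a binary mask of the same shape, one channel per tumor region (ET/TC/WT).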