Engineering Applications of Artificial Intelligence, cilt.133, 2024 (SCI-Expanded)
Stain normalization is a key preprocessing step that has been shown to significantly improve the segmentation and classification performance of computer-aided diagnosis (CAD) systems. Numerous recent approaches have demonstrated significant progress in stain normalization; however, most of them are based on Generative Adversarial Networks. In this paper, we propose a novel vision transformer-based model, termed StainSWIN, that combines the strengths of the Swin transformer with a super-resolution architecture to achieve improved performance on the stain normalization task. The key concept behind StainSWIN is the use of Swin transformer blocks that exploit content-based interactions to capture long-range dependencies. The proposed model is built around two key blocks: the residual stain Swin block (ResStainSWIN) and the Swin transformer block (STB). StainSWIN has a residual super-resolution architecture, in which the high-level features extracted by the STB are combined with the output of the ResStainSWIN block. The performance of the StainSWIN model was compared with other state-of-the-art methods on the widely used MITOS-ATYPIA14 histopathology dataset. StainSWIN outperformed the other methods in stain normalization by a large margin in terms of the PSNR, SSIM, and RMSE metrics, achieving a PSNR of 26.667 ± 3.492, an SSIM of 0.943 ± 0.037, and an RMSE of 6.206 ± 1.973. Additionally, we evaluated the model's impact on segmentation performance on the MICCAI GlaS'16 dataset. The results demonstrate a 4.3% improvement in segmentation accuracy, attributed to the reduction in stain color variation. The proposed method can greatly assist CAD systems in maintaining consistent performance despite color variations.
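As a point of reference for the reported numbers, the sketch below shows how two of the evaluation metrics, RMSE and PSNR, are conventionally computed between a normalized image and its reference; the function names are illustrative and not from the paper, and SSIM (a windowed structural comparison, typically computed with `skimage.metrics.structural_similarity`) is omitted for brevity.

```python
import math

def rmse(ref, out):
    """Root-mean-square error between two equal-length pixel sequences."""
    n = len(ref)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(ref, out)) / n)

def psnr(ref, out, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    err = rmse(ref, out)
    if err == 0:
        return float("inf")  # identical images
    return 20.0 * math.log10(max_val / err)
```

For example, a uniform pixel offset of 10 on 8-bit images gives an RMSE of 10 and a PSNR of 20·log10(255/10) ≈ 28.13 dB, which situates the paper's reported PSNR of 26.667 and RMSE of 6.206 on the same scales.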