A novel approach with the dynamic decision mechanism (DDM) in multi-focus image fusion


Aymaz S., KÖSE C., AYMAZ Ş.

MULTIMEDIA TOOLS AND APPLICATIONS, vol.82, no.2, pp.1821-1871, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 82 Issue: 2
  • Publication Date: 2023
  • DOI: 10.1007/s11042-022-13323-y
  • Journal Name: MULTIMEDIA TOOLS AND APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, FRANCIS, ABI/INFORM, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, INSPEC, zbMATH
  • Page Numbers: pp.1821-1871
  • Keywords: Multi-focus, Image fusion, Deep learning, Focus metrics, CNN, ALGORITHM, TRANSFORM, FRAMEWORK, NETWORKS, WAVELET
  • Affiliated with Karadeniz Technical University: Yes

Abstract

Multi-focus image fusion merges multiple source images of the same scene, captured with different focus settings, into a single, more informative image. In this paper, a novel approach is proposed to create this single image. The method's primary stages are creating initial decision maps, applying morphological operations, and obtaining the fused image with the proposed fusion rule. The initial decision maps consist of label values marking each region as focused or non-focused. To determine these values, the first decision is made by feeding the image patches obtained from each source image to a modified CNN architecture. Where the modified CNN architecture is unstable in determining label values, that is, in regions where every image patch is labelled as non-focused, a new improvement mechanism based on focus measurements is applied. Then, the initial decision map obtained for each source image is refined by morphological operations. Finally, the dynamic decision mechanism (DDM) fusion rule, designed around the label values in the decision maps, is applied to minimize the erroneous information that classification errors would otherwise introduce into the fused image. At the end of these steps, the final fused image is obtained. In addition, a rich dataset containing two or more source images per scene is created from the COCO dataset. The method's success is measured with objective and subjective metrics, and the visual and quantitative results show that the proposed method produces high-quality fused images.
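
To make the described pipeline concrete, the sketch below approximates its three stages (initial decision maps, morphological refinement, decision-driven fusion) in Python. It is only an illustration under stated assumptions, not the paper's implementation: the modified CNN classifier and the focus-measurement improvement mechanism are replaced by a hypothetical variance-of-Laplacian focus score (`patch_focus_score`), the DDM fusion rule is reduced to a simple map-driven pixel selection, and the patch size `PATCH` is an assumption not given in the abstract.

```python
# Illustrative sketch of a patch-wise decision-map fusion pipeline.
# NOTE: the paper uses a modified CNN plus focus measurements and a DDM fusion
# rule; here those are replaced by simple stand-ins for demonstration only.
import numpy as np
from scipy import ndimage

PATCH = 16  # assumed patch size (not specified in the abstract)


def patch_focus_score(patch: np.ndarray) -> float:
    """Stand-in focus measure: variance of the Laplacian response."""
    lap = ndimage.laplace(patch.astype(np.float64))
    return float(lap.var())


def initial_decision_map(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Label each patch of img_a as focused (1) or non-focused (0) relative to img_b."""
    h, w = img_a.shape
    dmap = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            pa = img_a[y:y + PATCH, x:x + PATCH]
            pb = img_b[y:y + PATCH, x:x + PATCH]
            focused = patch_focus_score(pa) >= patch_focus_score(pb)
            dmap[y:y + PATCH, x:x + PATCH] = int(focused)
    return dmap


def refine_decision_map(dmap: np.ndarray) -> np.ndarray:
    """Morphological opening and closing to suppress isolated mislabelled patches."""
    struct = np.ones((PATCH, PATCH), dtype=bool)
    refined = ndimage.binary_opening(dmap.astype(bool), structure=struct)
    refined = ndimage.binary_closing(refined, structure=struct)
    return refined.astype(np.uint8)


def fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Pixel selection driven by the refined decision map (simplified fusion rule)."""
    dmap = refine_decision_map(initial_decision_map(img_a, img_b))
    return np.where(dmap == 1, img_a, img_b)
```

In this simplified form, the decision map directly selects pixels from whichever source image is judged more focused; the paper's DDM rule instead weighs the label values across the decision maps to limit the effect of classification errors on the fused result.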