A novel approach with the dynamic decision mechanism (DDM) in multi-focus image fusion

Aymaz S., KÖSE C., AYMAZ Ş.

MULTIMEDIA TOOLS AND APPLICATIONS, vol.82, no.2, pp.1821-1871, 2023 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 82 Issue: 2
  • Publication Date: 2023
  • Doi Number: 10.1007/s11042-022-13323-y
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, FRANCIS, ABI/INFORM, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, INSPEC, zbMATH
  • Page Numbers: pp.1821-1871
  • Keywords: Multi-focus, Image fusion, Deep learning, Focus metrics, CNN, ALGORITHM, TRANSFORM, FRAMEWORK, NETWORKS, WAVELET
  • Karadeniz Technical University Affiliated: Yes


Multi-focus image fusion merges multiple source images of the same scene, each captured with a different focus setting, into a single, more informative image. This paper proposes a novel approach to creating this single image. The method's primary stages are creating initial decision maps, applying morphological operations, and obtaining the fused image with the proposed fusion rule. Initial decision maps consist of label values marking each region as focused or non-focused. To determine these values, image patches extracted from each source image are first fed to a modified CNN architecture. Where the modified CNN is unstable in assigning labels, a new improvement mechanism based on focus measurements is applied to those unstable regions in which every image patch is labelled as non-focused. The initial decision maps obtained for each source image are then refined by morphological operations. Finally, the dynamic decision mechanism (DDM) fusion rule, designed around the label values in the decision maps, is applied to minimize the misleading information that classification errors would otherwise introduce into the fused image. At the end of these steps, the final fused image is obtained. In addition, a rich dataset containing two or more source images per scene is created from the COCO dataset. The method's success is measured with objective and subjective metrics; the visual and quantitative results show that the proposed method produces high-quality fused images.
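The overall pipeline described in the abstract can be sketched in a minimal, self-contained form. The sketch below is not the authors' method: a simple variance-of-second-differences focus metric stands in for the modified CNN classifier, and a neighbourhood majority vote stands in for the morphological refinement. All function names (`focus_measure`, `initial_decision_map`, `majority_smooth`, `fuse`) and parameters such as the 16-pixel patch size are hypothetical choices for illustration.

```python
import numpy as np

def focus_measure(patch):
    # Simple sharpness proxy (assumed stand-in for the CNN's focused/non-focused
    # decision): variance of second differences along both axes.
    gy = np.diff(patch, n=2, axis=0)
    gx = np.diff(patch, n=2, axis=1)
    return gy.var() + gx.var()

def initial_decision_map(img_a, img_b, patch=16):
    # Label each patch of image A as focused (1) or non-focused (0)
    # relative to the corresponding patch of image B.
    dm = np.zeros((img_a.shape[0] // patch, img_a.shape[1] // patch), dtype=np.uint8)
    for i in range(dm.shape[0]):
        for j in range(dm.shape[1]):
            pa = img_a[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            pb = img_b[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            dm[i, j] = 1 if focus_measure(pa) >= focus_measure(pb) else 0
    return dm

def majority_smooth(dm):
    # Morphological-style cleanup of the decision map: each label is replaced
    # by the majority label of its 3x3 neighbourhood (edge-padded).
    padded = np.pad(dm, 1, mode='edge')
    out = np.zeros_like(dm)
    for i in range(dm.shape[0]):
        for j in range(dm.shape[1]):
            out[i, j] = 1 if padded[i:i+3, j:j+3].sum() >= 5 else 0
    return out

def fuse(img_a, img_b, dm, patch=16):
    # Patch-wise fusion rule: copy each patch from whichever source image
    # the decision map marks as focused there.
    fused = np.empty_like(img_a)
    for i in range(dm.shape[0]):
        for j in range(dm.shape[1]):
            src = img_a if dm[i, j] else img_b
            fused[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = \
                src[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
    return fused
```

For two 32x32 inputs where the left half of one image and the right half of the other are sharp, `initial_decision_map` yields a 2x2 map selecting the sharp half of each source, and `fuse` assembles a fully sharp result. The paper's DDM fusion rule additionally reconciles conflicting or unstable labels across the per-source maps, which this sketch omits.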