Multi-focus image fusion for different datasets with super-resolution using gradient-based new fusion rule


Aymaz S., KÖSE C., AYMAZ Ş.

MULTIMEDIA TOOLS AND APPLICATIONS, vol.79, pp.13311-13350, 2020 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 79
  • Publication Date: 2020
  • DOI: 10.1007/s11042-020-08670-7
  • Journal Name: MULTIMEDIA TOOLS AND APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, FRANCIS, ABI/INFORM, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, INSPEC, zbMATH
  • Page Numbers: pp.13311-13350
  • Keywords: Multi-focus, Super-resolution, New dataset, Image fusion, SWT, Sobel, New fusion rule, ALGORITHM, TRANSFORM, NETWORKS, DEPTH
  • Affiliated with Karadeniz Technical University: Yes

Abstract

Multi-focus image fusion methods combine two or more images with blurred and defocused regions to create an all-in-focus image. The all-in-focus image contains more information, clearer regions and sharper edges than the source images. In this paper, a new approach for multi-focus image fusion is proposed. Firstly, the information in the source images is enhanced using a bicubic interpolation-based super-resolution method. Secondly, the high-resolution source images are decomposed into four sub-bands, namely LL (low-low), LH (low-high), HL (high-low) and HH (high-high), using the Stationary Wavelet Transform with the dmey (Discrete Meyer) filter. Then, a new fusion rule based on a gradient method with the Sobel operator is applied to create fused images with good visual quality. The weight coefficients, which indicate how strongly each corresponding pixel of the source images contributes to the fused image, are calculated with a designed formula based on gradient magnitudes. Each pixel of the fused sub-bands is computed from these weight coefficients, and the fused image is reconstructed using the Inverse Stationary Wavelet Transform. Lastly, the performance of the proposed method is evaluated using three different criteria: objective metrics, subjective assessment and computation time. In addition, a new dataset that differs from the datasets in the literature is created and used for the first time in this paper. The results show that the proposed method produces high-quality images with clear edges and transfers most of the information of the source images into the all-in-focus image.
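
As a rough illustration of the pipeline described above, the sketch below implements its main steps (bicubic upscaling, one-level SWT with the dmey wavelet, Sobel gradient magnitudes as a focus measure, per-pixel weighted fusion, inverse SWT) using OpenCV and PyWavelets. It is a minimal sketch under assumptions: the weight formula `g_a / (g_a + g_b)` is a hypothetical stand-in, since the paper's exact gradient-based weighting formula is not reproduced in the abstract.

```python
# Minimal sketch of the described fusion pipeline (not the authors' exact code).
import cv2
import numpy as np
import pywt

def fuse_multifocus(img_a, img_b, scale=2, wavelet="dmey", eps=1e-12):
    """Fuse two grayscale source images into one all-in-focus image."""
    # Step 1: bicubic-interpolation super-resolution of both source images.
    up = lambda im: cv2.resize(im.astype(np.float64), None,
                               fx=scale, fy=scale,
                               interpolation=cv2.INTER_CUBIC)
    a, b = up(img_a), up(img_b)

    # Step 2: one-level Stationary Wavelet Transform -> LL, LH, HL, HH bands.
    # (swt2 needs even image dimensions at level 1; upscaling by 2 ensures this.)
    (ll_a, (lh_a, hl_a, hh_a)), = pywt.swt2(a, wavelet, level=1)
    (ll_b, (lh_b, hl_b, hh_b)), = pywt.swt2(b, wavelet, level=1)

    # Step 3: Sobel gradient magnitude of each sub-band as a focus measure.
    def grad_mag(band):
        gx = cv2.Sobel(band, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(band, cv2.CV_64F, 0, 1, ksize=3)
        return np.hypot(gx, gy)

    # Step 4: per-pixel weights from gradient magnitudes, then weighted fusion.
    def fuse_band(band_a, band_b):
        ga, gb = grad_mag(band_a), grad_mag(band_b)
        w = ga / (ga + gb + eps)   # hypothetical weight formula, not the paper's
        return w * band_a + (1.0 - w) * band_b

    fused = [(fuse_band(ll_a, ll_b),
              (fuse_band(lh_a, lh_b),
               fuse_band(hl_a, hl_b),
               fuse_band(hh_a, hh_b)))]

    # Step 5: inverse SWT reconstructs the all-in-focus image.
    return pywt.iswt2(fused, wavelet)
```

In this sketch the same gradient-driven weighting is applied to the approximation (LL) and detail (LH, HL, HH) bands alike; the paper may treat the bands differently, so this choice should be read only as an illustration of how a gradient-based rule can steer each fused pixel toward the sharper source.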