A novel extended depth of field process based on nonsubsampled shearlet transform by estimating optimal range in microscopic systems


OPTICS COMMUNICATIONS, vol.429, pp.88-99, 2018 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 429
  • Publication Date: 2018
  • Doi Number: 10.1016/j.optcom.2018.08.006
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.88-99
  • Karadeniz Technical University Affiliated: Yes


Increasing the depth of field (DOF) while maintaining high resolution is a classical challenge when a single, fully in-focus 2D image of a sample is to be acquired with a microscope. Extended depth-of-field microscopy is implemented to overcome this problem, and various studies have proposed wavefront coding and image fusion as remedies for the blurred content of the in-focus image. In a classical extended depth of field (EDOF) process based on image fusion, the microscope platform is moved along a fixed range, i.e., the distance between randomly chosen initial and final positions on the Z axis. During the movement of the platform over this range, a certain number of multi-focus images are acquired at fixed steps (Delta d). However, the magnification of the objective affects both the range and the number of multi-focus images. Instead of choosing the range randomly, the optimal range should be selected so that significant information can be extracted from the multi-focus images. In this study, a novel EDOF process based on multi-scale representations is developed that estimates the optimal range in microscopic systems. The proposed EDOF process is performed in two main stages: pre-processing and image fusion. In the pre-processing stage, various ranges with different initial and final positions are extracted to scan the whole structure of the sample along the Z axis. In the second stage, a novel image fusion approach based on the Nonsubsampled Shearlet Transform (NSST) is applied to all ranges to obtain the optimal fused image.
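The acquire-then-fuse pipeline described above can be illustrated with a simplified stand-in for the NSST fusion rule: a per-pixel selection driven by a local-variance focus measure, applied to a stack of multi-focus frames. This is only a sketch; the window size, stack layout, and function names are hypothetical and do not reproduce the paper's actual NSST-based method.

```python
import numpy as np

def local_variance(img, k=7):
    """Local variance focus measure over a k x k window (pure NumPy box filter)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")

    def box_mean(a):
        # cumulative-sum (integral image) trick for a fast k x k mean
        c = np.cumsum(np.cumsum(a, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        h, w = img.shape
        return (c[k:k + h, k:k + w] - c[:h, k:k + w]
                - c[k:k + h, :w] + c[:h, :w]) / (k * k)

    m = box_mean(padded)
    m2 = box_mean(padded ** 2)
    return m2 - m ** 2          # Var = E[x^2] - E[x]^2

def fuse_stack(stack, k=7):
    """Fuse a (n, h, w) multi-focus stack: per pixel, keep the sharpest frame."""
    measures = np.stack([local_variance(f, k) for f in stack])
    best = np.argmax(measures, axis=0)        # index of in-focus frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

In the paper's scheme this selection would instead be carried out on NSST coefficients within each candidate Z-axis range, and the range whose fused image scores best is kept as the optimal one.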
To evaluate the performance of the proposed image fusion approach, and to show the effect of different color spaces on multi-scale-representation-based image fusion, fused images created with several fusion approaches, including Maximum Absolute Selection, Variance, Tenengrad, Discrete Complex-Valued Wavelet Transform, and Discrete Curvelet Transform, and with other color spaces (HSV, YIQ, and YCbCr), are compared in terms of transferred focus information, outliers, and blurring. The experimental results show that the fused image created with the proposed approach contains more detailed information and fewer outliers and artifacts. Furthermore, the YCbCr and HSV color models give the highest performance in capturing the critical focus information.