
ISSN: Online (2319-8753), Print (2347-6710)


Medical Image Fusion Using Gabor And Gradient Measurement

M. Ramamoorthy and K. Anees Barvin
Assistant Professor, Dept. of CSE, Sri Lakshmi Ammal Engineering College, Chennai, India.

Visit for more related articles at International Journal of Innovative Research in Science, Engineering and Technology


ABSTRACT

Medical image fusion is an important tool for clinical applications. In medical diagnosis, the edges and outlines of the objects of interest are more important than other information, so preserving edge-like features is worth investigating for medical image fusion. An image with higher contrast contains more edge-like features. From this viewpoint, this paper proposes a new medical image fusion scheme based on the Non-Subsampled Contourlet Transform (NSCT) and a pixel-level fusion rule, which provides more detail about edges along curves and improves the edge information of the fused image by reducing distortion. The transform decomposes the image into coarser and finer details, and the finest details are further decomposed into different resolutions at different orientations. A pixel-level fusion rule then selects the low-frequency and high-frequency coefficients; the rule is based on a Gabor filter bank and a gradient-based fusion algorithm. The fused contourlet coefficients are reconstructed by the inverse Non-Subsampled Contourlet Transform. The goal of image fusion is to obtain useful complementary information from CT/MRI multimodality images. With this method we obtain more complementary information, a better correlation coefficient, a higher PSNR (Peak Signal-to-Noise Ratio), and a lower MSE (Mean Square Error).


KEYWORDS

Non-Subsampled Contourlet Transform (NSCT), multimodal medical image fusion, pixel-level fusion, Gabor filter bank, gradient-based fusion.


I. INTRODUCTION

Generally, medical image fusion combines the information of a variety of images using computer-based image processing methods, with the aim of obtaining a better image that is clearer and contains more information.
In clinical diagnosis and treatment, fused images can provide more useful information, which is important for lesion localization, treatment planning, and pathological study. Among medical images, CT clearly reflects the anatomical structure of bone tissue; conversely, MRI clearly reflects the anatomical structure of soft tissue, organs, and blood vessels. Problems requiring the comparison and synthesis of CT and MRI images are frequently encountered in clinical practice. For this purpose, multimodal medical image fusion has been identified as a promising solution: it aims to integrate information from multiple modality images to obtain a more complete and accurate description of the same object. Multimodal medical image fusion not only helps in diagnosing diseases, but also reduces storage cost by storing a single fused image instead of multiple source images [1].
The salient contributions of the proposed framework over existing methods can be summarized as follows.
• This paper proposes a new image fusion framework for multimodal medical images, which relies on NSCT to decompose the images into subbands.
• The subband images of the two source images obtained from NSCT are processed morphologically to obtain enhanced information for diagnosing brain diseases.
• A pixel-level fusion method is adopted, implemented with a Gabor filter bank and gradient detection for coefficient selection.
• The low-frequency subbands of the two source images are fused by Gabor coefficient selection, and the high-frequency subbands are fused by gradient measurement to select the desired coefficients.
• Finally, the fused frequency subbands are inverse transformed to reconstruct the fused image, and quality parameters are evaluated between the input and fused images.
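The steps above can be sketched end-to-end in Python. This is a minimal toy, not the paper's implementation: a one-level, same-size low/high split stands in for NSCT, coefficient magnitude stands in for the Gabor-based low-frequency rule, and gradient energy stands in for the gradient measurement; all function names are illustrative.

```python
import numpy as np

def split(img):
    # Toy stand-in for NSCT: one-level, same-size low/high decomposition.
    # A 3x3 moving average gives the low band; the residual is the high band.
    low = sum(np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return low, img - low

def fuse(img_a, img_b):
    low_a, high_a = split(img_a.astype(float))
    low_b, high_b = split(img_b.astype(float))
    # Pixel-level choose-max rules: coefficient magnitude for the low band,
    # gradient energy (stand-in for the gradient measurement) for the high band.
    low_f = np.where(np.abs(low_a) >= np.abs(low_b), low_a, low_b)
    grad_a = np.hypot(*np.gradient(high_a))
    grad_b = np.hypot(*np.gradient(high_b))
    high_f = np.where(grad_a >= grad_b, high_a, high_b)
    # The "inverse transform" of this additive toy split is just summation.
    return low_f + high_f

ct = np.random.rand(64, 64)
mri = np.random.rand(64, 64)
fused = fuse(ct, mri)
print(fused.shape)  # (64, 64)
```

Because the toy split is additive and perfectly invertible, fusing an image with itself returns that image unchanged, which is a useful sanity check for any fusion rule.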
The rest of the paper is organized as follows. NSCT is described in Section II, followed by the proposed multimodal medical image fusion framework and NSCT decomposition in Section III. Experimental results and discussion are given in Section IV, and concluding remarks in Section V.


II. PRELIMINARIES

This section describes the concept on which the proposed framework is based, namely NSCT.
A. Non-Subsampled Contourlet Transform (NSCT)
NSCT is a multi-scale and multi-direction computational framework for discrete images. It comprises two stages: the non-subsampled pyramid (NSP) and the non-subsampled directional filter bank (NSDFB). The former stage ensures the multiscale property by using a two-channel non-subsampled filter bank; one low-frequency image and one high-frequency image are produced at each NSP decomposition level. Subsequent NSP decomposition stages are applied iteratively to the low-frequency component to capture the singularities in the image. As a result, NSP yields k+1 sub-images, one low- and k high-frequency images, all having the same size as the source image, where k denotes the number of decomposition levels. Fig. 1 shows the NSP decomposition with k = 3 levels.
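The NSP stage can be imitated with an undecimated Laplacian-style pyramid: each level peels off a same-size high-frequency detail image and passes the smoothed residual to the next level. The sketch below is an assumption-laden stand-in (a 3x3 average replaces the actual two-channel non-subsampled filter bank), but it reproduces the structural properties stated above: k+1 sub-images, all at the source resolution.

```python
import numpy as np

def blur(img):
    # 3x3 average via shifted copies (circular boundary; fine for a sketch).
    acc = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

def nsp_decompose(img, levels):
    """Toy non-subsampled pyramid: each level yields one same-size
    high-frequency detail image; the low-frequency residual comes last."""
    low = img.astype(float)
    subbands = []
    for _ in range(levels):
        smooth = blur(low)
        subbands.append(low - smooth)   # high-frequency detail, same size
        low = smooth                    # iterate on the low-frequency part
    subbands.append(low)                # final low-frequency approximation
    return subbands

img = np.random.rand(64, 64)
bands = nsp_decompose(img, 3)
print(len(bands))                                # k+1 = 4 sub-images for k = 3
print(all(b.shape == img.shape for b in bands))  # True: no subsampling
print(np.allclose(sum(bands), img))              # True: telescoping sum reconstructs
```

The detail images telescope, so summing all sub-bands reconstructs the input exactly, mirroring the shift-invariant, perfectly reconstructing character of the real NSP.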


III. PROPOSED FUSION FRAMEWORK

In this section, we discuss the motivating factors in the design of our approach to multimodal medical image fusion. The proposed framework takes a pair of source images, denoted A and B, and generates a composite image. A basic requirement of the framework is that all source images must be registered so that the corresponding pixels are aligned. The block diagram of the proposed framework is depicted in Fig. 3.
Fig. 3 Block diagram of proposed multimodal medical image fusion Framework
A. NSCT Decomposition
NSCT decomposition computes the multi-scale and multi-direction components of discrete images. The decomposition flow is depicted in Fig. 4. It involves two stages, the non-subsampled pyramid (NSP) and the non-subsampled directional filter bank (NSDFB), to extract texture, contours, and detail coefficients. NSP decomposes the image into low- and high-frequency subbands at each decomposition level, producing n+1 sub-images when the decomposition level is n. NSDFB extracts the detail coefficients by directional decomposition of the high-frequency subbands obtained from NSP; it consists of fan filters and parallelogram filters, and generates 2^m directional sub-images when the number of stages is m. The sub-images of the two source images obtained from NSCT are processed morphologically to obtain enhanced information for diagnosing brain diseases.
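The 2^m directional split can be illustrated with ideal angular masks in the frequency domain. This is a toy assumption, not the NSDFB's fan/parallelogram filters: each high-frequency band is partitioned into 2^m angular wedges of its 2-D spectrum, one same-size sub-image per wedge.

```python
import numpy as np

def directional_split(band, stages):
    """Toy directional filter bank: partition the 2-D spectrum into
    2**stages angular wedges (ideal masks), one sub-image per wedge."""
    h, w = band.shape
    F = np.fft.fftshift(np.fft.fft2(band))
    yy, xx = np.mgrid[0:h, 0:w]
    # Fold opposite half-planes together: real signals have symmetric spectra.
    theta = np.arctan2(yy - h // 2, xx - w // 2) % np.pi
    n = 2 ** stages
    edges = np.linspace(0.0, np.pi, n + 1)
    subs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (theta >= lo) & (theta < hi)
        subs.append(np.real(np.fft.ifft2(np.fft.ifftshift(F * mask))))
    return subs

band = np.random.rand(64, 64)
subs = directional_split(band, 3)
print(len(subs))  # 2**3 = 8 directional sub-images, each 64x64
```

Because the wedges partition the spectrum, the directional sub-images sum back to the original band, again mirroring the non-subsampled, same-size property of the real NSDFB.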
B. Pixel Level Fusion
A pixel-level fusion method is adopted, implemented with a Gabor filter bank and gradient detection for coefficient selection. The low-frequency subbands of the two source images are fused by Gabor coefficient selection, and the high-frequency subbands are fused by gradient measurement to select the desired coefficients. Finally, the two fused frequency subbands are inverse transformed to reconstruct the fused image, and quality parameters are evaluated between the input and fused images.
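The pixel-level rule can be factored as a generic "choose-max activity" selection with a pluggable saliency measure. The sketch below assumes two illustrative measures: gradient magnitude for high-frequency bands (the paper's gradient measurement) and a local-energy window for low-frequency bands, where a Gabor-response energy would be used in the paper's actual rule.

```python
import numpy as np

def fuse_by_activity(band_a, band_b, activity):
    """Choose-max pixel rule: at each pixel, keep the coefficient from
    the source whose activity (saliency) measure is larger."""
    return np.where(activity(band_a) >= activity(band_b), band_a, band_b)

def gradient_activity(band):
    # High-frequency saliency: local gradient magnitude.
    gy, gx = np.gradient(band)
    return np.hypot(gx, gy)

def energy_activity(band, win=3):
    # Low-frequency saliency: local energy over a win x win neighbourhood
    # (a Gabor-filter response energy would play this role in the paper).
    r = win // 2
    acc = np.zeros_like(band, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(band, dy, 0), dx, 1) ** 2
    return acc

a, b = np.random.rand(32, 32), np.random.rand(32, 32)
high_fused = fuse_by_activity(a, b, gradient_activity)
low_fused = fuse_by_activity(a, b, energy_activity)
print(high_fused.shape, low_fused.shape)
```

Separating the selection rule from the activity measure makes it easy to swap in different saliency criteria per subband, which is exactly the structure the framework uses for its low- and high-frequency rules.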
C. Low-Frequency and High-Frequency Fusion
1) Fusion of low-frequency coefficients (Gabor filter approach): The low-frequency subbands of the two source images are fused by selecting appropriate coefficients using Gabor filtering, which is useful to discriminate and characterize the texture of an image through its frequency and orientation representation. It uses a Gaussian kernel function modulated by a sinusoidal wave to evaluate the filter coefficients, which are convolved with the image. In the space domain, the complex Gabor function is

g(x, y) = exp(-(x'^2 + γ^2 y'^2) / (2σ^2)) · exp(j(2π x'/λ + ψ))

with x' = x cosθ + y sinθ and y' = -x sinθ + y cosθ, where σ is the standard deviation of the Gaussian envelope, λ the wavelength of the sinusoid, θ the orientation, ψ the phase offset, and γ the spatial aspect ratio.
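A minimal numpy construction of this kernel, assuming the standard complex Gabor parameterization above (the function name and parameter choices are illustrative):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=1.0):
    """Complex Gabor kernel: Gaussian envelope times complex sinusoid."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * (2.0 * np.pi * xr / lam + psi))
    return envelope * carrier

k = gabor_kernel(size=15, sigma=3.0, theta=np.pi / 4, lam=6.0)
print(k.shape)             # (15, 15)
print(np.iscomplexobj(k))  # True
```

A filter bank is obtained by varying θ (orientation) and λ (scale); convolving each source subband with the bank and comparing response energies gives the coefficient-selection criterion used for the low-frequency fusion.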


IV. EXPERIMENTAL RESULTS AND DISCUSSION

Some general requirements for a fusion algorithm are: (1) it should extract complementary features from the input images; (2) it must not introduce artifacts or inconsistencies; and (3) it should be robust and reliable. However, selecting a criterion consistent with subjective assessment of image quality is difficult, so an evaluation system is needed. Therefore, an evaluation index system is first established to assess the proposed fusion algorithm; its indices are determined from statistical parameters.
A. Parameter Evaluation
1) Peak signal-to-noise ratio and mean square error: To establish an objective criterion for digital image quality, the PSNR (Peak Signal-to-Noise Ratio) is defined as follows:

PSNR = 10 * log10(255 * 255 / MSE)

where MSE (Mean Square Error) is the mean squared difference between the fused image F and the original image I, both of size M × N:

MSE = (1 / (M * N)) * Σ_i Σ_j (F(i, j) - I(i, j))^2

2) Correlation coefficient: The correlation coefficient between the source and fused images is

CC = Σ(A · B) / sqrt(Σ A^2 · Σ B^2)

where B is the difference between the fused image and its overall mean value, and A is the difference between the source image and its overall mean value.
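These three indices are straightforward to compute; a short numpy sketch (function names are illustrative), checked on a ramp image with a constant offset so the expected values are exact:

```python
import numpy as np

def mse(fused, ref):
    # Mean squared difference between fused and reference images.
    return np.mean((fused.astype(float) - ref.astype(float)) ** 2)

def psnr(fused, ref, peak=255.0):
    # PSNR in dB for images with the given peak intensity value.
    return 10.0 * np.log10(peak * peak / mse(fused, ref))

def corr_coeff(fused, ref):
    # A, B: source/fused images minus their overall mean values.
    a = ref.astype(float) - ref.mean()
    b = fused.astype(float) - fused.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))

ref = np.tile(np.arange(200.0), (50, 1))   # 50 x 200 ramp test image
fused = ref + 5.0                          # constant offset, no clipping
print(mse(fused, ref))                     # 25.0
print(round(psnr(fused, ref), 2))          # 34.15
print(round(corr_coeff(fused, ref), 6))    # 1.0
```

A constant intensity offset leaves the correlation coefficient at exactly 1.0 while costing PSNR, which is why both indices are reported: they penalize different kinds of distortion.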
B. Experiments on CT/MRI Image Fusion
To evaluate the performance of the proposed image fusion approach, two different datasets of human brain images are considered (see Figs. 6 and 7). These images fall into two groups: 1) CT-MR Dataset 1 and 2) CT-MR Dataset 2. The corresponding pixels of the two input images have been perfectly co-aligned. All images have the same size of 256 × 256 pixels with 256 gray levels. The proposed medical fusion technique is applied to these image sets. Owing to different imaging principles and environments, source images of different modalities contain complementary information. For implementing NSCT, maxflat filters and diamond maxflat filters are used as the pyramidal and directional filters, respectively. The experiments use NSP and NSDFB decomposition with 2 levels, so one low-frequency and two high-frequency subbands are obtained for each image.
The parameter evaluation for the fused images is shown in Table I; it is clear that the proposed algorithm not only preserves spectral information but also improves the spatial detail of the medical images.


V. CONCLUSION

In this paper, a novel image fusion framework is proposed for multimodal medical images, based on the non-subsampled contourlet transform. Two different fusion rules are used, which preserve more information in the fused image with improved quality: the low-frequency bands are fused using a Gabor filter bank, whereas gradient measurement is adopted as the fusion measure for the high-frequency bands. In our experiments, two groups of CT/MRI images are fused using the proposed framework. The visual and parameter evaluations demonstrate that the proposed algorithm can enhance the details of the fused image and improve the visual effect with much less information distortion. Further, to show the practical applicability of the proposed method, two clinical examples are also considered, including analysis of a diseased person's brain with a recurrent tumor.


REFERENCES

[1] G. Bhatnagar, Q. M. J. Wu, and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Trans. Multimedia, vol. 15, no. 5, Aug. 2013.

[2] F. Maes, D. Vandermeulen, and P. Suetens, "Medical image registration using mutual information," Proc. IEEE, vol. 91, no. 10, pp. 1699–1721, Oct. 2003.

[3] G. Bhatnagar, Q. M. J. Wu, and B. Raman, "Real time human visual system based framework for image fusion," in Proc. Int. Conf. Signal and Image Processing, Trois-Rivieres, QC, Canada, 2010, pp. 71–78.

[4] A. Cardinali and G. P. Nason, "A statistical multiscale approach to image segmentation and fusion," in Proc. Int. Conf. Information Fusion, Philadelphia, PA, USA, 2005, pp. 475–482.

[5] P. S. Chavez and A. Y. Kwarteng, "Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis," Photogrammetric Eng. Remote Sens., vol. 55, pp. 339–348, 1989.

[6] A. Toet, L. V. Ruyven, and J. Velaton, "Merging thermal and visual images by a contrast pyramid," Opt. Eng., vol. 28, no. 7, pp. 789–792, 1989.

[7] V. S. Petrovic and C. S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Trans. Image Process., vol. 13, no. 2, pp. 228–237, Feb. 2004.

[8] A. Toet, "Hierarchical image fusion," Mach. Vision Appl., vol. 3, no. 1, pp. 1–11, 1990.

[9] X. Qu, J. Yan, H. Xiao, and Z. Zhu, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.

[10] G. Bhatnagar and B. Raman, "A new image fusion technique based on directive contrast," Electron. Lett. Comput. Vision Image Anal., vol. 8, no. 2, pp. 18–38, 2009.

[11] Q. Zhang and B. L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Process., vol. 89, no. 7, pp. 1334–1346, 2009.

[12] Y. Chai, H. Li, and X. Zhang, "Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain," Optik, vol. 123, no. 7, pp. 569–581, 2012.

[13] G. Bhatnagar and Q. M. J. Wu, "An image fusion framework based on human visual system in framelet domain," Int. J. Wavelets, Multires., Inf. Process., vol. 10, no. 1, pp. 1250002-1–1250002-30, 2012.

[14] S. Yang, M. Wang, L. Jiao, R. Wu, and Z. Wang, "Image fusion based on a new contourlet packet," Inf. Fusion, vol. 11, no. 2, pp. 78–84, 2010.

[15] Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, "A novel algorithm of image fusion using shearlets," Opt. Commun., vol. 284, no. 6, pp. 1540–1547, 2011.

[16] S. Li, B. Yang, and J. Hu, "Performance comparison of different multiresolution transforms for image fusion," Inf. Fusion, vol. 12, no. 2, pp. 74–84, 2011.

[17] R. Redondo, F. Sroubek, S. Fischer, and G. Cristobal, "Multifocus image fusion using the log-Gabor transform and a multisize windows technique," Inf. Fusion, vol. 10, no. 2, pp. 163–171, 2009.

[18] S. Yang, M. Wang, Y. Lu, W. Qi, and L. Jiao, "Fusion of multiparametric SAR images based on SW-nonsubsampled contourlet and PCNN," Signal Process., vol. 89, no. 12, pp. 2596–2608, 2009.

[19] Q. Guihong, Z. Dali, and Y. Pingfan, "Medical image fusion by wavelet transform modulus maxima," Opt. Express, vol. 9, pp. 184–190, 2001.

[20] V. Barra and J. Y. Boire, "A general framework for the fusion of anatomical and functional medical images," NeuroImage, vol. 13, no. 3, pp. 410–424, 2001.

[21] L. Yang, B. L. Guo, and W. Ni, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, pp. 203–211, 2008.

[22] F. E. Ali, I. M. El-Dokany, A. A. Saad, and F. E. Abd El-Samie, "Curvelet fusion of MR and CT images," Progr. Electromagn. Res. C, vol. 3, pp. 215–224, 2008.

[23] N. Boussion, M. Hatt, F. Lamare, C. C. L. Rest, and D. Visvikis, "Contrast enhancement in emission tomography by way of synergistic PET/CT image combination," Comput. Meth. Programs Biomed., vol. 90, no. 3, pp. 191–201, 2008.

[24] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Inf. Fusion, vol. 11, no. 2, pp. 114–123, 2010.

[25] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP J. Adv. Signal Process., vol. 2010, pp. 44-1–44-13, 2010.

[26] S. Das, M. Chowdhury, and M. K. Kundu, "Medical image fusion based on ripplet transform type-I," Progr. Electromagn. Res. B, vol. 30, pp. 355–370, 2011.

[27] T. Li and Y. Wang, "Biological image fusion using a NSCT based variable-weight method," Inf. Fusion, vol. 12, no. 2, pp. 85–92, 2011.

[28] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: Theory, design, and applications," IEEE Trans. Image Process., vol. 15, no. 10, pp. 3089–3101, Oct. 2006.

[29] P. Kovesi, "Image features from phase congruency," Videre: J. Comput. Vision Res., vol. 1, no. 3, pp. 2–26, 1999.