ISSN: 2319-8753 (Online), 2347-6710 (Print)
Gagandeep Kaur^{1}, Anand Kumar Mittal^{2}

International Journal of Innovative Research in Science, Engineering and Technology
ABSTRACT
With the availability of multisensor data in many fields, image fusion has been receiving increasing attention from researchers for a wide spectrum of applications. Image fusion is the process of combining information from multiple images of the same scene; these images may be captured by different sensors, acquired at different times, or have different spatial and spectral characteristics. The discrete wavelet transform (DWT) is the basic and simplest transform among the numerous multiscale transforms, and other wavelet-based fusion schemes are usually similar to the DWT fusion scheme. In this paper, a hybrid method for image fusion is proposed: it combines the discrete cosine transform (DCT) with a variance-based fusion rule, and is compared with a hybrid DWT method.
Keywords 
DCT, DWT, PCA 
I. INTRODUCTION 
Image fusion is the process of combining information from multiple images of the same scene. These images may be captured by different sensors, acquired at different times, or have different spatial and spectral characteristics. The objective of image fusion is to retain the most desirable characteristics of each image: the relevant information from a set of images is combined into a single image, so that the resultant fused image is more informative and complete than any of the input images. Image fusion techniques can improve the quality and broaden the applications of such data. 
Image fusion is a useful technique for merging single-sensor and multisensor images to enhance the information content. The objective is to combine information from multiple images in order to produce an image that delivers only the useful information. Discrete cosine transform (DCT) based methods of image fusion are more suitable and time-saving in real-time systems. In this paper an efficient approach for the fusion of multifocus images is presented, based on variance calculated in the DCT domain. 
A. SINGLE SENSOR IMAGE FUSION SYSTEM 
The basic single-sensor image fusion scheme is presented in Figure 1. The sensor shown could be a visible-band sensor or some matching-band sensor; it captures the real world as a sequence of images. The sequence of images is then fused to generate a new image with optimum information content. For example, in an illumination-variant and noisy environment, a human operator may not be able to detect objects of interest that can be highlighted in the resultant fused image. 
B. MULTISENSOR IMAGE FUSION SYSTEM 
A multisensor image fusion scheme overcomes the limitations of a single-sensor scheme by merging the images from several sensors to form a composite image. Figure 2 illustrates a multisensor image fusion system: here an infrared camera accompanies a digital camera, and their individual images are merged to obtain a fused image. The digital camera is suitable for daylight scenes; the infrared camera is appropriate in poorly illuminated environments. 
II. IMAGE FUSION CATEGORIES 
1. Multimodal Images: Multimodal fusion is applied to images coming from different modalities, like visible and infrared, CT and NMR, or panchromatic and multispectral satellite images. The goal of a multimodal image fusion system is to decrease the amount of data and to emphasize band-specific information. 
2. Multifocal Images: With digital cameras, when a lens focuses on a subject at a certain distance, subjects at other distances are not sharply focused. A possible way to solve this problem is by image fusion: one can acquire a series of pictures with different focus settings and fuse them to produce a single image with extended depth of field. 
3. Multiview Images: In multiview image fusion, a set of images of the same scene taken by the same sensor from different viewpoints, or several 3D acquisitions of the same specimen taken from different viewpoints, are fused to obtain an image with higher resolution. 
4. Multi-Temporal Images: In multi-temporal image fusion, images taken at different times (seconds to years apart) are fused into a single image in order to detect changes between them. 
III. IMAGE FUSION METHODS 
1. Spatial domain fusion method: In spatial-domain techniques, we deal directly with the image pixels; the pixel values are manipulated to achieve the desired result. 
2. Transform domain fusion method: In transform-domain methods, the image is first transformed into the frequency domain; fusion is then performed on the transform coefficients. 
IV. IMAGE FUSION TECHNIQUES/ALGORITHMS 
1. Simple Average: It is a well-documented fact that regions of images that are in focus tend to have higher pixel intensity, so this algorithm is a simple way of obtaining an output image with all regions in focus. The value of pixel P(i, j) of each image is taken and added; this sum is then divided by 2 to obtain the average, which is assigned to the corresponding pixel of the output image as given in the equation below. This is repeated for all pixels: K(i, j) = {X(i, j) + Y(i, j)}/2, where X(i, j) and Y(i, j) are the two input images. 
2. Select Maximum: The greater the pixel value, the more in focus the image. This algorithm chooses the in-focus regions from each input image by selecting the greatest value for each pixel, resulting in a highly focused output. The value of pixel P(i, j) of each image is compared, and the greatest value is assigned to the corresponding output pixel. 
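The two pixel-level rules above can be sketched in a few lines of NumPy; this is a minimal illustration on toy 2x2 arrays, and the function names are our own:

```python
import numpy as np

def fuse_average(x, y):
    """Pixel-wise average fusion: K(i, j) = (X(i, j) + Y(i, j)) / 2."""
    return (x.astype(np.float64) + y.astype(np.float64)) / 2.0

def fuse_select_max(x, y):
    """Select-maximum fusion: keep the larger pixel value at each position."""
    return np.maximum(x, y)

# Toy 2x2 "images"
x = np.array([[10, 200], [30, 40]], dtype=np.float64)
y = np.array([[20, 100], [90, 40]], dtype=np.float64)
avg = fuse_average(x, y)     # [[15, 150], [60, 40]]
mx = fuse_select_max(x, y)   # [[20, 200], [90, 40]]
```

Both rules operate pixel by pixel, which is fast but ignores local context; the transform-domain methods below address that limitation.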
3. Brovey Transform (BT): The Brovey transform, also known as color-normalized fusion, is based on the chromaticity transform and the concept of intensity modulation. The basic procedure first multiplies each MS band by the high-resolution PAN band, and then divides each product by the sum of the MS bands. 
4. Intensity Hue Saturation (IHS) Technique: IHS fusion replaces the intensity component of the multispectral (MS) image with the panchromatic (PAN) band. The basic steps are: 
i. Perform image registration (IR) to PAN and MS, and resample MS. 
ii. Convert MS from RGB space into IHS space. 
iii. Match the histogram of PAN to the histogram of the I component. 
iv. Replace the I component with PAN. 
v. Convert the fused MS back to RGB space. 
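The Brovey computation described above (each MS band multiplied by PAN and divided by the sum of the MS bands) can be sketched as follows; the small epsilon guarding division by zero and the (H, W, B) array layout are our assumptions:

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-12):
    """Brovey (color-normalized) fusion.
    ms: (H, W, B) multispectral bands; pan: (H, W) panchromatic band.
    Each MS band is scaled by PAN / sum(MS bands)."""
    ms = ms.astype(np.float64)
    total = ms.sum(axis=2, keepdims=True) + eps  # avoid division by zero
    return ms * pan[..., None] / total

# One pixel with three bands summing to 6, modulated by a PAN value of 12
ms = np.array([[[1.0, 2.0, 3.0]]])
pan = np.array([[12.0]])
fused = brovey_fusion(ms, pan)  # [[[2, 4, 6]]]
```

Note that the band ratios (chromaticity) are preserved while the overall intensity is replaced by PAN, which is exactly the intensity-modulation idea the text describes.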
5. Principal Component Analysis (PCA) Technique: Principal component analysis is a subspace method which reduces multidimensional data sets to lower dimensions for analysis. PCA involves a mathematical procedure that transforms a number of correlated variables into a number of uncorrelated variables called principal components. PCA is also called the Karhunen-Loève transform or the Hotelling transform. 
6. Discrete Wavelet Transform (DWT): Wavelet transforms are multiresolution image decomposition tools that provide a variety of channels representing image features in different frequency subbands; they are a well-known technique for analyzing signals. The 2D discrete wavelet transform (DWT) converts the image from the spatial domain to the frequency domain. The image is filtered along vertical and horizontal directions, and the first level of the DWT separates the image into four parts: LL1, LH1, HL1 and HH1. The most important step for fusion is the formation of the fusion pyramid; it is difficult to define a uniform standard for the fusion rule. 
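A minimal single-level Haar DWT fusion along these lines can be sketched as follows, assuming even image dimensions and the common rule of averaging the approximation (LL) subband while selecting the larger-magnitude detail coefficients; the Haar filters and this particular rule are our illustrative choices:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = img.astype(np.float64)
    # Filter and downsample along rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Filter and downsample along columns
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = ll.shape
    lo = np.zeros((2 * h, w)); hi = np.zeros((2 * h, w))
    lo[0::2, :] = ll + lh; lo[1::2, :] = ll - lh
    hi[0::2, :] = hl + hh; hi[1::2, :] = hl - hh
    out = np.zeros((2 * h, 2 * w))
    out[:, 0::2] = lo + hi; out[:, 1::2] = lo - hi
    return out

def dwt_fuse(x, y):
    """Fuse: average the LL subband, keep the larger-magnitude detail coefficient."""
    cx, cy = haar_dwt2(x), haar_dwt2(y)
    fused = [(cx[0] + cy[0]) / 2.0]
    for dx, dy in zip(cx[1:], cy[1:]):
        fused.append(np.where(np.abs(dx) >= np.abs(dy), dx, dy))
    return haar_idwt2(*fused)

x = np.arange(16, dtype=float).reshape(4, 4)
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the decomposition, rule and reconstruction are consistent.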
7. Wavelet-based image fusion: The standard image fusion techniques, such as the IHS-based method, the PCA-based method and the Brovey transform, operate in the spatial domain; however, spatial-domain fusion may produce spectral degradation. Wavelet-based fusion techniques have been found to outperform the standard fusion techniques in spatial and spectral quality, especially in minimizing color distortion. Schemes that combine the standard methods (IHS or PCA) with wavelet transforms produce better results than either the standard methods or simple wavelet-based methods alone; the trade-off is higher complexity and cost. 
V. MOTIVATION 
The aim of this research is to study the concept of image fusion in image processing. The discrete wavelet transform (DWT) is performed on the source images, because DWT is the basic and simplest transform among the numerous multiscale transforms, and other wavelet-based fusion schemes are usually similar to the DWT fusion scheme. 
The research is based on following objectives: 
1. Represent relevant information from multiple individual images in a single image. 
2. Apply the discrete wavelet transform to intensity images. 
3. Combine multiple image signals into a single fused image using wavelet techniques. 
4. Calculate the parameters: Euclidean distance, PSNR and MSE. 
5. Improve image fusion using three approaches: DCT, PCA and DSWT. 
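The evaluation parameters named in the objectives (Euclidean distance and PSNR, together with the MSE used in the conclusion's comparison) can be computed as follows; this is a minimal sketch assuming 8-bit images, i.e. a peak value of 255:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def euclidean(a, b):
    """Euclidean distance between two images, treated as flat vectors."""
    return float(np.linalg.norm(a.astype(np.float64) - b.astype(np.float64)))

a = np.zeros((2, 2))
b = np.full((2, 2), 10.0)
```

For the toy pair above, mse(a, b) is 100, euclidean(a, b) is 20, and psnr(a, b) is about 28.13 dB; a higher PSNR (lower MSE) indicates a fused image closer to the reference.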
VI. RESEARCH METHODOLOGY 
A. DISCRETE STATIONARY WAVELET TRANSFORM 
The discrete stationary wavelet transform (DSWT) transforms a discrete-time signal into a discrete wavelet representation. Image multiresolution analysis was introduced by Mallat in the decimated (critically subsampled) case. The DSWT has been extensively employed for remote sensing data fusion: pairs of subbands of corresponding frequency content are merged together, and the fused image is synthesized by taking the inverse transform. Fusion schemes based on the 'à trous' wavelet algorithm and on Laplacian pyramids (LP) have also been proposed in the literature. Unlike the decimated DWT, which is critically subsampled, the 'à trous' wavelet, the LP and the DSWT are oversampled. 
Image fusion is implemented by the two-dimensional discrete wavelet transform. The resolution of an image, which is a measure of the amount of detail information in the image, is changed by the filtering operations of the wavelet transform, and the scale is changed by sampling. The DSWT analyzes the image in different frequency bands at different resolutions by decomposing the image into coarse approximation and detail coefficients (Gonzalez and Woods, 1998). 
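A level-1 undecimated Haar decomposition can be sketched as follows to illustrate the key property of the stationary transform: the subbands keep the full image size because no downsampling is performed. The circular (wrap-around) boundary handling is our simplifying assumption:

```python
import numpy as np

def swt_haar_level1(img):
    """Level-1 undecimated (stationary) Haar transform.
    All four subbands have the same shape as the input image."""
    a = img.astype(np.float64)
    sr = np.roll(a, -1, axis=1)  # right neighbor (circular boundary)
    lo, hi = (a + sr) / 2.0, (a - sr) / 2.0
    sd_lo, sd_hi = np.roll(lo, -1, axis=0), np.roll(hi, -1, axis=0)
    ll, lh = (lo + sd_lo) / 2.0, (lo - sd_lo) / 2.0
    hl, hh = (hi + sd_hi) / 2.0, (hi - sd_hi) / 2.0
    return ll, lh, hl, hh

a = np.full((4, 4), 7.0)
ll, lh, hl, hh = swt_haar_level1(a)
```

For a constant image the approximation subband reproduces the input and all detail subbands are zero, and every subband keeps the 4x4 shape, in contrast to the decimated DWT where each subband is half the size in each dimension.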
B. FUSION RULES 
Fusion rules determine how the source transforms are combined: 
- Fusion rules may be application dependent. 
- Fusion rules can be the same for all subbands or depend on which subband is being fused. 
There are two basic steps in applying a rule: 
- compute salience measures for the individual source transforms; 
- decide, after comparing the salience measures, how to combine the coefficients (selection or averaging). 
Other rules involve more complicated operations based on, for example, energy or edge salience; these methods require spatial filtering, such as an energy filter or a Laplacian edge operator. 
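An energy-based salience rule of the kind mentioned above can be sketched as follows; the 3x3 window, zero padding, and the >= tie-breaking are our assumptions:

```python
import numpy as np

def local_energy(c, r=1):
    """Salience: sum of squared coefficients in a (2r+1)^2 window (zero-padded)."""
    p = np.pad(c.astype(np.float64) ** 2, r)
    h, w = c.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(2 * r + 1) for j in range(2 * r + 1))

def select_by_energy(cx, cy, r=1):
    """Per coefficient, keep the source whose local energy is larger."""
    return np.where(local_energy(cx, r) >= local_energy(cy, r), cx, cy)

# cx has the dominant local energy everywhere, so it is selected in full
cx = np.array([[3.0, 0.0], [0.0, 0.0]])
cy = np.array([[0.0, 1.0], [0.0, 0.0]])
sel = select_by_energy(cx, cy)
```

Windowed salience makes the selection more robust to isolated noisy coefficients than the coefficient-wise maximum rule.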
The discrete cosine transform (DCT) is important in numerous applications in science and engineering, and in image compression standards such as MPEG. The DCT converts a spatial-domain image into the frequency domain. Figure 4.4 shows the process flow for DCT-based fusion: the images to be fused are divided into blocks of size N×N, the DCT coefficients of each block are computed, and fusion rules are applied to obtain the fused DCT coefficients; the inverse DCT (IDCT) is then applied to produce the fused image [13]. 
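The block-DCT flow described above, combined with the variance rule this paper proposes, can be sketched as follows. For an orthonormal DCT the block variance is proportional to the sum of squared AC coefficients (Parseval's relation), so the block with larger AC energy is kept. Block size 8 and image dimensions divisible by the block size are our assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def dct_variance_fuse(x, y, b=8):
    """For each b x b block, keep the source block whose DCT AC energy
    (proportional to the block variance) is larger."""
    d = dct_matrix(b)
    out = np.empty_like(x, dtype=np.float64)
    for i in range(0, x.shape[0], b):
        for j in range(0, x.shape[1], b):
            bx, by = x[i:i + b, j:j + b], y[i:i + b, j:j + b]
            cx, cy = d @ bx @ d.T, d @ by @ d.T  # 2D block DCT
            # AC energy = total energy minus the DC term (= b^2 * variance)
            vx = (cx ** 2).sum() - cx[0, 0] ** 2
            vy = (cy ** 2).sum() - cy[0, 0] ** 2
            out[i:i + b, j:j + b] = bx if vx >= vy else by
    return out

x = np.zeros((8, 8))                         # flat block: zero variance
y = np.arange(64, dtype=float).reshape(8, 8)  # gradient block: high variance
f = dct_variance_fuse(x, y)
```

Since higher variance indicates a better-focused (more detailed) block, the fused output here is the gradient block from y.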
Principal component analysis (PCA) is an important statistical tool that transforms multivariate data with correlated variables into data with uncorrelated variables [5]. PCA is used widely in all forms of analysis, from neuroscience to computer graphics, because it is a simple, non-parametric method of extracting relevant information from confusing data sets. Here this technique is applied to the multispectral bands. 
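One common PCA fusion scheme derives the fusion weights from the principal eigenvector of the 2x2 covariance matrix of the two source images; the following is a sketch of that scheme, not necessarily the authors' exact implementation:

```python
import numpy as np

def pca_fuse(x, y):
    """PCA-weighted fusion: weights come from the principal eigenvector of
    the 2x2 covariance of the two (flattened) source images."""
    data = np.stack([x.ravel(), y.ravel()]).astype(np.float64)
    cov = np.cov(data)                # 2 x 2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    v = np.abs(vecs[:, -1])           # principal component (sign-corrected)
    w = v / v.sum()                   # normalize weights to sum to 1
    return w[0] * x + w[1] * y

# The high-variance source gets all the weight when the other is constant
x = np.array([[0.0, 10.0], [20.0, 30.0]])
y = np.full((2, 2), 5.0)
f = pca_fuse(x, y)
```

The weighting automatically favors the source carrying more variance (information), which is the rationale for PCA-based fusion.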
C. FLOWCHART 
VII. RESULTS 
In this section, the proposed method has been implemented and the results are presented. 
VIII. CONCLUSION AND FUTURE SCOPE 
In this work, we proposed a new hybrid method for image fusion that combines the DCT (discrete cosine transform) with a variance-based fusion rule, and compared it with a hybrid DWT (discrete wavelet transform) method. The proposed technique gives much better results in terms of PSNR and MSE than the other existing techniques considered, and this modified method is efficient for many applications in image processing areas. 
So far, the new hybrid technique for image fusion works only on two images. In the future, fusion could be performed on two videos, on one video and one image, or on an audio stream and one image. 
ACKNOWLEDGMENT 
The paper has been written with the kind assistance, guidance and active support of my department, which has helped me in this work. I would like to thank all the individuals whose encouragement and support have made the completion of this work possible. 
References 
[1] S. Ibrahim and M. Wirth, "Multi-resolution region-based image fusion using the Contourlet transform," in Proc. IEEE TIC-STH, Sept. 2009.
[2] W. Huang and Z.-L. Jing, "Multi-focus image fusion using pulse coupled neural network," Pattern Recognition Letters, vol. 28, no. 9, pp. 1123-1132, 2007.
[3] N. Mitianoudis, "Image fusion: theory and application," http://www.iti.gr/iti/files/documents/seminars/iti_mitianoudis_280410.pdf
[4] G. Pajares and J. M. Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855-1872, 2004.
[5] T. Stathaki, Image Fusion: Algorithms and Applications. New York: Academic Press, 2008.
[6] F. Sadjadi, "Comparative image fusion analysis," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Diego, CA, vol. 3, Jun. 2005.
[7] M. González-Audícana and A. Seco, "Fusion of multispectral and panchromatic images using wavelet transform: evaluation of crop classification accuracy," in Proc. 22nd EARSeL Annu. Symp. Geoinformation for European-Wide Integration, Prague, Czech Republic, 4-6 June 2002, T. Benes, Ed., 2003, pp. 265-272.
[8] "Performance comparison of various levels of fusion of multi-focused images using wavelet transform."
[9] Y. Zhang, "Understanding image fusion," Photogrammetric Engineering and Remote Sensing, vol. 70, no. 6, pp. 657-661, Jun. 2004.
[10] K. Amolins, Y. Zhang, and P. Dare, "Wavelet based image fusion techniques: an introduction, review and comparison," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 62, pp. 249-263, 2007.
[11] J. Nuñez, X. Otazu, O. Fors, A. Prades, V. Palà, and R. Arbiol, "Multiresolution-based image fusion with additive wavelet decomposition," IEEE Trans. Geosci. Remote Sensing, vol. 37, pp. 1204-1211, May 1999.
[12] V. Vijayaraj, N. H. Younan, and C. G. O'Hara, "Quantitative analysis of pansharpened images," Optical Engineering, vol. 45, no. 4, 2006.
[13] H. Liu, B. Zhang, X. Zhang, J. Li, Z. Chen, and X. Zhou, "An improved fusion method for pan-sharpening Beijing-1 micro-satellite images," in Proc. IEEE IGARSS, 2009.
[14] K. A. Kalpoma and J. Kudoh, "Image fusion processing for IKONOS 1-m color imagery," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 10, pp. 3075-3086, Oct. 2007.
[15] J.-J. Ding, "Time-Frequency Analysis and Wavelet Transform," http://djj.ee.ntu.edu.tw/index.php
[16] I. Bankman, Handbook of Medical Imaging: Medical Image Processing and Analysis, 1st ed., Academic Press, 2000.
[17] J. Sachs, "Image Resampling," http://www.dlc.com/Resampling.pdf
[18] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 2002.
[19] U. Patil and U. Mudengudi, "Image fusion using hierarchical PCA," in Proc. Int. Conf. on Image Information Processing (ICIIP), pp. 1-6, IEEE, 2011.
[20] Y. Pei, H. Zhou, J. Yu, and G. Cai, "The improved wavelet transform based image fusion algorithm and the quality assessment," in Proc. 3rd Int. Congress on Image and Signal Processing (CISP), vol. 1, pp. 219-223, IEEE, 2010.
[21] S. Sruthy, L. Parameswaran, and A. P. Sasi, "Image fusion technique using DT-CWT," in Proc. IEEE Int. Multi-Conference on Automation, Computing, Control, Communication and Compressed Sensing (iMac4s), Kottayam, pp. 160-164, 22-23 March 2013.