

A Review: Image Fusion Techniques for Multisensor Images

S. A. Panwar1, Sayali Malwadkar2
  1. Assistant Professor, Dept. of E&TC, Smt. Kashibai Navale College of Engineering, Vadgaon (BK), Pune, Maharashtra, India
  2. PG Student [VLSI & ES], Dept. of E&TC, Smt. Kashibai Navale College of Engineering, Vadgaon (BK), Pune, Maharashtra, India


Abstract

Image fusion has found many applications in computer vision, remote sensing, intelligent robots and military systems. The use of different image fusion algorithms gives precise resultant images. In many remote sensing applications, the quantity of image data from satellite sensors has been increasing because of advances in sensor technology. To overcome the limitations of single-sensor images, multisensor image fusion produces data that is suitable for further applications by eliminating the problem of missing information. In this paper, a literature review of the different techniques available for combining multisensor images is presented. It covers the IHS transform, high-pass filtering, Principal Component Analysis (PCA), artificial neural networks (ANN), the wavelet transform and the DCT. One effective technique for obtaining a good quality image is the Fuzzylet fusion algorithm, which combines the advantages of both the stationary wavelet transform (SWT) and fuzzy logic.

Keywords

Image fusion techniques, multisensor images, Principal Component Analysis, wavelet transform, fuzzylet image fusion

INTRODUCTION

In large-scale applications such as remote sensing and medical imaging, image fusion is of great importance. It combines the significant information from two or more source images into a single resultant image that describes the scene better and retains the useful information of the inputs. A high-resolution panchromatic (PAN) image captures the geometric detail of the natural and man-made objects in the scene, while a low-resolution multispectral image provides the color information of the scene. For remote sensing applications, image fusion, also referred to as pansharpening, integrates the source images to form a high-resolution multispectral image. The fused image is then used for many image processing tasks such as image segmentation, feature extraction, feature identification, object detection and target recognition. Single-sensor image fusion processes a sequence of images from one sensor and suffers from limitations such as the limited depth of focus of the imaging sensor employed. In intelligent systems applications, multiple sensors are therefore employed.
The aim of multisensor image fusion is to combine the visual information from multiple images, which may have different geometric representations, into a single resultant image without loss of information. The advantages of image fusion include image sharpening, feature enhancement, improved classification, and the creation of stereo data sets. Multisensor image fusion provides benefits in terms of range of operation, spatial and temporal characteristics, system performance, reduced ambiguity and improved reliability.
Based on the processing level at which fusion is performed, image fusion techniques can be divided into three categories: pixel level, feature level and symbol/decision level. The pixel-level method is the simplest and most widely used. It processes the pixels of the source images directly and retains most of the original image information; compared to the other two levels, pixel-level fusion gives more accurate results. The feature-level method processes characteristics extracted from the source images. It can be used together with the decision-level method to fuse images effectively, and, because of the reduced data size, the data is easier to compress and transmit. The top level of image fusion is the decision-making level. It uses the information extracted by pixel-level or feature-level fusion to make an optimal decision towards a specific objective, and it reduces redundancy and uncertainty.
The pre-processing steps of image fusion are shown in Fig. 1. These include image registration, image resampling and combining of the images. Combining the sensor images to obtain a well-defined result is an important task in image fusion.

LITERATURE SURVEY

A lot of research has been done on image fusion techniques since the mid-1980s. The simplest way of fusing images is to take the grayscale average of the corresponding pixels of the source images. This simple method gives acceptable results but at the cost of a reduced contrast level. Fusion techniques can be applied to different data sets depending upon their spatial and temporal characteristics. Both spatial domain and frequency domain techniques are used for combining images. Spatial domain techniques process the image pixels directly to achieve the desired result, while frequency domain techniques first transfer the image into the frequency domain by applying the Fourier transform and then obtain the resultant image by performing the inverse Fourier transform. These methods are compared using performance measures such as entropy, peak signal-to-noise ratio (PSNR) and mean square error (MSE). A minimal sketch of the averaging approach is given below.
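As an illustration of this simplest pixel-level approach, the following sketch averages two registered grayscale source images. The use of Python with NumPy, 8-bit inputs and equal image sizes are assumptions made here for illustration; the cited works do not prescribe a particular implementation.

    import numpy as np

    def average_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        """Pixel-level fusion of two registered 8-bit grayscale images by averaging."""
        a = img_a.astype(np.float64)
        b = img_b.astype(np.float64)
        fused = (a + b) / 2.0  # simple average; tends to reduce contrast
        return np.clip(fused, 0, 255).astype(np.uint8)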
Many researchers have worked on multisensor image fusion techniques. Performing a pyramid decomposition of the input images and reconstructing the fused image with the inverse pyramid transform is described in [8]. The oldest method of color image fusion is the IHS transform [3]. Principal Component Analysis is one of the most commonly used methods and is similar to the IHS transform [7]. The PCA, IHS and high-pass filtering [4] methods fall under the category of spatial domain techniques. Using the shift-invariance property of the wavelet, the wavelet transform is used for combining multispectral images [10]. Several medical imaging applications use artificial neural networks as a method of image fusion [5]. The DCT method described in [9] gives a lower PSNR but is reliable in terms of MSE. These methods for good quality image fusion are reviewed in the next section.

EXISTING IMAGE FUSION TECHNIQUES

Different image fusion techniques that have been studied and developed so far are as follows.
1. IHS (Intensity-Hue-Saturation) Transform
2. Principal Component Analysis (PCA)
3. Pyramid techniques
a) Laplacian Pyramid b) Gaussian Pyramid
c) Gradient Pyramid d) Morphological Pyramid
4. High pass filtering
5. Wavelet Transform
6. Artificial Neural Networks
7. Discrete Cosine Transform
1) IHS (INTENSITY-HUE-SATURATION) TRANSFORM
Intensity, hue and saturation are the three attributes of color that give a controlled visual representation of an image. The IHS transform method is the oldest image fusion method. In the IHS space, hue and saturation need to be carefully controlled because they contain most of the spectral information. For the fusion of a high-resolution PAN image with multispectral images, the detail information of the high spatial resolution image is added to the spectral information. The work in [3] presents several IHS transformation techniques based on different color models, including HSV, IHS1, IHS2, IHS3, IHS4, IHS5, IHS6 and YIQ. Depending on which of these formulations is used, the IHS transformation gives different results [3]. A simplified intensity-substitution sketch is given below.
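The sketch below implements a simplified linear ("fast") IHS scheme rather than the specific color models of [3]: the multispectral image is assumed to be already resampled to the PAN grid, the intensity is taken as the band mean, and the PAN detail is added to every band. All names and parameters are illustrative assumptions.

    import numpy as np

    def ihs_fusion(ms_rgb: np.ndarray, pan: np.ndarray) -> np.ndarray:
        """Simplified IHS-style pansharpening.

        ms_rgb : (H, W, 3) multispectral image resampled to the PAN grid
        pan    : (H, W)    high-resolution panchromatic image
        """
        ms = ms_rgb.astype(np.float64)
        p = pan.astype(np.float64)
        intensity = ms.mean(axis=2)      # I = (R + G + B) / 3
        detail = p - intensity           # spatial detail carried by the PAN band
        fused = ms + detail[..., None]   # equivalent to replacing I with PAN
        return np.clip(fused, 0, 255).astype(np.uint8)

Adding the same detail to every band is algebraically equivalent to substituting the PAN band for the intensity component in this linear model.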
2) PYRAMID TECHNIQUE
Image pyramids can be described as a model of the binocular fusion in the human visual system. By forming the pyramid structure, an original image is represented at different levels. A composite image is formed by applying a pattern-selective approach to image fusion. First, a pyramid decomposition is performed on each source image. Image fusion is carried out at each level of decomposition to form a fused pyramid, and the inverse pyramid transform is then applied to obtain the resultant image. A MATLAB implementation of the pyramid technique is shown in [8]. A simplified Laplacian pyramid fusion sketch follows.
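The MATLAB code of [8] is not reproduced here; the following Python/OpenCV sketch shows one common variant, Laplacian pyramid fusion with a maximum-absolute rule on the detail levels and averaging of the coarsest level. It assumes registered grayscale inputs whose dimensions are divisible by 2**levels; the function names and the fusion rule are illustrative choices.

    import cv2
    import numpy as np

    def laplacian_pyramid(img: np.ndarray, levels: int):
        """Build a Laplacian pyramid (dimensions assumed divisible by 2**levels)."""
        pyr, cur = [], img.astype(np.float64)
        for _ in range(levels):
            down = cv2.pyrDown(cur)
            up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
            pyr.append(cur - up)   # band-pass (detail) image at this level
            cur = down
        pyr.append(cur)            # coarsest approximation
        return pyr

    def pyramid_fusion(img_a: np.ndarray, img_b: np.ndarray, levels: int = 3) -> np.ndarray:
        """Fuse the pyramids level by level, then reconstruct the composite image."""
        pa = laplacian_pyramid(img_a, levels)
        pb = laplacian_pyramid(img_b, levels)
        fused = [np.where(np.abs(a) >= np.abs(b), a, b)   # keep the stronger detail
                 for a, b in zip(pa[:-1], pb[:-1])]
        fused.append((pa[-1] + pb[-1]) / 2.0)             # average the coarsest level
        out = fused[-1]
        for detail in reversed(fused[:-1]):               # inverse pyramid transform
            out = cv2.pyrUp(out, dstsize=(detail.shape[1], detail.shape[0])) + detail
        return np.clip(out, 0, 255).astype(np.uint8)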
3) HIGH PASS FILTERING (HPF)
High-resolution multispectral images are obtained by high-pass filtering. The high-frequency information of the high-resolution panchromatic image (HRPI) is added to the low-resolution multispectral image to obtain the resultant image. The high-frequency component is obtained either by filtering the HRPI with a high-pass filter or by subtracting a low-pass (low-resolution) version of the PAN image (LRPI) from the original HRPI. The spectral information of the multispectral data, which resides in the low-frequency part of the fused high-resolution multispectral image (HRMI), is preserved by this method. When a low-pass filter is used, it shows a smooth transition band along with a high ripple outside the pass band [4]. A minimal sketch of this detail-injection scheme follows.
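The following sketch injects PAN detail into each multispectral band using a simple box filter as the low-pass stage. The multispectral image is assumed to be already resampled to the PAN grid; the filter choice and kernel size are illustrative assumptions, not the filters used in [4].

    import numpy as np
    from scipy import ndimage

    def hpf_fusion(lr_ms: np.ndarray, hr_pan: np.ndarray, kernel: int = 5) -> np.ndarray:
        """High-pass-filter fusion: add PAN high frequencies to each MS band.

        lr_ms  : (H, W, B) multispectral image resampled to the PAN grid
        hr_pan : (H, W)    high-resolution panchromatic image
        """
        pan = hr_pan.astype(np.float64)
        low = ndimage.uniform_filter(pan, size=kernel)  # low-pass (LRPI) estimate
        detail = pan - low                              # high-frequency component of HRPI
        fused = lr_ms.astype(np.float64) + detail[..., None]
        return np.clip(fused, 0, 255).astype(np.uint8)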
4) PRINCIPAL COMPONENT ANALYSIS (PCA)
Despite being similar to the IHS transform, the advantage of the PCA method over the IHS method is that an arbitrary number of bands can be used. This is one of the most popular methods for image fusion. Uncorrelated principal components are computed from the low-resolution multispectral bands. The first principal component (PC1) contains the information that is common to all the bands used; it has the highest variance and therefore carries most of the information shared with the panchromatic image. The second principal component lies in the subspace perpendicular to the first, the third in the subspace perpendicular to the first two, and so on. The high-resolution PAN band is stretched to have the same variance as PC1 and then replaces PC1, after which an inverse PCA transform is applied to obtain the high-resolution multispectral image. PCA and IHS transforms provide good results at the cost of color distortion [7]. A sketch of this component-substitution scheme is given below.
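A minimal sketch of PCA component substitution follows. The multispectral image is assumed to be already resampled to the PAN grid; matching only the mean and standard deviation of PC1 is a simple stand-in for the stretching step and is an illustrative assumption.

    import numpy as np

    def pca_fusion(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
        """PCA pansharpening: replace PC1 of the MS bands with the matched PAN band.

        ms  : (H, W, B) multispectral image resampled to the PAN grid
        pan : (H, W)    high-resolution panchromatic image
        """
        h, w, b = ms.shape
        x = ms.reshape(-1, b).astype(np.float64)
        mean = x.mean(axis=0)
        cov = np.cov(x - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]   # order PCs by variance
        pcs = (x - mean) @ eigvecs                        # forward PCA transform
        p = pan.astype(np.float64).ravel()
        pc1 = pcs[:, 0]
        # stretch the PAN band to the mean/std of PC1, then substitute it
        pcs[:, 0] = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
        fused = pcs @ eigvecs.T + mean                    # inverse PCA transform
        return np.clip(fused.reshape(h, w, b), 0, 255).astype(np.uint8)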
5) WAVELET TRANSFORM
The wavelet transform is considered an alternative to the short-time Fourier transform. Its advantage over the Fourier transform is that it provides the desired resolution in the time domain as well as in the frequency domain, whereas the Fourier transform gives good resolution only in the frequency domain. In the Fourier transform the signal is decomposed into sine waves of different frequencies, whereas the wavelet transform decomposes the signal into scaled and shifted versions of the mother wavelet. In image fusion using the wavelet transform, the input images are decomposed into approximation and detail coefficients using the DWT at a chosen level. A fusion rule is applied to combine the corresponding coefficients, and the resultant image is obtained by taking the inverse wavelet transform [10]. A minimal DWT fusion sketch follows.
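The sketch below fuses two registered grayscale images in the DWT domain using PyWavelets: the approximation coefficients are averaged and, for each detail sub-band, the coefficient with the larger magnitude is kept. The wavelet ("db2"), the decomposition level and this particular fusion rule are illustrative assumptions rather than the exact choices of [10].

    import numpy as np
    import pywt

    def dwt_fusion(img_a: np.ndarray, img_b: np.ndarray,
                   wavelet: str = "db2", level: int = 2) -> np.ndarray:
        """DWT-domain fusion: average approximations, max-magnitude details."""
        ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
        cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                        # approximation band
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]): # detail sub-bands
            fused.append((np.where(np.abs(ha) >= np.abs(hb), ha, hb),
                          np.where(np.abs(va) >= np.abs(vb), va, vb),
                          np.where(np.abs(da) >= np.abs(db), da, db)))
        out = pywt.waverec2(fused, wavelet)
        out = out[:img_a.shape[0], :img_a.shape[1]]            # trim possible padding
        return np.clip(out, 0, 255).astype(np.uint8)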
6) DISCRETE COSINE TRANSFORM (DCT)
The discrete cosine transform has found importance for compressed images in formats such as MPEG and JVT. Taking the discrete cosine transform converts the spatial-domain image into a frequency-domain representation. Chu-Hui Lee and Zheng-Wei Zhou divide the transformed coefficients into three parts: low frequency, medium frequency and high frequency. The average illumination is represented by the DC value, and the AC values are the higher-frequency coefficients. The RGB image is divided into blocks of 8x8 pixels, separated into red, green and blue matrices and converted to a greyscale image.
The two-dimensional discrete cosine transform is then applied to each greyscale block, converting it from the spatial domain to the frequency domain. Once the DCT coefficients are calculated, the fused DCT coefficients are obtained by applying the fusion rule, and the fused image is recovered by taking the inverse DCT.
DCT-based methods are efficient in terms of time and are therefore useful in real-time systems. DCT coefficients show energy compactness because most of the signal energy is concentrated in the low-frequency zone. The method gives reliable results when real-time data is given as input [9]. A block-DCT fusion sketch is given below.
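The sketch below shows block-DCT fusion of two registered grayscale images. It averages the DC terms and, per 8x8 block, keeps the AC coefficient set with the larger energy; this simple energy rule is an illustrative stand-in and does not reproduce the exact low/medium/high-frequency rule of [9]. Image dimensions are assumed to be divisible by the block size.

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_block_fusion(img_a: np.ndarray, img_b: np.ndarray, bs: int = 8) -> np.ndarray:
        """8x8 block-DCT fusion: average DC, pick the higher-energy AC block."""
        a = img_a.astype(np.float64)
        b = img_b.astype(np.float64)
        out = np.zeros_like(a)
        for i in range(0, a.shape[0], bs):
            for j in range(0, a.shape[1], bs):
                da = dctn(a[i:i + bs, j:j + bs], norm="ortho")
                db = dctn(b[i:i + bs, j:j + bs], norm="ortho")
                ea = (da ** 2).sum() - da[0, 0] ** 2       # AC energy of block from A
                eb = (db ** 2).sum() - db[0, 0] ** 2       # AC energy of block from B
                fused = (da if ea >= eb else db).copy()
                fused[0, 0] = (da[0, 0] + db[0, 0]) / 2.0  # DC = average illumination
                out[i:i + bs, j:j + bs] = idctn(fused, norm="ortho")
        return np.clip(out, 0, 255).astype(np.uint8)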
7) ARTIFICIAL NEURAL NETWORKS (ANN)
Artificial neural networks (ANN) have found importance in pattern recognition. A nonlinear response function is used. The approach uses a pulse-coupled neural network (PCNN), which is a feedback network divided into three parts: the receptive field, the modulation field and the pulse generator. Each neuron corresponds to one pixel of the input image, and the matching pixel's intensity is used as the external input to the PCNN. This method is advantageous in terms of robustness against noise, independence from geometric variations and the capability of bridging minor intensity variations in the input patterns. The PCNN has biological significance and is used in medical imaging, as the method is feasible and offers real-time performance [5]. A heavily simplified PCNN sketch follows.
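The sketch below is a heavily simplified PCNN-based fusion, not the exact network of [5]: the feeding input is taken directly as the normalized pixel intensity, a fixed 3x3 linking kernel plays the role of the receptive field, and the pixel whose neuron fires more often over a fixed number of iterations is selected. All parameter values are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import convolve

    def pcnn_fire_counts(img: np.ndarray, iters: int = 30, beta: float = 0.2,
                         alpha_theta: float = 0.2, v_theta: float = 20.0) -> np.ndarray:
        """Simplified PCNN: one neuron per pixel; return the firing count per neuron."""
        s = img.astype(np.float64) / 255.0            # external stimulus (feeding input)
        w = np.array([[0.5, 1.0, 0.5],
                      [1.0, 0.0, 1.0],
                      [0.5, 1.0, 0.5]])               # linking weights (receptive field)
        y = np.zeros_like(s)                          # pulse output
        theta = np.ones_like(s)                       # dynamic threshold
        fires = np.zeros_like(s)
        for _ in range(iters):
            link = convolve(y, w, mode="constant")    # linking input from neighbours
            u = s * (1.0 + beta * link)               # modulation field
            y = (u > theta).astype(np.float64)        # pulse generator
            theta = np.exp(-alpha_theta) * theta + v_theta * y
            fires += y
        return fires

    def pcnn_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        """Per pixel, keep the source whose neuron fired more often."""
        fa, fb = pcnn_fire_counts(img_a), pcnn_fire_counts(img_b)
        return np.where(fa >= fb, img_a, img_b)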

FUZZYLET IMAGE FUSION METHOD

Fuzzy inference systems formulate the mapping from a given input to an output using sets of fuzzy rules. This mapping is based on membership functions, fuzzy logic operators and if-then rules. Mamdani and Sugeno are the two types of fuzzy inference systems; they differ in the way they determine the output. The fuzzy image fusion system gives better results when more membership functions are employed. Fuzzy logic has been used for multisensor image fusion, with fuzzy sets representing the spatial information in the multisensor images. When the membership functions and the fusion rules are chosen properly, a good quality fused image can be obtained. In the Fuzzylet fusion algorithm, pixel-level image fusion is performed using fuzzy logic together with the stationary wavelet transform (SWT). Using the Mamdani fuzzy inference system provides results in less execution time. By combining the advantages of both fuzzy logic and the SWT, Fuzzylet image fusion provides better results than those obtained by applying the SWT or fuzzy logic independently. It gives good results, but the execution time increases as the number of decomposition levels increases [1]. A simplified SWT-plus-fuzzy-weighting sketch is given below.
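The Fuzzylet algorithm of [1] uses a Mamdani fuzzy inference system together with the SWT; the sketch below only illustrates the flavour of that combination. A one-level SWT is fused with a hand-rolled sigmoid membership function that softly weights the coefficients of the two sources, so the membership function, its parameters and the fusion rule are all illustrative assumptions. Image dimensions must be divisible by 2**level for the SWT.

    import numpy as np
    import pywt

    def fuzzy_weight(act_a: np.ndarray, act_b: np.ndarray, k: float = 10.0) -> np.ndarray:
        """Soft membership of 'source A is more salient here' (sigmoid of activity gap)."""
        return 1.0 / (1.0 + np.exp(-(act_a - act_b) / k))   # k controls the softness

    def swt_fuzzy_fusion(img_a: np.ndarray, img_b: np.ndarray,
                         wavelet: str = "db2", level: int = 1) -> np.ndarray:
        """Illustrative SWT + fuzzy-weighted fusion of two registered grayscale images."""
        ca = pywt.swt2(img_a.astype(np.float64), wavelet, level=level)
        cb = pywt.swt2(img_b.astype(np.float64), wavelet, level=level)
        fused = []
        for (aA, (aH, aV, aD)), (bA, (bH, bV, bD)) in zip(ca, cb):
            w = fuzzy_weight(np.abs(aH) + np.abs(aV) + np.abs(aD),
                             np.abs(bH) + np.abs(bV) + np.abs(bD))
            fused.append((w * aA + (1 - w) * bA,
                          (w * aH + (1 - w) * bH,
                           w * aV + (1 - w) * bV,
                           w * aD + (1 - w) * bD)))
        out = pywt.iswt2(fused, wavelet)
        return np.clip(out, 0, 255).astype(np.uint8)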

RESULT AND DISCUSSION

The table below shows a comparison of various image fusion methods using parameters such as peak signal-to-noise ratio (PSNR) and root mean square error (RMSE). The RMSE is calculated as the root mean square error between the corresponding pixels of the reference image and the fused image; it measures the change in the pixels caused by processing. The PSNR is high when the fused image and the reference image are similar: the smaller the difference between the original and the reconstructed image, the larger the PSNR value. Other performance measures include entropy, standard deviation, execution time and the error image. Simple formulations of RMSE and PSNR are sketched below.
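For reference, the two measures used in the table can be computed as in the following sketch, assuming an 8-bit reference image is available.

    import numpy as np

    def rmse(reference: np.ndarray, fused: np.ndarray) -> float:
        """Root mean square error between the reference image and the fused image."""
        diff = reference.astype(np.float64) - fused.astype(np.float64)
        return float(np.sqrt(np.mean(diff ** 2)))

    def psnr(reference: np.ndarray, fused: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB; larger means the images are more similar."""
        e = rmse(reference, fused)
        return float("inf") if e == 0 else 20.0 * np.log10(peak / e)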

CONCLUSION

The process of image fusion combines the input images and extracts the useful information to give the resultant image. The Fuzzylet fusion algorithm uses a fuzzy inference system together with the SWT to perform image fusion. This system gives good results by combining the advantages of both the SWT and fuzzy logic: it gives a high PSNR value and a low RMSE value, better than those obtained when the SWT and fuzzy logic are used independently. Thus it can be used in many applications in the image processing domain.

Tables at a glance

Table 1: Comparison of image fusion methods using PSNR and RMSE
 

Figures at a glance

Figure 1: Pre-processing steps of image fusion
 

References