ISSN: 2319-9873
Department of Electrical Engineering, Urmia Branch, Islamic Azad University, Urmia, Iran.
Received: 10/01/2014; Revised: 25/02/2014; Accepted: 27/02/2014
We propose a new information divergence measure that satisfies these requirements. A decision map is then generated by applying the divergence to measure the coherence of the source activity maps at the pixel level. We further segment the decision map into two regions: the set of pixels whose activity patterns are similar in all the source images, and the set of pixels whose activity patterns are different. Our fusion scheme then finds the solution to an optimization problem. Averaging results in reduced contrast for all the patterns which appear in only one source; the maximum selection scheme, on the other hand, produces mosaic-like artifacts due to the high-frequency noise introduced by a sudden switch between two sets of source wavelet coefficients.
Keywords: Medical image, algorithms
With the development of new imaging methods in medical diagnostics arises the need for a meaningful and spatially correct combination of all available image datasets. Examples of imaging devices include computed tomography (CT), magnetic resonance imaging (MRI), and the newer positron emission tomography (PET) [1-3]. Image fusion of a CT image with our multiscale information theoretic approach is illustrated; for comparison, fusion by pixel averaging and by the multiscale-based maximum selection scheme is also shown. Image fusion is often involved in remote sensing: modern spectral sensors gather up to several hundred spectral bands, which can either be visualized and processed individually or be fused into a single image, depending on the image analysis task. Fusion of two bands from a multispectral sensor with our multiscale information theoretic approach is likewise illustrated [2-4].

A separable two-dimensional convolution can be factored into one-dimensional convolutions along the rows and columns of the image. The rows are first convolved with the analysis filters and subsampled; the columns of the two resulting output images are then convolved and subsampled in turn. We denote the image obtained by inserting a row of zeros and a column of zeros between each pair of consecutive rows and columns; the image at a given scale is recovered from the coarser-scale approximation and the wavelet coefficients. These four two-dimensional separable convolutions can also be factored into six groups of one-dimensional convolutions along the rows and columns of a digital image [5-9]. A biorthogonal wavelet image representation of a given depth is computed by iterating this decomposition, and the original digital image is recovered from the wavelet representation by iterating the reconstruction.

Consider digital images of the same scene taken from different sensors. For the pixel-level image fusion problem, we assume all the source images are registered, so that differences in resolution, coverage, treatment of a theme, and characteristics of the image acquisition methods are eliminated. The goal of our fusion algorithm is to construct a composite image such that the information captured in all the source images is combined and the source image data is thereby compressed. To achieve this goal [8], we apply an information theoretic fusion approach based on the biorthogonal wavelet image representation defined above. With no loss of generality, for each pixel an activity pattern vector is defined as the vector of energy concentrated at that pixel across scales.
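The decomposition and the activity pattern vectors can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it assumes the PyWavelets package (pywt), the 'bior2.2' biorthogonal wavelet, and only approximate spatial alignment after boundary padding; the function name activity_patterns is illustrative.

```python
import numpy as np
import pywt

def activity_patterns(image, wavelet="bior2.2", depth=3):
    """Biorthogonal wavelet decomposition followed by a per-pixel
    activity pattern: the energy of the detail coefficients at each
    scale, upsampled back onto the original image grid."""
    coeffs = pywt.wavedec2(image, wavelet, level=depth)
    h, w = image.shape
    patterns = np.zeros((depth, h, w))
    # coeffs[0] is the coarse approximation; coeffs[1:] hold the
    # (horizontal, vertical, diagonal) detail images, coarse to fine.
    for k, (ch, cv, cd) in enumerate(coeffs[1:]):
        energy = ch**2 + cv**2 + cd**2        # energy at this scale
        factor = 2 ** (depth - k)             # upsampling factor
        up = np.kron(energy, np.ones((factor, factor)))
        patterns[k] = up[:h, :w]              # crop the boundary padding
    return patterns                           # shape: (scales, h, w)
```

Each column patterns[:, y, x] is then one activity pattern vector, the per-scale energy concentrated at pixel (x, y).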
Activity maps characterize the inherent information pattern in the source images. To fuse the source wavelet coefficients, it is necessary to compare the activity patterns for every pixel. For instance, if the activity patterns are different in some region, taking the average of the wavelet coefficients to generate a composite image is a poor choice, since it would create artifacts. On the other hand, if the activity patterns are similar in that region, taking the average injects more information into the fused image owing to the contributions from different sources. A reasonable measure for comparing activity patterns should satisfy the following properties:

- It should be capable of measuring the difference between two or more activity patterns.
- It should be nonnegative and symmetric.
- It should vanish to zero if and only if the activity patterns are the same.
- It should reach its maximum value when the activity patterns are degenerate distributions.

This constraint ensures that the solution stays within the closure of the admissible set; no image outside the scenario we are contemplating is considered. The goal of image fusion is to integrate complementary information from multi-sensor data such that the fused images are more suitable for human visual perception. Consider the digital images generated by different sensors. Our information theoretic fusion approach first calculates a biorthogonal wavelet image representation for each source, then forms a pixel-level activity map. We define a normalized activity pattern for each pixel. To fuse the source wavelet coefficients, we compare the normalized activity patterns of all the source images in terms of the divergence and create a selection map. The selection map is further segmented into two decision regions by thresholding at the mean value of the selection map. The fused image can then be obtained by searching for each wavelet coefficient individually; it is the composite image, defined by its wavelet coefficients, that satisfies our fusion criteria.

Experiments in multi-sensor navigation image fusion, multi-modality medical image fusion, multi-spectral remote sensing image fusion, and multi-focus optical image fusion are now presented to illustrate the fusion scheme defined above. To help helicopter pilots navigate under poor visibility conditions, such as fog or heavy rain, helicopters are equipped with several imaging sensors, which can be viewed by the pilot in a helmet-mounted display. A typical sensor suite includes both a low-light television (LLTV) sensor and a thermal imaging forward-looking infrared (FLIR) sensor. In the current configuration, the pilot can choose one of the two sensors to watch in the display. Sample LLTV and FLIR images are shown, respectively. A possible improvement is to combine both imaging sources into a single fused image. Fusion by standard techniques, pixel averaging and the multiscale-based maximum selection scheme, is shown for comparison. Note that the pixel averaging result has a muddy appearance, due primarily to the reduced contrast for patterns which appear in only one source.

Due to the limited depth of focus of optical lenses, it is often impossible to obtain an image which contains all relevant objects 'in focus'. One possibility to overcome this problem is to take several pictures with different focus points and combine them into a single frame which finally contains the focused regions of all input images. This illustrates our multiscale information theoretic fusion approach; for comparison, fusion by pixel averaging and by the multiscale-based maximum selection scheme is also shown.
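The paper's specific divergence is not reproduced in this excerpt. As a stand-in that satisfies the listed properties (nonnegative, symmetric, zero only for identical patterns, bounded above), the following sketch uses the Jensen-Shannon divergence between normalized activity patterns and thresholds the resulting selection map at its mean, as described above; the function names are illustrative.

```python
import numpy as np

def jensen_shannon(p, q, eps=1e-12):
    """Symmetric, nonnegative divergence between two distributions;
    zero iff p == q, bounded above by log 2."""
    p = p + eps
    q = q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b), axis=0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def decision_map(patterns_a, patterns_b):
    """patterns_*: (scales, h, w) activity patterns of two sources.
    Returns a boolean map: True where the sources disagree."""
    # Normalize each pixel's activity vector so it sums to one.
    pa = patterns_a / (patterns_a.sum(axis=0, keepdims=True) + 1e-12)
    pb = patterns_b / (patterns_b.sum(axis=0, keepdims=True) + 1e-12)
    selection = jensen_shannon(pa, pb)       # per-pixel divergence
    # Segment the selection map at its mean value, as in the text.
    return selection > selection.mean()
```

With such a map, wavelet coefficients can be averaged where the sources agree and selected from the more active source where they disagree, mirroring the fusion rule described above.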
It is difficult to define a general performance measure for fusion algorithms. Some performance metrics which are widely used in signal and image processing do not fit the application of image fusion. One such example is mean square error. Consider digital images $S_1, \dots, S_N$ of the same scene taken from different sensors and a fusion result $F$. A cost function characterizing the mean square error between the fused image and the source inputs may be defined as $C(F) = \sum_{i=1}^{N} \lVert F - S_i \rVert_2^2$, where $\lVert \cdot \rVert_2$ denotes the $l_2$ norm. It is easy to verify that fusion by pixel averaging, i.e. $F = \frac{1}{N}\sum_{i=1}^{N} S_i$, minimizes this cost.
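A one-line check of this claim, using the symbols introduced above: the cost is a convex quadratic in $F$, so its gradient vanishes at the minimizer,

$$\nabla_F C(F) = 2\sum_{i=1}^{N}\bigl(F - S_i\bigr) = 0 \;\Longrightarrow\; F = \frac{1}{N}\sum_{i=1}^{N} S_i,$$

which is exactly the pixel average. This is why minimizing mean square error cannot distinguish a good fusion from plain averaging.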
The pixel averaging method is nevertheless not accepted as the best fusion scheme. As noted earlier, the pixel averaging result has a muddy appearance; this is due primarily to the fact that averaging reduces contrast for all the patterns which appear in only one source and produces a mixture of different patterns which come from different sources. (Figure 1 illustrates enhancement of a grayscale image by the Otsu algorithm.) Another possible candidate performance measure for image fusion is the correlation between different fusion results and the source images; let $S_1$ and $S_2$ denote two source images.
Denote the fusion outputs by pixel averaging, wavelet-based maximum selection, and the proposed method by $F_{\mathrm{avg}}$, $F_{\mathrm{max}}$, and $F_{\mathrm{prop}}$, respectively. The correlation between a fused image $F$ and a source image $S$ is defined as

$$\mathrm{corr}(F, S) = \frac{\sum_{x}\bigl(F(x) - \bar{F}\bigr)\bigl(S(x) - \bar{S}\bigr)}{\sqrt{\sum_{x}\bigl(F(x) - \bar{F}\bigr)^2 \sum_{x}\bigl(S(x) - \bar{S}\bigr)^2}},$$

where $\bar{F}$ and $\bar{S}$ stand for the mean values. A performance metric based on correlation may be defined as the sum of $\mathrm{corr}(F, S_i)$ over the source images, where $F$ is the fusion output and the $S_i$ are the source images. Maximizing correlation is closely related to minimizing the mean square error, and fusion by pixel averaging usually maximizes the overall correlation. The correlations between the different fusion outputs and the source images are listed for the experiments of multi-sensor, multi-modality, multi-spectral, and multi-focus image fusion. Following the same argument as for the mean square error cost function, we conclude that correlation is also not a good choice for measuring the performance of image fusion.

If the "ground truth" of the fusion result is known, we can perform a quantitative comparison of different fusion algorithms. For the above experiment of multi-focus fusion, an ideal image should contain both well-focused clocks; it may be constructed manually by cut and paste, as demonstrated. A performance measure can then be defined as the standard deviation of the error between the ideal image and each fusion result. The proposed information theoretic approach clearly generates the fusion result that is closest to the ideal image among the outputs of pixel averaging and the wavelet-based maximum selection scheme. It has to be pointed out that this method is restricted to specially constructed images and is generally not applicable to real multi-sensor data, where an ideal fusion cannot be constructed. Figure 2 shows noise addition and the uncorrelated image extracted from scrambled data. There is no general quantitative performance measure for image fusion algorithms in the current literature except for specific applications. The resulting decision map makes our approach more effective in preserving significant features from all the sources without suffering from artifacts.
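The two evaluation metrics discussed above, correlation against the sources and standard deviation of the error against a known ideal image, reduce to a few lines; this is a sketch with illustrative names, not the paper's evaluation code.

```python
import numpy as np

def correlation(f, s):
    """Normalized cross-correlation between a fused image f and a
    source image s, as defined above."""
    fc = f - f.mean()
    sc = s - s.mean()
    return np.sum(fc * sc) / np.sqrt(np.sum(fc**2) * np.sum(sc**2))

def overall_correlation(fused, sources):
    """Correlation-based metric: sum over all source images."""
    return sum(correlation(fused, s) for s in sources)

def error_std(fused, ideal):
    """Standard deviation of the error against a known ideal image;
    only applicable when a ground-truth fusion can be constructed."""
    return np.std(fused - ideal)
```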
We have successfully tested the new scheme on fusion of multi-sensor, multi-modality, multi-spectral, and multi-focus images. Quantitative performance measures for fusion of synthetic test images and visual evaluation of real multi-sensor image fusion demonstrate that the presented algorithm clearly outperforms the pixel averaging and wavelet-based maximum selection fusion schemes.
In this paper, we derive a new multiscale image fusion algorithm which aims at integrating complementary information from multi-sensor data so that the fused images are more suitable for visual perception. We formulate image fusion as an optimization problem to which we propose a solution. As a first step, a biorthogonal wavelet transform of each source image is calculated to generate a scale space representation. Biorthogonal wavelets can be synthesized with perfect reconstruction filters having linear phase, which is a desirable property for image fusion applications. In contrast to the "choose max" type of selection rules, our proposed technique relies on the intrinsic statistical structure of the sources. Using spatially specific wavelet coefficients from fine to coarse scales, we construct activity pattern vectors, which we compare using a new divergence.
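Tying the steps together, here is a minimal end-to-end sketch of the pipeline as summarized above. The coefficient-selection rule shown, averaging where per-subband energies agree and picking the more active source where they disagree, is a simplified stand-in for the paper's divergence-based decision map and optimization, and all names are illustrative.

```python
import numpy as np
import pywt

def fuse(img_a, img_b, wavelet="bior2.2", depth=3):
    """End-to-end sketch: decompose both sources, compare activity,
    fuse the coefficients, and reconstruct the composite image."""
    ca = pywt.wavedec2(img_a, wavelet, level=depth)
    cb = pywt.wavedec2(img_b, wavelet, level=depth)
    fused = [0.5 * (ca[0] + cb[0])]          # average the approximations
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        out = []
        for a, b in zip((ha, va, da), (hb, vb, db)):
            ea, eb = a**2, b**2              # local energy per subband
            diff = np.abs(ea - eb)
            similar = diff <= diff.mean()    # crude per-subband decision
            pick = np.where(ea >= eb, a, b)  # select the more active source
            out.append(np.where(similar, 0.5 * (a + b), pick))
        fused.append(tuple(out))
    return pywt.waverec2(fused, wavelet)
```

Averaging in the similar regions suppresses noise, while selection in the dissimilar regions preserves features present in only one source, which is the behavior the decision map is designed to produce.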