ISSN: Online 2319-8753, Print 2347-6710
R. Gunachitra, Mr. D. Suresh

International Journal of Innovative Research in Science, Engineering and Technology
Abstract

Resolution enhancement (RE) schemes that are not based on wavelets suffer from the drawback of losing high-frequency content, which results in blurring. Here, we propose an adaptive wavelet transform (AWT) approach for resolution enhancement of satellite images. A satellite input image is decomposed by the dual-tree complex wavelet transform (DT-CWT), which is nearly shift invariant, to obtain high-frequency sub-bands. The high-frequency sub-bands and the low-resolution (LR) input image are interpolated using the Lanczos interpolator. The high-frequency sub-bands are then passed through a nonlocal-means (NLM) filter to cater for the artifacts generated by the DT-CWT despite its near shift invariance. The filtered high-frequency sub-bands and the LR input image are combined using the inverse DT-CWT to obtain a resolution-enhanced image. We also propose the DWT for the sub-band interpolation process, and we compare wavelet-based resolution enhancement of satellite images with ARSIS-based methods.
Keywords 
Dual-tree complex wavelet transform (DT-CWT), Lanczos interpolation, resolution enhancement (RE), shift variance, pan-sharpening methods, ATWT, MTF.
I. INTRODUCTION
RESOLUTION (spatial, spectral, and temporal) is the limiting factor for the utilization of remote sensing data (satellite imaging, etc.). The spatial and spectral resolutions of unprocessed satellite images are related to each other: a high spatial resolution is associated with a low spectral resolution and vice versa [1]. Therefore, spectral, as well as spatial, resolution enhancement (RE) is desirable.
Interpolation has been widely used for RE [2], [3]. Commonly used interpolation techniques are based on nearest neighbors and include nearest-neighbor, bilinear, bicubic, and Lanczos interpolation. Lanczos interpolation (a windowed form of the sinc filter) is superior to its counterparts (nearest-neighbor, bilinear, and bicubic) due to its increased ability to detect edges and linear features. It also offers the best compromise in terms of reduction of aliasing, sharpness, and ringing [4]. Methods based on vector-valued image regularization with partial differential equations (VVIR-PDE) [5] and on inpainting and zooming using sparse representations [6] are now the state of the art in the field (mostly applied to image inpainting, but they can also be seen as interpolation). RE schemes that are not based on wavelets suffer from the drawback of losing high-frequency content, which results in blurring.
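To illustrate the windowed-sinc idea behind Lanczos interpolation, the following sketch (plain NumPy; the helper names are ours, not code from the cited works) builds the Lanczos-a kernel and evaluates a 1-D interpolant at fractional positions:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Windowed sinc: L(x) = sinc(x) * sinc(x/a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_resample_1d(samples, positions, a=3):
    """Evaluate a Lanczos-a interpolant of `samples` at fractional `positions`."""
    samples = np.asarray(samples, dtype=float)
    result = np.zeros(len(positions))
    for i, p in enumerate(positions):
        # Only the 2a taps nearest to p contribute (kernel support is (-a, a)).
        lo = int(np.floor(p)) - a + 1
        hi = int(np.floor(p)) + a
        acc = 0.0
        for k in range(lo, hi + 1):
            if 0 <= k < len(samples):
                acc += samples[k] * lanczos_kernel(p - k, a)
        result[i] = acc
    return result
```

Because the kernel is 1 at the origin and 0 at all other integers, the interpolant reproduces the original samples exactly at integer positions, which is the behavior one expects from an interpolating (rather than smoothing) filter.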
RE in the wavelet domain is a new research area, and recently, many algorithms [discrete wavelet transform (DWT) [7], stationary wavelet transform (SWT) [8], and dual-tree complex wavelet transform (DT-CWT) [9]] have been proposed [7]–[11]. An RE scheme was proposed in [9] using DT-CWT and bicubic interpolation, and its results were shown to be superior to those of the conventional schemes (i.e., nearest-neighbor, bilinear, and bicubic interpolation and wavelet zero padding). More recently, in [7], a scheme based on DWT and bicubic interpolation was proposed, and its results were compared with the conventional schemes and the state-of-the-art schemes (wavelet zero padding and cyclic spinning [12] and DT-CWT [9]). Note that the DWT is shift variant, which causes artifacts in the resolution-enhanced image, and lacks directionality; the DT-CWT, however, is almost shift and rotation invariant [13].
DWT-based RE schemes generate artifacts due to the shift-variant property of the DWT. In this letter, a DT-CWT- and nonlocal-means-based RE (DT-CWT-NLM-RE) technique is proposed, using the DT-CWT, Lanczos interpolation, and NLM filtering. Note that the DT-CWT is nearly shift invariant and directionally selective. Moreover, the DT-CWT preserves the usual properties of perfect reconstruction with well-balanced frequency responses [13], [14]. Consequently, the DT-CWT gives promising results after the modification of the wavelet coefficients and produces fewer artifacts than the traditional DWT. Since the Lanczos filter offers reduced aliasing, good sharpness, and minimal ringing, it is a good choice for RE. NLM filtering [15] is used to further enhance the performance of DT-CWT-NLM-RE by reducing the artifacts. The results (for spatial RE of optical images) are compared with the best performing techniques [5], [7]–[9].
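To make the wavelet-domain RE pipeline concrete, the following sketch substitutes a single-level Haar DWT for the DT-CWT and nearest-neighbour upsampling for the Lanczos interpolator (both are simplifications for brevity, and the function names are ours): decompose the LR image, upsample the high-frequency sub-bands, reuse the LR image itself in place of the interpolated low-pass band, and apply the inverse transform to obtain an image of twice the size.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar decomposition into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a + b - c - d) / 4
    hl = (a - b + c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: perfect reconstruction from the four sub-bands."""
    h, w = ll.shape
    img = np.zeros((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll + lh - hl - hh
    img[1::2, 0::2] = ll - lh + hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

def upsample2(x):
    """Nearest-neighbour 2x upsampling (stand-in for Lanczos interpolation)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def wavelet_re(lr):
    """Wavelet-domain RE: interpolate the high-frequency sub-bands, use the
    LR input image as the low-pass band, and invert the transform."""
    _, lh, hl, hh = haar_dwt2(lr.astype(float))
    return haar_idwt2(lr.astype(float), upsample2(lh), upsample2(hl), upsample2(hh))
```

In the proposed scheme the Haar DWT would be replaced by the DT-CWT, the upsampler by the Lanczos interpolator, and an NLM filtering stage would be applied to the interpolated high-frequency sub-bands before the inverse transform.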
II. PRELIMINARIES 
A. NLM Filtering 
The NLM filter (an extension of neighborhood filtering algorithms) is based on the assumption that image content is likely to repeat itself within some neighborhood in the image [15] and in neighboring frames [16]. It computes the denoised pixel X(p, q) as a weighted sum of the pixels surrounding Y(p, q), within the frame and in neighboring frames [16].

Fig. 1. Block diagram of the proposed DT-CWT-RE algorithm.

This feature provides a way to estimate the pixel value from noise-contaminated images. In a 3-D NLM algorithm, the estimate of a pixel at position (p, q) is

X̂(p, q) = Σ_m Σ_{(r,s)∈N} K_m(p, q, r, s) Y_m(r, s)   (1)
where m is the frame index, and N represents the neighborhood of the pixel at location (p, q). The K values are the filter weights,

K_m(p, q, r, s) = exp(−f(V(p, q), V(r, s)) / σ²)   (2)

where V is the window (usually a square window centered at the pixels Y(p, q) and Y(r, s)) of pixel values from a geometric neighborhood of the pixels Y(p, q) and Y(r, s), σ is the filter coefficient, and f(·) is a geometric distance function. K is inversely proportional to the distance between Y(p, q) and Y(r, s).
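A minimal pixel-wise NLM sketch consistent with (1) and (2), assuming a single frame and a squared-difference patch distance (the patch size, search radius, and σ below are illustrative, not the values used in the experiments):

```python
import numpy as np

def nlm_denoise(y, patch=3, search=5, sigma=0.1):
    """Nonlocal means: each output pixel is a weighted average of pixels
    whose surrounding patches (windows V) look similar."""
    h, w = y.shape
    pad = patch // 2
    yp = np.pad(y, pad, mode='reflect')
    out = np.zeros_like(y, dtype=float)
    half = search // 2
    for p in range(h):
        for q in range(w):
            vp = yp[p:p + patch, q:q + patch]          # window V around (p, q)
            num = den = 0.0
            for r in range(max(0, p - half), min(h, p + half + 1)):
                for s in range(max(0, q - half), min(w, q + half + 1)):
                    vq = yp[r:r + patch, s:s + patch]  # window V around (r, s)
                    # weight K: small patch distance -> weight near 1
                    k = np.exp(-np.sum((vp - vq) ** 2) / sigma ** 2)
                    num += k * y[r, s]
                    den += k
            out[p, q] = num / den                       # weighted average, eq. (1)
    return out
```

On a constant image every patch matches every other, all weights are equal, and the filter leaves the image unchanged, which is a quick sanity check of the normalization.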
B. NLM-RE
RE is achieved by modifying the NLM with the following observation model [17]:

L_m = I J Q_m X + n   (3)

where L_m is the vectorized low-resolution (LR) frame, I is the decimation operator, J is the blurring matrix, Q_m is the warping matrix, X is the vectorized high-resolution (HR) image, and n denotes Gaussian white noise. The aim is to restore X from the series of frames L_m. The penalty function Φ is defined as

Φ(x) = Σ_m ‖I J Q_m x − Y_m‖²₂ + λ R(x)   (4)
where R is a regularization term, λ is the scale coefficient, x is the targeted image, and Y_m is the LR input image. In [17], a total variation kernel is chosen for R, acting as an image deblurring kernel. To simplify the algorithm, the problem in (4) is separated by first minimizing

Ẑ = arg min_z Σ_m ‖I Q_m z − Y_m‖²_{O_m}   (5)
where Z is the blurred version of the targeted image, and O_m is the weight matrix, followed by minimizing the deblurring equation [11], i.e.,

X̂ = arg min_x ‖J x − Z‖²₂ + λ R(x)   (6)
A pixel-wise solution of (5) can be obtained as

Ẑ(p^r, q^r) = [Σ_m Σ_{(r,s)} K_m(p^r, q^r, r, s) Y_m(r, s)] / [Σ_m Σ_{(r,s)} K_m(p^r, q^r, r, s)]   (7)
[Figure: (i) Proposed DT-CWT-RE. (j) Proposed DT-CWT-NLM-RE.]

where the superscript r refers to the HR coordinate. Instead of estimating the target pixel position in nearby frames, this algorithm considers all possible positions where the pixel may appear; therefore, motion estimation is avoided [11]. Equation (7) apparently resembles (1), but it has some differences. The weight estimation in (2) must be modified because the matrix O corresponding to K has to be of the same size as the HR image; therefore, a simple upscaling of the patch V is needed before computing K. The total number of pixels Y in (7) should equal the number of weights K; thus, a zero-padding interpolation is applied to L before fusing the images [11].
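The zero-padding interpolation mentioned above can be sketched as follows (an assumed helper, not code from [11]): each LR pixel is placed on the HR grid and the in-between positions stay at zero, so that the number of pixels Y matches the number of weights K in (7).

```python
import numpy as np

def zero_pad_upscale(lr, factor=2):
    """Zero-padding interpolation: copy each LR pixel onto the HR grid and
    leave the remaining HR positions at zero."""
    h, w = lr.shape
    hr = np.zeros((h * factor, w * factor), dtype=float)
    hr[::factor, ::factor] = lr
    return hr
```

The NLM fusion (7) then fills the zero positions with weighted averages of the known samples, which is why this trivial interpolator is sufficient here.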
III. PROPOSED IMPLEMENTATION
The ARSIS-based methods are inherently built to respect the consistency property. Consequently, the present work focuses on the synthesis property. This property has been expressed in general terms [11], [13], [15], [16]. We propose here a precise definition of the synthesis property in terms of geometry: the restitution of the spatial organization of the radiometry in the observed landscape by the synthesized image should be as close as possible to the organization that the corresponding sensor would observe at the highest spatial resolution, if such a sensor existed.
The work presented here aims at including the difference in MTF between PAN and MS images in an existing ARSIS-based fusion method, in order to demonstrate that taking this difference into account leads to a better respect of the synthesis property from the geometrical point of view, without degrading the quality of the other aspects of the synthesized images. The fusion method used in this demonstration is ATWT-M3, where ATWT denotes the MSM and means "à trous" (with holes) wavelet transform, which is an undecimated transform, and M3 is the IMM described by [12], [13], and [19]. The HRIMM is identical to the IMM, as in most published works. This method has been shown to provide good results in most cases. It is rather simple and allows the assessment of the impact resulting from the MTF adaptation. While the results will not be the same as those obtained here, our method for accounting for the difference in MTF may easily be applied to other ARSIS-based methods. The MTFs for the four MS channels of the Pleiades sensor, and that for the PAN channel, are functions of the spatial frequency normalized by the sampling frequency of the PAN image. Consequently, the relative Nyquist frequency for PAN is 0.5 and that for MS is 0.125. The spatial resolution of the PAN image (70 cm) is four times better than that of the MS image (280 cm). The MTFs mainly account for detector performance and blurring, since the MTF for each MS channel is almost equal to zero for relative frequencies close to 0.25. This means that one cannot distinguish details less than 280 cm in size in the MS images.
On the contrary, the MTF of PAN is large (equal to 0.4) for this frequency, and such details are highly visible. The difference in MTF between low- and high-spatial-resolution images is evidenced in this figure: the MTF of MS decreases more sharply with frequency than that of PAN. According to the synthesis property, the synthesized images should exhibit the typical MTF of real images at this resolution. However, without modification of the MTF during the fusion process, the resulting MTF exhibits a discontinuity, shown in broad dash in Fig. 4. In this figure are drawn the schematic representations of the MTF for the following: 1) the LR image; 2) the HR image; and 3) the synthesized image without modification of the MTF. The curves for the MTF at LR and HR are similar to those for MS and PAN. One can see the discontinuity in the MTF of the synthesized image at the relative frequency equal to 0.125. Below this value, the MTF is similar to that of the MS at LR. Above this value, the MTF is similar to that of an HR image. Such a discontinuity should not exist if the synthesis property were respected. Its existence partly explains the several artifacts observed in synthesized images. It also illustrates the expected benefit of taking the difference in MTF into account during the fusion process.

In an illustrative way of speaking, we can say that the solution consists in "raising" the MTF of the MS frequencies in the range [0, 0.125] so that it is close to that of an HR image. Doing so provides an MTF closer to a "real" MTF and a similar contrast in the image for the same frequency, regardless of its origin, PAN or MS. It is an adaptation of the original LR MTF of the MS images to an HR MTF for the synthesized images. This is why we use the term "MTF adaptation."
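A rough Fourier-domain sketch of such an MTF adaptation, assuming radially symmetric Gaussian-shaped model MTFs (the true Pleiades MTFs are sensor-specific measured curves, so everything below is illustrative): the LR MTF is divided out and the HR MTF applied, "raising" the low-frequency response of the MS band toward that of an HR image.

```python
import numpy as np

def mtf_adapt(ms, mtf_lr, mtf_hr):
    """Replace the LR MTF of `ms` by the HR MTF in the Fourier domain.
    `mtf_lr` and `mtf_hr` map normalized radial frequency -> gain in [0, 1]."""
    F = np.fft.fft2(ms)
    # radial frequency normalized by the sampling frequency
    fy = np.fft.fftfreq(ms.shape[0])[:, None]
    fx = np.fft.fftfreq(ms.shape[1])[None, :]
    rho = np.sqrt(fx ** 2 + fy ** 2)
    # divide out the LR MTF (clipped to avoid amplifying pure noise) and
    # apply the HR MTF
    gain = mtf_hr(rho) / np.maximum(mtf_lr(rho), 1e-3)
    return np.real(np.fft.ifft2(F * gain))
```

For example, `mtf_lr = lambda r: np.exp(-r**2 / 0.005)` decays much faster than `mtf_hr = lambda r: np.exp(-r**2 / 0.1)`, so the gain exceeds 1 at mid frequencies, mimicking the "raising" described above. The clipping constant also shows why the SNR can decrease: frequencies where the LR MTF is near zero get amplified together with their noise.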
A new measure, the MTF normalized deviation MTFdev, has been proposed to evaluate the performance of a fusion method with respect to the synthesis property expressed in terms of geometry. It highlights strengths and weaknesses in the MTF of the synthesized image. This measure has been used here in the case of simulated images. It could also be used in the general case, where no reference image is available at HR, to assess the synthesis property. One conclusion of our work is that the degradation of resolution should take the MTF into account in an accurate way, as done in the pioneering work [24].
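The exact definition of MTFdev is not reproduced in this section; as a rough proxy under that caveat, the following sketch compares the radially averaged spectral amplitudes of the fused image and an HR reference and reports their mean normalized deviation, which is zero when the two spectra (and hence the effective MTFs) coincide:

```python
import numpy as np

def mtf_dev(fused, reference):
    """Mean normalized deviation between radially averaged spectral
    amplitudes (a proxy for comparing effective MTFs)."""
    def radial_amplitude(img):
        A = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        cy, cx = np.array(A.shape) // 2
        y, x = np.indices(A.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        # average the amplitude over rings of equal radial frequency
        sums = np.bincount(r.ravel(), A.ravel())
        counts = np.bincount(r.ravel())
        return sums / np.maximum(counts, 1)
    a = radial_amplitude(fused)
    b = radial_amplitude(reference)
    n = min(len(a), len(b))
    return np.mean(np.abs(a[:n] - b[:n]) / np.maximum(b[:n], 1e-9))
```

A fused image whose spectrum shows the discontinuity described above would score a visibly larger deviation than one produced with MTF adaptation.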
We have demonstrated that taking into account the difference in MTF between MS and PAN images leads to synthesized images of better quality. This work can be applied to other concepts and not only to ARSIS [16]. It could be interesting to observe the difference in behavior of this technique depending on the concept to which it is applied. The MTF normalized deviation is one means of observation. We have shown how the difference in MTF between LR and HR can be incorporated into a "standard" fusion method based on the ARSIS concept. For three cases, we have observed a better restitution of the geometry and an improvement in all indices classically used in the quality budget of pan-sharpening. These results demonstrate the following: 1) the benefit of taking the MTF into account in the fusion process and 2) the value of our method for MTF adaptation. An improvement is also observed in the case of noisy images, although the SNR is decreased by the MTF adaptation.
IV. RESULTS AND DISCUSSION 
V. CONCLUSION
We also evaluated the impact of the degree of the spline function used to resample images. We observed that the higher the degree, the better the MTF of the fused image. The difference is noticeable for low degrees and becomes negligible when the degree is greater than four. We show that pan-sharpening methods benefit from the use of the MTF. Reference [36] also shows that quality assessment benefits from consideration of the MTF. The pan-sharpening context (method and quality assessment) seems to be inseparable from the use and consideration of the MTF. Although this study is quite limited in methods and data, the present results are encouraging and may constitute a new way to improve the restitution of geometrical features by already efficient fusion methods.
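The spline-degree experiment can be reproduced in outline with SciPy's B-spline resampler, whose `order` parameter (0 to 5) is the degree of the interpolating spline; the test image and degrees below are illustrative, not the data used in this study:

```python
import numpy as np
from scipy import ndimage

def resample_with_spline(img, factor, degree):
    """Resample `img` by `factor` using a B-spline of the given degree
    (scipy.ndimage.zoom `order`, 0..5); higher degrees better preserve
    mid-frequency content and hence the MTF of the fused image."""
    return ndimage.zoom(img, factor, order=degree)

# compare a low-degree and a high-degree resampling of a smooth pattern
y, x = np.mgrid[0:16, 0:16]
img = np.sin(x / 3.0) * np.cos(y / 4.0)
linear = resample_with_spline(img, 2, 1)   # degree 1 (bilinear)
quintic = resample_with_spline(img, 2, 5)  # degree 5
```

Inspecting the spectra of `linear` and `quintic` shows the low-degree result attenuating mid and high frequencies more strongly, which matches the observation that degrees above four bring negligible further gain.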
References 
[1] Online. Available: http://www.satimagingcorp.com/
[2] Y. Piao, I. Shin, and H. W. Park, "Image resolution enhancement using inter-subband correlation in wavelet domain," in Proc. Int. Conf. Image Process., San Antonio, TX, 2007, pp. I-445–I-448.
[3] C. B. Atkins, C. A. Bouman, and J. P. Allebach, "Optimal image scaling using pixel classification," in Proc. Int. Conf. Image Process., Oct. 7–10, 2001, pp. 864–867.
[4] A. S. Glassner, K. Turkowski, and S. Gabriel, "Filters for common resampling tasks," in Graphics Gems. New York: Academic, 1990, pp. 147–165.
[5] D. Tschumperle and R. Deriche, "Vector-valued image regularization with PDE's: A common framework for different applications," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 4, pp. 506–517, Apr. 2005.
[6] M. J. Fadili, J. Starck, and F. Murtagh, "Inpainting and zooming using sparse representations," Comput. J., vol. 52, no. 1, pp. 64–79, Jan. 2009.
[7] H. Demirel and G. Anbarjafari, "Discrete wavelet transform-based satellite image resolution enhancement," IEEE Trans. Geosci. Remote Sens., vol. 49, no. 6, pp. 1997–2004, Jun. 2011.
[8] H. Demirel and G. Anbarjafari, "Image resolution enhancement by using discrete and stationary wavelet decomposition," IEEE Trans. Image Process., vol. 20, no. 5, pp. 1458–1460, May 2011.
[9] H. Demirel and G. Anbarjafari, "Satellite image resolution enhancement using complex wavelet transform," IEEE Geosci. Remote Sens. Lett., vol. 7, no. 1, pp. 123–126, Jan. 2010.
[10] H. Demirel and G. Anbarjafari, "Image super resolution based on interpolation of wavelet domain high frequency subbands and the spatial domain input image," ETRI J., vol. 32, no. 3, pp. 390–394, Jan. 2010.
[11] H. Zheng, A. Bouzerdoum, and S. L. Phung, "Wavelet based nonlocal-means super-resolution for video sequences," in Proc. IEEE 17th Int. Conf. Image Process., Hong Kong, Sep. 26–29, 2010, pp. 2817–2820.
[12] A. Gambardella and M. Migliaccio, "On the superresolution of microwave scanning radiometer measurements," IEEE Geosci. Remote Sens. Lett., vol. 5, no. 4, pp. 796–800, Oct. 2008.
[13] I. W. Selesnick, R. G. Baraniuk, and N. G. Kingsbury, "The dual-tree complex wavelet transform," IEEE Signal Process. Mag., vol. 22, no. 6, pp. 123–151, Nov. 2005.
[14] J. L. Starck, F. Murtagh, and J. M. Fadili, Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity. Cambridge, U.K.: Cambridge Univ. Press, 2010.
[15] A. Buades, B. Coll, and J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Model. Simul., vol. 4, no. 2, pp. 490–530, 2005.
[16] A. Buades, B. Coll, and J. M. Morel, "Denoising image sequences does not require motion estimation," in Proc. IEEE Conf. Audio, Video Signal Based Surveill., 2005, pp. 70–74.
[17] M. Protter, M. Elad, H. Takeda, and P. Milanfar, "Generalizing the nonlocal-means to super-resolution reconstruction," IEEE Trans. Image Process., vol. 18, no. 1, pp. 36–51, Jan. 2009.
[18] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Process. Lett., vol. 9, no. 3, pp. 81–84, Mar. 2002.
[19] L. Wald, "Some terms of reference in data fusion," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1190–1193, May 1999.
[20] F. Laporterie-Déjean, H. De Boissezon, G. Flouzat, and M.-J. Lefèvre-Fonollosa, "Thematic and statistical evaluations of five panchromatic/multispectral fusion methods on simulated PLEIADES-HR images," Inf. Fusion, vol. 6, no. 3, pp. 193–212, Sep. 2005.
[21] A. Filippidis, L. C. Jain, and N. Martin, "Multisensor data fusion for surface landmine detection," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 30, no. 1, pp. 145–150, Feb. 2000.
[22] P. S. Huang and T. Te-Ming, "A target fusion-based approach for classifying high spatial resolution imagery," in Proc. IEEE Workshop Adv. Tech. Anal. Remotely Sens. Data, Oct. 27–28, 2003, pp. 175–181.
[23] T. Ranchin, B. Aiazzi, L. Alparone, S. Baronti, and L. Wald, "Image fusion—The ARSIS concept and some successful implementation schemes," ISPRS J. Photogramm. Remote Sens., vol. 58, no. 1/2, pp. 4–18, Jun. 2003.
[24] L. Wald and J.-M. Baleynaud, "Observing air quality over the city of Nantes by means of Landsat thermal infrared data," Int. J. Remote Sens., vol. 20, no. 5, pp. 947–959, 1999.
[25] I. Couloigner, T. Ranchin, V. P. Valtonen, and L. Wald, "Benefit of the future SPOT 5 and of data fusion to urban mapping," Int. J. Remote Sens., vol. 19, no. 8, pp. 1519–1532, 1998.
[26] P. Terretaz, "Comparison of different methods to merge SPOT P and XS data: Evaluation in an urban area," in Proc. 17th Symp. EARSeL, Future Trends Remote Sens., P. Gudmansen, Ed., Lyngby, Denmark, Jun. 17–20, 1997, pp. 435–445.
[27] J. A. Malpica, "Hue adjustment to IHS pan-sharpened Ikonos imagery for vegetation enhancement," IEEE Geosci. Remote Sens. Lett., vol. 4, no. 1, pp. 27–31, Jan. 2007.
[28] L. Alparone, L. Wald, J. Chanussot, C. Thomas, P. Gamba, and L. M. Bruce, "Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data fusion contest," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 10, pp. 3012–3021, Oct. 2007.
[29] L. Wald, T. Ranchin, and M. Mangolini, "Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images," Photogramm. Eng. Remote Sens., vol. 63, no. 6, pp. 691–699, 1997.
[30] T. Ranchin and L. Wald, "Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation," Photogramm. Eng. Remote Sens., vol. 66, no. 1, pp. 49–61, Jan. 2000.
[31] L. Wald, Data Fusion: Definitions and Architectures. Fusion of Images of Different Spatial Resolutions. Paris, France: Presses de l'Ecole, MINES ParisTech, 2002, 200 pp.
[32] C. Thomas and L. Wald, "A MTF-based distance for the assessment of geometrical quality of fused products," in Proc. 9th Int. Conf. Inf. Fusion (Fusion 2006), Florence, Italy, Jul. 10–13, 2006, pp. 1–7, paper 267.
[33] A. Papoulis, Signal Analysis, 3rd ed. New York: McGraw-Hill, 1987.
[34] B. Aiazzi, L. Alparone, S. Baronti, and A. Garzelli, "Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis," IEEE Trans. Geosci. Remote Sens., vol. 40, no. 10, pp. 2300–2312, Oct. 2002.
[35] C. Thomas, "Fusion d'images de résolutions spatiales différentes," Ph.D. dissertation, Ecole des Mines de Paris, Paris, France, 2006.
[36] M. M. Khan, L. Alparone, and J. Chanussot, "Pansharpening quality assessment using the modulation transfer functions of instruments," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 11, pp. 3880–3891, Nov. 2009.