ISSN: 2319-8753 (Online), 2347-6710 (Print)
Deepa M^{1}, Dr.T.V.U.Kirankumar^{2}, Muthukumaran S^{3}, Mohankumar C^{4}

International Journal of Innovative Research in Science, Engineering and Technology
Abstract

This paper presents, for the first time, a combined blur deconvolution and edge-directed interpolation method based on segment adaptive gradient angle interpolation, which uses locally defined, straight-line approximations of image isophotes for super-resolution. The proposed blur estimation process is supported by an edge-emphasizing smoothing operation, which improves the quality of blur estimates by enhancing strong soft edges toward step edges while filtering out weak structures. For better performance, the blur estimation is carried out in the filter domain rather than the pixel domain, using the gradients of the low-resolution (LR) and high-resolution (HR) images. The proposed method can accommodate arbitrary scaling factors, provides state-of-the-art results in terms of PSNR as well as other quantitative visual quality metrics, and has the advantage of reduced computational complexity that is directly proportional to the number of pixels.
Keywords 
LR (low resolution), HR (high resolution), MIDB (multi-image deblurring), SIDB (single-image deblurring), SR (super-resolution)
INTRODUCTION 
Capturing high-quality images and videos is critical in many applications such as medical imaging, astronomy, surveillance, and remote sensing. Traditional high-resolution (HR) imaging systems require high-cost and bulky optical elements whose physical sizes dictate the light-gathering capability and the resolving power of the imaging system [1], [2]. In contrast, computational imaging systems combine the power of digital processing with data gathered from optical elements to generate HR images. Artefacts such as aliasing, blurring, and noise may affect the spatial resolution of an imaging system, which is defined as the finest detail that can be visually resolved in the captured images. Blur deconvolution (BD) and super-resolution (SR) are two groups of techniques for increasing the apparent resolution of an imaging system. One major difference between these two groups is that the goal of a BD problem is only to undo blurring and noise, whereas SR also removes or reduces the effect of aliasing. As a result, the input and output images in BD are of the same size, while in SR the output image is larger than the input image(s). The other difference is that since severe blurs eliminate or attenuate aliasing in the underlying low-resolution (LR) images, the blur in an SR problem may not be as extensive as in a BD problem. Limited image resolution restricts the extent to which zooming enhances clarity, limits the quality of digital photograph enlargements, and, in the context of medical images, can prevent a correct diagnosis.
The literature relevant to this paper comprises blur deconvolution [21] and interpolation approaches in which nonlinear algorithms frequently achieve better visual and quantitative performance than fixed-kernel approaches [22]. Single-image interpolation (zooming, upsampling, or resizing) can artificially increase image resolution for viewing or printing, but is generally limited in terms of enhancing image clarity or revealing higher-frequency content. Interpolators based on approximations of the ideal sinc kernel (pixel replication, bilinear, bicubic, and higher-order splines) are commonly used for their flexibility and speed; however, these approaches frequently contribute to blurring, ringing artefacts, jagged edges, and unnatural representation of isophotes (curves of constant intensity) in processed images. Results can be improved somewhat by tailoring the sinc-approximating kernel to suit the image being interpolated, and nonlinear algorithms frequently yield better visual and quantitative performance than fixed-kernel approaches. These works can be classified into two main categories: methods that treat blur identification and image restoration as two disjoint processes [11], [13], [14], [15], and methods that combine the two into a joint procedure, e.g., segment adaptive gradient angle interpolation (SAGA) [16], [18]–[20]. For both BD and SR, techniques have been proposed in the literature for reconstruction from a single image or from multiple images. Multi-image methods reconstruct one HR image by fusing multiple LR images. By contrast, single-image SR methods, also known as learning-based, patch-based, or example-based SR techniques, replace small spatial patches within the input LR image with similar higher-resolution patches previously extracted from a number of HR images.
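To make the fixed-kernel family concrete, the following minimal sketch (using SciPy's spline-based `zoom`, an assumption of this illustration rather than the method of either cited paper) applies pixel replication, bilinear, and bicubic interpolation to a synthetic diagonal edge. All three apply the same kernel everywhere, regardless of local isophote orientation, which is the root of the jagged-edge artefacts described above.

```python
import numpy as np
from scipy.ndimage import zoom

# Synthetic test image with a sharp diagonal edge (straight-line isophotes).
img = np.fromfunction(lambda y, x: (x > y).astype(float), (32, 32))

# Fixed-kernel interpolators: spline order 0 = pixel replication,
# 1 = bilinear, 3 = bicubic. The kernel is the same at every pixel.
for order, name in [(0, "replication"), (1, "bilinear"), (3, "bicubic")]:
    up = zoom(img, 2, order=order)   # two-times enlargement
    print(name, up.shape)            # each result is 64 x 64
```

An edge-directed method such as SAGA would instead adapt the interpolation direction to the local gradient angle rather than using one kernel for the whole image.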
PROPOSED SYSTEM 
A. Blind Estimation 
Blind deconvolution is the problem of recovering a sharp version of an input blurry image when the blur kernel is unknown. Mathematically, we wish to decompose a blurred image y as

y = k ∗ x
where x is a visually plausible sharp image and k is a non-negative blur kernel whose support is small compared to the image size. This problem is severely ill-posed: an infinite set of pairs (x, k) explains any observed y. For example, one undesirable solution that perfectly satisfies the model is the no-blur explanation, in which k is the delta (identity) kernel and x = y. The ill-posed nature of the problem implies that additional assumptions on x or k must be introduced. Blind deconvolution is the subject of numerous papers in the signal and image processing literature; despite this exhaustive research, results on real-world images are rarely produced. Recent algorithms address the ill-posedness of blind deconvolution by characterizing x using natural image statistics. While this principle has led to tremendous progress, the results are still far from perfect.
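The no-blur ambiguity can be verified numerically. In this short sketch (illustrative only; the image and kernel are arbitrary choices), a blurred observation y is generated from a sharp image, and then the alternative pair consisting of the delta kernel and x = y is shown to reproduce y exactly:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
x = rng.random((16, 16))             # "true" sharp image
k = np.ones((3, 3)) / 9.0            # box blur kernel
y = convolve2d(x, k, mode="same", boundary="symm")

# The no-blur explanation: a centered delta kernel with x = y
# reproduces the observation perfectly, so (x, k) cannot be
# recovered from y alone without priors on x or k.
delta = np.zeros((3, 3))
delta[1, 1] = 1.0
y_alt = convolve2d(y, delta, mode="same", boundary="symm")
print(np.allclose(y, y_alt))         # True: both pairs explain y
```

This is precisely why priors such as natural image statistics, or the edge-emphasizing smoothing described below, are needed to select a meaningful solution.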
The linear forward imaging model in the spatial domain, which describes the process of generating the kth LR image g_k(x↓, y↓; c) of size N_gx × N_gy × C from an HR image f(x, y; c) of size N_fx × N_fy × C, is defined as

g_k(x↓, y↓; c) = [h_k(x, y) ∗ w(f(x, y; c))]↓L + n_k(x↓, y↓; c)
Here, (x↓, y↓) and (x, y) are the positions of pixels within the LR and HR image planes, respectively; Nx and Ny are the numbers of pixels in the x and y spatial directions; N is the total number of LR images; C is the number of colour channels; and ∗ is the 2-D convolution operator. w(·) is the warping function according to a global/local, parametric/nonparametric motion model; h_k is the kth PSF caused by the overall blur of the system, which originates from different sources (such as the optical lens, sensor, motion, and depth of scene); and n_k is the noise, commonly modelled as AWGN. Also, [·]↓L is the downsampling operator, where L is called the SR downsampling factor or SR scale ratio (depending on the point of view), so that N_fx = L·N_gx and N_fy = L·N_gy. According to this model, the original HR image is warped, convolved with the overall system PSF, downsampled by the factor L, and finally corrupted by noise to generate each LR image.
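The forward model above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the warp w(·) is taken as the identity, the PSF h_k is assumed Gaussian, and the function name `lr_observation` is hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lr_observation(f, sigma, L, noise_std, rng):
    """Generate one LR image from HR image f: blur, downsample by L, add AWGN.
    The warping function w(.) is taken as the identity in this sketch."""
    blurred = gaussian_filter(f, sigma)        # h_k * f, with a Gaussian PSF
    g = blurred[::L, ::L]                      # [.]  downsampled by L
    return g + rng.normal(0.0, noise_std, g.shape)  # + n_k (AWGN)

rng = np.random.default_rng(1)
f = rng.random((64, 64))                       # HR image, N_fx = N_fy = 64
g = lr_observation(f, sigma=1.5, L=2, noise_std=0.01, rng=rng)
print(g.shape)                                 # (32, 32): N_gx = N_fx / L
```

Each of the N LR observations would be produced by one such call with its own PSF and noise realization.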
Blind deconvolution algorithms share some common building principles and vary in others. A unified method for blind BD and SR has been proposed in which the well-known TV regularization is used as the image prior, together with the following regularization for the blurs:

R_h = Σ_{1 ≤ i < j ≤ N} ‖z_j ∗ h_i − z_i ∗ h_j‖²
where z_i is the LR image g_i upsampled to the size of the HR image f, i.e., z_i = Dᵀg_i. A TV regularization term is also added to R_h with a very small coefficient, the role of which is just to diminish noise in the estimated PSFs. This form of regularization (but without pre-upsampling of the LR images) was first proposed for the blind MIBD problem. By rearranging R_h, it can be written as R_h = ‖N h‖², where h = [h_1ᵀ, . . . , h_Nᵀ]ᵀ and N is a matrix based solely on the LR images which contains the correct PSFs in its null space, i.e., N h = 0 only if h contains the correct PSFs. While this prior is able to properly estimate the HR image even when the LR images have different blurs, the estimated PSFs carry some inevitable ambiguity; even when the PSFs are all the same, this ambiguity still exists in the estimates. By contrast, in our proposed method there is no such ambiguity in estimating the blurs, since the method is intrinsically designed for the case in which all PSFs are equal. Another work on blind SR using the MAP framework suggests a cost function for estimating the HR image that includes a TV prior. There, the blur identification process consists of three optimization steps: first, initial estimates of the blurs h⁰ are obtained using the GMRF prior ‖E h‖², where E is the convolution kernel of the Laplacian mask. Second, parametric blur functions b that best fit the initial blur estimates are calculated. Third, final estimates of the blurs h are obtained by reinforcement toward the parametric estimates using the quadratic prior ‖h − b‖². The most important limitation of this work is that considering just a few parametric models does not cope with the diversity of blur functions encountered in practice. Also, in this work the estimated blurs are not demonstrated, and the reported PSNR values for the estimated HR images are low (less than 18 dB).
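The key property of the blur regularizer — that the cross-convolution residual vanishes for the correct PSF pair — follows from the commutativity of convolution and can be checked numerically. The sketch below is a simplified, noise-free check with L = 1 (the MIBD case, so z_i = g_i), not the full blind estimation procedure:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)
f = rng.random((32, 32))                     # sharp image
h1 = rng.random((5, 5)); h1 /= h1.sum()      # two distinct normalized PSFs
h2 = rng.random((5, 5)); h2 /= h2.sum()

# Noise-free observations (L = 1, so z_i = g_i).
g1 = convolve2d(f, h1)                       # full convolution
g2 = convolve2d(f, h2)

# One term of R_h: g2 * h1 - g1 * h2 vanishes for the correct PSF pair,
# because convolution commutes: (f*h2)*h1 == (f*h1)*h2.
r = convolve2d(g2, h1) - convolve2d(g1, h2)
print(np.abs(r).max() < 1e-10)               # True
```

With noise or wrong PSF estimates the residual is nonzero, which is what drives the blur estimates toward the correct null space of N.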
B. Fixed Kernel, Preliminary Interpolation 
It has been shown that, in a blind image deconvolution problem, a more accurate estimate of the blur (and subsequently of the image) can be obtained if, during the blur estimation process, the estimated image f is preprocessed by an edge-emphasizing smoothing operation. This has a positive effect on the quality of the blur estimate for the following reasons:
1) Blurs can be estimated best from salient edges and their adjacent pixels; by smoothing f, weak edges and false edges caused by ringing are smoothed out and do not contribute to the blur estimation;
2) Noise has a stronger adverse effect in non-edge regions than in edge regions, so smoothing the non-edge regions helps to improve the blur estimation;
3) Ground-truth images are assumed to have sharp, binary-like edges; therefore, replacing soft edges with step edges in f provides a closer estimate of the ground-truth image and yields better blur estimation, especially in the initial steps of the blind optimization, where the estimated image f is still quite blurry.
To smooth out non-edge regions and improve the sharpness of edges, a shock filtering operation is used, by which a ramp edge gradually approaches a step edge over a few iterations. Since the performance of shock filtering and some other edge-emphasizing smoothing techniques is influenced by noise, in some works the image is pre-smoothed first.
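A minimal shock-filter sketch is shown below, assuming the classic update rule I ← I − sign(ΔI)·|∇I|·dt (the discretization, Sobel gradients, and the parameters `n_iter` and `dt` are illustrative choices, not the paper's exact settings):

```python
import numpy as np
from scipy.ndimage import laplace, sobel

def shock_filter(img, n_iter=20, dt=0.1):
    """Iterative shock filtering: pushes a ramp edge toward a step edge.
    Update rule: I <- I - sign(Laplacian(I)) * |grad(I)| * dt."""
    f = img.astype(float).copy()
    for _ in range(n_iter):
        gx = sobel(f, axis=1) / 8.0       # normalized Sobel derivatives
        gy = sobel(f, axis=0) / 8.0
        grad_mag = np.hypot(gx, gy)
        # Below the edge midpoint the Laplacian is positive, so values are
        # pushed down; above it, values are pushed up -- sharpening the edge.
        f -= np.sign(laplace(f)) * grad_mag * dt
    return f

# A soft sigmoid edge becomes closer to a step edge after filtering.
x = np.linspace(-1, 1, 32)
ramp = np.tile(1.0 / (1.0 + np.exp(-5 * x)), (32, 1))
sharp = shock_filter(ramp)
```

In the blind loop, this filtered image (rather than the raw estimate f) feeds the blur estimation, so that only salient, step-like edges drive the PSF update.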
C. Segment Adaptive Gradient Angle Interpolation
The first step of SAGA interpolation is to identify the vector of parameters (α or β) that best maps a line of N pixels to intensity-matched pixels in adjacent lines. As established in the previous section, the displacements for a given pixel are influenced by the pixels in a surrounding neighbourhood. Specifically, we introduce a stiffness parameter k such that elements in α or β are linearly related over segments of k + 1 pixels. The α and β values are explicitly computed at the end points of each segment, or "nodes", and interpolated for intermediate locations:

α(m + i, n) = θ1(i) α(m, n) + θ2(i) α(m + k, n)
where pixel locations [m, n] and [m + k, n] are considered nodes and the internodal pixel displacements are interpolated for 0 < i < k. Here, θ1 and θ2 are the linear basis functions θ1(i) = (k − i)/k and θ2(i) = i/k, for 0 < i < k. Determination of α for other rows is an independent repetition of the same process that can be run in parallel. The algorithm for determining β for a column comprises a straightforward change of variables; alternatively, the image matrix can be transposed and the procedure repeated as described.
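The internodal interpolation step can be sketched as follows (the function name `internodal_displacements` is hypothetical; this covers only the linear-basis blending of node values, not the node-fitting optimization itself):

```python
import numpy as np

def internodal_displacements(alpha_nodes, k):
    """Linearly interpolate displacements between nodes spaced k pixels apart.
    alpha_nodes holds the values at node locations m, m+k, m+2k, ...
    Uses the linear basis functions theta1(i) = (k - i)/k, theta2(i) = i/k."""
    i = np.arange(k)                     # offsets 0..k-1 within a segment
    theta1 = (k - i) / k
    theta2 = i / k
    segs = [theta1 * a0 + theta2 * a1
            for a0, a1 in zip(alpha_nodes[:-1], alpha_nodes[1:])]
    return np.concatenate(segs + [alpha_nodes[-1:]])

# Nodes every k = 4 pixels; displacements vary linearly in between.
alpha = internodal_displacements(np.array([0.0, 1.0, -0.5]), k=4)
print(alpha)   # [0. 0.25 0.5 0.75 1. 0.625 0.25 -0.125 -0.5]
```

Because each row's α vector is computed independently, rows can be distributed across parallel workers exactly as the text describes.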
RESULTS 
The right column of images shows the enlargement results for a two-times downsampled version of the Lena image, together with the enlargement results for a region of the original Barbara test image. The SAGA result shows a lesser degree of jaggedness and some thinning. The SME result shows neither issue in the knee area; however, along the bottom side of the pant leg there is significant cross-hatching. Moderate artificial texture is also present in the arm region of all of the output images.
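The quantitative comparisons above are reported in terms of PSNR. For reference, a minimal sketch of the standard PSNR definition for 8-bit images (the function name `psnr` is illustrative) is:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image.
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0                    # constant error of 10 gray levels
print(round(psnr(ref, noisy), 2))     # 28.13
```

Higher PSNR indicates a reconstruction closer to the ground-truth HR image; the visual quality metrics mentioned in the abstract complement it where PSNR correlates poorly with perception.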
CONCLUSION 
The proposed blur estimation procedure preprocesses the estimated HR image by applying an edge-emphasizing smoothing operation, which enhances soft edges toward step edges while smoothing out weak structures of the image. The interpolation stage uses Segment Adaptive Gradient Angle (SAGA) interpolation, which operates at a low computational cost. In addition to having low raw computational complexity, the SAGA algorithm is easily parallelized, as data can be processed independently one line at a time. Beyond addressing the classic problem of resizing greyscale images by factors of two with accuracy and speed, the algorithm is also well suited to dynamic zooming applications: displacements need only be calculated once for an image, and can then be applied to interpolate the image to any new size. Given the ever-increasing range of image viewing devices and consumer expectations for interactivity, this feature is an important benefit of SAGA interpolation.
References 
[1]. M. P. Christensen, V. Bhakta, D. Rajan, T. Mirani, S. C. Douglas, S. L. Wood, and M. W. Haney, “Adaptive flat multiresolution multiplexed computational imaging architecture utilizing micromirror arrays to steer subimager fields of view,” Appl. Opt., vol. 45, no. 13, pp. 2884–2892, May 2006.
[2]. P. Milojkovic, J. Gill, D. Frattin, K. Coyle, K. Haack, S. Myhr, D. Rajan, S. Douglas, P. Papamichalis, M. Somayaji, M. P. Christensen, and K. Krapels, “Multichannel, agile, computationally enhanced camera based on PANOPTES architecture,” in Proc. Comput. Opt. Sens. Imag., Opt. Soc. Amer., Oct. 2009, no. CTuB4, pp. 1–3.
[3]. M. Irani and S. Peleg, “Improving resolution by image registration,” Graph. Models Image Process., vol. 53, pp. 231–239, Apr. 1991.
[4]. R. C. Hardie, K. J. Barnard, J. G. Bognar, E. E. Armstrong, and E. A. Watson, “High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system,” Opt. Eng., vol. 37, no. 1, pp. 247–260, 1998.
[5]. R. C. Hardie, K. J. Barnard, and E. E. Armstrong, “Joint MAP registration and high-resolution image estimation using a sequence of undersampled images,” IEEE Trans. Image Process., vol. 6, no. 12, pp. 1621–1633, Dec. 1997.
[6]. S. Farsiu, M. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process., vol. 13, no. 10, pp. 1327–1344, Oct. 2004.
[7]. R. Schultz and R. Stevenson, “Extraction of high-resolution frames from video sequences,” IEEE Trans. Image Process., vol. 5, no. 6, pp. 996–1011, Jun. 1996.
[8]. W. Freeman, T. Jones, and E. Pasztor, “Example-based super-resolution,” IEEE Comput. Graph. Appl., vol. 22, no. 2, pp. 56–65, Mar.–Apr. 2002.
[9]. D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in Proc. IEEE 12th Int. Conf. Comput. Vis., Sep. 2009, pp. 349–356.
[10]. W. Dong, L. Zhang, G. Shi, and X. Wu, “Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization,” IEEE Trans. Image Process., vol. 20, no. 7, pp. 1838–1857, Jul. 2011.
[11]. G. Harikumar and Y. Bresler, “Perfect blind restoration of images blurred by multiple filters: Theory and efficient algorithms,” IEEE Trans. Image Process., vol. 8, no. 2, pp. 202–219, Feb. 1999.
[12]. R. Madani, A. Ayremlou, A. Amini, and F. Marvasti, “Optimized compact-support interpolation kernels,” IEEE Trans. Signal Process., vol. 60, no. 2, pp. 626–633, Feb. 2012.
[13]. H. A. Aly and E. Dubois, “Image up-sampling using total-variation regularization with a new observation model,” IEEE Trans. Image Process., vol. 14, no. 10, pp. 1647–1659, Oct. 2005.
[14]. S. Ramani, P. Thevenaz, and M. Unser, “Regularized interpolation for noisy images,” IEEE Trans. Med. Imag., vol. 29, no. 2, pp. 543–558, Feb. 2010.
[15]. B. S. Morse and D. Schwartzwald, “Image magnification using level-set reconstruction,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Apr. 2001, pp. 330–340.
[16]. C. B. Atkins, C. A. Bouman, and J. P. Allebach, “Optimal image scaling using pixel classification,” in Proc. IEEE Int. Conf. Image Process., Oct. 2001, pp. 864–867.
[17]. J.-H. Lee, J.-O. Kim, J.-W. Han, K.-S. Choi, and S.-J. Ko, “Edge-oriented two-step interpolation based on training set,” IEEE Trans. Consum. Electron., vol. 56, no. 3, pp. 1848–1855, Aug. 2010.
[18]. J. V. Manjón, P. Coupe, A. Buades, V. Fonov, D. L. Collins, and M. Robles, “Non-local MRI upsampling,” Med. Image Anal., vol. 14, no. 6, pp. 784–792, Dec. 2010.
[19]. K. Guo, X. Yang, H. Zha, W. Lin, and S. Yu, “Multiscale semilocal interpolation with antialiasing,” IEEE Trans. Image Process., vol. 21, no. 2, pp. 615–625, Feb. 2012.
[20]. A. Temizel, “Image resolution enhancement using wavelet domain hidden Markov tree and coefficient sign estimation,” in Proc. IEEE Int. Conf. Image Process., Oct. 2007, pp. 381–384.
[21]. E. Faramarzi, D. Rajan, and M. P. Christensen, “Unified blind method for multi-image super-resolution and single/multi-image blur deconvolution,” IEEE Trans. Image Process., vol. 22, no. 6, Jun. 2013.
[22]. C. M. Zwart and D. H. Frakes, “Segment adaptive gradient angle interpolation,” IEEE Trans. Image Process., vol. 22, no. 8, Aug. 2013.