
ISSN ONLINE(2320-9801) PRINT (2320-9798)

Review on Significance Research on Enhancing the Quality of the Brain Image Using Neuro-Fuzzy System

Somashekhar Swamy1, P.K.Kulkarni2
  1. Ph.D Scholar, Department of ECE, Visvesvaraya Technological University, Belagavi, Basavakalyan Engineering College, Basavakalyan, Karnataka, India.
  2. Department of ECE, Poojya Doddappa Appa College of Engineering, Kalaburgi, Karnataka, India

Published in the International Journal of Innovative Research in Computer and Communication Engineering


Computed tomography (CT) uses X-ray equipment to produce detailed images of sections inside the human or animal body. CT scanning is very beneficial for detecting diseases such as cancers and tumors. However, noise and blur are major factors that degrade the quality of CT images and make diagnosis difficult. Reconstructing the CT image is a way to overcome these problems. This paper presents a study of techniques suitable for reconstructing CT images, along with an evaluation of their performance. The survey focuses on the most prominent approaches, such as filtering techniques (discrete wavelets, complex wavelets, and median filters) and artificial neural network approaches to image reconstruction, and also discusses other denoising and deblurring techniques. The aim of this survey is to provide useful information on the various deblurring and denoising techniques and to motivate research toward implementing effective algorithms.


Computed Tomography, De-noising, De-blurring


Computed tomography, more commonly known as a CT or CAT scan, is a diagnostic medical test that, like an X-ray, produces a series of images of the body. CT plays a significant role in medicine for detecting and analyzing disease, and is used to examine the chest, kidneys, injuries, and more. CT images provide detailed, cross-sectional views of all types of tissue in the body, which makes diagnosis easier. However, many CT images suffer from deterioration in quality because of noise and blur, and these degradations make the images hard to analyze. Analysis of a CT image is essential for decision making: a bad analysis leads to a wrong decision, which results in a failure of detection. Removing noise and blur is therefore crucial, and researchers have been working on the reconstruction of CT images for decades.
Image noise appears as random pixel variations scattered over the image. Noise generally originates in the camera and from poor illumination, and noisy images carry less information. Blur occurs because of poor illumination, camera movement, or object movement during capture. Removing noise and blur is a major challenge in image reconstruction: most algorithms suffer from loss of information such as edges and small spatial structures. Preserving edges and small spatial structures in CT reconstruction is a challenging task for researchers. Identifying tumors, cancer cells, or small injuries is the analyst's main objective, but finding tumors or cancer cells at an early stage is difficult because the spatial structure of an early-stage tumor is very tiny [2, 14, 17].
Various types of filtering are used as denoising tools, such as wavelet-based filtering [5, 7, 16, 19, 22] and median filtering [18]. Filtering is the process of removing noisy components from the transformed image; deblurring additionally sharpens the image to obtain a clear structure of the object. Every researcher's goal is high efficiency and accuracy in the performance of the algorithm, but each algorithm has its own pros and cons. Some reconstruction techniques are also briefly explained in this paper.


Filtering is the traditional way of denoising an image. It removes those coefficients of the transformed image that are treated as noise. Based on the characteristics of the image, the statistical properties of the noise, and the frequency-spectrum distribution, many noise-reduction algorithms have been developed; they divide roughly into spatial-domain and transform-domain methods. Spatial-domain methods operate directly on the grey values of the original image. Transform-domain methods first transform the image, then process the resulting coefficients, and finally obtain the output image by the inverse transformation, as shown in Figure 1. Successful use of the wavelet transform can lessen the effect of noise or even remove it completely. The wavelet transform comes in two types: the continuous wavelet transform and the discrete wavelet transform.

A. Discrete Wavelet Transform (DWT)

Marcin Kociołek et al. [24] proposed an image-denoising technique based on the DWT, a wavelet method used widely in the field of image reconstruction. The DWT is a linear operation on a data vector whose length is an integer power of two, transforming it into a numerically different vector of the same length. It separates the data into different frequency components and then resolves each component at a resolution matched to its scale. The DWT is computed as a sequence of filtering steps, each followed by subsampling by a factor of two, as shown in Figure 2.
H denotes the high-pass filters, L the low-pass filters, and ↓2 subsampling by a factor of two. The outputs of these filters are given by equations (1) and (2).
The elements a_j are used for the next level of the transform, while the elements d_j are the coefficients that determine the transformation result; l[n] and h[n] are the coefficients of the low-pass and high-pass filters, respectively. By construction, scale j + 1 has only half the number of a and d elements of scale j, so the transformation is repeated until only two a elements remain in the signal; these are the coefficients of the scaling function. The mechanism is the same for two-dimensional images: the DWT is applied first to the rows and then to the columns, as shown in Figure 3. The DWT provides good spatial and spectral localization of image information.
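As an illustrative sketch (not the code of [24]), one analysis level of the orthonormal Haar DWT and its inverse can be written in a few lines; each output pair realizes the low-pass/high-pass filtering followed by the factor-2 subsampling described above, and the 2-D transform simply applies the same step to the rows and then to the columns:

```python
import math

def haar_dwt_1d(x):
    """One level of the orthonormal Haar DWT: splits a signal of even
    length into approximation coefficients a (low-pass + downsample
    by 2) and detail coefficients d (high-pass + downsample by 2)."""
    s = math.sqrt(2.0)
    a = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return a, d

def haar_idwt_1d(a, d):
    """Inverse of one Haar level; the transform is perfectly invertible."""
    s = math.sqrt(2.0)
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / s)
        x.append((ai - di) / s)
    return x
```

Feeding the `a` output back into `haar_dwt_1d` gives the next decomposition level, exactly as the text describes for the `a_j` elements.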
The DWT still suffers from two serious disadvantages: it is not shift-invariant and it carries no phase information. Several solutions have been proposed to reduce these disadvantages, but the most widely accepted is the complex wavelet transform [7].

B. Dual-Tree Complex Wavelet Transform (DTCWT):

Ashish Khare et al. [7] proposed a dual-tree complex wavelet transform for denoising and deblurring. The method has two steps: first the image is denoised using the wavelet transform, then a deblurring algorithm is applied to obtain the resultant image. The working of the DT-CWT is shown in Figure 4.
For denoising, a threshold-based scheme is used. The threshold is complex-valued and adaptive; it is computed by estimating the noise variance from the image, and a threshold value is computed for both the real and imaginary domains. A soft-thresholding scheme modifies the wavelet coefficients, and the inverse wavelet transform gives the denoised output image. The denoising stage introduces a small amount of blur of its own and does not remove the original blur. The forward algorithm removes noise from the image but gives inaccurate results when non-Gaussian noise is present. To overcome this problem, the authors modified the algorithm as follows:
1. Apply a regularized inverse filter to the original image.
2. Estimate the noise variance.
3. Compute the Daubechies complex wavelet transform of the regularized inverse-filtered image.
4. Shrink the computed wavelet coefficients with the adaptive complex threshold, using soft thresholding.
5. Perform wavelet-domain filtering, which gives the real and imaginary parts of the wavelet coefficients of the deblurred image.
6. Apply the inverse wavelet transform to obtain the resultant deblurred image.
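The noise-variance estimate in step 2 can be sketched with a standard robust estimator; note this is an illustrative choice, not necessarily the exact rule of [7], which computes an adaptive complex threshold. A common pairing is the median-based sigma estimate with the VisuShrink "universal" threshold:

```python
import math

def estimate_sigma(detail_coeffs):
    """Robust noise estimate from finest-scale detail coefficients:
    sigma = median(|d|) / 0.6745. Noise dominates the finest details,
    so their median magnitude tracks the noise level."""
    mags = sorted(abs(c) for c in detail_coeffs)
    n = len(mags)
    med = mags[n // 2] if n % 2 else 0.5 * (mags[n // 2 - 1] + mags[n // 2])
    return med / 0.6745

def universal_threshold(detail_coeffs, n_samples):
    """Universal (VisuShrink-style) threshold T = sigma * sqrt(2 ln N),
    where N is the number of samples in the image or subband."""
    return estimate_sigma(detail_coeffs) * math.sqrt(2.0 * math.log(n_samples))
```

The resulting `T` is what the soft-thresholding shrinkage in step 4 would subtract from each coefficient magnitude.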

C. Complex wavelets combined with Filtered Back-Projection (FBP):

V. Thavavel et al. [16] proposed a regularized reconstruction method that combines features of the filtered back-projection (FBP) algorithm with regularization in the complex wavelet domain, where the filters have both real and complex coefficients. A further problem is attaining perfect reconstruction for complex wavelet decompositions above level 1. Using the dual-tree complex wavelet transform (DT-CWT), the complex transform of a signal can be calculated with two separate DWT decompositions, and obtaining both real and imaginary coefficients yields more information. The filtering part of FBP performs Fourier-domain inversion followed by noise elimination in the complex wavelet domain, providing an efficient estimate of the original image even for ill-conditioned systems.

D. Haar wavelet and Daubechies wavelet:

Kanwaljot Singh Sidhu et al. [23] proposed a method involving the Haar and Daubechies wavelet transforms for medical image denoising. The work comprises five stages: wavelet selection, threshold selection, decomposition of the input image, reconstruction of the image, and comparison of PSNR. For wavelet selection, the Haar and Daubechies transforms are chosen to decompose the signal; for threshold selection, both hard and soft thresholding are tested. The Haar wavelet is a simple, old method of signal transformation: it decomposes a discrete signal into two separate sub-signals, a running average and a running difference. The Haar wavelet is simple, fast, and consumes little memory.
The set of scaling functions of the Daubechies wavelet is orthogonal. Daubechies wavelets have balanced frequency responses but non-linear phase responses; they have overlapping windows, so the coefficient spectrum reflects high-frequency changes. They are efficient at removing noise and at compressing audio signals. The next step is the thresholding operation. Thresholding can create a binary image from the grey values of a grey-level image; it can also segment an image by setting all pixels whose intensity is above a threshold to a foreground value and all remaining pixels to a background value. Thresholding falls into two categories:
1. Hard Thresholding
The hard-threshold procedure is "keep or kill": coefficients whose magnitude is below the threshold are zeroed and the rest are kept unchanged, which makes it the more intuitively appealing rule. However, pure noise coefficients may still pass the hard threshold. Hard thresholding is widely used in medical image processing.
2. Soft Thresholding
Soft thresholding shrinks the coefficients above the threshold toward zero by the threshold amount (in absolute value). The false structures that hard thresholding can produce are avoided by soft thresholding.
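The two rules differ only in what happens to a surviving coefficient; a minimal sketch of both, for a single coefficient `c` and threshold `t`:

```python
def hard_threshold(c, t):
    """Keep or kill: zero coefficients whose magnitude is below t,
    keep the rest unchanged."""
    return c if abs(c) >= t else 0.0

def soft_threshold(c, t):
    """Kill small coefficients and shrink the survivors toward zero
    by t, avoiding the abrupt jumps (false structures) of hard
    thresholding."""
    if abs(c) <= t:
        return 0.0
    return c - t if c > 0 else c + t
```

Applying either function element-wise to the detail coefficients, then inverting the transform, completes the wavelet denoising loop described above.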
PSNR (peak signal-to-noise ratio) is calculated by comparing the degraded input image with the restored image, using equation (3): PSNR = 10·log10(MAX² / MSE), where MAX is the maximum pixel value and MSE is the mean squared error between the two images.
After the PSNR values are obtained using equation (3), the different transformation techniques are compared.
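As a concrete sketch of the comparison metric, PSNR for two equal-sized grey-level images (given here as nested lists, peak value 255 by default):

```python
import math

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Higher PSNR means the restored image is closer to the original."""
    mse = 0.0
    count = 0
    for row_o, row_r in zip(original, restored):
        for o, r in zip(row_o, row_r):
            mse += (o - r) ** 2
            count += 1
    mse /= count
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

Running each candidate transform, restoring the image, and ranking by this value is exactly the comparison step the authors describe.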

E. Average Filter Or Mean Filter

Important image features are characterized by large wavelet coefficients across most scales. Ghosh et al. [5] proposed an image-to-image transformation designed to smooth or flatten an image by reducing the rapid pixel-to-pixel variation in grey levels. Smoothing is accomplished by applying an averaging (mean) mask that computes a weighted sum of the pixel grey levels in a neighborhood and replaces the centre pixel with that grey level. Because the mask coefficients are all positive and sum to one, the image is blurred while its brightness is retained. The mean filter is one of the most basic smoothing filters; like other convolutions it is based on a kernel, which represents the shape and size of the neighborhood to be sampled. Smoothing can be applied subject to the condition that the centre pixel's grey level is changed only if the difference between its original value and the neighborhood average exceeds a preset threshold; this smooths the noise with less blurring of image detail. Mean filters thus find extensive use in blurring and noise removal. Blurring is usually a preprocessing step for bridging gaps in lines or curves and for removing small unwanted detail before the extraction of relevant larger objects.
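The conditional smoothing just described can be sketched as follows; border handling is a simplifying assumption of this sketch (borders are left unchanged), not part of the cited method:

```python
def thresholded_mean_filter(img, thresh):
    """3x3 mean filter that replaces a pixel with its neighbourhood
    average only when |pixel - average| exceeds thresh, so noise is
    smoothed while fine detail is blurred less. img is a nested list
    of grey levels; border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            avg = sum(img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
            if abs(img[y][x] - avg) > thresh:
                out[y][x] = avg
    return out
```

With `thresh=0` this degenerates to an ordinary 3×3 mean filter; raising the threshold trades noise suppression against detail preservation.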
To overcome the bad influence of noise and shading, the right algorithm must be chosen; a limitation of most algorithms is that they give efficient results only on certain datasets. Considering this problem, Shruti Bhargava et al. [19] proposed a hybrid filter that combines a set of filters (median, average, and diffusion) applied to medical data first denoised by adaptive wavelet thresholding, in order to enhance the results. The algorithm was tested on different types of medical datasets: ultrasound, SPECT, MRI, CT, and PET.


Median filtering is a nonlinear method useful for reducing impulsive (salt-and-pepper) noise, and it preserves edges in a picture while reducing random noise. Impulsive noise arises from random bit errors in a channel. In a median filter, a window slides over the image and the median intensity of the pixels within the window becomes the output intensity of the pixel being processed. Plain median filtering is not appropriate for high levels of impulsive noise. Suman Shrestha [18] proposed a decision-based median filter, which is more effective at removing impulsive noise. The decision-based method processes corrupted image pixels by first detecting whether a pixel value is corrupted, using the same test as adaptive median filtering: whether the pixel value lies between the minimum and maximum values inside the window being processed. If it does, the pixel is considered noise-free and remains as it is; otherwise it is replaced with the median value of the window or a neighboring pixel value. The decision-based filtering algorithm is stated below:
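The decision rule can be sketched directly from the description above; the replace-with-median branch and the unchanged-border behaviour are the simplifying assumptions of this sketch:

```python
def decision_based_median(img):
    """Decision-based median filtering for salt-and-pepper noise:
    a pixel strictly between its 3x3 window's min and max is assumed
    noise-free and kept; otherwise (a likely impulse at an extreme
    value) it is replaced by the window median. Borders unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            p = img[y][x]
            if window[0] < p < window[-1]:
                continue  # noise-free: keep the pixel as it is
            out[y][x] = window[len(window) // 2]  # replace with median
    return out
```

Because only pixels judged corrupted are touched, edges and fine detail survive far better than under a plain median filter.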


An ANN is a mathematical or computational model that attempts to capture the functional features of biological neural networks. Such a network consists of an interconnected group of artificial neurons that process input information. An ANN can be an adaptive system that changes its structure based on external or internal information flowing through the network during the learning phase. ANNs are commonly used to model complex relationships between inputs and outputs that are hard to specify mathematically, and they have good learning ability: learning algorithms search the solution space to minimize a cost function measuring how far the network is from an optimal solution of the problem at hand. Figure 5 shows the structure of an ANN; each circle is a node (an artificial neuron) and each arrow is a connection mapping inputs to outputs.

A. Neural Network In Image Reconstruction:

An artificial neural network is a machine-learning approach used for pattern recognition, and ANNs have been used for deblurring images [4, 8, 10]. Yazeed A. Al-Sbou [8] proposed an artificial neural network as a denoising tool. The technique uses both the mean and the median statistical functions to calculate the output pixels of the training patterns of the neural network. The system was trained on different types of degraded images, and test images with different noise levels were used.
Neeraj Kumar et al. [4] proposed an algorithm using a three-layer neural network with back-propagation. To reconstruct an enhanced image with a neural network, a blurred image and the correlated original image are needed. They treat the original image as a Markov random field (MRF) and the blurred image as a degraded version of that MRF. The difficulty is in establishing the functional mapping between the original image and the noisy, blurry image, which is usually non-linear. To learn it, the method picks a 3×3 patch from the blurred image and the centre pixel of the corresponding patch from the original image; the blurred patch is given as input to the neural network and the centre pixel of the original patch as the target output. Training samples are generated by shifting the 3×3 patch one pixel at a time in each direction, for both the blurred patch (training input) and the original patch (target output). The system achieves the desired mapping using a small number of sigmoid basis functions, and the learnt mapping captures the orientation of, and the distance from the centre to, an edge passing through the original patch, as shown in Figure 6. In this way the function can efficiently reconstruct the edge in the deblurred patch.
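The training-set construction described above (sliding 3×3 input patches paired with original centre pixels) can be sketched as follows; the function name is illustrative, not from [4]:

```python
def make_training_pairs(blurred, original):
    """Build (input, target) pairs for patch-based deblurring: each
    3x3 patch of the blurred image, flattened to a 9-vector, is an
    input, and the centre pixel of the corresponding patch in the
    original image is the target. Sliding the window one pixel at a
    time over the interior yields the training set."""
    h, w = len(blurred), len(blurred[0])
    pairs = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = [blurred[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            pairs.append((patch, original[y][x]))
    return pairs
```

These pairs would then be fed to the three-layer back-propagation network, which learns the non-linear patch-to-pixel mapping.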
P. Subashini et al. [10] proposed a back-propagation neural network for deblurring images, trained with the gradient-descent rule and consisting of three layers. The procedure uses highly non-linear back-propagation neurons for image restoration, yielding a high-quality restored image with fast neural computation, low computational complexity due to the small number of neurons used, and quick convergence without a long training phase. The algorithm achieved better results than traditional image-reconstruction approaches.


A. Multi-resolution Based Method:

Gerald Zauner et al. [1] proposed a multi-resolution image-denoising technique using wavelet filters and platelets, focusing on tiny objects in CT images: because of noise, small objects such as holes and marks become hidden and invisible to the human eye. To overcome this, platelet filtering is used. Although wavelets give efficient results in many applications, they carry little edge information, and most wavelet-based approaches rely on Gaussian approximations, which give bad results when low count levels are encountered. This motivates a new multiscale image representation based on platelets. The platelet builds on the wedgelet approach: each wedgelet is defined on a dyadic square with certain edge points. Instead of piecewise-constant approximations, platelets introduce planar surfaces, and thereby gradients as well. The representation is coded in a tree structure, and an adaptive 'pruning' process removes the noisy portion of the signal. Compared with a linear filter, the platelet filter achieves better results: after linear filtering, small spatial structures are almost removed, whereas the platelet filter still retains their information.

B. Edge Preserving Reconstruction techniques:

Most image-reconstruction algorithms suffer from information loss: after reconstruction the noise is removed, but significant information such as edges and tiny spatial structures may be removed too. Preserving edges is thus one of the challenges of image reconstruction, and nonlocal regularization [2, 14] can preserve the edges in an image. Daniel Yu et al. [2] proposed an edge-preserving tomographic reconstruction algorithm with nonlocal regularization. Reconstruction includes smoothing, which sometimes reduces edge information. Commonly, small differences between neighboring pixels are due to noise while large differences are due to edges; this observation is the basis of many edge-preserving algorithms. Most edge-preserving algorithms use a line-site model: for each pixel, local neighborhood information is used to estimate edges, i.e., a penalty is assigned to each group of pixels.
Ming Jiang et al. [14] presented a blind deblurring algorithm based on the edge-to-noise ratio, which has been applied to improve the quality of spiral CT images. Since the discrepancy measure used to quantify the edge and noise effects is not symmetric, there are several ways to formulate the edge-to-noise ratio, and the authors made a comparative study of the different edge-to-noise ratios for the proposed deblurring algorithm.

C. Laplacian sharpening filter and iterative Richardson-Lucy Algorithms:

Zohair Al-Ameen et al. [3] proposed an algorithm for reducing Gaussian blur in CT images by combining a Laplacian sharpening filter with the iterative Richardson-Lucy algorithm. Gaussian blur is added to degrade the CT image, and the deblurring algorithm is then applied. Deblurring consists of diminishing the blur and transforming the image to a sharpened form; the CT image is processed by integrating the Laplacian sharpening filter with the iterative Richardson-Lucy algorithm. The algorithm was tested on several medical CT images degraded by artificial (Gaussian) and natural blur.
1. Laplacian Sharpening Filter:
The Laplacian filter is normally used to sharpen an image by highlighting its edge information. Here the Laplacian kernel, a 3×3 matrix, is convolved with the image, and the result of this convolution is treated as a mask equal to the difference between the degraded image and the original image. There are various sorts of Laplacian kernels; a new sort of kernel is shown in Figure 7.
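A minimal sketch of Laplacian sharpening, using a common 3×3 Laplacian-based sharpening kernel (centre 5, four-neighbours −1) rather than the specific kernel of Figure 7, which is not reproduced here:

```python
def laplacian_sharpen(img):
    """Convolve a grey-level image (nested lists) with a 3x3
    Laplacian-based sharpening kernel: flat regions pass through
    unchanged (the coefficients sum to 1) while edges are amplified.
    Border pixels are left unchanged in this sketch."""
    kernel = [[0, -1, 0],
              [-1, 5, -1],
              [0, -1, 0]]
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += kernel[dy + 1][dx + 1] * img[y + dy][x + dx]
            out[y][x] = acc
    return out
```

In the combined scheme of [3], an output like this would be interleaved with Richardson-Lucy iterations to counteract the Gaussian blur.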
optimization and adaptive techniques, which makes it very attractive for control problems, particularly for dynamic non-linear systems. Sugeno-type fuzzy systems are popular general non-linear modeling tools because they are well suited to tuning by optimization and employ polynomial output membership functions, which greatly simplifies the defuzzification process. The fuzzy inference is performed on the impulsive-noisy input image pixel by pixel. Each non-linear noisy pixel is independently processed by the decision-based switching median filter (DBSMF) and the Canny edge detector before being applied to the fuzzy network. In the structure of the proposed operator, the input can therefore be given in three phases: heavy noise, blur, or a combination of both, represented by any convenient notation. Each possible combination of inputs and their associated membership functions is represented by a rule in the rule base of the fuzzy network.
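The simple defuzzification of Sugeno-type systems can be sketched as a firing-strength-weighted average of rule outputs; the rule base and membership functions below are hypothetical, chosen only to illustrate the mechanism:

```python
def tri(a, b, c):
    """Triangular membership function rising from a to a peak at b
    and falling back to zero at c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

def sugeno_inference(x, rules):
    """Zeroth-order Sugeno inference: each rule is a pair
    (membership_function, output_value). The crisp output is the
    firing-strength-weighted average of the rule outputs, which is
    why Sugeno defuzzification is so simple."""
    num = den = 0.0
    for mu, out in rules:
        w = mu(x)  # firing strength of this rule for input x
        num += w * out
        den += w
    return num / den if den else 0.0
```

In the operator described above, each rule in the rule base would map a combination of noise/blur memberships to an output contribution, with the weighted average giving the corrected pixel value.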


For accurate diagnostic information, denoising and deblurring are essential. The survey in this paper discussed the major approaches to image reconstruction, particularly in the field of medical image processing. All of the algorithms can improve the quality of a degraded image, yet they must become still more effective to obtain more accurate diagnostic information, and further improvement is needed to make such systems robust. With this in view, this paper has presented ample information about the various methodologies for denoising and deblurring different types of medical images.


1. Gerald Zauner, Michael Reiter and Johann Kastner, “Denoising of Computed Tomography Images using Multiresolution Based Methods”, ECNDT, 2006

2. Daniel F. Yu and Jeffrey A. Fessler, “Edge-Preserving Tomographic Reconstruction with Nonlocal Regularization”, IEEE, Volume 21, Issue 2, 2002

3. Zohair Al-Ameen, Ghazali Sulong and Gapar Johar, “ Reducing The Gaussian Blur Artifact From Ct Medical Images By Employing A Combination Of Sharpening Filters And Iterative Deblurring Algorithms”, Journal of Theoretical and Applied Information Technology, Volume 46, Issue 1, 2012

4. Neeraj Kumar, Rahul Nallamothu and Amit Sethi, “Neural Network Based Image Deblurring”, IEEE, 2012

5. S. Rakshit, A. Ghosh and B. Uma Shankar, “Fast mean filtering technique”, Pattern Recognition, Elsevier, Volume 48, Issue 2, 2010

6. V N Prudhvi Raj and T Venkateswarlu, “Denoising Of Medical Images Using Total Variational Method”, International Journal (SIPIJ), Volume 3, Issue 2, 2012

7. Ashish Khare and Uma Shanker Tiwary, “A New Method for Deblurring and Denoising of Medical Images using Complex Wavelet Transform”, IEEE, 2005

8. Yazeed A and Al-Sbou, “Artificial Neural Networks Evaluation as an Image Denoising Tool”, ISSN, 2012

9. Sreejith.S and Tripty Singh, “A New Technique for Image Enhancement in Biomedical Applications”, ISSN, Volume 2, Issue 2, 2014

10. P.Subashini, M.Krishnaveni and Vijay Singh, “Image Deblurring Using Back Propagation Neural Network”, World of Computer Science and Information Technology Journal, Volume 1, Issue 6, 2011

11. Anja Borsdorf, Steffen Kappler, Rainer Raupach and Joachim Hornegger, “Analytic Noise Propagation for Anisotropic Denoising of CT Images”, IEEE, PP 5335-5338, 2008

12. Idris A. Elbakri and Jeffrey A. Fessler,” Statistical Image Reconstruction for Polyenergetic X-Ray Computed Tomography”, IEEE, Volume 21, Issue 2, 2002

13. Jung Kuk Kim, Jeffrey A. Fessler and Zhengya Zhang, “Forward-Projection Architecture for Fast Iterative Image Reconstruction in X-Ray CT”, IEEE, Volume 60, Issue 10, 2012

14. Ming Jiang, Ge Wang, Margaret W. Skinner, Jay T. Rubinstein and Michael W. Vannier, “Blind deblurring of spiral CT images—comparative studies on edge-to-noise ratios”, Medical Physics, Volume 29, Issue 5, 2002

15. Issam El Naqa, Daniel A. Low, Jeffrey D. Bradley, Milos Vicic, and Joseph O. Deasy, “Deblurring of breathing motion artifacts in thoracic PET images by deconvolution methods”, Medical Physics, Volume 33, Issue 10, 2006

16. V.Thavavel and R. Murugesan, “Regularized Computed Tomography using Complex Wavelets”, International Journal of Magnetic Resonance Imaging , Volume 1, Issue 1, 2007

17. Daniel F. Yu and Jeffrey A. Fessler, “Edge-Preserving Tomographic Reconstruction with Nonlocal Regularization”, IEEE, Volume 21, Issue 2, 2002

18. Suman Shrestha, “Image Denoising Using New Adaptive Based Median Filter” An International Journal (SIPIJ), Volume 5, Issue 4, 2014

19. Shruti bhargava and Ajay Somkuwar, “Hybrid Filters based Denoising of Medical Images using Adaptive Wavelet Thresholding Algorithm”, International Journal of Computer Applications Volume 83, Issue 3, 2013

20. Liyan Ma, Lionel Moisan, Jian Yu and Tieyong Zeng, “A Dictionary Learning Approach for Poisson Image Deblurring”, IEEE, Volume 32, Issue 7, 2013

21. Sachin D Ruikar and Dharmpal D Doye, “Wavelet Based Image Denoising Technique”, International Journal of Advanced Computer Science and Applications, Volume 2, issue 3, 2011

22. S.Kother Mohideen, S. Arumuga Perumal and M.Mohamed Sathik, “Image De-noising using Discrete Wavelet transform”, International Journal of Computer Science and Network Security, Volume 8, Issue 1, 2008

23. Kanwaljot Singh Sidhu, Baljeet Singh Khaira and Ishpreet Singh Virk, “Medical Image Denoising In The Wavelet Domain Using Haar And DB3 Filtering”, International Refereed Journal of Engineering and Science, Volume 1, Issue 1, 2012

24. Marcin Kociolek, Andrzej Materka, Michal Strzelecki and Piotr Szczypinski, “Discrete Wavelet Transform – Derived Features for Digital Image Texture Analysis”, International Conference on Signals and Electronic Systems, 2001

25. Gayatri P. Bhelke , V. S. Gulhane , “A Review of Detection and Reduction of Noise in Degraded Images by Efficient Noise Detection Algorithm”, International Journal of Computer Applications Technology and Research Volume 3– Issue 2, 105 - 108, 2014, ISSN: 2319–8656

26. V N Prudhvi Raj, Dr. T Venkateswarlu, De-noising of Computed Tomography Images using Multi-resolution Based Methods, Signal & Image Processing : An International Journal (SIPIJ) Vol.3, No.4, August 2012

27. Anil A. Patil, “Image de-noising using curvelet transform: An approach for edge preservation”, Journal of Scientific and Industrial Research

28. V.Thavavel, and R. Murugesan, Regularized Computed Tomography using Complex Wavelets , International Journal of Magnetic Resonance Imaging Vol. 01, No. 01, 2007, pp. 027-032, ISSN 1749-8023 (print), 1749-8031

29. Aleksandra Pižurica, Ljubomir Jovanov, Bruno Huysmans, “Multiresolution Denoising for Optical Coherence Tomography: A Review and Evaluation”


31. Pieczynski, D. Visvikis and M. Hatt, “PET/CT Image Denoising and Segmentation based on a Multi Observation and a Multi Scale Markov Tree Model”

32. Maria V. Storozhilova, Alexey S. Lukin, Dmitry V. Yurin and Valentin E. Sinitsyn, “Two Approaches for Noise Filtering in 3D Medical CT Images”

33. Adrian Taruttis, Jing Claussen, Daniel Razansky, Vasilis Ntziachristos, “Motion clustering for de-blurring multispectral optoacoustic tomography images of the mouse heart”, Journal of Biomedical Optics 17(1), 016009 (January 2012)

34. Stefan Weber, Thomas Schüle, Attila Kuba and Christoph Schnörr, “Binary Tomography with De-blurring”, U. Eckardt et al. (Eds.): IWCIA 2006, LNCS 4040, pp. 375–388, Springer-Verlag Berlin Heidelberg, 2006

35. Per Christian Hansen, ”Rotational Image De-blurring with Sparse Matrices” BIT Numerical Mathematics DOI: 10.1007/s10543-013-0464-y

36. Ming Jiang and Ge Wang, “Blind Deblurring of Spiral CT Images”, IEEE Transactions on Medical Imaging, Volume 22, Issue 7, July 2003

37. Empar Rollano-Hijarrubia, Rashindra Manniesing and Wiro J. Niessen, “Selective Deblurring for Improved Calcification Visualization and Quantification in Carotid CT Angiography: Validation Using Micro-CT”

38. Ankit Gupta, Neel Joshi, C. Lawrence Zitnick, Michael Cohen and Brian Curless, “Single Image De-blurring Using Motion Density Functions”

39. Oliver Whyte, Josef Sivic, Andrew Zisserman, Jean Ponce , “Non-uniform Deblurring for Shaken Images”

40. Jiaya Jia, “Single Image Motion De-blurring Using Transparency”, IEEE, 2007