

Selective Review on Various Image Enhancement Techniques

Deepak K. Pandey1, Prof. Rajesh Nema2
  1. PG Scholar (Digital Communication), Department of Electronics & Communication, NIIST, Bhopal, India
  2. Professor & Head, Department of Electronics & Communication Engineering, NIIST, Bhopal, India


Abstract

The principal goal of image enhancement is to process an image so that the result is more suitable than the original image for a given purpose. Image clarity is strongly affected by surrounding conditions such as lighting, weather, or the equipment used to capture the image; as a result, many techniques, known as image enhancement techniques, have been developed to recover the information in an image. Digital image enhancement techniques provide multiple choices for improving the visual quality of images. The choice among these techniques is influenced by many factors such as the imaging modality, the task at hand, and the viewing conditions or display devices. This paper provides an overview of image enhancement concepts, along with algorithms commonly used for image enhancement. The main focus is on point processing methods, histogram processing, and some more complex algorithms proposed by different researchers that use histogram equalization, the curvelet transform, perceptron networks, and channel division concepts, and on comparing the output image quality of these algorithms. We then identify the shortcomings of some important previously proposed algorithms and, on the basis of a comparison of their experimental results, point out efficient directions for future research to obtain better results.

Keywords

Digital Image Processing, Point Processing, Histogram Equalization, Curvelet Transform, Perceptron Network and Channel Division, Image Enhancement

INTRODUCTION

The principal objective of image enhancement is to modify attributes of an image to make it more suitable for a given task and a specific observer. In this process, one or more attributes of the image are modified and processed. The choice of attributes and the way they are modified are specific to the given task. Observer-specific factors, such as the characteristics of the human visual system and the observer's experience, introduce subjectivity into the choice of which image enhancement method should be used. Many techniques exist that can enhance a digital image without spoiling its content. Enhancement methods can broadly be divided into the following two categories:
1. Spatial Domain Methods
2. Frequency Domain Methods
The basic concept of spatial domain methods [1] is to manipulate the values of image pixels directly so that the desired output is achieved. In mathematical form, in the spatial domain the pixel intensity values are manipulated directly according to the equation given below,
g(x, y) = T[f(x, y)]
In frequency domain methods [2], the image is first transferred into the frequency domain using the Fourier transform. All enhancement operations are then performed on the Fourier transform of the image, and the inverse Fourier transform is applied to obtain the resultant image. These enhancement operations are performed in order to modify image attributes such as brightness, contrast, or the distribution of grey levels; the pixel values of the output image are therefore modified according to the transformation function applied to the input values. In the frequency domain, image enhancement can be expressed by the following equation,
G(u, v) = H(u, v) F(u, v)
Where G(u, v) is the enhanced image, F(u, v) is the Fourier transform of the input image, and H(u, v) is the transfer function. A minimal frequency-domain filtering sketch is given below.
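As an illustration (not taken from any of the reviewed papers), the sketch below applies a Gaussian low-pass transfer function H(u, v) to the Fourier transform of a grayscale image using NumPy; the filter width sigma is an assumed, illustrative parameter.
    import numpy as np

    def frequency_domain_filter(image, sigma=30.0):
        """Filter a grayscale image as G(u, v) = H(u, v) F(u, v).

        H is chosen here as a Gaussian low-pass transfer function;
        sigma is an assumed parameter."""
        f = np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))  # F(u, v), centred
        rows, cols = image.shape
        u = np.arange(rows) - rows // 2
        v = np.arange(cols) - cols // 2
        V, U = np.meshgrid(v, u)
        d2 = U ** 2 + V ** 2
        h = np.exp(-d2 / (2.0 * sigma ** 2))                        # H(u, v)
        g = h * f                                                   # G(u, v)
        out = np.real(np.fft.ifft2(np.fft.ifftshift(g)))            # back to spatial domain
        return np.clip(out, 0, 255).astype(np.uint8)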
Image enhancement simply means transforming an input image f into an output image g using a transformation function T. The pixel values of images f and g are denoted by r and s, respectively. These are related by the expression,
s = T(r)
Where T is a transformation function that maps a pixel value r into a pixel value s. The results of this transformation are mapped back into the grey scale range, since only grey scale digital images are considered here; a digital gray image can have pixel values in the range 0 to 255.
In this paper, basic image enhancement techniques are discussed. The paper provides an overview of the concept of image enhancement, along with algorithms commonly used for this purpose. The main focus is on point processing methods, histogram processing, and some more complex algorithms that use histogram equalization, the curvelet transform, perceptron networks, channel division, and image fusion concepts. On the basis of this study, we try to identify possible directions for future research towards a highly efficient algorithm for enhancing image quality.

PREVIOUS METHODOLOGIES

Point Processing Operations
Point processing operations, also called intensity transformation functions, are the simplest and most basic spatial domain operations; they are performed on single pixels only, so each pixel value of the processed image depends only on the corresponding pixel value of the original image. Point processing approaches can be classified into four major categories, as follows.
1) Negative Transformation of an Image: In the image negative [3], the negative of the actual image is created. For this purpose the gray level values of the pixels in the image are inverted. Suppose we have an 8-bit digital image of size M x N; then each pixel value of the original image is subtracted from 255, i.e. g(x, y) = 255 - f(x, y) for 0 ≤ x < M and 0 ≤ y < N. In a normalized gray scale, s = 1.0 - r. Negative images are useful for enhancing white or gray detail embedded in the dark regions of an image. A minimal sketch of this operation is given below.
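The following sketch (illustrative only) computes the negative of an 8-bit grayscale image with NumPy.
    import numpy as np

    def negative(image):
        """Image negative for an 8-bit grayscale image: g(x, y) = 255 - f(x, y)."""
        return 255 - image.astype(np.uint8)

    # Example usage with a small synthetic image:
    # img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
    # print(negative(img))   # [[255 191], [127 0]]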
2) Thresholding Transformations: Thresholding transformations [4] are particularly useful for segmentation, in which we want to isolate an object of interest from the background. Image thresholding is the process of separating the information (objects) of an image from its background; hence, thresholding is applied to grey-level or colour document scanned images. Thresholding can be categorized into two main categories: global and local. Global thresholding methods choose one threshold value for the entire document image, often based on an estimate of the background level from the intensity histogram of the image; this is the reason why thresholding is considered a point processing operation. Local adaptive thresholding uses different values for each pixel according to the local area information. Local thresholding techniques are used with document images having non-uniform background illumination or complex backgrounds, such as watermarks found in security documents, where global thresholding methods fail to separate the foreground from the background. A simple global-thresholding sketch is given below.
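As a simple illustration (not tied to any particular reviewed method), the sketch below applies a single global threshold to a grayscale image; the threshold value T is an assumed parameter, taken here as the image mean when not supplied.
    import numpy as np

    def global_threshold(image, T=None):
        """Global thresholding: map pixels above T to white (255), the rest to black (0).

        If T is not supplied, the image mean is used as a simple, assumed default."""
        if T is None:
            T = image.mean()
        return np.where(image > T, 255, 0).astype(np.uint8)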
3) Logarithmic Transformations: The log transformation [5] maps a narrow range of low input grey level values into a wider range of output values. Its opposite, the inverse log transformation, performs the reverse mapping. Log functions are useful in conditions where the input grey level values span an extremely large range.
Sometimes the dynamic range of a processed image far exceeds the capability of the display device; in such cases only the brightest parts of the image are visible on the display screen. To solve this problem, an effective way to compress the dynamic range of pixel values is to use the log transformation, which can be expressed by,
s = c · log(1 + r)
Where c is a constant and it is assumed that r ≥ 0.
This transformation maps a narrow range of low-level grey scale intensities into a wider range of output values; it is therefore used to expand the values of dark pixels and compress the values of bright pixels. The inverse log transform does the opposite: it expands the values of bright pixels while compressing the darker levels, i.e. it maps a wide range of high-level grey scale intensities into a narrow range of high-level output values. A sketch of the forward log transformation appears below.
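A minimal sketch of the log transformation for an 8-bit image; the scaling constant c = 255 / log(256) is an assumed choice that keeps the output within [0, 255].
    import numpy as np

    def log_transform(image):
        """s = c * log(1 + r), with c chosen so that r = 255 maps to s = 255."""
        r = image.astype(np.float64)
        c = 255.0 / np.log(1.0 + 255.0)
        s = c * np.log(1.0 + r)
        return np.clip(s, 0, 255).astype(np.uint8)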
4) Power-Law Transformations: The power-law transformation function is expressed by,
s = c r^γ
The power-law transformation is also called gamma correction [6]. Different levels of enhancement can be obtained for different values of γ. Different display monitors display images at different intensities and clarity; that is, every monitor has built-in gamma correction with a certain gamma range, so a good monitor automatically corrects all images displayed on it for the best contrast to give the user the best experience. The difference between the log transformation and the power-law functions is that, with the power-law function, a family of possible transformation curves can be obtained simply by varying γ. The difference between the image negative formula and the log and power-law formulas is that, for the image negative, it is not necessary to map the results into the grey scale range [0, L-1]: the output of L-1-r automatically falls in [0, L-1]. For the log and power-law transformations, however, the resulting values depend on control parameters such as γ and the logarithmic scale, so the results should be mapped back to the grey scale range to obtain a meaningful output image. A gamma-correction sketch is given below.
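A minimal gamma-correction sketch for an 8-bit image; normalizing to [0, 1] before applying the exponent, with c = 1, is an assumed convention. γ < 1 brightens dark regions, γ > 1 darkens them.
    import numpy as np

    def gamma_correction(image, gamma=0.5):
        """Power-law (gamma) transformation: s = c * r**gamma on a normalized scale."""
        r = image.astype(np.float64) / 255.0   # normalize to [0, 1]
        s = np.power(r, gamma)                 # c = 1 on the normalized scale
        return np.clip(255.0 * s, 0, 255).astype(np.uint8)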
Histogram Processing
Histogram processing is a basic operation in image enhancement. A histogram simply plots the frequency at which each grey level occurs, from 0 (black) to 255 (white); in other words, it tells us how the individual pixel values of an image are distributed. Histogram processing is usually the initial step in pre-processing. The normalized histogram of an image is given by
h (rk) = nk/N
Where rk is the k-th intensity level, nk is the number of pixels in the image with intensity rk, and N is the total number of pixels.
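The normalized histogram can be computed directly; a short NumPy sketch (illustrative only) is shown below.
    import numpy as np

    def normalized_histogram(image):
        """h(r_k) = n_k / N for an 8-bit grayscale image (256 bins)."""
        counts = np.bincount(image.ravel(), minlength=256)   # n_k for k = 0..255
        return counts / image.size                           # divide by N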
1) Histogram Equalization: Histogram equalization [7] is a very basic technique for enhancing the appearance of images. In this technique, the histogram of the image is stretched to make its distribution approximately uniform. Suppose we have an image that is predominantly dark; its histogram is then skewed towards the lower end of the grey scale, and all the image detail is compressed into the dark end of the histogram. If we could 'stretch out' the grey levels at the dark end to produce a more uniformly distributed histogram, the image would become much clearer. Histogram equalization automatically determines a transformation function that seeks to produce an output image with a uniform histogram. In general, histogram equalization can be divided into three types: Global Histogram Equalization (GHE), Adaptive Histogram Equalization (AHE), and Block-based Histogram Equalization (BHE). In GHE, each pixel is assigned a new intensity value based on the cumulative distribution function (CDF) of the original grayscale image: the cumulative histogram obtained from the input image is rescaled to the full range (up to 255) to create the new intensities. In AHE, the histogram is equalized based on localized data. A sketch of GHE follows.
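A minimal sketch of global histogram equalization for an 8-bit grayscale image, implemented directly from the CDF definition (illustrative, not a reproduction of any particular paper's variant).
    import numpy as np

    def histogram_equalization(image):
        """Global histogram equalization: map each gray level through the scaled CDF."""
        hist = np.bincount(image.ravel(), minlength=256)
        cdf = np.cumsum(hist).astype(np.float64)
        cdf /= cdf[-1]                                  # normalize CDF to [0, 1]
        mapping = np.round(255.0 * cdf).astype(np.uint8)
        return mapping[image]                           # apply as a lookup table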
2) Local Enhancement: The previous methods, histogram equalization and histogram matching, are global, and their results are not always up to the mark; local enhancement [7] therefore came into existence. A square or rectangular neighbourhood (mask) is defined and its centre is moved from pixel to pixel. For each neighbourhood, the histogram of the points in the neighbourhood is calculated and a histogram equalization or specification function is obtained, which is then used to map the gray level of the pixel centred in the neighbourhood. The new pixel values and the previous histogram can be used to compute the next histogram incrementally. A straightforward (non-incremental) sketch is given below.
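A straightforward sliding-window sketch of local histogram equalization; it recomputes the neighbourhood histogram at every pixel rather than updating it incrementally, so it is slow but easy to follow. The window size is an assumed parameter.
    import numpy as np

    def local_histogram_equalization(image, window=15):
        """Equalize each pixel using the histogram of its (window x window) neighbourhood."""
        half = window // 2
        padded = np.pad(image, half, mode='reflect')
        out = np.empty_like(image)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                patch = padded[i:i + window, j:j + window]
                hist = np.bincount(patch.ravel(), minlength=256)
                cdf = np.cumsum(hist).astype(np.float64)
                cdf /= cdf[-1]
                out[i, j] = np.uint8(round(255.0 * cdf[image[i, j]]))
        return out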
Brightness Preserving Bi-Histogram Equalization (BBHE)
In order to overcome the drawbacks of the HE and LHE methods above, a new algorithm, Brightness Preserving Bi-Histogram Equalization (BBHE), was proposed by Y.-T. Kim [8]. The essence of the BBHE method is to decompose the original image into two sub-images using the image mean gray level, and then apply classical HE to each sub-image. The ultimate goal of BBHE is to preserve the mean brightness of a given image while the contrast is enhanced. BBHE first decomposes the input image into two sub-images based on the mean of the input image: one sub-image is the set of samples less than or equal to the mean, and the other is the set of samples greater than the mean. The two sub-images are then equalized independently based on their respective histograms, with the constraint that the samples in the former set are mapped into the range from the minimum gray level to the input mean, and the samples in the latter set are mapped into the range from the mean to the maximum gray level. In other words, one sub-image is equalized over the range up to the mean and the other over the range above the mean, each based on its own histogram. The resulting equalized sub-images are therefore bounded by each other around the input mean, which has the effect of preserving the mean brightness of the image. A sketch of this decomposition-and-equalization step is shown below.
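A minimal sketch of BBHE following the description above: the histogram is split at the input mean, and each half is equalized into its own output range. Details such as rounding and the exact range end-points are simplifying assumptions, not Kim's exact formulation.
    import numpy as np

    def bbhe(image):
        """Brightness Preserving Bi-Histogram Equalization (simplified sketch)."""
        img = image.astype(np.int64)
        mean = int(round(img.mean()))
        hist = np.bincount(img.ravel(), minlength=256)

        # Equalize the lower histogram (levels 0..mean) into [0, mean].
        lower_cdf = np.cumsum(hist[:mean + 1]).astype(np.float64)
        lower_cdf /= max(lower_cdf[-1], 1.0)
        lower_map = np.round(mean * lower_cdf)

        # Equalize the upper histogram (levels mean+1..255) into [mean+1, 255].
        upper_hist = hist[mean + 1:]
        if upper_hist.size:
            upper_cdf = np.cumsum(upper_hist).astype(np.float64)
            upper_cdf /= max(upper_cdf[-1], 1.0)
            upper_map = np.round((mean + 1) + (255 - mean - 1) * upper_cdf)
        else:
            upper_map = np.array([])

        mapping = np.concatenate([lower_map, upper_map]).astype(np.uint8)
        return mapping[image]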
Minimum Mean Brightness Error Bi-Histogram Equalization
Minimum Mean Brightness Error Bi-Histogram Equalization (MMBEBHE) [9] was proposed by S.-D. Chen and A. Ramli to overcome the limitations of the BBHE method, because there are still cases that BBHE does not handle well. MMBEBHE provides maximum brightness preservation. The BBHE method separates the input image's histogram into two parts at the input mean before equalizing them independently; however, using the input mean as the threshold level to separate the histogram does not guarantee maximum brightness preservation. Brightness preservation is described here by an objective measure referred to as the Absolute Mean Brightness Error (AMBE), defined as the absolute difference between the input and output means:
AMBE(X, Y) = |Mx − My|
A lower AMBE implies better brightness preservation. MMBEBHE therefore searches for the separating threshold that minimizes the AMBE and then performs bi-histogram equalization at that threshold; a brute-force sketch of this idea is given below.
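The sketch below follows that idea in the simplest possible way: it tries every candidate threshold, applies bi-histogram equalization at that threshold, and keeps the result with the smallest AMBE. Chen and Ramli's paper uses a more efficient AMBE estimation; the exhaustive search here is an illustrative simplification.
    import numpy as np

    def bi_histogram_equalize(image, t):
        """Equalize levels 0..t into [0, t] and levels t+1..255 into [t+1, 255]."""
        hist = np.bincount(image.ravel(), minlength=256)
        maps = []
        for lo, hi, part in ((0, t, hist[:t + 1]), (t + 1, 255, hist[t + 1:])):
            cdf = np.cumsum(part).astype(np.float64)
            if cdf.size and cdf[-1] > 0:
                maps.append(np.round(lo + (hi - lo) * cdf / cdf[-1]))
            else:
                maps.append(np.arange(lo, hi + 1, dtype=np.float64))  # identity for empty part
        mapping = np.concatenate(maps).astype(np.uint8)
        return mapping[image]

    def mmbebhe(image):
        """Pick the separating threshold that minimizes AMBE, then equalize (sketch)."""
        input_mean = image.mean()
        best_ambe, best_out = np.inf, image
        for t in range(255):
            out = bi_histogram_equalize(image, t)
            current_ambe = abs(input_mean - out.mean())
            if current_ambe < best_ambe:
                best_ambe, best_out = current_ambe, out
        return best_out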
Multiple-Peak Images Based on Histogram Equalization
To enhance image contrast, specifically for multiple-peak images, Fan Yang and Jin Wu [10] proposed "An Improved Image Contrast Enhancement in Multiple-Peak Images Based on Histogram Equalization". In this process, the input image is first convolved with a Gaussian filter with optimum parameters. Secondly, the original histogram is divided into different areas by the valley values of the image histogram. Finally, the proposed method is used to process the images. This method has an excellent degree of simplicity and adaptability in comparison with other methods.
In order to reduce the interference of noise and improve the quality of the input image, Fan Yang and Jin Wu propose to first convolve the image with a Gaussian filter. The Gaussian filter reduces the difference in brightness between adjacent elements and can also reduce blocking effects.
To make the image enhancement processing more purposeful and adaptive, the image histogram is analysed first: it can be divided into several sub-histograms at the local minimum (valley) gray levels.
In order to overcome the drawback of HE, a new algorithm was proposed to calculate a modified probability density function (PDF), where N is the total number of pixels in the image, nk is the number of pixels that have gray level rk, and L is the total number of possible gray levels in the image.
Image Dependent Brightness Preserving Histogram Equalization
P. Rajavel [11] proposed the Image-Dependent Brightness Preserving Histogram Equalization (IDBPHE) technique to enhance image contrast while preserving image brightness. The IDBPHE technique uses the wrapping discrete curvelet transform (WDCvT) and histogram matching. The corresponding steps are given below.
• Region identification and separation: The curvelet transform is used to identify the bright regions of the original image.
• Histogram computation and matching:
- The histogram of the original image and the histogram of the pixels belonging to the identified regions are computed.
- The histogram of the original image is modified with respect to the histogram of the identified regions.
Image Pixel Interdependency Linear Perceptron Network (IPILN)
Murli D. Vishwakarma proposed the Image Pixel Interdependency Linear Perceptron Network (IPILN) [12]. IPILN uses a Gaussian filter, the curvelet transform, and a perceptron network. This technique involves three steps for contrast enhancement of the image, explained below.
1. Image Filtration: A Gaussian filter is applied to the input image to obtain a filtered image.
2. Image Transformation: Transformation is a process used to convert a signal from one domain to another without loss of information. In this approach a multi-resolution curvelet transform is used: the filtered image is transformed with the curvelet transform, which is a multidirectional transform.
3. Perceptron Network: To adjust the weights of the input image, the concept of a perceptron network is used. In a perceptron network, the weights are adjusted using a learning factor that varies from 0 to 1.
Content-Aware Dark Image Enhancement through Channel Division
Adin Ramirez Rivera, Byungyong Ryu, and Oksam Chae [13] proposed "Content-Aware Dark Image Enhancement Through Channel Division", a content-aware algorithm that enhances dark images, sharpens edges and details present in textured regions, and strongly preserves the smoothness of flat regions. The algorithm produces an ad hoc transformation for each image, adapting the mapping functions to the characteristics of each image to produce the maximum enhancement. The authors analyse the contrast of the image in boundary and textured regions, and then group information with common characteristics. These groups are used to model the relations within the image, from which the transformation functions are extracted. The results of this whole process are mixed adaptively, taking into account the characteristics of the human visual system, which boosts the details in the image.
This algorithm enhances the appearance of human faces and blue skies with or without clouds without introducing artifacts, but it is unable to recover information from shadowed or dark areas of images with near-black intensities.
A Method to Improve the Image Enhancement Result Based on Image Fusion
Xiaoying Fang, Jingao Liu, Wenquan Gu, and Yiwen Tang [14] proposed "A Method to Improve the Image Enhancement Result based on Image Fusion", which improves the enhancement result by image fusion with an evaluation of sharpness. Image enhancement can improve the perception of information, but one particular enhancement method typically improves some regions while actually deteriorating other regions that need a different enhancement, or none at all. In this algorithm, an image taken from a real scene is therefore divided into several regions according to their need for enhancement, the regions are enhanced accordingly, and the results are fused with the help of a sharpness evaluation. A simple, generic fusion sketch is given after this paragraph.
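The sketch below is a generic, hypothetical illustration of sharpness-guided fusion, not the method of [14]: for each block, it keeps whichever of the original or the enhanced image has the higher local sharpness, measured by the variance of the Laplacian. The block size and the sharpness measure are assumptions chosen for simplicity.
    import numpy as np

    def laplacian_variance(block):
        """Simple sharpness measure: variance of a 4-neighbour Laplacian."""
        if min(block.shape) < 3:
            return 0.0
        b = block.astype(np.float64)
        lap = (-4.0 * b[1:-1, 1:-1] + b[:-2, 1:-1] + b[2:, 1:-1]
               + b[1:-1, :-2] + b[1:-1, 2:])
        return float(lap.var())

    def fuse_by_sharpness(original, enhanced, block=32):
        """Block-wise fusion: keep the sharper of the two inputs for each block."""
        out = original.copy()
        rows, cols = original.shape
        for i in range(0, rows, block):
            for j in range(0, cols, block):
                o = original[i:i + block, j:j + block]
                e = enhanced[i:i + block, j:j + block]
                if laplacian_variance(e) > laplacian_variance(o):
                    out[i:i + block, j:j + block] = e
        return out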

PERFORMANCE COMPARISON OF SELECTED ALGORITHMS

In this section we first study the major advantages and limitations of the algorithms presented above. We then compare parameters such as PSNR (peak signal-to-noise ratio) and AMBE (absolute mean brightness error) on the basis of the experimental results reported for these algorithms, and try to find out which one is best.
On the basis of the advantages and limitations summarized in Table 1 and the experimental results in Table 2, we try to identify possible directions of future research towards a highly efficient algorithm for enhancing image quality.
Table 1: Advantages and limitations of the selected algorithms.
For the comparison of the algorithms explained above, we take two parameters, AMBE and PSNR.
The absolute mean brightness error (AMBE) is used to assess the degree of brightness preservation; a smaller AMBE is better. AMBE is calculated by,
AMBE(X, Y) = |Mx − My|
Where Mx is the mean of the input image X and My is the mean of the output image Y.
The peak signal-to-noise ratio (PSNR) is used to assess the degree of contrast enhancement; a greater PSNR is better. PSNR is calculated by,
PSNR = 10 · log10(255² / MSE), where MSE is the mean squared error between the input and output images.
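Both quality measures can be computed directly; a short sketch (illustrative only) is shown below.
    import numpy as np

    def ambe(input_image, output_image):
        """Absolute Mean Brightness Error: |mean(input) - mean(output)|."""
        return abs(float(input_image.mean()) - float(output_image.mean()))

    def psnr(input_image, output_image):
        """Peak signal-to-noise ratio for 8-bit images, in decibels."""
        diff = input_image.astype(np.float64) - output_image.astype(np.float64)
        mse = np.mean(diff ** 2)
        if mse == 0:
            return float('inf')      # identical images
        return 10.0 * np.log10(255.0 ** 2 / mse)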
The method with the smaller AMBE and the greater PSNR is considered the better algorithm. For comparison purposes we consider histogram equalization, multi-histogram equalization, IDBPHE, and IPILN.
Table 2: Comparison of AMBE and PSNR values for the selected algorithms.

CONCLUSION & FUTURE WORK

The conclusion of the above discussion is that a classical method such as contrast stretching loses some of the detailed information of the image during the enhancement process, while histogram equalization gives better results but cannot preserve the brightness of the original image. Many other algorithms, such as MHE, BBHE, MMBEBHE, histogram equalization for multiple-peak images, IPILN, content-aware algorithms, and the image fusion method, have been proposed for contrast enhancement with brightness preservation of the original image, aiming for a high PSNR and a low AMBE value, and to some extent these algorithms achieve their goal. However, there is no single parameter, or reference point among the existing parameters, that can be taken as an absolute criterion for declaring image quality good or bad: if the output image of an algorithm looks good to the eye, the algorithm is said to be good, otherwise it is not up to the mark. Despite this, some parameters have been devised for comparing the effectiveness of algorithms, such as AMBE, PSNR, computational cost, and computation time, and these parameters can be used when choosing an algorithm for a real-time application.
Despite the effectiveness of each of these algorithms when applied separately, in practice a combination of such methods should be devised to achieve more effective image enhancement. It is therefore clear that there is considerable scope for future work in this field to develop new and efficient algorithms, especially using the Discrete Wavelet Transform (DWT), because wavelet transforms are very good for image denoising and input images are always affected by noise during image processing. The content-aware technique was unable to enhance the dark areas of an image, but the image fusion technique overcomes this limitation and enhances all regions of an image. By combining the DWT with the image fusion technique, a more efficient technique for image enhancement can therefore be obtained.

References

  1. Bhabatosh Chanda and Dwijesh Dutta Majumder, Digital Image Processing and Analysis, 2002.
  2. Raman Maini and Himanshu Aggarwal, "A Comprehensive Review of Image Enhancement Techniques," Journal of Computing, vol. 2, issue 3, March 2010.
  3. R. Shanmugalakshmi and S. Annadurai, "Fundamentals of Digital Image Processing," Pearson Education India, 2007.
  4. R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, vol. 1, Addison-Wesley, Reading, MA, 1992.
  5. R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, McGraw-Hill International Edition, 1995.
  6. W. K. Pratt, Digital Image Processing, Prentice Hall, 1989.
  7. Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing," 3rd Edition, Prentice Hall, 2009.
  8. Y.-T. Kim, "Contrast enhancement using brightness preserving bi-histogram equalization," IEEE Trans. on Consumer Electronics, vol. 43, no. 1, Feb. 1997.
  9. S.-D. Chen and A. Ramli, "Minimum mean brightness error bi-histogram equalization in contrast enhancement," IEEE Trans. on Consumer Electronics, vol. 49, no. 4, Nov. 2003.
  10. Fan Yang and Jin Wu, "An Improved Image Contrast Enhancement in Multiple-Peak Images Based on Histogram Equalization," IEEE International Conference on Computer Design and Applications, 2010.
  11. P. Rajavel, "Image Dependent Brightness Preserving Histogram Equalization," IEEE Transactions on Consumer Electronics, vol. 56, no. 2, May 2010.
  12. Murli D. Vishwakarma, "Image Pixel Interdependency Linear Perceptron Network," IJARCS, vol. 2, no. 4, 2011.
  13. Adin Ramirez Rivera, Byungyong Ryu, and Oksam Chae, "Content-Aware Dark Image Enhancement Through Channel Division," IEEE Transactions on Image Processing, vol. 21, issue 9, 2012.
  14. X. Fang, J. Liu, W. Gu, and Y. Tang, "A Method to Improve the Image Enhancement Result based on Image Fusion," 978-1-61284-774-0/11, ©2011 IEEE.
  15. A. C. Bovik, Digital Image Processing Course Notes, Dept. of Electrical Engineering, University of Texas at Austin, 1995.