ISSN: Online 2320-9801, Print 2320-9798


An Adaptive Contrast Enhancement of Colored Foggy Images

S.Mohanram1, T. Joyce Selva Hephzibah2, Aarthi.B3, Sakthivel.P3
  1. Graduate Student, Department of ECE, Indus College of Engineering, Coimbatore, India
  2. Assistant Professor, Department of ECE, Indus College of Engineering, Coimbatore, India
  3. PG Student, Department of ECE, Indus College of Engineering, Coimbatore, India


Abstract

An adaptive contrast enhancement method is proposed for degraded color images. Fog removal proceeds in two stages. In the first stage, the foggy image information is extracted: the RGB components of the input image are converted into HSI components. In the second stage, the intensity (I) component is processed by contrast enhancement and then adjusted with the value of the atmospheric light. After a morphological operation, the fog is partially removed from the source image. It is observed that the transmission ratio obtained from the calculated atmospheric light and the morphological operation is not perfect, because fog still remains depending on the distance of the objects. Therefore, a gamma adjustment is applied to the image produced by the transmission-ratio and morphological operations. Finally, the enhanced, defogged image is obtained as the desired output. The resulting image is produced with increased efficiency and reduced computation time, and the proposed method overcomes the drawbacks of the retinex-based approach.

 

Keywords

Contrast enhancement, gamma adjustment, morphology, intensity, transmission ratio

INTRODUCTION

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point [3]. The term gray level is often used to refer to the intensity of monochrome images. Color images are formed by a combination of individual 2-D images; for example, in the RGB color system a color image consists of three individual component images. When x, y, and the amplitude values of f are all finite, discrete quantities, the image is a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels [8]; pixel is the term most widely used to denote the elements of a digital image.
The 2-D continuous image a(x, y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m, n], with m = 0, 1, 2, …, M−1 and n = 0, 1, 2, …, N−1, is a[m, n]. In fact, in most cases a(x, y), which we consider to be the physical signal that impinges on the face of a 2-D sensor, is actually a function of many variables, including depth (z), colour (λ), and time (t). Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. Digital image processing, the manipulation of images by computer, is a relatively recent development in terms of humans' ancient fascination with visual stimuli.
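To make the a[m, n] notation concrete, the short NumPy sketch below (an illustrative addition, not part of the original paper) builds a small sampled image and indexes individual pixels; the sizes and gray levels are arbitrary toy choices.

# The sampled image a[m, n] described above is just a 2-D array of gray levels.
import numpy as np

M, N = 6, 4                          # index ranges m = 0..M-1, n = 0..N-1
a = np.zeros((M, N), dtype=np.uint8) # one gray level per integer coordinate [m, n]

a[0, 0] = 255                        # brightest gray level at one corner
a[M - 1, N - 1] = 128                # mid-gray at the opposite corner

print(a.shape)                       # (6, 4)
print(a[0, 0], a[M - 1, N - 1])      # 255 128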
At its most basic level, digital image processing requires a computer on which to process images and two special input/output devices: an image digitizer and an image display device. In their naturally occurring form, images are not directly amenable to computer analysis. Since a computer works with numerical rather than pictorial data, an image must be converted to numerical form before processing by computer can begin.

RELATED WORK

Xia Lan, Liangpei Zhang [1] proposed a technique in three stages. In the first stage, the degraded image is deblurred and denoised to estimate the hazy image. In the second stage, the transmission and the atmospheric light are estimated by the dark channel prior method. In the third stage, a regularized method is used to recover the underlying image. Experimental results with both simulated and real data demonstrate that the algorithm is effective in terms of both visual effect and quantitative assessment.
Inhye Yoon, Seonyung Kim, Donggyun Kim et al. [2] proposed a method that can significantly enhance the visibility of foggy video frames using the estimated atmospheric light and the transmission map, taking into account the temporal correlation of consecutive frames. Another contribution of that work is the compensation of color distortion between consecutive frames using the temporal difference ratio of the H and S channels. Experimental results show that the method can be applied to consumer video surveillance systems for removing atmospheric artifacts without color distortion.
Ovidiu Ghita, Dana E. [3] introduced a new variational approach for image enhancement that is constructed to alleviate the intensity saturation effects introduced by standard contrast enhancement (CE) methods based on histogram equalization. Total variation (TV) minimization with an L1 fidelity term is first applied to decompose the input image into cartoon and texture components. Contrary to previous works that rely solely on the distribution of the intensity information, the texture information is also employed to emphasize the contribution of local textural features in the CE process. This is achieved by implementing a nonlinear histogram-warping CE strategy that maximizes the information content in the transformed image.
Tak-Shing Wong, Charles A. et al. [4] presented the hypothesis selection filter (HSF) as a new approach for image quality enhancement. A set of filters is assumed to be selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, the HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user-selected filters, each best suited for a different region of an image. A probabilistic framework is used in which the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image.
Maximum-likelihood estimates of the model parameters are determined from an offline, fully unsupervised training procedure derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, the scheme is used as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics and yields quantitative improvements over several other state-of-the-art JPEG decoding methods.
In [5], a novel method for the contrast enhancement of fog-degraded video sequences is proposed based on Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum; this mitigates the degradation due to fog and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. The background and foreground images are then separately defogged by applying CLAHE and fused into a new frame, and finally the defogged video sequence is obtained. The experimental results show that the method is more effective than the traditional method; its performance is also analyzed with the contrast improvement index (CI) and the Tenengrad criterion (TEN).
In [6], night image enhancement using a hybrid of good and poor images is presented, in which a good-intensity image is created by combining a good-intensity day image and a poor-intensity night image taken of the same area of a highway. The resultant image shows the vehicles of the night scene while the surroundings are fused from the image taken during the day. The results are compared using two metrics, SNR and MSE, and the technique is reported to outperform other existing techniques owing to its computational simplicity and fast processing.
In [7], an approach using wavelet-based decomposition, quadratic thresholding, and an auto-adaptive LUM filter addresses the poor contrast of images captured in bad weather.
As one of the most common weather conditions, fog whitens the scenery in the captured image and decreases atmospheric visibility, which leads to a decline in the contrast of images acquired by optical equipment and gives them a fuzzy appearance. These problems cause great difficulty for image information extraction, outdoor image monitoring, automatic navigation, target identification, tracking, and so on. Therefore, it is necessary to enhance images captured in bad weather, that is, foggy images. In [7], a new method for foggy image enhancement is proposed that integrates multilevel wavelet decomposition, an auto-adapted LUM filter, and a quadratic thresholding function. Firstly, multilevel wavelet decomposition is applied to the image; secondly, the low-frequency and high-frequency components of the image are obtained; and finally, the auto-adapted LUM filter is applied to the low-frequency component. The new shrinkage function based on wavelet packet approximations turns out to be more flexible than the soft- and hard-thresholding functions, and wavelet reconstruction is then carried out on the processed components.
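As an illustration of the CLAHE step summarized for [5], the following sketch applies OpenCV's CLAHE to a single grayscale frame. The background/foreground separation and fusion of [5] are omitted, and the file name and parameter values are placeholders, so this is only a minimal sketch of the underlying technique.

# Plain CLAHE on one grayscale frame via OpenCV.
import cv2

frame = cv2.imread("foggy_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file

# clipLimit caps each local histogram so noise is not over-amplified;
# tileGridSize sets the size of the regions the equalization adapts to.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
defogged = clahe.apply(frame)

cv2.imwrite("defogged_frame.png", defogged)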

EXISTING SYSTEM

With the fast development of digital image processing technology, video surveillance systems are widely used. The quality of video images depends on the weather: it is better when the weather is good and worse when the weather is bad. Fog is one of the principal adverse weather conditions; an image obtained in foggy weather has low contrast and clarity, and some color information is also lost, which affects subsequent analysis and recognition. In our country, fog is becoming more and more severe with economic development. Most image recognition systems are designed for normal weather, so restoring fog-degraded images to improve image quality has high application value. Traditional methods for image degradation use various low-pass filters to remove noise, but they are not suitable for fog-degraded images. In foggy weather, there are many aerosol particles in the atmosphere whose diameters are larger than the wavelength of light; this affects the intensity distribution of the scattered light and therefore the quality of the image, and the contrast of the image drops as the distance between the objects and the camera increases. The discrete wavelet transform (DWT) is one of the available methods to increase contrast in an image, but it introduces quantization error. Wavelet transforms are also used to eliminate blur and noise in an image, but they produce different sampling noises in the spatial coordinates.
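For reference, a rough sketch of the DWT-style contrast adjustment mentioned above is given below, using PyWavelets. The one-level Haar decomposition, the simple stretch of the approximation band, and the random test image are assumptions for illustration only, not the exact method discussed here.

# One-level Haar DWT with a simple stretch of the low-frequency band.
import numpy as np
import pywt

img = np.random.rand(256, 256)                 # stand-in for a grayscale foggy image

cA, (cH, cV, cD) = pywt.dwt2(img, "haar")      # decompose into sub-bands
cA = (cA - cA.mean()) * 1.5 + cA.mean()        # stretch the approximation band

enhanced = pywt.idwt2((cA, (cH, cV, cD)), "haar")
enhanced = np.clip(enhanced, 0.0, 1.0)         # reconstruction can leave [0, 1]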

PROPOSED SYSTEM

A. Adaptive contrast enhancement algorithm
The proposed model follows an adaptive contrast enhancement algorithm consisting of three steps. In the first step, the RGB components of the image are converted into HSI. In the second step, the intensity value is calculated and adjusted using the atmospheric light, and gamma correction is applied to the S component. Finally, the RGB components are recovered by converting HSI back into RGB (a code sketch of these steps is given after the stage list below). In computer vision, the foggy image is commonly modeled as follows:
I(x) = J(x) t(x) + A (1 − t(x))    (1.1)
In Equation (1.1), I(x) is the foggy image, J(x) the image without fog, A the atmospheric light, and t(x) the transmission ratio. The objective of fog removal is to recover J(x), A, and t(x). The first term on the right-hand side of Equation (1.1), J(x)t(x), is called the direct attenuation, and the second term, A(1 − t(x)), is called the airlight component. The direct attenuation describes the scene radiance and its decay in the medium, while the airlight results from previously scattered light and leads to the color shift of the scene.
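Given estimates of A and t(x), Equation (1.1) can be inverted directly for J(x). The NumPy sketch below shows this inversion; the clamping threshold t_min is an assumed safeguard, and A and t are placeholders for whatever estimates the atmospheric-light and transmission-ratio steps produce.

# Direct inversion of Equation (1.1): J(x) = (I(x) - A) / t(x) + A.
import numpy as np

def recover_scene(I, A, t, t_min=0.1):
    """Recover the fog-free radiance J from I = J*t + A*(1 - t).

    I : (M, N) or (M, N, 3) float image in [0, 1]
    A : scalar (or per-channel) atmospheric light estimate
    t : (M, N) transmission map, clamped by t_min to avoid division by ~0
    """
    t = np.maximum(t, t_min)
    if I.ndim == 3 and t.ndim == 2:
        t = t[..., np.newaxis]          # broadcast the transmission over channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)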
B. Stages involved
Stage 1: Let the input degraded foggy image I(x, y) be of size M × N. Convert this input image into its HSI components.
Stage 2: Let the output image be O(x, y). Initialize the SUM and COUNT values, then calculate the step size with A = 1 and B = 1.
Stage 3: Calculate the intensity value I with P = max(I(x, y)).
Stage 4: Apply gamma correction to the S component, S = S′, then combine the H, I, and S components.
Stage 5: Finally, calculate the average value of (x, y):
O(x, y) = SUM(x, y) / COUNT(x, y)
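A minimal NumPy sketch of Stages 1–5 follows. The RGB↔HSI conversion uses the standard textbook formulas, P = max(I(x, y)) stands in for the atmospheric-light adjustment of Stage 3, and the gamma value is an assumed parameter; the SUM/COUNT averaging of Stages 2 and 5 and the morphological operation are omitted, so this only approximates the authors' implementation.

import numpy as np

def rgb_to_hsi(rgb):
    """Standard RGB -> HSI conversion; rgb is a float array in [0, 1], shape (M, N, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)   # hue in radians, [0, 2*pi)
    return h, s, i

def hsi_to_rgb(h, s, i):
    """Inverse HSI -> RGB conversion, handled sector by sector (120 degrees each)."""
    h = np.mod(h, 2.0 * np.pi)

    def sector(hh):
        # Within one sector: the "flat" channel, the "rotating" channel, and the
        # remainder that keeps R + G + B = 3 * I.
        x = i * (1.0 - s)
        y = i * (1.0 + s * np.cos(hh) / (np.cos(np.pi / 3.0 - hh) + 1e-8))
        z = 3.0 * i - (x + y)
        return x, y, z

    r = np.zeros_like(i); g = np.zeros_like(i); b = np.zeros_like(i)

    m0 = h < 2.0 * np.pi / 3.0                      # RG sector
    x, y, z = sector(h)
    b = np.where(m0, x, b); r = np.where(m0, y, r); g = np.where(m0, z, g)

    m1 = (~m0) & (h < 4.0 * np.pi / 3.0)            # GB sector
    x, y, z = sector(h - 2.0 * np.pi / 3.0)
    r = np.where(m1, x, r); g = np.where(m1, y, g); b = np.where(m1, z, b)

    m2 = h >= 4.0 * np.pi / 3.0                     # BR sector
    x, y, z = sector(h - 4.0 * np.pi / 3.0)
    g = np.where(m2, x, g); b = np.where(m2, y, b); r = np.where(m2, z, r)

    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def enhance_foggy(rgb, gamma=0.8):
    """Simplified Stages 1-5: RGB -> HSI, rescale I by P = max(I), gamma-correct S,
    recombine, and convert back to RGB."""
    h, s, i = rgb_to_hsi(rgb)            # Stage 1
    P = i.max() + 1e-8                   # Stage 3: atmospheric-light proxy P = max(I(x, y))
    i = np.clip(i / P, 0.0, 1.0)         # intensity adjustment
    s = s ** gamma                       # Stage 4: gamma correction on the S component
    return hsi_to_rgb(h, s, i)           # Stages 4-5: recombine and convert back to RGB

Usage (with a hypothetical input): out = enhance_foggy(img), where img is an (M, N, 3) float RGB array scaled to [0, 1].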

EXPERIMENTAL RESULTS

MATLAB (matrix laboratory) is a numerical computing environment and a fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran. Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities.
C. INPUT IMAGE
Figure 1 shows the input image, which contains fog of varying density; here the intensity values are low.
D. MAIN GUIDE
Figure 2 illustrates the main GUIDE window, in which an input image is uploaded using the Browse option.
E. LOADING THE INPUT IMAGE INTO THE MAIN GUIDE
Figure 3 shows the input image stretched to size M × N and loaded into the main GUIDE window. Clicking the Enhance Image option enhances the input image and displays the output image.
F. OUTPUT FOG-FREE IMAGE
Figure 4 shows the fog-free output image with an enhanced contrast level due to the gamma adjustment.

CONCLUSION

Image enhancement has become one of the active areas in the field of image processing. This paper has proposed an adaptive image defogging and enhancement algorithm that uses a morphological operation and the transmission ratio. The transmission ratio and the atmospheric light improve to a significant degree after the application of the proposed technique. Through the morphological operation, the essential information of the image is concentrated in fewer coefficients, while the less important information is distributed over many coefficients. The approach uses gamma adjustment to obtain fog-free images without introducing blur, which in turn improves the overall performance of the image enhancement system. The color and detail of the deblurred image are better than those obtained with the retinex-based approach. The proposed method is also realized using C++ and multi-kernel operation technology.

Figures at a glance

Figure 1 Figure 2 Figure 3 Figure 4 Figure 5

References