
Video Enhancement Based on Piecewise Tone Mapping

T. Manikandan1
Department of Computer Science and Engineering, Dr. Sivanthi Aditanar College of Engineering, Tiruchendur-628215, Tamil Nadu, India

Abstract

Video enhancement plays an important role in various video applications. It has two key goals: 1) to achieve high intra-frame quality over the entire picture, where multiple regions of interest (ROIs) can be adaptively and simultaneously enhanced, and 2) to guarantee inter-frame quality consistency among video frames. Histogram equalization (HE) is an effective technique for improving image contrast, but conventional HE often produces excessive contrast enhancement, which gives the processed image an unnatural look and creates visual artifacts; however, dramatically different results can be obtained with relatively minor modifications.

INTRODUCTION

VIDEO services have become increasingly important in many areas, including communications, entertainment, healthcare, and surveillance. However, in many applications the quality of video service is still hindered by technical limitations such as poor lighting conditions, bad exposure levels, and unpleasant skin color tones. Thus, it is crucial to enhance the perceptual quality of videos. The first aim of this paper is to set out a concise mathematical description of adaptive histogram equalization (AHE). The second aim is to show that the resulting framework can be used to generate a variety of contrast enhancement effects, of which HE is a special case. This is achieved by specifying alternative forms of a function we call the cumulation function. Several contrast enhancement techniques have been introduced to improve the contrast of an image. These techniques can be broadly categorized into two groups: direct methods and indirect methods. Direct methods define a contrast measure and try to improve it.

A. Intra frame Quality Enhancement with Multiple Regions of Interest (ROIs)

Since a frame may often contain multiple ROIs, it is desirable for the enhancement algorithm to achieve high intra-frame quality over the entire picture, where multiple ROIs can be adaptively and simultaneously enhanced.

B. Inter frame Quality Enhancement Among Frames:

Such methods are not suitable for enhancing videos, since the inter-frame quality consistencies among frames are not considered. Some state-of-the-art algorithms can be extended to enhance inter-frame quality in specific applications. For example, Liu et al. proposed a learning-based method for video conferencing in which frames share the same tone mapping function if their backgrounds do not change much. Although this method can achieve good inter-frame quality in video conferencing scenarios, it cannot be applied to scenarios where the video backgrounds or contents change frequently. Toderici et al. introduced a temporally coherent method that combines the frame feature and the shot feature to enhance a frame. Their method can effectively enhance both shot-change frames and regular frames.

Adaptive Histogram Equalization:

The AHE process can be understood in different ways. In one perspective, the histogram of grey levels (GLs) in a window around each pixel is generated first. The cumulative distribution of GLs, that is, the cumulative sum over the histogram, is used to map the input pixel GLs to output GLs. If a pixel has a GL lower than all others in the surrounding window, the output is maximally black; if it has the median value in its window, the output is 50% grey. This section proceeds with a concise mathematical description of AHE which can be readily generalized, and then considers the two main types of modification. The relationship between the equations and different (conceptual) perspectives on AHE, such as GL comparison, might not be immediately clear, but generalizations can be expressed far more easily in this framework.
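To make the grey-level comparison view concrete, the following Python sketch (a minimal, deliberately slow illustration; the window size win is an assumed parameter) maps each pixel to the fraction of window pixels at or below its own grey level:

import numpy as np

def ahe(img, win=64):
    # Sliding-window AHE: the output at each pixel is the local
    # cumulative distribution evaluated at that pixel's grey level.
    h, w = img.shape
    r = win // 2
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            block = img[y0:y1, x0:x1]
            # The window minimum maps to black; the window median
            # maps to 50% grey, matching the description above.
            out[y, x] = (block <= img[y, x]).mean()
    return (out * 255).astype(np.uint8)

A per-pixel loop like this is only for exposition; practical implementations use interpolated tile histograms to avoid the quadratic cost in the window size.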

Modified cumulation functions:

There are situations in which it is desirable to enhance details in an image without significantly changing its general characteristics. There are also situations in which stronger contrast enhancement is required, but where the effect of standard AHE is too severe. Our aim was to develop a fast and flexible method of adaptive contrast enhancement which could deal with such tasks using few parameters. The version with Gaussian blurring that has been proposed is quite effective, but slow, because a histogram must be processed for each pixel. In this study we considered alternative forms of enhancement by directly specifying alternative cumulation functions. The procedure we set out is simple and adds flexibility, and parameter values can be chosen quite easily. A wide variety of contrast enhancement effects can be obtained by following the procedure outlined and choosing different cumulation functions. Any reasonable choice can be accommodated by the Fourier series method, so the challenge is to develop a simple form which produces a desirable range of effects.
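As one hedged illustration of a modified cumulation function (the specific power-law form below is our assumption for exposition, not the exact family proposed here), the histogram can be raised to a power alpha before cumulating; alpha = 1 recovers ordinary HE, while smaller values soften the enhancement:

import numpy as np

def generalized_equalize(img, alpha=0.5):
    # Weight the histogram before cumulating: alpha = 1 is plain HE,
    # alpha = 0 yields a uniform weighting (a near-identity mapping).
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    cdf = np.cumsum(hist ** alpha)
    cdf /= cdf[-1]                      # normalized cumulation function
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]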

System overview:

Our hardware prototype consists of two light stands, one on each side of the laptop, so that both sides of the face are lit equally. Each light stand contains 20 LED lights in four colors: red, green, blue, and white. The LEDs are mounted on a circuit board covered with a diffuser, which softens the lighting and makes it less intrusive. The LEDs of the same color are connected to an LED driver. The four LED drivers are connected to a data acquisition controller which is plugged into a laptop's USB port. The data acquisition controller provides a programmable interface that allows applications to adjust the voltages of the LED drivers. Each LED driver is a voltage-to-current converter which adjusts the brightness of the LED lights.
Exposure and White Light Initialization:
When the system is started, it enters the state of “Exposure and White Light Init.” The system first checks the overall intensity of the image (no face detection yet). If it is too dark, the system sets the voltage of the white LED light to an initial voltage value. Then the camera exposure is adjusted to ensure reasonable face brightness. Denote Ymin as the minimal intensity value and Ymax as the maximal intensity value, set to 70 and 170, respectively, in our implementation. If the average intensity Iy in the face region is less than Ymin or larger than Ymax, we increase or decrease the exposure by one level at a time until Iy falls between Ymin and Ymax.
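The exposure loop can be summarized in the sketch below; the camera object with get_exposure()/set_exposure() and the face_mean_intensity() callback are hypothetical stand-ins for the real driver interface:

Y_MIN, Y_MAX = 70, 170   # face-brightness bounds used in our implementation

def adjust_exposure(camera, face_mean_intensity):
    # Nudge the exposure one level at a time until the average face
    # intensity Iy falls inside [Y_MIN, Y_MAX].
    iy = face_mean_intensity()
    while not (Y_MIN <= iy <= Y_MAX):
        step = 1 if iy < Y_MIN else -1
        camera.set_exposure(camera.get_exposure() + step)
        iy = face_mean_intensity()   # re-measure after each adjustment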

Setting up the Target Face Color:

After adjusting the camera exposure, the system enters the state of setting the target face color. In this state, we use the average face color of the current frame to compute the target face color based on our learned good-skin-tone model.

Converging at Target and Global Illumination Detection:

When the environment illumination changes after the system enters the state “In Target,” the system needs to adjust the camera exposure and voltages accordingly. We have implemented a simple environment illumination change detector in our system, which is invoked once the system enters the “In Target” state. At each frame, the detector computes the average intensity of the entire image, including the non-face area. The detector maintains a mean value and standard deviation over time and uses the accumulated statistics to determine whether there is an environment illumination change in the new frame. If a change is detected, the system returns to the initial state “Exposure and White Light Init” and restarts the optimization loop.
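A minimal version of such a detector is sketched below; the k-sigma threshold and the minimum history length are our assumptions, and Welford's online algorithm maintains the running statistics:

import numpy as np

class IlluminationChangeDetector:
    def __init__(self, k=3.0, min_frames=30):
        self.k, self.min_frames = k, min_frames
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, frame):
        # Average intensity of the entire image, including non-face area.
        intensity = float(np.mean(frame))
        if self.n >= self.min_frames:
            std = (self.m2 / self.n) ** 0.5
            if abs(intensity - self.mean) > self.k * max(std, 1e-6):
                return True          # illumination change detected
        # Welford's online update of the accumulated mean and variance.
        self.n += 1
        delta = intensity - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (intensity - self.mean)
        return False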

A+ECB Video Enhancement:

A. Motivations:

As mentioned, most existing approaches have various limitations when enhancing videos. Consider the results of a modified global histogram equalization algorithm and of a region-based method. Since the former enhances the image based on a global contrast metric without considering region differences, regions such as faces are not properly enhanced. Since the region-based method identifies the face region and performs enhancement there, the visual quality of the face is much improved; however, because the tone mapping function trained on the face region is applied to the entire image, the quality of some other regions, such as the screen, becomes poorer.
From the above discussions, we have the following observations:
1) In order to achieve suitable enhancement results, features from ROIs need to be considered;
2) It is desirable to enhance the entire frame “globally” but with the consideration of different ROIs at the same time.
Intra-and-Inter-Constraint-Combined Algorithm:
The framework of our A+ECB algorithm can be described as follows: an input frame is first enhanced by the proposed ACB step to improve the intra-frame quality; the resulting frame is then further enhanced by the proposed ECB step to handle the inter-frame constraints. The ACB step and the ECB step are described in detail in the following.
The ACB step proceeds as follows: multiple ROIs are first identified in the input video frame. In this paper, we use video conferencing and video surveillance as example application scenarios and identify ROIs (such as human faces, screens, cars, and whiteboards) with an AdaBoost-based object detection method. Other object detection and saliency detection algorithms can also be adopted to obtain the ROIs.
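To give one plausible shape to the fused piecewise global tone mapping, the sketch below equalizes each ROI separately and averages the resulting curves with area-proportional weights into a single global lookup table; this fusion rule is an illustrative assumption, not the exact ACB construction:

import numpy as np

def fused_tone_curve(frame, rois):
    # rois: list of (x, y, w, h) bounding boxes from the ROI detector.
    curves, weights = [], []
    for (x, y, w, h) in rois:
        patch = frame[y:y + h, x:x + w]
        hist = np.bincount(patch.ravel(), minlength=256).astype(np.float64)
        cdf = np.cumsum(hist)
        curves.append(cdf / cdf[-1])       # per-ROI equalization curve
        weights.append(w * h)              # weight by ROI area
    weights = np.asarray(weights, dtype=np.float64)
    weights /= weights.sum()
    fused = sum(wt * c for wt, c in zip(weights, curves))
    lut = np.round(255 * fused).astype(np.uint8)
    return lut[frame]                      # one global mapping for the frame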
ECB Step:
The ECB step can be implemented within the HEM-based framework. In our paper, we also propose an alternative ECB step besides this one.
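The details of our ECB step are given later; as a hedged illustration of an inter-frame constraint, the sketch below simply blends the current frame's tone curve with the previous one so the mapping cannot change abruptly between frames (the weight lam is an assumed parameter):

import numpy as np

def smooth_tone_curve(curve, prev_curve, lam=0.7):
    # Exponentially smooth the 256-entry tone mapping LUT over time.
    if prev_curve is None:
        return curve
    blended = lam * prev_curve.astype(np.float64) + (1 - lam) * curve
    return np.round(blended).astype(np.uint8)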

Results for the ACB Step:

We now compare the enhancement results of different intra-frame enhancement methods. Since the skin colors of the two people differ greatly, the learning-based method cannot properly enhance both faces simultaneously: when it enhances the face of one person, the quality of the other person's face becomes unsatisfactory. Although the factor-based strategy can improve the trade-off between the two faces, it is still less effective at creating a tone mapping curve that enhances both ROIs, and the face of the person on the right remains dark. Comparatively, our ACB algorithm selects the piecewise strategy, which calculates a fused piecewise global tone mapping function based on both regions, achieving satisfactory quality for both faces. Moreover, although the original videos from the two parties may differ greatly in illumination conditions, the enhancement results of different users are more coherent with our algorithm.

Statistics and color correction:

The goal of our work is to make a synthetic image take on another image's look and feel. More formally, this means that we would like some aspects of the distribution of data points in lαβ space to transfer between images. For our purposes, the mean and standard deviation along each of the three axes suffice. Thus, we compute these measures for both the source and target images; note that we compute the means and standard deviations for each axis separately in lαβ space.

Because we assume that we want to transfer one image's appearance to another, it is possible to select source and target images that do not work well together. The quality of the result depends on the images' similarity in composition. For example, if the synthetic image contains much grass and the photograph has more sky in it, then we can expect the transfer of statistics to fail. We can easily remedy this issue. First, we select separate swatches of grass and sky and compute their statistics, leading to two pairs of clusters in lαβ space (one pair for the grass swatches and one for the sky swatches). Then, we convert the whole rendering to lαβ space. We scale and shift each pixel in the input image according to the statistics associated with each of the cluster pairs. Then, we compute the distance to the center of each of the source clusters and divide it by the cluster's standard deviation σc,s. This division is required to compensate for different cluster sizes. We blend the scaled and shifted pixels with weights inversely proportional to these normalized distances, yielding the final color. This approach naturally extends to images with more than two clusters. We could devise other metrics to weight the relative contributions of each cluster, but weighting based on scaled inverse distances is simple and worked reasonably well in our experiments.

Another possible extension would be to compute higher moments such as skew and kurtosis, which measure the lopsidedness of a distribution and the thickness of its tails, respectively. Imposing such higher moments on a second image would shape its distribution of pixel values along each axis to more closely resemble the corresponding distribution in the first image. While the mean and standard deviation alone appear to suffice for practical results, the effect of including higher moments remains an interesting question.
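A compact sketch of this per-axis statistics transfer follows, using the RGB-to-lαβ conversion matrices published by Reinhard et al. [5]; images are assumed to be float RGB arrays in [0, 1]:

import numpy as np

# RGB -> LMS -> l-alpha-beta matrices from Reinhard et al. [5].
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
          np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]])

def rgb_to_lab(rgb):
    # Log-space LMS gives the decorrelated l-alpha-beta axes.
    lms = np.clip(rgb.reshape(-1, 3) @ RGB2LMS.T, 1e-6, None)
    return np.log10(lms) @ LMS2LAB.T

def lab_to_rgb(lab, shape):
    lms = 10.0 ** (lab @ np.linalg.inv(LMS2LAB).T)
    return (lms @ np.linalg.inv(RGB2LMS).T).reshape(shape)

def color_transfer(source, target):
    # Shift and scale each axis of the source so its mean and standard
    # deviation match the target's, computed separately per axis.
    src, tgt = rgb_to_lab(source), rgb_to_lab(target)
    out = (src - src.mean(0)) * (tgt.std(0) / src.std(0)) + tgt.mean(0)
    return lab_to_rgb(out, source.shape)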

Stretching and clipping:

From a deeper analysis of the intensity histogram before and after the proposed local correction, we find that despite a better occupation of the grey levels, the overall contrast enhancement is not satisfying. Also, especially for low-quality images with compression artifacts, the noise in the darker zones is enhanced. These effects, which make the processed image grayish, are intrinsic to the mathematical formulation adopted for the local correction. To overcome this undesirable loss in image quality, this section introduces a further contrast enhancement step, consisting of a stretching and clipping procedure, and an algorithm to increase the saturation. The main characteristic of the contrast procedure we propose is that it is image dependent: stretching, and thus clipping, is guided by the properties of the image histogram and is not fixed a priori. To determine the strength of the stretching, and thus the number of bins to be clipped, we consider how the darker regions occupy the intensity histogram before and after the LCC algorithm. The idea is that pixels belonging to a dark area, such as a dark object, that usually occupy a narrow and peaked group of bins at the beginning of the intensity histogram will populate more or less the same bins after a contrast enhancement algorithm. On the other hand, pixels of an underexposed background, which create a more spread histogram peak, must populate an even more widespread region of the histogram after the same algorithm. To evaluate how these dark pixels are distributed, the algorithm proceeds as follows:
1. The RGB input image is converted to the YCbCr space. This space is chosen because it is common in the case of JPEG compression, but other spaces where the luminance and the chrominance components are separated can be adopted.
2. The percentage of the dark pixels in the image is computed.
3. In the case of dark regions that must be recovered, this percentage of pixels, which experience suggests should be set at 30%, generally falls in the first bins, under a narrow peak of the histogram, together with the rest of the dark pixels; thus most of these pixels are repositioned at almost their initial values.
4. If there are no dark pixels, the stretching is done to obtain a clipping of 0.2% of the darker pixels.
5. In any case, the maximum number of bins to be clipped is set to 50. For the brighter pixels, the stretching is done to obtain a clipping of 0.2%, with a maximum of 50 bins.
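On the luminance channel of the YCbCr image, this stretching and clipping rule can be sketched as below; for brevity the 30% dark-pixel test of step 3 is omitted, so only the 0.2% clipping bounded by 50 bins is shown:

import numpy as np

def stretch_and_clip(y, dark_pct=0.2, bright_pct=0.2, max_bins=50):
    # Histogram-driven stretch: find the grey levels below/above which
    # 0.2% of pixels fall, cap the clip at 50 bins on each side, then
    # linearly stretch the remaining range to [0, 255].
    hist = np.bincount(y.ravel(), minlength=256)
    cum = np.cumsum(hist) / hist.sum() * 100.0
    low = min(int(np.searchsorted(cum, dark_pct)), max_bins)
    high = max(int(np.searchsorted(cum, 100.0 - bright_pct)), 255 - max_bins)
    stretched = (y.astype(np.float64) - low) * 255.0 / max(high - low, 1)
    return np.clip(stretched, 0, 255).astype(np.uint8)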

Quality Assessment:

The improvement in images after enhancement is often very difficult to measure. To date, no objective criteria to assess image enhancement capable of giving a meaningful result for every image exist in the literature. Full reference quality assessment metrics cannot be applied, since these methods evaluate the departure of the output image with respect to an original one that is supposed to be free of distortions. In our present case of contrast enhancement, no such original “distortion-free” images exist, and the elaborated images should be an enhanced version of the input.

CONCLUSION

In this paper, we proposed a new A+ECB algorithm for video enhancement. The proposed method analyzed features from different ROIs and created a “global” tone mapping curve for the entire frame such that the intra frame quality of a frame can be properly enhanced. Furthermore, new inter frame constraints were introduced in the proposed algorithm to further improve the inter frame qualities among frames. Experimental results demonstrated the effectiveness of our algorithm.

References

  1. M. Sun, Z. Liu, J. Qiu, Z. Zhang, and M. Sinclair, “Active lighting for video conferencing,” IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 12, pp. 1819–1829, Dec. 2009.
  2. G. R. Shorack, Probability for Statisticians. New York: Springer, 2000.
  3. J. A. Stark, “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Trans. Image Process., vol. 9, no. 5, pp. 889–896, May 2000.
  4. T. Arici, S. Dikbas, and Y. Altunbasak, “A histogram modification framework and its application for image contrast enhancement,” IEEE Trans. Image Process., vol. 18, no. 9, pp. 1921–1935, Sep. 2009.
  5. E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, “Color transfer between images,” IEEE Comput. Graphics Applicat., vol. 21, no. 5, pp. 34–41, Sep.–Oct. 2001.
  6. P. Viola and M. Jones, “Robust real-time object detection,” in Proc. 2nd Int. Workshop Statist. Computat. Theories Vision, 2001, pp. 137–154.
  7. C. Shi, K. Yu, J. Li, and S. Li, “Automatic image quality improvement for videoconferencing,” in Proc. ICASSP, 2004, pp. 701–704.
  8. Z. Liu, C. Zhang, and Z. Zhang, “Learning-based perceptual image quality improvement for video conferencing,” in Proc. ICME, 2007, pp. 1035–1038.
  9. W.-C. Chiou and C.-T. Hsu, “Region-based color transfer from multireference with graph-theoretic region correspondence estimation,” in Proc. ICIP, 2009, pp. 501–504.
  10. Y. W. Tai, J. Jia, and C. K. Tang, “Local color transfer via probabilistic segmentation by expectation-maximization,” in Proc. CVPR, 2005, pp. 747–754.
  11. R. Schettini and F. Gasparini, “Contrast image correction method,” J. Electron. Imag., vol. 19, no. 2, pp. 5–16, 2010.
  12. S. Battiato and A. Bosco, “Automatic image enhancement by content dependent exposure correction,” EURASIP J. Appl. Signal Process., vol. 2004, no. 12, pp. 1849–1860, 2004.
  13. G. D. Toderici and J. Yagnik, “Automatic, efficient, temporally-coherent video enhancement for large scale applications,” in Proc. ACM Multimedia, 2009, pp. 609–612.
  14. M. Swain and D. Ballard, “Color indexing,” Int. J. Comput. Vision, vol. 7, no. 1, pp. 11–32, 1991.