ISSN ONLINE(2278-8875) PRINT (2320-3765)


Specular Edge Weighting for Local Color Correction of Images

Ancy Varghese1, Darsana Vijay2
Department of Electronics and Communication, Ilahia College of Engineering and Technology, Muvattupuzha, India

Visit for more related articles at International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering

Abstract

Color constancy algorithms make images independent of the illuminant. Most algorithms achieve this by assuming a uniform light source, but this assumption is violated in many real-world images, which degrades the performance of computer vision applications such as object tracking and object recognition. Here, a specular edge weighting scheme, which operates on the derivative structure of the image, is applied to grid-sampled patches of the image to achieve local color correction. The estimates from the patches are combined and back-projected onto the input image to obtain a per-pixel illuminant estimate, and color correction is then performed with a diagonal model using these pixel estimates. Specular edges are selected for the weighting scheme because they carry more information about the illuminant color. Experiments show that this method reduces the error compared with existing local correction methods that use derivative image structures.

Keywords

Color constancy, Specular edges, Illuminant, Illuminant independency

INTRODUCTION

Objects do not have intrinsic color; the color the human eye perceives is the result of reflectance, and it depends on the light energy striking the object. Images of the same object may therefore differ in color when taken under different illumination. In computer vision applications such as object tracking, object recognition and surveillance, object color has notable importance, and the color variation caused by a changing illuminant leads to false results. Color constancy algorithms are used to make the color of an object's image constant: their aim is to remove the color variations caused by different light source environments. One way to obtain a color-constant image is to extract invariant features; another is to transform the image into a standard form before applying computer vision algorithms. This second approach is called white balancing, which means removing the color of the incident light from the image.
Different color constancy algorithms have been developed over the years to estimate the color of the light source. White patch retinex [6] is the earliest color constancy method, based on the assumption that the light source color varies smoothly across an image, with no sharp transitions between adjacent pixels; the light source color it returns is the color of the brightest patch in the image. The grey world algorithm [7] estimates the light source color as the average color of the pixels in the image. It performs better than retinex because it considers the color values of the whole image rather than a single patch. Methods that use information obtained in a learning phase also exist; the gamut-constrained algorithm [4] is one example. Here a map is created to convert the observed gamut to a canonical gamut, where a gamut is the set of colors possible under a specific light source. The gamut method may fail due to specular highlights in an image under varying illumination, so a later algorithm [8] avoids specular highlights by normalizing the RGB color vectors, applying the gamut approach in 2D instead of 3D. Finlayson and Hordley propose a color-by-correlation technique [14] that correlates the original image colors with a set of possible illuminants; the resulting likelihood estimate of the illuminant is used for color correction.
All of these color constancy methods work on the raw image values. Van de Weijer, Gevers and Gijsenij [3] introduced a color constancy method that works on image derivatives, named the grey edge hypothesis. It uses the derivative structure of the image to calculate the illuminant, based on the assumption that edges carry the most variant information, so the color values at edges may give a better estimate than any other part of the image. Higher-order grey edge hypotheses are possible depending on the order of differentiation used. The drawback of all these color constancy techniques is that they assume the light source is uniform across the image, an assumption that is violated in many images.
Many images contain two or more illuminants, and the single-light-source assumption then introduces large errors. Outdoor images are a typical example: they contain regions lit directly by sunlight and shadow regions that receive much less light. To account for two or more light sources, Gijsenij, Lu and Gevers [1] proposed a new algorithm that applies the existing color constancy methods locally rather than globally, and is applicable to images containing one, two or more illuminants. They evaluated different sampling strategies before applying color constancy to the images; among these, grid sampling is the best.
In this paper, a combination of grid sampling and the specular edge weighting scheme of the grey edge hypothesis is used. Photometric edge weighting improves color constancy by exploiting the specular edges in an image. In grid sampling, the image is divided into small patches, and the specular-edge-weighted color constancy scheme is applied to each patch to obtain a per-patch illuminant estimate. The patch estimates are then combined to reduce errors, and a pixel-wise illuminant estimate is obtained by back projection. Finally, once the pixel-wise illuminant estimates are available, the von Kries diagonal model [15] is used to correct the color of the image.
This paper is organized as follows: Section II describes color constancy, Section III discusses the proposed specular edge weighting method, and Section IV compares it with other existing methods.

WHAT IS COLOR CONSTANCY?

Color constancy is the ability to recognize the color of an object irrespective of the color of the light source illuminating the scene. The way color constancy is achieved is briefly explained below.

A. Image color

Image color is the combination of three factors: (1) the color of the light source illuminating the scene, (2) the reflectance of the object surface, and (3) the camera sensor response. Images of the same object taken by the same camera under different light sources may differ in color because of the variation of the illuminant. The color of an image can be represented by
f(x) = \int_{\omega} e(\lambda)\, s(x, \lambda)\, c(\lambda)\, d\lambda

where e(\lambda) is the spectrum of the light source, s(x, \lambda) the surface reflectance at location x and c(\lambda) the camera sensitivity, integrated over the visible spectrum \omega. The color of the light source as observed by the camera is

e = \int_{\omega} e(\lambda)\, c(\lambda)\, d\lambda
With this estimate of the light source, a new image can be created that is independent of the light source. Most color constancy algorithms, however, work under the uniform-illuminant assumption, so only a single illuminant estimate is obtained even when two or more light sources are present.

B. Color constancy: Single light source effect

Most color constancy methods work on the principle that a single light source illuminates the entire scene. For example, the maximum-response patch is selected for color correction in white patch retinex [6], whereas grey world [7] uses the mean of the image pixel values. All other methods make similar assumptions to compute a single color value for the illuminating light source, and this single estimate per image is used to correct the entire image globally. The general framework for global correction used in most color constancy methods is obtained as follows. First, integrate the surface reflectance over the entire image.
\frac{1}{\int dx} \int s(x, \lambda)\, dx = G
The value of G lies between one for full reflectance (white) and zero for no reflectance (black). For better results, full reflectance is assumed, i.e. G = 1. Next, the average image color is found by integrating the colors of the image.
\frac{1}{\int dx} \int f(x)\, dx = G \int_{\omega} e(\lambda)\, c(\lambda)\, d\lambda = G\, e
where e is the color of the light source as observed through the camera sensitivities. Taking a Minkowski norm p over the image and smoothing with a Gaussian filter of standard deviation σ yields a general equation that covers about five different color constancy instantiations.
\left( \int \left| \frac{\partial^{n} f^{\sigma}(x)}{\partial x^{n}} \right|^{p} dx \right)^{1/p} = k\, e^{n,p,\sigma}

Different choices of (n, p, σ) give the known instantiations: grey world (n = 0, p = 1, σ = 0), white patch (n = 0, p = ∞, σ = 0), general grey world (n = 0, p, σ), first-order grey edge (n = 1, p, σ) and second-order grey edge (n = 2, p, σ).
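The framework above can be sketched in code. The following is a minimal NumPy illustration (not the authors' implementation) of the instantiations without Gaussian smoothing, i.e. assuming σ = 0; the function name and finite-difference derivative are my own choices.

```python
import numpy as np

def estimate_illuminant(img, n=0, p=1):
    """Estimate the light-source colour of an RGB image (H x W x 3 floats).

    n -- derivative order (0 = pixel values, 1 = first-order edges)
    p -- Minkowski norm (np.inf gives the maximum response / white patch)
    Returns a unit-length RGB illuminant estimate.
    """
    f = img.astype(float)
    if n == 1:
        # simple finite-difference gradient magnitude per channel
        gx = np.diff(f, axis=1, prepend=f[:, :1])
        gy = np.diff(f, axis=0, prepend=f[:1, :])
        f = np.sqrt(gx**2 + gy**2)
    f = np.abs(f).reshape(-1, 3)
    if np.isinf(p):
        e = f.max(axis=0)                      # white patch: brightest response
    else:
        e = (f**p).mean(axis=0) ** (1.0 / p)   # Minkowski average
    return e / np.linalg.norm(e)
```

Grey world corresponds to `estimate_illuminant(img, 0, 1)`, white patch to `estimate_illuminant(img, 0, np.inf)`, and first-order grey edge to `estimate_illuminant(img, 1, 1)`.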

C. Color constancy: Multiple light source effect

Recently, Gijsenij, Lu and Gevers [1] proposed a method for obtaining two illuminant estimates from an image. In this method, they apply the previously described algorithms to small sampled patches of the image, using one of three sampling strategies: grid sampling, key point sampling and segmentation sampling. Of these, grid sampling is the best because it covers the entire image without loss of information. The small patches formed after sampling are used to estimate the light source color, giving a single estimate per patch. These values are then combined into two groups, and the average of each group is taken as one of the two illuminant values of the image. The two estimates are back-projected onto the image to obtain a pixel-wise estimate. This method, like the methods with the uniform-light-source assumption, uses the von Kries model [15] for color correction.

D. Color correction

The image taken under an unknown illuminant is converted into a standard form using the estimated value of the light source. For this, the von Kries diagonal model is used.
f_c(x) = D_{u,c}\, f_u(x), \qquad D_{u,c} = \mathrm{diag}\!\left( \frac{e_R^c}{e_R^u},\ \frac{e_G^c}{e_G^u},\ \frac{e_B^c}{e_B^u} \right)

where f_u is the image taken under the unknown illuminant u, f_c its appearance under the canonical illuminant c, and the diagonal entries are the per-channel ratios of the canonical to the estimated illuminant.
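As an illustration, the diagonal correction amounts to a per-channel rescale. The sketch below assumes the canonical light is white, (1,1,1)/√3, and that `e` is the unit-length estimate; the function name is my own.

```python
import numpy as np

def von_kries_correct(img, e):
    """Map an image to its appearance under a canonical white light using
    the von Kries diagonal model: each channel is scaled by 1/e_c, where
    e is the (unit-length) estimated light-source colour."""
    e = np.asarray(e, dtype=float)
    d = (1.0 / np.sqrt(3.0)) / e       # canonical white = (1,1,1)/sqrt(3)
    return img * d                     # broadcasts over H x W x 3
```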

SPECULAR EDGE WEIGHTING ON SAMPLED IMAGE

The proposed method applies photometric weighting of the specular edges of the image [2] to small sampled patches, yielding two illuminant estimates per image. The steps of the algorithm are given below; the overall procedure is shown in Fig. 2.

A. Grid sampling

The input image is divided into equal-sized small sections called patches: first row-wise, then column-wise. The main advantage of grid sampling over other sampling methods, such as key point sampling and segmentation sampling, is that it covers the entire image with no information loss. Different patch sizes are possible; results show that smaller patches give lower error, but very small patches make evaluating the estimation algorithm over each patch very slow, so a medium patch size is selected.
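The row-then-column division above can be sketched as follows; the function name and default patch size are illustrative choices, not values from the paper.

```python
import numpy as np

def grid_patches(img, patch=32):
    """Split an H x W x 3 image into patch x patch tiles, row-wise then
    column-wise; tiles at the right/bottom edge may be smaller, so the
    whole image is covered with no information loss."""
    H, W = img.shape[:2]
    return [img[r:r + patch, c:c + patch]
            for r in range(0, H, patch)
            for c in range(0, W, patch)]
```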

B. Specular edge weighting estimation method

The grey edge hypothesis uses the derivative structure of the image to estimate the illuminant, based on the observation that edges are the most informative parts of an image. An image contains several edge types, such as shadow, material and specular edges. A material edge separates an object from its background, a shadow (or geometry) edge separates two different surface geometries, and a specular edge separates two light sources. A weighting scheme is applied to the image derivatives so that the specular edges can be exploited for illuminant estimation.
\left( \int \left( v(E(x)) \right)^{k_1} \left| \frac{\partial E^{\sigma}(x)}{\partial x} \right|^{p} dx \right)^{1/p} = k\, e
where v(E)^{k_1} is the weight assigned to every value of E, and k_1 controls how strongly the weight acts on the edges. The weight is computed from the specular variant: the projection of the image derivative onto the light source color, which measures the energy of the derivative along the specular edge direction.
S_x = \left( E_x \cdot \hat{e} \right) \hat{e}
where E_x is the derivative of the image and \hat{e} is the light source color, which is initially assumed to be a white light source. The specular edge weight is the ratio of the specular variant to the image derivative.
v(E) = \frac{\left| S_x \right|}{\left| E_x \right|}
Because an assumption about the light source color is made when computing the specular variant, an iterative algorithm is used to update the weights: the weighted grey edge estimate is recomputed until it converges. In previous work this weighting scheme was used for global color correction of images; in this work it is applied for local color correction.
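The iteration can be sketched as below. This is a simplified stand-in, not the authors' implementation: it uses plain finite differences instead of Gaussian derivatives, and the parameter defaults (k1, p, iteration count, tolerance) are illustrative assumptions.

```python
import numpy as np

def specular_weighted_estimate(img, k1=2, p=1, iters=10, tol=1e-4):
    """Iterative specular-edge-weighted grey edge (sketch).

    Starts from a white-light guess, projects the image derivative onto
    the current illuminant direction (the 'specular variant'), weights
    each edge by (|specular variant| / |derivative|)^k1, and re-estimates
    until the illuminant estimate stops changing."""
    f = img.astype(float)
    gx = np.diff(f, axis=1, prepend=f[:, :1])
    gy = np.diff(f, axis=0, prepend=f[:1, :])
    grad = np.sqrt(gx**2 + gy**2).reshape(-1, 3)   # per-pixel edge energy
    mag = np.linalg.norm(grad, axis=1) + 1e-12
    e = np.ones(3) / np.sqrt(3.0)                  # initial white light
    for _ in range(iters):
        spec = grad @ e                            # projection onto e
        w = (np.abs(spec) / mag) ** k1             # specular edge weight
        e_new = ((w[:, None] * grad**p).mean(axis=0)) ** (1.0 / p)
        e_new /= np.linalg.norm(e_new)
        done = np.linalg.norm(e_new - e) < tol
        e = e_new
        if done:
            break
    return e
```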

C. Merging of estimates

The previous estimation step gives a single color value per patch. The median of these estimates is found, and the values are split into two groups: one containing the values below the median and the other the values above it. The average of each group is taken as one of the two illuminant values of the image, which are then used to correct its color.
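The grouping step can be sketched as follows. The paper does not state which scalar the median split is taken over, so splitting on the red chromaticity is an illustrative assumption, as is the function name.

```python
import numpy as np

def merge_estimates(estimates):
    """Split per-patch illuminant estimates into two groups around the
    median of a scalar key (here: red chromaticity, an assumption) and
    average each group, giving the two scene illuminants."""
    est = np.asarray(estimates, dtype=float)
    key = est[:, 0] / est.sum(axis=1)        # red chromaticity per patch
    mid = np.median(key)
    low, high = est[key <= mid], est[key > mid]
    e1 = low.mean(axis=0)
    e2 = high.mean(axis=0) if len(high) else e1
    return e1 / np.linalg.norm(e1), e2 / np.linalg.norm(e2)
```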

D. Pixel wise illuminant calculation

The two illuminant values are projected onto the full image to determine which illuminant affects each pixel. A problem arises for images with no sharp boundary between the two light sources: if a patch is lit by both, a pixel inside it may be assigned a false illuminant estimate. To overcome this, a mask is created. The chromatic distance cd_j(x) between the estimated light source of the patch located at position x and the other illuminant is found, and from this the similarity between the estimated light source color and the other is computed as cd'_j(x).
A mask is then created to define the probability of each estimated light source, and this mask is used to compute the pixel-wise estimate.
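One possible reading of this back-projection step is sketched below. The paper's exact mask formula is not reproduced here; weighting each patch by the angular similarity of its own estimate to the two scene illuminants is an illustrative stand-in, and the function name is my own.

```python
import numpy as np

def pixelwise_illuminant(patch_estimates, patch_map, e1, e2):
    """Back-project two scene illuminants to pixels (sketch).

    patch_estimates -- (K, 3) per-patch illuminant estimates
    patch_map       -- (H, W) int array: patch index of each pixel
    Each patch gets a soft weight from the angular similarity of its own
    estimate to e1 and e2; every pixel then receives the blended
    illuminant of its patch."""
    def ang(a, b):
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        return np.arccos(np.clip(a @ b, -1.0, 1.0))
    # weight near 1 when the patch estimate is close to e1
    w = np.array([ang(p, e2) / (ang(p, e1) + ang(p, e2) + 1e-12)
                  for p in patch_estimates])
    blended = w[:, None] * e1 + (1 - w)[:, None] * e2    # (K, 3)
    return blended[patch_map]                            # (H, W, 3)
```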

E. Color correction

The aim of this step is to convert the image into the form it would have if taken under a white light, giving a standard form for evaluating the color of the image. The input image is multiplied by the von Kries [15] diagonal model built from the pixel-wise color estimates.

PERFORMANCE MEASUREMENT

The standard median angular error is used to evaluate the performance of the proposed local color correction method. The angular error is computed from the estimated light source value e_p and the ground-truth color of the light source e_l.
\epsilon = \cos^{-1}\left( \hat{e}_l \cdot \hat{e}_p \right)
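The angular error on normalized vectors can be computed as follows (reported in degrees here; the function name is my own):

```python
import numpy as np

def angular_error(e_est, e_true):
    """Angular error (degrees) between the estimated and ground-truth
    illuminants, computed on unit-normalized vectors."""
    a = e_est / np.linalg.norm(e_est)
    b = e_true / np.linalg.norm(e_true)
    return np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))
```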
Normalized vectors, indicated by the (^) sign, are used. The angular error is computed pixel by pixel over the image, and its average is taken as the measurement. Experiments were performed on a dataset of eight images, each illuminated by two light sources randomly selected from a set of 94 illuminant spectra, yielding 752 images. Previous local color correction methods were also applied to these images to compare their performance with the proposed methodology. Matlab R2012 was used as the platform, and the median angular error as the performance measure. Experimental results are shown in Table 1. Finally, the results of the proposed algorithm on some real images without ground-truth illumination, and on images from the database, are shown in Fig. 4 and Fig. 3 respectively.
The results show that among the local color correction methods, the grey world algorithm with grid sampling outperforms the other classical methods, because a small section of the image is used rather than the entire image and all pixel values within it contribute to the average. The white patch method and the general grey world hypothesis give almost identical errors, while the median angular error is highest for the first-order and second-order grey edge methods, showing that local correction based purely on derivative structure introduces more error. The proposed methodology reduces the error of the grey edge hypothesis through specular edge weighting: since specular edges lie along the orientation of the light sources, weighting them yields a more accurate illuminant estimate. Finally, the proposed algorithm is demonstrated on some real images without ground-truth illumination, processed under the assumption that exactly two light sources are present.

CONCLUSION

Specular edges have the property that they separate the light sources; as a result, the proposed methodology of grid sampling with specular edge weighting for local color correction outperforms the grey edge hypothesis. The method can also be used for scenes with uniform illumination. Its main drawback is a slow response: because it relies on an iterative algorithm, it takes more time. In scenes with many edges, however, it gives better results than the other methods. As future work, other edge types can also be included in the edge weighting.

References

  1. A. Gijsenij, R. Lu, and T. Gevers, "Color constancy for multiple light sources," IEEE Transactions on Image Processing, vol. 21, no. 2, Feb. 2012.
  2. A. Gijsenij, J. van de Weijer, and T. Gevers, "Improving color constancy by photometric edge weighting," IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2012.
  3. J. van de Weijer, T. Gevers, and A. Gijsenij, "Edge-based color constancy," IEEE Transactions on Image Processing, Sep. 2007.
  4. D. Forsyth, "A novel algorithm for color constancy," International Journal of Computer Vision, vol. 5, Aug. 1990.
  5. M. Ebner, Color Constancy. Hoboken, NJ: Wiley, 2007.
  6. E. H. Land, "The retinex theory of color vision," Sci. Amer., vol. 237, Dec. 1977.
  7. G. Buchsbaum, "A spatial processor model for object colour perception," J. Franklin Inst., vol. 310, no. 1, pp. 1–26, Jul. 1980.
  8. G. Finlayson, B. Funt, and K. Barnard, "Color constancy under varying illumination," in Proc. IEEE Int. Conf. Comput. Vis., 1995.
  9. S. Hordley, "Scene illuminant estimation: Past, present, and future," Color Res. Appl., vol. 31, no. 4, pp. 303–314, Aug. 2006.
  10. B. Funt, F. Ciurea, and J. McCann, "Retinex in MATLAB," J. Electron. Imag., vol. 13, no. 1, pp. 48–57, Jan. 2004.
  11. E. Hsu, T. Mertens, S. Paris, S. Avidan, and F. Durand, "Light mixture estimation for spatially varying white balance," ACM Trans. Graph., vol. 27, no. 3, pp. 70:1–70:7, Aug. 2008.
  12. G. Finlayson and S. Hordley, "Color constancy at a pixel," J. Opt. Soc. Amer. A, vol. 18, no. 2, pp. 253–264, Feb. 2001.
  13. M. Ebner, "Color constancy based on local space average color," Mach. Vis. Appl., vol. 20, no. 5, pp. 283–301, Jun. 2009.
  14. G. Finlayson and S. Hordley, "Color by correlation: A simple, unifying framework for color constancy," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, Nov. 2001.
  15. J. von Kries, "Influence of adaptation on the effects produced by luminous stimuli," Cambridge, MA: MIT Press, 1970.