The full dynamic range of human vision is not captured by ordinary hand-held consumer cameras. To extend the dynamic range of a captured scene, a set of images of the same scene is taken at different exposures. Short-exposure images preserve sharp detail in bright regions, while long-exposure images reveal detail in dark regions at the cost of saturated highlights. In image fusion, quality is an essential determinant of the value of the fused result, and the relevant notion of quality varies across applications. Spatial and spectral fidelity are considered the two most important quality indexes. Common image quality evaluation methods include standard deviation, mutual information, information entropy, average difference, spatial frequency, and dedicated image fusion quality assessment metrics. Many image fusion algorithms address different fusion schemes and input images with distortions, such as flash and no-flash image pairs. Common extraction approaches are gradient and luminance extraction. The study also covers color transfer between images and fusion performed as a simple aggregation of images. This paper presents a survey on the variational approach for the exposure fusion of images.
Keywords: image fusion, color transfer, Laplacian pyramid, variational methods
INTRODUCTION
Exposure fusion produces an enhanced image from a sequence of differently exposed images. [9] proposes fusing the multiple exposures directly into a high-quality, low dynamic range image ready for display (like a tone-mapped picture), termed exposure fusion, skipping the usual intermediate step of computing a high dynamic range image. The idea behind the approach is to compute a perceptual quality measure for each pixel in the multi-exposure sequence, encoding desirable properties such as saturation and contrast. Guided by these quality measures, the "good" pixels are selected from the sequence and combined to form the final result. Exposure fusion is similar to other image fusion techniques such as depth-of-field extension [19] and photomontage [1]. Burt et al. [4] had earlier proposed fusing a multi-exposure sequence, but in the context of general image fusion. According to a technical report by Tassio Knop de Castro and Alexander Chapiro, exposure fusion works as a weighted blending of photographs: the image data alone is sufficient, and no other information is needed. The properties of the input pixels are weighted and fused to produce a better-exposed and more aesthetically pleasing result. Image registration is used to align the photographs before fusion.
The given input sequence of images is treated as a stack. A weight is assigned to every pixel in each image of the stack; over-exposed and under-exposed pixels receive low weights. The averaging process then suppresses the low-weight pixels, reducing their effect on the final result, while the fusion concentrates on preserving the detailed regions. A naive per-pixel sketch of this weighted average is given below.
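For concreteness, the following minimal sketch (NumPy-based; the mid-gray weighting function is a hypothetical stand-in, and any per-pixel quality measure could be substituted) performs a naive per-pixel weighted average over an exposure stack:

```python
import numpy as np

def naive_weight(image):
    # Hypothetical weight: favor pixels near mid-gray (0.5), so
    # over- and under-exposed pixels receive low weights.
    gray = image.mean(axis=2)
    return np.exp(-0.5 * ((gray - 0.5) / 0.2) ** 2)

def naive_fusion(stack):
    # stack: list of float images in [0, 1], each of shape (H, W, 3)
    weights = np.stack([naive_weight(img) for img in stack])  # (N, H, W)
    weights /= weights.sum(axis=0) + 1e-12                    # normalize per pixel
    images = np.stack(stack)                                  # (N, H, W, 3)
    return (weights[..., None] * images).sum(axis=0)          # weighted average
```

Such naive per-pixel averaging tends to produce seams where the weights change abruptly, which motivates the multiresolution blending of Section D.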
Visual aspects such as clear edges, smooth curves, and a noise-free result are important. The color channels are processed separately, each multiplied by the weights before blending. High dynamic range video reconstruction is a more challenging task because, on the hardware side, it needs a programmable camera and, on the software side, the data is dynamic. A variational approach can fuse, without human intervention, an exposure-bracketed pair of images into a single image that reflects the best properties of each one.
A. FUSION OF DIFFERENTLY EXPOSED IMAGES
Ron Rubinstein proposed that, when capturing a high dynamic range scene, a sequence is obtained by varying the exposure settings of the sensor. The captured scene is thus represented as a series of differently exposed images, each covering a different sub-band of the complete dynamic range. The fusion algorithm is based on the Laplacian pyramid representation, which has several advantages over wavelet representations.
B. CATEGORIZATION OF IMAGE FUSION TECHNIQUES
1) Signal-level fusion: signals from multiple sensors are combined to create a new signal with a better signal-to-noise ratio than the original signals.
2) Pixel-level fusion: performed on a pixel-by-pixel basis, generally to support segmentation. The information associated with each pixel of the fused image is determined from the corresponding pixels in the source images, improving the performance of subsequent image processing tasks.
3) Feature-level fusion: requires the extraction of objects recognized in the various data sources, i.e., salient features that depend on their environment, such as pixel intensities, edges, or textures. Matching features from the input images are then fused.
4) Decision-level fusion: merges information at a higher level of abstraction, combining the results from multiple algorithms to yield a final fused decision. The input images are processed individually for information extraction, and the obtained information is then combined by applying decision rules to reinforce a common interpretation. (A toy sketch contrasting pixel-level and decision-level fusion follows this list.)
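As a toy illustration of the difference between these levels (a sketch, not drawn from the surveyed papers), pixel-level fusion combines raw intensities, while decision-level fusion combines per-image decisions such as binary detection maps:

```python
import numpy as np

def pixel_level_fusion(images):
    # Combine raw intensities directly, e.g. a per-pixel maximum.
    return np.maximum.reduce(images)

def decision_level_fusion(decision_maps):
    # Combine per-image binary decisions, e.g. by majority vote.
    votes = np.stack(decision_maps).sum(axis=0)
    return votes > len(decision_maps) / 2
```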
C. QUALITY MEASURES
The stack of images contains flat, colorless regions; this is due to over-exposed and under-exposed coverage. The following per-pixel measures identify well-exposed content:
1) Contrast: a Laplacian filter is applied to the grayscale version of each image, and the absolute filter response yields the contrast measure C. Extended depth-of-field is measured in the same way.
2) Saturation: the saturation measure S is computed as the standard deviation across the R, G, and B channels of each pixel; the desaturated colors of the long-exposed image receive low values.
3) Exposedness: each color channel intensity i is scored by how well exposed it is, based on its closeness to 0.5 in the implementation. (A sketch of these three measures follows the list.)
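A minimal sketch of these three measures in the spirit of [9] (NumPy/SciPy; the Gaussian width sigma = 0.2 for exposedness is an assumption matching the commonly used implementation):

```python
import numpy as np
from scipy.ndimage import laplace

def quality_weights(image, sigma=0.2):
    # image: float RGB image in [0, 1], shape (H, W, 3)
    gray = image.mean(axis=2)
    contrast = np.abs(laplace(gray))              # C: Laplacian filter response
    saturation = image.std(axis=2)                # S: std-dev across R, G, B
    # E: per-channel closeness to 0.5, combined multiplicatively
    exposedness = np.prod(
        np.exp(-0.5 * ((image - 0.5) / sigma) ** 2), axis=2)
    return contrast * saturation * exposedness + 1e-12
```

The product of the three measures gives one scalar weight map per input image, which is then normalized across the stack before blending.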
D. LAPLACIAN PYRAMID
Gradient methods convert images to gradient fields first, apply the blending operation there, and reconstruct the final image from the resulting gradients. The Laplacian pyramid instead implements a "pattern-selective" approach: the composite image is constructed not a pixel at a time, but a feature at a time. There are two modes of combination, averaging and selection. In the selection mode, the most salient component patterns are chosen and the less salient ones are discarded. The steps of Laplacian pyramid fusion are: i) image size checking, ii) construction of the pyramid levels, iii) pyramid-level fusion, iv) final level analysis, and v) reconstruction of the fused image. The technique decouples the weighting, driven by the quality measures, from the actual pyramid content. We take N images and N normalized weight maps that act as alpha masks. Let the l-th level of the Laplacian pyramid decomposition of an image A be denoted L{A}l, and the l-th level of the Gaussian pyramid of an image B be denoted G{B}l. Then we blend the coefficients (pixel intensities at the different pyramid levels):
L{R}l(i, j) = Σk=1..N G{Wk}l(i, j) · L{Ik}l(i, j)
i.e., each level l of the resulting Laplacian pyramid is computed as a weighted average of the original Laplacian decompositions at level l, with the l-th level of the Gaussian pyramid of each weight map Wk serving as the weights. Finally, the pyramid L{R}l is collapsed to obtain the fused image R. A code sketch of this blending follows.
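A compact sketch of this multiresolution blend using OpenCV pyramid operations (cv2.pyrDown/cv2.pyrUp; input dimensions are assumed to be multiples of 2^levels so the pyramid levels align):

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
          for i in range(levels - 1)]
    lp.append(gp[-1])  # coarsest level stays Gaussian
    return lp

def pyramid_fusion(images, weights, levels=5):
    # images: list of float32 arrays (H, W, 3);
    # weights: per-image (H, W) maps, already normalized across the stack
    fused = None
    for img, w in zip(images, weights):
        lp = laplacian_pyramid(img, levels)          # L{Ik}l
        gp_w = gaussian_pyramid(w, levels)           # G{Wk}l
        blended = [l * g[..., None] for l, g in zip(lp, gp_w)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    # Collapse the fused pyramid L{R}l to obtain R
    result = fused[-1]
    for level in reversed(fused[:-1]):
        result = cv2.pyrUp(result, dstsize=level.shape[1::-1]) + level
    return result
```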
E. DARK FLASH PHOTOGRAPHY
[11] considered a multi-spectral version of the flash/no-flash technique introduced by [5] and [8]. [5] focused on the removal of flash artifacts but did not apply their method to ambient images containing significant noise, unlike [8]. The two approaches are similar in that both use a cross-bilateral (also known as joint-bilateral) filter and detail transfer. However, [8] denoises the ambient image while adding detail from the flash image, whereas [5] alters the flash image using the ambient tones.
[11] proposed capturing the pictures using infra-red and ultra-violet flash lights. The two images used are a dark flash image and an ambient-illumination image. A sketch of the cross-bilateral filtering step is given below.
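As a sketch of the cross-bilateral (joint-bilateral) filtering these methods rely on, OpenCV's contrib module exposes a joint bilateral filter (this requires opencv-contrib-python; the file names and filter parameters below are illustrative assumptions, not values from [11]):

```python
import cv2

# Noisy ambient image and sharp (dark-)flash image of the same scene,
# assumed already registered; both uint8 BGR.
ambient = cv2.imread("ambient.png")
flash = cv2.imread("flash.png")

# Edge structure is taken from the flash image (the "joint" guide),
# while intensities are smoothed in the noisy ambient image.
denoised = cv2.ximgproc.jointBilateralFilter(
    flash, ambient, d=9, sigmaColor=25, sigmaSpace=9)
```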
F. COLOR SPACES FOR COLOR TRANSFERS
PCA (Principal Component Analysis) is used copiously in all forms of analysis, from the neurosciences to computer graphics, because it helps reduce a complex data set to a lower dimension. [9] proposed that image color transfer can be performed by decomposing an image's 3D color distribution into three separate 1D distributions. The study of statistical regularities in images is known as image statistics. Applied to natural images, PCA shows that the three principal components obtained closely correspond to the opponent color channels found in the human visual system, specifically in the retina. PCA decorrelates the data: transforming natural images into the resulting color space, Lαβ, yields largely independent channels.
Comparable results can be obtained by simply choosing the CIELAB space. To verify the decorrelation, the covariance between channels is measured in each candidate color space: each image is represented as a matrix of 128x128 pixels and converted into the different spaces, then the centre patch of each image is selected and its pixels are arranged to form an Nx3 matrix. A PCA sketch over such a matrix is given below.
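A minimal sketch of this channel-covariance analysis (NumPy; the random image is a stand-in for real data):

```python
import numpy as np

def channel_pca(pixels):
    # pixels: (N, 3) matrix, one row per pixel in the chosen color space
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)       # 3x3 channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # principal axes of the colors
    decorrelated = centered @ eigvecs          # channels are now uncorrelated
    return cov, decorrelated

# Example: centre 64x64 patch of a 128x128 RGB image, flattened to Nx3
image = np.random.rand(128, 128, 3)            # stand-in for a real image
patch = image[32:96, 32:96].reshape(-1, 3)
cov, decorrelated = channel_pca(patch)
print(np.round(cov, 4))  # off-diagonal terms reveal inter-channel correlation
```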
G. COLOR TRANSFER BETWEEN IMAGES
[9] addressed prevailing and adverse color casts, such as the yellow cast in photos taken under fluorescent illumination, with a general form of color correction that borrows one image's color characteristics from another.
For example, in RGB space the values of the red, green, and blue channels are strongly correlated for most pixels. This implies that if we want to change the appearance of a pixel's color in a coherent way, we must modify all color channels in tandem.
The decorrelation process must handle RGB images, which are often of unknown phosphor chromaticity. As mentioned earlier, Ruderman et al. derived the perception-based color space Lαβ. The coordinates are first transformed into the LMS cone space in two steps, using the white chromaticity x = X/(X + Y + Z) = 0.333 and y = Y/(X + Y + Z) = 0.333, so that the transformation maps X = Y = Z = 1 to R = G = B = 1.
The combined RGB-to-LMS transform is then:
L = 0.3811 R + 0.5783 G + 0.0402 B
M = 0.1967 R + 0.7244 G + 0.0782 B
S = 0.0241 R + 0.1288 G + 0.8444 B
According to Gooch et al., non-photorealistic rendering (NPR) shading models can be reconstructed with this technique; the model needs colors from different source sets. An image of a 3D Michelangelo model was automatically generated this way. (A sketch of the complete statistics-matching transfer follows.)
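Putting the pieces of this section together, the following sketch (NumPy) performs the statistics-matching transfer of [9]: convert both images to Lαβ via log-LMS, impose the target's per-channel mean and standard deviation on the source, and convert back. The matrix values are those reported in [9]; treating inputs as float RGB in (0, 1] is an assumption.

```python
import numpy as np

RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2RGB = np.array([[ 4.4679, -3.5873,  0.1193],
                    [-1.2186,  2.3809, -0.1624],
                    [ 0.0497, -0.2439,  1.2045]])
A = np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]], dtype=float)
D = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)])

def rgb_to_lab(rgb):
    # rgb: (H, W, 3) floats in (0, 1]; log-LMS, then decorrelating rotation
    lms = np.log10(np.clip(rgb @ RGB2LMS.T, 1e-6, None))
    return lms @ (D @ A).T

def lab_to_rgb(lab):
    lms = 10 ** (lab @ np.linalg.inv(D @ A).T)
    return lms @ LMS2RGB.T

def color_transfer(source, target):
    # Impose target's per-channel mean/std (in Lab) onto source
    s, t = rgb_to_lab(source), rgb_to_lab(target)
    s = (s - s.mean(axis=(0, 1))) / (s.std(axis=(0, 1)) + 1e-12)
    return lab_to_rgb(s * t.std(axis=(0, 1)) + t.mean(axis=(0, 1)))
```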
H. GAMMA CORRECTIONS
The per-channel means and standard deviations used for the transfer are calculated in the chosen color space from the RGB images. Gamma correction behaves simply in the logarithmic channels, since log(x^γ) = γ log x.
The various fusion methods in existence include:
• Principal Component Analysis (PCA)
• Laplacian Pyramid
• Filter-Subtract-Decimate Hierarchical Pyramid (FSD)
• Ratio Pyramid
• Gradient Pyramid
• Discrete Wavelet Transform (DWT)
• Shift-Invariant Discrete Wavelet Transform (SIDWT)
I. PERFORMANCE EVALUATION
CONCLUSION
Exposure fusion yields a result with a high intensity level; the color and gradient handling still need enhancement for the other methods and for varied applications. Depending on the type of application and the needs of the user, the relevant image features vary. This paper gives a comprehensive analysis of the various image fusion techniques available. The specific technique is applied, with variations, on a case-by-case basis, and no technique is superior to the others independently of the application.
ACKNOWLEDGMENT
Our sincere gratitude to the Almighty for the blessings, and thanks to our family members for their support.
References
- A. Ardeshir Goshtasby, "Fusion of Multifocus Images to Maximize Image Information", Information Fusion, Elsevier, 2007.
- Dong Jiang, Dafang Zhuang, Yaohuan Huang and Jinying Fu, "Image fusion and its applications", intechweb.org, ISBN 978-953-307-182-4.
- E. Eisemann and F. Durand, "Flash photography enhancement via intrinsic relighting", ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 23, pp. 673-678, 2004.
- G. Geetha, S. Raja Mohammad and Y. S. S. R. Murthy, "Multifocus image fusion using multiresolution approach with bilateral gradient sharpness criterion", CS & IT-CSCP, pp. 103-115, 2012.
- D. Krishnan and R. Fergus, "Dark flash photography", ACM Trans. Graph., vol. 28, no. 3, pp. 1-2, 2009.
- T. Mertens, J. Kautz and F. V. Reeth, "Exposure fusion", in Proc. Pacific Conf. Comput. Graph. Appl., 2007, pp. 382-390.
- J. M. Ogden, E. H. Adelson, J. R. Bergen and P. J. Burt, "Pyramid-based computer graphics", RCA Engineer, vol. 30, no. 5, 1985.
- G. Petschnigg, M. Agrawala, H. Hoppe, R. Szeliski, M. Cohen and K. Toyama, "Digital photography with flash and no-flash image pairs", ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 23, no. 3, pp. 664-672, 2004.
- E. Reinhard, M. Adhikhmin, B. Gooch and P. Shirley, "Color transfer between images", IEEE Comput. Graph. Appl., vol. 21, no. 5, pp. 34-41, Sep.-Oct. 2001.
- Sukhdhip Kaur and Kamaljit Kaur, "Multifocus image fusion de-noising and sharpness criterion", International Journal on Computer Science Engineering (IJCSE), ISSN 2319-7323, vol. 1, no. 2, November 2012.
- Sunil Kumar Panjeta and Deepak Sharma, "A Survey on Image fusion Techniques used to Improve Image Quality", International Journal of Applied Engineering Research, ISSN 0973-4562, vol. 7, no. 11, 2012.
- Suvarna A. Wakure and S. R. Todmal, "Survey on Different Image Fusion Techniques", IOSR Journal of VLSI and Signal Processing (IOSR-JVSP), e-ISSN 2319-4200, p-ISSN 2319-4197, vol. 1, no. 6, pp. 42-48, Mar.-Apr. 2013.
- M. Tico, N. Gelfand and K. Pulli, "Motion-blur-free exposure fusion", in Proc. 17th IEEE Int. Conf. Image Process., Oct. 2010, pp. 3321-3324.