ISSN: 2319-8753 (Online), 2347-6710 (Print)


Novel Approach for Image Decomposition from Multiple Views

P. Dineshkumar1, R. Kamalakannan2, V. Rajakumareswaran3
  1. PG Student, II – ME CSE, K S R Institute for Engineering and Technology, India
  2. PG Student, II – ME (COMMUNICATION SYSTEMS), Department of ECE, Mahendra Engineering College, India
  3. Assistant Professor, Dept. of CSE, KSR Institute for Engineering and Technology, India

Visit the International Journal of Innovative Research in Science, Engineering and Technology for more related articles.

Abstract

Intrinsic image decomposition aims to separate an image into its reflectance and illumination components in order to facilitate further analysis or manipulation. This paper presents a system that estimates shading and reflectance intrinsic images from a single real image, given the direction of the dominant illumination in the scene. Although some properties of real-world scenes, such as occlusion edges, are not modeled directly, the system produces satisfying image decompositions. The basic strategy is to gather local evidence from color and intensity patterns in the image and then propagate that evidence to other areas. The most computationally intensive steps in recovering the shading and reflectance images are computing the local evidence and running the Generalized Belief Propagation algorithm. One of the primary limitations of this work is the use of synthetic training data, which limits both the performance of the system and the range of algorithms, including the pseudo-inverse process, available for designing the classifiers. We then introduce an optimization method to estimate sun visibility over a reconstructed point cloud. This algorithm compensates for the lack of accurate geometry and allows the extraction of precise shadows in the final image. Finally, we propagate the information computed over the sparse point cloud to every pixel in the photograph using image-guided propagation. Our propagation not only separates reflectance from illumination, but also decomposes the illumination into sun, sky, and indirect layers. We expect that performance would be improved by training on a set of intrinsic images gathered from real data.
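The reflectance/illumination separation described above rests on the multiplicative image formation model (image = reflectance × shading). The following toy sketch, using random data rather than anything from the paper, illustrates the model and why the log domain, where it becomes additive, is the natural place to reason about derivatives:

```python
import numpy as np

# Toy illustration of the multiplicative intrinsic image model I = R * S.
# The data is random; this is not the paper's implementation.
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.2, 1.0, size=(4, 4))   # surface albedo in (0, 1]
shading = rng.uniform(0.1, 1.0, size=(4, 4))       # illumination term

image = reflectance * shading                      # observed image

# In the log domain the model is additive: log I = log R + log S,
# which is why derivative-based methods classify log-image gradients.
log_err = np.abs(np.log(image) - (np.log(reflectance) + np.log(shading))).max()
print(log_err < 1e-12)
```

The ill-posedness mentioned in the abstract is visible here: infinitely many (R, S) pairs multiply to the same image, so prior knowledge is needed to pick one.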

Index Terms

Intrinsic images, belief propagation, mean-shift algorithm.

INTRODUCTION

Intrinsic image decomposition usually refers to separating an input photograph into illumination (shading) and reflectance components. The main difficulty in such manipulations is that a pixel color aggregates the effect of both material and lighting, so standard color manipulations are likely to affect both components that represent the scene properties: the illumination of the scene and the reflectance of its surfaces. Intrinsic images can be recovered either from a single image or from image sequences.
This paper addresses the particular case of separating a single image into illumination and reflectance components. Intrinsic image methods can use multiview stereo to automatically reconstruct a 3D point cloud of the scene. As reviewed in the related work section, unsuitable assumptions and restrictions make existing intrinsic image algorithms insufficient for producing high-quality illumination and reflectance components. We consider outdoor scenes and separate photographs into a material layer (also called reflectance) and several illumination layers that describe the contributions to the image. Inverse global illumination methods also estimate the material properties of a scene, but they require very accurate geometric models, either built by hand or acquired with complex laser scanners. Color is usually employed as a key cue for identifying shading gradients. Tappen et al. proposed an algorithm for recovering intrinsic images from a single photograph. Their approach was based on a trained classifier that labeled image derivatives as being caused by either illumination or reflectance changes; the illumination and reflectance images were then calculated from the classified derivatives. Recovering high-quality intrinsic images when only a photograph is available remains an open challenge: due to its inherent ill-posedness, the automatic decomposition of a single image cannot be solved correctly without additional prior knowledge about reflectance or illumination.
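The derivative-classification idea described above can be illustrated with a deliberately simplified rule: attribute an image derivative to reflectance when chromaticity changes across the edge, and to shading otherwise. The function below is a toy stand-in for the trained classifier; the threshold and pixel data are invented for illustration:

```python
import numpy as np

def classify_derivatives(rgb, thresh=0.05):
    """Label horizontal derivatives as 'reflectance' or 'shading'.

    Simplified heuristic: a shading change scales all channels equally,
    leaving chromaticity constant; a material change alters chromaticity.
    """
    eps = 1e-6
    intensity = rgb.sum(axis=2) + eps
    chroma = rgb / intensity[..., None]              # normalized chromaticity
    d_chroma = np.abs(np.diff(chroma, axis=1)).sum(axis=2)
    return np.where(d_chroma > thresh, "reflectance", "shading")

# A gray intensity ramp: pure shading change, chromaticity stays constant.
ramp = np.stack([np.tile(np.linspace(0.2, 0.8, 4), (2, 1))] * 3, axis=2)
labels = classify_derivatives(ramp)
print(labels[0, 0])   # classified as a shading derivative
```

Once every derivative is labeled, the shading and reflectance images can be reconstructed from their respective derivative fields, e.g., via a least-squares (pseudo-inverse) solve.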
A. CONTRIBUTIONS
In summary, this paper makes the following contributions:
We present a novel automatic intrinsic image recovery algorithm based on optimization, which uses a local continuity assumption on reflectance values to define a new energy function.
We introduce an algorithm to reliably identify points in shadow, based on a new parameterization of the reflectance. Our algorithm compensates for the lack of accurately reconstructed and complete 3D information.
We propose an efficient intrinsic image decomposition approach that adds three types of user scribbles to the energy function, improving on the results of the automatic algorithm.
The remainder of the paper gives an overview of previous intrinsic image recovery approaches, defines our image formation model, describes our capture procedure, and then follows the structure of the three contributions above.
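The first contribution mentions an energy function built on local continuity of reflectance. As an illustration of that style of formulation (the weights and data term here are placeholders, not the paper's actual energy), a 1D version with quadratic data and smoothness terms reduces to a small linear system:

```python
import numpy as np

# Illustrative energy with a local reflectance-continuity term:
#   E(r) = sum_i (r_i - d_i)^2  +  lam * sum_i (r_i - r_{i+1})^2
# Setting the gradient to zero gives (I + lam*L) r = d, with L the
# chain-graph Laplacian. All values below are invented for the sketch.

def solve_energy(d, lam=10.0):
    n = len(d)
    A = np.eye(n)                          # data term
    for i in range(n - 1):                 # smoothness between neighbors
        e = np.zeros(n)
        e[i], e[i + 1] = 1.0, -1.0
        A += lam * np.outer(e, e)          # adds one edge of the Laplacian
    return np.linalg.solve(A, d)

d = np.array([0.2, 0.9, 0.3, 0.8])        # noisy per-pixel reflectance evidence
r = solve_energy(d)
print(np.all(np.diff(r) ** 2 < np.diff(d) ** 2))  # output is smoother
```

User scribbles, as in the third contribution, would enter such a system as extra constraints on selected pixels, biasing the solution without changing its quadratic form.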

RELATED WORK

A. INTRINSIC IMAGES
Several methods have been proposed to estimate reflectance and illumination from a single image. This decomposition is severely ill posed and can only be solved with additional knowledge or assumptions about the scene content, for example that neighboring pixels with similar chromaticity have the same reflectance and that the image is composed of a small set of reflectances. These approaches produce encouraging decompositions on isolated objects, as evaluated on ground-truth data sets. However, the automatic decomposition of complex outdoor images remains an open challenge, in part because most existing methods assume monochrome lighting while outdoor scenes are often lit by a mixture of colored sun and sky light.
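One common way to realize the small-set-of-reflectances assumption is to cluster pixel chromaticities; mean-shift (listed in the index terms) finds cluster modes without fixing their number in advance. A minimal 1D sketch, with invented bandwidth and data:

```python
import numpy as np

def mean_shift_1d(points, bandwidth=0.1, iters=50):
    """Shift each point toward the kernel-weighted mean of the data.

    Points in the same density mode converge to the same location,
    grouping pixels into a small set of candidate reflectances.
    """
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            w = np.exp(-((points - m) ** 2) / (2 * bandwidth ** 2))
            modes[i] = np.sum(w * points) / np.sum(w)
    return modes

# Two chromaticity clusters, around 0.2 and 0.8.
pts = np.array([0.18, 0.2, 0.22, 0.78, 0.8, 0.82])
modes = mean_shift_1d(pts)
print(np.unique(np.round(modes, 2)))   # the six pixels collapse to two modes
```

In a real decomposition the clustering would run on 2D or 3D chromaticity vectors, but the mode-seeking behavior is the same.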
B. INVERSE RENDERING
Inverse rendering methods recover the reflectance and illumination of a scene by inverting the rendering equation. These methods require an accurate 3D model of the scene, either modeled manually or acquired with expensive laser scanners, and then solve an often costly global illumination system. Inverse rendering methods allow applications that are beyond the scope of this paper, such as free-viewpoint navigation and dynamic relighting.
C. IMAGE-BASED PROPAGATION
Image-guided interpolation methods were introduced by Levin and Lischinski to propagate colors. In this paper, we use a propagation algorithm to propagate user indications for intrinsic image decomposition. This algorithm is inspired by the matting Laplacian, which has been used to decompose an image into foreground and background layers and to recover white-balanced images under mixed lighting.
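The spirit of image-guided propagation can be sketched on a 1D "image": sparse constrained values are spread to all pixels by a quadratic energy whose neighbor weights follow the guide image, so propagation stops at guide edges. The affinity, weights, and data below are illustrative placeholders, not the matting-Laplacian formulation itself:

```python
import numpy as np

def propagate(guide, constraints, lam=100.0, sigma=0.1):
    """guide: 1D intensity profile; constraints: {index: value} sparse labels."""
    n = len(guide)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n - 1):                    # smoothness, gated by the guide
        w = np.exp(-((guide[i] - guide[i + 1]) ** 2) / sigma ** 2)
        for p, q in ((i, i + 1), (i + 1, i)):
            A[p, p] += w
            A[p, q] -= w
    for i, v in constraints.items():          # soft data term at labeled pixels
        A[i, i] += lam
        b[i] += lam * v
    return np.linalg.solve(A, b)

guide = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9])   # an edge in the guide image
x = propagate(guide, {0: 1.0, 5: 0.0})
print(np.round(x, 2))   # values stop propagating across the guide edge
```

Because the neighbor weight collapses at the intensity edge, the constrained value 1.0 fills the left region and 0.0 fills the right, which is the behavior that lets sparse point-cloud estimates cover every pixel.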
D. SHADOW REMOVAL
Our work is also related to shadow removal methods, which aim at removing cast shadows in an image either automatically or with user assistance. While our method also identifies cast shadows, our main goal is to extract a reflectance image as well as smooth illumination variations. We separate the contributions of sun, sky, and indirect illumination, which enables novel image manipulations.

EXISTING SYSTEM

A typical usage scenario is to take a photograph and manipulate its content after capture, e.g., to change the color of an object, soften the shadows, or add virtual objects. In this paper, we present a method to achieve this by taking only a few extra photographs, which represents a minimal additional cost. We focus on outdoor scenes and separate photographs into a material layer (also called reflectance) and several illumination layers that describe the contributions of sun, sky, and indirect lighting. This intrinsic image decomposition allows each component to be edited separately and then composited with consistent lighting. Our method combines sparse geometric reconstruction with image-guided propagation, thus leveraging their respective strengths.
We exploit the automatically reconstructed 3D information to compute lighting information for a subset of pixels, and use image-guided propagation to decompose the photographs into intrinsic images. While our method requires users to capture additional images, it alleviates the need for user intervention apart from some initial calibration.
A. DISADVANTAGES

PROPOSED SYSTEM

Related techniques decompose image sequences of outdoor scenes into a shadow mask and images illuminated only by skylight or sunlight, capture an environment map at multiple times of day, or estimate the reflectance field of an outdoor scene that can then be used for relighting. These methods assume a fixed viewpoint and varying illumination, whereas our method relies on images captured under different viewpoints and fixed illumination.
The main advantage of our capture approach is that it reduces acquisition time to a few minutes, while time lapses typically require at least several hours of capture to cover as many lighting directions as possible. Most related to our capture strategy is the system that relights buildings reconstructed from multiple photographs. The most computationally intensive steps in recovering the shading and reflectance images are computing the local evidence and running the Generalized Belief Propagation algorithm; the pseudo-inverse reconstruction step took under 5 seconds. One of the primary limitations of this work was the use of synthetic training data, which limited both the performance of the system and the range of algorithms available for designing the classifiers.
From this lightweight capture, we use recent computer vision algorithms to reconstruct a sparse 3D point cloud of the scene. The point cloud only provides an imprecise and incomplete representation of the scene; however, we show that it is sufficient to compute plausible sky and indirect illumination at each reconstructed 3D point. We then introduce an optimization method to estimate sun visibility over the point cloud. This algorithm compensates for the lack of accurate geometry and allows the extraction of precise shadows in the final image. Finally, we propagate the information computed over the sparse point cloud to every pixel in the photograph using image-guided propagation. Our propagation not only separates reflectance from illumination, but also decomposes the illumination into sun, sky, and indirect layers.
We expect that performance would be improved by training on a set of intrinsic images gathered from real data. Our method requires only a few extra photographs, which represents a minimal additional cost. We focus on outdoor scenes and separate photographs into a material layer (also called reflectance) and several illumination layers that describe the contributions of sun, sky, and indirect lighting (Figs. 1e, 1f, 1g, and 1h). This intrinsic image decomposition allows each component to be edited separately and then composited with consistent lighting. Fig. 1 illustrates applications of our decomposition: we first alter the floor material with a graffiti while maintaining consistent shadows (Fig. 1b); we then add a virtual object to the scene with consistent lighting (Fig. 1c); and finally we change the lighting color and blur the shadows to simulate sunset (Fig. 1d).
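The layered model described above, image = reflectance × (sun + sky + indirect), is what makes per-layer edits and recompositing straightforward. The following sketch uses random toy values rather than estimates from a photograph:

```python
import numpy as np

# Toy instance of the layered illumination model:
#   image = R * (S_sun + S_sky + S_ind)
# All values are random placeholders, not estimated from a real scene.
rng = np.random.default_rng(1)
R = rng.uniform(0.2, 1.0, size=(3, 3))       # reflectance (albedo)
S_sun = rng.uniform(0.0, 1.0, size=(3, 3))   # sun layer (zero where in shadow)
S_sky = rng.uniform(0.1, 0.4, size=(3, 3))   # sky layer
S_ind = rng.uniform(0.0, 0.2, size=(3, 3))   # indirect layer

image = R * (S_sun + S_sky + S_ind)          # forward model

# Example edit: dim the sun layer (e.g., toward a sunset look) and recomposite.
sunset = R * (0.8 * S_sun + S_sky + S_ind)
print(np.all(sunset <= image + 1e-12))       # dimming the sun darkens the image
```

Edits like the graffiti or the virtual object of Fig. 1 amount to modifying R or one illumination layer in this model and multiplying the layers back together.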
Fig. 1. Example decomposition and editing applications (image not reproduced).
ADVANTAGE
Their method necessitates the additional acquisition of flash/no-flash image pairs to capture a material exemplar for every material of a building. Users then need to associate a material exemplar with each texture region of the reconstructed building.
We estimate indirect illumination at every pixel without the need to solve inverse global illumination. Nonetheless, inverse rendering methods allow applications that are beyond the scope of this paper, such as free-viewpoint navigation and dynamic relighting.
The same propagation machinery has been used to decompose an image into foreground and background layers, and to recover white-balanced images under mixed lighting.

CONCLUSION

We presented a method to estimate rich intrinsic images for outdoor scenes. In addition to reflectance, our algorithm generates a separate image for the sun, sky, and indirect illumination. Our method relies on a lightweight capture (10-31 photographs in the scenes shown here) to estimate a coarse geometric representation of the scene. This geometric information allows us to estimate illumination terms over a sparse 3D sampling of the scene. We then introduce an optimization algorithm to refine the inaccurate estimations of sun visibility. While incomplete, this sparse information provides the necessary constraints for an image-guided propagation algorithm that recovers the reflectance and illumination components at each pixel of the input photographs. Our intrinsic image decomposition allows users of image manipulation tools to perform consistent editing of material and lighting in photographs. An alternative way of computing the illumination components would also be worth exploring, to investigate the tradeoffs between complexity, speed, and quality. Enforcing consistent intrinsic image properties between views is another interesting future research direction. With such consistency, our method will open the way for dynamically relightable environments and free-viewpoint navigation in image-based rendering systems [46], in addition to the applications demonstrated in this paper. An important step toward this goal is the generation of plausible shadow motion in the sun illumination layer in order to simulate moving light sources.

References

  1. Barrow, H. and Tenenbaum, J. (1978) 'Recovering Intrinsic Scene Characteristics from Images,' Computer Vision Systems, Academic Press.
  2. Bousseau, A., Paris, S., and Durand, F. (2009) 'User-Assisted Intrinsic Images,' ACM Trans. Graphics, vol. 28, no. 5, article 130.
  3. Carroll, R., Ramamoorthi, R., and Agrawala, M. (2011) 'Illumination Decomposition for Material Recoloring with Consistent Interreflections,' ACM Trans. Graphics, vol. 30, no. 4, article 43.
  4. Digne, J., Morel, J.-M., Souzani, C.-M., and Lartigue, C. (2011) 'Scale Space Meshing of Raw Data Point Sets,' Computer Graphics Forum, vol. 6, no. 4, pp. 1630-1642.
  5. Debevec, P., Tchou, C., Gardner, A., Hawkins, T., Poullis, C., Stumpfel, J., Jones, A., Yun, N., Einarsson, P., Lundgren, T., Fajardo, M., and Martinez, P. (2004) 'Estimating Surface Reflectance Properties of a Complex Scene under Captured Natural Illumination,' technical report, USC Inst. for Creative Technologies.
  6. Furukawa, Y. and Ponce, J. (2010) 'Accurate, Dense, and Robust Multi-View Stereopsis,' IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 8, pp. 1362-1376.
  7. Hsu, E., Mertens, T., Paris, S., Avidan, S., and Durand, F. (2008) 'Light Mixture Estimation for Spatially Varying White Balance,' ACM Trans. Graphics, vol. 27, no. 3, article 70.
  8. Karsch, K., Hedau, V., Forsyth, D., and Hoiem, D. (2011) 'Rendering Synthetic Objects into Legacy Photographs,' ACM Trans. Graphics, vol. 30, no. 6, article 157.
  9. Loscos, C., Drettakis, G., and Robert, L. (2000) 'Interactive Virtual Relighting of Real Scenes,' IEEE Trans. Visualization and Computer Graphics, vol. 6, no. 4, pp. 289-305.
  10. Levin, A., Lischinski, D., and Weiss, Y. (2008) 'A Closed-Form Solution to Natural Image Matting,' IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 228-242.
  11. Omer, I. and Werman, M. (2004) 'Color Lines: Image Specific Color Representation,' Proc. IEEE CS Conf. Computer Vision and Pattern Recognition (CVPR), pp. 946-953.
  12. Okabe, M., Zeng, G., Matsushita, Y., Igarashi, T., Quan, L., and Shum, H.-Y. (2006) 'Single-View Relighting with Normal Map Painting,' Proc. Pacific Graphics, pp. 27-34.
  13. Tappen, M.F., Freeman, W.T., and Adelson, E.H. (2005) 'Recovering Intrinsic Images from a Single Image,' IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 9, pp. 1459-1472.
  14. Shen, J., Yang, X., Jia, Y., and Li, X. (2011) 'Intrinsic Images Using Optimization,' Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR).