
Classification of Dynamic Textures Using Bag of Systems Approach

Mr. Y. N. Rampure
Dept. of CSE, Walchand College of Engineering, Sangli, Maharashtra, India.

Abstract

The problem of categorizing video sequences of dynamic textures such as fire, water, and streams is extremely challenging, because the shape and appearance of a dynamic texture change continuously as a function of time. Existing approaches are unable to handle videos taken under different viewpoints and scales, and they are typically applied to a manually extracted region containing the most dynamic content of the original video. There is a need to address these issues in the categorization of video sequences. The proposed work is therefore an attempt to categorize dynamic textures using an approach that is independent of the region of interest and deals with videos taken under different viewpoints and scales.

Keywords

Dynamic textures, categorization, linear dynamical systems

INTRODUCTION

Dynamic textures are video sequences that contain complex non-rigid dynamical phenomena such as fire, flames, water on the surface of a lake, or a flag fluttering in the wind. Algorithms to analyze such video sequences are important in several applications such as surveillance, where, for example, one wants to detect fires or pipe ruptures. A dynamic texture changes continuously in shape and appearance as a function of time, which makes the categorization of video sequences very challenging, so algorithms for this problem need to be developed. There are many approaches for modeling and synthesizing video sequences of dynamic textures. Among them is the generative model proposed by Doretto et al. [1], in which a dynamic texture is modeled using a Linear Dynamical System (LDS). This model has been shown to be very versatile and has been successfully used for vision tasks such as synthesis, editing, segmentation, registration, and categorization. The problem addressed here is the categorization of dynamic textures: given a video sequence of a single dynamic texture, identify which class (e.g., water, fire, etc.) the video sequence belongs to.
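For concreteness, the LDS model of Doretto et al. [1] can be written in the standard state-space form (the notation below is the usual convention for this model, not notation fixed elsewhere in this paper):

\[
x_{t+1} = A x_t + v_t, \qquad y_t = C x_t + w_t,
\]

where $x_t \in \mathbb{R}^n$ is the hidden state at time $t$, $y_t \in \mathbb{R}^p$ is the observed frame with its pixels stacked into a vector, $A$ is the state-transition matrix, $C$ is the observation matrix, and $v_t$ and $w_t$ are zero-mean Gaussian noise terms. The pair $(A, C)$ learned from a video then serves as a compact descriptor of the texture's dynamics and appearance.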

DEFINITIONS

The definitions of dynamic textures and related terms are given in this section. These definitions are adapted from [1].
1) Dynamic textures
Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in space and time; examples include sea waves, smoke, foliage, and whirlwinds.
2) Categorization
Categorization is the task of identifying, among the different classes of dynamic texture videos, the specific class to which a given video belongs.
3) Linear Dynamical Systems
Linear dynamical systems are dynamical systems whose evolution functions are linear. Linear dynamical systems can be solved exactly, and they have a rich set of mathematical properties. Linear systems can also be used to understand the qualitative behavior of general dynamical systems, by calculating the equilibrium points of the system and approximating it as a linear system around each such point.
In this work, an LDS fitted to a video acts as a group of features of that video. These features are calculated with the help of specific algorithms and techniques, and because they are characteristic of each video, they are helpful for categorizing the videos. A minimal simulation sketch of an LDS follows.
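The sketch below assumes nothing beyond standard NumPy: it simulates the state and observation equations of an LDS and checks the eigenvalues of the state-transition matrix, which govern the qualitative behavior around the equilibrium at the origin. All dimensions and noise levels are illustrative, not values from the paper.

```python
import numpy as np

# Minimal sketch: simulate a linear dynamical system (LDS)
#   x_{t+1} = A x_t + v_t   (state evolution)
#   y_t     = C x_t + w_t   (observation)
rng = np.random.default_rng(0)
n, p, T = 5, 20, 100          # state dim, observation dim, sequence length

# Scale an orthogonal matrix so every eigenvalue has modulus 0.9 (< 1),
# which makes the system stable around the equilibrium x = 0.
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]
C = rng.standard_normal((p, n))

x = rng.standard_normal(n)
Y = np.empty((T, p))
for t in range(T):
    Y[t] = C @ x + 0.01 * rng.standard_normal(p)   # observation noise w_t
    x = A @ x + 0.05 * rng.standard_normal(n)      # process noise v_t

# Eigenvalues of A determine the qualitative behavior: all inside the
# unit circle means trajectories decay toward the equilibrium.
print(np.abs(np.linalg.eigvals(A)).max())          # 0.9 here
```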

BRIEF REVIEW OF LITERATURE

There are many approaches to categorizing video sequences, each with certain drawbacks. In 2001, Saisan et al. used distances based on the principal angles between the observability subspaces associated with the LDSs [2]. In 2005, Chan and Vasconcelos used both the KL divergence and the Martin distance as metrics between dynamical systems [3]. In 2006, Woolfe and Fitzgibbon used the family of Chernoff distances and distances between cepstrum coefficients as metrics between LDSs [4]. Other approaches to dynamic texture categorization also exist: Fujita and Nayar divide the video sequences into blocks and compare the trajectories of the states in order to perform inference [5]. Alternatively, Vidal and Favaro extended boosting to LDSs by using dynamical systems as weak classifiers [6].
Currently, fire detection in video and computer vision methods for coral reef assessment are ongoing projects on dynamic texture categorization at the UCLA lab, California [7].
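Several of the approaches above, as well as the codebook step in the methodology below, rely on the Martin distance between LDSs. The following is a hedged sketch of one common way to compute it, from the principal angles between finite-horizon observability subspaces; the helper name, the horizon m, and the use of scipy.linalg.subspace_angles are choices of this sketch, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import subspace_angles

def martin_distance(A1, C1, A2, C2, m=10):
    """Approximate Martin distance between two LDSs (A, C), computed
    from the principal angles between their extended observability
    subspaces O = [C; CA; ...; CA^(m-1)]; the finite horizon m is an
    approximation of the infinite-horizon definition."""
    def extended_observability(A, C):
        blocks, M = [], C
        for _ in range(m):
            blocks.append(M)
            M = M @ A                    # next block row: C A^k
        return np.vstack(blocks)

    O1 = extended_observability(A1, C1)
    O2 = extended_observability(A2, C2)
    theta = subspace_angles(O1, O2)      # principal angles between subspaces
    n = min(A1.shape[0], A2.shape[0])
    # d^2 = -2 * sum_i log(cos(theta_i)) over the n principal angles
    return np.sqrt(-2.0 * np.sum(np.log(np.cos(theta[:n]))))
```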

METHODOLOGY

The dataset contains video patches. Spatiotemporal features are extracted from the video patches or frames, a codebook is formed from these features, and the codebook is then used for classification. The pipeline consists of the following steps; a sketch of the codebook-formation and classification stages is given after the list.
Input frames: Frames are taken from the input videos.
Feature extraction: Various types of descriptors are used to extract spatiotemporal features from every frame; local spatiotemporal volumes are sampled densely for classification.
Codebook formation: Two approaches are used, a K-medoids approach and a K-means approach. In the K-medoids approach, the algorithm is run directly on the Martin distances between the LDSs of the spatiotemporal volumes.
Representation and classification: Two approaches are used for representation and classification with a k-NN classifier.
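As referenced above, here is a minimal sketch of the codebook-formation and classification stages. It assumes each video has already been reduced to a set of local LDS descriptors and that pairwise Martin distances between them are available (e.g., via a function like martin_distance from the earlier sketch); the function names and parameter values (K, the number of neighbors k) are illustrative assumptions, not details from the paper.

```python
import numpy as np

def k_medoids(D, K, iters=50, seed=0):
    """K-medoids run directly on a precomputed distance matrix D
    (N x N), as in the K-medoids codebook variant described above.
    Returns the indices of the K codeword systems."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=K, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        for k in range(K):
            members = np.where(labels == k)[0]
            if len(members):             # keep old medoid if cluster is empty
                within = D[np.ix_(members, members)].sum(axis=0)
                medoids[k] = members[np.argmin(within)]
    return medoids

def bos_histogram(dists_to_codewords):
    """Histogram representation of one video: assign each of its local
    LDSs (rows) to the nearest codeword (columns) and normalize the
    resulting count vector."""
    K = dists_to_codewords.shape[1]
    counts = np.bincount(np.argmin(dists_to_codewords, axis=1), minlength=K)
    return counts / max(counts.sum(), 1)

def knn_classify(test_hist, train_hists, train_labels, k=1):
    """k-NN over BoS histograms with the Euclidean metric (one simple
    choice; other histogram metrics could be substituted).
    train_hists: (N, K) array, train_labels: (N,) array."""
    d = np.linalg.norm(train_hists - test_hist, axis=1)
    nearest = train_labels[np.argsort(d)[:k]]
    values, votes = np.unique(nearest, return_counts=True)
    return values[np.argmax(votes)]
```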

Data Set Information:

Video patches: This is a dataset of video texture patches generated from the UCLA dynamic texture dataset, which contains 200 videos from different classes. Each patch was cropped from the area of the video texture that contained the most motion, over all videos within the same class [2].
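As a hedged illustration of the cropping procedure described above, the following sketch scores each spatial window of a grayscale video by the energy of its temporal frame differences and crops the highest-scoring window; the 48x48 window size and the motion-energy criterion are assumptions for illustration, not specifics of how the dataset was actually built.

```python
import numpy as np

def crop_most_motion(video, patch=48):
    """video: (T, H, W) grayscale array -> (T, patch, patch) crop of
    the spatial window with the highest temporal-difference energy.
    (A summed-area table would speed up the window scan; omitted for
    clarity.)"""
    energy = np.sum(np.diff(video.astype(float), axis=0) ** 2, axis=0)
    best, best_rc = -1.0, (0, 0)
    for r in range(energy.shape[0] - patch + 1):
        for c in range(energy.shape[1] - patch + 1):
            s = energy[r:r + patch, c:c + patch].sum()
            if s > best:
                best, best_rc = s, (r, c)
    r, c = best_rc
    return video[:, r:r + patch, c:c + patch]
```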

REMARKS

In this paper, a Bag-of-Systems (BoS) approach is proposed for categorizing dynamic textures. By modeling a video with the distribution of local dynamical models extracted from it, variations in viewpoint and scale between the training and test data are handled better than by modeling the entire video sequence with a single global model. In order to construct the BoS model for a video sequence, several challenges have been tackled, specifically in the codebook formation step of the pipeline. Toward this end, distances on the space of dynamical systems, nonlinear dimensionality reduction techniques, and K-means or K-medoids clustering have been used to efficiently construct the codebook of dynamical systems. The algorithm is compared with standard Bag-of-Features approaches using a variety of different features for categorizing video sequences, as well as with the original single-LDS approach. The experimental results show that the proposed approach produces better results across different parameter choices and empirically establish its superior performance.

ACKNOWLEDGMENT

I would like to express my sincere thanks to all those who helped me directly or indirectly in this esteemed work.

References

  1. G. Doretto, A. Chiuso, Y. Wu, and S. Soatto, “Dynamic Textures,” Int’l J. Computer Vision, vol. 51, no. 2, pp. 91-109, 2003.
  2. P. Saisan, G. Doretto, Y.N. Wu, and S. Soatto, “Dynamic Texture Recognition,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 58-63, 2001.
  3. S. Vishwanathan, A. Smola, and R. Vidal, “Binet-Cauchy Kernels on Dynamical Systems and Its Application to the Analysis of Dynamic Scenes,” Int’l J. Computer Vision, vol. 73, no. 1, pp. 95-119, 2007.
  4. F. Woolfe and A. Fitzgibbon, “Shift-Invariant Dynamic Texture Recognition,” Proc. European Conf. Computer Vision, pp. 549-562, 2006.
  5. K. Fujita and S. Nayar, “Recognition of Dynamic Textures Using Impulse Responses of State Variables,” Proc. Third Int’l Workshop Texture Analysis and Synthesis, Oct. 2003.
  6. http://vision.ucsd.edu/project/computer-vision-methods-coral-reefassessment.
  7. http://mitpune.com/dept-comp/research-dept-comp.aspx
  8. http://www.jofcis.com/publishedpapers/2011_7_5_1402_1411.pdf
  9. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6487938&reason=concurrency
  10. J. Sivic and A. Zisserman, “Video Google: A Text Retrieval Approach to Object Matching in Videos,” Proc. IEEE Int’l Conf. Computer Vision, pp. 1470-1477, 2003.
  11. A. Ravichandran, R. Chaudhry, and R. Vidal, “Categorizing Dynamic Textures Using a Bag of Dynamical Systems,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 35, no. 2, Feb. 2013.
  12. A. Ravichandran, R. Chaudhry, and R. Vidal, “View-Invariant Dynamic Texture Recognition Using a Bag of Dynamical Systems,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2009.
  13. D. Nister and H. Stewenius, “Scalable Recognition with a Vocabulary Tree,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 2161-2168, 2006.