

AN ANALYSIS OF TEXTURE CLASSIFICATION: LOCAL BINARY PATTERN

Harish Sahu, Praveen Bhanodia
Patel College of Science and Technology, Indore (M.P.)


Abstract

This paper presents an analysis of texture classification based on the well-known local binary pattern (LBP) and its generalizations.

INTRODUCTION

The Local Binary Pattern (LBP) [1] is an operator for image description that is based on the signs of differences between neighboring pixels. It is fast to compute and invariant to monotonic gray-scale changes of the image. Despite being simple, it is very descriptive, as attested by the wide variety of tasks it has been successfully applied to. The LBP histogram has proven to be a widely applicable image feature for, e.g., texture classification, face analysis and video background subtraction [2]. A possible drawback of the LBP operator is that the thresholding operation used when comparing the neighboring pixels could make it sensitive to noise. Practical experiments with images of good quality have not supported this argument, but under difficult conditions, or with images taken with noisy special cameras, noise might present a problem for the traditional LBP operator. In this paper we introduce soft histograms for LBP, which we show make the operator more robust to noise.

MULTI-BLOCK LOCAL BINARY PATTERNS

The MB-LBP (Multi-Block Local Binary Pattern) texture descriptor is an extension of the original LBP proposed by Zhang et al. [12]. MB-LBP is more robust than the original LBP descriptor because it can encode microstructures as well as macrostructures. For certain applications such as face recognition, experimental results indicate that MB-LBP outperforms other LBP algorithms [13]. The calculation of an MB-LBP is similar to that of a standard LBP, except that in an MB-LBP t0 to t7 (Figure 1) are the average grey values of the pixels in each corresponding region. These regions are compared to the averaged central region. Each averaged region is of equal size but does not necessarily have to be square.
[Figure 1: the MB-LBP neighborhood, with averaged regions t0 to t7 compared against the averaged central region]
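As a rough illustration of this description (not the authors' implementation), the sketch below computes a single MB-LBP code with NumPy: a 3×3 grid of equally sized blocks is averaged, and the eight surrounding averages t0 to t7 are thresholded against the average of the central block. The block size and the clockwise bit ordering are assumptions made for the example.

```python
import numpy as np

def mb_lbp_code(image, top, left, block_h, block_w):
    """One MB-LBP code: average a 3x3 grid of blocks starting at (top, left)
    and threshold the eight outer averages against the central average."""
    means = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            r, c = top + i * block_h, left + j * block_w
            means[i, j] = image[r:r + block_h, c:c + block_w].mean()

    centre = means[1, 1]
    # t0..t7: outer block averages, visited clockwise from the top-left block
    # (an illustrative ordering; the paper's Figure 1 defines the actual one).
    neighbours = [means[0, 0], means[0, 1], means[0, 2], means[1, 2],
                  means[2, 2], means[2, 1], means[2, 0], means[1, 0]]
    code = 0
    for bit, t in enumerate(neighbours):
        if t >= centre:
            code |= 1 << bit
    return code

# Example: 3x3-pixel blocks, i.e. a 9x9 window of the image.
img = np.random.randint(0, 256, (64, 64)).astype(np.float64)
print(mb_lbp_code(img, top=10, left=10, block_h=3, block_w=3))
```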

FEATURE EXTRACTION WITH LOCAL BINARY PATTERNS

The original LBP operator, introduced by Ojala et al. [1], is a powerful means of texture description. The operator labels the pixels of an image by thresholding the 3×3 neighborhood of each pixel with the center value and treating the result as a binary number. The histogram of the labels can then be used as a texture descriptor. The basic LBP operator is illustrated in Fig. 2(a). The most prominent limitation of this operator is its small spatial support area: features calculated in a local 3×3 neighborhood cannot capture large-scale structures that may be the dominant features of some textures. The operator was therefore later extended to use neighborhoods of different sizes [1]. Using circular neighborhoods and bilinearly interpolating the pixel values allows any radius and any number of pixels in the neighborhood. Examples of these kinds of extended LBP are shown in Fig. 2(b), (c), (d).
[Figure 2: (a) the basic 3×3 LBP operator; (b)-(d) circular LBP neighborhoods with different radii and numbers of sampling points]
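A minimal sketch of the basic 3×3 operator and its histogram descriptor, assuming NumPy and 8-bit grey-scale input; the neighbor ordering is one common convention rather than necessarily the one used in [1]:

```python
import numpy as np

def lbp_3x3(image):
    """Basic 3x3 LBP: threshold the eight neighbours of each pixel
    against the centre value and read the signs as an 8-bit code."""
    img = image.astype(np.int32)
    centre = img[1:-1, 1:-1]
    # Offsets of the eight neighbours, visited clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neighbour >= centre).astype(np.int32) << bit)
    return codes

def lbp_histogram(image):
    """256-bin histogram of LBP codes, used as the texture descriptor."""
    codes = lbp_3x3(image)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(lbp_histogram(img).shape)  # (256,)
```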

INTEGRAL HISTOGRAMS

A common technique used to detect faces in images is to slide a window of predefined size, which is then resized up to a certain scale. At each step of this sliding process, features are extracted from the image region inside the window and used as input to a classifier previously trained for that type of pattern. The problem with the sliding-window technique comes from the time needed to compute the features at each step. The Integral Image representation [4] overcomes this processing-time problem by precomputing all possible summations of pixel gray values before the passage of the sliding window. At each step, only a few accesses to a precomputed matrix are needed, and the summation is done in constant time for any scale and position. However, there are some criticisms of the use of differences between summations of gray values in adjacent image regions. Balas and Sinha [5] argue that a collection of edge fragments is a simple way of representing an image, but the local processing performed by edge-fragment extraction restricts the ability of the features to generalize to small changes in illumination.
Besides, edge maps implicitly ignore the majority of the image information modified by geometrical transformations of the image. These problems have led to the search for new image representations that could tackle some of them. Wang et al. [6] argue that histograms offer the best compromise between capturing distributional structure and retaining image properties that are useful for class estimation. Within the context of real-time face detection, the Integral Histogram [7] is a technique that has received great attention.
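The idea can be sketched as follows (an illustrative NumPy sketch, not the formulation of [4] or [7]): a cumulative-sum table padded with a leading row and column of zeros gives the sum of any rectangle from four look-ups, and keeping one such table per histogram bin yields an integral histogram.

```python
import numpy as np

def integral_image(image):
    """Summed-area table with a leading row/column of zeros, so that
    the sum over any rectangle needs only four look-ups."""
    return np.pad(image, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixel values in the half-open rectangle [r0:r1, c0:c1]."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def integral_histogram(labels, n_bins):
    """One integral image per histogram bin (e.g. per LBP code), so a
    region histogram costs four look-ups per bin at any scale/position."""
    return np.stack([integral_image(labels == b) for b in range(n_bins)])

labels = np.random.randint(0, 8, (64, 64))            # e.g. coarse LBP codes
ih = integral_histogram(labels, n_bins=8)
region_hist = np.array([rect_sum(ih[b], 10, 10, 30, 40) for b in range(8)])
print(region_hist, region_hist.sum())                  # 20 x 30 = 600 pixels
```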

NEAREST NEIGHBORS

The nearest neighbor algorithms are simple classifiers that select the training samples closest in distance to the query sample. These classifiers compute the distance from the query sample to every training sample and select the neighbor or neighbors with the shortest distances. The k-Nearest Neighbor (k-NN) is a popular implementation in which the k best neighbors are selected and the winning class is decided by the largest number of votes among those k neighbors [14]. The nearest neighbor classifier is simple to implement, as it does not require a training process. It is especially useful when only a small dataset is available, one that cannot be exploited effectively by machine learning methods that rely on a training stage. However, the major drawback of the nearest neighbor algorithms is that the cost of computing distances grows with the number of training samples.
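A minimal k-NN sketch over histogram descriptors, assuming NumPy, Euclidean distance and a simple majority vote (the distance measure and tie handling are illustrative choices):

```python
import numpy as np
from collections import Counter

def knn_classify(query, train_feats, train_labels, k=3):
    """Label a query descriptor by majority vote among the k training
    descriptors with the smallest Euclidean distance."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy example: 4-bin histograms for two texture classes.
train_feats = np.array([[0.7, 0.1, 0.1, 0.1],
                        [0.6, 0.2, 0.1, 0.1],
                        [0.1, 0.1, 0.2, 0.6],
                        [0.1, 0.2, 0.1, 0.6]])
train_labels = ["coarse", "coarse", "fine", "fine"]
print(knn_classify(np.array([0.65, 0.15, 0.1, 0.1]),
                   train_feats, train_labels, k=3))    # "coarse"
```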

CONTINUOUS WAVELET TRANSFORMATION

The WT is designed to address the problem of nonstationary signals. It involves representing a time function in terms of simple, fixed building blocks, termed wavelets. These building blocks are actually a family of functions which are derived from a single generating function called the mother wavelet by translation and dilation operations. Dilation, also known as scaling, compresses or stretches the mother wavelet and translation shifts it along the time axis [8,9,10,11].
The WT can be categorized into continuous and discrete forms. The continuous wavelet transform (CWT) is defined by

$CWT_x(a,b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t-b}{a}\right) dt = \int_{-\infty}^{\infty} x(t)\, \psi_{a,b}^{*}(t)\, dt$

where x(t) represents the analyzed signal, a and b represent the scaling factor (dilation/compression coefficient) and the translation along the time axis (shifting coefficient), respectively, and the superscript asterisk denotes complex conjugation. The function ψ_{a,b}(·) is obtained by shifting the wavelet to time b and scaling it to scale a:

$\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\, \psi\!\left(\frac{t-b}{a}\right)$

where ψ(t) represents the mother wavelet [9, 10].
Continuous, in the context of the WT, implies that the scaling and translation parameters a and b change continuously. However, calculating wavelet coefficients for every possible scale can represent a considerable effort and result in a vast amount of data.
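As a rough numerical illustration of the definition above (not a substitute for a proper implementation), the integral can be approximated by a Riemann sum over the sampled signal; the real-valued Morlet-like mother wavelet and the coarse scale grid are assumptions made for this sketch:

```python
import numpy as np

def morlet(t, w0=5.0):
    """Real part of a Morlet-like mother wavelet (illustrative choice)."""
    return np.cos(w0 * t) * np.exp(-t ** 2 / 2.0)

def cwt(x, t, scales, wavelet=morlet):
    """Approximate CWT_x(a, b) by a Riemann sum; b runs over the sample
    times. The wavelet here is real, so conjugation is a no-op."""
    dt = t[1] - t[0]
    coeffs = np.empty((len(scales), len(t)))
    for i, a in enumerate(scales):
        for j, b in enumerate(t):
            psi = wavelet((t - b) / a) / np.sqrt(a)
            coeffs[i, j] = np.sum(x * psi) * dt
    return coeffs

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 5 * t)   # simple test signal
scales = np.linspace(0.01, 0.2, 20)
print(cwt(x, t, scales).shape)    # (20, 512): one row of coefficients per scale
```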
GREY LEVEL CO-OCCURRENCE MATRICES
GLCM is one of the earliest feature extraction methods for texture classification, proposed by Haralick et al. back in 1973 [16]. It has been widely used in many texture classification applications and remains an important feature extraction method in the domain of texture classification. It is a statistical method that computes the relationship between pixel pairs in the image. In the conventional method, textural features are calculated from the generated GLCMs, e.g., contrast, correlation, energy, entropy and homogeneity [17]. In recent years, however, the GLCM has often been combined with other methods and is rarely used individually [15, 18, 19, 20]. Other than the conventional implementation, there are a few other implementations of the GLCM, e.g., introducing a second-order statistical method on top of the textural features of the original implementation [20], the one-dimensional GLCM [21] and using the raw GLCM itself instead of the first-order statistics [22]. The GLCM can also be applied in different color spaces to obtain color co-occurrence matrices [23].
Introduced by Haralick [24], the GLCM is one of the earliest texture analysers and is still of interest in many studies. Since the beginning of the 1970s, many researchers have studied GLCM theory and have practically implemented it in a wide range of texture analysis problems. The GLCM is a model that can explicitly represent the higher-order statistics of an image, just as ordinary histograms represent the first-order statistics of an image.
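A sketch of the conventional pipeline described above, with a NumPy-only GLCM for a single offset followed by a few Haralick-style statistics; the quantisation to eight grey levels and the (0, 1) offset are illustrative assumptions:

```python
import numpy as np

def glcm(image, dy=0, dx=1, levels=8):
    """Grey-level co-occurrence matrix for one pixel-pair offset.
    The image is first quantised to `levels` grey levels."""
    q = np.clip((image.astype(np.float64) / 256.0 * levels).astype(int),
                0, levels - 1)
    mat = np.zeros((levels, levels))
    h, w = q.shape
    for r in range(max(0, -dy), h - max(0, dy)):
        for c in range(max(0, -dx), w - max(0, dx)):
            mat[q[r, c], q[r + dy, c + dx]] += 1
    return mat / mat.sum()          # normalised joint probabilities

def haralick_features(p):
    """Contrast, energy and homogeneity computed from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

img = np.random.randint(0, 256, (64, 64))
print(haralick_features(glcm(img, dy=0, dx=1)))
```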
Statistical Approaches:
Statistical texture analysis methods deal with the distribution of grey levels (or colours) in a texture. The first order statistics and pixel-wise analysis are not able to efficiently define or model a texture. Therefore, statistical texture analysis methods usually employ higher order statistics or neighbourhood (local) properties of textures. The most commonly used statistical texture analysis methods are co-occurrence matrices, autocorrelation function, texture unit and spectrum, and grey level run-length [25, 26, 27].
Grey level run-length or primitive-length:
In this method, a primitive is defined as a maximal set of consecutive pixels of the same grey level located in a line. The lengths of the primitives (run-lengths) in different directions can then be used as texture descriptors: longer run-lengths imply a coarser texture, while more uniformly distributed run-lengths imply a more random texture. Statistics of the primitives can be computed as the texture features.
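A horizontal-direction sketch of this idea, assuming NumPy and a coarse quantisation to eight grey levels; the matrix entry [g, L-1] counts the runs of level g with length L, so coarser textures fill the longer-run columns:

```python
import numpy as np

def run_length_matrix(image, levels=8, max_run=16):
    """Count horizontal runs of identical (quantised) grey levels.
    Runs longer than max_run are accumulated in the last column."""
    q = np.clip((image.astype(np.float64) / 256.0 * levels).astype(int),
                0, levels - 1)
    rlm = np.zeros((levels, max_run), dtype=int)
    for row in q:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                rlm[run_val, min(run_len, max_run) - 1] += 1
                run_val, run_len = v, 1
        rlm[run_val, min(run_len, max_run) - 1] += 1  # close the last run
    return rlm

img = np.random.randint(0, 256, (64, 64))
rlm = run_length_matrix(img)
print(rlm.shape, rlm.sum())   # rlm.sum() is the total number of runs found
```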
Spatial domain filtering:
A texture can be considered a mixture of patterns; therefore, the characteristics of 'edges' and 'lines' are key elements in describing any texture. Even a plain or smooth texture can be considered a texture without any edges. Early attempts to use spatial domain filtering as a texture descriptor emphasised gradient (i.e., line and edge detector) filters such as the Roberts and Sobel operators [28, 26].
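For instance, a simple gradient-based descriptor in this spirit can be sketched with SciPy's Sobel filter; the choice of summary statistics is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage

def gradient_texture_features(image):
    """Edge/line-oriented texture cues from Sobel gradients:
    mean and standard deviation of the gradient magnitude."""
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)   # horizontal gradient (vertical edges)
    gy = ndimage.sobel(img, axis=0)   # vertical gradient (horizontal edges)
    magnitude = np.hypot(gx, gy)
    return magnitude.mean(), magnitude.std()

img = np.random.randint(0, 256, (64, 64))
print(gradient_texture_features(img))
```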

CONCLUSIONS

The goal of this work was to study several Local Binary Pattern approaches, because this concept has represented a milestone in texture analysis. Local Binary Pattern descriptors have proven to be powerful tools for feature encoding.

References

  1. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution Gray-scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (2002) 971-987
  2. “The Local Binary Pattern Bibliography,” http://www.ee.oulu.fi/research/imag/texture/lbp/bibliography/, 2007.
  3. M. Varma and A. Zisserman. Classifying images of materials: Achieving viewpoint and illumination independence. In Proceedings of European Conference on Computer Vision 2002, volume 3, pages 255–271, 2002.
  4. P. Viola and M. J. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.
  5. B. J. Balas and P. Sinha, “Dissociated dipoles: Image representation via non-local comparisons,” Tech. Rep., 2003.
  6. H. Wang, P. Li, and T. Zhang, “Proposal of novel histogram features for face detection,” in ICAPR, Lecture Notes in Computer Science, 2005, pp. 334–343.
  7. F. Porikli, “Integral histogram: A fast way to extract histograms in cartesian spaces,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2005, pp. 829–836.
  8. E.D. Übeyli, İ. Güler, Feature extraction from Doppler ultrasound signals for automated diagnostic systems, Comp. Biol. Med. 35 (9) (2005) 735–764.
  9. I. Daubechies, The wavelet transform time-frequency localization and signal analysis, IEEE Trans. Inform. Theory 36 (5) (1990) 961– 1005.
  10. S. Soltani, On the use of the wavelet decomposition for time series prediction, Neurocomputing 48 (2002) 267–277.
  11. M. Unser, A. Aldroubi, A review of wavelets in biomedical applications, Proc. IEEE 84 (4) (1996) 626–638.
  12. L. Zhang, R. Chu, S. Xiang, S. Liao, and S. Z. Li, Face Detection Based on Multi-Block LBP Representation IAPR/IEEE International Conference on Biometrics, 2007.
  13. S. Liao, X. Zhu, Z. Lei, L. Zhang, and S. Z. Li, Learning Multiscale Block Local Binary Patterns for Face Recognition IAPR/IEEE International Conference on Biometrics, 2007, pp. 828-837.
  14. B.D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, United Kingdom, 1996.
  15. J.A.R. Recio, L.A.R. Fernandez, and A. Fernandez-Sarria, “Use of Gabor Filters for Texture Classification of Digital Images,” Física de la Tierra, no. 17, pp. 47-56, 2005.
  16. R.M. Haralick, K. Shanmugam, and L. Dinstein, “Textural Features for Image Classification,” IEEE TSMC, vol. 3, pp. 610- 621, 1973.
  17. M. Petrou, and P.G. Sevilla, Image Processing Dealing withTexture, John Wiley & Sons, West Sussex, England, 2006.
  18. S. Arivazhagan, L. Ganesan, and T.G.S. Kumar, “Texture Classification using Curvelet Statistical and Co-occurrence Features,” ICPR, 2006.
  19. J.H. Kim, S.C. Kim, and T.J. Kang, “Fractal Dimension Cooccurrence Matrix Method for Texture Classification,” IEEE TENCON, 2006.
  20. M.B. Othmen, M. Sayadi, and F. Fnaiech, “Interest of the Multi-Resolution Analysis Based on the Co-occurrence Matrix for Texture Classification,” IEEE MELECON, pp. 852-856, 2008.
  21. J.Y. Tou, Y.H. Tay, and P.Y. Lau, “One-dimensional Greylevel Co-occurrence Matrices for Texture Classification,” ITSIM, vol. 3, pp. 1592-1597, 2008.
  22. J.Y. Tou, K.K.Y. Khoo, Y.H. Tay, and P.Y. Lau, “Evaluation of Speed and Accuracy for Comparison of Texture Classification Implementation on Embedded Platform,” IWAIT, 2009.
  23. A. Porebski, N. Vandenbroucke, and L. Macaire, “Iterative Feature Selection for Color Texture Classification,” ICIP, no. 3, pp. 509-512, 2007.
  24. R. Haralick, K. Shanmugam, and I. Dinstein. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 3(6):610–621, November 1973.
  25. L Hepplewhite. Computationally Efficient Texture Methods For Classification, Segmentation and Automated Visual Inspection. PhD thesis, Brunel University, Middlesex, The UK, 1998.
  26. M. Sonka, V. Hlavac, and R. Boyle. Image processing analysis and machine vision. International Thomson Computer Press, 1996.
  27. M. Tuceryan and A. Jain. Texture analysis. In The Handbook of Pattern Recognition and Computer Vision, pages 207–248. World Scientific, 1998.
  28. W. Pratt. Digital image processing. John Wiley and Sons, 1991.