

Neuro Fuzzy Classifier for Image Retrieval

S.Asha, S.Ramya, M.Sarulatha, M.Prakasham and P.Priyanka
PG Scholars, Dept. of Computer Science, Sri Krishna College of Technology, Coimbatore, India


Abstract

Content-based image retrieval (CBIR) has been an active research topic in areas such as entertainment, multimedia, education, image classification and searching. One of the key issues in a CBIR system is extracting, from the raw data, the essential information that reflects the image content. Even though a large number of feature extraction and retrieval techniques have been developed, there is still no globally accepted technique for region/object representation and retrieval. In this paper, we propose an Adaptive Neuro-Fuzzy Inference System (ANFIS), which has the potential to capture the benefits of both neural networks and fuzzy logic. Color extraction is based on the RGB (red, green, blue), HSV (hue, saturation, value) and YCbCr (luminance and chrominance) color spaces. Texture is extracted with the Gray Level Co-Occurrence Matrix (GLCM), a popular statistical method. Shape is extracted with Canny edge detection. The experimental results show that the retrieval framework is effective, requires less computation time, and outperforms conventional image retrieval systems. The results are analyzed on the Corel datasets.

Keywords

Adaptive Neuro-Fuzzy Inference System, Canny Edge Detection, Content-Based Image Retrieval, Gray Level Co-Occurrence Matrix (GLCM).

INTRODUCTION

With the development of the Internet and the availability of images captured by devices such as digital cameras, mobile phones and image scanners, the size of digital image collections is increasing rapidly. Efficient image browsing, searching and retrieval tools are required by users from various domains, including remote sensing, fashion, medicine, architecture, crime prevention and publishing. For this purpose, many image retrieval systems have been developed. There are two frameworks for retrieval: text-based and content-based. The text-based approach can be traced back to the 1970s. In those systems, the images are manually annotated with text descriptors, which are then used by a database management system (DBMS) to perform image retrieval. This approach has two disadvantages. The first is that a considerable amount of human labour is required for manual annotation. The second is annotation inaccuracy due to the subjectivity of human perception [1,2]. To overcome these disadvantages of text-based retrieval, content-based image retrieval (CBIR) was introduced in the early 1980s. In CBIR, images are indexed and ranked by their visual content, such as texture, color and shape. A pioneering work was published by Chang in 1984, presenting a picture indexing and abstraction approach for pictorial database retrieval [3]. The pictorial database system consists of picture objects and picture relations; to construct picture indexes, operations are formulated to perform picture clustering and classification. In the past decades, many commercial products and experimental prototype systems have been developed, such as QBIC [4], Photobook [5], Virage [6], VisualSEEK [7], Netra [8] and SIMPLIcity [9]. Comprehensive surveys of CBIR can be found in Refs. [10,11]. In this paper, we exploit intensity homogeneity together with prominent color, orientation and shape features to extract significant regions and achieve better retrieval performance, and an Adaptive Neuro-Fuzzy Inference System is proposed for efficient image retrieval.
The remainder of this paper is organized as follows. Section II details the preprocessing of an image. Section III details feature extraction. Section IV details the Adaptive Neuro-Fuzzy Inference System. The conclusion is given in Section V.

LITERATURE SURVEY

Jaiswal and Kaul [27] concluded that content-based image retrieval is not a replacement for, but rather a complementary component to, text-based image retrieval; only the integration of the two can result in satisfactory retrieval performance. In their paper they reviewed the main components of a content-based image retrieval system, including image feature representation, indexing and system design, while highlighting past and current technical achievements.
Ivan Lee et al. [28] presented an analysis of a CBIR system with human-controlled and machine-controlled relevance feedback over different network topologies, including centralized, clustered and distributed content search. In their experiments on interactive relevance feedback using RBF, they observed higher retrieval precision when semi-supervision was introduced into the non-linear Gaussian-shaped RBF relevance feedback.
Verma and Mahajan [29] used the Canny and Sobel edge detection algorithms to extract shape features from images. After extracting the shape features, the classified images are indexed and labeled to make it easy to apply a retrieval algorithm that retrieves the relevant images from the database. According to their results, the images required by the user can be retrieved accurately from a huge image database using the Canny edge detection technique.

PROPOSED ALGORITHM

In CBIR (Content-Based Image Retrieval), visual features such as color, texture and shape are extracted to characterize images in the datasets. Each feature is represented using one or more feature descriptors. During retrieval, the features and descriptors of the query image are compared to those of the images in the database [14]. A feature is defined as a function of one or more measurements of an image, each of which specifies a quantifiable property of the image, and it is computed so that it quantifies some significant characteristic of the image (a minimal sketch of this descriptor comparison is given after the feature taxonomy below).
The various features currently employed are classified as follows:
• General features: Application-independent features such as texture, color and shape.
Based on the abstraction level, they are further divided into:
• Pixel-level features: Features calculated at each pixel, e.g. location and color.
• Local features: Features calculated over subdivisions of the image obtained by image segmentation or edge detection.
• Global features: Features calculated over the entire image or over a regular sub-area of the image.
• Domain-specific features: Application-dependent features such as fingerprints, human faces and conceptual features. These features are often a combination of low-level features tailored to a specific domain. More broadly, all features can be coarsely classified into low-level and high-level features: low-level features can be extracted directly from the original image, whereas high-level feature extraction must be based on the low-level features of an image [15].
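As an illustration of the descriptor comparison mentioned above, the following MATLAB sketch ranks database images by Euclidean distance to the query descriptor. The function name and the descriptor layout (one row per image) are illustrative assumptions, not part of the original implementation.

    % Minimal sketch: rank database images by distance to a query descriptor.
    % queryFeat is a 1-by-d descriptor, dbFeats an n-by-d matrix of descriptors.
    function ranked = rank_by_distance(queryFeat, dbFeats)
        diffs = dbFeats - repmat(queryFeat, size(dbFeats, 1), 1);  % per-image difference
        dists = sqrt(sum(diffs .^ 2, 2));                          % Euclidean distance
        [~, ranked] = sort(dists, 'ascend');                       % most similar first
    end

Any other distance measure (e.g. histogram intersection) can be substituted here without changing the rest of the pipeline.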

Color Extraction

The color feature is one of the most widely used visual features in image retrieval systems. Characterizing images by color features has several advantages:
• Robustness: The color histogram is invariant to rotation of the image about the view axis [16]. Color features are also relatively insensitive to changes in image and histogram resolution and to partial occlusion.
• Effectiveness: The relevance between the query image and the retrieved matching images is high.
• Implementation: Constructing the color histogram is a straightforward process: assign color values to bins at the chosen histogram resolution, scan the image, and build the histogram using the color components as indices.
• Computational simplicity: The histogram computation has O(XY) complexity for an image of size X × Y. The complexity of a single image match is linear, O(n), where n is the number of distinct colors, i.e. the resolution of the histogram.
• Storage: The color histogram is considerably smaller than the image itself, assuming color quantization, and therefore requires little storage space.
Normally, the color of an image is represented through a color model, and a range of color models exists to portray color information. A color model is specified in terms of a 3-D coordinate system and a subspace within that system, where each color is represented by a single point. The color models most commonly used for retrieval are RGB (red, green, blue), HSV (hue, saturation, value) and YCbCr (luminance and chrominance); the color content is therefore characterized by three channels. One representation of the color content of an image is the color histogram, which statistically denotes the joint probability of the intensities of the three color channels. Color is perceived by humans as a combination of three color stimuli, red, green and blue (RGB), which form a color space; this model has both a hardware-related and a physiological foundation. The RGB colors are called primary colors and are additive: by varying their combinations, other colors can be obtained. The RGB cube is the basis for the representation of the HSV space, with the diagonal of the RGB cube as the vertical axis in HSV. As saturation varies from 0.0 to 1.0, colors vary from unsaturated (gray) to saturated (no white component). Hue ranges from 0 to 360 degrees, starting at red and going through yellow, green, cyan, blue and magenta before returning to red. These color spaces are intuitively related to the RGB model, from which they can be derived through linear or non-linear transformations.
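A minimal MATLAB sketch of the per-channel color histogram extraction described above is given below; the file name, the 16-bin resolution and the simple concatenation of the three color spaces are our own assumptions.

    % Per-channel color histograms in the RGB, HSV and YCbCr color spaces.
    I    = imread('query.jpg');                      % placeholder file name
    nBin = 16;                                       % assumed histogram resolution
    spaces = {im2double(I), rgb2hsv(I), im2double(rgb2ycbcr(I))};
    colorFeat = [];
    for s = 1:numel(spaces)
        C = spaces{s};
        for ch = 1:3
            h = imhist(C(:, :, ch), nBin);           % histogram of one channel
            colorFeat = [colorFeat; h / sum(h)];     % normalize and concatenate
        end
    end
    % colorFeat is a (3 spaces x 3 channels x nBin)-by-1 color descriptor.

With 16 bins per channel the descriptor stays small (144 values), which is consistent with the storage and O(n) matching arguments above.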

Texture Extraction

Texture analysis aims at finding a unique way of representing the underlying characteristics of textures and expressing them in a simpler but distinctive form, so that they can be used for robust, accurate classification and segmentation of objects [17]. Although texture plays a considerable role in image analysis and pattern recognition, only a few architectures implement on-board textural feature extraction. Figure 1 illustrates the GLCM.
As shown in Figure 1, the Gray Level Co-Occurrence Matrix (GLCM) has proved to be a popular statistical method for extracting textural features from images. Based on the co-occurrence matrix, Haralick defined fourteen textural features, measured from the probability matrix, to extract the characteristics of the texture statistics of remote-sensing images. In this paper, the important features Correlation, Entropy, Angular Second Moment (energy), Contrast (inertia moment) and Inverse Difference Moment are selected for implementation using Xilinx ISE 13.4.
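The GLCM features above can be computed with the Image Processing Toolbox as sketched below; the four offsets (0°, 45°, 90°, 135°), the 8 gray levels and the averaging over directions are our own choices for illustration.

    % GLCM texture features (Energy = angular second moment,
    % Homogeneity = inverse difference moment, Contrast = inertia moment).
    G    = rgb2gray(imread('query.jpg'));            % placeholder file name
    offs = [0 1; -1 1; -1 0; -1 -1];                 % 0, 45, 90 and 135 degrees
    glcm = graycomatrix(G, 'Offset', offs, 'NumLevels', 8, 'Symmetric', true);
    stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
    texFeat = [mean(stats.Contrast), mean(stats.Correlation), ...
               mean(stats.Energy), mean(stats.Homogeneity), entropy(G)];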

Shape Extraction

Some images have weak edges due to the resemblance between background and foreground. For perfect and complete retrieval of an image based on the shape feature, we enhance the image edges. Shape features are expressed with the pyramid histogram of gradient orientations (PHOG) [18]. PHOG captures both the global and the local shape of an image: the global shape is captured by the distribution over edge orientations in the whole image, while the local shape is captured by the distribution over edge orientations within subregions at multiple resolutions [19]. The edges are therefore very significant in the process of extracting the shape feature.
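A rough single-level sketch of the edge-orientation histogram underlying PHOG is shown below (the full descriptor repeats this over a spatial pyramid of subregions); the 8-bin quantization and the use of the built-in Canny and Sobel operators are assumptions of this sketch.

    % Orientation histogram over Canny edge pixels (one pyramid level only).
    G  = rgb2gray(imread('query.jpg'));              % placeholder file name
    E  = edge(G, 'canny');                           % binary edge map
    [Gx, Gy] = imgradientxy(G, 'sobel');             % gradient components
    theta = atan2(Gy, Gx);                           % orientation in [-pi, pi]
    binEdges  = linspace(-pi, pi, 9);                % 8 orientation bins
    shapeFeat = histc(theta(E), binEdges);           % counts per bin (9th bin: theta == pi)
    shapeFeat(8) = shapeFeat(8) + shapeFeat(9);      % fold the theta == pi bin into the last bin
    shapeFeat = shapeFeat(1:8) / max(sum(shapeFeat(1:8)), 1);   % normalized 8-bin histogram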
Shape feature extraction plays a key role in the following categories of applications:
• Shape retrieval: Searching a large database for all shapes similar to a query shape. Generally, all shapes within a given distance from the query are returned.
• Shape recognition and classification: Determining whether a given shape matches a model sufficiently well, or which representative class is the most similar.
• Shape alignment and registration: Transforming or translating one shape so that it best matches another shape, in whole or in part [20].
• Shape approximation and simplification: Constructing a shape with fewer elements (points, segments, triangles, etc.) that is still similar to the original.
Shape descriptors should meet the following requirements:
• The descriptors should represent the content of the information items.
• The descriptors should be stored compactly; the size of a descriptor vector must not be too large [21-23].
• The descriptors should be computationally undemanding, otherwise the execution time would be too long [24,25].

Edge Detection

To obtain the contour of an image, we first convert the image to gray scale; the edges are then obtained by means of the Canny edge detector.
The first step of Canny edge detection is to filter out any noise in the original image before trying to locate and detect any edges. The Sobel operator uses a pair of 3×3 convolution masks [19], one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows). Figure 2 illustrates the Canny edge detection filter. The magnitude, or edge strength, of the gradient is then approximated using the formula:
|G| = |Gx| + |Gy|
The formula for finding the edge direction is just:
theta = arctan(Gy / Gx)
The two masks used for Gx and Gy are the standard 3×3 Sobel convolution kernels.
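The gradient approximation above and the final edge map can be reproduced with the short MATLAB sketch below; the explicit Sobel masks and the use of MATLAB's default Canny thresholds are assumptions of this sketch.

    % Sobel gradient magnitude/direction and the built-in Canny edge detector.
    G  = rgb2gray(imread('query.jpg'));              % placeholder file name
    Gd = double(G);
    sx = [-1 0 1; -2 0 2; -1 0 1];                   % Sobel mask, x-direction (columns)
    sy = sx';                                        % Sobel mask, y-direction (rows)
    Gx = conv2(Gd, sx, 'same');
    Gy = conv2(Gd, sy, 'same');
    mag   = abs(Gx) + abs(Gy);                       % |G| = |Gx| + |Gy|
    theta = atan2(Gy, Gx);                           % theta = arctan(Gy / Gx)
    E = edge(G, 'canny');                            % smoothing and thresholding handled internally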

ANFIS

The Adaptive Neuro-Fuzzy Inference System (ANFIS) is a type of neural network based on the Takagi–Sugeno fuzzy inference system [13]. It integrates fuzzy logic and neural network principles, and hence has the potential to capture the benefits of both in a single framework. The system corresponds to a set of fuzzy IF–THEN rules with the learning capability to approximate nonlinear functions [26]. Hence, ANFIS is considered a universal estimator [27].
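A minimal training and classification sketch with the MATLAB Fuzzy Logic Toolbox is given below, assuming the R2013a-era anfis/evalfis argument order; trainFeats, trainLabels and queryFeat are placeholders, and the descriptor is assumed to be reduced to a few dimensions, since grid partitioning generates 3^d rules for d inputs.

    % ANFIS training on labeled image descriptors (Sugeno-type FIS).
    % trainFeats: n-by-d feature matrix, trainLabels: n-by-1 class indices.
    trainData = [trainFeats, trainLabels];           % last column is the target
    fis = anfis(trainData, 3, 40);                   % 3 MFs per input, 40 training epochs
    % Classify a query descriptor by rounding the crisp Sugeno output.
    predicted = round(evalfis(queryFeat, fis));      % newer releases use evalfis(fis, queryFeat)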

EXPERIMENTAL RESULT

The proposed system is implemented using the toolboxes available in MATLAB 8.1.0.604 (R2013a). MATLAB functions are programs (or routines) that accept input arguments and return output arguments. The experiments are conducted on an Intel Pentium 5.0 GHz processor with 2 GB memory. The sample images are of different sizes, so the images are resized to a uniform size of 256 × 256. Preprocessing of the image is performed using histogram equalization, which is computed from the cumulative distribution function of the image pixel values. Features are then extracted, and classification is performed using those features. Figure 3 and Figure 4 show the query image and the histogram of the query image.
After the histogram analysis, the image is retrieved using the neuro-fuzzy classification. Figure 5 shows the retrieved images.
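The preprocessing described above can be summarized in the following sketch; the file name is a placeholder and the grayscale conversion before equalization is an assumption.

    % Resize to 256 x 256, equalize the histogram and compute the histogram.
    I  = imread('query.jpg');                        % placeholder file name
    I  = imresize(I, [256 256]);                     % uniform image size
    G  = rgb2gray(I);
    Ge = histeq(G);                                  % equalization via the CDF of pixel values
    h  = imhist(Ge);                                 % histogram of the equalized query image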

CONCLUSION AND FUTURE WORK

This paper proposed an approach to retrieve a set of images from a database based on their features. We proposed an Adaptive Neuro-Fuzzy Inference System (ANFIS), which classifies the query image based on features such as color, shape and texture. Color extraction is based on the RGB (red, green, blue), HSV (hue, saturation, value) and YCbCr (luminance and chrominance) color spaces. Texture is extracted with the Gray Level Co-Occurrence Matrix (GLCM), a popular statistical method. Shape is determined with the Canny edge detector. We compared our system with state-of-the-art CBIR systems, and the empirical measures demonstrate the effectiveness of the proposed approach.


References