

Research Article | Open Access

Bridging the Semantic Gap in Content Based Image Retrieval

Paul C. Kuo

Abstract

Image content on the Web is increasing exponentially, and with it the need for effective image retrieval systems. Historically, there have been two methodologies: text-based and content-based. In the text-based approach, query systems retrieve images that have been manually annotated with keywords. This approach can be problematic: it is labor-intensive and may be biased by the subjectivity of the observer. Content-based image retrieval (CBIR) searches and retrieves digital images in large databases by analyzing features derived from the images themselves. CBIR systems typically define features from color, texture, shape, and their combinations. Similarity measures that originated in the preceding text-based era are commonly used. However, CBIR struggles to bridge the semantic gap, defined as the divide between the high-level concepts of human perception and the low-level features and techniques used in implementation. In this paper, CBIR is reviewed in a broad context. Newer approaches to feature generation and similarity measures are detailed, with representative studies addressing their efficacy. Color-texture moments, columns-of-interest, harmony-symmetry-geometry, SIFT (Scale Invariant Feature Transform), and SURF (Speeded Up Robust Features) are presented as alternative feature generation modalities. Graph matching, Earth Mover's Distance, and relevance feedback are discussed within the realm of similarity measures. We conclude that while CBIR is evolving and continues to slowly close the semantic gap, addressing the complexity of human perception remains a challenge.
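To make the notions of low-level features and similarity measures concrete, the sketch below is a minimal illustration, not the method of any study reviewed here: it assumes OpenCV and SciPy are available, extracts a simple intensity histogram from each image, and ranks a small database against a query using the one-dimensional Earth Mover's (Wasserstein) distance. The file paths and helper names are hypothetical.

```python
# Minimal CBIR sketch (illustrative only): low-level color/intensity
# features plus a distance-based ranking of a small image database.
import numpy as np
import cv2                                   # OpenCV, assumed installed
from scipy.stats import wasserstein_distance # 1-D Earth Mover's Distance

def intensity_histogram(path, bins=32):
    """Return a normalized grayscale histogram as a 1-D feature vector."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [bins], [0, 256]).flatten()
    return hist / hist.sum()

def rank_by_similarity(query_path, database_paths):
    """Order database images by EMD to the query (smaller = more similar)."""
    q = intensity_histogram(query_path)
    positions = np.arange(len(q))  # histogram bin positions
    scores = [
        (p, wasserstein_distance(positions, positions,
                                 q, intensity_histogram(p)))
        for p in database_paths
    ]
    return sorted(scores, key=lambda s: s[1])

# Hypothetical usage:
# ranked = rank_by_similarity("query.jpg", ["db1.jpg", "db2.jpg", "db3.jpg"])
```

Such a histogram captures only low-level appearance; two semantically unrelated images can have nearly identical histograms, which is precisely the semantic gap the paper discusses.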

