ISSN Online: 2320-9801, Print: 2320-9798
Rahul R. Ade, Prof. Ashish B. Kharate
International Journal of Innovative Research in Computer and Communication Engineering
Image-related communications form an increasingly large part of modern communications, creating the need for efficient and effective compression. Image compression is important for effective storage and transmission of images and is concerned with minimizing the number of bits required to represent an image. The algorithm used in this paper is the Linde-Buzo-Gray (LBG) algorithm, which is based on minimization of the squared-error distortion measure; LBG proposed vector quantization (VQ) schemes for grayscale image compression. The basic requirement of this work is codebook generation. A good codebook is essential because the quality of the reconstructed image depends heavily on the codewords in that codebook. The generated codebook is stored in a text file for VHDL file handling, or as a data array in the VHDL code. The VHDL file-handling concept is used for quantization: the image text file is converted block by block into sets of pixel values, and the block array is split using the Pairwise Nearest Neighbor (PNN) principle. Compression performance is measured using the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR), and the synthesis and timing results are analyzed to evaluate the design parameters.
Keywords
Image compression, LBG algorithm, Vector Quantization, Codebook
I. INTRODUCTION
The purpose of image compression is to reduce the amount of data required to represent a digital image. This reduction process can be understood as removing redundant data. Over the years, the need for image compression has grown steadily. An ever-expanding number of applications depend on the efficient manipulation, storage, and transmission of binary, gray-scale, or color images. Image compression is utilized for image transmission and storage.
Based on the requirements of reconstruction, image compression can be divided into two categories: lossless compression and lossy compression. The purpose of lossless image compression is to represent an image signal with the smallest possible number of bits without any information loss. The number of bits representing the signal is typically expressed as an average bit rate per pixel for still images and as an average number of bits per second for video. The purpose of lossy compression is to minimize the number of bits representing the image signal while allowing some information loss. In this way, a much greater reduction in bit rate can be obtained than with lossless compression. Because lossy compression techniques discard some information, data compressed by them cannot be reconstructed exactly. In many applications, however, exact reconstruction is not required, so lossy compression is widely used because of the high compression ratio it can achieve. Both scalar and vector quantization are lossy compression techniques. Scalar quantization processes the samples of an input signal one at a time, whereas vector quantization processes them in groups (vectors). Shannon's information theory shows that encoding sequences of input samples gives a better result than encoding the samples one by one; therefore, vector quantization can achieve a better quantization result than scalar quantization. In vector quantization, the most important problem is designing an efficient codebook, and many algorithms have been published on how to generate one. The LBG algorithm is a well-known algorithm for designing the codebook and is the starting point for most work on vector quantization. Although many clustering algorithms have been designed for generating the codebook, there is no general approach available to generate a universal codebook. In traditional vector quantization, a set of training images is used to generate the codebook, and this generated codebook is used directly to perform the vector quantization. If some pre-processing is performed on this generated codebook, the vector quantization can be made more efficient.
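As an illustration of this difference, the following MATLAB sketch (not part of the original design; the two-level quantizer, the sample values, and the codevectors are hypothetical) quantizes the same short signal once sample by sample and once in two-sample vectors:

```matlab
% Hypothetical 1-D signal, grouped into 2-sample vectors for VQ
x = [12 14 200 202 13 15 201 199];

% Scalar quantization: each sample is mapped on its own to one of two levels
levels = [14 200];
[~, idxS] = min(abs(x(:) - levels), [], 2);    % nearest level per sample
xScalar   = levels(idxS);                      % per-sample reconstruction

% Vector quantization: 2-sample blocks are mapped to 2-D codevectors
X         = reshape(x, 2, []).';               % each row is one input vector
codebook  = [13 14; 200 201];                  % two hypothetical codevectors
d         = sum((permute(X,[1 3 2]) - permute(codebook,[3 1 2])).^2, 3);
[~, idxV] = min(d, [], 2);                     % index of best-matching codevector
xVQ       = codebook(idxV, :);                 % block-wise reconstruction
```

The per-block index `idxV` is all the encoder needs to transmit, which is the essence of the bit-rate saving discussed above.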
The codeword choice is based on the best similarity between the image blocks represented by a coded vector and the image blocks represented by codewords from the dictionary. The codebook is transmitted together with the coded data. The advantage of vector quantization is a simple receiver structure consisting of a look-up table. A good codebook is essential because the reconstructed image depends heavily on the codewords in that codebook. The generated codebook is stored in a text file for VHDL file handling, or as a data array in the VHDL code.
The algorithm for the design of the VQ codebook is referred to as the LBG algorithm; it is based on minimization of the squared-error distortion measure. LBG proposed VQ schemes for grayscale image compression, and the method has proven to be a powerful tool for both speech and digital image compression.
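A minimal MATLAB sketch of the LBG iteration is given below for illustration; the random initialization, the codebook size K, and the stopping threshold tol are assumptions and not values specified in the paper. Each pass assigns every training vector to its nearest codeword under the squared-error measure and then replaces each codeword by the centroid of its cluster:

```matlab
function codebook = lbg(train, K, tol)
% train : N-by-D matrix, one training vector per row
% K     : number of codewords, tol : relative distortion improvement threshold
codebook = train(randperm(size(train,1), K), :);    % random initial codebook
prevD = inf;
while true
    % squared-error distance from every training vector to every codeword
    d = sum((permute(train,[1 3 2]) - permute(codebook,[3 1 2])).^2, 3);
    [dmin, idx] = min(d, [], 2);                     % nearest-codeword assignment
    for k = 1:K                                      % centroid (codeword) update
        members = train(idx == k, :);
        if ~isempty(members)
            codebook(k,:) = mean(members, 1);
        end
    end
    D = mean(dmin);                                  % average distortion
    if (prevD - D) / D < tol, break; end             % stop when improvement is small
    prevD = D;
end
end
```

This is the standard LBG loop in its simplest form; practical implementations also handle empty cells and codebook splitting, which are omitted here for brevity.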
II. RELATED WORK
Vector quantization consists of three processes: codebook generation, encoding, and decoding. The first step is the codebook generation procedure. The image is stored as a two-dimensional array of integers, i.e. as a set of pixel values that provide the image information pixel by pixel. Each pixel is represented by a single byte (eight bits), and the 128 × 128 image is converted to binary form. In the pre-processing step, MATLAB code reads the input image as a one-dimensional vector in binary form, from which the codebook is generated.
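The pre-processing step described above can be sketched in MATLAB roughly as follows (illustrative only; the file name lena.png and the 4 × 4 block size are assumptions based on the experiment reported later):

```matlab
% Read the input image (assumed here to be an 8-bit grayscale image)
img = double(imread('lena.png'));      % pixel values 0..255

blk = 4;                               % 4 x 4 blocks, i.e. 16-element vectors
[r, c] = size(img);
vectors = zeros((r/blk)*(c/blk), blk*blk);
v = 0;
for i = 1:blk:r
    for j = 1:blk:c
        v = v + 1;
        block = img(i:i+blk-1, j:j+blk-1);
        vectors(v, :) = block(:).';    % one training vector per image block
    end
end
```

The matrix `vectors` is the training set passed to the LBG sketch shown earlier.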
The generated codebook is then used through the VHDL file-handling concept to perform the quantization; in this process the image text file is converted block by block.
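One possible way to export the generated codebook as a plain text file that VHDL file handling (textio) can later read is sketched below; the file name and the one-codeword-per-line layout are assumptions, not the exact format used in this work:

```matlab
% Write each codeword as one line of space-separated integer pixel values
fid = fopen('codebook.txt', 'w');      % hypothetical file name
for k = 1:size(codebook, 1)
    fprintf(fid, '%d ', round(codebook(k, :)));
    fprintf(fid, '\n');
end
fclose(fid);
```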
Fig. 1 Block diagram of the working principle.
The image to be encoded is segmented into a set of input vectors, from which a compressed image text file is obtained; the decoding process then recovers the data. The decoded data, a one-dimensional binary stream, is converted back into matrix form and displayed as an image. Compression performance is measured by the compression ratio and the peak signal-to-noise ratio.
III. VECTOR QUANTIZATION
Figure 2 shows the block diagram of vector quantization. In vector quantization, the input is first grouped into blocks or vectors, and all operations are applied to whole vectors. At both the encoder and decoder sides there is a set of vectors called the codebook; a vector in the codebook is called a codevector or codeword. Normally, the size of a codevector is the same as that of an input vector. A search engine in the encoder finds the codevector that best matches the input vector: the input vector is compared with each codevector in the codebook, and the best-matching codevector is the quantized value of that input vector. After finding the best-matching codevector, only the binary index of that codevector needs to be transmitted to the decoder. Since the same codebook is present at the decoder side, after receiving the index the decoder can recover the codevector, which is sent out as the reconstructed input vector.
Vector quantization is performed in three steps: (1) codebook design, (2) encoding, and (3) decoding. In the LBG algorithm, an initial codebook is chosen at random from the training vectors. The codebook and the index table together form the compressed representation of the input image. In the encoding process, any vector corresponding to a block of the image under consideration is replaced by the index of the most appropriate representative codeword. In the decoding process, the codebook, which is also available at the receiver end, is used to translate the index back to its corresponding codeword. Figure 2 shows a schematic diagram of the VQ encoding and decoding process.
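These encoding and decoding steps can be sketched in MATLAB as follows (illustrative; `vectors`, `codebook`, `blk`, `r`, and `c` refer to the earlier sketches):

```matlab
% Encoding: replace each image block by the index of its nearest codeword
d = sum((permute(vectors,[1 3 2]) - permute(codebook,[3 1 2])).^2, 3);
[~, indexTable] = min(d, [], 2);       % index table = compressed image

% Decoding: a simple look-up in the same codebook at the receiver
reconVectors = codebook(indexTable, :);

% Reassemble the 4 x 4 blocks into the reconstructed image
recon = zeros(r, c);
v = 0;
for i = 1:blk:r
    for j = 1:blk:c
        v = v + 1;
        recon(i:i+blk-1, j:j+blk-1) = reshape(reconVectors(v, :), blk, blk);
    end
end
```

The asymmetry is visible here: the encoder performs a full search over the codebook, while the decoder is a single indexing operation.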
Fig. 2 Vector quantization scheme.
From Figure 2 we see that there is a search engine in the encoder for finding the best-matching codevector of an input vector, but the decoder has no such procedure, because the only thing the decoder needs to do is reconstruct the codevector from the index. Although the encoder may take considerable computation time to find the best match, the time for the decoder to reconstruct the codevector is much smaller. This feature makes vector quantization very suitable for applications in which the resources available to the decoder are much more limited than those available to the encoder, a situation that occurs in many multimedia applications: the information is encoded and sent to the user, who decodes the file to retrieve the information. Normally, the end user wants the decoding time to be as short as possible and does not care how long the encoding takes.
IV. OBJECTIVE
We describe and analyze the LBG algorithm based implementation below, with the following objectives:
• To develop an image compression algorithm.
• To design hardware-descriptive code, i.e. VHDL, for LBG image compression.
• To verify and simulate the operation of image compression and decompression using MATLAB and ModelSim.
• To analyze the design parameters of the proposed work.
V. LITERATURE SURVEY
Day by day, the use of multimedia, images, and other picture formats is rapidly increasing in a variety of applications. The technique of obtaining a compact representation of an image while maintaining all the necessary information without much data loss is referred to as image compression [1]. Image compression maps an original image into a bit stream suitable for communication over, or storage in, a digital medium; the number of bits required to represent the coded image should be smaller than that required for the original image. VQ is a very straightforward and popular technique for data compression. Compression is achieved by forming vectors from a training data sequence, grouping similar vectors into clusters, and assigning each cluster a single representative vector. The nearest cluster representative is referenced by a simple cluster index. The list of all cluster representatives forms the codebook, and each representative is known as a codeword [2].
Compressing image data using Vector Quantization (VQ) involves comparing training vectors with the codebook; the result is an index of the position with minimum distortion. Using a randomly generated codebook reduces the image quality [1]-[3]. One of the key issues in vector quantization is how to generate a good codebook such that the distortion between the original image and the reconstructed image is minimal. Image pixels are highly correlated, and VQ performs better when the components of its input vectors are more highly correlated; therefore, vectors in the image domain are formed as compact collections of adjacent pixels. The attainable quality and efficiency depend on the block size [4].
The VQ-based image compression technique has three major steps: codebook design, encoding, and decoding. The VQ technique depends on the constructed codebook, and a widely used technique for VQ codebook design is the LBG algorithm [5]. For VQ, a fast LBG codebook is generated. The LBG algorithm is an iterative procedure: starting with an initial segmentation of the training set, the codebook is updated with the centroids of the training vectors in each cluster, and the average distortion is tracked during the codebook design procedure. This method provides a good way to reduce the computation cost of the codebook training process [6], [7].
The codeword choice is based on the best similarity between the image blocks represented by a coded vector and the image blocks represented by codewords from the dictionary. The codebook is transmitted together with the coded data. The advantage of vector quantization is a simple receiver structure consisting of a look-up table [8].
In the image domain, strategies for forming vectors are relatively simple. Image pixels that are closer together are highly correlated, and VQ performs better when the components of its input vectors are more highly correlated; therefore, vectors in the image domain are formed from collections of adjacent pixels [9]. An adaptive VQ technique for image compression has also been presented, in which the LBG algorithm consists of three phases: an initial phase, an adapted LBG phase, and a redundant generated codebook phase [10].
VI. APPLICATION
Image transmission applications include broadcast television; remote sensing via satellite, aircraft, radar, and sonar; teleconferencing; computer communication; and facsimile transmission. Image storage applications include educational and business documents, medical images, etc. Because of these wide applications, image compression is of great importance in digital image processing.
VII. CONCLUSION AND FUTURE WORK
The proposed algorithm reduces the complexity of a transferred image without sacrificing performance. Image compression addresses the problem of reducing the amount of data required to represent a digital image without significant loss of information. The quality of the image depends on whether a lossless or lossy compression technique is used, and the amount of compression depends on the type of compression scheme; more compression is achieved with lossy compression than with lossless compression. The LBG algorithm is used here for image compression. The algorithm requires codebook generation, where the codebook is the collection of codewords, and it belongs to the family of vector quantization methods for data compression. LBG removes redundancy in the data and makes reconstruction possible. The compressed image is processed in VHDL simulation. Compression performance is measured by the compression ratio and the peak signal-to-noise ratio, and the design parameters of the proposed work are analyzed.
One area of future research is to improve the file-size reduction, the most significant benefit of image compression, and to use a wavelet-transform algorithm for real-time purposes. The focus on the LBG algorithm is motivated by its being an easy, rapid, efficient, and simple algorithm that saves computation cost and time. The developed vector quantization technique also promises less storage space, less transfer time, and shorter image viewing and loading times, so that faster file transfer becomes possible.
VIII. SIMULATION RESULTS
The vector quantization LBG image compression experiment was performed on the Lena image. In the pre-processing stage, the image text file is read as a one-dimensional vector, and the image is converted into 4 × 4 blocks for codebook generation. The generated codebook is stored in a file used by the VHDL design. The encoding and decoding processes perform image compression and decompression, and in post-processing MATLAB is used to obtain the reconstructed image. The design parameters are then analyzed using Xilinx synthesis.
MSE & PSNR
• MSE = 1.3671
• PSNR = 46.7727 dB
• Elapsed time = 2.226960 s
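These figures follow the usual definitions of MSE and PSNR for 8-bit images; a short MATLAB sketch (assuming `img`, `recon`, `indexTable`, and `codebook` from the earlier sketches) is:

```matlab
% MSE and PSNR between the original and reconstructed images
mseVal  = mean((img(:) - recon(:)).^2);
psnrVal = 10 * log10(255^2 / mseVal);            % PSNR in dB for 8-bit images
bitsIdx = numel(indexTable) * log2(size(codebook,1));
crVal   = (numel(img) * 8) / bitsIdx;            % compression ratio (codebook overhead ignored)
fprintf('MSE = %.4f, PSNR = %.4f dB, CR = %.2f\n', mseVal, psnrVal, crVal);
```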
SYNTHESIS RESULT OF ENCODER
TIMING SUMMARY OF ENCODER
• Speed grade: -3
• Minimum period: no path found
• Minimum input arrival time before clock: 9.146 ns
• Minimum output required time after clock: 3.597 ns
• Maximum combinational path delay: no path found
SYNTHESIS RESULT OF DECODER
TIMING SUMMARY OF DECODER
• Speed grade: -3
• Minimum period: 1.941 ns (maximum frequency: 515.159 MHz)
• Minimum input arrival time before clock: 3.078 ns
• Minimum output required time after clock: 3.634 ns
• Maximum combinational path delay: no path found
References
|