
Image Compression with Variable Threshold and Adaptive Block Size

D Gowri Sankar Reddy1 and P Janardhana Reddy2
  1. Assistant Professor, Department of ECE, S V University College of Engineering, Tirupati, Andhra Pradesh, India
  2. PG Student [Communication Systems], Department of ECE, S V University College of Engineering, Tirupati, Andhra Pradesh, India

Abstract

Network technologies and media services provide ubiquitous conveniences for individuals and organizations to gather and process images in multimedia networks. Image compression is a major challenge for storage and bandwidth requirements: a good compression strategy achieves a high compression rate without greatly reducing image quality. In this paper we propose a simple and effective method that filters the image as a preprocessing step and uses an adaptive block size in block truncation coding at the encoding stage. The results for finding the optimal block size are appealing when compared to the JPEG 2000 standard.

Keywords

Storage, Bandwidth Requirements, Image Filtering, Adaptive Block Size, JPEG 2000.

INTRODUCTION

Image compression is a necessity for imaging techniques to ensure good quality of service. It can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. The challenge in digital image compression is that, although a high compression rate is desired, the usability of the reconstructed images depends on important characteristics of the original images. Image compression reduces the data required to represent an image, with close similarity to the original, by removing redundant information. Compression encodes a pixel array into a statistically uncorrelated dataset for storage and transmission, which can be decoded to reconstruct the original image exactly (lossless compression) or an approximation of it (lossy compression). Digital images generally have three types of redundancy:
1) coding redundancy,
2) inter-pixel redundancy, and
3) psychovisual redundancy.
Compression algorithms exploit these redundancies to compress the image. Coding redundancy refers to the use of variable-length codes matched to the statistics of the original image. Inter-pixel redundancy, also called spatial redundancy, exploits the fact that in large regions the pixel values are almost the same. Psychovisual redundancy is based on human perception of the image information.
To achieve good compression ratios, recent works have proposed different methods such as adaptive-regressive modeling interpolation and adaptive down-sampling. These methods reach good compression ratios by down-sampling the entire image, which reduces image quality and creates blocking artifacts. Other approaches include Feng Wu's edge-based inpainting method, Xin Li's edge-directed interpolation method, and Loganathan and Kumaraswamy's improved active-contour medical image compression technique with a lossless region of interest; following these methods becomes complex when the image has more than one region of interest. In Jiaji Wu, Yan Xing, Shi and Jiao's down-sampling and overlapped-transform method, a higher compression ratio is achieved by down-sampling the smooth regions; information may be lost if the region of interest coincides with a highly smooth region, and ringing and blocking artifacts can appear.
In this paper we propose a new method to reduce the complexity and the information loss in the region of interest of the image. The proposed method addresses all three types of redundancy: coding redundancy is exploited through the block truncation coding method, and in the non-region of interest we discard pixels on which the eye does not concentrate, i.e., based on human perception of the image, we reduce the number of pixels in the non-region of interest.
The block diagram below explains how to achieve the optimal point between the compression ratio and the PSNR for a given image. In the proposed method, the first step filters the image pixels according to the requirement.
[Figure: Block diagram of the proposed compression scheme]
In the next step, a wavelet transform is applied to the filtered image; the transformed data are then scalar quantized, and block truncation coding is applied to encode the quantized data. At the receiver side, the compressed data are received and the original image is reconstructed.
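To make this pipeline concrete, below is a minimal sketch of the encode/decode chain in Python. It assumes a single-level Haar wavelet and a uniform quantizer with step 16; the paper does not specify the wavelet family, the quantizer, or its step, so all three are illustrative choices, and the BTC encoding of the quantized bands is covered in the next section.

import numpy as np
import pywt

def encode_pipeline(image, step=16):
    # Pre-processing (range filtering) is assumed to have been applied already.
    # Single-level 2-D wavelet transform of a grayscale image.
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), 'haar')
    # Uniform scalar quantization of every sub-band (step is an assumption).
    return [np.round(c / step).astype(np.int32) for c in (cA, cH, cV, cD)]

def decode_pipeline(qbands, step=16):
    # Dequantize each sub-band and invert the wavelet transform.
    cA, cH, cV, cD = [q.astype(np.float64) * step for q in qbands]
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')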

PREPROCESSING

The pre-processing step discards redundant pixels in the non-region of interest. A fixed threshold may lose information in the region of interest, on edges, and in other parts of the image, and this loss of information can create blocking artifacts.
To reduce these problems, the proposed step first finds the range of pixel values in the required image and, based on that range, discards the remaining pixels; for example, consider a 5x5 image.
In this image the region of interest is the highlighted portion. First we keep all pixels in the region of interest without any loss of information; then, based on human perception, the remaining pixels are selected randomly.
[Figure: 5x5 example image with the region of interest highlighted]
As explained above, we first keep all pixels in the region of interest and then randomly select pixels in the non-region of interest; this random selection continues until the quality is satisfactory. In our experiments, the satisfactory ranges were, for statue.jpg, red (R) [19 to 255], green (G) [17 to 255], blue (B) [0 to 242]; for pisa.jpg, R [25 to 255], G [11 to 252], B [5 to 230]; for choppers.jpg, R [6 to 249], G [9 to 255], B [26 to 255]; and for child.jpg, R [25 to 255], G [35 to 251], B [31 to 250].
[Figure: Test images after range-based pre-processing]
If only pixels in these ranges are allowed, there is no information loss in the region of interest while the number of pixels is reduced; the resulting images look almost the same as the originals, but with fewer pixels.
This pre-processing method is useful for both lossy and lossless image compression, and it further reduces the complexity of compressing images that have multiple regions of interest.
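One possible reading of this pre-processing step, as a Python sketch: keep every pixel inside the region of interest, and in the non-region of interest keep only in-range pixels, thinned at random. The ROI mask, the keep probability, and the fill value for discarded pixels (the channel mean) are assumptions not spelled out in the paper; the ranges in the usage line are the ones reported above for statue.jpg.

import numpy as np

def range_filter(rgb, ranges, roi_mask, keep_prob=0.5, seed=0):
    # rgb: HxWx3 uint8 image; ranges: one (lo, hi) pair per channel;
    # roi_mask: HxW boolean mask of the region of interest (an assumption).
    rng = np.random.default_rng(seed)
    out = rgb.copy()
    for c, (lo, hi) in enumerate(ranges):
        ch = rgb[..., c]
        in_range = (ch >= lo) & (ch <= hi)
        # ROI pixels are always kept; non-ROI in-range pixels are kept at random.
        keep = roi_mask | (in_range & (rng.random(ch.shape) < keep_prob))
        # Discarded pixels are filled with the channel mean (an assumption).
        out[..., c] = np.where(keep, ch, ch.mean().astype(ch.dtype))
    return out

# Usage with the ranges reported for statue.jpg:
# filtered = range_filter(img, [(19, 255), (17, 255), (0, 242)], roi)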

BLOCK TRUNCATION CODING (BTC) WITH ADAPTIVE BLOCK SIZE

Block truncation coding (BTC) is a simple and fast compression technique with low computational complexity. It is a one-bit adaptive moment-preserving quantizer that preserves certain statistical moments of small blocks of the input image in the quantized output. The original BTC algorithm preserves the sample mean and the standard deviation, which are coded as statistical overhead for each block. BTC has gained popularity due to its practical usefulness.
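For reference, a minimal sketch of this one-bit moment-preserving quantizer for a single block: pixels are thresholded at the block mean, and the two reconstruction levels are chosen so that the block mean and standard deviation are preserved. The handling of flat blocks is an illustrative choice.

import numpy as np

def btc_encode_block(block):
    m = block.size
    mean, std = block.mean(), block.std()
    bitmap = block >= mean                     # one bit per pixel
    q = int(bitmap.sum())                      # pixels at or above the mean
    if q in (0, m):                            # flat block: one level suffices
        return bitmap, mean, mean
    a = mean - std * np.sqrt(q / (m - q))      # low reconstruction level
    b = mean + std * np.sqrt((m - q) / q)      # high reconstruction level
    return bitmap, a, b

def btc_decode_block(bitmap, a, b):
    # Each pixel is reconstructed as one of the two preserved levels.
    return np.where(bitmap, b, a)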
In BTC, increasing the block size increases the compression ratio, but it also leads to loss of information. In our method the block size is varied to obtain a better compression ratio without much reduction in PSNR, and an image-dependent block size is thereby obtained.
The selection of the optimal block size uses two parameters. The first is the difference between the maximum and minimum pixel values in a block; in this paper this threshold is set to 30.
The second parameter is a multiplication factor applied to the number of blocks whose difference between maximum and minimum values exceeds the set value; in this paper the multiplication factor (n) is 0.1, i.e., 10% of the blocks. If the number of blocks exceeding the difference threshold is greater than 10% of the blocks, the method does not increase the block size; otherwise it increases the block size.
In the flow chart below:
diff = difference between the maximum and minimum values of a block,
k = number of blocks whose difference between maximum and minimum values exceeds the set value,
n = multiplication factor, used for the quality requirement,
[m1 n1] = size of the image,
b = block size.
[Figure: Flow chart of the optimal block size selection]
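A Python sketch of the block-size search in the flow chart follows. The doubling schedule (4, 8, 16, ...) and the upper bound of 32 are assumptions, while the difference threshold of 30 and the multiplication factor n = 0.1 are the values used in this paper.

import numpy as np

def optimal_block_size(image, diff_thresh=30, n=0.1, b=4, b_max=32):
    m1, n1 = image.shape
    while b < b_max:
        # Tile the image into b x b blocks (ragged edges ignored for brevity).
        h, w = (m1 // b) * b, (n1 // b) * b
        blocks = image[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
        diffs = blocks.max(axis=(2, 3)) - blocks.min(axis=(2, 3))
        k = int((diffs > diff_thresh).sum())   # blocks exceeding the set value
        if k > n * diffs.size:                 # more than 10% detailed blocks:
            return b                           # stop, keep the current size
        b *= 2                                 # otherwise try a larger block
    return b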

RESULTS

Our method is evaluated on four images: "statue", "pisa", "choppers", and "child". For this we employ the compression ratio (CR) and the peak signal-to-noise ratio (PSNR). The table below shows the results of standard JPEG 2000 for different block sizes.
[Table: Results of standard JPEG 2000 for different block sizes]
B.S = block size,
CR = compression ratio,
PSNR = peak signal-to-noise ratio.
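For completeness, the two metrics used in the tables, written out under common conventions: CR taken as original size over compressed size, and PSNR assuming 8-bit images with a peak value of 255.

import numpy as np

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)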
The table below shows the results of the proposed method, which uses the pre-processing step and an adaptive block size at the encoding stage.
[Table: Results of the proposed method with optimal block size]
O.B.S = optimal block size,
CR = compression ratio,
PSNR = peak signal-to-noise ratio.
Graphs are an easy way to compare the results of any experiment; the graphs below show the comparison between standard JPEG 2000 and the proposed method.
[Figure: Comparison of standard JPEG 2000 and the proposed method]
In the above graphs, the value inside brackets denotes the block size.
[Figure: Further comparison of standard JPEG 2000 and the proposed method]

CONCLUSIONS

A new image compression scheme based on a variable threshold and an adaptive block size is proposed, which provides sufficiently high compression ratios with no appreciable degradation of image quality. The effectiveness of this approach has been demonstrated using a set of real images. To show the performance of the proposed method, a comparison between the proposed technique and standard JPEG 2000 is presented. From the experimental results it is evident that the proposed compression technique performs better: the variable-threshold, adaptive-block-size compression technique maintains better image quality with less complexity.

References

  1. R. Loganathan and Y. S. Kumaraswamy, "An Improved Active Contour Medical Image Compression Technique with Lossless Region of Interest," IEEE, 2011.
  2. D. Chai and A. Bouzerdoum, "JPEG2000 image compression: an overview," Proc. Seventh Australian and New Zealand Intelligent Information Systems Conference, pp. 237-242, Nov. 18-21, 2001.
  3. J. Petrova, "Edge detection in medical images using the Wavelet transform," Telemedicine, Jul. 2011.
  4. P. Franti, O. Nevalainen, and T. Kaukoranta, "Compression of Digital Images by Block Truncation Coding: A Survey," The Computer Journal, Vol. 37, No. 4, 1994.
  5. C. C. Tsou, S. H. Wu, and Y. C. Hu, "Fast Pixel Grouping Technique for Block Truncation Coding," 2005 Workshop on Consumer Electronics and Signal Processing (WCEsp 2005), Yunlin, Nov. 17-18, 2005.
  6. M. D. Lema and O. R. Mitchell, "Absolute Moment Block Truncation Coding and its Application to Color Images," IEEE Trans. Commun., Vol. COM-32, No. 10, pp. 1148-1157, Oct. 1984.
  7. K. Somasundaram and I. Kaspar Raj, "Low Computational Image Compression Scheme based on Absolute Moment Block Truncation Coding," Vol. 13, May 2006.
  8. W. J. Chen and S. C. Tai, "Postprocessing Techniques for Absolute Moment Block Truncation Coding," Proc. of ICS'98, Workshop on Image Processing and Character Recognition, pp. 125-130, Dec. 17-19, 1998, Tainan, Taiwan.
  9. E. Candès, "Compressive sampling," in Proc. Int. Congr. Mathematics, Madrid, Spain, 2006, pp. 1433-1452.
  10. B. Zeng and A. N. Venetsanopoulos, "A JPEG-based interpolative image coding scheme," in Proc. IEEE ICASSP, 1993, vol. 5, pp. 393-396.
  11. D. Taubman and M. Marcellin, JPEG2000: Image Compression Fundamentals, Standards and Practice. Norwell, MA: Kluwer.