ISSN: Online 2320-9801, Print 2320-9798


PIXEL MEMORY STORAGE REDUCTION

Salil Bhalla1, Kulwinder Singh Monga2, Rahul Malhotra3
  1. Student, Bhai Maha Singh College of Engineering, Muktsar, Punjab, India
  2. Assistant Professor, Bhai Maha Singh College of Engineering, Muktsar, Punjab, India
  3. Director-Principal, Adesh Institute of Technology, Chandigarh, India

Published in the International Journal of Innovative Research in Computer and Communication Engineering.

Abstract

This paper introduces an advanced algorithm for online image memory-size compression. The process is divided into three parts: 1) image capture, 2) pixel-level image compression, and 3) image mapping and storage. A block-based compression algorithm is proposed. First, the brightest pixel value in a block is chosen as the reference, and all other pixel values in the block are compared against it. A comparator circuit then compares the differential values of successive pixels in accordance with the proposed mapping. Finally, a selected number of differential values are quantized using intermediate-precision bits. The bits required for on-pixel memory storage are thereby reduced, which enables a smaller silicon area for a system-on-chip. Mentor Graphics' Design Architect tool is used to synthesize the algorithm.

Keywords

Compressive sensing, digital pixel sensors (DPSs), block-based compression algorithm

INTRODUCTION

The history of image processing technology in electronics dates from the invention of the television system in 1927 [5]. Since then, image processing technology has been studied and developed mainly for television broadcasting standards such as NTSC and PAL.
Among image-capturing devices, solid-state imagers were invented in the 1970s; these employ the CCD (charge-coupled device) as a signal transformer [6]. Recent developments in VLSI technology enable the fabrication of not only photo-detectors and signal-transfer circuits but also simple image processing circuits on a single image sensor chip, the so-called computational sensors [7].
Image sensors are broadly distinguished by their pixel type, i.e. active pixel sensors versus passive pixel sensors [8]. Passive imagers require fewer MOS devices and less silicon area, whereas active pixels, as a trade-off, provide better picture quality. Hence, most image sensors are active pixel image sensors.
However, the history of CMOS active pixel sensors is not as long as that of CCD sensors. The first research paper on a CMOS image sensor in IEEE Xplore, 'A CMOS facsimile video signal processor' [9], was published in 1985. Since its introduction into image sensors, CMOS has been used both in the analog pixel part and as a processor. In image sensors, CMOS reduces power dissipation, improves switching capability, and aids analog-to-digital conversion.
In 1994, CMOS image sensors were first described in the context of image compression [10]. A CMOS photodiode was used to reduce the dynamic range [11], since a reduced dynamic range requires fewer bits for digital data storage. From then on, the concept of compression, or rather on-pixel compression, was directly linked to the dynamic range of the frame. Various algorithms and methodologies were applied to reduce the dynamic range, but the commonly used approach was the capture >> store >> compress technique, for which several hardware-friendly algorithms have been designed [12].
In the modern era, a newer methodology of image acquisition and online image compression is being implemented: capture >> compress >> store [13]. Because this approach is newer, little work has been reported in this area; owing to its improved results over the old methodology, the technique has large research potential.
A. Block-Based Compression
In various image compression algorithms, the image captured by the optical device is divided into blocks rather than processed as a whole. Block-based compression yields a reduced dynamic range of differential pixel values, which suits adaptive quantization.
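As a behavioural sketch (in Python, using hypothetical random data), the following compares the dynamic range of a whole frame against the ranges of its individual 4×4 blocks; differentials inside a block never exceed the block's own range, which is why fewer quantization bits suffice per block:

```python
import numpy as np

def bits_needed(value_range):
    """Minimum bits needed to represent values in [0, value_range]."""
    return max(1, int(value_range).bit_length())

# A reproducible 8x8 test frame of 8-bit pixels (hypothetical data),
# split into four 4x4 blocks as in the paper's block-based scheme.
img = np.random.default_rng(0).integers(0, 256, (8, 8))
global_range = int(img.max() - img.min())
block_ranges = [int(img[r:r + 4, c:c + 4].max() - img[r:r + 4, c:c + 4].min())
                for r in (0, 4) for c in (0, 4)]
# Each block's range (and hence its differential values) can be coded
# with at most as many bits as the whole frame requires, often fewer.
```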
Here are some block-level bit distributions of an image.
Fig. 1. Bit distribution per pixel for different algorithms
(1) In this block representation, every pixel is given the full precision of 8 bits. This distribution is treated as the original image acquisition.
(2) In this block representation, the brightest pixel value in the block is chosen and given the full precision of 8 bits, while all other pixels are captured with an intermediate precision of 3 bits. Since the brightest pixel represents the highest intensity in the block, comparing it with its neighbouring pixels yields differentials with a reduced dynamic range, which can therefore be captured with fewer bits.
(3) This is the proposed image compression algorithm. The brightest pixel is captured with the full precision of 8 bits, and the block is then sub-divided into smaller blocks (e.g. 2×2). One pixel of each sub-block is captured as a differential pixel value with an intermediate precision of 3 bits, while each subsequent pixel is stored with 1 bit: the differential values of the previous and current pixels are compared using a comparator [29]-[30], and the 1-bit comparator output is stored.
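The three steps above — reference selection, differential quantization at intermediate precision, and 1-bit comparator output — can be sketched as a behavioural model in Python. The quantization step size chosen here is an assumption for illustration; the paper realizes the comparisons in pixel-level hardware:

```python
import numpy as np

def compress_block(block, diff_bits=3):
    """Behavioural sketch of the block-based compression scheme.

    Returns the brightest pixel (kept at full 8-bit precision), the
    differentials quantized to `diff_bits` bits, and the 1-bit
    comparator outputs for successive differentials.
    """
    flat = np.asarray(block, dtype=int).ravel()
    ref = int(flat.max())                  # reference = brightest pixel in the block
    diffs = ref - flat                     # differential values w.r.t. the reference
    levels = 1 << diff_bits                # 8 quantization levels for 3 bits
    step = max(1, -(-int(diffs.max() + 1) // levels))   # ceil division (assumed step)
    q = diffs // step                      # each quantized value fits in diff_bits bits
    comp = (np.diff(diffs) > 0).astype(np.uint8)  # 1 bit: did the differential grow?
    return ref, q, comp
```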
B. Mapping of pixels
When mapping pixels, it is essential to trace the flow of data at the pixel and block levels. An introductory data-flow block representation is given here.
Mapping the pixels during image acquisition helps optimize the compression. Below are examples of the kinds of mapping usually found in image compression algorithms.
Linear scanning
In linear scanning, image pixels are scanned row-wise: the first row of pixels is scanned, then the scan moves to the next row within a block, as shown in Fig. 2.4; blocks are scanned in the same fashion [23].
Linear sub-block scanning
In linear sub-block scanning, the block is first sub-divided into 2×2 sub-blocks, and linear scanning is then performed sub-block-wise instead of block-wise, as shown in Fig. 2.5 [14].
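The two scan orders can be sketched as Python functions that generate pixel coordinates (a behavioural model; the 4×4 block and 2×2 sub-block sizes match those used in the paper):

```python
def linear_scan(h, w):
    """Row-wise (raster) order: all of row 0, then row 1, and so on."""
    return [(r, c) for r in range(h) for c in range(w)]

def subblock_scan(h, w, sb=2):
    """Visit each sb x sb sub-block in raster order, scanning linearly inside it."""
    order = []
    for br in range(0, h, sb):          # sub-block row origin
        for bc in range(0, w, sb):      # sub-block column origin
            for r in range(br, br + sb):
                for c in range(bc, bc + sb):
                    order.append((r, c))
    return order
```

Both orders visit every pixel exactly once; only the sequence differs, which is what changes the dynamic range of successive differentials.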
Hilbert mapping
In Hilbert mapping, image pixels are addressed along a Hilbert curve. Hilbert mapping helps reduce temporal and spatial redundancies and provides a good sampling speed. However, the mapping changes with the number of pixels per block and with the number of blocks per frame, and the scheme is not regular for each sub-block [15].
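For reference, pixel addresses along a Hilbert curve can be generated with the standard distance-to-coordinate routine (a behavioural sketch; the grid side n must be a power of two):

```python
def hilbert_d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Scan order for a 4x4 block: consecutive addresses are always
# spatially adjacent, which is what reduces spatial redundancy.
hilbert_order = [hilbert_d2xy(4, d) for d in range(16)]
```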
In our proposed pixel mapping, the brightest pixel value in the block is selected and given the full precision of 8 bits. Differential pixel values are then calculated by comparing the reference pixel with each of the other pixels, and the differentials are stored with 3-bit precision. For the remaining pixels, the previous and current differential values are compared, and these pixels are stored with 1-bit precision.

SYNTHESIS RESULTS

A. Synthesis results
To synthesize our algorithm, we first design a pixel-level CMOS active pixel circuit, tailored to the requirements of the algorithm. After designing the pixel-level circuit, we form a block-level arrangement of pixels for testing.
The dynamic range of the differential pixel output can then be determined from the synthesis of the whole block. Figure 4 shows the output for pixel 2, where Vfire and Vdiff2 are the inputs to the comparator of pixel 2 and Vout2 is its output. Analysis of an ordinary photodiode yields the following data.

ALGORITHM IMPLEMENTATION

A. Overview
Fig. 5 illustrates the architecture of the overall system. The sensor array is divided into 4×4 blocks. A single pixel consists of a MOS switch, a photodiode, a differential amplifier stage, and a comparator. As shown in Fig. 5, the whole image is divided into small blocks, and controller bits are required for smooth mapping of pixels and blocks. There are two block-select controllers, a row block-select controller and a column block-select controller; the memory size of these controllers varies with the size of the block. This is the block-based representation of the image. In a block-based architecture it is also possible to compress the image and store it with a reduced number of bits; various decomposition schemes, such as QTD (quadrant tree decomposition), can compress the image at the block level prior to storage.
The MOS switch is used to pull the node voltage Vn up to Vdd; it also separates one frame from the next and hence reduces power dissipation. The photodiode acts as the sensor device, converting light intensity into an equivalent voltage as described in Table II, so the voltage at node Vn is proportional to the light intensity.
In our design, we use 3-4 bit counters to count the differential pixel values and store them correspondingly in the on-chip memory.
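Under one plausible reading of the bit allocation for a 4×4 block — one 8-bit reference pixel, one 3-bit differential per 2×2 sub-block, and a 1-bit comparator output for every remaining pixel — the per-block memory budget works out as follows (the exact split is an assumption; the paper only states the precisions used):

```python
def block_bits(block_size=4, sub=2, full=8, mid=3, low=1):
    """Per-block bit budget under the assumed allocation described above."""
    n = block_size * block_size            # pixels per block (16)
    n_sub = (block_size // sub) ** 2       # 2x2 sub-blocks per block (4)
    # 1 reference pixel at full precision, one mid-precision differential
    # per sub-block, and 1 comparator bit for each remaining pixel.
    compressed = full + n_sub * mid + (n - 1 - n_sub) * low
    original = n * full                    # every pixel at full 8-bit precision
    return compressed, original

compressed, original = block_bits()
saving = 1 - compressed / original         # fraction of on-chip memory saved
```

Under this reading the budget is 31 bits against 128, comfortably above the 50% memory saving reported in the conclusion.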

CONCLUSION

In this paper, an online image compression concept is presented and synthesized using the Mentor Graphics Design Architect tool. The paper illustrates the advantages of the proposed algorithm: 1) reduced silicon area required for the DPS; 2) improved fill factor; 3) online compression processing, which enables parallel processing. Results show that the proposed algorithm saves more than 50% of on-chip memory.

FUTURE WORK

This work was synthesized using design tools; capturing images with a camera and processing them on a hardware implementation of the design may provide enhanced results.
 

Tables at a glance

Table 1, Table 2

Figures at a glance

Figure 1, Figure 2, Figure 3.1, Figure 3.2, Figure 3.3, Figure 3.4, Figure 4, Figure 5.1, Figure 5.2, Figure 5.3

References