ISSN: 2319-9873


Information Loss Reduction in Data Hiding using Visual Sensing Parameter Training Application

Shahin Shafei*

Young Researchers and Elite Club, Mahabad Branch, Islamic Azad University, Mahabad, Iran.

Corresponding Author:
Shahin Shafei
Young Researchers and Elite Club
Mahabad Branch, Islamic Azad University, Mahabad, Iran.

Received: 10/01/2014; Revised: 15/02/2014; Accepted: 25/02/2014

Research & Reviews: Journal of Engineering and Technology

Abstract

The Human Visual System (HVS) varies considerably from one person to another, and even across viewing conditions for the same person. Parameterization allows for this “personalization” while maintaining the familiar property that, if a visually “fine” image is added to another visually “fine” image, the result should also be “fine.” We find that the separate operations generally work best when the parameter values are the same, since this ensures a visually pleasing result and should therefore improve image enhancement performance. Similar training methods have been introduced in the past and used for a number of applications. Further, we find that good results can be obtained without training the system for individual images, although training the system on a specific problem yields the best results.

Keywords

HVS, visual, image

Introduction

As image enhancement systems rely on the performance of their basic arithmetical components, we study these most basic building blocks for improved performance. It can be shown that, when linear arithmetic is used, added images are always brighter than the originals, which can result in images that are too bright. When classical LIP arithmetic is used, added images are always darker than the originals, which can result in images that are overall too dark. As addition is a form of fusion, it is natural to want to combine images in a more meaningful fashion. Ideally, added images will be representative of the originals without becoming unnaturally dark or bright [1-4]. By optimizing these most basic image transforms, we obtain improved enhancement.

Because the LIP model has been used successfully in image processing applications, one solution is a Parameterized LIP (PLIP) model. The parameters allow fine tuning of the model, giving the user greater control over the end result. By changing only the parameters, one can change the overall brightness and contrast of the output images. Also, as the parameters can be problem dependent, one can modify the range depending on the amount of information to be fused, thus avoiding the loss-of-information problem while minimizing operational complexity and allowing the model to be realized with cheaper hardware. The primary result of training this system for addition has already been shown in [5-7].

The inclusion of parameters alone, however, may not completely solve the limitations of image processing arithmetic, namely the loss of information and the need for a more meaningful image fusion. To address these limitations, we propose that an extra constraint be added to the PLIP system: a fifth requirement for an image processing framework. The model must not damage either signal; in essence, when a visually “good” image is added to another visually “good” image, the result must also be “good” [2]. This is of particular importance, for example, when information received from two sensors must be fused. In such a case, the resulting image should appear to have the second road blocked off by the boulders. This also demonstrates the previously mentioned limitation of LIP arithmetic, wherein some output images can be visually damaged: the images are too dark and do not appear natural. While it is consistent that the resulting images should be brighter when linear addition is used and darker when classical LIP is used, in practice these results can be improved upon. Although classical addition tends to give results that are characteristically too bright and LIP addition gives results that are characteristically too dark, both cases yield visually pleasing and representative images with appropriate PLIP parameters.
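For illustration, the sketch below implements one common form of the parameterized operators reported in the PLIP literature, where γ(M), k(M), and λ(M) govern addition, subtraction, and multiplication respectively. The function names, the NumPy realization, the β = 1 simplification of the isomorphic transform, and the default parameter values are illustrative assumptions, not definitions taken from this paper.

```python
# Minimal sketch of PLIP arithmetic, assuming commonly cited parameterized
# definitions (isomorphic-transform exponent beta = 1). gamma, k and lam
# stand in for the gamma(M), k(M) and lambda(M) parameters in the text.
import numpy as np

def gray_tone(f, M=256.0):
    """Map pixel intensities f to PLIP gray-tone values g = M - f."""
    return M - np.asarray(f, dtype=np.float64)

def plip_add(g1, g2, gamma=1026.0):
    """Parameterized LIP addition of two gray-tone images."""
    return g1 + g2 - (g1 * g2) / gamma

def plip_sub(g1, g2, k=1026.0):
    """Parameterized LIP subtraction of two gray-tone images."""
    return k * (g1 - g2) / (k - g2)

def plip_mult(g1, g2, lam=1026.0):
    """Parameterized LIP multiplication via the isomorphic transform
    (exact normalization varies between PLIP formulations)."""
    phi = lambda g: -lam * np.log(1.0 - g / lam)
    phi_inv = lambda g: lam * (1.0 - np.exp(-g / lam))
    return phi_inv(phi(g1) * phi(g2))
```

With γ(M) larger than the pixel range, the addition behaves between the linear case (too bright) and the classical LIP case (too dark), which is the behavior the parameterization is meant to expose.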

Materials and Methods

First, the need for a trained PLIP model in image enhancement is demonstrated, since PLIP arithmetic is more relevant to the image formation model. The work of Ivakhnenko demonstrates the use of a polynomial description of complex systems, and he presents methods for tuning parameters to train the system for any number of uses by means of an iterative regression technique based on mean squared error [7]. Another work details new methods for training Recurrent Neural Networks using multi-objective algorithms and mean squared error [4]. These methods, however, have the benefit of full training data; i.e., there are well-defined inputs and the correct answer is known a priori. For the enhancement problem the input is set, but there is no a priori knowledge of the optimal enhanced image. For a training problem such as this, expert judgment may have to be used at certain stages [1]. In general, the best parameters are algorithm dependent. In this paper, three methods of training the system are considered in order to determine the best parameters for an application. The first method is based on Mean Squared Error (MSE) measurements, which proves important when one considers systems of differing precision, such as 16-bit, 32-bit, or 64-bit systems [2]. The second is based on the image enhancement measure EMEE, a quantitative evaluation metric that can be used in place of subjective human evaluations, giving more consistent results [2,3]. The third method is based on visual assessment of enhanced images to determine which are most visually pleasing to a human observer. For an imaging system that combines many pixel values to arrive at one output value, such as a low-pass filter or an edge detector, values can quickly saturate and information can be lost.
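The sketch below shows one common block-based formulation of the EMEE score from the enhancement-measure literature, included only to make the second training method concrete; the block size, α value, and ε guard are illustrative choices and are not taken from this paper.

```python
# Hedged sketch of a block-based EMEE score: average of
# alpha * (Imax/Imin)^alpha * ln(Imax/Imin) over non-overlapping blocks.
import numpy as np

def emee(image, block=8, alpha=0.2, eps=1e-4):
    """Higher scores indicate greater local contrast in the image."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    scores = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = img[r:r + block, c:c + block]
            ratio = (patch.max() + eps) / (patch.min() + eps)
            scores.append(alpha * ratio ** alpha * np.log(ratio))
    return float(np.mean(scores))
```

In a training loop of the kind described here, the same enhancement would be run under several candidate parameter values and the setting with the highest EMEE retained, avoiding repeated subjective comparisons.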

Results of Minimizing Loss of Information Using MSE

In this section, the PLIP system is first trained for addition, subtraction, and multiplication to minimize this loss of information. To accomplish this, the best values of γ(M), k(M), and λ(M) are determined for the general case by maximizing the information in the result of each operation, thus minimizing information loss. The three operations (addition, subtraction, and multiplication) are considered separately for training, selecting the appropriate parameters for general use of the PLIP model. This parameterization, however, leads to an under-constrained problem, which is resolved by the fifth requirement. Methods to best train the parameterized model are also investigated. Both of these issues are explored in detail below.

Although these parameters are selected independently, the measurement procedure is the same: two images are added, subtracted, or multiplied using PLIP arithmetic, and the MSE is plotted against γ(M), k(M), or λ(M). As the information in the fractional portion is important in PLIP arithmetic, it is important to maximize this information. The process described quantifies the energy in these lowest-order bits by comparing the result obtained with high-precision methods to the result of a low-precision approximation. By selecting values corresponding to the maximum MSE, it is possible to find the parameter values for which there is the greatest information in the output image. This is most useful when one considers the expanded range of newer imaging systems, for example 16-bit medical images, where the increased information would be utilized. Conversely, the MSE test is also useful when downgrading to a lower-precision system, where one would instead want to minimize the MSE.

This study was performed for many different combinations of images using the three PLIP arithmetical operations. The result of this experiment for addition, shown in the graph in Figure 1(f), contains several peaks; by far the largest occurs at γ(M) = 1026. Even though the MSE values are small compared to the pixel intensity values, the goal is not to find statistical differences between the output image and an approximation but to find a point of interest among the possible parameter values. The results for subtraction and multiplication show the same large peak at k(M) = λ(M) = 1026. These results, including the same peaks with similar relative sizes, were found in all simulations. After investigating the values from the peaks and other values of γ(M), k(M), and λ(M), it was determined that this is the best value for all three parameters in the general case. Another interesting observation is that all of the local minima correspond to values of 2^n, with a large peak directly following, which suggests that a better function for minimizing information loss in image arithmetic could be of the form γ(M) = 2^k, k = 1, 2, 3, ….

To train the parameters γ, k, and λ with the EMEE, two methods are used: the individual atomic operations can be tested, and PLIP-based enhancement algorithms can be tested as an overall system. These studies are performed using several different values of the PLIP parameters and comparing the resulting images. First, we show the improved performance obtained by changing only the k(M) value used to calculate the gray-tone function, g(i, j), for each image. For this example, γ(M) = 512 and k(M) is tested as the maximum of the two images separately, the maximum of the two images collectively, k(M) = 255, and k(M) = 300. The major difference is in the background; using the maximum value of the two images collectively gives the background of the Truck image higher values, which helps to hide some of the ripples that can be seen in the image. A code sketch of the MSE-based sweep follows.
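The sketch below outlines the MSE-based parameter sweep described above: two gray-tone images are PLIP-added at full double precision and again under a crude reduced-precision approximation, and the scaled MSE between the two results is recorded for each candidate γ(M). Rounding to a fixed number of fractional bits only stands in for the 15-bit floating-point approximation used in the paper, and the scale factor plays the role of the "left shift" mentioned in the Conclusion; the sweep range and bit counts are illustrative.

```python
# Hedged sketch of the MSE sweep over candidate gamma(M) values.
import numpy as np

def plip_add(g1, g2, gamma):
    # Same parameterized addition as in the earlier sketch.
    return g1 + g2 - g1 * g2 / gamma

def quantize(x, frac_bits=7):
    # Keep only frac_bits fractional bits (rough low-precision stand-in).
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def mse_sweep(g1, g2, gammas, shift=2.0 ** 16):
    """Scaled MSE between full- and low-precision PLIP sums, per gamma."""
    results = {}
    for gamma in gammas:
        full = plip_add(g1, g2, gamma)                       # 64-bit result
        low = quantize(plip_add(quantize(g1), quantize(g2), gamma))
        results[gamma] = shift * float(np.mean((full - low) ** 2))
    return results

# Example usage (g1, g2 are hypothetical gray-tone images):
# sweep = mse_sweep(g1, g2, range(260, 2050, 2))
# best_gamma = max(sweep, key=sweep.get)  # peak MSE -> most information kept
```

Selecting the parameter at the MSE peak corresponds to retaining the most information in the low-order bits, while selecting a minimum would suit downgrading to a lower-precision system.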


Figure 1: Practical effect of using different proposed parameters while maintaining fixed enhancement parameters.

Conclusion

Two training approaches were presented. The first compares the MSE of a high-precision result to that of a low-precision approximation in order to minimize loss of information: the operations are performed using both standard 64-bit double-precision floating-point arithmetic and a 15-bit floating-point approximation, and the difference is measured using the mean squared error (MSE). As this measures the energy in the lowest-order bits, the MSE is multiplied by a constant to simulate a left shift, so that these low-order bits become higher-order bits. The second approach uses EMEE scores to maximize visual appeal and further reduce information loss.

References