Department of Electrical Engineering, Urmia Branch, Islamic Azad University, Urmia, Iran.
Received: 02/10/2013 Accepted: 17/01/2013
The development of new imaging sensors gives rise to the need for image processing techniques that can effectively fuse images from different sensors into a single coherent composition for interpretation. To exploit the inherent redundancy and extended coverage of multiple sensors, we propose a multi-scale approach to pixel-level image fusion. The ultimate goal is to reduce human/machine error in the detection and recognition of objects. Results show that the proposed method offers clear advantages over traditional methods.
image, coherent, sensors, fusion method
Over the past two decades, a wide variety of pixel-level image fusion algorithms has been developed. These techniques may be classified into linear superposition, logical filter [1], mathematical morphology [2], image algebra [3], artificial neural network [4], and simulated annealing [5] methods. Each of these algorithms aims for a fused image that reveals new information about features that cannot be perceived in the individual sensor images. However, useful information can also be discarded, since each fusion scheme tends to emphasize different attributes of the image. Ref. [5] provides a detailed review of these techniques. Inspired by the fact that the human visual system processes and analyzes image information at different scales, researchers have recently proposed multiscale fusion methods, now widely accepted as among the most effective techniques for image fusion. Wavelet theory has played a particularly important role in multiscale analysis, and a number of papers [3-5] have addressed fusion algorithms based on the orthogonal wavelet transform. A major drawback of wavelet-based fusion algorithms to date is the lack of a good fusion scheme. Most fusion rules proposed so far are essentially variants of the "choose max" scheme of [2], which introduces a significant amount of high-frequency noise because the fused wavelet coefficient switches abruptly to whichever source coefficient has the maximum magnitude. This high-frequency noise is particularly undesirable for visual perception.

Inverse Synthetic Aperture Radar (ISAR) is a microwave imaging system capable of producing high-resolution imagery from data collected by a relatively small antenna. ISAR can be explained in terms of spotlight SAR, which is obtained as the radar antenna continuously tracks a particular target area of interest. The same data would be collected if the radar were stationary and the target area were rotating; this rotation of the target relative to the radar is used to generate the target image, and that is precisely the idea of ISAR.
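To make the "choose max" rule concrete, the sketch below fuses two registered grayscale images in the wavelet domain; the wavelet choice ('bior2.2'), the decomposition level, and the use of the PyWavelets package are illustrative assumptions on our part, not details taken from the paper.

```python
# Minimal sketch of the "choose max" wavelet fusion rule discussed above.
import numpy as np
import pywt  # PyWavelets


def choose_max_fusion(img_a, img_b, wavelet="bior2.2", level=3):
    """Fuse two registered grayscale images of equal size."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    # Approximation band: average the two coarse approximations.
    fused = [(ca[0] + cb[0]) / 2.0]

    # Detail bands: at each scale and orientation, keep the coefficient
    # with the larger magnitude ("choose max") -- the abrupt switching
    # criticized in the text happens exactly here.
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) > np.abs(b), a, b)
                           for a, b in zip(da, db)))

    return pywt.waverec2(fused, wavelet)
```

Because each fused detail coefficient jumps discontinuously between sources, neighboring pixels can draw from different images, which is the source of the high-frequency artifacts noted above.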
In this article, we apply a biorthogonal wavelet transform to pixel-level image fusion. It is possible to construct smooth biorthogonal wavelets of compact support that are either symmetric or antisymmetric, whereas it has been shown that, with the exception of the Haar wavelet, symmetric orthogonal wavelets of compact support are impossible to construct. Symmetric or antisymmetric wavelets are synthesized with perfect-reconstruction filters having linear phase, a desirable property for image fusion applications. Unlike the "choose max" type of selection rules, we propose an information-theoretic fusion scheme. For each pixel in a source image, a vector consisting of the wavelet coefficients at that pixel position across scales is formed to indicate the "activity" of that pixel. We call the collection of these indicator vectors over all pixels of a source image its activity map. To compare activity indicator vectors in a principled way, we apply our newly proposed divergence measure, the Jensen divergence, which is defined in terms of entropy.

The geometrical description of an object can be decomposed into registration and shape information. For example, an object's location, rotation, and size constitute its registration information, and the geometrical information that remains is the shape of the object. An object's shape is invariant under registration transformations, and two objects have the same shape if they can be registered to match exactly. General shape and registration in this sense have been studied extensively in the literature.

An Inverse Synthetic Aperture Radar system transmits electromagnetic waves toward a target and coherently integrates the returned signals to synthesize the effect of a larger aperture array. The spatial distribution of the reflectivity density of the target, referred to as the image of the target, is usually mapped onto a range-azimuth plane.
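The text does not spell out the divergence formula; one common entropy-based form, and the one consistent with the parameter β discussed below (stated here as an assumption on our part), is the Jensen-Rényi divergence built from the Rényi entropy R_β:

\[
R_\beta(p) = \frac{1}{1-\beta}\,\log\sum_{k} p_k^{\beta}, \qquad \beta > 0,\ \beta \neq 1,
\]
\[
JR_\beta^{\,\omega}(p_1,\ldots,p_n) = R_\beta\!\left(\sum_{i=1}^{n} \omega_i\, p_i\right) - \sum_{i=1}^{n} \omega_i\, R_\beta(p_i),
\]

where the p_i are probability distributions (e.g., normalized activity vectors) and the weights ω_i ≥ 0 sum to one. For β ∈ (0,1), where R_β is concave, the divergence is nonnegative and vanishes exactly when all the p_i coincide, which is what makes it usable as a similarity measure between activity maps.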
Most existing approaches to denoising or signal enhancement in a wavelet-based framework rely on the assumption of normally distributed, independent perturbations. In practice this assumption is often violated, and sometimes even prior information on the probability distribution of the noise process is unavailable. To relax this assumption, we propose a novel nonlinear filtering technique in this paper. The key idea is to project a noisy signal onto a wavelet domain and to suppress wavelet coefficients by a mask derived from the curvature extrema of its scale-space representation. For a piecewise smooth signal, it can be shown that filtering with this curvature mask is equivalent to preserving the signal's pointwise Hölder exponents at the singular points while lifting its smoothness at all the remaining points.

A Multiscale Approach for Pixel Level Image Fusion

Pixel-level image fusion refers to the processing and synergistic combination of information gathered by various imaging sources to provide a better understanding of a scene. We formulate image fusion as an optimization problem and propose an information-theoretic approach in a multiscale framework to solve it. A biorthogonal wavelet transform of each source image is first calculated, and the new fusion algorithm applies the Jensen divergence to construct a composite of wavelet coefficients according to the information patterns inherent in the source images; Fig 1 shows the Jensen divergence as a function of β. Experimental results on the fusion of multi-sensor navigation images, multi-modality medical images, multi-spectral remote sensing images, and multi-focus optical images are presented to illustrate the proposed fusion scheme.

ISAR imagery represents the reflectivity magnitude associated with the illuminated target. In the terminology of radar signal processing, the direction of the radar line of sight is referred to as range, and the direction orthogonal to range is referred to as cross-range or azimuth. Range is determined by measuring the time it takes a transmitted signal to travel the round-trip distance between radar and target, and the ability of the radar to distinguish a particular scatterer from others in its vicinity depends on the range resolution. The target reflectivity density is a function of frequency and viewing angle; the working assumption is that it does not vary significantly over the bandwidth of the transmitted signal, which is narrow compared to the carrier frequency, or over the narrow span of viewing angles.
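For reference, the standard relation between range resolution and transmitted bandwidth B (a textbook radar fact, not given explicitly in the text) is

\[
\Delta r = \frac{c}{2B},
\]

where c is the propagation speed. For example, a bandwidth of B = 150 MHz gives Δr = (3×10^8)/(2 × 1.5×10^8) = 1 m: two scatterers closer than one meter in range cannot be resolved.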
The goal of shape recognition is to identify a shape regardless of its registration information. We describe the matching of two configurations using a regression technique, making connections with general shape spaces and Procrustean distances. In particular, we study generalized matching by estimation in Euclidean and affine shape spaces. Simulation results show that matching by way of a mean shape is more robust than matching target shapes directly.
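As a minimal illustration of removing registration information before comparing shapes (this is classical Procrustes alignment, not the authors' regression-based estimator), SciPy's Procrustes routine aligns two landmark configurations and reports their residual mismatch; the square example below is our own:

```python
# Sketch of Procrustes matching of two point configurations, illustrating
# the registration/shape decomposition discussed above.
import numpy as np
from scipy.spatial import procrustes

# Two labelled 2-D configurations (rows are corresponding landmarks).
shape_a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# shape_b is shape_a rotated by 30 degrees, scaled by 2, and translated,
# so the two differ only in registration information, not in shape.
theta = np.deg2rad(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
shape_b = 2.0 * shape_a @ rot.T + np.array([3.0, -1.0])

# procrustes standardizes both configurations (removing translation,
# scale, and rotation) and returns the residual squared mismatch,
# i.e., a Procrustean distance between the shapes.
mtx_a, mtx_b, disparity = procrustes(shape_a, shape_b)
print(f"Procrustes disparity: {disparity:.2e}")  # ~0: identical shapes
```

A disparity near zero confirms the two configurations carry the same shape, exactly the invariance property described above.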