
HVS-Based Enhanced Medical Image Fusion

T. Nalini 1, A. Gayathri 2
  1. Bharath University, Chennai, Tamilnadu, India.
  2. MNM Jain Engineering College, Tamilnadu, India.


Abstract

Medical image fusion helps physicians extract features that are visible in images acquired by different modalities. In this paper, a novel discrete wavelet transform (DWT) based technique for medical image fusion is presented. First, the medical images to be fused are extracted. Second, they are converted to grayscale before decomposition by the DWT. Then, by considering the characteristics of the human visual system (HVS) and the physical meaning of the wavelet coefficients, different fusion schemes are applied to the low-frequency and high-frequency bands separately: a visibility-based scheme for the low-frequency coefficients and a variance-based scheme for the high-frequency coefficients. Finally, the fused image is reconstructed by the inverse discrete wavelet transform (IDWT) from the combined coefficients.

Keywords

Discrete wavelet transform (DWT), Inverse discrete wavelet transform (IDWT), Human visual system (HVS)

I. INTRODUCTION

In order to provide more accurate clinical information for physicians dealing with medical diagnosis and evaluation, multimodality medical images are needed, such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) images. These multimodality medical images usually provide complementary, and occasionally conflicting, information. Therefore, the fusion of multimodal medical images is necessary, and it has become a promising and very challenging research area in recent years. The fusion of medical images can often reveal additional clinical information not apparent in the separate images. Another advantage is reduced storage cost, since only the single fused image needs to be stored instead of the multiple source images. So far, many techniques for image fusion have been proposed in the literature, and a thorough overview of these methods can be found in the references. Since real-world objects usually contain structures at many different scales or resolutions, multiresolution techniques have become essential for medical image fusion.
1.1 Fusion Techniques
A generic categorization of image fusion methods is the following: 1) linear superposition, 2) nonlinear methods, 3) optimization approaches, 4) artificial neural networks, 5) image pyramids, 6) wavelet transform, and 7) generic multiresolution fusion schemes.
1.2 Wavelet Transform
A signal analysis method similar to image pyramids is the discrete wavelet transform (DWT). The main difference is that while image pyramids lead to an overcomplete set of transform coefficients, the wavelet transform results in a nonredundant image representation. The discrete two-dimensional wavelet transform is computed by the recursive application of low-pass and high-pass filters in each direction of the input image (i.e., rows and columns), followed by subsampling. Details of this scheme can be found in the references. One major drawback of the wavelet transform when applied to image fusion is its well-known shift dependency: a simple shift of the input signal may lead to completely different transform coefficients. This results in inconsistent fused images when invoked in image sequence fusion. To overcome the shift dependency of the wavelet fusion scheme, the input images must be decomposed into a shift-invariant representation. There are several ways to achieve this; the straightforward way is to compute the wavelet transform for all possible circular shifts of the input signal, as sketched below.
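As a minimal sketch (assuming the PyWavelets library, the 'db2' wavelet, and a random stand-in image, none of which is specified in the paper), a single level of the 2-D DWT, and the stationary (undecimated) wavelet transform as one standard shift-invariant alternative, can be computed as follows:

    import numpy as np
    import pywt

    # One level of the 2-D DWT: low-pass/high-pass filtering along rows
    # and columns, followed by downsampling by two in each direction.
    image = np.random.rand(256, 256)             # stand-in for a source image
    cA, (cH, cV, cD) = pywt.dwt2(image, 'db2')   # approximation + 3 detail bands

    # A shift-invariant alternative: the stationary (undecimated) wavelet
    # transform keeps coefficients for every shift instead of decimating.
    swt_coeffs = pywt.swt2(image, 'db2', level=1)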
1.3 Medical Image Fusion
Fused images may be created from multiple images of the same imaging modality, or by combining information from multiple modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and single photon emission computed tomography (SPECT). In radiology and radiation oncology, these images serve different purposes.
MRI is primarily a medical imaging technique most commonly used in radiology to visualize the structure and function of the body. MRI provides much greater contrast between the different soft tissues of the body than CT does, making it especially useful in neurological (brain) imaging. Unlike CT, it uses no ionizing radiation; instead, it uses a powerful magnetic field to align the nuclear magnetization of (usually) hydrogen atoms in water in the body. Radiofrequency fields are used to systematically alter the alignment of this magnetization, causing the hydrogen nuclei to produce a rotating magnetic field detectable by the scanner. This signal can be manipulated by additional magnetic fields to build up enough information to construct an image of the body. The body is mainly composed of water molecules, each of which contains two hydrogen nuclei, or protons. When a person goes inside the powerful magnetic field of the scanner, these protons align with the direction of the field. A second, radiofrequency electromagnetic field is then briefly turned on, causing the protons to absorb some of its energy. When this field is turned off, the protons release this energy at a radiofrequency that can be detected by the scanner.
Computed tomography (CT) is a medical imaging method employing tomography, which is imaging by sections or sectioning. Digital geometry processing is used to generate a three-dimensional image of the inside of an object from a large series of two-dimensional X-ray images taken around a single axis of rotation. CT is a sensitive method for the diagnosis of abdominal diseases and is used frequently to determine the stage of a cancer. CT images are more often used to ascertain differences in tissue density, while MRI images are typically used to diagnose brain tumors.
1.4 Proposed System
In this system, after decomposition by the DWT, the coefficients of the low-frequency and high-frequency portions are processed with different fusion schemes, providing enhanced visual information. The image reader is based on the Sinkhorn scaling algorithm, which makes the system flexible with respect to the size of the input image. Multimodal medical images usually contain complementary and conflicting medical information; that is, the same object may appear very differently across modalities. Hence, when the source images are decomposed by the wavelet transform, the approximation image (low-frequency band) and the detail images (high-frequency bands) carry quite different physical meaning in different images. On the other hand, in most applications the ultimate user or interpreter of the fused image is a human, so human perception should be considered in the image fusion. According to theoretical models of the HVS, the human eye has different sensitivity to the wavelet coefficients of the low-resolution band and the high-resolution bands. Based on the above analysis, this paper presents a new fusion rule that treats the low-frequency band and the high-frequency bands with separate schemes.

II. OVERVIEW OF TECHNOLOGIES

The software requirement specification is produced at the culmination of the analysis task. The function and performance allocated to software as part of system engineering are refined by establishing a complete information description, a functional representation, a representation of system behavior, an indication of performance requirements and design constraints, and appropriate validation criteria.
2.1. Image Extraction
In image extraction we need to perform feature extraction, in which the necessary features are gathered from the input images. Some features common to all images are size, color, texture, and pattern. In medical images, patterns mark the regions of interest that help in focusing on an abnormality. This pattern extraction is done using the Sinkhorn scaling algorithm, which makes the system flexible with respect to the image size. Each input image is read and its values are converted into (R, G, B) matrices, which are stored in the form of an array, as sketched below.
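As a minimal sketch of this step (the file name and the use of Pillow/NumPy are our assumptions; the paper's Sinkhorn-scaling-based reader is not reproduced here):

    import numpy as np
    from PIL import Image

    # Hypothetical input file; any RGB medical image would do.
    img = Image.open('mri_slice.png').convert('RGB')
    rgb = np.asarray(img, dtype=np.float64)       # shape: (height, width, 3)

    # Split into the (R, G, B) matrices described above and store them
    # together in a single array.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    channels = np.stack([r, g, b])                # shape: (3, height, width)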
2.2. Grayscale conversion
Conversion of a color image to grayscale is not unique; different weightings of the color channels effectively reproduce the effect of shooting black-and-white film with different-colored photographic filters on the camera. A common strategy is to match the luminance of the grayscale image to the luminance of the color image. To convert a color to a grayscale representation of its luminance, one must first obtain the values of its red, green, and blue (RGB) primaries in a linear intensity encoding, by gamma expansion. Then, add together 30% of the red value, 59% of the green value, and 11% of the blue value (these weights depend on the exact choice of the RGB primaries, but are typical). Regardless of the scale employed (0.0 to 1.0, 0 to 255, 0% to 100%, etc.), the resulting number is the desired linear luminance value; it typically needs to be gamma compressed to get back to a conventional grayscale representation.
What is needed here is a grayscale conversion that expresses grayscale as a continuous, image-dependent, piecewise-linear mapping of the RGB color primaries and their saturation. The degree of contrast enhancement, the scales of contrast features, and the need for noise suppression can easily be adjusted to suit medical image fusion. The enhanced grayscale image can take the place of the luminance image in existing systems for displaying, analyzing, and recognizing images. By rendering color contrasts in grayscale with less detail loss, it offers a more informative picture for visual inspection and interpretation. The weighted-sum conversion is sketched below.
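A minimal sketch of the weighted sum described above; the 30/59/11 weights are the classic Rec. 601 luma weights mentioned in the text, and the gamma value of 2.2 is a common approximation we assume here:

    import numpy as np

    def to_grayscale(rgb, gamma=2.2):
        # rgb: gamma-encoded float array in [0, 1], shape (H, W, 3).
        linear = rgb ** gamma                     # gamma expansion
        luminance = (0.30 * linear[..., 0] +      # 30% of red
                     0.59 * linear[..., 1] +      # 59% of green
                     0.11 * linear[..., 2])       # 11% of blue
        return luminance ** (1.0 / gamma)         # gamma compression back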

III. PERFORMING DISCRETE WAVELET TRANSFORM

3.1. Discrete wavelet transform
Calculating wavelet coefficients at every possible scale is a fair amount of work, and it generates an awful lot of data. If the scales and positions are chosen based on powers of two, the so-called dyadic scales and positions, then the calculation of wavelet coefficients is efficient and just as accurate. This is what the discrete wavelet transform provides. The generic form of a one-dimensional (1-D) wavelet transform is shown in Fig. 3.
Fig. 3. Schematic representation of the 1-D DWT
Here a signal is passed through a low-pass filter h and a high-pass filter g, then downsampled by a factor of two, constituting one level of the transform. Repeating the filtering and decimation process on the low-pass branch outputs produces the multiple levels, or "scales", of the wavelet transform. The process is typically carried out for a finite number of levels K, and the resulting coefficients are called wavelet coefficients. Fusion rules determine how the source transforms will be combined [5]: i) fusion rules may be application dependent; ii) fusion rules can be the same for all subbands or can depend on which subband is being fused.
There are two basic steps in a fusion rule: compute salience measures for the corresponding coefficients of the individual source transforms, and then decide how to combine the coefficients after comparing the salience measures (selection or averaging). There are many rules for image fusion. Some of them are very simple, such as MIN, MAX, and MEAN, which use the minimum, maximum, and mean values of the transform coefficients; a sketch of these rules follows.
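A sketch of these elementary rules, applied pointwise to corresponding coefficient arrays a and b from the two source transforms (the use of NumPy is our assumption):

    import numpy as np

    # Elementary, pointwise fusion rules.
    def fuse_min(a, b):
        return np.minimum(a, b)

    def fuse_max(a, b):
        return np.maximum(a, b)

    def fuse_mean(a, b):
        return (a + b) / 2.0

    # A common selection rule: compare a salience measure (here, the
    # coefficient magnitude) and keep the more salient coefficient.
    def fuse_max_abs(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)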
3.2. Fusion schemes
The main advantage of our proposed system is the use of two different schemes for the low- and high-frequency coefficients: i) a visibility-based scheme and ii) a variance-based scheme.
In the visibility-based scheme, only the low-frequency coefficients are evaluated. Since the low-frequency band is the original image at a coarser resolution level, it can be considered a smoothed and subsampled version of the original image; most of the information of the source images is therefore kept in this band. So, for the low-frequency band, a fusion scheme that selects the coefficient with the highest local visibility is employed. This approach derives from the fact that the HVS is sensitive to contrast; the visibility of wavelet coefficients is defined in [3]. In the variance-based scheme, the high-frequency (detail) coefficients are evaluated; these coefficients contain only limited information about the image. In this scheme the high-frequency coefficients are further divided into nine different coefficient sets, which are given as input to the IDWT along with the low-frequency coefficients. After processing the low-frequency band with the visibility scheme, the LH, HL, and HH coefficients are omitted from it to obtain a better and more efficient fusion result, because the visibility scheme applies only to the low-frequency coefficients, i.e., only the visible regions. One plausible formulation of both schemes is sketched below.
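The visibility definition of [3] is not reproduced in this paper, so the sketch below uses one plausible formulation (local absolute deviation normalized by the local mean) together with a local-variance salience for the detail bands; the window size and the exact measures are our assumptions:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def visibility(band, win=3):
        # Contrast-like, HVS-motivated measure: local deviation from
        # the local mean, normalized by that mean.
        mean = uniform_filter(band, win)
        dev = uniform_filter(np.abs(band - mean), win)
        return dev / (np.abs(mean) + 1e-12)

    def fuse_low(a, b, win=3):
        # Visibility-based selection for the low-frequency band.
        return np.where(visibility(a, win) >= visibility(b, win), a, b)

    def local_variance(band, win=3):
        mean = uniform_filter(band, win)
        return uniform_filter(band * band, win) - mean * mean

    def fuse_high(a, b, win=3):
        # Variance-based selection for the high-frequency (detail) bands.
        return np.where(local_variance(a, win) >= local_variance(b, win), a, b)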
3.3. Fused image creation by IDWT
The next step after decomposition of the source images is the application of the fusion rule. We choose the maximum of the absolute value of the coefficients at every decomposition level. The result is then obtained using the inverse discrete wavelet transform. The application allows users to select two images from a directory, display them, and then render the fused image. The IDWT is performed on the low- and high-frequency coefficients obtained by applying the fusion schemes [2]. During the IDWT process, Canny edge detection is used to obtain the edge information by focusing on sudden changes in the pixel values.
Fig. 3.3. Basic fusion process incorporating the fusion rule
This helps in acquiring the important features from the final fused image with enhanced clarity. Focusing on the edge information in the final fused medical image is significant, since it helps the physician in examining the abnormality. An end-to-end sketch of the pipeline follows.
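Putting the pieces together, a minimal end-to-end sketch under the same assumptions as above (PyWavelets for the transforms, OpenCV's Canny detector for the edge map; fuse_low and fuse_high are the scheme sketches from Section 3.2 and are assumed to be in scope):

    import cv2
    import numpy as np
    import pywt

    def fuse_medical(img1, img2, wavelet='db2', level=2):
        c1 = pywt.wavedec2(img1, wavelet, level=level)
        c2 = pywt.wavedec2(img2, wavelet, level=level)
        # Visibility-based rule on the approximation (low-frequency)
        # band, variance-based rule on every detail (high-frequency) band.
        fused = [fuse_low(c1[0], c2[0])]
        for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
            fused.append((fuse_high(h1, h2),
                          fuse_high(v1, v2),
                          fuse_high(d1, d2)))
        result = pywt.waverec2(fused, wavelet)
        # Canny edge map of the fused image, highlighting sudden
        # changes in pixel values.
        u8 = cv2.normalize(result, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        edges = cv2.Canny(u8, 100, 200)
        return result, edges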

IV. CONCLUSION

Thus, a novel DWT-based technique for medical image fusion has been presented. The method is developed by considering not only the characteristics of the HVS but also the physical meaning of the wavelet coefficients. Different fusion schemes are performed on the coefficients of the low-frequency band and the high-frequency bands, followed by computation of the IDWT. We have compared the proposed method with some existing fusion approaches. Our application is intended to be useful for physicians who need to fuse multimodality images to support diagnosis. The fusion process could be integrated into a distributed application. As technology keeps developing, new and efficient filters can be used to obtain better fusion results through the z-transform, the Laplace transform, or the DWT. Instead of performing a multilevel DWT, which can affect the overall processing speed, other methods that yield an equivalent result in a single level of decomposition could be applied. The simple Haar transform could also be used for efficiency. The Haar wavelet is a certain sequence of rescaled "square-shaped" functions which together form a wavelet family, or basis; it is the simplest possible wavelet. A technical disadvantage of the Haar wavelet is that it is not continuous and therefore not differentiable; this property can, however, be an advantage for the analysis of signals with sudden transitions.

References

  1. Yong Yang, 2010, “Multimodal Medical Image Fusion Through a New DWT Based Technique,” School of Information Technology, Jiangxi University of Finance and Economics, Nanchang 330013, China.
  2. S. L. Cheng, J. M. He, Z. W. Lv, 2008, “Medical Image of PET/CT Weighted Fusion Based on Wavelet Transform,” Proceedings of the 2nd International Conference on Bioinformatics and Biomedical Engineering, pp. 2523–2525.
  3. D. Sabalan, G. Hassan, 2007, “MRI and PET images fusion based on human retina model,” Journal of Zhejiang University SCIENCE A, vol. 8, no. 10, pp. 1624–1632.
  4. Y. M. Zhu, S. M. Cochoff, 2006, “An object-oriented framework for medical image registration, fusion, and visualization,” Computer Methods and Programs in Biomedicine, vol. 82, no. 3, pp. 258–267.
  5. G. Pajares, J. M. de la Cruz, 2004, “A wavelet-based image fusion tutorial,” Pattern Recognition, vol. 37, no. 9, pp. 1855–1872.
  6. F. Maes, D. Vandermeulen, P. Suetens, 2003, “Medical image registration using mutual information,” Proceedings of the IEEE, vol. 91, no. 10, pp. 1699–1722.