Keywords
Stationary wavelet transform, curvelet transform, contourlet transform, wavelet transform, Laplacian pyramid, framelet transform.
INTRODUCTION
With advances in technology, more and more imaging modalities are available for research and clinical studies, for example X-ray computed tomography (CT), positron emission tomography (PET) and magnetic resonance imaging (MRI). Each of these modalities provides a unique and very often complementary characterization of the underlying anatomy and tissue microstructure. This calls for a joint analysis of these multimodality images. A multichannel image for each subject is defined as the co-registered collection of all single-modality images that represent the same subject, and the analysis of such images is referred to as multichannel image analysis (MIA).
To improve the efficiency and reliability of multichannel image registration, for each tissue type it is beneficial to extract only the most pertinent information from all the modalities and discard the part that overlaps or is less reliable. Such discarded information is referred to as "redundant" [1].
A straightforward multimodal image fusion method is to overlay the source images by manipulating their transparency attributes, or by assigning them to different color channels. This overlaying scheme is a fundamental approach in color fusion, a type of image fusion that uses color to expand the amount of information conveyed in a single image [2].
A number of techniques for multifocus image fusion exist to date. The average fusion (AF) method is the simplest one: it averages all input images pixel by pixel. The weighted average method, a variation of AF, assigns a different weight to each input before taking the average. Both of these methods often introduce serious side effects such as a reduction in the contrast of the fused image. Probabilistic methods involve a huge amount of floating-point computation to choose a proper numerical fusion operator, and therefore consume more time and memory for a given problem [3].
Due to the limited depth of focus of optical lenses, it is often difficult to get an image that contains all the relevant information of an object in a single focus. To overcome this difficulty, multi-focus image fusion has been developed. Image fusion is the process of combining multiple input images, or some of their features, into a single image without distortion or loss of information. With advances in technology, it is now possible to obtain a high-quality fused image with both spectral and spatial information [4]. Medical image fusion refers to the matching and fusion of two or more images of the same lesion. It is broadly classified into three categories: feature-based, pixel-based and decision-based medical image fusion [5]. Due to different imaging mechanisms and the high complexity of body structures and tissues, different medical imaging systems can provide non-overlapping and complementary information.
The wavelet transform possesses many desired properties such as orthogonality and smoothness. Although it is a critically sampled transform, its good frequency characteristics, directionality and layered structure coincide with human vision, so it has been widely applied in medical image fusion.
This paper is organized as follows:
Section II describes various image fusion techniques proposed by different authors.
Section III gives performance measures of image fusion.
Section IV gives the conclusion.
DIFFERENT IMAGE FUSION TECHNIQUES
There are image fusion techniques at various levels: pixel level, feature level and decision level. Among pixel-level fusion techniques, multi-scale transforms are the most popular; however, multi-scale fusion can lead to distortion when the images are derived from different sensor modalities. Andreas Ellmauthaler et al. present a scheme to improve the performance of multi-scale image fusion known as the undecimated wavelet transform (UWT) based fusion scheme [6]. In this technique, the image decomposition is split into two successive filtering operations. Normally, fusion takes place after convolution with the first filter pair; its smaller support leads to smaller coefficient values, which causes errors in the fused image. The non-subsampled nature of the UWT allows the use of non-orthogonal filter banks, which are more robust to artifacts during fusion. The technique offers several advantages over traditional methods: it reduces ringing artifacts in the fused image and is invariant to shifts in the input images. The source images are first filtered using the first spectral factor, followed by the usual fusion step; the output is then filtered using the remaining spectral factors, and the fused image is finally obtained by taking the inverse transform. The result is free from ringing artifacts and from coefficient-spreading problems. The fusion rule employed in this technique is the "select max" rule.
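As an illustration, the following minimal sketch performs an undecimated wavelet fusion with a "select max" rule on two pre-registered grayscale images of equal size. PyWavelets' stationary wavelet transform (swt2/iswt2) is used as a stand-in for the UWT of [6], and the spectral-factorization step of that scheme is not modeled, so this is only an approximation of the idea.

import numpy as np
import pywt

def uwt_max_fusion(img1, img2, wavelet="db2", level=2):
    """Fuse two registered grayscale images with a stationary (undecimated)
    wavelet transform and a select-max coefficient rule."""
    # swt2 requires image sides divisible by 2**level
    c1 = pywt.swt2(img1.astype(float), wavelet, level=level)
    c2 = pywt.swt2(img2.astype(float), wavelet, level=level)
    fused = []
    for (a1, d1), (a2, d2) in zip(c1, c2):
        a = np.where(np.abs(a1) >= np.abs(a2), a1, a2)    # approximation band
        d = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # detail bands
                  for x, y in zip(d1, d2))
        fused.append((a, d))
    return pywt.iswt2(fused, wavelet)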
A new technique for cross-scale fusion of volumetric images has been put forward by Rui Shen et al. [7]. This method enforces both intra-scale and inter-scale consistency. An efficient colour image fusion scheme is also put forth for the case of two monochrome source images.
In order to obtain good contrast in the red-green and blue-yellow channels, the authors arrange red, green, blue and yellow on a colour circle in which the red-green axis is orthogonal to the blue-yellow axis. Colour contrast in a colour-fused image can be maximized only if the hues of the fused image come from two opposite quadrants, or from two orthogonal hues, on the colour circle. The authors conclude that the proposed fusion rule yields better results and plan to pursue this work on HD medical images.
A fusion framework based on the non-subsampled contourlet transform (NSCT) has been proposed by Gaurav Bhatnagar et al. [8]. In this method the source images are first transformed by the NSCT, followed by the combination of the low- and high-frequency components. Two different fusion rules, based on phase congruency and directive contrast, are suggested and used to fuse the low- and high-frequency components respectively. Finally, the fused image is created by the inverse NSCT applied to all composite coefficients. Experimental results show that this provides an effective way to enable more accurate analysis of multimodality images. The main idea of the paper is to carry out the NSCT on the source images and then fuse the low- and high-frequency components. The foremost benefit of phase congruency is that it combines the contrast- and brightness-invariant representation contained in the low-frequency coefficients.
Using directive contrast, the most conspicuous texture and edge information is selected from the high-frequency coefficients and combined into the fused image. The statistical and visual comparisons indicate that the proposed algorithm can enhance the details of the fused image and improve the visual effect with much less information distortion.
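The two-rule idea (one rule for the low-frequency band, a contrast-driven rule for the high-frequency bands) can be sketched as follows. A single-level DWT stands in here for the NSCT, and a simple energy ratio stands in for the directive contrast measure of [8], so this is an illustrative simplification rather than the authors' algorithm.

import numpy as np
import pywt

def two_rule_fusion(img1, img2, wavelet="db1"):
    """Illustrative fusion with separate rules for low and high frequencies:
    average the approximation band; select detail coefficients by a simple
    contrast-like ratio of detail magnitude to local low-frequency level."""
    a1, (h1, v1, d1) = pywt.dwt2(img1.astype(float), wavelet)
    a2, (h2, v2, d2) = pywt.dwt2(img2.astype(float), wavelet)
    low = 0.5 * (a1 + a2)                     # low-frequency rule: averaging
    eps = 1e-6
    def pick(x, y):
        cx = np.abs(x) / (np.abs(a1) + eps)   # crude "directive contrast"
        cy = np.abs(y) / (np.abs(a2) + eps)
        return np.where(cx >= cy, x, y)
    high = (pick(h1, h2), pick(v1, v2), pick(d1, d2))
    return pywt.idwt2((low, high), wavelet)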
A new approach has been put forth for the fusion of magnetic resonance and computed tomography images by G. Mamatha [9]. The application of the curvelet transform to MRI and CT image fusion is presented in this paper, since the capability of the wavelet transform to deal with images having curved shapes is limited. In the curvelet transform the basic principle is segmentation: the image is divided into small overlapping tiles and the ridgelet transform is applied to each tile. During the segmentation process, curved lines in the image are approximated by straight lines, and the overlapping of tiles overcomes edge effects. The ridgelet transform is a tool for the detection of line-like shapes. The results obtained with this transform are better than those of routine DWT techniques.
Kunal Narayan Chaudhury et al. have put forward the dual-tree complex wavelet transform (DT-CWT), which exhibits better shift-invariance than DWT techniques. In this paper the authors propose an amplitude-phase representation of the DT-CWT, giving a direct explanation of the improvement in shift-invariance. They characterize the shiftability of the DT-CWT in terms of the shifting property of fractional Hilbert transform (FHT) operators [10]. Some of the fundamental invariances of the FHT group are translation, dilation and norm invariance. The authors also introduce a generalization of the Bedrosian theorem for the FHT operator and derive an explicit understanding of the shifting action of the FHT. Finally, they extend these ideas to the multidimensional setting by introducing a directional extension of the FHT.
Shutao Li has put forward a new method based on a two-scale decomposition of an image using a guided filter [11]. The two-scale decomposition contains two layers, a base layer and a detail layer: the base layer contains the large-scale intensity variations, while the detail layer captures the small-scale detail. A novel guided-filtering-based weighted average technique is proposed to make use of the spatial consistency of the base and detail layers. Moreover, this technique provides a fast two-scale fusion method. The guided filter is used as a local, edge-preserving filter, its computing time is independent of the filter size, and it makes full use of the strong correlations between neighbouring pixels. The author concludes that the technique preserves the original and complementary information of the source images.
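A rough sketch of the two-scale, guided-filter-weighted fusion idea is given below, assuming registered grayscale inputs. The saliency measure and the filter parameters (r, eps, base_size) are simplified placeholders rather than the exact choices of [11].

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Edge-preserving guided filter: I is the guidance image, p the input."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mI, mp = mean(I), mean(p)
    a = (mean(I * p) - mI * mp) / (mean(I * I) - mI * mI + eps)
    b = mp - a * mI
    return mean(a) * I + mean(b)

def two_scale_guided_fusion(img1, img2, base_size=31):
    """Two-scale fusion in the spirit of [11]: mean-filtered base layers,
    detail layers, and guided-filter-refined binary weight maps."""
    img1, img2 = img1.astype(float), img2.astype(float)
    b1, b2 = uniform_filter(img1, base_size), uniform_filter(img2, base_size)
    d1, d2 = img1 - b1, img2 - b2
    w = (np.abs(d1) >= np.abs(d2)).astype(float)    # crude saliency weight map
    wb = guided_filter(img1, w, r=20, eps=0.3)      # smooth weight for base layer
    wd = guided_filter(img1, w, r=4, eps=1e-4)      # sharper weight for detail layer
    return wb * b1 + (1 - wb) * b2 + wd * d1 + (1 - wd) * d2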
Navneet Kaur et al. have put forth a new technique based on the curvelet transform using a log-Gabor filter [12]. Compared with traditional wavelet techniques, the curvelet transform provides the major advantage of capturing information about curved regions. When a log-Gabor filter is added to the curvelet technique, it improves the contrast of ridges and edges in the image. The major advantage of log-Gabor filters is that they can code natural images better than Gabor filters; moreover, a log-Gabor filter can be constructed with any arbitrary bandwidth. The authors conclude that this technique increases the quality of the image while preserving the important details of its features.
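For reference, a radial log-Gabor transfer function can be constructed in the frequency domain as sketched below; the centre frequency f0 and bandwidth parameter sigma_ratio are illustrative values, not the ones used in [12].

import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.65):
    """Radial log-Gabor transfer function G(f) = exp(-(ln(f/f0))^2 / (2 ln(s)^2)).
    It has no DC component and, unlike a Gabor filter, allows arbitrary bandwidth."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                              # avoid log(0) at DC
    G = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0                                   # zero response at DC
    return G

# Frequency-domain filtering of a (hypothetical) image array 'img':
# response = np.fft.ifft2(np.fft.fft2(img) * log_gabor_radial(img.shape)).real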
In medical image fusion, merging automated segmentations obtained from multiple sources has become common practice for improving accuracy [13]. Subrahmanyam Gorthi et al. proposed two fusion methods, global weighted shape-based averaging and local weighted shape-based averaging. In addition, an edge-preserving smoothness term is included to obtain the desired output.
Jean-Luc Starck et al. have applied two mathematical transforms, the ridgelet transform and the curvelet transform; the curvelet transform uses the ridgelet transform and implements sub-bands using a filter bank [14]. The major purpose of this work is to denoise standard images that are embedded in white noise, and the technique was introduced to overcome the limitations of wavelet transforms. A 2D wavelet transform exhibits large wavelet coefficients along edges, so many coefficients are needed to reconstruct the edges of an image, and the presence of too many coefficients makes denoising difficult. To overcome these problems the ridgelet transform plays a major role: it allows a sparse representation of smooth functions and of perfectly straight edges. But in image processing edges are typically curved rather than straight, hence the discrete ridgelet transform and discrete curvelet transform are used, which provide representations of smooth objects and of objects with edges. The authors also address a model denoising problem in which standard images are embedded in white noise and thresholding is applied in the digital curvelet transform domain.
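The thresholding step can be illustrated as follows. Since a curvelet implementation (e.g., CurveLab) is not assumed to be available here, a wavelet decomposition stands in for the curvelet transform, so this only conveys the generic transform-domain hard-thresholding idea of [14].

import numpy as np
import pywt

def transform_hard_threshold_denoise(noisy, wavelet="db8", level=3, k=3.0):
    """Generic transform-domain denoising by hard thresholding; a wavelet
    decomposition stands in here for the curvelet transform of [14]."""
    coeffs = pywt.wavedec2(noisy.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate
    thr = k * sigma
    out = [coeffs[0]]                                    # keep approximation band
    for subbands in coeffs[1:]:
        out.append(tuple(np.where(np.abs(c) > thr, c, 0.0) for c in subbands))
    return pywt.waverec2(out, wavelet)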
Peter J. Burt has put forward a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions [15]. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlation is first removed by subtracting a low-pass filtered copy of the image from the image itself. Since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density, further image data compression is achieved by quantizing the difference. These steps are then repeated to compress the low-pass image, and iterating the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales.
The code therefore tends to enhance salient image features. It is advantageous because it is well suited to many image analysis tasks as well as to image compression, and fast algorithms are used for coding and decoding. This technique for removing image correlation combines the features of predictive and transform methods; although it is non-causal, the computations are relatively simple and local. The purpose of constructing the reduced image g1 is that it may serve as a prediction for the pixel values in the original image g0. To obtain a compressed representation, the error image that remains when an expanded g1 is subtracted from g0 is encoded; this image forms the bottom level of the Laplacian pyramid. The next level is generated by encoding g1 in the same way.
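A compact sketch of the pyramid construction and reconstruction, assuming OpenCV is available, is shown below; the reduce/expand filters of cv2.pyrDown and cv2.pyrUp approximate the Gaussian-like low-pass filter described in [15].

import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Burt-Adelson Laplacian pyramid: each level is the error between an
    image and the expanded (upsampled) copy of its low-pass reduction."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))                          # REDUCE
    pyr = []
    for i in range(levels):
        h, w = gauss[i].shape[:2]
        pyr.append(gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=(w, h)))  # error level
    pyr.append(gauss[-1])                                             # coarsest residue
    return pyr

def reconstruct_from_pyramid(pyr):
    """Invert the pyramid: expand and add, from coarse to fine."""
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        h, w = lap.shape[:2]
        img = cv2.pyrUp(img, dstsize=(w, h)) + lap
    return img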
F. E. Ali, I. M. El-Dokany, A. A. Saad and F. E. Abd El-Samie [16] put forward the application of the curvelet transform in medical image fusion. In medical image fusion, multiple images such as computed tomography and magnetic resonance images are fused into a new image in order to improve the information content for diagnosis. In the recent past, some attempts have been made to fuse MR and CT images using the wavelet transform. Since medical images contain curved structures and several objects, it can be expected that the curvelet transform will perform better in fusion. The curvelet transform is based on the segmentation of the whole image into small overlapping tiles, after which each tile is subjected to the ridgelet transform; the idea of the segmentation is to approximate curved lines by straight lines, while the overlapping of tiles avoids edge effects. The ridgelet transform is a 1D wavelet transform applied in the Radon domain of each tile. The curvelet transform was originally proposed for image denoising. The simulation results establish that the curvelet transform is good for image fusion.
The subject of wavelet image fusion, proposed for medical image pseudo-colouring because the wavelet transform efficiently separates an image into different subspaces that preserve the features of the original images after pseudo-colouring, is dealt with in this paper [17]. The processing speed of the pseudo-colouring algorithm is improved by the proposed method, and the system is robust in processing noisy images. Noise corruption is prevalent in medical images, which are generally acquired in noisy environments, and even trained eyes often find it difficult to make a correct diagnosis of the tissues of interest. By adopting this technique medical professionals are able to separate the appropriate tissues.
In this method a new orthogonal space is constructed using wavelets, and the image is first projected onto this space. After wavelet decomposition the coarse image is coloured according to the image features provided by the classifier. The final pseudo-coloured image is formed by fusing together the coloured wavelet coarse images. This wavelet fusion technique is well suited to real-time implementation and provides a stable, consistent result across a broad spectrum of patients.
In this paper [18] a fast and efficient coder is proposed to bring the image quality of fractal compression to curvelet-transformed images. Fast fractal encoding using partitioned iterated function systems (PIFS) is applied to the coarse scale (low-pass sub-band) of the curvelet-transformed image, while a modified set partitioning in hierarchical trees (SPIHT) coding is applied to the detail sub-bands. The details of the image and the progressive transmission characteristic of the curvelet transform are maintained, thereby solving the common encoding fidelity problem in fractal-curvelet hybrid coders. With the proposed scheme a saving of about 90% in encoding and decoding time can be achieved. The presented fractal-curvelet image coder combines the speed of the curvelet transform with the image quality of fractal compression; the hybrid coder uses fast fractal encoding (QPIFS) with Fisher's acceleration in the approximation sub-band and modified SPIHT coding in the detail sub-bands of the curvelet-transformed image.
The theme of this paper [19] is the fusion of images from various sources using the multi-resolution wavelet transform, together with the necessary pre-processing. When two images of a scene are taken from different angles, they are distorted relative to each other; even though most of the objects are identical, their shapes change a little. To fix this distortion, it must be ensured before fusion that corresponding pixels in the images are properly aligned, which is done by image registration: using software, several control points in the two images of the same scene are matched. Each image must then be adjusted to the same dimensions, which is achieved by re-sampling after registration, so that all images are the same size. The fusion approach adopted is pixel by pixel: the averaging method is used for fusing the low-frequency components, while the maximum rule is adopted for fusing the high-frequency components, as sketched below.
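A minimal version of this averaging-low/maximum-high rule on a multi-level DWT might look as follows; the wavelet and decomposition level are arbitrary illustrative choices, and the images are assumed to be registered and of equal size.

import numpy as np
import pywt

def dwt_avg_max_fusion(img1, img2, wavelet="db4", level=3):
    """Fuse two registered, equally sized images: average the low-frequency
    (approximation) band, take max-magnitude high-frequency (detail) coefficients."""
    c1 = pywt.wavedec2(img1.astype(float), wavelet, level=level)
    c2 = pywt.wavedec2(img2.astype(float), wavelet, level=level)
    fused = [0.5 * (c1[0] + c2[0])]                          # averaging rule (low)
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))          # maximum rule (high)
    return pywt.waverec2(fused, wavelet)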
This paper [20] deals with the fusion of medical images captured using different modalities, combining the complementary information of the various modalities by an efficient method. In the dual-tree complex contourlet transform (DT-CCT), the limitation of the dual-tree complex wavelet transform is rectified by integrating directional filter banks. The images produced by the DT-CCT have improved contours and textures while retaining the property of shift invariance. Fusion rules based on principal component analysis (PCA) are proposed, depending on the frequency component of the DT-CCT coefficients.
The PCA method is used for the low-frequency components, while the prominent features of a local energy model are used for the high-frequency components; the inverse dual-tree complex contourlet transform is applied to obtain the final fused image. During this process an image is decomposed into two levels using the biorthogonal Daubechies 9/7 wavelet, and at each level every sub-band is supplied to the directional filter bank (DFB) stage, with as many as eight directions at the finest level. The comparison of visual and statistical information shows that the results obtained contain more detailed information with little information distortion.
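The PCA weighting of the low-frequency components can be illustrated by the small sketch below, which derives two normalized weights from the dominant eigenvector of the joint covariance of the two low-frequency bands; this is the generic PCA fusion rule, not necessarily the exact formulation of [20].

import numpy as np

def pca_fusion_weights(band1, band2):
    """Normalized fusion weights from the dominant eigenvector of the joint
    covariance of two (flattened) low-frequency bands."""
    cov = np.cov(np.stack([band1.ravel(), band2.ravel()]))
    _, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    v = np.abs(eigvecs[:, -1])                  # principal component direction
    w = v / v.sum()
    return w[0], w[1]

# fused_low = w1 * low1 + w2 * low2  with  w1, w2 = pca_fusion_weights(low1, low2)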
In this paper Po-Whei Huang et al. present a PET and MR brain image fusion method based on the wavelet transform [21]. It gives good fusion results by adjusting the anatomical structure information in the gray matter (GM) area and patching the spectral information in the white matter area after wavelet decomposition and gray-level fusion. Although many methods have been proposed for fusing PET and MR images in the past few years, the IHS substitution method can obtain fused images with good anatomical structural information but has the side effect of colour distortion, while several multi-resolution based methods generate fused images with less colour distortion but suffer from missing detailed structural information. A method known as IHS+RIM has been proposed recently and has shown the best performance so far in fusing PET and MR images. The method discussed here is proposed to overcome the problems of the existing methods and to provide better results than the IHS+RIM method. To carry more structural information, the high-activity region is decomposed by a 4-level wavelet transform; to carry more spectral information and obtain better colour preservation, the low-activity region is decomposed by a 3-level wavelet transform. Normal axial, normal coronal and Alzheimer's disease brain images are used for testing and comparison. Experimental results demonstrate that the fused results for these images have less colour distortion and richer anatomical structural information than the IHS+RIM method, both visually and quantitatively.
In medical imaging studies, the collection of multiple-task brain imaging data from the same subject has become common practice [22]. In this paper, Jing Sui, Tülay Adali, Godfrey Pearlson, Honghui Yang, Scott R. Sponheim, Tonya White and Vince D. Calhoun put forward a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods, canonical correlation analysis and independent component analysis. The purpose is to achieve both high estimation accuracy and the correct connection between two datasets in which sources can have either common or distinct between-dataset correlation. The focus is on multi-task brain imaging data fusion, a second-level analysis based on "features"; here a "feature" is a distilled dataset representing the task-related activations, which is more tractable than working with the original 4D data because of the reduced dimension. Most of the existing methods connect two datasets by maximizing (1) inter-subject covariation or (2) statistical independence among the components, or both. However, such requirements may not be met in practice, and the two assumptions cannot be satisfied simultaneously without the use of prior information, resulting in a trade-off solution. To overcome this problem a new joint BSS model is proposed whose assumptions are less stringent and which takes maximum advantage of the data. The performance of CCA+ICA on jointly analyzing two features is compared with joint ICA and mCCA; as anticipated, the three methods extract different views of the data, with CCA+ICA appearing to highlight both task-common and task-distinct aberrant brain regions in schizophrenia.
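A highly simplified sketch of a CCA-then-ICA pipeline on two feature matrices is given below using scikit-learn; it is meant only to convey the two-stage idea and does not reproduce the exact joint BSS formulation of [22].

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import FastICA

def cca_then_ica(X, Y, n_components=8):
    """Two-stage decomposition: CCA links the two feature matrices
    (subjects x voxels), then ICA is run on the stacked canonical variates
    to sharpen the independence of the recovered components."""
    Ux, Uy = CCA(n_components=n_components).fit_transform(X, Y)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    S = ica.fit_transform(np.vstack([Ux, Uy]))   # joint ICA over both datasets
    return S[: X.shape[0]], S[X.shape[0]:]       # mixing profiles per dataset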
In the United States, prostate cancer is one of the leading causes of cancer death in men [23]. As Minsong Cao, Song-Chu Ko, Eric D. Slessinger, Colleen M. Des Rosiers, Peter A. Johnstone and Indra J. Das point out, among the present treatment options prostate permanent seed implant brachytherapy has become an increasingly popular monotherapy because of its ability to spare neighboring organs while delivering adequately high doses to the prostate. Suboptimal dosage evaluated from post-implant dosimetry of prostate brachytherapy creates a conundrum that needs resolution, and this pilot study was undertaken to explore the possibility of summing and visualizing radiation dosage from multimodality treatment. CT scans were performed on the whole pelvis of patients using a standard protocol for prostate planning, and the acquired CT data sets were reconstructed using different sizes of field of view (FOV). The images with limited FOV focusing on the prostate were imported into Variseed (Varian Medical Systems, Inc., Palo Alto, CA) for post-implant evaluation, while the images with full FOV were imported into the Eclipse (Varian Medical Systems, Inc., Palo Alto, CA) treatment planning system (TPS) for further management. The brachytherapy dose matrix was registered with the full-FOV patient images in the Eclipse TPS, and the targets for dose boost were defined based on the isodose curves generated from brachytherapy. An external photon beam plan was successfully generated to deliver dose to the selected underdosed regions, and accurate external beam radiation treatment planning could thus be accomplished using the planning protocols for these inadequate brachytherapy doses. The proposed method can be used to safely deliver additional external radiation dose using the intensity-modulated radiation therapy technique after a suboptimal brachytherapy procedure. The authors explore the feasibility of dose summation using the commonly used Variseed and Eclipse TPS and propose an integration procedure for matching the external photon beam dose to volumes with suboptimal dose coverage from seed implantation brachytherapy. The procedure can be easily incorporated with CT-based post-implant dosimetric evaluation using existing technologies: by reconstructing two image data sets from a single post-implant CT scan, automatic registration of images, contours and dose matrices between the post-implant brachytherapy plan and the external photon beam salvage plan is possible without increasing the imaging dose to the patients.
In this paper a novel framework for medical image fusion based on the framelet transform is proposed, taking into account the characteristics of the human visual system (HVS) [24]. The main idea of the proposed framework is to decompose all source images by the framelet transform. Many effective techniques for image fusion, especially for medical images, have been proposed in the literature. The simplest are pixel-by-pixel gray-level averaging or selection of the source images, which lead to undesirable side effects such as reduced contrast. Statistical and numerical methods involve huge floating-point computations and are time- and memory-consuming, while PCA/ICA frequently produce undesirable side effects like block artifacts and reduced contrast, which often result in a wrong diagnosis. This paper attempts to rectify the drawbacks of multiresolution transforms in medical image fusion, and for this purpose the framelet transform is used in the proposed framework. The framelet transform has FIR perfect-reconstruction filter banks that produce reconstructed signals with little or no error, and it allows near shift-invariant behavior and fewer rectangular artifacts thanks to its denser time-scale plane compared with the non-oversampled filter banks used in wavelet-related transforms. The efficiency of the proposed framework is demonstrated by various experiments on different medical images. The HVS models used and the main properties of the framelet transform, such as symmetry, simple sampling and large vanishing moments, are the reasons for its better performance: smoother scaling coefficients and more informative detail coefficients are produced than with other multiresolution methods. Further, two clinical examples of persons affected by Alzheimer's disease and by a tumor are included for a more elaborate performance comparison.
PERFORMANCE MEASURES OF IMAGE FUSION
The main requirement of image fusion is that it must preserve all useful and valid information from the source images without introducing any artifacts. To measure the quality of the fused images, that is, for the objective evaluation of image fusion, different performance measures such as entropy, correlation coefficient, peak signal-to-noise ratio, root mean square error, standard deviation, structural similarity index, high-pass correlation, edge detection and average gradient are used. Entropy gives a measure of information quantity; the correlation coefficient is used to find the similarity between the registered and the fused image; the average gradient reflects the clarity of the fused image; the root mean square error is the cumulative error between the fused and the original image; and the peak signal-to-noise ratio is a measure of image error.
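A few of these measures can be computed directly from the fused image and a reference image, as in the following sketch (an 8-bit intensity range is assumed for the PSNR).

import numpy as np

def fusion_metrics(fused, reference):
    """Objective measures comparing a fused image with a reference image."""
    f, r = fused.astype(float), reference.astype(float)
    counts, _ = np.histogram(f, bins=256)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))                     # information content
    cc = np.corrcoef(f.ravel(), r.ravel())[0, 1]          # correlation coefficient
    rmse = np.sqrt(np.mean((f - r) ** 2))                 # root mean square error
    psnr = 20 * np.log10(255.0 / rmse) if rmse > 0 else np.inf
    gy, gx = np.gradient(f)
    avg_grad = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))  # clarity of fused image
    return {"entropy": entropy, "cc": cc, "rmse": rmse,
            "psnr": psnr, "std": f.std(), "avg_gradient": avg_grad}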
CONCLUSION
In this paper various image fusion techniques have been reviewed from different published research works. These image fusion techniques are used to obtain better fused output with improved spatial and spectral preservation.
References
- Yang Li and Ragini Verma, "Multichannel Image Registration by Feature-based Information Fusion," IEEE Transactions on Medical Imaging, Vol. 30, No. 3, March 2011.
- T. Z. Wong, T. G. Turkington, T. C. Hawk and R. E. Coleman, "PET and brain tumour image fusion," Cancer, Vol. 10, No. 4, pp. 234-242, 2004.
- Phen-Lan Lin and Po-Ying Huang, "Fusion methods based on dynamic-segmented morphological wavelet or cut and paste for multifocus images," Signal Processing, Vol. 88, pp. 1511-1527, 2008 (Elsevier).
- Zhijun Wang, "A comparative analysis of image fusion methods," IEEE Transactions on Geoscience and Remote Sensing, Vol. 43, No. 6, June 2005.
- Chetan K. Solanki, "Pixel based and wavelet based image fusion methods," National Conference on Recent Trends in Engineering and Technology.
- Andreas Ellmauthaler, "Multiscale image fusion using the undecimated wavelet transform with spectral factorization and nonorthogonal filter banks," IEEE Transactions on Image Processing, Vol. 22, No. 3, March 2013.
- Rui Shen, "Cross-scale coefficient selection for volumetric medical image fusion," IEEE Transactions on Biomedical Engineering, Vol. 60, No. 4, April 2013.
- Gaurav Bhatnagar, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Transactions on Multimedia, Vol. 15, No. 5, August 2013.
- G. Mamatha, "An image fusion using wavelet and curvelet transform," Global Journal of Advanced Engineering Technologies, Vol. 11, Issue 2, 2012.
- Kunal Narayan Chaudhury, "On the shiftability of dual-tree complex wavelet transforms," IEEE Transactions on Signal Processing, Vol. 58, No. 1, January 2010.
- Shutao Li, "Image fusion with guided filtering," IEEE Transactions on Image Processing, Vol. 22, No. 7, July 2013.
- Navneet Kaur, "A novel method for pixel level image fusion based on curvelet transform," International Journal of Research in Engineering and Technology, Vol. 1, No. 1.
- Subrahmanyam Gorthi, "Weighted shape based averaging with neighbourhood prior model for multiple atlas fusion based medical image segmentation," IEEE Signal Processing Letters, Vol. 20, No. 11, November 2013.
- Jean-Luc Starck, "The curvelet transform for image denoising," IEEE Transactions on Image Processing, Vol. 11, No. 6, June 2002.
- Peter J. Burt and Edward H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, Vol. COM-31, No. 4, April 1983.
- F. E. Ali, I. M. El-Dokany, A. A. Saad and F. E. Abd El-Samie, "Fusion of MR and CT images using the curvelet transform," 25th National Radio Science Conference, March 18-20, 2008.
- "Medical image pseudo colouring by wavelet fusion," International Conference of the IEEE Engineering in Medicine and Biology Society, Amsterdam, 1996, 3.1.2: Visualisation and Virtual Reality.
- A. Muruganandham, S. Karthick and R. S. D. Wahida Banu, "A fast fractal-curvelet image coder," International Journal of Computer Applications (0975-8887), Vol. 35, No. 12, December 2011.
- R. J. Sapkal and S. M. Kulkarni, "Image fusion based on wavelet transform for medical application," International Journal of Engineering Research and Applications, ISSN 2248-9622, Vol. 2, Issue 5, pp. 624-627, Sep-Oct 2012.
- Nemir Al-Azzawi, Harsa Amylia Mat Sakim, Ahmed K. Wan Abdullah and Haidi Ibrahim, "Medical image fusion scheme using complex contourlet transform based on PCA," 31st International Conference of the IEEE EMBS, September 2-6, 2009.
- Po-Whei Huang, "PET and MRI brain image fusion using wavelet transform with structural information adjustment and spectral information patching," 2014 International Symposium.
- Jing Sui, Tülay Adali, Godfrey Pearlson, Honghui Yang, Scott R. Sponheim, Tonya White and Vince D. Calhoun, "A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia," NeuroImage, Vol. 51, pp. 123-134, 2010 (Elsevier).
- Minsong Cao, Song-Chu Ko, Eric D. Slessinger, Colleen M. Des Rosiers, Peter A. Johnstone and Indra J. Das, "A simple method for dose fusion from multimodality treatment of prostate cancer: Brachytherapy to external beam therapy," Brachytherapy, Vol. 10, pp. 214-220, 2011 (Elsevier).
- Gaurav Bhatnagar, Q. M. Jonathan Wu and Zheng Liu, "Human visual system inspired multi-modal medical image fusion framework," Expert Systems with Applications, Vol. 40, pp. 1708-1720, 2013 (Elsevier).