Image registration; Phase; Fast Fourier transform; Multimodal; Keypoints; Dynamic sub-clouds



Copyright © 2008 Elsevier B.V. All rights reserved.

Systems Design Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada N2L 3G1

Received 19 July 2008; revised 14 October 2008; accepted 18 October 2008. Available online 5 November 2008.

An interesting problem in pattern recognition is that of image registration, which plays an important role in many vision-based recognition and motion analysis applications. Of particular interest among registration problems are multimodal registration problems, where the images exist in different feature spaces. State-of-the-art phase-based approaches to multimodal image registration provide good accuracy but have a high computational cost. This paper presents a fast phase-based approach to registering multimodal images for the purpose of initial coarse-grained registration. This is accomplished by simultaneously performing both globally exhaustive dynamic phase sub-cloud matching and polynomial feature space transformation estimation in the frequency domain using the fast Fourier transform (FFT). A multiscale phase-based feature extraction method is proposed that determines both the location and size of the dynamic sub-clouds being extracted. A simple outlier pruning scheme based on resampling is used to remove false keypoint matches. The proposed phase-based approach to registration can be performed very efficiently without the need for initial estimates or equivalent keypoints from both images. Experimental results show that the proposed method provides accuracy comparable to state-of-the-art phase-based image registration methods for the purpose of initial coarse-grained registration while being much faster to compute.

Keywords: Image registration; Phase; Fast Fourier transform; Multimodal; Keypoints; Dynamic sub-clouds

1. Introduction
2. Multimodal registration problem
3. Previous work
4. Proposed registration algorithm
    4.1. Keypoint detection and sub-cloud size estimation
    4.2. Phase sub-cloud extraction
    4.3. Simultaneous sub-cloud matching and feature space transformation
    4.4. Solving the simultaneous matching and feature space transformation estimation problem in the frequency domain
    4.5. Outlier pruning through resampling
    4.6. Algorithm outline
5. Computational complexity analysis
6. Experimental results
7. Conclusions and future work
Acknowledgements


1. Introduction

Image registration is the process of matching points in one image to their corresponding points in another image. The problem of image registration plays a very important role in many visual and object recognition and motion analysis applications. Some of these applications include visual motion estimation [1] and [2], vision-based content-based retrieval [3] and [4], image registration [5], [6], [7] and [9], and biometric authentication [10]. In the best-case scenario, the images exist at the same scale, in the same orientation, and are represented in the same feature space. However, this is not the case in most real-world applications. There are many situations where the images exist in different feature spaces. This particular problem will be referred to as the multimodal registration problem and is a particularly difficult problem to solve. Examples of this problem in real-world situations include medical image registration and tracking of MRI/CT/PET data [11] and building modeling and visualization using LIDAR and optical data [12] and [13].

There are several important issues that make multimodal registration a

difficult problem to solve. First, many registration algorithms require that equivalent keypoints be identified within each image. However, given the differences between the feature spaces in which the images exist, this is often a very difficult task. The significant differences between feature spaces also make it impractical to perform direct intensity matching between the two images. In recent years, an effective approach to multimodal registration has been proposed that utilizes local phase [14] and [15]. This state-of-the-art approach evaluates the mutual information between the local phase of two images to determine the optimal alignment and has been shown to be very effective at matching multimodal medical image data, outperforming existing multimodal registration methods [14] and [15]. However, this approach is computationally expensive (O(N^6) for the mutual information evaluation process). As such, a registration method that is able to take advantage of local phase information to determine point correspondences between images while remaining computationally efficient is highly desirable for the purpose of initial coarse-grained registration.

The main contribution of this paper is a fast phase-based registration algorithm for aligning multimodal images. The proposed method is designed to provide a fast alternative to the phase-based registration algorithm proposed by Mellor et al. [15]. It is important to note that the main contributions of this paper reside in the methods for keypoint detection and dynamic sub-cloud extraction, as well as the method for simultaneous phase correspondence evaluation and feature transformation estimation, not in the outlier pruning scheme. Furthermore, the proposed method is designed for fast initial coarse-grained matching and by no means guarantees the smoothness of the global data correspondence. A fine-grained matching method can be used after the initial matching to provide improved alignment based on global smoothness constraints.

2. Multimodal registration problem

The multimodal registration problem can be defined in the following manner. Suppose there exist two images f and g, where points in f and g are represented using two different feature spaces, respectively. For every point in f, the goal of registration is to determine a corresponding point in g such that the highest degree of correspondence can be found between f and g. This problem can alternatively be formulated as finding the optimal transformation T that maps all points from f to the points from g such that the highest degree of correspondence can be achieved. The relationship between f and g can be defined as

    x_g = T(x_f),

where x_f and x_g are coordinate vectors corresponding to f and g, respectively, and T is a transformation that maps points from f to g. Based on the above relationship, the multimodal registration problem can

be formulated as a minimum distance optimization problem, with the distance representing the degree of data correspondence between the two images, expressed as

    T* = arg min_T D(f(x_f), g(T(x_f))),

where D is the distance function that is inversely proportional to the degree of correspondence between feature points. Low values of D indicate a high level of correspondence between the images.
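To make the formulation above concrete, the sketch below performs a globally exhaustive search for the best integer translation under a simple sum-of-squared-differences distance. The SSD is only a stand-in for the phase- or mutual-information-based distances appropriate to the multimodal setting, and the function names are illustrative, not from the paper.

```python
import numpy as np

def registration_distance(f, g, shift):
    """D(f, g; T) for a pure translation T: mean squared difference
    over the region where the translated f overlaps g. (SSD is a
    stand-in distance; a multimodal D would be phase- or MI-based.)"""
    dy, dx = shift
    h, w = f.shape
    fy0, fx0 = max(0, -dy), max(0, -dx)
    gy0, gx0 = max(0, dy), max(0, dx)
    oh, ow = h - abs(dy), w - abs(dx)
    fo = f[fy0:fy0 + oh, fx0:fx0 + ow]
    go = g[gy0:gy0 + oh, gx0:gx0 + ow]
    return float(np.mean((fo - go) ** 2))

def best_translation(f, g, max_shift=3):
    """Exhaustive search for arg min_T D over integer translations."""
    best, best_d = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            d = registration_distance(f, g, (dy, dx))
            if d < best_d:
                best, best_d = (dy, dx), d
    return best

rng = np.random.default_rng(0)
f = rng.random((32, 32))
g = np.roll(f, (2, -1), axis=(0, 1))  # g is f translated by (2, -1)
```

The exhaustive loop makes the O(search space × image size) cost of direct minimization explicit, which is exactly the cost that frequency-domain methods avoid.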

3. Previous work

A large number of methods have been proposed for the purpose of image registration. In general, current methods can be grouped into four main categories:

(1) Methods based on relative distances [16], [17], [18] and [19].
(2) Methods in the frequency domain [20], [21] and [22].
(3) Methods based on direct comparisons [6], [7], [8], [9], [23], [24], [25], [26] and [27].
(4) Methods based on extracted features [14], [15], [28], [29], [30], [31], [32], [33], [34], [35] and [36].

Methods based on relative distances exploit the spatial relationships between neighboring pixels within an image to determine the best match between two points. These algorithms are based on the assumption that if a point p_f,0 in image f has a corresponding point p_g,0 in image g, then there exist other points {p_f,1, p_f,2, …, p_f,n} in f that have corresponding points {p_g,1, p_g,2, …, p_g,n} in g, such that the distance between p_f,0 and p_f,k is equal to the distance between p_g,0 and p_g,k. Methods based on relative distances are primarily useful for situations where the transformation between the images consists only of translations and rotations.
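The distance-preservation assumption underlying these methods can be checked directly for a hypothesized set of correspondences. A minimal sketch, with an illustrative helper name and test points (not taken from the cited methods):

```python
import numpy as np

def distances_preserved(pts_f, pts_g, tol=1e-6):
    """Check the relative-distance assumption: for a hypothesized
    correspondence p_f,k <-> p_g,k, every pairwise distance among
    points in f must equal the corresponding pairwise distance in g,
    which holds for rigid transforms (translation + rotation)."""
    pf = np.asarray(pts_f, dtype=float)
    pg = np.asarray(pts_g, dtype=float)
    df = np.linalg.norm(pf[:, None] - pf[None, :], axis=-1)
    dg = np.linalg.norm(pg[:, None] - pg[None, :], axis=-1)
    return bool(np.all(np.abs(df - dg) < tol))

# A rotation plus translation preserves all pairwise distances.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts_f = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
pts_g = pts_f @ R.T + np.array([5.0, -3.0])
```

A scaling between the images breaks the assumption, which is why these methods are restricted to translations and rotations.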


Methods in the frequency domain [20], [21] and [22] exploit frequency characteristics such as phase to estimate the transformation between two images. A common frequency domain registration method is phase correlation, where the Fourier coefficients calculated from image f are divided by those calculated from image g. Performing the inverse transform on the result yields a single peak indicating the translation that matches the two images. This technique has been extended to account for global rotations and scale by Reddy et al. [21]. As such, frequency domain methods are only suited for globally rigid point correspondences.
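Phase correlation can be sketched in a few lines. This is the generic textbook formulation using NumPy's FFT (normalizing the cross-power spectrum to unit magnitude), not the extended rotation/scale method of Reddy et al.:

```python
import numpy as np

def phase_correlation(f, g):
    """Return the integer shift d such that g ~= np.roll(f, d, axis=(0, 1)).
    The cross-power spectrum is normalized to unit magnitude so only
    phase remains; its inverse transform peaks at the translation."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = G * np.conj(F)
    cross /= np.abs(cross) + 1e-12       # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak coordinates back to signed shifts
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
f = rng.random((64, 64))
g = np.roll(f, (5, -7), axis=(0, 1))  # g is f circularly shifted by (5, -7)
```

Because the whole search over translations collapses into two FFTs and one inverse FFT, the cost is O(N log N) in the number of pixels, which is the efficiency the proposed method also aims to exploit.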

    Methods based on direct comparisons [6], [7], [8], [9], [23], [24], [25],

    [26] and [27] attempt to find data point correspondences between two

    images by performing point matches directly in their respective feature

    spaces. Many techniques in this group make use of feature information

    from neighboring data points to determine the similarity between two

    data points. Some common similarity metrics used in direct comparison

    methods include maximum likelihood [5], correlation [23] and [26], and

    mutual information [6], [7], [8] and [9]. Of particular interest in recent

years are techniques based on mutual information, which attempt to

    match data points by finding the mutual dependence between the images.

The key advantage of techniques based on mutual information is that they allow images existing in different feature spaces to be compared in a direct manner. However, such techniques also require good initial match estimates to produce accurate results, as they are very under-constrained in nature. Furthermore, techniques based on mutual information are computationally expensive and may not be practical in situations where computational speed is important. A comparison between the computational complexity of mutual information-based techniques and the proposed method is presented later in this paper.
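For reference, the standard plug-in estimate of mutual information from a joint intensity histogram, as used by such techniques, can be sketched as follows (the bin count and test images are illustrative):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in mutual information I(A;B) from the joint intensity
    histogram. MI is high when intensities in a predict intensities
    in b, even when the mapping between modalities is non-linear."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
a = rng.random((64, 64))
b_related = 1.0 - a ** 2            # a non-linear "modality change" of a
b_unrelated = rng.random((64, 64))  # statistically independent of a
```

Note that this estimate must be recomputed for every candidate alignment, which is the source of the computational cost discussed above.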


In methods based on extracted features [14], [15], [28], [29], [30], [31], [32], [33], [34], [35] and [36], the images are compared indirectly using extracted features that exist in a common feature space. Common features used in such methods include contours [29] and [30], invariant moments [34] and [35], orientation [33] and [36], and shape properties [28]. Comparing images using derived features in a common feature space allows such techniques to match images existing in different feature spaces, provided that similar features can be extracted from both images.

    In recent years, an effective approach has been proposed for the purpose of multimodal registration that evaluates the mutual information between the local phase of images to determine optimal point correspondences [14] and [15]. This state-of-the-art approach improves mutual

    information performance by reducing feature non-homogeneities and accentuating structural information, and has been shown to be very effective at finding correspondences between medical imaging data acquired using different medical imaging modalities. However, this approach suffers from the same computational cost issues associated with mutual information-based methods. The primary goal of this paper is to provide a fast approach to phase-based image registration that provides comparable accuracy at a significantly reduced computational cost.
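To give a sense of what "local phase" means here, the sketch below extracts a local phase map using a single complex Gabor filter applied in the frequency domain. This does not reproduce the multiscale filters of the cited approach; the filter parameters are purely illustrative.

```python
import numpy as np

def local_phase(img, freq=0.15, sigma=3.0):
    """Local phase of `img` along x via a complex Gabor filter applied
    in the frequency domain. The angle of the complex response is the
    local phase; its magnitude is the local amplitude. (Illustrative
    single-scale filter, not the paper's multiscale design.)"""
    h, w = img.shape
    y, x = np.mgrid[-(h // 2):(h - h // 2), -(w // 2):(w - w // 2)]
    gabor = (np.exp(-(x**2 + y**2) / (2 * sigma**2))
             * np.exp(2j * np.pi * freq * x))
    # FFT-based convolution; ifftshift moves the centered kernel to the origin
    response = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(gabor)))
    return np.angle(response)

rng = np.random.default_rng(3)
img = rng.random((64, 64))
phase = local_phase(img)
```

Because phase is largely invariant to intensity scaling and bias, maps like this are better suited than raw intensities for comparing images from different modalities.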

4. Proposed registration algorithm

The proposed approach to phase-based image registration can be divided into three main parts. First, keypoints are selected and sub-clouds are extracted using a multiscale phase-based keypoint extraction method, as described in Sections 4.1 and 4.2. A simultaneous

    globally exhaustive phase correspondence evaluation and feature space transformation estimation scheme is performed between the phase sub-clouds from one image over all translations and local rotations of the
