
2D+3D Face Recognition Using Dual-Tree Complex Wavelet Transform


    Wang Xueqiao, Ruan Qiuqi, An Gaoyun, Jin Yi

    (Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China)

    Abstract: A fully automatic framework for face recognition is proposed, and its superior performance is demonstrated on the FRGC v2 data. 2D and 3D facial representations extracted by the Dual-Tree Complex Wavelet Transform (DT-CWT) are introduced to reflect facial geometry properties. The level-four high-frequency components of the 2D texture image and the 3D depth image are obtained respectively, and Linear Discriminant Analysis (LDA) is then used to obtain the feature vectors. Cosine distance is used to establish the similarity matrices, and finally the two similarity matrices are fused. The verification rate at an FAR of 0.1% is 97.6% on the All vs. All experiment.

    Key words: Face recognition; Dual-tree Complex Wavelet Transform; Linear Discriminant Analysis

0 Introduction

    Studies in 2D face recognition have made significant progress, producing methods such as PCA (Principal Component Analysis) [1], LDA (Linear Discriminant Analysis) [2], and ICA (Independent Component Analysis) [3], but these still suffer from limitations mostly due to pose variation, illumination, make-up, and facial expression. In recent years, more and more researchers have focused on this topic and proposed a large number of improved methods to overcome the obstacles of 3D face recognition. Beumier et al. [4] proposed two algorithms for facial surface registration, comparing central and lateral profiles in curvature space to achieve 3D face recognition. However, the method had a high computational cost and required high-resolution 3D face data because curvature-based features are sensitive to noise. Huttenlocher et al. [5] defined the Hausdorff distance metric, which was first applied to 3D face recognition by Pan et al. [6] and Lee and Shim [7], and improved by Russ et al. [8]. Chua et al. [9] first applied Iterative Closest Point (ICP) [10] to 3D face registration, and then separated the rigid parts of the face from the non-rigid parts using a Gaussian distribution to achieve recognition. Zhong et al. [11] used 3D depth images for recognition: Gabor filters applied to the 3D face images effectively extract intrinsic discriminative information, and a Learned Visual Codebook (LVC) is constructed by learning the centers from K-means clustering of the filter response vectors. The 3D face images are then represented by their LVC coefficients, and recognition is achieved with a Nearest Neighbor (NN) classifier. They further developed a quadtree clustering algorithm to estimate the facial codes, which further boosts performance [12]. Chang et al. [13] used PCA to extract intrinsic discriminant feature vectors from 2D intensity images and 3D depth images respectively, and then fused the 2D and 3D results to obtain the final performance. In recent years, many new 3D face recognition methods demonstrated on the FRGC v2 data have achieved good performance. Faltemier et al. [14] divided the whole 3D face image into 28 small regions for 3D face recognition and fused the results from independently matched regions. Yueming Wang et al. [15] extracted Gabor, LBP, and Haar

    Foundations: This work is supported by the National Natural Science Foundation of China (Nos. 60973060 and 61003114); the Specialized Research Fund for the Doctoral Program of Higher Education (No. 200800040008); the Doctoral Candidate Outstanding Innovation Foundation (No. K11JB00290); and the Fundamental Research Funds for the Central Universities (Nos. 2011JBM020 and 2011JBM022).

    Brief author introduction: Wang Xueqiao (1986-), female, 3D face recognition. E-mail: silvia.wxq@gmail.com



    features from the depth image, and then the most discriminative local features were selected by boosting and trained as weak classifiers for assembling three collective strong classifiers. Mian et al. [16] used a Spherical Face Representation (SFR) for the 3D facial data and the SIFT descriptor for the 2D data to train a rejection classifier. The remaining faces were verified using a region-based matching approach that is robust to facial expression. Berretti et al. [17] proposed an approach that uses a graph to reflect the geometrical information of the 3D facial surface, so that the relevant information among neighboring points can be encoded into a compact representation. Alyuz et al. [18] proposed expression-resistant 3D face recognition based on regional registration, in which a region-based registration scheme establishes the relationship among all the gallery samples in a single registration pass via common region models. Zhang et al. [19] proposed a novel local feature with the distinct characteristic of resolution invariance. They fused six different scale-invariant similarity measures at the score level, which effectively overcomes the influence of large facial expression variations. Since both 2D and 3D faces contain important information about a person, this paper uses a fusion method for face recognition.

    We use the Dual-Tree Complex Wavelet Transform (DT-CWT) for face recognition because of its attractive properties: approximate shift invariance, approximate rotation invariance, orientation selectivity, and efficient computation. DT-CWT descriptors use the squared magnitude of a complex wavelet coefficient to evaluate the spectral energy at the space, scale, and orientation of a particular location. Shift invariance can handle depth faces that are slightly shifted because of poor nose detection; rotation invariance can cope with rotated faces caused by inaccurate ICP registration; and orientation selectivity can tolerate small facial expression changes. 2D and 3D facial representations extracted by the DT-CWT are introduced to reflect the facial geometry properties in this paper. The level-four high-frequency components of the 2D texture image and the 3D depth image are obtained respectively, and Linear Discriminant Analysis (LDA) is then used to obtain the feature vectors. Cosine distance is used to establish the similarity matrices, and finally the two similarity matrices are fused.
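The matching stage described above (cosine similarity between feature vectors, followed by score-level fusion of the 2D and 3D similarity matrices) can be sketched in pure Python. This is only an illustration: the DT-CWT feature extraction and LDA projection are assumed to have already produced fixed-length feature vectors, the function names are hypothetical, and an equally weighted sum rule is assumed since the paper does not state the fusion weights at this point.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def similarity_matrix(gallery, probes):
    """All-pairs cosine similarity between gallery and probe feature sets."""
    return [[cosine_similarity(g, p) for p in probes] for g in gallery]

def fuse(sim_2d, sim_3d, w=0.5):
    """Weighted sum-rule fusion of the 2D and 3D similarity matrices
    (equal weights assumed here)."""
    return [[w * s2 + (1 - w) * s3 for s2, s3 in zip(row2, row3)]
            for row2, row3 in zip(sim_2d, sim_3d)]
```

In practice the similarity scores are usually normalized to a common range before fusion; the sketch omits that step.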

    The paper is organized as follows. Section 1 describes the data preprocessing method; Section 2 presents the DT-CWT feature; Section 3 reports the experimental results of face recognition; and Section 4 draws conclusions.

1 Data Preprocessing

    Because some 3D faces contain spikes and noise, a 3×3 Gaussian filter is first applied to remove them. Since the texture channel and the 3D face data of the FRGC database [22] correspond well, we use the Ada-boost face detection method [20] on the 2D texture image to help extract the 3D facial region. Fig. 1 shows some examples of detected faces, which we call texture images in this paper.

     Fig. 1 Texture images
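The 3×3 Gaussian smoothing step can be sketched as follows. This is a pure-Python illustration with replicated borders; the exact kernel weights and border handling used by the authors are not stated, so the standard 1-2-1 binomial approximation of a Gaussian is assumed.

```python
# 3x3 binomial approximation of a Gaussian kernel; weights sum to 16.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def gaussian_smooth(depth):
    """Smooth a depth image (list of rows) with a 3x3 Gaussian kernel,
    replicating border pixels, to suppress spikes and noise."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), h - 1)  # replicate edges
                    jj = min(max(j + dj, 0), w - 1)
                    acc += KERNEL[di + 1][dj + 1] * depth[ii][jj]
            out[i][j] = acc / 16.0
    return out
```

A constant region passes through unchanged, while an isolated spike is attenuated to a quarter of its height, which is the intended effect on sensor spikes.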

     1.1 Nose Detection

    Before detecting the nose tip, we first find the central stripe. The nose tip lies on the central stripe, so the area that must be searched for the nose tip is reduced. The stripe is 2 mm wide and is used for the subsequent nose detection. Our method is simpler than other central-stripe-finding



    methods [15]. Let F = {p_i | p_i = (x_i, y_i, z_i), 1 ≤ i ≤ N} denote the point set of a 3D face. Firstly, we map every point p_i(x_i, y_i, z_i) (1 ≤ i ≤ N) to p'_i(x'_i, y'_i, z'_i). The transformation is as follows:

        x'_i = max_i(x_i) + min_i(x_i) − x_i
        y'_i = y_i                                (1)
        z'_i = z_i

    Let F' represent the point set of the transformed (mirrored) 3D face. Fig. 2(a) shows an example of the transformation, where the green face is the transformed version of the red one. Then we use ICP [10] for the registration between F' and F: F' is moved to F'', and the transformation matrix M is recorded. The result is shown in Fig. 2(b). Because three points determine a plane, we choose three points a, b, c in F and find the corresponding points a', b', c' in F''. After this, we calculate the three points a'', b'', c'' of the plane that divides the face into two parts. They are calculated as in equation (2):

        a'' = (a + a') / 2,  b'' = (b + b') / 2,  c'' = (c + c') / 2        (2)

    We use the positions of these three points a''(x_1, y_1, z_1), b''(x_2, y_2, z_2), c''(x_3, y_3, z_3) to establish the plane.
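The mirroring and midpoint steps above can be sketched in pure Python. The ICP registration [10] between the mirrored face and the original is omitted here, so point correspondences are assumed to be given; deriving the plane's normal from the three midpoints via a cross product is a standard follow-up step, added here for illustration rather than quoted from the text.

```python
def mirror_x(points):
    """Equation (1): reflect each point about the centre of the face's
    x extent; y and z are unchanged."""
    xmax = max(p[0] for p in points)
    xmin = min(p[0] for p in points)
    return [(xmax + xmin - x, y, z) for (x, y, z) in points]

def midpoint(p, q):
    """Equation (2): midpoint of a point and its registered mirror
    correspondence; such midpoints lie on the dividing plane."""
    return tuple((a + b) / 2.0 for a, b in zip(p, q))

def plane_normal(a, b, c):
    """Normal of the plane through three midpoints, via the cross
    product of two in-plane edge vectors."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])
```

With the normal n = plane_normal(a'', b'', c'') and any one of the midpoints, the plane is fully determined, which is what the nose detection step that follows relies on.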