Reconstruction-Based Metric Learning for Unconstrained Face Verification

In this paper, we propose a reconstruction-based metric learning method to learn a discriminative distance metric for unconstrained face verification. Unlike conventional metric learning methods, which consider only the label information of training samples and ignore the reconstruction residual information in the learning procedure, we apply a reconstruction criterion to learn a discriminative distance metric. For each training sample, the distance metric is learned by enforcing a margin between the intraclass sparse reconstruction residual and the interclass sparse reconstruction residual, so that the reconstruction residuals of training samples can be effectively exploited to capture the within-class and between-class variations. To better use multiple features for distance metric learning, we further propose a reconstruction-based multimetric learning method that collaboratively learns multiple distance metrics, one for each feature descriptor, to remove information that is uncorrelated with recognition. We evaluate our proposed methods on the Labeled Faces in the Wild (LFW) and YouTube Faces data sets, and our experimental results clearly show the superiority of our methods over both previous metric learning methods and several state-of-the-art unconstrained face verification methods.
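To make the reconstruction criterion concrete, the following is a minimal sketch of the core idea described above: under a learned linear transform L (so the metric is M = LᵀL), each training sample is sparsely reconstructed from same-class and different-class dictionaries, and a hinge loss enforces a margin between the two residuals. The function names, the plain subgradient solver for the sparse codes, and all hyperparameters here are illustrative assumptions, not the paper's exact formulation or optimization procedure.

```python
import numpy as np

def sparse_residual(L, x, D, lam=0.1, n_iter=100, lr=0.01):
    """Sparse reconstruction residual ||L (x - D a)||^2 with an L1 penalty on a,
    minimized here by a simple (sub)gradient descent for illustration only."""
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        r = L @ (x - D @ a)                      # residual in the metric space
        grad = -D.T @ (L.T @ r) + lam * np.sign(a)
        a -= lr * grad
    r = L @ (x - D @ a)
    return float(r @ r)

def margin_loss(L, x, D_same, D_other, margin=1.0):
    """Hinge loss pushing the intraclass residual to be smaller than the
    interclass residual by at least `margin` under the metric M = L.T @ L."""
    r_within = sparse_residual(L, x, D_same)     # dictionary of same-class samples
    r_between = sparse_residual(L, x, D_other)   # dictionary of other-class samples
    return max(0.0, margin + r_within - r_between)
```

In the full method, L would be updated by minimizing this type of margin loss accumulated over all training samples; in the multimetric variant described in the abstract, one such transform would presumably be learned per feature descriptor, with the metrics trained jointly rather than independently.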