ROBUST FACE RECOGNITION

We present a novel algorithm for frontal face recognition that is robust to occlusion. Given a set of training images and a test image, we are required to find the identity of the test image. The test image may be partially occluded, and the location and nature of the occlusion are unknown. In the following illustration, the top test image shows a real-life situation in which a person's face is intentionally occluded by a disguise. In the bottom test image, 50% of the pixels, chosen at random, have been replaced by values distributed uniformly between zero and the maximum grayscale pixel value.

The basic idea is to represent the test image as a sparse linear combination of the entire training set. Occlusion can be modeled as a sparse error that affects only a few pixels in the image. Since the error due to occlusion is highly non-Gaussian and can be arbitrarily large, the least-squares technique is not appropriate here.
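The failure of least squares under this error model is easy to demonstrate numerically. The sketch below (our own toy setup, not the paper's code or data) corrupts a handful of entries of an observation vector with arbitrarily large errors and shows that the least-squares coefficient estimate is badly thrown off, even though only 10% of the entries are affected.

```python
# Toy demonstration (assumed setup): least squares breaks down when a few
# entries of the observation are grossly corrupted, because the squared
# loss lets large outliers dominate the fit.
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 10
A = rng.standard_normal((m, n))     # stand-in for stacked training images
x_true = rng.standard_normal(n)
y = A @ x_true                      # clean observation

# Corrupt 10% of the entries with large, non-Gaussian errors.
idx = rng.choice(m, size=10, replace=False)
y_corrupt = y.copy()
y_corrupt[idx] += rng.uniform(50, 100, size=10)

x_ls, *_ = np.linalg.lstsq(A, y_corrupt, rcond=None)
err = np.linalg.norm(x_ls - x_true)
print(err)  # far from zero, despite 90% of the entries being exact
```

With no corruption the same call recovers `x_true` to machine precision; the sparse gross errors are what destroy the estimate, which motivates modeling them explicitly.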

First, we stack all the training images as column vectors of a matrix *A*. Let *y* be the test image. Then we have the equation *y = Ax + e*, where *x* is a sparse coefficient vector and *e* is the error due to occlusion. The equation can be rewritten as *y = Bf*, where *B* = [ *A* *I* ], *I* is the identity matrix, and *f* = [ *x*^{T} *e*^{T} ]^{T}. Thus, by minimizing the L1-norm of *f*, we impose sparsity constraints on both *x* and *e* simultaneously. The theory of convex polytopes in high dimensions forms the basis of this technique. The following figure illustrates the geometry involved:
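The L1 minimization above can be cast as a linear program via the standard split *f = u − v* with *u, v ≥ 0*. The sketch below (a minimal illustration with synthetic data, not the paper's implementation) builds *B* = [ *A* *I* ] and solves min ‖*f*‖₁ subject to *Bf = y* with `scipy.optimize.linprog`; in this small, very sparse regime the L1 solution typically coincides with the true sparse *f*.

```python
# Minimal sketch (synthetic data): solve min ||f||_1  s.t.  B f = y
# as a linear program, with B = [A I] so f stacks the sparse
# coefficients x and the sparse occlusion error e.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 50, 8
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                 # unit-norm "training images"

x_true = np.zeros(n)
x_true[[1, 4]] = [0.7, 0.3]                    # sparse combination of images
e_true = np.zeros(m)
e_true[[3, 17, 40]] = [2.0, -1.5, 3.0]         # sparse occlusion error
y = A @ x_true + e_true

B = np.hstack([A, np.eye(m)])                  # B = [A I]
N = B.shape[1]

# LP reformulation: f = u - v with u, v >= 0, minimize sum(u) + sum(v).
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([B, -B]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
f_hat = res.x[:N] - res.x[N:]
x_hat, e_hat = f_hat[:n], f_hat[n:]
```

By construction the solver returns a feasible *f* whose L1-norm is no larger than that of the true *(x, e)* pair; when the true solution is sparse enough, the two agree.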

The vertices of the polytope consist of the training images and synthetic points with a single non-zero pixel. The idea here is to find the face of the polytope that can best represent the signal and the occlusion as a linear combination of its vertices.

The algorithm was tested on the AR database and the Extended Yale B database. We present a couple of examples below to showcase the efficacy of our algorithm.

Our method represents the test image (left) as the sum of a sparse linear combination of the training images (middle) and a sparse error due to occlusion or corruption (right). The red coefficients correspond to the training images of the occluded person.
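The identity itself can then be read off from the recovered coefficients by a per-class residual test: keep only the coefficients belonging to one subject, subtract the estimated occlusion error, and pick the subject that best reconstructs the test image. The sketch below (our own naming and synthetic two-subject setup, not the paper's code) illustrates this classification step end to end.

```python
# Hedged sketch of the classification step on synthetic data: after the
# L1 solve, assign the test image to the class whose training images give
# the smallest residual once the estimated occlusion error is removed.
import numpy as np
from scipy.optimize import linprog

def l1_solve(B, y):
    """min ||f||_1 s.t. B f = y, via the standard LP split f = u - v."""
    N = B.shape[1]
    res = linprog(np.ones(2 * N), A_eq=np.hstack([B, -B]), b_eq=y,
                  bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(2)
m = 40
labels = np.array([0, 0, 0, 1, 1, 1])          # two subjects, 3 images each
A = rng.standard_normal((m, labels.size))
A /= np.linalg.norm(A, axis=0)

# Test image: combination of subject 0's images plus sparse occlusion.
y = 0.6 * A[:, 0] + 0.4 * A[:, 2]
y[[5, 20]] += [3.0, -2.0]

f = l1_solve(np.hstack([A, np.eye(m)]), y)
x, e = f[:labels.size], f[labels.size:]

# Per-class residual: zero out all coefficients except one class's.
residuals = [np.linalg.norm(y - A @ np.where(labels == c, x, 0.0) - e)
             for c in (0, 1)]
print(int(np.argmin(residuals)))
```

In this toy example the minimum residual picks out subject 0, mirroring how the red coefficients in the figure concentrate on the correct person's training images.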

For more information on the algorithm and results, please refer to our paper, Robust Face Recognition via Sparse Representation.
