UBC Theses and Dissertations


Compressive classification for face recognition
Majumdar, Angshul

Abstract

The problem of face recognition has been studied widely over the past two decades. Although considerable progress has been made, for example in achieving better recognition rates and in handling difficult environmental conditions, the technology has not seen widespread deployment. The most probable reason is that practical concerns such as communication costs and computational overhead have not received adequate consideration. This thesis addresses the practical face recognition problem, for example the scenarios that arise in client recognition at Automated Teller Machines or employee authentication in large offices. In such scenarios the database of faces is updated regularly, and the recognition system must be updated at the same pace with minimal computational and communication costs. Traditional machine learning methods cannot handle such a scenario because they assume that training is performed offline. We develop novel methods that solve this problem from a completely new perspective. Face recognition consists of two main parts: dimensionality reduction followed by classification. This thesis employs the fastest available dimensionality reduction technique, random projections. Most traditional classifiers do not give good classification results when the dimensionality of the data has been reduced in this way; this work therefore proposes a new class of classifiers that are robust to data whose dimensionality has been reduced by random projections. The Group Sparse Classifier (GSC) is based on the assumption that the training samples of each class approximately form a linear basis for any new test sample belonging to the same class. At the core of the GSC is an optimization problem that gives very good results but is somewhat slow to solve. This shortcoming is remedied in the Fast Group Sparse Classifier, where the computationally intensive optimization is replaced by a fast greedy algorithm.
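To illustrate the random-projection step described above, here is a minimal sketch (assuming NumPy; the dimensions and variable names are illustrative, not taken from the thesis) of reducing vectorised face images with a Gaussian random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, n = 1024, 64, 20           # original dim, reduced dim, number of samples
X = rng.standard_normal((d, n))  # columns stand in for vectorised face images

# Gaussian random projection: entries i.i.d. N(0, 1/m), so pairwise
# distances are approximately preserved (Johnson-Lindenstrauss lemma).
# No training data is needed to build R, which is what makes this the
# cheapest possible dimensionality reduction when the database changes.
R = rng.standard_normal((m, d)) / np.sqrt(m)
Y = R @ X                        # reduced-dimensional data, shape (m, n)
print(Y.shape)
```

Because the projection matrix is data-independent, adding a new face to the database requires only one matrix-vector product, with no retraining.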
The Nearest Subspace Classifier is based on the assumption that the samples from a particular class lie on a subspace specific to that class. This assumption leads to an optimization problem that can be solved very quickly. In this work the robustness of these classifiers is proved theoretically and validated by thorough experimentation.
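The nearest-subspace rule can be sketched as follows (a toy illustration with synthetic data, assuming NumPy; this is not the thesis code): each class is represented by the span of its training columns, and a test sample is assigned to the class whose subspace leaves the smallest least-squares residual.

```python
import numpy as np

def nearest_subspace(train, test):
    """train: dict mapping class label -> (d, n_c) matrix whose columns
    are that class's training samples.  Assign `test` (length-d vector)
    to the class whose training subspace gives the smallest residual."""
    best_label, best_res = None, np.inf
    for label, A in train.items():
        # Project `test` onto span(A) via least squares.
        coef, *_ = np.linalg.lstsq(A, test, rcond=None)
        res = np.linalg.norm(test - A @ coef)
        if res < best_res:
            best_label, best_res = label, res
    return best_label

rng = np.random.default_rng(1)
d = 30
# Two classes, each lying on its own 3-dimensional subspace of R^30.
B0, B1 = rng.standard_normal((d, 3)), rng.standard_normal((d, 3))
train = {0: B0 @ rng.standard_normal((3, 5)),
         1: B1 @ rng.standard_normal((3, 5))}
test = B0 @ rng.standard_normal(3)    # a new sample from class 0
print(nearest_subspace(train, test))  # prints 0
```

Each class's residual needs only one small least-squares solve, which is why this classifier is fast; the per-class projection can even be precomputed once per database update.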



License

Attribution-NonCommercial-NoDerivatives 4.0 International
