Recklessly approximate sparse coding
Denil, Misha
Abstract
The introduction of the so-called “K-means” or “triangle” features by Coates, Lee, and Ng (2011) caused significant discussion in the deep learning community. These simple features achieve state-of-the-art performance on standard image classification benchmarks, outperforming much more sophisticated methods, including deep belief networks, convolutional nets, factored RBMs, mcRBMs, convolutional RBMs, sparse autoencoders, and several others. Moreover, the features are extremely simple and cheap to compute.

Several intuitive arguments have been put forward to explain this remarkable performance, yet no mathematical justification has been offered. In Coates and Ng (2011), the authors improve on the triangle features with “soft threshold” features, adding a hyperparameter to tune performance, and compare these features to sparse coding. Soft thresholding and sparse coding are found to often yield similar classification results, though the soft threshold features are much faster to compute.

The main result of this thesis is to show that the soft threshold features are realized as a single step of proximal gradient descent on a non-negative sparse coding objective. This result is important because it explains the success of the soft threshold features and shows that even very approximate solutions to the sparse coding problem suffice to build effective classifiers.
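To make the stated result concrete, here is a minimal NumPy sketch (not code from the thesis; the names `D`, `lam`, and `Z` are illustrative) showing that one proximal gradient step on the non-negative sparse coding objective min_{Z≥0} ½‖X − ZDᵀ‖² + λ‖Z‖₁, started from the zero code with unit step size, reproduces the soft threshold features f(x) = max(0, Dᵀx − λ) exactly:

```python
import numpy as np

def soft_threshold_features(X, D, lam):
    """Soft threshold features of Coates and Ng (2011):
    f(x) = max(0, D^T x - lam), applied to each row of X."""
    return np.maximum(0.0, X @ D - lam)

def prox_grad_step(X, D, lam, Z=None, step=1.0):
    """One proximal gradient step on the non-negative sparse coding
    objective  min_{Z >= 0}  0.5 * ||X - Z D^T||^2 + lam * ||Z||_1.

    The gradient of the smooth part at Z is -(X - Z D^T) D, and the
    proximal operator of the non-negative L1 term is a one-sided
    soft threshold: v -> max(0, v - step * lam)."""
    if Z is None:
        Z = np.zeros((X.shape[0], D.shape[1]))  # start from the zero code
    grad = -(X - Z @ D.T) @ D
    return np.maximum(0.0, Z - step * grad - step * lam)

# With Z = 0 and step = 1, the proximal step reduces exactly to the
# soft threshold features max(0, X @ D - lam).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 16))    # 5 inputs of dimension 16
D = rng.standard_normal((16, 32))   # dictionary with 32 atoms
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
lam = 0.5

assert np.allclose(soft_threshold_features(X, D, lam),
                   prox_grad_step(X, D, lam))
```

Running further proximal steps from this point would refine the codes toward the exact non-negative sparse coding solution; the thesis's observation is that stopping after the first step already yields features good enough for classification.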
Item Metadata

Title | Recklessly approximate sparse coding
Creator | Denil, Misha
Publisher | University of British Columbia
Date Issued | 2012
Genre |
Type |
Language | eng
Date Available | 2012-12-06
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0052215
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2013-05
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace