BIRS Workshop Lecture Videos
What can we learn from deep-learning? Models and validation of neurobiological learning inspired by modern deep artificial neural networks

McDonnell, Mark
Description

In the field of machine learning, ‘deep-learning’ has become spectacularly successful very rapidly, and now frequently achieves better-than-human performance on difficult pattern-recognition tasks. It seems that the decades-old theoretical potential of artificial neural networks (ANNs) is finally being realized. For computer vision problems, convolutional ANNs are used and are often characterized as “biologically inspired,” owing to their hierarchy of layers of nonlinear processing units and pooling stages, and to learnt spatial filters that resemble simple and complex cells.

However, this resemblance is superficial. An open challenge for computational neuroscience is to determine whether the spectacular success of deep-learning can offer insights for realistic models of neurobiological learning that are constrained by known anatomy and physiology. I will discuss this challenge and argue that we need to validate proposed neurobiological learning rules on challenging real data sets like those used in deep-learning, and to ensure that their learning capability is comparable to that of deep ANNs.

To illustrate this approach, I will show mathematically how a standard cost function used for supervised training of ANNs can be decomposed into an unsupervised decorrelation stage and a supervised Hebbian-like stage. With this insight, I argue that this form of learning is feasible as a neurobiological learning mechanism in recurrently connected layer 2/3 and layer 4 cortical neurons. I will further show that the model can learn to classify patterns very effectively (e.g. images of handwritten digits from the MNIST benchmark), with error rates comparable to state-of-the-art deep-learning algorithms, i.e. less than 1%.
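The decomposition mentioned in the final paragraph has a familiar concrete instance in the linear least-squares readout; the abstract does not specify the exact cost function or network, so the following is a minimal illustrative sketch under that assumption, not the talk's actual derivation.

```latex
% Sketch of one standard decomposition (assumed form; the talk's exact
% cost function may differ). X holds unit activations over m samples,
% Y the corresponding one-hot targets, W the readout weights.
\[
  C(W) = \lVert Y - W X \rVert_F^2,
  \qquad
  \frac{\partial C}{\partial W} = 0
  \;\Longrightarrow\;
  W^{\ast}
  = \underbrace{\bigl(Y X^{\top}\bigr)}_{\text{supervised, Hebbian-like}}
    \underbrace{\bigl(X X^{\top}\bigr)^{-1}}_{\text{unsupervised decorrelation}}.
\]
```

The first factor accumulates correlations between presynaptic activity and postsynaptic targets (Hebbian-like, and supervised because it uses the labels); the second depends only on the input statistics and decorrelates (whitens) them, so it can in principle be learned without labels. A short numerical check of the same identity, with hypothetical sizes and random data chosen purely for illustration:

```python
import numpy as np

# Hypothetical check of the two-stage least-squares readout; the sizes,
# names, and data are made up and are not from the talk.
rng = np.random.default_rng(0)
n_units, n_classes, n_samples = 64, 10, 1000

X = rng.standard_normal((n_units, n_samples))   # unit activations
labels = rng.integers(0, n_classes, n_samples)
Y = np.eye(n_classes)[labels].T                 # one-hot targets, shape (10, 1000)

hebbian = Y @ X.T                               # supervised, Hebbian-like stage
decorrelation = np.linalg.inv(X @ X.T)          # unsupervised decorrelation stage
W = hebbian @ decorrelation                     # two-stage readout weights

# Agrees with the direct least-squares (pseudoinverse) solution.
print(np.allclose(W, Y @ np.linalg.pinv(X)))    # True
```

In practice the inverse would be regularized (e.g. inverting X Xᵀ + λI for a small ridge term λ), but the unregularized identity keeps the two stages explicit.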
Item Metadata

| Field | Value |
| --- | --- |
| Title | What can we learn from deep-learning? Models and validation of neurobiological learning inspired by modern deep artificial neural networks |
| Creator | McDonnell, Mark |
| Publisher | Banff International Research Station for Mathematical Innovation and Discovery |
| Date Issued | 2017-02-28T10:47 |
| Extent | 70 minutes |
| Subject | |
| Type | |
| File Format | video/mp4 |
| Language | eng |
| Notes | Author affiliation: University of South Australia |
| Series | |
| Date Available | 2017-08-28 |
| Provider | Vancouver : University of British Columbia Library |
| Rights | Attribution-NonCommercial-NoDerivatives 4.0 International |
| DOI | 10.14288/1.0354795 |
| URI | |
| Affiliation | |
| Peer Review Status | Unreviewed |
| Scholarly Level | Faculty |
| Rights URI | |
| Aggregated Source Repository | DSpace |