BIRS Workshop Lecture Videos
Theory of Deep Convolutional Neural Networks and Distributed Learning
Zhou, Ding-Xuan
Description
Deep learning has been widely applied and has brought breakthroughs in speech recognition, computer vision, and many other domains. The deep neural network architectures and computational issues involved have been well studied in machine learning. However, a theoretical foundation for understanding the approximation or generalization ability of deep learning methods built on structured architectures such as deep convolutional neural networks is still lacking. This talk describes a mathematical theory of deep convolutional neural networks (CNNs). In particular, we show the universality of a deep CNN, meaning that it can approximate any continuous function to arbitrary accuracy when the depth of the network is large enough. Our quantitative estimate, given tightly in terms of the number of free parameters to be computed, verifies the efficiency of deep CNNs in dealing with high-dimensional data. Some related distributed learning algorithms will also be discussed.
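The universality claim in the abstract can be written schematically as follows (a sketch only; the precise network class, activation function, and constants are given in the talk itself, not in this record): for any continuous function on a compact domain and any target accuracy, a sufficiently deep CNN achieves that accuracy in the uniform norm.

```latex
% Schematic universality statement (assumed setting: compact domain, sup norm).
\[
  \forall f \in C(\Omega),\ \Omega \subset \mathbb{R}^{d}\ \text{compact},\
  \forall \varepsilon > 0 :\
  \exists J \in \mathbb{N}\ \text{and a depth-}J\ \text{CNN}\ f_J\ \text{with}\
  \| f - f_J \|_{C(\Omega)} = \sup_{x \in \Omega} \lvert f(x) - f_J(x) \rvert \le \varepsilon .
\]
```

As a purely illustrative companion, here is a minimal NumPy sketch of a generic deep 1-D CNN with ReLU activations, in which every layer is a full convolution so that each extra layer adds a fixed number of free parameters. The architecture, names, and random parameters are assumptions for illustration; the talk's result concerns the existence of parameters reaching a prescribed accuracy, not any specific choice.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1d_full(x, w):
    # "Full" 1-D convolution: output length is len(x) + len(w) - 1,
    # so the representation widens by (filter length - 1) per layer.
    return np.convolve(x, w, mode="full")

def deep_cnn(x, filters, biases):
    # Stack J convolutional layers with ReLU activations.
    # filters[j] plays the role of the free filter parameters of layer j;
    # biases[j] must match that layer's output length.
    h = x
    for w, b in zip(filters, biases):
        h = relu(conv1d_full(h, w) + b)
    return h

# Toy usage with random parameters (illustration only).
rng = np.random.default_rng(0)
d, J, s = 8, 4, 3                      # input dimension, depth, filter length
x = rng.standard_normal(d)
filters = [rng.standard_normal(s) for _ in range(J)]
out_lens = [d + (j + 1) * (s - 1) for j in range(J)]
biases = [rng.standard_normal(n) for n in out_lens]
print(deep_cnn(x, filters, biases).shape)  # (16,) = (d + J*(s - 1),)
```

With filter length s, the output width after depth J is d + J(s - 1), so the number of free parameters grows only linearly in the depth; this is in the spirit of the parameter counts behind the abstract's quantitative estimate.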
Item Metadata
Title | Theory of Deep Convolutional Neural Networks and Distributed Learning
Creator | Zhou, Ding-Xuan
Publisher | Banff International Research Station for Mathematical Innovation and Discovery
Date Issued | 2018-05-21T11:53
Extent | 37.0 minutes
Subject |
Type |
File Format | video/mp4
Language | eng
Notes | Author affiliation: City University of Hong Kong
Series |
Date Available | 2019-03-20
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0377193
URI |
Affiliation |
Peer Review Status | Unreviewed
Scholarly Level | Faculty
Rights URI | https://creativecommons.org/licenses/by-nc-nd/4.0/
Aggregated Source Repository | DSpace