BIRS Workshop Lecture Videos
A blob method for diffusion and applications to sampling and two layer neural networks. Craig, Katy
Description
Given a desired target distribution and an initial guess of that distribution, composed of finitely many samples, what is the best way to evolve the locations of the samples so that they more accurately represent the desired distribution? A classical solution to this problem is to allow the samples to evolve according to Langevin dynamics, the stochastic particle method corresponding to the Fokker-Planck equation. In today's talk, I will contrast this classical approach with a deterministic particle method corresponding to the porous medium equation. This method corresponds exactly to the mean-field dynamics of training a two-layer neural network with a radial basis function activation function. We prove that, as the number of samples increases and the variance of the radial basis function goes to zero, the particle method converges to a bounded entropy solution of the porous medium equation. As a consequence, we obtain both a novel method for sampling probability distributions and insight into the training dynamics of two-layer neural networks in the mean-field regime. This is joint work with Karthik Elamvazhuthi (UCLA), Matt Haberland (Cal Poly), and Olga Turanova (Michigan State).
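The abstract contrasts stochastic Langevin dynamics with a deterministic, mollified ("blob") particle update. The sketch below is only an illustration of that structural contrast, not the scheme from the talk: it assumes a standard Gaussian target in one dimension, a Gaussian mollifier, and a porous-medium-type exponent m = 2, so the Brownian noise of Langevin dynamics is replaced by a deterministic repulsion built from the gradient of the mollified empirical density. All function names and parameter values are invented for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D target: a standard Gaussian, so grad log pi(x) = -x.
def grad_log_target(x):
    return -x

# Gaussian mollifier phi_eps and its derivative (used by the blob-style update).
def grad_mollifier(r, eps):
    phi = np.exp(-r**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)
    return -(r / eps**2) * phi

n, dt, steps, eps = 200, 1e-2, 2000, 0.3
x0 = rng.uniform(-3.0, 3.0, n)          # initial guess: finitely many samples
x_langevin, x_blob = x0.copy(), x0.copy()

for _ in range(steps):
    # (1) Langevin dynamics: stochastic particles whose law follows Fokker-Planck.
    x_langevin += dt * grad_log_target(x_langevin) \
                  + np.sqrt(2.0 * dt) * rng.standard_normal(n)

    # (2) Deterministic blob-style particles: the noise term is replaced by a
    #     pairwise repulsion, the gradient of the mollified empirical density
    #     (porous-medium-type diffusion with exponent m = 2 in this toy version).
    pairwise = x_blob[:, None] - x_blob[None, :]          # X_i - X_j
    repulsion = -(2.0 / n) * grad_mollifier(pairwise, eps).sum(axis=1)
    x_blob += dt * (grad_log_target(x_blob) + repulsion)

# Print summary statistics of the two particle systems for comparison.
print("Langevin   mean %.3f  std %.3f" % (x_langevin.mean(), x_langevin.std()))
print("Blob-style mean %.3f  std %.3f" % (x_blob.mean(), x_blob.std()))
```

The point of the sketch is only the structural difference: the Langevin update injects fresh noise at every step, while the blob-style update is fully deterministic given the ensemble, with diffusion emerging from the mollified pairwise interaction.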
Item Metadata
Title | A blob method for diffusion and applications to sampling and two layer neural networks.
Creator | Craig, Katy
Publisher | Banff International Research Station for Mathematical Innovation and Discovery
Date Issued | 2021-06-25T11:30
Extent | 50.0 minutes
Subject |
Type |
File Format | video/mp4
Language | eng
Notes | Author affiliation: University of California Santa Barbara
Series |
Date Available | 2023-10-24
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0437313
URI |
Affiliation |
Peer Review Status | Unreviewed
Scholarly Level | Faculty
Rights URI |
Aggregated Source Repository | DSpace