Open Collections
UBC Theses and Dissertations
Meta-reinforcement learning approaches to process control
McClement, Daniel George
Abstract
Meta-learning is a branch of machine learning that trains neural network models to synthesize a wide variety of data in order to rapidly solve new problems. Many industrial processes have similar and well-understood dynamics, which suggests that a generalizable controller can be created through meta-learning. In this work, two meta-reinforcement learning (meta-RL) control strategies are formulated. First, a deep reinforcement learning-based controller is introduced that uses accumulated process data to adapt to different systems or control objectives. Second, a meta-RL strategy for tuning fixed-structure controllers is developed. This tuning strategy exploits known offline information during training, such as system gains or time constants, yet tunes controllers for novel systems in a completely model-free fashion. The meta-RL tuning strategy has a recurrent structure that accumulates "context" about the current process dynamics through a hidden state variable. This end-to-end architecture enables the agent to adapt automatically to changes in the process dynamics; moreover, the same agent can be deployed on systems with previously unseen nonlinearities and timescales. In the tests reported here, the meta-RL tuning strategy was trained entirely offline, yet produced good control results in novel settings.
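The recurrent architecture described above — a hidden state that accumulates "context" about the process dynamics and is mapped to fixed-structure controller gains — can be illustrated with a minimal sketch. This is not the thesis's implementation: the RNN cell, dimensions, and the (error, previous action, reward) input are illustrative assumptions, and the weights are untrained placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

class RecurrentTunerPolicy:
    """Sketch of a recurrent meta-RL tuning policy (illustrative only).

    The hidden state h accumulates "context" about the current process
    dynamics from a stream of (error, previous action, reward) tuples;
    an output head maps h to positive fixed-structure controller gains
    (e.g. Kp, Ki for a PI controller). Weights are random placeholders.
    """

    def __init__(self, obs_dim=3, hidden_dim=8, n_gains=2):
        self.W_in = rng.normal(0.0, 0.1, (hidden_dim, obs_dim))
        self.W_h = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.W_out = rng.normal(0.0, 0.1, (n_gains, hidden_dim))
        self.h = np.zeros(hidden_dim)

    def step(self, obs):
        # Vanilla RNN cell: fold the new observation into the context.
        self.h = np.tanh(self.W_in @ obs + self.W_h @ self.h)
        # Exponential keeps the emitted controller gains positive.
        return np.exp(self.W_out @ self.h)

# Feed a short closed-loop trajectory; the hidden state adapts online,
# with no process model required at deployment time.
policy = RecurrentTunerPolicy()
for t in range(5):
    error, prev_action, reward = np.sin(t), 0.1 * t, -abs(np.sin(t))
    gains = policy.step(np.array([error, prev_action, reward]))
```

Because the adaptation lives entirely in the hidden state, the same trained agent can, in principle, be dropped onto a new process and re-tune itself from observed data alone — the property the abstract calls "model-free" deployment.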
Item Metadata
Title | Meta-reinforcement learning approaches to process control
Creator | McClement, Daniel George
Supervisor |
Publisher | University of British Columbia
Date Issued | 2022
Description |
(Same as the Abstract above.)
|
Genre |
Type |
Language | eng
Date Available | 2022-05-03
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-ShareAlike 4.0 International
DOI | 10.14288/1.0413212
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2022-11
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace