UBC Theses and Dissertations
Deep reinforcement learning agents for industrial control system design Lawrence, Nathan P.
Deep reinforcement learning (RL) is an optimization-driven framework for producing control strategies without explicit reliance on process models. Powerful new methods in RL are often showcased for their performance on difficult simulated tasks. In contrast, industrial control system design has many intrinsic features that make "nominal" RL methods unsafe and inefficient. We develop methods for automatic control based on RL techniques while balancing key industrial requirements, such as interpretability, efficiency, and stability. A practical testbed for new control techniques is proportional-integral (PI) control due to its simple structure and prevalence in industry. In particular, PI controllers are elegantly compatible with RL methods as trainable policy "networks". We deploy this idea on a pilot-scale two-tank system, elucidating the challenges of real-world implementation and the advantages of our method. To improve the scalability of RL-based controller tuning, we propose an extension based on "meta-RL" wherein a generalized agent is trained for fast adaptation across a broad collection of dynamics. A key design element is the ability to leverage model-based information offline during training while maintaining a model-free policy structure for interacting with novel processes. Beyond PI control, we propose a framework for the design of feedback controllers that combines the model-free advantages of deep RL with the stability guarantees of the Youla-Kučera parameterization, which defines the search domain. This is accomplished through a data-driven realization of the Youla-Kučera parameterization working in tandem with a neural network representation of stable nonlinear operators. Ultimately, our approach is flexible, modular, and decouples the stability requirement from the choice of RL algorithm.
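To illustrate the idea of a PI controller as a trainable policy "network", the sketch below casts the two gains as the policy's only weights, with the tracking error and its running integral as the policy's input features. This is a minimal illustration, not code from the thesis; the plant model (a hypothetical first-order system) and all numerical values are assumptions for demonstration.

```python
import numpy as np

class PIPolicy:
    """A PI controller viewed as a two-parameter policy.

    The gains kp and ki are the trainable weights; the "state" fed to
    the policy is the tracking error and its running integral. In the
    RL setting these weights would be updated by the learning algorithm.
    """
    def __init__(self, kp, ki, dt=1.0):
        self.theta = np.array([kp, ki], dtype=float)  # trainable parameters
        self.dt = dt
        self.integral = 0.0

    def act(self, error):
        # u = kp * e + ki * integral(e) -- a linear "network" in the features
        self.integral += error * self.dt
        features = np.array([error, self.integral])
        return float(self.theta @ features)

def rollout(policy, setpoint=1.0, a=0.9, b=0.1, steps=50):
    """Closed-loop rollout on a hypothetical first-order plant x+ = a*x + b*u,
    returning the accumulated squared tracking error (the cost an RL agent
    would minimize)."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - x
        u = policy.act(e)
        x = a * x + b * u
        cost += e ** 2
    return cost

tuned_cost = rollout(PIPolicy(kp=2.0, ki=0.5))
open_loop_cost = rollout(PIPolicy(kp=0.0, ki=0.0))
```

With these illustrative gains the closed loop is stable and the tracking cost is far lower than doing nothing; an RL agent would search over `(kp, ki)` to drive this cost down, which is what makes the PI structure a natural, interpretable policy class.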
Item Citations and Data
Attribution-NonCommercial-NoDerivatives 4.0 International