UBC Theses and Dissertations


Reinforcement learning in neural networks with multiple outputs
Ip, John Chong Ching

Abstract

Reinforcement learning algorithms comprise a class of learning algorithms for neural networks, distinguished from other classes by the type of problem they are intended to solve: learning input-output mappings where the desired outputs are not known and only a scalar reinforcement value is available. Primary Reinforcement Learning (PRL) is a core component of the most actively researched form of reinforcement learning, and this thesis considers the issues surrounding its convergence characteristics. To date there have been no convergence proofs for networks of any kind learning under PRL. A convergence theorem is proved in this thesis, showing that under certain conditions a particular reinforcement learning algorithm, the A[formula omitted] algorithm, will train a single-layer network correctly. The theorem is demonstrated with a series of simulations. A new PRL algorithm is then proposed for training multiple-layer, binary-output networks with continuous inputs, a more difficult learning problem than the binary-input case. The new algorithm is shown to successfully train a network with multiple outputs when the environment conforms to the conditions of the convergence theorem for a single-layer network.
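The abstract does not reproduce the thesis's algorithm (the A[formula omitted] algorithm), but the setting it describes, a single layer of stochastic binary units trained from nothing but a scalar reinforcement, can be illustrated with a minimal reward-penalty style sketch. Everything below (the sigmoid units, the learning constants, and the toy environment) is an assumption for illustration, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 3, 2
W = rng.normal(scale=0.1, size=(n_out, n_in))  # single-layer weights

def act(x, W):
    # Stochastic binary units: each output fires with sigmoid probability.
    p = 1.0 / (1.0 + np.exp(-W @ x))
    y = (rng.random(n_out) < p).astype(float)
    return y, p

def rp_update(W, x, y, p, r, lr=0.2, penalty=0.05):
    # Reward-penalty style rule driven only by a scalar r in [0, 1]:
    # reward moves firing probabilities toward the outputs just emitted,
    # while a down-scaled penalty term moves them toward the opposite
    # outputs. No desired outputs ever appear in the update.
    delta = lr * (r * (y - p) + penalty * (1.0 - r) * ((1.0 - y) - p))
    return W + np.outer(delta, x)

# Hypothetical toy environment: the reinforcement is the fraction of
# output bits that copy the first n_out input bits (a linearly
# learnable mapping). Only the scalar r reaches the learner.
for _ in range(3000):
    x = rng.integers(0, 2, size=n_in).astype(float)
    y, p = act(x, W)
    r = float(np.mean(y == x[:n_out]))
    W = rp_update(W, x, y, p, r)

# Average reinforcement earned after training.
rewards = []
for _ in range(200):
    x = rng.integers(0, 2, size=n_in).astype(float)
    y, _ = act(x, W)
    rewards.append(np.mean(y == x[:n_out]))
acc = float(np.mean(rewards))
```

The small penalty coefficient relative to the reward term mirrors the usual reward-penalty design choice: the penalty keeps the units exploring without overwhelming the reward-driven drift toward better actions.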


Rights

For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.