Open Collections
UBC Theses and Dissertations
Reinforcement learning using the game of soccer
Ford, Roger David
Abstract
Trial-and-error learning methods are often ineffective when applied to robots, owing to characteristics of robotic domains such as large continuous state spaces, noisy sensors, and faulty actuators. Learning algorithms work best with small discrete state spaces, discrete deterministic actions, and accurate identification of state. Since trial-and-error learning requires that an agent learn by trying actions in all possible situations, the large continuous state space is the most problematic of these characteristics, making the learning algorithm inefficient: there is rarely enough time to visit every state explicitly, or enough memory to store the best action for every state.

This thesis explores methods for achieving reinforcement learning on large continuous state spaces where actions are not discrete. This is done by creating abstract states, each representing numerous similar states. This saves time, since not every state within an abstract state needs to be visited, and space, since only one state needs to be stored.

The algorithm tested in this thesis learns which volumes of the state space are similar by recursively subdividing each volume with a KD-tree. Whether an abstract state should be split, which dimension should be split, and where along that dimension the split should occur are determined by collecting statistics on the previous effects of actions. Continuous actions are handled by giving actions inertia, so that they can persist past state boundaries if necessary.
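To make the idea concrete, here is a minimal Python sketch of the kind of KD-tree state abstraction the abstract describes: each leaf is an axis-aligned volume of the continuous state space standing in for all the states inside it, and a leaf can be recursively subdivided along a chosen dimension. The class and method names are illustrative assumptions, not the thesis's actual implementation, and the midpoint split stands in for the statistics-driven split criterion described above.

```python
class KDNode:
    """One abstract state: an axis-aligned volume of the continuous state space."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # per-dimension bounds of this volume
        self.split_dim = None          # None => this node is a leaf (abstract state)
        self.split_val = None
        self.left = self.right = None
        self.q = {}                    # action -> value estimate, shared by all states in the leaf

    def leaf_for(self, state):
        """Descend the tree to the leaf (abstract state) containing `state`."""
        node = self
        while node.split_dim is not None:
            if state[node.split_dim] < node.split_val:
                node = node.left
            else:
                node = node.right
        return node

    def split(self, dim):
        """Subdivide this leaf at the midpoint of dimension `dim`.

        The thesis chooses whether/where to split from statistics on the
        effects of past actions; the midpoint here is a placeholder.
        """
        mid = (self.lo[dim] + self.hi[dim]) / 2.0
        self.split_dim, self.split_val = dim, mid
        left_hi = list(self.hi); left_hi[dim] = mid
        right_lo = list(self.lo); right_lo[dim] = mid
        self.left = KDNode(list(self.lo), left_hi)
        self.right = KDNode(right_lo, list(self.hi))
        # Children inherit the parent's value estimates as a starting point,
        # so learning in the refined volumes resumes rather than restarts.
        self.left.q = dict(self.q)
        self.right.q = dict(self.q)
```

Because many concrete states map to one leaf, only one set of value estimates is stored per abstract state, which is the space saving the abstract refers to; refining a leaf only where action outcomes differ is what keeps the number of stored states small.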
Item Metadata
Title: Reinforcement learning using the game of soccer
Creator: Ford, Roger David
Publisher: University of British Columbia
Date Issued: 1994
Extent: 1119444 bytes
File Format: application/pdf
Language: eng
Date Available: 2009-03-04
Provider: Vancouver : University of British Columbia Library
Rights: For non-commercial purposes only, such as research, private study and education. Additional conditions apply; see Terms of Use https://open.library.ubc.ca/terms_of_use.
DOI: 10.14288/1.0051238
Degree Grantor: University of British Columbia
Graduation Date: 1994-11
Scholarly Level: Graduate
Aggregated Source Repository: DSpace