Open Collections
UBC Theses and Dissertations
A reinforcement learning algorithm for operations planning of a hydroelectric power multireservoir system
Abdalla, Alaa Eatzaz
Abstract
The main objective of reservoir operations planning is to determine the optimal operation policies that maximize the expected value of the system resources over the planning horizon. This control problem is complicated by the several sources of uncertainty that a reservoir system planner has to deal with. In the reservoir operations planning problem, there is a trade-off between the marginal value of water in storage and the electricity market price. The marginal value of water is itself uncertain and depends largely on the storage in the reservoir as well as on the storage in other reservoirs. The challenge is how to handle this large-scale multireservoir problem under these uncertainties. This thesis presents a novel methodology, based on Reinforcement Learning (RL), for establishing a good approximation of the optimal control of a large-scale hydroelectric power system. RL is an artificial-intelligence approach to machine learning that offers key advantages in handling problems too large to be solved by conventional dynamic programming methods. In this approach, a control agent progressively learns the strategies that maximize rewards through interaction with a dynamic environment. The thesis introduces the main concepts and computational aspects of applying RL to the multireservoir operations planning problem. A scenario generation technique based on moment matching was adopted to generate a set of scenarios for the random variables of natural river inflows, electricity load, and market prices; in this way, the statistical properties of the original distributions are preserved. The developed reinforcement learning reservoir optimization model (RLROM) was successfully applied to the main BC Hydro reservoirs on the Peace and Columbia Rivers. The model was used to derive optimal control policies for this multireservoir system, to estimate the value of water in storage, and to establish the marginal value of water/energy.
The RLROM outputs were compared with those of the classical method for optimizing reservoir operations, namely stochastic dynamic programming (SDP), and for one- and two-reservoir systems the results were identical. The results suggest that the RL model is far more efficient at handling large-scale reservoir operations problems and can give a very good approximate solution to this complex problem.
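As a rough illustration of the RL approach the abstract describes, the sketch below applies tabular Q-learning to a hypothetical single-reservoir toy problem. The storage discretization, release options, fixed price, and inflow scenarios are invented for illustration and are not taken from the thesis model.

```python
import random

random.seed(0)

# Hypothetical toy problem: one reservoir with discretized storage.
# All names and numbers are illustrative assumptions, not the thesis model.
STORAGE_LEVELS = 11          # storage discretized into levels 0..10
ACTIONS = [0, 1, 2]          # candidate releases per stage (units of water)
PRICE = 5.0                  # fixed electricity price per unit released
INFLOWS = [0, 1, 2]          # random inflow scenarios, equally likely
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = [[0.0 for _ in ACTIONS] for _ in range(STORAGE_LEVELS)]

def step(storage, release):
    """Release water for an immediate reward, then receive a random inflow."""
    release = min(release, storage)              # cannot release more than stored
    reward = release * PRICE
    inflow = random.choice(INFLOWS)
    next_storage = min(storage - release + inflow, STORAGE_LEVELS - 1)
    return next_storage, reward

for episode in range(5000):
    s = STORAGE_LEVELS // 2                      # start each episode half full
    for t in range(24):                          # 24 stages per episode
        if random.random() < EPS:                # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        s2, r = step(s, ACTIONS[a])
        # Standard Q-learning update toward the one-step lookahead target.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# Q[s] approximates the value of each release decision at storage level s;
# differences in value across storage levels approximate the marginal
# value of water in storage, which the thesis estimates for the real system.
```

The agent never sees a model of the inflow distribution; it learns purely from sampled transitions, which is what lets the approach scale past the state-space limits of conventional dynamic programming.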
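The moment-matching idea behind the scenario generation step can be sketched as follows. The target mean and standard deviation are assumed values, and this shift-and-scale transform is only one simple way to match the first two moments of a sampled scenario set to a target distribution.

```python
import random
import statistics

random.seed(1)

# Illustrative moment-matching sketch (not the thesis' exact procedure):
# draw candidate scenarios, then shift and scale them so the sample mean
# and standard deviation reproduce the target distribution's moments.
target_mean, target_std = 100.0, 20.0   # assumed inflow statistics
n_scenarios = 10

raw = [random.gauss(target_mean, target_std) for _ in range(n_scenarios)]
m = statistics.mean(raw)
s = statistics.pstdev(raw)

scenarios = [target_mean + (x - m) * (target_std / s) for x in raw]

# After the transform, the scenario set matches the target moments exactly.
print(round(statistics.mean(scenarios), 6),
      round(statistics.pstdev(scenarios), 6))   # prints 100.0 20.0
```

A full moment-matching scheme would also target skewness, kurtosis, and cross-correlations between inflows, load, and prices, but the shift-and-scale step above conveys the core idea of preserving the original distribution's statistics in a small scenario set.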
Item Metadata
Title |
A reinforcement learning algorithm for operations planning of a hydroelectric power multireservoir system
|
Creator |
Abdalla, Alaa Eatzaz
|
Publisher |
University of British Columbia
|
Date Issued |
2007
|
Description | |
Genre | |
Type | |
Language |
eng
|
Date Available |
2011-01-19
|
Provider |
Vancouver : University of British Columbia Library
|
Rights |
For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
|
DOI |
10.14288/1.0063269
|
URI | |
Degree | |
Program | |
Affiliation | |
Degree Grantor |
University of British Columbia
|
Campus | |
Scholarly Level |
Graduate
|
Aggregated Source Repository |
DSpace
|