UBC Theses and Dissertations
A rubric for algorithm selection in optimization of black-box functions
Sremac, Stefan
Abstract
When optimizing black-box functions, little information is available to assist the user in selecting an optimization approach. It is assumed that, prior to optimization, the input dimension d of the objective function, the average running time tf of the objective function, and the total time T allotted to solve the problem are known. The intent of this research is to explore the relationship between the variables d, tf, and T and the performance of five optimization algorithms: Genetic Algorithm, Nelder-Mead, NOMAD, Efficient Global Optimization, and Knowledge Gradient for Continuous Parameters. The performance of the algorithms is measured over a set of functions with varying dimensions, function call budgets, and starting points. A rubric is then developed to assist the optimizer in selecting the most appropriate algorithm for a given optimization scenario. Based on the information available prior to optimization, the rubric estimates the number of function calls available to each algorithm and the amount of improvement each algorithm can make on the objective function under the function call constraint. The rubric reveals that Bayesian Global Optimization algorithms require substantially more time than the competing algorithms and are therefore limited to smaller function call budgets. However, if the objective function requires a large running time, this difference becomes negligible. With respect to improvement, the rubric suggests that Derivative Free Optimization algorithms are preferred at lower dimensions and higher budgets, while Bayesian Global Optimization algorithms are expected to perform better at higher dimensions and lower budgets. A test of the rubric's claims reveals that the function call budget is estimated accurately and reliably, but the improvement is not. The test data exhibit large variability in the measure of improvement. It appears that the variables d, tf, and T are insufficient for describing the expected performance of the assessed algorithms, since variables such as function type and starting point are unaccounted for.
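The budget argument in the abstract can be made concrete with a small sketch. The snippet below is only an illustration under assumed numbers, not the thesis' actual rubric: the per-call overhead models and the values of d, tf, and T are hypothetical. It shows how a fixed time budget T translates into very different function call budgets for a low-overhead Derivative Free Optimization method versus a Bayesian Global Optimization method whose model-fitting cost grows with the number of evaluated points, and why a large tf shrinks that gap.

```python
# Illustrative sketch (assumed overhead models, not the thesis' rubric):
# estimate the function call budget n for each algorithm given d, tf, and T,
# where each call costs tf plus an algorithm-specific overhead.

def estimate_budget(T, tf, overhead, max_calls=100_000):
    """Largest n such that n calls (objective time + per-call overhead) fit in T."""
    elapsed = 0.0
    for n in range(1, max_calls + 1):
        elapsed += tf + overhead(n)
        if elapsed > T:
            return n - 1
    return max_calls

# Hypothetical settings: a 5-dimensional objective taking 2 s per call,
# with one hour of total time.
d, tf, T = 5, 2.0, 3600.0

budgets = {
    # DFO-style method: small, roughly constant per-call overhead.
    "Nelder-Mead (assumed overhead)": estimate_budget(T, tf, lambda n: 1e-3),
    # BGO-style method: model-fitting overhead growing with the number of points.
    "EGO (assumed overhead)": estimate_budget(T, tf, lambda n: 1e-4 * d * n**2),
}
print(budgets)
# The BGO budget is much smaller here. With an expensive objective
# (e.g. tf = 600 s), both budgets collapse to roughly T / tf, so the
# overhead difference becomes negligible, as the abstract notes.
```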
Item Metadata
Title | A rubric for algorithm selection in optimization of black-box functions
Creator | Sremac, Stefan
Publisher | University of British Columbia
Date Issued | 2015
Language | eng
Date Available | 2015-06-24
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivs 2.5 Canada
DOI | 10.14288/1.0166321
Degree Grantor | University of British Columbia
Graduation Date | 2015-09
Scholarly Level | Graduate
Aggregated Source Repository | DSpace