UBC Theses and Dissertations
Practical optimization for structured machine learning problems
Ahmed, Mohamed Osama
Recent years have witnessed huge advances in machine learning (ML) and its applications, especially in image, speech, and language applications. Optimization in ML is a key ingredient in both the training and hyperparameter-tuning steps, and it also influences the test phase. In this thesis, we make improvements to the optimization of all three of these problems.

The first part of the thesis considers the training problem. We present the first linearly-convergent stochastic gradient method to train conditional random fields (CRFs), based on the stochastic average gradient (SAG) method. Our method addresses the memory requirements of SAG and proposes a better non-uniform sampling (NUS) technique.

The second part of the thesis deals with a memory-free linearly-convergent stochastic gradient method for training ML models. We consider the stochastic variance reduced gradient (SVRG) algorithm, which normally requires occasional full-gradient evaluations. We show that we can decrease the number of gradient evaluations using growing batches (which improves performance in the early iterations) and support vectors (which improves performance in the later iterations).

The third part of this thesis switches to the problem of hyperparameter tuning. Although it is a lower-dimensional problem, it can be more difficult because it is a non-convex global optimization problem. First, we propose a harmless global optimization algorithm, which ensures that performance is never worse than that of random search. We then explore the use of gradient-based Bayesian optimization (BO) to improve performance, proposing the use of directional derivatives to address the memory issues of BO.

Finally, in the fourth part of this thesis, we propose a novel optimization method that combines the advantages of BO and Lipschitz optimization (LO).
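To make the second idea concrete, the following is a minimal sketch of SVRG where the snapshot gradient is estimated on a growing mini-batch rather than the full dataset, so early snapshots are cheap and later snapshots approach the exact full gradient. This is an illustrative reading of the growing-batch idea, not the thesis's exact algorithm; the function names, step size, and growth schedule are assumptions for the example.

```python
import numpy as np

def svrg_growing_batch(grad_i, n, w0, eta=0.05, outer_iters=30,
                       inner_iters=None, batch0=8, growth=2.0, seed=None):
    """SVRG with a growing-batch estimate of the snapshot gradient.

    grad_i(w, i) returns the gradient of the i-th loss term at w;
    n is the number of terms. Names and defaults are illustrative.
    """
    rng = np.random.default_rng(seed)
    w = w0.copy()
    m = inner_iters or n        # inner-loop length, conventionally ~n
    b = batch0                  # snapshot-gradient batch size, grows each epoch
    for _ in range(outer_iters):
        snapshot = w.copy()
        # Estimate the snapshot gradient on a subsample instead of all n terms.
        batch = rng.choice(n, size=min(int(b), n), replace=False)
        mu = np.mean([grad_i(snapshot, i) for i in batch], axis=0)
        for _ in range(m):
            i = int(rng.integers(n))
            # Standard SVRG variance-reduced update around the snapshot.
            w = w - eta * (grad_i(w, i) - grad_i(snapshot, i) + mu)
        b *= growth  # later snapshots use (nearly) the full gradient
    return w
```

As a usage example, on a noiseless least-squares problem the iterates recover the true weights: define `grad_i = lambda w, i: (A[i] @ w - y[i]) * A[i]` and call `svrg_growing_batch(grad_i, n, np.zeros(d))`.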
License: Attribution-NonCommercial-NoDerivatives 4.0 International