Multi-fidelity Algorithms for the Horizontal Alignment Problem in Road Design

by

Mahdi Aziz

B.Sc., Khayyam Institute of Higher Education, 2010
M.Sc., Amirkabir University of Technology, 2013

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The College of Graduate Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

July 2018

© Mahdi Aziz, 2018

The following individuals certify that they have read, and recommend to the College of Graduate Studies for acceptance, a thesis/dissertation entitled: Multi-fidelity Algorithms for the Horizontal Alignment Problem in Road Design, submitted by Mahdi Aziz in partial fulfilment of the requirements of the degree of Master of Science.

Dr. Yves Lucet, Irving K. Barber School of Arts and Science (Supervisor)
Dr. Warren Hare, Irving K. Barber School of Arts and Science (Co-supervisor)
Dr. Ramon Lawrence, Irving K. Barber School of Arts and Science (Supervisory Committee Member)
Dr. Michael Friedlander, Department of Computer Science, UBC (University Examiner)

Abstract

Multi-fidelity refers to methods that employ low-fidelity and high-fidelity models in their design so as to lower the computation time and obtain high accuracy simultaneously. In particular, algorithms that use the multi-fidelity technique adaptively control the error of the objective function: as they reach better positions in the search space, they update the objective function and use a version with better precision. In this thesis, we study multi-fidelity algorithms for solving the horizontal alignment problem in road design. The algorithms studied here are a generalized pattern search with adaptive precision control and a trust region algorithm for unconstrained problems with controlled error. First, in order to make a fair comparison, we tune the parameters of each algorithm on 5 small roads. We then test the algorithms on 35 roads, ranging from small to very large. The results of the comparison demonstrate that the multi-fidelity framework helps the horizontal alignment algorithms reduce their computation time by an average factor of 2.52 while preserving the quality of the solutions.

Lay Summary

We provide faster algorithms to design a road with the minimum cost that satisfies regulatory, safety and environmental constraints. Numerical experiments on a test set of 35 roads provided by our industrial partners validate the algorithms and show a speedup of 2.52 while keeping the approximation error within an acceptable tolerance.

Table of Contents

Abstract
Lay Summary
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
Chapter 1: Introduction
  1.1 Road design optimization
    1.1.1 Horizontal alignment optimization problem in road design
  1.2 Multi-fidelity algorithms
  1.3 Motivation
  1.4 Organization of this thesis
Chapter 2: Multi-fidelity algorithms
  2.1 Generalized pattern search algorithms with adaptive precision control
    2.1.1 Mathematical background
    2.1.2 Generalized pattern search: general framework
    2.1.3 Generalized pattern search: an adaptive precision control version
  2.2 A trust region algorithm with controlled error
    2.2.1 Creating a surrogate model using a support vector regression model
    2.2.2 Finding the approximate minimum of the model
Chapter 3: Multi-fidelity algorithms for solving the HA problem
  3.1 Terminology
  3.2 Objective function: vertical alignment problem
  3.3 A surrogate model for the VA problem
  3.4 A general approach
  3.5 A multi-fidelity GPS algorithm for the HA problem
  3.6 A multi-fidelity trust region algorithm for the HA problem
Chapter 4: Experimental results
  4.1 Road set
  4.2 The effect of the multi-fidelity framework on the GPS algorithm
    4.2.1 Parameter tuning
    4.2.2 Experimental results on 35 roads
    4.2.3 The effect of multi-fidelity on the GPS algorithm for different classes of roads
  4.3 The effect of the multi-fidelity framework on the trust region algorithm
    4.3.1 Parameter tuning
    4.3.2 Experimental results on 35 roads
    4.3.3 The effect of multi-fidelity on the TR algorithm for different classes of roads
  4.4 Comparison with a standard solver
    4.4.1 Comparison with NOMAD for different classes of roads
Chapter 5: Conclusion
Bibliography
Appendix A: Data Tables

List of Tables

Table 3.1: A mapping table from ε_k to N
Table 4.1: Roads tested on different workstation PCs
Table 4.2: Parameter values for the GPS0 and GPS1 algorithms
Table 4.3: Speedup and cost difference for the GPS algorithm
Table 4.4: Optimization failures for the GPS algorithm
Table 4.5: Categorization of roads based on the number of IPs
Table 4.6: Speedup of the GPS algorithm for two categories of IPs
Table 4.7: Cost difference of the GPS algorithm for two categories of IPs
Table 4.8: Categorization of roads based on the number of stations
Table 4.9: Speedup of the GPS algorithm for groups of stations
Table 4.10: Cost difference of the GPS algorithm for groups of stations
Table 4.11: Parameter values for the TRSVR0 and TRSVR1 algorithms
Table 4.12: Speedup and cost difference for the TRSVR algorithm
Table 4.13: Speedup of the TRSVR algorithm for categories of IPs
Table 4.14: Cost difference of the TRSVR algorithm for categories of IPs
Table 4.15: Speedup of the TRSVR algorithm for groups of stations
Table 4.16: Cost difference of the TRSVR algorithm for groups of stations
Table 4.17: The parameter values for NOMAD
Table A.1: GPS0 for the HA problem across all roads
Table A.2: GPS1 for the HA problem across all roads
Table A.3: TRSVR0 for the HA problem across all roads
Table A.4: TRSVR1 for the HA problem across all roads
Table A.5: NOMAD0 for the HA problem across all roads
Table A.6: Comparison of the GPS0 and GPS1 algorithms
Table A.7: Comparison of the TRSVR0 and TRSVR1 algorithms
Table A.8: Cost comparison among 5 HA algorithms
Table A.9: Time comparison among 5 HA algorithms

List of Figures

Figure 2.1: Positive basis sets and a positive spanning set in R²
Figure 3.1: A satellite view of a road
Figure 3.2: A sample road with stations and vertical and horizontal offset points
Figure 3.3: A road with four segments and 14 stations
Figure 3.4: A corridor with five IPs represented by blue stars
Figure 3.5: The objective function
Figure 3.6: The flowchart of a general approach for solving the horizontal alignment problem
Figure 3.7: The flowchart of the general multi-fidelity approach
Figure 3.8: The trajectory plot of the GPS algorithm for Road 35
Figure 3.9: The trajectory plot of the trust region algorithm for Road 35
Figure 4.1: The performance profile of the GPS0 algorithm
Figure 4.2: The performance profile of the GPS1 algorithm
Figure 4.3: The horizontal alignment produced by the GPS algorithms
Figure 4.4: The performance profile of the TRSVR0 algorithm
Figure 4.5: The performance profile of the TRSVR1 algorithm
Figure 4.6: The horizontal alignment produced by the TRSVR algorithms
Figure 4.7: The performance profile of NOMAD
Figure 4.8: The performance profile of 5 different algorithms
Figure 4.9: The performance profile of 5 different algorithms for Category C1 roads
Figure 4.10: The performance profile of 5 algorithms for Category C2 roads
Figure 4.11: The performance profile of 5 algorithms for Group 1 roads
Figure 4.12: The performance profile of 5 algorithms for Group 2 roads
Figure 4.13: The performance profile of 5 different algorithms for Group 3 roads

Acknowledgements

I would like to thank Dr. Yves Lucet, my main supervisor, for his continuous help and guidance on this project; he has always been very welcoming when help was needed. Dr. Warren Hare, my co-supervisor, gave me very helpful comments and guidance during the project.

Special thanks to The University of British Columbia, Softree Technical Systems Inc. and the federal government of Canada for providing me with financial support. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through Collaborative Research and Development grant #CRDPJ 479316-15, sponsored by Softree Technical Systems Inc.
Additional funding was provided by the Graduate Entrance Scholarship and the University Graduate Fellowship from the University of British Columbia. Part of the computation in this research was carried out using a software library provided by Softree Technical Systems Inc. Part of the research was performed in the Computer-Aided Convex Analysis (CA2) laboratory funded by a Leaders Opportunity Fund (LOF, John R. Evans Leaders Fund, funding for research infrastructure) from the Canada Foundation for Innovation (CFI) and by the British Columbia Knowledge Development Fund (BCKDF).

Dedication

I would like to dedicate my work to M.Z.R. and K.L.

Chapter 1: Introduction

In this chapter, we provide a brief overview of road design optimization and multi-fidelity algorithms, and we state our motivation for solving the horizontal alignment problem in road design.

1.1 Road design optimization

Road design optimization is an active topic in the design of transportation systems, as it contributes significantly to today's economy. For instance, according to one analysis published by the Canadian government, 15.9 billion dollars were spent by territorial, provincial and federal agencies to build and extend roads in 2015/2016. Moreover, in 2015/2016 more than 50 percent of the revenue gained by the territorial, provincial and federal governments came from road transportation [25].

Road design optimization refers to the problem of finding a series of tangents and curves between two given points in a given area that minimizes the construction cost while satisfying all given design constraints. The problem can also be defined as the positioning of cross-section points between given starting and ending points of a road such that all associated costs are minimized and all design constraints are satisfied.

There are many costs associated with road design, but five principal costs need to be taken into account: planning, design and administrative costs; construction costs; maintenance costs; social, political and environmental costs; and user costs [20]. In road design, an alignment refers to a curve between two specified points in three-dimensional space. The horizontal projection between those points is called the horizontal alignment. The vertical alignment is the profile aspect of the road; it comprises crest and sag curves and the straight grade lines connecting these curves.

The road design optimization problem can usually be divided into three sub-problems: earthwork optimization, vertical alignment optimization and horizontal alignment optimization. These sub-problems are closely intertwined: solving one requires dealing with the other two first. Given a fixed vertical alignment, earthwork optimization is the problem of minimizing the overall cost of hauling, excavation and embankment. Given specified quantities of earth materials and a given ground profile, the objective is to haul, embank or excavate earth materials so that the total cost is minimized. If, for a given road, the ground profile is lower than the road profile, we need to fill the gap with allowed materials; otherwise, we need to cut materials from the ground. There is also a cost for moving (hauling) materials from cutting areas to filling areas, called the hauling cost.
The objective of earthwork optimization is to minimize the sum of the embankment, excavation and hauling costs.

Given a fixed horizontal alignment, vertical alignment optimization refers to the problem of minimizing the earthwork cost while satisfying all vertical design constraints. The major design constraints for the vertical alignment are the rate of curvature and the minimum and maximum allowable grades; the allowable grades are determined by factors including the type of terrain (the topology of the road) and the maximum speed limit [13].

The horizontal alignment problem, on the other hand, refers to finding the best feasible set of curves and tangential line segments between two given points such that the earthwork cost is minimized and all associated horizontal and vertical design constraints are satisfied. The principal horizontal design constraint is the minimum radius of curvature of the circular arcs, which is essential for safety. Further details on the road design problem can be found in [20].

1.1.1 Horizontal alignment optimization problem in road design

Optimizing the horizontal alignment in road design is a challenging task, as the problem is extremely complicated and intertwined with the earthwork and vertical alignment optimization problems. A few optimization algorithms have been proposed for the horizontal optimization problem; some of them consider the horizontal alignment together with the vertical alignment and solve the two problems simultaneously. For example, Aruga [2] proposed a tabu search for optimizing the vertical and horizontal alignments at the same time, with two ways to create the initial solution: selecting it manually or building it with Dijkstra's algorithm. Starting from the initial solution, the tabu search creates alternative horizontal and vertical alignments in a small number of iterations, and Aruga showed that it reduces the computational cost significantly. While the tabu search returns a better horizontal alignment quickly, by its nature it does not necessarily converge to a stationary point. Another example is a genetic algorithm used in conjunction with a GIS (geographic information system), proposed to find a feasible alignment for the horizontal problem in road design [33]. Like tabu search, the genetic algorithm suffers from non-deterministic solutions; however, the authors of [33] showed that it can provide an acceptable solution for the three-dimensional alignment optimization problem very quickly.

In contrast to heuristic algorithms, which do not guarantee (even local) optimality, Hirpa et al. [15] proposed a method that solves the horizontal and vertical alignments simultaneously while converging to a local minimizer. They considered the problem as the optimization of three-dimensional road alignments with a bi-objective function: the first objective is to minimize the cost of the length of the road, and the second is to minimize the cost of the volume of earthwork. Since the problem is computationally expensive, they proposed a surrogate model to reduce the cost of evaluating the objective function and used NOMAD [3] (Nonlinear Optimization with Mesh Adaptive Direct search) to solve the multi-objective problem efficiently.
Since the problem is computationally expensive, they proposeda surrogate model to reduce the cost of simulating the objective functionand used NOMAD [3] (Nonlinear optimization with Mesh Adaptive Directsearch) to solve the multi-objective problem efficiently.Considering the horizontal alignment as a multi-level optimization prob-lem, Mondal [23] studied two derivative-free optimization solvers, namelyNOMAD and HOPSPACK (Hybrid Optimization Parallel Search PACK-age) to solve the horizontal alignment optimization in road design. Theydesigned a multi-layer framework such that in the inner layer the algorithmsolves the earthwork optimization problem that satisfies the vertical designconstraints for a fixed alignment. Then, in the outer layer, the algorithmsolves the horizontal alignment problem satisfying all horizontal design con-straints. They have shown that using NOMAD, their model guaranteeslocal optimality. However, even though their model can solve the horizontalalignment effectively, it takes a long time to evaluate the objective function(optimizing the vertical alignment), resulting in huge computation time inthe horizontal alignment optimization. In this thesis, we apply a surro-gate [14] that removes some cross sections in the road so that the evaluationof the objective function is less costly. In the next chapter, we use algo-rithms that employ a multi-fidelity framework, and in Chapter 4 we showthat using this surrogate in the multi-fidelity framework the algorithm canachieve huge time saving as well as good solution quality.31.2. Multi-fidelity algorithms1.2 Multi-fidelity algorithmsSurrogate-based optimization algorithms are a class of optimization algo-rithms that is concerned with the utilization of surrogate functions (models)to find a solution faster. These algorithms are usually employed when eval-uating the objective function of a problem is prohibitively expensive such asthe horizontal alignment in road design [29].A surrogate function is an approximation of a true objective functionthat shares some similarities with the true objective function. Ideally, asurrogate function is faster to compute than the true objective function.Furthermore, the local minimizers of the surrogate function should be closeto the local minimizers of the true objective function [4].Solving real-world problems such as designing the wings of aircraft orsolving the vertical optimization problem in road design are usually difficultand time-consuming. For such problems, performing even a small number offunction evaluations is highly time-consuming. For this reason, multi-fidelityalgorithms that use different levels of accuracy in function evaluations wereintroduced. They are the combination of low fidelity and high fidelity mod-els such that depending on the problem at hand, they use high fidelity, lowfidelity or a combination of both. The low fidelity and the surrogate termsare occasionally used interchangeably. However, there is a subtle differencebetween them. A surrogate function can be a fine approximation of thetrue objective function, which is considered as a high fidelity model. Al-ternatively, it can be a coarse approximation of the true objective function,which can be called a low fidelity model.The main objective of multi-fidelity algorithms is to reduce the compu-tation time of the optimization by using a low fidelity model and increasingthe quality of the solutions by taking advantage of the high fidelity modelat some point in the search process. 
These algorithms use a surrogate function (a low-fidelity model) instead of the true objective function (the high-fidelity model) to evaluate candidate points and, as they make progress toward better solutions, they switch to more accurate surrogate functions. Finally, the algorithm stops when it reaches a local minimizer.

Numerous multi-fidelity algorithms have been designed for a wide range of purposes, from aerodynamic design [18] to mechanical engineering design [17]. For example, the authors of [16] designed a multi-level fidelity surrogate model for an engineering design problem and demonstrated that, using variable-fidelity meta-modeling, the algorithm finds more accurate engineering designs at the same simulation cost.

One important aspect of multi-fidelity algorithms is the model management strategy, which decides which surrogate model should be used at a particular time in the search process. In terms of model management, the authors of [27] classified multi-fidelity algorithms into three strategies: adaptation, filtering and fusion. The adaptation strategy adapts the low-fidelity model to the problem at hand using information gathered from the high-fidelity model. For example, the authors of [21] used a Kriging model as the low-fidelity model and updated and improved its results using the output of the high-fidelity model: the algorithm starts with a crude initial Kriging model, and at the next iteration the high-fidelity model provides a more accurate input, from which the Kriging model produces a better and more accurate interpolation.

The fusion strategy, on the other hand, refers to methods that utilize low- and high-fidelity models simultaneously. An example is the co-Kriging method, where the output of the low-fidelity Kriging method is combined with the output of the high-fidelity model at each iteration of the algorithm; one benefit of this technique is that no a priori information about the problem is required [26].

Lastly, methods based on filtering employ the high-fidelity model only when more precision is required [10]. This technique usually starts with a low-accuracy surrogate model, which is minimized; as the algorithm converges to better points in the search space, the accuracy of the surrogate is increased. Examples of this technique can be found in [28, 31, 36].

Another important aspect of multi-fidelity algorithms is how the high-fidelity model is converted into a low-fidelity one. There are many techniques for this, including dimensionality reduction, linearization, coarser refinement, simpler geometry and simpler physics [10]. In this thesis, to generate the low-fidelity model of our objective function, we apply the coarser-refinement technique described in Section 3.3, where a smaller number of cross sections is used when optimizing the vertical alignment for a fixed horizontal alignment.

As the authors of [10] state, multi-fidelity algorithms have mostly been used for mechanical and aerospace engineering problems, such as designing an engineering system [17] or aircraft wings [16, 18]. In recent years, however, some researchers have applied the idea of multi-fidelity in other fields.
For example, Polak and Wetter [28] proposed a new version of the generalized pattern search (GPS) for unconstrained optimization problems that adaptively controls the error of the objective function. The algorithm begins with a low-fidelity version of the true objective function; then, as it converges to better points in the search space, a higher-fidelity model is employed. Using this technique, they proved that even though the objective function is inexact, the algorithm still converges to a local minimizer. The algorithm was employed to simulate the energy consumption of a building, and they showed that it solves the problem more quickly than the same algorithm using only the high-fidelity model. Another example of multi-fidelity algorithms in other fields is implied volatility estimation, where the authors of [31] showed that a trust region algorithm based on a support vector regression model solves the problem much faster than its surrogate-free version; using the idea of multi-fidelity, they proved that the algorithm converges to a stationary point.

In this thesis, we apply both the GPS with adaptive precision control [28] and the trust region algorithm with controlled error [31] to solve the horizontal alignment problem in road design. The chosen algorithms are hierarchical multi-fidelity methods, as they do not employ high-fidelity models at every iteration but only when they are close enough to a stationary point. They can also be considered filtering multi-fidelity algorithms, since their model management strategy decides when the high-fidelity or the low-fidelity model should be used.

1.3 Motivation

Evaluating the design of a horizontal alignment is extremely expensive in terms of both time and memory consumption. Using a portion of the cross sections to solve the vertical alignment was shown to be a good surrogate for the true objective function [14]: the authors of [14] showed that the vertical alignment in road design can be approximated with 5 percent error when almost 95 percent of the cross sections of a road are removed, resulting in huge time savings. In this thesis, we apply this surrogate to decrease the size of the vertical alignment problem, which lets us solve the vertical alignment, and as a result the horizontal alignment, faster than with the true objective function. In particular, using the idea of adaptive multi-fidelity, we first solve the horizontal alignment problem with low precision; as the algorithm finds better horizontal alignments, a more accurate surrogate is used, thereby saving computation time while maintaining the overall accuracy at a reasonable level.

1.4 Organization of this thesis

The remainder of this thesis is structured as follows. In Chapter 2, we discuss two multi-fidelity algorithms. In Chapter 3, we explain how we apply these two multi-fidelity algorithms to solve the horizontal alignment problem in road design. In Chapter 4, we report the results of the algorithms and compare them, on 35 road problems, with versions of the algorithms that do not use the multi-fidelity approach. The results clearly demonstrate that applying the multi-fidelity framework to our algorithms saves significant computation time.
In Chapter 5, we conclude the thesis and discuss future work.

Chapter 2: Multi-fidelity algorithms

In this chapter, we discuss the multi-fidelity algorithms that we apply to the horizontal alignment problem in road design.

2.1 Generalized pattern search algorithms with adaptive precision control

In this section, a generalized pattern search (GPS) method for unconstrained optimization problems with adaptive precision control is studied [28]. The algorithm is suitable for computationally expensive optimization problems. To understand the algorithm properly, we first recall some basic concepts from linear algebra.

2.1.1 Mathematical background

The definitions in this section are from [4].

Definition 2.1 (Linearly independent). A set of vectors {v_1, v_2, ..., v_m} ⊂ R^d is linearly independent if and only if the only solution to

  Σ_{i=1}^{m} λ_i v_i = 0   (2.1)

is λ_1 = λ_2 = ... = λ_m = 0.

Definition 2.2 (Span). For a set of vectors Ω ⊂ R^d, its span is defined as

  Span(Ω) := { Σ_{i=1}^{m} λ_i θ_i : m ≥ 1, λ_i ∈ R, θ_i ∈ Ω }.   (2.2)

Definition 2.3 (Basis). A set of vectors Ω is a basis of R^d if and only if it is linearly independent and Span(Ω) = R^d. One important feature of a basis of R^d is that it has exactly d vectors.

Definition 2.4 (Positive spanning set). A set of vectors Ω is a positive spanning set of R^d if and only if PSpan(Ω) = R^d, where

  PSpan(Ω) := { Σ_{i=1}^{m} λ_i θ_i : m ≥ 1, λ_i ≥ 0, θ_i ∈ Ω }.   (2.3)

Definition 2.5 (Positive linear independence). A set of vectors Ω is positively linearly independent if and only if for any θ ∈ Ω, θ ∉ PSpan(Ω \ {θ}). In other words, a positively linearly independent set contains no redundant vectors.

Definition 2.6 (Positive basis). A set is a positive basis of R^d if and only if it is positively linearly independent and a positive spanning set of R^d.

Figure 2.1 shows two examples of positive basis sets and a positive spanning set in R². The first two sets are positive bases: their non-negative linear combinations cover the whole of R², and removing any one direction destroys this property. The last set is only a positive spanning set and not a positive basis: after removing the two middle directions, the remaining directions still cover R², so the set is not positively linearly independent and therefore not a positive basis.

[Figure 2.1: Two positive basis sets and a positive spanning set in R². Panels: (a) positive basis (3 vectors); (b) positive basis (4 vectors); (c) positive spanning set.]

Definition 2.7 (Canonical maximal positive basis). The canonical maximal positive basis of R^d is

  Ω_max := {e_1, e_2, ..., e_d, −e_1, ..., −e_d},   (2.4)

where e_i is the i-th unit vector.

Definition 2.8 (Canonical minimal positive basis). The canonical minimal positive basis of R^d is

  Ω_min := {e_1, e_2, ..., e_d, −(Σ_{i=1}^{d} e_i)/d},   (2.5)

where e_i is the i-th unit vector.

Definition 2.9 (Mesh). A mesh M is a set of points defined as

  M(x, δ) := { x + δ D y : y ∈ N^p } ⊂ R^d,   (2.6)

where x ∈ R^d, δ ∈ R₊, D = GZ ∈ R^{d×p}, G ∈ R^{d×d} is an invertible matrix, Z ∈ Z^{d×p}, and the columns of Z form a positive spanning set of R^d. Here R₊ := {x ∈ R : x ≥ 0} denotes the set of non-negative real numbers.
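To make these constructions concrete, the following sketch builds the canonical maximal and minimal positive bases of Definitions 2.7 and 2.8 and generates mesh points as in Definition 2.9. It is an illustration of our own, in Python, under the simplifying assumption G = I; none of the code comes from the thesis.

```python
import numpy as np

def canonical_maximal_positive_basis(d):
    """Omega_max = {e_1, ..., e_d, -e_1, ..., -e_d} (Definition 2.7)."""
    eye = np.eye(d)
    return np.hstack([eye, -eye])   # d x 2d, one direction per column

def canonical_minimal_positive_basis(d):
    """Omega_min = {e_1, ..., e_d, -(e_1 + ... + e_d)/d} (Definition 2.8)."""
    eye = np.eye(d)
    return np.hstack([eye, -eye.sum(axis=1, keepdims=True) / d])  # d x (d+1)

def mesh_points(x, delta, D, ys):
    """Mesh points x + delta * D @ y for integer vectors y (Definition 2.9, G = I)."""
    return [x + delta * D @ np.asarray(y, dtype=float) for y in ys]

# usage: a few mesh points around the origin of R^2
D = canonical_maximal_positive_basis(2)
print(mesh_points(np.zeros(2), 0.5, D, [(1, 0, 0, 0), (0, 2, 1, 0)]))
```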
Definition 2.10 (Box-constrained optimization problem). Given f : R^d → R, the problem is

  min_x { f(x) : x ∈ R^d, l_i ≤ x_i ≤ u_i, i = 1, ..., d },   (2.7)

where l and u are the lower and upper bounds, respectively.

Definition 2.11 (Box-constrained optimization problem with error). Given f̃ : R^d × R₊ → R with error parameter ε, the problem is

  min_x { f̃(x, ε) : x ∈ R^d, ε ∈ R₊, l_i ≤ x_i ≤ u_i }.   (2.8)

Definition 2.12 (Local search set). For x ∈ X ⊂ R^d and δ ∈ R₊, the local search set is

  ι(x, δ) := { x + δ v : v ∈ Ω_max },   (2.9)

where Ω_max is the canonical maximal positive basis.

2.1.2 Generalized pattern search: general framework

The generalized pattern search (GPS) is a well-known derivative-free optimization (DFO) method that can be applied to both unconstrained and constrained optimization problems [28]. It belongs to the subclass of DFO methods called direct search methods. It starts from an incumbent solution and explores the search space by generating trial points. The key component of the algorithm is the poll step: its success or failure relies heavily on the set of polling directions, and if this set is properly chosen, the algorithm converges to a local minimizer. A good poll set must satisfy three conditions: it should contain only a few vectors; the vectors should cover every half-space of R^d; and all vectors in the set should be of similar size [4]. When the poll set is a positive basis (Definition 2.6), the first two conditions are satisfied, so it is natural to build the poll set from a positive basis. There are many techniques to create a positive basis, ranging from minimal to maximal positive bases. Using a positive basis ensures that at least one vector in the set is a descent direction [4, Theorem 6.5]. Algorithm 1 shows the general framework of the GPS algorithm.

Algorithm 1: The generalized pattern search (general framework)
  Input: f : R^d → R, starting point x_0.
  Step 0 (initialization):
    δ_0 ∈ (0, ∞) an initial mesh size parameter,
    D = GZ a positive spanning matrix, with Z ∈ Z^{d×p} a d × p integer matrix,
    τ_inc > 1 a mesh size increasing parameter,
    τ_dec ∈ (0, 1) a mesh size decreasing parameter,
    k = 0 an iteration counter,
    ε_stop ∈ (0, ∞) a stopping tolerance.
  Step 1 (search): construct E_k, a finite subset of the mesh M_k.
    If f(x) < f(x_k) for some x ∈ E_k, then x_{k+1} = x, δ_{k+1} = τ_inc δ_k, k = k + 1 and go to Step 3;
    else go to Step 2.
  Step 2 (poll): select a positive spanning set ι_k from the columns of D.
    If f(x) < f(x_k) for some x ∈ P_k = { x_k + δ_k d : d ∈ ι_k }, then x_{k+1} = x, δ_{k+1} = τ_inc δ_k;
    else x_{k+1} = x_k, δ_{k+1} = τ_dec δ_k.
  Step 3 (stop): if δ_k ≤ ε_stop, then stop; else k = k + 1 and go to Step 1.

The detailed description of each step of the general version of the GPS algorithm is as follows.

In Step 0, all parameters of the algorithm are initialized, including the initial mesh size δ_0, the mesh size increasing and decreasing parameters τ_inc and τ_dec, and the stopping tolerance ε_stop.

In Step 1, the algorithm performs the search process on points of E_k. More specifically, at each iteration the algorithm generates E_k as a finite subset of the mesh M_k and explores its points, trying to find a better one. The search can be carried out with many techniques, ranging from heuristic algorithms to another GPS algorithm nested inside the main one [28]. As long as the points created lie on the mesh, the algorithm converges to a stationary point [4, p. 120]. It is even possible to omit the search step entirely, and the algorithm still converges [28]; the main purpose of the search step is to help the algorithm converge faster to a local minimizer.

In Step 2, the poll step is performed. First, a positive spanning set ι_k is constructed from the columns of D. If some poll point is better than the current point, it becomes the new incumbent and the mesh size is expanded by the factor τ_inc; otherwise, the mesh size is decreased by the factor τ_dec.

In Step 3, if the mesh size is small enough, the algorithm stops; otherwise it returns to Step 1.

With a properly chosen poll set (a positive basis), the generalized pattern search was shown to converge to a stationary point [35].
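As an illustration, the sketch below instantiates Algorithm 1 in Python with the canonical maximal positive basis as the poll set and no search step (which, as noted above, does not affect convergence). All parameter values and names are our own illustrative choices, not prescriptions from the thesis.

```python
import numpy as np

def gps(f, x0, delta0=1.0, tau_inc=2.0, tau_dec=0.5, eps_stop=1e-6, max_iter=10_000):
    """Generalized pattern search (Algorithm 1) with the poll step only."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    poll_dirs = np.vstack([np.eye(d), -np.eye(d)])   # canonical maximal positive basis
    fx, delta = f(x), delta0
    for _ in range(max_iter):
        improved = False
        for direction in poll_dirs:                  # Step 2: poll the frame around x
            trial = x + delta * direction
            f_trial = f(trial)
            if f_trial < fx:                         # success: move and expand the mesh
                x, fx, delta = trial, f_trial, tau_inc * delta
                improved = True
                break
        if not improved:                             # failure: shrink the mesh
            delta *= tau_dec
        if delta <= eps_stop:                        # Step 3: stop on a fine mesh
            break
    return x, fx

# usage: minimize a smooth quadratic starting from (2, 2)
xbest, fbest = gps(lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 0.5)**2, [2.0, 2.0])
print(xbest, fbest)
```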
2.1.3 Generalized pattern search: an adaptive precision control version

In 2006, Polak and Wetter [28] proposed a version of the GPS algorithm that adaptively controls the error. It is a multi-fidelity GPS algorithm in which the precision of the objective function is adjusted adaptively: the algorithm starts with a low-accuracy objective function (a low-fidelity model) and, as it converges and the mesh size parameter shrinks, a higher-fidelity model is used. It was proven that when a proper poll set (a positive basis) is selected and the error decreases and increases in step with the mesh size parameter, the algorithm converges to a stationary point [28].

The algorithm rests on three assumptions that distinguish it from the general version of GPS. First, the objective function is not precise and contains error. Second, the error of the objective function can be controlled. Third, the more computation time is spent on the objective function, the more accuracy is gained. The main objective of the algorithm is to control the error adaptively so that it still converges to a stationary point. The pseudo-code for the GPS algorithm with adaptive precision control is presented in Algorithm 2.

Algorithm 2: The GPS with adaptive precision control (GPS1)
  Input: f : R^d → R, starting point x_0.
  Step 0 (initialization):
    ε_0 ∈ (0, 1) the initial error value,
    δ_0 ∈ (0, ∞) an initial mesh size parameter,
    D = GZ a positive spanning matrix, with Z ∈ Z^{d×p} a d × p integer matrix,
    τ_inc > 1 a mesh size increasing parameter,
    τ_dec ∈ (0, 1) a mesh size decreasing parameter,
    ζ ≥ 0 a sufficient decrease parameter,
    k = 0 an iteration counter,
    ε_stop ∈ (0, ∞) a stopping tolerance,
    E ∈ N an error control parameter.
  Step 1 (global search): construct the global search set E_k as a finite subset of the mesh M_k.
    If f̃(x, ε_k) − f̃(x_k, ε_k) < −ζ ε_k for at least one x ∈ E_k, then x_{k+1} = x, δ_{k+1} = δ_k, ε_{k+1} = ε_k and go to Step 3;
    else go to Step 2.
  Step 2 (poll): select a positive spanning set ι_k ⊆ D.
    If f̃(x, ε_k) − f̃(x_k, ε_k) < −ζ ε_k for at least one poll point x ∈ { x_k + δ_k d : d ∈ ι_k }, then x_{k+1} = x, δ_{k+1} = δ_k, ε_{k+1} = ε_k;
    else x_{k+1} = x_k, δ_{k+1} = τ_dec δ_k, and
      if E sequential poll failures have occurred, then ε_{k+1} = τ_dec ε_k, else ε_{k+1} = ε_k.
  Step 3 (stop): if δ_k ≤ ε_stop, then stop; else k = k + 1 and go to Step 1.

The algorithm is similar to the general version of GPS, except that it has a strategy to adaptively control the error and it uses the surrogate model instead of the true objective function to evaluate each point. To control the error, the algorithm introduces a parameter ε_k that determines the precision of the objective function. Another important difference is that Algorithm 2 does not expand δ_{k+1} at successful iterations.
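The essential changes with respect to Algorithm 1 are the sufficient decrease test, scaled by the current error ε_k, and the linked update of the mesh size and the error after poll failures. The fragment below isolates that logic; it is a sketch of our own, where f_tilde(x, eps) stands for the surrogate evaluated with error bound eps.

```python
def poll_with_precision_control(f_tilde, x, poll_points, eps, delta,
                                zeta=0.1, tau_dec=0.5, fail_count=0, E=2):
    """One poll step of Algorithm 2: accept only a decrease below -zeta * eps."""
    fx = f_tilde(x, eps)
    for y in poll_points:
        if f_tilde(y, eps) - fx < -zeta * eps:   # significantly better point found
            return y, delta, eps, 0              # keep delta and eps unchanged
    fail_count += 1
    delta *= tau_dec                             # poll failure: refine the mesh
    if fail_count >= E:                          # E consecutive failures:
        eps *= tau_dec                           # tighten the surrogate error too
        fail_count = 0
    return x, delta, eps, fail_count
```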
The step-by-step description of Algorithm 2 is as follows.

In Step 0, the iteration counter k and the mesh size parameter δ_0 are initialized, and the initial error of the objective function, ε_0, is set to a large value. The reason for choosing a large initial error is that we assume the initial points are far from a local minimizer.

In Step 1, similar to the general framework of GPS, the algorithm carries out the search process on E_k, a subset of the mesh M_k. In contrast to the general version of the GPS algorithm, the points of E_k are evaluated with the surrogate function. Note that in this study we do not employ the search step, as it did not show positive results in our preliminary experiments.

In Step 2, we first select a subset ι_k of D as our local search set and then poll on it. To select ι_k, Polak and Wetter [28] suggested using the canonical maximal positive basis; we employ this technique to generate polling points at each iteration. If at least one point in the poll set is significantly better than the incumbent solution, it replaces the incumbent. We say "significantly better" because the difference between the candidate x and the incumbent must be less than −ζ ε_k with ζ ≥ 0, i.e., x must improve on the current solution by a margin of ζ ε_k. If no such point is found, we shrink δ_{k+1} by the factor τ_dec. Furthermore, if the algorithm fails to find a better point for E sequential iterations, the error ε_{k+1} is also reduced by the factor τ_dec. Note that both δ_k and ε_k are updated only when a poll failure occurs; by linking the δ_k and ε_k updates, stopping can only occur when the error is small. It was shown that, with this rule, the algorithm converges to a local minimizer [28].

In Step 3, as in the general version of GPS, the algorithm terminates if δ_k is small enough; otherwise it returns to Step 1.

In [28], the authors showed that the proposed GPS algorithm converges to a stationary point as long as the cost function f is locally Lipschitz; even if f is approximated by a family of discontinuous surrogate functions, the algorithm eventually converges to stationary points of f.

2.2 A trust region algorithm for unconstrained optimization problems with controlled error

Model-based DFO methods differ from the GPS algorithm: they build a surrogate model from sample points and their objective function values. Model-based algorithms work well when the objective function is differentiable and the model created is easy to minimize [4]. There are two groups of model-based algorithms: gradient-descent and trust region algorithms. In contrast to gradient-descent algorithms, where the model is used to find a descent direction of the objective function, trust region algorithms minimize the model approximately subject to trust region constraints [4]. A general framework for the trust region algorithm is presented in Algorithm 3.
Algorithm 3: The trust region algorithm (general framework)
  Step 0 (initialization): choose trust region parameters:
    k = 0 an iteration counter,
    ∆_0 ∈ (0, ∞) an initial trust region size parameter,
    0 < N_l ≤ N_u the minimum and maximum sizes of the sample set;
    construct the sample set X = {x_1, x_2, ..., x_l}, where x_i ∈ R^d;
    set x_0 ∈ argmin_{x ∈ X} f(x).
  Step 1: construct the model m_k(x) using the sample set X.
  Step 2 (find the approximate minimum of the model): compute x*_k ∈ argmin_{x ∈ B_k} m_k(x).
  Step 3 (update the trust region radius):
    if the model m_k(x) is successful, then ∆_{k+1} > ∆_k; else ∆_{k+1} < ∆_k.
  Step 4 (update the sample set):
    if |X| < N_u, then insert x*_k into X; else replace the worst point argmax_{x ∈ X} f(x) with x*_k;
    set x_{k+1} ∈ argmin_{x_j ∈ X} f(x_j);
    remove any x ∈ X such that ‖x − x_{k+1}‖ > ∆_k;
    while |X| < N_l, generate a point inside the trust region.
  Step 5 (stop): if ∆_k or the gradient of the model is small, then stop; else k = k + 1 and go to Step 1.

In Step 0, the parameters of the algorithm, including ∆_0, k, N_l and N_u, are initialized. The trust region size ∆ is one of the most important parameters of a trust region algorithm and has a great impact on its performance. In addition, we set the minimum (N_l) and maximum (N_u) size parameters for the sample set, which control how many sample points it contains. The minimum size is usually set to d + 1, where d is the dimension of the optimization problem, and the maximum size N_u can be set to a large integer.

The next sub-step is to construct the sample set X. There are many ways to generate a sample set: it can be created randomly or using a positive basis [4]. For example, Takaki [31] used a random method to generate sample points, while the authors of [28] suggested constructing them from the canonical maximal positive basis (Definition 2.7).

In Step 1, we create a surrogate model that approximates the behavior of the objective function. There are numerous ways to build such a surrogate, including linear or quadratic interpolation and regression models; readers are referred to [6] for more information.

In Step 2, the approximate minimum of the model within the trust region is found. This procedure is usually called the trust region sub-problem and has been discussed extensively in many papers [6, 12, 32].

In Step 3, if the model is successful (meaning that the approximate minimum x*_k found in Step 2 is better than the current best point x_k), the trust region is expanded; otherwise it shrinks.

In Step 4, if the sample set has enough space, the model minimum x*_k is inserted into the sample set X; otherwise it replaces the worst point of X. We then update the sample set with respect to the new trust region: we set x_{k+1} to the best point of the sample set, and all points of X that lie outside the trust region are removed. If the number of remaining points in X is less than N_l, new points are generated until |X| ≥ N_l. The new points can be generated using random sampling [31], Latin hypercube sampling [19] or a positive basis [28].

Algorithm 3 is a general framework for any type of trust region algorithm. However, it does not consider the error of the objective function or how the algorithm should handle it.
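Before turning to the error-aware variant, the following fragment makes Steps 2-3 of the framework concrete: the radius update is driven by the ratio of actual to model-predicted decrease. The function is our own sketch; the thresholds 0.1 and 0.75 anticipate the values used for Algorithm 4 below.

```python
def update_radius(f_old, f_new, m_old, m_new, delta,
                  eta0=0.1, eta1=0.75, gamma_dec=0.5, gamma_inc=2.0):
    """Trust region radius update: compare actual and predicted decrease."""
    rho = (f_old - f_new) / (m_old - m_new)   # agreement ratio between f and the model
    if rho >= eta1:                           # model predicted well: expand the region
        return gamma_inc * delta
    if rho <= eta0:                           # poor prediction: shrink the region
        return gamma_dec * delta
    return delta                              # acceptable model: keep the radius
```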
Here, we study a trust region algorithm that addresses unconstrained optimization problems with controlled error [31].

This algorithm is similar to Algorithm 3 in several respects, but there are major differences. First, the algorithm considers a point-wise error ε_k in the objective function, where ε_k is adjusted according to the trust region size parameter ∆_k. Second, it assigns a penalty parameter C_k to every point in the sample set S; this parameter comes into play when the surrogate model is constructed from the sample set: sample points with better performance receive smaller penalty values and therefore have more influence on the construction of the surrogate model. Third, the algorithm constructs the surrogate model m_k(·) not only from the points x_l ∈ S but also from the penalty and error parameters. Finally, for every point x_l in the sample set we store its value, error and penalty, i.e., the tuple (x_l, y_l, ε_l, C_l). The pseudo-code for this algorithm is presented in Algorithm 4.
Algorithm 4: The trust region algorithm with controlled error
  Step 0 (initialization): choose trust region parameters:
    0 ≤ η_0 ≤ η_1 < 1 model quality control parameters,
    ε_m ∈ (0, 1) a model control parameter,
    γ_dec ∈ (0, 1) a trust region decreasing parameter,
    γ_inc > 1 a trust region increasing parameter,
    ε_0 the initial error parameter,
    C_0 the initial penalty parameter;
    construct the sample set S = {s_1, s_2, ..., s_l}, where s_i = (x_i, y_i, ε_i, C_i);
    set k, ∆_0, N_l, N_u, x_0 as in Algorithm 3.
  Step 1: construct the model m_k(x) from the sample set S.
    If max(‖∇m_k(x)‖, λ_min(∇²m_k(x))) < ε_m, then label the model as unsuccessful, set ρ_k = 1 and go to Step 4.
  Step 2 (find the approximate minimum of the model):
    compute x*_k ∈ argmin_{x ∈ B_k} m_k(x);
    evaluate y*_k = f̃(x*_k, ε*_k) with ε*_k = ε_k.
  Step 3 (evaluate the quality of the model):
    compute the ratio ρ_k = (y_k − y*_k) / (m_k(x_k) − m_k(x*_k)).
  Step 4 (update the trust region radius):
    successful step: if ρ_k ≥ η_1, then ∆_{k+1} = γ_inc ∆_k;
    unsuccessful step: if ρ_k ≤ η_0, then ∆_{k+1} = γ_dec ∆_k;
    else ∆_{k+1} = ∆_k.
  Step 5 (adjust the accuracy ε_k and penalty parameter C_k): ε_{k+1} = o(∆_k), C_{k+1} = C_0 / ε_{k+1}.
  Step 6 (find the best solution): x_{k+1} ∈ argmin_{x_j ∈ S ∪ {x*_k}} f̃(x_j, ε_k).
  Step 7 (update process):
    if |S| < N_u, then insert (x*_k, y*_k, ε*_k, C_k) into S; else replace the worst point in S with (x*_k, y*_k, ε*_k, C_k);
    remove any x ∈ S such that ‖x − x_{k+1}‖ > ∆_k;
    while |S| < N_l, generate a point inside the trust region;
    update the penalty parameter C_l for all points of S.
  Step 8 (stop): as in Algorithm 3.

In Step 0, we initialize the parameters of the algorithm: the model quality control parameters (η_0, η_1) are set to (0.1, 0.75), the trust region decreasing and increasing parameters γ_dec and γ_inc to 0.5 and 2, the iteration counter k to 0, the minimum size of the sample set N_l to d + 1, and the initial solution x_0 to the best point in the sample set. The initial error parameter ε_0 is set to 0.1. The other parameters are set as in Algorithm 3.

The next sub-step is to construct the sample set S. As in the general framework of the trust region algorithm, the sample points can be created randomly [31], by Latin hypercube sampling [19] or from a positive basis. However, in contrast to the general version, where only the value x is stored for each point, here we also store the error and penalty values. To construct the sample points in S, we choose the canonical minimal positive basis (Definition 2.8), which our preliminary experiments identified as a good choice. For every point in S, we initialize ε_l and C_l with ε_0 and C_0, respectively.

In Step 1, we build a surrogate model using the support vector regression (SVR) model. Since we assume that the objective function contains error, the surrogate model considers the point-wise error ε_k in its design. Moreover, to penalize points that are farther away from the minimum of the trust region, it uses a penalty parameter C that adaptively adjusts the influence of every point in the sample set; the penalty parameter is updated during the iterations (Steps 5 and 7 of Algorithm 4). The detailed description of how the model is built appears in Section 2.2.1.

After building the model, we check whether it is acceptable. If max(‖∇m_k(x)‖, λ_min(∇²m_k(x))) < ε_m, we label the model as unsuccessful and go to Step 4. In other words, if the model is almost a constant function, the algorithm does not accept it and shrinks the trust region radius in Step 4. In this formula, λ_min denotes the minimum eigenvalue of the Hessian of the model.

In Step 2, the approximate minimum of the model is found. Since we seek the approximate minimum within a hypersphere without external constraints, several methods apply, such as the dogleg, two-dimensional subspace and Steihaug methods [31], or the truncated conjugate gradient method [6]. In this study, we use the truncated conjugate gradient method, as it provides the approximate minimum quickly and with good accuracy; it is discussed in Section 2.2.2.

In Step 3, the quality of the model is evaluated. To compute the quality ratio, we use the objective function value computed with ε_k, the error value at the current iteration.

In Step 4, if the model is successful (the quality ratio is at least η_1), the trust region radius ∆_{k+1} is expanded; if the quality is not good enough, the radius shrinks.

In Step 5, the error is updated based on the trust region radius ∆_k. Because we do not have the derivative of the objective function, we cannot measure the distance from a point x to the solution. It was shown [7] that when x_k is generated by a DFO algorithm with an exact objective function, and k → ∞ with ∆_k → 0, then ‖∇f(x_k)‖ → 0 as well; for this reason, the trust region radius ∆_k is a sensible proxy for this distance. For the penalty parameter, the update C_{k+1} = C_0 / ε_{k+1} is used; this formula makes sure that the penalty parameter becomes larger for x*_k with higher accuracy, and vice versa.

In Step 6, the best solution among the sample points in S and the approximate minimum of the model x*_k is found and assigned to x_{k+1}.

In Step 7, if the size of the sample set is less than the maximum size N_u, then x*_k and its corresponding values (y*_k, ε*_k, C_k) are inserted into the sample set S; otherwise they take the place of the worst point in S.

To update the sample set, we first measure the distance from every point of the sample set to x_{k+1} (the best point). If the number of sample points inside the new trust region is less than d + 1, the farthest points from x_{k+1} are replaced with newly generated sample points; the new points must lie inside the new trust region to be accepted.

When |S| is less than N_u, it is possible that some points lie outside the trust region, deteriorating the quality of the model. To overcome this, we reduce the penalty of these points so that they have less effect on the model: for any C_i corresponding to x_i ∈ S with ‖x_i − x_{k+1}‖ > ∆_{k+1}, we update

  C_i = 10^(−‖x_i − x_{k+1}‖ / ∆_k) C_0.
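Step 5 ties the surrogate accuracy and the penalty weight to the trust region radius. The schedule sketched below is one admissible instance of ε_{k+1} = o(∆_k); the particular choice ε_{k+1} = min(ε_0, ∆_k²) is our own assumption for illustration, not the rule used in the thesis.

```python
def update_accuracy_and_penalty(delta_k, eps0=0.1, C0=1.0):
    """Step 5 of Algorithm 4: eps_{k+1} = o(Delta_k) and C_{k+1} = C0 / eps_{k+1}."""
    eps_next = min(eps0, delta_k ** 2)   # shrinks faster than Delta_k, hence o(Delta_k)
    C_next = C0 / eps_next               # tighter error => larger penalty weight
    return eps_next, C_next
```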
This formula makes sure that thepenalty parameter becomes larger for xk∗ with higher accuracy and viceversa.In Step 6, the best solution among the sample points in S and theapproximate minimum of the model x∗k is found and is assigned to xk+1.In Step 7 If the size of the sample set is less than the maximum size Nu,x∗k and all its corresponding values (yk∗ , k∗ , Ck) are inserted to the sampleset S. If not, then x∗k and its corresponding values will take the place of theworst point in S.For updating the sample set, we need to first measure the distance be-tween all points in the sample set to xk+1 (the best point). Then if the192.2. A trust region algorithm with controlled errornumber of sample points inside the new trust region is less than d+ 1, thenthe farthest points from xk+1 will be replaced with newly generated samplepoints. Note that the new sample points must be inside of the new trustregions to be accepted.When N is less than Nu, it is probable that some points would be out-side of the trust region thereby deteriorating the quality of the model. Tosurmount this, we reduce the penalty of these points, making them have lesseffect on the model by reducing Ci. So for any Ci corresponding to xi ∈ Sand ||xi − xk+1|| > ∆k+1, we updateCi = 10-||xi−xk+1||∆k C0.2.2.1 Creating a surrogate model using support vectorregression modelIn this section, we study the construction of a surrogate model usinga support vector regression model [31]. Suppose f : Rd → R is the trueobjective function. Our goal here is to find a quadratic surrogate model thatapproximates the behavior of f . Here, we define the model m : Rd → R asfollows,m(x) = w>φ(x),where w ∈ RN is a coefficient vector, N = (1/2)(d + 1)(d + 2), d is thedimension of the problem and φ(x) is a mapping function defined as,φ(x) = [1, x1, x2, ..., xn, x21, x1x2, ..., xn−1xn, x2n]> ∈ RN . (2.10)The next step is to calculate w as follows,w = P T (α+ − α−), (2.11)where P is computed as,P = [φ(x1), φ(x2), · · · , φ(xl)] ∈ Rl×NIn (2.11), α+ and α− are computed by finding an approximate minimizer ofthe following quadratic programming problem,202.2. A trust region algorithm with controlled errorarg minα+,α−12(α+α−)>(K −K−K K)(α+α−)+(− y+ y)>(α+α−),(2.12)subject to 0 ≤ α+, α− ≤ C and ∑li=1(α+i − α−i ) = 0.In (2.12), K is a matrix defined as: K = PP T , y is a vector of thefunction values of the sample points (yi = f˜(xi, i)) where i = 1, ..., l), = [1, 2, ..., l] and C = [C1, C2, ..., Cl].2.2.2 Find the approximate minimum of the modelIn this section, we briefly overview the problem of the approximatingminimum of a function inside an hypersphere. Suppose mk(x) is the modelfunction for the iteration k. The goal is to find the approximate minimumx∗k of the quadratic model function mk(x) within trust region Bk such thatx∗k is defined as,x∗k = arg minx∈Bkmk(x), (2.13)whereBk is a hypersphere that is defined as, Bk = {x ∈ Rd : ||x−xk|| ≤ ∆k}and ∆k is the current trust region radius. This problem is called trust regionsub-problem and has been addressed by many research papers [6, 12, 32].Since the problem is differentiable (m ∈ C2), many different algorithmssuch as dogleg, Steihaug’s approach, the truncated conjugate method canbe good choices. Since truncated conjugate gradient method [6] is a fast andan accurate technique to solve this problem, we use it in all our simulations. 11For the truncated conjugate gradient method, I use the implementation provided byDr. 
2.2.2 Finding the approximate minimum of the model

In this section, we briefly review the problem of approximating the minimum of a function inside a hypersphere. Suppose m_k(x) is the model function at iteration k. The goal is to find the approximate minimum x*_k of the quadratic model m_k(x) within the trust region B_k:

  x*_k = argmin_{x ∈ B_k} m_k(x),   (2.13)

where B_k is the hypersphere B_k = { x ∈ R^d : ‖x − x_k‖ ≤ ∆_k } and ∆_k is the current trust region radius. This problem is called the trust region sub-problem and has been addressed in many research papers [6, 12, 32]. Since the model is differentiable (m ∈ C²), many algorithms, such as the dogleg, Steihaug and truncated conjugate gradient methods, are good choices. Since the truncated conjugate gradient method [6] is a fast and accurate technique for this problem, we use it in all our simulations. (For the truncated conjugate gradient method, we use an implementation provided by Dr. Yves Lucet, which follows [6, Section 7.5.1].)

Chapter 3: Multi-fidelity algorithms for solving the horizontal alignment in road design

Building a high-precision algorithm with low computational cost has always been an interesting topic for researchers. For this purpose, multi-fidelity algorithms that employ different levels of accuracy have come into play. The main idea of these algorithms is to spend more computation time on the areas of the search space that are close to local minimizers. Their general framework is as follows: the algorithm starts with a low-accuracy surrogate model; as it converges to better points in the search space, the surrogate model gradually increases its accuracy; finally, the algorithm stops when it reaches a local minimizer.

In this chapter, we first formulate the horizontal alignment optimization problem in road design and then discuss how multi-fidelity algorithms are applied to it.

3.1 Terminology

To solve the horizontal alignment problem, we first need to find an optimal curve in a given corridor. The corridor is usually specified by civil engineers, and it delimits the feasible region for the horizontal and vertical alignments. In every corridor, there are two points at the very start and the very end of the road, called the starting and ending points. The center-line alignment that connects these two points is called the baseline. Figure 3.1 shows a satellite view of a road containing the corridor, the baseline, and the starting and ending points.

[Figure 3.1: A satellite view of a road showing a corridor, a baseline, and the starting and ending points.]

The baseline is often discretized into data points called base data points. For each base data point, there are some horizontal and vertical points called offset data points: if the points associated with a base data point are generated horizontally, they are called horizontal offset points; if they are vertical displacements of the base data point, they are called vertical offset points. A related concept is the station, defined as a base data point together with all of its associated vertical and horizontal offset points. Figure 3.2 shows a road with stations and their corresponding vertical and horizontal offsets as well as the baseline. (Figure 3.2 is taken from [23] with permission from the author.)

[Figure 3.2: A sample road with stations and their corresponding vertical and horizontal offset points [23].]

Another important concept is the intersection point. Since optimizing the horizontal alignment over every station is extremely time-consuming, especially for long roads, it was shown [23] that using circular curves and tangential line segments instead of stations is a faster approach to the HA problem. Each segment contains some consecutive stations, and the number of segments usually depends on the size and complexity of the road. Figure 3.3 shows a road with four segments and 14 stations, where each segment is represented by a red rectangular box and each station by a blue circle. In Figure 3.3, instead of optimizing the road over 14 stations, we can use two tangential segments and two circular curvature segments. Each segment often contains at least one intersection point (IP); by moving these IPs, a horizontal alignment (HA) algorithm can find the optimal horizontal alignment in a corridor.

[Figure 3.3: A road with four segments and 14 stations.]
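For concreteness, one possible in-memory representation of the objects defined above is sketched below. This layout is our own illustration; the thesis and the Softree library may organize road data differently.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Station:
    """A base data point together with its horizontal and vertical offset points."""
    base: Tuple[float, float, float]                      # (x, y, z) on the baseline
    h_offsets: List[float] = field(default_factory=list)  # horizontal offset points
    v_offsets: List[float] = field(default_factory=list)  # vertical offset points

@dataclass
class Corridor:
    """A corridor with its endpoints, stations and intersection points (IPs)."""
    start: Tuple[float, float]
    end: Tuple[float, float]
    stations: List[Station]
    ips: List[Tuple[float, float]]   # the decision variables of the HA problem
```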
Figure 3.4 shows a corridor with five IPs represented by blue stars.

[Figure 3.3: A road with four segments and 14 stations.]

[Figure 3.4: A corridor with five IPs represented by blue stars.]

3.2 Objective function: the vertical alignment problem

The objective function for the horizontal alignment problem is the Vertical Alignment (VA) problem for a fixed horizontal alignment. The VA problem consists of finding a series of parabolic curves and vertical tangential lines for a particular horizontal alignment. The main objective of the VA problem is to minimize the earthwork costs while satisfying all the safety design constraints. Here, we consider the VA problem as a black-box optimization problem whose input parameters are the IP set and the station set, and whose output is the optimized VA. Figure 3.5 shows the box diagram of the objective function of the VA problem. Readers are referred to [23, Chapter 3] for more information about how the VA problem is modeled.

[Figure 3.5: The objective function: a black-box VA optimization f(x_k) taking the IP set (x_k), the station set (S_m) and the road meta-data information as inputs, and returning the optimized VA.]

It should be noted that if the horizontal alignment generated from the IP set (x_k) is outside of the corridor, or if any other safety design constraint is violated, the VA problem returns infinity as the output.

Note that the VA problem is a mixed-integer linear programming problem, and therefore it can be solved by a global solver such as CPLEX [8].

3.3 Merging stations: a surrogate model for the vertical alignment problem

The backbone of a multi-fidelity algorithm is the surrogate model that approximates the true objective function. In road design, finding the optimal vertical alignment for a given horizontal alignment often takes minutes or even hours for a large and complex road. Moreover, solving a horizontal alignment problem requires solving the vertical optimization problem many times. So it is essential to find a surrogate for the vertical alignment optimization problem. In [14], Hare, Jaberipour and Lucet proposed two surrogate models to approximate the true objective function. Both surrogate models are based on the idea of reducing the size of the mixed-integer program needed to tackle the vertical alignment optimization problem, at the expense of decreasing the accuracy of the final earthwork cost.

The first surrogate method, the uniform merging technique (which we call f̃(·, ε)), removes some stations from the station set in the hope of reducing the computation time. Assume there is a road with n + 1 stations and station set S = {s_0, s_1, ..., s_n}. The method reduces the station set to

S_m = {s_0, s_N, s_{2N}, ..., s_{⌊n/N⌋N}},    (3.1)

where N varies between N = 1, for no reduction, and N = n − 2 for reducing the road to a single station. The uniform merging method thus reduces the number of stations to ⌊n/N⌋ + 1.

Even though the method is simple, experiments carried out on 40 roads have shown that it can decrease the solution time by a factor of 8 with a small loss of accuracy [14]. In this study, we use this technique as a surrogate model for reducing the station set and, as a result, solving the vertical alignment optimization problem faster.
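As a minimal illustration of (3.1), the merged station set can be produced with a one-line indexing operation; the MATLAB sketch below assumes the stations are stored in a vector (or cell array) S and is not the framework's actual code.

function Sm = uniform_merge(S, N)
% Minimal sketch of the uniform merging in (3.1): keep every N-th station
% of S = {s_0, ..., s_n}. With MATLAB's 1-based indexing, S(1:N:end)
% selects the stations with 0-based indices 0, N, 2N, ..., floor(n/N)*N.
Sm = S(1:N:end);
end

For example, a road with 101 stations (n = 100) and N = 10 yields a merged set of ⌊100/10⌋ + 1 = 11 stations.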
We discussed how the surrogate is created, but not yet how the surrogate relates to the true objective function. Algorithm 5 shows how the surrogate model is utilized in the vertical optimization problem.

Algorithm 5: The merging station approach
Input: f̃ : R^d × R_+ → R_+, the surrogate model;
       ε_k, the error value at iteration k;
       x_k, the IP set at iteration k;
       S, the station set with no reduction.
Step 1: Determine N from ε_k using Table 3.1.
Step 2: if S_m does not exist for this N then
          merge the station set S and create S_m by the uniform merging method
        else
          retrieve S_m for this N.
Step 3: Perform the vertical alignment optimization on S_m for x_k.

In Step 1, we determine N from the input ε_k using Table 3.1, which was obtained from experiments carried out on 40 roads [14].

Table 3.1: A mapping table from ε_k to N

  Error (ε_k)           N
  ε_k > 0.12            20
  0.06 ≤ ε_k ≤ 0.12     10
  0.034 ≤ ε_k < 0.06    6
  0.01 ≤ ε_k < 0.034    4
  0 < ε_k < 0.01        2
  ε_k = 0               1

In Step 2, in order to avoid wasting computation time merging stations for a particular error, we save the merged station set (S_m) and its corresponding meta-data on disk after performing the uniform merging method. If S_m already exists for a particular N, we retrieve it from the disk.

In Step 3, we perform the vertical alignment optimization on S_m for the given IP set (x_k).
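Steps 1 and 2 of Algorithm 5 amount to a threshold lookup followed by a cache of merged station sets. The MATLAB sketch below illustrates them with an in-memory cache rather than the on-disk cache used in our framework; the function and variable names are illustrative.

function Sm = merged_stations(S, eps_k)
% Sketch of Steps 1-2 of Algorithm 5: map the error value eps_k to a
% merging factor N (Table 3.1), then reuse a cached merged set if one
% exists for that N. Illustrative only; our framework caches to disk.
persistent cache
if isempty(cache)
    cache = containers.Map('KeyType','double','ValueType','any');
end
if     eps_k == 0,    N = 1;    % no reduction: the true objective
elseif eps_k < 0.01,  N = 2;
elseif eps_k < 0.034, N = 4;
elseif eps_k < 0.06,  N = 6;
elseif eps_k <= 0.12, N = 10;
else                  N = 20;
end
if ~isKey(cache, N), cache(N) = S(1:N:end); end   % uniform merging (3.1)
Sm = cache(N);
end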
3.4 A general approach

In this section, we describe how HA algorithms are applied to solve the horizontal alignment problem. First, the input data, including the road data profile and road meta-data information along with the IP positions, are provided. Using this information, the vertical alignment optimization finds the optimal vertical alignment for a given horizontal alignment. If the horizontal alignment (HA) is optimal, then we terminate the algorithm; otherwise, the HA algorithm continues to change the IPs until either an optimal HA is found or the computation time is exhausted. Figure 3.6 shows the flowchart of the general approach.

[Figure 3.6: The flowchart of a general approach for solving the horizontal alignment problem.]

The multi-fidelity framework for solving the horizontal alignment problem is similar to the general version, except that we first need to determine how close the incumbent solution is to a local minimizer. To measure this, we can use either the mesh size δ for the GPS algorithm or the trust region size Δ for the trust region algorithm: as the algorithm gets closer to a local minimizer, the mesh size or the trust region size becomes smaller, and vice versa. Based on this estimate of the closeness of the incumbent solution to a local minimizer, the algorithm decides to adopt a surrogate with high or low accuracy for the vertical optimization problem. Figure 3.7 shows the flowchart of the general multi-fidelity approach for solving the horizontal alignment optimization problem in road design.

[Figure 3.7: The flowchart of the general multi-fidelity approach.]

3.5 A multi-fidelity GPS algorithm for the horizontal alignment optimization problem in road design

The GPS algorithm designed for solving the horizontal alignment without a multi-fidelity framework is similar to Algorithm 1 described in Section 2.1.3, so we do not discuss it here. However, the multi-fidelity version of the GPS algorithm is slightly different from Algorithm 2. Here, the uniform merging method is employed as the surrogate model f̃(·, ε), and the vertical alignment optimization is used as the objective function. Furthermore, every point x in the local search set L_k represents the IPs for a given road.

Figure 3.8 demonstrates the trajectory plots of the GPS algorithm (with and without the surrogate) for Road 35 with one IP. In the trajectory plots, the contour lines represent the function value, not the topography of the road. In Figure 3.8, the initial point is represented by a red solid dot and a polling point by a hollow blue circle.

[Figure 3.8: The trajectory plot of the GPS algorithm for Road 35: (a) without and (b) with the multi-fidelity framework.]

As shown in Figure 3.8, both GPS algorithms (with and without the surrogate) begin with a large cost ($33,389) and, as they progress, converge to better positions with lower cost (≈ $30,000).

3.6 A multi-fidelity trust region algorithm for the horizontal alignment optimization problem in road design

In this section, we study the multi-fidelity trust region algorithm for the HA problem in road design. The algorithm is similar to Algorithm 4 except in Step 6: after evaluating the sample points with the surrogate function, we evaluate the result again with the true objective function before setting it to x_{k+1}. In particular, we first create and evaluate the sample set based on the surrogate function. Then, when we pick x_0, we evaluate the best point in the sample set using the true objective function. And at each iteration, after finding the approximate minimizer of the model, we evaluate it using the true objective function.
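A compact sketch of this modified acceptance step is given below (MATLAB-style, with names that are ours, not the framework's): the surrogate ranks candidates cheaply, but acceptance is always based on the true cost.

% Illustrative sketch of the modified Step 6: the surrogate f~(., eps_k)
% ranks the candidates cheaply, and the winner is re-evaluated with the
% full-station (true) VA optimization before it can become x_{k+1}.
[~, idx] = min(surrogate_costs);      % best point according to the surrogate
x_star   = candidates(:, idx);
f_true   = va_optimize(x_star, S);    % true objective: unmerged station set
if f_true < f_best                    % accept based on the true cost only
    x_best = x_star;
    f_best = f_true;
end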
To see how the trust region algorithm progresses, Figure 3.9 shows the trajectory plots of the two versions of the trust region algorithm for the HA problem on Road 35. In Figure 3.9, the initial point is represented by a red solid dot and the approximate minimizer of the trust region (x_k^*) by a blue solid dot.

[Figure 3.9: The trajectory plot of the trust region algorithm for Road 35: (a) without and (b) with the multi-fidelity framework.]

As shown in Figure 3.9, both versions of the trust region algorithm start off with large vertical alignment costs and, as they progress, converge to better points.

Chapter 4

Experimental results

In this chapter, we study the effect of the multi-fidelity framework for the two derivative-free optimization algorithms on 35 different roads. We then compare the results with NOMAD [9] (Nonlinear Optimization by Mesh Adaptive Direct Search) as a standard solver.

To carry out this experiment, we run the algorithms on three workstation PCs, all running 64-bit Windows 7. The first computer uses an Intel Core i7 2.8 GHz CPU with 16 GB RAM; we use it to run the HA algorithms on small roads. The second computer is equipped with an Intel Xeon 3.5 GHz CPU with 64 GB RAM, and the third workstation has the same configuration as the second. The last two workstation PCs are employed to run the algorithms on medium to large roads.

In terms of the framework and the programming environment, we designed and implemented an integrated framework based on MATLAB 2017 and Visual Studio 2013.

4.1 Road set

We gathered a set of 35 roads from our industrial partner (Softree Technical Systems Inc., abbreviated as Softree), ranging from small to very large roads. The information we gathered contains the initial alignment (X- and Y-coordinates and radius of IPs), the station set and cut/fill material information. Table 4.1 provides a summary of each road, including the number of IPs and the number of stations.

Table 4.1: Roads tested on different workstation PC computers.

  Computer 1              Computer 2              Computer 3
  #   IPs  Stations       #   IPs  Stations       #   IPs  Stations
  15  13   203            1   2    49             12  8    303
  17  2    228            2   1    33             19  17   252
  18  2    234            3   5    107            25  2    98
  21  8    523            4   3    114            29  2    2,288
  22  2    470            5   3    114            33  19   1,168
  23  3    97             6   2    116            35  1    24
  24  3    95             7   2    120
  31  17   367            8   2    120
  34  9    941            9   7    178
                          10  2    120
                          11  2    129
                          13  4    184
                          14  2    222
                          16  2    227
                          20  19   368
                          25  2    98
                          26  2    11
                          27  2    952
                          28  5    1,164
                          30  15   446
                          32  12   9,343

As shown in Table 4.1, most of the roads contain 1 to 10 IPs. Since the radii of the IPs are fixed, the number of IPs is exactly half the number of variables in the resulting HA problem; e.g., Road 1 with 2 IPs yields 4 variables.
4.2 The effect of the multi-fidelity framework on the GPS algorithm

In order to study the effect of the multi-fidelity framework on the GPS algorithm, we carry out our experiments with the algorithm presented in Algorithm 2, where ε_0 is set to 0 for the GPS without multi-fidelity and to 0.1 for the GPS with the multi-fidelity framework. Note that when ε_0 is set to 0, the GPS algorithm always evaluates the true objective function.

4.2.1 Parameter tuning

In order to make a fair comparison, we need to follow some rules [5]. First, we need to give the same number of function evaluations to both versions of the GPS algorithm, as this number has a great impact on performance. The number of function evaluations is recommended to depend on the size of the problem: as the problem becomes larger, it becomes more difficult to solve and needs more computation time. However, there should also be a trade-off between the maximum time allowed to solve the optimization problem and the size of the HA problem. Based on our initial experiments, we set the maximum number of function evaluations for all problems and both versions of GPS as

F_eval = 100 min(|IP|^2, 5|IP|),    (4.1)

where F_eval is the maximum number of function evaluations and |IP| is the size of the IP set. For example, a road with 2 IPs gets 100 min(4, 10) = 400 evaluations, while a road with 19 IPs gets 100 min(361, 95) = 9,500.

The next rule is to use the same initial IP set for all the algorithms; we use the initial IP set provided by Softree. As for the maximum computation time allowed to solve the HA problem, we set it to 24 hours per road and 1 hour per function call.

Finally, we need to tune the parameters of both algorithms. The multi-fidelity GPS algorithm presented in Algorithm 2 has four major parameters: the initial error value ε_0, the initial mesh size δ_0, the mesh size decreasing parameter τ_dec and the error control policy. Both the GPS and TRSVR algorithms use the same error control policy: after every E unsuccessful iterations, the error is decreased by a factor of τ_dec.

In the GPS algorithm, ε_0 is set to 0.1, following the suggestion of the original paper [28]. The remaining parameters are tuned in this study: we tune δ_0, τ_dec and E for the multi-fidelity version, and δ_0 and τ_dec for the GPS without multi-fidelity. Note that since ε_0 is set to 0 for the GPS without multi-fidelity, the value of the error decreasing parameter is irrelevant. Table 4.2 shows the parameter values tested for each version of the GPS algorithm. From now on, we call the GPS without the multi-fidelity framework "GPS0" and the GPS with the multi-fidelity framework "GPS1".

Table 4.2: Parameter values for GPS0 and GPS1 algorithms

  Parameter  GPS0                             GPS1
  ε_0        0                                0.1
  E          0                                {1, 2, 5, 10}
  δ_0        {5, 2.5, 1.25}                   {5, 2.5, 1.25}
  τ_dec      {0.01, 0.1, 0.3, 0.5, 0.7, 0.9}  {0.01, 0.1, 0.3, 0.5, 0.7, 0.9}

As shown in Table 4.2, GPS0 has 18 different configurations. We call the first configuration "config0", the next one "config1", and so on.

In order to find out which configuration is best for GPS0, we use the performance profile [24], which can easily tell which solver or algorithm is best over a set of problems. A performance profile visualizes the competence of an algorithm by comparing it with other algorithms on a set of problems: when you have a set of algorithms and a set of problems and want to see the efficiency and robustness of the algorithms in one compact figure, you can use a performance profile. It shows the percentage of the problems on which a solver has the best performance among all the other solvers. The algorithm that solves the highest percentage of the problems at the starting point is called the most efficient algorithm, and the algorithm that solves the most problems in total is called the most robust one. For more information about how to compute the performance profile, readers are referred to [4].
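As a rough illustration of how such a profile is computed, the MATLAB sketch below follows the standard construction of [24]; T is a problems-by-solvers matrix of solve times with Inf marking failures, and the variable names are ours. It is not the plotting code used for the figures in this chapter.

function perf_profile(T, names)
% Minimal sketch of a performance profile for solve times, after [24].
% T(p,s): time solver s needs on problem p; Inf marks a failed run.
[~, ns] = size(T);
R = T ./ min(T, [], 2);                 % ratio to the best solver per problem
taus = sort(unique(R(isfinite(R))));    % breakpoints for the staircase
figure, hold on
for s = 1:ns
    rho = arrayfun(@(t) mean(R(:, s) <= t), taus);  % portion solved within t
    stairs(taus, rho)
end
legend(names), xlabel('ratio of time \tau'), ylabel('portion of problems p(\tau)')
end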
We tune the parameters δ_0 and τ_dec on five small roads (Road 1, Road 2, Road 5, Road 8, Road 35). Figure 4.1 shows the performance profile of GPS0 for the 18 different configurations on the five small roads.

[Figure 4.1: The performance profile of the GPS0 algorithm for 18 different configurations on five small roads, with (a) 10% and (b) 5% tolerance. The legend only shows the top three configurations.]

As shown in Figure 4.1, when the tolerance τ is 10 percent, the fastest configurations are "config4" and "config10", and the most robust one is "config16", which solved all 5 problems. On the other hand, when τ becomes 5 percent, "config4" and "config10" seem to work similarly; however, when we zoom in, "config10" works faster. Therefore, we chose "config10" over "config4", which means that we choose τ_dec = 0.5 and δ_0 = 5 for the GPS0 algorithm.

Figure 4.2 shows the performance profile of the GPS1 algorithm for 72 different configurations on the five small roads.

[Figure 4.2: The performance profile of the GPS1 algorithm for 72 different configurations on five small roads, with (a) 10% and (b) 5% tolerance. The legend only shows the best configuration.]

As shown in Figure 4.2, "config37" is the fastest and the most robust configuration for both tolerances. So we choose this configuration (config37: δ_0 = 5, E = 1, τ_dec = 0.5) for all of our simulations.

4.2.2 Experimental results on 35 roads

After setting the parameters, we run both versions of the GPS algorithm on the 35 roads presented in Table 4.1. We record the computation time, the optimized VA cost and the memory usage for all of the roads. The detailed results for the GPS0 and GPS1 algorithms are presented in Table A.1 and Table A.2, respectively.

Table 4.3 shows the summary results of the speed-up and cost difference for both versions of the GPS algorithm on the 35 roads.

Table 4.3: Speed up and cost difference for the GPS algorithm

  Speed factor  # of Roads      Cost difference (%)  # of Roads
  < 0.7         1               < 0                  3
  0.7 to 0.9    1               0.0 to 1.0           17
  0.9 to 1.1    1               1.0 to 2.0           4
  1.1 to 2.0    7               2.0 to 5.0           1
  2.0 to 3.0    9               > 5                  0
  > 3           6
  Average (all roads): 2.52     Average (all roads): 0.41

Notice that the speed-up factor is calculated by dividing the computation time of the GPS0 algorithm by the computation time of the GPS1 algorithm, so a factor greater than 1 means the multi-fidelity version is faster. If one of the algorithms fails to solve a problem, we do not calculate the speed-up factor for that problem. Moreover, the cost difference is calculated by subtracting the optimized VA cost obtained by the GPS0 algorithm from the one obtained by the GPS1 algorithm, and then dividing by the optimized VA cost of the initial alignment.
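The two summary measures can be stated compactly; the MATLAB fragment below uses illustrative variable names (per-road time vectors t_gps0 and t_gps1 with NaN for failed runs, optimized costs c_gps0 and c_gps1, and initial-alignment costs c_init) and is only a sketch of the bookkeeping, not our experiment scripts.

% Sketch of the summary measures of Table 4.3 (illustrative names).
speedup  = t_gps0 ./ t_gps1;                    % > 1: multi-fidelity is faster
costdiff = 100 * (c_gps1 - c_gps0) ./ c_init;   % percent of the initial VA cost
ok = ~isnan(speedup);                           % skip roads where a run failed
avg_speedup  = mean(speedup(ok));
avg_costdiff = mean(costdiff(ok));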
As shown in Table 4.3, there is only one road for which the use of the multi-fidelity version of the GPS increases the computation time significantly. As shown in Table A.6, this road is Road 26, which has only 11 stations and 2 IPs. The reason behind this increase is that the surrogate function may not provide a good approximation of the true objective function. From Table 4.3, we can also see that for 2 roads the GPS0 algorithm performs better than the GPS1 algorithm in terms of computation time, while for the other 21 roads GPS1 outperforms GPS0. On average, using the multi-fidelity framework improves the speed of the GPS algorithm by a factor of 2.52 while keeping the cost difference at 0.41 percent.

Note that the average speed-up is calculated by adding up the speed-up for each road and then dividing by the number of roads tested. In a similar vein, to compute the average cost difference, we add up the cost differences between the GPS0 and GPS1 algorithms and divide by the total number of roads.

If you add up the number of roads in either half of Table 4.3, you can see that it is 25. The reason is that there are some roads that GPS1 or both algorithms fail to solve, due to function evaluation exhaustion or the time-out policy (see Table A.1 and Table A.2). Table 4.4 shows the summary results for the two versions of the GPS algorithm on the roads they fail to solve.

Table 4.4: Optimization failure due to time or number of function evaluations (Fe) budget exhaustion for the GPS algorithm

  Reason         GPS0 (# of roads)  GPS1 (# of roads)
  Fe exhaustion  0                  0
  Time-out       9                  5
  Total          9                  5

As shown in Table 4.4, GPS1 solves 4 more problems than GPS0. For more details about the speed-up and the cost difference of GPS0 and GPS1 on every single road, see Table A.6.

4.2.3 The effect of the multi-fidelity framework on the GPS algorithm for different classes of roads

In order to study the effect of the multi-fidelity framework on different types of roads, we categorize the 35 roads based on the number of stations and the number of IPs. As the number of stations increases, the computation time for solving the VA problem increases. In addition, the number of IPs affects the number of function evaluations an algorithm requires to solve the HA problem: as the number of IPs increases, the number of variables increases, requiring more function evaluations. Table 4.5 shows the proposed categorization of roads based on the number of IPs. As shown in Table 4.5, most roads in our dataset contain one or two IPs (2 to 4 variables) and the rest have 3 to 19 IPs.

Table 4.5: Categorization of roads based on the number of IPs

  Category  # of IPs  # of Roads
  C1        1-2       17
  C2        3+        18

Table 4.6 shows the summary results for the speed-up of the GPS0 and GPS1 algorithms for the two IP categories. As shown in Table 4.6, the average speed-up of the GPS algorithm is 94 percent for Category C1 roads and 240 percent for Category C2 roads. From Table 4.6, we see that when the IP set is small (Category C1 roads), the multi-fidelity framework speeds up the GPS algorithm by almost a factor of 2; as the IP set grows, the multi-fidelity framework helps the GPS algorithm more significantly. Furthermore, when the HA problem is small (Category C1 roads), the multi-fidelity framework makes the GPS algorithm slower on 2 of the roads, but when the HA problem is large (Category C2 roads), the multi-fidelity framework fails to help on only 1 road, and there is no road for which it increases the computation time.
Table 4.6: Speed up for the GPS algorithm for two categories of IPs

  Speed factor  # of Roads (C1)  # of Roads (C2)
  < 0.7         1                0
  0.7 to 0.9    1                0
  0.9 to 1.1    0                1
  1.1 to 2      6                1
  2 to 3        6                3
  > 3           1                5
  Average       1.94             3.40

Table 4.7 represents the summary results of the cost difference between the GPS0 and GPS1 algorithms for the two IP categories. As shown in Table 4.7, for Category C1 roads the average cost difference between GPS0 and GPS1 is 0.17 percent, and there is no road whose cost difference falls outside the acceptable tolerance range (0 to 5 percent). Furthermore, for Category C2 roads, the average cost difference between GPS0 and GPS1 is 0.78 percent. Interestingly, from Table 4.6 and Table 4.7, we can see that as the number of IPs increases, the speed-up of the GPS algorithm increases while the cost difference does not increase significantly in the average case.

Table 4.7: Cost difference for the GPS algorithm for two categories of IPs

  Cost difference (%)  # of Roads (C1)  # of Roads (C2)
  < 0                  2                0
  0 to 1               11               6
  1 to 2               2                1
  2 to 5               0                0
  > 5                  0                0
  Average              0.17             0.78

We now study the effect of the multi-fidelity paradigm on the GPS algorithm for different sizes of the station set. Table 4.8 shows the categorization of the 35 roads based on the number of stations. As shown in Table 4.8, most of the roads belong to Group G2, with 101 to 250 stations.

Table 4.8: Categorization of roads based on the number of stations

  Group  # of Stations  # of Roads
  G1     1-100          7
  G2     101-250        15
  G3     251+           13

Table 4.9 summarizes the speed-up of the GPS0 and GPS1 algorithms for the different groups of stations. As shown in Table 4.9, the multi-fidelity framework improves the speed of the GPS algorithm by average factors of 3.03, 2.38 and 2.10 for Groups G1, G2 and G3, respectively. As the number of stations increases, the average speed-up of the GPS algorithm decreases. However, the numbers of roads in Groups G1, G2 and G3 are different, and with more roads in Group G1 the results might be different.

Table 4.9: Speed up for the GPS algorithm for different groups of stations

  Speed factor  # of Roads (G1)  # of Roads (G2)  # of Roads (G3)
  < 0.7         1                0                0
  0.7 to 0.9    1                0                0
  0.9 to 1.1    0                0                1
  1.1 to 2      1                6                0
  2 to 3        2                5                2
  > 3           2                4                0
  Average       3.03             2.38             2.10

Table 4.10 presents the summary results of the cost difference between the two versions of the GPS algorithm for the different groups of stations. As shown in Table 4.10, the average cost difference between the GPS0 and GPS1 algorithms is 0.42 percent for Group G1 roads, and 0.17 and 0.27 percent for Group G2 and Group G3 roads, respectively.

Table 4.10: Cost difference for the GPS algorithm for different groups of stations

  Cost difference (%)  # of Roads (G1)  # of Roads (G2)  # of Roads (G3)
  < 0                  2                1                0
  0 to 1               3                11               3
  1 to 2               1                3                0
  2 to 5               1                0                0
  > 5                  0                0                0
  Average              0.42             0.17             0.27

From Table 4.9 and Table 4.10, we can see that as the number of stations increases, the multi-fidelity framework reduces the computation time while keeping the error in an acceptable range, similar to what we saw in Table 4.6 and Table 4.7.

Even though the results of the GPS algorithm for Group G2 and Group G3 roads are similar (in both groups we have more than 100 percent speed-up improvement and less than 1 percent cost difference), the GPS algorithm could only solve 3 out of 13 Group G3 roads, compared with 15 out of 15 Group G2 roads.
In order to see what the final results look like, Figure 4.3 shows the optimal horizontal alignments obtained by both versions of the GPS algorithm for Road 1. Note that for both GPS0 and GPS1, the best alignment is obtained when the horizontal alignment touches the corridor. Since the corridor is given as input (by our industrial partner), it cannot be expanded to get better results.

[Figure 4.3: The horizontal alignment carried out by the GPS0 and GPS1 algorithms for Road 1.]

4.3 The effect of the multi-fidelity framework on the trust region algorithm

In this section, we study the effect of the multi-fidelity paradigm on the Trust Region (TR) algorithm. To do so, we employ the algorithm presented in Algorithm 4 for the TR algorithm with the multi-fidelity framework. In addition, for the TR algorithm without the multi-fidelity framework, we set ε_0 to zero, making the error value disappear for the whole optimization process. We call the former the Trust Region Support Vector Regression-based algorithm with multi-fidelity (TRSVR1) and the latter the Trust Region Support Vector Regression-based algorithm without multi-fidelity (TRSVR0).

For the maximum number of function evaluations, the maximum time allowed to solve the HA problem and the maximum time allowed for each function call, we follow the same rules as in Section 4.2.1.

4.3.1 Parameter tuning

There are many parameters in the TR algorithm. In this section, in order to make a fair comparison, we tune the parameters that have a major impact on the performance of the TR algorithm. The parameters we tune are the initial trust region size Δ_0 and the trust region decreasing parameter γ_dec for the TRSVR0 algorithm, and Δ_0, γ_dec and the error control parameter E for the TRSVR1 algorithm. The final trust region size is set to 0.01 for both versions of the TR algorithm. The remaining parameters, including γ_inc, η_0, η_1, m, C_0, N_l and N_u, are set according to [31]. Table 4.11 shows the parameter values considered in the tuning process. We tune the parameters on 5 small roads, following the same process as in Section 4.2.1.

Table 4.11: Parameter values for TRSVR0 and TRSVR1 algorithms

  Parameter  TRSVR0                     TRSVR1
  ε_0        0                          0.1
  E          0                          {1, 2, 5, 10}
  Δ_0        {5, 2.5, 1.25}             {5, 2.5, 1.25}
  γ_dec      {0.1, 0.3, 0.5, 0.7, 0.9}  {0.1, 0.3, 0.5, 0.7, 0.9}

Figure 4.4 and Figure 4.5 show the performance profiles of the different configurations for the TRSVR0 and TRSVR1 algorithms, respectively.

[Figure 4.4: The performance profile of the TRSVR0 algorithm for 18 different configurations on five small roads, with (a) 10% and (b) 5% tolerance. The legend only shows the best configuration.]

As shown in Figure 4.4, for the TRSVR0 algorithm, for both 10 and 5 percent tolerance, "config1" (Δ_0 = 5, γ_dec = 0.1) is the fastest and the most robust configuration. Therefore, "config1" is chosen for the TRSVR0 algorithm in all the simulations.
[Figure 4.5: The performance profile of the TRSVR1 algorithm for 60 different configurations on five small roads, with (a) 10% and (b) 5% tolerance. The legend only shows the best configuration.]

As shown in Figure 4.5, for the TRSVR1 algorithm, for both 10 and 5 percent tolerance, "config1" is the most robust and the fastest among all configurations. Therefore, for the TRSVR1 algorithm we choose "config1" (E = 1, Δ_0 = 5, γ_dec = 0.1) for all the simulations.

4.3.2 Experimental results on 35 roads

After tuning the parameters, we carry out an experiment on the 35 roads to study the effect of the multi-fidelity framework on the TR algorithm. To quantify this effect, we collect the time and the cost of the horizontal alignment for each algorithm.

Table 4.12 summarizes the speed-up and cost difference of the two versions of the TRSVR algorithm for the 35 roads presented in Table 4.1.

Table 4.12: Speed up and cost difference for the TRSVR algorithm

  Speed factor  # of Roads      Cost difference (%)  # of Roads
  < 0.7         1               < 0                  2
  0.7 to 0.9    1               0.0 to 1.0           27
  0.9 to 1.1    6               1.0 to 2.0           3
  1.1 to 2.0    26              2.0 to 5.0           2
  2.0 to 3.0    0               > 5                  0
  > 3           0
  Average (all roads): 1.31     Average (all roads): 0.40

As shown in Table 4.12, the multi-fidelity framework improves the speed of the TR algorithm by 31 percent on average, with an average cost difference of 0.40 percent. In particular, the multi-fidelity framework decreases the speed of the algorithm on only 2 roads, leaves it essentially unchanged on 6 roads, and improves it on 26 roads.

4.3.3 The effect of the multi-fidelity framework on the TR algorithm for different classes of roads

In order to show how much the number of IPs and the number of stations affect the performance of the multi-fidelity approach for the TR algorithm, we run the TRSVR0 and TRSVR1 algorithms on the categories of roads presented in Table 4.5 and Table 4.8.

Table 4.13 shows the speed-up summary results of the TRSVR0 and TRSVR1 algorithms for Category C1 and Category C2 roads. As shown in Table 4.13, in contrast to the GPS algorithm, for which the number of IPs has a great effect on the performance of the multi-fidelity framework, for the TRSVR algorithm it does not seem to have a significant effect. In particular, the average speed-up of the TRSVR algorithm is 31 and 30 percent for Category C1 and Category C2 roads, respectively.

Table 4.13: Speed up for the TRSVR algorithm for two categories of IPs

  Speed factor  # of Roads (C1)  # of Roads (C2)
  < 0.7         1                0
  0.7 to 0.9    0                1
  0.9 to 1.1    2                4
  1.1 to 2      14               12
  2 to 3        0                0
  > 3           0                0
  Average       1.31             1.30

Table 4.14 shows the cost difference between the TRSVR0 and TRSVR1 algorithms for Category C1 and Category C2 roads. We can see from Table 4.14 that there is no significant cost difference between TRSVR0 and TRSVR1 in the average case, as it is less than 1 percent for both categories.

Table 4.14: Cost difference for the TRSVR algorithm for two categories of IPs

  Cost difference (%)  # of Roads (C1)  # of Roads (C2)
  < 0                  1                1
  0 to 1               13               14
  1 to 2               1                2
  2 to 5               2                0
  > 5                  0                0
  Average              0.50             0.30

We have now studied the effect of the multi-fidelity framework on the TRSVR algorithm for different categories of IPs.
We next study the effect of the multi-fidelity paradigm for different groups of stations, following the same categorization of stations as in Table 4.8.

Table 4.15 presents the summary results of the speed-up of the TRSVR algorithm for the three groups of stations. As shown in Table 4.15, for Group G1 roads there is a subtle difference between the TRSVR0 and TRSVR1 algorithms in terms of computation time, while for Group G2 and Group G3 roads the TRSVR1 algorithm is faster than the TRSVR0 algorithm by 39 and 33 percent, respectively, in the average case. We can also see that, in contrast to the GPS algorithm, which could only solve 3 out of 13 Group G3 roads, the TRSVR algorithm solves 12 out of 13 Group G3 roads.

Table 4.15: Speed up for the TRSVR algorithm for 3 different groups of stations

  Speed factor  # of Roads (G1)  # of Roads (G2)  # of Roads (G3)
  < 0.7         1                0                0
  0.7 to 0.9    1                0                0
  0.9 to 1.1    1                1                4
  1.1 to 2      4                14               8
  2 to 3        0                0                0
  > 3           0                0                0
  Average       1.15             1.39             1.33

Table 4.16 shows the summary results of the cost difference between the TRSVR0 and TRSVR1 algorithms for the different groups of stations. As shown in Table 4.16, the average cost difference between TRSVR0 and TRSVR1 is 0.64, 0.44 and 0.20 percent, respectively.

Table 4.16: Cost difference of the TRSVR algorithm for different groups of stations

  Cost difference (%)  # of Roads (G1)  # of Roads (G2)  # of Roads (G3)
  < 0                  1                0                1
  0 to 1               4                13               10
  1 to 2               1                2                0
  2 to 5               1                0                1
  > 5                  0                0                0
  Average              0.64             0.44             0.20

In total, from Table 4.13 and Table 4.15, we can see that the size of the IP set has less effect on the efficiency of the TRSVR algorithm than the size of the station set.

To see the final results obtained by the TRSVR0 and TRSVR1 algorithms, we plot the horizontal alignment for Road 1. Figure 4.6 shows the optimal horizontal alignments obtained by both versions of the TRSVR algorithm.

[Figure 4.6: The horizontal alignment carried out by the TRSVR algorithms for Road 1.]

4.4 Comparison with a standard solver

In order to compare the GPS and TRSVR algorithms with a standard solver, we compare them with NOMAD [9]. The mesh adaptive direct search algorithm used in the NOMAD solver contains several parameters that have a drastic effect on the performance of the algorithm. Among these parameters, we tune 3, set 4, and leave the others at the default values suggested in [22]. The parameters we tune are the initial mesh size, the mesh coarsening exponent and the mesh quadratic radius factor. The parameters we set are the final mesh size, the maximum allowed time to solve the problem and the maximum number of function evaluations, with the same values as in Section 4.2.1. We also set the mesh refining exponent to the inverse of the mesh coarsening exponent.

Table 4.17 shows the values tested for the tuned parameters.

Table 4.17: The parameter values for NOMAD

  Parameter                     NOMAD
  Initial mesh size             {1, 2, 5, 10}
  Mesh quadratic radius factor  {1.1, 2, 3, 4}
  Mesh coarsening exponent      {1, 2, 3, 4}
In order to compare these configurations, we follow the same process as in Section 4.2.1.

[Figure 4.7: The performance profile of NOMAD for 64 different configurations on five small roads, with (a) 10% and (b) 5% tolerance. The legend only shows the top two configurations.]

Figure 4.7 shows the performance profile of NOMAD for 64 different configurations on the five small roads. As shown in Figure 4.7, both "config3" and "config15" seem to perform better than the other configurations; however, when we zoom in, "config3" is more efficient than "config15". Therefore, we choose "config3": initial mesh size = 5, mesh quadratic radius factor = 1.1 and mesh coarsening exponent = 3.

After tuning the parameters of NOMAD, we test all the presented algorithms against it. To compare the results obtained by NOMAD and the rest of the algorithms, we use the performance profile. Figure 4.8 shows the performance profile of all five algorithms on the 35 roads.

[Figure 4.8: The performance profile of 5 different algorithms (GPS0, GPS1, TRSVR0, TRSVR1, NOMAD) across all roads, with (a) 5% and (b) 1% tolerance.]

As shown in Figure 4.8, the fastest algorithms are TRSVR0 and TRSVR1 for the tolerance of 5 percent, while the most robust algorithm is NOMAD. However, as the tolerance decreases, the performance of both versions of the TRSVR algorithm plunges drastically. We can also see that the GPS1 algorithm is faster and more robust than GPS0. Note, however, that the performance profile does not imply a relative ranking: if we remove the best algorithm from a set of algorithms, we may see a shift in the ranking among the rest. Readers can find more information about this switching phenomenon in [11].

4.4.1 Comparison with NOMAD for different classes of roads

As mentioned earlier, we categorize the 35 roads based on the number of IPs (see Table 4.5) and the number of stations (see Table 4.8).

Figure 4.9 shows the performance profile of all 5 algorithms for Category C1 roads. As shown in Figure 4.9, for Category C1 with 5 percent tolerance, the most efficient algorithm is TRSVR1 and the most robust ones are GPS1 and NOMAD. However, as the tolerance decreases, the performance of the TRSVR0 and TRSVR1 algorithms drastically decreases. For the GPS algorithm, GPS1 is faster and more robust than GPS0 at 5 percent tolerance, but GPS0 can solve more Category C1 problems at 1 percent tolerance.

[Figure 4.9: The performance profile of 5 different algorithms for Category C1 roads, with (a) 5% and (b) 1% tolerance.]

Figure 4.10 shows the performance profile of all five algorithms for Category C2 roads.
As shown in Figure 4.10, similar to Category C1, for τ = 5% TRSVR1 is the fastest algorithm, and the most robust algorithms are GPS1 and NOMAD at 5 percent tolerance and only NOMAD at 1 percent tolerance. We can also see that the GPS1 algorithm is much faster and more robust than the GPS0 algorithm. From Figure 4.9 and Figure 4.10, we see that for both Category C1 and Category C2 roads, the most robust algorithm is NOMAD and the most efficient one is TRSVR1. Also, note that as the size of the IP set increases, the GPS1 algorithm becomes more efficient compared to the GPS0 algorithm.

[Figure 4.10: The performance profile of 5 algorithms for Category C2 roads, with (a) 5% and (b) 1% tolerance.]

Next, we study the effect of the station set size on the performance and efficiency of the algorithms, using the same categorization of the station set as presented in Table 4.8. Figure 4.11 shows the performance profile of the 5 algorithms for Group G1 roads. As shown in Figure 4.11, for 5 percent tolerance, the most efficient algorithm is TRSVR1 and the most robust one is NOMAD. However, when more accuracy is required (1 percent tolerance), the fastest and the most robust algorithm is NOMAD.

[Figure 4.11: The performance profile of 5 different algorithms for Group G1 roads, with (a) 5% and (b) 1% tolerance.]

Figure 4.12 shows the performance profile of the algorithms for Group G2 roads. As shown in Figure 4.12, for 5 percent tolerance, the most robust algorithm is NOMAD and the most efficient one is TRSVR1. However, for 1 percent tolerance, the most robust and the most efficient one is NOMAD.

[Figure 4.12: The performance profile of 5 different algorithms for Group G2 roads, with (a) 5% and (b) 1% tolerance.]

Figure 4.13 shows the performance profile of the 5 different algorithms for Group G3 roads. As shown in Figure 4.13, for 5 percent tolerance, the most efficient algorithm is TRSVR1 and the most robust one is NOMAD. However, for 1 percent tolerance, the most efficient and the most robust one is NOMAD.

[Figure 4.13: The performance profile of 5 different algorithms for Group G3 roads, with (a) 5% and (b) 1% tolerance.]
Altogether, from Figure 4.9 to Figure 4.13, we can conclude that the TRSVR1 algorithm is the fastest among all five algorithms for the different sizes of IP sets and station sets when 5 percent tolerance is required. However, when 1 percent tolerance is needed, the most efficient algorithm is either NOMAD or GPS1 for most categories of IPs and groups of the station set. Moreover, the most robust algorithms, solving most of the problems, are also GPS1 and NOMAD.

Chapter 5

Conclusion

The horizontal alignment optimization problem is a time-consuming and complicated optimization problem; solving it can take hours for Derivative-Free Optimization (DFO) algorithms. In this thesis, we studied the use of the multi-fidelity framework for some DFO algorithms on the HA problem in order to save computation time. The algorithms we studied are the generalized pattern search and trust region algorithms. Multi-fidelity refers to the use of surrogate models with different precisions in the hope of saving computation time while keeping the approximation error within an acceptable tolerance.

In order to test the multi-fidelity framework, we ran both algorithms on 35 different roads. The experimental results showed that the multi-fidelity framework increases the speed of the GPS algorithm by an average factor of 2.52 and the speed of the trust region algorithm by an average factor of 1.31.

We also categorized the roads based on the number of intersection points and the number of stations. We demonstrated that when the HA problem is small, the multi-fidelity framework did not help the HA algorithm, and could even deteriorate its speed. However, as the HA problem becomes larger, the multi-fidelity framework provides improvement in terms of speed. This is helpful since we already have good HA algorithms to optimize small roads, and our industrial partner is more interested in computation time improvements on large roads.

Finally, we studied the HA algorithms in terms of the number of stations. Interestingly, we see a direct correlation between the number of stations and the effect of the multi-fidelity framework: as the number of stations increases, the multi-fidelity framework provides greater improvement in computation time. This is particularly important as more than half of the roads take more than 1 hour to solve with the GPS algorithm.

Recommendations for future research

Even though the multi-fidelity framework helps the GPS and TRSVR algorithms improve their speed, there is still room for further improvement. In the GPS algorithm presented, we only considered contraction of the mesh size. While this assumption makes it easier to prove the convergence of the algorithm, it may also deteriorate its performance. Furthermore, we utilized the basic poll strategy with a maximal positive basis for the construction of polling directions. This strategy makes the GPS algorithm evaluate all possible polling directions, thereby spending too much computation time on potentially unsuccessful polling directions. In the future, we could use opportunistic polling with ordering, which was shown [1] to reduce the computation time of the GPS algorithm significantly.

In the TRSVR algorithm presented, we considered a minimal positive basis for the construction of the sample points in the trust region.
Sometimes using a maximal positive basis or different strategies might provide better results. Furthermore, for the TRSVR algorithm we only studied the initial trust region size, the trust region decreasing parameter and the error control parameter. The TRSVR algorithm has other parameters, such as those controlling the size of the sample set; these parameters often have a noticeable impact on the performance of the algorithm, and by tuning them we might obtain better results.

In our multi-fidelity framework, we used the true objective function in two cases: when we are close enough to a local minimizer, and at the end of each iteration to check the quality of the new solution. Even though using the true objective function in these cases keeps the algorithm from moving far away from the promising regions of the search space, it increases the computation time of the algorithm. In the future, it might be worth using the true objective function in only one, or neither, of these cases to reduce the computation time.

In terms of optimality, both the GPS and TRSVR algorithms studied guarantee local optimality. In the future, if we are interested in global optimality, we can choose global optimization algorithms such as a branch-and-cut approach [34] to find the global minimizer. Since the road design optimization problem is a non-convex mixed-integer non-linear programming problem, we can use global optimization solvers such as BARON [30] to solve small roads and, for large roads, use the solution obtained by a global solver as an initial point for the GPS or TRSVR algorithm.

In our settings, we used a serial computing environment for both the GPS and TRSVR algorithms. One potential future direction is to use a parallel computing environment: for example, we could implement a multi-start GPS or TRSVR algorithm where each core is assigned a different starting point.

Bibliography

[1] Mark A. Abramson, Charles Audet, and John Dennis. Generalized pattern search algorithms: unconstrained and constrained cases. In IMA Workshop, 2003. → pages 56

[2] Kazuhiro Aruga. Tabu search optimization of horizontal and vertical alignments of forest roads. Journal of Forest Research, 10(4):275–284, Aug 2005. → pages 2

[3] Charles Audet and John E. Dennis Jr. Mesh adaptive direct search algorithms for constrained optimization. SIAM Journal on Optimization, 17(1):188–217, 2006. → pages 3

[4] Charles Audet and Warren Hare. Derivative-Free and Blackbox Optimization. Springer Series in Operations Research and Financial Engineering. Springer International Publishing, 2017. → pages 4, 8, 11, 12, 15, 36

[5] Vahid Beiranvand, Warren Hare, and Yves Lucet. Best practices for comparing optimization algorithms. Optimization and Engineering, 18(4):815–848, 2017. → pages 34

[6] Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint. Trust Region Methods, volume 1. SIAM, 2000. → pages 15, 19, 21

[7] Andrew R. Conn, Katya Scheinberg, and Philippe L. Toint. On the convergence of derivative-free methods for unconstrained optimization. Approximation Theory and Optimization: Tributes to M. J. D. Powell, pages 83–108, 1997. → pages 19

[8] IBM ILOG CPLEX. V12.1: User's manual for CPLEX. International Business Machines Corporation, 46(53):157, 2009. → pages 26

[9] Jonathan Currie and David Wilson. OPTI: lowering the barrier between open source optimizers and the industrial MATLAB user. Foundations of Computer-Aided Process Operations, 24:32, 2012. → pages 33, 48
[10] M. Giselle Fernández-Godino, Chanyoung Park, Nam-Ho Kim, and Raphael T. Haftka. Review of multi-fidelity models. In ECCOMAS Congress 2016, 2016. → pages 5

[11] Nicholas Gould and Jennifer Scott. A note on performance profiles for benchmarking software. ACM Transactions on Mathematical Software (TOMS), 43(2):15, 2016. → pages 50

[12] Nicholas I. M. Gould, Stefano Lucidi, Massimo Roma, and Philippe L. Toint. Solving the trust-region subproblem using the Lanczos method. SIAM Journal on Optimization, 9(2):504–525, 1999. → pages 15, 21

[13] Michael W. Hancock and Bud Wright. A policy on geometric design of highways and streets, 2013. → pages 2

[14] Warren Hare, Majid Jaberipour, and Yves Lucet. Multi-fidelity surrogates for vertical alignment optimization in road design. In submission. → pages 3, 6, 26, 27

[15] Dessalegn Amenu Hirpa. Simultaneous optimization of vertical and horizontal road alignments. Master's thesis, University of British Columbia, 2014. → pages 3

[16] Jiexiang Hu, Qi Zhou, Ping Jiang, Xinyu Shao, and Tingli Xie. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging. Engineering Optimization, 50(1):145–163, 2018. → pages 4, 5

[17] Edward Huang, Jie Xu, Si Zhang, and Chun-Hung Chen. Multi-fidelity model integration for engineering design. Procedia Computer Science, 44:336–344, 2015. → pages 4, 5

[18] Likeng Huang, Zhenghong Gao, and Dehu Zhang. Research on multi-fidelity aerodynamic optimization methods. Chinese Journal of Aeronautics, 26(2):279–286, 2013. → pages 4, 5

[19] Ronald L. Iman. Latin hypercube sampling. Wiley StatsRef: Statistics Reference Online, 2008. → pages 17

[20] Manoj Kumar Jha, Paul Schonfeld, and Jyh-Cherng Jong. Intelligent Road Design. Advances in Transport. WIT Press, 2006. → pages 1, 2

[21] Donald R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4):345–383, 2001. → pages 5

[22] Sébastien Le Digabel and Christophe Tribes. NOMAD User Guide: Version 3.5. Groupe d'études et de recherche en analyse des décisions (GERAD), 2009. → pages 49

[23] Sukanto Mondal. Horizontal alignment optimization in road design. Master's thesis, University of British Columbia, 2014. → pages 3, 24, 26

[24] Jorge J. Moré and Stefan M. Wild. Benchmarking derivative-free optimization algorithms. SIAM Journal on Optimization, 20(1):172–191, 2009. → pages 35

[25] Minister of Transport. Transportation in Canada 2016. https://www.tc.gc.ca/eng/policy/transportation-canada-2016.html#the-role-of-transportation-in-the-economy, 2017. → pages 1

[26] Chanyoung Park, Raphael T. Haftka, and Nam-Ho Kim. Experience with several multi-fidelity surrogate frameworks. In 11th World Congress on Structural and Multidisciplinary Optimization, Sydney, Australia, pages 7–12, 2015. → pages 5

[27] Benjamin Peherstorfer, Karen Willcox, and Max Gunzburger. Survey of multifidelity methods in uncertainty propagation, inference, and optimization. Technical report, Aerospace Computational Design Laboratory, Department of Aeronautics and Astronautics, Massachusetts, 2016. → pages 5

[28] Elijah Polak and Michael Wetter. Precision control for generalized pattern search algorithms with adaptive precision function evaluations. SIAM Journal on Optimization, 16(3):650–669, 2006. → pages 5, 6, 8, 10, 12, 14, 15, 17, 35

[29] Yasha Pushak. Road design optimization with a surrogate function. Master's thesis, University of British Columbia, Apr 2015. → pages 4

[30] Nikolaos V. Sahinidis.
BARON: a general purpose global optimization software package. Journal of Global Optimization, 8(2):201–205, 1996. → pages 56

[31] Jun Takaki and Nobuo Yamashita. A derivative-free trust-region algorithm for unconstrained optimization with controlled error. Numerical Algebra, Control & Optimization, 1(1):117–145, 2011. → pages 5, 6, 15, 17, 19, 20, 43

[32] Pham Dinh Tao and Le Thi Hoai An. A DC optimization algorithm for solving the trust-region subproblem. SIAM Journal on Optimization, 8(2):476–505, 1998. → pages 15, 21

[33] Chan Weng Tat and Fan Tao. Using GIS and genetic algorithm in highway alignment optimization. In Proceedings of the 2003 IEEE International Conference on Intelligent Transportation Systems, volume 2, pages 1563–1567, Oct 2003. → pages 3

[34] Mohit Tawarmalani and Nikolaos V. Sahinidis. A polyhedral branch-and-cut approach to global optimization. Mathematical Programming, 103(2):225–249, 2005. → pages 56

[35] Virginia Torczon. On the convergence of pattern search algorithms. SIAM Journal on Optimization, 7(1):1–25, 1997. → pages 12

[36] Michael Wetter and Elijah Polak. A convergent optimization method using pattern search algorithms with adaptive precision simulation. Building Services Engineering Research and Technology, 25(4):327–338, 2004. → pages 5

Appendix A

Data Tables

Here, we present the detailed results for the GPS0, GPS1, TRSVR0, TRSVR1 and NOMAD algorithms on the 35 roads.

It should be noted that in Tables A.1 to A.5, "Cost" is the optimized VA cost obtained, "Time" is the total computation time measured, "Status" and "Reason" give the status and reason for which an algorithm terminates, "Memory" is the memory used by the computer to solve the problem, and "True FC" and "Surrogate FC" are the numbers of true and surrogate function calls used by the algorithm to solve the HA problem.

In Table A.1 and Table A.2, "Mesh Small" means that the GPS algorithm stops once it reaches the final mesh size. Furthermore, "Fe exhaustion" means the algorithm terminates because it hits the upper bound on the number of function evaluations, and "24h T." means the algorithm reaches the maximum computation time before finding a local minimizer.

In Table A.3 and Table A.4, "TR Small" means that the algorithm stops because the trust region is small.

In Table A.8, "Initial Solution" refers to the VA cost obtained for the initial alignment.

Table A.1: The detailed results of GPS0 for the HA problem across all roads

  #   Cost ($)    Time (s)  Status   Reason      Memory (MB)  True FC  Surrogate FC
  1   252,975     334       Success  Mesh Small  1,645        106      0
  2   44,458      160       Success  Mesh Small  1,450        41       0
  3   537,873     5,023     Success  Mesh Small  1,643        690      0
  4   127,411     1,211     Success  Mesh Small  1,645        150      0
  5   8,554,830   1,054     Success  Mesh Small  1,636        137      0
  6   10,951,800  762       Success  Mesh Small  1,643        90       0
  7   5,889,700   1,069     Success  Mesh Small  1,649        130      0
  8   7,273,130   1,729     Success  Mesh Small  1,654        144      0
  9   363,861     5,538     Success  Mesh Small  1,600        463      0
  10  6,986,830   1,598     Success  Mesh Small  1,653        94       0
  11  6,082,110   1,533     Success  Mesh Small  1,651        128      0
  12  204,763     17,273    Success  Mesh Small  1,702        694      0
  13  650,128     4,566     Success  Mesh Small  1,193        164      0
  14  278,496     1,320     Success  Mesh Small  1,275        53       0
  15  1,514,740   43,276    Success  Mesh Small  1,395        1,732    0
  16  356,061     1,242     Success  Mesh Small  1,268        49       0
  17  244,938     1,767     Success  Mesh Small  1,622        56       0
  18  2,853,810   3,626     Success  Mesh Small  1,632        106      0
  19  614,144     68,809    Success  Mesh Small  1,608        3,482    0
  20  121,374     -         Fail     24h T.      -            -        -
  21  3,088,040   48,963    Success  Mesh Small  1,369        589      0
  22  230,578     5,504     Success  Mesh Small  1,594        59       0
  23  149,631     7,550     Success  Mesh Small  1,596        479      0
  24  222,118     3,030     Success  Mesh Small  1,601        148      0
  25  222,550     789       Success  Mesh Small  1,441        65       0
  26  244,800     195       Success  Mesh Small  1,436        86       0
  27  377,133     -         Fail     24h T.      -            -        -
  28  568,715     -         Fail     24h T.      -            -        -
  29  11,012,600  -         Fail     24h T.      -            -        -
  30  1,016,780   -         Fail     24h T.      -            -        -
  31  1,075,830   -         Fail     24h T.      -            -        -
  32  47,097,000  -         Fail     24h T.      -            -        -
  33  30,894,600  -         Fail     24h T.      -            -        -
  34  30,155,800  -         Fail     24h T.      -            -        -
  35  28,887      73        Success  Mesh Small  1,665        35       0
21   3,088,040    48,963    Success   Mesh Small   1,369        589          0
22   230,578      5,504     Success   Mesh Small   1,594        59           0
23   149,631      7,550     Success   Mesh Small   1,596        479          0
24   222,118      3,030     Success   Mesh Small   1,601        148          0
25   222,550      789       Success   Mesh Small   1,441        65           0
26   244,800      195       Success   Mesh Small   1,436        86           0
27   377,133      -         Fail      24h T.       -            -            -
28   568,715      -         Fail      24h T.       -            -            -
29   11,012,600   -         Fail      24h T.       -            -            -
30   1,016,780    -         Fail      24h T.       -            -            -
31   1,075,830    -         Fail      24h T.       -            -            -
32   47,097,000   -         Fail      24h T.       -            -            -
33   30,894,600   -         Fail      24h T.       -            -            -
34   30,155,800   -         Fail      24h T.       -            -            -
35   28,887       73        Success   Mesh Small   1,665        35           0

Table A.2: The detailed results of GPS1 for the HA problem across all roads

#    Cost($)      Time(s)   Status    Reason       Memory(mb)   True FC(#)   Surrogate FC(#)
1    251,160      226       Success   Mesh Small   1,298        21           169
2    44,752       39        Success   Mesh Small   1,284        10           41
3    543,235      1,504     Success   Mesh Small   1,288        33           661
4    127,346      563       Success   Mesh Small   1,284        21           253
5    8,604,670    613       Success   Mesh Small   1,285        24           289
6    10,953,600   481       Success   Mesh Small   1,285        27           217
7    5,897,510    508       Success   Mesh Small   1,284        22           177
8    7,327,620    1,078     Success   Mesh Small   1,283        30           241
9    367,957      1,428     Success   Mesh Small   1,287        19           533
10   6,996,660    969       Success   Mesh Small   1,290        21           169
11   6,163,850    841       Success   Mesh Small   1,290        20           161
12   204,763      18,124    Success   Mesh Small   1,247        37           1,185
13   650,128      1,280     Success   Mesh Small   1,289        19           305
14   278,496      577       Success   Mesh Small   1,289        11           89
15   1,545,050    12,700    Success   Mesh Small   1,303        50           2,601
16   356,061      520       Success   Mesh Small   1,288        10           81
17   244,938      799       Success   Mesh Small   1,305        11           89
18   2,853,810    1,860     Success   Mesh Small   1,312        26           209
19   614,144      -         Fail      24h T.       -            -            -
20   131,313      15,516    Success   Mesh Small   1,346        22           1,673
21   3,115,360    17,052    Success   Mesh Small   1,268        44           1,409
22   230,578      2,221     Success   Mesh Small   1,240        13           105
23   152,889      746       Success   Mesh Small   1,240        10           121
24   223,508      1,468     Success   Mesh Small   1,244        23           277
25   222,550      937       Success   Mesh Small   1,247        12           97
26   247,984      316       Success   Mesh Small   1,341        9            73
27   391,768      -         Fail      24h T.       -            -            -
28   563,671      26,550    Success   Mesh Small   1,353        15           301
29   11,012,600   -         Fail      24h T.       -            -            -
30   1,028,300    20,355    Success   Mesh Small   1,350        34           2,041
31   1,084,200    68,432    Success   Mesh Small   1,246        44           2,993
32   47,035,100   52,576    Success   Mesh Small   1,350        38           1,825
33   30,894,600   -         Fail      24h T.       -            -            -
34   29,678,800   -         Fail      24h T.       -            -            -
35   28,579       36        Success   Mesh Small   1,530        13           40
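Tables A.6 and A.7 further below compare each single-fidelity solver with its multi-fidelity counterpart. As a reading aid, here is a minimal Python sketch of how the "Diff(%)" and "Speedup" columns of those tables can be recomputed from the per-road costs and times; the function `compare` and its conventions are our illustrative assumptions rather than code from the thesis, and small mismatches with the printed columns are expected since those were presumably computed from unrounded measurements.

```python
# Illustrative sketch (not code from the thesis): recomputing the "Diff(%)"
# and "Speedup" columns of Tables A.6 and A.7 from per-road results.
# Assumed conventions: Diff(%) = 100 * (cost1 - cost0) / cost0 and
# Speedup = time0 / time1, where index 0 is the single-fidelity run
# (e.g. GPS0) and index 1 the multi-fidelity run (e.g. GPS1).

def compare(cost0: float, time0: float, cost1: float, time1: float):
    """Return (diff_percent, speedup) for one road."""
    diff = 100.0 * (cost1 - cost0) / cost0  # positive: multi-fidelity cost is higher
    speedup = time0 / time1                 # > 1: multi-fidelity run is faster
    return diff, speedup

# Example with road 2 of Tables A.1 and A.2, using the printed values:
diff, speedup = compare(44_458, 160, 44_752, 39)
print(f"road 2: Diff = {diff:.2f}%, Speedup = {speedup:.2f}")
# Prints roughly 0.66% and 4.10; Table A.6 lists 0.60 and 4.06. The gap is
# consistent with the table being generated from unrounded measurements.
```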
Table A.3: The detailed results of TRSVR0 for the HA problem across all roads

#    Cost($)      Time(s)   Status    Reason     Memory(mb)   True FC(#)   Surrogate FC(#)
1    262,740      82        Success   TR Small   1,655        31           0
2    46,223       41        Success   TR Small   1,472        20           0
3    560,863      343       Success   TR Small   1,652        60           0
4    136,550      280       Success   TR Small   1,652        39           0
5    8,876,460    243       Success   TR Small   1,653        41           0
6    11,183,200   183       Success   TR Small   1,652        31           0
7    6,035,530    248       Success   TR Small   1,669        30           0
8    7,468,290    325       Success   TR Small   1,697        30           0
9    377,635      1,001     Success   TR Small   1,644        77           0
10   7,224,780    273       Success   TR Small   1,651        18           0
11   6,160,100    330       Success   TR Small   1,641        31           0
12   215,049      1,776     Success   TR Small   1,721        90           0
13   719,287      753       Success   TR Small   1,261        50           0
14   288,204      626       Success   TR Small   1,277        30           0
15   1,630,980    2,419     Success   TR Small   1,456        137          0
16   365,717      630       Success   TR Small   1,282        31           0
17   250,031      776       Success   TR Small   1,675        30           0
18   3,038,370    831       Success   TR Small   1,652        31           0
19   848,507      2,741     Success   TR Small   1,655        175          0
20   154,308      5,522     Success   TR Small   1,407        198          0
21   3,214,460    6,772     Success   TR Small   1,419        90           0
22   238,498      2,303     Success   TR Small   1,659        31           0
23   153,501      481       Success   TR Small   1,659        41           0
24   239,318      801       Success   TR Small   1,658        40           0
25   223,173      163       Success   TR Small   1,470        15           0
26   255,320      55        Success   TR Small   1,440        31           0
27   386,715      70,888    Success   TR Small   1,302        12           0
28   584,928      36,686    Success   TR Small   1,330        60           0
29   11,077,600   81,564    Success   TR Small   1,741        29           0
30   1,049,540    7,218     Success   TR Small   1,280        161          0
31   1,126,310    14,163    Success   TR Small   1,554        174          0
32   47,319,700   17,589    Success   TR Small   1,455        121          0
33   30,860,100   -         Fail      24h T.     -            -            -
34   30,781,500   65,649    Success   TR Small   1,315        100          0
35   30,982       32        Success   TR Small   1,718        21           0

Table A.4: The detailed results of TRSVR1 for the HA problem across all roads

#    Cost($)      Time(s)   Status    Reason     Memory(mb)   True FC(#)   Surrogate FC(#)
1    266,295      63        Success   TR Small   1,654        4            28
2    46,543       26        Success   TR Small   1,469        4            18
3    562,267      256       Success   TR Small   1,652        4            58
4    138,497      194       Success   TR Small   1,653        4            38
5    8,957,230    182       Success   TR Small   1,654        4            37
6    11,193,700   140       Success   TR Small   1,651        4            28
7    6,052,440    186       Success   TR Small   1,703        4            27
8    7,474,790    218       Success   TR Small   1,689        4            27
9    378,078      679       Success   TR Small   1,641        4            78
10   7,260,590    139       Success   TR Small   1,651        2            17
11   6,194,380    259       Success   TR Small   1,638        4            27
12   215,049      1,765     Success   TR Small   1,723        4            88
13   722,599      833       Success   TR Small   1,265        4            48
14   288,741      439       Success   TR Small   1,281        4            28
15   1,652,720    1,623     Success   TR Small   1,454        4            137
16   365,717      453       Success   TR Small   1,284        4            28
17   250,031      584       Success   TR Small   1,662        4            28
18   3,056,910    572       Success   TR Small   1,653        4            28
19   848,700      2,775     Success   TR Small   1,659        4            178
20   155,122      3,679     Success   TR Small   1,417        4            198
21   3,214,870    4,256     Success   TR Small   1,409        4            88
22   238,610      1,581     Success   TR Small   1,659        4            28
23   153,748      396       Success   TR Small   1,669        4            38
24   240,306      977       Success   TR Small   1,661        4            38
25   223,173      175       Success   TR Small   1,470        2            18
26   254,333      109       Success   TR Small   1,440        4            27
27   395,426      54,644    Success   TR Small   1,260        4            27
28   585,648      24,535    Success   TR Small   1,331        4            58
29   11,077,600   80,765    Success   TR Small   1,743        4            27
30   1,051,290    4,782     Success   TR Small   1,273        4            158
31   1,128,230    8,189     Success   TR Small   1,556        4            177
32   47,320,300   12,063    Success   TR Small   1,458        4            128
33   30,860,100   -         Fail      24h T.     -            -            -
34   30,547,400   68,147    Success   TR Small   1,254        4            98
35   31,789       25        Success   TR Small   1,721        4            18
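The "True FC" and "Surrogate FC" columns of Tables A.2 and A.4 show that the multi-fidelity variants perform most of their evaluations on the cheap surrogate. The following hedged sketch quantifies that share; the helper `surrogate_share` is an illustration of ours, not part of the thesis software.

```python
# Illustrative sketch (not code from the thesis): the share of evaluations
# that a multi-fidelity run performs on the cheap surrogate, computed from
# the "True FC" and "Surrogate FC" columns of Tables A.2 and A.4.

def surrogate_share(true_fc: int, surrogate_fc: int) -> float:
    """Fraction of all function calls that went to the surrogate model."""
    total = true_fc + surrogate_fc
    if total == 0:
        raise ValueError("no function calls recorded")
    return surrogate_fc / total

# Road 1: GPS1 used 21 true and 169 surrogate calls (Table A.2);
# TRSVR1 used 4 true and 28 surrogate calls (Table A.4).
print(f"GPS1, road 1:   {surrogate_share(21, 169):.1%}")  # 88.9% surrogate calls
print(f"TRSVR1, road 1: {surrogate_share(4, 28):.1%}")    # 87.5% surrogate calls
```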
Table A.5: The detailed results of NOMAD0 for the HA problem across all roads

#    Cost($)      Time(s)   Status    Reason       Memory(mb)   True FC(#)   Surrogate FC(#)
1    254,376      156       Success   Mesh Small   -            35           0
2    43,842       237       Success   Mesh Small   -            53           0
3    538,078      1,269     Success   Mesh Small   -            203          0
4    126,946      1,101     Success   Mesh Small   -            159          0
5    8,571,510    1,190     Success   Mesh Small   -            176          0
6    10,942,000   557       Success   Mesh Small   -            81           0
7    5,910,740    454       Success   Mesh Small   -            60           0
8    7,269,320    1,420     Success   Mesh Small   -            149          0
9    361,757      2,707     Success   Mesh Small   -            327          0
10   6,992,860    1,213     Success   Mesh Small   -            106          0
11   6,085,130    927       Success   Mesh Small   -            100          0
12   207,808      5,271     Success   Mesh Small   -            421          0
13   642,831      1,010     Success   Mesh Small   -            101          0
14   270,265      910       Success   Mesh Small   -            67           0
15   1,518,510    20,595    Success   Mesh Small   -            1,688        0
16   361,737      460       Success   Mesh Small   -            35           0
17   245,352      597       Success   Mesh Small   -            35           0
18   2,870,000    3,088     Success   Mesh Small   -            166          0
19   658,138      23,377    Success   Mesh Small   -            2,162        0
20   123,617      45,854    Success   Mesh Small   -            2,869        0
21   3,109,310    17,074    Success   Mesh Small   -            521          0
22   231,022      6,460     Success   Mesh Small   -            123          0
23   150,613      2,191     Success   Mesh Small   -            197          0
24   222,485      870       Success   Mesh Small   -            93           0
25   220,982      532       Success   Mesh Small   -            62           0
26   239,810      431       Success   Mesh Small   -            111          0
27   375,012      38,572    Success   Mesh Small   -            64           0
28   564,436      39,969    Success   Mesh Small   -            137          0
29   10,939,800   79,867    Success   Mesh Small   -            83           0
30   1,022,910    32,574    Success   Mesh Small   -            1,143        0
31   1,078,410    75,087    Success   Mesh Small   -            1,570        0
32   47,136,900   -         Fail      24h T.       -            -            -
33   30,784,400   -         Fail      24h T.       -            -            -
34   30,034,800   -         Fail      24h T.       -            -            -
35   28,889       183       Success   Mesh Small   -            45           0

Table A.6: Comparison of GPS0 and GPS1 algorithms

          GPS0                   GPS1                   Comparison
#    Cost($)      Time(s)   Cost($)      Time(s)   Diff(%)   Speedup
1    252,975      334       251,160      226       −0.67     1.48
2    44,458       160       44,752       39        0.60      4.06
3    537,873      5,023     543,235      1,504     0.93      3.34
4    127,411      1,211     127,346      563       −0.05     2.15
5    8,554,830    1,054     8,604,670    613       0.56      1.72
6    10,951,800   762       10,953,600   481       0.02      1.58
7    5,889,700    1,069     5,897,510    508       0.13      2.11
8    7,273,130    1,729     7,327,620    1,078     0.73      1.60
9    363,861      5,538     367,957      1,428     1.03      3.88
10   6,986,830    1,598     6,996,660    969       0.13      1.65
11   6,082,110    1,533     6,163,850    841       1.32      1.82
12   204,763      17,273    204,763      18,124    0.00      0.95
13   650,128      4,566     650,128      1,280     0.00      3.57
14   278,496      1,320     278,496      577       0.00      2.29
15   1,514,740    43,276    1,545,050    12,700    2.00      3.41
16   356,061      1,242     356,061      520       0.00      2.39
17   244,938      1,767     244,938      799       0.00      2.21
18   2,853,810    3,626     2,853,810    1,860     0.00      1.95
21   3,088,040    48,963    3,115,360    17,052    0.84      2.87
22   230,578      5,504     230,578      2,221     0.00      2.48
23   149,631      7,550     152,889      746       2.11      10.12
24   222,118      3,030     223,508      1,468     0.56      2.06
25   222,550      789       222,550      937       0.00      0.84
26   244,800      195       247,984      316       1.24      0.62
35   28,887       73        28,579       36        −0.91     2.00

Table A.7: Comparison of TRSVR0 and TRSVR1 algorithms

          TRSVR0                 TRSVR1                 Comparison
#    Cost($)      Time(s)   Cost($)      Time(s)   Diff(%)   Speedup
1    262,740      82        266,295      63        1.31      1.31
2    46,223       41        46,543       26        0.66      1.57
3    560,863      343       562,267      256       0.24      1.34
4    136,550      280       138,497      194       1.37      1.44
5    8,876,460    243       8,957,230    182       0.90      1.33
6    11,183,200   183       11,193,700   140       0.09      1.30
7    6,035,530    248       6,052,440    186       0.28      1.33
8    7,468,290    325       7,474,790    218       0.09      1.49
9    377,635      1,001     378,078      679       0.11      1.47
10   7,224,780    273       7,260,590    139       0.49      1.97
11   6,160,100    330       6,194,380    259       0.55      1.28
12   215,049      1,776     215,049      1,765     0.00      1.01
13   719,287      753       722,599      833       0.44      0.90
14   288,204      626       288,741      439       0.18      1.43
15   1,630,980    2,419     1,652,720    1,623     1.31      1.49
16   365,717      630       365,717      453       0.00      1.39
17   250,031      776       250,031      584       0.00      1.33
18   3,038,370    831       3,056,910    572       0.60      1.45
19   848,507      2,741     848,700      2,775     0.02      0.99
20   154,308      5,522     155,122      3,679     0.49      1.50
21   3,214,460    6,772     3,214,870    4,256     0.01      1.59
22   238,498      2,303     238,610      1,581     0.05      1.46
23   153,501      481       153,748      396       0.16      1.21
24   239,318      801       240,306      977       0.40      0.82
25   223,173      163       223,173      175       0.00      0.93
26   255,320      55        254,333      109       −0.38     0.50
27   386,715      70,888    395,426      54,644    2.20      1.30
28   584,928      36,686    585,648      24,535    0.12      1.50
29   11,077,600   81,564    11,077,600   80,765    0.00      1.01
30   1,049,540    7,218     1,051,290    4,782     0.17      1.51
31   1,126,310    14,163    1,128,230    8,189     0.17      1.73
32   47,319,700   17,589    47,320,300   12,063    0.00      1.46
34   30,781,500   65,649    30,547,400   68,147    −0.76     0.96
35   30,982       32        31,789       25        2.37      1.26

Table A.8: Comparison of VA cost obtained by 5 HA algorithms

     GPS0         GPS1         TRSVR0       TRSVR1       NOMAD        Initial Solution
#    Cost($)      Cost($)      Cost($)      Cost($)      Cost($)      Cost($)
1    252,975      251,160      262,740      266,295      254,376      270,431
2    44,458       44,752       46,223       46,543       43,842       48,835
3    537,873      543,235      560,863      562,267      538,078      575,425
4    127,411      127,346      136,550      138,497      126,946      142,318
5    8,554,830    8,604,670    8,876,460    8,957,230    8,571,510    8,957,990
6    10,951,800   10,953,600   11,183,200   11,193,700   10,942,000   11,263,800
7    5,889,700    5,897,510    6,035,530    6,052,440    5,910,740    6,063,630
8    7,273,130    7,327,620    7,468,290    7,474,790    7,269,320    7,499,100
9    363,861      367,957      377,635      378,078      361,757      396,965
10   6,986,830    6,996,660    7,224,780    7,260,590    6,992,860    7,287,410
11   6,082,110    6,163,850    6,160,100    6,194,380    6,085,130    6,207,560
12   204,763      204,763      215,049      215,049      207,808      218,411
13   650,128      650,128      719,287      722,599      642,831      752,019
14   278,496      278,496      288,204      288,741      270,265      294,034
15   1,514,740    1,545,050    1,630,980    1,652,720    1,518,510    1,655,240
16   356,061      356,061      365,717      365,717      361,737      382,428
17   244,938      244,938      250,031      250,031      245,352      255,164
18   2,853,810    2,853,810    3,038,370    3,056,910    2,870,000    3,065,770
19   614,144      614,144      848,507      848,700      658,138      922,876
20   121,374      131,313      154,308      155,122      123,617      167,372
21   3,088,040    3,115,360    3,214,460    3,214,870    3,109,310    3,262,090
22   230,578      230,578      238,498      238,610      231,022      240,637
23   149,631      152,889      153,501      153,748      150,613      154,332
24   222,118      223,508      239,318      240,306      222,485      246,076
25   222,550      222,550      223,173      223,173      220,982      227,522
26   244,800      247,984      255,320      254,333      239,810      256,546
27   377,133      391,768      386,715      395,426      375,012      396,176
28   568,715      563,671      584,928      585,648      564,436      599,988
29   11,012,600   11,012,600   11,077,600   11,077,600   10,939,800   11,167,800
30   1,016,780    1,028,300    1,049,540    1,051,290    1,022,910    1,055,270
31   1,075,830    1,084,200    1,126,310    1,128,230    1,078,410    1,129,760
32   47,097,000   47,035,100   47,319,700   47,320,300   47,136,900   47,345,800
33   30,894,600   30,894,600   30,860,100   30,860,100   30,784,400   31,208,100
34   30,155,800   29,678,800   30,781,500   30,547,400   30,034,800   30,991,100
35   28,887       28,579       30,982       31,789       28,889       33,989

Table A.9: Comparison of computation time obtained by 5 HA algorithms

     GPS0      GPS1      TRSVR0    TRSVR1    NOMAD
#    Time(s)   Time(s)   Time(s)   Time(s)   Time(s)
1    334       226       82        63        156
2    160       39        41        26        237
3    5,023     1,504     343       256       1,269
4    1,211     563       280       194       1,101
5    1,054     613       243       182       1,190
6    762       481       183       140       557
7    1,069     508       248       186       454
8    1,729     1,078     325       218       1,420
9    5,538     1,428     1,001     679       2,707
10   1,598     969       273       139       1,213
11   1,533     841       330       259       927
12   17,273    18,124    1,776     1,765     5,271
13   4,566     1,280     753       833       1,010
14   1,320     577       626       439       910
15   43,276    12,700    2,419     1,623     20,595
16   1,242     520       630       453       460
17   1,767     799       776       584       597
18   3,626     1,860     831       572       3,088
19   68,809    24h T.    2,741     2,775     23,377
20   24h T.    15,516    5,522     3,679     45,854
21   48,963    17,052    6,772     4,256     17,074
22   5,504     2,221     2,303     1,581     6,460
23   7,550     746       481       396       2,191
24   3,030     1,468     801       977       870
25   789       937       163       175       532
26   195       316       55        109       431
27   24h T.    24h T.    70,888    54,644    38,572
28   24h T.    26,550    36,686    24,535    39,969
29   24h T.    24h T.    81,564    80,765    79,867
30   24h T.    20,355    7,218     4,782     32,574
31   24h T.    68,432    14,163    8,189     75,087
32   24h T.    52,576    17,589    12,063    24h T.
33   24h T.    24h T.    24h T.    24h T.    24h T.
34   24h T.    24h T.    65,649    68,147    24h T.
35   73        36        32        25        183
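When summarizing Table A.9, one convenient aggregate is the average per-road speedup of one solver over another, restricted to roads that both solved within the 24-hour limit. The sketch below uses a geometric mean, one reasonable choice for averaging ratios; the data layout, the `TIMEOUT` marker, and the function name are our illustrative assumptions, not the aggregation actually used in the thesis.

```python
# Illustrative sketch (not the aggregation used in the thesis): average
# per-road speedup of solver B over solver A from Table A.9-style data.
# Roads where either solver hit the 24 h limit ("24h T.") have no finite
# time and are skipped.

from statistics import geometric_mean

TIMEOUT = None  # assumed marker for "24h T." entries

def average_speedup(times_a, times_b):
    """Geometric-mean speedup of solver B over solver A on roads both solved."""
    ratios = [
        times_a[road] / times_b[road]
        for road in times_a
        if times_a[road] is not TIMEOUT and times_b.get(road) is not TIMEOUT
    ]
    return geometric_mean(ratios)

# Example with the first three roads of Table A.9 (GPS0 vs GPS1):
gps0 = {1: 334, 2: 160, 3: 5_023}
gps1 = {1: 226, 2: 39, 3: 1_504}
print(f"mean speedup: {average_speedup(gps0, gps1):.2f}")  # about 2.73
```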
