Dijkstra-like Ordered Upwind Methods for Solving Static Hamilton-Jacobi Equations

by Ken Alton

Bachelor of Science, University of Calgary, 2000
Master of Science, University of British Columbia, 2005

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in The Faculty of Graduate Studies (Computer Science)

The University of British Columbia (Vancouver)

May 2010

© Ken Alton, 2010

Abstract

The solution of a static Hamilton-Jacobi Partial Differential Equation (HJ PDE) can be used to determine the change of shape in a surface for etching/deposition/lithography applications, to provide the first-arrival time of a wavefront emanating from a source for seismic applications, or to compute the minimal-time trajectory of a robot trying to reach a goal. HJ PDEs are nonlinear, so theory and methods for solving linear PDEs do not directly apply. An efficient way to approximate the solution is to emulate the causal property of this class of HJ PDE: the solution at a particular point depends only on values backwards along the characteristic that passes through that point, and solution values always increase along characteristics. In our discretization of the HJ PDE we enforce an analogous causal property: the solution value at a grid node may depend only on the values of nodes in its numerical stencil which are smaller. This causal property is related to, but not the same as, the upwinding property of schemes for time-dependent problems. The solution to such a discretized system of equations can be efficiently computed using a Dijkstra-like method in a single pass through the grid nodes in order of nondecreasing value.

We develop two Dijkstra-like methods for solving two subclasses of static HJ PDEs. The first method is an extension of the Fast Marching Method for isotropic Eikonal equations and it can be used to solve a class of axis-aligned anisotropic HJ PDEs on an orthogonal grid. The second method solves general convex static HJ PDEs on simplicial grids by computing stencils for a causal discretization in an initial pass through the grid nodes, and then solving the discretization in a second Dijkstra-like pass through the nodes. This method is suitable for computing solutions on highly nonuniform grids, which may be useful for extending it to an error-control method based on adaptive grid refinement.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
1 Introduction
  1.1 Motivation
  1.2 Simple Example Problem: Airplane Escape
  1.3 Static Hamilton-Jacobi Equation
  1.4 Ordered Upwind Methods
  1.5 Contributions
  1.6 Outline
2 Related Work
  2.1 Viscosity Solutions of Hamilton-Jacobi Equations
  2.2 Numerical Methods for Hamilton-Jacobi Equations
  2.3 Dynamic Programming and Optimal Control
  2.4 Methods for Static Hamilton-Jacobi Equations
    2.4.1 Fast Marching Methods
    2.4.2 Ordered Upwind Methods
    2.4.3 Sweeping Methods
  2.5 Applications of Static Hamilton-Jacobi Equations
3 Framework
  3.1 Discretization
  3.2 Convergence
    3.2.1 Consistency, Monotonicity, and Stability
    3.2.2 Viscosity Solution and Proof of Convergence
  3.3 Algorithm
    3.3.1 Dijkstra-like Methods
    3.3.2 Solution of Discretization
4 FMM for Axis-Aligned HJ PDEs
  4.1 Introduction
    4.1.1 The Fast Marching Method
  4.2 Class of Hamiltonians
    4.2.1 Connection to Osher's criterion
    4.2.2 Example Hamiltonians
  4.3 Discretization
    4.3.1 Unique Update
    4.3.2 Causality
    4.3.3 Monotonicity
    4.3.4 Consistency
    4.3.5 Stability
  4.4 Efficient Implementation of Update
    4.4.1 Symmetry Node Elimination
    4.4.2 Causality Node Elimination
    4.4.3 Solution Elimination
  4.5 Analytic Solutions
    4.5.1 Update for p = 1
    4.5.2 Update for p = 2
    4.5.3 Update for p = ∞
  4.6 Experiments
    4.6.1 Numerical Convergence Study
    4.6.2 Asymmetric Anisotropic Problem
    4.6.3 Anelliptic Elastic Wave Propagation
    4.6.4 Two Robots
5 OUM with Monotone Node Acceptance for Convex HJ PDEs
  5.1 Introduction
    5.1.1 Dijkstra-like Methods
    5.1.2 Related Work
  5.2 Discretization
    5.2.1 Monotonicity
    5.2.2 Consistency
    5.2.3 Unique Solution
    5.2.4 Equivalence of Discretizations
  5.3 Causality
    5.3.1 Negative-gradient-acuteness
    5.3.2 Anisotropy-angle-boundedness
    5.3.3 Distance-ratio-boundedness
  5.4 Algorithm
    5.4.1 Computing the Update Set
    5.4.2 Convergence
    5.4.3 Complexity
    5.4.4 Update Region
  5.5 Experiments
    5.5.1 Numerical Convergence Study
    5.5.2 Nonuniform Grid
    5.5.3 Seismic Imaging
    5.5.4 Robot Navigation with Wind and Obstacles
6 Conclusion
  6.1 Future Work
Bibliography

List of Tables

4.1 Numerical evidence of convergence of FMM for axis-aligned problems
5.1 Summary of MAOUM symbols
5.2 Errors for MAOUM on problem with elliptical speed function
5.3 Errors for MAOUM and AFOUM on problem with rectangular speed function

List of Figures

1.1 Two-robot coordinated optimal navigation problem
1.2 Navigating a robot with wind and obstacles
1.3 Viscosity solutions for the airplane escape problem
1.4 Approximate solution for the airplane escape problem
1.5 Orthogonal grids
1.6 Contours of first-arrival times of a seismic wave
2.1 Motion of two robots in a 2D world
2.2 Robot arms cooperating to optimally transport cargo
4.1 Contour plots of p-norms
4.2 Contour plots of transformed p-norms
4.3 Contour plots of mixed p-norm and asymmetric norm-like function
4.4 Neighborhood of x with d = 2
4.5 Solution for asymmetric anisotropic problem
4.6 Anelliptic Hamiltonian and solution
5.1 Symbols used in the definition of δ-negative-gradient-acuteness
5.2 Symbols used in the definition of δ-anisotropy-angle-boundedness
5.3 Symbols used in the definition of distance-ratio-boundedness
5.4 Status of algorithm computing the update node set
5.5 (Continued) Status of algorithm computing the update node set
5.6 A sequence of uniformly-refined Maubach grids
5.7 Sequence of nonuniformly-refined Maubach grids
5.8 Error versus updates for rectangular speed function
5.9 Contours of first-arrival times of a seismic wave
5.10 Navigating a robot with wind and obstacles

Acknowledgments

I would like to thank my supervisor, Ian Mitchell, for his collaboration and support. Much of the progress I have made in developing the ideas in this thesis has been with the help of Ian.
My supervisory committee, including Uri Ascher and Adam Oberman, gave me a great deal of insightful and constructive feedback, for which I am grateful. I would also like to thank Alex Vladimirsky, who has been both a critic and mentor, and who no doubt has made my thesis much stronger. I appreciate the time and effort spent by the examining committee to critique my thesis work and provide many useful suggestions and corrections. Thank you to the chair, Zinovy Reichstein, and the examiners, Robert Bridson and Philip Loewen.

I am greatly indebted to my parents, Val and Norm Alton, for their unwavering love and support throughout my studies. I doubt that I would have even applied for grad school without the appreciation for education they have instilled in me. I am also thankful to my aunt and uncle, Liz Watts and Warren Murschell, for being my family in Vancouver and having me over for Sunday dinners.

My wife, Luiza, has been incredibly supportive for the four years since we met. She has cheered me on when things were going well and helped me bounce back when they were not. Finally, I would like to thank her parents for being very helpful and providing us with a second home in Vancouver.

This work has been supported by a grant from the Natural Sciences and Engineering Research Council of Canada.

Chapter 1
Introduction

The viscosity solution of a static Hamilton-Jacobi Partial Differential Equation (HJ PDE) has at least two illuminating interpretations. One of these is that the solution encodes for each location the first arrival time of a wavefront emanating from a source. Another interpretation is that the solution encodes for each starting location the minimal time for some dynamic system to arrive at a goal location. The connection between these two interpretations is hinted at in a version of Huygens' Principle (a minor modification of the definition in [31]):

    All points on a wavefront serve as point sources of secondary wavelets. After a short time the new position of the wavefront will be that of the surface tangent to those secondary wavelets.

A wavefront propagates by trying all possible directions from all possible point sources along the front. The sources and directions that push the wavefront forward the fastest are used to determine the future position of the wavefront. Over time, the wavefront propagates by trying all possible trajectories from the original source, and the position of the wavefront at a particular time is determined by all the places that can be reached in that time by the most efficient trajectories. In the context of light wave propagation, this is Fermat's Principle: the path taken between two points by a ray of light is the path that can be traversed in the least time. The mathematical connection between these two interpretations is examined in [60].

1.1 Motivation

The two interpretations of the viscosity solution of a static HJ PDE suggest two applications. First, one may wish to know for a wavefront traveling in a medium when the wavefront first reaches various locations. Second, one may wish to compute optimal trajectories for a system to reach a goal. We solve both types of problems, but we are particularly motivated by the latter, which are a type of optimal control problem. For example, one might ask: in what direction should I go in order to reach a goal location in the least possible time?
An infinite number of such questions in sequence results in the larger question: what trajectory should I follow in order to reach the goal in the least possible time?

One way of solving these optimal control problems is to formulate and solve an appropriate static HJ PDE. As mentioned, the viscosity solution is a function that specifies for each possible starting location the minimal time to reach the goal. The viscosity solution is smallest in value at the goal and grows as one moves further out from the goal. From a particular location, to reach the goal as fast as possible one should move in the direction that allows the fastest possible descent of the viscosity solution. This is related to, but not the same thing as, moving in the direction of steepest descent, because one may be capable of moving faster in some directions than in others. If, from a particular starting location, one continually chooses to move in the direction that allows the fastest descent, one will eventually reach the goal via a trajectory of minimal time. The viscosity solution can be used to determine the optimal direction to move for the current state, forming a feedback controller that results in optimal trajectories from any initial state.

This discussion is intended to provide an intuition of how viscosity solutions can be used to determine optimal trajectories. For a detailed mathematical analysis of how the viscosity solution of a static HJ PDE is the solution to a related optimal control problem, see [7] and [25, Chapters 3 and 10]. Section 1.2 introduces this method of optimal control using a simple airplane escape problem. We solve a two-robot coordinated optimal navigation problem in Section 4.6.4 by approximating the viscosity solution to a static HJ PDE on a uniform orthogonal grid. A resulting trajectory of the two robots is illustrated in Figure 1.1. We also solve an optimal robot navigation problem with wind and obstacles in Section 5.5.4 on a nonuniform grid with refinement around obstacles and the goal location. The grid, computed viscosity solution, and several resulting trajectories are shown in Figure 1.2.

Figure 1.1 (panels (a)-(f)): Two-robot coordinated optimal navigation problem. The joint goal is for the dark-colored robot to reach the center of the upper bulb and the light-colored robot to reach the center of the lower bulb. Black indicates an obstacle region. The sequence shows the robots achieving their joint goal without collision from a particular initial state. The solution of an appropriate static HJ PDE allows quick determination of the optimal collision-free trajectories for both robots from any initial condition [1].

Figure 1.2: The problem of optimally navigating a robot to a goal through wind and obstacles. The left shows the laser-rangefinder data of the obstacles and the grid refined in a band around the collision set and the goal. The right shows the collision set in solid black. The right includes the wind vector field, the contours of the computed viscosity solution, and four optimal trajectories from different starting locations to the goal.

1.2 Simple Example Problem: Airplane Escape

The following example problem can be solved by simpler means than the methods of this thesis, mainly because it involves traversal speeds that are homogeneous throughout the domain. However, the example serves as an introduction to some notation, concepts, and issues involved in solving more complicated but related problems.
Imagine that a passenger is in an airplane that just crashed and he wants to get out as quickly as possible. There are two exits on the airplane: a rear exit and a front exit. The airplane crashed nose down, so he can move faster towards the nose of the airplane than towards the tail. One of the exits may be partially blocked by debris which will take some extra time to clear. How does he decide which exit to go towards?

We let x be the distance from the tail along the longitudinal axis of the airplane. We use x_t = 0, x_r, x_f, and x_n to denote the x-coordinate of the tail, rear exit, front exit, and nose, respectively. Let f_t > 0 and f_n > 0 be the speed at which the passenger can move towards the tail and nose of the airplane, respectively. Let g_r and g_f be the amount of time required just to open and get through the rear and front exit, respectively. The total time t_r(x) required for a passenger at position x ∈ (x_t, x_n) to escape through the rear exit is

$$t_r(x) = g_r + \begin{cases} (x - x_r)/f_t, & x \ge x_r \\ (x_r - x)/f_n, & \text{otherwise} \end{cases} \qquad (1.1)$$

and the total time t_f(x) required for a passenger at position x ∈ (x_t, x_n) to escape through the front exit is

$$t_f(x) = g_f + \begin{cases} (x - x_f)/f_t, & x \ge x_f \\ (x_f - x)/f_n, & \text{otherwise.} \end{cases} \qquad (1.2)$$

If t_r < t_f he should go for the rear exit, if t_f < t_r he should go for the front exit, and if t_f = t_r either exit is fine.

This problem is simple to solve, but for other problems we require more general methods. A more complicated version might have a speed f(x, a) that depends not only on the direction of travel a ∈ {−1, +1} but also on the current position x. Also, x ∈ R^d might be a position in a d-dimensional space and a ∈ R^d such that ‖a‖₂ = 1 may be a direction in that d-dimensional space.

Consider the function u : (x_t, x_n) → R that gives the time required to exit the airplane for each passenger position x ∈ (x_t, x_n), which is the minimum of the time required to escape through the front or rear exits. Using (1.1) and (1.2), we obtain

$$u(x) = \min(t_r(x), t_f(x)). \qquad (1.3)$$

The graph of u for an instance of the problem is shown in the top-left of Figure 1.3.
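To make the closed-form solution concrete, the following Python sketch evaluates (1.1)-(1.3) for one instance of the problem. As noted in the caption of Figure 1.3, the thesis does not pin down the problem parameters, so every numeric value below is made up for illustration.

```python
# Closed-form escape times (1.1)-(1.3) for a hypothetical instance of
# the airplane escape problem; all parameter values are illustrative.

x_t, x_r, x_f, x_n = 0.0, 2.0, 8.0, 10.0  # tail, rear exit, front exit, nose
f_t, f_n = 1.0, 2.0                       # speeds toward tail and toward nose
g_r, g_f = 3.0, 1.0                       # time to open/get through each exit

def t_rear(x):
    """Total time to escape through the rear exit from x, eq. (1.1)."""
    return g_r + ((x - x_r) / f_t if x >= x_r else (x_r - x) / f_n)

def t_front(x):
    """Total time to escape through the front exit from x, eq. (1.2)."""
    return g_f + ((x - x_f) / f_t if x >= x_f else (x_f - x) / f_n)

def u(x):
    """Minimal escape time, eq. (1.3)."""
    return min(t_rear(x), t_front(x))

for x in (1.0, 3.0, 5.0, 7.0, 9.0):
    print(f"u({x}) = {u(x):.2f}")
```

Scanning the printed values over a fine grid of positions locates the apex x̂ discussed below, where the two escape times coincide.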
We can verify that the function u satisfies the following differential equation for almost all x:

$$\max_{a \in \{-1,+1\}} \left[ -\frac{du(x)}{dx}\, a\, f(x, a) \right] = 1, \quad x \in \Omega \qquad (1.4a)$$
$$u(x) = g(x), \quad x \in \partial\Omega, \qquad (1.4b)$$

where Ω = (x_t, x_r) ∪ (x_r, x_f) ∪ (x_f, x_n) is the domain, ∂Ω = {x_r, x_f} is the domain boundary,

$$f(x, a) = f(a) = \begin{cases} f_t, & \text{if } a = -1, \\ f_n, & \text{if } a = +1, \end{cases} \quad\text{and}\quad g(x) = \begin{cases} g_r, & \text{if } x = x_r, \\ g_f, & \text{if } x = x_f. \end{cases}$$

Figure 1.3: Viscosity solutions for instances of the airplane escape problem. We do not precisely specify the problem parameters for these instances because doing so is not necessary for the purpose of our illustration. The top-left shows the viscosity solution u to (1.4) and specifies the optimal control for the four regions separated by x_r, x̂, and x_f. For this instance the problem parameters are such that the boundary conditions are compatible and the system is small-time-controllable (i.e., f_t > 0 and f_n > 0), so there is a continuous solution that satisfies the boundary conditions. The top-right plots the solution u to (1.3) for the case where the boundary conditions are incompatible because u(x_r) < g_r. The bottom plots the solution u to (1.3) for the case where the system is not small-time-controllable because f_t = 0.

We say that the differential equation holds for almost all x because there may be some x̂ ∈ Ω at which the derivative du(x̂)/dx does not exist. In the top-left of Figure 1.3 such an x̂ can be seen, at which u(x̂) forms the apex of the graph of u. The point x̂ can be considered a decision boundary. For x ∈ (x_r, x̂) the optimal action is a = −1 (move backward toward x_r), for x ∈ (x̂, x_f) the optimal action is a = +1 (move forward toward x_f), and if x = x̂, the optimal action is a ∈ {−1, +1} (move backward or forward). At the point x̂ the time to escape through either exit is equal: t_r(x̂) = t_f(x̂).

Because the derivative du(x̂)/dx does not exist, there is no classical solution to (1.4). The u defined in (1.3) happens to be the (unique) weak solution of (1.4), called a viscosity solution [19]. The viscosity solution is defined in Section 3.2.2, but the theory of such solutions is not the focus of this thesis.

The boundary ∂Ω and the boundary condition (1.4b) require some special attention. Firstly, even though x_t and x_n form part of the physical boundary of Ω, we do not include x_t or x_n in ∂Ω, and as a result do not enforce a boundary condition on x_t or x_n. We define u on (x_t, x_n) instead of R based on the dimensions of the airplane. In other problems, it may be appropriate to enforce boundary conditions on the exterior boundary instead of (or in addition to) the interior boundary of the domain.

Secondly, there is a potential issue with satisfying the boundary conditions u = g on ∂Ω. For example, consider a problem instance for which, starting from the rear exit, the passenger can escape through the front exit faster than through the rear exit (perhaps the rear door is jammed shut): u(x_r) = t_f(x_r) < t_r(x_r) = g_r. Such an example is shown in Figure 1.3 top-right. In this case, either the boundary condition is not satisfied at x_r (i.e., u(x_r) < g_r), or we enforce the boundary condition, (1.3) is no longer satisfied, and the function u becomes discontinuous at x_r. When such an issue occurs, we say that the boundary conditions are incompatible. Since (1.3) is the correct solution to the physical problem, we want to ensure it holds. One approach is to assume compatible boundary conditions [11], so that the PDE and its boundary conditions can be satisfied by a continuous u. An alternative approach is to use a modified definition of viscosity solution at the boundary, which is satisfied continuously [8] and corresponds to the correct solution for most physical problems. For example, the solution (1.3) would satisfy the modified definition. To avoid complication we take the former approach and assume compatible boundary conditions, while acknowledging that the latter approach can sensibly handle incompatible boundary conditions for many practical problems.

Another issue with the solution may arise if f_t = 0 or f_n = 0. In this case u is discontinuous at x_r and x_f and u(x) = ∞ for some x (for example, see Figure 1.3 bottom). Such conditions complicate the analysis and approximation of viscosity solutions, and so we assume that f_t > 0 and f_n > 0. Such an assumption is called small-time-controllability.

Figure 1.4: Approximate solution for an instance of the airplane escape problem. The problem parameters for this instance are identical to those for the top-left of Figure 1.3. The three figures show a progression of solutions ū to (1.5) (i.e., approximations to the viscosity solution) for increasingly fine grids: N ∈ {3, 5, 9}. On these three figures u is depicted by a light thin dotted line to indicate how well ū approximates u.

For this problem we have an exact closed-form solution (1.3), but for most problems we are forced to approximate the solution on a grid using a numerical method.
We demonstrate by constructing on a grid a discrete system of equations that is the analogue of (1.4). The solution ū to the discrete system is used to approximate the solution u to (1.4), as shown in Figure 1.4. We let X be the set of grid nodes and N = |X| be the number of grid nodes. The spacing between grid nodes is h = (x_n − x_t)/(N − 1). Thus the set of grid nodes is X = {x_t, x_t + h, x_t + 2h, ..., x_n − 2h, x_n − h, x_n}. The discrete boundary ∂Ω contains any node x ∈ X that is less than h away from either x_r or x_f (so there may be as many as four boundary nodes):

$$\partial\Omega = \{ x \in X \mid \min(|x - x_r|, |x - x_f|) < h \}.$$

For problems with dimension d > 1, ∂Ω may be a discrete representation of a continuous boundary (see Figure 1.5). The set Ω = X \ ∂Ω is a discrete representation of the domain Ω. The discrete analogue of (1.4) is

$$\max_{a \in \{-1,+1\}} \frac{\bar u(x) - \bar u(x + ah)}{h}\, f(x, a) = 1, \quad x \in \Omega \qquad (1.5a)$$
$$\bar u(x) = u(x), \quad x \in \partial\Omega. \qquad (1.5b)$$

Note that (1.5b) defines the boundary conditions to be the exact solution u. For most problems we must approximate the boundary condition in (1.5b).

The discretization results in a system of nonlinear equations which we must solve, one equation (1.5a) for each node x ∈ Ω. Luckily, this system is structured in such a way that it can be efficiently solved. We can solve (1.5a) for the node value ū(x) based on the values of neighboring nodes:

$$\bar u(x) = \min_{a \in \{-1,+1\}} \left( \bar u(x + ah) + \frac{h}{f(x, a)} \right). \qquad (1.6)$$

Note that for all x ∈ Ω, ū(x) > ū(x + ah) for some a ∈ {−1, +1}. Also, if ū(x) ≤ ū(x + ah), then ū(x) is independent of ū(x + ah), a property of the discrete equation we call causality. A node's value may depend only on neighboring nodes with smaller values. The information that determines the solution propagates between neighboring nodes from smaller-valued nodes to larger-valued nodes.

We describe three different algorithms that can be used to solve (1.5). We let v : X → R specify grid node values which are updated in the course of algorithm execution. All three algorithms begin by initializing v(x) ← u(x) for x ∈ ∂Ω, and v(x) ← ∞ for x ∈ Ω. As each algorithm progresses, a node value v(x) is updated based on (1.6):

$$v(x) \leftarrow \min_{a \in \{-1,+1\}} \left( v(x + ah) + \frac{h}{f(x, a)} \right). \qquad (1.7)$$

When each algorithm terminates, v = ū is the solution to (1.5).

The first algorithm is Gauss-Seidel iteration. It repeatedly visits all nodes in Ω in a fixed order. Each time a node x is visited, (1.7) is used to update v(x). The algorithm terminates when there is no change in v for an entire pass through X. After O(N²) updates (1.7), the algorithm terminates with v = ū. There are O(N²) updates because if the solution information flows counter to the fixed node order, that information can propagate to at most one new node per pass through all N nodes.

A more efficient algorithm is a sweeping method. It performs two sweeps through the nodes in Ω: the first in left-to-right order and the second in right-to-left order. Each time a node x is visited, (1.7) is used to update v(x). After two sweeps and O(N) node value updates, the algorithm terminates with v = ū. There are at most 2N updates because solution information must flow uninterrupted to the left or right of each boundary node; thus, once the algorithm has performed sweeps in both directions the solution information has been propagated to all nodes.
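The sweeping method is easy to sketch in Python for the 1D discretization (1.5). The grid size, speeds, and boundary data below are illustrative stand-ins rather than values from the thesis.

```python
import math

# Illustrative 1D problem data (not from the thesis): 101 nodes, two
# boundary nodes with exit penalties, and a direction-dependent speed.
N, h = 101, 0.1
boundary = {20: 3.0, 80: 1.0}           # node index -> boundary value g
def f(i, a):                            # speed f(x, a): a = -1 is toward tail
    return 1.0 if a == -1 else 2.0

v = [math.inf] * N
for i, g in boundary.items():
    v[i] = g

def update(i):
    """Apply the update (1.7) at interior node i."""
    if i in boundary:
        return                          # boundary values stay fixed
    for a in (-1, +1):
        j = i + a
        if 0 <= j < N:
            v[i] = min(v[i], v[j] + h / f(i, a))

for i in range(N):                      # sweep left to right
    update(i)
for i in range(N - 1, -1, -1):          # sweep right to left
    update(i)
# v now solves (1.5): one pair of sweeps suffices in 1D.
```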
Another efficient algorithm is a Dijkstra-like method (see Algorithm 1). It keeps a set H which is initialized to X. The node x ∈ H with minimal v(x) is removed from H and its neighbors are updated using (1.7). The algorithm terminates with v = ū after all nodes in H have been removed. It performs O(N) node value updates.

We note that the latter two methods are much more efficient than simple Gauss-Seidel iteration. This advantage also holds for more difficult practical problems. Sweeping and Dijkstra-like methods are compared in Section 2.4. In this thesis we use Dijkstra-like methods, which are described in detail in Section 3.3. It is important to note that Dijkstra-like methods can only be used if the discrete equation is causal.

One can verify that for this example the unique solution to (1.5) is actually ū(x) = u(x) for x ∈ X, as shown in Figure 1.4. In fact, the error in the approximation of u by ū shown in the figure is solely due to the linear approximation of u between grid nodes. In typical problems additional errors would be introduced by approximation of the boundary condition, the inhomogeneous speed function f(x, a), and the gradient of the solution. We require the approximate solution ū to converge to u as h approaches zero. This convergence appears to occur in Figure 1.4 for this problem. We examine convergence in detail in Section 3.2, but here we note that certain properties of the discrete equation (1.5a) are sufficient for convergence: namely, consistency, monotonicity, and stability [8].

Recall that the original problem was to decide in which direction to travel to escape the airplane in minimal time from a particular location x. An optimal direction of travel a*(x) for x ∈ Ω is given by

$$a^*(x) \in \operatorname*{argmax}_{a \in \{-1,+1\}} \left[ -\frac{du(x)}{dx}\, a\, f(x, a) \right].$$

If du(x)/dx exists and is negative, going forward is optimal. If du(x)/dx exists and is positive, going backward is optimal. If du(x)/dx does not exist (e.g., at x̂), moving in either direction is optimal. The mapping a* : Ω → {−1, +1} forms a feedback controller (or policy) that results in optimal trajectories from any initial state. An optimal trajectory ζ : [0, T] → Ω̄ from some initial position x₀ ∈ Ω is then defined by the ordinary differential equation

$$\frac{d\zeta(t)}{dt} = a^*(\zeta(t))\, f(\zeta(t), a^*(\zeta(t))),$$

where ζ(0) = x₀ and t = T is the minimum time at which ζ(t) ∈ ∂Ω (i.e., the trajectory exits the domain Ω). In practice, we compute a direction of travel (and trajectory) using the approximate function ū in place of u.
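To close the example, the sketch below implements the Dijkstra-like method (Algorithm 1) for the same illustrative 1D problem used in the sweeping sketch, with Python's binary heap standing in for the set H. Lazily skipping stale heap entries substitutes for a decrease-key operation; this is an implementation convenience, not part of the thesis's algorithm statement.

```python
import heapq, math

# Same illustrative 1D problem data as the sweeping sketch above.
N, h = 101, 0.1
boundary = {20: 3.0, 80: 1.0}
def f(i, a):
    return 1.0 if a == -1 else 2.0

v = [math.inf] * N
for i, g in boundary.items():
    v[i] = g

# H is realized as a heap of (v(x), x) pairs, initialized to all of X.
heap = [(v[i], i) for i in range(N)]
heapq.heapify(heap)
removed = [False] * N

while heap:
    val, i = heapq.heappop(heap)
    if removed[i] or val > v[i]:
        continue                        # skip stale heap entries
    removed[i] = True                   # v(i) can no longer decrease
    for a in (-1, +1):                  # update neighbors of i via (1.7)
        j = i + a
        if 0 <= j < N and not removed[j] and j not in boundary:
            cand = v[i] + h / f(j, -a)  # from j, direction -a reaches i
            if cand < v[j]:
                v[j] = cand
                heapq.heappush(heap, (cand, j))
# v now solves (1.5); nodes were removed in nondecreasing order of value,
# which is exactly why causality of the discretization is required.
```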
1.3 Static Hamilton-Jacobi Equation

We characterize the category of problems we wish to solve. The problem described in the above section is a simple example of this type. The Dirichlet problem for a static HJ PDE is to find a function u such that

$$H(x, Du(x)) = 0, \quad x \in \Omega \qquad (1.8a)$$
$$u(x) = g(x), \quad x \in \partial\Omega, \qquad (1.8b)$$

where Ω ⊂ R^d is a Lipschitz domain,¹ ∂Ω is the domain's boundary, H : Ω × R^d → R is the Hamiltonian function, Du(x) is the gradient of u at x, and g : ∂Ω → R specifies the Dirichlet boundary condition. Let Ω̄ = Ω ∪ ∂Ω be the closure of Ω.

An optimal continuous control problem which attempts to minimize the time to reach the boundary leads to the following Hamiltonian H in (1.8a) [60]:

$$H(x, q) = \max_{a \in A} \left[ (-q \cdot a)\, f(x, a) \right] - 1, \qquad (1.9)$$

where A = {a ∈ R^d | ‖a‖ = 1} is the set of unit vector controls, a ∈ A is a control specifying direction of travel, x ∈ Ω is a state, f : Ω × A → R⁺ is a continuous function providing the finite positive speed of travel from each state x in each direction a, and g : ∂Ω → R gives the exit time penalty at each boundary state x. Note that f is positive, which means that small-time-controllability is assumed. For all x ∈ Ω, let the speed profile

$$A_f(x) = \{ t\, a\, f(x, a) \mid a \in A \text{ and } t \in \mathbb{R} \text{ such that } 0 \le t \le 1 \} \qquad (1.10)$$

be a closed convex set.² Because f is positive, A_f(x) contains the origin in its interior.

In an isotropic problem, f(x, a) is independent of a for all x, i.e., A_f(x) is a hypersphere with the origin at its center. In such a problem, the maximum in (1.9) is attained at a = −q/‖q‖₂, where −q · a = ‖q‖₂, so the Hamiltonian H reduces to

$$H(x, q) = \|q\|_2\, f(x) - 1 \qquad (1.11)$$

and (1.8) becomes the Eikonal equation:

$$\|Du(x)\|_2 = \frac{1}{f(x)}, \quad x \in \Omega \qquad (1.12a)$$
$$u(x) = g(x), \quad x \in \partial\Omega. \qquad (1.12b)$$

In an anisotropic problem, f(x, a) depends on a, i.e., A_f(x) is a closed non-spherical but convex set. Since not all Hamiltonians H fit the control-theoretic formulation, we define these terms more generally: a problem is isotropic if the set of q solving H(x, q) = 0 is the surface of an origin-centered hypersphere, and anisotropic otherwise. The method in Chapter 5 assumes H has the form (1.9) and A_f(x) is a closed convex set. For the method in Chapter 4, H satisfies several assumptions listed in Section 4.2.

In general, it is impossible to find a classical solution u to the static Hamilton-Jacobi PDE (1.8) where u is differentiable for all x. We seek instead the viscosity solution (see the definition in Section 3.2.2), a unique weak solution which under the above assumptions is continuous and almost everywhere differentiable [19]. Since we typically cannot solve for the viscosity solution analytically, we compute an approximate solution ū using a grid with nodes forming both a discretization Ω of Ω and a discretization ∂Ω of ∂Ω. For example, in Chapter 4 we use an orthogonal grid like that shown in Figure 1.5, and in Chapter 5 we use a simplicial grid like that shown on the left side of Figure 1.2. We note that a simplicial grid is significantly more difficult to implement than an orthogonal grid, but it allows for nonuniform refinement in the grid to better resolve certain features of the solution, including non-trivial domain boundaries. Limited nonuniformity in orthogonal grids can be used to better resolve boundaries, as shown in Figure 1.5.

Figure 1.5 (panels (a), (b)): Orthogonal grids combining discretizations Ω and ∂Ω. (a) Boundary conditions are given on the exterior of Ω. (b) Boundary conditions are given on the interior of Ω.

¹ A Lipschitz domain is an open bounded set whose boundary can be thought of locally as the graph of a Lipschitz continuous function.

² Results based on a convex speed profile apply without loss of generality to nonconvex speed profiles as long as measurable (and not continuous) controls are assumed. A nonconvex speed profile can be replaced by its convex hull because a control ã can be imitated by alternating between two controls for which ã lies on their geodesic.

1.4 Ordered Upwind Methods

In order to use viscosity solutions of HJ PDEs for optimal control, we must develop efficient methods of computing them. An idea for such a method comes from their interpretation as the first arrival time of a propagating wavefront. A level contour of the solution is the position of the wavefront at a specific time. An intuitive and efficient method of approximating the solution on a grid is to compute the solution values at grid nodes in the order in which the wavefront passes through the grid nodes. The solution value at a particular grid node is based on values of neighboring grid nodes that are smaller, in the same way that the time at which the wavefront crosses any particular point is dependent on the earlier times the wavefront crosses nearby points in the direction from which it emanates.
Dijkstra-like methods were developed in [55, 64] to approximate the solution to an isotropic static HJ PDE, also known as the Eikonal equation, in a single pass through the nodes of a grid in order of nondecreasing solution value. Ordered Upwind Methods (OUMs) [59, 60] are an extension of these Dijkstra-like methods that approximate the solution to static convex HJ PDEs in a single pass, but not exactly in order of nondecreasing solution value.

1.5 Contributions

Dijkstra-like methods require causal discretizations [60], for which the solution value at a node is independent of any solution value at another node that is greater than or equal to it. Dijkstra-like methods have been used to solve isotropic problems and some anisotropic problems. In this thesis we extend Dijkstra-like methods to solve anisotropic static HJ PDEs in two ways.

In the first extension, we analyze and implement a Dijkstra-like method for solving a class of axis-aligned anisotropic HJ PDEs on orthogonal grids. This extension is described in [4]. Our method is essentially the same as the Dijkstra-like Fast Marching Method (FMM) for isotropic problems [55], but we prove that it is applicable to all axis-aligned anisotropic HJ PDEs. Osher's fast marching criterion [47, 62] states that FMM can be used for a similar class of axis-aligned problems. Our criterion restricts Osher's criterion to rule out multiple solutions and loosens it to allow for nondifferentiable Hamiltonians. We prove the convergence properties of the resulting discretization, discuss implementation issues in detail, and report on experimental results of applying our method to problems such as two-robot coordinated optimal control (e.g., Figure 1.1) and seismic anelliptic wavefront propagation [28].

The second extension is a Dijkstra-like method for solving general convex static HJ PDEs. This extension is described in the paper submission [5]. Our method is distinct from the OUM described in [59, 60], which solves the same class of HJ PDEs. That method keeps track of an accepted front that defines the boundary of nodes which have computed values thus far, and is not strictly a Dijkstra-like method since the discretization is not necessarily solved in order of nondecreasing solution value. For that method the stencils are dynamically created using only nodes from the accepted front, whereas for our method the stencils are generated in an additional initial pass, resulting in a causal discretization that can be solved in a Dijkstra-like manner. Our method has the benefit that the stencil width depends only on the local grid spacing, while for the OUM in [59, 60] the stencil width depends on the global grid spacing. We propose and analyze several criteria to ensure that the stencil is causal and use these criteria to develop the initial pass that generates the stencils. Although we develop only a Dijkstra-like method, our criteria are generalized so that they can be used to build Dial-like methods, which solve the discrete equation simultaneously for buckets of nodes with similar values and may potentially be more easily implemented in parallel [23, 64, 65]. We test the Dijkstra-like method on several applications, including optimally navigating a robot to a goal through obstacles and a strong wind (see Figure 1.2), and computing the first arrival time for a seismic wave with elliptical but inhomogeneous propagation speed (see Figure 1.6).

Figure 1.6: Contours of first-arrival times of a seismic wave.
1.6 Outline

Chapter 2 surveys related work to put our research into context. We discuss work from broader areas such as dynamic programming, optimal control, and viscosity solutions. We narrow in on our subject by describing work that has been done on general numerical methods for time-dependent and static HJ PDEs, and in particular, single-pass methods for static HJ PDEs.

Chapter 3 details elements of our framework common to both algorithms for formulating and solving static HJ PDEs in a Dijkstra-like manner. We define the viscosity solution of a static HJ PDE that we wish to compute. We specify a format for the discretization and what types of properties are required so that the solution to the approximate problem converges to the viscosity solution as the grid is refined. Also, we point out what it means for the discretization to be causal. The chapter includes a definition of the basic Dijkstra-like method and a proof that it solves a causal discretized problem.

The analysis, implementation, and experimental testing of the Dijkstra-like method for solving axis-aligned anisotropic HJ PDEs is included in Chapter 4. That of the Dijkstra-like method with pre-computed stencil for solving general convex static HJ PDEs is in Chapter 5. In formulating the discretization and the algorithm, proving convergence, and proving that the algorithm solves the discretized system of equations, we attempt to follow the framework outlined in Chapter 3 as closely as is practical. In Chapter 5, we diverge slightly from the framework but the proofs still apply.

Chapter 6 ties together principles and observations from developing and testing the two Dijkstra-like methods and plots a course for developing these ideas further.

Chapter 2
Related Work

2.1 Viscosity Solutions of Hamilton-Jacobi Equations

The Dirichlet boundary-value problem for a static HJ PDE is to find a function u : Ω → R satisfying (1.8), while the Cauchy initial-value problem for a time-dependent HJ PDE is to find a function u : Ω × R → R such that
In [8] it is proved that if a discretized scheme is monotone, stable, and consistent with a general second-order HJ PDE, the discretized solution will converge to the viscosity solution of the HJ PDE. The static first-order HJ PDEs we consider are a subset of the second-order HJ PDEs to which this theory applies. Our schemes, based on finite differences, mantain these properties so that con- vergence is assured (see Section 3.2). Time-dependent HJ PDEs have the nice property that the solution at time t is dependent only on the solution at previous times. For this reason, a natural method for approximating the viscosity solution of a time-dependent HJ PDE is to discretize the PDE in space and time and march forward in time, calculating the approximate solution at one time step based on the solution at the previous time step, as is done in [18]. In [46] a connection is drawn between static and time-dependent HJ PDEs, by observing that one can solve a static HJ PDE by using the level set method to formulate and solve a related time-dependent HJ PDE. However, we focus on methods that solve static HJ PDEs directly because they are generally much more efficient computationally. There are several different types of spatial discretizations that have been used to approximate the solution of HJ PDEs, both time-dependent and static. 19 Because of their simplicity, finite difference schemes have been very pop- ular. Semi-Lagrangian schemes, in the category of finite difference schemes, directly discretize Bellman’s dynamic programming principal and are stud- ied in [26, 27, 63]. Eulerian finite difference schemes, which discretize the HJ PDE by numerically approximating the solution gradient, are used to solve static HJ PDEs in [53, 55, 70]. In [60] the equivalence between a particular semi-Lagragian scheme and a first-order upwind Eulerian finite difference scheme is proved. Essentially non-oscillatory (ENO) and weighted ENO (WENO) are finite difference schemes that can produce higher-order accurate approximations of the solution where it is smooth while avoiding spurious oscillation at kinks in the solution, places where the solution is not differentiable. For solving time- dependent HJ PDEs, an ENO scheme is used in [48] and a WENO scheme is used in [35]. WENO schemes have been used to solve static HJ PDEs iteratively in [66, 68]. These methods were tested on several examples to demonstrate higher-order accurate convergence. However, convergence was not proven, possibly because the schemes are not monotone which makes the proof difficult. Furthermore, the solution of a first-order discretization is used as an initial guess to make convergence of the iterated method to the higher-order accurate solution more likely and faster. A finite element discretization is used in [11] to solve static HJ PDEs. Discontinuous Galerkin (DG) methods have been used to solve a static HJ PDE with forcing term in [15] and to solve time-dependent HJ PDEs in [16, 33]. Second-order DG methods are developed to solve Eikonal equations in [42, 69]. In our research we use mainly first-order accurate semi-Lagrangian and Eulerian finite difference schemes because of their relative simplicity and guaranteed monotonicity and causality properties. In Chapter 5, we use grid refinement instead of higher-order accurate approximations to compute accurate solutions efficiently. 
2.3 Dynamic Programming and Optimal Control

The classic book [9] introduced the ideas of a multi-stage decision process and the principle of optimality, popularizing the dynamic programming method. The optimal control problems we consider satisfy a continuous-time, continuous-state principle of optimality, which can be stated as a Hamilton-Jacobi-Bellman PDE [7, 25] with H as defined in (1.9). Dijkstra's Algorithm for finding the cost of the shortest paths on a graph was first described in [24]. Dijkstra's Algorithm is a dynamic programming method, and Dijkstra-like methods such as the Fast Marching Method (FMM) and Ordered Upwind Methods (OUMs) in general can be seen as modifications to the algorithm to approximate the cost of minimal-cost paths in a continuous space. We use Dijkstra-like methods to solve HJ PDEs. A modern reference [10] examines general dynamic programming algorithms for the purpose of solving discrete-time operations research problems and continuous-time optimal control problems.

2.4 Methods for Static Hamilton-Jacobi Equations

Our research looks at fast methods for directly solving static first-order HJ PDEs. Fast methods can be contrasted with slow methods like naive Gauss-Seidel iteration, which have asymptotic complexity O(N²). However, fast methods are not appropriate for solving second-order HJ PDEs, which do not possess characteristics. Two main categories of methods for these problems are ordered upwind (or single-pass, or label-setting) methods and sweeping (or iterative, or label-correcting) methods. Our research focuses on methods in the first category. In particular, we create Dijkstra-like OUMs that construct the approximation in nondecreasing value order.

2.4.1 Fast Marching Methods

The first Dijkstra-like method for a first-order accurate semi-Lagrangian discretization of the isotropic Eikonal PDE on an orthogonal grid was developed in [63]. The Dijkstra-like FMM was later independently developed in [55] for the first-order accurate upwind Eulerian finite-difference discretization of the same Eikonal PDE. FMM was then extended to handle higher-order accurate upwind discretizations on grids and unstructured meshes in R^d and on manifolds [39, 57, 58].

In [56] it was shown that Dijkstra's method on a uniform orthogonal grid produces the solution for the anisotropic maximum-norm Eikonal equation. By solving an isotropic problem on a manifold and then projecting the solution into a subspace, FMM can solve certain anisotropic problems [58]. For example, if H is defined by (1.9) with a constant elliptical speed profile A_f(x) = A_f, then (1.8) can be solved by running isotropic FMM on an appropriately tilted planar manifold and then projecting away one dimension. Some anisotropic etching problems have also been solved using FMM [47].

The fact that correct operation of Dijkstra-like algorithms for approximating the Eikonal PDE requires the causality property, that u(x) can be written only in terms of smaller-valued u at neighboring nodes, was stated in [63], but a reader might incorrectly infer from further comments in that paper that such algorithms would not work for any unstructured grid or anisotropic problem. That FMM is applicable for any consistent, causal, finite-difference discretization of a general static convex HJ PDE on an orthogonal grid is stated in [56].
However, it is now understood that this criterion applies even more generally, since a Dijkstra-like method can be used to efficiently solve on a graph any nonlinear system of equations for which u(x) is dependent only on smaller-valued u at neighboring nodes. A sufficient criterion (see Section 4.2.1) under which FMM can be used for orthogonal, finite-difference discretizations of static HJ PDEs, now commonly referred to as "Osher's criterion", is widely attributed to an unpublished work by Osher & Helmsen, but the earliest published description seems to be [47]. While it is stronger than the causality conditions described earlier, it is useful because it is stated as a condition on the analytic Hamiltonian instead of the equations created by the discretization. In this thesis, we likewise seek conditions under which FMM is applicable that are closer to the problem's definition than the algorithm's implementation.

A Dijkstra-like method with a semi-Lagrangian update scheme for isotropic equations was developed in [21]. The semi-Lagrangian update is distinct from that in [63], as the time step for the integration is fixed. Numerical tests indicate that the algorithm computes more accurate solutions than FMM with a finite difference scheme, but it is marginally more costly due to an expanded node neighborhood.

Dijkstra-like methods such as those described above have an asymptotic complexity of O(N log N), because each of N nodes is visited and a priority queue ordered according to node value is used to determine the next node to visit in O(log N) operations (see Section 3.3.1). There are modified versions of FMM that claim to have O(N) asymptotic complexity. The Group Marching Method in [38] advances a group of grid points at a time, rather than sorting the node values to march forward a single node as is done in a Dijkstra-like method. An untidy priority queue (i.e., a bucket sort) is used in [67] so that the next node to visit can be determined in constant time. The authors argue that the error introduced by the untidy priority queue will not be significant considering the discretization error. In [52] it is proven that a bucket width for this untidy priority queue can be chosen so that the error introduced is independent of the speed function f and the computational complexity is O((f̂/f̌) log N), where f̂ is the maximum speed and f̌ the minimum speed. Both [38, 67] demonstrate their respective methods outperforming FMM on selected problems, but neither gives rigorous theoretical guarantees of this result for any class of problems.
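As a concrete illustration of the untidy-priority-queue idea, here is a generic bucket queue sketched in Python. It is not the specific data structure of [67] or the bucket-width analysis of [52]; it conveys only the essential mechanism: keys falling in the same bucket of width delta are returned in FIFO order, so values that differ by less than delta may be popped out of order, which is the source of the bounded error discussed above.

```python
from collections import deque

class BucketQueue:
    """Approximate min-priority queue: keys within one bucket of width
    `delta` are treated as equal, so each pop is O(1) amortized."""

    def __init__(self, delta):
        self.delta = delta
        self.buckets = {}        # bucket index -> deque of (key, item)
        self.lowest = None       # smallest candidate bucket index

    def push(self, key, item):
        b = int(key / self.delta)
        self.buckets.setdefault(b, deque()).append((key, item))
        if self.lowest is None or b < self.lowest:
            self.lowest = b

    def pop(self):
        while self.buckets:
            q = self.buckets.get(self.lowest)
            if q:
                key, item = q.popleft()
                if not q:
                    del self.buckets[self.lowest]
                return key, item
            self.lowest += 1     # walk forward to the next nonempty bucket
        raise IndexError("pop from empty BucketQueue")

bq = BucketQueue(delta=0.05)
bq.push(0.12, "a"); bq.push(0.03, "b"); bq.push(0.14, "c")
print([bq.pop()[1] for _ in range(3)])  # 'b' first; 'a' and 'c' share a bucket
```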
2.4.2 Ordered Upwind Methods

The OUMs developed in [59, 60] can solve general convex anisotropic problems on unstructured grids with an asymptotic complexity only a constant factor (related to the degree of anisotropy) worse than FMM. FMM fails for these general problems because the neighboring simplex from which the characteristic approaches a node x may contain another node y such that causality does not hold: u(x) ≤ u(y). These OUMs avoid this difficulty by searching along the accepted front to find a set of neighboring nodes (which may not be direct neighbors of x) whose values have been accepted, and then constructing a virtual simplex with these nodes from which to update u(x). Although this search along the active front does not degrade the asymptotic complexity, it does significantly increase the computational cost in practice. This effect can be partially mitigated by using nontrivial data structures, such as 2d-trees, to speed up the search along the active front.

2.4.3 Sweeping Methods

An alternative to these single-pass algorithms is the family of sweeping algorithms, which are often even simpler to implement than FMM. Sweeping algorithms are also capable of handling anisotropic and even nonconvex problems. The simplest sweeping algorithm is to just iterate through the grid updating each node in a Gauss-Seidel (GS) fashion (so a new value for a node is used immediately in subsequent updates) until u converges. GS converges quickly if the node update order is aligned with the characteristics of the solution, so better sweeping algorithms [12, 22, 29, 36, 50, 51, 62, 70] alternate among a collection of static node orderings so that all possible characteristic directions will align with at least one ordering. It is argued in [70] that these methods achieve O(N) asymptotic complexity (assuming that the node orderings are already determined); however, unlike OUMs the constant depends on the particular problem. For practical grid resolutions on Eikonal problems with curved characteristics, FMM outperforms sweeping methods despite the difference in asymptotic complexity [30, 34]. A method for improving the efficiency of sweeping methods by skipping over a node during a sweep, when a value update is guaranteed not to change the value, is described in [6].

There are also some sweeping algorithms which use dynamic node orderings; for example [6, 11, 49]. These algorithms attempt to approximate the optimal ordering generated by OUMs such as FMM without the overhead associated with sorting values exactly. These methods have been demonstrated in practice to be comparable to or better than OUMs for certain problems and grid resolutions. However, in general these methods may need to revisit nodes multiple times, because they do not compute the solution in an order that exploits node value dependencies.

The algorithm described in [20] combines aspects of OUMs and sweeping methods. It computes solutions iteratively for a propagating band of grid nodes, but determines periodically which band nodes' values can no longer change and removes them from the band. This method is found to be more efficient than the Fast Sweeping method of [62, 70] for a problem with highly-curved characteristics but less efficient for a highly-anisotropic problem with straight characteristics. The asymptotic complexity of this method is not analyzed.

2.5 Applications of Static Hamilton-Jacobi Equations

The solution of an HJ PDE can be used in many contexts, including etching/deposition/lithography, seismic imaging, image processing, data visualization, computer vision, and optimal control. We list some applications of the solution to Eikonal and to non-Eikonal HJ PDEs. This is not a comprehensive list and is merely intended to demonstrate the wide applicability of HJ PDEs.

The first-arrival time of a front, which is the solution to an HJ PDE, can be computed in order to solve some etching/deposition/lithography and seismic imaging problems. A semiconductor deposition problem above a trench was solved in [55], and it was noted that other etching/deposition/lithography problems could be solved using the Eikonal equation. Non-Eikonal HJ PDE solutions were applied to some anisotropic etching problems in [47]. An anisotropic seismic imaging problem with elliptic speed profile was solved in [59].
Some image processing and data visualization problems have been solved using Eikonal equations. Euclidean distance maps, which are the solutions to Eikonal equations, were computed and used to generate skeletons for image processing in [22]. The unorganized point data obtained from a 3-dimensional scanner was visualized in [70] using a computation on a distance function which is the solution to an Eikonal equation.

In [13] it was proven that an Eikonal equation could be formulated and solved to determine a surface from a shaded image (i.e., shape-from-shading) given certain light source and reflectance properties. Note that not all shape-from-shading problems can be addressed by solving Eikonal equations; [32] discusses techniques for shape-from-shading in greater generality. The solution of a first-order HJ PDE was used to solve a shape-from-shading problem in [53].

The Eikonal equation was solved to find optimal trajectories for isotropic control problems in [63]. This equation was used to solve several path planning problems, including computing geodesics on a manifold, in [40].

Figure 2.1 (panels (a)-(c)): Motion of two robots in a 2D world. The light robot is constrained to a circular path, while the dark robot may move anywhere within the obstacle-free space, where obstacles are shown in black. The light robot's goal is on the right, and the dark robot's goal is on the left. Arrows show direction of movement, and are placed near goal states.

Anisotropic HJ PDE solutions have been used to solve a multi-robot coordinated optimal control problem in [1] (see Figure 2.1). This application helped to motivate the extension of FMM to axis-aligned HJ PDEs described in Chapter 4. A sequence of HJ PDE problems can be solved to determine meeting locations and connecting trajectories for an optimal multi-robot multi-location rendezvous problem, as described in [2]. An example of such a problem, where three robot arms cooperate to transport cargo as quickly as possible, is shown in Figure 2.2. The methods we describe in Chapters 4 and 5 can be used to solve the individual HJ PDE problems in the sequence.

Figure 2.2 (panels (a)-(d)): Robot arms cooperating to optimally transport cargo from a pickup to a dropoff location. The sequence of figures is a time series with (a) showing the beginning of the robot arm motions and (d) showing the completion of the motions. The starting area for the end effector of each robot arm is indicated by a large circle. The square box on the top-right/top-left is the pickup/dropoff location for the cargo. A small circle at the end effector of a moving robot arm indicates that the robot is currently carrying the cargo. (a) Robot 1 moves from its starting location to the cargo pickup area. (b) Robot 1 moves from the cargo pickup box to meet robot 2, while robot 2 moves from its starting location to meet robot 1. (c) Robot 2 moves from its meeting with robot 1 to meet robot 3, while robot 3 moves from its starting location to meet robot 2. (d) Robot 3 moves from its meeting with robot 2 to the cargo dropoff area.

Chapter 3
Framework

This chapter serves to unite the Dijkstra-like methods described in Chapters 4 and 5 under a single framework. Both methods involve devising for a particular grid a discrete system of equations that approximates the static HJ PDE (1.8).
Whether or not we are using a Dijkstra-like method, we require our approximate solution computed on a grid to converge to the continuous viscosity solution of (1.8) as the grid spacing shrinks towards zero. We prove in Section 3.2 that consistency, monotonicity, and stability of the discrete equations implies convergence of the approximate solution [8]. For a Dijkstra-like method to be applied to solve the discrete equation, we require it to be causal [55, 60, 64]. We prove that a Dijkstra-like method solves such causal systems in Section 3.3. We present the proof of convergence and that of applicability of Dijkstra-like methods here for completeness.

3.1 Discretization

We denote the grid $\mathcal{G}$. The grid may be an orthogonal grid like that used in Chapter 4 and shown in Figure 1.5, or a simplicial grid like that used in Chapter 5 and shown on the left side of Figure 1.2. Take the discretized domain $\Omega$ and the discretized boundary $\partial\Omega$ to be disjoint sets of grid nodes, and let $\mathcal{X} = \Omega \cup \partial\Omega$ be the set of all grid nodes. Let $\mathcal{N}(x)$ be the set of grid neighbors (i.e., the one-ring neighborhood) of node $x \in \mathcal{X}$. Let $\hat{h}_\mathcal{G}$ be the maximum grid spacing of $\mathcal{G}$:
$$\hat{h}_\mathcal{G} = \max_{x \in \Omega,\ y \in \mathcal{N}(x)} \|x - y\|.$$

Let $\overrightarrow{\mathcal{Y}}_\mathcal{G}(x) \subset \mathcal{X}$ denote the numerical stencil of node $x$ for grid $\mathcal{G}$. For example, $\overrightarrow{\mathcal{Y}}_\mathcal{G}(x) = \mathcal{N}(x)$ for standard FMM and our extension in Chapter 4. However, the method in Chapter 5 may require $\overrightarrow{\mathcal{Y}}_\mathcal{G}(x) \supset \mathcal{N}(x)$. For $x \in \partial\Omega$, $\overrightarrow{\mathcal{Y}}_\mathcal{G}(x) = \emptyset$. Let $\overleftarrow{\mathcal{Y}}_\mathcal{G}(x) = \{y \in \Omega \mid x \in \overrightarrow{\mathcal{Y}}_\mathcal{G}(y)\}$ be the set of nodes which have $x$ in their stencil. Let
$$\hat{r}_\mathcal{G}(x) = \max_{y \in \overrightarrow{\mathcal{Y}}_\mathcal{G}(x)} \|x - y\|$$
denote the numerical stencil radius of node $x$ for grid $\mathcal{G}$.

Let $H(x, \mathcal{Y}, \phi, \mu) \in \mathbb{R}$ be the numerical Hamiltonian function with parameters $x \in \Omega$, stencil $\mathcal{Y} \subset \Omega$, $\phi : \Omega \to \mathbb{R}$, and $\mu \in \mathbb{R}$. The approximate Dirichlet problem corresponding to (1.8) is to find a function $u^\mathcal{G} : \Omega \to \mathbb{R}$ such that
$$H(x, \overrightarrow{\mathcal{Y}}_\mathcal{G}(x), u^\mathcal{G}, u^\mathcal{G}(x)) = 0, \quad x \in \Omega, \qquad (3.1a)$$
$$u^\mathcal{G}(x) = g(x), \quad x \in \partial\Omega, \qquad (3.1b)$$
and $u^\mathcal{G}(x)$ satisfies a non-expansive interpolation for $x \in \Omega \setminus \mathcal{X}$ (by non-expansive interpolation we mean that the interpolated values are bounded by the data values, as is the case for linear interpolation). For example, $H$ could be a Godunov upwind finite difference scheme [53], as used in Chapter 4, or, for $H$ of form (1.9), a semi-Lagrangian scheme [59], as is used in Chapter 5.

Excluding the boundary condition (3.1b) and interpolation, we have a system of $|\Omega|$ nonlinear equations of the form (3.1a), one for each node $x \in \Omega$. We wish to compute the $|\Omega|$ values $u^\mathcal{G}(x)$, one for each node $x \in \Omega$. We may omit $\mathcal{G}$ and write $u = u^\mathcal{G}$, $\overrightarrow{\mathcal{Y}}(x) = \overrightarrow{\mathcal{Y}}_\mathcal{G}(x)$, and $\overleftarrow{\mathcal{Y}}(x) = \overleftarrow{\mathcal{Y}}_\mathcal{G}(x)$ when no ambiguity results.

3.2 Convergence

For linear PDEs, [41] showed that for a consistent scheme, stability is necessary and sufficient for convergence. For nonlinear HJ PDEs, a numerical scheme which is consistent, monotone, and stable implies that the solution $u^\mathcal{G}$ of (3.1) converges to the viscosity solution $u$ of (1.8) as the spacing between grid nodes goes to zero [8]. The monotonicity condition for the scheme is analogous to the comparison principle for the PDE (see Assumption 3.6). This relationship, in the case of finite-difference schemes for degenerate elliptic PDEs (a superset of the first-order HJ PDEs we study), is examined in [45]. We follow the technique described in [8] to prove convergence in this section. We note that this proof technique applies to the more general set of degenerate elliptic PDEs. However, despite its generality it is a remarkably simple proof method, so we employ it here.
We define consistency, monotonicity, and stability in Properties 3.2 to 3.4 and use these in the proof. Then, to show the convergence of the schemes in Chapter 4 and Chapter 5, we need only show that Properties 3.2 to 3.4 or their equivalents are satisfied.

3.2.1 Consistency, Monotonicity, and Stability

The following property states that the numerical stencil radius goes to zero at the same rate as the grid spacing.

Property 3.1. Let $\hat{r}_\mathcal{G}(x) = O(\hat{h}_\mathcal{G})$ for all $x \in \Omega$.

Roughly speaking, the numerical scheme is consistent if the numerical Hamiltonian $H(x, \mathcal{Y}, \phi, \mu)$ approaches the Hamiltonian $H(x, D\phi(x))$ as $\hat{h}_\mathcal{G} \to 0$. Let $C_b^\infty(\Omega)$ be the set of smooth, bounded functions on domain $\Omega$.

Property 3.2 (Consistency). Let Property 3.1 hold. Let $x \in \Omega$ and $\phi \in C_b^\infty(\Omega)$. For sequences $\mathcal{G}_k$, $x_k \in \Omega_k$, and $\xi_k$, such that $x_k \to x$, $\hat{h}_{\mathcal{G}_k} \to 0$, and $\xi_k \to 0$ as $k \to \infty$,
$$\lim_{k\to\infty} H(x_k, \mathcal{Y}_k, \phi + \xi_k, \phi(x_k) + \xi_k) = H(x, D\phi(x)), \qquad (3.2)$$
where $\mathcal{Y}_k = \overrightarrow{\mathcal{Y}}_{\mathcal{G}_k}(x_k)$.

The numerical scheme is monotone if an increase in the numerical Hamiltonian $H$ implies a decrease in the function $\phi$, all other quantities being equal. The monotonicity property below is stated as the contrapositive of this description.

Property 3.3 (Monotonicity). Let $x \in \Omega$ and $\mu \in \mathbb{R}$. Let functions $\check\phi : \Omega \to \mathbb{R}$ and $\hat\phi : \Omega \to \mathbb{R}$ be such that $\check\phi(y) \le \hat\phi(y)$ for all $y \in \mathcal{Y}$. Then, $H(x, \mathcal{Y}, \check\phi, \mu) \ge H(x, \mathcal{Y}, \hat\phi, \mu)$.

Typically, stability of a numerical scheme serves to limit the extent to which the numerical scheme amplifies perturbations in the boundary conditions. We use an alternative definition of stability, which is the same as that used in [8] and convenient for proving convergence. The numerical scheme is stable if for every grid $\mathcal{G}$ the solution $u^\mathcal{G}$ to (3.1) is bounded independent of the grid spacing $\hat{h}_\mathcal{G}$.

Property 3.4 (Stability). There exists $\hat{U} \in \mathbb{R}$ such that for any $\mathcal{G}$, the solution $u^\mathcal{G}$ to (3.1) satisfies $\min_{x\in\partial\Omega} g(x) \le u^\mathcal{G} \le \hat{U}$.

3.2.2 Viscosity Solution and Proof of Convergence

We show that $u^\mathcal{G}$ converges to the viscosity solution $u$ of (1.8) as $\hat{h}_\mathcal{G} \to 0$. For the proof, we assume that Properties 3.1 to 3.4 hold.

We first define what it means for a function $u$ to be a viscosity solution. An upper (respectively, lower) semi-continuous function $u$ is a viscosity subsolution (respectively, supersolution) of (1.8) if for all $\phi \in C_b^\infty(\Omega)$ such that $u - \phi$ has a local maximum (respectively, minimum) at $x \in \Omega$ we have
$$H(x, D\phi(x)) \le 0 \quad (\text{respectively, } H(x, D\phi(x)) \ge 0). \qquad (3.3)$$
A continuous function $u$ is a viscosity solution if it is both a viscosity subsolution and a viscosity supersolution [19].

We proceed to show in Propositions 3.5 and 3.7 that $u^\mathcal{G}$ converges to the viscosity solution as defined above. By Property 3.4 (i.e., stability), $u^\mathcal{G}$ is bounded. So, we define $\hat{u} \in B(\Omega)$ and $\check{u} \in B(\Omega)$ as
$$\hat{u}(x) = \limsup_{\substack{y \to x \\ \hat{h}_\mathcal{G} \to 0}} u^\mathcal{G}(y) \quad \text{and} \quad \check{u}(x) = \liminf_{\substack{y \to x \\ \hat{h}_\mathcal{G} \to 0}} u^\mathcal{G}(y). \qquad (3.4)$$
Note that $\hat{u}$ is upper semi-continuous and $\check{u}$ is lower semi-continuous.

Proposition 3.5. $\hat{u}$ is a viscosity subsolution of (1.8) and $\check{u}$ is a viscosity supersolution of (1.8).

Proof. We prove only that $\hat{u}$ is a viscosity subsolution of (1.8), as the second part of the proposition can be proved symmetrically. Let $\hat{x} \in \Omega$ be a local maximum of $\hat{u} - \phi$ for some $\phi \in C_b^\infty(\Omega)$. Without loss of generality, assume $\hat{x}$ is a strict local maximum [25, p. 542] and $(\hat{u} - \phi)(\hat{x}) = 0$. Then, there exist sequences $\mathcal{G}_k$ and $x_k \in \Omega_k$ such that as $k \to \infty$, $\hat{h}_{\mathcal{G}_k} \to 0$, $x_k \to \hat{x}$, $u^{\mathcal{G}_k}(x_k) \to \hat{u}(\hat{x})$, and
$$(u^{\mathcal{G}_k} - \phi)(x_k) \ge (u^{\mathcal{G}_k} - \phi)(y) \quad \text{for all } y \in \mathcal{Y}_k,$$
where $\mathcal{Y}_k = \overrightarrow{\mathcal{Y}}_{\mathcal{G}_k}(x_k)$. Let $\xi_k = (u^{\mathcal{G}_k} - \phi)(x_k)$. We have $\xi_k \to 0$ and
$$\phi(y) + \xi_k \ge u^{\mathcal{G}_k}(y), \quad \text{for all } y \in \mathcal{Y}_k.$$
Consequently, by Property 3.3 (i.e., monotonicity) and the definition of $u^{\mathcal{G}_k}$ we have
$$H(x_k, \mathcal{Y}_k, \phi + \xi_k, \phi(x_k) + \xi_k) = H(x_k, \mathcal{Y}_k, \phi + \xi_k, u^{\mathcal{G}_k}(x_k)) \le H(x_k, \mathcal{Y}_k, u^{\mathcal{G}_k}, u^{\mathcal{G}_k}(x_k)) = 0.$$
Take the limit as $k \to \infty$, and use Property 3.2 (i.e., consistency) to get
$$0 \ge \lim_{k\to\infty} H(x_k, \mathcal{Y}_k, \phi + \xi_k, \phi(x_k) + \xi_k) = H(x, D\phi(x)).$$
Therefore, $\hat{u}$ is a viscosity subsolution. $\square$

If $H$ is continuous, (1.8) satisfies the comparison principle below [7]. This comparison principle roughly states that solutions of (1.8) are monotone in the boundary values $g$, and it is used in the proof of Proposition 3.7 below to show that $u^\mathcal{G}$ converges to $u$, which is unique.

Assumption 3.6 (Comparison Principle). For any bounded upper semi-continuous $u^*$ and bounded lower semi-continuous $u_*$, which are a viscosity subsolution and supersolution, respectively, of (1.8), such that $u^*(x) \le u_*(x)$ for all $x \in \partial\Omega$, we have $u^*(x) \le u_*(x)$ for all $x \in \Omega$.

Proposition 3.7. Let $H$ be continuous. The function $\hat{u} = \check{u} = u$ is the unique viscosity solution of (1.8). As $\hat{h}_\mathcal{G} \to 0$, $u^\mathcal{G}$ converges uniformly to $u$.

Proof. By Proposition 3.5, $\hat{u}$ is an upper semi-continuous viscosity subsolution of (1.8) and $\check{u}$ is a lower semi-continuous viscosity supersolution of (1.8). It follows by the comparison principle that $\hat{u} \le \check{u}$. But $\check{u} \le \hat{u}$ by (3.4), so $u = \hat{u} = \check{u}$ is a viscosity solution of (1.8). Again by the comparison principle, $u$ must be the unique viscosity solution of (1.8). Therefore, by (3.4), $u^\mathcal{G}$ converges uniformly to $u$ as $\hat{h}_\mathcal{G} \to 0$. $\square$

3.3 Algorithm

We note that (3.1) defines a potentially very large system of nonlinear equations, one equation for each node $x \in \Omega$. A Dijkstra-like method can be used to solve this system very efficiently if, for all $x$, the solution value $u(x)$ is dependent only on nodes in $\overrightarrow{\mathcal{Y}}(x)$ with smaller values. This property represents a causal relationship between node values: there is an information flow from nodes with smaller values to those with larger values. The causal relationship is meant to mimic that of the PDE (1.8), in which the solution $u$ of (1.8) at $x$ is completely defined using only values of $u$ from states that are backwards along the characteristic line that passes through $x$, states whose $u$ values must be less than $u(x)$.

We show that if the numerical scheme is causal, (3.1) can be solved using a Dijkstra-like algorithm in a single pass through the nodes in $\Omega$ in order of increasing value $u^\mathcal{G}$. Our argument is essentially the same as those in [55, 60, 64]. We require that the discrete equation has a unique solution.

Property 3.8 (Unique Solution). Let $x \in \Omega$ and $\phi : \Omega \to \mathbb{R}$. There exists a unique solution $\mu = \tilde\mu$ to $H(x, \mathcal{Y}, \phi, \mu) = 0$.

Causality of the numerical scheme means that if the solution $\mu = \tilde\mu$ to $H(x, \mathcal{Y}, \phi, \mu) = 0$ depends on the value $\phi(y)$ for $y \in \mathcal{Y}$, then $\tilde\mu > \phi(y)$. Causality is used in the proof of Proposition 3.13, which shows that the Dijkstra-like algorithm solves (3.1).

Property 3.9 (Causality). Let Property 3.8 hold. Let $\mu = \tilde\mu$ be the unique solution to $H(x, \mathcal{Y}, \phi, \mu) = 0$. Let $\mathcal{Y}^- = \{y \in \mathcal{Y} \mid \phi(y) < \tilde\mu\}$. Then $\mu = \tilde\mu$ is also the unique solution to $H(x, \mathcal{Y}^-, \phi, \mu) = 0$.

The following corollary of Property 3.9 states that if adding a node $y$ to the stencil causes the solution $\mu = \tilde\mu$ to $H(x, \mathcal{Y}, \phi, \mu) = 0$ to decrease, then the decreased solution must still be greater than the value $\phi(y)$. This corollary is more convenient than Property 3.9 to use in the proof of Lemma 3.12.
Corollary 3.10. Let Property 3.9 be satisfied. Let $x, y \in \Omega$ and $\phi : \Omega \to \mathbb{R}$. Let $\mu = \tilde\mu$ be the unique solution to $H(x, \mathcal{Y}, \phi, \mu) = 0$ and $\mu = \tilde\mu_y$ be the unique solution to $H(x, \mathcal{Y} \cup \{y\}, \phi, \mu) = 0$. If $\tilde\mu_y < \tilde\mu$, then $\tilde\mu_y > \phi(y)$.

Proof. We prove the contrapositive of the last statement instead of the statement itself: if $\phi(y) \ge \tilde\mu$, then $\tilde\mu_y \ge \tilde\mu$. Let $\phi(y) \ge \tilde\mu$. Then,
$$\mathcal{Y}^- = \{\tilde{y} \in \mathcal{Y} \mid \phi(\tilde{y}) < \tilde\mu\} = \{\tilde{y} \in \mathcal{Y} \cup \{y\} \mid \phi(\tilde{y}) < \tilde\mu\}.$$
It follows by Properties 3.8 and 3.9 that $\mu = \tilde\mu$ is the unique solution to all of $H(x, \mathcal{Y}, \phi, \mu) = 0$, $H(x, \mathcal{Y}^-, \phi, \mu) = 0$, and $H(x, \mathcal{Y} \cup \{y\}, \phi, \mu) = 0$. So, $\tilde\mu = \tilde\mu_y$, which implies $\tilde\mu_y \ge \tilde\mu$. $\square$

3.3.1 Dijkstra-like Methods

Algorithm 1 outlines a standard Dijkstra-like method [60] for solving (3.1) in a single pass through the grid nodes. The algorithm can become Dijkstra's algorithm or FMM depending on the choice of the Update function. Consider, for example, the Update function in the context of optimal control, where we are computing the minimal cost over all possible paths. For Dijkstra's algorithm, Update computes $v(y)$ as a simple minimization over the neighboring nodes of $y$ of the path costs to $y$ via each neighbor. For FMM, the Update function computes $v(y)$ as a minimization over the neighboring simplices of $y$ of the minimum path costs to $y$ via each simplex.

The Update function must satisfy a causality property in order for Algorithm 1 to terminate with a correct solution. Update must compute a node value $v(y)$ based only on information from neighboring nodes with smaller values, so that $u$ is computed in increasing order of $u(y)$ [55, 63]. In Dijkstra's algorithm and FMM for a standard Euclidean-norm Eikonal equation on an orthogonal grid, this property is automatic.

1   foreach $x \in \Omega$ do $v(x) \leftarrow \infty$
2   foreach $x \in \partial\Omega$ do $v(x) \leftarrow g(x)$
3   $\mathcal{H} \leftarrow \mathcal{X}$
4   while $\mathcal{H} \neq \emptyset$ do
5       $x \leftarrow \arg\min_{y\in\mathcal{H}} v(y)$
6       $\mathcal{H} \leftarrow \mathcal{H} \setminus \{x\}$
7       foreach $y \in \overleftarrow{\mathcal{Y}}(x) \cap \mathcal{H}$ do
8           $v(y) \leftarrow$ Update($y$)
9       end
10  end

Algorithm 1: Standard Dijkstra-like method

The algorithm begins by initializing the value $v(x)$ of each node $x$. If $x$ is a boundary node, then $v(x)$ is initialized to $g(x)$; otherwise it is initialized to $\infty$. The set $\mathcal{H}$ of considered nodes initially includes all nodes in $\mathcal{X}$. At the beginning of each while loop iteration, the node $x \in \mathcal{H}$ with the smallest value is accepted: it is removed from $\mathcal{H}$, and the values of all of the considered (i.e., still in $\mathcal{H}$) nodes in $\overleftarrow{\mathcal{Y}}(x)$ are updated using only the values of accepted nodes, including $x$. The while loop is repeated until $\mathcal{H}$ is empty or, in other words, all nodes have been accepted. The function call Update($y$) returns the solution $\mu = \tilde\mu$ to
$$H(y, \overrightarrow{\mathcal{Y}}(y) \setminus \mathcal{H}, v, \mu) = 0.$$

While the Update function in Algorithm 1 is determined by the underlying equation which we seek to solve, it is assumed that its execution time is independent of grid resolution and hence does not affect the algorithm's asymptotic complexity. Dijkstra-like algorithms are usually described as being $O(N \log N)$, where $N = |\mathcal{X}|$ is the number of grid nodes. This complexity is derived by noting that each node is removed from $\mathcal{H}$ once, and in the usual binary min-heap implementation of $\mathcal{H}$ the extraction of the minimum value node in line 5 costs $O(\log|\mathcal{H}|) \le O(\log N)$. Note that the heap $\mathcal{H}$ need only sort considered nodes with finite values, and typically the number of such nodes is much less than $N$.

Dijkstra-like methods are greedy algorithms: in Algorithm 1 the next node to remove from $\mathcal{H}$ is chosen greedily as the node in $\mathcal{H}$ with minimum value. This greedy selection of the next node yields a globally-correct method because of the causality of the discretization. It also yields a very efficient method because the value of each node in the grid is updated only a few times (no more than the number of neighboring nodes). The number of updates is in $O(N)$, and it is not possible to compute the solution to (3.1) using asymptotically fewer updates.
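To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch of the single-pass solve. It assumes the discretization supplies a back_stencil map (the $\overleftarrow{\mathcal{Y}}$ of the text) and an update callable that solves the Update equation; the lazy-deletion heap idiom below stands in for a priority queue with a decrease-key operation, and all identifiers are our own rather than part of any cited implementation.

import heapq
import math

def dijkstra_like(nodes, boundary, g, back_stencil, update):
    """Single-pass solve of the causal discretization (3.1).

    nodes: iterable of all grid nodes X (hashable).
    boundary: set of boundary nodes with Dirichlet data g(x).
    back_stencil(x): nodes y having x in their stencil.
    update(y, v, accepted): solves H(y, Y(y) minus H, v, mu) = 0 for mu.
    """
    v = {x: math.inf for x in nodes}
    for x in boundary:
        v[x] = g(x)
    accepted = set()
    heap = [(v[x], x) for x in nodes if v[x] < math.inf]
    heapq.heapify(heap)
    while heap:
        val, x = heapq.heappop(heap)
        if x in accepted or val > v[x]:
            continue  # stale heap entry; a smaller value was pushed later
        accepted.add(x)  # x is accepted with its final value v(x)
        for y in back_stencil(x):
            if y in accepted:
                continue
            mu = update(y, v, accepted)
            if mu < v[y]:
                v[y] = mu
                heapq.heappush(heap, (mu, y))  # lazy decrease-key
    return v

Taking update to be the single-neighbor path-cost minimization described above makes this driver exactly Dijkstra's algorithm; a simplex-based update makes it FMM. Nodes never reached by the front simply retain the value infinity.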
3.3.2 Solution of Discretization

We prove that Algorithm 1 finds the solution to (3.1) by showing that when it terminates the node values $v$ satisfy (3.1). We begin by proving a lemma to show that Algorithm 1 terminates. Let a subscript $i \ge 1$ represent the state of a variable at the beginning of the $i$th iteration of the while loop in Algorithm 1. For example, $\mathcal{H}_i$ is the state of $\mathcal{H}$ at the beginning of iteration $i$. From Lines 5 and 6, the node accepted in iteration $i$ is $x_i = \arg\min_{y\in\mathcal{H}_i} v_i(y)$, and $\mathcal{H}_{i+1} = \mathcal{H}_i \setminus \{x_i\}$.

Lemma 3.11. Let $N = |\mathcal{X}|$ be finite. Let $\overleftarrow{M} = \max_{x\in\Omega} |\overleftarrow{\mathcal{Y}}(x)|$. If Update always terminates, then Algorithm 1 terminates after $N$ iterations of the while loop and at most $N\overleftarrow{M}$ calls to Update.

Proof. By Line 3 of Algorithm 1, $|\mathcal{H}_1| = N$. By Lines 5 and 6, $|\mathcal{H}_{i+1}| = |\mathcal{H}_i| - 1$. Thus, by the while loop condition, there are $N$ iterations of the while loop. For every iteration of the while loop, there are at most $\overleftarrow{M}$ iterations of the foreach loop, each of which calls Update once. Therefore, Algorithm 1 terminates after at most $N\overleftarrow{M}$ calls to Update. $\square$

The following lemma states that the nodes are accepted in nondecreasing value order. Let $v : \mathcal{X} \to \mathbb{R}$ be the grid function after Algorithm 1 terminates.

Lemma 3.12. Let Property 3.8 hold. Let Property 3.9 hold for all $x \in \mathcal{X}$ and $\mathcal{Y} = \overrightarrow{\mathcal{Y}}(x)$. For $1 \le i < j \le N$, we have $v(x_i) \le v(x_j)$.

Proof. By Lemma 3.11, Algorithm 1 terminates. Note that once a node $x_i$ is accepted, its value $v(x_i) = v_i(x_i)$ is fixed. Assume $v(x_i) \le v(x_{i+1})$ for $1 \le i < N$. By induction on $i$, we have $v(x_i) \le v(x_j)$ for $1 \le i < j \le N$. We proceed by proving the inductive step.

Let $i$ be such that $1 \le i < N$. Let $y \in \mathcal{H}_{i+1}$. By Line 5, we have $v(x_i) \le v_i(y)$. First, consider the case $y \notin \overleftarrow{\mathcal{Y}}(x_i)$. Such $y$ are not updated in the $i$th iteration of the while loop, so $v_{i+1}(y) = v_i(y) \ge v(x_i)$. Second, consider the case $y \in \overleftarrow{\mathcal{Y}}(x_i)$. In this case, by Line 8 and the definition of Update, $\mu = v_{i+1}(y)$ is the unique solution to $H(y, \overrightarrow{\mathcal{Y}}(y) \setminus \mathcal{H}_{i+1}, v, \mu) = 0$. Because $y$ is updated whenever a new node is added to $\overrightarrow{\mathcal{Y}}(y) \setminus \mathcal{H}$, we have $\overrightarrow{\mathcal{Y}}(y) \setminus \mathcal{H}_{i+1} = \tilde{\mathcal{Y}} \cup \{x_i\}$, where $\tilde{\mathcal{Y}}$ was the stencil last used to update $y$. By Corollary 3.10, if $v_{i+1}(y) < v_i(y)$, then $v_{i+1}(y) > v(x_i)$. So, for all $y \in \mathcal{H}_{i+1}$, $v_{i+1}(y) \ge v(x_i)$. By Line 5, $v(x_{i+1}) = \min_{y\in\mathcal{H}_{i+1}} v_{i+1}(y) \ge v(x_i)$, which proves the inductive step. $\square$

Proposition 3.13. Let Property 3.8 hold. Let Property 3.9 hold for all $x \in \mathcal{X}$ and $\mathcal{Y} = \overrightarrow{\mathcal{Y}}(x)$. Then $u = v$ is the unique solution of (3.1).

Proof. By Lemma 3.11, Algorithm 1 terminates. Let $x \in \mathcal{X}$. First, consider the case $x \in \partial\Omega$. After executing Line 2, $v_1(x) = g(x)$. Because $\overrightarrow{\mathcal{Y}}(x) = \emptyset$, $x \notin \overleftarrow{\mathcal{Y}}(y)$ for any $y \in \mathcal{X}$. By the foreach loop condition, $x$ is never updated, and after Algorithm 1 terminates, $v(x) = v_1(x) = g(x)$. Thus, $u(x) = v(x)$ is the unique solution to (3.1b). Second, consider the case $x \in \Omega$. By Lemma 3.12, Line 8, and the definition of Update, $\mu = v(x)$ is the unique solution to $H(x, \mathcal{Y}^-, v, \mu) = 0$, where $\mathcal{Y}^- = \{y \in \overrightarrow{\mathcal{Y}}(x) \mid v(y) < v(x)\}$. By Property 3.9, $\mu = v(x)$ is also the unique solution to $H(x, \overrightarrow{\mathcal{Y}}(x), v, \mu) = 0$, which is (3.1a). $\square$
Chapter 4

FMM for Axis-Aligned HJ PDEs

The material in this chapter is based on [4].

4.1 Introduction

The Fast Marching Method (FMM) [55, 64] has become a popular algorithm for solving the Dirichlet problem for an isotropic static HJ PDE $\|Du(x)\|_2 = c(x)$, also known as the Eikonal equation. While the isotropic case is the most common, there are applications which require the solution of anisotropic HJ PDEs. Unfortunately, FMM produces a correct approximation only under certain causality conditions on the values of nodes and their neighbors. This limitation has motivated the development of a more generally applicable version of FMM called Ordered Upwind Methods (OUMs) [59], as well as several works on sweeping methods, such as [36, 50, 62]. However, OUMs are much more complex to implement than FMM, and sweeping methods can be much less efficient for problems with curved characteristics and practical grid sizes [30, 34].

Consequently, we have motivation to seek classes of anisotropic problems to which FMM might still be applied. One such class of problems was identified in [58] and includes the Eikonal equation where an energy norm replaces the standard Euclidean norm (i.e., an elliptical speed profile replaces a spherical one). In [1] we identified another such class of problems. Because its characteristics are minimum time paths to the boundary, the Eikonal equation has often been proposed for robotic path planning; for example, see [40]. However, for some robots using the Euclidean norm in this equation is inappropriate. Consider a robot arm, where each joint has its own motor. If each motor can rotate at some maximum speed independent of the action of the other motors, then the action of the whole arm is best bounded in an appropriately-scaled infinity norm. The corresponding Eikonal equation should use the dual Manhattan norm and is thus anisotropic. Other scenarios where such problems arise were considered in [1], and experimental evidence suggested that FMM would be successful on these problems. As a group, the anisotropy in these problems is axis-aligned.

In this chapter we describe a broader class of such axis-aligned problems (Section 4.2) and demonstrate that FMM can be applied to approximate their solution on axis-aligned orthogonal grids without modification of the algorithm beyond the local update function for a single node (Section 4.3). In Section 4.4 we propose some methods by which the local update's efficiency might be improved even if the grid and/or PDE lack symmetry. In Section 4.5 we provide analytic update formulas for the Eikonal equation with the $p = 1$, $2$, and $\infty$ norms on variably-spaced orthogonal grids in any dimension. Lastly, the examples in Section 4.6 include an anelliptic wave propagation problem [28] and a new multi-robot scenario.

Accurate robotic path planning is only required in cluttered environments, where optimal paths, and hence the characteristics of the HJ PDE, are not straight. No alternative algorithm yet proposed approaches the simple implementation and guaranteed speed of FMM for these types of problems. Consequently, we set out in this chapter to characterize another class of anisotropic HJ PDEs for which FMM will work, and also to explore their efficient implementation. It should be noted that the update procedures discussed in this chapter can be applied to any of the sweeping algorithms without modification.

4.1.1 The Fast Marching Method

We compute an approximate solution $u^\mathcal{G}$ on an axis-aligned orthogonal grid $\mathcal{G}$; for example, see Figure 1.5. We allow any axis-aligned orthogonal grid, including those with node spacing that varies between dimensions and within a single dimension; the latter capability makes it easier to manage an irregular boundary accurately [30].
It is important that the orthogonal grid and the Hamiltonian $H$ are aligned to the same axes. What it means for $H$ to be aligned to an axis is formalized in Section 4.2. Whenever we refer to a simplex of a node $x$, we mean a simplex specified by $d$ neighbors of $x$, each in a distinct dimension. Since we are restricted to orthogonal grids, each simplex of $x$ corresponds to a particular orthant.

A major contribution of this chapter is to demonstrate that, for a class of static HJ PDEs with axis-aligned anisotropy, an Update function that is consistent with the PDE and satisfies the causality property can be defined; thus, FMM can be used. Because we restrict our modifications of Algorithm 1 to the Update function, all of the results here can be used with other versions of FMM; for example, the $O(N)$ algorithm described in [67], which uses an untidy priority queue for $\mathcal{H}$ to reduce the cost of ExtractMin and hence of the whole algorithm.

4.2 Class of Hamiltonians

FMM can be extended to handle a class of axis-aligned anisotropic problems defined by a Hamiltonian $H$ that satisfies Assumptions 4.3 to 4.6. We let $q, \tilde{q} \in \mathbb{R}^d$ and make the definitions:

Definition 4.1. Write $q \unrhd \tilde{q}$ if $q_j \tilde{q}_j \ge 0$ and $|q_j| \ge |\tilde{q}_j|$, for $1 \le j \le d$.

Definition 4.2. Write $q \rhd \tilde{q}$ if (i) $q \ne 0$ and (ii) for $1 \le j \le d$, either $q_j \tilde{q}_j \ge 0$ and $|q_j| > |\tilde{q}_j|$, or $q_j = \tilde{q}_j = 0$.

The following assumptions are satisfied by $H$:

Assumption 4.3 (Continuity). $H \in C(\Omega \times \mathbb{R}^d)$.

Assumption 4.4 (Coercivity). For all $x \in \Omega$, $H(x,q) \to \infty$ as $\|q\| \to \infty$.

Assumption 4.5 (Strict Compatibility). For all $x \in \Omega$, $H(x,0) < 0$.

Assumption 4.6 (Strict One-Sided Monotonicity). For all $x \in \Omega$, if $q \rhd \tilde{q}$, then $H(x,q) > H(x,\tilde{q})$.

We note that Assumptions 4.3, 4.4, and 4.5 are similar to some assumed properties of the Hamiltonian in [11]. We usually consider a fixed $x \in \Omega$ and may write $H(q) = H(x,q)$ whenever no ambiguity results. When we discuss properties of $H$, they are in reference to the $q$ parameter. The source of the axis-aligned description of the problem class is the strict one-sided monotonicity property of $H$.

4.2.1 Connection to Osher's Criterion

Although there are earlier statements of the conditions on node values under which a Dijkstra-like algorithm can or cannot be used to solve the problem [55, 63], in this section we outline the connection between the properties described above and Osher's criterion [47], because the latter directly provides a condition on the Hamiltonian rather than on the solution values. In Section 4.3.2, we make the connection between Assumptions 4.3 to 4.6 and the earlier conditions.

Osher's fast marching criterion is defined in [47, 62] as
$$q_j \frac{\partial H(x,q)}{\partial q_j} \ge 0 \quad \text{for } 1 \le j \le d.$$
The authors state there that as long as this criterion is satisfied, a simple fast marching algorithm based on a one-sided upwind finite-difference discretization can be applied to solve the problem. However, we use Assumptions 4.3 to 4.6 instead of Osher's criterion because Osher's criterion requires $H$ to be differentiable so that $D_q H(x,q)$ exists, but we are interested in potentially nondifferentiable $H$ (e.g., see Section 4.2.2). Note that strict one-sided monotonicity is applicable even when $D_q H(x,q)$ does not exist for all $q$.
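Neither Assumption 4.6 nor Osher's criterion is always easy to verify by hand, but a randomized numerical check is straightforward to script. The sketch below is our own illustration, not part of [47]: it samples pairs $q \rhd \tilde{q}$ and tests $H(q) > H(\tilde{q})$, so it can expose a violation of strict one-sided monotonicity, although passing the test proves nothing.

import numpy as np

def check_strict_one_sided_monotonicity(H, d, trials=10000, seed=0):
    """Randomized test of Assumption 4.6 for a Hamiltonian H(q) on R^d.

    Draws pairs with q > q_t in the one-sided sense (componentwise same
    sign and strictly larger magnitude, or both zero) and checks that
    H(q) > H(q_t). Returns (False, witness) on the first failure.
    """
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        q_t = rng.normal(size=d) * (rng.random(size=d) < 0.8)  # some zeros
        # Grow each nonzero component strictly away from zero.
        q = q_t * rng.uniform(1.1, 2.0, size=d)
        if not np.any(q):
            continue  # Definition 4.2 requires q != 0
        if not H(q) > H(q_t):
            return False, (q, q_t)
    return True, None

# Example: the Euclidean Eikonal Hamiltonian H(q) = ||q||_2 - 1 passes.
ok, witness = check_strict_one_sided_monotonicity(
    lambda q: np.linalg.norm(q) - 1.0, d=3)
print(ok)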
Propositions 4.8, 4.9, and 4.11 explain the relationship between strict one-sided monotonicity of $H$ (Assumption 4.6) and Osher's criterion. Proposition 4.8 shows that Assumption 4.6 implies one-sided monotonicity (Property 4.7). Then, Proposition 4.9 shows that Property 4.7 is the same as Osher's criterion as long as $H$ is differentiable. Finally, Proposition 4.11 demonstrates that Property 4.7, with the addition of one-sided homogeneity (Property 4.10), implies Assumption 4.6.

Property 4.7 (One-Sided Monotonicity). For all $x \in \Omega$, if $q \unrhd \tilde{q}$, then $H(x,q) \ge H(x,\tilde{q})$.

Proposition 4.8. Let $H$ be continuous (Assumption 4.3). Then strict one-sided monotonicity of $H$ (Assumption 4.6) implies one-sided monotonicity of $H$ (Property 4.7).

Proof. Let $H$ be strictly one-sided monotone. Let $q, \tilde{q} \in \mathbb{R}^d$ be such that $q \unrhd \tilde{q}$. Let $\kappa \in \{-1, 1\}^d$ be such that
$$\kappa_j = \begin{cases} +1, & \text{if } q_j \ge 0, \\ -1, & \text{otherwise,} \end{cases}$$
and let $\epsilon > 0$. Note that $q + \epsilon\kappa \rhd \tilde{q}$ and thus we have $H(q + \epsilon\kappa) > H(\tilde{q})$. Therefore, by the continuity of $H$, we have
$$H(q) = \lim_{\epsilon\to 0^+} H(q + \epsilon\kappa) \ge H(\tilde{q}). \qquad \square$$

Proposition 4.9. Let $H$ be continuous (Assumption 4.3) and let $D_q H(q)$ exist for all $q \in \mathbb{R}^d$. Then the following conditions on $H$ are equivalent.

(a) $q_j \dfrac{\partial H(q)}{\partial q_j} \ge 0$, for all $j$ and $q \in \mathbb{R}^d$.

(b) $H$ is one-sided monotone (Property 4.7).

Proof. We begin by proving that (a) implies (b). Let $q, \tilde{q} \in \mathbb{R}^d$ be such that $q \unrhd \tilde{q}$. If $q = \tilde{q}$, then $H(q) = H(\tilde{q})$. Otherwise, define the function $\bar{q} : [0,1] \to \mathbb{R}^d$ such that $\bar{q}(t) = \tilde{q} + t(q - \tilde{q})$ represents the line segment between $\tilde{q}$ and $q$, parameterized by $t \in [0,1]$. Because $q \unrhd \tilde{q}$, we have $(q - \tilde{q})_j \bar{q}_j(t) \ge 0$ for $1 \le j \le d$ and for $t \in [0,1]$. Thus, by condition (a) we have
$$(q - \tilde{q})_j \frac{\partial H(\bar{q}(t))}{\partial q_j} \ge 0, \qquad (4.1)$$
for $1 \le j \le d$ and for $t \in [0,1]$. We know that
$$H(q) = H(\tilde{q}) + \int_0^1 \frac{d\bar{q}(t)}{dt} \cdot D_q H(\bar{q}(t))\, dt = H(\tilde{q}) + \int_0^1 (q - \tilde{q}) \cdot D_q H(\bar{q}(t))\, dt = H(\tilde{q}) + \int_0^1 \sum_{j=1}^d (q - \tilde{q})_j \frac{\partial H(\bar{q}(t))}{\partial q_j}\, dt \ge H(\tilde{q}).$$
The first equality follows from integrating the change in $H$ along the line segment connecting $\tilde{q}$ and $q$. The second equality holds because the derivative $d\bar{q}(t)/dt$ is simply the vector $q - \tilde{q}$. The third equality breaks up the vector dot product into a sum of scalar products. The inequality results from (4.1) and the fact that an integral of a nonnegative function is nonnegative. Thus, for all $q, \tilde{q}$ such that $q \unrhd \tilde{q}$, including $q = \tilde{q}$, we have $H(q) \ge H(\tilde{q})$.

We now prove that (b) implies (a). Let $q \in \mathbb{R}^d$ and $1 \le j \le d$. Define the function $\sigma : \mathbb{R} \to \{-1, +1\}$ such that
$$\sigma(y) = \begin{cases} +1, & \text{if } y \ge 0, \\ -1, & \text{otherwise,} \end{cases}$$
let $\epsilon > 0$, and let $e_j$ be the $j$th vector in the standard basis. Note that $q + \sigma(q_j)\epsilon e_j \unrhd q$, and thus by (b) we have $H(q + \sigma(q_j)\epsilon e_j) - H(q) \ge 0$. Consequently, by the existence of $D_q H(q)$ for all $q \in \mathbb{R}^d$, we have
$$q_j \frac{\partial H(q)}{\partial q_j} = \lim_{\epsilon\to 0^+} q_j \frac{H(q + \sigma(q_j)\epsilon e_j) - H(q)}{\sigma(q_j)\epsilon} \ge 0. \qquad \square$$

The following property is used to state Proposition 4.11.

Property 4.10 (One-Sided Homogeneity). For all $x \in \Omega$, $t \ge 0$, and $q \in \mathbb{R}^d$: $H(x, tq) - H(x, 0) = t(H(x,q) - H(x,0))$.

Proposition 4.11. Let $H$ satisfy Assumptions 4.3, 4.4, and 4.5, and let $H$ be one-sided monotone (Property 4.7) and one-sided homogeneous (Property 4.10). Then $H$ is strictly one-sided monotone (Assumption 4.6).

Proof. Let $q \rhd \tilde{q}$. Then $q \unrhd \tilde{q}$ and $H(q) \ge H(\tilde{q})$ by one-sided monotonicity. First consider the case $\tilde{q} = 0$. Assume $H(q) = H(\tilde{q}) = H(0)$. By the one-sided homogeneity of $H$,
$$\lim_{t\to\infty} [H(tq) - H(0)] = \lim_{t\to\infty} [t(H(q) - H(0))] = 0.$$
But by the coercivity of $H$,
$$\lim_{t\to\infty} [H(tq) - H(0)] = \infty,$$
since $\lim_{t\to\infty} \|tq\| = \infty$ and, by strict compatibility, $H(0) < 0$. Thus we have a contradiction, and it must be that $H(q) > H(\tilde{q})$.

Second, consider the case where $\tilde{q} \ne 0$. Let $J = \{j \mid |q_j| > |\tilde{q}_j|\}$. By Definition 4.2, we have $J \ne \emptyset$, and since $\tilde{q} \ne 0$ there is some $j \in J$ such that $\tilde{q}_j \ne 0$. Define a scalar multiple of $q$:
$$\check{q} = tq = \left( \max_{j\in J} \frac{|\tilde{q}_j|}{|q_j|} \right) q.$$
Since $|q_j| > |\tilde{q}_j|$ for all $j \in J$, we have $0 < t < 1$.
Furthermore, for $j \in J$,
$$|\check{q}_j| = \left( \max_{j'\in J} \frac{|\tilde{q}_{j'}|}{|q_{j'}|} \right) |q_j| \ge |\tilde{q}_j|,$$
while for $j \notin J$, $\check{q}_j = t q_j = 0 = \tilde{q}_j$. Consequently, $|\check{q}_j| \ge |\tilde{q}_j|$ for $1 \le j \le d$. Also, since $t > 0$, we have $\check{q}_j \tilde{q}_j = t q_j \tilde{q}_j \ge 0$ for $1 \le j \le d$. This implies, by one-sided monotonicity of $H$, that $H(\check{q}) \ge H(\tilde{q})$. Moreover, by one-sided homogeneity of $H$,
$$H(\check{q}) - H(0) = H(tq) - H(0) = t(H(q) - H(0)).$$
It follows that
$$H(q) - H(0) = (H(\check{q}) - H(0))/t > H(\check{q}) - H(0),$$
since $0 < t < 1$ and, because $\check{q} \rhd 0$, the first case gives $H(\check{q}) > H(0)$. Therefore, $H(q) > H(\check{q}) \ge H(\tilde{q})$. $\square$

We impose strict one-sided monotonicity on $H$ because it guarantees a unique solution to a first-order upwind finite-difference discretization of (1.8a), as shown in Section 4.3.1. Simply imposing one-sided monotonicity on $H$, or Osher's criterion on differentiable $H$, is not sufficient for a unique solution. However, Proposition 4.11 states that when $H$ satisfies one-sided homogeneity in addition to one-sided monotonicity, then it also satisfies strict one-sided monotonicity and there is a unique solution to the discretization. Moreover, by Propositions 4.9 and 4.11, when differentiable $H$ satisfies one-sided homogeneity in addition to Osher's criterion, then $H$ also satisfies strict one-sided monotonicity and there is a unique solution to the discretization. Note that there exist conditions other than one-sided homogeneity, such as strict convexity, that in combination with Osher's criterion result in strict one-sided monotonicity of $H$.

4.2.2 Example Hamiltonians

A Hamiltonian $H$ that satisfies Assumptions 4.3 to 4.6 encompasses a fairly broad range of anisotropic problems. We consider examples of such $H$. In particular, we look at the case
$$H(x,q) = G(x,q) - 1, \qquad (4.2)$$
where $G$ is a $p$-norm or some variant, such as a linearly-transformed $p$-norm or a mixed $p$-norm, defined below. We must ensure that $G$ is strictly one-sided monotone, which is not true of all norms.

Figure 4.1: Contour plots of $\|q\|_p$ for (a) $p = 1$, (b) $p = 2$, and (c) $p = \infty$.

The $p$-norm is a useful category of strictly one-sided monotone norms. Let a $p$-norm, $\|\cdot\|_p$, be defined by
$$\|q\|_p = \left( \sum_{j=1}^d |q_j|^p \right)^{1/p},$$
where $p \ge 1$. Commonly used $p$-norms, illustrated in Figure 4.1, are the Manhattan norm ($p = 1$), the Euclidean norm ($p = 2$), and the maximum norm ($p = \infty$).

Proposition 4.12. $\|\cdot\|_p$ is strictly one-sided monotone, for $p \ge 1$.

Proof. Let $q \rhd \tilde{q}$. We know that $|q_j| \ge |\tilde{q}_j| \ge 0$ for all $j$ such that $1 \le j \le d$. Furthermore, $|q_j| > |\tilde{q}_j|$ for at least one $j$ such that $1 \le j \le d$.

First, consider finite $p \ge 1$. By the properties of $x^p$, we have $|q_j|^p \ge |\tilde{q}_j|^p \ge 0$ for all $j$. Furthermore, $|q_j|^p > |\tilde{q}_j|^p$ for at least one $j$. This, in turn, implies
$$\sum_{j=1}^d |q_j|^p > \sum_{j=1}^d |\tilde{q}_j|^p \ge 0.$$
It follows that
$$\|q\|_p = \left( \sum_{j=1}^d |q_j|^p \right)^{1/p} > \left( \sum_{j=1}^d |\tilde{q}_j|^p \right)^{1/p} = \|\tilde{q}\|_p.$$
Consider the case $p = \infty$ separately:
$$\|q\|_\infty = \lim_{p\to\infty} \left( \sum_{j=1}^d |q_j|^p \right)^{1/p} = \max_{1\le j\le d} |q_j|.$$
We have $\max_{1\le j\le d} |q_j| > \max_{1\le j\le d} |\tilde{q}_j|$. Therefore, for both the case where $p \ge 1$ is finite and the case $p = \infty$, $\|\cdot\|_p$ is strictly one-sided monotone. $\square$

Define a linearly-transformed $p$-norm, $\|\cdot\|_{B,p}$, to be $\|q\|_{B,p} = \|Bq\|_p$, where $p \ge 1$ and $B$ is a nonsingular $d \times d$ matrix. Note that $B$ must be nonsingular so that $\|\cdot\|_{B,p}$ satisfies the properties of a norm, such as definiteness and homogeneity. Such a norm is not strictly one-sided monotone in general. Figure 4.2(a) shows a simple example where a vector is rotated by $-\pi/4$ and scaled by 3 in the $q_2$-axis before the Euclidean norm is taken; i.e.,
$$B = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} \cos(-\pi/4) & -\sin(-\pi/4) \\ \sin(-\pi/4) & \cos(-\pi/4) \end{bmatrix} = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ -3/\sqrt{2} & 3/\sqrt{2} \end{bmatrix}. \qquad (4.3)$$
Let $q = (2,2)^T$ and $\tilde{q} = (\sqrt{2}, 0)^T$. We have $q \unrhd \tilde{q}$, but
$$\|Bq\|_2 = \|(2\sqrt{2}, 0)^T\|_2 = \sqrt{8} < \sqrt{10} = \|(1,-3)^T\|_2 = \|B\tilde{q}\|_2.$$

Figure 4.2: Contour plots of $\|Bq\|_p$. (a) is not strictly one-sided monotone: $p = 2$ and $B$ is defined by (4.3). (b) is strictly one-sided monotone: $p = 1$ and $B$ scales by 2 in the $q_1$-axis.

Consequently, this particular linearly-transformed $p$-norm is not strictly one-sided monotone. However, in this case an inverse transformation $B^{-1}$ of the grid coordinates will result in a strictly one-sided monotone $p$-norm, while maintaining the grid's orthogonality. More generally, we conjecture that if the Hamiltonian is of the form $H(q) = \tilde{H}(Bq)$, where $B$ is a rotation (which may be followed by scaling) and $\tilde{H}$ satisfies Assumptions 4.3 to 4.6, a transformation of the grid coordinates by $B^{-1}$ will result in a transformed $H$ that also satisfies Assumptions 4.3 to 4.6, while maintaining the grid's orthogonality. More complex coordinate modifications might be possible, but we have not yet adequately investigated conditions or procedures.

A scaled $p$-norm (Figure 4.2(b)) is a special case of a linearly-transformed $p$-norm. Such a norm scales the components of its argument before applying a $p$-norm, by restricting $B$ to be a nonsingular diagonal matrix. Considering Proposition 4.12, it is simple to show that a scaled $p$-norm is strictly one-sided monotone.

A mixed $p$-norm is a nested composition of $p$-norms, and it is strictly one-sided monotone. The following is an example (Figure 4.3(a)) of a mixed $p$-norm that takes the Euclidean norm of the first two components and then takes the Manhattan norm of the result and the last component:
$$\|q\| = \|(\|(q_1, q_2)\|_2,\ q_3)\|_1 = \sqrt{q_1^2 + q_2^2} + |q_3|, \qquad (4.4)$$
where $q = (q_1, q_2, q_3)$. This particular norm was used as a $G$ function in [1] for a simple 2-robot coordinated optimal control problem (see Figure 2.1).

Figure 4.3: Contour plots of $G(q)$. (a) mixed $p$-norm: $G$ is defined by (4.4). (b) asymmetric norm-like function: $G$ is defined by (4.5).

Finally, the one-sidedness of Assumption 4.6 allows $G$ to be asymmetric, which is not permitted for a norm. An example of such an asymmetric norm-like function is shown in Figure 4.3(b) and is given by
$$G(q) = \begin{cases} \|B_a q\|_\infty, & \text{if } q_1 \le 0 \text{ and } q_2 \le 0, \\ \|B_b q\|_1, & \text{if } q_1 \le 0 \text{ and } q_2 > 0, \\ \|B_c q\|_2, & \text{if } q_1 > 0 \text{ and } q_2 \le 0, \\ \|B_d q\|_2, & \text{if } q_1 > 0 \text{ and } q_2 > 0, \end{cases} \qquad (4.5)$$
where
$$B_a = \begin{bmatrix} 1/2 & 0 \\ 0 & 1 \end{bmatrix}, \quad B_b = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/2 \end{bmatrix}, \quad B_c = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad B_d = \begin{bmatrix} 1 & 0 \\ 0 & 1/2 \end{bmatrix}.$$

We solve the anisotropic problem characterized by (4.5), as well as an anelliptic wave propagation problem and a multi-robot optimal path planning problem, in Section 4.6. Other examples of $G$ functions which satisfy strict one-sided monotonicity are some polygonal norms, such as axis-aligned hexagonal or octagonal norms; however, we do not further investigate these options here.
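For concreteness, the mixed $p$-norm (4.4) and the asymmetric norm-like function (4.5) might be coded as below. This is our own sketch, using the matrices defined above; such a $G$ plugs directly into $H(x,q) = G(x,q) - 1$.

import numpy as np

def mixed_p_norm(q):
    """Mixed p-norm (4.4): Euclidean norm of (q1, q2), then Manhattan
    norm of the result together with q3."""
    q1, q2, q3 = q
    return np.hypot(q1, q2) + abs(q3)

# Diagonal scalings from (4.5); one per quadrant of the (q1, q2) plane.
B_a = np.diag([0.5, 1.0])
B_b = np.diag([0.5, 0.5])
B_c = np.diag([1.0, 1.0])
B_d = np.diag([1.0, 0.5])

def asymmetric_G(q):
    """Asymmetric norm-like function (4.5): a different scaled p-norm
    applies in each quadrant, so G is strictly one-sided monotone but
    not symmetric under q -> -q."""
    if q[0] <= 0 and q[1] <= 0:
        return np.linalg.norm(B_a @ q, ord=np.inf)
    if q[0] <= 0:                    # q2 > 0
        return np.linalg.norm(B_b @ q, ord=1)
    if q[1] <= 0:                    # q1 > 0
        return np.linalg.norm(B_c @ q)
    return np.linalg.norm(B_d @ q)   # q1 > 0 and q2 > 0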
4.3 Discretization

We define a discrete analogue of the Dirichlet problem (1.8). By describing the Update function in Algorithm 1, we also formalize the Dijkstra-like algorithm. Finally, we prove important properties of the numerical scheme and corresponding Update function, to show that the solution to the discrete problem converges to the solution to (1.8) and that Algorithm 1 solves the discrete problem.

Let $x \in \Omega$. The stencil of $x$ is shown in Figure 4.4. Let $x_j^\pm = x + h_j^\pm e_j$ be the stencil node in the $\pm e_j$ direction, where $e_j$ is the $j$th vector in the standard orthogonal basis and $h_j^\pm$ is a signed distance along the $j$th axis from $x$ to $x_j^\pm$. Let $\mathcal{N} \subseteq \{x_j^\pm \mid 1 \le j \le d\}$ be the set of nodes used in the stencil. Let
$$K = \{(\kappa_1, \kappa_2, \ldots, \kappa_d) \mid \kappa_j \in \{-1, +1\},\ 1 \le j \le d\},$$
such that $\kappa \in K$ represents one of the $2^d$ neighboring simplices of $x$. Note that we abuse notation by using $\kappa_j \in \{-1, +1\}$ as a superscript indexing $x_j^\pm$ or $h_j^\pm$. We define the numerical Hamiltonian $H$ as follows:
$$H(x, \mathcal{N}, \phi, \mu) = \max_{\kappa\in K} \left[ H(x, D^\kappa(x, \mathcal{N}, \phi, \mu)) \right], \qquad (4.6)$$
where $H$ is as defined in Section 4.2 and
$$D^\kappa(x, \mathcal{N}, \phi, \mu) = (D_1^\kappa(x, \mathcal{N}, \phi, \mu),\ D_2^\kappa(x, \mathcal{N}, \phi, \mu),\ \ldots,\ D_d^\kappa(x, \mathcal{N}, \phi, \mu))$$
is a first-order, upwind, finite-difference gradient approximation from the simplex represented by $\kappa$; that is,
$$D_j^\kappa(x, \mathcal{N}, \phi, \mu) = \begin{cases} \dfrac{\max(0,\ \mu - \phi(x_j^{\kappa_j}))}{-h_j^{\kappa_j}}, & \text{if } x_j^{\kappa_j} \in \mathcal{N}, \\ 0, & \text{otherwise,} \end{cases} \qquad (4.7)$$
for $1 \le j \le d$.

Figure 4.4: Neighborhood of $x$ with $d = 2$.

Given an axis-aligned orthogonal grid $\mathcal{G}$, the discretized Dirichlet problem is to find a function $u^\mathcal{G} : \Omega \to \mathbb{R}$ such that (3.1) is satisfied and $u^\mathcal{G}(x)$ satisfies a non-expansive interpolation for $x \in \Omega \setminus \mathcal{X}$. For this discretization the stencil is the set of grid neighbors: $\overrightarrow{\mathcal{Y}}(x) = \overleftarrow{\mathcal{Y}}(x) = \mathcal{N}(x)$, for $x \in \Omega$.

Let the Update function in Algorithm 1 be defined as follows. A call to Update($x$) returns the solution $\mu = \tilde\mu$ to
$$H(x, \mathcal{N}(x) \setminus \mathcal{H}, v, \mu) = 0. \qquad (4.8)$$
In this way it determines a node's value $v(x) \leftarrow \tilde\mu$ given the values of its accepted neighbors, $v(x_j^\pm)$.

Proposition 3.13 states that the grid function $v$ that results from running Algorithm 1 is the unique solution of the discretized problem (3.1), provided that (4.8) has a unique solution (Property 3.8) and that the numerical scheme is causal (Property 3.9). We prove these properties in Sections 4.3.1 and 4.3.2. Also, the uniqueness of the solution to (4.8) and the fact that $H(\mu)$ is nondecreasing in $\mu$ (Lemma 4.14) are useful when implementing Update with numerical root finders. Proposition 3.7 states that the solution $u^\mathcal{G}$ converges to $u$, the solution of (1.8), if the numerical scheme is consistent, monotone, and stable (Properties 3.2, 3.3, and 3.4, respectively). We include proofs of these properties in Sections 4.3.3, 4.3.4, and 4.3.5.

When we are varying only $\mu$, it will be convenient to write $H(\mu) = H(x, \mathcal{N}, \phi, \mu)$ and $D^\kappa(\mu) = D^\kappa(x, \mathcal{N}, \phi, \mu)$. For the lemmas and theorems stated in this section, we assume $H$ satisfies Assumptions 4.3 to 4.6.
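Spelled out as code, the gradient approximation (4.7) and the numerical Hamiltonian (4.6) look as follows. This is a minimal sketch with names of our own choosing: neighbors maps $(j, \kappa_j)$ to the pair $(\phi(x_j^{\kappa_j}), h_j^{\kappa_j})$ for stencil nodes that are present, and H is the analytic Hamiltonian $H(x, \cdot)$ at the fixed node $x$.

import itertools

def upwind_gradient(mu, kappa, neighbors):
    """D^kappa of (4.7): one-sided difference toward each x_j^{kappa_j}.

    neighbors: dict mapping (j, +1) or (j, -1) to (phi_value, signed_h)
               for stencil nodes that are present.
    """
    q = []
    for j, kj in enumerate(kappa):
        entry = neighbors.get((j, kj))
        if entry is None:
            q.append(0.0)
        else:
            phi, h = entry  # h is the signed distance h_j^{kappa_j}
            q.append(max(0.0, mu - phi) / (-h))
    return q

def numerical_hamiltonian(mu, H, neighbors, d):
    """H(x, N, phi, mu) of (4.6): max of H over all 2^d simplices."""
    return max(H(upwind_gradient(mu, kappa, neighbors))
               for kappa in itertools.product((-1, +1), repeat=d))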
4.3.1 Unique Update

Let $\phi \in B(\Omega)$. Define
$$\check\phi_\mathcal{N} = \min_{x\in\mathcal{N}} \phi(x). \qquad (4.9)$$
We show in Theorem 4.15 that there is a unique solution $\mu = \tilde\mu$ to $H(x, \mathcal{N}, \phi, \mu) = 0$, and that $\tilde\mu > \check\phi_\mathcal{N}$. In other words, we prove Property 3.8 (Unique Solution) and, in addition, a lower bound on the solution $\tilde\mu$. First, we prove two useful lemmas.

Lemma 4.13. $H(\mu)$ is strictly increasing on $\mu \ge \check\phi_\mathcal{N}$.

Proof. Let $\mu_a > \mu_b \ge \check\phi_\mathcal{N}$. In order to prove the lemma, we must show $H(\mu_a) > H(\mu_b)$. Let $\kappa \in K$ and $1 \le j \le d$. If $\mu_a > \phi(x_j^{\kappa_j})$, then $D_j^\kappa(\mu_a) D_j^\kappa(\mu_b) \ge 0$ and $|D_j^\kappa(\mu_a)| > |D_j^\kappa(\mu_b)|$. On the other hand, if $\mu_a \le \phi(x_j^{\kappa_j})$, then $D_j^\kappa(\mu_a) = D_j^\kappa(\mu_b) = 0$. Also, there exists at least one $\kappa \in K$ and $1 \le j \le d$ such that $D_j^\kappa(\mu_a) \ne 0$, since $\mu_a > \check\phi_\mathcal{N}$. For such $\kappa$, $H(D^\kappa(\mu_a)) > H(D^\kappa(\mu_b))$, by strict one-sided monotonicity (Assumption 4.6). For all other $\kappa$, $H(D^\kappa(\mu_a)) = H(D^\kappa(\mu_b)) = H(0)$. Therefore, by (4.6), $H(\mu_a) > H(\mu_b)$. $\square$

Lemma 4.14. The numerical Hamiltonian $H(\mu)$ satisfies the following.

(a) $H(\mu) = H(0) < 0$ for $\mu \le \check\phi_\mathcal{N}$.

(b) $H(\mu) \to \infty$ as $\mu \to \infty$.

(c) $H(\mu)$ is nondecreasing in $\mu$.

Proof. If $\mu \le \check\phi_\mathcal{N}$, then by (4.7) and (4.9) we have $D_j^\kappa(\mu) = 0$ for all $\kappa \in K$, $1 \le j \le d$. By the strict compatibility of $H$, $H(D^\kappa(\mu)) = H(0) < 0$ for all $\kappa$. By (4.6), we have $H(\mu) = H(0) < 0$ for $\mu \le \check\phi_\mathcal{N}$, proving (a).

Let $\kappa \in K$ and $1 \le j \le d$. As $\mu \to \infty$, we have $|D_j^\kappa(\mu)| \to \infty$ for each $x_j^{\kappa_j} \in \mathcal{N}$, and hence $\|D^\kappa(\mu)\| \to \infty$ for all $\kappa \in K$. By the coercivity of $H$, as $\mu \to \infty$ we have $H(D^\kappa(\mu)) \to \infty$ for all $\kappa \in K$. By (4.6), we have $H(\mu) \to \infty$ as $\mu \to \infty$, proving (b).

Because $H(\mu)$ is constant on $\mu \le \check\phi_\mathcal{N}$ and, by Lemma 4.13, increasing on $\mu \ge \check\phi_\mathcal{N}$, $H(\mu)$ is nondecreasing in $\mu$, proving (c). $\square$

Theorem 4.15. There exists a unique solution $\mu = \tilde\mu$ to $H(\mu) = 0$, and it satisfies $\tilde\mu > \check\phi_\mathcal{N}$.

Proof. Each $D_j^\kappa(\mu)$ is continuous in $\mu$. Furthermore, by the continuity of $H$, $H(D^\kappa(\mu))$ is continuous in $\mu$ for all $\kappa$. Since max is continuous, $H(\mu)$ is continuous. By Lemma 4.14(a) and 4.14(b), $H(\mu) < 0$ for $\mu \le \check\phi_\mathcal{N}$ and $H(\mu) \to \infty$ as $\mu \to \infty$. Therefore, by the Intermediate Value Theorem, there exists a solution $\mu = \tilde\mu$ to $H(\mu) = 0$ such that $\check\phi_\mathcal{N} < \tilde\mu < \infty$. Moreover, since $H$ is strictly increasing on $\mu \ge \check\phi_\mathcal{N}$ by Lemma 4.13, the solution is unique. $\square$

Remark 4.16. We note that strict one-sided monotonicity (Assumption 4.6) of $H$ is used to prove Lemma 4.13, and Lemma 4.13 is then used to show that the solution to $H(\mu) = 0$ is unique. We might consider whether or not one-sided monotonicity (Property 4.7) of $H$ is sufficient for a unique solution. However, Property 4.7 would not be sufficient to prove Lemma 4.13, and we would find that $H(\mu)$ is only nondecreasing on $\mu \ge \check\phi_\mathcal{N}$. A solution to $H(\mu) = 0$ would still be guaranteed in this case, but it would not necessarily be unique. Analogously, for differentiable $H$, Osher's criterion on $H$ implies a solution that may not be unique unless $H$ satisfies some additional property, such as one-sided homogeneity (Property 4.10) or convexity.

4.3.2 Causality

The following theorem states that $H$ and the Update function are causal. The Update function is considered causal if the solution $\mu = \tilde\mu$ to $H(x, \mathcal{N}, u, \mu) = 0$ does not change after all nodes with values greater than or equal to $\tilde\mu$ are removed from the stencil.

Theorem 4.17. Property 3.9 (Causality) holds.

Proof. Let Property 3.8 hold. Let $\mu = \tilde\mu$ be the unique solution to $H(x, \mathcal{N}, \phi, \mu) = 0$. Let $\mathcal{N}^- = \{y \in \mathcal{N} \mid \phi(y) < \tilde\mu\}$. By (4.7), we have $D_j^\kappa(x, \mathcal{N}, \phi, \tilde\mu) = D_j^\kappa(x, \mathcal{N}^-, \phi, \tilde\mu)$ for all $\kappa \in K$, $1 \le j \le d$. Thus, $H(x, \mathcal{N}^-, \phi, \tilde\mu) = H(x, \mathcal{N}, \phi, \tilde\mu) = 0$. Therefore, $\mu = \tilde\mu$ is also the unique solution to $H(x, \mathcal{N}^-, \phi, \mu) = 0$. $\square$

4.3.3 Monotonicity

We show that $H$ and the Update function are monotone in the neighbors' values. Monotonicity of $H$ requires that if none of the stencil node values decreases, the numerical Hamiltonian $H$ does not increase. We note that monotonicity does not require strict one-sided monotonicity of $H$; rather, one-sided monotonicity of $H$ is sufficient.

Theorem 4.18. Property 3.3 (Monotonicity) holds.

Proof. Let $x \in \Omega$ and $\mu \in \mathbb{R}$. Let functions $\check\phi : \Omega \to \mathbb{R}$ and $\hat\phi : \Omega \to \mathbb{R}$ be such that $\check\phi(y) \le \hat\phi(y)$ for all $y \in \mathcal{N}$. We have $D^\kappa(x, \mathcal{N}, \check\phi, \mu) \unrhd D^\kappa(x, \mathcal{N}, \hat\phi, \mu)$ for all $\kappa \in K$. Also, by Proposition 4.8, $H$ satisfies one-sided monotonicity (Property 4.7). Thus,
$$H(D^\kappa(x, \mathcal{N}, \check\phi, \mu)) \ge H(D^\kappa(x, \mathcal{N}, \hat\phi, \mu)),$$
for all $\kappa \in K$. Consequently, $H(x, \mathcal{N}, \check\phi, \mu) \ge H(x, \mathcal{N}, \hat\phi, \mu)$, proving that Property 3.3 holds. $\square$

The following corollary states that if none of the stencil node values decreases, the solution $\mu = \tilde\mu$ to $H(x, \mathcal{N}, \phi, \mu) = 0$ does not decrease.

Corollary 4.19. Let Property 3.8 hold. Let $x \in \Omega$ and $\mu \in \mathbb{R}$. Let functions $\check\phi : \Omega \to \mathbb{R}$ and $\hat\phi : \Omega \to \mathbb{R}$ be such that $\check\phi(y) \le \hat\phi(y)$ for all $y \in \mathcal{N}$. Let $\mu = \mu_{\check\phi}$ be the unique solution to $H(x, \mathcal{N}, \check\phi, \mu) = 0$ and $\mu = \mu_{\hat\phi}$ be the unique solution to $H(x, \mathcal{N}, \hat\phi, \mu) = 0$. Then $\mu_{\check\phi} \le \mu_{\hat\phi}$.

Proof. By Property 3.3, $H(x, \mathcal{N}, \check\phi, \mu_{\check\phi}) = 0 \ge H(x, \mathcal{N}, \hat\phi, \mu_{\check\phi})$. By Lemma 4.14(c), $H(x, \mathcal{N}, \hat\phi, \mu)$ is nondecreasing in $\mu$, so in order that $H(x, \mathcal{N}, \hat\phi, \mu_{\hat\phi}) = 0$, it must be that $\mu_{\check\phi} \le \mu_{\hat\phi}$. $\square$
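Lemma 4.14 and Theorem 4.15 together justify a simple bracketing root finder for the update equation (4.8): $H(\mu)$ is continuous, negative for $\mu \le \check\phi_\mathcal{N}$, nondecreasing, and tends to $\infty$, so an expanding bracket followed by bisection converges to the unique root. Here is a minimal sketch, reusing the hypothetical numerical_hamiltonian helper from above and assuming a nonempty stencil.

def solve_update(H, neighbors, d, tol=1e-12):
    """Solve H(x, N, phi, mu) = 0 for mu by bisection (Theorem 4.15).

    The root is unique and exceeds the minimum neighbor value (4.9).
    """
    lo = min(phi for phi, _ in neighbors.values())  # check-phi_N
    # Expand the upper end of the bracket until H changes sign;
    # this terminates by coercivity (Lemma 4.14(b)).
    width = max(abs(h) for _, h in neighbors.values())
    hi = lo + width
    while numerical_hamiltonian(hi, H, neighbors, d) < 0.0:
        width *= 2.0
        hi = lo + width
    # Standard bisection; H is nondecreasing in mu (Lemma 4.14(c)).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if numerical_hamiltonian(mid, H, neighbors, d) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

A bisection solver of exactly this kind is used in Section 4.6.3 for the anelliptic Hamiltonian, while for the $p$-norm Eikonal cases Section 4.5 gives closed-form alternatives.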
4.3.4 Consistency

We show that the numerical Hamiltonian $H$ is consistent with (4.2). First, we note that since $|h_j^\pm| \le \hat{h}_\mathcal{G}$ for all $\mathcal{G}$, Property 3.1 is satisfied.

Lemma 4.20. Let $x \in \Omega$ and $\phi \in C_b^\infty(\Omega)$. Let $H$ be continuous in the first argument and satisfy Assumptions 4.3 to 4.6. Define $\hat{h} = \max_{1\le j\le d} |h_j^\pm|$. Then
$$\lim_{\substack{y\to x,\ \xi\to 0 \\ \hat{h}\to 0}} H(y, \mathcal{N}, \phi + \xi, \phi(y) + \xi) = H(x, D\phi(x)). \qquad (4.10)$$

Proof. Let $\phi$, $x$, and $H$ be as defined above. Let $D\phi(x) = (\partial_1\phi(x), \partial_2\phi(x), \ldots, \partial_d\phi(x))$. Let $\kappa \in K$ and $1 \le j \le d$. We have, by (4.7) and the smoothness of $\phi$,
$$\lim_{\substack{y\to x,\ \xi\to 0,\ \hat{h}\to 0}} D_j^\kappa(y, \mathcal{N}, \phi + \xi, \phi(y) + \xi) = \lim_{\substack{y\to x,\ \xi\to 0,\ \hat{h}\to 0}} \frac{\max(0,\ \phi(y) + \xi - \phi(x_j^{\kappa_j}) - \xi)}{-h_j^{\kappa_j}} = \lim_{\substack{y\to x,\ \hat{h}\to 0}} \frac{\min(0,\ \phi(y + h_j^{\kappa_j} e_j) - \phi(y))}{h_j^{\kappa_j}} = \begin{cases} \partial_j\phi(x), & \text{if } \kappa_j \partial_j\phi(x) \le 0, \\ 0, & \text{otherwise.} \end{cases}$$
Define $\tilde\kappa$ as
$$\tilde\kappa_j = \begin{cases} +1, & \text{if } \partial_j\phi(x) \le 0, \\ -1, & \text{otherwise,} \end{cases} \qquad (4.11)$$
for $1 \le j \le d$. We have
$$\lim_{\substack{y\to x,\ \xi\to 0,\ \hat{h}\to 0}} D^{\tilde\kappa}(y, \mathcal{N}, \phi + \xi, \phi(y) + \xi) = D\phi(x). \qquad (4.12)$$
By the continuity and strict one-sided monotonicity of $H$,
$$\lim H(y, D^{\tilde\kappa}(y, \mathcal{N}, \phi + \xi, \phi(y) + \xi)) \ge \lim H(y, D^\kappa(y, \mathcal{N}, \phi + \xi, \phi(y) + \xi))$$
for all $\kappa$, where all limits are taken as $y \to x$, $\xi \to 0$, $\hat{h} \to 0$. Therefore, by the continuity of max and $H$,
$$\lim H(y, \mathcal{N}, \phi + \xi, \phi(y) + \xi) = \lim \max_{\kappa\in K} H(y, D^\kappa(y, \mathcal{N}, \phi + \xi, \phi(y) + \xi)) = \lim H(y, D^{\tilde\kappa}(y, \mathcal{N}, \phi + \xi, \phi(y) + \xi)) = H(x, D\phi(x)). \qquad \square$$

Proposition 4.21. Let $H$ be continuous in the first argument and satisfy Assumptions 4.3 to 4.6. Then Property 3.2 (Consistency) holds.

Proof. Let $H$ satisfy the properties specified above. Let Property 3.1 hold. Let $x \in \Omega$ and $\phi \in C_b^\infty(\Omega)$. Let $\mathcal{G}_k$, $x_k \in \Omega_k$, and $\xi_k$ be sequences such that $x_k \to x$, $\hat{h}_{\mathcal{G}_k} \to 0$, and $\xi_k \to 0$ as $k \to \infty$. Then, by Lemma 4.20,
$$\lim_{k\to\infty} H(x_k, \mathcal{Y}_k, \phi + \xi_k, \phi(x_k) + \xi_k) = H(x, D\phi(x)), \qquad (4.13)$$
so Property 3.2 holds. $\square$

4.3.5 Stability

We show that for any reasonable orthogonal grid $\mathcal{G}$, the solution $u^\mathcal{G}$ to (3.1) is bounded, and so Property 3.4 (Stability) holds. The following lemma implies that the magnitude of the slope in $u^\mathcal{G}$, as measured between two neighbors, is bounded.

Lemma 4.22. Define
$$\hat{K} = \sup_{x\in\Omega,\ H(x,p)\le 0} \|p\|.$$
Let $x \in \Omega$ and $\phi \in B(\Omega)$. If $\mu = \tilde\mu$ is the unique solution to $H(x, \mathcal{N}, \phi, \mu) = 0$, then $|D_j^\kappa(x, \mathcal{N}, \phi, \tilde\mu)| \le \hat{K}$ for all $\kappa \in K$ and $1 \le j \le d$.

Proof. Note that $\hat{K}$ exists and is finite by the coercivity of $H$. Let $\kappa \in K$ and $1 \le j \le d$. Assume $|D_j^\kappa(x, \mathcal{N}, \phi, \tilde\mu)| > \hat{K}$. By the definition of $\hat{K}$,
$$H(x, -|D_j^\kappa(x, \mathcal{N}, \phi, \tilde\mu)| \kappa_j e_j) > 0.$$
Thus, by the one-sided monotonicity of $H$ and (3.1),
$$H(x, D^\kappa(x, \mathcal{N}, \phi, \tilde\mu)) \ge H(x, -|D_j^\kappa(x, \mathcal{N}, \phi, \tilde\mu)| \kappa_j e_j) > 0 = H(x, \mathcal{N}, \phi, \tilde\mu),$$
contradicting (4.6). Therefore, $|D_j^\kappa(x, \mathcal{N}, \phi, \tilde\mu)| \le \hat{K}$. $\square$

We consider a grid reasonable if the minimum distance along grid lines from any node to a boundary node is bounded. Let $X = (x_1, x_2, \ldots, x_k)$ be a grid path, or sequence of neighboring nodes, such that $x_l \in \mathcal{X}$ for $1 \le l \le k$ and $x_l \in \mathcal{N}(x_{l-1})$ for $2 \le l \le k$. Define the grid path length of $X$ by
$$\omega(X) = \sum_{l=2}^k \|x_l - x_{l-1}\|_2.$$
Define the minimum grid path length between $x \in \mathcal{X}$ and $x' \in \mathcal{X}$ to be
$$\check\omega(x, x') = \min\{\omega(X) \mid x_1 = x \text{ and } x_k = x'\}.$$
Finally, define the minimum node-to-boundary grid path length of $x \in \mathcal{X}$ as
$$\tilde\omega(x) = \min_{x'\in\partial\Omega} \check\omega(x, x').$$

Theorem 4.23. Let $\mathcal{G}$ be an orthogonal discretization such that for all $x \in \Omega$, $\tilde\omega(x) \le \hat{W}$, for some constant $\hat{W}$. Let $u^\mathcal{G}$ be the unique solution to (3.1). Then
$$\min_{x'\in\partial\Omega} g(x') \le u^\mathcal{G} \le \hat{W}\hat{K} + \max_{x'\in\partial\Omega} g(x'). \qquad (4.14)$$

Proof. Let $x \in \Omega$. Let $x' \in \partial\Omega$ and $X = (x_1, x_2, \ldots, x_k)$ be such that $x_1 = x$, $x_k = x'$, and $\omega(X) = \tilde\omega(x)$.
Obtain a modified grid $\tilde{\mathcal{G}}$ for which $\tilde\Omega$ includes all nodes in $\Omega$ that are along the grid path $X$:
$$\tilde\Omega = \{x \mid x \in \Omega \text{ and } x = x_j \text{ for some } j \text{ such that } 1 \le j \le k\},$$
and $\partial\tilde\Omega$ includes all other nodes in $\mathcal{X}$:
$$\partial\tilde\Omega = \mathcal{X} \setminus \tilde\Omega.$$
Define the boundary condition $\tilde{g} : \partial\tilde\Omega \to \mathbb{R}$ for the modified discretization:
$$\tilde{g}(x) = \begin{cases} g(x), & x \in \partial\Omega, \\ \infty, & \text{otherwise.} \end{cases}$$
Let $u^{\tilde{\mathcal{G}}}(x)$ be the unique solution to the modified problem at $x$. Note that $u^{\tilde{\mathcal{G}}}(x) \le \hat{W}\hat{K} + g(x')$, where $\hat{K}$ is given by Lemma 4.22. Also note that, by the monotonicity of $H$ (Corollary 4.19), we have $u^\mathcal{G}(x) \le u^{\tilde{\mathcal{G}}}(x)$. Therefore, $u^\mathcal{G}(x) \le \hat{W}\hat{K} + \max_{x'\in\partial\Omega} g(x')$, for all $x \in \mathcal{X}$.

For the lower bound, by Theorem 4.15 the unique solution $\mu = \tilde\mu$ to $H(x, \mathcal{N}(x), u, \mu) = 0$ must be such that $\tilde\mu > \check{u}$, where $\check{u}$ is the minimum neighbor value as in (4.9). By induction on the nodes of $\Omega$, we find that for all $x \in \mathcal{X}$,
$$u^\mathcal{G}(x) \ge \min_{x'\in\partial\Omega} g(x').$$
Because of the non-expansive interpolation used to determine $u^\mathcal{G}$, (4.14) holds. $\square$

4.4 Efficient Implementation of Update

This section describes three methods for improving the efficiency of the Update function: symmetry, causality, and solution elimination. Some of these methods are related to those found in [40, 70]. However, the efficiency gains from using these methods are not substantial for an already efficient implementation of Algorithm 1. In such an implementation, the Update function only computes updates from those nodes that are accepted (i.e., that have been removed from $\mathcal{H}$), as in (4.8). Because the nodes are accepted in nondecreasing value order, causality elimination cannot possibly remove any nodes from consideration in Update. Furthermore, only simplices that include the node $x$ most recently removed from $\mathcal{H}$ are considered in computing the updated value. Then, $v(x)$ gets assigned the minimum of this newly computed value and its previous value. Our experiments show that in many calls to Update, only a single simplex fits these criteria, and the fraction of updates for which only a single simplex fits the criteria grows as the grid is refined. For this reason, the techniques for eliminating nodes and simplices described in this section are largely ineffective.

However, for coarse grid resolutions and problems where characteristics intersect often, multiple simplices are considered by Update frequently enough that symmetry elimination, which is very cheap, significantly improves efficiency. In some cases, a node value update can be ignored altogether if the most-recently extracted node is eliminated by symmetry.

Despite the fact that the node and simplex elimination techniques described here are useful only in limited circumstances, we include them for theoretical interest and because they may be applied in other algorithms, such as iterative methods, that also require the Update function and for which the number of potential simplices to consider may be larger.

Efficiency can be gained by determining which stencil nodes $y \in \mathcal{N}$ have no influence on the solution and eliminating them from consideration. Let $K_\mathcal{N}$ be the set of stencil simplices that can be formed by the neighbors in $\mathcal{N}$. For example, in $d = 4$ dimensions, take $\mathcal{N} = \{x_2^\pm, x_3^-, x_4^\pm\}$. Then we have
$$K_\mathcal{N} = \{(0,-1,-1,-1),\ (0,+1,-1,-1),\ (0,-1,-1,+1),\ (0,+1,-1,+1)\}.$$
For $\kappa \in K_\mathcal{N}$ and $1 \le j \le d$, $\kappa_j = 0$ indicates that $x_j^{\kappa_j}$ is not considered in computing the gradient approximation $D^\kappa(\mu)$; that is, $D_j^\kappa(x, \mathcal{N}, \phi, \mu) = 0$ if $\kappa_j = 0$, and $D_j^\kappa$ satisfies (4.7) otherwise.
We recognize that the computation of $H$ can be simplified to consider only the stencil simplices formed from the stencil nodes in $\mathcal{N}$:
$$H(x, \mathcal{N}, \phi, \mu) = \max_{\kappa\in K_\mathcal{N}} \left[ H(x, D^\kappa(x, \mathcal{N}, \phi, \mu)) \right]. \qquad (4.15)$$
In order to implement a more efficient Update function, we first attempt to reduce the set of stencil nodes from $\mathcal{N}$ to $\tilde{\mathcal{N}} \subseteq \mathcal{N}$, and second we solve
$$H(x, \tilde{\mathcal{N}}, \phi, \mu) = 0 \qquad (4.16)$$
for $\mu = \tilde\mu$.

4.4.1 Symmetry Node Elimination

We show how the considered stencil nodes $\mathcal{N}$ can be reduced by keeping only the neighbor with the smaller value of a pair of opposite neighbors in the $j$th dimension, when (4.6) is symmetric in that dimension. This procedure is a generalization of those in [40, 70] to all axis-aligned anisotropic problems on unequally spaced grids.

First, we introduce useful notation. Let $q \in \mathbb{R}^d$. Let $T^i(q) \in \mathbb{R}^d$ be a reflection of $q$ in the hyperplane orthogonal to the $i$th axis, such that
$$T_j^i(q) = \begin{cases} -q_j, & \text{if } j = i, \\ q_j, & \text{otherwise,} \end{cases}$$
for $1 \le j \le d$. Let $\Phi_j$ indicate symmetry of (4.6) in the $j$th dimension, as follows:
$$\Phi_j = \begin{cases} 1, & \text{if } |h_j^-| = |h_j^+| \text{ and, for all } q \in \mathbb{R}^d,\ H(x,q) = H(x, T^j(q)), \\ 0, & \text{otherwise.} \end{cases}$$
In other words, $\Phi_j = 1$ if and only if the stencil spacing and $H$ are symmetric in the $j$th dimension.

Lemma 4.24. Let $j$ be such that $1 \le j \le d$. Let $\kappa \in K_\mathcal{N}$ and $\kappa' = T^j(\kappa)$. If $\Phi_j = 1$ and $\phi(x_j^{\kappa_j'}) \le \phi(x_j^{\kappa_j})$, then $H(x, D^{\kappa'}(\mu)) \ge H(x, D^\kappa(\mu))$ for all $\mu$.

Proof. Let $j$, $\kappa$, and $\kappa'$ be as defined above. Let $\Phi_j = 1$ and $\phi(x_j^{\kappa_j'}) \le \phi(x_j^{\kappa_j})$. Consider the components of $T^j(D^{\kappa'}(\mu))$. We have
$$T_j^j(D^{\kappa'}(\mu)) \, D_j^\kappa(\mu) = -D_j^{\kappa'}(\mu) \, D_j^\kappa(\mu) = -\frac{\max(0,\ \mu - \phi(x_j^{\kappa_j'}))}{-h_j^{\kappa_j'}} \cdot \frac{\max(0,\ \mu - \phi(x_j^{\kappa_j}))}{-h_j^{\kappa_j}} \ge 0,$$
since $h_j^{\kappa_j'} = -h_j^{\kappa_j}$. Furthermore,
$$|T_j^j(D^{\kappa'}(\mu))| = |-D_j^{\kappa'}(\mu)| = \left| \frac{\max(0,\ \mu - \phi(x_j^{\kappa_j'}))}{-h_j^{\kappa_j'}} \right| \ge \left| \frac{\max(0,\ \mu - \phi(x_j^{\kappa_j}))}{-h_j^{\kappa_j}} \right| = |D_j^\kappa(\mu)|,$$
since $h_j^{\kappa_j'} = -h_j^{\kappa_j}$ and $\phi(x_j^{\kappa_j'}) \le \phi(x_j^{\kappa_j})$. For $i \ne j$, $T_i^j(D^{\kappa'}(\mu)) = D_i^{\kappa'}(\mu) = D_i^\kappa(\mu)$, since $\kappa_i' = \kappa_i$. Consequently, $T^j(D^{\kappa'}(\mu)) \unrhd D^\kappa(\mu)$. Therefore, by the symmetry of $H$ in the $j$th dimension and by the one-sided monotonicity of $H$ (Property 4.7),
$$H(x, D^{\kappa'}(\mu)) = H(x, T^j(D^{\kappa'}(\mu))) \ge H(x, D^\kappa(\mu)). \qquad \square$$

Theorem 4.25. Let $\mathcal{N} \subseteq \{x_j^\pm \mid 1 \le j \le d\}$. Pick any $j \in \{1, 2, \ldots, d\}$ such that $\Phi_j = 1$, $x_j^- \in \mathcal{N}$, and $x_j^+ \in \mathcal{N}$. Pick any $\sigma \in \{-1, +1\}$ such that $\phi(x_j^{-\sigma}) \le \phi(x_j^\sigma)$. Let $\tilde{\mathcal{N}} = \mathcal{N} \setminus \{x_j^\sigma\}$. Let $\mu = \mu_\mathcal{N}$ be the unique solution to $H(x, \mathcal{N}, \phi, \mu) = 0$ and $\mu = \mu_{\tilde{\mathcal{N}}}$ be the unique solution to $H(x, \tilde{\mathcal{N}}, \phi, \mu) = 0$. Then $\mu_{\tilde{\mathcal{N}}} = \mu_\mathcal{N}$.

Proof. Let $\kappa \in K_\mathcal{N}$. First, consider the case $\kappa_j = -\sigma$. We have $\kappa \in K_{\tilde{\mathcal{N}}}$. Second, consider the case $\kappa_j = \sigma$. Let $\kappa' = T^j(\kappa)$ and note that $\kappa_j' = -\sigma$. Then, by Lemma 4.24, $H(x, D^{\kappa'}(\mu)) \ge H(x, D^\kappa(\mu))$ for all $\mu$. In particular, $H(x, D^{\kappa'}(\mu_\mathcal{N})) \ge H(x, D^\kappa(\mu_\mathcal{N}))$.

In both cases, there exists $\tilde\kappa \in K_{\tilde{\mathcal{N}}}$ such that $H(x, D^{\tilde\kappa}(\mu_\mathcal{N})) \ge H(x, D^\kappa(\mu_\mathcal{N}))$. Also, note that $K_{\tilde{\mathcal{N}}} \subseteq K_\mathcal{N}$. Consequently,
$$H(x, \tilde{\mathcal{N}}, \phi, \mu_\mathcal{N}) = \max_{\kappa\in K_{\tilde{\mathcal{N}}}} [H(x, D^\kappa(\mu_\mathcal{N}))] = \max_{\kappa\in K_\mathcal{N}} [H(x, D^\kappa(\mu_\mathcal{N}))] = H(x, \mathcal{N}, \phi, \mu_\mathcal{N}) = 0.$$
Therefore, $\mu = \mu_\mathcal{N}$ is the unique solution to $H(x, \tilde{\mathcal{N}}, \phi, \mu) = 0$. $\square$

An implementation of the Update function can use the result obtained in Theorem 4.25 to eliminate $x_j^\pm \in \mathcal{N}$ from consideration in solving (4.16) by exploiting symmetries in (4.6). We call this symmetry elimination.

Remark 4.26. Theorem 4.25 can be generalized to a version that may eliminate the node with the larger value in a pair of opposite nodes in the $j$th dimension, even if (4.6) is not symmetric in that dimension. Let $1 \le j \le d$ be such that $x_j^- \in \mathcal{N}$ and $x_j^+ \in \mathcal{N}$, and let $\sigma \in \{-1, +1\}$. Node $x_j^\sigma \in \mathcal{N}$ may be eliminated from consideration if

• $|h_j^{-\sigma}| \le |h_j^\sigma|$;

• for all $q \in \mathbb{R}^d$ such that $\sigma q_j \ge 0$, $H(x,q) \ge H(x, T^j(q))$;

• and $\phi(x_j^{-\sigma}) \le \phi(x_j^\sigma)$.
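In code, symmetry elimination is a single pass over the dimensions before the root solve. The sketch below uses the same hypothetical neighbors dictionary as the earlier snippets and drops the larger-valued node of each opposite pair in every dimension flagged as symmetric, as licensed by Theorem 4.25.

def symmetry_eliminate(neighbors, symmetric_dims, d):
    """Drop the larger-valued neighbor of each opposite pair (Thm 4.25).

    symmetric_dims: set of dimensions j with Phi_j = 1 (equal spacing
    and H invariant under the reflection T^j).
    """
    reduced = dict(neighbors)
    for j in range(d):
        if j not in symmetric_dims:
            continue
        minus, plus = reduced.get((j, -1)), reduced.get((j, +1))
        if minus is None or plus is None:
            continue  # no opposite pair in this dimension
        # Keep the smaller value; a tie may drop either node.
        if minus[0] <= plus[0]:
            del reduced[(j, +1)]
        else:
            del reduced[(j, -1)]
    return reduced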
4.4.2 Causality Node Elimination

The causality of $H$ can also be exploited to eliminate $x_j^\pm \in \mathcal{N}$ from consideration. This observation was used in two distinct but equivalent methods for analytically computing the Update from a single simplex to solve an isotropic Eikonal equation [40, 70]. We show with the following theorem that the condition $H(\phi(y)) \ge 0$ can be checked to determine that a node $y = x_j^{\kappa_j}$ is non-causal, i.e., that the solution $\mu = \tilde\mu$ to (4.16) is not dependent on the node $y$.

Theorem 4.27. Let $\mathcal{N} \subseteq \{x_j^\pm \mid 1 \le j \le d\}$. Pick any $\kappa \in K_\mathcal{N}$ and $j \in \{1, 2, \ldots, d\}$ such that $\kappa_j \ne 0$ and $H(x, \mathcal{N}, \phi, \phi(x_j^{\kappa_j})) \ge 0$. Let $\tilde{\mathcal{N}} = \mathcal{N} \setminus \{x_j^{\kappa_j}\}$. Let $\mu = \mu_\mathcal{N}$ be the unique solution to $H(x, \mathcal{N}, \phi, \mu) = 0$ and $\mu = \mu_{\tilde{\mathcal{N}}}$ be the unique solution to $H(x, \tilde{\mathcal{N}}, \phi, \mu) = 0$. Then $\mu_{\tilde{\mathcal{N}}} = \mu_\mathcal{N}$.

Proof. Let $\mathcal{N}$, $\kappa$, $j$, $\tilde{\mathcal{N}}$, $\mu_\mathcal{N}$, and $\mu_{\tilde{\mathcal{N}}}$ be as defined above. By Lemma 4.14(c), $H(x, \mathcal{N}, \phi, \mu)$ is nondecreasing. Since $H(x, \mathcal{N}, \phi, \phi(x_j^{\kappa_j})) \ge 0 = H(x, \mathcal{N}, \phi, \mu_\mathcal{N})$, it must be that $\mu_\mathcal{N} \le \phi(x_j^{\kappa_j})$. Note that $H(x, \tilde{\mathcal{N}}, \phi, \mu)$ is identical to $H(x, \mathcal{N}, \phi, \mu)$, except that $D_j^\kappa(x, \mathcal{N}, \phi, \mu)$ is set to zero in $H(x, \tilde{\mathcal{N}}, \phi, \mu)$. But for $\mu \le \phi(x_j^{\kappa_j})$, we also have $D_j^\kappa(x, \mathcal{N}, \phi, \mu) = 0$ in $H(x, \mathcal{N}, \phi, \mu)$. Consequently, $H(x, \tilde{\mathcal{N}}, \phi, \mu) = H(x, \mathcal{N}, \phi, \mu)$ for $\mu \le \phi(x_j^{\kappa_j})$. In particular,
$$H(x, \tilde{\mathcal{N}}, \phi, \mu_\mathcal{N}) = H(x, \mathcal{N}, \phi, \mu_\mathcal{N}) = 0.$$
Therefore, $\mu = \mu_\mathcal{N}$ is the unique solution to $H(x, \tilde{\mathcal{N}}, \phi, \mu) = 0$. $\square$

Theorem 4.27 states that the unique solution $\mu = \tilde\mu$ to (4.16) does not change when a non-causal node is removed from $\mathcal{N}$. This node removal can be repeated until all non-causal nodes have been removed, and the solution $\tilde\mu$ will remain unchanged. We call this causality elimination. A binary or linear search through sorted neighbors' values can be used to determine the largest node value that might be causal. Note that causality elimination does not require symmetry in (4.16). However, the test for non-causality requires an evaluation of $H$, which is more expensive than the comparison of two neighbors' values used for symmetry elimination.

4.4.3 Solution Elimination

After eliminating nodes in $\mathcal{N}$ from consideration using symmetry and causality elimination, we determine the solution $\mu = \tilde\mu$ to (4.16). Let
$$\check\mu = \min_{\kappa\in K_\mathcal{N}} (\mu_\kappa), \qquad (4.17)$$
where $\mu = \mu_\kappa$ is the unique solution to
$$H(x, D^\kappa(\mu)) = 0. \qquad (4.18)$$
We show with the following theorem that, instead of solving (4.16) directly, we can solve (4.18) for each $\kappa \in K_\mathcal{N}$ and take the minimum such solution $\check\mu$. It can be shown that $H(x, D^\kappa(\mu))$ is continuous and nondecreasing in $\mu$ and that (4.18) has a unique solution, in a manner analogous to, but simpler than, the proof of Theorem 4.15.

Theorem 4.28. Let $\mu = \hat\mu$ be the unique solution to (4.16). Then $\hat\mu = \check\mu$.

Proof. Let $\mu_\kappa$, $\check\mu$, and $\hat\mu$ be as defined above. For any $\kappa \in K_\mathcal{N}$, we know $\mu_\kappa \ge \check\mu$. Since $H(x, D^\kappa(\mu))$ is nondecreasing in $\mu$, it must be that $H(x, D^\kappa(\mu)) \le H(x, D^\kappa(\mu_\kappa)) = 0$ for all $\mu \le \mu_\kappa$. In particular, $H(x, D^\kappa(\check\mu)) \le 0$. Furthermore, by the definition of $\check\mu$, there exists a $\check\kappa \in K_\mathcal{N}$ such that $H(x, D^{\check\kappa}(\check\mu)) = 0$. Consequently,
$$H(\check\mu) = \max_{\kappa\in K_\mathcal{N}} H(x, D^\kappa(\check\mu)) = 0. \qquad (4.19)$$
Therefore, $\hat\mu = \check\mu$ solves (4.16), and it is a unique solution by Theorem 4.15. $\square$

We further show that we may be able to determine $\check\mu$ without solving (4.18) for each $\kappa \in K_\mathcal{N}$. We demonstrate, using the following theorem, that if we have computed a solution $\mu = \mu_\kappa$ of (4.18) for some $\kappa \in K_\mathcal{N}$, we can easily determine whether $\mu_{\tilde\kappa} \ge \mu_\kappa$, where $\mu = \mu_{\tilde\kappa}$ is the solution to $H(x, D^{\tilde\kappa}(\mu)) = 0$ for some $\tilde\kappa \in K_\mathcal{N}$ such that $\tilde\kappa \ne \kappa$. Note that we do not necessarily need to compute $\mu_{\tilde\kappa}$ to rule it out as a minimal solution.
Theorem 4.29. Let $\kappa \in K_\mathcal{N}$ and $\tilde\kappa \in K_\mathcal{N}$. Let $\mu = \mu_\kappa$ be the unique solution to $H(x, D^\kappa(\mu)) = 0$ and $\mu = \mu_{\tilde\kappa}$ be the unique solution to $H(x, D^{\tilde\kappa}(\mu)) = 0$. Then $\mu_{\tilde\kappa} < \mu_\kappa$ if and only if $H(x, D^{\tilde\kappa}(\mu_\kappa)) > H(x, D^\kappa(\mu_\kappa))$.

Proof. Let $\mu_\kappa$ and $\mu_{\tilde\kappa}$ be as defined above. If $H(x, D^{\tilde\kappa}(\mu_\kappa)) > H(x, D^\kappa(\mu_\kappa)) = 0$, then the unique solution $\mu = \mu_{\tilde\kappa}$ to $H(x, D^{\tilde\kappa}(\mu)) = 0$ must be such that $\mu_{\tilde\kappa} < \mu_\kappa$, since $H(x, D^{\tilde\kappa}(\mu))$ is nondecreasing in $\mu$. Similarly, if $H(x, D^{\tilde\kappa}(\mu_\kappa)) \le H(x, D^\kappa(\mu_\kappa))$, then the unique solution $\mu = \mu_{\tilde\kappa}$ to $H(x, D^{\tilde\kappa}(\mu)) = 0$ must be such that $\mu_{\tilde\kappa} \ge \mu_\kappa$. $\square$

The result of Theorem 4.29 can be used to eliminate simplices $\kappa \in K_\mathcal{N}$ for which solutions to (4.18) are irrelevant to the computation. We call this process solution elimination.

4.5 Analytic Solutions

We provide analytic node value update equations for the cases where
$$H(q) = \|q\|_p - c \qquad (4.20)$$
and $p = 1$, $p = 2$, or $p = \infty$. In these cases, there is an exact solution to (4.8). The equation for $p = 2$ fixes some errors in the appendix of [44]. In [1] we demonstrated that these cases can be handled by a Dijkstra-like method on an orthogonal grid and are useful for robotic applications. However, here we generalize the update equations to any dimension and grid spacing.

Let $\kappa \in K_\mathcal{N}$. Let $\tilde{d}$ be the number of non-zero values in $\kappa$, or the number of nodes in the simplex represented by $\kappa$. Note that $\kappa$ might represent a lower-dimensional simplex: $1 \le \tilde{d} \le d$. Let $(v_1, v_2, \ldots, v_{\tilde{d}})$ be the list of simplex node values, one for each node in the simplex, and let $(h_1, h_2, \ldots, h_{\tilde{d}})$ be the corresponding grid spacings. We are solving for $\mu$. In order to use the analytic updates below, non-causal node values must already have been eliminated using causality elimination, so $\mu > \max_{1\le j\le\tilde{d}} v_j$. In the case of Algorithm 1, any nodes that would be removed from consideration by causality elimination have not yet been extracted from $\mathcal{H}$, and so the analytic updates below can be applied directly. In the following, indices $j$, $l$, $j_1$, and $j_2$ range from 1 to $\tilde{d}$ unless otherwise specified.

4.5.1 Update for p = 1

From (4.18) and (4.20) we have
$$\sum_j \left( \frac{|\mu - v_j|}{h_j} \right) = c.$$
Assume $\mu > \max_j v_j$ and multiply through by $\prod_l h_l$ to obtain
$$\left( \sum_j \prod_{l\ne j} h_l \right) \mu - \sum_j \left( \prod_{l\ne j} h_l \right) v_j = \left( \prod_l h_l \right) c.$$
Then solve for $\mu$ to get
$$\mu = \frac{\sum_j \left( \prod_{l\ne j} h_l \right) v_j + \left( \prod_l h_l \right) c}{\sum_j \prod_{l\ne j} h_l}.$$
4.6 Experiments

We conduct experiments to provide numerical evidence that the result of Algorithm 1 (p. 35) converges to the viscosity solution of (1.8), and to demonstrate the types of anisotropic problems that can be solved. Throughout this section, the boundary conditions are g(x) = 0 for x ∈ ∂Ω. For all experiments below, excluding that in Section 4.6.4, Ω = [−1, 1]^d \ {O} and ∂Ω = {O}, where O is the origin. We discretize [−1, 1]^d to have m uniformly-spaced nodes in each dimension, where m is odd to ensure that there is a node at the origin.

4.6.1 Numerical Convergence Study

We examine the difference between the solution to (3.1) and the solution to (1.8) for two simple Dirichlet problems. In particular, we look at how the absolute error changes as the grid spacing decreases toward zero. We take H to have the form in (4.2), where G(Du(x)) = ‖Du(x)‖_p and p = 1 or p = 2. There is no reason to test p = ∞, because in that case Dijkstra's algorithm gives the exact solution, even on a coarse grid. We use the analytic node value update equations provided in Section 4.5. The approximation errors are summarized in Table 4.1.

                                |           p = 1            |           p = 2
  d     m      N       h       |  e_∞   r_∞^h   e_1   r_1^h |  e_∞   r_∞^h   e_1   r_1^h
  2    11    1.2e2   2.0e-1    | 2.2e-1        6.3e-2       | 1.2e-1        6.2e-2
       21    4.4e2   1.0e-1    | 1.7e-1  .41   3.7e-2  .77  | 7.8e-2  .56   4.3e-2  .53
       41    1.7e3   5.0e-2    | 1.2e-1  .46   2.0e-2  .85  | 5.0e-2  .65   2.8e-2  .63
       81    6.6e3   2.5e-2    | 8.8e-2  .48   1.1e-2  .90  | 3.1e-2  .70   1.7e-2  .69
      161    2.6e4   1.3e-2    | 6.3e-2  .49   5.7e-3  .94  | 1.8e-2  .75   1.0e-2  .73
      321    1.0e5   6.3e-3    | 4.4e-2  .49   2.9e-3  .96  | 1.1e-2  .78   6.1e-3  .77
      641    4.1e5   3.1e-3    | 3.1e-2  .50   1.5e-3  .97  | 6.1e-3  .81   3.5e-3  .79
     1281    1.6e6   1.6e-3    | 2.2e-2  .50   7.6e-4  .98  | 3.4e-3  .83   2.0e-3  .82
  3    11    1.3e3   2.0e-1    | 3.5e-1        1.2e-1       | 2.1e-1        1.2e-1
       21    9.3e3   1.0e-1    | 2.6e-1  .43   6.9e-2  .78  | 1.4e-1  .58   8.4e-2  .57
       41    6.9e4   5.0e-2    | 1.9e-1  .47   3.9e-2  .85  | 8.7e-2  .66   5.4e-2  .65
       81    5.3e5   2.5e-2    | 1.3e-1  .49   2.1e-2  .89  | 5.3e-2  .72   3.3e-2  .70
      161    4.2e6   1.3e-2    | 9.5e-2  .50   1.1e-2  .92  | 3.1e-2  .76   2.0e-2  .74
  4    11    1.5e4   2.0e-1    | 4.4e-1        1.7e-1       | 2.9e-1        1.8e-1
       21    1.9e5   1.0e-1    | 3.2e-1  .45   9.8e-2  .78  | 1.9e-1  .60   1.2e-1  .58
       41    2.8e6   5.0e-2    | 2.3e-1  .48   5.5e-2  .83  | 1.2e-1  .67   7.7e-2  .66

Table 4.1: Errors of the approximate solution computed by Algorithm 1 compared to the exact solution of (1.8), where H is as in (4.2) and G(Du(x)) = ‖Du(x)‖_p. The variables d, m, and N are the dimension, the number of nodes in each dimension, and the total number of nodes, respectively. The other variables are the spacing h between grid nodes, the L∞-error e_∞, the L∞ convergence rate r_∞^h, the L1-error e_1, and the L1 convergence rate r_1^h.
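The rates r^h in Table 4.1 are observed orders of convergence between successive grids. A sketch of how such rates can be computed is below; the values it prints differ slightly from the tabulated rates because the tabulated errors are rounded.

```python
import math

def observed_rates(h, e):
    """Observed convergence rates between successive (h_i, e_i) pairs:
    r_i = log(e_{i-1}/e_i) / log(h_{i-1}/h_i)."""
    return [math.log(e[i - 1] / e[i]) / math.log(h[i - 1] / h[i])
            for i in range(1, len(e))]

# L1 errors of the d = 2, p = 1 study in Table 4.1:
h = [2.0e-1, 1.0e-1, 5.0e-2, 2.5e-2]
e = [6.3e-2, 3.7e-2, 2.0e-2, 1.1e-2]
print(observed_rates(h, e))   # roughly [0.77, 0.89, 0.86], close to the table
```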
4.6.2 Asymmetric Anisotropic Problem

For this anisotropic problem, H is as in (4.2), where G is defined by (4.5) (see Figure 4.3(b)). The four rectangular regions shown in black in Figure 4.5 are obstacles, where instead H(x, q) = ‖q‖_2 − c with c ≫ 1, making it extremely costly for the system to traverse such states, so that they will not be on any optimal path from a non-obstacle state to the goal. In the Update function, we analytically computed the solution to (4.8) using the equations for updating from a single simplex given in Section 4.5. The number of nodes in each dimension is m = 1281. We plot the contours of u computed by Algorithm 1 in Figure 4.5. Note the asymmetric contours where the characteristics bend through gaps. The relationship between the shape of the contours of G in Figure 4.3(b) and those of u is explained by the duality articulated in [3, Proposition 2.7 and Appendix A].

Figure 4.5: Contours of u computed for the anisotropic problem where the Hamiltonian H is as in (4.2) and G is as in (4.5). The black circle at O = (0, 0) indicates ∂Ω and the black rectangles are obstacles. In these regions, u has purposefully not been computed.

4.6.3 Anelliptic Elastic Wave Propagation

We consider elastic wave propagation in transversely isotropic media with a vertical axis of symmetry (VTI media), as is done in [28]. In particular, we wish to find the arrival times of qP-waves propagating in two dimensions from a point source at the origin O. We solve the anisotropic HJ PDE given by defining the Hamiltonian

    H(q) = (1/2)(q_1² + q_2²) { (a + l) q_1² + (c + l) q_2²
           + sqrt( [(a − l) q_1² − (c − l) q_2²]² + 4 (f + l)² q_1² q_2² ) } − 1.    (4.21)

This Hamiltonian is derived from the anisotropic Eikonal equation and the exact qP-wave phase velocity equation in [28]. The parameters a = 14.47, l = 2.28, c = 9.57, and f = 4.51 are taken from [28].

The Hamiltonian H and the approximate solution u resulting from Algorithm 1 are shown in Figure 4.6. We have not shown analytically that (4.21) satisfies strict one-sided monotonicity for some range of parameters. However, the level sets of H shown in Figure 4.6(a) indicate that H is strictly one-sided monotone for the given parameters. Furthermore, the level sets of H indicate that H is convex, and computation of the derivative of H using the symbolic mathematics program Maple shows that H satisfies Osher's criterion for the given parameters. As a result, the analysis in this chapter can be applied to the problem and Algorithm 1 can be used to compute the solution. Theorem 4.15 tells us that we can determine the solution to (4.8) uniquely. In the Update function, we used the bisection root-finding method to solve (4.8) numerically.

We used a grid of size 201 × 201. We computed the maximum relative error of u to be 0.0076 when compared to the travel time computed with the group-velocity approximation for qP-waves presented in [28]. In turn, the group-velocity approximation is claimed to have a maximum error of about 0.003 when compared to the true solution.

Figure 4.6: Using Algorithm 1 to compute travel times of qP-waves in two-dimensional VTI media. (a) Contours of the Hamiltonian H given in (4.21). (b) Contours of the approximate solution u computed by Algorithm 1.
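Since (4.8) is continuous and nondecreasing in µ and has a unique root (Theorem 4.15), the root can be bracketed and bisected. A minimal sketch is below; the bracket-growing loop and tolerance are our own choices, not from the thesis, and the lower bound `lo` would be taken as the largest neighbor value in the simplex.

```python
def bisection_update(H, lo, tol=1e-10):
    """Solve H(mu) = 0 for the unique root of a continuous,
    nondecreasing function H, given a lower bound `lo` with H(lo) <= 0."""
    hi, step = lo, 1.0
    while H(hi) < 0.0:            # grow the bracket until H changes sign
        hi += step
        step *= 2.0
    while hi - lo > tol:          # standard bisection on [lo, hi]
        mid = 0.5 * (lo + hi)
        if H(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```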
4.6.4 Two Robots

We consider the two-robot coordinated navigation problem illustrated in Figure 1.1. The circular robots are free to move independently in a 2-dimensional plane but may not collide with each other or with the obstacles (black region). The dark-colored robot, called α, has a maximum speed of f^α. The light-colored robot, called β, has a maximum speed of f^β. The robots attempt to achieve a joint goal state in minimal time from any initial state in the domain without incurring collisions.

Let the state of robot α be (x_1^α, x_2^α) ∈ R² and the state of robot β be (x_1^β, x_2^β) ∈ R², so that the combined state of the two robots is x = (x_1^α, x_2^α, x_1^β, x_2^β) ∈ R⁴. The corresponding direction of travel for the combined system is a = (a_1^α, a_2^α, a_1^β, a_2^β) ∈ R⁴, where ‖a‖_2 = 1. The speed function in non-collision states is

    f(x, a) = f(a) = min( f^α / ‖(a_1^α, a_2^α)‖_2 , f^β / ‖(a_1^β, a_2^β)‖_2 ).

The resulting speed profile is

    A_f = { a f(a) | ‖a‖ ≤ 1 } = { a f(a) | F(a f(a)) ≤ 1 },

where

    F(y) = max( ‖(y_1^α, y_2^α)‖_2 / f^α , ‖(y_1^β, y_2^β)‖_2 / f^β ).

Proposition 2.7 of [3] states that we can use the dual of F to obtain

    H(x, q) = G(x, q) − 1 = ‖( f^α ‖(q_1^α, q_2^α)‖_2 , f^β ‖(q_1^β, q_2^β)‖_2 )‖_1 − 1,    (4.22)

where q = (q_1^α, q_2^α, q_1^β, q_2^β). For collision states we set f(x, a) = f(a) = ε min(f^α, f^β), with ε ≪ 1, making it extremely slow for the two-robot system to traverse such states, so that they will not be on any optimal path to the joint goal.

We can compute u using Algorithm 1, since G is a mixed p-norm and thus H satisfies Assumptions 4.3 to 4.6 (see Section 4.2.2). The domain Ω is discretized using a uniform orthogonal grid of (81 × 21)² nodes. The resulting discrete equation (4.8) is quartic in µ, so it is difficult to solve for µ analytically. We implement the Update function using the bisection method to find the unique solution to (4.8) numerically.

Once we have determined u using Algorithm 1, we approximately solve the following ODE to determine an optimal collision-free trajectory from any initial state to the goal:

    dx/dt = argmax_{a ∈ A} [(−q · a) f(x, a)] = −( w_α q_1^α, w_α q_2^α, w_β q_1^β, w_β q_2^β ),    (4.23)

where q = Du(x),

    w_α = f^α / ‖(q_1^α, q_2^α)‖_2,  and  w_β = f^β / ‖(q_1^β, q_2^β)‖_2.

We discretize the ODE (4.23) using forward Euler. The gradient Du(x) is determined by a first-order finite difference scheme. At each time step, each robot moves at its maximum speed in the direction of the relevant components of the negative gradient. If the magnitude of the relevant components of the gradient falls below a small threshold, the robot does not move at all, as is the case for the light-colored robot in Figures 1.1(d) and 1.1(e). In this case, the robot is not on the critical path to achieving the goal, so it is not compelled to move. The optimal trajectories from a single starting condition are shown in Figure 1.1.
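A sketch of the combined speed function and one forward-Euler step of (4.23) for this example is given below. The maximum speeds and gradient threshold are illustrative values, and the gradient computation and goal test are omitted.

```python
import numpy as np

F_ALPHA, F_BETA = 1.0, 0.5         # illustrative maximum speeds
EPS = 1e-3                          # small threshold on gradient magnitude

def speed(a):
    """Combined-system speed f(a) for a 4D unit direction a = (a_alpha, a_beta)."""
    na, nb = np.linalg.norm(a[:2]), np.linalg.norm(a[2:])
    terms = []
    if na > 0: terms.append(F_ALPHA / na)
    if nb > 0: terms.append(F_BETA / nb)
    return min(terms)

def euler_step(x, grad_u, dt):
    """One forward-Euler step of (4.23): each robot moves at its maximum
    speed along the negative of its own components of Du(x)."""
    x = x.copy()
    ga, gb = grad_u[:2], grad_u[2:]
    if np.linalg.norm(ga) > EPS:
        x[:2] -= dt * F_ALPHA * ga / np.linalg.norm(ga)
    if np.linalg.norm(gb) > EPS:
        x[2:] -= dt * F_BETA * gb / np.linalg.norm(gb)
    return x
```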
Chapter 5

OUM with Monotone Node Acceptance for Convex HJ PDEs

The material in this chapter is based on a submitted paper [5]. For the method described in this chapter, we assume H has the form (1.9) and A_f(x) is a closed convex set containing the origin in its interior.

5.1 Introduction

One potential benefit of single-pass methods is that it may be possible to estimate error and refine the grid to achieve a desired solution accuracy as the solution is computed outward from the boundary condition. Because a node value is not computed until the node values on which it depends have already been computed, there is the potential to control error early, before it has been propagated unnecessarily to dependent grid nodes. We are not aware of any existing adaptive single-pass methods for static HJ PDEs that refine the grid to control error on the fly. However, we believe that such an adaptive method would create significantly nonuniform grids for many problems. The OUM in [59, 60] uses the global maximum grid spacing to determine the size of the update stencil of a node, which leads to unreasonably large stencils in refined parts of the grid. Our attempts at modifying that OUM to use local grid spacing for the stencil showed some encouraging experimental evidence, but we have thus far been unable to prove theoretical convergence. Instead, we developed an alternative method for which we can show convergence using a well-known consistency, monotonicity, and stability proof [8] (see Section 3.2). An added benefit of our method is that the discretization of the HJ PDE is δ-causal: the solution value at a grid node depends only on solution values that are smaller by at least δ. This is not true of the OUM in [59, 60].

We present Monotone Acceptance OUM (MAOUM), a two-pass Dijkstra-like method in which node values are accepted in nondecreasing order on a simplicial grid. We contrast MAOUM with the OUM introduced in [59, 60], which solves the same set of static convex HJ PDEs but does not necessarily accept node values monotonically. Because one of the defining features of that method is that a front of nodes with accepted values is maintained and the stencil is formed dynamically using only nodes from this front, we call their method Accepted Front OUM (AFOUM). MAOUM is a two-pass algorithm because the stencils for the discrete equation are precomputed in an initial pass through the nodes of the grid, as opposed to being computed in the same pass as the solution, as in AFOUM. If δ > 0, MAOUM can be modified to create a Dial-like method, for which nodes are sorted into buckets of width δ according to value, and buckets of nodes are accepted in increasing value order [23, 64].

We present three hierarchical tests for the δ-causality of a node value update from a simplex and prove their validity in Section 5.3. We believe the first criterion, δ-Negative-Gradient-Acuteness, is nearly equivalent to the criterion for the applicability of Dijkstra-like and Dial-like methods to anisotropic problems given in [65, Section 4]. However, δ-Negative-Gradient-Acuteness was derived independently and does not require differentiability of the cost function (5.9). We use these tests to define MAOUM, verify that it solves the discretized equation, and analyze its asymptotic complexity in Section 5.4. Numerical examples are included in Section 5.5 to show that the algorithm is convergent, is not much less efficient/accurate than AFOUM on uniform grids, and is significantly more efficient/accurate than AFOUM for some examples with appropriately chosen grid refinement. We also demonstrate that MAOUM can be used to solve practical problems, such as computing the first-arrival time of a seismic wavefront or finding optimal paths for a robot to reach a goal while fighting a strong wind and avoiding obstacles.
5.1.1 Dijkstra-like Methods

We compute an approximate solution u_G on a simplicial grid G. Let s be a simplex with n_s vertices, and let x_i^s be the i-th vertex of s, where 1 ≤ i ≤ n_s. Let S be the set of grid simplices with vertices that are neighboring grid nodes in X. Define

    S(R) = { s ∈ S | x_i^s ∈ R ⊂ R^d for 1 ≤ i ≤ n_s },

the set of grid simplices with all vertices in R. We may specify the number of vertices using a subscript: for example, S_d̃(R) is the set of grid simplices with d̃ vertices. If left unspecified, it is assumed that 1 ≤ n_s ≤ d, thereby excluding simplices with d + 1 vertices.

MAOUM is an extension of Algorithm 1 (p. 35) to handle HJ PDEs with convex anisotropy, while allowing for nonuniformity (defined using either a global or local mesh ratio) in the computational grid. For MAOUM, we use the semi-Lagrangian discretization from [60] to calculate Update(y, s). This discretization generalizes the one for Eikonal equations from [63] to anisotropic HJ PDEs. It is discussed in detail in Section 5.2.

In order for MAOUM to retain the capability of computing node values in a single pass, the update stencil for a node x ∈ X needs to be expanded beyond N(x) to handle many convex anisotropic problems. MAOUM includes an initial pass to compute this stencil for each x, resulting in a two-pass method. However, the asymptotic complexity of MAOUM is only increased from that of Algorithm 1 by a constant factor of (Ψ̂ρ)^d, where d is the dimension, Ψ̂ measures the anisotropy in the equation, and ρ measures the degree of nonuniformity in the grid. This constant factor bounds above the number of nodes in a stencil. Although ρ factors into the asymptotic complexity, experiments demonstrate that grid nonuniformity does not appear to have a large effect on the computational cost of MAOUM in practice.

5.1.2 Related Work

Like AFOUM, our method solves convex anisotropic problems on any simplicial grid. MAOUM does not maintain an accepted front, but it must perform an initial pass to compute the stencils for each node and store them for the second pass that computes the solution. The benefit of this extra initial computation and storage is that MAOUM does not need to determine the stencil based on a global grid spacing parameter, as is done in AFOUM. This results in an algorithm that can take better advantage of local refinements in the grid to perform updates from a smaller stencil.

Sweeping methods have been used to solve static convex Hamilton-Jacobi equations on semi-structured grids [14] and unstructured grids [11, 50]. These methods have the advantage that the update of a node value depends only on immediate neighbors, so they naturally take advantage of local refinement in the grid to compute accurate updates. However, the discretizations used lack an easy way of determining dependencies in the solution, which makes it difficult to know whether a part of the solution has finished being computed before the algorithm has terminated. On the other hand, for single-pass methods like OUMs it is clear when a particular part of the solution is complete, which might allow adaptive error estimation and grid refinement to be done as the solution is computed. For this reason, we develop an OUM that is suited to exploiting local refinement.
5.2 Discretization

MAOUM can be used on any type of simplicial grid, whether structured, semi-structured, or unstructured. We use the semi-Lagrangian discretization and notation from [60]. All norms are Euclidean unless otherwise stated. Let ζ be an n-vector barycentric coordinate, such that

    ∑_{i=1}^n ζ_i = 1    (5.1)

and ζ_i ≥ 0 for 1 ≤ i ≤ n. Let Ξ_n be the set of all possible n-vector barycentric coordinates. For some simplex s, a state x̃_s ∈ s can be parameterized by ζ ∈ Ξ_{n_s}, that is, x̃_s(ζ) = ∑_{i=1}^{n_s} ζ_i x_i^s. Let Ξ̃_n be the set of n-vector barycentric coordinates without the constraint ζ_i ≥ 0 for 1 ≤ i ≤ n. This looser definition of barycentric coordinates is used in Section 5.2.4.

Let x ∈ Ω and define

    τ_s(x, ζ) = ‖x̃_s(ζ) − x‖    (5.2)

and

    a_s(x, ζ) = (x̃_s(ζ) − x) / τ_s(x, ζ).    (5.3)

Restrict x ∉ s for all x, so that τ_s(x, ζ) > 0. We may write τ_s(ζ) = τ_s(x, ζ) and a_s(ζ) = a_s(x, ζ) when x is clear from context. We say that a ∈ A intersects s from x if the ray x + ta, with t > 0, intersects s. Note that a_s(ζ) intersects s from x if and only if ζ_i ≥ 0 for 1 ≤ i ≤ n_s, a condition imposed on ζ above.

We define the numerical Hamiltonian H as follows:

    H(x, S, φ, µ) = max_{s ∈ S} max_{ζ ∈ Ξ_{n_s}} { [ ( µ − ∑_{i=1}^{n_s} ζ_i φ(x_i^s) ) / τ_s(x, ζ) ] f(x, a_s(x, ζ)) − 1 },    (5.4)

where x ∈ Ω, S is a set of simplices s in Ω such that x ∉ s and the vectors x_i^s − x are independent, φ : Ω → R is bounded, and µ ∈ R. Note that the second parameter of H is a set of stencil simplices instead of a set of stencil nodes as in previous chapters. This modification to the framework does not affect the relevance of the proofs of Propositions 3.7 (i.e., convergence) and 3.13 (i.e., solution of the discretized equations). When we are only varying µ, we write H(µ) = H(x, S, φ, µ) for convenience.

Given a simplicial grid G, the discretized Dirichlet problem is to find a function u_G : X → R such that

    H(x, S(x), u, u(x)) = 0,  x ∈ Ω,    (5.5a)
    u(x) = g(x),  x ∈ ∂Ω,    (5.5b)

where S(x), the update simplex set of x, is chosen so that H is consistent and (5.5) is causal, allowing the solution of the discrete equation to be computed efficiently using a Dijkstra-like single-pass method. Section 5.3 discusses criteria on S(x) that guarantee causality. A significant part of Section 5.4 defines and analyzes the algorithm used to generate S(x).

Propositions 5.1 and 5.2 state that the numerical Hamiltonian H is monotone and consistent, which is important for the convergence of u_G to the solution u of the HJ PDE (1.8) as the grid spacing goes to zero [8] (see Section 3.2). A key property for the monotonicity of H is that the ζ_i are constrained to be nonnegative in (5.4). For the consistency of H, it is crucial that the simplices in S(x) encompass all possible directions a ∈ A. Proposition 5.3 expresses µ in H(µ) = 0 explicitly, which is useful for implementing the Update function in Algorithm 2.
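For intuition, a brute-force evaluation of the inner maximization in (5.4) over an edge simplex can be sketched by sampling ζ; a real implementation would optimize over ζ (see Proposition 5.3). The sampling resolution and function names below are purely illustrative.

```python
import numpy as np

def H_edge(x, x1, x2, phi1, phi2, mu, f, samples=101):
    """Brute-force inner max of (5.4) over one edge simplex s = (x1, x2),
    sampling zeta = (t, 1 - t) on [0, 1]; requires x not in s."""
    best = -np.inf
    for t in np.linspace(0.0, 1.0, samples):
        xt = t * x1 + (1.0 - t) * x2                    # \tilde{x}_s(zeta)
        tau = np.linalg.norm(xt - x)                    # (5.2)
        a = (xt - x) / tau                              # (5.3)
        val = (mu - (t * phi1 + (1.0 - t) * phi2)) / tau * f(x, a) - 1.0
        best = max(best, val)
    return best

def H_numerical(x, simplex_data, mu, f):
    """H(x, S, phi, mu): outer max of (5.4) over the update simplex set,
    where simplex_data is a list of (x1, x2, phi1, phi2) tuples."""
    return max(H_edge(x, *s, mu, f) for s in simplex_data)
```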
5.2.1 Monotonicity

H(x, S, φ, µ) is monotone in the values φ(x_i^s), for s ∈ S and 1 ≤ i ≤ n_s. Monotonicity of H requires that if none of the φ(x_i^s) decrease, then H(x, S, φ, µ) does not increase. Monotonicity comes naturally for the semi-Lagrangian form of (5.4). However, it is essential that ζ_i ≥ 0 for 1 ≤ i ≤ n_s: a negative ζ_i could cause an increase in φ(x_i^s) to increase H(x, S, φ, µ). This requirement is equivalent to making a_s(ζ) intersect s from x, and it makes the discrete equation H(x, S, φ, µ) = 0 upwinding in the terminology of [60]. The following proposition is analogous to Property 3.3, but is stated for the formulation of H in (5.4), whose second parameter is a set of simplices instead of a set of nodes.

Proposition 5.1. Let x ∈ Ω. Let φ̌ : Ω → R and φ̂ : Ω → R be functions such that φ̌(x_i^s) ≤ φ̂(x_i^s) for all s ∈ S and 1 ≤ i ≤ n_s. Then H(x, S, φ̌, µ) ≥ H(x, S, φ̂, µ).

Proof. Let s ∈ S and 1 ≤ i ≤ n_s. Since φ̌(x_i^s) ≤ φ̂(x_i^s), ζ_i ≥ 0 for 1 ≤ i ≤ n_s, and summation is monotone, we have

    µ − ∑_{i=1}^{n_s} ζ_i φ̌(x_i^s) ≥ µ − ∑_{i=1}^{n_s} ζ_i φ̂(x_i^s).

Therefore, since f is positive, τ_s is positive, and max is monotone, we have H(x, S, φ̌, µ) ≥ H(x, S, φ̂, µ).

5.2.2 Consistency

For the numerical Hamiltonian H to be consistent with the Hamiltonian H from (1.9), the set of simplices S must encompass all possible action directions in A.

Definition. We say that S is directionally-complete (DC) for x ∈ Ω if for all a ∈ A there exists an s ∈ S such that a intersects s from x.

Proposition 5.2. Let x ∈ Ω, y ∈ Ω, and φ ∈ C_b^∞(Ω). Let S(y) be DC for y and such that for s ∈ S(y) we have y ∉ s. Define

    r̂(y) = max_{s ∈ S(y), 1 ≤ i ≤ n_s} ‖x_i^s − y‖.

Then

    lim_{y → x, ξ → 0, r̂(y) → 0} H(y, S(y), φ + ξ, φ(y) + ξ) = H(x, Dφ(x)).    (5.6)

Proof. By (5.4) we have

    lim_{y → x, ξ → 0, r̂(y) → 0} H(y, S(y), φ + ξ, φ(y) + ξ)
    = lim_{y → x, ξ → 0, r̂(y) → 0} max_{s ∈ S(y)} max_{ζ ∈ Ξ_{n_s}} { [ ( φ(y) + ξ − ∑_{i=1}^{n_s} ζ_i (φ(x_i^s) + ξ) ) / τ_s(y, ζ) ] f(y, a_s(y, ζ)) − 1 }.

Then by (5.1), the smoothness of φ, the continuity of max and f, (5.3), the DC for y of S(y), and (1.9),

    lim_{y → x, ξ → 0, r̂(y) → 0} max_{s ∈ S(y)} max_{ζ ∈ Ξ_{n_s}} { [ ( φ(y) + ξ − ∑_{i=1}^{n_s} ζ_i (φ(x_i^s) + ξ) ) / τ_s(y, ζ) ] f(y, a_s(y, ζ)) − 1 }
    = lim_{y → x, r̂(y) → 0} max_{s ∈ S(y)} max_{ζ ∈ Ξ_{n_s}} { [ ( φ(y) − φ(∑_{i=1}^{n_s} ζ_i x_i^s) + O(r̂(y)²) ) / τ_s(y, ζ) ] f(y, a_s(y, ζ)) − 1 }
    = max_{s ∈ S(x)} max_{ζ ∈ Ξ_{n_s}} [ (−Dφ(x) · a_s(x, ζ)) f(x, a_s(x, ζ)) ] − 1
    = max_{a ∈ A} [ (−Dφ(x) · a) f(x, a) ] − 1
    = H(x, Dφ(x)).

In order to use Proposition 5.2 to prove that Property 3.2 holds, we must show that for all x ∈ Ω and for sequences G_k and x_k ∈ Ω_k such that x_k → x and ĥ_{G_k} → 0, S(x_k) is DC for x_k. We prove this property in Section 5.4.2.

5.2.3 Unique Solution

There exists a unique solution µ = µ̃ to the equation H(µ) = 0. The value µ̃ can be written explicitly in terms of the other quantities in H(µ). This is a convenient form of the discretized equation for determining the solution u(y) at a node y in terms of the solution at its neighbors, as is done in the Update function of Algorithm 2. It is the same form as the semi-Lagrangian update in [60]. The following proposition states that a property analogous to Property 3.8, but for the modified H in (5.4), is satisfied. In addition, it gives the explicit formula for the unique solution and states that an optimal ŝ ∈ S and ζ̂ ∈ Ξ_{n_s} in (5.4) remain optimal in the explicit formula.

Proposition 5.3. The unique solution µ = µ̃ to H(µ) = 0, with H defined by (5.4), is given by

    µ̃ = min_{s ∈ S} min_{ζ ∈ Ξ_{n_s}} { τ_s(ζ) / f(x, a_s(ζ)) + ∑_{i=1}^{n_s} ζ_i φ(x_i^s) }.    (5.7)

Furthermore, if ŝ ∈ S and ζ̂ ∈ Ξ_{n_s} are maximizers in (5.4), then ŝ and ζ̂ are minimizers in (5.7).

Proof. Let s ∈ S and ζ ∈ Ξ_{n_s}. Define the function H_ζ^s : R → R by

    H_ζ^s(µ) = [ ( µ − ∑_{i=1}^{n_s} ζ_i φ(x_i^s) ) / τ_s(ζ) ] f(x, a_s(ζ)) − 1.

The function H_ζ^s(µ) is strictly increasing in µ, since it is linear with positive slope f(x, a_s(ζ))/τ_s(ζ). Furthermore, H_ζ^s(µ) = 0 has a unique solution

    µ = µ̃_ζ^s = τ_s(ζ) / f(x, a_s(ζ)) + ∑_{i=1}^{n_s} ζ_i φ(x_i^s).

Now define the function λ_ζ^s : R → R by

    λ_ζ^s(µ) = [ τ_s(ζ) / f(x, a_s(ζ)) ] H_ζ^s(µ) = µ − ∑_{i=1}^{n_s} ζ_i φ(x_i^s) − τ_s(ζ) / f(x, a_s(ζ)).

The function λ_ζ^s(µ) is also strictly increasing in µ, since it is linear with slope 1. Note that µ = µ̃_ζ^s is also the unique solution to λ_ζ^s(µ) = 0. Because H_ζ^s(µ) and λ_ζ^s(µ) are both increasing and µ̃_ζ^s is the unique solution to both H_ζ^s(µ) = 0 and λ_ζ^s(µ) = 0 for all s ∈ S and ζ ∈ Ξ_{n_s}, the solution µ = µ̃ to

    H(µ) = max_{s ∈ S} max_{ζ ∈ Ξ_{n_s}} H_ζ^s(µ) = 0

must also be the solution to

    max_{s ∈ S} max_{ζ ∈ Ξ_{n_s}} λ_ζ^s(µ) = max_{s ∈ S} max_{ζ ∈ Ξ_{n_s}} { µ − ∑_{i=1}^{n_s} ζ_i φ(x_i^s) − τ_s(ζ) / f(x, a_s(ζ)) } = 0.

By rearranging the terms and negating both sides of this equation, we get the formula (5.7) for the unique solution µ = µ̃. A maximizer ŝ ∈ S and ζ̂ ∈ Ξ_{n_s} of max_{s ∈ S} max_{ζ ∈ Ξ_{n_s}} H_ζ^s(µ) is also a maximizer of max_{s ∈ S} max_{ζ ∈ Ξ_{n_s}} λ_ζ^s(µ), which is a minimizer of the right-hand side of (5.7).
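In practice, the Update function evaluates (5.7) simplex by simplex. A sketch for an edge simplex in 2D is below; because the objective is convex in ζ (see Lemma 5.5 below), a unimodal line search suffices. The choice of golden-section search and the iteration count are ours, not from the thesis.

```python
import numpy as np

def update_from_edge(x, x1, x2, v1, v2, f, iters=60):
    """Evaluate (5.7) on an edge simplex: minimize over zeta = (t, 1-t)
    eta(t) = tau / f(x, a) + t*v1 + (1-t)*v2, convex in t (Lemma 5.5)."""
    def eta(t):
        xt = t * x1 + (1.0 - t) * x2
        tau = np.linalg.norm(xt - x)
        a = (xt - x) / tau
        return tau / f(x, a) + t * v1 + (1.0 - t) * v2

    gr = (np.sqrt(5.0) - 1.0) / 2.0      # golden-ratio factor
    lo, hi = 0.0, 1.0
    c, d = hi - gr * (hi - lo), lo + gr * (hi - lo)
    fc, fd = eta(c), eta(d)
    for _ in range(iters):
        if fc < fd:                       # minimum lies in [lo, d]
            hi, d, fd = d, c, fc
            c = hi - gr * (hi - lo)
            fc = eta(c)
        else:                             # minimum lies in [c, hi]
            lo, c, fc = c, d, fd
            d = lo + gr * (hi - lo)
            fd = eta(d)
    t = 0.5 * (lo + hi)
    return min(eta(t), eta(0.0), eta(1.0))   # include single-vertex updates

# The node update is then the minimum over the update simplex set S(x).
```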
Remark 5.4. This proposition still applies in the case where S contains just a single simplex, and also in the case where Ξ_{n_s} is replaced by Ξ̃_{n_s} (i.e., the constraint ζ_i ≥ 0 is removed).

5.2.4 Equivalence of Discretizations

We examine the equivalence between the first-order semi-Lagrangian discretization (5.7) and an Eulerian discretization [59]. This equivalence is demonstrated in Proposition 5.7. The proof uses a simplified version of the equivalence, which is stated in Proposition 5.6. Proposition 5.7 is used in the proof of Theorem 5.8, which shows that the discrete equation is causal as long as negative-gradient-acuteness is satisfied. It can also be used to extend the main results in this chapter to the equivalent Eulerian discretization. The analysis in this section is based on that in the appendix of [59].

Define the function η_φ^s : Ξ_{n_s} → R, which is the objective function in (5.7):

    η_φ^s(x, ζ) = τ_s(x, ζ) / f(x, a_s(x, ζ)) + ∑_{i=1}^{n_s} ζ_i φ(x_i^s).

We typically deal with a fixed x and write η_φ^s(ζ) = η_φ^s(x, ζ). Define the restriction of (5.7) to a single simplex s:

    µ̃_s = min_{ζ ∈ Ξ_{n_s}} η_φ^s(ζ).    (5.8)

The solution µ = µ̃ to H(x, S, φ, µ) = 0 is given by µ̃ = min_{s ∈ S} µ̃_s.

We show that η_φ^s is convex in ζ. This lemma is used in the proof of Proposition 5.7. Convexity is also useful for applying convex numerical optimization algorithms to solve (5.7).

Lemma 5.5. The function η_φ^s is convex.

Proof. Define a function c : R^d × R^d → R:

    c(x, y) = 0,                      if y = 0;
    c(x, y) = ‖y‖ / f(x, y/‖y‖),      otherwise.    (5.9)

Note that A_f(x) = {y | c(x, y) ≤ 1} and that c is homogeneous in the second parameter: c(x, ty) = t c(x, y) for t ≥ 0. Thus, by the convexity of the set A_f(x), c(x, y) is convex in y. We have

    η_φ^s(ζ) = τ_s(ζ) / f(x, a_s(ζ)) + ∑_{i=1}^{n_s} ζ_i φ(x_i^s)
             = c(x, τ_s(ζ) a_s(ζ)) + ∑_{i=1}^{n_s} ζ_i φ(x_i^s)
             = c(x, x̃_s(ζ) − x) + ∑_{i=1}^{n_s} ζ_i φ(x_i^s).    (5.10)

Since c(x, y) is convex in y, x̃_s is linear in ζ, and ∑_{i=1}^{n_s} ζ_i φ(x_i^s) is linear in ζ, η_φ^s is convex.

Proposition 5.6. Let x ∈ Ω, s ∈ S, and n_s = d. Conditions (a) and (b) on µ̃_s ∈ R and ζ̂ ∈ Ξ_{n_s} are equivalent:

(a)
    max_{ζ ∈ Ξ̃_d} { [ ( µ̃_s − ∑_{i=1}^d ζ_i φ(x_i^s) ) / τ_s(ζ) ] f(x, a_s(ζ)) }
    = [ ( µ̃_s − ∑_{i=1}^d ζ̂_i φ(x_i^s) ) / τ_s(ζ̂) ] f(x, a_s(ζ̂)) = 1,    (5.11)

(b) there exists q ∈ R^d satisfying

    max_{a ∈ A} (−q · a) f(x, a) = (−q · a_s(ζ̂)) f(x, a_s(ζ̂)) = 1    (5.12)

and

    φ(x_i^s) = µ̃_s + (x_i^s − x) · q,    (5.13)

for i such that 1 ≤ i ≤ n_s.

Proof. Since the vectors {x_i^s − x} are independent, we define q ∈ R^d by

    q = [ (x_1^s − x)^T ; (x_2^s − x)^T ; ... ; (x_d^s − x)^T ]^{−1} [ φ(x_1^s) − µ̃_s ; φ(x_2^s) − µ̃_s ; ... ; φ(x_d^s) − µ̃_s ],

that is, q is the unique vector for which (5.13) holds for 1 ≤ i ≤ d. We prove that conditions (a) and (b) on µ̃_s ∈ R and ζ̂ ∈ Ξ_{n_s} are equivalent by proving that (5.11) is equivalent to (5.12). By (5.13), (5.1), and (5.3),

    ∑_{i=1}^d ζ_i φ(x_i^s) = ∑_{i=1}^d ζ_i [ µ̃_s + (x_i^s − x) · q ] = µ̃_s + ( ∑_{i=1}^d ζ_i x_i^s − x ) · q = µ̃_s + τ_s(ζ) a_s(ζ) · q,

for all ζ ∈ Ξ̃_d. This equation can be rearranged and multiplied on both sides by f(x, a_s(ζ)) to obtain

    [ ( µ̃_s − ∑_{i=1}^d ζ_i φ(x_i^s) ) / τ_s(ζ) ] f(x, a_s(ζ)) = (−q · a_s(ζ)) f(x, a_s(ζ)),

for all ζ ∈ Ξ̃_d. Thus, (5.11) is equivalent to

    max_{ζ ∈ Ξ̃_d} [ (−q · a_s(ζ)) f(x, a_s(ζ)) ] = (−q · a_s(ζ̂)) f(x, a_s(ζ̂)) = 1.    (5.14)

We now prove that (5.14) is equivalent to (5.12). We first show that (5.14) implies (5.12). Note that ζ̂ ∈ Ξ_d is strictly feasible in Ξ̃_d. In other words, ζ̂ is a (not necessarily strict) local maximum of (−q · a_s(ζ)) f(x, a_s(ζ)).
Since a_s(ζ) is a continuous mapping, a_s(ζ̂) is a local maximum of (−q · a) f(x, a). Because A_f(x) is convex, a local maximum â of (−q · a) f(x, a) is a global maximum over all a ∈ A. Thus, we have (5.12). Conversely, (5.12) implies (5.14), since A ⊇ {a_s(ζ) | ζ ∈ Ξ̃_d} and ζ̂ ∈ Ξ_d ⊂ Ξ̃_d. Therefore, (5.12) is equivalent to (5.14), which is equivalent to (5.11).

Proposition 5.6 shows in a simplified form the equivalence between the semi-Lagrangian and Eulerian discretizations. Note that there are two ways to find the unique µ̃_s. The first, the semi-Lagrangian discretization, is to solve (5.11), which can be done by solving the unconstrained convex optimization problem

    µ̃_s = min_{ζ ∈ Ξ̃_{n_s}} η_φ^s(ζ).

The second, the Eulerian discretization, is to find µ̃_s by solving the nonlinear system of equations (5.13) and (5.12) for q and µ̃_s.

Unfortunately, there may not be a ζ̂ ∈ Ξ_d that satisfies (5.11) or, equivalently, condition (b) of Proposition 5.6. Also, the simplex s may have 1 ≤ n_s < d. Proposition 5.7 considers these more general cases. In the statement of Proposition 5.7, note that the maximum in (5.15) is over the set Ξ_{n_s} of barycentric coordinates within the simplex, in contrast to the maximum of (5.11), which is over the set Ξ̃_d.

Proposition 5.7. Let x ∈ Ω and s ∈ S. Conditions (a) and (b) on µ̃_s ∈ R and ζ̂ ∈ Ξ_{n_s} are equivalent:

(a)
    max_{ζ ∈ Ξ_{n_s}} { [ ( µ̃_s − ∑_i ζ_i φ(x_i^s) ) / τ_s(ζ) ] f(x, a_s(ζ)) }
    = [ ( µ̃_s − ∑_i ζ̂_i φ(x_i^s) ) / τ_s(ζ̂) ] f(x, a_s(ζ̂)) = 1,    (5.15)

(b) there exists q ∈ R^d satisfying (5.12) and

    φ(x_i^s) = µ̃_s + (x_i^s − x) · q,  if ζ̂_i > 0;
    φ(x_i^s) ≥ µ̃_s + (x_i^s − x) · q,  if ζ̂_i = 0,    (5.16)

for i such that 1 ≤ i ≤ n_s.

We can make the proof of Proposition 5.7 below fit the form of the proof of Proposition 5.6 by making two adjustments. The first adjustment is to add d − n_s phantom nodes x_i^s to s, such that the vectors x_i^s − x remain independent, but to set ζ̂_i = 0 for these nodes. The second adjustment is to replace φ with a function φ̃ such that φ̃(x_i^s) = φ(x_i^s) for i such that ζ̂_i > 0 and ζ̂ ∈ argmin_{ζ ∈ Ξ̃_d} η_φ̃^s(ζ), where ζ̂ ∈ Ξ_{n_s} is the optimizer in (5.8). We let q ∈ R^d satisfy (5.13) with φ̃ in place of φ, and use Proposition 5.6 to complete the proof.

Proof. Without loss of generality, assume n_s = d. If n_s < d, without affecting the proof result, we add d − n_s phantom nodes x_i^s to s such that the vectors {x_i^s − x} remain linearly independent. For i such that x_i^s is a phantom node, we set ζ̂_i = 0 and φ(x_i^s) = ∞. By (5.1), there exists i such that 1 ≤ i ≤ n_s and ζ̂_i > 0. Without loss of generality, we assume ζ̂_1 > 0.

First, we prove that condition (a) implies condition (b). Assume µ̃_s ∈ R and ζ̂ ∈ Ξ_{n_s} satisfy condition (a). We seek a function φ̃ such that

    φ̃(x_i^s) = φ(x_i^s),  if ζ̂_i > 0;
    φ̃(x_i^s) ≤ φ(x_i^s),  if ζ̂_i = 0,    (5.17)

and

    ζ̂ ∈ argmin_{ζ ∈ Ξ̃_d} η_φ̃^s(ζ).    (5.18)

By (5.10), because c(x, x̃_s(ζ) − x) is convex in ζ and ∑_{i=1}^d ζ_i φ̃(x_i^s) is linear in ζ, such a function φ̃ exists. For 2 ≤ i ≤ d, define the vector ν^i = (ν_1^i, ..., ν_d^i), where ν_1^i = −1, ν_i^i = 1, and ν_j^i = 0 for j ∉ {1, i}. Let ∂η_φ^s(ζ)/∂ν^i be the directional subdifferential set of η_φ^s at ζ in direction ν^i. In order for (5.18) to be satisfied, φ̃ must be defined such that

    0 ∈ ∂η_φ̃^s(ζ̂)/∂ν^i,    (5.19)

for 2 ≤ i ≤ d.

By (5.15) and Proposition 5.3, we have ζ̂ ∈ argmin_{ζ ∈ Ξ_d} η_φ^s(ζ). By Lemma 5.5, η_φ^s is convex. If ζ̂_i > 0, then ζ̂ is strictly feasible in Ξ_d in both directions ν^i and −ν^i. On the other hand, if ζ̂_i = 0, then ζ̂ is strictly feasible in Ξ_d in direction ν^i, but ζ̂ is constrained in direction −ν^i.
In either case, the convexity of η_φ^s, the optimality of ζ̂, and the strict feasibility in Ξ_d in direction ν^i imply that max(∂η_φ^s(ζ̂)/∂ν^i) ≥ 0. If ζ̂ is strictly feasible in Ξ_d in direction −ν^i, it must be that min(∂η_φ^s(ζ̂)/∂ν^i) ≤ 0. However, if ζ̂ is constrained in direction −ν^i, it is possible that min(∂η_φ^s(ζ̂)/∂ν^i) > 0. In this case, we can define φ̃(x_i^s) < φ(x_i^s), so that min(∂η_φ̃^s(ζ̂)/∂ν^i) = 0. With these observations and the definition of η_φ^s in mind, we define the function φ̃ : {x_i^s | 1 ≤ i ≤ d} → R:

    φ̃(x_i^s) = φ(x_i^s),  if min(∂η_φ^s(ζ̂)/∂ν^i) ≤ 0;
    φ̃(x_i^s) = φ(x_1^s) − ‖x_i^s − x_1^s‖ min(∂η_φ^s(ζ̂)/∂ν^i),  otherwise.    (5.20)

By this definition, we have (5.19) for 2 ≤ i ≤ d, which implies (5.18). Furthermore, (5.17) is satisfied by the definition of φ̃. By (5.18), (5.17), (5.15), and Proposition 5.3, we have

    min_{ζ ∈ Ξ̃_d} η_φ̃^s(ζ) = η_φ̃^s(ζ̂) = η_φ^s(ζ̂) = min_{ζ ∈ Ξ_d} η_φ^s(ζ) = µ̃_s.

Again by Proposition 5.3, we have

    max_{ζ ∈ Ξ̃_d} { [ ( µ̃_s − ∑_{i=1}^d ζ_i φ̃(x_i^s) ) / τ_s(ζ) ] f(x, a_s(ζ)) }
    = [ ( µ̃_s − ∑_{i=1}^d ζ̂_i φ̃(x_i^s) ) / τ_s(ζ̂) ] f(x, a_s(ζ̂)) = 1.

Since the vectors {x_i^s − x} are independent, we define q satisfying

    φ̃(x_i^s) = µ̃_s + (x_i^s − x) · q,

for i such that 1 ≤ i ≤ n_s. By (5.17), we have (5.16). Moreover, by Proposition 5.6, we have (5.12). Therefore, condition (b) holds.

Second, we prove that condition (b) implies condition (a). Assume µ̃_s ∈ R and ζ̂ ∈ Ξ_{n_s} satisfy condition (b). Let q ∈ R^d satisfy (5.12) and (5.16). Define φ̃ : {x_i^s | 1 ≤ i ≤ d} → R:

    φ̃(x_i^s) = µ̃_s + (x_i^s − x) · q.

By (5.16), we have (5.17). By (5.12) and Proposition 5.6, we have

    max_{ζ ∈ Ξ̃_d} { [ ( µ̃_s − ∑_{i=1}^d ζ_i φ̃(x_i^s) ) / τ_s(ζ) ] f(x, a_s(ζ)) }
    = [ ( µ̃_s − ∑_{i=1}^d ζ̂_i φ̃(x_i^s) ) / τ_s(ζ̂) ] f(x, a_s(ζ̂)) = 1.

Therefore, by (5.17) and the fact that ζ̂ ∈ Ξ_{n_s}, we have (5.15).

5.3 Causality

Causality means that if the solution µ = µ̃ to H(x, S, φ, µ) = 0 depends on the value φ(x_i^s), then it must be that µ̃ > φ(x_i^s). Causality allows Dijkstra-like algorithms to be used to compute the solution to (5.5) in a single pass through the nodes x ∈ X in order of increasing value u(x). For isotropic problems, causality is satisfied if the direction vectors (x_i^s − x)/‖x_i^s − x‖ for 1 ≤ i ≤ n_s, when considered pairwise, form non-obtuse angles [58]. Negative-gradient-acuteness (NGA) is a similar property for more general anisotropic problems.

Let δ ≥ 0. δ-causality means that if the solution µ = µ̃ to H(x, S, φ, µ) = 0 depends on the value φ(x_i^s), then it must be that µ̃ > φ(x_i^s) + δ. δ-causality allows Dial-like algorithms with buckets of width δ to be used to compute the solution to (5.5) in a single pass through the nodes x ∈ X in order of increasing bucket value.

Definition. Let ζ̌^s be a minimizer in (5.8). We say that (5.8) is δ-causal for x ∈ Ω and s ∈ S if ζ̌_i^s > 0 implies µ̃_s − φ(x_i^s) > δ, for 1 ≤ i ≤ n_s. When δ = 0, we may simply say that (5.8) is causal for x and s.

In Section 5.3.1 we define what it means for a simplex to be δ-NGA and prove that δ-NGA implies δ-causality of the discrete equation. A criterion for the δ-causality of (3.1) was presented in [65]. We believe δ-NGA is equivalent to the criterion in [65] if the function c defined in (5.9) is differentiable in the second parameter y everywhere but at the origin. In Section 5.3.2 we define another property on a simplex, δ-anisotropy-angle-boundedness (δ-AAB), and show that it implies δ-NGA. δ-AAB is more specific than δ-NGA, but easy to implement in Algorithm 2. δ-NGA is more general and more likely to have application beyond Algorithm 2. In Section 5.3.3 we define one last property on a simplex, called distance-ratio-boundedness (DRB).
When δ = 0, DRB implies δ-AAB. DRB is the most specific condition, and we use it to prove the correctness of Algorithm 3 in Theorem 5.17.

The properties δ-NGA and δ-AAB allow for general δ ≥ 0, so that Algorithm 2 can easily be converted into a Dial-like method. However, we set δ = 0 for the description and discussion of the Dijkstra-like Algorithm 2 in Section 5.4, and we test only the Dijkstra-like algorithm in Section 5.5. The property DRB is not generalized to δ ≥ 0, because it is used to prove the correctness of just the Dijkstra-like algorithm.

5.3.1 Negative-gradient-acuteness

We show that if S is δ-negative-gradient-acute, the discrete equation H(x, S, φ, µ) = 0 is δ-causal. Figure 5.1 is a geometric aid for understanding the definition of, and proofs involving, δ-negative-gradient-acuteness.

Definition. We say that s ∈ S is δ-negative-gradient-acute (δ-NGA) for x ∈ Ω if for all q ∈ R^d and ζ̂ ∈ Ξ_{n_s} such that (5.12) holds, we have (x_i^s − x) · (−q) > δ for i such that ζ̂_i > 0. We say that S is δ-NGA for x if all s ∈ S are δ-NGA for x. When δ = 0, we may simply say s (or S) is NGA for x.

Theorem 5.8. Let s ∈ S be δ-NGA for x ∈ Ω. Let (5.8) hold and let ζ̌ be the minimizer in (5.8). Then (5.8) is δ-causal for x and s.

Proof. Let ζ̂ = ζ̌. By Proposition 5.3, (5.15) holds. By Proposition 5.7, there exists q ∈ R^d such that (5.12) and (5.16) hold. Since s ∈ S is δ-NGA for x, we have (x_i^s − x) · (−q) > δ for i such that ζ̂_i > 0. Therefore, by (5.16),

    µ̃_s > µ̃_s + (x_i^s − x) · q + δ = φ(x_i^s) + δ,

for i such that ζ̂_i > 0.

Figure 5.1: Depiction of the symbols used in the definition of δ-NGA and the proof of Theorem 5.8.

5.3.2 Anisotropy-angle-boundedness

A less general but more concrete way of ensuring causality of the discrete equation H(x, S, φ, µ) = 0 is to bound the maximum angle between any two direction vectors in A that intersect a simplex s ∈ S from x. We show that δ-AAB of a simplex s implies δ-NGA, which in turn implies δ-causality. δ-AAB is easy to implement, so we use it in Algorithm 2. Figure 5.2 is a geometric aid for understanding the definition of, and proofs involving, δ-anisotropy-angle-boundedness.

Define f̌(x) = min_{a ∈ A} f(x, a) and f̂(x) = max_{a ∈ A} f(x, a). Let Υ(x) = f̂(x)/f̌(x) be the local anisotropy coefficient. Note that 0 < f̌(x) ≤ f̂(x) < ∞, so 1 ≤ Υ(x) < ∞. Denote a_i^s = (x_i^s − x)/‖x_i^s − x‖. Define α_{i,j}^s to be the angle between a_i^s and a_j^s. Let α̂_s = max_{i,j} α_{i,j}^s. Let r_s(x) be the minimum distance between x and any vertex of s:

    r_s(x) = min_i ‖x_i^s − x‖.    (5.21)

Definition. We say that s ∈ S is δ-anisotropy-angle-bounded (δ-AAB) for x ∈ Ω if δ f̂(x)/r_s(x) ≤ 1 and

    α̂_s < arccos(δ f̂(x)/r_s(x)) − arccos(1/Υ(x)).    (5.22)

We say that S is δ-AAB for x if all s ∈ S are δ-AAB for x. When δ = 0, we may simply say s (or S) is AAB for x.

Figure 5.2: Depiction of the symbols used in the definition of δ-AAB and the proof of Theorem 5.9.

Theorem 5.9. If s ∈ S is δ-AAB for x ∈ Ω, then s is δ-NGA for x.

The following proof builds on analysis done in [60, Section 3.4] that bounds the angle between the optimal action and the negative gradient of the viscosity solution u of (1.8).

Proof. Let q ∈ R^d be such that max_{a ∈ A} (−q · a) f(x, a) = 1, and let â ∈ argmax_{a ∈ A} (−q · a) f(x, a) be such that â intersects s from x. We have

    (−q · â) f(x, â) ≥ ( −q · (−q/‖−q‖) ) f(x, −q/‖−q‖) ≥ ‖q‖ f̌(x).

Let β be the angle between −q and â. Since ‖â‖ ≤ 1, we have

    cos(β) = (−q · â) / (‖−q‖ ‖â‖) ≥ f̌(x) / f(x, â) ≥ 1/Υ(x).    (5.23)

Because s is δ-AAB for x and, by (5.23), β ≤ arccos(1/Υ(x)), we have

    α̂_s + β < arccos(δ f̂(x)/r_s(x)).    (5.24)

Let 1 ≤ i ≤ n_s.
Let γ_i be the angle between â and a_i^s. Since â intersects s from x, we have γ_i ≤ α̂_s. Let θ_i be the angle between −q and a_i^s. By (5.24), we have

    θ_i ≤ γ_i + β ≤ α̂_s + β < arccos(δ f̂(x)/r_s(x)).    (5.25)

Since ‖â‖ ≤ 1, we have

    ‖q‖ f̂(x) = ( −q · (−q/‖−q‖) ) f̂(x) ≥ (−q · â) f(x, â) = 1,

and by (5.21) we have

    0 ≤ δ / (‖x_i^s − x‖ ‖q‖) ≤ δ f̂(x)/r_s(x) ≤ 1,

where the final inequality holds because s is δ-AAB. By this inequality and (5.25), it follows that

    cos(θ_i) > δ f̂(x)/r_s(x) ≥ δ / (‖x_i^s − x‖ ‖q‖).

Consequently, we have

    (x_i^s − x) · (−q) = ‖x_i^s − x‖ ‖−q‖ cos(θ_i) > δ,

for 1 ≤ i ≤ n_s. Therefore, s is δ-NGA.

We now describe a way to increase the upper bound on α̂_s in the definition of δ-AAB. This modification may allow more simplices to satisfy the definition, which may in turn allow us to find a DC and δ-NGA set of update simplices S(x) occupying a smaller region around x, reducing the truncation error. First we define some simplex-specific notation. Let f̌_s(x) = min_{a ∈ Ǎ_s} f(x, a), where

    Ǎ_s = { q ∈ A | argmax_{a ∈ A} (−q · a) f(x, a) intersects s from x }.

Let f̂_s(x) = max_{a ∈ Â_s} f(x, a), where Â_s = { a ∈ A | a intersects s from x }. Then let Υ_s(x) = f̂_s(x)/f̌_s(x) be a simplex-specific anisotropy coefficient. We have f̌_s(x) ≥ f̌(x), since Ǎ_s ⊂ A, and f̂_s(x) ≤ f̂(x), since Â_s ⊂ A. It follows that Υ_s(x) = f̂_s(x)/f̌_s(x) ≤ f̂(x)/f̌(x) = Υ(x). If it is possible to compute f̂_s(x) and f̌_s(x), we can modify the definition of δ-AAB to use the loosened restriction δ f̂_s(x)/r_s(x) ≤ 1 and

    α̂_s < arccos(δ f̂_s(x)/r_s(x)) − arccos(1/Υ_s(x)).

Theorem 5.9 still holds.

The definition of δ-AAB can be simplified if we restrict the problem in several ways. These restrictions serve to draw connections to previous work. Also, the restriction δ = 0 is used in the implementation of Algorithm 2. If we take Υ(x) = 1, (5.22) becomes α̂_s < arccos(δ f̂(x)/r_s(x)) or, equivalently, cos(α̂_s) > δ f̂(x)/r_s(x). This resembles a formula for the optimal bucket width δ for a Dial-like algorithm to solve the Eikonal equation derived in [65]. On the other hand, if we take δ = 0, (5.22) becomes α̂_s < arcsin(1/Υ(x)). We use this condition in the Dijkstra-like Algorithm 2. Finally, if we take both Υ(x) = 1 and δ = 0, (5.22) becomes α̂_s < π/2. In the appendix of [60], it is shown that the slightly looser condition α̂_s ≤ π/2 is sufficient for causality of (5.8).
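The δ = 0 test used in Algorithm 2 checks α̂_s < arcsin(1/Υ(x)) directly from vertex geometry. A sketch is below; Υ is assumed to be supplied by the problem, and the names are illustrative.

```python
import numpy as np

def is_AAB(x, verts, upsilon):
    """Check the delta = 0 anisotropy-angle-bounded condition (5.22):
    the maximum pairwise angle between the direction vectors from x to
    the simplex vertices must be less than arcsin(1/Upsilon(x))."""
    dirs = [(v - x) / np.linalg.norm(v - x) for v in verts]   # a_i^s
    alpha_max = 0.0
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            c = np.clip(np.dot(dirs[i], dirs[j]), -1.0, 1.0)
            alpha_max = max(alpha_max, np.arccos(c))
    return alpha_max < np.arcsin(1.0 / upsilon)
```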
5.3.3 Distance-ratio-boundedness

If the ratio of the minimum distance between x and any node in simplex s to the maximum distance between nodes in s is large enough, then s ∈ S must be AAB for x. Proposition 5.11 provides a lower bound for this ratio that is sufficient for AAB. We use DRB in the proof of correctness of Algorithm 3 in the case δ = 0. We do not parameterize DRB by δ, because it is difficult to determine a simple and tight lower bound on the ratio for general nonnegative δ. Figure 5.3 is a geometric aid for understanding the definition of, and proofs involving, distance-ratio-boundedness.

Figure 5.3: Depiction of the symbols used in the definition of DRB and the proof of Lemma 5.10.

Let ĥ_s be the maximum grid edge distance in s:

    ĥ_s = max_{i,j} ‖x_i^s − x_j^s‖.    (5.26)

Lemma 5.10. Let ĥ_s/(2 r_s(x)) ≤ 1. Then α̂_s ≤ 2 arcsin(ĥ_s/(2 r_s(x))).

Proof. Let i, j be such that 1 ≤ i ≤ n_s, 1 ≤ j ≤ n_s, and i ≠ j. Let b = min{‖x_i^s − x‖, ‖x_j^s − x‖}. Form an isosceles triangle with apex A = x and the other two vertices B = x + b a_i^s and C = x + b a_j^s. We bound the length of the base above:

    ‖B − C‖ ≤ ‖x_i^s − x_j^s‖ ≤ ĥ_s.

By (5.21) we bound the length of either side below:

    b = ‖A − B‖ = ‖A − C‖ ≥ r_s(x).

We split the isosceles triangle ABC in half to obtain a right-angle triangle with vertices A = x, B = x + b a_i^s, and D = x + b(a_i^s + a_j^s)/2. We have

    sin(α_{i,j}^s / 2) = ‖B − D‖ / b = ‖B − C‖ / (2b) ≤ ĥ_s / (2 r_s(x)).    (5.27)

By the properties of the simplex s, 0 < α_{i,j}^s / 2 < π/2. By (5.27), α_{i,j}^s ≤ 2 arcsin(ĥ_s/(2 r_s(x))) for any i and j. This implies that α̂_s ≤ 2 arcsin(ĥ_s/(2 r_s(x))).

Proposition 5.11. Let x ∈ Ω and s ∈ S. If ĥ_s/(2 r_s(x)) ≤ 1, δ f̂(x)/r_s(x) ≤ 1, and

    2 arcsin(ĥ_s/(2 r_s(x))) < arccos(δ f̂(x)/r_s(x)) − arccos(1/Υ(x)),    (5.28)

then s is δ-AAB for x.

Proof. By (5.28), Lemma 5.10, and (5.22) in the definition of δ-AAB, s is δ-AAB for x.

Equation (5.28) can be simplified if we restrict the problem in several ways. These simplifications are useful for building intuition about Proposition 5.11. Furthermore, the restriction δ = 0 is used to form our definition of DRB. If we take Υ(x) = 1, (5.28) becomes

    2 arcsin(ĥ_s/(2 r_s(x))) < arccos(δ f̂(x)/r_s(x)).

On the other hand, if we take δ = 0, (5.28) becomes

    2 arcsin(ĥ_s/(2 r_s(x))) < arcsin(1/Υ(x)).

From this condition, we can determine a lower bound Ψ(x) on the ratio r_s(x)/ĥ_s:

    r_s(x)/ĥ_s > [ 2 sin( arcsin(1/Υ(x)) / 2 ) ]^{−1} = Ψ(x).    (5.29)

Finally, if we take both Υ(x) = 1 and δ = 0, (5.28) becomes ĥ_s/r_s(x) < √2, which implies α̂_s < π/2, the condition for AAB in this case.

Definition. We say that s ∈ S is distance-ratio-bounded (DRB) for x ∈ Ω if δ = 0 and (5.29) holds. We say that S is DRB for x if all s ∈ S are DRB for x.

Proposition 5.12. If s ∈ S is DRB for x ∈ Ω, then s is AAB for x.

Proof. By the definition of DRB, (5.29) holds, which is equivalent to (5.28) when δ = 0. Note that Ψ(x) ≥ 1/√2 (see Remark 5.13 below), which means that by (5.29), ĥ_s/(2 r_s(x)) < 1/√2 ≤ 1. Also, since δ = 0, we have δ f̂(x)/r_s(x) = 0 ≤ 1. Therefore, by Proposition 5.11, s is AAB for x.

Remark 5.13. If we simplify the definition of DRB by replacing Ψ(x) with Υ(x) in (5.29), Proposition 5.12 still holds. However, Ψ(x) < Υ(x), so using Ψ(x) to define DRB results in a looser restriction on simplices. For 1 ≤ Υ(x) < ∞, since arcsin(1/Υ(x)) > 2 arcsin(1/(2Υ(x))), we have Ψ(x) < Υ(x). When Υ(x) = 1, Ψ(x) = 1/√2, and lim_{Υ(x)→∞} [Ψ(x)/Υ(x)] = 1. Finally, for 1 ≤ Υ(x) < ∞, Ψ(x) increases as Υ(x) increases.
5.4 Algorithm

symbol          type or definition        description
X = Ω ∪ ∂Ω      —                         set of all grid nodes
Ω               subset of X               discretized domain
∂Ω              subset of X               discretized domain boundary
x               R^d                       domain point or grid node
N(x)            X → subset of X           set of grid neighbors of node x
s               convex subset of R^d      simplex
S(R)            set of simplices          grid simplices with all vertices in R and 1 ≤ n_s ≤ d
→Y(x)           X → subset of X           update node set of x, i.e., nodes upon which x might depend
S(x)            X → set of simplices      update simplex set of x
H(x, S, φ, µ)   see (5.4)                 numerical (discrete) Hamiltonian
u(x)            X → R                     solution of (5.5) at node x
v(x)            X → R                     (temporary) value of node x
H               subset of X               min-value heap of unaccepted nodes
←Y(x)           X → subset of X           dependent node set of x, i.e., nodes which might depend on x
S_d             set of simplices          grid simplices with n_s = d
Update(x, s)    X × S → R                 value update of node x from simplex s, i.e., µ̃_s from (5.8) with φ(x_i^s) = v(x_i^s)
Q               subset of X               queue of nodes to become update nodes
M(s)            S → subset of X           set of neighbors of all vertex nodes of s
B(x, r)         convex subset of R^d      ball of radius r centered on x
B_1(x)          convex subset of R^d      ball centered on x defined in (5.31)
B_2(x)          convex subset of R^d      ball centered on x defined in (5.32)
Ψ(x)            Ω → R                     a measure of anisotropy from (5.29)

Table 5.1: Summary of symbols. The first group is used in defining the numerical problem, the second group is used in Algorithm 2, the third group is used only in Algorithm 3, and the fourth group is used in the proofs of algorithm correctness and complexity.

In Algorithm 2 we define the MAOUM algorithm, which solves the discrete system (3.1). For (3.1) to be well-defined, S(x) must be determined. The update simplex set S(x) is chosen to ensure that H(x, S(x), φ, µ) is consistent and the discrete equation H(x, S(x), φ, µ) = 0 is causal. Let

    S(x) = { s ∈ S(→Y(x)) | s is AAB for x },    (5.30)

where →Y(x) is the stencil or update node set of x. We update v(x) from simplex s using Update(x, s), which evaluates to the solution µ̃_s of (5.8), where φ(x_i^s) = v(x_i^s). In Algorithm 2, the node x is included in →Y(x) and ←Y(x) for convenience in the proofs of Section 5.4.1. However, the algorithm is also correct if x is excluded from →Y(x) and ←Y(x).

In Section 5.4.1 we describe how the subroutine ComputeUpdateSet, defined in Algorithm 3, determines →Y(x) such that S(x) satisfies DC and AAB for x. We revisit the convergence of u_G to the solution u of the HJ PDE (1.8) as the grid spacing goes to zero in Section 5.4.2. In Section 5.4.3 we examine the computational complexity of MAOUM. One of the drawbacks of MAOUM is that the update node set →Y(x) must be stored for each x, which may require a significant amount of memory. In Section 5.4.4 we show that it is possible to instead store a superset of →Y(x) which has a compact representation.

Algorithm 2: Monotone Acceptance Ordered Upwind Method (MAOUM)
 1  foreach x ∈ Ω do v(x) ← ∞
 2  foreach x ∈ ∂Ω do v(x) ← g(x)
 3  foreach x ∈ X do ←Y(x) ← {x}, →Y(x) ← {x}
 4  foreach x ∈ Ω do ComputeUpdateSet(x)
 5  H ← X
 6  while H ≠ ∅ do
 7      x ← argmin_{y ∈ H} v(y)
 8      H ← H \ {x}
 9      foreach y ∈ ←Y(x) ∩ H do
10          foreach s ∈ S(→Y(y) \ H) s.t. x ∈ s and s is AAB for y do
11              v(y) ← min(v(y), Update(y, s))
12          end
13      end
14  end

Algorithm 3: ComputeUpdateSet(x)
 1  Q ← N(x)
 2  while Q ≠ ∅ do
 3      y ← Pop(Q)
 4      →Y(x) ← →Y(x) ∪ {y}
 5      ←Y(y) ← ←Y(y) ∪ {x}
 6      foreach s ∈ S_d(→Y(x)) s.t. x ∉ s and y ∈ s do
 7          if s is not AAB for x then
 8              Q ← Q ∪ ( M(s) \ →Y(x) )
 9          end
10      end
11  end
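For concreteness, the acceptance loop of Algorithm 2 can be sketched with a binary heap; stale heap entries are skipped lazily, a common substitute for a decrease-key operation. The data structures and helper names below are illustrative, not from the thesis code.

```python
import heapq
import itertools
import math

def maoum_solve(nodes, boundary, g, dependents, update_simplices, update):
    """Sketch of the second pass of Algorithm 2.

    nodes            : list of all grid nodes
    boundary         : set of boundary nodes
    g                : boundary condition g(x)
    dependents       : x -> iterable of nodes y with x in ->Y(y)
    update_simplices : (y, x, accepted) -> simplices of S(->Y(y) \ H)
                       containing x that are AAB for y
    update           : (y, s, v) -> candidate value for y from simplex s
    """
    v = {x: math.inf for x in nodes}
    for x in boundary:
        v[x] = g(x)
    counter = itertools.count()              # tie-breaker for heap entries
    heap = [(v[x], next(counter), x) for x in nodes]
    heapq.heapify(heap)
    accepted = set()
    while heap:
        val, _, x = heapq.heappop(heap)
        if x in accepted or val > v[x]:
            continue                         # stale entry; skip it
        accepted.add(x)                      # monotone acceptance
        for y in dependents(x):
            if y in accepted:
                continue
            for s in update_simplices(y, x, accepted):
                cand = update(y, s, v)
                if cand < v[y]:
                    v[y] = cand
                    heapq.heappush(heap, (cand, next(counter), y))
    return v
```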
5.4.1 Computing the Update Set

The status of Algorithm 3 during different stages of computing →Y(x) is shown in Figures 5.4 and 5.5. To achieve more accurate and efficient computations in locally-refined parts of the grid, we want the maximum extent of the update node set,

    r̂(x) = max_{y ∈ →Y(x)} ‖x − y‖,

to shrink toward zero as the local grid spacing goes to zero. By proving Theorem 5.17, we show that the subroutine call ComputeUpdateSet(x) (Algorithm 3) terminates in a finite number of iterations with a bound on r̂(x) that varies linearly with the local grid resolution. We further show that for x not too near the boundary ∂Ω, S(x) defined in (5.30) is DC and NGA, which is sufficient for consistency and causality.

First we define some notation and prove some useful lemmas. Let Z ⊆ X. The set S_{d+1}(Z) contains the d-dimensional grid simplices with all vertex nodes in Z. Define

    U_Z = ⋃_{s ∈ S_{d+1}(Z)} s,

which is the d-dimensional set covered by the simplices in S_{d+1}(Z).

Figure 5.4: (Continued in the following figure) The status of Algorithm 3 during different stages of the computation of the update node set of x. The star is node x. Squares are nodes in the update node set. Circles are nodes in Q. Thin solid lines are grid edges that are AAB for x. Thin dotted lines are grid edges that are not AAB. Thick lines are grid edges on the frontier of the update node set: thick solid lines are AAB and thick dashed lines are not AAB. The top-left shows the moment just after the neighbors of x have been added to the update node set and the frontier edges for which they are vertices have failed the AAB test. As a result, all nodes opposite these edges have been added to Q. Note that the order in which nodes are removed from Q is arbitrary, although our implementation uses first-in first-out order. Subsequent plots show subsequent but not sequential stages, left to right.

Figure 5.5: (Continued from the previous figure) The lower-right shows the status at the termination of Algorithm 3. All grid edges on the frontier of the update node set have passed the AAB test. Subsequent plots show subsequent but not sequential stages, left to right.

Define

    M(s) = { y ∈ X | y ∈ N(x_i^s) for all i such that 1 ≤ i ≤ n_s }.

Define

    F_∂(Z) = { s ∈ S_d(Z) | M(s) \ Z ≠ ∅, or x_j^s ∈ ∂Ω for all j }.

The set F_∂(Z) contains the (d − 1)-dimensional grid simplex faces with all vertex nodes in Z, such that there is a neighbor of all vertex nodes which is not in Z, or all vertex nodes are on the boundary of the grid. Define

    U_Z^∂ = ⋃_{s ∈ F_∂(Z)} s,

which is the (d − 1)-dimensional surface covered by the simplices in F_∂(Z).

Lemma 5.14. Let ∂U_Z be the boundary of U_Z. Then ∂U_Z ⊆ U_Z^∂.

Proof. Since S_{d+1}(Z) contains only grid simplices which do not overlap except at their boundaries, S_{d+1}(Z) is a partition of U_Z. It follows that any point z ∈ ∂U_Z must be on a (d − 1)-dimensional face s of just one d-dimensional simplex in S_{d+1}(Z). We have s ∈ S_d(Z), and either M(s) \ Z ≠ ∅ or x_j^s ∈ ∂Ω for all j. By definition, s ∈ F_∂(Z). Thus, z ∈ U_Z^∂ = ⋃_{s ∈ F_∂(Z)} s.

For the following proofs we are only concerned with a single execution of Algorithm 3. Also, we are only considering the update node set →Y(x) and not the dependent node set ←Y(x), so we abbreviate Y = →Y(x) to make the notation less cluttered. We now prove a lemma stating that throughout the execution of Algorithm 3, F_∂(Y) is DC for x. Let a subscript i ≥ 0 represent the state of a variable at the beginning of the (i + 1)st iteration of the while loop in Algorithm 3. For example, Y_i is the state of the update node set Y at the beginning of iteration i + 1. From Lines 3 and 4 of Algorithm 3, we have y_{i+1} = Pop(Q_i) and Y_{i+1} = Y_i ∪ {y_{i+1}}.

Lemma 5.15. Suppose that x ∉ ∂Ω. For i ≥ |N(x)|, F_∂(Y_i) is DC for x.

Proof. Suppose that x ∈ U_{Y_i} \ ∂U_{Y_i}. Any path ξ : [0, 1] → Ω such that ξ(0) = x and ξ(1) ∉ U_{Y_i} intersects ∂U_{Y_i}. In particular, any path ξ of the form ξ(t) = x + tCa, where t ∈ [0, 1], C ∈ R^+ is a constant, a ∈ A, and ξ(1) ∉ U_{Y_i}, intersects ∂U_{Y_i}. By Lemma 5.14, such a path also intersects U_{Y_i}^∂. Since this fact holds for any a ∈ A and U_{Y_i}^∂ is the union of the simplices in F_∂(Y_i), F_∂(Y_i) is DC for x.

We now prove by induction our supposition that x ∈ U_{Y_i} \ ∂U_{Y_i}. Consider the base case i = |N(x)|. By Line 3 of Algorithm 2 and Lines 1, 3, and 4 of Algorithm 3, we have Y_i = {x} ∪ N(x). Thus, S_{d+1}(Y_i) includes all d-dimensional simplices which have x as a vertex. Since x ∉ ∂Ω, we have x ∈ U_{Y_i} \ ∂U_{Y_i}. Now consider the inductive step. Suppose that x ∈ U_{Y_i} \ ∂U_{Y_i}. We have Y_{i+1} = Y_i ∪ {y_{i+1}}. Thus, S_{d+1}(Y_{i+1}) includes the d-dimensional simplices in S_{d+1}(Y_i), plus any other d-dimensional simplices which have y_{i+1} as a vertex and nodes in Y_i for all other vertices. It follows that S_{d+1}(Y_{i+1}) ⊇ S_{d+1}(Y_i). Since x ∈ U_{Y_i} \ ∂U_{Y_i}, we have x ∈ U_{Y_{i+1}} \ ∂U_{Y_{i+1}}.

Let ĥ = max{ ‖y − z‖ | y ∈ X and z ∈ N(y) } be the maximum grid edge length.
Let ĥ(R) = max{ ‖y − z‖ | y ∈ R ∩ X and z ∈ N(y) } be the maximum length of grid edges with at least one end node in R ⊂ R^d. Let B(x, r) be a closed ball of radius r around the point x ∈ R^d. The following lemma establishes that we can define a ball centered on x with radius linear in the maximum length of grid edges within the ball. This concept is used to define the local grid spacing in Theorem 5.17.

Lemma 5.16. For all x and all b ∈ R^+ there exists r̃ ∈ R^+ such that 0 < r̃ = b ĥ(B(x, r̃)) < ∞.

Proof. We have b ĥ(B(x, 0)) = b max{ ‖x − y‖ | y ∈ N(x) } > 0, b ĥ(B(x, r̃)) is nondecreasing in r̃, and lim_{r̃→∞} b ĥ(B(x, r̃)) = b ĥ < ∞. Therefore, there exists r̃ such that 0 < r̃ < ∞ and r̃ = b ĥ(B(x, r̃)).

As allowed by the previous lemma, choose b = Ψ(x) + 1 and define ř(x) to be the minimum r̃ such that ĥ(B(x, r̃)) = r̃/(Ψ(x) + 1). Define

    B_1(x) = B( x, ř(x) Ψ(x)/(Ψ(x) + 1) )    (5.31)

and

    B_2(x) = B( x, ř(x) ),    (5.32)

where Ψ(x) is defined in (5.29). Since Ψ(x) > 0, we have B_1(x) ⊂ B_2(x). Define B_1^C(x) = Ω \ B_1(x) and B_2^C(x) = Ω \ B_2(x). Let λ(x) = min_{y ∈ B_1(x), z ∈ B_2^C(x)} ‖y − z‖ be the minimum distance between B_1(x) and B_2^C(x). We have λ(x) > ř(x) − ř(x) Ψ(x)/(Ψ(x) + 1) = ř(x)/(Ψ(x) + 1). When x is clear from the context, we may abbreviate B_1 = B_1(x), B_2 = B_2(x), B_1^C = B_1^C(x), B_2^C = B_2^C(x), and λ = λ(x). Let m(R) = |X ∩ R| be the number of grid nodes in R ⊂ R^d.

Theorem 5.17. Let x ∈ Ω and ∂Ω ∩ B_1 = ∅. The subroutine call ComputeUpdateSet(x) terminates before iteration m(B_2) of the while loop, with r̂(x) ≤ ř(x). The set F_∂(→Y(x)) is DC for x and AAB for x.

Section 5.4.2 explains why requiring that x be sufficiently far from the boundary does not impact convergence.

Proof. A node w may be added to Y at most once. Since Y_0 = {x}, we have |Y_i| = i + 1.

Suppose there exists ǐ ≥ 1 such that y_ǐ ∈ B_2^C and, for 1 ≤ i < ǐ, y_i ∈ B_2. In other words, y_ǐ is the first node outside of B_2 to be added to Y. By Lines 3 and 4 of Algorithm 3, y_ǐ must have entered Q. So, by Lines 1 and 8, either y_ǐ ∈ N(x), or y_ǐ ∈ M(s) for some s ∈ S_d(B_2). If y_ǐ ∈ N(x), then

    ‖x − y_ǐ‖ ≤ ĥ(B_2) = ř(x)/(Ψ(x) + 1) < ř(x),

which contradicts the supposition that y_ǐ ∈ B_2^C. So y_ǐ ∉ N(x), which means y_ǐ ∈ M(s) for some s such that x_k^s ∈ B_2 ∩ X for 1 ≤ k ≤ d. Since

    ‖x_k^s − y_ǐ‖ ≤ ĥ(B_2) = ř(x)/(Ψ(x) + 1) < λ,

it must be that x_k^s ∈ B_1^C for 1 ≤ k ≤ d. By (5.21) and the definition of B_1^C, we have

    r_s(x) > ř(x) Ψ(x)/(Ψ(x) + 1).

Thus, by (5.26), since ĥ_s ≤ ĥ(B_2) = ř(x)/(Ψ(x) + 1), we have

    r_s(x)/ĥ_s > [ ř(x) Ψ(x)/(Ψ(x) + 1) ] / ĥ(B_2) = Ψ(x).    (5.33)

By definition, s is DRB for x. By Proposition 5.12, s is AAB for x, and so by the if condition in Line 7, y_ǐ did not enter Q, which is a contradiction. Thus, there does not exist ǐ ≥ 1 such that y_ǐ ∈ B_2^C. Because |Y_i| = i + 1, the algorithm terminates before iteration m(B_2) of the while loop. Let ĩ be the last while iteration. We have |N(x)| ≤ ĩ < m(B_2) and r̂(x) = max_{y ∈ Y_ĩ} ‖x − y‖ ≤ ř(x).

Consider each (d − 1)-dimensional simplex face s ∈ F_∂(Y_ĩ). Because ∂Ω ∩ B_1 = ∅, x ∉ ∂Ω. So, since N(x) ⊆ Y_ĩ, x is not a vertex of s. There exists ǰ ≤ ĩ such that y_ǰ = x_k^s for some k with 1 ≤ k ≤ d, and x_k^s ∈ Y_ǰ for all k with 1 ≤ k ≤ d. In other words, ǰ is the first while iteration at which all vertices of s are in Y. By the foreach loop, if s is not AAB for x, then M(s) ⊂ Q_ǰ, which implies M(s) ⊂ Y_ĩ, meaning M(s) \ Y_ĩ = ∅. But since s ∈ F_∂(Y_ĩ), M(s) \ Y_ĩ ≠ ∅ or x_k^s ∈ ∂Ω for all k. It follows that s is AAB, or x_k^s ∈ ∂Ω for all k. Since ∂Ω ∩ B_1 = ∅, if x_k^s ∈ ∂Ω for all k, then x_k^s ∈ B_1^C for all k. Thus, by (5.33), s is DRB for x, implying s is AAB for x. Therefore, F_∂(Y_ĩ) is AAB for x and, by Lemma 5.15, DC for x.
The following corollary states that for x not too near the boundary ∂Ω, S(x) is DC for x and NGA for x. By Proposition 5.2, DC for x is needed for the consistency of the Hamiltonian H(x, S(x), φ, µ). By Theorem 5.8, the fact that S(x) is NGA for x implies that (5.8) is causal for x and s ∈ S(x).

Corollary 5.18. Let ∂Ω ∩ B_1 = ∅. Then S(x) is DC for x and NGA for x.

Proof. By definition, F_∂(→Y(x)) ⊆ S_d(→Y(x)) ⊆ S(→Y(x)). By the definition of S(x) in (5.30), and since, by Theorem 5.17, F_∂(→Y(x)) is AAB for x, we have S(x) ⊇ F_∂(→Y(x)). By Theorem 5.17, F_∂(→Y(x)) is DC for x, so its superset S(x) is DC for x. By (5.30), S(x) is AAB for x, so by Theorem 5.9, S(x) is NGA for x.

5.4.2 Convergence

In Section 5.2 we proved that H is monotone (Proposition 5.1) and consistent (Proposition 5.2) as long as S is DC for x. In Corollary 5.18, ∂Ω ∩ B_1 = ∅ is a sufficient condition for S to be DC, but for x near the computational boundary the condition may not hold. However, the following propositions show that in the limit the update set does not intersect ∂Ω, so the scheme is consistent. Let ř_G(x) and B_1^G(x) be as defined above, but explicitly parameterized by the grid G.

Proposition 5.19. Property 3.1 (Stencil Radius) holds. Moreover, ř_G(x) = O(ĥ_G) for all x ∈ Ω.

Proof. Let x ∈ Ω. By Theorem 5.17 and the definition of ř_G(x), after executing Algorithm 3,

    r̂_G(x) ≤ ř_G(x) ≤ (Ψ(x) + 1) ĥ(B_2) ≤ (Ψ̂ + 1) ĥ_G.

Proposition 5.20. Let x ∈ Ω and φ ∈ C_b^∞(Ω). For sequences G_k, x_k ∈ Ω_k, and ξ_k, such that x_k → x, ĥ_{G_k} → 0, and ξ_k → 0 as k → ∞,

    lim_{k→∞} H(x_k, S_k, φ + ξ_k, φ(x_k) + ξ_k) = H(x, Dφ(x)),    (5.34)

where S_k = S(→Y_{G_k}(x_k)).

Proof. Let x ∈ Ω and φ ∈ C_b^∞(Ω). Let G_k, x_k ∈ Ω_k, and ξ_k be sequences such that x_k → x, ĥ_{G_k} → 0, and ξ_k → 0 as k → ∞. Since ĥ_{G_k} → 0 as k → ∞, by Proposition 5.19 we have r̂_{G_k}(x) → 0 and ř_{G_k}(x) → 0 as k → ∞. Also, because Ω is a bounded Lipschitz domain, x ∈ Ω is at some minimum distance ε > 0 from ∂Ω. So there exists some k̃ > 0 such that for all k ≥ k̃, ∂Ω ∩ B_1^{G_k} = ∅. By Theorem 5.17, for all k ≥ k̃, S_k is DC for x_k. By the fact that r̂_{G_k}(x) → 0 as k → ∞ and by Proposition 5.2,

    lim_{k→∞} H(x_k, S_k, φ + ξ_k, φ(x_k) + ξ_k) = lim_{y → x, ξ → 0, r̂(y) → 0} H(y, S(y), φ + ξ, φ(y) + ξ) = H(x, Dφ(x)),    (5.35)

so (5.34) holds.

To use the convergence proof in Section 3.2, we must also show that u is bounded. Virtually the same proof as that in Section 4.3.5 can be used to prove that u is bounded.

5.4.3 Complexity

We analyze the asymptotic computational complexity of Algorithm 2. We argue that executing the while loop in Algorithm 2 is more computationally complex than the initialization before the while loop. Of the tasks performed during the execution of the while loop, maintaining the nodes in value order using a heap determines the complexity of the while loop and, therefore, the complexity of Algorithm 2.

Recall that N = |X| is the number of grid nodes. To analyze the computational complexity, it is useful to prove a lemma bounding the maximum number of nodes in any update region or dependent region as N → ∞. Let ȟ = min{ ‖y − z‖ | y ∈ X and z ∈ N(y) } be the minimum grid edge length, and let ρ = ĥ/ȟ. Let →M = max_{x ∈ Ω} |→Y(x) ∩ X| and ←M = max_{x ∈ X} |←Y(x) ∩ Ω| be the maximum number of nodes over all update node sets and dependent node sets, respectively. Let Ψ̂ = max_{x ∈ Ω} Ψ(x).

Lemma 5.21. As N → ∞, →M = O((Ψ̂ρ)^d) and ←M = O((Ψ̂ρ)^d).

Proof. By Theorem 5.17, after executing Algorithm 3,

    r̂(x) ≤ ř(x) ≤ (Ψ(x) + 1) ĥ(B_2) ≤ (Ψ̂ + 1) ĥ,

for all x ∈ Ω.
Given that ȟ is the minimum spacing between nodes, we bound above the number of nodes that can be packed into B(x, r̂(x)), for all x:

|B(x, r̂(x)) ∩ X| = O((r̂(x)/ȟ)d) = O((Ψ̂ĥ/ȟ)d) = O((Ψ̂ρ)d).

Therefore, −→M = O((Ψ̂ρ)d). The symmetry y ∈ ←−Y(x) if and only if x ∈ −→Y(y) is evident from Line 3 of Algorithm 2 and Lines 4 and 5 of Algorithm 3. Since r̂(x) ≤ (Ψ̂ + 1)ĥ for all x ∈ Ω, the same bound holds for the maximum extent of the dependent node set:

max_{y∈←−Y(x)} ‖x − y‖ ≤ (Ψ̂ + 1)ĥ, for all x ∈ X.

Following the same argument as above, ←−M = O((Ψ̂ρ)d).

We first consider the computational cost of initializing the update regions of nodes x ∈ Ω (and dependent regions of nodes x ∈ X) using the subroutine Algorithm 3. In Line 4 of Algorithm 2, ComputeUpdateSet is called N times, once for each node x. For each call to ComputeUpdateSet, there are O(−→M) iterations of the while loop in Algorithm 3. For each while iteration, a foreach loop visits the (d − 1)-dimensional simplex faces that include node y, for a total of O(Ps) iterations, where Ps = max_{y∈X} |{s ∈ Sd | y ∈ s}|. Each iteration of the foreach loop performs a constant number of operations. Thus, the number of operations to execute ComputeUpdateSet N times is O(N−→MPs). Assuming Ps is bounded as N → ∞, the computational complexity of initializing the update node sets is O(N−→M).

Now we examine the complexity of executing the while loop of Algorithm 2. For each iteration of the while loop, a node x is accepted. A node is accepted only once, so there are N iterations in total. For each iteration, a foreach loop visits the O(←−M) unaccepted nodes in the dependent node set of x. For each such dependent node y, each neighbor of x must be tested for membership in −→Y(y) to determine the update simplices in Line 10, at a cost of O(Pn log −→M), where Pn = max_{x∈X} |N(x)| and −→Y(y) is implemented as a self-balancing binary search tree. Also, for each y, O(Ps) updates are performed at Line 11 and the binary min-heap implementation of H must be updated at a complexity of O(log N). Thus, the number of operations in executing the while loop is O(N←−M(Pn log −→M + Ps + log N)). Assuming Pn and Ps are bounded as N → ∞, the complexity is O(N←−M log N), which is determined by the heap update operations.

The complexity of executing the while loop of Algorithm 2 dominates the complexity of the initialization, including that of the calls to the subroutine Algorithm 3, which we determined above to be O(N−→M), and the initialization of the heap, which is O(N log N). So, by Lemma 5.21, the overall asymptotic complexity of Algorithm 2 is

O(N←−M log N) = O(N(Ψ̂ρ)d log N).

Single-pass algorithms for isotropic problems [55, 64] and limited anisotropic problems such as that in [58] or in Chapter 4 have a complexity of O(N log N). The extra (Ψ̂ρ)d factor in the complexity of MAOUM is due to the number of nodes in the update node set, which in MAOUM has been expanded beyond direct grid neighbors. In [60] a complexity of O(NΥ̂d−1 log N) is derived for AFOUM, where Υ̂ = max_{x∈Ω} Υ(x). As shown in Remark 5.13, Ψ̂ < Υ̂. However, AFOUM's complexity has a reduced exponent of d − 1 because the update node set lies on a lower-dimensional accepted front. The complexity claim from [60] does not consider ρ, even though it plays an important role when the grid is highly nonuniform. If we assume ρ is bounded as N → ∞, then MAOUM's complexity is O(NΨ̂d log N). However, the optimal allocation of grid nodes for Algorithm 2 may cause the grid to become more nonuniform as N → ∞.
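The structure that this analysis counts is easy to see in code. Below is a minimal Python sketch of the value-ordered second pass, assuming the stencils have already been built; the node ids, boundary values, stored dependent sets (standing in for ←−Y(x)), and the Update routine are all hypothetical placeholders for the corresponding pieces of Algorithm 2, and the heap uses lazy deletion in place of an explicit decrease-key operation.

```python
import heapq

def dijkstra_like_sweep(nodes, boundary_values, dependents, update):
    """Single-pass solve of a causal discretization in nondecreasing value order.

    nodes: iterable of node ids (assumed orderable, e.g. ints).
    boundary_values: dict node -> g(node) for boundary nodes.
    dependents: dict node -> nodes whose stencil contains it (the stored
        dependent sets, a stand-in for <-Y(x)).
    update: update(y, v) -> new tentative value at y given current values v,
        i.e. the solve of the discretized equation at y.
    """
    INF = float("inf")
    v = {x: boundary_values.get(x, INF) for x in nodes}
    accepted = set()
    heap = [(val, x) for x, val in v.items()]
    heapq.heapify(heap)                           # O(N) heap initialization
    while heap:
        val, x = heapq.heappop(heap)              # O(log N) per pop
        if x in accepted or val > v[x]:
            continue                              # stale entry from lazy deletion
        accepted.add(x)                           # v[x] is now final
        for y in dependents.get(x, ()):           # O(<-M) dependents per accept
            if y not in accepted:
                new_val = update(y, v)
                if new_val < v[y]:
                    v[y] = new_val
                    heapq.heappush(heap, (new_val, y))  # lazy decrease-key
    return v
```

Each acceptance pops the heap once and pushes at most ←−M entries, which is exactly where the O(N←−M log N) heap term above comes from.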
In Section 5.5 we examine the relationship between the solution accuracy and the computational cost experimentally on several problems with a progression of increasingly-refined uniform and nonuniform grids.

For practical grid sizes, it is often the CPU time spent computing node value updates which dominates the CPU time of executing the entire algorithm, despite the fact that the computational complexity of the heap updates dominates in asymptotic analysis. Computing a node value may involve iteratively solving a nonlinear equation or a numerical optimization, which can be quite computationally intensive. For this reason, in Section 5.5 we use the number of updates as our measure of computational cost.

5.4.4 Update Region

One of the drawbacks of MAOUM is that −→Y(x) and ←−Y(x) must be stored for each x, which may require a considerable amount of memory. It turns out that −→Y(x) (respectively, ←−Y(x)) can be stored in memory as a region −→R(x) ⊃ −→Y(x) (respectively, ←−R(x) ⊃ ←−Y(x)) instead of as a set of nodes. For now, we just discuss determining −→R(x), since the identical representations and computations can be used for ←−R(x). At the end of this section, we discuss one important consideration for ←−R(x).

A simple example of such a region is the ball −→R(x) = B(x, r̂(x)). Such a representation is easy to update in Algorithm 3 by replacing Line 4 with

r̂(x) ← max(r̂(x), ‖x − y‖).

Also, it can be stored using just one number r̂(x), and it is easy to test whether ‖xsi − y‖ ≤ r̂(y) in Line 10 of Algorithm 2. A ball may form a reasonably tight bound of −→Y(x) when the mesh is uniform, but it can be much too large in the case where the mesh is highly nonuniform, since for a given x the mesh may be coarse in some directions while being highly refined in others.

For this reason we use a polygonal representation of −→R(x), for which we choose a set of direction vectors Ap such that n = |Ap| is the number of directions and also the maximum number of sides in the polygon, and ‖ai‖2 = 1 for ai ∈ Ap and 1 ≤ i ≤ n. For example, we may use n = 3d − 1 direction vectors by taking all permutations of negative ones, zeroes, and ones, except the zero vector, and normalizing. If d = 2, this choice of Ap would represent an octagon using 8 direction vectors:

Ap = {(−1/√2, −1/√2), (−1, 0), (−1/√2, 1/√2), (0, −1), (0, 1), (1/√2, −1/√2), (1, 0), (1/√2, 1/√2)}.

Roughly speaking, for each x ∈ Ω and for each direction ai we compute and store the maximum distance ηi(x) of any y ∈ −→Y(x) along the direction ai. In other words,

ηi(x) = max_{y∈−→Y(x)} [(y − x) · ai], for 1 ≤ i ≤ n.

It is simple to update ηi(x) in Algorithm 3 by replacing Line 4 with

ηi(x) ← max(ηi(x), (y − x) · ai), for 1 ≤ i ≤ n.

We determine whether xsi ∈ −→R(y) in Line 10 of Algorithm 2 by checking the constraints

(xsi − y) · ai ≤ ηi(y), for 1 ≤ i ≤ n.

There is a trade-off between the amount of data stored to represent −→R(x) and the number of nodes |−→R(x)|, which relates to the number of times Update(y, s) is called. The above representation with n = 3d − 1 direction vectors can in many cases form a reasonable approximation of −→Y(x). This representation has worked well in tests we have performed with highly nonuniform meshes, even though it is limited to representing a convex region.

For the most part, ←−R(x) can be computed and used in the same manner as −→R(x), but there is one notable exception. In Line 9 of Algorithm 2, the set ←−R(x) ∩ H must be determined. We address this difficulty by storing the nodes in H in a 2d-tree.
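Before turning to how the 2d-tree is searched, the polygonal bookkeeping just described is short enough to sketch. The following Python/NumPy code is a minimal illustration, not our implementation: direction_set builds the n = 3d − 1 directions, include plays the role of the modified Line 4 of Algorithm 3, and contains is the membership test used in Line 10 of Algorithm 2.

```python
import itertools
import numpy as np

def direction_set(d):
    """All 3^d - 1 vectors in {-1, 0, 1}^d except the zero vector, normalized.
    For d = 2 this reproduces the 8 octagon directions listed above."""
    dirs = [np.array(p, dtype=float)
            for p in itertools.product((-1.0, 0.0, 1.0), repeat=d)
            if any(p)]
    return [a / np.linalg.norm(a) for a in dirs]

class PolygonalRegion:
    """Convex polygonal bound on an update set, stored as one support
    distance eta_i per direction a_i."""

    def __init__(self, x, dirs):
        self.x = np.asarray(x, dtype=float)
        self.dirs = dirs
        self.eta = [0.0] * len(dirs)      # initially the region is just {x}

    def include(self, y):
        """Grow the region to cover node y (in place of Line 4 of Algorithm 3)."""
        y = np.asarray(y, dtype=float)
        for i, a in enumerate(self.dirs):
            self.eta[i] = max(self.eta[i], float((y - self.x) @ a))

    def contains(self, p):
        """Membership test, as used in Line 10 of Algorithm 2."""
        p = np.asarray(p, dtype=float)
        return all(float((p - self.x) @ a) <= self.eta[i] + 1e-12
                   for i, a in enumerate(self.dirs))
```

For example, region = PolygonalRegion((0.0, 0.0), direction_set(2)) followed by region.include((1.0, 0.5)) yields a region for which region.contains((0.5, 0.2)) is True.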
It is fairly straightforward to search the 2d-tree to determine which nodes are within a ball or polygon, for example. The ease with which the 2d-tree can be searched is another important consideration in choosing a representation of ←−R(x) in our implementation.

5.5 Experiments

We present numerical experiments to test the convergence, computational cost, and accuracy of Algorithm 1 (p. 35). We also demonstrate that MAOUM can be used to solve practical problems, such as a seismic imaging problem [60] and a robot navigation problem with obstacles and wind. The experiments indicate that MAOUM is particularly suited to problems in which the characteristics are highly-curved in some regions of Ω and straight or nearly straight elsewhere. For such problems it is more efficient to refine the grid only in the regions with curved characteristics.

For all of the experiments reported below d = 2, although Algorithm 2 can be used in any dimension. Let x = (x1, x2)T and y = (y1, y2)T. We use a Maubach grid [43] for our simplicial grid. Examples of uniform Maubach grids are shown in Figure 5.6 and nonuniform Maubach grids in Figure 5.7. It is a semi-structured simplicial grid that can be adaptively refined and used in any dimension.

For the Update function we must solve (5.8), which involves finding the minimum over ζ of a convex function ηsv(ζ) for each s. We use golden section optimization [37] when d = 2 and the projected subgradient method [61] when d > 2. These optimization methods are likely slower than more sophisticated alternatives that use gradient information, but they are relatively simple to implement and can minimize all convex functions, even those for which the gradient is undefined in places.

5.5.1 Numerical Convergence Study

Figure 5.6: A sequence of uniformly-refined Maubach grids with 0, 1, 2, and 3 levels of refinement.

We demonstrate numerically that the output v of Algorithm 2 converges to the solution u of an anisotropic HJ PDE as the grid spacing goes to zero. For this experiment we use a series of increasingly fine uniform Maubach grids, such as those in Figure 5.6. We use a homogeneous Hamiltonian (i.e., H(x, q) = H(q)) and Af that is a non-grid-aligned ellipse. For the purpose of implementation it is easiest to define c(x, y) = c(y) from (5.9). Let c(y) = ‖By‖2, where B is the 2 × 2 matrix

B = [1 0; 0 4] [cos(π/6) −sin(π/6); sin(π/6) cos(π/6)].

The cost function c rotates y by π/6 counterclockwise around the origin and then scales it by 4 in the vertical axis before taking the Euclidean norm. Let D = {x ∈ R2 | c(x) ≤ 0.4}. The domain for this problem is Ω = [−1, 1]2 \ D and the boundary is ∂Ω = ∂D. The boundary conditions are given as g(x) = c(x) for x ∈ D. Notice that the boundary conditions are defined throughout D and thus extend beyond ∂Ω. We define the boundary to be the ellipse D to exclude errors caused by poor approximation of the solution u near the origin, where u is nondifferentiable. It is not necessary to give boundary conditions on the external boundary since all characteristics flow out of this boundary. The grid resolutions and corresponding errors are listed in Table 5.2.

Level   N       M     h       Updates   e∞      rh∞   e1      rh1
10      1089    56.6  6.3e-2  32124     3.1e-2        2.9e-3
12      4225    60.2  3.1e-2  134666    8.9e-3  1.8   1.2e-3  1.3
14      16641   62.1  1.6e-2  550726    3.8e-3  1.2   4.7e-4  1.3
16      66049   63.0  7.8e-3  2226532   1.8e-3  1.1   2.1e-4  1.2
18      263169  63.5  3.9e-3  8952636   8.2e-4  1.1   9.6e-5  1.1

Table 5.2: The problem has a homogeneous Hamiltonian and Af that is a non-grid-aligned ellipse with anisotropy Υ = 4. The table shows grid spacing versus maximum errors and average errors of approximate solutions computed on a progression of uniform grids by MAOUM. Level is the level of grid refinement. M is the average over x ∈ Ω of the number of nodes in the update node set. h is the distance between neighboring grid nodes in the x1 and x2 directions. Updates is the number of times Update(y, s) is called. e∞ is the L∞-error in the approximation v to the true solution u of (1.8). e1 is the L1-error. The error convergence rates rh∞ and rh1 are computed with respect to the grid spacing h.
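The rate columns of Table 5.2 can be reproduced from the tabulated errors. The short Python sketch below does so, assuming (as the table's construction implies) that the h column rounds the exact spacings 2/32, 2/64, ..., 2/512, which halve with every two refinement levels; because the tabulated errors are themselves rounded to two digits, the recovered rates can differ from the table's in the last digit.

```python
import math

h = [2.0 / 2**k for k in range(5, 10)]             # exact spacings 2/32 ... 2/512
e_inf = [3.1e-2, 8.9e-3, 3.8e-3, 1.8e-3, 8.2e-4]   # e_inf column of Table 5.2
e_one = [2.9e-3, 1.2e-3, 4.7e-4, 2.1e-4, 9.6e-5]   # e_1 column of Table 5.2

for k in range(1, len(h)):
    # observed rate between successive grids: r = log(e_c/e_f) / log(h_c/h_f)
    r_inf = math.log(e_inf[k - 1] / e_inf[k]) / math.log(h[k - 1] / h[k])
    r_one = math.log(e_one[k - 1] / e_one[k]) / math.log(h[k - 1] / h[k])
    print(f"level {10 + 2 * k}: rh_inf = {r_inf:.1f}, rh_1 = {r_one:.1f}")
```

The printed maximum-error rates settle near one as the grid is refined, matching the rh∞ column.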
5.5.2 Nonuniform Grid

To determine what benefit Algorithm 2 gains from refining a grid intelligently, we use an anisotropic HJ PDE which has a solution with kinks where the gradient is undefined. We run Algorithm 2 on a series of increasingly-fine uniform grids and a series of increasingly nonuniform grids, where newly added nodes are concentrated around the parts of the solution where the gradient is undefined. For comparison, we also run AFOUM on both the uniform and nonuniform grid series. For all four combinations of algorithm and grid series we plot the solution accuracy vs computational cost to see if Algorithm 2 performs better on a well-chosen nonuniform grid.

We use a homogeneous Hamiltonian (i.e., H(x, q) = H(q)) and a rectangular Af that is not aligned with the grid lines.

Figure 5.7: The sequence of nonuniformly-refined Maubach grids with 10, 12, and 14 levels of refinement used for the problem with homogeneous Hamiltonian and rectangular Af that is not aligned with the grid lines.

Let c(y) = ‖By‖∞, where B is the 2 × 2 matrix

B = [1 0; 0 2] [cos(π/8) −sin(π/8); sin(π/8) cos(π/8)].

The cost function c rotates y by π/8 counterclockwise around the origin and then scales it by 2 in the vertical axis before taking the maximum norm. The domain for this problem is Ω = [−1, 1]2 \ O and the boundary is ∂Ω = O, where O is the origin. The boundary conditions are given as g(O) = 0. Part of the nonuniform grid series used is shown in Figure 5.7. The grids are refined within distance 2ȟΥ of the two lines

x2 = ((sin(−π/8) + (1/2) cos(−π/8)) / (cos(−π/8) − (1/2) sin(−π/8))) x1

and

x2 = ((sin(−π/8) − (1/2) cos(−π/8)) / (cos(−π/8) + (1/2) sin(−π/8))) x1,

where ȟ is the minimum grid edge length after refinement is complete.

The results for all four combinations of algorithm and grid series are compared in Table 5.3 and Figure 5.8.

Figure 5.8: Error versus number of updates for the problem with homogeneous Hamiltonian and Af that is a non-grid-aligned rectangle with length that is twice its width. The values plotted are from Table 5.3.

To properly interpret the relative performance of MAOUM and AFOUM in Figure 5.8, one needs to understand the extra cost involved in the initialization of the update sets in Algorithm 3 of MAOUM. Between 40 and 50 percent of MAOUM's total run time was spent in Algorithm 3. If we consider the ratio of total run time (including initialization) to number of updates, MAOUM took between 90 and 240 percent of the time per update of AFOUM.

After factoring in the initialization time for MAOUM, on the highly-nonuniform grids it runs nearly an order of magnitude faster than AFOUM, resulting in similar approximation error. On uniform grids MAOUM runs slightly slower than AFOUM (but of the same order), resulting in similar error.
MAOUM on highly-nonuniform grids runs nearly an order of magnitude faster than either method on uniform grids, resulting in similar error. Note that this problem has been chosen to highlight the advantage of MAOUM in solving problems in which the characteristics are highly-curved in some regions of Ω and straight or nearly straight elsewhere.

                        MAOUM                       AFOUM
Grid    Level  N       Updates   e∞      rU∞      Updates   e∞      rU∞
Nonuni  10     621     10126     5.3e-2           73560     5.3e-2
        12     1443    24482     3.6e-2  0.41     243620    3.9e-2  0.26
        14     3051    53264     2.6e-2  0.45     657620    2.8e-2  0.35
        16     6305    111972    1.8e-2  0.51     1616153   1.9e-2  0.41
        18     12911   231618    1.2e-2  0.47     3809008   1.4e-2  0.40
Uni     10     1089    18744     5.3e-2           24754     5.3e-2
        12     4225    76346     3.6e-2  0.26     97974     3.9e-2  0.22
        14     16641   308226    2.6e-2  0.25     390716    2.8e-2  0.25
        16     66049   1238754   1.8e-2  0.27     1561944   1.9e-2  0.27
        18     263169  4967030   1.2e-2  0.25     6248199   1.4e-2  0.25

                                 e1      rU1               e1      rU1
Nonuni  10     621     10126     2.3e-3           73560     2.4e-3
        12     1443    24482     1.1e-3  0.84     243620    1.2e-3  0.59
        14     3051    53264     5.2e-4  0.93     657620    5.8e-4  0.71
        16     6305    111972    2.6e-4  0.95     1616153   2.9e-4  0.76
        18     12911   231618    1.3e-4  0.90     3809008   1.5e-4  0.75
Uni     10     1089    18744     2.3e-3           24754     2.4e-3
        12     4225    76346     1.1e-3  0.53     97974     1.2e-3  0.52
        14     16641   308226    5.2e-4  0.52     390716    5.8e-4  0.51
        16     66049   1238754   2.5e-4  0.51     1561944   2.9e-4  0.51
        18     263169  4967030   1.2e-4  0.51     6248199   1.4e-4  0.51

Table 5.3: The problem has a homogeneous Hamiltonian and rectangular Af that is not aligned with the grid lines. The table shows the number of updates versus maximum errors and average errors of approximate solutions computed on a progression of nonuniform and uniform grids by MAOUM and AFOUM. Nonuni indicates the use of a nonuniform grid, while Uni indicates a uniform grid. Level is the maximum level of grid refinement. Updates is the number of times Update(y, s) is called. e∞ is the L∞-error in the approximation v to the true solution u of (1.8). e1 is the L1-error. The error convergence rates rU∞ and rU1 are computed with respect to Updates rather than h as in Table 5.2.

5.5.3 Seismic Imaging

We run MAOUM on the seismic imaging example from the lower-right of Figure 6 in [60]. The domain for this problem is Ω = [−0.5, 0.5]2 \ O and the boundary is ∂Ω = O, where O is the origin. The boundary conditions are given as g(O) = 0. A wave propagates from the origin and passes through Ω, which is split into four layers by three vertically-shifted sinusoidal curves. The set Af(x) is an ellipse with the longer axis aligned with the tangent of the sinusoidal curves at x2. The dimensions of the elliptical Af(x) are constant within a layer. The problem is to compute the first-arrival time for the seismic wave. More details are specified in [60].

Figure 5.9: Contours of first-arrival times of a seismic wave computed using a uniform Maubach grid of level 13 with 8321 nodes (on left) and level 18 with 263169 nodes (on right).

We test MAOUM on this problem using uniform Maubach grids from levels 13 to 18. The computed solutions for the level 13 and level 18 grids are shown in Figure 5.9. Preliminary experiments indicate that refining the grid along the sinusoidal layer boundaries does not improve accuracy significantly. We believe this is because the characteristics are curved to roughly the same degree throughout Ω. Localized grid refinement is most beneficial when characteristics are highly-curved in some parts of Ω and nearly straight elsewhere.

5.5.4 Robot Navigation with Wind and Obstacles

An optimal time-to-reach problem with obstacles is a natural candidate for exploiting localized grid refinement.
An optimal trajectory that must go around an obstacle to achieve the goal must closely track the obstacle boundary for some portion. Refining the grid to better resolve these obstacle boundaries should allow for a more accurate solution in portions of the domain that do not have an obstacle-free optimal path to the goal.

Although this example is not physically realistic, it does use data of suitable complexity for realistic scenarios and demonstrates MAOUM on a spatially-inhomogeneous anisotropic problem. The objective is for a robot to navigate from any location in the domain to a goal in optimal time. To make the task difficult, the robot must avoid obstacles on its way to the goal and there is an inhomogeneous but static wind that pushes the robot.

The goal is x∗ = (80.75, 46.25)T, g(x∗) = 0, and ∂Ω = {x∗}. The domain is Ω = [72, 112] × [17.5, 57.5] \ ∂Ω. The domain dimensions were chosen arbitrarily to obtain a sufficiently varied region of wind data (see below). The robot is circular with a radius of 1.1429. The obstacles are a set of points obtained from a laser range finder map downloaded with the Saphira robot control system software. The same point data was used for an isotropic path planning problem in [1], but we map the data from the square domain [−4000, −500] × [−3500, 0] to Ω. The point obstacles are shown in Figure 5.10 (a). To prevent the robot from attempting to plan a path through obstacles, it moves at a very slow speed of f(x, a) = 0.5 for any a ∈ A and any x ∈ C, where C is the set of states such that the robot is in collision with a point obstacle. The collision set C is depicted in solid black in Figure 5.10.

The wind velocity is represented by a vector field shown in Figure 5.10 (b). We obtained the vector field from the wind arrays in Matlab.¹ We use bilinear interpolation between samples to obtain the wind vector function ~fw : Ω → R2. In the absence of wind, the robot moves with speed fr = 75.0, resulting in an isotropic speed profile Afr = {y | ‖y‖ ≤ fr}. Although not physically realistic, we shift the isotropic speed profile by the wind velocity, so the anisotropic speed profile is Af(x) = {y | ‖y − ~fw(x)‖ ≤ fr}. Note that fr = 75.0 > max_{x∈Ω} ‖~fw(x)‖, so Af(x) contains the origin in its interior.

¹To load the wind data into Matlab, type load wind;. The data is a 3D vector field, so we used only the 6th page of the data and ignored any wind velocity component in the 3rd dimension. In other words, we used u(:,:,6) and v(:,:,6) for the arrays of wind vector components and x(:,:,6) and y(:,:,6) for the arrays of spatial coordinates.

In order to determine the cost c(x, y), we find the intersection of the vector y multiplied by a scalar b with ∂Af(x) by solving the quadratic ‖by − ~fw(x)‖2 = fr2 for b ∈ R+. Then we have f(x, y/‖y‖) = b‖y‖ and

c(x, y) = ‖y‖ / f(x, y/‖y‖) = 1/b = ‖y‖2 / (y · ~fw(x) + √((y · ~fw(x))2 − ‖y‖2(‖~fw(x)‖2 − fr2))).

We note that an HJ PDE with the same isotropic control and advection component was derived and solved in [59].

To compute an optimal trajectory from any initial location to x∗, we solve the characteristic ordinary differential equation (ODE)

dx/dt = ~fw(x) − fr Du(x)/‖Du(x)‖ = f(x, â(x))â(x),   (5.36)

where

â(x) ∈ argmax_{a∈A} [(−Du(x) · a)f(x, a)].

Note that this is not gradient descent, because the gradient and the optimal characteristic will not generally align in anisotropic problems. To solve the ODE we used the function ode23 in Matlab.
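In Python, an analogous trajectory computation might look like the sketch below, using SciPy's RK23 integrator (the Bogacki-Shampine pair underlying Matlab's ode23). The wind and grad_u functions are hypothetical stand-ins for the bilinearly interpolated ~fw and for Du(x) obtained from the interpolated solution, as described next; the starting point and stopping tolerance are likewise illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

F_R = 75.0                                  # robot speed f_r from the text
GOAL = np.array([80.75, 46.25])             # the goal x*

def wind(x):
    """Hypothetical stand-in for the bilinearly interpolated wind ~f_w(x)."""
    return np.array([10.0, -5.0])

def grad_u(x):
    """Hypothetical stand-in for Du(x), the gradient of the piecewise-linear
    interpolant of the computed time-to-reach function."""
    return x - GOAL

def rhs(t, x):
    # dx/dt = ~f_w(x) - f_r Du(x)/||Du(x)||, equation (5.36)
    g = grad_u(x)
    return wind(x) - F_R * g / np.linalg.norm(g)

def near_goal(t, x):
    return np.linalg.norm(x - GOAL) - 0.5   # stop once within 0.5 of the goal
near_goal.terminal = True

sol = solve_ivp(rhs, (0.0, 10.0), np.array([100.0, 30.0]),
                method="RK23", events=near_goal, max_step=0.01)
trajectory = sol.y.T                        # sampled states along the path
```

With zero wind, rhs reduces to steepest descent of u at speed fr, recovering the straight-line isotropic trajectories discussed below.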
We determine the derivative in (5.36) for any x by first computing Du(x) as the gradient of the linearly interpolated u on the grid simplex containing x.

We use a Maubach grid that is additionally refined within a distance of ȟΥ of C and the goal x∗. The grid is uniformly refined to level 10 and then refined a further 8 levels near C and x∗. The resulting grid is shown in Figure 5.10 (a) and has 38728 nodes. We compute the time-to-reach function u for the anisotropic problem (i.e., with the wind) and the isotropic problem (i.e., without the wind) on the nonuniformly-refined grid. Solution contours are shown in Figure 5.10 (b) and (c), respectively. Optimal trajectories for the anisotropic problem are shown in (b). Notice how trajectories 2 and 4 cut quickly across regions where the wind is strong and against the direction towards the goal. Contrast these trajectories with the straight-line optimal trajectories for the isotropic problem in (c). We also solve the anisotropic problem on a uniform level 15 Maubach grid of 33025 nodes. Solution contours and optimal trajectories are shown in (d). Although the solution contours are smoother away from C in this uniform case, the ODE computation gets stuck near C for trajectories 2 and 4, likely due to insufficient grid refinement and poor solution quality near ∂C.

Figure 5.10: The problem of optimally navigating a robot to a goal through wind and obstacles. The top-left (a) shows the laser-rangefinder data of the obstacles and the grid refined in a band around the collision set C and the goal x∗. The other three figures show C in solid black. The top-right (b) includes the wind vector field, the contours of the computed time-to-reach function, and four optimal trajectories from different starting locations to the goal. The lower-left (c) compares the optimal trajectories computed with the wind and without; the contours are of the isotropic (i.e., without wind) time-to-reach function, the solid lines are trajectories with the wind, and the dash-dotted lines are trajectories without the wind. The lower-right (d) shows contours of the time-to-reach function and trajectories, computed using a level 15 uniform Maubach grid with roughly the same number of nodes. Note that for trajectories 2 and 4 the characteristic ODE computation of the optimal trajectories gets stuck near the boundary of C, which is insufficiently resolved.

Chapter 6
Conclusion

We extended Dijkstra-like OUMs for solving static HJ PDEs in two important ways. For both extensions, we used a common framework (Chapter 3) to prove convergence of the approximate solution to the viscosity solution of the HJ PDE and to prove that the algorithm solves the system of discretized equations in a single pass through the nodes of the grid. The consistency, monotonicity, and stability of the discretization were used to prove convergence (Section 3.2), following the technique of [8]. The causality of the discretization was used to prove that a Dijkstra-like algorithm (Section 3.3) can solve the discrete equations in a single pass in nondecreasing node value, following a similar argument to those in [55, 64].

The first extension (Chapter 4) showed that the Dijkstra-like FMM can be used to solve a class of axis-aligned HJ PDEs on an orthogonal grid. This class (Section 4.2) is not limited to differentiable Hamiltonians, as is the case for Osher's criterion on the applicability of FMM [47, 62].
We showed that a first-order upwind finite-difference Eulerian discretization (Section 4.3) of the HJ PDE satisfies the required consistency, monotonicity, stability, and causality, and also has properties that make it simple and efficient to update a node value using the discretized equation. These properties include the fact that H(µ) is nondecreasing in µ and that there is a unique solution µ = µ̃ to H(µ) = 0, where H is the numerical Hamiltonian and µ is a stand-in for the current node value of interest. There are also a number of ways that structure in H can be exploited (Section 4.4) to remove from consideration neighboring nodes and simplices when solving H(µ) = 0. Our method can be used to solve some challenging problems (Section 4.6), including computing coordinated multi-robot optimal trajectories.

The second extension (Chapter 5) developed a Dijkstra-like method called MAOUM that can be used to solve general convex static HJ PDEs. We presented three simple criteria for the causality of the discretization of a static convex HJ PDE. MAOUM uses an initial pass through the nodes to compute and store extended stencils that ensure the discretization is causal. A second pass through the nodes computes the solution to the discretized equations in a Dijkstra-like fashion. This method solves the same class of problems as the original OUM (which we call AFOUM) described in [59, 60].

One advantage of MAOUM over AFOUM is that it uses local grid spacings to compute the stencil, making it more suitable for use on a grid with highly nonuniform grid spacing. A second advantage is that the discrete equations are causal in the sense of Property 3.9, so once the stencils have been built, a simple Dijkstra-like method can be used to solve the discrete equations in nondecreasing value order. A third advantage is that the analysis of δ-causality in Section 5.3 allows modification of the method into a Dial-like algorithm with buckets of width δ. One last advantage is that for solving multiple problems with different boundary conditions g but the same Hamiltonian H, the generated stencils can be computed once and reused for each problem. Disadvantages of the method include the extra pass required to compute the stencils and the necessity of storing the stencils for use in the second pass. Despite these disadvantages, in our experiments MAOUM performed similarly to AFOUM in terms of accuracy versus computational cost for problems on uniform grids, and it performed considerably better for some problems which had highly curved characteristics and a more refined grid in only certain regions of the domain. MAOUM was applied to solve several problems, such as finding optimal paths for a robot to reach a goal while fighting a strong wind and avoiding obstacles.

6.1 Future Work

Algorithm 3 does not generate the smallest possible stencil −→Y(x) such that S(x) satisfies DC and AAB for x. The last plot in Figure 5.5 shows a −→Y(x) that is not minimal. Consequently, Algorithm 3 could be improved. Ideally, it would generate a −→Y(x) that is minimal without substantially increasing the computational cost.

We intend to develop an error-control adaptive-grid version of MAOUM. When a part of the approximate solution has just been computed, we can measure the approximation error and test whether it falls within a pre-specified tolerance. If it does, that part of the solution can be accepted.
If not, the grid can be refined in that local region, the stencils nearby can be recomputed, and the Dijkstra-like algorithm can be backed up to recompute parts of the solution that may have changed due to the modified discretized equations. In this way, grid refinement would be used to control error as the solution is being constructed.

There is the question of how we might exploit heuristics and parallelism to compute the solution, or parts of it, more efficiently. If, after computing the approximate solution, one plots a dependency graph that shows how grid nodes depend on others for their solution value, it is usually the case that any particular node is independent of most other nodes for its value. Accordingly, if we were to recompute the solution for that node, we could ignore all nodes of which it is independent. If we are only interested in computing the solution near a particular state or near a characteristic that passes through that state (as is often the case in optimal control problems), we may be able to devise a heuristic that helps us decide in advance which solution values might be relevant. This idea may allow us to develop a single-pass focused computation of the solution in the spirit of A* search for discrete graphs (see [54, page 97]).

Furthermore, the fact that node values are commonly independent of each other, especially when those values are very similar, makes the possibility of computing node values in parallel intriguing. One way to exploit this inherent potential for parallel computation may be to use a Dial-like method [64] to solve a δ-causal discretization.

Bibliography

[1] K. Alton and I. Mitchell. Optimal path planning under different norms in continuous state spaces. In Proceedings of the International Conference on Robotics and Automation, pages 866–872, 2006.

[2] K. Alton and I. Mitchell. Efficient dynamic programming for optimal multi-location robot rendezvous. In Proceedings of the IEEE Conference on Decision and Control, pages 2794–2799, 2008.

[3] K. Alton and I. M. Mitchell. Fast marching methods for a class of anisotropic stationary Hamilton-Jacobi equations. Technical Report TR-2006-27, Department of Computer Science, University of British Columbia, 2007.

[4] K. Alton and I. M. Mitchell. Fast marching methods for stationary Hamilton-Jacobi equations with axis-aligned anisotropy. SIAM J. Numer. Anal., 43:363–385, 2008.

[5] K. Alton and I. M. Mitchell. An ordered upwind method with precomputed stencil and monotone node acceptance for solving static convex Hamilton-Jacobi equations, 2010. In revision for J. Sci. Comput.

[6] S. Bak, J. McLaughlin, and D. Renzi. Some improvements for the fast sweeping method, 2010. Accepted to SIAM J. Sci. Comput.

[7] M. Bardi and I. Capuzzo-Dolcetta. Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Birkhauser, Boston, Basel, and Berlin, 1997.

[8] G. Barles and P. E. Souganidis. Convergence of approximation schemes for fully nonlinear second order equations. Asymptot. Anal., 4:271–283, 1991.

[9] R. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.

[10] D. P. Bertsekas. Dynamic Programming and Optimal Control: Second Edition. Athena Scientific, Belmont, Massachusetts, 2000.

[11] F. Bornemann and C. Rasch. Finite-element discretization of static Hamilton-Jacobi equations based on a local variational principle. Comput. Vis. Sci., 9:57–69, 2006.
[12] M. Boue and P. Dupuis. Markov chain approximations for deterministic control problems with affine dynamics and quadratic cost in the control. SIAM J. Numer. Anal., 36(3):667–695, 1999.

[13] A. R. Bruss. The Eikonal equation: Some results applicable to computer vision. J. Math. Phys., 23:890–896, 1982.

[14] T. C. Cecil, S. J. Osher, and J. Qian. Simplex free adaptive tree fast sweeping and evolution methods for solving level set equations in arbitrary dimension. J. Comput. Phys., 213(2):458–473, Apr. 2006.

[15] Y. Chen and B. Cockburn. An adaptive high-order discontinuous Galerkin method with error control for the Hamilton-Jacobi equations. Part I: The one-dimensional steady state case. J. Comput. Phys., 226(1):1027–1058, September 2007.

[16] Y. Cheng and C.-W. Shu. A discontinuous Galerkin finite element method for directly solving the Hamilton-Jacobi equations. J. Comput. Phys., 223:398–415, Apr. 2007.

[17] M. G. Crandall and P. L. Lions. Viscosity solutions of Hamilton-Jacobi equations. T. Am. Math. Soc., 277(1):1–42, May 1983.

[18] M. G. Crandall and P. L. Lions. Two approximations of solutions of Hamilton-Jacobi equations. Math. Comput., 43(167):1–19, July 1984.

[19] M. G. Crandall, H. Ishii, and P. Lions. User's guide to viscosity solutions of second order partial differential equations. Bull. Amer. Math. Soc., 27(1):1–67, 1992.

[20] E. Cristiani. A fast marching method for Hamilton-Jacobi equations modeling monotone front propagations. J. Sci. Comput., 39(2):189–205, 2009.

[21] E. Cristiani and M. Falcone. Fast semi-Lagrangian schemes for the Eikonal equation and applications. SIAM J. Numer. Anal., 45(5):1979–2011, 2007.

[22] P.-E. Danielsson. Euclidean distance mapping. Comput. Graphics Image Process., 14(3):227–248, Nov. 1980.

[23] R. B. Dial. Algorithm 360: shortest-path forest with topological ordering. Commun. ACM, 12:632–633, 1969.

[24] E. W. Dijkstra. A note on two problems in connection with graphs. Numer. Math., 1(1):269–271, Dec. 1959.

[25] L. C. Evans. Partial Differential Equations. American Mathematical Society, Providence, Rhode Island, 1991.

[26] M. Falcone and R. Ferretti. Discrete time high-order schemes for viscosity solutions of Hamilton-Jacobi-Bellman equations. Numer. Math., 67:315–344, 1994.

[27] M. Falcone and R. Ferretti. Semi-Lagrangian schemes for Hamilton-Jacobi equations, discrete representation formulae and Godunov methods. J. Comput. Phys., 175:559–575, 2002.

[28] S. Fomel. On anelliptic approximations for qP velocities in VTI media. Geophysical Prospecting, 52:247–259, April 2004.

[29] S. Fomel, S. Luo, and H. Zhao. Fast sweeping method for the factored Eikonal equation. J. Comput. Phys., 228:6440–6455, September 2009.

[30] P. A. Gremaud and C. M. Kuster. Computational study of fast methods for the Eikonal equation. SIAM J. Sci. Comput., 27:1803–1816, 2006.

[31] D. Halliday, R. Resnick, and J. Walker. Fundamentals of Physics. John Wiley and Sons, Inc., USA, 1997.

[32] B. K. P. Horn and M. J. Brooks, editors. Shape from Shading. The MIT Press, Cambridge, Massachusetts, 1989.

[33] C. Hu and C.-W. Shu. A discontinuous Galerkin finite element method for Hamilton-Jacobi equations. SIAM J. Sci. Comput., 21(2):666–690, 1999.

[34] S.-R. Hysing and S. Turek. The Eikonal equation: Numerical efficiency vs. algorithmic complexity on quadrilateral grids. In Proceedings of Algoritmy 2005, pages 22–31, 2005.

[35] G.-S. Jiang and D. Peng. Weighted ENO schemes for Hamilton-Jacobi equations. SIAM J. Sci. Comput., 21(6):2126–2143, 2000.
[36] C. Y. Kao, S. Osher, and Y. Tsai. Fast sweeping methods for static Hamilton-Jacobi equations. SIAM J. Numer. Anal., 42(6):2612–2632, 2004-2005.

[37] J. Kiefer. Sequential minimax search for a maximum. P. Am. Math. Soc., 4:502, 1953.

[38] S. Kim and D. Folie. The group marching method: An O(n) level set Eikonal solver. SEG Technical Program Expanded Abstracts, 70:2297–2300, 2000.

[39] R. Kimmel and J. A. Sethian. Fast marching methods on triangulated domains. Proc. Natl. Acad. Sci., 95:8341–8435, 1998.

[40] R. Kimmel and J. A. Sethian. Optimal algorithm for shape from shading and path planning. J. Math. Imaging Vision, 14(3):237–244, May 2001.

[41] P. D. Lax and R. D. Richtmyer. Survey of the stability of linear finite difference equations. Comm. Pure Appl. Math., 9:267–293, 1956.

[42] F. Li, C.-W. Shu, Y.-T. Zhang, and H. Zhao. A second order discontinuous Galerkin fast sweeping method for Eikonal equations. J. Comput. Phys., 227:8191–8208, Sept. 2008.

[43] J. M. Maubach. Local bisection refinement for n-simplicial grids generated by reflection. J. Sci. Comput., 16(1):210–227, January 1995.

[44] I. M. Mitchell and S. Sastry. Continuous path planning with multiple constraints. Technical Report UCB/ERL M03/34, UC Berkeley Engineering Research Laboratory, August 2003.

[45] A. M. Oberman. Convergent difference schemes for degenerate elliptic and parabolic equations: Hamilton-Jacobi equations and free boundary problems. SIAM J. Numer. Anal., 44(2):879–895, 2006.

[46] S. Osher. A level set formulation for the solution of the Dirichlet problem for Hamilton-Jacobi equations. SIAM J. Math. Anal., 24(5):1145–1152, Sept. 1993.

[47] S. Osher and R. P. Fedkiw. Level set methods: An overview and some recent results. J. Comput. Phys., 169(2):463–502, May 2001.

[48] S. Osher and J. Sethian. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulation. J. Comput. Phys., 79:12–49, 1988.

[49] L. C. Polymenakos, D. P. Bertsekas, and J. N. Tsitsiklis. Implementation of efficient algorithms for globally optimal trajectories. IEEE Trans. Automat. Control, 43:278–283, 1998.

[50] J. Qian, Y. Zhang, and H. Zhao. A fast sweeping method for static convex Hamilton-Jacobi equations. J. Sci. Comput., 31(1-2):237–271, May 2007.

[51] J. Qian, Y. Zhang, and H. Zhao. Fast sweeping methods for Eikonal equations on triangulated meshes. SIAM J. Numer. Anal., 45:83–107, 2007.

[52] C. Rasch and T. Satzger. Remarks on the O(n) implementation of the fast marching method. IMA J. Numer. Anal., 29:806–813, 2009.

[53] E. Rouy and A. Tourin. A viscosity solution approach to shape-from-shading. SIAM J. Numer. Anal., 29(3):867–884, 1992.

[54] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River, New Jersey, 1995.

[55] J. A. Sethian. A fast marching level set method for monotonically advancing fronts. Proc. Natl. Acad. Sci., 93:1591–1595, 1996.

[56] J. A. Sethian. Level Set Methods. Cambridge University Press, Cambridge, UK, 1996.

[57] J. A. Sethian. Fast marching methods. SIAM Review, 41(2):199–235, 1999.

[58] J. A. Sethian and A. Vladimirsky. Fast methods for Eikonal and related Hamilton-Jacobi equations on unstructured meshes. Proc. Natl. Acad. Sci., 97(11):5699–5703, May 2000.

[59] J. A. Sethian and A. Vladimirsky. Ordered upwind methods for static Hamilton-Jacobi equations. Proc. Natl. Acad. Sci., 98(20):11069–11074, 2001.
[60] J. A. Sethian and A. Vladimirsky. Ordered upwind methods for static Hamilton-Jacobi equations: Theory and algorithms. SIAM J. Numer. Anal., 41(1):323–363, 2003.

[61] N. Z. Shor. Minimization Methods for Non-differentiable Functions. Springer-Verlag, New York, USA, 1985.

[62] Y.-H. R. Tsai, L.-T. Cheng, S. Osher, and H.-K. Zhao. Fast sweeping algorithms for a class of Hamilton-Jacobi equations. SIAM J. Numer. Anal., 41(2):673–694, 2003.

[63] J. N. Tsitsiklis. Efficient algorithms for globally optimal trajectories. In Proceedings of the 33rd Conference on Decision and Control, pages 1368–1373, 1994.

[64] J. N. Tsitsiklis. Efficient algorithms for globally optimal trajectories. IEEE Trans. Automat. Control, 40(9):1528–1538, 1995.

[65] A. Vladimirsky. Label-setting methods for multimode stochastic shortest path problems on graphs. Math. Oper. Res., 33(4):821–838, November 2008.

[66] T. Xiong, M. Zhang, Y.-T. Zhang, and C.-W. Shu. Fifth order fast sweeping WENO scheme for static Hamilton-Jacobi equations with accurate boundary treatment, 2009. Submitted to J. Sci. Comput.

[67] L. Yatziv, A. Bartesaghi, and G. Sapiro. O(n) implementation of the fast marching method. J. Comput. Phys., 212(2):393–399, March 2006.

[68] Y.-T. Zhang, H.-K. Zhao, and J. Qian. High order fast sweeping methods for static Hamilton-Jacobi equations. J. Sci. Comput., 29(1):25–56, Oct. 2006.

[69] Y.-T. Zhang, S. Chen, F. Li, H. Zhao, and C.-W. Shu. Uniformly accurate discontinuous Galerkin fast sweeping methods for Eikonal equations, 2009. Submitted to SIAM J. Sci. Comput.

[70] H. Zhao. A fast sweeping method for Eikonal equations. Math. Comp., 74(250):603–627, May 2004.
