Motion Planning Based on Uncertain Robot States in Dynamic Environments: A Receding Horizon Control Approach

by

Ali Mohandes
B.Sc., University of Tehran, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF APPLIED SCIENCE

in

THE COLLEGE OF GRADUATE STUDIES
(Mechanical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA
(Okanagan)

October 2014

© Ali Mohandes, 2014

Abstract

This thesis is concerned with trajectory generation for robots in dynamic environments with relatively narrow passages. In particular, it aims at developing motion planning schemes using receding horizon control (RHC) and mixed-integer linear programming (MILP). The thesis comprises two phases. In the first phase, a general nonlinear RHC framework is developed, utilizing existing algorithms, for motion planning of a robot with uncertain states. In this phase, the motion planning problem in the presence of arbitrarily shaped obstacles is tackled using nonlinear system and measurement equations. This method is then adapted to solve the problem of cooperating aerial and ground vehicles. An important application of the latter is automated wildfire suppression, which is investigated as a case study in this thesis. Simulation results demonstrate that the nonlinear RHC algorithm can effectively compute an optimal trajectory while handling the system uncertainties and constraints.

In the second phase, motion planning schemes for robots with uncertain states are developed under more specific assumptions (i.e., linear system and measurement equations and convex polyhedral obstacles) using receding horizon MILP (RHMILP). This phase builds upon the existing RHMILP motion planning methods by introducing two additional features: a state estimation algorithm and an obstacle avoidance constraint.
First, the state estimation includes a partially closed-loop strategy that estimates the future states of the robot based on the anticipated future state measurements, which in turn helps to reduce the uncertainty of the future states. Second, the obstacle avoidance constraint is designed to keep the line connecting each pair of sequential robot positions inside the configuration space (outside the obstacles) at all discrete times, analogous to maintaining line-of-sight (LOS) between each pair of sequential robot positions. The LOS-based obstacle avoidance is advantageous in that it can typically provide the same level of safety at time steps larger than those of the conventional approaches, hence creating trajectories that are more efficient at both the planning and execution stages. Numerical simulations and experiments indicate that proper inclusion of the anticipated future measurements contributes to a less conservative (more feasible) path.

Preface

This research was conducted in the Advanced Control and Intelligent Systems (ACIS) laboratory at the School of Engineering at the University of British Columbia, Okanagan campus.

A version of Chapter 2 has been accepted for presentation at the IEEE-VTC 2014 Conference [1] (A. Mohandes, M. Farrokhsiar, and H. Najjaran, "A Motion Planning Scheme for Automated Wildfire Suppression").

A version of Chapter 2 and Chapter 4 has been presented at the CSME International Congress 2014 [2] (A. Mohandes, M. Farrokhsiar, and H. Najjaran, "Robust MILP motion planning with LOS-based obstacle avoidance constraint"). This paper won the best student paper award.

A version of Chapter 2 and Chapter 4 has been submitted to the Journal of Autonomous Robots [3] (A. Mohandes, M. Farrokhsiar, and H. Najjaran, "Probabilistically Safe MILP Motion Planning with LOS-based Obstacle Avoidance").

All of the papers have the same three authors, whose contributions are as follows.
I, Ali Mohandes, was the lead investigator, responsible for conducting the research, developing the algorithms, programming, and writing the papers. Morteza Farrokhsiar, the second author and a PhD candidate at the ACIS laboratory, was involved in the early stages of concept formation and contributed to manuscript editing as a mentor. Homayoun Najjaran, the third and supervisory author, was involved throughout the project in concept formation and manuscript editing.

The material of the conference and journal papers listed above is included in this thesis. Figures and tables reproduced from the papers are cited accordingly, and text from the papers is used throughout the thesis.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Symbols
Acknowledgment
Dedication
Chapter 1 Introduction
1.1 Motivation
1.2 Literature Review
1.2.1 Non-RHC Motion Planning Methods
1.2.2 RHC-based Motion Planning
1.3 Statement of Contributions
1.4 Thesis Overview
Chapter 2 Receding Horizon Trajectory Planning
2.1 Background and Overview
2.2 Problem Statement
2.3 State Estimation
2.3.1 Filtering (Recursive Bayesian Estimation)
2.3.1.1 Linear Gaussian System: Kalman Filter
2.3.1.2 Nonlinear Gaussian Systems: Extended Kalman Filter
2.3.2 Prediction
2.3.2.1 Open Loop Prediction
2.3.2.2 Approximately Closed-loop Prediction
2.3.2.3 Partially Closed-loop Prediction
2.4 RHC Algorithm
2.4.1 Planning
2.4.1.1 Constraints
2.4.2 Execution
2.5 Motion Planning for Automated Firefighting: A Case Study
2.5.1 System Modeling
2.5.2 Constraints
2.5.3 RHC Framework
2.5.4 Results
2.6 Summary
Chapter 3 MILP Receding Horizon Trajectory Planning
3.1 Background and Overview
3.2 Problem Statement
3.3 MILP Formulation
3.3.1 Obstacle Avoidance
3.3.1.1 Conventional Obstacle Avoidance
3.3.1.2 LOS-based Obstacle Avoidance
3.3.2 Vehicle Dynamics
3.3.3 RHMILP
3.4 Results
3.5 Summary
Chapter 4 Robust MILP Receding Horizon Trajectory Planning
4.1 Background and Overview
4.2 Problem Statement
4.2.1 State Estimation
4.2.1.1 Open Loop Prediction
4.2.1.2 Partially Closed-loop (PCL) Prediction
4.2.2 RHC Scheme
4.3 Chance Constraints
4.3.1 Single Linear Chance Constraints
4.3.2 Sets of Linear Chance Constraints
4.3.2.1 Disjunctive Linear Constraints
4.3.2.2 Conjunctive Linear Constraints
4.4 MILP Formulation
4.4.1 RHMILP
4.4.2 Obstacle Avoidance
4.4.2.1 Conventional Obstacle Avoidance
4.4.2.2 LOS-based Obstacle Avoidance
4.4.2.3 Convex Polyhedral Obstacle
4.5 Simulation Results
4.6 Experimental Results
4.6.1 Experimental Test-bed
4.6.2 Results
4.7 Summary
Chapter 5 Conclusions and Future Work
5.1 Summary
5.2 Future Work
Bibliography

List of Tables

Table 3.1 Simulation scenarios are designed to study the efficacy of the proposed algorithm.
Table 4.1 Example values for the level of confidence   versus the            .
Table 4.2 Simulation scenarios are designed to study the efficacy of the proposed algorithm.
Table 4.3 The failure rate for different execution horizons is shown. As the execution horizon increases, the failure rate also increases [3].
Table 4.4 Two experiments are designed to study the efficacy of the proposed algorithm.

List of Figures

Figure 2.1 The system output over the planning horizon is predicted and the corresponding control actions are obtained. Thereafter, the control actions are partially implemented. This process is repeated until the goal is achieved [1].
Figure 2.2 The UGV's objective is to move to the desired location (fire). The UAV provides the localization data for the system. The UAV uses GPS for self-localization and vision for localizing the UGV and the fire.
The localization data is then transferred to the UGV, i.e., the control unit, through vehicle-to-vehicle (V2V) communication.
Figure 2.3 The designed path for the UGV has to drive it from the starting point to the target point (fire) while avoiding obstacles.
Figure 2.4 (a) The UGV reduces its distance to the fire. The UAV also constantly decreases its distance to the UGV and the fire. (b) As a result of the motion of the UAV, the uncertainty associated with the system decreases.
Figure 2.5 The objective for the UGV is to reach the fire while avoiding the circular obstacle. Meanwhile, the UAV tries to increase the accuracy of the estimation scheme by getting closer to the UGV and the fire while avoiding the no-fly zone.
Figure 2.6 (a) As the UGV gets closer to the fire, its velocities in both directions approach zero to maintain the distance between the UGV and the fire. (b) As the UAV approaches the state where the uncertainty is minimized, its velocity decreases to zero so that the UAV stays at its optimal state.
Figure 3.1 The top left quarter of the original and enlarged obstacle. The first enlargement enables the point-mass assumption for the robot. The second enlargement is related to discrete-time planning [2].
Figure 3.2 (a) The robot that follows the LOS-based obstacle avoidance trajectory performs a detour around the corners of the obstacle, thus decreasing the chance of collision with an unforeseen obstacle located behind the main obstacle.
(b) Obstacle enlargement in the conventional obstacle avoidance method blocks the robot's path and forces the trajectory to go around the obstacles. (c) The coordinates of the dynamic obstacle with respect to time are shown with different line styles. Because it requires a larger obstacle enlargement, the conventional obstacle avoidance method is not able to generate a feasible trajectory for the robot. In contrast, the path generated by the LOS-based obstacle avoidance method tackles both dynamic and static obstacles effectively and reaches the goal position [3].
Figure 4.1    is a single-variate normal random variable        . When the variance    is fixed,     is the necessary and sufficient condition for       .
Figure 4.2 A two-dimensional polyhedral obstacle with 5 faces (a pentagon) is schematically shown. The polyhedron is defined by               where         represents the     face [3].
Figure 4.3 A path planning problem is solved in three different cases: OL (green), PCL with low-quality measurements (red), and PCL with high-quality measurements (grey). In the OL approach, the uncertainty of the future robot position grows unboundedly. In contrast, the PCL approach bounds the growth of uncertainty. (a) The grey line follows the intuitively optimal solution while the other two diverge from the straight line. (b) The future robot position uncertainty for the three approaches is plotted to demonstrate that the PCL bounds the growth of uncertainty. (c) The planned and executed paths for the OL are considerably distinct. In contrast, efficient incorporation of anticipated future measurements in the PCL results in very similar planned and executed paths [3].
Figure 4.4 The effect of changing the level of confidence on the designed path is studied. The paths generated for uncertain robot states are generally more conservative than the path for the deterministic environment (black dashed line). As the level of confidence is increased, the robot path between the obstacles moves further away to avoid the obstacles. For          the path can no longer go between the obstacles and has to go around them [3].
Figure 4.5 The designed trajectory is dependent upon the planning horizon, as the planned trajectories differ for different planning horizon values.
Figure 4.6 The interconnectivity of MATLAB, ROS, and Gazebo in the experimental platform is illustrated. ROS transfers the localization request and localization data from MATLAB to Gazebo and vice versa. It also executes the planned motions and control commands ordered by MATLAB on Gazebo [3].
Figure 4.7 The robot starting and final points are          and         , respectively [3].
Figure 4.8 (a) In the beginning of the process, there are only two obstacles to be avoided. (b) In the middle of the process, a third obstacle is added, which occludes the initial path [3].
Figure 4.9 (a) In the planning process, uncertainties are taken into account so that disturbances in the robot control input do not cause the executed trajectory to collide with the obstacles.
(b) The initial trajectory of the robot (before the appearance of the dynamic obstacle) is compared with the adjusted trajectory (after the appearance of the dynamic obstacle) and the executed trajectory. (c) The robot input in the x and y directions is plotted and compared to the upper and lower bounds (red lines) [3].

List of Symbols

Position of robot in the x direction
Position of robot in the y direction
Position of robot in the z direction
Time interval
Position vector of robot at time interval
Velocity vector of robot at time interval
Acceleration vector of robot at time interval
State vector of the robot at time interval
Covariance of the state of robot at time interval
Input vector of robot at time interval
System dynamics disturbance at time interval
Covariance of system dynamics disturbance at time interval
Normal random variable
Measurement vector at time interval
Measurement noise at time interval
Covariance of measurement noise at time interval
Nonlinear measurement function
Nonlinear system dynamics function
Probability of occurrence of event
State matrix in LTI systems
Input matrix in LTI systems
Disturbance matrix in LTI systems
State matrix in linear measurement functions
Measurement noise matrix in linear measurement functions
Linearized state matrix in nonlinear systems
Linearized disturbance matrix in nonlinear systems
Linearized state matrix in nonlinear measurement functions
Linearized measurement noise matrix in nonlinear measurement functions
Stage cost function in RHC
Final cost function in RHC
Desired state
Nonlinear constraint
Admissible set of inputs
Admissible set of states
Level of confidence
Existing obstacles at time interval
Planning horizon in RHC
Execution horizon in RHC
Identity matrix of size
Zero matrix of size
Sampling time
Logical conjunction
Logical disjunction
Robot diameter
A value between 0 and 1 that defines the position of interest on a line
A discrete set representing
Members of
Number of members of
System input lower and upper bound vectors
Lower and upper bound vectors for system input derivative

Acknowledgment

I would, first of all, like to express my gratitude to my supervisor, Dr. Homayoun Najjaran. It would have been impossible to successfully finish my academic journey without your help and support. Thank you for the continued trust you placed in me.

I am thankful to the Natural Sciences and Engineering Research Council of Canada (NSERC) for providing me with financial support throughout the course of my research.

A big word of appreciation goes to Nima, my colleague and mentor, for all the fruitful discussions on research, ideology, and politics. Thanks for teaching me how to think.

To my dearest friends, Reza, Mohammad, and Brie: I feel exceedingly privileged to have you. I truly appreciate you being there for me and supporting me throughout.

To my Iranian friends, Behzad, Ehsan, Mo, and AR: I will always remember the joyful moments of playing volleyball, gaming, barbecuing, and swimming.

To my colleagues in the ACIS lab, Mohammad, Farzad, Nikolai, Hadi, and George: Thanks for creating a cooperative, friendly environment in the lab by sharing all the discussions, laughs, and support.

Last but not least, I am eternally grateful for the endless love, sacrifice, and support that I have received from my family my entire life. To them I dedicate this thesis.

Dedication

To my father, for his inspirational support,
to my mother, for her endless sacrifice,
and to my siblings, for their continued love.
Chapter 1 Introduction

This thesis is focused on the trajectory generation problem of robots in dynamic, cluttered environments. We aim at tackling scenarios in which uncertainty in the robot states plagues the system. The planned trajectories must be safe (in the sense of collision avoidance), robust against uncertainties (in the sense that the uncertainties affecting the system do not bring the robot to an unsafe situation), and optimal with respect to certain criteria, while satisfying possible constraints on the system input, the system states, or both.

1.1 Motivation

In recent years, unmanned vehicles have received increasing attention due to their various military and civilian applications. Different varieties of unmanned vehicles are being widely developed, including but not limited to unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), and unmanned surface vehicles (USVs). The obvious reduction of human safety risk is one of the primary reasons for deploying these types of vehicles, as they operate without requiring a human onboard. Unmanned vehicles can therefore widely contribute to performing dull, dirty, and dangerous tasks such as search and rescue missions, household chores, and surveillance of environments that are inaccessible to humans. Moreover, as the level of autonomy of unmanned vehicles increases, they become more cost effective and more capable of operating in harsh and remote environments. One famous example of a fully autonomous (automated) unmanned vehicle is the Google driverless car [4]. This car is capable of sensing its surroundings and navigating through the environment with no human intervention. More recently, the iRobot Ava 500 telepresence robot has been announced; it is another example of an autonomous robot, designed to provide easy navigation in labs, houses, and offices, particularly for people who are not present on site [5].
The emergence of autonomous robots should be perceived as a motivation for continuing research on autonomous robots at a larger scale, as the existing algorithms can and must be improved in every aspect. The latter has been the line of thought in this thesis. In order to be fully autonomous, a vehicle or robot requires a few key capabilities, including the ability to localize itself, map the environment, and plan its motions. Additionally, for applications where a team of autonomous robots is employed, communication and task allocation capabilities are added to this list. These functionalities should guarantee the flexibility, robustness, efficiency, resilience, and safety of the overall system. The robotics literature reflects a continuing effort to increase the performance of autonomous vehicles, including but not limited to simultaneous localization and mapping (SLAM) [6–8], task allocation within a team of robots [9–11], and motion planning [12–14]. In this thesis, the problem of interest is robot motion planning. In the robotics literature, motion planning, also known as trajectory generation and the piano mover's problem, is concerned with finding the control actions that will take a single robot or a team of robots from an initial state to a goal state [14]. The environment may contain static and dynamic obstacles with which the robot must avoid collision. An obstacle with constant coordinates is called a static obstacle; dynamic obstacles are non-static obstacles. For example, while navigating through the environment, other moving autonomous agents are considered dynamic obstacles and the robot path must not collide with them. Furthermore, the available system information may be plagued by different sources of uncertainty, including but not limited to process disturbances, measurement noise, and the dynamicity of the environment.
Safe and robust motion planning frameworks must be created to design trajectories that ensure the safety of the robots, account for system uncertainties, and are optimal with respect to certain criteria (e.g., time, fuel, or both). In the remainder of this chapter, an overview of the motion planning literature is presented and the thesis is outlined.

1.2 Literature Review

This section presents a brief overview of the existing motion planning algorithms in the robotics literature. Motion planning algorithms can be categorized from different perspectives. They have been developed by either the dynamics and control community [13, 14] or the robotics and artificial intelligence community [15, 16]. While the former is more concerned with dynamical behavior and with state and control constraints, the latter is focused on computational issues and on developing frameworks capable of real-time control [17]. Historically, research on the motion planning problem was first conducted for deterministic systems with no differential constraints and was later expanded to account for both uncertainties and differential constraints [17, 18]. We categorize the motion planning algorithms into: (i) non-RHC motion planning, and (ii) RHC-based motion planning. It is worthwhile to mention that in this thesis, the phrases "trajectory planning", "trajectory generation", and "planning" are used interchangeably with "motion planning". For a more extensive background on motion planning algorithms, interested readers are referred to the classic textbook by Latombe [12], where many basic concepts are introduced. The "Planning Algorithms" book by LaValle [14] studies more recent techniques and algorithms related to motion planning. The surveys by Goerzen et al. [17], Dadkhah and Mettler [19], and Hoy et al.
[20] are more recent reports on the existing motion planning algorithms for deterministic and uncertain environments.

1.2.1 Non-RHC Motion Planning Methods

The very first motion planning algorithms were developed based on the optimal control principle [21] and the dynamic programming technique [22] and dealt with holonomic systems in obstacle-free environments [23]. The first methods to tackle the motion planning problem in the presence of obstacles were based on graph search algorithms such as Dijkstra's [24, 25] and A* [26]. The exact and approximate cell decomposition approaches are well-known examples, in which the obstacle-free environment (the configuration space) is divided into a number of cells and the goal is to find and connect a sequence of neighboring cells to build a path for the robot [27, 28]. The so-called roadmap approaches (e.g., visibility graphs [29] and Voronoi diagrams [30]) are other graph search methods. In these methods, a line connecting two cells is a feasible path if it is located inside the configuration space. A network of all feasible paths is then constructed, and the path planning problem is solved using a graph search algorithm, such as Dijkstra's, to connect the initial and final configurations with the shortest route through the set of feasible paths. These methods do not consider the differential constraints or nonlinearities of the system in their solution. Thus, although a collision-free path for the robot can be constructed using these approaches, the robot might fail to follow the designed path due to its nonlinear dynamics or velocity constraints. Potential field methods are another group of techniques, which solve the motion planning problem through a vector minimization [31–33]. A vector is assigned to each point in the workspace. The vector assigned to the desired state has the minimum value and functions as an attractive force, while the obstacles function as repulsive forces.
The vector value for each point is calculated based on these attractive and repulsive forces. Since the goal owns the minimum potential, minimizing the vector value will result in the robot traveling from its initial configuration to the goal while trying to avoid the obstacles (due to the repulsive forces). The original approach proposed by Khatib [31] was not capable of dealing with the differential constraints of the system. More recent field techniques can produce smooth and dynamically feasible trajectories and take the mentioned constraints into account [32, 33]; however, they have two main disadvantages. First, robot navigation can fail due to the robot getting trapped in local minima. Second, the obstacle avoidance constraints are not modeled as hard constraints and might be violated. A group of more recent algorithms that are powerful tools for planning in high-dimensional spaces are sampling-based algorithms [34, 35]. Suitable for systems with differential and non-holonomic constraints, these approaches are based on generating randomized robot configurations in the workspace. The idea is very much like roadmaps or cell decompositions (combinatorial planning techniques in general) in the sense that two configurations can be connected to form a feasible path if the connecting line is located outside of the obstacle region. However, the difference between the sampling-based algorithms and the combinatorial approaches is that the former generates random samples based on the dynamics and constraints of the system, i.e., the samples are generated in the free state space. Examples of famous sampling-based algorithms are tree-based planners (e.g., rapidly-exploring random trees (RRTs)) [36] and probabilistic roadmap techniques (PRMs) [37].
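As a rough sketch of the sampling-based idea (not an implementation from any of the cited works), a minimal holonomic 2D RRT might look as follows; the workspace bounds, step size, goal bias, and circular obstacle are all invented for illustration:

```python
import math, random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    """Minimal holonomic 2D RRT: grow a tree of collision-free samples
    toward the goal; return the path found, or None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Bias sampling toward the goal occasionally to speed convergence.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        # Nearest existing node (Euclidean metric).
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue  # sample lies inside the obstacle: reject
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Circular obstacle of radius 2 centered at (5, 5) in a 10 x 10 workspace.
free = lambda p: math.dist(p, (5.0, 5.0)) > 2.0
path = rrt((1.0, 1.0), (9.0, 9.0), free)
```

Note that this sketch samples configurations directly, so it ignores differential constraints; kinodynamic variants instead extend the tree by forward-simulating the system dynamics.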
1.2.2 RHC-based Motion Planning

Mathematical programming methods are techniques that solve the trajectory planning problem through a numerical optimization framework and are thus also called trajectory optimization methods [17, 38]. The early trajectory optimization algorithms were developed for navigation of robots through obstacle-free environments using optimal control and dynamic programming [23]. While the earlier groups of these approaches only tackled the motion planning problem in deterministic environments, the next generations became capable of dealing with uncertain environments as well [39]. However, they were computationally expensive and could not readily handle hard constraints [40]. Receding horizon control (RHC) algorithms are mathematical programming approaches that aim at reducing the complexity of the motion planning problem and are able to handle system constraints and uncertainties in a systematic way [41]. RHC, also known as model predictive control (MPC), is a suboptimal control scheme that is based on iterative constrained optimization over a finite horizon. The RHC algorithm is as follows. At each iteration, a cost function generated based on the system model is optimized over a so-called planning horizon and a set of optimal system inputs is obtained. Thereafter, the optimal control input set is partially executed over a shorter horizon, called the execution horizon, and the new system states are measured. The operation is repeated until the goal is achieved. In the optimization step, the cost function is calculated based on the predicted future system states and control inputs. In doing so, the system model is used to predict the future system states, thus ensuring that the offered solution meets the system constraints, e.g., the differential constraints and nonlinearities of the system.
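The plan/execute iteration described above can be sketched in a few lines. In the sketch below, the 1D double-integrator plant, the discrete input set, and the cost weights are all invented for illustration, and a brute-force search over short input sequences stands in for a proper constrained optimizer:

```python
import itertools

# Hypothetical 1D double-integrator plant: state = (position, velocity).
DT = 0.2
def step(x, u):
    p, v = x
    return (p + DT * v, v + DT * u)

def plan(x, goal, horizon=4, inputs=(-1.0, 0.0, 1.0)):
    """Planning stage: exhaustively search input sequences over the
    planning horizon and return the one minimizing an additive cost."""
    def cost(seq):
        xx, c = x, 0.0
        for u in seq:
            xx = step(xx, u)
            c += (xx[0] - goal) ** 2 + 0.05 * xx[1] ** 2 + 0.001 * u ** 2
        return c
    return min(itertools.product(inputs, repeat=horizon), key=cost)

def rhc(x0, goal, steps=200, tol=0.05):
    """Receding-horizon loop: plan over the full horizon, execute only
    the first input, then re-plan from the newly observed state."""
    x = x0
    for _ in range(steps):
        u_seq = plan(x, goal)
        x = step(x, u_seq[0])          # execution horizon of one step
        if abs(x[0] - goal) < tol and abs(x[1]) < tol:
            return x
    return x

final = rhc((0.0, 0.0), 1.0)
```

In a real planner the inner search is replaced by a numerical optimizer subject to dynamics, input, and obstacle constraints, but the outer loop has exactly this plan-then-partially-execute shape.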
RHC is beneficial in that it can be utilized for multi-system control, handle system constraints and uncertainties, and generate solutions that allow the systems to work closer to their constraints, thus offering good performance [42]. The interested reader is referred to the studies on the structure, stability, and optimality of RHC by Goodwin et al. [41], Mayne et al. [43], and Morari and Lee [44]. RHC was originally used in the process industries to control chemical plants [45]. Jadbabaie [46] was the first to utilize RHC as a framework for motion planning of nonlinear systems in obstacle-free environments. RHC was later used by Singh and Fuller as a control scheme for robots to follow a (pre-determined) trajectory around obstacles in a non-convex region [47]. Originally, deterministic systems were of interest [48]; however, RHC has been extended to stochastic systems as well [40]. The latter has been addressed by assuming either probabilistic uncertainties [40] or set-bounded uncertainties [49]. In this thesis, stochastic systems with probabilistic uncertainties are of interest. In such systems, two important features are noticeable. First, the system states are not exactly known and have to be estimated. Second, since only an estimate of the states is available, chance constraints are utilized instead of deterministic constraints, i.e., the probability of constraint violation has to be below a certain value [50]. Yan and Bitmead [51] were the first to incorporate uncertainties, and thus state estimation, into stochastic RHC for a linear Gaussian system. In doing so, if both the initial state estimate and the uncertainties are Gaussian, the future system states can also be modeled as Gaussian distributions. In their work, however, future information was ignored, which resulted in an open-loop future state estimation and a growing predicted state covariance, contributing to overly conservative solutions.
To overcome this issue, they introduced a closed-loop covariance, i.e., the one-step-ahead covariance is considered as the predicted future state covariance for all time steps [52]. Blackmore et al. adopted the stochastic RHC framework presented in [52] to develop a motion planning scheme for robots with uncertain positions in the presence of obstacles [53]. In another work, Blackmore et al. [54] developed a framework that also assumed the initial system states and the sources of the uncertainties to be Gaussian. However, this work predicted the distribution of the future states by using sampled particles. Du Toit and Burdick [55] extended the state estimation approach presented by Yan and Bitmead [52] by incorporating the anticipated future information into the estimation algorithm. This state estimation algorithm, named partially closed-loop state estimation, assumes that the most likely future measurements occur. In doing so, the fact that future measurements are going to happen is appreciated while the least information gain is introduced to the controller [40]. As a result, the predicted state covariance is bounded and does not grow substantially over time, which contributes to a more useful solution. The general RHC motion planning scheme is structured as follows. Future system states are predicted based on the initial system states and the (to-be-determined) system inputs. A nonlinear cost function, which is the core of the planning problem, is optimized over a finite horizon. The optimization is subject to linear and nonlinear constraints that guarantee the feasibility of the planned trajectory in terms of both collision avoidance and kinodynamic constraints. However, under certain assumptions, i.e., robots with linear dynamics and convex polyhedral obstacles, the RHC scheme for trajectory planning can be modeled as receding horizon mixed-integer linear programming (RHMILP) [56].
In MILP, discrete logic and integer variables are included in a continuous linear optimization problem. In doing so, the kinodynamic constraints are modeled as continuous linear constraints while logical constraints, such as collision and obstacle avoidance, are formulated using integer variables. For more information on MILP and its applications, the interested reader is referred to reference [57]. The formulation of motion planning schemes using RHMILP was first reported by Schouwenaars et al. [56], where the problem was solved for deterministic environments in the presence of rectangular obstacles. In the original RHMILP formulation, safety, in the sense of collision avoidance, is guaranteed by enforcing the robot to remain outside of the obstacle region at all discrete planning time steps. Later, reference [58] introduced a further safety constraint for RHMILP motion planning, i.e., ensuring that at all time steps there exists a path to a state that is naturally collision-free. The RHMILP motion planning framework has also been extended to problems with uncertain environments where the uncertainties are modeled as set-bounded variables [49, 59]. RHMILP handles hard constraints, e.g., obstacle and collision avoidance, systematically, while offering a more computationally viable framework compared to general nonlinear programming approaches. Thanks to the emergence of faster computers and powerful optimization software such as CPLEX [60], RHMILP is considered to be a suitable framework for real-time trajectory planning [61, 62].

1.3 Statement of Contributions

In this thesis we build upon and introduce new features to the existing RHMILP literature. The thesis has two contributions, as follows.

• We formulate a new state estimation algorithm for planning under uncertainty in RHMILP. As mentioned above, the current RHMILP frameworks either assume deterministic environments or characterize the uncertainty as a set-bounded model.
In this thesis, however, the uncertainty is assumed to be of a probabilistic nature. The partially closed-loop (PCL) state estimation strategy is utilized to predict the future system states. In doing so, the future system states are estimated based on the anticipated future state measurements, which in turn helps to reduce the uncertainty of the future states.

• We introduce a new obstacle avoidance constraint for motion planning of robots with deterministic and uncertain states in the RHMILP framework. The obstacle avoidance constraint is designed to secure a line connecting each pair of sequential robot positions inside the configuration space (outside the obstacles) at all discrete time steps, analogous to maintaining line-of-sight (LOS) between each pair of sequential positions of the robot. In the conventional RHMILP obstacle avoidance formulation, obstacle avoidance is guaranteed by locating the robot outside of the obstacles at all discrete time steps. LOS-based obstacle avoidance is advantageous in that it can typically provide the same level of safety at time steps larger than those of the conventional approaches, hence creating trajectories which are more effective at both the planning and execution stages.

1.4 Thesis Overview

The thesis is organized as follows. Chapter 2 formulates the general receding horizon scheme that can be used for trajectory planning of single and multiple robots. The problem is formulated as a nonlinear program. The PCL strategy is utilized for future state prediction. The developed scheme is examined in a case study, i.e., cooperative motion planning of unmanned vehicles for wildfire suppression. Numerical simulations are carried out to study the performance of the proposed scheme. In Chapter 3, the RHMILP framework for motion planning of robots in deterministic environments is studied. The robots are assumed to have linear dynamics and the obstacles are convex polyhedra.
The LOS-based obstacle avoidance constraint is formulated and incorporated into the RHMILP framework. Numerical simulations have been carried out to study the efficacy of the proposed algorithms in dynamic, cluttered environments with narrow passages. Chapter 4 extends the RHMILP scheme developed in Chapter 3 to environments where the robot state is uncertain. The PCL strategy is utilized for future state prediction. The safety constraints formulated in Chapter 3, including LOS-based obstacle avoidance, are extended for uncertain robot states using a probabilistic framework (chance constraints) to ensure the robustness of the designed trajectory against the uncertainties. We then show that despite extending the algorithm to environments with uncertain robot states, it can still be modeled in an RHMILP framework. Numerical simulations and experimental studies are carried out to investigate the applicability of the proposed framework in dynamic, cluttered environments with uncertain robot states. Chapter 5 then concludes the thesis and briefly points out some intended future directions of the research.

Chapter 2 Receding Horizon Trajectory Planning

2.1 Background and Overview

This chapter is concerned with the development of a general motion planning framework using RHC, general in the sense that the system and measurement equations are nonlinear and the obstacles are arbitrarily shaped. We are interested in designing a (motion) planning and control scheme that drives the vehicles optimally in the environment (optimal in the sense that the performance of the system is optimized with respect to some attributes) and is probing (minimizes system uncertainty). The generated trajectories must assure the safety of the system (safety in the sense of collision avoidance) even in the presence of uncertainties, i.e., be robust against uncertainties. The presented framework can be used for single or multiple robot motion planning tasks.
RHC is employed as the trajectory planning and control scheme. RHC is an optimization-based framework which is able to handle constraints and uncertainties explicitly [41]. In RHC, at each sampling time, an open-loop optimal control problem (balancing a tradeoff between the control action and the system states) is solved over a finite horizon. Then, the control action sequence is applied to the system for a period, after which the optimal control problem is re-solved based upon the new states of the system. The ability to explicitly handle the constraints has made RHC an important method for motion planning; see for example [63–65]. Due to the existence of uncertainties in the problem, the current and future system states cannot be exactly determined. The system states are associated with uncertainties mainly due to the following reasons:

• The initial state of the system is assumed to be reported by a state estimator, e.g., a Kalman filter.
• The system model is not exactly known. This uncertainty arises due to system model linearization or modeling errors.
• Process disturbances (e.g., wind forces) affect the system motion.
• The measurement data is corrupted by measurement noise.

The formulation of the RHC framework includes prediction and filtering algorithms. In particular, the partially closed-loop (PCL) strategy proposed by Du Toit [40] is of interest as the prediction algorithm. In the PCL state estimation method, at each future time step, the most probable measurement is assumed to occur. This chapter is organized as follows. The problem of interest and the system and measurement equations are defined in Section 2.2. In Section 2.3, the state estimation algorithms (filtering and prediction) are formulated. The RHC scheme is developed in Section 2.4 and consists of two steps: planning and execution. A novel case study for motion planning (automated firefighting) is presented in Section 2.5.
The presented case study is novel in the sense that the proposed framework, to the author's knowledge, has not been utilized before for an automated firefighting problem. The presented scheme is formulated to handle various sources of uncertainty as well as environment constraints by incorporating them into the conventional nonlinear model predictive control. Finally, the concluding remarks are summarized in Section 2.6.

2.2 Problem Statement

This section presents the problem setup for motion planning of robots. The dynamics of the vehicles (robots) in discrete time is described as

x_{k+1} = f(x_k, u_k) + w_k,    (2.1)

where x_k and u_k are the system state and input vectors at time step k, respectively. The dimensions of the system state and system input vectors are n_x × 1 and n_u × 1, respectively. It is assumed that the initial state of the system, x_0, is given a priori and has a Gaussian distribution x_0 ~ N(\hat{x}_0, P_0). The robot's motion is subject to process disturbance and system model uncertainty, which are modeled cumulatively as a Gaussian white noise w_k distributed according to w_k ~ N(0, Q_k). The state measurement mapping for the system is described as

y_k = h(x_k) + v_k.    (2.2)

Here, y_k is the n_y × 1 vector representing the system output and v_k ~ N(0, R_k) is Gaussian white noise that represents the measurement noise. We aim at developing a motion planning and control framework that delivers the robots from their starting configuration to their goal while avoiding obstacles.

2.3 State Estimation

Initial uncertainty of the robot states, as well as the existence of process disturbances and motion model uncertainties, causes the system states to be continuously associated with uncertainty. These uncertainties can contribute to deviation of the robot from its planned trajectory. Therefore, failure to account for system uncertainties might decrease the safety of the planned trajectory, safety in the sense of collision avoidance.
The first step towards handling uncertainties is to determine their effect on the system states. In doing so, the system states are continuously estimated, using the available measurements, the system equations, and the uncertainties, as a probabilistic distribution. The state estimation framework in this thesis is divided into two parts: filtering and prediction. In filtering, the current system states are estimated using the past and current measurements. In prediction, which is of interest to planning, the uncertainty of the future states is assessed (and the future states are predicted) using the currently available information.

2.3.1 Filtering (Recursive Bayesian Estimation)

The recursive Bayesian estimation framework is utilized for filtering [66]. In doing so, the mathematical system model and the measurements are fused to estimate a probability density function for the system states, i.e., p(x_k | y_{1:k}, u_{0:k-1}). In order for the estimation process to be formulated recursively, it is assumed that the states form a Markov process. This means that, knowing the current state estimate and the system input, the future states can be predicted independently of the earlier states. More precisely, earlier system data (states, measurements, and system inputs) do not add any additional information to the prediction of the future states [66]. The recursive Bayesian state estimation framework contains two steps: prediction and update. In the prediction step, the system dynamics model is used to calculate the probability of the current system states. In the update step, the latest system measurements are incorporated into the estimate. It was noted by Kalman [67] that incorporating the measurements up to the current time (and the system mathematical model) into the estimation framework leads to an optimal estimate of the system state.
In Kalman's formulation, this estimate is obtained by solving a least-squares error problem where the square of the state estimation error (the variance) is minimized [66].

2.3.1.1 Linear Gaussian System: Kalman Filter

In general, a closed-form formulation for optimal state estimation does not exist. However, when the system dynamics and measurement equations are linear and the uncertainties are modeled as Gaussian distributions, a closed-form formulation for state estimation exists and is called the Kalman filter. The system dynamics and measurement equations are modeled linearly as

x_{k+1} = A_k x_k + B_k u_k + w_k,    (2.3)
y_k = C_k x_k + v_k,    (2.4)

where A_k, B_k, and C_k are known matrices. Knowing the a priori state estimate x_{k|k} ~ N(\hat{x}_{k|k}, P_{k|k}), the measurement at the current time y_{k+1}, and the measurement and system motion models, the Kalman filter estimates the current system state x_{k+1|k+1} ~ N(\hat{x}_{k+1|k+1}, P_{k+1|k+1}) in a two-step process as follows. The prediction step is

\hat{x}_{k+1|k} = A_k \hat{x}_{k|k} + B_k u_k,
P_{k+1|k} = A_k P_{k|k} A_k^T + Q_k,    (2.5)

and the update step is

K_{k+1} = P_{k+1|k} C_{k+1}^T (C_{k+1} P_{k+1|k} C_{k+1}^T + R_{k+1})^{-1},
\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} (y_{k+1} - C_{k+1} \hat{x}_{k+1|k}),
P_{k+1|k+1} = (I - K_{k+1} C_{k+1}) P_{k+1|k},    (2.6)

where K_{k+1} is called the Kalman gain.

2.3.1.2 Nonlinear Gaussian Systems: Extended Kalman Filter

The Kalman filter, presented in Section 2.3.1.1, is not applicable to systems with nonlinear equations or non-Gaussian uncertainties. In the case of nonlinear dynamics and measurement equations, the states do not remain Gaussian after passing through the system equations. Hence, the optimal filter (the solution to the least-squares problem) has to be developed for non-Gaussian distributions, resulting in a higher computational burden [68]. The extended Kalman filter (EKF) is a non-optimal approach for state estimation of systems with nonlinear equations. It is an approximation, since the nonlinear equations are linearized and a Kalman filter is derived for the linearized system.
For a system described by Eqs. (2.1) and (2.2), the prediction step of the EKF is given by

\hat{x}_{k+1|k} = f(\hat{x}_{k|k}, u_k),
P_{k+1|k} = \tilde{A}_k P_{k|k} \tilde{A}_k^T + Q_k,    (2.7)

and the update step is

K_{k+1} = P_{k+1|k} \tilde{C}_{k+1}^T (\tilde{C}_{k+1} P_{k+1|k} \tilde{C}_{k+1}^T + R_{k+1})^{-1},
\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} (y_{k+1} - h(\hat{x}_{k+1|k})),
P_{k+1|k+1} = (I - K_{k+1} \tilde{C}_{k+1}) P_{k+1|k}.    (2.8)

In Eqs. (2.7) and (2.8), \tilde{A}_k = \partial f / \partial x, evaluated at (\hat{x}_{k|k}, u_k), and \tilde{C}_{k+1} = \partial h / \partial x, evaluated at \hat{x}_{k+1|k}, are the linearized approximations of the system and measurement equations, derived by first-order Taylor series expansion. These matrices are analogous to A_k and C_{k+1} in the linear model, respectively. For a more general formulation of the EKF, where the process disturbances and measurement noise are non-zero-mean Gaussian, the interested reader is referred to [40].

2.3.2 Prediction

In Section 2.3.1, the Kalman filter and the EKF were introduced as the two most common filtering approaches for linear and nonlinear systems, respectively. In planning, we also need to predict the future system states. This is a straightforward task in deterministic environments: the exact values of the future system states are predicted using the system dynamics. In uncertain environments, however, the future system states cannot be accurately predicted solely by the system dynamic equations, because the various sources of uncertainty do not allow the system to evolve exactly as planned. Instead, an estimate of the future system states has to be computed accounting for the uncertainties (called prediction). Three prediction algorithms are presented in the rest of this section.

2.3.2.1 Open Loop Prediction

In open loop prediction algorithms, only measurements up to the current time are utilized. Future measurements are ignored since no future information is currently available [51, 53]. In open loop prediction, the system states are predicted at future times for a given control sequence using the system dynamics.
The process disturbances affect the predicted uncertainty of the system states. For linear and nonlinear systems, the open loop prediction algorithms are the same as the prediction steps of the Kalman filter and the EKF, respectively. In doing so, given the a priori state estimate x_{k|k} ~ N(\hat{x}_{k|k}, P_{k|k}), the system dynamics are utilized to predict the mean and covariance of the future system state. For instance, in a linear system,

\hat{x}_{k+1|k} = A_k \hat{x}_{k|k} + B_k u_k,
P_{k+1|k} = A_k P_{k|k} A_k^T + Q_k,    (2.9)

where x_{k+1|k} ~ N(\hat{x}_{k+1|k}, P_{k+1|k}) is an estimate of the future state of the system. The disadvantage of this approach is that the uncertainty of the future system states grows without bound, as a result of accumulating the process disturbances while ignoring future information. We will elaborate in Sections 2.4.1.1 and 4.3 (the latter specifically) on why this property is a disadvantage for the motion planning scheme.

2.3.2.2 Approximately Closed-loop Prediction

As mentioned, the use of open loop prediction causes the future predicted covariance to be unbounded. Yan and Bitmead [52] introduced a new state estimation approach that bounds the future system state uncertainties. In their approach, the one-step-ahead covariance (the covariance at the next time step) is artificially set as the future predicted state covariance. Similarly to the open loop state estimation approach, the means of the future system states are predicted using the system dynamics. This approach prevents the uncertainty from growing, but it ignores future expected measurements.

2.3.2.3 Partially Closed-loop Prediction

In this thesis, the PCL prediction strategy proposed by Du Toit [40] is utilized. The key idea behind this approach is that, though not currently available, future measurements will be obtained in the future. Thus, the effect of the expected future information should be incorporated into the prediction process.
From the recursive Bayesian framework (Section 2.3.1), it is inferred that incorporating a measurement has two effects on the estimated system state: (i) the value of the measurement affects the predicted state mean, and (ii) the measurement noise characteristics (the noise covariance) affect both the mean and the uncertainty (covariance) of the state. It is then noted that the uncertainty of the predicted state does not depend on the actual value of the measurement. Thus, the effect of a future measurement can be incorporated into the state prediction by updating the system uncertainty using the noise covariance. In the PCL strategy, the most probable measurement is assumed to occur. Under this assumption, the mean of the predicted state is not changed, but the uncertainty is corrected to account for the fact that future measurements are expected. The latter can be observed from Eq. (2.11), where the mean of the system state (second line in Eq. (2.11)) stays unchanged and is not affected by incorporating the uncertainty of the future measurement. It is proved in [40] that by assuming the most likely measurement, no information is artificially introduced into the prediction problem. The Kalman filter and the EKF can be modified to formulate the prediction algorithm for linear and nonlinear systems, respectively. Thus, in this thesis, the prediction algorithm is called the modified Kalman predictor (MKP). The MKP for nonlinear system equations and white Gaussian noise is formulated as follows. The prediction step is

\hat{x}_{k+1|k} = f(\hat{x}_{k|k}, u_k),
P_{k+1|k} = \tilde{A}_k P_{k|k} \tilde{A}_k^T + Q_k,    (2.10)

and the update step is

K_{k+1} = P_{k+1|k} \tilde{C}_{k+1}^T (\tilde{C}_{k+1} P_{k+1|k} \tilde{C}_{k+1}^T + R_{k+1})^{-1},
\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k},
P_{k+1|k+1} = (I - K_{k+1} \tilde{C}_{k+1}) P_{k+1|k}.    (2.11)

As expected, the most probable measurement assumption does not affect the state mean but results in updating and decreasing the state uncertainty.
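The covariance behavior that distinguishes open loop prediction (Eq. (2.9)) from the PCL/MKP update (Eqs. (2.10)-(2.11)) can be checked numerically. The sketch below uses an invented scalar system (a = 1, Q = 0.1, R = 0.5, with a unit measurement matrix), not values from the thesis:

```python
import numpy as np

# Scalar linear system x_{k+1} = a x_k + w_k, measurement y_k = x_k + v_k.
# Illustrative values (not from the thesis): a = 1.0, Q = 0.1, R = 0.5.
a, Q, R = 1.0, 0.1, 0.5

def predict_covariances(p0, n, pcl):
    """Propagate the predicted state variance n steps ahead.
    Open loop: p <- a^2 p + Q only.
    PCL (modified Kalman predictor): additionally apply the covariance
    update for the most-likely measurement, p <- (1 - K) p; the mean
    would be left unchanged."""
    p, out = p0, []
    for _ in range(n):
        p = a * p * a + Q        # prediction step
        if pcl:
            K = p / (p + R)      # Kalman gain (measurement matrix = 1)
            p = (1.0 - K) * p    # update step on the covariance only
        out.append(p)
    return out

open_loop = predict_covariances(0.2, 20, pcl=False)
pcl = predict_covariances(0.2, 20, pcl=True)
```

With a = 1 the open loop variance grows linearly (by Q per step), whereas the MKP variance converges to the bounded steady-state value of the corresponding Riccati recursion, mirroring the claim that the PCL predicted covariance does not grow substantially over time.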
2.4 RHC Algorithm

In this section, a receding horizon control (RHC) algorithm is developed as the motion planning scheme. RHC, also known as model predictive control (MPC), has a flexible architecture in which state and input constraints as well as system uncertainties can be handled systematically [69]. In RHC, the control problem, which is modeled as an optimization problem, is iteratively solved (planning). The calculated control input set is then partially executed (execution) and the new system states are measured. The procedure is repeated until the goal state is achieved. The RHC framework is schematically shown in Figure 2.1.

Figure 2.1: The system output over the planning horizon is predicted and the corresponding control actions are obtained. Thereafter, the control actions are partially implemented. This process is repeated until the goal is achieved [1].

2.4.1 Planning

In the planning stage, the control problem is formulated as an optimization problem over the next N steps, called the planning horizon. The solution to this problem is a set of control actions that optimize the system performance with respect to some attributes while satisfying the problem constraints. Knowing the a priori state estimate x_{k|k} ~ N(\hat{x}_{k|k}, P_{k|k}), motion planning is expressed as an optimization problem: find the sequence of optimal control inputs (u_{k+j} for j = 0 to N-1) that minimizes an additive cost function and satisfies the system dynamics and problem constraints,

min_{u_k, ..., u_{k+N-1}}  \sum_{j=0}^{N-1} \ell(x_{k+j+1|k}, u_{k+j}) + \ell_F(x_{k+N|k}, x_F),    (2.12)

subject to

g(x_{k+j+1|k}, u_{k+j}, x_F) = 0.    (2.13)

The constraints will be elaborated in Section 2.4.1.1. The final desired state is denoted by x_F. The functions \ell and \ell_F are called the stage cost and the final cost, respectively.
The latter, in order to guarantee globally optimal solutions, must be the exact cost-to-go from the final state in the planning horizon (x_{k+N|k}) to the final desired state x_F [70]. It is impossible, however, to calculate the exact cost-to-go, especially when the environment is not fully characterized at the moment. Moreover, finding the ideal cost-to-go would require solving a fixed-horizon control problem at each planning iteration, which increases the computational cost [65]. The final cost is thus usually calculated heuristically. As a result, the solution offered by the RHC algorithm might not be globally optimal, and RHC is a sub-optimal framework. There is no specific formulation for the stage and final cost functions, as they depend on the control objectives. They are similar, however, in that they indicate how far the future system states will be from the desired state. Throughout this thesis, different cost functions, designed for different applications and scenarios, will be introduced.

2.4.1.1 Constraints

In general, the constraints of the optimization problem are modeled as nonlinear equality functions g(x_{k+j|k}, u_{k+j}, x_F) = 0. This section studies the optimization constraints in more detail. However, the exact constraint formulations are postponed to Section 2.5, Chapter 3, and Chapter 4, where the problem of interest is defined more precisely. There are basically three constraints in the (motion) planning problem: (i) constraints relating the system state and the system input, which account for the system dynamics equation, (ii) constraints on the system input, and (iii) constraints on the system states. These constraints are formulated respectively in Eq. (2.14) to Eq. (2.16):

\hat{x}_{k+j+1|k} = f(\hat{x}_{k+j|k}, u_{k+j}),    (2.14)
u_{k+j} \in U,    (2.15)
Pr( x_{k+j|k} \in X ) \geq \alpha.    (2.16)

Here, U and X are the sets of admissible inputs and states, respectively.
In the robot motion planning problem, the latter two constraints represent actuator limitations, kinodynamic constraints (e.g., velocity and acceleration bounds, limited turn rate, etc.), collision avoidance, and other required constraints (dependent on the application). The commonly used obstacle avoidance constraint in the motion planning literature is called the conventional obstacle avoidance method (see Definition 1).

Definition 1: At time step k, obstacle avoidance for a robot positioned at p_k = [x_k, y_k]^T is guaranteed if and only if p_k \notin O_k, where O_k is the obstacle region at time step k.

It was mentioned earlier that due to the uncertainties in the problem, the system states are not deterministically known and thus are estimated. Since the system states are represented by probabilistic values with unbounded distributions, it is impossible to guarantee that a state constraint is satisfied for all possible realizations of a state. Instead, chance constraints are imposed on the system states: the probability of constraint violation is assured to be below a certain level. In the chance constraint formulation on the system states in Eq. (2.16), Pr(E) denotes the probability of occurrence of event E and \alpha is the level of confidence. The (motion) planning constraints are nonlinear in general. However, under certain conditions (e.g., linear system equations, polyhedral obstacles, and Gaussian uncertainties), it is possible to model the constraints as linear inequalities on the system states and inputs. This will be further studied in Chapter 3 and Chapter 4 in the context of RHMILP.

2.4.2 Execution

In the RHC algorithm, after solving the motion planning problem over the next N steps and obtaining the set of inputs, the system executes the first M \leq N control actions.
In order to estimate the newly obtained system states after the control actions have been applied, a filtering algorithm (e.g., a Kalman filter or an EKF) is utilized (Section 2.3.1). Knowing the a priori state estimate $x_{k|k} \sim N(\hat{x}_{k|k}, P_{k|k})$, the measurements at the current time $k+1$, and the measurement and motion models (Eq. (2.1) and Eq. (2.2)), the system state $x_{k+1|k+1}$ is estimated and has a Gaussian distribution $x_{k+1|k+1} \sim N(\hat{x}_{k+1|k+1}, P_{k+1|k+1})$. The planning and execution stages are performed repeatedly until the goal state is reached.

2.5 Motion Planning for Automated Firefighting: A Case Study
In this section, the general RHC framework presented in this chapter is adopted to tackle the motion planning of an automated wildfire suppression system, investigated here as a case study for the broader problem of cooperating aerial and ground vehicles. The proposed framework consists of an unmanned aerial vehicle (UAV) and an unmanned ground vehicle (UGV) cooperating to detect, localize, and reach a fire. In this automated cooperative setting, the UGV moves towards the fire. Meanwhile, the UAV gathers and reports the localization data (the positions of the fire, the UGV, and the UAV) to the system. A schema of this automated firefighting system is shown in Figure 2.2. The system dynamics and measurements are subject to several sources of uncertainty, and ground and aerial obstacles exist in the environment. The problem of interest is to develop a motion planning and control scheme for this automated system that: (i) drives the UGV to the desired target in minimum time and with minimum fuel consumption (Figure 2.3), (ii) controls the UAV so that the data uncertainty is minimized, and (iii) guarantees the safety of the system (safety in the sense of collision and hazard avoidance).

Figure 2.2: The UGV's objective is to move to the desired location (fire). The UAV provides the localization data for the system.
The UAV uses GPS for self-localization and vision for localizing the UGV and the fire. The localization data is then transferred to the UGV, i.e., the control unit, through vehicle-to-vehicle (V2V) communication.

2.5.1 System Modeling
The UGV and the target move in the $xy$ plane, while the UAV moves in 3D space. The rotational motions of the UAV are assumed to be small and thus negligible. The motion models of the UAV and the UGV are approximated by double integrators. The UGV motion model in discrete time is described as

$\begin{bmatrix} r^g_{k+1} \\ v^g_{k+1} \end{bmatrix} = \begin{bmatrix} I_2 & \Delta t\, I_2 \\ 0_2 & I_2 \end{bmatrix} \begin{bmatrix} r^g_k \\ v^g_k \end{bmatrix} + \begin{bmatrix} \tfrac{\Delta t^2}{2} I_2 \\ \Delta t\, I_2 \end{bmatrix} a^g_k + w^g_k$  (2.17)

Here, $r^g_k = [x^g_k, y^g_k]^T$, $v^g_k = [\dot{x}^g_k, \dot{y}^g_k]^T$, and $a^g_k = [\ddot{x}^g_k, \ddot{y}^g_k]^T$ denote the UGV position, velocity, and acceleration at time $k$, respectively. The sampling time is denoted by $\Delta t$, and $I_n$ and $0_n$ are the identity and zero matrices of size $n$. The process disturbance is denoted by $w^g_k$.

Figure 2.3: The designed path for the UGV has to drive it from the initial starting point to the target point (fire) while avoiding obstacles.

The UAV motion model is expressed as

$\begin{bmatrix} r^a_{k+1} \\ v^a_{k+1} \end{bmatrix} = \begin{bmatrix} I_3 & \Delta t\, I_3 \\ 0_3 & I_3 \end{bmatrix} \begin{bmatrix} r^a_k \\ v^a_k \end{bmatrix} + \begin{bmatrix} \tfrac{\Delta t^2}{2} I_3 \\ \Delta t\, I_3 \end{bmatrix} a^a_k + w^a_k$  (2.18)

where $r^a_k = [x^a_k, y^a_k, z^a_k]^T$ and $v^a_k = [\dot{x}^a_k, \dot{y}^a_k, \dot{z}^a_k]^T$ denote the position and velocity of the UAV, respectively, and $a^a_k = [\ddot{x}^a_k, \ddot{y}^a_k, \ddot{z}^a_k]^T$ its acceleration. The process disturbance is denoted by $w^a_k$. The target (fire) motion is assumed to be a random walk and is expressed as

$r^f_{k+1} = I_2\, r^f_k + w^f_k$  (2.19)

where $r^f_k = [x^f_k, y^f_k]^T$ is the target's position vector at time $k$ and $w^f_k$ is a white Gaussian noise used to model the random walk motion. We stack the individual vehicle and target vectors into combined input, disturbance, and state vectors.
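The double-integrator blocks in Eq. (2.17) and Eq. (2.18) share the standard discrete form. A minimal numpy sketch for the planar (UGV) case; the sampling time and inputs below are illustrative, not the values used in the thesis simulations:

```python
import numpy as np

def double_integrator(dim, dt):
    """Discrete double integrator: state [r; v], input a (Eq. 2.17/2.18)."""
    I, Z = np.eye(dim), np.zeros((dim, dim))
    A = np.block([[I, dt * I], [Z, I]])          # position and velocity update
    B = np.vstack([0.5 * dt**2 * I, dt * I])     # acceleration enters both
    return A, B

A, B = double_integrator(dim=2, dt=0.5)   # UGV: xy plane
x = np.zeros(4)                           # at rest at the origin
a = np.array([1.0, 0.0])                  # constant thrust along x
for _ in range(4):                        # propagate 2 s
    x = A @ x + B @ a
print(x)   # → [2. 0. 2. 0.]  (position 2 m, velocity 2 m/s along x)
```

The UAV case is identical with `dim=3`, and the fire's random walk of Eq. (2.19) is the degenerate case $A = I_2$, $B = 0$ with only the disturbance acting.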
The input vector to the system at time $k$ is represented by $u_k$, and the system state $x_k$ at time $k$ is the concatenation of the UGV state, the UAV state, and the target position:

$x_k = \left[ r^{gT}_k,\; v^{gT}_k,\; r^{aT}_k,\; v^{aT}_k,\; r^{fT}_k \right]^T$  (2.20)

The overall system model that formulates the dynamics of the UGV and the UAV can then be written as in Eq. (2.21):

$x_{k+1} = A x_k + B u_k + w_k$  (2.21)

In the above equation, $w_k$ is the process disturbance and is assumed to be Gaussian white noise, $w_k \sim N(0, Q_k)$, where $Q_k$ is the covariance matrix at time $k$. The measurement function calculates the localization values reported by the UAV measurement system and is formulated in Eq. (2.22):

$y_k = h(x_k, k) + \nu_k$  (2.22)

where $\nu_k \sim N(0, R_k)$ is Gaussian white noise representing the measurement noise and $R_k$ is its covariance matrix. The measurement function is assumed to be linear and is modeled in Eq. (2.23):

$y_k = C x_k + \nu_k$  (2.23)

The UAV measurement system provides the localization data for the UGV, the UAV, and the target (fire). The measurement noise covariance $R_k$ is considered to be a function of the distances between the UAV and the UGV ($d^{ug}_k$), the UAV and the ground ($d^{az}_k$), and the UAV and the fire ($d^{uf}_k$):

$d^{ug}_k = \left\| r^a_k - r^g_k \right\|, \quad d^{az}_k = \left\| z^a_k \right\|, \quad d^{uf}_k = \left\| r^a_k - r^f_k \right\|$  (2.24)

In Eq. (2.24), $\|\cdot\|$ is the Euclidean norm. The measurement noise covariance is then expressed as a function of these distances scaled by a constant matrix $R_0$.

2.5.2 Constraints
The system constraints in the defined scenario are obstacle avoidance, no-fly zone avoidance, and bounded velocities and accelerations. The obstacles and the no-fly zone are assumed to be circles and cylinders, respectively. The obstacle avoidance constraint is formulated as

$(x_k - x_c)^2 + (y_k - y_c)^2 \ge r_c^2$  (2.25)

where $C = \{[x_c, y_c], r_c\}$ represents a circular obstacle with center $[x_c, y_c]$ and radius $r_c$. The robot position in 3D space is denoted by $[x_k, y_k, z_k]$. The no-fly zone avoidance constraint is formulated in Eq.
(2.26):

$z_k \ge h_n \;\text{ or }\; (x_k - x_n)^2 + (y_k - y_n)^2 \ge r_n^2$  (2.26)

where $N = \{[x_n, y_n], r_n, h_n\}$ represents a cylinder (no-fly zone) with circular cross section centered at $[x_n, y_n]$ with radius $r_n$, and height $h_n$. The (obstacle and no-fly zone) avoidance constraints are incorporated in the RHC framework through state constraints. The bounded velocity and acceleration constraints are expressed as follows:

$v_{min} \le v_k \le v_{max}, \quad a_{min} \le a_k \le a_{max}$  (2.27)

where $[v_{min}, v_{max}]$ and $[a_{min}, a_{max}]$ are vectors that define the velocity and acceleration bounds, respectively. The velocity and acceleration bounds are incorporated in the RHC through state constraints and input constraints, respectively.

2.5.3 RHC Framework
The stage cost and the cost-to-go functions are formulated in Eq. (2.28) and Eq. (2.29), respectively:

$l(x_{k+j|k}, u_{k+j}) = \alpha \left\| \hat{x}_{k+j|k} - \hat{x}_f \right\| + \beta \left\| u_{k+j} \right\|$  (2.28)
$F(x_{k+N|k}, u_{k+N-1}) = \gamma\, l(\hat{x}_{k+N|k}, u_{k+N-1}) + \lambda\, \mathrm{trace}(P_{k+N|k})$  (2.29)

where $\alpha$, $\beta$, $\gamma$, and $\lambda$ are non-negative scalars. In Eq. (2.29), $\mathrm{trace}(P_{k+N|k})$ is an index that represents the system uncertainty at the end of the planning horizon and is included in the cost function in order to minimize the system uncertainty. As can be seen from Eq. (2.28) and Eq. (2.29), the stage and final cost functions are chosen specifically to meet the design criteria, i.e., determining the sets of inputs that drive the system from the initial configuration to the goal state along a time-fuel optimal trajectory while increasing the accuracy of the state estimation.

2.5.4 Results
The developed MPC scheme was examined in a simulated automated firefighting scenario. The UGV, UAV, and fire initial positions were assumed to be [ ], [ ], and [ ], respectively. There was a ground obstacle [ ] in the environment. The no-fly zone for the UAV was a cylinder [ ], which was considered to assure the safety of the UAV during its operation.
Furthermore, in order to ensure the safety of the UAV in terms of collision avoidance with ground objects, the UAV always had to maintain its altitude above [ ]. The covariance matrix for the process disturbance was assumed to be [ ]. The Gaussian white noise for the fire motion was modeled as $w^f_k \sim N(0, [\ ])$. The constant matrix $R_0$ used to construct the measurement noise covariance was chosen to be [ ]. The bounded accelerations and velocities for the UAV in the $xy$ plane were [ ] and [ ], respectively; in the $z$ direction, they were [ ] and [ ], respectively. Furthermore, the UGV accelerations and velocities were limited to [ ] and [ ], respectively. Finally, the parameters $\alpha$, $\beta$, $\gamma$, and $\lambda$ were chosen to be 1, 1, 10, and [ ], respectively. The formulated MPC scheme ($N = [\ ]$, $N_e = [\ ]$) was employed to solve the motion planning and control problem, i.e., drive the UGV to the fire, minimize the system uncertainty, and satisfy the constraints and design criteria. The results of this simulation are shown in Figure 2.4 to Figure 2.6. The Euclidean distances between the UGV, the UAV, and the fire are shown in Figure 2.4(a). In Figure 2.4(b), the uncertainties of the system, the UGV position, and the fire position are plotted versus time. Figure 2.4(a) demonstrates that the UGV decreases its distance to the fire. Meanwhile, in order to decrease the system uncertainty, the UAV moves closer to the UGV and the fire. These results are consistent with Figure 2.4(b), where the uncertainties of the system, the UGV position, and the fire position are all decreasing.
It can be noticed that the minimum system uncertainty in Figure 2.4(b) occurs at $t \approx [\ ]$, which is the same time at which the UAV minimizes its distance to the UGV and the fire in Figure 2.4(a).

Figure 2.4: (a) The UGV reduces its distance to the fire. The UAV also constantly decreases its distance to the UGV and the fire. (b) As a result of the motion of the UAV, the uncertainty associated with the system decreases.

Figure 2.5: The objective for the UGV is to reach the fire while avoiding the circular obstacle. Meanwhile, the UAV tries to increase the accuracy of the estimation scheme by getting closer to the UGV and the fire while avoiding the no-fly zone.

It needs to be mentioned that the final uncertainty in Figure 2.4(b) is a minimum only in the sense that the solution results from an optimization problem. In general, however, the final uncertainty is not guaranteed to be a global minimum, as RHC is a sub-optimal control scheme.

The executed paths of the UGV and the UAV in the presence of the ground obstacle and the no-fly zone are shown in Figure 2.5. In Figure 2.5(b), the UAV altitude decreases, a behavior that is expected in order to increase the state estimation accuracy; the UAV altitude, however, has a lower bound limit of [ ].
It is also notable that the executed paths for both the UGV and the UAV are robust against system uncertainties while avoiding their respective obstacle and no-fly zone.

Figure 2.6: (a) As the UGV gets closer to the fire, its velocities in both directions approach zero to maintain the distance between the UGV and the fire. (b) As the UAV approaches the state in which the uncertainty is minimized, its velocity decreases to zero so that the UAV remains at its optimal state.

The UGV and UAV velocities in the $xy$ plane are plotted in Figure 2.6(a) and Figure 2.6(b), respectively. The velocities satisfy the bound constraints. As can be observed from Figure 2.6(a), as the UGV approaches the fire, its velocity decreases so that it stays as close as possible to the fire. A similar behavior is observed in the UAV velocity when the UAV gets close to the UGV and the fire.

2.6 Summary
This chapter presented a motion planning and control framework using nonlinear model predictive control. The objective of the proposed system was to drive the unmanned vehicles in the environment safely and optimally. To deal with uncertainties, different state estimation algorithms (including filtering and prediction algorithms) were studied and developed for linear and nonlinear systems. The presented motion planning framework was applied to an automated firefighting scenario. More specifically, the cooperative setting consisted of one UAV and one UGV that cooperated to handle the fire in a firefighting task. The proposed automated system aimed at driving the unmanned vehicles from their initial states to the target states in an optimal way.
The motion planning scheme aimed to find a time-fuel optimal trajectory for the unmanned vehicles while maximizing the state estimation accuracy and handling the constraints. The vehicle motion models and measurement functions were assumed to be linear. The partially closed-loop state estimation algorithm was adopted to predict an estimate of the future states. Various constraints, including obstacle avoidance for the UGV, no-fly zone avoidance for the UAV, and bounded control inputs, were considered in order to study the ability of the proposed framework to provide a practical solution for real-world scenarios. In order to verify the applicability of the proposed scheme, numerical simulations were carried out. Based on the defined scenario and the problem formulation, two major behaviors were expected in the simulation results:
 A safe, optimal path is designed for the UGV to reach the fire.
 The UAV path is designed to maximize the estimation accuracy and to avoid the no-fly zone while being robust against noise and disturbances.
The simulation results verified this expected system behavior. In the presence of obstacles, the UGV chooses the shortest possible path to the target that is robust and avoids the obstacle. Furthermore, the UAV decreases its distance to the fire and the UGV in order to minimize the system data uncertainty. It is notable that in RHC the optimal control problem is solved over a finite horizon. Thus, the solution is sub-optimal unless the final cost function represents the exact cost-to-go from the last state in the horizon; globally optimal solutions are therefore not guaranteed. We would like to clarify that the so-called optimal solution is optimal in the sense that it results from an optimization problem. It is, however, in fact sub-optimal, as the planning horizon is finite and the final cost function is never guaranteed to be exact.
In summary, simulation results demonstrated that the developed RHC motion planning scheme is able to design trajectories for both the UGV and the UAV that are optimal (within the problem constraints and in the sense that they result from an optimization framework), safe (in the sense of collision avoidance), and satisfy the design criteria, i.e., the system uncertainty is minimized. It should also be mentioned that the developed algorithm is able to solve the motion planning problem given that an optimal feasible solution exists. Obviously, in cases where no feasible solution satisfies all of the constraints, the proposed scheme is not able to find a solution.

Chapter 3 MILP Receding Horizon Trajectory Planning

3.1 Background and Overview
This chapter presents a receding horizon motion planning scheme for trajectory planning of robots using mixed-integer linear programming (MILP). In Chapter 2, RHC-based motion planning was developed for the general case of nonlinear system and measurement equations. Under certain assumptions, i.e., linear system equations and convex polyhedral obstacles, the motion planning problem can be formulated as an RHMILP [56]. Schouwenaars et al. first introduced RHMILP as a tool for motion planning in the presence of convex polyhedral obstacles and in the absence of uncertainties in the environment [56]. In 2006, a distributed RHMILP framework for multivehicle path planning was presented in [71] that aimed to maintain line of sight (LOS) between the vehicles during operation. In 2011, Kuwata and How developed a robust RHMILP framework that utilized set theory to characterize uncertainties [49]. In 2013, Richards and Turnbull developed a new MILP-based obstacle avoidance constraint that ensures the feasibility of the generated trajectories between the discrete samples as well as at the discrete samples [72].
In the conventional RHMILP obstacle avoidance formulation, obstacle avoidance is guaranteed by locating the robot outside of the obstacles at all discrete time steps [56]. This chapter builds on the existing RHMILP literature by introducing a new obstacle avoidance formulation for generating safer trajectories (safer in the sense of collision avoidance). The LOS maintenance constraint between two vehicles, originally formulated in [71], is adopted to form an obstacle avoidance constraint, i.e., the robot must see its next position at every iteration. The proposed obstacle avoidance constraint, called LOS-based obstacle avoidance, is advantageous in that it can typically provide the same level of safety at time steps larger than those of the conventional approach, hence creating trajectories that are more effective in both the planning and execution stages.

The problem statement and the general RHC formulation are discussed in Section 3.2. Section 3.3 formulates the motion planning problem as an RHMILP, elaborating on the development of cost functions and constraints in a linear framework; both the conventional and the proposed obstacle avoidance constraints are presented in this section. Section 3.4 provides simulation results and discussion. Finally, Section 3.5 summarizes this chapter.

3.2 Problem Statement
We aim to develop a motion planning framework that delivers a robot from its starting configuration to its goal while avoiding obstacles. For the sake of simplicity, the obstacles are assumed to be rectangles. The planned trajectory must be optimal with respect to time and fuel consumption. The robot motion model is assumed to be linear time invariant (LTI):

$x_{k+1} = A x_k + B u_k$  (3.1)

where $x_k$ and $u_k$ are the system state and input vectors at time interval $k$, respectively. A linear sensor with the following model is assumed to measure the system outputs:

$y_k = C x_k$  (3.2)

In Eq. (3.1) and Eq. (3.2), $A$, $B$, and $C$ are known LTI matrices.
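One concrete instantiation of Eq. (3.1)–(3.2) is the single-integrator robot used later in Section 3.4, where the input is a velocity command and the position is measured directly. A minimal sketch (the sampling time, start point, and command below are illustrative):

```python
import numpy as np

dt = 0.5                       # assumed planning interval
A = np.eye(2)                  # position carries over between samples
B = dt * np.eye(2)             # a velocity input integrates into position
C = np.eye(2)                  # the position itself is measured

x = np.array([-2.0, -1.0])     # illustrative start point
u = np.array([1.0, 0.5])       # constant velocity command
for _ in range(4):             # deterministic prediction, Eq. (3.1)
    x = A @ x + B @ u
print(C @ x)   # → [0. 0.]
```

Because the model is deterministic, iterating Eq. (3.1) in this way is exactly the state prediction the receding horizon optimizer performs over its planning window.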
The RHC motion planning scheme for the robot at time interval $k$ is expressed as an optimization problem:

$\min_{u} \; \sum_{j=0}^{N-1} l(x_{k+j|k}, u_{k+j}) + F(x_{k+N|k})$  (3.3)

subject to

$x_{k+j+1|k} = A x_{k+j|k} + B u_{k+j}$  (3.4)
$u_{k+j} \in U$  (3.5)
$x_{k+j|k} \in X$  (3.6)

Knowing $x_{k|k}$, the system dynamics in Eq. (3.4) is utilized to predict the future state in the $(j+1)$th horizon, $x_{k+j+1|k}$. Note that the environment is assumed to be deterministic and there is no uncertainty associated with the system dynamics or measurements; thus, the future system states can be predicted exactly. The robot motion planning problem is subject to various constraints, which can be categorized as constraints on the inputs and constraints on the states. The system input and state constraints are included in the motion planning problem through Eq. (3.5) and Eq. (3.6), respectively. Section 3.3 elaborates the formulation of the stage and final cost functions, as well as the input and state constraints, in an RHMILP framework.

3.3 MILP Formulation
Mixed-integer linear programming (MILP) is a linear optimization framework that allows for the incorporation of discrete logic and integer variables in the optimization [65]. The motion planning problem outlined in the previous section can be modeled as an MILP. We first explain how discrete logic, logical disjunction and logical conjunction in particular, can be incorporated into a continuous optimization framework.

Lemma 1: In a constrained optimization, a set of inequality constraints with logical disjunctions (OR-constraints) may be converted into an equivalent set of inequality constraints with logical conjunctions (AND-constraints) by introducing a set of binary variables and an arbitrary large number [65]. Consider the following simple example:

$\min_x f(x) \quad \text{s.t.} \quad g_1(x) \le 0 \;\text{ or }\; g_2(x) \le 0$  (3.7)

where at least one of the two constraints must be satisfied.
Problem (3.7) is equivalent to

$\min_x f(x) \quad \text{s.t.} \quad g_1(x) \le M b_1, \;\; g_2(x) \le M b_2, \;\; b_1 + b_2 \le 1$  (3.8)

where the $b_i \in \{0, 1\}$ are binary variables and $M$ is an arbitrary large number. The last constraint in problem (3.8) ensures that at least one $b_i$ is zero, thus satisfying at least one inequality constraint of problem (3.7). Meanwhile, the large number $M$ relaxes the other (possibly infeasible) inequality. This extends easily to an optimization problem with $n$ disjunctive constraints $g_i(x) \le 0$, $i = 1, \dots, n$, of which at least $m$ have to be satisfied:

$\min_x f(x) \quad \text{s.t.} \quad g_i(x) \le M b_i, \; i = 1, \dots, n, \quad \sum_{i=1}^{n} b_i \le n - m$  (3.9)

In problem (3.9), if $f(x)$ and the $g_i(x)$ are linear expressions of $x$, the optimization is an MILP. In the remainder of this section, the motion planning problem will be formulated in an RHMILP framework, and Lemma 1 will be utilized to capture non-convex constraints, e.g., obstacle avoidance constraints.

3.3.1 Obstacle Avoidance
To maintain simplicity, the obstacle avoidance problem in this chapter is formulated for a rectangular obstacle; a more general formulation that accounts for any polyhedral obstacle is developed in Chapter 4. The obstacle is assumed to be a rectangle confined to $[x_{min}, x_{max}]$ and $[y_{min}, y_{max}]$. Two obstacle avoidance constraints are formulated here: (i) the conventional approach [56], and (ii) the LOS-based obstacle avoidance approach. The latter is one of the contributions of this thesis.

3.3.1.1 Conventional Obstacle Avoidance
The conventional obstacle avoidance approach was defined earlier in Definition 1 and is restated here.

Definition 1: At time step $k$, obstacle avoidance for a robot positioned at $p_k = [x_k, y_k]^T$ is guaranteed if and only if $p_k \notin O_k$, where $O_k$ is the obstacle region at time step $k$.
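For the rectangular case, Definition 1 combines with Lemma 1's big-M device: a point satisfies the four relaxed inequalities for some binary vector with $\sum_i b_i \le 3$ exactly when it lies outside the box. This equivalence can be brute-force checked over all binary patterns (the box, $M$, and the sample points are illustrative):

```python
from itertools import product

def outside_box(p, box):
    """Conventional avoidance (Definition 1): point outside the rectangle."""
    (xmin, xmax), (ymin, ymax) = box
    x, y = p
    return x <= xmin or x >= xmax or y <= ymin or y >= ymax

def bigM_feasible(p, box, M=100.0):
    """Lemma 1 reformulation: some binary choice with sum(b) <= 3
    must satisfy all four relaxed inequalities simultaneously."""
    (xmin, xmax), (ymin, ymax) = box
    x, y = p
    for b in product((0, 1), repeat=4):
        if sum(b) <= 3 and (x <= xmin + M * b[0] and
                            -x <= -xmax + M * b[1] and
                            y <= ymin + M * b[2] and
                            -y <= -ymax + M * b[3]):
            return True
    return False

box = ((-1.0, 1.0), (-0.5, 0.5))
for p in [(2.0, 0.0), (0.0, 0.0), (0.0, 2.0)]:
    assert bigM_feasible(p, box) == outside_box(p, box)
print("big-M reformulation agrees with the disjunction")
```

In the actual MILP, the solver chooses the binaries; the brute-force loop here only plays that role for illustration. Note that $M$ must dominate the coordinate range of the workspace, since too small an $M$ fails to relax the inactive inequalities.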
Based on Definition 1, in order to avoid the obstacle at time step $k$, the robot position must lie outside the obstacle. For the defined rectangular obstacle, this constraint is formulated as

$x_k \le x_{min} \;\text{ or }\; x_k \ge x_{max} \;\text{ or }\; y_k \le y_{min} \;\text{ or }\; y_k \ge y_{max}$  (3.10)

This can be reformulated to capture the logical disjunctions using Lemma 1 as

$x_k \le x_{min} + M b_1, \;\; -x_k \le -x_{max} + M b_2, \;\; y_k \le y_{min} + M b_3, \;\; -y_k \le -y_{max} + M b_4, \;\; \sum_{i=1}^{4} b_i \le 3$  (3.11)

3.3.1.2 LOS-based Obstacle Avoidance
In this section, a new obstacle avoidance constraint, called LOS-based obstacle avoidance, is developed. We first introduce the LOS concept as directly adopted from multi-robot systems.

Definition 2: At time step $k$, the LOS connection between two agents $i$ and $j$, positioned respectively at $p^i_k = [x^i_k, y^i_k]^T$ and $p^j_k = [x^j_k, y^j_k]^T$, is maintained if the line connecting them is not cut by any obstacle, i.e., $\tau p^i_k + (1 - \tau) p^j_k \notin O_k$ for all $\tau \in [0, 1]$, where $O_k$ denotes the existing obstacles relevant to $i$ and $j$, and $\tau$ parameterizes the position of interest along the line.

This thesis proposes that, in order to avoid the obstacles at every iteration, the robot must be able to observe its next planned position, i.e., the path between any two sequential positions must not intersect an obstacle.

Theorem 1: At time step $k$, obstacle avoidance for a robot located at $p_k$ and heading towards $p_{k+1}$ is guaranteed if LOS between $p_k$ and $p_{k+1}$ is maintained, i.e., $\tau p_k + (1 - \tau) p_{k+1} \notin O_k$ for all $\tau \in [0, 1]$, where $O_k$ defines the set of existing obstacles.

Based on Theorem 1, the LOS-based obstacle avoidance constraint for the defined rectangular obstacle can be written as a conjunction of disjunctive linear constraints:

$\tau_i x_k + (1 - \tau_i) x_{k+1} \le x_{min} \;\text{ or }\; \tau_i x_k + (1 - \tau_i) x_{k+1} \ge x_{max} \;\text{ or }\; \tau_i y_k + (1 - \tau_i) y_{k+1} \le y_{min} \;\text{ or }\; \tau_i y_k + (1 - \tau_i) y_{k+1} \ge y_{max}, \quad \forall \tau_i \in T$  (3.12)

where $\tau$ is treated as a discrete set in order to formulate LOS-based obstacle avoidance with linear constraints. The discrete set is denoted by $T$, with members $\tau_i \in T \subset [0, 1]$.
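Numerically, the sampled condition of Eq. (3.12) amounts to checking every interpolation point $\tau_i p_k + (1 - \tau_i) p_{k+1}$ against the rectangle. A small sketch of that check (the box, endpoints, and number of samples are illustrative):

```python
def outside_box(p, box):
    """Point strictly outside the axis-aligned rectangle (Definition 1)."""
    (xmin, xmax), (ymin, ymax) = box
    x, y = p
    return x <= xmin or x >= xmax or y <= ymin or y >= ymax

def los_clear(p0, p1, box, n=5):
    """Sampled LOS condition of Eq. (3.12): test n+1 evenly spaced
    interpolation points on the segment from p0 to p1."""
    pts = [(p0[0] + t * (p1[0] - p0[0]) / n,
            p0[1] + t * (p1[1] - p0[1]) / n) for t in range(n + 1)]
    return all(outside_box(p, box) for p in pts)

box = ((-1.0, 1.0), (-1.0, 1.0))
# Both endpoints lie outside, but the straight line cuts the box,
# so the sampled LOS check rejects the step:
print(los_clear((-2.0, 0.0), (2.0, 0.0), box))   # False
# A step that passes above the box keeps every sample clear:
print(los_clear((-2.0, 1.5), (2.0, 1.5), box))   # True
```

The conventional constraint of Eq. (3.11) corresponds to checking only the endpoint ($n = 1$ in this sketch's terms), which is why it would accept the first, unsafe step above.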
We next formulate the LOS-based obstacle avoidance constraint as a conjunction of linear constraints using Lemma 1:

$\tau_i x_k + (1 - \tau_i) x_{k+1} \le x_{min} + M b_{1i}, \;\; -\tau_i x_k - (1 - \tau_i) x_{k+1} \le -x_{max} + M b_{2i}, \;\; \tau_i y_k + (1 - \tau_i) y_{k+1} \le y_{min} + M b_{3i}, \;\; -\tau_i y_k - (1 - \tau_i) y_{k+1} \le -y_{max} + M b_{4i}, \;\; \sum_{j=1}^{4} b_{ji} \le 3, \quad \forall \tau_i \in T$  (3.13)

Here, $M$ and the $b_{ji}$ are the large number and the binary variables, respectively. The constraint $\sum_{j} b_{ji} \le 3$ implies that, for each $\tau_i$, at least one $b_{ji}$ has to be zero, and therefore at least one inequality in the corresponding disjunction set of Eq. (3.12) is not violated. Meanwhile, the large number $M$ relaxes the other (possibly infeasible) inequalities. By utilizing Eq. (3.13), the LOS-based obstacle avoidance problem for deterministic robot states is formulated with linear conjunctive inequalities and can easily be handled by mixed-integer linear programming (MILP) solvers. It needs to be mentioned that constraining linear interpolation points to avoid the obstacle is not in itself novel; Reference [73] has also proposed an obstacle avoidance constraint for the robot using linear interpolation points. Our formulation, however, results from an LOS point of view, with its focus on safer planned paths. Furthermore, modeling the LOS-based obstacle avoidance constraint for uncertain robot states, the focus of the next chapter, has not been previously investigated in the literature.

Obstacles must be enlarged before the robot can be treated as a point mass. Assuming a circular geometry for the robot, the obstacle enlargement required for the point mass assumption equals the robot diameter $d$ (Figure 3.1). The key difference between the LOS-based obstacle avoidance constraint, Eq. (3.13), and the conventional one, Eq. (3.11), is the additional obstacle enlargement required due to planning in discrete time steps. As can be seen in Figure 3.1, the required enlargement for the proposed method is proportional to $v_{max} \Delta t / n$, where $n$ is the number of members of $T$ and $\Delta t$ is the discretization time.
The required obstacle enlargement for the conventional obstacle avoidance method, which can be treated as an LOS-based obstacle avoidance constraint with $n = 1$, is larger than that for the LOS-based method with $n > 1$. In Figure 3.1, $v_{max}$ is the maximum possible velocity; for a robot with first order linear dynamics, $v_{max}$ follows directly from the input bounds $u_l$ and $u_u$ defined in the following section.

Figure 3.1: The top left quarter of the original and enlarged obstacle. The first enlargement enables the point mass assumption of the robot. The second enlargement is related to discrete time planning [2].

3.3.2 Vehicle Dynamics
The robot inputs are constrained to ensure that the problem solution is dynamically feasible and implementable. The input values are bounded as

$u_l \le u_k \le u_u$  (3.14)

where $u_l$ and $u_u$ are the $m \times 1$ vectors of lower and upper bounds on the system inputs. Furthermore, the system input derivatives are constrained to be bounded, thus limiting the robot turn rate:

$\dot{u}_l \le \dot{u}_k \le \dot{u}_u$  (3.15)

where $\dot{u}_l$ and $\dot{u}_u$ are the $m \times 1$ lower and upper bound vectors.

3.3.3 RHMILP
In order to formulate the problem in an RHMILP fashion, the following cost functions are considered:

$l(x_{k+j|k}, u_{k+j}) = q^T \left| x_{k+j|k} - x_f \right| + r^T \left| u_{k+j} \right|$  (3.16)
$F(x_{k+N|k}) = p^T \left| x_{k+N|k} - x_f \right|$  (3.17)

Here, $q$, $r$, and $p$ are appropriate weighting vectors and $|\cdot|$ is taken elementwise. We introduce auxiliary variables and additional constraints to handle the absolute values in the cost functions (3.16) and (3.17). In doing so, Lemma 2 (see [74]) is used.

Lemma 2: The optimization problem $\min_x |f(x)|$ can be modeled as

$\min_{x, s} \; s \quad \text{s.t.} \quad f(x) \le s, \;\; -f(x) \le s$  (3.18)

If the function $f(x)$ in problem (3.18) is linear in $x$, which is the case for the cost functions (3.16) and (3.17), the final optimization is a linear optimization.
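Lemma 2's slack-variable trick can be illustrated numerically: minimizing $|f(x)|$ directly and minimizing the slack $s$ over the epigraph $\{f(x) \le s,\, -f(x) \le s\}$ pick out the same optimum, since for each $x$ the smallest feasible slack is exactly $|f(x)|$. A small grid-search sketch (the linear function and grid are illustrative):

```python
def f(x):
    return 2.0 * x - 3.0          # an illustrative linear cost term

grid = [i / 100.0 for i in range(-500, 501)]   # x in [-5, 5]

# Direct minimization of |f(x)| over the grid:
x_direct = min(grid, key=lambda x: abs(f(x)))

# Lemma 2: minimize s subject to f(x) <= s and -f(x) <= s;
# per x, the tightest feasible slack is s = max(f(x), -f(x)) = |f(x)|.
s_opt, x_epi = min((max(f(x), -f(x)), x) for x in grid)

print(x_direct, x_epi, s_opt)   # → 1.5 1.5 0.0
```

In the MILP itself, no grid search is needed: the constraints of Eq. (3.18) are linear, so the slack variables simply join the continuous decision variables of the trajectory optimization.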
It should be noted that the motion planning problem can also be formulated as a mixed-integer quadratic programming (MIQP) problem. In doing so, the following stage and final cost functions have to be utilized:

$l(x_{k+j|k}, u_{k+j}) = (x_{k+j|k} - x_f)^T Q\, (x_{k+j|k} - x_f) + u_{k+j}^T R\, u_{k+j}$  (3.19)
$F(x_{k+N|k}) = \rho\, (x_{k+N|k} - x_f)^T Q\, (x_{k+N|k} - x_f)$  (3.20)

where $Q$ and $R$ are positive-semidefinite matrices and $\rho$ is a positive constant. With the cost functions modeled as linear functions of $x$ and $u$, and the constraints modeled as linear inequalities and equalities in $x$ and $u$, the trajectory optimization problem is a linear optimization that includes binary and continuous decision variables.

3.4 Results
In order to study the applicability of the developed motion planning scheme, it was tested in various simulation scenarios for a circular robot with diameter $d = [\ ]$. The motion planning algorithms were developed in MATLAB [75] and solved with CPLEX [60]. The robot is assumed to be a first order linear system with the following state and measurement equations:

$p_{k+1} = I_2\, p_k + \Delta t\, I_2\, u_k$  (3.21)
$y_k = I_2\, p_k$  (3.22)

where $p_k = [x_k, y_k]^T$ and $u_k = [v_{x,k}, v_{y,k}]^T$ are the position and velocity of the robot in the $x$ and $y$ directions, respectively. Three scenarios were considered for the numerical simulations, as summarized in Table 3.1. These scenarios aim at demonstrating the advantage of the LOS-based obstacle avoidance method over the conventional obstacle avoidance method [56], particularly in cluttered environments with narrow passages.

Table 3.1: Simulation scenarios designed to study the efficacy of the proposed algorithm.

Scenario #  Description
1           Single obstacle avoidance
2           Environments with narrow passages
3           Dynamic and cluttered environments

The environment was assumed to be deterministic. In these simulations, two RHMILP motion planning frameworks, one using LOS-based obstacle avoidance and the other using the conventional obstacle avoidance, were developed to generate the trajectories for the robot. In the controller design, the weighting vectors were all chosen to be [ ].
The values of [ ] were also selected to be 100 and 1, respectively.

In the first scenario, a robot travelled from point [ ] to point [ ]. There was a rectangular obstacle with $[x_{min}, x_{max}] = [\ ]$ and $[y_{min}, y_{max}] = [\ ]$. The robot control inputs were subject to upper and lower limits of [ ] and [ ], respectively. The planning interval was set to [ ]. The required obstacle enlargements for the conventional and the LOS-based obstacle avoidance methods were then calculated to be [ ] and [ ], respectively. In order to compare the two algorithms under equal conditions, an enlargement of [ ] was applied to both. The resulting trajectories are shown in Figure 3.2(a). Both trajectories avoid the obstacle. However, the trajectory resulting from LOS-based obstacle avoidance performs an extra detour around the corners of the obstacle. This detour generally contributes to safer trajectories, especially with respect to obstacles that are obscured by the main obstacle.

In the second scenario, the robot's start and end points were [ ] and [ ], respectively. Two obstacles were located in the area, represented by $[x_{min}, x_{max}] = [\ ]$, $[y_{min}, y_{max}] = [\ ]$ for the first obstacle and $[x_{min}, x_{max}] = [\ ]$, $[y_{min}, y_{max}] = [\ ]$ for the second. The robot was assumed to be a point mass object subject to bounded control inputs [ ] and [ ]. The other properties remained the same as in the first scenario. For each obstacle avoidance method, the obstacles were enlarged enough (not equally) to prevent the interpolation lines from cutting through the corners.
As can be seen in Figure 3.2(b), for the conventional obstacle avoidance method, due to the larger enlargements, the space between the two obstacles is considered occluded and infeasible, while it is free space for the LOS-based obstacle avoidance method. As a result, the path generated by the conventional obstacle avoidance method turns around the obstacles, while the LOS-based obstacle avoidance method results in a path that passes between the obstacles; hence the latter is considered to be a more time- and fuel-optimal path. It can be seen that the LOS-based obstacle avoidance method provides a more feasible solution in comparison to the conventional obstacle avoidance method.

Figure 3.2: (a) The robot that follows the LOS-based obstacle avoidance trajectory performs a detour motion around the corners of the obstacle, thus decreasing the chance of collision with an unforeseen obstacle located behind the main obstacle. (b) Obstacle enlargement in the conventional obstacle avoidance method blocks the robot's path and forces the trajectory to go around the obstacles. (c) The coordinates of the dynamic obstacle with respect to time are shown with different line styles. Due to requiring a larger obstacle enlargement, the conventional obstacle avoidance method is not able to generate a feasible trajectory for the robot. On the other hand, the path generated by the LOS-based obstacle avoidance method is able to tackle both dynamic and static obstacles effectively and reach the goal position [3].

Finally, the last simulation scenario was designed to study the applicability of the proposed motion planning algorithm in a cluttered environment with dynamic and static obstacles. A robot started from a given start point and aimed to reach a given goal position. The robot was subject to lower and upper limits on the control inputs.
A dynamic obstacle with given initial bounds existed in the environment and was assumed to move along the x-axis at a constant speed. The dynamic obstacle started its motion at a given time and moved for two time intervals, i.e., 1 s. The generated trajectories are plotted in Figure 3.2(c). Both obstacle avoidance methods tackle the dynamic obstacle effectively. However, the required obstacle enlargement in the conventional obstacle avoidance method is larger than that for the LOS-based method. As a result, RHMILP with the conventional obstacle avoidance algorithm is not capable of finding a feasible path. In summary, the RHMILP with the LOS-based obstacle avoidance method was shown to effectively tackle static and dynamic obstacles.

3.5 Summary

This chapter formulated a receding horizon MILP (RHMILP) path planning algorithm and introduced new features to this well-established concept in the robotics literature. Actuator limitations, obstacle avoidance, and kinodynamic constraints were modeled as linear constraints. Two obstacle avoidance constraints, the conventionally used one and the LOS-based obstacle avoidance, were incorporated into the RHMILP. The LOS-based constraint, which is the proposition of this thesis, ensures that every two sequential robot positions are connected (i.e., the robot always sees its next position) without intersecting the surrounding obstacles. The problem was formulated as a linear optimization framework for deterministic environments. The presented motion planning framework was used in different motion planning scenarios.
The incorporation of the LOS-based obstacle avoidance constraint in RHMILP was shown to provide safer trajectories and increase the feasibility of the solution compared to conventional RHMILP obstacle avoidance methods. Extending the RHMILP motion planning scheme to environments where the robot states are uncertain is the focus of the next chapter.

Chapter 4
Robust MILP Receding Horizon Trajectory Planning

4.1 Background and Overview

In this chapter, the RHMILP motion planning framework presented in Chapter 3 is extended to account for cases where the environment is plagued with uncertainty. In such scenarios, the generated trajectory must be robust against uncertainties. In particular, RHMILP will be developed for motion planning scenarios where the robot state is uncertain. In doing so, the open loop and partially closed-loop state prediction algorithms (Section 2.3.2) are both formulated to predict an estimation of the future states of the robot. The latter is advantageous in that it accounts for the anticipated future state measurements that can in turn help to reduce the uncertainty of the future states. Chance constraints are used to enforce constraints on the uncertain system states. More precisely, the LOS-based and conventional obstacle avoidance algorithms, presented in Chapter 3, are developed for environments with uncertain robot states using chance constraints. The resulting constraints are disjunctive linear constraints on the robot states. MILP formulation is then used to change the disjunctive linear constraints to conjunctive linear constraints. The obstacle avoidance constraints are initially formulated for a rectangular obstacle and are further extended to account for general convex polyhedral obstacles.

The problem of interest, the general RHC motion planning scheme, and state prediction algorithms are presented in Section 4.2. Chance constraints are formulated in Section 4.3.
Section 4.4 develops the RHMILP framework by formulating the cost function and the constraints for uncertain robot states. Simulation and experimental results are given in Sections 4.5 and 4.6, respectively. Finally, Section 4.7 summarizes this chapter.

4.2 Problem Statement

A motion planning framework is desired for navigation of robots with uncertain states in the environment while avoiding convex polyhedral obstacles. The planned trajectory has to be: (i) optimal with respect to time and fuel consumption, and (ii) robust against system uncertainties which tend to make the trajectory infeasible.

It is assumed that process disturbances, system model uncertainties, and measurement noise are Gaussian white noises. The robot motion model is assumed to be linear time invariant (LTI), as

x_{k+1} = A x_k + B u_k + w_k    (4.1)

The robot motion is subject to process disturbance and system model uncertainty, which are modeled accumulatively as the Gaussian white noise w_k distributed according to w_k ∼ N(0, Q_w). It is assumed that the initial state of the robot x_0 is given a priori and has a Gaussian distribution x_0 ∼ N(x̂_0, Σ_0), with x̂_0 and Σ_0 as the mean and the covariance. A linear sensor with the following model is assumed to measure the system outputs:

y_k = C x_k + v_k    (4.2)

where v_k ∼ N(0, R_v) is Gaussian white noise that represents the measurement noise.

4.2.1 State Estimation

It was noted earlier (Section 2.3) that in the presence of uncertainties, an estimation of future system states has to be predicted. The open-loop and the partially closed-loop prediction algorithms are utilized in this chapter for future state estimation. This section is dedicated to reformulating these algorithms and understanding some of their properties in the case of linear systems.
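The linear-Gaussian setup of Eqs. (4.1)–(4.2) can be sketched in a few lines. The thesis uses MATLAB, so the Python snippet below is purely illustrative; the planar first-order kinematics (A = I₂, B = Δt·I₂, C = I₂), the sampling interval, and all numeric values are assumptions for the sketch, not the thesis setup.

```python
import random

# Illustrative sketch of the linear-Gaussian robot model of Eqs. (4.1)-(4.2),
# assuming planar first-order kinematics: state = position, input = velocity.
DT = 0.5  # planning interval [s] (assumed value)

def step(x, u, q_std=0.0):
    """x_{k+1} = A x_k + B u_k + w_k with A = I2, B = DT*I2, w_k ~ N(0, q_std^2 I2)."""
    return [x[i] + DT * u[i] + random.gauss(0.0, q_std) for i in range(2)]

def measure(x, r_std=0.0):
    """y_k = C x_k + v_k with C = I2, v_k ~ N(0, r_std^2 I2)."""
    return [x[i] + random.gauss(0.0, r_std) for i in range(2)]

if __name__ == "__main__":
    x = [0.0, 0.0]
    for _ in range(4):           # four noise-free steps at velocity [1, 0.5] m/s
        x = step(x, [1.0, 0.5])
    print(x)                     # noise-free rollout reaches [2.0, 1.0]
```

With nonzero `q_std` and `r_std`, repeated rollouts scatter around this mean trajectory, which is exactly why the planner below must reason about distributions rather than points.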
4.2.1.1 Open Loop Prediction

Given the a priori state estimation x_{k|k} ∼ N(x̂_{k|k}, Σ_{k|k}), the open loop prediction algorithm estimates the future system state as follows:

x̂_{k+j+1|k} = A x̂_{k+j|k} + B u_{k+j}
Σ_{k+j+1|k} = A Σ_{k+j|k} A^T + Q_w    (4.3)

where x_{k+j|k} ∼ N(x̂_{k+j|k}, Σ_{k+j|k}) is an estimation of the future state of the system at time interval k+j, assuming the state information for the current time, x_{k|k} ∼ N(x̂_{k|k}, Σ_{k|k}), is given. It is notable that when the system uncertainties are modeled as white Gaussian noise, the open loop prediction algorithm will characterize the future system states as normal random variables as well. The system dynamics are utilized to predict the mean and covariance of the future system states. Another key observation is that in the above formulation, the mean of the future system state (x̂_{k+j|k}) is a linear function of the control inputs, and the covariance of the system state (Σ_{k+j|k}) can be computed offline since it is independent of the system input. Utilizing these properties allows for linear formulation of the constraints on the future system states and solving the motion planning problem using linear solvers. On the negative side, however, the future measurements in the open loop prediction approach are ignored since none are obtained at the present time. Therefore, the covariance (and thus the uncertainty) of the system state will grow over time. It is shown later in Section 4.3 that higher uncertainty results in a more conservative constraint on the future system states. In fact, by using the open loop prediction algorithm, the level of conservatism in the planned trajectory increases with planning time. In an extreme case, the path planning constraint might become too conservative and the planning algorithm might not even be able to find a feasible trajectory. The following section offers a solution to this problem by utilizing the PCL as an alternative prediction algorithm.
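The unbounded covariance growth of the open-loop recursion in Eq. (4.3) is easy to see numerically. A minimal sketch follows, assuming the identity dynamics A = I₂ of the kinematic model (so each prediction step simply adds the process-noise covariance); the numeric covariances are illustrative only.

```python
def propagate_open_loop(sigma, q_cov, steps):
    """Open-loop covariance recursion Sigma_{k+j+1|k} = A Sigma A^T + Q_w,
    specialized to A = I2: each step adds the process-noise covariance Q_w."""
    hist = [sigma]
    for _ in range(steps):
        sigma = [[sigma[r][c] + q_cov[r][c] for c in range(2)] for r in range(2)]
        hist.append(sigma)
    return hist

def trace(m):
    return m[0][0] + m[1][1]

# Illustrative values: initial covariance 0.01*I2, process noise 0.005*I2.
hist = propagate_open_loop([[0.01, 0.0], [0.0, 0.01]],
                           [[0.005, 0.0], [0.0, 0.005]], 10)
print([round(trace(s), 3) for s in hist])  # trace grows monotonically
```

The trace of the predicted covariance increases by trace(Q_w) at every step, which is precisely the uncertainty growth that inflates the chance-constraint back-offs later in the chapter.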
4.2.1.2 Partially Closed-loop (PCL) Prediction

In contrast to the open loop prediction algorithm, the PCL strategy accounts for anticipated future measurements. In doing so, at each future step, the most likely measurement is assumed to occur. A Modified Kalman Predictor (MKP) is developed to predict the future system state accounting for anticipated future information. The key point is that by this estimation assumption, the existence of future measurements is appreciated and utilized to reduce the predicted state uncertainty without affecting the state mean. Knowing the a priori state estimation x_{k|k} ∼ N(x̂_{k|k}, Σ_{k|k}), and the motion and measurement models, the MKP algorithm for the linear Gaussian system is modeled as a two-step process. First as prediction:

x̂_{k+j+1|k} = A x̂_{k+j|k} + B u_{k+j}
Σ⁻_{k+j+1|k} = A Σ_{k+j|k} A^T + Q_w    (4.4)

and then update:

K_{k+j+1} = Σ⁻_{k+j+1|k} C^T (C Σ⁻_{k+j+1|k} C^T + R_v)^{−1}
Σ_{k+j+1|k} = (I − K_{k+j+1} C) Σ⁻_{k+j+1|k}    (4.5)

where K_{k+j+1} is the Kalman gain and is utilized to correct the uncertainty. In the MKP formulation, Eq. (4.4) represents Eq. (4.3) of the open loop prediction. By the update in Eq. (4.5), the predicted covariance of Eq. (4.4) is corrected. The important observation is that this MKP formulation maintains all properties of the open loop state prediction method. Namely, the future system state is a multivariate normal random variable, its mean is a linear function of the system input, and its covariance can be computed offline. These properties allow the system constraints to be modeled linearly.

4.2.2 RHC Scheme

Knowing the a priori state estimation x_{k|k} ∼ N(x̂_{k|k}, Σ_{k|k}), the motion planning for a linear robot is expressed as an optimization problem and is formulated in Eq. (4.6) to Eq. (4.9):

min_{u_k, …, u_{k+N−1}}  Σ_{j=0}^{N−1} ℓ(x̂_{k+j|k}, u_{k+j}) + F(x̂_{k+N|k})    (4.6)

subject to

x̂_{k+j+1|k} = A x̂_{k+j|k} + B u_{k+j}    (4.7)

u_{k+j} ∈ U    (4.8)

Pr(x_{k+j|k} ∈ X_free) ≥ α    (4.9)

The stage cost and the final cost will be formulated in Section 4.4.1. The final desired state is denoted by x_F.
Given an estimation of x_{k|k}, the future system state is predicted using the open loop and the PCL prediction algorithms. In doing so, the mean of the future states is predicted using the system dynamics without considering the disturbances, as in Eq. (4.7). The covariance of the future states, on the other hand, is computed offline. The system input constraints are included in the motion planning problem through Eq. (4.8) and are the same as the corresponding constraints in Section 3.3.2. The system state constraints are introduced to the motion planning problem through Eq. (4.9). Due to the lack of perfect information about future system states, chance constraints are utilized instead of deterministic constraints. These chance constraints will be extensively described in Sections 4.3 and 4.4.

4.3 Chance Constraints

Chance constraints are utilized to incorporate the uncertainty in the constraint modeling. In chance constraints, instead of finding the solutions of g(x) ≤ 0, the problem Pr(g(x) ≤ 0) ≥ α is solved, where α is the level of confidence, x is the system state, and Pr(E) denotes the probability of occurrence of event E. Constraints on the system states are either of a linear (a^T x ≤ b) or a nonlinear (g(x) ≤ 0) form. As shown in Chapter 3, in the RHMILP the constraints on the system states are linear; hence linear chance constraints Pr(a^T x ≤ b) ≥ α are of interest. Chance constraints are generally handled with two approaches: (i) Monte Carlo simulations are performed, where random samples are selected from the system state and the probability of failure (constraint violation) is obtained from the number of samples that violate the deterministic constraint (e.g., [54]), and (ii) the Gaussian distribution of the system state is estimated and the chance constraints are modeled as deterministic constraints on the mean of the system state using probability theory (e.g., [76, 77]). The latter is of interest in this thesis.
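The two approaches above can be contrasted on a one-dimensional example: estimating Pr(x ≤ b) for a Gaussian state by sampling (approach i) versus evaluating it exactly from the Gaussian CDF (approach ii). This is an illustrative sketch with assumed numeric values, not the thesis implementation.

```python
import random
from statistics import NormalDist

# Approach (ii): exact probability from the Gaussian CDF.
mu, sigma, b = 0.0, 1.0, 1.5   # illustrative mean, std, and constraint bound
exact = NormalDist(mu, sigma).cdf(b)

# Approach (i): Monte Carlo estimate from random samples of the state.
random.seed(0)
n = 200_000
hits = sum(random.gauss(mu, sigma) <= b for _ in range(n))
mc = hits / n

print(round(exact, 3), round(mc, 3))  # both near 0.933
```

The Monte Carlo estimate converges to the analytic value but needs many samples per constraint, which is why the analytic Gaussian reformulation is the one carried into the MILP.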
4.3.1 Single Linear Chance Constraints

In this section, it is shown that single linear chance constraints on the system state can be turned into linear deterministic constraints on the mean of the state by calculating a back-off amount that is a function of the state uncertainty and the level of confidence. We first consider a chance constraint Pr(X ≤ 0) ≥ α, where X has the univariate Gaussian distribution X ∼ N(μ, σ²). This problem is illustrated in Figure 4.1.

Figure 4.1: X is a single-variate normal random variable X ∼ N(μ, σ²). When the variance σ² is fixed, μ ≤ c is the necessary and sufficient condition for Pr(X ≤ 0) ≥ α.

This chance constraint is equivalent to a deterministic constraint on the mean, i.e.,

Pr(X ≤ 0) ≥ α  ⟺  μ ≤ c    (4.10)

Here, c is a function of α and σ and can be calculated as

c = √2 σ erf^{−1}(1 − 2α)    (4.11)

where erf(z), the error function, is defined as follows:

erf(z) = (2/√π) ∫₀^z e^{−t²} dt    (4.12)

An example lookup table that can be used to find the value of erf^{−1}(1 − 2α) is shown here in Table 4.1. It is notable that as the level of confidence increases, the value of erf^{−1}(1 − 2α), and thus c, decreases to assure the constraint X ≤ 0 is more likely to be satisfied.

Table 4.1: Example values for the level of confidence α versus erf^{−1}(1 − 2α).

α                  0.1     0.2     0.4     0.7      0.8      0.9      0.99     0.999
erf^{−1}(1 − 2α)   0.906   0.595   0.179   −0.370   −0.595   −0.906   −1.645   −2.185

This simple case can be extended to account for linear chance constraints on a multivariate Gaussian random variable X ∼ N(μ_X, Σ_X). Consider the linear chance constraint to be Pr(a^T X ≤ b) ≥ α, where a is an n×1 vector, n being the size of X. Let Y = a^T X − b be an affine transformation of X. From probability theory [78], Y has a normal distribution with mean and variance of μ_Y = a^T μ_X − b and σ_Y² = a^T Σ_X a, respectively. The chance constraint Pr(a^T X ≤ b) ≥ α is then equivalent to Pr(Y ≤ 0) ≥ α and can be changed to deterministic constraints as follows:

Pr(a^T X ≤ b) ≥ α  ⟺  a^T μ_X − b ≤ c,   c = √2 √(a^T Σ_X a) · erf^{−1}(1 − 2α)    (4.13)

Eq. (4.13) can then be used to change the single linear chance constraint Pr(a^T X ≤ b) ≥ α to a deterministic constraint on the mean of X. It can be perceived from Eq. (4.11) and Eq. (4.13) that as the uncertainty increases, the absolute value of c will also increase, resulting in a more conservative linear constraint. This is the main motivation for utilizing the PCL instead of the open loop prediction approach, as the former prevents the unbounded growth in the uncertainty of future predicted states, hence limiting the value of c. The formulation for single linear chance constraints will be extended in the following section to account for cases where sets of linear chance constraints have to be satisfied simultaneously.

4.3.2 Sets of Linear Chance Constraints

In the robotic motion planning problem, it is often necessary to impose a chance constraint on a set of linear constraints. For example, as was elaborated in Section 3.3.1, the conventional obstacle avoidance constraint at each time step is modeled as a set of disjunctive linear constraints. In another example, LOS-based obstacle avoidance is modeled as a conjunction of disjunctive constraints. In such cases, the chance constraint formulation in the previous section is no longer applicable since it only considers single linear constraints. In this section, problems with chance constraints on a set of linear constraints are formulated. Two different sets of linear constraints are studied here: (i) sets of disjunctive linear constraints, and (ii) sets of conjunctive linear constraints.

4.3.2.1 Disjunctive Linear Constraints

Consider the set of linear constraints A_i : a_i^T x ≤ b_i, i = 1, …, n. The problem of interest is to formulate the chance constraint Pr(A_1 ∨ A_2 ∨ … ∨ A_n) ≥ α. In simpler words, we want to ensure that the probability that at least one event A_i happens is more than the specified value α.
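Before handling sets of constraints, the single-constraint back-off of Section 4.3.1 can be sketched numerically. Using the identity √2·erf^{−1}(1 − 2α) = −Φ^{−1}(α), where Φ^{−1} is the standard-normal quantile function, the back-off c and the tabulated quantity erf^{−1}(1 − 2α) can both be evaluated with the Python standard library (an illustrative sketch; the thesis computations were done in MATLAB).

```python
from math import sqrt
from statistics import NormalDist

def back_off(alpha, sigma):
    """Back-off c = sqrt(2)*sigma*erfinv(1-2*alpha) = -sigma*Phi^{-1}(alpha)."""
    return -sigma * NormalDist().inv_cdf(alpha)

def erfinv_1m2a(alpha):
    """The quantity tabulated in Table 4.1, via the standard-normal quantile."""
    return -NormalDist().inv_cdf(alpha) / sqrt(2)

for a in (0.1, 0.8, 0.9, 0.99, 0.999):
    print(a, round(erfinv_1m2a(a), 3))
# reproduces Table 4.1: 0.906, -0.595, -0.906, -1.645, -2.185
```

Note that for α > 0.5 the back-off c is negative, i.e., the deterministic constraint on the mean is tightened, and its magnitude scales linearly with the standard deviation of the state.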
Considering De Morgan's law

¬(A_1 ∨ A_2 ∨ … ∨ A_n) = ¬A_1 ∧ ¬A_2 ∧ … ∧ ¬A_n    (4.14)

and the fact that

Pr(A) + Pr(¬A) = 1    (4.15)

where ¬ represents the negation relation, the problem of interest can be equally formulated as

Pr(¬A_1 ∧ ¬A_2 ∧ … ∧ ¬A_n) ≤ 1 − α    (4.16)

In order to simplify this problem further, we use the formulation for the conditional probability, i.e.,

Pr(A_1 ∧ A_2) = Pr(A_1) Pr(A_2 | A_1)    (4.17)

where Pr(A_2 | A_1) denotes the probability of event A_2 happening given that A_1 has happened. It is notable that as Pr(A_2 | A_1) ≤ 1, therefore Pr(A_1 ∧ A_2) ≤ Pr(A_1). Similarly, it can be shown that Pr(A_1 ∧ A_2) ≤ Pr(A_2). In other words, Pr(A_1) and Pr(A_2) are upper bounds on Pr(A_1 ∧ A_2). Therefore, Pr(¬A_1 ∧ … ∧ ¬A_n) ≤ Pr(¬A_i) for any i. Thus, ensuring the constraint

⋁_{i=1}^{n} [ Pr(¬A_i) ≤ 1 − α ]    (4.18)

is a sufficient (but conservative) criterion to satisfy Eq. (4.16). The chance constraints Pr(¬A_i) ≤ 1 − α are single linear chance constraints equivalent to Pr(a_i^T x ≤ b_i) ≥ α, which was formulated in the previous section.

4.3.2.2 Conjunctive Linear Constraints

The set of linear constraints A_i : a_i^T x ≤ b_i, i = 1, …, n is considered again, while the problem of interest is to ensure that Pr(A_1 ∧ A_2 ∧ … ∧ A_n) ≥ α. It is again more straightforward to formulate this problem in terms of the probability of failure, i.e., the probability of negated relations. Using De Morgan's law:

¬(A_1 ∧ … ∧ A_n) = ¬A_1 ∨ … ∨ ¬A_n    (4.19)

the problem of interest is equivalent to

Pr(¬A_1 ∨ … ∨ ¬A_n) ≤ 1 − α    (4.20)

It is known from Boole's inequality that

Pr(¬A_1 ∨ … ∨ ¬A_n) ≤ Σ_{i=1}^{n} Pr(¬A_i)    (4.21)

Therefore, if we constrain the value of each Pr(¬A_i), i = 1, …, n, to be lower than (1 − α)/n,

Pr(¬A_1 ∨ … ∨ ¬A_n) ≤ Σ_{i=1}^{n} Pr(¬A_i) ≤ 1 − α    (4.22)

In other words, Eq. (4.20) can be conservatively modeled as Pr(¬A_i) ≤ (1 − α)/n, equivalent to Pr(a_i^T x ≤ b_i) ≥ 1 − (1 − α)/n, which was modeled in Section 4.3.1. In the previous and current sections, chance constraints on sets of disjunctive and conjunctive linear constraints were formulated as single chance constraints, respectively. It is notable that the latter is formulated more conservatively compared to the former (a per-constraint level of confidence of 1 − (1 − α)/n versus α).
Thus, in the cases where the problem can be formulated as both disjunctive and conjunctive constraints, the former approach is preferred. This will be elaborated further in Section 4.4.1.

4.4 MILP Formulation

In this section, the RHMILP motion planning framework which was developed in Section 3.3 is extended to the cases where the robot state is uncertain. In such cases, the current and future system states are not known deterministically and have to be estimated. Hence, estimation algorithms have to be incorporated into the RHMILP framework. Also, chance constraints must be utilized instead of deterministic constraints to ensure constraint satisfaction in the presence of uncertainties.

4.4.1 RHMILP

The stage and final cost terms in Eq. (4.6) are formulated in Eq. (4.23) and Eq. (4.24), respectively.

ℓ(x̂_{k+j|k}, u_{k+j}) = q^T |x̂_{k+j|k} − x_F| + r^T |u_{k+j}|    (4.23)

F(x̂_{k+N|k}) = p^T |x̂_{k+N|k} − x_F|    (4.24)

Similar to the cost function formulation for RHMILP in a deterministic environment (Section 3.3.3), q, r, and p are the weighting vectors, and auxiliary variables are utilized to handle the absolute values in the cost functions based on Lemma 2.

4.4.2 Obstacle Avoidance

The obstacle avoidance constraints that were developed in the previous chapter for deterministic environments are reformulated here for environments where the robot states are uncertain. In doing so, the linear chance constraints developed in Section 4.3 are utilized to ensure robustness against the uncertainties. The constraints are first developed for rectangular obstacles and are further extended to general convex polyhedral ones.
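The back-off and big-M machinery used by the RHMILP (cf. Chapter 3) can be sketched for a rectangular obstacle by brute-forcing the binary variables, since no MILP solver is needed at this toy scale. The obstacle bounds, back-off value, and big-M constant below are illustrative assumptions.

```python
from itertools import product

# Sketch of a big-M encoding of the disjunctive rectangular obstacle-avoidance
# constraint on the mean position (illustrative bounds and parameters).
XMIN, XMAX, YMIN, YMAX = 0.0, 1.0, 0.0, 1.0
C = -0.1      # back-off: negative for alpha > 0.5, tightening each face
M = 100.0     # big-M constant

def feasible(xh, yh):
    """True iff some binary vector b with sum(b) <= 3 satisfies all four
    big-M inequalities, i.e. at least one original disjunct holds."""
    for b in product((0, 1), repeat=4):
        if sum(b) > 3:
            continue
        if (xh <= XMIN + C + M * b[0] and
                xh >= XMAX - C - M * b[1] and
                yh <= YMIN + C + M * b[2] and
                yh >= YMAX - C - M * b[3]):
            return True
    return False

print(feasible(-0.5, 0.5))  # well left of the obstacle
print(feasible(0.5, 0.5))   # inside the obstacle
print(feasible(1.05, 0.5))  # outside, but within the back-off band
```

A point inside the (back-off enlarged) obstacle forces all four binaries to one, violating the sum constraint, so the encoding is infeasible there and feasible everywhere else, exactly mirroring the disjunction it replaces.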
4.4.2.1 Conventional Obstacle Avoidance

Remark 1: The conventional obstacle avoidance constraint for a deterministic robot position (x_k, y_k) at time step k, with respect to a rectangular obstacle with bounds [x_min, x_max] and [y_min, y_max], is formulated as a set of disjunctive constraints as

x_k ≤ x_min  ∨  x_k ≥ x_max  ∨  y_k ≤ y_min  ∨  y_k ≥ y_max    (4.25)

The chance constraint formulation for conventional obstacle avoidance is formulated as follows:

Pr(x_k ≤ x_min ∨ x_k ≥ x_max ∨ y_k ≤ y_min ∨ y_k ≥ y_max) ≥ α    (4.26)

which corresponds to the chance constraint formulation for sets of disjunctive constraints presented in Section 4.3.2.1. Thus, Eq. (4.18) can be used to conservatively simplify Eq. (4.26) as follows:

Pr(x_k ≤ x_min) ≥ α  ∨  Pr(x_k ≥ x_max) ≥ α  ∨  Pr(y_k ≤ y_min) ≥ α  ∨  Pr(y_k ≥ y_max) ≥ α    (4.27)

In the above equation, each term is a linear chance constraint that can be written in the form Pr(a^T x_k ≤ b) ≥ α, and can be formulated as a linear deterministic constraint on the mean of the system state as

Pr(a^T x_k ≤ b) ≥ α  ⟺  a^T x̂_k − b ≤ c,   c = √2 √(a^T Σ_k a) · erf^{−1}(1 − 2α)    (4.28)

where x̂_k and Σ_k are the mean and covariance of the robot state at time step k. Eq. (4.27) is then formulated as a disjunction of linear inequalities as follows:

x̂_k ≤ x_min + c_1  ∨  x̂_k ≥ x_max − c_2  ∨  ŷ_k ≤ y_min + c_3  ∨  ŷ_k ≥ y_max − c_4    (4.29)

and is finally turned into a conjunction of linear inequalities as

x̂_k ≤ x_min + c_1 + M b_1
x̂_k ≥ x_max − c_2 − M b_2
ŷ_k ≤ y_min + c_3 + M b_3
ŷ_k ≥ y_max − c_4 − M b_4
Σ_{i=1}^{4} b_i ≤ 3,   b_i ∈ {0, 1}    (4.30)

Note that another approach would be to first turn the disjunctive constraints in Eq. (4.25) into conjunctive constraints and then apply the chance constraints. However, as was mentioned in Section 4.3.2, the chance constraints on a conjunctive set of constraints are formulated more conservatively than the chance constraints on a disjunctive set of constraints, and are therefore less preferable.

4.4.2.2 LOS-based Obstacle Avoidance

Here, the LOS-based obstacle avoidance constraint for the case where uncertainties are associated with the robot states is developed. Chance constraints are again utilized to incorporate the uncertainty in the constraint modeling.
Remark 2: In a deterministic environment, the LOS-based obstacle avoidance constraint for a rectangular obstacle is modeled as a conjunction of disjunctive linear constraints, for a finite set T of interpolation parameters τ ∈ [0, 1], as

⋀_{τ∈T} [ τ x_k + (1 − τ) x_{k+1} ≤ x_min  ∨  τ x_k + (1 − τ) x_{k+1} ≥ x_max  ∨  τ y_k + (1 − τ) y_{k+1} ≤ y_min  ∨  τ y_k + (1 − τ) y_{k+1} ≥ y_max ]    (4.31)

The chance constraint formulation of Eq. (4.31) with the level of confidence α is written as

Pr( ⋀_{τ∈T} [ τ x_k + (1 − τ) x_{k+1} ≤ x_min ∨ τ x_k + (1 − τ) x_{k+1} ≥ x_max ∨ τ y_k + (1 − τ) y_{k+1} ≤ y_min ∨ τ y_k + (1 − τ) y_{k+1} ≥ y_max ] ) ≥ α    (4.32)

The formulation of this chance constraint is to some extent similar to the one developed for the conventional obstacle avoidance constraint. However, as the LOS-based obstacle avoidance constraint is modeled as a conjunction of disjunctive constraints, the level of confidence for each of the disjunctive sets of constraints has to be α_τ = 1 − (1 − α)/m, where m is the number of members of T (refer to Section 4.3.2.2, where the chance constraints are applied on a conjunction of constraints). Thus, for any τ ∈ T, the chance constraint formulation is modeled as follows:

Pr( τ x_k + (1 − τ) x_{k+1} ≤ x_min ∨ τ x_k + (1 − τ) x_{k+1} ≥ x_max ∨ τ y_k + (1 − τ) y_{k+1} ≤ y_min ∨ τ y_k + (1 − τ) y_{k+1} ≥ y_max ) ≥ α_τ    (4.33)

and further

Pr(τ x_k + (1 − τ) x_{k+1} ≤ x_min) ≥ α_τ  ∨  Pr(τ x_k + (1 − τ) x_{k+1} ≥ x_max) ≥ α_τ  ∨  Pr(τ y_k + (1 − τ) y_{k+1} ≤ y_min) ≥ α_τ  ∨  Pr(τ y_k + (1 − τ) y_{k+1} ≥ y_max) ≥ α_τ    (4.34)

In the above equation, each term is a linear chance constraint which can be written in the form Pr(X_τ ≤ b) ≥ α_τ, where X_τ = τ x_k + (1 − τ) x_{k+1}, and can be formulated as a linear deterministic constraint as

Pr(X_τ ≤ b) ≥ α_τ  ⟺  x̂_τ − b ≤ c_τ,   c_τ = √2 σ_τ erf^{−1}(1 − 2α_τ)    (4.35)

Here, x̂_τ = τ x̂_{k+j|k} + (1 − τ) x̂_{k+j+1|k} and σ_τ² = τ² Σ_{k+j|k} + (1 − τ)² Σ_{k+j+1|k} are the mean and covariance associated with X_τ, respectively.
The LOS-based obstacle avoidance constraint is then modeled as a conjunction of disjunctive linear inequalities as follows:

⋀_{τ∈T} [ τ x̂_k + (1 − τ) x̂_{k+1} ≤ x_min + c_{τ,1}  ∨  τ x̂_k + (1 − τ) x̂_{k+1} ≥ x_max − c_{τ,2}  ∨  τ ŷ_k + (1 − τ) ŷ_{k+1} ≤ y_min + c_{τ,3}  ∨  τ ŷ_k + (1 − τ) ŷ_{k+1} ≥ y_max − c_{τ,4} ]    (4.36)

and turned into the desired MILP formulation as

⋀_{τ∈T} [ τ x̂_k + (1 − τ) x̂_{k+1} ≤ x_min + c_{τ,1} + M b_{τ,1}
          τ x̂_k + (1 − τ) x̂_{k+1} ≥ x_max − c_{τ,2} − M b_{τ,2}
          τ ŷ_k + (1 − τ) ŷ_{k+1} ≤ y_min + c_{τ,3} + M b_{τ,3}
          τ ŷ_k + (1 − τ) ŷ_{k+1} ≥ y_max − c_{τ,4} − M b_{τ,4}
          Σ_{i=1}^{4} b_{τ,i} ≤ 3,   b_{τ,i} ∈ {0, 1} ]    (4.37)

4.4.2.3 Convex Polyhedral Obstacle

In this section, the obstacle avoidance formulation is extended to a general convex polyhedral obstacle. For the sake of brevity, only the LOS-based constraint is formulated. The obstacle is assumed to be a convex polyhedron with n_f faces defined by the following equation:

a_i^T x = b_i,   i = 1, …, n_f    (4.38)

where a_i^T x = b_i represents the i-th face and a_i is the outward normal to the i-th face. An example of a convex polyhedral obstacle in 2D with n_f = 5 is shown in Figure 4.2.

Figure 4.2: A two-dimensional polyhedral obstacle with 5 faces (pentagon) is schematically shown. The polyhedron is defined by a_i^T x ≤ b_i, i = 1, …, 5, where a_i^T x = b_i represents the i-th face [3].

Based on the conventional obstacle avoidance method, in order to avoid the obstacle at time step k in a deterministic environment, Eq. (4.39) must be satisfied:

⋁_{i=1}^{n_f} a_i^T p_k ≥ b_i    (4.39)

Based on Theorem 1, the LOS-based obstacle avoidance constraint in a deterministic environment, for a polyhedral obstacle represented by Eq. (4.38), can be written as

⋀_{τ∈T} ⋁_{i=1}^{n_f} a_i^T ( τ p_k + (1 − τ) p_{k+1} ) ≥ b_i    (4.40)

Eq. (4.40) is a conjunction of disjunctive linear inequalities and can be turned into a conjunction of conjunctive linear inequality constraints utilizing the MILP formulation as follows:

⋀_{τ∈T} [ a_i^T ( τ p_k + (1 − τ) p_{k+1} ) ≥ b_i − M β_{τ,i},  i = 1, …, n_f;   Σ_{i=1}^{n_f} β_{τ,i} ≤ n_f − 1,   β_{τ,i} ∈ {0, 1} ]    (4.41)

The chance constraint formulation for the LOS-based obstacle avoidance for a probabilistic robot state is given in Eq. (4.42):

⋀_{τ∈T} Pr( ⋁_{i=1}^{n_f} a_i^T X_τ ≥ b_i ) ≥ α_τ    (4.42)

The key observation is that in Eq. (4.42), each individual constraint is a linear chance constraint on the states of the robot. Denoting τ p_k + (1 − τ) p_{k+1} as X_τ, each individual chance constraint can be expressed as a linear deterministic constraint utilizing

Pr(a_i^T X_τ ≥ b_i) ≥ α_τ  ⟺  a_i^T x̂_τ ≥ b_i − c_{τ,i},   c_{τ,i} = √2 √(a_i^T Σ_τ a_i) · erf^{−1}(1 − 2α_τ)    (4.43)

The LOS-based obstacle avoidance constraint for a probabilistic robot state can then be modeled as

⋀_{τ∈T} ⋁_{i=1}^{n_f} a_i^T ( τ p̂_k + (1 − τ) p̂_{k+1} ) ≥ b_i − c_{τ,i}    (4.44)

As can be seen in the above equation, the LOS-based obstacle avoidance at time step k provides a conjunction of disjunctive linear inequality constraints on the robot states. This constraint is turned into a conjunction of conjunctive linear inequality constraints:

⋀_{τ∈T} [ a_i^T ( τ p̂_k + (1 − τ) p̂_{k+1} ) ≥ b_i − c_{τ,i} − M β_{τ,i},  i = 1, …, n_f;   Σ_{i=1}^{n_f} β_{τ,i} ≤ n_f − 1 ]    (4.45)

By utilizing Eq. (4.45), the LOS-based obstacle avoidance problem for stochastic robot states is formulated by linear conjunctive inequalities and can be easily handled by mixed-integer linear programming (MILP) solvers. The main thesis of this chapter is that the RHMILP with the PCL state estimation strategy is able to cope with static and dynamic obstacles in environments with uncertain robot states more efficiently than the RHMILP with the open loop state estimation algorithm. To verify this claim, numerical simulations and experiments have been carried out utilizing the developed motion planning and control algorithms. These simulations and experiments are elaborated in the next two sections.

4.5 Simulation Results

In order to study the applicability of the proposed scheme for motion planning, it was tested in three simulation scenarios as summarized in Table 4.2. The simulation properties (e.g., order of system dynamics, utilized software, and problem setup) were kept the same as in the previous chapter unless otherwise noted.
Table 4.2: Simulation scenarios are designed to study the efficacy of the proposed algorithm.

Scenario #   Description
1            Comparing the open loop and the PCL algorithms
2            Studying the effect of the level of confidence on the planned trajectory
3            Studying the effect of the planning horizon on the planned path

The scenarios were designed to analyze the performance and the effect of incorporating the PCL state estimation technique into the RHMILP motion planning framework equipped with the LOS-based obstacle avoidance constraint. A Kalman Filter (KF) was developed to estimate the system states after executing the system inputs. Knowing the a priori state estimation x_{k|k} ∼ N(x̂_{k|k}, Σ_{k|k}), the measurements at the current time, y_{k+1}, and the measurement and system motion models, Eq. (4.1) and Eq. (4.2), the KF estimates the system state at time interval k+1, which is x_{k+1|k+1} ∼ N(x̂_{k+1|k+1}, Σ_{k+1|k+1}).

The robot inputs were subject to given upper and lower bounds. The uncertainty associated with the initial state of the robot, Σ_{0|0}, and the covariance of the process disturbance, Q_w, were given as diagonal matrices. In the controller design, the weighting parameters were chosen to be I_n, I_n, and 1, where I_n is an n×n identity matrix. The values of the remaining design parameters were selected to be 100, 1, and 1, respectively.

In the first scenario, the robot started from a given initial state x̂_{0|0} to reach a goal state x_F, while there was a single rectangular obstacle confined to given [x_min, x_max] and [y_min, y_max] bounds in the environment. The main goal of this simulation was to demonstrate the effect of including the future measurements on the quality of the planned path. Three cases were considered for trajectory planning. In the first case, the future measurements were ignored and the open loop (OL) prediction was performed. For the second and third cases, the PCL was performed taking into account the anticipated future measurement.
In order to study the effect of sensor measurement accuracy, the quality of the measurement was chosen to be different for the latter two cases (R = I₂ and R = 0.2 I₂). The simulation results of this scenario are shown in Figure 4.3. The planned paths for the three different cases are shown in Figure 4.3(a). The obstacle is illustrated in yellow. The uncertainty associated with the future robot states is represented by the ellipses along the trajectories. This scenario is intended to be very simple, in the sense that only one obstacle exists and the intuitive solution is a straight line, thus making it easier to compare the two approaches. The optimal solution to this motion planning problem is intuitively known: the robot must travel along the obstacle in a straight line to reach its desired goal.

The paths planned by the PCL (red and grey dashed lines) are considerably closer to the optimal path compared to the path planned using the OL prediction (dashed green line). The reason for this observation can be perceived from Figure 4.3(b), where the uncertainties of the future robot states for the three different cases are compared. In the OL prediction, the uncertainty of the robot position substantially grows with time. This results in the chance constraints forcing the planned path to diverge from the straight line. On the other hand, the PCL strategy incorporates the future anticipated measurements in the state estimation and bounds the uncertainty growth. The bounds on the growth in uncertainty are directly dependent on the quality of the measurement. The state estimation from the sensor with the lower-quality measurement (red) is associated with a higher value of uncertainty compared to the estimation from the higher-quality sensor (grey). As a result, the grey path, corresponding to the higher-quality measurement, is closer to the optimal solution, while the red path diverges from the optimal path.
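The bounded uncertainty growth just discussed can be sketched with a scalar covariance recursion combining the prediction step of Eq. (4.4) with the MKP update of Eq. (4.5). The scalar simplification and the numeric values of Q and the initial covariance are illustrative assumptions; the two measurement qualities follow the R = 1 versus R = 0.2 comparison above.

```python
# Scalar sketch of the PCL covariance recursion: predict P- = P + Q, then
# correct with K = P- / (P- + R) and P = (1 - K) * P-. A smaller measurement
# noise R yields a smaller bounded covariance (cf. Figure 4.3(b)).
def pcl_covariance(r, q=0.05, p0=0.25, steps=50):
    p = p0
    for _ in range(steps):
        p_pred = p + q                 # open-loop prediction step, Eq. (4.4)
        k = p_pred / (p_pred + r)      # Kalman gain
        p = (1.0 - k) * p_pred         # MKP covariance update, Eq. (4.5)
    return p

p_low_quality = pcl_covariance(r=1.0)   # lower-quality sensor
p_high_quality = pcl_covariance(r=0.2)  # higher-quality sensor
print(round(p_low_quality, 4), round(p_high_quality, 4))
```

Both recursions converge to a finite steady-state covariance (unlike the open-loop recursion, which grows without bound), and the higher-quality sensor converges to the smaller value, matching the red-versus-grey comparison in the text.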
In Figure 4.3(c), the planned and executed paths for the OL prediction and for the PCL strategy with the higher measurement quality (R = 0.2 I2) are illustrated. The planned and executed paths for the OL approach differ substantially from each other. This is because, after each planning cycle, one control action is applied, new measurements are taken, and the motion planning problem is re-solved; the newly taken measurements correct the covariance of the robot position and prevent the uncertainty from growing substantially. On the other hand, the planned and executed paths for the PCL approach are very similar, which implies that the PCL strategy efficiently includes the future anticipated measurements in the planning process.

Figure 4.3: A path planning problem solved in three different cases: OL (green), PCL with low quality measurement (red), and PCL with high quality measurement (grey). In the OL approach, the uncertainty of the future robot position grows unboundedly; the PCL approach bounds this growth. (a) The grey line follows the intuitively optimal solution while the other two diverge from the straight line. (b) The future robot position uncertainty for the three approaches, demonstrating that the PCL bounds the growth of uncertainty. (c) The planned and executed paths for the OL are considerably distinct; the efficient incorporation of anticipated future measurements in the PCL results in very similar planned and executed paths [3].

On the negative side, however, the assumption that the future measurements will actually occur limits the use of the PCL algorithm to receding horizon frameworks, whereas the OL state estimation can be used in both RHC and fixed horizon control algorithms. In other words, as the execution horizon (M) increases, the robustness of the trajectory designed by the PCL approach against uncertainties decreases.
In order to support this claim, the motion planning scenario in Figure 4.3 was performed for different values of the execution horizon (M) using the PCL strategy. To obtain an appropriate estimate, the problem was solved 20 times for each value of M.

Table 4.3: The failure rate for different execution horizons. As the execution horizon increases, the failure rate also increases [3].

Execution horizon (M)   1   2   3    4    5
Failure %               0   5   10   25   60

The table demonstrates the results of this study. It was observed, and also expected, that increasing the execution horizon decreases the robustness of the PCL algorithm against the uncertainties. For M = 5, the failure percentage reaches 60% and the algorithm can no longer be considered robust.

The main goal of the second scenario was to study the effect of the level of confidence on the generated trajectories. In this scenario, the robot started from an initial estimate x̂_{0|0} and had to reach a goal position while two rectangular obstacles were present in the environment. A fixed measurement noise covariance was assumed. Three trajectories for three different levels of confidence were designed using the PCL approach.

Figure 4.4: The effect of changing the level of confidence on the designed path. The paths generated for uncertain robot states are generally more conservative than the path for the deterministic environment (black dashed line). As the level of confidence is increased, the robot path between the obstacles moves further away to avoid the obstacles. For the highest level of confidence (0.9999), the path can no longer go between the obstacles and has to go around them [3].
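The mechanism behind this conservatism is the standard conversion of a Gaussian chance constraint into a tightened deterministic one: for a half-plane constraint a·x <= b on a normally distributed position x, requiring the constraint to hold with probability delta amounts to retracting the boundary by a margin that grows with both the state uncertainty and the level of confidence. A generic sketch of this textbook construction (as in, e.g., [53]; the covariance and obstacle face below are illustrative, not the thesis's exact formulation):

```python
import math
from statistics import NormalDist

def tightened_bound(b, a, Sigma, delta):
    """Deterministic surrogate for Pr(a.x <= b) >= delta with
    x ~ N(mean, Sigma): the mean must satisfy a.mean <= b - margin."""
    var = sum(a[i] * Sigma[i][j] * a[j]
              for i in range(len(a)) for j in range(len(a)))
    margin = NormalDist().inv_cdf(delta) * math.sqrt(var)
    return b - margin

Sigma = [[0.04, 0.0], [0.0, 0.04]]   # illustrative position covariance
a = [1.0, 0.0]                       # stay left of a vertical obstacle face
bounds = [tightened_bound(2.0, a, Sigma, d) for d in (0.99, 0.999, 0.9999)]
# Each higher level of confidence retracts the admissible region further,
# which is why the planned paths move further away from the obstacles.
```

When the combined retraction from the faces of two neighboring obstacles closes the gap between them, the passage becomes infeasible and the planner must route around, exactly the behavior seen at the highest level of confidence in Figure 4.4.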
Furthermore, one trajectory was generated for the deterministic environment, i.e., assuming no uncertainty is associated with the robot state. Figure 4.4 demonstrates how the planned trajectory becomes more conservative (and safer) as the level of confidence increases. As we move from a deterministic environment (black dashed line) to a non-deterministic one, the robot position becomes uncertain. As a result, the chance constraints become active and force the robot path to move further away from the obstacles. Moreover, when the robot position is uncertain, increasing the level of confidence (thus decreasing the collision likelihood) makes the planned path more conservative and moves it further from the obstacles (the green and blue dashed lines). For the highest level of confidence (0.9999), the robot can no longer go between the obstacles and has to go around them in order to achieve its goal.

Finally, the third scenario aimed at studying the effect of the planning horizon on the planned path. The robot started from an initial estimate x̂_{0|0} and had to reach a goal position while one rectangular obstacle was present in the environment. The position of the robot in the y direction was also constrained to be positive. The simulation results are shown in Figure 4.5. With the shorter planning horizon, the motion planning algorithm results in a trajectory in which the robot stops behind the obstacle. Technically, this is known as getting trapped in a local minimum. As a result, the generated trajectory does not deliver the robot to the desired position. In contrast, the algorithm with the longer planning horizon generates a trajectory that reaches the target and does not get trapped in the local minimum.
In conclusion, when solving the motion planning problem, the planning horizon has to be sufficiently large in order for the generated trajectory (which results from solving an optimization problem) not to get trapped behind the obstacles.

Figure 4.5: The designed trajectory depends on the planning horizon, as the planned trajectories differ for different planning horizon values.

4.6 Experimental Results

To study the applicability of the proposed motion planning algorithm to real-world conditions, i.e., uncertainties in measurement, errors in system dynamic modeling, and suboptimal optimization results, two experimental scenarios were carried out. A Robot Operating System (ROS) based platform was utilized as our experimental setup to validate the feasibility of the developed algorithms. The next two sections elaborate on the experimental test-bed and the results of the experiments.

4.6.1 Experimental Test-bed

The experimental platform, consisting of ROS Hydro and Gazebo 1.9, was set up on a Linux machine. The motion planning and control algorithms were programmed in MATLAB and solved in CPLEX. ROS acted as middleware between the motion planning algorithm and the Gazebo software that simulated the environment. An overall schematic of the interconnectivity of the components of the experimental platform is illustrated in Figure 4.6. The main role of ROS is to transfer a localization request from MATLAB to Gazebo, provide localization data from Gazebo to MATLAB, and execute the control orders from MATLAB on the simulated robot in Gazebo.

Figure 4.6: The interconnectivity of MATLAB, ROS, and Gazebo in the experimental platform. ROS transfers the localization request and localization data between MATLAB and Gazebo. It also executes the planned motions and control commands ordered by MATLAB on Gazebo [3].
A nonlinear nonholonomic robot, the Pioneer 2DX, was used in the experiments. In order for the planned path, calculated for an omnidirectional linear robot, to be valid for our nonholonomic robot, the robot motion was decoupled into two independent components: pure rotation and pure translation. More precisely, in order to travel from point A to point B on the planned path, the robot first rotates to align its heading towards B and then travels along a straight line until it reaches point B.

The maximum linear and angular velocities of the Pioneer 2DX were set to fixed limits. In the motion planning algorithm, upper and lower bounds were imposed on the robot inputs, and two additional constraints were enforced to limit the robot turn rate. The cost function weights were selected to be 40, 1, and 1, respectively. In the controller design, the weighting matrices were chosen to be I2, I2, and 1, where I2 is the 2x2 identity matrix.

4.6.2 Results

Two robot motion planning scenarios, summarized in Table 4.4, were solved using the experimental setup. In these scenarios, the RHMILP algorithm with the LOS-based obstacle avoidance and the PCL strategy was utilized for motion planning and control of the robot.

Table 4.4: Two experiments designed to study the efficacy of the proposed algorithm.

The first scenario was designed to study the performance of the developed scheme in a relatively cluttered environment in the presence of uncertainties and static obstacles. The robot entered a building from an opening on one side and aimed to exit from the opposite side while avoiding obstacles. This scenario is shown in Figure 4.7.

Figure 4.7: The first experimental scenario, showing the robot starting and final points [3].
Scenario #   Description
1            Tackling relatively cluttered environments
2            Tackling dynamic environments

The second scenario, designed to test the ability of the RHMILP motion planning scheme to tackle dynamic environments, is shown in Figure 4.8. The robot starting and goal points are indicated in the figure. At the beginning of the motion, the robot trajectory must avoid two obstacles (Figure 4.8(a)). In the middle of the process, a third obstacle appears and the trajectory of the robot must adapt accordingly (Figure 4.8(b)).

Figure 4.8: (a) At the beginning of the process, there are only two obstacles to be avoided. (b) In the middle of the process, a third obstacle is added which occludes the initial path [3].

The results of the first experiment are shown in Figure 4.9(a), where the planned trajectory and the executed trajectory are plotted. The proposed motion planning scheme is able to effectively tackle the obstacles and uncertainties in the problem. The planned trajectory is designed conservatively so that, despite the disturbances in the robot control input, the executed trajectory of the robot remains safe in terms of obstacle avoidance. The results of the second scenario are shown in Figure 4.9(b) and (c). In the absence of the dynamic obstacle, the intuitive solution to this motion planning problem is for the robot to follow a straight horizontal line from the start to the goal. This initially planned trajectory is shown in Figure 4.9(b) as the blue dashed line. However, when the robot arrives at a certain point along the path, the third obstacle appears. The new planned trajectory, i.e., the red dashed line, can no longer go between the two original obstacles and must turn around them instead. The inputs to the system were bounded between fixed lower and upper limits. It is shown in Figure 4.9(c) that in practice this constraint is almost always satisfied.
Figure 4.9: (a) In the planning process, uncertainties are taken into account so that the disturbances in the robot control input do not cause the executed trajectory to collide with the obstacles. (b) The initial trajectory of the robot (before the appearance of the dynamic obstacle) is compared with the adjusted trajectory (after the appearance of the dynamic obstacle) and the executed trajectory. (c) The input of the robot in the x and y directions, plotted against the upper and lower bound limits (red lines) [3].

4.7 Summary

This chapter presented a mathematical formulation for motion planning of linear robots with uncertain states in cluttered and dynamic environments using RHMILP. More precisely, the LOS-based obstacle avoidance constraint presented in Chapter 3 was adopted and extended to account for uncertain robot states using chance constraints. The PCL and the open loop state estimation strategies were incorporated into the RHMILP framework to estimate the predicted future states. The PCL algorithm recursively estimates the future robot states taking into account the anticipated future measurements. As a result, the uncertainty of the future states does not grow substantially over time, leading to a more useful path planning solution than the open loop prediction algorithm. The presented motion planning framework was examined through both simulations and experiments in different motion planning scenarios.
We demonstrated how the inclusion of future anticipated measurements results in a better prediction of the future state uncertainty compared to the open loop algorithm, where future measurements are completely ignored.

Chapter 5 Conclusions and Future Work

5.1 Summary

The main contribution of this thesis was to develop new concepts and algorithms for motion planning of autonomous robots using receding horizon mixed-integer linear programming (RHMILP). In particular, the thesis was concerned with generating safe and optimal trajectories for linear systems under uncertainty. Namely, this thesis introduced a new state estimation algorithm and a new obstacle avoidance constraint to the existing RHMILP literature for planning under uncertainty.

The thesis began with a general receding horizon formulation for motion planning in Chapter 2. The developed framework was a nonlinear RHC that accounted for the uncertainties by incorporating them into the motion planning algorithm. The uncertainties in Chapter 2, and throughout the thesis, were considered to have a probabilistic nature. A case study on motion planning for the collaboration of aerial and ground vehicles in automated wildfire suppression was presented. The problem of interest was to direct a UGV and a UAV optimally, where the UGV was to move towards a target while the UAV was responsible for providing the localization data for the UGV, the UAV, and the target. More precisely, the objective was to guide the UGV to the target (fire) with the least time and fuel consumption while the UAV motion was intended to reduce the uncertainty of the reported localization data.

Chapter 3 then narrowed the motion planning problem down to linear robotic systems in deterministic environments. It was shown that in such cases the receding horizon algorithm can be formulated in an RHMILP framework.
The problem of interest was to safely guide the autonomous robot towards its target, safe in the sense of obstacle avoidance. In addition to developing state-of-the-art RHMILP motion planning algorithms, a new obstacle avoidance constraint, called the LOS-based obstacle avoidance, was formulated and incorporated into the framework. This constraint is preferable to the conventional RHMILP obstacle avoidance constraint [56] as it increases the feasibility of the problem while providing the same level of safety for the planned path. In the proposed constraint, the robot avoids colliding with the surrounding obstacles by ensuring that every two sequential robot positions are connected (i.e., the robot always sees its next position).

The framework presented in Chapter 3 was further extended in Chapter 4 to capture motion planning of linear robotic systems with uncertain states. Instead of deterministic constraints on the robot states, chance constraints were formulated to ensure that the constraints are systematically satisfied for possible realizations of the system states. The other elements of the RHMILP framework were adapted to take into account the uncertainties associated with the system states. This thesis presented the first formal effort to incorporate the partially closed-loop (PCL) strategy, used for predicting the future states, into the RHMILP scheme. The PCL strategy predicts the future state uncertainties while taking into account the uncertainty of the expected future measurements. As a result, the predicted uncertainty of the future states is bounded and, unlike traditional approaches, does not grow substantially over time. Thus, the proposed path planning algorithm can offer a more useful solution for motion planning in cluttered environments or narrow passages.
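For reference, the conventional RHMILP avoidance constraint [56] mentioned above encodes, for every waypoint and every axis-aligned rectangular obstacle, the disjunction "the waypoint lies beyond at least one of the four faces", using one binary variable per face and a big-M relaxation (x <= x_min + M*b1, x >= x_max - M*b2, and similarly in y, with b1 + b2 + b3 + b4 <= 3). A plain-Python sketch of the underlying disjunction (a feasibility check only, not a MILP encoding):

```python
def outside_rectangle(p, rect):
    """A waypoint satisfies the conventional avoidance constraint iff
    it lies beyond at least one face of the axis-aligned obstacle;
    in the MILP, each side becomes a big-M constraint with a binary."""
    x, y = p
    x_min, x_max, y_min, y_max = rect
    sides = (x <= x_min, x >= x_max, y <= y_min, y >= y_max)
    return any(sides)

obstacle = (2.0, 4.0, 1.0, 3.0)                  # illustrative rectangle
print(outside_rectangle((1.0, 2.0), obstacle))   # True: left of the obstacle
print(outside_rectangle((3.0, 2.0), obstacle))   # False: inside, infeasible
```

The LOS-based constraint proposed in the thesis strengthens this per-waypoint check by additionally requiring every pair of sequential waypoints to be mutually visible, so the path segment between samples cannot pass through an obstacle.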
We demonstrated how the inclusion of future anticipated measurements can result in a better prediction of the future state uncertainty compared to the open-loop RHMILP, where the effect of future measurements is ignored. The developed motion planning and control schemes were examined through numerical simulations and experiments. The simulation framework included MATLAB and CPLEX, and the experimental test-bed consisted of ROS, Gazebo, MATLAB, and CPLEX.

5.2 Future Work

The algorithms presented in this thesis open new directions for future research. The developed RHMILP algorithm can be improved in many ways. For instance, in the cost function formulation, more specific cost-to-go functions can be adopted to prevent the robot from becoming trapped in local minima. Also, the chance constraint LOS-based obstacle avoidance was conservatively formulated as deterministic constraints; recent research [53], however, provides a less conservative formulation of sets of chance constraints, and adopting that formulation for the LOS-based obstacle avoidance is a future work direction. As another example, it was shown that the robustness of the RHMILP that employs the PCL estimation strategy is heavily dependent on the execution horizon: the robustness of the algorithm decreases as the execution horizon increases. Developing a state prediction algorithm that is as viable as the PCL yet more robust to longer execution horizons is another future research direction. Finally, extending the proposed RHMILP motion planning scheme to multi-robotic systems in a cooperative setting is another potential future project.

Bibliography

[1] A. Mohandes, M. Farrokhsiar, and H. Najjaran, "A Motion Planning Scheme for Automated Wildfire Suppression," in Vehicular Technology Conference (VTC Fall), 2014 IEEE 80th, 2014, pp. 1–5 (to be published).
[2] A. Mohandes, M. Farrokhsiar, and H. Najjaran, "Robust MILP motion planning with LOS-based obstacle avoidance constraint," in CSME International Congress 2014 Proceedings, 2014, pp. 1–6 (to be published).
[3] A. Mohandes, M. Farrokhsiar, and H. Najjaran, "Probabilistically Safe MILP Motion Planning with LOS-based Obstacle Avoidance," submitted to Auton. Robot.
[4] S. Thrun, "Google's driverless car," TED Talk, 2011.
[5] "iRobot Ava 500 Video Collaboration Robot." [Online]. Available: http://www.irobot.com/us/learn/commercial/ava500.aspx. [Accessed: 12-Aug-2013].
[6] T. Bailey and H. Durrant-Whyte, "Simultaneous localization and mapping (SLAM): Part II," Robot. Autom. Mag. IEEE, vol. 13, no. 3, pp. 108–117, 2006.
[7] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: Part I," Robot. Autom. Mag. IEEE, vol. 13, no. 2, pp. 99–110, 2006.
[8] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM: A factored solution to the simultaneous localization and mapping problem," in Proceedings of the National Conference on Artificial Intelligence, 2002, pp. 593–598.
[9] B. P. Gerkey and M. J. Mataric, "Multi-robot task allocation: Analyzing the complexity and optimality of key architectures," in Proceedings of the International Conference on Robotics and Automation, 2003, vol. 3, pp. 3862–3868.
[10] M. J. Matarić, G. S. Sukhatme, and E. H. Østergaard, "Multi-robot task allocation in uncertain environments," Auton. Robots, vol. 14, no. 2–3, pp. 255–263, 2003.
[11] M. Alighanbari, "Robust and decentralized task assignment algorithms for UAVs." Ph.D. dissertation, Massachusetts Institute of Technology, 2007.
[12] J.-C. Latombe, Robot Motion Planning. Norwell, MA: Kluwer Academic Publishers, 1991.
[13] Z. Li and J. Canny, Nonholonomic Motion Planning. Springer, 1993.
[14] S. M. LaValle, Planning Algorithms. Cambridge University Press, 2006.
[15] J. Blythe, "Decision-theoretic planning," AI Mag., vol. 20, no. 2, p. 37, 1999.
[16] C.
Boutilier, T. Dean, and S. Hanks, "Decision-theoretic planning: Structural assumptions and computational leverage," J. Artif. Intell. Res., vol. 10, pp. 1–94, 1999.
[17] C. Goerzen, Z. Kong, and B. Mettler, "A survey of motion planning algorithms from the perspective of autonomous UAV guidance," J. Intell. Robot. Syst., vol. 57, no. 1–4, pp. 65–100, 2010.
[18] S. M. LaValle, "Motion planning," Robot. Autom. Mag. IEEE, vol. 18, no. 2, pp. 108–118, 2011.
[19] N. Dadkhah and B. Mettler, "Survey of Motion Planning Literature in the Presence of Uncertainty: Considerations for UAV Guidance," J. Intell. Robot. Syst., vol. 65, no. 1–4, pp. 233–246, Nov. 2011.
[20] M. Hoy, A. S. Matveev, and A. V. Savkin, "Algorithms for collision-free navigation of mobile robots in complex cluttered environments: a survey," Robotica, pp. 1–35, Mar. 2014.
[21] M. Athans and P. L. Falb, Optimal Control: An Introduction to the Theory and Its Applications. McGraw-Hill, 1966.
[22] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1. Athena Scientific, Belmont, 1995.
[23] J. T. Betts, "Survey of numerical methods for trajectory optimization," J. Guid. Control. Dyn., vol. 21, no. 2, pp. 193–207, 1998.
[24] E. W. Dijkstra, "A note on two problems in connexion with graphs," Numer. Math., vol. 1, no. 1, pp. 269–271, 1959.
[25] J. P. Van Den Berg and M. H. Overmars, "Roadmap-based motion planning in dynamic environments," Robot. IEEE Trans., vol. 21, no. 5, pp. 885–897, 2005.
[26] P. E. Hart, N. J. Nilsson, and B. Raphael, "A formal basis for the heuristic determination of minimum cost paths," Syst. Sci. Cybern. IEEE Trans., vol. 4, no. 2, pp. 100–107, 1968.
[27] J. T. Schwartz and M. Sharir, "On the 'piano movers' problem. II. General techniques for computing topological properties of real algebraic manifolds," Adv. Appl. Math., vol. 4, no. 3, pp. 298–351, 1983.
[28] T. Lozano-Perez, "Spatial planning: A configuration space approach," Comput. IEEE Trans., vol. 100, no. 2, pp.
108–120, 1983.
[29] T. Lozano-Pérez and M. A. Wesley, "An algorithm for planning collision-free paths among polyhedral obstacles," Commun. ACM, vol. 22, no. 10, pp. 560–570, 1979.
[30] H. Choset, "Sensor based motion planning: The hierarchical generalized Voronoi graph." Ph.D. dissertation, California Institute of Technology, 1996.
[31] O. Khatib, "Real-time obstacle avoidance for manipulators and mobile robots," Int. J. Rob. Res., vol. 5, no. 1, pp. 90–98, 1986.
[32] A. B. Roger and C. R. McInnes, "Safety constrained free-flyer path planning at the international space station," J. Guid. Control. Dyn., vol. 23, no. 6, pp. 971–979, 2000.
[33] A. Bemporad, A. De Luca, and G. Oriolo, "Local incremental planning for a car-like robot navigating among obstacles," in Proceedings of the International Conference on Robotics and Automation, 1996, vol. 2, pp. 1205–1211.
[34] L. Kavraki and J.-C. Latombe, "Randomized preprocessing of configuration for fast path planning," in Proceedings of the International Conference on Robotics and Automation, 1994, pp. 2138–2145.
[35] J. J. Kuffner and S. M. LaValle, "RRT-connect: An efficient approach to single-query path planning," in Proceedings of the International Conference on Robotics and Automation, 2000, vol. 2, pp. 995–1001.
[36] S. M. LaValle and J. J. Kuffner, "Randomized kinodynamic planning," Int. J. Rob. Res., vol. 20, no. 5, pp. 378–400, 2001.
[37] L. E. Kavraki, P. Svestka, J.-C. Latombe, and M. H. Overmars, "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," Robot. Autom. IEEE Trans., vol. 12, no. 4, pp. 566–580, 1996.
[38] A. U. Raghunathan, V. Gopal, D. Subramanian, L. T. Biegler, and T. Samad, "Dynamic optimization strategies for three-dimensional conflict resolution of multiple aircraft," J. Guid. Control. Dyn., vol. 27, no. 4, pp. 586–594, 2004.
[39] Y. Bar-Shalom, "Stochastic dynamic programming: Caution and probing," Autom. Control. IEEE Trans., vol. 26, no. 5, pp.
1184–1195, 1981.
[40] N. Du Toit, "Robot motion planning in dynamic, cluttered, and uncertain environments: the Partially Closed-Loop Receding Horizon Control approach." Ph.D. dissertation, California Institute of Technology, 2010.
[41] G. C. Goodwin, M. M. Seron, and J. A. De Doná, Constrained Control and Estimation: An Optimization Approach. Springer, 2005.
[42] J. M. Maciejowski, Predictive Control: With Constraints. Pearson Education, 2002.
[43] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, "Constrained model predictive control: Stability and optimality," Automatica, vol. 36, no. 6, pp. 789–814, 2000.
[44] M. Morari and J. H. Lee, "Model predictive control: past, present and future," Comput. Chem. Eng., vol. 23, no. 4, pp. 667–682, 1999.
[45] J. Richalet, A. Rault, J. L. Testud, and J. Papon, "Model predictive heuristic control: Applications to industrial processes," Automatica, vol. 14, no. 5, pp. 413–428, 1978.
[46] A. Jadbabaie, "Receding horizon control of nonlinear systems: a control Lyapunov function approach." Ph.D. dissertation, California Institute of Technology, 2000.
[47] L. Singh and J. Fuller, "Trajectory generation for a UAV in urban terrain, using nonlinear MPC," in Proceedings of the American Control Conference, 2001, vol. 3, pp. 2301–2308.
[48] Y. Kuwata, "Real-time Trajectory Design for Unmanned Aerial Vehicles using Receding Horizon Control." M.Sc. thesis, Massachusetts Institute of Technology, 2003.
[49] Y. Kuwata and J. P. How, "Cooperative Distributed Robust Trajectory Optimization Using Receding Horizon MILP," IEEE Trans. Control Syst. Technology, vol. 19, no. 2, pp. 423–431, 2011.
[50] A. Prékopa, Stochastic Programming. Springer, 1995.
[51] J. Yan and R. R. Bitmead, "Model predictive control and state estimation: a network example," in Proceedings of the 15th IFAC, 2002, p. 2381.
[52] J. Yan and R. R. Bitmead, "Incorporating state estimation into model predictive control and its application to network traffic control," Automatica, vol. 41, no. 4, pp. 595–604, 2005.
[53] L. Blackmore, M. Ono, and B. C. Williams, "Chance-constrained optimal path planning with obstacles," Robot. IEEE Trans., vol. 27, no. 6, pp. 1080–1094, 2011.
[54] L. Blackmore, M. Ono, A. Bektassov, and B. C. Williams, "A Probabilistic Particle-Control Approximation of Chance-Constrained Stochastic Predictive Control," Robot. IEEE Trans., vol. 26, no. 3, pp. 502–517, 2010.
[55] N. E. Du Toit and J. W. Burdick, "Robot motion planning in dynamic, uncertain environments," Robot. IEEE Trans., vol. 28, no. 1, pp. 101–115, 2012.
[56] T. Schouwenaars, B. De Moor, E. Feron, and J. How, "Mixed integer programming for multi-vehicle path planning," in Proceedings of the European Control Conference, 2001, pp. 2603–2608.
[57] C. A. Floudas, Nonlinear and Mixed-Integer Optimization: Fundamentals and Applications. Oxford University Press, 1995.
[58] T. Schouwenaars, E. Feron, and J. How, "Safe receding horizon path planning for autonomous vehicles," in Proceedings of the Annual Allerton Conference on Communication, Control and Computing, 2002, vol. 40, no. 1, pp. 295–304.
[59] A. Richards and J. P. How, "Robust distributed model predictive control," Int. J. Control, vol. 80, no. 9, pp. 1517–1531, Sep. 2007.
[60] IBM ILOG CPLEX, "12.2 User's Manual." IBM, 2010.
[61] A. Richards, Y. Kuwata, and J. How, "Experimental demonstrations of real-time MILP control," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, 2003.
[62] T. Schouwenaars, M. Valenti, E. Feron, and J. How, "Implementation and flight test results of MILP-based UAV guidance," in Aerospace Conference, 2005 IEEE, 2005, pp. 1–13.
[63] A. G. Richards, "Robust constrained model predictive control." Ph.D. dissertation, Massachusetts Institute of Technology, 2004.
[64] M. Farrokhsiar, G. Pavlik, and H. Najjaran, "An integrated robust probing motion planning and control scheme: A tube-based MPC approach," Rob. Auton. Syst., 2013.
[65] T. Schouwenaars, "Safe trajectory planning of autonomous vehicles." Ph.D. dissertation, Massachusetts Institute of Technology, 2006.
[66] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, vol. 1. MIT Press, Cambridge, 2005.
[67] R. E. Kalman, "A new approach to linear filtering and prediction problems," J. Basic Eng., vol. 82, no. 1, pp. 35–45, 1960.
[68] M. I. Ribeiro, "Kalman and extended Kalman filters: Concept, derivation and properties." Institute for Systems and Robotics, 2004.
[69] J. B. Rawlings and D. Q. Mayne, Model Predictive Control. Nob Hill Publishing, 2008.
[70] R. Bellman, Dynamic Programming. Princeton University Press, 1957.
[71] T. Schouwenaars, A. Stubbs, J. Paduano, and E. Feron, "Multivehicle path planning for nonline-of-sight communication," J. F. Robot., vol. 23, no. 3–4, pp. 269–290, 2006.
[72] A. Richards and O. Turnbull, "Inter-sample avoidance in trajectory optimizers using mixed-integer linear programming," Int. J. Robust Nonlinear Control, 2013.
[73] K. Culligan, M. Valenti, Y. Kuwata, and J. P. How, "Three-Dimensional Flight Experiments Using On-Line Mixed-Integer Linear Programming Trajectory Optimization," in Proceedings of the American Control Conference, 2007, pp. 5322–5327.
[74] D. Bertsimas and J. N. Tsitsiklis, Introduction to Linear Optimization, vol. 6. Athena Scientific, Belmont, MA, 1997.
[75] MATLAB User's Guide. The MathWorks, Inc., Natick, MA, vol. 5, 1998.
[76] L. Blackmore and B. Williams, "A probabilistic approach to optimal robust path planning with obstacles," in Proceedings of the American Control Conference, 2006.
[77] N. E. Du Toit and J. W. Burdick, "Robotic motion planning in dynamic, cluttered, uncertain environments," 2010 IEEE Int. Conf. Robot. Autom., pp. 966–973, May 2010.
[78] S.
Ross, A First Course in Probability, 8th ed. Pearson, 2009.
