
Translational Visual Servoing Control of Quadrotor Helicopters

by

Nikolai Kummer

B.Sc. Mechanical Engineering, University of Alberta, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Master of Applied Science

in

THE COLLEGE OF GRADUATE STUDIES
(Mechanical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA
(Okanagan)

November 2013

© Nikolai Kummer, 2013

Abstract

Vision-based control enables accurate positioning of quadrotor helicopters relative to a stationary or moving target, independent of on-board sensor accuracy. The use of vision for helicopter control opens indoor and GPS-denied environments as an area of operation, which currently poses challenges to these systems. This thesis presents a full vision-based quadrotor helicopter controller for tracking a stationary or slow-moving target.

The full controller uses three control loops. The outer control loop converts an image feature error into UAV velocity commands. The quadrotor is a nonlinear system that has highly coupled position and attitude dynamics. The second loop performs a feedback linearization on these dynamics and translates the desired velocity into attitude commands. The inner loop is an on-board attitude and altitude controller, which is part of the ARDrone system and whose dynamics were analyzed via frequency-domain system identification. A linear quadratic Gaussian (LQG) controller, consisting of a Kalman filter for state estimation and a linear quadratic regulator, was used to control the linearized system.

The visual servoing control scheme is image based and therefore converges towards the desired configuration independent of on-board sensor accuracy and in the presence of camera calibration errors. The visual servoing features used to control the translational degrees of freedom are based on the virtual sphere approach, which uses two points in 3D space to create a virtual sphere. Control of the helicopter in the x and y direction is linked to the center location of the virtual sphere, and distance to the target is controlled by the virtual sphere radius. The target features are two points, detected by colour- and shape-based detection methods.

Adaboost.MRT, a boosting method for multivariate regression, is proposed as part of this thesis. Adaboost.MRT is based on Adaboost.RT and extends the original to multivariate regression, improves boosting's noise sensitivity, and removes the singularity in the misclassification function. A variance-scaled misclassification function is proposed and the threshold parameter is expanded to accommodate vector output. The proposed method is tested on eight datasets and displays similar or better accuracy than the original Adaboost.RT or the base learning algorithm.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Motivations
  1.2 Objectives
  1.3 Methodology
  1.4 Thesis Organization
  1.5 Literature Review
    1.5.1 General Visual Servoing Control
    1.5.2 Visual Servoing of Unmanned Aerial Vehicles
    1.5.3 Quadrotor Helicopter Control Methods
    1.5.4 Moving Object Tracking via UAV
    1.5.5 Ensemble Prediction and Adaboost
  1.6 Contributions
    1.6.1 Quadrotor Helicopter Visual Servoing
    1.6.2 Adaboost.MRT
2 Visual Servoing Control Scheme
  2.1 Camera Model
    2.1.1 Perspective Projection
    2.1.2 Camera Calibration
    2.1.3 Spherical Projection
  2.2 Visual Servoing Feature Selection
    2.2.1 Features for Translational Degrees of Freedom
    2.2.2 Translation Visual Servoing
  2.3 Visual Servoing Feature Detection
    2.3.1 Colour Space Selection
    2.3.2 Detection Procedure
  2.4 Simulation Results
  2.5 Moving Target Compensation
3 Quadrotor Helicopter Modelling and Control
  3.1 System Model
    3.1.1 Attitude Dynamics for the Pitch and Roll Response
  3.2 Feedback Linearization
    3.2.1 Reduced System Model
  3.3 Controller Design
    3.3.1 Transfer Function Identification for Pitch and Roll Angle Dynamics
    3.3.2 Discretization and State Space Model
    3.3.3 Kalman Filter
    3.3.4 Linear Quadratic Gaussian Controller
  3.4 Simulation Setup
  3.5 Implementation Results
    3.5.1 Roll Response
    3.5.2 Pitch Response
    3.5.3 Combined Pitch and Roll Response
4 Visual Servoing Implementation and Results
  4.1 Implementation Setup
  4.2 Stationary Target Experiments
    4.2.1 Experiment #1 and #2 Results
  4.3 Moving Target Experiments
    4.3.1 3D Stationary Target
    4.3.2 3D Moving Target, No Compensation
    4.3.3 3D Moving Target, With Compensation
5 Adaboost.MRT: Colour-based Object Detection by Ensemble Learning
  5.1 Adaboost.MRT Algorithm
    5.1.1 Adaboost.MRT Overview
    5.1.2 Experimental Threshold Selection
  5.2 Performance Evaluation
    5.2.1 Datasets
    5.2.2 Threshold Parameter Optimization
    5.2.3 Performance Comparison
    5.2.4 Performance Comparison with Noisy Data
    5.2.5 Multivariate Output
  5.3 Colour Contour Learning
    5.3.1 Training Data Generation
    5.3.2 Adaboost.MRT Colour Contour Learning
    5.3.3 Colour Learning Results
6 Conclusions and Future Work
Bibliography
Appendices
Appendix A Supporting Materials
  A.1 Appendix: Original Adaboost.RT
Appendix B Controller Designs
  B.1 System Identification
    B.1.1 Pitch System Identification Frequency Sweep Information
    B.1.2 Roll System Identification Frequency Sweep Information
  B.2 LQR with Integral Term
Appendix C Matlab Code
  C.1 ARDrone Communication Functions
  C.2 Kalman Filter Function
  C.3 Visual Servoing with UAV

List of Tables

Table 2.1 Identified intrinsic parameters of the on-board UAV camera.
Table 3.1 Identified parameters for the second-order transfer function and for the frequency model identified from frequency-domain data for the pitch and roll response.
Table 5.1 Data set equations used in the performance evaluation.
Table 5.2 Training parameters for the data sets, showing the number of nodes in the hidden layers, the Adaboost.RT threshold φ_RT, the Adaboost.MRT threshold φ_MRT and the sample size N for each iteration. The power coefficient n = 1 was used for all tests.
Table 5.3 Performance comparison of Adaboost.RT, Adaboost.MRT and a single iteration of the ANN, trained on the same data.
Table 5.4 Performance comparison of Adaboost.RT, Adaboost.MRT and a single iteration of the ANN for multivariate function estimation.
Table B.1 Frequency sweep information for the pitch dynamics system identification.
Table B.2 Frequency sweep information for the roll dynamics system identification.

List of Figures

Figure 1.1 Three control loops used as part of the vision-based quadrotor helicopter controller.
Figure 1.2 Performance comparison of Adaboost.RT and Adaboost.MRT for estimation of the noise-less function y = x. The accuracy of Adaboost.RT is negatively affected by the singularity in the misclassification function.
Figure 2.1 Standard pinhole camera model.
Figure 2.2 Sample calibration images used in the determination of the camera intrinsic parameters.
Figure 2.3 Image generation for a central projection camera.
Figure 2.4 Central projection of a virtual sphere Sv, constructed from 2 points in 3D space, onto the spherical lens Sp.
Figure 2.5 Convention for the calculation of the small circle radius r.
Figure 2.6 Targets for visual servoing of a quadrotor helicopter UAV.
Figure 2.7 Comparison and visualization of the RGB colour cube and the cylindrical HSV colour space representation.
Figure 2.8 Comparison and visualization of the L*a*b* colour space at a luminance L of 20, 40 and 80.
Figure 2.9 Full object detection procedure for the two coloured targets.
Figure 2.10 Variation of the virtual sphere radius R with changing camera position C, relative to the two target points P1 and P2.
Figure 2.11 Simulation setup of the initial and desired final location of the camera.
Figure 2.12 Trajectory of the camera and the features during convergence from the initial to the desired configuration.
Figure 2.13 Feature error and corresponding camera velocity during the visual servoing for R = 0.2.
Figure 2.14 Feature error and corresponding camera velocity during the visual servoing for R = 0.1.
Figure 3.1 Quadrotor axis convention for the body-attached frame {B}.
Figure 3.2 Block diagram of the complete nonlinear system model for a quadrotor helicopter with on-board attitude controller. Wind disturbance effects are assumed negligible.
Figure 3.3 Feedback-linearized, decoupled system model of the quadrotor helicopter.
Figure 3.4 System overview for the velocity controller for the x and y directions. The term r(k) represents the input velocity.
Figure 3.5 Comparison of modelled and measured pitch response. The parameters for the second-order model are shown in Table 3.1.
Figure 3.6 Bode diagram showing the identified second-order and frequency model and the frequency sweep data of the pitch dynamics.
Figure 3.7 Comparison of the identified frequency model response with the measured response to a sample frequency sweep for the pitch dynamics.
Figure 3.8 Comparison of modelled and measured roll response. The parameters for the second-order model are shown in Table 3.1.
Figure 3.9 Bode diagram showing the identified second-order and frequency model and the frequency sweep data for the roll dynamics.
Figure 3.10 Comparison of the identified frequency model response with the measured response to a sample frequency sweep for the roll dynamics.
Figure 3.11 General overview of the quadrotor controller simulated in Matlab Simulink.
Figure 3.12 Simulation model used in the Simulink simulation. An overview of the controller is shown in Figure 3.11. The rigid body dynamics use Equations 3.9 to 3.11 to calculate the forces acting on the quadrotor helicopter.
Figure 3.13 Block diagram of the experimental setup for the velocity control of the ARDrone.
Figure 3.14 Velocity step response of the quadrotor in the y direction as a result of rolling movement.
Figure 3.15 Comparison of the simulated input and the actual input into the system in the y direction.
Figure 3.16 Comparison of the simulated and the actual roll angle of the UAV during a step input velocity command.
Figure 3.17 Velocity step response of the quadrotor in the x direction as a result of pitching movement.
Figure 3.18 Comparison of the simulated pitch input and the actual pitch input into the system for the movement in the x direction.
Figure 3.19 Comparison of the simulated and the actual pitch angle of the UAV during a step input velocity command.
Figure 3.20 Velocity step response of the quadrotor in the x direction as a result of pitching movement for combined x and y motion.
Figure 3.21 Velocity step response of the quadrotor in the y direction as a result of rolling movement for combined x and y motion.
Figure 4.1 Overview of the controller block diagram of the system with the ARDrone helicopter.
Figure 4.2 System diagram of the visual servoing and velocity controller.
Figure 4.3 Initial location of the UAV and desired final feature position of the target.
Figure 4.4 Velocity response for the x and y directions relative to the UAV reference frame {A}. Shown are the measured velocities, Kalman-filtered estimates and the input velocities. Experiment #1.
Figure 4.5 Image-based error criteria (a) and the target trajectory in the image (b). The coordinate convention is in the camera reference frame. Experiment #1.
Figure 4.6 Pixel error in the image x and y directions as a function of time for experiment #1.
Figure 4.7 Velocity response for the x and y directions relative to the UAV reference frame {A}. Shown are the measured velocities, Kalman-filtered estimates and the input velocities. Experiment #2.
Figure 4.8 Image-based error criteria (a) and the target trajectory in the image (b). The coordinate convention is in the camera reference frame. Experiment #2.
Figure 4.9 Pixel error in the image x and y directions as a function of time for experiment #2.
Figure 4.10 Controller input to control height above ground in the z direction relative to the UAV reference frame {A}. Experiment #2.
Figure 4.11 Attitude angles and normalized input into the controller for experiment #2.
Figure 4.12 Initial location of the UAV and desired final feature position of the target.
Figure 4.13 Measured, input and Kalman-filtered velocity responses for the x and y directions relative to the UAV reference frame {A} for a 3-dimensional, static target.
Figure 4.14 Image-based error criteria (a) and the target trajectory in the image (b). The target was static and 3-dimensional.
Figure 4.15 Measured, input and Kalman-filtered velocity responses for the x and y directions relative to the UAV reference frame {A} for a 3-dimensional, moving target.
Figure 4.16 Image-based error criteria (a) and the target trajectory in the image (b). The 3-dimensional target was moving at 0.25 m/s.
Figure 4.17 Measured, input and Kalman-filtered velocity responses for the x and y directions relative to the UAV reference frame {A} for a 3-dimensional, moving target and a controller with moving target compensation.
Figure 4.18 Image-based error criteria (a) and the target trajectory in the image (b). The target was moving and the velocity controller included compensation for the moving target.
Figure 4.19 Target feature error and time steps during the visual servoing process.
Figure 5.1 Mean-squared error with variation in the number of iterations T and the error threshold φ, for a scalar input and a scalar output. Results are presented for varying magnitudes of noise.
Figure 5.2 Function estimation with noise-corrupted training data for the function y = sin(x) with noise distributed according to ε ∼ U(−4, 4).
Figure 5.3 Variation of φ_MRT and the resulting RMSE for a single cross-validation test. Also shown are the mean RMSE and one standard deviation for Adaboost.RT.
Figure 5.4 Variation of the noise-to-signal magnitude ratio and resulting RMSE comparison.
Figure 5.5 RMSE comparison for x and y as a function of φ = φ_x = φ_y and T.
Figure 5.6 RMSE comparison for x and y as a function of φ_x and φ_y.
Figure 5.7 Noise-corrupted training data used to train the neural networks, the noiseless function, a single ANN estimate (x RMSE = 0.543, y RMSE = 0.488) and the Adaboost.MRT estimate (x RMSE = 0.258, y RMSE = 0.241).
Figure 5.8 Visualization of the training data and the sample performance of Adaboost.MRT and a single ANN.
Figure 5.9 Training data and the corresponding value of y_i assigned to all pixels in the image. One image shows the desired target and three images are typical background images.
Figure 5.10 Histograms showing the target and background sets. In 5.10(a) the red and blue targets are represented by high saturation values and H ≈ 1 and H ≈ 0.6.
Figure 5.11 Distribution of the target colour likelihood along the hue-saturation dimensions.
Figure 5.12 Sample images highlighting the application of the thresholding filter. The first row shows the original images, the second row shows a binary image within the threshold and the third row shows the original colours of the binary image. The fourth row shows the result of a hole-filling operation and the removal of noisy artefacts with less than 20 pixels.

Acknowledgements

I would like to thank my supervisor, Dr. Homayoun Najjaran, for guiding me and helping me throughout my degree. He was always helpful, and his guidance and the opportunities he provided made this thesis work possible.
I would like to also thank the faculty and staff at UBC as well as my friends and co-workers at the Advanced Control and Intelligent Systems Laboratory.

I would also like to express gratitude to the UBCO Mechatronics club, the UBCO Robotics club and the IEEE Okanagan student branch for teaching me most of what I know about electronics and microprocessors. I wish all of you the very best in your future endeavours.

Finally, I would like to thank my family for being supportive of my lengthy education and absence. Thank you to my parents for raising and guiding me and to my siblings for supporting me in their own way. Without their assistance and support all this would not have been possible.

Chapter 1: Introduction

1.1 Motivations

Unmanned Aerial Vehicles (UAVs) have experienced a surge of interest for civilian and research purposes over the last two decades. The low cost, small size and availability of MEMS sensors and microprocessors, and the emergence of high power-density batteries, have partially fuelled system availability. Organizations and research groups are currently working towards finding useful civilian applications for UAVs that will allow new businesses to develop and fully utilize the existing technology. Due to the often remote nature of UAV applications, system autonomy is important.

The two main system architectures are the fixed-wing aircraft and the helicopter-type UAV. Amongst UAVs, the multi-rotor helicopter type has proven to be more flexible due to its payload capacity, manoeuvrability, low operational speed, and hovering capability [61]. Flight safety regulations estimate potential for damage as a function of the speed and weight of the UAV. The helicopter's low speed and ability to hover make it safer and more practical than fixed-wing aircraft in urbanized, cluttered and indoor environments, where operations are denied to most fixed-wing aircraft.

Currently the majority of implemented and envisioned applications of UAVs are in the field of remote sensing, such as surveying with cameras or extra-spectral imaging sensors. Vision-based sensors are popular due to the small size and weight of cameras. The limiting factors for on-board sensors are energy consumption and the payload restrictions of the aircraft. The helicopter's ability to carry heavy camera equipment and hover semi-stationary in mid-air makes it ideally suited for inspection, surveying and surveillance operations. The research presented in this thesis focusses on multirotor helicopters, as they are more suitable than fixed-wing UAVs for vision tasks in indoor and cluttered environments.

Potential uses for UAVs that go beyond surveying of static landscapes include tracking of moving targets for law enforcement, the tracking of wildlife, filming of sport events, or search and rescue missions. Potential applications of quadrotor UAVs could also involve disaster site exploration, for which their hovering ability and object manipulation capability will prove useful. The uses of UAVs are numerous and include some non-conventional uses such as the Joggobot [49], a flying jogging companion.

There are, however, challenges that need to be overcome. The quadrotor helicopter has 6 degrees of freedom and 4 control inputs, which makes it an underactuated system. In addition, the quadrotor has non-linear equations of motion and a strong coupling between its attitude and velocity dynamics. Due to this coupling, it is only possible to stabilize either the attitude or the position of the UAV.
All UAVs are subject to wind disturbance forces, which displace them and cause errors in their position.

The majority of current uses for UAVs involve camera sensors. The cameras are remote sensing instruments and, for the most part, do not actively control UAV navigation. The active use of on-board hardware and reduced reliance on external positioning sensors will yield benefits in terms of increased positioning accuracy and safe operation in GPS-denied environments. Indoor environments are challenging to all UAVs, due to space constraints and because they are often GPS-denied environments. Vision-based navigation has special advantages for task-oriented applications, such as object manipulation or accurate positioning relative to a landmark.

Visual servoing is the practice of controlling a robot using camera feedback. Purely vision-based control does not rely on the UAV's absolute position estimates, as it defines an image-based error function. Visual servoing of a helicopter-type UAV is made difficult by the underactuated nature of the UAV and the coupling of the position and attitude dynamics.

This thesis presents work done on the visual servoing of quadrotor helicopters for positioning without the use of GPS or other absolute positioning sensors. A colour-based object detection method, which utilizes a modified form of the Adaboost.MRT algorithm, will be used to track a target that consists of two points in 3D space. The Adaboost.MRT algorithm is presented in this thesis and is a regression algorithm that boosts multivariate learning. The thesis presents the visual servoing control scheme and the UAV system identification and velocity controller. The full visual servoing control system is implemented and tested in experiments on an ARDrone quadrotor helicopter system.

1.2 Objectives

The main objective of this thesis is to create a vision-based control scheme for quadrotor helicopters that leads to accurate positioning relative to a stationary or moving target. The visual servoing control scheme must be capable of avoiding local minima in feature tracking and converging to the desired location in the image. The control scheme should keep the target within its field of view at all times. The target of interest is stationary or slow moving at a constant velocity. Two points on the target are localized by coloured markers and are used as part of the visual servoing control scheme. Colour-based detection is used for easy and reliable detection of these markers under varying lighting conditions. A colour-based object detector should be used to learn to separate the colours of the distinctly coloured target object from background information. The selected object detection method should be compatible with the visual servoing control scheme.

The quadrotor UAV platform is an ARDrone helicopter, and the on-board attitude controller needs to be incorporated as part of the control system. There is little information available about the on-board controller of the ARDrone; therefore, system identification is used to identify the controller dynamics. The use of system identification on the on-board dynamics should make the controller extendable to other multirotor helicopter hardware.

Image processing should be done on a ground station computer.
The images are transmitted from the helicopter to the computer, and control signals will be sent to the helicopter over a wireless network. A system model of the UAV dynamics is necessary and is created using system identification. A velocity controller must be designed that allows for implementation of the visual servoing control scheme and that will work with the identified model. The velocity controller should work with an on-board attitude controller to achieve the velocity commanded by the visual servoing scheme. The integration of the visual servoing control scheme and the velocity controller should be tested in experiments to investigate the performance of the controller.

1.3 Methodology

The visual tracking task is achieved with the aid of three control loops (see Figure 1.1 or Figure 4.2). The first controller is the on-board attitude controller of the ARDrone, which controls the pitch, roll, altitude, and yaw of the UAV. Frequency-domain system identification is performed to identify the on-board controller dynamics of the inner control loop. The second controller is the velocity controller, which translates a velocity command into the appropriate attitude commands to the inner controller. The velocity controller incorporates a feedback linearization scheme, a linear quadratic regulator and a Kalman filter. The third controller is the visual servoing controller, which generates velocity commands based on the target location in the image. The visual servoing control loop includes compensation for moving target tracking. The outer control loops assume the stability of the inner controllers.

[Figure 1.1: Three control loops used as part of the vision-based quadrotor helicopter controller (the vision, velocity and attitude control loops acting on the UAV).]

The moving target contains two features which are used as part of the visual servoing control scheme. The target features are two points, which are localized by two distinctly coloured markers. The coloured markers are detected by colour- and shape-based detection methods, and the two points are indicated by the centroids of the two markers. The colour-based detection uses the HSV colour space and fixed thresholds on the hue and saturation channels to select the distinct colours.
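As a rough illustration of the fixed-threshold detection step described above, the Matlab sketch below thresholds the hue and saturation channels of a frame to find the centroids of two distinctly coloured markers. It assumes the Image Processing Toolbox; the file name, colour ranges and minimum blob size are illustrative placeholders rather than the values used in this thesis.

```matlab
% Minimal sketch: fixed-threshold HSV detection of two coloured markers.
% Threshold values below are illustrative only.
img = imread('frame.png');                  % hypothetical camera frame
hsv = rgb2hsv(img);
H = hsv(:,:,1);  S = hsv(:,:,2);

redMask  = (H < 0.05 | H > 0.95) & S > 0.6; % red-ish marker: hue near 0 or 1, high saturation
blueMask = (H > 0.55 & H < 0.70) & S > 0.6; % blue-ish marker: hue around 0.6

% Shape-based clean-up: fill holes and remove small noisy blobs
redMask  = bwareaopen(imfill(redMask,  'holes'), 20);
blueMask = bwareaopen(imfill(blueMask, 'holes'), 20);

% Use the centroid of the largest blob of each colour as the point feature
statsR = regionprops(redMask,  'Centroid', 'Area');
statsB = regionprops(blueMask, 'Centroid', 'Area');
[~, iR] = max([statsR.Area]);   p1 = statsR(iR).Centroid;   % [u v] pixels, marker 1
[~, iB] = max([statsB.Area]);   p2 = statsB(iB).Centroid;   % [u v] pixels, marker 2
```

The two centroids p1 and p2 are the point features from which the virtual sphere feature described next is built.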
The visual servoing control scheme is image based and uses the virtual sphere feature for control. Image-based control schemes allow for convergence towards the desired position independent of on-board sensor accuracy and facilitate operation in GPS-denied environments. The virtual sphere feature is created from the two points on the moving target. Control of the helicopter in the image x and y direction is mainly governed by the center location of the virtual sphere, and distance to the target is governed by the virtual sphere radius. Compensation for a moving target was added to the visual servoing controller in the form of a proportional/integral controller on the image feature error. The proposed vision-based controller is tested in simulations and experiments on a static and a moving target. This research considered translational control only, and the yaw angle was kept constant throughout the tests. The appropriate feature selection for yaw angle control has been left for future work.

The proposed Adaboost.MRT, a modified version of the original Adaboost.RT, is an algorithm for boosting multivariate regression and is presented as part of this thesis with the future aim of implementation for task learning and control. As a sample application, Adaboost.MRT is used to learn the colour of the coloured markers in the hue-saturation space and remove the background in the images. The method is presented as an alternative to the fixed thresholds used in the coloured marker detection, but was too slow for real-time implementation.

1.4 Thesis Organization

This thesis begins by presenting the visual servoing controller and the colour-based object detection procedure in Chapter 2. This chapter also includes a compensator for moving target tracking. Chapter 3 presents the velocity controller for the ARDrone helicopter. The chapter presents the system identification, the feedback linearizing controller, the linear quadratic regulator and the Kalman filter design. Chapter 4 presents the real-time implementation of the proposed controller and experimental results of tracking a stationary and a moving target.

Finally, the development of a boosting algorithm for multivariate regression is presented in Chapter 5. This boosting algorithm was developed in this thesis and applied to colour-based object detection. The resulting algorithm was, however, too slow for real-time implementation for visual servoing with quadrotor helicopters. The real-time implementation uses fixed thresholds for the colour-based object detector, rather than the thresholds learned by Adaboost.MRT.

1.5 Literature Review

This literature review begins with an overview of general visual servoing control schemes in Section 1.5.1. The general visual servoing review sets the stage for visual servoing applied to quadrotor helicopters in Section 1.5.2. Section 1.5.3 expands on the general methods used to control quadrotor helicopters. Some background literature on the tracking of moving targets with UAVs is presented in Section 1.5.4. The literature review concludes with some background on ensemble prediction and the Adaboost algorithm in Section 1.5.5, which can be used to learn the colour of the targets.

1.5.1 General Visual Servoing Control

In visual servoing, camera feedback is used to control the motion of a robotic system. A set of features is defined and the controller attempts to converge the features from an initial configuration s towards a desired feature configuration s*. Visual servoing combines concepts from image processing for target detection, computer vision for object tracking and control theory for control law generation. Comprehensive, introductory tutorials can be found in [27, 28, 57].

The problem of visual servoing can be broken down into two main tasks. The first task is the definition of image features that can be reliably detected and tracked throughout the workspace. Feature selection strategies depend on the control task and the control strategy. Commonly used features include point features [27], line features [31, 69] and image moments [7, 26, 99]. The available features are vast, and they are sometimes combined in a partitioned approach, in which certain features are used to control specific degrees of freedom. The selection of features affects the convergence properties and the behaviour of the visual servoing task. A major consideration of the feature selection is the robot-camera configuration.
Robot-camera configurations include eye-to-hand, where a camera observes the end-effector in an environment, and eye-in-hand, where the camera is linked to the end-effector. The quadrotor helicopter system with an on-board camera is an eye-in-hand camera configuration.

The second design task in visual servoing is the creation of a stable control strategy that will allow the tracked features to converge to the desired feature location. The available control strategies fall into three major groups: image-based visual servoing (IBVS), position-based visual servoing (PBVS) and a hybrid of the two strategies [27, 28]. Switching controllers that alternate between IBVS and PBVS in certain regions fall into the category of hybrid controllers [2, 30].

Visual Servoing Control Schemes

PBVS (sometimes referred to as 3D visual servoing) uses a calibrated camera to reconstruct the pose of the target object from image points [27, 57]. The target pose is reconstructed from the image via a target model. Robot motion towards the desired target configuration occurs in the world coordinate space. The reconstruction relies on the intrinsic camera properties, which are the focal length, principal point and the pixel scale factors along the image x and y directions. These intrinsic parameters are determined experimentally through a calibration and are subject to errors. Inaccurate calibration of the camera parameters will lead to errors in the pose estimation and therefore in the final robot position, meaning that s* will not be reached. Since robot motion occurs in the Cartesian space, it cannot be guaranteed that the object will always remain in the image frame. Once the features leave the image frame, the visual servoing task will have failed. A target object model is also required to estimate the camera location [27]. Since the motion occurs in the Cartesian space, the camera trajectory is acceptable, but the image feature trajectory may undergo complicated motions. Some of the problems with PBVS have been addressed in the literature. In [111] an optimization scheme is used to constrain the trajectory of the features in the image. Hybrid control schemes have been proposed to address these shortcomings by switching between PBVS and IBVS [2, 30].

In IBVS (sometimes referred to as 2D visual servoing) control occurs directly in the image plane, which removes access to the Cartesian velocities of the controlled system [31]. One criticism of IBVS is that the robot can sometimes undergo large or undesirable motions in the Cartesian space to achieve acceptable image space trajectories. The main advantage of IBVS is that it will converge to the desired configuration in the presence of camera calibration errors, as the motion occurs in the image space. Camera calibration errors affect the rate of convergence only. A property of IBVS is that it works well for small image motions, but breaks down when large motions are involved, especially those concerning rotation about the camera z-axis [31]. Strategies that address the shortcomings of IBVS include a change in the used image features [31] or the redefinition from Cartesian to polar image coordinates [32]. In addition, it is possible to use multiple types of features in a stacked or partitioned image Jacobian [32]. In order to control the path of the image features, the image Jacobian can be calculated at the initial or the desired camera position, or be calculated in real time throughout the motion.
The calculation of the Jacobian affects the camera path.

The shortcomings of the PBVS and IBVS schemes have led to the creation of hybrid control strategies that attempt to combine the strengths of both. One alternative is the switching controllers, which switch between IBVS and PBVS under specific conditions during the visual servoing. In [73] a PBVS controller is used that switches to IBVS when the features approach the image edge. A problem that switching schemes face is the introduction of a chattering back-and-forth motion, which occurs during the switching and has to be addressed [30, 111]. Another hybrid control strategy, the so-called 2.5D visual servoing, decouples the rotational and translational degrees of freedom of the camera. The control scheme introduces an image feature state s = (x, y, log(Z)), where x and y are the normalized coordinates of the target and Z is the distance to the target. This control scheme detects a target plane using planar homography. From this homography, the rotation between the current plane position and the target plane position can be extracted. In the 2.5D formulation the translational and rotational components of the camera velocity are decoupled. The control scheme is robust to camera calibration errors and has proven convergence for errors present in various parameters [63]. The target plane is estimated from image measurements, which makes this method sensitive to measurement noise in the feature location. If a non-planar target is used, the homography formulation requires a minimum of 8 target points.

Visual Servoing Feature Selection

The selection of the proper features in the visual servoing control scheme affects the convergence properties, convergence rate and the stability of the controller. The following features are mainly used in image-based and hybrid control strategies. The selection of the image feature is not a solved problem and depends on the visual servoing task and the system to be controlled.

The image Jacobian links the camera motion to a change in the image feature parameters like position, velocity and orientation. The image Jacobian is sometimes referred to as the interaction matrix and is often unknown, since estimation happens from a 3-dimensional scene that is projected onto a 2-dimensional image plane with some loss of information. The image Jacobian is required for the creation of a control strategy.

An ideal condition for feature selection is that the interaction matrix is decoupled for each degree of freedom of the camera. The feature should also meet the criteria of local and global stability, and invertibility of the Jacobian on the task space. For a given control law, the image feature should have an asymptotically stable convergence rate. The rate of convergence should be similar for each degree of freedom, to avoid the destabilization of one degree of freedom by fast motions. This section will discuss some analytical models of the image features and their Jacobians. Strategies to learn the image Jacobian from either known motions or online estimates [28] will not be addressed.

Let r and ṙ denote the camera position and velocity, respectively. Let s be a vector of image features and ṡ the rate of change of the image feature parameters. The dimension of r is m and the dimension of s is k. The image Jacobian J_v(r) relates how the camera motion (r and ṙ) affects the change in the image features (s and ṡ) according to:

\dot{s} = J_v(r)\,\dot{r}   (1.1)

The formal definition of the image Jacobian is

J_v = \begin{bmatrix} \frac{\partial s_1(r)}{\partial r_1} & \cdots & \frac{\partial s_1(r)}{\partial r_m} \\ \vdots & \ddots & \vdots \\ \frac{\partial s_k(r)}{\partial r_1} & \cdots & \frac{\partial s_k(r)}{\partial r_m} \end{bmatrix}.   (1.2)
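To make Equations 1.1 and 1.2 concrete, the Matlab sketch below stacks the classical point-feature interaction matrix (given below as Equation 1.3) for n normalized image points and computes the textbook pseudo-inverse velocity command v = −λJ⁺(s − s*). It is a generic illustration only; the gain λ, the depth estimates Z and the function name are assumptions, not the controller developed later in this thesis.

```matlab
% Generic IBVS sketch for normalized point features.
% s, sStar: 2n-by-1 stacks [x1; y1; x2; y2; ...]; Z: n-by-1 depth estimates; lambda: gain.
function v = ibvs_point_velocity(s, sStar, Z, lambda)
    n = numel(Z);
    J = zeros(2*n, 6);                          % stacked interaction matrix
    for i = 1:n
        x = s(2*i-1);
        y = s(2*i);
        J(2*i-1:2*i, :) = [-1/Z(i),       0,  x/Z(i),    x*y, -(1+x^2),  y;
                                 0, -1/Z(i),  y/Z(i),  1+y^2,     -x*y, -x];
    end
    e = s - sStar;                              % image feature error
    v = -lambda * pinv(J) * e;                  % camera velocity [vx vy vz wx wy wz]'
end
```

With three or more non-degenerate points the stacked matrix has full column rank, and pinv(J) plays the role of the pseudo-inverse of the stacked interaction matrix discussed below.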
The image Jacobian depends on the position of the camera r and changes throughout the visual servoing trajectory (see Equation 1.2). The Jacobian can be calculated at the initial or the desired camera position, or be continuously calculated throughout the motion. Some work suggests that good convergence properties can be achieved by averaging the Jacobian at the initial and desired positions [27]. In [64] it was shown that a linear combination of the current and the desired configuration can be used. The weights of the linear combination are not constrained to the range [0, 1], but can also reach negative values [64]. Each of these strategies will affect the path and convergence properties of the controller. Continuous calculation of the Jacobian may not always be feasible [27].

One of the most commonly used image features is the point feature [32, 60]. The image Jacobian for a point feature J_{v,p} in the Cartesian formulation is

J_{v,p} = \begin{bmatrix} -\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1+x^2) & y \\ 0 & -\frac{1}{Z} & \frac{y}{Z} & 1+y^2 & -xy & -x \end{bmatrix}   (1.3)

where x and y are the image coordinates of the feature point and Z is the depth to the point in Cartesian space. For a formulation of point features in polar coordinates, see [58].

To control a camera with 6 degrees of freedom, at least 3 points must be used, and their interaction matrices can be stacked such that

J = \begin{bmatrix} J_{v,p1} \\ J_{v,p2} \\ \vdots \\ J_{v,pn} \end{bmatrix} \in \mathbb{R}^{2n \times 6},   (1.4)

with n indicating the number of points used. It is desirable to use more than 3 points, due to the potential presence of local minima in the point feature formulation. The final Jacobian can be defined as the pseudo-inverse of the stacked interaction matrix. A commonly cited failure mode of the point feature is a rotation of π radians about the camera z-axis, which will cause a camera retreat to infinity. However, this problem can be solved by the redefinition of one, or some, of the features as lines [25, 31]. Point features are best used for small camera motions.

A robot operating in any man-made environment is likely to encounter straight lines. These lines can be used as a feature to control the camera. Depth estimation will be a challenge when the robot is operating in unknown environments. In [4] the authors used normalized Plücker coordinates to position the robot in relation to lines. However, depth control was only achieved with the addition of a laser pointer to estimate depth in the image. In [110], a depth-independent interaction matrix is proposed that works for point and line features and is combined with online estimation of parameters. However, the interaction matrix requires feedback about the position and velocity of the robot.

Image moments and moment invariants are properties of areas and shapes that can be calculated from a binary or intensity image. A moment-based formulation that decouples the camera degrees of freedom and that relates camera motion to parameter changes is found in [26, 99]. The image moment formulation has been shown to be robust to camera calibration errors in experiments.

Photometric moments for visual servoing have been proposed in [7], and the authors present formulations that decouple rotational and translational degrees of freedom. The visual servoing formulation allows positioning relative to complex planar targets without the need for image segmentation.

The visual servoing feature in this thesis will be image based, to avoid the problems associated with camera calibration errors. The tendency of image features to leave the image in position-based visual servoing could potentially result in tracking failure when a moving target is considered.

1.5.2 Visual Servoing of Unmanned Aerial Vehicles

Visual servoing control schemes were originally developed for robot arms, which are capable of achieving the desired velocities and rotational rates.
This coupled motion requires special consideration,either during the feature selection or the quadrotor control phase.Controllers for quadrotor IBVS fall into two major categories, the dynamic andkinematic controllers. The dynamic controller considers the entire UAV system,the forces acting on the system and the constrained motion in the control scheme.Dynamic controllers incorporate the image feature locations as part of the controlscheme and link the features to the control input to the servos. In [52], the au-thors created an image based back-stepping dynamic controller for underactuatedsystems. The author show that their proposed controller when used with sphericalprojection image features displays passivity-like properties. The implementation oftheir controller required information about the translational and rotational veloci-ties as well as an inertial direction, measured by an IMU. The work did not con-sider orientation around the yaw axis, for which a separate controller was required.Follow-up work with experimental results was presented in [50]. A dynamic con-troller that was based on a virtual spring approach to control underactuated systemsand that included a yaw controller was presented in [78]. A dynamic controller wasused in [67] for 2.5 D visual servoing and in [118] to control an unmanned blimp.The alternative to the dynamic are the kinematic controllers. While the dy-namic controllers allow for the estimation of the forces acting on the system, theresulting controllers will be too tightly linked to the specific system for which theywere designed. The general kinematic control structure consists of an inner controlloop, which handles low level servo control and an outer visual servoing controlloop. Stability of the low level controller is assumed, which operates at a higherfrequency and a higher gain than the higher level controller. The outer control loopreduces the image based error by providing the input to the low level controller.13The use of a kinematic controller that is not linked to the physical system allowsfor reuse of a visual servoing control strategy in other systems. Within the kine-matic controllers, some work assumes a small-angle approximation [48, 59, 65]for helicopter motion, which does not significantly affect performance. The au-thors constrain the UAV velocity to reduce the impact of velocity-attitude couplingon the feature error. These assumptions are often justified, as the UAV only reacheslarge angles during ?extreme? or high-performance manoeuvres.Some body of work does not feel that the small-angle assumption is justifiedand use compensation schemes for kinematic controllers to reduce the impact ofthe coupled motion and increase robustness of the controllers. In [60], an adaptiveweighted gain is used to keep the features in the image throughout the motion. Theweight applies to each degree of freedom and is reduced as the feature nears theimage edge. The method was validated in simulation. In [72], an image basedvisual control scheme for underactuated underwater vehicles was presented. Theauthor addresses the problem of image based visual servoing, by the implementa-tion of PID control on the image feature errors. The presented controller is able tocompensate for the robot motions in the image, by the proper tuning of the PID pa-rameters. In [102] the motion induced by the quadrotor helicopter is compensatedwith information from the IMU to track the position of the target. 
In [14, 15], var-ious control laws for translational IBVS with quadrotor helicopters are presentedand compared. The control laws select image features that are based on perspectiveprojection image moments. Proportional control, partitioned control, and a redefi-nition of the image features was considered to create a globally asymptotic controllaw with approximately equal convergence rates for each degree of freedom.Hybrid visual servoing control schemes for UAV control are difficult to classi-fy, since a lot of the control schemes use a combination of image based control withexternal position estimation. In [67], 2.5D visual servoing was used to position ahelicopter type UAV relative to a bridge for inspection tasks. In [24], the authorsused hybrid visual servoing with spherical projection features and implemented theorientation control from the 2.5D visual servoing control scheme first introducedin [63].14Target FeaturesTraditionally, the image features are defined in the perspective projection cor-responding to a pinhole camera model. For a typical camera with a flat imageplane, the projection of a point Pi = [Xi Yi Zi] will have normalized coordinates piaccording topi = f[XiZiYiZi1]T(1.5)where f is the focal length of the camera. This model is commonly used, but is notapplicable when fish-eye or catadioptric camera lenses are used.The spherical projection model has seen an increase in use for visual servoingapplications following the unifying theory for central panoramic systems, present-ed in [46]. The spherical projection model projects the point Pi onto a sphere ofunity radius (though any radius can be used). In the spherical model the point pi ispi =Pi?Pi?(1.6)The projection onto the unit sphere can be used with fish-eye and omni-directionalcameras, using equations presented in [46]. Some features defined in the spher-ical model are independent of camera rotation[24, 41, 51], which is a favourableproperty for systems with coupled translation-attitude dynamics.The following will present feature types that have been applied to UAV visualservoing. Many of these features can be defined in the perspective and sphericalprojections, with varying impact on their suitability for visual servoing. Point fea-tures are often used in visual servoing, which represent a point in 3D space andits corresponding projection on the image plane. Point features do not have to bephysical points. Sometimes they are the centroids of an image, a glyph, or a specialmarker [16, 89]. In [102] a colour histogram of the target is used, which ensuresproper detection with the point feature represented by the centroid. In [91] invari-ant moments are used for target detection of the point feature. In [60] a traditionalinteraction matrix for perspective projection IBVS point features is used. In [24]point features defined in the perspective and spherical projection are used and therelative performance of the IBVS controller is compared. A feature set that is verylikely to be encountered in man-made environments are the line feature. In [10] a15quadrotor UAV uses perspective image cues from straight lines to navigate indoorenvironments. Visual servoing formulations for line features can be found in [4]for perspective projection and in [69] for spherical projection models.The unnormalized first order spherical moments are a set of features that pre-serve the passivity like properties of the UAV dynamics and are invariant to rota-tional motion. 
They were presented as application for visual servoing of underac-tuated systems in [13, 50, 52]. The image moment q is defined asq =n?i=1pi (1.7)where pi is the spherical projection of the point feature and n is the number ofpoints. The error criterion ? can be defined as ? = q? q? with q? representingthe desired point configuration. In [14, 15] the authors present multiple controllaws that use and transform ? that preserves the passivity like properties as wellas attempts to create linear control, globally asymptotic stability and good equalconvergence rates on the image features.Not to be confused with the spherical moments are the image moments whichhave been used extensively for general visual servoing[26], visual servoing withUAV [78] and object recognition[56, 116]. The image moments are scalar quan-tities that capture the properties of an unknown distribution and are used as shapedescriptors. The image moment mi j is defined asmi j =?x?yxiy jI(x,y) (1.8)where I(x,y) is the intensity of the pixel at x and y. The intensity of the pixelis often treated as a binary image for simplification such that I(x,y) = {0,1}(asin [26]), but recent developments consider the intensity values of each pixel [7].Image moments have been extended from the perspective to the spherical projec-tion model[100, 101], allowing catadioptric cameras to take advantage of thesefeatures. A special form of the image moments are the so-called Hu?s moment in-variants, which have been first introduced in [55]. The seven moment invariants areshape properties that are invariant to scale and 2D rotation in the image, makingthem a popular feature for object recognition tasks [56, 91, 116].16Visual servoing with spherical targets has been shown to yield good resultswhen used with spherical projection [40]. In [41] a physical spherical target withsurface markings was used to control the 6 DOF of a robot arm. An alternate formconsiders a virtual sphere, described by two points in space. The centroid locationof the virtual sphere controls the x and y camera motion, while the radius of thesphere is a measure of the depth to the target and controls camera z motion. Thevirtual sphere formulation is decoupled from the rotational motion of the camera,has the same dynamics for all 3 DOF, is very robust to error in the estimate of thevirtual sphere radius R, and does not contain any singularities in its inverse as longas R 6= 0 or R 6=?. The virtual sphere formulation has been implemented in [24] tocontrol the translational degrees of freedom of a quadrotor helicopter test platform.A major challenge of image based visual servoing is the requirement of accu-rate depth measurements for many types of features. Most previous methods fordepth estimation with UAV-borne cameras use a target model such as the target?ssize [15, 115] or the ratio of the measured target area to the desired target area[16].In [59], the landing area for an autonomous helicopter consists of multiple con-centric rings, the geometry of which is known. By measuring the shape propertiesof the landing target in the image the UAV can estimate its position relative to thetarget.1.5.3 Quadrotor Helicopter Control MethodsThe quadrotor helicopter is a 6 degrees of freedom system with four inputs. Ithas strong coupling between its attitude and position dynamics. It is only possibleto stabilize 4 of the 6 degrees of freedom. Two groups of variables that are tradi-tionally controlled in literature are either the x,y,z positions and the heading ?[71]or the attitude angles ? ,? 
,? and the altitude z. The quadrotor system dynamics arenon-linear due to the attitude angles that affect the direction of the thrust.A commonly used helicopter controller architecture consists of an inner loopfor attitude control and an outer loop for position/velocity control [71, 112]. Theouter loop can take the form of a feedback linearization scheme. The inner/outerloop architecture is especially useful when a commercial helicopter system with apre-existing on-board controller, such as the ARDrone, is used. There is an often17an under-lying assumption that the inner loops are stable. In [16, 107, 108, 112]the inner controller dynamics were modelled by second order transfer functions.The transfer function modelled the dynamics between the controller input and theresulting attitude angle of the quadrotor helicopter. The transfer functions wereidentified in the time domain. In [112] the transfer function was identified using arepeated step input and the function was adjusted to fit the response data. In [107]the function was identified from a repeated step input of varying length. Controlinput design is an important part of system identification[35].An alternative to time domain identification is frequency domain identifica-tion, which is commonly used to identify UAV dynamics, for example in [38] forfixed-wing aircraft or in[68] for small helicopters. A step input is not as rich ininformation content as a random input or a frequency sweep. Frequency domainidentification uses a frequency sweep to excite the system for a range of frequen-cies. A Bode plot is generated from the frequency sweep response and the transferfunction is fit to the data. In this thesis, frequency based identification was used toidentify the system more accurately than from time domain data.General PID control is popular in quadrotor UAV control design [47]. PID con-trol has been implemented for attitude control[11], for hovering control [114] andin [54] as part of a controller designed for aggressive manoeuvring. PID control isalso used by the Arducopter open-source auto-pilot[5] for stabilization of the atti-tude during remote control. PID control is a highly effective control method, butrequires careful tuning of 3 parameters, which are only effective for a limited rangeof states. PID control may result in large errors during aggressive manoeuvring, ascertain aerodynamic factors that were not considered come in effect.Research groups have studied some of the aerodynamic effects that are presentduring aggressive manoeuvring. A spinning rotor that translates through the airinduces a moment on the UAV that arises when the leading and retreating propellerblade experience different lift. This lift variation is called blade flapping and iscreated when the tip velocity of the advancing and retreating blades differs duringfast translational motions [54]. These forces need to be accounted for, when adynamic controller is used. The ARDrone already considers blade flapping as partof the on-board controller[18].The field of optimal control aims to calculate an optimum input that minimizes18a performance index for a system. In the case of the linear quadratic regulator(LQR), the system consists of linear differential equations and the cost-functionto be minimized is quadratic [74]. Linear quadratic regulators (LQRs) have beenextensively used in quadrotor helicopter control. LQR techniques require a linearsystem model, which the quadrotor helicopter does not have. 
The linearizationaround hover conditions via a small angle approximation [70] has been determinedto not be accurate enough for fast motions [11], yet still present acceptable resultsfor slow motions. An alternate method presented in [11], linearizes the nonlin-ear dynamic equations around each state to increase the accuracy of the lineariza-tion. Feedback linearization methods have been used for nonlinear control andapplied to the UAV dynamics[71, 112]. In feedback linearization, the system istransformed into a linear system, by a change of variables in which an input is cal-culated that renders the system linear. In [71] the system is linearized according toan input-output linearization method, that transforms the system into a linear andcontrollable system. The formulations were tested in simulation only and requireda measurement of the derivative of the acceleration, which rendered the systemsensitive to noise. Input-output linearization was also considered in [3] as part of ahybrid controller.A feedback linearization scheme was proposed in [112] in which the dynamicequations were solved for the required attitude angles as a function of the desiredapplied forces Fx, Fy and Fz on the UAV body. This calculation provided the set-point to the attitude controller that would result in a force acting on the UAV bodyrequired to follow a specific trajectory. A similar controller was used in [1] withthe attitude angles as a function of the desired accelerations. These controllers didnot consider air resistance and wind disturbance in the derivation.Due to the noisy nature of the helicopter state measurements, a state estima-tor or observer is often implemented. An extension of LQR to a linear quadraticGaussian controller (LQG) by the addition of a state observer is beneficial to reduceuncertainty in noisy data. In [112] the authors implement a LQG with loop-transferrecovery (LTR) to control the position of a quadrotor helicopter during simultane-ous localization and mapping(SLAM) tasks.191.5.4 Moving Object Tracking via UAVThe tracking of a moving object is made difficult for fixed-wing UAV due tovelocity and manoeuvrability constraints. For fixed-wing UAV trajectory control,the algorithms consider a circular or wavy path when the target velocity is slowerthan the stall-speed of the UAV [83]. Although the fixed-wing and the helicoptertype UAV differ, they still share some constraints and challenges as they are bothunderactuated systems and sensitive to wind-disturbances. Their limited payloadcapacity limits the complexity of the tracking algorithm due to computational re-strictions. Camera motion blur induced by fast movements of the UAV or the targetwill be present in visual tracking of moving or stationary objects. Moving targettracking is mainly used for positioning relative to a moving target [48, 102] orlanding on a moving platform [90]. Target tracking requires a filter, such as theKalman filter to account for the noise in the relative motion of the UAV to thetarget [83, 90].In [102], the authors present a vision-based control algorithm to track a movingobject. The object is represented by geometric constraints like position, orientationand bounding box. The object is detected based on its colour histogram and trackedvia a particle filter. A PI controller was used for velocity control and a proportion-al controller is used for yaw control. The UAV contained an on-board controllercapable of ensuring stability of the system. 
In [87] the author implemented an ob-ject detection and tracking method on a graphics processing unit (GPU) to tracka moving target with a fixed-wing UAV. The tracking algorithm is the covariancetracking algorithm, which requires high computational power. The system iden-tification used an output error method with a Kalman filter for online parameteroptimization.In [48] the authors implemented vision based tracking for a UAV flying abovea target. Target detection used a Canny edge detector to detect a coloured squareon a moving target. Optical flow is used to estimate the velocity of the helicopter.The visual servoing scheme was position based. Switching controllers were imple-mented that switched between various flight modes that searched for the movingtarget at its last known location when the target left the image.201.5.5 Ensemble Prediction and AdaboostEnsemble prediction and learning is a widely used technique in machine learn-ing and artificial intelligence to improve prediction accuracy through the combi-nation of multiple predictors/learners [19]. Ensembles display better classifica-tion accuracy than each individual predictor if the ensemble consists of diversepredictors[53]. Predictors are said to be diverse if they have different error charac-teristics (e.g., mean, variance) on data points[36]. The rationale behind the use ofensembles acknowledges that it may be hard to find an individual optimal learnerand may be more practical to select a weighted combination of sub-optimal learn-ers. While the exact performance increase is difficult to determine, ensembles per-form better than the ensemble?s weakest learner. Published work reports increasedaccuracy of ensembles over single classifiers [53, 93].Bagging (bootstrap aggregating) and boosting are two ensemble methods thatuse multiple predictors to improve the accuracy for regression and classificationproblems. The size of the available training data is increased and diversified byrepeated sampling, with replacement, from the original dataset. In bagging, thesamples are chosen with equal weights. In boosting sampling occurs sequential-ly with weights that are updated based on the accuracy of previous predictions.Boosting methods are generally learned in series, whereas bagging can be readilyparallelized for faster learning. These methods work best on unstable learners suchas the neural network technique. There is some evidence suggesting that boostinghas increased accuracy over bagging[39]. However in [103], the authors presentexamples of boosting?s performance degradation in the presence of noise. The se-quential sampling of boosting techniques, allows the ensemble to focus on harderexamples, but may result in over-fitting. For an overview on boosting methods, see[85].Adaboost MethodsAdaboost, one of the first practical boosting methods, was introduced by Fre-und and Schapire[42, 44]. In Adaboost, weak learners are combined, ?boosted?,to improve ensemble accuracy and produce a ?strong? classifier. In classification,weak learners exhibit only a small correlation between the prediction and the true21value. Specifically, a weak learner has prediction accuracy of just over 50%, whichis only slightly better than random guessing. Weak learners are used in ensembles,because a complex function can be described by many general trends, which canresult in a strong approximation of that function. 
It is also difficult to select anindividual optimal learner.In Adaboost, the training data is iteratively sampled, with replacement, to trainthe weak learner. The predictive performance of the weak learner hypothesis isevaluated and the sampling distribution weights are updated, giving more weight tothe misclassified examples. The next learner samples from the updated distributionand the learning process is repeated. Training set diversity is introduced by trainingthe next weak learner on the previously misclassified, or ?hard?, examples. Thefinal output of Adaboost, is a weighted combination of all hypotheses, with theweights corresponding to each individual hypothesis? prediction error.Adaboost featured some practical advantages: its low implementation com-plexity and the introduction of only a single tuning parameter, the number of it-erations T . However, the accuracy that can be expected is dependent on the baselearning algorithm and the available data. Boosting is susceptible to noise and Ad-aboost will fail if there is insufficient data, or if the prediction hypothesis is tooweak[44].Following the introduction of Adaboost, the research effort shifted to the ex-tension of the algorithm to accommodate a wider set of problems and to address itsshort-comings. Adaboost focused on binary classification but has been extendedto multi-class classification (Adaboost.M1,Adaboost.M2 [43]). In [98], Sun et al.present cost-sensitive modifications that handle class-imbalances in pattern recog-nition, which are data with significant differences in prior probabilities.Boosting is prone to over-fitting and is therefore noise and outlier sensitive[23,66, 103]. Strategies that address this noise sensitivity generally fall into two ma-jor categories[23]: (1) revision of the weight updating mechanism to correspondto a modified loss function; (2) filtering of outliers or modification of the weightsthat are assigned to noisy data in the final hypothesis. Following the first strate-gy, ND-Adaboost[23] was introduced, which addresses boosting?s noise sensitivityand improves performance by a noise detection mechanism. Along the second s-trategy is the AveBoost algorithm [76, 77], in which the weighting distributions of22the current and previous iteration are averaged. This averaging reduced the impactof outliers on the final hypothesis by limiting the weight that these outliers can beassigned. The reduction of the sampling weight of outliers reduced the likelihoodthat learners will pick these values for the following iterations and thus reduced thechance of fitting to outliers.Adaboost for RegressionThe successful application of Adaboost for classification problems, led to theextension of Adaboost to regression problems. Adaboost?s modular nature allowsfor improvements of the regressors and for the adaptability to specific problems[85].For an overview of regression boosting methods see [86].Adaboost.RT (where RT is an abbreviation for regression threshold) is based onAdaboost.M1, which was modified for regression and was introduced by Soloma-tine and Shrestha[95, 97]. Adaboost.RT samples from the available training datausing sampling weights Dt. The base learning algorithm uses this sample to createa hypothesis that links the input to the output data. This hypothesis is applied toall the data to create predictions. The absolute relative error is the percent errorof the prediction compared to the true value and is calculated for each predictionand compared to an error threshold ? . The threshold ? 
is used to classify the predicted values as correct or incorrect. This threshold converts the regression task into a classification problem. The sampling weight of incorrectly predicted samples is increased for the next iteration. Adaboost.RT displayed better performance than a single artificial neural network (ANN), at the cost of introducing a new tuning parameter φ. Adaboost.RT does not have an early stopping criterion like similar regression boosting methods such as Adaboost.R2, so the number of iterations can be chosen depending on the required accuracy. Follow-up work compared performance to Adaboost.R2, bagging, and artificial neural networks, among others, and reported better results with Adaboost.RT [95]. Performance comparisons of Adaboost.RT with various other algorithms can be found in [95, 109].

Unlike other Adaboost-based regression techniques, Adaboost.RT requires the selection of an additional tuning parameter φ [95]. The parameter φ affects the accuracy of the ensemble, as it influences the sampling weights of the instances and thereby modifies the diversity of the training data. A small threshold classifies the majority of predicted values as incorrect and results in a near-uniform sampling distribution, which resembles bagging. A threshold that is set too large results in fitting to extreme outliers. An adaptive work-around has been proposed in [104, 105] by recursive adjustment of φ. However, the equations presented in [104] allow the value of φ to become negative, which is physically meaningless and forces the algorithm to fail. From [104], the value of φ at iteration t+1 is

   φ_{t+1} = φ_t (1 − λ),  while e_t < e_{t−1}
   φ_{t+1} = φ_t (1 + λ),  while e_t > e_{t−1}    (1.9)

where e_t is the root mean squared error (RMSE) at iteration t and, by definition, e_t ≥ 0. The value λ is

   λ = r | (e_t − e_{t−1}) / e_t |    (1.10)

where r is a tuning constant. The term (1 − λ) becomes negative, leading to a negative φ, if e_{t−1} > 2 e_t (for r = 1). This problem can be managed by selecting r sufficiently small to keep λ < 1, but reducing the magnitude of r also limits the algorithm's ability to adapt the value of φ; the method merely shifts the tuning burden from φ to r. The adaptive method does not fail as long as e_t / r > |e_t − e_{t−1}| and e_t ≠ 0.

An additional limitation of Adaboost.RT is the definition of the misclassification function Er_t(i) as the absolute relative error. This error function is a direct adaptation of the Adaboost.M1 algorithm and classifies a sample prediction based on the percent error of the prediction f_t(x_i) compared to the true value y_i after each learning iteration. The absolute relative error is defined as

   Er_t(i) = ARE_t(i) = | (f_t(x_i) − y_i) / y_i |.    (1.11)

This definition contains a singularity when the true value y_i = 0. Values near zero receive an artificially high absolute relative error, are always classified as incorrect, and their sampling weight increases. Values near zero are therefore selected more and more frequently, and the accuracy of the values away from the zero-crossing decreases. The performance of the entire ensemble is degraded whenever the output variable contains a zero-crossing. A short numerical sketch of the classification step is given below, and the issue is illustrated in Figure 1.2 with training data sampled from the noise-less function y = x.
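To make this classification step concrete, the MATLAB sketch below runs one simplified Adaboost.RT iteration on a function with a zero-crossing. It is an illustration only, not the implementation used in this thesis: the function y = x³, the degree-1 polynomial standing in for the ANN base learner, and the values of the threshold φ and power coefficient n are arbitrary choices made for the example.

    % One simplified Adaboost.RT iteration (illustrative sketch only).
    % A degree-1 polynomial stands in for the ANN base learner.
    x   = linspace(-1, 1, 200)';            % training inputs
    y   = x.^3;                             % outputs with a zero-crossing at x = 0
    m   = numel(y);
    D   = ones(m, 1) / m;                   % initial sampling weights
    phi = 0.2;                              % error threshold
    n   = 2;                                % power coefficient of the weight update

    c   = cumsum(D);                        % sample with replacement according to D
    u   = rand(m, 1) * c(end);              % scaling guards against round-off
    idx = arrayfun(@(v) find(v <= c, 1, 'first'), u);
    p   = polyfit(x(idx), y(idx), 1);       % weak hypothesis f_t
    f   = polyval(p, x);                    % predictions on all m instances

    ARE   = abs((f - y) ./ y);              % Equation 1.11: blows up near y = 0
    wrong = ARE > phi;                      % threshold turns the regression result
                                            % into a correct/incorrect classification
    eps_t  = sum(D(wrong));                 % weighted misclassification rate
    beta_t = eps_t^n;                       % learner weight term

    D(~wrong) = D(~wrong) * beta_t;         % shrink weights of correctly predicted samples
    D = D / sum(D);                         % renormalize: samples near y = 0 now
                                            % dominate the next sampling distribution

Running the sketch shows the effect described above: the samples closest to the zero-crossing are flagged as incorrect regardless of how small their absolute error is, and they accumulate most of the sampling weight for the next iteration.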
Despite the lack of noise, the overall accuracy of the algorithm is degraded. Training instances near y = 0 are repeatedly sampled and identified to a high accuracy, but the values away from the zero-crossing experience a very high relative error.

Figure 1.2: Performance comparison of Adaboost.RT and Adaboost.MRT for estimation of the noise-less function y = x. The accuracy of Adaboost.RT is negatively affected by the singularity in the misclassification function. (Adaboost.RT: RMSE = 1.65×10⁻⁴; Adaboost.MRT: RMSE = 2.94×10⁻⁵.)

Some possible remedies include a data transformation by the addition of a constant to the output data [95], or the use of the absolute error misclassification function defined by

   Er_t(i) = | f_t(x_i) − y_i |.    (1.12)

The absolute error function makes the determination of a threshold φ difficult, since the threshold becomes directly linked to the numerical scale of the output data. Absolute-error-based misclassification functions attempt to increase accuracy at each sample, but given noisy training data, the general shape of the function is more important than per-sample accuracy if over-fitting to noise is to be avoided.

Despite its shortcomings, Adaboost.RT is shown in the literature to provide good results, with a few papers detailing applications to real problems: molten steel temperature prediction [104, 105], music emotion classification with a regression approach [117], software reliability modelling [33], and intensity estimation of facial action units [92]. These methods were used to estimate a single output variable only. To the best of my knowledge, the Adaboost algorithm has not previously been used for multivariate output regression.

1.6 Contributions

1.6.1 Quadrotor Helicopter Visual Servoing

The contribution of this thesis is a control scheme for a quadrotor helicopter UAV, capable of vision-based positioning of the UAV relative to a stationary or slow-moving target. The ARDrone helicopter is used and controlled via a Matlab program that connects to the official ARDrone API. Position feedback is received from the forward-facing camera on the quadrotor helicopter, and the controller is validated in experiments. The main contribution is the selection and combination of favourable methods from the fields of control and visual servoing to achieve the task of visual servoing with a quadrotor helicopter. The methods were selected to make the visual servoing control scheme modular, so that improvement or substitution of any subsystem is possible.

The moving target consists of two differently coloured markers, which are detected with colour and shape-based object detection methods. The object detection method detects the target in a laboratory environment under varying lighting conditions; it is assumed that the target object is not occluded. The visual servoing control scheme uses the virtual sphere feature, which was first introduced in [51] and implemented on a quadrotor test stand in [24]. This thesis implements the virtual sphere formulation for visual servoing with a free-flying quadrotor helicopter while tracking a moving target. Moving target compensation has been added as an additional contribution, in the form of a proportional and integral controller applied to the image feature error, an approach previously considered in [72] for underwater visual servoing.
Building on previous work, a forgetting factor is implemented26on the integral gain to avoid integral wind-up.The kinematic controller takes advantage of the on-board attitude controllerand calculates the attitude input into the ARDrone that is necessary to achieve thedesired velocity. System identification is performed to analyze the on-board con-troller dynamics. This thesis improves on previous work on quadrotor helicopteridentification by the implementation of a frequency domain system identificationtechnique. Frequency based system identification methods capture the dynamicsof the system better than a step input, which was previously used in [107, 112].The frequency domain identification aims to improve the accuracy of the identifiedtransfer function across a wide range of frequencies. The system input in this thesiswas a linearly increasing frequency sweep.A feedback linearization scheme linearizes the nonlinear system equations, bythe calculation of the necessary orientation angles required to achieve the desiredforces on the UAV body. The feedback linearization scheme was previously pre-sented in [1, 112] for position control of a quadrotor helicopter. In this thesis, thescheme is modified for heading independent velocity control and implemented forvisual servoing. A Kalman filter is used to identify the system states and a linearquadratic regulator calculates the gain required to achieve the desired velocities.1.6.2 Adaboost.MRTAdaboost.MRT (Adaboost Multivariate Regression Threshold), a multivari-ate regression algorithm based on Adaboost.RT is proposed in this thesis. Ad-aboost.MRT was created to assist in colour-based object recognition, but can beused for other machine learning tasks. Adaboost.MRT takes in a set of labelledtraining instances S = {(xn,yn),n = 1,2, . . . ,m}, where yn is an R-dimensionalvector of output data and xn is a multi-dimensional vector of input data. The re-gression model ft(x) is built over T learning iterations, with training data generatedby iterative sampling from the m instances.The first major contribution is the improvement of the misclassification func-tion of Adaboost.RT. The singularity in the absolute relative error function of theoriginal algorithm potentially degrades the accuracy of the ensemble. In the pro-posed formulation, a variance-scaled error function is selected that allows for zero-27crossing in the output data without the need for output data transformation. Thevariance-scaled error function always classifies a portion of the predictions as in-correct at each iteration which also avoids fitting to outliers.The second major contribution is the extension of Adaboost.RT to accommo-date multivariate output. The proposed method modifies the sampling distributioninto Dy,t an R?m vector which maintains the weight distribution of all R outputvariables for each of the m instances. This new distribution is called the output er-ror distribution and is a measure of the prediction error at each output variable. Ateach iteration, the instance sampling weight Dt is the average of the weights of theR output variables within the instance. Similar to AveBoost [76, 77], this averagingreduces the impact that an outlier within the R output variables has on the weightof the training instance. 
This strategy is aimed at reducing the algorithm?s noiseand outlier sensitivity when dealing with multivariate output data.Adaboost.MRT addresses Adaboost?s noise and outlier sensitivity in two ways:(1) by the redefinition of the misclassification function (Equation 5.1) and by (2)averaging the sampling distribution of the output variables for multivariate output(Equation 5.5). The variance-scaled misclassification classifies the least accuratesamples as incorrect at each iteration based on ? . The misclassification of the leastcorrect predictions generates training set diversity in successive training iterations.The averaging of the sampling distribution across the output variables, for multi-variate output, prevents a single outlier in y from overly increasing the value ofa particular instance. This averaging limits the impact of outliers in one outputvariable on the entire sample distribution.Adaboost.MRT was tested against Adaboost.RT and their base learning algo-rithm, the artificial neural network, on six commonly used datasets. The Friedman#1, #2 and #3 is a commonly used data-set for regression. The Gabor and the sinc|x|function were used due to their zero-crossing in the output data. The yacht hydro-dynamics dataset from the UCI Machine Learning database was used to assess theaccuracy on real data. The noise sensitivity of Adaboost.RT and Adaboost.MRTwas compared in an experiment that varied the noise-to-signal magnitude ratio forthe Friedman #3 dataset and the sinc|x| function. A nonlinear multivariate out-put function describing a cone in 3D space was used to assess the performance ofAdaboost.MRT for multivariate output.28Chapter 2Visual Servoing Control SchemeThis chapter details the image based visual servoing (IBVS) control scheme,which covers the feature selection, the control law and the feature detection meth-ods. The chapter begins with an explanation of unified model for central imagingsensors in Section 2.1. The features use the virtual sphere formulation to create animage based error and are explained in Section 2.2. The colour and shape basedobject detection method will be presented in Section 2.3. Simulation results of thevisual servoing controller are shown in Section 2.4. A strategy to compensate forthe feature error induced by the moving target is presented in Section 2.5.2.1 Camera ModelThe feature selection for the visual servoing scheme begins with the selectionof the camera model. The obvious selection criteria for the camera model is thecamera hardware used, whether it is a regular or catadioptric camera. Catadioptriccameras combine lenses and spherical mirrors to achieve hemispherical and om-nidirectional fields of view. These lenses required different camera models, untilthe unified theory for central imaging sensors was proposed in [46]. The omnidi-rectional and perspective projection models are covered by this theory as they arecentral imaging systems. The unifying theory states that a central imaging systemcan be modelled by a projection on a sphere followed by a projection to a planefrom a point.29The unifying theory allows for transformation between the perspective andspherical projection and the design of schemes that were applicable to all camerasthat obey the unique viewpoint requirement. 
The spherical projection model allows the control scheme to generalize to regular and omnidirectional cameras. It has been shown that spherical projection displays good decoupling properties between the degrees of freedom and exhibits passivity-like properties [52]. The proposed method was implemented on a regular camera sensor, but the spherical projection model was used, which allows the visual servoing control scheme to be extended to any perspective or catadioptric sensor that obeys the unified camera model.

This section presents the pinhole camera model in Section 2.1.1, followed by the calibration of the camera hardware on the quadrotor UAV in Section 2.1.2. The transformation from the perspective camera model to the spherical projection model is presented in Section 2.1.3.

2.1.1 Perspective Projection

Perspective projection uses the pinhole camera model, which is commonly used in IBVS and robotics applications due to its simplicity. It models a simple CCD camera as a single lens and is a special case of the unified projection model presented in [46]. Perspective projection uses a transformation to form an image point x_i from a point X in 3D space. Figure 2.1 shows the axis convention for the pinhole camera model: the image plane is perpendicular to the optical axis, which aligns with the camera z-axis.

The image generation for the pinhole model is a simple perspective projection. Given a 3D point X = [X Y Z] in the camera coordinate frame F_c, the projection onto a plane is

   x_n = (f / Z) [X Y Z]^T    (2.1)

where f is the focal length of the camera. Let x_n = [x y 1] be the normalized image coordinates of a point in the camera frame, representing the projection of the point onto the plane z = 1. The mapping between the image point and the normalized coordinate x_n is

   x_i = K x_n    (2.2)

where K is the camera projection matrix given by Equation 2.3.

Figure 2.1: Standard pinhole camera model.

The interaction matrix formulation in IBVS for a point coordinate in perspective projection relies on the depth information at each point for proper convergence and suffers from strong coupling between the rotational and translational degrees of freedom. The main drawback of perspective projection is that it only applies to regular cameras with a limited field of view.

2.1.2 Camera Calibration

The main purpose of camera calibration is to identify the relation between a point in 3D space and its projection on the image plane. The camera intrinsic parameters are used in the transformation from camera-relative coordinates to image-space coordinates. Camera calibration estimates the intrinsic parameters, consisting of the focal lengths f_x and f_y in the x and y directions, the principal point coordinates u_o and v_o, which denote the centre of the camera lens in the image, and the skew coefficient α_c, which denotes the angle between the sensor x and y axes. The intrinsic camera calibration matrix K can be written in matrix form as

   K = [ f_x   α_c·f_x   u_o
         0     f_y       v_o
         0     0         1   ]    (2.3)

The matrix K relates the image coordinates x_i to the normalized coordinates in the camera frame x_n by

   x_i = K x_n.    (2.4)
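As a brief numerical illustration of Equations 2.3 and 2.4, the MATLAB sketch below builds K from the ARDrone intrinsic parameters reported in Table 2.1 (Section 2.1.2) and maps an example normalized point to pixel coordinates and back. The example point is arbitrary, and the sketch is not part of the controller implementation.

    % Intrinsic matrix of the ARDrone front camera (values from Table 2.1).
    fx = 197.7;  fy = 197.7;                % focal parameters
    uo = 159.0;  vo = 116.7;                % principal point (320x240 image)
    ac = 0;                                 % skew coefficient

    K = [fx  ac*fx  uo;
         0   fy     vo;
         0   0      1 ];

    xn = [0.10; -0.05; 1];                  % example normalized point [x y 1]'
    xi = K * xn;                            % pixel coordinates, Equation 2.4

    xn_back = K \ xi;                       % inverse mapping; K is invertible,
                                            % so this recovers the normalized point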
The matrix K is an upper triangular matrix and is invertible, as all of its diagonal entries are greater than zero.

Camera calibration was performed using the MATLAB Camera Calibration Toolbox¹, which is based on the procedure by [119]. The intrinsic parameters of the forward-facing camera on the ARDrone UAV were identified. During calibration, the camera was stationary and captured multiple pictures of a checker-board pattern, which was moved and rotated between image captures (see Figure 2.2 for sample images).

Figure 2.2: Sample calibration images used in the determination of the camera intrinsic parameters.

It was assumed that distortions due to the lens curvature were negligible in the normalization of the coordinates (Equation 2.4). The distortion estimation could have been skewed by the non-rigidity of the large checker-board pattern, which was printed on cardboard. The identified intrinsic parameters for an image resolution of 320×240 pixels are shown in Table 2.1.

¹ http://www.vision.caltech.edu/bouguetj/calib_doc/, last accessed 2013

Table 2.1: Identified intrinsic parameters of the on-board UAV camera.

   Identified Parameter        Value
   Focal parameter, f_x        197.7
   Focal parameter, f_y        197.7
   Principal point, u_o        159.0
   Principal point, v_o        116.7
   Skew coefficient, α_c       0

2.1.3 Spherical Projection

This section details the transformation from the perspective projection to the spherical projection and vice versa. Central imaging systems can be modelled by two consecutive transformations: a projection onto a sphere followed by a projection onto a plane. The unified model was presented in [46] and has been popular in visual servoing due to its broad applicability to large field-of-view cameras. The following procedure to convert from perspective projection to spherical projection has been compiled from [46, 51, 100].

A diagram of the image formation for a central projection system is shown in Figure 2.3. The camera coordinate frame F_c and the frame F_m attached to the spherical mirror at C_m are separated by ξ along the optical (Z) axis. For simple perspective sensors ξ = 0. The image generation procedure for a central imaging system from a point X in 3D space follows.

Figure 2.3: Image generation for a central projection camera.

1. The first step is the projection of the 3D world point X = [X Y Z]^T in F_m onto the unit sphere S_p(C_m, 1) via

   X_m = X / ||X|| = (1 / √(X² + Y² + Z²)) [X Y Z]^T.    (2.5)

2. The point X_m on the surface of S_p is projected onto the plane π_i with equation Z = 1 − ξ:

   x_n = f(X) = [ X/(Z + ξρ)   Y/(Z + ξρ)   1 ]^T    (2.6)

   where ρ = ||X|| = √(X² + Y² + Z²) and x_n is the normalized image coordinate. Alternatively to Equation 2.6, this step can be modelled by two successive transformations: a translation of the point X_m from F_m to F_c by ξ, followed by a perspective projection of the point onto the plane Z = 1.

3. The last step is a perspective projection from the plane π_i to the image plane by the perspective transformation

   x_i = K x_n    (2.7)

   where K is the matrix of intrinsic camera parameters defined by Equation 2.3.

The camera intrinsic parameters for K and ξ can be determined by camera calibration (see Section 2.1.2 for details). The variable ξ is zero for simple perspective camera sensors. The inverse mapping from an image point to the spherical projection is of interest for visual servoing and is achieved by inverting steps 2 and 3. The inverse mapping is shown here for clarity:
1. Normalize the image coordinates by the inverse transformation of K from the image frame to the camera frame:

   x_n = [x y 1] = K⁻¹ x_i    (2.8)

2. The point located on the unit sphere S_p is obtained by inverting the non-linear projection of Equation 2.6:

   X_m = f⁻¹(x_n) = γ [ x   y   1 − ξ/γ ]^T    (2.9)

   where γ = (ξ + √(1 + (1 − ξ²)(x² + y²))) / (x² + y² + 1).

For perspective sensors (ξ = 0), step 2 reduces to a vector normalization of the normalized coordinates x_n according to x_n / ||x_n||.

2.2 Visual Servoing Feature Selection

The selection of visual features affects the convergence properties, the stability, and the coupling between the degrees of freedom of the visual servoing scheme. The literature on feature selection is vast, and it is recommended to select the features based on the intended application. Ideally, the features are decoupled for each degree of freedom they control, linear in the task space, detectable throughout the configuration space, and free of local minima. The same type of feature does not have to be selected for every degree of freedom, and it is possible to partition them to achieve improved performance [75]. The challenge with dissimilar features is the differing sensitivity of the individual degrees of freedom to changes; similar convergence rates across the degrees of freedom are desired. Dissimilar convergence rates may allow some DOF to converge faster and possibly de-stabilize others, or allow features to leave the image. The visual servoing performance of the entire system is limited by the slowest degree of freedom [75].

The quadrotor helicopter is a 6 degree-of-freedom system with four inputs, which allows a controller to stabilize 4 DOF. For the proposed visual servoing scheme, the translational x, y, z velocities and the yaw angle ψ were selected as the DOF to be stabilized. This thesis focussed on the active control of the translational DOF, while the yaw angle was kept constant. A simple visual servoing strategy is to select two points in 3D space, from which the interaction matrix for point features (shown in Equation 1.3) can be calculated. The selection of two points should allow for the control of 4 DOF. However, this formulation may be susceptible to local minima and has strong coupling among all 4 DOF. Local minima may prevent the controller from reaching the desired position, and coupling between degrees of freedom allows errors in one DOF to influence the performance of the others. The controller based on point features and the interaction matrix in Equation 1.3 is only stable in the vicinity of the desired feature position and may not be suitable for large motions.

The controller selected in this thesis uses the spherical projection of two points, which are used to create a virtual sphere. The virtual sphere approach yields features that control the translational degrees of freedom with favourable properties, such as rotation invariance. The centre location of the virtual sphere is used to control the camera x and y position, and the relative size of the sphere is used to control the camera z position. Quadrotor helicopter UAV have to rotate in order to translate, which makes rotation-invariant features a sensible choice for controlling the translational degrees of freedom. This controller has previously been implemented on an experimental platform in [24].

2.2.1 Features for Translational Degrees of Freedom

This section describes the central projection of a virtual sphere created by two points P1 and P2 in 3D space.
The presented procedure is based on [40, 51] and expands on the derivation where needed for clarity. The aim is to position a camera relative to two world coordinate points, using only the projection of these points onto the image plane. Ideally, these features are independent of the camera rotation. Spherical projection is used because the projection of a sphere onto a sphere is a circle, regardless of the orientation of the spheres.

Let P1 = [X1 Y1 Z1] and P2 = [X2 Y2 Z2] be two points in 3D space relative to C_m in F_m, with projections onto the unit sphere denoted by S1 and S2, respectively. The projections S1 and S2 can be obtained from the image points p1 and p2 corresponding to the world points P1 and P2 using the procedure outlined in Section 2.1.3.

Figure 2.4: Central projection of a virtual sphere S_v constructed from 2 points in 3D space onto the spherical lens S_p.

Consider a virtual sphere S_v(P1, R), centred at P1 with radius R (R ∈ ℝ⁺). The virtual sphere radius R is constrained by the unit vector from the unit-sphere centre C_m to the point P2. The contour Γ of S_v visible from S_p forms a circle c that is projected onto the unit sphere S_p. The circle c lies on a plane π_c and has radius r. The plane π_c is defined by its unit normal vector n = [n_x n_y n_z]^T and is not tangent to the sphere S_p. The vector n is parallel to the unit vector pointing from C_m to P1 and can be rewritten as

   n = [n_x n_y n_z]^T = [X_S1 Y_S1 Z_S1]^T    (2.10)

since S1 lies on the unit sphere and ||S1|| = 1. Any point in c with coordinates [X Y Z]^T satisfies the following constraints

   Γ : { X² + Y² + Z² = 1
         X_S1·X + Y_S1·Y + Z_S1·Z − K_O = 0    (2.11)

where K_O² = X_P1² + Y_P1² + Z_P1² − R². The first condition in Equation 2.11 shows that the contour Γ is a circle on the unit sphere. From [40], a relation for the radius r of the small circle c can be found via similar triangles (see Figure 2.5)

   r / ||S2|| = R / ||P1||    (2.12)

   r / 1 = R / √(X_P1² + Y_P1² + Z_P1²)    (2.13)

The magnitude of P1 is not known, but the size of the small circle r is linked to the angle between the vectors S1 and S2 (see Figure 2.5). Using geometry, a formulation for the angle θ between the two vectors is obtained via

   θ = cos⁻¹(S1^T S2)    (2.14)

and, since S1 is perpendicular to c,

   r / ||S2|| = sin(θ).    (2.15)

Figure 2.5: Convention for the calculation of the small circle radius r.

Using the identity sin(cos⁻¹(x)) = √(1 − x²) and the fact that ||S2|| = 1, we get

   r = √(1 − (S1^T S2)²).    (2.16)

The centre of the circle lies on the line connecting C_m and S1 and indicates the position of the two target points relative to the camera. The radius r of the projected circle is a measure of the depth of the two target points P1 and P2 from the camera. This circle radius r is independent of camera rotation and can be used for visual servoing control along the camera optical axis.

2.2.2 Translation Visual Servoing

This section uses the features derived in Section 2.2.1 to control the velocity of the system and drive the camera to the desired location. A feature vector is defined as

   s = [s1]    (2.17)

where s1 is a feature vector that contains information about the relative position of the target to the UAV, using purely image-based information. In [40, 51] the authors have shown that a definition of the feature vector related to a virtual sphere projection yields good decoupling properties in the three degrees of freedom.
Let

   s1 = (1/r) S1 = (1/r) [X_S1 Y_S1 Z_S1]    (2.18)

be a vector that contains information related to the projection of the virtual sphere onto the camera frame. An error vector e is defined as

   e = s − s*    (2.19)

where s* is the desired feature vector. The desired feature vector s* can be determined from a system model or by extracting measurements from an image taken with the camera at the desired position. It is common practice to design the error reduction as an exponential decrease of the form

   ė = −λ e    (2.20)
   ė = −λ (s − s*)    (2.21)

where ė = ṡ − ṡ* and λ is a proportional gain that can be used to adjust the convergence rate. The desired location of the features in the image does not change with time, therefore the variation of the desired feature vector s* is zero and the rate of change of the error is ė = ṡ. The variation of the features can be linked to the camera velocity V_c = [v, ω]^T = [v_x v_y v_z ω_x ω_y ω_z]^T by

   ṡ = L_s V_c    (2.22)

where L_s is the interaction matrix, or image Jacobian, which links the camera velocity to the image feature variations. Using Equation 2.22 and Equation 2.21, we get a control law for the camera velocity that results in an asymptotic decrease of the error e:

   V_c = −λ L̂_s⁻¹ (s − s*)    (2.23)

where L̂_s⁻¹ is the estimated inverse of the interaction matrix L_s.

The next step is the derivation of the interaction matrix L_s that links the motion of the camera to the changes in ṡ1. The time derivative of Equation 2.18 is

   ṡ1 = (r Ṡ1 − S1 ṙ) / r².    (2.24)

It was assumed that dR/dt = Ṙ = 0, as the radius does not change with time, only with position. The time derivative of the small circle radius r can be calculated from Equation 2.13:

   ṙ = (d_P1 Ṙ − R ḋ_P1) / d_P1²    (2.25)
     = −(R² / d_P1²) (ḋ_P1 / R)    (2.26)

where d_P1 = √(P1^T P1) = √(X_P1² + Y_P1² + Z_P1²). The identity R² / d_P1² = r² (from Equation 2.13) can be used to get

   ṙ = −(r²/R) d/dt( √(P1^T P1) )    (2.27)
     = −(r²/R) (P1^T / ||P1||) Ṗ1    (2.28)
     = −(r²/R) S1^T Ṗ1.    (2.29)

The point P1 is defined in the camera coordinate frame F_m, so its translational motion is directly linked to the motion of the camera. The term Ṗ1 = [−I3  [P1]_×] V_c is the motion of the point P1 relative to the spherical lens centre C_m, where [P1]_× is the skew-symmetric matrix associated with P1. Since the vectors P1 and S1 are parallel, S1^T [P1]_× = 0, and Equation 2.29 can be rewritten as

   ṙ = (r²/R) S1^T [I3  [P1]_×] V_c    (2.30)

   ṙ = [ (r²/R) S1^T   0_{1×3} ] V_c    (2.31)

The implication of Equation 2.31 is that the radius r of the small circle does not change with camera rotations and is affected by translations only. This is a favourable property in visual servoing with underactuated systems. From [40], the time variation of a point S1 on the unit sphere is

   Ṡ1 = [ (r/R)(S1 S1^T − I3)   [S1]_× ] V_c.    (2.32)

The equation linking camera motion to the variation of the feature vector s1 is defined by Equation 2.24. Using Equation 2.32 and Equation 2.31, Equation 2.24 reduces to

   ṡ1 = L_S1 V_c    (2.33)

where L_S1 = [L_S1,v  L_S1,ω] is split into two components: the variation of s1 due to the camera translation, L_S1,v, and the variation due to the camera rotation, L_S1,ω. The two components are

   L_S1,v = −(1/R) I3    (2.34)

   L_S1,ω = (1/r) [  0       −Z_S1    Y_S1
                     Z_S1     0      −X_S1
                    −Y_S1     X_S1    0    ]    (2.35)

The only unknown in Equation 2.34 and Equation 2.35 is the radius R of the virtual sphere. The radius can be estimated, and the resulting system has the same dynamics for all 3 translational DOF. Any error in the estimate of the radius R changes the convergence rate of the system, as it appears as an additional gain on the matrix. A minimal sketch of the resulting translational control law is given below.
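The following MATLAB sketch collects Equations 2.16, 2.18, 2.19, 2.23 and 2.34 into a single translational control step, from two detected pixel locations to a camera velocity command. It assumes a perspective sensor (ξ = 0) and uses only the translational interaction matrix; the pixel locations, the desired feature vector, the gain and the radius estimate shown here are placeholder values, not the experimental settings.

    % Translational IBVS step using the virtual sphere features (sketch).
    K      = [197.7 0 159.0; 0 197.7 116.7; 0 0 1];   % intrinsics from Table 2.1
    p1     = [180; 120; 1];   p2 = [205; 118; 1];     % detected markers [u v 1]' (placeholders)
    s_des  = [0; 0; 8];                               % desired feature vector (placeholder)
    lambda = 0.35;                                    % convergence gain
    R_hat  = 0.2;                                     % virtual sphere radius estimate [m]

    S1 = K \ p1;   S1 = S1 / norm(S1);                % spherical projection of the points
    S2 = K \ p2;   S2 = S2 / norm(S2);                % (perspective sensor, xi = 0)

    r  = sqrt(1 - (S1' * S2)^2);                      % small circle radius, Equation 2.16
    s1 = S1 / r;                                      % feature vector, Equation 2.18
    e  = s1 - s_des;                                  % feature error, Equation 2.19

    Ls = -(1/R_hat) * eye(3);                         % translational interaction matrix (2.34)
    Vc = -lambda * (Ls \ e);                          % camera velocity, Equation 2.23
                                                      % (equivalent to lambda * R_hat * e)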
Experimental results have shown that the system isquite robust to significant errors in the estimate of R [40, 41]. This thesis uses onlythe translational degrees of freedom and the interaction matrix becomes Ls = LS1,v.42(a) 2 Dimensional Static Target. (b) 3 Dimensional Moving Target.Figure 2.6: Targets for visual servoing of a quadrotor helicopter UAV.2.3 Visual Servoing Feature DetectionThis section presents the object detection and tracking method used in thisthesis. The features selected for visual servoing require a minimum of two pointsin 3D space. Additional points increase system robustness to estimation errors,noise in image measurements and avoidance of local minimas. In this thesis, a2-dimensional and a 3-dimensional target were chosen for the static and movingtarget experiments (see Figure 2.6). Both targets were distinctly coloured to theenvironment consisting of a highly saturated red and blue colour. The 2-D targetconsisted of 2 circles and the 3-D target of two spheres, which would make targetdetection independent of camera orientation. The feature detection consisted ofcolour and shape based detection methods.The first step of object detection is to collect a series of potential candidateobjects for the target in the image. The collection methods are vast and includecolour-based thresholding, texture analysis, and histogram based methods, to namea few. Following the initial detection, a series of filtering steps is implemented toremove weak candidates from the pool of potential targets. Alternatively, a measureof the objects similarity to the target can be defined and used to select the optimaltarget. This method has been found to be effective, however it is most often usedfor the detection of specific objects under restricted conditions. This thesis usescolour-based object detection with shape based filtering. Section 2.3.1 elaborateson the role of spaces in object recognition. The full object detection procedure isshown in Section 2.3.2.432.3.1 Colour Space SelectionThe choice of colour space plays an important role in colour-based object de-tection. The colour of an object varies due many factors, which include changinglighting condition, external and internal shadows and reflectance from other object-s. Proper choice of colour space can mitigate some of the problems and increasedetection robustness under adverse conditions.RGB is a commonly used colour space often seen in computer and digital imag-ing applications. RGB was modelled after the human eye, which has receptors forred green and blue light[106]. RGB (Figure 2.7(a)) is supported by most camerasand it can be transformed to other colour spaces at a computational cost. RGB is anon-linear colour space and it is not intuitive for the user what a colour combinationof various proportions of red, green, and blue looks like. One main disadvantage isits non-uniformity i.e. the low correlation between the perceived difference of twocolours and the Euclidean distance in the RGB space [106]. Colour-based trackingwould ideally be robust to brightness and illumination changes, but such a relationin the RGB colour space is a conical section [20], which makes the determinationof a threshold for colour separation difficult. 
Colour-based detection and classifi-cation has been done in the RGB space by using training algorithms to learn thecolours, in [21] for road object recognition and in [80] for citrus fruit recognition.Blue (a) RGB colour spaceValue (b) HSV colour spaceFigure 2.7: Comparison and visualization of the RGB colour cube and thecylindrical HSV colour space representation.44The alternative to RGB are colour invariant colour spaces, such as HSV, CIELab(or L*a*b*) and YUV. In these colour spaces, two channels (H and S in HSV, Uand V in YUV, a* and b* in L*a*b*) are devoted to colour information (chromi-nance) and the third is devoted to illumination information (luminance), whichmakes these colour space more robust to illumination changes.The CIELab (or L*a*b*) colour space is commonly applied in image editingand colour enhancement. The L* channel denotes the intensity of light at the pixeland the a* and b* channels are opponent colour dimensions. Positive and nega-tive a* is the red and green colour and positive and negative b* is the yellow andblue colour, respectively (Figure 2.8(a)-2.8(c)). The advantage of L*a*b* is thatit is approximately perceptually uniform and gives good results for segmentationof colour pictures [96]. The disadvantage is that not many cameras natively sup-port L*a*b*. RGB can be transformed via a non-linear transformation to L*a*b*,which is computationally expensive. This colour space has been shown throughexperiments to perform better than the RGB and HSV colour space at segmentingshadows from images[8], which is done through colour filtering.a* Channel  b* Channel (a) Luminance, L=20a* Channel  b* Channel L= 40  (b) Luminance, L=40a* Channel  b* Channel (c) Luminance, L=80Figure 2.8: Comparison and visualization of the L*a*b* colour space at aluminance L of 20, 40 and 80.The HSV colour space (see Figure 2.7(b)) is a cylindrical colour space whichis used in many robotics and machine vision application to separate the colourfrom shading information. HSV stands for hue, saturation and value. Hue is theattribute linked to the colour of the pixel. Saturation is the level of non-whiteness45with highly saturated colours being very vivid. Value is the level of brightness ofthe colour. There are some disadvantages of HSV compared to other colour spaces[106]. It is a linear transformation of the RGB colour space, which makes it non-uniform. The cylindrical formulation of the hue causes a discontinuity at 360?,which requires additional considerations for some arithmetic operations.It has been shown that the CIELab colour space produces good separation ofcolour from lighting information[84]. However, the transformation from the RGBto the L*a*b* colour space is computationally expensive. Since the HSV colourspace is a linear transformation from RGB it is fast and gives better results forcolour segmentation than RGB. For this thesis work, the HSV colour space hasbeen selected for experimental validation as a result of computational limitations.The C implementation of the HSV transformation took? 0.003 seconds per 320?240 image.2.3.2 Detection ProcedureA flowchart of the object detection procedure is shown in Figure 2.9. Thetargets are two circles or spheres coloured red and blue. The same procedure wasused for the 2D and 3D target, with some numerical modifications of the thresholds.Binary Image GenerationThe colour thresholds were defined on the hue (H) and saturation (S) channelsof the HSV colour space, for each colour. 
The Value (V) channel was not used, as it is a measure of the lightness of the colour and varies widely. The RGB image was transformed into the HSV colour space and the thresholds were applied. The thresholds were determined experimentally from sample images of the target. The Matlab implementation of the thresholding method presented in Section 5.3 was too slow for visual servoing purposes (≈ 0.038 seconds per image), so fixed thresholds on the H and S channels were used rather than the learning method presented in Section 5.3. The result of the thresholding operation was a binary image for each of the red and blue colours. For a fixed, non-adaptive threshold, the threshold range influences the robustness of the detection. A threshold that is set too wide produces a large number of target candidate objects that require filtering; a threshold set too narrow may be unable to find the target under changing ambient lighting conditions.

Figure 2.9: Full object detection procedure for the two coloured targets.

Size-Based Noise Removal

Objects in the binary image with a pixel count below a small threshold were considered noise. Cameras with low-quality sensors tend to produce grainy images with small noisy elements. For this thesis, objects below 40 pixels at an image size of 320×240 were removed, due to their high probability of being noise. Objects with a small pixel count were also removed because the calculation of the image moments becomes inaccurate for shapes with a low pixel count.

Connected Component Extraction

Connected components were extracted from the resulting binary images and assigned to two separate sets, one for the red colour and one for the blue colour.

Shape Based Filtering, Ratio

The first shape-based filtering step considered the aspect ratio of the bounding box of the object. It was found that, for low-resolution image sensors, physical edges of objects tend to show up as noise artefacts that are long and only a few pixels wide. The targets were circular discs and spheres, so the length-to-width ratio R_i of connected component i was limited to a cut-off ratio r_cutoff:

   1 / r_cutoff < R_i < r_cutoff    (2.36)

This step removed noise artefacts that are long and slender, but it also limits the viewing angle that the UAV can have of the 2D circular targets. The step is computationally cheap and filters out some noise objects easily, saving resources for the more computationally expensive steps that follow.

Shape Based Filtering, Moment Invariants

Image moments are commonly used in object recognition. Hu's moment invariants are a special type of image moment that are invariant to scale and 2D rotation in the image. The central moment of an area, μ_pq, is

   μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y)    (2.37)

where p and q are the orders of the image moment, x̄ and ȳ are the centroid coordinates in the x and y directions, and f(x, y) is the value of the pixel (1 or 0 for binary images).
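As a small illustration of Equation 2.37, the MATLAB sketch below computes the centroid and one central moment of a binary blob. The blob here is a synthetic disc generated for the example; in the detection pipeline, BW would be one connected component of the thresholded image.

    % Central image moment mu_pq of a binary blob (sketch of Equation 2.37).
    [U, V] = meshgrid(1:320, 1:240);
    BW = (U - 160).^2 + (V - 120).^2 <= 30^2;      % synthetic disc as an example blob

    [rows, cols] = find(BW);                       % pixel coordinates of the blob
    x = cols;   y = rows;                          % image x (columns) and y (rows)

    m00  = numel(x);                               % area (zeroth-order moment)
    xbar = sum(x) / m00;                           % centroid in x
    ybar = sum(y) / m00;                           % centroid in y

    p = 2;   q = 0;                                % example: mu_20
    mu_pq = sum((x - xbar).^p .* (y - ybar).^q);   % f(x,y) = 1 on the blob, 0 elsewhere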
The image moment μ_pq can be normalized to make it invariant to scale and translation. The normalized image moment η_pq is

   η_pq = μ_pq / μ_00^(1 + (p+q)/2)    (2.38)

where μ_00 is the area of the shape.

Hu's invariant moments are shape properties that are invariant to translation, scale, and rotation in the image. There are seven invariant moments, but in this thesis only the invariant moments up to order 2 were used; higher-order moments require additional computational resources, so only the first two moments were considered in order to increase the frame rate. The invariant moments up to order 2 are

   I1 = η_20 + η_02    (2.39)
   I2 = (η_20 − η_02)² + 4 η_11².    (2.40)

The values of the moments I_1,i and I_2,i for object i can be compared to an "ideal" value for the target, I_1,t and I_2,t.

Similarity Measure Calculation

The similarity measure is a metric of how close the moments of the current shape are to the desired target shape moments. Sample moments for the target, I_1,t and I_2,t, were calculated from 83 images of the target in various orientations. It was assumed that the target values were approximately normally distributed. The target means μ_{I1,t} and μ_{I2,t} and the covariance matrix S_12 were calculated from the 83 sample images. The values of I_1,t and I_2,t are not independent, and therefore S_12 was not a diagonal matrix.

The similarity measure was defined as the Mahalanobis distance between the target means and the Hu's invariants I_1,i and I_2,i of object i. The presence of covariance makes the Euclidean distance unsuitable as a similarity measure. The Mahalanobis distance for shape i was

   d_m(i) = √( (I_i − μ_t)^T S_12⁻¹ (I_i − μ_t) )    (2.41)

where I_i = [I_1,i I_2,i]^T is a vector containing the first and second Hu's invariants of object i, and μ_t = [μ_{I1,t} μ_{I2,t}]^T is a vector containing the mean target values.

Final Feature Selection

The shape with the lowest Mahalanobis distance d_m most closely resembled the target in the current image. The value of d_m was also compared to a cut-off value to avoid selecting objects that have only a weak similarity to the desired target. The target image moments were the same for the spheres and the circles, with μ_{I1,t} = 0.16 and μ_{I2,t} = 0.0004. The centroid of the selected shape in the image was used as the measurement of P1 and P2 in the visual servoing controller.

2.4 Simulation Results

This section presents simulation results of the image based visual servoing control scheme. The velocity for the camera motion was determined by Equation 2.23. Figure 2.11 shows the initial and desired final location of the camera. Only translational camera motion was considered for these simulations. The simulations did not consider the hardware and the coupled motion of the UAV, since a kinematic velocity controller was used (see Chapter 3). It is assumed that the camera is capable of achieving the required velocities.

Figure 2.10: Variation of the virtual sphere radius R with changing camera position C, relative to the two target points P1 and P2.

The proposed visual servoing scheme depends on the estimate of the virtual sphere radius R in Equation 2.34, which has been shown to be quite robust to significant estimation errors. For this simulation, a constant radius R = 0.2 was used. An error in the virtual sphere estimate manifests itself as a gain that affects the convergence rate, but not the stability, of the system. Estimation errors affect all 3 translational DOF equally, since the control law has the same dynamics for each DOF. The value governing the rate of convergence, λ, was 0.35. A minimal closed-loop sketch using these values is given below.
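The MATLAB sketch below integrates the purely translational kinematics under the control law of Equation 2.23 with λ = 0.35 and R̂ = 0.2. It assumes that the camera axes stay aligned with the world frame and that the commanded velocity is tracked exactly, as in this section's kinematic simulations; the target points and the initial and desired camera positions are illustrative and do not reproduce the exact configuration of Figure 2.11.

    % Minimal translational IBVS simulation (sketch; camera rotation ignored).
    P1 = [0; 0; 3];   P2 = [0.2; 0; 3];        % target points in the world frame [m]
    C  = [-0.8; 0.5; 0.2];                     % initial camera position
    Cd = [0.1; 0.0; 1.5];                      % desired camera position
    lambda = 0.35;   R_hat = 0.2;   dt = 0.05;

    s_des = sphereFeature(P1, P2, Cd);         % desired features from the model
    for k = 1:400
        e  = sphereFeature(P1, P2, C) - s_des; % feature error, Equation 2.19
        Vc = lambda * R_hat * e;               % Equation 2.23 with Ls = -(1/R_hat)*I3
        C  = C + Vc * dt;                      % ideal tracking of the velocity command
    end
    disp(norm(C - Cd))                         % residual position error after 20 s

    function s1 = sphereFeature(P1, P2, C)
        S1 = (P1 - C) / norm(P1 - C);          % spherical projections of the two points
        S2 = (P2 - C) / norm(P2 - C);
        r  = sqrt(1 - (S1' * S2)^2);           % small circle radius, Equation 2.16
        s1 = S1 / r;                           % feature vector, Equation 2.18
    end

As expected from the exponential error design, the position error decays smoothly, and scaling R̂ only changes how fast the loop converges, not whether it converges.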
2.4 Simulation Results

This section presents simulation results for the image based visual servoing control scheme. The velocity of the camera motion was determined by Equation 2.23. Figure 2.11 shows the initial and desired final location of the camera. Only translational camera motion was considered in these simulations. The simulations did not consider the hardware and the coupled motion of the UAV, since a kinematic velocity controller was used (see Chapter 3); it is assumed that the camera is capable of achieving the required velocities.

Figure 2.10: Variation of the virtual sphere radius R with changing camera position C, relative to the two target points P1 and P2.

The proposed visual servoing scheme depends on the estimate of the virtual sphere radius R in Equation 2.34, which has proven to be quite robust to significant estimation errors. For this simulation, a constant radius R = 0.2 was used. An error in the virtual sphere estimate manifests itself as a gain that affects the convergence rate, but not the stability of the system. Estimation errors affect all three translational DOF equally, since the control law has the same dynamics for each DOF. The value governing the rate of convergence, \lambda, was 0.35.

Figure 2.10 shows the variation of the radius R with the camera position and the distance to the physical targets P1 and P2. The radius approaches the physical distance P1P2 = 0.2 as the distance to the target increases. As a rule of thumb, R should be taken as the physical distance between P1 and P2, which is the largest value R can attain (reached at infinite distance). The rate of convergence can then be adjusted to physically feasible values via the rate parameter \lambda.

Figure 2.11: Simulation setup of the initial and desired final location of the camera (panels: initial simulated, initial actual, desired simulated, desired actual).

The feature and camera trajectories are shown in Figure 2.12. The feature trajectory is a straight line, while the camera motion in 3D space is slightly curved (Figure 2.12(b)), which is characteristic of image-based visual servoing. In IBVS the image based error function is driven to zero; although the feature trajectory in the image is a straight line, this does not necessarily produce a straight-line trajectory in Cartesian coordinates. The feature error reduction and the camera velocity are shown in Figure 2.13. The feature error decreases exponentially at approximately similar rates for all three degrees of freedom, and the camera velocities are within what the UAV can achieve. Convergence appears slow, but can be tuned by adjusting \lambda.

Figure 2.12: Trajectory of the camera and the features during convergence from the initial to the desired configuration (panels: feature trajectory, camera trajectory).

Figure 2.13: Feature error and corresponding camera velocity during the visual servoing for R = 0.2.

The results for visual servoing with an incorrect radius estimate of 0.1 are shown in Figure 2.14. The convergence rate is significantly slower than with the correct estimate of 0.2 in Figure 2.13; the camera and feature trajectories are not affected, only the speed of convergence.

Figure 2.14: Feature error and corresponding camera velocity during the visual servoing for R = 0.1.
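The behaviour shown in Figures 2.13 and 2.14 can be reproduced qualitatively with a few lines of simulation. The sketch below is illustrative only: the interaction matrix is a constant diagonal placeholder standing in for the virtual-sphere interaction matrix, the true matrix is taken equal to the estimate (ideal kinematic camera), and the initial error values are arbitrary.

```matlab
% Minimal simulation of the proportional IBVS law Vc = -lambda*inv(Ls)*e (Eq. 2.23)
lambda = 0.35;          % rate of convergence (as in the simulation above)
dt     = 0.05;          % time step [s]
e      = [8; -5; 3];    % initial feature error (x, y, z components), arbitrary
Ls_hat = diag([1 1 1]); % constant interaction-matrix estimate (placeholder)

N = round(15/dt);
E = zeros(3, N); V = zeros(3, N);
for k = 1:N
    Vc = -lambda * (Ls_hat \ e);   % commanded camera velocity
    e  = e + Ls_hat * Vc * dt;     % error propagation, true matrix = estimate
    E(:,k) = e;  V(:,k) = Vc;
end
plot((1:N)*dt, E);  xlabel('Time [s]');  ylabel('Feature error');
```

Scaling the interaction-matrix estimate, as an incorrect radius estimate does, rescales the effective gain and slows or speeds convergence without changing the shape of the feature trajectory, which is the effect visible in Figure 2.14.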
2.5 Moving Target Compensation

The control law presented so far does not account for a moving target and results in a steady state error in the final position of the target in the image. This section presents the compensation scheme used to address the feature error induced by a moving target. The control law is similar to a popular method, which is derived here, but has been altered to accommodate the dynamic UAV system. A popular approach to moving targets is to compensate by estimating the target's velocity [9, 29, 34]. The change in the image based error function \dot{e} can be attributed to the camera velocity V_c and the feature movement \partial e/\partial t according to

\dot{e} = L_s V_c + \frac{\partial e}{\partial t}   (2.42)

Given a desired exponential decrease in the error, \dot{e} = -\lambda e, we can solve for the velocity

V_c = -\hat{L}_s^{-1}\left(\lambda e + \widehat{\frac{\partial e}{\partial t}}\right)   (2.43)

where \lambda governs the rate of convergence and \widehat{\partial e/\partial t} is the estimate of the target motion, which can be written as

\widehat{\frac{\partial e}{\partial t}} = \dot{\hat{e}} - \hat{L}_s V_{c,t-1}   (2.44)

where V_{c,t-1} is the camera velocity at the previous time instant and \dot{\hat{e}} is the feature error variation in the image. The feature error variation can be estimated by

\dot{\hat{e}} = \frac{e_t - e_{t-1}}{\Delta t}   (2.45)

where e_t and e_{t-1} are the current and previous feature errors and \Delta t is the time between measurements. An estimate of the derivative by Equation 2.45 contains significant levels of noise and requires filtering. Substituting Equation 2.44 and Equation 2.45 into Equation 2.43 gives the required velocity V_{c,t} at time step t,

V_{c,t} = -\lambda\hat{L}_s^{-1} e - \hat{L}_s^{-1}\dot{\hat{e}} + V_{c,t-1}.   (2.46)

The term -\hat{L}_s^{-1}\dot{\hat{e}} + V_{c,t-1} is the estimate of the target velocity, which is added to the image feature error term -\lambda\hat{L}_s^{-1}e to place the target at the desired position. In practice the target velocity is difficult to estimate, for several reasons. The first is the accuracy of the on-board velocity sensors and the noise associated with their measurements. The second is the estimation of the derivative of the feature error \dot{\hat{e}}, which also contains noise and requires filtering. The last and most important is the estimate of the inverse of the interaction matrix, \hat{L}_s^{-1}, which links the feature error velocity to the real-world target velocity; errors in the interaction matrix estimate lead to incorrect predicted velocities. Given the virtual sphere approach, the exact interaction matrix is difficult to determine. In this thesis a constant inverse interaction matrix was chosen, which results in incorrect estimates of the target velocity.

Equation 2.46 has PID characteristics, with -\lambda\hat{L}_s^{-1}e accounting for the current error, \hat{L}_s^{-1}\dot{\hat{e}} accounting for the future error and V_{c,t-1} accounting for past errors. PID control on the feature error has been used previously in [72]. The controller in this work is based on similar characteristics to Equation 2.46, with some modifications. The input velocity to the velocity controller in this thesis is

V_{c,t} = -\lambda\hat{L}_s^{-1}\left(e_t + K_i\, e_{\Sigma}(t)\right)   (2.47)

where K_i is the integral gain and e_{\Sigma}(t) is a summation of the previous errors, according to

e_{\Sigma}(t) = f\, e_{\Sigma}(t-1) + e_t   (2.48)

where f is the forgetting factor, which prevents the summation from growing too large. The forgetting factor was necessary because of the helicopter's slow response in reducing the feature error; a slow response time can lead to integral wind-up. The use of a forgetting factor results in a steady state error, which is nevertheless smaller than with the pure proportional control law of Equation 2.23. A derivative term was not used, due to the large variations in the image features caused by the pitch and roll motions of the UAV. The factor f was tuned in experiments to achieve adequate performance (few oscillations) and a sufficiently low steady state error.

Chapter 3

Quadrotor Helicopter Modelling and Control

This chapter presents the quadrotor helicopter model, the feedback linearization scheme, the parameter identification and the velocity controller design. The controller receives a desired velocity input and sends attitude commands to the UAV platform. The quadrotor helicopter hardware used is the ARDrone by Parrot [18, 79]. The controller is based on a common controller architecture for quadrotor helicopter systems.
Quadrotor helicopters controllers often consist of aninner attitude control loop and an outer velocity/position control loop [1, 16, 24].Various quadrotor systems, like the ARDrone or the AscTec Hummingbird1[112],have pre-existing on-board attitude controllers that simplify controller design. Theanalysis of the response characteristics of the on-board controllers require systemidentification to be performed. The nonlinear system dynamics require a nonlinearcontroller or a feedback linearization scheme. In this chapter a feedback lineariza-tion schemes are used to allow for the application of well-studied linear controlstrategies [108, 112].The nonlinear system dynamics and the model of the transfer function of theon-board controller are presented in Section 3.1. The feedback linearization methodthat makes up the outer control loop is presented in Section 3.2. Section 3.3 de-scribes the implementation details of the overall controller, consisting of: the on-1http://www.asctec.de/57board controller transfer function identification, the state space model, the detailsof the Kalman filter for state estimation and the parameters for the linear quadraticGaussian controller. Section 3.4 presents the simulation setup used to tune and testthe controller parameters. The implementation and sample velocity responses arepresented in Section 3.5.3.1 System ModelThe standard model of the quadrotor has been used in this thesis and can befound in[62] with some minor modifications. The quadrotor is modelled by a cen-tral mass m, a constant inertia matrix I ? R3?3 and four motors, which producethrust forces denoted by F1 to F4. The North-East-Down inertial coordinate framedenoted by {A} has axes labelled x0, y0, z0. The body-attached frame moves androtates with the quadrotor is labelled as {B} with axes named xB, yB and zB (seeFigure 3.1).??  ??? ?? ?? ?? ?? ?? ? ? ? ?? ?? ?? Figure 3.1: Quadrotor axis convention for the body attached frame {B}.The body fixed {B} and inertial {A} reference frame are related by a rotation58matrix R according to the ZYX Euler angle convention.R(?,? ,?) = Rz(?)Ry(?)Rx(?) (3.1)R(?,? ,?) =???c?c? c?s?s? ? c?s? s?s?+ c?c?s?c?s? c?s?+ s?s?s? c?s?s? ? c?s??s? c?s? c?c???? (3.2)with c and s representing the cosine and sine functions, respectively.Let xo ? {A} and vo ? {A} represent the position and velocity vectors of thequadrotor of {A} with respect to {B}. Let ? ? {B} denote the angular rate ofchange of {B} with respect to {A}. The rigid body equations for the quadrotor canbe written asx?o = vo, (3.3)mv?o = mg???001???+RF, (3.4)R? = R??, (3.5)I??=??? I?+ ? (3.6)where the term F and ? represents the total forces and moments acting on thequadrotor, which include the generated thrust as well as any wind disturbanceforces. The term g is the gravity constant and ?? represents the skew-symmetricmatrix that replaces the cross-product such that ??v =??v for any vector v?R3.Equation 3.3 to Equation 3.6 represent the complete rigid body dynamics forthe rotation and translation of the quadrotor helicopter in motion. The quadrotorplatform used for the experiments contained an on-board controller, which handledthe rotational dynamics of the system. Equation 3.5 and Equation 3.6 are controlledby the on-board controller and have been shown for the sake of completeness. 
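As a concrete illustration of the translational part of this model, the sketch below builds the ZYX rotation of Equations 3.1 and 3.2 and evaluates the acceleration of Equation 3.4 for a thrust along the negative body z axis, in keeping with the North-East-Down convention. The mass, attitude angles and thrust values are placeholders, not identified ARDrone parameters.

```matlab
% Translational rigid-body model (Eq. 3.3-3.4) with the ZYX Euler rotation
m = 0.42;  g = 9.81;                 % mass [kg] (placeholder), gravity [m/s^2]
phi = 0.05; theta = -0.08; psi = 0;  % roll, pitch, yaw [rad] (placeholders)

Rx = [1 0 0; 0 cos(phi) -sin(phi); 0 sin(phi) cos(phi)];
Ry = [cos(theta) 0 sin(theta); 0 1 0; -sin(theta) 0 cos(theta)];
Rz = [cos(psi) -sin(psi) 0; sin(psi) cos(psi) 0; 0 0 1];
R  = Rz * Ry * Rx;                   % body-to-inertial rotation (Eq. 3.1)

T      = m * g;                      % total thrust; hover assumption T = mg
F_body = [0; 0; -T];                 % thrust acts along -z_B (NED, z down)

% Translational dynamics, Eq. 3.4: m*vdot = m*g*e3 + R*F
vdot = g * [0; 0; 1] + (1/m) * (R * F_body);
```

With level attitude and T = mg the acceleration is zero, and tilting the body produces the horizontal force components given later in Equations 3.9 to 3.11.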
Thedesigned controller, focused on the translational dynamics.The external disturbance forces acting on the body are complex due to theturbulent nature of helicopter hover and the wind vortices generated by multiplespinning propellers. The controller design assumed that the external disturbance59forces and moments were zero. This is a simplifying assumption and, unless ad-dressed, will cause a reduced performance of the controller response on a real sys-tem. Forces induced due to blade flapping were assumed to be negligible. Ignoringthe disturbances and wind resistances, the total applied force F in Equation 3.4simplifies to the total thrust. Let T ? {B} be the scalar total thrust produced by therotors in the body reference frameTB = F1,B +F2,B +F3,B +F4,B (3.7)where F1,B,F2,B,F3,B,F4,B are the thrust produced by the four motors in the bodyreference frame {B}.Let v?o = {ax,ay,az} ? {A}, then the translational dynamics from Equation 3.4can be rewritten asmax = Fxmay = Fymaz = Fz(3.8)where Fx,Fy,Fz are the forces required to accelerate the mass of the quadrotor UAVm, with accelerations ax,ay and az in the inertial coordinate frame. The forcesacting on the rigid helicopter frame in the inertial reference frame {A} are[81]Fx =?T (cos?sin?cos? + sin?sin?) (3.9)Fy =?T (sin?sin?cos? ? cos?sin?) (3.10)Fz =?T cos?cos? +mg. (3.11)It should be noted that Equation 3.9 to 3.11 represent the force required to movethe object using propeller thrust. Wind disturbance forces have not been modelledand the object will continue to move in one direction, unless the thrust counteractsthe helicopter?s velocity.3.1.1 Attitude Dynamics for the Pitch and Roll ResponseThe on-board controller on the experimental quadrotor platform controlled theattitude of the UAV and replaces the inner control loop that corresponds to Equa-tion 3.5 and 3.6. It is common practice to model the relationship between the60controller input and the system response as a transfer function (T (s)) [16, 18, 108]with a time delay (e?Tds) to account for wireless communication [112].The input into the controller is the desired angle and is sent over the wirelessnetwork. The output is the actual pitch or roll angle of the UAV following a timedelay. Manufacturer specifications of the ARDrone dictate that the attitude controlfollows a second order response [18]. A second order model was used on a similarquadrotor helicopter platform in [16] and in [112] in series with a time delay. Thesecond order model equation isT2(s) = Kp?2s2 +2??s+?2 . (3.12)The second order model transfer function will be denoted by T2(s) and the modelidentified from the frequency response data will be denoted by Tf (s). It was ex-perimentally determined that a good approximation of the frequency response wasachieved with a second order function with a zero in the numerator of the formTf (s) = Kp1+TzsT 2?s2 +2?T?s+1e?Tds (3.13)where Kp is the steady state gain. Td is the time delay, ? is the damping factor andT? is a parameter associated with the natural frequency and Tz places the zero inthe numerator of the transfer function. Both forms of the transfer function wereconsidered in the system identification and compared. The second order modelT2(s) corresponding to Equation 3.12 was identified from step response data andthe frequency model Tf (s) corresponding to Equation 3.13 was identified from stepand frequency response data.The term e?Tds in Equation 3.13 represents the time delay and was modelledby a Pade? approximation (used in [38]). 
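Both candidate attitude models can be written down directly from Equations 3.12 and 3.13. The sketch below builds them in Matlab using the pitch parameters that the identification in Section 3.3.1 arrives at (Table 3.1); it illustrates the model structure only, and assumes the Control System Toolbox functions tf and pade.

```matlab
s = tf('s');

% Second-order model T2(s), Eq. 3.12 (pitch parameters from Table 3.1)
Kp2 = 0.460;  wn = 9.04;  zeta2 = 0.334;
T2  = Kp2 * wn^2 / (s^2 + 2*zeta2*wn*s + wn^2);

% Frequency model Tf(s), Eq. 3.13, with a 2nd-order Pade approximation of the delay
Kpf = 0.358;  Tz = 0.680;  Tw = 0.221;  zetaf = 0.869;  Td = 0.099;
[nd, dd] = pade(Td, 2);                       % approximation of exp(-Td*s)
Tf = Kpf * (1 + Tz*s) / (Tw^2*s^2 + 2*zetaf*Tw*s + 1) * tf(nd, dd);

% bode(T2, Tf)   % compare against the sweep data of Figure 3.6
```

The commented bode call is how the two candidate structures are compared against the measured frequency response.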
The transfer function form of the ap-proximation depends on the degree of the Pade? approximation. Higher order ap-proximations will result in a higher accuracy at higher frequencies. A 2nd orderapproximation was selected according toe?Tds ?1? Td2 s+T 2d12 s21+ Td2 s+T 2d12 s2(3.14)61where Td is the time delay. A second order Pade? approximation was used, as itincreases the accuracy of the identified system model Tf (s) at higher frequencies.This approximation increases the order of Tf (s) and introduce new poles and zerosinto the transfer function. In the following sections the Pade? approximation willbe used wherever the time delay operator e?Tds is shown. The parameters for thesecond order transfer function (T2(s)) were tuned to incorporate the time delaywithout the use of the Pade? approximation.For the UAV to achieve forward and sideways velocity the pitch and roll angles? and ? have to be displaced from equilibrium. The controller receives an inputto the pitch and roll which is normalized on the interval [?1,1]. A negative pitchangle ? will result in a positive velocity vx along the x axis. A positive roll angle? will results in in a positive value of vy along the positive y-axis. The detailsand results of the parameter identification experiments are shown in Section 3.3.1.The overall system model consisting of equations Equation 3.9 - 3.11 is shown inFigure 3.2.? ????? ??? ?? ?? ? ? ? ??? ? ??? ??? ? ? ?? ???????????  ??????? ??? ?? ??? ?? Figure 3.2: Block diagram of the complete nonlinear system model for aquadrotor helicopter with on-board attitude controller. Wind distur-bance effects are assumed negligible.3.2 Feedback LinearizationThis section presents the feedback linearization of Equation 3.9 to Equation 3.11.The objective of the feedback linearization is to calculate the necessary attitude an-gles ? and ? that result in the desired force applied on the UAV body. This appliedforce leads to an acceleration of the UAV body in the desired direction.It has been shown[112] that Equation 3.9 to Equation 3.11 can be solved for62the desired orientation angles (? ?,? ?), given desired forces F?x ,F?y ,F?z . The newdesired orientation angles ? ? and ? ? are given by? ? = arctan(F?y sin?+F?x cos?mg?F?z)(3.15)? ? = arctan(F?x sin??F?y cos??mg+F?zcos? ?)(3.16)where F?x ,F?y ,F?z are virtual, desired forces needed to move the helicopter. Theforces have been termed virtual forces, as they ignore the wind resistance and ex-ternal disturbance effects present in the system. The control task simplifies to thecontrol of these virtual forces.Equation 3.15 and Equation 3.16 are expressions for the forces in the inertialcoordinate frame {A}. For the purpose of eye-in-hand IBVS, the camera concernsitself with self motion (i.e. the motion of the camera in the image plane x,y and zcoordinates). Therefore the angle ? will be set to zero, which leads to a redefinitionof Equation 3.15 and 3.16 into a heading independent coordinate frame? ? = arctan(F?xmg?F?z)(3.17)? ? = arctan(?F?y cos? ??mg+F?z). (3.18)The control task simplifies to controlling the forces that will lead to the desiredaccelerations. The angles ? ? and ? ? are sent to the on-board controller. It shouldbe kept in mind that the current system model does not include wind resistanceand wind disturbance effects, which are actually present. 
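To make the inversion concrete, the following sketch recovers the attitude set-points from a set of desired virtual forces by inverting the force model of Equations 3.9 to 3.11 with the heading set to zero, which is the content of Equations 3.17 and 3.18. The numerical values are placeholders, and the signs follow the force model as reproduced above: a positive desired force in x yields a negative pitch set-point, consistent with the convention noted in Section 3.1.1.

```matlab
% Heading-independent feedback linearization: desired forces -> attitude set-points
m = 0.42;  g = 9.81;                  % placeholder mass [kg]
Fx_d = 0.4;  Fy_d = -0.2;  Fz_d = 0;  % desired virtual forces [N] (placeholders)

% From Eq. 3.9-3.11 with psi = 0:
%   Fx = -T*cos(phi)*sin(theta),  Fy = T*sin(phi),  Fz = -T*cos(theta)*cos(phi) + m*g
theta_d = atan2(-Fx_d, m*g - Fz_d);                 % pitch set-point
phi_d   = atan2(Fy_d*cos(theta_d), m*g - Fz_d);     % roll set-point
T       = (m*g - Fz_d)/(cos(theta_d)*cos(phi_d));   % corresponding total thrust

% Consistency check: the recovered angles reproduce the requested forces
Fx_chk = -T*cos(phi_d)*sin(theta_d);
Fy_chk =  T*sin(phi_d);
Fz_chk = -T*cos(theta_d)*cos(phi_d) + m*g;
```

The check at the end simply confirms the inversion; wind resistance and disturbance forces remain outside the model, as noted above.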
In the current model,the helicopter will not slow down unless an opposing thrust force component isgenerated by the controller.3.2.1 Reduced System ModelThis section presents a reduced model with decoupled attitude angle dynam-ics. The system model shown in Figure 3.2 can be transformed into a linear, de-coupled system. It can be assumed that the attitude dynamics are smooth andsufficiently fast that the second order model as well as the input transformation can63be exchanged[112].The new model (shown in Figure 3.3) is linear with the virtual forces F?x ,F?y ,the heading velocity ??, the altitude force F?z as inputs and with the quadrotorvelocities vx,vy, height z and heading angle ? as outputs. This new plant is adecoupled and linear system, which can be controlled by traditional means.?? ???????????  ??????? ????? ??? ?? ??? ??? ?? ??? ?? ?? ?? ?? ??? ??? ?? ??? ?? ?? ???????????  ??????? ????? ??? ?? Figure 3.3: Feedback linearized,decoupled system model of the quadrotorhelicopter.3.3 Controller DesignThis section outlines the controller design for the reduced and decoupled sys-tem model shown in Figure 3.3. The purpose of the controller was to calculate theinput angles into the quadrotor to achieve a desired velocity in the x and y direc-tions. According to the separation principle of estimation and control, it is possibleto design an observer and a controller separately[22]. The observer in this case willbe a Kalman filter and the controller will be a linear quadratic regulator with a pre-compensation gain N to remove steady state error. An overview of the controlleris shown in Figure 3.4. The term y(k) is the velocity measured by the on-boardcontroller. The term x?(k) is the estimate of the state of the system and N is thepre-compensation gain that removes steady state errors from the output.In Section 3.3.1, the parameters for the second order and frequency modelwill be identified and compared. In Section 3.3.2 the transfer function shown inFigure 3.3 will be discretized and transformed into state space form. Section 3.3.364Feedback  Linearization  UAV ???? - + K alman  Filter  N  ????? ???? ???? ???? Figure 3.4: System overview for the velocity controller for the x and y direc-tions. The term r(k) represents the input velocity.will outline the design of the Kalman filter that uses the system model and themeasured velocity y(k) and estimates the system state x?(k). Section 3.3.4 willoutline the LQR with integral controller that uses the estimate x?(k) to optimallycontrol the system.3.3.1 Transfer Function Identification for Pitch and Roll AngleDynamicsIn this section two models have been identified and their performance will becompared. The second order model was identified in the time domain, is denotedby Equation 3.12 and is characterized by the parameters Kp,2, ?2 and ? . The secondorder model is indicative of the form used in literature and by the manufacturerspecifications ([18]) and was identified using step response data. The time delay incommunication was incorporated as part of the second order model. The frequencymodel is denoted by Equation 3.13 and is characterized by the parameters Kp, f , Tz,T? , ? f and Td . The time delay for the frequency model was approximated by asecond order Pade? approximation.The system input for the time domain data was a step input and a sinusoidal fre-quency sweep for the frequency domain. The sinusoidal frequency input resultedin an oscillating attitude angle with a varying magnitude and phase lag. 
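For reference, a linearly swept sine of the kind used for the identification can be generated as follows. The frequency range, amplitude and duration here are placeholders; the ranges actually flown are listed in Tables B.1 and B.2.

```matlab
% Example of a linearly swept sine input for attitude identification
Ts = 0.05;                    % sampling time [s] (approx. 20 Hz)
Tdur = 25;  t = (0:Ts:Tdur)'; % sweep duration [s]
f0 = 0.05;  f1 = 5;           % start and end frequency [Hz] (placeholders)
A  = 0.1;                     % input amplitude (normalized command)

k = (f1 - f0)/Tdur;                      % linear sweep rate [Hz/s]
u = A*sin(2*pi*(f0*t + 0.5*k*t.^2));     % instantaneous frequency f0 + k*t

% u is sent as the normalized pitch (or roll) command and the measured
% attitude angle is logged for the spectral analysis described below.
```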
The pitchand roll response identification was done separately, which was justified by the de-coupling assumption of the roll and pitch dynamics. The sweep was generated witha linearly increasing frequency. The lower frequency limit was set by experimental65space constraints and the higher frequency was limited by the Nyquist frequency.The sampling frequency was approximately 20 Hz or Ts = 0.05s and was limited bythe computer processing of the received image and navigation data. The Nyquistlimit indicates that input signals above 10Hz cannot be identified accurately, whichsets the theoretical upper frequency limit for the frequency sweep. A more prac-tical upper limit of 5 Hz was chosen. On the lower frequency side, the frequencyis limited by the laboratory space available for manoeuvring the quadrotor heli-copter. Low-frequency, high-angle inputs require large empty spaces to performsuccessfully. Multiple overlapping frequency sweeps were conducted. Informationregarding the exact frequency ranges and durations used in the sweep can be foundin Table B.1 for the pitch and in Table B.2 for the roll dynamics identification.The noise in pitch and roll measurements dictate the minimum input amplitudeof the sweep to excite the system to a measurable degree. The noise amplitude wasextracted from sample hover flights with zero control input. The noise was assumednormally distributed and had approximately zero mean with a standard deviationof 0.005 rad. The input amplitude of the frequency sweep was set sufficiently highabove the noise level to excite the system. The minimum amplitude of the outputwas 0.01 rad (approx. 0.6?).Frequency Sweep IdentificationFrequency domain system identification consists of two major steps. The firststep extracts frequency responses using spectral density quantities from time do-main input-output data. The second step fits a linear time-invariant system to thefrequency response.Consider a sequence of discretely sampled system inputs u[n] correspondingto the frequency sweep of length N across the desired frequencies. The systemresponse y[n] = T{u[n]} to the input is observed for 0 ? n ? N ? 1. First, thediscrete cross and auto-correlation functions Ryu and Ruu were calculated accordingtoRyu[k] =1NN?1?n=0y[n]u[n? k] (3.19)66Ruu[k] =1NN?1?n=0u[n]u[n? k] (3.20)The discrete Fourier Transform is applied to Ruu and Ryu to obtain the power spec-tral density functions as a function of the frequencySyu(?)DFT?? Ryu[k] (3.21)Suu(?)DFT?? Ruu[k] (3.22)A Hann windowing function was applied to the time domain data to reduce noiseand smooth the frequency response estimate. Windowing reduces leakage of ener-gy into other frequencies. Leakage of frequencies can drastically affect the shapeof the spectrum. The length of the window was 128 entries. The calculation ofthe correlation functions, the windowing and the transformation to the power spec-tral densities was done using the Matlab spa command, which performs spectralanalysis.A linear time-invariant system was fitted to the frequency response data afterspectral analysis. The transfer function is shown in Equation 3.13. The processmodel was a continuous second order function with an additional zero in the nu-merator. The transfer function parameters Kp, f , Tz, T? , ? f and Td were identifiedusing the Matlab pem command. 
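The two steps of the frequency-domain identification can be sketched compactly. The following function is a simplified, Blackman-Tukey-style estimate in the spirit of Equations 3.19 to 3.22 and of the spa-based analysis used here; it is not the spa implementation itself, and the window length and names are illustrative.

```matlab
function [w, G] = freqResponseEstimate(u, y, Ts)
% Frequency response estimate from logged input u and measured angle y.
u = u(:);  y = y(:);
N   = length(u);
M   = 128;                                   % window length used in the text
lag = -(M-1):(M-1);

Ryu = zeros(size(lag));  Ruu = zeros(size(lag));
for i = 1:length(lag)                        % biased correlation estimates
    k   = lag(i);
    idx = max(1,1+k):min(N,N+k);             % valid sample overlap at this lag
    Ryu(i) = sum(y(idx) .* u(idx - k)) / N;  % Eq. 3.19
    Ruu(i) = sum(u(idx) .* u(idx - k)) / N;  % Eq. 3.20
end

win = 0.5*(1 + cos(pi*lag/M));               % Hann lag window
w   = logspace(-1, log10(pi/Ts), 200);       % frequency grid [rad/s]
Syu = zeros(size(w));  Suu = zeros(size(w));
for i = 1:length(w)                          % discrete-time Fourier transform
    E = exp(-1j*w(i)*Ts*lag);
    Syu(i) = sum(win .* Ryu .* E);           % Eq. 3.21
    Suu(i) = sum(win .* Ruu .* E);           % Eq. 3.22
end
G = Syu ./ Suu;                              % frequency response estimate
end
```

The ratio of the cross and auto spectral densities evaluated on the frequency grid is the estimate to which the process model of Equation 3.13 is subsequently fitted.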
The identified values were manually tuned forbest agreement between the frequency response and the step response data.Pitch AngleThe UAV was given a pitch step-input and the dynamic response was recordedover 9 trials for the time domain identification. An input pitch of -0.25 was sent for2 seconds to allow the transients from take-off to decay. The pitch angle as a func-tion of time for all 9 trials are shown in Figure 3.5. Sudden jumps that correspondto a pitch of 0 radians are dropped communication packets and do not affect theresult. The second order model was identified from Figure 3.5 and the frequencymodel was identified and tuned to fit Figure 3.6 and Figure 3.5. A summary of theidentified parameters for both models is shown in Table 3.1.The frequency model underestimates the steady state angle in the step response67data, but fits the frequency data very well. The frequency model shown in in Fig-ure 3.6 includes a second order Pade? approximation of the time delay (see Equa-tion 3.14). Low frequency sweep data was hard to estimate accurately due to exper-imental space constraints. The frequency model was optimized to best fit both thefrequency and step response data. The second order function fits the step responsewell, but performs poorly when compared to the frequency data.The identified model was tested on a 25 second frequency sweep test set from0.01 Hz to 5 Hz. The test set was not used as part of the model identificationprocedure. The input amplitude was 0.1 and included white Gaussian noise with astandard deviation of 0.02. Noise was added to increase the variation of the inputsignal. The model-predicted output, the measured output, and the input is shown inFigure 3.7. The model approximates the phase difference response well, but doesexperience some errors in the magnitude at higher frequencies. These errors in themagnitude can be partially attributed to noise in the angle measurement, partiallyto random wind turbulence and partially to unmodelled dynamics. The frequencymodel was chosen as the best option, as it fits the transient system characteristicsbetter than the second order model.Table 3.1: Identified parameters for the second order transfer function and forthe frequency model identified from frequency domain data for the pitchand roll response.Pitch, ? Roll,?Second Order SystemSteady State Gain, Kp,2 0.460 0.568Natural Frequency, ?n 9.04 8.77Damping Ratio, ?2 0.334 0.376Frequency ModelSteady State Gain, Kp, f 0.358 0.424Frequency Parameter, T? 0.221 0.191Zero Parameter, Tz 0.680 0.455Damping Ratio ? f 0.869 0.940Time Delay, Td 0.099 0.087681.5 2 2.5 3 3.5 4?0.25?0.2?0.15?0.1?0.0500.05Time (sec)Pitch Angle (Rad)  Frequency ModelSecond Order ModelInputMeasured AngleFigure 3.5: Comparison of modelled and measured pitch response. The pa-rameters for the second order model are shown in Table 3.1.10?2 10?1 100 101 10200.20.40.60.8Magnitude (y/u)  10?2 10?1 100 101 102?300?200?1000Frequency, (rad/s)Phase, (deg)Frequency ModelFrequency Sweep DataSecond Order ModelFigure 3.6: Bode diagram showing the identified second order and frequencymodel and the frequency sweep data of the pitch dynamics.695 10 15 20 25?0.15?0.1?0.0500.050.10.15Time (sec)Pitch Angle (Rad)  InputMeasuredSystem ModelFigure 3.7: Comparison of the identified frequency model response with themeasured response to a sample frequency sweep for the roll dynamics.Roll AngleThe roll angle transfer functions for the second order and frequency modelsdiffered slightly from the pitch models. 
These minor numerical differences (seeTable 3.1) can be explained by the elliptical body of the ARDrone. The step re-sponse is shown in Figure 3.8 and the frequency response is shown in Figure 3.9.The identified parameters for the second order and frequency model are shown inTable 3.1.There is little agreement between the second order model and the frequencyresponse data. The frequency model shows a steady state error of approximately25% in Figure 3.8. This large steady state error in the step response is a result ofthe uncertainty of frequency response at low frequencies in Figure 3.9. The pa-rameters shown in Table 3.1 were determined by system identification using theMatlab System Identification Toolbox and manually adjusted to fit the step andfrequency response data. The identified model was applied to a 25 second frequen-cy sweep test set from 0.5 Hz to 1.5 Hz. The test set was not used as part of themodel identification procedure. The model predicted output, the measured outputand the input is shown in Figure 3.10. The model approximates the phase differ-70ence response well, but does experience some errors in the magnitude at higherfrequencies. These errors in the magnitude can be partially attributed to noise. Thefrequency model was selected as the best approximation of the roll dynamics.3.3.2 Discretization and State Space ModelThe model shown in Figure 3.3 with the virtual force as input and the velocityoutput was discretized to the z domain with a time constant Ts = 0.05 seconds. Thediscrete model was then transformed into state space formx(k+1) = Gx(k)+Hu(k)+w(k) (3.23)y(k) = Cx(k)+Du(k)+v(k) (3.24)The resulting state vector, x(k) and the input vector u(k) were R5?1 vectors for theroll and pitch dynamics. The terms w(k)?N(0,Rw) and v(k)?N(0,Rv) representthe process noise and measurement noise, which were assumed normally distribut-ed with zero mean and covariance Rw and Rv, respectively. It is important to notethat the states did not necessarily represent physical quantities after the transfor-mation from the transfer function form. An observer was necessary to estimatethese states. The system model was assumed to be decoupled, therefore a separatecontroller for the pitch and roll response was designed. The designed controllerswere identical in the control method used, but differed in the numerical value oftheir parameters. The state space matrices were not stacked into a 10? 10 matrixfor clarity and to separate the controllers.The roll response state space representation wasx? (k+1) =????????2.786 ?2.775 1.168 ?0.199 0.0211 0 0 0 00 1 0 0 00 0 1 0 00 0 0 1 0????????x? (k)+????????10000????????u? (k)(3.25)y? (k) =[?0.0003 0.0019 0.0169 ?0.0140 ?0.0023]x? (k) (3.26)711.5 2 2.5 3 3.5 4?0.25?0.2?0.15?0.1?0.0500.05Time (sec)Roll Angle (Rad)  Frequency ModelSecond Order ModelInputMeasured AngleFigure 3.8: Comparison of modelled and measured roll response. 
The pa-rameters for the second order model are shown in Table 3.1.10?2 10?1 100 101 10200.51Magnitude (y/u)  10?2 10?1 100 101 102?300?200?1000Frequency, (rad/s)Phase, (deg)Frequency ModelFrequency Sweep DataSecond Order ModelFigure 3.9: Bode diagram showing the identified second order and frequencymodel and the frequency sweep data for the roll dynamics.724 6 8 10 12 14 16 18 20 22 24?0.25?0.2?0.15?0.1?0.0500.050.10.150.20.25Time (sec)Roll Angle (Rad)  InputMeasuredSystem ModelFigure 3.10: Comparison of the identified frequency model response with themeasured response to a sample frequency sweep for the roll dynamics.The pitch response state space matrix differed from the roll state space in numericalvalues. The pitch state space wasx? (k+1) =????????2.914 ?3.097 1.451 ?0.301 0.0321 0 0 0 00 1 0 0 00 0 1 0 00 0 0 1 0????????x? (k)+????????10000????????u? (k)(3.27)y? (k) =[0.0002 ?0.002 0.019 ?0.013 ?0.003]x? (k) (3.28)The state space systems for the pitch and roll response are both observable andcontrollable. Observability is crucial for this system, since the states do not rep-resent physical quantities and have to be reconstructed from the output. Since thesystem is observable, it is possible to proceed with the state observer design. Theimplication is that had the system not been observable, then some states would notbe reconstructible given the measured output.733.3.3 Kalman FilterThe Kalman Filter is one of the most used filters in robotics and control[113].As it is a Gaussian filter it requires the noise to be normally distributed with zeromean (white noise). The Kalman filter recursively estimates the states in two steps:the prediction and the correction steps. In the prediction step a system model isused to predict the value of the state, given previous observations. In the correctionstep, a measurement along with a recursively computed Kalman gain is used tocompute the optimal estimate. The Kalman filter also has a metric of performance,contained in the term Pk, which is the error covariance and which is calculated atevery step.Due to the high signal to noise ratio of the velocity measurement, the require-ment to reconstruct non-physical states from system output, and the occasionalmissing measurement of the UAV velocity, a Kalman filter was implemented toestimate the UAV velocity at every step. The filter will prevent temporary droppedvelocity measurement packets from destabilizing the velocity controller. Followingthe feedback linearization, the system model was linear and a traditional Kalmanfilter design process was used. The process and measurement noise variances Rwand Rv were tuned through a combination of simulation and implementation result-s. The Kalman filter equations follow:Kalman Filter Prediction Step1. Project the state ahead: x??k = Gxk?1 +Huk?1 (3.29)2. Project the error covariance ahead: P?k = GPk?1GT +Rw (3.30)Kalman Filter Correction Step1. Compute the Kalman gain: Kk =P?k CTCP?k CT +Rv(3.31)2. Update the estimate with measurement zk: x?k = x?k +Kk(zk?Cx?k ) (3.32)3. Update the error covariance: Pk = (I?KkC)P?k (3.33)The term x??k is the projection of the state given the system model and the informa-tion at the previous time step. The matrix P?k 1 is the a priori estimate of the error74covariance and the matrix Pk is the a posteriori estimate of the error covariance.3.3.4 Linear Quadratic Gaussian ControllerThe field of optimal control aims to control a system while minimizing a pre-determined cost function. 
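Before continuing with the regulator, a minimal Matlab sketch of the filter recursion in Equations 3.29 to 3.33, written as a single update function. The interface is illustrative; in the implementation the update runs once per received Navdata sample, and a dropped packet is handled by reusing the previous value, as described later.

```matlab
function [xhat, P] = kalmanStep(xhat, P, u_prev, z, G, H, C, Rw, Rv)
% One prediction/correction cycle of the Kalman filter for the 5-state model.
% G, H, C, Rw, Rv are the system and noise matrices defined above.

% Prediction (time update), Eq. 3.29-3.30
xpred = G*xhat + H*u_prev;               % project the state ahead
Ppred = G*P*G' + Rw;                     % project the error covariance ahead

% Correction (measurement update), Eq. 3.31-3.33
K    = (Ppred*C') / (C*Ppred*C' + Rv);   % Kalman gain
xhat = xpred + K*(z - C*xpred);          % update estimate with measurement z
P    = (eye(size(P,1)) - K*C) * Ppred;   % update error covariance
end
```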
The linear quadratic regulator controls a linear systemand minimizes the quadratic cost function or performance index J. One of the mainreasons to use a quadratic optimal control scheme is that it produces an asymptot-ically stable control system[74]. The discrete cost function J associated with thelinear quadratic regulator isJ =12xT (N)Sx(N)+12N?1?k=0[xT (k)Qx(k)+uT (k)Ru(k)](3.34)where S,Q and R being positive, definite or semi-definite Hermitian matrices thataccount for the importance of the final state, the intermediate state and the controlinput respectively. For the purpose of the quadrotor controller, a steady-state linearquadratic regulator was chosen, with the matrix S that accounts for the final stateset to zero.A special case of the LQR is the linear quadratic Gaussian controller (LQG),in which a Kalman filter is used with an LQR. The Kalman filter provides stateestimates, while the LQR is asymptotically stable. The states x? and x? in thespace equations 3.25 , 3.26 , 3.27 , 3.28 did not represent measurable quantities.Therefore the LQR was applied to the estimates of the state denoted by x?? and x?? ,which were estimated by the Kalman filter (see Section 3.3.3).The cost function to be minimized was reduced toJ =12??k=0[x?T (k)Qx?(k)+uT (k)Ru(k)]. (3.35)Equation 3.35 is solved by iteratively solving the steady-state, time invariant RicattiequationP = Q+GTPG?GTPH(R+HTPH)?1HTPG (3.36)with P being a n?n Hermitian matrix, with n = 5 for the roll and pitch controllers.75LQG with Pre-CompensationThe linear quadratic Gaussian, while asymptotically stable will result in asteady state error in the output[74]. An integral term could be used to overcomethis steady state error. The integral term, additionally overcomes disturbances andaddresses the unmodelled wind resistances. This controller architecture was ini-tially considered, it was however found that the integral term reduces the systemspeed significantly. The rise time of the system with the integral controller was inthe order of 1.5 to 2 seconds, with a significant phase lag in sinusoidal input. Suchsystem performance is not desirable for visual servoing, as small errors induced bywind turbulence will take a long time to correct. The design procedure for LQRwith integral term design has been added to the Appendix and can be found inSection B.2.The alternative used in this thesis is a pre-multiplication of the system inputr(k) by a pre-compensation constant Nr and Np for the roll and pitch, respectively.Figure 3.2 shows the controller setup with the pre-compensation constants. Theconstants Nr and Np were tuned in simulations and then manually adjusted in ex-periments. The choice of a fixed constant pre-compensation constant makes thesystem less adaptable to unforeseen wind-disturbances and may result in steady-state errors in the output velocity. Steady state errors in the output velocity willonly affect the rate of convergence, due to the choice of visual servoing controlscheme. Equation 2.23 represents the input velocity into the controller to reducethe image based error (shown again clarity)Vc =???L?1s (s? s?) (3.37)The output velocity of the velocity controller for the roll direction isNrr(k) = vy, (3.38)where vy = [0 1 0]TcVc and Tc is the transformation matrix between the cameraand the UAV body reference frame. Any errors in the quantity Nr will show upas an over or under-gain in the ? term that controls the rate of convergence of thesystem in control law Equation 3.37. 
Steady state errors in the velocity output will76affect the rate of convergence, but not system stability.3.4 Simulation SetupThis section will use the identified system model and the designed controllerto test the velocity controller and to tune the calculated parameters. A MatlabSimulink program was created to test the model and the designed controller beforethe implementation phase. The simulation allowed for tuning and adjustment of thecontroller parameters without endangering the equipment. The parameters wereadjusted manually in experiments after the simulation provided the controller withsafe starting values.Given the simplifying assumptions from the feedback linearization (see Sec-tion 3.2) regarding the wind resistance and disturbances, the controller generatescontrol inputs that will be lower than required in implementation. These steadystate errors can be adjusted by changing the pre-compensation gains Nr and Np. Inaddition, the controller will be tuned to behave more aggressively, with a higherovershoot in order to reach the set-point faster during implementation.An unknown variable was the total thrust T =mg+Fz generated by all 4 motorsin the z direction. The true values of the full thrust is inaccessible due to the closednature of the on-board controller. The helicopter on-board controller maintains theheight above ground at a constant value. The best guess estimate for the thrust wasassumed to be T? = mg. This assumption was justified by the fact that the totalthrust varies around that value. Figure 3.11 shows the controller block diagramand Figure 3.12 shows an overview of the quadrotor model. The simulated velocityresponse and the simulated control input will be shown and compared to actualquadrotor helicopter response in Section 3.5.3.5 Implementation ResultsThis section will present an implementation of the proposed controller to testthe velocity controller performance and the agreement with simulation results. Thehardware used to test the controller was an ARDrone quadrotor helicopter. Theplatform was chosen for its low cost as well as its on-board attitude controller. TheARDrone also features an exterior foam shell that extends around the propeller77offset1-0.5 offset-0.5delay31/zdelay21/zdelay11/zdelay1/zPulse PulseNrNrNpNpKlqr, RollK*u Klqr, PitchK*uKalman Filter Rolly(k) u(k)xhat(k+1)yhat(k+1)Kalman Filter Pitchy(k) u(k)xhat(k+1)yhat(k+1)Feedback LinearizationFxs Fys Tthetas phisfcnConstant1m*9.81ARDrone Modeltheta_sphi_svx vy theta phir(t) r(t)Figure 3.11: General overview of the quadrotor controller simulated in Mat-lab Simulink.phi4theta3 vy2vx1td_th1td_thmass, roll1/mmass, pitch1/mRollTz_ph.s+1Tw_ph^2.s  +2*Tw_ph*Zeta_phs+12Rigid Body Dynamicsthetaphi TFx FyuavPitchTz_th.s+1Tw_th^2.s  +2*Tw_th*Zeta_ths+12 Massm*9.81Kp_thKp_th Kp_phKp_phIntegrator11 sIntegrator1 sphi_s2theta_s1Figure 3.12: Simulation model used in the Simulink simulation. An overviewof the controller is shown in Figure 3.11. The rigid body dynamicsuse Equation 3.9 to 3.11 calculate the forces acting on the quadrotorhelicopter.78blades and reduces equipment damage during crashes.A chart showing the flow of information for the velocity controller tests isshown in Figure 3.13. The quadrotor connects to the ground station computer overthe WiFi network. The ARDrone application programming interface (API) is usedto establish communication to the quadrotor and has been modified to export nav-igation and image data to Matlab. 
The API is used to establish communicationand decode image and navigation data, but beyond the initialization does not sendany information to the ARDrone. Matlab monitors the navigation data (NavData)consisting of the attitude angles, ? , ? , ? , the velocities vx, vy and the height aboveground z. An implementation of the proposed controller has been written in Matlaband is used to maintain communication and to send appropriate navigation com-mands to the quadrotor helicopter. By linking the communication through Matlabit can be ensured that the API does not interfere with the Matlab controller.?????? ??????? ? ?????? ARDrone  API  ???? Veloc i t y Controll er  ??????? ???? ??????? Heli c opter  ?????? Figure 3.13: Block diagram of the experimental setup for the velocity controlof the ARDrone.3.5.1 Roll ResponseThis section compares the quadrotor response to a velocity step input. Thequadrotor was brought to a hover and given two seconds to allow the transient79response from take-off to decay. The quadrotor was then commanded a velocity ofvy = 0.5m/sec, purely due to roll input.The parameters for the Kalman filter estimation of the velocity were tuned fromsample data withRv = 0.001 ,Rw =????????0.1 0 0 0 00 0.1 0 0 00 0 0.1 0 00 0 0 0.1 00 0 0 0 0.1????????.The parameters for the LQR controller that were used to calculate the gain KLQR,?wereR? = 20 ,Q? = 0.0001????????3 0 0 0 00 0 0 0 00 0 1 0 00 0 0 1 00 0 0 0 2????????.The parameter Rv accounts for the energy expenditure of the control signal. It wasgiven a higher rating than the cost of the intermediate states x, represented by thecolumns of Q? . The pre-compensation gain Nr scaled the input to remove steady-state error, with Nr,sim and Nr,imp representing the simulation and implementationgain, respectively. The resulting gains wereNr,sim = 2.55 ,KLQR,? =[0.1030 ?0.1798 0.0980 ?0.0176 0.0020]Nr,imp = 3.00The best measured sample response velocity is shown in Figure 3.14. The mea-surements that suddenly drop to zero (at t ? 3.9s, t ? 5.9s, t ? 6.1s) stem from acommunication issue between the ARDrone, the API, and the Matlab navigationscript. The controller was commanded to treat this measurement as the previousvalue before the communication loss. The Kalman filter is activated at a time of 3seconds and the state estimate converges to the measurements after approximately3 time steps. The simulated ideal response shows more oscillations than the actual802 3 4 5 6 7 8 9 10?0.2?0.100.10.20.30.40.5Time, [seconds]Velocity v y [m/s]  Input vyMeasured vy, Nr=3Filtered vySimulated vy, Nr=2.55Figure 3.14: Velocity step response of the quadrotor in the y direction as aresult of rolling movement.2 3 4 5 6 7 8 9?0.3?0.2?0.100.10.20.30.4Time, [seconds]Roll Input  Actual InputSimulated InputFigure 3.15: Comparison of the simulated input and the actual input into thesystem in the y direction.812 3 4 5 6 7 8 9?10?8?6?4?2024681012Time, [seconds]Roll, ? [degrees]  Actual ?Simulated ?Figure 3.16: Comparison of the simulated and the actual roll angle of theUAV during a step input velocity command.measured values. The implemented pre-compensation gain Nr,imp was larger thanthe simulated gain, due to nonmodelled wind-resistance in the simulated model.The 90% rise time is approximately 0.75 seconds. The system response demon-strates an overshoot when slowing down from 0.5m/sec to 0 m/sec. 
This overshootcan be attributed to wind-resistance, which aids in slowing down the quadrotor andresults in too much control effort from the velocity controller, thus causing an over-shoot. Figure 3.15 underlines the limitations of the wind disturbance assumption.The simulated input is zero to keep the velocity constant, which is not the case. Inthe actual results, the quadrotor maintains an angle in order to remain at a velocityto overcome wind resistance. The measured and simulated angles are shown inFigure 3.16. The maximum measured angle was approximately 12?, which wouldhave resulted in a small error had a small angle approximation controller been used.3.5.2 Pitch ResponseThis section shows the response of the controller to a step velocity input inthe x direction (forward-backward motion) and details the parameters used. The82parameters for the Kalman filter estimation of the velocity were:Rv = 0.001 ,Rw =????????0.1 0 0 0 00 0.1 0 0 00 0 0.1 0 00 0 0 0.1 00 0 0 0 0.1????????.The parameters for the LQR controller that were used to calculate the gainKLQR,? wereR? = 25 ,Q? = 0.0001????????0 0 0 0 00 0 0 0 00 0 0 0 00 0 0 1 00 0 0 0 0.1????????.The parameter R? accounts for the energy expenditure of the control signal. It wasgiven a higher rating than the cost of the intermediate states x. The resulting gainswereNr,sim = 1.4 ,KLQR,? =[0.0561 ?0.1062 0.0651 ?0.0148 0.0018]Nr,imp = 1.8Figure 3.17 shows the velocity response of the quadrotor to a velocity inputof 0.5 m/s. The Kalman filter is switched turned on at a time of t = 3 secondsand quickly converges to the measured values. Figure 3.18 shows the simulatedand actual controller input into the system. The actual controller input was higherdue to wind resistance forces that the quadrotor needs to overcome. The input wassimilar to Figure 3.15. The pitch response matches the simulated response betterthan the roll response. The control effort to accelerate is larger than the controleffort for deceleration, due to the air-resistance effects, which were considered aspart of the model. Figure 3.19 shows the pitch angle during the experiment andunderlines that more control effort is required to accelerate the UAV (t = 2 to t = 3seconds) than to decelerate (t = 7 to t = 8 seconds). Future work could consider832 3 4 5 6 7 8 9 10?0.100.10.20.30.40.5Time, [seconds]Velocity v x [m/s]  Input vxMeasured vx, Np=1.8Filtered vxSimulated vx, Np=1.4Figure 3.17: Velocity step response of the quadrotor in the x direction as aresult of pitching movement.a model of this resistive force to improve the controller performance and reducediscrepancies the simulated and observed responses.3.5.3 Combined Pitch and Roll ResponseThis section presents an example of combined pitch and roll movement. Thefeedback linearization scheme combines the input in the pitch and roll directionsto calculate the attitude angles given a desired force. In this section the decouplingassumption between the pitch and roll dynamics is investigated. A step input of 0.4m/s in the x and y directions was sent to the UAV. The velocity response is shown inFigure 3.20 and 3.21 for the x and y directions, respectively. The velocity responsefor both the x and y directions is more turbulent for the combined case, than theindividual cases presented in Section 3.5.1 and Section 3.5.2. Potential reasonsfor a degraded combined performance could include minor coupling between pitchand roll dynamics and the simplifying assumption that the thrust T = mg. 
Thecontroller has shown to be capable of combined velocity response, though it doescontain oscillations about the setpoint.842 3 4 5 6 7 8 9?0.2?0.15?0.1?0.0500.050.10.150.20.25Time, [seconds]Pitch Input  Actual InputSimulated InputFigure 3.18: Comparison of the simulated pitch input and the actual pitchinput into the system for the movement in the x direction.2 3 4 5 6 7 8 9?6?4?20246Time, [seconds]Pitch, ? [degrees]  Actual ?Simulated ?Figure 3.19: Comparison of the simulated and the actual pitch angle of theUAV during a step input velocity command.852 3 4 5 6 7 8 9 10?0.5?0.4?0.3?0.2?0.100.1Time, [seconds]Velocity v x [m/s]  Input vxMeasured vx, Np=1.8Filtered vxSimulated vx, Np=1.4Figure 3.20: Velocity step response of the quadrotor in the x direction as aresult of pitching movement for combined x and y motion.2 3 4 5 6 7 8 9 10?0.5?0.4?0.3?0.2?0.100.1Time, [seconds]Velocity v y [m/s]  Input vyMeasured vy, Nr=3Filtered vySimulated vy, Nr=2.55Figure 3.21: Velocity step response of the quadrotor in the y direction as aresult of rolling movement for combined x and y motion.86Chapter 4Visual Servoing Implementationand ResultsThis chapter presents the results of the visual tracking of a stationary and mov-ing target with an ARDrone quadrotor helicopter. The experimental setup and theflow of information is outlined in Section 4.1. Experimental results on a stationary2 dimensional target are presented in Section 4.2. Results for visual servoing witha moving 3 dimensional target are presented in Section 4.3.4.1 Implementation SetupThe ARDrone quadrotor helicopter was used for the testing of the proposedcontroller. The ARDrone hardware contains an on-board attitude controller, WiFicommunications, a 3-axis gyroscope, a 3-axis accelerometer a magnetometer and2 sonar sensors. The ARDrone API1 was modified to establish communicationand save images (ImageData) and navigation data (Navdata) on the ground stationcomputer. These images were retrieved by the colour based object detection sub-program and the targets in the images were detected. The location of the detectedtargets was passed to the visual servoing controller and the control input signalwas calculated. This input was passed to the velocity controller. The current UAVstate was filtered and estimated from the navigation data that was received from1See http://projects.ardrone.org/ for details.87?????? ??????? ? ?????? Vi sual Servoing  Controll er  ARDron e  API  ???? Veloc i t y Controll er  ??????? ???? ??????? Heli c opter  Im age Storage  ????? ???? ?????? ????? ???? Ob ject Detec tio n  Figure 4.1: Overview of the controller block diagram of the system with theARDrone helicopter.Feedback  Linearization  ???? Kalman  Filter  N  - + ????? ????? ???? ???? Image Feature Extraction  UAV - + IBVS  Control Law  ????? ? ?? ? Figure 4.2: System diagram of the visual servoing and velocity controller.the ARDrone API. The velocity controller and the target detection in the imageoperated in series. The object detector, velocity and visual servoing controlleroperated in the same Matlab thread at the same frequency of approximately 20-25 Hz. The velocity controller sends commands over the network directly to theon-board controller of the ARDrone and ensured that the communication did nottime out. An overview of the flow of information is shown in Figure 4.1. 
A fullsystem diagram that includes the visual servoing and velocity controller is shownin Figure 4.2.882.6 m 0.75 m (a) Overhead view of quadrotor starting positionwith dimensions.(b) Final desired feature position.Figure 4.3: Initial location of the UAV and desired final feature position ofthe target.4.2 Stationary Target ExperimentsThe first two tests used a stationary, 2 dimensional target for quadrotor heli-copter visual servoing. The two experiments show an average and a ?best? ob-served performance of the controller. The target consisted of one red and one bluecircle with a center-to-center distance of 15cm. The initial position required theUAV to translate in all three degrees of freedom, with the final desired featureposition extracted form a target image (see Figure 4.3). The UAV started on theground. The visual servoing lasted for 30 seconds for experiment #1 and 50 sec-onds for experiment # 2, during which the controller positioned the UAV in frontof the target.4.2.1 Experiment #1 and #2 ResultsThere are two phases during the visual servoing with a quadrotor helicopter:the initial reduction in error, followed by the maintenance of the UAV position infront of the target. The initial reduction works well for large and small values ofthe convergence rate ? . If the velocity input is too small compared to the noise invelocity measurements, the UAV will experience drift and not achieve the desiredsmall corrections. This drift will result in positioning errors, which take a few890 5 10 15 20 25 30?0.2?0.100.10.20.30.40.5Time, [s]Velocity V x [m/s]  Vx measuredVx filteredVx input(a) x-velocity (front-back)0 5 10 15 20 25 30?0.100.10.20.30.4Time, [s]Velocity V y [m/s]  Vy measuredVy estimateVy input(b) y-velocity (left-right)Figure 4.4: Velocity response for the x and y directions relative to the UAVreference frame {A}. Shown are the measured velocities, Kalman fil-tered estimates and the input velocities. Experiment #1.0 5 10 15 20 25 30?10?5051015Time, [s]Feature Error (Camera Frame)  x?errory?errorz?error(a) Image feature error e = s1? s?0 50 100 150 200 250 300050100150200uv(b) Target trajectory in the image.Figure 4.5: Image based error criteria (a) and the target trajectory in the im-age (b). The coordinate convention is in the camera reference frame.Experiment #1.900 5 10 15 20 25 30?100?80?60?40?20020406080100Time [sec]Pixel Error  x?Error, redy?Error, redx?Error, bluey?Error, blueFigure 4.6: Pixel error in the image x and y directions as a function of timefor experiment #1.0 5 10 15 20 25 30 35 40 45 50?0.2?0.100.10.20.30.4Time, [s]Velocity V x [m/s]  Vx measuredVx filteredVx input(a) x-velocity (front-back)0 5 10 15 20 25 30 35 40 45 50?0.1?0.0500.050.10.150.20.250.30.350.4Time, [s]Velocity V y [m/s]  Vy measuredVy estimateVy input(b) y-velocity (left-right)Figure 4.7: Velocity response for the x and y directions relative to the UAVreference frame {A}. Shown are the measured velocities, Kalman fil-tered estimates and the input velocities. Experiment #2.910 5 10 15 20 25 30 35 40 45 50?10?5051015Time, [s]Feature Error (Camera Frame)  x?errory?errorz?error(a) Image feature error e = s1? s?0 50 100 150 200 250 300050100150200uv(b) Target trajectory in the image.Figure 4.8: Image based error criteria (a) and the target trajectory in the im-age (b). 
The coordinate convention is in the camera reference frame.Experiment #2.0 5 10 15 20 25 30 35 40 45 50?100?80?60?40?20020406080100Time [sec]Pixel Error  x?Error, redy?Error, redx?Error, bluey?Error, blueFigure 4.9: Pixel error in the image x and y directions as a function of timefor experiment #2.920 5 10 15 20 25 30 35 40 45 50?0.2?0.100.10.20.30.40.5Time, [s]v z, input(a) z-velocity (up-down).0 5 10 15 20 25 30 35 40 45 5001002003004005006007008009001000Time, [s]Z, [mm](b) Height above ground z.Figure 4.10: Controller input to control height above ground in the z directionrelative to the UAV reference frame {A}. Experiment #2.0 5 10 15 20 25 30 35 40 45 50?0.14?0.12?0.1?0.08?0.06?0.04?0.0200.020.04Time, [s]Input / Angle [rad]  Input Pitch, ??UAV Pitch, ? [rad](a) Negative pitch input and resulting UAV pitch? (affects vx) during visual servoing.0 5 10 15 20 25 30 35 40 45 50?0.1?0.0500.050.10.150.2Time, [s]Input / Angle [rad]  Input Roll, ?UAV Roll, ? [rad](b) Roll input and resulting UAV roll ? (affectsvy) during visual servoing.Figure 4.11: Attitude angles and normalized input into the controller for ex-periment #2.93seconds to overcome. An increase in ? will result in a larger control effort tomaintain the final position, with reduced drift. A convergence rate of ? = 1 and? = 1.1 was selected for the first and second experiment, respectively. Large valuesof ? result in large initial velocities, due to the exponential decay of the imagebased error function. Large velocities potentially cause motion blur and amplify theproblems experienced due to dropped communication packets. The input velocityfor these experiments was limited to 0.3m/sec (see Figure 4.7(a)).The pixel trajectory of the target is shown in Figure 4.5(b) and 4.8(b). Thereis more variation in the image x-direction than in the image y-direction of thetarget positions. Discontinuities in the trajectory correspond to images in whichthe target was not detected, potentially due to motion blur. The initial straight linereduction in error is due to the quadrotor taking off and the following stabilization.This is reflected in the initial reduction in the feature y-error (Figure 4.5(a) and4.8(a)) within the first 5 seconds. The helicopter UAV reaches its desired positionafter approximately 8 seconds and begins the maintenance of the UAV positionin front of the target. The time to reach the desired position can be reduced byincreasing the limits on the UAV velocity. Following take-off, the error is reducedand the UAV keeps its position in front of the target with small deviations. Thesedeviations can be attributed to the coupled nature of the quadrotor velocity andattitude dynamics. To reduce the error in the x or y direction the UAV attitude mustbe changed, which induces errors in the target position in the image.Figure 4.10 shows the velocity input and height above ground (HAG) mea-surements of the UAV for experiment #2. HAG is measured by the on-board sonarsensors. The input into the system does not correspond to a velocity, but is a nor-malized input variable on [?1,1]. The majority of the altitude correction occursshortly after take-off (t = 2 seconds). Figure 4.10(b) shows the missing measure-ments in the altitude due to lost packets. The visual servoing controller is imagebased and does not use the altitude measurements in the control loop, which showsthe robustness of IBVS to positioning sensor accuracy.Figure 4.11(a) and 4.11(b) show the inputs to the attitude controller and result-ing attitude angles. 
The controller input was a normalized input on [−1, 1]. The maximum measured inclination in the pitch and roll directions was approximately 7° and 4°, respectively. The maximum inclination occurred at the initial reduction of the large image error. The attitude angles experienced in these experiments were small enough to still fall under the small angle assumption, commonly used for nonlinear system controller design.

The measured, estimated, and desired velocities for the quadrotor helicopter are shown in Figure 4.4 and Figure 4.7. The large variation before t = 2 seconds is attributed to measurement noise during take-off. The noise magnitude in the measured values was on the order of 0.05 m/sec. This noise limits the performance of the feature error reduction of the visual servoing control scheme. There is also a visible lag between the velocity input and the actual UAV velocity. This lag is partially responsible for the variation of the feature trajectory in Figures 4.5(b) and 4.8(b).

A major contributing factor to the visual servoing performance degradation is the variation in the time steps, caused by communication delays or image processing. Figure 4.19(b) shows an example of the time step length as a function of time. There are intermittent delays that reduce the effectiveness of the control scheme. The helicopter creates its own turbulent air, which affects the lift and induces position variations. Small air movements in the room cannot be ruled out from creating UAV displacement. The large profile of the UAV's external foam shell makes it susceptible to external wind forces, due to its area. The response lag attributed to the rise time of the velocity controller affects the UAV's ability to deal with random wind disturbances during visual servoing. Figure 4.9 shows the image pixel error for the red and blue target as a function of time. The response is oscillatory in the x-direction and much smoother in the y-direction, as the targets are parallel to the ground. During the maintenance of the position phase, the maximum error is on the order of 20 pixels. Given an image width of 320 pixels, this maximum error is roughly 6% of the screen size.

Experiment #2 experienced a significant number of dropped Navdata and ImageData packets during visual servoing. The most significant drop in communication occurs at t = 40 seconds, during which the controller loses communication with the ground station computer for approximately 1 second. Overall it is difficult to pinpoint the exact cause of the feature deviations in the images (Figures 4.5(b) and 4.8(b)). A small amount can be attributed to the detection of the circles and the calculation of the circle center. Noise in the velocity measurement is a factor, and the lag between the input and output velocity affects the final position of the features. Dropped images during the wireless communication also contribute to feature variations. All these factors work together to influence the deviation of the feature location in the image.

4.3 Moving Target Experiments

This section details the implementation of the visual servoing control scheme on the quadrotor UAV with a 3-dimensional moving target. The target was a red and blue sphere attached to a Quanser Q-bot. The Q-bot is a differential drive mobile robot based on the Roomba architecture. Three experiments were performed. The first experiment investigated the feature error that can be expected for a stationary 3D target (Section 4.3.1).
This performance is then used as the baseline for comparison to later experiments with a moving target. The second experiment considered a moving target and used a simple proportional visual servoing control law (Section 4.3.2). The last experiment included the compensation for the feature error induced by the moving target, which was derived in Section 2.5 and the results for which are presented in Section 4.3.3. The desired feature position for both the stationary and moving target is shown in Figure 4.12(a). For all cases, only the translation control was considered and the helicopter UAV started from rest on the ground.

Figure 4.12: Initial location of the UAV and desired final feature position of the target. (a) Desired feature position for the 3-dimensional target; (b) initial UAV position with targets properly detected.

4.3.1 3D Stationary Target

The first test involved a static ground robot. The static test was used as a benchmark to compare the pixel error for the 3-dimensional target to the pixel error in the moving target tests. The initial error was smaller than the 2-dimensional target error and the UAV was able to reduce it with minor oscillations. Figure 4.13 shows the measured, input and Kalman filtered velocities during visual servoing. The large variations during the first 2 seconds correspond to frames where the target was not detected, which is visible as the discontinuities in Figure 4.14(b). The discontinuities occur during take-off, where fast motions are involved. The feature error reduction as a function of time and the feature trajectory are shown in Figure 4.14. The feature trajectory includes a straight-line segment due to the take-off. The visual servoing feature trajectory approximates a straight line towards the desired position, following the straight-line trajectory induced by UAV take-off.

Figure 4.13: Measured, input and Kalman filtered velocity responses for the x and y directions relative to the UAV reference frame {A} for a 3-dimensional, static target. (a) x-velocity (front-back); (b) y-velocity (left-right).

Figure 4.14: Image based error criteria (a) and the target trajectory in the image (b). The target was static and 3-dimensional.

4.3.2 3D Moving Target, No Compensation

The ground robot was positioned in front of the UAV, facing perpendicular to the UAV's line of sight. The objective was to keep the target in the desired location in the image (see Figure 4.12(a)). The UAV took off and was given 10 seconds to position itself in front of the differential drive robot. At approximately t = 10 seconds the robot was given a velocity of 0.25 m/sec for about 10 seconds. The red target was placed facing the front and the blue target was placed facing the back of the ground robot. The UAV reduced the error using mainly roll control.

The purely proportional visual servoing control law does not consider target motion, and a steady-state error was to be expected. Figure 4.16 shows the steady state error, which is approximately 60 pixels in the image x-direction.
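Why a purely proportional law must lag a moving target can be seen from a standard first-order IBVS argument (given here for illustration; it is not reproduced from the thesis). If the target motion contributes a feature drift ∂e/∂t, the closed-loop error obeys approximately

\[
\dot{e} \approx -\lambda\, e + \frac{\partial e}{\partial t}
\qquad\Longrightarrow\qquad
e_{ss} \approx \frac{1}{\lambda}\,\frac{\partial e}{\partial t},
\]

so the steady-state offset scales with the target-induced feature velocity and shrinks only as λ is increased, which is consistent with the roughly 60-pixel lag observed here and motivates the compensation used in Section 4.3.3.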
The velocity response is shown in Figures 4.15(a) and 4.15(b). The x-velocity (forward and back motion) of the UAV is below 0.1 m/sec and maintains its distance to the target. The y-velocity (left and right motion) shows the initial reduction of the error while the ground robot is stationary. The UAV achieves a velocity of approximately 0.15 m/sec after the ground robot is set into motion.

It is evident that the UAV lags behind the target and that the controller is unable to account for the presence of a moving target, despite the target's low velocity. This issue can be partially addressed by increasing the rate of convergence λ, which will reduce the size of the steady state error, but may also make the system more unstable. One alternative control law is one where the convergence parameter is a function of the error and increases as the target moves closer to the image edge. This alternate control law will still result in a steady state error, as the features associated with the moving target will converge towards a local minimum.

4.3.3 3D Moving Target, With Compensation

Compensation was added to reduce the steady state error for a moving target. For the performed experiment, the forgetting factor was f = 0.6 and the integral gain was Ki = 0.6. The target was at rest for the first and last 10 seconds of the experiment and was moving for approximately 10 seconds in between. The target was moving in the image x-direction, perpendicular to the UAV camera axis, with a speed of 0.25 m/sec.

Figure 4.15: Measured, input and Kalman filtered velocity responses for the x and y directions relative to the UAV reference frame {A} for a 3-dimensional, moving target. (a) x-velocity (front-back); (b) y-velocity (left-right).

Figure 4.16: Image based error criteria (a) and the target trajectory in the image (b). The 3-dimensional target was moving at 0.25 m/sec.

The x and y velocities in the UAV reference frame are shown in Figures 4.17(a) and 4.17(b). The mean x velocity was approximately 0 m/sec, with variations on the order of 0.05 m/sec to maintain the UAV position. The y-velocity between t = 10 and t = 20 seconds was approximately 0.25 m/sec with an oscillation amplitude of 0.05 m/sec. These oscillations can be partially attributed to the communication delays (see Figure 4.19(b)). The error in pixels can be seen as a trajectory in the image in Figure 4.18(b), or in Figure 4.19(a) as a function of time. The target position error is on the order of 40 pixels in the x-direction, which is approximately twice the error variation for a stationary target and presents an error reduction when compared to the uncompensated results.
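A minimal sketch of how such an integral term with a forgetting factor can be implemented is given below, assuming a simple discrete-time form; the exact formulation is the one derived in Section 2.5, and the class and variable names here are illustrative only.

```python
import numpy as np

class MovingTargetCompensator:
    """Integral correction of the feature error with a forgetting factor.

    The discounted sum of past feature errors approximates the feature
    drift caused by a slowly moving target; it is added to the
    proportional IBVS velocity command.
    """
    def __init__(self, k_i=0.6, forgetting=0.6):
        self.k_i = k_i          # integral gain K_i
        self.f = forgetting     # forgetting factor f in [0, 1]
        self.acc = None

    def correction(self, e, dt):
        if self.acc is None:
            self.acc = np.zeros_like(e)
        # Older error samples are discounted so the term follows a changing
        # target velocity instead of integrating stale information.
        self.acc = self.f * self.acc + e * dt
        return self.k_i * self.acc
```

The forgetting factor also makes the term decay once the target stops, which avoids the wind-up behaviour of a pure integrator.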
This error can further be reduced by the appropriate tuning of the integral gain Ki and forgetting factor f.

Figure 4.17: Measured, input and Kalman filtered velocity responses for the x and y directions relative to the UAV reference frame {A} for a 3-dimensional, moving target and a controller with moving target compensation. (a) x-velocity (front-back); (b) y-velocity (left-right).

Figure 4.18: Image based error criteria (a) and the target trajectory in the image (b). The target was moving and the velocity controller included compensation for the moving target.

Figure 4.19: Target feature error and time steps during the visual servoing process. (a) Pixel error of target features as a function of time; (b) time step length Ts.

Chapter 5

Adaboost.MRT: Colour Based Object Detection by Ensemble Learning

The main contribution of the work presented in this chapter is the introduction of Adaboost.MRT and its application to colour-based object detection. The algorithm is outlined in Section 5.1. The proposed algorithm is tested in benchmarks against the Adaboost.RT algorithm and a single iteration of the base learning algorithm for the single and multivariate output case in Section 5.2. Adaboost.MRT is applied to colour-based object detection in Section 5.3.

5.1 Adaboost.MRT Algorithm

The proposed Adaboost.MRT is capable of estimating R-sized output vectors y = (y1, y2, . . . , yR) and is an extension of Adaboost.RT for R ≥ 1. Let R denote the number of output variables and r denote the index of an output variable within the vector. The proposed method trains T different learners in series and assigns each learner a weight β_{t,r} corresponding to its accuracy in predicting each output variable (where t denotes the learner iteration number). Section 5.1.1 details the Adaboost.MRT procedure (see Algorithm 1). A discussion of the threshold parameter φ selection is given in Section 5.1.2.

5.1.1 Adaboost.MRT Overview

Initialization Phase

Adaboost.MRT is passed m training instances and selects N (where N ≤ m) of these for each learning iteration t (where t = 1 . . . T). The weights in D_t (an m × 1 vector) are initialized as a uniform distribution and correspond to the sampling weight of each instance in S. This distribution D_t is updated at each iteration to assign more weight to the misclassified predictions (or "hard" examples), making them more likely to be selected for the next iteration. Hard examples in the training data will be selected more frequently than samples that are accurately predicted, which generates predictors in the ensemble that are specialized on hard examples. The output error distribution D_{y,t} is an R by m vector, which contains the prediction accuracy of each output for all m training instances. The output error distribution is an extension of the Adaboost.RT algorithm to the multivariate output case and is initialized as 1/m.
At the initialization phase the algorithm receives a threshold parameter vector φ = [φ1 . . . φR], which is used to classify the predictions as correctly or incorrectly identified. The optimal threshold value φ_opt that results in the lowest RMSE for a given dataset is selected from experiments in which the value of φ is varied (see Section 5.1.2).

Iteration Phase

At each iteration a regression model f_t(x) → y is created from N training examples. The N samples are randomly selected, with replacement, from the original m training instances using the sampling weights D_t. The regression model is used on all m instances to predict the output y. This prediction is used in the variance-scaled error function

\[ \mathrm{Er}_t^{(r)}(i) = \frac{\left| f_t(x_i) - y_{r,i} \right|}{\sigma_t^{(r)}}. \tag{5.1} \]

Equation 5.1 calculates the absolute error of the prediction and divides it by the standard deviation σ_t^{(r)} of the absolute error over all m predictions of an output variable. The resulting quantity is a measure of the prediction error for instance i of output variable r, compared to all predictions at this iteration. This formulation is defined for all σ_t^{(r)} > 0, which is true for most predictions that have error.

The error function output is an R × m dimensional vector and is compared to the threshold φ. The weights in D_{y,t}^{(r)} of the misclassified samples are added to create the misclassification error rate ε_t^{(r)}, an R × 1 dimensional vector. The misclassification error rate is a measure of how many "hard" examples the regression model misclassified. In the context of a variance-scaled Er_t(i), "hard" examples are not necessarily outliers; they are examples that previous regression models have classified as incorrect. A large value means that the regression model classified a lot of the "hard" examples as incorrect. The error rate is used in

\[ \beta_{t,r} = \left( \varepsilon_t^{(r)} \right)^{n} \tag{5.2} \]

to calculate the weight updating parameter β_{t,r} using the power coefficient n. The power coefficient governs the increase of the weights of the misclassified samples for the next iteration. In [95] a power coefficient of n = 2 and n = 3 was suggested, but the authors warn that a large power coefficient may result in fitting to specific examples and may fail to generalize. If n is set too large, the algorithm will fit to outliers in a manner similar to the zero-crossing problem experienced by Adaboost.RT.

The weight updating parameter serves two purposes. The first purpose is to update the weights in the output error distribution D_{y,t+1}^{(r)}(i) by

\[ D_{y,t+1}^{(r)}(i) = \frac{D_{y,t}^{(r)}(i)}{Z_t} \times \begin{cases} \beta_{t,r} & \text{if } \mathrm{Er}_t^{(r)}(i) \le \phi_r \\ 1 & \text{otherwise,} \end{cases} \tag{5.3} \]

and the second purpose of β_{t,r} is as a weighting factor in the final hypothesis, given by

\[ y^{(r)}(x) = \frac{\sum_t \log\!\left(\tfrac{1}{\beta_{t,r}}\right) f_t(x)}{\sum_t \log\!\left(\tfrac{1}{\beta_{t,r}}\right)}. \tag{5.4} \]

The values of the correctly classified examples in Equation 5.3 are multiplied by β_{t,r}. If the current iteration misclassified a lot of hard examples (i.e. little additional diversity or bad prediction), β_{t,r} will be close to 1 and the weights in D_{y,t}^{(r)} will remain nearly unchanged. Conversely, if the prediction is good, the weights of the correctly classified examples are reduced. The sampling weight distribution D_t for the next iteration is updated as the average of the output error distribution D_{y,t} over each instance in m, given by

\[ D_{t+1}(i) = \frac{1}{R} \sum_{k=1}^{R} D_{y,t}^{(r=k)}(i). \tag{5.5} \]

Output Phase

The final hypothesis of Adaboost.MRT is a weighted sum of the output of each regression model. During the learning phases, T different regression models are built and their corresponding weight updating parameter β_{t,r} is calculated. The weight updating parameter is a measure of a regression model's ability to predict the output variable y_r. In the final hypothesis the prediction of a regression model is multiplied by the logarithm of the inverse of β_{t,r} (Equation 5.4).
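As a concrete illustration of both roles of β_{t,r} (the numbers are chosen for exposition and are not taken from the experiments), consider one output variable, a linear power coefficient n = 1, and two learners with misclassification error rates ε = 0.3 and ε = 0.8:

\[
\beta_1 = 0.3, \quad \beta_2 = 0.8, \qquad
\log\tfrac{1}{\beta_1} \approx 1.20, \quad \log\tfrac{1}{\beta_2} \approx 0.22 \quad \text{(natural logarithm)}.
\]

The more accurate learner therefore carries roughly five times the voting weight in Equation 5.4, while in Equation 5.3 its correctly classified samples are scaled by 0.3, concentrating the next sampling distribution on the examples it still predicts poorly.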
Algorithm 1: Generalized procedure for the Adaboost Multivariate Regression Threshold (Adaboost.MRT) algorithm, capable of estimating R-sized output vectors.

1. Input:
   - Sequence of m examples <x1, y1> . . . <xm, ym> where the output y = (y1, y2, . . . , yR)
   - Weak learning algorithm ("Weak Learner")
   - Integer T specifying the number of iterations or machines
   - Threshold vector φ = (φ1, . . . , φR), (0 < φ), used to classify samples as correct and incorrect

2. Initialize:
   - Machine number or iteration, t = 1
   - Set the sampling weight distribution D_t(i) = 1/m for all i
   - Set the output error distribution D_{y,t}^{(r)}(i) = 1/m for all i
   - Set the misclassification error rate vector of size R to ε_t^{(r)} = 0

3. Iterate while t ≤ T:
   - Sample N examples from the sequence of m, using weights D_t, with replacement (where N ≤ m)
   - Give these N examples to the Weak Learner
   - Build the regression model f_t(x) → y
   - Calculate the error function on all m examples for each output variable:
     Er_t^{(r)}(i) = |f_t(x_i) − y_{r,i}| / σ_t^{(r)},
     where σ_t^{(r)} is the sample standard deviation of (f_t(x_i) − y_{r,i})
   - Calculate the misclassification error rate for every output of f_t(x):
     ε_t^{(r)} = Σ_{i: Er_t^{(r)}(i) > φ_r} D_{y,t}^{(r)}(i)
   - Set the weight updating parameter β_{t,r} = (ε_t^{(r)})^n, where n is the power coefficient (e.g. linear, square or cubic)
   - Update the output error distribution D_{y,t}:
     D_{y,t+1}^{(r)}(i) = (D_{y,t}^{(r)}(i) / Z_t) × β_{t,r} if Er_t^{(r)}(i) ≤ φ_r, and D_{y,t}^{(r)}(i) / Z_t otherwise,
     where Z_t is a normalization factor such that D_{y,t+1}^{(r)} will be a distribution
   - Update the sample weight distribution as the average of the error rate of each output variable for each i:
     D_{t+1}(i) = (1/R) Σ_{k=1}^{R} D_{y,t}^{(r=k)}(i)
   - Set t = t + 1

4. Output the final hypothesis:
   y^{(r)}(x) = Σ_t log(1/β_{t,r}) f_t(x) / Σ_t log(1/β_{t,r})
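A compact sketch of Algorithm 1 is given below. It is an illustrative Python/NumPy reimplementation under the stated notation, not the MATLAB code used for the experiments; base_fit stands for any weak learner that returns a prediction function (for example a small neural network).

```python
import numpy as np

def adaboost_mrt_fit(X, Y, base_fit, T=10, phi=None, n_power=1, N=None, rng=None):
    """Train an Adaboost.MRT ensemble (sketch of Algorithm 1).

    X : (m, d) inputs;  Y : (m, R) outputs.
    base_fit(Xs, Ys) must return a callable model with model(X) -> (len(X), R).
    Returns a predict(X) function implementing the final hypothesis (5.4).
    """
    rng = np.random.default_rng() if rng is None else rng
    m, R = Y.shape
    N = m if N is None else N
    phi = np.full(R, 0.5) if phi is None else np.asarray(phi, dtype=float)

    D = np.full(m, 1.0 / m)            # sampling weights D_t
    Dy = np.full((R, m), 1.0 / m)      # output error distribution D_{y,t}
    models, betas = [], []

    for _ in range(T):
        idx = rng.choice(m, size=N, replace=True, p=D)
        f_t = base_fit(X[idx], Y[idx])
        abs_err = np.abs(f_t(X) - Y).T                                # (R, m)
        Er = abs_err / abs_err.std(axis=1, ddof=1, keepdims=True)     # (5.1)

        miscls = Er > phi[:, None]                                    # "hard" examples
        eps = (Dy * miscls).sum(axis=1)                               # error rate
        # 0 < eps < 1 is assumed; eps = 0 or 1 reproduces the singularity
        # discussed in the text.
        beta = eps ** n_power                                         # (5.2)

        Dy = np.where(miscls, Dy, Dy * beta[:, None])                 # (5.3)
        Dy /= Dy.sum(axis=1, keepdims=True)                           # normalise (Z_t)
        D = Dy.mean(axis=0)                                           # (5.5)
        D = D / D.sum()

        models.append(f_t)
        betas.append(beta)

    W = np.log(1.0 / np.array(betas))                                 # (T, R)

    def predict(Xq):
        preds = np.stack([f(Xq) for f in models])                     # (T, n, R)
        return np.einsum('tr,tnr->nr', W, preds) / W.sum(axis=0)      # (5.4)

    return predict
```

With R = 1 the same code covers the single-output benchmarks used in the following sections.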
5.1.2 Experimental Threshold Selection

Boosting performance is sensitive to the number of iterations and to the weights that are assigned to the samples for the next training iteration. For the proposed algorithm, this sensitivity manifests itself in the values of T and φ and in the formulation of Er_t(i). Adaboost.MRT uses a variance-scaled misclassification function, therefore the value of φ will depend on the prediction error distribution of the base learning algorithm. In this section experimental testing was used to determine the value of φ_opt, which splits a given error distribution to introduce the most diversity and results in the lowest RMSE. Experimental testing was necessary, as the value of φ_opt is a function of the error distribution. The error distribution depends on the type of base learning algorithm and on the function to be estimated.

The aim of the first test was to experimentally determine the relationship between φ_opt and T for a given function and multiple noise magnitude levels. The base learning algorithm was a multi-layer perceptron with one hidden layer and 20 hidden nodes H. The function to be approximated was y = sin(x) + δ, where the noise δ was uniformly distributed according to δ = U[−k, k]. Three different noise magnitudes, k, were tested, with k ∈ {0, 1, 4}. The input x was uniformly distributed on x ∈ [−2, 20]. The nonlinear function was chosen for its repeated zero-crossings, a condition that caused the accuracy of Adaboost.RT to degrade. The weight updating parameter was set to a linear coefficient (n = 1, see Equation 5.2). For each combination of φ and T, the algorithm was run 30 times and the average root mean squared error was calculated and reported. The training and test sets were generated on the same bounds and consisted of 700 and 300 examples, respectively. The RMSE was calculated with respect to the noiseless function y = sin(x).

Figure 5.1 shows the variation of the root mean squared error (RMSE) with changing values of φ and T for three different signal-to-noise ratios. The RMSE is relatively constant for T = 1 (corresponding to the accuracy of the base learning algorithm) and the RMSE reduces as T is increased. The RMSE is sensitive to the number of iterations, however the reduction in RMSE slows down beyond T = 10 iterations, which is consistent with Adaboost.RT [95]. An improvement or worsening of the RMSE can be observed depending on the value of φ and the signal-to-noise magnitude ratio. For noiseless training data φ_opt = 0.6, and with noise in the data the value is in the range of φ_opt ≈ 1.1. At higher values of φ the mean and variance of the RMSE increased significantly. A rule of thumb for unknown data (if extensive analysis is not possible) can be to let 0.5 ≤ φ ≤ 1. Figure 5.2 shows a visualization of the training data, the noiseless test data, and the sample performance of a single ANN and the Adaboost.MRT algorithm.

Figure 5.1: Mean squared error with variation in the number of iterations T and the error threshold φ, for a scalar input and a scalar output. Results are presented for varying magnitudes of noise: (a) noise δ = 0 with φ_opt = 0.6; (b) noise δ ∼ U(−1, 1) with φ_opt = 1.2; (c) noise δ ∼ U(−4, 4) with φ_opt = 1.1.

Under the assumption that the prediction errors are normally distributed, 0.5 ≤ φ ≤ 1 corresponds to 40% to 70% of predictions marked as correct and the rest as incorrect, based on their large variance. The samples marked as incorrect will be more likely to be selected for the next training iterations. Diversity is introduced by training learners on different subsets of the training data. As the learners are trained on different subsets of the training data, they are more likely to make different errors on the same data and are thus more likely to be diverse. Diverse ensembles are more accurate than their individual learners [36].

Figure 5.2: Function estimation with noise-corrupted training data for the function y = sin(x) with noise distributed according to δ ∼ U(−4, 4). Shown are the training data, Adaboost.MRT (φ = 1.3, T = 10, RMSE = 0.266), a single ANN (RMSE = 0.525) and the function y = sin(x).

The Adaboost.MRT training time is a multiple of the training time of the single ANN, depending on T, with an approximately linear relationship. These results were to be expected, as the training phase in boosting happens in series, which is one drawback compared to bagging, in which the training can be parallelized.

5.2 Performance Evaluation

5.2.1 Datasets

The performance of Adaboost.MRT was tested on six datasets and compared to that of Adaboost.RT and a single iteration of their weak learner, i.e., the artificial neural network (ANN). Six datasets (five of which appeared in [109, 120]) were used for performance comparison. Each dataset consists of vector inputs and scalar outputs.
The six datasets are the Friedman #1, #2 and #3 test sets, the Gabor function, the sinc|x| function and the yacht hydrodynamics data set from the UCI Machine Learning Repository [6]. These test sets are commonly used in the regression literature. The Friedman #1, #2 and #3 sets have been used in boosting regression in [17, 39, 45, 95, 109, 120].

The Friedman #1 dataset has 10 input variables, 5 of which do not contribute to the output. The Friedman #2 and #3 test sets model the impedance and phase shift in an alternating circuit [17]. The Gabor and the sinc|x| functions have been selected for the zero crossing in the output data. The yacht hydrodynamics data set was selected for function estimation of real measured data. The equations and parameters for the test sets are shown in Table 5.1. Uniformly distributed noise δ was added to the output variable y in the training data. The noise magnitude parameters are shown in Table 5.2. The RMSE was calculated against the noiseless function.

The performance of the learners was assessed by k-fold cross-validation with k = 10. In k-fold cross-validation the data is partitioned into k equal folds and the algorithm is trained on all data except one fold, for each of the k folds [12]. The data contained in the fold left out of the training data is used as a test set. The single ANN did not sample N points from the training data, but was trained on all the data it was passed from the folding process. The yacht data-set used 4-fold cross-validation due to the low number of samples.

The neural networks used as the base learner are trained by the back-propagation algorithm in a MATLAB implementation. The neural networks consisted of one input, one hidden, and one output layer, with the number of hidden nodes H shown in Table 5.2 for each data set. The architecture of the neural network was not optimized for accuracy, as the relative performance of the tested algorithms was more important. The power coefficient n = 1 was used in the Adaboost.MRT and Adaboost.RT algorithms to reduce the probability of fitting to outliers.

Table 5.1: Data set equations used in the performance evaluation.

Data Set | Function | Variables | Size, m
Friedman #1 | y = 10 sin(π x1 x2) + 20 (x3 − 0.5)² + 10 x4 + 5 x5 | xi ∼ U[0,1], i = 1 . . . 10 | 5000
Friedman #2 | y = sqrt( x1² + (x2 x3 − 1/(x2 x4))² ) | x1 ∼ U[0,100], x2 ∼ U[40π, 560π], x3 ∼ U[0,1], x4 ∼ U[1,11] | 5000
Friedman #3 | y = tan⁻¹( (x2 x3 − 1/(x2 x4)) / x1 ) | x1 ∼ U[0,100], x2 ∼ U[40π, 560π], x3 ∼ U[0,1], x4 ∼ U[1,11] | 3000
Gabor | y = (π/2) exp[−2(x1² + x2²)] cos[2π(x1 + x2)] | x1 ∼ U[0,1], x2 ∼ U[0,1] | 3000
sinc|x| | y = sinc|x| = sin|x| / |x| | x ∼ U[−2π, 2π] | 5000
Yacht | Yacht Hydrodynamics Data Set | xi, i = 1 . . . 6 | 308
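For concreteness, the sketch below shows how one of these benchmark sets (Friedman #1, first row of Table 5.1) can be generated with the noise level of Table 5.2; it is an illustrative helper, not the code used for the reported experiments.

```python
import numpy as np

def friedman1(m=5000, noise=2.0, rng=None):
    """Generate the Friedman #1 benchmark as listed in Table 5.1.

    Ten uniform inputs on [0, 1]; only the first five enter the response.
    Uniform noise on [-noise, noise] is added to the training targets.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = rng.uniform(0.0, 1.0, size=(m, 10))
    y_clean = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
               + 20 * (X[:, 2] - 0.5) ** 2
               + 10 * X[:, 3] + 5 * X[:, 4])
    y_noisy = y_clean + rng.uniform(-noise, noise, size=m)
    return X, y_noisy, y_clean   # RMSE is computed against y_clean
```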
Table 5.2: Training parameters for the data sets, showing the number of nodes in the hidden layers, the Adaboost.RT threshold φ_RT, the Adaboost.MRT threshold φ_MRT and the sample size N for each iteration. The power coefficient n = 1 was used for all tests.

Data Set | Nodes, H | φ_RT | φ_MRT | Sample Size, N | Noise, δ
Friedman #1 | 10 | 0.05 | 0.9 | 2000 | U[−2, 2]
Friedman #2 | 20 | 0.05 | 1.0 | 2000 | U[−120, 120]
Friedman #3 | 95 | 0.10 | 0.5 | 1200 | U[−0.4, 0.4]
Gabor | 45 | 0.10 | 0.9 | 1200 | U[−0.2, 0.2]
sinc|x| | 10 | 0.10 | 1.3 | 2000 | U[−0.2, 0.2]
Yacht | 10 | 0.05 | 0.7 | 231 | –

5.2.2 Threshold Parameter Optimization

The threshold parameter φ affects the accuracy of Adaboost.MRT and Adaboost.RT. In this section the optimum value for the thresholds was experimentally determined. For clarity, the thresholds will be referred to as φ_RT and φ_MRT for Adaboost.RT and Adaboost.MRT, respectively. Following a search on the range [0, 0.5], with steps of 0.05, the best values for φ_RT were found for each dataset and are shown in Table 5.2. The optimum value for φ_MRT was assessed for each dataset by a 10-fold cross-validation, and a 4-fold cross-validation for the yacht dataset. The resulting RMSE and standard deviation are shown in Figure 5.3, which also shows the mean and standard deviation of the RMSE for Adaboost.RT with φ_RT. The presented trends in the RMSE were used in the selection of φ_MRT. The value of φ_MRT with the lowest RMSE was used in Section 5.2.3 for benchmarking.

The performance of the two algorithms is comparable for the Friedman #1, #2, #3 and yacht data-sets over a range of φ_MRT. There are significant performance gains in the sinc|x| and the Gabor function prediction for a wide range of φ_MRT. This performance gain can be attributed to the zero-crossing in the data, which causes over-fitting in the prediction. The optimum value of φ_MRT varies from dataset to dataset, but lies in the range [0.5, 1.3], depending on the function and the prediction error distribution. The main factor that affects the selection of φ is the prediction error distribution. Large errors were observed for φ_MRT < 0.05 and φ_MRT > 1.5, which can be attributed to a lack of diversity in the training data. The sampling distribution will remain approximately uniform if the threshold is too small, and will only fit to extreme outliers if the threshold is too large.

Figure 5.3: Variation of φ_MRT and the resulting RMSE for a single cross-validation test on each dataset (Friedman #1, #2, #3, sinc|x|, Gabor and yacht). Shown is also the mean RMSE and one standard deviation for Adaboost.RT.

5.2.3 Performance Comparison

The parameters from Table 5.2 were used to compare the RMSE of Adaboost.RT, Adaboost.MRT and a single iteration of the base learning algorithm. The root mean square error (RMSE) was used as the metric for performance comparison of the two ensemble methods. The RMSE was calculated as the mean of a 10 times repeated 10-fold cross-validation. The data was regenerated after each repetition. The yacht dataset used 10 × 4-fold cross-validation. Due to the limited number of samples, the data could not be regenerated after each repetition. Since there was overlap in the samples for the 10 × 4-fold cross-validation, the samples are not independent. Dependence between samples will result in a lower estimated variance, compared to the actual variance [12]. The resulting RMSE performance is shown in Table 5.3, which differs slightly
from the values shown in Figure 5.3. Figure 5.3 in Section 5.2.2 used the same data for all of its tests and displays the mean and standard deviation of the RMSE of a single cross-validation. The results presented in Table 5.3 used data that was regenerated after each 10-fold cross-validation and show the mean RMSE and the standard deviation of the mean RMSE for 30 tests.

Table 5.3: Performance comparison of Adaboost.RT, Adaboost.MRT and a single iteration of the ANN, trained on the same data.

Dataset | Method | Mean RMSE | Std. Dev.
Friedman #1 | Adaboost.RT | 0.3061 | 0.0152
Friedman #1 | Adaboost.MRT | 0.3345 | 0.0238
Friedman #1 | Single ANN | 0.6982 | 0.1192
Friedman #2 | Adaboost.RT | 11.600 | 0.536
Friedman #2 | Adaboost.MRT | 11.447 | 0.492
Friedman #2 | Single ANN | 15.285 | 0.848
Friedman #3 | Adaboost.RT | 0.1011 | 0.0439
Friedman #3 | Adaboost.MRT | 0.1023 | 0.0474
Friedman #3 | Single ANN | 0.1311 | 0.0663
sinc|x| | Adaboost.RT | 0.01111 | 0.00619
sinc|x| | Adaboost.MRT | 0.00655 | 0.00380
sinc|x| | Single ANN | 0.01440 | 0.01090
Gabor | Adaboost.RT | 0.02486 | 0.01015
Gabor | Adaboost.MRT | 0.02123 | 0.00885
Gabor | Single ANN | 0.03285 | 0.01593
Yacht | Adaboost.RT | 0.8383 | 0.0801
Yacht | Adaboost.MRT | 0.8094 | 0.1015
Yacht | Single ANN | 1.2541 | 1.0899

The mean and standard deviation of the RMSE for the Friedman #1, #2, #3 and the yacht datasets are similar for Adaboost.RT and Adaboost.MRT. Both ensembles outperform a single iteration of their learners. Adaboost.MRT outperforms Adaboost.RT on the sinc|x| and the Gabor data sets, due to the zero crossing in the output data.

Figure 5.4: Variation of the noise-to-signal magnitude ratio and the resulting RMSE comparison for the Friedman #3 dataset (Adaboost.RT with φ_RT = 0.1, Adaboost.MRT with φ_MRT = 0.5, and the ANN) and the sinc|x| dataset (Adaboost.RT with φ_RT = 0.1, Adaboost.MRT with φ_MRT = 1.3, and the ANN).

5.2.4 Performance Comparison with Noisy Data

The redefinition of the error function from the absolute relative error to the variance-scaled error function should reduce the RMSE in the presence of noise. This claim is investigated using two datasets with an increasing signal-to-noise ratio (SNR). The Friedman #3 data set was used since the performance of Adaboost.RT and Adaboost.MRT is similar on this data. The sinc|x| function was selected to investigate performance gains when the data contains a zero-crossing. The noise was uniformly distributed with zero mean. The noise amplitude was varied from a value of 0.2 and incrementally doubled to 25.6, resulting in eight different noise-to-signal magnitude ratios. The parameters for the learners are shown in Table 5.2. The performance was assessed using a 10-fold cross-validation test. The data between tests was regenerated, and the test was repeated 5 times for each noise level. The average RMSE values were recorded for Adaboost.MRT, Adaboost.RT and the ANN. The results for both datasets are shown in Figure 5.4. In the presence of noise Adaboost.MRT achieves a greater RMSE reduction than Adaboost.RT for the Friedman #3 and the sinc|x| function.

5.2.5 Multivariate Output

This section compares the accuracy of the proposed algorithm against a single iteration of its base learner for multivariate output. Two multivariate nonlinear functions were tested.

The first function describes a cone in 3D space, with two input variables x = [x1, x2] = [z, θ] and two output variables y = [y1, y2] = [x, y]. The equations describing the cone are

\[ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \frac{(h - z)}{h}\, r_d \cos(\theta) \\[4pt] \frac{(h - z)}{h}\, r_d \sin(\theta) \end{bmatrix} + \begin{bmatrix} \delta_1 \\ \delta_2 \end{bmatrix}, \tag{5.6} \]

where h is the height of the cone (h = 10), r_d describes the radius (r_d = 10) and δ1, δ2 is white noise, normally distributed according to N(μ, σ²) = N(0, 1). The shape was chosen to aid in the visualization of multivariate function estimation.
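A short sketch of how the noisy cone training data of Equation 5.6 can be generated is given below (illustrative only; the input ranges are those stated in the following paragraph).

```python
import numpy as np

def cone_dataset(m=900, h=10.0, r_d=10.0, noise_std=1.0, rng=None):
    """Generate the noisy 3D-cone benchmark of Equation 5.6.

    Inputs are (z, theta) with z ~ U[0, 10] and theta ~ U[-pi, pi];
    outputs are the (x, y) coordinates corrupted by N(0, noise_std**2) noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = rng.uniform(0.0, h, size=m)
    theta = rng.uniform(-np.pi, np.pi, size=m)
    radius = (h - z) / h * r_d
    Y_clean = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
    Y_noisy = Y_clean + rng.normal(0.0, noise_std, size=(m, 2))
    X = np.column_stack([z, theta])
    return X, Y_noisy, Y_clean
```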
The base learning algorithm was an artificial neural network with a single hidden layer and H = 20 hidden nodes. The training data consisted of m = 900 instances, N = 900 of which were sampled at each iteration, with input data uniformly distributed on θ ∼ U[−π, π] and z ∼ U[0, 10].

The second function was the Swiss-roll function, with two inputs x = [x1, x2] = [d, θ] and three outputs y = [y1, y2, y3] = [x, y, z]. The equations describing the Swiss roll are

\[ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \theta \cos(\theta) \\ d \\ \theta \sin(\theta) \end{bmatrix} + \begin{bmatrix} \delta_1 \\ \delta_2 \\ \delta_3 \end{bmatrix}. \tag{5.7} \]

The input parameters for the Swiss roll were uniformly distributed, d ∈ [0, 10] and θ ∈ [0, 4π]. The noise parameters δ1, δ2, δ3 were normally distributed according to N(μ, σ²) = N(0, 2.25). The training data consisted of m = 400 points, N = 400 of which were selected at each iteration. Adaboost.MRT used T = 10 iterations and the artificial neural network hidden layer consisted of H = 10 hidden nodes.

Threshold Parameter Selection

This section describes the threshold parameter selection for the cone dataset. A detailed discussion of the Swiss roll parameter selection was not included due to space constraints. The Swiss roll threshold parameter was selected from sample tests and was φ_Swiss = [0.2, 0.2, 0.1]. This value may not be the optimal value and further discussion on the parameter selection has been left for future work.

The parameter φ for the multivariate cone function was a two-dimensional vector (φ = [φx, φy]), which resulted in two tuning parameters. The optimal value φ_opt for φx and φy was experimentally determined. The first test calculated RMSE_x and RMSE_y for varying values of φ and T, while keeping the tuning parameters equal (φx = φy). At every combination of φ and T the RMSE was averaged over 20 repetitions. Thirteen values of φ were used, ranging from 0.01 to 2. The results are shown in Figure 5.5. This figure serves to illustrate the accuracy improvements as T is increased, with the accuracy of the base learning algorithm shown at T = 1. Beyond 10 iterations the decrease in RMSE was slow, which is consistent with the single output case.

Figure 5.5: RMSE comparison for x and y as a function of φ = φx = φy and T. (a) RMSE for output variable x; (b) RMSE for output variable y.

The RMSE of each output variable is not independent of the other output variables, due to the averaging of the weights in the sampling process. The second test investigated the RMSE for varying values of the threshold φ where φx ≠ φy, while keeping the number of iterations at T = 10. The RMSE was averaged over 20 repetitions and is shown in Figure 5.6. The results presented in the second test (on the line where φx = φy) agree with the results presented in Figure 5.5. Figure 5.6 shows that the RMSE for x is more sensitive to the parameter φx than to φy. Conversely, the RMSE for y is more sensitive to the parameter φy. This suggests that for a low number of output variables the threshold parameters can be tuned individually. For the tested function, the output variable threshold parameters should be individually optimized for the best accuracy. Based on this experiment the optimal value for this function was selected as φ_{x,opt} = 0.4 and φ_{y,opt} = 0.2.
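The grid search over the per-output thresholds used in these two tests can be sketched as follows; train_fn is any routine that trains an Adaboost.MRT ensemble for a given threshold vector (for instance the adaboost_mrt_fit sketch shown earlier), and the RMSE is computed against noiseless test outputs. The names and grid values are illustrative.

```python
import numpy as np
from itertools import product

def threshold_grid_search(train_fn, X_tr, Y_tr, X_te, Y_te_clean,
                          phi_grid=(0.1, 0.2, 0.4, 0.8, 1.6), repeats=20):
    """Average per-output RMSE for every combination of output thresholds."""
    n_out = Y_tr.shape[1]
    results = {}
    for phi in product(phi_grid, repeat=n_out):
        rmse = np.zeros(n_out)
        for _ in range(repeats):
            predict = train_fn(X_tr, Y_tr, np.asarray(phi))
            err = predict(X_te) - Y_te_clean
            rmse += np.sqrt((err ** 2).mean(axis=0))
        results[phi] = rmse / repeats
    return results   # the minimising entry gives (phi_x_opt, phi_y_opt, ...)
```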
From Figure 5.6 it appears that a value of φx = φy = 0 would produce good results. However, for very low φ the RMSE drastically increases, with φ = 0 corresponding to a singularity. For φ = 0, the misclassification error rate ε_t^{(r)} = 1, meaning that all samples were classified as incorrect. A unity misclassification error rate leads to the weight updating parameter β_{t,r} = 1, which corresponds to a singularity in the final output hypothesis function, Equation 5.4. A similar effect occurs for large values of φ. Large φ leads to a misclassification error rate ε_t^{(r)} = 0, leading to β_{t,r} = 0, which also corresponds to a singularity in Equation 5.4. In the previous experiment the minimum and maximum values of φ were 0.01 and 2, respectively. The limits of φ depend on the prediction error distribution and should be chosen so that the resulting misclassification error rate lies in 0 < ε_t^{(r)} < 1.

Figure 5.6: RMSE comparison for x and y as a function of φx and φy. (a) RMSE for output variable x; (b) RMSE for output variable y.

Function Estimation

The previously determined optimal parameters φ_{x,opt}, φ_{y,opt} and φ_Swiss were used to assess the multivariate function reconstruction ability of Adaboost.MRT from noise-corrupted training data. Figure 5.7 and Figure 5.8 visualize typical results of function estimation for the single ANN and Adaboost.MRT. The mean and standard deviation of the RMSE were calculated from 30 different tests for each output variable. The data was re-generated after each test. The results and the corresponding values of φ that were used are shown in Table 5.4. Adaboost.MRT shows a significant improvement in the mean and standard deviation of the RMSE over the single ANN, making the prediction more consistent. The ensemble performs better than the individual learner for the multivariate functions that were tested.

Table 5.4: Performance comparison of Adaboost.MRT and a single iteration of the ANN for multivariate function estimation.

Function | Method | Output | Mean RMSE | Std. Dev.
3D Cone (φx = 0.4, φy = 0.2) | Adaboost.MRT | x | 0.2752 | 0.0293
3D Cone | Adaboost.MRT | y | 0.2854 | 0.0272
3D Cone | Single ANN | x | 0.4246 | 0.0866
3D Cone | Single ANN | y | 0.4104 | 0.0771
Swiss Roll (φx = φy = 0.2, φz = 0.1) | Adaboost.MRT | x | 0.4556 | 0.1071
Swiss Roll | Adaboost.MRT | y | 0.3030 | 0.0570
Swiss Roll | Adaboost.MRT | z | 0.4685 | 0.0846
Swiss Roll | Single ANN | x | 0.6086 | 0.2198
Swiss Roll | Single ANN | y | 0.4063 | 0.0781
Swiss Roll | Single ANN | z | 0.5800 | 0.2075

Figure 5.7: Noise-corrupted training data used to train the neural networks, the noiseless function, a single ANN estimation (RMSE_x = 0.543, RMSE_y = 0.488) and the Adaboost.MRT estimation (RMSE_x = 0.258, RMSE_y = 0.241).

Figure 5.8: Visualization of the training data, the true function, and the sample performance of Adaboost.MRT and a single ANN.

5.3 Colour Contour Learning

This section describes a sample application of Adaboost.MRT to colour-based object detection. The target colour contour, in the hue-saturation space, can be learned via the Adaboost.MRT algorithm. A sample application is the tracking of a mobile robot based on a red and blue spherical visual marker. The proposed off-line learning method requires images of the background and images of the target located within that background. The images should be representative of the background colour distribution and the target's colour should be somewhat distinct from its environment.

The aim of the learning is to estimate a function f(H_i, S_i) with 2 inputs and 1 output such that

\[ f(H_i, S_i) \approx y_i, \tag{5.8} \]
where H_i and S_i are the hue and saturation values at pixel i, and y_i is a similarity measure indicating that pixel i is part of the target object colour contour. The data for training is created from the H and S values of all the pixels contained in sample images. Each data entry for pixel i consists of three values <H_i, S_i, y_i>.

All pixels in the images that contain the target are assigned to the set S_T (subscript T for "target") with y_i = 1, regardless of whether the pixels are located on the target or not. That way, less information about the distinct target is needed, other than an image in which it is present in the environment. All pixels in the background images are assigned to the set S_NT (subscript NT for "not target") with y_i = 0. The sets S_T and S_NT are not mutually exclusive, and the aim is to find the hue and saturation values that are in S_T but not in S_NT. Since there are background pixels in the set S_T which overlap with S_NT, the resulting data will contain conflicting information, which can be considered noise. Adaboost.MRT is well suited for noisy data and will be used to generalize over the noise.

5.3.1 Training Data Generation

This section describes the collection of the training data, which consisted of all the pixels contained in four images. For each pixel the values of hue, saturation and y_i were recorded. Of the sample images used in the experiment, three contained typical background and one image contained the target, a blue and red sphere.

Figure 5.9: Training data and the corresponding value of y_i assigned to all pixels in the image. One image shows the desired target (y_i = 1) and three images are typical background images (y_i = 0).

The target colours are somewhat distinct from the environment, which is a common occurrence for colour-based target detection. The presence of the target colour in the background images will reduce the algorithm's ability to properly detect the target. The images were resized from a resolution of 320×240 to half their original size (160×120) to reduce the number of data points. The total number of training data points was 76,800, only a fraction of which was part of the target. The training data images are shown in Figure 5.9.

The hue-saturation histogram for each set is shown in Figure 5.10. Figure 5.10(a) shows the histogram of the target image (pixels in set S_T) and Figure 5.10(b) shows the histogram of the background images (pixels contained in set S_NT). The overlap between the two sets stems from the background pixels. The target appears in Figure 5.10(a) with saturation S ≈ 0.9 and hue H ≈ 1 and H ≈ 0.6 for the red and blue target, respectively.

Figure 5.10: Histograms showing the target and background sets: (a) histogram for target image pixels contained in set S_T with y_i = 1; (b) histogram for background image pixels contained in set S_NT with y_i = 0. In (a) the red and blue targets are represented by high saturation values and H ≈ 1 and H ≈ 0.6.

During training, the background pixels with y_i = 0 will attempt to drive the predicted y_i to 0, while the target pixels will attempt to drive y_i to 1. This conflict in the data requires the use of more background images than target images. More background images are needed, as they will assist in driving the background pixels present in the target image(s) to 0.

The histogram of the background and the target images does not span the entire hue-saturation space. Anywhere outside the space spanned by the images, the learner will be forced to extrapolate and will produce arbitrary results. Extrapolation from training data is not recommended, and it is assumed that the sample images span the practical usable space.
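A minimal sketch of this labelling scheme is shown below, assuming the images are available as float RGB arrays in [0, 1]; matplotlib's rgb_to_hsv is used here purely for illustration and returns hue and saturation on [0, 1], as in the histograms above.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def build_hs_training_set(target_images, background_images):
    """Collect <H_i, S_i, y_i> triples from target and background images.

    Every pixel of an image that contains the target is labelled y = 1
    (set S_T) and every pixel of a background image y = 0 (set S_NT);
    the conflicting background pixels inside S_T act as label noise.
    """
    rows = []
    for label, images in ((1.0, target_images), (0.0, background_images)):
        for img in images:
            hs = rgb_to_hsv(img)[..., :2].reshape(-1, 2)   # one <H, S> per pixel
            y = np.full((hs.shape[0], 1), label)
            rows.append(np.hstack([hs, y]))
    data = np.vstack(rows)
    return data[:, :2], data[:, 2:]    # X = (m, 2) inputs, Y = (m, 1) labels
```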
5.3.2 Adaboost.MRT Colour Contour Learning

This section describes the parameters used in Adaboost.MRT for the learning of the target colour. The feed-forward back-propagation network consisted of two inputs (hue H and saturation S), a single hidden layer with 6 hidden nodes, and one output y_i. The number of hidden nodes and layers was heuristically selected. Adaboost.MRT used T = 10 iterations and a threshold φ = 0.5. The selection of these parameters was based on intuition gained in the previous sections (Section 5.2.5). The number of iterations for Adaboost.MRT was set to 10, as experiments have shown that increasing the number of iterations does not appear to result in large accuracy gains. The threshold φ was set to 0.5, which was based on general trends determined from the experimental relationships in Section 5.2.

At each iteration N = 2000 points were sampled from the original m = 76,800 and used to train the weak learner. Over 10 iterations, the learner will only receive a maximum of 20,000 points of the original training data. There is little danger of not seeing any target data points with y_i = 1 during training, due to the iterative re-weighting of the samples based on prediction error. Should the points at an iteration contain only y_i = 0, then the sampling weight of the target points will be increased, as they have a large error. This sampling weight increase will increase the likelihood that the target points will be picked in the next iteration.

5.3.3 Colour Learning Results

This section presents the results of the colour contour learning implementation on the test data set. The learned contour of the similarity measure in the hue-saturation space is shown in Figure 5.11. A higher target colour likelihood means that the combination of hue and saturation is most likely part of only the target set S_T. A lower value means that this combination is also contained in the set S_NT; as there are more data points from S_NT, the background pixels from S_T should have a low similarity measure.

The area corresponding to the highest likelihood is a highly saturated red and blue colour. Magenta lies between red and blue in the hue space. Despite not being part of the original S_T nor of S_NT, it received a high similarity by extrapolation. Extrapolation outside of the training data is not recommended for function estimation and should not be counted as part of the target.

The aim of the learning algorithm was to learn the contour depicted in Figure 5.11. The colour contour can be divided into a binary image, consisting of target and background, via a threshold. The threshold depends on the level of confidence desired and was selected as 0.5 for this test. For testing, the hue and saturation values were interpolated in the colour contour and their similarity measure was compared to the threshold.

Figure 5.11: Distribution of the target colour likelihood along the hue-saturation dimensions.
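Applying the learned contour to an image then amounts to evaluating the similarity measure at every pixel's hue-saturation pair and thresholding it, as sketched below (an illustrative helper; similarity_fn stands for the trained ensemble's prediction function and the 0.5 threshold matches the level of confidence chosen above).

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def target_mask(rgb_image, similarity_fn, threshold=0.5):
    """Apply the learned hue-saturation similarity measure to every pixel.

    rgb_image     : float array in [0, 1] with shape (h, w, 3).
    similarity_fn : maps an (n, 2) array of [hue, saturation] pairs to a
                    similarity score, e.g. the trained ensemble's predict().
    Returns a boolean mask; small blobs can afterwards be removed by size
    and bounding-box filtering as described in the text.
    """
    hsv = rgb_to_hsv(rgb_image)                  # H, S, V all in [0, 1]
    hs = hsv[..., :2].reshape(-1, 2)             # one <H, S> pair per pixel
    scores = np.asarray(similarity_fn(hs)).reshape(rgb_image.shape[:2])
    return scores >= threshold
```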
It is difficult to define a measure of performance for the training on this data. The root mean squared error (RMSE) does not apply in this case, due to the presence of conflicting data from the background pixels in S_T. A subjective qualitative analysis was performed on a video sequence of 361 images (see Figure 5.12). The red target was fully found in all images that contained the target in the test data. This result is attributed to the isolation of its colour from the background pixels. The blue target was partially found in all images, but suffered degradation due to the low histogram count for this colour and its proximity to the background pixels in the hue-saturation space. Depending on the level of confidence, the threshold for the binary image can be adjusted, with a wider confidence level allowing more noise.

Figure 5.12 depicts the unfiltered binary, the corresponding colour and the filtered colour images. The result is typical of colour-based object detection. The binary images show low pixel-count noise and long slender noise artefacts on physical edges. These can be removed by size and bounding box filtering. These noise artefacts were depicted and not filtered to show the amount of information the method was able to learn from the training data. The remaining objects can be classified based on their shape, as is typically done with colour-based object detection methods; however, more outside information about the target will be needed. The Matlab implementation of the program was able to apply the colour contour threshold to 361 images in approximately 13.66 seconds, or about 26.4 frames per second (0.038 seconds per frame).

Figure 5.12: Sample images highlighting the application of the thresholding filter. The first row shows the original images, the second row shows a binary image within the threshold and the third row shows the original colours of the binary image. The fourth row shows the result of a hole-filling operation and the removal of noisy artefacts with less than 20 pixels.

Chapter 6

Conclusions and Future Work

Visual Servoing with Quadrotor Helicopters

This thesis presented a practical visual servoing controller for quadrotor helicopter UAVs. The presented controller was capable of tracking a stationary and a moving target. The control scheme was implemented on an inexpensive $300 helicopter platform and was shown to be robust to noise in measurements and communication delays. Image processing was performed on a ground station computer. The moving target contained two distinctly coloured markers, which were detected based on their colour and shape properties. The features for visual servoing were defined according to a spherical projection model. Compensation for the feature error induced by a moving target was implemented in the form of a proportional-integral controller on the image feature error.

The full controller consists of three control loops, which took the stability of the inner loops for granted. The outer control loop converts the image feature error into UAV velocity commands. The second loop performed a feedback linearization on the dynamics and translated the desired velocity command into attitude commands. The inner loop was the on-board attitude and altitude controller of the ARDrone system, which was analyzed using system identification for its dynamics. The final velocity controller rise time was approximately 0.7 seconds.

The proposed controller was tested and validated in simulations and experiments. The experiments tested the initial feature error reduction followed by a prolonged positioning of the quadrotor helicopter in front of the target. Experiments on the 2D stationary target have shown that the controller is able to reduce large initial feature errors and that the controller is capable of maintaining the desired position in the presence of wind turbulence forces. Experiments on a stationary 3D target have shown that the maximum pixel variation during the positioning phase is on the order of 20 pixels for an image size of 320×240 pixels.
The maximum pixel error for a target moving at a constant velocity of 0.25 m/sec under the uncompensated control law was 60 pixels. With compensation, the maximum error was reduced to 40 pixels. Comprehensive tuning of the compensation parameters is likely to produce better results. Future work can focus on the optimization of the proportional and integral parameters or on the incorporation of a more robust estimation scheme of the target velocity into the control law.

The pixel error experienced during positioning can be attributed to many factors. A major contributing factor is the coupled position and orientation dynamics of the quadrotor helicopter. Small position errors are induced by the turbulent air created by the four spinning rotors of the quadrotor helicopter. The UAV has to rotate in order to translate to reduce these errors, which induces additional small errors in the features.

An additional contributor to the pixel error is the communication delays that were frequently experienced. The controller operates at approximately 20 Hz, but experiences occasional increases in time step duration. These occasional increases can be attributed to the lack of synchronization between the Matlab program and the ARDrone API, lost packets in wireless communication, or sudden increases in image processing time due to a large number of additional objects. These errors can be reduced by a stricter synchronization of the sub-programs. An additional contributor to the positioning errors is minor errors in the system identification. The ARDrone used consumer grade hardware and on-board sensors with limited accuracy. The visual servoing control scheme used a constant approximation of the interaction matrix between the image features and the motion of the camera. This approximation produces acceptable results, but also contributes to the pixel error. A better approximation of the interaction matrix has been left for future work and will reduce the steady state error and the pixel error experienced during positioning. An estimate of the total thrust force was not available, and the sub-optimal approximation of a constant thrust force equal to the UAV weight was used. This approximation produced acceptable results, but a better approximation will increase the accuracy of the feedback linearization scheme.

The velocity controller used in this thesis assumed no wind resistance in the dynamic equations of motion. This approximation results in performance degradation and in errors between the simulated and the experimentally observed response. While the controller was capable of rejecting small air currents in indoor environments, it is unknown how the controller will perform in the presence of stronger winds. Future work will consider the addition of a wind-resistance approximation to the velocity controller. The wind resistance can be approximated using machine learning methods, or by assuming a moving target to compensate for the tracking error due to wind.

Adaboost.MRT

In this thesis, Adaboost.MRT, a boosting algorithm for multivariate output regression, was introduced. The proposed algorithm is based on Adaboost.RT and addresses the singularity in the misclassification function by introducing a variance-based misclassification function. To the best of the author's knowledge, Adaboost.MRT is the first algorithm that extends Adaboost to multivariate regression, resulting in increased accuracy for machine learning with multivariate outputs.
Adaboost.MRT has also presented some insensitivity to noise in the training data, which led to an increase in accuracy. Multivariate regression is extensively used in machine learning for data mining, system identification and advanced control.

Experiments with six commonly used datasets showed that Adaboost.MRT outperforms Adaboost.RT on training sets that contain high-magnitude noise and on training sets with a zero-crossing in the output data. On other training sets, the proposed method performed similarly to Adaboost.RT. The performed experiments suggest that the proposed method is applicable to a wider range of cases than Adaboost.RT, especially where the output variables are near zero or contain noise.

In Adaboost.MRT, the expected accuracy depends on the base learning algorithm and the noise distribution of the predictions. Following the selection of the base learning algorithm, the values of φ and T need to be chosen. The accuracy gain diminishes beyond T = 10 iterations. The optimum value of φ can be experimentally determined and remains consistent for a given prediction error distribution. In the performed experiments on single-variate output, the optimum threshold was in the range of [0.5, 1.4].

Experiments with Adaboost.MRT on two multivariate output functions have been performed. In both cases, there was a significant reduction in the RMSE over the base learning algorithm. The experiments suggest that for a small number of output variables the RMSE of an output variable is most sensitive to the threshold parameter of that specific output variable. This suggests that the threshold can be selected and tuned for each variable individually. Further testing on larger sets and more diverse combinations of output variables is warranted.

Future work should focus on the selection of the threshold parameter φ. The optimal parameter value is likely linked to the distribution of the prediction error. Further research on an adaptive threshold parameter selection method that automates the threshold selection, or an investigation of the relation between the noise distribution and the optimal parameter value, is warranted.

Bibliography

[1] M. Achtelik, S. Weiss, and R. Siegwart. Onboard IMU and monocular vision based control for MAVs in unknown in- and outdoor environments. In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages 3056–3063. IEEE, May 2011. → pages 19, 27, 57

[2] G. Allibert and E. Courtial. Switching controller for efficient IBVS. In International Conference on Intelligent Robots and Systems (IROS) 2012, pages 1695–1701. IEEE, 2012. → pages 7

[3] E. Altug, J. P. Ostrowski, and R. Mahony. Control of a quadrotor helicopter using visual feedback. In Robotics and Automation, 2002. Proceedings. ICRA'02. IEEE International Conference on, volume 1, pages 72–77. IEEE, 2002. → pages 12, 19

[4] N. Andreff, B. Espiau, and R. Horaud. Visual servoing from lines. The International Journal of Robotics Research, 21(8):679–699, Aug. 2002. ISSN 0278-3649. → pages 11, 16

[5] Arducopter Autopilot. April 2013. [online] Available: http://code.google.com/p/arducopter/. → pages 18

[6] K. Bache and M. Lichman. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml, 2013. → pages 113

[7] M. Bakthavatchalam, F. Chaumette, and E. Marchand. Photometric moments: New promising candidates for visual servoing. In IEEE Int. Conf. on Robotics and Automation, pages 5521–5526. IEEE, 2013. → pages 6, 11, 16

[8] C. Benedek and T. Szirányi.
Study on color space selection for detectingcast shadows in video surveillance. International Journal of ImagingSystems and Technology, 17(3):190?201, 2007. ? pages 45134[9] F. Bensalah and F. Chaumette. Compensation of abrupt motion changes intarget tracking by visual servoing. In Intelligent Robots and Systems95.?Human Robot Interaction and Cooperative Robots?, Proceedings. 1995IEEE/RSJ International Conference on, number 4, pages 181?187. IEEE,1995. ? pages 54[10] C. Bills, J. Chen, and A. Saxena. Autonomous MAV flight in indoorenvironments using single image perspective cues. In Robotics andautomation (ICRA), 2011 IEEE international conference on, pages5779?5783. IEEE, 2011. ? pages 15[11] S. Bouabdallah, A. Noth, and R. Siegwart. PID vs LQ control techniquesapplied to an indoor micro quadrotor. In Intelligent Robots and Systems,2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conferenceon, volume 3, pages 2451?2456. IEEE, 2004. ? pages 18, 19[12] R. R. Bouckaert. Choosing between two learning algorithms based oncalibrated tests. In International Conference on Machine Learning(ICML-03), pages 51?58, 2003. ? pages 113, 115[13] O. Bourquardez, R. Mahony, T. Hamel, and F. Chaumette. Stability andperformance of image based visual servo control using first order sphericalimage moments. In Intelligent Robots and Systems, 2006 IEEE/RSJInternational Conference on, volume 06, pages 4304?4309. IEEE, 2006.? pages 16[14] O. Bourquardez, R. Mahony, and N. Guenard. Kinematic visual servocontrol of a quadrotor aerial vehicle. Technical report, 2007. ? pages 12,14, 16[15] O. Bourquardez, R. Mahony, N. Guenard, F. Chaumette, T. Hamel, andL. Eck. Image-Based Visual Servo Control of the Translation Kinematicsof a Quadrotor Aerial Vehicle. IEEE Transactions on Robotics, 25(3):743?749, 2009. ISSN 1552-3098. ? pages 12, 14, 16, 17[16] M. Bos?nak, D. Matko, and S. Blaz?ic?. Quadrocopter control using anon-board video system with off-board processing. Robotics andAutonomous Systems, 60(4):657?667, Apr. 2012. ISSN 09218890. ?pages 12, 15, 17, 18, 57, 61[17] L. Breiman. Bagging predictors. Machine learning, 24(2):123?140, 1996.? pages 113135[18] P. Bristeau, F. Callou, D. Vissie`re, and N. Petit. The Navigation andControl technology inside the AR. Drone micro UAV. In 18th IFAC WorldCongress, pages 1477?1484, 2011. ? pages 18, 57, 61, 65[19] G. Brown, J. Wyatt, and P. Tio. Managing diversity in regressionensembles. The Journal of Machine Learning Research, 6:1621?1650,2005. ? pages 21[20] J. Bruce, T. Balch, and M. Veloso. Fast and inexpensive color imagesegmentation for interactive robots. In Intelligent Robots and Systems,2000.(IROS 2000). Proceedings. 2000 IEEE/RSJ International Conferenceon, volume 3, pages 2061?2066. IEEE, 2000. ? pages 44[21] S. D. Buluswar and B. a. Draper. Color machine vision for autonomousvehicles. Engineering Applications of Artificial Intelligence, 11(2):245?256, Apr. 1998. ISSN 09521976. ? pages 44[22] G. C. Goodwin, S. F. Graebe, and M. E.Salgado. Control System Design.Prentice Hall, 1st edition, 2001. ? pages 64[23] J. Cao, S. Kwong, and R. Wang. A noise-detection based AdaBoostalgorithm for mislabeled data. Pattern Recognition, 45(12):4451?4465,2012. ISSN 0031-3203. ? pages 22[24] Z. Ceren and E. Altug. Image Based and Hybrid Visual Servo Control ofan Unmanned Aerial Vehicle. Journal of Intelligent & Robotic Systems, 65(1-4):325?344, 2012. ? pages 12, 14, 15, 17, 26, 36, 57[25] F. Chaumette. 
Potential problems of stability and convergence inimage-based and position-based visual servoing. The confluence of visionand control, pages 66?78, 1998. ? pages 11[26] F. Chaumette. Image moments: a general and useful set of features forvisual servoing. Robotics, IEEE Transactions on, 20(4):713?723, 2004. ?pages 6, 11, 16[27] F. Chaumette and S. Hutchinson. Visual servo control. I. Basic approaches.Robotics & Automation Magazine, 13(4):82?90, 2006. ? pages 6, 7, 10[28] F. Chaumette and S. A. Hutchinson. Visual servo control. II. Advancedapproaches. IEEE Robotics & Automation Magazine, 14(1):109?118, Mar.2007. ISSN 1070-9932. ? pages 6, 7, 9136[29] F. Chaumette and A. Santos. Tracking a moving object by visual servoing.In 12th IFAC World Congress, volume 3, pages 643?648, 1993. ? pages54[30] G. Chesi and K. Hashimoto. Keeping features in the field of view ineye-in-hand visual servoing: A switching approach. Robotics, IEEETransactions on, 20(5):908?913, 2004. ? pages 7, 8[31] P. Corke and S. Hutchinson. A new partitioned approach to image-basedvisual servo control. Robotics and Automation, IEEE Transactions on, 17(4):507?515, 2001. ? pages 6, 8, 11[32] P. I. Corke, F. Spindler, and F. Chaumette. Combining Cartesian and polarcoordinates in IBVS. In International Conference on Intelligent Robotsand Systems, 2009., number 4, pages 5962?5967. IEEE, Oct. 2009. ?pages 8, 10[33] E. O. Costa, A. Trinidad, R. Pozo, and S. R. Vergilio. A geneticprogramming approach for software reliability modeling. IEEETransactions on Reliability, 59(1):222?230, 2010. ? pages 26[34] A. Cre?tual, F. Chaumette, and P. Bouthemy. Complex object tracking byvisual servoing based on 2D image motion. In Pattern Recognition, 1998.Proceedings. Fourteenth International Conference on, number 2, pages1251?1254, 1998. ? pages 54[35] M. Dahleh. 6.435 System Identification Spring 2005. 2012. [online].Available: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-435-system-identification-spring-2005/. ? pages18[36] T. Dietterich. Ensemble methods in machine learning. Multiple classifiersystems, pages 1?15, 2000. ? pages 21, 112[37] V. Dobrokhodov, I. Kaminer, K. Jones, and R. Ghabcheloo. Vision-basedtracking and motion estimation for moving targets using small UAVs. InAmerican Control Conference, pages 1428?1433, 2006. ? pages 12[38] A. Dorobantu, A. Murch, B. Mettler, and G. Balas. System Identificationfor Small, Low-Cost, Fixed-Wing Unmanned Aircraft. Journal of Aircraft,50(4):1117?1130, July 2013. ISSN 0021-8669. ? pages 18, 61[39] H. Drucker. Improving regressors using boosting techniques. InternationalConference on Machine Learning, 97:107?115, 1997. ? pages 21, 113137[40] R. Fomena and F. Chaumette. Visual servoing from spheres using aspherical projection model. In International Conference on Robotics andAutomation, 2007, pages 2080?2085. IEEE, 2007. ? pages 17, 37, 38, 40,42[41] R. Fomena and F. Chaumette. Improvements on Visual Servoing FromSpherical Targets Using a Spherical Projection Model. IEEE Transactionson Robotics, 25(4):874?886, Aug. 2009. ISSN 1552-3098. ? pages 15,17, 42[42] Y. Freund and R. Schapire. A Decision-Theoretic Generalization ofOn-Line Learning and an Application to Boosting. Journal of Computerand System Sciences, 55(1):119?139, Aug. 1995. ISSN 00220000. ?pages 21[43] Y. Freund and R. Schapire. Experiments with a new boosting algorithm. InInternational Conference on Machine Learning. Vol 96., pages 148?156,1996. ? pages 22[44] Y. Freund and R. Schapire. 
A short introduction to boosting.Journal-Japanese Society For Artificial Intelligence, 14(5):771?780, 1999.? pages 21, 22[45] J. H. Friedman. Multivariate adaptive regression splines. The Annals ofStatistics, pages 1?67, 1991. ? pages 113[46] C. Geyer and K. Daniilidis. A unifying theory for central panoramicsystems and practical implications. In Computer VisionECCV 2000, pages445?461. Springer Berlin Heidelberg, 2000. ? pages 15, 29, 30, 33[47] R. Goel, S. Shah, N. Gupta, and N. Ananthkrishnan. Modeling, simulationand flight testing of an autonomous quadrotor. In IISc CentenaryInternational Conference and Exhibition on Aerospace Engineering,ICEAE, pages 18?22, 2009. ? pages 18[48] J. E. Gomez-Balderas, G. Flores, L. R. Garc??a Carrillo, and R. Lozano.Tracking a Ground Moving Target with a Quadrotor Using SwitchingControl. Journal of Intelligent & Robotic Systems, 70(1-4):65?78, Aug.2012. ISSN 0921-0296. ? pages 12, 14, 20[49] E. Graether and F. Mueller. Joggobot: a flying robot as jogging companion.In CHI?12 Extended Abstracts on Human Factors in Computing Systems,pages 1063?1066. ACM, 2012. ? pages 2138[50] N. Guenard and T. Hamel. A practical visual servo control for anunmanned aerial vehicle. Robotics, IEEE Transactions, 24(2):331?340,2008. ? pages 13, 16[51] H. Hadj-Abdelkader, Y. Mezouar, and P. Martinet. Decoupled visualservoing based on the spherical projection of a set of points. In Roboticsand Automation, 2009. ICRA?09. IEEE International Conference on, pages1110?1115. IEEE, May 2009. ? pages 15, 26, 33, 37, 40[52] T. Hamel and R. Mahony. Visual servoing of an under-actuated dynamicrigid-body system: an image-based approach. Robotics and Automation,IEEE Transactions on, 18(2):187?198, 2002. ? pages 12, 13, 16, 30[53] L. Hansen and P. Salamon. Neural network ensembles. IEEE Transactionson Pattern Analysis and Machine Intelligence, 12(10):993?1001, 1990. ?pages 21[54] G. Hoffmann, S. Waslander, and C. Tomlin. Aerodynamics and control ofautonomous quadrotor helicopters in aggressive maneuvering. In 2009IEEE International Conference on Robotics and Automation, pages3277?3282. IEEE, May 2009. ? pages 18[55] M. Hu. Visual Pattern Recognition by Moment Invariants. InformationTheory, IRE Transactions on, 8(2):66?70, 1962. ? pages 16[56] S. Huh and D. Shim. A vision-based landing system for small unmannedaerial vehicles using an airbag. Control Engineering Practice, 18(7):812?823, 2010. ISSN 0967-0661. ? pages 16[57] S. A. Hutchinson, G. G. D. Hager, P. I. P. Corke, and Hutchinson, Seth. Atutorial on visual servo control. IEEE Transactions on Robotics andAutomation, 12(5):651?670, 1996. ISSN 1042296X. ? pages 6, 7[58] M. Iwatsuki and N. Okiyama. A new formulation of visual servoing basedon cylindrical coordinate system. Robotics, IEEE Transactions on, 21(2):266?273, 2005. ? pages 10[59] S. Lange, N. Sunderhauf, and P. Protzel. A vision based onboard approachfor landing and position control of an autonomous multirotor UAV inGPS-denied environments. In Advanced Robotics, 2009. ICAR 2009.International Conference on, pages 1?6, 2009. ? pages 12, 14, 17139[60] D. Lee and H. Kim. Adaptive visual servo control for a quadrotorhelicopter. In Control Automation and Systems (ICCAS), 2010International Conference on, pages 1049?1052. IEEE, 2010. ? pages 10,12, 14, 15[61] R. Mahony and V. Kumar. Aerial Robotics and the Quadrotor. Robotics &Automation Magazine, IEEE, 19(3):19. ? pages 1[62] R. Mahony, V. Kumar, and P. I. Corke. Multirotor Aerial Vehicles:Modeling, Estimation, and Control of Quadrotor. 
Robotics & AutomationMagazine, IEEE, 19(3):20 ? 32, 2012. ? pages 58[63] E. Malis, F. Chaumette, and S. Boude. 2 - 1/2-D Visual Servoing. Roboticsand Automation, IEEE Transactions on, 15(2):238?250, 1999. ? pages 8,14[64] M. Marey and F. Chaumette. Analysis of classical and new visual servoingcontrol laws. In 2008 IEEE International Conference on Robotics andAutomation, pages 3244?3249. IEEE, May 2008. ? pages 10[65] L. Mejias, S. Saripalli, P. Campoy, G. S. Sukhatme, and L. Mej??as. Visualservoing of an autonomous helicopter in urban areas using feature tracking.Journal of Field Robotics, 23(3-4):185?199, Mar. 2006. ISSN 1556-4959.? pages 12, 14[66] P. Melville and R. J. Mooney. Creating diversity in ensembles usingartificial data. Information Fusion, 6(1):99?111, Mar. 2005. ISSN15662535. ? pages 22[67] N. Metni and T. Hamel. A UAV for bridge inspection: Visual servoingcontrol law with orientation limits. Automation in Construction, 17(1):3?10, Nov. 2007. ISSN 09265805. ? pages 13, 14[68] B. Mettler, M. B. Tischler, and T. Kanade. System identification ofsmall-size unmanned helicopter dynamics. Annual Forum Proceedings -American Helicopter Society, 2:1706?1717, 1999. ? pages 18[69] Y. Mezouar and H. Abdelkader. Visual servoing from 3d straight lines withcentral catadioptric cameras. In Fifth Workshop on Omnidirectional Vision,Omnivis?, 2004. ? pages 6, 16[70] L. Minh and Ha Cheolkeun. Modeling and control of quadrotor MAV usingvision-based measurement. In Strategic Technology (IFOST), 2010International Forum on, pages 70?75, 2010. ? pages 19140[71] V. Mistler, A. Benallegue, and N. M?Sirdi. Exact linearization andnoninteracting control of a 4 rotors helicopter via dynamic feedback. InInternational Workshop on Robot and Human Interactive Communication2001, pages 586?593. IEEE, 2001. ? pages 17, 19[72] M. M. Nawaf. A Visual Servoing Application for Autonomous UnderwaterRobotics. Msc, Heriot-Watt University, 2011. ? pages 14, 26, 55[73] R. G. Nicholas and S. A. Hutchinson. An Asymptotically Stable SwitchedSystem Visual Controller for Eye in Hand Robots. 1(October), 2003. ?pages 8[74] K. Ogata. Discrete-Time Control Systems. Prentice Hall, 2nd ed. edition,1987. ? pages 19, 75, 76[75] P. Oh and K. Allen. Visual servoing by partitioning degrees of freedom.IEEE Transactions on Robotics and Automation, 17(1):1?17, 2001. ISSN1042296X. ? pages 36[76] N. Oza. Boosting with averaged weight vectors. In Multiple ClassifierSystems, volume 4, pages 15?24. Springer Berlin Heidelberg, 2003. ?pages 22, 28[77] N. Oza. Aveboost2: Boosting for noisy data. In Multiple ClassifierSystems, pages 31?40. Springer Berlin Heidelberg, 2004. ? pages 22, 28[78] R. Ozawa and F. Chaumette. Dynamic visual servoing with image momentsfor an unmanned aerial vehicle using a virtual spring approach. AdvancedRobotics, 27(9):683?696, Mar. 2013. ISSN 0169-1864. ? pages 12, 13, 16[79] Parrot ARDrone. June 2011. [online] Available:.http://ardrone.parrot.com/. ? pages 57[80] F. Pla, F. Juste, and F. Ferri. Colour segmentation based on a lightreflection model to locate citrus fruits for robotic harvesting. Computersand Electronics in Agriculture, 9(1):53?70, 1993. ? pages 44[81] P. Pounds, R. Mahony, and P. Corke. Modelling and control of a largequadrotor robot. Control Engineering Practice, 18(7):691?699, July 2010.ISSN 09670661. ? pages 60[82] A. A. Proctor and E. N. Johnson. Vision-only Approach and Landing. InAIAA Guidance, Navigation and Control Conference, number August.AIAA, 2005. ? pages 12141[83] F. Rafi, S. Khan, K. 
Shafiq, and M. Shah. Autonomous target following byunmanned aerial vehicles. In G. R. Gerhart, C. M. Shoemaker, and D. W.Gage, editors, Defense and Security Symposium, volume 6230.International Society for Optics and Photonics, May 2006. ? pages 20[84] M. Rautiainen, T. OJALA, and H. KAUNISKANGAS. Detectingperceptual color changes from sequential images for scene surveillance. InIEICE TRANSACTIONS on Information and Systems, number 12, pages1676?1683, 2001. ? pages 46[85] G. Ridgeway. The state of boosting. Computing Science and Statistics,pages 172?181, 1999. ? pages 21, 23[86] G. Ridgeway, D. Madigan, and T. Richardson. Boosting methodology forregression problems. In Proceedings of the International Workshop on AIand Statistics, pages 152?161, 1999. ? pages 23[87] Y. Rizwan. Towards High Speed Aerial Tracking of Agile Targets. Masterof applied science, University of Waterloo, 2011. ? pages 20[88] H. Romero, R. Benosman, and R. Lozano. Stabilization and location of afour rotor helicopter applying vision. In American Control Conference,pages 3930?3935. IEEE, 2006. ? pages 12[89] P. Rudol, M. Wzorek, and P. Doherty. Vision-based pose estimation forautonomous indoor navigation of micro-scale Unmanned Aircraft Systems.In 2010 IEEE International Conference on Robotics and Automation(ICRA), pages 1913?1920. IEEE, May 2010. ? pages 12, 15[90] S. Saripalli and G. Sukhatme. Landing a helicopter on a moving target. InRobotics and Automation, 2007 IEEE International Conference on, numberApril, pages 2030?2035. IEEE, 2007. ? pages 20[91] S. Saripalli, J. F. Montgomery, and G. S. Sukhatme. Visually guidedlanding of an unmanned aerial vehicle. Robotics and Automation, IEEETransactions on, 19(3):371?380, 2003. ? pages 12, 15, 16[92] A. Savran, B. Sankur, and M. Taha Bilge. Regression-based intensityestimation of facial action units. Image and Vision Computing, 30(10):774?784, Oct. 2012. ISSN 02628856. ? pages 26[93] R. Schapire and Y. Singer. Improved boosting algorithms usingconfidence-rated predictions. Machine Learning, 37(3):297?336, 1999. ?pages 21142[94] O. Shakernia, R. Vidal, C. Sharp, Y. Ma, and S. Sastry. Multiple viewmotion estimation and control for landing an unmanned aerial vehicle. InRobotics and Automation, 2002. Proceedings. ICRA?02. IEEE InternationalConference on, volume 3, pages 2793?2798. IEEE, 2002. ? pages 12[95] D. Shrestha and D. Solomatine. Experiments with AdaBoost. RT, animproved boosting scheme for regression. Neural computation, 18(7):1678?710, July 2006. ISSN 0899-7667. ? pages 23, 25, 106, 110, 113[96] W. Skarbek and A. Koschan. Colour image segmentation-a survey.(October), 1994. ? pages 45[97] D. Solomatine and D. Shrestha. AdaBoost. RT: a boosting algorithm forregression problems. In IEEE International Joint Conference on NeuralNetworks, volume 2, pages 1163?1168. IEEE, 2004. ? pages 23, 146[98] Y. Sun, M. S. Kamel, A. K. Wong, and Y. Wang. Cost-sensitive boostingfor classification of imbalanced data. Pattern Recognition, 40(12):3358?3378, Dec. 2007. ISSN 00313203. ? pages 22[99] O. Tahri and F. Chaumette. Image moments: generic descriptors fordecoupled image-based visual servo. In Robotics and Automation, 2004.Proceedings. ICRA?04. 2004 IEEE International Conference on, numberApril, pages 1185?1190. IEEE, 2004. ? pages 6, 11[100] O. Tahri, F. Chaumette, and Y. Mezouar. New decoupled visual servoingscheme based on invariants from projection onto a sphere. In 2008 IEEEInternational Conference on Robotics and Automation, pages 3238?3243.IEEE, May 2008. ? 
pages 16, 33[101] O. Tahri, Y. Mezouar, F. Chaumette, and P. Corke. Decoupled image-basedvisual servoing for cameras obeying the unified projection model.Robotics, IEEE Transactions on, 26(4):684?697, 2010. ? pages 16[102] C. Teuliere, L. Eck, and E. Marchand. Chasing a moving target from aflying UAV. In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJInternational Conference on, pages 4929?4934. IEEE, Sept. 2011. ?pages 12, 14, 15, 20[103] Thomas G. Dietterich. An experimental comparison of three methods forconstructing ensembles of decision trees: Bagging, boosting, andrandomization. Machine Learning, 40(2):139?157, 2000. ? pages 21, 22143[104] H. Tian and Z. Mao. An Ensemble ELM Based on Modified AdaBoost.RTAlgorithm for Predicting the Temperature of Molten Steel in LadleFurnace. IEEE Transactions on Automation Science and Engineering, 7(1):73?80, Jan. 2010. ISSN 1545-5955. ? pages 24, 26[105] H. Tian, Z. Mao, and Y. Wang. Hybrid modeling of molten steeltemperature prediction in LF. ISIJ international, 48(1):58?62, 2008. ISSN0915-1559. ? pages 24, 26[106] M. Tkalcic. Colour spaces: perceptual, historical and applicationalbackground, volume 1. IEEE, 2003. ? pages 44, 46[107] A. Visser and N. Dijkshoorn. Closing the gap between simulation andreality in the sensor and motion models of an autonomous AR. Drone. InProceedings of the International Micro Air Vehicle Conference and FlightCompetition (IMAV11), 2011. ? pages 18, 27[108] H. Voos. Nonlinear control of a quadrotor micro-UAV usingfeedback-linearization. In Mechatronics, 2009. ICM 2009. IEEEInternational Conference on, number April, pages 1?6. IEEE, 2009. ?pages 18, 57, 61[109] D. Wang and M. Alhamdoosh. Evolutionary extreme learning machineensembles with size control. Neurocomputing, 102:98?110, Feb. 2013.ISSN 09252312. ? pages 23, 112, 113[110] H. Wang, Y. Liu, and D. Zhou. Adaptive visual servoing using point andline features with an uncalibrated eye-in-hand camera. Robotics, IEEETransactions on, 24(4):843?857, 2008. ? pages 11[111] Y. Wang, J. Thunberg, and X. Hu. A transformation of the Position BasedVisual Servoing Problem into a convex optimization problem. In Decisionand Control (CDC), 2012 IEEE 51st Annual Conference on, pages5673?5678, 2012. ? pages 7, 8[112] S. Weiss, D. Scaramuzza, and R. Siegwart. Monocular SLAMbasednavigation for autonomous micro helicopters in GPS denied environments.Journal of Field Robotics, 28(6):854?874, 2011. ? pages 12, 17, 18, 19,27, 57, 61, 62, 64[113] G. Welch and G. Bishop. An introduction to the Kalman filter. InSIGGRAPH, Course 8, 2001. ? pages 74144[114] K. E. Wenzel, P. Rosset, and A. Zell. Low-Cost Visual Tracking of aLanding Place and Hovering Flight Control with a Microcontroller.Journal of Intelligent and Robotic Systems, 57(1-4):297?311, Aug. 2009.ISSN 0921-0296. ? pages 12, 18[115] A. Wu, E. Johnson, and A. Proctor. Vision-aided inertial navigation forflight control. Journal of Aerospace Computing, Information, andCommunication, 2(9):348?360, 2005. ? pages 12, 17[116] G. Xu, Y. Zhang, S. Ji, Y. Cheng, and Y. Tian. Research on computervision-based for UAV autonomous landing on a ship. Pattern RecognitionLetters, 30(6):600?605, Apr. 2009. ISSN 01678655. ? pages 16[117] Y.-H. Yang, Y.-C. Lin, Y.-F. Su, and H. H. Chen. Music emotionclassification: A regression approach. In Multimedia and Expo, pages208?211. IEEE, 2007. ? pages 26[118] H. Zhang and J. P. Ostrowski. Visual servoing with dynamics: control of anunmanned blimp. 
In International Conference on Robotics and Automation, volume 1, pages 618–623. IEEE, 1999. → pages 13

[119] Z. Zhang. Flexible camera calibration by viewing a plane from unknown orientations. In The Proceedings of the Seventh IEEE International Conference on, pages 666–673. IEEE, 1999. → pages 32

[120] Z. Zhou, J. Wu, and W. Tang. Ensembling neural networks: Many could be better than all. Artificial Intelligence, 137(1):239–263, May 2002. ISSN 00043702. → pages 112, 113

Appendices

Appendix A
Supporting Materials

A.1 Appendix: Original Adaboost.RT

This section shows the original Adaboost.RT algorithm, first published in [97]. The algorithm is shown for reference and for comparison of Adaboost.MRT with the original algorithm. The only modification is the renaming of the absolute error function ARE_t(i) to the misclassification function Er_t(i). This change was made for consistency with the other algorithms.

Algorithm 2 Adaboost.RT algorithm

1. Input:
   - Sequence of m examples <x_1, y_1>, ..., <x_m, y_m>, where the output y ∈ R
   - Weak learning algorithm ("Weak Learner")
   - Integer T specifying the number of iterations or machines
   - Threshold φ (0 < φ) used to classify samples as correct or incorrect

2. Initialize:
   - Machine number or iteration, t = 1
   - Set the sampling distribution D_t(i) = 1/m for all i
   - Set the error rate ε_t = 0

3. Iterate while t ≤ T:
   - Sample N examples from the sequence of m, using weights D_t, with replacement
   - Give these N samples to the Weak Learner
   - Build the regression model f_t(x) → y
   - Calculate the misclassification function on all m examples:

     Er_t(i) = f(f_t(x_i), y_i)   (A.1)

   - Calculate the misclassification error rate of f_t(x):

     \epsilon_t = \sum_{i:\, Er_t(i) > \phi} D_t(i)

   - Set the weight-updating parameter \beta_t = \epsilon_t^n, where n is the power coefficient (e.g., linear, square or cubic)
   - Update the distribution of weights D_t as

     D_{t+1}(i) = \frac{D_t(i)}{Z_t} \times \begin{cases} \beta_t & \text{if } Er_t(i) \le \phi \\ 1 & \text{otherwise} \end{cases}   (A.2)

     where Z_t is a normalization factor such that D_{t+1} is a distribution.
   - Set t = t + 1.

4. Output the final hypothesis:

   f_{fin}(x) = \frac{\sum_t \log(1/\beta_t)\, f_t(x)}{\sum_t \log(1/\beta_t)}   (A.3)

Appendix B
Controller Designs

B.1 System Identification

B.1.1 Pitch System Identification Frequency Sweep Information

Table B.1: Frequency Sweep Information for Pitch Dynamics System Identification

Test | Length (sec) | Lower Freq. Limit (Hz) | Upper Freq. Limit (Hz)
1    | 30 | 0.01 | 1
2    | 30 | 0.01 | 1
3    | 25 | 0.5  | 1.25
4    | 25 | 0.75 | 1.5
5    | 17 | 1    | 2
6    | 17 | 2    | 3
7    | 17 | 3    | 4
8    | 17 | 4    | 5
9    | 17 | 3    | 5

B.1.2 Roll System Identification Frequency Sweep Information

Table B.2: Frequency Sweep Information for the Roll Dynamics System Identification

Test | Length (sec) | Lower Freq. Limit (Hz) | Upper Freq. Limit (Hz)
1    | 20 | 0.01 | 0.5
2    | 20 | 0.01 | 0.5
3    | 20 | 0.5  | 0.6
4    | 20 | 2    | 4
5    | 20 | 2    | 5
6    | 25 | 0.5  | 1.5
7    | 25 | 0.05 | 0.5
8    | 30 | 0.2  | 0.8
9    | 30 | 0.25 | 0.75
10   | 30 | 0.16 | 1
11   | 30 | 0.5  | 2.5

B.2 LQR with Integral Term

This section describes the procedure for the design of an LQR controller with an integral pre-compensation term to remove steady-state errors. This controller architecture is capable of handling unmodelled wind disturbances, but it was not used because it reduces system speed. The state-space representation of the controller with the addition of an integral term is

\hat{x}(k+1) = G\hat{x}(k) + Hu(k)   (B.1)
y(k) = C\hat{x}(k)   (B.2)
v(k) = v(k-1) + r(k) - \hat{y}(k)   (B.3)
u(k) = -K\hat{x}(k) + K_I v(k)   (B.4)

where the term v(k) is the summation of the error, r(k) represents the desired velocity, and \hat{y} is the estimate of the current velocity. This new system can be transformed into state space and solved for the optimal gains K and K_I. Equation B.3 can be rearranged and rewritten in terms of the state-space matrices as
v(k+1) = v(k) + r(k+1) - y(k+1)   (B.5)
       = -CG\hat{x}(k) + v(k) - CHu(k) + r(k+1).   (B.6)

The state-space representation of the entire system can then be rewritten in the new form

\begin{bmatrix} \hat{x}(k+1) \\ v(k+1) \end{bmatrix} =
\begin{bmatrix} G & 0 \\ -CG & 1 \end{bmatrix}
\begin{bmatrix} \hat{x}(k) \\ v(k) \end{bmatrix} +
\begin{bmatrix} H \\ -CH \end{bmatrix} u(k) +
\begin{bmatrix} 0 \\ 1 \end{bmatrix} r(k+1)   (B.7)

with the optimal gain

\tilde{K} = \begin{bmatrix} K & -K_I \end{bmatrix}.   (B.8)

The new system represented by Equation B.7 is solved for the new optimal gain \tilde{K}. The value K_I is the integral gain and K is applied to the estimate of the state. A short numerical sketch of this design procedure is given at the end of Appendix C.

Appendix C
Matlab Code

C.1 ARDrone Communication Functions

This section presents sample Matlab code used in the thesis. The computer should be connected to the ARDrone via WiFi. The variable u contains a udp object from Matlab that points Matlab to the IP address. The term iSeq is the sequence number; the ARDrone executes the most recent command sent, based on iSeq.

1 %% Establish Connection
2 IP='192.168.1.1';
3 at_port=5556;
4 u=udp(IP,'LocalPort',at_port+100,'RemotePort',at_port);
5 fopen(u); %make connection
6 iSeq=1;

Send AT command function

1 function sendAT(u,string)
2 %Sends commands over udp connection
3 fwrite(u,[string,char(hex2dec('0d'))]);
4 end

Take Off Function

1 function iSeq=ARtakeOff(u,iSeq)
2 %This function sends a level command and then takes off ...with the ARDRONE
3 %level ardrone
4 disp('Sending Flat Trim');
5 sendAT(u,['AT*FTRIM=',int2str(iSeq)]);
6 iSeq=iSeq+1;
7 sendAT(u,['AT*FTRIM=',int2str(iSeq)]);
8 iSeq=iSeq+1;
9
10 %Takeoff
11 disp('Sending Takeoff');
12 sendAT(u,['AT*REF=',int2str(iSeq),',',int2str(290718208)]);
13 iSeq=iSeq+1;
14 sendAT(u,['AT*REF=',int2str(iSeq),',',int2str(290718208)]);
15 iSeq=iSeq+1;
16 end

Maintain Communication (Watchdog) Function

If no commands need to be sent, use this function to prevent the ARDrone communication from timing out.

1 sendAT(u,['AT*COMWDG=',num2str(iSeq)]);
2 iSeq=iSeq+1;

Send Attitude commands

The inputs into the system are normalized on [-1,1]. The following sends an attitude command:

1 function iSeq=ARMove(ud,iSeq,flag,roll,pitch,vz,yaw)
2 %This function sends a move (PCMD) command to the ARDrone ...that changes the
3 %orientations.
The parameters are sent to the drone as ...32bit integer value.4 %The parameters are:5 % ud -udp object that is used to send commands over ...network6 % iSeq -The Sequence number7 % flag -should be 1 or 0, for combined yaw mode8 % roll -roll [-1,1] negative = left-tilt flies ...leftward9 % pitch -pitch [-1,1] negative = nose-down flies ...frontward10 % vz -velocity perpendicular to drone plane [-1,1]11 % yaw -angular speed, positive value makes it turn ...right [-1,1]12 roll=num2str(typecast(single(roll),'int32'));13 pitch=num2str(typecast(single(pitch),'int32'));14 vz=num2str(typecast(single(vz),'int32'));15 yaw=num2str(typecast(single(yaw),'int32'));1617 sendAT(ud,['AT*PCMD=',num2str(iSeq),',',num2str(flag),',' , ...18 roll,',',pitch,',',vz,',',yaw])19 iSeq=iSeq+1;20 endC.2 Kalman Filter Function1 function [ke]=kal_est(mode,entries_or_ke,dims_or_t,G,H,C,u,yt)2 %kalman filter estimate3 %mode = init or update4 %entries_or_ke =gives the number of entries for init or ...existing ke object5 %dims_or_n =gives the dimensions or the index in hte ...form [n,r]6 % where n = num states, r = num inputs7 %t =Time location, (also k)8 %G,H,C =System matrices9 %u =input vector10 %yt =y(t) output scalar15311 %***********************************************12 % NOTE: You have to manually set ke.Rv and ke.Rw13 %***********************************************14 switch(mode)15 case 'init'16 entries=entries_or_ke;17 dims=dims_or_t;1819 if size(dims,2)==1 %input state and Kalman gain n-by-120 %Initiate with zero values21 ke.xhat=zeros(dims,entries);22 ke.rr=zeros(1,entries);23 ke.ur=zeros(1,entries);24 ke.Pkk=zeros(dims,dims,entries);25 ke.pkk(:,:,1)=ones(dims,dims);26 ke.Kk=zeros(dims,entries);27 ke.xhatkk=zeros(dims,entries);28 ke.xhatk1k=zeros(dims,entries);29 else %input state and Kalman gain n-by-130 ke.xhat=zeros(dims(1),entries);31 ke.rr=zeros(1,entries);32 ke.ur=zeros(1,entries);33 ke.Pkk=zeros(dims(1),dims(1),entries);34 ke.pkk(:,:,1)=ones(dims(1),dims(1));35 ke.Kk=zeros(dims(1),dims(2),entries);36 ke.xhatkk=zeros(dims(1),entries);37 ke.xhatk1k=zeros(dims(1),entries);38 end39 case 'update'40 t=dims_or_t;41 ke=entries_or_ke;42 ke.xhatk1k(:,t)=G*ke.xhatkk(:,t-1)+H*u(:,t-1);43 Pk1k=G*ke.Pkk(:,:,t-1)*G'+ke.Rw;44 %Step 3: Measurement update, filtering45 %compute Kalman gain46 try47 ke.Kk(:,:,t)=Pk1k*C'/(ke.Rv+C*Pk1k*C');48 %correct state estimte4950 %check for dropped packets:15451 if abs(yt)<1e-1252 yt=C*ke.xhatkk(:,t-1);53 end54 xhatk1k1=ke.xhatk1k(:,t)+ke.Kk(:,:,t)*(yt-C*ke.xhatk1k(:,t));55 ke.xhatkk(:,t)=xhatk1k1;56 %update the new error covariance matrix57 ke.Pkk(:,:,t)=Pk1k-ke.Kk(:,:,t)*C*Pk1k;58 catch59 ke.Kk(:,t)=Pk1k*C'/(ke.Rv+C*Pk1k*C');6061 %correct state estimte6263 %check for dropped packets:64 if abs(yt)<1e-1265 yt=C*ke.xhatkk(:,t-1);66 end67 xhatk1k1=ke.xhatk1k(:,t)+ke.Kk(:,t)*(yt-C*ke.xhatk1k(:,t));68 ke.xhatkk(:,t)=xhatk1k1;69 %update the new error covariance matrix70 ke.Pkk(:,:,t)=Pk1k-ke.Kk(:,t)*C*Pk1k;71 end7273 end74757677 endC.3 Visual Servoing with UAV1 %% Drone Matlab-C++ connection Template2 % The data receiving will be done via the c++ script. ...Matlab will recieve3 % and display the data. 
The data handled by the C++ library ...will be Navdata4 % and Image data.5 % This script will still have to send WatchDog commands in ...order to keep1556 % the line open and allow the Done to send data78 %% Close previous Connections (if any open)9 % clears variables as well10 clear tlg11 clear xhatkk12 clear ur13 pause(2)14 try15 try16 ARland(u,iSeq+1);17 end18 disp('Initial Clean-Up')19 fclose(u);20 delete(u);21 clear('u');22 %iSeq=ARland(u,iSeq+1);23 close allt24 end2526 %% Empty Folder27 folderIm='C:\picsDome\';28 dirIm=dir([folderIm,'*.bmp']); %find Image folder29 for i=2:size(dirIm,1)30 delete([folderIm,dirIm(i).name])31 end3233 %% Variable Initilization34 timeRun=50; %length of flight in seconds35 folderOut='C:\picsAR\';36 folderIm='C:\picsDome\';37 iSeq=40;38 entries=timeRun*20;39 emergency=10;40 global log;41 global Im;42 global imhsv;43 global blankg;44 global data;4515646 %% Control Variables Initialization47 % control inputs48 yaw=zeros(1,entries);49 pitch=zeros(1,entries);50 thrust=zeros(1,entries);51 roll=zeros(1,entries);52 %Desired velocities53 rp=zeros(1,entries); %desired values input r(t) x, pitch ...front back54 rr=zeros(1,entries); %desired values input r(t) y, roll ...left right55 rz=zeros(1,entries); %desired values input r(t) z, up down5657 % Inititate state estimation58 keroll=kal_est('init',entries,5);59 kepitch=kal_est('init',entries,5);60 vr=zeros(1,entries);61 vp=zeros(1,entries);62 ur=zeros(1,entries);63 up=zeros(1,entries);64 screw=zeros(6,entries);65 err=zeros(3,entries);66 omy=zeros(1,entries);67 r=zeros(1,entries);68 sone=zeros(entries,3);69 stwo=zeros(entries,3);707172 %% Initiate image and camera variables73 uo=zeros(2,entries);74 vo=zeros(2,entries);75 fu=200;76 fv=200;77 ures=320;78 vres=240;79 load('Calib_ResultsARDRONE.mat','fc','cc')80 %Load Image for desired feature coordinates81 Im=imread('imdes_med.bmp');82 [ud,vd]=detect_hsv();83 [x,y]=pix2coord(ud,vd,fc,cc);15784 zetai=0;85 lambda=(zetai+sqrt(1+(1-zetai?2)*(x.?2+y.?2)))./(x.?2+y.?2+1);86 Sd=bsxfun(@times,lambda,[x,y,ones(size(x))]);87 % STEP 2: Construct Interaction matrix88 r=sqrt(1-(Sd(1,:)*Sd(2,:)')?2); %find r (little circle radius)89 sd=Sd(1,:)/r;90 R=0.1;9192 %% Kalman Observer Controller93 keroll.Rv=0.01;94 keroll.Rw=0.1*diag([1,1,1,1,1]);95 kepitch.Rv=0.01;96 kepitch.Rw=0.1*diag([1,1,1,1,1]);9798 %LQR Gains99 Klqrp =[0.0561 -0.1062 0.0651 -0.0148 0.0018];100 Klqrr =[0.1030 -0.1798 0.0980 -0.0176 0.0020];101102 Nr=3; %pre-compensation103 Np=1.8; %pre-compensation104105 %System Matrices106 Gr=[2.7857 -2.7752 1.1680 -0.1993 0.0208;1.0000 ...0 0 0 0; 0 ...1.0000 0 0 0; 0 ...0 1.0000 0 0; 0 0 ...0 1.0000 0];107 Hr=[1;0;0;0;0];108 Cr=[-0.0003 0.0019 0.0169 -0.0140 -0.0023];109110 Gp =[2.9143 -3.0971 1.4516 -0.3013 0.0326; ...1.0000 0 0 0 0; ...0 1.0000 0 0 0; 0 ...0 1.0000 0 0; 0 ...0 0 1.0000 0];111 Hp=[1;0;0;0;0];112 Cp=[0.0002 -0.0016 0.0190 -0.0134 -0.0028];113114 %% Vision Kalman Filter115 [ke1]=kal_est('init',533,[4,2]);158116 [ke2]=kal_est('init',533,[4,2]);117 t=0.05;118 G=[[1,t,;0,1],zeros(2,2);zeros(2,2),[1,t,;0,1]];119 H=[0;0;0;0];120 C=[1,0,0,0;0,0,1,0];121122 Rw=diag([1,1,1,1]);123 Rv=0.7*diag([1,1]);124125 ke1.Rv=Rv;126 ke2.Rv=Rv;127 ke1.Rw=Rw;128 ke2.Rw=Rw;129130 %initialize with desired values131 ke1.xhatkk(:,1)=[ud(1);0;vd(1);0];132 ke2.xhatkk(:,1)=[ud(2);0;vd(2);0];133134135 uinputv=zeros(533);136137138 %% Establish Connection139 IP='192.168.1.1';140 at_port=5556;141 u=udp(IP,'LocalPort',at_port+100,'RemotePort',at_port);142 fopen(u); %make connection143144 [status, 
...result]=system('"C:\ARDrone_SDK_Version_1_7\Examples\...145 Win32\VCProjects\ARDrone\Debug\Win32Client.exe"&');146 ARLog(entries); %create empty log object147148 %% Take Off ++++++++++++++++++++++++++++++++++++++++++149 iSeq=ARtakeOff(u,iSeq); %Take Off ARDRONE150 iSeq=iSeq+1;151152 %% Decay Transients from Take OFF ++++++++++++++++++++153154 for m_num=1:40159155 pause(0.075)156 sendAT(u,['AT*COMWDG=',num2str(iSeq)]);157 iSeq=iSeq+1;158 end159 disp('Stop Loiter')160161 %% Main Control Loop +++++++++++++++++++++++++++++++++162 try163 tic164 tcur=toc;165 t=2;166 while tcur<timeRun167 tcur=toc;168 %% Navdata169 x=0; y=0;170 ARLog(log,t, 'C:\picsDome\navdata.csv',[ x,y],tcur);171 %% Image read172 f0=java.io.File(folderIm);173 xitem=f0.list();174 in=size(xitem,1);175 test= regexp(char(xitem(in)),'.bmp');176 while isempty(test)177 in=in-1;178 test= regexp(char(xitem(in)),'.bmp');179 end180 Im=imread([folderIm,char(xitem(in))]);181182 %maintain communication183 sendAT(u,['AT*COMWDG=',num2str(iSeq)]);184 iSeq=iSeq+1;185186 %% Visual Servoing CONTROLLER187 %%Detect and Track Features188 [uo(:,t),vo(:,t)]=detect_hsv();189 if ( uo(:,t)<1e-8 ) %check for detection190 [ke1]=kal_est('update',ke1,t,G,H,C,...191 uinputv,[ud(1);vd(1)]); ...%desired192 else193 [ke1]=kal_est('update',ke1,t,G,H,C,...160194 uinputv,[uo(1,t);vo(1,t)]);195 end196 if ( vo(:,t)<1e-8 )%check for detection197 [ke2]=kal_est('update',ke1,t,G,H,C,...198 uinputv,[ud(2);vd(2)]);199 else200 [ke2]=kal_est('update',ke2,t,G,H,C,...201 uinputv,[uo(2,t);vo(2,t)]);202 end203 Im=print_cross(Im,ke1.xhatkk(1,t),...204 ke1.xhatkk(3,t),5,[200,50,50]);205 Im=print_cross(Im,ke2.xhatkk(1,t),...206 ke2.xhatkk(3,t),5,[200,50,50]);207 Im=print_cross(Im,uo(1,t),vo(1,t),10,[255,255,46]);208 Im=print_cross(Im,uo(2,t),vo(2,t),10,[255,255,46]);209210 % Spherical Projection211 [x,y]=pix2coord([ke1.xhatkk(1,t);ke2.xhatkk(1,t)],...212 [ke1.xhatkk(3,t);ke2.xhatkk(3,t)],fc,cc);213 zetai=0;214 lambda=(zetai+sqrt(1+(1-zetai?2)*(x.?2+y.?2)))./(x.?2+y.?2+1);215 S=bsxfun(@times,lambda,[x,y,ones(size(x))]);216 % STEP 2: Construct Interaction matrix217 r(t)=sqrt(1-(S(1,:)*S(2,:)')?2); %find r (little ...circle radius)218 s=S(1,:)/r(t);219 sone(t,:)=S(1,:);220 stwo(t,:)=S(2,:);221 Ls1v=-1/R*diag([1,1,1]);%Interaction Matrix222223 Linv=pinv(Ls1v); %pseudo inverse224 lambda=1.1;225 err(:,t)=s-sd;226 screw(:,t)=[-lambda*Linv*(err(:,t));zeros(3,1)];227 %Transformation Matrix228 contr_vec=[0,0,1;1,0,0;0,1,0]*screw(1:3,t);229 rp(t)=contr_vec(1); %x front back (fron negative pitch)230 rr(t)=contr_vec(2); %y left right231 rz(t)=-2*contr_vec(3); %z up down232161233 %Maximum input ceiling234 rp(t)=min(abs(rp(t)),0.3)*sign(rp(t));235 rr(t)=min(abs(rr(t)),0.3)*sign(rr(t));236 rz(t)=min(abs(rz(t)),0.5)*sign(rz(t));237238239 %% Define Controllers and Send Commands240 %start updating Kalman Filters241 if tcur>2242 [keroll]=kal_est('update',keroll,t,Gr,Hr,Cr,roll,log.vy(t)/1000);243 [kepitch]=kal_est('update',kepitch,t,Gp,Hp,Cp,pitch,log.vx(t)/1000);244245 up(t)=Np*rp(t)-Klqrp*kepitch.xhatkk(:,t);246 ur(t)=Nr*rr(t)-Klqrr*keroll.xhatkk(:,t);247248 %Send to Feedback Linearization249 [pitch(t),roll(t),Ts] = ...AR_feed_lin(up(t),ur(t),0.42*9.81);250251 %% Test for "Manual"- Landing252 if abs(log.z(t))<215 %tripped253 emergency=emergency-1; %reduce counter254 if emergency<2 %Down Too Long, abort255 disp('ABORT! 
ABORT!')256 break257 end258 else259 emergency=10;260 end261 end262 iSeq=ARMove(u,iSeq,1,roll(t),-pitch(t),rz(t),0);263 digi='%.3f';264 disp([num2str(tcur,'%4.3f'),' ...r:',num2str(roll(t),digi),' ...p:',num2str(pitch(t),digi),' ...z:',num2str(rz(t),digi)]);265 imwrite(Im,[folderOut,sprintf('%4.4d',t),'.bmp'],'BMP');266 t=t+1;267 end268 %% Clean Up and Land162269 ARLog(0 ,t);270 yaw(t:end)=[];271 pitch(t:end)=[];272 thrust(t:end)=[];273 roll(t:end)=[];274 kepitch.xhatkk(:,t:end)=[];275 keroll.xhatkk(:,t:end)=[];276 rp(t:end)=[];277 rz(t:end)=[];278 rr(t:end)=[];279 screw(:,t:end)=[];280 uo(:,t:end)=[];281 vo(:,t:end)=[];282 vr(:,t:end)=[];283 vp(:,t:end)=[];284 err(:,t:end)=[];285 iSeq=ARland(u,iSeq+1);286 disp('Clean Finish')287 iSeq=ARland(u,iSeq+1);288 fclose(u);289 delete(u);290 clear('u');291292 catch exc293 disp('Aborted');294 disp(exc);295 iSeq=ARland(u,iSeq+1);296 fclose(u);297 delete(u);298 clear('u');299 end300 disp('Done')163
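To complement Appendix B.2, the listing below is a minimal sketch, not part of the thesis flight code, of how the augmented system of Equation B.7 could be assembled and solved for the combined gain \tilde{K} = [K  -K_I] with Matlab's dlqr function. The weighting matrices Q and R are placeholder values chosen only for illustration, and G, H, C stand for any of the identified discrete-time models (for example Gp, Hp, Cp above).

% Minimal sketch (illustration only): LQR with integral action for the
% identified discrete-time velocity dynamics, following Appendix B.2.
% G (n-by-n), H (n-by-1) and C (1-by-n) are the identified matrices;
% Q and R below are placeholder weights, not the values used in the thesis.
n    = size(G,1);
Ga   = [G, zeros(n,1); -C*G, 1];   % augmented dynamics of [x; v], Eq. B.7
Ha   = [H; -C*H];                  % augmented input matrix
Q    = blkdiag(eye(n), 10);        % state and integrator weights (example)
R    = 1;                          % input weight (example)
Kbar = dlqr(Ga, Ha, Q, R);         % optimal gain for u = -Kbar*[x; v]
K    = Kbar(1:n);                  % state-feedback part, Eq. B.8
KI   = -Kbar(n+1);                 % integral gain, Eq. B.8

The resulting control input is then u(k) = -K*xhat(k) + KI*v(k), matching Equation B.4, with v(k) accumulated as in Equation B.3.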
