An indoor optical wireless location comparison between an angular receiver and an image receiver Arafa, Ahmed Tarek 2015

An Indoor Optical Wireless Location Comparison between an Angular Receiver and an Image Receiver

by

Ahmed Tarek Arafa

B.Sc. Hons., Kuwait University, 2006
M.Sc., The University of Calgary, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

in

THE COLLEGE OF GRADUATE STUDIES
(Electrical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA
(Okanagan)

March 2015

© Ahmed Tarek Arafa, 2015

Abstract

In this work, the positioning accuracies of two novel photoreceivers are demonstrated. The two photoreceivers, namely an angular receiver and an image receiver, estimate their position via triangulation by measuring the angle of arrival (AOA) of light from LED optical beacons. The angular receiver consists of three PDs assembled in a corner-cube structure, while the image receiver consists of a custom-made microlens over top of a CMOS array image sensor. The mean AOA accuracy of the angular receiver was found to be 2°, whereas the mean AOA accuracy of the image receiver was found to be 0.5°. The effect of LED optical beacon and photoreceiver geometry was quantified in terms of Dilution of Precision (DOP). The position accuracy of the photoreceivers was quantified while static and in motion. In the static case, the mean position accuracy of the angular receiver was found to be 5 cm, whereas the mean position accuracy of the image receiver was found to be 2.5 cm. While the photoreceivers were in motion, the mean position accuracy of the angular receiver was found to be on the order of 10 cm, whereas the mean position accuracy of the image receiver was found to be 4 cm.

Preface

This work has been done under the supervision of Dr. Richard Klukas. Portions of this work have been published in the following journals, book chapter and conference papers.

− A. Arafa, S. Dalmiya, R. Klukas, and J. F. Holzman, "Angle-of-Arrival Reception for Optical Wireless Location Technology," Optics Express, pp. 7755 – 7766, vol. 23, no. 6, March 23, 2015.

− A. Arafa, X. Jin, and R. Klukas, "Wireless Indoor Optical Positioning with a Differential Photosensor," IEEE Photonics Technology Letters, pp. 1027 – 1029, vol. 24, no. 12, June 15, 2012.

− X. Jin, A. Arafa, B. A. Hristovski, R. Klukas, and J. F. Holzman, "Differential photosensors for optical wireless communication and location technologies," book chapter in Optical Imaging and Sensing: Technology, Devices and Applications, A. Khosla, D. Kim, and K. Iniewski, Eds.: CRC Press, 2015. (invited)

− A. Arafa, X. Jin, J. F. Holzman, and R. Klukas, "An Integrated Photosensing System for Indoor Optical Positioning," Proceedings of GNSS 2011, The Institute of Navigation, Portland, Oregon, pp. 1758 – 1763, September 20 – 23, 2011.

− A. Arafa, R. Klukas, J. F. Holzman, and X. Jin, "Towards a Practical Indoor Lighting Positioning System," Proceedings of GNSS 2012, The Institute of Navigation, Nashville, Tennessee, pp. 2450 – 2453, September 17 – 21, 2012.

− A. Arafa, X. Jin, D. Guerrero, R. Klukas, J. F. Holzman, "Imaging Sensors for Optical Wireless Location Technology," Proceedings of GNSS 2013, The Institute of Navigation, Nashville, Tennessee, pp. 1020 – 1023, September 16 – 20, 2013.

Portions of this work have been submitted for publication in:

− A. Arafa, X. Jin, R. Klukas, and J. F. Holzman, "Characterization of Image Receivers for Optical Wireless Location Technology," IEEE Photonics Technology Letters.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Symbols
List of Abbreviations
Acknowledgements
Dedication

Chapter 1: Introduction
1.1 Motivation
1.2 Research Objectives
1.3 Thesis Outline

Chapter 2: Indoor Positioning Techniques
2.1 Received Signal Strength
2.2 Time of Arrival
2.3 Scene Analysis
2.4 Inertial Navigation System
2.5 Visible Light
2.6 Summary

Chapter 3: Angular Receiver Positioning
3.1 The Angular Receiver
3.1.1 Angular Response
3.1.2 Intensity Response
3.1.3 Multipath Response
3.2 Angle-Of-Arrival Measurement Error Characterization
3.3 Positioning Analysis Using Dilution of Precision
3.4 Positioning Performance
3.4.1 Optical RSS
3.4.2 Optical AOA
3.4.3 Optical AOA Precision
3.4.4 Optical AOA Accuracy

Chapter 4: Image Receiver Positioning
4.1 Image Receiver
4.1.1 Image Sensor and Microlens
4.1.2 Colour and Frequency Detection
4.2 Angle-Of-Arrival Measurement Error Characterization
4.3 Positioning Analysis Using Dilution of Precision
4.4 Positioning Performance

Chapter 5: Receivers' Performance while in Motion
5.1 Angular Receiver Positioning
5.1.1 Low Speed 10 cm/s
5.1.2 Medium Speed 50 cm/s
5.1.3 Average Walking Speed 139 cm/s
5.1.4 Summary
5.2 Image Receiver Positioning
5.2.1 Very Low Speed 5 cm/s
5.2.2 Low Speed 10 cm/s

Chapter 6: Conclusions and Recommendations
6.1 Conclusions
6.1.1 Angular Receiver
6.1.2 Image Receiver and Angular Receiver Performance Comparison
6.2 Recommendations

Bibliography

List of Tables

Table 2.1 Summary of indoor positioning techniques.
Table 3.1 Numerical fitting parameters.
Table 3.2 φ error precision.
Table 3.3 θ error precision.
Table 3.4 φ error mean and standard deviation.
Table 3.5 θ error mean and standard deviation.
Table 4.1 2-D and 3-D positioning error results for the wide- and ultra-wide FOV microlenses.
Table 5.1 Mean θ error for 10 cm/s at 5 Hz AOA measurement rate.
Table 5.2 Mean φ error for 10 cm/s at 5 Hz AOA measurement rate.
Table 5.3 The 2-D and 3-D error statistics for 10 cm/s at 5 Hz AOA measurement rate.
Table 5.4 Mean θ error for 10 cm/s at 20 Hz measurement update.
Table 5.5 Mean φ error for 10 cm/s at 20 Hz measurement update.
Table 5.6 The 2-D and 3-D error statistics for 10 cm/s at 20 Hz AOA measurement rate.
Table 5.7 Mean θ error for 50 cm/s at 5 Hz AOA measurement rate.
Table 5.8 Mean φ error for 50 cm/s at 5 Hz AOA measurement rate.
Table 5.9 The 2-D and 3-D error statistics for 50 cm/s at 5 Hz AOA measurement rate.
Table 5.10 Mean θ error for 50 cm/s at 20 Hz AOA measurement rate.
Table 5.11 Mean φ error for 50 cm/s at 20 Hz AOA measurement rate.
Table 5.12 The 2-D and 3-D error statistics for 50 cm/s at 20 Hz AOA measurement rate.
Table 5.13 Mean θ error for 139 cm/s at 20 Hz AOA measurement rate.
Table 5.14 Mean φ error for 139 cm/s at 20 Hz AOA measurement rate.
Table 5.15 The 2-D and 3-D error statistics for 139 cm/s at 20 Hz AOA measurement rate.
Table 5.16 Summary of 2-D and 3-D error statistics.
Table 5.17 The 2-D and 3-D error statistics for the image receiver moving at a speed of 5 cm/s.
Table 5.18 The 2-D and 3-D error statistics for the image receiver moving at a speed of 10 cm/s.

List of Figures

Figure 2.1 Camera coordinate frame.
Figure 3.1 The angular receiver.
Figure 3.2 Schematic of the angular receiver showing azimuthal and polar angles and photodiode side numbers.
Figure 3.3 White LED spectrum.
Figure 3.4 Analytical results are shown for normalized differential photocurrents ∆i1(φ, θ) and ∆i2(φ, θ) versus azimuthal φ and polar θ angles.
Figure 3.5 Circuit block diagram stages of amplification and bandpass filter.
Figure 3.6 Butterworth bandpass circuit schematic to enhance the photodiode's output SNR.
Figure 3.7 Direct intensity characterization of measured azimuthal φ and polar θ angles versus incident optical intensity.
Figure 3.8 Multipath characterization experiment setup.
Figure 3.9 Reflected intensity characterization of measured azimuthal φ and polar θ angles versus reflective surface distance for three surfaces (plywood, stainless steel, drywall).
Figure 3.10 Percentage ratio of reflected optical power to total incident optical power on PD1 for drywall.
Figure 3.11 Measured angle error ∆φ as a function of φ and θ.
Figure 3.12 Measured angle error ∆θ as a function of φ and θ.
Figure 3.13 Schematic of the optical AOA positioning system. The (x', y', z') represent the angular receiver's body frame, while the (x, y, z) represent the navigation reference frame.
Figure 3.14 The predicted 3-D positioning error standard deviation σp for optical AOA positioning with two LED optical beacons A1 and A2.
Figure 3.15 DOP (cm/deg) for optical AOA positioning with two LED optical beacons A1 and A2.
Figure 3.16 The 3-D predicted positioning error standard deviation σp for optical AOA positioning with four LED optical beacons B1, B2, B3, and B4.
Figure 3.17 The 3-D DOP (cm/deg) for optical AOA positioning with four LED optical beacons B1, B2, B3, and B4.
Figure 3.18 The 3-D positioning error for optical RSS positioning.
Figure 3.19 Simulated 3-D range DOP for optical RSS positioning setup.
Figure 3.20 Angular receiver orientation. (θR, φR) represent the angular receiver body frame (x', y', z') rotation with respect to the reference frame (x, y, z).
Figure 3.21 The 3-D positioning error for optical AOA positioning.
Figure 3.22 AOA measurement precision histograms for φ1 and θ1.
Figure 3.23 AOA measurement precision histograms for φ2 and θ2.
Figure 3.24 AOA measurement precision histograms for φ3 and θ3.
Figure 3.25 AOA measurement precision histograms for φ4 and θ4.
Figure 3.26 AOA measurement accuracy histograms for φ1 and θ1.
Figure 3.27 AOA measurement accuracy histograms for φ2 and θ2.
Figure 3.28 AOA measurement accuracy histograms for φ3 and θ3.
Figure 3.29 AOA measurement accuracy histograms for φ4 and θ4.
Figure 3.30 Angular receiver orientation along yz axis.
Figure 3.31 Maximum square grid side-length capability for the angular receiver.
Figure 4.1 An illustration of an OWL system showing the LED optical beacons and the image receiver consisting of a microlens and a CMOS sensor.
Figure 4.2 HSV colour representation.
Figure 4.3 An image of four different colour LEDs (red, green, blue and white) appearing as red, green, blue and white spots.
Figure 4.4 Colour discrimination (implemented using an HSV algorithm) detects each coloured spot in Fig. 4.3 and draws a circle around it.
Figure 4.5 Colour discrimination implemented using the RGB model.
Figure 4.6 FFT analysis performed for 100 and 200 frames on colour blue. The left plot shows interference at 70 Hz, and the right plot shows a reduction in interference.
Figure 4.7 Schematic views and SEM images for (a) the image sensor with a wide FOV microlens, having an α = 30° contact angle, and (b) the image sensor with an ultrawide FOV microlens, having an α = 90° contact angle. The microlens radius is r. Incident AOAs on the image sensors are defined on the (x', y', z') coordinates of the body frame. The focal spot location on the CMOS array is defined by its azimuthal angle, φIS, and radial distance, ρIS.
Figure 4.8 Low intensity LED focal spot size image (top) and high intensity LED focal spot size image.
Figure 4.9 Azimuthal characterization results, showing the AOA angle φ as a function of the measured φIS angle, for image sensors with the (a) wide FOV microlens and (b) ultrawide FOV microlens.
Figure 4.10 Polar characterization results, showing the AOA angle θ as a function of the measured normalized ρIS/r distance, for the image sensors with the (a) wide FOV microlens and (b) ultrawide FOV microlens.
Figure 4.11 Illustration of LED optical beacon geometry for DOP calculation.
Figure 4.12 DOP characterization for the wide FOV microlens in (x, y, z = 0) navigational frame.
Figure 4.13 DOP characterization for the ultrawide FOV microlens in (x, y, z = 0) navigational frame.
Figure 4.14 Positioning accuracy for the wide FOV microlens in (x, y, z = 0) navigational frame.
Figure 4.15 Positioning accuracy for the ultrawide FOV microlens in (x, y, z = 0) navigational frame.
Figure 4.16 Top view drawing of LED optical beacon / image receiver geometry for position estimation.
Figure 4.17 An RGB image showing the measurement of ρIS,1, the radial pixel distance between the microlens centre to LED1 focal spot.
Figure 4.18 Frequency components for LED1, LED2, LED3, and LED4 red, green and blue layers.
Figure 4.19 Maximum square grid side-length capability for wide- and ultrawide FOV microlens.
Figure 5.1 iRobot Create platform.
Figure 5.2 Angular receiver mounted on iRobot Create.
Figure 5.3 Illustration of LED optical beacon and receiver geometry as well as the trajectory of the robot from its start point to its end point. The point X represents the start point of the robot carrying the receiver, and the point Y represents the end point of the robot trajectory.
Figure 5.4 Illustration of the process of measuring the AOA (θ1, φ1) from LED1.
Figure 5.5 The photocurrents used to generate the AOA angles are shown versus iRobot Create distance traveled for LED1, LED2, LED3, and LED4. Photocurrents i1, i2, and i3 are shown as the red (solid), black (dotted) and blue (dashed) curves respectively.
Figure 5.6 AOA (θ1, φ1) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s at a 5 Hz AOA measurement rate.
Figure 5.7 AOA (θ2, φ2) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s at a 5 Hz AOA measurement rate.
Figure 5.8 AOA (θ3, φ3) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s at a 5 Hz AOA measurement rate.
Figure 5.9 AOA (θ4, φ4) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s at a 5 Hz AOA measurement rate.
Figure 5.10 The 2-D and 3-D positioning error for an angular receiver speed of 10 cm/s and a 5 Hz AOA measurement rate.
Figure 5.11 Polar angle θ1 error versus distance traveled.
Figure 5.12 Polar angle θ2 error versus distance traveled.
Figure 5.13 Theoretical 3-D DOP versus empirical DOP calculated along the angular receiver trajectory.
Figure 5.14 Azimuthal angle φ1 error versus distance traveled.
Figure 5.15 Azimuthal angle φ2 error versus distance traveled.
Figure 5.16 Azimuthal angle φ3 error versus distance traveled.
Figure 5.17 Azimuthal angle φ4 error versus distance traveled.
Figure 5.18 AOA (θ1, φ1) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s at a 20 Hz AOA measurement rate.
Figure 5.19 AOA (θ2, φ2) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s at a 20 Hz AOA measurement rate.
Figure 5.20 AOA (θ3, φ3) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s at a 20 Hz AOA measurement rate.
Figure 5.21 AOA (θ4, φ4) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s at a 20 Hz AOA measurement rate.
Figure 5.22 The 2-D and 3-D positioning error for an angular receiver speed of 10 cm/s and a 20 Hz AOA measurement rate.
Figure 5.23 The photocurrents used to generate the AOA angles are shown versus iRobot Create distance traveled for LED1, LED2, LED3, and LED4. Photocurrents i1, i2, and i3 are shown as the red (solid), black (dotted) and blue (dashed) curves respectively.
Figure 5.24 AOA (θ1, φ1) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 50 cm/s at a 5 Hz AOA measurement rate.
Figure 5.25 AOA (θ2, φ2) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 50 cm/s at a 5 Hz AOA measurement rate.
Figure 5.26 AOA (θ3, φ3) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 50 cm/s at a 5 Hz AOA measurement rate.
Figure 5.27 AOA (θ4, φ4) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 50 cm/s at a 5 Hz AOA measurement rate.
Figure 5.28 The 2-D and 3-D positioning error for an angular receiver speed of 50 cm/s and a 5 Hz AOA measurement rate.
Figure 5.29 AOA (θ1, φ1) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 50 cm/s at a 20 Hz AOA measurement rate.
Figure 5.30 AOA (θ2, φ2) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 50 cm/s at a 20 Hz AOA measurement rate.
Figure 5.31 AOA (θ3, φ3) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 50 cm/s at a 20 Hz AOA measurement rate.
Figure 5.32 AOA (θ4, φ4) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 50 cm/s at a 20 Hz AOA measurement rate.
Figure 5.33 The 2-D and 3-D positioning error for an angular receiver speed of 50 cm/s and a 20 Hz AOA measurement rate.
Figure 5.34 The photocurrents used to generate the AOA angles are shown versus iRobot Create distance traveled for LED1, LED2, LED3, and LED4. Photocurrents i1, i2, and i3 are shown as the red (solid), black (dotted) and blue (dashed) curves respectively.
Figure 5.35 AOA (θ1, φ1) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 139 cm/s at a 20 Hz AOA measurement rate.
Figure 5.36 AOA (θ2, φ2) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 139 cm/s at a 20 Hz AOA measurement rate.
Figure 5.37 AOA (θ3, φ3) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 139 cm/s at a 20 Hz AOA measurement rate.
Figure 5.38 AOA (θ4, φ4) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 139 cm/s at a 20 Hz AOA measurement rate.
Figure 5.39 The 2-D and 3-D positioning error for angular receiver speed of 139 cm/s and a 20 Hz AOA measurement rate.
Figure 5.40 Image receiver mounted on iRobot Create.
Figure 5.41 A magnified view of the microlens and the image sensor.
Figure 5.42 AOA (θ1, φ1) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 5 cm/s.
Figure 5.43 AOA (θ2, φ2) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 5 cm/s.
Figure 5.44 AOA (θ3, φ3) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 5 cm/s.
Figure 5.45 AOA (θ4, φ4) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 5 cm/s.
Figure 5.46 The 2-D and 3-D position error versus distance traveled for the image receiver at 5 cm/s.
Figure 5.47 Theoretical 3-D DOP versus empirical DOP calculated along the image receiver trajectory.
Figure 5.48 AOA (θ1, φ1) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s.
Figure 5.49 AOA (θ2, φ2) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s.
Figure 5.50 AOA (θ3, φ3) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s.
Figure 5.51 AOA (θ4, φ4) versus robot travel distance computed using both the AOA measurement (data) and geometry calculation (actual) when moving at a speed of 10 cm/s.
Figure 5.52 The 2-D and 3-D position error versus distance traveled for the image receiver at 10 cm/s.

List of Symbols

α  contact angle
A  amps
A  matrix of partial derivatives
B  colour blue
cm  centimetres
δ  Least Squares correction
δx  Least Squares x correction
δy  Least Squares y correction
δz  Least Squares z correction
∆D  distance between AOA sample readings
∆φ  absolute difference between measured and true φ
∆i  differential photocurrent
∆θ  absolute difference between measured and true θ
∆x  error in the x coordinates
∆y  error in the y coordinates
∆z  error in the z coordinates
d  distance between transmitter and receiver
d0  reference distance
deg  degrees
D  microlens diameter
E()  expected value operation
E  filter intensity
φ  azimuthal angle
φ'  azimuthal angle measured with respect to photoreceiver's body frame
φIS  image sensor azimuthal angle
φR  azimuthal angle rotation
f  lens focal length
fH  high frequency cut-off
fl  low frequency cut-off
f-number  ratio of lens focal length to its diameter
G  colour green
h  vertical height between receiver and LED grid
H  Hue of a colour
H  matrix of partial derivatives
Hz  Hertz
iin  photocurrent generated by angular receiver
i1  photocurrent generated by PD1
i2  photocurrent generated by PD2
i3  photocurrent generated by PD3
I  intensity function
k  constant
K  maximum number of LEDs
µ  micro
m  metres
mm  millimetres
mW  milliwatts
M  mega
M  vector of measured angle errors
n  path loss exponent
ρIS  focal spot length
P''  coordinates defined with respect to an image plane
P0(dB)  average received signal power at d0
P1  output power from PD1
P2  output power from PD2
P3  output power from PD3
Pc  coordinates defined with respect to a camera coordinate system
Pl  coordinates defined with respect to a local coordinate system
Pr  received optical power
Pt  transmitted optical power
p  LED grid spacing
P  vector of measured position errors
P(dB)  average received signal power
θ  polar angle
θ'  polar angle measured with respect to photoreceiver's body frame
θR  polar angle rotation
r  range
R  colour red
R1  buffer circuit resistor
[R]  rotational matrix
σdB  shadowing standard deviation
σm  angle error measurement standard deviation
σP  position error standard deviation
σx  x error standard deviation
σy  y error standard deviation
σz  z error standard deviation
s  seconds
S  saturation of a colour
tr()  trace operation
T  transpose operation
[T]  translational vector
u  pixel shift in x direction
v  pixel shift in y direction
V  intensity of a colour
V1  output voltage from PD1
V2  output voltage from PD2
V3  output voltage from PD3
w  Least Squares weights
W  window function
W  watt
x̂  Least Squares x estimate
x'  x coordinate of the angular receiver body frame
x''  x coordinate of a pixel defined in an image plane
xR  receiver x position
xT  transmitter x position
Xc  x coordinate defined in a camera coordinate system
Xl  x coordinate defined in a local coordinate system
ŷ  Least Squares y estimate
y'  y coordinate of the angular receiver body frame
y''  y coordinate of a pixel defined in an image plane
yR  receiver y position
yT  transmitter y position
Yc  y coordinate defined in a camera coordinate system
Yl  y coordinate defined in a local coordinate system
ẑ  Least Squares z estimate
z'  z coordinate of the angular receiver body frame
Zc  z coordinate defined in a camera coordinate system
Zl  z coordinate defined in a local coordinate system

List of Abbreviations

2-D  two-dimensional
3-D  three-dimensional
AOA  Angle of Arrival
CMOS  Complementary Metal Oxide Semiconductor
DAQ  Data acquisition
DOP  Dilution of Precision
FFT  Fast Fourier Transform
FOV  Field-of-View
GPS  Global Positioning System
HSV  Hue-Saturation-Value
HPF  High pass filter
IMU  Inertial Measurement Unit
INS  Inertial Navigation System
IR  Infrared
LabVIEW  Laboratory Virtual Instrument Engineering Workbench
LED  Light Emitting Diode
LOP  Line of Position
LPF  Low pass filter
MATLAB  Matrix Laboratory
MEMS  MicroElectroMechanical Systems
NI  National Instruments
NOA  Norland Optical Adhesive
OWL  Optical Wireless Location
PD  Photodiode
PDOP  Position Dilution of Precision
PS3  PlayStation 3
RF  Radio Frequency
RFID  Radio Frequency Identification
RMS  Root Mean Squared
RSS  Received Signal Strength
SEM  Scanning Electron Microscope
SNR  Signal to noise ratio
TDOA  Time difference of arrival
TOA  Time of Arrival
TOF  Time of Flight
TOT  Time of Transmission
UWB  UltraWide Band
VLC  Visible Light Communication
WiFi  Wireless Fidelity
WLAN  Wireless Local Area Network

Acknowledgements

I would like to express my deepest gratitude to my supervisor Dr. Richard Klukas. His encouragement, knowledge and support have guided me through my PhD research. I would like to thank Dr. Jonathan Holzman for his invaluable insights and discussions, and Dr. Kenneth Chau for his insightful feedback about my research.

The financial support of the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged.

Special thanks go to Xian Jin for being a good friend and for his insightful comments and suggestions about setting up experiments. I have to thank my friends Mohamed Yafia, Walaa Morsi and Ahmad Alsherbini for their kindness and hospitality. A special thank you goes to my parents for their continuous support through all these years. Most of all, I must thank my wife, Yasmin, whose love and support made all this possible.

Dedication

To my Yasmin

Chapter 1
Introduction

Indoor positioning is concerned with navigating, tracking or monitoring people or objects inside buildings. Typical examples include monitoring patients in hospitals [1], locating assets stored in a warehouse [2], and robot [3], [4] and pedestrian [5] navigation.

The most well known positioning system, the Global Positioning System (GPS), works well outdoors as long as there is a clear view between the GPS receiver and the satellites. However, inside buildings or tunnels, heavy attenuation of the GPS satellite signals results in poor positioning accuracy or no position estimate at all. To remedy this, several indoor positioning techniques have been proposed in the literature, such as positioning using a wireless local area network (WLAN) [6], infrared (IR) sensors [7], ultra-wide band (UWB) [8], ultrasound [9], vision analysis [10]-[11], and cellular systems [12].
These techniques vary in terms of cost, positioning accuracy, security and complexity.

Positioning based on WLAN relies on a mobile device measuring the received signal power from WiFi transmitters and uses the propagation model of the radio frequency (RF) channel to determine the position of the mobile device. Although WLAN positioning is simple, it is considered a coarse measure of position, with errors as large as a few meters. Positioning based on IR sensors requires installing a grid of IR transmitters on the ceiling, with each IR transmitter covering a certain region. A user with an IR receiver determines his or her position based on how close he or she is to any given transmitter. This positioning technique, called proximity detection, suffers from absolute power fluctuations and, therefore, requires the power of the transmitters to be synchronized. In UWB systems, distance information is extracted by measuring the propagation delay between several transmitters and the receiver to be positioned, i.e., a mobile device. Accuracies in the centimeter range can be achieved. However, this comes at the cost of expensive transceivers. RF angle of arrival (AOA) positioning requires an antenna array at the receiver. This is done by mounting two or more antennas on the receiver with known fixed locations relative to one another. By measuring the phase of the signal received by each antenna, the direction of the signal is calculated. The drawback of using AOA positioning is the cost and size of the antenna array. Hence it is not practical for small, low-complexity receivers. Ultrasound-based positioning systems have strong dependencies on the environment, such as temperature, and therefore require continuous calibration to achieve accurate position estimates. Moreover, ultrasound systems are limited to ranges of a few meters. Vision-based systems, such as cell phone embedded cameras, rely on comparing features in an image to a database of images. If the environment changes, the database needs to be updated, increasing complexity. In addition, in order to identify features one must solve an optimization problem which may or may not converge, rendering the approach unreliable.

The above mentioned techniques suffer from at least one limitation such as high cost (UWB systems), multipath (WLAN), power issues (IR systems), or calibration issues (ultrasound and camera based systems). These limitations, and others that will be described in Chapter 2, render these systems impractical for small, low-complexity receivers.

This thesis studies the feasibility of obtaining very accurate indoor positioning using optical frequencies, specifically those of visible light. A characterization and performance comparison of two photoreceivers is presented.

Chapter 1 is organized as follows. Section 1.1 discusses the motivation for the proposed work. Section 1.2 defines the thesis objectives. Section 1.3 shows the thesis outline.

1.1 Motivation

Recently, a growing interest has focused on using light-emitting diode (LED) optical beacons for indoor positioning [13] [14]. This concept, called Optical Wireless Location (OWL), is based on work performed on visible light communication (VLC). VLC [15] [16] [17] [18] [19] [20] possesses several advantages over conventional RF systems, such as less susceptibility to interference and increased security, since light rays are blocked by walls. For VLC, the optical transmitters used in the literature are either fluorescent light or LED optical beacons.
LED optical beacons are a more favourable choice because they have a longer lifetime, smaller size and higher efficiency, compared to fluorescent light, and they can be modulated at high frequencies (MHz range), making them suitable for high data rate communications.

Using VLC is especially advantageous for low-complexity systems since suitable lighting infrastructure may already exist inside buildings. In a number of research papers, visible light positioning has been simulated and yielded centimeter-level accuracy [21], [22], [14]. However, the literature lacks a complete investigation of photoreceiver positioning, depending upon whether the receiver is a photodiode (PD) or an image receiver.

OWL systems in [13], [14] use Time Difference of Arrival (TDOA) with LED optical beacons modulated at MHz frequencies (wavelength on the order of 100 m). This results in very poor distance resolution since the photoreceiver needs to move on the order of the wavelength to detect a change in phase. Results shown in [13], [14] were either simulated or no comment on position accuracy was noted.

In this thesis, an indoor angular receiver positioning system is described that employs a corner-cube PD structure to measure the AOA of light emanating from LED optical beacons mounted on the ceiling. The angular receiver system proposed here aims to solve all of the above shortcomings of visible light positioning techniques. The proposed angular receiver structure provides a simple and effective way for position estimation, since it combines the simplicity of the proximity detection technique and the accuracy of AOA positioning systems. Each of the corner-cube photoreceiver sides generates a photocurrent proportional to the intensity of the light from an LED optical beacon source that strikes it. The geometry of the structure and the photocurrent intensities allow the AOA of LED optical beacons to be determined and hence the position of the corner-cube photoreceiver structure (angular receiver) to be estimated.

The angular receiver estimates AOA using a differential corner-cube PD receiver structure. The angular receiver can estimate AOA with average angular accuracies of 2°, which translates to positioning accuracies for indoor applications on the order of centimeters. Unlike the TDOA positioning systems in [13], [14], which require synchronization between the photoreceiver and LED optical beacons, the proposed system does not require synchronization. This is advantageous since the positioning accuracies of [13], [14] depend on the clock precision, and the clocks also require power, adding to the cost of the system. Comparing the proposed angular receiver system to [23], which is based on proximity detection, both systems are low-complexity. However, [23] requires that the photoreceiver see only one light source at a time, since all light sources emit at the same frequency. This is impractical. Another issue with proximity detection systems is that they require the LED optical beacons to be power synchronized. For the proposed angular receiver system, LED powers need not be synchronized, since the angular receiver uses the relative intensity on each of the corner-cube sides to determine an AOA.

Based on the above comparison, the proposed system is better in terms of practicality and complexity. In addition, the retroreflecting phenomenon of the corner-cube photoreceiver allows it to be used for optical communication as well [24].

A second indoor positioning technique using an image receiver (camera) is investigated in this thesis. The image receiver is made up of a CMOS sensor with a microlens (µm size). The image receiver estimates its position by measuring the AOA from different LED optical beacons focused on the CMOS sensor. Image receivers have been previously used for positioning [22], [21], [25]. However, these receivers have a long focal length of 2 cm and a narrow field-of-view (FOV). In the proposed image receiver, a µm-sized, electro-dispensed, polymer lens is used to achieve a wider FOV and a shorter focal length than those in [22], [21], [25] in order to make the image receiver compact and, therefore, more practical from a user perspective.

In order to determine which LED optical beacon on the ceiling corresponds to which spot on the CMOS sensor, the authors in [22] proposed using different coloured LED optical beacons. A Hue, Saturation, and Intensity (HSI) algorithm was used in [22] to detect the colour of the LED optical beacons. Simulated positioning accuracy was on the order of half a meter. For the proposed image receiver system, a more practical LED optical beacon configuration is presented in which all LED optical beacons emit white light but with a different modulation frequency for each LED optical beacon.

Using the proposed image receiver, a more compact structure with a wider FOV than that of other receivers in the literature is proposed. The proposed image receiver is able to triangulate its position from LED optical beacons emitting white light with an AOA accuracy of 0.5°.
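To make the image receiver's AOA measurement concrete, the sketch below shows how a focal spot located on the CMOS array could be converted to azimuthal and polar angles. It is illustrative only: the function name, the pixel values and the example calibration table are hypothetical, and the actual microlens calibration is developed in Chapter 4.

    import numpy as np

    def aoa_from_focal_spot(spot_xy, centre_xy, lens_radius_px, rho_to_theta):
        """Estimate azimuthal (phi) and polar (theta) AOA, in degrees, from the
        focal spot position on the image sensor (hypothetical helper).

        spot_xy, centre_xy : pixel coordinates of the focal spot and microlens centre
        lens_radius_px     : microlens radius r expressed in pixels
        rho_to_theta       : calibrated mapping from rho_IS / r to the polar angle
        """
        dx, dy = spot_xy[0] - centre_xy[0], spot_xy[1] - centre_xy[1]
        phi_is = np.degrees(np.arctan2(dy, dx))        # azimuthal angle of the spot
        rho_is = np.hypot(dx, dy)                      # radial spot distance rho_IS
        theta = rho_to_theta(rho_is / lens_radius_px)  # polar angle via calibration
        return phi_is % 360.0, theta

    # Hypothetical calibration: interpolate theta over measured (rho_IS/r, theta) pairs.
    calib = lambda x: np.interp(x, [0.0, 0.25, 0.5, 0.75], [0.0, 20.0, 40.0, 60.0])
    print(aoa_from_focal_spot((212, 180), (200, 200), 60.0, calib))

In practice the mapping from the normalized radial distance to the polar angle depends on the microlens contact angle, which is exactly the characterization carried out for the wide and ultrawide FOV lenses in Chapter 4.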
Therefore, in this thesis, two OWL receiver systems are investigated. The first system is the angular receiver (corner-cube photoreceiver) and the second system is the image receiver (camera). The two photoreceivers represent two extremes. The angular receiver consists of three orthogonal photodetectors, which is the bare minimum needed for AOA computation. On the other hand, the image receiver is made up of a large number of photodetectors (pixels). The angular receiver has low resolution, while the image receiver has high resolution. The performance of the two photoreceivers is investigated in terms of parameters such as AOA accuracy, positioning accuracy, optical beacon and receiver geometry, and the speed and accuracy of obtaining reliable AOA measurements.

The specific contributions of this work compared to the state-of-the-art are as follows.

1. Demonstrate how a novel corner-cube photoreceiver (the angular receiver), originally developed for OWC, can be used for indoor positioning at the cm-level using white light LEDs that can also serve as room lights.

2. This thesis also demonstrates how a simple image receiver, made to be extremely compact with wide-FOV microlenses, can be used for indoor positioning at the cm-level using white light LEDs that can also serve as room lights.

3. Finally, this thesis also demonstrates that indoor positioning with the above novel receivers is possible while the receivers are in motion.

1.2 Research Objectives

The objective of this research is to develop and test two indoor positioning systems that use LED optical beacons as transmitters and two novel receivers, namely an angular receiver and an image receiver. The performance of the angular receiver positioning system and the image receiver positioning system will be compared. This primary objective will be achieved by pursuing the following secondary objectives.

1. Determine the feasibility of measuring the AOA of light with the angular receiver and the image receiver.

2. Characterize the angular accuracy of the measured AOA for both receivers.

3. Quantify the effect of optical beacon geometry on system positional accuracy.

4. Characterize the angular and position accuracy of both receivers while in motion for various measurement update frequencies and receiver speeds.

1.3 Thesis Outline

This thesis is organized as follows. Chapter 2 presents an overview and analysis of current indoor positioning techniques. Chapter 3 presents the angular receiver AOA measurements and characterization, whereas Chapter 4 presents the image receiver AOA measurements and characterization. A comparison of the performance of the angular receiver and the image receiver while both receivers are in motion is given in Chapter 5. Concluding remarks are made in Chapter 6.

Chapter 2
Indoor Positioning Techniques

Various types of indoor positioning techniques will be discussed in this chapter. The advantages and disadvantages of each of these techniques will be addressed. These techniques include WLAN positioning, Radio Frequency Identification (RFID), ultrasound, IR positioning, positioning using a camera and positioning using visible light.

Indoor positioning techniques rely on measuring signal parameters such as the Received Signal Strength (RSS), the Time of Arrival (TOA), and the AOA of the propagating signal. Another common technique uses a camera and is commonly known as scene analysis. Positioning using inertial sensors is also discussed.

The aforementioned techniques will be discussed in the following sections. Section 2.1 explains the fundamentals behind RSS positioning. Positioning techniques based on TOA are discussed in Section 2.2. Camera based positioning is explained in Section 2.3. Positioning based on inertial sensors is shown in Section 2.4. Positioning based on visible light is demonstrated in Section 2.5. Section 2.6 provides a summary of the indoor positioning techniques discussed.

2.1 Received Signal Strength

Positioning based on RSS measurements can be achieved via trilateration, fingerprinting or proximity detection. Trilateration is the process of finding the position of an object by measuring ranges to three or more devices with known fixed positions. RSS positioning based on trilateration is as follows [26]. A receiver with unknown position in a wireless local area network measures the power of signals arriving from three or more WiFi access points with known position. RSS measurements depend heavily on the environment. This is described by a quantity called shadowing, which is the attenuation of signals due to objects between the transmitter and receiver. The average received signal power at the receiver is given by

P(dB) = P0(dB) − 10 n log(d/d0)        (2.1)

where P(dB) is the average received signal power at distance d and P0(dB) is the received signal power at a reference distance d0. The variable d represents the distance between the transmitter and receiver. The path loss exponent, n, depends on the environment between the transmitter and receiver. Typical values of n for indoor non-line-of-sight environments range between 3 and 6 [27].
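Equation 2.1 can be inverted to turn a measured RSS value into the range estimate that trilateration needs. The short sketch below illustrates this inversion; the reference power, path loss exponent and measured powers are hypothetical values chosen for illustration and are not tied to any system cited here.

    # Invert the log-distance path loss model of equation 2.1:
    #   P(dB) = P0(dB) - 10 n log10(d / d0)  =>  d = d0 * 10^((P0 - P) / (10 n))
    def rss_to_distance(p_dbm, p0_dbm=-40.0, d0=1.0, n=3.5):
        """Estimated transmitter-receiver distance in metres (hypothetical parameters)."""
        return d0 * 10 ** ((p0_dbm - p_dbm) / (10.0 * n))

    for p in (-40.0, -58.0, -75.0):          # example measured powers in dBm
        print(f"P = {p} dBm  ->  d ~ {rss_to_distance(p):.1f} m")

With n = 3.5, a drop of 35 dB below the reference power corresponds to roughly a tenfold increase in distance, which illustrates why small RSS fluctuations due to shadowing translate into large ranging errors.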
The difference between the measured received power and its average is modeled as a log-normal distribution with mean 10 n log(d/d0) and a shadowing standard deviation σdB that ranges from 4 to 12 dB [27]. The relationship between the transmitter-receiver distance and their respective coordinates is

d = √((xT − xR)² + (yT − yR)²)        (2.2)

where (xT, yT) are the known transmitter coordinates and (xR, yR) are the receiver coordinates to be estimated. Once the distance, d, is found from equation 2.1, (xR, yR) can be found from equation 2.2. For the system demonstrated in [26], a positioning accuracy of 4 m was obtained.

A more sophisticated positioning technique that uses RSS is known as fingerprinting. It includes an offline and an online phase. A system known as RADAR [6] estimates position based on the 802.11 WLAN. Three base stations, with known positions, were deployed in an office environment with dimensions 43.5 m by 22.5 m. The offline phase entails building an extensive look-up table containing the measured received power and the corresponding actual transmitter-receiver separation. The online phase entails making RSS measurements and comparing these RSS values with the look-up table to interpolate a position estimate. The main drawback of this technique is the high dependence on the environment. If the office environment changes (i.e., furniture rearranged) from one day to the next, then the entire calibration process needs to be redone. Accuracies of 2-3 m for stationary users and 3.5 m for mobile users were obtained.
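The offline/online fingerprinting procedure described above can be illustrated with a minimal nearest-neighbour sketch: the offline phase stores an RSS vector at each surveyed location, and the online phase returns the surveyed location whose stored vector is closest to the live measurement. The survey points and RSS values below are invented for illustration and do not correspond to the RADAR experiment.

    import numpy as np

    # Offline phase: surveyed (x, y) locations and the RSS vector (dBm, one entry
    # per base station) recorded at each location. Values are hypothetical.
    survey_xy = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
    survey_rss = np.array([[-45.0, -60.0, -62.0],
                           [-60.0, -47.0, -64.0],
                           [-61.0, -63.0, -46.0],
                           [-66.0, -59.0, -55.0]])

    def fingerprint_position(live_rss):
        """Online phase: nearest neighbour in RSS space gives the position estimate."""
        distances = np.linalg.norm(survey_rss - np.asarray(live_rss), axis=1)
        return survey_xy[np.argmin(distances)]

    print(fingerprint_position([-59.0, -48.0, -65.0]))   # closest to the (5, 0) survey point

The sketch also makes the calibration burden visible: the accuracy is bounded by the density of the survey grid, and the stored vectors become stale whenever the environment changes.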
The RSS fingerprint-ing values of three transmitters and their actual locations were recorded in adatabase. A mobile receiver would then estimate its position by measuringthe corresponding RSS values from the three transmitters and interpolat-ing its position using the database. The advantage of using Bluetooth forpositioning is that most mobile devices have Bluetooth capability in them.However, Bluetooth signals may interfere with other systems, necessitatingthe construction of a dedicated transmitter infrastructure. One solution toreduce interference is to reduce the Bluetooth transmission power. How-ever, this will result in smaller coverage areas. In [30] positioning accuraciesof 2 m were found, which is not sufficient for many indoor positioning ornavigation applications.2.2 Time of ArrivalPosition estimation based on TOA works by measuring the time it takesfor a signal to travel from the transmitter (with known fixed position), tothe receiver (the position of which is to be estimated) [31]. Geometrically,this provides a circle, centered on the transmitter and of radius equal to92.2. Time of Arrivalthe range between the transmitter and receiver, on which the receiver mustlie. This circle, consisting of possible locations of the receiver, is called aline of position (LOP). The intersection of three or more circles providesthe receivers position in two dimensions (2-D). For three dimensional (3-D)positioning, the LOPs become spheres. The range or distance between thetransmitter and the receiver is the difference between the TOA and the timeof transmission (TOT), multiplied by the speed of propagation. The draw-back of the TOA technique is that the transmitters must be synchronizedwith the receiver, and line of sight (LOS) propagation is required. TDOAsystems overcome the need for synchronization between the transmittersand receiver, but still require the transmitters to be synchronized. The re-ceiver measures the difference in travel time between signals from differenttransmitters. The difference in travel times between any two transmittersforms a LOP. This LOP is a hyperbola with a constant range difference fromthe two transmitters. The intersection of two hyperbolas gives the receiverposition.In [32], the position of a cellular phone was sought by measuring theTOA of signals from different cellular base stations. The standard deviationof the ranging errors reached 10’s of meters. Clearly using such a cellularsystem method to find position would be unsuitable when navigating insidea building. Other solutions include augmenting cellular positioning withWLAN signal strength data. Results show mean positioning accuracies of5 m [33].RF systems that employ TOA/TDOA are usually expensive or sufferfrom severe attenuation indoors. For instance UWB transceivers reducemultipath effects by spreading the signal energy over a wide range of fre-quencies. Positioning accuracies on the order of a few centimeters have beenrecorded for UWB systems [34]. However, UWB transceivers require preciseclock synchronization and are, therefore, costly. The problem with comput-ing position by measuring the TOA with RF systems is that to resolve adistance of 1 m, a clock with a resolution in the order of nanoseconds isrequired. This condition can be relaxed by using ultrasound systems forpositioning which have lower clock resolution due to their low propagationspeed [35]. Centimeter-level accuracies have been reported in [35]. 
However,the range of ultrasonic transmitters is limited to a few meters and ultrasoundspeed is correlated to temperature, which needs to be calibrated.3-D cameras used in gaming applications utilize the Time of Flight(TOF) of a near IR signal to determine how far a user is away from the3-D camera. The near IR signal is reflected from the user’s body and thereflected signal is captured by the camera’s CMOS sensor. Signal process-102.3. Scene Analysising techniques are then used to distinguish the user’s body from backgroundclutter [36]. At least two cameras (stereo vision) are needed to determinethe user’s 2-D position.2.3 Scene AnalysisPositioning based on scene analysis is most common in mobile robot po-sitioning with a camera [3],[10] or in pedestrian positioning with embeddedcell phone cameras [5]. It operates by first mapping features in capturedimages to stored images in a database. The stored image feature point co-ordinates in an image correspond to a known reference 3-D position withrespect to a reference frame in the real world. By tracking the change in thepositions of features in an image, rotational and translational transforma-tions can be used to find the position and orientation of the robot/user. In[11], two ways have been employed to find position. The first is the naive ap-proach. In the naive approach, a user takes an image of the surrounding andan algorithm matches the features in the image to find the closest match toimages in the database. Once a match is found, the location of the matchedimage in the database with respect to a reference frame in the real world isnow the location of the user. The second is called the hierarchical approach,where images corresponding to similar objects are grouped together. Forinstance, images in a particular room are grouped under one hierarchy andthe algorithm compares the captured image with the database to see whichroom the user is in. The main advantage of this technique is that it speedsup the search process since the system has fewer images to search through.However, the algorithm will run into an infinite loop if it confuses the imagewith an image in a different room.Positioning using a camera relies heavily on identifying features in animage. To identify features, various algorithms in the literature have beenproposed such as colour histograms, shape matching, and the Harris cornerdetector [37]. Harris corner detection operates by dividing an image into 64patches of fixed size and then searches for the best (i.e., most distinctive)patch. Features that do not change with the viewing angle are desirable.These features are usually the corners of a room. A filter intensity equation,E, is given byE(u, v) =∑W (x, y)[I(x+ u, y + v)− I(x, y)]2, (2.3)where u, and v represent the pixel shift in the x and y directions respectively,W is the window function, e.g., rectangle with pixel centre at (x, y) and I112.3. Scene Analysisrepresents the intensity function. The filter is applied to each of the patches.For constant, but not very distinctive patches, E is minimum because theintensity difference, I(x+u, y+v)−I(x, y), does not change much. However,for a good feature in a distinctive patch the intensity equation will be amaximum. 
The match is that patch that maximizes the above function.Once the features are identified, the camera’s position and orientation areestimated with respect to a local reference frame.Camera calibration takes place to determine the relationship betweenfeature points in an image (camera coordinates) and where they are locatedin a local coordinate system (the positions of the features are assumed to beknown before hand such as in [38]). The calibration phase involves findingthe following:− Intrinsic parameters: Finds the relation between image/pixel coordi-nates and camera coordinate system (uses a pin-hole camera model).− Extrinsic parameters: Defines the location of the camera coordinateswith respect to a local coordinate system, i.e., finds the position andorientation of the camera with respect to a local coordinate system.Fig. 2.1 shows a 3-D camera coordinate system (x, y, z) whose origin Orepresents the centre of projection, and z is along the optical axis. A pointPc at coordinates (Xc, Yc, Zc) in the camera coordinate system will appearat point P” (x”, y”, f) in the 2-D image plane defined with coordinates (x”,y”). The relationship between the two coordinate systems is found from thepin-hole camera model using similar triangles, such that.x′′ = f XcZc, (2.4)andy′′ = f YcZc. (2.5)122.3. Scene AnalysisFigure 2.1: Camera coordinate frame.The extrinsic parameters map the relationship between the camera coor-dinate system and the local coordinate system. Let point P” be defined withrespect to a local reference frame at Pl = [Xl Yl Zl]T and with respect tothe camera reference frame at Pc = [Xc Yc Zc]T . The relationship betweenthe two frames is[Pc]3×1 = [R]3×3 [Pl]3×1 + [T ]3×1 (2.6)where [R] is the rotational matrix, and [T ] is the translational vector alongthe x, y and z coordinates. By substituting equations 2.4 and 2.5 intoequation 2.6, the orientation and position of the camera is computed fromtwo or more point features with known positions.In [39], [40] a mobile robot with a camera pointing toward the ceilingis tested for indoor positioning, where the light sources represent the dis-tinctive features. An algorithm is devised to detect the features of the lightsources and estimate the robot position. The light sources were spaced inclose packed grids of 10 cm. Centimeter-level positioning accuracies were at-tained. The crux of using a camera for positioning relies on how efficientlythe algorithm can detect features in an image. Factors such as a change inthe camera’s view point and nonlinear changes in illumination in a roomcan make the feature detection algorithm fail and, therefore, result in noposition estimate.132.4. Inertial Navigation System2.4 Inertial Navigation SystemPositioning based on Inertial Navigation System (INS) uses accelerom-eters, gyroscopes, and magnetometers, to estimate a user/robot positionwith dead reckoning. Although tactical grade INS can be used to providevery accurate position estimates, they are heavy and expensive [38]. Withadvancements in Microelectromechanical systems (MEMS) technology, INSsensors are packaged into increasingly smaller Inertial Measurement Units(IMU) that are much smaller and cheaper. A typical use of MEMS INSis in adjusting screen-view orientation in cell-phones. INS sensors typicallyconsist of triaxial gyroscopes, triaxial accelerometers and triaxial magne-tometers. 
The choice of MEMS sensors for navigation depends on severalfactors such as bias stability, bandwidth and noise.Accelerometers and gyroscopes provide relative measurements, whilemagnetometers provide absolute measurements. Gyroscopes are used toprovide heading information by integrating the gyroscope’s angular velocityover time to estimate heading. Gyroscopes suffer errors due to temperaturebias. Accelerometers determine translation by measuring the translationalacceleration and double integrating it over time. Magnetometers are usedto provide absolute heading information by measuring the earth’s magneticfield. However, magnetometers suffer from strong interference indoors fromobjects such as copiers making them non-benefical indoors. However, inoutdoor environments, they suffer less interference [41].After some time, due to biases in the sensor measurements, the INSsystem must be recalibrated. Since the INS is a relative positioning sys-tem, INS is often augmented with other absolute positioning systems torecalibrate the INS sensor position. In [28], RFID positioning was used toperiodically correct a MEMS INS sensor measurement. Indoor positioningaccuracies were 1-2 m. In robotics [42] and pedestrian [43] navigation, INSand vision-based positioning is a popular mix.2.5 Visible LightIndoor Optical Wireless Location (OWL) systems utilizing visible lightwere first proposed by [17]. An OWL system consists of light sources, typ-ically LED optical beacons, and a photoreceiver. The photoreceiver caneither be one or more PD [13] or an image receiver (camera) [17].In [17], indoor positioning was simulated with three LED optical beaconsmounted on the ceiling at known fixed positions. An image receiver, with a142.5. Visible LightFOV of 45◦, captures an image of the LED optical beacons. By calculatingthe relative positions of the optical beacons on the image with respect toone another, and knowing the focal length of the lens, the position of theimage sensor was inferred using similar triangles. The focal length of thelens was 2 cm.In [21] indoor positioning was demonstrated with a dual camera (stereocamera) compared to the monocamera presented in [17]. Similar to [17] LEDoptical beacons were simulated to be on the ceiling, and the position of thestereo camera was calculated from geometry. Each of the cameras had aFOV of 45◦. The focal length of the lenses of the cameras was 2.7 cm. Onlysimulated results were presented. The work presented in [17] and [21] usedwhite LED optical beacons. In order to infer which LED optical beacon isbeing imaged, [22] proposes using different colour LED optical beacons.Indoor OWL systems using PDs were presented in [13],[23] and [14].In [13] 2-D positioning was investigated using four LED optical beaconsand a PD. Two LED optical beacons were intensity modulated at 20 MHz,with one LED phase shifted from the other. A PD, acting as a receiver,estimates the TDOA of the two peaks corresponding to each of the twoLED optical transmitters. A second TDOA is calculated from the secondpair of LED optical beacons. The PD position is then estimated usinghyperbolic trilateration (the intersection of two hyperbolas). A main factorin the accuracy of this positioning technique is the choice of wavelength ofthe signal used to modulate the LED optical beacons. At 2 MHz (150 mwavelength), the receiver has to move in the order of a wavelength to detecta phase difference. 
Also, the LED optical beacons need to be synchronized, adding to the cost of the OWL system.

In [14] a similar system employing TDOA is implemented. The authors used three LED optical beacons, each modulated at a different frequency. The frequencies were 1, 3 and 5 MHz. The receiver is able to detect the three signals and compute their phase differences and, therefore, TDOA, and a position estimate. No experimental results are given in that paper. Similar to the work in [13], LED synchronization adds to the cost of the OWL system.

In [23], an indoor positioning system based on fluorescent lighting was introduced. The system offers 3-4 m accuracy. The transmitter is a modulated fluorescent light and the receiver is a single PD. The positioning system is based on proximity detection; the closer one is to a given transmitter, the more likely one's position can be inferred to be that of the transmitter. A drawback of this system is that it can only resolve one modulated light signal at a time. Therefore, the transmitters need to be placed a fixed minimum distance apart to avoid interference.

2.6 Summary

Table 2.1 provides a summary of previously published empirical positioning techniques. The accuracy of the proposed OWL systems will be compared to these empirical techniques.

Table 2.1: Summary of indoor positioning techniques

Positioning signal   Measurement type      Accuracy
RF WLAN              RSS fingerprinting    2-3 m
RF WLAN              RSS trilateration     4 m
IR                   RSS proximity         5 m
RFID                 RSS trilateration     1-2 m
Cellular             TOA trilateration     10 m
UWB                  TOA trilateration     1 cm
Ultrasound           TOA trilateration     1 cm
Bluetooth            RSS fingerprinting    2 m
INS/RFID             RSS fingerprinting    1-2 m
Vision-based         Vision                10 cm

In conclusion, RSS measurements are simpler than their TOA counterparts. However, they suffer from poorer positional accuracy due to environmental effects such as shadowing. INS systems suffer severely from sensor bias drift that must be periodically calibrated by other positioning techniques. Positioning using a camera requires building a map of positions where a user travels, and may not work properly under different lighting conditions. OWL systems are emerging as a strong competitor to other positioning techniques, with promising simulated positioning accuracies. Positioning systems such as UWB, ultrasound, and vision-based systems have very good accuracy (cm level), but their disadvantages disqualify them from being the low-complexity solution to the indoor positioning problem.

Chapter 3
Angular Receiver Positioning

This chapter presents the angular receiver, whose angular and positioning performance is characterized in detail. Section 3.1 introduces the angular receiver structure. Section 3.2 presents the empirical characterization of the angular receiver's AOA error. Section 3.3 discusses the effect of Dilution of Precision (DOP) on positioning accuracy. Finally, Section 3.4 presents empirical positioning performance results.

3.1 The Angular Receiver

The angular receiver structure is made up of three orthogonal sides, each of which consists of a silicon PD. The sides form the interior of a corner-cube. The angular receiver is shown in Fig. 3.1.

Figure 3.1: The angular receiver.

The angular receiver was developed in the Integrated Optics Lab at the University of British Columbia. The photoreceiver was initially developed for optical communication but has been adapted here for indoor positioning.
The Angular ReceiverEach PD is a Thorlabs FDS1010, with an active area of 9.7 × 9.7 mm2,operating over a range of 400 – 1100 nm and with a maximum responsivityof 0.65 A/W at 1000 nm. The field of view (FOV) of the device spans asolid angle defined by the azimuthal angle, φ, and polar angle, θ, rangingfrom 0◦ to 90◦ as shown in Fig. 3.2. The angular receiver is a retroreflectorallowing light to reflect back to the light source and therefore can be usedto provide bidirectional communication.Figure 3.2: Schematic of the angular receiver showing azimuthal and polarangles and photodiode side numbers.The angular receiver, first introduced in [44], consists of three orthogonalPDs. The PDs are defined as PD1, PD2, and PD3 in the y ’-z ’, x ’-z ’, andx ’-y ’ planes, respectively, where the x ’y ’z ’ coordinate system represents theangular receiver body frame (see Fig. 3.2).Responsivity in amps (A) per watt (W) is defined as the ratio of the out-put photocurrent generated by a PD to the input optical power incident onthat PD. Responsivity is wavelength dependent, and since the LED opticalbeacons (OPTEK Technology OVS5MxBCR4) used here have white light(broadband) spectral characteristics, as shown in Fig. 3.3, an effective or av-erage responsivity needs to consider the wavelength-dependent responsivityof the PDs. With this in mind, the spectrum of the white light LEDs wasrecorded by a spectrometer giving the result shown in Fig. 3.3. The totalarea under the curve in Fig. 3.3 is normalized to correspond to an opticalpower of 1 W.183.1. The Angular Receiver200 300 400 500 600 700 800010002000300040005000White LED spectrum (normalized)Wavelength, (nm)Figure 3.3: White LED spectrum.The normalized spectrum is then multiplied by the known spectral re-sponsivity curve for the PDs, to arrive at the photocurrent curve as a func-tion of wavelength. The total area under the curve (which is a result of anoptical power of 1 W) gives an effective responsivity of 0.27 A/W for theLED and PD configuration used in this work.Sections 3.1.1 and 3.1.2 present the measurements done in order to char-acterize the angular receiver’s angular and intensity responses.3.1.1 Angular ResponseAn incident beam from an LED optical beacon is characterized by anAOA in the body frame with an azimuthal angle φ defined in the x ’y ’ planeand a polar angle θ defined relative to the z ’ axis as shown in Figure 3.2In [44], the relationship between the photocurrents, i1, i2, and i3, asa function of the azimuthal angle, φ and polar angle, θ were derived andexperimentally verified. These expressions are piece-wise functions. Forpositioning applications one seeks to estimate φ and θ given the photocurrentvalues of each PD. In order to solve for the AOA angles φ and θ, one needsto invert the expressions in [44]. However, since explicit expressions aredifficult to find, an approximation of the differential photocurrents, i1-i3and i2-i3 are formed.Since there are two unknowns (φ and θ) and three known photocurrentvalues, i1, i2, and i3, differential photocurrents are formed such that193.1. 
The Angular Receiver∆i1(φ, θ) = i1(φ, θ)− i3(φ, θ)≈ C0 + C1θ + C2φ+ C3θ2 + C4θφ+C5φ2 + C6θ3 + C7θ2φ+ C8θφ2,(3.1)∆i2(φ, θ) = i2(φ, θ)− i3(φ, θ)≈ D0 +D1θ +D2φ+D3θ2 +D4θφ+D5φ2 +D6θ3 +D7θ2φ+D8θφ2.(3.2)The theoretical piece-wise expressions for ∆i1(φ,θ) and ∆i2(φ,θ) areapproximated with polynomial distributions using a least-angle regressionanalysis with a root mean squared (RMS) fitting error of 0.2% for the nu-merical fitting parameters shown in Table 3.1.Table 3.1: Numerical fitting parametersParameter ValueC0 -9.08×10−1C1 1.58×10−2(◦)−1C2 -2.44×10−4(◦)−1C3 1.51×10−1(◦)−2C4 -1.06×10−4(◦)−2C5 8.40×10−6(◦)−2C6 -1.17×10−6(◦)−3C7 2.32×10−6(◦)−3C8 -2.51×10−6(◦)−3D0 -8.63×10−1D1 -1.40×10−2(◦)−1D2 -1.22×10−3(◦)−1D3 3.60×10−4(◦)−2D4 5.58×10−4(◦)−2D5 7.98×10−6(◦)−2D6 -1.17×10−6(◦)−3D7 -2.31×10−6(◦)−3D8 -2.52×10−6(◦)−2To solve for the AOA, the photocurrent values i1, i2, and i3 are measured,the differential photocurrent values ∆i1 and ∆i2 are formed, as shown inequations 3.1 and 3.2, and the two equations are solved simultaneously forφ and θ. Fig. 3.4 shows a graphical representation of normalized ∆i1(φ,θ)and ∆i2(φ,θ) as a function of φ and θ. The value ∆i3(φ,θ) = i3(φ,θ)-i3(φ,θ)203.1. The Angular Receiverrepresents the differential photocurrent zero plane. The intersection of the∆i1(φ,θ), ∆i2(φ,θ) , and ∆i3(φ,θ) planes is the AOA solution. In this case,φ = 45◦ and θ = 54.7◦ is the solution. The AOA at (φ = 45◦, θ = 54.7◦)is a result of balanced photocurrents i1, i2, and i3 due to equal opticalillumination on the three PDs. This is apparent in Fig. 3.4 where ∆i1(φ,θ)and ∆i2(φ,θ) are symmetric.Figure 3.4: Analytical results are shown for normalized differential pho-tocurrents ∆i1(φ, θ) and ∆i2(φ, θ) versus azimuthal φ and polar θ angles.When an incident light beam from an LED optical beacon strikes theangular receiver, photocurrents i1, i2 and i3 are generated. The amplitudesof these photocurrents are extremely small, in the order of nano-Amperesand the signal is noisy resulting in a poor signal to noise ratio (SNR). Inorder to increase the SNR, the circuit block diagram shown in Figure 3.5 isdesigned to first filter out the noise components and then increase the signalpower.213.1. The Angular ReceiverFigure 3.5: Circuit block diagram stages of amplification and bandpass filter.The photocurrents generated by each PD pass through a buffer with afeedback resistance R1 = 10 kΩ. At this stage, the photocurrent is convertedto a voltage signal. The high input impedance of the buffer and the largeresistance R1 are beneficial in that they decrease thermal noise. The signalthen passes through a fourth–order Butterworth high-pass filter to attenuatelow frequency components such as the photocurrent due to the 60 Hz roomlight and its harmonics. Similarly, a second–order Butterworth low-passfilter removes high frequency components such as microwave frequencies.The allowable frequency band for LED optical beacon operation is between500 Hz and 3 kHz. After filtering out the noise components of the signal and,therefore, increasing the SNR, an amplifier with a gain of 1000 is applied toincrease the level of the signal. Fig. 3.6 shows a schematic diagram of thecircuit. The circuit was built with discrete components on a prototypingboard.Figure 3.6: Butterworth bandpass circuit schematic to enhance the photo-diode’s output SNR.223.1. 
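Returning to equations 3.1 and 3.2, the sketch below shows how the two polynomial approximations could be solved simultaneously for (φ, θ) once the differential photocurrents are available. It is only an illustration of the inversion step: the coefficient arrays stand in for the C and D entries of Table 3.1, the starting guess at the central axis of symmetry is an assumption, and this is not the acquisition and processing chain used in the experiments.

```python
import numpy as np
from scipy.optimize import fsolve

def poly_surface(theta, phi, c):
    """Cubic polynomial of equations 3.1 and 3.2 with coefficients c[0]..c[8]."""
    return (c[0] + c[1]*theta + c[2]*phi + c[3]*theta**2 + c[4]*theta*phi
            + c[5]*phi**2 + c[6]*theta**3 + c[7]*theta**2*phi + c[8]*theta*phi**2)

def solve_aoa(di1, di2, C, D, guess=(54.7, 45.0)):
    """Solve equations 3.1 and 3.2 for the AOA (phi, theta), in degrees.

    di1, di2 : measured differential photocurrents i1 - i3 and i2 - i3,
               normalized consistently with the fitted coefficients
    C, D     : length-9 coefficient arrays corresponding to Table 3.1
    guess    : initial (theta, phi); the central axis of symmetry is used here
    """
    def residual(x):
        theta, phi = x
        return [poly_surface(theta, phi, C) - di1,
                poly_surface(theta, phi, D) - di2]

    theta, phi = fsolve(residual, guess)
    return phi, theta
```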
The Angular ReceiverAfter filtering and amplification, the output voltage corresponding toeach PD is connected to a separate channel on the National Instruments(NI) wireless data acquisition (DAQ) unit that transmits the amplitudesof the captured signal values wirelessly to a computer. Using LabVIEW, agraphical programming language, the power spectral density of the capturedsignal from each PD channel is recorded. The power values P1, P2, and P3at the particular modulation frequency of the LED optical beacon of interestare recorded in dB, and converted to the corresponding output voltage valuesV1, V2 and V3. Using the circuit diagram in Fig. 3.6, the correspondingphotocurrents are computed by dividing the output voltages correspondingto each PD by the circuit impedance (10 MΩ). Differential photocurrentsare then formed as shown in equations 3.1 and 3.2 to solve for the AOAvalues (φ, θ).3.1.2 Intensity ResponseIntensity independence is a significant distinction between optical posi-tioning using proximity detection, also known as optical RSS, and opticalAOA positioning using the angular receiver. Optical RSS-based position-ing systems rely on measuring the incident optical power from LED opticalbeacons. A typical example of optical RSS-based positioning systems is il-lustrated in [7] where optical transmitters are mounted on the ceiling anda user carrying an optical receiver determines their position based on thesignal strength of the optical signal measured by the receiver. The strongerthe signal measured from an optical transmitter the closer one is to thattransmitter. The major drawback of optical RSS-based systems is the in-herent sensitivity to optical beacon grid powers. The system designer mustmake sure that all optical transmitters operate at the same power level. Anyimbalances will render optical RSS-based systems inaccurate.The proposed angular receiver’s AOA is independent of absolute opticalpowers being incident on the angular receiver PD1, PD2, and PD3 sides,since the AOA calculation process uses normalized differences between thesethree powers. However, there exists a minimum intensity threshold, thatis configuration-dependent, for optical AOA positioning using the angularreceiver. In order to determine the minimum allowable intensity for theproposed positioning system to estimate the AOA reliably, an experimentis performed in which an LED optical beacon is incident along the centralaxis of symmetry of the angular receiver, at an AOA of φ = 45◦ and θ =54.7◦ and with a separation distance of 0.5 m. At this orientation, an initialcalibration of the PD responsivities is undertaken where the transimpedance233.1. The Angular Receiveramplifier gains connected to each of the three PDs are adjusted to yieldbalanced (i.e., equal) photocurrents. The AOA is then recorded and plottedversus incident optical intensity by varying the LED optical beacon poweras shown in Fig. 3.7. Note that as the light intensity decreases, the AOAangles remain constant, until the light intensity reaches 0.2 µW/cm2. Atthat point, the measured angles deviate from the true angles. This is due tolarge fluctuations in the received power on each of the three PD channels.The minimum optical transmit power, Pt, that would be required tohave the LED optical beacon achieve the minimum allowable received opticalintensity of 0.2 µW/cm2 can be found for a typical distance of 0.5 m betweenthe LED optical beacon and receiver. 
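This conversion treats the beacon, for simplicity, as a source whose power spreads uniformly over a sphere of radius r (no emission pattern or receiver gain is included), so that the received intensity and the transmit power are related by

\[
I_r = \frac{P_t}{4\pi r^2}
\qquad\Longrightarrow\qquad
P_t \geq 4\pi r^2\, I_{\min}.
\]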
The minimum optical transmit powerfor the LED optical beacon would be Pt=(4pir2)× 0.2µW/cm2 = 6.3 mW.The knowledge of this value is critical for system designers to build opticalbeacon networks that are able to reliably estimate AOA for larger opticallink distances such as those in [7].Figure 3.7: Direct intensity characterization of measured azimuthal φ andpolar θ angles versus incident optical intensity.For the on-going analysis, the LEDs are operated with their maximumtransmit power of 57 mW. With knowledge that the minimum receivedpower intensity is 0.2 µW/cm2, as shown in Fig. 3.7, it can be concludedthat the system can operate with distances between the LED optical beacons243.1. The Angular Receiverand receiver of up to r =√(57 mW0.2 µW/cm2)= 5.3 m.3.1.3 Multipath ResponseIn order to characterize the effect of diffuse reflections, from differentobjects in the environment, on the accuracy of the AOA measurements, anexperiment is carried out where a variety of reflective surfaces (plywood,stainless steel, drywall) are set in the proximity of the angular receiver.Fig. 3.8 shows the diagram of the multipath characterization experiment.Figure 3.8: Multipath characterization experiment setup.The distance between the angular receiver and the LED optical beaconis fixed at 0.5 m. The angular receiver is oriented at φ = 45◦ and θ = 54.7◦,and the incident optical intensity is 1.2 µW/cm2. The AOA is measured asthe reflective surface distance is changed. The results are shown in Fig. 3.9as a function of the reflective surface distance, with the angular receivercentral axis of symmetry parallel to the reflective surface. Multipath effects,in the form of significant departure of measured φ from the true value φ,253.1. The Angular Receiverare apparent in the figure for each of the materials when the reflective sur-face distance is less than approximately 0.5 m. When the reflective surfacedistances increases above 0.5 m, φ converges to the true value of 45◦ and θconverges to the true value of θ = 54.7◦.Figure 3.9: Reflected intensity characterization of measured azimuthal φ andpolar θ angles versus reflective surface distance for three surfaces (plywood,stainless steel, drywall).From Fig. 3.9 one observes that for the current configuration in Fig. 3.8,multipath reflections impact φ values more than θ values. Light reflected bythe reflective surface will erroneously increase the incident optical intensityof PD1 to a much greater extent than that of PD2. Since φ is largely afunction of i1-i2, whereas θ is largely a function of i2-i3, φ will be impactedby an inflated value of photocurrent much more so than will be θ.The total incident optical power on PD1 is the linear sum of the LOSoptical power and the NLOS or reflected power. Figure 3.10 shows the per-centage of incident optical power that is reflected onto PD1 due to multipathfor the case of drywall. One observes that at approximately 20 cm reflectivesurface distance, 50% of the incident optical power is due to reflections. Atapproximately 40 cm, 30% of the incident power is reflected and this resultsin a φ error of 4◦ compared to 2◦ error at 50 cm reflective surface distance.In conclusion, optical AOA positioning is most accurate if the reflected263.2. Angle-Of-Arrival Measurement Error Characterizationlight is approximately 20% or lower of the total incident light. 
In this casethe effect of reflections will be negligible.Figure 3.10: Percentage ratio of reflected optical power to total incidentoptical power on PD1 for drywall.3.2 Angle-Of-Arrival Measurement ErrorCharacterizationTo characterize the angular receiver’s AOA accuracy, the angular receiveris mounted on two gyroscopes (with a precision of 0.5◦). One gyroscope liesin the x ’-y ’ plane of the angular receiver body frame to adjust the incidentφ angle, and the other gyroscope adjusts the incident θ angle (see Fig. 3.2).The angular receiver is illuminated by a single LED optical beacon at a fixedknown position which ensures an intensity of 1.8 µW/cm2.The gyroscopes are adjusted such that the output photocurrents fromPD1, PD2 and PD3 are approximately equal. Since the three PDs havesimilar but not necessarily identical responsivities, the calibration of thePDs is carried out to ensure that the three photocurrents are equal (orbalanced). This is done by adjusting the three preamplifier gains in theangular receiver electronics to yield balanced photocurrents. At this angularreceiver orientation, the measured AOA is equal to the true AOA φ = 45◦and θ = 54.7◦. The corresponding reference AOA (true AOA) is read from273.2. Angle-Of-Arrival Measurement Error Characterizationthe markings on the azimuthal and polar gyroscopes.The gyroscopes are rotated and the AOA φ and θ angles measured andcompared to the true angles. Each AOA (φ and θ pair) that is estimatedis the result of 100 averaged power samples at a modulated LED opticalbeacon frequency of 2.5 kHz. Measured angle errors ∆φ and ∆θ are definedhere as the absolute differences between the measured and true φ and θangles respectively. The angle errors are shown in Figs. 3.11 and 3.12 asa function of φ and θ. Error trends are apparent. The measured angleerrors ∆φ and ∆θ are at their lowest level in close proximity to the φ =45◦ and θ = 54.7◦ central axis of symmetry of the angular receiver. Inmoving away from this central axis, the errors increase in a way that reflectsthe structural symmetry. In Fig. 3.11, the measured angle error ∆φ isroughly symmetric about the φ = 45◦ line, as one would expect by thestructures mirror symmetry about a φ = 45◦ bisecting plane (see Fig. 3.2).At the same time, the measured angle error ∆φ is disproportionately largefor small θ angles, compared to those for large θ angles. This distinction isseen from illumination asymmetry for small or large θ. When the structureis illuminated near θ ≈ 0◦, both PD1 and PD2 yield negligible photocurrentsand only PD3 yields a high photocurrent. This gives way to large measuredangle errors in ∆φ . When the structure is illuminated near θ ≈ 90◦, bothPD1 and PD2 yield high photocurrents and only PD3 yields a negligiblephotocurrent. This gives way to low measured angle errors in ∆φ.In Fig. 3.12, the measured angle error distribution for ∆θ is roughlysymmetric about φ = 45◦ when θ approaches 90◦. This can be understoodby examining Fig. 3.2, with its symmetry between PD1 and PD2. As θ isreduced, however, symmetry in the measured angle error ∆θ diminishes, andthe response becomes dominated by random error.To deploy the angular receiver in OWL systems, the measured angleerrors ∆φ and ∆θ must be kept below an acceptable level. For this investi-gation, a mean error of 2◦ is deemed to be acceptable, and this is achieved,given the results in Figs. 3.11 and 3.12, by defining an operational cone ofφ × θ = 40◦ × 40◦ through the central axis of symmetry. 
LED opticalbeacons illuminating the angular receiver within this operational cone givemeasured mean angle errors ∆φ and ∆θ below 2◦. For typical indoor opticallink distances of 2 m, an azimuthal error of 2◦ results in a 7 cm positioningerror. For applications such as robot positioning an error of 7 cm would besufficient for robot navigation without hitting obstacles, and for the robotto go through doorways and corridors.283.2. Angle-Of-Arrival Measurement Error CharacterizationFigure 3.11: Measured angle error ∆φ as a function of φ and θ.Figure 3.12: Measured angle error ∆θ as a function of φ and θ.293.3. Positioning Analysis Using Dilution of Precision3.3 Positioning Analysis Using Dilution ofPrecisionThe term Dilution of Precision (DOP) quantifies the effect of opticalbeacon and angular receiver geometry on position error standard deviation.The lower the DOP number the lower the position uncertainty and, there-fore, the better is the position estimate.Dilution of precision is defined as the ratio of the position standarddeviation σP to the measurement standard deviation σm as shown inDOP = σPσm . (3.3)DOP is commonly used as shown in equation 3.4 to quantify GPS posi-tioning accuracy, wherePosition error ≈ DOP× Range error. (3.4)The derivation of AOA DOP is presented next. Assume the angular re-ceiver is positioned at (x , y , z ) with respect to a known reference frame.Measurements are made for AOA φi and θi angles for K LED optical bea-cons positioned at (xi , yi , zi), where i = 1, 2, ..K. The relationship betweenthe angular receiver position and the AOA angles is given bytanφi = xi−xyi−y (3.5)andtan θi =√(xi−x)2+(yi−y)2zi−z . (3.6)The AOA angles φi and θi are defined for directions toward the ith opticalbeacon.Expressions 3.7 and 3.8 are fundamental relationships for linking theerrors in the existing measured angles to the errors in the estimated positionof the angular receiver. The measured angle errors are recorded in themeasured angle error vectorM = [∆φ1 ∆θ1 ∆φ2 ∆θ2 . . . ∆φK ∆θK ]T (3.7)where ∆φi and ∆θi are the respective measured angle errors for φi andθi for the ith optical beacon, and the T superscript denotes the transposeoperation. Similarly, the position error vector is303.3. Positioning Analysis Using Dilution of PrecisionP = [∆x ∆y ∆z]T (3.8)where ∆x, ∆y and ∆z are the respective errors in the x, y, and z coordinates.The measured angle errors ∆φ and ∆θ can be linked to the absolute positionerrors by taking partial derivatives of the observation’s φi and θi with respectto the unknown angular receiver position x, y, z. The resulting partialderivative matrix, H, is defined byH =δφ1δxδφ1δyδφ1δzδθ1δxδθ1δyδθ1δz... ... ...δφKδxδφKδyδφKδzδθKδxδθKδyδθKδz2K×3(3.9)and the relationship between P , H, and M isM = HP . (3.10)Equation 3.10 has 2K equations and three unknowns ∆x, ∆y and ∆z. Tosolve for the position errors, ∆x, ∆y and ∆z, the overdetermined linearsystem is solved using the method of Least Squares [45] resulting in a positionerror equal toP =(HTH)−1HTM . (3.11)For this AOA-based system, the measured angle errors are assumed to beindependent, zero-mean, Gaussian-distributed random variables with equalvariance, σ2M [17], [46]. 
The covariance of equation 3.11 can then be used toexpress the position error variance asσ2P = tr[E(PPT)]= tr[(HTH−1)]σ2m (3.12)where E() and tr() denote the expectation and trace operations respectively.Three-dimensional position DOP (PDOP) is then defined as the ratio ofpositioning error standard deviation σP to the standard deviation of themeasured angle errors σm such thatPDOP = σPσm =√σ2x+σ2y+σ2zσm =√tr[(HTH)−1]. (3.13)Here, position standard deviations in x, y and z coordinates are denotedby σx, σy and σz, respectively. Note that the PDOP in equation 3.13 is313.3. Positioning Analysis Using Dilution of Precisionan AOA-based quantity with units of meters per radian, unlike the unitlessPDOP for range-based systems such as GPS. PDOP acts as a weightingfactor on the measured angle standard deviation for the calculation of theposition standard deviation. The effect of the angle standard deviation onthe position standard deviation will depend on the angular receiver positionwith respect to observable optical beacons. Given two observable opticalbeacons in close proximity, for example, the angular receiver registers twoAOAs with the corresponding LOPs being nearly parallel, which in turnyields a large PDOP and large position standard deviation. Given two ob-servable optical beacons that are well separated, in contrast, the angularreceiver registers two AOAs with the corresponding LOPs being nearly or-thogonal, which in turn yields a small PDOP and small position standarddeviation. For the present analysis, the 3-D positioning error standard de-viation σP, results from the measured mean angular error of σm = 2◦ inthe operational cone and can be found, for a particular OWL system, bycalculating the PDOP from 3.13.In order to visualize the effect of PDOP on positional accuracy, an OWLsystem is simulated to predict the position standard deviation, σP in equa-tion 3.13, using PDOP and assuming an AOA standard deviation, σm = 2◦.The OWL system is simulated for two and four LED optical beacon config-urations as illustrated in Fig. 3.13.Figure 3.13: Schematic of the optical AOA positioning system. The(x ’,y ’,z ’) represent the angular receiver’s body frame, while the (x ,y ,z ) rep-resent the navigation reference frame.For the two optical beacon configuration, the optical beacons are posi-tioned with LED A1 at (x1 = 15 cm, y1 = 0 cm, z1 = 50 cm) and with LED323.3. Positioning Analysis Using Dilution of PrecisionA2 at (x2 = -15 cm, y2 = 0 cm, z2 = 50 cm), and with the angular receiverscanned across the z = 0 plane. Results are shown as predicted positionstandard deviation in Fig. 3.14. The position standard deviation valuesrange from 4 to 6 cm. The mean 3-D position error standard deviation isσP = 4.7 cm. Note that the largest position standard deviation occurs inthe plane y = 0. This is because LOPs from LEDs A1 and A2 in this regionare parallel, resulting in larger DOP which subsequently gives rise to largeposition error standard deviation. The corresponding DOP plot for the twoLED beacon configuration is shown in Fig. 3.15 with DOP values rangingfrom 2 to 3 cm/deg.Figure 3.14: The predicted 3-D positioning error standard deviation σp foroptical AOA positioning with two LED optical beacons A1 and A2.333.3. Positioning Analysis Using Dilution of PrecisionFigure 3.15: DOP (cm/deg) for optical AOA positioning with two LEDoptical beacons A1 and A2.In order to improve the position standard deviation, four LED opticalbeacons are used in the OWL system. 
The addition of the two opticalbeacons improves the OWL system geometry (i.e., lowers DOP). The opticalbeacons are positioned with LED B1 at (x1 = 15 cm, y1 = 15 cm, z1 = 50cm), LED B2 at (x2 = -15 cm, y2 = 15 cm, z2 = 50 cm), LED B3 at (x3 =-15 cm, y3 = -15 cm, z3 = 50 cm) and LED B4 at (x4 = 15 cm, y4 = -15cm, z4 = 50 cm). The angular receiver is scanned across the z = 0 plane.Figure 3.16 shows the predicted position standard deviation assuming σm =2◦. Position error standard deviations lower than those for the two opticalbeacon case (Fig. 3.14) are apparent. For the four optical beacon grid theposition standard deviation values range from 2.7 to 2.8 cm. The mean 3-Dposition error standard deviation is σP = 2.8 cm. The corresponding DOPplot for the four LED beacon configuration is shown in Fig. 3.17 with DOPvalues ranging from 1.35 to 1.4 cm/deg. The predicted mean 3-D positionerror standard deviation of σP = 2.8 cm is a factor of two improvement overthe mean predicted 3-D position error standard deviation of σP = 4.7 cmfor the two optical beacon grid.343.3. Positioning Analysis Using Dilution of PrecisionFigure 3.16: The 3-D predicted positioning error standard deviation σp foroptical AOA positioning with four LED optical beacons B1, B2, B3, and B4.353.4. Positioning PerformanceFigure 3.17: The 3-D DOP (cm/deg) for optical AOA positioning with fourLED optical beacons B1, B2, B3, and B4.For the 2-D and 3-D optical beacon configurations discussed above, oneobserves that the DOP values vary far less as a function of angular receiverlocation for the 4 LED optical beacons case than for the 2 LED opticalbeacons case. From Figs. 3.15 and 3.17 one observes that the PDOP andtherefore σP increases as the receiver nears an optical beacon. This leadsto the conclusion that the angular receiver should not use the AOA for thatparticular beacon in the position estimate calculation. The angular receivercan detect its proximity to an LED optical beacon based on the AOA itmeasures (the angular receiver is directly below an LED if the measuredAOA is φ = 45◦, and θ = 54.7◦), and the total photocurrent (the sum ofi1, i2, and i3) being greater at this location than the neighboring angularreceiver locations.3.4 Positioning PerformanceIn this section, the performance of conventional RSS optical position-ing (which relies on proximity detection) is compared to that of the angularreceiver AOA optical positioning technique. The performance analysis quan-tifies positioning error, which is defined as the Euclidean distance between363.4. Positioning Performancethe measured and true 3-D positions. Each OWL system is tested with thefour optical beacons B1, B2, B3 and B4 shown in Fig. 3.13. Section 3.4.1presents the RSS measurement experiment, and Section 3.4.2 presents theAOA measurement experiment.3.4.1 Optical RSSOptical RSS positioning is tested with a single flat 9.7×9.7 mm2 PDacting as the receiver. The PD is rastered across the xy plane at 25 differentpositions of (0, 0), (0, ±7.5 cm), (0, ±15 cm), (±7.5 cm, 0), (±7.5 cm,±7.5 cm), (±7.5 cm, ±15 cm), (±15 cm, 0), (±15 cm, ±7.5 cm), (±15 cm,±15 cm). The incident intensities measured by the PD are used to quantifythe respective ranges to the four LEDs. The calibration and the process offinding range is performed as follows:1. The received electrical power is measured using the PD connected tothe circuit in Fig. 3.5 in section 3.1.1.2. The equivalent photocurrent from the PD is calculated (µA).3. 
Knowing the average responsivity of the PD, the received optical powerPr is calculated, where Pr = photocurrent/responsivity.4. Received optical power is related to transmitted optical power by Pr =kPt/r2 [17], where r is the range between the LED optical beacon andthe PD. Each LED optical beacon will have a slightly different Pr dueto power imbalances.5. At the (0, 0) position, the value of kPt is calculated for each LEDoptical beacon by multiplying the Pr by the known range squared, r2.The value of kPt is constant for a given LED optical beacon. This isthe calibration process.6. Given the received optical power at different locations, and knowingkPt, the range r is calculated such that r =√kPt/Pr7. A Nonlinear Least Squares algorithm takes the range values from thefour LED optical beacons and determines the 2-D and 3-D coordinatesof the PD via trilateration.The resulting positioning errors for the optical RSS positioning techniqueare shown as a best fit plot in Fig. 3.18. Note that at the centre of the grid (x= 0, y = 0), the ranges from the LED optical beacons have been calibrated373.4. Positioning Performancesuch that they are all equal, since the PD is equidistant from all four LEDoptical beacons, so this centre position gives the lowest positioning error. Amean RSS positioning error of 20 cm is found for positions across the xyplane.Figure 3.18: The 3-D positioning error for optical RSS positioning.Motion away from the centre increases the positioning error due to twofactors. The first factor is range DOP as illustrated in Fig. 3.19 and thesecond factor is range error. In order to validate the theoretical DOP inequation 3.13 (illustrated in Fig. 3.19), the empirical DOP is calculatedusing equation 3.3 as the ratio of the position standard deviations to therange standard deviations of the 25 PD test points in Fig. 3.18.The average theoretical DOP is computed from Fig. 3.19 and is comparedto the average empirical DOP from the 25 PD test points in Fig. 3.18. It isvalid to do such a comparison since the range DOP in Fig. 3.19 changes bya maximum factor of 0.1 across the 25 PD test points. The mean theoreticalDOP of Fig. 3.19 is equal to 1.35. The mean empirical DOP is calculated as383.4. Positioning Performancefollows. Assuming there is no bias in the range measurements the standarddeviation of the range measurements is equivalent to the range error. There-fore, the range standard deviation of the 100 range measurements (4 rangemeasurements for each of the 25 PD locations) is calculated and is equal to5.8 cm. Similarly, assuming there is no bias in the position estimates, thestandard deviation of the position estimates is equivalent to the position er-ror shown in Fig. 3.18.Therefore, the position standard deviation of the 25position errors is calculated and is equal to 8.8 cm. Therefore, the empiricalDOP = σPσm =8.85.8 = 1.5, approximates the average theoretical DOP valueof 1.35. For a range standard deviation equal to 5.8 cm, the difference be-tween the empirical (1.5) and theoretical (1.35) DOP translates to positionerrors of 0.9 cm. Since the setup was measured by a tape measure with mmresolution, it is conceivable that one would have up to 1 cm of error.For the second factor, the position error plot in Fig. 3.18 can be verifiedby finding the standard deviation of the range error, σm at a test positionand multiplying it by the corresponding DOP value in Fig. 3.19. 
For instanceat position (x = -15, y = 15), σm = 20.0 cm, and the corresponding DOPvalue is 1.45, this results in a σP = 20.0× 1.45 = 29 cm. The value of 29 cmapproximates the position error value of 30 cm in Fig. 3.18 at (x = -15, y= 15).Figure 3.19: Simulated 3-D range DOP for optical RSS positioning setup.393.4. Positioning Performance3.4.2 Optical AOAThe optical angular receiver AOA positioning setup is illustrated inFig. 3.13 with LED optical beacons B1, B2, B3 and B4. The angular re-ceiver is oriented as shown in Fig. 3.20 such that PD1 is normal to(x, y, z) = (cos(54.7◦) cos(45◦), cos(54.7◦) cos(45◦), sin(54.7◦)= (1/√6, 1/√6,√2/√3), (3.14)PD2 is normal to(x, y, z) = (− cos(54.7◦) sin(45◦), cos(54.7◦) cos(45◦), sin(54.7◦))= (−1/√6, 1/√6,√2/√3), (3.15)and PD3 is normal to(x, y, z) = (0, sin(54.7◦), cos(54.7◦))= (0,−√2/√3, 1/√3). (3.16)Figure 3.20: Angular receiver orientation. (θR, φR) represent the angularreceiver body frame (x’,y’,z’) rotation with respect to the reference frame(x,y,z).The angular receiver is rastered across the xy plane at the same 25 testpoints as the RSS experiment. The AOA is measured from LED optical403.4. Positioning Performancebeacons B1, B2, B3 and B4 (see Fig. 3.21), where the beacons are 0.5 mabove the angular receiver.At each of the 25 test points, the power values P1, P2 and P3, corre-sponding to the generated photocurrents i1, i2 and i3, are recorded for agiven LED optical beacon modulated at a particular modulation frequencyusing the procedures in Section 3.1.1. A total of one thousand power mea-surements are collected for each LED optical beacon. The one thousandpower measurements are recorded in a time span of 3.4 minutes. For each ofthe one thousand P1, P2 and P3 power values measured from a given LEDoptical beacon, the corresponding mean power values are used to calculatethe mean AOA φ and θ using equations 3.1 and 3.2. These AOAs are mea-sured with respect to the angular receiver body frame x ’y ’z ’ axis. Knowingthe orientation of the body frame with respect to the reference frame (i.e.,φR = 45◦ and θR = 54.7◦), the AOA angles are then computed with respectto the xyz navigational frame.The AOA values from each of the four LED optical beacons are usedin a Least Squares triangulation algorithm to calculate an estimate of theangular receiver 3-D position with respect to the known LED optical beaconpositions.The Least Squares algorithm is based on rearranging equations 3.5 and 3.6such thattanφi(yi − y)− (xi − x) = 0, (3.17)andtan θi(zi − z)−√(xi − x)2 + (yi − y)2 = 0. (3.18)An A matrix is defined containing the partial derivatives of equations 3.5and 3.6 with respect to the three unknowns (x, y, z). Given an initial guessof the angular receiver position (x̂, ŷ, ẑ), the weights, w, are computed bysubstituting (x̂, ŷ, ẑ) for (x, y, z) in equations 3.5 and 3.6. The weights ofeach of φi and θi are then the value of the expression on the left hand sideof each of equations 3.17 and 3.18, respectively. The estimated corrections,δ = [δx δy δz] for the x, y, and z position estimates are computed withδ = (ATA)−1ATw (3.19)and the new x, y, and z estimates are equal to the initial estimates plus thedelta corrections. The new x, y, and z are fed into an iterative algorithmsuch that they form the new guess values, from which new weights, w, arecomputed and then a new δ. The process continues until δ converges to anegligible value. The resultant (x̂, ŷ, ẑ) position becomes the Least Squaresestimate.413.4. 
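A minimal sketch of this iteration is given below, written as a standard Gauss-Newton update on the residuals of equations 3.17 and 3.18, so the A matrix and weights w described above are folded into a single Jacobian and residual vector, up to sign convention. The function and variable names are illustrative, and the AOAs are assumed to have already been rotated into the navigation frame and converted to radians.

```python
import numpy as np

def triangulate_aoa(beacons, phi, theta, guess=(0.0, 0.0, 0.0),
                    max_iter=20, tol=1e-6):
    """Least Squares triangulation of the receiver position from AOAs.

    beacons    : (K, 3) array of LED optical beacon positions (xi, yi, zi)
    phi, theta : length-K arrays of measured azimuthal and polar angles (rad)
    guess      : initial estimate of the receiver position (x, y, z)
    """
    p = np.array(guess, dtype=float)
    beacons = np.asarray(beacons, dtype=float)

    for _ in range(max_iter):
        dx = beacons[:, 0] - p[0]
        dy = beacons[:, 1] - p[1]
        dz = beacons[:, 2] - p[2]
        rho = np.hypot(dx, dy)

        # Residuals of equations 3.17 and 3.18 at the current estimate.
        r = np.concatenate([np.tan(phi) * dy - dx,
                            np.tan(theta) * dz - rho])

        # Partial derivatives of the residuals with respect to (x, y, z).
        J_phi = np.column_stack([np.ones_like(dx), -np.tan(phi),
                                 np.zeros_like(dx)])
        J_theta = np.column_stack([dx / rho, dy / rho, -np.tan(theta)])
        J = np.vstack([J_phi, J_theta])

        # Least squares correction (equation 3.19), then update the estimate.
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p += delta
        if np.linalg.norm(delta) < tol:
            break
    return p
```

For the four-beacon grid of Fig. 3.13, calling triangulate_aoa with the beacon coordinates and the navigation-frame AOAs would return the Least Squares position estimate described above.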
Positioning PerformanceThe resulting 3-D positioning error is shown as a best fit plot in Fig. 3.21.Position error depends on the AOA accuracy measured from the four LEDsat each of the 25 angular receiver positions. For instance, a large positionerror of 8 cm is calculated at angular receiver position (x = -15 cm, y =15 cm) primarily due to the fact that the measured φ from LED4 has alarge error of 8◦ error. This large φ error is due to the fact that the lightfrom LED4 strikes the angular receiver at (φ4 = 4◦, and θ4 = 30◦) which isoutside the operational cone.Note that Fig. 3.11 shows that an AOA at (φ4 = 4◦, and θ4 = 30◦) has anerror of approximately 8◦ in φ which agrees with the error in the measuredφ for LED4 in this test. As the angular receiver moves along the x = -15 cmline, from y = 15 cm to y = -15 cm, the average AOA error from the fourLED optical beacons decreases, resulting in a lower positioning error.At angular receiver position (x = 15 cm, y = 15 cm) an AOA of (φ3 =88◦, and θ3 = 35◦) is measured from LED3. Since the AOA is outside theoperational cone, the φ3 error is 5◦. This error in φ is confirmed in Fig. 3.11which shows that at this AOA the φ error is approximately 5◦.423.4. Positioning PerformanceFigure 3.21: The 3-D positioning error for optical AOA positioning.Others factors that contribute to the position error are biases in theangular receiver position and orientation. This is due to the fact that theangular receiver has to be physically moved to each of the 25 test locations,which results in biases in the true AOA.The shape in Fig. 3.21 can also be explained by looking at Figs. 3.11and 3.12. Angular receiver positions with significant illumination of twoPDs (at large and negative y values in Fig. 3.21 with θ approaching 90◦ inFigs. 3.11 and 3.12) have improved accuracy over orientations with signif-icant illumination of only one PD (at large and positive y values Fig. 3.21with θ approaching 0◦ in Figs. 3.11 and 3.12). This is because more lightflux is captured by the angular receiver at large and negative y values and,therefore, results in more accurate AOAs.Overall, the mean positioning error of the optical AOA positioning is5 cm, a result which is four times lower than that of the optical RSS posi-tioning.433.4. Positioning Performance3.4.3 Optical AOA PrecisionFor the calculation of each of the 25 position estimates on which Fig. 3.21is based, the average AOA value of 840 φ and 840 θ measurements is used.In order to quantify the precision of these instantaneous AOA values, theφ error is defined as the difference between the measured instantaneous φvalues for one of the 25 test points and the average φ value for the same testpoint. Similarly, the θ error is defined as the difference between the measuredinstantaneous θ values for one of the 25 test points and the average θ valuefor the same test point.A sample size of 25 test points is necessary to achieve a 95% confidencelevel with a standard deviation of 0.01◦ [47]. The relationship between thesample size, confidence level (z-score), confidence interval (margin of error),and measurement standard deviation is given by equation 3.20Sample size = (z-score)2σ(1− σ)/(margin of error)2. (3.20)Figs. 3.22, 3.23, 3.24 and 3.25 show the histograms of the AOA φ and θprecision for the AOA measured from LED optical beacons B1, B2, B3 andB4 respectively. Note that in each histogram there are 21,000 measurements(25 test points with 840 measurements each).Figure 3.22: AOA measurement precision histograms for φ1 and θ1.443.4. 
Positioning PerformanceFigure 3.23: AOA measurement precision histograms for φ2 and θ2.Figure 3.24: AOA measurement precision histograms for φ3 and θ3.453.4. Positioning PerformanceFigure 3.25: AOA measurement precision histograms for φ4 and θ4.Table 3.2 shows the standard deviation for the φ error. The maximumstandard deviation is ≈ 0.03◦. Similarly, Table 3.3 shows the standarddeviation for the θ error. The maximum standard deviation is ≈ 0.01◦.Table 3.2: φ error precisionParameter φ1 φ2 φ3 φ4standard deviation (◦) 0.0278 0.0154 0.0185 0.0075Table 3.3: θ error precisionParameter θ1 θ2 θ3 θ4standard deviation (◦) 0.0139 0.0088 0.0060 0.0079The distribution of the error histograms in Figs. 3.22, 3.23, 3.24, and 3.25,and the precision results in Tables 3.2 and 3.3 suggest that the instantaneousAOA measurements are very precise. Consequently, there is very little ran-dom error in the AOA measurements of the angular receiver. An AOA errorequal to 0.03◦ (the maximum AOA standard deviation seen in Tables 3.2and 3.3) results in a translation error of just 1 mm over an optical bea-con/angular receiver separation of 2 m.In order to validate the theoretical 3-D DOP in equation 3.13 (illustrated463.4. Positioning Performancein Fig. 3.17), the empirical 3-D DOP in equation 3.3 is calculated as theratio of the position standard deviations to the AOA standard deviationsin Fig. 3.21. The overall AOA standard deviation in Tables 3.2 and 3.3 iscalculated to be σm = 0.0149◦. The position standard deviation of the 21,000position estimates at the angular receiver 25 test points is calculated to be σP= 0.0193 cm. Therefore, the empirical DOP = σPσm =0.01930.0149 = 1.30 cm/deg.Note that the empirical 3-D DOP value approximates the theoretical 3-DDOP in Fig. 3.17 which has an average value of 1.37 cm/deg. It is safe toquote an average 3-D DOP value, since the 3-D theoretical DOP changes byas little as 0.1 cm/deg as shown in Fig. 3.17. Also, the difference betweenthe empirical and theoretical DOP is negligible since for a σm = 0.0149◦,the difference in position standard deviation σP , for a 3-D DOP differenceof 0.07 cm/deg would be 1.1×10−3 cm.3.4.4 Optical AOA AccuracyIn order to quantify the accuracy of the instantaneous AOA measure-ments for each of the 25 test points, the φ error is defined as the differencebetween the measured instantaneous φ values for one of the 25 test pointsand the true φ value for the same test point as determined from the orienta-tion of the angular receiver body frame with respect to the reference frame,and the geometrical position of the angular receiver with respect to the LEDoptical beacons. Similarly, the θ error is defined as the difference betweenthe measured instantaneous θ values for one of the 25 test points and thetrue θ value for the same test point as determined from the orientation ofthe angular receiver body frame with respect to the reference frame, and thegeometrical position of the angular receiver with respect to the LED opticalbeacons. Figs. 3.26, 3.27, 3.28 and 3.29 show the histograms of the AOA φand θ error for the AOA measured from LED optical beacons B1, B2, B3and B4 respectively. Again, each histogram contains 21,000 measurements(25 test points with 840 measurements each).473.4. Positioning PerformanceFigure 3.26: AOA measurement accuracy histograms for φ1 and θ1.Figure 3.27: AOA measurement accuracy histograms for φ2 and θ2.483.4. 
Positioning PerformanceFigure 3.28: AOA measurement accuracy histograms for φ3 and θ3.Figure 3.29: AOA measurement accuracy histograms for φ4 and θ4.Table 3.4 shows the mean and standard deviation for the φ error. Themaximum standard deviation is ≈ 3◦. Similarly, Table 3.5 shows the mean493.4. Positioning Performanceand standard deviation for the θ error. The maximum standard deviationis ≈ 2.4◦.Table 3.4: φ error mean and standard deviationParameter φ1 φ2 φ3 φ4mean (◦) -0.1688 -1.0103 0.8720 2.8116standard deviation (◦) 0.8973 1.4380 2.4199 3.1935Table 3.5: θ error mean and standard deviationParameter θ1 θ2 θ3 θ4mean (◦) 0.7445 1.7750 0.9979 -1.1102standard deviation (◦) 2.4083 1.4886 1.4755 1.7431There are two factors that determine the shape of the histograms. Thefirst factor is systematic errors due to the nature of the experiment, whereslight errors in the angular receiver orientation with respect to the LEDoptical beacons would result in a bias in the true AOA. The second factoris AOA errors due to angles outside the cone of acceptance which give riseto large errors as shown in Figs. 3.11 and 3.12.Biases are apparent in Table 3.4 where the mean φ2, φ3, and φ4 errorsare -1.0◦, 0.9◦, and 2.8◦ respectively, and in Table 3.5 where the mean θerrors θ1, θ2, θ3, and θ4 are 0.7◦, 1.8◦, 1.0◦, and -1.1◦.In particular, the φ4 error in Table 3.4 has the largest mean error com-pared to the rest of the φ values due to the fact that φ4 registers the largestpercentage of angles outside of the acceptance cone of operation. A maxi-mum φ4 error of 8.1◦ (left side plot in Fig. 3.29) occurs when the angularreceiver is at (x = -15 cm, y = 15 cm) (see Fig. 3.21). At this location, theangular receiver measures an AOA from LED4 at (φ4 = 4◦, and θ4 = 30◦).The φ error characterization in Fig. 3.11 shows that at this particular AOAthe error, ∆φ, is approximately 10◦.Note that the φ1 error histogram is centered about approximately zeroas shown in the left side plot of Fig. 3.26. This is due to the fact that the φ1angle registers the highest percentage of AOAs, at the 25 angular receiverlocations, that are within the acceptance cone and, therefore, would yieldAOA errors less than 2◦.The θ3 error shown in the right side of Fig. 3.28 has a maximum θ errorof 3.2◦ at angular receiver location (x = 7.5 cm, y = -15 cm) in Fig. 3.21.At this location, the angular receiver measures an AOA from LED3 at (φ503.4. Positioning Performance= 72◦, and θ = 60◦). From Fig. 3.11, (φ = 72◦, and θ = 60◦) correspondsto a ∆φ error of approximately 3◦. The θ3 error is approximately zero atangular receiver locations (x = -15 cm, y = 0 cm), and (x = -15 cm, y =15 cm), where the θ3 values are within the operational cone.In conclusion, the angular receiver has demonstrated 3-D position accu-racies on the order of a few centimeters as shown in the empirical resultsin Section 3.4.2. A minimum intensity threshold of 0.2 µW/cm2 must bemaintained for reliable AOA estimates. In order to triangulate the angularreceiver position, at least two optical beacons must lie within the angularreceiver FOV defined from (φ = 0◦ to 90◦ and θ = 0◦ to 90◦). Note thatthe full angular receiver FOV will be used for the analysis below instead ofthe FOV defined for the operational cone. Although this may sacrifice someAOA accuracy, it will allow the angular receiver to view more distant LEDs,and, therefore, improves DOP. Since the LED optical beacons are mountedon the ceiling, it is prudent that the angular receiver be oriented upwards asshown in Fig. 
3.20 such that φR = 45◦ and θR = 54.7◦ to capture a greaternumber of LED optical beacons.Several factors such as the LED separation distances (grid size) andseparation height between the angular receiver and LED optical beacons,h, must be designed to maintain at least two optical beacons within theangular receiver FOV. The relationship between the grid spacing and theseparation height is outlined as follows.Assume that the angular receiver is at the origin of the (x ,y ,z ) navigationframe as shown in Fig. 3.20 and is pointing upwards towards the ceiling suchthat φR = 45◦ and θR = 54.7◦. The maximum distance that the LEDs canbe placed with respect to the angular receiver position will be investigated.These distances are a function of the angular receiver FOV.In order to determine the maximum distance an LED can be placedalong the y direction and still be visible by the angular receiver, considerFig. 3.30 which shows the angular receiver of Fig. 3.20 when looking at the yzplane. From Fig. 3.30, the maximum distance along the positive y directionis y1 = h/ tan(θR). Assuming the height separation between the LED andangular receiver is h = 2 m, this implies that y1 = 2/ tan(54.7◦) = 1.4 m.Similarly, the maximum distance an LED can be placed in the negative ydirection is y2 = h/ tan(θc), and for h = 2 m, y2 = 2/ tan(35.3◦) = 2.8 m.513.4. Positioning PerformanceFigure 3.30: Angular receiver orientation along yz axis.In order to determine the maximum distance an LED can be placedalong the x direction while maintaining visibility by the angular receiver,an incident angle along the x direction φ = 0◦, and θ = 39◦ measured withrespect to the z axis will result in φ’ = 0◦, θ’ = 63◦, where φ’ and θ’ arethe angles measured with respect to the angular receiver body frame. Notethat φ’= 0◦ represents the extreme angle when the light is incident alongthe x’ axis shown in Fig. 3.20. When h = 2 m, the maximum distance alongthe x direction is h/ tan(90◦− θ) = 2/ tan(90◦− 39◦)= 1.6 m. Similarly themaximum distance along the negative x direction is 1.6 m.Next, the maximum distance an LED can be placed in the positive ydirection is determined such that the LED has equal x and y componentsfrom the angular receiver position i.e., φ = 45◦ or φ = 135◦ when measuredwith respect to the x axis shown in Fig. 3.20. When an LED is incident atφ = 45◦, and θ = 45◦ this results in φ’ = 15◦, and θ’ = 90◦. A θ’ = 90◦represents the maximum FOV in the positive y direction. Therefore, for h= 2 m, the maximum 2-D distance of the LED from the angular receiveris h/ tan(90◦ − θ) = 2/ tan(45◦) = 2 m. For equal x and y LED positions,x = y =√22/2 = 1.4 m.Next the maximum distance an LED can be placed in the negative ydirection is determined such that the LED has equal x and y positions fromthe angular receiver position i.e., φ = 315◦ or φ = 225◦ when measured withrespect to the x axis shown in Fig. 3.20. When an LED is incident at φ523.4. Positioning Performance= 225◦, and θ = -35◦ this results in φ’ = 90◦, and θ’ = 36◦. A φ’ = 90◦represents the maximum FOV in the negative y direction. Therefore, for h= 2 m, the maximum 2-D distance of the LED from the angular receiveris h/ tan(90◦ − |θ|) = 2/ tan(55◦) = 1.4 m. For equal x and y positions,x = y =√1.42/2 = 1.0 m.Figure 3.31 represents a summary of the LED positions, discussed above,with respect to the angular receiver (assumed to be at the origin withorientations as shown in Fig. 3.20). 
The figure shows a top view of theLED/angular receiver separation assuming a h = 2 m.Figure 3.31: Maximum square grid side-length capability for the angularreceiver.Note the asymmetry between the positive y and negative y directions(which is a result of the asymmetry in θ), and the symmetry between thepositive x and negative x directions (which is a result of the symmetry in φ).For square optical beacon grids the maximum LED grid spacing should beat most 2 m when the height of the grid is 2 m above the angular receiver.The square grid is attractive because it facilitates tessellation.53Chapter 4Image Receiver PositioningThis chapter presents the image receiver. A detailed characterization ofits angular and positioning performance is quantified. Section 4.1 introducesthe image receiver. The empirical AOA error characterization is shown inSection 4.2. Section 4.3 discusses the effect of Dilution of Precision (DOP)on positioning accuracy. Finally, Section 4.4 presents empirical positioningperformance results.4.1 Image ReceiverSection 4.1.1 describes the components of the image receiver, namely theimage sensor and microlens. Section 4.1.2 describes the algorithms used todifferentiate between the LED optical beacon transmitters.4.1.1 Image Sensor and MicrolensIn this work, the image sensor used for positioning is that of a Sony PlayStation 3 (PS3) eye webcam. This webcam was chosen due to its widespreadavailability, cheap cost of only $10, and its high frame rate of 187 framesper second. The original lens that comes with the webcam is removed andis replaced with a custom-made microlens that is much more compact andhas a wider field of view. This is advantageous as a wider FOV microlenscan image more LED optical beacons.The microlens fabrication process and apparatus was developed in theIntegrated Optics Laboratory at the University of British Columbia. Theprocess consists of electro-dispensing a polymer of type Norland OpticalAdhesive (NOA 68), onto a glass substrate immersed in glycerol. The vol-ume of the microlens is determined by the pressure control system, and thetuning of the microlens shape is accomplished by the voltage applied to theneedle tip. Tuning the shape of the microlens produces microlenses withvarying contact angles which affects the microlens FOV. After the polymeris dispensed on the glass substrate and has been electro-tuned to give thedesired shape, it is solidified by curing it with ultra-violet light [48].544.1. Image ReceiverFigure 4.1 shows an illustration of the image receiver consisting of amicrolens and a CMOS sensor, where θ is the polar angle, h is the LEDgrid/microlens separation distance, f is the microlens focal length, and αis the contact angle. A CMOS sensor is made up of thousands of pixels(photoreceiver elements). The Sony PS3 webcam CMOS sensor is made upof pixels where each pixel has a size of 6 × 6 µm2. Microlenses are madewith different diameters that range from 500 µm to 800 µm, and with variouscontact angles that range from 30◦ to 120◦. Higher contact angle microlenseshave a larger FOV and, therefore, are able to image more distant beacons(see Fig. 4.1).Another important parameter for lens design is the f -number definedas, f -number = fD , where f is the focal length and D is the lens diam-eter. The f -number is inversely proportional to the lens FOV, such thatf -number = fD ∝ 1FOV . 
Conventional microlenses [49] have µm-scale diameters and low curvature (low contact angles), which implies long, cm-scale focal lengths, a large f-number, and, therefore, a narrow FOV. The proposed microlens also has a µm-scale diameter, but its high curvature (high contact angle) implies a short, mm-scale focal length, a low f-number, and, therefore, a wide FOV.

Figure 4.1: An illustration of an OWL system showing the LED optical beacons and the image receiver consisting of a microlens and a CMOS sensor.

4.1.2 Colour and Frequency Detection

Indoor positioning utilizing the image receiver works by capturing an image of the ceiling and computing the AOA of the LED optical beacons in the image, as will be shown in Section 4.2. The image receiver position is then found by triangulating with the AOAs.

In order for the image receiver to distinguish between the LED optical beacons in an image, each LED optical beacon emits light at a specific wavelength of the visible spectrum (i.e., each LED has a different colour, usually red, green, or blue) and/or is pulse modulated at a specific frequency, usually in the kHz range.

A common colour detection algorithm is the Hue, Saturation and Value (HSV) algorithm [50], [51]. The HSV algorithm assigns each colour an H value for hue, an S value for saturation, and a V value that describes the brightness or luminance. The HSV colour representation is shown in Fig. 4.2. The H value describes the colour tone and is expressed as an angle whose reference is the colour red. The equation for calculating H is [51]

H = \arccos\left(\frac{0.5(2R - G - B)}{\sqrt{(R - G)^2 + (R - B)(G - B)}}\right), (4.1)

where R, G, and B represent the normalized red, green and blue colours of a given pixel computed via MATLAB®. The S value represents the distance to the V-axis. The nearer a colour is to the V-axis, the more diluted it is, and the further it is from the V-axis, the more saturated the colour becomes. The equation for calculating S is [50]

S = \frac{\max(R,G,B) - \min(R,G,B)}{\max(R,G,B)}. (4.2)

The V value represents the brightness or intensity of the colour, such that V = max(R, G, B). The colour black would have a V value of zero.

Figure 4.2: HSV colour representation.

Figure 4.3 shows an image of four LED optical beacons, where each LED has a different colour. The image was captured by a 30◦ contact angle microlens and a Sony PS3 CMOS sensor using a Sony PS3 Code Laboratories application that runs on a laptop computer. To differentiate between the LED optical beacons, an HSV algorithm is implemented that uses the representation of Fig. 4.2. Once the algorithm detects a colour, it draws a circle on each of the coloured spots, as shown in Fig. 4.4. The algorithm works by setting a range for the H, S, and V values corresponding to each colour, as shown in Fig. 4.2.
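As an illustration, the following minimal sketch (not the thesis implementation) applies this test to a single normalized RGB pixel; the range used for red is only a placeholder, and the tuned ranges actually used are given next.

% A minimal sketch (not the thesis implementation) of classifying one
% normalized RGB pixel with equations (4.1) and (4.2). The red range used
% here is a placeholder; the tuned ranges are given in the text below.
R = 0.85; G = 0.10; B = 0.12;                   % example normalized pixel values
H = acosd(0.5*(2*R - G - B)/sqrt((R - G)^2 + (R - B)*(G - B)));   % eq. (4.1), deg
if B > G, H = 360 - H; end                      % standard completion of the hue angle for B > G
H = H/360;                                      % normalize to [0, 1], as used for the ranges
S = (max([R G B]) - min([R G B]))/max([R G B]); % eq. (4.2)
V = max([R G B]);                               % brightness
isRed = (H <= 0.05 || H >= 0.95) && (S > 0.3);  % placeholder test (red hue wraps around 0)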
Depending on the ambient light and the particluarwavelength of the LED optical beacons, the HSV values are tuned such thatfor the colour red, H ranges from 0 to 0.005 and S is greater than 0.3, forgreen H ranges from 0.300 to 0.470 and S is greater than 0.3, for blue Hranges from 0.620 to 0.670 and S is greater than 0.3, and for white H rangesfrom 0.078 to 0.470 and S is greater than 0.3.Once the algorithm detects the colour, it draws a circle centered on thethe pixel coordinates of the LED being imaged with a radius equal to 4 pixels.As will be shown later, the pixel coordinates are necessary to compute theAOA.Figure 4.3: An image of four different colour LEDs (red, green, blue andwhite) appearing as red, green, blue and white spots.594.1. Image ReceiverFigure 4.4: Colour discrimination (implemented using an HSV algorithm)detects each coloured spot in Fig. 4.3 and draws a circle around it.Another common colour model is the RGB model illustrated in Fig. 4.5.The RGB model will be used in conjunction with frequency modulation laterin this chapter (see Section 4.4). The RGB model is used instead of the HSValgorithm due to the fact that MATLAB R© reads video frames in the RGBformat. The RGB model is based on the Cartesian coordinate system. Theprimary colours red, green and blue have coordinates (1, 0, 0), (0, 1, 0), and(0, 0, 1) respectively. From Fig. 4.5 the colour yellow would have an RGBvalue of (1, 1, 0).604.1. Image ReceiverFigure 4.5: Colour discrimination implemented using the RGB model.The modulation frequency of a coloured LED optical beacon, for examplered, is found by using the RGB algorithm to detect the pixels in an imagewith the red colour and capturing the amplitude of those pixels for a giventime period. Then the fast Fourier transform (FFT) algorithm is used tofind the spectral frequency of those pixels. The pixel with the correct colour(which is typically at the centre of the image focal spot), and frequencyvalue is chosen for AOA estimation and the rest discarded.The PS3 webcam high frame rate of 187 frames per second (fps), al-lows LED optical beacons to be modulated up to 93.5 Hz. At least 200frames (approximately 1 second) need to be captured for accurate spectralfrequency calculation. This is best illustrated with an example where anLED emitting blue light and modulated with a frequency of 80 Hz is im-aged. The FFT algorithm is applied on the amplitude of the correspondingpixel for 100 frames (0.5 s) and 200 frames (1.07 s). Figure 4.6 shows thespectral analysis for both 100 and 200 frames. The 100 frames FFT analysisshows frequency components at 70 Hz and 80 Hz. The 70 Hz peak is a re-sult of interference of the LED green lead modulated at 70 Hz. The spectral614.2. Angle-Of-Arrival Measurement Error Characterizationanalysis is therefore ambiguous. Using 200 frames on the other hand, resultsin the 80 Hz peak being higher than the peak at 70 Hz and therefore, 200frames represent a sufficient number of frames needed to accurately acquirethe LED modulation frequency.Figure 4.6: FFT analysis performed for 100 and 200 frames on colourblue. The left plot shows interference at 70 Hz, and the right plot shows areduction in interference.4.2 Angle-Of-Arrival Measurement ErrorCharacterizationIn this section, the performance of two different microlenses is investi-gated. The first microlens has a contact angle of α = 30◦ and achieves awide FOV of 95◦, while the second microlens has a contact angle of α = 90◦and achieves an ultrawide FOV of 130◦. 
The Scanning Electron Microscope(SEM) images of the wide- and ultrawide FOV microlenses are shown at thebottom of Figs. 4.7 (a) and (b), respectively [48]. The wide FOV microlenshas a radius of r = 400 µm, while the ultra-wide FOV microlens has a radiusof r = 250 µm.624.2. Angle-Of-Arrival Measurement Error CharacterizationFigure 4.7: Schematic views and SEM images for (a) the image sensor witha wide FOV microlens, having an α = 30◦ contact angle, and (b) the imagesensor with a ultrawide FOV microlens, having an α = 90◦ contact angle.The microlens radius is r . Incident AOAs on the image sensors are definedon the (x ’, y ’, z ’) coordinates of the body frame. The focal spot location onthe CMOS array is defined by its azimuthal angle, φIS, and radial distance,ρIS.LED optical beacon intensities ranging between a low of 0.03 µW/cm2and a high of 0.2 µW/cm2 are used for the AOA characterization. Fig-ure 4.8 shows images for the 0.03 µW/cm2 LED intensity (top image) and634.2. Angle-Of-Arrival Measurement Error Characterizationthe 0.2 µW/cm2 LED intensity (bottom image). The focal spot size for thelow intensity LED is 18 µm, while for the high intensity it is 24 µm. Higheroptical beacon intensities would increase the focal spot size which wouldhinder the AOA accuracy.Figure 4.8: Low intensity LED focal spot size image (top) and high intensityLED focal spot size image.An LED optical beacon’s AOA is defined with the (x ’, y ’, z ’) body framecoordinates of the image sensor, with the CMOS array in the x ’-y ’ plane(see Fig. 4.7). The AOA is defined in terms of the azimuthal angle, φ =arctan(y ′/x ′) and the polar angle, θ = arccos[z ′/(x ′2 + y ′2 + z ′2 ) 12].The AOA is measured from the focal spot location on the CMOS array,which is defined by its azimuthal angle, φIS, from the x ’-axis, and its radialdistance, ρIS, from the microlens centre. An AOA characterization is carriedout by imaging LED optical beacons placed on the ceiling and determiningthe linear transformation from the measured values of φIS and ρIS to thetrue AOA angles φ and θ. The true AOA angles are determined from theLED and image receiver geometry.The image receiver is moved to different locations with respect to theLED optical beacons. The separation height between the image receiver andthe LED optical beacons is decreased to determine the AOA at the extremeFOV.644.2. Angle-Of-Arrival Measurement Error CharacterizationThe azimuthal characterization results are shown in Fig. 4.9. Fig. 4.9(a)shows φ versus φIS for the image sensor with the wide FOV microlens. Fig.4.9(b) shows φ versus φIS for the image sensor with the ultrawide FOVmicrolens. Both image sensors show strong linearity in φ versus φIS, due tonegligible astigmatism in the microlenses. For both image sensors, the lineartransformation from the focal spot location on the image sensor to the trueAOA angle is simply φ ≈ φIS - 180◦, as one would expect for the invertedimage in the focal plane of a planoconvex lens. The mean azimuthal AOAerror in this linear region is ∆φ ≈ 0.5◦ for both image sensors.Figure 4.9: Azimuthal characterization results, showing the AOA angle φas a function of the measured φIS angle, for image sensors with the (a) wideFOV microlens and (b) ultrawide FOV microlens.The polar characterization results are shown in Fig. 4.10. Fig. 4.10(a)shows θ versus ρIS/r for the image sensor with the wide FOV microlens.Fig. 4.10(b) shows θ versus ρIS/r for the image sensor with the ultraw-ide FOV microlens. 
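To make the use of these calibration curves concrete, the following sketch (illustrative pixel values and a hypothetical calibration slope, not the thesis code) shows how a single focal spot would be converted to an AOA estimate.

% A minimal sketch of converting a focal-spot location on the CMOS array
% into an AOA estimate (pixel coordinates and slope are assumed values).
r      = 250e-6;            % ultrawide FOV microlens radius, m
pix    = 6e-6;              % CMOS pixel pitch, m
centre = [240, 320];        % (row, col) of the pixel under the microlens centre (assumed)
spot   = [265, 295];        % (row, col) of the detected focal spot (assumed)
dx = (spot(2) - centre(2))*pix;           % x' offset of the spot on the array
dy = (spot(1) - centre(1))*pix;           % y' offset of the spot on the array
phiIS = atan2d(dy, dx);                   % focal-spot azimuth, deg
rhoIS = hypot(dx, dy);                    % focal-spot radial distance, m
phi   = phiIS - 180;                      % azimuthal AOA (inverted image), deg
slope = 65;                               % hypothetical deg per unit rhoIS/r, standing in for Fig. 4.10(b)
theta = slope*(rhoIS/r);                  % polar AOA within the linear regime, deg

In practice, the slope (and any offset) of the ρIS-to-θ mapping is exactly what the calibration of Fig. 4.10 provides.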
The polar angle, θ represents the true, geometrical θgiven the position of the image receiver and the LEDs. For the desired low-distortion linear transformation from ρIS to θ, imaging is restricted to thelinear regime, seen as solid circles, with the angular FOV being twice themaximum θ value. Imaging beyond the linear regime, seen as hollow circles,654.3. Positioning Analysis Using Dilution of Precisiondistorts the focal spots and increase the AOA errors. Overall, the angularFOVs of the image sensors with the wide- and ultrawide FOV microlensesare 2×47.5◦ = 95◦ and 2×65◦ = 130◦, respectively, and the mean polar AOAerror over the linear regimes is found to be equal to that of the azimuthalangle, ∆φ ≈ ∆θ ≈ 0.5◦.Figure 4.10: Polar characterization results, showing the AOA angle θ as afunction of the measured normalized ρIS/r distance, for the image sensorswith the (a) wide FOV microlens and (b) ultrawide FOV microlens.4.3 Positioning Analysis Using Dilution ofPrecisionA DOP characterization is carried out for the optical wireless testbed.The overhead optical beacon grid has nine LED optical beacons arrayedacross a plane with a height of h and pitch of p = 150 cm. In the (x , y , z )global frame coordinates, the LED optical beacons are at (-p, p, h), (0, p,h), (p, p, h), (-p, 0, h), (0, 0, h), (p, 0, h), (-p, -p, h), (0, -p, h), and (p,-p, h). The DOP is calculated for image sensors positioned across the z =0 plane of the (x , y , z ) global frame coordinates as shown in Fig. 4.11.664.3. Positioning Analysis Using Dilution of PrecisionFigure 4.11: Illustration of LED optical beacon geometry for DOP calcula-tion.The DOP characterization results are shown in Figs. 4.12 and 4.13. TheDOP, in units of cm/deg, is shown as a function of x and y coordinatesin the z = 0 plane. Fig. 4.12 shows DOP for the image sensor with thewide FOV microlens. Given that the grid spacing is fixed to p = 150 cm,and θ = 47.5 ◦ (being the maximum allowable θ for the LED to be withinthe FOV of the wide FOV microlens), the minimum allowable height of theoptical beacon grid above the image receiver, h is defined as h =√8 ptan(θ) .At this height, h = 400 cm, the wide FOV microlens is just able to observeall optical beacons within its FOV. Fig. 4.13 shows DOP for the imagesensor with the ultrawide FOV microlens. The height of the optical beacongrid above the image receiver is now decreased to h = 200 cm, since at thisheight the ultrawide FOV microlens is just able to observe optical beaconswithin its FOV.674.3. Positioning Analysis Using Dilution of PrecisionFigure 4.12: DOP characterization for the wide FOV microlens in (x , y , z= 0) navigational frame.Figure 4.13: DOP characterization for the ultrawide FOV microlens in (x ,y , z = 0) navigational frame.A comparison of Figs. 4.12 and 4.13 illustrates the effects of DOP on684.3. Positioning Analysis Using Dilution of Precisionoptical wireless positioning. The first attribute to note relates to the nu-merical scale of the DOP. The DOP of the image sensor with the ultrawideFOV microlens is approximately two-times lower than that of the imagesensor with the wide FOV microlens. At the centre of the (x , y , z ) globalframe coordinates, the wide FOV microlens yields a maximum DOP of 6.6cm/deg, while the ultrawide FOV microlens yields a minimum DOP of 2.8cm/deg. This leads to a proportional (and desirable) decrease in positionerror for the image sensor with the ultrawide FOV microlens. 
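As a quick check of what these central values imply, the 0.5◦ AOA accuracy established in Section 4.2 can be combined with the σP = σA × DOP relation applied below: σP ≈ 0.5◦ × 6.6 cm/deg = 3.3 cm at the centre for the wide FOV microlens, versus σP ≈ 0.5◦ × 2.8 cm/deg = 1.4 cm for the ultrawide FOV microlens; these are the same central values that appear in Figs. 4.14 and 4.15.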
The secondattribute to note relates to the concavity of the DOP surfaces near thecentre of the (x , y , z ) global frame coordinates. The DOP for the imagesensors with wide- and ultrawide FOV microlenses are concave-down and(slightly) concave-up, respectively. This dissimilarity is due to the FOVsand the established lines of position. For the wide FOV microlens in Fig.4.12 the lines of position are largely parallel in the centre. Translations tothe perimeter lead to increasingly orthogonal lines of position and reducedDOP, as the directions from the image receiver to the optical beacons beingapproached and receded from become increasingly orthogonal. In contrast,for the ultrawide FOV microlens in Fig. 4.13 the lines of position are largelyorthogonal in the centre. Translations toward the perimeter lead to increas-ingly parallel lines of position and increased DOP, albeit to a small extent.In this case, the directions to the optical beacons being approached remainlargely orthogonal throughout the translation.The above AOA and DOP characterizations can be merged to quantifythe overall positioning accuracy for the image sensors with the wide- andultrawide FOV microlenses. This is done according to equation 3.3, bycalculating the position error, σP, as the product of the measurement AOAstandard deviation, σA, and DOP. In this investigation, systematic AOAerrors are zero as demonstrated in Section 4.2 as long as the LED lies withinthe microlens maximum FOV. In this case the AOA error is equivalent tothe AOA standard deviation, and therefore, σA = 0.5◦.The resulting position error, σP, for the same optical beacon grids thatproduced the DOP characterization of Figs. 4.12 and 4.13, is shown inFigs. 4.14 and 4.15 as a function of x and y global frame coordinates inthe z = 0 plane. Fig. 4.14 shows the position error for the image sensorwith the wide FOV microlens. Fig. 4.15 shows the position error for theimage sensor with the ultrawide FOV microlens. The two position errordistributions mimic those of the DOP distributions, as expected, and thebroad view of the ultrawide FOV microlens leads to reduced position errors.The ultrawide FOV microlens is better able to image distant optical bea-cons leading to enhanced localization along its lines of position and reduced694.3. Positioning Analysis Using Dilution of PrecisionDOP. For this distribution of optical beacons, the ultrawide- FOV microlensyields a position error of approximately 1.4 cm at the centre, compared tothe corresponding position error of approximately 3.3 cm for the wide FOVmicrolens.Figure 4.14: Positioning accuracy for the wide FOV microlens in (x , y , z =0) navigational frame.704.4. Positioning PerformanceFigure 4.15: Positioning accuracy for the ultrawide FOV microlens in (x , y ,z = 0) navigational frame.4.4 Positioning PerformanceIn this section, the empirical positioning accuracy of the wide FOV (30◦contact angle microlens) and ultra-wide FOV (90◦ contact angle microlens)sensors are quantified. Four LED optical beacons are mounted on the ceilingat a height of h = 100 cm and a pitch of p = 17.5 cm at (x , y , z ) navigationalframe coordinates of (-p, 2p, h), (p, 2p, h), (-p, -2p, h) and (p, -2p, h) asshown in Fig. 4.16.714.4. Positioning PerformanceFigure 4.16: Top view drawing of LED optical beacon/ image receiver ge-ometry for position estimation.Each of the four LED optical beacons uses an RGB LED. 
The LED optical beacons emit white light by frequency modulating the red, green and blue LED leads such that LED1 has RDCGf1Bf1, LED2 has RDCGf1Bf2, LED3 has RDCGf2Bf2, and LED4 has RDCGf2Bf1, where DC refers to f = 0, f1 is 70 Hz, and f2 is 80 Hz.

The image receiver is rastered across the z = 0 plane at 9 different locations with (x, y, z) navigational frame coordinates of (0, 0, 0), (-v, 0, 0), (v, 0, 0), (0, -v, 0), (-v, -v, 0), (v, -v, 0), (0, v, 0), (-v, v, 0), and (v, v, 0), where v = 28 cm, as shown in Fig. 4.16.

At each of the nine test locations, an image of the ceiling is captured using the wide FOV microlens and the ultra-wide FOV microlens. In one such image, shown in Fig. 4.17, the focal spot radial distance, ρIS, is measured as the Euclidean pixel distance between the image pixel and the microlens pixel centre on the image sensor. Fig. 4.17 illustrates the radial distance, ρIS,1, for LED1. The respective LED optical beacons are identified in the image by observing the frequency components of the red, green and blue layers using the RGB model shown in Fig. 4.5. Fig. 4.18 shows the normalized frequency spectrum of the red, green and blue channels for LED1, LED2, LED3, and LED4.

Figure 4.17: An RGB image showing the measurement of ρIS,1, the radial pixel distance from the microlens centre to the LED1 focal spot.

Figure 4.18: Frequency components for the red, green and blue layers of LED1, LED2, LED3, and LED4.

Once ρIS,1 is measured for LED1, the value of θ1 is calculated for the wide and ultrawide FOV microlenses using Fig. 4.10(a) and (b), respectively. The azimuthal angle, φ1, is calculated from Fig. 4.9(a) and (b) for the wide and ultrawide FOV microlenses, respectively. The above procedure is repeated for LED2, LED3, and LED4 to compute (θ2, φ2), (θ3, φ3), and (θ4, φ4). For all 36 AOA measurements, the mean φ error, ∆φ, is equal to the mean θ error, ∆θ ≈ 0.5◦.

A Least Squares algorithm is then implemented to estimate the 2-D and 3-D position of the image receiver using the wide and ultra-wide FOV microlenses. The mean positioning error is shown in Table 4.1. For both microlenses the mean 2-D positioning error was on the order of 1 cm, while the 3-D positioning error was 2.5 cm. This result is expected since, for the given LED and image sensor geometry, the 3-D DOP is approximately 2.5 times the 2-D DOP.

Table 4.1: 2-D and 3-D positioning error results for the wide- and ultra-wide FOV microlenses.
Parameter               2-D    3-D
Positioning error (cm)  1      2.5

In conclusion, the image receiver has demonstrated 3-D position accuracies on the order of a few centimeters, as shown in the empirical results in this section. This is brought about by operating within the image sensors' FOV, having AOA errors of ≈ 0.5◦.

Although both microlenses have the same accuracy here, the ultrawide FOV microlens has a significant advantage over the wide FOV microlens. It can image more LED optical beacons due to its wider FOV. A greater number of LED optical beacons provides LED redundancy and better DOP, since the LED optical beacons will be more widely spaced, and positioning accuracy is therefore improved.

Assuming the LEDs are placed at the corners of a square grid, the grid side length should be designed such that the LEDs lie within the image receiver FOV. For a separation height h = 2 m, the distance between the image receiver and an LED should be d = h tan(θ). For the wide FOV, d = 2 tan(47.5◦) = 2.1 m.
For an LED placed at equal x and y componentsfrom the image receiver, x = y =√(2.12/2) = 1.5 m. Similarly, for theultra-wide FOV, d = 2 tan(65◦) = 4.3 m. For an LED placed at equal xand y components from the image receiver, x = y =√(4.32/2) = 3.0 m.Figure 4.19 shows a top-view of the image receiver (at the origin) and LEDsplaced on a square grid. Note that the ultrawide FOV microlens can imagesquare grids with a side length equal to 6 m compared to 3 m for the wideFOV microlens.754.4. Positioning PerformanceFigure 4.19: Maximum square grid side-length capability for wide-and ul-trawide FOV microlens.Another factor that determines the accuracy of the positioning system isthe incident intensity. It is advisable that the received LED optical intensitybe between 0.03 µW/cm2 and 0.2 µW/cm2. Intensities below 0.03 µW/cm2are too faint, and, therefore, can not be imaged accurately, while intensitiesabove 0.2 µW/cm2 result in a large focal spot that is saturated at a largersize. Intensities outside of this range will increase the AOA errors above0.5◦.76Chapter 5Receivers’ Performance whilein MotionIn this chapter, the accuracies of the AOA estimates and the resultingposition estimates are investigated for the angular receiver structure andthe image receiver in motion. Motion is achieved by mounting each receiveron a robot platform. Section 5.1 presents the performance of the angularreceiver structure, while Section 5.2 presents the performance of the imagereceiver.5.1 Angular Receiver PositioningFor this analysis, the iRobot Create platform [52] shown in Figure 5.1is the robot used to create motion. This particular platform is chosen dueto its popularity as a research platform and the significant cargo bay spaceit offers to mount sensors on the robot. The robot can be controlled bya microcontroller that is programmed using the C language to transmitcommands to the robot to move and turn in any direction.775.1. Angular Receiver PositioningFigure 5.1: iRobot Create platform.In this work, an Arduino nano (ATmega 328) [53] is used to controlthe iRobot Create. The Arduino nano is shown in the bottom right handcorner in Fig. 5.2. The Arduino nano is used due to its compact size. Thetransmit and receive pins on the Arduino nano are connected to the receive(pin 1) and transmit pin (pin 2) respectively on the iRobot Create DB-25connector [53].As shown in Fig. 5.2, the angular receiver structure is mounted on theiRobot Create along with the amplifying circuit of Section 3.1.1 and a Lab-VIEW NI data acquisition device. The three output voltages correspondingto photocurrents i1, i2, and i3 of PD1, PD2, and PD3, are each connected tothe amplifying circuit and then to the LabVIEW data acquisition unit. Thedata acquisition unit wirelessly transmits the PD voltages over three sepa-rate channels via the NI LabVIEW wireless network to a Laptop computer,where the voltage values are used to compute the AOA estimates.785.1. Angular Receiver PositioningFigure 5.2: Angular receiver mounted on iRobot Create.The iRobot Create powers both the Arduino nano and the LabVIEWDAQ. The iRobot Create 5 volt output (pin 8) is connected to the Arduinonano Vin pin, while the iRobot Create 14 volt output (pin 10) is connectedto the LabVIEW data acquisition device. 
The amplifying circuit op-ampsare powered by two 9 volt batteries.Four high-power (3 watt) LED optical beacons emitting warm whitelight [54] are used as the optical transmitters (warm white light has a yel-lowish colour light compared to cool white light which has white/blueishcolour). Each LED optical beacon is connected to an amplifier of typeTexas Instruments OPA552 [55] that is able to provide a power of ≈ 3 wattto each of the LED optical beacons. The high power LED optical beaconsare needed to operate the PDs at an intensity greater than the 0.2 µW/cm2minimum intensity threshold described in Section 3.1.2. Each of the fourLED optical beacons are frequency modulated at distinct frequencies, suchthat the LED1 optical beacon has a frequency of 2.0 kHz, the LED2 opticalbeacon has a frequency of 2.3 kHz, the LED3 optical beacon has a frequencyof 2.6 kHz, and the LED4 optical beacon has a frequency of 2.9 kHz.In order to recover the spectral components for each of the three PDchannels with a FFT, LabVIEW is programmed to read 2048 voltage samplesfor each of the three channels at a rate of 10 kHz or 40 kHz. As a result,795.1. Angular Receiver Positioningthe 10 kHz sampling rate captures a power reading at the appropriate LEDmodulation frequency every 0.2 s i.e., 5 Hz, while the 40 kHz sampling ratecaptures such a reading every 0.05 s, i.e., 20 Hz.Figure 5.3 shows a top-view of the LED/receiver setup. The four LEDoptical beacons are positioned in an (x , y , z ) navigational frame coordinatesystem such that LED1 is positioned at (-35 cm, 30 cm, 100 cm), LED2at (-35 cm, 65 cm, 100 cm), LED3 at (35 cm, 30 cm, 100 cm), and LED4at (35 cm, 65 cm, 100 cm) as shown in Fig. 5.3. The z axis representsthe vertical separation between the LED optical beacons and the angularreceiver. The LED optical beacons are at z = 100 cm, while the angularreceiver lies in the xy plane.805.1. Angular Receiver PositioningFigure 5.3: Illustration of LED optical beacon and receiver geometry as wellas the trajectory of the robot from its start point to its end point. Thepoint X represents the start point of the robot carrying the receiver, andthe point Y represents the end point of the robot trajectory.The angular receiver is oriented as shown in Fig. 3.20. The iRobot Cre-ate, hosting the angular receiver, is programmed to move along a straightline trajectory. The robot trajectory starts at the navigational frame coor-dinate (0, 0, 0) and moves along the line x = 0 for a distance of 1.0 m, asshown in Fig. 5.3.In order to determine the exact timestamp of the angular receiver datacaptured using LabVIEW and the location of the robot controlled by theArduino, the timestamps for LabVIEW and Arduino are synchronized byconnecting one of the output pins of the Arduino to one of the inputs of the815.1. Angular Receiver Positioningwireless DAQ. The Arduino nano is programmed such that the instant theiRobot Create starts moving the pin is set to high and when it stops it isset to low. The timestamps during the time period in which this pin is highcorresponds to the angular receiver data collected during the robot motionalong the line x = 0 for a distance of 1.0 m.The following sections describe the angular receiver AOA and position-ing accuracies for varying robot speed and varying LabVIEW sampling fre-quency. 
Section 5.1.1 presents the results for a robot speed of 10 cm/s.Section 5.1.2 presents the results for a robot speed of 50 cm/s, while Sec-tion 5.1.3 presents the results for when the robot moves at an average walkingspeed of 139 cm/s.5.1.1 Low Speed 10 cm/sAs the robot moves along its path the voltage values corresponding toeach LED optical beacon intensity on the three PDs are recorded usingLabVIEW. For example, at a given instant LabVIEW registers three voltagevalues for LED optical beacon 1 (operating at frequency 2.0 kHz) for PD1,PD2, and PD3. Similarly, the voltage values for LED optical beacons 2, 3,and 4 operating at 2.3 kHz, 2.6 kHz, and 2.9 kHz respectively are recorded.The twelve voltage values are converted into photocurrent values by dividingthe voltage values by the amplifying circuit impedance (10 MΩ) shown inFig. 3.6. The photocurrent values corresponding to a particular LED opticalbeacon are used in equations 3.1 and 3.2 to solve for the AOA angles.LabVIEW is programmed to read 2048 samples at a sampling frequencyof 10 kHz, which results in an AOA measurement approximately every 0.2 s.Consequently, the AOA measurement update rate is 5 Hz. For a robot mov-ing a distance of 100 cm at a speed of 10 cm/s, this results in approximately49 AOA values. Figure 5.4 illustrates the process of measuring the AOA(θ1, φ1) from LED1 with frequency f1 for the third instance along the robottrajectory. The photocurrents i1, i2, and i3 are generated as a result of sam-pling 2048 voltage values at a sampling frequency of 10 kHz, between thesecond and third time instances. The above process is repeated for LED2at f2, LED3 at f3, and LED4 at f4.825.1. Angular Receiver PositioningFigure 5.4: Illustration of the process of measuring the AOA (θ1, φ1) fromLED1.Figure 5.5 shows the photocurrents i1, i2 and i3 corresponding to LEDoptical beacons 1, 2, 3 and 4 as the angular receiver is moving along its100 cm path. Forty nine photocurrents are generated.835.1. Angular Receiver PositioningFigure 5.5: The photocurrents used to generate the AOA angles are shownversus iRobot Create distance traveled for LED1, LED2, LED3, and LED4.Photocurrents i1, i2, and i3 are shown as the red (solid), black (dotted) andblue (dashed) curves respectively.Note the trend for photocurrents i1 and i2. As the robot moves alongits 1 m track, the angular receiver moves away from LED optical beacons 1and 3, and therefore, i1 (red-solid) and i2 (black-dotted) decrease as shownfor LED optical beacons 1 and 3. As the robot moves along its trajectory,the angular receiver first approaches LED optical beacons 2 and 4, resultingin the photocurrents i1 (red-solid) and i2 (black-dotted) increasing, andthen recedes, causing photocurrents i1 (red-solid) and i2 (black-dotted) todecrease.The trend for photocurrent i3 is as follows. An increase in i3 occursfor LED optical beacons 2 and 4 as the robot moves towards LED opticalbeacons 2 and 4. However, for LED optical beacons 1 and 3, i3 increasesuntil it reaches a maximum at around 60 cm. At this location more light isincident on PD3 compared to the start position of the robot. Beyond 60 cmthe i3 decreases as the angular receiver moves further away from the LEDoptical beacons 1 and 3.The smoothness of the photocurrent i1, i2, and i3 curves is a direct resultof the amplifier/filter design capability to suppress random noise effectively.845.1. Angular Receiver PositioningKnowing the rotation of the angular receiver x ’y ’z ’ body frame withrespect to the xyz navigational frame as shown in Fig. 
3.20, the 49 AOAmeasurements (with respect to the angular receiver body frame) are ex-pressed in terms of an azimuthal angle, φ, measured with respect to the xdirection, and a polar angle, θ, measured with respect to the z direction.Figure 5.6 shows the 49 AOA values (expressed as θ and φ) for LED1optical beacon as the robot moves along the line x = 0. The actual AOA,computed based on the robot to LED1 optical beacon geometry, is shown foraccuracy comparison. Similarly, Figs. 5.7, 5.8, and 5.9 show the measuredAOA values for LED2, LED3, and LED4 optical beacons respectively.Figure 5.6: AOA (θ1, φ1) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s at a 5 Hz AOA measurement rate.855.1. Angular Receiver PositioningFigure 5.7: AOA (θ2, φ2) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s at a 5 Hz AOA measurement rate.865.1. Angular Receiver PositioningFigure 5.8: AOA (θ3, φ3) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s at a 5 Hz AOA measurement rate.875.1. Angular Receiver PositioningFigure 5.9: AOA (θ4, φ4) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s at a 5 Hz AOA measurement rate.Note that in Figs. 5.6 - 5.9 φ changes by ≈ 100◦ as the robot travelsalong its 1.0 m trajectory, compared to approximately only 20◦ for θ. Themean AOA is defined as the absolute difference between the measured AOA(data) and the AOA calculated based on the respective position of the robotto the LED optical beacons (actual). Table 5.1 shows the mean θ error forthe 49 samples collected along the robot trajectory for θ1, θ2, θ3, and θ4.The overall mean θ error is 2.1◦. Table 5.2 shows the mean φ error for the49 samples collected along the robot trajectory for φ1, φ2, φ3, and φ4. Theoverall mean φ error is 2.4◦.Table 5.1: Mean θ error for 10 cm/s at 5 Hz AOA measurement rate.Parameter θ1 θ2 θ3 θ4mean (◦) 2.4 3.0 1.9 0.9Table 5.2: Mean φ error for 10 cm/s at 5 Hz AOA measurement rate.Parameter φ1 φ2 φ3 φ4mean (◦) 2.7 1.4 3.0 2.6885.1. Angular Receiver PositioningEven though the angular receiver is not operating strictly within its coneof acceptance, the AOA errors are bounded, and the mean AOA error is notgreater than 3◦. Using the AOA angles (θ1, φ1), (θ2, φ2), (θ3, φ3), and (θ4,φ4) a Least Squares algorithm is implemented to determine the 2-D and 3-Dposition estimates of the angular receiver. Figure 5.10 shows the 2-D and3-D positioning error along the trajectory of the robot.Figure 5.10: The 2-D and 3-D positioning error for an angular receiver speedof 10 cm/s and a 5 Hz AOA measurement rate.Table 5.3 shows the statistics of the 2-D and 3-D positioning error shownin Fig. 5.10. The mean 2-D positioning error is 2.6 cm, compared to 11.4 cmfor the 3-D case. The 3-D case also has higher position error standarddeviation.Table 5.3: The 2-D and 3-D error statistics for 10 cm/s at 5 Hz AOAmeasurement rate.Parameter 2-D 3-Dmean (cm) 2.6 11.4standard deviation (cm) 0.8 4.0minimum (cm) 0.6 3.0maximum (cm) 3.8 17.3895.1. Angular Receiver PositioningIn order to verify the shape of the 3-D position error plot, two inves-tigations are performed. The first is an investigation of the AOA error, inparticular the θ error. 
This is because the 2-D position estimate uses onlyequation 3.17 for Least Squares estimation of the (x, y) angular receiver co-ordinates. Equation 3.17 is only a function of φ. The resulting Least squaresposition estimate and the true position (from geometry) is used to definethe 2-D position error. The 2-D position error in Fig. 5.10 is the Euclideandistance between the estimate and the true position. The 3-D position onthe other hand, uses both equation 3.17 and 3.18 to estimate 3-D position.Since the 2-D position errors (which are solely a function of φ) are fairlyconstant, and none of the AOA errors are dominating (i.e., magnitudes ofthe of the overall mean θ errors are approximately equal to the overall meanφ errors), the θ errors will therefore be investigated below to verify the 3-Dposition error plot in Fig. 5.10.Figures 5.11 and 5.12 show the θ1 and θ2 errors respectively, as theabsolute difference between the measured θ and the actual θ. Note that theθ3 error distribution has the same error pattern as θ1, while the θ4 errordistribution has the same error pattern as θ2, and are therefore not shown.Figure 5.11: Polar angle θ1 error versus distance traveled.905.1. Angular Receiver PositioningFigure 5.12: Polar angle θ2 error versus distance traveled.Note From Fig. 5.11 that the θ1 error increases as the angular receivertravels from 0 cm to 45 cm, decreases as the angular receiver travels from45 cm to 78 cm and increases again as the angular receiver travels from78 cm to 100 cm. The AOA error increase or decrease can be explainedby referring back to the θ error distribution shown in Fig. 3.12. When theangular receiver travels from 0 cm to 45 cm along the trajectory, the AOAchanges from (θ = 65◦, φ = 65◦) to (θ = 45◦, φ = 69◦). This results ina θ1 error increase of 3.5◦. When one observes the same change in AOAin Fig. 3.12, the theta error increases by 2.8◦. When the angular receivertravels from 45 cm to 78 cm along the trajectory, the AOA changes from (θ= 45◦, φ = 69◦) to (θ = 31◦, φ = 80◦). This results in a θ1 error decreaseof 5.0◦. When one observes the same change in AOA in Fig. 3.12, the thetaerror increases by 4.7◦. When the angular receiver travels from 78 cm to100 cm along the trajectory, the AOA changes from (θ = 31◦, φ = 80◦) to(θ = 23◦, φ = 87◦). This results in a θ1 error increase of 2.5◦. When oneobserves the same change in AOA in Fig. 3.12, the θ error increases by 1.4◦.For the results shown in Fig. 5.12, the θ2 error increases as the angularreceiver travels from 45 cm to 78 cm and then decreases as the angularreceiver travels from 78 cm to 100 cm. Similar to the Fig. 5.11 argument,the AOA error increase or decrease can be explained by referring back to theθ error distribution shown in Fig. 3.12. When the angular receiver travels915.1. Angular Receiver Positioningfrom 45 cm to 78 cm along the trajectory, the AOA changes from (θ = 60◦,φ = 64◦) to (θ = 45◦, φ = 70◦). This results in a θ2 error increase of 3.0◦.When one observes the same change in AOA in Fig. 3.12, the theta errorincreases by 5.1◦. When the angular receiver travels from 78 cm to 100 cmalong the trajectory, the AOA changes from (θ = 45◦, φ = 70◦) to (θ = 35◦,φ = 74◦). This results in a θ2 error decrease of 3.0◦. When one observes thesame change in AOA in Fig. 3.12, the theta error decreases by 2.5◦.The θ error results in Figs. 5.11 and 5.12 can explain the general errortrend for the 3-D position error in Fig. 5.10. From Fig. 
5.10 one observes a3-D position error increase from 0 cm to 40 cm along the trajectory. This isdue to an increase in θ1 and θ2 errors, as seen in Figs. 5.11 and 5.12, alongthe same portion of the trajectory. The 3-D position error plateaus between40 cm and 50 cm in Fig. 5.10 as a result of an equal decrease in θ1 error andan equal increase in θ2 error over the same region. Finally the 3-D positionerror decreases slowly between 50 cm and 80 cm in Fig. 5.10 as a result ofa sharp decrease in θ1 error compared to the increase in θ2 error. The 3-Dposition error decreases sharply between 80 cm and 90 cm in Fig. 5.10 as aresult of a sharp decrease in θ2 error compared to the increase in θ1 error.The second investigation aims to validate the theoretical 3-D DOP inequation 3.13 using the empirical 3-D DOP in equation 3.3 defined as theratio of the position standard deviations from the data of Fig. 5.10 to theAOA standard deviations from the data in Figs. 5.6-5.9. The empirical 3-DDOP is compared to the theoretical 3-D DOP as shown below.The empirical 3-D DOP is calculated as follows. At distance 0 cm, theangular receiver measures 4 pairs of AOA (θ1, φ1), (θ2, φ2), (θ3, φ3), and(θ4, φ4) from LEDs 1, 2, 3 and 4 respectively. The AOA error is calculatedas the difference between the measured AOA and the true AOA (calculatedbased on geometry). The standard deviation of those eight AOA errors iscalculated and is denoted by σm. The 3-D position estimate at distance0 cm is computed using the measured AOAs in a Least Squares algorithmand results in an x̂, ŷ, and ẑ position estimate. The x error is calculated asthe difference between the x̂ estimate and the actual xR coordinate of theangular receiver. The y and z errors are computed in a similar fashion. Thestandard deviation of the x error, y error, and z error is calculated and isdenoted be σp. The ratio of σp/σm is the empirical DOP value at distance0 cm. At the next point in the trajectory, empirical data is collected, andthe process is repeated using the AOAs and position error at this particulardistance. Fig. 5.13 shows a plot of the empirical DOP and the theoreticalDOP along the entire trajectory.925.1. Angular Receiver PositioningFigure 5.13: Theoretical 3-D DOP versus empirical DOP calculated alongthe angular receiver trajectory.Notice from Fig. 5.13 that the DOP increases at 30 cm and then de-creases at 60 cm. This mirrors the behaviour of the 3-D position error inFig. 5.10 over the same region of the trajectory. Therefore, this two-prongedinvestigation into the shape of the 3-D position error plot of Fig. 5.10 demon-strates that an increase in positioning error can be explained by an increasein the AOA error measurements, as well as an increase in PDOP. Figure 5.13also verifies that the empirical DOP can be very well approximated by equa-tion 3.13.Figures 5.14, 5.15, 5.16, and 5.17 show the φ1, φ2, φ3, and φ4 errorsrespectively, as the absolute difference between the measured φ and theactual φ versus distance traveled. Using a similar argument to the 3-D case,the φ error results in Figs. 5.14, 5.15, 5.16, and 5.17 can explain the generalerror trend for the 2-D position error in Fig. 5.10.935.1. Angular Receiver PositioningFigure 5.14: Azimuthal angle φ1 error versus distance traveled.Figure 5.15: Azimuthal angle φ2 error versus distance traveled.945.1. Angular Receiver PositioningFigure 5.16: Azimuthal angle φ3 error versus distance traveled.Figure 5.17: Azimuthal angle φ4 error versus distance traveled.Note from Fig. 
5.10 that the 2-D position error starts to decrease at955.1. Angular Receiver Positioning40 cm. This can be attributed to the φ error patterns where at 40 cm theφ1 error in Fig. 5.14, the φ3 error in Fig. 5.16 and the φ4 error in Fig. 5.17all decrease sharply. At 60 cm, the φ2 error in Fig. 5.15 also drops sharply,resulting in minimum 2-D position error at 60 cm. At 60 cm the φ3 error isincreasing, with φ1 and φ4 errors increasing at 70 cm. As a result, the 2-Dposition error is increasing from 60 cm to 80 cm.In order to determine the effect of AOA measurement rate on positioningperformance while the angular receiver is in motion, the above experiment inwhich the iRobot Create again moves 1 m at a speed of 10 cm/s is repeatedbut with an AOA measurement rate of 20 Hz. This results in approximately200 AOA samples recorded during the robot motion compared to 49 sampleswhen the AOA measurement rate is 5 Hz.Figure 5.18 shows the AOA measured with respect to the x -axis for LED1as the robot moves along the line x = 0. The actual AOA, based on therobot to LED1 optical beacon geometry, is shown for accuracy comparison.Similarly, Figs. 5.19, 5.20, and 5.21 show the measured AOA values forLED2, LED3, and LED4 optical beacons, respectively.Figure 5.18: AOA (θ1, φ1) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s at a 20 Hz AOA measurement rate.965.1. Angular Receiver PositioningFigure 5.19: AOA (θ2, φ2) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s at a 20 Hz AOA measurement rate.975.1. Angular Receiver PositioningFigure 5.20: AOA (θ3, φ3) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s at a 20 Hz AOA measurement rate.985.1. Angular Receiver PositioningFigure 5.21: AOA (θ4, φ4) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s at a 20 Hz AOA measurement rate.Table 5.4 shows the mean θ error for the 200 samples collected alongthe robot trajectory for θ1, θ2, θ3, and θ4. The mean θ error is 2.4◦. Ta-ble 5.5 shows the mean φ error for the 200 samples collected along the robottrajectory for φ1, φ2, φ3, and φ4. The mean φ error is 1.9◦.Table 5.4: Mean θ error for 10 cm/s at 20 Hz measurement update.Parameter θ1 θ2 θ3 θ4mean (◦) 2.5 3.0 1.9 2.0Table 5.5: Mean φ error for 10 cm/s at 20 Hz measurement update.Parameter φ1 φ2 φ3 φ4mean (◦) 1.9 2.0 1.8 2.0Figure 5.22 shows the 2-D and 3-D positioning error plots for the angularreceiver at a speed of 10 cm/s and a 20 Hz AOA measurement rate.995.1. Angular Receiver PositioningFigure 5.22: The 2-D and 3-D positioning error for an angular receiver speedof 10 cm/s and a 20 Hz AOA measurement rate.Table 5.6 shows the position statistics for the 2-D and 3-D position errors.Table 5.6: The 2-D and 3-D error statistics for 10 cm/s at 20 Hz AOAmeasurement rate.Parameter 2-D 3-Dmean (cm) 3.2 9.9standard deviation (cm) 1.2 4.0minimum (cm) 0.3 2.7maximum (cm) 6.3 17.2From the above analysis, the AOA and positioning accuracies of the 5 Hzand 20 Hz measurements are similar, with no major improvements when thesampling rate, and hence AOA measurement rate, increases. This is validfor linear trajectories such as the one described in this analysis. 
However, ina more complex non-linear trajectory, a higher measurement rate will givea truer estimate of the trajectory.5.1.2 Medium Speed 50 cm/sThis section determines the AOA and positioning accuracies while theangular receiver moves at a speed of 50 cm/s for 5 Hz and 20 Hz AOA1005.1. Angular Receiver Positioningmeasurement rates. Results for the 5 Hz AOA measurement rate are shownfirst. As the angular receiver moves at a speed of 50 cm/s over 100 cm, 10AOA samples will be recorded. Figure 5.23 shows photocurrents i1, i2 andi3 corresponding to LED optical beacons 1, 2, 3 and 4 as the angular receivermoves along its 100 cm path. Note that the shapes of the photocurrents inFig. 5.23 follow the same patterns as the photocurrents in Fig. 5.5 for the10 cm/s speed, which asserts the performance of the amplifying/filter circuitto combat noise.Figure 5.23: The photocurrents used to generate the AOA angles are shownversus iRobot Create distance traveled for LED1, LED2, LED3, and LED4.Photocurrent i1, i2, and i3 are shown as the red (solid), black (dotted) andblue (dashed) curves respectively.Figure 5.24 shows the 10 AOA values (expressed as θ and φ) for LED1optical beacon as the robot moves along the line x = 0. The actual AOA,computed based on the robot to LED1 optical beacon geometry, is shown foraccuracy comparison. Similarly, Figs. 5.25, 5.26, and 5.27 show the measuredAOA values for LED2, LED3, and LED4 optical beacons respectively.1015.1. Angular Receiver PositioningFigure 5.24: AOA (θ1, φ1) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 50 cm/s at a 5 Hz AOA measurement rate.1025.1. Angular Receiver PositioningFigure 5.25: AOA (θ2, φ2) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 50 cm/s at a 5 Hz AOA measurement rate.1035.1. Angular Receiver PositioningFigure 5.26: AOA (θ3, φ3) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 50 cm/s at a 5 Hz AOA measurement rate.1045.1. Angular Receiver PositioningFigure 5.27: AOA (θ4, φ4) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 50 cm/s at a 5 Hz AOA measurement rate.Table 5.7 shows the mean θ error for the 10 samples collected along therobot trajectory for θ1, θ2, θ3, and θ4. The overall mean θ error is 3.2◦.Table 5.8 shows the mean φ error for the 10 samples collected along therobot trajectory for φ1, φ2, φ3, and φ4. The overall mean φ error is 13.2◦.Table 5.7: Mean θ error for 50 cm/s at 5 Hz AOA measurement rate.Parameter θ1 θ2 θ3 θ4mean (◦) 2.8 5.1 2.3 3.7Table 5.8: Mean φ error for 50 cm/s at 5 Hz AOA measurement rate.Parameter φ1 φ2 φ3 φ4mean (◦) 10.7 14.0 13.0 15.4Note that the AOA error for the 50 cm/s-5 Hz case has significantlyincreased compared to the 10 cm/s-5 Hz case in Section 5.1.1. Although theθ error changed very little, the φ error increased by a factor of five. Thiscan be attributed to two factors: the first is the robot speed, which affects1055.1. 
Angular Receiver Positioningthe rate of change of the AOA measurements, and the second factor is theLED/receiver geometry.The robot speed has increased from 10 cm/s in Section 5.1.1 to 50 cm/sin this experiment, resulting in the robot traveling the 100 cm path in amuch shorter time (2 s) and, therefore, the distance the robot moves whilecollecting the 2048 voltage samples needed to produce one AOA measure-ment, ∆D, is significantly greater. This distance is defined as ∆D = speed(cm/s)/(sampling frequency-1 (Hz)). For the case of 50 cm/s-5 Hz, ∆D =12.5 cm, while for 10 cm/s-5 Hz, ∆D = 2.5 cm. The larger the ∆D gets,the larger AOA errors become since the AOA changes significantly from thefirst voltage sample to the 2048th voltage sample needed to compute oneAOA measurement. The second factor relates to the LED/receiver geome-try. Since the change in θ in Figs. 5.24, 5.25, 5.26, and 5.27 along the entiretrajectory is only 14◦ compared to 100◦ for φ, the φ angles vary more sothan θ angles during the collection of the 2048 samples. Therefore, the trueφ angle for the first sample is significantly different than the true φ for the2048th sample resulting in a more erroneous φ for all 2048 samples.Figure 5.28 shows the 2-D and 3-D position error as the robot moves1 m at a speed of 50 cm/s with a 5 Hz AOA measurement rate. Note thatthe 2-D positioning error increases with increased distance moved. This isbecause the 2-D position error depends on φ, which increases with distancetraveled, as shown in the right hand plots of Figs. 5.24, 5.25, 5.26, and 5.27.Note also that the φ error (responsible for the 2-D position accuracy) hasan overall mean error of 13.2◦ that is much larger (four times) than the θerror, with overall mean error of 3.2◦, as shown in Tables 5.7 and 5.8. Thiswill result in the 3-D positioning error being dominated by φ error and,therefore, the 2-D and 3-D position plots will have similar error patterns.1065.1. Angular Receiver PositioningFigure 5.28: The 2-D and 3-D positioning error for an angular receiver speedof 50 cm/s and a 5 Hz AOA measurement rate.Table 5.9 shows the position statistics for the 2-D and 3-D position error.Table 5.9: The 2-D and 3-D error statistics for 50 cm/s at 5 Hz AOAmeasurement rate.Parameter 2-D 3-Dmean (cm) 12.7 18.2standard deviation (cm) 1.8 2.6minimum (cm) 3.7 10.1maximum (cm) 17.8 23.3Now the performance of the angular receiver is investigated for the samespeed but with a 20 Hz measurement update rate. Figure 5.29 shows theAOA measured with respect to the x -axis for LED1 as the robot moves alongthe line x = 0 at a speed of 50 cm/s with a 20 Hz measurement update rate.Forty AOA measurements are collected along the 1.0 m trajectory. The ac-tual AOA, based on the robot to LED1 optical beacon geometry, is shownfor accuracy comparison. Similarly, Figs. 5.30, 5.31, and 5.32 show the mea-sured AOA values for LED2, LED3, and LED4 optical beacons respectively.1075.1. Angular Receiver PositioningFigure 5.29: AOA (θ1, φ1) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 50 cm/s at a 20 Hz AOA measurement rate.1085.1. Angular Receiver PositioningFigure 5.30: AOA (θ2, φ2) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 50 cm/s at a 20 Hz AOA measurement rate.1095.1. 
Angular Receiver PositioningFigure 5.31: AOA (θ3, φ3) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 50 cm/s at a 20 Hz AOA measurement rate.1105.1. Angular Receiver PositioningFigure 5.32: AOA (θ4, φ4) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 50 cm/s at a 20 Hz AOA measurement rate.Table 5.10 shows the mean error for θ1, θ2, θ3, and θ4 for the 40 AOAsmeasured along the robot trajectory. The overall mean θ error is 2.3◦.Table 5.11 shows the mean error for φ1, φ2, φ3, and φ4 for the 40 AOAsmeasured along the robot trajectory. The overall mean φ error is 3.9◦.Table 5.10: Mean θ error for 50 cm/s at 20 Hz AOA measurement rate.Parameter θ1 θ2 θ3 θ4mean (◦) 2.6 3.3 1.8 1.6Table 5.11: Mean φ error for 50 cm/s at 20 Hz AOA measurement rate.Parameter φ1 φ2 φ3 φ4mean (◦) 3.7 5.2 4.3 2.2The Fig. 5.33 shows the 2-D and 3-D position error as the robot moves1 m at a speed of 50 cm/s with a 20 Hz AOA measurement rate.1115.1. Angular Receiver PositioningFigure 5.33: The 2-D and 3-D positioning error for an angular receiver speedof 50 cm/s and a 20 Hz AOA measurement rate.Table 5.12 shows the position statistics for the 2-D and 3-D positionerror.Table 5.12: The 2-D and 3-D error statistics for 50 cm/s at 20 Hz AOAmeasurement rate.Parameter 2-D 3-Dmean (cm) 4.0 12.0standard deviation (cm) 1.8 2.6minimum (cm) 1.3 8.0maximum (cm) 8.1 16.7Comparing the 50 cm/s speed for the 20 Hz and 5 Hz measurementupdate rates, one sees a significant reduction in AOA and positioning errorfor the 20 Hz case. The average φ error has significantly reduced from13.2◦ (50 cm/s-5 Hz) to 3.9◦ for the current case of (50 cm/s-20 Hz). Also,the AOA errors for the current case (50 cm/s-20 Hz) is comparable to the10 cm/s-5 Hz case and the 10 cm/s-20 Hz case, and as a result, the 2-D and3-D positioning errors in Fig. 5.33 follow the same shapes as in Figs. 5.10and 5.22.1125.1. Angular Receiver Positioning5.1.3 Average Walking Speed 139 cm/sIn this section, the AOA estimation accuracy and positioning perfor-mance of the angular receiver is quantified as it moves for 100 cm at anaverage speed of 139 cm/s with a 20 Hz AOA measurement rate. A speedof 139 cm/s (5 km/h) represents the average human walking speed [56]. Atotal of 14 AOA measurements are made along the 100 cm trajectory. Fig-ure 5.34 shows photocurrents i1, i2 and i3 corresponding to LED opticalbeacons 1, 2, 3 and 4 as the angular receiver is moving along its 100 cmpath at a speed of 139 cm/s at 20 Hz AOA measurement rate.Note that the photocurrents in Fig. 5.34 look more linear than the pho-tocurrents for the 10 cm/s and 50 cm/s cases in Figs. 5.5 and 5.23 respec-tively. This is because for 139 cm/s at 20 Hz AOA measurement rate, therobot moves 100 cm in a much shorter time period of 0.7 s and collects only14 AOA samples, compared to a 10 s time period with 49 AOA samples forthe 10 cm/s robot speed with 5 Hz AOA measurement rate.This phenomenon can be explained by observing the photocurrent i3(dashed blue) for LED2 in the top right hand plots of Figs. 5.5 and 5.34,and from the observed pattern of the robot motion at 10 cm/s and 139 cm/s.In the 10 cm/s case in Fig. 5.5 the robot moves along the 100 cm trajectoryat a fairly constant speed. However, in the 139 cm/s case the robot expe-riences significant acceleration at the start of its motion to reach a velocityof 139 cm/s from an initial zero velocity. 
This means that the 2048 sam-ple photocurrents collected to compute an AOA at time instant 0 for the139 cm/s case in Fig. 5.34 is actually the average summation of photocur-rents in Fig. 5.5 at time instances greater than and equal to 0. Since thephotocurrent i3 for LED2 in Fig. 5.5 is increasing from 0 cm to 60 cm, pho-tocurrent i3 for LED2 in Fig. 5.34 will be at a greater value at time instant0 compared to that in Fig. 5.5. Similarly, towards the end of the robot mo-tion, the robot must experience negative acceleration to reach zero velocity.This means that the 2048 photocurrent samples measured at a given timeor distance shown in Fig. 5.34 (for instance at time t = 0.7 s or distance d= 0.7× 139 ≈ 100 cm) is in fact the average summation of photocurrents inFig. 5.5 at times less than or equal to 0.7 s or distances less than or equal100 cm. Consequently, in the 139 cm/s case, photocurrent i3 will be higherat the start of the trajectory than in the 10 cm/s case, and vice versa at theend of the trajectory. Since i3 is increasing for LED2, the i3 photocurrentplot for the 139 cm/s case appears linear.The latency or delay in the angular receiver (PD and amplifying circuit)response is measured to be 0.15 ms, while the time needed for AOA mea-1135.1. Angular Receiver Positioningsurement for the 139 cm/s-20 Hz measurement rate is 0.7 s/(14-1) = 5.3 ms.This means that the angular receiver responds fast enough to changes inillumination levels as the robot moves at 139 cm/s at 20 Hz measurementrate, and, therefore, the linear behaviour of the photocurrents is only dueto the non-constant velocity of the robot.Figure 5.34: The photocurrents used to generate the AOA angles are shownversus iRobot Create distance traveled for LED1, LED2, LED3, and LED4.Photocurrent i1, i2, and i3 are shown as the red (solid), black (dotted) andblue (dashed) curves respectively.Figure 5.35 shows the AOA measured with respect to the x -axis forLED1 as the robot moves along the line x = 0 at a speed of 139 cm/s at a20 Hz measurement update rate. The actual AOA, based on the robot toLED1 optical beacon geometry, is shown for accuracy comparison. Similarly,Figs. 5.36, 5.37, and 5.38 show the AOA measured from LED2, LED3, andLED4 optical beacons respectively.1145.1. Angular Receiver PositioningFigure 5.35: AOA (θ1, φ1) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 139 cm/s at a 20 Hz AOA measurement rate.1155.1. Angular Receiver PositioningFigure 5.36: AOA (θ2, φ2) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 139 cm/s at a 20 Hz AOA measurement rate.1165.1. Angular Receiver PositioningFigure 5.37: AOA (θ3, φ3) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 139 cm/s at a 20 Hz AOA measurement rate.1175.1. Angular Receiver PositioningFigure 5.38: AOA (θ4, φ4) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 139 cm/s at a 20 Hz AOA measurement rate.Note that the AOA errors in Figs. 5.35, 5.36, 5.37, and 5.38 are muchlarger than the AOA errors at lower speeds of 10 cm/s and 50 cm/s. 
One possible reason for the large AOA error at the start of the robot motion is the fact that, to reach a speed of 139 cm/s from standstill as quickly as possible, the robot must experience significant acceleration at the start of its trajectory, and likewise significant negative acceleration at the end of the trajectory in order to stop. Therefore, the actual AOA is not representative of where the AOA measurements are made, and this causes the difference between the actual and measured AOA to be large at the start and end of the trajectory, as shown in Figs. 5.35, 5.36, 5.37, and 5.38. For this particular experiment, a longer track would likely yield more accurate AOA estimates, since the portions of the trajectory over which the robot accelerates and decelerates would be relatively smaller.

Table 5.13 shows the mean θ error for the 14 samples collected along the robot trajectory for θ1, θ2, θ3, and θ4. The overall mean θ error is 3.0◦. Table 5.14 shows the mean φ error for the 14 samples collected along the robot trajectory for φ1, φ2, φ3, and φ4. The overall mean φ error is 19.5◦.

Table 5.13: Mean θ error for 139 cm/s at 20 Hz AOA measurement rate.
Parameter    θ1     θ2     θ3     θ4
mean (◦)     3.9    3.3    1.8    2.8

Table 5.14: Mean φ error for 139 cm/s at 20 Hz AOA measurement rate.
Parameter    φ1     φ2     φ3     φ4
mean (◦)     19.0   18.0   18.0   23.0

Figure 5.39 shows the 2-D and 3-D positioning error versus distance traveled. Table 5.15 shows the position statistics for the 2-D and 3-D position error.

Table 5.15: The 2-D and 3-D error statistics for 139 cm/s at 20 Hz AOA measurement rate.
Parameter                  2-D    3-D
mean (cm)                  20     27
standard deviation (cm)    11     7
minimum (cm)               3      18.0
maximum (cm)               40     43

Note that the 2-D and 3-D positioning error plots shown in Fig. 5.39 have much larger position errors compared to the 10 cm/s and 50 cm/s cases in Sections 5.1.1 and 5.1.2 respectively. This is due to the large φ and θ errors at the start and end of the robot trajectory. Note also that, at the centre of the trajectory, around the 50 cm position, the error drops to 5 cm for the 2-D case and to 20 cm for the 3-D case. This is because the φ and θ errors are smaller towards the centre of the trajectory than at the start or end, as shown in Figs. 5.35, 5.36, 5.37, and 5.38, since the robot has reached constant speed by this point.

Figure 5.39: The 2-D and 3-D positioning error for an angular receiver speed of 139 cm/s and a 20 Hz AOA measurement rate.

Note that the φ error (responsible for the 2-D position accuracy) has an overall mean of 19.5◦, which is much larger (6.5 times) than the overall mean θ error of 3.0◦ shown in Tables 5.13 and 5.14. This results in the 3-D positioning error being dominated by the φ error. As with the positioning results for 50 cm/s at a 5 Hz AOA measurement rate in Fig. 5.28, the 2-D and 3-D position plots therefore have similar error patterns.

5.1.4 Summary

Table 5.16 summarizes the positioning error results for the angular receiver in motion. ∆D is the distance traveled while collecting the total number of voltage samples required to find one AOA measurement. For instance, when the robot moves at 10 cm/s with a 5 Hz AOA measurement rate, AOA measurements are made every ∆D = 10/(5-1) = 2.5 cm. The 5 Hz measurement rate is a result of LabVIEW capturing 2048 samples at a rate of 10 kHz.
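The relationship between robot speed, AOA measurement rate, and ∆D can be made explicit with a short calculation. The following Python fragment is an illustrative sketch only; the variable names are chosen here and do not correspond to the LabVIEW code used in the experiments. It reproduces the ∆D column of Table 5.16, to within rounding, from the robot speed, the AOA measurement rate, and the number of samples collected per AOA at the 10 kHz sampling rate.

```python
# Illustrative calculation of the sample-collection time per AOA and the
# distance Delta_D traveled between AOA measurements, for the configurations
# of Table 5.16. Names are illustrative, not taken from the LabVIEW code.

SAMPLING_FREQUENCY_HZ = 10e3  # analog sampling rate used by LabVIEW

# (robot speed in cm/s, AOA measurement rate in Hz, samples per AOA)
configurations = [
    (10, 5, 2048),
    (10, 20, 2048),
    (50, 5, 2048),
    (50, 10, 1024),
    (50, 20, 2048),
    (139, 20, 2048),
]

for speed_cm_s, aoa_rate_hz, samples in configurations:
    # Time spent collecting the voltage samples that form one AOA measurement.
    collection_time_s = samples / SAMPLING_FREQUENCY_HZ
    # Distance traveled between successive AOA measurements, as defined in
    # Section 5.1.4: Delta_D = v / (f - 1).
    delta_d_cm = speed_cm_s / (aoa_rate_hz - 1)
    print(f"{speed_cm_s:>4} cm/s @ {aoa_rate_hz:>2} Hz ({samples} samples): "
          f"collection time = {collection_time_s * 1000:.0f} ms, "
          f"Delta_D = {delta_d_cm:.2f} cm")
```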
Therefore, in this case, the time over which voltage samples are collected to produce one AOA measurement is 0.2 s.

Table 5.16: Summary of 2-D and 3-D error statistics.
Parameter                mean 2-D error (cm)    mean 3-D error (cm)    ∆D (cm)
10 cm/s-5 Hz-2048        2.6                    11.4                   2.5
10 cm/s-20 Hz-2048       3.2                    9.9                    0.53
50 cm/s-5 Hz-2048        12.7                   18.2                   12.5
50 cm/s-10 Hz-1024       8.3                    13.0                   5.5
50 cm/s-20 Hz-2048       4.0                    12.0                   2.6
139 cm/s-20 Hz-2048      20                     27                     7.3

The first column in Table 5.16 gives the robot speed in cm/s, the AOA measurement rate in Hz, and the number of samples read by LabVIEW. From Table 5.16 one observes that the lowest positioning error occurs for the cases of 10 cm/s at 5 Hz and 50 cm/s at 20 Hz. In both of these cases, ∆D is approximately 2.5 cm. When ∆D = 0.5 cm, as in the 10 cm/s-20 Hz case, positioning accuracy does not improve significantly compared to the ∆D = 2.5 cm case. Decreasing the number of samples LabVIEW reads from 2048 to 1024, at a 50 cm/s angular receiver speed and a 10 kHz sampling frequency, generates AOA readings every ≈ 0.1 s (10 Hz). Comparing the 50 cm/s-10 Hz case (1024 samples) to the 50 cm/s-5 Hz case (2048 samples), ∆D decreases from 12.5 cm to 5.5 cm and, as a result, the positioning accuracy improves as one would expect. From the above table, one can conclude that the sampling speed needed to achieve accurate position estimates is correlated with the angular receiver speed. For the geometry of LED optical beacons and angular receiver presented here, it is advisable that the AOA measurement rate, in Hz, be at least half the speed of the angular receiver, in cm/s, to achieve positioning accuracies on the order of 2-3 cm for 2-D positioning or on the order of 10 cm for 3-D positioning.

5.2 Image Receiver Positioning

In this section, the performance of the image receiver is studied while in motion. Similar to Section 5.1, the iRobot Create is used to host the image receiver. Figure 5.40 shows an image of the image receiver mounted on the iRobot Create. An Arduino Nano is used to control the robot motion. Figure 5.41 shows a magnified view of the image sensor and the microlens.

Figure 5.40: Image receiver mounted on iRobot Create.

Figure 5.41: A magnified view of the microlens and the image sensor.

Four LED optical beacons are positioned in an (x, y, z) navigational frame coordinate system as shown in Fig. 5.3, such that LED1 is positioned at (-35 cm, 30 cm, 100 cm), LED2 at (-35 cm, 65 cm, 100 cm), LED3 at (35 cm, 30 cm, 100 cm), and LED4 at (35 cm, 65 cm, 100 cm). Fig. 5.3 shows a top view of the LED/image receiver setup. The z axis represents the vertical separation between the LED optical beacons and the image receiver. The LED optical beacons are at z = 100 cm, while the image receiver lies in the xy plane.

Similar to Section 4.4, the four LED optical beacons emit white light by frequency modulating the R, G and B LEDs that make up each optical beacon, such that LED1 has RDCGf1Bf1, LED2 has RDCGf1Bf2, LED3 has RDCGf2Bf2, and LED4 has RDCGf2Bf1, where DC refers to f = 0, f1 is 70 Hz and f2 is 80 Hz. In order to match each LED optical beacon to its corresponding focal spot on the image, an FFT algorithm is run, before the image receiver starts moving, to detect the frequency components on the R, G, and B layers; a brief sketch of this matching step is given below.

The iRobot Create is programmed to move along a straight line trajectory.
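The fragment below is a minimal sketch of that matching step, assuming that a time series of G- and B-channel intensities at a focal spot has already been extracted from the recorded video frames. The 187 frames/s value is the image receiver frame rate reported later in this thesis; the function names, the lookup table, and the peak-picking details are illustrative assumptions rather than the actual processing code of Section 4.4.

```python
# Conceptual sketch of beacon matching: each focal spot is labelled by the
# dominant modulation frequency found on its G and B colour channels.
# Names and structure are illustrative, not taken from Section 4.4.
import numpy as np

FRAME_RATE_HZ = 187.0          # image receiver frame rate reported in this work
CANDIDATES_HZ = [70.0, 80.0]   # f1 and f2 used by the RGB optical beacons

# Beacon labels keyed by (green frequency, blue frequency).
BEACON_TABLE = {(70.0, 70.0): "LED1", (70.0, 80.0): "LED2",
                (80.0, 80.0): "LED3", (80.0, 70.0): "LED4"}

def dominant_frequency(samples, frame_rate=FRAME_RATE_HZ):
    """Return whichever candidate frequency (f1 or f2) has the larger FFT peak."""
    spectrum = np.abs(np.fft.rfft(samples - np.mean(samples)))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / frame_rate)
    return max(CANDIDATES_HZ,
               key=lambda f: spectrum[np.argmin(np.abs(freqs - f))])

def identify_beacon(green_samples, blue_samples):
    """Label a focal spot from the modulation on its G and B channels."""
    key = (dominant_frequency(green_samples), dominant_frequency(blue_samples))
    return BEACON_TABLE.get(key, "unknown")
```

Because both modulation frequencies lie below the Nyquist limit of the 187 frames/s sensor, the two candidate peaks remain separable in the computed spectrum, consistent with the statement later in this thesis that the image receiver can distinguish LEDs modulated up to approximately 90 Hz.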
The robot trajectory starts at the navigational frame coordinate (0, 0,0) and moves along the line x = 0 for a distance of 100 cm. For this analysis,the microlens with the ultra-wide FOV (90◦ contact angle) is utilized. Asthe robot starts moving a video application starts. This application recordsthe images captured by the image receiver on a video. By analyzing thevideo frames one can determine the exact times at which the robot startsand stops. Also, by knowing the image receiver’s frame rate, the exact framecorresponding to a translated position of the image receiver is calculated.The AOA accuracy and the resulting positioning accuracy are studiedwhen varying the image receiver speed. Section 5.2.1 shows the AOA andpositioning results when the image receiver is moving at a speed of 5 cm/s,while Section 5.2.2 shows the AOA and positioning results when the imagereceiver is moving at a speed of 10 cm/s.5.2.1 Very Low Speed 5 cm/sThe AOAs are computed at 10 cm intervals. At each interval the AOAis measured from each of the four LEDs which results in a total of 44 AOAs.Figures 5.42, 5.43, 5.44, and 5.45 show the AOA computed using the methoddescribed in Section 4.4 and the actual AOA based on geometry, for LED1,LED2, LED3, and LED4 respectively. The mean θ and φ errors, for allAOAs across the entire trajectory and for all four optical beacons, is approx-imately 1◦. Note that the image receiver mean AOA error is approximately1235.2. Image Receiver Positioning1◦-3◦ which is more accurate than the angular receiver mean AOA errorwhen operating at ∆D ≤ 2.5 cm.Figure 5.42: AOA (θ1, φ1) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 5 cm/s.1245.2. Image Receiver PositioningFigure 5.43: AOA (θ2, φ2) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 5 cm/s.1255.2. Image Receiver PositioningFigure 5.44: AOA (θ3, φ3) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 5 cm/s.1265.2. Image Receiver PositioningFigure 5.45: AOA (θ4, φ4) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 5 cm/s.Figure 5.46 shows the 2-D and 3-D positioning error as the image receivermoves for 1 m. Table 5.17 shows the statistics of the 2-D and 3-D positioningerror shown in Fig. 5.46. Note that in stark contrast to the 2-D and 3-Dpositioning error for the angular receiver, the 2-D and 3-D positioning errorsfor the image receiver are relatively constant over the entire trajectory.1275.2. Image Receiver PositioningFigure 5.46: The 2-D and 3-D position error versus distance traveled for theimage receiver at 5 cm/s.Table 5.17: The 2-D and 3-D error statistics for the image receiver movingat a speed of 5 cm/s .Parameter 2-D 3-Dmean (cm) 0.9 4.9standard deviation (cm) 0.17 0.18minimum (cm) 0.7 4.6maximum (cm) 1.3 5.3In order to verify the 3-D positioning results, a 3-D DOP investigationis carried out to validate the theoretical 3-D DOP in equation 3.13 usingthe empirical 3-D DOP in equation 3.3 defined as the ratio of the positionstandard deviations from the data of Fig. 5.46 to the AOA standard devia-tions from the data in Figs. 5.42-5.45. The empirical 3-D DOP is comparedto the theoretical 3-D DOP below.The empirical 3-D DOP is calculated as follows. 
At distance 0 cm, theimage receiver measures 4 pairs of AOA (θ1, φ1), (θ2, φ2), (θ3, φ3), and (θ4,φ4) from LEDs 1, 2, 3 and 4 respectively. The AOA error is calculated asthe difference between the measured AOA and the true AOA (calculatedbased on geometry). The standard deviation of those eight AOA errors is1285.2. Image Receiver Positioningcalculated and is denoted by σm. The 3-D position estimate at distance0 cm is computed using the measured AOAs in a Least Squares algorithmand results in an x̂, ŷ, and ẑ position estimate. The x error is calculatedas the difference between the x̂ estimate and the actual x coordinate of theimage receiver. The y and z errors are computed in a similar fashion. Thestandard deviation of the x error, y error, and z error is calculated and isdenoted be σp. The ratio of σp/σm is the empirical DOP value at distance0 cm. At the next point in the trajectory, empirical data is collected, andthe process is repeated using the AOAs and position error at this particulardistance. Figure 5.47 shows a plot of the empirical DOP and the theoreticalDOP along the entire trajectory. Note the very close agreement betweenthe theoretical and empirical DOP validates equation 3.13.Figure 5.47: Theoretical 3-D DOP versus empirical DOP calculated alongthe image receiver trajectory.The next section studies the image receiver AOA and positioning accu-racy when moving at a speed of 10 cm/s.5.2.2 Low Speed 10 cm/sThe above test is repeated but with the image receiver moving at 10 cm/s.AOAmeasurements are again made at 10 cm intervals. Figures 5.48, 5.49, 5.50,1295.2. Image Receiver Positioningand 5.51 show the AOA computed using the method shown in Section 4.4and the actual AOA based on geometry, for LED1, LED2, LED3, and LED4respectively. The mean θ and φ errors of the data points is again approxi-mately 1◦.Figure 5.48: AOA (θ1, φ1) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s.1305.2. Image Receiver PositioningFigure 5.49: AOA (θ2, φ2) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s.1315.2. Image Receiver PositioningFigure 5.50: AOA (θ3, φ3) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s.1325.2. Image Receiver PositioningFigure 5.51: AOA (θ4, φ4) versus robot travel distance computed usingboth the AOA measurement (data) and geometry calculation (actual) whenmoving at a speed of 10 cm/s.Figure 5.52 shows the 2-D and 3-D positioning error as the image receivermoves for 1 m. Table 5.18 shows the statistics of the 2-D and 3-D positioningerror shown in Fig. 5.52. Again, as in the case of the image receiver speedof 5 cm/s, the 2-D and 3-D positioning errors are relatively constant.1335.2. 
Image Receiver PositioningFigure 5.52: The 2-D and 3-D position error versus distance traveled for theimage receiver at 10 cm/s.Table 5.18: The 2-D and 3-D error statistics for the image receiver movingat a speed of 10 cm/s .Parameter 2-D 3-Dmean (cm) 1.5 4.1standard deviation (cm) 0.15 0.12minimum (cm) 1.3 4.0maximum (cm) 1.7 4.4Comparing the 2-D and 3-D results in Table 5.18 with the angular re-ceiver 2-D and 3-D results for the same speed of 10 cm/s in Table 5.3 oneobserves that the 2-D and 3-D mean positioning errors for the angular re-ceiver are greater than the 2-D and 3-D mean positioning errors for theimage receivers by factors greater than approximately two in some cases.Moving at speeds greater than 10 cm/s introduced AOA errors due toslight motion of the actual microlens with respect to the CMOS sensor whichcaused the focal spot to move. Therefore, 10 cm/s represents the maximumspeed the image receiver can travel utilizing the setup in Fig. 5.40. Toachieve accurate position results at higher speeds it is essential that themicrolens and image sensor platforms be mounted as one platform to ensure1345.2. Image Receiver Positioningstable images.In summary, comparing the AOA and positioning results for the angularand image receivers in motion, one observes that the AOA error character-istics of the angular receiver are highly dependent on the particular incidentAOA as shown in the results of Figs. 3.11 and 3.12. As a result, the 2-Dand 3-D position accuracies varied accordingly as the incident AOA changedwhile the angular receiver moved along its trajectory and this resulted inhigh position variances. In the image receiver case, the AOA errors, asshown in Fig. 4.9 and 4.10, are largely independent of the incident AOAwithin the angular receiver FOV. As a result the image receiver 2-D and3-D position accuracies were fairly constant with little variations over theentire trajectory.135Chapter 6Conclusions andRecommendationsThis thesis presented a thorough assessment of indoor optical positioningusing two receivers, namely an angular receiver and an image receiver. Inthis chapter all the key conclusions of this research are outlined. This isfollowed by recommendations for future work.6.1 ConclusionsIn order to evaluate the performance of the angular and image receiversfor indoor optical positioning the following objectives were completed.1. Determine the feasibility of measuring the AOA of light with the an-gular receiver and the image receiver.2. Determine the accuracy of the AOA measured by the angular receiverand the image receiver.3. Determine the effect of optical beacon geometry on the angular receiverand image receiver systems positional accuracy.4. Determine the positional accuracy when both receivers are static andwhile in motion.The conclusions drawn with respect to each of the objectives are givenbelow starting with those associated with the angular receiver.6.1.1 Angular Receiver1. The angular receiver is a corner-cube structure with interior sides madeup of PDs. When light from an LED strikes the angular receiver, eachof the three PD sides produces a photocurrent proportional to theintensity of the incident light that strikes it. The three generated pho-tocurrents are normalized with respect to the maximum photocurrent1366.1. Conclusionsand are subtracted with respect to one photocurrent to form two dif-ferential equations. These two differential equations are then solvedsimultaneously to compute the AOA of the incident light beam. 
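To illustrate the procedure just described, the following Python sketch synthesizes the three photocurrents under a simple projected-area (cosine) response for each PD face, normalizes them with respect to the maximum, forms the two differences, and solves for the AOA numerically. The cosine response model, the numerical solver, and all names are assumptions of this illustration; the exact photocurrent model and the form of the two differential equations are developed in the earlier chapters of this thesis and are not reproduced here.

```python
# Minimal numerical sketch of the corner-cube AOA computation, assuming each
# PD face responds in proportion to the cosine of the angle between the
# incident ray and that face's normal (an idealized projected-area model).
import numpy as np
from scipy.optimize import least_squares

# PD face normals of the corner-cube in the receiver frame.
NORMALS = np.eye(3)   # n1 = x-hat, n2 = y-hat, n3 = z-hat

def normalized_differences(theta_deg, phi_deg):
    """Forward model: two normalized photocurrent differences for a given AOA."""
    theta, phi = np.radians([theta_deg, phi_deg])
    s = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])                 # incident unit vector
    currents = NORMALS @ s                        # i_k proportional to n_k . s
    currents = currents / currents.max()          # normalize by the maximum
    return np.array([currents[0] - currents[2],   # the two differential terms
                     currents[1] - currents[2]])

def estimate_aoa(i1, i2, i3):
    """Solve the two difference equations simultaneously for (theta, phi)."""
    measured = np.array([i1, i2, i3], dtype=float)
    measured = measured / measured.max()
    target = np.array([measured[0] - measured[2], measured[1] - measured[2]])
    result = least_squares(
        lambda aoa: normalized_differences(*aoa) - target,
        x0=[54.7, 45.0],                          # start at the axis of symmetry
        bounds=([0.0, 0.0], [90.0, 90.0]))
    return result.x                               # (theta, phi) in degrees

if __name__ == "__main__":
    # Photocurrents synthesized for theta = 60 deg, phi = 30 deg.
    s = np.array([np.sin(np.radians(60)) * np.cos(np.radians(30)),
                  np.sin(np.radians(60)) * np.sin(np.radians(30)),
                  np.cos(np.radians(60))])
    i1, i2, i3 = 7.3 * s                          # arbitrary intensity scale
    print(estimate_aoa(i1, i2, i3))               # approximately [60. 30.]
```

With noise-free synthetic photocurrents the solver recovers the AOA used to generate them; in practice the achievable accuracy is set by the PD responses and the beacon geometry, as quantified below.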
Theprecision of the AOAs was found to be very high with a maximumstandard deviation of 0.03◦.2. Given the large bandwidth of the angular receiver and LED opticalbeacons, different LEDs were modulated at different frequencies whichnecessitated designing a band-pass and amplifier circuit to filter am-bient noise, such as 60 Hz room light, so that the angular receivercan accurately detect and distinguish between different LEDs basedon their modulation frequencies. A Butterworth bandpass filter wasdesigned and tested with the angular receiver for this role, and provedto be effective in attenuating ambient light, since the AOA computedin the presence of ambient light was equal to the AOA computed inthe absence of ambient noise.3. The effect of diffuse multipath was investigated for various materialssuch as drywall, plywood and stainless steel. The effect of multipathreflections on AOAs measured with the angular receiver was found tobe negligible when the percentage of reflected light was at most 20%of the total incident light.4. For LOS environments, a configuration-dependent minimum opticalintensity for accurate AOA estimation was determined to be 0.2 µW/cm2.Light incident at the angular receiver below this minimum intensityyielded erroneous AOA measurements. The minimum intensity isunique to the particular PDs used in the angular receiver tested. Anangular receiver constructed from different PDs would likely have adifferent minimum intensity.5. The accuracy of the angular receiver AOA measurements was inves-tigated for various LED incident angles. Least accurate AOAs werefound to be at the edges of the angular receiver, when θ approaches0◦ or 90◦ or when φ approaches 0◦ or 90◦, while more accurate AOAswere found to be close to the angular receiver axis of symmetry θ =54.7◦ and φ = 45◦. An operational cone of 40◦ × 40◦ about θ = 54.7◦and φ = 45◦ was found to provide a mean AOA error of 2◦. For typicalLED/angular receiver separation distances of 2 m, the 2◦ error trans-lates into position errors in the order of a few centimeters. Such an1376.1. Conclusionsaccuracy is necessary for applications such as guiding robots throughnarrow areas such as doorways.6. In optical RSS positioning systems, the position of a receiver withrespect to a fixed grid of LED beacons with known positions is com-puted by first measuring its range to each of the LEDs using receivedsignal strength and a channel model, and then estimating the receiverposition using trilateration. The angular receiver is essentially an op-tical RSS system, since it measures the RSS on each of its PD sides.However, it uses normalized PD differences to get an AOA. Angularreceiver position is then estimated using triangulation. It was essen-tial to compare the positioning performance of the angular receiverto an optical RSS-based system. Such a comparison showed that theposition error using the angular receiver was 75% lower than that ofoptical RSS.7. Typically, the angular receiver will be mounted facing upwards to cap-ture within its FOV the maximum number of LEDs mounted on theceiling. For a square LED grid having four LEDs one at each corner,and for LED/angular receiver vertical distances of 2 m, the square gridside-length must be at most 2 m for the four LEDs mounted on theceiling to be within the angular receiver FOV defined for θ between 0◦and 90◦ and φ between 0◦ and 90◦.8. The angular receiver position accuracy was investigated while the an-gular receiver was in motion. 
This was done by mounting the angularreceiver on a robot platform. The robot moved at speeds of 10 cm/s,50 cm/s and 5 km/h (average human walking speed). During motion,the angular receiver collected AOA readings from four optical bea-cons mounted on the ceiling. At each of the above speeds the AOAmeasurement rate was varied. For the given geometry of LEDs andangular receiver, it was found that to achieve a 3-D positioning erroron the order of 10 cm, the AOA measurement rate in Hz must be atleast half the angular receiver speed in cm/s.6.1.2 Image Receiver and Angular Receiver PerformanceComparison1. The performance of the image receiver, consisting of a custom-mademicrolens and an image sensor, was tested for AOA estimation fromLEDs mounted on the ceiling. The custom-made microlens presents1386.1. Conclusionsa significant advantage over current microlenses since it achieves anespecially wide FOV (130◦) and a very short focal length of a fewmillimeters. This facilitates the integration of this image receiver inemerging technologies such as cellular cameras.2. An AOA was determined by first recording a video of the LEDs withthe image sensor, and then finding the pixel position of a spot corre-sponding to a particular LED. Based on the LED pixel position withrespect to the microlens pixel position, θ and φ were determined.3. In order to test the maximum FOV of the microlens, several LEDswere scattered on the ceiling at various positions with respect to theimage receiver, and their AOAs measured. Two different microlenseswere used for this investigation having different curvatures and, there-fore, different FOV characteristics. The maximum FOV, in terms of θdefined with respect to the vertical axis, was found to be 95◦ for thewide FOV microlens and 130◦ for the ultra-wide FOV microlens.In comparison to the angular receiver FOV in Section 6.1.1, for asquare LED grid having four LEDs one at each corner, and for verticaldistances of 2 m between the LEDs and image receiver, the imagereceiver with the ultra-wide FOV microlens can image the four LEDswith a square grid side-length that is three times that of the angularreceiver.4. The mean AOA for angles within the FOV of the image receiver wasfound to be 0.5◦ which is 75% lower than the 2◦ AOA error for theangular receiver within its operational cone. This mean AOA wasdetermined when the optical intensities of the LEDs being imagedwere between 0.03 µW/cm2 and 0.2 µW/cm2. Note that the imagereceiver can determine an AOA at dimmer optical intensities comparedto the angular receiver (having a minimum intensity of 0.2 µW/cm2).5. Unlike the angular receiver which consists of only three PDs, the imagereceiver consists of thousands of PDs (pixels) and, therefore, is slowerbeing limited by frame rate. The frame rate of the image receiver usedin this thesis was 187 frames/s which allows it to distinguish betweenLEDs modulated up to approximately 90 Hz. In order for the imagereceiver to distinguish between different LEDs, a novel colour andfrequency multiplexing scheme was implemented utilizing Red GreenBlue (RGB) LED optical beacons. All LEDs appear to emit a whitecolour. However, each of the R, G, and B pins of an individual LED1396.2. Recommendationsare modulated at unique frequencies. An FFT algorithm operating onthe image pixel data is utilized to determine the R, G, and B spectralcomponents. One drawback of this technique is that it requires thecamera to be stationary. This is needed to accurately acquire LEDmodulation frequencies.6. 
While in motion the image receiver achieved an approximately con-stant position error. This was due to the constant nature of the AOAerror which was largely independent of the incident AOA. For the samegeometry, the mean 3-D position error of the image receiver was 4 cmwhich is at least 50% lower than the mean 3-D position error of theangular receiver in motion (having an AOA measurement rate in Hzthat is half the angular receiver speed in cm/s).6.2 RecommendationsThe work presented has shown that both optical receivers, namely theangular receiver and image receiver can provide position errors on the orderof a few centimeters for typical indoor positioning scenarios. However, sev-eral factors exist that need to be further investigated as future work for thisresearch.1. In order to make the angular receiver a more practical system, sev-eral design challenges need to be overcome. The current bandpass andamplifier circuit spans a significant amount of space (20 cm × 10 cm).Using a printed circuit board to miniaturize the angular receiver cir-cuitry would make the angular receiver more practical. Also, each PDof the corner-cube angular receiver is approximately 1 cm × 1 cm.Smaller PDs of approximately 3 mm × 3 mm can be used to make theangular receiver more compact and, therefore, unobtrusive.2. The field of view of the angular receiver can be increased by buildinga quadruplet angular receiver. Since one angular receiver (corner-cube) spans a quarter of a hemisphere, this limits its FOV to a fewLEDs. It would be advantageous to increase the corner-cube angularreceiver FOV so that it sees more LEDs. This in effect introducesredundancy and should, therefore, result in a better position estimate.A typical positioning scenario would be attaching the angular receiverto a box to be monitored in a warehouse. If the worker responsible forcarrying the box tips the box, the current angular receiver may not1406.2. Recommendationsface the ceiling and, therefore, will not see any of the LEDs mountedon the ceiling. Having a quadruplet receiver will reduce the risk of thishappening, allowing continuous positioning to be maintained. To dothis, four corner-cubes could be assembled back to back.3. Bidirectional communication between the network and the angular re-ceiver should be implemented. In a typical communication system, thereceiver is resource constrained. Therefore, the receiver will send themeasured data (photocurrents in the case of an angular receiver) backto the network which will perform the heavy computational load. Inorder to do this, two way communication between the LED transmit-ters and the angular receiver needs to be established. The corner-cubestructure of the angular receiver causes it to act as a retroreflectorand, therefore, allows the LED light to be reflected back to the net-work. The network will have a PD beside each LED transmitter thatwill capture the light signals transmitted back from the corner-cubeangular receiver. The corner-cube angular receiver will need to mod-ulate the retroreflected light to carry information on the relative pho-tocurrent values generated back to the network where the AOA willbe calculated. One possible way to do this is by using a liquid crys-tal modulator [57]. However, liquid crystal modulators are limited tospeeds of 150 Hz. To mitigate this, a switching technique with higherfrequencies, known as multiple quantum well [58], can be utilized. Thistechnique is used in free-space optical communications. 
The signal isencoded using on-off keying modulation of the carrier.4. A more robust image receiver structure in which the microlens andimage sensor are mounted together in one platform should be designedin order to overcome vibrations witnessed in the motion tests. This isespecially difficult since the image receiver focal length is only a fewmillimeters. This would be essential to test the image receiver AOAand positioning performances at higher speeds than those tested inthis thesis.5. In this work, the orientation of both the angular and image receivers’body frames are known with respect to a reference frame. In a typicalsystem, the body frame orientation, defined by yaw, pitch and rollangles, is unknown and needs to be estimated. To solve for the threeorientation angles a greater redundancy of LED beacons is required.At least six LED beacons would be needed to solve for 3-D positionand receivers’ orientation.1416.2. Recommendations6. The angular receiver and the image receiver represent two extremecases in terms of the number of photodiodes used for positioning ap-plications. The angular receiver has only three photodiodes, whilethe image receiver has thousands of photodiodes and, therefore, hasa higher resolution compared to the angular receiver. The angularreceiver, on the other hand, has a faster response and is less computa-tionally intensive compared to the image receiver. A device that hasa fast response yet achieves sufficient resolution needs to be designedto enhance indoor optical wireless positioning performance.142Bibliography[1] D. Manandhar, H. Torimoto, and M. Ishii, “Experiment results of seam-less navigation using imes for hospital resource management,” in Pro-ceedings of the Institute of Navigation (GNSS 2012), 2012, pp. 200–207.[2] L. Q. Zhuang, W. Liu, J. B. Zhang, D. H. Zhang, and I. Kamajaya,“Distributed asset tracking using wireless sensor network,” in Proceed-ing in IEEE International Conference on Emerging Technologies andFactory Automation, 2008, pp. 201–214.[3] G. Desouza and A. KaK, “Vision for mobile robot navigation: A sur-vey,” IEEE Trans. Pattern analysis and machine intelligence, vol. 24,no. 2, pp. 237–267, Feb. 2002.[4] R. Mautz and S. Tilch, “Survey of optical indoor positioning systems,”in intel. conf. on indoor positioning and indoor navigation IPIN, 2011,pp. 1–7.[5] L. Ruotsalainen, H. Kuusniemi, and R. Chen, “Overview of methods forvisual-aided pedestrian navigation,” in Ubiquitous positioning indoornavigation and location based service, 2010, pp. 1–10.[6] P. Bahl and V. N. Padmanabhan, “Radar: an in-building rf-based userlocation and tracking system,” in INFOCOM, 2000, pp. 775–784.[7] A. Hiyama, J. Yamashita, H. Kuzuoka, K. Hirota, and M. Hirose,“Position tracking using infra-red signals for museum guiding system,”in Ubiquitous Computing Systems, ser. Lecture Notes in ComputerScience, H. Murakami, H. Nakashima, H. Tokuda, and M. Yasumura,Eds. Springer Berlin Heidelberg, 2005, vol. 3598, pp. 49–61. [Online].Available: http://dx.doi.org/10.1007/11526858 5[8] (2014) Ubisense, system overview. [Online]. Available:http://www.ubisense.net.143Bibliography[9] M. Hazas and A. Hopper, “A novel broadband ultrasonic location sys-tem for improved indoor positioning,” IEEE Trans. Mobile Comput,vol. 5, no. 5, pp. 536–547, May 2006.[10] E. Royer, M. Lhuillier, M. Dhome, and J. Lavest, “Monocular vision formobile robot localization and autonomous navigation,” Int. J. Comput.Vision, vol. 74, no. 3, pp. 237–260, Sep. 2007.[11] N. Ravi, P. Shankar, A. Frankel, A. 
Elgammal, and L. Iftode, “Indoorlocalization using camera phones,” in Proc. 7th IEEE Mobile Comput-ing Systems and Applications, 2005, pp. 1–19.[12] J. Caffery and G. L. Stuber, “Subscriber location in cdma cellular net-works,” Vehicular Technology, IEEE Transactions on, vol. 47, no. 2,pp. 406–416, May 1998.[13] K. Panta and J. Armstrong, “Indoor localization using white leds,” IETElectron lett, vol. 84, no. 4, pp. 228–230, Feb. 2012.[14] S.-Y. Jung, S. Hann, and C.-S. Park, “Tdoa-based optical wireless in-door localization using led ceiling lamps,” IEEE Trans. consum. Elec-tron., vol. 57, no. 4, pp. 1592–1597, Nov. 2011.[15] J. Grubar, S. Randel, K. D. Langer, and J. W. Walewski, “Broadbandinformation broadcasting using led-based interior lighting,” J. Light-wave Technol., vol. 26, no. 24, pp. 3883–3892, Dec. 2008.[16] H. Elgala, R. Mesleh, and H. Haas, “Indoor optical wireless communi-cation: potential and state-of-the-art,” IEEE Commun. Mag., vol. 49,no. 9, pp. 56–62, Sep. 2011.[17] S. Horikawa, T. Komine, S. Haruyama, and M. Nakagawa, “Perva-sive visible light positioning system using white led lighting,” Technicalreport of the Institute of elecronics, information and communicationengineers, pp. 93–99, Mar. 2004.[18] K. D. Dambul, D. C. O’Brien, and G. Faulkner, “Indoor optical wirelessmimo system with an imaging receiver,” IEEE Photon. Technol. Lett,vol. 23, no. 2, pp. 97–99, Jan. 2011.[19] J. Vucic, C. Kottke, S. Nerreter, A. Buttner, K. D. Langer, and J. D.Walewski, “White light wireless transmission at 200+ mb/s net datarate by use of discrete-multitone modulation,” IEEE Photon. Technol.Lett, vol. 21, no. 20, pp. 1511–1513, Oct. 2009.144Bibliography[20] H. L. Minh, D. C. O’Brien, G. Faulkner, O. Bouchet, M. Wolf, L. Grobe,and J. Li, “A 1.25 gb/s indoor cellular optical wireless communicationsdemonstrator,” IEEE Photon. Technol. Lett, vol. 22, no. 21, pp. 1598–1600, Nov. 2010.[21] M. S. Rahman, M. M. Haque, and K. D. Kim, “High precision indoorpositioning using lighting led and image sensor,” in Computer and In-formation Technology (ICCIT), 2011 14th International Conference on,2011, pp. 309–314.[22] B. Y. Kim, J. S. Cho, Y. Park, and K. D. Kim, “Implementation ofindoor positioning using led and dual pc cameras,” in Ubiquitous andFuture Networks (ICUFN), 2012 Fourth International Conference on,2012, pp. 476–477.[23] A. D. Cheok and L. Yue, “A novel light sensor based information trans-mission system for indoor positioning and navigation,” IEEE Trans.Instrum. Meas., vol. 60, no. 1, pp. 290–299, Jan. 2011.[24] D. C. O’Brien, J. J. Liu, G. E. Faulkner, S. Sivathasan, W. W. Yuan,S. Collins, and S. J. Elston, “Design and implementation of opticalwireless communications with optically powered smart dust motes,”IEEE J. Sel. Areas Commun., vol. 27, no. 9, pp. 1646–1653, Dec. 2009.[25] M. Yoshino, S. Haruyama, and M. Nakagawa, “High-accuracy posi-tioning system using visible led lights and image sensor,” in Radio andWireless Symposium, 2008, pp. 439–442.[26] S. Mazuelas, A. Bahillo, R. Lorenzo, P. Fernandez, F. Lago, E. Garcia,J. Blas, and E. Abril, “Robust indoor positioning provided by real-time rssi values in unmodified wlan networks,” Selected Topics in SignalProcessing, IEEE Journal of, vol. 3, no. 5, pp. 821–831, Oct. 2009.[27] T. S. Rappaport, Wireless Communications, Principles and Practice.Prentice Hall, 2002.[28] G. Retscher and Q. Fu, “Continuous indoor navigation with rfid andins,” in Position Location and Navigation Symposium (PLANS), 2010IEEE/ION, 2010, pp. 102–112.[29] J. 
Song, C. T. Haas, and C. H. Caldas, “A proximity-basedmethod for locating rfid tagged objects,” Adv. Eng. Inform.,vol. 21, no. 4, pp. 367–376, Oct. 2007. [Online]. Available:http://dx.doi.org/10.1016/j.aei.2006.09.002145Bibliography[30] M. Rodriguez, J. P. Pece, and C. J. Escudero, “In-building location us-ing bluetooth,” in Proceedings of the International Workshop on Wire-less Ad Hoc Networks, 2005.[31] M. Deffenbaugh, J. Bellingham, and H. Schmidt, “The relation-ship between spherical and hyperbolic positioning,” in OCEANS 96.MTS/IEEE. Prospectsfor the 21st Century. Conference Proceedings,1996, pp. 590–595.[32] F. Benedetto, G. Giunta, and E. Guzzon, “Enhanced toa-based indoor-positioning algorithm for mobile lte cellular systems,” in PositioningNavigation and Communication (WPNC), 2011 8th Workshop on, Apr.2011, pp. 137–142.[33] M. Stella, M. Russo, and M. Saric, “Rbf network design for indoor po-sitioning based on wlan and gsm,” Int. J. Circuit Syst. Signal Process.,vol. 8, pp. 116–122, 2014.[34] Z. Irahhauten, H. Nikookar, and M. Klepper, “2d uwb localization in in-door multipath environment using a joint toa/doa technique,” in IEEEwireless communications and networking conference (WCNC): Mobileand wireless networks, 2012, pp. 2253–2257.[35] Y. Fukuju, M. Minami, H. Morikawa, and T. Aoyama, “Dolphin: Anautonomous indoor positioning system in ubiquitous computing envi-ronment,” in Proceedings of the IEEE Workshop on Software Technolo-gies for Future Embedded Systems, 2003, pp. 53–56.[36] G. Yahav, G. Iddan, and D. Mandelboum, “3d imaging camera for gam-ing application,” in Consumer Electronics, 2007. ICCE 2007. Digest ofTechnical Papers. International Conference on, Jan 2007, pp. 1–2.[37] C. Harris and M. Stephens, “A combined corner and edge detector,” inAlvey Vision Conference, 1988, pp. 147–151.[38] M. Veth and J. Raquet, “Two-dimensional stochastic projections fortight integration of optical and inertial sensors for navigation,” reportAir force institute of technology, pp. 237–267, 2007.[39] D. Xu, L. Han, M. Tan, and L. Y. F, “Ceiling-based visual positioningfor an indoor mobile robot with monocular vision,” Industrial Electron-ics, IEEE, vol. 56, no. 5, pp. 1617–1628, May 2009.146Bibliography[40] S.-Y. Hwang and S. J-B, “Monocular vision-based slam in indoor en-vironment using corner, lamp, and door features from upward-lookingcamera,” Industrial Electronics, IEEE Transactions on, vol. 58, no. 10,pp. 4804–4812, Oct. 2011.[41] S. Kwanmuang, L. Ojeda, and J. Borenstein, “Magnetometer-enhancedpersonal locator for tunnels and gps-denied outdoor environments,” inProc. SPIE, vol. 8019, 2011, pp. 80 190O–80 190O–11.[42] A. Martinelli, “Vision and imu data fusion: Closed-form solutionsfor attitude, speed, absolute scale, and bias determination,” Robotics,IEEE Transactions on, vol. 28, no. 1, pp. 44–60, Feb. 2012.[43] R. Jirawimut, S. Prakoonwit, F. Cecelja, and W. Balachandran, “Vi-sual odometer for pedestrian navigation,” Instrumentation and Mea-surement, IEEE Transactions on, vol. 52, no. 4, pp. 1166–1173, Aug.2003.[44] J. Xian and J. F. Holzman, “Differential retro-detection for remotesensing applications,” Sensors Journal, IEEE, vol. 10, no. 12, pp. 1875–1883, Dec. 2010.[45] S. M. Kay, Fundamentals of statistical signal processing: EstimationTheory. Prentice Hall, 1993.[46] A. G. Dempster, “Dilution of precision in angle-of-arrival positioningsystems,” Electronics Letters, vol. 42, no. 5, pp. 291–292, Mar. 2006.[47] E. M. Mikhail, Observations and least-squares. 
A Dun-Donnelly Publisher, 1976.
[48] X. Jin, D. Guerrero, R. Klukas, and J. F. Holzman, “Microlenses with tuned focal characteristics for optical wireless imaging,” Appl. Phys. Lett., vol. 105, pp. 1–5, Jul. 2014.
[49] S. Audran, B. Faure, B. Mortini, C. Aumont, R. Tiron, C. Zinck, Y. Sanchez, C. Fellous, J. Regolini, J. P. Reynard, G. Schlatter, and G. Hadziioannou, “Study of dynamical formation and shape of microlenses formed by the reflow method,” in Proc. SPIE, vol. 6153, 2006, pp. 61534D–61534D-10.
[50] D. Travis, Effective Color Displays: Theory and Practice. Academic Press, 1991.
[51] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Addison-Wesley, 1992.
[52] (2014) iRobot Corporation, iRobot Create platform. [Online]. Available: http://www.irobot.com.
[53] (2014) Arduino Nano. [Online]. Available: http://www.arduino.cc/en/Main/arduinoBoardNano.
[54] (2014) Sure Electronics: 3 watt high power LED. [Online]. Available: www.sure-electronics.com.
[55] (2014) Texas Instruments OPA552 datasheet. [Online]. Available: http://www.ti.com.cn/cn/lit/ds/symlink/opa551.pdf.
[56] M. Altini, R. Vullers, C. Van Hoof, M. van Dort, and O. Amft, “Self-calibration of walking speed estimations using smartphone sensors,” in Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on, Mar. 2014, pp. 10–18.
[57] D. C. O’Brien, W. W. Yuan, J. J. Liu, G. E. Faulkner, S. J. Elston, S. Collins, and L. A. Parry-Jones, “Optical wireless communications for micromachines,” in Proc. SPIE, vol. 6304, 2006, pp. 63041A–63041A-8.
[58] W. S. Rabinovich, R. Mahon, H. R. Burris, G. C. Gilbreath, P. G. Goetz, C. I. Moore, M. F. Stell, M. J. Vilcheck, J. L. Witkowsky, L. Swingen, M. R. Suite, E. Oh, and J. Koplow, “Free-space optical communications link at 1550 nm using multiple-quantum-well modulating retroreflectors in a marine environment,” Optical Engineering, vol. 44, no. 5, pp. 056001–056001-12, May 2005.
