CHARACTERISATIONS AND RECOMMENDATIONS FOR AN ANGLE-OF-ARRIVAL-BASED OPTICAL WIRELESS POSITIONING SYSTEM

by

Mark Henry Bergen

B.A.Sc., The University of British Columbia, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE COLLEGE OF GRADUATE STUDIES (Electrical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

October 2016

© Mark Henry Bergen, 2016

The undersigned certify that they have read, and recommend to the College of Graduate Studies for acceptance, a thesis entitled: Characterisations and recommendations for an angle-of-arrival-based optical wireless positioning system, submitted by Mark Henry Bergen in partial fulfilment of the requirements of the degree of Master of Applied Science.

Dr. Jonathan Holzman, School of Engineering, The University of British Columbia (Supervisor, Professor)
Dr. Richard Klukas, School of Engineering, The University of British Columbia (Supervisory Committee Member, Professor)
Dr. Thomas Johnson, School of Engineering, The University of British Columbia (Supervisory Committee Member, Professor)
Dr. Rudolf Seethaler, School of Engineering, The University of British Columbia (University Examiner, Professor)

October 26, 2016 (Date Submitted to Grad Studies)

Abstract

The rise of Optical Wireless (OW) technologies in recent years has motivated the development of many applications. While OW communication (e.g., Li-Fi, free-space optical) is the primary area of interest, OW positioning with an array of indoor optical beacons and a mobile optical receiver has begun to gain traction. Optical wireless positioning has the potential to complement GPS in indoor environments, such as buildings, where GPS is unreliable. There are several methods of OW positioning that are capable of metre to centimetre level accuracy. This thesis investigates an angle-of-arrival- (AOA-) based OW positioning system which we find capable of centimetre level position accuracy. In the past, OW positioning system analyses tended to focus on optical receiver design without considering the optical beacon geometry. Unfortunately, position accuracy of an OW positioning system greatly depends on both. As such, the AOA-based OW positioning system analysis presented in this thesis is broken into two areas: optical beacon geometry and optical receiver design. Optical beacon geometry is investigated to first quantify the performance of generalized optical beacon geometries using a dilution-of-precision (DOP) analysis, then to investigate several optical beacon geometries and identify trends to improve DOP. Optical receiver design is then carried out by first using the literature to identify a potential optical receiver architecture, then thoroughly investigating the design of that architecture to minimize measurement errors. Using the analysis results for both the optical beacon geometry and the optical receiver, an AOA-based OW positioning system is built. Theoretical and experimental position error results over a 1 m² working area are 1.68 cm and 1.7 cm ± 0.2 cm, respectively. The characterisations and recommendations in this thesis support improved AOA-based OW positioning system designs in the future.
iv  Preface The work presented in this thesis was carried out in the Integrated Optics Laboratory, in the School of Engineering, at the University of British Columbia’s Okanagan campus. All work was done under the supervision of Dr. Jonathan Holzman.  The majority of Chapter 2 of this thesis is based on previously published work [J1,J2]. In [J1] I was the principal investigator carrying out experimentation, performing data analysis, and preparing the manuscript. This work contained algorithms based off of previous work done by A. Arafa, and experimentation was carried out using microlenses fabricated by X. Jin. Manuscript edits and other insights were given by R. Klukas and J. Holzman. In [J2], I provided manuscript editing assistance and did the data analysis for one figure in the manuscript. The principal investigator, A. Arafa, was responsible for simulations, experimentation, and preparing the manuscript. Microlens fabrication was done by X. Jin and manuscript editing was done by A. Arafa, X. Jin, R. Klukas, and J. Holzman. I carried out the remainder of the work presented in Chapter 2. Figure 3.3 in this thesis was reproduced with permission from [J1]. The work presented in Chapters 3 and 4 of this thesis will be the subject of a future journal publication. In this work, I was the principal investigator carrying out the majority of the experimentation and data analysis. Experimentation was done with the assistance of H. A. L. F. Chaves who fabricated the LED modulation circuit in Chapter 3 and X. Jin who fabricated the microlenses used in the optical receiver in Chapters 3 and 4. Appendix C is an algorithm which I wrote based on a similar algorithm written by A. Arafa.      v  Publication List Refereed Journal Articles [J1] M. H. Bergen, A. Arafa, X. Jin, R. Klukas, and J. F. Holzman, "Characteristics of angular precision and dilution of precision for optical wireless positioning," IEEE Journal of Lightwave Technology, vol. 33, no. 20, pp. 4253-4260, Oct. 2015. (Front cover of journal) [J2] A. Arafa, X. Jin, M. Bergen, R. Klukas, and J. F. Holzman, "Characterization of image receivers for optical wireless location technology," IEEE Photonics Technology Letters, vol 27, no. 18, pp.1923-1926, Sep. 2015. [J3] C. M. Collier, M. H. Bergen, T. J. Stirling, M. A. DeWachter, and J. F. Holzman, "Optimization processes for pulsed terahertz systems," Applied Optics, vol. 54, no. 3, pp. 535-545, Jan. 2015. [J4] M. H. Bergen, et al. "Retroreflective imaging system for optical labeling and detection of microorganisms," Applied Optics, vol. 53, no. 17, pp. 3647-3655, Jun. 2014. (reprinted in the Virtual Journal for Biomedical Optics, 2014)  Refereed Conference Proceedings [C1] M. H. Bergen, D. Guerrero, X. Jin, B. A. Hristovski, H. Chaves, R. Klukas, and J. F. Holzman, "Design and optimization of indoor optical wireless positioning systems", in SPIE OPTO, 2016, pp. 97540A-97540A. (Presenting Author) [C2] C. M. Collier, J. D. Krupa, I. R. Hristovski, T. J. Stirling, M. H. Bergen, and J. F. Holzman, "Textured semiconductors for enhanced photoconductive terahertz emission," in SPIE OPTO, 2016, pp. 97470M-97470M.  vi  [C3] M. H. Bergen et al., "Retroreflective imaging systems for enhanced optical biosensing," in Proc. SPIE Photonics Europe, 2014, pp. 912914. [C4] C. M. Collier, B. Born, X. Jin, T. M. Westgate, M. Bethune-Waddell, M. H. Bergen, and J. F. Holzman, "Transient mobility and photoconductive terahertz emission with GaP," in SPIE Optical Engineering+ Applications, 2013, pp. 884616. 
vii  Table of Contents  Abstract ......................................................................................................................................... iii Preface ........................................................................................................................................... iv Table of Contents ........................................................................................................................ vii List of Tables ..................................................................................................................................x List of Figures ............................................................................................................................... xi List of Symbols ......................................................................................................................... xviii List of Abbreviations ................................................................................................................. xxi Acknowledgements ................................................................................................................... xxii Dedication ................................................................................................................................. xxiv Chapter 1: Introduction ................................................................................................................1 1.1 Indoor Positioning Methods ............................................................................................ 2 1.1.1 Radio Frequency Systems ........................................................................................... 2 1.1.2 Optical Wireless Systems ........................................................................................... 3 1.2 Angle-of-Arrival Based Optical Wireless Positioning ................................................... 5 1.3 Position Error in Angle-of-Arrival Based Optical Wireless Positioning Systems ......... 7 1.3.1 Optical Beacon Geometry ........................................................................................... 8 1.3.2 Optical Receiver Design ........................................................................................... 10 1.4 Scope of this dissertation .............................................................................................. 11 Chapter 2: Dilution of Precision .................................................................................................14 2.1 General Analyses of Dilution of Precision ................................................................... 14 2.2 Single-cell Analyses of Dilution of Precision ............................................................... 20 viii  2.3 Multi-cell Analyses of Dilution of Precision ................................................................ 30 2.4 Summary ....................................................................................................................... 31 Chapter 3: Optical Receiver Design ...........................................................................................33 3.1 Optical Receiver Architectures ..................................................................................... 33 3.2 Optical Receiver Design ............................................................................................... 
37 3.2.1 Image Sensor ............................................................................................................. 37 3.2.2 Microlens .................................................................................................................. 38 3.3 Optical Receiver Performance ...................................................................................... 42 3.3.1 Performance for Angle-of-Arrival Estimation .......................................................... 43 3.3.1.1 Random Azimuthal and Polar Errors ................................................................ 47 3.3.1.1.1 Random Azimuthal Error ............................................................................ 49 3.3.1.1.2 Random Polar Error .................................................................................... 51 3.3.1.2 Systematic Azimuthal and Polar Errors ............................................................ 52 3.3.1.2.1 Systematic Azimuthal Error ........................................................................ 52 3.3.1.2.2 Systematic Polar Error ................................................................................ 53 3.3.1.3 AOA Measurement Error Results ..................................................................... 60 3.3.2 Performance for Optical Beacon Identification ........................................................ 65 3.3.2.1 Frequency-based Identification ......................................................................... 66 3.3.2.2 Colour-frequency-based Identification ............................................................. 68 3.4 Summary ....................................................................................................................... 78 Chapter 4: Positioning Results ...................................................................................................79 Chapter 5: Conclusion .................................................................................................................87 5.1 Summary for our analyses and conclusions .................................................................. 87 ix  5.2 Recommendations for future work ............................................................................... 91 Bibliography .................................................................................................................................93 Appendices ....................................................................................................................................98 Appendix A - Least Squares Positioning Algorithm ................................................................ 98 Appendix B - Colour Interference Results .............................................................................. 100 Appendix C - AOA Positioning Algorithm ............................................................................ 102  x  List of Tables  Table 2.1 Summary of the results from Fig. 2.2.  The upper table summarizes mean DOP, E[DOP(x, y, z)], and mean position error, E[p(x, y, z)], for a beacon geometry side length of a = 100 cm and an AOA error of AOA = 1°. The lower table summarizes the DOP standard deviation, STD[DOP(x, y, z)], and position error standard deviation STD[p(x, y, z)], also for a beacon geometry side length of a = 100 cm and a typical AOA error of AOA = 1°. ............................................................................. 25 Table 3.1 Summary of optical receiver architecture properties. 
................................................... 36 Table 3.2 Summary of colour interference ratios. The rows correspond to the activated colour (red, green, and blue) components of the RGB LED providing illumination. The columns correspond to the colour interference ratio, which is the normalized pixel signal amplitudes of the RGB (red, green, and blue) pixels in the illuminated image sensor. The results shown are for an illuminating intensity of 0.5 W/m2 (the minimum acceptable value) at an exposure rate of approximately 5% of the maximum. The proposed system applies modulation only to the red and blue LEDs, which leads to colour interference ratios with well-defined recognition of the desired colour and rejection of undesired colours by the red and blue pixels. ....................... 75 Table 3.3 Identifier frequency look-up table. Combinations of frequencies, which are modulated onto the red and blue components of the RGB LEDs (which act as optical beacons), are defined to uniquely identify each optical beacon (by its optical beacon number). .................................................................................................................................... 78 xi  List of Figures  Figure 1.1 An AOA system schematic is shown with a single optical beacon and optical receiver. The optical beacon is shown as the white circle and is located at (xi, yi, zi). The optical receiver is shown as the black dot at the centre of the coordinate frame at (x, y, z). .............................................................................................................................. 6 Figure 1.2 Simple AOA-based OW positioning system is shown with two optical beacons. Lines-of-position are drawn from each optical beacon to the optical receiver, which we assume to be the laptop. The position of the optical receiver, indicated by the black dot, is at the intersection of the two LOP’s at (x, y, z). ................................................ 7 Figure 1.3 Practical AOA-based OW positioning system is shown with two optical beacons and an optical receiver, which we take to be the laptop. Lines-of-position, now spread into cones due to AOA error, intersect over an overlap volume, shown by red highlighting, as opposed to a single point. The true position of the optical receiver, indicated by the black dot, is at (x, y, z). An AOA-based OW positioning system with small DOP is shown in (a), while an AOA-based OW positioning system with large DOP is shown in (b). .................................................................................................. 10 Figure 2.1 The (a) square, and (b) rhombus, optical beacon geometries are shown with their associated scaling factors. The portion of each optical beacon used in a single cell is denoted by the fraction of blue area filled in each circle, where circles denote optical beacons. ...................................................................................................................... 20 Figure 2.2 This figure shows DOP contours for a 60° FOV optical receiver using (a) square, and (b) rhombus, optical beacon geometries, and a 120° FOV optical receiver using (c) xii  square, and (d) rhombus, optical beacon geometries. This plot assumes a square cell side-length of a = 100 cm. The left axis shows DOP in units of cm/° while the right axis shows position error in units of cm after assuming an AOA error of AOA = 1°. One colour bar is used for both (a) and (b) while (c) and (d) use a second colour bar. 
Optical beacons are indicated in each plot by the large circles. ................................ 23 Figure 2.3 Representative diagrams of working areas with various h/a ratios are shown. Each box represents a working area with the grey bottom plane indicating the plane in which the optical receiver operates and the circles at the top of the box representing optical beacons. The parameters h and a are shown along with the corresponding h/a ratio for each working area. ................................................................................................ 27 Figure 2.4 The (a) normalized mean DOP, E[DOP/a], and mean position error, E[P], and (b) normalized DOP standard deviation, STD[DOP/a], and position error standard deviation, STD[P], are plotted on the left and right axes, respectively, as a function of h/a for the square and rhombus optical beacon geometries. .................................. 28 Figure 2.5 The (a) normalized mean DOP, E[DOP/a], and mean position error, E[P], and (b) normalized DOP standard deviation, STD[DOP/a], and position error standard deviation, STD[P], are plotted on the left and right axes, respectively, as a function of the FOV for the square and rhombus optical beacon geometries. ......................... 30 Figure 2.6 The normalized mean DOP, E[DOP/a], is shown as solid circles as a function of the number of optical beacons along each side, N. The h/a ratio is 1. Curve fitting with E[DOP/a] = 1/N is shown as a dotted line. The inset shows a configuration of optical beacons with N beacons along each side. .................................................................. 31 xiii  Figure 3.1: Diagram showing incident light rays (in red) focused down by a hemispherical microlens onto an image sensor. An SEM image of the OV7720 sensor pixels is shown in the inset. The image displays four pixels in red as the larger effective pixels that are used together at higher frame rates. .............................................................. 38 Figure 3.2 Schematic of the dispensed microlens on a glass coverslip is shown. The chief ray travelling through the system is denoted by the red arrow. The chief ray exits the microlens normal to its curved back surface. ............................................................. 40 Figure 3.3 Scanning electron microscope image showing a representative hemispherical dispensed microlens with a diameter of 500 m. ...................................................... 42 Figure 3.4 Representative diagram showing azimuthal and polar angles on an image sensor. A beamspot illuminates the pixel indicated in blue whose chief coordinates are (xIS, yIS). The dashed circled outlines the perimeter of the microlens. .............................. 44 Figure 3.5 Visual depictions of random and systematic error are shown. The figures show (a) a grid of points, (b) an image of the grid of points, subject to random azimuthal error that increases in inverse-proportion to the distance from the origin and random polar error that is small and constant, and (c) an image of the grid of points, subject to negligible systematic azimuthal error and finite systematic polar error that increases in proportion to the distance from the origin. ............................................................ 46 Figure 3.6 Ray trace results are shown for incident light at polar angles of (a) 0° and (b) 50°. The light rays are shown in red passing through the systems. The vertical axis shows the distance transverse to the OA. 
The horizontal axis shows the distance along the OA. The insets show transverse profiles of the focused rays at a distance of 1200 m along the OA (with respect to the planar surface of the microlens). ......................... 49 xiv  Figure 3.7 Full camera architecture schematic is shown including the glass coverslip, microlens, image sensor glass, air gap, and image sensor OV7720 image sensor. The chief ray travelling through the system is denoted by the red arrow. Its radial displacement for each stage is denoted by 1,2,3. ................................................................................... 54 Figure 3.8 Radial distortion on an 800-m-diameter hemispherical microlens with a refractive index of n = 1.54 and focal length of 1140 m is shown. In (a), the chief radius, IS, and linear approximation of radial displacement on the image sensor are shown versus the polar angle, . In (b), the difference between the chief radius, IS, and linear approximation of radial displacement on the image sensor is shown versus the polar angle, . ............................................................................................................. 59 Figure 3.9 Azimuthal and polar angle error results are shown versus azimuthal and polar angles for the optical receiver. In (a), the azimuthal error is plotted versus the azimuthal angle,  In (b), the azimuthal error is plotted versus the polar angle, . In (c), the polar error is plotted versus the azimuthal angle, . In (d), the polar error is plotted versus the polar angle, . In (a), exceedingly large errors that occur at small polar angles are removed to better show the remaining data points, i.e., the results are shown only for data collected at  > 15°.................................................................... 61 Figure 3.10 Azimuthal and polar error results versus azimuthal and polar angles are shown for the LG Nexus 5 smartphone’s front facing camera. In (a), the azimuthal error is plotted versus the azimuthal angle,  In (b), the azimuthal error is plotted versus the polar angle, . In (c), the polar error is plotted versus the azimuthal angle, . In (d), the polar error is plotted versus the polar angle, . .................................................... 64 xv  Figure 3.11 Normalized power spectral density is shown as a function of wavelength for the red (in red), green (in green), and blue (in blue) components of the white light LED. The results are acquired by way of a Thorlabs CCS100 Spectrometer. The RGB components of the white light LED (Cree PLCC6-CLV6A) are individually activated. .................................................................................................................... 70 Figure 3.12 Normalized responsivity curves are plotted as a function of wavelength for the red (in red), green (in green), and blue (in blue) pixels of the Thorlabs DCC1645C (Silicon CMOS) image sensor—which are similar to those of the OV7720 (Silicon CMOS) image sensor. The data is reproduced from the DCC1645C CMOS image sensor data sheet......................................................................................................... 72 Figure 3.13 Results are shown for activation of the red component of the RGB LED, yielding red illumination on the image sensor at varying intensities. The results show the (a) normalized pixel signal amplitude (of the RGB pixels) and (b) corresponding colour interference ratios for the red illumination. 
............................................................... 75 Figure 3.14 Optical beacon identification results are shown for a representative optical beacon. The red line corresponds to the frequencies received by the red pixels, while the blue line corresponds to the frequencies received by the blue pixels. The amplitudes of both colour spectra are normalized to 1. The inset shows the nine beamspots as imaged by the optical receiver with the beamspot corresponding to the data circled in red............................................................................................................................... 77 Figure 4.1 Optical wireless positioning system setup for our experiment is shown. A 3 × 3 multi-cell square optical beacon geometry containing nine optical beacons is used. The dimensions are a = 100 cm and h = 110 cm. ............................................................. 80 xvi  Figure 4.2 Dilution-of-precision and position error contours are shown for our OW positioning system in the (x, y, z = 0) plane. The OW positioning system has a 3 × 3 multi-cell square optical beacon geometry with a side length of a = 100 cm and a height of h = 110 cm. The values for DOP are shown on the left axis, in unis of cm/°, while the values for position error are shown on the right axis, in units of cm. ........................ 81 Figure 4.3 Experimental results are plotted for the estimated and true positions of the optical receiver in the transverse dimensions across the (x, y, z = 0) plane. Position estimates are taken in the plane beneath the optical beacons in our OW positioning system, spaced according to 25 cm steps. The estimated and true positions are plotted using blue diamonds and orange circles, respectively. The figure shows outlines of the 3 × 3 geometry of optical beacons indicated by the black circles. ................................... 82 Figure 4.4 Experimental results are shown for the estimated and true positions of the optical receiver in the vertical dimension across the (x, y, z = 0) plane for all three sets of data. Position estimates are taken in the plane beneath the optical beacons in our OW positioning system, spaced according to 25 cm steps. The estimated and true positions are plotted using blue diamonds and orange circles, respectively. The figure shows outlines of the 3 × 3 geometry of optical beacons indicated by the black circles. The results are shown for (a) the uncalibrated optical receiver, with a slight bias due to misalignment between the microlens and image sensor, and (b) the calibrated optical receiver, with the bias removed. .................................................... 85 Figure 4.5 Experimental results are shown for the 3-D position error of the optical receiver. Position estimates are taken in the plane beneath the optical beacons in our OW xvii  positioning system, spaced according to 25 cm steps. The figure shows outlines of the 3 × 3 geometry of optical beacons indicated by the black circles. ....................... 86 Figure 5.1 Flowchart showing the full design process for an AOA-based OW positioning system. .................................................................................................................................... 91 Figure B.1 Pixel response to green illumination is shown. The pixel response normalized to the saturation level on the image sensor is shown in (a), while the ratio between the pixel response of green and red or blue is shown in (b). 
Since (b) is normalized against green, the ratio or green with itself is 1. ................................................................... 100 Figure B.2 Pixel response to blue illumination is shown. The pixel response normalized to the saturation level on the image sensor is shown in (a), while the ratio between the pixel response of blue and red or green is shown in (b). Since (b) is normalized against red, the ratio of blue with itself is 1. ............................................................................... 101  xviii  List of Symbols  Symbols Definitions  Microlens contact angle xIS, yIS Spatial quantization error in xIS and yIS dimensions  Azimuthal angle  Polar angle i Azimuthal angle of the ith optical beacon i Polar angle of the ith optical beacon  Random azimuthal error  Random polar error IS Estimated polar angle IS Estimated azimuthal angle int Internal polar angle gl Polar angle in glass covering image sensor IS Chief radius 1,2,3 Chief ray radial displacement for steps 1, 2, and 3 p(x, y, z) Position error at (x, y, z) AOA Angle-of-arrival error a Square optical beacon geometry side length C Conversion from radians to degrees xix  d Distance from the centre of the microlens to the image sensor glass Dl Microlens diameter DOP(x, y, z) Dilution of precision at (x, y, z) DOP/a Normalized dilution of precision E[•] Expectation value (mean) fl Microlens focal length fNyquist Nyquist frequency fflicker Flicker frequency f1,2 Identifier frequencies 1 and 2 g Thickness of air gap between image sensor glass and pixels h Height between the optical beacon geometry and the measurement plane h/a Ratio between h and a H Full design matrix H Azimuthal design matrix H Polar design matrix i Optical beacon number k Linear scaling factor lpixel Pixel side length n Refractive index N Number of optical beacons along each side of the square optical beacon geometry xx  ri Distance between the ith optical beacon and the optical receiver on the (x, y) plane Ri Distance between the ith optical beacon and the optical receiver Rl Microlens radius Rf Front radius of curvature of a lens Rb Back radius of curvature of a lens sl Microlens sag STD[•] Standard deviation t Thickness of image sensor glass (x, y, z) Optical receiver coordinates (xi, yi, zi) Optical beacon coordinates (xIS, yIS) Chief coordinates  xxi  List of Abbreviations  Abbreviations Definitions 3-D Three-Dimensional AOA Angle-of-Arrival DOP Dilution of Precision FFT Fast Fourier Transform FOV Field-of-View GPS Global Positioning System LED Light Emitting Diode LOP Line-of-Position LOS Line-of-Sight LS Least-Squares OA Optical Axis OW Optical Wireless RF Radio Frequency RB Red, Blue RGB Red, Green, Blue RSS Received Signal Strength SEM Scanning Electron Microscope TDOA Time-Difference-of-Arrival TOA Time-of-Arrival xxii  Acknowledgements I would like to take this opportunity to thank the many individuals and organisations whose support and encouragement made this thesis possible.  First, I would like to acknowledge the endless support and technical assistance of my supervisor Dr. Jonathan Holzman. His tireless efforts assisting me in all stages of this work were instrumental in bringing it to fruition. I have grown substantially as a professional and productive academic under the guidance of Dr. Holzman. I would like to thank Dr. Richard Klukas and Dr. Thomas Johnson for serving on my committee. I would also like to thank Dr. 
Rudolf Seethaler for serving as the university examiner on my committee.

Many experiments carried out in this thesis were made possible by the technical support provided by Marc Nadeau, Tim Giesbrecht, Emily Zhang, Aria Fani, Durwin Bossy, and David Arkinstall. Next, I would like to acknowledge the support and contributions of everyone in the Integrated Optics Laboratory (IOL). I would foremost like to thank Xian Jin for his insight into optical wireless systems and his assistance in device fabrication and experiment design. I would like to acknowledge Ahmed Arafa for his technical support with my simulations. I would like to thank Naomi Fredeen and Hugo Chaves for their contributions to the experimental setup and execution. I would also like to thank Christopher M. Collier, Brandon Born, Mitch Westgate, and Blago Hristovski for their personal and technical support throughout my time in the IOL. Finally, I would like to recognize everyone who has been a part of the IOL throughout my undergraduate and graduate research: Jackie Nichols, Max Bethune-Waddell, Emily Landry, Daniel Geurrero, Mike Bernier, Blake Veerman, Jonah Schwab, Jamie Garbowski, Ilija Hristovski, Simon Geoffroy-Gagnon, Trevor Stirling, and Adebola Adebowale.

I would like to recognize the financial support of the Canadian Natural Sciences and Engineering Research Council and the UBC School of Engineering.

Finally, I would like to acknowledge the endless patience and encouragement of my family. I would like to thank my parents, Donna and Cliff, for their financial and moral support. I would also like to thank my mother-in-law and father-in-law, Jacquie and Andy Weatherson, for their encouragement and support. Finally, I would like to thank my wife Lisa for her patience, love, and support, without which this work would have been impossible.

Dedication

For my parents, Cliff and Donna Bergen

Chapter 1: Introduction

Being able to determine someone's exact position on earth has been taken for granted now that global navigation satellite system technologies, such as the global positioning system (GPS), have become so widespread. There is a multitude of applications for tracking the precise location of an object, and new applications arise every day. While GPS has ushered in this era by providing cheap and accurate global positioning, there are still a few environments where GPS is ineffective. These environments include the interiors of buildings and other settings, such as urban canyons, where satellite signals are obstructed. While GPS augmentation systems, such as data fusion with inertial navigation systems, are able to provide a solution when GPS is intermittent, in many environments a new type of positioning system is required. Research has gone into many methods for indoor positioning. These methods include radio frequency (RF) localization via either ultra wide-band systems [1, 2] or Wi-Fi networks [3, 4], ultrasonic positioning systems [5, 6], and machine vision [7]. Moreover, optical wireless (OW) positioning has emerged as a promising (accurate and low cost) alternative.

Optical wireless technologies are on the verge of moving from the realm of research into wide-scale commercialization. These technologies can be broadly broken down into two fields: OW communications and OW positioning. The primary field, OW communications, can be further separated into point-to-point communications and distributed communications.
Much of the commercialization in the field of OW technology has been based around point-to-point communications [8, 9]; however, groups are beginning to explore distributed OW communications for indoor networks [10] and outdoor networks [11] by creating multi-transmitter systems. The other field, OW positioning, aims to work in tandem with distributed OW communications networks to provide accurate indoor positioning.

1.1 Indoor Positioning Methods

The most common systems for indoor localization today are based on RF and OW positioning. These positioning technologies can be separated broadly into three systems: received signal strength (RSS) [12–15], time-of-arrival (TOA) and time-difference-of-arrival (TDOA) [16–18], and angle-of-arrival (AOA) [19–23]. The vast majority of RF and OW positioning systems employ some type of transmitter network along with a mobile receiver that interfaces with the network to determine its position.

1.1.1 Radio Frequency Systems

Radio frequency positioning systems typically take the form of the more traditional Wi-Fi-based systems or the newer and more accurate ultra wide-band systems. Wi-Fi systems have the advantage of being able to operate through walls, requiring fewer transmitters to cover a given area. These systems are popular due to the proliferation of Wi-Fi-enabled devices on the market; however, there are several significant drawbacks to this method. Most Wi-Fi-based RF systems operate based on the strength of the signal coming from an array of transmitters, similar to RSS-based OW positioning, and are therefore susceptible to attenuation from objects within their working area. Static objects can be calibrated out using a technique called fingerprinting [3]; however, moving objects such as people introduce error, and the rearrangement of static objects requires a full system recalibration. As such, Wi-Fi-based systems are typically only capable of metre-level accuracy [3, 24]. The other major RF-based positioning system, ultra wide-band, consists of an array of RF transmitters and typically employs RSS or TOA/TDOA to carry out positioning. These systems have progressed significantly in recent years and are capable of centimetre-level accuracy when augmented with filtering techniques [2]. The drawback of these systems is the cost associated with the deployment of the transmitter array and the complex data-fusion methods.

1.1.2 Optical Wireless Systems

Like RF-based systems, OW positioning systems are typically based on a network of transmitters and a mobile optical receiver. The major difference between RF and OW positioning technologies is the line-of-sight (LOS) operation. Because OW positioning systems require LOS between the transmitters and the receiver, more transmitters are required to cover the same working area than in RF systems. On the other hand, the effects of environmental factors, such as obstacles in the working area, are negligible so long as LOS operation to a few transmitters is maintained. This LOS characteristic contributes to the high positioning accuracies seen in OW positioning systems. These accuracies are on the order of centimetres [19, 22].

The most common type of OW positioning system is an RSS-based system. An RSS-based OW positioning system works analogously to an RSS-based RF positioning system, where each transmitter emits a signal with a known power level and the optical receiver measures the power level from each transmitter at its location.
The attenuation in power levels indicates the distance between the transmitter and the optical receiver. By knowing these distances and the locations of all the transmitters, the optical receiver is able to carry out trilateration to determine its position. The typical positioning accuracy for this type of OW positioning system is on the order of centimetres [13–15]. It should be noted that, to obtain these accuracies, the optical receiver must also simulate the Lambertian radiation patterns of the optical beacons, which introduces considerable complexity in these systems [14]. While this challenge can be overcome using complex modelling methods, there is another fundamental challenge with RSS-based OW positioning that cannot be modelled: any fluctuations in the output power from each transmitter, typically from age or partial illumination due to obstructions, introduce considerable errors [25].

The second major type of OW positioning system is TOA- or TDOA-based. In this type of system, whose RF equivalent is used in GPS, all transmitters send out precisely synchronized signals and the optical receiver measures the time of flight for each signal based on the speed of light, giving the distance between itself and each transmitter. It then employs the same trilateration method used in RSS systems to determine its position. The typical positioning accuracy for this type of OW positioning system is on the order of centimetres [16] to millimetres [18]. The challenge with this type of system arises from the fact that all transmitters and receivers must be precisely synchronized. The satellites used for GPS all contain ultra-stable atomic clocks that are corrected daily [26]; however, this kind of synchronization for an array of practical indoor optical transmitters is challenging and expensive. Without it, position error could potentially suffer dramatically [25].

The final major type of OW positioning system is AOA-based. An AOA-based OW positioning system uses an optical receiver capable of determining the angle of each transmitter in relation to itself. These relative angles are called AOAs. By measuring AOAs, and thus knowing the direction toward each transmitter, the optical receiver can carry out triangulation to determine its position. This method is advantageous due to its independence from transmitter power levels and its lack of synchronization requirements. One drawback to an AOA-based OW positioning system is that its positioning accuracy degrades as the distance between the transmitter and receiver increases. However, the short distances used in indoor positioning still allow for very accurate position determination in most applications [20, 22]. Based on the advantages of an AOA-based OW positioning system over the other two main types, this thesis focuses only on that subset of OW positioning.

1.2 Angle-of-Arrival Based Optical Wireless Positioning

Of the many OW positioning methods, we established in prior work that AOA-based OW positioning is superior [25], and it is the method investigated in this thesis. An AOA-based OW positioning system uses an array of ceiling-mounted transmitters, which we will refer to as optical beacons for the remainder of this thesis, and a mobile optical receiver that carries out positioning. The ceiling-mounted optical beacons are typically white-light LEDs.
The optical receiver must be capable of detecting not only the presence of optical beacons but also the relative angle between itself and each optical beacon. These relative angles are AOAs. The optical receiver breaks down each AOA into two components. The azimuthal angle, φ, is the angle about the vertical axis of the optical receiver. The polar angle, θ, is the angle down from the vertical axis above the optical receiver. Figure 1.1 shows a system with one optical beacon and an optical receiver.

Figure 1.1 An AOA system schematic is shown with a single optical beacon and optical receiver. The optical beacon is shown as the white circle and is located at (xi, yi, zi). The optical receiver is shown as the black dot at the centre of the coordinate frame at (x, y, z).

In this figure, the location of the optical receiver is denoted as (x, y, z) in the global coordinate frame, while the location of the ith optical beacon is denoted as (xi, yi, zi) in the global coordinate frame. It must be noted that each AOA measured at the optical receiver is in the body frame of the optical receiver. To simplify calculations, in this thesis we will assume that the global frame and optical receiver body frame are aligned, or that any mismatch in orientation has been resolved by the use of an inertial navigation system. Once the optical receiver has obtained an AOA in the body frame, it must determine which AOA corresponds to which optical beacon. Once AOAs have been matched to their corresponding optical beacons, vectors are drawn out from the optical beacons in the direction of the AOAs. Each line is called a line-of-position (LOP), and the point at which all the LOPs intersect is the location of the optical receiver. Figure 1.2 shows a simple AOA-based OW positioning system.

Figure 1.2 Simple AOA-based OW positioning system is shown with two optical beacons. Lines-of-position are drawn from each optical beacon to the optical receiver, which we assume to be the laptop. The position of the optical receiver, indicated by the black dot, is at the intersection of the two LOPs at (x, y, z).

1.3 Position Error in Angle-of-Arrival Based Optical Wireless Positioning Systems

The goal of any positioning system is to carry out positioning with minimal error, and an AOA-based OW positioning system is no different. An AOA-based OW positioning system can be broadly broken down into two parts: the optical beacons and the optical receiver. Both parts contribute to the overall error of the system, but in different ways. The arrangement of the optical beacons must be designed in such a way that an optical receiver using the system receives a useful combination of AOAs to carry out positioning. The optical receiver must be designed to receive these AOAs with minimal error or distortion, since any AOA error will be converted into position error by the positioning algorithm. Both of these aspects will be expanded on in the following work.

1.3.1 Optical Beacon Geometry

While one may think that all AOAs are created equal when it comes to AOA-based OW positioning systems, certain combinations of AOAs are advantageous in reducing the effects of random AOA measurement error on the computed position. This random AOA error is typically introduced in the optical receiver; however, a well-informed decision on optical beacon geometry can mitigate the effects of these random errors significantly.
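To make this point concrete, the short Monte Carlo sketch below (my own illustration, not material from the thesis) intersects two noisy LOPs in a simplified 2-D slice and compares the resulting position scatter at a location where the LOPs cross at a wide angle against a location where they are nearly parallel. The two-beacon layout, the 1° AOA error, and the two receiver locations are hypothetical values chosen only for illustration.

```python
# Minimal Monte Carlo sketch (not from the thesis): two ceiling beacons in a 2-D (x, z)
# slice, a fixed 1-degree AOA error, and two hypothetical receiver locations. Each trial
# perturbs the true AOAs, redraws the two LOPs, and records where they intersect.
import numpy as np

rng = np.random.default_rng(0)
beacons = np.array([[0.0, 2.5], [2.0, 2.5]])   # beacon (x, z) coordinates in metres
aoa_error_deg = 1.0                            # assumed random AOA error (1 sigma)

def intersection_scatter(receiver, n_trials=2000):
    """Intersect the two noisy LOPs drawn from the beacons toward the receiver."""
    true_angles = np.arctan2(receiver[1] - beacons[:, 1], receiver[0] - beacons[:, 0])
    estimates = np.empty((n_trials, 2))
    for k in range(n_trials):
        a1, a2 = true_angles + np.deg2rad(aoa_error_deg) * rng.standard_normal(2)
        d1 = np.array([np.cos(a1), np.sin(a1)])
        d2 = np.array([np.cos(a2), np.sin(a2)])
        # Solve beacons[0] + t1*d1 = beacons[1] + t2*d2 for the crossing point.
        t1, _ = np.linalg.solve(np.column_stack([d1, -d2]), beacons[1] - beacons[0])
        estimates[k] = beacons[0] + t1 * d1
    return estimates

for label, rx in [("wide crossing angle ", np.array([1.0, 0.0])),
                  ("nearly parallel LOPs", np.array([8.0, 0.0]))]:
    err = np.linalg.norm(intersection_scatter(rx) - rx, axis=1)
    print(f"{label}: RMS position error = {np.sqrt(np.mean(err**2)):.2f} m")
```

The nearly parallel case produces a far larger scatter for the same AOA error, which is precisely the geometric effect that the dilution-of-precision analysis of Chapter 2 quantifies.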
To visualize how AOA error relates to position error, we must recall that an AOA-based OW positioning system uses measured AOAs to draw LOPs, and the intersection of the LOPs is the optical receiver's position. In an ideal system, the intersection of these lines happens at a single point, which gives a single solution for the optical receiver's position. Figure 1.2 shows an ideal system where all of the LOPs overlap at a single point. In a practical system, these lines will not overlap at a single point, due to AOA error at the optical receiver. We can envision this by converting the AOA lines into AOA cones originating at their respective optical beacons. The angle of divergence of each AOA cone represents the magnitude of its AOA error, where a larger divergence indicates a larger AOA error. This transforms the overlap region from a single point into an overlap volume. Figure 1.3 shows a practical system where AOA error is visualized as cones and the receiver's position lies somewhere within a volume in three-dimensional (3-D) space. The role of the optical receiver positioning algorithm is to pick the most likely location of the optical receiver within the overlap volume. The total overlap volume represents system position error, where larger volumes correspond to larger position errors. The relationship between the overlap volume, corresponding to the position error, and the AOA error defines the dilution-of-precision (DOP). Dilution-of-precision is a function of the location of the optical receiver relative to the optical beacons in 3-D space, and, as system designers, we would like to minimize it by optimizing the geometry of the optical beacons. In other fields of study, such as those utilizing trilateration ranging systems, DOP also includes time in addition to 3-D space [26]. In those fields, DOP for 3-D space excluding time is termed geometric DOP. In this work we will simply refer to geometric DOP as DOP. Dilution-of-precision is typically minimized when the AOA cones intersect in a perpendicular manner, as shown in Fig. 1.3 (a). As AOA cones become parallel or antiparallel, as seen in Fig. 1.3 (b), DOP, and thus position error, suffers. Also, because the AOA cones diverge as they get farther from their respective optical beacons, it is desirable to operate fairly close to the optical beacons to reduce position error.

Figure 1.3 Practical AOA-based OW positioning system is shown with two optical beacons and an optical receiver, which we take to be the laptop. Lines-of-position, now spread into cones due to AOA error, intersect over an overlap volume, shown by red highlighting, as opposed to a single point. The true position of the optical receiver, indicated by the black dot, is at (x, y, z). An AOA-based OW positioning system with small DOP is shown in (a), while an AOA-based OW positioning system with large DOP is shown in (b).

The illustrations in Fig. 1.3 are for a system containing only two optical beacons, making it simple to visualize which geometries are optimal and which are not. In practical systems, there will be many optical beacons, so an algorithmic approach must be taken to fully capture the effects of DOP. These calculations and their results will be the focus of Chapter 2.

1.3.2 Optical Receiver Design

While DOP, as described in the previous section, affects the position error in the system, it is not the source.
The source of error in an AOA-based OW positioning system comes from the optical receiver's inability to perfectly discern AOAs. The errors can originate from poor optical components, quantization of measurements, or the inability to discern optical beacons. These AOA errors manifest themselves either as random error, typically associated with measurement quantization, or systematic error, typically associated with the design of the optical components themselves. Along with AOA errors, the optical receiver must be capable of reliably identifying optical beacons so that they can be used in the positioning algorithm. Unidentified optical beacons do not contribute to the overall position solution and thus are wasted. This means that careful consideration must be given to the method of optical beacon identification. There are multiple optical receiver architectures presented in the literature, including ones using orthogonal photodiodes [20, 27] and image sensor systems resembling a camera [21, 23]. Each architecture has its own advantages and disadvantages; therefore, the optimal architecture must be selected based on the considerations mentioned above. The design of the optical receiver, along with its associated sources of error, is discussed in Chapter 3.

1.4 Scope of this thesis

In this thesis, the analysis and design of an AOA-based OW positioning system using an imaging-based optical receiver architecture is presented. Recommendations are made pertaining to the design of both the optical beacon geometry and the optical receiver. Positioning results are given for an implementation of an AOA-based OW positioning system using the recommendations made in this thesis. The thesis is organized into the following five chapters.

Chapter 1 first introduces OW positioning and its applications, and then focuses on AOA-based OW positioning. A basic introduction to the operating principles and challenges of AOA-based OW positioning systems is given.

Chapter 2 focuses on optical beacon geometry considerations. The full derivation for DOP is presented in Section 2.1, along with metrics used to determine the quality of the DOP characteristics for an optical beacon geometry. In Section 2.2, two optical beacon geometries, the square and the rhombus, are analysed to determine which optical beacon geometry is superior. These two particular geometries were chosen due to their ability to be tessellated over a larger area and their equal number of optical beacons. The DOP analysis for both optical beacon geometries is carried out for various separations between the optical beacon geometry and the optical receiver. By doing the analysis in this way, insights into the required field-of-view (FOV) of the optical receiver can also be gained. Section 2.2 concludes with recommendations for the optical beacon geometry as well as the FOV of the optical receiver. Section 2.3 presents the DOP analysis for a square multi-cell, i.e., tessellated, optical beacon geometry to see the effects of increasing the density of optical beacons within an optical beacon geometry. Chapter 2 concludes by giving recommendations for the optical beacon geometry and the optical receiver FOV.

Chapter 3 presents the design and operation of the optical receiver. Section 3.1 presents two candidate architectures for the optical receiver in an AOA-based OW positioning system and selects the appropriate one. Section 3.2 selects the appropriate components to build the selected optical receiver architecture.
In Section 3.3, an in-depth analysis of the operation of the selected optical receiver architecture is presented. This section investigates, both theoretically and experimentally, the AOA error performance, FOV, and optical beacon identification performance of the selected optical receiver architecture. Chapter 3 concludes by summarizing the design and performance of the selected optical receiver architecture in Section 3.4.

Chapter 4 presents the results for an experimental AOA-based OW positioning system. This AOA-based OW positioning system was designed and constructed using the recommendations and specifications given in the previous two chapters. Both theoretical and experimental positioning results are given.

Chapter 5 is the conclusion of this thesis. It summarizes the findings and contributions of this thesis, as well as provides recommendations for future work.

Chapter 2: Dilution of Precision

The prior chapter introduced two factors affecting position error: AOA error and DOP. The AOA error is a metric that defines the performance of the optical receiver, while DOP is a metric that defines the effect of the optical beacon geometry. In other words, DOP acts as a weighting of the AOA error on the position error. Clearly, the desire to minimize position error demands a reduction in the AOA error and DOP. In the literature for OW positioning, most groups focus on the performance of the optical receiver [18, 22] without considering optical beacon geometry at all. Those who do consider the optical beacon geometry typically do not perform a rigorous analysis [17]. In fact, DOP for AOA-based systems is mentioned in only a few sources, including our own work [23, 28].

This chapter addresses the lack of understanding of DOP for OW positioning. The analyses are presented for cells of optical beacons arranged in two elementary geometries. Section 2.1 gives general analyses of DOP, for an arbitrary distribution of optical beacons. Section 2.2 presents single-cell analyses of DOP. Section 2.3 presents multi-cell analyses of DOP, for tessellated implementations of the cells. Section 2.4 summarizes the findings and draws some concluding remarks.

2.1 General Analyses of Dilution of Precision

Dilution-of-precision is a manifestation of the estimation process that is used to calculate the position of the optical receiver. The optical receiver is assumed to be mobile within the 3-D environment, with optical beacons that are fixed overhead. For AOA-based positioning, the optical receiver defines unit vectors, i.e., LOPs, between itself and all observed optical beacons. Each LOP is quantified by an AOA that is itself composed of two components: the azimuthal angle, φ, is the angle of rotation about the vertical axis of the optical receiver; the polar angle, θ, is the angle down from the vertical axis of the optical receiver. These azimuthal and polar angles are defined in the body frame, i.e., the coordinate axes attached to the optical receiver. The position of the optical receiver is then defined by way of triangulation in the global frame, i.e., the coordinate axes encompassing the indoor environment. The body frame and global frame are shown in Fig. 1.1. The triangulation process identifies the position of the optical receiver in the global frame (x, y, z) as the intersection point of all LOPs.
If the orientation of the body frame is known, with respect to the global frame, triangulation with two or more observed optical beacons will yield a unique position for the optical receiver. If the body frame orientation is unknown, with respect to the global frame, triangulation must be carried out with three or more observed optical beacons to yield a unique position for the optical receiver. Dilution of precision is a linear scaling factor that relates the AOA error, ΔAOA, to the position error, Δp(x, y, z) [28]. Thus, it can be stated that

\Delta p(x, y, z) = \mathrm{DOP}(x, y, z)\,\Delta_{\mathrm{AOA}} .                (1)

Here, the AOA error is assumed to be independent of the AOA and the optical receiver's position, while DOP(x, y, z) is a function of the optical receiver's position. The distribution for DOP(x, y, z) arises from the process used to carry out triangulation and estimate the optical receiver's position. For such a process, the optical receiver observes multiple optical beacons, with the AOA for the ith optical beacon being defined by the respective azimuthal angle,

\phi_i = \arctan\!\left(\frac{y_i - y}{x_i - x}\right) ,                (2)

and polar angle,

\theta_i = \arctan\!\left(\frac{r_i}{|z_i - z|}\right) ,                (3)

where

r_i = \sqrt{(x - x_i)^2 + (y - y_i)^2} .                (4)

In these expressions, the optical receiver is positioned at (x, y, z) and the ith optical beacon is located at (xi, yi, zi). The intermediate variable ri is simply the distance from the optical receiver to the spot directly below the optical beacon within the horizontal plane of the optical receiver. With these definitions, the distance between the optical beacon and optical receiver is

R_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} .                (5)

When the orientation of the optical receiver is known, equations (2) and (3) have three unknowns, i.e., x, y, and z, and these unknowns can be solved for by triangulating with two optical beacons, i.e., solving the above two equations with i = 1 and 2. When the orientation of the optical receiver is unknown, equations (2) and (3) must be modified to include three Euler angles (i.e., yaw, pitch, and roll) to represent the orientation of the optical receiver within the global frame. This increases the number of unknowns to six, i.e., x, y, z, yaw, pitch, and roll. These unknowns can be solved for by triangulating with three or more optical beacons, i.e., solving the above two equations with i = 1, 2, and 3.

The process of triangulation, and estimation of the optical receiver's position, is hindered by the nonlinear form of equations (2) and (3). To overcome this obstacle, the nonlinear equations are linearized and solved by way of an iterative least-squares (LS) algorithm. The LS algorithm requires the number of equations, being twice the number of observed optical beacons, to be equal to or greater than the number of unknowns, i.e., the degrees of freedom in the system. By the very nature of the LS algorithm, the observation of optical beacons beyond the minimum number will typically yield lower position errors, and improved positioning performance. Thus, it is desirable to carry out positioning with as many optical beacons as possible. In this thesis, we will assume that the orientation of the optical receiver is known, so that the minimum number of observed optical beacons is two and the number of unknowns is three. For more information on the LS algorithm, see Appendix A. When solving a system of equations using the LS algorithm, a design matrix is created using the system's equations.
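Before the design matrix is assembled, the short sketch below gives a numerical check of equations (2) through (5). It is my own illustration rather than code from the thesis; the receiver and beacon coordinates are hypothetical, and arctan2 is used in place of the plain arctangent of equation (2) so that the azimuthal quadrant is resolved automatically.

```python
# A small numerical check of equations (2)-(5). This is a sketch, not code from the thesis;
# the receiver and beacon coordinates are hypothetical, and arctan2 replaces the plain
# arctan of equation (2) so the azimuthal quadrant is handled automatically.
import numpy as np

def aoa(receiver, beacon):
    """Return the azimuthal angle phi_i, polar angle theta_i, and ranges r_i and R_i."""
    x, y, z = receiver
    xi, yi, zi = beacon
    r_i = np.hypot(x - xi, y - yi)                            # equation (4)
    R_i = np.sqrt((x - xi)**2 + (y - yi)**2 + (z - zi)**2)    # equation (5)
    phi_i = np.arctan2(yi - y, xi - x)                        # equation (2)
    theta_i = np.arctan2(r_i, abs(zi - z))                    # equation (3)
    return np.degrees(phi_i), np.degrees(theta_i), r_i, R_i

# Receiver at the origin of the measurement plane, beacon on a ceiling 2.5 m overhead.
phi, theta, r, R = aoa((0.0, 0.0, 0.0), (1.0, 1.0, 2.5))
print(f"phi = {phi:.1f} deg, theta = {theta:.1f} deg, r = {r:.3f} m, R = {R:.3f} m")
```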
This design matrix is formed by taking the partial derivatives of the positioning equations, (2) and (3), with respect to each of the three coordinates in the global frame, i.e., x, y, and z. In doing so, we obtain a set of three partial derivative expressions per positioning equation for each observed optical beacon. Each set of three partial derivative expressions forms a row in the design matrix. The structure of the design matrix is formed by creating a design matrix for the azimuthal angle of each optical beacon, H_φ, and augmenting it with a design matrix for the polar angle of each optical beacon, H_θ. The partial derivative matrix for the azimuthal angle, H_φ, is defined by

H_\phi = \begin{bmatrix} \partial\phi_1/\partial x & \partial\phi_1/\partial y & \partial\phi_1/\partial z \\ \vdots & \vdots & \vdots \\ \partial\phi_i/\partial x & \partial\phi_i/\partial y & \partial\phi_i/\partial z \\ \vdots & \vdots & \vdots \\ \partial\phi_n/\partial x & \partial\phi_n/\partial y & \partial\phi_n/\partial z \end{bmatrix},    (6)

while the partial derivative matrix for the polar angle, H_θ, is defined by

H_\theta = \begin{bmatrix} \partial\theta_1/\partial x & \partial\theta_1/\partial y & \partial\theta_1/\partial z \\ \vdots & \vdots & \vdots \\ \partial\theta_i/\partial x & \partial\theta_i/\partial y & \partial\theta_i/\partial z \\ \vdots & \vdots & \vdots \\ \partial\theta_n/\partial x & \partial\theta_n/\partial y & \partial\theta_n/\partial z \end{bmatrix}.    (7)

Augmenting these two matrices together gives the complete design matrix that is used to calculate DOP. Evaluating the partial derivatives of (2) and (3), this matrix is

H = \begin{bmatrix} \dfrac{y_1 - y}{r_1^2} & -\dfrac{x_1 - x}{r_1^2} & 0 \\ \vdots & \vdots & \vdots \\ \dfrac{y_n - y}{r_n^2} & -\dfrac{x_n - x}{r_n^2} & 0 \\ \dfrac{|z_1 - z|(x - x_1)}{r_1 R_1^2} & \dfrac{|z_1 - z|(y - y_1)}{r_1 R_1^2} & \dfrac{r_1}{R_1^2} \\ \vdots & \vdots & \vdots \\ \dfrac{|z_n - z|(x - x_n)}{r_n R_n^2} & \dfrac{|z_n - z|(y - y_n)}{r_n R_n^2} & \dfrac{r_n}{R_n^2} \end{bmatrix}.    (8)

The complete design matrix is now manipulated to calculate the DOP. Assuming that the covariance values of all the measurements are equal, i.e., all of the optical beacons have the same AOA error, the DOP is

\mathrm{DOP}(x, y, z) = \sqrt{\mathrm{tr}\!\left[\left(H^{\mathrm{T}} H\right)^{-1}\right]} = \frac{\sigma_{\mathrm{p}}(x, y, z)}{\sigma_{\mathrm{AOA}}},    (9)

where tr[·] is the trace operator and [·]^T is the transpose operator [28]. Note that equation (9) defines the DOP as a scalar quantity at any given position in the 3-D environment. The DOP distribution that is formed is referred to as a DOP contour in this work. The DOP contour is calculated for optical beacons that are fixed overhead. The optical receiver is free to move across the area beneath the optical beacons. By setting the height of the optical receiver at z = 0 and defining the separation between the optical beacons and the plane in which positioning is to be carried out as h, we have the optical beacons located at z = h. The x and y coordinates of the optical receiver are rastered across the horizontal plane (x, y, z = 0), and the DOP is calculated at each coordinate.

Two metrics are used in this work to quantify statistics of the DOP contours. The first metric is the mean, or expectation value, of the DOP contour, and it is denoted as E[DOP(x, y, z)]. A low value for this mean indicates that, on average, the DOP and position error across the contour are low. The second metric is the standard deviation of the DOP contour, STD[DOP(x, y, z)]. This metric quantifies the non-uniformity of the DOP across the contour. It is desirable to have this standard deviation be low, so that there is consistent positioning performance across the contour. Given that the position error, σ_p, is the product of DOP and the AOA error, σ_AOA, metrics can be defined for the mean of the position error contour, E[σ_p(x, y, z)], and the standard deviation of the position error contour, STD[σ_p(x, y, z)], in analogy to those of DOP.
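As a numerical illustration of equations (2) through (9), the following sketch (Python with NumPy; the function name is chosen here for illustration) assembles the design matrix for a set of overhead optical beacons and evaluates the DOP at a single receiver position. It assumes a known receiver orientation and a receiver that is not directly beneath any beacon, and it converts the result to centimetres per degree at the end.

```python
import numpy as np

def dop(receiver, beacons):
    """Dilution of precision at one receiver position for a set of overhead
    beacons, following (2)-(9).  Positions are in consistent length units;
    the return value is in length units per radian of AOA error."""
    x, y, z = receiver
    rows = []
    for xb, yb, zb in beacons:
        r = np.hypot(xb - x, yb - y)                        # equation (4)
        R = np.sqrt(r**2 + (zb - z)**2)                     # equation (5)
        # Row of partial derivatives of the azimuthal angle (2).
        rows.append([(yb - y) / r**2, -(xb - x) / r**2, 0.0])
        # Row of partial derivatives of the polar angle (3).
        rows.append([abs(zb - z) * (x - xb) / (r * R**2),
                     abs(zb - z) * (y - yb) / (r * R**2),
                     r / R**2])
    H = np.array(rows)                                      # design matrix (8)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))  # equation (9)

# Example: square cell of side a = 100 cm at the 120-degree-FOV height
# h = 100*sqrt(2/3) cm, evaluated at the centre of the cell floor.
a = 100.0
h = a * np.sqrt(2.0 / 3.0)
beacons = [(0.0, 0.0, h), (a, 0.0, h), (0.0, a, h), (a, a, h)]
print(dop((a / 2, a / 2, 0.0), beacons) * np.pi / 180.0)    # ~1.8 cm per degree
```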
In the following sections, we investigate DOP for optical beacons arranged in cells having two elementary geometries: the square and the rhombus. These cells are chosen because their geometries allow them to be tessellated as arrays to cover a larger area, if desired. Section 2.2 presents single-cell analyses of DOP, while Section 2.3 presents multi-cell analyses of DOP.

2.2 Single-cell Analyses of Dilution of Precision

In this subsection, we analyse the DOP and position error contours for square and rhombus cells. For reasons of practicality, the area density of optical beacons in each cell, and therefore the cost of installation, is fixed in the following analyses to ensure a fair comparison. The area density of optical beacons in a cell is defined by the ratio of the number of beacons in the cell, assuming that it is tessellated, to the area of the cell. This definition is illustrated in Fig. 2.1 for an area density of one optical beacon over an area of a². The side-length of the square cell is a, and the side-length of the rhombus cell is √2·3^(−1/4)·a. For the square, four quarters of the beacons yield a single optical beacon over the area of a². For the rhombus, two sixths and two thirds of the optical beacons yield a single optical beacon over the area of a².

Figure 2.1 The (a) square, and (b) rhombus, optical beacon geometries are shown with their associated scaling factors. The portion of each optical beacon used in a single cell is denoted by the fraction of blue area filled in each circle, where circles denote optical beacons.

For additional reasons of practicality, the FOV of the optical receiver is also considered in the following analyses. The term FOV will be used throughout this work to refer to the angular FOV, in units of degrees, as opposed to the alternative definition being a solid angle FOV, in units of steradians. Analyses are carried out for a typical (narrow) FOV of 60° and a doubled (wide) FOV of 120°, as was done by Jin et al. [29]. We constrain the system to have a vertically-oriented optical receiver observe all of the optical beacons within the cell, i.e., the optical receiver has all optical beacons within its FOV for any position within the cell. Thus, for FOVs of 60° and 120°, with a = 100 cm, the heights of the unit cell above the measurement plane are h = 100·√6 cm and h = 100·√2/√3 cm for the square cell, and h = 100·√2·3^(3/4) cm and h = 100·√2·3^(−1/4) cm for the rhombus cell, respectively.

Applying a DOP analysis to the systems above gives the contours shown in Fig. 2.2. Figures 2.2 (a) and (b) show DOP contours for the square and rhombus, respectively, for the case where the FOV is 60°, with a = 100 cm, and h = 100·√6 cm and h = 100·√2·3^(3/4) cm, respectively. Figures 2.2 (c) and (d) show the square and rhombus, respectively, for the case where the FOV is 120°, with a = 100 cm, and h = 100·√2/√3 cm and h = 100·√2·3^(−1/4) cm, respectively. In every contour, DOP is shown on the left vertical axis in units of centimetres/degree and position error is shown on the right vertical axis in units of centimetres, for a typical AOA error of 1°.

Figure 2.2 This figure shows DOP contours for a 60° FOV optical receiver using (a) square, and (b) rhombus, optical beacon geometries, and a 120° FOV optical receiver using (c) square, and (d) rhombus, optical beacon geometries. The plots assume a square cell side-length of a = 100 cm. The left axis shows DOP in units of cm/° while the right axis shows position error in units of cm after assuming an AOA error of σ_AOA = 1°. One colour bar is used for both (a) and (b), while (c) and (d) use a second colour bar. Optical beacons are indicated in each plot by the large circles.
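As a quick check of the heights quoted above, the following sketch evaluates the constraint that the farthest optical beacon of the cell must remain within the receiver FOV from any point on the floor, which reduces to h/a = √2/tan(FOV/2) for the square cell and h/a = √2·3^(1/4)/tan(FOV/2) for the rhombus cell (the same conversion relations used later in this section). The function name is illustrative.

```python
import numpy as np

def cell_height_cm(a_cm, fov_deg, cell="square"):
    """Beacon height that keeps the farthest beacon of the cell within the
    receiver FOV from any point on the measurement plane."""
    factor = np.sqrt(2.0) * (3.0 ** 0.25 if cell == "rhombus" else 1.0)
    return a_cm * factor / np.tan(np.radians(fov_deg / 2.0))

for fov in (60.0, 120.0):
    print(fov,
          round(cell_height_cm(100.0, fov, "square"), 1),
          round(cell_height_cm(100.0, fov, "rhombus"), 1))
# FOV 60 deg:  ~245 cm (square, 100*sqrt(6)) and ~322 cm (rhombus)
# FOV 120 deg: ~81.6 cm (square)             and ~107.5 cm (rhombus)
```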
Figures 2.2 (a) and (b) show DOP and position error contours of the square and rhombus cells, given a FOV of 60°. Figure 2.2 (a) shows results for the square cell with a = 100 cm and a height of h = 100·√6 cm. Figure 2.2 (b) shows results for the rhombus cell with a = 100 cm and a height of h = 100·√2·3^(3/4) cm. The negative concavity of the DOP and position error contours is immediately apparent in both plots. The DOP reaches a peak in the middle of the cells due to the parallel nature of the LOPs near the centre. The LOPs are all relatively parallel due to the high aspect ratio of the system, i.e., the height of the system, h, is larger than the separation between the optical beacons, as seen in Fig. 1.3 (b). The parallel LOPs lead to high DOP in the centre, as discussed in Section 1.3.1. However, in moving away from the centre, many of the LOPs become increasingly perpendicular. Certain LOPs become increasingly parallel, but the effects of the increasingly perpendicular LOPs outweigh the effects of the parallel ones. The net effect decreases DOP and position error near the edges of the contours.

Figures 2.2 (c) and (d) show DOP and position error contours of the square and rhombus cells, given a FOV of 120°. Figure 2.2 (c) shows results for the square cell with a = 100 cm and a height of h = 100·√2/√3 cm. Figure 2.2 (d) shows results for the rhombus cell with a = 100 cm and a height of h = 100·√2·3^(−1/4) cm. The positive concavity of the DOP and position error contours is immediately apparent. The DOP reaches a trough in the middle due to the perpendicular nature of the LOPs near the centre of the contour. The LOPs are all relatively perpendicular in this region, due to the low aspect ratio of the system, i.e., the height of the system, h, is comparable to the separation between the optical beacons, as seen in Fig. 1.3 (a). The perpendicular LOPs lead to low DOP in the centre, as discussed in Section 1.3.1. However, in moving away from the centre, many of the LOPs become increasingly parallel and antiparallel. The net effect increases DOP and position error near the edges of the contours.

Table 2.1 summarizes the general characteristics of the DOP statistics for all the contours. The table shows the mean DOP, E[DOP(x, y, z)], and the standard deviation, STD[DOP(x, y, z)]. The results are shown for the square and rhombus cells, with 60° and 120° FOVs. In terms of the mean DOP, it is seen that the square cell is better able to produce low DOP values, compared to those of the rhombus cell, and the optical receiver with the FOV of 120° is better able to produce low DOP values, compared to those of the optical receiver with a FOV of 60°. For the OW positioning system with the square cell and 120° FOV, a mean DOP of 1.9 cm/° is achieved. This corresponds to a mean position error of 1.9 cm, for an AOA error of 1°. The same trend appears for the DOP standard deviation. The square cell is better able to produce uniform DOP values, compared to those of the rhombus cell, and the optical receiver with the FOV of 120° is better able to produce uniform DOP values, compared to those of the optical receiver with a FOV of 60°. For the OW positioning system with the square cell and 120° FOV, a DOP standard deviation of 0.061 cm/° is achieved. This corresponds to a standard deviation of the position error of 0.061 cm, for an AOA error of 1°.
Ultimately, the lower and more uniform DOP values for the positioning system with the square cell and 120° FOV are attributed to the LOPs in this system remaining largely perpendicular over the entire contour.

Table 2.1 Summary of the results from Fig. 2.2. The upper table summarizes the mean DOP, E[DOP(x, y, z)], and mean position error, E[σ_p(x, y, z)], for a beacon geometry side length of a = 100 cm and an AOA error of σ_AOA = 1°. The lower table summarizes the DOP standard deviation, STD[DOP(x, y, z)], and position error standard deviation, STD[σ_p(x, y, z)], also for a beacon geometry side length of a = 100 cm and a typical AOA error of σ_AOA = 1°.

Mean of DOP (and Mean of Position Error)
                 FOV of 120°                 FOV of 60°
Square cell      1.9 cm/° (and 1.9 cm)       7.6 cm/° (and 7.6 cm)
Rhombus cell     2.4 cm/° (and 2.4 cm)       11.8 cm/° (and 11.8 cm)

Standard Deviation of DOP (and Standard Deviation of Position Error)
                 FOV of 120°                 FOV of 60°
Square cell      0.061 cm/° (and 0.061 cm)   0.28 cm/° (and 0.28 cm)
Rhombus cell     0.084 cm/° (and 0.084 cm)   0.56 cm/° (and 0.56 cm)

While the previous analysis provided much insight into optical beacon geometries, we now wish to generalize our findings. To begin this generalization, we must illustrate a fact about scaling for DOP contours. We take a DOP contour given by a set of system dimensions. We then double all of the system dimensions and recalculate the DOP contour. This new DOP contour, using the doubled system dimensions, will have the same shape, but its magnitudes will be doubled. This is due to the 1/R_i relationship present in the geometry matrix, H, which is then inverted in (9) to give an R_i relationship. This means that DOP is proportional to the distance between the optical beacons and the optical receiver. By doubling the system dimensions, and thus R_i, we double the DOP. (This can also be visualized by imagining the AOA error as the angular arc of a cone, radiating from an optical beacon, and thinking of the position error as the proportional arc length of the cone at the position of the optical receiver.) This also means that any DOP statistic, such as the mean and standard deviation, can be scaled with the system dimensions, since these statistics are simply calculated from the DOP contour.

It is now apparent that the only thing affecting the shape of the DOP contour is the ratio between the height of the optical beacon geometry above the measurement plane, which we denote as h, and the size of the optical beacon geometry, whose side length is denoted as a. We refer to this ratio as the h/a ratio. We now carry out an analysis of the DOP statistics as this h/a ratio changes. Since the h/a ratio does not depend on the absolute value of any of the system dimensions, we can create a normalized system geometry by defining the side length parameter, a, to be 1. This side length is equivalent to the side length of the square cell in Fig. 2.1. Then, by changing the value of the height, h, between the optical beacon geometry and the measurement plane, we can realize different h/a ratios. By calculating DOP with these normalized system dimensions, we obtain a DOP contour and DOP metrics that have been normalized to a. We will refer to this normalized DOP as DOP/a. Conversely, the DOP/a contour, and any statistics derived from it, can be converted back to true statistics by simply multiplying by the desired dimension a.
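The scaling argument can be checked numerically by building on the dop() sketch from Section 2.1. The short example below (illustrative, with a single square cell and a coarse raster of the receiver plane) doubles every system dimension at a fixed h/a ratio; the mean DOP doubles while the normalized value, E[DOP/a], is unchanged.

```python
import numpy as np
# Builds on the dop() helper sketched in Section 2.1.
def mean_dop_square_cell(a, h, n_grid=21):
    """Mean DOP (cm per degree) over the floor of a single square cell."""
    beacons = [(0.0, 0.0, h), (a, 0.0, h), (0.0, a, h), (a, a, h)]
    xs = np.linspace(0.01 * a, 0.99 * a, n_grid)
    vals = [dop((x, y, 0.0), beacons) * np.pi / 180.0
            for x in xs for y in xs]
    return float(np.mean(vals))

for a in (100.0, 200.0):          # cm; h/a is held at 1 in both cases
    mean_dop = mean_dop_square_cell(a, h=a)
    # Mean DOP scales with a, so the normalized value is unchanged.
    print(a, round(mean_dop, 2), round(mean_dop / a, 5))
```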
To help visualize h/a ratios, Fig. 2.3 shows three representative system geometries with h/a ratios of h/a > 1, h/a = 1, and h/a < 1.

Figure 2.3 Representative diagrams of working areas with various h/a ratios are shown. Each box represents a working area, with the grey bottom plane indicating the plane in which the optical receiver operates and the circles at the top of the box representing optical beacons. The parameters h and a are shown along with the corresponding h/a ratio for each working area.

We now carry out our analysis by calculating the normalized mean DOP, E[DOP/a], and normalized DOP standard deviation, STD[DOP/a], for many contours with varying h/a ratios. This analysis is carried out for both the square and rhombus cells, and the results are shown in Fig. 2.4. Figure 2.4 (a) shows the normalized mean DOP, E[DOP/a], for the square and rhombus optical beacon geometries on the left axis, in units of (°)⁻¹, and the mean position error, E[σ_p], on the right axis, in centimetres, assuming a side length parameter of a = 100 cm and an AOA error of σ_AOA = 1°. Figure 2.4 (b) shows the normalized DOP standard deviation, STD[DOP/a], for the square and rhombus optical beacon geometries on the left axis, in units of (°)⁻¹, and the position error standard deviation, STD[σ_p], on the right axis, in centimetres, assuming the same side length parameter of a = 100 cm and AOA error of σ_AOA = 1°.

Figure 2.4 The (a) normalized mean DOP, E[DOP/a], and mean position error, E[σ_p], and (b) normalized DOP standard deviation, STD[DOP/a], and position error standard deviation, STD[σ_p], are plotted on the left and right axes, respectively, as a function of h/a for the square and rhombus optical beacon geometries.

Looking at Fig. 2.4 (a), we can see that h/a = 0.25 is ideal if the normalized mean DOP, and thus the mean position error, is to be minimized. Unfortunately, minimizing the normalized DOP standard deviation is somewhat more difficult, since there are several minima, being near h/a = 0.5 and 1.5 for the square cell and near h/a = 0.5 and 2 for the rhombus cell. We also notice that both the normalized mean DOP and the normalized DOP standard deviation increase in a roughly linear fashion beyond h/a = 2. These results agree, in general, with the earlier conclusion that a wider FOV, which allows us to operate with a smaller h/a ratio, improves both the normalized mean DOP and the normalized DOP standard deviation. The systems for which this would not apply are those with h/a < 0.25, as the normalized mean DOP begins to increase as h/a decreases. However, from a practical standpoint, we are able to ignore this region, as it would require an optical receiver with an unreasonably large FOV. We also notice that the rhombus cell is slightly better in terms of normalized mean DOP than the square cell. This does not agree with the prior conjecture that the square cell can outperform the rhombus cell. The apparent discrepancy arises because the results of Fig. 2.4 disregard the FOV of the optical receiver. In reality, OW systems operate with a finite and fixed FOV, and so this must be taken into account. To implement the analyses of normalized DOP and position error with respect to the FOV, we convert the ratio h/a to a corresponding FOV. We do this by calculating the minimum FOV required to see all optical beacons in a cell, from anywhere within that cell.
The conversion relation for the square cell is h/a = √2/tan(FOV/2), and the conversion relation for the rhombus cell is h/a = √2·3^(1/4)/tan(FOV/2). Upon converting h/a into FOV and replotting, we obtain Fig. 2.5. We can see from this new figure that the square cell is in fact superior to the rhombus cell, both in terms of normalized mean DOP and normalized DOP standard deviation. However, it is advisable to operate with a sufficiently wide FOV, being at or beyond 100°, as the FOV in this upper range has the normalized mean DOP remain relatively low and flat, for superior positioning performance. In particular, for a square cell with a = 100 cm and an optical receiver with a 100° FOV, it is possible to establish a DOP of 2.7 cm/°. For a typical AOA error of 1°, this would yield a position error of 2.7 cm.

Figure 2.5 The (a) normalized mean DOP, E[DOP/a], and mean position error, E[σ_p], and (b) normalized DOP standard deviation, STD[DOP/a], and position error standard deviation, STD[σ_p], are plotted on the left and right axes, respectively, as a function of the FOV for the square and rhombus optical beacon geometries.

2.3 Multi-cell Analyses of Dilution of Precision

While the analysis of optical beacons configured in single cells is insightful, we must recognize that there are reasons to consider operation with increased numbers of optical beacons. This is because the LS positioning algorithm can produce lower DOP values and improved positioning performance when increased numbers of optical beacons are observed by the optical receiver. Moreover, many installations will need to cover wider areas. To this end, we now extend our analysis to multi-cell optical beacon configurations. A multi-cell configuration has a tessellated array of single cells, being comprised of optical beacons. We analyse the performance of a square multi-cell configuration here by characterising DOP as a function of the number of optical beacons along each side of the cell, N. This defines the total number of optical beacons as the square of the number of optical beacons along each side, i.e., N × N = N². We fix h/a = 1, which corresponds to a half angle of FOV/2 ≈ 54.7°. Figure 2.6 shows the results. The figure presents the normalized mean DOP, E[DOP/a], as a function of N for the square optical beacon geometry. A curve fit of the form E[DOP/a] ∝ 1/N is also shown. The curve fit shows strong agreement with the data, yielding an R-squared value greater than 0.999. The inverse relationship between the normalized mean DOP and the number of optical beacons along the side of the square cell is understandable, as it can be expected that DOP will approach zero as the number of optical beacons approaches infinity and that the improvements in DOP will diminish as optical beacons are added.

Figure 2.6 The normalized mean DOP, E[DOP/a], is shown as solid circles as a function of the number of optical beacons along each side, N. The h/a ratio is 1. Curve fitting with E[DOP/a] ∝ 1/N is shown as a dotted line. The inset shows a configuration of optical beacons with N beacons along each side.
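The 1/N trend can be reproduced approximately by again building on the dop() sketch from Section 2.1. The example below assumes N × N optical beacons spaced evenly over a fixed square cell of side a, at a height h = a, with every beacon taken to be observable; it evaluates the mean DOP over the cell floor and prints N·E[DOP/a], which stays roughly constant when E[DOP/a] ∝ 1/N. This is a simplified stand-in for the multi-cell analysis, not a reproduction of Fig. 2.6 itself.

```python
import numpy as np
# Builds on the dop() helper sketched in Section 2.1.
def mean_dop_NxN(N, a=1.0, n_grid=16):
    """Mean DOP over a square cell of side a with N x N beacons at h = a."""
    pts = np.linspace(0.0, a, N)                 # N beacons along each side
    beacons = [(x, y, a) for x in pts for y in pts]
    xs = np.linspace(0.01 * a, 0.99 * a, n_grid)
    vals = [dop((x, y, 0.0), beacons) * np.pi / 180.0
            for x in xs for y in xs]
    return float(np.mean(vals))

for N in (2, 3, 4, 5):
    # Roughly constant if the normalized mean DOP follows a 1/N trend.
    print(N, round(N * mean_dop_NxN(N), 4))
```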
2.4 Summary

In this chapter, we introduced derivations and analyses of DOP. Section 2.1 derived DOP based on the LS algorithm, while Sections 2.2 and 2.3 presented analyses of DOP for single- and multi-cell optical beacon configurations, respectively. It was found that the square optical beacon geometry is superior to the rhombus optical beacon geometry. It was also found that the positioning performance improved with increased numbers of observable optical beacons, resulting from the use of both increased numbers of optical beacons and optical receivers with wide FOVs. It must be noted, however, that the use of increased numbers of optical beacons results in greater installation costs, so a balance must be struck between performance and practicality. For this reason, the following chapter will consider varying numbers of optical beacons and varying FOVs for the optical receiver, to arrive at an effective and practical design for the OW positioning system.

Chapter 3: Optical Receiver Design

In Chapter 2, we investigated the suitability of several potential optical beacon geometries to be used in an OW positioning system. The suitability of these geometries was determined using a DOP-based analysis. Dilution-of-precision was defined as the weighting factor that converts the effects of AOA error to position error. This quantity could be calculated at every point in space using the location of the optical receiver with respect to the optical beacons. Efforts were made to reduce DOP, and thereby reduce the position error. However, in general, the AOA error should also be reduced to lessen the overall position error. The AOA error is defined largely by the characteristics of the optical receiver, and so this chapter focuses on the design of the optical receiver for an optimal OW positioning system.

A variety of optical receiver architectures have been proposed in the literature for OW positioning systems. Section 3.1 introduces the two general forms of such architectures and selects one for use in this study. Section 3.2 presents the details of fabrication for the selected architecture. Section 3.3 analyses the theoretical and experimental performance of the selected architecture. Section 3.4 summarizes the performance of the selected optical receiver architecture and gives recommendations for future designs.

3.1 Optical Receiver Architectures

There are many optical receiver architectures that can be used to measure AOAs and carry out OW positioning, but their performance levels are typically not equal. When choosing the best optical receiver to use in an AOA-based OW positioning system, one must consider the two core functions of the optical receiver: AOA estimation and optical beacon identification.

The performance of the optical receiver in carrying out AOA estimation is defined by its AOA error, which is itself comprised of random error and systematic error. The random error on the AOA is linked to precision, as it manifests itself as random deviations in the measured azimuthal and polar angles. The systematic error on the AOA is linked to accuracy, as it quantifies biasing between the measured and true azimuthal and polar angles. In general, the optical receiver will be operated over a limited range of polar angles, for which the random and systematic errors are deemed acceptable. This range is used to define the FOV of the optical receiver.

The performance of the optical receiver in carrying out optical beacon identification is enhanced by the use of an increased sampling frequency. The sampling frequency is the rate at which the optical receiver samples the power from surrounding optical beacons, to calculate the corresponding AOAs and ultimately estimate the optical receiver's position.
Poor optical beacon identification results in AOAs that are unusable by the positioning algorithm, since they cannot be uniquely associated with an optical beacon. This makes it necessary to carry out positioning using fewer optical beacons, yielding greater position error. Overall, we will use AOA error, FOV, and sampling frequency as the determining factors in selecting an effective optical receiver architecture.

There are two architectures considered here for optical receivers in the OW positioning system: the photodiode architecture [20] and the camera architecture [21]. Using data from the literature, we can make a rough comparison between these two architectures and assess which is better suited for an AOA-based OW positioning system.

The photodiode architecture makes use of multiple photodiodes arranged in a non-collinear form (typically orthogonal to each other). The structure is arranged such that the azimuthal and polar angles of the AOA are measured by detecting the ratio of optical powers present on the photodiodes. The photodiode architecture has been studied extensively by Ahmed et al. [20]. The authors' findings indicate that the photodiode architecture can function with an AOA error of 2°, a FOV of approximately 80°, and a sampling frequency of 3 kHz [20]. The FOV of this architecture is defined as the outermost polar angle for which shadowing commences between neighbouring photodiodes. This effect depends on the azimuthal orientation, and so the FOV quoted here is an average value that is based on an approximation of a uniform solid beam angle. The photodiode architecture is defined here to have a sampling frequency of 3 kHz, as this was the frequency used by Ahmed et al., although this value is actually a lower limit. Such an architecture may be capable of operation at much higher frequencies.

The camera architecture is comprised of a lens suspended over an image sensor. This architecture uses the lens to focus light from each optical beacon to a beamspot on the image sensor. Based upon the location of this beamspot, the azimuthal and polar angles of the AOA can be calculated. The camera architecture has been studied extensively by Jin et al. [29]. Their findings indicate that this architecture has an AOA error of 1° and a FOV of 120°. The authors applied a definition for FOV that differs from the one used in this work, in that they restricted the imaging area on the image sensor and this limited the FOV, but their data suggest that AOA measurements can be made with the imaging system with minimal distortion up to the specified 120° FOV. The work did not consider optical beacon identification or the associated minimum sampling frequency, but an estimate of the minimum sampling frequency in their work is one-half of the 187-frames-per-second frame rate of the Omnivision OV7720 image sensor, which is approximately 93 Hz [30].

A summary of the performance metrics of the photodiode and camera architectures is provided in Table 3.1. The data in this table indicate that the camera architecture, with its wider FOV and lower AOA error, is superior for an AOA-based OW positioning system. However, care must be taken to accommodate its lower sampling frequency, in comparison to that of the photodiode architecture (and the resulting implications for optical beacon identification). It is also noted that the camera architecture has two practical advantages. The first practical advantage is that a camera already exists in contemporary smartphone technology.
Thus, the proposed OW positioning technologies can be integrated with existing smartphones. (Although, it must be noted that typical smartphones have a narrow FOV, at approximately 60°, according to the tests later in this chapter, and this may limit the number of observable optical beacons and the overall positioning performance.) The second practical advantage of the camera architecture is the colour detection capability of the image sensor. This capability is exploited in the work shown later in this chapter to assist in optical beacon identification. The photodiode architecture lacks this capability.

Table 3.1 Summary of optical receiver architecture properties.

                          Photodiode architecture    Camera architecture
AOA error (°)             2                          1
FOV (°)                   80                         120
Sampling frequency (Hz)   3000                       93

Based on all of the above considerations, we conclude that the camera architecture is superior to the photodiode architecture for use in an AOA-based OW positioning system. Thus, the following sections will present the optical receiver design, in Section 3.2, and the optical receiver operation, in Section 3.3, for this camera architecture.

3.2 Optical Receiver Design

The design of the camera architecture is considered in this section for an AOA-based OW positioning system. The primary components of this architecture are the image sensor and the microlens. The components must be selected carefully to ensure suitable performance for the core functions of the optical receiver, being AOA estimation and optical beacon identification. The following subsections explore these two components.

3.2.1 Image Sensor

We begin by selecting an image sensor for our optical receiver. The Omnivision OV7720 CMOS VGA image sensor will be used. This image sensor is a good choice because it has a higher frame rate than typical image sensors and a small pixel size. (At the same time, the Omnivision OV7720 allows for easy addressability, via USB, and interfacing, via commercial software.) The motivation for a high frame rate, and the resulting ease of optical beacon identification, was already introduced with regard to the sampling frequency, but the reasoning for a small pixel size may not be immediately obvious. As we will see later, in Section 3.3.1.1, discrete pixels introduce error by quantizing the measured AOAs. The magnitude of this quantization error is determined by the size of each pixel, where smaller pixels yield lower quantization error. The pixel size for the OV7720 is 6 μm × 6 μm. A scanning electron microscope (SEM) image of these pixels is shown in Fig. 3.1, along with a rough schematic of the camera architecture. Note that the image sensor must use groupings of four pixels to obtain a signal at 187 Hz, and this increases the effective pixel size to 12 μm × 12 μm. These larger "effective" pixels increase the quantization error.

Figure 3.1 Diagram showing incident light rays (in red) focused down by a hemispherical microlens onto an image sensor. An SEM image of the OV7720 sensor pixels is shown in the inset. The image displays four pixels in red as the larger effective pixels that are used together at higher frame rates.

3.2.2 Microlens

We now move to the selection of a microlens for our optical receiver. The microlens focuses light from each observable optical beacon to a beamspot on the image sensor. The optical receiver then uses the location of each beamspot to calculate the AOA for each optical beacon.
Thus, the FOV of the optical receiver is dictated largely by the characteristics of the microlens. Drawing from our conclusions in Chapter 2, we know that position error is reduced by increasing the FOV, so our microlens should have as wide a FOV as possible. To maximize the FOV, the flipped microlens configuration shown previously in Fig. 3.1 is considered. It has been successfully used in both OW positioning [29] and OW communication [31] systems. This flipped microlens configuration has incident light refract through a glass coverslip and then be focused by the microlens onto the image sensor. The refraction through the glass coverslip increases the FOV.

The challenge now is to fabricate an effective flipped microlens. One simple fabrication method presented in the literature makes use of a dispensing process [32]. In this process, a UV-curable polymer is dispensed onto a glass coverslip. The polymer forms a microdroplet on the glass coverslip with a spherical surface, which is then cured into a (roughly) spherical microlens. The shape of the microlens is quantified here by the contact angle, θc, and it is dictated by surface tension forces between the glass coverslip and the polymer. If dispensed in air, the microlens forms a contact angle of roughly θc = 30° with the glass coverslip, making a shallow microlens. If dispensed in glycerol, the microlens forms a contact angle of roughly θc = 90° with the glass coverslip, making it a hemispherical microlens. The following analysis will determine if the shallow microlens, with the θc = 30° contact angle, or the hemispherical microlens, with the θc = 90° contact angle, is better suited for OW positioning. To make this decision, we will look at two key considerations for the microlenses, being aperturing and focusing.

Our first consideration, aperturing, is introduced by the finite size of a microlens. Aperturing degrades imaging when the light rays illuminate the structure with a polar angle, θ, that is beyond the polar angle at which light rays are focused at the outermost edge of the microlens. To calculate this maximum polar angle, we will investigate a microlens with an arbitrary contact angle, θc, and extrapolate the results. A schematic of the overall structure, with a glass coverslip, microlens, and image sensor, is shown in Fig. 3.2. Light rays strike the glass coverslip at an incident angle of θ, which is defined with respect to the normal of the planar surface. The rays are refracted. We assume that the refractive indices of the glass coverslip and microlens are both equal to a value of n = 1.54, such that the incident light rays then propagate undeflected through the glass coverslip and microlens at an angle of θint = asin(sin(θ)/n), with respect to the normal of the planar surface. The internal light rays then strike the curved back surface of the microlens. Here, one internal light ray (called the chief ray) passes undeviated through the curved back surface of the microlens, because it is normal to the curved back surface. According to the geometry of Fig. 3.2, the internal angle of propagation, θint, for this chief ray is equal to the contact angle of the microlens, θc. We use this equivalency to calculate the minimum allowable microlens contact angle that can avoid aperturing for all incident polar angles, i.e., we avoid aperturing for polar angles up to θ = 90°. This extreme case gives θc = asin(sin(90°)/1.54) ≈ 40.5°. In other words, microlenses with contact angles above θc ≈ 40.5° can avoid aperturing.
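This limit is quick to verify numerically; the short check below assumes only the refractive index of n = 1.54 used above.

```python
import math
# Aperturing limit: the chief ray inside the coverslip and lens travels at
# theta_int = asin(sin(theta)/n), so the extreme incidence of theta = 90 deg
# sets the minimum contact angle that avoids aperturing.
n = 1.54
theta_c_min = math.degrees(math.asin(math.sin(math.radians(90.0)) / n))
print(round(theta_c_min, 1))   # ~40.5 degrees for n = 1.54
```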
Thus, the shallow microlens, with its 30° contact angle, would be subject to aperturing, and the hemispherical microlens, with its 90° contact angle, would not be subject to aperturing, making it the preferable choice for the optical receiver.

Figure 3.2 Schematic of the dispensed microlens on a glass coverslip is shown. The chief ray travelling through the system is denoted by the red arrow. The chief ray exits the microlens normal to its curved back surface.

The second consideration, focusing, is motivated by our desire for a compact design with a short focal length. Using the lensmaker's equation [33], we can calculate the expected focal length, f_l, of a microlens based on the radii of curvature of its two sides, its refractive index, n, and its sag, i.e., thickness, s_l. The lensmaker's equation is

\frac{1}{f_l} = (n-1)\left[\frac{1}{R_{\mathrm{f}}} - \frac{1}{R_{\mathrm{b}}} + \frac{(n-1)\,s_l}{n\,R_{\mathrm{f}}\,R_{\mathrm{b}}}\right].    (10)

We set the radius of curvature of the planar front surface, R_f, to infinity, and we set the radius of curvature of the curved back surface, R_b, to the negative radius of curvature of the microlens, R_l. By simple geometry, the radius of curvature of the microlens can then be expressed in terms of its contact angle, as R_l = D_l/(2 sin(θc)). This simplifies the lensmaker's equation to

\frac{1}{f_l} = (n-1)\frac{1}{R_l} = \frac{2(n-1)\sin(\theta_{\mathrm{c}})}{D_l}.    (11)

It is apparent that increasing the contact angle, θc, reduces the focal length, f_l. Thus, the focal length can be minimized by using the hemispherical microlens, with its contact angle of θc = 90°.

We select the hemispherical microlens to proceed in this study, given its reduced susceptibility to aperturing and short focal length. Others in the literature have applied hemispherical microlenses and witnessed these benefits [29, 31]. Hemispherical microlenses with a diameter of D_l ≈ 800 μm were dispensed for use in our optical receiver. Figure 3.3 shows an SEM image of a representative microlens. The final optical receiver design is comprised of the flipped hemispherical microlens positioned over the Omnivision OV7720 image sensor. In the following section, we will look at the optical receiver's performance. We will analyze the performance for AOA estimation by characterising the AOA error (as random and systematic errors). The performance will be contrasted to that of a typical smartphone. We will then analyze the performance for optical beacon identification.

Figure 3.3 Scanning electron microscope image showing a representative hemispherical dispensed microlens with a diameter of 500 μm.
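For reference, the simplified relation in (11) can be evaluated directly, as in the sketch below (illustrative function name; n = 1.54 and D_l = 800 μm assumed, as above). Note that this gives the thin-lens focal length of the microlens alone; the working distances in the assembled receiver also involve the coverslip and the image sensor glass, which are treated in Section 3.3.1.2.2.

```python
import math
# Focal length from the simplified lensmaker's relation (11):
# 1/f_l = 2 (n - 1) sin(theta_c) / D_l.
def focal_length_um(D_um, contact_angle_deg, n=1.54):
    return D_um / (2.0 * (n - 1.0) * math.sin(math.radians(contact_angle_deg)))

print(round(focal_length_um(800.0, 30.0)))   # shallow lens, ~1481 um
print(round(focal_length_um(800.0, 90.0)))   # hemispherical lens, ~741 um
```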
3.3 Optical Receiver Performance

In the previous section, we selected components for our optical receiver to have it be effective for an AOA-based OW positioning system. In this section, we will analyze the performance of this optical receiver, in terms of AOA estimation and optical beacon identification. We begin with a performance analysis for AOA estimation in Section 3.3.1, by characterising the AOA error in terms of random and systematic errors. We then move to a performance analysis for optical beacon identification in Section 3.3.2, and explore how to reliably distinguish optical beacons. Section 3.3.3 summarizes the findings.

3.3.1 Performance for Angle-of-Arrival Estimation

An optical receiver in an AOA-based OW positioning system must effectively estimate AOAs. Both the microlens and the image sensor in the optical receiver play a role in this. The microlens collects light from each optical beacon and focuses it onto the image sensor. The image sensor takes the intensity profile of the focused light and converts it to electronic signals, i.e., an image, which is used to define the AOA, for subsequent triangulation of the optical receiver's position.

In the first step, the microlens focuses incoming light from a wide range of angles onto different areas of the image sensor. Since each optical beacon is assumed to be a distant point source, light rays from each optical beacon will be roughly collimated when they arrive at the optical receiver. The collimated light coming from each optical beacon is focused to a point of high intensity on the image sensor, which will be referred to here as a beamspot. In the second step, the location of each beamspot in the image is used to estimate the true azimuthal, φ, and polar, θ, angles, which together define the AOA for the optical beacon of interest. The estimated azimuthal and polar angles, φIS and θIS, respectively, are defined by the location of the illuminated pixel on the image sensor, according to

\phi_{IS} = \tan^{-1}\!\left(\frac{y_{IS}}{x_{IS}}\right),    (12)

for the estimated azimuthal angle, and

\theta_{IS} \approx k\,\rho_{IS} = k\sqrt{x_{IS}^2 + y_{IS}^2},    (13)

for the estimated polar angle. The expression for the estimated polar angle is an approximation, as opposed to an equality, because the true relation is nonlinear (and will be discussed later). In both equations, xIS and yIS are discrete integer coordinates on the image sensor. They define the centre of the pixel being illuminated by the chief ray and will be referred to as the chief coordinates in the remainder of this work. They have integer values with units of pixels, and an origin centred on the optical axis (OA) of the microlens. The variable ρIS is the chief radius, which we define as the radial displacement between the chief coordinates and the origin. The variable k is a linear scaling factor with units of °/pixel. It is used to create a linear approximation between the measured chief radius, ρIS, in pixels, and the estimated polar angle, θIS, in degrees. The schematic is shown in Fig. 3.4. It is important to note that this overall AOA estimation process will be subject to random and systematic errors. Thus, our estimated angles, φIS and θIS, will not be exactly equal to the true angles, φ and θ.

Figure 3.4 Representative diagram showing azimuthal and polar angles on an image sensor. A beamspot illuminates the pixel indicated in blue, whose chief coordinates are (xIS, yIS). The dashed circle outlines the perimeter of the microlens.
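A minimal sketch of this pixel-to-AOA mapping is given below (illustrative function name). It applies (12) and (13) to a pair of chief coordinates, with k set to the theoretical value of 0.93 °/pixel derived later in Section 3.3.1.2.2; atan2 is used so that the quadrant of the azimuthal angle is resolved.

```python
import math

def estimate_aoa(x_is, y_is, k=0.93):
    """Estimated azimuthal and polar angles, in degrees, from the chief
    coordinates (in pixels from the optical axis), per (12) and (13)."""
    phi = math.degrees(math.atan2(y_is, x_is))   # azimuthal angle, (12)
    rho = math.hypot(x_is, y_is)                 # chief radius, in pixels
    theta = k * rho                              # polar angle, (13)
    return phi, theta

print(estimate_aoa(20, 15))    # roughly (36.9, 23.3) degrees
```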
Looking at the azimuthal angle, for which we seek φIS ≈ φ, the error between the estimated and true azimuthal angles will be defined by random azimuthal error and systematic azimuthal error. For the purposes of this study, systematic azimuthal error can be neglected simply because the microlenses are found to have no observable astigmatism. The random azimuthal error is more complicated, however, because it grows in size as the polar angle is reduced. For an optical beacon that is nearly overhead, i.e., a small polar angle, the beamspot is close to the origin, and it becomes difficult to resolve an accurate estimate of the azimuthal angle. Figure 3.5 illustrates the effect of random azimuthal error with a grid of points before imaging, Fig. 3.5 (a), and a grid of points after imaging, Fig. 3.5 (b). It shows an expansion of the points in the tangential (azimuthal) direction by an amount that is inversely proportional to the distance from the origin. Clearly, it is desirable to avoid the large random azimuthal error that occurs for small polar angles, so we will define the FOV of the optical receiver in this thesis to have a lower bound for the polar angle. Operation above this lower bound has the random azimuthal error be sufficiently small. The random azimuthal error is explored further in Section 3.3.1.1.

Looking at the polar angle, for which we seek θIS ≈ θ, the error between the estimated and true polar angles will be defined by random polar error and systematic polar error. A heuristic approach is used in the following section to quantify random polar error, and the results show it to be small and constant. The systematic polar error demands careful attention, however, as it is the origin of the approximation in (13). For small polar angles, the linear approximation in (13) is good, and the relationship between the measured chief radius, ρIS, and the polar angle, θ, is roughly linear with a proportionality constant k. For large polar angles, however, the linear approximation becomes poor, and the measured chief radius, ρIS, and the polar angle, θ, are not linearly related. However, the estimated polar angle, θIS, calculated from (13) still assumes linearity. This introduces systematic polar error between the estimated polar angle, θIS, and the true polar angle, θ, which is referred to as radial distortion [34, 35]. Figure 3.5 illustrates the effect of systematic polar error with a grid of points before imaging, Fig. 3.5 (a), and a grid of points after imaging, Fig. 3.5 (c). We see a compression of the grid in the radial (polar) direction, by an amount that is proportional to the distance from the origin. (Radial distortion is sometimes referred to as barrel distortion because of the barrel shape of this image.) Clearly, it is desirable to avoid the large systematic polar error that occurs for large polar angles. Thus, for the purposes of this study, we will define an upper bound for the polar angle, i.e., restrict the FOV of the optical receiver, to keep the systematic polar error sufficiently small. The systematic polar error is explored further in Section 3.3.1.2.

Figure 3.5 Visual depictions of random and systematic error are shown. The figures show (a) a grid of points, (b) an image of the grid of points, subject to random azimuthal error that increases in inverse proportion to the distance from the origin and random polar error that is small and constant, and (c) an image of the grid of points, subject to negligible systematic azimuthal error and finite systematic polar error that increases in proportion to the distance from the origin.

3.3.1.1 Random Azimuthal and Polar Errors

Random error defines performance for a system devoid of biases, i.e., systematic error. For this study, random error manifests itself as discrepancies between the estimated azimuthal and polar angles, φIS and θIS, and the true azimuthal and polar angles, φ and θ, while neglecting biases, i.e., systematic error. There are two effects that can impact the validity of the estimated angles.

The first effect to consider for the validity of the estimated angles, φIS and θIS, in the presence of random error, is the shape of the beamspot on the image sensor. Ideally, the beamspot would be sufficiently small and symmetric to identify unique chief coordinates, xIS and yIS, and therefore unique estimated angles, φIS and θIS.
However, in reality there may exist aberrations, in the form of spherical aberration, comatic aberration, and field curvature. The aberrations can spread the beamspot over an area of many pixels, with an asymmetric intensity profile, and this can make it difficult to identify xIS and yIS. Figure 3.6 shows this effect by way of ray-tracing analyses of a hemispherical microlens, with a diameter of Dl = 800 µm and a refractive index of n = 1.54. The microlens is fixed to a coverslip with a thickness of 100 µm and a refractive index of n = 1.54. The planar face of the microlens is at 0 µm along the OA, and an image sensor is positioned at 1200 µm along the OA. The ray-tracing results for incident light rays at polar angles of θ = 0° and θ = 50°, with respect to the OA, are shown in Figs. 3.6 (a) and (b), respectively. The insets show the intensity profiles of the beamspots on the image sensor.

For illumination at a small polar angle, θ = 0°, seen in Fig. 3.6 (a), the chief coordinates, xIS and yIS, are relatively easy to identify because an intense focus is formed with a small beamspot. The paraxial light rays that propagate close to the OA undergo effective focusing to create an intense point of light at the centre of the beamspot, while the non-paraxial light rays that propagate far from the OA yield a diffuse (and symmetric) beamspot. In this case, xIS and yIS can simply be defined by the brightest pixel at the geometric centre of the beamspot. For illumination at a large polar angle, θ = 50°, seen in Fig. 3.6 (b), the chief coordinates, xIS and yIS, must be carefully determined because the beamspot is larger and more asymmetric than in the prior case. In this case, paraxial light rays close to the chief ray are focused to the same brightest pixel, but light rays outside the paraxial region undergo asymmetric skewing, which is seen as a diffuse flaring of the image. This diffuse flaring is referred to as comatic aberration. Fortunately, the chief ray and its neighbouring rays propagate undeflected through the coverslip and microlens without the effects of aberration, and because of this the brightest pixel can still be used to uniquely define xIS and yIS on the image sensor. This approach of looking for the brightest pixel is taken in this work.

The second effect to consider for the validity of the estimated angles, φIS and θIS, in the presence of random error, is quantization error due to the pixels on the image sensor. This effect arises because we chose (above) to use the brightest pixel to define the chief coordinates. This decision makes the chief coordinates, xIS and yIS, discrete coordinates in integer multiples of pixels. The estimated angles φIS and θIS are therefore also discrete and typically not equal to the true angles φ and θ. The magnitude of this error depends on the lens and pixel sizes, where increasing the lens size relative to the pixel size reduces the error magnitude. This effect cannot be resolved through imaging techniques, so quantization error will remain the primary source of random azimuthal and polar errors. The following two subsections use a sensitivity analysis to define the effects of quantization error for the azimuthal and polar angles.

Figure 3.6 Ray trace results are shown for incident light at polar angles of (a) 0° and (b) 50°. The light rays are shown in red passing through the systems. The vertical axis shows the distance transverse to the OA. The horizontal axis shows the distance along the OA.
The insets show transverse profiles of the focused rays at a distance of 1200 μm along the OA (with respect to the planar surface of the microlens).

3.3.1.1.1 Random Azimuthal Error

Our first sensitivity analysis will consider random azimuthal error due to pixel quantization. To carry out this analysis, we take the partial derivative of (12) and define the random azimuthal error to be

\Delta\phi = C_\pi\left(\left|\frac{\partial\phi_{IS}}{\partial x_{IS}}\right|\Delta x_{IS} + \left|\frac{\partial\phi_{IS}}{\partial y_{IS}}\right|\Delta y_{IS}\right) = \frac{C_\pi\left(|y_{IS}|\,\Delta x_{IS} + |x_{IS}|\,\Delta y_{IS}\right)}{x_{IS}^2 + y_{IS}^2},    (14)

where ρIS = (xIS² + yIS²)^(1/2) and C_π = 180°/π is the conversion factor that transforms the angles from radians to degrees. Here, ΔxIS and ΔyIS are the spatial quantization errors in the x and y dimensions, respectively. These spatial errors are in units of pixels, because xIS and yIS are integer multiples of pixels, while the random azimuthal error, Δφ, is in units of degrees. According to our prior discussion on beamspot asymmetry, we can assume that the chief coordinates have a unique set of coordinates, xIS and yIS, and so we define the spatial quantization errors in the x and y dimensions, ΔxIS and ΔyIS, by conservative (worst-case) values equal to half of a pixel's side-length, i.e., ΔxIS = ΔyIS = pixel/2. This is because xIS and yIS are integer quantities in units of pixels, and the maximum distance from a pixel centre to its edge along xIS or yIS is half the side-length of each pixel. We can now rewrite (14) to define the random azimuthal error as

\Delta\phi = \frac{C_\pi\,(\mathrm{pixel}/2)\left(|x_{IS}| + |y_{IS}|\right)}{\rho_{IS}^2}.    (15)

To proceed, we consider the profile of the beamspot on the image sensor along the x and y axes, i.e., φ = 0° and 90°, respectively. Along these axes, (15) simplifies to expressions of Δφ = C_π·(pixel/2)/|xIS| and Δφ = C_π·(pixel/2)/|yIS|, in degrees, respectively. If we consider the projection of the beamspot along a line bisecting the x and y axes, i.e., with yIS = ±xIS, (15) can be simplified to give an error of Δφ = C_π·(pixel/2)/|xIS|, in degrees. In these cases, we see that the random azimuthal error is inversely proportional to the magnitudes of xIS and yIS. For the case where the chief coordinates are at the origin, in particular, the error is infinite. Note that (15) can be written in terms of the estimated polar angle, θIS, by solving (13) for ρIS and substituting it into (15). This gives

\Delta\phi = \frac{C_\pi\,k^2\,(\mathrm{pixel}/2)\left(|x_{IS}| + |y_{IS}|\right)}{\theta_{IS}^2}.    (16)

This equation explicitly shows the inverse dependence between the random azimuthal error and the estimated polar angle, although (16) is only highly accurate for operation within the optical receiver's FOV, where the approximation ρIS ≈ θIS/k is deemed to be acceptable. This inverse dependence means that the random azimuthal error is not a constant, and that it is unbounded. If we assume k ≈ 1°/pixel, which we will later see is a valid assumption, this expression gives a worst-case random azimuthal error of Δφ ≈ 1° if we restrict the FOV to have a lower bound on the polar angles of approximately 30°. This requirement for such a large lower bound on the polar angle is disappointing, but we expect that the actual random azimuthal error and lower bound on the polar angles will be smaller, given the conservative (worst-case) values used in these analyses. In Section 3.3.1.3, we will experimentally quantify these errors to define a more practical lower bound of 15° for the polar angle in the FOV.
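The worst-case behaviour of (15) and (16) can be tabulated directly, as in the sketch below (illustrative; half-pixel quantization and k ≈ 1 °/pixel assumed, as above). The random azimuthal error falls off roughly as the inverse of the polar angle, reaching approximately 1° near a polar angle of 30° for a beamspot on an axis and somewhat more on a diagonal.

```python
import math
# Worst-case random azimuthal error from (15), assuming half-pixel
# quantization of the chief coordinates and k ~ 1 degree/pixel.
C_pi = 180.0 / math.pi
k = 1.0                                        # degrees per pixel (approximate)

def random_azimuthal_error(theta_is_deg, phi_deg):
    rho = theta_is_deg / k                     # chief radius in pixels, via (13)
    x = abs(rho * math.cos(math.radians(phi_deg)))
    y = abs(rho * math.sin(math.radians(phi_deg)))
    return C_pi * 0.5 * (x + y) / rho**2       # equation (15), in degrees

for theta in (10, 15, 30, 50):
    print(theta,
          round(random_azimuthal_error(theta, 0.0), 2),    # beamspot on an axis
          round(random_azimuthal_error(theta, 45.0), 2))   # beamspot on a diagonal
# The error grows as the polar angle shrinks: roughly 1 degree near 30 degrees
# (on an axis) and several degrees below 15 degrees.
```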
3.3.1.1.2 Random Polar Error

Our second sensitivity analysis considers random polar error due to pixel quantization. To carry out this analysis, we take the partial derivative of (13) to define the random polar error as

\Delta\theta = \left|\frac{\partial\theta_{IS}}{\partial x_{IS}}\right|\Delta x_{IS} + \left|\frac{\partial\theta_{IS}}{\partial y_{IS}}\right|\Delta y_{IS} = \frac{k\left(|x_{IS}|\,\Delta x_{IS} + |y_{IS}|\,\Delta y_{IS}\right)}{\sqrt{x_{IS}^2 + y_{IS}^2}}.    (17)

By making the same assumptions for the polar angle as were made for the azimuthal angle, we are able to simplify (17) to

\Delta\theta = \frac{k\,(\mathrm{pixel}/2)\left(|x_{IS}| + |y_{IS}|\right)}{\rho_{IS}}.    (18)

This expression gives insight into the effects of pixel quantization on random polar error. If we consider the projection of the beamspot along the x or y axes, i.e., with xIS = 0 or yIS = 0, (18) can be simplified to give an error of Δθ = k·(pixel/2). If we consider the projection of the beamspot along a line bisecting the x and y axes, i.e., with yIS = ±xIS, (18) can be simplified to give an error of Δθ = k·(pixel/2^(1/2)). Overall, we see that the random polar error is independent of the polar angle and (moderately) dependent on the azimuthal angle. With this in mind, we again apply a conservative estimate for the error by defining it as the largest (worst-case) value. By again assuming k ≈ 1°/pixel, as we did with the azimuthal angle, we can calculate an upper limit for the random polar error in our work of Δθ = 2^(−1/2)° ≈ 0.7° for all azimuthal and polar angles. In Section 3.3.1.3, we will see that the random polar error in our experimental AOA measurements is less than this worst-case value.

3.3.1.2 Systematic Azimuthal and Polar Errors

Systematic error is the degradation of accuracy. This error is predictable, so it can be avoided (to an extent) or compensated for. The former approach is taken in this work. We apply microlenses with negligible astigmatism to avoid significant systematic azimuthal error. We then place an upper bound on the polar angle, i.e., restrict the FOV, to avoid the high systematic polar error that occurs at large polar angles. Systematic azimuthal error is discussed in Section 3.3.1.2.1. Systematic polar error is discussed in Section 3.3.1.2.2.

3.3.1.2.1 Systematic Azimuthal Error

Systematic azimuthal error is the degradation of accuracy in the measurements of the azimuthal angle due to bias that can be modelled. A major cause of systematic azimuthal error in this work would be imperfections in the cylindrical symmetry of the microlens. Such imperfections would have the imaging characteristics of the microlens depend upon the azimuthal angle, and this would be a manifestation of systematic azimuthal error. Fortunately, the applied dispensing process can pattern microlenses as a liquid with near-perfect cylindrical symmetry. This point is corroborated by SEM images of the microlenses and the lack of astigmatism in the acquired images. In Section 3.3.1.3, we present AOA measurements for our optical receiver and confirm that there is negligible systematic azimuthal error.

3.3.1.2.2 Systematic Polar Error

Systematic polar error is the predictable degradation of accuracy for measurements of the polar angle. As was discussed in Section 3.3.1, systematic polar error manifests itself as radial distortion in the acquired images, and the level of radial distortion increases as the polar angle increases. The goal of this section is to model the systematic polar error caused by radial distortion and define an upper bound on the polar angle, i.e., restrict the FOV, to avoid the high systematic polar error that occurs at large polar angles.
Given that the FOV is twice the maximum allowable polar angle, and that wider FOVs are advantageous for AOA-based optical wireless positioning systems, according to Chapter 2, such an analysis can have important implications for the performance of our OW positioning system. In Section 3.3.1, we stated that an upper bound could be defined for the polar angle by the point at which the relationship between the estimated polar angle, θIS, and the chief radius, ρIS, ceases to be sufficiently linear. For this work, we consider this relationship to have lost its linearity when the systematic polar error due to radial distortion reaches 1°.

The true (nonlinear) relationship between the polar angle, θ, and the chief radius, ρIS, is complex. This relationship can be derived by tracing the chief ray through the entire camera architecture to define the measured chief coordinates on the image sensor. The derivation builds on the analysis in Section 3.2.2, where we traced the chief ray through the glass coverslip and microlens, and it continues through the image sensor glass that is present in the OV7720 and onto the image sensor itself. A schematic of the chief ray traced through the full camera architecture is shown in Fig. 3.7.

Figure 3.7 Full camera architecture schematic is shown, including the glass coverslip, microlens, image sensor glass, air gap, and OV7720 image sensor. The chief ray travelling through the system is denoted by the red arrow. Its radial displacement for each stage is denoted by ρ1, ρ2, and ρ3.

To calculate the chief radius, ρIS, we divide the system into three refractions and three propagations. In order, these are refraction at the front surface of the glass coverslip, propagation up to the image sensor glass, refraction at the front of the image sensor glass, propagation through the image sensor glass, refraction at the rear surface of the image sensor glass, and propagation through the air to the surface of the image sensor.

The first refraction, refraction at the front surface of the coverslip, has already been discussed in Section 3.3.1. Using Snell's law, we can calculate the polar angle internal to the microlens, θint, using the incident polar angle, θ, and the coverslip refractive index, n. We obtain

\theta_{\mathrm{int}} = \sin^{-1}\!\left(\frac{\sin\theta}{n}\right).    (19)

The first propagation has the chief ray travel at the internal polar angle, θint, through the front coverslip, microlens, and air gap between the microlens and the image sensor glass, up to the front surface of the image sensor glass. There is no refraction at the interface between the glass coverslip and the microlens, since we defined their refractive indices to be equal at n = 1.54. Also, since we are considering the chief ray, this ray passes through the centre of the hemispherical microlens and intersects its back curved surface at normal incidence, meaning that there is no refraction at this interface either. Finally, the chief ray travels through the air to the front surface of the image sensor glass. The contribution to the chief radius in this step, ρ1, can be calculated by drawing a right triangle using the chief ray, its radial displacement for this step, and the distance from the centre of the microlens to the front of the image sensor glass along the OA, d, as shown in Fig. 3.7. With these parameters, we are able to calculate the contribution to the chief radius for this step to be

\rho_1 = d\,\tan(\theta_{\mathrm{int}}).    (20)

The second refraction is at the front of the image sensor glass.
Similar to the glass coverslip in front of the microlens, we can calculate the new angle of the chief ray within the glass, θ_gl, using Snell's law to be

$$\theta_\mathrm{gl} = \sin^{-1}\!\left(\frac{\sin\theta_\mathrm{int}}{n}\right), \quad (21)$$

where the glass refractive index is assumed to be equal to the coverslip and microlens indices.

The second propagation, through the image sensor glass, has the chief ray propagate through the image sensor glass at an angle of θ_gl, until it reaches the back surface of the image sensor glass. The contribution to the chief radius for this step can be calculated by drawing a right triangle using the chief ray angled at θ_gl, its radial displacement for this step, Δρ2, and the thickness of the image sensor glass, t, as shown in Fig. 3.7. With these parameters, we are able to calculate the contribution to the chief radius for this step to be

$$\Delta\rho_2 = t\tan(\theta_\mathrm{gl}). \quad (22)$$

The third refraction is at the back of the glass covering the image sensor. At this interface, we are essentially undoing the refraction from (21) by having refraction back into air from the image sensor glass. This allows us to directly state that the chief ray will be travelling at θ_int after this refraction.

The third propagation, through the air up to the image sensor, has the chief ray propagate from the back of the image sensor glass at an angle of θ_int to the surface of the image sensor. The contribution to the chief radius for this step can be calculated by drawing a right triangle using the chief ray angled at θ_int, the radial displacement for this step, Δρ3, and the thickness of the air gap between the image sensor glass and the image sensor, g, as shown in Fig. 3.7. With these parameters, we are able to calculate the contribution to the chief radius for this step to be

$$\Delta\rho_3 = g\tan(\theta_\mathrm{int}). \quad (23)$$

The complete expression for the chief radius on the image sensor, ρ_IS, is the sum of the three contributions from each step, i.e., the sum of Δρ1, Δρ2, and Δρ3. This expression is

$$\rho_\mathrm{IS} = d\tan(\theta_\mathrm{int}) + t\tan(\theta_\mathrm{gl}) + g\tan(\theta_\mathrm{int}). \quad (24)$$

Writing this expression in terms of the incident polar angle, θ, by substituting (19) and (21) into (24), gives

$$\rho_\mathrm{IS} = d\tan\!\left[\sin^{-1}\!\left(\frac{\sin\theta}{n}\right)\right] + t\tan\!\left[\sin^{-1}\!\left(\frac{\sin\theta}{n^2}\right)\right] + g\tan\!\left[\sin^{-1}\!\left(\frac{\sin\theta}{n}\right)\right]. \quad (25)$$

This is the measured chief radius on the image sensor. We can see that (25) is highly nonlinear and would be challenging to invert if we wanted to calculate the incident polar angle, θ, from a measured chief radius, ρ_IS. With this in mind, we linearize this expression for the chief radius on the image sensor, ρ_IS, by using the small angle approximation of sin(θ) ≈ θ. This simplifies the above expression to

$$\rho_\mathrm{IS} = d\tan\!\left(\frac{\theta}{n}\right) + t\tan\!\left(\frac{\theta}{n^2}\right) + g\tan\!\left(\frac{\theta}{n}\right). \quad (26)$$

Now, by making another small angle approximation, tan(θ) ≈ θ, we arrive at

$$\rho_\mathrm{IS} = \frac{\theta}{n}\left(d + \frac{t}{n} + g\right). \quad (27)$$

We can see that by linearizing the true expression for the chief radius using several small angle approximations we are able to arrive at a linear relation between the chief radius, ρ_IS, and the polar angle, θ. This expression can be rearranged to be analogous to the linear approximation in (13) by writing it in the form

$$\theta = \frac{C_\pi\, n\, l_\mathrm{pixel}}{d + t/n + g}\,\rho_\mathrm{IS} = k\rho_\mathrm{IS}, \quad (28)$$

where l_pixel is the side length of a pixel on the image sensor. This result gives us an expression for the linear scaling factor, k, according to

$$k = \frac{C_\pi\, n\, l_\mathrm{pixel}}{d + t/n + g}. \quad (29)$$
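To make the preceding derivation easy to reproduce numerically, a short sketch of the chief-ray trace is given below. It evaluates the exact relation (25) and the linearized relation (27) using the parameter values quoted in the following paragraph (t = 400 μm, g = 40 μm, d = 833 μm, n = 1.54). The code is an illustrative sketch rather than the software used in this work, and the angle at which the distortion crosses 1° depends on these assumed values and on how the linear scale is defined, so its output should be read as indicative rather than as a reproduction of Fig. 3.8.

```python
import numpy as np

# Parameter values quoted in the text (micrometres, dimensionless index).
n = 1.54    # refractive index of coverslip, microlens, and image sensor glass
d = 833.0   # centre of microlens to front of image sensor glass, along the OA
t = 400.0   # image sensor glass thickness
g = 40.0    # air gap between the sensor glass and the image sensor

def chief_radius(theta):
    """Exact chief radius on the image sensor, equation (25); theta in radians."""
    theta_int = np.arcsin(np.sin(theta) / n)     # refraction into the coverslip, (19)
    theta_gl = np.arcsin(np.sin(theta_int) / n)  # refraction into the sensor glass, (21)
    return d * np.tan(theta_int) + t * np.tan(theta_gl) + g * np.tan(theta_int)  # (24)

def chief_radius_linear(theta):
    """Linearized chief radius, equation (27); theta in radians."""
    return (theta / n) * (d + t / n + g)

theta = np.radians(np.linspace(0.1, 70.0, 700))
slope = (d + t / n + g) / n   # micrometres of radial displacement per radian of polar angle
distortion_deg = np.degrees((chief_radius_linear(theta) - chief_radius(theta)) / slope)

# Largest polar angle for which the radial distortion stays below 1 degree.
below = np.degrees(theta)[np.abs(distortion_deg) < 1.0]
print(f"systematic polar error < 1 deg up to roughly {below.max():.0f} deg")
```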
At the beginning of this section, we defined radial distortion as the main cause of systematic polar error. We now see that this error arises from the fact that the chief radius, ρ_IS, is not linearly related to the polar angle, θ, as is assumed in equation (13). Thus, the estimated polar angle, θ_IS, differs from the true polar angle, θ. The difference between the angles is minor for small polar angles, since the small angle approximation of the nonlinear expression for ρ_IS gives a linear relationship. As the polar angle increases, however, the small angle approximation becomes poor and systematic polar error is introduced by way of radial distortion. An illustration of the deviation between the chief radius, ρ_IS, and the linear approximation of radial displacement on the image sensor is seen in Figure 3.8. The results in this figure assume the following values for t, g, d, and n. From the OV7720 datasheet, typical values for t and g are 400 μm and 40 μm, respectively. We assume that the image sensor is at the focal length of the microlens system by making d = 833 μm. The refractive indices for the coverslip, microlens, and image sensor glass are all equal to n = 1.54. By putting these values into (29), we are able to calculate a theoretical linear scaling factor of k = 0.93 °/pixel. This validates the claim made in Section 3.3.1.1, which stated that a reasonable value for k was 1°/pixel. Figure 3.8 (a) shows the chief radius, ρ_IS, and the linear approximation of radial displacement on the image sensor as a function of the polar angle, θ. Figure 3.8 (b) shows the difference between the two curves in (a), converted into degrees using the linear scaling factor, k, as a function of the polar angle, θ.

Figure 3.8 Radial distortion on an 800-μm-diameter hemispherical microlens with a refractive index of n = 1.54 and focal length of 1140 μm is shown. In (a), the chief radius, ρ_IS, and linear approximation of radial displacement on the image sensor are shown versus the polar angle, θ. In (b), the difference between the chief radius, ρ_IS, and linear approximation of radial displacement on the image sensor is shown versus the polar angle, θ.

We defined the upper bound of the polar angle, i.e., restricted the FOV, to be the point at which systematic polar error introduces 1° of error. Figure 3.8 indicates that this occurs at a polar angle of approximately 50°, defining a full theoretical FOV of 100°. Within this FOV, radial distortion introduces less than 1° of systematic polar error. Outside of this FOV, the linear approximation in (13) becomes poor and the measured chief radius diverges from linearity rapidly with increasing polar angle. As was discussed at the start of this section, in a practical system we could use knowledge of these nonlinearities to compensate for the radial distortion in polar AOAs beyond 50°. However, in this work, we simply use this analysis to define the upper bound of the polar angle and limit the FOV of our optical receiver to 100°. Compensation of the radial distortion can be explored in future work.

3.3.1.3 AOA Measurement Error Results

Sections 3.3.1.1 and 3.3.1.2 sought to derive models for the random and systematic errors of AOA-based estimation, respectively. In this section, we show results for azimuthal and polar angle measurements made by our optical receiver and compare these results to the models presented in the previous sections.
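As a concrete reference point for that comparison, the short Monte Carlo sketch below applies the pixel-quantization model of Section 3.3.1.1 directly: a beamspot centre is generated for a known azimuthal and polar angle, rounded to the nearest pixel, and the angles are re-estimated from the rounded coordinates. The scale factor of 1°/pixel is the nominal value used above; the test angles, trial count, and sub-pixel dithering are arbitrary choices, and the simulated spreads reflect quantization alone, so they should sit at or below both the worst-case bounds derived earlier and the measured errors that follow.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 1.0  # linear scaling factor, degrees of polar angle per pixel (nominal value from the text)

def quantized_aoa_errors(theta_deg, phi_deg, trials=5000):
    """Standard deviations of azimuthal and polar error caused by pixel quantization alone."""
    rho = theta_deg / K                                  # nominal chief radius in pixels, per (13)
    # continuous ("true") beamspot centres, dithered within one pixel of the nominal spot
    x = rho * np.cos(np.radians(phi_deg)) + rng.uniform(-0.5, 0.5, trials)
    y = rho * np.sin(np.radians(phi_deg)) + rng.uniform(-0.5, 0.5, trials)
    theta_true = K * np.hypot(x, y)
    phi_true = np.degrees(np.arctan2(y, x))
    xq, yq = np.round(x), np.round(y)                    # quantization to the pixel grid
    theta_est = K * np.hypot(xq, yq)
    phi_est = np.degrees(np.arctan2(yq, xq))
    dphi = (phi_est - phi_true + 180.0) % 360.0 - 180.0  # wrap the azimuthal error
    return np.std(dphi), np.std(theta_est - theta_true)

for theta in (5.0, 15.0, 30.0, 50.0):
    az_err, pol_err = quantized_aoa_errors(theta, phi_deg=25.0)
    print(f"theta = {theta:4.1f} deg: azimuthal std = {az_err:.2f} deg, polar std = {pol_err:.2f} deg")
```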
The results for estimated azimuthal and polar angles are also shown for the front facing camera of an LG Nexus 5 as a typical smartphone camera to illustrate the shortcomings of this device for use as an optical receiver. We can now compile our conclusions for the azimuthal and polar errors. Section 3.3.1.1 indicated that the random azimuthal error is greatest for small polar angles and unrelated to the azimuthal angle, while the random polar error is independent of both azimuthal and polar angles. Section 3.3.1.2 indicated that the systematic azimuthal error is negligible due to microlens symmetry, while the systematic polar error is greatest for large polar angle and independent of azimuthal angle. Figure 3.9 shows results that test these hypotheses for our optical receiver. Figure 3.9 (a) shows the azimuthal error versus the azimuthal angle, . Figure 3.9 (b) shows the azimuthal error versus the polar angle, . Figure 3.9 (c) shows the polar error versus the azimuthal angle, . Figure 3.9 (d) shows polar error versus the polar angle, . We will interpret Fig. 3.9 by comparing its results to the claims made in Sections 3.3.1.1 and 3.3.1.2.   61                                        Figure 3.9 Azimuthal and polar angle error results are shown versus azimuthal and polar angles for the optical receiver. In (a), the azimuthal error is plotted versus the azimuthal angle,  In (b), the azimuthal error is plotted versus the polar angle, . In (c), the polar error is plotted versus the azimuthal angle, . In (d), the polar error is plotted versus the polar angle, . In (a), exceedingly large errors that occur at small polar angles are removed to better show the remaining data points, i.e., the results are shown only for data collected at  > 15°.   We begin with azimuthal error. Section 3.3.1.1 claimed that random azimuthal error is independent of the azimuthal angle and inversely proportional to the polar angle. Figure 3.9 supports these claims. Figure 3.9 (a) shows no correlation between the azimuthal error and 62  azimuthal angle, which confirms the claim of independence between the random azimuthal error and azimuthal angle. Moreover, the lack of bias in the results of Figs. 3.9 (a) and (b) confirms the claim stated in Section 3.3.1.2 that our system has negligible systematic azimuthal error. Figure 3.9 (b) also supports the claim of an inverse dependence between the random azimuthal error and polar angle. By taking the standard deviation of the data in Fig. 3.9 (b), as a function of the polar angle, we find that a maximum random azimuthal error of 1° defines a lower bound on the polar angle of 15°. This is the lower limit of the FOV for the optical receiver. We continue by looking at the polar error. Sections 3.3.1.1 and 3.3.1.2 claimed that the random polar error is independent of both the azimuthal and polar angles, and the systematic polar error is greatest for large polar angles while being independent of the azimuthal angle. Figure 3.9 supports these claims. Figure 3.9 (c) shows that there is negligible systematic polar error with respect to azimuthal angle, which we expect due to the symmetry of the system. Figures 3.9 (c) and 3.9 (d) have constant standard deviations for the polar angle error, at approximately 0.7° after subtracting the systematic polar error, and this confirms the claim that the random polar error is independent of both the azimuthal and polar angles. 
Figure 3.9 (d) shows a growing bias in the polar error, indicating that the systematic polar error is greatest for large polar angles, as claimed earlier. This systematic polar error limits the FOV by defining an upper bound on the polar angle—as the polar angle that establishes 1° of polar error. We can see from Fig. 3.9 (d) that this acceptable level of polar error limits the polar angle to be below 50°. Thus, given the lower bound on the polar angle (set by the random azimuthal error) and the upper bound on the polar angle (set by the systematic polar error), the FOV of the optical receiver is defined by 15° <  < 50°. In this range of polar angles, the optical receiver can function with an AOA error (standard deviation) at or below 1°. 63  As a benchmark for comparison, the above results for the azimuthal and polar errors are contrasted to those of the front facing camera in a typical (LG Nexus 5) smartphone. Since we do not know the exact internal configuration of this receiver, we will assume it suffers from the same imaging constraints as our optical receiver, i.e., it has large random azimuthal error for small polar angles and large systematic polar error for large polar angles. Figure 3.10 (a) shows the azimuthal error versus the azimuthal angle, . Figure 3.10 (b) shows the azimuthal error versus the polar angle, . Figure 3.10 (c) shows the polar error versus the azimuthal angle, . Figure 3.10 (d) shows polar error versus the polar angle, .  64  Figure 3.10 Azimuthal and polar error results versus azimuthal and polar angles are shown for the LG Nexus 5 smartphone’s front facing camera. In (a), the azimuthal error is plotted versus the azimuthal angle,  In (b), the azimuthal error is plotted versus the polar angle, . In (c), the polar error is plotted versus the azimuthal angle, . In (d), the polar error is plotted versus the polar angle, .   From these results for the LG Nexus 5 we can see that both the random azimuthal error (at a standard deviation of 0.6°) and the random polar error (at a standard deviation of 0.4°) are better than those of our optical receiver. However, this smart phone, like our optical receiver, suffers from systematic polar error, i.e., radial distortion, as seen in Fig. 3.10 (d). This systematic polar error reaches 1° at a polar angle of 30°, which defines an FOV of 60° for the smartphone. It should be noted that, even in the absence of this systematic polar error, the typical smartphone would be limited to a polar angle of approximately 30° because this angle corresponds to the point at which the radial displacement of the chief ray, , runs off the edge of the image sensor. Overall, the Nexus 5 outperforms our optical receiver in terms of random error, but its large systematic error at relatively small polar angles will limit its application to OW positioning systems—which seek wide FOVs for reduced DOP and reduced position error. In summary, both the random azimuthal error and random polar error were investigated in this section for the developed optical receiver. These errors were kept below 1°, for a FOV defined by polar angles in the range of 15° <  < 50°. The lower bound on the polar angle was defined by the random azimuthal error, which grows rapidly as the polar angle decreases. The upper bound on the polar angle was defined by the systematic polar error, which introduced radial distortion that grows along with the polar angle. These constraints define the FOV of our optical receiver to be 100°. 
Note that this definition of the FOV ignores the lower bound on the polar angle, but this is 65  acceptable given that OW positioning is best performed (with low DOP and position error) at large polar angles. Within the defined FOV, our optical receiver has an AOA error of AOA = 1°.   3.3.2 Performance for Optical Beacon Identification In Section 3.3.1 we addressed the first performance metric for our optical receiver, AOA estimation. In this section we will turn our attention to the second performance metric, optical beacon identification. Once the optical receiver has isolated an azimuthal and polar angle to define the AOA for each beamspot, it must identify the optical beacon that created each beamspot on the image sensor. Positioning can then be carried out. The LS algorithm is used for this positioning, and it requires the optical beacon identification in order to apply triangulation with the AOA angles. Any AOAs whose optical beacons have not been identified cannot be used by the LS algorithm. This leads to concerns on reliability, as a minimum of two observable and identifiable optical beacons is required to carry out positioning. This also leads to concerns on accuracy, as greater numbers of observable and identifiable optical beacons typically yield improved positioning accuracy. In this section, we will begin by discussing the typical method of optical beacon identification in the literature. This method makes use of amplitude modulation, with a unique identification frequency for each optical beacon [18], and it is referred to as frequency-based identification in this thesis. It must be noted, however, that the low sampling frequency of our camera architecture is not well suited to this method of optical beacon identification. To remedy this, we introduce a method of optical beacon identification that takes advantage of the colour detection ability of the camera architecture. The result is called colour-frequency-based identification, and we study its effectiveness in our optical receiver. The typical frequency-based identification method and its 66  challenges are discussed in Section 3.3.2.1. Our colour-frequency-based identification method is introduced in Section 3.3.2.2.   3.3.2.1 Frequency-based Identification The most common method of optical beacon identification in the literature is frequency-based identification [18]. In this optical beacon identification method, each optical beacon is amplitude modulated with a sinusoidal signal having a fixed frequency. These frequencies will be referred to as identifier frequencies in this thesis. A unique frequency is assigned to each optical beacon. The amplitude modulated signal is sampled by the optical receiver and the identifier frequency is extracted using a fast Fourier transform (FFT), and other spectral analysis/filtering techniques. By using a lookup table, we match each identifier frequency with its optical beacon, to allow the corresponding AOA’s azimuthal and polar angles to be used in the LS positioning algorithm. In the camera architecture specifically, the amplitude modulated signal transmitted by each optical beacon is focused down to an amplitude modulated beamspot on the image sensor. An FFT is then carried out on each beamspot to determine its identifier frequency. While frequency-based identification is a simple and robust method of optical beacon identification, its use with our camera architecture presents some challenges. 
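Before turning to those challenges, the identifier-extraction step itself can be sketched in a few lines. The fragment below assumes that the pixel values of one beamspot have been summed frame by frame into a time series; the 187 fps frame rate follows from the 93.5 Hz Nyquist frequency quoted below, while the candidate frequencies, window, and detection threshold are illustrative choices rather than values taken from the thesis software.

```python
import numpy as np

FRAME_RATE = 187.0          # frames per second (twice the 93.5 Hz Nyquist frequency quoted below)
CANDIDATES = (40.0, 80.0)   # identifier frequencies to search for (illustrative values)

def identifier_frequencies(beamspot_series, tolerance_hz=2.0, threshold=0.25):
    """Return the candidate identifier frequencies present in a beamspot's intensity series."""
    series = np.asarray(beamspot_series, dtype=float)
    series = series - series.mean()                      # remove the DC (background) component
    spectrum = np.abs(np.fft.rfft(series * np.hanning(series.size)))
    freqs = np.fft.rfftfreq(series.size, d=1.0 / FRAME_RATE)
    peak = spectrum.max()
    if peak == 0.0:
        return []                                        # no AC modulation present
    spectrum = spectrum / peak                           # normalize to the strongest AC line
    found = []
    for f in CANDIDATES:
        band = np.abs(freqs - f) <= tolerance_hz
        if band.any() and spectrum[band].max() >= threshold:
            found.append(f)
    return found

# Example: a beamspot modulated at 80 Hz on top of a DC background.
t = np.arange(256) / FRAME_RATE
demo = 1.0 + 0.1 * np.sin(2 * np.pi * 80.0 * t)
print(identifier_frequencies(demo))   # expected: [80.0]
```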
The greatest challenge is the limited range of sampling frequencies over which our image sensor can operate. This limited range of sampling frequencies restricts the highest identifier frequency that can be used in our OW positioning system. The highest identifier frequency that can be measured is at half the frame rate of the image sensor. This frequency is referred to as the Nyquist frequency and for our OV7720 image sensor it is equal to fNyquist = 93.5 Hz. Any frequencies at or above this frequency will not be reliably identified using the FFT.  67  While the low sampling rate of our image sensor defines the maximum identifier frequency, it is the vision of people in the environment that defines the minimum identifier frequency. The maximum frequency that the human eye can discern is called the flicker frequency, and it is used to define the minimum identifier frequency. The flicker frequency depends on the intensity of the modulated signal and the intensity of the background light, typically around 65 Hz [36], and our OW positioning system makes use of these dependencies. We design our optical beacon power output to contain a large background (DC) component with a small modulated (AC) signal component at the identifier frequency. This allows us to operate below the flicker frequency given in the literature, with little discernible flickering seen by the human eye. In this thesis, our flicker frequency is taken to be fflicker = 35 Hz. Knowing that our choice of identifier frequencies is confined to be between fflicker and fNyquist, we will know the frequencies that can be used. The frequency range (i.e., bandwidth) between fflicker and fNyquist for our OW positioning system is 58.5 Hz. This frequency range should support as many unique identifier frequencies as possible. However, it is here that another limitation of the flicker frequency becomes apparent: flicker frequency not only defines the minimum allowable identifier frequency but also the minimum spacing between identifier frequencies. For example, we require a 2 × 35 Hz = 70 Hz frequency range to have three identifier frequencies, but we only require a 35 Hz frequency range to have two identifier frequencies. Thus, our available frequency range of 58.5 Hz can be implemented with two identifier frequencies, having a spacing of 35 Hz. It is important to note that the use of two identifier frequencies would only allow for the use of four optical beacons, if each optical beacon has only one identifier frequency. This would present a challenge for OW positioning, as four optical beacons would give poor positioning reliability and performance. Ideally, we would like to have more than 4 uniquely-identified optical beacons, 68  according to the optical beacon cells discussed in Chapter 2, but this will require further consideration. Clearly, some other type of optical beacon identification method is required if we are to use the camera architecture, with its low sampling frequency, and the following subsection investigates colour-frequency-based identification as such a method.  3.3.2.2 Colour-frequency-based Identification In the previous section, we introduced the frequency-based identification method, and we found that the camera architecture of our optical receiver is poorly suited to utilize this method. In this section, we introduce an alternative optical beacon identification method which takes advantage of the colour imaging capabilities of the OV7720 image sensor to identify optical beacons. 
The method is used in conjunction with the aforementioned frequency-based identification method, and it is referred to here as colour-frequency-based identification. The principle behind this method is quite simple. It makes use of the fact that the OV7720 image sensor has three types of pixels, corresponding to the colours red, green, and blue (RGB). Thus, we can carry out frequency-based identification for each of these colours. Now, each beamspot can be identified by two identifier frequencies and the three distinct RGB components—instead of just two identifier frequencies in total. A single example of this would be a theoretical beamspot with its red pixel modulated at an identifier frequency, f1, its green pixel modulated at a different identifier frequency, f2, and its blue pixel modulated with both identifier frequencies, f1 and f2.  At the optical beacon, the transmitting of unique frequencies on each of the RGB components is best carried out if the beacon is a white-coloured RGB LED with a separate input pin for each colour component. The combination of the RGB components still makes the emitted light appear to be white to a user, but the optical receiver is now able to discern each RGB component 69  separately. We can use this principle, with different DC biasing levels on each RGB component, to maintain a white light balance and modulation for each optical beacon. Based on these factors, we will use an RGB LED, being the Cree PLCC6-CLV6A, for our optical beacons in the remainder of this thesis.  Given that colour-frequency-based identification allows the optical receiver to identify more optical beacons than frequency-based identification, and that RGB LEDs can enable colour-frequency-based identification, we would now like to determine the number of optical beacons that can be employed in an AOA-based OW positioning system. The two identifier frequencies can be applied in four distinct combinations, being i. DC, ii. f1, iii. f2, iv. f1 and f2. The three colour components in the RGB LEDs are i. Red, ii. Green, iii. Blue. Thus, we can have four distinct combinations of identifier frequencies for each of the three colour components in the RGB LEDs, and this gives 43 = 64 unique combinations of colour-frequency-based identifiers. If we require our system to have modulation with at least one finite identifier frequency on each RGB component, i.e., eliminate the DC frequency, it will be possible to have 33 = 27 unique combinations of colour-frequency-based identifiers. We will proceed with the latter case, with at least one finite identifier frequency on each RGB component, because AC modulation will allow the detected signals to be distinguishable from ambient (typically DC) noise and light sources in the environment via AC narrowband filtering. The AC narrowband filtering at the modulation frequencies can reject the power from ambient noise and light sources, and this improves the signal-to-noise ratio. However, like all optical beacon identification methods, colour-frequency-based identification has challenges that must be addressed. The first challenge that must be considered for colour-frequency-based identification is rooted in our assumption that the three colour components of the RGB LED have perfectly narrow 70  emission spectra. This is not the case in reality. Each of the colour components in the RGB LED will have a finite bandwidth—which could contribute to leakage between neighbouring colours and thus colour interference. 
With this in mind, spectral analyses are carried out on the RGB LED of interest, and the resulting emission spectra for the RGB components are shown in Figure 3.11. The figure shows the normalized power spectral density as a function of wavelength for the red, green, and blue components of the white light LED. It is clear from these results that the RGB components have emission peaks that are isolated from each other. Although, it should be noted that there is some leakage between RGB components, so it will be necessary to take this leakage into account and thereby minimize colour interference.   Figure 3.11 Normalized power spectral density is shown as a function of wavelength for the red (in red), green (in green), and blue (in blue) components of the white light LED. The results are acquired by way of a Thorlabs CCS100 Spectrometer. The RGB components of the white light LED (Cree PLCC6-CLV6A) are individually activated.  71   The second challenge with colour-frequency-based identification is rooted in the assumption that each RGB pixel on the image sensor responds only to its one colour and that it blocks out all other colours, much like an optical bandpass filter. In reality, the response of each pixel is simply centred on its colour—with other colours yielding some (potentially non-negligible) level of response. Specifically, the red pixel has a peak at red (615 nm) in its responsivity curve, the green pixel has a peak at green (540 nm) in its responsivity curve, and the blue pixel has a peak at blue (460 nm) in its responsivity curve, but we recognize that each responsivity curve has a finite bandwidth and thus the potential for colour interference. With this in mind, the representative responsivity curves for the RGB pixels of the OV7720 image sensor are shown in Figure 3.12. The curves that are shown are normalized responsivity curves for the RGB pixels in a Thorlabs DCC1645C (Silicon CMOS) image sensor, and such curves would be similar to those of the employed OV7720 (Silicon CMOS) image sensor.   72   Figure 3.12 Normalized responsivity curves are plotted as a function of wavelength for the red (in red), green (in green), and blue (in blue) pixels of the Thorlabs DCC1645C (Silicon CMOS) image sensor—which are similar to those of the OV7720 (Silicon CMOS) image sensor. The data is reproduced from the DCC1645C CMOS image sensor data sheet.  The spectral response in Fig. 3.12 shows significant colour interference in the green portion of the spectrum around 550 nm. At the extremes of the wavelength range of 500 nm to 600 nm, we see that the red and green responsivities are over 50% of the peak green responsivity. Thus, the green response may show up in the responses of the red and blue colours, and vice versa. It is also worth noting that none of the three pixels has its responsivity go completely to zero. The responsivities never go below 10%. This means that some level of colour interference is unavoidable. (However, as we will see later, it is possible to greatly reduce the colour interference by restricting the number of colours that are used for identification.) 73  It is apparent from the above analyses that colour interference can manifest itself from the finite bandwidth of the emission spectra (for the colour components in the RGB LED) and/or the finite bandwidth of the responsivity curves (for the RGB pixels in the image sensor). 
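One way to picture this combined effect is as a 3 × 3 crosstalk matrix, in which each entry is the overlap of one LED emission spectrum with one pixel responsivity curve. The sketch below builds such a matrix from hypothetical Gaussian spectra; the responsivity peak wavelengths (615, 540, and 460 nm) are the values quoted above, but the emission peaks and all of the spectral widths are guesses, so this illustrates the mechanism only. The measured ratios in Table 3.2 are larger than an idealized Gaussian model suggests because the real curves have broad tails that never fall below roughly 10%.

```python
import numpy as np

wavelengths = np.linspace(400.0, 700.0, 601)   # nm

def gaussian(center_nm, fwhm_nm):
    sigma = fwhm_nm / 2.355
    return np.exp(-0.5 * ((wavelengths - center_nm) / sigma) ** 2)

# Hypothetical spectra: emission peaks/widths are guesses for a white RGB LED; the
# responsivity peaks (615, 540, 460 nm) are from the text, with guessed widths.
emission = {"red": gaussian(620, 25), "green": gaussian(525, 35), "blue": gaussian(465, 25)}
response = {"red": gaussian(615, 90), "green": gaussian(540, 100), "blue": gaussian(460, 80)}

# Crosstalk matrix: overlap of each LED colour with each pixel type, normalized to the
# intended pixel -- the same normalization as the colour interference ratios in the text.
for led, s in emission.items():
    signals = {pix: float(np.sum(s * r)) for pix, r in response.items()}
    ratios = {pix: round(v / signals[led], 2) for pix, v in signals.items()}
    print(led, ratios)
```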
Fortunately, it may be possible to largely eliminate colour interference for colour-frequency-based identification by simply comparing the relative weight of pixel signal amplitudes from the RGB pixels in the image sensors, rather than simply looking for the RGB pixels that are active. As an example, red light illuminating the image sensor can be identified by the fact that it yields a strong pixel signal amplitude from the red pixel and weaker pixel signal amplitudes on the green and blue pixels—and not by the fact that it yields a signal only on the red pixel. To test this hypothesis for our colour-frequency-based identification method, we implement an analysis of colour interference ratios. We activate a single colour component of the RGB LED to produce illumination on the optical beacon, using various intensities. We then measure the pixel signal amplitudes for each of the RGB pixels, with various intensities, and compare the amplitudes via colour interference ratios. The colour interference ratio is defined here as the ratio of an undesired pixel signal amplitude to the desired pixel signal amplitude, for illumination by a given colour component in the RGB LED. For example, if the red component of the RGB LED is activated, and the red, green, and blue pixels of the image sensor respond with pixel signal amplitudes of 100, 50, and 25, respectively, the colour interference ratios for red illumination on the red, green, and blue pixels would be 1, 0.5, and 0.25, respectively. Such an analysis is shown in Fig. 3.13. Figure 3.13 (a) shows the RGB pixel signal amplitudes for activation of the red component of the white light LED, with illumination at varying intensities. Figure 3.13 (b) shows the corresponding colour interference ratios for the same red illumination. Here we see that there are finite pixel signal amplitudes and colour interference ratios for the green and blue pixels, with respect to the red pixel. Also, the 74  colour interference ratios of the green pixel are always higher than those of the blue pixel, but this is understandable since the green wavelengths are closer to those of the red wavelengths, in comparison to the blue wavelengths. We also see that the colour interference ratios are relatively constant over a wide range of intensities, which makes our analysis simpler. We can compare constant values for colour interference ratios, without regard to the intensity. The only regime where the colour interference ratio is not constant is at low intensities, but the OW positioning system can be designed with sufficiently powerful optical beacons to prevent such low intensities. For the purposes of this study, the system will be operated with illumination intensities above 0.5 W/m2, for an image sensor exposure rate of approximately 5% of the maximum, although it is noted that the system can be operated with lower intensities by simply increasing this exposure rate. With these prescribed conditions in mind, colour interference will be deemed to be sufficiently small if the colour interference ratio remains below 0.5. The colour interference results for green and blue illumination are similar to this case of red illumination and are shown in Appendix B, for illumination under various intensities. Table 3.2 shows the results for colour interference ratios for all colour combinations.                                            
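These ratios also make the subsequent colour choice easy to automate. The short sketch below encodes the measured ratios reported in Table 3.2 and selects the pair of colours whose worst-case mutual interference is smallest, applying the 0.5 acceptance threshold defined above; the data structure and function name are illustrative only.

```python
from itertools import combinations

# Colour interference ratios from Table 3.2 (rows: illuminating LED colour,
# columns: responding pixel colour), at 0.5 W/m^2 and ~5% exposure.
RATIOS = {
    "red":   {"red": 1.00, "green": 0.49, "blue": 0.32},
    "green": {"red": 0.49, "green": 1.00, "blue": 0.87},
    "blue":  {"red": 0.38, "green": 0.72, "blue": 1.00},
}
THRESHOLD = 0.5   # maximum acceptable colour interference ratio

def best_colour_pair():
    """Pick the two colours with the smallest worst-case mutual interference."""
    scored = []
    for a, b in combinations(RATIOS, 2):
        worst = max(RATIOS[a][b], RATIOS[b][a])   # interference in either direction
        scored.append((worst, a, b))
    worst, a, b = min(scored)
    return (a, b), worst, worst < THRESHOLD

print(best_colour_pair())   # (('red', 'blue'), 0.38, True)
```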
(a)                 (b) 75  Figure 3.13 Results are shown for activation of the red component of the RGB LED, yielding red illumination on the image sensor at varying intensities. The results show the (a) normalized pixel signal amplitude (of the RGB pixels) and (b) corresponding colour interference ratios for the red illumination.   Table 3.2 Summary of colour interference ratios. The rows correspond to the activated colour (red, green, and blue) components of the RGB LED providing illumination. The columns correspond to the colour interference ratio, which is the normalized pixel signal amplitudes of the RGB (red, green, and blue) pixels in the illuminated image sensor. The results shown are for an illuminating intensity of 0.5 W/m2 (the minimum acceptable value) at an exposure rate of approximately 5% of the maximum. The proposed system applies modulation only to the red and blue LEDs, which leads to colour interference ratios with well-defined recognition of the desired colour and rejection of undesired colours by the red and blue pixels.    Colour Interference Ratio   Red Green Blue Illumination Colour Red 1.00 0.49 0.32 Green 0.49 1.00 0.87 Blue 0.38 0.72 1.00  It is at this point that we can make design decisions for the operational colours of our OW positioning system. We can see that the colour interference ratio is lowest between red and blue, which makes sense, as these colours are especially distant in the visible spectrum. In contrast, the colour interference ratio between green and blue is the highest, with approximately two thirds of the response on the desired pixel being received by the incorrect pixel (which is above the maximum acceptable colour interference ratio of 0.5). To this end, the OW positioning system will make use of colour-frequency-based identification with only red and blue colours being used for identification. Table 3.2 shows that this restricted operation yields colour interference ratios with well-defined recognition and rejection by the red and blue pixels. The green component of the RGB LED will still be activated, with a DC bias, to maintain a white light colour balance. 76  Ultimately, we will be able to identify 32 = 9 unique combinations of colour-frequency-based identifiers. This number of combinations is sufficient for implementing the optical beacon distributions introduced in Chapter 2. The red and blue components of the RGB LED will be activated with two frequencies, as defined here. We stated in Section 3.3.2.1 that the minimum allowable identifier frequency corresponds to the flicker frequency, fflicker = 35 Hz, and the maximum allowable identifier frequency corresponds to the Nyquist frequency, fNyquist = 93.5 Hz. We also stated that the minimum allowable separation between the identifier frequencies is equal to the flicker frequency. Based on these constraints, identifier frequencies of f1 = 45 Hz and f2 = 90 Hz would be natural choices. The 45 Hz minimum identifier frequency and frequency separation can avoid flicker frequency affects. However, this choice has a close proximity (3.5 Hz) between f2 and the Nyquist frequency. By placing the upper identifier frequency so close to the Nyquist frequency, we run the risk of aliasing at our upper identifier frequency. To alleviate this, we will instead select identifier frequencies of f1 = 40 Hz and f2 = 80 Hz. 
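A few lines make the constraints behind this selection explicit. The flicker and Nyquist limits are the values derived in Section 3.3.2.1; the helper below simply reports whether a candidate set of identifier frequencies respects them and how much margin remains below the Nyquist frequency. The function is an illustrative check, not part of the thesis software.

```python
F_FLICKER = 35.0    # Hz, minimum identifier frequency and minimum spacing (Section 3.3.2.1)
F_NYQUIST = 93.5    # Hz, half the image sensor frame rate

def check_identifier_plan(frequencies):
    """Report how a set of identifier frequencies sits against the flicker and Nyquist limits."""
    freqs = sorted(frequencies)
    spacing = min(b - a for a, b in zip(freqs, freqs[1:])) if len(freqs) > 1 else None
    return {
        "above_flicker": min(freqs) >= F_FLICKER,
        "below_nyquist": max(freqs) < F_NYQUIST,
        "spacing_ok": spacing is None or spacing >= F_FLICKER,
        "nyquist_margin_hz": round(F_NYQUIST - max(freqs), 1),
    }

print(check_identifier_plan([45.0, 90.0]))  # valid, but only 3.5 Hz of margin to the Nyquist frequency
print(check_identifier_plan([40.0, 80.0]))  # adopted plan, with 13.5 Hz of margin
```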
This choice enjoys the same benefits as our previous identifier frequencies, but it also lessens aliasing by having 13.5 Hz between the upper identifier frequency and the Nyquist frequency.  Now that we have determined which colours and frequencies to use in our colour-frequency-based identification method, we can modulate the red and blue components of the RGB LEDs with the frequencies and test the operation of the optical receiver. To do this, we created an OW positioning system using nine optical beacons arranged in a 3 × 3 multi-cell square optical beacon configuration. Each of these optical beacons had their RB components modulated with a unique combination of either f1 = 40 Hz, f2 = 80 Hz, or f1 = 40 Hz and f2 = 80 Hz. All of the optical beacons were imaged using our optical receiver giving a set of nine brightly focused beamspots. The FFT 77  was taken for a representative beamspot and the results are shown in Fig. 3.14. This optical beacon has its red LED modulated at f2 = 80 Hz, yielding the red spike in the figure, and its blue LED modulated at f1 = 40 Hz, yielding the blue spike in the figure. An inset showing the beamspots as imaged by the optical receiver is included in the figure and the beamspot corresponding to the optical beacon identified is circled in red. By referencing identifier frequencies with the look-up table given in Table 3.3, we can see that Fig. 3.14 corresponds to optical beacon 2.   Figure 3.14 Optical beacon identification results are shown for a representative optical beacon. The red line corresponds to the frequencies received by the red pixels, while the blue line corresponds to the frequencies received by the blue pixels. The amplitudes of both colour spectra are normalized to 1. The inset shows the nine beamspots as imaged by the optical receiver with the beamspot corresponding to the data circled in red.     78  Table 3.3 Identifier frequency look-up table. Combinations of frequencies, which are modulated onto the red and blue components of the RGB LEDs (which act as optical beacons), are defined to uniquely identify each optical beacon (by its optical beacon number). Beacon Number Red Frequency (Hz) Blue Frequency (Hz) 1 40 80 2 80 40 3 80 40, 80 4 40, 80 80 5 40, 80 40, 80 6 40, 80 40 7 40 40 8 40 40, 80 9 80 80  3.4 Summary In this chapter, we determined which optical receiver architecture was best suited to serve in our OW positioning system, gave a design for this optical receiver architecture, then analysed the performance of this design both theoretically and experimentally. We first determined that the camera architecture was superior to the photodiode architecture due to the camera architecture’s wider FOV and lower AOA error. We then presented a design for the camera architecture using an OV7720 image sensor and a hemispherical microlens. Finally, we analysed the random and systematic error sources for this architecture as well as its ability to identify optical beacons. It was found that the camera architecture has a FOV up to 100°, defined by polar angles in the range of 15° <  < 50°, with 1° of random AOA error within this range. It was also found to be capable of identifying up to nine optical beacons using a colour-frequency identification method between the optical beacons and optical receiver. The ultimate positioning performance of such a system will be tested in the upcoming chapter. 
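Before moving on to the positioning results, the identification logic developed in this chapter can be collected into a single lookup step. The sketch below inverts Table 3.3: given the sets of identifier frequencies detected on the red and blue channels of a beamspot (for example, by an FFT routine like the one sketched in Section 3.3.2.1), it returns the corresponding optical beacon number. The function and table names are illustrative.

```python
# Identifier frequency plan of Table 3.3: (red frequencies, blue frequencies) -> beacon number.
BEACON_TABLE = {
    (frozenset({40}),     frozenset({80})):     1,
    (frozenset({80}),     frozenset({40})):     2,
    (frozenset({80}),     frozenset({40, 80})): 3,
    (frozenset({40, 80}), frozenset({80})):     4,
    (frozenset({40, 80}), frozenset({40, 80})): 5,
    (frozenset({40, 80}), frozenset({40})):     6,
    (frozenset({40}),     frozenset({40})):     7,
    (frozenset({40}),     frozenset({40, 80})): 8,
    (frozenset({80}),     frozenset({80})):     9,
}

def identify_beacon(red_freqs, blue_freqs):
    """Map detected red/blue identifier frequencies (in Hz) to an optical beacon number."""
    key = (frozenset(red_freqs), frozenset(blue_freqs))
    return BEACON_TABLE.get(key)   # None if the combination is not a valid identifier

# The beamspot of Fig. 3.14: red channel modulated at 80 Hz, blue channel at 40 Hz.
print(identify_beacon({80}, {40}))   # -> 2
```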
79  Chapter 4: Positioning Results The ultimate goal of an AOA-based OW positioning system is to determine the position of the optical receiver using surrounding optical beacons. In the previous two chapters, we investigated the design and operation of the optical beacon geometries and the optical receiver. In this section, we will show results for an OW positioning system using the design recommendations and specifications given in the previous chapters. In Chapter 2, we recommended the use of a square optical beacon geometry to reduce DOP and thus reduce position error. We also recommended operating as close to the optical beacons as possible while keeping all optical beacons within the FOV of the optical receiver. This led to demands for a sufficiently wide FOV for the optical receiver. In Chapter 3, we presented the design of an optical receiver with specifications corresponding to an AOA error of AOA = 1° and a FOV of 100°. From the operation of this optical receiver, we determined that it could identify up to nine optical beacons using colour-frequency-based identification. In this chapter, an OW positioning system is analysed and constructed according to the recommendations in Chapter 2 and the specifications in Chapter 3. The system is comprised of nine optical beacons arranged in a 3 × 3 multi-cell square optical beacon geometry with an optical beacon geometry side length of a = 100 cm. The use of nine optical beacons (the maximum number being distinguishable) reduces DOP and thereby improves position accuracy. The 100° FOV of our optical receiver dictates that the separation between the optical beacons and the plane in which positioning is carried out should be h = 119 cm. In our setup, we select a separation between the optical beacons and the plane in which positioning is carried out to be h = 110 cm. This slight reduction introduces additional (albeit small) AOA error, due to a corresponding need for a slightly wider FOV, but we will see that this error is minor and only manifests itself when positioning is 80  carried out at the corners of the optical beacon geometry. The OW positioning system is shown in Fig. 4.1.    Figure 4.1 Optical wireless positioning system setup for our experiment is shown. A 3 × 3 multi-cell square optical beacon geometry containing nine optical beacons is used. The dimensions are a = 100 cm and h = 110 cm.  The specifications that are defined for the OW positioning system yield the DOP contour shown in Fig. 4.2. The simulation gives a mean DOP of E[DOP(x, y, z = 0)] = 1.68 cm/°. By assuming that the AOA error is AOA = 1°, according to Chapter 3, the OW positioning system has a mean (3-D) position error of E[p(x, y, z = 0)] = 1.68 cm.  81   Figure 4.2 Dilution-of-precision and position error contours are shown for our OW positioning system in the (x, y, z = 0) plane. The OW positioning system has a 3 × 3 multi-cell square optical beacon geometry with a side length of a = 100 cm and a height of h = 110 cm. The values for DOP are shown on the left axis, in unis of cm/°, while the values for position error are shown on the right axis, in units of cm.  Experimental analyses of positioning are carried out using the OW positioning system shown in Fig. 4.1 and the optical receiver described in Chapter 3. Our test involves rastering the optical receiver over a grid in the plane beneath the optical beacon geometry with a spacing between steps of 25 cm. The algorithm used to calculate positions from AOAs is shown in Appendix C. 
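As an aside, the h = 119 cm figure quoted above follows from the geometry and the 100° FOV: at a corner of the 1 m² working area, the farthest optical beacon lies a horizontal distance of a√2 away and must still fall within the 50° maximum polar angle. The few lines below restate that geometry; they reproduce the quoted value, although the thesis may arrive at it through the DOP analysis of Chapter 2 rather than this shortcut.

```python
import math

A = 100.0            # cm, side length of the square optical beacon geometry
THETA_MAX = 50.0     # degrees, maximum polar angle (half of the 100 degree FOV)

# Worst case: receiver at one corner of the 1 m^2 working area, farthest beacon at the
# diagonally opposite corner of the geometry, a horizontal distance of A*sqrt(2) away.
h_required = A * math.sqrt(2) / math.tan(math.radians(THETA_MAX))
print(f"required beacon height above the positioning plane: h = {h_required:.0f} cm")  # ~119 cm
```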
Three complete sets of data were taken and the results of this positioning test are shown in Fig. 4.3 for the transverse dimensions and Fig. 4.4 for the vertical dimension. To compare these results to the theoretical analysis above, we utilize the standard deviation of the measured position errors. (The standard deviation is used here because the mean is assumed to be zero.) The results in the first figure, Fig. 4.3, show the estimated and true positions in transverse dimensions across the (x, y, z = 0) plane. We can see that the estimated positons, designated by 82  blue diamonds, are in good agreement with the true positions of the optical receiver, designated by orange circles. The greatest position errors occur near the edge of the optical beacon geometry—as expected due to our operation slightly outside of the FOV of our optical receiver in these extreme locations. The standard deviations of the transverse position error in the x, y plane for each of the three sets of data were measured to be 1.19 cm, 1.04 cm, and 1.35 cm, respectively.   Figure 4.3 Experimental results are plotted for the estimated and true positions of the optical receiver in the transverse dimensions across the (x, y, z = 0) plane. Position estimates are taken in the plane beneath the optical beacons in our OW positioning system, spaced according to 25 cm steps. The estimated and true positions are plotted using blue diamonds and orange circles, respectively. The figure shows outlines of the 3 × 3 geometry of optical beacons indicated by the black circles.   83  The results in the second figure, Fig. 4.4, show the estimated and true positions in the vertical dimension across the (x, y, z = 0) plane. Note that the true positions all have z = 0, thus the results of this figure can be interpreted directly as position error. We see that the estimated positions, designated by blue diamonds in Fig. 4.4 (a), are in good agreement with the true positions, designated by orange circles. The standard deviations of the position errors in the vertical dimension for each of the three sets of data are 1.76 cm, 1.50 cm, and 1.76 cm, respectively, making it similar to the transverse position error in the (x, y, z = 0) plane. We also note that, as is apparent in this figure, the position error in the vertical dimension for our experiments is slightly dependent on the y dimension. This is due to a slight tilt of the microlens, with respect to the plane of the image sensor. Fortunately, a biasing of the results such as this can be easily compensated. Upon calibrating out this bias, with a linear relation between y and z, we obtain smaller position errors in the vertical dimension, being 1.14 cm, 1.12 cm, and 1.22 cm, respectively. All three sets of data for this calibrated optical receiver are shown in Fig. 4.4 (b).  84   (a)  (b) 85  Figure 4.4 Experimental results are shown for the estimated and true positions of the optical receiver in the vertical dimension across the (x, y, z = 0) plane for all three sets of data. Position estimates are taken in the plane beneath the optical beacons in our OW positioning system, spaced according to 25 cm steps. The estimated and true positions are plotted using blue diamonds and orange circles, respectively. The figure shows outlines of the 3 × 3 geometry of optical beacons indicated by the black circles. 
The results are shown for (a) the uncalibrated optical receiver, with a slight bias due to misalignment between the microlens and image sensor, and (b) the calibrated optical receiver, with the bias removed.  By combining the transverse and vertical position errors, we are able to obtain the overall (3-D) position errors for our OW positioning system, with the optical receiver and 3 × 3 multi-cell square optical beacon geometry. The overall (3-D) position errors are calculated by the root-mean-squared summation of the position errors in the transverse dimensions and vertical dimension. The distribution of the overall (3-D) position errors for all three runs is shown in Fig. 4.5. The mean position errors for these results are p = 1.65 cm, 1.53 cm, and 1.82 cm, respectively. By combining these results, we obtain an average position error of 1.7 cm ± 0.2 cm. Thus, the experimental results are in good agreement with the theoretical position error, from Fig. 4.2, which had a mean position error of E[p(x, y, z = 0)] = 1.68 cm. The acquiring of such values with the OW positioning system is deemed to be a notable achievement, given that it rivals the position errors seen for far more sophisticated (and expensive) positioning systems, such as TOA/TDOA [16] and RF [2] systems. 86   Figure 4.5 Experimental results are shown for the 3-D position error of the optical receiver. Position estimates are taken in the plane beneath the optical beacons in our OW positioning system, spaced according to 25 cm steps. The figure shows outlines of the 3 × 3 geometry of optical beacons indicated by the black circles.  87  Chapter 5: Conclusion In this chapter, we will summarize the conclusions for our characterisations and recommendations for an AOA-based OW positioning system as well as give recommendations for future work in this area. A summary of our analyses and conclusions will be given in Section 5.1. The recommendations for future work will be given in Section 5.2.  5.1 Summary for our analyses and conclusions In this thesis, we broke down the analysis of an AOA-based OW positioning system into two parts: the optical beacon geometry and the optical receiver. The effects of optical beacon geometry were investigated in Chapter 2, while the design and characterisation of the optical receiver was investigated in Chapter 3. The positioning results for an AOA-based OW positioning system, based on the characterisations and recommendations given in the previous chapters, were presented in Chapter 4. In Chapter 2, we investigated the effects of optical beacon geometry. We began by deriving DOP, the linear scaling factor which relates AOA error to position error. We then used DOP to measure the performance of two optical beacon geometries for several sets of system dimensions. The following are the major conclusions from this chapter: i. A square optical beacon geometry is superior to a rhombus optical beacon geometry. This was concluded based on its lower mean DOP and DOP standard deviation when the FOV of the optical receiver was taken into account, as seen in Fig. 2.5. When only h/a was taken into account, as seen in Fig. 2.4, both performed similarly. However, in a practical system, the FOV of the optical receiver will likely be the major determining factor. 88  ii. Dilution-of-precision, and thus position error, in general decreases as the h/a ratio decreases. This means that having the optical receiver relatively close to the optical beacons improves DOP to a point. 
If the optical receiver is too close to the optical beacons, however, DOP increases slightly because the LOPs coming from the optical beacons become increasingly antiparallel. Thus, there is an optimal range of h/a ratios for which DOP is minimized, and our system should operate within this range.  iii. In order for the optical receiver to operate within the optimal h/a range, it must have a large FOV. This would allow it to image optical beacons that are not directly overhead. A minimum FOV of 100° is recommended. Optical receivers with FOVs below 100° would see their DOP increase rapidly with decreasing FOV, yielding a corresponding and undesirable increase in position error. iv. Adding additional optical beacons to an optical beacon geometry improves DOP slightly but not drastically. This effect also decreases asymptotically as more optical beacons are added—leading to diminishing returns when there are well over nine optical beacons.  In Chapter 3, we designed and characterised an optical receiver. We sought an optical receiver that was able to accurately discern the AOAs of many optical beacons over a wide FOV. The following are major conclusions from this chapter: i. A camera architecture comprised of a microlens suspended overtop of an image sensor is the ideal optical receiver for AOA-based OW positioning. The other option, a photodiode based optical receiver, had a smaller FOV and larger random AOA error. From Chapter 2 we know that wide FOVs greatly reduce DOP and that position error is linearly related to AOA error so that should be minimized as well. The significantly 89  lower sampling frequency of the camera architecture reduces the number of identifiable optical beacons, but that effect is much smaller in terms of position error.  ii. The camera architecture should be fabricated using an Omnivision OV7720 CMOS image sensor, or a similar sensor, with a flipped hemispherical microlens. The applied polymer dispensing system for fabricating the hemispherical microlens was effective.  iii. The camera architecture as described above is capable of a 100° FOV with an AOA error below 1°. The FOV is dictated by the maximum undistorted polar angle, which for the camera architecture is dictated by radial distortion. The AOA error is dictated by random angle error caused by pixel quantization at the image sensor.  iv. Due to its low sampling frequency, the image sensor in the camera architecture must take advantage of its colour discerning capability to increase the number of identifiable optical beacons. With a colour-frequency method of identifying optical beacons our optical receiver can identify up to nine optical beacons. In Chapter 4, we constructed and tested an AOA-based OW positioning system based on the recommendations from Chapters 2 and 3. A 3 × 3 multi-cell square optical beacon geometry was used, since the square geometry was determined to be superior and our optical receiver is capable of identifying up to nine optical beacons. (This allows for the use of a 3 × 3 multi-cell square optical beacon geometry instead of the 2 × 2 square optical beacon geometry.) The system dimension ratios were set roughly to the maximum possible for the 100° FOV of the optical receiver. A DOP analysis of the system predicted a 1.68 cm position error assuming the optical receiver had an AOA error of 1°, as determined in Chapter 3. 
Three sets of data were collected and the experimental results for our AOA-based OW positioning system gave an average position error of 1.7 cm ± 0.2 cm over a 1 m2 working area. The following are the conclusions from this chapter: 90  i. The DOP analysis is an accurate method of determining theoretical position error (given knowledge of the AOA error). The difference between its predictions and the measured position error was slight. ii. An AOA-based OW positioning system is capable of centimetre level accuracy. This performance is comparable to complex augmented RSS-based positioning systems as well as TOA/TDOA-based positioning—but without the challenges of complex system modelling or system synchronisation. In conclusion, we provided characterisations and recommendations for an AOA-based OW positioning system illustrating how to best implement both its optical beacon geometry and its optical receiver. From these characterisations and recommendations, an AOA-based OW positioning system was built, yielding a theoretical position error of 1.68 cm and an experimental position error of 1.7 cm ± 0.2 cm over a 1 m2 working area. The full design process is depicted as roadmap in Fig. 5.1. 91   Figure 5.1 Flowchart showing the full design process for an AOA-based OW positioning system.  5.2 Recommendations for future work Recommendations for future work can be divided into recommendations regarding the optical beacon geometry and recommendations regarding the optical receiver. Regarding the optical beacon geometry, this thesis only investigated two simple optical beacon layouts which were not fully optimised. The author has confidence that they are close to optimal; however, geometries containing different numbers of optical beacons could be investigated and could yield superior performance. A logical choice might be a triangular optical beacon geometry containing three optical beacons. Care must be taken though to ensure a fair comparison of optical beacon geometries using different numbers of optical beacons, since additional optical beacons improve 92  accuracy by virtue of the LS algorithm. Regarding the optical receiver, we concluded from Chapter 2 that reducing AOA error has a much greater effect on positioning accuracy than increasing the number of identifiable optical beacons, so the biggest improvement to the optical receiver would be to decrease its pixel size on the image sensor, i.e., improve its resolution. This would allow for lower random AOA error due to pixel quantization. On the software side, by modelling radial distortion beyond the FOV of the optical receiver, the operation could be extended to wider polar angles (and greater FOVs). Another software improvement that could be made to the optical receiver is the addition of a filtering algorithm that is capable of predicting AOAs based on a solution for the current position. This would reduce the AOA error and improve the processing speed. Also along those lines, an inertial navigation system which could be incorporated, via data fusion, with the position solutions from the optical receiver—much like that currently done with GPS to enhance position accuracy. One final potential topic for future work would be to increase the sampling rate of the optical receiver, such that it could receive data as a fully-integrated package for OW communication and positioning.  93  Bibliography [1] B. Alavi and K. Pahlavan, "Modeling of the TOA-based distance measurement error using UWB indoor radio measurements," IEEE Commun. Lett., vol. 
10, no. 4, pp. 275-277, Apr. 2006. [2] M. Kok, J. D. Hol, and T. B. Schön, "Indoor positioning using ultrawideband and inertial measurements," IEEE Trans. Veh. Technol., vol. 64, no. 4, pp. 1293-1303, Apr. 2015. [3] R. Ma, Q. Guo, C. Hu, and J. Xue, "An improved WiFi indoor positioning algorithm by weighted fusion," Sensors, vol. 15, no. 9, pp. 21824-21843, Aug. 2015. [4] P. Bahl and V. N. Padmanabhan, "RADAR: an in-building RF-based user location and tracking system," in Proc. IEEE INFOCOM, 2000, vol. 2, pp. 775-784. [5] A. De Angelis, A. Moschitta, P. Carbone, M. Calderini, S. Neri, R. Borgna, and M. Peppucci, "Design and characterization of a portable ultrasonic indoor 3-D positioning system," IEEE Trans. Instrum. Meas., vol. 64, no. 10, pp. 2616-2625, Oct. 2015. [6] A. Lindo, E. Garcia, J. Ureña, M. del Carmen Perez, and A. Hernandez, "Multiband waveform design for an ultrasonic indoor positioning system," IEEE Sensors J., vol. 15, no. 12, pp. 7190-7199, Dec. 2015. [7] T. Luhmann, "Precision potential of photogrammetric 6DOF pose estimation with a single camera," Int. Soc. Photogramme., vol. 64, no. 3, pp. 275-284, May 2009. [8] K. Wang, A. Nirmalathas, C. Lim, and E. Skafidas, "High-speed optical wireless communication system for indoor applications," IEEE Photon. Technol. Lett., vol. 23, no. 8, pp. 519-521, Apr. 2011. 94  [9] K. Wang, A. Nirmalathas, C. Lim, and E. Skafidas, "4 x 12.5 Gb/s WDM optical wireless communication system for indoor applications," J. Lightw. Technol., vol. 29, no. 13, pp. 1988-1996, Jul. 2011. [10] T. Q. Wang, Y. A. Sekercioglu, A. Neild, and J. Armstrong, "Position accuracy of time-of-arrival based ranging using visible light with application in indoor localization systems," J. Lightw. Technol., vol. 31, no. 20, pp. 3302-3308, Oct. 2013. [11] T. Yamazato, I. Takai, H. Okada, T. Fujii, T. Yendo, S. Arai, M. Andoh, T. Harada, K. Yasutomi, K. Kagawa, and S. Kawahito, "Image-sensor-based visible light communication for automotive applications," IEEE Commun. Mag., vol. 52, no. 5, pp. 88-97, Jul. 2014. [12] X. Zhang, J. Duan, Y. Fu, and A. Shi, "Theoretical accuracy analysis of indoor visible light communication positioning system based on received signal strength indicator," J. Lightw. Technol., vol. 32, no. 21, pp. 4180-4186, Nov. 2014. [13] Y. Kim, J. Hwang, J. Lee, and M. Yoo, "Position estimation algorithm based on tracking of received light intensity for indoor visible light communication systems," in Proc. IEEE ICUFN Conf., 2011, pp. 131-134. [14] S. Y. Jung, S. Hann, S. Park, and C. S. Park, "Optical wireless indoor positioning system using light emitting diode ceiling lights," Microw. Opt. Techn. Let., vol. 54, no. 7, pp. 1622-1626, Jul. 2012.  [15] W. Gu, W. Zhang, J. Wang, M. A. Kashani, and M. Kavehrad, "Three dimensional indoor positioning based on visible light with gaussian mixture sigma-point particle filter technique," in SPIE OPTO, 2015, pp. 93870O-93870O. [16] D. Wu, Z. Ghassemlooy, W. D. Zhong, M. A. Khalighi, H. Le Minh, C. Chen, S. Zvanovec, and A. C. Boucouvalas, "Effect of optimal Lambertian order for cellular indoor optical 95  wireless communication and positioning systems," Opt. Eng., vol. 55, no. 6, pp. 066114, Jun. 2016.  [17] A. Taparugssanagorn, S. Siwamogsatham, and C. Pomalaza-Ráez, "A hexagonal coverage LED-ID indoor positioning based on TDOA with extended kalman filter," in IEEE 37th Annual COMPSAC, 2013, pp. 742-747. [18] S.-Y. Jung, S. Hann, and C.-S. 
Park, "TDOA-based optical wireless indoor localization using LED ceiling lamps," IEEE Trans. Consum. Electron., vol. 57, no. 4, pp. 1592-1597, Nov. 2011. [19] A. Arafa, X. Jin, and R. Klukas, "Wireless indoor optical positioning with a differential photosensor," IEEE Photon. Technol. Lett., vol. 24, no. 12, pp. 1027-1029, Jun. 2012. [20] A. Arafa, S. Dalmiya, R. Klukas, and J. F. Holzman, "Angle-of-arrival reception for optical wireless location technology," Opt. Express, vol. 23, no. 6, pp. 7755-7766, Mar. 2015. [21] A. Arafa, X. Jin, M. H. Bergen, R. Klukas, and J. F. Holzman, "Characterization of image receivers for optical wireless location technology," IEEE Photon. Technol. Lett., vol. 27, no. 8, pp. 1923-1926, Sep. 2015. [22] Y. S. Kuo, P. Pannuto, K. J. Hsiao, and P. Dutta, "Luxapose: Indoor positioning with mobile phones and visible light," in Proceedings of the 20th annual international conference on Mobile computing and networking, 2014, pp. 447-458. [23] M. H. Bergen, A. Arafa, X. Jin, R. Klukas, and J. F. Holzman, "Characteristics of angular precision and dilution of precision for optical wireless positioning," J. Lightw. Technol., vol. 33, no. 20, pp. 4253-4260, Oct. 2015. [24] H. Liu, H. Darabi, P. Banerjee, and J. Liu, "Survey of wireless indoor positioning techniques and systems," IEEE Trans. Syst., Man., Cybern. C, vol. 37, no. 6, pp. 1067-1080, Nov. 2007.  96  [25] J. Armstrong, Y. A. Sekercioglu, and A. Neild, "Visible light positioning: a roadmap for international standardization," IEEE Commun. Mag., vol. 51, no. 12, pp. 68-73, Dec. 2013. [26] P. Misra and P. Enge, Global Positioning System – Signals, Measurements, and Performance, 2nd ed. Lincoln, MA: Ganga-Jamuna Press, 2011, pp. 200-206. [27] X. Jin and J. F. Holzman, "Differential retro-detection for remote sensing applications," IEEE Sensors J., vol. 10, no. 12, pp. 1875-1883, Dec. 2010. [28] A. G. Dempster, "Dilution of precision in angle-of-arrival positioning systems," Electron. Lett., vol. 42, no. 5, pp. 291-292, Mar. 2006. [29] X. Jin, D. Guerrero, R. Klukas, and J. F. Holzman, "Microlenses with tuned focal characteristics for optical wireless imaging," Appl. Phys. Lett., vol. 105, no. 3, pp. 031102 (1-5), Jul. 2014. [30] Omnivision, "OV7720/OV7221 CMOS VGA (640x480) CameraChip Sensor with OmniPixel2 Technology," ver. 1.1, Sep. 2006. (Datasheet) [31] T. Q. Wang, Y. A. Sekercioglu, and J. Armstrong, "Analysis of an optical wireless receiver using a hemispherical lens with application in MIMO visible light communications," J. Lightw. Technol., vol. 31, no. 11, pp. 1744-1754, Jun. 2013. [32] B. Born, E. L. Landry, and J. F. Holzman, "Electrodispensing of microspheroids for lateral refractive and reflective photonic elements," IEEE Photon. J., vol. 2, no. 6, pp. 873-883, Dec. 2010. [33] E. Hecht, Optics, 2nd ed.: Addison-Wesley Publishing Company, 1987, pp. 212. [34] K. Khoshelham and S. O. Elberink, "Accuracy and resolution of kinect depth data for indoor mapping applications," Sensors, vol. 12, no. 2, pp. 1437-1454, Feb. 2012. 97  [35] A. J. Woods, T. Docherty, and R. Koch, "Image distortions in stereoscopic video systems," in IS&T/SPIE’s Symposium on Electronic Imaging: Science and Technology, 1993, pp. 36-48.  [36] J. Davis, Y. H. Hsieh, and H. C. Lee, "Humans perceive flicker artifacts at 500 Hz," Sci. Rep., vol. 5, pp. 7861, Dec. 2014. 
Appendices

Appendix A - Least Squares Positioning Algorithm

Converting the measured AOAs into a final position is done using an iterative LS algorithm computed by the optical receiver. The LS algorithm must solve a system of nonlinear equations, made up of equations (2) and (3) for each optical beacon, to determine the location of the optical receiver. The standard LS technique is a powerful method for simultaneously solving a set of linear equations that has no single exact solution. Least squares attempts to satisfy all given equations simultaneously; if no single solution satisfies every equation, LS computes the solution that minimizes the sum of the squared residuals, i.e., the errors between that solution and each individual equation. In other words, LS finds a solution that comes as close as possible to satisfying every given equation. This method is ideal for solving the AOA positioning equations, since our measurements include error (and no single position solution will be attainable). In this case, the LS algorithm will solve for a position that is close to each AOA's LOP but likely does not intersect any of them.

The one challenge with implementing an LS algorithm for our positioning equations is that equations (2) and (3) are nonlinear, while the canonical LS method requires linear equations. To remedy this, we linearize each equation at a certain point in space and allow the LS algorithm to compute a position. However, since this solution is obtained from a linearization of a nonlinear system, the position result may be inaccurate due to linearization errors. To address this, each time the LS algorithm computes a position, it relinearizes the positioning equations about that position and then recomputes the position. Each linearization together with its position computation is called an iteration. This method assumes that the system of nonlinear equations is convex, such that each iteration approaches a single, unique solution, i.e., it approaches only the global minimum because there should be no local minima. The algorithm iterates until the position changes only slightly from one iteration to the next. For our AOA-based OW positioning system, this typically takes about four iterations if the position used in the first iteration is far from the true position, or as few as one if the first position estimate is quite near the true position.

Appendix B - Colour Interference Results

This appendix contains the remaining colour interference results from Section 3.3.2.2. These results are for both green and blue illumination. The first results presented here, in Fig. B.1, are for green illumination.

Figure B.1 Pixel response to green illumination is shown. The pixel response normalized to the saturation level on the image sensor is shown in (a), while the ratio between the pixel response of green and red or blue is shown in (b). Since (b) is normalized against green, the ratio of green with itself is 1.

From this figure we can see that the colour interference ratio for blue is about 0.8, which is well above our threshold of 0.5, making blue incompatible with green. The colour interference ratio for red is around 0.5, putting it right at the threshold for compatibility with green. These two results confirm our assertion that using green in a colour-frequency-based method of optical beacon identification is a poor choice.
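To make this compatibility test concrete, the short sketch below shows how such colour interference ratios could be screened against the 0.5 threshold. It is only a sketch: the pixel response values are hypothetical placeholders chosen to mimic the ratios quoted above, not the measured responses plotted in Fig. B.1.

% Hypothetical normalized pixel responses under green illumination
% (fractions of the image sensor saturation level; illustrative only).
resp_green = 0.90;
resp_red   = 0.45;
resp_blue  = 0.72;

threshold = 0.5;                         % colour interference threshold

ratio_red  = resp_red  / resp_green;     % interference ratio for red pixels
ratio_blue = resp_blue / resp_green;     % interference ratio for blue pixels

fprintf('red/green ratio  = %.2f, compatible with green = %d\n', ratio_red,  ratio_red  < threshold);
fprintf('blue/green ratio = %.2f, compatible with green = %d\n', ratio_blue, ratio_blue < threshold);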
Our next results, shown in Fig. B.2, are for blue illumination of the image sensor pixels. From this figure we can see that the colour interference ratio for green is about 0.8, which is the same value that we saw for the colour interference of green light on blue pixels. This ratio is well above the threshold for usefulness in a colour-frequency identification method for optical beacons, meaning that these two colours should not be used together. The colour interference ratio for red is smaller, around 0.4. This ratio is below the threshold for usefulness in a colour-frequency identification method for optical beacons, meaning that red and blue can be used together.

Figure B.2 Pixel response to blue illumination is shown. The pixel response normalized to the saturation level on the image sensor is shown in (a), while the ratio between the pixel response of blue and red or green is shown in (b). Since (b) is normalized against blue, the ratio of blue with itself is 1.

Appendix C - AOA Positioning Algorithm

% [pos, n_itter] = AOA2xyz(beacons,AOA,xp,tol,max_itter)
%
% This code locates a position in 3D space using AOAs given the beacon
% coordinates (beacons) and their corresponding AOA values (AOA).
%
% The function can also be given an initial position (xp), a desired accuracy
% (tol), and the maximum number of iterations before it automatically
% shuts off (max_itter).
%
% The function outputs the position (pos) in xyz and the number of iterations
% used to find that solution (n_itter).
%
% October 19, 2015
% Mark Bergen
%
% Note:
% The starting point must be in the plane of the beacons; otherwise the
% solution may become unstable.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [pos,n_itter] = AOA2xyz(beacons,AOA,varargin)

switch nargin
    case 2
        tol = 1e-3;
        max_itter = 50;
        xp = [0 0 0];
    case 3
        tol = 1e-3;
        max_itter = 50;
        xp = varargin{1};
    case 4
        max_itter = 50;
        tol = varargin{2};
        xp = varargin{1};
    case 5
        max_itter = varargin{3};
        tol = varargin{2};
        xp = varargin{1};
end

% The beacon vector needs to be either nx3 or 3xn where n is the number of
% beacons being used. Each set of 3 values corresponds to the xyz
% coordinates of a beacon.

B = beacons;            % Beacons to be used in the simulation
s_B = size(B);
if(s_B(1) == 3)         % make 'B' an nx3 vector
    B = B';
end

s_B = size(B);

% The AOA matrix must be either an nx2 or 2xn vector where n is the number
% of beacons. Each set of 2 values corresponds to the phi and theta values
% for that beacon. The order of beacons used must be the same for B and
% AOA.
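% Below, the AOA matrix is transposed to nx2 if it was supplied as 2xn,
% and each azimuth value AOA(i,1) is wrapped into the interval [-pi, pi)
% so that the residuals formed in the main loop remain well defined.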
s_AOA = size(AOA);      % Make AOA an nx2 vector
if(s_AOA(1)<s_AOA(2))
    AOA = AOA';
end

for i = 1:length(AOA)
   if AOA(i,1)<-pi
       AOA(i,1) = AOA(i,1) + 2*pi;
   elseif AOA(i,1)>=pi
       AOA(i,1) = AOA(i,1) - 2*pi;
   end
end

% Other Definitions

H = zeros(s_B(1)*2,3);
wo = zeros(s_B(1)*2,1);
err = 1;
n_itter = 0;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Main Loop
while err > tol
   k = 0;
   for ib = 1:s_B(1)
       r = sqrt((xp(1)-B(ib,1))^2+(xp(2)-B(ib,2))^2);
       R = sqrt((xp(1)-B(ib,1))^2+(xp(2)-B(ib,2))^2 + (xp(3)-B(ib,3))^2);

       phi(ib) = atan2((xp(1)-B(ib,1)),(xp(2)-B(ib,2)));
       theta(ib) = atan2(r,abs(xp(3)-B(ib,3)));

       k = k+1;
       wo(k) = AOA(ib,1) - phi(ib);

       if wo(k) > pi           % Makes it so that wo < pi
           wo(k) = wo(k)-2*pi;
       end

       if wo(k) < -pi          % Makes it so that wo > -pi
           wo(k) = wo(k)+2*pi;
       end

       H(k,:) = [-(xp(2)-B(ib,2))/r^2;(xp(1)-B(ib,1))/r^2;0];       % Phi DOP

       k = k+1;

       H(k,1) = abs(B(ib,3)-xp(3))*-(xp(1)-B(ib,1))/(r*R^2);        % Theta DOP
       H(k,2) = abs(B(ib,3)-xp(3))*-(xp(2)-B(ib,2))/(r*R^2);
       H(k,3) = -r/(R^2);

       wo(k) = AOA(ib,2) - theta(ib);

       if wo(k) > pi/2           % Makes it so that wo < pi/2
           wo(k) = wo(k)-pi;
       end

       if wo(k) < -pi/2          % Makes it so that wo > -pi/2
           wo(k) = wo(k)+pi;
       end
   end
   Cl = eye(max(size(H)));
   delta = -inv(H'/Cl*H)*H'/Cl*wo;

   xp = xp + delta';
   err = max(abs(wo));
   n_itter = n_itter+1;
   if(n_itter >= max_itter)
       break
   end
end

pos = xp;
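For reference, a minimal usage sketch of AOA2xyz follows. The beacon layout, test position, and initial guess are hypothetical values chosen only for illustration and are not the experimental configuration used in this thesis; the AOAs are synthesised with the same angle definitions used inside the function, so the returned position should fall close to the chosen test position.

% Example usage of AOA2xyz (illustrative values only).
% Four beacons at the corners of a 1 m square in the z = 0 plane and a
% hypothetical receiver 2.5 m below that plane.
beacons = [0 0 0; 1 0 0; 0 1 0; 1 1 0];
x_true  = [0.3 0.6 -2.5];                % hypothetical true receiver position

AOA = zeros(4,2);
for ib = 1:4
    r = sqrt((x_true(1)-beacons(ib,1))^2 + (x_true(2)-beacons(ib,2))^2);
    AOA(ib,1) = atan2(x_true(1)-beacons(ib,1), x_true(2)-beacons(ib,2));   % phi
    AOA(ib,2) = atan2(r, abs(x_true(3)-beacons(ib,3)));                    % theta
end

% Start the iteration in the plane of the beacons, as advised in the header.
[pos, n_itter] = AOA2xyz(beacons, AOA, [0.5 0.5 0]);
% pos should be close to x_true after a few iterations.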
