Design and Implementation of an Optical Receiver for Angle-of-Arrival-Based Positioning

Mark H. Bergen, Student Member, IEEE, Xian Jin, Daniel Guerrero, Hugo A. L. F. Chaves, Naomi V. Fredeen, Student Member, IEEE, and Jonathan F. Holzman, Member, IEEE

Mar 19, 2018

> (c) 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.

Abstract—Optical wireless (OW) technology has attracted significant interest for indoor positioning in the past decade. An emerging form of this technology makes use of angle-of-arrival (AOA) measurements to carry out positioning via triangulation off of an optical beacon grid. Such AOA-based OW positioning systems can yield accurate position estimates—but only given sufficient attention to the optical receiver. The design, operation, and implementation of such a receiver are presented in this work. The optical receiver is designed to have a sufficiently small AOA error, being σ_AOA = 1°, over a wide angular field-of-view (FOV), being 100°. The design allows the optical receiver to carry out positioning based off a 3 × 3 grid of optical beacons, where each optical beacon is uniquely identified using multiple frequency and colour channels. The optical beacons are widely spaced to fully utilize the optical receiver's wide angular FOV. The overall AOA-based OW positioning system exhibits a position error of 1.7 cm, which is comparable to those obtained by more complex positioning systems. Thus, the presented AOA-based technologies can play a role in emerging indoor positioning systems.

Index Terms—Angle-of-arrival, indoor positioning, optical wireless.

Manuscript received ##########; revised ################; accepted ###########. Date of publication ##############; date of current version ###########. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Foundation for Innovation (CFI), and Western Economic Diversification Canada (WD). The authors are with the Faculty of Applied Science, The University of British Columbia, Kelowna, BC V1V 1V7, Canada (e-mail: mark.bergen@alumni.ubc.ca; xianjin@alumni.ubc.ca; daniel.guerrero@live.ca; hugo.lima.chaves@gmail.com; nvfredeen@gmail.com; jonathan.holzman@ubc.ca).

I. INTRODUCTION

Indoor positioning technologies have emerged in the marketplace after years of development. These indoor technologies are applied to complement Global Positioning System (GPS) technology, which functions well in the outdoor environment but exhibits poor performance in the indoor environment [1]. Indoor positioning technologies can be realized as optical wireless (OW) systems [2], which are also referred to as visible light positioning systems [3]. These OW positioning systems have attracted growing interest in recent years due to their potential for integration with lighting and OW communication systems. Three methods have emerged for OW positioning. They are based on received signal strength (RSS) [4-7], time-of-arrival (TOA) / time-difference-of-arrival (TDOA) [8-10], and angle-of-arrival (AOA) [11-15]. The systems typically use a network of fixed optical beacons (i.e., optical transmitters [9]) and a mobile optical receiver. Each method has advantages and disadvantages.

The first method of OW positioning, RSS, is the simplest, offering moderate positioning accuracy ranging from metres for Wi-Fi systems [16] to tens of centimetres for optical systems [5].
An RSS-based optical receiver measures the incident optical power, as a single scalar quantity, for each observed optical beacon in the network. It then uses the received optical powers from multiple optical beacons to quantify ranges and applies trilateration to estimate the optical receiver's position. Unfortunately, this method is susceptible to increased position error when the network of optical beacons exhibits imbalances in the power levels or radiation cones [3]. Moreover, physical changes to the environment can affect the received optical powers, via reflections, which can increase the position error.

The second method of OW positioning, TOA or TDOA, overcomes many of the deficiencies of RSS-based positioning by applying measurements of phase. A TOA or TDOA optical receiver measures the phase of high-frequency signals from surrounding optical beacons to quantify the corresponding ranges (i.e., distances) to the optical beacons. Trilateration is then applied to estimate the optical receiver's position. Such a method can yield small position errors, e.g., 0.5 cm [10]. However, its phase-based approach demands high-frequency electronics and precise phase synchronization for good performance, necessitating high implementation costs.

The third method of OW positioning, AOA, is distinct from the aforementioned methods. This is because AOA-based OW positioning applies triangulation, by measuring angles and estimating the position via the associated vectors between the optical beacons and receiver. (In contrast, the aforementioned methods apply trilateration, by measuring power or phase and estimating the position via the associated scalar distances between the optical beacons and receiver.) For AOA-based OW positioning, the vector from each observed optical beacon to the optical receiver is known as the line of position (LOP). Each LOP defines an AOA on the optical receiver that is quantified by two angles: the azimuthal angle, ϕ, and the polar angle, θ. This can be done quite simply by using a lens and an image sensor [17]. Triangulation is then applied to estimate the optical receiver's position as the point of overlap of the LOPs from the observed optical beacons. Such a method has the unique advantage that its operation is largely independent of the received optical power and phase—as the optical receiver carries out triangulation off vectors [15]. Angle-of-arrival-based OW positioning systems have yielded excellent position accuracies that range over 1-5 cm [12, 14, 18].

It is important to note that AOA-based OW positioning has key challenges. In the past, its substantial computational demands limited its use. This challenge was avoided to some extent by placing the optical beacons on the mobile subjects, i.e., robots [18], and carrying out imaging and computations on a fixed optical receiver with a camera and processor, but this computational challenge is now less of a concern given the recent surge in integrated electronics and smartphones. However, there remain critical design challenges.
The challenges of an AOA-based OW positioning system relate to its two fundamental components: an array of fixed optical beacons (typically being ceiling-mounted LEDs) and a mobile optical receiver (typically having a lens and image sensor). The two components can be optimised separately to improve positioning accuracy, but by doing so one discovers two design conflicts. The first conflict pertains to the number of optical beacons. The optical receiver exhibits improved performance when it is deployed with a large number of optical beacons, because its least-squares positioning algorithm yields smaller position errors when it observes larger numbers of optical beacons [15]. However, the optical beacons are difficult to implement in large numbers, because it is necessary to have each optical beacon be uniquely identified by the optical receiver via frequency, colour, etc. The second conflict pertains to the spacing of optical beacons. The optical beacons support improved positioning performance when they are implemented with wide spacings, because wide spacings decrease the dilution of precision (DOP) and thus the position error [15]. However, the optical receiver can be difficult to implement with these wide spacings between the optical beacons, because it must have a correspondingly wide field-of-view (FOV).

In our previous work, we explored the design of the optical beacon geometry to improve position accuracy [15]. In this work, we address the design and implementation of an effective optical receiver. The proposed work puts forward design recommendations for a complete AOA-based OW positioning system with a position error of 1.7 cm. The work is laid out as follows. Section II presents the key considerations for the optical receiver's design. Sections III and IV present theoretical and experimental analyses of the optical receiver's operation. Section V shows implementation results. Section VI gives concluding remarks.

II. OPTICAL RECEIVER DESIGN

In this section, we look at the design of the optical receiver for use in an AOA-based OW positioning system. The system concept of an AOA-based OW positioning system containing an optical receiver and two optical beacons is shown in Fig. 1. The position of the optical receiver is denoted by a solid circle at the coordinates (x, y, z) of the global frame. The position of the ith optical beacon is denoted by a hollow circle at the coordinates (x_i, y_i, z_i) of the global frame, for i = 1 and 2. The optical receiver measures an AOA in the direction towards the optical beacon, with respect to its body frame. (We consider the specific case in this work where the global and body frames are aligned with their vertical axes parallel to the z axis.) The AOA in the body frame is defined by both the azimuthal angle, ϕ, being the angle rotated about the z axis, and the polar angle, θ, being the angle down from the z axis. The optical receiver uses the measured AOA to define the LOP, each of which is shown in Fig. 1 as a dashed line. Triangulation is carried out with multiple optical beacons to estimate the optical receiver's position as the point of overlap of the multiple LOPs.
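To make the AOA-to-LOP conversion concrete, the sketch below maps a measured AOA to a direction vector. It is a minimal illustration in Python (the paper's own processing pipeline runs offline in MATLAB, per Section V), under the aligned-frames assumption stated above; the function name is ours.

```python
import numpy as np

def lop_direction(phi_deg, theta_deg):
    """Unit vector from the receiver towards a beacon, given the AOA.

    phi_deg is the azimuthal angle (rotated about the z axis) and
    theta_deg is the polar angle (down from the z axis). With the body
    and global frames aligned, as assumed in this work, the LOP is the
    same vector reversed (it runs from the beacon to the receiver).
    """
    phi, theta = np.radians(phi_deg), np.radians(theta_deg)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

print(lop_direction(45.0, 30.0))   # -> [0.354, 0.354, 0.866]
```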
To carry out this process and obtain an accurate position estimate via triangulation, an optical receiver must be capable of carrying out two operations: AOA measurement and AOA identification.

For the optical receiver to be effective at the first operation, AOA measurement, the AOA error, σ_AOA, must be small. In this work, a σ_AOA of 1° is adopted as a tolerance for error in the azimuthal and polar angles, including contributions from both random and systematic errors. To reduce the random and systematic errors, it is often possible to restrict the angles over which AOAs are measured. However, such restrictions to the optical receiver's angular FOV can reduce the number of measured AOAs, leading to increased position error. This is because the associated LOPs are used in a least-squares minimization process [15]. Overall, it is beneficial to have a small AOA error over as wide an angular FOV as possible.

Fig. 1. The system concept of an AOA-based OW positioning system with an optical receiver, denoted by the solid circle at (x, y, z), and two optical beacons, denoted by the hollow circles at (x_1, y_1, z_1) and (x_2, y_2, z_2). These positions are defined in the global frame having x, y, and z axes. The optical receiver measures the AOA for the ith optical beacon, which is defined by both the azimuthal angle, ϕ_i, and polar angle, θ_i, within its body frame, for i = 1 and 2. The body frame is attached to the optical receiver and has x_b, y_b, and z_b axes. The AOA defines the LOP as a vector, shown as a dashed line, running from the optical beacon to the optical receiver.

For the optical receiver to be effective at the second operation, AOA identification, it must be capable of associating each measured AOA to its optical beacon. To do this, the optical beacons can be implemented with distinctive characteristics, such as colour [19] or frequency [10-11]. When the optical receiver is able to uniquely identify the AOAs, it can apply the corresponding LOPs as vectors in the global frame originating from the known locations of their optical beacons. The optical receiver then applies triangulation with all of the LOPs to estimate its position. In general, it is beneficial to identify the optical beacons of as many AOAs as possible. This maximizes the number of LOPs used to triangulate the position and ultimately reduces position error.

A camera architecture, having a microlens above an image sensor as shown in Fig. 2, is used for the design of the optical receiver because it can be made effective at both AOA measurement and AOA identification. The optical receiver's performance in AOA measurement is determined by both the image sensor, which can introduce random error via pixel quantization, and the microlens, which can introduce systematic error via image distortion. The optical receiver's performance in AOA identification is determined mainly by the image sensor. The image sensor can be operated with red, green, and blue colour channels and differing frequency channels to allow the optical beacons to be identified by their distinct colour and frequency.

Fig. 2. Schematic of the camera architecture used by the optical receiver. A side-profile of the architecture is shown at the top, with a microlens (having a glass coverslip) on top of an image sensor (having a protective layer of image sensor glass). A chief ray of light is shown in red propagating from a distant optical beacon through the architecture. The dimensions are denoted by d, t, and g. A top-view of the image sensor is shown at the bottom. It shows a grid of pixels with the chief ray illuminating the pixel at the coordinates (x_IS, y_IS) on the image sensor.
The coordinates of the illuminated pixel define the estimated azimuthal angle, ϕ_IS, and estimated polar angle, θ_IS ≈ kρ_IS.

The image sensor that is used for the design of the optical receiver is the Omnivision OV7720 CMOS VGA. It has a pixel size of 6 × 6 µm². However, it is used at its fastest frame-rate, and this clusters the pixels in groups of four, yielding an effective pixel size of 12 × 12 µm². The image sensor has an especially high frame-rate, of 187 frames-per-second, which supports use of multiple widely separated frequency channels.

The microlens that is applied in the design is fabricated by way of polymer dispensing and curing. A droplet of UV-curable polymer is dispensed onto a 150-µm-thick glass coverslip and cured by UV illumination. The process is carried out in a filler fluid of glycerol to create a hemispherical microlens with a polymer-glass contact angle of 90° and a diameter of 800 µm. Process details are given elsewhere [17, 20]. A hemispherical lens is fabricated here because it minimizes aperturing of light at the more extreme incident angles. A microlens exhibits decreased aperturing as the contact angle of its polymer-glass interface (and thus its numerical aperture) is increased [21]. The microlens is mounted above the protective glass of the image sensor with the microlens facing the image sensor, as shown in Fig. 2. This orientation yields decreased distortion and increased angular FOV, as compared to the orientation with the microlens facing away from the image sensor [22]. The labelled dimensions are d = 833 µm, t = 400 µm, and g = 40 µm. The refractive index of the glass and microlens is n ≈ 1.54.

The following two sections examine the operation of the optical receiver's design in terms of AOA measurement and AOA identification.

III. OPTICAL RECEIVER OPERATION: AOA MEASUREMENT

The optical receiver carries out AOA measurement by locating a beamspot on the image sensor, from a distant optical beacon, and transforming the beamspot's location to an AOA, i.e., to an azimuthal angle, ϕ, and a polar angle, θ, towards the optical beacon. The beamspot takes the form of a circle on the image sensor when it is near the centre of the microlens, the size of which is determined mainly by spherical aberration [23]. It takes the form of a flared circle on the image sensor when it is far from the centre of the microlens, the shape of which is determined by comatic aberration [23]. To mitigate the effects of aberration, only the brightest point of light in the beamspot is used in this work to define the location of the beamspot. This point is formed by the chief ray and its neighbouring paraxial rays as they pass through the system in the manner shown in Fig. 2. The location of the point on the image sensor is defined by discrete Cartesian coordinates, (x_IS, y_IS), where x_IS and y_IS are integers in units of pixels. The coordinates then define the estimated azimuthal angle, ϕ_IS, and estimated polar angle, θ_IS, according to

$$\phi_{\mathrm{IS}} = \tan^{-1}\!\left(\frac{y_{\mathrm{IS}}}{x_{\mathrm{IS}}}\right), \qquad (1)$$

and

$$\theta_{\mathrm{IS}} \approx k\rho_{\mathrm{IS}} = k\left(x_{\mathrm{IS}}^2 + y_{\mathrm{IS}}^2\right)^{1/2}, \qquad (2)$$

where k ≈ 1 °/pixel is a linear scaling factor and ρ_IS = (x_IS² + y_IS²)^(1/2) is the radial displacement of the beamspot on the image sensor, with respect to an origin at the centre of the microlens. Equation (2) displays an approximation because it is formed by linearization of the exact nonlinear expression relating ρ_IS to the polar angle, as discussed later.
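A brief sketch of Eqs. (1) and (2) as they would be applied to a brightest-pixel location is given below. The four-quadrant arctan2 is used in place of tan⁻¹(y/x) so that ϕ_IS covers the full −180° to 180° range, and the scaling factor k is assumed to come from the calibration described next.

```python
import numpy as np

K = 1.0   # linear scaling factor, ~1 deg/pixel, from calibration

def pixel_to_aoa(x_is, y_is):
    """Estimate (phi_IS, theta_IS), in degrees, from the brightest-pixel
    coordinates (x_IS, y_IS), in pixels, relative to an origin below the
    centre of the microlens."""
    phi_is = np.degrees(np.arctan2(y_is, x_is))   # Eq. (1), four-quadrant
    rho_is = np.hypot(x_is, y_is)                 # radial displacement
    theta_is = K * rho_is                         # Eq. (2), linearized
    return phi_is, theta_is

print(pixel_to_aoa(20, 20))   # -> (45.0, ~28.3 deg)
```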
The value of k used in this linearization is obtained by characterizing measured radial displacements against known values of the polar angle. The effectiveness of AOA measurement is defined by the level of agreement between the estimated azimuthal and polar angles, ϕ_IS and θ_IS, and the true azimuthal and polar angles, ϕ and θ. Differences between the estimated and true angles arise from random and systematic AOA errors, which are the focus of the following theoretical and experimental analyses.

A. Theoretical error analyses

The AOA measurement operation is subject to azimuthal error and polar error. The azimuthal error takes the form of

$$\Delta\phi = \left(\Delta\phi_{\mathrm{rdm}}^2 + \Delta\phi_{\mathrm{sys}}^2\right)^{1/2}, \qquad (3)$$

where Δϕ_rdm is the random azimuthal error and Δϕ_sys is the systematic azimuthal error. The polar error takes the form of

$$\Delta\theta = \left(\Delta\theta_{\mathrm{rdm}}^2 + \Delta\theta_{\mathrm{sys}}^2\right)^{1/2}, \qquad (4)$$

where Δθ_rdm is the random polar error and Δθ_sys is the systematic polar error. The manifestations of random error, from statistical variations in measurements, and systematic error, from bias/drift in measurements, are considered here.

Random errors: Random azimuthal and polar errors arise mainly from pixel quantization. Pixel quantization appears because the location of the single brightest point of light in the beamspot is defined by discrete Cartesian coordinates, (x_IS, y_IS), which denote geometric centres of the pixels and typically not the exact beamspot location. This quantization discretizes the estimated azimuthal angle, ϕ_IS, and estimated polar angle, θ_IS, yielding random azimuthal and polar errors.

The random azimuthal error, Δϕ_rdm, is defined by linking it to the discrete Cartesian coordinates, (x_IS, y_IS). The link is made by taking the partial derivatives of (1) with respect to the Cartesian coordinates, in a sensitivity analysis. This gives

$$\Delta\phi_{\mathrm{rdm}} = C\,\frac{|x_{\mathrm{IS}}|\,\Delta y_{\mathrm{IS}} + |y_{\mathrm{IS}}|\,\Delta x_{\mathrm{IS}}}{x_{\mathrm{IS}}^2 + y_{\mathrm{IS}}^2} \approx \sqrt{2}\,C\,k\,\frac{(\mathrm{pixel}/2)}{\theta_{\mathrm{IS}}}, \qquad (5)$$

where C = 180°/π is a factor that converts random azimuthal error from radians to degrees. Equation (2) is used to form the second (approximate) equality. In this work, we consider the worst-case scenario of pixel quantization for a beamspot on the line y_IS = x_IS, with Δx_IS = pixel/2 and Δy_IS = pixel/2 being the errors in the x and y dimensions, respectively. The general trend of (5) suggests that the random azimuthal error, Δϕ_rdm, is inversely proportional to the estimated polar angle, θ_IS. Such a trend is logical given that a small polar angle has the beamspot lie near the origin, x_IS ≈ 0 and y_IS ≈ 0, where it becomes difficult to apply (1) to define the estimated azimuthal angle, ϕ_IS.

The random polar error, Δθ_rdm, is defined by linking it to the discrete Cartesian coordinates, (x_IS, y_IS). The link is made by taking the partial derivatives of (2) with respect to the Cartesian coordinates, in a sensitivity analysis. This gives

$$\Delta\theta_{\mathrm{rdm}} = k\,\frac{|x_{\mathrm{IS}}|\,\Delta x_{\mathrm{IS}} + |y_{\mathrm{IS}}|\,\Delta y_{\mathrm{IS}}}{\left(x_{\mathrm{IS}}^2 + y_{\mathrm{IS}}^2\right)^{1/2}} \approx \sqrt{2}\,k\,(\mathrm{pixel}/2), \qquad (6)$$

where the second (approximate) equality is again formed for the worst-case scenario with a beamspot on the image sensor along the line y_IS = x_IS, with Δx_IS = pixel/2 and Δy_IS = pixel/2. The general trend of (6) suggests that the random polar error is finite and independent of the estimated polar angle, θ_IS.
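Plugging in the values from Section II (k ≈ 1 °/pixel, worst-case quantization error of pixel/2) gives a feel for Eqs. (5) and (6) as reconstructed above. These are worst-case bounds, so the measured standard deviations reported in Section III.B are smaller:

```python
import numpy as np

K = 1.0           # linear scaling factor, ~1 deg/pixel
C = 180 / np.pi   # radians-to-degrees factor of Eq. (5)
DPX = 0.5         # worst-case quantization error, pixel/2, in pixels

# Eq. (6): the random polar error is independent of the polar angle.
print(f"polar: ~{np.sqrt(2) * K * DPX:.2f} deg")          # ~0.71 deg

# Eq. (5): the random azimuthal error grows as theta_IS shrinks.
for theta_is in (5.0, 15.0, 50.0):
    dphi = np.sqrt(2) * C * K * DPX / theta_is
    print(f"azimuthal at theta_IS = {theta_is:.0f} deg: ~{dphi:.1f} deg")
```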
Systematic errors: Systematic azimuthal and polar errors arise from measurement bias/drift. Such errors are deterministic, and so they can be reduced by compensation or restriction in the operation. The latter approach will be applied in this analysis.

The systematic azimuthal error, Δϕ_sys, is straightforward to define for an optical receiver with an ideal microlens. It is

$$\Delta\phi_{\mathrm{sys}} = 0. \qquad (7)$$

The systematic azimuthal error is zero for this theoretical case simply because the ideal microlens has cylindrical symmetry and thus no astigmatic aberration [23].

The systematic polar error, Δθ_sys, is more complicated to define. It comes about from image distortion by the microlens, which manifests itself as barrel distortion on the image sensor [23]. Thus, for a given measurement of the beamspot's radial displacement, ρ_IS, there will be a difference between the estimated polar angle, which is found by solving (2) for θ_IS, and the true polar angle, which is found by solving

$$\rho_{\mathrm{IS}} = d\,\tan\!\left[\sin^{-1}\!\left(\frac{\sin\theta}{n}\right)\right] + t\,\tan\!\left[\sin^{-1}\!\left(\frac{\sin\theta}{n}\right)\right] + g\,\tan\theta \qquad (8)$$

for θ, where d, t, and g are the dimensions in Fig. 2. For sufficiently small polar angles, the trigonometric functions in (8) can be accurately approximated by their arguments. This transforms (8) into the linear form of (2), such that θ_IS ≈ θ. However, such an approximation becomes invalid for larger polar angles, leading to systematic polar error in the form of

$$\Delta\theta_{\mathrm{sys}} = \left|\theta_{\mathrm{IS}} - \theta\right|. \qquad (9)$$

The systematic polar error, Δθ_sys, is shown in Fig. 3 versus the true polar angle, θ. The figure shows image distortion, in that the systematic polar error is low and roughly flat for small polar angles, but it increases rapidly at large polar angles.

Fig. 3. Theoretical systematic polar error, Δθ_sys, versus true polar angle, θ. The systematic polar error is low and flat for small polar angles, but it increases rapidly for large polar angles.
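The trend of Fig. 3 can be reproduced numerically. The sketch below assumes the reconstruction of Eq. (8) given above (flat-interface ray tracing through d, t, and g) and calibrates k from the small-angle slope; the exact values in Fig. 3 depend on how k is characterized, so only the trend (flat at small θ, rising rapidly at large θ) should be read from this.

```python
import numpy as np

D, T, G = 833e-6, 400e-6, 40e-6   # dimensions d, t, g of Fig. 2 (metres)
N = 1.54                          # refractive index of glass and microlens
PIXEL = 12e-6                     # effective pixel size (metres)

def rho(theta_deg):
    """Radial beamspot displacement on the image sensor, per Eq. (8)."""
    th = np.radians(theta_deg)
    th_glass = np.arcsin(np.sin(th) / N)   # chief ray refracted into glass
    return (D + T) * np.tan(th_glass) + G * np.tan(th)

k = 1e-3 / (rho(1e-3) / PIXEL)             # small-angle slope -> deg/pixel
print(f"k ~ {k:.2f} deg/pixel")            # ~0.8, consistent with k ~ 1
for theta in (10.0, 30.0, 50.0, 65.0):
    theta_is = k * rho(theta) / PIXEL      # Eq. (2)
    print(f"theta = {theta:.0f} deg -> "
          f"Delta_theta_sys ~ {abs(theta_is - theta):.2f} deg")   # Eq. (9)
```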
Overall, certain predictions can be made from the above analyses. With regard to the azimuthal error, we see that it is subject to random azimuthal error, which increases for decreasing polar angle, and negligible systematic azimuthal error. This suggests that we may need to set a lower limit on the polar angle to keep the azimuthal error within the AOA error tolerance of σ_AOA = 1°. With regard to the polar error, we see that it is subject to both random and systematic errors, with the random polar error being finite but constant and the systematic polar error increasing for increasing polar angle. This suggests that we may need to set an upper limit on the polar angle to keep the polar error within the AOA error tolerance of σ_AOA = 1°. These predictions will be tested in the following subsection on experimental error analyses.

B. Experimental error analyses

Experimental error analyses are carried out to test the theoretical predictions. The results are collected for the optical receiver design in Section II using an OW testbed. An optical beacon is set at varying azimuthal and polar angles, ϕ and θ, and the Cartesian coordinates of the beamspot on the image sensor, x_IS and y_IS, are recorded by capturing a still image. The Cartesian coordinates are used in (1) and (2) to calculate the estimated azimuthal angle, ϕ_IS, and estimated polar angle, θ_IS, respectively. Differences between true and estimated angles are recorded as the azimuthal error, Δϕ, and polar error, Δθ.

The azimuthal error, Δϕ, is shown in Figs. 4(a) and (b) versus azimuthal angle, ϕ, and polar angle, θ, respectively. In Fig. 4(a), the azimuthal error is seen to be predominantly random in nature, with a mean of roughly 0.2° and standard deviation of roughly 0.7°. It is essentially independent of the azimuthal angle, ϕ. In Fig. 4(b), the azimuthal error is also seen to be predominantly random in nature, with a mean of roughly 0.2°, although its standard deviation increases for decreasing polar angle, θ. (Figures 4(a) and 4(b) show the same data plotted against different angular variables.) These observations agree with the theoretical predictions in Section III.A, and follow the general trend of (5), in that they show a random azimuthal error, Δϕ_rdm, that is inversely proportional to the estimated polar angle, θ_IS, and a systematic azimuthal error, Δϕ_sys, that is near zero. Ultimately, the standard deviation of the azimuthal error can be kept within the AOA error tolerance of σ_AOA = 1° if the optical receiver operates with polar angles at or above θ = 15°.

The polar error, Δθ, is shown in Figs. 5(a) and (b) versus azimuthal angle, ϕ, and polar angle, θ, respectively. In Fig. 5(a), the polar error is seen to be predominantly random in nature, with a mean of roughly 0.1° and standard deviation of roughly 0.6°. It is essentially independent of the azimuthal angle, ϕ. In contrast, in Fig. 5(b), the polar error is seen to be subject to both random and systematic error. The random polar error is essentially constant, with a standard deviation of 0.4°, while the systematic polar error increases for increasing polar angle, θ. (Figures 5(a) and 5(b) show the same data plotted against different angular variables.) These observations agree with the theoretical predictions in Section III.A, and follow the general trend of (6), in that they show a random polar error that is finite and independent of the estimated polar angle, θ_IS, and a systematic polar error that increases for increasing polar angle (according to Fig. 3). Ultimately, the standard deviation of the polar error can be kept within the AOA error tolerance of σ_AOA = 1° if the optical receiver operates with polar angles at or below θ = 50°.

Fig. 4. Experimental azimuthal error versus (a) true azimuthal angle, ϕ, and (b) true polar angle, θ. The results in (a) are for polar angles above 15°.

Fig. 5. Experimental polar error versus (a) true azimuthal angle, ϕ, and (b) true polar angle, θ. Results in (a) are collected for polar angles below 15°.

With the above restrictions in mind for the polar angle, the optical receiver can be operated with an AOA error of σ_AOA = 1° for azimuthal angles in the range –180° < ϕ < 180° and polar angles in the range 15° < θ < 50°. However, for the polar angle, the solid angle subtended by the lower limit of 15° is less than 10% of that subtended by the upper limit of 50°. Thus, for ease of operation in the remainder of this work, the optical receiver is operated for all polar angles up to 50°, for a corresponding angular FOV of 2 × 50° = 100°.
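The "less than 10%" claim can be checked directly: the solid angle of a cone of half-angle θ is Ω = 2π(1 − cos θ), so the excluded central cone is a small fraction of the full FOV.

```python
import numpy as np

# Solid angle of a cone with half-angle theta: Omega = 2*pi*(1 - cos(theta)).
omega = lambda deg: 2 * np.pi * (1 - np.cos(np.radians(deg)))
print(f"{omega(15) / omega(50):.1%}")   # ~9.5%, i.e. less than 10%
```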
IV. OPTICAL RECEIVER OPERATION: AOA IDENTIFICATION

The optical receiver measures AOAs and applies their LOPs to triangulate its position. The resulting position error can be made low by having the optical receiver measure as many AOAs as possible—but the optical beacons forming all of the AOAs must be uniquely identified. Techniques for AOA identification are explored here.

The first technique considered for AOA identification uses distinct frequency channels [10-11]. This technique has the optical beacons emit light that is intensity-modulated at distinct frequencies. The optical receiver then applies a fast-Fourier-transform (FFT) to its image, such that the beamspots in the image exhibit modulation at distinct frequency channels. The optical receiver can then associate a specific optical beacon (and its location) to each measured AOA—allowing it to triangulate its position off all the defined LOPs.

The assignment of frequency channels in the AOA identification process is subject to practical limits. The lower frequency limit for operation with visible light is the flicker frequency. The flicker frequency is the lowest frequency at which modulation is registered by the eye. It is a function of modulation depth, with lower modulation depths yielding higher flicker frequencies [24]. Thus, AC modulation is applied in this work with a sufficiently large DC background to have the flicker frequency be 35 Hz. The upper frequency limit for the operation is set by the characteristics of the applied image sensor. The image sensor has a frame-rate of 187 frames-per-second, yielding a maximum frequency at the 93.5 Hz Nyquist frequency. Given the above frequency limits, the optical beacons are modulated at frequency channels between 35 and 93.5 Hz, with a 35 Hz separation between the channels. Care is taken to avoid operation at frequencies near those of power systems, 50 Hz and 60 Hz, although the authors have found that it is more important to avoid operation near the Nyquist frequency and thereby minimize aliasing. With this in mind, this work applies two frequency channels, having frequencies of f1 = 40 Hz and f2 = 80 Hz. Note that these two frequencies can also be used with the applied image sensor when it operates at its lower frame-rate of 75 frames-per-second, or even with other slower image sensors, although such implementations would need to carefully apply undersampling [25]. The two frequency channels that are selected allow for unique identification of three optical beacons, with modulation at f1, f2, or f1 and f2. A DC channel is not used due to its susceptibility to high ambient light/noise.
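A sketch of the frequency-channel detection is given below. It applies an FFT to the frame-by-frame brightness of one beamspot, sampled at the 187 frames-per-second frame-rate, and tests for tones at f1 = 40 Hz and f2 = 80 Hz; the relative detection threshold is our assumption, not a value from the paper.

```python
import numpy as np

FPS = 187.0              # image sensor frame-rate (frames per second)
CHANNELS = (40.0, 80.0)  # frequency channels f1 and f2 (Hz)

def detect_channels(brightness, threshold=0.05):
    """Return the channels present in a beamspot's brightness trace.

    The threshold is a hypothetical detection level, expressed
    relative to the DC signal level; the paper does not specify one.
    """
    spectrum = np.abs(np.fft.rfft(brightness - brightness.mean()))
    freqs = np.fft.rfftfreq(brightness.size, d=1.0 / FPS)
    level = threshold * brightness.size * brightness.mean()
    return tuple(f for f in CHANNELS
                 if spectrum[np.argmin(np.abs(freqs - f))] > level)

# One second of video of a beacon modulated at both f1 and f2,
# with a large DC background to stay above the flicker limit.
t = np.arange(187) / FPS
trace = 1.0 + 0.2 * np.sin(2 * np.pi * 40 * t) + 0.2 * np.sin(2 * np.pi * 80 * t)
print(detect_channels(trace))   # -> (40.0, 80.0)
```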
For an optical receiver with an unknown orientation, triangulation requires three or more measured and identified AOAs. Thus, an additional technique for AOA identification should be introduced for greater reliability.

The second technique considered for AOA identification is based upon colour channels [19]. It leverages the image sensor's ability to discern red, green, and blue, via separate RGB pixels, to assign colour channels to the optical beacons. The optical beacons are implemented as white-light LEDs (Cree PLCC6-CLV6A), which have separate inputs for internal red, green, and blue LEDs. This white-light LED can operate with three independent colour channels, although colour channel interference must be carefully considered. Colour channel interference can occur if the red, green, and blue LEDs in the optical beacons show significant overlap in their power spectral density. Profiles of power spectral density for the red, green, and blue LEDs are shown in [26]. It is found that red and green channels exhibit interference below 2%, red and blue channels exhibit interference below 1%, and green and blue channels exhibit interference below 10%. These levels of colour channel interference are deemed to be sufficiently low for the proposed AOA identification.

It is also necessary to consider the image sensor's role in colour channel interference. Colour channel interference can arise at the image sensor from the broadened responsivities of its red, green, and blue pixels. Broad responsivities have each pixel preferentially measure the intensity of its assigned colour as well as the intensities of the other colours—albeit to a lesser extent. The broadened responsivities are investigated here by a colour interference ratio. The ratio is quantified by the pixel signal levels of the red, green, and blue pixels on the image sensor for illumination by the red, green, and blue LEDs of the optical beacon (one at a time). The colour interference ratio is then defined as the red, green, and blue pixel signal levels, for illumination by a particular LED colour, normalized with respect to the pixel signal level of the particular LED colour. Results for the nine combinations are collected as a function of intensity to see if a minimum intensity must be prescribed to maintain a sufficiently low colour interference ratio.

Results are shown in Fig. 6(a) for illumination by the red LED, Fig. 6(b) for illumination by the green LED, and Fig. 6(c) for illumination by the blue LED. The colour interference ratios of the red, green, and blue pixels are shown as data points in their respective colours. Two conclusions can be made. First, it is beneficial to operate with only red and blue colour channels, as these two colours yield the least interference between each other. The green channel could be applied, but it exhibits a relatively high level of interference with the blue channel. Second, the optical receiver should be implemented in a system that maintains an illuminating intensity above 0.3 W/m². For operation with red and blue colour channels above this intensity, Figs. 6(a) and (c) show that the colour interference ratio is below 40%.
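The colour interference ratio can be phrased compactly: with S[p, L] being the signal of pixel colour p under illumination by LED colour L, the ratio is S[p, L]/S[L, L]. The matrix below uses illustrative numbers only, not measurements from the paper.

```python
import numpy as np

# Illustrative pixel signal levels S[p, L]: rows are the red/green/blue
# pixels, columns are illumination by the red/green/blue LEDs.
S = np.array([[0.90, 0.12, 0.04],
              [0.18, 0.80, 0.28],
              [0.09, 0.24, 0.85]])

# Normalize each column by its diagonal entry (the matched pixel).
ratio = 100 * S / np.diag(S)    # percent
print(ratio)    # off-diagonal entries are the colour interference
```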
Fig. 6. Colour interference ratio versus intensity for the image sensor under illumination by the optical beacon's (a) red LED, (b) green LED, and (c) blue LED. The red, green, and blue data points show ratios for the red, green, and blue pixels, respectively. The colour interference ratios for illumination by a particular LED are the red, green, and blue pixel signal levels, normalized with respect to the pixel signal level for that particular colour.

Given the above analyses on AOA identification, the AOA-based OW positioning system is implemented with two frequency channels and two colour channels. The optical beacons have their red and blue LEDs modulated at f1, f2, or f1 and f2. (The optical beacons have their green LED operated with a level of DC current that establishes a white-light balance to have them function as room lights.) With these colour and frequency combinations, it is possible to apply nine unique optical beacons in a 3 × 3 grid. Table 1 provides a lookup table for the frequency and colour characteristics of these optical beacons, which are indexed by i. The implementation of the optical receiver in this 3 × 3 optical beacon grid is analysed in the following section.

Table 1. Lookup table showing optical beacons (indexed by i) with their associated colours (listed in columns) and frequencies (stated in the cells).

Beacon index, i | Red colour channel     | Blue colour channel
1               | f1 = 40 Hz             | f2 = 80 Hz
2               | f2 = 80 Hz             | f1 = 40 Hz
3               | f2 = 80 Hz             | f1 = 40 Hz, f2 = 80 Hz
4               | f1 = 40 Hz, f2 = 80 Hz | f2 = 80 Hz
5               | f1 = 40 Hz, f2 = 80 Hz | f1 = 40 Hz, f2 = 80 Hz
6               | f1 = 40 Hz, f2 = 80 Hz | f1 = 40 Hz
7               | f1 = 40 Hz             | f1 = 40 Hz
8               | f1 = 40 Hz             | f1 = 40 Hz, f2 = 80 Hz
9               | f2 = 80 Hz             | f2 = 80 Hz
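Table 1 maps directly to a lookup from the frequency sets detected on the red and blue channels to a beacon index; a minimal sketch, pairing naturally with detect_channels() above, follows.

```python
F1, F2 = 40.0, 80.0   # frequency channels, Hz

# Table 1 as a lookup: (red-channel frequencies, blue-channel
# frequencies) -> beacon index i.
BEACONS = {
    ((F1,),    (F2,)):    1,
    ((F2,),    (F1,)):    2,
    ((F2,),    (F1, F2)): 3,
    ((F1, F2), (F2,)):    4,
    ((F1, F2), (F1, F2)): 5,
    ((F1, F2), (F1,)):    6,
    ((F1,),    (F1,)):    7,
    ((F1,),    (F1, F2)): 8,
    ((F2,),    (F2,)):    9,
}

print(BEACONS[((40.0, 80.0), (40.0,))])   # -> 6
```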
V. OPTICAL RECEIVER IMPLEMENTATION

The implementation of AOA-based OW positioning must consider the performance specifications of the optical receiver, in terms of its angular FOV, and the characteristics of the 3 × 3 optical beacon grid, in terms of its spacing and height. The grid that is applied has its optical beacons laid out with a spacing of 50 cm in a plane at a height of z = 110 cm above the optical receiver. Thus, the nine optical beacons are located at (x = 0 and ±50 cm, y = 0 and ±50 cm, z = 110 cm) in the global frame. The optical receiver is rastered across the horizontal x-y plane below the optical beacons, with positions defined by (x, y, z = 0 cm). These dimensions allow the optical receiver to keep all the optical beacons within its 100° angular FOV for all positions in the horizontal plane. The positioning performance of the optical receiver is considered here by way of theoretical and experimental analyses, the results of which are shown in Fig. 7.

Theoretical analyses are carried out via DOP, which defines position error with respect to AOA error. Details on DOP are given in our earlier work [15]. For this work, the distribution of position error, σ_p(x, y, z = 0 cm), is simply the product of the DOP distribution, DOP(x, y, z = 0 cm), and the constant AOA error, σ_AOA, yielding

$$\sigma_{\mathrm{p}}(x, y, z = 0) = \mathrm{DOP}(x, y, z = 0)\,\sigma_{\mathrm{AOA}}. \qquad (10)$$

Clearly, position errors can be reduced by using an optical receiver with low AOA error, although the AOA error for this work is fixed at σ_AOA = 1°, or by implementing the optical beacon grid with suitably low values in its DOP distribution. To realize low DOP values, it is useful to visualize DOP as a geometrical weighting factor of AOA error on position error. The effects of AOA error can be seen by visualizing the LOPs as cones radiating from the optical beacons towards the optical receiver, rather than the ideal case with LOPs being vectors. Triangulation off multiple beacons then yields a volume for the position of the optical receiver, at the intersection of the cones, rather than the ideal case with the LOPs intersecting at a single point. The volume of the intersecting cones defines the optical receiver's position (as the centre of the volume) and positioning error (as the side-length of the volume). Such visualization makes it apparent that DOP can be made low by having the optical receiver position off optical beacons that are widely separated. Widely separated optical beacons have predominantly orthogonal LOP cones, which yield a smaller intersecting volume and smaller position error, in comparison to those of predominantly parallel LOPs. The wide FOV of the designed optical receiver supports such positioning off widely separated optical beacons. If it had a narrower FOV, the optical beacons would need to be set at a smaller spacing and greater height, which would yield more parallel LOPs.

The theoretical DOP distribution, DOP(x, y, z = 0 cm), and position error distribution, σ_p(x, y, z = 0 cm), for the proposed optical beacon grid are calculated in the manner of our prior work [15]. The results are shown in Fig. 7(a). We see here that the wide angular FOV of the optical receiver yields a low and flat position error across the plane of positioning—particularly in the interior of the plane. In the interior of the plane, the optical receiver triangulates its position off widely separated optical beacons, with predominantly orthogonal LOPs, and this leads to low and flat position errors. In contrast, at the corners, the optical receiver triangulates its position off of optical beacons that are less separated, with less orthogonal LOPs, and this leads to the displayed red peaks in DOP and position error. Overall, across the entire plane of positioning, the theoretical mean position error is 1.68 cm.

Experimental analyses are carried out to test the operation of the optical receiver with the proposed optical beacon grid. The optical receiver is deployed beneath the grid, and it is rastered across the horizontal plane with positions defined by (x, y, z = 0 cm). A flowchart of the full position estimation process is shown via the algorithm in Fig. 8. The process uses a video file captured by the optical receiver to estimate the position and is executed offline with MATLAB. The process reads the video file, isolates the beamspot on the image for each optical beacon, and determines the brightest pixel for each beamspot. The locations of the brightest pixels are then used to calculate the estimated azimuthal angles, ϕ_IS, and estimated polar angles, θ_IS. An FFT of the intensity of each brightest pixel is used to identify the strongest frequencies for each optical beacon. Next, the frequencies are compared on a lookup table to uniquely identify each optical beacon. Finally, a position estimate is computed via a least-squares algorithm. This process is completed in under one second, but this time can be reduced for real-time positioning by processing with a microcontroller in the optical receiver. It should be noted, however, that real-time positioning would benefit from use of a Kalman filter to improve the dynamic performance [14].
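The least-squares position estimate itself is detailed in [15]; as a stand-in, the sketch below uses the standard closed-form nearest-point-to-lines solution, which minimizes the summed squared perpendicular distances from the estimate to the LOPs of the identified beacons. The beacon coordinates follow the 3 × 3 grid of this section; treat this as an illustration of the triangulation step, not the authors' exact algorithm.

```python
import numpy as np

def triangulate(beacons, directions):
    """Least-squares position from LOPs: the point minimizing the summed
    squared perpendicular distances to the lines through each beacon b_i
    along each unit direction u_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for b_i, u_i in zip(beacons, directions):
        P = np.eye(3) - np.outer(u_i, u_i)   # projector orthogonal to u_i
        A += P
        b += P @ b_i
    return np.linalg.solve(A, b)

# Example: receiver at the origin, three beacons of the 3 x 3 grid at
# z = 110 cm; the exact LOP directions point from beacon to receiver.
beacons = np.array([[0, 0, 110], [50, 0, 110], [0, -50, 110]], float)
dirs = np.array([-p / np.linalg.norm(p) for p in beacons])
print(triangulate(beacons, dirs))   # -> ~[0, 0, 0]
```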
Fig. 7. The (a) theoretical and (b) experimental positioning results. In (a), the theoretical results are shown as the position error, σ_p, in cm. In (b), the experimental results are shown at points spaced by 25 cm, with three estimated positions as blue diamonds and the true positions as orange circles in the plane defined by (x, y, z = 0). The nine optical beacons, denoted by open circles with indices, are positioned at (x = 0 and ±50 cm, y = 0 and ±50 cm, z = 110 cm).

The estimated positions (blue diamonds) for three experiments are shown with their true positions (orange circles) in Fig. 7(b). The experimental position error, being the error between the estimated and true positions, is similar in the x, y, and z dimensions and relatively constant across the entire positioning plane. The overall experimental mean position error across the plane is 1.70 cm, which is in good agreement with the theoretical mean position error of 1.68 cm. Moreover, this position error is comparable to those obtainable by more complex RF and TOA/TDOA systems [8].

Fig. 8. A flowchart illustrating the operation of the optical receiver. The process begins with reading the image sensor data and ends with estimating the optical receiver's position.

We note here that it would be possible to integrate the proposed AOA-based OW positioning system with OW communication technology (potentially with a radio-frequency uplink) in one of two ways. The first way would have the optical beacons modulate high-speed data as independent downlinks in tandem with positioning signals having unique identifier frequencies. A fast photodiode would be used to receive the high-speed data. The second way would use a camera communication system similar to that in [27]. In this system, the downlink is established via multiple LED transmitters and undersampling by the optical receiver. This can enable data rates on the order of 100 bit/s in tandem with positioning signals having unique identifier frequencies. For higher bit rates, spatial multiplexing with multiple optical beacons and thus multiple channels can be employed. Such multiplexing can enable data rates on the order of kbit/s. Ultimately, the realization of an integrated OW positioning and communication system is achievable.

VI. CONCLUSION

Within this work, we have analysed the design, operation, and implementation of an optical receiver for use in AOA-based OW positioning. The optical receiver was designed to have a sufficiently small AOA error, being σ_AOA = 1°, over a wide angular FOV, being 100°. Such a design supported the optical receiver's use in positioning with a 3 × 3 grid of optical beacons. The grid was implemented with optical beacons having unique characteristics for identification, enabled by multiple frequency and colour channels, and wide spacings, enabled by the optical receiver's wide angular FOV. The overall AOA-based OW positioning system demonstrated a position error of 1.7 cm, which is comparable to that obtained by more complex RF and TOA/TDOA positioning systems. Thus, the presented AOA-based OW technologies can play an important role in emerging indoor positioning systems.

REFERENCES

[1] Global Positioning System Standard Positioning Service Performance Standard, 4th ed., 2008.
[2] H. Liu, H. Darabi, P. Banerjee, and J. Liu, "Survey of wireless indoor positioning techniques and systems," IEEE Trans. Syst., Man, Cybern. C, vol. 37, no. 6, pp. 1067–1080, Nov. 2007.
[3] J. Armstrong, Y. A. Sekercioglu, and A. Neild, "Visible light positioning: a roadmap for international standardization," IEEE Commun. Mag., vol. 51, no. 12, pp. 68–73, Dec. 2013.
[4] X. Zhang, J. Duan, Y. Fu, and A. Shi, "Theoretical accuracy analysis of indoor visible light communication positioning system based on received signal strength indicator," J. Lightw. Technol., vol. 32, no. 21, pp. 4180–4186, Nov. 2014.
[5] Y. Kim, J. Hwang, J. Lee, and M. Yoo, "Position estimation algorithm based on tracking of received light intensity for indoor visible light communication systems," in Proc. IEEE ICUFN Conf., 2011, pp. 131–134.
[6] S. Y. Jung, S. Hann, S. Park, and C. S. Park, "Optical wireless indoor positioning system using light emitting diode ceiling lights," Microw. Opt. Technol. Lett., vol. 54, no. 7, pp. 1622–1626, Jul. 2012.
[7] W. Gu, W. Zhang, J. Wang, M. A. Kashani, and M. Kavehrad, "Three dimensional indoor positioning based on visible light with Gaussian mixture sigma-point particle filter technique," in Proc. SPIE, 2015, 93870O.
[8] D. Wu, Z. Ghassemlooy, W. D. Zhong, M. A. Khalighi, H. Le Minh, C. Chen, S. Zvanovec, and A. C. Boucouvalas, "Effect of optimal Lambertian order for cellular indoor optical wireless communication and positioning systems," Opt. Eng., vol. 55, no. 6, 066114, Jun. 2016.
[9] A. Taparugssanagorn, S. Siwamogsatham, and C. Pomalaza-Ráez, "A hexagonal coverage LED-ID indoor positioning based on TDOA with extended Kalman filter," in Proc. IEEE 37th Annu. COMPSAC, 2013, pp. 742–747.
[10] S.-Y. Jung, S. Hann, and C.-S. Park, "TDOA-based optical wireless indoor localization using LED ceiling lamps," IEEE Trans. Consum. Electron., vol. 57, no. 4, pp. 1592–1597, Nov. 2011.
[11] A. Arafa, S. Dalmiya, R. Klukas, and J. F. Holzman, "Angle-of-arrival reception for optical wireless location technology," Opt. Express, vol. 23, no. 6, pp. 7755–7766, Mar. 2015.
[12] A. Arafa, X. Jin, M. H. Bergen, R. Klukas, and J. F. Holzman, "Characterization of image receivers for optical wireless location technology," IEEE Photon. Technol. Lett., vol. 27, no. 8, pp. 1923–1926, Sep. 2015.
[13] A. Arafa, X. Jin, and R. Klukas, "Wireless indoor optical positioning with a differential photosensor," IEEE Photon. Technol. Lett., vol. 24, no. 12, pp. 1027–1029, Jun. 2012.
[14] Y. S. Kuo, P. Pannuto, K. J. Hsiao, and P. Dutta, "Luxapose: Indoor positioning with mobile phones and visible light," in Proc. 20th Annu. Int. Conf. Mobile Comput. Netw., 2014, pp. 447–458.
[15] M. H. Bergen, A. Arafa, X. Jin, R. Klukas, and J. F. Holzman, "Characteristics of angular precision and dilution of precision for optical wireless positioning," J. Lightw. Technol., vol. 33, no. 20, pp. 4253–4260, Oct. 2015.
[16] R. Ma, Q. Guo, C. Hu, and J. Xue, "An improved WiFi indoor positioning algorithm by weighted fusion," Sensors, vol. 15, no. 9, pp. 21824–21843, Aug. 2015.
[17] X. Jin, D. Guerrero, R. Klukas, and J. F. Holzman, "Microlenses with tuned focal characteristics for optical wireless imaging," Appl. Phys. Lett., vol. 105, no. 3, 031102, Jul. 2014.
[18] Y. Arai and M. Sekiai, "Absolute position measurement system for mobile robot based on incident angle detection of infrared light," in Proc. IEEE IROS, vol. 1, 2003.
[19] T. Tanaka and S. Haruyama, "New position detection method using image sensor and visible light LEDs," in Proc. IEEE 2nd Int. Conf. Mach. Vision, 2009, pp. 150–153.
[20] B. Born, E. L. Landry, and J. F. Holzman, "Electrodispensing of microspheroids for lateral refractive and reflective photonic elements," IEEE Photon. J., vol. 2, no. 6, pp. 873–883, Dec. 2010.
[21] K. H. Jeong and L. P. Lee, "A new method of increasing numerical aperture of microlens for biophotonic MEMS," in Proc. 2nd Annu. Int. IEEE-EMBS Special Topic Conf. Microtechnol. Med. Biol., 2002, pp. 380–383.
[22] T. Q. Wang, Y. A. Sekercioglu, and J. Armstrong, "Analysis of an optical wireless receiver using a hemispherical lens with application in MIMO visible light communications," J. Lightw. Technol., vol. 31, no. 11, pp. 1744–1754, Jun. 2013.
[23] E. Hecht, Optics, 2nd ed. Reading, MA: Addison-Wesley, 1987, p. 212.
[24] J. Davis, Y. H. Hsieh, and H. C. Lee, "Humans perceive flicker artifacts at 500 Hz," Sci. Rep., vol. 5, 07861, Dec. 2014.
[25] P. E. Pace, R. E. Leino, and D. Styer, "Use of the symmetrical number system in resolving single-frequency undersampling aliases," IEEE Trans. Signal Process., vol. 45, no. 5, pp. 1153–1160, May 1997.
[26] Cree, "Cree PLCC6 3 in 1 SMD LED CLV6A-FKB," datasheet, 2015.
[27] N. Liu, J. Cheng, and J. F. Holzman, "Undersampled differential phase shift on-off keying for optical camera communications," J. Commun. Inf. Netw., to be published, 2017.
