UBC Faculty Research and Publications

Design and optimization of indoor optical wireless positioning systems Bergen, Mark; Guerrero, Daniel; Jin, Xian; Hristovski, Blago A.; Chaves, Hugo A. L. F.; Klukas, Richard; Holzman, Jonathan F. Mar 16, 2016

Design and optimization of indoor optical wireless positioning systems

Mark H. Bergen*, Daniel Guerrero, Xian Jin, Blago A. Hristovski, Hugo A. L. F. Chaves, Richard Klukas, and Jonathan F. Holzman
Integrated Optics Laboratory, The University of British Columbia, School of Engineering, 3333 University Way, Kelowna, BC, Canada, V1V 1V7

ABSTRACT

Optical wireless (OW) technologies are an emerging field that uses optical sources to replace existing radio-frequency technologies. The vast majority of work in OW focuses on communication; however, a smaller emerging area is indoor OW positioning, which essentially aims to replace GPS indoors. One of the primary competing methods in indoor OW positioning is angle-of-arrival (AOA). AOA positioning uses the received vectors from several optical beacons to triangulate the receiver's position. The reliability of this triangulation rests on two aspects: the geometry of the optical receiver's location relative to the optical beacon locations, and the ability of the optical receiver to resolve the incident vectors correctly. The optical receiver is quantified by the standard deviation of the azimuthal and polar angles that define the measured vector, while the quality of the optical beacon geometry is quantified using dilution of precision (DOP). This proceeding discusses the AOA standard deviation of an ultra-wide field-of-view (FOV) lens along with the DOP characteristics of several optical beacon geometries: simple triangle, square, and hexagon arrangements. To assist the implementation of large optical beacon geometries, the use of both frequency and wavelength division multiplexing is proposed. It is found that an ultra-wide FOV lens, coupled with an appropriately sized optical beacon geometry, allows for high-accuracy positioning over a large area. The results of this work will enable reliable OW positioning deployments.
Keywords: Angle-of-arrival, dilution of precision, indoor positioning, optical wireless.

1. INTRODUCTION

Optical wireless (OW) technologies have become prevalent over the past few years. While OW communication has garnered most of the attention, due to its potential for high data rates1,2, there is another application that shows much promise: OW positioning. Such OW positioning can augment conventional positioning systems, such as GPS, in areas where conventional systems are ineffective, such as indoors3. There are multiple systems in use for OW positioning. The most common is the received signal strength (RSS) based system4, which uses a single photodetector to receive signal power from many optical beacons and then uses the powers of the received signals to trilaterate its position. The main drawback to this system, as seen by ourselves5 and others6, is its sensitivity to optical beacon power imbalances and fluctuations, both of which degrade the performance. The imbalances and fluctuations may be a result of non-uniform optical beacon radiation patterns or environmental factors7. These challenges can make RSS based OW positioning inaccurate. Thus, broadband systems using time-of-arrival8,9 and time-difference-of-arrival10 have been applied for OW positioning; however, they require precise synchronization between the optical beacons and the optical receiver, making implementation difficult. A third solution for OW positioning, requiring neither synchronization nor predictable optical beacon power, is angle-of-arrival (AOA) based positioning. Like the other systems, AOA based positioning uses a fixed and overhead optical beacon geometry and a mobile optical receiver. As the mobile optical receiver must be capable of spatial discernment of the optical beacons, orthogonal photodetectors11 or image sensors with lenses12 are often used.
The optical receiver measures the incident AOA from each optical beacon as an azimuthal angle, φ, and a polar angle, θ, and it defines a vector for each of the AOAs. Triangulation is then used to estimate the optical receiver's position as the intersection point of the vectors. Since this process applies vectors, as opposed to the scalar powers of RSS positioning, the operation is largely independent of the optical beacon powers. This is a distinct advantage of AOA based positioning.

*mark.bergen@alumni.ubc.ca; phone 1 250 807-8798; fax 1 250 807-9850.

Photonic Instrumentation Engineering III, edited by Yakov G. Soskind, Craig Olson, Proc. of SPIE Vol. 9754, 97540A · © 2016 SPIE · CCC code: 0277-786X/16/$18 · doi: 10.1117/12.2208722

In this work, we investigate the performance of AOA based positioning. We study the effects of geometric dilution of precision (DOP) and angular error for AOA based positioning systems, with the ultimate goal of establishing low and uniform positioning errors across the region of operation. Practical considerations related to optical beacon geometry and identification are also addressed.

2. POSITIONING PRECISION

An indoor OW positioning system must be capable of locating the optical receiver in three-dimensional space. In general, the optical receiver can be at any location (x, y, z) and orientation (roll, pitch, yaw), so six degrees of freedom must be determined. For AOA based systems, the optical receiver carries out positioning by observing surrounding optical beacons and identifying the AOA for each optical beacon. Each AOA is quantified by its azimuthal angle, φ, and polar angle, θ, and these angles are used to define a line of position (LOP), as a unit vector from the optical beacon to the optical receiver.
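As a concrete illustration of this LOP definition, the unit vector can be computed directly from the two angles. The sketch below is illustrative only; the angle convention, with the polar angle measured from the downward vertical at the beacon, is our assumption and is not spelled out in the text:

```python
import math

def lop_unit_vector(phi_deg, theta_deg):
    """Unit vector of a line of position (LOP) pointing from an overhead
    optical beacon toward the receiver, built from the azimuthal angle phi
    and polar angle theta of the measured AOA. Assumed convention: theta is
    measured from the downward vertical, so theta = 0 means the receiver is
    directly below the beacon."""
    phi = math.radians(phi_deg)
    theta = math.radians(theta_deg)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            -math.cos(theta))

straight_down = lop_unit_vector(0.0, 0.0)  # (0.0, 0.0, -1.0)
```

Under this convention, a small polar angle corresponds to a beacon that is nearly overhead, consistent with the discussion of the receiver's FOV later in the paper.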
The location where all of the LOPs intersect is found by triangulation and is the location of the optical receiver. Since each measured AOA is quantified by its two (azimuthal and polar) angles, each optical beacon yields two equations for the triangulation process. Thus, three optical beacons must be observed to uniquely define values for the six degrees of freedom in the system. The triangulation equations are then solved using a least squares algorithm. It is important to note that triangulation can be carried out with three optical beacons, but operation with even more optical beacons yields better positioning accuracy. (The authors also note that AOA based positioning can be carried out with only two optical beacons if the orientation of the optical receiver is known.)

The complete AOA positioning system is comprised of the mobile optical receiver as well as the fixed and overhead optical beacon grid. The optical receiver has attached to it a body frame, defined by (x', y', z'), and its position is estimated in the global frame, defined by (x, y, z). The optimization of the AOA positioning is carried out by recognizing that geometric DOP is a scaling factor relating the angular error, σa, and the positioning error, σp. Thus, we have the relation

σp(x, y, z) = DOP(x, y, z) · σa.      (1)

To visualize an angular error and the effects of geometric DOP, one can envision the vector originating at a beacon as a cone, as opposed to a perfect vector. Thus, for several optical beacons, the triangulation solution is no longer an intersection point of perfect lines but is an intersection volume of cones. Figure 1 shows how the arrangement of these cones affects the overlapping volume containing the true position. In Fig. 1(a) we see an example of a poor geometry. The nearly parallel LOPs cause the overlapping volume to be exaggerated. Positioning using this arrangement would yield large positioning errors. In contrast, Fig. 1(b) shows an example of a good geometry.
The nearly orthogonal LOPs cause the overlapping volume to be small compared to the previous geometry. Positioning using this latter geometry would yield small positioning errors. Thus, we see that geometric DOP is a function of both the optical beacon geometry and the optical receiver's position. We conclude that geometric DOP should be minimized by judicious arrangement of the optical beacons and by having the optical receiver operate with the lowest possible angular error, over as wide an angular field-of-view (FOV) as possible. The following two subsections address these goals.

Figure 1. Example of the effects of geometry on position estimation. Figure (a) shows nearly parallel LOP cones resulting in a large overlapping volume, while figure (b) shows nearly perpendicular LOP cones resulting in a smaller overlapping volume and improved positioning performance.

2.1 Geometric Optimization

Geometric optimization can support positioning systems. Most notably, GPS designers used DOP to optimize the placement of satellites to obtain low and uniform DOP over the entire globe13. In AOA based systems, geometric DOP describes the DOP in three dimensions without the incorporation of time. Geometric DOP itself arises from the least squares algorithm that performs the positioning calculations. Formally stated, DOP is

DOP(x, y, z) = (tr[(HᵀH)⁻¹])^(1/2) = σp(x, y, z)/σa,      (2)

where tr[·] and [·]ᵀ are the trace and transpose operators, respectively14. The geometric design matrix, H, is used to determine the ratio of the positioning error, σp, to the angular error, σa, at a position (x, y, z) in the global frame. However, it is assumed here that the polar and azimuthal angular errors are similar. This assumption is verified later.
The geometric design matrix arises from the formulation of the least squares algorithm. It is essentially a measure of the suitability of the positioning equations to provide an accurate solution. Because the positioning mathematical model is linearized with a Taylor series expansion, the geometry matrix, H, is found by taking the first partial derivatives of the positioning equations and combining them into a full matrix. These equations determine the azimuthal and polar angles for the ith optical beacon from the difference between the optical receiver location (x, y, z) and the location of that optical beacon (xi, yi, zi), both in the global frame. These equations are

φi = arctan[(y − yi)/(x − xi)],      (3)

and

θi = arctan(ri/|z − zi|),      (4)

where

ri = [(x − xi)² + (y − yi)²]^(1/2).      (5)

Additionally, the full distance between the optical beacon at (xi, yi, zi) and the optical receiver at (x, y, z) is

Ri = [(x − xi)² + (y − yi)² + (z − zi)²]^(1/2).      (6)

We begin forming the geometry matrix by taking the first partial derivatives of the azimuthal equation in three dimensions for each optical beacon. The resulting partial derivative matrix for up to n optical beacons, with one row per beacon, is

Hφ = [∂φi/∂x   ∂φi/∂y   ∂φi/∂z],   i = 1, …, n.      (7)

Next, we form the partial derivative matrix for the polar angle equation by taking its partial derivatives in three dimensions for each optical beacon. The resulting partial derivative matrix for up to n optical beacons is

Hθ = [∂θi/∂x   ∂θi/∂y   ∂θi/∂z],   i = 1, …, n.      (8)

The full geometry matrix, H, is formed by augmenting the azimuthal and polar partial derivative matrices together as

H = [Hφ; Hθ].      (9)

Using equations (3)-(9), and noting that the optical beacons considered here are overhead (zi > z), the rows of the final geometry matrix for n optical beacons are

∂φi/∂x = −(y − yi)/ri²,   ∂φi/∂y = (x − xi)/ri²,   ∂φi/∂z = 0,
∂θi/∂x = (x − xi)|z − zi|/(ri Ri²),   ∂θi/∂y = (y − yi)|z − zi|/(ri Ri²),   ∂θi/∂z = ri/Ri².      (10)

This final form is a function of the position of the optical receiver (x, y, z) and the positions of each of the optical beacons (xi, yi, zi), all in the global frame. This means that the geometry matrix, H, will be unique for every point in space. The geometry matrix also grows by two rows with each additional visible optical beacon, one for the azimuthal angle and one for the polar angle, giving 2n rows in total. Inserting this result into equation (2), we see that DOP is a function of position. Note that, through the least squares algorithm, the scalar result of equation (2) is reduced with additional optical beacons.

2.2 Angular Error Minimization

Optimizing DOP is only half of the position error minimization challenge; the angular error of the optical receiver must also be low and constant for the azimuthal and polar angles. We define the FOV as the range of AOAs over which the optical receiver can measure accurately with virtually no systematic errors such as astigmatism or coma. With this defined, angular precision and angular accuracy become equivalent for the measured AOAs. The angular error then simply becomes the difference between the true and measured AOAs.
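Equations (2)-(10) translate directly into a short numerical routine. The following sketch is our illustrative implementation (not the authors' code), with angles in radians, overhead beacons (zi > z), and positions in cm; it builds H row by row and evaluates the DOP of equation (2):

```python
import numpy as np

def geometry_matrix(p, beacons):
    """Geometry matrix H of eq. (10): 2n x 3, one azimuthal row and one
    polar row per optical beacon, evaluated at receiver position p."""
    x, y, z = p
    rows = []
    for xi, yi, zi in beacons:
        r2 = (x - xi)**2 + (y - yi)**2          # ri^2, eq. (5)
        r = np.sqrt(r2)
        R2 = r2 + (z - zi)**2                   # Ri^2, eq. (6)
        rows.append([-(y - yi) / r2, (x - xi) / r2, 0.0])   # d(phi_i)/d(x, y, z)
        rows.append([(x - xi) * abs(z - zi) / (r * R2),     # d(theta_i)/d(x, y, z)
                     (y - yi) * abs(z - zi) / (r * R2),     # for overhead beacons
                     r / R2])
    return np.array(rows)

def dop(p, beacons):
    """DOP of eq. (2): sqrt(tr[(H^T H)^-1]), the ratio sigma_p / sigma_a."""
    H = geometry_matrix(p, beacons)
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

# square geometry, side a = 100 cm, beacons 100 cm above the receiver plane
a = 100.0
square = [(0.0, 0.0, 100.0), (a, 0.0, 100.0), (a, a, 100.0), (0.0, a, 100.0)]
dop_cm_per_deg = dop((a / 2, a / 2, 0.0), square) * np.pi / 180.0  # roughly 2.2
```

With σa = 1°, this centre-of-cell value is of the same order as the square-geometry results reported later in the paper; the π/180 factor converts from cm/radian to cm/degree.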
Our prior work has shown that it is possible to create an ultra-wide FOV optical receiver with low angular error by using an optical image receiver with an electro-dispensed hemispherical lens12. The work of others has verified that a hemispherical lens is well-suited to ultra-wide FOV imaging15. More information on this lens fabrication method is given elsewhere16. We created an optical receiver using this technique. Figure 2 shows a scanning electron microscope (SEM) image of both the CMOS image sensor pixels, Fig. 2(a), and the electro-dispensed microlens, Fig. 2(b). Superimposed onto the image of the CMOS pixels is the approximate spot size to which incoming collimated light would be focused on the image sensor. This spot size is approximately 30 μm in diameter, which is much larger than the 6 × 6 μm² pixels, meaning that image spot quantization will not be an issue.

A testbed was created to measure the angular error of the optical receiver. The angular error measurements were separated into azimuthal and polar angular errors, and a wide range of incident AOAs were tested to define the limits of the optical receiver's FOV. Incident light from each optical beacon was concentrated down to a focal spot on the image sensor. The geometric centre of each focal spot on the image sensor, (xIS, yIS), can then be used to estimate the AOAs corresponding to that optical beacon. The estimated azimuthal angle is calculated as φIS = arctan(yIS/xIS) − 180°, while the estimated polar angle is calculated as θIS ≈ k(xIS² + yIS²)^(1/2), where k is a constant of proportionality that must be determined through calibration. With the earlier assumption that the AOA estimates are unbiased, we can use

φ ≈ φIS = arctan(yIS/xIS) − 180°,      (11)

to calculate the true azimuthal angle, and

θ ≈ θIS ≈ k(xIS² + yIS²)^(1/2),      (12)

to calculate the true polar angle. Any remaining deviations in these measurements are defined as random angular error.
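Equations (11) and (12) amount to a polar-coordinate conversion of the focal spot centre. A minimal sketch follows; it is illustrative only, atan2 is used in place of arctan for quadrant safety, and k is a hypothetical calibration constant:

```python
import math

def spot_to_aoa(x_is, y_is, k):
    """Estimate the AOA from the focal spot centre (x_is, y_is).

    phi follows eq. (11), arctan(y_is / x_is) - 180 deg, and theta follows
    eq. (12), k * sqrt(x_is^2 + y_is^2), with k found by calibration."""
    phi = math.degrees(math.atan2(y_is, x_is)) - 180.0
    theta = k * math.hypot(x_is, y_is)
    return phi, theta

phi, theta = spot_to_aoa(-3.0, 0.0, k=2.0)  # spot 3 units left of centre
```

A spot displaced along the negative x axis maps to an azimuthal angle of 0° under this convention, with the polar angle growing linearly with the spot's radial offset.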
The azimuthal angular error is Δφ = φIS − φ and the polar angular error is Δθ = θIS − θ. The results of the characterization of the angular error of the optical receiver can be seen in Fig. 3. The first figure, Fig. 3(a), shows a plot of the estimated polar angle, θIS, versus the true polar angle on the left axis and the polar angular error, Δθ, versus the true polar angle on the right axis. The second figure, Fig. 3(b), shows a plot of the estimated azimuthal angle, φIS, versus the true azimuthal angle on the left axis and the azimuthal angular error, Δφ, versus the true azimuthal angle on the right axis.

Figure 2. SEM images are shown. Figure (a) shows the CMOS image sensor pixels with a focal spot overlaid in blue. Figure (b) shows the ultra-wide FOV microlens.

Figure 3. Results of angular error characterization are shown. Figure (a) shows the estimated polar angle, θIS, plotted on the left axis and the angular error, Δθ, plotted on the right axis versus the true polar angle, θ. Figure (b) shows the estimated azimuthal angle, φIS, plotted on the left axis and the angular error, Δφ, plotted on the right axis versus the true azimuthal angle, φ. In (a), deviations from linearity at θ > 60° are denoted by hollow triangles. In both plots, the black line represents the true azimuthal and polar angles.

In Fig. 3 there are deviations from linearity in the angular characterization for both the azimuthal and polar angles, which are manifestations of random noise in the measurements.
The standard deviation of this angular error falls within 1° for both the azimuthal and polar measurements. The only region where the angular error deviates past this limit is when the polar angle is below 3° or above 60°. When the polar angle is below 3°, the optical beacon is virtually overhead, making it difficult to determine the AOAs. When the polar angle is above 60°, our assumption that the measurements are unbiased by systematic errors, such as comatic aberration, breaks down. Using these limits, we can quantify the FOV and angular error of the optical receiver. The optical receiver's FOV is between 3° and 60° in the polar angle for all azimuthal angles, giving an angular FOV of approximately 120°. Within this linear range the angular error has a standard deviation of approximately 1°.

3. POSITIONING IMPLEMENTATION

Armed with knowledge of the effects of DOP and angular precision on positioning, we are now able to analyse practical systems. There are, however, a few challenges associated with deploying a practical OW positioning system. The first challenge, explored in Section 3.1, is how to match an AOA with its corresponding optical beacon; frequency and wavelength based modulation techniques are explored for this purpose. The second challenge is determining the best optical beacon geometry for use in an OW positioning system. As we will see in Section 3.2, the best optical beacon geometry depends on the environment in which the optical beacons are deployed as well as the maximum FOV over which the optical receiver can image.

3.1 Optical Beacon Identification

After the image sensor on the optical receiver has isolated each optical beacon within its FOV and extracted an AOA for each optical beacon, the remaining task for positioning by triangulation is to determine the identity of each optical beacon. Thus, the AOAs must be matched to the optical beacons.
A logical method of identifying optical beacons would be either to modulate them at various frequencies and use those frequencies as identifiers, or to simply send the coordinates of the optical beacons as modulated data. Since the majority of commercial image sensors have framerates below 200 Hz, with a Nyquist frequency below 100 Hz, sending the coordinates of each optical beacon as a modulated signal is impractical due to bandwidth limitations. This frequency limitation also greatly reduces the number of identifier frequencies that can be used. Restricting the frequencies further is the fact that the human eye can perceive flicker at frequencies up to approximately 30 Hz, so a practical system should avoid these low frequencies. Thus, a realistic frequency range is 35-95 Hz.

In order to determine how many frequencies fit within this range, we must consider the spacing between the frequencies and the minimum sampling time. The minimum sampling time of a signal is the reciprocal of either the lowest frequency or the frequency spacing between adjacent frequencies. The lowest frequency in both cases would be approximately 35 Hz, due to the human eye, giving a minimum sampling time of 29 ms. While simply sampling for a longer period of time sounds like a reasonable solution, it would require the optical receiver to be immobile during such sampling. In practical applications the optical receiver could, and likely would, be moving, and this movement would be seen as positioning error. In a large deployment where many optical beacons are required, there may not be enough combinations of frequencies to uniquely identify every optical beacon.
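The frequency identification described above can be sketched numerically. The following is an illustrative example, not the authors' implementation, assuming a 200 frame/s image sensor sampling one beacon's intensity for 1 s and testing for the identifier frequencies 45 Hz and 75 Hz:

```python
import numpy as np

def detect_tones(signal, fs, candidates, thresh=0.3):
    """Report which candidate identifier frequencies appear in a sampled
    beacon intensity signal (fs in frames per second)."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()            # drop the DC component
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    peak = spec.max()
    found = []
    for f in candidates:
        i = int(np.argmin(np.abs(freqs - f)))  # nearest FFT bin
        if spec[i] > thresh * peak:            # strong spectral line here
            found.append(f)
    return found

# beacon modulated at 45 Hz and 75 Hz, imaged at 200 frames/s for 1 s
fs, n = 200, 200
t = np.arange(n) / fs
intensity = 1.0 + 0.5 * np.sin(2 * np.pi * 45 * t) + 0.5 * np.sin(2 * np.pi * 75 * t)
tones = detect_tones(intensity, fs, candidates=[45, 75])  # [45, 75]
```

The 1 s record comfortably exceeds the 29 ms minimum sampling time, and the 1 Hz bin spacing resolves identifier frequencies separated by far less than the 30 Hz gap used here.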
In this case another method must be employed. Since virtually all commercial image sensors are capable of capturing images at multiple wavelengths (colours), an OW positioning receiver can use wavelength as an additional identifier to increase the number of discernable optical beacons. This requires optical beacons that consist of a white-light LED with separate modulation inputs on the red, green, and blue (RGB) LEDs. At the same time, typical image receivers consist of RGB pixels, so each wavelength can be processed separately. Since RGB only provides three more variables, it is combined with frequency identification to obtain more identifiers.

To test the suitability of this method, we tested a single white-light LED optical beacon with the optical receiver described in Section 2.2. The LED optical beacon had its red and blue wavelengths modulated at either 45 Hz, 75 Hz, or both. The optical receiver then captured a video of this optical beacon and performed a fast Fourier transform (FFT) on the red and blue signals. The combination of wavelengths and frequencies corresponded to an optical beacon number using a look-up table. Figure 4 shows the results of FFTs for two optical beacons, and Table 1 is the look-up table matching frequencies and wavelengths to optical beacon numbers.

Combining the results from Fig. 4 and Table 1, we can conclude that Fig. 4(a) corresponds to beacon 2 and Fig. 4(b) corresponds to beacon 7. Additionally, the intensity of each wavelength at a given frequency is independent of the other wavelength; that is to say, there is no crosstalk between the wavelengths. These results show that introducing wavelength as an identification variable is a viable method of increasing the number of identifiers for low-bandwidth optical receivers.

Figure 4. FFT results are shown for two different frequency and wavelength combinations. Figure (a) shows peaks at both 45 and 75 Hz in red and 75 Hz in blue.
Figure (b) shows peaks at 45 Hz in red and 75 Hz in blue. These results are used in conjunction with Table 1 to allow for identification of the optical beacons.

Table 1. Beacon identification table matching wavelengths and frequencies with optical beacons.

Beacon Number   Red Frequencies (Hz)   Blue Frequencies (Hz)
1               45                     45
2               45, 75                 75
3               75                     45
4               75                     45, 75
5               45, 75                 45
6               75                     75
7               45                     75
8               45                     45, 75

3.2 Optical Beacon Geometry Optimization

The second practical consideration for real-world OW positioning implementations is the effect of the optical beacon geometry. Since there are many potential optical beacon geometries, we restrict this analysis to simple geometric shapes that can be organized into a periodic array: a hexagon, a square, and an equilateral triangle. All optical beacons in each geometry are arranged in a plane. The results of this analysis represent a worst case, since replicating these simple geometries into arrays adds beacons, and the addition of more beacons (typically) improves the positioning accuracy.

Given the qualified argument that more optical beacons improve positioning accuracy, we must devise a method to fairly compare the positioning characteristics of the geometries, since each has a different number of optical beacons. To do this we introduce an optical beacon density (or power density) and make it equivalent for all geometries. We calculate the fraction of each optical beacon that would contribute to a single cell, if each of the geometries were repeated, and we sum all of the contributions over the area of the cell. The beacon fractions are shown in Fig. 5.
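Returning briefly to the beacon identification of Section 3.1, the scheme ultimately reduces to a dictionary look-up from the detected red and blue frequency sets to a beacon number. A minimal sketch mirroring Table 1 (illustrative names, not the authors' code):

```python
# Look-up mirroring Table 1: (red frequencies, blue frequencies) -> beacon
BEACON_TABLE = {
    ((45,), (45,)): 1,
    ((45, 75), (75,)): 2,
    ((75,), (45,)): 3,
    ((75,), (45, 75)): 4,
    ((45, 75), (45,)): 5,
    ((75,), (75,)): 6,
    ((45,), (75,)): 7,
    ((45,), (45, 75)): 8,
}

def identify_beacon(red_freqs, blue_freqs):
    """Map detected red and blue identifier frequencies to a beacon number."""
    key = (tuple(sorted(red_freqs)), tuple(sorted(blue_freqs)))
    return BEACON_TABLE.get(key)  # None if the combination is unassigned
```

For example, the spectra of Fig. 4(a), with red peaks at 45 and 75 Hz and a blue peak at 75 Hz, map to beacon 2, and those of Fig. 4(b) map to beacon 7.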
Once we have the optical beacon density for each geometry, making the densities equal is a simple matter of adjusting the side length of each geometry. In this work we use the density of the square geometry as the standard; it has one beacon within an area of a². The other two geometries are scaled by a side length factor of 2^(1/2)/3^(1/4) for the triangle and 2/3^(3/4) for the hexagon.

Since a visualization of DOP in three dimensions would be difficult, simulations were done for an optical beacon height of z = 100 cm and a fixed measurement plane at z = 0 cm. Figure 6 shows the DOP and positioning error, σp, beneath each geometry. We took the side length, a, of each geometry to be 100 cm and the angular error, σa, to be 1°.

The results for the triangle beacon geometry, shown in Fig. 6(a), indicate that the DOP varies significantly across the plane beneath the beacon geometry, with a mean value of 2.65 cm/degree and a standard deviation of 0.095 cm/degree, making it the worst of the three geometries for both mean DOP and DOP standard deviation. The square geometry, shown in Fig. 6(b), fared better, with a mean DOP of 2.26 cm/degree and a DOP standard deviation of 0.066 cm/degree. The hexagon geometry, shown in Fig. 6(c), performed the best, with a mean DOP of 1.87 cm/degree and a DOP standard deviation of 0.053 cm/degree. The results, gathered in Table 2, show that the hexagon beacon geometry performs the best, even when the optical beacon power density is compensated. Interestingly, the power density compensation forces the hexagon geometry to spread out, making its LOPs more orthogonal and improving its positioning characteristics. It should be noted, though, that this holds for one specific distance between the optical receiver and the optical beacon geometry plane.

Figure 5. Beacon fractions are shown as the filled portion of circles, with side length scaling for the different optical beacon geometries.
The leftmost geometry is the triangle with 0.5 beacons and a side length scaling factor of 2^(1/2)/3^(1/4). The rightmost geometry is the hexagon with 2 beacons and a scaling factor of 2/3^(3/4). The centre geometry is the square with 1 beacon and a scaling factor of 1. After scaling, the geometries have the same optical beacon density, allowing for a fair comparison when simulating the performance of each.

Figure 6. Geometric DOP contours, DOP(x, y, z = 0), and positioning error, σp(x, y, z = 0), are shown for each optical beacon geometry, assuming the optical beacons are arranged in a plane parallel to the positioning surface and separated from it by 100 cm. The triangle, square, and hexagon geometries are shown in (a), (b), and (c), respectively. The optical beacon spacing parameter is set at a = 100 cm. The optical beacons are shown as hollow circles.

Table 2. Summary of positioning error characteristics for triangle, square, and hexagon beacon geometries at (x, y, z = 0).

Positioning Error Results (height = a = 100 cm)
Beacon Geometry   Mean, E[σp(x, y, z = 0)] (cm)   Standard deviation, STD[σp(x, y, z = 0)] (cm)
Triangle          2.65                            0.095
Square            2.26                            0.066
Hexagon           1.87                            0.053

Considering that the previous analysis was done for only one value of z, we now analyze the metrics of mean DOP and DOP standard deviation over a continuum of heights for each geometry. DOP scales linearly with the geometry scale, so doubling both the size of the optical beacon geometry and the distance from the optical beacons to the optical receiver doubles the DOP value. Because of this, we normalize our DOP analysis in terms of the ratio h/a, the ratio of the height to the side length of each geometry. We then carry out our analysis over a continuum of h/a ratios from 0 to 3. The results of this analysis are plotted in Fig. 7. The right axis of Fig. 7(a) shows the mean DOP, while the right axis of Fig. 7(b) shows the DOP standard deviation, at varying h/a ratios.

There are some important h/a values to note in Fig. 7. First, as h/a increases beyond 2, the mean DOP and DOP standard deviation increase without bound. This indicates that an effective OW positioning system should operate at an h/a ratio of less than 2. Second, the mean DOP decreases with the h/a ratio except where the h/a ratio is less than 0.25; below this value, the mean DOP levels out and even begins to increase slightly. This indicates that the optimal h/a range, where both the mean DOP and DOP standard deviation are low, is from 0.25 to 2.0.

While the theoretical minimum h/a ratio is 0.25, the true minimum h/a ratio is dictated by the optical receiver's maximum FOV. The optical receiver must be able to simultaneously image all optical beacons in a given geometry at any location within the perimeter of the geometry. Using simple trigonometry, the minimum h/a ratios for the triangle, square, and hexagon geometries are found to be 1/tan(FOV/2), 2^(1/2)/tan(FOV/2), and 2/tan(FOV/2), respectively.
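Both geometry sizing rules are simple to evaluate numerically. The sketch below (illustrative helper functions, not from the paper) checks that the side length scaling factors of Section 3.2 equalize the beacon densities, and evaluates the minimum h/a ratio imposed by the receiver FOV for each geometry:

```python
from math import radians, sqrt, tan

def scaled_densities(a=1.0):
    """Beacon densities after side length scaling: 0.5 beacons per
    triangle cell, 1 per square cell (area a^2), 2 per hexagon cell."""
    s_tri = a * sqrt(2.0) / 3**0.25        # triangle side, scaled by 2^(1/2)/3^(1/4)
    s_hex = a * 2.0 / 3**0.75              # hexagon side, scaled by 2/3^(3/4)
    return {
        "triangle": 0.5 / (sqrt(3.0) / 4.0 * s_tri**2),
        "square": 1.0 / a**2,
        "hexagon": 2.0 / (3.0 * sqrt(3.0) / 2.0 * s_hex**2),
    }

def min_h_over_a(fov_deg):
    """Minimum h/a ratio keeping every beacon of a geometry within the
    receiver FOV anywhere inside the geometry's perimeter."""
    t = tan(radians(fov_deg) / 2.0)
    return {"triangle": 1.0 / t, "square": sqrt(2.0) / t, "hexagon": 2.0 / t}

ratios_120 = min_h_over_a(120.0)  # ultra-wide FOV receiver
ratios_60 = min_h_over_a(60.0)    # narrower conventional receiver
```

After scaling, all three geometries share the square's density of one beacon per a², and the 120° FOV yields the minimum h/a ratios of roughly 0.58, 0.82, and 1.15 quoted in the text.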
The optical receiver analyzed in Section 2.2 was found to have a 1° angular error with a 120° FOV. Assuming an optical beacon spacing parameter of a = 100 cm, we are able to calculate the minimum h/a ratio for each optical beacon geometry and the estimated positioning error. The calculated minimum h/a ratios for the triangle, square, and hexagon geometries are 0.58, 0.82, and 1.15, respectively. These are shown in Figs. 7(a) and 7(b) as the coloured flags, and the positioning error results are shown on the left axes. An interesting point to note is that, at the minimum h/a ratios, each geometry exhibits almost identical performance, with a mean positioning error of approximately 1.8 cm and a positioning error standard deviation of approximately 0.05 cm. From an implementation standpoint, the ideal optical beacon arrangement would thus depend on the required h/a ratio.

Note that the positioning results given above depend on the optical receiver having an ultra-wide FOV of 120°. Other optical receivers may have a FOV as low as 60°, which degrades the positioning capabilities. Calculating the minimum h/a ratio for each of the triangle, square, and hexagon beacon geometries for a FOV of 60° gives 1.73, 2.45, and 3.46, respectively. From Fig. 7 we can see that, for a hexagon beacon geometry and a 60° FOV, the h/a ratio is so large that it does not appear on the plot. The minimum h/a ratio for the square geometry, while at least on the plot, is still outside of the optimal working range of 0.25 < h/a < 2. Only the minimum h/a ratio for the triangle geometry, at 1.73, is within the optimal working range. Thus, while the hexagon beacon geometry performed the best in our initial analysis, it cannot be concluded that a hexagon beacon geometry will perform the best in all cases. Each implementation requires its own consideration of the selected geometry.

Figure 7.
Results are shown for a continuum of h/a ratios for the mean and standard deviation of positioning error. Mean positioning error, E[σp(x, y, z = 0)], is shown in (a) for the triangle, square, and hexagon geometries in orange, blue, and red, respectively. Positioning error standard deviation, STD[σp(x, y, z = 0)], is shown in (b) for the triangle, square, and hexagon geometries in orange, blue, and red, respectively. Results are plotted as a function of the height-to-side-length ratio, h/a, with the optimal region indicated on both insets. The minimum h/a ratio achievable for each geometry at the receiver's FOV is marked with flags.

Proc. of SPIE Vol. 9754, 97540A

4. CONCLUSIONS

In this work, OW positioning was demonstrated using AOA-based systems. The design was optimized using the DOP characteristics of the optical beacon geometry and the angular FOV characteristics of the optical receiver. Practical implementation challenges, such as optical beacon identification, were also addressed. It was found that the optimal positioning characteristics, of low and uniform positioning errors, could be achieved using an optical receiver with an ultra-wide-FOV lens. Optical beacon identification was demonstrated using wavelength and frequency identifiers, and was found to be a practical solution for conventional image sensors. The optimal optical beacon geometry was found to depend on the implementation. The hexagon beacon geometry gave the best positioning results but had the most stringent restrictions on the FOV. The triangle beacon geometry had the worst performance but was capable of moderate performance with optical receivers having a small FOV. Ultimately, it is shown that promising performance can be achieved with such AOA-based positioning systems.
ACKNOWLEDGEMENTS

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
