Indoor Positioning through Integration of Optical Angles of Arrival with an Inertial Measurement Unit

by

Md. Shariful Islam
B.Sc., Bangladesh University of Engineering and Technology, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The College of Graduate Studies (Electrical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

September 2012

© Md. Shariful Islam 2012

Abstract

A novel indoor positioning solution is proposed in this work. An inertial navigation system (INS) is integrated with optical angle of arrival (OAOA) measurements to yield a smoother, more accurate, and robust positioning solution for indoor environments. An extended Kalman filter (EKF) is used to integrate the INS and OAOA measurements. Four different algorithms are proposed for the novel indoor positioning solution by INS/OAOA integration. An error state Kalman filter is used for implementing all four algorithms. In previous work, magnetometer error estimation was not included in the EKF state vector. In this work, magnetometer error estimation is added to the EKF state vector, and this reduced the average position error by 3.7% to 7%. Quaternion algebra is used instead of Euler angles due to the possibility of mathematical singularities for certain Euler angles. Quaternion vector estimation is performed by adding the quaternion vector to the state vector of the EKF. Both loosely coupled and tightly coupled integration strategies are explored for INS/OAOA integration. The tightly coupled strategy reduces the average positioning error by 60% compared to an OAOA-only system, while the loosely coupled strategy reduces the average error by 44%. However, the performance improvement of the tightly coupled system comes with an increased computational cost due to nonlinearities in the measurement model. The loosely and tightly coupled algorithms are modified by augmenting the observation vector with a prior accelerometer bias estimate and a quaternion vector estimate. This results in loosely and tightly coupled algorithms with augmented observations. The algorithms with augmented observations perform significantly better, especially in the case of a low update rate for the OAOA sensor. An average position error of 4.89 cm is reduced to 3.11 cm by using the loosely coupled algorithm with augmented observations instead of the loosely coupled algorithm without observation augmentation. This is an improvement of approximately 36%. For the tightly coupled system, this improvement is approximately 32%. However, where the update rate from the OAOA sensor is fast enough, no significant performance improvement is observed by using algorithms with augmented observations.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
List of Symbols
Abbreviations
Acknowledgements
Dedication
1 Introduction
  1.1 Context
  1.2 Motivation
  1.3 Problem Statement and Objective
  1.4 Thesis Outline
  1.5 Contribution
2 Literature Review
  2.1 Indoor Positioning Systems: An Overview
    2.1.1 What is an Indoor Positioning System?
    2.1.2 Location Technologies, Techniques and Algorithms
    2.1.3 Network-Based vs. Non-Network-Based System
    2.1.4 Criteria of Evaluating Indoor Positioning System
  2.2 Existing Indoor Positioning Solutions
    2.2.1 Infrared (IR) Positioning Systems
    2.2.2 Ultra-sound Positioning Systems
    2.2.3 Radio Frequency (RF) Positioning Systems
    2.2.4 Magnetic Positioning Systems
    2.2.5 Vision-based Positioning Systems
    2.2.6 Audible Sound Positioning Systems
    2.2.7 Differential Photosensor-based Positioning System
  2.3 Proposed Indoor Positioning Solution
    2.3.1 Inertial Navigation System
    2.3.2 GPS and INS Integration Strategy
    2.3.3 Proposed Solution
3 INS and OAOA Integration
  3.1 OAOA for Indoor Positioning
    3.1.1 The Corner-Cube Photosensor
    3.1.2 Determining Position
  3.2 Inertial Navigation System
    3.2.1 Mathematical Notations in Inertial Navigation
    3.2.2 Coordinate Frames
    3.2.3 Rotation Matrix Computation
    3.2.4 Modeling Motion in the Local Navigation Frame
    3.2.5 Step by Step Computation of Navigation Parameters
  3.3 Proposed Integration Algorithm
    3.3.1 Loosely Coupled Integration
    3.3.2 Tightly Coupled Integration
    3.3.3 Integration Algorithms with Augmented Observation
4 Experimental Work and Results
  4.1 Experiment
    4.1.1 Data Collection
    4.1.2 Experimental Setup
  4.2 Performance Analysis and Discussion
5 Conclusions and Future Work
  5.1 Developed Algorithms
  5.2 Limitations and Future Work
Bibliography

List of Tables

2.1  Summary and Comparison of Existing Indoor Positioning Solutions.
3.1  Quadrant information for roll from acceleration signs.
3.2  Quadrant information for pitch from acceleration signs.
3.3  Quadrant information for yaw from magnetic field component signs.
4.1  Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 10 Hz; AWGN with 0 mean and a variance of 2° is added to the simulated azimuthal angle values).
4.2  Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 5 Hz; AWGN with 0 mean and a variance of 2° is added to the simulated azimuthal angle values).
4.3  Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 10 Hz; AWGN with 0 mean and a variance of 3° is added to the simulated azimuthal angle values).
4.4  Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 5 Hz; AWGN with 0 mean and a variance of 3° is added to the simulated azimuthal angle values).
4.5  Effect of magnetometer bias estimation on the performance of loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 10 Hz; AWGN with 0 mean and a variance of 2° is added to the simulated azimuthal angle values).

List of Figures

3.1  Photograph of the corner-cube photosensor.
3.2  Sketch of the corner-cube photosensor.
3.3  Hypothetical system setup for an indoor positioning solution using the OAOA sensor.
3.4  Coordinate frames.
3.5  Basic block diagram of the INS mechanization equation for the local navigation frame.
3.6  Basic diagram of the INS mechanization block with feedback from the EKF and corrected input measurement.
3.7  Block diagram of a loosely coupled OAOA and INS integrated positioning solution.
3.8  Block diagram of a tightly coupled OAOA and INS integrated positioning solution.
3.9  Observation augmenting block.
3.10 Block diagram of a loosely coupled OAOA and INS integrated positioning solution with augmented observations.
3.11 Block diagram of a tightly coupled OAOA and INS integrated positioning solution with augmented observations.
4.1  A commercial athletic performance sensor used as the IMU unit.
4.2  A DC motor turntable for creating a controlled motion.
4.3  Hypothetical experimental setup for data collection (coordinate units are cm, and figure is not drawn to scale).
4.4  Comparison between true and estimated trajectories.
List of Symbols

λ              Longitude
Ω^b_ib         Skew symmetric matrix of ω^b_ib
ω^b_ib         Rotation rate of frame 'i', with respect to frame 'b', expressed in frame 'b'
Ω^b_ie         Skew symmetric matrix of ω^b_ie
ω^b_ie         Rotation rate of frame 'i', with respect to frame 'e', expressed in frame 'b'
Ω^n_ie         Skew symmetric matrix of ω^n_ie
ω^n_ie         Rotation rate of frame 'i', with respect to frame 'e', expressed in frame 'n'
Ω(ω)           Four-dimensional skew symmetric matrix
φ              Azimuthal angle
R^n_b          Rotation matrix to convert from the body frame to the navigation frame
R^b_n          Rotation matrix to convert from the navigation frame to the body frame
A^n            Skew symmetric matrix of a^n
b_a            Bias vector in the measured specific force vector
b_m            Bias vector in the magnetometer measurement vector
b_ω            Bias vector in the angular rate measurement vector
g^n            Earth's gravity field
H_L[A]         Measurement matrix for the LC algorithm with augmented observations
H_L            Measurement matrix for the loosely coupled algorithm
H_T[A]         Measurement matrix for the TC algorithm with augmented observations
H_T            Measurement matrix for the tightly coupled algorithm
Q              Quaternion parameter
Q_k            Quaternion parameter at the kth time epoch
r^b            Position vector expressed in the b-frame
r^n            Position vector expressed in the n-frame
r^n_OAOA,initial   Initial position vector determined by the OAOA block
r^n_INS,k+1    Position vector determined by the INS block at the (k+1)th epoch
r^n_OAOA,k+1   Position vector determined by the OAOA block at the (k+1)th epoch
v^n            Velocity vector expressed in the n-frame
v^n_INS,k+1    Velocity vector determined by the INS block at the (k+1)th epoch
z_L[A]         Observation vector for the LC algorithm with augmented observations
z_L            Observation vector for the loosely coupled algorithm
z_T[A]         Observation vector for the TC algorithm with augmented observations
z_T            Observation vector for the tightly coupled algorithm
θ              Polar angle
ϕ              Latitude
ω_k+1          Raw gyroscope measurement at the (k+1)th epoch
a_k+1          Raw accelerometer measurement at the (k+1)th epoch
m_k+1          Raw magnetometer measurement at the (k+1)th epoch
a^b_x, a^b_y, a^b_z   Raw measurements from the x-, y- and z-axis accelerometers
b_ax, b_ay, b_az      Biases in the x-, y- and z-accelerometers
b_ωx, b_ωy, b_ωz      Biases in the x-, y- and z-gyroscopes
h              Altitude
m^b_x, m^b_y, m^b_z   Raw measurements from the x-, y- and z-axis magnetometers
p              Pitch
r              Roll
t_k            kth time epoch
t_k+1          (k+1)th time epoch
y              Yaw
α_x, α_y, α_z  Reciprocals of the correlation times of the x-, y- and z-accelerometers
β_x, β_y, β_z  Reciprocals of the correlation times of the x-, y- and z-gyroscopes
γ_x, γ_y, γ_z  Reciprocals of the correlation times of the x-, y- and z-magnetometers
α_D            Diagonal matrix with -α_x, -α_y, -α_z
β_D            Diagonal matrix with -β_x, -β_y, -β_z
γ_D            Diagonal matrix with -γ_x, -γ_y, -γ_z
δω^b_ib        Vector of errors in the measured angular rate vector
δa^b           Vector of errors in the measured specific force vector
δm^b           Vector of errors in the magnetometer measurement
δr^n           Position error state vector
δv^n           Velocity error state vector
δψ^n           Attitude error state vector
ω_e            Earth rotation rate
Abbreviations

GNSS     Global Navigation Satellite System
LOS      Line of Sight
NAVSTAR  Navigation by Satellite Ranging and Timing
GPS      Global Positioning System
IPS      Indoor Positioning System
AOA      Angle of Arrival
OAOA     Optical Angle of Arrival
INS      Inertial Navigation System
IMU      Inertial Measurement Unit
RFID     Radio-Frequency IDentification
WLAN     Wireless Local Area Network
WPAN     Wireless Personal Area Network
UWB      Ultra-Wide Band
TOA      Time of Arrival
RSS      Received Signal Strength
PN       Personal Network
MEMS     Micro Electro-Mechanical Systems
ECI      Earth-Centered Inertial
ECEF     Earth-Centered Earth-Fixed
GMST     Greenwich Mean Sidereal Time
ENU      East-North-Up
NED      North-East-Down
EKF      Extended Kalman Filter
LKF      Linear Kalman Filter

Acknowledgements

I must start by expressing my gratitude to my supervisor, Dr. Richard Klukas, who put an end to all my preconceptions about graduate supervisors, built up from horrifying stories about graduate life and tales of horrifying professors. Richard is such a nice person and mentor to work with that graduate life might not feel like a conventional graduate life at all. I am really lucky to have had such a pleasant start to my graduate life with a supervisor like him.

I would like to thank my committee members, Dr. Jonathan Holzman and Dr. Thomas Johnson, for kindly being a part of my supervisory committee.

I have to thank my friend and colleague Ahmed Arafa, who is working on a novel indoor positioning solution; finding some problems in that new positioning solution gave me the opportunity to find a research topic. As a forgetful person, I have to thank him again, since he lives in the same building as me: I can sometimes forget my laptop charger or backpack in the office and ask him to bring them home so that I can collect them at night. He is really a wonderful guy.

I would like to thank my apartment-mates Nijam, Azam and Russell for keeping our apartment unit lively, which didn't let me feel the huge distance between me and my family. Thanks to Skype, which, at least sometimes, makes my room feel like my room far away in Bangladesh.

Lastly, my family. I will not thank my family, my mom, my dad, my sisters and my brother. It is they who make it really difficult for me to stay cool even in Canada, because they are the reason I am still warm. However, I will not thank them, because we don't thank our family for being with us; we call them family because they are always with us.

Dedication

To the truth I seek

Chapter 1  Introduction

1.1  Context

What is meant by the term 'positioning'? Although there is no universally agreed definition of positioning, in general it can be defined as "any of the several methods of determining the position of a body or device by geometry, astronomy, radio signals, etc. with respect to some known reference" [1]. In a broad sense, positioning can be divided into two fields, 'outdoor positioning' and 'indoor positioning', depending on the environment that the positioning system experiences. Indoor and outdoor environments, being totally different, present different kinds of challenges for a positioning system, and thus make indoor positioning and outdoor positioning different fields of research. GNSS (global navigation satellite system) is used almost universally for outdoor positioning, where line of sight (LOS) views between the GNSS receiver and at least four satellites are available.
The most well known GNSS is the Navigation by Satellite Ranging and Timing (NAVSTAR) Global Positioning System (GPS), owned and operated by the US government and commonly known as GPS. The Russian GLONASS system is also operational. These GNSS techniques are almost universally used for outdoor navigation and positioning without any significant competitor in the market. Due to signal blockage, GNSS does not work well, or at all, in indoor environments. Moreover, indoor environments differ in many attributes from outdoor environments. Indoor environments are more complex. Various obstacles, such as walls, equipment, and human beings, influence the propagation of electromagnetic waves, which leads to multi-path effects. The indoor environment also changes more dynamically, making indoor positioning and navigation a challenging research field needing special attention.

So the question is: what exactly is an 'indoor positioning system' (IPS)? Dempsey [2] defines an IPS as a system that can continuously and in real time determine the position of something or someone in a physical space such as a hospital, a gymnasium, a school, etc. An IPS can provide different kinds of location information for location-based applications used by the user. Usually, absolute position information with respect to the map of a coverage area is offered by indoor positioning tracking systems and indoor navigation systems, because tracking and guidance services need the exact positions of the targets. Relative position information, which measures the motion of different parts of a target, is another kind of output offered by IPSs [3].

1.2  Motivation

Accurate, reliable and real-time indoor positioning and position-based protocols and services are required in future generation communication networks to significantly improve the performance of wireless networks through network planning, network adaptation, and load balancing. Position-based tracking systems can also be used in hospitals for patient tracking, and in warehouses for tracking valuable goods. Considering the importance of positioning information and its application in different fields, many positioning systems have been developed over the years. Infrared-based, ultrasound-based, radio frequency based, and vision-based positioning systems are among the most well-known positioning solutions with potential. However, each of them has its own advantages and disadvantages, and none dominates the indoor positioning market as GNSS does the outdoor positioning market. The industry and research communities are still looking for a robust, accurate, cheap and reliable indoor positioning solution.

In 2011, a novel indoor positioning solution, based on the angle of arrival (AOA) of light measured by a newly devised differential photosensor, was proposed in [4, 5] by Arafa, Jin, and Klukas. The positioning solution is based on a newly devised photosensor [6], which can determine the AOA of a light beam transmitted by a source and incident on the photosensor. Positioning information can be determined from the AOAs and the known frame of reference using simple trigonometry. The proposed system has several advantages over some of the existing positioning technologies. As reported in [4], above a certain threshold, the AOA values do not depend on the intensity of the incident light.
This makes the system independent of incident power and thus overcomes the problems associated with proximity methods. Moreover, the sensor itself is very inexpensive, and it is anticipated that it will be possible to use existing indoor lighting systems as the necessary light sources for the positioning system. This will decrease the extra infrastructure required, further lowering the cost of the system. However, an unobstructed LOS view between the optical source and photosensor is required, given that the system is based on visible light. Anything blocking the LOS view between the source and sensor will destroy the integrity of the positioning system. In addition, the optical angle of arrival (OAOA) positioning solution at each epoch of time is independent of the solution at the previous and following epochs, and the noise associated with each epoch is uncorrelated. This results in a trajectory which is not smooth. Now the question is: can we find a positioning solution that supports the OAOA-based positioning system, yielding a robust and more reliable positioning solution even in the absence of an LOS view?

An inertial navigation system (INS) is a dead reckoning technique that uses a navigation processor and inertial sensors such as accelerometers and gyroscopes to continuously calculate the position, orientation, and velocity of a moving object without the need for external references. In navigation, dead reckoning is the process of calculating one's current position by using a previously determined position, and advancing that position based upon known or estimated velocity over elapsed time. The most important parts of an inertial navigation system are the inertial measurement unit (IMU) and a navigation processor, which integrates the IMU outputs to produce a position, velocity and attitude solution. In basic terms, the velocity is updated by integrating the acceleration from the accelerometers, and the position is updated by integrating velocity.

An inertial navigation system suffers from three basic problems. Firstly, being a dead reckoning system, the navigation solution must be initialized with some initial value. Secondly, being an iterative integration process, error from the previous stages accumulates and becomes very large after a few stages. Thirdly, low grade inertial sensors are not accurate enough to work as a stand-alone navigation system because of their drift over time. However, the short-term performance of an INS-based positioning system is very good and provides a smooth trajectory, unlike an OAOA-based positioning system. The most important point to be noted is that the inertial navigation system does not need any external reference. Now the question is: can we use an inertial navigation system to back up an OAOA-based positioning solution to enhance the performance of the overall positioning system?

The above two questions motivate this research. INS and OAOA-based positioning systems are largely complementary, so by integrating them, the advantages of both technologies are combined to give a continuous, complete positioning solution with good long and short-term accuracy. In this research, different aspects of the integration of an OAOA-based positioning system with an inertial navigation system are investigated. Different integration algorithms and their effect on overall system performance are the main focus. In an integrated INS/OAOA indoor positioning system, the OAOA-based system should prevent the inertial solution from drifting, while the INS should smooth the OAOA positioning solution and bridge any LOS view outages.
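To make the dead-reckoning principle described above concrete, the short sketch below integrates navigation-frame acceleration into velocity and velocity into position, one IMU sample at a time. It is only an illustration of the idea, not the mechanization used in this thesis: attitude, gravity compensation and sensor errors are ignored, and all names and values are hypothetical.

```python
import numpy as np

def dead_reckon(accel, dt, r0, v0):
    """Minimal dead-reckoning sketch: integrate acceleration (assumed already
    expressed in the navigation frame and gravity-compensated) to velocity,
    then integrate velocity to position, one step per IMU sample."""
    r, v = np.array(r0, float), np.array(v0, float)
    trajectory = [r.copy()]
    for a in accel:                 # accel: iterable of 3-element vectors [m/s^2]
        v = v + np.asarray(a) * dt  # velocity update
        r = r + v * dt              # position update
        trajectory.append(r.copy())
    return np.array(trajectory)

# Example: 1 s of samples at 100 Hz with a constant 0.1 m/s^2 acceleration in x.
path = dead_reckon(accel=[[0.1, 0.0, 0.0]] * 100, dt=0.01,
                   r0=[0.0, 0.0, 0.0], v0=[0.0, 0.0, 0.0])
print(path[-1])   # final position after 1 s; with real, biased sensors the error grows without bound
```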
1.3  Problem Statement and Objective

The following statements summarize the problem at hand.

• The recently proposed OAOA-based positioning system [4, 5] has some major drawbacks. Firstly, because visible light is the communication link between the receiver and transmitter, the positioning system requires a continuous LOS view between the transmitter and receiver. This is not likely to always be maintained in indoor environments due to persons and objects blocking the direct path. Secondly, the maximum output rate of the novel photosensor is yet to be determined. To date, the sensor has only been used for determining static positions. In the dynamic case, the sensor's update rate may limit the smoothness of the trajectory. Thirdly, the positioning information provided by the sensor for a particular position does not depend on the previous position, and due to measurement noise and system imperfections, the OAOA-based positioning system is not likely to provide a smooth trajectory.

• Low grade inertial sensors cannot be used for stand-alone navigation because of their drift over time. Moreover, since inertial navigation is an iterative integration process, errors accumulate as they pass through the various stages. This deteriorates the final result after just a few stages. Therefore, an independent positioning system is necessary to update the position information at every stage, or at least after every few stages, so the errors are not allowed to accumulate. However, the advantage of inertial sensors is that they have a fairly high output rate, typically 100 Hz. Moreover, being a dead reckoning system, position information at a particular point in time depends on the position information of the previous point in time, which results in a smooth trajectory.

Given the above problem statements, the following objectives are defined for this particular research work.

• Integration of the OAOA-based positioning solution with the inertial navigation solution to provide more reliable and accurate position information.

• Using an extended Kalman filter to integrate the OAOA-based and INS-based positioning solutions.

• Developing different algorithms based on the extended Kalman filter, namely the loosely coupled algorithm, the tightly coupled algorithm, the loosely coupled algorithm with augmented observations, and the tightly coupled algorithm with augmented observations.

• Performance analysis and comparison of the different algorithms in different conditions to determine their positive and negative features.

1.4  Thesis Outline

Chapter 1 introduces the basics of indoor positioning and the motivation behind this research. The problem statements, objectives and contributions are clearly stated in this chapter.

Chapter 2 summarizes all of the relevant current research and the state-of-the-art techniques for indoor positioning. An overview of the existing positioning solutions and their performance, cost, and limitations is given.

Chapter 3 gives the necessary theoretical background for the OAOA/INS integration performed. This chapter starts with a detailed overview of the OAOA-based indoor positioning solution, which is followed by the basics of inertial navigation. The proposed algorithms for integration are then described.
Chapter 4 describes the experimental setup and is followed by experimental and simulated results as well as discussion of the results.

Chapter 5 presents the conclusions and makes recommendations for future work.

1.5  Contribution

The contribution of this work can be summarized as follows.

• A new indoor positioning solution is proposed which integrates an OAOA-based indoor positioning solution with an inertial navigation system to provide an accurate, reliable and inexpensive indoor positioning solution.

• Four different algorithms are developed to integrate the OAOA-based positioning with INS positioning, namely the loosely coupled algorithm, the tightly coupled algorithm, the loosely coupled algorithm with augmented observations, and the tightly coupled integration with augmented observations. These algorithms are described in Chapter 3.

• Although some of these algorithms have previously been used to integrate other types of positioning data and solutions (e.g., GPS with INS), their use to integrate OAOA positioning data with INS data is novel.

Chapter 2  Literature Review

This chapter gives an overview of various technology options for indoor positioning, including infrared (IR), ultrasound, radio-frequency identification (RFID), wireless local area network (WLAN), Bluetooth, sensor networks, ultra-wideband (UWB), magnetic signals, vision analysis and audible sound. Based on these technologies, different companies, researchers, and universities have developed different indoor positioning solutions [3]. Each solution has its own advantages and disadvantages. The designer of an indoor positioning system must compromise between performance and system complexity. In [7] it is reported that combining some positioning technologies can improve the quality of positioning services. Therefore, it is necessary to investigate the characteristics and limitations of existing state-of-the-art technologies in order to develop an integrated solution that improves performance.

2.1  Indoor Positioning Systems: An Overview

In this section the attributes of various indoor positioning solutions are described. Various evaluation criteria are discussed to compare the solutions for the services demanded by users.

2.1.1  What is an Indoor Positioning System?

An indoor positioning system is a system that can provide the position of something or someone continuously and in real time in a physical space such as a building, a hospital, or a gymnasium [2]. An indoor positioning system can provide different kinds of location information depending on the user requirements. Usually, indoor positioning tracking systems offer absolute position information with respect to a map of the coverage area [3]. Relative position estimation and proximity detection are other types of output which are also provided by some indoor positioning systems.

2.1.2  Location Technologies, Techniques and Algorithms

The need for an indoor positioning solution has led researchers to introduce different locationing technologies such as IR, ultra-sound, RFID, WLAN, Bluetooth, UWB, magnetic technology, etc. Equipped with these locationing technologies, an indoor positioning solution uses several techniques to locate objects and provide location information. The basic techniques of indoor positioning are known as trilateration, triangulation, fingerprinting and vision analysis [8, 9].
Theoretically, it is also possible to use an inertial navigation system for indoor positioning. However, the cost of high grade inertial sensors is too high for most civilian applications. An aviation grade IMU costs approximately $100,000 and still drifts approximately 1.5 km in the first hour of operation [10].

Trilateration is the process of determining location by the measurement of distances, and then using the geometry of circles or spheres. In the case of two dimensional trilateration, three or more reference points of known coordinates are selected and the distances from the point to be positioned to these reference points are determined using measurements such as received signal strength (RSS) or time of arrival (TOA) [11]. Knowing the distance between the point and each reference point, circles with radii equal to the distances and centered at the reference points can be drawn. The common intersection point of all circles is the position solution. The ability to resolve multipath with very large bandwidth signals (such as UWB) in the indoor environment makes TOA the most accurate positioning technique. However, it is complex to implement because it requires complex circuitry for precise clock synchronization. Although RSS measurements are generally easy to make, they may not be accurate due to changes in the propagation environment. Note that RSS and TOA require at least three reference points.

In contrast to trilateration, triangulation is the process of estimating the location of a point by measuring angles to it from reference points of known position. The measurements made are known as angles of arrival. In contrast to RSS and TOA, AOA requires only two reference points to determine position [3]. However, in triangulation the impact of an error in the AOA measurement on the position estimate increases as the distance between the point to be positioned and the reference point increases [12].

Fingerprint-based positioning refers to algorithms that first collect the features of a scene and then estimate location by matching the collected features with a-priori information. The problem with fingerprint-based positioning is the need for a very large database for storing the a-priori information. Fingerprinting for location estimation is also complex and costly if the number of users of the positioning system increases significantly [3].

Vision analysis for location determination uses a camera or camera array to collect images and then analyzes the images to determine the position of a point or object with respect to other objects of known position in the image. The advantage of using vision analysis is that there is no need to carry any tracking device. However, any change in the indoor environment, even a change in the light level, can affect the positioning performance.
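As an illustration of the trilateration technique described above, the following sketch estimates a 2-D position from three range measurements by linearizing the circle equations (subtracting the first from the others) and solving the resulting system in a least-squares sense. This is a generic textbook formulation, not an algorithm from this thesis; the anchor coordinates and ranges are hypothetical.

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Least-squares 2-D trilateration: subtracting the first circle equation
    from the others gives a linear system A p = b in the unknown position p."""
    anchors = np.asarray(anchors, float)   # shape (k, 2), k >= 3
    ranges = np.asarray(ranges, float)     # shape (k,)
    x0, y0, d0 = anchors[0, 0], anchors[0, 1], ranges[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Example: three reference points and noise-free ranges to the point (2, 3).
pts = [(0, 0), (10, 0), (0, 10)]
true = np.array([2.0, 3.0])
d = [np.linalg.norm(true - np.array(p)) for p in pts]
print(trilaterate_2d(pts, d))   # -> approximately [2. 3.]
```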
2.1.3  Network-Based vs. Non-Network-Based System

Any indoor positioning solution can be categorized depending on various criteria. A positioning solution can be categorized as a network-based approach or a non-network-based approach, depending on whether the IPS uses any existing wireless network infrastructure to determine the position information. A non-network-based approach uses its own infrastructure, designed according to the accuracy required. On the other hand, a network-based approach reduces cost by using an existing wireless network, which might come with a sacrificed accuracy level [3].

2.1.4  Criteria of Evaluating Indoor Positioning System

In this subsection some criteria given in [3] are discussed for evaluating an indoor positioning system.

Security and Privacy: Any indoor positioning system uses some kind of personal network (PN) for communicating the position information. Therefore, network security and privacy is an important issue for an indoor positioning system.

Cost: The cost of a positioning solution for an indoor environment depends on the cost of the infrastructure components, the cost of the positioning device for each user, and the cost of system installation and maintenance. Some indoor positioning solutions use existing infrastructure such as WLAN and are therefore very cost effective. The initial cost of the positioning device carried by an individual user and its maintenance cost and lifetime should also be considered.

Performance: For the performance evaluation of an indoor positioning solution, the two main performance parameters are 'accuracy' and 'precision', where accuracy relates to how close the estimated position is to the true position and may be quantified with the average distance error, and precision relates to the difference between multiple estimates of the same position and can be quantified with the standard deviation. The 'delay' of a measurement and the number of objects that an IPS can locate within a certain infrastructure are also important performance evaluation issues. Usually, a trade-off exists between the performance and the price of an indoor positioning solution.

User Preference: User preference is also an important issue when designing an indoor positioning solution. For the comfort of the users, the device should be wireless, small and light weight, have low power consumption, and be computationally powerful enough to offer accurate and real-time positioning information.

2.2  Existing Indoor Positioning Solutions

There are several indoor positioning systems. In this section some existing positioning solutions are introduced and their advantages and limitations are discussed.

2.2.1  Infrared (IR) Positioning Systems

The IR-based positioning system is a non-network-based positioning system which uses the proximity technique to determine the positioning solution. The IR-based indoor positioning system is very accurate (several millimeters) in position estimation. An IR-based positioning system needs line of sight communication between transmitter and receiver. Thus the coverage range per infrastructure is limited to within a room [3]. For estimation of absolute position information, interference from strong light sources should be avoided [13]. Optical filtering and noise-canceling signal processing algorithms are necessary to filter out the interference from fluorescent light and sunlight [14], which raises the cost of the system. The requirement for expensive system hardware, such as a receiver camera array connected via wires, makes the positioning system very expensive [3].

2.2.2  Ultra-sound Positioning Systems

Position estimation using ultra-sound is described in [15–20]. Ultrasound signals are used by bats to navigate, and this has inspired people to design similar navigation systems [3]. Although ultra-sound positioning systems are inexpensive, their accuracy (several centimeters) is worse than that of the IR-based positioning solutions (several millimeters).
Moreover, ultra-sound based solutions suffer from ultrasound signals reflected from different obstacles [3].

2.2.3  Radio Frequency (RF) Positioning Systems

Radio waves can travel through walls and human bodies, which can be an advantage for indoor positioning. RF-based positioning systems can reuse existing RF technology systems such as WLAN [3]. In [3] some basic RF-based positioning solutions are described.

RFID: Radio frequency identification (RFID) [21] is commonly used in complex indoor environments. The advantage of an RFID positioning system is that it uses light and small tags (worn by the people or objects to be tracked). An RFID system can uniquely identify equipment and persons tracked in the system. However, this positioning technology needs numerous infrastructure components to be installed and maintained in the working area of the system [3]. Moreover, typical RFID positioning solutions offer an error of 2 m to 3 m [22], which may not be suitable for some applications.

WLAN: WLAN-based indoor positioning technology is based on the fingerprinting technique. A WLAN-based IPS uses existing WLAN infrastructure, which makes it a cost effective solution [23–26]. The accuracy of location estimates based on the signal strength of WLAN signals is affected by various elements in indoor environments, such as the movement and orientation of the human body, the overlapping of access points (AP), and nearby mobile devices, walls, doors, etc. WLAN technology is widely used and integrated in various wireless devices such as laptops, mobile phones, etc. Thus, WLAN-based positioning systems can also reuse these wireless devices as tracked targets to locate persons. However, because of complex indoor environments [23–26], the performance of these positioning systems is not very good, with accuracies of several meters. Moreover, using the stored information and the fingerprinting technique for location estimation is complex and costly if the number of users of the positioning system increases significantly.

Bluetooth: Bluetooth, which is also known as the IEEE 802.15.1 standard, is a kind of wireless personal area network (WPAN). Various Bluetooth clusters can be formed for Bluetooth-based indoor positioning [27–30]. The main advantage of using Bluetooth for indoor positioning is the option of using devices that already have Bluetooth in them. Moreover, being cheap, Bluetooth-based indoor positioning offers a cost effective positioning solution for indoor environments. However, Bluetooth-based systems offer accuracies of only 2 m to 3 m and, more importantly, have a minimum delay of 20 s.

Sensor Networks: Sensor-based positioning systems consist of a large number of sensors fixed in predefined locations [31]. Sensors are devices exposed to a physical or environmental condition, including sound, pressure, temperature, light, etc., that generate proportional outputs. A person or device can be located from the measurements taken by the sensors. Sensor-network-based positioning has great potential for indoor positioning due to emerging sensor network technology. However, cheap and small sensors have limited processing capability and battery power compared to other mobile devices, which results in poorer accuracy [3].

Ultra-wideband: A UWB-based positioning system uses both TOA and AOA techniques for position determination.
The most exciting feature of an ultra-wideband signal is its extremely large bandwidth, or equivalently its extremely short pulse width. This makes it among the best of all RF positioning solutions due to its ability to resolve multipath. The duration of an ultra-wideband pulse is less than 1 ns, which makes it possible to separate reflected signals from the original signal, and thus offer high accuracy [3]. Compared with other RF-based positioning systems, UWB systems offer higher accuracies of about 15 cm in 3-D [3] and have coverage areas up to 400 m². Therefore, UWB technology offers several advantages over other indoor positioning technologies, such as no line-of-sight (LOS) requirement, little multi-path distortion, less interference, high penetration ability, etc. However, UWB technology needs highly precise clock synchronization, which makes receiver design complex and costly. A high precision UWB indoor positioning research package reported in [3] costs about $16,875.

2.2.4  Magnetic Positioning Systems

A magnetic positioning system is a very high accuracy positioning system that does not suffer from the line-of-sight problem. This is an old and classic way of tracking and position measurement [32]. The magnetic sensors are small in size, robust and cheap. Moreover, magnetic positioning offers high accuracy and enables tracking of multiple targets at the same time. However, the coverage was reported to be only 3 m in [32], which is not scalable for large indoor public application services.

2.2.5  Vision-based Positioning Systems

A vision-based system for the positioning and identification of persons or devices in a complex indoor environment was introduced in [33, 34]. The positive side is that, in a vision-based positioning system, the tracked person or object does not need to carry any tracking device. Moreover, a low price camera can cover a large area. However, this system is not good from the point of view of people's privacy. Moreover, in a dynamically changing environment, this system is not very reliable, since the position estimation is based on the saved vision information in a database. Interference sources such as weather changes and light conditions can also degrade the performance of a vision-based positioning system. In addition, multiple persons moving in the same environment is a challenge for this kind of positioning system, and a substantial amount of computational capability is required [3].

2.2.6  Audible Sound Positioning Systems

Currently, every wireless mobile device has the ability to emit audible sounds. The possibility of using audible sounds for indoor positioning was introduced in [35]. A 3-D indoor positioning solution named 'Beep' was designed using audible sound technology. It uses the trilateration technique based on time of arrival (TOA) measured by sensors in the 'Beep' system. An accuracy of around 0.4 m was found in 90% of all cases in an experimental environment of a 20 m × 9 m room. The main advantage of using audible sound for indoor positioning is the opportunity to use existing devices such as a cell phone or PDA to derive position information. However, such systems suffer interference from noise in many public indoor situations. Moreover, transmitting audible sound for positioning is a continuous disturbance to the people in the indoor environment [3].
2.2.7  Differential Photosensor-based Positioning System

Recently, a differential photosensor was introduced for simultaneous optical retroreflection, detection, and control in bidirectional sensor links [6]. This is an architecture with three mutually orthogonal photodiodes in a corner-cube arrangement (used to retroreflect incident light and sample the incident optical signal). Recent work [4, 5] proposes an indoor positioning technique based on the AOA of incident light measured with the differential photosensors. The corner-cube arrangement generates photocurrents depending on the AOA of the incident light. From the AOA measurements for two or more light sources with known positions, the position of the corner-cube sensor can be determined. An accuracy in the range of 2-3 cm in a 2-D plane was reported in [4].

The proposed system has several advantages over some of the existing positioning technologies. As reported in [4], above a certain threshold, the AOA value does not depend on the intensity of the incident light. This makes the system impervious to light level fluctuations and thus overcomes the problems associated with proximity methods. Moreover, the sensor itself is very inexpensive, and it is anticipated that it may be possible to use indoor lighting as the necessary light sources for the positioning system. However, it should be noted that if room lights are used, the lighting system must be based on LED lights, since other types of room lighting (e.g., fluorescent) cannot be modulated as required by the system. Use of existing room lighting will decrease the extra infrastructure required and further lower the cost of the system.

Table 2.1 summarizes the performance, cost and coverage of the above mentioned indoor positioning technologies.

Table 2.1: Summary and Comparison of Existing Indoor Positioning Solutions

  Technology           Accuracy                     Coverage        Cost
  Infrared             High (several millimeters)   Medium          High
  Ultra-sound          High (several centimeters)   Medium          Low/Medium
  RFID (RF)            Medium (1-2 meters)          Medium          High
  WLAN (RF)            Low (several meters)         Medium          Low
  Sensor network (RF)  Medium (1.5-3.8 meters)      High            Low
  UWB (RF)             High (centimeter level)      Medium          High
  Bluetooth (RF)       Low (2-3 meters)             Medium          High
  Magnetic             High (1 centimeter)          Low (3 meters)  High
  Vision               Low                          Medium          Medium
  Audible sound        Medium (0.4 meter)           Low             Low

2.3  Proposed Indoor Positioning Solution

2.3.1  Inertial Navigation System

An inertial navigation system (INS) is a dead-reckoning system that consists of an inertial measurement unit and a navigation processor. Inertial navigation system technology is used as an alternative to GPS, because GPS does not work well in all environments [36]. MEMS (microelectromechanical systems) inertial technology is seen as both a possible complement and a potential alternative to GPS [36].

An inertial measurement unit (IMU) usually combines three accelerometers and three gyroscopes to produce a 3-D measurement of specific forces and angular rates. An accelerometer measures specific force and a gyroscope measures angular rate, both without an external reference [10]. An IMU is the sensor package for an inertial navigation system, which can produce an independent 3-D navigation solution. Current inertial sensor development is mostly focused on MEMS technology.
MEMS sensors are small, light, and exhibit much greater shock tolerance than conventional mechanical designs, though with relatively poor performance. Although there is no universally accepted definition of a high-, medium-, and low-grade IMU, IMUs can be broadly grouped into five performance categories: marine, aviation, intermediate, tactical, and automotive [10]. According to [10], marine and aviation grade IMU sensors are not suitable for civilian use because of their high price, which is approximately $1 million and $100,000 respectively. An intermediate grade IMU, which is used in small aircraft and helicopters, might cost up to $50,000. A tactical grade IMU can be used to provide a stand-alone positioning solution for a few minutes [10]. A long term positioning solution can be obtained by integrating it with a positioning system such as GPS. These systems typically cost between $5,000 and $20,000 [10], and are typically used in guided weapons and unmanned air vehicles. The lowest grade IMUs, known as automotive grade IMUs, are actually very cheap, and can be bought as single accelerometer and gyroscope units. Automotive grade three-axis MEMS accelerometers, when produced in large quantities, cost only $2-10 per unit depending on their accuracy. According to [36], a single gyroscope can currently cost as little as $5. They are expected to become less expensive according to current market predictions [36]. However, an INS based on automotive grade IMUs cannot be used as a stand-alone navigation system due to the large drift of the accelerometers and gyroscopes. Therefore, an INS based on automotive grade MEMS sensors must be integrated with some other positioning system to yield a complete navigation solution [10].

2.3.2  GPS and INS Integration Strategy

Inertial navigation systems have a number of advantages. They operate continuously, provide high-bandwidth output at 50 Hz or even 100 Hz, and exhibit low short-term noise [10]. However, the accuracy of an inertial navigation solution degrades with time as the errors are integrated through the navigation equations and propagated through the stages. On the other hand, GNSS or GPS can provide high long-term position accuracy with limited error. However, compared to an INS, the update rate is low. Moreover, GNSS signals are also subject to obstruction and interference, such that GNSS cannot be relied upon to provide a continuous navigation solution [10]. For outdoor applications where real-time kinematic (RTK) positioning capability in degraded environments requires centimeter level accuracy, the INS is integrated with GNSS. Various algorithms are used to integrate INS and GNSS [37].

The INS/GNSS integration architecture depends on three factors: how corrections are applied to the inertial navigation solution, what type of GNSS measurements are used, and how the GNSS user equipment is aided by the INS and integration algorithm [10]. In the literature, terms such as loosely coupled, tightly coupled, ultra-tightly coupled, closely coupled, cascaded, and deep are used to define integration architectures [38, 39]. In the loosely coupled architecture, the INS and GNSS receivers operate as independent navigation systems and the position information is blended using an extended Kalman filter (EKF). When low quality inertial sensors are used, a feedback path is necessary to prevent the sensor errors from becoming unbounded.
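The loosely coupled idea can be illustrated with a single Kalman measurement update in which an external position fix corrects the INS-predicted position. This is only a generic sketch: the error-state EKF developed later in this thesis has a much larger state vector (position, velocity, attitude and sensor biases), and all numbers below are hypothetical.

```python
import numpy as np

def loosely_coupled_update(x_ins, P, z_fix, R):
    """One measurement update of a linear Kalman filter that blends an
    INS-predicted position x_ins (covariance P) with an external position
    fix z_fix (covariance R). H is the identity because the fix observes
    the position states directly."""
    n = len(x_ins)
    S = P + R                          # innovation covariance (H P H' + R with H = I)
    K = P @ np.linalg.inv(S)           # Kalman gain
    x = x_ins + K @ (z_fix - x_ins)    # corrected position
    P_new = (np.eye(n) - K) @ P        # updated covariance
    return x, P_new

# Example: a drifting INS position corrected by an unbiased external fix.
x, P = loosely_coupled_update(x_ins=np.array([1.05, 2.10, 0.98]),
                              P=np.diag([0.04, 0.04, 0.04]),
                              z_fix=np.array([1.00, 2.00, 1.00]),
                              R=np.diag([0.01, 0.01, 0.01]))
print(x)   # pulled most of the way towards the fix, since the fix is less noisy here
```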
In general, the classical tightly coupled architecture provides a more accurate solution than the loosely coupled architecture. In this architecture, the INS and GNSS measurements are combined to generate a single blended navigation solution [40].

2.3.3  Proposed Solution

For the integration of INS with OAOA, loosely coupled and tightly coupled integration strategies similar to those used for GPS/INS integration will be used. From the above discussion it becomes apparent that, although there are a number of indoor positioning solutions, none of them has proven to be a complete solution in terms of accuracy, coverage, cost and robustness. Every system has its own advantages and disadvantages. The recently proposed indoor positioning solution based on OAOA is a potential indoor positioning solution that needs more detailed investigation. The drawbacks associated with the OAOA-based positioning solution, together with the well-known practice of GPS/INS integration for accurate outdoor positioning, are the motivation behind integrating the OAOA-based positioning solution with an inertial navigation system for a more accurate and robust indoor positioning solution. The loosely coupled and tightly coupled architectures used for GNSS/INS integration will be investigated, with the necessary modifications, for INS/OAOA integration.

Chapter 3  INS and OAOA Integration

In this chapter the basic theoretical background for the OAOA-based indoor positioning system is investigated. The basics of the inertial navigation system are then introduced. The algorithms for the loosely coupled and tightly coupled integration strategies of the INS and OAOA-based indoor positioning, and a modification of the algorithms using an error compensation block, are proposed at the end of the chapter.

3.1  OAOA for Indoor Positioning

The recently proposed indoor positioning solution based on angles of arrival of light uses a photosensor consisting of three photodiodes arranged in a corner-cube. This photosensor can estimate the AOAs of light transmitted by an optical source. The position of the photosensor can be determined from the AOAs and knowledge of the positions of the optical sources in a frame of reference. The proposed system has several advantages over some of the existing positioning technologies. As reported in [4], above a certain threshold, the AOA values do not depend on the intensity of the incident light. This makes the system independent of incident power and thus overcomes the problems associated with proximity methods.

Figure 3.1: Photograph of the corner-cube photosensor.

3.1.1  The Corner-Cube Photosensor

The structure used for the proposed indoor positioning solution is a corner-cube, as shown in Figure 3.1. Figure 3.2 shows a sketch of the sensor. It is comprised of three mutually orthogonal photodiodes (PD), with PD1 lying in the yz-plane, PD2 lying in the xz-plane, and PD3 lying in the xy-plane. The device was first introduced by Jin and Holzman in [6]. The basic parts of the structure are three reflective surfaces that are mutually orthogonal and oriented in an interior corner-cube arrangement, as shown in Figure 3.2. The structure can detect light arriving from an angle span between 0° and 90° in the azimuthal and polar directions. The two angles, the azimuthal angle φ and polar angle θ, are defined according to a spherical coordinate system, as shown in Figure 3.2.
When a beam of light falls on the sensor, photocurrents proportional to the light intensity and dependent on the angle of arrival of the light are generated in the three PDs. The values of φ and θ can be determined from the values of the currents generated in the three PDs.

Figure 3.2: Sketch of the corner-cube photosensor.

Photocurrents i1, i2 and i3 are generated in PD1, PD2 and PD3, respectively, when beams of light from optical source A or optical source B strike the sensor (see Figure 3.3). The values of these photocurrents are proportional to the power of the incident light and also depend on the AOA of the incident light. However, to determine the AOA only normalized photocurrents are required. The relationships between these normalized photocurrents I1, I2 and I3 and the AOA angles φ and θ are given by the following equations [4]:

    I1 = sin(θ) cos(φ),                            (3.1)
    I2 = sin(θ) sin(φ),                            (3.2)
    I3 = cos(θ).                                   (3.3)

Solving the above equations for φ and θ we get

    φ = tan^(-1)(I2 / I1),                         (3.4)
    θ = tan^(-1)(sqrt(I1^2 + I2^2) / I3).          (3.5)

Equations 3.4 and 3.5 may be used to calculate φ and θ from the measured values of the photodiode currents I1, I2 and I3.

3.1.2  Determining Position

It is clear that, to determine the position of the sensor, at least two optical sources are necessary. To acquire and process photocurrents contributed by different optical sources, a multi-frequency LED configuration is used [4]. Figure 3.3 shows a hypothetical setup for determining the position of the sensor with respect to a known reference frame.

Figure 3.3: Hypothetical system setup for an indoor positioning solution using the OAOA sensor.

The cube in Figure 3.3 represents our hypothetical reference frame, with origin at (0, 0, 0). Two optical sources are placed at two known positions in the reference frame, (x1, y1, z1) and (x2, y2, z2). The OAOA sensor is placed at an arbitrary position (x, y, z), which is to be determined. As shown in Figure 3.3, the two optical sources at two different coordinates are driven by two function generators operating at different frequencies. The modulation scheme used in the experiments is on-off keying, which is done by using a function generator to drive each LED light source with a square wave of a different frequency. Optical source A is modulated by frequency fA and optical source B is modulated by frequency fB. This creates normalized photocurrents I1, I2 and I3, which have components at frequency fA and frequency fB. A simple bandpass filter can be used to extract the different frequency components of the generated currents, and these can be used to determine φA, φB, θA and θB, where φA and θA are the azimuthal angle and polar angle for optical beam A, and φB and θB are the azimuthal angle and polar angle for optical beam B.

The AOA information of both optical sources can be used to determine the position of the sensor with respect to the reference frame by triangulation. More than two optical sources can be used to improve positional accuracy by providing redundant information.
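The following sketch shows how Equations (3.4) and (3.5) recover the angles of arrival from the normalized photocurrents, and how the rays from two beacons of known position could then be intersected in a least-squares sense to estimate the sensor position. The ray-intersection step is a generic formulation given here for illustration only (the thesis's own triangulation procedure is not reproduced), and the beacon and sensor coordinates are hypothetical.

```python
import numpy as np

def aoa_from_currents(I1, I2, I3):
    """Angles of arrival from the normalized photocurrents, Eqs. (3.4)-(3.5)."""
    phi = np.arctan2(I2, I1)                       # azimuthal angle
    theta = np.arctan2(np.hypot(I1, I2), I3)       # polar angle
    return phi, theta

def position_from_two_beacons(beacons, angles):
    """Least-squares intersection of the rays from the sensor towards the
    beacons: for each beacon s with unit direction u, u x (s - p) = 0."""
    A, b = [], []
    for s, (phi, theta) in zip(beacons, angles):
        u = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])              # direction from sensor to beacon
        U = np.array([[0, -u[2], u[1]],
                      [u[2], 0, -u[0]],
                      [-u[1], u[0], 0]])           # skew symmetric matrix of u
        A.append(U)
        b.append(U @ np.asarray(s, float))
    p, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return p

# Example: a sensor at (1, 2, 0) and two beacons at illustrative positions.
# Normalized photocurrents are simulated from the true geometry via
# Eqs. (3.1)-(3.3), converted back to angles, and the position is recovered.
p_true = np.array([1.0, 2.0, 0.0])
beacons = [np.array([0.0, 0.0, 3.0]), np.array([4.0, 0.0, 3.0])]
angles = []
for s in beacons:
    u = (s - p_true) / np.linalg.norm(s - p_true)  # u = [sinθ cosφ, sinθ sinφ, cosθ]
    angles.append(aoa_from_currents(u[0], u[1], u[2]))
print(position_from_two_beacons(beacons, angles))  # -> approximately [1. 2. 0.]
```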
3.2  Inertial Navigation System  An inertial navigation system is a dead-reckoning navigation system, being comprised of an inertial measurement unit and a navigation processor. An inertial measurement unit, which is comprised of a set of accelerometers, gyroscopes, and magnetometers is the main part of an inertial navigation system. This section introduces the necessary background for understanding inertial navigation systems by introducing inertial sensors, coordinate frames, rotation matrices, the initial alignment procedure, and the navigation equations.  3.2.1  Mathematical Notations in Inertial Navigation  Although the principle of inertial navigation is simple, the necessity of using multiple coordinate frames and transformation between them, causes inertial navigation computation to have particular notations and conventions. In an effort to clarify the following discussion, the majority of the notation and conventions used in this section can be summarized as follows. To indicate the coordinate frame in which the components of a vector are given, a superscript is attached to the vector and to the components of the vector. As an example, the position vector r described in the navigation  32  3.2. Inertial Navigation System frame (n−frame) can be described as   rxn       n rn =  ry  rzn  (3.6)  where rxn , ryn , and rzn represent the components of the position vector along the x−,y−, and z−axes of the navigation frame. A rotation rate vector ω abc represents the rotation rate of frame ‘c’, relative to frame ‘b’, expressed in frame ‘a’. Transformation of any vector from one computational frame to another is carried out through the rotation matrix between the two frames. A rotation matrix Rnb rotates any vector in the b−frame to a vector in the n−frame. As an example, transformation of a position vector represented in the b−frame to the n−frame can be done by the equation rn = Rnb rb .  (3.7)  The inverse of the transformation matrix describes a transformation in the opposite direction, which means Rbn is the inverse of Rnb and can be used to transform a vector in the n−frame to a vector in the b−frame. Coordinate transformation of vectors applies to angular velocity vectors as well. Transformation of the vector ω bie from b−frame to n−frame can be carried out as ω nie = Rnb ω bie .  (3.8)  Rotation can not only be expressed by an angular velocity vector, but 33  3.2. Inertial Navigation System also by a skew symmetric matrix containing the same vector components. The skew symmetric matrix corresponding to the angular velocity vector ω bib is defined by Ωbib as    0  Ωbib =   ωz −ωy   ωx  where,  ω bib  −ωz 0 ωx    ωy     −ωx   0  (3.9)     =  ωy .  ωz  The skew symmetric matrix is a very useful matrix in navigation computation. If we have two vectors a and b, their cross product a×b can be expressed in terms of their skew symmetric matrices (A or B) as  a×b = Ab = −Ba.  (3.10)  The rotation between two coordinate frames can be expressed as the sum of two rotations as in ω bin = ω bie + ω ben .  (3.11)  The point to be noted here is that the inner indices will be canceled out and the superscripts have to be the same. This means that only vectors in the same reference frame can be added or subtracted.  34  3.2. Inertial Navigation System  3.2.2  Coordinate Frames  Earth is considered an inertial frame of reference in the case of simple mechanics problems, where the rotation of the Earth does not have a significant impact. 
However, in the case of navigation, the rotation of the Earth has a significant impact on the navigation computation since inertial sensors measure their motion with respect to a real inertial frame. However, the user would like to know their position with respect to the Earth. Navigation computation, thus deals with multiple coordinate frames which need to be defined clearly. All coordinate frames considered in this work form orthogonal right-handed basis sets. The remainder of this section defines the following coordinate frames used in navigation problems: Earth-centered inertial (ECI) frame, Earthcentered Earth-fixed (ECEF) frame, local navigation frame, and body frame [10, 41]. All coordinate frames, with the exception of the INS body frame described below, are depicted in Figure 3.4. Earth-Centered Inertial Frame • Symbol: denoted by the symbol ‘i’. • Origin: at the mass center of the Earth. • X i -axis: pointing towards the mean equinoctial colure in the equatorial plane. • Z i -axis: parallel to the Earth’s instantaneous spin axis.  35  3.2. Inertial Navigation System  Z e, Z i Earth rotation axis  Yn  Zn  Nor  Up  th  East  Xn  Yi  ϕ  Greenwich meridian  Ye λ 360 - GMST Equator  Xe  Equinox  Xi Figure 3.4: Coordinate frames. • Y i -axis: orthogonal to the X and Z axes to complete a right-handed frame. Earth-Centered Earth-Fixed Frame • Symbol: denoted by the symbol ‘e’. • Origin: at the mass center of the Earth.  36  3.2. Inertial Navigation System • X e -axis: pointing toward the Greenwich meridian, in the equatorial plane. • Y e -axis: 90◦ east of the Greenwich meridian, in the equatorial plane. • Z e -axis: along the Earth’s polar axis (axis of rotation of the reference ellipsoid). • Coordinates in the ECEF frame can be transformed to the ECI frame by a negative rotation about the Z e axis of the frame by the amount of the Greenwich Mean Sidereal Time (GMST) [42]. Local Navigation Frame • Symbol: denoted by the symbol ‘n’. • Origin: origin of the navigation system. • X n -axis: ellipsoid East (E-axis). • Y n -axis: ellipsoid North (N-axis). • Z n -axis: upward along the ellipsoidal normal (UP-axis). The INS Body Frame • Symbol: denoted by the symbol ‘b’. • Origin: coincident with that of the local navigation frame. • X b -axis: right direction. • Y b -axis: forward direction. 37  3.2. Inertial Navigation System • Z b -axis: upward direction.  3.2.3  Rotation Matrix Computation  The specific force measurement made by triaxial accelerometers in an IMU unit are expressed in the body frame. This raw measurement contains the components of the ‘gravity offset’, which needs to be subtracted from the raw measurement to find the actual velocity increments with respect to the navigation frame. However, the body frame is not aligned with the navigation frame. Therefore, to compensate for the ‘gravity offset’ the orientation of the body frame with respect to the navigation frame is necessary. The most convenient way to represent the attitude of the body frame with respect to the navigation frame is through a set of three Euler angles commonly known as roll (r), pitch (p), and yaw (y), which represent rotation along the x−, y− and z−axes, in the body frame, respectively [10]. Roll, pitch and yaw define the rotation of the body frame with respect to the navigation frame. 
Therefore, from the roll, pitch and yaw, the rotation matrix, Rnb , for converting a vector in the body frame to a vector in the navigation frame, can be calculated by Rnb = RZ (y)RY (p)RX (r)  (3.12)  where RX (r), RY (p), and RZ (y) are rotation matrices along the x−, y−,and  38  3.2. Inertial Navigation System z−axes, respectively, and can be written as [43]   1  0    0    (3.13)    (3.14)    (3.15)     RX (r) =  0 cos(r) sin(r) , 0 −sin(r) cos(r)   cos(p) 0 −sin(p)    RY (p) =      0  1  0  sin(p) 0 cos(y)  cos(p)   ,   sin(y) 0     RZ (y) =  −sin(y) cos(y) 0. 0 0 1  It should be noted that the sequence of the multiplication is very important. Multiplication of three rotation matrices resulting from a single set of roll, pitch and yaw in different sequences results in totally different rotation matrices. In this research, the rotation is applied along the z−, y− and x−axes sequentially. As discussed in the previous paragraph, knowing the roll, pitch and yaw, allows us to calculate the rotation matrix Rnb . In the opposite way, roll, pitch and yaw can be calculated from a known rotation matrix, Rnb as r = tan−1  R32 R12 2 + R22 2  R31 R33 R21 y = tan−1 R11 p = tan−1  (3.16) (3.17) (3.18)  39  3.2. Inertial Navigation System where    R11 R12 R13        Rnb =  R21 R22 R23 . R31 R32 R33  (3.19)  Initial Alignment Computation One of the underlying assumptions of the INS mechanization equations is that the initial condition of the system is known. While position and velocity are usually easily input by the user, the initial orientation of the system is not so readily available. From the discussion above, it is clear that the rotation matrix Rnb can be calculated if the Euler angles r, p, and y are known, and vice versa. However, at the beginning of inertial navigation, neither is known and for this reason the INS must usually execute an initial alignment procedure to determine the initial value of the Euler angles. Where the IMU is stationary, self-alignment can be used to initialize roll and pitch with the help of accelerometer data, whereas heading or yaw is often initialized using a magnetic compass or magnetometer. The principle behind roll and pitch determination in the stationary condition is that when the INS is stationary (or traveling at a constant velocity), the only specific force sensed by the accelerometer is the reaction to gravity. In such a condition, the triaxial accelerometer measurements represent the components of the vertically upward antigravity force. Therefore, from the normalized accelerometer readings it is possible to determine the roll and  40  3.2. Inertial Navigation System pitch according to aby  r = tan−1  2  abx + abz  (3.20)  2  abx  p = −tan−1  2  aby + abz  (3.21)  2  where, abx , aby , and abz are the raw measurements from the triaxial accelerometer along the x−, y−, and z−axes, respectively. For initial alignment, the IMU is kept static for around 30 seconds, and in that case, time averaged values of abx , aby , and abz are used instead of instantaneous values. Table 3.1: Quadrant information for roll from acceleration signs. Roll quadrant Sign of aby Sign of abz 1  +  +  2  +  -  3  -  -  4  -  +  Table 3.2: Quadrant information for pitch from acceleration signs. Pitch quadrant Sign of abx Sign of abz 1  -  +  2  -  -  3  +  -  4  +  +  Equation 3.20 and Equation 3.21 do not uniquely determine the quadrant in which the roll and pitch fall. 
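Tables 3.1 and 3.2 above supply the missing quadrant from the signs of the accelerometer readings. As an illustrative sketch (not the thesis implementation), the same computation and sign logic can be written as follows for the ENU convention used in this work; the function name and the reflection shortcut are assumptions, although the shortcut reproduces the quadrants listed in the tables.

```python
import numpy as np

def initial_roll_pitch(ax, ay, az):
    """Roll and pitch (radians) from time-averaged static accelerometer
    readings, following Equations 3.20 and 3.21 under the ENU convention.
    Reflecting the principal value when the z-axis reading is negative is
    one way to reproduce the sign logic of Tables 3.1 and 3.2."""
    roll = np.arctan(ay / np.hypot(ax, az))
    pitch = -np.arctan(ax / np.hypot(ay, az))
    if az < 0.0:            # sensor inverted: quadrants 2 and 3
        roll = np.pi - roll
        pitch = np.pi - pitch
    return roll, pitch

# A level, stationary IMU senses only the antigravity reaction on the +z axis.
print(np.degrees(initial_roll_pitch(0.0, 0.0, 9.81)))   # approximately [0. 0.]
```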
In [43], lookup tables were constructed for North-East-Down (NED) axes conventions for the navigation frame. Table 41  3.2. Inertial Navigation System 3.1 and Table 3.2 are the lookup tables required to resolve the quadrant for roll and pitch using the sign of the accelerometer measurement for the EastNorth-Up (ENU) convention (which is used in this work) of the navigation frame. A magnetometer is not an inertial sensor. However, it is an important part of an inertial navigation system using low grade inertial sensors. Although it possible to use gyroscope data to determine the heading of the IMU by gyrocompassing, the performance of the gyrocompassing method depends on the quality of the gyroscope and only marine and aviation grade gyroscope are accurate enough for gyrocompassing [10]. For this reason, magnetometers can be used for determining the heading of the IMU for a low cost INS. The triaxial magnetometer measures the magnetic field vector of the Earth’s magnetic field in the body frame, represented by the vector mb = mbx mby mbz  T  . From this vector it is possible to calculate the yaw  of the body frame with respect to the navigation frame, if the roll and pitch is already known. If the roll, r, and the pitch, p, are already calculated from Equation 3.20 and Equation 3.21, the magnetic components experienced by the body frame along the x− and y−axes can be computed as [43] Xh = mbx cos(p) + mby sin(r)sin(p) − mbz cos(r)sin(p)  (3.22)  Yh = mby cos(r) + mbz sin(r).  (3.23)  From the values of Xh and Yh , the yaw angle can be calculated as y = tan−1 (  Yh ). Xh  (3.24)  42  3.2. Inertial Navigation System Being a inverse trigonometric function, like roll and pitch, it is also impossible to uniquely determine the quadrant of the yaw angle just from Equation 3.24. Similar to the roll and pitch angles, the quadrant information can be resolved from the signs of Xh and Yh [43]. The necessary lookup table is given in Table 3.3. Table 3.3: Quadrant information for yaw from magnetic field components signs. For yaw Xh < 0  Yh π − tan−1 X h  Xh > 0, Yh < 0  Yh −tan−1 X h  Xh > 0, Yh > 0  Yh 2π − tan−1 X h  Xh = 0, Yh < 0  π 2 3π 2  Xh = 0, Yh > 0  3.2.4  Modeling Motion in the Local Navigation Frame  Although a navigation solution can be computed in any coordinate frame of interest, in most cases modeling motion in the local navigation frame is preferable. The local navigation frame formulation has the advantage that its axes are aligned to local East, North and up directions. In the local navigation implementation of the inertial navigation equations, the ECEF frame is used as the reference frame. This form of navigation equation has the advantage of providing a navigation solution in a form readily suited for the user [10]. In general, the position of a moving body described in n−frame is expressed in terms of curvilinear coordinates (latitude, longitude and altitude).  43  3.2. Inertial Navigation System However, in the case of indoor positioning, cartesian coordinates are more preferable as the final positioning output. Curvilinear position information, which is given by     ϕ     r = λ n  (3.25)  h  where ϕ is the latitude angle, λ is the longitude angle and h is the altitude, is still needed for intermediate steps of the inertial navigation computation. Position in the cartesian format can be represented by three components along the East direction (re ), North direction (rn ) and vertical direction (ru ) as     re     n rncart =  r . 
ru  (3.26)  Velocity is expressed by three components along the East direction (v e ), North direction (v n ) and vertical direction (v u )    ve     n vn =  v . vu  (3.27)  The relationship between the velocity components v e , v n , v u and the curvilinear positions are given by  ϕ˙ = λ˙ =  vn M +h ve (N + h)cosϕ  h˙ = v u  (3.28) (3.29) (3.30)  44  3.2. Inertial Navigation System where M =  (ab)2 [a2 cos2 ϕ+b2 sin2 ϕ]3/2  ,N =  a2 , [a2 cos2 ϕ+b2 sin2 ϕ]1/2  a = 6378137.0 m is  the semi-major axis of the Earth, and b = 6356752.3 m is the semi-minor axis of the Earth. Similar relationships between the cartesian position components re , rn , ru and the velocity components are straight forward and can be given as r˙ e = v e  (3.31)  r˙ n = v n  (3.32)  r˙ u = v u .  (3.33)  In the above equations r˙ e , r˙ n , and r˙ u represents the derivatives of re , rn , and ru , respectively. Position Mechanization Equation From the previous discussion the rate of change of the curvilinear position components and the cartesian position components are related to the velocity components by      0 ϕ˙     n 1 ˙ r˙ =  λ =   (N +h)cosϕ h˙ 0  1 M +h  0 0  r˙ ncart = vn .    0 v e       −1 n n 0 v  = D v  1  vu  (3.34)  (3.35)  45  3.2. Inertial Navigation System Velocity Mechanization Equation Triaxial accelerometers measure the acceleration of a moving body along three mutually orthogonal directions in the b−frame. The accelerometer measurements are specific force measurements and resolved in the body frame as    ax        ab =  ay . az  (3.36)  This measurement is transformed to the local navigation frame using the rotation matrix Rnb according to    ae      ax         n b n  n an =  a  = Rb a = Rb ay . au az  (3.37)  However, the accelerometer components expressed in the local navigation frame cannot directly provide local navigation frame velocity components of the moving body. To do so requires three factors to be taken into account. The first is the Earth rotation rate (ω e = 15 degrees/hour) with respect to the inertial frame, which is resolved in the navigation frame, and can be represented as     0     e  ω nie =  ω cosϕ.  ω e sinϕ  (3.38)  The second factor is the change of orientation of the navigation frame with respect to the Earth. This change of orientation is due to the definition of the local North, local East and local vertical directions. The North  46  3.2. Inertial Navigation System direction is always tangential to the meridian, while the vertical direction is normal to the Earth’s surface at each instant. The angular velocity of the navigation frame with respect to the Earth, as observed from the navigation frame, can be represented as    −ϕ˙      ˙ ω nen =  λcosϕ = ˙ λsinϕ      −v n M +h  e   v .  N +h  v e tanϕ N +h  (3.39)  The third factor is the Earth’s gravity field    0      gn =   0  −g  (3.40)  where g is obtained from the well known gravity model given as [42] h g = 9.7802703 + .0517993sin2 ϕ − 19.69405 . a  (3.41)  If we take into consideration all these factors, the time rate of change of the velocity components of a moving body in the local navigation frame can be expressed as v˙ n = Rnb ab + (2Ωnie + Ωnen )vn + gn  (3.42)  where Ωnie and Ωnen are the skew-symmetric matrices corresponding to ω nie  47  3.2. 
Inertial Navigation System and ω nen , respectively, and are expressed as    e Ωnie =   ω sinϕ −ω e cosϕ     0  e  v tanϕ Ωnen =   N +h −v e N +h  0  0  −v e tanϕ N +h  0    −ω e sinϕ ω e cosϕ  0  0 −v n M +h  0   ve N +h  v n . M +h       (3.43)  (3.44)  0  Attitude Mechanization Equation The attitude angles of a moving body are determined by solving the time derivative equation of the transformation matrix. Since the mechanization is implemented in the local navigation frame, the following time-derivative transformation matrix should be considered [44] R˙ nb = Rnb Ωbnb .  (3.45)  The angular velocity skew-symmetric matrix Ωbnb can be expressed as Ωbnb = Ωbni + Ωbib  (3.46)  = −Ωbin + Ωbib  (3.47)  = Ωbib − Ωbin .  (3.48)  Thus from Equation 3.45 and Equation 3.48, it is found that the rotation matrix Rnb can be obtained by solving the following differential equation R˙ nb = Rnb (Ωbib − Ωbin )  (3.49) 48  3.2. Inertial Navigation System where Ωbib is the skew-symmetric matrix of the measurement of angular velocity vector (ω bib ) provided by the gyroscopes. The angular velocity ω bin consists of two parts. The first part is ω bie , which accounts for the Earth rotation rate, and the second part is ω ben , which accounts for the orientation change of the navigation frame. Therefore, ω bin can be expressed as ω bin = ω bie + ω ben  (3.50)  = Rbn ω nie + Rbn ω nen     0    e cosϕ + = Rbn  ω    ω e sinϕ     = Rbn      −v n M +h + ω e cosϕ  ve N +h v e tanϕ N +h  +  (3.51)    −v n M +h   ve  Rbn   N +h  v e tanϕ N +h  (3.52)     .   (3.53)  ω e sinϕ  Using all the equations above, we are now in a position to express the mechanization equation for all the necessary navigation states. The inputs to the navigation frame mechanization equation are the gyroscope and the accelerometer measurements. The outputs are the curvilinear coordinates, three velocity components and three attitude components. The final equation can be written as         D−1 vn r˙n     n b n n n + gn .  v˙n  =  R a + Ω )v + (2Ω   en ie   b   n n b b R˙ b Rb ( Ωib − Ωin )  (3.54)  The square boxes in the above equation represents the input raw measure49  3.2. Inertial Navigation System Initial alignment  mb  Gravity computation +  Rnb a  v˙ n  vn  rn  + -  b  vn  2Ωnie+ Ωnen Ωbib  + -  ˙ nb R  Rnb  Rnb(updated) b ωin  Input Block  Rbn  n ωin  + +  n ωen  n ωie  Output Block  INS Mechanization Block  Figure 3.5: Basic block diagram of INS mechanization equation for the local navigation frame. ments found from the accelerometers and the gyroscopes. If the outputs are chosen to be cartesian coordinates, three velocity components and three attitude components, the final equation can be written as     ˙ rncart    v˙n    R˙ nb         vn      n b n n n n  = Rb a + (2Ωie + Ωen )v + g .  Rnb (  Ωbib  −  Ωbin )  (3.55)  Solving these 1st order differential equations provides the navigation parameters in the local navigation frame. Figure 3.5, a visual form of Equation 3.54, is a block diagram which shows the basic parts of an inertial navigation system. The vertical box on the left of the figure represents the input block which consists of square boxes written ab , Ωbib and mb representing measurements from the accelerometers, gyroscopes and magnetometers. The inputs are then passed to the main algorithm which will be described shortly in the following section. 50  3.2. 
Inertial Navigation System  3.2.5  Step by Step Computation of Navigation Parameters  Based on the above discussion of inertial navigation systems, a step by step computation of the navigation parameters is presented in this section. Step 0: Before Starting Determining the initial alignment of the IMU must be done before the INS algorithm begins. By using Equation 3.20, Equation 3.21, and Equation 3.24 we can determine the roll, pith and yaw of the IMU, respectively, before it goes into motion. Using the value of the initial roll, pitch and yaw in Equation 3.12, the initial value of the transformation matrix from b−frame to n−frame, Rnb (t0 ) is determined. For initial alignment, the IMU should ideally be stationary for a certain amount of time. Typically, a time span of 30 s is sufficient. The initial value of all the components of the velocity vector vn (t0 ) can be considered zero. Then ω nen (t0 ) can be determined from Equation 3.39. The value of ω nie (t0 ) is a function of a given initial latitude and Earth’s rotation rate as shown in Equation 3.38. Step 1: Velocity Calculation The measurements from the triaxial accelerometer, ab (tk ), at time instant tk , are multiplied by the rotation matrix Rnb (tk ). Then, according to Equation 3.42, v˙ n (tk ) is calculated and is then integrated to get the velocity  51  3.2. Inertial Navigation System components as ∆vn ∆t  = Rnb ab − (2Ωnie + Ωnen )vn + gn  (3.56)  ⇒∆vn = (Rnb ab − (2Ωnie + Ωnen )vn + gn )∆t  (3.57)  1 vn (tk+1 ) = vn (tk ) + (∆vn (tk ) + ∆vn (tk+1 )). 2  (3.58)  Step 2: Position Calculation By integrating the velocity vector, the position vector can be determined. Both the curvilinear and cartesian coordinates for the position vector can be determined using 1 rn (tk+1 ) = rn (tk ) + (vn (tk ) + vn (tk+1 ))∆t. 2  (3.59)  This can be extended for the components of the position vector along East, North and upward directions as 1 re (tk+1 ) = re (tk ) + (v e (tk+1 ) + v e (tk ))∆t 2 1 n n r (tk+1 ) = r (tk ) + (v n (tk+1 ) + v n (tk ))∆t 2 1 u h(tk+1 ) = h(tk ) + (v (tk+1 ) + v u (tk ))∆t. 2  (3.60) (3.61) (3.62)  52  3.2. Inertial Navigation System Curvilinear position information can also be determined according to 1 (v n (tk+1 ) + v n (tk )) ∆t 2 M +h 1 (v e (tk+1 ) + v e (tk )) λ(tk+1 ) = λ(tk ) + ∆t 2 (N + h)cosϕ 1 h(tk+1 ) = h(tk ) + (v u (tk+1 ) + v u (tk ))∆t. 2  ϕ(tk+1 ) = ϕ(tk ) +  (3.63) (3.64) (3.65)  Step 3: Rotation Matrix Update Updating the rotation matrix is a very important part of an INS algorithm and can be done by solving the first order differential equation, Equation 3.49. However, to model the rotation matrix Rnb according to Equation 3.49, six differential equations are needed. For this reason, quaternion parameters are introduced to describe the rotation of the b−frame with respect to the n−frame. Solving the quaternion parameters requires only four differential equations to be solved. Moreover, the quaternion solution avoids singularity problems that might exist with other solution methods. The time rate of change of the quaternion is described by the first-order differential equation [42] ˙ = 1 Ω(ω)Q Q 2  (3.66)  where Ω(ω) is a four dimensional skew-symmetric matrix given as   0   −ω z  Ω(ω) =   ωy   ωz  −ωy ωx  0  ωx  −ωx  0  −ωx −ωy  −ωz     ωy   . ωz    (3.67)  0  53  3.2. 
Inertial Navigation System In Equation 3.67, ωx , ωy and ωz are the angular velocities of body rotation, which are determined by the gyroscopes after compensating for the Earth rotation and the local-level frame change of orientation, Ω = Ωbib − Ωbin .  (3.68)  To solve the first-order differential equation describing the time rate of change of the quaternion parameters, Euler methods can be used to determine the quaternion parameters Qk+1 at time tk+1 based on the values of the quaternion parameters Qk at time tk according to [42]  Qk+1 = Qk +     q1 (tk+1 )    q (t   2 k+1 ) ⇒  q3 (t  k+1 )   q4 (tk+1 )    1 Ω(ωk )Qk ∆t 2    q1 (tk )    0     q (t ) 1 −δθ z  2 k   =  +  q3 (t ) 2  δθy k     q4 (tk )  (3.69)  δθz  −δθy  0  δθx  −δθx  0  −δθx −δθy  −δθz  δθx      q1 (tk )        δθy y  q2 (tk )  .   δθz  q3 (tk )  0  q4 (tk )  Once the quaternion parameters are determined at a certain point in time, the rotation matrix Rnb can be obtained using the direct relationship [42]   q1 2 − q2 2 − q3 2 + q4 2    Rnb =      2(q1 q2 − q3 q4 )  2(q1 q3 + q2 q4 )  2(q1 q2 + q3 q4 )  −q1 2 + q2 2 − q3 2 + q4 2  2(q2 q3 − q1 q4 )  2(q1 q3 − q2 q4 )  2(q2 q3 + q1 q4 )  −q1 2 − q2 2 + q3 2 + q4 2 (3.70)  54   .   3.3. Proposed Integration Algorithm For solving the above equation, initial value of the quaternion vector Q0 , at time instant t0 is necessary, which can be determined by [42]    q1       q    2   = q3       q4  .25(R32 − R23 )/q4 .25(R13 − R31 )/q4  .25(R21 − R12 )/q4 √ .5 1 + R11 + R22 + R33          (3.71)  where the values R11 , R12 , R13 , R21 , R22 , R23 , R31 , R32 , and R33 are the elements of the rotation matrix Rnb (t0 ), which is determined by the initial alignment procedure. The updated rotation matrix is then used in the next stage to rotate the measurements from the accelerometers, which is resolved in the body frame to transform them into a quantity in the navigation frame. The entire algorithm from Step 1 to Step 3 is then repeated in the same way to find a continuous navigation solution.  3.3  Proposed Integration Algorithm  In this section, the proposed algorithm for the integrated OAOA/INS positioning solution is presented. The algorithm is based on some conventional GNSS/INS integration strategies. Different GNSS/INS integration architectures such as loosely coupled, tightly coupled, ultratightly coupled, and closely coupled are described in the literature. These architectures differ in how corrections are applied to the inertial navigation solution, what type of GNSS measurements are used, and how the GNSS user equipment is aided by the INS and integration algorithm [10]. 55  3.3. Proposed Integration Algorithm For integration purposes, the extended Kalman filter (EKF) is one of the most popular estimators [45]. The EKF is here used to correct the INS trajectory with position updates from the OAOA positioning solution. The EKF is actually an adaptation of the linear Kalman filter (LKF) to nonlinear functions, and is necessary here given the nonlinear nature of the INS equations. An integration strategy can be either an open loop or closed loop, depending on the feedback provided to the INS mechanization block. In an open loop system, the INS algorithm works independently without any feedback from the estimator’s error estimation. On the other hand, in a closed looped system, error estimation from the EKF is fed back to the INS mechanization block. 
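A closed loop correction of this kind can be sketched as follows: the latest EKF bias estimates are removed from the current raw inertial samples before mechanization, and the estimated position and velocity errors are removed from the mechanized states. The dictionary keys and function name below are illustrative placeholders, not the thesis software.

```python
import numpy as np

def apply_closed_loop_feedback(raw_accel, raw_gyro, nav_state, ekf_estimates):
    """Apply EKF feedback in a closed loop error-state architecture:
    remove estimated sensor biases from the current IMU sample and the
    estimated navigation errors from the mechanized states."""
    accel = raw_accel - ekf_estimates["accel_bias"]
    gyro = raw_gyro - ekf_estimates["gyro_bias"]
    nav_state["position"] -= ekf_estimates["delta_r"]
    nav_state["velocity"] -= ekf_estimates["delta_v"]
    return accel, gyro, nav_state

# Toy example with zero motion and small constant biases.
ekf_est = {"accel_bias": np.array([0.05, -0.02, 0.10]),
           "gyro_bias": np.array([0.001, 0.002, -0.001]),
           "delta_r": np.zeros(3), "delta_v": np.zeros(3)}
nav = {"position": np.zeros(3), "velocity": np.zeros(3)}
print(apply_closed_loop_feedback(np.array([0.05, -0.02, 9.91]),
                                 np.array([0.001, 0.002, -0.001]),
                                 nav, ekf_est))
```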
In an open loop system with low cost MEMS sensors, errors can grow unbounded in a very short period of time. Therefore, an error state Kalman filter is used to estimate the errors and the error estimates are fed back to the INS mechanization block. The basis of the error state EKF is the INS error state model, which can be obtained by the perturbation analysis of the INS mechanization equations described in Section 3.2.4 [46]. The full derivation of perturbation analysis is complex and is beyond the scope of this work. Therefore, only the results are shown here. The results of the perturbed system can be expressed as [44, 47] δ r˙ n = δ v˙ n  (3.72)  δ v˙ n = −2Ωnie δvn − An δψ n + Rnb δab  (3.73)  δ ψ˙  n  = −Ωnie δψ n + Rnb δω bib  (3.74) 56  3.3. Proposed Integration Algorithm where δ in front of a parameter indicates a perturbed quantity and An = skew symmetric matrix of an , δψ n = attitude error state vector, δab = vector of errors in the measured specific force vector, δω bib = vector of errors in the measured angular rate vector. The above equations can be used to model the inertial errors. However, in a system using a low grade MEMS IMU, errors may grow unbounded and it is important to estimate the errors present in the sensor measurement. All types of accelerometers and gyroscopes exhibit biases, scale factors, cross-coupling errors, and random noise to a certain extent [10]. Among all the errors, the accelerometer and gyroscope biases are the most significant and should be taken care of [48]. The bias is a constant error exhibited by all accelerometer and gyroscopes. It is independent of the underlying specific force or angular rate. In most cases bias is the dominant term in the overall error of an inertial instrument [10]. In the above equations δab and δω bib represent the errors in the accelerometer and gyroscope measurements respectively. Theoretically, these error terms are composed of a constant sensor bias, a temperature sensitivity effect, misalignment error, a scale factor and sensor noise [41]. However, modeling all of the above errors in a Kalman filter is impractical. Therefore, both the accelerometer and gyroscope errors are considered to consist of only a bias term and noise. The accelerometer and gyroscope error models  57  3.3. Proposed Integration Algorithm can therefore be written as δab = ba + wa  (3.75)  δω bib = bω + wω  (3.76)  where ba is the bias vector in the measured specific force vector, and bω is the bias vector in the angular rate measurement vector. The parameters wa and wω are the measurement noise of the specific force vector and angular rate vector, respectively. According to the 1st order Gauss-Markov model [49], the bias for a single accelerometer can be represented with a discrete differential equation as b˙ ax = −αx bax +  2αx σax 2 w(t)  (3.77)  where, αx is reciprocal of the correlation time, σax 2 is the variance of the process, and bax is the bias associated with the x−axis accelerometer. This equation can be extended to the vector case considering all three accelerometers as     √ 2αx σax 2 b˙ ax −αx 0 0 bax        b˙ a  = − 0 ba  +  2  w(t). 2α σ −α 0 y ay y  y   y    √ b˙ az 0 0 −αz baz 2αz σaz 2    (3.78)  The diagonal matrix with −αx , −αy , and −αz is denoted by αD , and the √ √ column matrix with 2αx σax 2 , 2αy σay 2 , and 2αz σaz 2 is denoted by Σa . Similarly, for the triaxial gyroscope, the bias error model for the angular  58  3.3. 
Proposed Integration Algorithm rate measurement can be written as               2βx σωx 2 bm −βx 0 0 b˙ m ω ω        bm  +  b˙ m  = − 0 2  2β σ −β 0 y ωy w(t). y   ω    ω  0 0 −βz bm b˙ m 2βz σωz 2 ω ω  (3.79)  The diagonal matrix with −βx , −βy , and −βz is denoted by β D , and the column matrix with  2βx σax 2 ,  2βy σay 2 , and  2βz σaz 2 is denoted by Σω .  For both the accelerometer and the gyroscope error models, the white noise term w(t) is of zero mean and unity variance. The error modeling of the magnetometer is often neglected because it only affects the calculation of the yaw angle in the body frame with respect to the navigation frame and this does not lead to any large errors in position estimation for short durations [43]. This is especially true in the integration of INS with GNSS. Since GNSS is a very reliable positioning solution, a small heading error initiated by the magnetometer bias can be ignored without any significant performance degradation in the integrated system. However, for indoor positioning applications, the aiding positioning solution used in this work, namely OAOA, is not yet as reliable as GNSS. Moreover, a given amount of position error in an indoor environment may be considered to be more significant than the same in an outdoor environment. Therefore, magnetometer error estimation is also necessary. The raw magnetometer measurements contain components from the magnetic field of the navigation system, the magnetic field of surrounding equipment, in addition to the Earth’s magnetic field. Therefore, it can be said that, for a particular environment, these errors are analogous to a fixed bias.  59  3.3. Proposed Integration Algorithm A complex calibration method is available for calibrating the magnetometer [10]. However, for this work, it is assumed that the magnetometer error vector δmb consists only of a bias term and random noise and can be written as δmb = bm + wm .  (3.80)  The bias error model for the magnetometer measurement can be written as               2γx σmx 2 b˙ m −γx 0 0 bm x x         b˙ m  = − 0 bm  +  2γy σmy 2  w(t). −γ 0 y  y   y    m m b˙ z 0 0 −γz bz 2γz σmz 2  (3.81)  The diagonal matrix with −γx , −γy , and −γz is denoted by γ D , and the column matrix with  2γx σax 2 ,  2γy σay 2 , and  2γz σaz 2 is denoted by Σm . As  stated earlier, the white noise term w(t) is of zero mean and unity variance. Figure 3.6 depicts the basic block diagram showing the INS mechanization block which gets feedback from the EKF and the input to the block is compensated for the bias estimation by the EKF. In the next section the proposed algorithms for OAOA/INS integration are described. A detailed description of the Kalman filter algorithm is not discussed here but may be found in the literature [39, 50]. Loosely coupled and tightly coupled integration strategies are discussed first. This is followed by a modified version of the loosely coupled and tightly coupled integration algorithms which use an error compensation block before the INS mechanization block.  60  3.3. 
Proposed Integration Algorithm  Qk  ˆa ˆm bˆω k bk bk -  mb  Gravity computation  +  v˙ n  +  vn  Rnb + -  Qk+  Rnb  Bias Compensate  -  rn  +  vn  2Ωnie+ Ωnen Rnb(updated) b ωin  Input Block  -  +  -  -  Ωbib  +  an  Qk +  δˆ r nk  δˆ v nk  Initial alignment  +  ab  EKF  INS Mechanization Block  Rbn  n ωin  + +  n ωen  n ωie  Output Block  Figure 3.6: Basic diagram of the INS mechanization block with feedback from the EKF and corrected input measurement.  3.3.1  Loosely Coupled Integration  In the case of GNSS/INS loosely coupled integration, the INS and GNSS operate as independent navigation systems. The navigation solution from each are blended using an estimator to form a third navigation solution [40]. Following the basic architecture of the loosely coupled algorithm for GNSS/INS integration, a loosely coupled algorithm for OAOA/INS integration is proposed. The key aspects of the proposed algorithm are described below and depicted in Figure 3.7. 1. The IMU consists of a triaxial accelerometer, a triaxial gyroscope and a triaxial magnetometer which provide the raw accelerometer measurements ak+1 , the raw gyroscope measurements ω k+1 , and the raw magnetometer measurements mk+1 . The subscript k + 1 denotes the 61  3.3. Proposed Integration Algorithm  ak+1  -  +  IMU  -  ω k+1 mk+1  +  -  INS Mechanization Block  Output  +  rˆnk+1  +  r nOAOA initial  +  OAOA Sensor  I k+1  OAOA Processor  S r nIN k+1  +  δˆ r nk δˆ v nk  -  r nOAOA k+1  EKF  ˆ Q k bˆm k bˆω k bˆak  Figure 3.7: Block diagram of a loosely coupled OAOA and INS integrated positioning solution. current update cycle. The measurements from the accelerometer and the gyroscopes are then corrected using the bias estimate from the EKF which is based on information from the previous cycle and thus denoted by the subscript k in Figure 3.7. 2. As stated earlier, magnetometer error estimation is also necessary for indoor positioning and, therefore, is added to the EKF state vector which provides significant performance improvement as shown in Chapter 4. 3. The estimated parameters are then sent to the INS mechanization block. The estimated velocity errors are used to correct the velocity in the INS mechanization block. The INS mechanization block cal-  62  3.3. Proposed Integration Algorithm S = nIN S r nIN S r nIN S culates the position vector, rnIN rxk+1 k+1 yk+1 zk+1 S = nIN S the velocity vector vnIN vxk+1 k+1  nIN S vyk+1  nIN S vzk+1  T  T  , and  , with respect  to the local navigation frame at each time epoch. 4. The rotation matrix for the rotation from the body frame to the navigation frame, implemented using Euler angles, may become singular at times. Therefore, instead of using Euler angles as in [43], quaternion algebra is used to update the rotation matrix. Therefore, the EKF here is used for estimating the quaternion parameters in addition to the estimation of attitude errors as in conventional INS/GNSS [10, 43]. This quaternion estimation is given as feedback to the INS mechanization block, where the quaternion parameters are then calculated according to Equation 3.69. The EKF estimated quaternion parameters are used for Qk in Equation 3.69. 5. For the INS mechanization block to begin, initial condition information is needed and this is provided by the OAOA processor block. In Figure 3.7 this information is represented by rnOAOA initial . 6. The OAOA sensor block provides a current from each of the three orthogonal photodiodes. The elements are termed IP D1 k+1 , IP D2 k+1 , and IP D3 k+1 . 
The currents from these three photodiodes have components at different frequencies from each optical source. The components of the currents at the different frequencies are not shown in Figure 3.7. These current components are passed to the OAOA pronOAOA , r nOAOA , and cessor block which then calculates the positions rxk+1 yk+1 nOAOA along the three orthogonal axes. The position vector, rnIN S , rzk+1 k+1  63  3.3. Proposed Integration Algorithm calculated from the INS algorithm is then subtracted from the position vector rnOAOA , calculated from OAOA processor, and the difference k+1 is fed to the EKF as the observation vector. 7. According to [46], the basic EKF estimator consists of nine navigation n , δˆ n , and error states consisting of three position error states δˆ rxk ryk n , three velocity error states δˆ n , δˆ n , and δˆ n , and three attitude δˆ rzk vxk vyk vzk  error states. In addition to all the error state vectors, the quaternion parameter vector Qk is also estimated here and it has four components. Moreover, for low cost MEMS IMU sensors, the state vector of the EKF must be augmented by incorporating accelerometer error states ˆba , ˆba , and ˆba , and gyroscope error states, ˆbω , ˆbω , and ˆbω . If the xk yk zk xk yk zk magnetometer error states are added to the EKF states, the EKF here becomes a twenty two state extended Kalman filter. 8. The estimated position error is used to correct the position output from the INS. The final estimated position is given as S ˆrnk+1 = rnIN rnk . k+1 + δˆ  (3.82)  EKF for Loosely Coupled Integration The Kalman filter used here for the OAOA/INS integration is a twenty two or nineteen state Kalman filter, depending on whether the magnetometer errors are estimated or not. The state vector for the EKF can be written  64  3.3. Proposed Integration Algorithm as           xnk =            δrnk    δvnk     δψ n     Qk  .  (3.83)         bak  bωk  bm k  All components of this state vector are vectors containing three elements, except Qk , which has four quaternion parameters q1 (tk ), q2 (tk ), q3 (tk ), and q4 (tk ). This presents a problem for writing the system model for the EKF. For this reason, Qk is divided into two parts as         q1 (tk )  q1 (tk )        q (t ) q (t ) Q  2 k   2 k   (3×1)k  Qk =  . = = q3 (t )  q3 (t )  q4k k  k     (3.84)  q4 (tk )  q4 (tk )  For the same reason Ω(ω) in Equation 3.67, is rewritten in the form  0   −ωz Ω(ω) =    ωy   ωz 0 −ωx  −ωy        ωx      ωx   ωy  0  −ωx −ωy −ωz  Ω3×3 Ω3×1  =  Ω1×3   ωz  0    0    .  (3.85)  Based on Equation 3.72 and on the error analysis of the accelerometer, gyroscope and magnetometer, the system model of the EKF can be written  65  3.3. Proposed Integration Algorithm as   δ r˙ n      03×3      δ v˙ n  0   3×3      δ ψ˙n  0   3×3       Q ˙  (3×1)  03×3 =   q˙4  01×3        δb ˙ a  03×3       ˙ ω  03×3  δb     δ b˙  m  I3×3  03×3  03×3  03×1  −2Ωnie  An  03×3  03×1  Rnb  03×3  −Ωnie  03×3  03×3  03×3  03×3  03×3  1 2 Ω3×1  01×3  03×3  1 2 Ω3×3 1 2 Ω1×3  03×3  03×3  03×3  03×1  03×3  03×3  03×3  03×1  03×3  03×3  03×3  03×1  03×3   03×3 03×3 03×1    Rn  b  0  3×3  0  3×3 + 01×3   03×3   03×3     0  03×3 03×3 03×3     Rnb  03×3  01×3 03×3 03×3  03×3 03×3      wa   03×1   wω   . 
0    w Σa    Σω         n   03×3   δψ         03×3 03×3 03×3  Q(3×1)       a    03×3  δb    ω   03×3   δb     01×3 01×3 01×3    αD  03×3  03×3  βD  03×3 03×3  q4  δbm  γD    03×1       n  03×3 03×3   δv   03×3 03×1   Rnb  δrn  (3.86)  Σm  Obtaining the Gauss-Markov model parameters for the sensor errors is an important task to this end. Approximately 12 hours of static data was collected by a commercial IMU device. In static conditions, the triaxial accelerometer measures only the reaction force due to gravity, the gyroscope measures only the Earth’s rotation and the magnetometer measures hardiron and soft-iron magnetic flux. All of the quantities are constant, at least for the length of the test considered. Therefore, it can be concluded that any variations in the measurements must be caused by variations in the sensor errors, which is to be modeled. The autocorrelation function of the raw data was computed as stated in [49] to determine the parameters for 66  3.3. Proposed Integration Algorithm Gauss-Markov model. In addition to the system model, the EKF also needs a measurement model such as zk = Hk xk + wmk , which is basically a relationship between the observation vector zk and the state vector xk , where Hk is the measurement matrix and wmk is the measurement noise vector. For the loosely coupled system, the observation vector consists of three position errors and can be written as     nIN S − r nOAOA rxk xk     nIN S − r nOAOA  . zL =   ryk  yk nIN S rzk  −  nOAOA rzk  (3.87)  For the loosely coupled system, the state vector and the observation vector have a linear relationship. The measurement matrix for loosely coupled integration can be written as  HL = I3×3 03×3 03×3 03×3 03×1 03×3 03×3 03×3 .  (3.88)  The algorithm for estimating the states of the EKF using the system model and measurement model, follows standard the EKF algorithm, which is available in any book on Kalman filtering [50, 51].  3.3.2  Tightly Coupled Integration  In the case of GNSS/INS tightly coupled integration, raw GNSS observations are passed directly to an EKF. The observation vector contains the GNSS processor’s pseudo-range and pseudo-range rate measurements [10]. Following the basic architecture of a tightly coupled algorithm for GNSS/INS integration, a tightly coupled algorithm for OAOA/INS integra67  3.3. Proposed Integration Algorithm  ak+1  -  +  IMU  -  ω k+1 mk+1  +  -  INS Mechanization Block  Output  +  rˆnk+1  OAOA Processor OAOA Sensor  r nOAOA initial  + S r nIN k+1  Current Calculation Block S I IN k+1  I k+1  +  +  -  δI k+1  δˆ r nk δˆ v nk EKF  ˆ Q k bˆm k bˆω k bˆak  Figure 3.8: Block diagram of a tightly coupled OAOA and INS integrated positioning solution. tion is proposed. The key aspects of the proposed algorithm are described below and depicted in Figure 3.8. 1. The difference between the loosely coupled and the tightly coupled integration algorithms is in the observation vector. In loose integraS tion, the position vector, rnIN k+1 , calculated from the INS algorithm is  subtracted from the position vector, rnOAOA , calculated from OAOA k+1 processor, and the difference is fed to the EKF as the observation vector. In the case of the tightly coupled system shown in Fig. 3.8, however, the observation vector contains the differences between the OAOA sensor measured currents and current values predicted from the corrected inertial navigation solution. 
To be more specific, each of the currents from the photosensor has components at different fre68  3.3. Proposed Integration Algorithm quencies depending on the number of light sources used. Using the estimated position vector determined by the INS algorithm and simple trigonometry, similar current components can be calculated. In the case of the tightly coupled algorithm, the difference between the two current values is fed to the EKF as an observation vector. 2. If N light sources are used for positioning, current from the each photodiode will have N frequency components. IP D1 will have components IPF1D1 , IPF2D1 , IPF3D1 , ... ... ..., IPFND1 . Similarly, IP D2 will have components IPF1D2 , IPF2D2 , IPF3D2 , ... ... ..., IPFND2 , and IP D3 will have components like IPF1D3 , IPF2D3 , IPF3D3 , ... ... ..., IPFND3 . Similar current components can be calculated by knowing the position vector of the sensor, determined by the INS algorithm. The difference between them can be represented as δI1F1 , δI1F2 , δI1F3 , ... ... ..., δI1FN , δI2F1 , δI2F2 , δI2F3 , ... ... ..., δI2FN , δI3F1 , δI3F2 , δI3F3 , ... ... ..., δI3FN . In the case of tightly coupled integration, these current differences comprise the observation vector for the EKF. EKF for Tightly Coupled Integration The EKF for the tightly coupled OAOA/INS integration has the same system model as the EKF used for loosely coupled integration. However, the measurement model in the tightly coupled case is different from that of the loosely coupled case. In the case of the loosely coupled algorithm, a linear relationship was available between the state vector and the observation vector. However, in the tightly coupled case, the measurement model is nonlinear and the relationship between the state vector and observation  69  3.3. Proposed Integration Algorithm vector can be represented by [50]  zk = h(xk ) + wmk .  (3.89)  The observation vector consists of the different frequency components of the current differences described above. The observation vector can be written as      δI1F1  .   .   .      FN  δI1     F1   δI2     .  zT =  .. .    FN  δI2     F1   δI3     .   ..     (3.90)  δI3FN  Equation 3.89 does not represent a linear relationship between zk and xk . A linearization of this relationship is necessary for the Kalman filter algorithm to operate. According to [50], the measurement matrix can be written as the Jacobian of the Taylor expansion of h(xk ). Therefore, the measurement  70  3.3. Proposed Integration Algorithm matrix is written as  HT  =  ∂h ∂xk   =  (3.91) xk =ˆ xk|k−1  ∂h11 n  ∂δrx  ∂h12  ∂δrn  x  ∂h11 ∂δryn ∂h12 ∂δryn  ∂h3N ∂δrxn  ∂h3N ∂δryn          .. . .. .  .. . .. .  ∂h11 ∂δrzn ∂h12 ∂δryn  .. . .. .  ∂h3N ∂δryn  ··· ··· ··· ··· .. . .. . ··· ···    ∂h11 ∂δbm z  ∂h12   ∂δbm z   .. . .. .  ∂h3N ∂δbm z          (3.92)  xk =ˆ xk|k−1  where F  δIi j = hij (δrxn , δryn , δrzn ) = hij ; i = 1, 2, 3 and j = 1, 2, 3, ......, N.  (3.93)  Note that, the constant of the Taylor series expansion of h(xk ) appears in the Kalman filter correction equation [50]. The differentials of Equation 3.91 are calculated numerically. The values of these currents generated in the photosensor depend on the AOAs of the light beams striking the photosensor, which in turn is a function of the position of the photosensor. 
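A generic central-difference evaluation of these partial derivatives is sketched below. It differentiates only with respect to the three position components, and predict_currents is a hypothetical stand-in for the geometric current model of a particular beacon layout; it is not the model used in the thesis.

```python
import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    """Central-difference Jacobian of a vector-valued measurement
    function h evaluated at the state x (cf. Equation 3.91)."""
    x = np.asarray(x, dtype=float)
    h0 = np.asarray(h(x))
    J = np.zeros((h0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(h(x + dx)) - np.asarray(h(x - dx))) / (2.0 * eps)
    return J

def predict_currents(pos):
    """Toy two-beacon current model with placeholder geometry (metres)."""
    beacons = np.array([[0.65, 0.40, 0.50], [-0.60, 0.40, 0.50]])
    out = []
    for b in beacons:
        d = b - pos
        r = np.linalg.norm(d)
        theta = np.arccos(d[2] / r)      # polar AOA
        phi = np.arctan2(d[1], d[0])     # azimuthal AOA
        out += [np.sin(theta) * np.cos(phi),
                np.sin(theta) * np.sin(phi),
                np.cos(theta)]
    return np.array(out)

print(numerical_jacobian(predict_currents, [0.0, -0.30, 0.0]).shape)  # (6, 3)
```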
A general form of this function is not convenient to determine because the function depends on the number and position of the light sources used in the system as well as knowledge of the reference frame used.  71  3.3. Proposed Integration Algorithm  3.3.3  Integration Algorithms with Augmented Observation  One of the major sources of error in inertial navigation systems are the different errors associated with the inertial sensors. Therefore, it is important to compensate for the errors generated by noise present in the accelerometer, gyroscope and magnetometer measurements. For this reason, accelerometer bias, gyroscope bias and magnetometer bias errors are estimated in the EKF, and then used to correct the raw sensor measurements. However, the error estimation done in the EKF is based on information from the previous time epoch. Therefore, a prior error estimation method can be implemented which is associated with the current measurements of the sensor data. Moreover, except for the position error states, all the states which are estimated by the EKF, as discussed in the previous sections, are estimated solely using the system model, because there is no contribution from the sensor bias, velocity error or quaternion parameters in the observation vector. By augmenting the observation vector by adding prior sensor bias estimation data and the quaternion vector, the overall performance of the positioning system can be improved. This is shown in Chapter 4. The idea of augmented observations was introduced in [43]. However, for this work, a modified version of the basic idea of augmenting the observation vector is used for the specific problem at hand. The key and novel features of the integration algorithm which includes an augmented observation vector are depicted in Figure 3.9 and described as follows. 1. The EKF in the loosely coupled and tightly coupled strategies for 72  3.3. Proposed Integration Algorithm  EKF  Observation vector  ˆa ˆm bˆω k bk bk -  mk+1  mb Initial alignment  Rnb  Q k+1 -  ak+1  ab  ak+1 ω k+1  Ωbib  ATk  +  aˆ k LKF  +  Input Block  Observation Augmenting Block  INS Mechanization Block  +  Bias Compensate  Figure 3.9: Observation augmenting block. OAOA/INS integration, as described in the previous section, estimates the accelerometer, gyroscope and magnetometer bias using information from the previous cycle. This is then used to correct the raw measurements from the accelerometers, gyroscopes and magnetometer in the current cycle. In [43] a linear Kalman filter (LKF) was used to estimate the bias in the accelerometer and gyroscope prior to the bias estimation in the EKF. However, in the present work an LKF is used to estimate only the accelerometer bias using the current cycle measurement, which is then used to augment the observation vector of the EKF. 2. The EKF used in loosely and tightly coupled integration estimates the quaternion vector, Q, which is provided as feedback to the INS mechanization block to determine the updated quaternion vector at the next time epoch according to Equation 3.66. The EKF used in the previous 73  3.3. Proposed Integration Algorithm section does not have any observation for quaternion estimation. The quaternion parameters estimated from the raw measurements of the current cycle are used to augment the observation vector for the EKF. This helps the EKF to estimate the quaternion parameters based on the most up-to-date information. 
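The quaternion observation in item 2 is obtained by converting the rotation matrix built from the current accelerometer and magnetometer readings (Equation 3.12) into quaternion form via Equation 3.71. A minimal sketch of that conversion, assuming the common branch where 1 + R11 + R22 + R33 is positive, is given below; the function name is illustrative.

```python
import numpy as np

def quaternion_from_rotation_matrix(R):
    """Quaternion (q1, q2, q3, q4) from a rotation matrix, following the
    branch of Equation 3.71 (assumes 1 + R11 + R22 + R33 > 0)."""
    q4 = 0.5 * np.sqrt(1.0 + R[0, 0] + R[1, 1] + R[2, 2])
    q1 = 0.25 * (R[2, 1] - R[1, 2]) / q4
    q2 = 0.25 * (R[0, 2] - R[2, 0]) / q4
    q3 = 0.25 * (R[1, 0] - R[0, 1]) / q4
    return np.array([q1, q2, q3, q4])

# The identity rotation maps to the unit quaternion (0, 0, 0, 1).
print(quaternion_from_rotation_matrix(np.eye(3)))
```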
Prior Accelerometer Bias Estimation The EKF estimates the accelerometer bias solely from the system model using information from the previous cycle. This estimated bias is then used to correct the raw accelerometer measurement. The EKF does not have any information on the accelerometer error associated with the current cycle accelerometer data. Therefore, a linear Kalman filter (LKF) is employed which uses the current cycle measurement from the accelerometer as part of its observation vector to estimate the prior accelerometer bias. In Figure 3.9, the linear Kalman filter used for accelerometer prior bias estimation is named LKF. The system model for bias estimation was discussed in Section 3.3 and can be written as a b˙ = αD ba + Σa w(t).  (3.94)  The measurement model can be written as zbak = bak + wmk  (3.95)  where zbak is the observation vector for this measurement model and wmk  74  3.3. Proposed Integration Algorithm is the measurement noise vector. The accelerometer bias term required for the observation vector of the EKF can be calculated using the previously ˆk and the state transition matrix Ak , estimated accelerometer measurement a as described in [43]. The details for calculating the state transition matrix can be found in [43]. The state transition matrix can be used for estimating the acceleration of the current cycle from the estimated acceleration of the previous cycle. The estimated current cycle acceleration is subtracted from the raw accelerometer measurement to find the bias associated with the current cycle. The bias term can be estimated according to ˜a = a ˜k+1 − ATk a ˆk . zbak = b k+1  (3.96)  This estimated bias term is part of the EKF observation vector. Prior Quaternion Parameter Estimation The extended Kalman filter also estimates the quaternion vector Qk . However, the EKF estimates the quaternion parameters solely based on the system model because there is no observations for the quaternion parameters. By incorporating a prior estimate of Qk in the EKF observation vector, better estimation of the quaternion parameters is possible. The accelerometer and magnetometer measurements may be used to calculate roll, pitch and yaw according to Equation 3.20, Equation 3.21, and Equation 3.24. Equation 3.12 can then be used to calculate the rotation matrix directly from the roll, pitch and yaw. The rotation matrix can be used for calculating the quaternion vector according to Equation 3.71, and as a result, this 75  IMU  ak+1 ω k+1 mk+1  Augmenting Block  3.3. Proposed Integration Algorithm  + +  -  INS Mechanization Block  Output  +  rˆnk+1  +  r nOAOA initial  +  OAOA Sensor  I k+1  OAOA Processor  S r nIN k+1  +  δˆ r nk δˆ v nk  -  r nOAOA k+1  EKF  ˆ Q k bˆm k bˆω k bˆak  Figure 3.10: Block diagram of a loosely coupled OAOA and INS integrated positioning solution with augmented observations. calculated quaternion vector is directly related to the observed accelerometer and magnetometer measurement at the current cycle. The calculated quaternion is added to the observation vector of the EKF. Loosely Coupled Algorithm with Augmented Observations The loosely coupled integration with augmented observations is shown in Figure 3.10. The system model is the same as that discussed in Section 3.3.1. However, the measurement model is not the same due to the augmented observation vector. The augmented observation vector for the loosely coupled integration algorithm can be written as  76  3.3. Proposed Integration Algorithm   δrnk       Q  k(3×1)  zL[A] =  . 
  q 4k    (3.97)  bak  The measurement matrix for the measurement model of the loosely coupled algorithm with augmented observation can be represented as   I3×3   0  3×3 HL[A] =  01×3   03×3 03×3 03×3 03×1 03×3 03×3 03×3 03×3 03×3  I3×3  01×3 01×3 01×3     03×1 03×3 03×3 03×3   . 1 01×3 01×3 01×3    03×3 03×3 03×3 03×3 03×1  I3×3  (3.98)  03×3 03×3  Given the matrices of Equation 3.97 and Equation 3.98, as well as the system model and measurement model, the standard Kalman filter algorithm can be implemented to estimate the positioning solution. Tightly Coupled Algorithm with Augmented Observations The tightly coupled algorithm with augmented observations is depicted in Figure 3.11. In this case the observation vector consists of the current difference values, instead of δrnk as found in the observation vector of the loosely coupled algorithm. The other elements of the observation vector remain the same as for loosely coupled integration. The observation vector for the tightly coupled system with augmented observations can be written  77  IMU  ak+1 ω k+1 mk+1  Augmenting Block  3.3. Proposed Integration Algorithm  + +  -  INS Mechanization Block  Output  +  rˆnk+1  OAOA Processor OAOA Sensor  r nOAOA initial  +  I k+1  S r nIN k+1  +  Current Calculation Block S I IN k+1 -  +  δˆ r nk δˆ v nk  ˆ δI k+1 EKF Qk bˆm k ω ˆ bk bˆak  Figure 3.11: Block diagram of a tightly coupled OAOA and INS integrated positioning solution with augmented observations. as    δI1F1 .. .               δI FN  3 . zT [A] =    Qk(3×1)       q4k     (3.99)  bak  78  3.3. Proposed Integration Algorithm According to [50], the measurement matrix can be written as  HT [A] =  ∂h ∂xk   (3.100) xk =ˆ xk|k−1  ∂h11 n  ∂δrx  ∂h12  ∂δrn x   =   .  ..   .  .  .   ∂h3N  ∂δrn x   ∂q1  ∂δrn  x  .  .  .   ..  .  a ∂δbz ∂δrza  ∂h11 ∂δryn ∂h12 ∂δryn  ∂h11 ∂δrzn ∂h12 ∂δryn  ··· ···  ∂h3N ∂δryn ∂q1 ∂δryn  ∂h3N ∂δryn ∂q1 ∂δryn  ··· ···  ∂δba z ∂δryn  ∂δba z ∂δryn  .. . .. .  .. . .. .  .. . .. .  .. . .. .  ··· ··· .. . .. . ··· ··· .. . .. . ··· ···    ∂h11 ∂δbm z  ∂h12   ∂δbm z          ∂h3N   ∂δbm z  ∂q1   ∂δbm z  ..  .    ..  .   a  .. . .. .  ∂δbz ∂δbm z  (3.101)  xk =ˆ xk|k−1  where F  hij = δIi j = hij (δrxn , δryn , δrzn ); i = 1, 2, 3 and j = 1, 2, 3, ......, N.  (3.102)  As stated earlier in the case of the tightly coupled system without augmented observations, a general form of this function is not easy to determine because the function depends on the number and position of the light sources used in the system, and a knowledge of the reference frame used. Therefore, the differentials are calculated numerically. Given the matrices of Equation 3.99 and Equation 3.100, as well as the system model and measurement model, the standard Kalman filter algorithm found in any standard text [39, 50] can be implemented to estimate the positioning solution.  79  Chapter 4  Experimental Work and Results In this chapter the experimental results for the performance analysis of the proposed algorithms described in the previous chapter are discussed. The description of the necessary experimental setup is first presented followed by details of the data collection and data processing procedure. The results and performance comparison of different algorithms are then presented. The limitations and future scope of the experiments are also discussed in this chapter.  
Chapter 4

Experimental Work and Results

In this chapter the experimental results for the performance analysis of the proposed algorithms described in the previous chapter are discussed. The description of the necessary experimental setup is presented first, followed by details of the data collection and data processing procedure. The results and performance comparison of the different algorithms are then presented. The limitations and future scope of the experiments are also discussed in this chapter.

4.1 Experiment

4.1.1 Data Collection

For the proposed indoor positioning solution, inertial sensor data and data from the OAOA sensor are necessary. These data are then used by the proposed algorithms to calculate the positioning solution.

IMU Data Collection

In Figures 3.7, 3.8, 3.10, and 3.11 the IMU block supplies raw accelerometer, gyroscope and magnetometer measurements to the INS mechanization block. For the experiments conducted, the commercial athletic performance sensor shown in Figure 4.1 is used as the IMU unit. This device contains accelerometers, gyroscopes and magnetometers along three orthogonal axes and is able to collect data at a frequency of 100 Hz.

Figure 4.1: A commercial athletic performance sensor used as the IMU unit.

For collecting data with an IMU, motion is required, and this is provided by the 'turntable' shown in Figure 4.2. The turntable is a circular rotating platform with a diameter of 60 cm. The IMU is mounted on the turntable and collects data as the turntable rotates.

Figure 4.2: A DC motor turntable (60 cm diameter) for creating a controlled motion.

OAOA Data Collection

The basic principle of the OAOA sensor was discussed in Chapter 3. Two or more sources of light at known coordinates, and modulated by different frequencies as described in Chapter 3, are necessary for the experiment. The new OAOA sensor is able to provide location information by determining the AOAs of the incident light from the light sources, with respect to a known reference frame. At the time of the research, the OAOA sensor was not able to collect continuous data while moving. Laboratory experiments were done in static conditions for determining the position of the sensor and the results were published in [4, 5]. For those experiments, several current measurements were taken for a particular position of the sensor and then averaged to find the current measurement for that particular position. To collect the current data continuously while the sensor is moving, an automatic data interpretation circuit is needed. For the time being, the OAOA sensor was able to provide positioning data only in static conditions. Therefore, for this work, simulated data is used for the OAOA sensor observations for the case of a moving sensor. The nature of this simulated data is described in Section 4.1.2.

4.1.2 Experimental Setup

Consider the hypothetical setup of Figure 4.3, which consists of the turntable and four optical sources at known coordinates.

Figure 4.3: Hypothetical experimental setup for data collection, with optical sources at (-60, 40, 50), (65, 40, 50), (-60, -90, 50) and (65, -90, 50), the origin at (0, 0, 0), and the 60 cm turntable with its start point marked (coordinate units are cm, and figure is not drawn to scale).

For this experiment the turntable is rotated at a constant angular velocity of 36 deg/sec. Therefore, the turntable makes one complete revolution in 10 seconds. For a practical experiment with the OAOA sensor collecting data in a dynamic environment, the OAOA sensor and the IMU should be stacked together. In that case the OAOA sensor can sense the AOA of the incident light beams from the optical sources. The AOA values can later be used to calculate the position of the OAOA sensor. However, as stated earlier, the OAOA sensor is not yet ready to collect data in a dynamic environment. Therefore, only the INS sensor is mounted on the turntable and the turntable is rotated at a constant velocity.
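To make the setup concrete, the sketch below generates the reference (true) trajectory implied by this geometry and the corresponding theoretical azimuth AOA to each of the four sources of Figure 4.3. It is only an illustrative sketch: the turntable centre, the starting angle and the sense of rotation are assumptions made here for the example, not values taken from the thesis.

import numpy as np

# Source coordinates from Figure 4.3 (cm) and turntable parameters.
SOURCES = np.array([[-60.0, 40.0, 50.0], [65.0, 40.0, 50.0],
                    [-60.0, -90.0, 50.0], [65.0, -90.0, 50.0]])
RADIUS_CM = 30.0                   # sensor assumed at the rim of the 60 cm turntable
OMEGA_DEG_S = 36.0                 # one revolution every 10 s
CENTER = np.array([0.0, -30.0])    # assumed turntable centre (cm)

def true_position(t):
    # Sensor position on the rim at time t (s); counter-clockwise rotation
    # starting on the +x side of the centre (both choices are assumptions).
    theta = np.deg2rad(OMEGA_DEG_S) * t
    return CENTER + RADIUS_CM * np.array([np.cos(theta), np.sin(theta)])

def true_azimuths(p_xy):
    # Theoretical azimuth AOA (rad) from the sensor to each source,
    # measured in the horizontal plane of the reference frame.
    d = SOURCES[:, :2] - p_xy
    return np.arctan2(d[:, 1], d[:, 0])

# Example: a 100 Hz reference trajectory over one full revolution.
t = np.arange(0.0, 10.0, 0.01)
track = np.array([true_position(ti) for ti in t])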
The turntable velocity is measured with the help of a stopwatch. For calculating theoretical AOA values, to use as simulated data, the geometry of the trajectory and the velocity of the INS module are required. For a known geometry (circular in this case), the position of any moving object can be calculated using simple equations of motion. Since the geometry of the path and the velocity are already known, for any particular position of the IMU on the rotating turntable, theoretical AOA values for the four optical beacons can be calculated. These 'true' AOA values are then corrupted with additive white Gaussian noise (AWGN) and become the simulated OAOA data. The update rate of the OAOA sensor in a dynamic environment is still undetermined. Therefore, for this work different OAOA update rates are used to investigate the effect of the OAOA update rate on the performance of the positioning solution.

Figure 4.4: Comparison between true and estimated trajectories (reference x-axis and y-axis in cm). (a) True trajectory and trajectory determined by OAOA only. (b) True trajectory and trajectory determined by the OAOA and INS loosely coupled integration algorithm.

The simulated OAOA measurements are generated at update rates of 10 Hz and 5 Hz. These OAOA position measurements are used to update the 100 Hz INS positioning solution using the proposed algorithms described in Chapter 3. Consequently, an OAOA update is provided to the INS positioning process every 10 or 20 epochs. At the epochs when no OAOA update is available, state vector prediction is performed using the state estimation of the previous epoch. In this work, positioning is performed in a horizontal (2-D) plane because the turntable is only able to provide motion in 2-D space. However, the same algorithms are able to provide a positioning solution in three-dimensional space.

4.2 Performance Analysis and Discussion

Figure 4.4(a) shows the true trajectory (determined by the known geometry, which is a circle with 60 cm diameter in this case) and the trajectory determined from the simulated and noise-corrupted OAOA values only. Note that no IMU data is used to produce the trajectories in Figure 4.4(a). As expected, the trajectory determined from OAOA information is not smooth, since the noise associated with each epoch is totally uncorrelated with the noise of previous or future epochs. The simulated OAOA position information is created by corrupting the theoretical AOA values with noise. As stated above, in this work only horizontal positioning is considered, which depends only on the azimuthal angles if a particular orientation of the sensor is considered. From [4] it is clear that azimuthal angle errors are relatively constant irrespective of the position of the photosensor, which means the errors in the azimuthal angle do not depend on the actual values of the azimuthal angle. The errors are typically less than 2° as reported in [4]. Therefore, white Gaussian noise with mean 0° and variance 2° was added to the AOA values to keep most of the angle errors less than 2°. Using these noise-corrupted AOA measurements, the trajectory of Figure 4.4(a) is obtained, where the maximum error is 7.34 cm. This level of positioning error is similar to that reported in [4].
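The following sketch illustrates this simulation step: the theoretical azimuths are corrupted with zero-mean Gaussian noise and a 2-D position is then recovered from the noisy bearings alone, which is the kind of OAOA-only solution plotted in Figure 4.4(a). The bearing-only least-squares triangulation used here is one straightforward way to convert azimuths to a position and is an assumption of this sketch; the actual OAOA processor of [4] works from the differential photocurrents. The thesis states the noise variance as 2°, which is interpreted here as 2 deg^2.

import numpy as np

rng = np.random.default_rng(0)
SIGMA_DEG = np.sqrt(2.0)   # standard deviation corresponding to a variance of 2 deg^2

def noisy_azimuths(true_az_rad):
    # Corrupt the theoretical azimuths with zero-mean AWGN.
    return true_az_rad + np.deg2rad(rng.normal(0.0, SIGMA_DEG, size=true_az_rad.shape))

def position_from_azimuths(az_rad, sources_xy):
    # Bearing-only least squares: the sensor lies on the line through each
    # source along the (noisy) azimuth, i.e.
    #   sin(az_i) * x - cos(az_i) * y = sin(az_i) * x_i - cos(az_i) * y_i.
    A = np.column_stack([np.sin(az_rad), -np.cos(az_rad)])
    b = np.sin(az_rad) * sources_xy[:, 0] - np.cos(az_rad) * sources_xy[:, 1]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p   # estimated (x, y) in cm

Because the noise drawn at each epoch is independent, the resulting OAOA-only track jumps from epoch to epoch, which is exactly the lack of smoothness discussed above.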
Figure 4.4(b) shows the true trajectory and the trajectory determined by integrating INS and OAOA with the loosely coupled integration algorithm depicted in Figure 3.7. The INS/OAOA trajectory exhibits a smooth transition from one epoch to the next, as expected, given the use of the EKF. It is obvious from the comparison of Figure 4.4(a) and Figure 4.4(b) that the integrated system provides a better solution in terms of accuracy and smoothness. The tightly coupled integration system of Figure 3.8 provides a trajectory similar to that of Figure 4.4(b) but with better accuracy. However, from a figure such as Figure 4.4(b) it is not possible to quantify the amount of position error merely by observation. Therefore, the performance comparisons are quantified in tabular form as shown in Tables 4.1, 4.2, 4.3, and 4.4. In Table 4.1 the performance comparison of all four algorithms is quantified for an update rate from the OAOA sensor of 10 Hz and with AWGN in the simulated azimuthal angles with 0 mean and a variance of 2°. Table 4.2 is similar to Table 4.1 but with a different update rate from the OAOA sensor, which is 5 Hz. Tables 4.3 and 4.4 are similar to Tables 4.1 and 4.2, respectively, but with a higher amount of AWGN (0 mean and a variance of 3°) added to corrupt the simulated azimuthal angles.

Table 4.1: Performance comparison of the OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 10 Hz, AWGN with 0 mean and a variance of 2° is added to the simulated azimuthal angle values).

  IPS strategy                      Max error [cm]   Average error [cm]
  OAOA-only                         7.34             4.21
  Loosely coupled (LC)              3.15             2.34
  Tightly coupled (TC)              3.23             1.67
  LC with augmented observation     3.01             2.20
  TC with augmented observation     2.95             1.59

Table 4.1 presents the comparison between the performances of the different algorithms described in the previous chapter. Maximum errors and average errors are given for the OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observation and tightly coupled with augmented observation algorithms. For the results in Table 4.1, the OAOA update is provided at every 10th sample of the INS data. For the OAOA-only case the maximum position error is 7.34 cm and the average positioning error is 4.21 cm. For the loosely coupled system and tightly coupled system, the maximum errors reduce to 3.15 cm and 3.23 cm, respectively. The improvement in performance of the tightly coupled system is not evident when considering the maximum error only. The maximum error may occur due to a large value of noise at a particular epoch. However, as expected, the improvement in the average error is evident: the average error for the tightly coupled system is only 1.67 cm, compared to an average error of 2.34 cm for the loosely coupled system. The last two rows of Table 4.1 show the performance of the loosely coupled system and tightly coupled system with augmented observations, as depicted in Figure 3.10 and Figure 3.11, respectively. In this case the improvement is not very significant compared to the algorithms without augmented observations. This may be due to the high update rate from the OAOA sensor. Due to the high update rate, there may be little room for the algorithms with augmented observations to show a significant performance improvement, since the INS solution is being updated by OAOA measurements so often. However, the performance improvement is evident in the case where the update rate is lower, as will be shown below.
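The update-rate dependence discussed here comes from the way the filter runs: the INS mechanization and EKF prediction run at the 100 Hz IMU rate, while an OAOA correction is applied only at every 10th (10 Hz) or 20th (5 Hz) epoch. A minimal sketch of this interleaving is shown below; ekf_predict and ekf_update stand in for the prediction and update steps of the proposed algorithms and are placeholders, not the thesis's code.

import numpy as np

def run_filter(imu_samples, oaoa_positions, ekf_predict, ekf_update, x0, P0):
    # imu_samples: one IMU record per 100 Hz epoch.
    # oaoa_positions: one entry per epoch; None at epochs with no OAOA update.
    x, P = x0, P0
    track = []
    for imu, z_oaoa in zip(imu_samples, oaoa_positions):
        x, P = ekf_predict(x, P, imu)        # prediction at every 100 Hz epoch
        if z_oaoa is not None:               # correction every 10th/20th epoch
            x, P = ekf_update(x, P, z_oaoa)
        track.append(x)
    return np.array(track)

With the 5 Hz schedule the state is propagated through twice as many prediction-only epochs between corrections, which is why the INS drift becomes more visible in Table 4.2.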
Table 4.2: Performance comparison of the OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 5 Hz, AWGN with 0 mean and a variance of 2° is added to the simulated azimuthal angle values).

  IPS strategy                      Max error [cm]   Average error [cm]
  OAOA-only                         7.34             4.21
  Loosely coupled (LC)              6.01             4.89
  Tightly coupled (TC)              5.87             4.21
  LC with augmented observation     3.79             3.11
  TC with augmented observation     3.23             2.85

Table 4.2 shows the results for the same five algorithms as Table 4.1 but for an OAOA update provided at every 20th sample of the INS data. That is, the OAOA data rate is 5 Hz. In this case the performance of every algorithm is worse than that shown in Table 4.1. This is expected because of the lower update rate of the OAOA sensor. With a lower OAOA sensor update rate, the positioning solution is more dependent on the INS-only information, which is susceptible to drift over time. Note here the significant improvement in performance obtained by using the loosely coupled and tightly coupled algorithms with augmented observations. This was not significant in the results of Table 4.1, where the update rate was higher. Therefore, it can be concluded that the loosely coupled and tightly coupled algorithms with augmented observations perform significantly better when the update rate from the OAOA sensor is lower. In the case where the update rate is lower, the INS has to perform independently for a longer amount of time, which in turn allows the positioning solution to drift. Therefore, lower update rates from the OAOA sensor make room for the algorithms with augmented observations to improve the positioning performance.

Another important observation from Tables 4.1 and 4.2 is that, in the case of augmented observations, the maximum error does not increase as much as the average error when the update rate is halved. When the update rate is halved, the average error for the loosely coupled system with augmented observations increases by 41%, while the maximum error increases by only 25%. For the tightly coupled system with augmented observations, the increase in the average error is 79%, while the maximum error increases by only 10%. Therefore, it can be concluded that the algorithms with augmented observations appear to bound the maximum error, which may be significant for some applications.

The amount of noise inserted to corrupt the simulated OAOA observation data can be varied to observe the effect of noise on the performance of the proposed algorithms.

Table 4.3: Performance comparison of the OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 10 Hz, AWGN with 0 mean and a variance of 3° is added to the simulated azimuthal angle values).

  IPS strategy                      Max error [cm]   Average error [cm]
  OAOA-only                         10.13            5.94
  Loosely coupled (LC)              4.74             3.39
  Tightly coupled (TC)              4.82             2.49
  LC with augmented observation     4.07             2.97
  TC with augmented observation     4.12             2.34
Additive white Gaussian noise with mean 0° and variance 2° was chosen to generate the results in Table 4.1 and Table 4.2. In this case, additive white Gaussian noise with mean 0° and variance 3° is chosen to increase the amount of noise in the simulated AOA values. Using only these noise-corrupted AOA measurements, a trajectory similar to that shown in Figure 4.4(a) is obtained. In this case the maximum error is 10.13 cm, and the average error is 5.94 cm. Similar to Table 4.1, Table 4.3 compares the OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations and tightly coupled with augmented observations algorithms, but with an increased amount of noise. As expected, the maximum errors and the average errors are increased compared to Table 4.1.

Note that in Table 4.3, the tightly coupled system has better accuracy than the loosely coupled system, and that the algorithms with augmented observations do provide a little improvement in this case compared to the algorithms without augmented observations. This result is slightly different from the result shown in Table 4.1, where the improvement is not significant for the algorithms with augmented observations. It can be concluded that with a higher amount of noise, the two algorithms with augmented observations perform better compared to the algorithms without augmented observations. Due to the increased amount of noise, there is room for the algorithms with augmented observations to improve the performance by providing a more accurate positioning solution. In the case of lower noise, the performance of the algorithms without augmented observations is comparatively better, and therefore, it may not be possible to improve performance with the inclusion of augmented observations.

Table 4.4: Performance comparison of the OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 5 Hz, AWGN with 0 mean and a variance of 3° is added to the simulated azimuthal angle values).

  IPS strategy                      Max error [cm]   Average error [cm]
  OAOA-only                         10.13            5.94
  Loosely coupled (LC)              8.76             6.92
  Tightly coupled (TC)              7.98             4.29
  LC with augmented observation     6.27             3.88
  TC with augmented observation     5.11             3.21

Table 4.4 is similar to Table 4.2 but with noise of variance 3° instead of 2°. In the case of Tables 4.2 and 4.4, the update rate from the OAOA sensor is 5 Hz. The results of Table 4.4 are similar to those of Table 4.2 but with an increased amount of error (as expected). The improved performance of the algorithms with augmented observations is also evident in this case. In addition, as observed when comparing Tables 4.1 and 4.2, the performance of the integration algorithms in Table 4.4 is worse than in Table 4.3 due to the lower OAOA update rate for the results in Table 4.4.
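The maximum and average errors reported in Tables 4.1 through 4.4 are straightforward statistics of the horizontal distance between the estimated and true tracks at each epoch. The following sketch shows one way such metrics could be computed; it assumes the estimated and true trajectories are sampled at the same epochs.

import numpy as np

def position_errors(estimated_xy, true_xy):
    # Horizontal (2-D) position error at each epoch, in the same units as the
    # trajectories (cm here).
    return np.linalg.norm(np.asarray(estimated_xy) - np.asarray(true_xy), axis=1)

def error_summary(estimated_xy, true_xy):
    # Maximum and average error, as reported in Tables 4.1 to 4.4.
    e = position_errors(estimated_xy, true_xy)
    return float(np.max(e)), float(np.mean(e))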
Table 4.5: Effect of magnetometer bias estimation on the performance of the loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update is 10 Hz, AWGN with 0 mean and a variance of 2° is added to the simulated azimuthal angle values).

  IPS strategy                      Avg. error without        Avg. error with
                                    bias estimation [cm]      bias estimation [cm]
  Loosely coupled (LC)              2.43                      2.34
  Tightly coupled (TC)              1.78                      1.67
  LC with augmented observation     2.32                      2.20
  TC with augmented observation     1.71                      1.59

As stated in Section 3.3, for an indoor positioning solution, magnetometer error estimation should result in a more accurate positioning solution. The performance improvement obtained by estimating the magnetometer bias is shown in Table 4.5. The results of Table 4.5 are obtained under the same experimental conditions as those of Table 4.1. The first column is the average position error for all four algorithms without estimating the magnetometer bias. The second column is the average position error for all proposed algorithms with the magnetometer bias estimation. Magnetometer bias estimation improves the average positioning error of the loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations algorithms by 3.7%, 6.2%, 5.2%, and 7.0%, respectively, compared to the case when magnetometer bias estimation is not performed. From Table 4.5 it can be concluded that adding magnetometer errors to the EKF state vector for estimating the magnetometer bias can improve the overall positioning performance.

Chapter 5

Conclusions and Future Work

This research is based on a novel indoor positioning solution proposed in 2011 [4], based on the AOA of light measured by a newly devised differential photosensor. However, an unobstructed LOS view between the optical source and photosensor is required given that the system is based on visible light. Anything blocking the LOS view between the source and sensor will destroy the integrity of the positioning system. Moreover, due to the independent measurement noise in the OAOA measurements, the resultant trajectory of the OAOA-based positioning solution is not smooth. These problems with the newly proposed indoor positioning system were the motivation behind this research. For outdoor positioning, an inertial navigation system can be used as a backup navigation system in the case of a GNSS outage. The problem at hand, being similar to GNSS outages in outdoor environments, led to the idea of an indoor positioning solution that combines INS and OAOA-based positioning to provide a more accurate and reliable indoor positioning solution.

The objective of this research was to develop algorithms to integrate OAOA-based indoor positioning with INS-based positioning to provide a more accurate and reliable positioning solution for indoor environments. Four different algorithms were developed for this purpose and compared in terms of their performance. In this chapter, all the key conclusions of this research are summarized. This is followed by recommendations for future work.

5.1 Developed Algorithms

For the four different algorithms developed to integrate OAOA-based positioning with INS-based positioning, an extended Kalman filter was used. The loosely coupled algorithm and tightly coupled algorithm were developed first. Then the observation vector for the EKF was augmented by adding an accelerometer bias estimate and a quaternion vector estimate. This led to two more algorithms, namely, the loosely coupled algorithm with augmented observations and the tightly coupled algorithm with augmented observations.
Error modeling of the magnetometer was introduced in the proposed algorithms because even a small magnetometer bias can cause a heading error that significantly degrades the performance of the integrated system in indoor environments. Inclusion of the magnetometer bias estimation improves the average positioning error of the four algorithms by 3.7% to 7%. Since mathematical singularities may occur when using Euler angles, quaternion algebra was used instead to update the rotation matrix. Therefore, the EKF was used here for estimating the quaternion parameters in addition to the estimation of the conventional parameters of an error state Kalman filter.

The four proposed algorithms all performed better than the OAOA-only indoor positioning solution. A maximum error of 7.34 cm and an average error of 4.21 cm for OAOA-only positioning were reduced to 3.15 cm and 2.34 cm, respectively, by using the loosely coupled integration algorithm. This is a reduction of approximately 57% in the maximum error and 44% in the average error. For the same experiment, the tightly coupled integration algorithm outperforms the loosely coupled algorithm by reducing the average error to 1.67 cm, an improvement of approximately 60% over the OAOA-only solution. From all experiments conducted, it can be concluded that the tightly coupled system improves the average error by at least 14% compared to the loosely coupled algorithm.

The algorithms with augmented observations perform significantly better than those without augmented observations when the OAOA update rate is low. An average position error of 4.89 cm is reduced to 3.11 cm by using the loosely coupled algorithm with augmented observations instead of the loosely coupled algorithm without observation augmentation. This is an improvement of approximately 36.4%. For the tightly coupled system, this improvement in the average error is approximately 32.3%. The improvement from using augmented observations is also evident where overall performance is worse due to an increased amount of noise added to the simulated OAOA measurements. An improvement of approximately 12.3% in the average error is found using the loosely coupled algorithm with augmented observations compared to the loosely coupled algorithm alone, for an OAOA update rate of 10 Hz and a noise variance of 3°. For the tightly coupled algorithm this improvement is approximately 6%. It can be concluded that if the update rate of the OAOA sensor is low, or if overall performance is worse due to an increased amount of noise in the simulated OAOA measurements, the algorithms with augmented observations perform significantly better.

5.2 Limitations and Future Work

The main limitation of this work is the lack of experimental data. At the time of conducting the research, the newly devised OAOA sensor was not able to collect data in a dynamic environment. Therefore, simulated data for the OAOA sensor was used for experimental purposes. This also limited the experimental trajectory to a path of known geometry. In this case, a turntable was used for the experiment and, hence, the trajectory was circular. Theoretical ('true') positions for the OAOA sensor can be easily calculated given the rotation speed of the turntable. By adding the necessary circuitry to the OAOA sensor to collect data in a dynamic environment, experiments could be conducted for arbitrary trajectories and arbitrary velocities.
This would test the robustness of the proposed algorithms.

For this work, all algorithms were implemented on a computer with significant processing capacity. However, for a practical indoor positioning solution working in real time, the microprocessor in the tracked device will most likely have limited computational capacity. Therefore, a study on the computational requirements of the proposed algorithms should be conducted. The computational complexity of the different algorithms may limit their use in an indoor positioning system with limited computational capability.

Bibliography

[1] The Concise Oxford Dictionary, 9th ed. Oxford, U.K.: Oxford University Press, 1995.

[2] M. Depsey, "Indoor positioning systems in healthcare," Radianse Inc., White Paper, 2003.

[3] Y. Gu, A. Lo, and I. Niemegeers, "A survey of indoor positioning systems for wireless personal networks," IEEE Communications Surveys & Tutorials, vol. 11, pp. 13-32, 2009.

[4] A. Arafa, X. Jin, and R. Klukas, "A differential photosensor for indoor optical wireless positioning," in Proc. 24th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2011), Portland, OR, 2011.

[5] ——, "Wireless indoor optical positioning with a differential photosensor," IEEE Photonics Technology Letters, vol. 24, pp. 1027-1029, Jun. 2012.

[6] X. Jin and J. F. Holzman, "Differential retro-detection for remote sensing applications," IEEE Sensors Journal, vol. 10, no. 12, pp. 1883-1887, Dec. 2010.

[7] C. A. Patterson, R. R. Muntz, and C. M. Pancake, "Challenges in location-aware computing," IEEE Pervasive Computing, vol. 2, no. 2, pp. 80-89, 2003.

[8] J. Hightower and G. Borriello, "Location sensing techniques," Technical Report UW CSE 2001-07-30, Department of Computer Science and Engineering, University of Washington, 2001.

[9] K. Kaemarungsi and P. Krishnamurthy, "Properties of indoor received signal strength for WLAN location fingerprinting," in Proc. 1st Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services (MobiQuitous 04), Boston, Mass., USA, Aug. 2004, pp. 14-23.

[10] P. D. Groves, Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems. Boston: Artech House, 2008.

[11] M. Vossiek, L. Wiebking, P. Gulden, J. Wiehardt, C. Hoffmann, and P. Heide, "Wireless local positioning," IEEE Microwave Magazine, vol. 4, pp. 77-86, Dec. 2003.

[12] J. C. Chen, Y. C. Wang, C. S. Maa, and J. T. Chen, "Network-side mobile position location using factor graphs," IEEE Trans. Wireless Communications, vol. 5, no. 10, pp. 46-52, Oct. 2006.

[13] R. Casas, D. Cuartielles, A. Marco, H. J. Gracia, and J. L. Falc, "Hidden issues in deploying an indoor location system," IEEE Pervasive Computing, vol. 6, no. 2, pp. 62-69, 2007.

[14] X. Fernando, S. Krishnan, H. Sun, and K. Kazemi-Moud, "Adaptive denoising at infrared wireless receivers," in Proc. SPIE, 2003.

[15] A. Ward, A. Jones, and A. Hopper, "A new location technique for the active office," IEEE Personal Communications, vol. 4, no. 5, pp. 42-47, Oct. 1997.

[16] N. Priyantha, A. Chakraborty, and H. Balakrishnan, "The cricket location-support system," in Proc. ACM Conference on Mobile Computing and Networking, 2000.

[17] N. B. Priyantha, The Cricket Indoor Location System, Ph.D. thesis, MIT, 2005.

[18] (2008) Sonitor system website. [Online]. Available: http://www.sonitor.com/

[19] Y. Fukuju, M. Minami, H. Morikawa, and T. Aoyama, "Dolphin: An autonomous indoor positioning system in ubiquitous computing environment," in Proc. IEEE Workshop on Software Technologies for Future Embedded Systems, May 2003.
[20] M. S. H. Piontek and J. Kaiser, "Improving the accuracy of ultrasound-based localisation systems," in Proc. International Workshop on Location- and Context-Awareness, 2005.

[21] L. M. Ni and Y. Liu, "LANDMARC: Indoor location sensing using active RFID," in Proc. IEEE International Conference on Pervasive Computing and Communications, 2003, pp. 407-416.

[22] (2008) WhereNet web site. [Online]. Available: http://edu.symbol.com/docentauthorware/WhereNet/intro/intro1.asp

[23] P. Bahl and V. Padmanabhan, "RADAR: An in-building RF-based user location and tracking system," in Proc. IEEE INFOCOM, vol. 2, Mar. 2000, pp. 775-784.

[24] (2008) Ekahau. [Online]. Available: http://www.ekahau.com/

[25] Y. Wang, X. Jia, and H. K. Lee, "An indoor wireless positioning system based on wireless local area network infrastructure," in Proc. 6th International Symposium on Satellite Navigation Technology Including Mobile Positioning and Location Services, 2003.

[26] T. Kitasuka, K. Hisazumi, T. Nakanishi, and A. Fukuda, "WiPS: Location and motion sensing technique of IEEE 802.11 devices," in Proc. 3rd International Conference on Information Technology and Applications (ICITA05), vol. 2, 2005, pp. 346-349.

[27] S. Kawakubo, A. Chansavang, S. Tanaka, T. Iwasaki, K. Sasaki, T. Hirota, H. Hosaka, and H. Ando, "Wireless network system for indoor human positioning," in Proc. 1st International Symposium on Wireless Pervasive Computing, 2006, pp. 1-6.

[28] A. Genco, "Three step Bluetooth positioning," in Lecture Notes in Computer Science, vol. 3479, 2005, pp. 52-62.

[29] J. Hallberg, M. Nilsson, and K. Synnes, "Positioning with Bluetooth," in Proc. 10th International Conference on Telecommunications, 2003.

[30] M. Rodriguez, J. P. Pece, and C. J. Escudero, "In-building location using Bluetooth," in Proc. IWWAN, 2005.

[31] J. C. F. Michel, M. Christmann, M. Fiegert, P. Gulden, and M. Vossiek, "Multisensor based indoor vehicle localization system for production and logistics," in Proc. IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Heidelberg, Germany, Sep. 2006, pp. 553-558.

[32] F. Raab, E. B. Blood, T. O. Steiner, and H. R. Jones, "Magnetic position and orientation tracking system," IEEE Trans. Aerospace and Electronic Systems, vol. AES-15, no. 5, pp. 709-718, Sep. 1979.

[33] J. Krumm, S. Harris, B. Meyers, B. Brumitt, M. Hale, and S. Shafer, "Multi-camera multi-person tracking for easy living," in Proc. 3rd IEEE International Workshop on Visual Surveillance, 2000.

[34] D. Focken and R. Stiefelhagen, "Towards vision-based 3-D people tracking in a smart room," in Proc. 4th IEEE International Conference on Multimodal Interfaces, Oct. 2002.

[35] A. Madhavapeddy, D. Scott, and R. Sharp, "Context-aware computing with sound," in Proc. 5th International Conference on Ubiquitous Computing, Oct. 2003.

[36] N. El-Sheimy and X. Niu, "The promise of MEMS to the navigation community," Inside GNSS, pp. 46-55, Mar.-Apr. 2007.

[37] M. Petovello, C. O'Driscoll, and G. Lachapelle, "Ultra-tight GPS/INS for carrier phase positioning in weak-signal environments," NATO RTO SET-104 Symposium on Military Capabilities Enabled by Advances in Navigation Sensors, Oct. 2007.

[38] D. Titterton and J. Weston, Strapdown Inertial Navigation Technology, 2nd ed. Stevenage, U.K.: IEE, 2004.
[39] M. Grewal, L. Weill, and A. Andrews, Global Positioning Systems, Inertial Navigation, and Integration. New York, U.S.: Wiley, 2001.

[40] Gebre-Egziabher, M. Petovello, and G. Lachapelle, "Weighting GNSS observations and variations of GNSS/INS integration," GNSS Solutions, Inside GNSS, pp. 26-33, 2007.

[41] M. Petovello, Real-Time Integration of a Tactical-Grade IMU and GPS for High-Accuracy Positioning and Navigation, Ph.D. dissertation, Department of Geomatics Engineering, University of Calgary, 2003.

[42] R. V. Wong, Development of a RLG strapdown inertial survey system, Ph.D. dissertation, Department of Geomatics Engineering, University of Calgary, 1988.

[43] F. Sadi, Jump Parameter Estimation with Low Cost MEMS Sensors and GPS for Action Sports Goggles, Master's thesis, School of Engineering, University of British Columbia, Okanagan, 2011.

[44] K. P. Schwarz and M. Wei, Inertial Navigation Systems with Geodetic Applications, Partial Lecture Notes for ENGO 623, Department of Geomatics Engineering, The University of Calgary.

[45] N. El-Sheimy, "The potential of partial IMUs for land vehicle navigation," Inside GNSS, Spring 2008.

[46] S. Godha, Performance evaluation of low cost MEMS-based IMU integrated with GPS for land vehicle navigation application, Master's thesis, Department of Geomatics Engineering, University of Calgary, Feb. 2006.

[47] K. P. Schwarz and M. Wei, INS/GPS Integration for Geodetic Applications. New York, NY, USA: Walter de Gruyter.

[48] K. J. Walchko and P. A. C. Mason, "Inertial navigation," in Conference on Recent Advances in Robotics, 2002.

[49] S. Nassar, Improving the inertial navigation system (INS) error model for INS and INS/DGPS applications, Ph.D. dissertation, Department of Geomatics Engineering, University of Calgary, 2003.

[50] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, NJ: Prentice Hall PTR, 1993.

[51] M. Grewal and A. Andrews, Kalman Filtering: Theory and Practice Using MATLAB. Wiley-Interscience, 2001.
