UBC Theses and Dissertations
Indoor Positioning through Integration of Optical Angles of Arrival with an Inertial Measurement Unit (2012)


Indoor Positioning through Integration of Optical Angles of Arrival with an Inertial Measurement Unit by Md. Shariful Islam B.Sc., Bangladesh University of Engineering and Technology, 2009 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The College of Graduate Studies (Electrical Engineering) THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan) September 2012 c© Md. Shariful Islam 2012 Abstract A novel indoor positioning solution is proposed in this work. An inertial navigation system (INS) is integrated with optical angle of arrival (OAOA) measurements to yield a smoother, more accurate, and robust positioning solution for indoor environments. An extended Kalman filter (EKF) is used to integrate the INS and OAOA measurements. Four different algorithms are proposed for the novel indoor positioning solution by INS/OAOA integration. An error state Kalman filter is used for implementing all four algorithms. In previous work, magnetometer error estimation was not included in the EKF state vector. In this work, magne- tometer error estimation is added to the EKF state vector, and this reduced the average position error by 3.7% to 7%. Quaternion algebra is used in- stead of Euler angles due to the possibility of mathematical singularities for certain Euler angles. Quaternion vector estimation is performed by adding the quaternion vector to the state vector of the EKF. Both loosely coupled and tightly coupled integration strategies are ex- plored for INS/OAOA integration. The tightly coupled strategy reduces the average positioning error by 60% compared to an OAOA-only system while the loosely coupled strategy reduces the average error by 44%. However, the performance improvement of the tightly coupled system comes with an in- ii Abstract creased computational cost due to nonlinearities in the measurement model. The loosely and tightly coupled algorithms are modified by augmenting the observation vector with a prior accelerometer bias estimate and a quater- nion vector estimate. This results in loosely and tightly coupled algorithms with augmented observations. The algorithms with augmented observations perform significantly better, especially in a case of the low update rate for the OAOA sensor. An average position error of 4.89 cm is reduced to 3.11 cm by using the loosely coupled algorithm with augmented observations instead of the loosely coupled algorithm without observation augmentation. This is an improvement of approximately 36%. For the tightly coupled system, this improvement is approximately 32%. However, where the update rate from the OAOA sensor is fast enough, no significant performance improvement is observed by using algorithms with augmented observations. iii Table of Contents Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix List of Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii Dedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1 Context . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . 1 1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.3 Problem Statement and Objective . . . . . . . . . . . . . . . 6 1.4 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.5 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 iv Table of Contents 2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.1 Indoor Positioning Systems: An Overview . . . . . . . . . . 10 2.1.1 What is an Indoor Positioning System? . . . . . . . . 11 2.1.2 Location Technologies, Techniques and Algorithms . 11 2.1.3 Network-Based vs. Non-Network-Based System . . . 13 2.1.4 Criteria of Evaluating Indoor Positioning System . . 13 2.2 Existing Indoor Positioning Solutions . . . . . . . . . . . . . 15 2.2.1 Infrared (IR) Positioning Systems . . . . . . . . . . . 15 2.2.2 Ultra-sound Positioning Systems . . . . . . . . . . . . 15 2.2.3 Radio Frequency (RF) Positioning Systems . . . . . . 16 2.2.4 Magnetic Positioning Systems . . . . . . . . . . . . . 19 2.2.5 Vision-based Positioning Systems . . . . . . . . . . . 19 2.2.6 Audible Sound Positioning Systems . . . . . . . . . . 20 2.2.7 Differential Photosensor-based Positioning System . . 20 2.3 Proposed Indoor Positioning Solution . . . . . . . . . . . . . 23 2.3.1 Inertial Navigation System . . . . . . . . . . . . . . . 23 2.3.2 GPS and INS Integration Strategy . . . . . . . . . . . 24 2.3.3 Proposed Solution . . . . . . . . . . . . . . . . . . . . 26 3 INS and OAOA Integration . . . . . . . . . . . . . . . . . . . 27 3.1 OAOA for Indoor Positioning . . . . . . . . . . . . . . . . . . 27 3.1.1 The Corner-Cube Photosensor . . . . . . . . . . . . . 28 3.1.2 Determining Position . . . . . . . . . . . . . . . . . . 31 3.2 Inertial Navigation System . . . . . . . . . . . . . . . . . . . 32 3.2.1 Mathematical Notations in Inertial Navigation . . . . 32 v Table of Contents 3.2.2 Coordinate Frames . . . . . . . . . . . . . . . . . . . 35 3.2.3 Rotation Matrix Computation . . . . . . . . . . . . . 38 3.2.4 Modeling Motion in the Local Navigation Frame . . . 43 3.2.5 Step by Step Computation of Navigation Parameters 51 3.3 Proposed Integration Algorithm . . . . . . . . . . . . . . . . 55 3.3.1 Loosely Coupled Integration . . . . . . . . . . . . . . 61 3.3.2 Tightly Coupled Integration . . . . . . . . . . . . . . 67 3.3.3 Integration Algorithms with Augmented Observation 72 4 Experimental Work and Results . . . . . . . . . . . . . . . . 80 4.1 Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 4.1.1 Data Collection . . . . . . . . . . . . . . . . . . . . . 80 4.1.2 Experimental Setup . . . . . . . . . . . . . . . . . . . 83 4.2 Performance Analysis and Discussion . . . . . . . . . . . . . 86 5 Conclusions and Future Work . . . . . . . . . . . . . . . . . . 95 5.1 Developed Algorithms . . . . . . . . . . . . . . . . . . . . . . 96 5.2 Limitations and Future Work . . . . . . . . . . . . . . . . . . 98 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 vi List of Tables 2.1 Summary and Comparison of Existing Indoor Positioning So- lutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 3.1 Quadrant information for roll from acceleration signs. . . . . 41 3.2 Quadrant information for pitch from acceleration signs. . . . 41 3.3 Quadrant information for yaw from magnetic field compo- nents signs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
43 4.1 Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, tightly coupled with augmented observations integration algorithms (OAOA update is 10 Hz, AWGN with 0 mean and a variance of 2◦ is added to the simulated azimuthal angle values). . . . 88 4.2 Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, tightly coupled with augmented observations integration algorithms (OAOA update is 5 Hz, AWGN with 0 mean and a variance of 2◦ is added to the simulated azimuthal angle values). . . . 89 vii List of Tables 4.3 Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, tightly coupled with augmented observations integration algorithms (OAOA update is 10 Hz, AWGN with 0 mean and a variance of 3◦ is added to the simulated azimuthal angle values). . . . 91 4.4 Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, tightly coupled with augmented observations integration algorithms (OAOA update is 5 Hz, AWGN with 0 mean and a variance of 3◦ is added to the simulated azimuthal angle values). . . . 92 4.5 Effect of magnetometer bias estimation on the performance of loosely coupled, tightly coupled, loosely coupled with aug- mented observations, tightly coupled with augmented obser- vations integration algorithms (OAOA update is 10 Hz, AWGN with 0 mean and a variance of 2◦ is added to the simulated azimuthal angle values). . . . . . . . . . . . . . . . . . . . . . 93 viii List of Figures 3.1 Photograph of the corner-cube photosensor. . . . . . . . . . . 28 3.2 Sketch of the corner-cube photosensor. . . . . . . . . . . . . . 29 3.3 Hypothetical system setup for an indoor positioning solution using the OAOA sensor. . . . . . . . . . . . . . . . . . . . . . 30 3.4 Coordinate frames. . . . . . . . . . . . . . . . . . . . . . . . . 36 3.5 Basic block diagram of INS mechanization equation for the local navigation frame. . . . . . . . . . . . . . . . . . . . . . . 50 3.6 Basic diagram of the INS mechanization block with feedback from the EKF and corrected input measurement. . . . . . . . 61 3.7 Block diagram of a loosely coupled OAOA and INS integrated positioning solution. . . . . . . . . . . . . . . . . . . . . . . . 62 3.8 Block diagram of a tightly coupled OAOA and INS integrated positioning solution. . . . . . . . . . . . . . . . . . . . . . . . 68 3.9 Observation augmenting block. . . . . . . . . . . . . . . . . . 73 3.10 Block diagram of a loosely coupled OAOA and INS integrated positioning solution with augmented observations. . . . . . . 76 3.11 Block diagram of a tightly coupled OAOA and INS integrated positioning solution with augmented observations. . . . . . . 78 ix List of Figures 4.1 A commercial athletic performance sensor used as the IMU unit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 4.2 A DC motor turntable for creating a controlled motion. . . . 82 4.3 Hypothetical experimental setup for data collection (coordi- nate units are cm, and figure is not drawn to scale). . . . . . 83 4.4 Comparison between true and estimated trajectories. . . . . . 
85 x List of Symbols λ Longitude Ωbib Skew symmetric matrix of ω b ib ωbib Rotation rate of frame ‘i’, wrt frame ‘b’, expressed in frame ‘b’ Ωbie Skew symmetric matrix of ω b ie ωbie Rotation rate of frame ‘i’, wrt frame ‘e’, expressed in frame ‘b’ Ωnie Skew symmetric matrix of ω n ie ωnie Rotation rate of frame ‘i’, wrt frame ‘e’, expressed in frame ‘n’ Ω(ω) Four dimensional skew symmetric matrix φ Azimuthal angle Rnb Rotation matrix to convert from body frame to navigation frame Rbn Rotation matrix to convert from navigation frame to body frame An skew symmetric matrix of an ba Bias vector in measured specific force vector bm Bias vector in the magnetometer measurement vector xi List of Symbols bω Bias vector in the angular rate measurement vector gn Earth’s gravity field HL[A] Measurement matrix for LC algorithm with augmented observation HL Measurement matrix for the loosely coupled algorithm HT [A] Measurement matrix for TC algorithm with augmented observation HT Measurement matrix for the tightly coupled algorithm Q Quaternion parameter Qk Quaternion parameter at kth time epoch rb Position vector expressed in b-frame rn Position vector expressed in n-frame rnOAOAinitial Initial position vector determined by OAOA block rnINSk+1 Position vector determined by INS block at (k + 1)th epoch rnOAOAk+1 Position vector determined by OAOA block at (k + 1)th epoch vn Velocity vector expressed in n-frame vnINSk+1 Velocity vector determined by INS block at (k + 1)th epoch zL[A] Observation vector for LC algorithm with augmented observation zL Observation vector for the loosely coupled algorithm zT [A] Observation vector for TC algorithm with augmented observation xii List of Symbols zT Observation vector for the tightly coupled algorithm θ Polar angle ϕ Latitude ω̃k+1 Raw gyroscope measurement at (k + 1)th epoch ãk+1 Raw accelerometer measurement at (k + 1)th epoch m̃k+1 Raw magnetometer measurement at (k + 1)th epoch abx Raw measurement from x-axis accelerometer aby Raw measurement from y-axis accelerometer abz Raw measurement from z-axis accelerometer bax Bias in the x-accelerometer bax Bias in the y-accelerometer bax Bias in the z-accelerometer bωx Bias in the x-gyroscope bωx Bias in the y-gyroscope bωx Bias in the z-gyroscope h Altitude mbx Raw measurement from x-axis magnetometer mby Raw measurement from y-axis magnetometer xiii List of Symbols mbz Raw measurement from z-axis magnetometer p Pitch r Roll tk kth time epoch tk+1 (k + 1)th time epoch y Yaw αx Reciprocal of the correlation time of x-accelerometer αy Reciprocal of the correlation time of y-accelerometer αz Reciprocal of the correlation time of z-accelerometer βx Reciprocal of the correlation time of x-gyroscope βy Reciprocal of the correlation time of y-gyroscope βz Reciprocal of the correlation time of z-gyroscope αD Diagonal matrix with −αx, −αy, −αz βD Diagonal matrix with −βx, −βy, −βz γD Diagonal matrix with −γx, −γy, −γz δωbib Vector of errors in the measured angular rate vector δab Vector of errors in the measured specific force vector δmb Vector of errors in the magnetometer measurement xiv List of Symbols δrn Position error state vector δvn Velocity error state vector δψn Attitude error state vector γx Reciprocal of the correlation time of x-magnetometer γy Reciprocal of the correlation time of y-magnetometer γz Reciprocal of the correlation time of z-magnetometer ωe Earth rotation rate xv Abbreviations GNSS Global Navigation Satellite System LOS Line of Sight NAVSTAR Navigation by Satellite Ranging and Timing GPS Global 
Positioning System IPS Indoor Positioning System AOA Angle of Arrival OAOA Optical Angle of Arrival INS Inertial Navigation System IMU Inertial Measurement Unit RFID Radio-Frequency IDentification WLAN Wireless Local Area Network WPAN Wireless Personal Area Network UWB Ultra-Wide Band TOA Time of Arrival RSS Received Signal Strength PN Personal Network MEMS Micro Electro-Mechanical Systems ECI Earth-Centered Inertial ECEF Earth-Centered Earth Fixed GMST Greenwich Mean Sidereal Time ENU East-Noth-Up NED North-East-Down EKF Extended Kalman Filter LKF Linear Kalman Filter xvi Acknowledgements I must start with showing my gratitude to my supervisor Dr. Richard Klukas, who has drawn an end to all my prejudice about graduate supervisor which was developed by horrifying stories about graduate life and tales of horrifying professors. Richard is such a nice person and mentor to work with that graduate life might not feel like a conventional graduate life at all. I am really lucky to have such a pleasant start of my graduate life with a supervisor like him. I would like to thank my committee members, Dr. Jonathan Holzman and Dr. Thomas Jhonson, for kindly being a part of my supervisory com- mittee. I have to thank my friend and colleague Ahmed Arafa, who is working on a novel indoor positioning solution. I got an opportunity to find a research topic by finding some problems in the new positioning solution. As a forget- ful person, I have to thank him again, as he lives in the same building with me. That is why, I can, sometimes, forget my laptop charger or backpack in the office and request him to bring those home, so that I can collect those at night. He is really a wonderful guy. I would like to thank my apartment-mates Nijam, Azam and Russell for keeping our apartment unit lively, which didn’t let me realize the huge xvii Acknowledgements distance between me and my family. Thanks to Skype, which makes my room feel like my room far away in Bangladesh, at least sometimes. Lastly, my family; I will not thank my family, my mom, my dad, my sisters and my brother. It’s them, who make it really difficult for me to stay cool even in Canada; because they are the reason, I am still warm. However, I will not thank them, because we don’t thank our family to be with us, we call them family because they are always with us. xviii Dedication To the truth I seek xix Chapter 1 Introduction 1.1 Context What is meant by the term ‘positioning’? Although there is no univer- sally agreed definition of positioning, in general it can be defined as “any of the several methods of determining the position of a body or device by geom- etry, astronomy, radio signals, etc. with respect to some known reference” [1]. In a broad sense, positioning can be divided into two fields, ‘outdoor po- sitioning’ and ‘indoor positioning’ depending on the different environment that the positioning system might experience. Indoor and outdoor environ- ments, being totally different, manifest different kinds of challenges for a positioning system, and thus make indoor positioning and outdoor position- ing different fields of research. GNSS (global navigation satellite system) is used almost universally for outdoor positioning, where line of sight (LOS) views between the GNSS receiver and at least four satellites are available. The most well known GNSS system is the Navigation by Satellite Ranging and Timing (NAVS- TAR) Global Positioning System (GPS), owned and operated by the US government and commonly known as GPS. 
The Russian GLONASS system 1 1.1. Context is also operational. These GNSS techniques are almost universally used for outdoor navigation and positioning without any significant competitor in the market. Due to signal blockage, GNSS does not work well, or at all, in indoor en- vironments. Moreover, indoor environments are different in many attributes compared to outdoor environments. Indoor environments are more complex. Various obstacles, such as, walls, equipment, and human beings, influence the propagation of electromagnetic waves, which leads to multi-path effects. The indoor environment changes more dynamically making indoor position- ing and navigation a challenging research field needing special attention. So the question is what exactly is an ‘indoor positioning system’ (IPS)? Dempsey [2] defines an IPS as a system that continuously and in real-time can determine the position of something or someone in a physical space such as in a hospital, a gymnasium, a school, etc. An IPS can provide different kinds of location information for location-based applications used by the user. Usually the absolute position information with respect to the map of a coverage area is offered by indoor positioning tracking systems and indoor navigation systems, because tracking and guidance services need the exact positions of the targets. Relative position information, which measure the motion of different parts of a target is another kind of output offered by the IPSs [3]. 2 1.2. Motivation 1.2 Motivation Accurate, reliable and real-time indoor positioning and position-based protocols and services are required in future generation communication net- works to significantly improve the performance of wireless networks through network planning, network adaptation, and load balancing. Position-based tracking systems can also be used in hospitals for patient tracking, and in warehouses for tracking valuable goods. Considering the importance of positioning information and its applica- tion in different fields, many positioning systems have been developed over the years. Infrared-based, ultrasound-based, radio frequency based, and vision-based positioning systems are among the most well-known position- ing solutions with potential. However, each of them has its own advantages and disadvantages and none dominates the indoor positioning market as GNSS does the outdoor positioning market. The industry and research com- munities are still looking for a robust, accurate, cheap and reliable indoor positioning solution. In 2011, a novel indoor positioning solution, based on angle of arrival (AOA) of light measured by a newly devised differential photosensor, was proposed in [4, 5] by Arafa, Jin, and Klukas. The positioning solution is based on a newly devised photosensor [6], which can determine the AOA of a light beam transmitted by a source and incident on the photosensor. Positioning information can be determined from the AOAs and the known frame of reference using simple trigonometry. The proposed system has several advantages over some of the existing positioning technologies. As 3 1.2. Motivation reported in [4], above a certain threshold, the AOA values does not depend on the intensity of the incident light. This makes the system independent of incident power and thus overcomes the problems associated with proximity methods. 
Moreover, the sensor itself is very inexpensive, and it is antici- pated that it will be possible to use existing indoor lighting systems as the necessary light sources for the positioning system. This will decrease the extra infrastructure required further lowering the cost of the system. However, an unobstructed LOS view between the optical source and pho- tosensor is required given that the system is based on visible light. Anything blocking the LOS view between the source and sensor will destroy the in- tegrity of the positioning system. In addition, the optical angle of arrival (OAOA) positioning solution at each epoch of time is independent of the solution at the previous and following epochs, and the noise associated with each epoch is uncorrelated. This results in a trajectory which is not smooth. Now the question is, can we find any positioning solution, that supports the OAOA based positioning system for a robust and more reliable positioning solution, even in the absence of LOS view? An inertial navigation system (INS) is a dead reckoning technique that uses a navigation processor and inertial sensors such as accelerometers and gyroscopes to continuously calculate the position, orientation, and velocity of a moving object without the need for external references. In navigation, dead reckoning is the process of calculating one’s current position by using a previously determined position, and advancing that position based upon known or estimated velocity over elapsed time. The most important parts of an inertial navigation system are the inertial measurement unit (IMU) and 4 1.2. Motivation a navigation processor, which integrates the IMU outputs to produce a po- sition, velocity and attitude solution. In basic terms, the velocity is updated by integrating the acceleration from the accelerometers, and the position is updated by integrating velocity. An inertial navigation system suffers from three basic problems. Firstly, being a dead reckoning system, the navigation solution must be initialized with some initial value. Secondly, being an it- erative integration process, error from the previous stages accumulates and becomes very large after a few stages. Thirdly, low grade inertial sensors are not accurate enough to work as a stand-alone navigation system because of their drift over time. However, short-term performance of an INS-based positioning system is very good and provides a smooth trajectory, unlike an OAOA-based positioning system. The most important point to be noted is that the inertial navigation system does not need any external reference. Now the question is, can we use an inertial navigation system to back up an OAOA-based positioning solution to enhance the performance of the overall positioning system? The above two questions motivate this research. INS and OAOA-based positioning systems are largely complementary, so by integrating them, the advantages of both technologies are combined to give a continuous, com- plete positioning solution with good long and short-term accuracy. In this research, different aspects of integration of an OAOA-based positioning sys- tem with an inertial navigation system are investigated. Different integra- tion algorithms and their effect on overall system performance is the main focus. In an integrated INS/OAOA indoor positioning system, the OAOA- based system should prevent the inertial solution from drifting, while the 5 1.3. 
Problem Statement and Objective INS should smooth the OAOA positioning solution and bridge any LOS view outages. 1.3 Problem Statement and Objective The following statements summarize the problem at hand. • The recently proposed OAOA-based positioning system [4, 5] has some major drawbacks. Firstly, because visible light is the communication link between the receiver and transmitter, the positioning system re- quires a continuous LOS view between the transmitter and receiver. This is not likely to always be maintained in indoor environments due to persons and objects blocking the direct path. Secondly, the maxi- mum output rate of the novel photosensor is yet to be determined. To date, the sensor has only been used for determining static positions. In the dynamic case, the sensors’s update rate may limit the smooth- ness of the trajectory. Thirdly, the positioning information provided by the sensor for a particular position does not depend on the previ- ous position, and due to measurement noise and system imperfections, the OAOA-based positioning system is not likely to provide a smooth trajectory. • Low grade inertial sensors cannot be used for stand-alone navigation because of their drift over time. Moreover, since inertial navigation is an iterative integration process, errors accumulate as they pass through the various stages. This deteriorates the final result after 6 1.3. Problem Statement and Objective just a few stages. Therefore, an independent positioning system is necessary to update the position information at every stage or at least after every few stages so the errors are not allowed to accumulate. However, the advantage of inertial sensors is that they have a fairly high output rate, which is typically 100 Hz. Moreover, being a dead reckoning system, position information at a particular point of time depends on the position information of the previous point of time, which results in a smooth trajectory. Given the above problem statements, the following objectives are defined for this particular research work. • Integration of OAOA-based positioning solution with the inertial nav- igation solution for providing a more reliable and accurate position information. • Using an extended Kalman filter for the integration purpose of OAOA based positioning and INS based positioning solution. • Developing different algorithms based on the extended Kalman filter, namely loosely coupled algorithm, tightly coupled algorithm, loosely coupled algorithm with augmented observation and tightly coupled algorithm with augmented observations. • Performance analysis and comparison of different algorithms in differ- ent conditions to determine their positive and negative features. 7 1.4. Thesis Outline 1.4 Thesis Outline Chapter 1 introduces the basics of indoor positioning and the motivation behind this research. The problem statements, objectives and contri- butions are clearly stated in this chapter. Chapter 2 summarizes all of the relevant current research and the state-of- the-art-techniques for indoor positioning. An overview on the existing positioning solutions and their performance, cost, and limitations is given. Chapter 3 gives the necessary theoretical background for OAOA/INS in- tegration performed. This chapter starts with a detailed overview of the OAOA-based indoor positioning solution, which is followed by the basics of inertial navigation. The proposed algorithms for integration are then described. 
Chapter 4 describes the experimental setup and is followed by experimen- tal and simulated results as well as discussion of the results. Chapter 5 presents the conclusion and make recommendations on the fu- ture work. 1.5 Contribution The contribution of this work can be summarized as follows. • A new indoor positioning solution is proposed which integrates OAOA- based indoor positioning solution with an inertial navigation system 8 1.5. Contribution to provide an accurate, reliable and inexpensive indoor positioning solution. • Four different algorithms are developed to integrate the OAOA-based positioning with INS positioning namely the loosely coupled algorithm, the tightly coupled algorithm, the loosely coupled algorithm with aug- mented observations, and the tightly coupled integration with aug- mented observations. These algorithms are described in Chapter 3. • Although some of these algorithms have previously been used to in- tegrate other types of positioning data and solutions (i.e. GPS with INS), their use to integrate OAOA positioning data with INS data is novel. 9 Chapter 2 Literature Review This chapter gives an overview of various technology options for Indoor Positioning including infrared (IR), ultrasound, radio-frequency identifica- tion (RFID), wireless local area network (WLAN), Bluetooth, sensor net- works, ultra-wideband (UWB), magnetic signals, vision analysis and audible sound. Based on these technologies different companies, researchers, and universities have developed different indoor positioning solutions [3]. Each solution has its own advantages and disadvantages. The designer of the in- door positioning system must compromise between performance and system complexity. In [7] it is reported that combining some positioning technolo- gies can improve the quality of positioning services. Therefore, it is necessary to investigate the characteristics and limitations of existing state-of-the-art technologies in order to develop an integrated solution that improves per- formance. 2.1 Indoor Positioning Systems: An Overview In this section the attributes of various indoor positioning solutions are described. Various evaluation criteria are discussed to compare the solution for the services demanded by users. 10 2.1. Indoor Positioning Systems: An Overview 2.1.1 What is an Indoor Positioning System? An indoor positioning system is a system that can provide the position of something or someone continuously and in real-time in a physical space such as in a building, a hospital, a gymnasium [2]. An indoor positioning system can provide different kinds of location information depending on the user requirement. Usually, indoor positioning tracking systems offer the absolute position information with respect to a map of the coverage area [3]. Relative position estimation and proximity detection are other types of output which also provided by some indoor positioning systems. 2.1.2 Location Technologies, Techniques and Algorithms The need for an indoor positioning solution has led researchers to in- troduce different locationing technologies such as IR, ultra-sound, RFID, WLAN, Bluetooth, UWB, magnetic technology, etc. Equipped with these lo- cationing technologies an indoor positioning solution uses several techniques to locate objects and provide location information. The basic techniques of indoor positioning are known as trilateration, triangulation, fingerprinting and vision analysis [8, 9]. 
Theoretically, it is also possible to use an inertial navigation system for indoor positioning. However, the cost of high grade inertial sensors is too high for most civilian applications. An aviation grade IMU costs approximately $100,000 and still drifts approximately 1.5 km in the first hour of operation [10]. Trilateration is the process of determining location by the measurement of distance, and then using the geometry of circles or spheres. In the case 11 2.1. Indoor Positioning Systems: An Overview of two dimensional trilateration, three or more reference points of known coordinates are selected and the distance from the point to be positioned to these reference points are determined using measurements such as received signal strength (RSS) or time of arrival (TOA) [11]. Knowing the distance between the point and each reference point, circles with radius equal to the distances and centered at the reference points can be drawn. The common intersection point of all circles is the position solution. The ability to resolve multipath with very large bandwidth signals (such as UWB) in the indoor environment makes TOA the most accurate positioning technique. However, it is complex to implement because it requires complex circuitry for precise clock synchronization. Although RSS measurements are generally easy to make, they may not be accurate due to change in the propagation environ- ment. Note that, RSS and TOA require at least three reference points. In contrast to the trilateration, triangulation is the process of estimating the location of a point by measuring angles to it reference points of known position. The measurements made are known as angle of arrival. In contrast to RSS and TOA, AOA requires only two reference points to determine position [3]. However, in triangulation the impact of an error in the AOA measurement on the position estimate will increase as the distance between the point to be positioned and the reference point increases [12]. Fingerprint-based positioning refers to algorithms that first collect the features of a scene and then estimate location by matching the collected features with a-priori information. The problem with fingerprint based po- sitioning is the need for a very large database for storing the a-priori infor- mation. Fingerprinting for location estimation is also complex and costly if 12 2.1. Indoor Positioning Systems: An Overview the number of users of the positioning system increases significantly [3]. Vision analysis for location determination uses a camera or camera array to collect images and then analyzes the images to determine the position of a point or object with respect to other objects of known position in the image. The advantage of using vision analysis is there is no need to carry any tracking device. However, any change in the indoor environment, even a change in the light level, can affect the positioning performance. 2.1.3 Network-Based vs. Non-Network-Based System Any indoor positioning solution can be categorized depending on var- ious criteria. A positioning solution can be categorized as network-based approach and non-network-based approach depending on whether the IPS uses any existing wireless network infrastructure to determine the position information. A non-network-based approach uses its own infrastructure ac- cording the designer and depending on the need of accuracy. On the other hand, network based approach reduces cost by using an existing wireless network which might come with a sacrificed accuracy level [3]. 
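Returning briefly to the trilateration technique described in Section 2.1.2, the short sketch below makes the "intersection of circles" idea concrete for the 2-D case: the circle equations are linearized by subtracting the first from the rest and the resulting system is solved in a least-squares sense. It is a minimal illustration only; the anchor coordinates, range values, and function name are hypothetical and are not taken from any system surveyed in this chapter.

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Estimate a 2-D position from >= 3 reference points of known
    coordinates and the measured distances (e.g. from RSS or TOA)."""
    anchors = np.asarray(anchors, dtype=float)   # shape (N, 2)
    ranges = np.asarray(ranges, dtype=float)     # shape (N,)
    x1, y1 = anchors[0]
    d1 = ranges[0]

    # Subtract circle equation 1 from circle equations 2..N to obtain
    # the linear system A x = b in the unknown position x = (x, y).
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d1**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x1**2 + y1**2))

    # The least-squares solution approximates the common intersection
    # point of all circles when the range measurements are noisy.
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Hypothetical example: three anchors (metres) and noisy ranges to a
# point near (2.0, 3.0).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
ranges = [3.61, 8.55, 5.39]
print(trilaterate_2d(anchors, ranges))
```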
2.1.4 Criteria of Evaluating Indoor Positioning System In this subsection some criteria given in [3] are discussed for evaluating an indoor positioning system. Security and Privacy: Any indoor positioning system uses some kind of personal network (PN) for communicating the position information. Therefore, network security and privacy is an important issue for an 13 2.1. Indoor Positioning Systems: An Overview indoor positioning system. Cost: The cost of a positioning solution for an indoor environment depends on the cost of the infrastructure components, the cost of the position- ing device for each user and the cost of the system installation and maintenance. Some indoor positioning solutions use existing infras- tructure such as WLAN and are therefore very cost effective. The initial cost of the positioning device carried by an individual user and its maintenance cost and lifetime should also be considered. Performance: For the performance evaluation of an indoor positioning so- lution the two main performance parameters are ‘accuracy’ and ‘preci- sion’, where accuracy relates to how close the estimated position is to the true position and may be quantified with average distance error, and precision relates to the difference between multiple estimates of the same position and can be quantified with standard deviation. The ‘delay’ of a measurement and the number of objects that an IPS can locate within a ceratin infrastructure, are also important performance evaluation issues. Usually, a trade-off exists between the performance and the price of an indoor positioning solution. User Preference: User preference is also an important issue when design- ing an indoor positioning solution. For the comfort of the users, the device should be wireless, small and light weight, have low power con- sumption, and be computationally powerful to offer accurate and real- time positioning information. 14 2.2. Existing Indoor Positioning Solutions 2.2 Existing Indoor Positioning Solutions There are several indoor positioning systems. In this section some exist- ing positioning solutions are introduced and their advantages and limitations are discussed. 2.2.1 Infrared (IR) Positioning Systems The IR-based positioning system is a non-network based positioning sys- tem which uses proximity technique to determine the positioning solution. The IR-based indoor positioning system is very accurate (several millime- ters) in position estimation. An IR-based positioning system needs line of sight communication between transmitter and receiver. Thus the coverage range per infrastructure is limited within a room [3]. For estimation of ab- solute position information, interference from strong light sources should be avoided [13]. Optical filtering and noise canceling signal processing algo- rithms are necessary to filter out the interference effect from florescent light and sunlight [14], which raises the cost of the system. The requirement of expensive system hardware such as a receiver camera array and connected via wires makes the positioning system very expensive [3]. 2.2.2 Ultra-sound Positioning Systems Position estimation using ultra-sound is described in [15–20]. Ultra- sound signals are used by bats to navigate and this inspires people to design a similar navigation systems [3]. Although ultra-sound positioning systems are inexpensive, the accuracy is worse than (several centimeters) the IR 15 2.2. Existing Indoor Positioning Solutions based positioning solutions (several millimeters). 
Moreover, ultra-sound based solutions suffer from reflected ultrasound signals from different ob- stacles [3]. 2.2.3 Radio Frequency (RF) Positioning Systems Radio waves can travel through walls and human bodies, which can be an advantage for indoor positioning. RF-based positioning systems can reuse existing RF technology systems such as WLAN [3]. In [3] some basic RF-based positioning solutions are described. RFID: Radio frequency identification (RFID) [21] is commonly used in complex indoor environments. The advantage of an RFID position- ing system is that it uses light and small tags (worn by the people or objects to be tracked). An RFID system can uniquely identify equip- ment and persons tracked in the system. However, this positioning technology needs numerous infrastructure components to be installed and maintained in the working area of the system [3]. Moreover, typi- cal RFID positioning solutions offer an error of 2 m to 3 m [22], which may not be suitable for some applications. WLAN: WLAN based indoor positioning technology is based on the fin- gerprinting technique. WLAN based IPS uses existing WLAN infras- tructure which makes it a cost effective solution [23–26]. The accuracy of location estimates based on the signal strength of WLAN signals is affected by various elements in indoor environments such as the move- ment and orientation of the human body, the overlapping of access 16 2.2. Existing Indoor Positioning Solutions points (AP), and nearby mobile devices, walls, doors, etc. WLAN technology is widely used and integrated in various wireless devices such as laptops, mobile phones, etc. Thus, WLAN-based posi- tioning systems can also reuse these wireless devices as tracked targets to locate persons. However, because of complex indoor environments [23–26], the performance of these positioning systems is not very good given accuracy of several meters. Moreover, using the stored informa- tion and fingerprinting technique for location estimation is complex and costly if the number of users of the positioning system increases significantly. Bluetooth: Bluetooth, which is also known as the IEEE 802.15.1 standard, is a kind of wireless personal area network (WPAN). Various Bluetooth clusters can be formed for Bluetooth-based indoor positioning [27– 30]. The main advantage of using Bluetooth for indoor positioning is the option of using devices that already have Bluetooth in them. Moreover, being cheap, Bluetooth-based indoor positioning offers a cost effective positioning solution for indoor environments. However, Bluetooth-based systems offer accuracies of only 2 m to 3 m and, more importantly, with a minimum delay of 20 s. Sensor Networks: Sensor-based positioning systems consist of a large num- ber of sensors fixed in predefined locations [31]. Sensors are devices exposed to a physical or environmental condition including sound, pressure, temperature, light, etc., and generate proportional outputs. A person or device can be located from the measurements taken by 17 2.2. Existing Indoor Positioning Solutions the sensors. Sensor-network-based positioning has great potential for indoor positioning due to emerging sensor network technology. How- ever, cheap and small sensors have limited processing capability and battery power compared to other mobile device which results in poorer accuracy [3]. Ultra-wideband: UWB-based positioning system uses both TOA and AOA techniques for position determination. 
The most exciting feature of an ultra-wideband signal is its extremely large bandwidth or extremely short pulse-width. This makes it among the best of all RF positioning solutions due to its ability to resolve multipath. The duration of an ultra-wideband pulse is less than 1 ns, which makes it possible to filter reflected signals from the original signal, and thus offer high accu- racy [3]. Compared with other RF-based positioning systems, UWB systems offer higher accuracies of about 15 cm in 3-D [3] and have coverage areas up to 400 m2. Therefore, UWB technology offers sev- eral advantages over other indoor positioning technologies, such as no line-of-sight (LOS) requirement, little multi-path distortion, less in- terference, high penetration ability, etc. However, UWB technology needs highly precise clock synchronization, which makes receiver de- sign complex and costly. A high precision UWB indoor positioning research package reported in [3] costs about $16,875. 18 2.2. Existing Indoor Positioning Solutions 2.2.4 Magnetic Positioning Systems A magnetic positioning system is a very high accuracy positioning system that does not suffer from the line-of-sight problem. This is an old and classic way of tracking and position measurement [32]. The magnetic sensors are small in size, robust and cheap. Moreover, it offers higher accuracy and enables multi-positioning tracking at the same time. However, the coverage was reported to be only 3 m in [32], which is not scalable for large indoor public application services. 2.2.5 Vision-based Positioning Systems A vision-based positioning system and identification of persons or devices in a complex indoor environment was introduced in [33, 34]. The positive side is that, in a vision-based positioning system, the tracked person or object does not need to carry any tracking device. Moreover, a low price camera can cover a large area. However, this system is not good considering the privacy of people. Moreover, in a dynamically changing environment, this system is not very reliable, since the position estimation is based on the saved vision information in a database. Interference sources such as weather changes and light conditions can also degrade the performance of a vision based positioning system. In addition, multiple persons moving in the same environment is a challenge for this kind of positioning system and a substantial amount of computation capability is required [3]. 19 2.2. Existing Indoor Positioning Solutions 2.2.6 Audible Sound Positioning Systems Currently, every wireless mobile device has the ability to emit audible sounds. The possibility of using audible sounds for indoor positioning was introduced in [35]. A 3-D indoor positioning solution named ‘Beep’ was de- signed using audible sound technology. It uses trilateration technique based on time of arrival (TOA) measured by a sensor in the ‘Beep’ system. An accuracy of around 0.4 m was found in an experimental environment of a 20 m × 9 m room in 90% of all the cases. The main advantage of using au- dible sound for indoor positioning is the opportunity to use existing devices such as cell phone or PDA to derive position information. However, such systems suffer interference from sound noises in many public indoor situa- tions. Moreover, transmitting audible sound for positioning is a continuous disturbance to the people in the indoor environment [3]. 
2.2.7 Differential Photosensor-based Positioning System Recently a differential photosensor was introduced for simultaneous op- tical retroreflection, detection, and control in bidirectional sensor links [6]. This is an architecture with three mutually orthogonal photodiodes in a corner-cube arrangement (used to retroreflect incident light and sample the incident optical signal). Recent work [4, 5] proposes an indoor positioning technique based on the AOA of incident light measured with the differential photosensors. The corner-cube arrangement generates photocurrents de- pending on the AOA of the incident light. From the AOA measurements for two or more light sources with known positions, the position of the corner- 20 2.2. Existing Indoor Positioning Solutions cube sensor can be determined. An accuracy on the range of 2-3 cm in a 2-D plane was reported in [4]. The proposed system has several advantages over some of the existing positioning technologies. As reported in [4], above a certain threshold, the AOA value does not depend on the intensity of the incident light. This makes the system impervious to light level fluctuation and thus overcomes the problems associated with proximity methods. Moreover, the sensor it- self is very inexpensive, and it is anticipated that it may be possible to use indoor lighting as the necessary light sources for the positioning system. However, it should be noted that if room lights are used, the lighting system must be based on LED lights, since other types of room lighting (i.e fluores- cent) cannot be modulated as required by the system. Use of existing room lighting will decrease the extra infrastructure required and further lower the cost of the system. Table 2.1 summarizes the performance, cost and coverage of the above mentioned indoor positioning technologies. 21 2 .2. E x istin g In d o or P osition in g S olu tion s Table 2.1: Summary and Comparison of Existing Indoor Positioning Solutions Technology Accuracy Coverage Cost Infrared High (several millimeters) Medium High Ultra-sound High (several centimeters) Medium Low/Medium RFID (RF) Medium (1-2 meters) Medium High WLAN (RF) Low (several meters) Medium Low Sensor network (RF) Medium (1.5-3.8 meters) High Low UWB (RF) High (centimeter level) Medium High Bluetooth (RF) Low (2-3 meters) Medium High Magnetic High (1 centimeter) Low (3 meters) High Vision Low Medium Medium Audible sound Medium (.4 meter) Low Low 22 2.3. Proposed Indoor Positioning Solution 2.3 Proposed Indoor Positioning Solution 2.3.1 Inertial Navigation System An inertial navigation system (INS) is a dead-reckoning system that consists of an inertial measurement unit and a navigation processor. Inertial navigation system technology used as an alternative to GPS, because GPS does not work well in all environments [36]. MEMS (microelectromechanical systems) inertial technology is seen as both a possible complement and a potential alternative to GPS [36]. An inertial measurement unit (IMU) usually combines three accelerom- eters and gyros to produce a 3-D measurement of specific forces and angular rates. An accelerometer measures specific forces and a gyroscope measures angular rates, both without an external reference [10]. An IMU is the sensor package for an inertial navigation system which can produce an independent 3-D navigation solution. Current inertial sensor development is mostly focused on MEMS tech- nology. 
MEMS sensors are small, light, and exhibit much greater shock tolerance than conventional mechanical designs, though with relatively poor performance. Although there is no universally accepted definition of a high-, medium-, and low-grade IMU, they can be broadly grouped into five perfor- mance categories: marine, aviation, intermediate, tactical, and automotive [10]. According to [10] marine and aviation grade IMU sensors are not suitable for civilian use because of their high price, which is approximately $1 million and $100,0000 respectively. An intermediate grade IMU, which is used in 23 2.3. Proposed Indoor Positioning Solution small aircrafts and helicopters, might cost up to $50,000. A tactical grade IMU can be used to provide stand-alone positioning solution for a few minutes [10]. A long term positioning solution can be obtained by integrating it with a positioning system, such as GPS. These systems typically cost between $5,000 to $20,000 [10], and are typically used in guided weapons and unmanned air vehicles. The lowest grade IMU, which is known as automotive grade IMU are actually very cheap, and can be bought as a single accelerometer and gy- roscope unit. Automotive grade three axes-MEMS accelerometers, when produced in large quantities cost only $2-10 per unit depending on their accuracy. According to [36], a single gyroscope can currently cost as low as $5. They are expected to become less expensive according to current market predictions [36]. However, INS based on automotive grade IMUs, cannot be used as a standalone navigation system due to the large drift in the accelerometer and gyroscope. Therefore, INS based on automotive grade MEMS systems must be integrated with some other positioning system to yield a complete navigation solution [10]. 2.3.2 GPS and INS Integration Strategy Inertial navigation systems have a number of advantages. They operate continuously and provide high-bandwidth output at 50 Hz or even 100 Hz; and they exhibit low short-term noise [10]. However, the accuracy of an inertial navigation solution degrades with time as the errors are integrated through the navigation equations and are propagated through stages. 24 2.3. Proposed Indoor Positioning Solution On the other hand GNSS or GPS can provide high long-term position accuracy with limited error. However, compared to INS, the update rate is low. Moreover, GNSS signals are also subject to obstruction and inter- ference, such that GNSS cannot be relied upon to provide a continuous navigation system [10]. For outdoor applications where real-time kinematic (RTK) positioning capability in degraded environments requires centimeter level accuracy, INS is integrated with GNSS. Various algorithms are used to integrate INS and GNSS [37]. INS/GNSS integration architecture depends on three factors: how corrections are applied to the inertial navigation solution, what type of GNSS measurements are used, and how the GNSS user equipment is aided by the INS and integration algorithm [10]. In the literature, terms such as loosely coupled, tightly coupled, ultratightly coupled, closely coupled, cascaded, and deep are used to define integration architecture [38, 39]. In the loosely coupled architecture, INS and GNSS receivers operate as independent navigation systems and position information is blended using an Extended Kalman filter (EKF). When low quality inertial sensors are used, a feedback path is necessary to prevent the sensor errors from becoming unbounded. 
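As a rough illustration of the loosely coupled idea described above, the sketch below blends an INS-computed position with an independently computed position fix (GNSS outdoors, or OAOA indoors) through a simple error-state Kalman update and feeds the correction back, so that low-grade sensor errors do not grow unbounded. This is a generic, minimal sketch under an assumed random-walk error model; the state vectors, system matrices, and update rates of the algorithms actually developed in this thesis are given in Chapter 3.

```python
import numpy as np

class LooselyCoupledBlender:
    """Toy error-state Kalman filter blending an INS position with an
    independent position fix.  2-D, position-error-only state; all noise
    parameters are illustrative placeholders, not values from the thesis."""

    def __init__(self, q=1e-4, r=1e-2):
        self.dx = np.zeros(2)        # estimated INS position error
        self.P = np.eye(2)           # error covariance
        self.Q = np.eye(2) * q       # process noise (INS drift growth)
        self.R = np.eye(2) * r       # measurement noise of the fix

    def predict(self):
        # Error dynamics approximated as a random walk between fixes.
        self.P = self.P + self.Q

    def update(self, r_ins, r_fix):
        # Observation: difference between the two independent solutions.
        z = r_ins - r_fix
        H = np.eye(2)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.dx = self.dx + K @ (z - H @ self.dx)
        self.P = (np.eye(2) - K @ H) @ self.P
        # Feedback: correct the INS solution and reset the error state.
        r_corrected = r_ins - self.dx
        self.dx = np.zeros(2)
        return r_corrected
```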
In general, the classical tightly coupled architecture provides a more accurate solution than the loosely coupled architecture. In this architecture, the INS and GNSS measurements are combined to generate a single blended navigation solution [40]. 25 2.3. Proposed Indoor Positioning Solution 2.3.3 Proposed Solution For the integration of INS with OAOA, loosely coupled and tightly cou- pled integration strategies similar to those used for GPS/INS integration will be used. From the above discussion it becomes apparent that, although there are a number of indoor positioning solutions, none of them has proven to be a complete solution in terms of accuracy, coverage, cost and robust- ness. Every system has its own advantages and disadvantages. The recently proposed indoor positioning solution based on OAOA is a potential indoor positioning solution that needs more detailed investigation. However, some drawbacks associated with the OAOA-based positioning solution, and the well known practice of GPS/INS integration for accurate outdoor position- ing, is the motivation behind integrating the OAOA-based positioning so- lution with the inertial navigation system for a more accurate and robust indoor positioning solution. The loosely coupled and tightly coupled archi- tectures used for GNSS/INS integration will be investigated with necessary modifications for INS/OAOA integration. 26 Chapter 3 INS and OAOA Integration In this chapter the basic theoretical background for OAOA-based indoor positioning system is investigated. The basics of the inertial navigation system is then introduced. The algorithms for loosely coupled and tightly coupled integration strategies of INS and OAOA-based indoor positioning and a modification of the algorithms using a error compensation block, is proposed at the end of the chapter. 3.1 OAOA for Indoor Positioning The recently proposed indoor positioning solution based on angles of ar- rival of light, uses a photosensor, consisting of three photodiodes arranged in a corner-cube. This photosensor can estimate the AOAs of light transmitted by an optical source. The position of the photosensor can be determined from the AOA and knowledge of the position of the optical sensor in a frame of reference. The proposed system has several advantages over some of the existing positioning technologies. As reported in [4], above a certain thresh- old, the AOA values does not depend on the intensity of the incident light. This makes the system independent of incident power and thus overcomes the problems associated with proximity methods. 27 3.1. OAOA for Indoor Positioning PD 2 PD 1 PD 3 Figure 3.1: Photograph of the corner-cube photosensor. 3.1.1 The Corner-Cube Photosensor The structure used for the proposed indoor positioning solution is a corner-cube as shown in Figure 3.1. Figure 3.2 shows a sketch of the sensor. It is comprised of three mutually orthogonal photodiodes (PD), with PD1 lying in the yz-plane, PD2 lying in the xz-plane, and PD3 lying in the xy-plane. The device was first introduced by Jin and Holzman in [6]. The basic parts of the structure are three reflective surfaces that are mutually orthogonal and oriented as an interior corner-cube orientation as shown in Figure 3.2. The structure can detect light arriving from an angle span between 0◦ and 90◦ in the azimuthal and polar directions. The two angles, azimuthal angle φ and polar angle θ, are defined according to a spherical coordinate system, as shown in Figure 3.2. 
When a beam of light falls on the sensor, photocurrents, proportional to the light intensity and dependent on the angle of arrival of the light, are generated in the three PDs. The values of φ and θ can be determined from the values of the currents generated in the three PDs.

[Figure 3.2: Sketch of the corner-cube photosensor, showing the incident light, the three PDs, and the azimuthal angle φ and polar angle θ.]

Photocurrents i1, i2 and i3 are generated in PD1, PD2 and PD3, respectively, when beams of light from optical source A or optical source B strike the sensor (see Figure 3.3). The values of these photocurrents are proportional to the power of the incident light and also depend on the AOA of the incident light. However, to determine AOA only normalized photocurrents are required. The relationships between these normalized photocurrents I1, I2 and I3 and the AOA angles φ and θ are given by the following equations [4]:

I1 = sin(θ) cos(φ),   (3.1)
I2 = sin(θ) sin(φ),   (3.2)
I3 = cos(θ).   (3.3)

[Figure 3.3: Hypothetical system setup for an indoor positioning solution using the OAOA sensor: two optical beacons, A (modulated at frequency fA) and B (modulated at frequency fB), at known coordinates, and the OAOA sensor, all in a reference frame with origin (0, 0, 0).]

Solving the above equations for φ and θ we get

φ = tan⁻¹(I2 / I1),   (3.4)
θ = tan⁻¹(√(I1² + I2²) / I3).   (3.5)

Equations 3.4 and 3.5 may be used to calculate φ and θ from the measured values of the photodiode currents I1, I2 and I3.

3.1.2 Determining Position

It is clear that, for determining the position of the sensor, at least two optical sources are necessary. To acquire and process photocurrents contributed by different optical sources, a multi-frequency LED configuration is used [4]. Figure 3.3 shows a hypothetical setup for determining the position of the sensor with respect to a known reference frame. The cube in Figure 3.3 represents our hypothetical reference frame, with origin at (0, 0, 0). Two optical sources are placed at two known positions in the reference frame, (x1, y1, z1) and (x2, y2, z2). The OAOA sensor is placed at an arbitrary position (x, y, z), which is to be determined.

As shown in Figure 3.3, the two optical sources at two different coordinates are driven by two function generators operating at different frequencies. The modulation scheme used in the experiments is on-off keying, implemented by using a function generator to drive each LED light source with a square wave of a different frequency. Optical source A is modulated at frequency fA and optical source B at frequency fB. This creates normalized photocurrents I1, I2 and I3 with components at frequency fA and frequency fB. A simple bandpass filter can be used to extract the different frequency components of the generated currents, and these can be used to determine φA, θA, φB and θB, where φA and θA are the azimuthal and polar angles for optical beam A, and φB and θB are the azimuthal and polar angles for optical beam B.

The AOA information of both optical sources can be used to determine the position of the sensor with respect to the reference frame by triangulation. More than two optical sources can be used to improve positional accuracy by providing redundant information.
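As a concrete numerical check of equations 3.1 through 3.5, the short sketch below recovers φ and θ from a set of normalized photocurrents. It is an illustration only, not the sensor's actual signal-processing chain; the function name and the example angles are hypothetical. (arctan2 is used instead of a plain arctangent for numerical robustness; over the sensor's 0°–90° span the two agree.)

```python
import numpy as np

def aoa_from_photocurrents(I1, I2, I3):
    """Recover the azimuthal angle phi and polar angle theta (degrees)
    from the normalized photocurrents, per equations (3.4) and (3.5)."""
    phi = np.degrees(np.arctan2(I2, I1))                  # eq. (3.4)
    theta = np.degrees(np.arctan2(np.hypot(I1, I2), I3))  # eq. (3.5)
    return phi, theta

# Hypothetical check: synthesize photocurrents for a known arrival
# direction using equations (3.1)-(3.3), then invert them.
phi_true, theta_true = 35.0, 60.0  # degrees
p, t = np.radians(phi_true), np.radians(theta_true)
I1, I2, I3 = np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)
print(aoa_from_photocurrents(I1, I2, I3))  # ~ (35.0, 60.0)
```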
3.2 Inertial Navigation System An inertial navigation system is a dead-reckoning navigation system, be- ing comprised of an inertial measurement unit and a navigation processor. An inertial measurement unit, which is comprised of a set of accelerometers, gyroscopes, and magnetometers is the main part of an inertial navigation system. This section introduces the necessary background for understand- ing inertial navigation systems by introducing inertial sensors, coordinate frames, rotation matrices, the initial alignment procedure, and the naviga- tion equations. 3.2.1 Mathematical Notations in Inertial Navigation Although the principle of inertial navigation is simple, the necessity of using multiple coordinate frames and transformation between them, causes inertial navigation computation to have particular notations and conven- tions. In an effort to clarify the following discussion, the majority of the notation and conventions used in this section can be summarized as follows. To indicate the coordinate frame in which the components of a vector are given, a superscript is attached to the vector and to the components of the vector. As an example, the position vector r described in the navigation 32 3.2. Inertial Navigation System frame (n−frame) can be described as rn =  rnx rny rnz  (3.6) where rnx , r n y , and r n z represent the components of the position vector along the x−,y−, and z−axes of the navigation frame. A rotation rate vector ωabc represents the rotation rate of frame ‘c’, rel- ative to frame ‘b’, expressed in frame ‘a’. Transformation of any vector from one computational frame to another is carried out through the rotation matrix between the two frames. A rotation matrix Rnb rotates any vector in the b−frame to a vector in the n−frame. As an example, transformation of a position vector represented in the b−frame to the n−frame can be done by the equation rn = Rnb r b. (3.7) The inverse of the transformation matrix describes a transformation in the opposite direction, which means Rbn is the inverse of R n b and can be used to transform a vector in the n−frame to a vector in the b−frame. Coordinate transformation of vectors applies to angular velocity vectors as well. Trans- formation of the vector ωbie from b−frame to n−frame can be carried out as ωnie = R n bω b ie. (3.8) Rotation can not only be expressed by an angular velocity vector, but 33 3.2. Inertial Navigation System also by a skew symmetric matrix containing the same vector components. The skew symmetric matrix corresponding to the angular velocity vector ωbib is defined by Ω b ib as Ωbib =  0 −ωz ωy ωz 0 −ωx −ωy ωx 0  (3.9) where, ωbib =  ωx ωy ωz . The skew symmetric matrix is a very useful matrix in navigation compu- tation. If we have two vectors a and b, their cross product a×b can be expressed in terms of their skew symmetric matrices (A or B) as a×b = Ab = −Ba. (3.10) The rotation between two coordinate frames can be expressed as the sum of two rotations as in ωbin = ω b ie + ω b en. (3.11) The point to be noted here is that the inner indices will be canceled out and the superscripts have to be the same. This means that only vectors in the same reference frame can be added or subtracted. 34 3.2. Inertial Navigation System 3.2.2 Coordinate Frames Earth is considered an inertial frame of reference in the case of simple mechanics problems, where the rotation of the Earth does not have a signif- icant impact. 
However, in the case of navigation, the rotation of the Earth has a significant impact on the navigation computation since inertial sen- sors measure their motion with respect to a real inertial frame. However, the user would like to know their position with respect to the Earth. Navi- gation computation, thus deals with multiple coordinate frames which need to be defined clearly. All coordinate frames considered in this work form orthogonal right-handed basis sets. The remainder of this section defines the following coordinate frames used in navigation problems: Earth-centered inertial (ECI) frame, Earth- centered Earth-fixed (ECEF) frame, local navigation frame, and body frame [10, 41]. All coordinate frames, with the exception of the INS body frame described below, are depicted in Figure 3.4. Earth-Centered Inertial Frame • Symbol: denoted by the symbol ‘i’. • Origin: at the mass center of the Earth. • Xi-axis: pointing towards the mean equinoctial colure in the equato- rial plane. • Zi-axis: parallel to the Earth’s instantaneous spin axis. 35 3.2. Inertial Navigation System Equator Equinox Up East N orth Greenwich meridian Ze, Zi Xn ZnY n λ Xe Xi Y e Y i Earth rotation axis 360 - GMST ϕ Figure 3.4: Coordinate frames. • Y i-axis: orthogonal to the X and Z axes to complete a right-handed frame. Earth-Centered Earth-Fixed Frame • Symbol: denoted by the symbol ‘e’. • Origin: at the mass center of the Earth. 36 3.2. Inertial Navigation System • Xe-axis: pointing toward the Greenwich meridian, in the equatorial plane. • Y e-axis: 90◦ east of the Greenwich meridian, in the equatorial plane. • Ze-axis: along the Earth’s polar axis (axis of rotation of the reference ellipsoid). • Coordinates in the ECEF frame can be transformed to the ECI frame by a negative rotation about the Ze axis of the frame by the amount of the Greenwich Mean Sidereal Time (GMST) [42]. Local Navigation Frame • Symbol: denoted by the symbol ‘n’. • Origin: origin of the navigation system. • Xn-axis: ellipsoid East (E-axis). • Y n-axis: ellipsoid North (N-axis). • Zn-axis: upward along the ellipsoidal normal (UP-axis). The INS Body Frame • Symbol: denoted by the symbol ‘b’. • Origin: coincident with that of the local navigation frame. • Xb-axis: right direction. • Y b-axis: forward direction. 37 3.2. Inertial Navigation System • Zb-axis: upward direction. 3.2.3 Rotation Matrix Computation The specific force measurement made by triaxial accelerometers in an IMU unit are expressed in the body frame. This raw measurement contains the components of the ‘gravity offset’, which needs to be subtracted from the raw measurement to find the actual velocity increments with respect to the navigation frame. However, the body frame is not aligned with the naviga- tion frame. Therefore, to compensate for the ‘gravity offset’ the orientation of the body frame with respect to the navigation frame is necessary. The most convenient way to represent the attitude of the body frame with respect to the navigation frame is through a set of three Euler angles commonly known as roll (r), pitch (p), and yaw (y), which represent rotation along the x−, y− and z−axes, in the body frame, respectively [10]. Roll, pitch and yaw define the rotation of the body frame with respect to the navigation frame. 
Therefore, from the roll, pitch and yaw, the rotation matrix, Rnb , for converting a vector in the body frame to a vector in the navigation frame, can be calculated by Rnb = RZ(y)RY (p)RX(r) (3.12) where RX(r), RY (p), and RZ(y) are rotation matrices along the x−, y−,and 38 3.2. Inertial Navigation System z−axes, respectively, and can be written as [43] RX(r) =  1 0 0 0 cos(r) sin(r) 0 −sin(r) cos(r) , (3.13) RY (p) =  cos(p) 0 −sin(p) 0 1 0 sin(p) 0 cos(p) , (3.14) RZ(y) =  cos(y) sin(y) 0 −sin(y) cos(y) 0 0 0 1 . (3.15) It should be noted that the sequence of the multiplication is very important. Multiplication of three rotation matrices resulting from a single set of roll, pitch and yaw in different sequences results in totally different rotation ma- trices. In this research, the rotation is applied along the z−, y− and x−axes sequentially. As discussed in the previous paragraph, knowing the roll, pitch and yaw, allows us to calculate the rotation matrix Rnb . In the opposite way, roll, pitch and yaw can be calculated from a known rotation matrix, Rnb as r = tan−1 R32√ R12 2 +R22 2 (3.16) p = tan−1 R31 R33 (3.17) y = tan−1 R21 R11 (3.18) 39 3.2. Inertial Navigation System where Rnb =  R11 R12 R13 R21 R22 R23 R31 R32 R33 . (3.19) Initial Alignment Computation One of the underlying assumptions of the INS mechanization equations is that the initial condition of the system is known. While position and velocity are usually easily input by the user, the initial orientation of the system is not so readily available. From the discussion above, it is clear that the rotation matrix Rnb can be calculated if the Euler angles r, p, and y are known, and vice versa. However, at the beginning of inertial navigation, neither is known and for this reason the INS must usually execute an initial alignment procedure to determine the initial value of the Euler angles. Where the IMU is stationary, self-alignment can be used to initialize roll and pitch with the help of accelerometer data, whereas heading or yaw is often initialized using a magnetic compass or magnetometer. The principle behind roll and pitch determination in the stationary con- dition is that when the INS is stationary (or traveling at a constant velocity), the only specific force sensed by the accelerometer is the reaction to gravity. In such a condition, the triaxial accelerometer measurements represent the components of the vertically upward antigravity force. Therefore, from the normalized accelerometer readings it is possible to determine the roll and 40 3.2. Inertial Navigation System pitch according to r = tan−1 aby√ abx 2 + abz 2 (3.20) p = −tan−1 a b x√ aby 2 + abz 2 (3.21) where, abx, a b y, and a b z are the raw measurements from the triaxial accelerom- eter along the x−, y−, and z−axes, respectively. For initial alignment, the IMU is kept static for around 30 seconds, and in that case, time averaged values of abx, a b y, and a b z are used instead of instantaneous values. Table 3.1: Quadrant information for roll from acceleration signs. Roll quadrant Sign of aby Sign of a b z 1 + + 2 + - 3 - - 4 - + Table 3.2: Quadrant information for pitch from acceleration signs. Pitch quadrant Sign of abx Sign of a b z 1 - + 2 - - 3 + - 4 + + Equation 3.20 and Equation 3.21 do not uniquely determine the quadrant in which the roll and pitch fall. In [43], lookup tables were constructed for North-East-Down (NED) axes conventions for the navigation frame. Table 41 3.2. 
Inertial Navigation System 3.1 and Table 3.2 are the lookup tables required to resolve the quadrant for roll and pitch using the sign of the accelerometer measurement for the East- North-Up (ENU) convention (which is used in this work) of the navigation frame. A magnetometer is not an inertial sensor. However, it is an important part of an inertial navigation system using low grade inertial sensors. Al- though it possible to use gyroscope data to determine the heading of the IMU by gyrocompassing, the performance of the gyrocompassing method depends on the quality of the gyroscope and only marine and aviation grade gyroscope are accurate enough for gyrocompassing [10]. For this reason, magnetometers can be used for determining the heading of the IMU for a low cost INS. The triaxial magnetometer measures the magnetic field vector of the Earth’s magnetic field in the body frame, represented by the vector mb = [ mbx m b y m b z ]T . From this vector it is possible to calculate the yaw of the body frame with respect to the navigation frame, if the roll and pitch is already known. If the roll, r, and the pitch, p, are already calculated from Equation 3.20 and Equation 3.21, the magnetic components experienced by the body frame along the x− and y−axes can be computed as [43] Xh = m b xcos(p) +m b ysin(r)sin(p)−mbzcos(r)sin(p) (3.22) Yh = m b ycos(r) +m b zsin(r). (3.23) From the values of Xh and Yh, the yaw angle can be calculated as y = tan−1( Yh Xh ). (3.24) 42 3.2. Inertial Navigation System Being a inverse trigonometric function, like roll and pitch, it is also impossible to uniquely determine the quadrant of the yaw angle just from Equation 3.24. Similar to the roll and pitch angles, the quadrant information can be resolved from the signs of Xh and Yh [43]. The necessary lookup table is given in Table 3.3. Table 3.3: Quadrant information for yaw from magnetic field components signs. For yaw Xh < 0 pi − tan−1 YhXh Xh > 0, Yh < 0 −tan−1 YhXh Xh > 0, Yh > 0 2pi − tan−1 YhXh Xh = 0, Yh < 0 pi 2 Xh = 0, Yh > 0 3pi 2 3.2.4 Modeling Motion in the Local Navigation Frame Although a navigation solution can be computed in any coordinate frame of interest, in most cases modeling motion in the local navigation frame is preferable. The local navigation frame formulation has the advantage that its axes are aligned to local East, North and up directions. In the local navigation implementation of the inertial navigation equations, the ECEF frame is used as the reference frame. This form of navigation equation has the advantage of providing a navigation solution in a form readily suited for the user [10]. In general, the position of a moving body described in n−frame is ex- pressed in terms of curvilinear coordinates (latitude, longitude and altitude). 43 3.2. Inertial Navigation System However, in the case of indoor positioning, cartesian coordinates are more preferable as the final positioning output. Curvilinear position information, which is given by rn =  ϕ λ h  (3.25) where ϕ is the latitude angle, λ is the longitude angle and h is the altitude, is still needed for intermediate steps of the inertial navigation computation. Position in the cartesian format can be represented by three components along the East direction (re), North direction (rn) and vertical direction (ru) as rncart =  re rn ru . (3.26) Velocity is expressed by three components along the East direction (ve), North direction (vn) and vertical direction (vu) vn =  ve vn vu . 
(3.27) The relationship between the velocity components ve, vn, vu and the curvi- linear positions are given by ϕ̇ = vn M + h (3.28) λ̇ = ve (N + h)cosϕ (3.29) ḣ = vu (3.30) 44 3.2. Inertial Navigation System where M = (ab) 2 [a2cos2ϕ+b2sin2ϕ]3/2 , N = a 2 [a2cos2ϕ+b2sin2ϕ]1/2 , a = 6378137.0 m is the semi-major axis of the Earth, and b = 6356752.3 m is the semi-minor axis of the Earth. Similar relationships between the cartesian position components re, rn, ru and the velocity components are straight forward and can be given as ṙe = ve (3.31) ṙn = vn (3.32) ṙu = vu. (3.33) In the above equations ṙe, ṙn, and ṙu represents the derivatives of re, rn, and ru, respectively. Position Mechanization Equation From the previous discussion the rate of change of the curvilinear posi- tion components and the cartesian position components are related to the velocity components by ṙn =  ϕ̇ λ̇ ḣ  =  0 1M+h 0 1 (N+h)cosϕ 0 0 0 0 1   ve vn vu  = D−1vn (3.34) ṙncart = v n. (3.35) 45 3.2. Inertial Navigation System Velocity Mechanization Equation Triaxial accelerometers measure the acceleration of a moving body along three mutually orthogonal directions in the b−frame. The accelerometer measurements are specific force measurements and resolved in the body frame as ab =  ax ay az . (3.36) This measurement is transformed to the local navigation frame using the rotation matrix Rnb according to an =  ae an au  = Rnb ab = Rnb  ax ay az . (3.37) However, the accelerometer components expressed in the local navigation frame cannot directly provide local navigation frame velocity components of the moving body. To do so requires three factors to be taken into account. The first is the Earth rotation rate (ωe = 15 degrees/hour) with respect to the inertial frame, which is resolved in the navigation frame, and can be represented as ωnie =  0 ωecosϕ ωesinϕ . (3.38) The second factor is the change of orientation of the navigation frame with respect to the Earth. This change of orientation is due to the defini- tion of the local North, local East and local vertical directions. The North 46 3.2. Inertial Navigation System direction is always tangential to the meridian, while the vertical direction is normal to the Earth’s surface at each instant. The angular velocity of the navigation frame with respect to the Earth, as observed from the navigation frame, can be represented as ωnen =  −ϕ̇ λ̇cosϕ λ̇sinϕ  =  −vn M+h ve N+h vetanϕ N+h . (3.39) The third factor is the Earth’s gravity field gn =  0 0 −g  (3.40) where g is obtained from the well known gravity model given as [42] g = 9.7802703 + .0517993sin2ϕ− 19.69405h a . (3.41) If we take into consideration all these factors, the time rate of change of the velocity components of a moving body in the local navigation frame can be expressed as v̇n = Rnb a b + (2Ωnie + Ω n en)v n + gn (3.42) where Ωnie and Ω n en are the skew-symmetric matrices corresponding to ω n ie 47 3.2. Inertial Navigation System and ωnen, respectively, and are expressed as Ωnie =  0 −ωesinϕ ωecosϕ ωesinϕ 0 0 −ωecosϕ 0 0  (3.43) Ωnen =  0 −v etanϕ N+h ve N+h vetanϕ N+h 0 vn M+h −ve N+h −vn M+h 0 . (3.44) Attitude Mechanization Equation The attitude angles of a moving body are determined by solving the time derivative equation of the transformation matrix. 
Since the mechanization is implemented in the local navigation frame, the following time-derivative transformation matrix should be considered [44] Ṙnb = R n bΩ b nb. (3.45) The angular velocity skew-symmetric matrix Ωbnb can be expressed as Ωbnb = Ω b ni + Ω b ib (3.46) = −Ωbin + Ωbib (3.47) = Ωbib −Ωbin. (3.48) Thus from Equation 3.45 and Equation 3.48, it is found that the rotation matrix Rnb can be obtained by solving the following differential equation Ṙnb = R n b (Ω b ib −Ωbin) (3.49) 48 3.2. Inertial Navigation System where Ωbib is the skew-symmetric matrix of the measurement of angular velocity vector (ωbib) provided by the gyroscopes. The angular velocity ωbin consists of two parts. The first part is ω b ie, which accounts for the Earth rotation rate, and the second part is ωben, which accounts for the orientation change of the navigation frame. Therefore, ωbin can be expressed as ωbin = ω b ie + ω b en (3.50) = Rbnω n ie + R b nω n en (3.51) = Rbn  0 ωecosϕ ωesinϕ + Rbn  −vn M+h ve N+h vetanϕ N+h  (3.52) = Rbn  −vn M+h ve N+h + ω ecosϕ vetanϕ N+h + ω esinϕ . (3.53) Using all the equations above, we are now in a position to express the mechanization equation for all the necessary navigation states. The inputs to the navigation frame mechanization equation are the gyroscope and the accelerometer measurements. The outputs are the curvilinear coordinates, three velocity components and three attitude components. The final equa- tion can be written as  ṙn v̇n Ṙnb  =  D−1vn Rnb a b + (2Ωnie + Ω n en)v n + gn Rnb ( Ω b ib −Ωbin) . (3.54) The square boxes in the above equation represents the input raw measure- 49 3.2. Inertial Navigation System Gravity computation INS Mechanization Block Input Block Output Block 2Ωnie Ω n en Ωbib Rnb ωbin ω n in ωnen ωnie vnv̇n ab Rbn Rnb + Rnb˙ vn + + - + - + + ∫ ∫ ∫ Initial alignment Rnb (updated) mb rn Figure 3.5: Basic block diagram of INS mechanization equation for the local navigation frame. ments found from the accelerometers and the gyroscopes. If the outputs are chosen to be cartesian coordinates, three velocity components and three attitude components, the final equation can be written as  ˙rncart v̇n Ṙnb  =  vn Rnb a b + (2Ωnie + Ω n en)v n + gn Rnb ( Ω b ib −Ωbin) . (3.55) Solving these 1st order differential equations provides the navigation param- eters in the local navigation frame. Figure 3.5, a visual form of Equation 3.54, is a block diagram which shows the basic parts of an inertial navigation system. The vertical box on the left of the figure represents the input block which consists of square boxes written ab, Ωbib and m b representing measurements from the accelerometers, gyroscopes and magnetometers. The inputs are then passed to the main algorithm which will be described shortly in the following section. 50 3.2. Inertial Navigation System 3.2.5 Step by Step Computation of Navigation Parameters Based on the above discussion of inertial navigation systems, a step by step computation of the navigation parameters is presented in this section. Step 0: Before Starting Determining the initial alignment of the IMU must be done before the INS algorithm begins. By using Equation 3.20, Equation 3.21, and Equation 3.24 we can determine the roll, pith and yaw of the IMU, respectively, before it goes into motion. Using the value of the initial roll, pitch and yaw in Equation 3.12, the initial value of the transformation matrix from b−frame to n−frame, Rnb (t0) is determined. 
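A minimal sketch of this Step 0 computation is given below, assuming the ENU conventions used in this work. The names are illustrative; the sketch follows Equations 3.20 to 3.24 and Equation 3.12 directly, and the explicit quadrant lookup of Tables 3.1 to 3.3 is not reproduced (the two-argument arctangent is used for the yaw quadrant instead).

```python
import numpy as np

def rot_x(r):   # Equation 3.13
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(r), np.sin(r)],
                     [0.0, -np.sin(r), np.cos(r)]])

def rot_y(p):   # Equation 3.14
    return np.array([[np.cos(p), 0.0, -np.sin(p)],
                     [0.0, 1.0, 0.0],
                     [np.sin(p), 0.0, np.cos(p)]])

def rot_z(y):   # Equation 3.15
    return np.array([[np.cos(y), np.sin(y), 0.0],
                     [-np.sin(y), np.cos(y), 0.0],
                     [0.0, 0.0, 1.0]])

def initial_alignment(acc, mag):
    # Initial roll, pitch, yaw and R_b^n from time-averaged static
    # accelerometer and magnetometer measurements in the body frame.
    ax, ay, az = np.asarray(acc) / np.linalg.norm(acc)
    mx, my, mz = np.asarray(mag) / np.linalg.norm(mag)
    roll = np.arctan(ay / np.hypot(ax, az))      # Equation 3.20
    pitch = -np.arctan(ax / np.hypot(ay, az))    # Equation 3.21
    xh = (mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch)
          - mz * np.cos(roll) * np.sin(pitch))   # Equation 3.22
    yh = my * np.cos(roll) + mz * np.sin(roll)   # Equation 3.23
    yaw = np.arctan2(yh, xh)                     # Equation 3.24, quadrant via arctan2
    r_nb = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)   # Equation 3.12
    return roll, pitch, yaw, r_nb
```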
For initial alignment, the IMU should ideally be stationary for a certain amount of time. Typically, a time span of 30 s is sufficient. The initial value of all the components of the velocity vector vn(t0) can be considered zero. Then ωnen(t0) can be determined from Equation 3.39. The value of ω n ie(t0) is a function of a given initial latitude and Earth’s rotation rate as shown in Equation 3.38. Step 1: Velocity Calculation The measurements from the triaxial accelerometer, ab(tk), at time in- stant tk, are multiplied by the rotation matrix R n b (tk). Then, according to Equation 3.42, v̇n(tk) is calculated and is then integrated to get the velocity 51 3.2. Inertial Navigation System components as ∆vn ∆t = Rnb a b − (2Ωnie + Ωnen)vn + gn (3.56) ⇒∆vn = (Rnb ab − (2Ωnie + Ωnen)vn + gn)∆t (3.57) vn(tk+1) = v n(tk) + 1 2 (∆vn(tk) + ∆v n(tk+1)). (3.58) Step 2: Position Calculation By integrating the velocity vector, the position vector can be determined. Both the curvilinear and cartesian coordinates for the position vector can be determined using rn(tk+1) = r n(tk) + 1 2 (vn(tk) + v n(tk+1))∆t. (3.59) This can be extended for the components of the position vector along East, North and upward directions as re(tk+1) = r e(tk) + 1 2 (ve(tk+1) + v e(tk))∆t (3.60) rn(tk+1) = r n(tk) + 1 2 (vn(tk+1) + v n(tk))∆t (3.61) h(tk+1) = h(tk) + 1 2 (vu(tk+1) + v u(tk))∆t. (3.62) 52 3.2. Inertial Navigation System Curvilinear position information can also be determined according to ϕ(tk+1) = ϕ(tk) + 1 2 (vn(tk+1) + v n(tk)) M + h ∆t (3.63) λ(tk+1) = λ(tk) + 1 2 (ve(tk+1) + v e(tk)) (N + h)cosϕ ∆t (3.64) h(tk+1) = h(tk) + 1 2 (vu(tk+1) + v u(tk))∆t. (3.65) Step 3: Rotation Matrix Update Updating the rotation matrix is a very important part of an INS algo- rithm and can be done by solving the first order differential equation, Equa- tion 3.49. However, to model the rotation matrix Rnb according to Equation 3.49, six differential equations are needed. For this reason, quaternion pa- rameters are introduced to describe the rotation of the b−frame with respect to the n−frame. Solving the quaternion parameters requires only four dif- ferential equations to be solved. Moreover, the quaternion solution avoids singularity problems that might exist with other solution methods. The time rate of change of the quaternion is described by the first-order differential equation [42] Q̇ = 1 2 Ω(ω)Q (3.66) where Ω(ω) is a four dimensional skew-symmetric matrix given as Ω(ω) =  0 ωz −ωy ωx −ωz 0 ωx ωy ωy −ωx 0 ωz −ωx −ωy −ωz 0 . (3.67) 53 3.2. Inertial Navigation System In Equation 3.67, ωx, ωy and ωz are the angular velocities of body rotation, which are determined by the gyroscopes after compensating for the Earth rotation and the local-level frame change of orientation, Ω = Ωbib −Ωbin. (3.68) To solve the first-order differential equation describing the time rate of change of the quaternion parameters, Euler methods can be used to de- termine the quaternion parameters Qk+1 at time tk+1 based on the values of the quaternion parameters Qk at time tk according to [42] Qk+1 = Qk + ( 1 2 Ω(ωk)Qk ) ∆t (3.69) ⇒  q1(tk+1) q2(tk+1) q3(tk+1) q4(tk+1)  =  q1(tk) q2(tk) q3(tk) q4(tk) + 1 2  0 δθz −δθy δθx −δθz 0 δθx δθyy δθy −δθx 0 δθz −δθx −δθy −δθz 0   q1(tk) q2(tk) q3(tk) q4(tk) . 
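As an illustration, the Euler-step update of Equation 3.69 can be written in a few lines; the names are placeholders, and the quaternion is renormalized after each step, which is common practice although not stated explicitly in Equation 3.69.

```python
import numpy as np

def big_omega(w):
    # Four-dimensional skew-symmetric matrix of Equation 3.67.
    wx, wy, wz = w
    return np.array([[0.0,  wz, -wy,  wx],
                     [-wz, 0.0,  wx,  wy],
                     [ wy, -wx, 0.0,  wz],
                     [-wx, -wy, -wz, 0.0]])

def quaternion_update(q, w_bib, w_bin, dt):
    # One Euler integration step of Equation 3.69.
    #   q     : quaternion [q1, q2, q3, q4] at time t_k
    #   w_bib : gyroscope angular rate vector, body frame
    #   w_bin : Earth rate plus transport rate resolved in the body frame
    #   dt    : sampling interval
    w = np.asarray(w_bib) - np.asarray(w_bin)        # vector form of Equation 3.68
    q_next = np.asarray(q) + 0.5 * big_omega(w) @ np.asarray(q) * dt
    return q_next / np.linalg.norm(q_next)           # keep the quaternion unit length
```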
Once the quaternion parameters are determined at a certain point in time, the rotation matrix Rnb can be obtained using the direct relationship [42] Rnb =  q1 2 − q22 − q32 + q42 2(q1q2 − q3q4) 2(q1q3 + q2q4) 2(q1q2 + q3q4) −q12 + q22 − q32 + q42 2(q2q3 − q1q4) 2(q1q3 − q2q4) 2(q2q3 + q1q4) −q12 − q22 + q32 + q42 . (3.70) 54 3.3. Proposed Integration Algorithm For solving the above equation, initial value of the quaternion vector Q0, at time instant t0 is necessary, which can be determined by [42]  q1 q2 q3 q4  =  .25(R32 −R23)/q4 .25(R13 −R31)/q4 .25(R21 −R12)/q4 .5 √ 1 +R11 +R22 +R33  (3.71) where the values R11, R12, R13, R21, R22, R23, R31, R32, and R33 are the elements of the rotation matrix Rnb (t0), which is determined by the initial alignment procedure. The updated rotation matrix is then used in the next stage to rotate the measurements from the accelerometers, which is resolved in the body frame to transform them into a quantity in the navigation frame. The entire algorithm from Step 1 to Step 3 is then repeated in the same way to find a continuous navigation solution. 3.3 Proposed Integration Algorithm In this section, the proposed algorithm for the integrated OAOA/INS positioning solution is presented. The algorithm is based on some conven- tional GNSS/INS integration strategies. Different GNSS/INS integration architectures such as loosely coupled, tightly coupled, ultratightly coupled, and closely coupled are described in the literature. These architectures dif- fer in how corrections are applied to the inertial navigation solution, what type of GNSS measurements are used, and how the GNSS user equipment is aided by the INS and integration algorithm [10]. 55 3.3. Proposed Integration Algorithm For integration purposes, the extended Kalman filter (EKF) is one of the most popular estimators [45]. The EKF is here used to correct the INS trajectory with position updates from the OAOA positioning solution. The EKF is actually an adaptation of the linear Kalman filter (LKF) to nonlinear functions, and is necessary here given the nonlinear nature of the INS equations. An integration strategy can be either an open loop or closed loop, de- pending on the feedback provided to the INS mechanization block. In an open loop system, the INS algorithm works independently without any feed- back from the estimator’s error estimation. On the other hand, in a closed looped system, error estimation from the EKF is fed back to the INS mecha- nization block. In an open loop system with low cost MEMS sensors, errors can grow unbounded in a very short period of time. Therefore, an error state Kalman filter is used to estimate the errors and the error estimates are fed back to the INS mechanization block. The basis of the error state EKF is the INS error state model, which can be obtained by the perturbation analysis of the INS mechanization equations described in Section 3.2.4 [46]. The full derivation of perturbation analysis is complex and is beyond the scope of this work. Therefore, only the results are shown here. The results of the perturbed system can be expressed as [44, 47] δṙn = δv̇n (3.72) δv̇n = −2Ωnieδvn −Anδψn + Rnb δab (3.73) δψ̇ n = −Ωnieδψn + Rnb δωbib (3.74) 56 3.3. 
Proposed Integration Algorithm where δ in front of a parameter indicates a perturbed quantity and An = skew symmetric matrix of an, δψn = attitude error state vector, δab = vector of errors in the measured specific force vector, δωbib = vector of errors in the measured angular rate vector. The above equations can be used to model the inertial errors. However, in a system using a low grade MEMS IMU, errors may grow unbounded and it is important to estimate the errors present in the sensor measurement. All types of accelerometers and gyroscopes exhibit biases, scale factors, cross-coupling errors, and random noise to a certain extent [10]. Among all the errors, the accelerometer and gyroscope biases are the most significant and should be taken care of [48]. The bias is a constant error exhibited by all accelerometer and gyroscopes. It is independent of the underlying specific force or angular rate. In most cases bias is the dominant term in the overall error of an inertial instrument [10]. In the above equations δab and δωbib represent the errors in the accelerom- eter and gyroscope measurements respectively. Theoretically, these error terms are composed of a constant sensor bias, a temperature sensitivity effect, misalignment error, a scale factor and sensor noise [41]. However, modeling all of the above errors in a Kalman filter is impractical. Therefore, both the accelerometer and gyroscope errors are considered to consist of only a bias term and noise. The accelerometer and gyroscope error models 57 3.3. Proposed Integration Algorithm can therefore be written as δab = ba + wa (3.75) δωbib = b ω + wω (3.76) where ba is the bias vector in the measured specific force vector, and bω is the bias vector in the angular rate measurement vector. The parameters wa and wω are the measurement noise of the specific force vector and angular rate vector, respectively. According to the 1st order Gauss-Markov model [49], the bias for a single accelerometer can be represented with a discrete differential equation as ḃax = −αxbax + √ 2αxσax2w(t) (3.77) where, αx is reciprocal of the correlation time, σax 2 is the variance of the process, and bax is the bias associated with the x−axis accelerometer. This equation can be extended to the vector case considering all three accelerom- eters as  ḃax ḃay ḃaz  = −  −αx 0 0 0 −αy 0 0 0 −αz   bax bay baz +  √ 2αxσax2√ 2αyσay2√ 2αzσaz2 w(t). (3.78) The diagonal matrix with −αx, −αy, and −αz is denoted by αD, and the column matrix with √ 2αxσax2, √ 2αyσay2, and √ 2αzσaz2 is denoted by Σa. Similarly, for the triaxial gyroscope, the bias error model for the angular 58 3.3. Proposed Integration Algorithm rate measurement can be written as  ḃmω ḃmω ḃmω  = −  −βx 0 0 0 −βy 0 0 0 −βz   bmω bmω bmω +  √ 2βxσωx2√ 2βyσωy2√ 2βzσωz2 w(t). (3.79) The diagonal matrix with −βx, −βy, and −βz is denoted by βD, and the column matrix with √ 2βxσax2, √ 2βyσay2, and √ 2βzσaz2 is denoted by Σω. For both the accelerometer and the gyroscope error models, the white noise term w(t) is of zero mean and unity variance. The error modeling of the magnetometer is often neglected because it only affects the calculation of the yaw angle in the body frame with respect to the navigation frame and this does not lead to any large errors in position estimation for short durations [43]. This is especially true in the integration of INS with GNSS. 
Since GNSS is a very reliable positioning solution, a small heading error initiated by the magnetometer bias can be ignored without any significant performance degradation in the integrated system. However, for indoor positioning applications, the aiding positioning so- lution used in this work, namely OAOA, is not yet as reliable as GNSS. Moreover, a given amount of position error in an indoor environment may be considered to be more significant than the same in an outdoor environ- ment. Therefore, magnetometer error estimation is also necessary. The raw magnetometer measurements contain components from the mag- netic field of the navigation system, the magnetic field of surrounding equip- ment, in addition to the Earth’s magnetic field. Therefore, it can be said that, for a particular environment, these errors are analogous to a fixed bias. 59 3.3. Proposed Integration Algorithm A complex calibration method is available for calibrating the magnetometer [10]. However, for this work, it is assumed that the magnetometer error vec- tor δmb consists only of a bias term and random noise and can be written as δmb = bm + wm. (3.80) The bias error model for the magnetometer measurement can be written as  ḃmx ḃmy ḃmz  = −  −γx 0 0 0 −γy 0 0 0 −γz   bmx bmy bmz +  √ 2γxσmx2√ 2γyσmy2√ 2γzσmz2 w(t). (3.81) The diagonal matrix with −γx, −γy, and −γz is denoted by γD, and the col- umn matrix with √ 2γxσax2, √ 2γyσay2, and √ 2γzσaz2 is denoted by Σm. As stated earlier, the white noise term w(t) is of zero mean and unity variance. Figure 3.6 depicts the basic block diagram showing the INS mechaniza- tion block which gets feedback from the EKF and the input to the block is compensated for the bias estimation by the EKF. In the next section the proposed algorithms for OAOA/INS integration are described. A detailed description of the Kalman filter algorithm is not discussed here but may be found in the literature [39, 50]. Loosely coupled and tightly coupled inte- gration strategies are discussed first. This is followed by a modified version of the loosely coupled and tightly coupled integration algorithms which use an error compensation block before the INS mechanization block. 60 3.3. Proposed Integration Algorithm Gravity computation INS Mechanization Block Input Block Output Block 2Ωnie Ω n en Ωbib Rnb ωbin ω n in ωnen ωnie ab Rbn Rnb + vn + + - + - + + ∫ ∫Initial alignment Rnb (updated) mb Qk an Qk + EKF Bias Compensate + + + - - - - + + - rn b̂akb̂ ω k b̂ m k δv̂ n k δr̂ n k Qk v̇n vn Figure 3.6: Basic diagram of the INS mechanization block with feedback from the EKF and corrected input measurement. 3.3.1 Loosely Coupled Integration In the case of GNSS/INS loosely coupled integration, the INS and GNSS operate as independent navigation systems. The navigation solution from each are blended using an estimator to form a third navigation solution [40]. Following the basic architecture of the loosely coupled algorithm for GNSS/INS integration, a loosely coupled algorithm for OAOA/INS integra- tion is proposed. The key aspects of the proposed algorithm are described below and depicted in Figure 3.7. 1. The IMU consists of a triaxial accelerometer, a triaxial gyroscope and a triaxial magnetometer which provide the raw accelerometer mea- surements ãk+1, the raw gyroscope measurements ω̃k+1, and the raw magnetometer measurements m̃k+1. The subscript k + 1 denotes the 61 3.3. 
Proposed Integration Algorithm OAOA Sensor OAOA Processor IMU INS Mechanization Block EKF Output + - rnINSk+1 rnOAOAk+1 r̂nk+1 δr̂nk δv̂nk ω̃k+1 m̃k+1 ãk+1 r n O A O A in it ia l Ik+1 + + b̂ωk Q̂k + + - - + - b̂ ak b̂mk Figure 3.7: Block diagram of a loosely coupled OAOA and INS integrated positioning solution. current update cycle. The measurements from the accelerometer and the gyroscopes are then corrected using the bias estimate from the EKF which is based on information from the previous cycle and thus denoted by the subscript k in Figure 3.7. 2. As stated earlier, magnetometer error estimation is also necessary for indoor positioning and, therefore, is added to the EKF state vec- tor which provides significant performance improvement as shown in Chapter 4. 3. The estimated parameters are then sent to the INS mechanization block. The estimated velocity errors are used to correct the velocity in the INS mechanization block. The INS mechanization block cal- 62 3.3. Proposed Integration Algorithm culates the position vector, rnINSk+1 = [ rnINSxk+1 r nINS yk+1 r nINS zk+1 ]T , and the velocity vector vnINSk+1 = [ vnINSxk+1 v nINS yk+1 v nINS zk+1 ]T , with respect to the local navigation frame at each time epoch. 4. The rotation matrix for the rotation from the body frame to the nav- igation frame, implemented using Euler angles, may become singular at times. Therefore, instead of using Euler angles as in [43], quater- nion algebra is used to update the rotation matrix. Therefore, the EKF here is used for estimating the quaternion parameters in addi- tion to the estimation of attitude errors as in conventional INS/GNSS [10, 43]. This quaternion estimation is given as feedback to the INS mechanization block, where the quaternion parameters are then cal- culated according to Equation 3.69. The EKF estimated quaternion parameters are used for Qk in Equation 3.69. 5. For the INS mechanization block to begin, initial condition informa- tion is needed and this is provided by the OAOA processor block. In Figure 3.7 this information is represented by rnOAOAinitial . 6. The OAOA sensor block provides a current from each of the three orthogonal photodiodes. The elements are termed IPD1k+1, IPD2k+1, and IPD3k+1. The currents from these three photodiodes have com- ponents at different frequencies from each optical source. The com- ponents of the currents at the different frequencies are not shown in Figure 3.7. These current components are passed to the OAOA pro- cessor block which then calculates the positions rnOAOAxk+1 , r nOAOA yk+1 , and rnOAOAzk+1 along the three orthogonal axes. The position vector, r nINS k+1 , 63 3.3. Proposed Integration Algorithm calculated from the INS algorithm is then subtracted from the position vector rnOAOAk+1 , calculated from OAOA processor, and the difference is fed to the EKF as the observation vector. 7. According to [46], the basic EKF estimator consists of nine navigation error states consisting of three position error states δr̂nxk, δr̂ n yk, and δr̂nzk, three velocity error states δv̂ n xk, δv̂ n yk, and δv̂ n zk, and three attitude error states. In addition to all the error state vectors, the quaternion parameter vector Qk is also estimated here and it has four components. Moreover, for low cost MEMS IMU sensors, the state vector of the EKF must be augmented by incorporating accelerometer error states b̂axk, b̂ a yk, and b̂ a zk, and gyroscope error states, b̂ ω xk, b̂ ω yk, and b̂ ω zk. 
If the magnetometer error states are added to the EKF states, the EKF here becomes a twenty two state extended Kalman filter. 8. The estimated position error is used to correct the position output from the INS. The final estimated position is given as r̂nk+1 = r nINS k+1 + δr̂ n k . (3.82) EKF for Loosely Coupled Integration The Kalman filter used here for the OAOA/INS integration is a twenty two or nineteen state Kalman filter, depending on whether the magnetome- ter errors are estimated or not. The state vector for the EKF can be written 64 3.3. Proposed Integration Algorithm as xnk =  δrnk δvnk δψn Qk bak bωk bmk  . (3.83) All components of this state vector are vectors containing three elements, except Qk, which has four quaternion parameters q1(tk), q2(tk), q3(tk), and q4(tk). This presents a problem for writing the system model for the EKF. For this reason, Qk is divided into two parts as Qk =  q1(tk) q2(tk) q3(tk) q4(tk)  =   q1(tk) q2(tk) q3(tk)  q4(tk)  = Q(3×1)k q4k . (3.84) For the same reason Ω(ω) in Equation 3.67, is rewritten in the form Ω(ω) =   0 ωz −ωy −ωz 0 ωx ωy −ωx 0   ωx ωy ωz  [ −ωx −ωy −ωz ] 0  = Ω3×3 Ω3×1 Ω1×3 0 . (3.85) Based on Equation 3.72 and on the error analysis of the accelerometer, gyroscope and magnetometer, the system model of the EKF can be written 65 3.3. Proposed Integration Algorithm as  δṙn δv̇n δψ̇n Q̇(3×1) q̇4 δḃ a δḃ ω δḃ m  =  03×3 I3×3 03×3 03×3 03×1 03×3 03×3 03×3 03×3 −2Ωnie An 03×3 03×1 Rnb 03×3 03×3 03×3 03×3 −Ωnie 03×3 03×3 03×3 Rnb 03×3 03×3 03×3 03×3 12Ω3×3 1 2Ω3×1 03×3 03×3 03×3 01×3 01×3 03×3 12Ω1×3 0 01×3 01×3 01×3 03×3 03×3 03×3 03×3 03×1 αD 03×3 03×3 03×3 03×3 03×3 03×3 03×1 03×3 βD 03×3 03×3 03×3 03×3 03×3 03×1 03×3 03×3 γD   δrn δvn δψn Q(3×1) q4 δba δbω δbm  +  03×3 03×3 03×1 Rnb 03×3 03×1 03×3 Rnb 03×1 03×3 03×3 03×1 01×3 01×3 0 03×3 03×3 Σa 03×3 03×3 Σω 03×3 03×3 Σm   wa wω w . (3.86) Obtaining the Gauss-Markov model parameters for the sensor errors is an important task to this end. Approximately 12 hours of static data was collected by a commercial IMU device. In static conditions, the triaxial accelerometer measures only the reaction force due to gravity, the gyroscope measures only the Earth’s rotation and the magnetometer measures hard- iron and soft-iron magnetic flux. All of the quantities are constant, at least for the length of the test considered. Therefore, it can be concluded that any variations in the measurements must be caused by variations in the sensor errors, which is to be modeled. The autocorrelation function of the raw data was computed as stated in [49] to determine the parameters for 66 3.3. Proposed Integration Algorithm Gauss-Markov model. In addition to the system model, the EKF also needs a measurement model such as zk = Hkxk + wmk, which is basically a relationship between the observation vector zk and the state vector xk, where Hk is the mea- surement matrix and wmk is the measurement noise vector. For the loosely coupled system, the observation vector consists of three position errors and can be written as zL =  rnINSxk − rnOAOAxk rnINSyk − rnOAOAyk rnINSzk − rnOAOAzk  . (3.87) For the loosely coupled system, the state vector and the observation vector have a linear relationship. 
The measurement matrix for loosely coupled integration can be written as HL = [ I3×3 03×3 03×3 03×3 03×1 03×3 03×3 03×3 ] . (3.88) The algorithm for estimating the states of the EKF using the system model and measurement model, follows standard the EKF algorithm, which is available in any book on Kalman filtering [50, 51]. 3.3.2 Tightly Coupled Integration In the case of GNSS/INS tightly coupled integration, raw GNSS ob- servations are passed directly to an EKF. The observation vector contains the GNSS processor’s pseudo-range and pseudo-range rate measurements [10]. Following the basic architecture of a tightly coupled algorithm for GNSS/INS integration, a tightly coupled algorithm for OAOA/INS integra- 67 3.3. Proposed Integration Algorithm IMU INS Mechanization Block EKF Output r̂nk+1 δr̂nk δv̂nk ω̃k+1 m̃k+1 ãk+1 + + b̂ωk Q̂k + + - - + - b̂ ak b̂mk OAOA Sensor OAOA Processor rnINSk+1 r n O A O A in it ia l Ik+1 Current Calculation Block + - δIk+1 I k+1 INS Figure 3.8: Block diagram of a tightly coupled OAOA and INS integrated positioning solution. tion is proposed. The key aspects of the proposed algorithm are described below and depicted in Figure 3.8. 1. The difference between the loosely coupled and the tightly coupled integration algorithms is in the observation vector. In loose integra- tion, the position vector, rnINSk+1 , calculated from the INS algorithm is subtracted from the position vector, rnOAOAk+1 , calculated from OAOA processor, and the difference is fed to the EKF as the observation vector. In the case of the tightly coupled system shown in Fig. 3.8, however, the observation vector contains the differences between the OAOA sensor measured currents and current values predicted from the corrected inertial navigation solution. To be more specific, each of the currents from the photosensor has components at different fre- 68 3.3. Proposed Integration Algorithm quencies depending on the number of light sources used. Using the estimated position vector determined by the INS algorithm and sim- ple trigonometry, similar current components can be calculated. In the case of the tightly coupled algorithm, the difference between the two current values is fed to the EKF as an observation vector. 2. If N light sources are used for positioning, current from the each photo- diode will have N frequency components. IPD1 will have components IF1PD1 , I F2 PD1 , IF3PD1 , ... ... ..., I FN PD1 . Similarly, IPD2 will have components IF1PD2 , I F2 PD2 , IF3PD2 , ... ... ..., I FN PD2 , and IPD3 will have components like IF1PD3 , I F2 PD3 , IF3PD3 , ... ... ..., I FN PD3 . Similar current components can be calculated by knowing the position vector of the sensor, determined by the INS algorithm. The difference between them can be represented as δIF11 , δI F2 1 , δI F3 1 , ... ... ..., δI FN 1 , δI F1 2 , δI F2 2 , δI F3 2 , ... ... ..., δI FN 2 , δI F1 3 , δIF23 , δI F3 3 , ... ... ..., δI FN 3 . In the case of tightly coupled integration, these current differences comprise the observation vector for the EKF. EKF for Tightly Coupled Integration The EKF for the tightly coupled OAOA/INS integration has the same system model as the EKF used for loosely coupled integration. However, the measurement model in the tightly coupled case is different from that of the loosely coupled case. In the case of the loosely coupled algorithm, a linear relationship was available between the state vector and the observation vector. 
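For reference, that linear relationship (Equations 3.87 and 3.88) simply selects the three position-error states out of the twenty-two element state vector. A minimal sketch, with the state ordering of Equation 3.83 assumed and illustrative names, is:

```python
import numpy as np

# Assumed state ordering, following Equation 3.83:
# [dr(3), dv(3), dpsi(3), Q(3), q4(1), b_a(3), b_w(3), b_m(3)]  ->  22 states
H_L = np.hstack([np.eye(3), np.zeros((3, 19))])      # Equation 3.88

def loosely_coupled_observation(r_ins, r_oaoa):
    # Observation vector of Equation 3.87: INS position minus OAOA position.
    return np.asarray(r_ins) - np.asarray(r_oaoa)
```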
However, in the tightly coupled case, the measurement model is nonlinear and the relationship between the state vector and observation 69 3.3. Proposed Integration Algorithm vector can be represented by [50] zk = h(xk) + wmk. (3.89) The observation vector consists of the different frequency components of the current differences described above. The observation vector can be written as zT =  δIF11 ... δIFN1 δIF12 ... δIFN2 δIF13 ... δIFN3  . (3.90) Equation 3.89 does not represent a linear relationship between zk and xk. A linearization of this relationship is necessary for the Kalman filter algorithm to operate. According to [50], the measurement matrix can be written as the Jacobian of the Taylor expansion of h(xk). Therefore, the measurement 70 3.3. Proposed Integration Algorithm matrix is written as HT = ∂h ∂xk ∣∣∣∣∣ xk=x̂k|k−1 (3.91) =  ∂h11 ∂δrnx ∂h11 ∂δrny ∂h11 ∂δrnz · · · · · · ∂h11∂δbmz ∂h12 ∂δrnx ∂h12 ∂δrny ∂h12 ∂δrny · · · · · · ∂h12∂δbmz ... ... ... . . . ... ... ... ... . . . ... ∂h3N ∂δrnx ∂h3N ∂δrny ∂h3N ∂δrny · · · · · · ∂h3N∂δbmz  xk=x̂k|k−1 (3.92) where δI Fj i = hij(δr n x , δr n y , δr n z ) = hij ; i = 1, 2, 3 and j = 1, 2, 3, ......, N. (3.93) Note that, the constant of the Taylor series expansion of h(xk) appears in the Kalman filter correction equation [50]. The differentials of Equation 3.91 are calculated numerically. The values of these currents generated in the photosensor depend on the AOAs of the light beams striking the photosensor, which in turn is a function of the position of the photosensor. A general form of this function is not convenient to determine because the function depends on the number and position of the light sources used in the system as well as knowledge of the reference frame used. 71 3.3. Proposed Integration Algorithm 3.3.3 Integration Algorithms with Augmented Observation One of the major sources of error in inertial navigation systems are the different errors associated with the inertial sensors. Therefore, it is im- portant to compensate for the errors generated by noise present in the ac- celerometer, gyroscope and magnetometer measurements. For this reason, accelerometer bias, gyroscope bias and magnetometer bias errors are esti- mated in the EKF, and then used to correct the raw sensor measurements. However, the error estimation done in the EKF is based on information from the previous time epoch. Therefore, a prior error estimation method can be implemented which is associated with the current measurements of the sensor data. Moreover, except for the position error states, all the states which are estimated by the EKF, as discussed in the previous sections, are estimated solely using the system model, because there is no contribution from the sen- sor bias, velocity error or quaternion parameters in the observation vector. By augmenting the observation vector by adding prior sensor bias estimation data and the quaternion vector, the overall performance of the positioning system can be improved. This is shown in Chapter 4. The idea of augmented observations was introduced in [43]. However, for this work, a modified version of the basic idea of augmenting the observation vector is used for the specific problem at hand. The key and novel features of the integration algorithm which includes an augmented observation vector are depicted in Figure 3.9 and described as follows. 1. The EKF in the loosely coupled and tightly coupled strategies for 72 3.3. 
Proposed Integration Algorithm Observation Augmenting Block Input Block Ωbib Rnb ab Initial alignment mb IN S  M ec h a n iz a ti o n B lo ck ãk+1 m̃k+1 ω̃k+1 EKF Bias Compensate + + + - - - b̂akb̂ ω k b̂ m k Q LKF Observation vector k+1 ATk ãk+1 âk Figure 3.9: Observation augmenting block. OAOA/INS integration, as described in the previous section, estimates the accelerometer, gyroscope and magnetometer bias using informa- tion from the previous cycle. This is then used to correct the raw measurements from the accelerometers, gyroscopes and magnetome- ter in the current cycle. In [43] a linear Kalman filter (LKF) was used to estimate the bias in the accelerometer and gyroscope prior to the bias estimation in the EKF. However, in the present work an LKF is used to estimate only the accelerometer bias using the current cycle measurement, which is then used to augment the observation vector of the EKF. 2. The EKF used in loosely and tightly coupled integration estimates the quaternion vector, Q, which is provided as feedback to the INS mecha- nization block to determine the updated quaternion vector at the next time epoch according to Equation 3.66. The EKF used in the previous 73 3.3. Proposed Integration Algorithm section does not have any observation for quaternion estimation. The quaternion parameters estimated from the raw measurements of the current cycle are used to augment the observation vector for the EKF. This helps the EKF to estimate the quaternion parameters based on the most up-to-date information. Prior Accelerometer Bias Estimation The EKF estimates the accelerometer bias solely from the system model using information from the previous cycle. This estimated bias is then used to correct the raw accelerometer measurement. The EKF does not have any information on the accelerometer error associated with the current cycle accelerometer data. Therefore, a linear Kalman filter (LKF) is employed which uses the current cycle measurement from the accelerometer as part of its observation vector to estimate the prior accelerometer bias. In Figure 3.9, the linear Kalman filter used for accelerometer prior bias estimation is named LKF. The system model for bias estimation was dis- cussed in Section 3.3 and can be written as ḃ a = αDb a + Σaw(t). (3.94) The measurement model can be written as zba k = bak + wmk (3.95) where zba k is the observation vector for this measurement model and wmk 74 3.3. Proposed Integration Algorithm is the measurement noise vector. The accelerometer bias term required for the observation vector of the EKF can be calculated using the previously estimated accelerometer measurement âk and the state transition matrix Ak, as described in [43]. The details for calculating the state transition matrix can be found in [43]. The state transition matrix can be used for estimating the acceleration of the current cycle from the estimated acceleration of the previous cycle. The estimated current cycle acceleration is subtracted from the raw accelerometer measurement to find the bias associated with the current cycle. The bias term can be estimated according to zba k = b̃ a k+1 = ãk+1 −ATk âk. (3.96) This estimated bias term is part of the EKF observation vector. Prior Quaternion Parameter Estimation The extended Kalman filter also estimates the quaternion vector Qk. However, the EKF estimates the quaternion parameters solely based on the system model because there is no observations for the quaternion parame- ters. 
By incorporating a prior estimate of Qk in the EKF observation vector, better estimation of the quaternion parameters is possible. The accelerom- eter and magnetometer measurements may be used to calculate roll, pitch and yaw according to Equation 3.20, Equation 3.21, and Equation 3.24. Equation 3.12 can then be used to calculate the rotation matrix directly from the roll, pitch and yaw. The rotation matrix can be used for calculat- ing the quaternion vector according to Equation 3.71, and as a result, this 75 3.3. Proposed Integration Algorithm OAOA Sensor OAOA Processor INS Mechanization Block EKF Output + - rnINSk+1 rnOAOAk+1 r̂nk+1 δr̂nk δv̂nkr n O A O A in it ia l Ik+1 + + b̂ωk Q̂k + + - - + - b̂ ak b̂mk IM U ω̃k+1 m̃k+1 ãk+1 A u g m en ti n g B lo ck Figure 3.10: Block diagram of a loosely coupled OAOA and INS integrated positioning solution with augmented observations. calculated quaternion vector is directly related to the observed accelerom- eter and magnetometer measurement at the current cycle. The calculated quaternion is added to the observation vector of the EKF. Loosely Coupled Algorithm with Augmented Observations The loosely coupled integration with augmented observations is shown in Figure 3.10. The system model is the same as that discussed in Sec- tion 3.3.1. However, the measurement model is not the same due to the augmented observation vector. The augmented observation vector for the loosely coupled integration algorithm can be written as 76 3.3. Proposed Integration Algorithm zL[A] =  δrnk Qk(3×1) q4k bak . (3.97) The measurement matrix for the measurement model of the loosely coupled algorithm with augmented observation can be represented as HL[A] =  I3×3 03×3 03×3 03×3 03×1 03×3 03×3 03×3 03×3 03×3 03×3 I3×3 03×1 03×3 03×3 03×3 01×3 01×3 01×3 01×3 1 01×3 01×3 01×3 03×3 03×3 03×3 03×3 03×1 I3×3 03×3 03×3 . (3.98) Given the matrices of Equation 3.97 and Equation 3.98, as well as the system model and measurement model, the standard Kalman filter algo- rithm can be implemented to estimate the positioning solution. Tightly Coupled Algorithm with Augmented Observations The tightly coupled algorithm with augmented observations is depicted in Figure 3.11. In this case the observation vector consists of the current difference values, instead of δrnk as found in the observation vector of the loosely coupled algorithm. The other elements of the observation vector remain the same as for loosely coupled integration. The observation vector for the tightly coupled system with augmented observations can be written 77 3.3. Proposed Integration Algorithm INS Mechanization Block EKF Output r̂nk+1 δr̂nk δv̂nk + + b̂ωk Q̂k + + - - + - b̂ ak b̂mk IM U ω̃k+1 m̃k+1 ãk+1 A u g m en ti n g B lo ck OAOA Sensor OAOA Processor rnINSk+1 r n O A O A in it ia l Ik+1 Current Calculation Block + - δIk+1 I k+1 INS Figure 3.11: Block diagram of a tightly coupled OAOA and INS integrated positioning solution with augmented observations. as zT [A] =  δIF11 ... δIFN3 Qk(3×1) q4k bak  . (3.99) 78 3.3. Proposed Integration Algorithm According to [50], the measurement matrix can be written as HT [A] = ∂h ∂xk ∣∣∣∣∣ xk=x̂k|k−1 (3.100) =  ∂h11 ∂δrnx ∂h11 ∂δrny ∂h11 ∂δrnz · · · · · · ∂h11∂δbmz ∂h12 ∂δrnx ∂h12 ∂δrny ∂h12 ∂δrny · · · · · · ∂h12∂δbmz ... ... ... . . . ... ... ... ... . . . ... ∂h3N ∂δrnx ∂h3N ∂δrny ∂h3N ∂δrny · · · · · · ∂h3N∂δbmz ∂q1 ∂δrnx ∂q1 ∂δrny ∂q1 ∂δrny · · · · · · ∂q1∂δbmz ... ... ... . . . ... ... ... ... . . 
. ... ∂δbaz ∂δraz ∂δbaz ∂δrny ∂δbaz ∂δrny · · · · · · ∂δbaz∂δbmz  xk=x̂k|k−1 (3.101) where hij = δI Fj i = hij(δr n x , δr n y , δr n z ); i = 1, 2, 3 and j = 1, 2, 3, ......, N. (3.102) As stated earlier in the case of the tightly coupled system without aug- mented observations, a general form of this function is not easy to determine because the function depends on the number and position of the light sources used in the system, and a knowledge of the reference frame used. Therefore, the differentials are calculated numerically. Given the matrices of Equation 3.99 and Equation 3.100, as well as the system model and measurement model, the standard Kalman filter algorithm found in any standard text [39, 50] can be implemented to estimate the positioning solution. 79 Chapter 4 Experimental Work and Results In this chapter the experimental results for the performance analysis of the proposed algorithms described in the previous chapter are discussed. The description of the necessary experimental setup is first presented fol- lowed by details of the data collection and data processing procedure. The results and performance comparison of different algorithms are then pre- sented. The limitations and future scope of the experiments are also dis- cussed in this chapter. 4.1 Experiment 4.1.1 Data Collection For the proposed indoor positioning solution, inertial sensor data and data from the OAOA sensor are necessary. These data are then used by the proposed algorithm to calculate the positioning solution. 80 4.1. Experiment Figure 4.1: A commercial athletic performance sensor used as the IMU unit. IMU Data Collection In Figures 3.7, 3.8, 3.10, and 3.11 the IMU block supplies raw accelerom- eter, gyroscope and magnetometer measurements to the INS mechanization block. For the experiments conducted, the commercial athletic performance sensor shown in Figure 4.1 is used as the IMU unit. This device contains accelerometers, gyroscopes and magnetometers along three orthogonal axes and is able to collect data at a frequency of 100 Hz. For collecting data with an IMU, motion is required and this is provided by the ‘turntable’ shown in Figure 4.2. The turntable is a circular rotating platform with a diameter of 60 cm. The IMU is mounted on the turntable and collects data as the turntable rotates. 81 4.1. Experiment Diameter 60 cm Figure 4.2: A DC motor turntable for creating a controlled motion. OAOA Data Collection The basic principle of the OAOA sensor was discussed in Chapter 3. Two or more sources of light at known coordinates, and modulated by different frequencies as described in Chapter 3, are necessary for the experiment. The new OAOA sensor is able to provide location information by determining the AOAs of the incident light from the light sources, with respect to a known reference frame. At the time of the research, the OAOA sensor was not able to collect continuous data while moving. Laboratory experiments were done in static conditions for determining the position of the sensor and the results were published in [4, 5]. For those experiments, several current measurements were taken for a particular position of the sensor and then averaged to find the current measurement for that particular position. To collect the current data continuously while the sensor is moving, an auto- matic data interpretation circuit is needed. For the time being, the OAOA 82 4.1. 
4.1.2 Experimental Setup

Figure 4.3: Hypothetical experimental setup for data collection (coordinate units are cm, and figure is not drawn to scale).

Consider the hypothetical setup of Figure 4.3, which consists of the turntable and four optical sources at known coordinates. For this experiment the turntable is rotated at a constant angular velocity of 36 deg/sec. Therefore, the turntable makes one complete revolution in 10 seconds. For a practical experiment with the OAOA sensor collecting data in a dynamic environment, the OAOA sensor and the IMU should be stacked together. In that case the OAOA sensor can sense the AOA of the incident light beams from the optical sources. The AOA values can later be used to calculate the position of the OAOA sensor. However, as stated earlier, the OAOA sensor is not yet ready to collect data in a dynamic environment. Therefore, only the INS sensor is mounted on the turntable and the turntable is rotated at a constant velocity. The turntable velocity is measured with the help of a stopwatch.

For calculating theoretical AOA values to use as simulated data, the geometry of the trajectory and the velocity of the INS module are required. For a known geometry (circular in this case), the position of any moving object can be calculated using simple equations of motion. Since the geometry of the path and the velocity are already known, theoretical AOA values for the four optical beacons can be calculated for any particular position of the IMU on the rotating turntable. These 'true' AOA values are then corrupted with additive white Gaussian noise (AWGN) and become the simulated OAOA data. The update rate of the OAOA sensor in a dynamic environment is still undetermined. Therefore, different OAOA update rates are used in this work to investigate the effect of the OAOA update rate on the performance of the positioning solution.

Figure 4.4: Comparison between true and estimated trajectories. (a) True trajectory and trajectory determined by OAOA only. (b) True trajectory and trajectory determined by the OAOA and INS loosely coupled integration algorithm.

The simulated OAOA measurements are generated at update rates of 10 Hz and 5 Hz. These OAOA position measurements are used to update the 100 Hz INS positioning solution using the proposed algorithms described in Chapter 3. Consequently, an OAOA update is provided to the INS positioning process every 10 or 20 epochs. At epochs when no OAOA update is available, state vector prediction is performed using the state estimate of the previous epoch. In this work, positioning is performed in a horizontal (2-D) plane because the turntable is only able to provide motion in 2-D space. However, the same algorithms are able to provide a positioning solution in three-dimensional space.
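The simulation procedure described above can be summarized in a short sketch. The beacon coordinates, circle centre, and noise parameter below are illustrative placeholders only; the thesis specifies a 60 cm diameter turntable rotating at 36 deg/s, four optical sources at known coordinates, and zero-mean AWGN added to the azimuthal angles.

```python
import numpy as np

def simulate_oaoa_azimuths(radius_cm=30.0, omega_deg_s=36.0, fs_oaoa_hz=10.0,
                           duration_s=10.0, noise_std_deg=2.0,
                           beacons=((100, 100, 250), (-100, 100, 250),
                                    (100, -100, 250), (-100, -100, 250)),
                           centre=(0.0, -30.0)):
    """Generate noisy azimuth AOA observations for a sensor moving on a circle.

    All geometry and noise values are illustrative assumptions, not the thesis setup.
    Returns the epoch times, the true x-y track, and one noisy azimuth per beacon.
    """
    t = np.arange(0.0, duration_s, 1.0 / fs_oaoa_hz)
    theta = np.deg2rad(omega_deg_s) * t                      # turntable angle at each epoch
    # True sensor positions on the circular track (horizontal plane).
    x = centre[0] + radius_cm * np.cos(theta)
    y = centre[1] + radius_cm * np.sin(theta)
    azimuths = []
    for bx, by, _bz in beacons:
        true_az = np.degrees(np.arctan2(by - y, bx - x))     # true azimuth AOA to this beacon
        noisy_az = true_az + np.random.normal(0.0, noise_std_deg, t.size)
        azimuths.append(noisy_az)
    return t, np.column_stack([x, y]), np.array(azimuths).T  # times, true track, AOAs
```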
4.2 Performance Analysis and Discussion

Figure 4.4(a) shows the true trajectory (determined by the known geometry, which in this case is a circle with a 60 cm diameter) and the trajectory determined from the simulated, noise-corrupted OAOA values only. Note that no IMU data is used to produce the trajectories in Figure 4.4(a). As expected, the trajectory determined from OAOA information is not smooth, since the noise associated with each epoch is totally uncorrelated with the noise of previous or future epochs.

The simulated OAOA position information is created by corrupting the theoretical AOA values with noise. As stated above, in this work only horizontal positioning is considered, which depends only on the azimuthal angles if a particular orientation of the sensor is assumed. From [4] it is clear that azimuthal angle errors are relatively constant irrespective of the position of the photosensor, which means the errors in the azimuthal angle do not depend on the actual values of the azimuthal angle. The errors are typically less than 2° as reported in [4]. Therefore, white Gaussian noise with mean 0° and variance 2° was added to the AOA values to keep most of the angle errors below 2°. Using these noise-corrupted AOA measurements, the trajectory of Figure 4.4(a) is obtained, where the maximum error is 7.34 cm. This level of positioning error is similar to that reported in [4].

Figure 4.4(b) shows the true trajectory and the trajectory determined by integrating INS and OAOA with the loosely coupled integration algorithm depicted in Figure 3.7. The INS/OAOA trajectory exhibits a smooth transition from one epoch to the next, as expected, given the use of the EKF. It is obvious from the comparison of Figure 4.4(a) and Figure 4.4(b) that the integrated system provides a better solution in terms of accuracy and smoothness. The tightly coupled integration system of Figure 3.8 provides a trajectory similar to that of Figure 4.4(b) but with better accuracy. However, from a figure such as Figure 4.4(b) it is not possible to quantify the amount of position error merely by observation. Therefore, the performance comparisons are quantified in tabular form in Tables 4.1, 4.2, 4.3, and 4.4. In Table 4.1 the performance of all algorithms is quantified for an OAOA sensor update rate of 10 Hz and AWGN of 0 mean and a variance of 2° added to the simulated azimuthal angles. Table 4.2 is similar to Table 4.1 but with a different OAOA sensor update rate of 5 Hz. Tables 4.3 and 4.4 are similar to Tables 4.1 and 4.2, respectively, but with a higher amount of AWGN (0 mean and a variance of 3°) added to corrupt the simulated azimuthal angles.
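The maximum and average errors reported in these tables could be computed from the estimated and true tracks along the lines of the following sketch; the function and array names are illustrative and not taken from the thesis implementation.

```python
import numpy as np

def position_error_stats(estimated_xy, true_xy):
    """Return (max, mean) of the per-epoch horizontal position error.

    estimated_xy, true_xy : arrays of shape (num_epochs, 2) holding the x-y
    coordinates of the estimated and true tracks at the same epochs.
    """
    diff = np.asarray(estimated_xy, dtype=float) - np.asarray(true_xy, dtype=float)
    errors = np.linalg.norm(diff, axis=1)   # Euclidean error at each epoch
    return errors.max(), errors.mean()

# Example usage with hypothetical tracks:
# max_err, avg_err = position_error_stats(track_oaoa_only, track_true)
```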
Table 4.1: Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update rate is 10 Hz; AWGN with 0 mean and a variance of 2° is added to the simulated azimuthal angle values).

IPS strategy                       Max error [cm]   Average error [cm]
OAOA-only                          7.34             4.21
Loosely coupled (LC)               3.15             2.34
Tightly coupled (TC)               3.23             1.67
LC with augmented observations     3.01             2.20
TC with augmented observations     2.95             1.59

Table 4.1 compares the performance of the algorithms described in the previous chapter. Maximum and average errors are given for the OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations algorithms. For the results in Table 4.1, the OAOA update is provided at every 10th sample of the INS data. For the OAOA-only case the maximum position error is 7.34 cm and the average positioning error is 4.21 cm. For the loosely coupled and tightly coupled systems, the maximum errors reduce to 3.15 cm and 3.23 cm, respectively.

The improvement in performance of the tightly coupled system is not evident when considering the maximum error only; the maximum error may occur due to a large value of noise at a particular epoch. However, as expected, the improvement is evident in the average error, which is only 1.67 cm for the tightly coupled system, compared to 2.34 cm for the loosely coupled system.

The last two rows of Table 4.1 show the performance of the loosely coupled and tightly coupled systems with augmented observations, as depicted in Figure 3.10 and Figure 3.11, respectively. In this case the improvement is not very significant compared to the algorithms without augmented observations. This may be due to the high update rate from the OAOA sensor: because the INS solution is updated by OAOA measurements so often, there is little room for the better algorithms to show a significant performance improvement. However, the performance improvement is evident where the update rate is lower, as will be shown below.

Table 4.2: Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update rate is 5 Hz; AWGN with 0 mean and a variance of 2° is added to the simulated azimuthal angle values).

IPS strategy                       Max error [cm]   Average error [cm]
OAOA-only                          7.34             4.21
Loosely coupled (LC)               6.01             4.89
Tightly coupled (TC)               5.87             4.21
LC with augmented observations     3.79             3.11
TC with augmented observations     3.23             2.85

Table 4.2 shows the results for the same five algorithms as Table 4.1, but for an OAOA update provided at every 20th sample of the INS data; that is, the OAOA data rate is 5 Hz. In this case the performance of every algorithm is worse than in Table 4.1. This is expected because of the lower update rate of the OAOA sensor. With a lower OAOA sensor update rate, the positioning solution is more dependent on the INS-only information, which is susceptible to drift over time.

Note the significant improvement in performance obtained by using the loosely coupled and tightly coupled algorithms with augmented observations. This improvement was not significant in the results of Table 4.1, where the update rate was higher. Therefore, it can be concluded that the loosely coupled and tightly coupled algorithms with augmented observations perform significantly better when the update rate from the OAOA sensor is lower. In that case, the INS has to perform independently for a longer period of time, which in turn allows the positioning solution to drift. Therefore, lower update rates from the OAOA sensor make room for the algorithms with augmented observations to improve the positioning performance.
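The effect of the OAOA update rate follows directly from the filter cadence: the EKF predicts at every 100 Hz INS epoch and applies a measurement update only at epochs where an OAOA observation arrives (every 10th epoch at 10 Hz, every 20th at 5 Hz). A minimal sketch of this scheduling is shown below; the filter interface (predict, update) is an illustrative assumption, not the thesis implementation.

```python
def run_filter(ekf, ins_epochs, oaoa_measurements, oaoa_every_n=10):
    """Drive an error-state EKF at the INS rate with sparse OAOA updates.

    ekf               : object exposing predict(ins_sample) and update(z), each
                        returning the current navigation state (illustrative API)
    ins_epochs        : sequence of 100 Hz INS mechanization outputs
    oaoa_measurements : dict mapping epoch index -> OAOA observation vector
    oaoa_every_n      : 10 for a 10 Hz OAOA update rate, 20 for 5 Hz
    """
    estimates = []
    for k, ins_sample in enumerate(ins_epochs):
        state = ekf.predict(ins_sample)                    # propagate at every INS epoch
        if k % oaoa_every_n == 0 and k in oaoa_measurements:
            state = ekf.update(oaoa_measurements[k])       # correct when an OAOA fix arrives
        estimates.append(state)
    return estimates
```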
Another important observation from Tables 4.1 and 4.2 is that, in the case of augmented observations, the maximum error does not increase as much as the average error when the update rate is halved. When the update rate is halved, the average error for the loosely coupled system with augmented observations increases by 41%, while the maximum error increases by only 25%. For the tightly coupled system with augmented observations, the increase in the average error is 79%, while the maximum error increases by only 10%. Therefore, it can be concluded that the algorithms with augmented observations appear to bound the maximum error, which may be significant for some applications.

The amount of noise inserted to corrupt the simulated OAOA observation data can be varied to observe the effect of noise on the performance of the proposed algorithms. Additive white Gaussian noise with mean 0° and variance 2° was chosen to generate the results in Table 4.1 and Table 4.2. Here, additive white Gaussian noise with mean 0° and variance 3° is chosen instead, to increase the amount of noise in the simulated AOA values. Using these noise-corrupted AOA measurements, a trajectory similar to that shown in Figure 4.4(a) is obtained; in this case the maximum error is 10.13 cm and the average error is 5.94 cm.

Table 4.3: Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update rate is 10 Hz; AWGN with 0 mean and a variance of 3° is added to the simulated azimuthal angle values).

IPS strategy                       Max error [cm]   Average error [cm]
OAOA-only                          10.13            5.94
Loosely coupled (LC)               4.74             3.39
Tightly coupled (TC)               4.82             2.49
LC with augmented observations     4.07             2.97
TC with augmented observations     4.12             2.34

Similar to Table 4.1, Table 4.3 compares the OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations algorithms, but with an increased amount of noise. As expected, the maximum and average errors are increased compared to Table 4.1.

Note that in Table 4.3 the tightly coupled system has better accuracy than the loosely coupled system, and that the algorithms with augmented observations do provide some improvement in this case compared to the algorithms without augmented observations. This result differs slightly from that shown in Table 4.1, where the improvement of the algorithms with augmented observations is not significant. It can be concluded that with a higher amount of noise, the two algorithms with augmented observations perform better than the algorithms without augmented observations. Due to the increased amount of noise, there is room for the algorithms with augmented observations to improve the performance by providing a more accurate positioning solution. In the case of lower noise, the performance of the algorithms without augmented observations is comparatively better, and therefore it may not be possible to improve performance further with the inclusion of augmented observations.
Table 4.4: Performance comparison of OAOA-only, loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update rate is 5 Hz; AWGN with 0 mean and a variance of 3° is added to the simulated azimuthal angle values).

IPS strategy                       Max error [cm]   Average error [cm]
OAOA-only                          10.13            5.94
Loosely coupled (LC)               8.76             6.92
Tightly coupled (TC)               7.98             4.29
LC with augmented observations     6.27             3.88
TC with augmented observations     5.11             3.21

Table 4.4 is similar to Table 4.2 but with noise of variance 3° instead of 2°. In the case of Tables 4.2 and 4.4, the update rate from the OAOA sensor is 5 Hz. The results of Table 4.4 are similar to those of Table 4.2 but with an increased amount of error, as expected. The improved performance of the algorithms with augmented observations is also evident in this case. In addition, as observed when comparing Tables 4.1 and 4.2, the performance of the integration algorithms in Table 4.4 is worse than in Table 4.3 due to the lower OAOA update rate used for the results in Table 4.4.

Table 4.5: Effect of magnetometer bias estimation on the performance of loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations integration algorithms (OAOA update rate is 10 Hz; AWGN with 0 mean and a variance of 2° is added to the simulated azimuthal angle values).

IPS strategy                       Avg. error without bias estimation [cm]   Avg. error with bias estimation [cm]
Loosely coupled (LC)               2.43                                      2.34
Tightly coupled (TC)               1.78                                      1.67
LC with augmented observations     2.32                                      2.20
TC with augmented observations     1.71                                      1.59

As stated in Section 3.3, for an indoor positioning solution, magnetometer error estimation should result in a more accurate positioning solution. The performance improvement obtained by estimating the magnetometer bias is shown in Table 4.5. The results of Table 4.5 are obtained under the same experimental conditions as those of Table 4.1. The first error column gives the average position error for all four algorithms without magnetometer bias estimation; the second error column gives the average position error with magnetometer bias estimation.

Magnetometer bias estimation improves the average positioning error of the loosely coupled, tightly coupled, loosely coupled with augmented observations, and tightly coupled with augmented observations algorithms by 3.7%, 6.2%, 5.2%, and 7.0%, respectively, compared to the case when magnetometer bias estimation is not performed. From Table 4.5 it can be concluded that adding magnetometer errors to the EKF state vector for estimating the magnetometer bias can improve the overall positioning performance.
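As a concrete illustration of what adding these error states involves, the following sketch shows how an existing error-state vector and its covariance matrices could be augmented with three magnetometer bias states. It assumes the bias is modelled as a slowly varying random walk with illustrative noise values; this is an assumption for illustration, not the exact error model of Chapter 3.

```python
import numpy as np

def augment_with_mag_bias(x, P, Q, var_bias=1e-2, q_bias=1e-6):
    """Append three magnetometer bias error states to an error-state EKF.

    x : current error-state vector, shape (n,)
    P : state covariance, shape (n, n)
    Q : process-noise covariance, shape (n, n)
    var_bias, q_bias : illustrative prior variance and process noise for the bias states
    """
    n = x.size
    x_aug = np.concatenate([x, np.zeros(3)])      # bias error states start at zero
    P_aug = np.zeros((n + 3, n + 3))
    P_aug[:n, :n] = P
    P_aug[n:, n:] = np.eye(3) * var_bias          # prior uncertainty of the magnetometer bias
    Q_aug = np.zeros((n + 3, n + 3))
    Q_aug[:n, :n] = Q
    Q_aug[n:, n:] = np.eye(3) * q_bias            # bias modelled as a slow random walk
    return x_aug, P_aug, Q_aug
```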
Chapter 5

Conclusions and Future Work

This research is based on a novel indoor positioning solution [4], proposed in 2011, that uses the AOA of light measured by a newly devised differential photosensor. However, because the system is based on visible light, an unobstructed LOS view between the optical source and the photosensor is required; anything blocking the LOS view between source and sensor will destroy the integrity of the positioning system. Moreover, due to the independent measurement noise in the OAOA measurements, the resultant trajectory of the OAOA-based positioning solution is not smooth. These problems with the newly proposed indoor positioning system were the motivation behind this research. For outdoor positioning, an inertial navigation system can be used as a backup navigation system in the case of a GNSS outage. The problem at hand, being similar to GNSS outages in outdoor environments, led to the idea of an indoor positioning solution that combines INS and OAOA-based positioning to provide a more accurate and reliable indoor positioning solution.

The objective of this research was to develop algorithms to integrate OAOA-based indoor positioning with INS-based positioning to provide a more accurate and reliable positioning solution for indoor environments. Four different algorithms were developed for this purpose and compared in terms of their performance. In this chapter, the key conclusions of this research are summarized. This is followed by recommendations for future work.

5.1 Developed Algorithms

For the four different algorithms developed to integrate OAOA-based positioning with INS-based positioning, an extended Kalman filter was used. The loosely coupled algorithm and tightly coupled algorithm were developed first. Then the observation vector for the EKF was augmented by adding an accelerometer bias estimate and a quaternion vector estimate. This led to two more algorithms, namely the loosely coupled algorithm with augmented observations and the tightly coupled algorithm with augmented observations.

Error modeling of the magnetometer was introduced in the proposed algorithms because even a small magnetometer bias can cause a heading error that leads to significant performance degradation in the integrated system for indoor environments. Inclusion of the magnetometer bias estimation improves the average positioning error of the four algorithms by 3.7% to 7%.

Since mathematical singularities may occur when using Euler angles, quaternion algebra was used instead to update the rotation matrix. Therefore, the EKF was used here to estimate the quaternion parameters in addition to the conventional parameters of an error state Kalman filter.

The four proposed algorithms all performed better than the OAOA-only indoor positioning solution. A maximum error of 7.34 cm and an average error of 4.21 cm for OAOA-only positioning were reduced to 3.15 cm and 2.34 cm, respectively, by using the loosely coupled integration algorithm. This is a reduction of approximately 57% in the maximum error and 44% in the average error. For the same experiment, the tightly coupled integration algorithm outperforms the loosely coupled algorithm by reducing the average error to 1.67 cm, an improvement of approximately 60% over the OAOA-only solution. From all experiments conducted, it can be concluded that the tightly coupled system improves the average error by at least 14% compared to the loosely coupled algorithm.

The algorithms with augmented observations perform significantly better than those without augmented observations where the OAOA update rate is low. An average position error of 4.89 cm is reduced to 3.11 cm by using the loosely coupled algorithm with augmented observations instead of the loosely coupled algorithm without observation augmentation. This is an improvement of approximately 36.4%. For the tightly coupled system, this improvement in the average error is approximately 32.3%.
Improvement of performance by using augmented observations is also evident where overall performance is worse due to an increased amount of noise added to the simulated OAOA measurements. An improvement of approximately 12.3% in the average error is found using the loosely coupled algorithm with augmented observations compared to the loosely coupled algorithm alone, for an OAOA update rate of 10 Hz and a noise variance of 3°. For the tightly coupled algorithm this improvement is approximately 6%. It can be concluded that if the update rate of the OAOA sensor is low, or if overall performance is worse due to an increased amount of noise in the simulated OAOA measurements, the algorithms with augmented observations perform significantly better.

5.2 Limitations and Future Work

The main limitation of this work is the lack of experimental data. At the time of conducting the research, the newly devised OAOA sensor was not able to collect data in a dynamic environment. Therefore, simulated data for the OAOA sensor was used for experimental purposes. This also limited the experimental trajectory to a path of known geometry. In this case, a turntable was used for the experiment and, hence, the trajectory was circular. Theoretical ('true') positions for the OAOA sensor can be easily calculated given the rotation speed of the turntable. By adding the necessary circuitry to the OAOA sensor to collect data in a dynamic environment, experiments could be conducted for arbitrary trajectories and arbitrary velocities. This would test the robustness of the proposed algorithms.

For this work, all algorithms were implemented on a computer with significant processing capacity. However, for a practical indoor positioning solution working in real time, the microprocessor in the tracked device will most likely have limited computational capacity. Therefore, a study of the computational requirements of the proposed algorithms should be conducted. The computational complexity of the different algorithms may limit their use in an indoor positioning system with limited computational capability.

Bibliography

[1] The Concise Oxford Dictionary, 9th ed. Oxford, U.K.: Oxford University Press, 1995.

[2] M. Depsey, "Indoor positioning systems in healthcare," Radianse Inc., White Paper, 2003.

[3] A. L. Yanying Gu and I. Niemegeers, "A survey of indoor positioning systems for wireless personal networks," IEEE Communication Surveys & Tutorials, vol. 11, pp. 13–32, 2009.

[4] A. Arafa, X. Jin, and R. Klukas, "A differential photosensor for indoor optical wireless positioning," in 24th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2011), Portland, OR, 2011.

[5] ——, "Wireless indoor optical positioning with a differential photosensor," IEEE Photonics Technology Letters, vol. 24, pp. 1027–1029, Jun 2012.

[6] X. Jin and J. F. Holzman, "Differential retro-detection for remote sensing applications," IEEE Sensors Journal, vol. 10, no. 12, pp. 1883–1887, Dec. 2010.

[7] C. A. Patterson, R. R. Muntz, and C. M. Pancake, "Challenges in location-aware computing," IEEE Pervasive Computing, vol. 2, no. 2, pp. 80–89, 2003.

[8] J. Hightower and G. Borriello, "Location sensing techniques," Technical Report UW CSE 2001-07-30, Department of Computer Science and Engineering, University of Washington, 2001.
[9] K. Kaemarungsi and P. Krishnamurthy, "Properties of indoor received signal strength for WLAN location fingerprinting," in Proc. 1st Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services (MobiQuitous 04), Boston, Mass., USA, Aug 2004, pp. 14–23.

[10] P. D. Groves, Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems. Boston: Artech House, 2008.

[11] M. Vossiek, L. Wiebking, P. Gulden, J. Wiehardt, C. Hoffmann, and P. Heide, "Wireless local positioning," IEEE Microwave Magazine, vol. 4, pp. 77–86, Dec 2003.

[12] J. C. Chen, Y. C. Wang, C. S. Maa, and J. T. Chen, "Network-side mobile position location using factor graphs," IEEE Trans. Wireless Communications, vol. 5, no. 10, pp. 46–52, Oct 2006.

[13] R. Casas, D. Cuartielles, A. Marco, H. J. Gracia, and J. L. Falc, "Hidden issues in deploying an indoor location system," IEEE Pervasive Computing, vol. 6, no. 2, pp. 62–69, 2007.

[14] X. Fernando, S. Krishnan, H. Sun, and K. Kazemi-Moud, "Adaptive denoising at infrared wireless receivers," in Proc. SPIE, 2003.

[15] A. Ward, A. Jones, and A. Hopper, "A new location technique for the active office," IEEE Personal Communications, vol. 4, no. 5, pp. 42–47, Oct 1997.

[16] N. Priyantha, A. Chakraborty, and H. Balakrishnan, "The cricket location-support system," in Proc. ACM Conference on Mobile Computing and Networking, 2000.

[17] N. B. Priyantha, The Cricket Indoor Location System. PhD thesis, MIT, 2005.

[18] (2008) Sonitor system website. [Online]. Available: http://www.sonitor.com/

[19] H. M. Y. Fukuju, M. Minami and T. Aoyama, "Dolphin: An autonomous indoor positioning system in ubiquitous computing environment," in Proc. IEEE Workshop on Software Technologies for Future Embedded Systems, May 2003.

[20] M. S. H. Piontek and J. Kaiser, "Improving the accuracy of ultrasound-based localisation systems," in Proc. International Workshop on Location- and Context-Awareness, 2005.

[21] L. M. Ni and Y. Liu, "Landmarc: Indoor location sensing using active RFID," in Proc. IEEE International Conference on Pervasive Computing and Communications, 2003, pp. 407–416.

[22] (2008) Wherenet web site. [Online]. Available: http://edu.symbol.com/docentauthorware/WhereNet/intro/intro1.asp

[23] P. Bahl and V. Padmanabhan, "RADAR: An in-building RF based user location and tracking system," in Proc. IEEE INFOCOM, vol. 2, Mar 2000, pp. 775–784.

[24] (2008) Ekahau. [Online]. Available: http://www.ekahau.com/

[25] Y. Wang, X. Jia, and H. K. Lee, "An indoor wireless positioning system based on wireless local area network infrastructure," in Proc. 6th International Symposium on Satellite Navigation Technology Including Mobile Positioning and Location Services, 2003.

[26] T. Kitasuka, K. Hisazumi, T. Nakanishi, and A. Fukuda, "WIPS: Location and motion sensing technique of IEEE 802.11 devices," in Proc. 3rd International Conference on Information Technology and Applications (ICITA05), vol. 2, 2005, pp. 346–349.

[27] S. Kawakubo, A. Chansavang, S. Tanaka, T. Iwasaki, K. Sasaki, T. Hirota, H. Hosaka, and H. Ando, "Wireless network system for indoor human positioning," in Proc. 1st International Symposium on Wireless Pervasive Computing, 2006, pp. 1–6.

[28] A. Genco, "Three step bluetooth positioning," in Lecture Notes in Computer Science, vol. 3479, 2005, pp. 52–62.

[29] J. Hallberg, M. Nilsson, and K. Synnes, "Positioning with bluetooth," in Proc. 10th International Conference on Telecommunications, 2003.
[30] M. Rodriguez, J. P. Pece, and C. J. Escudero, "In-building location using bluetooth," in Proc. IWWAN, 2005.

[31] J. C. F. Michel, M. Christmann, M. Fiegert, P. Gulden, and M. Vossiek, "Multisensor based indoor vehicle localization system for production and logistic," in Proc. IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Heidelberg, Germany, Sep 2006, pp. 553–558.

[32] F. Raab, E. B. Blood, T. O. Steiner, and H. R. Jones, "Magnetic position and orientation tracking system," IEEE Trans. Aerospace and Electronic Systems, vol. AES-15, no. 5, pp. 709–718, Sep 1979.

[33] J. Krumm, S. Harris, B. Meyers, B. Brumitt, M. Hale, and S. Shafer, "Multi-camera multi-person tracking for easy living," in Proc. 3rd IEEE Intl Workshop Visual Surveillance, 2000.

[34] D. Focken and R. Stiefelhagen, "Towards vision-based 3-D people tracking in a smart room," in Proc. 4th IEEE International Conference on Multimodal Interfaces, Oct 2002.

[35] A. Madhavapeddy, D. Scott, and R. Sharp, "Context-aware computing with sound," in Proc. 5th International Conference on Ubiquitous Computing, Oct 2003.

[36] N. El-Sheimy and X. Niu, "The promise of MEMS to the navigation community," Inside GNSS, pp. 46–55, Mar/Apr 2007.

[37] M. Petovello, C. O'Driscoll, and G. Lachapelle, "Ultra-tight GPS/INS for carrier phase positioning in weak-signal environments," NATO RTO SET-104 Symposium on Military Capabilities Enabled by Advances in Navigation Sensors, Oct 2007.

[38] D. Titterton and J. Weston, Strapdown Inertial Navigation Technology, 2nd ed. Stevenage, U.K.: IEE, 2004.

[39] M. Grewal, L. Weill, and A. Andrews, Global Positioning Systems, Inertial Navigation, and Integration. New York, U.S.: Wiley, 2001.

[40] Gebre-Egziabher, M. Petovello, and G. Lachapelle, "Weighting GNSS observations and variations of GNSS/INS integration," GNSS Solutions: Inside GNSS, pp. 26–33, 2007.

[41] M. Petovello, Real-Time Integration of a Tactical-Grade IMU and GPS for High-Accuracy Positioning and Navigation. Ph.D. dissertation, Department of Geomatics Engineering, University of Calgary, 2003.

[42] R. V. Wong, Development of a RLG strapdown inertial survey system. Ph.D. dissertation, Department of Geomatics Engineering, University of Calgary, 1988.

[43] F. Sadi, Jump Parameter Estimation with Low Cost MEMS Sensors and GPS for Action Sports Goggles. Master's thesis, School of Engineering, University of British Columbia, Okanagan, 2011.

[44] K. P. Schwarz and M. Wei, Inertial Navigation Systems with Geodetic Applications. Partial Lecture Notes for ENGO 623, Department of Geomatics Engineering, The University of Calgary.

[45] N. El-Sheimy, "The potential of partial IMUs for land vehicle navigation," Inside GNSS, Spring 2008.

[46] S. Godha, Performance evaluation of low cost MEMS-based IMU integrated with GPS for land vehicle navigation application. Master's thesis, Department of Geomatics Engineering, University of Calgary, Feb. 2006.

[47] K. P. Schwarz and M. Wei, INS/GPS Integration for Geodetic Applications. New York, NY, USA: Walter de Gruyter.

[48] K. J. Walchko and D. P. A. C. Mason, "Inertial navigation," in Conference on Recent Advances in Robotics, 2002.

[49] S. Nassar, Improving the inertial navigation system (INS) error model for INS and INS/DGPS applications. Ph.D. dissertation, Department of Geomatics Engineering, University of Calgary, 2003.
[50] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall PTR, Upper Saddle River, 1993.

[51] M. Grewal and A. Andrews, Kalman Filtering: Theory and Practice Using MATLAB. Wiley-Interscience Publications, 2001.
