"Applied Science, Faculty of"@en .
"Electrical and Computer Engineering, Department of"@en .
"DSpace"@en .
"UBCV"@en .
"Lee, Kung-Chung"@en .
"2010-09-28T14:46:12Z"@en .
"2010"@en .
"Master of Applied Science - MASc"@en .
"University of British Columbia"@en .
"The task of estimating the location of a mobile transceiver using the Received Signal Strength Indication (RSSI) values of radio transmissions to/from other radios is an inference problem. The fingerprinting paradigm is the most promising genre of methods studied in the literature. It constructs deterministic or probabilistic models from data sampled at the site. Probabilistic formulations are popular because they can be used under the Bayesian filter framework. We also categorize fingerprinting methods into regression or classification. The vast majority of existing methods perform regression as they estimate location information in terms of position coordinates. In contrast, the classification approach only estimates a specific region (e.g., kitchen or bedroom). This thesis is a continuation of studies on the fingerprinting paradigm. For the regression approach, we perform a comparison between the Unscentend Kalman Filter (UKF) and the Particle Filter (PF), two suboptimal solutions for the Bayesian filter. The UKF assumes near-linearity and imposes unimodal Gaussian densities while the PF does not. These assumptions are very fragile and we show that the UKF is not a robust solution in practice. For the classification approach, we are intrigued by a simple method we name the Simple Gaussian Classifier (SGC). We ponder if this simple method comes at a cost in terms of classfication errors. We compare the SGC against the K-Nearest Neighbor (KNN) and Support Vector Machine (SVM), two other popular classifiers. Experimental results present evidence that the SGC is very competitive. Furthermore, because the SGC is written in closed-form, it can be used directly under the Bayesian filter framework, which is better known as the Hidden Markov Model (HMM) filter. The fingerprinting paradigm is powerful but it suffers from the fact that conditions may change. 
We propose extending the Bayesian filter framework by utilizing the filter derivative to realize an online estimation scheme, which tracks the time-varying parameters. Preliminary results show some promise but further work is needed to validate its performance."@en .
"https://circle.library.ubc.ca/rest/handle/2429/28750?expand=metadata"@en .
"Localization Systems Using Signal Strength Fingerprinting by Kung-Chung Lee BASc Engineering Physics, The University of British Columbia, 2007 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering) THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver) September 2010 c\u00C2\u00A9 Kung-Chung Lee, 2010 Abstract The task of estimating the location of a mobile transceiver using the Received Sig- nal Strength Indication (RSSI) values of radio transmissions to/from other radios is an inference problem. The fingerprinting paradigm is the most promising genre of methods studied in the literature. It constructs deterministic or probabilistic mod- els from data sampled at the site. Probabilistic formulations are popular because they can be used under the Bayesian filter framework. We also categorize finger- printing methods into regression or classification. The vast majority of existing methods perform regression as they estimate location information in terms of po- sition coordinates. In contrast, the classification approach only estimates a specific region (e.g., kitchen or bedroom). This thesis is a continuation of studies on the fingerprinting paradigm. For the regression approach, we perform a comparison between the Unscentend Kalman Filter (UKF) and the Particle Filter (PF), two suboptimal solutions for the Bayesian filter. The UKF assumes near-linearity and imposes unimodal Gaussian densities while the PF does not. These assumptions are very fragile and we show that the UKF is not a robust solution in practice. For the classification approach, we are intrigued by a simple method we name the Simple Gaussian Classifier (SGC). We ponder if this simple method comes at a cost in terms of classification errors. We compare the SGC against the K- Nearest Neighbor (KNN) and Support Vector Machine (SVM), two other popular classifiers. 
Experimental results present evidence that the SGC is very competitive. Furthermore, because the SGC is written in closed form, it can be used directly under the Bayesian filter framework, which is better known as the Hidden Markov Model (HMM) filter.

The fingerprinting paradigm is powerful but it suffers from the fact that conditions may change. We propose extending the Bayesian filter framework by utilizing the filter derivative to realize an online estimation scheme, which tracks the time-varying parameters. Preliminary results show some promise but further work is needed to validate its performance.

Preface

• A version of Chapter 5 has been published. Kung-Chung Lee, Anand Oka, Emmanuel Pollakis, and Lutz Lampe, 'A Comparison between Unscented Kalman Filtering and Particle Filtering for RSSI-based Tracking,' in Proc. of 7th Workshop on Positioning, Navigation and Communication (WPNC), Dresden, Germany, March 2010.

• A version of Chapter 6 has been submitted for publication. Kung-Chung Lee and Lutz Lampe, 'Indoor Cell-Level Localization Based on RSSI Classification,' submitted to 2011 Canadian Conference on Electrical and Computer Engineering (CCECE), Niagara Falls, Ontario, Canada, May 2011.

I have transferred my copyright to the organizers of the conferences above. However, I have retained my copyright for writing this thesis.

I am the primary author for the publications above. I have performed the majority of the work. Tasks include but are not limited to literature review, software and hardware design, performing experiments, data analysis and manuscript editing. While my collaborators have provided invaluable help, their roles are secondary.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
Dedication
1 Introduction
  1.1 Scope and Contributions
  1.2 Organization
2 Background and Related Work
  2.1 The White Box Paradigm
    2.1.1 Ray Tracing
    2.1.2 The Path Loss Model
  2.2 The Fingerprinting Paradigm
3 Bayesian Filtering
4 Two Examples of Applying the Path Loss Model
  4.1 The First Setup
  4.2 The Second Setup
5 Unscented Kalman Filtering versus Particle Filtering
  5.1 System Model
  5.2 The Unscented Kalman Filter (UKF)
  5.3 The Particle Filter (PF)
  5.4 Simulations
    5.4.1 Uniform Radio Environment
    5.4.2 Diverse Radio Environment
  5.5 Experimental Results
  5.6 Conclusion
6 A Comparison of Three Classifiers
  6.1 System Model and Classifiers
    6.1.1 K-Nearest Neighbor (KNN)
    6.1.2 Support Vector Machine (SVM)
    6.1.3 The Simple Gaussian Classifier (SGC)
  6.2 Results without Filtering
    6.2.1 The First Test
    6.2.2 The Second Test
  6.3 The Hidden Markov Model (HMM) Filter
  6.4 Results with Filtering
  6.5 Conclusion
7 Online Parameter Estimation for the General Bayesian Filter
  7.1 The Marginal Particle Filter and the Filter Derivative
  7.2 Simulations
    7.2.1 Tracking the Path Loss Exponent
    7.2.2 Tracking Means of Gaussians
  7.3 Discussion and Future Work
8 Conclusion and Future Work
Bibliography

List of Tables

Table 5.1 Target Tracking Results in a Uniform Radio Environment
Table 5.2 Target Tracking Results in a Diverse Radio Environment
Table 5.3 Tracking Results in an Office
Table 6.1 KNN Results (K = 1210) for the First Test
Table 6.2 SVM Results (C = 10^−1.1 and γ = 10^0.3) for the First Test
Table 6.3 SGC Results for the First Test
Table 6.4 SGC Accuracy Bounds for the First Test
Table 6.5 KNN Results (K = 186) for the Second Test
Table 6.6 SVM Results (C = 10^0.4 and γ = 10^0.8) for the Second Test
Table 6.7 SGC Results for the Second Test
Table 6.8 SGC Accuracy Bounds for the Second Test
Table 6.9 Filtered SGC Results for the First Test
Table 6.10 Filtered SGC Results for the Second Test

List of Figures

Figure 3.1 A Graphical Illustration of the Bayesian Filter
Figure 4.1 Room Kaiser 2020
Figure 4.2 A ZigBee Radio
Figure 4.3 RSSI Values and the Fitted Model for the First Setup
Figure 4.4 Distribution of the Residuals for the First Setup
Figure 4.5 The Conference Table
Figure 4.6 The Residuals and the RSSI Values for the Second Setup
Figure 5.1 The Floor Plan for Simulations
Figure 5.2 The First Position Coordinate Part of the Particles from Simulation Results
Figure 5.3 Floor Plan of Kaiser 4090
Figure 5.4 The Γn(·) Values from Experimental Results
Figure 5.5 Horizontal Coordinate from Experimental Results
Figure 6.1 A Graphical Illustration of the SVM
Figure 6.2 Floor Plan of the Environment and the Cell Numbers
Figure 7.1 History of the Path Loss Exponent
Figure 7.2 History of the Mean of the Gaussian for the First Cell
Figure 7.3 History of the Mean of the Gaussian for the Second Cell
Glossary

The following are abbreviations and acronyms used in this thesis, listed in alphabetical order.

ANN Artificial Neural Networks
AOA Angle of Arrival
AP Access Point
BER Bit Error Rate
EKF Extended Kalman Filter
EM Expectation-Maximization
FC Fusion Center
GPS Global Positioning System
HMM Hidden Markov Model
KF Kalman Filter
KNN K-Nearest Neighbor
LDA Linear Discriminant Analysis
LOS Line-of-Sight
ML Maximum Likelihood
MSE Mean Squared Error
NLOS Non-Line-of-Sight
PEP Pairwise Error Probability
PF Particle Filter
RBF Radial Basis Function
RF Radio Frequency
RSSI Received Signal Strength Indication
SGC Simple Gaussian Classifier
SIR Sampling Importance Resampling
SNR Signal-to-Noise Ratio
SVM Support Vector Machine
TDOA Time Difference of Arrival
TOA Time of Arrival
UKF Unscented Kalman Filter
UWB Ultra-Wideband
WLAN Wireless Local Area Network

Acknowledgments

I have been fortunate to work with many supportive friends from Kaiser 4090 at the University of British Columbia. In particular, the first half of this thesis came directly from collaborative work with Anand Oka and Emmanuel Pollakis. I want to thank the people at Wireless 2000, Vladimir Goldenberg, Harry Lam, Cuong Lai and Vincent Chan, for their incredible support even during difficult times. I want to thank Wenbo Shi for sharing his experimental data. I learned a great deal from Professor Nando de Freitas' course on Machine Learning and his helpful insights on Bayesian filtering. I would also like to thank the committee members, Professor Cyril Leung and Professor Vikram Krishnamurthy, for taking time out of their busy schedules. Last but not least, I would like to express my sincere gratitude to Professor Lutz Lampe for his support. This thesis would not have been possible without his patient and detailed help.
Dedication

in principio creavit Deus caelum et terram
terra autem erat inanis et vacua
et tenebrae super faciem abyssi
et spiritus Dei ferebatur super aquas
dixitque Deus fiat lux

∇ · E = ρ/ε₀
∇ · B = 0
∇ × E = −∂B/∂t
∇ × B = μ₀J + μ₀ε₀ ∂E/∂t

et facta est lux

Chapter 1

Introduction

Location-specific services have become increasingly popular in recent years. Applications include surveillance, access and inventory control, robotics and even location-based marketing [1]. Services enabled by the Global Positioning System (GPS) are ubiquitous. Unfortunately, the use of GPS in indoor environments is quite limited due to the fact that there is rarely a Line-of-Sight (LOS) between a device and a satellite. An alternative solution is to utilize a possibly pre-existing indoor wireless network. The network locates mobile targets carrying transceivers by exploiting metrics of Radio Frequency (RF) transmissions to/from other radios. Typically, a number of sensors are installed at fixed locations and they monitor the mobile transceivers. A sensor may simply be an Access Point (AP) of the network. It is also possible to perform localization in an ad hoc network where every radio is mobile. Traditional metrics include Time of Arrival (TOA), Time Difference of Arrival (TDOA), Angle of Arrival (AOA) and Received Signal Strength Indication (RSSI) [1, 2]. Other possible but less popular metrics include network topology and hop count [3], Bit Error Rate (BER) [4] and Signal-to-Noise Ratio (SNR) [5]. For low-cost applications, TOA and AOA are not particularly attractive because they usually require dedicated hardware components. RSSI-based schemes have the unique advantage that the information is readily available.
In almost every technology, RSSI readings are passed to higher layers for evaluating the quality of communication links. Although these readings are not built for localization purposes and they are often not very precise, many studies have shown that it is possible to perform localization, albeit not at submeter accuracies. Technologies such as ZigBee, Bluetooth and Wireless Local Area Network (WLAN) are ubiquitous. Thus, in a sense, RSSI-based schemes almost come for 'free'. Compared to other possibly free metrics such as SNR, RSSI is found to be more dependent on location and is thus better suited for localization [5]. Therefore, RSSI-based schemes are the most promising for low-cost applications.

Fundamentally, RSSI-based localization is an inference problem. Given some RSSI measurements, the goal is to estimate the location information of the mobile targets. This seemingly easy task is complicated by the fact that it is difficult to obtain propagation models for indoor environments. Due to reflections, refractions and other multipath effects, it is challenging to describe the properties of signal strength measurements. Numerous methods and algorithms have been proposed and studied. The fingerprinting paradigm is the most promising genre of methods. The paradigm works by sampling RSSI values at various locations in the area in order to construct deterministic or probabilistic models. The sampling step is known as offline training and it is the most time-consuming part because it requires human intervention. We argue that, for low-cost applications, a localization algorithm needs to be robust and low in complexity. Therefore, we may sacrifice some performance by using a model that may not fit the empirical data well but is simple to train and requires less human intervention.

1.1 Scope and Contributions

The scope of this thesis is limited to RSSI-based schemes using low-cost components.
We assume that there is only one mobile transceiver and all the other radios are fixed and used as sensors. We bypass the problem of data association and assume that we can uniquely determine the source and destination of a transmission. We categorize state-of-the-art fingerprinting methods into regression or classification. Furthermore, we use Bayesian filtering to improve the performance of the methods as well as to potentially solve some of the deficiencies of the fingerprinting paradigm. Specifically, the main contributions are:

• A comparison between the Unscented Kalman Filter (UKF) and the Particle Filter (PF), two suboptimal solutions for the Bayesian filter.

• An emphasis on the use of classification due to its simplicity and a comparison between three classifiers.

• A preliminary study on using a Maximum Likelihood (ML) scheme under the Bayesian filter framework in order to handle imperfectly trained and time-varying parameters.

1.2 Organization

The rest of this thesis is organized as follows: We present a brief overview of popular methods studied in the literature in Chapter 2. Probabilistic formulations can be used under the Bayesian filter framework introduced in Chapter 3. Chapter 4 shows some empirical data in two different setups, which serve as motivation for the following chapters. The next three chapters form the novel contributions of this thesis. Chapter 5 details a comparison between the UKF and PF for the regression approach. Chapter 6 goes over the classification approach and performs a comparison between three classifiers. Chapter 7 shows our preliminary work on combating imperfectly trained and time-varying parameters. Finally, Chapter 8 concludes this thesis.
Chapter 2

Background and Related Work

A rigorous mathematical approach to the task of RSSI-based localization is to explicitly construct a function y = h(x), where y are the RSSI measurements and x are the position coordinates. This function is rarely invertible, so the standard approach is to choose an estimate x̂ that minimizes ‖y − h(x̂)‖ [6, 7]. This optimization problem may be substituted by less rigorous methods such as the bounding box [2]. Another approach is to construct a probability density p(y|x), and the estimate x̂ should maximize the probability p(x|y). It is very common to convert the first deterministic framework to the second probabilistic framework by including additive noise terms, e.g., y = h(x) + w. However, we emphasize that there are other methods which do not rely on constructing functions or probability densities.

This chapter reviews the literature on RSSI-based localization. Filtering is not considered for now; the use of filtering will be discussed in Chapter 3. The following briefly summarizes the white box approach followed by the fingerprinting approach.

2.1 The White Box Paradigm

The following are two approaches that attempt to construct models from theoretical backgrounds. They can be said to be white box approaches.

2.1.1 Ray Tracing

Ray tracing is the most fundamental method as its only assumptions are the laws of physics. Starting at the transmitter, radio waves are modeled as rays and they are traced as they hit obstacles, experiencing multipath effects. [8, 9] are examples of simple ray tracing. [10] is an example of more sophisticated methods.

2.1.2 The Path Loss Model

In free space, using the laws of physics, it can be shown that RSSI decreases proportionally to the square of the separation distance between two radios [8].
In the special case of the 2-ray model, the two radios are assumed to be high above the ground, and the only propagation path other than the direct LOS path is the ground reflection. Using the small-angle approximation and assuming the radios are placed sufficiently far apart, it can be shown that the rate of decay is proportional to the fourth power of the separation distance [8]. Formally, the signal strength y in dBmW is

y = Γ − 10ρ log₁₀ d,  (2.1)

where Γ is some additive constant, ρ is the path loss exponent (ρ = 2 for free space propagation and ρ = 4 for the 2-ray model) and d is the separation distance. This is a log-linear model, as ρ plays the role of the slope and Γ plays the role of the bias. It should be emphasized that free space propagation and the 2-ray model are both special cases. In real environments, it is very difficult to derive the path loss exponent or the bias. The classical approach is to take some measurements and attempt to adjust the parameters of the log-linear model such that the model fits the data. In addition, additive Gaussian noise is often included to model shadowing. Therefore, the standard path loss model is

y = Γ − 10ρ log₁₀ d + w,  (2.2)

where Γ and ρ are parameters of the setup, d is the separation distance and w is the zero-mean Gaussian noise [8, 11]. Extensive measurements have been conducted and it has been shown that higher values of ρ correspond to more absorptive environments, and Γ may be a function of antenna gains, transmit power of the transmitter and frequency of transmissions [8].

Although the path loss model has been applied successfully in outdoor environments, its use in indoor environments is more limited. [10] uses ray tracing to show that it does not hold well in environments where reflections dominate.
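The deterministic formulation from the beginning of this chapter, choosing the estimate x̂ that minimizes ‖y − h(x̂)‖, can be paired with the path loss model of Equation (2.1) to form a complete toy localizer. The sketch below is illustrative only: the three sensor positions, the values of Γ and ρ, and the grid resolution are all invented for the example and do not come from the thesis.

```python
import math

# Hypothetical setup: three fixed sensors at known positions.
# GAMMA (additive constant, dBm) and RHO (path loss exponent)
# are invented values for illustration.
SENSORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
GAMMA, RHO = -40.0, 2.5

def h(x):
    """Predicted RSSI vector at position x = (px, py), one entry per
    sensor, using y = GAMMA - 10*RHO*log10(d) from Equation (2.1)."""
    px, py = x
    return [GAMMA - 10.0 * RHO * math.log10(math.hypot(px - sx, py - sy))
            for sx, sy in SENSORS]

def residual_norm(y, x):
    """||y - h(x)||, the quantity the deterministic estimate minimizes."""
    return math.sqrt(sum((yi - hi) ** 2 for yi, hi in zip(y, h(x))))

def grid_search_estimate(y, step=0.25):
    """Exhaustive search over a coarse grid for the minimizing x_hat."""
    pts = [0.5 + step * k for k in range(int(9.0 / step))]
    return min(((gx, gy) for gx in pts for gy in pts),
               key=lambda x: residual_norm(y, x))

# Noiseless sanity check: measurements generated at a grid point
# are recovered exactly.
truth = (4.5, 6.0)
estimate = grid_search_estimate(h(truth))  # recovers (4.5, 6.0)
```

With the shadowing term w of Equation (2.2) added to the measurements, the same search still works but the estimate degrades with the noise variance, which is the regime the later chapters address with filtering.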
Since the approach of adjusting the parameters to fit the data is essentially regression, [12] uses the coefficient of determination, R², to quantify how good the fit is.

One possible extension to the standard path loss model is a divide-and-conquer strategy. Instead of using a global model for all parts of the area, the area is divided into cells and each cell possesses its own unique path loss model. To distinguish this from the global path loss model, this is denoted the piecewise path loss model. It is reasonable to assume that the path loss exponent and the Gaussian variance are global, but the key idea is that the biases are local. In [11], the authors use the terms additive floor and wall attenuation factors. For instance, two different cells may have different bias parameters because they have different wall attenuation factors. This approach has been recommended by the developers of RADAR [5].

2.2 The Fingerprinting Paradigm

In contrast to the white box paradigm, fingerprinting gives no regard to the laws of physics. It simply treats the problem as a black box. The only things that matter are the inputs and outputs. Using the language of Machine Learning [13, 14], this is treated as a supervised learning problem. The goal is to train a machine that learns the model and teach it how to perform inference. Fingerprinting works by sampling [15]. First, RSSI measurements are taken at known locations. A fingerprint at a specific location simply consists of a vector of RSSI measurements (instance) and the location information (label). The information can be continuous in terms of location coordinates or discrete. In the end, a radiomap or database is constructed with a number of these instance-label pairs. This is commonly called the offline training phase. Then, a model is fitted to the empirical data. If the labels applied are continuous, this is known as regression. If the labels applied are discrete, this is known as classification.
Finally, in the online working or validation phase, when an instance originating from an unknown location arrives, the knowledge learned is used to estimate the label. This powerful paradigm has two important assumptions. First, sampling must be done carefully in fine spatial intervals. Second, since the model learned is chosen to fit the offline database, it is assumed that conditions of the online working phase are the same as the conditions of the offline phase.

The vast majority of the literature chooses to apply continuous labels, i.e., position coordinates. This amounts to a regression problem. Many regression techniques have been proposed and studied. In fact, the path loss model discussed in Section 2.1.2 can be thought of as a regression technique, as empirical data is fitted to a log-linear model. Other examples include the weighted K-Nearest Neighbor (KNN) used in RADAR [5], regression trees [16], Artificial Neural Networks (ANN) [17, 18], Support Vector Machine (SVM) regression [19] and probabilistic methods [20–22]. The accuracies of all the proposed methods plateau around 2m using realistic setups.

To our best knowledge, the literature on the classification approach is relatively sparse compared to regression. For most applications, it is enough to know if the target is in some specific region (e.g., bedroom or kitchen). In fact, if we are only interested in contextual information, then a coordinate obtained from a localization algorithm would have to be converted using a map. This is still the same fingerprinting paradigm but the algorithm estimates a region instead of an exact position coordinate. Let a cell be a small region of interest. The entire area is divided into a finite number of cells and they are labeled numerically. Now, the labels are cell numbers instead of coordinates. Although this can be viewed as a coarse version of regression, it has two major advantages: First, the training phase is vastly simplified because cell numbers replace position coordinates, which have to be obtained tediously. This requires less human intervention. Second, this leads to a classification problem and the techniques involved are often simpler and easier to implement¹. [4] uses the SVM and the Linear Discriminant Analysis (LDA) to perform room-level localization. [19] uses SVM for both regression and classification, noting that it gives good classification results. [23] measures analog outputs from energy detectors of Ultra-Wideband (UWB) radios in a cell and models them as Gaussian distributions. The authors go into great detail to justify the Gaussian model. Their main assumption is that the relevant impulse responses of the energy detectors are realizations of Gaussian processes. Although the work done is for UWB energy detectors, the same principle works for RSSI-based schemes using general radios. To our best knowledge, [24] is the earliest work modeling RSSI values within a cell as Gaussian variables. We refer to it as the Simple Gaussian Classifier (SGC) due to its astonishing simplicity.

The fingerprinting paradigm is powerful precisely because detailed ray tracing is infeasible in practice. For instance, [9] uses simple ray tracing but resorts to collecting measurements in order to estimate the reflection and absorption coefficients of obstacles. The path loss model is only valid for simple cases, yet regression is used in order to fit measurements to the log-linear model.

¹The boundary between regression and classification is often blurred. For instance, a regression technique can easily be converted to a classifier by quantizing the final output. SVM is a native classifier but it can be modified to perform regression.
Nevertheless, we would like to point out that fingerprinting-based methods cannot handle theoretical questions such as the fundamental limits on the accuracies of algorithms or the sampling interval (in space and time) required. Furthermore, how to divide the area into cells is a difficult question. The standard approach using the fingerprinting paradigm is to proceed forward with an arbitrary scheme and see if it meets the performance requirements. To answer these theoretical questions, it is required to know the exact physical model, which can only be obtained via detailed ray tracing.

Chapter 3

Bayesian Filtering

In Chapter 2, many methods reviewed construct models of the form p(y|x). This is denoted the observation likelihood. The optimal estimate, in the Bayesian sense, should maximize the probability density

p(x|y) = p(y|x) p(x) / p(y).  (3.1)

Without filtering, i.e., when each task of inference is treated as a one-shot-in-time event, p(x) is assumed to be uniform and this becomes maximizing the observation likelihood p(y|x). To our knowledge, using Bayes' rule to perform localization first appeared in [22]. If x is continuous, then this optimization problem is typically difficult and requires numerical solutions. However, if x comes from a discrete set, then a simple search can be used.

Location information is highly correlated in time. Intuitively, if the target is known to be at a specific location, it is highly likely to be in the vicinity of the same location at some later time. In the literature, it is extremely common to use the discrete-time Bayesian filter framework¹. This approach is commonly named target tracking instead of static localization. Instead of assuming p(x) is uniform, time correlation is considered. First, let x_{1:t} ≜ [x_1, . . . , x_t], where t is the time index. x_t is the unknown state and it lives in the state space of the framework.
The transition or maneuver model p(x_t|x_{t−1}) describes how the target evolves in time in a Markovian fashion. Let y_{1:t} ≜ [y_1, . . . , y_t], where t is the time index. The observation model is the constructed model p(y_t|x_t). These two densities form the basis of the Bayesian filter:

p(x_t|x_{t−1}),  p(y_t|x_t).  (3.2)

This can be illustrated by the directed graph in Figure 3.1.

[Figure 3.1: A Graphical Illustration of the Bayesian Filter]

The standard choice for the initial density p(x_0) is the uniform distribution if the initial state of the target is unknown. The Bayesian filter consists of two stages, prediction²

p(x_t|y_{1:t−1}) = ∫ p(x_t|x_{t−1}) p(x_{t−1}|y_{1:t−1}) dx_{t−1}  (3.3)

and update

p(x_t|y_{1:t}) = p(y_t|x_t) p(x_t|y_{1:t−1}) / p(y_t|y_{1:t−1}).  (3.4)

The normalization factor in the denominator can be calculated using

p(y_t|y_{1:t−1}) = ∫ p(y_t|x_t) p(x_t|y_{1:t−1}) dx_t.  (3.5)

This step ensures that

∫ p(x_t|y_{1:t}) dx_t = 1.  (3.6)

It is common to combine the two steps into one, i.e.,

p(x_t|y_{1:t}) = c p(y_t|x_t) ∫ p(x_t|x_{t−1}) p(x_{t−1}|y_{1:t−1}) dx_{t−1},  (3.7)

where c is the normalization constant.

The filtered posterior density p(x_t|y_{1:t}) is conditioned on the history of observations y_{1:t}. The framework is recursive. Starting at some initial distribution p(x_0), the framework moves forward in time and calculates p(x_1|y_1), p(x_2|y_{1:2}), . . . every time slot.

¹The introduction in [25] is an extremely good read.
²This is also known as the Chapman-Kolmogorov equation.
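When the state space is a finite set of cells, the integrals in Equations (3.3) and (3.4) reduce to sums and one recursion step fits in a few lines. The following is a minimal sketch of that discrete recursion; the two-cell transition matrix and likelihood values in the example are invented for illustration.

```python
def hmm_filter_step(prior, transition, likelihood):
    """One prediction + update step of the discrete Bayesian (HMM) filter.
    prior:      p(x_{t-1} | y_{1:t-1}) as a list of N probabilities.
    transition: N x N, transition[i][j] = p(x_t = j | x_{t-1} = i).
    likelihood: p(y_t | x_t = j) for each cell j (any positive weights).
    Returns the filtered posterior p(x_t | y_{1:t})."""
    n = len(prior)
    # Prediction, Equation (3.3): sum over the previous state.
    predicted = [sum(prior[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    # Update, Equation (3.4): weight by the likelihood and normalize.
    unnorm = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(unnorm)  # the normalization factor of Equation (3.5)
    return [u / z for u in unnorm]

# Invented two-cell example: sticky transitions, evidence favoring cell 0.
posterior = hmm_filter_step([0.5, 0.5],
                            [[0.9, 0.1], [0.1, 0.9]],
                            [0.8, 0.2])  # -> [0.8, 0.2]
```

Running this step once per time slot, feeding each posterior back in as the next prior, is exactly the recursion p(x_1|y_1), p(x_2|y_{1:2}), . . . described above.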
As pointed out in [26], this deceptively simple framework has one major problem. These integral equations have no solutions except in two limited cases. The two exceptions are:

• If the transition and observation models written as probability densities are Gaussians, then the Kalman Filter (KF) [27] is the optimal solution. All the probability densities involved are Gaussians.

• If the state space is finite, i.e., x_t comes from a finite set, then the integrals in Equation 3.3 and Equation 3.4 are converted to sums because the probability densities involved are discrete probability mass functions. Normalization becomes trivial and this is called the Hidden Markov Model (HMM) filter [28]. This is the case for the classification approach since the number of cells is finite!

In all other cases, approximations must be used. This is another advantage of classification compared to regression because there is a closed-form solution.

Summarizing Chapter 2 and this chapter, probabilistic formulations are the most popular in the literature and they can be used in the Bayesian filter framework. Deterministic methods can be converted to probabilistic ones. The ones that cannot be converted typically use simple time averaging. For instance, [4] uses the SVM, which is a deterministic classifier. The authors use simple time averaging to improve the results.

Chapter 4

Two Examples of Applying the Path Loss Model

As discussed in Section 2.1.2, the path loss model is often cited in the literature. For localization purposes, its use is often said to be unreliable and unpredictable [2]. Although it can be backed up by theoretical analysis in specific cases, its use in real environments is backed up by the fingerprinting paradigm as discussed in Section 2.2. In this chapter, we take a closer look into this issue and demonstrate how well the path loss model works in two different setups.
For each setup, the area is deemed geometrically homogeneous such that it makes no sense to further divide the area into smaller cells. Therefore, the global path loss model is applied. This chapter serves as motivation for later chapters.

4.1 The First Setup

We take two ZigBee radios and perform a simple experiment in room Kaiser 2020 at the University of British Columbia (UBC). The modules used are Rabbit 4510W kits¹. According to the datasheet, the frequency of transmissions is 2.4 GHz. One radio is fixed and the other one is placed at various distances from the fixed radio. Figure 4.1 and Figure 4.2 show the setup.

Figure 4.1: Room Kaiser 2020
Figure 4.2: A ZigBee Radio

Because there is a LOS between the two radios and there are no major obstacles, the environment is close to ideal and the path loss model fits the data very well. Figure 4.3 shows the measured values and the fitted model. The fitted model is

ŷ = −18.3912 log_{10} d − 50.5350, (4.1)

where ŷ is the signal strength and d is the separation distance. The R² of the fit is 0.6826, which is not perfect but good enough. Now, we take a look at the residuals of the fit, i.e., a residual is

r_i = ŷ_i − y_i, (4.2)

where ŷ_i is the predicted value using the model at a specific distance and y_i is a measurement from that distance. According to the standard path loss model [8, 11], the residuals are realizations of a Gaussian process. This cannot be confirmed by applying the chi-squared test at 95% confidence. Figure 4.4 shows the residuals. Although the figures do not seem perfectly Gaussian, they are close. We cannot claim that our simple experiment is conclusive and we speculate that the distribution would look more Gaussian with a better experimental setup. This serves as motivation for the work in Chapter 5.

¹www.rabbit.com
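The fit in Equation 4.1 is an ordinary least-squares regression of RSSI on log_{10} d. The sketch below reproduces the procedure on synthetic data (our own illustration: the actual measurements are not reproduced here, so the distances, noise level and seed are invented stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the measurements: distances and RSSI values
# generated from a path loss model with Gaussian shadowing noise.
d = rng.uniform(0.5, 8.0, size=200)                          # distances (m)
y = -18.4 * np.log10(d) - 50.5 + rng.normal(0.0, 4.0, 200)   # RSSI (dBm)

# Least-squares fit of y ~ a*log10(d) + b, as in Equation 4.1.
X = np.column_stack([np.log10(d), np.ones_like(d)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

# Residuals (Equation 4.2) and the R^2 of the fit.
y_hat = a * np.log10(d) + b
r = y_hat - y
r_squared = 1.0 - np.sum(r**2) / np.sum((y - y.mean())**2)
```

The residual vector `r` is what would then be fed to a normality check such as the chi-squared test mentioned above.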
4.2 The Second Setup

The test area is a big conference table and we place the target at every possible location on this table. The table sits in a conference room and it is spaced at least one meter from the walls and corners. The mobile target uses a small loop antenna transmitting at 433 MHz. One receiver, which uses MAX1473² modules, receives the transmissions using λ/4 antennas. The receiver is mounted high above the ground on one of the walls. Figure 4.5 shows the setup.

Figure 4.3: RSSI Values and the Fitted Model for the First Setup (RSSI in dB versus 10 log_{10}(d), data and fitted line)

The fitted model is

ŷ = −2.2215 log_{10} d − 74.4224, (4.3)

where ŷ is the signal strength and d is the separation distance. The R² of the fit is 0.0000, which literally says that the fit is useless. Figure 4.6 shows that the histogram of the residuals of the fitted model looks the same as the histogram of the RSSI values themselves, i.e., we can just use the constant model

ŷ = −74.4224. (4.4)

This is easily explained by the fact that the logarithmic term ceases to have any effect for realistic values of the path loss exponent, i.e., if variations in log_{10} d are small. Furthermore, it is impossible to determine the exact location coordinate since any coordinate on the desk returns the same RSSI value according to the model. This is exactly the case of the piecewise path loss model. Since we divide the area into multiple small cells, the path loss exponent is irrelevant and can be set to zero if the cells are small enough. In this experiment, our test area is an isolated table in a room and it is evidently small enough.

²www.maxim-ic.com
In addition, we note that the histograms in Figure 4.6 do resemble Gaussian distributions somewhat, although neither passes the chi-squared test at 95%. This leads to the revelation that, within a small cell, RSSI values can be modeled as Gaussian disregarding the precise location coordinate of the target. This serves as motivation for our work in Chapter 6.

Figure 4.4: Distribution of the Residuals for the First Setup (histogram of residuals and QQ plot of residuals versus the standard normal)

Of course, this chapter has assumed that the data actually follows the path loss model. [10] shows that the path loss model fails in environments where multipath effects dominate, i.e., if there are obstacles close to the radios. However, if we relax the definition of 'good fit', the path loss model serves a purpose because it is simple and easy to train via regression. This is a sharp contrast to regression schemes such as the ANN. Furthermore, the slope of the model, the path loss exponent, has some physical meaning, as higher values indicate denser obstructions. This is a sharp contrast to other regression techniques whose parameters have no meaning at all. Therefore, there is a benefit to using the path loss model at the cost of obtaining a worse fit. The rest of this thesis will use the path loss model and variations of it as the basis for the propagation model.

Figure 4.5: The Conference Table
Figure 4.6: The Residuals and the RSSI Values for the Second Setup (histogram of residuals of the path loss model and histogram of RSSI values)

Chapter 5

Unscented Kalman Filtering versus Particle Filtering

As discussed in Chapter 3, the HMM filter [28] is used for the classification approach. The state space is finite so it is the exact and optimal solution. The case for the regression approach is not so straightforward. Since the state space is continuous, no exact solution is known except in one special case. If the transition and observation models are both linear and all the noise processes are additive Gaussian, then the famous KF [27] is the optimal solution. However, those conditions are rarely met in practice. In particular, the observation model cannot be expected to be linear.

In cases where the strict conditions are violated but not too severely, one possible solution is the classical Extended Kalman Filter (EKF) [27], which relies on linearizing the transition and observation models via Taylor series expansion. However, the UKF, a newer variation of the KF, is found to perform better because it can capture more terms of the Taylor series expansion of a nonlinear function [29]. In particular, [30] shows that the UKF performs better than the EKF for TOA tracking. The UKF uses several deterministic 'sigma points' to capture the transformation of a probability density through a nonlinearity. Another competitive solution to deal with nonlinear conditions is the PF, which solves the general Bayesian filter by simulations.
A practical and robust implementation of the PF is the Sampling Importance Resampling (SIR) Particle Filter [25, 26].

In the literature, authors have used variations of the KF (EKF [31] and UKF [17]) and the PF [32] for RSSI-based tracking using the regression approach. However, a head-to-head comparison of these techniques has not been made. This chapter addresses this issue by comparing and contrasting the UKF and the PF in terms of their accuracies and consistencies. The following sections describe the assumptions, summarize the two techniques and present conclusions based on simulations as well as experimental results.

5.1 System Model

This section presents a popular transition model [33] and a reasonable observation model based on discussions from Chapter 2 and Chapter 4. The goal is to track a transceiver moving in a bounded two-dimensional region. N sensors are placed at arbitrary but known locations in this area. The transceiver broadcasts to all sensors every T seconds. Each sensor listens, evaluates the signal strength from the target and forwards all the data to a central Fusion Center (FC). The unknown state vector of the target at time t is defined as

x_t = [p_t^{(1)}  v_t^{(1)}  p_t^{(2)}  v_t^{(2)}]^T, (5.1)

where the variables are the position coordinates and velocity components with respect to some fixed two-dimensional coordinate system.
The transition model is

x_t = f(x_{t−1}, u_{t−1}) = A x_{t−1} + B u_{t−1}, (5.2)

where

A = [ 1  T  0  0        B = [ T²/2  0
      0  1  0  0              T     0
      0  0  1  T              0     T²/2
      0  0  0  1 ],           0     T ].  (5.3)

This transition model arises from discretizing Newton's laws of motion [33]. The first term corresponds to inertia and the second term corresponds to accelerations due to random maneuvers. The B u_{t−1} term is a stochastic process of accelerations assumed to be i.i.d. N(0, Q), where Q = σ_u² I.¹

As for the observation model, there is no universally accepted and applicable model, as discussed in Chapter 2. A reasonable one is chosen and the implications will be discussed after it is studied. The choice is the piecewise path loss model as it is generally applicable in indoor environments. As discussed in Section 2.1.2, the path loss model is characterized by the path loss exponent, the bias and the noise. It is reasonable to assume that the path loss exponent and the noise distributions are global, and one can apply the same parameters for all sensors in the area. However, it is not reasonable to apply a uniform bias for all, because that would imply a monotonic environment. Therefore, we apply different biases according to some cell division scheme.
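To make the discretized dynamics concrete, the following Python sketch simulates a short trajectory from Equations 5.2 and 5.3 (our own illustration, not the thesis's MATLAB code; the σ_u value, horizon and seed are arbitrary):

```python
import numpy as np

T = 1.0  # sampling period in seconds

# Matrices from Equation 5.3.
A = np.array([[1.0, T, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, T],
              [0.0, 0.0, 0.0, 1.0]])
B = np.array([[T**2 / 2, 0.0],
              [T, 0.0],
              [0.0, T**2 / 2],
              [0.0, T]])

def transition(x, sigma_u, rng):
    """One step of Equation 5.2: inertia plus a random
    acceleration u ~ N(0, sigma_u^2 I)."""
    u = rng.normal(0.0, sigma_u, size=2)
    return A @ x + B @ u

# Simulate a short trajectory starting at rest at the origin.
rng = np.random.default_rng(42)
x = np.zeros(4)  # state [p1, v1, p2, v2]
trajectory = [x]
for _ in range(100):
    x = transition(x, sigma_u=0.06, rng=rng)
    trajectory.append(x)
trajectory = np.array(trajectory)
```

With zero acceleration noise the model reduces to straight-line motion: a state with velocities (1, 2) moves exactly (1, 2) meters per slot, which is a quick way to check the matrices.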
Formally, the signal strength in dBm of a transmission from the target heard at sensor n at time t is

y_{n,t} = Γ_n(ψ(x_t)) − 10ρ log_{10} ‖ψ(x_t) − r_n‖ + w_{n,t}. (5.4)

Here, ψ(x_t) = [p_t^{(1)}  p_t^{(2)}]^T is a function that simply returns the position part of the state vector. The bias function Γ_n(·), which depends on the cell that ψ(x_t) falls in, encodes the cell division scheme; Γ_n(·) and ρ are the familiar bias and path loss exponent. r_n is the known location of sensor n. As named in Section 2.1.2, this is the piecewise path loss model instead of the global one. Finally, w_{n,t} is the familiar i.i.d. Gaussian noise for all n, t, distributed according to N(0, σ_w²). The noise models shadowing and any other factors not accounted for in the model.

Thus, there are N RSSI observations at each time t. Let us collect these observations in a vector y_t. Similarly, let the observation noise terms be collected in a corresponding vector w_t, and name its covariance R. One can succinctly rewrite the observation equation as

y_t = h(x_t, w_t), (5.5)

where h(·) is implicitly defined according to Equation 5.4. Together, Equation 5.2 and Equation 5.5 form the basis of the Bayesian filter, which will be solved by either the UKF or the PF next.

¹Because the system is discrete in time, the actual physical accelerations are assumed to be piecewise constant, which is a reasonable assumption provided T is small relative to the dynamics of the target.

5.2 The Unscented Kalman Filter (UKF)

The UKF is a modified version of the classical KF. In the original KF, all the probability densities involved in the calculations are Gaussians because everything is linear. Thus, they are completely characterized by means and covariances. However, transforming a Gaussian density through a nonlinear function results in a non-Gaussian product.
If the nonlinearity is not too severe, the UKF assumes that the resulting density can be approximated by a Gaussian. The UKF uses deterministic 'sigma points' to deal with the nonlinear transformation. It can capture up to the third-order term of the Taylor series expansion of the nonlinear transformation [29].

First, let us discuss the process of creating sigma points. We start with an L-dimensional distribution with mean m and covariance matrix S. We have parameters α, β, κ and

λ = α²(L + κ) − L. (5.6)

The set of sigma points {ς^i}_{i=0}^{2L} are

ς^0 = m, (5.7)
ς^i = m + (√((L+λ)S))_i,  i = 1, ..., L, (5.8)
ς^i = m − (√((L+λ)S))_{i−L},  i = L+1, ..., 2L. (5.9)

(√((L+λ)S))_i is the i-th column of the square root of the matrix (L+λ)S, using the definition B = AA^T if A is the square root matrix of B. It is common to use the Cholesky decomposition for this calculation. The weights for these sigma points are

W_s^0 = λ / (L + λ), (5.10)
W_c^0 = λ / (L + λ) + (1 − α² + β), (5.11)
W_c^i = W_s^i = 1 / (2(L + λ)),  i ≠ 0. (5.12)

These weights are not used for any stochastic purposes and do not necessarily sum to one. Typical values for the parameters α, β and κ are 10⁻³, 2 and 0, respectively.

The UKF retains the structure of the original KF [27]. The prediction and update steps are done in the same manner except that the sigma points are used to deal with the nonlinearities and calculate the Kalman gain. The transition model Equation 5.2 is linear, so the prediction step could be done using the original KF.
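The construction in Equations 5.6 to 5.12 can be written out directly; a useful sanity check is that the weighted sigma points reproduce the original mean and covariance exactly. A Python sketch (our own illustration, using the typical parameter values quoted above):

```python
import numpy as np

def sigma_points(m, S, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate the 2L+1 sigma points and weights of Equations 5.6-5.12
    for a distribution with mean m and covariance S."""
    L = len(m)
    lam = alpha**2 * (L + kappa) - L                  # Equation 5.6
    sqrtS = np.linalg.cholesky((L + lam) * S)         # matrix square root
    pts = np.empty((2 * L + 1, L))
    pts[0] = m                                        # Equation 5.7
    for i in range(L):
        pts[i + 1] = m + sqrtS[:, i]                  # Equation 5.8
        pts[L + i + 1] = m - sqrtS[:, i]              # Equation 5.9
    Ws = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))    # Equation 5.12
    Wc = Ws.copy()
    Ws[0] = lam / (L + lam)                           # Equation 5.10
    Wc[0] = lam / (L + lam) + (1 - alpha**2 + beta)   # Equation 5.11
    return pts, Ws, Wc

# Sanity check on an arbitrary 2-D Gaussian: the weighted sigma points
# recover m and S.
m = np.array([1.0, -2.0])
S = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, Ws, Wc = sigma_points(m, S)
mean = Ws @ pts
cov = sum(w * np.outer(p - mean, p - mean) for w, p in zip(Wc, pts))
```

Note that with α = 10⁻³ the center weights are large in magnitude and negative, which is expected; the points themselves cluster very close to the mean.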
However, in the interest of generality, the complete UKF algorithm is shown in Algorithm 1. Regarding the augmentation process, some authors have omitted it, but the results are not the same, as shown in [34].

5.3 The Particle Filter (PF)

The PF [25, 26] is a simulation-based method for solving the general Bayesian filter problem. Since it poses no assumptions on the linearities of the problem, it is able to solve any Bayesian filter. The idea is simple. One simply simulates candidates, runs them through the transition model Equation 5.2 and weights them according to the observation model Equation 5.5. The estimate of the unknown state is a weighted average of these candidates. As the number of candidates increases, the performance approaches that of the optimal filter.

Algorithm 1: For each time slot t
  Start with the previous time slot's estimated state x̂_{t−1} and covariance P_{t−1}.
  Augment with the Gaussian processes in Equation 5.2 and Equation 5.5, i.e.,
    x̂^a_{t−1} = [x̂_{t−1}^T  E[u^T]  E[w^T]]^T,
    P^a_{t−1} = [ P_{t−1}  0  0
                  0        Q  0
                  0        0  R ].
  Create sigma points from this augmented result. Denote the sigma points χ^i_{t−1}, and distinguish the state part χ^i_{x,t−1}, the u_t part χ^i_{u,t−1} and the w_t part χ^i_{w,t−1}.
  Prediction:
    χ^i_{x,t|t−1} = f(χ^i_{x,t−1}, χ^i_{u,t−1})
    x̂_{t|t−1} = Σ_{i=0}^{2L} W_s^i χ^i_{x,t|t−1}
    P_{t|t−1} = Σ_{i=0}^{2L} W_c^i [χ^i_{x,t|t−1} − x̂_{t|t−1}][χ^i_{x,t|t−1} − x̂_{t|t−1}]^T
  Update:
    γ_t^i = h(χ^i_{x,t|t−1}, χ^i_{w,t−1})
    ŷ_t = Σ_{i=0}^{2L} W_s^i γ_t^i
  Kalman Gain:
    P_yy = Σ_{i=0}^{2L} W_c^i [γ_t^i − ŷ_t][γ_t^i − ŷ_t]^T
    P_xy = Σ_{i=0}^{2L} W_c^i [χ^i_{x,t|t−1} − x̂_{t|t−1}][γ_t^i − ŷ_t]^T
    K_t = P_xy P_yy^{−1}
  Current Estimate:
    x̂_t = x̂_{t|t−1} + K_t (y_t − ŷ_t)
    P_t = P_{t|t−1} − K_t P_yy K_t^T

Formally, let us denote P candidates of the true state vector {x_t^i}_{i=1}^P. They have the same dimensionality and components as the true state vector x_t at time t. Each candidate is called a particle with weight w_t^i. Together, these particles approximate the posterior density p(x_t | y_{1:t}). However, it is not easy to sample from the posterior directly. Instead, the standard is to draw particles from p(x_{1:t} | y_{1:t}). We draw samples from an arbitrary proposal density q and weight the particles according to the importance ratio

w_t(x_{1:t}) = p(x_{1:t} | y_{1:t}) / q(x_{1:t} | y_{1:t}). (5.13)

The standard is to assume the proposal density can be constructed recursively in time, i.e.,

q(x_{1:t} | y_{1:t}) = q(x_{1:t−1} | y_{1:t−1}) q(x_t | y_t, x_{t−1}).
(5.14)

Therefore, the importance weights can also be updated recursively in time, i.e.,

w_t(x_{1:t}) = [ p(x_{1:t} | y_{1:t}) / ( p(x_{1:t−1} | y_{1:t−1}) q(x_t | y_t, x_{t−1}) ) ] w_{t−1}(x_{1:t−1}). (5.15)

Furthermore, p(x_{1:t} | y_{1:t}) can be factored into

p(x_{1:t} | y_{1:t}) ∝ p(x_{1:t}, y_{1:t}) = ∏_{k=1}^{t} p(y_k | x_k) p(x_k | x_{k−1}) (5.16)

because of the conditional independence property of the Bayesian filter. This allows us to keep only the most current samples {x_t^i}_{i=1}^P at time t instead of keeping the entire history of samples, i.e., Equation 5.15 becomes

w̃_t^i ∝ [ p(y_t | x_t^i) p(x_t^i | x_{t−1}^i) / q(x_t^i | y_t, x_{t−1}^i) ] w̃_{t−1}^i. (5.17)

Since we only have a finite number of these particles, the unnormalized weights can be normalized by

w_t^i = w̃_t^i / Σ_{j=1}^{P} w̃_t^j. (5.18)

Finally, this achieves the goal of using a set of particles {x_t^i}_{i=1}^P and associated weights {w_t^i}_{i=1}^P to approximate the posterior density. This scheme is known as sequential importance sampling.

For this chapter, the transition model Equation 5.2 consists of Gaussians and samples from it are extremely easy to generate. Therefore, the transition model Equation 5.2 is used as the proposal density and the algorithm is further simplified, i.e.,

q(x_t^i | y_t, x_{t−1}^i) = p(x_t^i | x_{t−1}^i) (5.19)

and

w̃_t^i ∝ p(y_t | x_t^i) w̃_{t−1}^i. (5.20)

Owing to Equation 5.4, the observation likelihood between the target and a single sensor n is

p(y_{n,t} | x_t^i) = (1/√(2πσ_w²)) exp( −( y_{n,t} − Γ_n(ψ(x_t^i)) + 10ρ log_{10} ‖ψ(x_t^i) − r_n‖ )² / (2σ_w²) ).
(5.21)

Therefore, the total observation likelihood given the total observation vector y_t is

p(y_t | x_t^i) = ∏_{n=1}^{N} p(y_{n,t} | x_t^i). (5.22)

It is well known that the weights will become concentrated on a small number of particles using this scheme [25, 26]. This is statistically harmful, as the other particles have negligible weights and the number of useful particles is reduced. The Sampling Importance Resampling (SIR) scheme combats this degeneracy by resampling, i.e., discarding the negligible particles. How often resampling takes place is a matter of design choice. We simply resample at every time slot. Therefore, all the particles will have the same weight after each iteration of the algorithm. Unfortunately, in scenarios where the process variances in the transition model Equation 5.2 are small, this leads to sample impoverishment, where all the particles collapse to near-identical states. This work opts for a simple solution that uses larger variances for drawing from the transition model Equation 5.2 instead of the true values. This slightly alters the behavior of the standard SIR-PF. A summary of the PF is presented in Algorithm 2. The resampling step is described in Algorithm 3.

Algorithm 2: For each time slot t
  Start with the previous time slot's particles and weights, {x_{t−1}^i}_{i=1}^P and {w_{t−1}^i}_{i=1}^P.
  Draw New Particles: Draw P samples of the Gaussian process in Equation 5.2, denoting each sample u_{t−1}^i.
    x_t^i = f(x_{t−1}^i, u_{t−1}^i)
  Weight: w̃_t^i = w_{t−1}^i p(y_t | x_t^i), where the likelihood is given by Equation 5.21 and Equation 5.22.
    Normalize the weights using w_t^i = w̃_t^i / Σ_{j=1}^{P} w̃_t^j.
  Current Estimate: x̂_t = Σ_{i=1}^{P} w_t^i x_t^i
  Resample using Algorithm 3 if necessary.
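For concreteness, one full iteration of Algorithm 2, with multinomial resampling playing the role of Algorithm 3, can be sketched in Python (our own illustration, not the thesis's MATLAB implementation; for brevity a single global bias `gamma` stands in for the cell-dependent Γ_n(·) of Equation 5.4, and the toy sensor layout below is invented):

```python
import numpy as np

def sir_pf_step(particles, weights, y, sensors, gamma, rho, sigma_w,
                A, B, sigma_u, rng):
    """One SIR-PF time slot: draw, weight (Equations 5.20-5.22),
    estimate, resample.

    particles : (P, 4) states [p1, v1, p2, v2]
    weights   : (P,) normalized weights
    y         : (N,) RSSI observations, one per sensor
    sensors   : (N, 2) known sensor positions r_n
    """
    P = len(particles)
    # Draw new particles through the transition model (Equation 5.2).
    u = rng.normal(0.0, sigma_u, size=(P, 2))
    particles = particles @ A.T + u @ B.T
    # Log-likelihood of each particle (Equations 5.21-5.22), computed
    # in log space to avoid underflow when N is large.
    pos = particles[:, [0, 2]]                          # psi(x_t)
    dist = np.linalg.norm(pos[:, None, :] - sensors[None, :, :], axis=2)
    mean_rssi = gamma - 10.0 * rho * np.log10(dist)     # (P, N)
    loglik = -0.5 * np.sum((y - mean_rssi) ** 2, axis=1) / sigma_w**2
    w = weights * np.exp(loglik - loglik.max())
    w /= w.sum()                                        # Equation 5.18
    # Weighted-mean estimate, then resample (multinomial draw, the
    # inverse-CDF loop of Algorithm 3 in vectorized form).
    x_hat = w @ particles
    idx = rng.choice(P, size=P, p=w)
    return particles[idx], np.full(P, 1.0 / P), x_hat

# A toy configuration: three sensors, one global bias, 200 particles.
rng = np.random.default_rng(7)
T = 1.0
A = np.array([[1.0, T, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, T],
              [0.0, 0.0, 0.0, 1.0]])
B = np.array([[T**2 / 2, 0.0], [T, 0.0], [0.0, T**2 / 2], [0.0, T]])
sensors = np.array([[9.0, 1.0], [5.0, 7.0], [-7.0, 7.0]])
true_pos = np.array([1.0, -1.0])
y = -40.0 - 20.0 * np.log10(np.linalg.norm(true_pos - sensors, axis=1))

particles = rng.normal(0.0, 1.0, size=(200, 4))
weights = np.full(200, 1.0 / 200)
particles, weights, x_hat = sir_pf_step(
    particles, weights, y, sensors, gamma=-40.0, rho=2.0, sigma_w=1.5,
    A=A, B=B, sigma_u=0.5, rng=rng)
```

After the step, all weights equal 1/P because resampling is performed every slot, exactly as the chapter prescribes.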
Algorithm 3: (x_t^j, w_t^j) = RESAMPLE(x_t^i, w_t^i)
  Construct the cumulative probabilities {c_p}_{p=1}^{P} of w_t^i.
  for j = 1, ..., P do
    Generate a uniform random variable u ∼ U[0, 1].
    Find the smallest p such that u ≤ c_p.
    Set x_t^j = x_t^p and w_t^j = 1/P, i.e., all particles have the same weight.
  end for

5.4 Simulations

Simulations are carried out in MATLAB to compare the UKF against the SIR-PF for the RSSI-based tracking problem described in Section 5.1. Target trajectories are generated using the system model and both filters are used to localize the target. The performance metric is the Mean Squared Error (MSE) between the true position coordinates and the estimates, i.e.,

∆² = E[ ‖ψ(x_t) − ψ(x̂_t)‖² ]. (5.23)

The simulation MSE is obtained by

(1/S) Σ_{t=1}^{S} ‖ψ(x_t) − ψ(x̂_t)‖², (5.24)

where S is the length of the simulation.

The area is a 20 m by 20 m room divided into two cells. N = 6 sensors are placed at [9 1]^T, [5 7]^T, [−7 7]^T, [−8 −1]^T, [−2 −7]^T and [3 −6]^T, as depicted in Figure 5.1. Let l = 1 denote that the target is in the first cell and let l = 2 denote that the target is in the second cell. The global noise variance for the observation model Equation 5.4 is σ_w² = 2. The global path loss exponent is ρ = 2. The cells are used to define the Γ_n(·) biases in the observation model Equation 5.4. The mobile target, which starts at rest at the center of the room, is tracked as it pings the sensors every T = 1 second.

Figure 5.1: The Floor Plan for Simulations (the positions of sensors 1 through 6, and the two cells)
The variance for the transition model Equation 5.2 is σ_u² = 10^{−25/10} ≈ 0.0032. The following results are averaged over 10⁵ time slots. In order to reduce the number of particles required and to avoid sample impoverishment, the PF resamples at every time slot and it uses a variance that is 100 times larger than the true value for drawing particles. Because the number of particles required is not known and must be determined empirically, the number of particles is varied until the performance ceases to improve. The number of sigma points required for the UKF is 2L + 1 = 25, where L is the dimension of the augmented state space. The following two tests demonstrate how the behavior of the Γ_n(·) terms affects the performance of the filters.

5.4.1 Uniform Radio Environment

For this test, all the Γ_n(·) terms are set to zero, i.e., the radio environment is uniform and one global path loss model applies to all sensors. The area is essentially free space. Table 5.1 shows the MSE of the filters in terms of squared meters. The number of particles required for the PF is larger than the 25 sigma points required for the UKF. While the filters are different and the numbers are not directly comparable, this is a rough indication that the UKF is less computationally intensive. Since the transition model Equation 5.2 is linear, the only nonlinearity in the system is the path loss observation model Equation 5.4, and the UKF is able to handle it. The UKF is the better solution in this case due to lower computation costs.

In terms of the finer statistics, the UKF is less consistent than the PF, given a sufficient number of particles. The 5th and 95th percentiles of the UKF are tighter but the variance is larger.
This implies that the near-linearity assumption imposed by the UKF is correct the vast majority of the time, but occasionally it fails and introduces large errors beyond the 95th percentile. Finally, the PF is not able to reach the performance of the UKF despite a large number of particles, because of the resampling scheme and because the proposal density does not use the true value of the variance.

Table 5.1: Target Tracking Results in a Uniform Radio Environment

Mean Squared Error (m²) over Time
Filter | Number of Particles | Mean   | Variance | 5th percentile | 95th percentile
PF     |  25                 | 2.2313 | 210.1638 | 0.0357         | 6.0137
PF     |  50                 | 1.2452 |  46.4644 | 0.0311         | 3.5388
PF     |  75                 | 0.9318 |  13.1694 | 0.0292         | 3.0309
PF     | 100                 | 0.9902 |  38.1152 | 0.0291         | 2.8899
PF     | 125                 | 0.8034 |   1.8729 | 0.0289         | 2.7679
PF     | 150                 | 0.7945 |   1.6093 | 0.0285         | 2.7339
PF     | 175                 | 0.7735 |   1.1036 | 0.0287         | 2.6958
PF     | 200                 | 0.7663 |   1.0823 | 0.0290         | 2.6695
PF     | 225                 | 0.7617 |   1.0594 | 0.0285         | 2.6534
PF     | 250                 | 0.7577 |   1.0515 | 0.0284         | 2.6278
UKF    | -                   | 0.6075 |  10.7986 | 0.0116         | 1.7033

5.4.2 Diverse Radio Environment

For this test, the Γ_n(·) biases are varied, simulating a nonuniform environment, i.e.,

Γ_{n=1...2}(l = 1) = 0
Γ_{n=3}(l = 1) = −15
Γ_{n=4}(l = 1) = −30
Γ_{n=5}(l = 1) = −30
Γ_{n=6}(l = 1) = −15
Γ_{n=1}(l = 2) = −20
Γ_{n=2}(l = 2) = −20
Γ_{n=3...6}(l = 2) = 0.  (5.25)

This is a 'switching' behavior and it leads to multimodality in the system. Table 5.2 shows the MSE results. The nonuniform terms severely degrade the performance of the UKF to the point that the UKF basically fails. This is because the UKF imposes Gaussians for the densities required by the Bayesian filter. A Gaussian density is unimodal and thus it cannot handle the new multimodal system.
Table 5.2: Target Tracking Results in a Diverse Radio Environment

Mean Squared Error (m²) over Time
Filter | Number of Particles | Mean   | Variance  | 5th percentile | 95th percentile
PF     |  25                 | 3.4454 |  783.1535 | 0.0365         |  6.8851
PF     |  50                 | 1.0531 |    7.1887 | 0.0300         |  3.6132
PF     |  75                 | 0.8970 |    3.0619 | 0.0291         |  3.0880
PF     | 100                 | 0.8553 |    3.2835 | 0.0288         |  2.8842
PF     | 125                 | 0.8096 |    1.8312 | 0.0280         |  2.7910
PF     | 150                 | 0.7987 |    2.0580 | 0.0278         |  2.7404
PF     | 175                 | 0.7772 |    1.8331 | 0.0278         |  2.6760
PF     | 200                 | 0.7646 |    1.1146 | 0.0275         |  2.6573
PF     | 225                 | 0.7567 |    1.0768 | 0.0284         |  2.6289
PF     | 250                 | 0.7547 |    1.0828 | 0.0280         |  2.6259
UKF    | -                   | 7.4229 | 1313.1010 | 0.0130         | 36.0688

For instance, in Figure 5.2 we show the distribution of the first position coordinate part of the particles at some time index for the case of 300 particles. Since we resample at every time step, all the weights are equal and the histogram resembles the true posterior density p(x_t | y_{1:t}) if the number of particles is sufficient. As shown, this is clearly multimodal and the UKF is not equipped to deal with it. Obviously, the level of uniformity in the radio environment affects how badly the UKF performs. However, the PF maintains the same performance and consistency because it does not assume any linearity and the densities of the Bayesian filter are not fitted to unimodal Gaussians. Therefore, the PF is much more robust and insensitive to the fine details of the environment and scenario.

5.5 Experimental Results

This section presents experimental results conducted in an office, which is characterized by diverse radio characteristics. The setup is a wireless network of seven ZigBee radios in room Kaiser 4090 at UBC. The modules used are the same radios used in Section 4.1. One radio is designated as the target, which broadcasts to the other radios every T = 1.5 seconds. Five radios are used as sensors.
The last radio is attached to a personal computer acting as the FC, performing centralized data collection and tracking.

Figure 5.2: The First Position Coordinate Part of the Particles from Simulation Results (histogram: position coordinate in m versus frequency)

The area plan is depicted in Figure 5.3. The area is 17 m by 14 m and the left side is shown as a dotted line because only the right half of the office is used. Since the model chosen is the piecewise path loss model Equation 5.4, a cell division scheme and various parameters are needed. Fifteen pre-defined calibration points are chosen. They are placed evenly and cover the entire area. Each point is the center of a cell and each cell possesses its own unique bias parameters. The orientations of all the antennas are kept constant and human movements are controlled. The standard procedure is to collect different sets of data for training and validation and to collect measurements at different locations within the cells. However, for simplicity, only one set of data is collected and it is used for both training and validation. Measurements are only taken at the centers of the cells.
This is not a huge problem, as the focus of this chapter is on comparing two solutions for the Bayesian filter.

Figure 5.3: Floor Plan of Kaiser 4090 (legend: Sensors 1-5, Cubicles, A Column, Pre-defined Calibration Point)

The Γn(·) parameters are calculated as sample averages from this single set of data. The remaining parameters are all global, and they are estimated to be σ²_u = 0.02, ρ = 2.5 and σ²_w = 8. Cross-validation is used to optimize these parameters so that the final MSE after filtering is minimized. Figure 5.4 shows the distribution of the estimated Γn(·) values. Their range is roughly the same as the one used in Section 5.4.2, exhibiting 'switching' behavior, so similar results are expected.

The same procedure from Section 5.4 is applied to the experimental data. One final note of interest is that a mechanism is needed to deal with missing and improbable RSSI values. Missing values are assigned 0 dBmW, and improbable values are obviously wrong values outside the valid range of RSSI measurements. Since the UKF and PF are both probabilistic, this chapter simply ignores these values; e.g., Equation 5.22 becomes

p(y_t | x_t^i) = ∏_{n : y_{n,t} ≠ 0, −95 ≤ y_{n,t} ≤ −30} p(y_{n,t} | x_t^i).    (5.26)

Table 5.3 summarizes the results.
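To make Equation 5.26 concrete, the masked likelihood can be sketched as below. This is a minimal illustration, not the thesis code: the per-sensor observation model is assumed to be Gaussian with a common noise variance (the document's σ²_w plays this role), and the function and argument names are our own.

```python
import numpy as np

def masked_likelihood(y, mu, sigma2):
    """Evaluate p(y_t | x_t^i) as in Equation 5.26, skipping missing
    (0 dBmW) and improbable readings outside [-95, -30] dBmW.

    y      : length-N vector of RSSI readings for one time step (dBmW)
    mu     : predicted mean RSSI per sensor for this particle (assumed model)
    sigma2 : Gaussian observation noise variance (an assumption here)
    """
    y = np.asarray(y, dtype=float)
    # Keep only readings that are non-missing and inside the valid range.
    valid = (y != 0.0) & (y <= -30.0) & (y >= -95.0)
    if not np.any(valid):
        return 1.0  # no usable sensor: the likelihood carries no information
    resid = y[valid] - np.asarray(mu, dtype=float)[valid]
    # Product of per-sensor Gaussian densities over the valid sensors only.
    return float(np.prod(np.exp(-0.5 * resid**2 / sigma2)
                         / np.sqrt(2.0 * np.pi * sigma2)))
```

Dropping a sensor from the product, rather than substituting a dummy value, is what keeps the missing readings from biasing the particle weights.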
Figure 5.4: The Γn(·) Values from Experimental Results

The only difference is that the setup is slightly different, so the number of particles required for the PF changes and the number of sigma points for the UKF is 2L + 1 = 23.

Table 5.3: Tracking Results in an Office
(Mean Squared Error (m²) over Time)

Filter  Number of Particles  Mean    Variance  5-th percentile  95-th percentile
PF      25                   4.1094  98.4970   0.0386           26.2628
PF      50                   4.7245  214.2936  0.0222           30.5959
PF      75                   2.4589  113.7523  0.0197           8.1251
PF      100                  2.0719  69.7546   0.0150           8.9874
PF      125                  2.4531  103.4518  0.0168           9.8931
PF      150                  1.8719  60.6488   0.0142           7.5954
PF      175                  2.1849  87.9553   0.0123           8.9751
PF      200                  2.7101  125.0296  0.0123           11.2968
PF      225                  1.9210  65.4151   0.0152           8.2067
PF      250                  2.0727  80.2627   0.0145           7.9518
PF      275                  2.0467  68.2252   0.0128           9.5385
PF      300                  2.0114  78.1461   0.0116           8.1292
UKF     —                    7.1944  189.2249  0.0151           41.8589

It can be seen that the UKF fails to function and loses track frequently. For instance, Figure 5.5 shows the true horizontal position of the target (along the longer dimension of the floor plan) against the UKF and PF estimates. The UKF fails to track two of the cell centers. Compared to the simulation results in Section 5.4.2, the experimental results are worse due to a larger observation noise variance as well as the realistic conditions introduced. In both cases, however, the PF, given a sufficient number of particles, performs better than the UKF.

5.6 Conclusion

Simulations and real-life experiments have been carried out to analyze the tracking results of the UKF and the PF. The UKF assumes near-linearity and imposes unimodal Gaussian densities while the PF does not.
These assumptions are very fragile and easily violated when the radio propagation characteristics of the tracking area are diverse, exhibiting 'switching' behavior. While the piecewise path loss observation model of Equation 5.4 does not cover all regression techniques, we believe the same principle generalizes. Any indoor environment introduces occlusions and multipath propagation effects, resulting in highly diverse parameters. Unless free-space propagation is the dominant mode of propagation in the environment, we speculate that a regressed model will exhibit 'switching' behavior. The UKF cannot handle multimodal systems and is therefore unsuitable under realistic conditions.

Figure 5.5: Horizontal Coordinate from Experimental Results (True Position, UKF Estimate, PF Estimate with 300 Particles)

Chapter 6

A Comparison of Three Classifiers

As mentioned in Section 2.2, the fingerprinting paradigm is powerful, and many regression techniques have been proposed and studied in the literature. However, to the best of our knowledge, the classification approach is relatively uncommon in the literature. This chapter addresses this gap by comparing three classifiers: the KNN, the SVM and the SGC. Although all three achieve roughly the same performance, the SGC is the simplest in terms of computation cost and storage requirements. Furthermore, because it is probabilistic, it can be used under the Bayesian filter framework as discussed in Chapter 3. The KNN and SVM, on the other hand, are deterministic in nature, and using them under the Bayesian filter framework is much more difficult. The following sections present the system setup, introduce and summarize the three classifiers, and present conclusions based on experimental results.
6.1 System Model and Classifiers

Formally, the goal is to track a mobile transceiver moving in a bounded two-dimensional area equipped with N APs. The target broadcasts periodically. Each AP acts as a sensor: it evaluates the signal strength from the target and forwards all the information to the FC. The area is divided into L cells. Let y_n be the RSSI between sensor n and the target in dBmW, and let y = [y_1, y_2, . . . , y_N]ᵀ. Let x denote a cell number that takes values between 1 and L. The fingerprinting paradigm works in two phases. First, P instance-label pairs (x_i, y_i), i = 1, . . . , P, are collected in known cells. Then, in the online working phase, an arriving RSSI instance from an unknown cell is compared against the radiomap and classified into one of the possible cells. The performance metric is the number of incorrect estimations, denoted the classification error. The three candidate classifiers are as follows.

6.1.1 K-Nearest Neighbor (KNN)

The KNN is the simplest classifier. Nothing is done after the offline training phase, and the entire radiomap is stored. During the online working phase, an arriving instance is compared against the instances in the radiomap and the K nearest instance-label pairs are selected. The K labels determine the estimated label of the arriving instance via majority voting. K is determined empirically beforehand; 'nearest' is subject to some norm definition [5, 13].

6.1.2 Support Vector Machine (SVM)

The SVM is a binary classifier that attempts to maximize the margin between the two classes [14]. For now, only two cells are considered and the notation is changed slightly: let x_i = +1 denote that an instance comes from the first cell and x_i = −1 the opposite case. For the moment, the two classes are assumed to be linearly separable. Recalling that we have a database of training instance-label pairs (x_i, y_i), i = 1, . . . , P, the SVM aims to find parameters w and b such that

x_i(wᵀy_i + b) − 1 ≥ 0  ∀i,    (6.1)

i.e., we find two hyperplanes that separate the two classes. This is illustrated in Figure 6.1. Given an unknown instance y, the classifier or hypothesis is then

x̂ = sign(wᵀy + b).    (6.2)

Only some instance-label pairs are actually important: the pairs from the two classes that satisfy the inequalities above with equality are called Support Vectors.

Figure 6.1: A Graphical Illustration of the SVM (hyperplanes wᵀy + b = −1 and wᵀy + b = 1, with the margin between them)

It is easily shown that the distance, or margin, between the two hyperplanes is 2/‖w‖. The optimal parameters can be found by solving the following quadratic optimization problem:

minimize_{w,b}   (1/2) wᵀw
subject to       x_i(wᵀy_i + b) − 1 ≥ 0  ∀i.    (6.3)

Let λ_i, i = 1, . . . , P, denote the Lagrange multipliers. The dual form can be written as

maximize_Λ   Σ_{i=1}^{P} λ_i − (1/2) ΛᵀDΛ
subject to   λ_i ≥ 0  ∀i,
             Σ_{i=1}^{P} λ_i x_i = 0,    (6.4)

where Λ is the vector of Lagrange multipliers and D is a symmetric matrix with D_ij ≜ x_i x_j y_iᵀy_j. Recalling that the support vectors satisfy the inequalities with equality, the Lagrange multiplier λ_i for a support vector y_i is strictly greater than zero. Therefore, if w* is the optimal solution, then the optimal b* satisfies b* = x_i − w*ᵀy_i for any support vector pair (x_i, y_i).
Finally, the classifier can be written as

x̂ = sign( Σ_{i=1}^{P} x_i λ_i* yᵀy_i + b* ),    (6.5)

where the stars denote the optimal values.

If the two classes are not linearly separable, the soft-margin SVM introduces a penalty. The primal form of the optimization reads

minimize_{w,b,ξ_i}   (1/2) wᵀw + C Σ_{i=1}^{P} ξ_i
subject to           x_i(wᵀy_i + b) − 1 + ξ_i ≥ 0  ∀i,
                     ξ_i ≥ 0  ∀i,    (6.6)

where C is a constant parameter of the optimization problem and the ξ_i are the penalty terms. The dual form, which conveniently does not contain the penalty terms, can be written as

maximize_Λ   Σ_{i=1}^{P} λ_i − (1/2) ΛᵀDΛ
subject to   0 ≤ λ_i ≤ C  ∀i,
             Σ_{i=1}^{P} λ_i x_i = 0.    (6.7)

As before, Λ is the vector of Lagrange multipliers and D is a symmetric matrix with D_ij ≜ x_i x_j y_iᵀy_j.

Finally, the linear SVM can be converted to handle nonlinear boundaries by using the kernel trick. In the dual form, the objective contains the symmetric matrix D_ij ≜ x_i x_j y_iᵀy_j; the trick replaces the dot product y_iᵀy_j with a kernel K(y_i, y_j). The physical meaning is that the kernel measures the similarity between two instances. The dot product is a linear kernel, and sometimes using a nonlinear one yields better results. In this work, we use the standard Radial Basis Function (RBF)

K(y_i, y_j) = exp( −γ ‖y_i − y_j‖² ),    (6.8)

where γ ≥ 0 is a constant parameter.
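As a concrete illustration, the RBF kernel of Equation 6.8 is straightforward to evaluate. The sketch below assumes RSSI instances are given as plain numeric sequences; the function name is ours.

```python
import math

def rbf_kernel(yi, yj, gamma):
    """RBF kernel of Equation 6.8: K(yi, yj) = exp(-gamma * ||yi - yj||^2).
    Returns 1.0 for identical instances and decays with squared distance."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(yi, yj))
    return math.exp(-gamma * sq_dist)
```

A larger γ makes the kernel more local, so only very similar RSSI instances are treated as alike; γ = 0 reduces every similarity to 1.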
The dual form stays the same as Equation 6.7, and the final classifier is

x̂ = sign( Σ_{i=1}^{P} x_i λ_i* K(y, y_i) + b* ),    (6.9)

where the stars denote the optimal parameters.

Although the SVM is a binary classifier, it can be extended to a multiclass one. This chapter takes the one-versus-one strategy [35]. Since there are L possible cells, L(L − 1)/2 binary classifiers are constructed. We iterate through each binary classifier and classify the unknown instance to one of its two possible labels. In the end, majority voting is conducted to determine the final label.

6.1.3 The Simple Gaussian Classifier (SGC)

Other than log-linear propagation loss, the standard path loss model [8, 11] also includes shadowing, which is modeled as Gaussian noise. Limited RSSI precision creates small bands within which the same RSSI measurement is reported regardless of the actual location of the target [10]. Combined with shadowing, it is reasonable to assume that, within a small region, the observed RSSI follows a Gaussian profile and it is impossible to distinguish precisely where the target is. Chapter 4 supports this claim. Of course, the theoretical shapes of such bands cannot be determined easily without ray tracing. Modeling RSSI values as Gaussian random variables has been proposed numerous times in the literature. To the best of our knowledge, [22] is the first to perform localization using a probabilistic formulation; nevertheless, its authors focus on estimating position coordinates instead of viewing localization as a classification problem, and their probabilistic model uses a number of Gaussian kernels. We believe [24] is the first to propose treating localization as a classification problem and specifically modeling RSSI values as Gaussian random variables; this has since been discussed in [15, 20]. We are intrigued by its astonishing simplicity and we refer to it as the SGC.
The distribution of the RSSI between sensor n and any location in cell l is modeled as

p(y_n | x = l) = (1 / √(2πσ²_{l,n})) exp( −(y_n − μ_{l,n})² / (2σ²_{l,n}) ).    (6.10)

It is safe to assume that the sensors are independent. Therefore, the multivariate distribution involving all sensors is

p(y | x = l) = ∏_{n=1}^{N} p(y_n | x = l)
             = (1 / ((2π)^{N/2} |Σ_l|^{1/2})) exp( −(y − μ_l)ᵀ Σ_l⁻¹ (y − μ_l) / 2 ),    (6.11)

where the mean vector μ_l consists of the μ_{l,n} and the covariance Σ_l is a diagonal matrix with the σ²_{l,n} on its diagonal.

In the online working phase, given an arriving instance y, the goal is to find the cell number l that maximizes the probability p(x | y). As mentioned in Chapter 3 and [22],

p(x | y) = p(y | x) p(x) / p(y).    (6.12)

Without filtering, i.e., treating this as a one-shot-in-time event, p(x) is assumed uniform and the problem is equivalent to maximizing p(y | x). This is accomplished by a simple search over all possible cell numbers, i.e.,

x̂ = argmax_x p(y | x).    (6.13)

Since there are only a finite number of possible cells and the SGC is expressed in closed form, it is possible to derive the Pairwise Error Probability (PEP) between two cells a and b analytically. Let the decision metric be

D₁ = log p(y | x = a) − log p(y | x = b).    (6.14)

The target is estimated to be in cell a if D₁ > 0 and estimated to be in cell b otherwise.
After some manipulation,

D₁ = −(1/2) log |Σ_a| + (1/2) log |Σ_b| + (1/2) yᵀ(Σ_b⁻¹ − Σ_a⁻¹)y
     + yᵀ(Σ_a⁻¹ μ_a − Σ_b⁻¹ μ_b) + (1/2) μ_bᵀ Σ_b⁻¹ μ_b − (1/2) μ_aᵀ Σ_a⁻¹ μ_a.    (6.15)

Therefore, if the target is known to be in cell a, the PEP is p(x̂ = b | x = a) = p(D₁ < 0). Although y is a multivariate Gaussian, the term yᵀ(·)y is no longer Gaussian. For simplicity, the following therefore assumes that the covariances are equal, i.e., Σ_a = Σ_b = P, and uses the suboptimal decision metric

D₂ = yᵀ P⁻¹ (μ_a − μ_b) − (1/2)(μ_a + μ_b)ᵀ P⁻¹ (μ_a − μ_b).    (6.16)

This new decision metric is completely Gaussian, and it represents the worst-case scenario since the covariances are assumed to be equal. From [15], the mean is

E[D₂] = (1/2)(μ_a − μ_b)ᵀ P⁻¹ (μ_a − μ_b)    (6.17)

and the variance is

E[(D₂ − E[D₂])²] = (μ_a − μ_b)ᵀ P⁻¹ (μ_a − μ_b).    (6.18)

The term

H_{a,b} ≜ (μ_a − μ_b)ᵀ P⁻¹ (μ_a − μ_b)    (6.19)

is called the Mahalanobis distance between the two multivariate Gaussian densities. This explains why the KNN works so well using the Mahalanobis norm instead of the Euclidean norm [15, 20], for both regression and classification.
Therefore, the PEP is

p(x̂ = b | x = a) = p(D₂ < 0) = Φ( −(1/2)√H_{a,b} ),    (6.20)

where Φ is the cumulative density function of the standard Gaussian with mean 0 and variance 1.

Similar to the idea of a signal constellation in communications, it is possible to derive a simple upper bound on the probability of returning the wrong cell number when there are multiple cells. If the target is in cell a, the union bound on making a wrong estimate is

Σ_{l ≠ a} p(x̂ = l | x = a).    (6.21)

Since the PEP discussed is the worst-case scenario, this union bound is a sum of upper bounds; taking the complement therefore yields a lower bound on the probability of returning the correct estimate. Unfortunately, the union bound is only tight when the noise is small. This is not the case here, as classification errors under realistic conditions cannot be expected to stay under 1%; the bound is therefore extremely loose.

6.2 Results without Filtering

We now provide a direct comparison between the three classifiers introduced. The test environment is an office consisting of small rooms, and we use the same custom-made radios used in Section 4.2. The environment is harsh, as Non-Line-of-Sight (NLOS) conditions are the norm and the walls separating the rooms are made of wood and paper materials. The mobile target broadcasts to the APs at 433 MHz using a small loop antenna every 100 ms. The APs act as sensors; they are secured and mounted on walls, well above the ground. Each RSSI measurement is calculated from the voltage values at the receiving antenna averaged over the packet length; a typical packet lasts approximately 10 ms. Only two significant digits are available in dB scale. The APs communicate with each other and the FC via a mesh ZigBee network. The traffic in the network is kept low and the APs are placed strategically to cover the entire area.
Nevertheless, there is no way to discern the exact cause when an AP does not hear from the target: either the signal is too weak or data packets are dropped in the higher-level ZigBee network. When an AP receives nothing, an arbitrary RSSI value of −100 dBmW is assigned. Probabilistic frameworks can deal with such cases rather easily by ignoring these values; e.g., Equation 6.11 becomes

p(y | x = l) = ∏_{n : y_n ≠ −100} p(y_n | x = l).    (6.22)

However, this is not so straightforward for deterministic methods such as the SVM, which relies on solving quadratic optimization problems. Therefore, to be fair to all classifiers, the −100 dBmW values are simply taken at face value.

Figure 6.2 shows the floor plan of the office and the placement of the APs. The dimensions are 14.77 m by 24.38 m. The blue dots are the 8 APs and the underlined numbers enumerate the cells. Since the area is nicely divided into rooms of roughly equal size, each room is a cell.

Figure 6.2: Floor Plan of the Environment and the Cell Numbers

Two tests performed at different times are used to compare the classification capabilities of the three classifiers. For each test, two loops around the area are recorded: the first loop is used as the training data and the second loop as the validation data. The various parameters needed for the classifiers are determined by cross-validation. Typically, cross-validation involves tuning using the training data only, as the validation data must remain unseen. However, the work presented here is less strict: we simply train the classifiers on the training data with arbitrary parameters, evaluate the classification errors on the validation data, and repeat with different parameters until the best performance is achieved. For the KNN, the ℓ₂-norm (i.e., Euclidean distance) is used to measure distance.
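The KNN rule just described (ℓ₂ distance plus majority voting) can be sketched as follows. This is an illustrative re-implementation with our own function and variable names, not the code used in the experiments.

```python
import numpy as np
from collections import Counter

def knn_classify(y, radiomap_y, radiomap_x, k):
    """Classify RSSI vector y into a cell by majority vote among the
    k nearest radiomap instances under the l2 (Euclidean) norm.

    radiomap_y : (P, N) array of training RSSI instances
    radiomap_x : length-P sequence of cell labels
    """
    d = np.linalg.norm(np.asarray(radiomap_y, dtype=float)
                       - np.asarray(y, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]          # indices of the k closest instances
    votes = Counter(radiomap_x[i] for i in nearest)
    return votes.most_common(1)[0][0]    # majority-vote label
```

Note that the entire radiomap must be kept in memory and scanned for every arriving instance, which is why the chapter counts storage and computation costs against the KNN.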
Table 6.1: KNN Results (K = 1210) for the First Test

Cell     Number of Time Slots  Correct Estimations  Off by One Cell  Off by More than One Cell
1        124   79          34         11
2        213   187         26         0
3        366   330         34         2
4        202   1           200        1
5        166   152         1          13
6        175   12          159        4
7        133   97          27         9
8        139   47          91         1
9        147   96          10         41
Overall  1665  1001 (60%)  582 (35%)  82 (5%)

For the SVM, LIBSVM [36] is used to handle the optimization problem. As recommended by its authors, RSSI values are scaled linearly between 0 and 1 for numerical purposes before using the library.

For the SGC, sample means and variances from the training instance-label pairs are used as the means and variances of the Gaussian variables; this is the frequentist approach. Furthermore, we take the sample variances of all the measurements in all cells and use them for calculating the Mahalanobis distance between two cells. This is for illustration purposes only and is not used for estimation.

6.2.1 The First Test

In this test, the target is kept on a stool at constant height and never goes near obstacles or corners. For both the training and validation sets, the antenna orientation of the target is varied. In each cell, the target is placed at several locations for some time, and the measurements at each time slot are recorded as an instance-label pair. The size of the training set is 3881 and the size of the validation set is 1665. For the training data, 3471 out of 3881 × 8 = 31048 (11%) possible measurements are missing; for the validation data, 1220 out of 1665 × 8 = 13320 (9%) are missing.
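The frequentist SGC training and the maximum-likelihood search of Equation 6.13 can be sketched as below. This is an illustration under the stated diagonal-Gaussian model; the small variance floor is our own addition to avoid division by zero, and all names are ours.

```python
import numpy as np

def train_sgc(instances, labels):
    """Fit per-cell sample means and variances (the frequentist
    estimates of the parameters in Equations 6.10-6.11)."""
    Y = np.asarray(instances, dtype=float)
    labels = np.asarray(labels)
    params = {}
    for l in np.unique(labels):
        cell = Y[labels == l]
        # Variance floor (1e-9) added here to keep degenerate cells usable.
        params[int(l)] = (cell.mean(axis=0), cell.var(axis=0) + 1e-9)
    return params

def sgc_classify(y, params):
    """Return argmax_l p(y | x = l) as in Equation 6.13, via the
    log-likelihood of the diagonal multivariate Gaussian."""
    y = np.asarray(y, dtype=float)
    def loglik(mu, var):
        return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                            - 0.5 * (y - mu) ** 2 / var))
    return max(params, key=lambda l: loglik(*params[l]))
```

The radiomap collapses to one mean and one variance per sensor per cell, which is what makes the SGC so cheap in storage and computation compared to the KNN and SVM.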
Table 6.2: SVM Results (C = 10^{-1.1} and \gamma = 10^{0.3}) for the First Test

    Cell      Time Slots   Correct Estimations   Off by One Cell   Off by More than One Cell
    1         124          45                    55                24
    2         213          176                   37                0
    3         366          325                   35                6
    4         202          13                    188               1
    5         166          148                   11                7
    6         175          119                   29                27
    7         133          92                    36                5
    8         139          68                    69                2
    9         147          84                    28                35
    Overall   1665         1070 (64%)            488 (29%)         107 (6%)

Table 6.3: SGC Results for the First Test

    Cell      Time Slots   Correct Estimations   Off by One Cell   Off by More than One Cell
    1         124          35                    59                30
    2         213          152                   61                0
    3         366          296                   68                2
    4         202          106                   93                3
    5         166          128                   34                4
    6         175          57                    82                36
    7         133          94                    34                5
    8         139          89                    48                2
    9         147          89                    12                46
    Overall   1665         1046 (63%)            491 (29%)         128 (8%)

Table 6.1, Table 6.2 and Table 6.3 show the results. The raw numbers indicate that it is difficult to distinguish between neighboring cells, as shown by the sizable number of wrong estimates that are off by one cell. However, the estimates are off by more than one cell in only a small number of cases. This suggests that better AP placement and a higher number of APs are needed in order to obtain higher accuracies in this cell division scheme. More specifically, about 10% of all the measurements are missing and they have been assigned the arbitrary RSSI value of -100 dBmW. The missing measurements are more likely to be caused by weak signals rather than busy traffic in the network. Therefore, better placement of the APs will improve the performance.

Table 6.4: SGC Accuracy Bounds for the First Test

    Cell      The Closest Cell   Mahalanobis Distance to the Closest Cell   Accuracy Bound
    1         2                  0.7461                                     0.3694
    2         1                  0.7461                                     0.3378
    3         4                  2.0975                                     0.1071
    4         3                  2.0975                                     0.0919
    5         6                  1.5543                                     0.0735
    6         5                  1.5543                                     0
    7         8                  0.7092                                     0
    8         7                  0.7092                                     0
    9         8                  1.5601                                     0.1182
    Average                                                                 0.1220

Table 6.4 shows the accuracy bounds using the SGC approach. As noted, the union bound is extremely loose. This is further complicated by the fact that modeling the RSSI values as Gaussian random variables is only an approximation.
Therefore, directly evaluating the classification performance is probably more useful than evaluating the PEP via calculating the Mahalanobis distance.

6.2.2 The Second Test

The first test is repeated but under realistic conditions. For both the training and validation loops, the target is allowed to change height and move close to walls and major obstacles, and human movements are not controlled. Therefore, there are instance-label pairs in the validation set that are not seen in the training set and vice versa. The size of the training set is 2364 and the size of the validation set is 1089. 1999 out of 2364 x 8 = 18912 (11%) possible measurements are missing for the training data and 817 out of 1089 x 8 = 8712 (9%) possible measurements are missing for the validation data.

Table 6.5: KNN Results (K = 186) for the Second Test

    Cell      Time Slots   Correct Estimations   Off by One Cell   Off by More than One Cell
    1         170          150                   3                 17
    2         70           8                     61                1
    3         127          106                   5                 16
    4         85           21                    42                22
    5         177          106                   53                18
    6         86           56                    21                9
    7         108          32                    56                20
    8         130          58                    69                3
    9         136          97                    23                16
    Overall   1089         634 (58%)             333 (31%)         122 (11%)

Table 6.6: SVM Results (C = 10^{0.4} and \gamma = 10^{0.8}) for the Second Test

    Cell      Time Slots   Correct Estimations   Off by One Cell   Off by More than One Cell
    1         170          130                   23                17
    2         70           27                    42                1
    3         127          104                   18                5
    4         85           53                    19                13
    5         177          111                   50                16
    6         86           49                    25                12
    7         108          37                    45                26
    8         130          63                    62                5
    9         136          81                    39                16
    Overall   1089         655 (60%)             323 (30%)         111 (10%)

Table 6.5, Table 6.6 and Table 6.7 show the results. Compared to the first test, the results are slightly worse due to the realistic conditions imposed. Imperfect training is a serious problem for the fingerprinting paradigm. For both regression and classification, the conditions for training and testing must be kept as close as possible. Once trained, the model learned is static and the performance decreases if the conditions change. Table 6.8 shows the accuracy bounds using the SGC.
The same conclusion can be drawn as for the first test.

Table 6.7: SGC Results for the Second Test

    Cell      Time Slots   Correct Estimations   Off by One Cell   Off by More than One Cell
    1         170          119                   46                5
    2         70           24                    46                0
    3         127          86                    27                14
    4         85           46                    20                19
    5         177          89                    76                12
    6         86           36                    36                14
    7         108          51                    43                14
    8         130          75                    51                4
    9         136          84                    28                24
    Overall   1089         610 (56%)             373 (34%)         106 (10%)

Table 6.8: SGC Accuracy Bounds for the Second Test

    Cell      The Closest Cell   Mahalanobis Distance to the Closest Cell   Accuracy Bound
    1         2                  2.9654                                     0.4744
    2         1                  2.9654                                     0.4609
    3         4                  2.0228                                     0.1892
    4         3                  2.0228                                     0.0406
    5         6                  0.8852                                     0
    6         5                  0.8852                                     0
    7         8                  1.2691                                     0
    8         9                  0.9471                                     0
    9         8                  0.9471                                     0.0813
    Average                                                                 0.1385

Since all three classifiers are roughly equal in terms of classification errors, the SGC is clearly the preferred classifier. First, no cross-validation is required, in contrast to the KNN and SVM. Second, the computation and storage costs are extremely low. The SGC only requires computing the sample means and variances from the training set. This can be done recursively so that only the newest instance-label pair is needed. On the other hand, the KNN requires storing and looking at every pair in a big radiomap, and the SVM requires storing a big radiomap as well as solving a series of quadratic optimization problems.

Algorithm 4
For each time slot t:
    Start with the previous time slot's weights, \{w_{t-1}^i\}_{i=1}^L.
    For each state i,
        \tilde{w}_t^i = p(y_t \mid x_t = i) \sum_j w_{t-1}^j \, p(x_t = i \mid x_{t-1} = j)
    Normalize using
        w_t^i = \tilde{w}_t^i / \sum_j \tilde{w}_t^j
    The estimate is
        \hat{x}_t = \arg\max_i w_t^i

6.3 The Hidden Markov Model (HMM) Filter

Now, this chapter investigates the use of Bayesian filtering for the SGC. As mentioned in Chapter 3, the HMM filter is the exact and optimal solution because the state space is finite. Let \{w_t^i = p(x_t = i \mid y_{1:t})\}_{i=1}^L be the normalized weights of the discrete posterior density and let \tilde{w}_t^i be the unnormalized ones.
Replacing all the integrals in Chapter 3 by summations, the HMM filter is summarized in Algorithm 4. For the transition model p(x_t \mid x_{t-1}), it can be easily generated from the floor plan. For the observation model p(y_t \mid x_t), the SGC is already a native probabilistic formulation so Equation 6.11 is available. On the other hand, the KNN and SVM are deterministic in nature. There are several methods for estimating probabilities for the KNN and SVM. For instance, in the majority voting stage of the KNN, if a out of K labels are from cell Z, a reasonable probability estimate for p(y_t \mid x_t = Z) is a/K. Schemes for the SVM exist as well [37]. However, the SGC is already established as the preferred classifier and the next section will not include the KNN and SVM under the Bayesian filter framework.

6.4 Results with Filtering

The same sets of data from Section 6.2 are used. Considering the floor plan, the following transition probabilities are applied:

    p(x_t = i \mid x_{t-1} = i) = 0.95,      i = 1, \ldots, 9
    p(x_t = i \pm 1 \mid x_{t-1} = i) = 0.025,  i = 2, \ldots, 8
    p(x_t = 2 \mid x_{t-1} = 1) = 0.05
    p(x_t = 8 \mid x_{t-1} = 9) = 0.05 .     (6.23)

Table 6.9: Filtered SGC Results for the First Test

    Cell      Time Slots   Correct Estimations   Off by One Cell   Off by More than One Cell
    1         124          43                    56                25
    2         213          167                   46                0
    3         366          329                   37                0
    4         202          104                   98                0
    5         166          142                   24                0
    6         175          78                    76                21
    7         133          102                   31                0
    8         139          91                    46                2
    9         147          104                   18                25
    Overall   1665         1160 (70%)            432 (26%)         73 (4%)

Table 6.9 and Table 6.10 summarize the results. As expected, the use of filtering improves the results. In particular, the overall success rate is at or above 70%. Very few estimates are wrong by more than one cell. This is similar to the results in [24] with 8 sensors. Interestingly, the imperfect conditions of the second test do not affect the performance.
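Algorithm 4 together with the banded transition model of Equation 6.23 can be sketched in a few lines. This is a minimal illustration, not the thesis implementation; cells are indexed 0 to 8 here, and the likelihoods are assumed to come from an observation model such as the SGC.

```python
def make_transition(L=9):
    """Banded transition matrix following Equation 6.23: stay with
    probability 0.95, move to an adjacent cell with the remaining mass
    (split evenly when a cell has two neighbours along the loop)."""
    P = [[0.0] * L for _ in range(L)]
    for i in range(L):
        P[i][i] = 0.95
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < L]
        for j in neighbors:
            P[i][j] = 0.05 / len(neighbors)
    return P

def hmm_filter_step(w_prev, lik, P):
    """One step of Algorithm 4: predict through P, weight by the observation
    likelihoods lik[i] = p(y_t | x_t = i), normalize, and return the new
    weights together with the MAP cell estimate."""
    L = len(w_prev)
    w = [lik[i] * sum(w_prev[j] * P[j][i] for j in range(L)) for i in range(L)]
    s = sum(w)
    w = [wi / s for wi in w]
    return w, max(range(L), key=lambda i: w[i])
```

Each row of the transition matrix sums to one, and the filter step costs O(L^2) per time slot, which is negligible for L = 9 cells.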
We draw the conclusion that the use of filtering counteracts the effects of blocking the antennas and moving the target close to walls.

Table 6.10: Filtered SGC Results for the Second Test

    Cell      Time Slots   Correct Estimations   Off by One Cell   Off by More than One Cell
    1         170          122                   48                0
    2         70           26                    44                0
    3         127          96                    27                4
    4         85           74                    8                 3
    5         177          111                   64                2
    6         86           70                    16                0
    7         108          66                    40                2
    8         130          97                    33                0
    9         136          118                   15                3
    Overall   1089         780 (72%)             295 (27%)         14 (1%)

6.5 Conclusion

This chapter has compared the performance of the KNN, SVM and SGC in an indoor environment under realistic conditions. All three classifiers achieve the same performance but the SGC is the simplest in terms of implementation and storage requirements. Furthermore, the SGC is a native probabilistic formulation and it can be used under the Bayesian filter framework. Without filtering, the accuracy obtained is around 60%. With filtering, the performance of the SGC improves to 70% and virtually no estimate is off by more than one cell.

Of course, compared against the regression approach, the classification approach is less precise. We have carefully avoided talking about position coordinates and errors in terms of MSE. However, we can derive a simple estimate of the MSE of this scheme. Let us consider two adjacent cells such as cells 1 and 2 in Figure 6.2 and assume both cells are 5 m by 5 m. We use a coordinate system (u, v) whose origin is at the lower left corner of cell 1. From the experimental results, we consider the case where the classification accuracy is 70% and the mistaken classifications are off by one cell.
If we assume that the target is equally likely to be anywhere, i.e., uniformly distributed over the cell area, and we always return the center of cell 1 as the position coordinate, then the maximum error is \sqrt{2.5^2 + (2.5 + 5)^2} \approx 7.91 m and the MSE is

    \mathrm{MSE} = \frac{0.7}{25} \int_{u=0}^{5} \int_{v=0}^{5} \left[(u-2.5)^2 + (v-2.5)^2\right] dv\,du
                 + \frac{0.3}{25} \int_{u=0}^{5} \int_{v=5}^{10} \left[(u-2.5)^2 + (v-2.5)^2\right] dv\,du \approx 11.67.    (6.24)

Thus, the root mean squared error is 3.42 m. This is in line with regression techniques presented in the literature under similar setups.

Chapter 7
Online Parameter Estimation for the General Bayesian Filter

The biggest drawbacks of the fingerprinting paradigm discussed in Section 2.2 are the effort required to perform training and the static nature of the model trained. To our knowledge, there is not much discussion of the second issue in the literature other than resorting to performing the offline training step again. [16] suggests placing multiple static reference APs, whose locations are perfectly known, in order to update the observation model. However, its use is limited if the number of reference APs is low. This chapter proposes extending the Bayesian filter framework introduced in Chapter 3 and applied in Chapter 5 and Chapter 6 to deal with these two issues.

A probabilistic observation model constructed from the fingerprinting paradigm is governed by various parameters. Due to imperfect training or changing conditions, these parameters may not be optimal. Therefore, we view this challenge as estimating parameters of the observation model under the Bayesian filter framework. In addition to the unknown state variables, the parameters of the observation model are also unknown. For simplicity, let us assume that there is only one unknown parameter \theta.
Let the subscript in p_\theta(y_t \mid x_t) highlight this important unknown parameter. [38] gives an excellent overview of state-of-the-art methods for estimating \theta. We take the ML approach because it is the most mature. The optimal value of the parameter in the ML sense maximizes the total observation likelihood

    \log p_\theta(y_{1:t}),    (7.1)

i.e., the value that best explains the sequence of observations up to the current time index t. For the HMM, this is an old problem and the famous Expectation-Maximization (EM) algorithm accomplishes the task [28]. This approach has been explored in [24]. Unfortunately, the EM algorithm is an offline method. It requires storing the entire sequence of observations up to time index t and iterating through the sequence several times. In contrast, the online ML scheme attempts to maximize the predictive likelihood

    p_\theta(y_t \mid y_{1:t-1}) = \int p_\theta(y_t \mid x_t)\, p_\theta(x_t \mid y_{1:t-1})\, dx_t
                                 = \iint p_\theta(y_t \mid x_t)\, p(x_t \mid x_{t-1})\, p_\theta(x_{t-1} \mid y_{1:t-1})\, dx_t\, dx_{t-1}.    (7.2)

This can be accomplished by a stochastic gradient ascent algorithm

    \theta_t = \theta_{t-1} + \varepsilon \frac{\partial}{\partial\theta} \log p_\theta(y_t \mid y_{1:t-1}),    (7.3)

where \varepsilon is the stepsize. Using a constant stepsize allows us to track the parameter if it is slowly changing in time. A large value of \varepsilon provides faster convergence at a cost of lower stability. Equation 7.3 implies that we need to differentiate Equation 7.2 with respect to the parameter. This is commonly known as the filter derivative.
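The constant-stepsize ascent of Equation 7.3 is easy to visualize in a degenerate one-state case, where the predictive likelihood collapses to the observation likelihood itself. The toy below tracks a slowly drifting Gaussian mean; all numerical values are illustrative and chosen to mimic the simulations later in this chapter, not taken from them.

```python
import math, random

def track_mean(eps=0.05, T=4000, seed=3):
    """Toy illustration of Equation 7.3 with a single state: here
    p_theta(y_t | y_{1:t-1}) = N(theta, var), so the score is
    d/dtheta log p = (y - theta) / var, and a constant stepsize lets the
    estimate follow a slowly drifting true mean despite observation noise."""
    rng = random.Random(seed)
    var, theta = 2.0, -75.0            # start from a deliberately wrong value
    for k in range(1, T + 1):
        true_mean = -70.0 + 5.0 * math.sin(math.pi * k / 1e4)  # slow drift
        y = rng.gauss(true_mean, math.sqrt(var))
        theta += eps * (y - theta) / var                       # Equation 7.3
    return theta
```

The stepsize trades tracking speed against noise in the estimate: eps/var acts as an exponential forgetting factor, so larger eps follows faster drifts but jitters more around the truth.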
We would like to point out that this is an abuse of notation because evaluating the observation model p_\theta(y_t \mid x_t) and the predictive likelihood of Equation 7.2 requires knowing the exact value of \theta. To bypass this chicken-and-egg problem, the algorithm works as follows. At each time index, the filter uses the current estimate \theta_{t-1} to evaluate the observation likelihood, i.e., it is treated as the true value of \theta. Then, this likelihood is used to produce the posterior density in the exact same manner as the vanilla Bayesian filter in Chapter 3 and to produce the predictive likelihood of Equation 7.2. Finally, the predictive likelihood allows us to calculate a new value of the parameter \theta_t via Equation 7.3.

The online ML scheme is conceptually simple, just like the vanilla Bayesian filter discussed in Chapter 3, but no analytical solution exists except for some special cases. For instance, the algorithm has been studied for the HMM in [39]. In the interest of generality, we continue forward using recent developments in particle methods, which allow us to use this parameter estimation scheme for the general Bayesian filter. This is a continuation of the PF introduced in Section 5.3. The following summarizes the essential steps for achieving online ML parameter estimation using particle methods as well as our preliminary work for combating imperfectly trained and time-varying parameters.

7.1 The Marginal Particle Filter and the Filter Derivative

The following is based on recent developments in marginal particle filtering [40]. It has been successfully used in robotics [41]. Formally, let

    p_\theta(x_t \mid y_{1:t}) \triangleq \frac{\xi_\theta(x_t, y_{1:t})}{\int \xi_\theta(x_t, y_{1:t})\, dx_t}.
(7.4)

Then, from Chapter 3,

    \xi_\theta(x_t, y_{1:t}) = p_\theta(y_t \mid x_t) \int p(x_t \mid x_{t-1})\, p_\theta(x_{t-1} \mid y_{1:t-1})\, dx_{t-1}.    (7.5)

Differentiating with respect to \theta,

    \frac{\partial}{\partial\theta} p_\theta(x_t \mid y_{1:t}) = \frac{\frac{\partial}{\partial\theta}\xi_\theta(x_t, y_{1:t})}{\int \xi_\theta(x_t, y_{1:t})\, dx_t} - p_\theta(x_t \mid y_{1:t}) \frac{\int \frac{\partial}{\partial\theta}\xi_\theta(x_t, y_{1:t})\, dx_t}{\int \xi_\theta(x_t, y_{1:t})\, dx_t}    (7.6)

and

    \frac{\partial}{\partial\theta}\xi_\theta(x_t, y_{1:t}) = p_\theta(y_t \mid x_t)\, \frac{\partial}{\partial\theta}\log p_\theta(y_t \mid x_t) \int p(x_t \mid x_{t-1})\, p_\theta(x_{t-1} \mid y_{1:t-1})\, dx_{t-1}
                                                            + p_\theta(y_t \mid x_t) \int p(x_t \mid x_{t-1})\, \frac{\partial}{\partial\theta} p_\theta(x_{t-1} \mid y_{1:t-1})\, dx_{t-1}.    (7.7)

Now, realizing that all these integrals are intractable, we invoke the idea of the PF as in Section 5.3. A particle x_t^i is a candidate of the unknown state vector and it has a weight w_t^i. P particles and their weights approximate the posterior density, i.e.,

    \hat{p}_\theta(x_t \mid y_{1:t}) = \sum_{i=1}^{P} w_t^i\, \delta_{x_t^i}(x_t).    (7.8)

Similar to Section 5.3, a mixture of the transition model is used as the proposal distribution

    q_\theta(x_t \mid y_{1:t}) = \sum_{j=1}^{P} w_{t-1}^j\, p(x_t \mid x_{t-1}^j).    (7.9)

This can be easily sampled using composition sampling.
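Composition sampling from the mixture proposal of Equation 7.9 means first picking a component j with probability w_{t-1}^j and then drawing from that component's transition kernel. A minimal sketch, assuming an isotropic Gaussian random-walk kernel as in the simulations later in this chapter (names are illustrative):

```python
import bisect, random

def composition_sample(weights, particles, sigma_u, rng):
    """Draw one sample from the mixture proposal of Equation 7.9:
    choose component j with probability proportional to weights[j], then
    sample from a Gaussian transition kernel centred at particles[j]."""
    cdf, total = [], 0.0
    for w in weights:
        total += w
        cdf.append(total)
    j = bisect.bisect_left(cdf, rng.random() * total)  # inverse-CDF component pick
    px, py = particles[j]
    return (rng.gauss(px, sigma_u), rng.gauss(py, sigma_u))
```

As noted later in the chapter, drawing all P new particles this way already achieves resampling, which is why the marginal PF needs no explicit resampling step.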
The importance weights are defined in the marginal space

    w_t = \frac{p_\theta(x_t \mid y_{1:t})}{q_\theta(x_t \mid y_{1:t})}.    (7.10)

In contrast to Equation 5.13, we are in the marginal space instead of the joint space. This is called the marginal PF and it is only equivalent to the original SIR-PF when the transition model is used as the proposal density. The weight update formula remains the same, i.e.,

    \tilde{w}_t^i = p_\theta(y_t \mid x_t^i),    (7.11)

where \tilde{w}_t^i is the unnormalized weight. Similar to the posterior density, the filter derivative is approximated by the same set of particles but with different weights, i.e.,

    \widehat{\frac{\partial}{\partial\theta}}\xi_\theta(x_t, y_{1:t}) = \sum_{i=1}^{P} \rho_t^i\, \delta_{x_t^i}(x_t), \qquad
    \widehat{\frac{\partial}{\partial\theta}} p_\theta(x_t \mid y_{1:t}) = \sum_{i=1}^{P} w_t^i \beta_t^i\, \delta_{x_t^i}(x_t).    (7.12)

Let \tilde{\rho}_t^i be the unnormalized weight of the filter derivative. From Equation 7.7, the update rule is

    \tilde{\rho}_t^i = \tilde{w}_t^i\, \frac{\partial}{\partial\theta}\log p_\theta(y_t \mid x_t^i)
                     + \tilde{w}_t^i\, \frac{\sum_j w_{t-1}^j\, p(x_t^i \mid x_{t-1}^j)\, \beta_{t-1}^j}{\sum_j w_{t-1}^j\, p(x_t^i \mid x_{t-1}^j)}.
(7.13)

Finally, these particles allow us to do online ML parameter estimation because

    \widehat{\frac{\partial}{\partial\theta}}\log p_\theta(y_t \mid y_{1:t-1})
    = \frac{\widehat{\frac{\partial}{\partial\theta}} p_\theta(y_t \mid y_{1:t-1})}{\hat{p}_\theta(y_t \mid y_{1:t-1})}
    = \frac{\int \widehat{\frac{\partial}{\partial\theta}}\xi_\theta(x_t, y_{1:t})\, dx_t}{\int \hat{\xi}_\theta(x_t, y_{1:t})\, dx_t}
    = \frac{\sum_j \tilde{\rho}_t^j}{\sum_j \tilde{w}_t^j}.    (7.14)

We summarize the complete Bayesian filter and the filter derivative approximated by particles in Algorithm 5.1 The normalization step ensures that \int p_\theta(x_t \mid y_{1:t})\, dx_t = 1 and \int \frac{\partial}{\partial\theta} p_\theta(x_t \mid y_{1:t})\, dx_t = 0. For the initial value of the filter derivative at t = 0, we can simply set \beta_0^i = 0 \ \forall i. We would like to point out that the same procedure needs to be repeated if there are multiple unknown parameters to be estimated.

Algorithm 5 works for any Bayesian filter, but the case of the HMM is further simplified because the state space is finite. Algorithm 6 is a summary of the case for the HMM with L states. One final thing to note is that the ML approach converges slowly. Therefore, we must assume that the parameters change slowly over time. This is typically not a problem unless something extreme happens globally. In principle, we could skip the training phase and use Bayesian filtering to blindly estimate both the unknown state vector and the parameters. However, the ML approach is not guaranteed to converge to the global optimum.
For instance, for the HMM, if we initialize the parameters of all states to equal values, i.e., p_\theta(y_t \mid x_t = i) = p_\theta(y_t \mid x_t = j) \ \forall i, j, then the filter will fail as the target is equally likely to be anywhere. We must initialize the parameters to reasonable values that are close to the true values. Therefore, the training phase is still necessary but it may be done imperfectly, since we only need good ballpark values to jump-start the Bayesian filter.

1 There is no explicit resampling in the algorithm because the proposal is a mixture of the transition model. Using composition sampling for this already achieves resampling.

Algorithm 5
For a parameter \theta at time slot t:
    Start with the previous time index's posterior and filter derivative, \{x_{t-1}^j\}_{j=1}^P, \{w_{t-1}^j\}_{j=1}^P and \{\beta_{t-1}^j\}_{j=1}^P.
    Draw new particles using composition sampling:
        x_t^i \sim \sum_j w_{t-1}^j\, p(x_t \mid x_{t-1}^j)
    For each particle i,
        \tilde{w}_t^i = p_\theta(y_t \mid x_t^i)
        \tilde{\rho}_t^i = \tilde{w}_t^i\, \frac{\partial}{\partial\theta}\log p_\theta(y_t \mid x_t^i)
                         + \tilde{w}_t^i\, \frac{\sum_j w_{t-1}^j\, p(x_t^i \mid x_{t-1}^j)\, \beta_{t-1}^j}{\sum_j w_{t-1}^j\, p(x_t^i \mid x_{t-1}^j)}
    Normalize:
        w_t^i = \tilde{w}_t^i / \sum_j \tilde{w}_t^j
        w_t^i \beta_t^i = \frac{\tilde{\rho}_t^i}{\sum_j \tilde{w}_t^j} - w_t^i\, \frac{\sum_j \tilde{\rho}_t^j}{\sum_j \tilde{w}_t^j}
    Update the parameter:
        \theta_t = \theta_{t-1} + \varepsilon\, \frac{\sum_j \tilde{\rho}_t^j}{\sum_j \tilde{w}_t^j}

7.2 Simulations

Now, armed with the ML scheme for online parameter estimation, we show how imperfect training and time-varying parameters can be handled.

Algorithm 6
For a parameter \theta at time slot t:
    Start with the previous time index's posterior and filter derivative, \{w_{t-1}^j\}_{j=1}^L and \{\beta_{t-1}^j\}_{j=1}^L.
    For each state i,
        \tilde{w}_t^i = p_\theta(y_t \mid x_t = i) \sum_j w_{t-1}^j\, p(x_t = i \mid x_{t-1} = j)
        \tilde{\rho}_t^i = \tilde{w}_t^i\, \frac{\partial}{\partial\theta}\log p_\theta(y_t \mid x_t = i)
                         + p_\theta(y_t \mid x_t = i) \sum_j w_{t-1}^j\, p(x_t = i \mid x_{t-1} = j)\, \beta_{t-1}^j
    Normalize:
        w_t^i = \tilde{w}_t^i / \sum_j \tilde{w}_t^j
        w_t^i \beta_t^i = \frac{\tilde{\rho}_t^i}{\sum_j \tilde{w}_t^j} - w_t^i\, \frac{\sum_j \tilde{\rho}_t^j}{\sum_j \tilde{w}_t^j}
    Update the parameter:
        \theta_t = \theta_{t-1} + \varepsilon\, \frac{\sum_j \tilde{\rho}_t^j}{\sum_j \tilde{w}_t^j}

7.2.1 Tracking the Path Loss Exponent

This is a simplified case of Section 5.4.
The transition model is a simple differential drive [33]

    x_t = x_{t-1} + \mathcal{N}(0, \sigma_u^2 I),    (7.15)

where x_t = [p_t^{(1)}\ p_t^{(2)}]^T consists of the two-dimensional coordinates at time t and \sigma_u^2 = 0.1. The area and the placement of the N = 6 sensors are the same as in Section 5.4. The observation model is

    y_{n,t} = -10\rho_t \log_{10}\lVert x_t - r_n\rVert + \mathcal{N}(0, \sigma_w^2),    (7.16)

where the path loss exponent \rho_t is now time-varying, r_n is still the location coordinate of sensor n and the variance of the noise is \sigma_w^2 = 2. 10^4 time slots are simulated in MATLAB and the filter is tasked with tracking both the location coordinate of the target and the time-varying path loss exponent. The true path loss exponent varies according to \rho_t = 2 + \sin(\pi k / 10^4), where k is the time index. Due to imperfect training, the filter is mistaken to believe \rho = 3. We have already shown how the PF estimates the state in Chapter 5. Now, let us demonstrate the parameter estimation part. Once again,

    p(y_{n,t} \mid x_t^i) = \frac{1}{\sqrt{2\pi\sigma_w^2}} \exp\!\left(\frac{-(y_{n,t} + 10\rho\log_{10}\lVert x_t^i - r_n\rVert)^2}{2\sigma_w^2}\right)    (7.17)

and

    p(y_t \mid x_t^i) = \prod_{n=1}^{N} p(y_{n,t} \mid x_t^i).    (7.18)

As required by Algorithm 5,

    p(x_t^i \mid x_{t-1}^j) = \frac{1}{2\pi\sigma_u^2} \exp\!\left(\frac{-(p_t^{(1),i} - p_{t-1}^{(1),j})^2}{2\sigma_u^2}\right) \exp\!\left(\frac{-(p_t^{(2),i} - p_{t-1}^{(2),j})^2}{2\sigma_u^2}\right)    (7.19)

and

    \frac{\partial}{\partial\rho}\log p(y_t \mid x_t^i) = \frac{-1}{\sigma_w^2} \sum_{n=1}^{N} \left(y_{n,t} + 10\rho\log_{10}\lVert x_t^i - r_n\rVert\right)\left(10\log_{10}\lVert x_t^i - r_n\rVert\right).
(7.20)

P = 300 particles are used and we use a stepsize \varepsilon = 10^{-4}. Figure 7.1 shows how well the ML scheme locks onto the true value of the path loss exponent and tracks it as it changes.

Figure 7.1: History of the Path Loss Exponent

7.2.2 Tracking Means of Gaussians

This is a simplified case of Section 6.4, i.e., the HMM filter with a finite number of cells. We have two cells l = 1 and l = 2 and the transition probabilities are

    p(x_t = i \mid x_{t-1} = i) = 0.95, \ \forall i
    p(x_t = j \mid x_{t-1} = i) = 0.05, \ \forall i \neq j.    (7.21)

For simplicity, there is only N = 1 sensor and we have a scalar observation y_t. The Gaussian observation model is

    p(y_t \mid x_t = l) = \frac{1}{\sqrt{2\pi\sigma_l^2}} \exp\!\left(\frac{-(y_t - \mu_l)^2}{2\sigma_l^2}\right).    (7.22)

We use the same noise variance for all cells, i.e., \sigma_l^2 = 2 \ \forall l. The means of the Gaussians are time-varying:

    \mu_1 = -70 + 5\sin(\pi k / 10^4)
    \mu_2 = -60 + 10\sin(\pi k / 10^4),    (7.23)

where k is the time index. Due to imperfect training, the filter is mistaken to believe \mu_1 = -75 and \mu_2 = -55. As required by Algorithm 6,

    \frac{\partial}{\partial\mu_l}\log p_\theta(y_t \mid x_t = i) = \frac{1}{\sigma_l^2}(y_t - \mu_l)\, I(x_t = i),    (7.24)

where I(x_t = i) = 1 if x_t = i and I(x_t = i) = 0 otherwise. The stepsize is \varepsilon = 10^{-2}. 10^4 time slots are simulated in MATLAB and the filter is tasked with estimating both the state and the time-varying means of the Gaussians. Figure 7.2 and Figure 7.3 show how well the filter locks onto the true values and tracks the changes.
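The two-cell experiment of Section 7.2.2 can be reproduced in miniature with Algorithm 6. The sketch below tracks only \mu_1 to stay short (the thesis tracks both means, each with its own derivative weights); it stores the derivative weights d^i = w_t^i \beta_t^i directly, which avoids dividing by near-zero posterior weights. The thesis simulations are in MATLAB, so everything here is an illustrative re-implementation, not the original code.

```python
import math, random

def track_mu1(T=10000, eps=1e-2, seed=7):
    """Sketch of the Section 7.2.2 experiment via Algorithm 6: a two-state
    HMM whose true means drift as in Equation 7.23, with online ML tracking
    of the first mean only, starting from the imperfect value mu_1 = -75."""
    rng = random.Random(seed)
    var = 2.0
    P = [[0.95, 0.05], [0.05, 0.95]]       # Equation 7.21
    mu1_hat, mu2_hat = -75.0, -55.0        # imperfectly trained means
    w, d = [0.5, 0.5], [0.0, 0.0]          # posterior and derivative weights
    state = 0
    for k in range(1, T + 1):
        # True system: drifting means, Markov state, Gaussian observation.
        mu_true = [-70.0 + 5.0 * math.sin(math.pi * k / 1e4),
                   -60.0 + 10.0 * math.sin(math.pi * k / 1e4)]
        if rng.random() < 0.05:
            state = 1 - state
        y = rng.gauss(mu_true[state], math.sqrt(var))
        # Algorithm 6 with theta = mu_1; Equation 7.24 gives the score.
        mu_hat = [mu1_hat, mu2_hat]
        lik = [math.exp(-(y - m) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
               for m in mu_hat]
        dlog = [(y - mu1_hat) / var, 0.0]  # only state 1 depends on mu_1
        w_t, rho_t = [], []
        for i in range(2):
            pred = sum(w[j] * P[j][i] for j in range(2))
            dpred = sum(P[j][i] * d[j] for j in range(2))  # d[j] = w[j]*beta[j]
            w_t.append(lik[i] * pred)
            rho_t.append(w_t[i] * dlog[i] + lik[i] * dpred)
        sw, sr = sum(w_t), sum(rho_t)
        w = [x / sw for x in w_t]
        d = [rho_t[i] / sw - w[i] * sr / sw for i in range(2)]
        mu1_hat += eps * sr / sw           # gradient step of Equation 7.3
    return mu1_hat
```

At k = 10^4 the true \mu_1 returns to -70, and the estimate should have locked on and shed its initial -75 bias, mirroring the behaviour shown in Figure 7.2.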
7.3 Discussion and Future Work

We have used an online ML parameter tracking scheme to combat imperfectly trained and time-varying parameters. Nevertheless, we must point out its deficiencies. First, the observation model must be in closed form and differentiable with respect to the parameters. This is on top of the requirement that the Bayesian filter framework is only applicable to probabilistic formulations. Second, the stepsize is difficult to tune and it may be different for different parameters [38]. Furthermore, Algorithm 5 and Algorithm 6 are expensive as we must apply them for every parameter of interest. Nevertheless, this is the most promising way to deal with the deficiencies of the fingerprinting paradigm because it does not assume a number of known reference APs.

We have attempted to apply the scheme to experimental data. Our preliminary results have been clouded by several unforeseen difficulties. For instance, for the SGC studied in Chapter 6, we have attempted to alter the setup by covering the antennas with foils. However, doing so has not altered the sample means of the RSSI values in the cells dramatically. Intuitively, this is explained by the fact that the mean values are simple and robust. Small changes like human movements and covering the antennas do not change the conditions very much. In particular, since the model is only a rough approximation of reality, the modeling error is large. This is represented by large values for the variances of the SGC.

Figure 7.2: History of the Mean of the Gaussian for the First Cell
In Equation 7.24, if a variance is large then the derivative is small and the scheme does not adapt to small changes. Furthermore, closed-form and differentiable probabilistic observation models are most likely not going to fit the empirical data well. Their use is justified by simplicity and robustness. While these approximations have led to satisfactory performance results, we have not validated the use of online parameter estimation for them. Limited by the length and scope of this thesis, we must caution that this chapter contains preliminary results only and this online parameter tracking scheme may not be a panacea without further studies.

Figure 7.3: History of the Mean of the Gaussian for the Second Cell

Chapter 8
Conclusion and Future Work

Essentially, all models are wrong, but some are useful. — George E. P. Box, Professor of Statistics

We have presented three main contributions to the fingerprinting paradigm, which aims to solve the inference problem of RSSI-based localization by constructing models from empirical data. From simulations as well as experimental data, we have shown that the PF is better than the UKF for solving the Bayesian filter problem using the regression approach. This is due to the fact that the UKF imposes unimodal Gaussian densities while realistic environments often introduce multimodal elements. Because there is not much discussion of the classification approach in the literature, we have emphasized its promising use based on experimental results.
In particular, the SGC, which models the RSSI values within a specific region as Gaussian variables, is shown to be the preferred solution because it is written in closed form and it is differentiable. It can be used under the Bayesian filter framework completely. Finally, we have demonstrated some preliminary results for using the Bayesian filter derivative to achieve online parameter tracking. While it is promising, more experimental results are needed to validate its use.

While the fingerprinting paradigm is powerful and has been studied well in the literature, there are two unsolved issues. First, the radiomap recorded in the offline training phase is static and there is no simple way to adapt if the environment changes. We propose using an ML online parameter tracking scheme, but it may not be a panacea. Second, the fingerprinting paradigm does not answer questions such as the theoretical limits on the accuracies, the spatial sampling interval required in the offline training phase, or the choice of the cell division scheme required by many methods. The standard approach is to simply proceed with an arbitrary setup, evaluate its performance, then repeat the procedure with a different setup until the performance is satisfactory. While this is partially alleviated by using simple and robust methods that require less human intervention and training, theoretical answers are still unknown. This thesis has used simple and robust methods such as the piecewise path loss model and the SGC. As elegantly said by Professor Box, these are literally wrong models, as they do not fit the empirical data perfectly, but they have been useful. We have been able to achieve satisfactory localization results but we have not been able to answer some fundamental questions. We believe that more theoretical analysis based on ray tracing and the laws of physics is required to answer these questions.

Bibliography

[1] N. Patwari, J. Ash, S. Kyperountas, A. Hero III, R.
Moses, and N. Correal, \u00E2\u0080\u009CLocating the nodes: Cooperative localization in wireless sensor networks,\u00E2\u0080\u009D IEEE Signal Processing Mag., vol. 22, no. 4, pp. 54\u00E2\u0080\u009369, July 2005. \u00E2\u0086\u0092 pages 1 [2] A. Boukerche, H. Oliveira, E. Nakamura, and A. Loureiro, \u00E2\u0080\u009CLocalization systems for wireless sensor networks,\u00E2\u0080\u009D IEEE Wireless Commun. Mag., vol. 14, no. 6, pp. 6\u00E2\u0080\u009312, Dec. 2007. \u00E2\u0086\u0092 pages 1, 4, 12 [3] M. Li and Y. Liu, \u00E2\u0080\u009CRendered path: Range-free localization in anisotropic sensor networks with holes,\u00E2\u0080\u009D IEEE/ACM Trans. Networking, vol. 18, no. 1, pp. 320\u00E2\u0080\u0093332, Feb. 2010. \u00E2\u0086\u0092 pages 1 [4] D. Kelly, S. McLoone, T. Dishongh, M. McGrath, and J. Behan, \u00E2\u0080\u009CSingle access point location tracking for in-home health monitoring,\u00E2\u0080\u009D in Proc. of 5th Workshop on Positioning, Navigation and Communication (WPNC), Hannover, Germany, Mar. 2008, pp. 23 \u00E2\u0080\u009329. \u00E2\u0086\u0092 pages 1, 7, 11 [5] P. Bahl and V. Padmanabhan, \u00E2\u0080\u009CRADAR: An in-building RF-based user location and tracking system,\u00E2\u0080\u009D in Proc. of 2000 IEEE INFOCOM, Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 2, Tel Aviv, Israel, Mar. 2000, pp. 775\u00E2\u0080\u0093784. \u00E2\u0086\u0092 pages 1, 2, 6, 7, 38 [6] A. Beck, P. Stoica, and J. Li, \u00E2\u0080\u009CExact and approximate solutions of source localization problems,\u00E2\u0080\u009D IEEE Trans. Signal Processing, vol. 56, no. 5, pp. 1770 \u00E2\u0080\u00931778, May 2008. \u00E2\u0086\u0092 pages 4 [7] N. Sirola, \u00E2\u0080\u009CClosed-form algorithms in mobile positioning: Myths and misconceptions,\u00E2\u0080\u009D in Proc. of 7th Workshop on Positioning, Navigation and Communication (WPNC), Dresden, Germany, Mar. 2010. \u00E2\u0086\u0092 pages 4 69 [8] T. S. 
Rappaport, Wireless Communications. Prentice Hall, 1996. \u00E2\u0086\u0092 pages 5, 6, 13, 41 [9] G. Za\u00CC\u0080ruba, M. Huber, F. Kamangar, and I. Chlamtac, \u00E2\u0080\u009CIndoor location tracking using RSSI readings from a single Wi-Fi access point,\u00E2\u0080\u009D Wireless Networks, vol. 13, no. 2, pp. 221\u00E2\u0080\u0093235, Apr. 2007. \u00E2\u0086\u0092 pages 5, 8 [10] F. Potorti, A. Corucci, P. Nepa, F. Furfari, P. Barsocchi, and A. Buffi, \u00E2\u0080\u009CAccuracy limits of in-room localisation using RSSI,\u00E2\u0080\u009D in Proc. of 2009 IEEE Antennas and Propagation Society International Symposium (APSURSI), Charleston, SC, USA, June 2009, pp. 1\u00E2\u0080\u00934. \u00E2\u0086\u0092 pages 5, 6, 16, 41 [11] S. Seidel and T. Rappaport, \u00E2\u0080\u009C914 MHz path loss prediction models for indoor wireless communications in multifloored buildings,\u00E2\u0080\u009D IEEE Trans. Antennas Propagat., vol. 40, no. 2, pp. 207\u00E2\u0080\u0093217, Feb. 1992. \u00E2\u0086\u0092 pages 5, 6, 13, 41 [12] G. Chandrasekaran, M. Ergin, J. Yang, S. Liu, Y. Chen, M. Gruteser, and R. Martin, \u00E2\u0080\u009CEmpirical evaluation of the limits on localization using signal strength,\u00E2\u0080\u009D in Proc. of 6th IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), Rome, Italy, June 2009, pp. 1\u00E2\u0080\u00939. \u00E2\u0086\u0092 pages 6 [13] T. Mitchell, Machine Learning. McGraw-Hill, 1997. \u00E2\u0086\u0092 pages 6, 38 [14] V. Vapnik, The Nature of Statistical Learning Theory, 2nd ed. Springer-Verlag, 2000. \u00E2\u0086\u0092 pages 6, 38 [15] K. Kaemarungsi, \u00E2\u0080\u009CDesign of indoor positioning system based on location fingerprinting technique,\u00E2\u0080\u009D Ph.D. dissertation, University of Pittsbugh, Feb. 2005. \u00E2\u0086\u0092 pages 6, 41, 43 [16] J. Yin, Q. Yang, and L. 
Ni, \u00E2\u0080\u009CLearning adaptive temporal radio maps for signal-strength-based location estimation,\u00E2\u0080\u009D IEEE Trans. Mobile Comput., vol. 7, no. 7, pp. 869\u00E2\u0080\u0093883, July 2008. \u00E2\u0086\u0092 pages 7, 55 [17] A. Paul and E. Wan, \u00E2\u0080\u009CWi-Fi based indoor localization and tracking using sigma-point Kalman filtering methods,\u00E2\u0080\u009D in Proc. of 2008 IEEE/ION Position, Location and Navigation Symposium (PLAN), Monterey, CA, USA, May 2008, pp. 646\u00E2\u0080\u0093659. \u00E2\u0086\u0092 pages 7, 20 [18] C. Laoudias, P. Kemppi, and C. Panayiotou, \u00E2\u0080\u009CLocalization using radial basis function networks and signal strength fingerprints in WLAN,\u00E2\u0080\u009D in Proc. of 2009 IEEE Global Telecommunications Conference (GLOBECOM), Honolulu, HI, USA, Nov. 2009, pp. 1 \u00E2\u0080\u00936. \u00E2\u0086\u0092 pages 7 70 [19] M. Brunato and R. Battiti, \u00E2\u0080\u009CStatistical learning theory for location fingerprinting in wireless LANs,\u00E2\u0080\u009D Computer Networks and ISDN Systems, vol. 47, no. 6, pp. 825\u00E2\u0080\u0093845, Apr. 2005. \u00E2\u0086\u0092 pages 7 [20] V. Honkavirta, T. Perala, S. Ali-Loytty, and R. Piche, \u00E2\u0080\u009CA comparative survey of WLAN location fingerprinting methods,\u00E2\u0080\u009D in Proc. of 6th Workshop on Positioning, Navigation and Communication (WPNC), Hannover, Germany, Mar. 2009, pp. 243\u00E2\u0080\u0093251. \u00E2\u0086\u0092 pages 7, 41, 43 [21] C. Figuera, I. Mora-Jimenez, A. Guerrero-Curieses, J. Rojo-Alvarez, E. Everss, M. Wilby, and J. Ramos-Lopez, \u00E2\u0080\u009CNonparametric model comparison and uncertainty evaluation for signal strength indoor location,\u00E2\u0080\u009D IEEE Trans. Mobile Comput., vol. 8, no. 9, pp. 1250 \u00E2\u0080\u00931264, Sept. 2009. \u00E2\u0086\u0092 pages [22] T. Roos, P. Myllyma\u00CC\u0088ki, H. Tirri, P. Misikangas, and J. 
Sieva\u00CC\u0088nen, \u00E2\u0080\u009CA probabilistic approach to WLAN user location estimation,\u00E2\u0080\u009D International Journal of Wireless Information Networks, vol. 9, no. 3, pp. 155\u00E2\u0080\u0093164, July 2002. \u00E2\u0086\u0092 pages 7, 9, 41, 42 [23] C. Steiner and A. Wittneben, \u00E2\u0080\u009CLow complexity location fingerprinting with generalized UWB energy detection receivers,\u00E2\u0080\u009D IEEE Trans. Signal Processing, vol. 58, no. 3, pp. 1756 \u00E2\u0080\u00931767, Mar. 2010. \u00E2\u0086\u0092 pages 8 [24] A. Haeberlen, E. Flannery, A. M. Ladd, A. Rudys, D. S. Wallach, and L. E. Kavraki, \u00E2\u0080\u009CPractical robust localization over large-scale 802.11 wireless networks,\u00E2\u0080\u009D in Proc. of 2004 ACM International Conference on Mobile Computing and Networking (MobiCom), Philadelphia, PA, USA, Oct. 2004, pp. 70\u00E2\u0080\u009384. \u00E2\u0086\u0092 pages 8, 41, 52, 56 [25] M. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, \u00E2\u0080\u009CA tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,\u00E2\u0080\u009D IEEE Trans. Signal Processing, vol. 50, no. 2, pp. 174\u00E2\u0080\u0093188, Feb. 2002. \u00E2\u0086\u0092 pages 9, 19, 23, 26 [26] A. Doucet, N. de Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice. Springer-Verlag, 2001. \u00E2\u0086\u0092 pages 11, 19, 23, 26 [27] C. Chui and G. Chen, Kalman Filtering: with Real-Time Applications, 4th ed. Springer-Verlag, 2008. \u00E2\u0086\u0092 pages 11, 19, 23 [28] L. Rabiner, \u00E2\u0080\u009CA tutorial on hidden Markov models and selected applications in speech recognition,\u00E2\u0080\u009D Proc. IEEE, vol. 77, no. 2, pp. 257\u00E2\u0080\u0093286, Feb. 1989. \u00E2\u0086\u0092 pages 11, 19, 56 71 [29] S. Julier and J. Uhlmann, \u00E2\u0080\u009CUnscented filtering and nonlinear estimation,\u00E2\u0080\u009D Proc. IEEE, vol. 92, no. 3, pp. 401\u00E2\u0080\u0093422, Mar. 2004. \u00E2\u0086\u0092 pages 19, 22 [30] G. 
Binazzi, L. Chisci, F. Chiti, R. Fantacci, and S. Menci, \u00E2\u0080\u009CLocalization of a swarm of mobile agents via unscented Kalman filtering,\u00E2\u0080\u009D in Proc. of 2009 IEEE International Conference on Communications (ICC), Dresden, Germany, June 2009, pp. 1\u00E2\u0080\u00935. \u00E2\u0086\u0092 pages 19 [31] E. Menegatti, A. Zanella, S. Zilli, F. Zorzi, and E. Pagello, \u00E2\u0080\u009CRange-only SLAM with a mobile robot and a wireless sensor networks,\u00E2\u0080\u009D in Proc. of 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, May 2009, pp. 8\u00E2\u0080\u009314. \u00E2\u0086\u0092 pages 20 [32] J. Rodas, C. Escudero, and D. Iglesia, \u00E2\u0080\u009CBayesian filtering for a bluetooth positioning system,\u00E2\u0080\u009D in Proc. of 2008 IEEE International Symposium on Wireless Communication Systems (ISWCS), Reykjavik, Iceland, Oct. 2008, pp. 618\u00E2\u0080\u0093622. \u00E2\u0086\u0092 pages 20 [33] X. Rong Li and V. Jilkov, \u00E2\u0080\u009CSurvey of maneuvering target tracking. Part I. Dynamic models,\u00E2\u0080\u009D IEEE Trans. Aerosp. Electron. Syst., vol. 39, no. 4, pp. 1333\u00E2\u0080\u00931364, Oct. 2003. \u00E2\u0086\u0092 pages 20, 21, 61 [34] Y. Wu, D. Hu, M. Wu, and X. Hu, \u00E2\u0080\u009CUnscented Kalman filtering for additive noise case: augmented versus nonaugmented,\u00E2\u0080\u009D IEEE Signal Processing Letters, vol. 12, no. 5, pp. 357\u00E2\u0080\u0093360, May 2005. \u00E2\u0086\u0092 pages 23 [35] C.-W. Hsu and C.-J. Lin, \u00E2\u0080\u009CA comparison of methods for multiclass support vector machines,\u00E2\u0080\u009D IEEE Trans. Neural Networks, vol. 13, no. 2, pp. 415\u00E2\u0080\u0093425, Mar. 2002. \u00E2\u0086\u0092 pages 41 [36] C.-C. Chang and C.-J. Lin, LIBSVM: a library for support vector machines, 2010, software available at http://www.csie.ntu.edu.tw/\u00E2\u0088\u00BCcjlin/libsvm. \u00E2\u0086\u0092 pages 46 [37] T.-F. Wu, C.-J. Lin, and R. 
Weng, \u00E2\u0080\u009CProbability estimates for multi-class classication by pairwise coupling,\u00E2\u0080\u009D Journal of Machine Learning Research, vol. 5, pp. 975\u00E2\u0080\u00931005, Aug. 2004. \u00E2\u0086\u0092 pages 51 [38] N. Kantas, A. Doucet, S. Singh, and J. Maciejowski, \u00E2\u0080\u009CAn overview of sequential Monte Carlo methods for parameter estimation in general state-space models,\u00E2\u0080\u009D in Proc. of 15th IFAC Symposium on System Identification (SYSID), Saint-Malo, France, July 2009. \u00E2\u0086\u0092 pages 55, 64 72 [39] I. Collings and T. Ryden, \u00E2\u0080\u009CA new maximum likelihood gradient algorithm for on-line hidden Markov model identification,\u00E2\u0080\u009D in Proc. of 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 4, Seattle, WA, USA, May 1998, pp. 2261\u00E2\u0080\u00932264. \u00E2\u0086\u0092 pages 57 [40] G. Poyiadjis, A. Doucet, and S. Singh, \u00E2\u0080\u009CParticle methods for optimal filter derivative: application to parameter estimation,\u00E2\u0080\u009D in Proc. of 2005 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 5, Philadelphia, PA, USA, Mar. 2005, pp. 925\u00E2\u0080\u0093928. \u00E2\u0086\u0092 pages 57 [41] R. Martinez-Cantin, N. de Freitas, and J. Castellanos, \u00E2\u0080\u009CAnalysis of particle methods for simultaneous robot localization and mapping and a new algorithm: Marginal-SLAM,\u00E2\u0080\u009D in Proc. of 2007 IEEE International Conference on Robotics and Automation (ICRA), Rome, Italy, Apr. 2007, pp. 2415\u00E2\u0080\u00932420. \u00E2\u0086\u0092 pages 57 73"@en .
"Thesis/Dissertation"@en .
"2010-11"@en .
"10.14288/1.0071318"@en .
"eng"@en .
"Electrical and Computer Engineering"@en .
"Vancouver : University of British Columbia Library"@en .
"University of British Columbia"@en .
"Attribution-NonCommercial-NoDerivatives 4.0 International"@en .
"http://creativecommons.org/licenses/by-nc-nd/4.0/"@en .
"Graduate"@en .
"Localization systems using signal strength fingerprinting"@en .
"Text"@en .
"http://hdl.handle.net/2429/28750"@en .