UBC Theses and Dissertations

Lifespace tracking and activity monitoring on mobile phones Dewancker, Ian 2014


Lifespace Tracking and Activity Monitoring on Mobile Phones

by

Ian Dewancker

BSE., The University of Waterloo, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

March 2014

© Ian Dewancker 2014

Abstract

Daily patterns of behaviour are a rich source of information and play an important role in establishing a person's quality of life. Lifespace refers to measurements of the frequency, geographic extent and independence of an individual's travels. While difficult to measure and record automatically, lifespace has been shown to correlate with important metrics relating to physical performance, nutritional risk, and community engagement.

MobiSense is a mobile health research platform that aims to improve mobility analysis for both ambulating and wheelchair users. The goals of the system were to make it simple for users to collect mobility data, to provide accessible summaries of daily behaviours, and to enable further research and development in this area. The system is capable of producing lifespace summaries relating to indoor and outdoor mobility as well as activity trends and behaviours.

For indoor reporting, we investigated robust classification algorithms for room-level indoor localization using WiFi signal strengths. We pursue topological map localization as it requires simpler map models while preserving useful semantic information associated with location. Personalized maps are easy to create by capturing training observations in areas of interest. Outdoor summaries are captured by periodically recording GPS fixes.

For activity monitoring, a decision tree classifier was learned using a combination of accelerometer and GPS features.
The classifier can differentiate between stationary, wheeling (in a wheelchair), walking or vehicle motion.

To capture the relevant sensor data, we extended an open source logging application which records data streams locally before uploading them to a web service to process and visualize results. The custom web service processes the data and generates summary files which can then be visualized either for each individual day or over a user-selected date range. We employed a heat map visualization for outdoor lifespace to understand the geographic extent of a user's mobility. For indoor and activity summaries, we employed temporal line charts to understand trends in a user's mobility.

Preface

The design and evaluation of the experiments outlined in Chapters 2 and 3 were primarily done by me with assistance from Dr. Ian Mitchell. All data collection for use in this thesis was performed primarily by me with assistance from Dr. Jaimie Borisoff. I extended an open source sensor logging application [13] to suit our needs. The backend for the web application was designed and developed entirely by me. Tom Jin worked heavily on improving my early versions of the data visualizations.

A paper has been accepted for publication at RESNA 2014 that summarizes the work in this thesis [9]. Dr. Jaimie Borisoff wrote most of the paper, basing it on a near complete version of this thesis, with assistance from Dr. Ian Mitchell and me.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Programs
Acknowledgements
1 Introduction
  1.1 Contributions
  1.2 Organization
2 WiFi Indoor Localization
  2.1 Related Work
  2.2 WiFi Overview
  2.3 Localization as Probabilistic Classification
    2.3.1 Naive Bayes
    2.3.2 Chow-Liu Trees
    2.3.3 K-Nearest Neighbours
    2.3.4 Random Forests
  2.4 Novelty Detection
    2.4.1 Nearest Neighbour Threshold
    2.4.2 Nearest Centroid Thresholds
    2.4.3 Naive Bayes and Chow-Liu Tree Thresholds
  2.5 Experiments
    2.5.1 Data Collection
    2.5.2 Evaluation
3 Activity Recognition
  3.1 Related Work
  3.2 Sensor Overview
    3.2.1 Tri-Axis Accelerometer
    3.2.2 Global Positioning System
  3.3 Feature Selection and Sensor Fusion
    3.3.1 Accelerometer Features
    3.3.2 GPS Feature and Sensor Fusion
  3.4 Experiments
    3.4.1 Data Collection
    3.4.2 Evaluation
4 MobiSense System
  4.1 Related Work
  4.2 System Architecture
  4.3 Phone Application
    4.3.1 Sensor Logging
    4.3.2 User Interface
  4.4 Data Storage
  4.5 Data Processing
    4.5.1 Outdoor Summary
    4.5.2 Activity Summary
    4.5.3 Indoor Summary
  4.6 Data Visualization
    4.6.1 Timeline Selection
    4.6.2 Outdoor Summary
    4.6.3 Activity and Indoor Summary
5 Conclusion
  5.1 Future Work
Bibliography

List of Tables

2.1 Example WiFi Access Point Information
2.2 Indoor Localization Evaluation Dataset Summary
2.3 Summary of Room Classification Accuracies
2.4 Parameter Study Accuracies for K-NN Classifier
2.5 Parameter Study Accuracies for Random Forest Classifier

List of Figures

2.1 Example Probabilistic Graphical Model of Naive Bayes
2.2 Nearly Gaussian Looking Signal Strength Distribution
2.3 Non-Gaussian Signal Strength Distribution
2.4 Example Probabilistic Graphical Model of Chow-Liu Tree
2.5 Example Decision Tree for WiFi Localization
2.6 Screenshot of Mapping Application User Interface
2.7 Screenshot of Ground Truth Application
2.8 ROC Curves for Novelty Detection
3.1 Activity Classes of Interest
3.2 Accelerometer Window Examples
3.3 Sensor Fusion Overview
3.4 Activity Data Collection Configurations
3.5 Decision Tree Activity Classifier
3.6 Confusion Matrix for Activity Classifier
3.7 Comparison of Wheel Mounted vs MobiSense Actimetry
4.1 Overview of MobiSense System Architecture
4.2 MobiSense Phone Application User Interface
4.3 Filesystem Organization on MobiSense Server
4.4 Timeline Selection User Interface
4.5 Outdoor Summary Visualization
4.6 Activity and Indoor Pie Chart Summaries
4.7 Individual Day Timeline Summary
4.8 Aggregate Day Timeline Summary

List of Programs

3.1 Function to generate average speed list
4.1 Data Analysis Process

Acknowledgements

I would like to thank my supervisors Dr. Ian Mitchell and Dr. Jaimie Borisoff for their guidance and suggestions during my study at UBC. The path towards this thesis was certainly not direct and I am very appreciative of their patience for my technical meandering. Much of this work would have been difficult or impossible without Dr. Borisoff's data collection and validation. I would also like to thank Dr. Nando de Freitas and Jordan Frank for productive discussions and useful suggestions.

Tom Jin was an excellent collaborator and was instrumental in taking the web client visualizations to the next level.
Great thanks as well to my LCI labmates Junaed, Pouria, Eric, Pooja, Dave and Mauricio for bouncing ideas off of. A very important thanks to my other friends at UBC, Felix and John, for inspiring technical discussions and countless Vancouver adventures. Thanks as well to Josh and Ryan for California distractions and technical brainstorming. Much of the analysis and evaluation for this thesis would not have been possible without the help of Arianne, Nikole, Vanessa, Rachel, and Alanna, who assisted with data collection and were great friends during my time in Vancouver.

Thanks to my family in Belgium and Canada, especially Luc and Brigitte, and Oma and Opa for offering their homes and Internet connections during my trip to Belgium, as well as Nancy and Probal for always welcoming me in Toronto. Finally, thanks to my parents and sisters for their unconditional support and love throughout my studies and travels.

This research was supported by CanWheel (the Canadian Institutes of Health Research Emerging Team in Wheeled Mobility for Older Adults Grant), a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, an NSERC Undergraduate Student Research Award, and Dr. Borisoff's Canada Research Chair in Rehabilitation Engineering Design.

Chapter 1: Introduction

Mobile devices now surround us and have transformed into important tools in our daily lives. Our activities and behaviours are becoming increasingly coupled to devices with ever improving sensors. With current devices, mobility patterns and physical activity levels are particularly amenable to inference and analysis by leveraging the integration of GPS, accelerometer and other sensors. Mobile health systems that facilitate long-term mobility studies and reporting would potentially be of great value to health care, assistive technology and rehabilitation science.
Highly personalized, detailed mobility traces are not only of interest to researchers, but also to individuals motivated to learn more about their own trends and behaviours. The quantified self movement aims to empower these individuals with accessible data and analysis tools. Towards these ends, we investigate the design of a system to track lifespace and monitor mobility of both ambulating and wheelchair users with standard mobile phones. Lifespace refers to measurements of the frequency, geographic extent and independence of an individual's travels [37]. We concern ourselves with models and approaches to summarize indoor and outdoor movement as well as activity levels. We focus on techniques that perform well on real sensors and in real-world environments.

1.1 Contributions

We evaluate four classifiers for room-level localization using WiFi signal strength, using datasets from actual residences. We propose an effective set of features for use with a decision tree classifier to differentiate four activity classes. Finally, we describe MobiSense, a mobile-to-web lifespace tracking system, and its data visualizations.

1.2 Organization

In Chapter 2 we present our investigation into robust classifiers for WiFi-based indoor localization. Chapter 3 describes our approach to activity recognition. Chapter 4 presents the entire MobiSense system, and Chapter 5 discusses future work and extensions.

Chapter 2: WiFi Indoor Localization

Indoor localization is an important problem in the realms of robotics, ubiquitous computing and assistive technology. A robust indoor positioning system allows for context-aware assistive applications in the home and workplace. Even with room-level localization, one could imagine useful applications related to cognitive aids, navigation assistance, and tracking.

In particular, an indoor localization component would be useful for systems designed to measure lifespace and mobility.
While various systems have explored outdoor mobility summaries using GPS, indoor location extends mobility summarization capabilities and provides richer contextual information. Rooms within the home and office have strong associations with specific activities (e.g. kitchen and cooking). High-level activity recognition would therefore also be enhanced by indoor localization.

To perform localization, a map or model is required to infer position. Localization is primarily performed on two types of maps: metric and topological. Metric maps model continuous spaces and are capable of reporting relative distances (e.g. 2 m from the south east wall of the kitchen), in contrast with topological maps, which only model discrete locations (e.g. in the kitchen). We pursue topological map localization as it requires simpler map models while preserving useful semantic information associated with location. The key elements required for probabilistic localization are shown below [42]:

motion model: P(x_i | x_{i-1}, u)
observation model: P(z | x_i)

We concern ourselves with the investigation of a robust observation model for residential indoor locations. We do not assume connectivity information between places to be present in the map, and so motion models are of limited use for localization. We are interested in low-cost sensing devices and observation models that work well in practice. In addition, it would be desirable to require only minimal human effort for mapping and model construction, as this would decrease deployment effort.

Vision sensors are popular as they are low cost and have been well studied in this context [34] [35] [8], but mounting cameras on people is potentially awkward and risks capturing sensitive or private information. While GPS repeaters can be installed indoors, in most situations GPS functionality will not be available inside buildings.
Radio frequency signals can also be used to perform indoor localization, and since WiFi has become highly integrated into most urban and suburban infrastructure, it has the advantage of requiring little deployment effort. Signal measurements can be made by any mobile device equipped with a WiFi modem, including mobile phones. While cameras require an unobstructed view of their surroundings, measuring WiFi signals can be done with a phone placed in a pocket or bag.

To investigate room-level WiFi indoor localization, we evaluated four probabilistic classification algorithms. Performance was measured on a dataset collected with an off-the-shelf Android phone, consisting of real signal strengths recorded in several residential environments. We also considered the problem of novelty detection, that is, determining whether a WiFi observation belongs to a known indoor location or not, and evaluated five novelty metrics.

2.1 Related Work

The seminal work in using WiFi signals to localize indoors was [1], where a large fingerprint database of signal strengths was captured and location was inferred by performing a nearest neighbour search. More recently, a system proposed in [31] looked at organically growing an indoor fingerprint model by requiring intermittent input from users to establish ground truth location, thereby distributing the burden of site survey across multiple users. A probabilistic model was proposed in [35] that fused vision with WiFi on low-cost mobile devices; however, this approach required a detailed site map and survey process. Ekahau, a commercial system [17], requires placing WiFi device tags within a building.

Localization can only be achieved in conjunction with a map, and so there has also been research into posing the problem in the simultaneous localization and mapping (SLAM) context. The use of Gaussian process models for indoor WiFi localization was first proposed in [11], which required labeled ground truth positions.
These requirements were later removed to achieve full SLAM [10]. Recently, work in [16] showed state of the art results by posing the problem as a GraphSLAM instance.

Topological or room-based maps have also been of interest. In [45], a system attempts to cluster similar WiFi fingerprints to generate a connected topological map. Work in [14] explored using hierarchical Dirichlet processes to learn clusters associated with indoor locations and rooms of interest.

We approach the problem in a similar way to [2] in that we are interested in defining an explicit training phase for map construction and using probabilistic classification algorithms to infer room-level topological location.

2.2 WiFi Overview

WiFi networks have become popular in the home to service the growing number of personal mobile devices. Laptops, mobile phones and tablets all use WiFi networks to connect to the Internet. Access points connect WiFi enabled devices to wired networks. There are three key pieces of information about access points visible to other WiFi enabled devices. The service set identifier (SSID) is a human-readable string specifying what network the AP is part of. The basic service set identification (BSSID), also known as the media access control (MAC) address in WiFi networks, is a globally unique identifier for the network interface of the access point. The received signal strength indication (RSSI) is a generic metric used in telecommunications to represent the power present in a radio signal [24]. Example access point information is shown in Table 2.1. Note that the SSID is not unique; however, the BSSID is.
The RSSI will vary depending on the relative position of the AP and the listening device as well as the physical environment (location of walls and other obstacles).

Table 2.1: Example WiFi Access Point Information

SSID (Name)   | BSSID (MAC Address)  | RSSI (Signal Strength)
ubc secure    | 00:12:44:b0:4f:bb    | -73
ubc secure    | 00:14:f1:ac:74:23    | -36
ubc visitor   | 00:1c:0e:42:47:45    | -84
bbox2-3964    | 00:10:7f:13:55:75    | -66

While most households only employ a single access point, neighbouring access points are often also visible. WiFi routers are cheap and emit a signal even when not connected to the Internet.

In probabilistic localization, observations represent sensor measurements of physical environments. A common choice of observation vector when using WiFi signals is to simply use the raw RSSI values returned by the modem for each visible access point [16] [10] [2]. We assume that for a given building or indoor location there exists a set of observable access points with unique BSSIDs. The observation vector z is formed by recording the signal strengths of the set of N access points associated with the indoor location (which depends on each location), as shown below.

z = [ RSSI_0, RSSI_1, ..., RSSI_N ]
RSSI_i = RSSI of BSSID_i

The SSID information of the access point is ignored. If signal strength information for a given BSSID is absent during a reading, the signal strength for the absent access point is cast to -100, as proposed in [2].

2.3 Localization as Probabilistic Classification

We pose the problem of learning an observation model as an instance of supervised learning. Given a set of M training examples {(z_0, x_0), (z_1, x_1), ..., (z_M, x_M)}, where z_i is the observation vector described in the previous section and x_i is the place/room label (e.g. kitchen), we are interested in learning a suitable model of P(z | x) and, more importantly, arriving at P(x | z) to infer location.
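Concretely, the assembly of the observation vector z described above can be sketched in a few lines of Python. This is an illustrative helper, not code from the thesis: the function name and the assumption that a scan arrives as a dict mapping BSSID to RSSI (in dBm) are mine.

```python
# Flatten one WiFi scan into the fixed-length observation vector z.
# Access points missing from the scan are cast to -100, as proposed in [2].
ABSENT_RSSI = -100

def make_observation(scan, known_bssids):
    """Return z = [RSSI_0, ..., RSSI_N] for an ordered list of known BSSIDs.

    scan: dict mapping BSSID -> measured RSSI (dBm); absent APs are omitted.
    """
    return [scan.get(bssid, ABSENT_RSSI) for bssid in known_bssids]

known = ["00:12:44:b0:4f:bb", "00:14:f1:ac:74:23", "00:1c:0e:42:47:45"]
scan = {"00:12:44:b0:4f:bb": -73, "00:14:f1:ac:74:23": -36}
z = make_observation(scan, known)  # [-73, -36, -100]
```

Note that the SSID never appears: only the globally unique BSSID indexes the vector, which is robust to multiple access points advertising the same network name.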
In our domain, the target variable is x (location), and so we are not necessarily concerned with modeling the statistics of all observation variables z (WiFi signal strengths) as generative models do. A generative model of P(z | x) would allow sampling of WiFi observations for a given room. For example, one could sample possible signal strengths of all known access points given that you were in the kitchen. With P(z | x), one can use Bayes' rule to arrive at P(x | z) and then localize to the most probable room. In contrast, discriminative models learn a direct mapping to the target distribution P(x | z) and do not necessarily learn the statistics associated with all variables.

In this section, we describe two generative models of P(z | x), naive Bayes and Chow-Liu trees, and two discriminative models for P(x | z), random forests and k-nearest neighbours, and evaluate their classification performance on real-world data. Our comparison of classifiers is similar to work in [2]. While it is often the case that discriminative models perform better on classification tasks [28], generative models can give deeper insights into variable statistics and offer a useful comparison.

2.3.1 Naive Bayes

One of the simplest and most common approaches to classification is the naive Bayes classifier. Naive Bayes is a generative model that makes a simplifying assumption about the conditional independence of feature variables [28]. In our context this means assuming that the signal strengths of each access point are conditionally independent given the room the user is located in. Shown in Figure 2.1 is a graphical model depicting the naive Bayes structure. The nodes are variables and the arrows denote conditional dependence. In this example, the six signal strength observations (0-5) depend only on the location (x) in which they are read.

Figure 2.1: Example naive Bayes graphical model for localization.

The probability of the entire WiFi observation conditioned on the room is equal to the product of the individual conditional probabilities of each access point's signal strength conditioned on the room, as shown below.

P(z[0], z[1], ..., z[N] | x) = \prod_{i=0}^{N} P(z[i] | x)

The choice remains as to what model or distribution to use for the individual conditional probabilities. We investigate two alternatives: modeling the conditional as a Gaussian distribution or as a normalized histogram distribution. The representation of the conditional probabilities can dramatically alter the classifier's performance. Also worth remembering is that we have made a strong assumption in treating the signal strength variables as independent. While this assumption often works well in practice [28], the data may be better modeled with weaker assumptions.

Gaussian Distribution

One option is to model each conditional probability of the naive Bayes as a one-dimensional Gaussian distribution, as defined below.

P(z[i] | x) = \frac{1}{\sqrt{2\pi\sigma_{z[i]}^2}} \exp\left( -\frac{(z[i] - \mu_{z[i]})^2}{2\sigma_{z[i]}^2} \right)

Each conditional is then parameterized by the mean \mu_{z[i]} and variance \sigma_{z[i]}^2 of the Gaussian. These parameters are estimated using maximum likelihood estimates, which is to say the empirical values calculated from the training samples.

Figure 2.2: Frequency of signal strengths of a single access point in a given room. No absent signal strength readings (-100) present.

A histogram showing the frequency of signal strengths from a single access point in a room is shown in Figure 2.2. One can see that the distribution looks somewhat Gaussian; however, it also looks slightly skewed. Another consideration is the situation where an access point would be visible in one room and not seen in another.
This signal strength would have only absent values (-100) in the second room in the training data, so the Gaussian estimated would be degenerate with 0 variance. In practice, a small epsilon is added to the empirical variance to prevent this [32].

Histogram Distribution

As is often the case when dealing with real sensor data, the Gaussian model may not accurately represent the underlying data. A common approach in robotics is to learn a mixture model using expectation maximization, as outlined in [42]. In our scenario, the Gaussian assumption very obviously breaks down when an access point is sometimes visible in a room and sometimes not. The access point's signal strength alternates between real strengths (-95 to -10) and the absent placeholder strength (-100). This is demonstrated in Figure 2.3, and one can clearly see that a Gaussian fit to these readings would not represent the data well. Additionally, in residential environments, most access points seen will likely have weaker signal strengths since they are not located within the same building.

Figure 2.3: Frequency of signal strengths of a single access point in a given room. Absent signal strength readings (-100) are present.

Instead of learning a mixture model, we simply use a normalized histogram of the empirical signal strengths for each room and discretize the signal strength into bins of 5 RSSI units.

P(z[i] | x = r) = Hist(z[i], r)

We return a small epsilon if the signal strength for a given room was not observed in training. As we will see, this histogram approach (referred to as robust naive Bayes in Section 2.5.2) works much better than the Gaussian model.

2.3.2 Chow-Liu Trees

Introduced in [6], the Chow-Liu algorithm approximates a probability distribution by the closest tree-structured Bayesian network.
The topology of the Bayesian network is determined by the maximum-weight spanning tree of the complete graph of N nodes for the N variables. The weights are calculated as the mutual information between variables (eq. 2.1).

I(z[i], z[j]) = \sum_{z[i], z[j]} p(z[i], z[j]) \log \frac{p(z[i], z[j])}{p(z[i]) \, p(z[j])}   (2.1)

Figure 2.4: Example graphical model of a learned Chow-Liu tree.

This structure saw interest again in robotics research in [8] for visual appearance models, and to our knowledge this is the first time Chow-Liu trees have been evaluated for WiFi indoor localization.

Why would this be an appropriate model for indoor localization? The structure could encode the belief that the signal strengths of neighbouring access points depend on the perceived signal strength of a central access point in the home or building of interest. This is illustrated in Figure 2.4, where access point 2 is learned as the central AP for a given room (indeed this was the case) and the probability distributions of the other access points' signal strengths depend on access point 2's strength, as defined by the network.

To find the probability of the entire WiFi observation conditioned on the room, the structure of the learned tree is used, where parent(i) represents the parent variable in the tree, as shown below.

P(z[0], z[1], ..., z[N] | x) = \prod_{i=0}^{N} P(z[i] | z[parent(i)], x)

This is in contrast with the naive Bayes factorization, as the signal strengths are no longer conditionally independent; they depend on exactly one other access point's signal strength, and the Chow-Liu algorithm attempts to learn which access point that should be.
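The structure-learning step just described, pairwise mutual information as edge weights followed by a maximum-weight spanning tree, can be sketched as follows. This is an illustrative implementation, not the thesis code: the function names are mine, the mutual information is estimated empirically from discretized RSSI sequences, and the tree is grown with Prim's algorithm.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (eq. 2.1) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def chow_liu_edges(columns):
    """Given one discretized RSSI sequence per access point, return the edges
    of the maximum-weight spanning tree over pairwise mutual information
    (Prim's algorithm on the complete graph)."""
    n = len(columns)
    weight = {frozenset(e): mutual_information(columns[e[0]], columns[e[1]])
              for e in combinations(range(n), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = max(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: weight[frozenset(e)])
        edges.append((i, j))  # node i becomes parent(j) in the learned tree
        in_tree.add(j)
    return edges
```

Rooting the tree at an arbitrary node then fixes the parent(i) relation used in the factorization above.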
We use a normalized histogram, in the same way as described for the naive Bayes conditional probabilities, to store the resulting model.

P(z[i] | z[parent(i)] = s, x = r) = Hist(z[i], s, r)

2.3.3 K-Nearest Neighbours

The k-nearest neighbour classifier is a non-parametric classifier that compares a test input to the training examples and finds the k most similar. It then returns the majority class label of those k training examples. While nearest neighbour methods are not inherently probabilistic, they can be converted into a probabilistic model by returning the percentage of the k neighbours voting for each room, as shown below. Euclidean distance is used as a similarity metric, as described in [1]. The performance of the classifier can depend on the parameter k, and so we perform a parameter study in Section 2.5.2.

NN(z) = \operatorname{k-argmin}_{(z_i, x_i) \in O} \| z_i - z \|_2   (2.2)

P(x = r | z) = \frac{| \{ (z, x) \in NN(z) \mid x = r \} |}{k}   (2.3)

2.3.4 Random Forests

Introduced in [41] and shown to be useful for WiFi localization in [2], random forests are ensemble learners that use a collection of decision trees to classify test input. They are called random because each tree in the forest is learned on a random subset of the training data. Decision trees are grown by looking for the best way to successively split the data. In most cases, this means finding a feature \phi and a threshold \tau to partition the data set on. We will refer to the pair as a splitting candidate, as shown below.

splitting candidate: \theta = (\phi, \tau)

The normal decision tree learning algorithm attempts to find the best feature and threshold to split the data on. However, in random forests, the feature that is chosen to split on is the best among a random subset of the features [32]. All suitable thresholds are evaluated for the random subset of features, however. In our context, these splitting candidates refer to an access point \phi and a signal strength \tau to split on.
For each potential split, two partitions of the dataset are formed: the left set Q_l, which contains training pairs having feature \phi less than threshold \tau, and the right set Q_r, which contains the rest.

Q = \{ (z, x) \}
Q_l(\theta) = \{ (z, x) \mid z[\phi] < \tau \}
Q_r(\theta) = Q \setminus Q_l(\theta)

With the data set partitioned into two sets, the split that maximizes the information gain G(\theta) is selected for the current node of the tree. The entropy H(Q) of the sets is used in calculating the information gain, as shown below.

\theta^* = \arg\max_\theta G(\theta)

G(\theta) = H(Q) - \sum_{s \in \{l, r\}} \frac{|Q_s(\theta)|}{|Q|} H(Q_s(\theta))

H(Q) = - \sum_{r \in \{rooms\}} p(x = r) \log p(x = r)

The best split is found and the process continues recursively until a stopping criterion is met.

During classification, each tree in the forest votes for the room that it classifies the current sensor reading as, so one can form an estimate of P(x | z) simply by counting the number of trees that vote for each room, as shown below.

P(x = r | z) = \frac{|\text{Trees voting for } r|}{\text{Total Trees}}

Figure 2.5 shows an example of a single decision tree in the forest. In each node, a specific access point of the test input is compared to a threshold; if less than or equal to the value, the process moves down the left child of the node, otherwise down the right. This continues until a leaf node is reached that has a class, or in our case a room, associated with it.

Figure 2.5: A decision tree compares specific access point signal strengths to thresholds to decide location.

In the random forest implementation in [32], three key parameters govern performance and behaviour. The number of estimators is the number of decision trees that will be used. The maximum features parameter controls the number of features in the random subset to be considered for each split. Trees may also optionally consider all features in generating split candidates. Finally, the maximum depth controls how many successive decisions or splits should be made at most for any tree.
The trees may optionally continue generating decisions until the training data is perfectly partitioned. The performance of the classifier depends significantly on these three values, so we conducted a parameter study, presented in Section 2.5.2.

2.4 Novelty Detection

Localizing to an indoor room is important, but so is knowing when to report that a user is probably not located in one of the trained rooms. In a large building with many rooms or offices, or perhaps even a large house, it is likely that a user will not train on all rooms within the building. The system should therefore only localize when it is confident that the readings represent observations from within the rooms trained on. This is known as the problem of novelty detection in machine learning. We can pose the problem as learning a function that reports whether or not an observation is novel:

f(z) = {  1, inlier, observation from a known room
       { −1, outlier, novel observation

The use of one-class SVMs is a common approach to this problem [32]; however, their performance largely depends on the selection of a kernel function, which is a comparison metric for observation vectors. Motivated by these kernel functions, we concern ourselves with investigating useful distance metrics for comparing test observation vectors to the training data.

One observation that the system can be confident is novel (at least as long as the WiFi environment is not degenerate) is the observation in which no WiFi access point is visible and every signal strength has the absent value −100:

zneg = [ −100, −100, · · · , −100 ]    (2.4)

We evaluate distance metrics used for novelty detection with thresholds computed relative to the novelty reported for zneg. Here {zi} represents the set of training observations and z is the measurement under consideration.

f(z) = {  1, if min dist(z, {zi}) < γ
       { −1, otherwise

γ = α · ( min dist(zneg, {zi}) )

We propose five different novelty detection schemes and outline them in the following sections.

2.4.1 Nearest Neighbour Threshold

The first novelty detection approach we describe finds the nearest neighbour in the training set to the test vector and checks whether the Euclidean distance to this nearest neighbour is less than a threshold.

f(z) = {  1, if min_{i in O} ||zi − z||_2 < γ
       { −1, otherwise

γ = α · ( min_{i in O} ||zi − zneg||_2 )

The threshold γ is defined as a fraction α of the minimum Euclidean distance of zneg to the training data. O is the set of indices of training observations.

2.4.2 Nearest Centroid Thresholds

Instead of comparing a test vector directly to the training data, we can compare observations to a set of centroids computed from the dataset. We find the mean observation for each room:

μ_l = (1 / |R_l|) Σ_{i in R_l} zi    (2.5)

R_l is the set of indices of observations labelled as room l. We also learn not one threshold for comparison, but a separate threshold for each centroid.

Euclidean Distance

The first distance metric for comparing test vectors to centroids is the Euclidean distance, as used in the nearest neighbour threshold.

f(z) = {  1, if ||μ_i − z||_2 < γ_i
       { −1, otherwise

γ_i = α · ||μ_i − zneg||_2

i = argmin_{i in U} ||μ_i − z||_2

The distance between a test vector and its closest centroid is compared to a centroid-specific threshold. U is the set of indices of centroids.

Mahalanobis Distance

Gaussian mixture models are a standard generative model for probability distributions [28]. If we had a good measure of the probability of an observation z given the training data, that is P(z | {zi}), we could likely determine which observations are novel. An overview of Gaussian mixture models for novelty detection is presented in [36]. A GMM is defined by M Gaussians, each having its own mean, covariance and weight:

P(z | {zi}) = Σ_{i=1}^{M} w_i g(z | μ_i, Σ_i)    (2.6)

g(z | μ_i, Σ_i) = ( 1 / ( (2π)^{D/2} |Σ_i|^{1/2} ) ) exp( −(1/2) (z − μ_i)^T Σ_i^{−1} (z − μ_i) )    (2.7)

An important decision when using a GMM is how many components M to use. In [36] the author suggests growing the number of components until some stopping criterion is met. In our context, we choose the number of rooms as the number of components and let the means be those calculated in eq. 2.5. For the covariance matrices, one could use the empirical estimates from the data; however, we found better results using the robust covariance estimator method of [25], as implemented in [32]. The key distance metric in the GMM is the Mahalanobis distance appearing in the exponent, so we focus on using it alone in our novelty detection scheme.

d(z, μ_i) = sqrt( (z − μ_i)^T Σ_i^{−1} (z − μ_i) )

Novelty detection then proceeds in much the same way as for the Euclidean centroid comparison, except that we now use the Mahalanobis distance to compare test vectors to the centroids.

f(z) = {  1, if d(z, μ_i) < γ_i
       { −1, otherwise

γ_i = α · d(zneg, μ_i)

i = argmin_{i in U} d(z, μ_i)

2.4.3 Naive Bayes and Chow-Liu Tree Thresholds

As we saw in the previous section, finding P(z | {zi}) for an observation is a useful step towards detecting novelty. Fortunately we have two other models, the naive Bayes and Chow-Liu trees, that are capable of determining this quantity, since they model the distribution P(z | x, {zi}). We can marginalize out the location variable x to be left with only P(z | {zi}), as shown below. We treat P(x | {zi}) as uniform, as we have no prior assumptions about the probability of being localized in a given room, so the sum of the class-conditional likelihoods is proportional to P(z | {zi}).

P(z | {zi}) = Σ_x P(z, x | {zi})
            = Σ_x P(z | x, {zi}) P(x | {zi})
            ∝ Σ_x P(z | x, {zi})

For novelty detection, we compare the probability of a test vector to the probability of zneg. We take the negative log of the probability for numerical stability and to avoid underflow, as suggested in [28].
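In log space, the marginalization and relative threshold test can be sketched as follows (a sketch of our own; the per-room log-likelihood functions are assumed stand-ins for the naive Bayes or Chow-Liu model, and the log-sum-exp trick supplies the numerical stability mentioned above):

```python
import numpy as np

def neg_log_marginal(z, room_log_liks):
    """Return -log sum_r P(z | x = r, {zi}), computed stably in log space.
    room_log_liks: one function per room, each returning log P(z | x = r, {zi})."""
    logs = np.array([f(z) for f in room_log_liks])
    m = logs.max()  # log-sum-exp trick: factor out the largest term
    return -(m + np.log(np.exp(logs - m).sum()))

def is_inlier(z, zneg, room_log_liks, alpha):
    """f(z) = 1 if -log P(z | {zi}) < gamma, with gamma a fraction alpha
    of the novelty reported for the all-absent observation zneg."""
    gamma = alpha * neg_log_marginal(zneg, room_log_liks)
    return 1 if neg_log_marginal(z, room_log_liks) < gamma else -1
```

With two toy room models peaked at different values, an observation near a trained room scores as an inlier while a far-away one is flagged as novel.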
Computing these probabilities for the Chow-Liu tree and naive Bayes models involves a large product of conditional probabilities, one for each variable.

f(z) = {  1, if −log( P(z | {zi}) ) < γ
       { −1, otherwise

γ = α · ( −log( P(zneg | {zi}) ) )

2.5 Experiments

For many reasons, an indoor localization component would be useful in an overall system for measuring lifespace and summarizing mobility. The indoor environments such a system would be used in would likely be offices and residential buildings such as homes and apartments. We are therefore interested in evaluating the performance of these approaches on real data collected from representative environments.

2.5.1 Data Collection

To collect evaluation data, an Android application was written that allowed for the collection of WiFi signal strength data and images of the environment. The phone used was an off-the-shelf Nexus 4 using standard Android libraries and OpenCV for accessing the camera.

Figure 2.6 shows a screen shot of the data collection application. The app can create rooms (+ button) and record training data for a room (target button). The app continuously captures WiFi signal strengths every 600 ms (summarized in the top right) and an image every 1.8 seconds from the back-facing camera. The images were recorded to provide ground truth for evaluation. The app saves the data to a simple text format.

Figure 2.6: A screen shot of the data collection application used to generate training and evaluation datasets for localization.

Five datasets produced using the application are summarized in Table 2.2. The datasets were collected in a variety of indoor environments, including three homes, one apartment, and one dataset (UBC) that combines readings from an office building with a cafe external to the building. Most datasets included WiFi readings taken on multiple floors, since it is important that the localization component be able to handle this situation.
The number of readings is the total number of WiFi measurements taken during the data collection session. For most datasets we collected around 100 measurements of training data per room.

Dataset  Building    Multi-floor  Rooms  WiFi APs  Readings
(Al)     house       yes          6      15        2100
(Ar)     aptm        yes          5      42        2094
(Fe)     house       yes          4      7         1269
(UBC)    multi-bldg  yes          4      189       3138
(Va)     house       no           5      20        1140

Table 2.2: Indoor evaluation dataset summary (training and test).

We can also see the range in the number of visible access points across the datasets. The datasets from houses have relatively few access points, whereas the apartment, and especially the office buildings at UBC, have many more. We kept the number of rooms trained on somewhat low, as we do not anticipate a large number of rooms of interest in most applications. Each dataset consisted of a training and a test portion. In each collection we aimed for at least twice as many test observations as training data.

2.5.2 Evaluation

The datasets collected provided a good start for evaluating the localization approaches outlined in Section 2.3; however, indoor WiFi environments are not static and can easily change over time. Access points may be moved, replaced or shut off after the training data has been collected, so there is a risk that the observation model would become out of date and incapable of properly localizing further observations.

While it would be very difficult to simulate access points changing position, as this would require inferring their original positions and a propagation model of signal strength through the indoor environment, a simpler, more feasible test is to remove a random subset of access points from the test observation vectors. In other words, we simulate as if access points had been turned off and now only report the absent signal strength value of −100. We created five simulated datasets that are modified versions of the originals, where each observation vector has 20% of its access point readings set to −100.
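This simulated-defect transform can be sketched as follows (our own sketch, assuming observations are rows of a NumPy array of signal strengths, with −100 as the absent value):

```python
import numpy as np

ABSENT = -100  # signal strength reported for an invisible access point

def simulate_defects(observations, frac=0.2, seed=0):
    """Return a copy of the observation matrix in which a random fraction
    of each row's access point readings is forced to the absent value.
    A fresh random subset of access points is drawn for every observation."""
    rng = np.random.default_rng(seed)
    Z = np.array(observations, dtype=float, copy=True)
    n_aps = Z.shape[1]
    n_off = int(round(frac * n_aps))
    for row in Z:
        off = rng.choice(n_aps, size=n_off, replace=False)
        row[off] = ABSENT
    return Z
```

With 10 access points and frac = 0.2, exactly two readings per observation are masked, each time at freshly drawn positions.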
The access points selected to be turned off are chosen at random, and a new random subset is generated for each observation. Where Va refers to an original dataset, VaS refers to the Va dataset with the simulated defects. Access points that appear in an environment only after training would not be considered in the observation. We evaluate room classification and novelty detection on both the original datasets and their respective simulated defect versions.

To assist in manually labeling ground truth novel and known room locations within the test datasets, an application was developed that displayed the sequence of images and used sliders to quickly label ranges of readings as one of the trained rooms or as a novel room. Figure 2.7 shows a screen capture of the application ground-truthing the (Ar) dataset.

Figure 2.7: A screen shot of the application used to ground truth the location of the WiFi observations and label novel locations

The two sliders under the images allowed us to quickly select large ranges of measurements and label them all at once. Novel measurements were also manually labeled for all three datasets. With this ground truth, it was possible to conduct an evaluation on the standard and simulated defect datasets.

Room Classification

To compare the algorithms for room classification, we report the accuracy of each classifier, defined simply as the number of measurements correctly classified divided by the total number of test measurements.

Accuracy = Total Correct / Total Test Measurements

Table 2.3 summarizes the performance of the various algorithms across the datasets. Two of the classifiers, k-nearest neighbour and random forests, required parameters in their specification, and so small parameter studies were done to determine a good selection across the datasets.
These parameter studies are summarized in Table 2.4 for k-nearest neighbour, where the parameter searched over is k, and in Table 2.5 for random forests, where the three parameters searched over are the number of estimators, maximum features, and maximum depth. Based on the results in these tables, in the remainder of our analysis we select k = 20 for k-nearest neighbours and the parameter tuple (250, 10, 2) for random forests. These configurations had the highest average accuracy across the test datasets. The other algorithms did not have relevant parameters.

Table 2.3: Summary of Room Classification Accuracies

Classifier     Avg.   Al     Ar     Fe     UBC    Va     AlS    ArS    FeS    UBCS   VaS
Random Forest  0.916  0.939  0.969  0.898  0.984  0.955  0.863  0.896  0.76   0.98   0.917
Robust NB      0.890  0.789  0.936  0.843  0.992  0.940  0.759  0.922  0.817  0.987  0.910
Chow-Liu       0.870  0.892  0.888  0.897  0.985  0.960  0.713  0.774  0.747  0.981  0.854
K-NN           0.826  0.926  0.796  0.96   0.982  0.955  0.727  0.46   0.757  0.953  0.744
Naive Bayes    0.193  0.189  0.116  0.313  0.216  0.120  0.189  0.116  0.313  0.237  0.120

On analysis of the results, random forests appear to be the superior approach for this problem: they have the best overall performance on both the normal datasets and the simulated defect datasets. A close second is robust naive Bayes, together with Chow-Liu trees, as both also perform strongly across all datasets. K-nearest neighbour has one dataset on which its performance is not competitive. Naive Bayes with Gaussian distributions performs the worst, and likely suffers from the issues discussed in Section 2.3.1.

Table 2.4: Parameter Study Accuracies for K-NN Classifier

Param  Avg.   Al     Ar     Fe     UBC    Va     AlS    ArS    FeS    UBCS   VaS
(1)    0.813  0.923  0.784  0.943  0.970  0.932  0.699  0.475  0.742  0.931  0.734
(2)    0.817  0.929  0.772  0.943  0.974  0.940  0.706  0.466  0.744  0.931  0.769
(3)    0.817  0.919  0.788  0.952  0.974  0.942  0.702  0.466  0.741  0.930  0.759
(4)    0.820  0.920  0.778  0.948  0.975  0.945  0.703  0.466  0.746  0.951  0.769
(5)    0.817  0.926  0.783  0.946  0.967  0.945  0.700  0.471  0.741  0.938  0.751
(10)   0.818  0.922  0.771  0.936  0.980  0.950  0.706  0.458  0.747  0.949  0.761
(20)   0.826  0.926  0.796  0.96   0.982  0.955  0.727  0.46   0.757  0.953  0.744

Table 2.5: Parameter Study Accuracies for Random Forest Classifier
Params = (number of estimators, maximum features, maximum depth)

Params             Avg.    Al     Ar     Fe     UBC    Va     AlS    ArS    FeS    UBCS   VaS
(50, None, None)   0.7881  0.936  0.893  0.866  0.745  0.882  0.804  0.748  0.722  0.622  0.663
(50, None, 2)      0.9074  0.928  0.938  0.901  0.984  0.96   0.848  0.85   0.762  0.981  0.922
(50, None, 4)      0.8984  0.938  0.951  0.866  0.984  0.955  0.861  0.856  0.728  0.978  0.867
(50, None, 10)     0.8675  0.939  0.935  0.873  0.982  0.887  0.833  0.841  0.73   0.967  0.688
(50, 2, None)      0.722   0.823  0.708  0.863  0.753  0.802  0.69   0.551  0.731  0.651  0.648
(50, 2, 2)         0.8474  0.829  0.777  0.889  0.976  0.932  0.749  0.722  0.752  0.959  0.889
(50, 2, 4)         0.8528  0.877  0.851  0.863  0.966  0.922  0.79   0.77   0.731  0.931  0.827
(50, 2, 10)        0.8192  0.866  0.867  0.863  0.96   0.829  0.742  0.74   0.731  0.913  0.681
(50, 4, None)      0.7663  0.886  0.849  0.873  0.741  0.887  0.756  0.672  0.728  0.613  0.658
(50, 4, 2)         0.9041  0.919  0.933  0.921  0.979  0.945  0.849  0.864  0.763  0.974  0.894
(50, 4, 4)         0.8851  0.932  0.924  0.873  0.971  0.942  0.843  0.851  0.728  0.955  0.832
(50, 4, 10)        0.8571  0.901  0.929  0.873  0.969  0.897  0.768  0.825  0.728  0.94   0.741
(50, 10, None)     0.7855  0.935  0.865  0.868  0.747  0.882  0.809  0.728  0.723  0.63   0.668
(50, 10, 2)        0.9095  0.938  0.949  0.905  0.983  0.96   0.863  0.856  0.762  0.972  0.907
(50, 10, 4)        0.8936  0.938  0.941  0.866  0.974  0.952  0.867  0.856  0.722  0.966  0.854
(50, 10, 10)       0.8677  0.935  0.938  0.87   0.986  0.897  0.824  0.841  0.723  0.965  0.698
(100, None, None)  0.7896  0.935  0.894  0.87   0.738  0.882  0.811  0.75   0.728  0.62   0.668
(100, None, 2)     0.9144  0.936  0.96   0.905  0.977  0.97   0.871  0.88   0.76   0.975  0.91
(100, None, 4)     0.8987  0.936  0.961  0.866  0.98   0.952  0.858  0.874  0.728  0.97   0.862
(100, None, 10)    0.8687  0.936  0.943  0.873  0.98   0.889  0.84   0.85   0.727  0.953  0.696
(100, 2, None)     0.7238  0.829  0.722  0.863  0.749  0.802  0.693  0.553  0.731  0.648  0.648
(100, 2, 2)        0.8482  0.873  0.791  0.846  0.974  0.927  0.802  0.712  0.703  0.957  0.897
(100, 2, 4)        0.845   0.861  0.822  0.863  0.957  0.92   0.771  0.744  0.731  0.932  0.849
(100, 2, 10)       0.825   0.86   0.893  0.863  0.953  0.837  0.736  0.769  0.731  0.917  0.691
(100, 4, None)     0.7729  0.9    0.853  0.873  0.74   0.887  0.774  0.687  0.728  0.621  0.666
(100, 4, 2)        0.908   0.919  0.942  0.911  0.986  0.957  0.848  0.868  0.765  0.982  0.902
(100, 4, 4)        0.8897  0.9    0.938  0.873  0.982  0.947  0.845  0.86   0.728  0.972  0.852
(100, 4, 10)       0.8589  0.9    0.922  0.873  0.975  0.905  0.759  0.852  0.728  0.946  0.729
(100, 10, None)    0.7903  0.934  0.886  0.873  0.737  0.887  0.818  0.749  0.73   0.621  0.668
(100, 10, 2)       0.9119  0.942  0.954  0.901  0.981  0.965  0.871  0.868  0.762  0.973  0.902
(100, 10, 4)       0.9002  0.936  0.957  0.87   0.982  0.952  0.866  0.874  0.73   0.968  0.867
(100, 10, 10)      0.8731  0.936  0.949  0.87   0.979  0.892  0.845  0.852  0.727  0.955  0.726
(250, None, None)  0.7888  0.936  0.887  0.87   0.741  0.882  0.811  0.748  0.723  0.622  0.668
(250, None, 2)     0.9148  0.941  0.96   0.901  0.982  0.957  0.866  0.894  0.762  0.978  0.907
(250, None, 4)     0.9016  0.944  0.956  0.866  0.979  0.955  0.864  0.886  0.725  0.974  0.867
(250, None, 10)    0.8697  0.938  0.945  0.87   0.98   0.887  0.845  0.856  0.723  0.96   0.693
(250, 2, None)     0.7266  0.849  0.707  0.863  0.751  0.802  0.709  0.551  0.731  0.655  0.648
(250, 2, 2)        0.8667  0.87   0.831  0.884  0.974  0.937  0.796  0.766  0.736  0.956  0.917
(250, 2, 4)        0.851   0.827  0.834  0.863  0.972  0.92   0.777  0.783  0.731  0.951  0.852
(250, 2, 10)       0.8237  0.882  0.902  0.863  0.959  0.809  0.744  0.778  0.731  0.903  0.666
(250, 4, None)     0.7712  0.903  0.854  0.873  0.738  0.887  0.768  0.68   0.728  0.62   0.661
(250, 4, 2)        0.9127  0.923  0.954  0.924  0.981  0.952  0.86   0.884  0.766  0.976  0.907
(250, 4, 4)        0.8932  0.911  0.948  0.873  0.976  0.952  0.848  0.884  0.728  0.96   0.852
(250, 4, 10)       0.86    0.901  0.93   0.873  0.982  0.892  0.767  0.857  0.728  0.954  0.716
(250, 10, None)    0.7909  0.935  0.892  0.87   0.742  0.882  0.814  0.761  0.723  0.622  0.668
(250, 10, 2)       0.9161  0.939  0.969  0.898  0.984  0.955  0.863  0.896  0.76   0.98   0.917
(250, 10, 4)       0.8984  0.941  0.953  0.863  0.981  0.952  0.858  0.872  0.725  0.972  0.867
(250, 10, 10)      0.8749  0.941  0.949  0.87   0.982  0.899  0.84   0.864  0.723  0.965  0.716

Novelty Detection

Receiver operating characteristic (ROC) curves are a useful visualization for information retrieval systems. They are 2D projections of a system's performance in the space of true positive rate and false positive rate. The true positive rate (TPR) and false positive rate (FPR) are defined in terms of the system's true positive (TP), true negative (TN), false negative (FN), and false positive (FP) counts.

TPR = TP / (TP + FN)

FPR = FP / (FP + TN)

For our novelty detection system, true positives are correctly classified WiFi observation inliers (not novel), true negatives are correctly classified outliers (novel), false positives are incorrectly classified outliers, and false negatives are incorrectly classified inliers.

A good classifier is one that has a high true positive rate with a relatively low false positive rate. All the approaches described in Section 2.4 are parameterized by α, the fraction of the novelty reported for zneg used as the novelty threshold. We searched the parameter space of α and plotted the TPR and FPR of the system at each parameter setting.

ROC curves for the three test datasets that had novel readings labeled are summarized in Figure 2.8. On analysis of the results, we see that most approaches are competitive on the datasets. However, the performance of all approaches is somewhat poor on the (Al) dataset, likely because the novel readings were collected on the second floor, directly above the first floor on which the observation model was trained.
It would seem that all of these approaches have a difficult time discerning between observations made on the first floor (inliers) and observations made on the second floor (outliers).

In terms of overall performance, the best approach is likely the centroid thresholds using Euclidean distances. While competitive on the other datasets, the Mahalanobis distance seems to suffer on the UBC environment, which models larger rooms and has many more visible access points. The robust Bayes threshold is competitive on all but the (Al) dataset. The Chow-Liu tree threshold is the weakest of the five and never achieves competitive performance.

Figure 2.8: ROC Curves for Novelty Detection

Chapter 3

Activity Recognition

Context awareness is an ever-improving feature of assistive technologies and ubiquitous computing. Inference of user activities and environmental interactions remains an important problem, if only to serve the higher-level goals of a particular system or application.

In particular, the idea of using mobile phones and other wearable devices to perform actigraphy (non-invasive activity monitoring) has become a common theme in mobile health applications. Standard mobile phones are now well equipped to measure and report summaries of human rest and activity cycles. While most of these systems have focused on ambulating users only, wheelchair users would certainly also benefit from similar summaries.

Figure 3.1: The activity classes of interest for our system: stationary, walking, wheeling and vehicle use

We extended the activity recognition approach proposed in [15] and introduced the ability to classify wheeling motion for manual wheelchair users, in addition to the other classes shown in Figure 3.1. We have made no attempt to optimize the approach, and it could easily be substituted for another. A decision tree is learned that considers a combination of accelerometer and GPS features to infer activity.
The approach computes features on 3-second windows of accelerometer data sampled at 20 Hz and combines them with an average speed estimate derived from a stream of GPS fixes. We trained on data collected with the phone placed in the subject's pocket and backpack, to reduce dependence on specific placement or orientation of the phone. The algorithm works offline to analyze the accelerometer and GPS streams over the course of an entire day, and is therefore able to correctly classify vehicle motion when GPS is temporarily unavailable, for example when taking the subway or driving through a tunnel.

3.1 Related Work

In much the same way that Section 2.3 posed the localization problem, the problem of activity recognition can broadly be seen as performing classification on sequential sensor data. A good summary of modern activity recognition techniques is presented in [5]. Two general methods are defined for creating activity recognition models: data-driven approaches, where a model is learned from training data, are contrasted with knowledge-driven methods, where the model is primarily created from prior domain knowledge and expertise. The overview in [5] discusses the generative models naive Bayes classification (NBC), hidden Markov models (HMMs), and general dynamic Bayesian networks (DBNs), as well as some useful discriminative models: decision trees, support vector machines (SVMs), conditional random fields (CRFs), and nearest neighbour (NN) approaches. Work in [43] presents a tutorial on CRFs applied to activity recognition and compares results with HMMs on a simulated dataset.
Work in [26] extracts activities and significant places from GPS traces using hierarchically structured CRFs. We adopted decision trees for their simplicity and ease of interpretation. Work from [3] looks at wearing wireless accelerometers on the body, discusses the difficulties in obtaining labeled data for genuine activities outside of a lab environment, and demonstrates the effectiveness of decision trees. More recently, [38] compares various machine learning approaches to solving the problem of gait recognition efficiently on mobile phones.

State-of-the-art classification on mobile devices is achieved in [12] with a method called geometric template matching (GeTeM), which uses time-delay embeddings to construct a dynamical systems model capable of scoring similarity between time-series data.

Activity recognition has also moved well beyond being a purely academic endeavour. Most deployed, working systems rely on analyzing data streams of accelerometer readings. Wearable actimetry systems have existed in the consumer space for some time; two recent examples of popular wearable devices for activity monitoring are the Fitbit [18] and the Nike+ FuelBand [23]. Google Now summarizes distances biked and walked [21], and very recently Google introduced activity recognition into the Android location APIs [19]. Pedometers and cyclometers have also been available for quite some time; however, they have only limited recognition capability (steps and rotations).

While these products are marketed towards consumers as quantified-self or training tools, activity traces are certainly also of interest to the assistive technology, gerontology, and rehabilitation science communities ([40], [37], [15]). It is this application domain that we are concerned with.

3.2 Sensor Overview

Modern smartphones are certainly impressive in terms of their array of sensors. While the sensors are low-cost and consumer-grade, their level of integration within the device and operating system is attractive.
For instance, most smartphones come equipped with accelerometers, gyroscopes, magnetometers, and GPS receivers, as well as light and proximity sensors, and recently even barometers. It is quite remarkable that what were once expensive inertial measurement units, available only in the domains of aerospace and military engineering, now sit in low-cost form in nearly everyone's pocket. For the activity recognition component of our system, we employed the accelerometer and the GPS receiver.

3.2.1 Tri-Axis Accelerometer

An accelerometer is a device capable of measuring acceleration forces. Accelerometers exist on most mobile devices as a means of controlling the user interface, sensing display orientation, or detecting falls for hard drive protection.

The signal that we initially compute is the total force magnitude. The gravitational constant g is subtracted from this scalar value to put the mean value near zero. The resulting signal is neither the acceleration nor the applied force without gravity, since that would properly require a vector subtraction, which is not possible with the data available on the device.

m = sqrt( X^2 + Y^2 + Z^2 ) − g

g = 9.81 m/s^2

However, the signal is useful for our purposes, and this usage is common in the field; it is the same univariate signal used in [15]. This approach has the advantage of producing values that should be independent of the orientation of the phone, which allows for greater flexibility in placement and orientation by the user. It also reduces the amount of data produced by the system, as we only store a single float to represent a sensor reading. For sample windows of this data stream, see Figure 3.2.

The Android API specifies several high-level sample rates that can be set when using the accelerometer [22]. We selected one that gives us an approximately 20 Hz sampling rate on Nexus 4 phones.
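As a minimal sketch (our own illustration, not the app's actual implementation), the magnitude signal can be computed from raw (X, Y, Z) samples like so:

```python
import math

G = 9.81  # gravitational constant, m/s^2

def motion_magnitude(x, y, z):
    """Orientation-independent motion signal: total force magnitude
    with g subtracted so the mean of the signal sits near zero."""
    return math.sqrt(x * x + y * y + z * z) - G
```

A phone at rest reports roughly one g of total force regardless of how it is held, so the signal hovers near zero until the user moves.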
3.2.2 Global Positioning System

The global positioning system, or GPS, is a satellite navigation system capable of locating users anywhere there is a line of sight to at least four of the GPS satellites orbiting Earth. A fix typically consists of the latitude and longitude of the receiver, usually represented by two floating point numbers. Latitude is an angle ranging from -90° to 90° (0° at the equator), and longitude ranges from -180° at the westmost to 180° at the eastmost (0° at the Prime Meridian). For example, downtown Vancouver is at [49.2839, -123.1200].

fix = [ LAT, LON ]

Limitations of GPS include too few satellites being visible, or the signal being reflected too many times, which increases its time of flight unpredictably. The second effect often occurs in downtown areas with many tall buildings. When too few satellites are visible, the receiver is unable to determine a fix; GPS therefore does not work well indoors or underground, although GPS repeaters can be installed to provide indoor functionality [24]. In addition to determining a fix using only the receiver, most location services on mobile phones allow for improved positioning by consulting large databases of stored associations between known WiFi access points, cell towers, and their locations. If the device is connected to the Internet, it can consult such a service and receive potentially more accurate location information. However, we only considered unassisted GPS readings in our experiments. Location services can often also return estimates of speed, bearing and altitude, along with a reported accuracy for these measurements [19].

3.3 Feature Selection and Sensor Fusion

As previously discussed, the problem of activity recognition can be posed as classification of sequential sensor data. In Section 2.3, the features used for observations were simply the raw signal strengths associated with the WiFi access points.
While one could use raw sequential data as features in this context, better results are typically achieved by pre-processing the sequential data into a smaller set of representative features.

z = [ f0, f1, · · · , f8 ]

We propose an observation vector made up of nine features: eight are extracted from fixed-length windows of accelerometer measurements, and one incorporates GPS measurements, nearly the same as in [15]. In this way, sensor fusion is handled naturally by the classification algorithm.

3.3.1 Accelerometer Features

The most common approach to analyzing sequential accelerometry data is to break the stream up into non-overlapping windows and perform classification on those smaller series [12] [38] [15].

The first aspect of this approach that must be determined is the time length of the window. We chose 3 seconds for the window length because it lies between the 1-second span used in [15] and the results in [40] showing that most manual wheelchair bouts of mobility last 5 seconds.

ACC = [ m0, m1, · · · , m_{N−1} ]

The window ACC is simply N real numbers representing the motion signal as defined in Section 3.2.1, where N depends on the desired time span of the window and the sampling frequency. In our case, sampling at 20 Hz with a desired window length of 3 seconds leads to N = 60 samples per window. We can now consider features for this small time series.

One simple feature of a series is its variance. This is the first feature we consider, defined below in equation 3.1, where m̄ is the mean of the time series.

f0 = ( 1 / (N − 1) ) Σ_{n=0}^{N−1} (m_n − m̄)^2    (3.1)

Fourier coefficients can also act as useful features for signal analysis. The Fourier transform is a standard approach that gives a breakdown of the frequency components present in a time-domain signal.
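The windowing and variance feature can be sketched as follows (illustrative only; assumes the magnitude stream is a flat sequence sampled at 20 Hz):

```python
import numpy as np

SAMPLE_HZ = 20
WINDOW_SEC = 3
N = SAMPLE_HZ * WINDOW_SEC  # 60 samples per window

def windows(motion_stream):
    """Split the motion-magnitude stream into non-overlapping
    N-sample windows, dropping any incomplete tail."""
    k = len(motion_stream) // N
    return np.asarray(motion_stream[:k * N]).reshape(k, N)

def f0(window):
    """Sample variance of a window (eq. 3.1, with the N-1 denominator)."""
    return np.var(window, ddof=1)
```

A 130-sample stream, for example, yields two complete windows with the last 10 samples discarded.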
Other similar but more efficient approaches include the Goertzel algorithm used in [15], or counting zero-crossings as a simple approximation to the signal's fundamental frequency.

Xk = Σ_{n=0}^{N−1} m_n · e^{−i 2π k n / N}    (3.2)

|Xk| = sqrt( Xk.Re^2 + Xk.Im^2 )    (3.3)

f1 = Σ_{k=0}^{N−1} |Xk|    (3.4)

We employ the standard discrete Fourier transform and compute the N complex coefficients as defined above. The magnitude of the coefficients is known as the spectrum, and we include the energy (3.4) as a feature.

In addition to an aggregate feature like energy, it is also useful to include some information about the individual frequency components of the signal as features. Typically in spectral analysis, the sample frequencies are plotted against the spectral response. The N sample frequencies for a discrete Fourier transform can be produced as shown in (3.5), where d is the sample spacing [29]. In our case we simply leave d = 1.

FREQ = [ 0, 1, · · · , N/2 − 1, −N/2, −(N/2 − 1), · · · , −1 ] · 1/(d·N)    (3.5)

SPEC = [ |X0|, |X1|, · · · , |X_{N/2−1}|, |X_{N/2}|, |X_{N/2−1}|, · · · , |X1| ]    (3.6)

We then select the three sample frequencies with the largest spectral responses as features, and include the corresponding spectral responses as features as well. Due to the symmetry of the response, we need only consider the first N/2 elements of SPEC.

l0 = index of the largest element in SPEC[0:N/2]
l1 = index of the 2nd largest element in SPEC[0:N/2]
l2 = index of the 3rd largest element in SPEC[0:N/2]

f2 = FREQ[l0]
f3 = FREQ[l1]
f4 = FREQ[l2]
f5 = SPEC[l0]
f6 = SPEC[l1]
f7 = SPEC[l2]

To better illustrate these features, we consider sample windows from three of the activity classes, shown in Figure 3.2. As can be seen from the figures, the variance, energy and spectral responses are much larger in the walking examples than in the other two classes. In terms of differentiating wheeling from stationary, the variance and energy are also much larger in the former.
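These spectral features map directly onto NumPy's FFT helpers; a sketch of our own (not the thesis code) for computing f1 through f7 from one window:

```python
import numpy as np

def spectral_features(window, d=1.0):
    """Compute [f1, f2, f3, f4, f5, f6, f7] for one motion window:
    total spectral energy, the three dominant sample frequencies,
    and their spectral responses."""
    spec = np.abs(np.fft.fft(window))        # |X_k|, eq. 3.3
    freq = np.fft.fftfreq(len(window), d=d)  # sample frequencies, eq. 3.5
    f1 = spec.sum()                          # energy, eq. 3.4
    half = len(window) // 2
    top3 = np.argsort(spec[:half])[::-1][:3]  # l0, l1, l2
    return [f1, *freq[top3], *spec[top3]]
```

For a pure cosine completing five cycles in a 60-sample window, the dominant frequency comes out as 5/60 with spectral response N/2 = 30.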
It seems as though even just the variance and energy would be enough to reliably distinguish these classes.

Figure 3.2: Sample motion signal windows for three activity classes.

3.3.2 GPS Feature and Sensor Fusion

The accelerometry features seem suitable for differentiating between the first three motion classes; however, the classifier requires a way to distinguish vehicle motion from the other profiles.

This is addressed in [15] by including an instantaneous speed estimate returned from the GPS receiver and phone location service. We propose an offline approach which assumes that a sequence of GPS fixes is available. Specifically, we generate a list of time intervals where each interval has an average GPS speed associated with it. Assuming we have an ordered list of all K GPS fixes taken throughout the day, we generate a list of average speed intervals. This interval list consists of 3-tuples of the format (start time, end time, average GPS speed). The Haversine formula is an equation important in navigation, giving great-circle distances between two points on a sphere from their longitudes and latitudes [44].

Program 3.1 Function to generate average speed list

def genAvgSpeed(GPS_FIX):
    idx = 0
    AVG_SPEED = []
    while idx < K:
        nxt = findFutureFix5Min( GPS_FIX, idx )
        dist = haversine( GPS_FIX[idx], GPS_FIX[nxt] )
        start_time = GPS_FIX[idx].time
        end_time = GPS_FIX[nxt].time
        avg_speed = dist / (end_time - start_time)
        AVG_SPEED.append( (start_time, end_time, avg_speed) )
        idx = nxt
    return AVG_SPEED

Intuitively, the stream of GPS fixes is broken down into intervals of 5 minutes or greater and the average speed is calculated for each interval. This approach has several advantages over the use of the instantaneous estimates from the phone's location service.
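Program 3.1 relies on a haversine helper. A minimal sketch follows; the coordinate-pair signature and metre-based return value are assumptions, not the thesis code.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes [44]."""
    R = 6371000.0  # mean Earth radius in metres (spherical approximation)
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 \
        + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))
```

One degree of longitude at the equator comes out to roughly 111 km, a useful sanity check.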
The instantaneous speed estimates are often inaccurate, and GPS readings may be temporarily unavailable for periods of time during travel for a number of reasons. For example, traveling underground in a subway, through a long tunnel, or in other situations where GPS data is only sporadically available will cause vehicle motion to be misclassified when using the instantaneous approach.

The final feature for the activity classifier is determined from where the non-overlapping accelerometer windows lie in the GPS data stream. The average speed of the enclosing interval is returned as the last feature of the observation vector. This process is illustrated in Figure 3.3.

f8 = findSpeed( AVG_SPEED, ACCWindowTime )    (3.7)

Figure 3.3: Final feature vector is produced for each accelerometer signal window by combining accelerometer features with the average GPS speed recorded during the same time interval

3.4 Experiments

We are primarily interested in having an activity recognition component in our overall system. To build a representative dataset, ideally one would collect data from multiple subjects on multiple devices. The classifier we chose to evaluate is the decision tree as employed in [15]. The classifier is not state of the art, but works reasonably well for our purposes and could be replaced with a superior approach.

We compare our ability to classify wheeling motion with a wheel-mounted actimetry system similar to the one proposed in [40]. We also compare our ability to classify vehicle motion using our proposed offline approach versus one that employs speed estimates returned by the Android location services.

3.4.1 Data Collection

Accelerometry and GPS data streams were collected using a Nexus 4 Android phone and the MobiSense app (Section 4.3). The phone was carried in a variety of positions and orientations, as shown in Figure 3.4. Wheeling data was collected using a manual wheelchair and included indoor and outdoor movement on a variety of surfaces around the UBC campus.
The subject was, however, not an actual wheelchair user. Walking data was collected while moving in and out of buildings. Effort was made to walk with different gaits and speeds. Vehicle motion data was collected while traveling on public buses in Vancouver. Stationary data was collected with the phone left on a desk. All data was collected by a single user on a single phone.

Figure 3.4: The four configurations for which the training and test data were collected. Wheeling and walking training data was collected with the phone in a pocket and a backpack

3.4.2 Evaluation

With the dataset collected, we split the data into 75% training and 25% test. A decision tree was learned using the CART algorithm as provided in the scikit-learn library for Python [32], with a maximum depth of 3 decision nodes. The learned tree is shown in Figure 3.5. Decision trees have several nice properties: they are inexpensive to store, fast to evaluate, and easy to interpret.

Figure 3.5: Visualization of the final activity classification decision tree. Left branches are taken when the condition is true, right branches when false.

The learned tree gives us some insight into the activity classes. The first decision is made on the average GPS speed feature. This feature value is in meters per second, so we can see that if the GPS speed is above 1.89 m/s, or 6.77 km/h, then the tree will report the motion as vehicle. This may seem too low, perhaps, but there was no training data for jogging or running present, and this threshold could easily be adjusted. For the other three classes, the two features selected were the variance of the signal and the largest of all coefficient magnitudes. It is interesting that the energy and actual frequencies were not found to be useful by the algorithm.

To visualize the results of the classifier on the test dataset we employed a confusion matrix incorporating the four classes of interest, shown in Figure 3.6.
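The learned tree's decision logic amounts to a few nested comparisons. The sketch below transcribes that idea; only the 1.89 m/s root split is stated in the text, while `var_thresh`, `mag_thresh` and the arrangement of the lower branches are hypothetical placeholders, not the learned values from Figure 3.5.

```python
def classify_window(avg_gps_speed, variance, max_coeff_mag,
                    var_thresh=0.5, mag_thresh=8.0):
    """Hand-coded sketch of a depth-3 activity tree. The root split on
    average GPS speed (> 1.89 m/s -> vehicle) follows the text; the
    remaining thresholds are illustrative assumptions."""
    if avg_gps_speed > 1.89:          # f8: average GPS speed in m/s
        return "vehicle"
    if variance <= var_thresh:        # f0: low variance -> no motion
        return "stationary"
    # Walking shows a larger spectral response than wheeling.
    return "walking" if max_coeff_mag > mag_thresh else "wheeling"
```

In a real deployment these thresholds would come from the CART fit rather than being hand-tuned.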
The results are quite promising for this simple approach, in that an average of 97% accuracy is achieved across the classes. Most of the confusion lies between walking and wheeling.

Figure 3.6: Confusion matrix across activity classes of interest. Rows and columns represent the ground truth and classified activities respectively.

If instantaneous GPS speed measurements are used instead of our approach, the vehicle motion accuracy drops to around 67% on the test set, with high confusion between vehicle motion and wheeling.

Likely the main issue with respect to misclassification in our approach is that there was no noise or fidget data present in the training set. The datasets were continuous recordings of the specific activity. For example, when the data for walking was collected, no time was spent standing still, waiting briefly, or tapping a foot impatiently. For the same reasons explained in [3], it is difficult to capture and ground truth authentic fidgeting. In the classifier's current state, such actions would likely be classified as wheeling, since they would have low average GPS speed, low spectral response and slightly higher variance than a stationary signal. The classifier would also likely be more robust if it had been trained using multiple devices and users.

To further validate our classifier, we compared our results with a summary produced by another wheelchair activity measurement system. This system used an accelerometer mounted to a wheel. The signature from a wheel-mounted accelerometer is much cleaner and more obvious when compared to phone accelerometry measurements made from a user's pocket or bag.
In Figure 3.7, we see similar reported time periods of activity and inactivity.

Figure 3.7: Comparison of phone based activity classifier (top) with custom wheel mounted actimetry system (bottom).

Chapter 4
MobiSense System

Mobile technology has reached a level of sophistication and ubiquity such that it now presents an attractive platform for assistive technology research and health-related applications. Mobile devices support a rich integration of sensing, display, and Internet connectivity hardware and have become widely adopted by the general public.

The electronic health, or eHealth, movement has made great strides in improving access to health care information by using Internet technologies. An obvious next step in this progression is to leverage both Internet and mobile technologies and investigate mobile health, or mHealth, applications [27] that move towards further personalization and assistive capabilities. In a more general sense, the quantified self movement [39] aims to promote self-improvement through consistent data collection and analysis. The underlying philosophy of the quantified self movement is that consistent measurement of and reflection on important personal metrics leads to an almost instinctive improvement in behaviour. With sensing and recording becoming easier than ever, there is a growing number of quantified self tools for improving lifestyle or behavioural habits such as diet, exercise, and time management. Of particular interest to assistive technologists and rehabilitation scientists is the measure of a person's mobility, which is an important factor in assessing quality of life [37].

MobiSense is a mobile health research platform that aims to improve mobility analysis for both ambulating and wheelchair users.
The goals of the system are to make it simple for users to collect mobility data, to provide easily accessible summaries and analysis of daily behaviours, and finally to enable further research and development by providing a sandbox environment for rapid prototyping and experimentation. The paradigm of distributed data collection and centralized analysis stands to greatly benefit health care and assistive applications. With cloud computing and storage services becoming cheap and accessible, systems that harness these resources could be of great interest to health researchers and professionals. Investigating systems of this nature requires a broad skill set, including familiarity with machine learning, user interfaces and distributed systems.

4.1 Related Work

Several systems have been built in recent years that explored mobility sensing and reporting to varying degrees. One of the earlier realized systems was UbiFIT [7], a project designed to track physical activity and provide on-phone visual feedback with a simple, intuitive display. Users wore a separate pedometry device that communicated with the phone to summarize activity using a visualization that was quick to interpret. As sensor integration with phones became more sophisticated, systems relied less on external hardware and solely on integrated sensors within the mobile device. FUNF, an open source sensing framework for Android [33], provided an extensible framework for capturing a wide variety of sensor data from a device and publishing it to a web service. HumanSense [13] is an open source data collection platform on Android, capable of logging sensor data and periodically uploading it to a web service. Work done by [37] demonstrated a proof of concept system that utilized pre-placed Bluetooth beacons in the home to infer room level localization, as well as activity levels, using mobile phones. This work also presented several succinct visualizations for summarizing the time series lifespace data.
A custom wearable device with GPS and activity logging functionality was proposed in [4]. This device was designed to be used for lifespace reporting and recorded logs on a memory card to be removed and analyzed later.

Recently, the open mHealth initiative [27] has proposed a standardized architecture for mHealth and reporting applications. Ohmage, their participatory sensing platform [15], allows for data collection on the phone with the ability to upload data to a web service for summarizing and visualizing reports for users. The application has been used to track users' activity levels and has also been applied in other studies relating to post-traumatic stress disorder and monitoring stress levels in new mothers [30].

MobiSense builds on the HumanSense project and couples it with a new web service in the spirit of Ohmage and the open mHealth architecture. The system provides user feedback and summarization visualizations through a custom web client. Many of the visualizations are styled after the lifespace visualizations presented in [37]. The system uses an off-the-shelf Android Nexus 4 phone with no hardware modifications, and the web client can be viewed from any browser. To our knowledge, MobiSense is the first lifespace tracking system running on standard mobile phone hardware that provides indoor, outdoor and activity summarizations for both ambulating and wheelchair users.

4.2 System Architecture

The open mHealth initiative identifies three key abstractions in the architecture of an mHealth system [27]. Data storage units (DSUs) define how the data is stored and accessed, data processing units (DPUs) perform transformations and summarizations on the collected information, and data visualization units (DVUs) define graphical representations.
While MobiSense does not adhere exactly to the specifications of the units outlined by the initiative, in part because the specifications are still in flux, they provide a useful frame of reference for discussing the system's components, and future work could certainly be done to achieve full compliance. Figure 4.1 outlines the system's overall architecture and data flow. Sensor data is collected on the user's Android phone and uploaded to a web service hosted in the cloud. The data is processed on the server and transformed into succinct summarizations. A web client can then request those summaries for visualization to the user. Users log in to the web client with the unique identifier (UID) associated with each phone.

Figure 4.1: Overview of MobiSense system architecture

Several decisions in the design of the system were influenced by the preference for a flexible, research-friendly environment. The operating system's file system was used as the primary storage unit instead of a database to reduce technical overhead and complexity. The data processing could be done in the mobile application, but a centralized approach eased experimentation and prototyping. Python scripts running on the server analyze and transform stored data and are relatively simple to write and experiment with. Similarly, JavaScript visualization libraries are easy to use and customize for the web client.

4.3 Phone Application

The MobiSense phone application is a modification of the HumanSense project [13]. HumanSense logs many of the sensors supported by Android, but for the lifespace measurements that MobiSense focuses on, only WiFi, GPS and accelerometer readings are stored. The readings are stored as binary files compressed with gzip on the phone's SD card.

4.3.1 Sensor Logging

Each sensor can be configured with a different polling rate, and MobiSense sets suitable defaults for the different streams based on the way the data is used.
The GPS sensor is polled once a minute and records the following:

<unixtimestamp> (8 bytes, long int)
<accuracy> (4 bytes, float)
<bearing> (4 bytes, float)
<speed> (4 bytes, float)
<altitude> (8 bytes, double)
<latitude> (8 bytes, double)
<longitude> (8 bytes, double)

Here, <unixtimestamp> is a standard Unix timestamp and represents the milliseconds since Jan 1 00:00:00 1970 GMT. Longitude and latitude represent the GPS fix as described in Section 3.2.2. Accuracy, bearing, altitude and speed are other estimates returned by the GPS API in Android; however, MobiSense does not make use of these. The WiFi modem is polled every 10 seconds for MobiSense and stores the following data on file:

<numAP> (4 bytes, int)
<unixtimestamp> (8 bytes, long int)
(numAP entries):
<signal_strength> (4 bytes, int)
<SSID_len> (2 bytes, int)
<SSID_string> (SSID_len bytes, string)
<BSSID_len> (2 bytes, int)
<BSSID_string> (BSSID_len bytes, string)

Since the number of visible access points varies from reading to reading, each entry is potentially of a different length. Here, <numAP> encodes the number of access points observable for the current reading. The signal strength, SSID, and BSSID fields are described in Section 2.2.

The accelerometer is polled at a rate of 20 Hz and records the following:

<unixtimestamp> (8 bytes, long int)
<mag> (4 bytes, float)

Here <mag> represents the accelerometer signal as defined in Section 3.2.1. The final important group of files created by MobiSense is the training data used to learn a classifier for indoor localization, as outlined in Section 2.3.
The training file is stored in human-readable plaintext and records labeled WiFi observations in the following format:

R <unixtimestamp>
L <roomName>
<BSSID_0> <SS_0>
...
<BSSID_N> <SS_N>

Training files are generated whenever the user decides to add a new room to their localization model or to record more training data for a given room.

Since the application is expected to collect data over the course of a full day, an important consideration is the amount of data that the sensor logging would produce. As it is polled most frequently, the accelerometer sensor stream creates by far the most data. For an 18 hour recording period with full accelerometer, WiFi and GPS logging, roughly 15 MB of gzip-compressed sensor data would be collected (close to 50 MB uncompressed). Only the compressed files are ever uploaded to the web service, and only when the phone has an Internet connection through WiFi, to prevent accidental transfer costs on a user's data plan. Once the data has been processed as explained in Section 4.5, its summarization format is much more compact, each day taking up only about 100 KB.

Another important consideration for any mobile application is battery usage. With little screen use and no calls made or received, MobiSense was able to run on a single battery charge on a Nexus 4 phone for 22 hours.

Sensor logging files are continuously updated until either the user stops recording or the current time reaches midnight. If the user has not stopped recording at midnight, new logging files are created to reflect the start of a new day and logging continues using these new files. An upload service periodically checks for an Internet connection over WiFi and, if there are un-uploaded files, sends an HTTP POST request with the gzipped sensor files as the body. The user can also manually start the uploading process. Along with these files, the timezone of the device is also uploaded to the service.
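As a concrete illustration of the binary GPS record layout above, the fixed-size fields map naturally onto a `struct` format string. Big-endian byte order is an assumption here (the default for Java's `DataOutputStream` on Android), not something the thesis states, and the helper names are hypothetical.

```python
import struct

# Field order follows the GPS log format: timestamp, accuracy, bearing,
# speed, altitude, latitude, longitude. ">" = big-endian (assumed).
GPS_RECORD = struct.Struct(">qfffddd")  # 44 bytes per fix

def unpack_gps_record(buf):
    """Decode one 44-byte GPS log record into a dictionary."""
    ts, accuracy, bearing, speed, altitude, lat, lon = GPS_RECORD.unpack(buf)
    return {"time_ms": ts, "accuracy": accuracy, "bearing": bearing,
            "speed": speed, "altitude": altitude, "lat": lat, "lon": lon}
```

The WiFi records, by contrast, are variable-length and would need to be read field by field, using <numAP> and the per-entry length prefixes.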
4.3.2 User Interface

To simplify the application's use, a basic interface was designed to provide hooks into the most important actions associated with the application. Figure 4.2 below is a capture of the main screen.

Figure 4.2: Screenshot of the MobiSense phone application.

Displayed in the top left of the screen is the UID (unique identifier) associated with the device, which is used to differentiate data streams and log in to the server. MobiSense also allows the user to change the UID of the device, which allows several different models and logs to be created while still using the same device. The central button is used to start the logging service. When pressed, the application will begin recording sensor streams and storing the data locally on the device. To stop the service, the user presses the same button again, and another service will attempt to upload the recorded files. The user can also manually trigger this upload service.

Below the central start button is the area of the UI used for creating an indoor location model. The plus (+) button is used to create a new labeled room. Pressing it opens a dialog where the user can enter the name of a new room they would like to train on (e.g. "kitchen"). The drop-down menu after the plus button lists all the labeled rooms, so the user can add more training data to a previously created room. Finally, the phone button starts recording WiFi observations as training data for the currently selected room. It records observations every 800 ms and stores the training data in the format described in Section 4.3.1.

4.4 Data Storage

The centralized processing and storage of the data streams is done on a single Amazon EC2 instance running Ubuntu Linux. The Tornado web server is used to handle data uploads and requests for lifespace summaries. When the MobiSense phone application uploads compressed sensor data to the server, the upload handler decides where to store the data on disk.
Figure 4.3 below outlines the organization of the filesystem on the server.

Figure 4.3: Filesystem Organization on MobiSense Server

Data and summaries are grouped by device UID and date. Indoor training files ( <Date>-wifiData.txt ) are stored for each device, where <Date> represents a timestamp for the file. The indoor WiFi classifier and novelty detector described in Chapter 2 are represented in a Python pickle file ( indoor_model.pkl ). A shared activity model (not device specific) stores the decision tree classifier explained in Chapter 3 for activity recognition. The file device_info.pkl stores the timezone that the device was set to. Sensor data is uploaded as compressed gzip files, which are placed into folders corresponding to the date they were created. The gzip files get decompressed to .raw files, and then summary pickle files ( <Type>-summary.pkl ) are generated using the raw uncompressed data. Three summary files are generated: one each for activity, indoor, and outdoor data. The process that generates the summary files is discussed in more detail in the following section.

4.5 Data Processing

Data processing is scheduled on the server using a single cron job that searches the filesystem for unprocessed files. The steps at a high level are shown below in Program 4.1.
The job is scheduled to run every three minutes, and there is a locking mechanism in place to ensure that no new job begins execution while another is still running.

Program 4.1 Data Analysis Process

FILES_2_UNZIP = collectNewCompressedFiles()
gunzip( FILES_2_UNZIP )

NEW_INDOOR_TRAINING_FILES = collectNewIndoorTrainingFiles()
updateIndoorModels( NEW_INDOOR_TRAINING_FILES )

NEW_GPS_FILES = collectNewGPSReadings()
NEW_WIFI_FILES = collectNewWifiReadings()
NEW_ACC_FILES = collectNewAccelerometerReadings()

doOutdoorSummary( NEW_GPS_FILES )
doActivitySummary( NEW_ACC_FILES )
doIndoorSummary( NEW_WIFI_FILES )

The unzip step decompresses all sensor stream files ending with a .gz extension and stores the decompressed data with a .raw extension.

All new indoor training files are processed and then converted from .txt extensions to .proc (for processed) to ensure that no file is processed twice by the server (.proc files are ignored).

The sensor streams are then analyzed and summarized by running scripts on the uncompressed .raw files. The .raw files are then re-labeled as .proc, and the scripts create or update their respective summary pickle files.

Another important consideration is the data dependencies between the different summarizations. As explained in Chapter 3, the activity recognition relies on having the GPS stream already processed to allow for sensor fusion. The data analysis process therefore schedules the GPS sensor streams to be analyzed before the accelerometer streams. Similarly, the indoor summary could likely benefit from activity summary information, and so it is generated last. The device_info.pkl file is created by the web server when the upload requests from the phone application come in.

4.5.1 Outdoor Summary

The outdoor summary stores the sequence of GPS fixes, the average speed for given time intervals, and the total kilometres traveled over the day.
The structure of the summary file is a dictionary with three main entries:

RAW : [ <time>, <lat>, <lon>, <s_est>, <acc>, <t_delta>, <d_delta> ]
INT : [ <start_time>, <end_time>, <avg_speed> ] (intervals >= 5 min)
STATS : { TOT_FIXES : <num_fixes>, TOT_KM : <tot_km_travelled> }

The RAW entry is a list of GPS fixes, with information relating to accuracy, speed estimate, and the time and distance from the previous entry. The INT entry is a list of time intervals and the average speed calculated over each interval. As explained in Section 3.3.2, this information is needed for activity recognition. The STATS entry is itself a dictionary and stores the total fixes and distance travelled for the day.

4.5.2 Activity Summary

The activity summary has a similar dictionary structure with two entries:

INT : [ <start_time>, <end_time>, <class> ] (intervals >= 2 s)
TOT : { <class> : <tot_time> }

The decision tree outlined in Chapter 3 is used to process windows of accelerometer readings fused with average GPS speed, deciding on the activity class for each interval. If two subsequent intervals have the same classification, they are condensed into one larger interval with that classification. Krumm suggests using a two-state HMM to smooth transitions between active and stationary states [24]. For our purposes, motion classifications less than 2.5 seconds in duration are suppressed if surrounded by stationary classifications.

4.5.3 Indoor Summary

The stream of WiFi observations is processed in a similar way using the novelty detection and classifier model outlined in Chapter 2.

INT : [ <start_time>, <end_time>, <room_name> ] (intervals >= 10 s)
TOT : { <room_name> : <tot_time> }

Each interval that is identified as not novel has a room associated with it. The TOT entry stores the total time spent in each room for the day.

4.6 Data Visualization

One of the goals of MobiSense was to provide users with rich summaries of their daily patterns and behaviours.
Visualizations play an important role in communicating this information, and careful thought is required to present succinct, quickly interpretable charts and plots. A visualization component is present in many lifespace and actigraphy applications and is key to guiding the user's exploration and interpretation of their own data [15] [37] [18] [23]. As outlined in Section 4.2, all the logic for the visualizations lives in the web client, which is written in HTML and JavaScript. Summarization data is requested from the server and returned in human-readable JSON format, which is then rendered into the appropriate visuals.

4.6.1 Timeline Selection

The first step before data can be visualized is for the user to specify the date range to be summarized. A slide bar with two adjustable ends allows for selection over the range of dates for which the user collected data. Shown below in Figure 4.4 is a screenshot of the timeline selector.

Figure 4.4: Timeline Selection Slider User Interface

The user also has the option of visualizing each individual day over that range (Daily), or an aggregate summarization (Total) that is explained in the subsequent sections. Once a range has been selected, the site generates requests for the three summary streams from the MobiSense server. These are simple HTTP GET requests, and three are sent, one to each summarization handler on the server. For example, a request URL for the outdoor or GPS summary from May 20 to June 1 would look like the following:

http://mobisense.ca/gps/?start=13-05-20&end=13-06-01

The server will populate the summary data and send it back in JSON format. The JSON format closely follows the structure of the summary files presented in Section 4.5.

4.6.2 Outdoor Summary

The outdoor summary data is a stream of GPS fixes consisting of latitudes and longitudes. The Google Maps API [20] has built-in functionality to visualize a heat map distribution of logged GPS locations.
Frequently logged locations produce larger, darker heat centers. In addition, trace lines can be added to connect consecutive fixes. An example screenshot of the visualization is shown below in Figure 4.5.

Figure 4.5: Outdoor Summary Visualization

The data visualized here was collected by a single user over the course of two days. Clear centers around the user's home and workplace can be seen, as well as two trips to parks in the city. For both individual day summaries and total aggregates over a date range, the nature of the visualization presentation is essentially the same. Using an embedded instance of Google Maps for the outdoor visualization affords the user all the features and familiar functions of Google Maps. For example, users can zoom in and out and pan around the map, as well as investigate street-level views of areas of interest. The total kilometres travelled for each day and the average number of kilometres per day are also shown to the user. When the user chooses to view each day individually across a timeline, a separate visualization page is generated for each day. When the total summarization of a timeline is requested, a single map is generated for the entire period.

4.6.3 Activity and Indoor Summary

The visualizations for the activity and indoor timelines are closely related in that both summarize classifications over the course of the day. Pie charts visualizing a breakdown of the distribution of time spent in each activity class (for the activity summarization) and in each room (for the indoor summarization) are created for the user. Figure 4.6 shows an example of each chart for a given timeline.

Figure 4.6: Pie Charts Summarizing Distribution of Activities and Rooms

We can see that for this period, over 58% of the user's time was spent not moving, about 30% of the time wheeling (since this was a wheelchair user) and 11% driving or using public transportation.
Similarly, from the indoor chart, one can see the user spent most of his indoor time in the living room or at the desk. This pie chart visualization of activities was employed in the prototype system proposed in [37].

Users can also mouse over the sections of the pie chart and see the absolute minutes logged for each activity class or room. Here, as for the outdoor summary, the visualization stays the same whether the user requests individual day summaries or the total aggregation over the entire timeline. The pie chart works well for a quick summary of the user's time; however, interesting temporal trends are also likely of interest to users for behaviour analysis. The following visualizations are designed to show specifically when classifications occur during the day.

As the summary of the data streams is stored as a sequence of classifications, it made sense to visualize this sequence explicitly in addition to the aggregate pie charts. Each classification interval is visualized directly in a color-coded timeline. This approach is also employed in the work of [37]. Figure 4.7 presents an example summary for May 19, 2013.

Figure 4.7: Screenshot of Activity and Indoor Daily Timeline Summary

There are only three classes of interest for the activity summary, while the number of rooms for the indoor summary depends on the training model. The JavaScript library DyGraph was used to create the graphs and allows for zooming in and out. In this example we can see the descriptive power of this style of visualization. The user begins the day in the bathroom and bedroom for about 50 minutes, then spends some time in the kitchen and living room, and then leaves the house at around 12 noon. The user returns to the house at 1, then leaves by vehicle at 2 and returns around 9.

The previous visualizations were used to plot single days; however, if the user requests a total aggregation over the dates of interest, a different summary is presented.
Instead of sending back separate sequences for each day, the MobiSense server aggregates over each day in the range and sends a single sequence that represents the entire date range. The timeline is populated with predetermined intervals of 10 minutes over the course of 24 hours, and each interval stores a probability distribution of the classes for that time over the date range. For example, if the user selects a range of 10 days and on 3 out of 10 of those days the user was classified as being in their bathroom from 8:10 to 8:20, then for that interval the bathroom would be visualized with a probability of 0.3.

Figure 4.8: Screenshot of Total Summary Timeline Visualization

The total 24 hour timeline is visualized as a 100% stacked line chart, where each interval is a distribution of classifications that add up to 1. An example summary using this visualization technique is shown in Figure 4.8. Here again we can see the descriptive power of this kind of visualization in this example. The activity summary shows us that there is a tendency for the user to be using vehicle transportation in the morning between 9 and 10, likely representing the morning commute. There is a smaller corresponding peak for the return trip home between 5 and 6. After about 11 at night, the probability of stationary activity dominates, representing the user going to sleep or at least leaving their phone somewhere for the night. For the rest of the day, wheeling and stationary are mostly distributed equally.

Analyzing the indoor plot, we can see similar trends. With more rooms in the distribution the plot becomes somewhat harder to read, but one can still see that there is a definite trend towards being in the living room in the evening and similarly the bedroom at night. Since not every interval across the days of interest will necessarily have a classification (novel locations), we also included a subplot below the summary that visualizes the data density for each interval.
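The per-interval aggregation described above can be sketched as a small helper. This is an illustration of the idea only; the input convention (one label, or None for novel/missing data, per 10-minute slot per day) is an assumption, not the server's actual data layout.

```python
from collections import Counter

def aggregate_intervals(day_labels):
    """For each 10-minute slot, compute the fraction of days assigned each
    class label. `day_labels` is a list of per-day lists, one label (or
    None) per slot; a full day would have 144 slots."""
    n_days = len(day_labels)
    dists = []
    for slot in zip(*day_labels):                 # same slot across all days
        counts = Counter(lab for lab in slot if lab is not None)
        dists.append({lab: c / n_days for lab, c in counts.items()})
    return dists
```

With 3 of 10 days labeled "bathroom" in a slot, the slot's distribution would give bathroom a probability of 0.3, matching the example in the text.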
This subplot shows, for the dates of interest, how many days have non-novel WiFi readings during each time interval. In this example, we see that the data density is quite low throughout the day, until the user returns home in the evening. The data density drops to 0 when the user has no non-novel readings during the interval or when the phone was not collecting data. In the example we can see that no data was collected from 6 to 8 in the morning over the range, as the activity data is also absent there.

Currently the server populates the aggregate structure over the date range each time a summary is requested. With relatively small ranges, the time spent processing is negligible; however, if larger date ranges were considered, more thought would be required to cache or store the aggregate results in a query-optimized format.

Chapter 5

Conclusion

Evaluating indoor localization on real and simulated datasets has grounded our investigation and given us more confidence that the approaches would perform well in a variety of residences and non-ideal environments. Random forests were demonstrated to be highly effective as an approach to room-level localization using WiFi signal strengths. They had the best overall performance on both the normal datasets and those with simulated defects. For novelty detection, the best approach was less clear; however, the nearest centroid approaches seemed to be more robust than the others, with nearest centroid using Euclidean distance thresholds appearing most robust across the environments evaluated.

The activity classifier was able to differentiate between wheeling, walking, stationary and vehicle motion to a high degree of accuracy using the test data collected. However, in practice, it was possible to confuse walking with wheeling.
To further validate our classifier, we compared our results with a summary produced by another wheelchair activity measurement system. In this system an accelerometer was mounted directly to one wheel, and we saw that our approach gave comparable actigraphy summaries.

We proposed MobiSense as a research platform for lifespace summaries using mobile phones. The combination of a simple timeline selection with rich visualizations could act as a powerful explorative tool for researchers and users interested in their own patterns and behaviours. Date range aggregates and daily summaries for activity and indoor location were visualized as 100% stacked line charts and line charts respectively. Heat maps served as visualizations for outdoor activity, and all visualizations were effective in quickly conveying lifespace trends and patterns. The timeline selection control provides a clear, simple point of entry for manipulating and exploring date ranges. The server-side data storage and processing is organized in a straightforward way to reduce technical overhead and complexity, while following the spirit of the architecture abstractions of the openMHealth specifications. Extensions and improvements to the system would be relatively straightforward to implement and experiment with. A test instance of the system is currently hosted at http://mobisense.ca.

5.1 Future Work

When proposing an entire system, many avenues for future improvements and investigation present themselves. In particular, more effort could be spent in the development of a truly robust classifier between walking and wheeling activity classes. As explained in Section 3.4.2, in practice the signals are confused for each other somewhat frequently.

While the design of MobiSense was influenced a great deal by the architecture patterns presented by the openMHealth initiative, full adherence to the openMHealth abstraction specification would be preferable to promote standards and collaboration.
A serious concern for any openMHealth system is security and user privacy. Mobile health systems may store sensitive and private user information, and so great care must be taken to ensure that data is not compromised. Currently MobiSense does not have any security measures in place.

As a tool for exploring and summarizing lifespace, a primary goal of MobiSense is for researchers and users to identify trends and common behaviours in their daily lives. While passive summarization of information is an important step towards making adjustments and behavioural improvements to lifestyle and activity, future systems may not only respond to user queries, but actively request information and make suggestions to users directly. Currently the visualizations are the only meaningful interaction users have with the MobiSense system. It is exciting to imagine future assistive capabilities of MobiSense as an interactive agent, instead of only as a passive system. Extending the predictive and interactive abilities of MobiSense could, for example, enable the creation of very useful personalized reminders and alerts. The development of long-term personal data offers an exciting context for future work in unsupervised machine learning and wearable systems. Ohmage [15] and HumanSense [13] can query the user; however, these requests are simple prompts for data or labeling.

The introduction of Google Glass presents another interesting future platform for data collection and visualization. With video recording and a small heads-up display, interactions and data logging could potentially become much richer. We are beginning to see devices and systems that are more personalized and responsive than ever before. It is an exciting opportunity to apply some of these ideas to assistive technology and possibly improve quality of life and overall health.

Bibliography

[1] Paramvir Bahl and Venkata N. Padmanabhan.
RADAR: An in-building RF-based user location and tracking system. In INFOCOM 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE, volume 2, pages 775–784. IEEE, 2000.

[2] B. Balaguer, G. Erinc, and S. Carpin. Combining classification and regression for WiFi localization of heterogeneous robot teams in unknown environments. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 3496–3503, 2012.

[3] Ling Bao and Stephen S. Intille. Activity recognition from user-annotated acceleration data. In Alois Ferscha and Friedemann Mattern, editors, Pervasive Computing, volume 3001 of Lecture Notes in Computer Science, pages 1–17. Springer Berlin Heidelberg, 2004.

[4] P. Boissy, S. Briere, M. Hamel, M. Jog, M. Speechley, A. Karelis, J. Frank, C. Vincent, R. Edwards, and C. Duval. Wireless inertial measurement unit with GPS (WIMU-GPS); wearable monitoring platform for ecological assessment of lifespace and mobility in aging and disease. In Engineering in Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE, pages 5815–5819, 2011.

[5] Liming Chen, J. Hoey, C. D. Nugent, D. J. Cook, and Zhiwen Yu. Sensor-based activity recognition. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 42(6):790–808, 2012.

[6] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968.

[7] Sunny Consolvo, Predrag Klasnja, David W. McDonald, Daniel Avrahami, Jon Froehlich, Louis LeGrand, Ryan Libby, Keith Mosher, and James A. Landay. Flowers or a robot army?: encouraging awareness & activity with personal, mobile displays. In Proceedings of the 10th international conference on Ubiquitous computing, UbiComp '08, pages 54–63, New York, NY, USA, 2008. ACM.

[8] Mark Cummins and Paul Newman.
FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance. The International Journal of Robotics Research, 27(6):647–665, 2008.

[9] Ian Dewancker, Jaimie Borisoff, Boyan Tom Jin, and Ian M. Mitchell. MobiSense: Lifespace tracking and activity monitoring on mobile phones. In Proceedings of Rehabilitation Engineering and Assistive Technology Society of North America Annual Conference, RESNA 2014, 2014.

[10] Brian Ferris, Dieter Fox, and Neil D. Lawrence. WiFi-SLAM using Gaussian process latent variable models. In International Joint Conference on Artificial Intelligence, volume 7, pages 2480–2485, 2007.

[11] Brian Ferris, Dirk Hähnel, and Dieter Fox. Gaussian processes for signal strength-based location estimation. In Proc. of Robotics Science and Systems. Citeseer, 2006.

[12] J. Frank, S. Mannor, J. Pineau, and D. Precup. Time series analysis using geometric template matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012.

[13] Jordan Frank. HumanSense Android app. http://jwf.github.io/Humansense-Android-App/, August 2013.

[14] Jordan Frank, Shie Mannor, and Doina Precup. Generating storylines from sensor data. Pervasive and Mobile Computing, 2013.

[15] John Hicks, Nithya Ramanathan, Hossein Falaki, Brent Longstaff, Kannan Parameswaran, Mohamad Monibi, Donnie H. Kim, Joshua Selsky, John Jenkins, Hongsuda Tangmunarunkit, et al. Ohmage: An open mobile system for activity and experience sampling. Technical report, CENS Technical Reports, 2011.

[16] Joseph Huang, David Millman, Morgan Quigley, David Stavens, Sebastian Thrun, and Alok Aggarwal. Efficient, generalized indoor WiFi GraphSLAM. In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages 1038–1043. IEEE, 2011.

[17] Ekahau Inc. Ekahau: Business intelligence through location. http://www.ekahau.com/, 2013.

[18] Fitbit Inc. Fitbit. http://www.fitbit.com/us, July 2013.

[19] Google Inc. Google Location APIs.
http://developer.android.com/google/play-services/location.html, July 2013.

[20] Google Inc. Google Maps API. https://developers.google.com/maps/, August 2013.

[21] Google Inc. Google Now. http://www.google.com/landing/now/, July 2013.

[22] Google Inc. Motion Sensors. http://developer.android.com/guide/topics/sensors/sensors_motion.html, July 2013.

[23] Nike Inc. Nike+ Fuelband. http://www.nike.com/us/en_us/c/nikeplus-fuelband, July 2013.

[24] John Krumm. Ubiquitous Computing Fundamentals. Chapman & Hall/CRC, 1st edition, 2009.

[25] Olivier Ledoit and Michael Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365–411, 2004.

[26] Lin Liao, Dieter Fox, and Henry Kautz. Extracting places and activities from GPS traces using hierarchical conditional random fields. Int. J. Rob. Res., 26(1):119–134, January 2007.

[27] Open mHealth. Open mHealth. http://openmhealth.org/, August 2013.

[28] Kevin P. Murphy. Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series). The MIT Press, August 2012.

[29] NumPy and SciPy Contributors. Discrete Fourier Transform. http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftfreq.html, July 2013.

[30] Ohmage. Ohmage projects. http://ohmage.org/projects.html, August 2013.

[31] Jun-geun Park, Ben Charrow, Dorothy Curtis, Jonathan Battat, Einat Minkov, Jamey Hicks, Seth Teller, and Jonathan Ledlie. Growing an organic indoor location system. In Proceedings of the 8th international conference on Mobile systems, applications, and services, pages 271–284. ACM, 2010.

[32] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake VanderPlas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. Scikit-learn: Machine Learning in Python.
The Computing Research Repository, abs/1201.0490, 2012.

[33] Alex Pentland, Nadav Aharony, Wei Pan, Cody Sumter, and Alan Gardner. Funf: Open sensing framework. http://www.media.mit.edu/research/groups/1448/funf-open-sensing-framework, August 2013.

[34] Andrzej Pronobis, Oscar M. Mozos, Barbara Caputo, and Patric Jensfelt. Multi-modal semantic place classification. The International Journal of Robotics Research (IJRR), Special Issue on Robotic Vision, 29(2-3):298–320, February 2010.

[35] Morgan Quigley, David Stavens, Adam Coates, and Sebastian Thrun. Sub-meter indoor localization in unmodified environments with inexpensive sensors. In Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, pages 2039–2046. IEEE, 2010.

[36] Stephen J. Roberts. Extreme value statistics for novelty detection in biomedical data processing. In Science, Measurement and Technology, IEE Proceedings-, volume 147, pages 363–367. IET, 2000.

[37] Ana Katrin Schenk, Bradley C. Witbrodt, Carrie A. Hoarty, Richard H. Carlson, Evan H. Goulding, Jane F. Potter, and Stephen J. Bonasera. Cellular telephones measure activity and lifespace in community-dwelling adults: proof of principle. J Am Geriatr Soc, 59(2):345–52, 2011.

[38] Oliver S. Schneider, Karon E. MacLean, Kerem Altun, Idin Karuei, and Michael M. A. Wu. Real-time gait classification for persuasive smartphone apps: structuring the literature and pushing the limits. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, IUI '13, pages 161–172, New York, NY, USA, 2013. ACM.

[39] Quantified Self. Quantified self: Self knowledge through numbers. http://quantifiedself.com/about/, August 2013.

[40] Sharon Eve Sonenblum, Stephen Sprigle, and Ricardo A. Lopez. Manual wheelchair use: bouts of mobility in everyday life. Rehabil Res Pract, 2012.

[41] Leo Breiman. Random forests. Machine Learning, pages 5–32, 2001.

[42] Sebastian Thrun, Wolfram Burgard, and Dieter Fox.
Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). The MIT Press, 2005.

[43] Douglas L. Vail, Manuela M. Veloso, and John D. Lafferty. Conditional random fields for activity recognition. In Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems, AAMAS '07, pages 235:1–235:8, New York, NY, USA, 2007. ACM.

[44] Wikipedia. Haversine formula. http://en.wikipedia.org/wiki/Haversine_formula, 2013. [Online; accessed 19-July-2013].

[45] Chenshu Wu, Zheng Yang, Yunhao Liu, and Wei Xi. WILL: Wireless indoor localization without site survey. In INFOCOM, 2012 Proceedings IEEE, pages 64–72. IEEE, 2012.
