UBC Theses and Dissertations
Exorcising N² Stigmata in Sequential Monte Carlo — Klaas, Mike (2005)

Exorcising N² Stigmata in Sequential Monte Carlo

by Mike Klaas
B.Sc., Dalhousie University

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA
August 2005
© Mike Klaas, 2005

Abstract

Sequential Monte Carlo (SMC) has, since being "rediscovered" in the early 1990s, become one of the most important inference techniques in machine learning. It enjoys a prominent place in statistics, robotics, quantum physics, as well as control and other industrial applications. SMC methods represent probability densities as a discrete set of N Dirac masses called particles. This non-parametric representation provably converges to the true distribution of interest and is effective in high-dimensional applications.

Many sophisticated SMC algorithms require O(N²) computation, which is viewed as impractically expensive by researchers in the field. Hence, N² SMC algorithms possess what we call "N² stigmata"—they are simply "too slow." This thesis aims to exorcise these stigmata. We present a survey of areas where these algorithms occur and show that in every case their expense results from having to compute a sum-kernel or max-kernel operation. Both belong to a class of operations called N-body problems, which occur in physics, statistics, and machine learning. We show how the techniques developed in this field can be used to accelerate sum- and max-kernel (and consequently the SMC algorithms), reducing the cost to O(N log N)—in some cases O(N). Using these methods, N² Monte Carlo algorithms should be applicable in a much wider range of settings. Along the way, we introduce new SMC algorithms for marginal filtering and a novel algorithm for drastically accelerating max-kernel.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements

1 Preamble
  1.1 Notational remark

I  N² algorithms in Sequential Monte Carlo

2 SMC and Bayesian Inference
  2.1 Bayesian state estimation
    2.1.1 Prediction, filtering, and smoothing
    2.1.2 Bayesian filtering
    2.1.3 Bayesian smoothing
      2.1.3.1 Forward-backward smoothing
      2.1.3.2 Two-filter smoothing
    2.1.4 Nonanalytic example
  2.2 Sequential Monte Carlo
    2.2.1 Resampling
  2.3 Particle smoothing
    2.3.1 Forward-backward smoother
    2.3.2 Two-filter smoother
    2.3.3 Maximum a posteriori (MAP) smoother
  2.4 Experiments
    2.4.1 MAP smoothing
      2.4.1.1 Main example
      2.4.1.2 Real-world application
    2.4.2 Forward-backward smoothing

3 The Marginal Particle Filter
  3.1 Marginal Particle Filter
    3.1.1 Auxiliary Variable MPF
    3.1.2 Discussion
    3.1.3 Cases of equivalence
  3.2 Efficient Implementation
    3.2.1 Fast methods for AMPF
  3.3 Experiments
    3.3.1 Multi-modal non-linear time series
    3.3.2 Stochastic volatility
    3.3.3 Fast methods
  3.4 Summary

II  Acceleration methods

4 Fast methods
  4.1 The problem
    4.1.1 Sum-Kernel
    4.1.2 Max-Kernel
  4.2 Preliminaries
    4.2.1 Assumptions
    4.2.2 Spatial indices
      4.2.2.1 Kd-trees
      4.2.2.2 Anchors hierarchy and metric trees
    4.2.3 Dual-tree recursion
      4.2.3.1 Single-tree recursion
      4.2.3.2 From one tree to two
  4.3 Fast methods for sum-kernel
    4.3.1 Fast Gauss Transform
    4.3.2 Improved Fast Gauss Transform
    4.3.3 Dual-tree sum-kernel
  4.4 Fast methods for max-kernel
    4.4.1 The distance transform
      4.4.1.1 Extension to irregular grids in 1-D

5 Dual-tree max-kernel
  5.1 The algorithm
    5.1.1 Influence bounding
  5.2 Performance in N
  5.3 The effect of other parameters
    5.3.1 Distribution and dimensionality
    5.3.2 Kernel bandwidth
  5.4 Conclusion and an application
    5.4.1 Maximum a posteriori belief propagation
    5.4.2 Relaxation of kernel assumption
    5.4.3 Summary

6 A better(?) algorithm
  6.1 Bounding in dual-tree algorithms
    6.1.1 A bounds-tightening regime for max-kernel
    6.1.2 Solving the optimization
  6.2 The algorithm
    6.2.1 Discussion
  6.3 Results
    6.3.1 STDT results
    6.3.2 DTDT results
  6.4 Summary

7 Postamble
  7.1 N² Sequential Monte Carlo
  7.2 Fast methods

Bibliography

List of Tables

3.1 1-D time series; RMS error and weight variance
3.2 Fast methods applied to the marginal particle filter
4.1 Summary of sum-kernel methods

List of Figures

2.1 A Markov model
2.2 Example 2.1.4
2.3 Sequential importance sampling
2.4 Comparison of SIS, SIR, and SIS with occasional resampling
2.5 Particle filtering algorithm at time t
2.6 MAP smoothing algorithm
2.7 MAP smoothing results; state estimate
2.8 MAP smoothing results; baseline view
2.9 1-D time series results; efficiency
2.10 Beat-tracking results: time v. particle count
2.11 Forward-backward smoothing on synthetic data
3.1 The MPF algorithm at time t
3.2 The AMPF algorithm at time t
3.3 MPF mixture predictive density
3.4 Variance reduction with MPF and AMPF
3.5 1-D time series; state estimate
3.6 1-D time series; importance weight variance
3.7 1-D time series; unique particle count
3.8 Stochastic volatility model; state estimate
3.9 Stochastic volatility model; importance weight variance
3.10 Stochastic volatility; unique particle count
4.1 Inclusion and exclusion (single-tree)
4.2 Dual-tree recursion
4.3 Illustration of the fast Gauss transform
4.4 Dual-tree bounding for spherical nodes
4.5 Lower envelope computed with the distance transform
5.1 Max-kernel bounding
5.2 Pseudocode for dual-tree max-kernel algorithm, part 1
5.3 Pseudocode for dual-tree max-kernel algorithm, part 2
5.4 Dual-tree pruning
5.5 Plots for dual-tree max-product example
5.6 Synthetic data (N = 20000) with c = 20 clusters
5.7 Dual-tree max-kernel; time v. dimensionality
5.8 Dual-tree max-kernel; distance computations v. dimensionality
5.9 Time v. dimensionality; ratio to kd-tree = 1
5.10 Dual-tree max-kernel; distance computations v. bandwidth
6.1 An upper bound on maximum influence obtained by rotating the particles in X
6.2 Lower envelope using the distance transform
6.3 A lower bound on maximum influence obtained by rotating the particles in X
6.4 Bounding in a metric space
6.5 STDT, d = 3; time v. N
6.6 STDT, h = 1, d = 3; DCs/leaves v. N
6.7 STDT, h = 1, N = 10000; performance v. d
6.8 DTDT, h = 0.1/1, d = 3; cpu time v. N
6.9 DTDT, h = 0.1/1, d = 3; distance computations v. N
6.10 DTDT, h = 1, N = 10000; performance v. d
6.11 DTDT, h = 1, N = 10000; relative comparison
6.12 DTDT, pruned nodes v. tree depth

Acknowledgements

Working alone, this thesis would not have occurred. I must especially thank two people I worked closely with in the course of this research. First, my supervisor, Nando de Freitas, who has been an invaluable source of ideas, advice, and support throughout my degree. Second, Dustin Lang, with whom I met practically daily while working on fast methods. It is not an exaggeration to say that there are no ideas in Part II of this thesis untouched by his influence. Both truly deserve co-authorship credit for this work.

I would generally like to thank all my collaborators—a partial list of whom can be found on the preceding page. They have been wonderful to work with, and I hope they forgive the bits of our co-authored papers that have found their way into this thesis.

Finally, I am indebted to my two readers, Nando de Freitas and Daniel Huttenlocher, for their helpful comments and suggestions.

Chapter 1

  Ouvrez la tête. ("Open your mind.")
  — Gnossienne No. 3, Erik Satie

Preamble

This thesis is about two very different things, and is hence organized into two parts. The first part is about a certain class of algorithms that have O(N²) cost. These algorithms have many advantages over cheaper alternatives¹, but are largely ignored due to this cost, which stems from a core computation that is similar among all the algorithms. The second part of this thesis is concerned with a class of algorithms which can accelerate this core computation. In both parts, you will find contributions to the state of the art.

The first part is about Sequential Monte Carlo (SMC) [10, 40, 34]. Monte Carlo algorithms approximate probability distributions by a set of samples, called particles, and can be used to numerically estimate integrals which cannot be evaluated analytically.
In the domain of Bayesian filtering (or smoothing), where the task is to infer the distribution of a latent state given a sequence of observations, these integrals arise frequently, and as such SMC methods are a popular inference technique [2]. These are, in fact, the most widely-known application of SMC techniques; most of the examples we consider are those of particle filters and particle smoothers. In Part I, N will refer to the number of particles used by an SMC algorithm.

The use of large particle populations is desirable, as the quality of the Monte Carlo approximation increases with the number of particles used. Practitioners generally use as many particles as is computationally feasible, and thus the sensitivity of the cost of an algorithm to the size of the particle population is of paramount importance when using SMC. For this reason, techniques that exhibit superlinear (generally quadratic) cost in N have reputations of impracticality. These stigmata are the topic of this thesis. N² algorithms, though costly, can accomplish things impossible with linear-time algorithms (such as smoothing: improving the estimate of past states given future observations), or do similar things with greater precision and less variance.

¹ Assuming cheaper alternatives even exist.

The first part will introduce Sequential Monte Carlo in the context of Bayesian state estimation, and discuss areas in which N² algorithms arise in this setting. Further, we present in Chapter 3 the Marginal Particle Filter, which is a novel filtering algorithm that outperforms existing techniques in several respects, but exhibits O(N²) cost.

Throughout Part I, we show that all the N² algorithms presented owe their cost to the same core computation, of which there are two variations. One is the sum-kernel computation:

    f_j = \sum_{i=1}^{N} ω_i K(x_i, y_j),    for j = 1, ..., N        (1.1)

Here, source particles {x_i} with weights {ω_i} exert an influence on target particles {y_j} given by ω_i K(x_i, y_j). Sum-kernel aims to determine the sum of the influence exerted on each target particle by all source particles. The other is known as the max-kernel problem,

    f_j = \max_i ω_i K(x_i, y_j),    for j = 1, ..., N                (1.2)

where we wish to determine the source particle of maximum influence at each target particle.

Both of these operations cost O(N²) when implemented straightforwardly, which we will call the naive implementation. We will not directly discuss acceleration strategies for N² algorithms in Part I. Instead, we will show how the algorithms reduce to equations (1.1) and (1.2), and devote the entirety of Part II to methods for performing these two operations efficiently. We review existing acceleration techniques and in Chapter 5 present a novel algorithm for performing max-kernel in Monte Carlo state spaces, for which no previous algorithm exists.

Our goal is to demonstrate that N² methods in Sequential Monte Carlo are practical when implemented using these fast methods, and consequently undeserving of stigmata, which we hope to exorcise.

1.1 Notational remark

The conventional particle filtering notation uses x to denote latent state, y to denote observations, and w is used for importance weights. In fast methods, x and y are often used to denote source and target points (particles), and ω to denote source weights. These variables are not completely unrelated: a Monte Carlo particle x will sometimes correspond to x in fast methods, and an importance weight w will often correspond to a source particle weight ω, but not always. The symmetry breaks down further in the case of y, as a filtering observation y is completely unrelated to y in fast methods.
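As a concrete reference point, the naive O(N²) evaluation of equations (1.1) and (1.2) can be sketched as below. This is a minimal 1-D sketch assuming a Gaussian kernel K with bandwidth h; the function names are ours, not the thesis's.

```python
import numpy as np

def sum_kernel(x, w, y, h=1.0):
    """Naive O(N^2) sum-kernel: f_j = sum_i w_i K(x_i, y_j), Gaussian K."""
    # Pairwise squared distances; row j holds distances from y_j to all x_i.
    d2 = (y[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2.0 * h ** 2))
    return K @ w                       # one sum per target point

def max_kernel(x, w, y, h=1.0):
    """Naive O(N^2) max-kernel: f_j = max_i w_i K(x_i, y_j)."""
    d2 = (y[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2.0 * h ** 2))
    return (K * w).max(axis=1)         # maximum influence per target point
```

Both functions materialize the full N×N influence matrix, which is exactly the quadratic cost the fast methods of Part II avoid.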
Since we do not trust our ability to safely navigate the quagmire that would result from succumbing to the temptation to abuse notation in this case, we deviate slightly from conventional notation in both parts. We will use u to denote latent state, z for observations, and w for importance weights when discussing SMC, and use x, y, and ω when speaking of fast methods.

All mathematical notation in this thesis is standard, except for one eccentricity from Sequential Monte Carlo: when a sequence is indexed by a subscript variable, such as {v_t}, we will use v_{a:b} (with a < b) to denote the set {v_a, v_{a+1}, ..., v_{b-1}, v_b}.

Part I

N² algorithms in Sequential Monte Carlo

Chapter 2

  Munissez-vous de clairvoyance. ("Equip yourself with clairvoyance.")
  — Gnossienne No. 3, Erik Satie

SMC and Bayesian Inference

Bayesian state estimation is ubiquitous in the AI community, being one of the most popular techniques for performing inference in dynamic models. Examples of its use include tracking [42], diagnosis [7], and control [36]. An unobserved signal (latent states u_t ∈ U) is assumed to exist, and evolves according to a (typically Markovian) dynamic model. Additionally, Bayesian filtering assumes the existence of observations (z_t) which are conditionally independent given the process. An observation model specifies the generation of observations given a specified latent state. Of interest is estimating a distribution of the latent state up to time t given the observations up to that time: either the joint filtering distribution p(u_{1:t}|z_{1:t}) or the marginal filtering distribution p(u_t|z_{1:t}). If all observations up to time T are available, the state at time t < T can be estimated more accurately; determining p(u_t|z_{1:T}) is known as smoothing.

In some cases this model can be solved using exact inference, for instance using the Kalman or HMM filters [26].
Unfortunately, real-world models are rarely simple enough to be solved exactly, often containing non-linearity, non-Gaussianity, or hybrid combinations of discrete and continuous variables, all of which lead to intractable integrals. Since these integrals cannot be solved analytically, approximation techniques are required.

One of the most successful and popular approximation techniques is Sequential Monte Carlo (SMC), which is referred to as Particle Filtering (PF) in the Bayesian filtering domain [34, 10, 40, 11]. In its most basic form, particle filters work by starting with a sample from the posterior at time t-1, predicting the state at time t, then updating the importance weights based on the observation z_t. These samples form an approximation of the joint density p(u_{1:t}|z_{1:t}) at time t. Often, however, it is the filtering distribution p(u_t|z_{1:t}) that is desired. This is approximated by dropping samples of the states u_{1:t-1} at time t (often implicitly). Particle smoothing algorithms are more expensive, but can provide an SMC approximation to the smoothed density p(u_t|z_{1:T}).

In this chapter, we review Sequential Monte Carlo in the setting of Bayesian state estimation. Along the way, we will encounter several N² algorithms. In each case, we will show how to implement these algorithms efficiently by reducing them to a sum-kernel (equation (1.1)) or max-kernel (equation (1.2)) problem. We assume algorithms exist to solve these operations efficiently and with little (or no) error. Precisely which algorithms can be applied, and in which cases, will be dealt with in Part II.

2.1 Bayesian state estimation

The unobserved signal {u_t}, u_t ∈ U, is modelled as a Markov process of initial distribution p(u_1) and transition prior p(u_t|u_{t-1}). The observations {z_t}, z_t ∈ Z, are assumed to be conditionally independent given the process {u_t} and of marginal distribution p(z_t|u_t). Hence, the model is described by
Hence, the model is described by t  p(u |ut_i)  t > 1, and  t  p(zi|u )  t > 1.  t  We denote by u i  : t  = {ui,...,u } and z i = {zi, ...;z }, respectively, the signal and t  : t  t  the observations up to time t, and define p(ui|un) = p(ui) for notational convenience. Figure 2.1 depicts" this model graphically. The goal of Bayesian filtering is to estimate sequentially in time either the marginal filtering distribution p (ut\zi )  or the joint filtering distribution p ( u i | z i ) .  :t  :t  :i  Often the aim is estimate the expectation of a function over the filtering distribution, i.e., ^ ( / 0 = E p ( u | « ) [ / t ( u ) ] = f ft(n )p(u \z )du t  l!t  t  t  t  1:t  t  (2.1)  Chapter 2.  7  SMC and Bayesian Inference  o  o  o Figure 2.1: The well-nigh ubiquitous figure of a Hidden Markov Model (HMM). The clear nodes are random variables representing the latent state at time t, and the shaded nodes are observed variables. Note that the observations at time t are d-separated from the rest of the graph by the latent state at time t, and said latent state is solely dependent on the preceeding state. for some function of interest f : tC —> R ^t integrable with respect to p ( u | z i ) . n  t  t  : t  Examples of appropriate functions include the conditional mean, in which case ft  (u ) = U t  t  or the conditional covariance of u j where ft(u ) t  2.1.1  = uu  T  t  t  -  Ep  ( u t  |  S l : t )  [u ]Ej t  ( u t  |  a i ! t )  [u ] • t  Prediction, filtering, and smoothing  As mentioned in the previous section, Bayesian state estimation involves determining the distribution of the latent state at time t, u . There is specific nomenclature for t  the precedure of performing this estimation given different sets of observations z. The three facets of state estimation are:  1  Prediction Estimating state in the future of available observations: p ( u | z i f c )  k  Filtering Estimating state up to the time for which we have data: p(u \zi k)  k = t.  
t  t  1  :  :  < t.  If the intuition behind these names is not immediately obvious, it is because the nomenclature  has been borrowed from signal processing literature where it originated. Filtering is the process of removing noise from a corrupted signal, and smoothing reduces the "jumpiness" of the signal. These terms now exist without less confusing alternatives in the Bayesian state estimation literature.  Chapter 2. SMC and Bayesian Inference  8  Smoothing Estimating state for which future observations exist: p(ui|zi fc) :  k > t.  This list is in ascending order of accuracy; it is clear that an increased number of observations will improve our estimate of state. The list is also ordered by computational complexity. This is not particularly surprising as prediction is needed for filtering, and filtering is needed for smoothing.  2.1.2  Bayesian filtering  General Bayesian filtering is a two-step procedure. Given an estimate of p(ui_i|zi _i), :t  we can marginalize over u _i to obtain an formula for p(u |zi _i): t  t  :t  p(u |zi _i) = J p(u |u _i)p(ut_i|zi _i)du _i. t  :f  f  t  :f  t  (2.2)  This is known as the prediction step, and the density of equation (2.2) is hence the predictive density. We then update this density with the observation from time t, z , t  using Bayes' rule: R t l:tJ U  z  —— • ——. Jp(z |u )p(u |z _ )du t  t  t  1:t  1  (2.3)  t  It is only in rare cases that the integrals in equations (2.2) and (2.3) can be computed analytically. Approximation strategies have therefore been devised to solve 2  this model in more realistic settings. Such strategies include the Extended Kalman Filter, the Unscented Kalman Filter [25], and the Gaussian-mixture Filter, which are convered in-depth by Fearnhead [11].  2.1.3  Bayesian smoothing  Assume we have observations up to time T, and we wish to estimate the state at time i < T, i.e., determine p(ut|zi x). 
² Such as the case of a linear-Gaussian model, which can be solved exactly using the Kalman filter [26], and the case of solely discrete distributions, in which case the integrals are in fact sums and can be computed directly (albeit often only at prohibitive computational expense).

2.1.3.1 Forward-backward smoothing

The smoothed density p(u_t|z_{1:T}) can be factored as follows:

    p(u_t|z_{1:T}) = \int p(u_t, u_{t+1}|z_{1:T}) du_{t+1}
                   = \int p(u_{t+1}|z_{1:T}) p(u_t|u_{t+1}, z_{1:T}) du_{t+1}
                   = \int p(u_{t+1}|z_{1:T}) p(u_t|u_{t+1}, z_{1:t}) du_{t+1}
                   = p(u_t|z_{1:t}) \int p(u_{t+1}|u_t) p(u_{t+1}|z_{1:T}) / p(u_{t+1}|z_{1:t}) du_{t+1},        (2.4)

where the final line consists of the filtered density p(u_t|z_{1:t}), the dynamics p(u_{t+1}|u_t), the smoothed density p(u_{t+1}|z_{1:T}), and the prediction density p(u_{t+1}|z_{1:t}). Hence, we obtain a recursive formula for the smoothed density p(u_t|z_{1:T}) in terms of the filtering density at time t, the prediction density, and the smoothed density at time t+1. The algorithm proceeds by first making a forward filtering pass to compute the filtered distribution at each timestep, and then a backward smoothing pass to determine the smoothing distribution.

2.1.3.2 Two-filter smoothing

A smoothed marginal distribution can also be obtained by combining the results of two independent filter-like procedures: one running forward in time and another backward. We use the following factorization:

    p(u_t|z_{1:T}) = p(u_t|z_{1:t-1}, z_{t:T})
                   = p(u_t|z_{1:t-1}) p(z_{t:T}|z_{1:t-1}, u_t) / p(z_{t:T}|z_{1:t-1})
                   ∝ p(u_t|z_{1:t-1}) p(z_{t:T}|u_t)
                   ∝ p(u_t|z_{1:t}) p(z_{t+1:T}|u_t),        (2.5)

where the first factor is computed by filter one and the second by filter two. Filter one is our familiar Bayesian filter. Filter two can be computed sequentially using the Backward Information Filter [33], noting that:

    p(z_{t:T}|u_t) = \int p(z_t, z_{t+1:T}, u_{t+1}|u_t) du_{t+1}
                   = \int p(z_{t+1:T}|u_{t+1}) p(u_{t+1}|u_t) p(z_t|u_t) du_{t+1},        (2.6)

where p(z_{t+1:T}|u_{t+1}) is the quantity propagated by filter two at time t+1, p(u_{t+1}|u_t) is the dynamics, and p(z_t|u_t) is the likelihood.

Remark. As Briers notes [4], p(z_{t:T}|u_t) is not a proper probability density in u_t. An artificial prior to deal with this is presented in that reference.

2.1.4 Nonanalytic example

We motivate the Bayesian state estimation problem with an example, which is a standard example of a difficult filtering workload [15]. The dynamics are specified by:

    u_1 ~ N(0, σ_v)
    u_t = u_{t-1}/2 + 25 u_{t-1}/(1 + u_{t-1}²) + 8 cos(1.2 t) + v_t,    t > 1
    z_t = u_t²/20 + w_t,    t ≥ 1

where v_t ~ N(0, σ_v) and w_t ~ N(0, σ_w). The filtering distribution for σ_v = 1, σ_w = 10 is shown in Figure 2.2. Note that while the noise in the model is Gaussian, the model is non-linear and multimodal, due to the quadratic term in the observation model. It is thus impossible to derive analytic expressions for the posterior. We will return to this example several times in the course of this thesis.

2.2 Sequential Monte Carlo

The approximation techniques mentioned near the end of Section 2.1.2 all operate on the principle of replacing the model with a tractable (typically linear-Gaussian) approximation, and performing exact inference on this approximate model. In Sequential Monte Carlo, we do not use an approximation to the original model, but instead perform approximate inference on the exact model.

[Figure 2.2: Filtering distribution p(u_t|z_{1:t}) for the example model of Section 2.1.4, plotted against time t.]

If we had a set of samples (or particles) {u_t^{(i)}} from p(u_t|z_{1:t}), we could approximate the distribution with the Monte Carlo histogram estimator

    p(du_t|z_{1:t}) = (1/N) \sum_{i=1}^{N} δ_{u_t^{(i)}}(du_t),

where δ_{u_t^{(i)}}(du_t) denotes the Dirac delta measure.³ This can be used to approximate the expectations of interest in equation (2.1) with

    E_t(f_t) ≈ (1/N) \sum_{i=1}^{N} f_t(u_t^{(i)}).

This estimate converges almost surely to the true expectation as N goes to infinity.
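For concreteness, the benchmark model of Section 2.1.4 is straightforward to simulate. The sketch below draws one trajectory; we interpret σ_v and σ_w as standard deviations, and the seed is an arbitrary choice of ours.

```python
import numpy as np

def simulate(T=50, sigma_v=1.0, sigma_w=10.0, seed=0):
    """Draw one trajectory (u, z) from the model of Section 2.1.4."""
    rng = np.random.default_rng(seed)
    u = np.empty(T)
    u[0] = rng.normal(0.0, sigma_v)                 # u_1 ~ N(0, sigma_v)
    for t in range(1, T):
        u[t] = (0.5 * u[t - 1]
                + 25.0 * u[t - 1] / (1.0 + u[t - 1] ** 2)
                + 8.0 * np.cos(1.2 * (t + 1))       # python index t is time t+1
                + rng.normal(0.0, sigma_v))
    z = u ** 2 / 20.0 + rng.normal(0.0, sigma_w, size=T)   # observation model
    return u, z
```

The quadratic observation model means a given z_t is consistent with both +u_t and -u_t, which is the source of the multimodality noted above.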
Unfortunately, one cannot easily sample from the marginal distribution p(u_t|z_{1:t}) directly. Instead, we draw particles from p(u_{1:t}|z_{1:t}), and the samples of u_{1:t-1} are ignored. To draw samples from p(u_{1:t}|z_{1:t}), we sample from a proposal distribution q and weight the particles according to the following importance ratio:

    w_t = p(u_{1:t}|z_{1:t}) / q(u_{1:t}|z_{1:t}).

³ Defined as δ_{u^{(i)}}(du_t) = 1 if u^{(i)} ∈ du_t, and 0 otherwise.

The proposal distribution is constructed sequentially,

    q(u_{1:t}|z_{1:t}) = q(u_{1:t-1}|z_{1:t-1}) q(u_t|z_t, u_{1:t-1}),

and, hence, the importance weights can be updated recursively:

    w_t = [p(u_{1:t}|z_{1:t}) / (p(u_{1:t-1}|z_{1:t-1}) q(u_t|z_t, u_{1:t-1}))] w_{t-1}.        (2.7)

Given a set of N particles {u_{1:t-1}^{(i)}}, we obtain a set of particles {u_{1:t}^{(i)}} by sampling from q(u_t|u_{1:t-1}^{(i)}, z_{1:t}) and applying the weights of equation (2.7). An estimate of p(du_t|z_{1:t}) is then

    p(du_t|z_{1:t}) = \sum_{i=1}^{N} w_t^{(i)} δ_{u_t^{(i)}}(du_t).

The familiar particle filtering equations for this model are obtained by remarking that

    p(u_{1:t}|z_{1:t}) ∝ p(u_{1:t}, z_{1:t}) = \prod_{k=1}^{t} p(z_k|u_k) p(u_k|u_{k-1}),

given which, equation (2.7) becomes

    w_t^{(i)} ∝ [p(z_t|u_t^{(i)}) p(u_t^{(i)}|u_{t-1}^{(i)}) / q(u_t^{(i)}|z_t, u_{t-1}^{(i)})] w_{t-1}^{(i)}.        (2.8)

This iterative scheme produces a weighted measure {u_{1:t}^{(i)}, w_t^{(i)}} and is known as Sequential Importance Sampling (SIS) [41]. Note that SIS performs importance sampling in the joint space, as samples from p(u_{1:t-1}|z_{1:t-1}) are augmented to produce samples from p(u_{1:t}|z_{1:t}); it is intuitive to think of particles as paths through the state space which grow at each iteration. Figure 2.3 illustrates this point.
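In the common "bootstrap" special case, where the proposal is the transition prior p(u_t|u_{t-1}), the prior terms in equation (2.8) cancel and the update reduces to reweighting by the likelihood. A hedged sketch of one such step follows (log-space for numerical stability; the helper names are ours, not the thesis's):

```python
import numpy as np

def sis_step(particles, logw, sample_transition, loglik, rng):
    """One bootstrap SIS step: propagate particles through the transition
    prior, then update log-weights by the log-likelihood (equation (2.8))."""
    particles = sample_transition(particles, rng)   # sample from p(u_t | u_{t-1})
    logw = logw + loglik(particles)                 # multiply weights by p(z_t | u_t)
    logw = logw - np.logaddexp.reduce(logw)         # renormalize so weights sum to 1
    return particles, logw
```

Working with log-weights avoids the underflow that plain weights suffer once degeneracy (discussed next) sets in.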
2.2.1 Resampling

Unfortunately, the SIS algorithm often leads to unbounded variance in the importance weights (equation (2.8)), which leads to nonconvergence. This phenomenon is known as degeneracy of the importance weights. Figure 2.4(a) demonstrates a situation in which degeneracy is fatal to the SIS algorithm.

[Figure 2.3: Sequential importance sampling; a particle at time t can be thought of as a path in U^t.]

To achieve an estimate of minimum variance, we would like a set of samples from the exact posterior distribution, in which case the importance weights are all equal and thus have zero variance. Generally, the further our samples are from this target distribution, the greater the resulting importance weight variance. This quantity is hence used as one measure of quality for particle filter algorithms. The importance weights are also used to calculate a measure called the effective sample size, which is an estimate of the number of non-degenerate particles in the current population. It is defined as follows:

    N_eff = 1 / \sum_{i=1}^{N} (w_t^{(i)})².        (2.9)

To overcome importance weight degeneracy, an effective solution is to resample the particles according to the discrete measure implied by the importance weights. This results in particles with high weight being multiplied, while particles with low weight are discarded, ensuring that the population remains viable. If this resampling step is performed at every timestep, the Sequential Importance Sampling/Resampling (SIR) algorithm is obtained. Resampling should not be performed if it is not necessary, however, as it adds variance to the resulting state estimate. Hence, it is often advocated that resampling only be performed when the effective sample size (equation (2.9)) drops below a certain threshold, say N/3.

The manner in which resampling is performed affects the variance of the result.
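The effective sample size of equation (2.9) and two of the resampling schemes discussed below can be sketched as follows (a minimal version over normalized weights; multinomial is the scheme with a convergence proof, and systematic is a common lower-variance alternative):

```python
import numpy as np

def effective_sample_size(w):
    """Effective sample size of normalized weights w, equation (2.9)."""
    return 1.0 / np.sum(w ** 2)

def multinomial_resample(w, rng):
    """Draw N ancestor indices i.i.d. from the weighted discrete measure."""
    N = len(w)
    return rng.choice(N, size=N, p=w)

def systematic_resample(w, rng):
    """Systematic resampling: one uniform draw, N evenly spaced pointers
    swept across the cumulative weights."""
    N = len(w)
    positions = (rng.random() + np.arange(N)) / N
    return np.searchsorted(np.cumsum(w), positions)
```

Both functions return ancestor indices; the new unweighted population is `particles[indices]` with all weights reset to 1/N.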
These include multinomial (convential) resampling, as well as deterministic, systematic, and residual resampling [28]. Convergence of the SIR algorithm has only been  Chapter 2. SMC and Bayesian Inference  State estimate  Importance weight variance  20  0.04  15  0.035  10  0.03  5  0.025  0  0.02  -5 -10 -15 -20.  SIS SIR SIS + thres. resamp.  0.015 •»••• -  SIS SIR SIS + thres. resamp. True state Observations 10  ii  0.01 0.005  40  20 30 Timestep  0.  50  (a) State estimate  10  600  20 30 Timestep  40  50  (b) Importance weight variance  Effective sample s i z e 700  14  Unique particles (max = 700) 500  SIS SIR S I S + thres. resamp.  — SIS •- SIR - S I S + thres. resamp.  400  500 400  300  300  I 200  it  ii  200 100  100 0. 0  t  10  i  > J  I  11 i i  1 1  » I ' .  •1 )»  IP's  .'  A  1  »  '  1  •  M •»i  20 30 Timestep  (c) Effective sample size  40  50  10  20 30 Timestep  40  50  (d) Unique particle count  Figure 2.4: Comparison of SIS, SIR, and SIS with thresholded resampling on the example in Section 2.1.4. The SIS importance weight variance grows until a single particle has all the mass (Figure (6)), and when machine precision is exhausted, the particle filter is no longer updated (Figure (a)). This is also why the importance weight variance goes to zero. This can also be seen in Figure (c), as the sample size quickly drops to zero. Resampling allows the particle filter to survive the effects of high-dimensional sampling.  15  Chapter 2. SMC and Bayesian Inference  theoretically proven in the case of multinomial resampling, but the other methods appear to be as stable, and can reduce the variance caused by resampling considerably. Figure 2.5 contains pseudo-code for the SIS and SIR algorithms.  Sequential importance sampling step •  F o r i = 1,N,  s a m p l e f r o m t h e proposal  •  For i = 1,N,  evaluate t h e i m p o r t a n c e , weights  Wl  •  =  —.  '—.  
—Vll  i-r  Normalise t h e i m p o r t a n c e weights ~(i)  Selection step If N n < C , resample t h e discrete weighted measure | u j e  r  measure yu[ \ l  jjj  l )  , w^j  t o o b t a i n an unweighted  o f A f new particles.  Figure 2.5: Particle filtering algorithm at time t. When C = N, this is the SIR algor  rithm (always resample); when C = —oo, this is the SIS algorithm (never resample); r  when 0 < C < N, this is SIS with thresholded resampling (resample when needed). r  2.3  Particle smoothing  SMC also affords solutions to the Bayesian smoothing problem. Generally these algorithms are only slightly more complicated than their filtering brethren, but can provide substantially more accurate estimates.  Regrettably, these algorithms tend  to be under-used, as they all involve computing an interaction between an individual particle estimate and a probability density, which is typically represented by a population of N particles. Since this is done for each particle estimate, the cost is 0(N )—substantially higher than filtering. In this section, we present three Af SMC 2  2  Chapter 2. SMC and Bayesian Inference  16  smoothing algorithms and show how their cost can be substantially reduced. 2.3.1  F o r w a r d - b a c k w a r d smoother  We can derive an SMC smoothing algorithm by using the following Monte Carlo approximation to equation (2.4):  p(u \z ) t  1:T  (»)\  JV  N  oc ^2 w  E  (i)  i=l  (2.10)  <S i)(u ).  t+l\T  1i(  t  u! )] fc)  We maintain the original particle locations—thus depend on the filtered distribution having support where the smoothed density is significant—but reweight the particles to obtain an approximation to the smoothed density. The weight for particle  is  then (i)  (i)  A  ™\\T =  t  W  P(USI t  N  E  )  u  (j) t+l\T  Z^fc=l  3=1  t  w  P \ t+l u  (2.11)  «?>)]  The smoothing algorithm is straightforward: first, perform filtering to obtain a weighted f (i) (i) 1 N  measure ju£ , wl' j  for each t. 
Next, we recurse from t = T−1, ..., 1 using equation (2.11) to calculate w_{t|T}^{(i)} (we define w_{T|T}^{(i)} = w_T^{(i)}). The set {u_t^{(i)}, w_{t|T}^{(i)}} is now a weighted measure approximation of p(u_t | z_{1:T}) for all t:

p(u_t | z_{1:T}) \approx \sum_{i=1}^{N} w_{t|T}^{(i)} \, \delta_{u_t^{(i)}}(u_t).

Equation (2.11) costs O(N^3) operations to evaluate directly, but can easily be performed in O(N^2) by observing that the denominator of the fraction in equation (2.11) is independent of i, hence can be computed once for each j. O(N^2) is still too expensive, however, so we demonstrate how to evaluate this equation using sum-kernel fast methods. Recall the sum-kernel expression:

for j = 1, ..., N:   f_j = \sum_{i=1}^{N} \omega_i K(x_i, y_j).    (1.1)

We assign

{x_i} ← {u_t^{(i)}},  {\omega_i} ← {w_t^{(i)}},  {y_j} ← {u_{t+1}^{(j)}},  K(x_i, y_j) ← p(u_{t+1}^{(j)} | u_t^{(i)}),

perform sum-kernel using these parameters, and assign the results to {\alpha_t^{(j)}}. Next, we assign

{x_j} ← {u_{t+1}^{(j)}},  {\omega_j} ← {w_{t+1|T}^{(j)} / \alpha_t^{(j)}},  {y_i} ← {u_t^{(i)}},  K(x_j, y_i) ← p(u_{t+1}^{(j)} | u_t^{(i)}),

perform sum-kernel, and assign the results to {\beta_t^{(i)}}. It is now clear that

w_{t|T}^{(i)} = w_t^{(i)} \beta_t^{(i)}.

In the first sum-kernel operation, the u_{t+1}^{(j)} particles are the target particles, while in the second call, they are the source particles. Using these two sum-kernel calls, we can reduce the computational complexity of equation (2.11) (and hence the entire algorithm) from O(N^2) to O(N log N) or lower.

Remark. It should be pointed out that the mapping we have performed cannot be done for arbitrary densities p(·). The kernel K(·) must obey certain properties to be able to apply the fast methods. Different methods are suitable for different kernels; the Fast Gauss Transform, for instance, requires that K(·) be a Gaussian kernel. Some methods additionally impose restrictions on the distribution of the source and target points. We will ignore these details in this part; they will be covered in depth in Part II.
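As a concrete illustration of the two sum-kernel calls above, the reweighting recursion (2.11) can be prototyped directly. The sketch below uses a naive O(N^2) Gaussian transition kernel in place of a fast method (a fast method would replace the two matrix-vector products); the function and parameter names are ours, not the thesis's.

```python
import numpy as np

def gauss_trans(u_next, u_prev, sigma=1.0):
    # Pairwise transition density p(u_{t+1}^{(j)} | u_t^{(i)}): shape (N_next, N_prev)
    d = u_next[:, None] - u_prev[None, :]
    return np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def smooth_weights(u, w):
    """Forward-backward reweighting (equation 2.11) for 1-D particles.

    u, w: lists of length T holding, for each t, the particle locations and
    normalized filtering weights (arrays of shape (N,))."""
    T = len(u)
    w_s = [None] * T
    w_s[T - 1] = w[T - 1].copy()           # w_{T|T} = w_T
    for t in range(T - 2, -1, -1):
        K = gauss_trans(u[t + 1], u[t])    # K[j, i] = p(u_{t+1}^{(j)} | u_t^{(i)})
        alpha = K @ w[t]                   # first sum-kernel call: denominators
        beta = K.T @ (w_s[t + 1] / alpha)  # second call: t+1 particles as sources
        w_s[t] = w[t] * beta
        w_s[t] /= w_s[t].sum()
    return w_s
```

Each backward step costs two N×N kernel evaluations; swapping `K @ v` for an FGT or dual-tree call is what brings the cost down to O(N log N).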
2.3.2 Two-filter smoother

The computationally-intensive component of implementing the two-filter smoother is calculating the Monte Carlo approximation to equation (2.6). Let {u_t^{(i)}, w_t^{(i)}} be the weighted measure approximation produced by filter one (the conventional forward filter), and {\tilde{u}_t^{(j)}, \tilde{w}_t^{(j)}} be the measure obtained by running filter two (the backward information filter) at time t. (Here, we assume that filter two produces an approximation \tilde{p}(u_t | z_{t:T}) using an artificial prior \gamma_t, following [3]. This avoids the dangerous implicit assumption that \int p(z_{t:T} | u_t) \, du_t < \infty, as is common in previous literature [24].) The Monte Carlo approximation to the smoothing density is [3]:

p(u_t | z_{1:T}) \propto \sum_{j=1}^{N} \tilde{w}_t^{(j)} \left[ \sum_{i=1}^{N} w_{t-1}^{(i)} \, p(\tilde{u}_t^{(j)} | u_{t-1}^{(i)}) \right] \delta_{\tilde{u}_t^{(j)}}(u_t).    (2.12)

Applying sum-kernel algorithms to compute equation (2.12) proceeds similarly to the case of forward-backward smoothing.

2.3.3 Maximum a posteriori (MAP) smoother

In practical applications, it is often not necessary to estimate the entire distribution p(u_{1:T} | z_{1:T}). Instead, the goal is to determine the sequence for which this distribution is maximized; this is the maximum a posteriori sequence

u_{1:T}^{MAP} = \arg\max_{u_{1:T}} p(u_{1:T} | z_{1:T}).    (2.13)

For continuous state spaces, equation (2.13) almost never admits an analytic form. In [15], Godsill et al. develop an SMC approximation to (2.13). First, standard particle filtering is performed. At time t, the set of particles {u_t^{(i)}} can be viewed as a discretization of the state space. Since particles are more likely to survive in regions of high probability, this discretization will be sensible. (No claim is made about the optimality of this discretization, but it will be refined in areas where u_t^{MAP} is likely to lie and coarse in areas where the solution is unlikely. Thus it is hoped that the error induced by the discretization is minimal.) We can approximate (2.13) by finding the sequence of maximum probability on this grid, namely:

\hat{u}_{1:T}^{MAP} = \arg\max_{u_{1:T} \in \prod_{t=1}^{T} \{u_t^{(i)}\}_{i=1}^{N}} p(u_{1:T} | z_{1:T}).    (2.14)

This approximation still requires exponential computation if evaluated naively. However, it can be solved efficiently using dynamic programming. We will present the algorithm in significant detail, as it enjoys a prominent place in the remainder of this thesis.

Due to the Markov property of the model (Section 2.1), the density in equation (2.13) factors as

p(u_{1:T} | z_{1:T}) \propto \prod_{t=1}^{T} p(z_t | u_t) \, p(u_t | u_{t-1}),

hence equation (2.13) is additive in log space:

u_{1:T}^{MAP} = \arg\max_{u_{1:T}} \sum_{t=1}^{T} \left[ \log p(z_t | u_t) + \log p(u_t | u_{t-1}) \right].

Thus the Viterbi algorithm [43] can be used to compute (2.14) efficiently. Figure 2.6 shows pseudocode for the algorithm. Step (2) in Figure 2.6 is the cause of the algorithm's O(N^2 T) cost. This computation can be implemented in O(T N log N) time by using a max-kernel fast method described in Chapter 5. We can re-write the maximization as

\delta_k(j) = \log p(z_k | u_k^{(j)}) + \max_i \left[ \delta_{k-1}(i) + \log p(u_k^{(j)} | u_{k-1}^{(i)}) \right]

e^{\delta_k(j)} = p(z_k | u_k^{(j)}) \, \max_i \left[ e^{\delta_{k-1}(i)} \, p(u_k^{(j)} | u_{k-1}^{(i)}) \right],

and thus use max-kernel (equation (1.2)) by setting

{x_i} ← {u_{k-1}^{(i)}},  {\omega_i} ← {e^{\delta_{k-1}(i)}},  {y_j} ← {u_k^{(j)}},  K(x_i, y_j) ← p(u_k^{(j)} | u_{k-1}^{(i)}),

and assigning the results (which are, in the max-kernel case, a vector of maximum influences for each particle and a vector of indices of the source of maximum influence) to {f_k^*(j), i_k^*(j)}. It is now clear that, for all j,

\delta_k(j) = \log p(z_k | u_k^{(j)}) + \log f_k^*(j)   and   \psi_k(j) = i_k^*(j).

0. Filtering. For t = 1, ..., T: perform particle filtering to obtain the weighted measure {u_t^{(i)}, w_t^{(i)}}.

1. Initialization. For i = 1, ..., N:
   \delta_1(i) = \log p(u_1^{(i)}) + \log p(z_1 | u_1^{(i)})

2. Recursion. For k = 2, ..., T and j = 1, ..., N:
   \delta_k(j) = \log p(z_k | u_k^{(j)}) + \max_i \left[ \delta_{k-1}(i) + \log p(u_k^{(j)} | u_{k-1}^{(i)}) \right]
   \psi_k(j) = \arg\max_i \left[ \delta_{k-1}(i) + \log p(u_k^{(j)} | u_{k-1}^{(i)}) \right]

3. Termination.
   i_T = \arg\max_i \delta_T(i);   \hat{u}_T^{MAP} = u_T^{(i_T)}

4. Backtracking. For k = T−1, ..., 1:
   i_k = \psi_{k+1}(i_{k+1});   \hat{u}_k^{MAP} = u_k^{(i_k)}

5. Aggregation.
\hat{u}_{1:T}^{MAP} = (\hat{u}_1^{MAP}, \hat{u}_2^{MAP}, ..., \hat{u}_T^{MAP})

Figure 2.6: Maximum a posteriori particle smoothing algorithm. The algorithm produces an approximation to u_{1:T}^{MAP} by employing the Viterbi algorithm on the discretized state space induced by the particle filter approximation {u_t^{(i)}, w_t^{(i)}} at each time t. Note that the importance weights are ignored; because of this it is useless to use resampled (unweighted) particles, as this will result in a strictly coarser discretization. The computational expense of the algorithm is O(N^2 T) due to step (2).

2.4 Experiments

2.4.1 MAP smoothing

2.4.1.1 Main example

We turn to the main example of Section 2.1.4. Figures 2.7 and 2.8 show the gain achieved by performing MAP smoothing in this setting. RMSE was significantly reduced relative to the filtered estimate. The particle filter estimate is also more prone to be led astray by outliers in the observations, since access to future observations provides evidence that a previous data point was an outlier.

Figure 2.7: Particle filter and MAP estimates of the latent state in the 1-D time series experiment (Section 2.1.4). Mean error for the particle filter was 4.05, while the MAP solution achieved a mean error of 1.74.

Figure 2.9 shows efficiency results for implementing the MAP smoother on the main example of Section 2.1.4. Since the state space is one-dimensional, both the distance transform and dual-tree algorithms can be used. Both provide orders of magnitude improvement in speed. We give the results in terms of the number of distance computations required, because in some applications (particularly in vision) this aspect computationally dominates the cost of the procedure.

Figure 2.8: The results of the experiment of Figure 2.7 (noisy observations, true state, PF estimate, and MAP estimate), transformed so that the true state lies on the x axis. This shows the stark contrast between the two algorithms.

2.4.1.2 Real-world application

Beat-tracking is the process of determining the time slices in a raw song file that correspond to musical beats. This is a challenging problem: both the tempo and phase of the beats must be estimated throughout the song. This is a typical example of a case where the single best result is desired rather than a distribution over states; the objective is the sequence of maximum probability. MAP smoothing after particle filtering has achieved impressive results in this setting [30]. We omit the details of the probability model for the sake of brevity; a full explanation is found in [30]. Since the state space of this model is a three-dimensional Monte Carlo grid, the distance transform cannot be used. Figure 2.10 summarizes the results: songs can be processed in seconds rather than hours with this method. Using the fast method also enables more particles to be used, which results in a better solution: the probability of the MAP sequence with 50000 particles was p = 0.87, while using 1000 particles resulted in a MAP sequence of probability p = 0.53.

Figure 2.9: 1-D time series results, displayed in log-log scale. The dual-tree algorithm became more efficient than naive computation after approximately 70ms of compute time. Both dual-tree and distance transform methods show similar asymptotic growth, although the constants in the distance transform are approximately three times smaller.

Figure 2.10: Beat-tracking results: time vs. particle count. The dual-tree method becomes more efficient at t = 10ms, and thereafter dominates the naive method. Note the log-log scale.

Figure 2.11: Particle-smoothing results on synthetic data, shown on a log-log scale for clarity. For the same computation cost, the FGT achieves an RMSE two orders of magnitude lower. We are able to smooth with as many as 4,000,000 particles with this method!

2.4.2 Forward-backward smoothing

Forward-backward particle smoothing was tested on a three-dimensional linear-Gaussian state-space model populated with synthetic data. By keeping the model sufficiently simple, the particle smoother can be compared to the analytic solution (in this case, a Kalman smoother). Since this is a sum-kernel problem, the acceleration method necessarily introduces error. Thus, we report both the cpu time for the naive and fast method (in this case, the Fast Gauss Transform (FGT)) and the error of both algorithms after a given amount of cpu time. Figure 2.11 houses the results. The data verify that using more particles, smoothed approximately using the FGT, is better in terms of RMSE than using fewer particles smoothed exactly (and lethargically).

Chapter 3

The Marginal Particle Filter

Plus intimement. Gnossiene No. 2, ERIC SATIE

We have seen how Sequential Monte Carlo techniques are useful for state estimation in non-linear, non-Gaussian dynamic models. These methods allow us to approximate the joint posterior distribution using sequential importance sampling. In this framework, the dimension of the target distribution grows with each time step, and thus it is necessary to introduce resampling steps to ensure that the estimates provided by the algorithm have a reasonable variance. In many applications, we are only interested in the marginal filtering distribution, which is defined on a space of fixed dimension.
In this chapter, we develop a novel particle filtering algorithm which performs importance sampling directly in the marginal space of p(u_t | z_{1:t}), hence avoiding the need to perform importance sampling in a space of growing dimension. Using this idea, we also derive an improved version of the auxiliary particle filter (ASIR). We show using synthetic and real-world experiments that these algorithms improve significantly over Sequential Importance Sampling/Resampling (SIR) and Auxiliary Particle Filtering (ASIR) in terms of importance weight variance, and provide theoretical results that confirm this reduction.

The marginal particle filter is another example of an N^2 SMC algorithm. As usual, fast methods can be applied to make this technique substantially more efficient.

3.1 Marginal Particle Filter

SIS estimates p(u_{1:t} | z_{1:t}) by taking an estimate of p(u_{1:t-1} | z_{1:t-1}) and augmenting it with a new sample u_t at time t. Each particle at time t is a draw over the joint space p(u_{1:t} | z_{1:t}), sampled sequentially, and thus can be thought of as a path through the state space at times 1 ... t (Figure 2.3). At each time step, the dimension of the sampled paths is increased by the dimension of the state space at time t, quickly resulting in a very high dimensional space. The sequential nature of the algorithm means that the variance is high, leading to most paths having vanishingly small probability. This problem is known as degeneracy of the weights, and usually leads to weights whose variance tends to increase without bound.

Remark. It is not trivial to observe that all SIS-like algorithms operate in the joint space. Figure 2.5 contains the pseudocode for SIS; note that this is the algorithm in use by practitioners. Bereft of its derivation, it is not immediately apparent that the algorithm performs importance sampling in p(u_{1:t} | z_{1:t}).

One strategy employed to combat degeneracy is to use a resampling step after updating the particle weights, to multiply particles (paths) with high probability and prune particles with negligible weight (the SIR algorithm; Section 2.2.1).

The Marginal Particle Filter (MPF) uses a somewhat more principled approach. We perform particle filtering directly on the marginal distribution p(u_t | z_{1:t}) instead of on the joint space. The predictive density is obtained by marginalizing:

p(u_t | z_{1:t-1}) = \int p(u_t | u_{t-1}) \, p(u_{t-1} | z_{1:t-1}) \, du_{t-1},    (3.1)

hence the filtering update becomes

p(u_t | z_{1:t}) \propto p(z_t | u_t) \, p(u_t | z_{1:t-1}) = p(z_t | u_t) \int p(u_t | u_{t-1}) \, p(u_{t-1} | z_{1:t-1}) \, du_{t-1}.

The integral in equation (3.1) is generally not solvable analytically, but since we have a particle approximation of p(u_{t-1} | z_{1:t-1}), namely {u_{t-1}^{(j)}, w_{t-1}^{(j)}}, we can approximate (3.1) as the weighted kernel estimate \sum_{j=1}^{N} w_{t-1}^{(j)} \, p(u_t | u_{t-1}^{(j)}).

While we are free to choose any proposal distribution that has appropriate support, it is convenient to assume that the proposal takes a similar form, namely

q(u_t | z_{1:t}) = \sum_{j=1}^{N} w_{t-1}^{(j)} \, q(u_t | z_t, u_{t-1}^{(j)}).    (3.2)

The importance weights are now on the marginal space:

w_t \propto \frac{p(u_t | z_{1:t})}{q(u_t | z_{1:t})}.

Pseudo-code for the algorithm is given in Figure 3.1.

Marginal Particle Filter (MPF)

• For i = 1, ..., N, sample from the proposal:
  u_t^{(i)} ~ \sum_{j=1}^{N} w_{t-1}^{(j)} \, q(u_t | z_t, u_{t-1}^{(j)})

• For i = 1, ..., N, evaluate the importance weights:
  w_t^{(i)} = \frac{p(z_t | u_t^{(i)}) \sum_{j=1}^{N} w_{t-1}^{(j)} \, p(u_t^{(i)} | u_{t-1}^{(j)})}{\sum_{j=1}^{N} w_{t-1}^{(j)} \, q(u_t^{(i)} | z_t, u_{t-1}^{(j)})}

• Normalise the importance weights:
  \tilde{w}_t^{(i)} = w_t^{(i)} / \sum_{j=1}^{N} w_t^{(j)}

Figure 3.1: The MPF algorithm at time t.

3.1.1 Auxiliary Variable MPF

The auxiliary particle filter (ASIR), introduced by Pitt and Shephard [39, 1], is designed to improve the performance of sequential Monte Carlo in models with peaked likelihoods (which are another source of importance weight variance).
In this section, we derive an algorithm that combines both approaches. When the likelihood is narrow, it is desirable to choose a proposal distribution that samples particles which will be in high-probability regions of the observation model. The auxiliary particle filter works by re-weighting the particles at time t−1 to boost them in these regions.

We are interested in sampling from the following target distribution:

p(u_t | z_{1:t}) \propto p(z_t | u_t) \sum_{j=1}^{N} w_{t-1}^{(j)} \, p(u_t | u_{t-1}^{(j)}).    (3.3)

The auxiliary PF uses the following joint distribution:

p(k, u_t | z_{1:t}) \propto w_{t-1}^{(k)} \, p(z_t | u_{t-1}^{(k)}) \, p(u_t | u_{t-1}^{(k)}, z_t).

k is known as an auxiliary variable and is an index into the mixture of equation (3.3). Thus,

p(k | z_{1:t}) \propto w_{t-1}^{(k)} \, p(z_t | u_{t-1}^{(k)}) = w_{t-1}^{(k)} \int p(z_t | u_t) \, p(u_t | u_{t-1}^{(k)}) \, du_t.    (3.4)

Since the exact evaluation of (3.4) is usually impossible, we approximate it via what is known as a simulation step. For each index k at time t−1, we choose a value \mu_t^{(k)} associated with the distribution p(u_t | u_{t-1}^{(k)}) in some deterministic fashion (\mu_t^{(k)} could be the expected value, for instance). We define the simulation weight for index k to be

\lambda^{(k)} \propto w_{t-1}^{(k)} \, p(z_t | \mu_t^{(k)}).

(To prevent this step from introducing bias into the estimate, the simulation weights must be chosen independently from u_t^{(i)}.) Using these weights, the auxiliary particle filter defines the following joint proposal distribution:

q(k, u_t | z_{1:t}) = q(k | z_{1:t}) \, q(u_t | z_{1:t}, k),

where q(k | z_{1:t}) = \lambda^{(k)} and q(u_t | z_{1:t}, k) = q(u_t | z_t, u_{t-1}^{(k)}).

The importance weight is given by

w(k, u_t) = \frac{p(k, u_t | z_{1:t})}{q(k, u_t | z_{1:t})} \propto \frac{w_{t-1}^{(k)} \, p(z_t | u_t) \, p(u_t | u_{t-1}^{(k)})}{\lambda^{(k)} \, q(u_t | z_t, u_{t-1}^{(k)})}.    (3.5)

In the marginal particle filter, we use the same importance distribution, but instead of performing importance sampling between p(k, u_t | z_{1:t}) and q(k, u_t | z_{1:t}), we directly perform importance sampling between p(u_t | z_{1:t}) and q(u_t | z_{1:t}) to compute the weights

w(u_t) = \frac{p(u_t | z_{1:t})}{q(u_t | z_{1:t})} \propto \frac{\sum_{j=1}^{N} w_{t-1}^{(j)} \, p(z_t | u_t) \, p(u_t | u_{t-1}^{(j)})}{\sum_{j=1}^{N} \lambda^{(j)} \, q(u_t | u_{t-1}^{(j)}, z_t)}.    (3.6)

This leads to the auxiliary marginal particle filter (AMPF), which is described in Figure 3.2. We expect that performing importance sampling directly between the distributions of interest will lead to a reduction in variance. It is not hard to show that it can be no worse.

Proposition 3.1.1. The variance of the AMPF importance sampling weights w(u_t) is less than or equal to that of ASIR's importance weights w(k, u_t).

Proof. By the variance decomposition lemma, we have

var[w(k, u_t)] = var[E(w(k, u_t) | u_t)] + E[var(w(k, u_t) | u_t)]
             = var[w(u_t)] + E[var(w(k, u_t) | u_t)].

Hence, as E[var(w(k, u_t) | u_t)] ≥ 0, it follows that var[w(u_t)] ≤ var[w(k, u_t)].  □

Auxiliary Marginal Particle Filter (AMPF)

• For i = 1, ..., N, choose simulation values \mu_t^{(i)} ← p(u_t | u_{t-1}^{(i)}) and calculate mixture weights

  \lambda^{(i)} = \frac{w_{t-1}^{(i)} \, p(z_t | \mu_t^{(i)})}{\sum_{j=1}^{N} w_{t-1}^{(j)} \, p(z_t | \mu_t^{(j)})}

• For i = 1, ..., N, sample from the proposal:
  u_t^{(i)} ~ \sum_{j=1}^{N} \lambda^{(j)} \, q(u_t | z_t, u_{t-1}^{(j)})

• For i = 1, ..., N, evaluate the importance weights:
  w_t^{(i)} = \frac{p(z_t | u_t^{(i)}) \sum_{j=1}^{N} w_{t-1}^{(j)} \, p(u_t^{(i)} | u_{t-1}^{(j)})}{\sum_{j=1}^{N} \lambda^{(j)} \, q(u_t^{(i)} | z_t, u_{t-1}^{(j)})}

• Normalise the importance weights:
  \tilde{w}_t^{(i)} = w_t^{(i)} / \sum_{j=1}^{N} w_t^{(j)}

Figure 3.2: The AMPF algorithm at time t. The ← symbol denotes the deterministic selection of a likely value from the distribution, such as the mean or a mode of the density.
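To make the updates in Figures 3.1 and 3.2 concrete, here is a minimal sketch of one MPF and one AMPF step for a 1-D model. The Gaussian transition/likelihood choices, the heavier-tailed proposal, and all names are illustrative assumptions of ours, not part of the thesis; the N×N kernel sums are written naively and are exactly what the fast methods of Part II accelerate.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss(d, s):
    # N(d; 0, s^2) density, evaluated elementwise
    return np.exp(-0.5 * (d / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def mpf_step(u_prev, w_prev, z, sig_f=1.0, sig_q=1.5, sig_z=0.5):
    """One MPF step (Figure 3.1): Gaussian transition N(u_{t-1}, sig_f^2),
    likelihood N(z; u_t, sig_z^2), heavier-tailed transition mixture as
    proposal (sig_q > sig_f)."""
    N = len(u_prev)
    j = rng.choice(N, size=N, p=w_prev)          # pick a mixture component...
    u = u_prev[j] + sig_q * rng.normal(size=N)   # ...then sample its kernel
    D = u[:, None] - u_prev[None, :]             # D[i, j] = u^(i) - u_{t-1}^(j)
    pred = gauss(D, sig_f) @ w_prev              # sum_j w_j p(u^(i) | u_{t-1}^(j))
    prop = gauss(D, sig_q) @ w_prev              # sum_j w_j q(u^(i) | z, u_{t-1}^(j))
    w = gauss(z - u, sig_z) * pred / prop
    return u, w / w.sum()

def ampf_step(u_prev, w_prev, z, sig_f=1.0, sig_z=0.5):
    """One AMPF step (Figure 3.2), taking the transition mean as the
    simulation value mu and the transition prior as q(u_t | z_t, u_{t-1})."""
    N = len(u_prev)
    lam = w_prev * gauss(z - u_prev, sig_z)      # lambda ~ w * p(z | mu)
    lam /= lam.sum()
    j = rng.choice(N, size=N, p=lam)
    u = u_prev[j] + sig_f * rng.normal(size=N)
    K = gauss(u[:, None] - u_prev[None, :], sig_f)
    w = gauss(z - u, sig_z) * (K @ w_prev) / (K @ lam)
    return u, w / w.sum()
```

Note that the only difference between the two steps is which mixture weights (w versus lambda) drive the proposal and its denominator, matching equations (3.2) and (3.6).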
3.1.2 Discussion

Since the particles at time t are sampled from a much lower dimensional space in the marginal filter algorithms, we expect that the variance in the weights will be significantly less than that of (A)SIR for the same number of particles. Proposition 3.1.1 proves that it is no greater in the auxiliary variable setting, and a similar result holds for SIR.

Figure 3.3 demonstrates this with an example. Consider a multi-modal state estimate at time t−1, and a Gaussian transition prior. In (A)SIR, a particle's transition likelihood is relative to the tail end of a path (3.3(a)), while the marginal PF calculates the true marginal transition density (3.3(b)).

Figure 3.3: Predictive density p(u_t | z_{1:t-1}) in (A)SIR and Marginal PF. (a) (A)SIR samples a mixture component and uses this to compute importance sampling weights. (b) Marginal filtering uses the entire mixture. By using a single mixture component, the (A)SIR estimate ignores important details of the distribution; a particle lying in the left mode should be given less weight than one lying in the right mode.

The marginal PF improves over (A)SIR whenever a particle has high marginal probability but low joint probability along its path. This can occur due to heavy-tailed importance distributions or models with narrow or mis-specified transition priors. In contrast, the improvement of MPF over SIR will not be very pronounced if the observation model is peaked (i.e., if the likelihood is highly concentrated), as this will influence the importance weights more than the effect of sampling in the joint space. In these cases, AMPF should be used. Figure 3.4 demonstrates the two types of variance reduction.

Finally, it should be noted that Sequential Monte Carlo applies to domains outside of Bayesian filtering, and an analogous marginal SMC algorithm can be straightforwardly derived in a general SMC context.
Note that the evaluation of the proposal (equation (3.2)) must be performed for each sample, thus both MPF and AMPF incur an N^2 cost. As we later show, this can be improved substantially.

Figure 3.4: Importance weight variance reduction over 40 timesteps. "Spikes" in weight variance are caused by unlikely observations. ASIR (red) is successful in smoothing the occurrences that arise when using SIR (green). The marginal PF (black) reduces overall variance by sampling in a smaller-dimensional space, but still suffers from spikes. AMPF (blue) gains the advantages of both approaches.

3.1.3 Cases of equivalence

There is one case where SIR and the marginal PF are equivalent. When the transition prior is used as the proposal distribution, the MPF weight update equation becomes:

w(u_t^{(i)}) \propto \frac{p(z_t | u_t^{(i)}) \sum_j w_{t-1}^{(j)} \, p(u_t^{(i)} | u_{t-1}^{(j)})}{\sum_j w_{t-1}^{(j)} \, p(u_t^{(i)} | u_{t-1}^{(j)})} = p(z_t | u_t^{(i)}).

When using SIR, particles are resampled after being weighted, but this is precisely equivalent to sampling from the marginal proposal distribution \sum_j w_{t-1}^{(j)} \, p(u_t | u_{t-1}^{(j)}). The SIR weight update equation is

w_t^{(i)} \propto w_{t-1}^{(i)} \frac{p(z_t | u_t^{(i)}) \, p(u_t^{(i)} | u_{t-1}^{(i)})}{p(u_t^{(i)} | u_{t-1}^{(i)})} = p(z_t | u_t^{(i)})   (after renormalizing),

since w_{t-1}^{(i)} is set to N^{-1} after resampling. In both cases, the conventional likelihood-weighted filter is recovered.

Similarly, when performing auxiliary filtering, it may be possible to sample exactly from the optimal proposal q(u_t | z_t, k) = p(u_t | z_t, u_{t-1}^{(k)}) and exactly evaluate p(z_t | u_{t-1}^{(k)}) (equation (3.4)). In this case the importance weight variance is zero, thus no marginal filtering improvement is possible.
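The first equivalence is easy to verify numerically: with the transition prior as proposal, the predictive and proposal mixtures in the MPF weight cancel, leaving only the likelihood. A small sketch (Gaussian model and all names are illustrative assumptions of ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def gauss(d, s):
    # N(d; 0, s^2) density, evaluated elementwise
    return np.exp(-0.5 * (d / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# previous particles and normalized weights
u_prev = rng.normal(size=50)
w_prev = rng.random(50)
w_prev /= w_prev.sum()
z, sig_f, sig_z = 0.3, 1.0, 0.5

# proposal = marginal transition prior: pick a component, sample its kernel
j = rng.choice(50, size=50, p=w_prev)
u = u_prev[j] + sig_f * rng.normal(size=50)

K = gauss(u[:, None] - u_prev[None, :], sig_f)  # K[i, j] = p(u^(i) | u_prev^(j))
w_mpf = gauss(z - u, sig_z) * (K @ w_prev) / (K @ w_prev)

# the mixtures cancel: the MPF weight reduces to the likelihood
assert np.allclose(w_mpf, gauss(z - u, sig_z))
```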
For instance, equation (3.2) can be formulated as a sum-kernel operation (equation (1.1)) as follows:  K(x.i,yj)  = q(u  t  = yj|u _i = X j , z ) . t  t  Note that while we have reduced M P F to the same asymptotic complexity as SIR, the constants (while not prohibitive) are certainly higher than in SIR. The use of M P F is preferrable when there is uncertainty about the choice of transition prior p(u |ut_i), t  which is common in industrial applications [36]. In many applications (such as vision), the likelihood is much more complex and expensive to evaluate than the transition prior. In these cases, it if often more efficient to use fewer particles with the marginal filter than using many particles with SIR.  3.2.1  Fast methods for A M P F  In the auxiliary particle filter, we approximate the integral the following way: J P(2 \u )p(u t  t  t  u^)  where  du ^p(z t  /4 ) 4)  t  /4° 4- p ( u j u j ^ ) .  (3.7)  (3.8)  This is basically a single-particle Monte Carlo approximation. We can do better using fast methods. First, we draw M samples from the marginal unweighted predictive  Chapter 3. The Marginal Particle Filter  35  density: (fc)  l£>(u |u«), (  i = l . . .M  To evaluate (3.7), we actually want a set of samples distributed according to p ^ U j  (3.9) ^,  not eq. (3.9). Hence, we treat draws from the latter as draws obtained via importance sampling. Our estimate of (3.7) is then:  Jp(zt|u )p(u t  u ^ ) du,  t  M  (k)\ ( ( f c ) p(zt Pi )V\Pt  -420  M We can evaluate this operation efficiently using two sum-kernel operation, as we demonstrated for particle smoothing (Section 2.3.1). Alternatively, it is possible to perform the computation exactly, but using M <^ N.  3.3  Experiments  We present several examples of using the marginal particle filter for Bayesian filtering. We compare the algorithms in several respects: the error of the state estimate to the ground truth (when known); the variance of the importance weights; and the unique particle count. 
High variance is indicative of degeneracy of the importance sampling weights, and affects the precision and variance of the estimator. Unique particle count is a measure of the diversity of the particles. The latter two are both important: it is trivial to construct an algorithm which performs well under either of these measures individually, but the construction will behave pathologically under the other measure.  3.3.1  Multi-modal non-linear time series  We apply the marginal particle filter to the main example from Section 2.1.4. Table 3.1 and Figures 3.5, 3.6, and 3.7 summarize the results. M P F improves over SIR slightly in terms of RMSE, and produces a substantial reduction in importance weight variance.  Chapter 3. The Marginal Particle Filter  36  Particle weight variance SIR Marg P F  10  20 30 Timestep  40  50  Figure 3.6: 1-D time series; variance of the importance weights. The spikes are the result of unlikely data—MPF does particularly better than SIR in these cases.  37  Chapter 3. The Marginal Particle Filter  Table 3.1: 1-D time series; RMS error and weight variance. Method  RMS Error  (variance)  Importance weight variance  SIR  2.902  1.03  0.000163  MPF  2.344  0.06  0.000025  Unique particles (max = 600) 600  i  • t II  400  SIR MargPF  t  i  ,1  11 1 1 11, - I I , 1 ^ , 1 1  300  1  i  1  500  i  1 1,1 1,1 1, 1 1  1  1  'A  1  ''  i  , i  , i  *  • > • i i  i  i /  1  ' i i • 1  1  .  • » " 1  '.•  7  t  Iv  >  I  \  A''J\  200 100  i  ii ii 11 • i  .  1  ' 'A M ily  1  •A, r\»  1  i  ''JV  / V  r±J  r  V  /  -  0. 0  10  20 30 Timestep  40  50  Figure 3.7: 1-D time series; unique particle count. Although the marginal P F generally has better diversity, an unlikely observation can still cause problems (such as at t — 8). 3.3.2  Stochastic volatility  Monte Carlo methods are often applied to the analysis of the variance of financial returns as the models involved cannot be solved analytically. 
One popular model is known as stochastic volatility [27], which can be summarized as: z = e /3 exp {u /2} t  t  t  u = (j>u -i + rj t  where r\ ~ t  A/"(0,0V,),  e ~ N(0,1), t  t  t  and x\ ~ A/"(0,<7 /(l -(f) ))- We analyze the 2  1  weekday close of the U.K. Sterling/U.S. Dollar exchange rate from 1/10/81 to 28/6/85. There are 946 timesteps, but we limit analysis to the first 200 for readability, and use  Chapter 3. The Marginal Particle Filter  38  the model parameters fit to the data using M C M C in [27]. We use as proposal the 2  transition prior with heavier tails to test the marginal filter's ability to compensate for poor proposals. State estimate 2.5  SIR Auxiliary marginal P F Marginal P F ASIR Observations  2 1.5 1 0.5 0  X  X  '"'  X  -0.5  50  x*>  100 Timestep  150  200  Figure 3.8: Stochastic volatility model; state estimate. The results are summarized in Figures 3.8, 3.9, and 3.10. A l l results are means over five runs.  3.3.3  Fast methods  We compared the M P F implemented naively to an implementation using the fast Gauss Transform (FGT), as it is conceivable that the error introduced by the FGT would offset the variance-reduction benefits that M P F provides. The results in Table 3.2 indicate that we can increase the precision sufficiently to render this issue moot. (Additionally, we performed all the experiments in the previous section using the approximation techniques.) 2  That is, Markov Chain Monte Carlo, which along with SMC comprise the two main classes of  Monte Carlo algorithms.  Chapter 3. The Marginal Particle Filter Particle weight variance  ,x 10" — 1.5  39  SIR Auxiliary marginal P F Marginal P F ASIR  I  I) !. i: •'  0.5 !  .  .  . j  bi'v -'vA..-ii,VV.yjA.^^Vvi,^\ 1  50  100 Timestep  150  200  Figure 3.9: Stochastic volatility model; importance weight variance. 
Marginal filtering consistently achieves lower variance, and the best result is obtained by the A M P F algorithm, which combines the advantages of both ASIR and marginal PF. See Figure 3.4 for a closer view of the first twenty timesteps, which more clearly illustrates the interaction among SIR, A M P F , and ASIR. Unique particles 400  1  — 350 L •!•  SIR Auxiliary marginal P F Marginal P F ASIR  •i  300  :(  !  :, 250  [%  s  if*/ V  J 200  150.  *Sj7  50  100 Timestep  150  200  Figure 3.10: Stochastic volatility, unique particle count. Marginal P F consistently has higher diversity.  Chapter 3. The Marginal Particle Filter e  N  le-3  500  le-3  le-7  1500  5000  time(s) naive  0.300  FGT  0.181  naive  2.568  FGT  0.310  naive  28.14  FGT  1.482  speedup  40  RMSE 1.2684  1.66  1.2685 1.2469  8.28  1.2542 1.2466  19.0  1.2466  Table 3.2: Fast methods applied to the marginal particle filter. All reported results are the mean values over ten runs. The F G T enables a substantial improvement in speed at the cost of a slight increase in error. For the last test, we substantially decreased the error tolerance of the FGT approximation (which increases the runtime), and were still able to achieve a considerable speedup and RMSE within the variance of the test to the naive. This suggests that the error introduced by the F G T approximation can be made less than the inherent error caused by Monte Carlo variance.  3.4  Summary  Particle filtering involves importance sampling in the high-dimensional joint distribution even when the lower-dimensional marginal distribution is desired. We have introduced marginal importance sampling which overcomes this deficiency, and have derived two new particle filtering algorithms using marginal importance sampling that improve over SIR and ASIR, respectively. 
We have presented theoretical and empirical results which show that M P F and A M P F achieve a significant reduction in importance weight variance over the joint-space algorithms, and have shown how the ensuing computational burden can be drastically reduced.  Part II Acceleration methods  Chapter 4  Du bout de la pensee. Gnossiene No.  1  ERIC  SATIE  Fast methods To recap: we have explored a variety of N algorithms in Sequential Monte Carlo, and 2  have shown how to map the expensive part of their computation to base operations of a certain form. The remainder of this thesis will be about performing those operations quickly. Not all the methods we present will be interchangeable; we will be explicit in detailing the conditions where each method can be used.  4.1  The problem  We are given a set of source particles X 1  {i^i}iLi-  =  {XJ}^  1  with associated scalar weights  We are additionally given a set of target particles Y = {yj}^L  v  influence function  and an  (or kernel) that acts on a pair of target and source particles and  determines (along with the target weight) the influence between the two particles. The goal is to determine the sum of influence from all source particles on all target particles, or determine the source particle of maximum influence on each target. These are examples of generalized iV-body problems [16] which originated with the computation of the force between astronomical bodies in physics, and deal with the computation of some sort of N interaction among N entities. 2  Note that the term particle here is overloaded with Monte Carlo particles from part I. We apologize  x  for any resulting confusion.  42  43  Chapter 4. Fast methods  4.1.1  Sum-Kernel  Sum-kernel (or Sum-Product) is the first problem. To reiterate equation (1.1), we wish to determine N  forj = l , . . . , M  /,- = X>itf(xi>yj)-  (1-1)  i=i  Note that j runs from 1 to M, rather than 1 , . . . 
, . . . , N as was the case in part I; although all the instances of this problem that occur in the former part had the same number of source and target particles, there is no reason that the sum-kernel problem needs to be constrained in such a manner. The naive cost of this algorithm is in this case O(MN).

None of the methods we present compute equation (1.1) exactly. Instead, they require an additional parameter ε, and have as their output a vector {f̂_j}_{j=1}^M, where each f̂_j is within a certain error tolerance of the true kernel sum. This can be absolute error, in which case

    |f̂_j − f_j| ≤ ε    for all j,

or relative error, where

    max(f̂_j, f_j) / min(f̂_j, f_j) ≤ 1 + ε    for all j.

The sum-kernel problem is sometimes called weighted Kernel Density Estimation (KDE).

4.1.2 Max-Kernel

The max-kernel (or Max-Product; weighted maximum kernel) problem aims to determine two quantities for each target particle: the source particle exerting maximum influence, and the influence that particle exerts. I.e.,

    f*_j = max_{i=1}^N ω_i K(x_i, y_j)
    i*_j = argmax_{i=1}^N ω_i K(x_i, y_j)    for j = 1, ..., M.    (1.2)

The output of a max-kernel algorithm will be the two vectors {f*_j}_{j=1}^M and {i*_j}_{j=1}^M. Unlike sum-kernel, the exact answer is returned (although in some applications it might suffice to have an approximate answer, we do not consider such cases in this work).

Remark 4.1.1. In the case that all the weights are identical, and K is interpreted as an inverse distance function (such as K = −||x − y||²), then max-kernel is min-distance, i.e., the all-nearest-neighbour problem of statistics.

4.2 Preliminaries

4.2.1 Assumptions

All the methods we cover require that the source and target particles exist in a metric space.

Definition 4.2.1. A metric space M = (V, δ) is a tuple containing a set of points V, and a metric δ : V × V → [0, ∞), such that the following hold for all x, y ∈ V:

1. δ(x, y) = 0 ⟹ x = y
2. δ(x, x) = 0
3. δ(x, y) > 0 for x ≠ y
4.
δ(x, z) ≤ δ(x, y) + δ(y, z)    (triangle inequality)

We will refer to the metric δ(·) as the distance function. In some cases, it is not required that the distance function obey the triangle inequality (definition 4.2.1, condition 4). We will indicate when this is the case.

Remark. All vector spaces are metric spaces, as a metric is implied by the existence of a norm. For instance, in Euclidean R^n space, the norm ||x||_2 induces a corresponding metric δ(x, y) = ||x − y||_2. In Sequential Monte Carlo for Bayesian inference, the particles usually lie in the model state space, which is a multi-dimensional vector space.

We can now state the main assumption underlying fast methods:

Assumption 4.2.1 (Kernel assumption). The kernel function K(x, y) is parameterized by a metric δ, that is, K(x, y) = K(δ(x, y)). Further, K is non-strictly decreasing or increasing in this parameter.

For instance, the Gaussian kernel K(x, y) = exp{−(1/(2σ²))||x − y||²} (and truncated versions thereof) is clearly parameterized by the L2 metric, and decreases as the L2 distance grows.

This assumption may seem restrictive, but it encompasses many commonly-used kernels. Besides obvious distance functions like the Lp family of norms, inner products can also play the role of the parameterizing function. Hence kernels such as the polynomial K(x, y) = (x · y + b)^p and sigmoid K(x, y) = tanh(a x · y − β) satisfy assumption 4.2.1. From this point on, we will assume that K is decreasing, not increasing, in δ.

Remark. An important point to note when using fast methods with SMC is that an arbitrary discrete potential function does not satisfy assumption 4.2.1, and hence cannot be accelerated. It is still the case that some types of discrete potentials may be accelerated, if they have some spatial structure.

4.2.2 Spatial indices

Spatial indices (sometimes called spatial access methods) intelligently subdivide a space into regions of high locality given some concept of distance.
The index consists of a number of cells that contain points. To make access efficient, the points in a cell should satisfy some easily-computable property. For instance, when using kd-trees this property is that the points' value in a certain dimension is less than a certain splitting value. It is extremely common that the cells that partition the particles can be further subdivided to create a more refined partition; spatial indices often consist of a hierarchy of partitions which form a tree.

Spatial indices originated in computational geometry, but techniques have also been developed in the database literature to facilitate high-dimensional nearest-neighbour search. In the past five years, this topic has also been pursued in the machine learning community [16, 35, 44]. A full review of spatial indices is beyond the scope of this work; Chavez presents a unifying view in [5]. Instead, we briefly cover the two most-used examples.

4.2.2.1 Kd-trees

A kd-tree operates on a multi-dimensional vector space. Cells are created by recursively choosing a dimension and a splitting value in that dimension; particles having value less than or equal to that value are placed in one cell, and the remainder in the other. This procedure is then performed recursively on the two cells (note: the two cells may choose different splitting dimensions and/or values). The dimension of largest spread is typically chosen as the splitting dimension. Kd-trees are effective in low-dimensional settings; a 2-D example is given in Figure 5.4 (not all levels are shown). For dimensionality greater than ten, other methods may be more appropriate.²

4.2.2.2 Anchors hierarchy and metric trees

Metric trees are more relaxed in their requirements than kd-trees: they need only a defined distance metric. Nodes in a metric tree consist of a pivot (a point lying at the centre of the node) and a radius.
All points belonging to the node must have a distance to the pivot smaller than the radius of the node.³ The Anchors hierarchy, introduced by Moore [35], is an efficient means of constructing a metric tree. Unlike kd-trees, metric tree construction and access costs do not have factors that explicitly depend on dimension. Note that they are still vulnerable to the distributional effects of dimensionality (the distance histogram tends to be more peaked in high dimensions). Thus they may not outperform kd-trees, even in high dimensions. The Anchors hierarchy construction technique is particularly vulnerable to peaked distance distributions. A theoretical discussion of this effect can be found in [5]; experiments demonstrating it can be found in Section 5.3.1. This phenomenon is also one of the main reasons why fast methods vary in their performance as the dataset is varied.

[Footnote 2: That is, according to [17]. In our experiments, we have found this depends heavily on the distribution of the data; cases exist where kd-trees outperform supposed "dimensionality-independent" spatial indices for all tested dimensions, and cases where a dimensionality-independent index outperforms kd-trees in as few as six dimensions. We recommend testing both to determine their appropriateness for a given application.]

[Footnote 3: Note: it is not the case that all points within the radius of the pivot belong to the node.]

4.2.3 Dual-tree recursion

Dual-tree recursion, introduced by Gray and Moore [16], is an example of so-called higher-order divide and conquer. It is rather important to N-body methods, so we will devote some space to its description. We will consider only dual-tree recursion applied to N-body simulation (though it could conceivably be used in other ways as well). Hence, we have a set of source particles (X), and a set of target particles (Y). Further, there is some concept of influence between source and target particles.
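Before any trees enter the picture, it is worth fixing the naive O(MN) baselines for the two base operations of Section 4.1. A minimal sketch in Python (the Gaussian kernel and the toy data are illustrative assumptions, not part of the thesis):

```python
import math

def gauss(x, y, sigma=1.0):
    # A kernel parameterized by distance (assumption 4.2.1): decreasing in |x - y|.
    return math.exp(-(x - y) ** 2 / (2.0 * sigma ** 2))

def naive_sum_kernel(xs, ws, ys, K=gauss):
    # Equation (1.1): f_j = sum_i w_i K(x_i, y_j); costs O(MN) kernel evaluations.
    return [sum(w * K(x, y) for x, w in zip(xs, ws)) for y in ys]

def naive_max_kernel(xs, ws, ys, K=gauss):
    # Equation (1.2): value and index of the source particle of maximum influence.
    out = []
    for y in ys:
        i_best = max(range(len(xs)), key=lambda i: ws[i] * K(xs[i], y))
        out.append((ws[i_best] * K(xs[i_best], y), i_best))
    return out
```

The fast methods below must reproduce these outputs, either exactly (max-kernel) or to within a stated tolerance (sum-kernel).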
The particles lie in a metric space, and we presume that the source particles are indexed in a tree-based spatial index which we call X. A node (cell) in this tree will be called an X-node.

4.2.3.1 Single-tree recursion

The preceding setup is sufficient to describe single-tree algorithms, which are another name for the class of algorithms evoked by the mention of "tree-based recursion". We assume the existence of an operation that compares a query point y ∈ Y to an X-node, and outputs either include, exclude, or neither. The algorithm proceeds as follows: first, compare y to a node X (initialized to the root of the X tree). If the output is either include or exclude, then the algorithm terminates. Otherwise, the algorithm recurses on the children of X. If there is more than one target particle, then this algorithm is performed independently for each y ∈ Y. The outputs of the comparison operator deserve explanation:

• inclusion (or subsumption) The entire node X can be included in the results of the algorithm.

• exclusion The entire node X can be excluded from the search algorithm.

Example 4.2.1. (Range-Counting) Assume we wish to count the number of source particles within a certain range r_y of y. An X-node can be included when the maximum distance from y to the node is less than r_y (see Figure 4.1). Hence, all particles in X are within distance r_y of y, and we can add the count of particles in X (which we assume is precomputed) to a running count of the result of the algorithm. Similarly, if the minimum distance from y to X is greater than r_y, we can immediately exclude all points in X from consideration. If neither of these conditions hold, then X must be further refined—we recurse on its children. It is possible to reach the leaves of the tree (individual particles).

It is common that inclusion and exclusion are not both part of a given algorithm.
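Example 4.2.1 can be made concrete with a small sketch. The toy 1-D index below (a hypothetical stand-in for a kd-tree, caching each node's bounding interval and particle count) is an illustration, not the thesis's implementation:

```python
def build(points, lo=0, hi=None):
    # A toy 1-D spatial index: each node caches its interval [min, max] and count.
    if hi is None:
        points = sorted(points)
        hi = len(points)
    if hi - lo == 1:
        return {"min": points[lo], "max": points[lo], "count": 1, "children": []}
    mid = (lo + hi) // 2
    kids = [build(points, lo, mid), build(points, mid, hi)]
    return {"min": kids[0]["min"], "max": kids[1]["max"],
            "count": sum(k["count"] for k in kids), "children": kids}

def range_count(node, y, r):
    # Count source points within distance r of query y (Example 4.2.1).
    dmin = max(node["min"] - y, y - node["max"], 0.0)       # min distance to node
    dmax = max(abs(node["min"] - y), abs(node["max"] - y))  # max distance to node
    if dmin > r:
        return 0                     # exclusion: the whole node is too far away
    if dmax <= r:
        return node["count"]         # inclusion: the whole node is in range
    return sum(range_count(c, y, r) for c in node["children"])  # neither: refine
```

The include/exclude tests use only the cached interval, so subtrees whose particles are all in range (or all out of range) are never visited.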
Consider binary search, which is easily seen to be a single-tree algorithm as we have described them. At each step, one node is excluded and one is expanded; no concept of inclusion makes sense in this setting.

[Figure 4.1: Inclusion and exclusion in example 4.2.1 (range counting). Shown are target point y, query radius r_y, and three X-nodes, A, B, and C. A can be included and C can be excluded, but B must be further refined (recursed on).]

4.2.3.2 From one tree to two

In the range-counting example, a very similar set of inclusion and exclusion decisions will be made for two query points that are spatially proximate.

[Figure 4.2 (panels (c) "Dual-tree comparison" and (d) "Dual-tree recursion (one of four)"): Dual-tree and single-tree recursion. 4.2(a), 4.2(b): Single-tree recursion processes a single target particle at a time, and recurses on the children of the X-node. Dotted lines indicate lower bounds on distance, and dashed lines show upper bounds. 4.2(c), 4.2(d): Dual-tree recursion considers several target particles at a time by using node-node bounds. Recursion proceeds on the children of both nodes. Only one pair of the four recursive calls is shown.]

Consider adding another query point y' to Figure 4.1 such that y' is quite close to y. It is likely that A would still be included and C excluded for y'. This is the central idea of dual-tree recursion: whenever possible, we want to share pruning decisions among target particles. We now assume that there is a tree-based spatial index built on the set of target particles Y as well as the source tree on X. Instead of particle-node comparisons, basic operations in dual-tree algorithms will be node-node comparisons—specifically, a comparison between a Y-node and an X-node. The comparison results in precisely the same possible outputs (inclusion, exclusion, or neither), and the result of the comparison must hold for every y in said Y-node.
If the X-node cannot be included or excluded, then the algorithm recurses on the cross-product of the children of the Y- and X-nodes. In the case of binary trees, the recursion on Y-node A and X-node B is comprised of four recursive calls, on (A.left, B.left), (A.left, B.right), (A.right, B.left), and (A.right, B.right) (Figure 4.2). This induces the problem of selecting which recursive call to perform first; usually this requires resorting to heuristics (the problem of determining the optimal recursion order is exponential for many dual-tree algorithms). Alternatively, all four pairs of nodes can be pushed onto a global priority queue, and the pairs evaluated on the basis of a heuristic priority value [17]. Intuitively this approach seems ideal, but maintaining the queue requires substantial overhead, and its use increases the memory consumption of the dual-tree algorithm enormously. Some solutions to this problem are found in [31].

4.3 Fast methods for sum-kernel

We will give relatively short shrift to the description of sum-kernel methods, as we do not make a contribution to this area in this thesis. For more information, see our tech report that presents an empirical comparison of sum-kernel algorithms [32]. For a more in-depth and relaxed presentation, see Dustin Lang's thesis [31].

4.3.1 Fast Gauss Transform

If the kernel is Gaussian, then the fast Gauss Transform (FGT) [19, 20] can be used to perform sum-kernel in O(N). This algorithm is an instance of more general fast multipole methods for solving N-body interactions [18]. We present a brief overview of this algorithm subsequently.

[Figure 4.3: Illustration of the fast Gauss transform. The contribution of points x_{1:3} in box B is summarized by a single Hermite expansion about x_B. This expansion is then translated to x_C and Taylor expanded to x_{4:7}.]

The intuition behind the FGT is illustrated by Figure 4.3.
To evaluate the interaction between N points, we first partition the space. Then instead of considering the individual contribution of each point in a partition, we only consider a single aggregated contribution at the centroid of the partition. In this way, if there is a cluster of points far away, this cluster can be interpreted as a single point summarizing the contribution of all the points in the cluster. As shown in Figure 4.3, the partition could be a square grid. This is acceptable if the problem is low dimensional, say x ∈ R³.

More precisely, the FGT works by carrying out a Hermite expansion about the centroid in each partition and then transferring the aggregated effect to other partitions via a Taylor series expansion. Hence, at each source box B, we expand the Gaussian field with a multivariate Hermite series of p terms:

    G(y) = Σ_{α < p} A_α(B) h_α((y − x_B)/√(2σ²)),

where α is a multidimensional index,⁴ h_α(·) is a Hermite basis function, N_B is the number of points in partition B, x_B is the centroid of partition B, and A_α(B) are the series coefficients given by

    A_α(B) = (1/α!) Σ_{j=1}^{N_B} q_j ((x_j − x_B)/√(2σ²))^α.

We can precompute these Hermite expansions for all boxes in O(p^d N) operations, where d is the dimension of x and p is the number of terms in the expansion. For each target box C, we transform the Hermite expansions into a single Taylor expansion:

    G(y) = Σ_{β < p} B_β(C) ((y − x_C)/√(2σ²))^β,

where

    B_β(C) = ((−1)^{|β|}/β!) Σ_B Σ_{α < p} A_α(B) h_{α+β}((x_B − x_C)/√(2σ²)).

Evaluating these Taylor series again takes O(p^d N) operations.

4.3.2 Improved Fast Gauss Transform

The improved Fast Gauss Transform (IFGT) [44] replaces the Hermite expansion of the FGT with a Taylor series expansion. This yields numerous advantages, such as improved performance (dramatically so in high dimensions) and removing the requirement of a single variance parameter.
The main disadvantage over the FGT is that the error bound is considerably looser, and if parameters are chosen to fulfill this bound then the algorithm is much slower than even naive computation for dimensions greater than d = 2 [32]. Parameter selection is a major open problem for this method. Further, the error bound in [44] is subtly wrong. Both issues are tackled in [32].

4.3.3 Dual-tree sum-kernel

Dual-tree sum-kernel is more flexible than the previous methods, requiring only assumption 4.2.1 to be met (the particles need not even lie in a vector space). This encompasses most continuous densities and some discrete distributions (in the latter case, it depends on the parameters of the distribution). This algorithm is an instance of the generalized dual-tree recursion described in Section 4.2.3.

[Footnote 4: A multi-index α = (α_1, ..., α_d) ∈ N^d is a d-tuple of nonnegative integers such that for any x ∈ R^d, we have x^α = x_1^{α_1} x_2^{α_2} ··· x_d^{α_d}, |α| = α_1 + α_2 + ... + α_d, α! = α_1! α_2! ··· α_d!, and h_α(x) = h_{α_1}(x_1) h_{α_2}(x_2) ··· h_{α_d}(x_d).]

As in all dual-tree algorithms, this method first builds a spatial tree on both the source and target points. The algorithm is distinguished in the manner in which node-node comparisons are performed; here we introduce influence bounding, which is the main tool used in kernel-influence-based dual-tree algorithms.

Assume we have a node X of source points, and a node Y of target points. We can easily state the kernel influence that all the particles in X have on a particle y in Y:

    f_y(X) = Σ_{i: x_i ∈ X} ω_i K(δ(x_i, y)).    (4.1)

Let W_X be the sum of the weights of the particles in node X. Further, assume that δ_min and δ_max are the minimum and maximum distances between nodes X and Y, respectively (Figure 4.4). Since K is decreasing in δ by assumption 4.2.1, we can bound equation (4.1) for any y:

    W_X K(δ_max)  ≤  Σ_{i: x_i ∈ X} ω_i K(δ(x_i, y))  ≤  W_X K(δ_min),    (4.2)

where the left-hand side is the lower bound and the right-hand side the upper bound. These bounds can be used to estimate (4.1) as

    f_y(X) ≈ f̂_y(X) = W_X (K(δ_min) + K(δ_max)) / 2,    (4.3)

with absolute error bounded by

    e_y(X) = W_X (K(δ_min) − K(δ_max)) / 2.    (4.4)

[Figure 4.4: Dual-tree bounding for spherical nodes, showing the minimum (δ_min) and maximum (δ_max) distances between nodes X and Y.]

The bounds of equation (4.2) can be tightened by recursing on the X and Y nodes using dual-tree recursion as described in Section 4.2.3. At each stage of the algorithm,

  Method     Kernel       data      dimension    error          speed
  FGT        Gaussian     x ∈ R^n   up to 3-D    absolute       O(N)
  IFGT       Gaussian     x ∈ R^n   up to 10-D   n/a (a)        O(N)
  Dual-Tree  Ass. 4.2.1   x ∈ M     thousands    abs. or rel.   O(N log N) (b)

  (a) Absolute error, but if guaranteed then the algorithm is no longer O(N).
  (b) [17] claims O(N) recursion time, but it remains unproven to our knowledge.

Table 4.1: Summary of sum-kernel methods.

each particle y will be associated with a series of X nodes {X_1, X_2, ..., X_{B_y}} and thus a bound on the total error introduced by approximating f_y as Σ_{i=1}^{B_y} f̂_y(X_i), by (4.4):

    e_y = Σ_{i=1}^{B_y} e_y(X_i).    (4.5)

When equation (4.4) drops below a specified desired error tolerance ε, equation (4.3) is used to estimate the sum of influences at that point. A dual-tree sum-kernel algorithm using relative error (rather than absolute) is analogous.

4.4 Fast methods for max-kernel

4.4.1 The distance transform

In [12, 14], Felzenszwalb and Huttenlocher derive a fast algorithm for computing the distance transform, which is equivalent to computing max-kernel for a certain class of kernels. This achieves O(N) cost and is very efficient in practice—the constants hiding in that asymptotic cost are gloriously low. Additionally, the problem is separable in dimensionality, so a d-dimensional transform of N^d points costs O(dN^d).

Consider the case of a Gaussian kernel in one dimension. We wish to solve

    i*_j = argmax_i ω_i exp{−(y_j − x_i)²/(2σ²)} = argmin_i [ (y_j − x_i)² − 2σ² log ω_i ].    (4.6)
Felzenszwalb et al. note that equation (4.6) is upper-bounded by a parabola (whose shape is determined by σ, but is the same for all points) anchored at (x_i, f(i)), where f(i) = −log ω_i. Hence the distance transform at a point y_j is the lowest parabola at y_j, and the entire distance transform is defined by the lower envelope over R of all such parabolas.

[Figure 4.5: Lower envelope computed with the distance transform.]

The algorithm has two steps: first, determine which parabolas comprise the lower envelope and where they occur (Figure 4.5); next, sample each query point to determine the distance transform at that point.

The distance transform was originally developed for regular discrete grids—in that setting it achieves O(N) runtime. It also extends to a regular N^d grid in d dimensions for a cost of O(dN^d). Finally, a similar algorithm can be used for kernels exponential in the L1 distance. In this case, the intersecting geometric shapes are cones rather than parabolas. It is easy to calculate distance transforms for truncated versions of the kernels we mention, as well as linear combinations of these kernels.

4.4.1.1 Extension to irregular grids in 1-D

While the distance transform was designed to work exclusively on regular grids, it is easily extended to irregular grids in the one-dimensional case, for a small increase in cost. O(N log N) algorithms for computing the lower envelope of one-dimensional axis-aligned parabolas of arbitrary shape exist [8] and can be applied in this case. Since the parabolas in the distance transform setting are identically shaped, we can modify the original algorithm to obtain a slightly simpler procedure compared to [8] (though having the same computational complexity). Assume we are given source particles {x_1, ..., x_N} and target particles {y_1, . . .
, . . . , y_M}. The first step of the algorithm is to compute the lower envelope of the parabolas anchored at {x_i}. This step is unchanged, save that the x particles need to be pre-sorted at a cost of O(N log N). The second step is to calculate the value of the lower envelope at each y particle. This can be done by either pre-sorting the y particles, or employing binary search on the lower envelope, which costs O(M log M) or O(M log N) respectively.

Unfortunately, this extension only applies to the one-dimensional case. Other means must be used to compute higher-dimensional max-kernel problems on Monte Carlo (irregular) grids.

A final item is needed to use this method with arbitrary Gaussian kernels with bandwidth σ². The x value of the intersection between two parabolas anchored at (p, f(p)) and (q, f(q)) respectively is given by

    s = ((2σ² f(p) + p²) − (2σ² f(q) + q²)) / (2(p − q)),

a minor change from the formula in [14].

Chapter 5

Dans une grande bonté.
Gnossienne No. 2
ERIC SATIE

Dual-tree max-kernel

In Chapter 4, we reviewed a variety of methods for performing fast sum- and max-kernel operations. The reader may have noted the relative paucity of the max-kernel section compared to the sum-kernel algorithms. Fewer methods exist for the max case, in part due to the smaller number of applications. Max-kernel problems arise in many settings of maximum a posteriori inference—this includes the MAP particle smoothing algorithm described in Section 2.3.3, but is also applicable to the Viterbi algorithm for HMMs, and MAP belief propagation. We also note (remark 4.1.1) that max-kernel is a generalization of the all-nearest-neighbour problem, and hence acceleration methods for the former can apply to the latter.¹

The most important existing prior work on accelerating max-kernel is the set of techniques developed by Felzenszwalb and Huttenlocher using the distance transform (Section 4.4.1, [12, 13, 14]).
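The 1-D irregular-grid procedure of Section 4.4.1.1 can be sketched compactly. The code below is a hedged illustration (it assumes distinct, ascending source locations and an illustrative Gaussian kernel with bandwidth σ²; it is not the authors' implementation), using the σ²-adjusted intersection formula:

```python
import bisect
import math

def max_kernel_1d(xs, ws, ys, sigma=1.0):
    # Lower-envelope max-kernel in 1-D for a Gaussian kernel (Section 4.4.1.1).
    # xs must be distinct and sorted ascending; ys may be queried in any order.
    f = [-math.log(w) for w in ws]   # parabola heights f(i) = -log w_i
    c = 2.0 * sigma * sigma          # parabola at x_i: (s - x_i)^2 + c * f(i)

    def inter(p, q):
        # x-coordinate where the parabolas anchored at p and q intersect (p < q).
        return ((xs[q] ** 2 + c * f[q]) - (xs[p] ** 2 + c * f[p])) / (2.0 * (xs[q] - xs[p]))

    v = [0]           # indices of parabolas on the lower envelope, left to right
    z = [-math.inf]   # z[k]: left endpoint of the interval where v[k] is lowest
    for q in range(1, len(xs)):
        s = inter(v[-1], q)
        while len(v) > 1 and s <= z[-1]:   # the new parabola hides the last one
            v.pop(); z.pop()
            s = inter(v[-1], q)
        v.append(q); z.append(s)

    # Query step: binary search on the envelope for each target point.
    return [v[bisect.bisect_right(z, y) - 1] for y in ys]
```

Each source parabola is pushed and popped at most once, so the envelope costs O(N) after sorting, and each query costs O(log N).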
These techniques are extremely efficient, but suffer from two major limitations:

• They are confined to a certain class of kernel functions.

• They are only applicable to state spaces embedded in a regular grid of parameters.²

Monte Carlo methods, such as MCMC and particle filters, have been shown to effectively adapt to examine interesting regions of the state space, and can achieve better results than regular discretizations using fewer support points [2, 40]. Problems requiring high-dimensional inference are ubiquitous in machine learning and are best attacked with Monte Carlo techniques, as regular discretizations grow exponentially and quickly become intractable.

[Footnote 1: Although there are usually more specialized (and hence more efficient) algorithms for solving nearest-neighbour problems, the asymptotic complexity is rarely improved.]

[Footnote 2: This limitation is not present in the one-dimensional case.]

In this chapter, we address the need for fast algorithms for computing weighted max-kernel on so-called Monte Carlo or irregular grids. In particular, we develop a new algorithm based on dual-tree recursion to exactly solve the max-kernel problem. We derive the method in the context of kernels parameterized by a distance function,³ which represent a broad class of frequently used kernel functions, including Gaussian, Epanechnikov, spherical, and linear kernels, as well as thresholded versions of the same. Our method can also be used to accelerate other spatial-based kernels (such as K(x, y) = x · y), and problems that have multiple kernels over different regions of the state space, but we restrict our attention to the simpler and more common case in this work. Our empirical results show that our algorithm provides a speedup of several orders of magnitude over the naive method, becoming more efficient after as little as 10 ms of CPU time.
Further, we stress that our method is a more general solution to the max-kernel problem than the distance transform, as it strictly subsumes the class of problems to which the latter algorithm is applicable. However, while the dual-tree algorithm still compares favourably to naive computation on discrete grids where both algorithms can be applied, we find that the distance transform algorithm boasts smaller constants in these cases, being 3 to 4 times faster.

The performance of algorithms based on dual-tree recursion as N grows is relatively well understood; see Gray and Moore [17] and Ihler [23]. However, we have found that the performance of this family of techniques also depends heavily on other variables, such as the data distribution, the dimensionality of the problem, and the choice of spatial index and kernel. We present several experiments to investigate these effects, and we believe that the conclusions can be generalized to other pruning-based dual-tree algorithms.

[Footnote 3: That is, kernels satisfying assumption 4.2.1.]

5.1 The algorithm

Recalling our objective, we wish to compute equation (1.2):

    f*_j = max_{i=1}^N ω_i K(x_i, y_j)
    i*_j = argmax_{i=1}^N ω_i K(x_i, y_j)    for j = 1, ..., M,    (1.2)

where x_i ∈ X are the source particles, y_j ∈ Y the target particles, and K a kernel function. In this section we develop an algorithm for solving max-kernel when the following conditions are met:

1. (X ∪ Y, δ) is a metric space, for some distance function δ.

2. K satisfies assumption 4.2.1.

The algorithm is an instance of dual-tree recursion techniques, and hence requires tree-based spatial indices built on X and Y, called X and Y respectively. For each X-node, we assume the existence of cached statistics on the weights: ω*(X) (the maximum weight of a source particle in X) and a sorted list of the particle weights (only needed for leaf nodes). These can be computed efficiently while building the spatial index.
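These cached weight statistics can be accumulated in a single bottom-up pass over an already-built tree. A hedged sketch (the dict-based node layout is an illustrative assumption, not the thesis's data structure):

```python
def annotate_weights(node, weights):
    # Cache w*(X) at every node, and a descending weight list at leaf nodes,
    # as required by the dual-tree max-kernel algorithm.
    # node: {"idx": [particle indices], "children": [child nodes]}
    if not node.get("children"):
        node["sorted_w"] = sorted((weights[i] for i in node["idx"]), reverse=True)
        node["w_max"] = node["sorted_w"][0]
    else:
        for child in node["children"]:
            annotate_weights(child, weights)
        node["w_max"] = max(child["w_max"] for child in node["children"])
    return node
```

The pass touches each node once, so it adds only O(N log N) (for the leaf sorts) to tree construction.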
The spatial indices provide upper and lower bounds on the distance between an X-node and a Y-node, denoted δ^u(X, Y) and δ^l(X, Y). Like the dual-tree sum-kernel algorithm, the method is based on influence bounding.

5.1.1 Influence bounding

Consider a source node X and target particle y. If we had an upper bound on the influence of any particle in X on y, we could potentially prune X from consideration (if, for instance, we had already found a particle whose influence was greater than the upper bound). We know that all particles in X have distance at least δ^l(X, y) from y, and have weight no more than ω*(X). Since K decreases in δ, we can upper-bound the influence on y by a particle in X as follows:

    max_{i: x_i ∈ X} ω_i K(δ(x_i, y)) ≤ ω*(X) K(δ^l(X, y)).    (5.1)

[Figure 5.1: Given a node X and a particle y, we derive bounds on the maximum influence on y of a particle in X.]

If this upper bound is less than the pruning threshold for y (which we call τ_y), then X can be pruned. The pruning threshold can be obtained by lower-bounding the influence any particle in X can exert on y. We can do slightly better than that by lower-bounding the influence of the particle of maximum influence on y in X. Put less convolutedly, we want a lower bound on the influence of the source particle of maximum influence on y (f*), given that the source particle lies in X. This is given by

    max_{i: x_i ∈ X} ω_i K(δ(x_i, y)) ≥ ω*(X) K(δ^u(X, y)).    (5.2)

The intuition for the bounds in equations (5.1) and (5.2) is that the particle of maximum weight in X⁴ can be moved freely in the space defined by the node (which may be a hypersphere, hyper-rectangle, or a ball in a metric space). The bounds are obtained when this particle is as close to (or as far from) y as possible within this region. The lower bound of equation (5.2) can be interpreted as a guarantee that we can find a particle in X that exerts influence on y at least as great as the bound.

[Footnote 4: Which is not necessarily the particle of maximum influence on y in X.]

Hence we
Hence we 4  Which is not necessarily the particle of maximum influence on y in X.  Chapter 5. Dual-tree max-kernel  61  can set y's pruning theshold to that bound and discard X nodes which have an upper bound on influence that does not meet this threshold. This is sufficient to derive a single-tree algorithm; a dual-tree algorithm is obtained by considering bounds on node-node, rather than node-particle, distances. The algorithm proceeds by doing a depth-first recursion down the Y tree. For each node, we maintain a list of X-nodes that could contain the best particle (candidates). For each X-node we compute the lower and upper bounds of the influence of the maximum particle in the node on all points in the Y-node (f^ ' l  (X,  Uf  Y)) by evaluating  the kernel at the upper and lower bound on the distances to particles in the node ( W}(x,Y)). 0  In each recursive step, we choose one Y child on which to recurse. Initially, the set of X candidates is the set of candidates of the parent. We sort the candidates by their lower bound, which allows us to explore the most promising nodes first. For each of the candidates' children, we compute the lower bound on distance and hence the upper bound on influence. Any candidates that have upper bound less than the pruning threshold are pruned. For those that are kept, the lower influence bound is computed; these nodes have the potential to become the new best candidate. The influence bounds tighten as we descend the tree, allowing an increasingly number of nodes to be pruned. Once we reach the leaf nodes, we begin looking at individual particles. The candidate nodes are sorted by lower influence bound, and the particles are sorted by weight, so we examine the most promising particles first and minimize the number of individual particles examined. In many cases, we only have to examine the first few particles in the first node, since the pruning threshold often increases sufficiently to prune the remaining candidate nodes. 
Figures 5.2 and 5.3 contain pseudo-code for the algorithm. For a given node in the Y tree, the list of candidate nodes in the X tree is valid for all the points within the Y-node, which is the secret behind the efficiency of dual-tree recursion. In this way, pruning decisions are shared among Y points when possible. Of course, this limits the quality of the bound; at each step, a smaller set (the children of Y) is considered to refine it.

It is clear that the faster the lower bound can be tightened, the more opportunities for pruning nodes will be afforded. Hence, any time we evaluate a set of nodes or points, we first sort the set in descending order based on its lower bound. In many cases, this quickly improves the bound and allows many of the less promising candidates to be pruned without being expanded.

5.2 Performance in N

We turn to empirical evaluation of the dual-tree algorithm. In this section, we focus on performance in synthetic and real-world settings as N grows; comparisons are made both in settings where the distance transform is applicable and where it is not. We present results in terms of both CPU time and number of distance computations (kernel evaluations) performed. This is important as in some applications the kernel evaluation is extremely expensive and thus dominates the runtime of the algorithm.

We use the dual-tree algorithm to accelerate the maximum a posteriori particle smoothing algorithm described in Section 2.3.3. We performed experiments both in a synthetic model and a real-world (beat-tracking) setting. The figures appear in Section 2.4.1. For the synthetic experiment, we chose a one-dimensional setting so that the dual-tree algorithm could be directly compared against the distance transform (using the modified algorithm from Section 4.4.1.1). Figure 2.9 summarizes the results.
It is clear that the distance transform is superior in this setting, although the dual-tree algorithm is still quite usable, being several orders of magnitude faster than the naive method. The real-world setting uses a three-dimensional state space, so the distance transform cannot be applied. Figure 2.10 shows that the dual-tree method significantly improves runtime in this application.

INPUTS: root nodes of the X and Y trees: X_r, Y_r.

ALGORITHM:
  leaves = {}
  candidates = {X_r}
  max_recursive(Y_r, leaves, candidates, -inf)

FUNCTION max_recursive(Y, leaves, candidates, tau_Y):
  if (isleaf(Y) AND candidates = {})
    // Base case: reached leaves (see Figure 5.3).
    max_base_case(Y, leaves)
  else
    // Recursive case: recurse on each Y child.
    foreach y in children(Y)
      tau_y = tau_Y
      valid = {}
      foreach p in candidates
        // Check if we can prune parent node p.
        if (w*(p) K(delta^l(p, y)) < tau_y)
          continue
        foreach x in children(p)
          // Compute child bounds.
          f^{u,l}(x) = w*(x) K(delta^{l,u}(x, y))
          // Set pruning threshold.
          tau_y = max(tau_y, max_x f^l(x))
        valid = valid ∪ {x in children(p) : f^u(x) > tau_y}
      valid = {x in valid : f^u(x) > tau_y}
      leaves_y = {x in valid : isleaf(x)}
      candidates_y = {x in valid : NOT(isleaf(x))}
      sort(leaves_y by f^l)
      max_recursive(y, leaves_y, candidates_y, tau_y)

Figure 5.2: Dual-tree max-kernel algorithm, part 1. Given a Y-node and pruning threshold, we compute the upper and lower bounds on influence f^{u,l}(x) for each x. This allows us to prune X-nodes and tighten tau. If we encounter a leaf X-node, we do not examine it further until we've reached a leaf in the Y tree (Figure 5.3).
FUNCTION max_base_case(Y_leaf, leaves):
  foreach x in leaves
    f^{u,l}(x) = w*(x) K(delta^{l,u}(x, Y))
  tau_Y = max_x (f^l(x))
  leaves = {x in leaves : f^u(x) > tau_Y}
  sort(leaves by f^l)
  // Examine individual y points.
  foreach y in Y
    tau_y = tau_Y
    foreach x in leaves
      // Prune nodes by Y (cached), then by y.
      if (f^u(x) < tau_y OR w*(x) K(delta^l(x, y)) < tau_y)
        continue
      // Examine individual x particles.
      foreach x_i in x
        // Prune by weight.
        if (w_i K(delta^l(x, y)) < tau_y)
          break
        f_y(x_i) = w_i K(delta(x_i, y))
        if (f_y(x_i) > tau_y)
          // x_i is the new best particle.
          tau_y = f_y(x_i)
          x*(y) = x_i

Figure 5.3: Dual-tree max-kernel algorithm, part 2. We first calculate the influence bounds of the candidates, find the pruning threshold, and prune. We then examine individual y points. We prune based on the cached bound (for the parent node Y) and then on y. Next, we look at individual particles in the node x. The particles are sorted by weight, so when a particle is reached whose weight leads to an influence below threshold, we can skip the remaining particles. Finally, we compute the influence of the particle and decide if it is the new best.

Figure 5.4: Example of pruning in the max-kernel algorithm (for a single Y particle). The candidate (dark) and non-candidate (light) nodes are shown. In the bottom-right plot, a close-up of the six final candidate nodes is shown (dashed). The single box whose particles are examined is shown in black. The subset of the particles that are examined individually is shown in black. There were 2000 particles in X, of which six nodes (containing 94 particles total) were candidate leaf nodes. Of these, only six particles from the first node were examined individually.
Figure 5.5: Dual-tree max-kernel example. Top: the influence bounds for the nodes shown in Figure 5.4. The pruning threshold at each level is shown (dashed line), along with the bounds for each candidate node. Bottom: pruning at the leaf level: in the example, six leaf nodes are candidates. We begin examining particles in the first box. As it happens, the first particle we examine is the best particle (the correct answer). Pruning by particle weight (the upper marker) allows us to ignore all but the first six particles. The pruning threshold is then sufficiently high that we can prune the remaining candidate nodes without having to examine any of their particles.

5.3 The effect of other parameters

5.3.1 Distribution and dimensionality

To examine the effects of other parameters on the behaviour of the dual-tree algorithm, we ran several experiments varying dimensionality, distribution, and spatial index while keeping N constant. We used two spatial indices: kd-trees and metric trees (built using the Anchors hierarchy) as described in Section 4.2.2. We generated synthetic data by drawing points from a mixture of Gaussians distributed uniformly in the space. Figure 5.6 shows a typical clustered data distribution. In all runs the number of particles was taken to be N = 20000, and the dimension was varied to a maximum of d = 40.

Figures 5.7 and 5.8 show the results for CPU time and distance computations, respectively. In these figures, the solid line represents a uniform distribution of particles, the dashed line represents a 4-cluster distribution, the dash-dot line has 20 clusters, and the dotted line has 100. All results shown are means over ten runs.
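The exact generator used for the experiments is not given; a minimal stand-in consistent with the description (cluster centres uniform in the unit cube, an illustrative `spread` parameter for the within-cluster standard deviation) might look like:

```python
import random

def clustered_data(n, c, d, spread=0.02, seed=0):
    """Draw n points from a mixture of c isotropic Gaussian clusters
    whose centres are themselves uniform in the unit cube [0, 1]^d."""
    rng = random.Random(seed)
    centres = [[rng.random() for _ in range(d)] for _ in range(c)]
    # Each point: pick a cluster uniformly, then jitter around its centre.
    return [[rng.gauss(m, spread) for m in rng.choice(centres)]
            for _ in range(n)]
```

Setting c = 1 with a large `spread`, or drawing points uniformly instead, recovers the unstructured baseline against which the clustered runs are compared.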
We expect methods based on spatial indexing to fare poorly given uniform data, since the average distance between points quickly becomes a very peaked distribution in high dimensions, reducing the value of distance as a measure of contrast among points. The results are consistent with this expectation: for uniform data, both the kd-tree and Anchors methods exceeded O(N²) distance computations when d > 12.

Figure 5.6: Synthetic data (N = 20000) with c = 20 clusters.

Figure 5.7: Time v. dimensionality. For clarity, only the uniform distribution and one level of clustered data (c = 100) are shown. This experiment demonstrates that some structure is required to accelerate max-kernel in high dimensions.

Figure 5.8: Distance computations v. dimensionality. The data is less clustered (c = 20) than in Figure 5.7. For kd-trees, clustering hurts performance when d < 8.

More surprising is that the kd-tree method consistently outperformed the anchors hierarchy on uniform data even up to d = 40; the depth of a balanced binary kd-tree of 20000 particles and leaf size 25 is ten, so for d > 10 there are many dimensions that are not split even a single time! This is likely due to the data distribution having such a low variance that for every query point, the entire tree needs to be expanded (i.e., virtually no pruning occurs).

Of more practical interest are the results for clustered data. It is clear that the distribution vastly affects the runtime of dual-tree algorithms; at d = 20, performing max-kernel with the anchors method was six times faster on clustered compared to uniform data. We expect this effect to be even greater on real data sets, as the clustering should exist on many scales rather than simply on the top level as is the case with our synthetic data.
It is also interesting to note the different effect that clustering had on kd-trees compared to metric trees. For metric trees, clustering always improved the runtime, albeit by only a marginal amount in low dimensions. For kd-trees, clustering hurt performance in low dimensions, only providing gains after about d = 8. The difference between the two methods is shown in Figure 5.9.

Figure 5.9: Time v. dimensionality; ratio to kd-tree = 1. Metric trees are better able to properly index clusters: the more clustered the data, the smaller the dimensionality required for metric trees to outperform kd-trees (d = 30 for somewhat-clustered data, d = 15 for moderately-clustered data, and d = 6 for significantly-clustered data). The cross-over point is indicated with a pale gray vertical line.

5.3.2 Kernel bandwidth

To measure the effect of different kernels, we test both methods on a 1-D uniform distribution of 200,000 points, and use a Gaussian kernel with bandwidth (σ) varying over several orders of magnitude. The source weights were generated randomly from U([0,1]). The number of distance computations required was reduced by an order of magnitude over this range (Figure 5.10). This result was the inverse of the trend we were expecting. Intuitively, we expected the pruning to become more efficient as the bandwidth narrowed, since near points will have much stronger influence compared to farther points in this case. The explanation for this result soon became clear. Consider unweighted max-kernel. In this case, the strength of the kernel between two points is only important relative to the strength

Figure 5.10: Effect of kernel choice: distance computations v. bandwidth of a Gaussian kernel.
The work done by the algorithm drops to 10% of its maximum value as σ is varied, attaining a point where it is almost indistinguishable from the distance transform. The distance transform is effectively independent of this variation.

of the kernel between other points. Thus the magnitude is irrelevant, and unweighted max-kernel will perform identically under different kernel bandwidths. In the weighted case, however, there is interplay between the source weights and the kernel; nodes with a low maximum weight can more easily be pruned. Thus, as the kernel bandwidth is increased, the importance of the weights relative to the kernel strength in determining max influence is increased, and thus more pruning opportunities are afforded. Our choice of a uniform distribution of source weights reduces the contrast between the nodes as well, as only the maximum weight in each node is meaningful for making pruning decisions. But the maximum value of N uniform draws from [0,1] is distributed with mean and variance as follows:⁵

μ_max = N / (N + 1),    σ²_max = N / ((N + 1)²(N + 2));

the maximum weight in most nodes is very close to 1, making it hard to use the weights to prune nodes.

5.4 Conclusion and an application

5.4.1 Maximum a posteriori belief propagation

Although we have focused on applications of max-kernel in Sequential Monte Carlo, we present its use in MAP belief propagation due to its importance and widespread use. Given a graphical model with latent variables u_{1:t}, observations z_{1:t}, and potentials ψ_{kl}, φ_k, the joint probability distribution is admitted:

p(u_{1:t}, z_{1:t}) ∝ ∏_{k,l} ψ_{kl}(u_k, u_l) ∏_k φ_k(u_k, z_k).

We are interested in computing the maximum a posteriori estimate,

u^{MAP}_{1:t} = argmax_{u_{1:t}} p(u_{1:t} | z_{1:t}).

We can use the standard max-product belief propagation equations for message passing and marginal (belief) computation [38].
The message from node l to node k is given by:

m_{lk}(u_{ki}) = max_j φ_l(u_{lj}, z_l) ψ_{kl}(u_{ki}, u_{lj}) ∏_{r ∈ N(l)−k} m_{rl}(u_{lj}),    (5.3)

where N(l)−k denotes the neighbours of node l excluding k. We can re-write equation (5.3) as a max-kernel problem by setting

{x_j, ω_j}_{j=1}^N = {u_{lj}, φ_l(u_{lj}, z_l) ∏_{r ∈ N(l)−k} m_{rl}(u_{lj})}_{j=1}^N,
{y_i}_{i=1}^N = {u_{ki}}_{i=1}^N,
K(x_j, y_i) = ψ_{kl}(u_{ki}, u_{lj}).

⁵ Let x_1, …, x_N be uniform draws from [0,1]. We want μ_max = E[max_i x_i] = ∫₀¹ x p(x) dx, where p(x) = (d/dx) Pr(max_i x_i ≤ x) = (d/dx) x^N = N x^{N−1}. Hence, μ_max = ∫₀¹ N x^N dx = N/(N+1). The variance calculation is similar.

5.4.2 Relaxation of kernel assumption

A wide range of kernels are covered by assumption 4.2.1, but it still limits the applicability of dual-tree methods. Fortunately, there are cases in which this assumption can be relaxed. The requirements of a triangle inequality and of monotonicity in a distance function are really just requirements that influence can be bounded in some manner. For an efficient algorithm, these bounds should be cheap to compute. An example is the case of multiple kernels, which can all be evaluated at the endpoints of a node, and the maximum (or minimum) taken at each endpoint to achieve bounds. Another example would be a situation in which different kernels operate in different regions of the state space (a common requirement in vision applications).

5.4.3 Summary

Weighted maximum-kernel problems are common in statistical inference, being used, for instance, in belief propagation and MAP particle filter sequence estimation. We develop an exact algorithm based on dual-tree recursion that substantially reduces the computational burden of this procedure for a wide variety of kernels. It is particularly important when the state space lies on a multi-dimensional Monte Carlo grid, where, to our knowledge, no existing acceleration methods can be applied.
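The reduction of the max-product message update to a weighted max-kernel call can be checked on a toy example (all potentials, particle values, and incoming messages below are hypothetical, chosen only for illustration):

```python
import math

def max_kernel(sources, weights, targets, kernel):
    """Generic weighted max-kernel: f(y) = max_j w_j * kernel(x_j, y)."""
    return [max(w * kernel(x, y) for x, w in zip(sources, weights))
            for y in targets]

def psi(a, b):
    # Toy symmetric pairwise potential psi_{kl}.
    return math.exp(-abs(a - b))

def phi(u):
    # Toy local evidence phi_l(u, z_l) with the observation folded in.
    return math.exp(-u * u)

u_l = [0.0, 0.5, 1.0]        # particles at node l
u_k = [0.2, 0.8]             # particles at node k
incoming = [1.0, 0.7, 0.4]   # product of messages from N(l)-k, per particle

# Direct max-product message m_{lk}(u_k)...
direct = [max(phi(ul) * psi(uk, ul) * m for ul, m in zip(u_l, incoming))
          for uk in u_k]

# ...equals a max-kernel call with source weights w_j = phi(u_lj) * incoming_j.
weights = [phi(ul) * m for ul, m in zip(u_l, incoming)]
via_kernel = max_kernel(u_l, weights, u_k, psi)
```

With N particles per node, each message costs O(N²) computed directly; the dual-tree algorithm accelerates exactly the `max_kernel` call.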
The method we present speeds up the inner loop of belief propagation, which means that it can be combined with other acceleration methods such as node pruning and dynamic quantization [6] to achieve even faster results, albeit at the expense of the loss of accuracy that those methods entail. The techniques we present could also be integrated seamlessly into hierarchical BP [12].

We also look at the other variables that affect the performance of dual-tree recursion, such as dimensionality, data distribution, spatial index, and kernel. These parameters have dramatic effects on the runtime of the algorithm, and our results suggest that more exploration is warranted into these effects; behaviour as N varies is only a small part of the story.

Chapter 6

A better(?) algorithm

    Portez cela plus loin. (Carry this further.)
    — Erik Satie, Gnossienne No. 3

Algorithms based on dual-tree recursion all have a similar structure. A spatial index is built on the source and target particles, and each node is decorated with data that describes the properties of the particles contained in the node. Sum-kernel, for instance, requires that each source node maintain the sum of the weights of the contained particles. This data, called sufficient statistics [35], is used along with the constraint that the particles lie in the spatial region defined by the node to produce bounds, which enable nodal inclusion or exclusion (Section 4.2.3). For the sum- and max-kernel algorithms, these bounds are bounds on the kernel influence exerted on target particles by source particles. In many cases the bound calculations are relatively rudimentary; little aside from the spatial constraint of the node is used. One route for improving the performance of a dual-tree algorithm is thus to make use of more comprehensive sufficient statistics to calculate better node-node bounds.
We expect that these better bounds will be more expensive to compute, but the hope is that fewer nodes will ultimately need to be expanded to achieve the desired error bounds (sum-kernel) or find the particle of maximum influence (max-kernel).

In this chapter we develop a "better bounds" max-kernel algorithm. At each node, we store the distance from each particle to the node's centre of mass. When calculating bounds, we enforce these distances as a constraint (i.e., we assume a particle x lies on a shell of distance d_x from the centre of mass), hence achieving tighter bounds. This might sound expensive, but in the Gaussian kernel case, a one-dimensional distance transform can be used to drastically accelerate calculation of bounds; the new bounds are less than twice as costly to compute as the traditional dual-tree bounds. We present two new algorithms that use the improved bounds: single-tree distance transform (STDT) and dual-tree distance transform (DTDT).

Unfortunately, although the new bounds can be orders of magnitude better, the impact of this in terms of improving the runtime of the algorithm is marginal. We show that the new bounds reduce the number of kernel evaluations by as much as 50%, though the total computation time is not significantly reduced in most cases due to the increase in overhead. Based on this discouraging result and a similar failed attempt at an improved sum-kernel algorithm [31], we conjecture that improving node-node bounds in dual-tree algorithms is an exercise unlikely to produce significant algorithmic improvement. It does, however, shed significant light on the behaviour of dual-tree algorithms.

6.1 Bounding in dual-tree algorithms

Dual-tree and single-tree recursion operate by using distance-related bounds to include or exclude nodes in the source and target spatial index trees. In general, these bounds are the solutions to optimization problems given a set of constraints.
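In the simplest case (two metric-tree balls) these optimization problems have closed-form solutions via the triangle inequality. A hedged sketch, with illustrative names and the Euclidean metric standing in for a general one:

```python
import math

def ball_distance_bounds(c_x, r_x, c_y, r_y):
    """Lower/upper bounds on the distance between any point in ball(c_x, r_x)
    and any point in ball(c_y, r_y), from the triangle inequality."""
    d = math.dist(c_x, c_y)  # Euclidean metric, as an example
    return max(0.0, d - r_x - r_y), d + r_x + r_y

def influence_bounds(c_x, r_x, c_y, r_y, w_max, kernel):
    """For a monotonically decreasing kernel, the upper influence bound
    uses the lower distance bound, and vice versa. Returns (lower, upper)."""
    lo_d, up_d = ball_distance_bounds(c_x, r_x, c_y, r_y)
    return w_max * kernel(up_d), w_max * kernel(lo_d)
```

These are exactly the "vanilla" bounds the rest of the chapter tries to improve upon: only the node centres, radii, and the maximum weight are used.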
Consider the range-counting example (example 4.2.1) with metric trees as spatial indices. We wish to compare an X-node with centre c_X and radius r_X to a Y-node with centre c_Y and radius r_Y. Determining the bounds is equivalent to solving the following optimization problem:

Optimize
    max_{x,y} δ(x, y) and min_{x,y} δ(x, y)
Subject to the constraints
    δ(x, c_X) ≤ r_X and δ(y, c_Y) ≤ r_Y.

It is reasonable to assume that additional constraints will lead to bounds which are closer to the true value determined by the actual distribution of particles in the nodes. In the limit, we could store the position of every particle in a node, and "constrain" the optimization problem to particles lying at these positions.¹ To usefully improve bounds in dual-tree algorithms, we must find sufficient statistics which are both cheap to compute and which result in optimization problems which are efficiently solved.

¹ Such an exercise would yield the exact answer, but the resulting optimization problem will have complexity equivalent to the original N-body problem, hence achieving no computational gain.

6.1.1 A bounds-tightening regime for max-kernel

The dual-tree max-kernel algorithm described in Chapter 5 has as sufficient statistic the maximum weight of a particle in an X-node, ω*(X). The optimization problem is in this case (assuming X and Y are r_X- and r_Y-balls, respectively):

Optimize²
    extrema { max_i ω_i · K(δ(x_i, y)) }
Subject to the constraints
    X = {x_i, ω_i}               X is an arbitrary set of weighted particles;   (6.1)
    ∀ x_i, δ(x_i, c_X) ≤ r_X     all x_i lie in X;                              (6.2)
    ∀ x_i, ω_i ≤ ω*(X)           all x_i have weight less than a maximum;       (6.3)
    ∃ x_j, ω_j = ω*(X)           the maximum weight is attained;                (6.4)
    δ(y, c_Y) ≤ r_Y              y lies in Y.                                   (6.5)
Instead of allowing that the set of source particles be of arbitrary cardinality, with arbitrary weights (less than a maximum), and arbitrary position within X, we will impose significant constraints on the position of the points. First, we calculate the centre of mass³ of the node, and the distance of each source particle to the centre of mass (d_i = δ(x_i, m_X)). We also constrain the weights to be those of the particles in X. These distances and weights become spatial constraints for the optimization:

Optimize
    extrema { max_i ω_i · K(δ(x_i, y)) }
Subject to the constraints
    X = {x_i, ω_i}_{i=1}^{N_X}   X is a particle set of size exactly N_X;   (6.6)
    ∀ x_i, δ(x_i, m_X) = d_i     x_i is distance d_i from m_X;              (6.7)
    ∀ x_i, ω_i is given          particle weights are exactly specified;    (6.8)
    δ(y, c_Y) ≤ r_Y              y lies in Y.                               (6.9)

² We switch to using the notation extrema to mean finding both the global maximum and minimum of this function.

³ We use the unweighted centre of mass, which minimizes the distances to this point. The weighted centroid may be more effective in the case of high weight variance in the node, as it should force high-weight particles to be closer to the centroid. If the spatial index is a metric tree, the anchor (node centre) could also be used, but using the centre of mass leaves us the freedom to use any spatial index.

These constraints restrict the possible data distribution considerably; we exactly specify the number of particles rather than allow unboundedly many, exactly specify their weights, and significantly constrain their position.⁴ In fact, rotation on a hypershell is the only freedom remaining.

6.1.2 Solving the optimization

The constraints we have chosen induce a more difficult optimization problem than does vanilla max-kernel. It remains, however, efficiently solvable. Consider the following geometric interpretation, which assumes the particles exist in a vector space.
Consider a two-dimensional X-node containing particles that can rotate around the centroid. Let a be a line originating at a query point q and intersecting the node's centroid. Any particle x can be rotated about the centroid toward q to lie on a. This rotation cannot increase the distance from x to q (and thus cannot decrease the influence between x and q). Consequently, the maximum influence on q after rotating all the particles cannot be less in this configuration. Finding the max-influence in this configuration gives an upper bound on the max-influence in the original node. Figure 6.1 demonstrates this graphically.

⁴ The new constraints are not strictly more restrictive, however, as constraint (6.7) may not imply constraint (6.2) in some cases (for instance, a metric tree node where the centroid is far from the node's centre). In our experiments we have not found this to be problematic, but constraint (6.2) can be added to the new constraint set if desired.

Figure 6.1: An upper bound on maximum influence obtained by rotating the particles in X.

We are now required to solve a one-dimensional max-kernel problem to obtain this bound, at a cost of O(N_X). Further, the rotation required by each particle depends on the position of q. However, the one-dimensional position of the particles along a is the same for any q! The only change is the distance from q along a. But this is a case of many queries to the same set of one-dimensional source points, which is precisely the situation where the distance transform can be applied. Hence, we store for each X-node the lower envelope of parabolas precomputed using the distance transform (Figure 6.2). When we need to compute a bound, we query the cached lower envelope based on the distance of the query point to the centroid, δ(m_X, q).
This reduces the cost from O(N_X) to O(log R), where R ≤ N_X is the number of points that comprise the lower envelope, often significantly less than the number of points in the node.

Figure 6.2: Lower envelope using the distance transform.

To obtain a lower bound, one can similarly visualize the rotation of particles away from q, but still onto a (Figure 6.3).

Figure 6.3: A lower bound on maximum influence obtained by rotating the particles in X.

This produces a one-dimensional distribution on a which is the inverse of the distribution in Figure 6.1. Thus there is no need to compute a second lower envelope of parabolas; instead we can just query the upper-bound envelope with −δ(m_X, q) as the distance.⁵

Remark. The intuition to which we have appealed is plausible in Euclidean space, but the concept of "rotation" is nonsensical in an abstract metric space. We can, serendipitously, make a similar argument in the general case by invoking the triangle inequality. Consider a centroid c, source particle x and target particle q. Rotating x on the axis intersecting q and c is, essentially, assigning x to a new position, x', where δ(x', c) = δ(x, c) and δ(x', q) = δ(q, c) − δ(x, c). The distance of the new position x' to q must be no greater than δ(x, q), as

δ(q, x') = δ(q, c) − δ(c, x')              (by construction)
         ≤ δ(q, x) + δ(x, c) − δ(c, x')    (triangle inequality)
         = δ(q, x)                         (since δ(c, x') = δ(x, c)),

hence the influence on q must be no less than that of x (see Figure 6.4). The proof for rotating away is analogous.

⁵ In the common case that the influence of the kernel dominates the effect of the source weights, the answer to this query is almost always the particle closest to the centroid. In these cases, linear scan can be faster than binary search for querying.

Figure 6.4: Bounding in a metric space.

6.2 The algorithm

We used the bounding strategy of the previous section to implement single-tree and dual-tree versions of the max-kernel algorithm.
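The envelope machinery underlying that strategy can be sketched as follows. This is a hedged illustration, not the thesis implementation: a Felzenszwalb-Huttenlocher-style lower envelope of parabolas with per-parabola offsets, assuming a Gaussian kernel, strictly positive weights, and distinct 1-D positions. The identity used is max_i ω_i exp(−(q−d_i)²/2h²) = exp(−min_i[(q−d_i)² − 2h² log ω_i] / 2h²), so the weighted 1-D max-kernel reduces to one envelope query:

```python
import bisect
import math

def build_envelope(positions, costs):
    """Lower envelope of parabolas f_i(q) = (q - positions[i])**2 + costs[i].
    Positions must be sorted ascending and distinct.
    Returns (hull, z): hull[j] is minimal on the interval (z[j], z[j+1])."""
    hull = [0]
    z = [-math.inf, math.inf]
    for i in range(1, len(positions)):
        while True:
            j = hull[-1]
            # Horizontal coordinate where parabolas i and j intersect.
            s = ((positions[i] ** 2 + costs[i]) - (positions[j] ** 2 + costs[j])) \
                / (2 * positions[i] - 2 * positions[j])
            if len(hull) > 1 and s <= z[-2]:
                hull.pop(); z.pop()   # parabola j is nowhere minimal
            else:
                break
        hull.append(i)
        z[-1] = s
        z.append(math.inf)
    return hull, z

def query_envelope(q, positions, costs, hull, z):
    """Minimum over all parabolas at q, in O(log R) via binary search."""
    idx = min(max(bisect.bisect_left(z, q) - 1, 0), len(hull) - 1)
    j = hull[idx]
    return (q - positions[j]) ** 2 + costs[j]

def max_gaussian_influence(q, positions, weights, h):
    """max_i w_i * exp(-(q - d_i)^2 / (2 h^2)), computed via the envelope."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    d = [positions[i] for i in order]
    c = [-2.0 * h * h * math.log(weights[i]) for i in order]
    hull, z = build_envelope(d, c)
    return math.exp(-query_envelope(q, d, c, hull, z) / (2.0 * h * h))
```

In the algorithm proper the envelope would be built once per X-node at preprocessing time and only queried when bounding; rebuilding it per query, as above, merely keeps the sketch self-contained.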
The preprocessing step for both is the same: after the spatial index is built on the source particles, it is decorated with the relevant sufficient statistics. Specifically, for each X-node we compute

• the centre of mass (or centroid): c_X
• the distances of all particles x_i ∈ X to c_X: d_i
• the one-dimensional lower envelope of the distance transform for the points {d_i, ω_i}

This additional preprocessing does not add asymptotic complexity to the cost of building and decorating the tree in the original algorithm; the overhead is approximately 10-20% (see experiments).

The changes to the recursion phase of the algorithm are minimal. Instead of using bounds on the distance between a target particle and an X-node, the target's distance to the centroid is computed, and the upper and lower bounds on the maximum influence are computed using the distance transform. In the dual-tree case, we have an X-node and a Y-node. For the upper bound, the query point is taken to be the closest point to X in Y, and for the lower bound, the farthest. This algorithm is not significantly more complicated to implement than the original dual-tree max-kernel algorithm.

6.2.1 Discussion

Sharper upper bounds should lead to nodes being pruned earlier in the search, which saves their children being expanded and considered separately. Tighter lower bounds lead to a faster increase in the pruning threshold, which also leads to more aggressive pruning. Additionally, it is reasonable to expect that these two effects synergize. It is difficult to quantify how much improvement in bounds will be attained, or (more importantly) how much additional pruning will occur over the original algorithm. There are two scenarios in which we can see the potential for large gains.
One is the case of a large spread of particle weights in a node: in this case, the weight is not distributed evenly in the node, and constraining the position of the high-weight particles should prove effective. Another scenario is a node with relatively evenly-distributed weight; distance is more important than weight in this case. Here, the lower bound is improved significantly, as any particle close to the centroid will still be close to the centroid after rotation away from the query point. An example of this behaviour is in Figure 6.3: the distance used in the bound is reduced by about half the span of the node.

Unfortunately, the effects of dimensionality reduce the effectiveness of the improved bounding. First, the area of a d-dimensional hypershell grows exponentially in d, which makes the bounds increasingly loose. Further, for a uniform d-dimensional particle distribution, an increasing fraction of a node's particles will be located near the boundary of the node. There will be, in expectation, 2^{d−1} times more particles of distance greater than r/2 from the centre than particles of distance less than r/2 from the centre. The only consolation is that real datasets typically exist on a low-dimensional manifold; it is not uncommon for a hundred-dimensional real dataset to behave like a uniform distribution of dimension < 10 [35].

6.3 Results

We tested the STDT and DTDT algorithms in a variety of synthetic settings. All data is generated uniformly distributed in [0, 1]^d with source weights drawn from U([0,1]). We used a Gaussian kernel with two bandwidths. In all tests, we used metric trees created using the Anchors hierarchy method. We measured cpu time, distance computations (kernel evaluations), the total number of leaf nodes examined, and the number of pruned X-nodes. In one experiment, we additionally measured the number of pruned nodes by their tree depth.
Legend labels "dual-tree" and "single-tree" always refer to the original algorithm, and "STDT"/"DTDT" are used for the better bounds algorithms described in this chapter. In some tests, we distinguish between the preprocessing time and recursion time. The preprocessing time includes building the spatial indices and decorating the nodes with the relevant sufficient statistics.

6.3.1 STDT results

The most improvement was seen in the single-tree case. Distance computations are reduced by half; cpu time is reduced by up to 30%. The better bounds algorithm was faster for dimensionality up to d = 12 (of uniformly-distributed data⁶).

6.3.2 DTDT results

The results for DTDT are considerably less impressive. The gain in distance computations is considerably more modest, and the algorithm is only faster than the original algorithm in a few cases. This is due to two factors. First, as dual-tree recursion is more efficient than single-tree, the preprocessing takes up more time relative to the whole, hence the increase in preprocessing runtime is that much more significant. Also, since dual-tree bounding is performed on a pair of nodes, we are only tightening one "half" of the bounds, so to speak. Dual-tree bounds must be looser, as they are shared among a group of target particles.

⁶ While this may seem low, real data is rarely on a manifold of dimensionality this high. As shown in Figure 5.7, even the original dual-tree algorithm is slower than naive computation on uniformly-distributed data at d = 12.
Figure 6.5: STDT, d = 3, h = 0.1/1, time v. N. The recursion time is reduced by as much as 30%, while incurring minimal additional preprocessing cost. The gain is less in the case of a stronger kernel, which indicates that better bounds produce more improvement when the particle weights are important relative to the kernel.

Figure 6.6: STDT, h = 1, d = 3, DCs/leaves v. N. Better bounds reduces total distance computations by half, and examines but a third of the leaf nodes considered by the original algorithm (note log/log scale).

Figure 6.7: STDT, h = 1, N = 10000, performance v. d. Left: cpu time as dimensionality increases. For d > 2, the better bounds algorithm is faster (note log/log scale). In higher dimensions, the recursion time dominates the total run-time of the algorithm. At d = 12, the original algorithm is more efficient.
Right: The relative savings in distance computations, leaves, and particles examined decrease as dimensionality increases, as we expect.

Figure 6.8: DTDT, h = 1, d = 3, cpu time v. N; The performance gains are significantly less pronounced than for single-tree. This is partially due to the preprocessing being a more important part of the algorithm's runtime.

[Figure 6.9 panels: "Dual-tree, narrow bandwidth" and "Dual-tree, wide bandwidth"; distance computations v. N for Dual-tree and DTDT]

Figure 6.9: DTDT, h = 1, d = 3, distance computations v. N; The DTDT algorithm achieves slightly less than half as many distance computations compared to the original algorithm in this setting.

Figure 6.10: DTDT, h = 1, N = 10000, performance v. d; Left: DTDT is only more efficient in terms of cpu time when d = 3, 4 (note log/log scale). Right: The gain in distance computations is modest (which explains the anemic cpu-time performance). After d = 6, there is effectively no gain in distance computations or number of examined leaf nodes (though there is a slight gain near d = 10; see Figure 6.11).

Figure 6.11: DTDT, h = 1, N = 10000, relative comparison; Improvement (in terms of distance computations) of DTDT compared to the original algorithm. Included for comparison are versions of the DTDT algorithm implemented using the new bounds for upper-bounding only and for lower-bounding only. The effect of using both is greater than the sum of using the bounds individually (for instance, at d = 3). The improvement diminishes significantly as dimension increases.

[Figure 6.12 panels: pruned nodes v. tree depth]

Figure 6.12: DTDT, pruned nodes v. tree depth; Left: A single run. Right: Mean over 10 runs.
In both cases, the DTDT algorithm is able to prune nodes effectively at a depth one or two higher in the tree than the original algorithm. The bimodality is an artifact of the algorithm avoiding processing leaf X-nodes until a leaf of the Y tree is reached.

6.4 Summary

We demonstrate a strategy for obtaining tighter bounds in the dual-tree max-kernel algorithm. In particular, we show that a specific set of constraints on the composition of a node of source points leads to an optimization problem that can be solved efficiently using the distance transform.

We find that the sharper bounds do not afford an appreciable increase in performance. The dual-tree version was particularly resistant to efficiency gains. At best, half as many distance computations were performed, which resulted in a cpu time gain of 30% (less for dual-tree). We have only presented synthetic results, but the algorithm was additionally tested on the particle smoothing applications of Chapter 2, and there was little-to-no difference between using the original algorithm and DTDT (we did not test the performance of single-tree algorithms on real-world examples, as they are dominated by their dual-tree counterparts).

We believe that significant performance gains for dual-tree algorithms will be difficult to attain via better bounds. Dual-tree recursion is, essentially, a bounds-tightening strategy: loose bounds are improved by considering smaller nodes. In this chapter, we use sufficient statistics that result in bounds that are considerably tighter while being only slightly more expensive to evaluate; the improvement gained was, at best, a constant factor.

Chapter 7

Conseillez-vous soigneusement. ("Counsel yourself carefully.")
Gnossienne No.
3

ERIK SATIE

Postamble

This work is centred around two key goals: first, an exposition of a computationally expensive class of algorithms that are often ignored due to their cost, despite providing more accurate results than their less expensive brethren; next, a discussion of a set of fast methods for accelerating the aforementioned class of algorithms. Hopefully this will lead to a wider recognition of the value of these techniques, and allow them to shed the stigmata associated with them.

7.1 N² Sequential Monte Carlo

All the algorithms we present are Sequential Monte Carlo (SMC) sampling algorithms. SMC techniques represent probability distributions empirically, using a set of N particles that are draws from the distribution. Monte Carlo methods are of great interest among academics and practitioners as they provide means for solving many thorny problems that do not admit analytic solutions, and boast theoretical convergence proofs in many cases. Further, Monte Carlo sampling is essential for high-dimensional applications, as it is one of the few techniques that escapes the curse of dimensionality.

Probably one of the most important and common applications of SMC is in the setting of Bayesian inference in state-space models. In Chapter 2, we introduced Bayesian state estimation and SMC techniques for performing inference in this setting. Bayesian filtering has an SMC analogue called the particle filter, which runs in O(N). Bayesian smoothing can provide a much more accurate estimate of state as it takes future observations into account. Unfortunately, all particle smoothing algorithms suffer from O(N²) cost. This cost arises because each particle interacts with a density represented by N particles. Chapter 3 presents a new particle filtering strategy called marginal filtering.
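This particle-density interaction can be made concrete with one backward step of the forward-backward particle smoother, sketched below. This is a simplified illustration, not the thesis code: a scalar state and a Gaussian transition density are assumed, and the notation is compressed from Chapter 2.

```python
import numpy as np

def smoothing_step(x_t, w_t, x_next, w_next_smoothed, sigma=1.0):
    """One backward step of the forward-backward particle smoother.

    Computes smoothed weights at time t from smoothed weights at t+1.
    Each of the N particles at time t interacts with all N particles at
    t+1 (and again in the normalizer), hence the O(N^2) cost.
    A scalar state and Gaussian transition density are assumed here.
    """
    # f[j, i] = transition density f(x_{t+1}^j | x_t^i): an N x N matrix
    f = np.exp(-0.5 * ((x_next[:, None] - x_t[None, :]) / sigma) ** 2)
    # normalizer for each particle j: sum_k w_t^k f(x_{t+1}^j | x_t^k)
    denom = f @ w_t
    # w_{t|T}^i = w_t^i * sum_j w_{t+1|T}^j f(x_{t+1}^j | x_t^i) / denom_j
    w_smoothed = w_t * ((f / denom[:, None]).T @ w_next_smoothed)
    return w_smoothed / w_smoothed.sum()
```

The N x N transition matrix `f` is exactly the particle-density interaction described above; every entry requires a kernel evaluation.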
The standard sequential approach to obtaining a sample from the filtered state at time t is to perform importance sampling in the joint space (which yields a sample of the full state trajectory) and then drop the components for timesteps 1, ..., t-1 to obtain the desired sample. Sampling from high-dimensional spaces is inefficient, so this procedure adds variance to the resulting estimate. We derive an algorithm for sampling directly in the marginal space (which does not grow in time as the joint space does), and show both empirically and theoretically that this algorithm provides a reduction in importance weight variance. To sample in the marginal space, it is necessary to integrate out the state at the previous timestep. This requires a particle-distribution interaction, which costs O(N²).

In these two chapters, we concentrated on the Bayesian state estimation problem for clarity of presentation and to appeal to a broader audience. It is, however, straightforward to apply the same ideas in the context of general SMC [37, 22, 21]. The marginal sampling techniques also apply equally well here, but the same types of N² algorithms arise. Further, although we have not discussed implementation details associated with individual models, the form of the models in some applications can also induce N² algorithms. An example is particle filtering in game AI [29], where the algorithm is unnecessarily weakened by the choice of a poor proposal to avoid O(N²) computation. Model parameter estimation can also be accomplished with SMC methods; this also involves N² cost [9].

In every one of these cases, the N² cost is the result of one of two base operations, called sum-kernel and max-kernel. We show how the SMC algorithms can be interpreted in the context of these operations. If the cost of these operations is reduced, the algorithms' complexity is also reduced.

In Part II we present a gamut of techniques which can reduce the cost of sum-kernel and max-kernel to O(N log N), and in some cases O(N).
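Concretely, both base operations take a weighted set of source particles and evaluate an influence at each target particle. A naive Python sketch (with a Gaussian kernel standing in for a generic kernel K; this is an illustration, not the thesis implementation) makes the O(N²) structure explicit:

```python
import numpy as np

def sum_kernel(sources, weights, targets, h=1.0):
    """Naive O(N^2) sum-kernel: f(y_j) = sum_i w_i K(x_i, y_j).
    A Gaussian kernel with bandwidth h is used for illustration."""
    out = np.empty(len(targets))
    for j, y in enumerate(targets):
        d2 = np.sum((sources - y) ** 2, axis=1)   # distances to all sources
        out[j] = np.sum(weights * np.exp(-d2 / (2 * h**2)))
    return out

def max_kernel(sources, weights, targets, h=1.0):
    """Naive O(N^2) max-kernel: f(y_j) = max_i w_i K(x_i, y_j),
    also returning the maximizing source index for each target."""
    out = np.empty(len(targets))
    arg = np.empty(len(targets), dtype=int)
    for j, y in enumerate(targets):
        d2 = np.sum((sources - y) ** 2, axis=1)
        vals = weights * np.exp(-d2 / (2 * h**2))
        arg[j] = np.argmax(vals)
        out[j] = vals[arg[j]]
    return out, arg
```

Every target visits every source, so N targets against N sources cost N² kernel evaluations; the fast methods of Part II avoid most of these evaluations.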
We believe that this revolutionizes the potential usefulness of the algorithms cursed with N² stigmata. Further, we expect that this will encourage research into better SMC algorithms by reducing the need for concerns about the higher complexity that these algorithms generally entail.

We close with a cautionary note. First, the fast methods have restrictions on their applicability. Although they cover a wide range of common settings, they do not apply to the most general sum-kernel or max-kernel problem. The most important example is a discrete-state HMM with arbitrary discrete transition potentials. Second, due to the overhead of fast methods, the resulting accelerated algorithms are still slower than the notoriously efficient O(N) particle filter. Thus, it is not an automatic decision to use N² algorithms (though it should no longer be an automatic decision to reject them!).

7.2 Fast methods

In Part II, we focused on sum- and max-kernel and means of accelerating them. A central idea is that of dual-tree recursion, which describes a family of techniques that efficiently evaluate N-body problems (i.e., problems which are described by a set of source particles exerting an influence on a set of target particles). Dual-tree recursion works by locating nodes of source and target particles that are spatially proximate at various levels of refinement. Source nodes are compared to target nodes; it may be possible to deal with the influence of all the particles in the nodes simultaneously, saving the need to examine individual particles. If not, nodes at a finer level of granularity are considered.

For sum-kernel, the fast Gauss transform and improved fast Gauss transform can be used when the kernel is Gaussian. Both exploit series expansions to achieve performance gains.
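The node-pruning pattern behind tree recursion can be sketched as follows. This is a minimal single-tree max-kernel variant in Python: the node structure (axis-aligned bounding boxes over a median split), the bound, and the Gaussian kernel are simplified stand-ins for the metric-tree machinery of Chapter 5, not the thesis implementation.

```python
import numpy as np

class Node:
    """A node of source particles with an axis-aligned bounding box."""
    def __init__(self, points, weights, leaf_size=8):
        self.lo, self.hi = points.min(0), points.max(0)
        self.wmax = weights.max()                 # sufficient statistic for the bound
        if len(points) <= leaf_size:
            self.points, self.weights, self.children = points, weights, None
        else:                                     # split on the widest dimension
            d = np.argmax(self.hi - self.lo)
            order = np.argsort(points[:, d])
            mid = len(points) // 2
            self.children = [Node(points[order[:mid]], weights[order[:mid]], leaf_size),
                             Node(points[order[mid:]], weights[order[mid:]], leaf_size)]

def min_dist2(node, y):
    """Squared distance from target y to the node's bounding box."""
    return np.sum(np.clip(node.lo - y, 0, None) ** 2 +
                  np.clip(y - node.hi, 0, None) ** 2)

def max_kernel_tree(root, y, h=1.0):
    """Single-tree max-kernel: max_i w_i exp(-||x_i - y||^2 / (2 h^2))."""
    best = [0.0]
    def recurse(node):
        # upper bound: best possible weight at the closest point of the box
        ub = node.wmax * np.exp(-min_dist2(node, y) / (2 * h**2))
        if ub <= best[0]:
            return                                # prune the entire node
        if node.children is None:                 # leaf: examine particles
            d2 = np.sum((node.points - y) ** 2, axis=1)
            best[0] = max(best[0], np.max(node.weights * np.exp(-d2 / (2 * h**2))))
        else:                                     # visit the nearer child first
            for c in sorted(node.children, key=lambda c: min_dist2(c, y)):
                recurse(c)
    recurse(root)
    return best[0]
```

The dual-tree versions discussed in Chapters 5 and 6 apply the same prune-or-refine decision to pairs of source and target nodes rather than to a node and a single target.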
Alternatively, a more general class of kernels can be accelerated using a dual-tree algorithm that bounds node-node influence using the distance between the nodes.

Max-kernel for state spaces lying on a regular grid can be computed efficiently using techniques based on the distance transform. This is generally not applicable to SMC algorithms, as their entire purpose is to generate a particle set which non-uniformly covers the state space. We show how the distance transform can be simply extended to handle irregular grids, but only in one dimension. Accelerating higher-dimensional max-kernel is left unsolved.

As such, in Chapter 5 we present a novel algorithm for max-kernel in general state spaces. We can upper-bound the influence of a node of source particles on a node of target particles by considering the minimum distance between them. If this bound is lower than a threshold (determined by lower-bounding the max influence), the entire node can be pruned. Since we can evaluate node-node interactions, we can use dual-tree recursion. Our experiments show that this algorithm enables substantial performance gains on synthetic and real-world problems, though it is less efficient than the distance transform where the latter can be applied.

Lastly, we develop an enhanced bounding strategy for max-kernel. By increasingly constraining the distribution of particles in a node, the upper and lower node-node bounds can be tightened considerably. Further, the distance transform can be applied to compute the bounds extremely quickly. We found that the new algorithm reduces the distance computations performed by as much as 50%, which is not terribly impressive: the new bounds have higher cost, and thus the new strategy is only occasionally faster than the original dual-tree max-kernel algorithm, and then only slightly. This is a gloomy result.
Worse, in [31] we developed a bound-improving strategy for sum-kernel (similar, but based on a different type of spatial constraint), and the results were similarly anemic: though the bounds were up to orders of magnitude tighter, the overall runtime of the algorithm was not significantly affected. A possible explanation is that dual-tree recursion is, in an essential respect, a means of taking loose node-node bounds and tightening them by considering smaller nodes. Thus, improving node-node bounds in other ways does not yield notable gains. We conjecture that any attempt at improving bounding by leveraging richer sufficient statistics will yield at best marginal improvements in runtime. Of course, to be wrong would be delightful.

Bibliography

[1] C Andrieu, M Davy, and A Doucet. Improved auxiliary particle filtering: Application to time-varying spectral analysis. In IEEE SCP 2001, Singapore, August 2001.

[2] N Bergman. Recursive Bayesian Estimation: Navigation and Tracking Applications. PhD thesis, Department of Electrical Engineering, Linköping University, Sweden, 1999.

[3] M Briers, A Doucet, and S Maskell. Generalised two-filter smoothing for state-space models. Submitted, 2005.

[4] Mark Briers, Arnaud Doucet, and Simon Maskell. Smoothing algorithms for state space models. Submitted, 2005.

[5] E. Chavez, G. Navarro, R. Baeza-Yates, and J.L. Marroquin. Searching in metric spaces. ACM Computing Surveys, 33(3):273-321, September 2001.

[6] J M Coughlan and H Shen. Shape matching with belief propagation: Using dynamic quantization to accommodate occlusion and clutter. In GMBV, 2004.

[7] N de Freitas. Rao-Blackwellised particle filtering for fault diagnosis. In IEEE Aerospace Conference, 2001.

[8] O. Devillers and M. Golin. Incremental algorithms for finding the convex hulls of circles and the lower envelopes of parabolas. Inform. Process. Lett., 56(3):157-164, 1995.

[9] A Doucet, C Andrieu, and V B Tadic.
Online sampling for parameter estimation in general state space models. In Proc. IFAC SYSID, 2003.

[10] A Doucet, N de Freitas, and N J Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, 2001.

[11] P Fearnhead. Sequential Monte Carlo Methods in Filter Theory. PhD thesis, Department of Statistics, Oxford University, England, 1998.

[12] P Felzenszwalb and D Huttenlocher. Efficient belief propagation for early vision. In CVPR, 2004.

[13] P Felzenszwalb, D Huttenlocher, and J Kleinberg. Fast algorithms for large-state-space HMMs with applications to web usage analysis. In NIPS, 2003.

[14] Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Distance transforms of sampled functions. Technical Report TR2004-1963, Cornell Computing and Information Science, 2004.

[15] S J Godsill, A Doucet, and M West. Maximum a posteriori sequence estimation using Monte Carlo particle filters. Ann. Inst. Stat. Math., 53(1):82-96, March 2001.

[16] A Gray and A Moore. 'N-Body' problems in statistical learning. In NIPS, pages 521-527, 2000.

[17] A Gray and A Moore. Nonparametric density estimation: Toward computational tractability. In SIAM International Conference on Data Mining, 2003.

[18] L Greengard and V Rokhlin. A fast algorithm for particle simulations. Journal of Computational Physics, 73:325-348, 1987.

[19] L Greengard and J Strain. The fast Gauss transform. SIAM Journal on Scientific and Statistical Computing, 12(1):79-94, 1991.

[20] L Greengard and X Sun. A new version of the fast Gauss transform. Documenta Mathematica, ICM(3):575-584, 1998.

[21] Firas Hamze and Nando de Freitas. A sequential Monte Carlo approach to inference and normalization on pairwise undirected graphs of arbitrary topology. Submitted, 2005.

[22] Yukito Iba. Population Monte Carlo algorithms. Transactions of the Japanese Society for Artificial Intelligence, 16:279-286, 2001.
[23] A T Ihler, E B Sudderth, W T Freeman, and A S Willsky. Efficient multiscale sampling from products of Gaussian mixtures. In NIPS 16, 2003.

[24] M Isard and A Blake. A smoothing filter for condensation. In Proceedings of the 5th European Conference on Computer Vision (ECCV), volume 1, pages 767-781, 1998.

[25] S J Julier and J K Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Proc. of AeroSense: The 11th International Symposium on Aerospace/Defence Sensing, Simulation and Controls, Orlando, Florida, volume Multi Sensor Fusion, Tracking and Resource Management II, 1997.

[26] R E Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Eng., 82:35-45, 1960.

[27] S Kim, N Shephard, and S Chib. Stochastic volatility: Likelihood inference and comparison with ARCH models. Review of Economic Studies, 65(3):361-93, 1998.

[28] G Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5:1-25, 1996.

[29] Mike Klaas, Tristram Southey, and Warren Cheung. Particle-based communication among game agents. In AIIDE, 2005.

[30] D Lang and N de Freitas. Beat tracking the graphical model way. In NIPS 17, 2004.

[31] Dustin Lang. Fast methods for inference in graphical models (and beat tracking the graphical model way). Master's thesis, University of British Columbia, 2004.

[32] Dustin Lang, Mike Klaas, and Nando de Freitas. Empirical testing of fast kernel density estimation algorithms. Technical Report TR-2005-03, Dept of Computer Science, UBC, February 2005.

[33] D Q Mayne. A solution of the smoothing problem for linear dynamic systems. Automatica, 4:73-92, 1966.

[34] N Metropolis and S Ulam. The Monte Carlo method. JASA, 44(247):335-341, 1949.

[35] A Moore. The Anchors Hierarchy: Using the triangle inequality to survive high dimensional data. In UAI, pages 397-405, 2000.
[36] R Morales-Menendez, Nando de Freitas, and David Poole. Estimation and control of industrial processes with particle filters. In American Control Conference, 2003.

[37] Pierre Del Moral, Arnaud Doucet, and Gareth Peters. Sequential Monte Carlo samplers. In revision, 2005.

[38] J Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan-Kaufmann, 1988.

[39] M K Pitt and N Shephard. Filtering via simulation: Auxiliary particle filters. JASA, 94(446):590-599, 1999.

[40] C P Robert and G Casella. Monte Carlo Statistical Methods. Springer-Verlag, New York, 1999.

[41] M N Rosenbluth and A W Rosenbluth. Monte Carlo calculation of the average extension of molecular chains. Journal of Chemical Physics, 23:356-359, 1955.

[42] Jaco Vermaak, Arnaud Doucet, and Patrick Perez. Maintaining multi-modality through mixture tracking. In International Conference on Computer Vision (ICCV), 2003.

[43] A J Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Info. Theory, IT-13:260-269, 1967.

[44] C Yang, R Duraiswami, N A Gumerov, and L S Davis. Improved fast Gauss transform and efficient kernel density estimation. In ICCV, Nice, 2003.
Index

N² stigma, ii, 1, 88

Anchors hierarchy, 46, 66, 68, 80

ball tree, see metric tree
Bayesian
  filtering, 1, 5, 7-8, 35, 87
  smoothing, 1, 5, 8-10, 87
  state estimation, ii, 2, 5, 87
Belief Propagation (BP), 71

distance function, 44, 72
distance transform, 53-57, 61
distribution
  joint, see filtering, joint distribution
  particle, see particle filter
dual-tree recursion, 47-49, 51, 52, 57, 58, 73, 74, 89

effective sample size, 13, 14

Fast Gauss Transform, 22, 49-51, 89
  Improved (IFGT), 51, 89
fast methods, 87
filtering
  distribution, 5, 6, 26
  joint, 5, 6

Gaussian-mixture filter, 8

importance weights, 3, 5, 12, 20, 35
influence, 42, 47, 72, 73
  function, 42

joint distribution, see filtering, joint distribution

Kalman filter, 5, 8
  Extended, 8
  Unscented, 8
kd-tree, 45-46, 66, 68
kernel, 42, 52
  assumption, 45, 72
Kernel Density Estimation, 43

marginal distribution, see filtering distribution
Marginal Particle Filter, 2, 26-28
Markov process, 6
max-kernel, 2, 19, 43-44, 53-72, 88
  unweighted, 68
Max-Product, see max-kernel
MCMC, 38
metric space, 44, 58, 59, 78
metric tree, 46, 66, 74, 80

nearest-neighbour, 56
  all-, 44, 56

particle
  fast methods, 3
  filter, 1, 5, 10, 14, 56, 87
  marginal filtering, 2
  Monte Carlo, 1, 11
  smoother, 1, 6, 15-19, 86, 87
  source, 17, 42, 54, 58
  target, 17, 42, 54, 58
predictive density, 8

Sequential Importance Sampling, 12
Sequential Monte Carlo, 1, 5
single-tree recursion, 47-48, 60
smoothing
  Bayesian, see Bayesian smoothing
  forward-backward, 9, 16-17, 22-25
  MAP, 18-22, 56, 61
  two-filter, 9-10, 17-18
spatial index, 45-46, 73
sufficient statistics, 73, 79, 81, 86, 90
sum-kernel, 2, 17, 22, 43, 49-52, 56, 58, 73, 88
Sum-Product, see sum-kernel

Viterbi algorithm, 19, 20, 56
