UBC Theses and Dissertations


Continuous Ply Minimization (Busto, Daniel, 2015)

Continuous Ply Minimization

by

Daniel Busto

B.Sc., The University of British Columbia, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Master of Science

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES

(Computer Science)

The University of British Columbia (Vancouver)

July 2015

© Daniel Busto, 2015

Abstract

This thesis is on the Continuous Ply Minimization problem, which is an exercise in minimizing the effects of uncertainty. Many geometric problems have been studied under models where the exact locations of the inputs are uncertain; this adds a layer of complexity that depends on the level of the uncertainty. The level of uncertainty is captured by the regions in which a given entity may lie. At any given time, the maximum number of these regions that overlap at a single point is called the "ply" of those entities at that time. These problems may be simplified by assuming the entities have low ply at specific times. Previous work has investigated the problem of obtaining low ply at a single target time, in a setting where only a single entity can be probed each time step. It was shown that, given a long enough period of time, one can obtain ply that is within a constant factor of the minimum ply achievable at the target time. Continuous Ply Minimization works under a similar system, but we are interested in maintaining low ply over an entire interval of time. In order to prove results about this problem we introduce several new tools, which aid our examination. We then produce an algorithm whose average ply is within a constant factor of any algorithm's average ply, as long as the entities are not moving. This algorithm relies on maintaining constant competitiveness with a new notion, namely the "optimal blanket value" of a given set of entities. In the case where the entities are moving, if we are given the maximum optimal blanket value, then we can produce an algorithm that maintains ply that is no worse than a constant factor more than that value.

Preface

The body of this thesis is based on original, unpublished research, which was conducted with my co-supervisors David Kirkpatrick and Will Evans.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
1 Introduction
  1.1 Problem Overview
  1.2 Paper Overview
2 Related Work
  2.1 Performance Measures for Online Algorithms
  2.2 Data in Motion
  2.3 The Black Box Model for Kinetic Algorithms
  2.4 Calculations on Uncertain Point Sets
  2.5 Ply Minimization
    2.5.1 Recurrent Case
3 Model
  3.1 Movement Model
  3.2 Probe Model
  3.3 Ply
    3.3.1 Competitiveness
4 Static Case
  4.1 Oblivious Algorithms
  4.2 Intrinsic Ply Ratio
  4.3 Our Algorithm
    4.3.1 x-Blankets
    4.3.2 Upper Bound on Blanket Size
    4.3.3 Rounded Blankets Algorithm
5 Dynamic Case
  5.1 The Bucket Algorithm
    5.1.1 Perception Versus Reality
    5.1.2 Bucket Algorithm Analysis
6 Concluding Remarks
  6.1 Future Work
Bibliography

List of Figures

3.1 This diagram illustrates how the regions are laid out for $\Delta$-arithmetically spaced points.

4.1 Lower bounding the area of the uncertainty regions for entity $e_i$ over time interval $T$ and space interval $I$. The white bars represent the true uncertainty regions, and the grey triangles represent the area we use for a lower bound. Notice the triangle may underestimate the uncertainty regions from the first gap by a large margin. In the figure on the left $p_i$ is such that $\frac{|T|}{2p_i}$ is less than $|I|$; in the figure on the right $p_i$ is much smaller, so $|I|$ is larger than $\frac{|T|}{2p_i}$.

4.2 A graphic displaying the ideas behind our bound on $|b_i^x|$. Here $\Delta$ is four. The black circles represent the static locations of the entities, and the white bars represent their associated intervals in $A$. The entities have been split into four layers to make them easier to see. The eight extremal entities have large intervals, which do not contribute to the size of $|b_i^x|$. The volume of $b_i^x$ can be bounded by summing over the inverse volumes of the white regions of the eight interior entities.

5.1 An illustration of the state of the bucket algorithm at four different time steps. The rectangles represent buckets, which contain a queue of entities inside them, and the dotted lines separate the interval into eight distinct time steps. Each bucket is associated with the time steps it overlaps. The bolded buckets are the buckets the algorithm is considering probing an entity from at that time step. The entities in red, along the bottom of the diagrams, indicate what the algorithm probed at the associated time step.

5.2 Bounds on the possible true x-blanket of $e_i$ at time $t'$ given the true x-blanket of $e_i$ at time $t$. The outer grey cones illustrate the bound on the positions of the extremal points of $\Gamma_i^x(t)$ from the time $t - \frac{1}{4}|b_i^x(t)|$ up to $t$. The middle grey cone shows the potential locations of $e_i$.

5.3 Bounds on the possible perceived x-blanket of $e_i$ at time $t$ given the true x-blanket of $e_i$ at time $t$. Note that the perceived location of any point in $\hat{\Gamma}_i^x(t)$ may be based on its location up to $\frac{1}{2}b_i^x(t)$ time steps ago. The orange cone represents a bound on the location of an entity outside of $\hat{\Gamma}_i^x(t)$.
Note that its right endpoint is necessarily further from $e_i$ than the right endpoint of the leftmost grey triangle.

Chapter 1

Introduction

Motion-related problems have long been of interest to the computer science community, and also to members of related fields. There are many different areas these problems might fall into. The area of motion planning deals with problems related to calculating routes for agents to follow, from an initial position to a goal position, without colliding with obstacles [13]. Kinetic data structures maintain information on certain properties of a point set whose members are moving [11]. The simplest versions of these problems rely on knowing the true locations of certain objects at all times. In practice, this is often not feasible. Frequently we base our perception of a given environment on data that was collected in the past. There is no way to know if the situation has changed since the data was collected. However, stale data are sufficient for many real life tasks. For example, when crossing the street we need to look both ways, but we do not continue to check once it has been established that the street is empty. This is because we know that cars have bounded speed, and thus if a certain stretch of road is empty we conclude our path will remain unobstructed for the near future. Another example of using stale data to predict future events is an Elo-type rating system, used to quantify a player's (or team's) strength in competitive games (e.g. chess). Stronger players generally have higher scores, and so one may reasonably expect that in a tournament setting the player with the highest rating will win. However, such a rating system is based on past performance, and it does not take into account changes in a player's strength over time. Gains may be made due to more experience and practice; on the other hand, a player's strength may drop due to time spent without play. There are also the uncertainties that are inherent in any such system, as a participant's play on any given day may be affected by a myriad of factors, and is not completely regular. However, there generally are bounds on how far off the rating can be, and the more games the rating is based on, the more accurate it will be. A complete beginner has no hope of beating a master. Thus, as long as one has a way to measure the uncertainty associated with a given ranking, one can think of organizing a world championship as the problem of calculating which players could potentially be ranked number one (by looking at their ranking, and then accounting for the uncertainty). Inviting all of those players, and holding the tournament, then resolves the uncertainty, and gives you a precise ordering at that specific moment in time. This example illustrates that our methods may be applied more generally, even though we will mainly focus on the idea of physical entities moving in space.

As the number of autonomous devices being used commercially, personally, and recreationally increases, the interest in algorithms which deal with the interactions between such devices increases. Many of these devices are equipped with signalling equipment so that the device can transmit its precise location to some distant operator. As these entities move it may be infeasible to keep track of the exact location at all times, especially if your application requires several such devices. Thus, if one is doing computation using the locations of these entities, there needs to be some sort of protocol to decide which entity's location should be updated at a given time step, to ensure the computation continues smoothly.
1.1 Problem Overview

In this paper we consider the problem of "Continuous Ply Minimization". It is an abstraction of the kind of problems discussed in the opening. The problem is based on the unpredictable movement of a group of entities. We are not instantaneously alerted to the movement of any given entity, and thus cannot keep accurate data on the locations of all entities. However, we can probe any given entity to receive updated information on its location at any time step, but only for one entity per time step. We want to ensure that no group of entities becomes too clustered. To that end we want to probe the entities in such a way that no matter how they have moved since being probed, the maximum number of entities that could be at the same point, which we refer to as the ply, is small.

It may not always be possible to achieve query patterns which lead to "optimal" overlap. Thus we will often aim to be within a small factor of the best possible behaviour of any algorithm.

1.2 Paper Overview

In the next chapter we review some previous work. This includes online algorithms, data in motion, and previous work on the ply minimization problem. Online algorithms receive new data in the middle of their computation. Data in motion is a model for dealing with information which changes with time, and figuring out what is important to update and what is not. Previous work on the ply minimization problem has shown that good competitive ratios can be achieved at pre-specified time steps.

In the third chapter we set up the model that we will use in our discussion of this problem. This model covers how our entities can move, how we query their locations, and how we measure the effectiveness of our algorithm.

In the fourth chapter we examine a simplification of the problem where none of the entities move. We refer to this as the "Static Case". The lack of motion is known to the algorithm, but not to an observer, so we treat the uncertainty intervals the same way. With this restriction we create an algorithm that maintains ply that is no worse than a constant factor more than the worst ply of any algorithm over the same time period. Additionally we show that this ply is within a logarithmic factor of the best possible ply that could be achieved at any specific time in the interval.

In the fifth chapter we discuss how the problem changes when we remove the static restriction. We show that a tweaked algorithm, which assigns entities to be probed in "windows" of time rather than at specific steps, maintains a relatively good ply. Unfortunately this algorithm relies on having an oracle that gives it some information about the "crowdedness" of the entities.

We conclude with a review of our results, and ideas for future work. This includes ways to potentially improve on the result in chapter five to give a more general algorithm for continuous ply minimization.

Chapter 2

Related Work

There are many different ways uncertainty manifests itself in computational problems. We may be interested in making decisions which will have consequences for a long period of time before all of the information about the behaviour of our system during that time is available, as is the case with online algorithms. It may be expensive to acquire data; consequently, we want to acquire only the data that is crucial to getting an accurate result. The cost of acquiring data comes in many forms: it may require physically moving a device some distance, or procuring more expensive equipment, or the expenditure of some other resource. In the Ply Minimization problem the cost associated with acquiring data is the time it takes to probe an entity. Whatever the cause of the uncertainty, we want to deal with it as gracefully as possible. The algorithms for these kinds of problems are often hard to analyze. Algorithms that know what data to acquire can perform better than "fair" algorithms that may make useless queries searching for the relevant data.
It maybe expensive to acquire data, consequently we want to acquire only the data thatis crucial to getting an accurate result. The cost of acquiring data comes in manyforms, it may require physically moving a device some distance, or procuring moreexpensive equiment, or the expenditure of some other resource. In the Ply Mini-mization problem the cost associated with acquiring of data is the time it takes toprobe an entity. Whatever the cause of the uncertainty, we want to deal with it asgracefully as possible. The algorithms for these kinds of problems are often hard toanalyze. Algorithms that know what data to acquire can perform better than “fair”algorithms that may make useless queries searching for the relevant data.2.1 Performance Measures for Online AlgorithmsThere is a large amount of previous work on problems that deal with data that isupdated in real time. Algorithms that have to deal with data as it is produced, ratherthan analyzing data after it has been generated, are called “online algorithms”. Acanonical example, from the area of computer systems, is paging. When softwarerequests data that is stored on a hard drive, or some other storage medium, the op-erating system needs to decide what data should be kept in main memory and whatshould be removed to make room for the newly requested data. Reading data froma hard drive takes a (relatively) long time, and thus the operating system wants tokeep data in memory if it will be accessed several times, in order to speed up theprogram. However the operating system does not know what data will be needed4again in the future, and thus it cannot make optimal choices about which data tokeep. This is similar to our problem, since we do not know ahead of time whichentities will stay close together, and which will spread apart. If we did know theentities trajectories calculating the optimal query pattern would be possible. Unlikethe paging problem, in the continuous ply minimization problem the algorithm hascontrol over what data it receives at each time step, through its probes.There are many different ways to measure the performance of Online algo-rithms. Reza Dorrigiv and Alejandro Lo´pez-Ortiz reviewed several of the mostpopular methods in their 2005 survey [7].The most commonly used measure of performance for online algorithms is the“Competitive Ratio”. This measure compares the performance of the algorithm tothe performance of the “optimal algorithm”. The “performance” measure dependson some concept of cost, which will depend on the specific application. For exam-ple in the case of paging the cost of a given algorithm is usually formulated as thenumber of cache misses, that is the number of times a program tries to access datanot in main memory. Let A (σ) be the cost of the algorithm A on input σ , andOPT(σ) be the cost of the optimal algorithm. Then A is C(n)-competitive if andonly ifC(n) = max|σ |=n{A (σ)OPT(σ)}.[7]There can be several problems with using such a model of comparison. Onespecific problem is in deciding what the optimal algorithm is. Often, in order tomake that decision, more power it given to the optimal algorithm. For example, theoptimal algorithm may be one that is clairvoyant, and knows all the data it needsahead of time. One has to balance giving power to the optimal algorithm to makeit easy to describe and argue algorithmic costs for, with making sure it remainsreasonable measuring stick for other algorithms’ performance.2.2 Data in MotionIn his 1991 PhD thesis Simon Kahan studied “Data in Motion” [12]. 
2.2 Data in Motion

In his 1991 PhD thesis, Simon Kahan studied "Data in Motion" [12]. Data in motion represents a broad class of problems in which you are trying to answer queries about moving objects. The problem is exacerbated by the fact that the positions of objects are not automatically updated; instead, one has to request their current location. The main premise of the thesis is that as long as the queries you want to answer do not require the exact location of every object, you do not have to update all of their locations. Instead you can probe objects based on whether or not their true value will affect the answer to the query. For example, if the query is "What is the maximum of a set of integer valued variables?", any variable whose range of possible values is bounded above by an integer that is smaller than the minimum bound on another variable could not possibly be the current maximum, and therefore can be ignored in responding to the query. The trade-off this encourages is sacrificing potentially longer query times in the future, if more elements need to be probed, to expedite the query that is being done at the moment. Whether or not this trade-off is worthwhile depends on your model of computation, and how expensive probes are compared to the computation required to determine whether or not they are necessary.
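The maximum-query example translates directly into code. The sketch below (written for this rewrite; all names are hypothetical and appear nowhere in Kahan's thesis) answers the query while probing only the variables that could still be the maximum.

```python
# Maximum of uncertain integer variables: probe only candidates whose upper
# bound reaches the best known lower bound; everything else is ruled out.
def uncertain_max(intervals, probe):
    """intervals: dict name -> (lo, hi); probe: name -> exact current value."""
    best_lo = max(lo for lo, hi in intervals.values())
    known = {}
    for name, (lo, hi) in intervals.items():
        if hi >= best_lo:           # candidate: could still be the maximum
            known[name] = probe(name)
        # variables with hi < best_lo cannot be the maximum and are never probed
    return max(known.values())

# Usage: only 'a' and 'c' are probed; 'b' is excluded by its upper bound.
true_values = {"a": 7, "b": 2, "c": 9}
ranges = {"a": (5, 9), "b": (1, 3), "c": (6, 10)}
print(uncertain_max(ranges, lambda name: true_values[name]))  # -> 9
```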
2.3 The Black Box Model for Kinetic Algorithms

Many classic geometric structures on point sets, such as convex hulls or Delaunay triangulations, are also interesting structures to study when the point set is in motion. The focus of kinetic algorithms is the problem of how to maintain these structures as the entities move [11]. One can reduce the amount of computation necessary by using insight into what changes can occur as time passes. These problems were first examined under the assumption that the motion of the objects was known ahead of time. In the "Black Box Model" the algorithm does not know the trajectories of the points ahead of time, but gets updates at regular intervals. Using the Black Box Model makes it harder to predict when changes will occur in the structure. The Black Box Model has been used in the study of many different kinetic problems, including convex hulls and Delaunay triangulations [5], compressed quadtrees [4], 2-centres [6], and spanners [10].

2.4 Calculations on Uncertain Point Sets

In "The Post Office Problem on Fuzzy Point Sets", Franz Aurenhammer, Gerd Stöckl, and Emo Welzl study two different objectives for partitioning the plane, so that each partition represents a section of the plane which is closest to a certain entity. Their model is very similar to the one used for Voronoi diagrams, except they look at uncertain point sets, where an entity's location may be anywhere within a given disc [2]. The first objective they consider is answering queries about the "probably-closest disc". The goal, in this case, is to partition the plane into regions, one for each entity, so that any point in a given entity's region is most likely closest to that entity, rather than to other entities. In order to calculate which entity a point is closest to, Aurenhammer et al assume that the position of an entity is represented by a uniformly distributed random variable over the disc corresponding to that entity. Aurenhammer et al show how to calculate the probably-closest entity for a given point, and prove that the resulting regions are star-shaped, as long as the discs are disjoint. They then move on to the "expected-closest disc". In this case, the region of entity $e_i$ contains all points whose minimal expected distance over all entities is attained by $e_i$. They show that solving this problem is equivalent to finding a power diagram for the expected minimal distances to the entities. Thus the solution consists of convex polygons, which can be constructed in $O(n \log n)$ time and $O(n)$ space.

Maarten Löffler and Jack Snoeyink showed that, with $O(n \log n)$ preprocessing on a set of uncertainty discs, once the location of each point within its corresponding disc is given, one can compute Voronoi diagrams in linear time [14]. These results are originally presented under the assumption that all the discs have the same radius and are non-overlapping. However, they provide an extension showing that as long as the ply of the discs is bounded the algorithm still works, although the constant in the time bound of the algorithm increases.

Maarten Löffler and Marc van Kreveld showed how to calculate the largest and smallest possible convex hulls of a set of entities that have uncertain locations [15]. They examine several different classes of uncertainty regions, including line segments, circles, and squares, giving polynomial algorithms for calculating the largest and smallest convex hull in each case.

2.5 Ply Minimization

William Evans, David Kirkpatrick, Maarten Löffler, and Frank Staals wrote a paper on a ply minimization problem in which the algorithm only cares about achieving the lowest possible ply at a single target time [8][9]. This is contrasted with our goal of maintaining consistently low ply over an entire time interval.

The adversary in the Evans et al paper is a clairvoyant algorithm, which uses the available time steps as effectively as possible. Hence it only probes during the $n$ steps prior to the specified time step, because it has no need to probe any entity more than once. Evans et al showed that even if one knew the locations of the entities, calculating the optimal query pattern for a single target time is NP-hard. Thus, their adversary is a relatively strong algorithm. The algorithm they constructed to solve their problem queries all entities, then recurses on the half of the entities which will have regions that cover points of high ply at the targeted time. In this way Evans et al [8] focused their queries on high-priority entities. The recursion requires that you have twice as much time to query as you have entities. If there is enough time to run the recursion, then their algorithm ensures a ply which is within a constant multiple of the optimal ply [8].

In previous work the algorithm could project forward to see what the ply would be at the target time if no more probes were made. The algorithm would try to affect the final sizes of the entities' uncertainty regions to produce lower overlap. After each query it can see what the effect will be on the final situation, and update its plan. This allows it to ignore entities which it knows will not participate in a large overlap at the target time. The problem we examine in this paper is different because there is no one single time to project forward to. We are worried about the size of the entities' uncertainty regions at every time step. This means we must choose queries that will be most effective across all future time steps.

2.5.1 Recurrent Case

The recurrent case studied in Evans et al's paper [8] was a step in between their "one-shot" problem and our continuous one. Instead of trying to minimize the ply at one specific time, the recurrent case aims to minimize the ply every time a certain number of time steps have elapsed. The authors note that if the gap between times of interest is long enough (i.e. at least twice the number of entities), one can simply treat every target time as a separate instance of their one-shot problem.
They conclude that, in the case where the algorithm does not have enough time to run the entirety of their algorithm, by alternating between "Round Robin" and their method for minimizing ply they are able to get an $O(\sqrt{\frac{n}{\tau}})$ competitive ratio with the optimal ply (in one dimensional space; the general bound depends on the dimension of the space), where $\tau$ is the length of the gap between probes. Note that as $\tau$ approaches 1, the ratio approaches $\sqrt{n}$. In chapter four we show that Round Robin's ply is $O(\sqrt{n\Delta})$.

Chapter 3

Model

In this chapter we outline the model we use to investigate the Continuous Ply Minimization problem. We work with a set of entities that have real-valued locations, and these entities move unpredictably, but with bounded speed. To maintain an idea of how the entities are located, we probe a single entity at each time step. The aim of this process is to minimize the overlap between the potential locations of the set of entities; we measure this overlap as a quantity known as "ply". It is impossible to use worst case analysis to judge the effectiveness of a given algorithm, since if the entities are tightly packed high ply is unavoidable. Thus, we need different tools to deal with competitiveness. We discuss competing against other algorithms, and introduce the notion of "intrinsic ply" as another measure to compete against.

3.1 Movement Model

Let $\mathcal{E} = \{e_1, ..., e_n\}$ be a set of entities about which we are maintaining information, specifically their current locations. In this paper we restrict our attention to a one dimensional universe; that is, the entities live on the real line, $\mathbb{R}$. The issues that arise from dealing with the continuous version of the ply minimization problem already appear in this simple case, and we believe the findings should extend to higher dimensions.

We need to introduce notation for discussing the position and movement of the entities in $\mathcal{E}$. First we describe the notation for a given entity's position at a specific time.

Definition 1. $l_i(t)$ is the location of entity $e_i$ at time $t$.

Note that since we are dealing with a one dimensional universe, $l_i(t) \in \mathbb{R}$.

Next we discuss how entities move. Each entity's speed is less than $\frac{1}{2}$ unit per time step. This model is universally applicable, since all objects have bounded top speeds; one can simply scale the unit of distance until the upper speed is $\frac{1}{2}$ per unit time. In the extreme case any object's speed is theoretically bounded by the speed of light. More practically, the maximum speed of a given device is usually well studied, and, unless acted on by outside agents, the device will stay within expected operating parameters.

The interactions between many different entities will be crucial in several of our arguments. It will be important to have ways to talk about the distribution of those entities. One notion we will use is the "diameter" of a set of entities, which is simply how far apart the extreme entities of the set are.

Definition 2. The diameter of a set of points $S$ is the furthest distance between two points in $S$. More formally,

$\mathrm{diameter}(S) = \max_{x,y \in S} d(x,y)$

where $d(x,y)$ is the Euclidean distance from $x$ to $y$. The diameter of a set of entities at a given time is the diameter of the set of those entities' locations at that time.

There are many ways this model might be varied to translate better to specific applications. Evans et al [9] study how their results change when there are several classes of entities, each of which has a different bound on its speed. There are many other variables that might be considered, such as bounds on acceleration or turning radius. Many real life devices, such as cars, have bounds on their acceleration. If there were extra information about the direction and magnitude of an entity's motion at a given time, knowing an acceleration bound would allow the algorithm to reduce its uncertainty about the entity's location at a future time. These are all interesting avenues for future work.
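A minimal simulation of this movement model, written purely for illustration (the names are ours, and the random walk is just one admissible adversary under the speed bound):

```python
# Entities on the real line, each moving by less than 1/2 unit per time step.
import random

MAX_SPEED = 0.5  # upper bound on speed, in units per time step

def step_locations(locations):
    """Advance every entity by one time step of bounded, unpredictable motion."""
    return [x + random.uniform(-MAX_SPEED, MAX_SPEED) for x in locations]

def diameter(locations):
    """Definition 2, specialized to one dimension."""
    return max(locations) - min(locations)
```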
3.2 Probe Model

We assume that at time zero we have full knowledge of where all the entities are. At this point in time the entities start to move unpredictably, but with bounded speed, from their starting positions. The interval that contains all potential locations of entity $e_i$ is referred to as the "uncertainty region" of $e_i$. The uncertainty region grows $\frac{1}{2}$ unit in each direction at each time step; thus the volume of an entity's uncertainty region is the difference between the current time and the last time the entity was probed.

In order to keep the ply of the uncertainty regions low, we need to be able to reduce the size of the uncertainty region of a given entity. In our model that tool is a "probe". Probes are instantaneous queries of a given entity's current location. This means that if we probe an entity $e_i$ at time $t$, we receive $l_i(t)$. It is important to note that for any given time we consider the uncertainty region of an entity at that time to be the uncertainty region before we make the probe, rather than after. Hence all entities have uncertainty regions of length at least 1 at all times (since it has been at least a time step since they were probed).

Definition 3. For any given entity $e_i$ and any time $t$, $p_i(t)$ is the last time prior to $t$ that $e_i$ was probed.

Definition 4. The uncertainty region associated with entity $e_i$ at time $t$, $r_i(t)$, is the open ball centered at the location of $e_i$ when it was last probed, with diameter $t - p_i(t)$. More formally:

$r_i(t) = \left( l_i(p_i(t)) - \tfrac{1}{2}(t - p_i(t)),\ l_i(p_i(t)) + \tfrac{1}{2}(t - p_i(t)) \right)$

Ignoring the delay between choosing the entity to probe and receiving the current location of the entity is reasonable as long as the time it takes to acquire the entity's location is short compared to a single time unit. If it is not, such as when working with entities which are spatially distant from the computation device, it may be necessary to compensate for the delay between choosing an entity to query and receiving the requested information. It may also be interesting to consider cases where some entities can have their locations updated more rapidly than others. It is important to note that we ignore the time it takes for the computation to decide which entity the algorithm wants to probe. Again, unless the unit of time is very small, this delay is negligible.

3.3 Ply

The notion of ply is at the heart of our discussion. It is the measure we use to determine how well we are maintaining a clear picture of the current distribution of the entities. Briefly, the ply of a set of entities at a given time is the maximal overlap between those entities' uncertainty regions at that time.

Definition 5. Given a set of intervals $\mathcal{I}$, the depth of a point $p$ at time $t$, $\delta_{\mathcal{I}}(p,t)$, is the number of intervals of $\mathcal{I}$ which overlap that point. That is, $\delta_{\mathcal{I}}(p,t) = |\{I : p \in I,\ I \in \mathcal{I}\}|$.

Definition 6. Given a set of intervals $\mathcal{I}$, the ply of that set of intervals at time $t$, $\rho_{\mathcal{I}}(t)$, is the maximum depth of a point, over all points $p \in \mathbb{R}$, at time $t$. That is, $\rho_{\mathcal{I}}(t) = \max_p (\delta_{\mathcal{I}}(p,t))$.
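The following sketch (ours, not from the thesis) makes Definitions 4 through 6 concrete: it builds an uncertainty region from the last probe, and computes the ply of a set of open intervals with a sweep over endpoints. Later sketches in this rewrite reuse this ply() function.

```python
# Uncertainty regions (Definition 4) and ply (Definition 6) in one dimension.
def uncertainty_region(last_probe_location, last_probe_time, t):
    """Open ball of diameter t - p_i(t) around the last probed location."""
    radius = 0.5 * (t - last_probe_time)
    return (last_probe_location - radius, last_probe_location + radius)

def ply(intervals):
    """Maximum number of open intervals covering a single point.
    Sweep: +1 at each left endpoint, -1 at each right endpoint; because the
    intervals are open, closings are processed before openings at ties."""
    events = []
    for lo, hi in intervals:
        events.append((lo, 1))
        events.append((hi, -1))
    events.sort(key=lambda e: (e[0], e[1]))  # -1 sorts before +1 at equal coords
    depth = best = 0
    for _, delta in events:
        depth += delta
        best = max(best, depth)
    return best

# Example: three regions, two of which overlap around 1.5.
print(ply([(0.0, 2.0), (1.0, 3.0), (4.0, 5.0)]))  # -> 2
```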
Previous work by Evans et al introduced this notion of ply [8]. We will often refer to the ply of a set of entities, by which we mean the ply of the associated uncertainty regions. When the set of intervals is obvious from context it is omitted from the notation.

3.3.1 Competitiveness

There are many different ways we could approach competitiveness. The simplest is to use worst case analysis. As was mentioned at the beginning of the chapter, worst case analysis does not work well for this problem: in the case where all $n$ entities are at the same location, the ply will be $n$ regardless of the sequence of probes. Instead we rely on looking at the competitive ratio between our algorithm's ply and the minimum ply any other algorithm could achieve. The first quantity we consider is the minimum ply an algorithm can achieve at a specific point in time. This quantity is unfairly low, since it is achieved once, and not maintained over the entire time interval. Constructing a bound on this quantity relies on the notion of "uncertainty realizations". An uncertainty realization is a set of intervals which theoretically could be the uncertainty regions for the entities, given only their current positions. Note that the set of intervals which could actually be the uncertainty regions for a given set of entities is much smaller than the set of uncertainty realizations, as the actual uncertainty regions depend on the trajectories of the entities.

Definition 7. For a set of entities $\mathcal{E} = \{e_1, ..., e_n\}$, a set of open intervals $A = \{A_1, ..., A_n\}$ is an uncertainty realization of $\mathcal{E}$ at time $t$ if the following three properties hold:

1. $l_i(t) \in A_i$
2. $|A_i| \in \mathbb{Z}^+$
3. $|A_i| \neq |A_j|$ if $i \neq j$

Note that, as discussed above, $|r_i(t)|$ is never zero, so it makes sense for uncertainty realizations to be restricted to intervals which have positive volume.

Definition 8. For a set of entities $\mathcal{E} = \{e_1, ..., e_n\}$, $\mathcal{A}(\mathcal{E})$ is the set of all possible uncertainty realizations. When the set of entities is clear from context it is omitted.

Our surrogate for the lowest ply that can be achieved at a specific time is called the "intrinsic ply" at that time. Each uncertainty realization has a ply. The intrinsic ply is the minimum ply that could be achieved by an uncertainty realization.

Definition 9. For a set of entities $\mathcal{E} = \{e_1, ..., e_n\}$, the intrinsic ply at time $t$ is $\Delta(t) = \min_A (\rho_A(t))$, where $A$ ranges over the uncertainty realizations in $\mathcal{A}(\mathcal{E})$.
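To make Definition 7 concrete, here is a small checker (ours; it reuses ply() from the earlier sketch) that validates a proposed uncertainty realization and reports its ply. The intrinsic ply of Definition 9 is the minimum of this ply over all valid realizations.

```python
# Validate an uncertainty realization (Definition 7) for entity locations locs.
def is_uncertainty_realization(intervals, locs):
    lengths = [hi - lo for lo, hi in intervals]
    return (
        all(lo < x < hi for (lo, hi), x in zip(intervals, locs))  # l_i(t) in A_i
        and all(L == int(L) and L >= 1 for L in lengths)          # |A_i| in Z+
        and len(set(lengths)) == len(lengths)                     # lengths distinct
    )

# Usage: one candidate realization for three entities, and its ply.
locs = [0.5, 1.5, 4.0]
A = [(0.0, 1.0), (1.0, 3.0), (2.5, 5.5)]
print(is_uncertainty_realization(A, locs), ply(A))  # -> True 2
```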
One can restrict the size of the uncertainty regions to those of size at most $n$, since reducing the size of a region can never increase ply. This is useful because it reduces the number of realizations one has to consider when calculating $\Delta(t)$. This follows fairly directly from the fact that larger uncertainty regions can only increase the ply.

Theorem 1. When calculating $\Delta(t)$ we only need to consider uncertainty realizations which use intervals whose sizes are at most $n$. That is, $\min_{A \in \mathcal{A}(\mathcal{E})}(\rho_A(t)) = \min_{A \in \tilde{\mathcal{A}}}(\rho_A(t))$, where $\tilde{\mathcal{A}} = \{A \in \mathcal{A}(\mathcal{E}) : |A_i| \le n \text{ for all } A_i \in A\}$.

Proof. Assume to the contrary that there is no uncertainty realization, with all intervals of size at most $n$, whose ply is the intrinsic ply. Let $A$ be an uncertainty realization which does achieve intrinsic ply. Note that $A$ contains at least one interval $A_i$ for which $|A_i| > n$.

We can transform $A$ into a set which contains no intervals of length greater than $n$ with the following process. First let $A' = A$. For any $A_i$ in $A'$ such that $|A_i| > n$, there exists an interval $A'_i$ such that $|A'_i| = j \le n$, $A'_i \subset A_i$, $l_i(t) \in A'_i$, and no other $A_k$ in $A'$ has length $j$ (among the $n$ candidate lengths $1, ..., n$, at most $n-1$ are taken by the other intervals). Set $A' = (A' \cup \{A'_i\}) \setminus \{A_i\}$, and repeat until there are no intervals of length greater than $n$.

For any point $p$, the depth at that point for $A'$ is at most as large as in $A$, that is, $\delta_A(p,t) \ge \delta_{A'}(p,t)$, since the only possible change is that some intervals which used to cover $p$ no longer do. Hence $A'$ realizes the intrinsic ply, and contains intervals whose volume is at most $n$. This is a contradiction, and thus $\Delta(t)$ is always realized by an uncertainty realization which contains no interval whose volume is greater than $n$.

Note that $\Delta(t)$ is a measure which is oblivious to the past and future behaviour of the entities; the measure depends only on the current locations of the entities. In this way it has far fewer restrictions on it than an algorithm does. An algorithm has to make decisions based on the present situation, but its worst ply over a time interval depends heavily on what has happened before a given time step, and what happens after it, over the course of the interval.

In the discussion of how well an algorithm can do with respect to the intrinsic ply, it is important to have an example of a point set for which it is hard to maintain low ply. Our canonical example is "arithmetically spaced" points. The intuition behind this example is to force all of the entities as close together as possible while still having low intrinsic ply. This makes sense, since if the entities are close to one another it will be harder to probe the entities so that their uncertainty regions remain pairwise disjoint. Our example is constructed in such a way that the intrinsic ply is one, regardless of the number of entities used. Consider building an example in terms of the uncertainty realization we assign to the entities in the example. Each entity is thought of as corresponding to a "brick" of length equal to the uncertainty region it is assigned in the optimal uncertainty realization. Since we are trying to force the entities as close together as possible, without causing overlap between the regions in the uncertainty realization (as the intrinsic ply will be one), we should have the uncertainty regions all have volume $n$ or less, and have no gaps between the entities' uncertainty regions. However, it is not obvious what order the uncertainty regions should appear in. We decided to place the entities in order of increasing uncertainty region volume; the smallest is placed at the origin, and the larger regions are placed to its right. The example is described formally below.

Definition 10. For any positive integer $n$, let $A(n) = \{A_i\}$ be the set of $n$ open intervals such that the following conditions hold for all positive integers $i$ less than $n$:

1. $A_1$ is the interval $(0,1)$.
2. $A_i$ has volume $i$; that is, $|A_i| = i$.
3. The right endpoint of $A_i$ is the left endpoint of $A_{i+1}$; that is, $\sup A_i = \inf A_{i+1}$.

A set of $n$ entities is "arithmetically spaced" if each entity $e_i$ is at the midpoint of the interval $A_i$.
It is now straightforward to confirm that our construction leads to the ply we intended.

Lemma 2. A set of arithmetically spaced entities has intrinsic ply one.

Proof. By definition, the set of intervals $A(n)$ described in the definition of arithmetically spaced entities is an uncertainty realization of said entities. Each region shares its endpoints with its immediate neighbours, and thus no regions overlap. Thus $A(n)$ has ply one, and hence $\Delta = 1$.

We are also interested in situations where the entities have intrinsic ply values larger than one. We have an analogous concept which represents how tightly packed entities can be if the intrinsic ply is higher than one. We again think of the entities as "bricks", whose lengths are those of the uncertainty regions assigned to them. Consider the smallest $2\Delta$ entities. Each of these entities can be paired with another entity, whose assigned uncertainty region has volume at most $2\Delta$, so that the combined volume of their uncertainty regions is $2\Delta + 1$. Thus all of the first $2\Delta$ entities can be placed in the interval $(0, 2\Delta + 1)$. We repeat this process for each subsequent group of $2\Delta$ entities, creating "blocks" of length $(4i-2)\Delta + 1$. Thus the sizes of the blocks follow an arithmetic sequence. We refer to these sets of entities as $\Delta$-arithmetically spaced entities.

Definition 11. For any positive integer $\Delta$ define the set of intervals $A^1(\Delta)$ to be:

$A^1(\Delta) = \{(0, j) : 1 \le j \le \Delta\} \cup \{(j, 2\Delta + 1) : 1 \le j \le \Delta\}$

We define the rest of the $A^i(\Delta)$'s recursively. Let $m_i$ be the supremum of the intervals in $A^i(\Delta)$; then $m_i$ is also the infimum of the intervals in $A^{i+1}(\Delta)$. Note $m_0 = 0$. Thus, since each $A^i(\Delta)$ has diameter $(4i-2)\Delta + 1$, the value $m_i$ is $2i^2\Delta + i$. Let $A^i_j = (m_{i-1},\ m_{i-1} + 2(i-1)\Delta + j)$ and $A^i_{j+\Delta} = (m_{i-1} + 2(i-1)\Delta + j,\ m_i)$, for $j = 1, ..., \Delta$. $A^i(\Delta)$ is the set of all of these intervals. That is,

$A^i(\Delta) = \bigcup_{j=1}^{2\Delta} A^i_j$

Let

$A(n, \Delta) = \bigcup_{i \le \frac{n}{2\Delta}} A^i(\Delta)$

A set of $n$ entities is "$\Delta$-arithmetically spaced" if entity $e_{2\Delta i + j}$ is located at the midpoint of $A^{i+1}_j$, for $i \ge 0$ and $1 \le j \le 2\Delta$.

Note that our definition of $A(n)$ corresponds to the definition of $A(n,1)$.

Next we argue that the name $\Delta$-arithmetically spaced entities makes sense, since their intrinsic ply is in fact $\Delta$. This result follows directly from our definition.

Lemma 3. A set of $\Delta$-arithmetically spaced entities has intrinsic ply $\Delta$.

[Figure 3.1: This diagram illustrates how the regions are laid out for $\Delta$-arithmetically spaced points.]

Proof. By definition, the set of intervals $A(n, \Delta)$ described in the definition of $\Delta$-arithmetically spaced entities is an uncertainty realization of the given set of entities. In order to show that the ply of $A(n, \Delta)$ is $\Delta$ we argue that the intervals in any given $A^i(\Delta)$ do not overlap with intervals outside of $A^i(\Delta)$, and that within any given $A^i(\Delta)$ the ply is exactly $\Delta$.

Note that, by construction, the union of the intervals $A^i_j$ and $A^i_{\Delta + j}$ is the interval $(m_{i-1}, m_i)$, minus the point where they meet, since the point at which they meet is in neither interval. Thus no intervals in $A^i(\Delta)$ overlap with intervals from $A^k(\Delta)$, for any $k \neq i$. Also note that the intervals in each such pair are non-overlapping. There are $\Delta$ pairs of non-overlapping intervals in $A^i(\Delta)$ (since there are $2\Delta$ intervals in total). Each point in the block is overlapped by at most one of the two intervals in each pair; thus any point is overlapped by at most $\Delta$ intervals, and the ply is at most $\Delta$. The point $m_{i-1} + \frac{1}{2}$ is in all of the first $\Delta$ intervals, hence the ply is at least $\Delta$. Since we have matching upper and lower bounds, the ply of $A^i(\Delta)$ is $\Delta$.

Since none of the intervals overlap with intervals from other $A^i(\Delta)$'s, and the ply within any given $A^i(\Delta)$ is $\Delta$, the set of all the intervals, $A(n, \Delta)$, has ply $\Delta$.
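The construction of Definition 11 is easy to carry out in code. The sketch below (ours, using the split points given above, and reusing ply() from the earlier sketch) builds $A(n,\Delta)$, places each entity at its interval's midpoint, and confirms that the realization's ply matches Lemma 3.

```python
# Build A(n, delta) and the delta-arithmetically spaced entity locations.
def delta_arithmetic(n, delta):
    """Return (intervals, locations); assumes n is a multiple of 2*delta."""
    intervals = []
    m_prev = 0
    for i in range(1, n // (2 * delta) + 1):
        m_i = 2 * i * i * delta + i              # supremum of block i
        for j in range(1, delta + 1):
            split = m_prev + 2 * (i - 1) * delta + j
            intervals.append((m_prev, split))    # length 2(i-1)*delta + j
            intervals.append((split, m_i))       # length 2i*delta + 1 - j
        m_prev = m_i
    locations = [(lo + hi) / 2 for lo, hi in intervals]
    return intervals, locations

A, locs = delta_arithmetic(n=12, delta=2)
print(ply(A))  # -> 2, matching Lemma 3 (intrinsic ply = delta)
```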
However, we will find that no algorithm can compete favourably with the intrinsic ply, because it is a short-sighted measure which ignores the side effects of minimizing ply at one specific point in time. $\Delta$-arithmetically spaced entities will provide an example in which no algorithm can maintain ply that is a constant factor of $\Delta(t)$ at all time steps (the proof is provided in chapter four).

We will also compare our algorithm's performance directly to that of other algorithms. When comparing against other algorithms it no longer makes sense to compare their plies at each time step. An algorithm may behave in a way that makes its ply very low at one time step while it is much higher at another. One does not want to conclude that an algorithm is bad simply because one evaluates its behaviour at a time when the other is doing much better than it would at any other given time step. To that end, we will argue about the maximum ply algorithms produce over some time period.

Chapter 4

Static Case

The ply minimization problem can be hard to approach in its full generality. There are two main sources of difficulty in constructing a competitive algorithm for this problem. The first snag is the uncertainty associated with the movement of the entities. This uncertainty obscures which entity is best to probe, as a probe's effectiveness depends on how close the entity we are probing is to the neighbouring points. The second problem is the fact that we are restricted to probing at most once per time step. In order to gain insight into the techniques and tools we use in the general case, we first consider a simplification of the problem which focuses our attention on this second difficulty. That simplification is the "static case", in which no entities are moving. More formally, in the static case entity $e_i$'s position $l_i(t)$ is a constant, and does not depend on the value of $t$. Because of this fact we drop the time from the expression and use $l_i$ to denote the position of $e_i$ at any time.

In the static case there is no longer any uncertainty associated with the movement of the entities. This constraint means we have a clear picture of each entity's location, and hence of a probe's effect. Specifically, we can calculate what an entity's uncertainty region will be at each time step, based on our probe sequence. Thus, an adversary that knows the trajectories of the entities no longer has a competitive advantage. From the known locations of the entities one could construct an algorithm which follows the optimal probe sequence. However, it is not clear how one would determine the optimal probe sequence. It seems reasonable to surmise that determining the optimal probe sequence would be hard, as in the one-shot case calculating the optimal probe sequence is NP-hard [8].

The main problem is how to choose an entity to probe based on the known locations of the entities and their current uncertainty regions. To approach this problem we first analyze how well an algorithm can maintain low ply if it has no information about the positions of the entities, other than that they are static. Round Robin is a prime example of such an algorithm. We will argue that Round Robin's performance is the best one can get from an "oblivious" algorithm. We show that Round Robin manages to maintain a sub-linear competitive ratio, an interesting result for such a simple algorithm.

We then consider how well algorithms in general can perform as a function of the intrinsic ply, $\Delta$. Although a constant competitive ratio can be achieved at specific points in time, as discussed in the previous work [8], trying to maintain low ply at all times is impossible. Any algorithm that runs for a sufficiently long period of time will have average ply that is at least a $\log(\frac{n}{\Delta})$ factor greater than $\Delta$, for certain sets of entities. Finally, we construct an algorithm which probes entities with frequency proportional to a measure of their "closeness" to other entities, which we call their "x-blanket". The algorithm, which we call the "Rounded Blankets algorithm", achieves ply $O(\Delta \log(\frac{n}{\Delta}))$. More importantly, it achieves constant factor competitiveness with any other algorithm. This shows that the "optimal blanket value" of a given set of entities is indicative of that set of entities' optimal ply in the static case. We use the Rounded Blankets algorithm as a guide for how to build an algorithm in the dynamic case.
4.1 Oblivious Algorithms

We begin our investigation by assuming our algorithm stores no information. Crucially, this means it does not remember an entity's location after it probes that entity. We refer to such algorithms as "oblivious" algorithms. All an oblivious algorithm can do is partition the time steps, and then assign a set of time steps from the partition to each entity. This assignment corresponds to the decision to probe the given entity at each time step in its assigned set. We call an assignment of entities to these partitions a "labelling" of the entities. The only thing that affects this partition is the number of entities, so any particular algorithm has the same behaviour on every set of entities of the same cardinality. However, the resulting ply will be vastly different depending on the locations of the entities. We cannot get constant factor competitiveness with an algorithm which knows where the entities are: if certain entities are close together while others are spread out, the ones that are close together require more probes to minimize ply. The question we aim to answer is how good a competitive ratio we can achieve.

Observation 1. Each oblivious algorithm can be described by its partition of the time steps $P = \{P_1, ..., P_n\}$. At any time $t$ the entity corresponding to the partition $P_i$ has some gap since the last time it was probed. More formally, this gap is $g_i(t) = \min\{t - \tau : \tau < t,\ \tau \in P_i\}$. The volume of the uncertainty region of the entity associated with $P_i$ will be $g_i(t)$ at time $t$. Thus finding the worst case labelling is equivalent to placing uncertainty regions that correspond to these gaps such that the resulting ply is maximized. Note that unlike the intrinsic ply these uncertainty regions must be centered on their associated entity's location.

Round Robin is a simple algorithm, and hence is a nice place to start. Round Robin is the oblivious algorithm that partitions the time steps evenly into the sets $P_i = \{z : z \equiv i \pmod{n}\}$. This means each entity is probed periodically, with a period of $n$ time steps. We show that Round Robin gives the best worst case performance of any oblivious algorithm. We then move on to arguing that it always manages to maintain ply which is $O(\sqrt{n\Delta})$, for any set of $n$ static entities. We then show that, for any given $\Delta$, a set of $\Delta$-arithmetically spaced entities will cause Round Robin to have ply $\Omega(\sqrt{n\Delta})$ at least half the time in the worst case, showing the above bound is tight.
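For concreteness, the following sketch (ours, reusing ply() and delta_arithmetic() from earlier sketches) runs Round Robin on a set of static entities and reports the worst ply observed; regions are measured just before each probe, matching the probe model of chapter 3.

```python
# Simulate Round Robin on static entities and track the worst ply.
def round_robin_max_ply(locations, horizon):
    n = len(locations)
    last_probe = [0] * n             # assume every entity known exactly at time 0
    worst = 0
    for t in range(1, horizon + 1):
        # regions are measured before the probe at time t (see section 3.2)
        regions = [(x - (t - p) / 2, x + (t - p) / 2)
                   for x, p in zip(locations, last_probe)]
        worst = max(worst, ply(regions))
        last_probe[t % n] = t        # Round Robin: probe entity t mod n
    return worst

A, locs = delta_arithmetic(n=12, delta=2)
print(round_robin_max_ply(locs, horizon=100))
```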
First we show that Round Robin is the "best" oblivious algorithm. By best we mean that its worst case ply for a given set of $n$ static entities and time $t$ is no worse than any other oblivious algorithm's worst case ply for the same set of entities. The worst case behaviour is understood to be taken over all labellings of the entities. Note that Round Robin probes all entities with the same period; thus at any given time the entities' uncertainty regions have volumes one through $n$. Therefore, unlike with a generic algorithm, the worst case assignment of uncertainty regions to entities may be the same at each time step. However, these uncertainty regions will be produced by a different labelling at each time step.

Theorem 4. For any set $\mathcal{E}$ of static entities, and any oblivious algorithm $\mathcal{A}$, at any time $t$ there is a labelling of the entities such that $\mathcal{A}$ has ply greater than or equal to Round Robin's worst possible ply at $t$ over all potential labellings of the entities.

Proof. Let $\rho_{RR}$ be Round Robin's worst ply at $t$, over all possible labellings of $\mathcal{E}$. Note that as Round Robin's worst ply is the same at all times, it does not depend on $t$, and thus using just $\rho_{RR}$ to represent its worst ply makes sense. Let $L_{RR}$ be a labelling of the entities that causes Round Robin to have ply $\rho_{RR}$ at $t$. Since we are interested in the worst case, we may assume that Round Robin is being run with labelling $L_{RR}$. For clarity we refer to the uncertainty region generated by Round Robin for entity $e_i$ at time $t$ as $r_i^{RR}(t)$. Similarly, those produced by $\mathcal{A}$ will be denoted $r_i^{\mathcal{A}}(t)$. For simplicity assume $t > n$; there is a discussion of how to deal with smaller time steps at the end of the proof.

Let $\mathcal{E}_\rho(t) = \{e_1, ..., e_{\rho_{RR}}\}$ be a subset of $\mathcal{E}$ which is a minimal set that realizes Round Robin's worst ply at $t$. That is, $|\mathcal{E}_\rho(t)| = \rho_{RR}$, and all the $r_i^{RR}(t)$ for $e_i$ in $\mathcal{E}_\rho(t)$ contain a common point. Assume the entities are numbered in such a way that $|r_1^{RR}(t)| > |r_2^{RR}(t)| > ... > |r_{\rho_{RR}}^{RR}(t)|$. Recall Round Robin probes entities once every $n$ time steps. Thus the earliest entity $e_i$ could last have been probed is time step $t - n + i$, since all the entities have been probed in the last $n$ time steps, and the entities $e_1, ..., e_{i-1}$ were all last probed before it. Thus $|r_i^{RR}(t)| \le n - i$.

We now discuss how to label the entities so that $\mathcal{A}$ gets ply at least $\rho_{RR}$ at $t$. By the definition of oblivious algorithms we have $n$ sets of probe times. Let these sets of time steps be called $P_1, ..., P_n$. Without loss of generality we can assume that the partitions are ordered by the length of the gap since their last time step prior to $t$; that is, $P_1$ has the largest gap, and $P_n$ the smallest. Note that $P_1$ cannot contain any of the time steps $\{t-n+1, ..., t-1\}$, since there are $n-1$ sets which all must contain times between $P_1$'s last time before $t$ and $t$ itself. By induction, $P_i$ cannot contain any of the time steps $\{t-n+i+1, ..., t-1\}$. Assign $P_i$ to $e_i$. Then $|r_i^{\mathcal{A}}(t)| \ge n - i$. Label the remaining entities arbitrarily. At time $t$, the probe sequence of $\mathcal{A}$ leads to uncertainty regions such that $|r_i^{\mathcal{A}}(t)| \ge |r_i^{RR}(t)|$ for all $i \le \rho_{RR}$. Since for these entities the associated uncertainty regions $r_i^{RR}(t)$ all overlap at $t$, the uncertainty regions $\{r_i^{\mathcal{A}}(t) : i \le \rho_{RR}\}$ all overlap at $t$, and thus $\mathcal{A}$ has ply at time $t$ which is at least $\rho_{RR}$.

If $t \le n$ then there are some entities which have not yet been probed by either algorithm, since not enough time has passed since $t = 0$. The argument is the same as before, except that the first $n - t$ entities are assigned sets from the partition which do not include times less than $t$.
We now consider how well Round Robin competes with the intrinsic ply. The maximum possible ply is achieved if all of the entities' uncertainty regions overlap. Thus the worst ratio occurs if our algorithm has a ply of $n$ while the optimal query pattern leads to a ply of 1. This gives a ratio of $n$ between our ply, $\rho$, and the intrinsic ply, $\Delta$. We will show that even the simple Round Robin algorithm does much better than a linear competitive factor, and manages to maintain ply $O(\sqrt{n\Delta})$ in all cases. We also give a specific example where average ply $\Omega(\sqrt{n\Delta})$ is unavoidable for Round Robin, showing that this bound is optimal.

Theorem 5. For all times $t$ and all sets of static entities $\mathcal{E}$ with cardinality $n$, Round Robin has ply $O(\sqrt{n\Delta})$.

Proof. Consider an arbitrary group of $n$ entities at some arbitrary time $t$. Consider a subgroup $\mathcal{E}_\rho$ of $\mathcal{E}$, such that the cardinality of $\mathcal{E}_\rho$ is $\rho$, and the uncertainty regions of entities in $\mathcal{E}_\rho$ all overlap at some point $p$ at time $t$. That is, this set of entities is one of the minimal sets (in terms of number of entities) which has ply $\rho$ at time $t$. Assume without loss of generality $\mathcal{E}_\rho = \{e_1, ..., e_\rho\}$.

Round Robin queries these entities every $n$ time steps. Thus each of their uncertainty regions is at most $n$ units wide, and so for each entity $e_i$ in $\mathcal{E}_\rho$ the position of $e_i$, $l_i$, must be in the interval $(p - n, p + n)$. Let $A = \{A_1, ..., A_n\}$ be an uncertainty realization for $\mathcal{E}$ which achieves ply $\Delta$. By Theorem 1 we can assume that each $A_i$ in $A$ has volume at most $n$. Note that since $l_i$ is in the interval $(p - n, p + n)$, and $|A_i|$ is at most $n$, we know $A_i$ is a subset of $(p - 2n, p + 2n)$. Since the length of this interval is $4n$, and the regions $\{A_1, ..., A_\rho\}$ have ply at most $\Delta$, the total volume of these regions is at most $4n\Delta$. Since the volumes $|A_i|$ are distinct positive integers,

$4n\Delta \ge \sum_{i=1}^{\rho} |A_i| \ge \sum_{i=1}^{\rho} i \ge \frac{1}{2}\rho^2 \;\Rightarrow\; \rho \le \sqrt{8n\Delta}$

Thus Round Robin's ply is $O(\sqrt{n\Delta})$.

This result is very interesting, because it provides a glimpse into the underlying relationship between how densely packed the entities are and the intrinsic ply. It shows that we can get a non-trivial bound on the competitive ratio between any intelligent algorithm and the intrinsic ply purely from the amount of space a set of entities with ply $\Delta$ must take up.
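A quick empirical check of Theorem 5 (ours, reusing the earlier sketches): on $\Delta$-arithmetically spaced entities, Round Robin's worst observed ply should stay below $\sqrt{8n\Delta}$.

```python
# Compare Round Robin's worst observed ply against the sqrt(8 n delta) bound.
import math

for n, delta in [(16, 1), (32, 2), (64, 4)]:
    _, locs = delta_arithmetic(n, delta)
    worst = round_robin_max_ply(locs, horizon=4 * n)
    print(n, delta, worst, round(math.sqrt(8 * n * delta), 1))
```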
We now move on to showing that this bound is tight for Round Robin. We do this by showing that for a set of $\Delta$-arithmetically spaced points Round Robin produces ply $\Omega(\sqrt{n\Delta})$.

Theorem 6. If $n$ static entities are $\Delta$-arithmetically spaced then Round Robin produces ply $\Omega(\sqrt{n\Delta})$ at least half of the time.

Proof. Let $\mathcal{F}$ be the first $\frac{1}{2}\sqrt{n\Delta}$ of the $n$ entities, which are $\Delta$-arithmetically spaced. Recall one can divide $\Delta$-arithmetically spaced entities into groups of $2\Delta$ entities each, where the $i$th group has diameter $(4i-2)\Delta + 1$. $\mathcal{F}$ covers $\frac{1}{4}\sqrt{\frac{n}{\Delta}}$ of these groups.

Consider a time $t$ when fewer than half of the elements of $\mathcal{F}$ have been probed in the past $\frac{n}{2}$ time steps, that is, in the interval $[t - \frac{n}{2}, t]$. Then at $t$ half the entities in $\mathcal{F}$ have uncertainty regions which are wider than $\frac{n}{2}$, and thus each has radius greater than $\frac{n}{4}$. The distance from $e_1$ to $e_{\frac{1}{2}\sqrt{n\Delta}}$ is at most the diameter of the union of the first $\frac{1}{4}\sqrt{\frac{n}{\Delta}}$ groups. This is

$|l_1 - l_{\frac{1}{2}\sqrt{n\Delta}}| \le \sum_{i=1}^{\frac{1}{4}\sqrt{n/\Delta}} \left((4i-2)\Delta + 1\right) \le 4\Delta\left(\tfrac{1}{4}\sqrt{\tfrac{n}{\Delta}}\right)^2 - (2\Delta - 1)\left(\tfrac{1}{4}\sqrt{\tfrac{n}{\Delta}}\right) \le \frac{n}{4}$

Thus all of the entities in $\mathcal{F}$ that were probed more than $\frac{n}{2}$ time steps ago have uncertainty regions that cover all of the entities in $\mathcal{F}$. Hence the ply is $\Omega(\sqrt{n\Delta})$.

Consider any arbitrary time step $t$. Some number of entities in $\mathcal{F}$ are probed in the interval $[t - \frac{n}{2}, t]$ and the others are probed during $[t + 1, t + \frac{n}{2} + 1]$, since the union of these two intervals is $n$ time steps long. If more than half are probed during the first, less than half will be probed in the second, and vice versa. Thus one of either $t$ or $t + \frac{n}{2} + 1$ must come after a period of $\frac{n}{2}$ time steps in which fewer than half of the elements of $\mathcal{F}$ were probed. For any pair of time steps separated by $\frac{n}{2}$ time steps, Round Robin produces ply $\Omega(\sqrt{n\Delta})$ at one of them. This means that at half of all time steps Round Robin has ply $\Omega(\sqrt{n\Delta})$.

This shows that there are cases where the ply Round Robin gets can be as bad as $\sqrt{n\Delta}$, which we know from Theorem 5 is also an upper bound on Round Robin's ply. Since we know no oblivious algorithm can do better than Round Robin, if we want to guarantee ply less than $\sqrt{n\Delta}$ we have to use some information about the locations of the entities. However, it is not yet clear that we should expect to get ply better than $\sqrt{n\Delta}$. In the next section we argue that the best ply one can achieve relative to the intrinsic ply is $O(\Delta \log(\frac{n}{\Delta}))$.

4.2 Intrinsic Ply Ratio

In this section we show that no algorithm can maintain a constant competitive ratio to the intrinsic ply, even in the static case. The tool we utilize to demonstrate this fact is the trade-off between probing an entity frequently enough that its uncertainty region remains manageable, and leaving enough probes for the other entities. We will conclude that we do not have enough probes to maintain small uncertainty regions for all the entities.

Our example is again a set of $\Delta$-arithmetically spaced entities, as it was in the section on oblivious algorithms. The intuition behind this example, as discussed in the chapter on the model, is that it packs the entities in as tightly as possible. As a preview, we informally argue that any algorithm will suffer $\log n$ ply for a set of 1-arithmetically spaced entities. We make simplifying assumptions to ease this argument. Assume that each element is probed with a fixed frequency; that is, the period between probes for a given entity is unchanging. Note that how often we can probe entities is bounded by the fact that we can only probe one entity per time step. Hence the sum of these frequencies needs to be less than one. An easy way to ensure this is to use Round Robin. Unfortunately, as shown in the previous section, we cannot avoid ply $\Omega(\sqrt{n\Delta})$ using Round Robin. So instead we want to distribute the probes so that the entities that are closely packed have smaller average uncertainty regions. Since the gap between entities increases by one for each entity going from $e_1$ to $e_n$ for 1-arithmetically spaced entities, it would make sense for the probe frequency of the entities to be linear in the entities' index as well (recall that $e_1$ is the leftmost entity, and $e_n$ is the rightmost). So the total probe frequency should be $\sum_{i=1}^{n} \frac{1}{ki}$ for some $k$. We want this sum to be less than one, so:

$1 > \sum_{i=1}^{n} \frac{1}{ki} > \frac{1}{k}\log n \;\Rightarrow\; k > \log n$

So setting each entity's probe period to a constant factor of $\log n$ times its index gives us a set of probe frequencies which do not require more than one probe per time step. This means that $e_i$'s uncertainty region grows to have length on the order of $i \log n$. The region up to $i \log n$ units away from $e_i$ contains about $\log n$ other entities. This means that we can expect a worst ply of approximately $\log n$.

Now we will build the tools we need to formally argue that ply proportional to $\Delta \log(\frac{n}{\Delta})$ is sometimes required to guarantee a usable probe schedule. First, we introduce a concept that will be important in several upcoming proofs. We will be interested in averaging the area that an entity's uncertainty region covers over a certain time period. However, we may only be interested in the subset of the total uncertainty region that lies within some interval.

Definition 12. $\sigma(i, I, T)$ is the total overlap between the interval $I$ and $e_i$'s uncertainty region over the time period $T$. That is, $\sigma(i, I, T) = \sum_{t \in T} |r_i(t) \cap I|$.
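Definition 12 translates directly into code; this sketch (ours) sums the per-step overlap between a fixed spatial interval and an entity's uncertainty region.

```python
# sigma(i, I, T): total overlap between interval I and e_i's region over T.
def overlap(a, b):
    """Length of the intersection of two intervals a and b."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def sigma(region_at, I, T):
    """region_at: t -> (lo, hi), e_i's uncertainty region at time t."""
    return sum(overlap(region_at(t), I) for t in T)

# Example: a static entity at x = 0, probed only at time 0, measured over I.
region = lambda t: (-t / 2, t / 2)   # diameter grows one unit per time step
print(sigma(region, I=(0.0, 3.0), T=range(1, 11)))
```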
Now we will build the tools we need to formally argue that ply proportional to $\Delta\log(\frac{n}{\Delta})$ is sometimes required to guarantee a usable probe schedule. First, we introduce a concept that will be important in several upcoming proofs. We will be interested in averaging the area that an entity's uncertainty region covers over a certain time period. However, we may only be interested in the subset of the total uncertainty region that lies within some interval.

Definition 12. $\sigma(i, I, T)$ is the total overlap between the interval $I$ and $e_i$'s uncertainty region over the time period $T$. That is, $\sigma(i, I, T) = \sum_{t\in T}|r_i(t)\cap I|$.

Using this newly defined quantity we will describe the relationship between how often an entity is probed and the area covered by its uncertainty region over a given time period. This will be useful in arguing how many probes a group of entities needs to receive to avoid having large amounts of area covered by their uncertainty regions, which would lead to high ply. Note that this lemma's hypothesis is stated in terms of the closure of a given interval $I$. This will be important in our application, because we will be looking at open intervals, but want to argue about entities at the boundary of these intervals.

Lemma 7. For any interval $I$, and entity $e_i$ such that $l_i$ is in the closure of $I$, if $e_i$ is probed fewer than $p_i \ge 1$ times over the time interval $T$ then

$$\sigma(i, I, T) > \frac{|T|\min\left(\frac{|T|}{2p_i}, |I|\right)}{2}$$

Proof. There are three variables which affect the size of the overlap of $e_i$'s uncertainty region and the interval $I$: how many times the entity is probed, when it is probed, and where it is positioned.

Note that the area can only be increased by having fewer probes, since a probe added at time $t$ decreases the area of $e_i$'s uncertainty region at that time, and at all times after it up to the next probe. We know $e_i$ is probed fewer than $p_i$ times, thus we can bound $\sigma(i, I, T)$ by the minimal area when $e_i$ is probed $p_i - 1$ times (note that since $p_i \ge 1$ this is non-negative).

Consider an arbitrary probe sequence for entity $e_i$, such that the $j$th probe occurs at time $t_j$. Such a sequence breaks $T$ into a number of "gaps", that is, time intervals between probes. More formally, for all $k \le p_i$ the gap $g_k$ is $[t_{k-1}, t_k]$, where $t_0 = \min(T)$ and $t_{p_i} = \max(T)$. Note that over the gap $g_k$ the uncertainty region for $e_i$, $r_i(t)$, goes from being the ball $B(l_i, 1)$ to being the ball $B(l_i, |g_k|)$ (with the possible exception of the first gap, since $e_i$ may not have been probed at $t_0$). Let $G = \{g_1, \dots, g_{p_i}\}$. Then

$$\sigma(i, I, T) \ge \sum_{g\in G}\sum_{j=1}^{|g|}|I\cap B(l_i, j)|$$

Now we will argue which position of $e_i$ leads to the minimal $\sigma(i, I, T)$ value. The position of $e_i$ affects the length of the intersection in the sum. The length of this intersection is minimized by placing $e_i$ at one of the extremes of $I$, since then only half of the ball is within the interval. Thus

$$\sigma(i, I, T) \ge \sum_{g\in G}\sum_{j=1}^{|g|}\min\left(\frac{j}{2}, |I|\right)$$

We will apply Jensen's Inequality using $f(g) = \sum_{j=1}^{g}\min(\frac{j}{2}, |I|)$. In order to use Jensen's Inequality we need to show that $f(g)$ is convex. To see that this is true, note that the derivative of $f(g)$ when $g$ is less than $2|I|$ is $\frac{g}{2}$, and the derivative is $|I|$ when $g$ is greater than $2|I|$. Thus the derivative is non-decreasing, and $f(g)$ is convex. Since $|G| = p_i$, by Jensen's Inequality we have:

$$\frac{\sum_{g\in G} f(|g|)}{p_i} \ge f\left(\frac{\sum_{g\in G}|g|}{p_i}\right) = f\left(\frac{|T|}{p_i}\right)$$
$$\leftrightarrow \frac{\sum_{g\in G}\sum_{j=1}^{|g|}\min(\frac{j}{2}, |I|)}{p_i} \ge \sum_{j=1}^{|T|/p_i}\min\left(\frac{j}{2}, |I|\right)$$
$$\leftrightarrow \sum_{g\in G}\sum_{j=1}^{|g|}\min\left(\frac{j}{2}, |I|\right) \ge p_i\sum_{j=1}^{|T|/p_i}\min\left(\frac{j}{2}, |I|\right)$$

Thus

$$\sigma(i, I, T) \ge p_i\sum_{j=1}^{|T|/p_i}\min\left(\frac{j}{2}, |I|\right)$$

Consider the plane $\mathbb{R}^2$; let the x-axis be position, and the y-axis be time. Then we can consider $\sigma$ to be the area of the union of horizontal bars $(r_i(t)\cap I)\times[t, t+1]$. We can lower bound this area by the area of right triangles, each of which has height $\frac{|T|}{p_i}$ and base either $\frac{|T|}{2p_i}$ or $|I|$, whichever is less.
The total area of all the triangles is half the area of the rectangle with base length $\min(\frac{|T|}{2p_i}, |I|)$ and height $|T|$. Thus we get

$$p_i\sum_{j=1}^{|T|/p_i}\min\left(\frac{j}{2}, |I|\right) > \frac{|T|\min\left(\frac{|T|}{2p_i}, |I|\right)}{2}$$

which implies

$$\sigma(i, I, T) > \frac{|T|\min\left(\frac{|T|}{2p_i}, |I|\right)}{2}$$

We will probe entities at time intervals which are a constant factor of the length of an interval they are contained in (this interval, the x-blanket, will be discussed shortly). The following corollary deals with the case where the number of times an entity is probed is less than a constant times the ratio between the length of the time interval and the diameter of the spatial interval of interest.

Corollary 8. If $e_i$ is probed fewer than $p_i = \frac{k|T|}{|I|} \ge 1$ times over the time period $T$, then $\sigma(i, I, T) > \frac{|T||I|}{4k}$, for $k \ge 1$. Note that since we require $p_i \ge 1$, we must have $|T| \ge \frac{|I|}{k}$.

Proof. By Lemma 7 we know

$$\sigma(i, I, T) > \frac{|T|\min\left(\frac{|T|}{2p_i}, |I|\right)}{2}$$

Substituting in our value for $p_i$ we get

$$\sigma(i, I, T) > \frac{|T|\min\left(\frac{|I|}{2k}, |I|\right)}{2}$$

Since $k \ge 1$ the value of the min is its first element, and hence $\sigma(i, I, T) > \frac{|T||I|}{4k}$.
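As a quick sanity check of Lemma 7 (our own illustration, not part of the thesis's argument), the following brute-force sketch places an entity at the left endpoint of $I$, tries every placement of $p_i - 1$ probes, and confirms that even the minimizing placement exceeds the bound. Fewer probes only increase $\sigma$, so checking exactly $p_i - 1$ probes suffices.

    import itertools

    def sigma(T_len, probe_times, L):
        # sigma(i, I, T) for an entity at the left endpoint of I = [0, L].
        # A freshly probed region is B(l_i, 1) (volume 1) and its volume grows
        # by one per step; only half of the ball lies inside I, so the overlap
        # at volume v is min(v / 2, L).  Treating the entity as freshly probed
        # at min(T) can only shrink sigma, so the check remains valid.
        total, last = 0.0, 0
        for t in range(T_len):
            if t in probe_times:
                last = t
            vol = t - last + 1
            total += min(vol / 2.0, L)
        return total

    T_len, L, p = 40, 10, 4
    bound = T_len * min(T_len / (2.0 * p), L) / 2.0
    worst = min(sigma(T_len, set(c), L)
                for c in itertools.combinations(range(T_len), p - 1))
    print(f"min sigma = {worst}, bound = {bound}, lemma holds: {worst > bound}")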
Now we use these new tools to show that for $\Delta$-arithmetically spaced entities a ply of $\Delta\log(\frac{n}{\Delta})$ is unavoidable.

Theorem 9. For every set of $\Delta$-arithmetically spaced entities, any algorithm will have average ply $\Omega(\Delta\log(\frac{n}{\Delta}))$ over any time period $T$, where $|T| > n\log(\frac{n}{\Delta})$.

[Figure 4.1: Lower bounding the area of the uncertainty regions for entity $e_i$ over time interval $T$ and space interval $I$. The white bars represent the true uncertainty regions, and the gray triangles represent the area we use for a lower bound. Notice the triangle may underestimate the uncertainty regions from the first gap by a large margin. In the figure on the left $p_i$ is such that $\frac{|T|}{2p_i}$ is less than $|I|$; in the figure on the right $p_i$ is much smaller, so $|I|$ is larger than $\frac{|T|}{2p_i}$.]

Proof. Let $n$ be the size of the set of $\Delta$-arithmetically spaced entities. To start, we will subdivide the entities into groups of size $\Delta\log(\frac{n}{\Delta})$. The first $\Delta\log(\frac{n}{\Delta})$ entities will constitute the first group, $G_1$, the next $\Delta\log(\frac{n}{\Delta})$ the second, $G_2$, and so on. This means that each group covers $\frac{\log(n/\Delta)}{2}$ of the sets of $2\Delta$ entities we used in the definition of $\Delta$-arithmetically spaced entities. Recall that $\mathrm{diameter}(G_i)$ is the distance between the entities in $G_i$ that are furthest apart.

We now consider the ply an algorithm will get for each group of entities, ignoring overlap that may occur with uncertainty regions of entities from other groups. This is a lower bound on the actual ply the algorithm achieves. In order to maintain a low ply within a group's entities, a certain number of probes is required. Below we will prove that over a time interval $T$, maintaining ply which is $O(\Delta\log(\frac{n}{\Delta}))$ requires at least $\frac{2\Delta|T|\log(n/\Delta)}{\mathrm{diameter}(G_i)}$ probes. Thus we call any group which receives more than $\frac{2\Delta|T|\log(n/\Delta)}{\mathrm{diameter}(G_i)}$ probes during the time period $T$ "good", and all other groups "bad". We will show that there exists a bad group, and that bad groups must have high ply.

First we argue there is a bad group. Assume to the contrary there is not. Then each group uses at least $\frac{2\Delta|T|\log(n/\Delta)}{\mathrm{diameter}(G_i)}$ probes. To bound the total number of probes used, we first need a bound on $\mathrm{diameter}(G_i)$. Recall that the $k$th set from our construction of $\Delta$-arithmetically spaced entities has supremum $m_k = 2k^2\Delta + k$. Since each group covers $\frac{\log(n/\Delta)}{2}$ of these sets, the largest such set in $G_i$ has diameter

$$\left(4\,\frac{i\log(\frac{n}{\Delta})}{2} - 2\right)\Delta \le 2i\log\left(\frac{n}{\Delta}\right)\Delta$$

Thus we get the following bound on the diameter of $G_i$:

$$\mathrm{diameter}(G_i) \le \left(\frac{\log(\frac{n}{\Delta})}{2}\right)\left(2i\log\left(\frac{n}{\Delta}\right)\Delta\right) \le \Delta\log^2\left(\frac{n}{\Delta}\right)i$$

One important inequality for our next derivation is $2\log\left(\frac{n}{\Delta\log(n/\Delta)}\right) > \log\left(\frac{n}{\Delta}\right)$. The justification is as follows. Note that $\frac{n}{\Delta} > \log^2(\frac{n}{\Delta})$, since $n > \Delta$ by definition. Thus:

$$\frac{n}{\Delta} > \log^2\left(\frac{n}{\Delta}\right) \leftrightarrow \frac{n}{\Delta\log^2(\frac{n}{\Delta})} > 1 \leftrightarrow \left(\frac{n}{\Delta\log(\frac{n}{\Delta})}\right)^2 > \frac{n}{\Delta} \leftrightarrow 2\log\left(\frac{n}{\Delta\log(\frac{n}{\Delta})}\right) > \log\left(\frac{n}{\Delta}\right)$$

Using these bounds and summing over all the groups, we calculate that the total number of probes used is:

$$\sum_{i=1}^{\frac{n}{\Delta\log(n/\Delta)}}\frac{2\Delta|T|\log(\frac{n}{\Delta})}{\mathrm{diameter}(G_i)} \ge 2\Delta|T|\log\left(\frac{n}{\Delta}\right)\sum_{i=1}^{\frac{n}{\Delta\log(n/\Delta)}}\frac{1}{\Delta\log^2(\frac{n}{\Delta})\,i} = \frac{2|T|}{\log(\frac{n}{\Delta})}\sum_{i=1}^{\frac{n}{\Delta\log(n/\Delta)}}\frac{1}{i} > \frac{2|T|}{\log(\frac{n}{\Delta})}\log\left(\frac{n}{\Delta\log(\frac{n}{\Delta})}\right) > |T|$$

The last step follows from the above inequality.

This is more than $|T|$ probes over $T$, which is a contradiction since we only probe once per time step. Thus there must be some group $G_k$ which uses fewer than $\frac{2\Delta|T|\log(n/\Delta)}{\mathrm{diameter}(G_k)}$ probes. That means that in $G_k$ at least $\frac{\Delta\log(n/\Delta)}{2}$ entities are probed fewer than $\frac{4|T|}{\mathrm{diameter}(G_k)}$ times.

Let $\overline{G_k}$ be the smallest interval containing all the entities in $G_k$, that is, $\overline{G_k} = \mathrm{argmin}_I\{|I| : \forall e_j \in G_k,\ l_j \in I\}$. Note that $|\overline{G_k}| = \mathrm{diameter}(G_k)$. Also note that the diameter of $G_k$ is less than the diameter of $G_{n/(\Delta\log(n/\Delta))}$, which is at most $\frac{n}{\Delta\log(n/\Delta)}\,\Delta\log^2\left(\frac{n}{\Delta}\right) = n\log\left(\frac{n}{\Delta}\right)$, by our above bound on the diameter of $G_i$. By Corollary 8, since $|T| > \mathrm{diameter}(G_k)$, for each entity $e_j$ that is probed fewer than $\frac{4|T|}{\mathrm{diameter}(G_k)}$ times, $\sigma(j, \overline{G_k}, T) > \frac{|T|\,\mathrm{diameter}(G_k)}{16}$. Since $\sigma(j, \overline{G_k}, T)$ is the sum over all times in $T$ of the overlap between $e_j$'s uncertainty region and $\overline{G_k}$, in order to calculate the average ply we divide the sum of the $\sigma(j, \overline{G_k}, T)$ values for a given group by the length of the time interval and the diameter of $G_k$. Since at least half of the entities in $G_k$ were probed fewer than $\frac{4|T|}{\mathrm{diameter}(G_k)}$ times, the average ply is greater than

$$\frac{1}{|T|\,\mathrm{diameter}(G_k)}\cdot\frac{\Delta|T|\log(\frac{n}{\Delta})\,\mathrm{diameter}(G_k)}{32} = \frac{\Delta\log(\frac{n}{\Delta})}{32}$$

So any algorithm must have average ply at least $\frac{\Delta\log(n/\Delta)}{32}$, which is $\Omega(\Delta\log(\frac{n}{\Delta}))$.

This shows that, when expressed as a function of the intrinsic ply, the best ply we can hope for is $\Omega(\Delta\log(\frac{n}{\Delta}))$.

4.3 Our Algorithm

We saw in the previous section that, for a set of $n$ static entities, no algorithm can maintain ply less than a constant times $\Delta\log(\frac{n}{\Delta})$, where $\Delta$ is the intrinsic ply, over a sufficiently long interval. This shows that intrinsic ply is an unfair measuring stick for the effectiveness of our algorithm. We define a measure of entity crowdedness, called the "optimal blanket value", in order to produce a more manageable comparison for our algorithm. Any algorithm must have average ply which is at least a constant factor of the optimal blanket value, over a sufficiently long interval. We also describe an algorithm, the Static Rounded Blankets algorithm, that maintains ply less than twice the optimal blanket value at all time steps.

In the previous section, breaking the set of entities up into smaller groups, and then arguing about the number of probes each of those groups requires to maintain low ply, was shown to be an effective way to argue that a certain ply is unavoidable. It also suggests a simple formula for probing entities in the given example: the ones in the more tightly packed groups require more probes, while the ones that are more spread out are probed less frequently.
It makes sense to prioritize entities that are closely packed together, since they lead to high ply more quickly. What is not clear from the example is how large a group one should consider in general. To this end, we introduce a measure of local clustering, which we call an "x-blanket". The x-blanket is a simple measure of how close one specific entity is to its neighbours. Our algorithm, Static Rounded Blankets, is a simple periodic schedule, where each entity's period is relative to its x-blanket volume.

4.3.1 x-Blankets

In this subsection we introduce the concept of an "x-blanket", and related notions. x-blankets form a core part of the way we formulate our algorithm. They are a measure of how tightly crowded entities are. Before we define the x-blanket, let us look at an associated set of entities, $\Gamma^x_i(t)$. Informally, $\Gamma^x_i(t)$ is the set of the $x$ entities closest to $e_i$ at time $t$. These will be the basis of $e_i$'s x-blanket and are referred to as the entities "covered" by $e_i$'s x-blanket.

Definition 13. For any given $x \in \mathbb{Z}$, index $i \in \{1, \dots, n\}$, and time $t$, the set $\Gamma^x_i(t)$ contains $e_i$ and $x - 1$ other entities, such that for any entities $e_j \in \Gamma^x_i(t)$ and $e_k \notin \Gamma^x_i(t)$, $|l_i(t) - l_j(t)| \le |l_i(t) - l_k(t)|$. That is, at time $t$ all other entities are at least as far from $e_i$ as any given entity in $\Gamma^x_i(t)$.

Note that if there are several entities which are the same distance from $e_i$, there may be multiple different sets which satisfy the requirements of the above definition for $\Gamma^x_i(t)$. In such a case we arbitrarily choose the set which contains the entities with the lowest indices to be $\Gamma^x_i(t)$.

Now we turn our attention to the main construct, the x-blanket. Informally, an entity $e_i$'s x-blanket is the largest interval centered at $e_i$'s location which contains at most $x$ entities.

Definition 14. For any given $x \in \mathbb{Z}$, index $i \in \{1, \dots, n\}$, and time $t$, $e_i$'s x-blanket at time $t$, $b^x_i(t)$, is the largest open ball centered at $l_i(t)$ that only contains entities in $\Gamma^x_i(t)$. This ball is the "(true) x-blanket" of $e_i$ at time $t$.

Note that this interval does not contain all of $\Gamma^x_i(t)$ when an entity in $\Gamma^x_i(t)$ is the same distance from $l_i(t)$ as some entity that is not in $\Gamma^x_i(t)$. Thus $e_i$'s x-blanket wouldn't change for a different choice of $\Gamma^x_i(t)$, since any entity that is the same distance from $l_i(t)$ as an entity in $\Gamma^x_i(t)$ lies on the boundary of $b^x_i(t)$; its inclusion in $\Gamma^x_i(t)$ would lead to the same $b^x_i(t)$. Also note that the entity which is $x$th closest to $l_i(t)$ (discounting $e_i$) lies on the boundary of $b^x_i(t)$. Hence the volume of $b^x_i(t)$ is simply twice the distance from $l_i(t)$ to that entity.

The above definitions were given with a dependence on time because this generality will be required for the dynamic case. In the static case the x-blanket is invariant with respect to time, since entities don't move. Thus we will drop the dependence on $t$ in our notation for the static case. We use $|b^x_i|$ to refer to the volume of the blanket $b^x_i$.

Definition 15. If $x$ is the smallest integer such that $\sum_i\frac{1}{|b^x_i|} < \delta$ then $x$ is called the "optimal blanket value for probe density $\delta$".

The value $\delta$ is called the "probe density" because it is related to the fraction of time steps in which we need to probe entities in order to keep their uncertainty regions contained within their blankets. A small sketch of these definitions for static entities follows.
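The sketch below is our own illustration (it assumes distinct static positions on the line and that $x < n$); it computes $|b^x_i|$ as twice the distance to the $x$th closest other entity, per Definitions 13 and 14, and then searches for the optimal blanket value of Definition 15.

    def blanket_volume(positions, i, x):
        # |b^x_i| for static entities: twice the distance from l_i to its x-th
        # closest other entity, i.e. the largest open ball around l_i whose
        # interior holds only e_i and its x - 1 nearest neighbours.
        dists = sorted(abs(p - positions[i])
                       for j, p in enumerate(positions) if j != i)
        return 2.0 * dists[x - 1]

    def optimal_blanket_value(positions, delta):
        # Definition 15: the smallest x such that sum_i 1/|b^x_i| < delta.
        n = len(positions)
        for x in range(1, n):
            if sum(1.0 / blanket_volume(positions, i, x) for i in range(n)) < delta:
                return x
        return None   # no blanket value meets the requested probe density

    # 1-arithmetically spaced toy instance: consecutive gaps 1, 2, 3, ...
    positions = [0, 1, 3, 6, 10, 15, 21, 28]
    print(optimal_blanket_value(positions, 0.5))   # -> 4 for this instance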
The optimal blanket value can be thought of as the smallest blanket value that can be used to generate blankets for a given set of entities without needing more than the appropriate probe density to keep the uncertainty regions inside those blankets. The concept of the optimal blanket value is important for determining what blanket value we should use in our algorithm. Considering a probe density that is less than one can be helpful, as it allows flexibility in the timing of the probes, so that we can avoid cases where multiple entities require probes simultaneously.

4.3.2 Upper Bound on Blanket Size

We show that the optimal blanket value for any set of $n$ static entities is related to the intrinsic ply of the entities. In particular, we show that the optimal blanket value for a given probe density $\delta$ is $O(\frac{\Delta}{\delta}\log(\frac{n}{\Delta}))$.

Theorem 10. For any given set of $n$ static entities the optimal blanket value for probe density $\delta$ is $O(\frac{\Delta}{\delta}\log(\frac{n}{\Delta}))$.

Proof. Let $E$ be an arbitrary set of static entities with intrinsic ply $\Delta$. Let $A = \{A_1, \dots, A_n\}$ be an uncertainty realization for $E$ with ply $\Delta$. We want to show the optimal blanket value is $O(\frac{\Delta}{\delta}\log(\frac{n}{\Delta}))$. This will be achieved by showing that $\frac{\Delta}{\delta}\log(\frac{n}{\Delta})$ is greater than any value of $x$ for which the sum $\sum_{i=1}^{n}\frac{1}{|b^x_i|}$ is at least the probe density $\delta$. We can assume that $x > 8\Delta$ without loss of generality, since if $x \le 8\Delta$ then it is in $O(\frac{\Delta}{\delta}\log(\frac{n}{\Delta}))$.

In order to show that the optimal blanket value is less than $\frac{\Delta}{\delta}\log(\frac{n}{\Delta})$, we will provide an upper bound on the sum $\sum_{i=1}^{n}\frac{1}{|b^x_i|}$. To do this we need a lower bound on the volumes of the $b^x_i$'s. For any given $e_i$ the volume of $b^x_i$ is determined by the location of the $x$th closest entity to $e_i$ (excluding $e_i$ itself). Naïvely, the way to minimize the volume of $b^x_i$ would be to simply place $x$ entities right on top of $e_i$. However, we also know that the uncertainty realization $A$ has ply $\Delta$, hence at most $\Delta$ entities can be at the same location. As shown below, the minimum width for $b^x_i$ is achieved by creating $\Delta$ "layers" of entities, each with ply 1, which are as close to the same width as possible.

[Figure 4.2: A graphic displaying the ideas behind our bound on $|b^x_i|$. Here $\Delta$ is four. The black circles represent the static locations of the entities, and the white bars represent their associated intervals in $A$. The entities have been split into four layers to make it easier to see. The eight extremal entities have large intervals, which do not contribute to the size of $|b^x_i|$. The volume of $b^x_i$ can be bounded by summing over the inverse volumes of the white regions of the eight interior entities.]

Consider partitioning $\Gamma^x_i$ into $\Delta$ subsets, $L_1, \dots, L_\Delta$, such that the ply the uncertainty realization $A$ achieves for each $L_i$ is 1. We will refer to these subsets as layers. More specifically, for all $e_j, e_k$ in $L_i$ the associated intervals of the uncertainty realization do not overlap, that is, $A_j \cap A_k = \emptyset$. The diameter of $L_i$ is the distance from $l_{\min(i)} = \min\{l_j : e_j \in L_i\}$ to the similarly defined $l_{\max(i)}$. This is at least the sum $\sum_{e_j\in L_i}|A_j|$, minus any of the volume of the $A_j$'s which lies outside of $[l_{\min(i)}, l_{\max(i)}]$. That sum is minimized when the two entities in $L_i$ with the largest $|A_j|$ values are placed at $l_{\min(i)}$ and $l_{\max(i)}$, so that the entirety of their $A_j$'s can reside outside of $[l_{\min(i)}, l_{\max(i)}]$. Thus in each layer we can ignore the $A_j$ values of the two extremal entities. Let $X_{i,j}$ be the two extremal entities of layer $L_j$, and let $X_i$ be the set of extremal entities, that is, $X_i = \bigcup_j X_{i,j}$.
For a helpful visualization see Figure 4.2.

Only the widest layer affects the length of $b^x_i$, and the width of the widest layer is at least the average layer width. Hence

$$|b^x_i| > \frac{1}{\Delta}\sum_{e_j\in\Gamma^x_i\setminus X_i}|A_j|$$

Now we move to looking at the sum of the inverse values as a whole. From the above we can derive

$$\sum_{i=1}^{n}\frac{1}{|b^x_i|} < \Delta\sum_{i=1}^{n}\frac{1}{\sum_{e_j\in\Gamma^x_i\setminus X_i}|A_j|}$$

Note that each $e_j$ can be covered by at most $2x$ entities' x-blankets. This means $e_j$ is a member of $\Gamma^x_i$ for at most $2x$ different entities, so each $|A_j|$ value shows up in the sum $\sum_{e_j\in\Gamma^x_i\setminus X_i}|A_j|$ at most $2x$ times. Note that the sum

$$\sum_{i=1}^{n}\frac{1}{\sum_{e_j\in\Gamma^x_i}|A_j|}$$

is maximized if the smallest $\frac{n}{2x}$ of the $|A_j|$ values are each used $2x$ times. If you do not use the smallest $\frac{n}{2x}$ values $2x$ times each, you can simply replace one of the instances of a larger value with one of the smallest $\frac{n}{2x}$ values that has not yet been used $2x$ times, and the sum increases.

Each entity needs $x$ entities assigned to its $\Gamma^x_i$. Since the volumes of the uncertainty regions of entities in $X_i$ don't contribute to the sum, their volumes are unimportant, so only the volumes of $x - 2\Delta$ entities' $A_j$'s need to be considered in our bound on $b^x_i$. We will assign one of the groups $(1, \dots, x-2\Delta), (x-2\Delta+1, \dots, 2x-4\Delta), \dots, ((\frac{n}{2x}-1)(x-2\Delta)+1, \dots, \frac{n}{2x}(x-2\Delta))$ to each entity; each group will be assigned $2x$ times.

In order to see that this allocation leads to the largest possible sum, we consider an abstraction of the problem. There are $n$ different variables $z_i$, each of which is the sum of $\gamma$ distinct positive integers; that is, $z_i$ is equal to the sum $y_{i_1} + \dots + y_{i_\gamma}$ where each $y_{i_j}$ is an integer. We will refer to the set of integers in $z_i$'s sum as $Y_i$, that is, $Y_i = \{y_{i_1}, \dots, y_{i_\gamma}\}$. We can choose which integers are in $z_i$'s sum, but each integer can only be used for a limited number of sums. Our goal is to maximize the sum $\sum_{i=1}^{n}\frac{1}{z_i}$. Now we will show that assigning the smallest $y$'s to the smallest $z$'s leads to the optimal solution. Assume that we have a system that satisfies the constraints described above. We can assume without loss of generality that $z_1 \le z_2 \le \dots \le z_n$. Assume that for some $i, j$ such that $i < j$ there is an integer $\alpha$ in $Y_j$ but not in $Y_i$ that is less than an integer $\beta$ which is in $Y_i$ but not in $Y_j$. Then by swapping $\alpha$ and $\beta$, so that $\alpha$ is in $Y_i$ and $\beta$ is in $Y_j$, we increase the inverse sum of the $z_i$'s by

$$\frac{1}{z_i + (\alpha - \beta)} - \frac{1}{z_j + (\beta - \alpha)} = \frac{z_j - z_i}{[z_i + (\alpha - \beta)][z_j + (\beta - \alpha)]}$$

Since we know $z_j > z_i$, $z_j > \alpha$ and $z_i > \beta$, this quantity is positive, and hence the swap increases the inverse sum. Hence, since we are looking for the optimal blanket value for probe density $\delta$, we can assume the values are distributed in groups as discussed above. From this we produce the following bound, in which we use the quantity $\tilde{x} = x - 2\Delta$.
Note that $\tilde{x} > \frac{3x}{4}$ since $x > 8\Delta$:

$$\delta < \sum_{i=1}^{n}\frac{\Delta}{\sum_{e_j\in\Gamma^x_i\setminus X_i}|A_j|} < 2x\Delta\sum_{i=1}^{n/2x}\frac{1}{\sum_{j=(i-1)\tilde{x}}^{i\tilde{x}} j} < 4x\Delta\sum_{i=1}^{n/2x}\frac{1}{(i\tilde{x})^2 + i\tilde{x} - [(i-1)\tilde{x}]^2 - [(i-1)\tilde{x}]}$$
$$= 4x\Delta\sum_{i=1}^{n/2x}\frac{1}{(i\tilde{x})^2 + i\tilde{x} - (i\tilde{x})^2 + 2i\tilde{x}^2 - \tilde{x}^2 - i\tilde{x} + \tilde{x}} = 4x\Delta\sum_{i=1}^{n/2x}\frac{1}{(2i-1)\tilde{x}^2 + \tilde{x}} < 4x\Delta\sum_{i=1}^{n/2x}\frac{1}{(2i-1)\tilde{x}^2}$$
$$= \frac{4x\Delta}{\tilde{x}^2}\sum_{i=1}^{n/2x}\frac{1}{2i-1} < \frac{4x\Delta}{(\frac{3x}{4})^2}\sum_{i=1}^{n/2x}\frac{1}{2i-1} < \frac{8\Delta}{x}\sum_{i=1}^{n/2x}\frac{1}{2i-1} = \frac{8\Delta}{x}\left(\sum_{i=1}^{n/x}\frac{1}{i} - \sum_{j=1}^{n/2x}\frac{1}{2j}\right) < \frac{8\Delta}{x}\sum_{i=1}^{n/x}\frac{1}{i}$$
$$\leftrightarrow x < \frac{8\Delta}{\delta}\sum_{i=1}^{n/x}\frac{1}{i} \in O\left(\frac{\Delta}{\delta}\log\left(\frac{n}{x}\right)\right) \subseteq O\left(\frac{\Delta}{\delta}\log\left(\frac{n}{\Delta}\right)\right)$$

Thus the optimal blanket value for probe density $\delta$ is $O(\frac{\Delta}{\delta}\log(\frac{n}{\Delta}))$.

Although our main motivation for working with the optimal blanket value is to maintain ply close to any other algorithm's average ply, this result implies that if we have an algorithm which maintains ply proportional to the optimal blanket value, we can maintain ply which is $O(\Delta\log(\frac{n}{\Delta}))$.

4.3.3 Rounded Blankets Algorithm

Previously we introduced the notion of "x-blankets"; now we want to show how we can use that tool to create an algorithm which maintains low ply. The simple idea behind the algorithm is to query entities when their uncertainty region is the same as their x-blanket. First we need to ensure that the x-blankets are large enough that probing with this frequency doesn't require more probes than there are time units. Probing an entity every time its uncertainty region is the same as its x-blanket means probing it once every $|b^x_i|$ time steps, since the uncertainty region grows by 1 each time step. So as a simple first step we ensure that the sum $\sum_{i=1}^{n}\frac{1}{|b^x_i|}$ is less than 1.

However, even if $\sum_{i=1}^{n}\frac{1}{|b^x_i|}$ is less than 1, it might be impossible to query the entities at the appropriate times. For example, two entities might have their uncertainty regions grow to be the same as their x-blankets at the same time. Thus we will consider rounding the blanket volumes in order to make sure all of the periods are in sync. But if we use the blankets designated by the optimal blanket value for probe density 1, then we cannot probe with a frequency greater than the inverse of the blanket volume, since then the sum of the frequencies would be greater than one. On the other hand, we cannot probe with a frequency less than the inverse of the blanket volumes, since then the ply could be higher than we expect. Instead, we will use the optimal blankets for probe density $\frac{1}{2}$, since this gives us room to round the blanket volumes to the nearest power of two. This will make scheduling probes for the entities simple.

The algorithm, which is referred to as the "Rounded Blankets Algorithm", produces a recursive round robin schedule. A recursive round robin schedule is a schedule that splits its probes evenly among subsets of entities. These subsets partition the set of entities, that is, no entity is in more than a single subset. Each subset splits its probes in an identical way to the main set, that is, evenly amongst some partition of its elements. In this way round robin is applied to each set of entities recursively. It is important to note that even though the probes are split evenly among the subsets, the subsets may not contain the same number of entities. Having an unbalanced partition ensures some entities will be probed more often than others. The notion of recursive round robin is discussed in more depth in "Scheduling Techniques for Media-on-Demand" by Amotz Bar-Noy, Richard E. Ladner, and Tami Tamir [3].

The algorithm has two main steps. The first is producing the new rounded frequencies, which will be powers of two, and the second is constructing a schedule for these new frequencies. The algorithm for schedule construction is presented in "The Scheduling of Maintenance Service" by Shoshana Anily, Celia A. Glass, and Refael Hassin [1].

The Rounded Blankets algorithm is as follows:

1. For a given set of $n$ entities calculate $x$ = the optimal blanket value for probe density $\frac{1}{2}$ (i.e. the smallest integer such that $\sum_{i=1}^{n}\frac{1}{|b^x_i|} < \frac{1}{2}$).

2. Set $q_i = 2^{\lfloor\lg|b^x_i|\rfloor}$.

3. Query entity $e_i$ periodically, with period $q_i$.

A small end-to-end sketch of these steps is given below.
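The following sketch is our own illustration, not the construction from [1]: it assumes static, distinct positions with gaps of at least one unit, and it replaces the recursive round robin construction with a greedy first-fit offset assignment, which suffices for power-of-two periods.

    import math

    def blanket_volume(positions, i, x):
        # |b^x_i|: twice the distance from l_i to its x-th closest other entity.
        return 2.0 * sorted(abs(p - positions[i])
                            for j, p in enumerate(positions) if j != i)[x - 1]

    def rounded_blankets_schedule(positions, horizon):
        n = len(positions)
        # Step 1: the optimal blanket value for probe density 1/2.
        x = next(x for x in range(1, n)
                 if sum(1.0 / blanket_volume(positions, i, x)
                        for i in range(n)) < 0.5)
        # Step 2: round each blanket volume down to a power of two.
        periods = [2 ** int(math.floor(math.log2(blanket_volume(positions, i, x))))
                   for i in range(n)]
        # Step 3: choose offsets so no two probes ever collide.  Greedy
        # first-fit in order of increasing period works: an already placed
        # entity with period q_j blocks q_i / q_j of e_i's candidate offsets,
        # and sum(1/q_j) < 1, so a free offset always exists.  (This stands in
        # for the recursive round robin construction of [1].)
        offsets = {}
        for i in sorted(range(n), key=lambda k: periods[k]):
            offsets[i] = next(o for o in range(periods[i])
                              if all(o % periods[j] != offsets[j] for j in offsets))
        # Probe e_i at every time step t with t mod q_i == offset_i; some
        # steps stay idle because the probe density is below one.
        return [next((i for i in range(n) if t % periods[i] == offsets[i]), None)
                for t in range(horizon)]

    positions = [0, 1, 3, 6, 10, 15, 21, 28]
    print(rounded_blankets_schedule(positions, 16))

For this toy instance the computed blanket value is $x = 4$ and the rounded periods range from 8 to 32, so the two most crowded entities are probed every 8 steps while some time steps remain idle.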
Now that we have an algorithm for probing entities, we need to produce a bound on its performance; that is, we want a bound on the ply the Rounded Blankets algorithm achieves. We will show that the ply the Rounded Blankets algorithm produces is always bounded by the blanket value we use.

We now show that our algorithm will never have ply greater than some constant times the optimal blanket value. This follows fairly directly from the fact that our blankets never cover more than $x$ entities.

Theorem 11. For all sets of $n$ static entities, and all time steps $t$, the Rounded Blankets algorithm maintains ply $O(x)$, where $x$ is the optimal blanket value for probe density $\frac{1}{2}$.

Proof. Note that the $q_i$ values are at least half the volumes of the associated x-blankets, $|b^x_i|$, since they are the largest powers of two less than or equal to $|b^x_i|$. Thus the inverse sum of the $q_i$ values is less than one. Because of this fact, and the fact that they are powers of two, our $q_i$ satisfy the conditions on the $\tau_i$ in the hypothesis of Lemma 6.2 in [1]. Thus we can build a schedule in which each $e_i$ is probed once every $q_i$ time steps by following their algorithm.

Since the algorithm probes $e_i$ once every $q_i$ time steps, it probes entities before their uncertainty regions exceed their blankets. Thus the depth of any given point is bounded by the number of blankets that contain that point. Hence the ply is bounded by the maximum, over all points, of the number of blankets containing that point.

The maximum number of blankets a point can be contained within is $2x$. We will prove this via a contradiction. Assume to the contrary that $2x + 1$ x-blankets cover the point $p$. Let $E$ be the set of all entities whose x-blankets cover $p$. Let $E_L$ be the subset of $E$ such that, for any entity $e_L$ in $E_L$, its position $l_L$ is at most $p$. Similarly let $E_R$ be the subset whose locations are at least $p$. Assume, without loss of generality, $|E_L| \ge |E_R|$. Then $E_L$ must contain at least $x + 1$ entities. Note that the entity in $E_L$ which has the minimal location has a blanket which covers all of the entities in $E_L$, since it covers $p$. Thus it covers $x + 1$ entities, which contradicts the definition of the x-blanket. Hence the ply at any given point can be at most $2x$, and the Rounded Blankets algorithm's ply is $O(x)$.

So the Rounded Blankets Algorithm never has ply worse than a constant factor of the optimal blanket value. Since we have a bound on the intrinsic ply with respect to the optimal blanket value, we can bound our algorithm's performance with respect to the intrinsic ply.

Corollary 12. For any set of static entities the Rounded Blankets algorithm has ply which is $O(\Delta\log(\frac{n}{\Delta}))$.

Proof. Let $x$ be the optimal blanket value for probe density $\frac{1}{2}$. From Theorem 10 we know that $x$ is $O(\frac{\Delta}{\delta}\log(\frac{n}{\Delta}))$, and since $\delta$ is constant this is $O(\Delta\log(\frac{n}{\Delta}))$. From Theorem 11 we know the Rounded Blankets algorithm's ply is $O(x)$. Therefore the Rounded Blankets algorithm's ply is $O(\Delta\log(\frac{n}{\Delta}))$.

This shows that our algorithm can maintain ply which is $O(\Delta\log(\frac{n}{\Delta}))$ at all time steps. The intrinsic ply can be thought of as the minimal ply any algorithm can achieve at any time.
Thus, the ply obtained by the Rounded Blankets algorithm at every time step is at most $O(\Delta\log(\frac{n}{\Delta}))$ times the smallest ply any algorithm can obtain at any time step. The fact that the Rounded Blankets algorithm's ply is $O(\Delta\log(\frac{n}{\Delta}))$, and not the $\Omega(\sqrt{n\Delta})$ bound achieved by Round Robin, shows that there is an advantage to looking at how close entities are to each other. Note that, since $\log(\frac{n}{\Delta}) < \sqrt{\frac{n}{\Delta}}$, we have $\Delta\log(\frac{n}{\Delta}) < \sqrt{n\Delta}$, and thus the Rounded Blankets Algorithm performs better than Round Robin in their respective worst cases.

Next we will show that no algorithm, run for a sufficiently long time, can achieve average ply better than $\Omega(\delta x)$, where $x$ is the optimal blanket value for probe density $\delta$. Hence the maximum ply achieved by the Rounded Blankets algorithm is within a constant factor of the average ply achieved by any other algorithm, over a sufficiently long interval of time. For the purpose of arguing results about the Rounded Blankets algorithm we are only interested in the case where $\delta$ is $\frac{1}{2}$, but the generalization to other probe densities will be useful in our discussion of the dynamic case. This theorem is a stronger version of Theorem 9.

Theorem 13. For all sets of static entities, any algorithm has average ply $\Omega(\delta x)$, where $x$ is the optimal blanket value for probe density $\delta$, over any time interval $T$ such that $|T| > \delta\max_i|b^x_i|$.

Proof. To prove this theorem we will consider two cases. Either entities are being probed too frequently, leading us to probe more times than we are allowed, or they are being probed too infrequently, thus leading to large uncertainty regions and high ply.

Let $x$ be the optimal blanket value for probe density $\delta$, and consider a time interval $T$. Let $p_i$ be the number of times $e_i$ is probed over this interval. The first case is that for all $e_i$, the majority of the entities in $\Gamma^{x/2}_i$ are probed at least $\frac{4|T|}{\delta|b^{x/2}_i|}$ times. The second case is that there is some entity $e_i$ such that the majority of the entities in $\Gamma^{x/2}_i$ are probed fewer than $\frac{4|T|}{\delta|b^{x/2}_i|}$ times. Note that the choice of constant is irrelevant in the second case, since probing fewer than any constant factor of $\frac{|T|}{|b^{x/2}_i|}$ times will give ply on the order of $x$, as will be discussed later in the proof; however, it is vital for the first argument that the constant is large enough to cause the probe density to be too high.

In the first case, for all $e_i$, the majority of the entities in $\Gamma^{x/2}_i$ are probed more than $\frac{4|T|}{\delta|b^{x/2}_i|}$ times. Note that the total number of probes used on entities in a given $\Gamma^{x/2}_i$ is at least the number of probes used on the $\frac{x}{4}$ entities that are probed more than $\frac{4|T|}{\delta|b^{x/2}_i|}$ times, that is,

$$\sum_{e_j\in\Gamma^{x/2}_i} p_j > \frac{x}{4}\cdot\frac{4|T|}{\delta|b^{x/2}_i|} = \frac{x|T|}{\delta|b^{x/2}_i|} \quad\rightarrow\quad \sum_{i=1}^{n}\frac{x|T|}{\delta|b^{x/2}_i|} < \sum_{i=1}^{n}\sum_{e_j\in\Gamma^{x/2}_i} p_j$$

Also note that each entity can be included in at most $x$ of the $\Gamma^{x/2}_i$'s, since there are at most $x$ entities which it is within $\frac{x}{2}$ entities of. Thus

$$\sum_{i=1}^{n}\sum_{e_j\in\Gamma^{x/2}_i} p_j < x\sum_{k=1}^{n} p_k = x|T|$$

Combining the above equations we derive

$$\sum_{i=1}^{n}\frac{x|T|}{\delta|b^{x/2}_i|} < x|T| \leftrightarrow x|T| > \frac{x|T|}{\delta}\sum_{i=1}^{n}\frac{1}{|b^{x/2}_i|} > x|T|$$

The last step follows since $x$ is the optimal blanket value for probe density $\delta$, which means that for any smaller value $y$, $\sum_{i=1}^{n}\frac{1}{|b^y_i|} > \delta$. But $x|T| > x|T|$ is a contradiction, thus we can conclude that we are always in the second case.

In the second case there is some entity $e_i$ such that the majority of the entities in $\Gamma^{x/2}_i$ are probed fewer than $\frac{4|T|}{\delta|b^{x/2}_i|}$ times. By Corollary 8, for each entity $e_j$ which is probed fewer than $\frac{4|T|}{\delta|b^{x/2}_i|}$ times over a time period that is longer than $\frac{\delta|b^{x/2}_i|}{4}$ (which $|T|$ is by definition), $\sigma(j, b^{x/2}_i, T) > \frac{\delta|T||b^{x/2}_i|}{16}$.
Then the total area for all entities in $\Gamma^{x/2}_i$ is at least the sum of the areas of the entities which were probed fewer than $\frac{4|T|}{\delta|b^{x/2}_i|}$ times. Thus

$$\sum_{e_j\in\Gamma^{x/2}_i}\sigma(j, b^{x/2}_i, T) \ge \frac{x}{4}\cdot\frac{\delta|T||b^{x/2}_i|}{16}$$

Averaging over the blanket and the time interval, the average ply is greater than $\frac{\delta x|T||b^{x/2}_i|}{64|T||b^{x/2}_i|} = \frac{\delta x}{64}$. This shows the average ply is $\Omega(\delta x)$.

Since we are always in the second case, all algorithms have average ply which is $\Omega(\delta x)$ over a time interval longer than $\delta\max_i|b^x_i|$.

Now that we know any algorithm has average ply that is within a constant factor of $x$, we can argue that the Rounded Blankets algorithm performs well against other algorithms.

Theorem 14. Given any set of entities and any time interval $T$ such that $|T| > \frac{\max_i|b^x_i|}{2}$, the Rounded Blankets algorithm's maximum ply over $T$ is $O(x)$ and any other algorithm's average ply over $T$ is $\Omega(x)$, where $x$ is the optimal blanket value for probe density $\frac{1}{2}$.

Proof. The Rounded Blankets algorithm always has ply $O(x)$ by Theorem 11. Because $|T| > \delta\max_i|b^x_i|$, any algorithm has average ply $\Omega(\delta x)$ by Theorem 13; since $\delta = \frac{1}{2}$ this is $\Omega(x)$.

This last theorem is the most important result in this chapter. It shows that our algorithm, the Rounded Blankets algorithm, has maximum ply that is within a constant factor of any other algorithm's average ply. It relies on the fact that the optimal blanket value is representative of the best behaviour an algorithm can exhibit in the static case. Note that average ply is a very good measure of an algorithm's performance for the continuous ply minimization problem, since ply cannot change rapidly. At any given time step the ply can decrease by at most one, since only a single entity may be probed, and it may at most double, if two large groups of uncertainty regions go from non-overlapping to overlapping (this bound on the increase only holds in the one-dimensional case). Thus the variance of any algorithm's ply is bounded. Any algorithm whose ply does not change by more than a constant factor of the optimal blanket value will be within a constant factor of the Rounded Blankets algorithm's ply at all time steps. Thus in certain cases it may be possible to make stronger statements about the competitiveness of the Rounded Blankets algorithm.

Note that, even if an algorithm is designed to generate the lowest ply at a specific point in time, an algorithm's ply can never be less than $\Delta$. Thus the Rounded Blankets algorithm is always within a log factor of any other algorithm's ply at all time steps. It is unclear in which cases another algorithm might do better than a constant factor of $x$ for a significant portion of the time.

We now turn our attention to the case where entities are moving, which we refer to as the "dynamic case". In this setting our algorithm no longer works as-is, since it assumes that the entities are stationary. However, by lowering the probe density we give ourselves time to react to a changing environment, and to probe entities frequently enough that their uncertainty regions are never bigger than their blankets.

Chapter 5

Dynamic Case

The static case was a good launching point for our exploration of the continuous ply minimization problem. It allowed us to focus on a few of the factors that cause high ply, mainly how closely packed the entities are, without worrying about the uncertainty of our knowledge. In the dynamic case, where the entities are moving, there are many more factors that contribute to the success of our algorithm.
We will show that, as long as entities are probed frequently enough, our view of how crowded the entities are is not too far off the true situation. This means that the tools we developed while exploring the static case will be applicable in the dynamic case, as long as we compensate for our lack of current information.

As we move into the dynamic case our model changes slightly, since entities are allowed to move, but many aspects of the problem remain the same. Our goal is still to maintain consistently low ply. As we saw in the static case, comparing against the intrinsic ply, $\Delta$, at each time step is inherently unfair. In certain situations the minimum average ply any algorithm can maintain is $\Omega(\Delta\log(\frac{n}{\Delta}))$. The examples from the static case can be used to prove the result holds in the dynamic case, since the dynamic case does not prohibit static trajectories. However, it may be possible to set up a situation in the dynamic case for which no algorithm can maintain ply $O(\Delta_{\min}\log(\frac{n}{\Delta_{\max}}))$, where $\Delta_{\min} = \min_t(\Delta(t))$ and $\Delta_{\max} = \max_t(\Delta(t))$.

In the static case we had a bound on the intrinsic ply with respect to the optimal blanket value. This bound is still applicable in the dynamic case, but the value of the intrinsic ply now depends on the time step. Thus we know the optimal blanket value at time $t$ is at most $O(\Delta(t)\log(\frac{n}{\Delta}))$. A dynamic algorithm that is similar to our Rounded Blankets algorithm will have worst-case performance that is tied to the maximum optimal blanket value over a time interval, because that value characterizes the maximum number of blankets which may overlap at any one time over the interval. The obvious comparison with intrinsic ply would be with its minimum value, which could be much less than the intrinsic ply at the time step when we have the maximum optimal blanket value. This means that intrinsic ply is an even worse measuring stick in the dynamic case. The movement of the entities also makes it harder for us to compare against other algorithms. In the static case we were able to bound every algorithm's ply with respect to the maximum optimal blanket value (which is the optimal blanket value at all time steps, since all the entities were static), but in the dynamic case it is no longer clear that any other algorithm's ply should be bounded by the maximum optimal blanket value. We cannot even make the weaker claim that an arbitrary algorithm's ply is bounded by the average, or even minimum, optimal blanket value.

We would like to be able to argue about the competitiveness between our algorithm and any other algorithm. However, our inability to argue a bound on other algorithms' ply means we have to aim for an easier target. Our goal will be based on the target in the static case: to maintain ply within a constant factor of the worst optimal blanket value over a given time period. This means our algorithm does not need to adjust to the changing blanket value during its run, which simplifies the process. Regrettably, it is impossible to know the worst optimal blanket value over a given time period without knowing the trajectories of the entities. So, to make the problem manageable, we assume there is an oracle that tells us which blanket value to use. We will refer to this blanket value as $\chi$.

With the aid of these simplifications we craft an algorithm for the dynamic case that has the same core ideas as the Rounded Blankets algorithm.
However, the movement of the entities, and our uncertainty about their current locations, means it is impossible for an algorithm to know when a given entity's uncertainty region will expand beyond its associated x-blanket. We show that, as long as we probe frequently enough, our perception of an entity's x-blanket's volume cannot be too much larger than its true volume. Since the entities' rate of movement is bounded, x-blankets don't change too quickly over time. Thus, by deducing the minimal possible volume of $e_i$'s x-blanket from the volume of its perceived x-blanket when it is probed, we can calculate the first time at which $e_i$'s uncertainty region could extend beyond its x-blanket. By reprobing $e_i$ before this time we can ensure that $e_i$'s uncertainty region never covers more than $x$ other entities. As we saw in the static case, as long as no entity's uncertainty region ever covers more than $x$ other entities, our ply is less than $2x$.

5.1 The Bucket Algorithm

Now we will describe the algorithm that we will use to probe entities in the dynamic case. In many ways it is similar to the algorithm we used in the static case. However, we can no longer have a "static" probe sequence, because the demands of each entity change as it moves. So we restrict our attention to the next time an entity will be probed. Our goal is to probe the entity before its uncertainty region expands beyond its x-blanket. We use the perceived x-blanket volume of the entity we are currently probing to calculate the first time such an event could occur. Since we don't have accurate data on the locations of entities, we need to determine the first time at which the uncertainty region could expand beyond its x-blanket. By looking at the x-blanket sizes for small probe densities we are able to ensure there will be enough time to probe all the entities we want to probe.

Since we cannot rely on a static probe sequence, we need a way to schedule the next probe time. The algorithm uses the notion of "buckets". In our application a "bucket" is a queue of entities which is associated with a given time interval. All the entities in a given bucket must be probed within that interval. We split the time line into a set of intervals for each power of two, in the natural way. Each of these intervals is associated with a bucket.

Definition 16. $\beta_{i,j}$ is the $i$th bucket of length $2^j$. It is associated with the interval $[i2^j + 1, (i+1)2^j]$, and contains a queue of entities.

All the buckets are aligned such that, given a pair of buckets that overlap, the smaller is wholly contained in the larger. This is described more formally in the following observation; note that we use the notation $\beta_{i,j} \subset \beta_{k,l}$ to indicate that the interval associated with $\beta_{i,j}$ is a subset of the interval associated with $\beta_{k,l}$.

Observation 2. For each pair of buckets $\beta_{i,j}$, $\beta_{k,l}$, such that $j < l$, if $\beta_{i,j}\cap\beta_{k,l} \ne \emptyset$, then $\beta_{i,j} \subset \beta_{k,l}$.

For convenience, we will use the notation $|\beta_{i,j}|$ to refer to the length of the associated interval, that is, $|\beta_{i,j}| = 2^j$. When talking about a general bucket we drop the indices $i$ and $j$.

The Bucket algorithm is as follows. Assume that we are given a value $\chi$, which we use as our blanket value.

1. At time $t$, probe an entity from the smallest bucket that overlaps $t$ and contains unprobed entities.

2. Let $j$ be the largest power of two which is less than $\frac{1}{22}|\hat{b}^\chi_i(t)|$, that is, $j = 2^{\lfloor\lg(\frac{1}{22}|\hat{b}^\chi_i(t)|)\rfloor}$. Place $e_i$ in the queue of the first bucket of length $j$ that contains only times after $t$, that is, $\beta_{\lceil t/j\rceil, j}$.
3. Increment $t$, and return to step 1.

A minimal runnable sketch of these steps is given at the end of this section. First, we present an example situation to illustrate how the algorithm works; a visual representation of the process is shown in Figure 5.1.

We focus on buckets of length 8 or less which overlap a given time interval; this narrowing of perspective is necessary since there are infinitely many buckets. Our starting state is that there are no entities in the smallest bucket which overlaps the first time step, one entity in the bucket of length two, one in the bucket of length four, and four entities in the bucket of length eight. Thus we choose to probe the entity which is in the bucket of length two, and it is put in the bucket which overlaps time steps three and four.

After a few more probes we reach the situation in the third diagram: only one of the entities in the largest bucket has been probed, and both of the entities which were in smaller buckets are in the bucket of length four which overlaps the fifth through eighth time steps. Since $e_5$ has been placed in a larger bucket, its x-blanket must have grown. In the last diagram, neither of these entities is probed in the last two time steps, so their x-blankets must have grown to the point where they were placed in the next bucket of length eight (not shown in the figure). Also, we can see in the last diagram that there is one entity left in the bucket of length eight after the eight time steps have elapsed. This means our algorithm has failed in this case.

[Figure 5.1: An illustration of the state of the bucket algorithm at four different time steps. The rectangles represent buckets, which contain a queue of entities inside them, and the dotted lines separate the interval into eight distinct time steps. Each bucket is associated with the time steps it overlaps. The bolded buckets are the buckets the algorithm is considering probing an entity from at that time step. The entities in red, along the bottom of the diagrams, indicate what the algorithm probed at the associated time step. (a) At the first time step the algorithm will probe $e_5$ as it is in the smallest non-empty "active" bucket. (b) At the second time step $e_4$ is in the smallest non-empty "active" bucket. (c) The situation after the algorithm has run for half of the time steps in the interval. (d) The algorithm has run for the entire interval. Note there is still an entity in the top bucket's queue, thus it has overflowed.]

We want to argue that, in situations where such a failure occurs, the entities must have been located in a distribution that had a larger optimal blanket value than the $\chi$ we used. In order to argue about the cases where our algorithm fails, we need to argue that our perceptions are close to the true situation, as we do in the next section.
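The promised sketch follows. It is our own illustration: it assumes a trajectory oracle for testing, treats every entity as freshly probed at time zero, and requires perceived blanket volumes of at least 22 so that bucket lengths come out as whole powers of two.

    import math
    from collections import defaultdict, deque

    def bucket_algorithm(true_position, n, chi, horizon):
        # Sketch of the Bucket Algorithm for a given blanket value chi.
        # true_position(i, t) is the probe oracle; perceived locations are the
        # positions recorded at each entity's most recent probe.
        last_seen = {i: true_position(i, 0) for i in range(n)}  # probed at t = 0
        buckets = defaultdict(deque)      # (start, length) -> queue of entities

        def perceived_blanket(i):
            # |b-hat^chi_i(t)|: twice the distance from e_i's perceived location
            # to the chi-th closest other perceived location (assumes chi < n).
            d = sorted(abs(last_seen[j] - last_seen[i]) for j in range(n) if j != i)
            return 2.0 * d[chi - 1]

        def enqueue(i, t):
            vol = perceived_blanket(i)
            length = 2 ** int(math.floor(math.log2(vol / 22)))   # step 2
            start = math.ceil(t / length) * length + 1           # first bucket after t
            buckets[(start, length)].append(i)

        for i in range(n):
            enqueue(i, 0)
        schedule = []
        for t in range(1, horizon + 1):
            # Step 1: smallest bucket overlapping t that has unprobed entities.
            live = [(length, start) for (start, length) in buckets
                    if start <= t < start + length and buckets[(start, length)]]
            if live:
                length, start = min(live)
                i = buckets[(start, length)].popleft()
                last_seen[i] = true_position(i, t)               # probe e_i
                enqueue(i, t)                                    # reschedule it
                schedule.append(i)
            else:
                schedule.append(None)                            # idle time step
        return schedule

    # Static trajectories as a smoke test; any half-speed trajectories plug in.
    pos = [0, 100, 300, 600, 1000, 1500, 2100, 2800]
    print(bucket_algorithm(lambda i, t: pos[i], len(pos), chi=4, horizon=40))

On this instance the early steps are busy and later steps sit idle, reflecting a probe density well below one; with a $\chi$ that is too small, a queue would outlive its bucket's interval, the "overflow" analyzed in Section 5.1.2.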
5.1.1 Perception Versus Reality

It will be important for us to distinguish between the true location of an entity and the algorithm's perception of that entity's location. Our algorithm will often need to know the volume of an entity's x-blanket, but it doesn't know the current locations of the entities. Instead, the algorithm uses the last known location of an entity as a surrogate for its current location. To that end, we need notation to represent this situation. First recall that $p_i(t)$ is the last time $e_i$ was probed. We refer to the location the entity was at the last time it was probed as its "perceived location" at the current time.

Definition 17. For any entity $e_i$, its perceived location at time $t$, $\hat{l}_i(t)$, is the location of that entity when it was last queried. That is, $\hat{l}_i(t) = l_i(p_i(t))$.

The definitions of the perceived covered set and the perceived x-blanket are similar to their true counterparts, with the true locations of the points replaced by their perceived locations.

Definition 18. For any given $x \in \mathbb{Z}$ and $i \in \{1, \dots, n\}$, $\hat{\Gamma}^x_i(t)$ is the set that contains the perceived locations of $e_i$ and $x$ other entities, such that for any entities $e_a \in \hat{\Gamma}^x_i(t)$ and $e_b \notin \hat{\Gamma}^x_i(t)$, $|\hat{l}_i(t) - \hat{l}_a(t)| \le |\hat{l}_i(t) - \hat{l}_b(t)|$.

Definition 19. For any given $x \in \mathbb{Z}$ and $i \in \{1, \dots, n\}$, $\hat{b}^x_i(t)$ is the largest open ball centered at $\hat{l}_i(t)$ that contains $\hat{\Gamma}^x_i(t)$. This ball is the "perceived x-blanket" of $e_i$ at time $t$.

All of our uncertainty comes from movement over time. There are two distinct ways in which this manifests itself in our analysis. First, the width of an entity's x-blanket may change over time due to the motion of all of the entities. Secondly, since we are looking at old location information, there is a difference between our perception of what the entity's x-blanket is at any time and what it truly is. We need to argue that, even though we have flawed information about the locations of entities, we can still deduce how crowded an entity is. Equivalently, we argue that our perception of an entity's x-blanket is always close to its true x-blanket. The argument relies on us probing the entities with an appropriate frequency.

One fact that will help with this argument is the relationship between an entity's x-blanket volume and the volumes of the x-blankets of the entities in $\Gamma^x_i(t)$.

Observation 3. For any entity $e_j$ that is in $\Gamma^{x+1}_i(t)$ at time $t$, $|b^x_j(t)| \le 2|b^x_i(t)|$, since $e_j$ is in the closure of $b^x_i(t)$ and there are at least $x$ other entities in the closure of $b^x_i(t)$.

Note that if we want a non-trivial lower bound for $e_i$'s x-blanket volume at time $t$ based on the uncertainty regions of the entities, then there needs to be a gap between $e_i$'s uncertainty region and the uncertainty region which is $x$th furthest from $e_i$; otherwise the $x$ closest entities to $e_i$ might all be on top of one another. In that situation $b^x_i(t)$'s volume would be zero, so we would not be able to improve on the trivial bound. We know the $x$th closest entity to $e_i$ is a distance of $\frac{1}{2}|b^x_i(t)|$ away from $e_i$. Thus, if we want these entities to have no overlapping uncertainty regions, they must have been probed in the last $\frac{1}{2}|b^x_i(t)|$ time steps, since no entity could have moved further than $\frac{1}{4}|b^x_i(t)|$ in that time. From this we can conclude that any entity must be probed within a number of time steps less than half the volume of the smallest blanket it is in. As we see from Observation 3, the entities in $\Gamma^x_i(t)$ may have x-blankets with twice the volume of $b^x_i(t)$. Thus, we can ensure that an entity has been probed within a number of time steps less than half the volume of any x-blanket it may be in by probing it within a number of time steps equal to a quarter of its own x-blanket's volume.

We will ensure that we have probed all entities within a number of time steps equal to a quarter of their true x-blanket volumes. We will show this is enough to prove that the entities' true x-blanket volumes have not changed much since the last time they were probed.

Lemma 15. If $t - t' \le \frac{1}{4}|b^x_i(t)|$ then $\frac{2}{3}|b^x_i(t')| \le |b^x_i(t)| \le 2|b^x_i(t')|$.

Proof.
At time $t$ entity $e_i$ could be at most $d = \frac{1}{8}|b^x_i(t)|$ units away from where it was at $t'$, since $t - t' \le \frac{1}{4}|b^x_i(t)|$ and all entities have a maximum speed of $\frac{1}{2}$ units per time step. Similarly, any other entity $e_j$ is at most $d$ units from where it was at $t'$.

First we consider the lower bound on the blanket volume at time $t$. Recall that the volume of $e_i$'s x-blanket is determined by the entity which is $(x+1)$st closest to $e_i$'s location, since it is the largest open ball that contains at most $x$ entities. At least the $x+1$ closest entities to $e_i$ at time $t$, that is $\Gamma^{x+1}_i(t)$, are in the closure of $b^x_i(t)$. Thus they were all in the interval $[\min(b^x_i(t)) - d, \max(b^x_i(t)) + d]$ at time $t'$. $e_i$ could have been anywhere in the interval $[l_i(t) - d, l_i(t) + d]$ at time $t'$. Since $l_i(t) - \min(b^x_i(t)) = \max(b^x_i(t)) - l_i(t) = \frac{|b^x_i(t)|}{2}$, this means that $|l_i(t') - \min(b^x_i(t'))| \le \frac{|b^x_i(t)|}{2} + 2d$. The x-blanket volume is twice this length, and hence

$$|b^x_i(t')| \le |b^x_i(t)| + 4d = \frac{3}{2}|b^x_i(t)|$$

Thus

$$\frac{2}{3}|b^x_i(t')| \le |b^x_i(t)|$$

See Figure 5.2 for an illustration of the bounds on the locations of points in $b^x_i(t)$ at time $t'$.

[Figure 5.2: Bounds on the possible true x-blanket of $e_i$ at time $t'$ given the true x-blanket of $e_i$ at time $t$. The outer grey cones illustrate the bound on the positions of the extremal points of $\Gamma^x_i(t)$ from the time $t - \frac{1}{4}|b^x_i(t)|$ up to $t$. The middle grey cone shows the potential locations of $e_i$.]

The proof of the upper bound is similar, except that we argue that at most $x$ entities (including $e_i$) could have been in the interval $(\min(b^x_i(t)) + d, \max(b^x_i(t)) - d)$ at $t'$. This is because any entity in this interval at $t'$ will be in $b^x_i(t)$, which we know contains at most $x$ entities. Hence

$$|b^x_i(t')| \ge |b^x_i(t)| - 4d = \frac{1}{2}|b^x_i(t)|$$

Thus

$$|b^x_i(t)| \le 2|b^x_i(t')|$$

Combining these bounds we produce $\frac{2}{3}|b^x_i(t')| \le |b^x_i(t)| \le 2|b^x_i(t')|$.

Next we will show that if we have probed all entities within a number of time steps equal to a quarter of their true x-blanket volumes, then each entity's true x-blanket volume is relatively close to its perceived x-blanket volume at that time. The proof of this lemma is very similar to the proof of the previous lemma, so our discussion will be more terse.

Lemma 16. For any time $t$, if for all entities $e_j$, $t - p_j(t) \le \frac{1}{4}|b^x_j(t)|$, then for all entities $e_i$, $\frac{4}{7}|\hat{b}^x_i(t)| \le |b^x_i(t)| \le 4|\hat{b}^x_i(t)|$.

[Figure 5.3: Bounds on the possible perceived x-blanket of $e_i$ at time $t$ given the true x-blanket of $e_i$ at time $t$. Note that the perceived location of any point in $\hat{\Gamma}^x_i(t)$ may be based on its location up to $\frac{1}{2}|b^x_i(t)|$ time steps ago. The orange cone represents a bound on the location of an entity outside of $\hat{\Gamma}^x_i(t)$. Note that its right endpoint is necessarily further from $e_i$ than the right endpoint of the leftmost grey triangle.]

Proof. For any entity $e_j$ that is in $\Gamma^{x+1}_i(t)$ at time $t$, $|b^x_j(t)| \le 2|b^x_i(t)|$, by Observation 3. Thus, all of these entities have been probed in the past $\frac{1}{4}|b^x_j(t)| \le \frac{1}{2}|b^x_i(t)|$ time steps. Hence at time $t$ all of the entities in $\Gamma^{x+1}_i(t)$ could have moved a distance of at most $2d = \frac{1}{4}|b^x_i(t)|$ since they were last probed.

To prove the lower bound we will consider the largest that $\hat{b}^x_i(t)$ could be. Note that all of the entities in $\Gamma^{x+1}_i(t)$ must have been in the interval $[\min(b^x_i(t)) - 2d, \max(b^x_i(t)) + 2d]$ the last time they were probed.
Also note that $e_i$ must have been within the interval $[l_i(t) - d, l_i(t) + d]$, since it was probed in the last $\frac{1}{4}|b^x_i(t)|$ time steps. Thus the maximum possible distance between $\hat{l}_i(t)$ and the perceived location of an entity in $\Gamma^{x+1}_i(t)$ is $\frac{|b^x_i(t)|}{2} + 3d = \frac{7}{8}|b^x_i(t)|$. Hence $\frac{4}{7}|\hat{b}^x_i(t)| \le |b^x_i(t)|$.

The argument for the upper bound is similar. It is clear that the entity in $\Gamma^x_i(t)$ which is furthest from $e_i$ at time $t$, $e_x$, cannot have been in the interval $(\min(b^x_i(t)) + 2d, \max(b^x_i(t)) - 2d)$ at $p_x(t)$. This would be enough to argue our result, except that we must also consider entities not in $\Gamma^x_i(t)$. We need to argue that all of the entities that are not in $\Gamma^x_i(t)$ have perceived locations outside of $[\min(b^x_i(t)) + 2d, \max(b^x_i(t)) - 2d]$, since any entity inside that interval would shrink the volume of $e_i$'s perceived x-blanket. We know that for any entity $e_o$ not in $\Gamma^x_i(t)$, $e_o$'s x-blanket can at most be large enough to contain all of $b^x_i(t)$, since there are $x$ entities in $b^x_i(t)$. Thus

$$|b^x_o(t)| < 2|l_i(t) - l_o(t)| + |b^x_i(t)| = 2(z + |b^x_i(t)|)$$

where $z = \min\{|\min(b^x_i(t)) - l_o(t)|,\ |\max(b^x_i(t)) - l_o(t)|\}$. Informally, $z$ is the distance from $e_o$ to $b^x_i(t)$. Hence $t - p_o(t) < \frac{z + |b^x_i(t)|}{2}$, which means that $|\hat{l}_o(t) - l_o(t)| < \frac{z + |b^x_i(t)|}{4} = \frac{z}{4} + 2d$. And so no entity which is outside of $b^x_i(t)$ at time $t$ could have a perceived location at time $t$ that is inside $[\min(b^x_i(t)) + 2d, \max(b^x_i(t)) - 2d]$, since the distance from $e_o$ to that interval is $z + 2d$. This means the minimal distance between $\hat{l}_i(t)$ and the entities in $\hat{\Gamma}^x_i(t)$ is at least $\frac{1}{4}|b^x_i(t)|$. Thus $|b^x_i(t)| \le 4|\hat{b}^x_i(t)|$.

Combining the two bounds we obtain $\frac{4}{7}|\hat{b}^x_i(t)| \le |b^x_i(t)| \le 4|\hat{b}^x_i(t)|$.

The above results all rely on knowing that we have probed each entity recently with respect to its true x-blanket. It will not be possible for our algorithm to ensure this directly, as our algorithm schedules probes into the future based on the current perceived volume of a given entity's x-blanket. However, we will show that as long as an entity is probed within a number of time steps that is a small fraction of our current perception of its x-blanket volume, it will never have been left unprobed for more than a quarter of its true x-blanket volume. Note that this is also a bound on the volume of $e_i$'s uncertainty region, as that volume grows by one each time step. This result will be important in showing we can maintain a reasonable ply.

Theorem 17. If for all times $t$ and all entities $e_j$, $t - p_j(t) \le \frac{1}{11}|\hat{b}^x_j(p_j(t))|$, then for all times $t$ and any entity $e_i$, $t - p_i(t) < \frac{1}{4}|b^x_i(t)|$.

Proof. Assume the hypothesis of the theorem holds. We will prove the result by induction. Our base case is at time zero. The result is trivially true, since at time 0, for each entity $e_i$, $t - p_i(t) = 0$.

Assume that at all time steps $\tau < t$, all entities $e_j$ have been probed in the last $\frac{1}{4}|b^x_j(\tau)|$ time steps. Take $e_i$ to be some arbitrary entity. Our goal is to bound the time that has elapsed since $e_i$ was last probed with respect to the volume of its true x-blanket at time $t$. To do so we need to bound this volume by the volume of the perceived x-blanket at $p_i(t)$, since that is what we base our probe schedule on. We will do this in two steps: first we bound the true volume of the x-blanket at $p_i(t)$ by the perceived volume at $p_i(t)$, and then we bound the true volume at $t$ by the true volume at $p_i(t)$.

To bound the volume of the perceived x-blanket at time $p_i(t)$ by the true x-blanket volume at that time, consider the set of entities $\Gamma^x_i(p_i(t))$.
Every $e_j$ in $\Gamma^x_i(p_i(t))$ was last probed at most $\frac{1}{4}|b^x_j(p_i(t))|$ time steps before $p_i(t)$, by our inductive hypothesis. Thus, by Lemma 16, $\frac{4}{7}|\hat{b}^x_i(p_i(t))| \le |b^x_i(p_i(t))|$.

Now we want to bound the volume of $b^x_i(t)$. By the hypothesis of the theorem,

$$t - p_i(t) \le \frac{1}{11}|\hat{b}^x_i(p_i(t))| \le \frac{7}{44}|b^x_i(p_i(t))| < \frac{1}{6}|b^x_i(p_i(t))|$$

Thus at time $t$ at most the entities in $\Gamma^x_i(p_i(t))$ can lie in the interval $[\min(b^x_i(p_i(t))) + \frac{1}{12}|b^x_i(p_i(t))|,\ \max(b^x_i(p_i(t))) - \frac{1}{12}|b^x_i(p_i(t))|]$, and $e_i$ itself has moved at most $\frac{1}{12}|b^x_i(p_i(t))|$ units. Thus,

$$|b^x_i(t)| \ge |b^x_i(p_i(t))| - \frac{1}{3}|b^x_i(p_i(t))| = \frac{2}{3}|b^x_i(p_i(t))|$$

This implies

$$\frac{1}{6}|b^x_i(p_i(t))| \le \frac{1}{6}\left(\frac{3}{2}|b^x_i(t)|\right) = \frac{1}{4}|b^x_i(t)|$$

Combining this with our previous result we find that $t - p_i(t) < \frac{1}{4}|b^x_i(t)|$. Thus by induction our result holds for all time steps.

Corollary 18. For all pairs of times $t, t'$ such that $t - t' < \frac{1}{11}|\hat{b}^x_i(p_i(t))|$, for all entities $e_i$, $\frac{1}{2}|b^x_i(t')| < |b^x_i(t)| < \frac{3}{2}|b^x_i(t')|$.

Proof. The result follows from Lemma 15 and Theorem 17.

Corollary 19. If for all $t$ and all entities $e_j$, $t - p_j(t) < \frac{1}{11}|\hat{b}^x_j(p_j(t))|$, then for all times $t$ and any entity $e_i$, $\frac{4}{7}|\hat{b}^x_i(t)| \le |b^x_i(t)| \le 4|\hat{b}^x_i(t)|$.

Proof. The result follows from Lemma 16 and Theorem 17.

The final result of this section combines the above two results to argue that the volume of our perceived x-blanket at a given time is a good approximation of the true x-blanket over an interval of time stretching into the future.

Lemma 20. If for all $t$ and all entities $e_j$, $t - p_j(t) < \frac{1}{11}|\hat{b}^x_j(p_j(t))|$, then for all times $t$ and any entity $e_i$, $\frac{8}{21}|\hat{b}^x_i(t)| \le |b^x_i(t^*)| \le 8|\hat{b}^x_i(t)|$ for all $t^*$ in $[t, t_f]$, where $t_f = t + \frac{|\hat{b}^x_i(t)|}{11}$.

Proof. We will prove the upper and lower bounds separately. First we prove the lower bound. From Corollary 19 we know that

$$\frac{4}{7}|\hat{b}^x_i(t)| \le |b^x_i(t)|$$

From Corollary 18 we get that

$$|b^x_i(t)| \le \frac{3}{2}|b^x_i(t^*)|$$

Therefore

$$\frac{4}{7}|\hat{b}^x_i(t)| \le \frac{3}{2}|b^x_i(t^*)| \leftrightarrow \frac{8}{21}|\hat{b}^x_i(t)| \le |b^x_i(t^*)|$$

For the upper bound, from Corollary 19 we get

$$|b^x_i(t)| \le 4|\hat{b}^x_i(t)|$$

From Corollary 18 we know that

$$\frac{1}{2}|b^x_i(t^*)| \le |b^x_i(t)|$$

This means

$$\frac{1}{2}|b^x_i(t^*)| \le 4|\hat{b}^x_i(t)| \leftrightarrow |b^x_i(t^*)| \le 8|\hat{b}^x_i(t)|$$

Combining our upper and lower bounds we get the statement of the lemma:

$$\frac{8}{21}|\hat{b}^x_i(t)| \le |b^x_i(t^*)| \le 8|\hat{b}^x_i(t)|$$

In this section we have shown that, by probing frequently enough based on the perceived x-blanket volumes, our perception of the entities is not too far off their true locations. Knowing this, we can show the bucket algorithm does fairly well, assuming we have an idea of what the optimal blanket value is.

5.1.2 Bucket Algorithm Analysis

Using our results from the previous section we will show that the Bucket Algorithm performs well if we are given a reasonable value for $\chi$.

Definition 20. We say a bucket has "overflowed" if there is still an element in its queue at the end of its associated time interval.

As long as no bucket overflows, we have managed to probe every entity before its uncertainty region exceeds its χ-blanket. By an argument similar to the one used in the static case, this leads us to conclude we have low ply.

Theorem 21. If no bucket overflows, the Bucket Algorithm maintains ply $O(\chi)$ for the value $\chi$ we are given.

Proof. Assume that for a given $\chi$ value the Bucket Algorithm runs without a bucket ever overflowing. Let $e_i$ be an arbitrary entity, and $t$ be an arbitrary time step.
5.1.2 Bucket Algorithm Analysis

Using our results from the previous section we will show that the Bucket Algorithm performs well if we are given a reasonable value for $\chi$.

Definition 20. We say a bucket has "overflowed" if there is still an element in its queue at the end of its associated time interval.

As long as no bucket overflows, we have managed to probe every entity before its uncertainty region exceeds its $\chi$-blanket. By an argument similar to the one used in the static case, this leads us to conclude that we have low ply.

Theorem 21. If no bucket overflows, the Bucket Algorithm maintains ply $O(\chi)$ for the value $\chi$ we are given.

Proof. Assume that for a given $\chi$ value the Bucket Algorithm runs without a bucket ever overflowing. Let $e_i$ be an arbitrary entity, and $t$ an arbitrary time step. Recall that when the Bucket Algorithm probed entity $e_i$ at time $p_i(t)$ it put $e_i$ into a bucket of length at most $\frac{1}{22}|\hat{b}^\chi_i(p_i(t))|$, and note that the time after $p_i(t)$ before the start of that bucket is at most $\frac{1}{22}|\hat{b}^\chi_i(p_i(t))|$, since it is the next bucket of its length which does not overlap the current time step. Since the algorithm runs without a bucket overflowing, this means that at any time $t$, $e_i$ has been probed within the last $\frac{1}{11}|\hat{b}^\chi_i(p_i(t))|$ time steps. Then by Theorem 17 the entity $e_i$ was probed in the last $\frac{1}{4}|b^\chi_i(t)|$ time steps. Thus $|r_i(t)| < \frac{1}{4}|b^\chi_i(t)|$ (recall that $r_i(t)$ is the uncertainty region of $e_i$ at $t$). Since $l_i(t) \in r_i(t)$,

$$r_i(t) \subseteq \left[l_i(t)-\frac{1}{4}|b^\chi_i(t)|,\; l_i(t)+\frac{1}{4}|b^\chi_i(t)|\right].$$

Thus $r_i(t) \subset b^\chi_i(t) = \left[l_i(t)-\frac{1}{2}|b^\chi_i(t)|,\; l_i(t)+\frac{1}{2}|b^\chi_i(t)|\right]$ for all times $t$.

Note that, as in the static case, no more than $2\chi$ of the $\chi$-blankets overlap at any one point: for any given point, only the $\chi$-blankets of at most the $\chi$ entities to its left and the $\chi$ entities to its right can overlap that point (this is discussed more formally in Theorem 11). Since an entity's uncertainty region is contained in its $\chi$-blanket, our ply is always less than $2\chi$, which is $O(\chi)$.
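The bucket placement rule used in this proof can be made concrete with a small sketch: on probing an entity, choose the largest power-of-two bucket length that is at most $\frac{1}{22}$ of its perceived blanket volume, and then the first bucket of that length that starts after the current time step. The bucket grid $\beta_{i,j}$ covering $[i2^j+1,\,(i+1)2^j]$ is the one used in the proof of Theorem 22 below; the code itself is our own illustrative rendering, not the thesis pseudocode.

```python
import math

def place_in_bucket(t, perceived_volume):
    """Pick the bucket beta_{i,j} for an entity probed at time t.

    Buckets beta_{i,j} cover the time intervals [i*2^j + 1, (i+1)*2^j].
    Assumes perceived_volume >= 22 so that an integer bucket length
    exists.  The entity must be re-probed during the returned bucket's
    interval.
    """
    # Largest power-of-two length at most perceived_volume / 22; the
    # length is therefore at least perceived_volume / 44.
    j = int(math.floor(math.log2(perceived_volume / 22)))
    length = 2 ** j
    # First bucket of this length whose interval starts after time t,
    # so it does not overlap the current time step.
    i = t // length + 1
    return (i, j)  # interval [i*length + 1, (i+1)*length]

# Example: probed at t = 100 with perceived volume 180: length 2^3 = 8
# and bucket beta_{13,3}, covering the time interval [105, 112].
```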
Now we will show that a bucket overflows only if there is a time step in its associated time interval at which the sum of the inverse $\chi$-blanket volumes is greater than $\frac{1}{800}$.

Theorem 22. If a bucket overflows, then there is a time $\tau$, in that bucket's associated time interval, such that $\sum_{i=1}^{n}\frac{1}{|b^\chi_i(\tau)|} > \frac{1}{800}$.

Proof. Assume that some bucket, $\beta^*$, overflows, and let $T^*$ be its associated time interval.

Let $B(t) = \{\beta_{i,j} : t \in [i2^j+1,\,(i+1)2^j]\}$, the set of buckets whose intervals include time $t$. Note that there is one bucket in $B(t)$ for each length $2^j$.

We define $\phi(t)$ as follows (recall that $|\beta|$ is the length of the associated interval, not the number of entities in bucket $\beta$):

$$\phi(t) = \sum_{\beta\in B(t)}\;\sum_{e_i\in\beta}\frac{1}{|\beta|}.$$

That is, we sum over all the placements of entities in buckets which overlap time $t$, taking the sum of the inverse of the size of the bucket each entity is placed in. We can think of $\phi(t)$ as a surrogate for the sum of the inverse $\chi$-blanket volumes, since the volume of the bucket an entity is placed into is closely related to the volume of its $\chi$-blanket. There are several factors which can cause the sum of the inverse bucket volumes to be much larger than the sum of the inverse blanket volumes; these necessitate the $\frac{1}{800}$ constant. Since we never remove elements from buckets, just mark them as probed, this sum includes entities that were chosen to be probed at a time step earlier than $t$. Note that if an entity $e_i$ that is in bucket $\beta$ is probed and placed in a bucket shorter than $\beta$, the new bucket may overlap $\beta$; the entity then contributes to $\phi(t)$ multiple times, for all the times $t$ when the two buckets overlap. Note also that $\phi(t)$ includes contributions from entities which are never probed, if a bucket which overlaps $t$ overflows.

There are now two steps to our proof:
1. Prove there exists a time $\tau$ such that $\phi(\tau)\ge 1$.
2. Show that this implies that at that time $\tau$ the sum $\sum_{i=1}^{n}\frac{1}{|b^\chi_i(\tau)|}$ is greater than $\frac{1}{800}$.

For the first step of the proof we consider $\sum_{t\in T^*}\phi(t)$. Recall that $T^*$ is the interval associated with $\beta^*$, the bucket which overflowed. The sum over all the buckets is at least the sum over just those buckets whose intervals are contained within $\beta^*$'s. By Observation 2 we know that each of these buckets is wholly contained within $T^*$, so we sum over the entities in each such bucket $\beta$ exactly $|\beta|$ times. Hence

$$\sum_{t\in T^*}\phi(t) = \sum_{t\in T^*}\sum_{\beta\in B(t)}\sum_{e\in\beta}\frac{1}{|\beta|} \;\ge\; \sum_{t\in T^*}\sum_{\substack{\beta\in B(t)\\ \beta\subseteq\beta^*}}\sum_{e\in\beta}\frac{1}{|\beta|} \;=\; \sum_{\beta\subseteq\beta^*}|\beta|\sum_{e\in\beta}\frac{1}{|\beta|} \;=\; \sum_{\beta\subseteq\beta^*}\sum_{e\in\beta}1 \;>\; |T^*|.$$

The last inequality is a consequence of $\beta^*$ overflowing: more than $|T^*|$ entities are scheduled to be probed over the interval $T^*$, each of them in buckets $\beta$ with $\beta\subseteq\beta^*$. This implies

$$\bar{\phi} = \frac{\sum_{t\in T^*}\phi(t)}{|T^*|} > 1.$$

That is, the average $\bar{\phi}$ of the $\phi(t)$ values over the interval $T^*$ is greater than 1, so there is some time in $T^*$ at which $\phi(t)$ is at least one. Let $\tau$ be such a time.

Having established that there exists a time $\tau$ with $\phi(\tau)\ge 1$, we will show that $\sum_i\frac{1}{|b^\chi_i(\tau)|} > \frac{1}{800}$; that is, the sum of the inverse blanket volumes is only a factor of $800$ smaller than $\phi(\tau)$.

Our goal is to describe the relationship between the volume of a bucket that overlaps time $\tau$ and in which an entity was placed, and the volume of that entity's $\chi$-blanket at time $\tau$. Consider some arbitrary entity $e_i$, and some bucket $\beta$ containing $e_i$ whose interval overlaps time $\tau$. Let $p_i(\beta)$ be the time at which $e_i$ was probed and placed in bucket $\beta$. The distinction between $p_i(\beta)$ and $p_i(\tau)$ is important, since the time step $p_i(\tau)$ may lie inside $\beta$; we are interested in the probe time at which $e_i$ was placed in $\beta$. We aim to probe an entity before a number of time steps equal to $\frac{1}{11}$ of its perceived $\chi$-blanket volume have passed. Our algorithm does this by placing the entity in the first bucket whose size is less than $\frac{1}{22}$ of the entity's perceived $\chi$-blanket volume; the factor of two is sufficient to ensure that the time between when the entity was probed and the end of the bucket it is placed in is less than $\frac{1}{11}$ of that volume. Thus we know that

$$\tau-p_i(\beta) < \frac{|\hat{b}^\chi_i(p_i(\beta))|}{11}.$$

Hence, from Lemma 20,

$$\frac{8}{21}|\hat{b}^\chi_i(p_i(\beta))| \le |b^\chi_i(\tau)| \le 8|\hat{b}^\chi_i(p_i(\beta))|.$$

We know from the definition of our algorithm that $\beta$'s volume is at least $\frac{1}{44}$ of the volume of $\hat{b}^\chi_i(p_i(\beta))$. This means

$$|b^\chi_i(\tau)| \le 8|\hat{b}^\chi_i(p_i(\beta))| \;\Rightarrow\; |b^\chi_i(\tau)| \le 352|\beta|.$$

It is important to note that an entity may show up in several different buckets in the calculation of $\phi(\tau)$. Observe that the contributions $\frac{1}{|\beta|}$ of a single entity, taken over the buckets containing it that overlap $\tau$, form a geometric sum, since the bucket lengths are distinct powers of two. It follows that the entire contribution of a single entity is not more than twice its largest contribution. That is, if $\omega_i$ is the length of the smallest bucket containing $e_i$ whose interval intersects $\tau$, then

$$\phi(\tau) < 2\sum_{i=1}^{n}\frac{1}{\omega_i}.$$

Since $|b^\chi_i(\tau)| \le 352|\beta|$,

$$\phi(\tau) < \sum_{i=1}^{n}\frac{704}{|b^\chi_i(\tau)|},$$

and since $\phi(\tau)\ge 1$ this gives

$$\sum_{i=1}^{n}\frac{1}{|b^\chi_i(\tau)|} > \frac{1}{800}.$$

Thus if $\beta^*$ overflows, then $\sum_i\frac{1}{|b^\chi_i(\tau)|} > \frac{1}{800}$ for some time step $\tau$ in $T^*$.
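As a sanity check on the potential function, the following sketch evaluates $\phi(t)$ directly from a record of bucket placements; the data layout is our own choice, made for illustration.

```python
def phi(t, placements):
    """Evaluate the potential phi(t) from the proof of Theorem 22.

    placements: one (i, j) pair per entity placement, where bucket
    beta_{i,j} covers the interval [i*2^j + 1, (i+1)*2^j].  Entities
    are never removed from buckets (only marked as probed), so every
    placement keeps contributing as long as its bucket overlaps t.
    """
    total = 0.0
    for i, j in placements:
        length = 2 ** j
        start, end = i * length + 1, (i + 1) * length
        if start <= t <= end:       # bucket's interval includes t
            total += 1.0 / length   # contributes 1/|beta|
    return total
```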
Now we know that if a bucket overflows, there is a time when the sum of the inverse blanket volumes is relatively large; we argue that this means the distribution of the entities at that time is "bad". Specifically, if the entities were frozen in those positions, ply $\Omega(\chi)$ would be unavoidable.

Theorem 23. If a bucket overflows, then there is a time at which the entities are located in such a way that any algorithm would produce average ply $\Omega(\chi)$ if the entities did not move.

Proof. By Theorem 22 we know that if a bucket overflows, there is a time when $\sum_i\frac{1}{|b^\chi_i(t)|} > \frac{1}{800}$; let $\tau$ be such a time. This means that at $\tau$ the optimal blanket value for probe density $\frac{1}{800}$ is greater than $\chi$. By Theorem 13 we know that any algorithm's average ply is within a constant factor of the optimal blanket value, for any given probe density, given that $T$ is sufficiently long. Since the entities do not move, $T$ can be arbitrarily long, so eventually any algorithm's average ply is $\Omega(\chi)$ for the distribution of entities at $\tau$.

For the following it is important to note the distinction between the value of $\chi$ we are given and the true optimal blanket value for probe density $\frac{1}{800}$ at time $t$, which we will call $x(t)$.

To elaborate on the above theorem: if a bucket overflows, then by Theorem 22 there is some time $\tau$ at which $x(\tau)$ was greater than the value $\chi$ we were given. That means the value of $\chi$ was smaller than the optimal blanket value at $\tau$. As in the static case, a high optimal blanket value is indicative of a configuration of entities for which high ply is unavoidable if the entities stay in the "bad" configuration for an extended period of time. However, it is not clear in this case whether a smart algorithm could avoid high ply by "preparing" for the time step when the entities are too closely crowded, anticipating that the entities will become uncrowded quickly.

It is also not clear that our algorithm will fail simply because there is some time step where the sum of the inverse x-blanket volumes is relatively large. It is entirely plausible that, despite there being a time when $x(\tau) > \chi$, our algorithm still runs successfully. In fact, in our proof of Theorem 22 one can see that although we only consider a single time step, the sum $\sum_{t\in T^*}\phi(t)$ is relatively high on average. It is unclear from our analysis how quickly this sum can change, and whether a single identifiable violation actually implies a lengthier interval of poor positioning of the entities. A study of how sensitive our algorithm actually is to these violations would be an interesting avenue for future work.

The algorithm presented in this chapter provides a good first step towards solving the continuous ply minimization problem in its full generality. The algorithm does depend on an oracle that provides a $\chi$ value to shoot for, which future work would aim to remove. It is also unfortunate that our algorithm does not adjust to an ever-changing environment. We can get a good estimate of the optimal blanket value based on the perceived locations of the points; it would be interesting to design a more adaptive algorithm that adjusts the blanket value it uses to generate probes based on how this estimate changes. This is not possible with our current tools, as our estimate of the blanket value is not tight enough, and it changes too rapidly to be used effectively. Although there is still work to be done before the question of how to efficiently and continuously probe entities is answered in full, this work provides a strong base to build on.

Chapter 6

Concluding Remarks

Variants of the ply minimization problem have been explored in a few different papers [8][9]. The main focus in these papers is the "single-shot" case, that is, the case where one wishes to minimize the ply at a single point in time. The current work built on those investigations by considering the continuous version of the Ply Minimization problem. This continuity required the introduction of new tools to deal with the challenges of being competitive across an entire interval of time.

To start, we simplified the problem by assuming that the entities were not moving. This simplification decreased the complexity of the problem, while still yielding interesting results.
It allowed us to focus on issues that arise from the repetitive nature of the problem, and to ignore those caused by the uncertainty of our information. Looking at the static case led to the conclusion that being competitive over an interval of time should not be interpreted as being competitive with the best ply that could be achieved at each time step, because it is impossible for an algorithm to maintain optimally low ply at every time step. Specifically, the optimal ply at a given time step may be a factor of $\log n$ smaller than the minimum maximum ply any algorithm could achieve over the interval in question. Thus, the focus of our work was on the competitive ratio one can achieve when comparing one's own algorithm's maximum ply to any other algorithm's average ply, over a given time interval. We produced an algorithm, the Rounded Blankets algorithm, whose worst case ply is at most a constant factor more than any other algorithm's average ply, over any sufficiently long interval of time. It operates by calculating the optimal x-blanket for each entity, and then probing that entity before its uncertainty region grows as large as the blanket. This maintains ply within a constant factor of the optimal blanket value, while any other algorithm's average ply is within a constant factor of the optimal blanket value. Thus we get the aforementioned bound on the competitive ratio of the Rounded Blankets algorithm.

We then adapted the Rounded Blankets algorithm to work in the dynamic case. It is impossible to calculate a single, optimal x-blanket for an entity, as we did in the static case, since the movement of the points means it changes over time. The reliability of the unchanging locations of the entities in the static case was important for building a schedule in which only one entity is probed at each time step. In the dynamic case we schedule entities into "windows" of time, instead of at specific time steps. We call these windows buckets. They are based on the current blanket volumes, and we require the algorithm to probe an entity before its window ends. As long as we work with a blanket value which is no less than the true maximum optimal blanket value over the interval, our algorithm will probe all the entities in the allotted time. We also show that our ply will remain close to the blanket value we use. Unfortunately, the fact that the point locations are changing in the dynamic case means that the optimal blanket value is no longer a tight bound on the behaviour of all algorithms.
6.1 Future Work

One potential avenue for future work is producing a more adaptive algorithm, and discovering a bound on the ply such algorithms produce in the dynamic case. One idea for how such algorithms might improve on our results is to maintain a "working blanket value", our approximation of the true current blanket value, and to update it when our perceived optimal blanket value crosses certain thresholds, such as becoming double or half the working blanket value. Another idea is to increase the blanket value when a bucket overflows. Currently we view overflows as a "critical error", and they cause the algorithm to halt. By increasing the blanket value when a bucket overflows we could work around this error state: entities would be moved into larger buckets, based on their new perceived blanket volumes. Whatever adjustments are made, one will have to argue about the competitiveness of the new algorithm. Arguing about the minimum maximum ply any algorithm can achieve in the dynamic case, and how it relates to the optimal blanket values of the time steps in the time interval of interest, would be a useful result.

Future work might also consider different models for the Continuous Ply Minimization problem. Some differing models have been explored for the single-shot case, including a variety of speed bounds and extensions to higher dimensional spaces [8][9]. The easiest of these would appear to be extending to a higher dimensional space, which should follow fairly directly from the arguments in this work. One might also consider bounding the acceleration of the entities, or using probabilistic models to represent the uncertainty regions of the entities. Another modification to consider is changing the probe model. It would be interesting to examine how the results might change if the algorithm had more than one probe available per time step, or if the algorithm had a finite budget of probes but was allowed to use more of them at times when the optimal blanket value was high, provided it used fewer when the optimal blanket value was low. The last model change one might consider is changing the way ply is calculated. Currently we take ply to be the maximal overlap between uncertainty regions; however, the usefulness of this measure will depend on the application. It may be better to consider the total number of uncertainty regions which are pairwise overlapping. Or, alternatively, one could take some function of the number of uncertainty regions which overlap a point and integrate it over the real line. For example, take $f(x) = \delta(x)^2$, where $\delta(x)$ is the number of uncertainty regions which contain $x$; the value we are then interested in is $\int_{-\infty}^{\infty} f(x)\,dx$.
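As an illustration of this last measure, the following sketch computes $\int_{-\infty}^{\infty}\delta(x)^2\,dx$ exactly for one-dimensional uncertainty regions by sweeping over their endpoints; since $\delta$ is piecewise constant, the integral is a finite sum. The function name and interval representation are our own.

```python
def integrated_squared_ply(regions):
    """Integral of delta(x)^2 over the real line, where delta(x) is
    the number of uncertainty regions containing x and each region is
    a closed interval (lo, hi).

    delta changes only at region endpoints, so the integral reduces
    to a sum of depth^2 * segment-length terms between endpoints.
    """
    events = sorted([(lo, +1) for lo, _ in regions] +
                    [(hi, -1) for _, hi in regions])
    total, depth, prev_x = 0.0, 0, None
    for x, change in events:
        if prev_x is not None:
            total += depth ** 2 * (x - prev_x)
        depth += change
        prev_x = x
    return total

# Example: regions [0, 2] and [1, 3] give delta = 1, 2, 1 on the
# segments [0,1], [1,2], [2,3], so the integral is 1 + 4 + 1 = 6.
```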
One subject which was only briefly mentioned in this document is the hardness of calculating the optimal query pattern. In the one-shot case, Evans et al. show that calculating the optimal probe sequence is NP-hard when one knows the true trajectories of the entities [8]. It is hard to fathom that the problem gets easier in the continuous case. However, the problem is distinct enough to admit the possibility, due to the difference between optimizing at a single time and optimizing over an entire interval. Perhaps the wider target area makes the correct probe at any one given step more tractable. Another way to try to gain insight into the hardness of continuous probing, or indeed of optimal probes in general, would be to relax the restriction of probing once per time step. Consider instead the situation where we simply require that each entity $e_i$ be probed periodically, that is, every $P_i$ time steps, and that the sum of the inverse periods be less than one, that is, $\sum_i\frac{1}{P_i} < 1$. The problem then transforms into calculating optimal periods for each entity, rather than the optimal query at each time step. It would be interesting to see whether this makes the calculation simpler.

As we have discussed, there are many interesting directions for future research to take. Many other variations exist which have not been touched on in this section. The answers to these questions will lead to a better understanding of the ply minimization problem in general.

Bibliography

[1] S. Anily, C. A. Glass, and R. Hassin. The scheduling of maintenance service. Discrete Applied Mathematics, 82(1-3):27-42, 1998.

[2] F. Aurenhammer, G. Stöckl, and E. Welzl. The post office problem for fuzzy point sets. In Proceedings of the International Workshop on Computational Geometry - Methods, Algorithms and Applications, CG '91, pages 1-11, London, UK, 1991. Springer-Verlag.

[3] A. Bar-Noy, A. Nisgav, and B. Patt-Shamir. Nearly optimal perfectly periodic schedules. Distributed Computing, 15(4):207-220, 2002.

[4] M. de Berg, M. Roeloffzen, and B. Speckmann. Kinetic compressed quadtrees in the black-box model with applications to collision detection for low-density scenes. In L. Epstein and P. Ferragina, editors, Algorithms - ESA 2012, volume 7501 of Lecture Notes in Computer Science, pages 383-394. Springer Berlin Heidelberg, 2012.

[5] M. de Berg, M. Roeloffzen, and B. Speckmann. Kinetic convex hulls, Delaunay triangulations and connectivity structures in the black-box model. Journal of Computational Geometry, 3(1):223-249, 2012.

[6] M. de Berg, M. Roeloffzen, and B. Speckmann. Kinetic 2-centers in the black-box model. In Proceedings of the Twenty-ninth Annual Symposium on Computational Geometry, SoCG '13, pages 145-154, New York, NY, USA, 2013. ACM.

[7] R. Dorrigiv and A. López-Ortiz. A survey of performance measures for on-line algorithms. SIGACT News, 36(3):67-81, 2005.

[8] W. Evans, D. Kirkpatrick, M. Löffler, and F. Staals. Competitive query strategies for minimising the ply of the potential locations of moving points. In Proceedings of the Twenty-ninth Annual Symposium on Computational Geometry, SoCG '13, pages 155-164, New York, NY, USA, 2013. ACM.

[9] W. Evans, D. Kirkpatrick, M. Löffler, and F. Staals. Query strategies for minimizing the ply of the potential locations of entities moving with different speeds. Abstr. 30th European Workshop on Computational Geometry (EuroCG), 2014.

[10] J. Gao, L. J. Guibas, and A. Nguyen. Deformable spanners and applications. Computational Geometry, 35(1-2):2-19, 2006. Special Issue on the 20th ACM Symposium on Computational Geometry.

[11] L. J. Guibas. Kinetic data structures - a state of the art report, 1998.

[12] S. Kahan. Real-Time Processing of Moving Data. PhD thesis, University of Washington, October 1991.

[13] S. M. LaValle. Planning Algorithms. Cambridge University Press, Cambridge, U.K., 2006. Available at http://planning.cs.uiuc.edu/.

[14] M. Löffler and J. Snoeyink. Delaunay triangulations of imprecise points in linear time after preprocessing. In Proceedings of the Twenty-fourth Annual Symposium on Computational Geometry, pages 298-304. ACM, 2008.

[15] M. Löffler and M. van Kreveld. Largest and smallest convex hulls for imprecise points. Algorithmica, 56(2):235-269, Feb. 2010.
