Coupling, Matching, and Equivariance

by

Terry Soo

B.A. (Honours), Simon Fraser University, 2004

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April, 2010

© Terry Soo 2010

Abstract

This thesis consists of four research papers and one expository note that study factors of point processes in the contexts of thinning and matching.

In "Poisson Splitting by Factors," we prove that given a Poisson point process on R^d with intensity λ, as a deterministic function of the process, we can colour the points red and blue so that each colour class forms a Poisson point process on R^d, with any given pair of intensities summing to λ; furthermore, the function can be chosen as an isometry-equivariant finitary factor (that is, if an isometry is applied to the points of the original process, the points are still coloured the same way). Thus, using only local information, without a central authority or additional randomization, the points of a Poisson process can split into two groups, each of which is still Poisson.

In "Deterministic Thinning of Finite Poisson Processes," we investigate similar questions for Poisson point processes on a finite volume. In this setting we find that, even without considerations of equivariance, thinning cannot always be achieved as a deterministic function of the Poisson process, and the existence of such a function depends on the intensities of the original and resulting Poisson processes.

In "Insertion and Deletion Tolerance of Point Processes," we define for point processes a version of the concept of finite energy. This simple concept has many interesting consequences. We explore the consequences of having finite energy in the contexts of the Boolean continuum percolation model, Palm theory, and stable matchings of point processes.

In "Translation-Equivariant Matchings of Coin-Flips on Z^d," as a factor of i.i.d. fair coin-flips on Z^d, we construct perfect matchings of heads and tails and prove power law upper bounds on the expected distance between matched pairs.

In the expository note "A Nonmeasurable Set from Coin-Flips," using the notion of an equivariant function, we give an example of a nonmeasurable set in the probability space for an infinite sequence of coin-flips.

Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgments
Dedication
Statement of Co-Authorship

1 Introduction
  1.1 Overview of the manuscripts
  1.2 Thinning, coupling, and stochastic domination
  1.3 Ornstein theory and finitary codes
  1.4 Palm theory and shift-coupling
  1.5 Bibliography

2 Poisson Splitting by Factors
  2.1 Introduction
  2.2 Some remarks about the proofs
  2.3 Proof of Theorem 2.4
  2.4 Proof of Proposition 2.9
  2.5 Selection rules
  2.6 Construction of selection rules
  2.7 Encoding and distributing randomness
  2.8 Proof of Theorem 2.1
  2.9 The assignment function
  2.10 Proof of Theorem 2.3
  2.11 Proof of Theorem 2.5
  2.12 Open problems
  2.13 Bibliography

3 Deterministic Thinning of Finite Poisson Processes
  3.1 Introduction
  3.2 Proof of Theorem 3.1: easy implications
  3.3 Deleting uniform random variables
  3.4 Couplings of Poisson random variables
  3.5 The thinning, and proofs of corollaries
  3.6 Variants and open problems
    3.6.1 Thickening
    3.6.2 Equivariant thinning
    3.6.3 Splitting
  3.7 Bibliography

4 Insertion and Deletion Tolerance of Point Processes
  4.1 Introduction
  4.2 Examples
    4.2.1 Elementary examples
    4.2.2 Continuum percolation and stable matching
    4.2.3 Perturbed lattices and Gaussian zeros
  4.3 Basic results
  4.4 Palm equivalences
  4.5 Continuum percolation
  4.6 Stable matching
  4.7 Bibliography

5 Translation-Equivariant Matchings of Coin-Flips on Z^d
  5.1 Introduction
  5.2 Clumps
  5.3 Seeds, cutters, and blobs
    5.3.1 Basic set-up
    5.3.2 Estimates
  5.4 Mass transport
  5.5 Proof of Theorem 5.1
    5.5.1 First estimates
    5.5.2 Long and thin blobs
  5.6 Bibliography

6 A Nonmeasurable Set from Coin Flips
  6.1 Bibliography

7 Conclusion
  7.1 Non-randomized versus randomized
  7.2 Finitary factors and source universal factors
  7.3 Future directions
  7.4 Bibliography
List of Figures

2.1 An illustration of a thickening
2.2 Lemma 2.18 condition (i)
2.3 The definition of a pre-seed
3.1 The set of pairs of intensities for which a thinning exists
5.1 The event E_k ∩ C_k(r^α)^c
5.2 An illustration of a long and thin blob
5.3 The event C^1_k(r^α) ∩ E_k

Acknowledgments

I am indebted to Alexander Holroyd and Omer Angel for their interesting problems, guidance, and infinite patience. I could not have asked for better supervisors. I do not know how I could ever repay them.

I thank my external examiner, Anthony Quas, for his helpful comments and his careful reading of this thesis. I thank Brian Marcus for being interested in my problems and being one of my university examiners.

I thank Daniel for trying to explain to his younger brother the martingale strategy that he rediscovered when we were just kids. I am grateful that my dad introduced me and my brother to gambling at an early age; I am certain that this has something to do with my decision to study probability.

Finally, I thank everyone who came to my defense; in particular, my girlfriend, Carolyn, who has so happily endured math geeks and math speak for the last 8 years.

Dedication

To my mother, who helped me to learn the multiplication tables.

Statement of Co-Authorship

Chapter 2 was jointly authored by Alexander Holroyd, Russell Lyons, and Terry Soo. Terry Soo was responsible for the identification and design of the research program. The research was performed by Alexander Holroyd, Russell Lyons, and Terry Soo; the majority of the research was performed by Terry Soo. The manuscript was jointly prepared by Alexander Holroyd, Russell Lyons, and Terry Soo; the majority of the writing was done by Terry Soo.

Chapter 3 was jointly authored by Omer Angel, Alexander Holroyd, and Terry Soo. Omer Angel, Alexander Holroyd, and Terry Soo were responsible for the identification and design of the research program. The research was performed by Omer Angel, Alexander Holroyd, and Terry Soo. The manuscript was jointly prepared by Omer Angel, Alexander Holroyd, and Terry Soo; the majority of the writing was done by Terry Soo.

Chapter 4 was jointly authored by Alexander Holroyd and Terry Soo. Alexander Holroyd was responsible for the identification and design of the research program. The research was performed by Alexander Holroyd and Terry Soo. The manuscript was jointly prepared by Alexander Holroyd and Terry Soo; the majority of the writing was done by Terry Soo.

Chapter 5 was authored by Terry Soo. Alexander Holroyd suggested the research program and provided the idea for the initial approach.

Chapter 6 was jointly authored by Alexander Holroyd and Terry Soo. Alexander Holroyd and Terry Soo were responsible for the identification and design of the research program. The research was performed by Alexander Holroyd and Terry Soo. The manuscript was jointly prepared by Alexander Holroyd and Terry Soo; the majority of the writing was done by Terry Soo.

Chapter 1

Introduction

A point process is a random pattern of points.
Point processes can be used to model many diverse phenomena, such as rainfall [22], the pine beetle [21], music transcription [20], wireless networks [2], and small-world networks [6]. Poisson point processes and their many variations are important in many applications. The Poisson point process is so pervasive that it is an inside joke in mathematics and statistics that every point process will eventually be found out to be the Poisson in disguise [23, page 228].

A Poisson point process Π of intensity λ > 0 on R^d is a random scattering of points in R^d that can be characterized in the following way: given a set S ⊂ R^d of volume L(S), the probability that the number of Π-points in S is n equals e^{−λL(S)} (λL(S))^n / n!, and given any finite number of disjoint sets S_1, S_2, . . . , S_n ⊂ R^d, if Π(S_i) is the number of Π-points in the set S_i, then the random variables Π(S_1), . . . , Π(S_n) are independent. A Poisson point process on a set S ⊂ R^d is given by restricting a Poisson point process on R^d of the same intensity to the set S.

A textbook application of Poisson point processes is to model the arrival times of customers to an ice cream shop on a summer afternoon; specifically, if on average λ customers enter the shop between 1 PM and 2 PM, then the arrival times between 1 PM and 2 PM behave like a Poisson point process of intensity λ on [1, 2]. The concept of thinning a Poisson point process appears as follows. If each customer is independently a boy or a girl with probability p and 1 − p respectively, then it turns out that the arrival times of boy customers and girl customers are independent Poisson point processes on [1, 2] with intensities pλ and (1 − p)λ respectively. It is interesting to note that if Π is a point process and Π_1 is obtained from Π by independent thinning, then Π_2 := Π − Π_1 is independent of Π_1 if and only if Π is a Poisson point process [1], [8].

One of the main results of this thesis is to provide a method of thinning a Poisson point process using only the randomness already present in the point process, so that given a Poisson point process one can generate another of lower intensity without any additional randomization. In particular, for a Poisson point process Π of intensity two on a disc of unit area there exists a deterministic rule f which deletes points of Π so that the result is a Poisson point process of intensity one on the disc; furthermore, the rule deletes the same points irrespective of the orientation of the disc and the points on it. For a Poisson point process Π on the infinite plane there exists a deterministic rule f which deletes points of Π so that the result is a Poisson point process of any lower intensity; furthermore, whether a point is deleted or not depends only on the process restricted to a disc of finite radius about the point, irrespective of the orientation of that disc and the points on it. Thus the points of Π can self-organize according to the rule f and use only local information, in the absence of a centralized coordinate system, to remove points to produce a Poisson point process of lower intensity. Decentralized algorithms that use only local information are important in the analysis of small-world networks [6], [5].
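The independent thinning described above is easy to observe in simulation. The following is a minimal sketch (mine, not from the thesis) using NumPy; the intensity, region, and retention probability are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

lam, volume, p = 10.0, 1.0, 0.4   # intensity, volume of S = [0,1], retention probability

def poisson_process(lam, volume):
    """Sample a Poisson process on [0, volume]: a Poisson(lam*volume)
    number of i.i.d. uniform points."""
    k = rng.poisson(lam * volume)
    return rng.uniform(0.0, volume, size=k)

# Independent thinning: keep each point with probability p, independently.
trials = 100_000
kept, deleted = [], []
for _ in range(trials):
    pts = poisson_process(lam, volume)
    keep = rng.random(pts.size) < p
    kept.append(keep.sum())
    deleted.append((~keep).sum())

# The colouring theorem predicts independent Poisson(p*lam) and
# Poisson((1-p)*lam) counts.
print("mean kept   :", np.mean(kept), " (predicted", p * lam, ")")
print("var  kept   :", np.var(kept), " (predicted", p * lam, ")")
print("mean deleted:", np.mean(deleted), " (predicted", (1 - p) * lam, ")")
print("correlation :", np.corrcoef(kept, deleted)[0, 1], " (predicted 0)")
```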
1.1 Overview of the manuscripts

Let M be a space endowed with a group of symmetries G. A function f : M → M is G-equivariant if f(θµ) = θf(µ) for all θ ∈ G and all µ ∈ M. Equivariant functions appear in many different areas of mathematics. In statistics the notion of an equivariant estimator is important and arises naturally. For example, the sample mean is a shift-equivariant estimator for the mean; the sample mean height of women wearing one-inch heels is one inch more than the sample mean height of women wearing socks. See [15, Chapter 3] for more information. In ergodic theory and coding theory, equivariant functions are often referred to as factors. In this thesis we will study the problems of thinning and matching from the perspective of factors.

Given a homogeneous Poisson point process on R^d, it is well known that selecting each point independently with some fixed probability gives a homogeneous Poisson process of lower intensity. This is often referred to as (independent) thinning. In the following two chapters we consider the following questions. Does there exist a non-randomized thinning; that is, is it possible to choose a subset of the Poisson points as a deterministic function of the Poisson process so that the chosen points form a Poisson process of any given lower intensity? Furthermore, can the deterministic function be chosen to be a translation-equivariant factor (that is, if a translation is applied to the original process, the chosen points are translated by the same vector)? A translation-equivariant factor is finitary if whether or not a Poisson point is chosen is determined by the process within a finite (random) radius of the point. Thus a finitary translation-equivariant factor is a rule that is independent of the choice of origin and uses only local information.

For the case d = 1 (the infinite line), Ball [4] proved that thinning can always be achieved as a finitary translation-equivariant factor. In Chapter 2, we extend this result to all dimensions d and strengthen it further by showing that the function can be made isometry-equivariant, and that the non-chosen points can also form a Poisson process. Thus the points of a Poisson process can self-organize and split into two groups, each of which is still Poisson.

In Chapter 3, we consider similar problems for the case of finite volumes. Putting aside considerations of equivariance, by the Borel isomorphism theorem [26] we may, without loss of generality, consider the special case of a unit interval. We find that, unlike the case of infinite volumes, on a unit interval the existence of a non-randomized thinning depends on both the intensities of the original and resulting Poisson processes. We prove a necessary and sufficient condition on the two intensities for the existence of a non-randomized thinning. In particular, if the difference between the two intensities is greater than one, then there exists a non-randomized thinning. Returning to equivariance, we also show that on a circle of unit perimeter thinning can be achieved as a rotation-equivariant factor if and only if a non-randomized thinning exists on the unit interval. We do not know whether isometry-equivariant thinning can be achieved on the circle, but we do know that the set of intensities for which it is possible must be strictly contained in the set of intensities for which rotation-equivariant thinning is possible. Many interesting directions remain. In particular, we know very little about splitting.
In Chapter 4, we define for point processes a version of the finite-energy condition of Newman and Schulman [17]. We say that a point process Π on R^d is insertion-tolerant if the law of the point process obtained from Π by adding a point uniformly at random in any given subset of R^d is absolutely continuous with respect to the law of Π. We say that Π is deletion-tolerant if the law of the point process obtained from Π by deleting one of its points is absolutely continuous with respect to the law of Π. We prove several equivalent formulations of each condition, including versions involving Palm processes. Although the conditions are fairly simple, we show that point processes that satisfy them have other interesting properties. In particular, we give an application to the stable matchings of point processes as defined in [9]. The stable matching of a Poisson point process is obtained by iteratively matching mutually closest pairs of the points. The stable matching can also be obtained in the same way for any point process on R^d in which the distances between the points of any two non-identical pairs of points differ, and which does not contain an infinite sequence of points where the distance between successive points is non-increasing. A stable matching is a natural example of an isometry-equivariant matching. Our interest in stable matching lies in studying the typical distance between matched pairs. We show that if a point process is insertion- or deletion-tolerant, then the typical distance between matched pairs of points has an infinite dth moment.

In Chapter 5, we study factor matching in the discrete setting of site percolation on Z^d. Consider i.i.d. fair coin-flips at each site of Z^d. We define a translation-equivariant perfect matching of heads and tails and prove that the distance X from the origin to its partner satisfies P(X > r) ≤ c (log r)^4 r^{−(d/2)(1+4/d)^{−1}} for all r > 0, for some c = c(d) < ∞. It was an open problem of Holroyd and Peres [10] to determine the optimal tail behaviour of such factor matchings. This result improved on previous results [10], [16], and prompted further improvements by Timár [29].

In Chapter 6, we define a nonmeasurable set in the probability space for an infinite sequence of coin-flips. The example arises naturally from the notion of an equivariant function, and serves as a pedagogical illustration of the need for measure theory.

In Chapter 7, we discuss some open problems and give future directions for research.
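Before turning to related work, here is a concrete prototype (mine, not the thesis's) of the stable matching described in the Chapter 4 summary, in the two-colour variant of [9]: repeatedly match and remove the globally closest red-blue pair, which is always a mutually closest pair. With probability one a Poisson sample has distinct pairwise distances, so the procedure is well defined.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two colour classes to be matched, e.g. red and blue points in the unit square.
red = rng.uniform(size=(20, 2))
blue = rng.uniform(size=(20, 2))

def stable_matching(red, blue):
    """Iteratively match mutually closest red-blue pairs; the globally
    closest unmatched pair is always mutually closest."""
    red_idx, blue_idx = list(range(len(red))), list(range(len(blue)))
    matching = []
    while red_idx and blue_idx:
        # distance matrix between the unmatched points
        d = np.linalg.norm(red[red_idx][:, None, :] - blue[blue_idx][None, :, :], axis=2)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        matching.append((red_idx[i], blue_idx[j]))
        del red_idx[i]
        del blue_idx[j]
    return matching

m = stable_matching(red, blue)
dists = [np.linalg.norm(red[i] - blue[j]) for i, j in m]
print(len(m), "pairs; mean matched distance:", np.mean(dists))
```

The one-colour version discussed in the text is analogous: maintain a single list and repeatedly remove the mutually closest pair within it.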
In what follows we give an overview of some related results concerning thinning, matching, and factors.

1.2 Thinning, coupling, and stochastic domination

As mentioned earlier, by independent thinning it is easy to thin or split a Poisson process. However, even without considerations of equivariance, the problem of proving the existence of a non-randomized thinning is non-trivial, especially in the case of finite volumes. Although Strassen's theorem [27] tells us that if X and Y are random variables on a Polish space endowed with a closed partial order ≺ and Y stochastically dominates X, then there exists a (monotone) coupling of X and Y so that X ≺ Y, it does not tell us whether or not X can be expressed as a deterministic function of Y.

1.3 Ornstein theory and finitary codes

Ornstein and Weiss [19] proved that any two Poisson point processes on R^d are isomorphic as factors; that is, if Π and Π′ are Poisson point processes on R^d with finite nonzero intensities, then there exists a translation-equivariant bijection φ such that φ(Π) =d Π′ and φ^{−1}(Π′) =d Π (here =d denotes equality in distribution). In fact, their result is part of a much more general theory, and if d = 1 or 2, then the bijection φ can be chosen to be isometry-equivariant. In Chapter 2, we prove that there exists an isometry-equivariant finitary map φ such that φ(Π) =d Π′; furthermore, the map φ does not depend on the intensity of Π and thus is source universal.

Turning to the discrete setting, let A := {0, 1, . . . , n − 1}. Let {X_i}_{i∈Z} be i.i.d. A-valued random variables with p.m.f. p; that is, P(X_0 = i) = p_i for all 0 ≤ i ≤ n − 1. Sometimes the random sequence (. . . , X_{−1}, X_0, X_1, . . .) is called a Bernoulli shift and is denoted by B(p). The entropy of B(p) is given by h(p) := −∑_{i=0}^{n−1} p_i log p_i. By the Sinai factor theorem [25], if B(p) and B(q) are Bernoulli shifts on A and h(p) > h(q), then there exists a translation-equivariant factor from B(p) to B(q). Ornstein [18] proved that any two Bernoulli shifts on A with the same entropy are isomorphic; that is, there exists a translation-equivariant factor φ from B(p) to B(q) such that the inverse φ^{−1} is a translation-equivariant factor from B(q) to B(p). Keane and Smorodinsky further strengthened these results by producing explicit finitary factors between Bernoulli shifts [12], [13]. Recently, Ball [3] proved that, if h(p) > h(q) and p stochastically dominates q, then in the special case that q_i = 0 for all i ≥ 2, there is a translation-equivariant finitary factor map φ from B(p) to B(q) that is monotone (i.e., φ(x)_i ≤ x_i for almost all x ∈ A^Z and all i ∈ Z). For more information on finitary codes see Serafin [24].
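As a small numerical aside (not from the thesis), the entropy condition in the Sinai factor theorem is easy to evaluate for concrete distributions; the p and q below are arbitrary choices.

```python
import numpy as np

def entropy(p):
    """Shannon entropy h(p) = -sum_i p_i log p_i (natural log; 0 log 0 := 0)."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

p = [0.5, 0.25, 0.25]
q = [0.9, 0.1]

print("h(p) =", entropy(p))   # ~ 1.0397
print("h(q) =", entropy(q))   # ~ 0.3251
# h(p) > h(q), so Sinai's theorem gives a translation-equivariant factor
# from B(p) to B(q). Here p also stochastically dominates q and q_i = 0 for
# i >= 2, so Ball's monotone finitary factor theorem applies as well.
```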
1.4 Palm theory and shift-coupling

Let Π be a translation-invariant ergodic simple point process on R^d with finite intensity. We let Π* denote its Palm version. See Section 4.4 or [11, Chapter 11] for details. Informally, one can regard Π* as the point process Π conditioned to have a point at the origin. Thorisson [28] proved that there exists a shift-coupling of (Π, Π*); that is, there exists a random variable Z such that Z is a point of Π and translating Π by Z yields a point process that has the same law as Π*. Sometimes Z is called an extra-head scheme. Holroyd and Peres [10] proved that Z may be chosen as a deterministic function of Π. Thus there exists a non-randomized extra-head scheme.

Non-randomized extra-head schemes can be defined in the following way. An allocation rule for Π is a translation-equivariant factor of Π that assigns each point of R^d to a point of Π so that the set of points associated to each point of Π has measure (E Π[0, 1]^d)^{−1}. Let φ be an allocation rule for Π. Holroyd and Peres proved that Z := φ(0) is a non-randomized extra-head scheme for Π, and proved the existence of allocation rules by an adaptation of the Gale-Shapley marriage algorithm [7]. The connection between allocation rules and Palm processes is further generalized and studied in [14].

1.5 Bibliography

[1] R. M. Assunção and P. A. Ferrari. Independence of thinned processes characterizes the Poisson process: an elementary proof and a statistical application. Test, 16:333-345, 2007.

[2] F. Baccelli, O. Dousse, M. Haenggi, J. G. Andrews, and M. Franceschetti. Stochastic geometry and random graphs for the analysis and design of wireless networks. IEEE Journal on Selected Areas in Communications, 27(7):1029-1046, 2009.

[3] K. Ball. Monotone factors of i.i.d. processes. Israel J. Math., 150:205-227, 2005.

[4] K. Ball. Poisson thinning by monotone factors. Electron. Comm. Probab., 10:60-69 (electronic), 2005.

[5] C. Bordenave. Navigation on a Poisson point process. Ann. Appl. Probab., 18(2):708-746, 2008.

[6] M. Franceschetti and R. Meester. Navigation in small-world networks: a scale-free continuum model. Journal of Applied Probability, 43(4):1173-1180, 2006.

[7] D. Gale and L. Shapley. College admissions and stability of marriage. Amer. Math. Monthly, 69:9-15, 1962.

[8] S. He and J. Wang. Thinning of point processes, revisited. Acta Mathematicae Applicatae Sinica, 12(3), 1996.

[9] A. E. Holroyd, R. Pemantle, Y. Peres, and O. Schramm. Poisson matching. Ann. Inst. Henri Poincaré Probab. Stat., 45(1):266-287, 2009.

[10] A. E. Holroyd and Y. Peres. Extra heads and invariant allocations. Ann. Probab., 33(1):31-52, 2005.

[11] O. Kallenberg. Foundations of Modern Probability. Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2002.

[12] M. Keane and M. Smorodinsky. A class of finitary codes. Israel J. Math., 26(3-4):352-371, 1977.

[13] M. Keane and M. Smorodinsky. Bernoulli schemes of the same entropy are finitarily isomorphic. Ann. of Math. (2), 109(2):397-406, 1979.

[14] G. Last and H. Thorisson. Invariant transports of stationary random measures and mass-stationarity. Ann. Probab., 37(2):790-813, 2009.

[15] E. L. Lehmann and G. Casella. Theory of Point Estimation. Springer Texts in Statistics. Springer-Verlag, New York, second edition, 1998.

[16] T. Liggett. Tagged particle distributions or how to choose a head at random. In In and Out of Equilibrium, volume 51 of Progr. Probab., pages 133-162. Birkhäuser, 2002.

[17] C. M. Newman and L. S. Schulman. Infinite clusters in percolation models. J. Statist. Phys., 26(3):613-628, 1981.

[18] D. S. Ornstein. Ergodic Theory, Randomness, and Dynamical Systems. Yale University Press, New Haven, Conn., 1974. James K. Whittemore Lectures in Mathematics given at Yale University, Yale Mathematical Monographs, No. 5.

[19] D. S. Ornstein and B. Weiss. Entropy and isomorphism theorems for actions of amenable groups. J. Analyse Math., 48:1-141, 1987.

[20] P. H. Peeling, C. Li, and S. J. Godsill. Poisson point process modeling for polyphonic music transcription. Journal of the Acoustical Society of America Express Letters, 121(4):EL168-EL175, April 2007. Reused with permission from Paul Peeling, The Journal of the Acoustical Society of America, 121, EL168 (2007). Copyright 2007, Acoustical Society of America.

[21] H. K. Preisler. Modelling spatial patterns of trees attacked by bark beetles. Journal of the Royal Statistical Society, Series C (Applied Statistics), 42(3):501-514, 1993.

[22] I. Rodriguez-Iturbe, D. R. Cox, and V. Isham. Some models for rainfall based on stochastic point processes. Proc. Roy. Soc. London Ser. A, 410(1839):269-288, 1987.

[23] G.-C. Rota. Indiscrete Thoughts. Birkhäuser Boston Inc., Boston, MA, 1997. With forewords by Reuben Hersh and Robert Sokolowski; edited with notes and an epilogue by Fabrizio Palombi.

[24] J. Serafin. Finitary codes, a short survey. In Dynamics & Stochastics, volume 48 of IMS Lecture Notes Monogr. Ser., pages 262-273. Inst. Math. Statist., Beachwood, OH, 2006.

[25] J. G. Sinaĭ. A weak isomorphism of transformations with invariant measure. Dokl. Akad. Nauk SSSR, 147:797-800, 1962.

[26] S. M. Srivastava. A Course on Borel Sets, volume 180 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1998.

[27] V. Strassen. The existence of probability measures with given marginals. Ann. Math. Statist., 36:423-439, 1965.
[28] H. Thorisson. Transforming random elements and shifting random fields. Ann. Probab., 24:2057-2064, 1996.

[29] Á. Timár. Invariant matchings of exponential tail on coin flips in Z^d. Preprint.

Chapter 2

Poisson Splitting by Factors

[A version of this chapter has been submitted for publication: Holroyd, A. E., Lyons, R., and Soo, T. (2009), Poisson Splitting by Factors.]

2.1 Introduction

Let B = B(R^d) be the Borel σ-field on R^d. Let M be the space of all Borel simple point measures on (R^d, B), and let M be the product σ-field on M (we give detailed definitions in Section 2.2). Given an isometry θ of R^d and µ ∈ M, we define θ(µ) to be the measure given by θ(µ)(A) = µ(θ^{−1}(A)) for all A ∈ B. We say that a measurable mapping φ : M → M is isometry-equivariant if θ(φ(µ)) = φ(θ(µ)) for all µ ∈ M and for all isometries θ of R^d. Similarly, we say that φ is translation-equivariant if it commutes with all translations of R^d. We define a partial order ≤ on M via: µ_1 ≤ µ_2 if and only if µ_1(A) ≤ µ_2(A) for all A ∈ B. We say that a mapping φ is monotone if either φ(µ) ≤ µ for all µ ∈ M, or µ ≤ φ(µ) for all µ ∈ M. Our main result is the following.

Theorem 2.1. For all d ≥ 1 and for all λ > λ′ > 0, there exists a monotone isometry-equivariant mapping φ : M → M such that if X is a homogeneous Poisson point process on R^d with intensity λ, then φ(X) and X − φ(X) are homogeneous Poisson point processes on R^d with intensities λ′ and λ − λ′ respectively.

In other words, Theorem 2.1 states that the points of a Poisson process may be coloured red and blue, in a deterministic isometry-equivariant way, so that both the red process and the blue process are Poisson processes. Ball [3] proved that in the case d = 1, for all λ > λ′ > 0, there exists a monotone translation-equivariant mapping φ : M → M such that if X is a Poisson point process with intensity λ, then φ(X) is a homogeneous Poisson point process with intensity λ′ (in other words, the Poisson process may be "thinned" in a deterministic translation-equivariant way). Ball asked whether the same was possible in higher dimensions, and also whether the condition of translation-equivariance could be strengthened to isometry-equivariance. Theorem 2.1 answers both questions affirmatively, as well as providing the additional property that X − φ(X) is a Poisson process. Evans [4] recently proved that Poisson processes cannot be thinned in an equivariant way with respect to any affine measure-preserving group that is strictly larger than the isometry group.

If all considerations of monotonicity are dropped, then the following result of Ornstein and Weiss applies, even without the restriction that λ > λ′.

Theorem 2.2 (Ornstein and Weiss). For all d ≥ 1 and all λ, λ′ ∈ (0, ∞), there exists a translation-equivariant bijection φ : M → M such that if X and Y are homogeneous Poisson point processes on R^d with intensities λ and λ′ respectively, then φ(X) =d Y and φ^{−1}(Y) =d X; furthermore, if d = 1 or 2, then the bijection φ can be chosen to be isometry-equivariant.

Ornstein and Weiss [19] proved Theorem 2.2 as part of a much more general theory. The tools we develop to prove Theorem 2.1 allow us to prove the following variant of Theorem 2.2, which is weaker in the sense that we only prove the existence of a homomorphism, whereas Ornstein and Weiss proved the existence of an isomorphism.
Theorem 2.3. For all d ≥ 1 and for all λ′ ∈ (0, ∞), there exists an isometry-equivariant mapping φ : M → M such that if X is a homogeneous Poisson point process on R^d with any positive intensity λ, then φ(X) is a homogeneous Poisson point process on R^d with intensity λ′.

The map we construct is isometry-equivariant (in any dimension), explicit, and satisfies an additional continuity property (see Theorem 2.5 below). In addition, the map we construct is source-universal; that is, in Theorem 2.3 the map φ does not have to depend on the intensity of X. When λ′ > λ, we do not know whether the condition of monotonicity can be added to Theorem 2.3 (in other words, whether a Poisson process can be deterministically "thickened").

Question 2.1. Let d ≥ 1 and let λ′ > λ > 0. Does there exist a monotone isometry-equivariant φ : M → M such that if X is a homogeneous Poisson point process on R^d with intensity λ, then φ(X) is a homogeneous Poisson point process on R^d with intensity λ′?

However, we can prove that the answer to Question 2.1 becomes no when φ is required to satisfy the following additional condition. For µ ∈ M, we define the restriction of µ to a set A ∈ B via µ|_A(·) := µ(· ∩ A) (so µ|_A ∈ M). Let ∥·∥ be the Euclidean norm on R^d. The open ball of radius r centered at x is denoted by B(x, r) := {y : ∥x − y∥ < r}. Let X be a Poisson point process on R^d with law P. We say that a translation-equivariant measurable mapping φ : M → M is strongly finitary with respect to P if, for P-a.e. µ ∈ M, there exists a positive real number n = n(µ) such that for P-a.e. µ′ ∈ M, we have φ(µ)|_{B(0,1)} = φ(µ′)|_{B(0,1)} whenever µ|_{B(0,n)} = µ′|_{B(0,n)}. (In other words, the restriction of φ(µ) to the unit ball is determined by the restriction of µ to a larger ball, of random but finite radius.) With the addition of this condition, we can answer Question 2.1 in the negative, even if we drop the condition of isometry-equivariance.

Theorem 2.4. Let d ≥ 1 and λ′ > λ > 0. Let X be a homogeneous Poisson point process on R^d with intensity λ and law P. There does not exist a translation-equivariant monotone measurable mapping φ : M → M such that φ(X) is a homogeneous Poisson point process on R^d with intensity λ′ and φ is strongly finitary with respect to P.

In fact, our proof of Theorem 2.4 will not use the assumption of translation-equivariance either, so we actually prove the stronger statement that no mapping φ satisfying the other conditions can have the property that the restriction of φ(µ) to the unit ball is determined by the restriction of µ to a larger random ball, as defined above. In Section 2.11, we shall show that the mappings that we produce to prove Theorems 2.1 and 2.3 are strongly finitary. The mapping produced in [3] is also strongly finitary.

Theorem 2.5. Theorems 2.1 and 2.3 hold even with the further requirement that the isometry-equivariant mapping φ be strongly finitary with respect to P, where P is the law of X.

Sometimes deterministic translation-equivariant maps like the ones of Theorems 2.1 and 2.3 are called factors. Factors are of basic importance in ergodic theory and continue to play a central role in applications of ergodic theory to combinatorics. The combinatorial and probabilistic aspects of factors themselves have received attention in recent years as well. It turns out that factors are intimately related to Palm theory and shift-coupling. For more information, see [27], [28], [9], and [16].
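As an illustration of these definitions (my example, not the thesis's), the sketch below implements a toy map that is isometry-equivariant, monotone, and finitary: it keeps exactly those points whose nearest neighbour lies within distance 1. Whether a point is kept depends only on the configuration within distance 1 of it, and the rule commutes with isometries because it is defined through distances alone; of course, unlike the maps of Theorems 2.1 and 2.3, its output is far from Poisson.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_factor(points):
    """Keep the points whose nearest neighbour is within distance 1
    (an isometry-equivariant, finitary rule; the output is NOT Poisson)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return points[d.min(axis=1) <= 1.0]

# A Poisson sample of intensity 0.5 on a 20 x 20 box.
pts = rng.uniform(-10, 10, size=(rng.poisson(0.5 * 400), 2))

# Check equivariance numerically under a random rotation plus translation.
angle, shift = rng.uniform(0, 2 * np.pi), rng.uniform(-5, 5, size=2)
R = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
theta = lambda x: x @ R.T + shift

a = np.sort(theta(toy_factor(pts)), axis=0)   # theta(phi(mu))
b = np.sort(toy_factor(theta(pts)), axis=0)   # phi(theta(mu))
print("equivariant:", np.allclose(a, b))
```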
Factor graphs of point processes have also received considerable attention; see [5], [8], and [30]. Following [8], a factor graph of a point process X is a graph whose vertices are the points of X and whose edges are obtained as a deterministic translation-equivariant function of X. An important special case of a factor graph is a translation-equivariant matching; see [7] for some striking results on this topic. Finally, we refer the interested readers to [19] for very general results regarding factors of Poisson processes and the well-studied isomorphism problem.

One can ask questions similar to ours about factors in a discrete setting. Translation-equivariant matchings of i.i.d. coin flips on Z^d are considered in [25] (Chapter 5) and [29]. Much is known about factors of Bernoulli shifts on Z; for example, see the monograph of Ornstein [18]. In particular, it is a classical result of Sinai [24] that if B(p) and B(q) are Bernoulli shifts on {0, 1, . . . , d − 1}^Z (i.e., i.i.d. {0, 1, . . . , d − 1}-valued sequences with laws p and q) and the entropy of p is strictly greater than the entropy of q, then there is a factor from B(p) to B(q). Recently, Ball [2] proved that, if the entropy of p is strictly greater than the entropy of q, and p stochastically dominates q, then in the special case where B(q) is a shift on {0, 1}, there is a factor map φ from B(p) to B(q) that is monotone (i.e., φ(x)_i ≤ x_i for almost all x ∈ {0, 1, . . . , d − 1}^Z and all i ∈ Z). The factor map φ given in [2] is also finitary; that is, φ is continuous almost surely when {0, 1, . . . , d − 1}^Z is endowed with the product topology. Keane and Smorodinsky improved on results of Ornstein by producing explicit finitary factors between Bernoulli shifts. We refer the interested reader to the original papers of Keane and Smorodinsky [13], [14], and the recent survey article on finitary codes by Serafin [23].

Finally, we also mention the work of Angel, Holroyd and Soo [1] (Chapter 3) concerning monotone deterministic functions of Poisson point processes on finite volumes. In particular, if λ > λ′ and X is a Poisson point process of intensity λ on [0, 1], that article provides a necessary and sufficient condition on (λ, λ′) for the existence of a monotone deterministic map φ : M → M such that φ(X) is a Poisson point process on [0, 1] of intensity λ′.

2.2 Some remarks about the proofs

We next motivate the proofs of Theorems 2.1 and 2.3 via some simple examples of mappings φ : M → M having some of the required properties. The proof of Theorem 2.4 is much shorter and is treated in Section 2.3. Of course, one of the requirements of φ is that it be measurable. All the maps we define will clearly be measurable; we provide the formal definition of the σ-field for M below.

Measurability. The σ-field M of subsets of M is defined in the following way. Let N be the natural numbers (including zero) and let N̄ := N ∪ {∞}. For B ∈ B, the projection map p_B : M → N̄ is defined by p_B(µ) = µ(B) for all µ ∈ M. We let M be the smallest σ-field such that all the projection maps are measurable. ♦

Note that throughout this paper, the only laws we consider on M will be homogeneous Poisson point processes on R^d and their restrictions to subsets of R^d. We say that U is a U[0, 1] random variable if it is uniformly distributed in [0, 1]. Let L denote Lebesgue measure.
Similarly, we say that V is a U[B] random variable if B ∈ B is a Borel set with finite nonzero measure and P(V ∈ ·) = L(· ∩ B)/L(B). In the next examples and throughout this paper, we shall assert that certain random variables can be expressed as functions of U[0, 1] random variables. This can be justified by appealing to the Borel isomorphism theorem [26, 3.4.24]. However, very often we need only the following two results, which are consequences of the Borel isomorphism theorem. Because of the need for isometry-equivariance in our constructions, we shall often need to be rather explicit about such functions.

Lemma 2.6 (Reproduction). There exist measurable deterministic functions {g_i}_{i∈N}, where g_i : [0, 1] → [0, 1], such that if U is a U[0, 1] random variable, then {g_i(U)}_{i∈N} is a sequence of i.i.d. U[0, 1] random variables.

For an explicit proof, see, for example, [12, Lemma 3.21].

Lemma 2.7 (Coupling). Let λ′ > 0. There exists a collection of measurable mappings φ^P = (φ^P_A)_{A∈B}, where for each A ∈ B the map φ^P_A = φ^P_{(A,λ′)} : [0, 1] → M is such that if U is a U[0, 1] random variable, then φ^P_A(U) is a Poisson point process on A with intensity λ′.

Proof. By the Borel isomorphism theorem there exists a measurable function g : [0, 1] → M such that if U is a U[0, 1] random variable, then g(U) is a Poisson point process on R^d with intensity λ′. Set φ^P_A(U) := g(U)|_A.

Example 2.1 (A Z^d-translation-equivariant mapping between Poisson point processes of arbitrary intensities). Let λ, λ′ > 0. Let X be a Poisson point process on R^d with intensity λ and law P. Let C_0 be a cube of side-length 1 containing the origin 0 ∈ Z^d, and let C_i := C_0 + i for i ∈ Z^d. Assume that C_0 is such that the collection P = {C_i}_{i∈Z^d} is a partition of R^d. The mapping φ will be defined by specifying φ(·)|_C for all C ∈ P. We shall define φ only off a P-null set; it is not difficult to extend φ to all of M so that it still commutes with all translations of Z^d. Let g : [0, 1] → M be a measurable function such that if U is a U[0, 1] random variable, then g(U) is a Poisson process on C_0 with intensity λ′. We shall define a measurable map h : M → [0, 1]^{Z^d} with the following properties: h(X) is a collection of i.i.d. U[0, 1] random variables, and for all translations θ of Z^d we have h(θ(X))_i = h(X)_{θ(i)} for all i ∈ Z^d. For all i ∈ Z^d, let θ_i(x) = x + i for all x ∈ R^d. Given the mapping h, it is easy to see that by taking

φ(X)|_{C_i} := θ_{−i}(g(h(X)_i))

for all i ∈ Z^d, we have that φ commutes with translations of Z^d and that φ(X) is a Poisson point process on R^d with intensity λ′.

It remains to define h. If X(C) = 1, then we say that C is special. Let K(i) be the index of the first special cube to the right of cube i; that is, K(i) = i + (n, 0, . . . , 0), where n = n(i) is the smallest non-negative integer such that C_{i+(n,0,...,0)} is special. Note that P-a.s. K is well defined. For each special cube C_i, let z(i) be the unique point x ∈ C_i such that X({x}) = 1. Since X is a Poisson point process, the random variables {X|_{C_i}}_{i∈Z^d} are independent, and also, conditional on the event that C_i is special, z(i) is a U[C_i] random variable. Let f : C_0 → [0, 1]^N be a measurable function such that if V is a U[C_0] random variable, then f(V) is a sequence of i.i.d. U[0, 1] random variables. For all i ∈ Z^d, let

h(X)_i := f(z(K(i)) − K(i))_{n(i)}.

It is easy to verify that h satisfies the required properties. ♦
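The randomness-extraction step in Example 2.1 is easy to prototype. The sketch below (mine, not from the thesis) works on a one-dimensional strip of unit cells: each cell containing exactly one point is special, and every cell reads a uniform from the position of the point in the nearest special cell to its right. For brevity it seeds a PRNG with that position and the offset n(i), standing in for the explicit digit-splitting functions f and g_i of Lemma 2.6.

```python
import bisect
import numpy as np

rng = np.random.default_rng(3)

lam, n_cells = 1.0, 10_000

# A Poisson process on [0, n_cells): counts per unit cell, then uniform positions.
counts = rng.poisson(lam, size=n_cells)
cells = [np.sort(rng.uniform(i, i + 1, size=k)) for i, k in enumerate(counts)]
special = [i for i, k in enumerate(counts) if k == 1]   # cells with exactly one point

def h(i):
    """Uniform assigned to cell i, read off the first special cell K(i) >= i.
    (With probability one such a cell exists; the demo assumes it does.)"""
    k = special[bisect.bisect_left(special, i)]          # K(i)
    offset = cells[k][0] - k                             # z(K(i)) - K(i), a U[0,1)
    # Stand-in for the coordinate f(.)_{n(i)}: derive a fresh uniform from the
    # pair (offset, n(i) = k - i) by seeding a PRNG with it.
    return np.random.default_rng([int(offset * 2**53), k - i]).random()

us = np.array([h(i) for i in range(n_cells - 100)])
print("mean ~ 0.5:", us.mean(), "  P(u < 0.25) ~ 0.25:", (us < 0.25).mean())
```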
Let us remark that in Example 2.1 we used the fact that if X is a Poisson process, then conditional on the event that it has one point in A, the location of that point is a U[A] random variable. This elementary fact will often be useful, and we shall appeal to it again in the next example and in the proofs of Theorems 2.1 and 2.3. We refer the reader to [22, Theorem 1.2.1] or [15] for background, and state a slightly more general result in the lemma below.

Lemma 2.8. Let X be a Poisson point process on R^d with intensity λ. Let A ∈ B be a Borel set with finite Lebesgue measure. Let K be a Poisson random variable with mean λL(A). Let {V_i}_{i∈N} be a sequence of i.i.d. U[A] random variables that are independent of K. Then X|_A has the same law as Z := ∑_{i=1}^{K} δ_{V_i}.
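A quick numerical check of the fact just used (my sketch, with arbitrary parameters): conditional on a Poisson process on [0, 1] having exactly one point, that point is uniform.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, trials = 2.0, 200_000

locations = []
for _ in range(trials):
    k = rng.poisson(lam)
    pts = rng.uniform(size=k)       # Lemma 2.8 construction of X on A = [0,1]
    if k == 1:                      # condition on exactly one point in A
        locations.append(pts[0])

hist, _ = np.histogram(locations, bins=10, range=(0, 1))
print(len(locations), "samples; bin frequencies (each should be ~0.1):")
print(hist / len(locations))
```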
A central requirement in Theorems 2.1, 2.3, and 2.4 is that φ be a deterministic function of X. The mapping in Example 2.1 is a deterministic function of X and commutes with all translations of Z^d. Given a U[C_0] random variable V, independent of X, we can modify Example 2.1 by starting with a randomly shifted partition {C_i + V}_{i∈Z^d} of R^d and obtain a mapping Φ that is a function of X and V. As a result of starting with a randomly shifted partition, the joint distribution of (X, Φ(X, V)) is fully translation-invariant. However, Φ is no longer a deterministic function of X. Instead of using the lattice Z^d, we shall use randomness from the process to define a partition of R^d. It is straightforward to do this in an isometry-equivariant way. The difficulty lies in choosing a partition that avoids potential dependency problems.

We now turn our attention to Theorem 2.1. Let λ > λ′ > 0. It is non-trivial to show that there exists a (not necessarily translation-equivariant) monotone mapping which maps a Poisson point process of intensity λ to a Poisson point process of intensity λ′. In Example 2.1, we asserted the existence of a certain coupling between uniform random variables and Poisson point processes via a measurable function g : [0, 1] → M such that whenever U is a U[0, 1] random variable, g(U) is a Poisson point process. Due to the monotonicity requirement in Theorem 2.1, we require a more specialized coupling. An important tool in the proof of Theorem 2.1 will be Proposition 2.9 below, which is motivated by one of the key ideas from [3, Lemma 3.1]. Proposition 2.9 provides a coupling between a Poisson point process X in a finite volume and another, Y, of lower intensity, such that Y ≤ X and the process X − Y is also a Poisson point process. The process Y is not a deterministic function of X, but the coupling has certain other useful properties.

Throughout this paper, it will be convenient to encode randomness as a function of U[0, 1] random variables, as was done repeatedly in Example 2.1. For any point process Z, the support of Z is the random set

[Z] := {x ∈ R^d : Z({x}) = 1}.

Elements of [Z] are called Z-points. We call a mapping Φ : M × [0, 1] → M a splitting if Φ(µ, u) ≤ µ for all (µ, u) ∈ M × [0, 1] and if both Φ(X, U) and X − Φ(X, U) are Poisson point processes whenever X is a Poisson point process and U is a U[0, 1] random variable independent of X.

For example, consider the coupling between a Poisson point process X on R^d of intensity λ and another, Y, of lower intensity λ′, that is given by colouring the points of X independently of each other red or blue with probabilities λ′/λ and 1 − λ′/λ, and then taking the red points to be the set of Y-points. It is easy to see that both Y (the red points) and X − Y (the blue points) are independent Poisson point processes on R^d with intensities λ′ and λ − λ′. This elementary result is sometimes referred to as the colouring theorem [15], and this coupling can be expressed as a splitting, since all the required coin-flips can be encoded as a function of a single U[0, 1] random variable. We shall revisit this elementary coupling in more detail in Section 2.5. The coupling given by Proposition 2.9 below is also a splitting.

Proposition 2.9 (Splitting on finite volumes). Let λ > λ′ > 0. There exist a finite constant K = K(λ, λ′) and a family φ^fin of measurable mappings φ^fin_A, so that for each A ∈ B with finite Lebesgue measure larger than K, the map φ^fin_A = φ^fin_{(A,λ,λ′)} : M × [0, 1] → M has the following properties:

(a) The map φ^fin_A is monotone; that is, φ^fin_A(µ, u) ≤ µ for all (µ, u) ∈ M × [0, 1].

(b) For all (µ, u) ∈ M × [0, 1], we have φ^fin_A(µ, u) = φ^fin_A(µ|_A, u).

(c) If X is a homogeneous Poisson point process on R^d with intensity λ and U is a U[0, 1] random variable independent of X, then φ^fin_A(X|_A, U) is a Poisson point process of intensity λ′ on A, and X|_A − φ^fin_A(X|_A, U) is a Poisson point process of intensity λ − λ′ on A.

(d) For all (µ, u) ∈ M × [0, 1], if µ(A) = 1, then φ^fin_A(µ, u) = 0, while if µ(A) = 2, then φ^fin_A(µ, u) = µ.

(e) The family of mappings φ^fin has the following isometry-equivariance property: for any isometry θ of R^d, and for all (µ, u) ∈ M × [0, 1],

θ(φ^fin_A(µ, u)) = φ^fin_{θ(A)}(θ(µ), u).

We shall prove Proposition 2.9 in Section 2.4. Property (d) of Proposition 2.9 will be vital to the proof of Theorem 2.1. It states that whenever X|_A has exactly one point in its support, φ^fin_A(X|_A, U) will have no points, while whenever X|_A has exactly two points in its support, X|_A − φ^fin_A(X|_A, U) will have no points. Hence when X|_A has exactly one or two points, the locations of these points provide a possible source of randomness. The next example will illustrate how property (d) is exploited and will help to motivate the proof of Theorem 2.1. To make use of property (d), we shall need the following elementary lemma. Let ⊕ denote addition modulo one; that is, for x, y ∈ R, let x ⊕ y be the unique z ∈ [0, 1) such that x + y − z ∈ Z.

Lemma 2.10 (Adding U[0, 1] random variables modulo 1). Let U_1 and U_2 be U[0, 1] random variables that are measurable with respect to the σ-fields F_1 and F_2 respectively, and such that U_1 is independent of F_2 and U_2 is independent of F_1. If U := U_1 ⊕ U_2, then U is independent of F_1, U is independent of F_2, and U is a U[0, 1] random variable.

Proof. The proof follows from the Fubini theorem and the fact that for every x ∈ R we have U_1 ⊕ x =d U_1. Let E ∈ F_2 and let Q be the joint law of U_2 and 1_E. Let B ∈ B. By symmetry, it is enough to show that P({U ∈ B} ∩ E) = P(U_1 ∈ B)P(E). By the independence of U_1 and F_2, we have

P({U ∈ B} ∩ E) = ∫ P(U_1 ⊕ x ∈ B) i dQ(x, i) = ∫ P(U_1 ∈ B) i dQ(x, i) = P(U_1 ∈ B)P(E).
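Lemma 2.10 is easy to check numerically; a minimal sketch (mine):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

u1 = rng.random(n)
u2 = rng.random(n)
u = (u1 + u2) % 1.0                    # U = U1 (+) U2, addition modulo one

# U should be uniform and independent of each summand; the last line probes
# independence of events, which is stronger than zero correlation.
print("P(U < 0.3) ~ 0.3       :", (u < 0.3).mean())
print("corr(U, U1) ~ 0        :", np.corrcoef(u, u1)[0, 1])
print("P(U<0.5, U1<0.5) ~ 0.25:", ((u < 0.5) & (u1 < 0.5)).mean())
```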
Example 2.2 (A monotone map φ : M → M which maps a Poisson process X to another of lower intensity, such that X − φ(X) is also a Poisson process). Let λ > λ′ > 0. Let X be a Poisson point process on R^d with intensity λ and law P. Let P = {C_i}_{i∈N} be an indexed partition of R^d into equal-sized cubes, all translates of one another, large enough so that the Lebesgue measure of each cube is larger than the constant K from Proposition 2.9. The monotone mapping φ will be defined by specifying φ(·)|_C for all C ∈ P. Let U = {U_i}_{i∈N} be a sequence of i.i.d. U[0, 1] random variables that are independent of X. Let

Φ(X, U) := ∑_{i∈N} φ^fin_{C_i}(X, U_i),

where φ^fin is the splitting from Proposition 2.9. The map φ will be defined so that φ(X) =d Φ(X, U) and X − φ(X) =d X − Φ(X, U). By properties (c) and (b) of Proposition 2.9, we deduce that φ(X) and X − φ(X) are Poisson point processes on R^d with intensities λ′ and λ − λ′.

If X(C) = 1, then we say that C is one-special, while if X(C) = 2, then we say that C is two-special. Let k^1 and k^2 be the indices of the one-special and two-special cubes with the least index, respectively. Note that P-a.s. k^1 and k^2 are well defined. Let Z^1 be the unique X-point in C_{k^1}. Let Z^2_1 and Z^2_2 be the two X-points in C_{k^2}, where Z^2_1 is the one closest to the origin. Let C_0 be the cube containing the origin. Fix a measurable function f_{C_0} : C_0 → [0, 1] such that if V is a U[C_0] random variable, then f_{C_0}(V) is a U[0, 1] random variable. For each C ∈ P, let c ∈ C be so that C − c = C_0, and let f_C : C → [0, 1] be defined via f_C(x) = f_{C_0}(x − c).

Since X is a Poisson point process, it follows from Lemma 2.8 that, conditional on k^1, Z^1 is a U[C_{k^1}] random variable. Moreover, it is easy to see that S^1 := f_{C_{k^1}}(Z^1) is in fact a U[0, 1] random variable independent of

F^1 := σ(1[X(C_i) ≠ 1] X|_{C_i} : i ∈ N).

Similarly, it is easy to define S^2 as a function of (Z^2_1, Z^2_2) so that S^2 is a U[0, 1] random variable independent of

F^2 := σ(1[X(C_i) ≠ 2] X|_{C_i} : i ∈ N),

namely,

S^2 := f_{C_{k^2}}(Z^2_1) ⊕ f_{C_{k^2}}(Z^2_2).

To see why the above definition works, consider the random variables Y_1 and Y_2 defined as follows. Choose with a toss of a fair coin (that is independent of X) one of Z^2_1 or Z^2_2 to be Y_1, and let Y_2 be so that {Y_1, Y_2} = {Z^2_1, Z^2_2}. Clearly Y_1 and Y_2 are independent U[C_{k^2}] random variables and S^2 = f_{C_{k^2}}(Y_1) ⊕ f_{C_{k^2}}(Y_2).

Note that S^1 is measurable with respect to F^2 and S^2 is measurable with respect to F^1. Let S := S^1 ⊕ S^2. By Lemma 2.10, we have that S is independent of F^1 and S is independent of F^2. For all i ∈ N, let

φ(X)|_{C_i} := φ^fin_{C_i}(X, g_i(S)),

where {g_i} is the sequence of functions from Lemma 2.6. By property (a) of Proposition 2.9, we see that φ is monotone. We shall now show that φ(X) =d Φ(X, U) and X − φ(X) =d X − Φ(X, U). Observe that by property (d) of Proposition 2.9, for each one-special cube C we have

φ(X)|_C = Φ(X, U)|_C = 0.

Since S is independent of F^1 and {g_i(S)}_{i∈N} =d {U_i}_{i∈N}, we have that

φ(X) = ∑_{i∈N} 1[X(C_i) ≠ 1] φ^fin_{C_i}(X, g_i(S)) =d ∑_{i∈N} 1[X(C_i) ≠ 1] φ^fin_{C_i}(X, U_i) = Φ(X, U).

Similarly, by property (d) of Proposition 2.9, for each two-special cube C we have

(X − φ(X))|_C = (X − Φ(X, U))|_C = 0.

Since S is independent of F^2, we have that

X − φ(X) = ∑_{i∈N} 1[X(C_i) ≠ 2] (X|_{C_i} − φ^fin_{C_i}(X, g_i(S))) =d ∑_{i∈N} 1[X(C_i) ≠ 2] (X|_{C_i} − φ^fin_{C_i}(X, U_i)) = X − Φ(X, U).

Thus φ(X) =d Φ(X, U) and X − φ(X) =d X − Φ(X, U). ♦
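The two-point trick behind S^2 can also be checked numerically: once ordered by distance to the origin the two positions are exchangeable but not uniform, yet the mod-one sum of their uniforms is exactly uniform. A minimal sketch (mine), in one dimension with C = [0, 1) and f the identity:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000

# Two i.i.d. uniform points in the cube C = [0,1); order them by distance
# to the origin, as Z^2_1 and Z^2_2 are ordered in Example 2.2.
pts = np.sort(rng.random((n, 2)), axis=1)
z1, z2 = pts[:, 0], pts[:, 1]          # z1 = the point closer to the origin

s2 = (z1 + z2) % 1.0                   # S^2 = f(Z^2_1) (+) f(Z^2_2)

print("z1 alone is not uniform: P(z1 < 0.5) =", (z1 < 0.5).mean(), "(vs 0.75 predicted)")
print("but S^2 is uniform:      P(s2 < 0.5) =", (s2 < 0.5).mean())
print("                         P(s2 < 0.1) =", (s2 < 0.1).mean())
```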
As an aside, one might ask whether the two Poisson processes X and X − φ(X) in Example 2.2 or Theorem 2.1 can be made independent of each other, but it turns out that this is easily ruled out. (It may come as a surprise that two dependent Poisson processes can have a sum that is still a Poisson process; see [11].)

Proposition 2.11. There does not exist a monotone map φ : M → M such that if X is a homogeneous Poisson point process on R^d, then φ(X) and X − φ(X) are independent homogeneous Poisson point processes on R^d with strictly positive intensities.

Proof. Let X be a Poisson point process on R^d with intensity λ > 0. Let α ∈ (0, 1). Towards a contradiction, assume that φ(X) and X − φ(X) are independent Poisson point processes in R^d with intensities αλ and (1 − α)λ. Let R and B be independent Poisson point processes on R^d with intensities αλ and (1 − α)λ. Note that

(R, B, R + B) =d (φ(X), X − φ(X), X).   (2.1)

Now let Z := R + B, let B = B(0, 1), and consider the events

E := {Z(B) = 1} ∩ {R(B) = 1}   and   E′ := {X(B) = 1} ∩ {φ(X)(B) = 1}.

Clearly, P(E | Z) = α 1[Z(B)=1], which takes the value α with positive probability; but since E′ ∈ σ(X), we have that P(E′ | X) = 1_{E′}, which takes values in {0, 1} only. Since α ∈ (0, 1), we conclude that P(E | Z) and P(E′ | X) do not have the same law, which contradicts (2.1).

Outline of the proofs. Following the lead of Examples 2.1 and 2.2, we shall introduce an isometry-equivariant partition of R^d. The partition will consist of globes, which will be specially chosen balls of a fixed radius, together with a single unbounded part. The partition will be chosen as a deterministic function of the Poisson process by a procedure that does not need to examine the Poisson points inside the globes. The precise definition of this partition and its properties are somewhat subtle; see Sections 2.5 and 2.6. The most important property is that conditional on the partition, the process restricted to the bounded parts is a Poisson point process that is independent of the process on the unbounded part. This may be regarded as an extension of the following property enjoyed by stopping times for a one-dimensional Poisson process: conditional on the stopping time, the process in the future is a Poisson process independent of the process in the past. The precise formulation of the property we need may be found in Proposition 2.17.

To prove Theorem 2.1, we shall employ the splitting from Proposition 2.9 on the bounded parts, as in Example 2.2. The Poisson points in the unbounded part will be split independently of each other with probabilities (λ′/λ, 1 − λ′/λ). When one of the balls of the partition contains exactly one or two points, the splitting from Proposition 2.9 is completely deterministic. Thus the locations of these points provide a source of randomness that can be used to facilitate the splitting from Proposition 2.9 on the other balls of the partition, as in Example 2.2, and in addition can be used to independently split the points that do not belong to a bounded part. Of course, we cannot use randomness precisely as in Example 2.2, since that privileges the origin and therefore is not equivariant. Instead, we use randomness from the available source that is (essentially) nearest to where it is used.

Aside from some careful bookkeeping to ensure isometry-equivariance, the two main ingredients for the proof of Theorem 2.1 are: an isometry-equivariant partition with the independence property described above, and the splitting from Proposition 2.9. Next we focus our discussion on these two ingredients.

The radius R of the balls of the isometry-equivariant partition will depend on (λ, λ′, d). For all x ∈ R^d and all 0 < s < r, we define the shell centered at x from s to r to be the set

A(x; s, r) := {y ∈ R^d : s ≤ ∥x − y∥ ≤ r}.
Let X be a Poisson point process on R^d and x ∈ R^d. A single ball of radius R contained in B(x, R + 80) will be chosen to be a globe (a member of the partition) only if two properties are satisfied: the shell A(x; R + 90 + d, R + 100 + d) contains no X-points, and the shell A(x; R + 80, R + 90 + d) is relatively densely filled with X-points; that is, every ball of radius 1/2 that is contained in A(x; R + 80, R + 90 + d) itself contains an X-point. A minor complication is that the set of x ∈ R^d satisfying these properties is not discrete, but consists of small well-separated clusters. The centers of mass of these clusters will be the centers of the globes.
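The two shell conditions are straightforward to test for a given candidate center. Here is a minimal sketch (mine), with the constants 80/90/100 shrunk to 8/9/10 to keep the demo small, and with the dense-filling condition checked approximately on a grid of candidate ball centers:

```python
import numpy as np

rng = np.random.default_rng(7)

def is_globe_candidate(x, points, R=2.0, d=2):
    """Test the two shell conditions at x: the outer shell A(x; R+9+d, R+10+d)
    must contain no points, and the inner shell A(x; R+8, R+9+d) must be
    densely filled, i.e. every ball of radius 1/2 inside it contains a point
    (approximated on a grid of ball centers)."""
    r = np.linalg.norm(points - x, axis=1)
    lo, mid, hi = R + 8, R + 9 + d, R + 10 + d
    if np.any((r >= mid) & (r <= hi)):          # outer shell is not empty
        return False
    g = np.arange(-mid, mid + 0.25, 0.25)
    ys = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    ry = np.linalg.norm(ys, axis=1)
    ys = ys[(ry >= lo + 0.5) & (ry <= mid - 0.5)] + x   # centers of radius-1/2 balls
    return all(np.linalg.norm(points - y, axis=1).min() <= 0.5 for y in ys)

pts = rng.uniform(-20, 20, size=(rng.poisson(2.0 * 1600), 2))   # intensity 2 on a box
tests = rng.uniform(-5, 5, size=(20, 2))
print(sum(is_globe_candidate(x, pts) for x in tests), "of 20 candidates qualify")
# With these small parameters qualifying centers are very rare; the proof only
# needs such configurations to occur somewhere in the infinite volume.
```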
respect to P such that φ(X) is a Poisson point process on Rd with intensity λ′ > λ and X ≤ φ(X). Since φ(X) has greater intensity than X, with non-zero probability we have X(B(0, 1)) = 0 and φ(X)(B(0, 1)) ≥ 1. Since φ is strongly finitary with respect to P, there is a fixed M such that with non-zero probability, we also have φ(X)|B(0,1) = φ(X′)|B(0,1), where X′ is equal to X on B(0, M) but is resampled off B(0, M). Define a new simple point process Z from φ(X) by deleting all points in B(0, 1) and by deleting each point in [φ(X)|B(0,1)^c] independently with probability λ/λ′, conditional on φ(X). See Figure 2.1 for an illustration.

Figure 2.1: The dots are the original points of X and the squares are points of φ(X) \ X. The shaded region is B(0, 1) and the unshaded ring is A(0; 1, M). By selecting subsets of the points in A(0; 1, M) uniformly at random, there is non-zero probability that we shall select all the dots.

Since φ(X) is a Poisson point process, φ(X)|B(0,1) is independent of φ(X)|B(0,1)^c and we may define Z so that it is independent of φ(X)|B(0,1). Since X ≤ φ(X), there is a non-zero probability that Z|B(0,M) = X|A(0;1,M). Moreover, conditional on the event that X(B(0, 1)) = 0 and φ(X)(B(0, 1)) ≥ 1, there is a non-zero probability that φ(Z)|B(0,1) = φ(X)|B(0,1). Clearly, this contradicts the independence of Z from φ(X)|B(0,1); the following lemma formalizes this intuition.

Lemma 2.12. Let (S, S) be a measurable space. If X and Y are independent random variables taking values in S and A := {y ∈ S : P(Y = y) > 0}, then P({X = Y} ∩ {Y ∈ A^c}) = 0.

Proof. We apply the Fubini theorem and the independence of X and Y as follows. Let µ_X be the law of X. Then

P({X = Y} ∩ {Y ∈ A^c}) = P({X = Y} ∩ {X ∈ A^c}) = ∫_{A^c} P(Y = x) dµ_X(x) = ∫_{A^c} 0 dµ_X(x) = 0 .

With Lemma 2.12 we can now make the above argument for Theorem 2.4 precise.

Proof of Theorem 2.4. Let λ′ > λ > 0. Towards a contradiction, let X be a Poisson point process on Rd with intensity λ and law P. Let φ : M → M be a mapping that is strongly finitary with respect to P such that X ≤ φ(X) and φ(X) is a Poisson point process on Rd with intensity λ′. Since X ≤ φ(X) and φ(X) has greater intensity, we must have that

P({φ(X)(B(0, 1)) ≥ 1} ∩ {X(B(0, 1)) = 0}) > 0 .

For µ ∈ M, let N = N(µ) be the smallest natural number such that for P-a.e. µ′ ∈ M we have φ(µ)|B(0,1) = φ(µ′)|B(0,1) whenever µ|B(0,N) = µ′|B(0,N). For M > 0, let

E := {N(X) < M} ∩ {φ(X)(B(0, 1)) ≥ 1} ∩ {X(B(0, 1)) = 0} .

Since φ is strongly finitary with respect to P, we have that P(N(X) < ∞) = 1 and we may choose M so that

P(E) > 0 .    (2.2)

Note that on the event E we have that

φ(X)|B(0,1) = φ( X|A(0;1,M) + W|B(0,M)^c )|B(0,1) ,

where W is independent of X and has law P. Let U be a U[0, 1] random variable independent of X and W. We shall show that there exists a measurable function H : M × M × [0, 1] → M such that

P({H(φ(X)|A(0;1,M), W, U) = φ(X)|B(0,1)} ∩ E) > 0 .    (2.3)

Define a measurable function s : M × [0, 1] → M such that: if µ(Rd) = ∞, then s(µ, u) = 0 for all u ∈ [0, 1], while if µ(Rd) < ∞, then [s(µ, U)] is a uniformly random subset of [µ]. Since X ≤ φ(X) and since U is independent of X, we claim that for any event E′ ∈ σ(X) with P(E′) > 0,

P({s(φ(X)|A(0;1,M), U) = X|A(0;1,M)} ∩ E′) > 0 .    (2.4)

To verify (2.4), let

L := ∫_0^1 1[ s(φ(X)|A(0;1,M), u) = X|A(0;1,M) ] du .

By the Fubini theorem and the independence of X and U, we have that

P({s(φ(X)|A(0;1,M), U) = X|A(0;1,M)} ∩ E′) = E[ L 1_{E′} ] .

Observe that from the definition of s and the fact that X ≤ φ(X), we must have that L > 0 P-a.s.
Since 1_{E′} ≥ 0 and E[1_{E′}] > 0, it follows that E[L 1_{E′}] > 0. Hence, taking E′ = E, from (2.2) and (2.4) we have that

P({s(φ(X)|A(0;1,M), U) = X|A(0;1,M)} ∩ E) > 0 .    (2.5)

For all (µ, µ′, u) ∈ M × M × [0, 1], define

H(µ, µ′, u) := φ( s(µ|A(0;1,M), u) + µ′|B(0,M)^c )|B(0,1) .

From (2.5), the definition of H and the definition of E, it is obvious that (2.3) holds. Since φ(X) is a Poisson point process, φ(X)|B(0,1) and φ(X)|A(0;1,M) are independent, and since U and W are independent of X, we have that φ(X)|B(0,1) is independent of H(φ(X)|A(0;1,M), W, U). In addition, P(φ(X)|B(0,1) = µ) = 0 for all µ ∈ M \ {0}, and φ(X)|B(0,1) ≠ 0 on the event E. Thus equation (2.3) contradicts Lemma 2.12.

2.4  Proof of Proposition 2.9

The proof of Proposition 2.9 is based on a specific coupling of two Poisson random variables.

Lemma 2.13. For any α ∈ (0, 1), there exists a k(α) such that if λ > k(α), then there exist random variables X and Y such that X, Y and X + Y have Poisson distributions with respective means αλ, (1 − α)λ and λ, and

P(Y = 0 | X + Y = 1) = 1 = P(X = 0 | X + Y = 2) .

Proof. Write π_i^γ := e^{−γ} γ^i / i! for the Poisson probability mass function. We must find an appropriate joint mass function for X and Y, i.e., an element Q of the vector space R^{N×N} with all components non-negative and satisfying

∑_j Q_{i,j} = π_i^{αλ} ,   ∑_i Q_{i,j} = π_j^{(1−α)λ} ,   ∑_i Q_{i,k−i} = π_k^λ ,    (2.6)

and

Q_{0,1} = Q_{1,1} = Q_{2,0} = 0 .    (2.7)

Let P ∈ R^{N×N} be the mass function for independent Poisson random variables, i.e., P_{i,j} := π_i^{αλ} π_j^{(1−α)λ}, and note that P satisfies (2.6) (with P in place of Q). For s, t ∈ N define E^{s,t} ∈ R^{N×N} by E^{s,t}_{i,j} := 0 for (i, j) ∉ [s, s+2] × [t, t+2], and

E^{s,t}_{i,j}:     j = t    j = t+1    j = t+2
  i = s              0        −1          1
  i = s+1            1         0         −1
  i = s+2           −1         1          0

and note that ∑_j E^{s,t}_{i,j} = ∑_i E^{s,t}_{i,j} = ∑_i E^{s,t}_{i,k−i} = 0. Now let

Q := P + P_{0,1} E^{0,0} − (−P_{0,1} + P_{2,0}) E^{1,0} − (−P_{0,1} + P_{2,0} + P_{1,1}) E^{0,1} .

From the definition of Q, it is easy to verify that (2.7) holds. (The idea is that adding a multiple of E^{s,t} moves mass from location (s, t+1) to (s+1, t), without affecting the locations (i, j) with i + j ≤ s + t. First we transfer mass P_{0,1} from location (0, 1) to (1, 0); this results in mass P_{2,0} − P_{0,1} at (2, 0), which we then transfer to (1, 1); finally we similarly transfer the current mass at (1, 1) to (0, 2).) The equalities (2.6) follow from the above observations on sums involving P and E, so it remains only to check non-negativity of Q for λ sufficiently large. This follows on noting that for some c = c(k, α) > 0 we have P_{i′,j′} ≥ cλ P_{i,j} whenever i + j = k and i′ + j′ = k + 1; therefore it suffices to take λ large enough compared with c(1, α)^{−1}, . . . , c(4, α)^{−1}.
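The perturbation argument above is easy to check numerically. The following Python sketch (ours, purely illustrative and not part of the original text) builds a truncated version of Q for the sample choice α = 1/2 and λ = 20, which happens to be large enough for non-negativity here; the helper names and the truncation level N are arbitrary.

    import math

    def pois(mean, i):
        # Poisson probability mass function pi_i^mean
        return math.exp(-mean) * mean ** i / math.factorial(i)

    def coupled_pmf(alpha, lam, N=40):
        # P: joint pmf of independent Poissons with means alpha*lam, (1-alpha)*lam
        P = [[pois(alpha * lam, i) * pois((1 - alpha) * lam, j)
              for j in range(N)] for i in range(N)]
        Q = [row[:] for row in P]

        def add_E(s, t, c):
            # add c * E^{s,t}: -1 at (s,t+1), +1 at (s+1,t), with compensating
            # entries so that all row, column and diagonal sums vanish
            for di, dj, v in [(0, 1, -1), (0, 2, 1), (1, 0, 1),
                              (1, 2, -1), (2, 0, -1), (2, 1, 1)]:
                Q[s + di][t + dj] += c * v

        add_E(0, 0, P[0][1])                          # mass P_{0,1} -> (1,0)
        add_E(1, 0, -(P[2][0] - P[0][1]))             # mass at (2,0) -> (1,1)
        add_E(0, 1, -(P[2][0] - P[0][1] + P[1][1]))   # mass at (1,1) -> (0,2)
        return Q

    Q = coupled_pmf(0.5, 20.0)
    assert abs(Q[0][1]) + abs(Q[1][1]) + abs(Q[2][0]) < 1e-15          # (2.7)
    assert min(min(row) for row in Q) >= 0.0                           # non-negativity
    assert abs(sum(Q[3]) - pois(10.0, 3)) < 1e-9                       # a row sum of (2.6)
    assert abs(sum(Q[i][5 - i] for i in range(6)) - pois(20.0, 5)) < 1e-9  # a diagonal sum

All four families of constraints can be checked in the same way: the row, column and diagonal sums of Q agree with those of P precisely because every E^{s,t} has vanishing row, column and diagonal sums.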
For later convenience we next rephrase Lemma 2.13 in terms of a mapping that constructs X from X + Y.

Corollary 2.14. For any α ∈ (0, 1), there exist a k(α) and a measurable function F : N × [0, 1] → N with the following properties:

(a) For all (n, u) ∈ N × [0, 1], we have that F(n, u) ≤ n.
(b) For all u ∈ [0, 1], we have that F(1, u) = 1 and F(2, u) = 0.
(c) If X̄ is a Poisson random variable with mean λ̄ > k(α) and U is a U[0, 1] random variable independent of X̄, then F(X̄, U) and X̄ − F(X̄, U) are Poisson random variables with means αλ̄ and (1 − α)λ̄ respectively.

Proof. Let α ∈ (0, 1) and k(α) be as in Lemma 2.13. Let X̄ be a Poisson random variable with mean λ̄ > k(α) and let U be a U[0, 1] random variable independent of X̄. By Lemma 2.13, let X and Y be Poisson random variables with means αλ̄ and (1 − α)λ̄ such that X + Y =^d X̄. Define F so that

(X̄, F(X̄, U)) =^d (X + Y, X) .

With Corollary 2.14 the proof of Proposition 2.9 is relatively straightforward, except that property (e) requires a little care. We next present some definitions and elementary facts about Poisson processes that will be useful in the proof and in the rest of the paper. Recall that for µ ∈ M, we denote the restriction of µ to a set A ∈ B via µ|A(·) := µ(· ∩ A). Recall that ‖·‖ is the Euclidean norm on Rd. We say that the inter-point distances of a point measure µ ∈ M are distinct if for all x, y, u, v ∈ [µ] such that {x, y} ≠ {u, v} and x ≠ y, we have that ‖x − y‖ ≠ ‖u − v‖.

Lemma 2.15 (Elementary facts about Poisson point processes). Let X be a Poisson point process on Rd with positive intensity and law P.

(a) Let a ∈ Rd. The distances from the X-points to the point a are distinct P-a.s.
(b) For all d ≥ 1, the inter-point distances of X are distinct P-a.s.
(c) P-a.s., every set of d elements of [X] has linear span equal to all of Rd.

Proof. The proof follows easily from Lemma 2.8.

Proof of Proposition 2.9. Let X be a Poisson point process on Rd with intensity λ > 0. Let α := λ′/λ and let k(α) be defined as in Corollary 2.14. Let K > 0 be so that λ̄ := Kλ > k(α). Let A ∈ B have Lebesgue measure larger than K. Let X̄ := X(A), so that X̄ is a Poisson random variable. Let F be a function as in Corollary 2.14. Let U be a U[0, 1] random variable independent of X. Let g1, g2 : [0, 1] → [0, 1] be two functions as in Lemma 2.6, so that U1 := g1(U) and U2 := g2(U) are independent U[0, 1] random variables. Note that by property (a) of Corollary 2.14, F(X(A), U1) ≤ X(A). We shall define φ^fin_A so that [φ^fin_A(X, U)] is a subset of [X|A] of size F(X(A), U1). Moreover, conditional on F(X(A), U1) = j, each subset of [X|A] of size j will be chosen uniformly at random using the randomness provided by U2. To do this carefully, we shall tag the points in [X|A] and specify a way to use the randomness provided by U2.

Consider the following enumeration of the points in [µ|A]. The center of mass of a Borel set C with positive finite Lebesgue measure L(C) > 0 is given by

(1/L(C)) ∫_C x dL(x) ∈ Rd .    (2.8)

Let a be the center of mass of A. We say that µ admits the centric enumeration on A if µ(A) > 0 and if the distances from a to the points in [µ|A] are distinct. The centric enumeration on A is given by the bijection ι = ι_µ : [µ|A] → {1, 2, . . . , µ(A)}, where ι(x) < ι(y) iff ‖x − a‖ < ‖y − a‖. Note that by Lemma 2.15, part (a), X admits the centric enumeration on A Pλ-a.s. when X(A) > 0.

Now we define some auxiliary functions that, when composed with U[0, 1] random variables, yield random variables with certain distributions. For any set B, let P(B) denote the set of all subsets of B. Let {s_{i,j}}_{j≤i} be a collection of measurable functions, where s_{i,j} : [0, 1] → P({1, 2, . . . , i}) and if U is a U[0, 1] random variable, then s_{i,j}(U) is uniformly distributed over subsets of size j of {1, 2, . . . , i}.

For all µ ∈ M that do not admit the centric enumeration on A, set φ^fin_A(µ, u) = 0 for all u ∈ [0, 1]. Otherwise, for (µ, u) ∈ M × [0, 1], we proceed as follows. If µ(A) = i, let ι : [µ|A] → {1, . . . , i} be the centric enumeration. Let j := F(i, g1(u)). Define φ^fin_A(µ, u) to be the simple point measure with support {x ∈ [µ|A] : ι(x) ∈ s_{i,j}(g2(u))}.
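In summary, the map φ^fin_A admits the following Python sketch (ours, and only a sketch): points are taken as coordinate tuples, digit_split stands in for the functions g1, g2 of Lemma 2.6, F is any map as furnished by Corollary 2.14, and a seeded uniform sample plays the role of the fixed functions s_{i,j}.

    import math, random

    def digit_split(u, k, n_bits=24):
        # stand-in for g_k of Lemma 2.6: read off the binary digits of u in
        # positions k, k+2, k+4, ... (kept within double precision)
        out, place = 0.0, 0.5
        for m in range(n_bits):
            out += (int(u * 2 ** (k + 2 * m)) % 2) * place
            place /= 2
        return out

    def phi_fin(points, centre, F, u):
        # keep a subset of 'points' of size F(n, U1), where the points are
        # listed by increasing distance to 'centre' (the centric enumeration)
        # and the subset of that size is chosen uniformly using U2
        if not points:
            return []
        u1, u2 = digit_split(u, 1), digit_split(u, 2)
        ordered = sorted(points, key=lambda x: math.dist(x, centre))
        j = F(len(points), u1)                       # j <= n by property (a)
        keep = random.Random(u2).sample(range(len(ordered)), j)
        return [ordered[i] for i in sorted(keep)]

Note that if F(1, ·) = 1 and F(2, ·) = 0, as in property (b) of Corollary 2.14, then the output is a deterministic function of the input whenever A contains exactly one or two points; this is exactly the degree of freedom exploited later to harvest randomness from special globes.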
Clearly, by definition, φ^fin_A is monotone and φ^fin_A(µ, u) = φ^fin_A(µ|A, u). From Corollary 2.14, property (c), it is immediate that φ^fin_A(X, U)(A) and X(A) − φ^fin_A(X, U)(A) are Poisson random variables with means λ′L(A) and (λ − λ′)L(A) respectively. Moreover, it is easy to check with the help of Lemma 2.8 that in fact φ^fin_A(X, U) and X|A − φ^fin_A(X, U) are Poisson point processes on A with intensities λ′ and λ − λ′ respectively. Thus properties (c), (a) and (b) all hold. It is easy to see that property (d) is also inherited from property (b) of Corollary 2.14. Moreover, we have the required property (e), since we enumerated the points in the support of µ|A in an isometry-equivariant way via the centric enumeration, while the functions g1, g2, F, s_{i,j} are fixed functions independent of µ and A.

2.5  Selection rules

We shall now define an important class of isometry-equivariant partitions that will have a certain independence property. Recall that the open ball of radius r centered at x is denoted by B(x, r) := {y ∈ Rd : ‖x − y‖ < r}. The closed ball is denoted by B̄(x, r) := {y ∈ Rd : ‖x − y‖ ≤ r}. Let F ⊂ B denote the set of closed subsets of Rd. An R-selection rule is a mapping Ψ : M → F that has the following properties.

(a) If X is a Poisson point process on Rd with intensity λ > 0 and law Pλ, then Pλ-a.s. Ψ(X) is a non-empty union of disjoint closed balls of radius R.
(b) The map Ψ is isometry-equivariant; that is, for all isometries θ of Rd and all µ ∈ M, we have that Ψ(θµ) = θΨ(µ).
(c) For all µ, µ′ ∈ M, provided µ and µ′ agree on the set

H(µ) = H_Ψ(µ) := ( ⋃_{x∈Ψ(µ)} B̄(x, 2) )^c ,    (2.9)

we have that Ψ(µ) = Ψ(µ′).
(d) The map Ψ is measurable; see below for the precise meaning of this.

Let Ψ be an R-selection rule and let µ ∈ M. We call the connected components of Ψ(µ) the globes (under µ) and we denote the set of globes by Globes[Ψ(µ)]. The ether is Ψ(µ)^c := Rd \ Ψ(µ). Note that the set of globes together with the ether form an isometry-equivariant partition of Rd. Note that the set H(µ) is obtained by first extending Ψ(µ) by distance 2, then taking the complement of the enlarged set. The idea behind the key condition (c) is that Ψ(µ) is determined only by the restriction of µ to Ψ(µ)^c (for technical reasons it is convenient to insist that it is determined even on the smaller set H(µ) ⊂ Ψ(µ)^c). This will have the consequence that for a Poisson point process X, conditional on Ψ(X), the process restricted to Ψ(X) is still a Poisson point process.

Proposition 2.16. For all d ≥ 1 and all R > 0, there exists an R-selection rule.

We postpone the construction of selection rules until Section 2.6. Sometimes, when the value of R is not important, we shall refer to Ψ simply as a selection rule. The key property of selection rules is the following.

Proposition 2.17 (Key equality). Let X and W be independent Poisson point processes on Rd with the same intensity. For a selection rule Ψ, the process Z := W|Ψ(X) + X|Ψ(X)^c has the same law as X, and Ψ(X) = Ψ(Z).

Proposition 2.17 states that conditional on Ψ(X), not only is X|Ψ(X) a Poisson point process on Ψ(X), it is also independent of X|Ψ(X)^c.

Some Remarks on Measurability. It will be obvious from our construction of selection rules that measurability will not be an issue. However, for the sake of completeness, and since we want to prove Proposition 2.17 before providing the explicit construction of selection rules, we assign the Effros σ-algebra to F.
For each compact set K ∈ B, let F_K := {F ∈ F : F ∩ K ≠ ∅}. The Effros σ-algebra for F is generated by the sets F_K for all compact sets K ∈ B. Let (Ω, F, P) be a probability space. We call a measurable function X : Ω → F a random closed set. Thus if X is a Poisson point process and Ψ is a selection rule, then Ψ(X) is a random closed set. We shall not need to use any results from the theory of random closed sets; we refer the interested reader to [17] for background.

Remarks on the Proof of Proposition 2.17. It is immediate from property (c) that Ψ(X) = Ψ(Z). The isometry-equivariance of selection rules (property (b)) will not play a role in the proof of Proposition 2.17. For the purposes of the following discussion, let us assume that Ψ does not have to satisfy property (b). Temporarily suppose instead that Ψ satisfies the following additional requirement.

(b′) There exists a fixed Borel set D such that if X is a Poisson point process on Rd, then Ψ(X) ⊂ D ⊂ H_Ψ(X)^c.

By property (c) in the definition of a selection rule, it is easy to see that Ψ(X) is σ(X|D^c)-measurable. Since X|D and X|D^c are independent, we have that

W|_{D∩Ψ(X)} + X|_{D∩Ψ(X)^c} =^d X|_D ,

where W =^d X and W is independent of X. Moreover, one can verify (see Lemma 2.19 below) that

W|_{D∩Ψ(X)} + X|_{D∩Ψ(X)^c} + X|_{D^c} =^d X .    (2.10)

If Ψ satisfies condition (b′), then the left-hand side of (2.10) equals W|_{Ψ(X)} + X|_{Ψ(X)^c} and Proposition 2.17 follows.

The above argument suggests that to prove Proposition 2.17, we should examine events where Ψ(X) is contained in some deterministic set. However, in general such events will have probability zero. We can overcome this problem by considering events where for some bounded Borel set A, we have that Ψ(X) ∩ A is contained in some deterministic set. For each bounded Borel set A, Lemma 2.18 below specifies some additional useful properties that we require of such events.

Lemma 2.18. Let X be a Poisson point process on Rd with positive intensity. Let Ψ be an R-selection rule and let H be defined as in (2.9). Let A be a bounded Borel set. There exist a finite set F, a collection of disjoint events {E(α)}_{α∈F}, and a collection of bounded Borel sets {D(α)}_{α∈F} with the following properties:

(i) For all α ∈ F, on the event E(α), we have that Ψ(X) ∩ A ⊂ D(α) ⊂ H(X)^c.
(ii) For all α ∈ F, the event E(α) is σ(X|_{D(α)^c})-measurable.
(iii) The disjoint union ⋃_{α∈F} E(α) is an event of probability one.

We shall prove this later. The following lemma will be useful in the proof of Proposition 2.17. In particular, it justifies equation (2.10) when Ψ satisfies condition (b′). The lemma is a technical generalization of the fact that if X and W are two independent Poisson point processes on Rd with the same intensity, then for all s ∈ B we have

W|_s + X|_{s^c} =^d X .    (2.11)

Lemma 2.19. Let X′ and W′ be Poisson point processes with the same intensity on some Borel set D ⊂ Rd. Let T be a random closed set and let S := T ∩ D. Let Y be any point process and let V be an event. Let S̄ := D \ S. If W′ is independent of (X′, S, Y, V) and if X′ is independent of (S, Y, V), then for all measurable sets of point measures 𝒜 ⊆ M,

P({X′ + Y ∈ 𝒜} ∩ V) = P({W′|_S + X′|_{S̄} + Y ∈ 𝒜} ∩ V) .

Proof. Let µ_{X′} be the law of X′ and let Q be the joint law of S, Y and 1_V. From (2.11) it is easy to see that for all Borel s ⊂ D (writing s̄ := D \ s) and for all measurable 𝒜′ ⊆ M,

∫ 1[ x|_s + x|_{s̄} ∈ 𝒜′ ] dµ_{X′}(x) = P(X′ ∈ 𝒜′) = P(W′|_s + X′|_{s̄} ∈ 𝒜′) = ∫∫ 1[ w|_s + x|_{s̄} ∈ 𝒜′ ] dµ_{X′}(w) dµ_{X′}(x) .    (2.12)

Let 𝒜 ⊆ M be measurable and L := P({X′ + Y ∈ 𝒜} ∩ V).
By the independence of X′ and (S, Y, V), we have that

L = ∫∫ 1[ x + y ∈ 𝒜 ] v dµ_{X′}(x) dQ(s, y, v) = ∫∫ 1[ x|_s + x|_{s̄} + y ∈ 𝒜 ] v dµ_{X′}(x) dQ(s, y, v) .    (2.13)

Applying (2.12) to (2.13), we obtain that

L = ∫∫∫ 1[ w|_s + x|_{s̄} + y ∈ 𝒜 ] v dµ_{X′}(w) dµ_{X′}(x) dQ(s, y, v) .    (2.14)

Since X′ and W′ are independent, and (X′, W′) and (S, Y, V) are independent, we easily recognize that the right-hand side of equation (2.14) is equal to P({W′|_S + X′|_{S̄} + Y ∈ 𝒜} ∩ V).

With the help of Lemmas 2.18 and 2.19 we now prove Proposition 2.17.

Proof of Proposition 2.17. Let X, W : Ω → M be independent Poisson point processes on Rd with the same intensity, defined on the probability space (Ω, F, P). We shall use ω to denote an element of the probability space, and during this proof X(ω) will denote the point measure that is the image of ω under the random variable X (not “the number of X-points in ω”). Let Ψ be an R-selection rule and let Z := W|_{Ψ(X)} + X|_{Ψ(X)^c}. Let 𝒜 ⊆ M be a measurable set of point measures. It suffices to show that P(X|_A ∈ 𝒜) = P(Z|_A ∈ 𝒜) for all bounded Borel sets A. Let A be a bounded Borel set and let {E(α)}_{α∈F} and {D(α)}_{α∈F} be collections of events and subsets of Rd that satisfy the conditions of Lemma 2.18. We shall show that for all α ∈ F,

P({X|_A ∈ 𝒜} ∩ E(α)) = P({Z|_A ∈ 𝒜} ∩ E(α)) .    (2.15)

By summing over all α ∈ F, we can then conclude by property (iii) of Lemma 2.18 that P(X|_A ∈ 𝒜) = P(Z|_A ∈ 𝒜). Let us fix α ∈ F and set E := E(α) and D := D(α). Observe that for all ω1, ω2 ∈ E, we have Ψ(X(ω1)) = Ψ(X(ω2)) whenever X(ω1) = X(ω2) on D^c. This follows from property (c) in the definition of a selection rule and property (i) of Lemma 2.18. Now define S := Ψ(X|_{D^c}). Clearly, S is σ(X|_{D^c})-measurable and on the event E, we have that S = Ψ(X). Since X is a Poisson point process, we have that X|_{D∩A} is independent of X|_{D^c∩A}. Also, by property (ii) of Lemma 2.18 we have that E ∈ σ(X|_{D^c}). By applying Lemma 2.19 with the following substitutions (the objects on the left are those of Lemma 2.19): D = D ∩ A, X′ = X|_{D∩A}, W′ = W|_{D∩A}, T = S, S = S ∩ D ∩ A, Y = X|_{D^c∩A}, V = E, it is easy to check that

P({X|_A ∈ 𝒜} ∩ E) = P({W|_{D∩A∩S} + X|_{D∩A∩S^c} + X|_{D^c∩A} ∈ 𝒜} ∩ E) .

Thus, from the definition of S and property (i) of Lemma 2.18, we have that

P({X|_A ∈ 𝒜} ∩ E) = P({W|_{Ψ(X)∩A} + X|_{Ψ(X)^c∩A} ∈ 𝒜} ∩ E) .

By the definition of Z, we see that we have verified equation (2.15) as required. It remains to prove Lemma 2.18.

Proof of Lemma 2.18. We need some preliminary definitions. The open cube of side length 2r centered at the origin is the set (−r, r)^d. The diameter of a set A ⊂ Rd is sup_{x,y∈A} ‖x − y‖. Let X be a Poisson point process on Rd and let Ψ be an R-selection rule. Recall that by property (a) in the definition of a selection rule, all globes are balls of radius R. Fix a bounded Borel set A. Let A′ := ⋃_{x∈A} B(x, 2R). Let {c_i}_{i=1}^N be a collection of disjoint cubes of diameter 1/2 such that their union contains the set A′. Thus, some cubes may not be open. Let a_i ∈ c_i be the centers of the cubes. Let F_i be the event that the center of some globe (under X) is an element of the cube c_i. For a binary sequence α ∈ {0, 1}^N of length N, define

E(α) := ⋂_{1≤i≤N : α(i)=1} F_i ∩ ⋂_{1≤i≤N : α(i)=0} F_i^c .

Set F := {α ∈ {0, 1}^N : P(E(α)) > 0}. Note that the events {E(α)}_{α∈F} are disjoint and their union over all α is an event of probability 1, so that condition (iii) is satisfied. Note that if x ∈ Rd and ‖x − a_i‖ ≤ 1/2, then

B̄(x, R) ⊂ B(a_i, R + 1) ⊂ B̄(x, R + 2) .    (2.16)
Figure 2.2: The grid is an illustration of the cubes c_i. The black squares are the centers of the globes. The hatched discs are the globes, the union of the shaded discs is the set D(α) that contains the globes intersecting A, and the area contained in the largest circles is part of the set H(X)^c.

Define

D(α) := ⋃_{1≤i≤N : α(i)=1} B(a_i, R + 1) .

Since every globe that intersects A has its center lying at distance at most R from A, every globe that intersects A must have a center in some cube c_i. By definition, for every α ∈ F, on the event E(α) we see from (2.16) and (2.9) that

Ψ(X) ∩ A ⊂ D(α) ⊂ H(X)^c ,

since the diameter of each cube c_i is 1/2. See Figure 2.2 for an illustration. Thus condition (i) is satisfied. Observe that for each α ∈ F, we have that E(α) ∈ σ(X|_{D(α)^c}) by property (c) in the definition of a selection rule, so that condition (ii) is also satisfied.

Proposition 2.17 will be instrumental in proving Theorems 2.1 and 2.3. In Corollary 2.21 below, we make an important step in this direction by constructing a splitting that involves different mechanisms on the globes and on the ether. Before stating this result, we need some preliminary definitions. In particular, recall the elementary fact that if each point of a Poisson point process X with intensity λ is deleted independently of all the others with probability 1 − λ′/λ, where λ′ < λ, then the remaining points and the deleted points form independent Poisson point processes with intensities λ′ and λ − λ′. To facilitate later variations on this theme, we shall give a very explicit version of this fact.

Sometimes it will be convenient to specify a well ordering of the sets [µ], [µ|_{Ψ(µ)^c}] and Globes[Ψ(µ)]. This can be done in the following way. Consider the ordering ≺ on Rd in which x ≺ y iff ‖x‖ < ‖y‖, or iff ‖x‖ = ‖y‖ and x is less than y in the lexicographic ordering of Rd. Thus we can well order [µ] and [µ|_{Ψ(µ)^c}] via ≺, and well order Globes[Ψ(µ)] by well ordering the centers of the globes via ≺. We shall call ≺ the radial ordering.

Define F^coin = F^coin_{(λ,λ′)} : Rd × [0, 1] → M via

F^coin(x, u) := 1[u ≤ λ′/λ] δ_x .    (2.17)

Define φ^ind = φ^ind_{(λ,λ′)} : M × [0, 1] → M by

φ^ind_{(λ,λ′)}(µ, u) := ∑_{i=1}^∞ F^coin_{(λ,λ′)}(x_i, g_i(u)) ,   (µ, u) ∈ M × [0, 1] ,    (2.18)

where {x_i}_{i=1}^∞ is [µ] ordered by ≺ and the g_i are from Lemma 2.6. We shall call φ^ind the standard splitting. The following fact is elementary.

Lemma 2.20 (Independent splitting). If X is a Poisson point process on Rd with intensity λ and U is a U[0, 1] random variable independent of X, then for all A ∈ B and for all λ′ < λ we have that φ^ind_{(λ,λ′)}(X|_A, U) and X|_A − φ^ind_{(λ,λ′)}(X|_A, U) are independent Poisson point processes on A with intensities λ′ and λ − λ′ respectively.
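As a concrete illustration of Lemma 2.20, here is a toy one-dimensional simulation (ours, not part of the original text); fresh independent coins stand in for the variables g_i(U).

    import math, random

    def sample_poisson(rng, mean):
        # simple inversion sampler; adequate for moderate means
        u, p, k = rng.random(), math.exp(-mean), 0
        cum = p
        while u > cum:
            k += 1
            p *= mean / k
            cum += p
        return k

    def standard_splitting(lam, lam_p, length=50.0, seed=1):
        # sample a Poisson process of intensity lam on [0, length], then keep
        # each point when its coin is at most lam_p/lam, as in F^coin of (2.17);
        # kept and deleted are then independent Poisson processes of
        # intensities lam_p and lam - lam_p
        rng = random.Random(seed)
        n = sample_poisson(rng, lam * length)
        points = sorted(rng.uniform(0, length) for _ in range(n))
        kept, deleted = [], []
        for x in points:
            (kept if rng.random() <= lam_p / lam else deleted).append(x)
        return kept, deleted

    kept, deleted = standard_splitting(lam=3.0, lam_p=1.0)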
Corollary 2.21. Let X be a Poisson point process on Rd with intensity λ and let λ′ < λ. Let φ^fin be the splitting from Proposition 2.9. Let Ψ be an R-selection rule, where the Lebesgue measure of B(0, R) is larger than the constant K(λ, λ′) from Proposition 2.9. Let {b_i}_{i∈Z+} = Globes[Ψ(X)], where we have ordered the globes via the radial ordering. Let U be a U[0, 1] random variable independent of X and let g_i : [0, 1] → [0, 1] be a sequence of functions as in Lemma 2.6. The mapping Φ = Φ_{(λ,λ′)} defined by

Φ(X, U) := ∑_{i∈Z+} φ^fin_{(b_i,λ,λ′)}(X|_{b_i}, g_i(U)) + φ^ind_{(λ,λ′)}(X|_{Ψ(X)^c}, g_0(U))

is a splitting such that Φ(X, U) and X − Φ(X, U) are Poisson point processes with intensities λ′ and λ − λ′.

Proof. The inequality Φ(X, U) ≤ X is obvious from the definition of Φ, so we just need to check that Φ(X, U) and X − Φ(X, U) have the right distributions. This is made possible via Proposition 2.17. Let W be a Poisson point process on Rd with intensity λ that is independent of X and U. Let U1, U2 be independent U[0, 1] random variables that are also independent of X and W. From the definition of φ^ind, it is easy to see that

φ^ind_{(λ,λ′)}(W|_{Ψ(X)} + X|_{Ψ(X)^c}, U1) =^d φ^ind_{(λ,λ′)}(W|_{Ψ(X)}, U1) + φ^ind_{(λ,λ′)}(X|_{Ψ(X)^c}, U2) ,

since the ordering of the points of W|_{Ψ(X)} + X|_{Ψ(X)^c} is irrelevant as long as the ordering is independent of U1 and U2. By Proposition 2.17, we have that X =^d W|_{Ψ(X)} + X|_{Ψ(X)^c}, so we obtain that

φ^ind_{(λ,λ′)}(X, U1) =^d φ^ind_{(λ,λ′)}(W|_{Ψ(X)}, U1) + φ^ind_{(λ,λ′)}(X|_{Ψ(X)^c}, U2) .    (2.19)

From property (c) of Proposition 2.9 and Lemma 2.20, it is easy to see that for any A ∈ B with Lebesgue measure larger than K, we have

φ^fin_A(W|_A, U1) =^d φ^ind(W|_A, U1) .

Moreover, since X and W are independent, it follows that

P( ∑_{i∈Z+} φ^fin_{b_i}(W|_{b_i}, g_i(U1)) ∈ · | X ) = P( φ^ind(W|_{Ψ(X)}, U1) ∈ · | X ) .    (2.20)

(Recall that {g_i(U)}_{i∈N} is a sequence of i.i.d. U[0, 1] random variables.) Clearly, by Proposition 2.17 and the definition of Φ, we have

Φ(X, U) =^d Φ(W|_{Ψ(X)} + X|_{Ψ(X)^c}, U) =^d ∑_{i∈Z+} φ^fin_{b_i}(W|_{b_i}, g_i(U1)) + φ^ind(X|_{Ψ(X)^c}, U2) .

From equation (2.20) and the fact that X and W are independent, it is easy to verify that

Φ(X, U) =^d φ^ind(W|_{Ψ(X)}, U1) + φ^ind(X|_{Ψ(X)^c}, U2) .    (2.21)

Putting (2.19) and (2.21) together, we obtain that Φ(X, U) =^d φ^ind_{(λ,λ′)}(X, U1). Thus from Lemma 2.20 we have verified that Φ(X, U) is a Poisson point process of intensity λ′. The proof that X − Φ(X, U) is a Poisson point process of intensity λ − λ′ follows by the same argument, since φ^fin is a splitting by Proposition 2.9 and φ^ind is a splitting by Lemma 2.20.

Let us remark that for Corollary 2.21, in order for Φ to be a splitting we must apply the splitting φ^fin in all the globes and not just the special ones. For example, if X is a Poisson point process on a bounded Borel set B, the following procedure will not result in a splitting: apply φ^fin if there are exactly one or two X-points, otherwise apply φ^ind. Before we begin the proof of Theorem 2.1, we first provide a construction of selection rules along with some other minor constructions that will be needed.
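Schematically, the splitting Φ of Corollary 2.21 interleaves the two mechanisms, as in the following sketch (ours, reusing phi_fin from the earlier sketch of φ^fin_A; here globes is assumed to be a list of (centre, points-in-globe) pairs and seeds a supply of independent uniforms standing in for the g_i(U), with one extra seed for the ether coin).

    import random

    def combined_splitting(globes, ether_points, lam, lam_p, F, seeds):
        # apply phi^fin inside every globe and the standard splitting
        # (an independent coin per point) on the ether
        kept = []
        for (centre, pts), u in zip(globes, seeds):
            kept += phi_fin(pts, centre, F, u)   # phi_fin: see the sketch above
        coin = random.Random(seeds[len(globes)])
        for x in ether_points:
            if coin.random() <= lam_p / lam:
                kept.append(x)
        return kept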
2.6  Construction of selection rules

Fix d ≥ 1 and R > 0. We shall now construct an R-selection rule. We need some preliminary definitions. Recall the definition of the shell A(x; s, r) := {y ∈ Rd : s ≤ ‖x − y‖ ≤ r}. Let X be a Poisson point process on Rd with intensity λ > 0 and law Pλ. A point x ∈ Rd is called a pre-seed if B(x, R + 100 + d) has the following two properties:

(a) X(A(x; R + 90 + d, R + 100 + d)) = 0;
(b) for every open ball B of radius 1/2 satisfying B ⊂ A(x; R + 80, R + 90 + d), we have X(B) ≥ 1.

Given µ ∈ M, we also say that x is a pre-seed under µ if (a) and (b) hold with X replaced by µ. If x is a pre-seed, we call A(x; R + 90 + d, R + 100 + d) the associated empty shell and A(x; R + 80, R + 90 + d) the associated halo. Clearly pre-seeds exist Pλ-a.s. An R-selection rule will be defined so that its globes will be balls of radius R contained in B(x, R + 80) for some pre-seed x. See Figure 2.3 for an illustration of a pre-seed.

Observe that if x, y ∈ Rd are pre-seeds, then ‖x − y‖ ∉ (2, 2(R + 88 + d)); otherwise the empty shell of one pre-seed would intersect the halo of the other in such a way as to contradict the definition of a pre-seed. We say that two pre-seeds x, y are related if ‖x − y‖ ≤ 2. This gives an equivalence relation on the pre-seeds.
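Testing whether a given point is a pre-seed against a finite configuration is straightforward, up to one caveat: condition (b) quantifies over uncountably many balls, so any finite test can only approximate it. The following planar (d = 2) Python sketch (ours) checks (a) exactly and (b) on a finite grid of candidate ball centres; the function name and the grid parameter step are arbitrary.

    import math

    def is_pre_seed(x, points, R, d=2, step=0.25):
        lo, hi = R + 90 + d, R + 100 + d
        # (a) the empty shell A(x; R+90+d, R+100+d) contains no X-points
        if any(lo <= math.dist(x, p) <= hi for p in points):
            return False
        if not points:
            return False          # the halo cannot be densely filled
        # (b) every ball B(y, 1/2) inside the halo A(x; R+80, R+90+d) should
        # contain an X-point; test grid centres y with B(y, 1/2) in the halo
        r = R + 80 + 0.5
        while r <= R + 90 + d - 0.5:
            n = max(8, int(2 * math.pi * r / step))
            for k in range(n):
                t = 2 * math.pi * k / n
                y = (x[0] + r * math.cos(t), x[1] + r * math.sin(t))
                if min(math.dist(y, p) for p in points) >= 0.5:
                    return False  # an empty radius-1/2 ball near y
            r += step
        return True

A return value of True is therefore only evidence up to the grid resolution; shrinking step tightens the approximation to condition (b).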
Lemma 2.22. If X is a Poisson point process on Rd with positive intensity and law P, then every equivalence class of pre-seeds under X is an open set with positive finite Lebesgue measure P-a.s.

Figure 2.3: An illustration of a pre-seed. The outer ring (the empty shell) contains no X-points. The intermediate ring (the halo) is relatively densely filled with X-points. The shaded area is unspecified in terms of X.

Proof. Let C be an equivalence class of pre-seeds under X. Let a ∈ C. We shall find an open ball centered at a that is contained in C. Define the distance between two sets A, B in the standard way: ‖A − B‖ := inf_{x∈A, y∈B} ‖x − y‖. Define the following:

δ1 := ‖A(a; R + 90 + d, R + 100 + d) − [X]‖ ,
δ2 := sup{ r : ∃x ( B(x, r) ⊂ A(a; R + 80, R + 90 + d) and X(B(x, r)) = 0 ) } ,
δ := min(δ1, δ2, 1/2 − δ2)/4 .

In words, δ1 is the distance from the empty shell to the nearest X-point. Clearly, δ1 ∈ (0, 1). Also, δ2 is the radius of the largest ball inside the halo that contains no X-point. Clearly δ2 ∈ (0, 1/2). We easily see that for all x ∈ Rd, if ‖x − a‖ < δ, then x ∈ C.

We next choose a representative from each equivalence class in an isometry-equivariant way. Let C be an equivalence class of pre-seeds under µ. If C has positive Lebesgue measure (which happens almost surely for the Poisson point process), then we take c ∈ Rd to be its center of mass and declare that c is a seed. Note that since C is a set of diameter at most 2, it is easy to see that C is contained in some ball of radius 2 that also contains the center of mass, whenever the center of mass exists. Note that c might not be a pre-seed (but it has properties similar to a pre-seed).

If c is a seed (under µ), we call B̄(c, R) a globe (under µ). Define the mapping Ψ_R : M → F by stipulating that for each µ ∈ M, the Borel set Ψ_R(µ) is the union of the set of globes under µ. Given any two seeds, it is easy to see that their globes do not intersect. Thus the definition of a globe given here is consistent with the definition of a globe given in Section 2.5. Next, we show that for R > 0, the mapping Ψ_R is a selection rule, thus proving Proposition 2.16.

Lemma 2.23. Let Ψ = Ψ_R be the mapping defined above. For all µ, µ′ ∈ M, if µ = µ′ on H(µ) := ( ⋃_{x∈Ψ(µ)} B̄(x, 2) )^c, then µ and µ′ have the same pre-seeds.

Proof. Assume that µ and µ′ agree on H(µ). Let z ∈ Rd be a pre-seed under µ. We claim that

µ|_{A(z;R+80,R+100+d)} = µ′|_{A(z;R+80,R+100+d)} ,

from which we deduce that z is also a pre-seed under µ′. Let C(z) be the equivalence class of pre-seeds to which z belongs. Let c be the center of mass of C(z), so that c is a seed under µ. Since c has distance at most 4 from z, and z has distance at least 2(R + 88 + d) from any pre-seed (under µ) not in C(z), we have that c has distance at least 2(R + 88 + d) − 4 from any pre-seed (under µ) not in C(z). Let m > 0 be the minimal distance from c to another seed (under µ). Clearly m ≥ 2(R + 88 + d) − 8. Since µ = µ′ on H(µ), we have that µ|_{A(c;R+2,m−R−2)} = µ′|_{A(c;R+2,m−R−2)}. Since z has distance at most 4 from c, clearly µ|_{A(z;R+80,R+100+d)} = µ′|_{A(z;R+80,R+100+d)}. Therefore, H(µ′) ⊆ H(µ). Since µ = µ′ on H(µ′), our claim, and the result, follow.

Proof of Proposition 2.16. Let R > 0 and d ≥ 1. We shall now check that Ψ = Ψ_R is indeed an R-selection rule.

Property (a): Let P be the law of a Poisson point process on Rd with positive intensity. Note that pre-seeds occur P-a.s. By Lemma 2.22 we have that for P-a.e. µ ∈ M, every equivalence class of pre-seeds under µ has a well-defined center of mass, which in turn is a seed. Therefore, we have that seeds occur P-a.s. and Ψ(µ) ≠ ∅ for P-a.e. µ. Also, by definition, if Ψ(µ) ≠ ∅, then Ψ(µ) is a disjoint union of balls of radius R.

Property (b): Let θ be an isometry of Rd. If x ∈ Rd is a pre-seed under µ, then θ(x) is a pre-seed under θ(µ). Therefore if C is an equivalence class of pre-seeds under µ ∈ M, then θ(C) is an equivalence class of pre-seeds under θ(µ). If c is the center of mass of C, then θ(c) is the center of mass of θ(C). Hence if b is a globe under µ, then θ(b) is a globe under θ(µ).
So clearly, Ψ is isometry-equivariant. Property (c): Let µ, µ ∈ M and assume that µ = µ on H(µ). By Lemma 2.23, µ and µ have the same pre-seeds. Thus, they have the same seeds and hence the same globes. Therefore by the definition of Ψ, we have Ψ(µ) = Ψ(µ ).  2.7  Encoding and distributing randomness  Unfortunately, our proofs of Theorems 2.1 and 2.3 do not follow from Proposition 2.17 alone. Recall that in Examples 2.1 and 2.2 we partitioned Rd into cubes, and the cubes that contained exactly one or two Poisson points were special. The locations of the Poisson points in a special cube were converted into sequences of i.i.d. U[0, 1] random variables whose elements were then assigned to the other cubes of the partition. The purpose of this section is to state a lemma that asserts the existence of a function that encapsulates the task of encoding and distributing randomness in the more complicated case where a deterministic partition is replaced by the selection rule from Section 2.6, and Example 2.2 is replaced by Theorem 2.1. Let Ψ be a selection rule. We say that a globe under µ is one-special if it happens to contain exactly one µ-point, and two-special if it happens to contain exactly two µ-points. A globe is special if it is either one-special or two-special. Denote the set of one-special globes by Globes1 [Ψ(µ)], the set of two-special globes by Globes2 [Ψ(µ)] and the set of special globes by Globes1,2 [Ψ(µ)]. Also let Ψ1 (µ), Ψ2 (µ), and Ψ1,2 (µ) denote the union of the set of one-special, two-special and special globes respectively. Let (Ψ1 (µ))c , (Ψ2 (µ))c , and (Ψ1,2 (µ))c denote the respective complements in Rd . Note that by Proposition 2.17, if X is a Poisson point process on Rd with positive intensity and law P , then one-special globes and two-special globes exist under X P -a.s. Lemma 2.24 (Assignment function). Let d ≥ 1 and R > 0. Let Ψ = ΨR be the selection rule from Section 2.6. There exists a function U = U Ψ : M × (F ∪ Rd ) → [0, 1] with the following properties.  (a) Let X be a Poisson point process on Rd with positive intensity. Let {κ(X)i }i∈N := Globes[Ψ(X)] ∪ [X|Ψ(X)c ], where we have ordered the 40  2.8. Proof of Theorem 2.1 set using the radial ordering. (Recall that globes are ordered by their centers.) If {Ui }i∈N is a sequence of i.i.d. U[0, 1] random variables that is independent of X, then X|(Ψ1 (X))c , Ψ1 (X), Ψ(X), {U (X, κ(X)i )}i∈N d  =  X|(Ψ1 (X))c , Ψ1 (X), Ψ(X), {Ui }i∈N  (2.22)  and X|(Ψ2 (X))c , Ψ2 (X), Ψ(X), {U (X, κ(X)i )}i∈N d  =  X|(Ψ2 (X))c , Ψ2 (X), Ψ(X), {Ui }i∈N .  (2.23)  (b) The map U is isometry-invariant; that is, for all isometries θ of Rd and for all (µ, b) ∈ M × (F ∪ Rd ), we have U (µ, b) = U (θ(µ), θ(b)). We call U Ψ the assignment function for the selection rule Ψ. Thus if X is a Poisson point process and b ∈ Globes[Ψ(X)] or if b ∈ [X|Ψ(X)c ], then the assignment function assigns a U[0, 1] random variable U (X, b) to b. Property (a) states that the U[0, 1] random variables have a certain independence property; the values of X on both the one-special and twospecial globes are needed to determine the values of the assignment function. The map that we shall define in the next section to prove Theorem 2.1 will use U to assign U[0, 1] random variables to the globes and the points of the ether. We shall see that property (a) makes proving Theorem 2.1 easy. Property (b) is necessary to ensure that the map that we define is isometryequivariant. 
Let us also remark that since by property (c) in the definition of a selection rule, Ψ(X) depends only on X|Ψ(X)c ⊂ X|(Ψ1,2 (X))c , therefore the addition of Ψ(X) in (2.22) and (2.23) is actually redundant. We now have all the tools we need to prove Theorem 2.1. We defer the proof of Lemma 2.24 to Section 2.9. Much of the proof is bookkeeping, but for property (a) we shall need to appeal to Proposition 2.17.  2.8  Proof of Theorem 2.1  We are now in a position to prove Theorem 2.1. First we give the definition of the mapping that satisfies the conditions of Theorem 2.1. Let X be a Poisson point process on Rd with intensity λ and let λ < λ. Recall the 41  2.8. Proof of Theorem 2.1 definition of the splitting φfin from Section 2.2 (Proposition 2.9) and the definitions of F coin and φind from (2.17) and (2.18) of Section 2.6. Let R = R(λ, λ ) > 0 be so that the Lebesgue measure of B(0, R) is larger than the constant K(λ, λ ) of Proposition 2.9. Let Ψ = ΨR be the R-selection rule from Section 2.6 and let U be the assignment function from Lemma 2.24. Define Γ = Γ(λ,λ ) as follows. For all µ ∈ M, φfin (b,λ,λ ) (µ|b , U (µ, b)) +  Γ(µ) :=  coin F(λ,λ ) (x, U (µ, x)) . x∈[µ|Ψ(µ)c ]  b∈Globes[Ψ(µ)]  (2.24) Proof of Theorem 2.1. From the definition of Γ it is easy to check that it is isometry-equivariant; we need only recall that by Lemma 2.24, the assignment function U is isometry invariant and that the splitting φfin and selection rule Ψ are isometry-equivariant. Also it obvious that Γ is monotone, so it suffices to check that Γ(X) and X − Γ(X) are Poisson point processes on Rd with intensities λ and λ − λ respectively. Let U be a U[0, 1] random variable independent of X and let gi : [0, 1] → [0, 1] be the functions from Lemma 2.6. Thus {gi (U )}i∈N is a sequence of i.i.d. U[0,1] random variables. Let {bi }i∈Z+ = Globes[Ψ(X)], where we have ordered the globes via the radial ordering. Similarly, let {xi }i∈Z+ = [X|Ψ(X)c ]. Let Φ be the splitting defined in Corollary 2.21. Note that Φ is a version of Γ that uses randomness from U instead of from certain points of X. By property (d) of Proposition 2.9 we have that 1[X(bi )=1] φfin bi X|bi , U (X, gi (U ))  Φ(X, U ) =  +  (2.25)  i∈Z+  φind (X|Ψ(X)c , g0 (U )) and X − Φ(X, U ) =  i∈Z+  1[X(bi )=2] X|bi − φfin bi X|bi , U (X, gi (U )) X|Ψ(X)c − φind (X|Ψ(X)c , g0 (U )) .  + (2.26)  42  2.8. Proof of Theorem 2.1 d  d  We shall show that Γ(X) = Φ(X, U ) and X − Γ(X) = X − Φ(X, U ). Set φfin bi (X|bi , U (X, bi )) ,  α := i∈Z+  F coin (xi , U (X, xi )) ,  β := i∈Z+  α  := i∈Z+  X|bi − φfin bi (X|bi , U (X, bi )) ,  and β  := X|Ψ(X)c −  F coin (xi , U (X, xi )) . i∈Z+  By definition, Γ(X) = α + β  and  X − Γ(X) = α + β .  By property (d) of Proposition 2.9, 1[X(bi )=1] φfin bi (X|bi , U (X, bi ))  α = i∈Z+  and α  = i∈Z+  1[X(bi )=2] X|bi − φfin bi (X|bi , U (X, bi )) .  By property (a) of Lemma 2.24, we have that d  1[X(bi )=1] φfin bi (X|bi , g2i (U )) +  α+β = i∈Z+  and α +β  d  = i∈Z+  F coin (xi , g2i+1 (U )) i∈Z+  1[X(bi )=2] X|bi − φfin bi (X|bi , g2i (U )) + X|Ψ(X)c −  F coin (xi , g2i+1 (U )) . i∈Z+  Thus, from the definition of Φ in Corollary 2.21, it is easy to see that d  α + β = Φ(X, U ) and α +β  d  = X − Φ(X, U ) .  Hence by Corollary 2.21, we have that Γ(X) and X − Γ(X) are Poisson point processes with intensities λ and λ − λ respectively. 43  2.9. The assignment function  2.9  The assignment function  In this section we shall prove Lemma 2.24. Many of the same tools will be useful again in the proof of Theorem 2.3. 
Recall that the assignment function contained within it the two tasks of generating and distributing uniform random variables. First we discuss how we generate uniform random variables. The following lemma describes explicitly how we convert the position of a single X-point in a ball (which is a uniform random variable on the ball) into a single uniform random variable on [0, 1]. We need to be explicit to preserve equivariance. Lemma 2.25 (Uniform random variables). For every d ≥ 1 and R > 0, ¯ R) → [0, 1] via : B(c, define fB(c,R) ¯ (x) := fB(c,R) ¯ The collection of mappings  fB(c,R) ¯  d  x−c R c∈Rd  .  has the following properties.  ¯ R)] random variable, then f ¯ 1. If V is a U[B(c, B(c,R) (V ) is a U[0, 1] random variable. 2. We have isometry-invariance; that is, for any isometry θ of Rd , ¯ R). (x) = fθ(B(c,R)) (θ(x)) for all x ∈ B(c, fB(c,R) ¯ ¯ Proof. Here we shall make good use of the fact that we are working with balls. Recall that the Lebesgue measure of a d-ball of radius R is given by C(d)Rd for some fixed constant C(d) > 0 depending only on d. Let V be a ¯ R). Then for 0 ≤ x ≤ 1, uniform random variable on the ball B(c, 1  (V ) ≤ x = P P fB(c,R) ¯  V − c ≤ Rx  1 d  ¯ d )) L(B(Rx = = x. ¯ L(B(R))  Each globe or X-point not in a globe will be associated to a one-special globe and to a two-special globe. It will be necessary to allow more than one globe or X-point to be associated to each special globe. First we need to develop some infrastructure. Recall that by Lemma 2.6 a single uniform random variable can be used to generate a sequence of i.i.d. U[0, 1] random variables.  44  2.9. The assignment function Encoding functions. We associate to every special globe a [0, 1]-valued and {gi }i∈N be the colsequence in the following way. Let fB(c,R) ¯ d c∈R  lections of functions from Lemmas 2.25 and 2.6 respectively. Let Ψ be an R-selection rule. For each b ∈ Globes1 [Ψ(µ)], let xb denote the unique µ-point in b and for each b ∈ Globes2 [Ψ(µ)], let x1b and x2b be the two µ-points in b, where we take x1b to be the one closest to the origin in a lexicographic ordering. Recall that ⊕ denotes addition modulo one. Let h = h Ψ : M × F → [0, 1] and h = hΨ : M × F → [0, 1]N be defined as follows:   if µ ∈ M, b ∈ Globes1 [Ψ(µ)], fb (xb ) h Ψ (µ, b) := fb (x1b ) ⊕ fb (x2b ) if µ ∈ M, b ∈ Globes2 [Ψ(µ)],(2.27)   0 if µ ∈ M, b ∈ Globes1,2 [Ψ(µ)]  and  hΨ (µ, b) := gi (h (µ, b))  i∈N  .  (2.28)  We call hΨ the encoding function for the selection rule Ψ and we call h Ψ the simplified encoding function for the selection rule Ψ. Lemma 2.26. Let d ≥ 1 and R > 0. Let Ψ be an R-selection rule. Both the encoding and the simplified encoding functions h, h satisfy the following properties. (a) The maps h, h are isometry-invariant; that is, for all isometries θ of Rd and for all (µ, b) ∈ M × F, h(µ, b) = h(θ(µ), θ(b)) and h (µ, b) = h (θ(µ), θ(b)). (b) Let X be a Poisson point process on Rd with positive intensity. Let b1i i∈N := Globes1 [Ψ(X)] and b2i i∈N := Globes2 [Ψ(X)], where we have ordered the sets of one-special and two-special globes by the radial ordering. If {Ui }i∈N is a sequence of i.i.d. U[0, 1] random variables that is independent of X, then X|(Ψ1 (X))c , Ψ1 (X), h (X, b1i )  d  i∈N  = X|(Ψ1 (X))c , Ψ1 (X), {Ui }i∈N  and X|(Ψ2 (X))c , Ψ2 (X), h (X, b2i )  d  i∈N  = X|(Ψ2 (X))c , Ψ2 (X), {Ui }i∈N .  Similarly, if {Ui }i∈N is a sequence of i.i.d. 
random variables, independ  dent of X, where U1 = {Ui }i∈N , then X|(Ψ1 (X))c , Ψ1 (X), h(X, b1i )  d  i∈N  = X|(Ψ1 (X))c , Ψ1 (X), Ui  i∈N  45  2.9. The assignment function and X|(Ψ2 (X))c , Ψ2 (X), {h(X, bi )}i∈N  d  = X|(Ψ2 (X))c , Ψ2 (X), Ui  i∈N  .  Proof of Lemma 2.26. The proof of property (a) follows immediately from the definition of an encoding function, property (b) of a selection rule, and Lemma 2.25. We now focus our attention on property (b). From the definition of h and the fact that the gi satisfy the conditions of Lemma 2.6, it suffices to verify the condition for the simplified encoding function h . We need some additional notation. Let {bi }i∈N = Globes[Ψ(X)], where we have ordered the globes via the radial ordering. Let ci ∈ bi be the centers of the globes. Also let c1i ∈ b1i and c2i ∈ b2i be the centers of the one-special and two-special globes. Let Vi1 be the unique X-point in each b1i . Similarly, let Vi2 be the set of unordered X-points of each b2i . Let θy be the isometry of Rd such that for all x ∈ Rd , we have θy (x) = x + y. Assume X has intensity λ. (X|bi ) i∈N is a sequence of It follows from Proposition 2.17 that θc−1 i ¯ i.i.d. Poisson point processes on B(0, R) with intensity λ; furthermore, the sequence is independent of X|Ψ(X)c , Ψ(X) . Hence by Lemma 2.17, ¯ R)] random variables that is inVi1 − c1i i∈N is a sequence of i.i.d. U[B(0,  2 dependent of X|(Ψ1 (X))c , Ψ1 (X) . Similarly, we have that θc−1 is 2 Vi i i∈N ¯ R)] random variables that is a sequence of i.i.d. pairs of unordered U[B(0, 2 independent of X|(Ψ2 (X))c , Ψ (X) . By the definition of h and Lemmas 2.25 and 2.10, the result follows immediately.  We turn now to the task of distributing randomness. A natural approach is to have each non-special globe request randomness from the closest available special globe (where distances are measured between the centers of the globes). However, we do not know much about the process of globecenters. In particular, it is not immediately obvious that it has distinct inter-point distances P -a.s. To avoid this problem, we shall make use of some of the other properties of seeds. Recall that if x is a pre-seed, we call A(x; R + 80, R + 90 + d) the halo. If x is a seed, then we shall also call A(x; R + 80, R + 90 + d) the halo. We shall associate to every globe a point in its halo in an equivariant way. Tags. Let Ψ be a selection rule from Section 2.6 and let the inter-point distances of µ ∈ M be distinct. For each globe under µ, we choose a point in its halo in the following isometry-equivariant way. First note that the halo 46  2.9. The assignment function contains more than three µ-points. Take the two mutually closest points in the halo, then choose the one of this pair that is closest to the other points in the halo. We call this point the tag of the globe. We note that by Lemma 2.15, part (b), tags are well defined and exist for every globe P -a.s. For completeness, if the inter-point distances in the halo are not distinct, we take the tag to be the center of the globe. Let t = tΨ : M × F → Rd ∪ {∞} be the measurable function defined as follows:   the tag of b if µ ∈ M, b ∈ Globes[Ψ(µ)], tΨ (µ, b) := x if µ ∈ M, b = {x} , x ∈ [µ|Ψ(µ)c ], (2.29)   ∞ otherwise.  We call tΨ the tagging function for the selection rule Ψ.  Lemma 2.27. Let d ≥ 1 and R > 0. Let Ψ = ΨR be the selection rule from Section 2.6. The tagging function t = tΨ : M × F → Rd ∪ {∞} has the following properties. 1. 
The map t depends only on (Ψ(µ), µ Ψ(µ)c ); that is, for all µ, µ ∈ M if (Ψ(µ), µ Ψ(µ)c ) = (Ψ(µ ), µ Ψ(µ )c ), then t(µ, ·) = t(µ , ·). 2. The map t is isometry-equivariant; that is, for all isometries θ of Rd and for all (µ, b) ∈ M × F, θ(t(µ, b)) = t(θ(µ), θ(b)). Here we take θ(∞) = ∞. Proof. The result follows immediately from the definition of the tagging function. Partners and Ranks. Let Ψ be a selection rule from Section 2.6. We shall now measure distances between globes, as well as distances between globes and µ-points, via the distances between their tags. Let the interpoint distances of µ ∈ M be distinct and also assume that Globes1 [Ψ(µ)] and Globes2 [Ψ(µ)] are both non-empty. For each globe b ∈ Globes[Ψ(µ)] we call its closest one-special globe its one-partner (if unique), and its closest two-special globe its two-partner (if unique). Similarly, for each x ∈ [µ|Ψ(µ)c ] we call its closest one-special globe its one-partner and its closest two-special globe its two-partner. Suppose that a globe b has a special globe B ∈ Globes1,2 [Ψ(µ)] as a partner; then B assigns the number 2n to b if there are exactly n globes with B as partner that are closer to B than b. We call the number that b is assigned by its one-partner its onerank and the number that b is assigned by its two-partner its two-rank. 47  2.9. The assignment function Similarly, a special globe B ∈ Globes1,2 [Ψ(µ)] assigns the number 2n + 1 to x if it is a partner of x ∈ [µ|Ψ(µ)c ] and there are exactly n partners in [µ] that are closer to B than x; we also call the number that x is assigned its one-rank or two-rank depending on whether it is assigned by its one- or two-partner. Let M be the set of point measures of M that have both oneand two-special globes and have distinct inter-point distances. We define p = pΨ : M × (F ∪ Rd ) → F × F as follows: pΨ (µ, b) := (one-partner of b, two-partner of b) if µ ∈ M and b ∈ Globes[Ψ(µ)] ∪ [µ|Ψ(µ)c ] and pΨ (µ, b) := (b, b) otherwise. We also define r = rΨ : M × (F ∪ Rd ) → N × N as follows: rΨ (µ, b) := (one-rank of b, two-rank of b) if µ ∈ M and b ∈ Globes[Ψ(µ)] ∪ [µ|Ψ(µ)c ] and rΨ (µ, b) := (0, 0) otherwise. We call pΨ the partner function for the selection rule Ψ and we call rΨ the rank function for Ψ. Also let χ(µ) := (Ψ(µ), Ψ1 (µ), Ψ2 (µ), µ|Ψ(µ)c ) for all µ ∈ M.  Lemma 2.28. Let d ≥ 1 and R > 0. Let Ψ be the selection rule from Section 2.6. The partner and rank functions p = pΨ and r = rΨ have the following properties. 1. The maps p, r depend only on χ(µ); that is, for all µ, µ ∈ M if χ(µ) = χ(µ ), then p(µ, ·) = p(µ , ·) and r(µ, ·) = r(µ , ·) 2. The map p is isometry-equivariant; that is, for all isometries θ of Rd and for all (µ, b) ∈ M × F, θ(p(µ, b)) = p(θ(µ), θ(b)). 3. The map r is isometry-invariant; that is, for all isometries θ of Rd and for all (µ, b) ∈ M × F, r(µ, b) = r(θ(µ), θ(b)). Proof. The result follows immediately from the definitions of the partner and rank functions and Lemma 2.27. Assignment functions. We shall now combine the encoding, partner and rank functions to obtain an assignment function. Let Ψ be a selection rule from Section 2.6. Define U = UΨ : M × (F ∪ Rd ) → [0, 1] as follows. Let h = hΨ , p = pΨ , and r = rΨ be the encoding, partner and rank functions. Recall that h : M × F → [0, 1]N . For all (µ, b) ∈ M × (F ∪ Rd ), let U (µ, b) := h(µ, p(µ, b)1 )r(µ,b)1 ⊕ h(µ, p(µ, b)2 )r(µ,b)2 .  (2.30) 48  2.10. Proof of Theorem 2.3 Proof of Lemma 2.24. 
The isometry-invariance of U follows immediately from the definition of U and Lemmas 2.26 and 2.28. Let X be a Poisson point process in Rd . Let {κi }i∈N := Globes[Ψ(X)] ∪ [X|Ψ(X)c ], b1i i∈N := Globes1 [Ψ(X)] and let b2i i∈N := Globes2 [Ψ(X)], where we have ordered the sets using the radial ordering. Let {Ui }i∈N be a sequence of i.i.d. U[0, 1] random variables independent of X. Let {Ui }i∈N be an i.i.d. sequence (independent of X), where U1 is a sequence of i.i.d. U[0, 1] random variables. From Lemma 2.28, p(X, ·) and r(X, ·) depend only on χ(X). It is clear that both χ(X) and h(X, b2i ) depend only on (X|(Ψ1 (X))c , Ψ1 (X)), so that by Lemma 2.26 X|(Ψ1 (X))c ,Ψ1 (X), χ(X), h(X, b1i )  i∈N  , h(X, b2i )  X|(Ψ1 (X))c , Ψ1 (X), χ(X), Ui  i∈N  d  i∈N  =  , h(X, b2i )  i∈N  . (2.31)  From the definition of the assignment function, it is clear that U (X, ·) depends only on χ(X), h(X, b1i )  i∈N  , h(X, b2i )  i∈N  .  It is also easy to see that χ(X) depends only on (X|(Ψ1,2 (X))c , Ψ1 (X), Ψ2 (X)). Thus from the definition of the assignment function, (2.31) and Lemma 2.10, it follows that X|(Ψ1 (X))c , Ψ1 (X), {U (X, κi )}i∈N  d  = X|(Ψ1 (X))c , Ψ1 (X), {Ui }i∈N .  Similarly, we have that X|(Ψ2 (X))c ,Ψ2 (X), χ(X), h(X, b1i )  i∈N  , h(X, b2i )  X|(Ψ2 (X))c , Ψ2 (X), χ(X), h(X, b1i )  d  i∈N  i∈N  , Ui  = i∈N  , (2.32)  from which it follows that X|(Ψ2 (X))c , Ψ2 (X), {U (X, κi )}i∈N  2.10  d  = X|(Ψ2 (X))c , Ψ2 (X), {Ui }i∈N .  Proof of Theorem 2.3  In this section, we shall show how the tools used to prove Theorem 2.1 can be adapted to prove Theorem 2.3. As a first step we prove a translationequivariant version of Theorem 2.3. That is, given λ > 0, we define a 49  2.10. Proof of Theorem 2.3 translation-equivariant map Φ : M → M such that if X is a Poisson process on Rd of positive intensity, then Φ (X) is a Poisson process of intensity λ . By modifying the map Φ we shall obtain a map Υ that is isometry-equivariant and satisfies the conditions of Theorem 2.3. We need some preliminary definitions before we can give the definition of Φ . Voronoi Cells. The Voronoi tessellation of a simple point measure µ ∈ M is a partition of Rd defined in the following way. The Voronoi cell of a point x ∈ [µ] is the set of all points y ∈ Rd such that x− y < z − y for all z ∈ [µ] \ {x}. The unclaimed points are the points that do not belong to a cell. We define the Voronoi tessellation V(µ) to be the set of all Voronoi cells along with the set of unclaimed points. Note that if µ is locally finite and not identically zero, then the set of unclaimed points has zero Lebesgue measure. Note that the Voronoi tessellation is clearly isometry-equivariant; that is, for any isometry θ of Rd we have V(θµ) = θV(µ) := {θυ : υ ∈ V(µ)}. For each A ∈ B with positive finite Lebesgue measure, let cA be its center of mass. Let Ψ be a selection rule and define c = cΨ : M → M via c(µ) :=  δcb .  (2.33)  b∈Globes[Ψ(µ)]  Note that c is also isometry-equivariant. The Voronoi tessellation induced by the center of the globes is given by V(c(µ)). The map Φ will be defined by placing independent Poisson point processes in each Voronoi cell of V(c(µ)). Let θy be the isometry of Rd such that for all x ∈ Rd , we have θy (x) = x + y. We define Φ in the following way. Let Ψ be the R-selection rule from Section 2.6, where R will be chosen later. Let φP be the collection of mappings from Lemma 2.7. Let U be the assignment function from Lemma 2.24. The map Φ = Φλ is defined via 1[b⊂v] θcυ φP (U (µ, b)) . 
(θ −1 (υ),λ )  Φ (µ) := υ∈V(c(µ)) b∈Globes[Ψ(µ)]  cυ  (2.34)  Proposition 2.29. The map Φ has the following properties. (a) The map Φ is translation-equivariant. (b) If X is a Poisson process on Rd with positive intensity, then Φ (X) is a Poisson process on Rd with intensity λ .  50  2.10. Proof of Theorem 2.3 Proof. Part (a) follows from the fact that the assignment function is isometry-invariant and that the selection rule, Voronoi tessellation and the map c are all isometry-equivariant. From the definition of Φ , one can verify that it is translation-equivariant since any two translations of Rd commute with each other. However, since translations and reflections do not necessarily commute, we have only translation-equivariance. Part (b) follows from Lemma 2.7 and Lemma 2.24 once we note that the Voronoi tessellation and the centers of the globes and Voronoi cells depend only on Ψ(X). The following example elaborates on the difficulty with reflections and proving Theorem 2.3. Example 2.3. Let λ > 0. Let B ∗ ⊂ B be the set of Borel sets with positive finite Lebesgue measure. There does not exist a family of measurable functions φp such that for each A ∈ B ∗ , φpA : [0, 1] → M has the following properties. 1. If U is a U[0, 1] variable, then φpA (U ) is Poisson point process on A with intensity λ . 2. The map φp is isometry-equivariant; that is, for all isometries θ of Rd , φpθA (U ) = θφpA (U ). Proof. Towards a contradiction, let φp satisfy the above properties. For each x ∈ Rd , let xi be the ith coordinate. Consider A := B(0, 1), the unit ball centered at the origin, and let A := {x ∈ A : x1 > 0}. Let θ be the reflection of the first coordinate; that is, if y = (y1 , . . . , yd ) for some yi ∈ R, then θ(y) = (−y1 , y2 , . . . , yd ). Let U be a U[0, 1] random variable. The event E := φpA (U )(A) = φpA (U )(A ) = 1 occurs with non-zero probability. However, θ(A) = A, so that whenever E occurs, φpθA (U ) = θφpA (U ). Note that in the proof of Example 2.3, the counterexample used a set A that is invariant under rotations and reflections. One would guess that the Voronoi cells of a random process such as the centers of the special globes should lack such symmetries. However, rather than dealing with the symmetries of the Voronoi cells, we proceed as follows. Let X be a Poisson process on Rd with positive intensity and let Ψ be a selection rule from Section 2.6. Let b ∈ Globes[Ψ(X)] and for simplicity assume that the center is at the origin. From the definition of a globe, there will always be at least d points in the halo of a globe. We shall choose d points from the halo and use them to associate an isometry to the globe. Given d points {x1 , . . . , xd } in the halo of b, we shall define an isometry θ with the following properties. 51  2.10. Proof of Theorem 2.3 1. We have θ(0) = 0 ∈ Rd . 2. For all i, j such that 1 ≤ i < j ≤ d, we have θ(xi )j = 0 ∈ R; that is, the j th coordinate of θ(xi ) ∈ Rd is zero for j > i. 3. For all i such that 1 ≤ i ≤ d, we have θ(xi )i > 0. Selecting d points from the halo of a globe is an easy extension of the idea of a tag of a globe. Also to prove that such an isometry exists and is unique, we appeal to the tools of linear algebra, in particular the QR factorization lemma. Notations and Conventions. To use the tools of linear algebra, it will be convenient to identify elements of Rd with column vectors; that is, Rd = Rd×1 . 
Given an isometry θ of Rd and a matrix A ∈ Rd×d , we let θ(A) ∈ Rd×d be the matrix obtained by applying θ to each of the columns of A. We also denote the identity matrix by I ∈ Rd×d . d-Tags. Let Ψ be a selection rule from Section 2.6 and let the inter-point distances of µ ∈ M be distinct. The d-tag of a globe b ∈ Globes[Ψ(µ)] is a matrix A ∈ Rd×d defined inductively as follows. The first column of the matrix is the tag of b. Given that the (i − 1)th column is already defined, the ith column is the µ-point in the halo of b that is closest to the (i − 1)th column and is not equal to any of the first i − 1 columns. For completeness, if the inter-point distances in the halo are not distinct, we take the d-tag to be the matrix in Rd×d where each column vector is the center of the globe. Let t¯ = t¯Ψ : M × F → Rd×d ∪ {∞} be the measurable function defined as follows: t¯Ψ (µ, b) :=  the d-tag of b ∞  if µ ∈ M and b ∈ Globes[Ψ(µ)], (2.35) otherwise.  We call t¯Ψ the d-tagging function for the selection rule Ψ. Lemma 2.30. Let d ≥ 1 and R > 0. Let Ψ = ΨR be the selection rule from Section 2.6. The d-tagging function t¯ = t¯Ψ : M × F → Rd×d ∪ {∞} has the following properties. 1. The map t¯ depends only on (Ψ(µ), µ Ψ(µ)c ); that is, for all µ, µ ∈ M, if (Ψ(µ), µ Ψ(µ)c ) = (Ψ(µ ), µ Ψ(µ )c ), then t¯(µ, ·) = t¯(µ , ·). 52  2.10. Proof of Theorem 2.3 2. The map t¯ is isometry-equivariant; that is, for all isometries θ of Rd and for all (µ, b) ∈ M × F, we have θ(t¯(µ, b)) = t¯(θ(µ), θ(b)). We take θ(∞) = ∞. Proof. The result follows immediately from the definition of the d-tagging function. We note that the d-tag of a globe is almost always a non-singular matrix by Lemma 2.15 (c). The following lemma allows us to associate an isometry to each globe and its d-tag. Recall that every isometry θ of Rd that fixes the origin can be identified with an unique orthogonal matrix Q ∈ Rd×d ; that is, there is an unique matrix Q such that QQT = QT Q = I ∈ Rd×d and Qx = θ(x) for all x ∈ Rd = Rd×1 . For background, see [21, Ch. 1]. Lemma 2.31 (QR factorization). For all d ≥ 1, if A ∈ Rd×d is a square matrix, then there exists an orthogonal matrix Q ∈ Rd×d and an upper triangular matrix ∆ ∈ Rd×d such that A = Q∆. Furthermore, if A is nonsingular, then the factorization is unique if we require the diagonal entries of ∆ to be positive. For a proof, see, for example, [10, 2.6]. Upper triangular matrices and fixing isometries. Let Ψ be a selection rule from Section 2.6 and let b ∈ Globes[Ψ(µ)]. The upper triangular matrix for b is the matrix ∆ ∈ Rd×d defined as follows. Let cb be the center of the globe b. Let A ∈ Rd×d be the d-tag for the globe b. Let A := A − cb I. If A is singular, then we take ∆ = 0 ∈ Rd×d . Otherwise, by Lemma 2.31, there exists a unique factorization such that A = Q∆, where ∆ ∈ Rd×d is an orthogonal matrix and ∆ ∈ Rd×d is an upper triangular matrix such that all its diagonal entries are positive. When A is non-singular, we say that the unique isometry σ such that σ(cb ) = 0 ∈ Rd and σ(A ) = ∆ is the fixing isometry for the globe b. Let ∆ = ∆Ψ : M × F → Rd×d be the measurable function defined as follows: ∆Ψ (µ, b) := the upper triangular matrix for b if µ ∈ M and b ∈ Globes[Ψ(µ)], while ∆Ψ (µ, b) := I ∈ Rd×d otherwise. Let d 0 ∈ (Rd )R be the function that sends every element of Rd to 0 ∈ Rd . The d fixing isometry function σ = σΨ : M × F → (Rd )R for the selection rule Ψ is defined as follows: σ Ψ (µ, b) := the fixing isometry for the globe b 53  2.10. 
Proof of Theorem 2.3 if µ ∈ M, b ∈ Globes[Ψ(µ)], and the d-tag of b is non-singular, while d σ Ψ (µ, b) := 0 ∈ (Rd )R otherwise.  Lemma 2.32. Let d ≥ 1 and R > 0. Let Ψ = ΨR be the selection rule from Section 2.6. The map ∆ = ∆Ψ : M × F → Rd×d and the fixing isometry d function σ = σ Ψ : M × F → (Rd )R have the following properties. 1. The maps ∆ and σ depend only on (Ψ(µ), µ µ, µ ∈ M if (Ψ(µ), µ Ψ(µ)c ) = (Ψ(µ ), µ ∆(µ , ·) and σ(µ, ·) = σ(µ , ·).  Ψ(µ)c  Ψ(µ )c  ); that is, for all  ), then ∆(µ, ·) =  2. The map ∆ is isometry-invariant; that is, for all isometries θ of Rd and for all (µ, b) ∈ M × F, we have ∆(µ, b) = ∆(θ(µ), θ(b)).  Proof. The first property follows immediately from the definitions of the maps and Lemma 2.30. We prove the second property in the following way. Let θ be an isometry of Rd . Let A ∈ Rd×d be a non-singular matrix, let a ∈ Rd , and set A := A − a I, where I is the identity matrix. Let A = Q∆ be the unique QR factorization of A, where all the diagonal entries of ∆ are positive. From the definition of the upper triangular matrix for a globe, it suffices to show that for some orthogonal matrix Q , we have θ(A ) − θ(a )I = Q ∆ .  Note that there exists an orthogonal matrix Q and c ∈ Rd such that for all x ∈ Rd = Rd×1 , we have θ(x) = Q x + c. Observe that θ(A ) − θ(a )I = Q A + cI − (Q a + cI) = Q (A − a I) =QA  = (Q Q)∆ . We are now ready to give the definition of the mapping that satisfies the conditions of Theorem 2.3. Let X and Y be Poisson processes on Rd with positive intensities λ, λ > 0. Set R = 1 and let Ψ be the R-selection rule d from Section 2.6 with R = 1. Let σ : M × F → (Rd )R be the fixing isometry function for Ψ, let φP be a collection of functions from Lemma 2.7, and let U be the assignment function from Lemma 2.24. Define Υ = Υλ : M → M as 1[b⊂v] 1[σ(µ,b)=0] σ(µ, b)−1 φP (σ(µ,b)(υ),λ ) (U (µ, b))  Υ(µ) :=  υ∈V(c(µ)) b∈Globes[Ψ(µ)]  (2.36) for all µ ∈ M. 54  2.11. Proof of Theorem 2.5 Proof of Theorem 2.3. From the definition of Υ it is almost immediate that it is isometry-equivariant. It suffices to check the following claim. Let υ ∈ B, let b ∈ Globes[Ψ(µ)], and let θ be any isometry of Rd . We claim that for all µ ∈ M, σ(θµ, θb)−1 φP σ(θµ,θb)(θυ) U [θµ, θb]  . = θ σ(µ, b)−1 φP σ(µ,b)(υ)) (U [µ, b]) (2.37) To check (2.37), observe that by Lemma 2.32 and the definition of the fixing isometry function, σ(θµ, θb) = σ(µ, b) ◦ θ −1 .  Hence, σ(θµ, θb)−1 = θ ◦ σ(µ, b)−1 and σ(θµ, θb)(θυ) = σ(µ, b)(υ). In addition, by Lemma 2.24 (part (b)), U (µ, b) = U (θµ, θb), whence P φP σ(θµ,θb)(θυ) (U (θµ, θb)) = φσ(µ,b)(υ) (U (µ, b)) .  Thus, (2.37) holds. Let X and Y be Poisson point processes on Rd with intensities λ, λ > 0. d Again it follows from Lemma 2.7 and Lemma 2.24 that Υ(X) = Y . We need only note the following: the Voronoi tessellation and the centers of the globes and Voronoi cells depend only on Ψ(X) (as in the case of Φ from Proposition 2.29) and from Lemma 2.32, the fixing isometry function σ also depends only on (Ψ(X), X|Ψ(X)c ).  2.11  Proof of Theorem 2.5  In this section, we shall prove Theorem 2.5 by showing that the map Γ defined in (2.24) and used to prove Theorem 2.1 and the map Υ defined in (2.36) and used to prove Theorem 2.3 are both strongly finitary. We shall prove the following stronger result from which Theorem 2.5 follows immediately. Theorem 2.33. Let Γ and Υ be the maps defined in (2.24) and (2.36) respectively. 
There exists a map T : M → N ∪ {∞} such that if X is a Poisson point process on Rd with positive intensity, then ET (X) is finite and for all µ, µ ∈ M such that T (µ), T (µ ) < ∞ and µ|B(0,T ) = µ |B(0,T ) , we have Γ(µ)|B(0,1) = Γ(µ )|B(0,1)  and  Υ(µ)|B(0,1) = Υ(µ )|B(0,1) .  55  2.11. Proof of Theorem 2.5 Let P be the law of X. Since ET (X) < ∞ implies that T (X) is finite P -a.s., Theorem 2.33 implies that Γ and Υ are both strongly finitary with respect to P . We shall require the following additional property that the selection rules defined in Section 2.6 satisfy. Lemma 2.34. Let ΨR be a selection rule from Section 2.6. For any z ∈ Rd ¯ R) is a globe under µ, then whenever µ|B(z,R+120+d) = any µ, µ ∈ M, if B(z, ¯ R) is also a globe under µ . µ |B(z,R+120+d) , we have that B(z, Lemma 2.34 is a localized version of property (c) in the definition of a selection rule. We omit the proof of Lemma 2.34, which uses the definition of pre-seeds and seeds and is similar to that of Lemma 2.23. Proof of Theorem 2.33. Let Ψ = ΨR be the R-selection rule from Section 2.6 that is used to define the map Γ = Γ(λ,λ ) . Recall that we use the R-selection rule with R = 1 to define the map Υ. We now work towards a definition of T . Fix r := 100(R + 101 + d). Let {Ci }i∈Zd be an indexed partition of Rd into equal-sized cubes of side length r such that Ci is centered at ir. For all i ∈ Zd , let ci ⊂ Ci be the ball of radius 1 concentric with the cube Ci and let Ei ∈ M be the set of measures such that ci contains a seed. Because the radius of ci is 1, it never contains more than one seed. Let X be a Poisson point process on Rd with positive intensity and law P . It follows from the definition of a seed that {1X∈Ei }i∈Zd is a collection of i.i.d. random variables with positive expectation. For each i ∈ Zd , let Ei1 ⊂ Ei be the set of measures where the globe corresponding to the seed in ci is one-special and similarly let Ei2 ⊂ Ei be the set of measures where the globe corresponding to the seed in ci is two-special. By Proposition 2.17, it follows that {1X∈E 1 }i∈Zd and {1X∈E 2 }i∈Zd are collections of i.i.d. random variables i i with positive expectation. Let 1 T11 (µ) := inf n ∈ Z+ : µ ∈ E(n,0,...,0) and for some 0 < k1 < k2 < n, 1 we have µ ∈ E(k for j = 1, 2 . j ,0,...,0)  Also define 2 T12 (µ) := inf n ∈ Z+ : µ ∈ E(n,0,...,0) and for some 0 < k1 < k2 < n, 2 we have µ ∈ E(k for j = 1, 2 . j ,0,...,0)  Similarly, define Ti1 and Ti2 for all 2 ≤ i ≤ d by using coordinate i. Clearly, 56  2.11. Proof of Theorem 2.5 for all 1 ≤ i ≤ d, both Ti1 (X) and Ti2 (X) have finite mean. We set d  r(Ti1 + Ti2 ) .  T := 8 j=1  We now show that Γ satisfies the required property. Let MT ⊂ M be the set of point measures such that µ ∈ MT iff T (µ) < ∞. Observe that since Γ is monotone, to determine Γ(µ)|B(0,1) it suffices to determine which points of [µ] ∩ B(0, 1) will be in [Γ(µ)] ∩ B(0, 1). If x ∈ [µ] does not belong to a globe, then whether or not it is deleted depends on the value of U (µ, x). Recall that U is the assignment function for Ψ. If x ∈ [µ] ∩ B(0, 1) does belong to a globe, then whether or not it is deleted depends on the globe b for which x ∈ b, on U (µ, b), and on the splitting φfin b (µ|b , U (µ, b)). Let c be defined as in (2.33), the point process of the centers of the globes. 
Thus it suffices to show that for all µ, µ′ ∈ MT such that µ|B(0,T(µ)) = µ′|B(0,T(µ)), we have

(a) Ψ(µ) ∩ B(0, 1) = Ψ(µ′) ∩ B(0, 1) and c(µ)|B(0,1) = c(µ′)|B(0,1),

(b) U(µ, x) = U(µ′, x) for all x ∈ B(0, 1), and

(c) U(µ, B̄(y, R)) = U(µ′, B̄(y, R)) for all y ∈ B(0, 1).

Write T = T(µ). Property (a) follows from µ|B(0,T) = µ′|B(0,T) and Lemma 2.34. Property (b) follows from the following observations. If B̄(z, R) is a partner of some x ∈ B(0, 1), then ‖x − z‖ ≤ r max(T₁¹, T₁²); thus B(z, R + 120 + d) ⊂ B(0, T). In addition, if y ∈ [µ] shares a partner with x and has lower one- or two-rank than x, then ‖x − y‖ ≤ 2r max(T₁¹, T₁²), so that y ∈ B(0, T). Also note that the tag of a globe B̄(z, R) is contained in B(z, R + 120 + d). Hence by Lemma 2.34, the partners and ranks of x are determined on B(0, T). Thus

p(µ, x) = p(µ′, x)  and  r(µ, x) = r(µ′, x)

for all x ∈ B(0, 1) and all µ, µ′ ∈ MT such that µ|B(0,T) = µ′|B(0,T), where p, r are the partner and rank functions of Ψ. From the definition of U, property (b) follows.

Similarly, if y ∈ B(0, 1), b = B̄(y, R) is a globe, and B̄(z, R) is a partner of b, then B(z, R + 120 + d) ⊂ B(0, T). In addition, if a globe B̄(y′, R) shares a partner with b and has a lower rank than b, then B(y′, R + 120 + d) ⊂ B(0, T). Thus property (c) also holds.

The proof that Υ has the required property is similar. Recall that Υ is defined by placing Poisson point processes inside each member of V(c) using the assignment function U. Recall that V(µ) is the Voronoi tessellation of the point process µ and that each Voronoi cell receives the U[0, 1] variable assigned to the globe that is contained in the Voronoi cell. For all x ∈ Rd, let v(x, µ) be the member of V(c(µ)) to which x belongs. From the definition of T and Lemma 2.34, it follows that for all x ∈ B(0, 1) and all µ, µ′ ∈ MT such that µ|B(0,T(µ)) = µ′|B(0,T(µ)), we have v(x, µ) = v(x, µ′). Moreover, it is not difficult to verify that for each x ∈ B(0, 1), if b ⊂ v(x, µ) is a globe, then its partners, rank and assignment function are also determined on B(0, T(µ)).

2.12  Open problems

Question 2.1, in the introduction, asked whether a homogeneous Poisson point process X on Rd can be deterministically "thickened" via a factor; that is, whether there exists a deterministic isometry-equivariant map φ such that φ(X) is a homogeneous Poisson process of higher intensity that contains all the original points of X. We also do not know the answer to the following question, where we drop the requirement of equivariance.

Question 2.2. Let d ≥ 1 and let λ′ > λ > 0. Does there exist a deterministic map φ such that if X is a homogeneous Poisson point process of intensity λ, then φ(X) is a homogeneous Poisson point process on Rd with intensity λ′, and such that all the points of X are points of φ(X)?

We can also ask similar questions in the discrete setting of Bernoulli processes. We do not know the answer to the following simple question.

Question 2.3. Let X = {Xi}i∈Z be a sequence of i.i.d. {0, 1}-valued random variables with E(X0) = 1/4. Does there exist a deterministic map f such that {f(X)i}i∈Z is a sequence of i.i.d. {0, 1}-valued random variables with E(f(X)0) = 1/2 and f(X)i ≥ Xi for all i ∈ Z?

Note that there does not exist a translation-equivariant map φ, a factor, that satisfies the condition of Question 2.3; if φ is a factor, then by the Kolmogorov–Sinai theorem, the entropy of φ(X) cannot be greater than the entropy of X. See [20, Chapter 5] for more details.
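To make the entropy obstruction concrete (the following computation is ours, not part of the original argument), compare the Shannon entropies of the two coin-flip processes:

$$h\big(B(\tfrac14)\big) = -\tfrac14\log_2\tfrac14 - \tfrac34\log_2\tfrac34 = 2 - \tfrac34\log_2 3 \approx 0.811 < 1 = h\big(B(\tfrac12)\big),$$

so no factor can map i.i.d. Bernoulli(1/4) bits onto i.i.d. Bernoulli(1/2) bits.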
More generally, if B(p) and B(q) are Bernoulli shifts on {0, 1, . . . , d − 1}, where the entropy of p is less than the entropy of q, one can ask whether there exists a deterministic map φ from B(p) to B(q) such that φ(x)i ≥ xi for all x ∈ {0, 1, . . . , d − 1}^Z and all i ∈ Z. Also see [2] for more open problems.

Remark. Ori Gurel-Gurevich and Ron Peled have informed us that they have answered Questions 1–3 (with respective answers no, yes, and yes) in a manuscript entitled "Poisson Thickening" [6].

Acknowledgments

Alexander Holroyd and Russell Lyons were funded in part by Microsoft Research. Alexander Holroyd and Terry Soo were funded in part by NSERC. Russell Lyons was funded in part by NSF grant DMS-0705518.

2.13  Bibliography

[1] O. Angel, A. Holroyd, and T. Soo. Deterministic thinning of finite Poisson processes. Proc. Amer. Math. Soc., 2010. To appear.

[2] K. Ball. Monotone factors of i.i.d. processes. Israel J. Math., 150:205–227, 2005.

[3] K. Ball. Poisson thinning by monotone factors. Electron. Comm. Probab., 10:60–69 (electronic), 2005.

[4] S. Evans. A zero-one law for linear transformations of Lévy noise. In Proceedings of the Conference on Algebraic Methods in Statistics and Probability, Contemporary Mathematics Series, 2010. To appear.

[5] P. A. Ferrari, C. Landim, and H. Thorisson. Poisson trees, succession lines and coalescing random walks. Ann. Inst. H. Poincaré Probab. Statist., 40(2):141–152, 2004.

[6] O. Gurel-Gurevich and R. Peled. Poisson thickening. Preprint, arXiv:0911.5377.

[7] A. E. Holroyd, R. Pemantle, Y. Peres, and O. Schramm. Poisson matching. Ann. Inst. Henri Poincaré Probab. Stat., 45(1):266–287, 2009.

[8] A. E. Holroyd and Y. Peres. Trees and matchings from point processes. Electron. Comm. Probab., 8:17–27 (electronic), 2003.

[9] A. E. Holroyd and Y. Peres. Extra heads and invariant allocations. Ann. Probab., 33(1):31–52, 2005.

[10] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1990. Corrected reprint of the 1985 original.

[11] J. Jacod. Two dependent Poisson processes whose sum is still a Poisson process. J. Appl. Probability, 12:170–172, 1975.

[12] O. Kallenberg. Foundations of Modern Probability. Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2002.

[13] M. Keane and M. Smorodinsky. A class of finitary codes. Israel J. Math., 26(3-4):352–371, 1977.

[14] M. Keane and M. Smorodinsky. Bernoulli schemes of the same entropy are finitarily isomorphic. Ann. of Math. (2), 109(2):397–406, 1979.

[15] J. F. C. Kingman. Poisson Processes, volume 3 of Oxford Studies in Probability. The Clarendon Press Oxford University Press, New York, 1993. Oxford Science Publications.

[16] G. Last and H. Thorisson. Invariant transports of stationary random measures and mass-stationarity. Ann. Probab., 37(2):790–813, 2009.

[17] I. Molchanov. Theory of Random Sets. Probability and its Applications (New York). Springer-Verlag London Ltd., London, 2005.

[18] D. S. Ornstein. Ergodic Theory, Randomness, and Dynamical Systems. Yale University Press, New Haven, Conn., 1974. James K. Whittemore Lectures in Mathematics given at Yale University, Yale Mathematical Monographs, No. 5.

[19] D. S. Ornstein and B. Weiss. Entropy and isomorphism theorems for actions of amenable groups. J. Analyse Math., 48:1–141, 1987.

[20] K. Petersen. Ergodic Theory, volume 2 of Cambridge Studies in Advanced Mathematics.
Cambridge University Press, Cambridge, 1989. Corrected reprint of the 1983 original.

[21] E. G. Rees. Notes on Geometry. Universitext. Springer-Verlag, Berlin, 1983.

[22] R.-D. Reiss. A Course on Point Processes. Springer Series in Statistics. Springer-Verlag, New York, 1993.

[23] J. Serafin. Finitary codes, a short survey. In Dynamics & Stochastics, volume 48 of IMS Lecture Notes Monogr. Ser., pages 262–273. Inst. Math. Statist., Beachwood, OH, 2006.

[24] J. G. Sinaĭ. A weak isomorphism of transformations with invariant measure. Dokl. Akad. Nauk SSSR, 147:797–800, 1962.

[25] T. Soo. Translation-equivariant matchings of coin-flips on Zd. Adv. in Appl. Probab., 42(1):69–82, 2010.

[26] S. M. Srivastava. A Course on Borel Sets, volume 180 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1998.

[27] H. Thorisson. Transforming random elements and shifting random fields. Ann. Probab., 24:2057–2064, 1996.

[28] H. Thorisson. Coupling, Stationarity, and Regeneration. Probability and its Applications (New York). Springer-Verlag, New York, 2000.

[29] Á. Timár. Invariant matchings of exponential tail on coin flips in Zd. Preprint.

[30] Á. Timár. Tree and grid factors for general point processes. Electron. Comm. Probab., 9:53–59 (electronic), 2004.

Chapter 3

Deterministic Thinning of Finite Poisson Processes²

² A version of this chapter has been accepted for publication. Angel, O., Holroyd, A.E., and Soo, T. (2010) Deterministic Thinning of Finite Poisson Processes. Proc. Amer. Math. Soc.

3.1  Introduction

Given a homogeneous Poisson point process on Rd, it is well known that selecting each point independently with some fixed probability gives a homogeneous Poisson process of lower intensity. This is often referred to as thinning. Ball [1] proved the surprising fact that in d = 1, thinning can be achieved without additional randomization: we may choose a subset of the Poisson points as a deterministic function of the Poisson process so that the chosen points form a Poisson process of any given lower intensity; furthermore, the function can be taken to be a translation-equivariant factor (that is, if a translation is applied to the original process, the chosen points are translated by the same vector). Holroyd, Lyons and Soo [7] (Theorem 2.1) extended this result to all dimensions d, and further strengthened it by showing that the function can be made isometry-equivariant, and that the non-chosen points can also form a Poisson process (it cannot be independent of the process of chosen points, however). Evans [3] proved that a Poisson process cannot be similarly thinned in an equivariant way with respect to any group of affine measure-preserving maps that is strictly larger than the isometry group.

Here we address the question of deterministic thinning for a Poisson process in a finite volume. Postponing considerations of equivariance, we simply ask whether there exists a deterministic thinning rule giving a Poisson process of lower intensity. The answer depends on the two intensities, as follows. Let L denote Lebesgue measure on Rd.

Theorem 3.1. Fix λ > µ > 0, and a Borel set S ⊂ Rd with LS ∈ (0, ∞). Let Π be a homogeneous Poisson process of intensity λ on S. Let X and Y be Poisson random variables with respective means λLS and µLS. The following are equivalent.
(i) There exists a measurable function f such that f(Π) is a homogeneous Poisson process of intensity µ on S, and every point of f(Π) is a point of Π almost surely.

(ii) There exists an integer k ≥ 0 such that

P(X = k) ≤ P(Y = k)  and  P(X ≤ k + 1) ≤ P(Y ≤ k).

(iii) There is no integer k ≥ 0 such that

P(X = k + 1) > P(Y = k + 1)  and  P(X ≤ k + 1) > P(Y ≤ k).

Figure 3.1 depicts the pairs (λ, µ) for which conditions (i)–(iii) hold. If f satisfies condition (i) of Theorem 3.1 we say that f is a (deterministic, Poisson) thinning on S from λ to µ. The domain and range of f are both the set of simple point measures on S. The equivalence of (ii) and (iii) is of course a relatively mundane technicality, but it is useful to have both forms of the condition available.

Remark 3.1. By the Borel isomorphism theorem (see e.g. [10, 3.4.24]) and the mapping theorem [6], Theorem 3.1 generalizes immediately to any standard Borel space S with a finite non-atomic measure ℓ in place of L. By the same token, it suffices to prove Theorem 3.1 for the special case S = [0, 1]. To see this, observe first that by scaling λ and µ we may assume that ℓ(S) = 1. Let φ : S → [0, 1] be a Borel isomorphism; that is, φ is a measurable bijection such that ℓ ∘ φ⁻¹(B) = L(B) for all Borel sets B ⊂ [0, 1]. Such φ (and φ⁻¹) act naturally on point processes by φ(Π) = Π ∘ φ⁻¹. If Π is a homogeneous Poisson point process on S, then by the mapping theorem, φ(Π) is a homogeneous Poisson point process of the same intensity on [0, 1]. Thus f is a thinning from λ to µ on S if and only if φ ∘ f ∘ φ⁻¹ is a thinning from λ to µ on [0, 1].

The corollaries below follow from Theorem 3.1 by an analysis of the curves in Figure 3.1.

Figure 3.1: The shaded (closed) region is the set of pairs of intensities (λ, µ) for which a thinning exists in the case LS = 1. Also shown are the curves P(X ≤ k + 1) = P(Y ≤ k) for k = 0, . . . , 5 (red), the curves P(X = k) = P(Y = k) for k = 1, . . . , 4 (blue), and the line µ = λ.

Corollary 3.2 (Monotonicity in λ). Suppose there is a thinning from λ to µ on [0, 1].

(i) If λ′ > λ, then there exists a thinning from λ′ to µ.

(ii) If µ′/λ′ = µ/λ and λ′ > λ, then there exists a thinning from λ′ to µ′.

Corollary 3.3 (Non-monotonicity in µ). There are positive real numbers λ > µ > µ′ such that there exists a thinning from λ to µ but not from λ to µ′.

Corollary 3.3 may come as a surprise. However, it follows from Theorem 3.1 by a numerical computation or an inspection of Figure 3.1. In particular, an example with LS = 1 is (λ, µ, µ′) = (1.45, 0.7, 0.6), as may be checked by taking k = 1 in Theorem 3.1 (ii) for the pair (λ, µ), and k = 0 in (iii) for the pair (λ, µ′); see also the numerical sketch below. Furthermore, there are examples satisfying λ = n + 1/2 + o(1) and µ, µ′ = n − 1/2 + o(1) as n → ∞.

For µ > 0 define

λc(µ) := inf{λ > µ : there is a thinning from λ to µ on [0, 1]}.

By Theorem 3.1 (ii) and Corollary 3.2 (i), there exists a thinning from λ to µ if and only if λ ≥ λc(µ). The next corollary states that there exists a thinning if the average number of points to be deleted is at least one, while the converse holds in asymptotic form.

Corollary 3.4 (Asymptotic threshold). We have λc(µ) ≤ µ + 1 for all µ > 0, and λc(µ) ≥ µ + 1 − o(1) as µ → ∞.
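As a quick illustration (ours, not from the original text), the example of Corollary 3.3 is easy to verify numerically; a minimal sketch assuming SciPy:

```python
from scipy.stats import poisson

X = poisson(1.45)

# Condition (ii) holds with k = 1 for mu = 0.7: a thinning from 1.45 to 0.7 exists.
Y, k = poisson(0.7), 1
print(X.pmf(k) <= Y.pmf(k), X.cdf(k + 1) <= Y.cdf(k))        # True True

# Condition (iii) fails at k = 0 for mu' = 0.6: no thinning from 1.45 to 0.6.
Yp, k = poisson(0.6), 0
print(X.pmf(k + 1) > Yp.pmf(k + 1), X.cdf(k + 1) > Yp.cdf(k))  # True True
```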
Our construction of thinnings relies on the following key result, which states that given n unordered uniformly random points in an interval, we may deterministically delete one of them in such a way that the remaining n − 1 are again uniformly random. Write B^{{n}} for the set of all subsets of B of size n.

Proposition 3.5 (One-point deletion). Let U1, . . . , Un be i.i.d. random variables uniform on [0, 1], and define the random set U := {U1, . . . , Un}. There exists a measurable function g : [0, 1]^{{n}} → [0, 1]^{{n−1}} such that g(A) ⊂ A for all A, and

g(U) =ᵈ {U1, . . . , Un−1}.

Moreover, there exists a measurable v : [0, 1]^{{n}} → [0, 1] such that v(U) is uniform on [0, 1] and is independent of g(U).

Even in the case n = 2, the first claim of Proposition 3.5 is far from obvious, and makes an entertaining puzzle. Of course, the claim would be trivial if we allowed g to be a function of the ordered tuple (U1, . . . , Un), or a function of U together with an independent roll of an n-sided die. The function v in Proposition 3.5 may be thought of as extracting "spare" randomness associated with the location of the deleted point U \ g(U). This will be useful in the proof of Theorem 3.1, because it will make it easy to delete a random number of further points once one point has been deleted.

Proposition 3.5 is somewhat reminiscent of the following fact proved in [5] (although the proofs appear unrelated). Given a homogeneous Poisson process Π on Rd, it is possible to choose a point W of Π, as a deterministic function of Π, so that deleting W and translating the remaining points by −W yields again a homogeneous Poisson process.

Proposition 3.5 motivates the search for couplings of Poisson random variables X and Y such that either X = Y = 0 or X > Y. An important observation of Ball [1, Lemma 3.1] is that the standard "quantile coupling" (i.e. X = F_X⁻¹(U) and Y = F_Y⁻¹(U), where F_X, F_Y are the distribution functions and U is uniform on [0, 1]) has this property provided the mean of X is sufficiently large as a function of the mean of Y. More generally, given a coupling of Poisson random variables X, Y with means λ, µ such that X > Y except on an event A ∈ σ(X) on which X = Y, it is not difficult to show using Proposition 3.5 that there exists a thinning from λ to µ. Condition (ii) of Theorem 3.1 implies the existence of such a coupling.

Remark 3.2 (Infinite volumes). In the case of infinite volumes it is easier (but still nontrivial) to show that a Poisson thinning from λ to µ always exists when λ > µ; see [7, Example 2] (Example 2.2). Our results yield the following alternative construction, with the additional property that whether or not a point is deleted is determined by the process within a fixed finite radius. Partition Rd into cubes of volume 1/(λ − µ). By Corollary 3.4 there exists a thinning on each cube from λ to µ; by applying each simultaneously we obtain a thinning on all of Rd.

The paper is organized as follows. In Section 3.2 we will prove some easier parts of Theorem 3.1. In Section 3.3 we will prove Proposition 3.5. In Section 3.4 we will define the coupling of Poisson random variables that will be used to prove the existence of a thinning. In Section 3.5 we will finish the proof of Theorem 3.1 and also prove the corollaries. Finally, in Section 3.6 we will briefly address some variant concepts, including deterministic thinnings that are equivariant with respect to a group of isometries, and deterministic splittings, where the points of the Poisson point process are partitioned into two sets each of which forms a Poisson point process.
We will also address deterministic thickening: we show that on a finite volume, it is impossible to add points, as a deterministic function of a Poisson point process, to obtain a Poisson point process of higher intensity.

3.2  Proof of Theorem 3.1: easy implications

We will prove Theorem 3.1 by showing that for the existence of a thinning as in (i), condition (iii) is necessary, (ii) is sufficient, and (iii) implies (ii). Let M be the space of all simple point measures on [0, 1]. For ν ∈ M, we denote the support of ν by

[ν] := {x ∈ [0, 1] : ν({x}) = 1}.

Let N = {0, 1, . . .}. For each n ∈ N, let Mn := {ν ∈ M : ν([0, 1]) = n}. The following characterization is useful. A point process Π on [0, 1] is a Poisson point process of intensity λ if and only if: the random variable Π([0, 1]) is Poisson with mean λ, and, for each n ∈ N, conditional on Π ∈ Mn, the set [Π] has the distribution of {U1, . . . , Un}, where U1, . . . , Un are i.i.d. random variables uniformly distributed on [0, 1]. See [8, Theorem 1.2.1] or [6] for background.

Proof of Theorem 3.1: (i) =⇒ (iii). Let Π be a Poisson point process on [0, 1] with mean λ and let f be a thinning from λ to µ. Set X := Π([0, 1]) and Y := f(Π)([0, 1]) and let k ∈ N be such that

P(X = k + 1) > P(Y = k + 1).    (3.1)

We will show that

P(X = Y = k + 1) = 0.    (3.2)

In other words, if on the event X = k + 1 the thinning f sometimes deletes points of Π, then it must (almost) always delete points. Since X ≥ Y, (3.2) is inconsistent with P(X ≤ k + 1) > P(Y ≤ k). Thus if there exists a thinning then condition (iii) holds.

It remains to show (3.2). Let Q be the law of {U1, . . . , Uk+1} where the Ui are i.i.d. uniform in [0, 1]. Let J := {ν ∈ Mk+1 : f(ν) = ν}. Thus, J is a set of measures on which f does not delete any points. Let [J] := {[ν] : ν ∈ J}, so that

P(Π ∈ J) = P(X = k + 1) · Q([J]),

and also

P(f(Π) ∈ J) = P(Y = k + 1) · Q([J]).

Since {Π ∈ J} ⊆ {f(Π) ∈ J}, we deduce

P(X = k + 1) · Q([J]) ≤ P(Y = k + 1) · Q([J]).    (3.3)

We see that (3.1) and (3.3) force Q([J]) = 0. Hence P(Π ∈ J) = 0, which implies (3.2).

Proof of Theorem 3.1: (iii) =⇒ (ii). Since λ > µ we have P(X = 0) < P(Y = 0), and since Σ_{i∈N} P(X = i) = Σ_{i∈N} P(Y = i) = 1, there exists a minimal integer k ≥ 0 such that P(X = k + 1) > P(Y = k + 1). By condition (iii) we must have P(X ≤ k + 1) ≤ P(Y ≤ k). By the minimality of k we have P(X = k) ≤ P(Y = k).

It remains to prove that (ii) implies (i) in Theorem 3.1, which we will do in Section 3.5 after assembling the necessary tools. Our strategy for constructing the thinning f will be as follows. If the number of points Π(S) is at most k, we retain all of them; otherwise, we first delete one point using Proposition 3.5, then delete a suitable random number of others (a schematic sketch follows).
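In schematic form (ours, not part of the original; `delete_one` stands for the pair (g, v) of Proposition 3.5 below, and `law_of_Y_given_n` for the conditional law obtained from the coupling of Section 3.4; both are assumptions at this point):

```python
import random

def thin(points, k, delete_one, law_of_Y_given_n):
    """Sketch of the thinning strategy: keep everything when there are at
    most k points; otherwise delete one point deterministically, then keep
    a random number of the rest, as in Corollary 3.7 below."""
    n = len(points)
    if n <= k:
        return set(points)
    remaining, v = delete_one(points)       # n -> n-1 points, plus spare uniform v
    rng = random.Random(v)                  # all further randomness is driven by v
    m = law_of_Y_given_n(n, rng)            # how many points to keep (always < n)
    order = sorted(remaining, key=lambda _: rng.random())  # random order via v
    return set(order[:m])                   # first m points in the random order
```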
3.3  Deleting uniform random variables

We will give two proofs of Proposition 3.5. Our original proof is given in Section 3.6 and gives a function g with an additional rotation-equivariance property. The proof below follows a suggestion of Michael Brand; a version appears on his web page of mathematical puzzles [2, March 2009]. Both proofs rely on the following observation.

Lemma 3.6. Let Q be a probability measure on an arbitrary Borel space S. Let Q^m be the law of {U1, . . . , Um}, where the Ui are i.i.d. with law Q. Let g : S^{{n}} → S^{{n−1}} be a measurable function such that g(A) ⊂ A for all A, and define for B ∈ S^{{n−1}},

R(B) := {w ∈ S : g(B ∪ {w}) = B}.

If for Q^{n−1}-a.e. B ∈ S^{{n−1}} we have Q(R(B)) = 1/n, then Q^n ∘ g⁻¹ = Q^{n−1}.

Proof. We prove the stronger fact that the Radon–Nikodym derivative d(Q^n ∘ g⁻¹)/dQ^{n−1} is Q^{n−1}-a.e. equal to n·Q(R(·)) (without any assumption on Q(R(B))). Let U1, . . . , Un be i.i.d. with law Q and write Um = {U1, . . . , Um}. Let A ⊆ S^{{n−1}} be measurable. Since the Ui are exchangeable,

Q^n ∘ g⁻¹(A) = P(g(Un) ∈ A) = n · P(g(Un) = Un−1 ∈ A).

We have the identity of events

{g(Un) = Un−1 ∈ A} = {Un ∈ R(Un−1)} ∩ {Un−1 ∈ A}.

Therefore, since Un−1, Un have respective laws Q^{n−1}, Q,

Q^n ∘ g⁻¹(A) = ∫_A n·Q(R(B)) dQ^{n−1}(B).

Proof of Proposition 3.5. By the Borel isomorphism theorem we may assume that the Ui are i.i.d. uniform in S = {1, . . . , n} × [0, 1] × [0, 1] instead of in [0, 1], and write Ui = (Xi, Yi, Zi). Let Q^m be as in Lemma 3.6. Let K be the {1, . . . , n}-valued random variable given by

K ≡ Σ_{i=1}^n Xi  (mod n).

Let W = (X′, Y′, Z′) be the element of U that has the Kth smallest Yi. Define

g(U) := U \ {W},  v(U) := Z′.

Since the Xi's, Yi's and Zi's are all independent it is clear that v(U) is uniform on [0, 1] and independent of g(U). It remains to show that g(U) has law Q^{n−1}. To see this, let B ∈ S^{{n−1}} and observe that for a.e. y′, z′ ∈ [0, 1] there is a unique w = (x′, y′, z′) ∈ S so that g(B ∪ {w}) = B. Thus

Q({w ∈ S : g(B ∪ {w}) = B}) = 1/n.

It follows from Lemma 3.6 that Q^n(g⁻¹(·)) = Q^{n−1}(·), as required.

The following corollary of Proposition 3.5 states how the "spare" randomness will be utilized in the proof of Theorem 3.1. Write B^{{<n}} for the set of all subsets of B of size strictly less than n.

Corollary 3.7. Let U1, . . . , Un be i.i.d. random variables uniformly distributed on [0, 1], and define the random set U := {U1, . . . , Un}. Let Z be any {0, . . . , n − 1}-valued random variable that is independent of U. There exists a measurable function h : [0, 1]^{{n}} → [0, 1]^{{<n}} such that h(A) ⊂ A for all A, and

h(U) =ᵈ {U1, . . . , UZ}.

Proof. Define the random set Un−1 := {U1, . . . , Un−1}. Let V be uniformly distributed on [0, 1] and independent of (U1, . . . , Un−1). Since Z < n, there exists a measurable h′ : [0, 1]^{{n−1}} × [0, 1] → [0, 1]^{{<n}} such that h′(Un−1, V) =ᵈ {U1, . . . , UZ} and h′(Un−1, V) ⊆ Un−1; to construct such an h′, use V to randomly order Un−1 and independently construct Z with the correct distribution, and then select the first Z points in the ordering. Now let g and v be as in Proposition 3.5, so that (g(U), v(U)) =ᵈ (Un−1, V). Define h(U) := h′(g(U), v(U)).
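The deletion rule g in the proof above is explicit; the following minimal sketch is ours, not from the text, and simulates directly in the space S = {1, . . . , n} × [0, 1] × [0, 1] rather than pulling back through the Borel isomorphism:

```python
import random

def delete_one(points):
    """Sketch of g from the proof of Proposition 3.5.  Each point is a
    triple (x, y, z) with x in {1,...,n} and y, z in [0,1].  Returns the
    remaining set and the spare uniform v = z' of the deleted point."""
    n = len(points)
    k = (sum(x for x, _, _ in points) - 1) % n   # K = sum of x's mod n (0-based)
    by_y = sorted(points, key=lambda p: p[1])    # order by the y-coordinate
    w = by_y[k]                                  # delete the Kth smallest y
    return set(points) - {w}, w[2]

pts = {(random.randint(1, 5), random.random(), random.random()) for _ in range(5)}
remaining, v = delete_one(pts)
```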
3.4  Couplings of Poisson random variables

In this section we will show that condition (ii) of Theorem 3.1 implies the existence of a certain coupling of Poisson random variables that will be used to construct thinnings. We need the following simple result, which implies that each of the two families of curves in Figure 3.1 is non-intersecting.

Lemma 3.8 (Non-intersection). Let X, Y be Poisson random variables with respective means λ, µ, where λ > µ. For every integer k ≥ 0,

(i) P(X = k + 1) ≤ P(Y = k + 1) implies P(X = k) ≤ P(Y = k);

(ii) P(X ≤ k + 1) ≤ P(Y ≤ k) implies P(X ≤ k + 2) ≤ P(Y ≤ k + 1).

The following fact will be useful in the proof of Lemma 3.8, and elsewhere. If X is a Poisson random variable with mean λ, then P(X ≤ n) is the probability that the (n + 1)st arrival in a standard Poisson process occurs after time λ, so

P(X ≤ n) = (1/n!) ∫_λ^∞ e^{−t} t^n dt.    (3.4)

Proof of Lemma 3.8. Let X, Y be Poisson with respective means λ, µ, where λ > µ. Part (i) is easy to check:

P(X = k) = ((k+1)/λ) P(X = k+1) ≤ ((k+1)/λ) P(Y = k+1) = (µ/λ) P(Y = k) < P(Y = k).

For (ii), using (3.4), the following inequalities are all equivalent:

P(X ≤ k + 1) ≤ P(Y ≤ k);

P(X ≤ k + 1) ≤ P(Y ≤ k + 1) − P(Y = k + 1);

(1/(k+1)!) ∫_λ^∞ e^{−t} t^{k+1} dt ≤ (1/(k+1)!) ∫_µ^∞ e^{−t} t^{k+1} dt − (1/(k+1)!) e^{−µ} µ^{k+1};

e^{−µ} µ^{k+1} ≤ ∫_µ^λ e^{−t} t^{k+1} dt;

1 ≤ ∫_µ^λ e^{µ−t} (t/µ)^{k+1} dt.

But the right side of the last inequality is clearly increasing in k.

Corollary 3.9 (Monotone coupling). If condition (ii) of Theorem 3.1 is satisfied by Poisson random variables X and Y and an integer k, then there exists a coupling of X and Y with the following properties.

(i) The coupling is monotone; that is, X ≥ Y.

(ii) If X ≤ k, then X = Y.

(iii) If X > k, then X > Y.

Before proving Corollary 3.9 we recall that if W and V are real-valued random variables, then W stochastically dominates V if and only if there exists a coupling of W and V such that W ≥ V a.s. See e.g. [12, Chapter 1] and [11] for background.

Proof of Corollary 3.9. Let X and Y be Poisson random variables that satisfy condition (ii) of Theorem 3.1 with an integer k. Applying Lemma 3.8, we obtain that

P(X = j) ≤ P(Y = j) for all 0 ≤ j ≤ k    (3.5)

and

P(X ≤ j + 1) ≤ P(Y ≤ j) for all j ≥ k.    (3.6)

By (3.5), we may define a probability mass function m on N as follows:

m(j) := P(Y = 0) − P(X = 0) + P(X ≤ k),  j = 0;
m(j) := P(Y = j) − P(X = j),  1 ≤ j ≤ k;
m(j) := P(Y = j),  j > k.

Let V be a random variable with mass function m. Also let W := (X − 1)·1_{X>k}. By (3.5) and (3.6), it is straightforward to check that W stochastically dominates V, so we may assume that W ≥ V. On X ≤ k we have W = 0 and therefore V = 0, hence we have the equality V = V·1_{X>k}. Now define a random variable

Y′ := X·1_{X≤k} + V·1_{X>k}.

The mass function of Y′ is obtained by adding those of X·1_{X≤k} and V, except at 0, and it follows that Y′ =ᵈ Y. Therefore we may assume that Y = Y′. On the other hand we may write

X = X·1_{X≤k} + (W + 1)·1_{X>k}.

By comparing the last two displays it is evident that the required properties (ii) and (iii) hold, and (i) is a consequence of them.
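The proof of Corollary 3.9 is constructive enough to simulate. The sketch below is ours (one possible reading of the proof: W and V are coupled by quantile coupling in a single uniform, which guarantees V ≤ W), assuming SciPy:

```python
import numpy as np
from scipy.stats import poisson

def monotone_coupling(lam, mu, k, u, N=400):
    """One draw (X, Y) from the coupling of Corollary 3.9, assuming that
    (lam, mu, k) satisfy Theorem 3.1 (ii).  `u` is a uniform [0,1] sample."""
    X_dist, Y_dist = poisson(lam), poisson(mu)
    # mass function m of V, as in the proof (support truncated at N)
    m = np.array([Y_dist.pmf(j) - X_dist.pmf(j) if j <= k else Y_dist.pmf(j)
                  for j in range(N)])
    m[0] = Y_dist.pmf(0) - X_dist.pmf(0) + X_dist.cdf(k)
    X = int(X_dist.ppf(u))                       # X is the u-quantile of Poisson(lam)
    V = int(np.searchsorted(np.cumsum(m), u))    # V is the u-quantile of m
    Y = X if X <= k else V                       # Y = X 1[X<=k] + V 1[X>k]
    return X, Y
```

For instance, monotone_coupling(1.45, 0.7, 1, u) over many uniform draws u gives X ≥ Y always, with X = Y precisely on the event {X ≤ 1}.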
3.5  The thinning, and proofs of corollaries

Proof of Theorem 3.1: (ii) =⇒ (i). Assuming condition (ii) we construct a thinning f. Let k be an integer satisfying condition (ii). Let Π be a Poisson point process on [0, 1] with intensity λ. Write X = Π([0, 1]); thus X is a Poisson random variable with mean λ. Let Y be a coupled Poisson random variable with mean µ so that X and Y satisfy the conclusion of Corollary 3.9. We will define f so that f(Π)([0, 1]) =ᵈ Y. For each n ≥ 0, let Qn be the law of Y conditional on X = n. Let Zn be independent of Π and have law Qn. By Corollary 3.9, if n > k, then Y < n a.s. For each n > k, let hn : [0, 1]^{{n}} → [0, 1]^{{<n}} be the function from Corollary 3.7 corresponding to the random variable Zn. Let f be defined by:

[f(Π)] := [Π] if X ≤ k;  hn([Π]) if X = n > k.

By Corollary 3.9, we have f(Π)([0, 1]) =ᵈ Y. In addition, from Corollary 3.7 we have that for all m ≥ 0, conditional on the event that f(Π)([0, 1]) = m, the m points of f(Π) have the distribution of m unordered i.i.d. random variables uniformly distributed on [0, 1] (this holds even if we condition also on Π([0, 1])). Thus f(Π) is a Poisson point process of intensity µ on [0, 1].

Proof of Corollary 3.2. Let Fλ be the distribution function of a Poisson random variable with mean λ. Part (i) follows immediately from Theorem 3.1 condition (ii) and the facts that Fλ(k) is decreasing in λ for all k ≥ 0 and that e^{−λ}λ^k/k! is unimodal as a function of λ.

Let (λ, µ) and (λ′, µ′) satisfy the conditions of Corollary 3.2 part (ii). By Theorem 3.1 condition (ii), it suffices to show that if for some fixed k ≥ 0, the pair (λ, µ) satisfies Fλ(k + 1) ≤ Fµ(k) and e^{−λ}λ^k ≤ e^{−µ}µ^k, then the pair (λ′, µ′) satisfies the same inequalities (with the same k). Let p := µ/λ. By a variant of the argument in the proof of Lemma 3.8 (ii), we have that Fλ(k + 1) ≤ Fµ(k) if and only if

e^{−λ}λ^{k+1} ≤ (k + 1) ∫_µ^λ e^{−t} t^k dt.    (3.7)

By the change of variables t = λs, we see that (3.7) is equivalent to

1 ≤ (k + 1) ∫_p^1 e^{(1−s)λ} s^k ds.    (3.8)

The right side of (3.8) is increasing in λ. Since µ′/λ′ = p and λ′ > λ, we have Fλ′(k + 1) ≤ Fµ′(k). Simple calculations show that e^{−λ}λ^k ≤ e^{−µ}µ^k if and only if

λ ≥ −k log p / (1 − p).    (3.9)

The left side of (3.9) is obviously increasing in λ. Thus we have that e^{−λ′}(λ′)^k ≤ e^{−µ′}(µ′)^k.

For x ∈ R, let ⌊x⌋ denote its integer part.

Proof of Corollary 3.4. First we show that λc(µ) ≤ µ + 1. By Corollary 3.2 (i), it suffices to show that if λ = µ + 1, then there is a thinning from λ to µ. By Theorem 3.1 condition (ii), we must show for some k ∈ N that Fλ(k + 1) ≤ Fµ(k) and e^{−λ}λ^k ≤ e^{−µ}µ^k. The latter condition is satisfied by choosing k = ⌊1/log(1 + 1/µ)⌋. As in the proof of Lemma 3.8 (ii), Fλ(k + 1) ≤ Fµ(k) if and only if

∫_µ^λ e^{µ−t} (t/µ)^{k+1} dt ≥ 1.    (3.10)

So by the change of variables t = µ + s and the equality λ = µ + 1, it suffices to verify that

∫_0^1 e^{−s} (1 + s/µ)^{k+1} ds ≥ 1.    (3.11)

Inequality (3.11) is a consequence of the observation that

log(1 + s/µ) / log(1 + 1/µ) ≥ s for all s ∈ [0, 1],

which in turn follows from log(µ + s) ≥ (1 − s) log µ + s log(µ + 1), an instance of the concavity of log.

Next we show that λc(µ) ≥ µ + 1 − o(1). Fix δ < 1, and let λ = µ + δ. By Theorem 3.1 condition (iii) it suffices to show that when µ is sufficiently large there is an integer k so that Fλ(k + 1) > Fµ(k) and e^{−λ}λ^{k+1} > e^{−µ}µ^{k+1}. The latter condition is equivalent to the inequality

(1 + δ/µ)^{k+1} > e^δ,    (3.12)

while the former condition is equivalent to the negation of (3.10); moreover, by the change of variable t = µ + s this is equivalent to

∫_0^δ e^{−s} (1 + s/µ)^{k+1} ds < 1.    (3.13)

Set k = ⌊µ + 1⌋. For µ sufficiently large, (3.12) is satisfied with this k. Moreover, since k + 1 < µ + 2 and (1 + s/µ) < e^{s/µ}, we see that the left side of (3.13) is bounded above by ∫_0^δ e^{2s/µ} ds, which is strictly less than 1 for µ sufficiently large.

3.6  Variants and open problems

3.6.1  Thickening

Theorem 3.1 and its corollaries address deterministic thinning, but what about deterministic thickening? Does there exist a measurable function f such that if Π is a Poisson point process on a Borel set S, then f(Π) ≥ Π and f(Π) is a Poisson point process on S of intensity higher than that of the original process Π? If S has finite volume, then the answer is no.

Proposition 3.10. Fix λ > 0, and a Borel set S ⊂ Rd with L(S) ∈ (0, ∞). Let Π be a homogeneous Poisson process of intensity λ on S. There does not exist a measurable function f such that f(Π) is a homogeneous Poisson process of intensity strictly larger than λ on S.

Remark 3.3. In Proposition 3.10 we do not even require that f(Π) ≥ Π.

Proof of Proposition 3.10. Let f be a measurable function.
Let 0 denote the zero measure. If f(0) = 0, then P(f(Π) = 0) ≥ P(Π = 0), so that f(Π)(S) cannot be a Poisson random variable of larger mean than Π(S). If f(0) ≠ 0, then P(f(Π) = f(0)) ≥ P(Π = 0) > 0, so that f(Π) gives positive mass to a single point measure other than 0 and hence cannot be a Poisson process.

By the Borel isomorphism theorem, for any Borel set S of infinite volume and any λ > 0, there exists a measurable function f such that if Π is a Poisson process of positive intensity on S, then f(Π) is a Poisson point process of intensity λ on S; but of course this does not guarantee f(Π) ≥ Π. It is shown in [7, Theorem 3] (Theorem 2.4) that even in the case of infinite volume, deterministic thickening is impossible if we impose an additional finitariness condition on f. Gurel-Gurevich and Peled [4] have informed us that they have recently proved that deterministic thickening is possible if this condition is dropped.

3.6.2  Equivariant thinning

As remarked earlier, Theorem 3.1 extends immediately to any Borel space with a finite non-atomic measure. When the space has non-trivial symmetries, new questions arise. Consider the length measure on the circle S¹ = {x ∈ R² : ‖x‖ = 1}. Since this measure space is isomorphic to the interval [0, 2π] with Lebesgue measure, Theorem 3.1 tells us for which pairs λ, µ there exists a thinning. However, the circle is more interesting because we can associate groups of symmetries. Given an isometry θ of S¹ and ν ∈ M(S¹), let θ(ν) be the measure given by θ(ν)(A) := ν(θ⁻¹(A)) for measurable A ⊆ S¹. We say that a measurable mapping f : M(S¹) → M(S¹) is rotation-equivariant if θ(f(ν)) = f(θ(ν)) for all ν ∈ M(S¹) and all rotations θ of S¹. Isometry-equivariance is defined analogously.

Theorem 3.11. If S is the unit circle S¹, and Lebesgue measure is replaced with uniform measure on S¹, then Theorem 3.1 holds even with the additional requirement that the thinning f in condition (i) be rotation-equivariant.

Proof. The proof of Theorem 3.1 goes through except that we need the following rotation-equivariant version of Proposition 3.5. Assuming condition (ii), this allows the thinning we construct to be rotation-equivariant. We omit the rest of the details.

Proposition 3.12 (Equivariant deletion). Let U1, . . . , Un be i.i.d. random variables uniformly distributed on S¹, and define the random set U := {U1, . . . , Un}. There exists a measurable function g : (S¹)^{{n}} → (S¹)^{{n−1}} ∪ {∅} with the following properties: g is rotation-equivariant, g(A) ⊂ A for any set A, and g(U) =ᵈ {U1, . . . , Un−1}. In addition, there exists a function v : (S¹)^{{n}} → [0, 1], such that v is rotation-invariant, and v(U) is uniformly distributed on [0, 1] and independent of g(U).

Remark 3.4. The pre-image of ∅ under the function g of Proposition 3.12 has measure 0. The inclusion of ∅ in the range is a technical convenience to allow g to be rotation-equivariant everywhere (as opposed to merely almost everywhere). For example, if A consists of n equally spaced points in S¹, it is impossible to choose one in a rotation-equivariant way.

To construct this function we rely on a classical problem involving fuel shortage; see e.g. [13, Gasoline Crisis]. See also Spitzer's Lemma [9, Theorem 2.1]. We repeat the problem and its solution below.

Lemma 3.13. Suppose a circular road has several gas stations along its length, with just enough gas in total to drive a full circle around the road. Then it is possible to start at one of the stations with an empty tank and complete a circuit without running out of gas before the end.

Proof. Pretend at first we are allowed to have a negative amount of gas and still drive. Start at any point and consider the amount of gas in the car as a function of the location. After a full circle the tank is exactly empty again. Any point at which the function takes its minimum is a suitable starting point.
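The proof of Lemma 3.13 is effectively an algorithm; the following sketch (ours, not part of the text) finds a valid starting station, with stations given as (position, fuel) pairs on a circle of circumference 1 and total fuel 1:

```python
def start_station(stations):
    """Return a station from which a full counterclockwise circuit is
    possible, via the argmin-of-the-gas-function argument of Lemma 3.13."""
    stations = sorted(stations)               # order by position around the circle
    best, tank, best_tank = 0, 0.0, float("inf")
    for i, (pos, fuel) in enumerate(stations):
        nxt = stations[(i + 1) % len(stations)][0]
        dist = (nxt - pos) % 1.0              # arc length to the next station
        tank += fuel - dist                   # gas on arrival at the next station
        if tank < best_tank:                  # a new minimum of the gas function
            best_tank, best = tank, (i + 1) % len(stations)
    return stations[best]
```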
Proof of Proposition 3.12. Place n gas stations at the points of U ⊂ S¹, with gas for 1/n of the circle at each. Let z(U) be the station from which it is possible to drive around S¹ (in a counterclockwise direction); if there is more than one such station, set z(U) = ∅ (this has probability 0 for i.i.d. uniform points). Clearly g(U) := U \ {z(U)} is rotation-equivariant. To see that g(U) has the claimed distribution, consider B ∈ (S¹)^{{n−1}}, and let F(B) ⊂ S¹ be the set of x ∈ S¹ so that z(B ∪ {x}) = x. By Lemma 3.6, it suffices to show that F(B) has measure 1/n for a.e. B.

To see that F(B) has measure 1/n, consider as above the amount of gas in the car (allowing a deficit) when 1/n gas is placed at each point of B, but now continue driving indefinitely around the circle. The gas function h(t) is skew-periodic: h(t + 1) = h(t) − 1/n. Furthermore, it has derivative −1 except at points t (mod 1) ∈ B, where h is discontinuous. It follows that there is a set T of measure 1/n so that h attains a new minimum value at every t (mod 1) ∈ T. The set T is exactly the set of locations where it is possible to drive a full circle starting with 1/n gas; hence these are the x where z(B ∪ {x}) = x. Note that T is a finite union of intervals in S¹.

We define v as follows. If z(U) = ∅, then set (arbitrarily) v(U) = 0; otherwise, compute the set T corresponding to g(U). Given g(U), z(U) is uniformly distributed on T. Take the component (interval) of T containing z(U), rescale it to the interval [0, 1], and let v(U) be the image of z(U) under this rescaling.

Proposition 3.12 gives a deletion procedure that is equivariant under rotations, but not under other isometries of the circle (namely, reflections).

Question 3.1. Give necessary and sufficient conditions on λ and µ for the existence of an isometry-equivariant thinning on the circle S¹ from λ to µ.

Remark 3.5. It is easy to see that Proposition 3.12, with the group of rotations replaced by the (larger) group of isometries, does not hold in the case n = 2. Therefore, if there exists an isometry-equivariant thinning on S¹, then whenever there are exactly two points, it must either keep both of them or delete both of them. This is not always possible; for example, consider the case where S¹ is endowed with the uniform probability measure, λ = 2, and µ = 1. If X and Y are Poisson random variables with means 2 and 1 respectively, then

P(X = 2) > P(Y = 2)  and  P(X ∈ {0, 2}) > P(Y = 0).

Hence if f is a thinning on S¹ from 2 to 1, then f cannot be isometry-equivariant: whenever there are exactly two points on S¹, the first inequality implies that f cannot always keep both of them, and the second inequality implies that f cannot always delete both of them. However, by Theorem 3.11 and Corollary 3.4, there exists a rotation-equivariant thinning on S¹ from 2 to 1. Thus the set of (λ, µ) for which there is an isometry-equivariant thinning on S¹ from λ to µ is strictly smaller than the set for which there is a rotation-equivariant thinning.
We do not know whether Proposition 3.12, with the group of rotations replaced by the group of isometries, holds in the case n ≥ 4. Ori Gurel-Gurevich has found a construction in the case n = 3 (personal communication).

Theorem 3.11 can be easily generalized to some other symmetric measure spaces, by using only Proposition 3.12. For example, the 2-sphere S² = {x ∈ R³ : ‖x‖ = 1} with the group of rotations that fix a given diameter, or the torus R²/Z² with translations. However, we do not know whether there exists a rotation-equivariant thinning on the sphere, or an isometry-equivariant thinning on the torus.

Question 3.2. Give necessary and sufficient conditions on λ and µ for the existence of a rotation-equivariant (or isometry-equivariant) thinning from λ to µ on the 2-sphere S².

Similar questions about thinning can be asked in a more general setting. Let G be a group of measure-preserving bijections on a standard Borel space S and let M(S) be the space of simple point measures on S. We say that f : M(S) → M(S) is G-equivariant if f(γν) = γf(ν) for all ν ∈ M(S) and all γ ∈ G.

For the unit ball it is not difficult to show that an isometry-equivariant version of Proposition 3.5 holds. Indeed, since isometries of the ball preserve the norm, any selection scheme that depends only on the norms of the points will automatically be isometry-equivariant. The function x ↦ ‖x‖^d maps a uniformly distributed random variable on the unit ball to a uniformly distributed random variable on [0, 1], and any thinning procedure on [0, 1] can be composed with this mapping. Thus for the unit ball, Theorem 3.1 holds even with the additional requirement that the thinning f in condition (i) be isometry-equivariant.

Question 3.3. For which spaces (S, G) is the existence of a thinning from λ to µ equivalent to the existence of a G-equivariant thinning from λ to µ? As seen above, this property holds for S¹ with rotations (Theorem 3.11), and for the ball with isometries, but not for S¹ with isometries (Remark 3.5).

3.6.3  Splitting

We say that a deterministic thinning f on [0, 1] from λ to µ is a (λ, µ)-splitting if f(Π) and Π − f(Π) are both Poisson point processes on [0, 1], with respective intensities µ and λ − µ. (The existence of a (λ, µ)-splitting implies the existence of both a thinning from λ to µ and a thinning from λ to λ − µ.)

Question 3.4. Give necessary and sufficient conditions on (λ, µ) for the existence of a (λ, µ)-splitting.

Acknowledgements

Omer Angel, Alexander Holroyd, and Terry Soo were funded in part by NSERC. Alexander Holroyd was funded in part by Microsoft Research.

3.7  Bibliography

[1] K. Ball. Poisson thinning by monotone factors. Electron. Comm. Probab., 10:60–69 (electronic), 2005.

[2] M. Brand. Using your Head is Permitted. http://www.brand.site.co.il/riddles, March 2009.

[3] S. Evans. A zero-one law for linear transformations of Lévy noise. In Proceedings of the Conference on Algebraic Methods in Statistics and Probability, Contemporary Mathematics Series, 2010. To appear.

[4] O. Gurel-Gurevich and R. Peled. Poisson thickening. Preprint, arXiv:0911.5377.

[5] A. E. Holroyd and Y. Peres. Extra heads and invariant allocations. Ann. Probab., 33(1):31–52, 2005.

[6] J. F. C. Kingman. Poisson Processes, volume 3 of Oxford Studies in Probability. The Clarendon Press Oxford University Press, New York, 1993. Oxford Science Publications.

[7] R. Lyons, A. Holroyd, and T.
Soo. Poisson splitting by factors. Preprint.

[8] R.-D. Reiss. A Course on Point Processes. Springer Series in Statistics. Springer-Verlag, New York, 1993.

[9] F. Spitzer. A combinatorial lemma and its application to probability theory. Trans. Amer. Math. Soc., 82:323–339, 1956.

[10] S. M. Srivastava. A Course on Borel Sets, volume 180 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1998.

[11] V. Strassen. The existence of probability measures with given marginals. Ann. Math. Statist., 36:423–439, 1965.

[12] H. Thorisson. Coupling, Stationarity, and Regeneration. Probability and its Applications (New York). Springer-Verlag, New York, 2000.

[13] P. Winkler. Mathematical Puzzles: a Connoisseur's Collection. A K Peters Ltd., Natick, MA, 2004.

Chapter 4

Insertion and Deletion Tolerance of Point Processes³

³ A version of this chapter will be submitted for publication. Holroyd, A.E. and Soo, T. (2010) Insertion and Deletion Tolerance of Point Processes.

4.1  Introduction

Let Π be a point process on Rd. We always assume that point processes are simple and locally finite. Let ≺ denote absolute continuity in law; that is, for random variables X and Y, taking values in the same measurable space, X ≺ Y if and only if P(Y ∈ A) = 0 implies P(X ∈ A) = 0 for all measurable A. Let B denote the Borel σ-algebra on Rd and let L be Lebesgue measure. We say that Π is insertion-tolerant if for every S ∈ B with L(S) ∈ (0, ∞), if U is uniformly distributed on S and independent of Π, then

Π + δU ≺ Π,

where δx denotes the point measure at x ∈ Rd. Let M denote the space of simple point measures on Rd and let M be the product σ-field on M. The support of a measure µ ∈ M is denoted by

[µ] := {y ∈ Rd : µ({y}) = 1}.

A Π-point is an Rd-valued random variable Z such that Z ∈ [Π] a.s. A finite subprocess of Π is a point process F such that F(Rd) < ∞ and [F] ⊂ [Π] a.s. We say that Π is deletion-tolerant if for any Π-point Z we have

Π − δZ ≺ Π.

For S ∈ B we define the restriction µ|S of µ ∈ M to S by µ|S(A) := µ(A ∩ S), for all A ∈ B.

We will prove the following equivalences for insertion-tolerance and deletion-tolerance.

Theorem 4.1 (Deletion-tolerance). Let Π be a point process on Rd. The following are equivalent.

(i) The point process Π is deletion-tolerant.

(ii) For any finite subprocess F of Π, we have Π − F ≺ Π.

(iii) For all S ∈ B with finite Lebesgue measure, Π|S^c ≺ Π.

Theorem 4.2 (Insertion-tolerance). Let Π be a point process on Rd. The following are equivalent.

(i) The point process Π is insertion-tolerant.

(ii) For any Borel sets S1, . . . , Sn of positive finite Lebesgue measure, if U1, . . . , Un are independent random variables, where each Ui is uniformly distributed in Si, and Π is independent of (U1, . . . , Un), then Π + δU1 + · · · + δUn ≺ Π.

(iii) If (X1, . . . , Xn) is a random vector in (Rd)^n that admits a conditional law given Π that is absolutely continuous with respect to Lebesgue measure a.s., then Π + δX1 + · · · + δXn ≺ Π.

We will prove a stronger version of Theorem 4.2 which allows a random finite number of points to be added. We say that a point process is translation-invariant if it is invariant in law under all translations of Rd. In this case further equivalences are available as follows.

Lemma 4.3. A point process Π on Rd is insertion-tolerant if and only if there exists S ∈ B with L(S) ∈ (0, ∞) such that if U is independent of Π and uniformly distributed in S, then Π + δU ≺ Π.
Let Π be a translation-invariant point process with finite intensity; that is, EΠ([0, 1]^d) < ∞. We let Π∗ be its Palm version. See Section 4.4 or [7, Chapter 11] for a definition. Informally, one can regard Π∗ as the point process Π conditioned to have a point at the origin.

Theorem 4.4. Let Π be a translation-invariant ergodic point process on Rd of finite intensity and let Π∗ be its Palm version. The following are equivalent.

(i) The point process Π is insertion-tolerant.

(ii) Π + δ0 ≺ Π∗.

Condition (4.1) below appears to be a natural analogue of Theorem 4.4 (ii) for deletion-tolerance. However, it is only sufficient and not necessary for deletion-tolerance.

Theorem 4.5. Let Π be a translation-invariant point process on Rd of finite intensity and let Π∗ be its Palm version. If

Π∗ − δ0 ≺ Π,    (4.1)

then Π is deletion-tolerant.

In Section 4.2, Example 4.3 shows that a deletion-tolerant process may not satisfy (4.1), and Example 4.6 shows that the analogue of Lemma 4.3 does not hold for deletion-tolerance.

Next we will illustrate some applications of insertion-tolerance and deletion-tolerance in the contexts of continuum percolation and stable matchings. We will prove generalizations of earlier results. The Boolean continuum percolation model for point processes is defined as follows (see [9]). Let ‖·‖ denote the Euclidean norm on Rd. For R > 0 and µ ∈ M, consider the set O(µ) := ∪_{x∈[µ]} B(x, R), where B(x, R) := {y ∈ Rd : ‖x − y‖ < R} is the open ball of radius R with center x. We call O(µ) the occupied region. The connected components of O(µ) are called clusters.

Theorem 4.6. Let Π be a translation-invariant ergodic insertion-tolerant point process on Rd. For any R > 0, the occupied region O(Π) contains at most one unbounded cluster a.s.

The proof of Theorem 4.6 is similar to the uniqueness proofs in [9, Chapter 7], which are based on the argument of Burton and Keane [2].

Next we turn our attention to stable matchings of point processes (see [5] for background). Let R and B be point processes on Rd with finite intensity. A one-colour matching scheme for R is a point process M of unordered pairs {x, y} ⊂ Rd such that almost surely [M] is the edge set of a simple graph ([R], [M]) in which every vertex has degree exactly one. Similarly, a two-colour matching scheme for R and B is a point process M of unordered pairs {x, y} ⊂ Rd such that almost surely [M] is the edge set of a simple bipartite graph ([R], [B], [M]) in which every vertex has degree exactly one. In either case we write M(x) = y if and only if {x, y} ∈ [M]. In the one-colour case, we say that a matching scheme is stable if almost surely there do not exist distinct points x, y ∈ [R] satisfying

‖x − y‖ < min{‖x − M(x)‖, ‖y − M(y)‖},    (4.2)

and in the two-colour case we say that a matching scheme is stable if almost surely there do not exist x ∈ [R] and y ∈ [B] satisfying (4.2).

If R is a Poisson point process on Rd, then there exists a unique one-colour stable matching scheme for R, and if B is an independent copy of R, then there exists a unique two-colour stable matching scheme for R and B; furthermore, in both cases the scheme is given by the Gale–Shapley marriage algorithm [4], obtained by iteratively matching mutually closest pairs [5, Proposition 9]. The procedure of iteratively matching mutually closest pairs (sketched in code below) gives a unique stable matching scheme for more general point processes than the Poisson.
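As an illustration of this procedure (ours; a finite-window toy version that ignores boundary effects and any leftover unmatched point), one can match a finite point set by repeatedly pairing mutually nearest neighbours:

```python
import numpy as np

def mutually_closest_matching(points):
    """Iterated mutually-closest matching on a finite set of points
    (an (n, d) array with distinct inter-point distances)."""
    pts = {i: np.asarray(p) for i, p in enumerate(points)}
    matching = []
    while len(pts) >= 2:
        ids = list(pts)
        # nearest neighbour of each remaining point
        nn = {i: min((j for j in ids if j != i),
                     key=lambda j: np.linalg.norm(pts[i] - pts[j])) for i in ids}
        pairs = [(i, j) for i, j in nn.items() if i < j and nn[j] == i]
        if not pairs:       # cannot happen when all distances are distinct
            break
        for i, j in pairs:  # match all mutually closest pairs, then iterate
            matching.append((pts.pop(i), pts.pop(j)))
    return matching
```

With distinct distances the globally closest pair is always mutually closest, so each round removes at least two points and the loop terminates.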
In the case that the stable matching scheme is unique, we often refer to it as a stable matching rule. We say that µ ∈ M has a descending chain if there exist x_1, x_2, … ∈ [µ] with ‖x_{i−1} − x_i‖ ≥ ‖x_i − x_{i+1}‖ for all i. We say that µ is non-equidistant if for all x, y, u, v ∈ [µ] such that {x, y} ≠ {u, v} and x ≠ y, we have ‖x − y‖ ≠ ‖u − v‖. If R is a translation-invariant point process on R^d with finite intensity that almost surely is non-equidistant and has no descending chains, then there exists a unique one-colour stable matching rule for R [5, Proposition 9]. For the two-colour case, let R and B be point processes on R^d of equal finite intensity, jointly ergodic under translations. If the point process R + B is almost surely non-equidistant and has no descending chains, then there exists a unique two-colour stable matching rule for R and B [5, Proposition 9]. See [3] for more information on descending chains; in particular, it is shown there that many well-studied point processes (including the Poisson) do not have descending chains.

In this paper, our interest in stable matching lies in the typical distance between matched pairs. Let R be a translation-invariant point process on R^d with finite intensity that almost surely is non-equidistant and has no descending chains. Let M be the unique one-colour stable matching rule for R. Consider the distribution function

F(r) := (ER([0, 1]^d))^{−1} E#{x ∈ [R] ∩ [0, 1)^d : ‖x − M(x)‖ ≤ r}.  (4.3)

Following [5], let X be a random variable with probability measure P* and expectation operator E* such that P*(X ≤ r) = F(r) for all r ≥ 0. One may interpret X as the distance from the origin to its partner under the Palm version of (R, M), in which we condition on the presence of an R-point at the origin; see [5] for details. For two point processes R and B with a unique two-colour stable matching we define X, P*, and E* in the same way.

Theorem 4.7 (One-colour stable matching). Let R be a translation-invariant point process on R^d with finite intensity that almost surely is non-equidistant and has no descending chains. If R is insertion-tolerant or deletion-tolerant, then the one-colour stable matching rule satisfies E*X^d = ∞.

Theorem 4.8 (Two-colour stable matching). Let R and B be independent translation-invariant jointly ergodic point processes on R^d with equal finite intensity such that the point process R + B is non-equidistant and has no descending chains. If R is deletion-tolerant or insertion-tolerant, then the two-colour stable matching rule satisfies E*X^d = ∞.

Theorems 4.7 and 4.8 strengthen the earlier results of [5] in the following way. Poisson point processes are insertion-tolerant and deletion-tolerant (see Example 4.1). In [5], Theorem 4.7 is proved in the case of Poisson processes, but the same proof is valid in the case that R is both insertion-tolerant and deletion-tolerant. Similarly, in [5], Theorem 4.8 is proved in the case of Poisson processes, but the same proof is valid in the case that R is insertion-tolerant. The following result is proved in [5] for the case of Poisson processes, but the argument given there applies more generally, as follows.

Theorem 4.9 ([5, Theorem 5]). Let R be a translation-invariant ergodic non-equidistant point process on R^d with no descending chains and intensity one. The one-colour stable matching rule satisfies P*(X > r) ≤ Cr^{−d} for all r > 0, for some constant C = C(d) that does not depend on R.
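The distribution function F of (4.3) and the tail bound of Theorem 4.9 can be probed empirically. The sketch below (again ours, with hypothetical names; a finite box introduces boundary effects absent from the infinite-volume statement) matches an intensity-one Poisson sample by iterating mutually closest pairs, a compact variant of the sketch above, and compares the empirical tail of the matching distance with r^{−d}.

```python
import numpy as np

def mutually_closest_matching(pts):
    """Compact variant of stable_matching above: returns a dict
    mapping each matched index to the index of its partner."""
    alive = list(range(len(pts)))
    partner = {}
    while len(alive) >= 2:
        D = np.linalg.norm(pts[alive][:, None] - pts[alive][None, :], axis=-1)
        np.fill_diagonal(D, np.inf)
        nn = D.argmin(axis=1)
        mutual = [(a, int(nn[a])) for a in range(len(alive))
                  if nn[a] > a and nn[int(nn[a])] == a]
        if not mutual:  # cannot happen for a non-equidistant set
            break
        dead = set()
        for a, b in mutual:
            partner[alive[a]], partner[alive[b]] = alive[b], alive[a]
            dead.update((a, b))
        alive = [alive[i] for i in range(len(alive)) if i not in dead]
    return partner

rng = np.random.default_rng(2)
L, d = 40.0, 2
pts = rng.uniform(0, L, size=(rng.poisson(L**d), d))  # intensity-one sample
m = mutually_closest_matching(pts)
# restrict to a central window to reduce (not remove) boundary effects
centre = [i for i in m if np.all(np.abs(pts[i] - L / 2) < L / 4)]
dist = np.array([np.linalg.norm(pts[i] - pts[m[i]]) for i in centre])
for r in (0.5, 1.0, 2.0, 4.0):
    print(r, float(np.mean(dist > r)), "vs r^-d =", r**-d)
```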
Thus Theorems 4.7 and 4.9 provide complementary lower and upper bounds on the tail of X in the case of one-colour stable matching. It is an open problem to determine the correct power law for the two-colour stable matching rule for Poisson point processes in the case d ≥ 2. In the case d = 1, the two-colour stable matching rule for independent Poisson point processes satisfies E*X^{1/2} = ∞ and P*(X > r) ≤ Cr^{−1/2} for all r > 0, where C is some finite constant. See [5] for details.

The rest of the paper is organized as follows. The next section is dedicated to examples. In Section 4.3 we prove some of the more elementary results, including Theorems 4.1 and 4.2. Despite the similarities between insertion-tolerance and deletion-tolerance, the proof of Theorem 4.2 relies on the following lemma, whose analogue does not hold for deletion-tolerance (see Example 4.5).

Lemma 4.10 (Monotonicity of insertion-tolerance). Let Π be a point process on R^d and let S ∈ B have finite nonzero Lebesgue measure. If Π is insertion-tolerant, and U is uniformly distributed in S and independent of Π, then Π + δ_U is insertion-tolerant.

Section 4.4 deals with Theorems 4.4 and 4.5. In Sections 4.5 and 4.6 we prove the results concerning continuum percolation and stable matchings.

4.2 Examples

First, we give examples of (translation-invariant) point processes that have some or none of the properties of insertion-tolerance and deletion-tolerance. We also provide examples to show that certain results concerning insertion-tolerance do not have obvious analogues in the setting of deletion-tolerance. Second, we give examples to show that the conditions in the results concerning continuum percolation and stable matching are needed. Finally, we study the perturbed lattice and the point process of Gaussian zeros.

4.2.1 Elementary examples

Example 4.1 (Poisson point process). The Poisson point process Π on R^d with positive intensity λ is both insertion-tolerant and deletion-tolerant. This follows immediately from Theorem 4.4 (ii) and Theorem 4.5, together with the relation Π* =_d Π + δ_0. It is also easy to give a direct proof of insertion-tolerance and to prove deletion-tolerance via Theorem 4.1 (iii). ♦

For S ⊂ R^d and x ∈ R^d, write x + S := {x + z : z ∈ S}.

Example 4.2 (Randomly shifted lattice). Let U be uniformly distributed in [0, 1]^d. Consider the point process Λ := U + Z^d. Clearly, Λ is translation-invariant. Since no ball of radius 1/4 can contain more than one Λ-point, by Theorem 4.2 (ii), Λ is not insertion-tolerant. Also, every ball of radius 5 must contain Λ-points; thus, by Theorem 4.1 (iii), Λ is not deletion-tolerant. ♦

Example 4.3 (Randomly shifted site percolation). Let {Y_z}_{z∈Z^d} be i.i.d. {0, 1}-valued random variables with EY_0 = p > 0. Consider the random set W := {z ∈ Z^d : Y_z = 1}. Let U be uniformly distributed in [0, 1]^d and independent of W. From Theorem 4.1 (iii), it is easy to see that the point process Λ given by [Λ] := U + W is deletion-tolerant. Clearly, as in Example 4.2, Λ is not insertion-tolerant. Moreover, it is easy to verify that almost surely [Λ] ∩ Z^d = ∅, but [Λ*] ⊂ Z^d. Thus (4.1) is not satisfied. ♦

Example 4.4 (Superposition of a Poisson point process with a randomly shifted lattice). Let Π be a Poisson point process on R^d and let Λ be a randomly shifted lattice (as in Example 4.2) that is independent of Π. Consider the point process Γ := Π + Λ. The insertion-tolerance of Π is inherited by Γ, but Γ is no longer deletion-tolerant: as in Example 4.2, every ball of radius 5 must contain Γ-points. ♦

Example 4.5 (Non-monotonicity of deletion-tolerance). We show that, in contrast with Lemma 4.10, deleting a point from a deletion-tolerant process may destroy deletion-tolerance. Let (N_i)_{i∈Z} be i.i.d., taking values 0, 1, 2 each with probability 1/3, and let Π have exactly N_i points in the interval [i, i + 1), for each i ∈ Z, with their locations chosen independently and uniformly at random in the interval. It is easy to verify with Theorem 4.1 (iii) that Π is deletion-tolerant. Consider the Π-point Z defined as follows. If the first integer interval [i, i + 1) to the right of the origin that contains at least one Π-point contains exactly two Π-points, then let Z be the point in this interval that is closest to the origin; otherwise, let Z be the closest Π-point to the left of the origin. The point process Π′ := Π − δ_Z has the property that the first interval to the right of the origin that contains any Π′-points contains exactly one Π′-point. Let Z′ be the first Π′-point to the right of the origin. The process Π′′ := Π′ − δ_{Z′} has the property that with non-zero probability the first interval to the right of the origin that contains any Π′′-points contains exactly two Π′′-points. Thus Π′ is not deletion-tolerant. If desired, the above example can be made translation-invariant by applying a random shift U as before. ♦

Example 4.6 (The existence of a set S such that Π|_{S^c} ≺ Π does not imply deletion-tolerance). Let Λ be a randomly shifted lattice in d = 1 (as in Example 4.2) and let Π be a Poisson point process on R of intensity one that is independent of Λ. Let Y := ∪_{x∈[Π]} B(x, 5), and consider Γ := Λ|_{Y^c}. Let Z be the first Γ-point to the right of the origin such that Z + i ∈ [Γ] for all integers |i| ≤ 20. Clearly, Γ − δ_Z ⊀ Γ (deleting Z leaves an isolated empty lattice site surrounded by Γ-points, a configuration that Γ produces with probability zero), and Γ is not deletion-tolerant. On the other hand, since Π is insertion-tolerant, Γ|_{B(0,5)^c} ≺ Γ. (Note the contrast with Lemma 4.3 for insertion-tolerance.) ♦

4.2.2 Continuum percolation and stable matching

Example 4.7 (A point process that is neither insertion-tolerant nor deletion-tolerant and has an infinite number of unbounded clusters). Let {Y_z}_{z∈Z} be i.i.d. {0, 1}-valued random variables with EY_0 = 1/2. Let W := {(x_1, x_2) ∈ Z^2 : Y_{x_2} = 1}, and let U be uniformly distributed in [0, 1]^2 and independent of W. Consider the point process Λ with support U + W. Thus Λ is a randomly shifted lattice with columns randomly deleted. As in Example 4.2, Λ is neither insertion-tolerant nor deletion-tolerant. If the occupied region is given by a union of balls of radius 2, then the occupied region O(Λ) contains infinitely many unbounded clusters. ♦

Example 4.8 (A point process that is not insertion-tolerant, but is deletion-tolerant and has an infinite number of unbounded clusters). Let Λ be a randomly shifted super-critical site percolation in d = 2, as in Example 4.3. Let {Λ_i}_{i∈Z} be independent copies of Λ. Let {Y_z}_{z∈Z} be i.i.d. {0, 1}-valued random variables, independent of the Λ_i, with EY_0 = 1/2. Consider the point process Γ with support

[Γ] = ∪_{i∈Z : Y_i = 1} ([Λ_i] × {i}).

Thus Γ is a point process in R^3, obtained by stacking independent copies of Λ. Clearly, the point process Γ is deletion-tolerant, but not insertion-tolerant. If the occupied region is given by a union of balls of radius 2, then the occupied region O(Γ) contains infinitely many unbounded clusters. ♦

Open Problem 4.1. Does there exist a translation-invariant ergodic deletion-tolerant point process on R^2 that has an infinite number of unbounded clusters?
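The Boolean model in these examples is also easy to explore numerically. The following sketch (ours; a finite window can only exhibit large clusters, not genuinely unbounded ones) builds the occupied region of a sample as a union of balls of radius R and counts its clusters by union-find, using the fact that two balls B(x, R) and B(y, R) intersect exactly when ‖x − y‖ < 2R.

```python
import numpy as np

def clusters(points, R):
    """Connected components of O = union of B(x, R) over the sample,
    via union-find on the overlap graph (balls meet iff centres are
    within distance 2R)."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < 2 * R:
                parent[find(i)] = find(j)
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())

rng = np.random.default_rng(3)
pts = rng.uniform(0, 20, size=(rng.poisson(400), 2))  # intensity-one sample
for R in (0.3, 0.6, 1.2):
    cs = clusters(pts, R)
    print(R, len(cs), "clusters; largest:", max(map(len, cs)))
```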
Example 4.9 (One-colour matching with two copies of the lattice with small perturbations). Let W = {W_i}_{i∈Z^d} and Y = {Y_i}_{i∈Z^d} be two sequences of i.i.d. random variables uniformly distributed in [0, 0.001]^d. Let U be uniformly distributed in [0, 1]^d and take W, Y, U to be independent. Let R be the point process with support

[R] = U + ({i + W_i : i ∈ Z^d} ∪ {i + Y_i : i ∈ Z^d}).

It is easy to verify that R is neither insertion-tolerant nor deletion-tolerant, and that R has no descending chains and is non-equidistant. The one-colour stable matching rule satisfies ‖x − M(x)‖ < 1 for all x ∈ [R] a.s. ♦

Example 4.10 (Two-colour matching with the randomly shifted lattice). Let R and B be two independent copies of the randomly shifted lattice Z as defined in Example 4.2. Although R + B is not non-equidistant, it is easy to verify that there is a unique two-colour stable matching rule for R and B a.s., and it satisfies ‖x − M(x)‖ < 1/2 for all x ∈ [R] a.s. ♦

Open Problem 4.2 (Two-colour matching with two copies of the lattice with small perturbations). As in Example 4.9, let W = {W_i}_{i∈Z^d} and Y = {Y_i}_{i∈Z^d} be two sequences of i.i.d. random variables uniformly distributed in [0, 0.001]^d. Let U be uniformly distributed in [0, 1]^d and take W, Y, U to be independent. Let R and B be the point processes with supports [R] = U + {i + W_i : i ∈ Z^d} and [B] = U + {i + Y_i : i ∈ Z^d}. In the two-colour stable matching rule of R and B, what is the optimal tail behaviour for X?

4.2.3 Perturbed lattices and Gaussian zeros

The next proposition provides a class of examples of point processes that are neither insertion-tolerant nor deletion-tolerant. For a point process Π and a measurable function h : R^d → R, write

Π(h) := ∫ h(x) dΠ(x) = ∑_{x∈[Π]} h(x).  (4.4)

Proposition 4.11 (Low-fluctuation processes). Let Π be a point process on R^d with finite intensity. Let h : R^d → [0, 1] be a measurable function such that h(x) = 1 for all x ∈ B(0, 1), and h(x) = 0 for all x ∈ B(0, 2)^c. For each n ∈ Z^+, set h_n(x) := h(x/n) for all x ∈ R^d.

(i) If Π(h_n) − EΠ(h_n) → 0 in probability as n → ∞, then Π is neither insertion-tolerant nor deletion-tolerant.

(ii) Assume in addition that Π is translation-invariant and ergodic under shifts of Z^d. If

lim sup_{n→∞} (Π(h_n) − EΠ(h_n)) < ∞ a.s.,

then Π is not insertion-tolerant, and if

lim inf_{n→∞} (Π(h_n) − EΠ(h_n)) > −∞ a.s.,

then Π is not deletion-tolerant.

Proof of Proposition 4.11 (i). Let m_n := EΠ(h_n). Since Π(h_n) − m_n → 0 in probability, there exists a (deterministic) subsequence {n_k} such that Π(h_{n_k}) − m_{n_k} → 0 a.s. On the other hand, if U is uniformly distributed in B(0, 1) and independent of Π, then (Π + δ_U)(h_{n_k}) − m_{n_k} → 1 a.s. Therefore Π is not insertion-tolerant. Similarly, if Z is the Π-point closest to the origin (where ties are broken using a lexicographic ordering on R^d), then (Π − δ_Z)(h_{n_k}) − m_{n_k} → −1 a.s. So Π is not deletion-tolerant.

For each x ∈ R^d let θ_x be the translation defined by θ_x(y) := y + x for all y ∈ R^d. For a translation θ of R^d and a point measure µ ∈ M, we define θµ(S) := µ(θ^{−1}S) for all S ∈ B.

Proof of Proposition 4.11 (ii). Let f : M → R be a measurable function such that

f(Π) = lim sup_{n→∞} (Π(h_n) − EΠ(h_n)) a.s.

Since Π is translation-invariant, for every translation θ of Z^d we have that f(θΠ) =_d f(Π). Moreover, since Π is also ergodic, for all x ∈ R we have that

(1/N) ∑_{i=1}^N 1[f(θ_{(i,0,…,0)}Π) ≤ x] → P(f(Π) ≤ x) a.s., as N → ∞.

For all y ∈ R^d we have

lim sup_{n→∞} ((Π + δ_y)(h_n) − EΠ(h_n)) = f(Π) + 1 a.s.

Thus f(Π + δ_y) = f(Π) + 1 a.s. Hence if U is uniformly distributed in B(0, 1) and independent of Π, then for all x ∈ R we have that

(1/N) ∑_{i=1}^N 1[f(θ_{(i,0,…,0)}(Π + δ_U)) ≤ x] → P(f(Π) + 1 ≤ x) a.s., as N → ∞.

Therefore Π is not insertion-tolerant. The proof for the case of deletion-tolerance is similar.

Example 4.11 (Perturbed lattices). Let {Y_z}_{z∈Z^d} be i.i.d. R^d-valued random variables. Consider the point process Λ given by [Λ] := {z + Y_z : z ∈ Z^d}. Note that Λ is translation-invariant and ergodic under shifts of Z^d. It is easy to see that (for all dimensions d) if Y_0 is bounded, then Λ is neither insertion-tolerant nor deletion-tolerant: if Y_0 is bounded, then Λ(B(0, 1)) ≤ M for some M ≥ 0, so by Theorem 4.2 (ii), Λ is not insertion-tolerant (otherwise we could add M + 1 points in B(0, 1) to Λ); also, if Y_0 is bounded, then Λ(B(0, N)) ≥ 1 for some N ≥ 0, so by Theorem 4.1 (iii), Λ is not deletion-tolerant.

For the special case d = 1, we can prove more. We show that if E|Y_0| < ∞, then Λ is neither insertion-tolerant nor deletion-tolerant. By Proposition 4.11 (ii) with h(x) = 1_{[0,1)}(x), to prove that Λ is not insertion-tolerant it suffices to show that

lim sup_{n→∞} (Λ[−n, n) − EΛ[−n, n)) < ∞ a.s.

The following simple calculation shows that EΛ[0, 1) = 1:

EΛ[0, 1) = E ∑_{z∈Z} 1[Y_z + z ∈ [0, 1)] = ∑_{z∈Z} P(Y_z + z ∈ [0, 1)) = ∑_{z∈Z} P(Y_0 ∈ [−z, −z + 1)) = 1.

Thus it suffices to show that

lim sup_{n→∞} (Λ[0, n) − n) < ∞ a.s.

Define W := ∑_{z≤0} 1[z + Y_z ≥ 0]; thus W is the number of Λ-points on the nonnegative real axis that originated from sites on the negative real axis. Since Y_0 has finite mean, W also has finite mean. Clearly,

lim sup_{n→∞} (Λ[0, n) − n) ≤ W + W′,

for some W′ =_d W. The proof that Λ is not deletion-tolerant is similar. ♦

Remark 4.1. In Example 4.11, the calculation showing that EΛ[0, 1) = 1 can be viewed as a particular instance of the mass transport principle [1]. See also Section 5.4. ♦

Open Problem 4.3. Does there exist a perturbed lattice that is insertion-tolerant? In particular, in the case d = 1, if the lattice perturbation has infinite mean, is the perturbed lattice insertion-tolerant? What are the possible combinations of insertion-tolerance and deletion-tolerance for perturbed lattices?

Example 4.12 (Gaussian analytic zeros on the plane). Let {a_n}_{n=0}^∞ be i.i.d. standard complex Gaussian random variables (the distribution of a_0 has density π^{−1} exp(−|z|^2) with respect to Lebesgue measure on the complex plane) and let f(z) := ∑_{n=0}^∞ (a_n/√(n!)) z^n. The set of zeros of f forms a translation-invariant point process Π in the complex plane. Sodin and Tsirelson [10, Equation (0.6)] show that Π satisfies the conditions of Proposition 4.11 (i), with a twice differentiable function h; in particular they show that Var Π(h_n) → 0 as n → ∞. Hence Π is neither insertion-tolerant nor deletion-tolerant. ♦

4.3 Basic results

In this section we give the proofs of the elementary results concerning insertion-tolerance and deletion-tolerance. The following simple application of Fubini's theorem will be useful.

Remark 4.2. Let Π be a point process on R^d. If S ∈ B is a set of positive finite measure and U is uniformly distributed in S and independent of Π, then

P(Π + δ_U ∈ ·) = (1/L(S)) ∫_S P(Π + δ_x ∈ ·) dx. ♦

For A ∈ M, we set A_x := {µ ∈ M : µ + δ_x ∈ A}. Thus A_x is the set of measures µ for which adding a point at x results in an element of A.

Proof of Lemma 4.10. Let Π be insertion-tolerant. We first show that for almost all x ∈ R^d the point process Π + δ_x is insertion-tolerant.
The proof follows from the definition of A_x. Let V be uniformly distributed in T ∈ B and independent of Π. Suppose A ∈ M is such that P(Π + δ_x + δ_V ∈ A) = P(Π + δ_V ∈ A_x) > 0. Since Π is insertion-tolerant, P(Π ∈ A_x) = P(Π + δ_x ∈ A) > 0.

Next, let U be uniformly distributed in S ∈ B and independent of (Π, V). Let P(Π + δ_U ∈ A) = 0, for some A ∈ M. By Remark 4.2, P(Π + δ_x ∈ A) = 0 for almost all x ∈ S, and since Π + δ_x is insertion-tolerant for almost all x ∈ R^d, we have P(Π + δ_x + δ_V ∈ A) = 0 for almost all x ∈ S. Applying Remark 4.2 to the process Π + δ_V, we obtain that P(Π + δ_U + δ_V ∈ A) = 0.

With Lemma 4.10 we prove that insertion-tolerance implies the following stronger variant of Theorem 4.2 (ii), (iii), where the number of points added may be random. If (X_1, …, X_n) is a random vector in (R^d)^n with law that is absolutely continuous with respect to Lebesgue measure, then we say that the random (unordered) set {X_1, …, X_n} is nice. A finite point process F is nice if for all n ∈ N, conditional on F(R^d) = n, the support [F] is equal in distribution to some nice random set; we also say that the law of F is nice if F is nice.

Corollary 4.12 (Weakly dependent insertion). Let Π be an insertion-tolerant point process on R^d and let F be a finite point process on R^d. If F admits a conditional law given Π that is nice, then Π + F ≺ Π.

Proof of Theorem 4.2. Clearly, (iii) ⇒ (ii) ⇒ (i). From Corollary 4.12, it is immediate that (i) ⇒ (iii).

Proof of Corollary 4.12. Let U be uniformly distributed in [0, 1] and independent of Π. Let f : M × [0, 1] → M be a measurable function such that for all π ∈ M we have that f(π, U) is a nice finite point process. It suffices to show that Π + f(Π, U) ≺ Π. Consider the events

E_{n,k} := {f(Π, U)(R^d) = n} ∩ {[f(Π, U)] ⊂ B(0, k)}.

Let {U_{r,k}}_{r=1}^n be i.i.d. random variables uniformly distributed in B(0, k) and independent of (Π, U). Let F_{n,k} := ∑_{r=1}^n δ_{U_{r,k}}. By applying Lemma 4.10 n times, we see that Π + F_{n,k} ≺ Π; thus it suffices to show that Π + f(Π, U) ≺ Π + F_{n,k} for some n, k ≥ 0.

For each x ∈ (R^d)^n, write x = (x_1, …, x_n). If S ⊂ R^d has n elements, then we write S⃗ := (s_1, …, s_n) ∈ (R^d)^n, where the s_i are the elements of S in lexicographic order. For each n ≥ 0, let g_n : (R^d)^n × M → R be a measurable function such that g_n(·, π) is the p.d.f. (with respect to n-dimensional Lebesgue measure) of the lexicographically ordered points of [f(π, U)], conditional on f(π, U)(R^d) = n. Let Q be the law of Π and let A ∈ M. Thus

P(Π + f(Π, U) ∈ A, E_{n,k}) = ∫ ∫_{B(0,k)^n} 1[π + ∑_{i=1}^n δ_{x_i} ∈ A] g_n(x, π) dx dQ(π).  (4.5)

On the other hand,

P(Π + F_{n,k} ∈ A) = (1/L(B(0, k))^n) ∫ ∫_{B(0,k)^n} 1[π + ∑_{i=1}^n δ_{x_i} ∈ A] dx dQ(π).  (4.6)

If P(Π + f(Π, U) ∈ A) > 0, then there exist n, k ≥ 0 such that P(Π + f(Π, U) ∈ A, E_{n,k}) > 0; moreover, from (4.5) and (4.6) we deduce that P(Π + F_{n,k} ∈ A) > 0.

The proof of Theorem 4.1 relies on the following lemma.

Lemma 4.13. Let Π be a point process on R^d. If F is a finite subprocess of Π, then there exists S ∈ B with L(S) ∈ (0, ∞) such that

P(Π|_S = F) > 0.  (4.7)

Proof. A ball B(x, r) is rational if x ∈ Q^d and r ∈ Q^+. Let C be the collection of all finite unions of rational balls. Clearly C is countable. We will show that there exists S ∈ C satisfying (4.7). Since Π is locally finite, it follows that there exists a C-valued random variable 𝒮 such that Π|_𝒮 = F a.s. Since

∑_{S∈C} P(Π|_S = F, 𝒮 = S) = P(Π|_𝒮 = F) = 1,

at least one of the terms of the sum is nonzero.

With Lemma 4.13 we first prove the following special case of Theorem 4.1.

Lemma 4.14. Let Π be a point process on R^d. The following conditions are equivalent.

(i) The point process Π is deletion-tolerant.

(ii) If F is a finite subprocess of Π such that F(R^d) is bounded almost surely, then Π − F ≺ Π.

Proof. Clearly, (ii) implies (i). We show by induction on the number of points of the finite subprocess that (i) implies (ii). Assume that Π is deletion-tolerant. Suppose that (ii) holds for every finite subprocess F of Π such that F(R^d) ≤ n. Let F′ be a finite subprocess of Π with F′(R^d) ≤ n + 1. Observe that on the event that F′(R^d) ≠ 0, we have F′ = F + δ_Z, where F is a finite subprocess of Π with F(R^d) ≤ n and Z is some Π-point. Let P(Π − F′ ∈ A) > 0, for some A ∈ M. If P(Π − F′ ∈ A, F′(R^d) = 0) > 0, then clearly P(Π ∈ A) > 0. Thus we assume without loss of generality that F′ = F + δ_Z, so that P(Π − F − δ_Z ∈ A) > 0. By applying Lemma 4.13 to the point process Π − F, conditioned on Π − F − δ_Z ∈ A, there exists S ∈ B with finite Lebesgue measure so that

P((Π − F)|_S = δ_Z | Π − F − δ_Z ∈ A) > 0.  (4.8)

Let A_S := {µ + δ_x : µ ∈ A, x ∈ S}, so that by the definition of A_S and (4.8) we have P(Π − F ∈ A_S) > 0. By the inductive hypothesis, P(Π ∈ A_S) > 0. Observe that if Π ∈ A_S, there is an x ∈ [Π] ∩ S such that Π − δ_x ∈ A. Define a Π-point R as follows. If Π ∈ A_S, let R be the point of [Π] ∩ S closest to the origin (where ties are broken using a lexicographic order) such that Π − δ_R ∈ A; otherwise let R be the Π-point closest to the origin. Hence P(Π − δ_R ∈ A) ≥ P(Π ∈ A_S) > 0. Since Π is deletion-tolerant, P(Π ∈ A) > 0.

Proof of Theorem 4.1. We show that (iii) ⇒ (i) ⇒ (ii) ⇒ (iii). Assume that (iii) holds and that for some Π-point Z and some A ∈ M we have P(Π − δ_Z ∈ A) > 0. By Lemma 4.13, P(Π|_{S^c} ∈ A) > 0 for some S ∈ B with finite Lebesgue measure. From (iii), P(Π ∈ A) > 0. Thus (i) holds and Π is deletion-tolerant.

Assume that (i) holds. Let F be a finite subprocess of Π and suppose that for some A ∈ M we have P(Π − F ∈ A) > 0. Define F_n as follows: take F_n = F if F(R^d) = n, and otherwise set F_n = 0. Note that for some n we have P(Π − F_n ∈ A) > 0. Since Π is deletion-tolerant, by Lemma 4.14, P(Π ∈ A) > 0. Thus (ii) holds. Clearly (ii) implies (iii), since for any set S ∈ B with finite measure, the point process with support [Π] ∩ S is a finite subprocess of Π.

For A ∈ M and a translation θ of R^d we write θA := {θµ : µ ∈ A}.

Proof of Lemma 4.3. Let U, V be uniformly distributed on S, T ∈ B respectively, and let U, V, Π be independent. Assume that Π + δ_U ≺ Π and let A ∈ M be such that P(Π + δ_V ∈ A) > 0. We will show that P(Π ∈ A) > 0. Since Π is translation-invariant, for all A′ ∈ M we have P(Π + δ_{θU} ∈ A′) = P(Π + δ_U ∈ θ^{−1}A′), and thus Π + δ_{θU} ≺ Π for all translations θ of R^d. By Remark 4.2, T′ := {w ∈ T : P(Π + δ_w ∈ A) > 0} has positive Lebesgue measure. By the Lebesgue density theorem [8, Corollary 2.14], there exist x ∈ T′, y ∈ S, and ε > 0 so that

L(T′ ∩ B(x, ε)) > (1/2)L(B(x, ε)) and L(S ∩ B(y, ε)) > (1/2)L(B(y, ε)).

Thus, with z = x − y, the set T′ ∩ θ_z S has positive Lebesgue measure. Thus, by Remark 4.2, P(Π + δ_{θ_z U} ∈ A) > 0. Since Π + δ_{θ_z U} ≺ Π, we have P(Π ∈ A) > 0.

4.4 Palm equivalences

In this section we discuss insertion-tolerance and deletion-tolerance in the context of Palm processes. Let Π be a translation-invariant point process with finite intensity λ.
The Palm version of Π is the point process Π* such that for all A ∈ M and all S ∈ B with finite Lebesgue measure, we have

E#{x ∈ [Π] ∩ S with Π ∈ θ_x A} = λL(S) · P(Π* ∈ A),  (4.9)

where for any set B we let #B denote its cardinality. Sometimes (4.9) is called the Palm property. By a monotone class argument, a consequence of (4.9) is that for all measurable f : M × R^d → [0, ∞) we have

E ∫_{R^d} f(θ_{−x}Π, x) dΠ(x) = λ ∫_{R^d} Ef(Π*, x) dx;  (4.10)

see [7, Chapter 11].

Proof of Theorem 4.5. Let Π have intensity λ > 0. Let S ∈ B have finite Lebesgue measure. By Theorem 4.1 it suffices to show that Π|_{S^c} ≺ Π. Let P(Π|_{S^c} ∈ A) > 0, for some A ∈ M. We may assume that

P(∃x ∈ [Π] ∩ S : Π − δ_x ∈ A) > 0,  (4.11)

since otherwise P(Π ∈ A) > 0. By applying (4.10) with the function (µ, x) ↦ 1[µ − δ_0 ∈ θ_{−x}A] 1[x ∈ S], we obtain

E#{x ∈ [Π] ∩ S : θ_{−x}(Π − δ_x) ∈ θ_{−x}A} = λ ∫_S P(Π* − δ_0 ∈ θ_{−x}A) dx.  (4.12)

From (4.11) and (4.12), we deduce that P(Π* − δ_0 ∈ θ_{−x}A) > 0 for some x ∈ S. By assumption, P(Π ∈ θ_{−x}A) > 0. Since Π is translation-invariant, P(Π ∈ A) > 0.

Proof of Theorem 4.4, (i) implies (ii). Suppose that Π + δ_0 is not absolutely continuous with respect to Π*; then there exists A ∈ M such that P(Π* ∈ A) = 0 but P(Π + δ_0 ∈ A) > 0. Without loss of generality, take A to be a set that does not care whether there is a point at 0; that is, if µ ∈ A, then µ′ ∈ A, provided µ and µ′ agree on R^d \ {0}. By translation-invariance,

0 < c := P(Π + δ_0 ∈ A) = P(Π ∈ A) = P(Π ∈ θ_x A) for every x ∈ R^d.

Hence the translation-invariant random set G := {x ∈ R^d : Π ∈ θ_x A} has intensity EL([0, 1]^d ∩ G) = c. Moreover, if U is uniformly distributed in [0, 1]^d and independent of Π, then P(U ∈ G) = c. Therefore, defining the set A′ := {µ ∈ M : ∃x ∈ [µ] ∩ [0, 1]^d with µ ∈ θ_x A}, we deduce that P(Π + δ_U ∈ A′) > 0. (Recall that A does not care whether there is a point at 0.) On the other hand, by the Palm property (4.9) we have

P(Π ∈ A′) ≤ E#{x ∈ [Π] ∩ [0, 1]^d with Π ∈ θ_x A} = λL([0, 1]^d) · P(Π* ∈ A) = 0.

Thus Π is not insertion-tolerant.

The following observation will be useful in the proof that (ii) implies (i) in Theorem 4.4.

Lemma 4.15. Let Π be a translation-invariant point process on R^d with finite intensity. If Z is a Π-point and U is uniformly distributed in S ∈ B and independent of (Π, Z), then θ_U θ_Z Π ≺ Π.

Proof. Let Q be the joint law of Π and Z. Since U is independent of (Π, Z), by Fubini's theorem, for all A ∈ M, we have

P(θ_U θ_Z Π ∈ A) = (1/L(S)) ∫ ∫_S 1[θ_{u+z} π ∈ A] du dQ(π, z)
≤ (1/L(S)) ∫ ∫_{R^d} 1[θ_x π ∈ A] dx dQ(π, z)
= (1/L(S)) ∫_{R^d} P(θ_x Π ∈ A) dx
= (1/L(S)) ∫_{R^d} P(Π ∈ A) dx.

In particular, if P(Π ∈ A) = 0, then P(θ_U θ_Z Π ∈ A) = 0.

Proof of Theorem 4.4, (ii) implies (i). Suppose that Π + δ_0 ≺ Π*. By a result of Thorisson [11] there exists a shift-coupling of (Π, Π*); that is, a Π-point Z such that Π* =_d θ_Z Π. (In fact, Holroyd and Peres [6] prove that Z may be obtained as a deterministic function of Π.) Let U be uniformly distributed in S ∈ B and independent of (Π, Π*, Z). We have that Π + δ_U ≺ θ_U Π*, and by Lemma 4.15, θ_U θ_Z Π ≺ Π. Thus insertion-tolerance follows from the chain of relations:

Π + δ_U ≺ θ_U Π* =_d θ_U θ_Z Π ≺ Π.

4.5 Continuum percolation

Theorem 4.6 is an immediate consequence of the following results. Consider the Boolean continuum percolation model for a point process Π. Let W denote the cluster of O(Π) containing the origin. For each M > 0, an M-branch is an unbounded component of W ∩ B(0, M)^c.

Lemma 4.16. For a translation-invariant ergodic insertion-tolerant point process, the number of unbounded clusters is a fixed constant a.s. that is zero, one, or infinity.

Lemma 4.17. If an insertion-tolerant point process has an infinite number of unbounded clusters, then with positive probability there exists M > 0 so that there are at least three M-branches.

Theorem 4.18. For all M > 0, a translation-invariant ergodic point process has at most two M-branches a.s.

For a proof of Theorem 4.18, see [9, Theorem 7.1].

Proof of Theorem 4.6. By Lemma 4.16, it suffices to show that there cannot be infinitely many unbounded clusters; this follows from Theorem 4.18 and Lemma 4.17.

For r > 0, let rZ^d := {rz : z ∈ Z^d}.

Proof of Lemma 4.16. Let Π be a translation-invariant ergodic insertion-tolerant point process. Let the occupied region be given by a union of balls of radius R > 0. By ergodicity, if K(Π) is the number of unbounded clusters, then K(Π) is a fixed constant a.s. Assume that K(Π) < ∞. It suffices to show that P(K(Π) ≤ 1) > 0. Since K(Π) < ∞, there exists N > 0 so that with positive probability every unbounded cluster intersects B(0, N). Consider the finite set S := (R/4)Z^d ∩ B(0, N). For each x ∈ S, let U_x be uniformly distributed in B(x, R), with the U_x and Π independent. Let F := ∑_{x∈S} δ_{U_x}. Since B(0, N) ⊂ ∪_{x∈S} B(U_x, R), we have that P(K(Π + F) ≤ 1) > 0. By Theorem 4.2 (ii), Π + F ≺ Π, so that P(K(Π) ≤ 1) > 0.

Proof of Lemma 4.17. The proof is similar to that of Lemma 4.16. Let Π be an insertion-tolerant point process with an infinite number of unbounded clusters. Let the occupied region be given by a union of balls of radius R > 0. Choose N large enough so that at least three unbounded clusters intersect B(0, N) with positive probability. Define a finite point process F exactly as in the proof of Lemma 4.16. The point process Π + F has at least three (N + R)-branches with positive probability, and Theorem 4.2 (ii) implies that Π + F ≺ Π. Thus Π has three (N + R)-branches with positive probability.

4.6 Stable matching

Theorems 4.7 and 4.8 are consequences of the following lemmas. Let R be a point process with a unique one-colour stable matching rule M. Define

H = H(R) := {x ∈ [R] : ‖x − M(x)‖ > ‖x‖ − 1}.  (4.13)

This is the set of R-points that would prefer some R-point in the ball B(0, 1), if one were present in the correct location, over their current partners. Similarly, define H for the case of two-colour stable matching. A calculation given in [5, Proof of Theorem 5(i)] shows that

E#H = c_d E*[(X + 1)^d],  (4.14)

for some constant c_d ∈ (0, ∞) depending only on d.

Lemma 4.19 (One-colour stable matching). Let R be a translation-invariant point process on R^d with finite intensity that almost surely is non-equidistant and has no descending chains. If R is insertion-tolerant, then P(#H = ∞) = 1. If R is deletion-tolerant, then P(#H = ∞) > 0.

Lemma 4.20 (Two-colour stable matching). Let R and B be independent translation-invariant jointly ergodic point processes on R^d with equal finite intensity such that the point process R + B is non-equidistant and has no descending chains. If B is insertion-tolerant, then P(#H = ∞) = 1. If R is deletion-tolerant, then P(#H = ∞) > 0.

Proof of Theorem 4.7. Use Lemma 4.19 together with (4.14).

Proof of Theorem 4.8. Use Lemma 4.20 together with (4.14).

The following lemma will be useful; it is a direct consequence of [5, Lemma 16]. A partial matching of a point measure µ ∈ M is the edge set m of a simple graph ([µ], m) in which every vertex has degree at most one. A partial matching is a matching if every vertex has degree exactly one.
We write m(x) = y if and only if {x, y} ∈ m. We say a partial matching is stable if there do not exist distinct points x, y ∈ [µ] satisfying

‖x − y‖ < min{‖x − m(x)‖, ‖y − m(y)‖},  (4.15)

with the convention that ‖x − m(x)‖ = ∞ if x is unmatched.

Lemma 4.21. Let µ ∈ M be non-equidistant with no descending chains, and let µ have a unique stable matching m. For each k ≥ 1, set

H_k = H_k^m(µ) := {x ∈ [µ] : ‖x − m(x)‖ > ‖x‖ − 1/k}.

(a) If {x, y} ∈ m is a matched pair, then m \ {{x, y}} is the unique stable matching of µ − δ_x − δ_y.

(b) If #H_k = 0, then for all x ∈ B(0, 1/k) there does not exist a stable matching for µ + δ_x; moreover, m is the unique stable partial matching of µ + δ_x.

Proof of Lemma 4.19. Suppose that R is insertion-tolerant. It follows from insertion-tolerance and Lemma 4.21 (b) that P(#H_k(R) > 0) = 1 for all k ≥ 1. (If U_0 is uniformly distributed in B(0, 1/k), then on the event #H_k(R) = 0, there is no one-colour stable matching for R + δ_{U_0}.) Now we show that P(#H(R) = n) = 0 for all n ≥ 1. Towards a contradiction, assume for some finite integer n ≥ 1 that #H(R) = n with nonzero probability. Consider the finite subprocess F of R that is given by the union of the set H with its partners, whenever the cardinality of H is n:

[F] = H(R) ∪ {M(x) ∈ [R] : x ∈ H(R)} if #H(R) = n, and [F] = ∅ otherwise.

Observe that on the event #H(R) = n, for each x ∈ [F] there exists ε = ε(R, x) > 0 such that if r ∈ (0, ε), then for all y ∈ B(x, r) the one-colour stable matching rule for R + δ_y matches x with y. Thus by iteration, on the event #H(R) = n, there exists ε(R) > 0 such that if x_1, …, x_{2n} are the points of F in lexicographic order and U_1, …, U_{2n} are independent random variables uniformly distributed in B(x_1, ε), …, B(x_{2n}, ε), then the one-colour stable matching rule for R + ∑_{i=1}^{2n} δ_{U_i} matches each x_i with U_i; moreover, by Lemma 4.21 (a), #H(R − F) = 0, so there exists k > 0 so that

P(#H_k(R + ∑_{i=1}^{2n} δ_{U_i}) = 0) > 0.

However, by Theorem 4.2 (iii),

R + ∑_{i=1}^{2n} δ_{U_i} ≺ R,

so that P(#H_k(R) = 0) > 0, a contradiction.

Now assume that R is deletion-tolerant. For each point x ∈ [R], let

N(R, x) := {y ∈ [R] \ {x} : ‖y − M(y)‖ > ‖x − y‖}.

This is the set of R-points that would prefer x ∈ [R] over their partners. We will show that almost surely

#N(R, x) = ∞ for all x ∈ [R].  (4.16)

From (4.16) it follows that if R(B(0, 1)) > 0, then #H = ∞. Since R is translation-invariant, P(R(B(0, 1)) > 0) > 0, and so P(#H = ∞) > 0.

It remains to show (4.16). Towards a contradiction, let x be an R-point (say, the one closest to the origin) such that #N(R, x) < ∞. Consider the finite subprocess Z of R given by

[Z] = N(R, x) ∪ {M(y) : y ∈ N(R, x)}.

By Lemma 4.21 (a), the matching M′ given by

[M′] = [M] \ {{y, M(y)} : y ∈ N(R, x)}

is the unique one-colour stable matching rule for R − Z. Thus N(R − Z, x) = ∅, and almost surely R − Z − δ_{M(x)} has a unique stable one-colour partial matching, given by M′ with the pair {x, M(x)} removed. However, from Theorem 4.1 (ii) and the deletion-tolerance of R we have R − Z − δ_{M(x)} ≺ R, a contradiction, since the unique stable partial matching of R is almost surely a perfect matching, while that of R − Z − δ_{M(x)} leaves the point x unmatched.

Proof of Lemma 4.20. The proof for the case where R is insertion-tolerant is given in [5, Theorem 6(i)]. In the case that R is deletion-tolerant, we define for each x ∈ [B]:

N(R, x) := {y ∈ [R] : ‖y − M(y)‖ > ‖x − y‖}.

As in the proof of Lemma 4.19, deletion-tolerance implies that #N(R, x) = ∞ for all x ∈ [B], and thus P(#H = ∞) > 0.

Acknowledgments

Alexander Holroyd and Terry Soo were funded in part by NSERC. Alexander Holroyd was funded in part by Microsoft Research. We thank Omer Angel and Yuval Peres for many valuable conversations.

4.7 Bibliography

[1] I. Benjamini, R. Lyons, Y. Peres, and O. Schramm. Group-invariant percolation on graphs. Geom. Funct. Anal., 9(1):29–66, 1999.

[2] R. M. Burton and M. Keane. Density and uniqueness in percolation. Comm. Math. Phys., 121(3):501–505, 1989.

[3] D. J. Daley and G. Last. Descending chains, the lilypond model, and mutual-nearest-neighbour matching. Adv. in Appl. Probab., 37(3):604–628, 2005.

[4] D. Gale and L. Shapley. College admissions and stability of marriage. Amer. Math. Monthly, 69:9–15, 1962.

[5] A. E. Holroyd, R. Pemantle, Y. Peres, and O. Schramm. Poisson matching. Ann. Inst. Henri Poincaré Probab. Stat., 45(1):266–287, 2009.

[6] A. E. Holroyd and Y. Peres. Extra heads and invariant allocations. Ann. Probab., 33(1):31–52, 2005.

[7] O. Kallenberg. Foundations of Modern Probability. Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2002.

[8] P. Mattila. Geometry of Sets and Measures in Euclidean Spaces: Fractals and Rectifiability, volume 44 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1995.

[9] R. Meester and R. Roy. Continuum Percolation, volume 119 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1996.

[10] M. Sodin and B. Tsirelson. Random complex zeroes. I. Asymptotic normality. Israel J. Math., 144:125–149, 2004.

[11] H. Thorisson. Transforming random elements and shifting random fields. Ann. Probab., 24:2057–2064, 1996.

Chapter 5

Translation-Equivariant Matchings of Coin-Flips on Z^d

(A version of this chapter has been published: Soo, T. (2010). Translation-equivariant matchings of coin-flips on Z^d. Adv. in Appl. Probab., 42(1):69–82.)

5.1 Introduction

Consider the probability space (Ω, F, P), where Ω = {0, 1}^{Z^d}, F is the standard product σ-algebra of {0, 1}^{Z^d}, and P is the product measure on F with parameter p = 1/2. We call elements of Z^d sites. For γ ∈ Ω, a bijection φ : Z^d → Z^d is a matching on γ if every site x with γ(x) = 1 is mapped to a site y with γ(y) = 0 and vice-versa, and if the composition φ ◦ φ is the identity mapping on Z^d. For a site z, we define the translation θ^z on Z^d and on Ω as follows: we set θ^z x := x + z for all x ∈ Z^d, and for all γ ∈ Ω we set θ^z γ(x) := γ(x − z) for all x ∈ Z^d. A measurable mapping Φ : {0, 1}^{Z^d} × Z^d → Z^d is a matching rule if Φ(γ, ·) is a matching on γ for P-almost all γ. We say that Φ is translation-equivariant if it commutes with translations; that is, Φ(θ^z γ, ·) = θ^z Φ(γ, ·) for P-almost all γ. Let ‖·‖ be the l∞ norm on Z^d. We define Z = Z_Φ(γ) := ‖Φ(γ, O)‖ to be the distance from the origin O = (0, …, 0) ∈ Z^d to its partner. We will construct a translation-equivariant matching rule Φ and obtain upper bounds on P(Z > r).

Theorem 5.1. For all d ≥ 1, there exists a translation-equivariant matching rule Φ such that for all r > 0, we have P(Z_Φ > r) ≤ c(log r)^4 r^{−β}, for some c = c(d) < ∞, where β = β(d) = 2/(1 + 4/d).

Prior to this result the best known decay appears to have been the following.

Theorem 5.2 ([11], [9]). For d ≥ 1, there exists a translation-equivariant matching rule Φ such that for all r > 0, we have P(Z_Φ > r) ≤ cr^{−1/2}, for some c = c(d) < ∞.

Theorem 5.2 can be deduced from a simple construction due to Meshalkin. Meshalkin matching was originally used to construct isomorphisms of Bernoulli schemes [11]; it is the following construction.
In d = 1, we define a translation-equivariant matching rule inductively, by first matching a zero to a one whenever a zero is immediately to the left of a one:

… 011001111100001 …

In the next stage, we remove the matched pairs and then follow the same procedure. It is straightforward to check that bounding P(Z > r) amounts to bounding R = inf{m ≥ 1 : S_m = 0}, where S_m denotes a simple symmetric random walk. We may deduce the d ≥ 2 case of Theorem 5.2 from the following observation. By applying a translation-equivariant matching rule Φ_{d−1} on Z^{d−1} to each (d − 1)-dimensional plane {z} × Z^{d−1}, for each z ∈ Z, we obtain a translation-equivariant matching rule Φ_d on Z^d = Z × Z^{d−1}, where P(Z_{Φ_{d−1}} > r) = P(Z_{Φ_d} > r) for all r > 0.

Theorem 5.1 provides faster decay than that provided by Theorem 5.2, for all d > 1. After this paper was written, Timár [14] proved the following stronger result.

Theorem 5.3 ([14]). For any d ≥ 1 there exists a translation-equivariant matching rule Φ such that for all r > 0, we have P(Z_Φ > r) ≤ cr^{−d/2}, for some c = c(d) < ∞.

Some of the methods of this article also appear in [14]; new ideas are introduced in [14], and the methods there are much more sophisticated. For d = 1, 2, the bounds obtained in Theorem 5.3 are essentially best possible.

Theorem 5.4 ([9]). If d = 1, 2, then for any translation-equivariant matching rule Φ we have EZ_Φ^{d/2} = ∞. Hence for d = 1, 2 there does not exist a translation-equivariant matching rule Φ with P(Z_Φ > r) ≤ cr^{−ρ}, for constants ρ > d/2 and c = c(d) < ∞.

Notice that Theorem 5.4 is only valid for d = 1, 2. In fact, for d ≥ 3, Timár [14] has shown that one can find a translation-equivariant matching rule with even faster decay than that given by Theorem 5.3.

Theorem 5.5 ([14]). For any d ≥ 3 and any ε > 0, there exists a translation-equivariant matching rule Φ such that for all r > 0, we have P(Z > r) ≤ C exp(−cr^{d−2−ε}), for some constants 0 < c, C < ∞.

Variants of matching in continuum settings have also been studied; see [8], [5], [4], [12] and the references therein.

Outline of the Proof

The proof of Theorem 5.1 will proceed in two steps: we will construct a translation-equivariant matching and then determine bounds for it. To construct a translation-equivariant matching we will define, in a measurable translation-equivariant way, a sequence P_n of successively coarser partitions of Z^d. Following [7], we call P_n a clumping rule. The members of P_n are called clumps or n-clumps, and we call the clumping rule locally finite if all the clumps are bounded. A component of a clumping rule is a limit of some increasing (with respect to set inclusion) sequence of clumps. A clumping rule is connected if it has only one component. Adapting the construction in [7], we will construct a locally finite connected clumping rule. From a locally finite connected clumping rule it is easy to obtain a translation-equivariant matching rule Φ: such a rule can be defined by first matching as many sites as possible within each 1-clump and then iteratively matching as many unmatched sites as possible in each n-clump, for n = 2, 3, …. Using the central limit theorem and a version of the mass transport principle, we will obtain a preliminary result which implies that for d ≥ 3 and for all ε > 0, we have P(Z_Φ > r) ≤ cr^{−3/5+ε}, for some constant c = c(d, ε) < ∞. The preliminary result will not provide faster decay than that given by Theorem 5.2 in the case d = 1, 2. Upon a closer analysis of the geometry of the clumps, we will show that clumps that are long and thin occur with small probability; this analysis is behind the proof of Theorem 5.1.

Outline of Paper

The rest of the paper proceeds as follows. In Section 5.2 we discuss clumping rules and matchings from clumping rules. In Section 5.3 we outline the construction of a clumping rule and collect some useful bounds. In Section 5.4 we introduce a version of the mass transport principle that will be useful in the proof of Theorem 5.1. In Section 5.5 we prove Theorem 5.1. We conclude the paper with some related open problems.

5.2 Clumps

Let P_F(Z^d) denote the collection of all finite subsets of Z^d. For A ⊂ Z^d, define translations of A via θ^z A := {θ^z x : x ∈ A}. Formally, a locally finite connected clumping rule is a measurable mapping C : {0, 1}^{Z^d} × N × Z^d → P_F(Z^d) with the following properties. For P-almost all γ ∈ {0, 1}^{Z^d}, the following hold for all n ∈ N and all x, y, z ∈ Z^d:

(i) x ∈ C(γ, n, x)

(ii) C(γ, n, x) ∩ C(γ, n, y) ≠ ∅ ⟹ C(γ, n, x) = C(γ, n, y)

(iii) C(γ, n, x) ⊂ C(γ, n + 1, x)

(iv) C(θ^z γ, n, θ^z x) = θ^z C(γ, n, x)

(v) ∪_n C(γ, n, O) = Z^d.

Properties (i) and (ii) ensure that for each n ∈ N, the map C(·, n, ·) is a partition. Property (iii) makes the partitions successively coarser, (iv) is translation-equivariance, and (v) is connectedness.

Proposition 5.6. There exists a locally finite connected clumping rule.

The proof of Proposition 5.6 will be given in the next section.

Matchings from Clumpings. From a locally finite connected clumping rule, we can construct a translation-equivariant matching rule in a countable number of stages. In the first stage, within each of the 1-clumps we match every possible site. Given that the (n − 1)-th stage is completed, within each of the n-clumps we match every site we can, ignoring the sites that were previously matched. In order to ensure that the resulting matching is translation-equivariant, use, for example, a lexicographic ordering on Z^d to determine the maximal partial matching on the clumps in the following way. Match the smallest (under the lexicographic ordering) unmatched site x in a clump with γ(x) = 1 to the smallest unmatched site y in the clump with γ(y) = 0, and continue in this way until the clump contains either no more unmatched sites x with γ(x) = 1 or no more unmatched sites y with γ(y) = 0.

We can see that the procedure does indeed yield a perfect matching, as follows. Connectedness implies that γ(x) = 1 for every unmatched site x, or γ(x) = 0 for every unmatched site x. Since p = 1/2, these two alternatives must have equal probability. Hence, by ergodicity, both alternatives occur with zero probability. Note that for our purposes we do not need to make this argument, as we obtain upper bounds on Z, the distance from the origin to its partner, which easily imply that P(Z > r) → 0 as r → ∞ (see Theorem 5.1 or Proposition 5.19). Let us remark that this method does not give an isometry-equivariant matching.

In the next section, we construct an explicit locally finite connected clumping rule C.

5.3 Seeds, cutters, and blobs

Our construction of the clumping rule C is adapted from [7]. In [7] and [15], clumpings are used to obtain factor graphs of point processes. See also [2] for background.

5.3.1 Basic set-up

Let ‖·‖ denote the l∞ norm.
Let S(x, r) := {y ∈ R^d : ‖x − y‖ ≤ r}. Thus S(x, r) is the cube of side length 2r centred at x. We also write S(O, r) = S(r). We let {e_m}_{m=1}^d be the standard unit basis vectors in R^d. For each k ∈ N, we say that a site x ∈ Z^d is a k-seed if γ(x) = 1 and γ(y) = 0 for all y ∈ {x + ne_1 : 1 ≤ n ≤ k − 1}. Whenever x is a k-seed we call the set {x + ne_1 : 0 ≤ n ≤ k − 1} its shell. For example, a 4-shell has the form: 1000. Note that the probability of a k-seed occurring at a particular site is exactly 2^{−k}. Two seeds are said to be overlapping if their shells intersect. Note that two k-seeds x and y overlap if and only if x = y. This property will be useful later (see Section 5.5.2). We define

r_k := (2^k k^2)^{1/d} + 1/2.  (5.1)

The reason for the choice of r_k will be evident shortly. Define the vector

s_k := 100 r_k e_1.  (5.2)

A k-cutter is a subset of R^d of the form {y ∈ R^d : ‖y − x‖ = r_k}, where x − s_k is a k-seed. We introduce the shift s_k for technical reasons which will surface later. We define W_k ⊂ R^d to be the union of all the k-cutters. Note that we have chosen r_k so that r_k ∉ N; thus we have that W_k ∩ Z^d = ∅ for all k ∈ N. A k-blob is a connected component of R^d \ ∪_{j>k} W_j. Hence the sequence of k-blobs defines a successively coarser partition of R^d (ignoring the elements of ∪_k W_k). The k-blobs induce a clumping rule C when we intersect the k-blobs with Z^d. Note the technical distinction between blobs and clumps. It is obvious that the induced clumping rule C is translation-equivariant; it remains to show that it is locally finite and connected. It suffices to show that all the blobs are bounded and that for every x ∈ R^d, there is a k-blob that contains both x and the origin.

5.3.2 Estimates

In this section we obtain estimates showing that the clumping rule C defined in the previous section is indeed locally finite and connected. The following events will be important in our analysis. Let

E_k(x) := {x is enclosed by some k-cutter};  (5.3)

that is, E_k(x) occurs if and only if for some site x_0, x_0 − s_k is a k-seed and x ∈ {y ∈ R^d : ‖y − x_0‖ ≤ r_k}. Also let E_k := E_k(O). Let

U_k(s) := {S(s) intersects some k-cutter};

that is, U_k(s) occurs if and only if for some site x_0, x_0 − s_k is a k-seed and {y ∈ R^d : ‖y − x_0‖ = r_k} ∩ S(s) ≠ ∅. Also let

C_k(s) := ∪_{j≥k} U_j(s).  (5.4)

From an analysis of these events, we will deduce that the clumping rule C is both locally finite and connected. Moreover, we will see that the tail behaviour of Z_Φ (where Φ is a translation-equivariant matching rule obtained from the clumping rule C) also depends on these events.

Lemma 5.7 (Enclosure bounds). For all k > c_1, for some c_1 = c_1(d) < ∞, we have P(E_k^c) ≤ e^{−k}.

Proof. Note that

E_k = {S(−s_k, r_k − 1) contains some k-seed}.  (5.5)

Let p_k be the maximum possible number of k-seeds inside S(−s_k, r_k − 1). Recall that no two (distinct) k-seeds overlap and the probability that a k-seed occurs at a particular site is 2^{−k}. Hence

P(E_k) ≥ 1 − (1 − 2^{−k})^{p_k} ≥ 1 − e^{−2^{−k} p_k}.

By our choice of r_k in (5.1), and since r_k^d/k ≤ p_k ≤ (2r_k/k)(2r_k)^{d−1} for all k ≥ c_1, for some c_1 = c_1(d) < ∞, we have that P(E_k) ≥ 1 − e^{−k}.

Corollary 5.8. All k-blobs are bounded almost surely.

Proof. It suffices to show that all blobs that contain O are bounded. By Lemma 5.7, ∑_k P(E_k^c) < ∞, so by the Borel-Cantelli lemma E_k occurs for all sufficiently large k almost surely. Hence all blobs which contain O are bounded.

Lemma 5.9 (Cutter bounds). For all k ≥ 1 and all s > 0, we have P(C_k(s)) ≤ c_3 s k^2/r_k, for some c_3 = c_3(d) > 0.

Proof. Observe that

U_k(s) = {S(−s_k, r_k + s) \ S(−s_k, r_k − s) contains some k-seed}.  (5.6)

Clearly, P(U_k(s)) ≤ N_k(s) 2^{−k}, where N_k(s) is the number of lattice points in S(−s_k, r_k + s) \ S(−s_k, r_k − s). We have that N_k(s) = |S(r_k + s)| − |S(r_k − s)| ≤ c_2 r_k^{d−1} s, for some c_2 = c_2(d) > 0. So we obtain that P(U_k(s)) ≤ c_2 s r_k^{d−1} 2^{−k}. Thus, recalling our choice of r_k in (5.1) (so that 2^{−j} r_j^{d−1} ≤ c j^2/r_j, with the terms decaying geometrically in j), we have that

P(C_k(s)) ≤ ∑_{j≥k} P(U_j(s)) ≤ c_2 s ∑_{j≥k} 2^{−j} r_j^{d−1} ≤ c_3 s k^2/r_k.  (5.7)

Corollary 5.10. The clumping rule C is connected almost surely.

Proof. Let s > 0. By the Borel-Cantelli lemma and (5.7), U_k(s) occurs for infinitely many k with probability zero. Thus any point within distance s of O will eventually share a blob with it.

Proof of Proposition 5.6. Apply Corollaries 5.8 and 5.10.

Now we obtain a translation-equivariant matching rule Φ from our locally finite connected clumping rule C, via the procedure outlined in Section 5.2. We will use Lemmas 5.7 and 5.9 and the central limit theorem to obtain bounds on Z_Φ.

5.4 Mass transport

We will require a version of the mass transport principle in order to facilitate calculations. See [1] and [3] for background. Our main application of the mass transport principle will be to prove a modified version of Lemma 5.11 below, which states that each site has an equal chance of not being matched within its k-clump. Similar ideas also appear in [8].

Let C be the clumping rule defined in Section 5.3 and let Φ be the corresponding translation-equivariant matching rule obtained from C using the procedure outlined in Section 5.2. We say that a site is k-bad if it is not matched within its k-clump. Let L_k(x) be the k-clump containing the site x, and let L_k = L_k(O) be the k-clump containing the origin. Let #[L_k] be the cardinality of L_k. Consider the sum ζ := ∑_{x∈L_k} (2γ(x) − 1), so that |ζ| is the number of k-bad sites in L_k.

Lemma 5.11. For all k ≥ 1, the probability that the origin is k-bad is exactly

E[ (1/#[L_k]) |∑_{x∈L_k} (2γ(x) − 1)| ].

We define a mass transport to be a non-negative measurable function T : Z^d × Z^d × {0, 1}^{Z^d} → R which is translation-equivariant; that is, for all x, y ∈ Z^d, all γ ∈ {0, 1}^{Z^d}, and all translations θ of Z^d, we have T(θx, θy, θγ) = T(x, y, γ). For A, B ⊂ Z^d, we let T(A, B, γ) := ∑_{x∈A, y∈B} T(x, y, γ). We think of T(A, B, γ) as the mass transferred from A to B under γ ∈ Ω. We will use the following version of the mass transport principle.

Lemma 5.12 (Mass transport principle). For any mass transport T : Z^d × Z^d × {0, 1}^{Z^d} → R, we have ET(O, Z^d, ·) = ET(Z^d, O, ·).

Proof.

ET(O, Z^d, ·) = ∑_{y∈Z^d} ∫ T(O, y, γ) dP(γ)
= ∑_{y∈Z^d} ∫ T(−y, O, θ^{−y}γ) dP(γ)
= ∑_{y∈Z^d} ∫ T(−y, O, γ) dP(γ)
= ET(Z^d, O, ·).

The first and last equalities follow from Fubini's theorem. The second equality follows from the translation-equivariance of T, and the third equality follows from the translation-invariance of P.

To illustrate the versatility of the mass transport principle (Lemma 5.12), we prove the following (unsurprising) fact.

Proposition 5.13. Let F be the standard product σ-algebra of {0, 1}^{Z^d} and let P_p be the product measure on F with parameter p. If p ≠ 1/2, then there does not exist a translation-equivariant matching rule.

Proof. Let Φ be a translation-equivariant matching rule. Consider the mass transport M defined as follows. Let x be a site and γ ∈ Ω.
Hence applying (5.1) with (5.12) for all r sufficiently large we have that (log 2)(K + 1) ≤ d log r ≤ (K + 2)  (5.13)  Proposition 5.14 (Decay of the first term in (5.11) via the central limit theorem). For all r > 0 and for the unique integer K = K(r) such that rK+1 < r ≤ rK+2 , we have P (EK ∩ CK (r α )c ) ∩ {Z > r} ≤  c5 , α (r )d/2  for some c5 = c5 (d) > 0. Remark: Note that the decay in Proposition 5.14 is the decay that appears in Theorem 5.3, if we set α = 1. Before we begin the proof of Proposition 5.14, we collect some easy, but important observations. By (5.9) and (5.12), we have (EK ∩ CK (r α )c ) ∩ {Z > r} ⊂ {O is K-bad}. So from (5.10) we have (EK ∩ CK (r α )c ) ∩ {Z > r}  {O is K-bad} ∩ EK ∩ {#[LK ] ≥ r αd } . (5.14) Recall that Lk is the k-clump containing O. To analyze the right hand side of (5.14) we will use the following version of Lemma 5.11. ⊂  Lemma 5.15. For all k ≥ 1, we have P {O is k-bad} ∩ Ek ∩ {#[Lk ≥ r αd ]} = E  1 #[Lk ]  x∈Lk  (2γ(x) − 1) ; Ek ∩ {#[Lk ] ≥ r αd } .  (5.15)  115  5.5. Proof of Theorem 5.1 Proof. We use the mass transport principle. Recall Tk as defined in the proof of Lemma 5.11: Tk (x, y, γ) :=  1 #[Lk (x)]  1x  is k-bad (γ)1y∈Lk (x) (γ).  Define another mass transport: Tˆk (x, y, γ) := Tk (x, y, γ)1Ek (x)∩{#[Lk (x)]≥rαd } (γ). Note that on the event {y ∈ Lk (x)} we have that the event Ek (x) occurs if and only if the event Ek (y) occurs and #[Lk (x)] = #[Lk (y)]. Hence we obtain that ETˆk (Zd , O, ·) = E  y∈Zd  = E y∈Zd  = E  Tk (y, O, ·)1Ek (y)∩{#[Lk (y)]≥rαd } Tk (y, O, ·)1Ek ∩{#[Lk ]≥rαd }  1 #[Lk ]  x∈Lk  (2γ(x) − 1) ; Ek ∩ {#[Lk ] ≥ r αd } .  On the other hand, we have that ETˆk (O, Zd , ·) = P {O is k-bad} ∩ Ek ∩ {#[Lk ] ≥ r αd } . Thus an application of the mass transport principle (Lemma 5.12) completes the proof. Next we will use the central limit theorem to estimate the right hand side of (5.15), but first we need to verify that we have the necessary independence. For k ≥ 1, consider the events: Hk (x1 , x2 , . . . xn ) := {x1 , . . . , xn } = Lk ∩ S(2rk ) , where xi ∈ Zd and xi ≤ 2rk . Let Gk := σ(γ(x) : x ∈ S(2rk )). The following lemma is behind why the (large) shift sk = 100rk e1 , along the axis e1 , appears in the definition of the k-cutters. Lemma 5.16. For all k ≥ 1 and for all independent of σ(Hk (x1 , . . . xn ), Ek ).  xi  ≤ 2rk , the σ-field Gk is  116  5.5. Proof of Theorem 5.1 Proof. Consider a site y with y < s3k . The event {y ∈ Lk } is determined by whether there exists j-seeds, with j ≥ k, to give rise to j-cutters that can separate y from O. However such j-seeds (and their shells) are at least at distance s2k from the origin. So we have that {γ(x) : x < s3k } is independent of Hk (x1 , x2 , . . . , xn ), for all xi ∈ Zd such that xi ≤ 2rk . Also recall that Ek from (5.5) is determined by γ(x), where x ≥ s2k . Now the proof of Proposition 5.14 amounts to a simple calculation, whose result we record in the next lemma. Lemma 5.17. For all k ≥ 1, we have P {O is k-bad} ∩ Ek ∩ {#[Lk ] ≥ r αd } ≤ c5 /(r α )d/2 , for some c5 > 0. Proof of Proposition 5.14. From (5.14) and Lemma 5.17, Proposition 5.14 follows immediately. Proof of Lemma 5.17. Let k ≥ 1, recall that by (5.9) we know that on the event Ek , we have Lk ⊂ S(2rk ). Fix x1 , . . . , xn ∈ S(2rk ) and let Hk = Hk (x1 , . . . , xn ). We will now compute A := E  1 #[Lk ]  x∈Lk  (2γ(x) − 1) ; Ek ∩ Hk (x1 , . . . , xn ) , n i=1 (2γ(xi )  by conditioning on Gk . Let Sn = calculation:  − 1). 
Consider the following calculation:

$$A = E\Bigl[n^{-1}\Bigl|\sum_{i=1}^{n} (2\gamma(x_i) - 1)\Bigr|\,\mathbf{1}_{E_k \cap H_k}\Bigr] \qquad (5.16)$$
$$= E\bigl[E\bigl(n^{-1}|S_n|\,\mathbf{1}_{E_k \cap H_k} \,\big|\, G_k\bigr)\bigr] \qquad (5.17)$$
$$= E\bigl[n^{-1}|S_n|\,E\bigl(\mathbf{1}_{E_k \cap H_k} \,\big|\, G_k\bigr)\bigr] \qquad (5.18)$$
$$= E\bigl[n^{-1}|S_n|\bigr]\,E\bigl(\mathbf{1}_{E_k \cap H_k}\bigr) \qquad (5.19)$$
$$\leq \frac{c_5}{\sqrt{n}}\,P(E_k \cap H_k), \qquad (5.20)$$

for some c_5 > 0. Equality (5.17) is obtained by conditioning on the σ-field G_k. Equality (5.18) comes from the fact that the γ(x_i) are all G_k-measurable. By Lemma 5.16, we have that G_k and σ(H_k, E_k) are independent; thus equality (5.19) follows. Inequality (5.20) is obtained by applying the central limit theorem. By summing over all possible H_k(x_1, . . . , x_n), we obtain that

$$E\Bigl[\Bigl|\frac{1}{\#[L_k]} \sum_{x \in L_k} (2\gamma(x) - 1)\Bigr| \;;\; E_k \cap \{\#[L_k] = n\}\Bigr] \leq \frac{c_5}{\sqrt{n}}\,P\{\#[L_k] = n\}.$$

Furthermore, by summing over all n ≥ r^{αd} we see that

$$E\Bigl[\Bigl|\frac{1}{\#[L_k]} \sum_{x \in L_k} (2\gamma(x) - 1)\Bigr| \;;\; E_k \cap \{\#[L_k] \geq r^{\alpha d}\}\Bigr] \leq c_5/(r^\alpha)^{d/2}.$$

Thus an application of Lemma 5.15 completes the proof.

Now we turn our attention to the second term in (5.11): (E_k^c ∪ C_k(r^α)) ∩ {Z > r}. We will bound this term in two different ways. As a first step, let us just throw away the term {Z > r}, since this already allows us to obtain a novel result for the case d ≥ 3 without much additional effort.

Lemma 5.18 (Decay of the second term in (5.11): first bound). For all r > 0 and for the unique integer K = K(r) such that r_{K+1} < r ≤ r_{K+2}, we have

$$P(E_K^c \cup C_K(r^\alpha)) \leq c_6\,r^{\alpha - 1} \log r,$$

for some c_6 = c_6(d) > 0.

Proof. Recall that from Lemma 5.7 and Lemma 5.9 we already have bounds for the events appearing in this term. From (5.13) we see that

$$P(E_K^c) \leq c_7\,r^{-d/\log 2}, \qquad (5.21)$$

for some c_7 = c_7(d) > 0. Note that d/log 2 > d/2. On the other hand, applying (5.13) to Lemma 5.9, we obtain that

$$P(C_K(r^\alpha)) \leq c_8\,r^{\alpha - 1} \log r, \qquad (5.22)$$

for some c_8 = c_8(d) > 0. Since d/log 2 > 1 − α, the bound (5.21) is of smaller order, and the lemma follows.

Proposition 5.19 (Easy preliminary result). For all d ≥ 1, there exists a translation-equivariant matching rule Φ such that Z = Z_Φ(γ) = ‖Φ(γ, O)‖ has the tail behaviour

$$P(Z > r) \leq c_9\,r^{-\beta'} \log r,$$

where c_9 = c_9(d) > 0 and

$$\beta' = \beta'(d) = \frac{1}{1 + \frac{2}{d}}.$$

Proof. We can see from (5.11), Proposition 5.14, and Lemma 5.18 that

$$P(Z > r) \leq c_5\,r^{-\alpha d/2} + c_6\,r^{\alpha - 1} \log r.$$

Hence we are led to minimize the quantity max(−αd/2, α − 1). So we choose (for the purposes of this proposition) α = α(d) = 1/(1 + d/2), which equalizes the two exponents and yields the exponent β′.

For d = 2, Proposition 5.19 gives decay of order (log r)/r^{1/2}. Theorem 5.2 still provides a better result for d = 2, but for d ≥ 3, Proposition 5.19 provides faster decay than Theorem 5.2.

5.5.2 Long and thin blobs

With a closer analysis of the second term in (5.11) we will prove the following.

Proposition 5.20 (Decay of the second term in (5.11): closer analysis). For all r > 0 and for the unique integer K = K(r) such that r_{K+1} < r ≤ r_{K+2}, we have

$$P\bigl((E_K^c \cup C_K(r^\alpha)) \cap \{Z > r\}\bigr) \leq c_{10}\,r^{-\alpha d/2} + c_{11}\,r^{2(\alpha - 1)} (\log r)^4,$$

for some c_{10} = c_{10}(d) > 0 and c_{11} = c_{11}(d) > 0.

Proposition 5.20 together with Proposition 5.14 yields a proof of Theorem 5.1.

Proof of Theorem 5.1. From the previous results, (5.11), Proposition 5.14, and Proposition 5.20, we have that

$$P(Z_\Phi > r) \leq c_5\,r^{-\alpha d/2} + c_{10}\,r^{-\alpha d/2} + c_{11}\,r^{2(\alpha - 1)} (\log r)^4.$$

Hence we are led to minimize the quantity max(−αd/2, 2(α − 1)). It is easy to verify that we should take α(d) = 1/(1 + d/4). Let β(d) := dα(d)/2 = 2/(1 + 4/d). Thus we obtain that

$$P(Z_\Phi > r) \leq c\,(\log r)^4\,r^{-\beta},$$

where c = c(d) < ∞ and β = β(d) = 2/(1 + 4/d).

We will now work towards a proof of Proposition 5.20, for which we will need to examine the geometry of the blobs a bit more closely. (The elementary optimization behind the choices of α(d) above is recorded first, just below.)
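For completeness, here is the elementary computation behind the choices of α(d) in Proposition 5.19 and in the proof of Theorem 5.1 (a worked aside; in each case the maximum of a decreasing and an increasing linear function of α is minimized where the two exponents coincide):

$$-\frac{\alpha d}{2} = \alpha - 1 \iff \alpha = \frac{2}{d+2} = \frac{1}{1+\frac{d}{2}}, \qquad \beta' = \frac{\alpha d}{2} = \frac{d}{d+2} = \frac{1}{1+\frac{2}{d}};$$

$$-\frac{\alpha d}{2} = 2(\alpha - 1) \iff \alpha = \frac{4}{d+4} = \frac{1}{1+\frac{d}{4}}, \qquad \beta = \frac{\alpha d}{2} = \frac{2d}{d+4} = \frac{2}{1+\frac{4}{d}}.$$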
Again, in light of (5.21), we do not need to worry about the event E_k^c. Consider the decomposition

$$C_k(r^\alpha) = \bigl(E_k^c \cap C_k(r^\alpha)\bigr) \cup \bigl(E_k \cap C_k(r^\alpha)\bigr). \qquad (5.23)$$

The second term puts us in a position akin to the situation of Proposition 5.14, since we can control the diameter of the k-blob containing the origin. We now examine two situations: one where the k-blob containing O is possibly very small (see Lemma 5.21 and Figure 5.2), and another where there are enough points inside the k-blob to make good use of the central limit theorem (see Lemma 5.23 and Figure 5.3). For j ≥ k, consider again j-seeds on the sets

$$A_j = A_j(r^\alpha) := S(-s_j, r_j + r^\alpha) \setminus S(-s_j, r_j - r^\alpha).$$

Observe that seeds on two levels will not overlap; that is, a seed in A_j will not overlap with a seed in A_m for j ≠ m. Also recall that by our definition of k-seeds, no two (distinct) k-seeds overlap. Recall that C_k(r^α) is the event that for some j ≥ k the set A_j contains a j-seed. We will further split up this event. Define

$$C_k^1(r^\alpha) := \{\text{for all } j \geq k \text{ the set } A_j \text{ contains at most one } j\text{-seed, and there is a unique } j \geq k \text{ such that } A_j \text{ contains a } j\text{-seed}\}$$

and

$$C_k^2(r^\alpha) := C_k(r^\alpha) \setminus C_k^1(r^\alpha). \qquad (5.24)$$

We will throw away the term {Z > r} when we bound P(C_k^2(r^α) ∩ E_k ∩ {Z > r}), but we will keep it when we bound P(C_k^1(r^α) ∩ E_k ∩ {Z > r}).

Lemma 5.21. For all r > 0 and for the unique integer K = K(r) such that r_{K+1} < r ≤ r_{K+2}, we have P(C_K^2(r^α)) ≤ c_{13} (r^{α−1} log r)^2, for some c_{13} = c_{13}(d) > 0.

Proof. For all j ≥ 1, let U_j = {A_j contains a j-seed}; thus, in the notation of (5.6), U_j = U_j(r^α). Since for all j ≥ 1 no two (distinct) j-seeds overlap, we have

$$P\{A_j \text{ contains more than one } j\text{-seed}\} \leq P(U_j)^2.$$

Similarly, since seeds in A_j and A_m do not overlap for j ≠ m, we have P(U_j ∩ U_m) = P(U_j) P(U_m) for all j ≠ m. Since

$$C_k^2(r^\alpha) \subset \bigcup_{j > m \geq k} (U_j \cap U_m) \;\cup\; \bigcup_{j \geq k} \{A_j \text{ contains more than one } j\text{-seed}\},$$

we have

$$P(C_k^2(r^\alpha)) \leq \Bigl(\sum_{j \geq k} P(U_j)\Bigr) \Bigl(\sum_{m \geq k} P(U_m)\Bigr).$$

From equations (5.7) and (5.13) we have P(C_K^2(r^α)) ≤ c_{13} (r^{α−1} log r)^2, for some c_{13} = c_{13}(d) > 0.

Thus we have an improved term r^{2(α−1)}. For a realization of the event C_k^2(r^α), see Figure 5.2.

[Figure 5.2 appears here.] Figure 5.2: The shaded region represents the k-blob containing the origin. Notice that on the event C_k^2(r^α) the k-blob can be quite small. For this reason it seems we will not be able to do any better by including {Z > r}.

We now turn our attention to the event C_k^1(r^α).

Lemma 5.22. For all r > 0 and for the unique integer K = K(r) such that r_{K+1} < r ≤ r_{K+2}, we have C_K^1(r^α) ⊂ {#[L_K] ≥ c_{14} r^{αd}}, for some constant 0 < c_{14} = c_{14}(d) < ∞.

Proof. On the event C_K^1(r^α), there is exactly one j-cutter, with j ≥ K, that intersects S(r^α); call this unique cutter C. Observe that if the cutter C were removed, the blob containing the origin would contain all of S(r^α). Note that C has side-length at least 2r_K. It is easy to see that there is a constant c_{15} < ∞, independent of k, so that c_{15} r_k ≥ r_{k+2}; thus c_{15} r_K ≥ r. Therefore the blob containing the origin must contain a d-cube of side-length r^α/c_{15}.

Lemma 5.23. For all r > 0 and for the unique integer K = K(r) such that r_{K+1} < r ≤ r_{K+2}, we have

$$P\bigl(C_K^1(r^\alpha) \cap E_K \cap \{Z > r\}\bigr) \leq \frac{c_{12}}{(r^\alpha)^{d/2}},$$

for some c_{12} = c_{12}(d) > 0.
[Figure 5.3 appears here.] Figure 5.3: An illustration of the event C_k^1(r^α) ∩ E_k. The shaded region represents the restricted cutter zone of radius r^α about the origin. The very thick cutter represents the unique j-cutter, with j ≥ k, that intersects S(r^α); this corresponds to the event C_k^1(r^α). The other thick cutter represents a k-cutter enclosing the origin; this corresponds to the event E_k.

Proof. Again, from (5.9) and (5.12) we have C_K^1(r^α) ∩ E_K ∩ {Z > r} ⊂ {O is K-bad}. So by Lemma 5.22, it suffices to show that for all k ≥ 1 we have

$$P\bigl(\{O \text{ is } k\text{-bad}\} \cap E_k \cap \{\#[L_k] \geq c_{14} r^{\alpha d}\}\bigr) \leq \frac{c_{12}}{(r^\alpha)^{d/2}}, \qquad (5.25)$$

for some c_{12} = c_{12}(d) > 0. Equation (5.25) follows from Lemma 5.17.

Proof of Proposition 5.20. Using (5.23) and (5.24), we have that

$$P\bigl((E_K^c \cup C_K(r^\alpha)) \cap \{Z > r\}\bigr) \leq P(E_K^c) + P(C_K^2(r^\alpha)) + P\bigl(C_K^1(r^\alpha) \cap E_K \cap \{Z > r\}\bigr).$$

From equation (5.21) and Lemmas 5.21 and 5.23 we obtain that

$$P\bigl((E_K^c \cup C_K(r^\alpha)) \cap \{Z > r\}\bigr) \leq c_7\,r^{-d/\log 2} + c_{13} (r^{\alpha-1} \log r)^2 + \frac{c_{12}}{(r^\alpha)^{d/2}}.$$

Since d/log 2 > αd/2 and (r^{α−1} log r)^2 ≤ r^{2(α−1)} (log r)^4, the proposition follows.

Open problems

1. What is the optimal tail behaviour for translation-equivariant matchings on Z^d in the case d ≥ 3? When d ≥ 3, it follows from [14] that for all ε > 0 there exists a translation-equivariant matching rule with exponential tails of order exp(−c r^{d−2−ε}), where c > 0 is some constant. Does there exist a translation-equivariant matching rule with tails of order exp(−c r^d)? The original problem is from [9], which also contains a few other related open problems.

2. We say that a translation-equivariant matching rule is oriented if it satisfies the additional restriction that if a site x is matched to a site y that contains a one, then y_i ≥ x_i for all i ≤ d. Observe that in Meshalkin matching, a zero is always matched to a one that is to the right of it. Note that it is not obvious that the method employed in this paper can be modified to work in an oriented setting. In one dimension the restriction of orientation does not make a difference; one might think it should not for higher dimensions as well. What is the optimal tail behaviour for matchings in Z^d under the additional restriction of orientation?

Acknowledgments

This research is funded in part by NSERC and a U.B.C. Graduate Fellowship. I would like to thank my supervisor Alexander Holroyd for introducing me to this topic. Our numerous discussions and meetings were invaluable, and his assistance and advice have helped to shape all aspects of this work. I would also like to take the opportunity to thank Vlada Limic for introducing me to probability.

5.6 Bibliography

[1] I. Benjamini, R. Lyons, Y. Peres, and O. Schramm. Group-invariant percolation on graphs. Geom. Funct. Anal., 9(1):29–66, 1999.

[2] P. A. Ferrari, C. Landim, and H. Thorisson. Poisson trees, succession lines and coalescing random walks. Ann. Inst. H. Poincaré Probab. Statist., 40(2):141–152, 2004.
[3] O. Häggström. Infinite clusters in dependent automorphism invariant percolation on trees. Ann. Probab., 25(3):1423–1436, 1997.

[4] C. Hoffman, A. Holroyd, and Y. Peres. Tail bounds for the stable marriage of Poisson and Lebesgue. Canadian Journal of Mathematics, 61:1279–1299, 2009. arXiv:math.PR/0507324.

[5] C. Hoffman, A. E. Holroyd, and Y. Peres. A stable marriage of Poisson and Lebesgue. Ann. Probab., 34(4):1241–1272, 2006.

[6] A. Holroyd and T. Liggett. How to find an extra head: Optimal random shifts of Bernoulli and Poisson random fields. Ann. Probab., 29:1405–1425, 2001.

[7] A. Holroyd and Y. Peres. Trees and matching from point processes. Elect. Comm. in Probab., 8:17–27 (electronic), 2003.

[8] A. E. Holroyd, R. Pemantle, Y. Peres, and O. Schramm. Poisson matching. Ann. Inst. Henri Poincaré Probab. Stat., 45(1):266–287, 2009.

[9] A. E. Holroyd and Y. Peres. Extra heads and invariant allocations. Ann. Probab., 33(1):31–52, 2005.

[10] T. Liggett. Tagged particle distributions or how to choose a head at random. In In and Out of Equilibrium, volume 51 of Progr. Probab., pages 133–162. Birkhäuser, 2002.

[11] L. Meshalkin. A case of isomorphism of Bernoulli schemes. Dokl. Akad. Nauk. SSSR, 128:41–44, 1959.

[12] F. Nazarov, M. Sodin, and A. Volberg. Transportation to random zeroes by the gradient flow. Geom. Funct. Anal., 17(3):887–935, 2007.

[13] H. Thorisson. Transforming random elements and shifting random fields. Ann. Probab., 24:2057–2064, 1996.

[14] Á. Timár. Invariant matchings of exponential tail on coin flips in Z^d. Preprint.

[15] Á. Timár. Trees and grid factors for general point processes. Elect. Comm. in Probab., 9:53–59 (electronic), 2004.

Chapter 6

A Nonmeasurable Set from Coin Flips

[Footnote: A version of this chapter has been published: Holroyd, A.E. and Soo, T. (2009) A Nonmeasurable Set from Coin Flips. Amer. Math. Monthly 116(10):926–928.]

To motivate the elaborate machinery of measure theory, it is desirable to show that in some natural space Ω one cannot define a measure on all subsets of Ω, if the measure is to satisfy certain natural properties. The usual example is given by the Vitali set, obtained by choosing one representative from each equivalence class of R induced by the relation x ∼ y if and only if x − y ∈ Q. The resulting set is not measurable with respect to any translation-invariant measure on R that gives non-zero, finite measure to the unit interval [8]. In particular, the resulting set is not Lebesgue measurable.

The construction above uses the axiom of choice. Indeed, Solovay's theorem [7] states that in the absence of the axiom of choice, there is a model of Zermelo–Fraenkel set theory in which all subsets of R are Lebesgue measurable. In this note we give a variant proof of the existence of a nonmeasurable set (in a slightly different space). We will use the axiom of choice in the guise of the well-ordering principle (see the later discussion for more information). Other examples of nonmeasurable sets may be found, for example, in [1] and [5, Ch. 5].

We will produce a nonmeasurable set in the space Ω := {0, 1}^Z. Translation-invariance plays a key role in the Vitali proof; here shift-invariance will play a similar role. The shift T : Z → Z on integers is defined via T x := x + 1, and the shift τ : Ω → Ω on elements ω ∈ Ω is defined via (τω)(x) := ω(x − 1). We write τA := {τω : ω ∈ A} for A ⊆ Ω.

Theorem 6.1. Let F be a σ-algebra on Ω that contains all singletons and is closed under the shift (that is, A ∈ F implies τA ∈ F).
If there exists a measure µ on F that is shift-invariant (that is, µ = µ ◦ τ) and satisfies µ(Ω) ∈ (0, ∞) and µ({ω}) = 0 for all ω ∈ Ω, then F does not contain all subsets of Ω.

The conditions on F and µ in Theorem 6.1 are indeed satisfied by measures that arise naturally. A central example is the probability space (Ω, G, P) for a sequence of independent fair coin flips indexed by Z, which is defined as follows. Let A be the algebra of all sets of the form {ω ∈ Ω : ω(k) = a_k for all k ∈ K}, where K ⊂ Z is any finite subset of the integers and a ∈ {0, 1}^K is any finite binary string. The measure P restricted to A is given by

$$P\bigl(\{\omega \in \Omega : \omega(k) = a_k \text{ for all } k \in K\}\bigr) = 2^{-|K|},$$

where |K| denotes the cardinality of K. Thus P(Ω) = 1, and P = P ◦ τ on A. The Carathéodory extension theorem [6, Ch. 12, Theorem 8] gives a unique extension P to G := σ(A) (the σ-algebra generated by A) satisfying P = P ◦ τ. In addition, the continuity of measure implies P({ω}) = 0 for all ω ∈ Ω. Hence Theorem 6.1 implies that G does not contain all subsets of Ω. Of course, the same holds for any extension (Ω, G′, P′) of (Ω, G, P) for which P′ is shift-invariant (such as the completion under P).

To prove Theorem 6.1 we will define a nonmeasurable function. We are interested in functions from Ω to Z that are defined everywhere except on some set of measure zero. Therefore, for convenience, introduce an additional element ∆ ∉ Z. Consider a function X : Ω → Z ∪ {∆}. We call X almost-everywhere defined if X^{−1}{∆} is countable, which implies that µ(X^{−1}{∆}) = 0 for any measure µ satisfying the conditions of Theorem 6.1. A function X is measurable with respect to F if X^{−1}{x} ∈ F for all x ∈ Z. We call X shift-equivariant if

$$X(\tau\omega) = T(X(\omega)) \quad \text{for all } \omega \in \Omega$$

(where T(∆) := ∆). (We may think of a shift-equivariant X as an "origin-independent" rule for choosing an element from the sequence ω.) Shift-equivariant functions of random processes are important in many settings, including percolation theory (for example in [2]) and coding theory (for example in [3, 4]).

Lemma 6.2. If X : Ω → Z ∪ {∆} is an almost-everywhere defined, shift-equivariant function, then X is not measurable with respect to any F satisfying the conditions of Theorem 6.1.

Lemma 6.3. There exists an almost-everywhere defined, shift-equivariant function X : Ω → Z ∪ {∆}.

Theorem 6.1 is an immediate consequence of the preceding two facts.

Proof of Theorem 6.1. Let (Ω, F, µ) be a measure space satisfying the conditions of Theorem 6.1. Using Lemma 6.3, let X be an almost-everywhere defined shift-equivariant function. By Lemma 6.2, X is not F-measurable. Therefore there exists z ∈ Z such that X^{−1}{z} ∉ F.

Proof of Lemma 6.2. Towards a contradiction, let X be a measurable function on (Ω, F, µ) satisfying the conditions of Lemma 6.2. Since X is shift-equivariant, we have for each x ∈ Z,

$$\mu(X^{-1}\{x\}) = \mu\bigl(\tau^{-x} X^{-1}\{x\}\bigr) = \mu(X^{-1}\{0\}).$$

Hence

$$\mu(X^{-1}\mathbb{Z}) = \mu\Bigl(\bigcup_{x \in \mathbb{Z}} X^{-1}\{x\}\Bigr) = \sum_{x \in \mathbb{Z}} \mu(X^{-1}\{0\}) = 0 \text{ or } \infty,$$

which contradicts the facts that µ(X^{−1}{∆}) = 0 and µ(Ω) ∈ (0, ∞).

Let us recall some facts about well-ordering. A total order ≺ on a set W is a well order if every nonempty subset of W has a least element. The well-ordering principle states that every set has a well order. It is a classical result of Zermelo [9] that the well-ordering principle is equivalent to the axiom of choice. (A finite analogue of the construction used in the proof of Lemma 6.3 is sketched below.)
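As an illustrative aside (not part of the published note): for a finite binary word read cyclically, the lexicographic order well-orders the finitely many rotations of the word, and it can play the role of the well order. Choosing the position from which the word looks lexicographically least is an "origin-independent" rule, equivariant under cyclic shifts, and it is well defined exactly when the word is aperiodic. The sketch below demonstrates this; the names and the example are ours.

```python
# Illustrative sketch: a cyclic-shift-equivariant "choice of origin" for finite
# binary words, mimicking the proof of Lemma 6.3 with lexicographic order
# playing the role of the well order.
from typing import Optional, Tuple

def rotation(w: Tuple[int, ...], x: int) -> Tuple[int, ...]:
    """The word w viewed from position x (a finite analogue of tau^{-x} omega)."""
    n = len(w)
    return tuple(w[(x + i) % n] for i in range(n))

def X(w: Tuple[int, ...]) -> Optional[int]:
    """Return the unique x whose rotation is lexicographically least,
    or None (playing the role of Delta) if w is periodic."""
    n = len(w)
    rotations = [rotation(w, x) for x in range(n)]
    if len(set(rotations)) < n:  # w is periodic: no unique minimizer
        return None
    return min(range(n), key=lambda x: rotations[x])

# Equivariance: shifting the word moves the chosen origin accordingly.
w = (0, 1, 1, 0, 1)
assert X(w) is not None
assert X(rotation(w, 1)) == (X(w) - 1) % len(w)
```

In the infinite setting there are uncountably many sequences to compare and no canonical order on Ω is available, which is why the nonconstructive well-ordering principle is invoked in the proof below.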
Proof of Lemma 6.3. Say ω ∈ Ω is periodic if τ^x ω = ω for some x ∈ Z \ {0}. If ω is not periodic, then the shifts (τ^x ω)_{x∈Z} are all distinct. Using the well-ordering principle, fix a well order ≺ of Ω and define the function

$$X(\omega) := \begin{cases} \Delta & \text{if } \omega \text{ is periodic;} \\ \text{the unique } x \text{ minimizing } \tau^{-x}\omega \text{ under } \prec & \text{otherwise.} \end{cases}$$

(We may think of τ^{−x}ω as ω viewed from location x, in which case X is the location from which ω appears least.) Clearly, X is shift-equivariant. It is almost-everywhere defined since Ω contains only countably many periodic elements.

Acknowledgements

Alexander Holroyd and Terry Soo were funded in part by NSERC. Terry Soo was funded in part by a U.B.C. Graduate Fellowship.

6.1 Bibliography

[1] D. Blackwell and P. Diaconis. A non-measurable tail set. In Statistics, Probability and Game Theory, volume 30 of IMS Lecture Notes–Monograph Series, pages 1–5. Institute of Mathematical Statistics, Hayward, CA, 1996.

[2] R. M. Burton and M. Keane. Density and uniqueness in percolation. Comm. Math. Phys., 121(3):501–505, 1989.

[3] M. Keane and M. Smorodinsky. A class of finitary codes. Israel J. Math., 26(3-4):352–371, 1977.

[4] M. Keane and M. Smorodinsky. Bernoulli schemes of the same entropy are finitarily isomorphic. Ann. of Math. (2), 109(2):397–406, 1979.

[5] J. C. Oxtoby. Measure and Category, volume 2 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2nd edition, 1980.

[6] H. Royden. Real Analysis. Prentice Hall, third edition, 1988.

[7] R. Solovay. A model of set theory in which every set of reals is Lebesgue measurable. Ann. Math., 92:1–56, 1970.

[8] G. Vitali. Sul problema della misura dei gruppi di punti di una retta. Gamberini and Parmeggiani, Bologna, 1905.

[9] E. Zermelo. Beweis, daß jede Menge wohlgeordnet werden kann. Math. Ann., 59(4):514–516, 1904.

Chapter 7

Conclusion

7.1 Non-randomized versus randomized

One of the common themes in Chapters 2, 3, and 5 is that randomness already present in the process can be used in a very careful way so as to mimic a construction that one would make if additional independent randomization were available. In all cases we define certain constructions that are specific to a particular application; although the constructions are explicit, we do not develop a systematic way for randomness to be extracted and used.

In Chapter 3, we see that if Π and Π′ are Poisson processes on [0, 1] of intensities 1.45 and 0.6 respectively, then there does not exist a deterministic function φ such that φ(Π) is equal in distribution to Π′ and all the points of φ(Π) are points of Π. On the other hand, there exists a coupling of Π and Π′ so that all the points of Π′ are points of Π. Thus we see that sometimes additional randomization is required. In the discrete setting of Bernoulli shifts, entropy serves as a useful measure of randomness, but it is not clear how to define a useful equivalent for Poisson point processes. (A sketch of the standard randomized coupling is given below.)
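The coupling mentioned above is the classical randomized one given by independent thinning: keep each point of Π independently with probability 0.6/1.45, and the retained points form a Poisson process of intensity 0.6, all of whose points are points of Π. A minimal simulation sketch (illustrative only, and not from the thesis; the helper names are ours):

```python
# Illustrative sketch: monotone coupling of Poisson processes on [0, 1]
# by independent thinning, with the intensities of the example above.
import random

def poisson_process(lam: float) -> list:
    """Sample a Poisson process of intensity lam on [0, 1] via Exp(lam) gaps."""
    points, t = [], random.expovariate(lam)
    while t <= 1.0:
        points.append(t)
        t += random.expovariate(lam)
    return points

def thin(points: list, p: float) -> list:
    """Independent thinning: retain each point with probability p."""
    return [x for x in points if random.random() < p]

pi = poisson_process(1.45)
pi_prime = thin(pi, 0.6 / 1.45)  # a Poisson process of intensity 0.6
assert set(pi_prime) <= set(pi)  # every point of pi_prime is a point of pi
```

The point of Chapter 3 is that, for these intensities, the extra randomness used in thin cannot be dispensed with: no deterministic φ achieves the same coupling.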
7.2 Finitary factors and source universal factors

There exist examples of processes which are isomorphic, but not finitarily isomorphic, to Bernoulli shifts [9]. More recently, van den Berg and Steif [9] gave an interesting class of examples where finitary factors do not exist but non-finitary factors do exist. Thus, sometimes one must settle for a non-finitary factor. We do not know whether there exists a finitary isomorphism between any two Poisson processes on R^d.

We defined source universal finitary factors of Poisson processes on R^d in Chapter 2. In the discrete setting of Bernoulli shifts, source universal finitary factors are defined in [4]. Does there exist a source universal finitary translation-equivariant non-randomized thinning? Even without considerations of equivariance, we do not know the answer to the following simple question. Let S ⊂ R^d be a Borel set of finite volume. Does there exist a deterministic map φ such that if Π is a Poisson process on S of intensity 3 or 4, then φ(Π) is a Poisson process on S of intensity 1 and all the points of φ(Π) are points of Π?

7.3 Future directions

Concerning factors of point processes, many interesting questions and open problems remain. A particularly nice example of a question in this area is the following relative of the four colour theorem. How many colours does it take to colour a random stationary planar map Π if the colouring is required to be a (measurable) translation-equivariant factor of Π, that is, a deterministic function of Π such that if a translation is applied to Π, the translated map is coloured the same way? As in the case of Poisson thinning, the requirement that the colouring be a factor intuitively means that it does not depend on additional randomization or a central authority; thus regions in the planar map use only local information to determine their colour. If the colouring is not required to be a deterministic function of the planar map, then it is much easier to show that there exists a stationary four colouring, that is, a colouring where the joint distribution of the planar map and the colouring is stationary. Recently, it was shown that for the Poisson–Voronoi map five colours suffice, but it remains open whether five colours are necessary [1], [8].

Another interesting problem comes from statistical physics [3], [2]. Given a Poisson process on R^d, is it possible to place non-overlapping spheres (hard spheres of differing radii) centered at the points, as a (translation-equivariant) factor of the Poisson process, so that there is an unbounded component of touching spheres? In this setting, the existence of a stationary Poisson hard sphere model that percolates is known only in high dimensions (d ≥ 45) [2]; in particular, it is an open problem for the cases d = 2, 3.

Returning to the interpretation that factor maps use only local information, we mention an open problem of [5] that is related to the extra-head problem. Given a Poisson process on the plane with intensity one, as a factor of the point process, is it possible to partition the plane into bounded connected parts of volume one so that each point receives exactly one part? Recent progress has been made in [7], where the plane is partitioned into connected parts.

Next we discuss one of the many interesting problems in factor matching [6]. Given two independent Poisson processes on the plane, R and B, as a factor of the point processes, is it possible to construct a (perfect) matching of the R-points and B-points so that the matching is planar, that is, so that the line segments joining matched points do not cross? In fact, even the existence of a planar stationary matching of the R-points and the B-points is open. In the case of a single Poisson process, it is easy to show that there exists a stationary planar matching of its points; however, whether this is possible as a factor is unknown.
7.4 Bibliography

[1] O. Angel, I. Benjamini, O. Gurel-Gurevich, T. Meyerovitch, and R. Peled. Stationary map coloring, 2009. Preprint.

[2] C. Cotar, A. E. Holroyd, and D. Revelle. A percolating hard sphere model. Random Structures Algorithms, 34(2):285–299, 2009.

[3] O. Häggström and R. Meester. Nearest neighbor and hard sphere models in continuum percolation. Random Structures Algorithms, 9(3):295–315, 1996.

[4] N. Harvey, A. E. Holroyd, Y. Peres, and D. Romik. Universal finitary codes with exponential tails. Proc. Lond. Math. Soc. (3), 94(2):475–496, 2007.

[5] C. Hoffman, A. E. Holroyd, and Y. Peres. A stable marriage of Poisson and Lebesgue. Ann. Probab., 34(4):1241–1272, 2006.

[6] A. E. Holroyd, R. Pemantle, Y. Peres, and O. Schramm. Poisson matching. Ann. Inst. Henri Poincaré Probab. Stat., 45(1):266–287, 2009.

[7] M. Krikun. Connected allocation to Poisson points in R^2. Electron. Comm. Probab., 12:140–145 (electronic), 2007.

[8] Á. Timár. Invariant colorings of random planar maps, 2009. Preprint.

[9] J. van den Berg and J. E. Steif. On the existence and nonexistence of finitary codings for a class of random fields. Ann. Probab., 27(3):1501–1522, 1999.