The small ball inequality with restricted coefficients

by

Dimitrios Karslidis

A thesis submitted in partial fulfilment of the requirements for the degree of

Doctor of Philosophy

in

The Faculty of Graduate and Postdoctoral Studies
(Mathematics)

The University of British Columbia
(Vancouver)

April 2016

© Dimitrios Karslidis, 2016

Abstract

The main focus of this document is the small ball inequality. The small ball inequality is a functional inequality concerning the lower bound of the supremum norm of a linear combination of Haar functions supported on dyadic rectangles of a fixed volume. The sharp lower bound in this inequality, as yet unproven, is of considerable interest due to the inequality's numerous applications. We prove the optimal lower bound in this inequality under mild assumptions on the coefficients of a linear combination of Haar functions, and further investigate the lower bounds under more general assumptions on the coefficients. We also obtain lower bounds of such linear combinations of Haar functions in alternative function spaces such as exponential Orlicz spaces.

Preface

Much of the following document is adapted from two of the author's research papers: [29] and [30]. In particular, Proposition 5.6, Section 6.1, and Section 6.2 form the main content of [29], "On the signed small ball inequality with restricted coefficients", while Section 6.3, Section 6.4, and all of Chapter 7 are adapted from [30], "Orlicz space bounds for special classes of hyperbolic sums". The first of these two manuscripts has been accepted for publication and will appear in the Indiana University Mathematics Journal; the second paper has been submitted.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
Acknowledgements
1 Introduction
  1.1 Preliminary facts
  1.2 Statement of the small ball inequality
  1.3 Main results
2 Recent history of the small ball inequality
3 Connections and applications
  3.1 Connection to probability theory
  3.2 Connection to approximation theory
  3.3 Connection to discrepancy theory
    3.3.1 Conjectures in discrepancy theory
    3.3.2 Proof of Theorem 3.17
    3.3.3 Halász's proof of Conjecture 3.18 in d = 2
4 The two-dimensional small ball inequality
  4.1 Temlyakov's proof of the two-dimensional small ball inequality
  4.2 The sharpness of the small ball inequality in all dimensions
5 Elements of dyadic harmonic analysis
  5.1 Khintchine's inequalities
  5.2 The Littlewood-Paley inequalities
  5.3 Connection of martingale differences with Haar functions
6 Two proofs of Theorem 1.3
  6.1 Motivation for the splitting condition
  6.2 The first proof of Theorem 1.3
  6.3 The second proof of Theorem 1.3
  6.4 The sharpness of the small ball inequality with restricted coefficients
7 Orlicz space bounds for special classes of hyperbolic sums
  7.1 Orlicz spaces
  7.2 The proof of Theorem 1.6
  7.3 Study of signed hyperbolic sums
8 Conclusions and future projects
Bibliography

List of Figures

1.1 A Haar function, hR, supported on a rectangle R.
1.2 Examples of Haar functions.
3.1 The function w used in the proof of the two-dimensional small ball probability conjecture by Talagrand.

Acknowledgements

Primarily, I would like to thank my supervisor, Malabika Pramanik. No part of this thesis would have been possible without her unwavering commitment and constant encouragement. Malabika has generously assisted me in all my scientific activities. She is always willing to give me advice and to help me navigate my way through the academic world.

I want to thank the members of my supervisory committee, Philip Loewen, Richard Froese, and Akos Magyar, for their valuable advice, their support, and their time. I would especially like to express my deep gratitude to Philip Loewen for supporting me in all my academic pursuits during my time at UBC. I would like to thank Fok-Shuen Leung for his constant support and his valuable advice. I want to express my gratitude to Dmitriy Bilyk for sharing his rich knowledge with me. We have also had many discussions on the small ball inequality from which I have benefited a lot.

I would also like to acknowledge the abundant support I received from my girlfriend, Pam Sargent, during the process of typing the thesis. She has helped me more than anybody else in drafting this document and has never ceased to believe in my abilities as a researcher. Σε ευχαριστώ πολύ! (Thank you very much!)

Finally, I wish to thank my family, all of my friends, and Idryma Paideias kai Europaikoy Politismou for their ample support all these years.

Chapter 1

Introduction

In this thesis, we focus on the small ball inequality, an inequality that conjectures a lower bound for the supremum norm of any linear combination of Haar functions. The precise definition of Haar functions will be given in Section 1.1; roughly speaking, these are functions that take the value 1 on half of their support and −1 on the other half. This inequality, which remains a conjecture in dimensions d ≥ 3, has important applications in several fields of mathematics. Here, we show that the conjectured inequality is valid under some mild additional assumptions on the linear combinations of Haar functions.

To get an idea of what this inequality claims, first consider dyadic intervals in [0, 1]. These are intervals whose endpoints are consecutive dyadic rationals, i.e., intervals of the form [m/2^n, (m+1)/2^n) where n ≥ 0 and 0 ≤ m < 2^n. For example, [0, 1/4) and [1/4, 1/2) are dyadic intervals of [0, 1] of length 1/4.
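As a concrete illustration of this definition (an editorial sketch, not part of the thesis; the Python code and names are illustrative only), the dyadic intervals of a fixed length can be listed and checked to tile [0, 1):

```python
def dyadic_intervals(n):
    """The dyadic intervals [m/2^n, (m+1)/2^n) of [0, 1), listed left to right."""
    return [(m / 2**n, (m + 1) / 2**n) for m in range(2**n)]

intervals = dyadic_intervals(2)
print(intervals)   # [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]

# At a fixed length, dyadic intervals are pairwise disjoint and tile [0, 1):
assert all(intervals[k][1] == intervals[k + 1][0] for k in range(len(intervals) - 1))
assert sum(b - a for a, b in intervals) == 1.0
```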
Now, for a dyadic interval I,consider the function hI supported on I and taking the value −1 on the left half ofI, 1 on the right half of I, and zero outside of the interval. Finally, form a linearcombination of such functions hI , where the dyadic interval I is of a fixed length, i.e.,∑|I|=2−n αIhI . For reasons described later on, one is often interested in determininghow small the largest possible magnitude of such linear combinations can be. Inother words, one seeks lower bounds of the supremum norm of functions of the form∑|I|=2−n αIhI . We will explain in the next paragraph why this is reasonably simplewhen∑|I|=2−n αIhI is considered. The present work focuses on lower bounds of the1supremum norm of the higher dimensional analogues of such functions. The small ballinequality is an inequality which conjectures a sharp lower bound for such quantities,as∑|R|=2−n αRhR, where hR is a higher dimensional analogue of a function hI andits precise definition is given in (1.3).The key step in finding the largest possible magnitude of the function∑|I|=2−n αIhIis to notice that dyadic intervals of a fixed length are disjoint. Therefore, the largestmagnitude is the largest of numbers |αI |. However, this is not the case in higherdimensions, and this creates a barrier in solving the higher dimensional problem.To further clarify the problem, consider a rectangle R, say with area |R| = 2−n,in the unit square. Divide this rectangle into four subrectangles, Ri, i = 1, . . . , 4,where R1 corresponds to the upper left subrectangle, R3 corresponds to the lowerright subrectangle, and R2 and R4 correspond to the upper right and lower leftsubrectangles respectively. The length of each side of these subrectangles will behalf of the length of the corresponding side of the original rectangle. Consider afunction which takes the value +1 on R1 and R3, −1 on R2 and R4, and zero outsideof the rectangle R. Denote such functions by hR. A linear combination of suchfunctions corresponding to planar rectangles of area 2−n are functions of the formH =∑|R|=2−n αRhR. Letting the coefficients αR vary yields different correspondingfunctions H. In this setting, the small ball inequality conjectures a lower bound forthe largest value of H over all possible choices of coefficients in terms of n, for largevalues of n.−1+1+1−1Figure 1.1: A Haar function, hR, supported on a rectangle R.2The small ball inequality would be almost obvious if we only considered linearcombinations of functions which correspond to non-overlapping rectangles. When therectangles do overlap, the analysis is not straightforward. If we replace the planarrectangles by parallelepipeds, for example, the situation becomes much more difficultand delicate. In this thesis, we focus on obtaining lower bounds on the largest valueof H in terms of n, in any dimension.The formulation of the small ball inequality is given in Chapter 1. This chapter isintroductory in nature and discusses various versions of the small ball inequality, andwe give all of the preliminary facts needed to state the small ball inequality precisely.We also include the main results in Chapter 1 to better illustrate the connection ofthese results to the small ball inequality.In Chapter 2, we outline the history of this problem and progress to date.The main purpose of Chapter 3 is to highlight the connections between the smallball inequality and other areas of mathematics. 
Surprisingly, the validity of thesmall ball inequality is also a key to solving open problems in fields such as proba-bility theory, approximation theory and the theory of irregularities of distributions.These connections have been dealt with precisely in Chapter 3, but we provide somemotivation here in non-technical terms.Let the unit square T := [0, 1]2 represent the city of Vancouver. To each point ~t =(t1, t2) of T , we associate the price, X~t, of the property at that point. Since propertyprices in Vancouver are often high, it is natural to be interested in determining thelikelihood that the property prices in the entire city will fall below a prescribedvalue, say ε > 0. In other words, we are concerned with estimating P(sup~t∈T X~t < ε)for sufficiently small ε > 0, where P is the probability measure of the underlyingspace. Under certain normality on the probability distribution of {X~t}, the small ballinequality will determine this probability. We give all the details of this connectionin Section 3.1 of Chapter 3.Now, suppose we have a collection of messages represented as binary strings of0’s and 1’s. Due to physical constraints, we would like to determine a subcollectionof the messages with the property that, given any message in the collection, thereis a member of that subcollection which recovers the given message with prescribed3accuracy. We obviously want to find the smallest subcollection that will do the job.This is related to the notion of entropy, and the small ball inequality plays a crucialrole in solving such problems. This is further explained in Section 3.2 of Chapter 3.In practice, there are many functions whose integrals cannot be written in termsof elementary functions. However, we may still be interested in estimating the areaunder such curves. One can, of course, approximate it by a Riemann sum, and itis natural to want to obtain a bound on the error in the approximation. It turnsout that how well-distributed the tags of the partition of the Riemann sum areaffects the error bound. The discrepancy function defined in Section 3.3 of Chapter3 provides a way of determining how far the distribution of the tags are from aperfectly randomized scenario. In a quantifiable sense that has been made precisein Section 3.3, minimizing the error bound in the numerical integration problemcorresponds to minimizing the discrepancy function, and the small ball inequalityplays a vital role in bounding the discrepancy function. This connection is presentedin Section 3.3 of Chapter 3.In Chapter 4, we present a detailed proof of the two-dimensional small ball in-equality. We also highlight the similarities between the arguments of this proof andthe proof, given in Chapter 3, of the two-dimensional lower bound of the discrepancyfunction’s L∞-norm.In Chapter 5, we provide the background needed to understand the main re-sults of this thesis, presented in Chapter 6 and Chapter 7. In particular, we recallKhintchine’s inequalities, a special case of the Littlewood-Paley inequalities for mar-tingales, since Khintchine’s inequalities are used in the proof of Theorem 1.4. Wealso introduce the Littlewood-Paley inequalities for martingales and connect them tothe Littlewood-Paley inequalities for Haar functions, since such inequalities for Haarfunctions are used in the proofs of Theorem 1.3 and Theorem 1.6.Chapters 6 and 7 are primarily devoted to proving the main results. In particular,Theorem 1.3 and Theorem 1.4 are proved in Chapter 6. 
In Chapter 7, we first givethe necessary background information on exponential Orlicz spaces before movingon to proving Theorem 1.6. The reader will find this brief introduction to Orliczspaces helpful in understanding the proof of Theorem 1.6. We conclude Chapter 74with further investigation of hyperbolic sums in exponential Orlicz spaces.Chapter 8, the final chapter of the thesis, provides a summary of the results ofthis thesis and outlines some related future projects.1.1 Preliminary factsLet 1I(x) be the characteristic function of the interval I, i.e.,1I(x) =1, x ∈ I0, otherwise .Consider the collection of the dyadic intervals of [0, 1]:D ={I =[m2n,m+ 12n): m,n ∈ Z, n ≥ 0, 0 ≤ m < 2n}withD∗ = D ∪ {[−1, 1]}.(1.1)For every interval I ∈ D, its left and right dyadic halves are denoted by Il and Irrespectively. We define the L∞-normalized Haar function, hI , corresponding to aninterval I as:hI(x) = −1Il +1Ir . (1.2)Note that h[−1,1](x) = 1[0,1)(x) for all x in [0, 1].It is easy to extend Haar functions to higher dimensions. We consider the familyof dyadic rectangles in dimension d ≥ 2:Dd = {R = R1 × · · · ×Rd : Rj ∈ D},i.e., every R ∈ Dd is a Cartesian product of dyadic intervals. The Haar functionssupported on R are defined as a coordinate-wise product of one-dimensional Haarfunctions:hR(x1, . . . , xd) = hR1(x1) · · ·hRd(xd), where R = R1 × · · · ×Rd, Rj ∈ D. (1.3)5(a) A Haar function sup-ported on the interval[0, 1).(b) A Haar function sup-ported on the interval[0, 0.5).Figure 1.2: Examples of Haar functions.If we consider two distinct dyadic intervals, then either one will be strictly con-tained in the other, or they will be disjoint. Haar functions enjoy the followingproperties:h2R(x) = 1R(x), (1.4)∫[0,1]dhR(x)hR′(x)dx = 0, whenever R 6= R′, (1.5)∫[0,1]dhR(x)dx = 0, ∀R ∈ Dd. (1.6)LetHdn = {~r ∈ Zd+ : |~r| := r1 + r2 + · · ·+rd = n}withZd+ = {~r = (r1, . . . , rd) : rj ≥ 0 and rj ∈ Z, j = 1, . . . , d},6andR~r = {R ∈ Dd : Rj ∈ D and |Rj| = 2−rj , j = 1, . . . , d}.In other words, the set R~r is a collection of axis-parallel dyadic rectangles whose sidelength along the jth axis is 2−rj . The rectangles in R~r are disjoint and partition thed-dimensional unit cube, [0, 1)d. Also, it will be useful to observe that the cardinalityof the set Hdn denoted by #Hdn is of order nd−1. This can be stated as #Hdn ' nd−1,which means that there exist positive constants C1(d) and C2(d) depending on thedimension d such that C1(d)nd−1 ≤ #Hdn ≤ C2(d)nd−1. This estimate agrees withour intuition, since each of the first d − 1 coordinates of the vector ~r ∈ Hdn canbe chosen in n different ways. The last coordinate is determined immediately oncethe first d − 1 coordinates have been selected, since such ~r has prescribed l1-norm,|~r| = n.A function defined on [0, 1]d of the formf~r =∑R∈R~rαRhR with αR = ±1is called an ~r-function with parameter ~r ∈ Hdn. These ±1-valued functions are alsoknown in the literature as generalized Rademacher functions. It can easily be verifiedthat ~r-functions are orthonormal with respect to the L2-norm.In all dimensions d, | · | stands for the d-dimensional Lebesgue measure. Thesignum function of a real number x is defined bysgn(x) =1, if x ≥ 0−1, if x < 0 . (1.7)1.2 Statement of the small ball inequalityAs mentioned earlier, the small ball inequality is a functional inequality concerningthe lower bound of the supremum norm of a sum of Haar functions supported ondyadic rectangles of a fixed volume. 
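To make the definitions of Section 1.1 concrete, the following minimal numerical sketch (an editorial illustration, not part of the thesis; all function and variable names are assumptions) builds the L∞-normalized Haar functions of (1.2)-(1.3) on a fine dyadic grid in dimension d = 2, spot-checks properties (1.4)-(1.6), and verifies the normalization of an r-function.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)
n = 4                            # scale: rectangles of area 2^-n
N = 2 ** (n + 2)                 # grid points per axis, finer than the finest dyadic scale used
x = (np.arange(N) + 0.5) / N     # midpoints of the grid cells in [0, 1)

def haar_1d(m, k):
    """L^infinity-normalized Haar function on I = [m 2^-k, (m+1) 2^-k): -1 on I_l, +1 on I_r."""
    left, mid, right = m / 2**k, (m + 0.5) / 2**k, (m + 1) / 2**k
    return ((x >= mid) & (x < right)).astype(float) - ((x >= left) & (x < mid)).astype(float)

def haar_rect(ms, rs):
    """h_R for R = R_1 x ... x R_d with R_j = [m_j 2^-r_j, (m_j+1) 2^-r_j), as in (1.3)."""
    h = haar_1d(ms[0], rs[0])
    for mj, rj in zip(ms[1:], rs[1:]):
        h = np.multiply.outer(h, haar_1d(mj, rj))
    return h

integrate = lambda f: f.mean()   # exact integral over [0,1)^d for functions constant on grid cells

r = (1, n - 1)                                        # a vector in H^2_n: |r| = n
hR, hR2 = haar_rect((0, 0), r), haar_rect((1, 2), r)
assert np.allclose(integrate(hR), 0.0)                # (1.6): mean zero
assert np.allclose(integrate(hR * hR2), 0.0)          # (1.5): distinct rectangles are orthogonal
assert np.allclose(hR ** 2, (hR != 0).astype(float))  # (1.4): h_R^2 = 1_R

# An r-function: +-1 coefficients over the rectangles of R_r, which partition [0,1)^2,
# so the resulting function is +-1-valued and has L^2 norm exactly 1.
f_r = sum(rng.choice([-1, 1]) * haar_rect(ms, r)
          for ms in itertools.product(range(2 ** r[0]), range(2 ** r[1])))
assert np.allclose(integrate(f_r ** 2), 1.0)
```

Summing such pieces over all rectangles of a fixed volume 2^-n produces exactly the kind of function the small ball inequality concerns.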
Such sums of Haar functions are also referred7to as hyperbolic sums.For brevity, let Adn = {R ∈ Dd : |R| = 2−n}, i.e., the set of all dyadic rectangleswhose d-dimensional volume is equal to 2−n.Conjecture 1.1. (The small ball conjecture):In all dimensions d ≥ 2, for any choice of the coefficients αR, and all integersn ≥ 1, one has the following inequalitynd−22∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥∞≥ Cd2−n∑R∈Adn|αR|, (1.8)where Cd is a constant depending only on the dimension d, not on n or the choice of{αR}.The critical feature of inequality (1.8) is the precise exponent of n occurring onthe left side. If we replace nd−22 with nd−12 , then we get the so-called “trivial bound”:nd−12∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥L2≥ Cd2−n∑R∈Adn|αR|. (1.9)If we compare the conjectured small ball inequality with inequality (1.9), noting thatthe L∞([0, 1]d)-norm is at least as large as the L2([0, 1]d)-norm, then we can see thatthe conjectured small ball inequality is better than (1.9) by a factor of√n.Proof of (1.9). We calculate the L2-norm of Hn =∑|R|=2−n αRhR. Using the orthog-onality of Haar functions and the fact that ‖hR‖22 = 2−n, Parseval’s identity givesus‖Hn‖22 =∑|R|=2−n|αR|2‖hR‖22 = 2−n∑|R|=2−n|αR|2≥ C2d2−n(∑|R|=2−n |αR|)22nnd−1,where in the last line, we used the Cauchy-Schwarz inequality and the fact that#{R ∈ Dd : |R| = 2−n} = 2n#Hdn ' 2nnd−1. Now, taking the square root of both8sides, we get‖Hn‖2 ≥ Cd( ∑|R|=2−n|αR|)2−nn−d−12 ,which is exactly (1.9).There is a simplified form of the small ball inequality which is of interest as well.If we consider coefficients {αR} from the set {−1, 1} in the small ball conjecture,then we obtain the signed small ball conjecture, i.e.,Conjecture 1.2. (The signed small ball conjecture) If αR = ±1 for every R ∈ Adn,then we have ∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥∞≥ Cdn d2 , (1.10)where Cd is a constant depending only on the dimension d, not on n or the choice of{αR}.Note that when {αR} ⊂ {−1, 1}, then the trivial bound in (1.9) is given by∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥L2≥ Cdn d−12 . (1.11)In Section 4.2 we will prove that the conjectured signed small ball inequality, in-equality (1.10), is sharp. In other words, we will prove that there exists a positiveconstant Cd > 0 and a choice of the coefficients {αR} ⊂ {−1, 1} such that∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥∞≤ Cdn d2 .In particular, this fact will imply that if inequality (1.8) holds, then it is sharp aswell.91.3 Main resultsAccording to inequality (1.10), if one wants to attempt to solve the signed small ballconjecture, then one has to obtain a gain over the trivial bound given in (1.11), nd−12 ,by a factor of√n so that√n·n d−12 is the optimal lower bound in the signed small ballinequality. The best known gain over the trivial bound for general coefficients {αR} ⊂{−1, 1} for all d ≥ 3 was obtained by D. Bilyk, M. Lacey, and A. Vagharshakyanin [5]. We will state their result precisely in Chapter 2. In this work, we follow adifferent approach to that taken in [5]. Here, we impose a restriction on the collectionof coefficients and aim to get the optimal bound appearing in the signed small ballconjecture for this restricted collection of coefficients.To better understand the restriction we impose on the coefficients, let us observethat, for given n ≥ 1 and d ≥ 3, any choice of coefficients {αR}R∈Adn ⊂ {−1, 1} canbe generated by a mapα∗ : Adn → {−1, 1}. (1.12)such that α∗(R) = αR for all R ∈ Adn. 
SinceAdn ⊂n⋃r1=0A1r1 ×n⋃r1=0Ad−1n−r1 ,the map α∗ in (1.12) can be viewed as a restriction of a, not necessarily unique, mapα :n⋃r1=0A1r1 ×n⋃r1=0Ad−1n−r1 → {−1, 1}.The signed small ball conjecture states that inequality (1.10) must hold for thecollection of coefficients {αR := α(R) = α∗(R), R ∈ Adn}. We will be interested incollections of coefficients generated by maps of the form:α :n⋃r1=0A1r1 ×n⋃r1=0Ad−1n−r1 → {−1, 1}10withα(R1 ×R2 × · · · ×Rd) = α1(R1) · α2(R2 × · · · ×Rd), (1.13)whereα1 :n⋃r1=0A1r1 → {−1, 1} and α2 :n⋃r1=0Ad−1n−r1 → {−1, 1}.With a slight abuse of notation, we denote the values of a map in (1.13) by αR =αR1αR2×···×Rd , and we letAsplit ={αR : R ∈ Adn, αR = αR1 · αR2×···×Rd with αR1 , αR2×···×Rd ∈ {±1}}.We will show that for any integer n ≥ 1, d ≥ 3 and any choice of coefficients in Asplit,Conjecture 1.2 holds, i.e.,Theorem 1.3. For all integers n ≥ 1, d ≥ 3 and all choices of coefficients {αR}R∈Adn ⊂Asplit we have that ∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥∞≥ Cdn d2 , (1.14)where Cd is a constant depending only on the dimension d, not on n or the choice of{αR}.Moreover, the above inequality is sharp.Theorem 1.4. For any integer n ≥ 1 and d ≥ 3, there exists a collection of coeffi-cients {αR}R∈Adn ⊂ Asplit such that∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥∞≤ Cdn d2 ,where Cd is a constant depending only on the dimension d.We have already seen that the L2-norm of the signed hyperbolic sum, Hn =∑|R|=2−n αRhR with {αR} ⊂ {−1, 1}, can be calculated explicitly up to a multiplica-tive constant by using the orthogonality of Haar functions, i.e., ‖Hn‖2 ' n d−12 . It11turns out that, for any p < ∞, the Lp-norm of the signed hyperbolic sum has thesame behaviour as its L2-norm, i.e., ‖Hn‖p ' n d−12 , where the implicit constantsdepend on p and on the dimension d. We will see that this fact is a consequence ofthe Littlewood-Paley inequalities. In order to gain insight and to better understandthe signed small ball conjecture, we will also look for bounds of signed hyperbolicsums in function spaces which lie between Lp and L∞. Exponential Orlicz spaces,denoted by exp(La), are one type of such function spaces. These will be defined andfurther discussed in Chapter 7. Exponential Orlicz spaces have already appearedin the study of problems related to the small ball inequality. Based on the work ofBilyk et al. in [6], in which they prove an analogous result in dimension 2 for thediscrepancy function instead of hyperbolic sums, we conjecture the following:Conjecture 1.5. If αR = ±1 for every R ∈ Adn, then for d ≥ 2 and n > 1 we have∥∥∥∥∥∥∑|R|=2−nαRhR∥∥∥∥∥∥exp(La)≥ C(d, a)n d−12 + 12− 1a , 2 ≤ a <∞, (1.15)where C(d, a) is a constant depending only on the dimension d and the scale ofintegrability a, not n or the choice of {αR}.The validity of this conjecture remains unknown in d ≥ 3. However, adjustingthe arguments given by V. Temlyakov in [49] shows that the conjecture is true indimension d = 2. We will also verify a weaker version of Conjecture 1.5 in alldimensions d ≥ 3 in Chapter 7. In particular, we will prove the following theorem:Theorem 1.6. For any integer n > 1 and any choice of coefficients αR ∈ Asplit, wehave ∥∥∥∥∥∥∑|R|=2−nαRhR∥∥∥∥∥∥exp(La)≥ C(d, a) n d−12 + 12− 1a , 2 ≤ a <∞ (1.16)for d ≥ 3.Theorem 1.3 and Theorem 1.4 showcase the usage of the collection Asplit in prov-ing optimal bounds for the conjectured signed small ball inequality. Therefore, it12is natural to ask whether the conjectured inequalities (1.10) and (1.15) hold understructural assumptions on the coefficients. 
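To see the splitting condition in action, here is a small numerical experiment (an editorial sketch, not taken from the thesis; the sampling scheme, the parameters n = 6 and d = 3, and all names are illustrative assumptions). It draws coefficients in Asplit, forms the signed hyperbolic sum Hn = Σ_{R∈Adn} αR hR on a grid that resolves every dyadic scale involved, and compares the exactly computable L2 norm, which equals √(#Hdn), with the sup-norm, which Theorem 1.3 bounds below by a constant multiple of n^{d/2}.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                             # |R| = 2^-n in dimension d = 3
G = 2 ** (n + 1)                  # grid resolution per axis, fine enough for every scale r_j <= n
i = np.arange(G)
sgn = lambda r: np.where((i >> (n - r)) & 1, 1.0, -1.0)  # 1-D Haar sign pattern at scale 2^-r
box = lambda r: i >> (n + 1 - r)                         # index of the dyadic interval of length 2^-r

# alpha_1 assigns a sign to every dyadic interval R_1 (one draw per scale, reused for all r with that r_1).
alpha1 = {r1: rng.choice([-1.0, 1.0], size=2 ** r1) for r1 in range(n + 1)}

H, count = np.zeros((G, G, G)), 0
for r2 in range(n + 1):
    for r3 in range(n + 1 - r2):
        r1 = n - r2 - r3                                 # |r| = r1 + r2 + r3 = n
        # alpha_2 assigns a sign to every dyadic box R_2 x R_3 at scales (r2, r3); each such
        # box occurs for exactly one r, so these independent draws define a single map alpha_2.
        alpha2 = rng.choice([-1.0, 1.0], size=(2 ** r2, 2 ** r3))
        a = alpha1[r1][box(r1)] * sgn(r1)                # axis-1 factor alpha_{R_1} h_{R_1}
        b = alpha2[np.ix_(box(r2), box(r3))] * np.outer(sgn(r2), sgn(r3))
        H += a[:, None, None] * b[None, :, :]            # the r-function for this r
        count += 1

print(count, np.sqrt(count))        # #H^3_6 = 28, so ||H_n||_2 = sqrt(28) ~ 5.29 exactly
print(np.sqrt((H ** 2).mean()))     # numerical L2 norm (reproduces sqrt(28))
print(np.abs(H).max(), n ** 1.5)    # sup-norm versus the rate n^{d/2} ~ 14.7 (up to the constant C_d)
```

Whether comparable bounds survive under other structural assumptions on the coefficients is a natural follow-up question.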
Such questions have been discussed inChapter 8.13Chapter 2Recent history of the small ballinequalityIn this chapter, we focus on outlining known results pertaining to the resolution ofthe small ball conjecture.In dimension d = 2, the small ball conjecture was first proved by M. Talagrand[47] in 1994, and Temlyakov [49] gave a second proof in 1995. However, in moregeneral dimensions d ≥ 3, the small ball conjecture remains unsolved.Remark 2.1. In what follows, the expression A & B means that there is a constantK such that A ≥ KB. In our setting, K does not depend on n or {αR}.The first improvement over the trivial bound in higher dimensions was given indimension d = 3 by J. Beck [2]. In particular, the lower bound he obtained wasbetter than the trivial bound by a factor proportional to (log n)−1. In 2008, a bodyof work authored by Bilyk, Lacey, and Vagharshakyan [3, 4] made significant progresstoward understanding the small ball inequality. They proved the following theorem:Theorem 2.2. For d ≥ 3, there exists 0 < η(d) < 12such that, for all choices ofcoefficients αR and all non-negative integers n ≥ 1,nd−12−η(d)∥∥∥∥∥ ∑R∈AdnαRhR∥∥∥∥∥∞& 2−n∑R∈Adn|αR|. (2.1)14In the proof of Theorem 2.2, no specific values of η(d) are given, though it is clearfrom the method of proof that η(d) must be small. The small ball conjecture saysthat (2.1) should hold with η = 12.The proofs of the two-dimensional small ball inequality given by Talagrand andTemlyakov were fundamentally different. Temlyakov’s proof used a duality argumentwhereas Talagrand’s used a combinatorial argument. Here we give a rough outline ofTemlyakov’s duality argument as similar techniques have been employed in provingTheorem 1.3 and Theorem 1.6.To begin the argument, Temlyakov constructed a function, called a test function,Ψ ∈ L1([0, 1]2) such that‖Ψ‖1 ≤ C, (2.2)where C is an absolute constant which does not depend on the choice of the coeffi-cients {αR} or the integer n ≥ 1. He further required that〈Ψ,∑|R|=2−nαRhR〉:=∫[0,1]dΨ ·∑|R|=2−nαRhRdx ≥ C2−n∑|R|=2−n|αR| (2.3)for all coefficients {αR} and every n ≥ 1. He then deduced the two-dimensionalsmall ball inequality from a simple application of Ho¨lder’s inequality to (2.3) whichexploited (2.2).The structure of the test function that Temlyakov constructed restricted theproof’s validity to the two-dimensional small ball inequality. In particular, the testfunction that Temlyakov constructed was in the form of a Riesz product, i.e.,Ψ =n∏k=0(1 + fk), (2.4)where fk is the two-dimensional ~r-function defined byfk := f(k,n−k) =∑R∈R~rsgn(αR)hR, ~r = (k, n− k).15In dimension d = 2, the product of Haar functions supported on dyadic rectangles ofa fixed volume is again a Haar function. This property, known as the Product Rule,allows one to write Ψ−1 as a sum of Haar functions, and verify that Ψ satisfies (2.2)and (2.3). The precise formulation of the Product Rule is given in Lemma 3.23, statedin Section 3.3. Unfortunately, this property does not hold in dimensions d ≥ 3, andso Temlyakov’s proof only applies in the two-dimensional case. Temlyakov’s proofwill be presented in more detail in Chapter 4.The absence of the Product Rule in higher dimensions makes the general smallball conjecture a very challenging problem. However, Beck [2] and Bilyk, Lacey andVagharshakyan [3, 4] successfully modified Temlyakov’s approach to yield partialresults in higher dimensions. 
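Before turning to the higher-dimensional modifications, Temlyakov's two-dimensional Riesz product can be checked numerically. The following sketch (an editorial illustration, not from the thesis; the parameters and names are assumptions) takes random signs αR = ±1, builds Ψ = ∏_{k=0}^{n}(1 + fk) on a grid, and verifies that ‖Ψ‖1 = 1 and that ⟨Ψ, Hn⟩ = 2^{-n} Σ|αR| = n + 1, which by Hölder's inequality forces ‖Hn‖∞ ≥ n + 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
G = 2 ** (n + 1)                  # resolves every dyadic scale up to 2^-n exactly
i = np.arange(G)
sgn = lambda r: np.where((i >> (n - r)) & 1, 1.0, -1.0)  # 1-D Haar sign pattern at scale 2^-r
box = lambda r: i >> (n + 1 - r)                         # dyadic interval index at scale 2^-r

# Random signs alpha_R for every R in R_(k, n-k); f_k is the corresponding r-function,
# and H_n = sum_k f_k is the signed hyperbolic sum (here sgn(alpha_R) = alpha_R).
f, H = [], np.zeros((G, G))
for k in range(n + 1):
    alpha = rng.choice([-1.0, 1.0], size=(2 ** k, 2 ** (n - k)))
    fk = alpha[np.ix_(box(k), box(n - k))] * np.outer(sgn(k), sgn(n - k))
    f.append(fk)
    H += fk

Psi = np.ones((G, G))
for fk in f:                      # the Riesz product (2.4); each factor is 0 or 2, so Psi >= 0
    Psi *= 1.0 + fk

print(Psi.min(), Psi.mean())      # Psi >= 0 and ||Psi||_1 = integral of Psi = 1 exactly
print((Psi * H).mean(), n + 1)    # <Psi, H_n> = n + 1, by orthogonality and the Product Rule
print(np.abs(H).max())            # hence ||H_n||_inf >= n + 1, the two-dimensional signed bound
```

In dimensions d ≥ 3 the Product Rule is no longer available, and this computation does not transfer directly.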
To adapt Temlyakov’s duality approach, they insteadconsidered the functionΨ =q∏t=1(1 + ρ˜Ft), (2.5)where q > 0 is a constant depending on n, ρ˜ > 0 is a constant depending on q, andFt =∑~r∈Atf~r. (2.6)Here, At is the collection of vectors whose first-coordinate depends on t in such away that {At}qt=1 partitions Hdn, i.e.,q⊔t=1At = Hdn.Expanding the product in (2.5) allowed them to writeΨ = 1 + Ψsd + Ψq, (2.7)where Ψsd is a sum of a product of ~r-functions for which the Product Rule can beapplied, and Ψq is a sum of a product of ~r-functions for which the Product Rule fails.They then employed the Littlewood-Paley inequalities to obtain estimates on the Lp-norm of a product of ~r-functions for which the Product Rule fails. Combining these16estimates with combinatorial arguments, Bilyk, Lacey and Vagharshakyan provedthat‖Ψsd‖1 ≤ Cfor q = nη. As a result, they obtained〈Hn,Ψsd〉 =∫[0,1]dHn(x)Ψsd(x)dx&qn− d−12 · 2−n∑R∈Adn|αR|.(2.8)Finally, they used Ho¨lder’s inequality to finish the proof of Theorem 2.2. Thistheorem gives an improvement over the trivial bound by a factor of q. Before theirwork, Beck had already used probabilistic and combinatorial methods to prove theL1-boundedness of Ψsd in dimension d = 3 for a value of q which was logarithmic inn and smaller than nη. Beck’s work heavily influenced [3, 4].As mentioned in Chapter 1, the complete resolution of the small ball conjec-ture seems to be a difficult problem, but the signed small ball conjecture, stated asConjecture 1.2 on page 9, is thought to be more manageable, due to the additionalsimplifying assumptions on the coefficients {αR}. In the case of the signed small ballinequality, the trivial bound reduces to∥∥∥∥∥∥∑~r∈Hdnf~r∥∥∥∥∥∥L2& n d−12 , (2.9)whereHn =∑R∈AdnαRhR =∑~r∈Hdnf~r.The trivial bound can be obtained by using the orthogonality of generalizedRademacher functions and the fact that #Hdn, the cardinality of the set Hdn, is oforder nd−1. The exponent of the integer n appears naturally in the trivial bound(2.9). This comes from the volume constraint on dyadic rectangles, |R| = 2−n, orequivalently |~r| = n, which reduces the number of “free” parameters in the vector17~r ∈ Hdn to d − 1. Therefore, the total number of terms in the sum is of order nd−1.In [5], Bilyk, Lacey, and Vagharshakyan explicitly quantified the improvement overthe trivial bound, i.e., they proved the signed version of the Theorem 2.2, explicitlyproviding the range of values for η. In particular, they showed that, for every  > 0,∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥∞≥ Cd · n d−12 + 18d−, (2.10)where αR ∈ {±1}. Under an additional assumption on the length of the first sideof the dyadic rectangle R, Bilyk, Lacey, I. Parissis and Vagharshakyan [7] made animprovement over (2.10) when d = 3. Specifically, they showed that∥∥∥∥∥ ∑|R|=2−n|R1|≥2−n/2αRhR∥∥∥∥∥∞& n 98 , (2.11)where αR ∈ {±1}, and the sum is taken over all dyadic rectangles R = R1×· · ·×Rd ∈Adn with the length of the first side R1 of R to be bounded below by 2−n/2.18Chapter 3Connections and applicationsAs indicated in the introduction, the small ball inequality lies at the interface ofmany areas of mathematics. This chapter, and the next, is devoted to a detailedanalysis of these connections. The material in these chapters has been adapted fromthe excellent survey written by Bilyk [10].3.1 Connection to probability theoryFor convenience, we start by recalling some definitions from probability theory.Definition 3.1. 
(a) If T is an arbitrary index set, a centred Gaussian process,g = {gt ; t ∈ T}, is a stochastic process with the property that, for any positiveinteger m and for every choice of t1, . . . , tm ∈ T , (gt1 , . . . , gtm) has a multivariatenormal distribution. That is, the joint density function of (gt1 , . . . , gtm) is givenbyf(~x) = {(2pi)m det(V)}−1/2 exp{− 12~xV−1~x′}, ~x = (x1, . . . , xm) ∈ Rm,where V = (vij), vij = E(gtigtj), is the m×m covariance matrix depending ont1, . . . , tm, and ~x′ is the transpose of the vector ~x.(b) A Brownian sheet, B, is a centred multi-parameter Gaussian process which19satisfies the covariance relationE(B(~s )B(~t ))=d∏j=1min(sj, tj),for ~s,~t ∈ T := [0, 1]d, with ~s = (s1, . . . , sd) and ~t = (t1, . . . , td).The name “small ball inequality” comes from probability theory, where one isinterested in determining the exact asymptotic behaviour of the small deviationprobability φ(ε) := − logP(‖B‖L∞([0,1]d) < ε) as ε→ 0+. In general, research in smalldeviation problems aims to uncover the asymptotic behaviour of the probability thata random process is accumulated in a small ball around the mean of the process insome norm.It is a known fact that in dimension d = 1 (when B is Brownian motion),limε→0+φ(ε)ε−2 =pi28, see [19]. In higher dimensions, the situation is more difficultand subtle, though upper bounds of the asymptotic behaviour of φ(ε) as ε→ 0+ arecompletely understood. In particular, for all d ≥ 2 and ε > 0 small,φ(ε) ≤ Cε2(log1ε)2d−1. (3.1)This inequality was proved by T. Dunker, T. Ku¨hn, M. Lifshits and W. Linde [18].Lower bounds of φ(ε) for small ε > 0 are not as well-understood as the upperbounds, though it is believed that they should have the same form.Conjecture 3.2. If d ≥ 2 and φ(ε) is the small deviation probability associated tothe Brownian sheet B, then for small ε > 0,C ′ε2(log1ε)2d−1≤ φ(ε) ≤ Cε2(log1ε)2d−1, (3.2)where C and C ′ are absolute constants.Note that the right side of inequality (3.2) is equivalent to (3.1). Conjecture 3.2was shown to be true by Talagrand [47] in the two-dimensional case but remains20open in higher dimensions d ≥ 3.Remark 3.3. The two-sided inequality in Conjecture 3.2 is usually expressed morecompactly by writing φ(ε) ' 1ε2(log 1ε)2d−1as ε→ 0+.Though the precise lower bound appearing in Conjecture 3.2 has not been proven,there is a well-known lower bound on the L2-norm in dimensions d ≥ 2. Morespecifically, E. Csa´ki [17] has shown thatCε2(log1ε)2d−2≤ − logP(‖B‖L2([0,1]d) < ε). (3.3)In fact, since ‖B‖L2 ≤ ‖B‖L∞ , we haveφ(ε) ≥ Cε2(log1ε)2d−2, (3.4)where C is an absolute constant. Note that the lower bound of φ(ε) in (3.4) is smallerthan the lower bound stated in Conjecture 3.2 by a factor of log 1εfor sufficiently smallε > 0 .There is a continuous analogue of the conjectured small ball inequality, which westate below, and we will see that the validity of this continuous analogue implies thetruth of Conjecture 3.2. Recall that D∗ = D ∪ {[−1, 1]}.Conjecture 3.4. In dimensions d ≥ 2, there exists an orthonormal basis of L2([0, 1]d),{uR}, enumerated using the index set Dd∗, such that, for any choice of coefficientsαR,nd−22∥∥∥∥∥∥∑|R|=2−nαRηR∥∥∥∥∥∥∞& 2− 3n2∑|R|=2−n|αR|, (3.5)whereηR(x1, · · · , xd) :=∫ x10· · ·∫ xd0uR(y1, . . . , yd)dy1 · · · dyd, (x1, · · · , xd) ∈ [0, 1]d.Conjecture 3.4 remains open in dimensions d ≥ 3, but Talagrand’s [47] proof ofConjecture 3.2 in dimension d = 2 came from proving Conjecture 3.4 in d = 2. 
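Before describing Talagrand's construction, the small deviation probability itself is easy to approximate by simulation. The sketch below (editorial; the discretization, grid size, and value of ε are illustrative assumptions, not from the thesis) samples the Brownian sheet of Definition 3.1(b) on a grid, using the fact that its increments over disjoint rectangles are independent centred Gaussians with variance equal to the rectangle's area.

```python
import numpy as np

rng = np.random.default_rng(2)
m, trials, eps = 64, 2000, 0.75   # grid resolution, Monte Carlo runs, radius of the small ball

def brownian_sheet(m):
    """Brownian sheet sampled at ((i+1)/m, (j+1)/m): cumulative sums of independent
    N(0, 1/m^2) cell increments, so that E[B(s)B(t)] = min(s1, t1) * min(s2, t2)."""
    xi = rng.normal(scale=1.0 / m, size=(m, m))
    return xi.cumsum(axis=0).cumsum(axis=1)

hits = sum(np.abs(brownian_sheet(m)).max() < eps for _ in range(trials))
p = max(hits / trials, 1.0 / trials)   # guard against log(0) when no run stays inside the ball
print(p, -np.log(p))   # grid-based (hence slightly optimistic) estimates of P(sup|B| < eps) and phi(eps)
```

Talagrand's argument itself is purely analytic, and we return to it now.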
In21his work, he constructed an orthonormal basis {uR} of L2([0, 1]2), and correspondingfunctions ηR, from a specific orthonormal basis {uI} of L2([0, 1]), and the corre-sponding functions ηI . More precisely, in dimension 1, he divided a dyadic intervalI = [m · 2−k, (m + 1) · 2−k) into 4 successive dyadic subintervals of I of length 14|I|,i.e.,I =4⋃j=1Ij, Ij = [(4m+ j − 1) · 2−k−2, (4m+ j) · 2−k−2), j = 1, . . . , 4. (3.6)Note that the centre of the dyadic interval I, denoted by c(I), is equal to(4m+ 2) · 2−k−2. He then considered the functionuI(x) =1|I| 12 (−1I1(x) + 1I2(x) + 1I3(x)− 1I4(x)), (3.7)and after a simple integration over a dyadic interval I, he showed thatηI(x) =−(x− (4m+ 2)2−k−2 + 2−k−1)|I|− 12 , x ∈ I1(x− (4m+ 2)2−k−2)|I|− 12 , x ∈ I2 ∪ I3(2−k−1 − (x− (4m+ 2)2−k−2))|I|− 12 , x ∈ I40, otherwise.(3.8)The function η can be rewritten in the equivalent form ηI(x) = |I| 12w(x−c(I)|I|), wherec(I) is the centre of the dyadic interval I. Here, w is a continuous non-constantfunction supported on the interval[−12, 12]with mean zero. The explicit formula ofw is given byw(x) =−(x+ 12), −12≤ x ≤ −14x, −14≤ x ≤ 14−(x− 12), 14≤ x ≤ 120, otherwise.22Figure 3.1: The function w used in the proof of the two-dimensional small ballprobability conjecture by Talagrand.In dimensions d ≥ 2, to a dyadic rectangle R = R1×· · ·×Rd ∈ Dd∗, he associatedthe functionuR(y1, . . . , yd) = uR1(y1) · · ·uRd(yd) (3.9)so that {uR} formed an orthonormal basis for L2([0, 1]d). For this special choice of{uR}, ηR =∏dj=1 ηRj(xj) where ηRj(xj) = |Rj|12w(xj−c(Rj)|Rj |)and c(Rj) is the centreof the dyadic interval Rj, j = 1, . . . , d. Note that ηR = 2−n/2∏dj=1 w(xj−c(Rj)|Rj |)whenR ∈ Adn.Proposition 3.5. Conjecture 3.4 implies Conjecture 3.2.The proof of this proposition has been adapted from [10].Proof of Proposition 3.5. The proof relies on Levy’s [35] representation of a Brown-23ian sheet, B, in terms of a linear combination of independent N (0, 1) random vari-ables, i.e.,B(~s ) =∑k∈Nγkηk(~s ), ~s ∈ [0, 1]d, in distribution, (3.10)where {γk} are the independent N (0, 1) random variables,ηk(~s ) =∫ s10· · ·∫ sd0uk(y1, . . . , yd)dy1 · · · dyd, ~s = (s1, · · · , sd) ∈ [0, 1]d,and {uk} is any orthonormal basis of L2([0, 1]d). Since there are countably manydyadic rectangles in [0, 1]d, there is a bijection between N and Dd∗. By a slight abuseof notation, we will suppress the bijection and now write γR and ηR instead of γkand ηk, respectively. We decompose the Brownian sheet B into two pieces:B = X~t + Y~t,whereX~t =∑|R|=2−nγR(ω)ηR(~t ), Y~t =∑|R|6=2−nγR(ω)ηR(~t ),with ~t ∈ [0, 1]d. Note that X~t and Y~t are independent for each ~t ∈ [0, 1]d. We willchoose the precise value of the integer n later. By Anderson’s lemma, stated inLemma 3.6 below after the proof of this proposition, we get thatP(‖B‖L∞([0,1]d) < ε) ≤P∥∥∥∥ ∑|R|=2−nγRηR∥∥∥∥∞< ε≤PCn− d−22 2− 3n2 ∑|R|=2−n|γR| < ε≤P( ∑|R|=2−n|γR| < A),where A = εCnd−22 23n2 and C is the implicit constant appearing in (3.5) which dependson the dimension d. Here, the conjectured small ball inequality (3.5) was used to24get the second inequality. Now, by applying Chebyshev’s inequality, we can estimatethe distribution function of the sum of the absolute values of iid Gaussians {γR} asP( ∑|R|=2−n|γR| < A)≤ eAEe−∑|R|=2−n |γR|. (3.11)To calculate the quantity on the right side of (3.11), we first recall that the formulafor the moment generating function of the random variable |N (0, 1)| is given byM(t) = Ee|N (0,1)|t. 
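The closed form derived in the next sentence is easy to sanity-check numerically (an editorial aside, not part of the proof; the code and names are illustrative): for Z ~ N(0, 1), M(t) = E e^{t|Z|} = 2 e^{t^2/2} Φ(t), where Φ is the standard normal distribution function, so M(−1) ≈ 0.523, comfortably below the bound 2e^{1/2} · 0.16 ≈ 0.528 invoked below.

```python
import numpy as np
from math import erf, exp, sqrt

def M_closed(t):
    """M(t) = E exp(t|Z|), Z ~ N(0,1), in closed form: 2 exp(t^2/2) Phi(t)."""
    Phi = 0.5 * (1.0 + erf(t / sqrt(2.0)))
    return 2.0 * exp(t * t / 2.0) * Phi

z = np.random.default_rng(3).standard_normal(10 ** 6)
print(np.exp(-np.abs(z)).mean())    # Monte Carlo estimate of M(-1), about 0.523
print(M_closed(-1.0))               # closed form, about 0.5232
print(2.0 * exp(0.5) * 0.16)        # the bound 2 e^{1/2} * 0.16 ~ 0.528 used in the argument below
```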
A straightforward but tedious calculation then shows that M(t) =2et22∫ t−∞e(−1/2)x2√2pidx. Also, note that M(−1) ≤ 2e 12 · 0.16 ≤ e 12 e−C0 , where C0 is anabsolute constant larger than 1/2. Note that such a constant can be found, since0.32 < e−1/2 and e−x is continuous at x = 1/2. This, together with the fact that {γR}is a collection of K iid random variables, where K := #{R ∈ Dd : |R| = 2−n} =2n ·#Hdn ≥ Cd2nnd−1, gives us thateAEe−∑|R|=2−n |γR| = eA(M(−1))K ≤ exp(1Cεnd−22 23n2 − C12nnd−1), (3.12)where C1 =(C0 − 12)Cd is a positive constant which depends only on the dimensiond. Now, for small ε > 0, if we let n be a positive integer satisfying12CC12−n+12 (n+ 1)d2 ≤ ε ≤ 12CC12−n2 nd2 , (3.13)then the right side of inequality (3.13) implies that1Cεnd−22 23n2 ≤ 12C12nnd−1. (3.14)This means thatexp(1Cεnd−22 23n2 − C12nnd−1)≤ exp(− 12C12nnd−1). (3.15)25Note that the left side of (3.13) implies that log 1ε. n. Hence,P( ∑|R|=2−n|γR| < A)≤ exp(− 12C12nnd−1)≤ exp(− 12C1(2n2 )2ndn2d−1)≤ exp(− C′ε2(log1ε)2d−1),where C ′ is a positive constant. Therefore,P(‖B‖L∞([0,1]d) < ε) ≤ exp(− C′ε2(log1ε)2d−1).That is,φ(ε) & 1ε2(log1ε)2d−1.The following lemma, which was used in the proof of Proposition 3.5, is a standardfact in probability theory. We state it here without proof.Lemma 3.6. (Anderson’s Lemma [1]). Let Xt, Yt, t ∈ T be independent centeredGaussian random processes. ThenP(supt∈T|Xt + Yt| < c)≤ P(supt∈T|Xt| < c).3.2 Connection to approximation theoryWe begin this section by recalling some notions from approximation theory whichwill be used throughout.Definition 3.7. (a) Let Kˆ be a subset of a metric space (K, ρ). The set Kˆ is calledan ε-net for K, if for every x ∈ K there exists y ∈ Kˆ such that ρ(x, y) ≤ ε.(b) Points y1, . . . , ym ∈ K are called ε-separated if ρ(yi, yj) > ε for all i 6= j.26Note that if (K, ρ) is a compact metric space, then K contains a finite ε-net forevery ε > 0.Definition 3.8. Let K˜ be a compact subset of a metric space (K, ρ). Then thecovering number of the set K˜, N(ε, K˜), is the minimum of the cardinalities of ε-netsfor K˜, i.e.,N(ε, K˜) := min{N : ∃{xk}Nk=1 ⊂ K˜, K˜ ⊂N⋃k=1B(xk, ε)},where B(xk, ε) = {x ∈ K˜ : ρ(x, xk) ≤ ε} is the ball centred at xk of radius ε. Theε-entropy of the set K˜ is then defined byH(ε, K˜) := log(N(ε, K˜)).Definition 3.9. Let (K, ρ) be a compact metric space. Given ε > 0, we define thepacking number, M(ε,K), of the set K to beM(ε,K) = max{m ∈ N : ∃ ε-separated points y1, . . . , ym ∈ K}.The study of covering numbers and entropies goes back A. Kolmogorov. Around1950, Kolmogorov developed a strong interest in problems in information theory. In-spired by Shannon’s ideas, he introduced the notion of a covering number to measurethe size of a set, A, in a metric space. From the point of view of information theory,the entropy of A has the following usage. Consider the elements of A as “messages”encoded in binary strings of 0’s and 1’s. Suppose that ε > 0 is a number smallenough so that any message x in A can be recovered with reasonable accuracy fromsome x′ in A with ρ(x, x′) ≤ ε. One should think of x as the original and x′ as thetransmitted message. The quantity N(ε, A) then captures the economic yet effectivecollection of transmittable messages.Kolmogorov used the concept of covering numbers to give a simpler proof ofVitushkin’s theorem on superposition of functions. 
To do this, he found an upperbound for the ε-entropy of W r∞(In), the class of functions of n variables defined on27the unit cube In in Rn and having uniformly bounded partial derivatives up to orderr. In particular, he showed thatlog(N(ε,W r∞(In))) ≤ C(1ε)n/r.This was the starting point of investigating the ε-entropy of compact sets in functionspaces. We refer the interested reader to [32] for a thorough description of thedevelopment and history of covering numbers.The ε-entropy of a set A also has applications in approximation theory, specif-ically, linear approximations. To present these concepts in an abstract setting, wemust first introduce the notion of the n-width of a set, A, in a normed space, anothertool invented by Kolmogorov to measure the size of A. Let X be a normed spaceequipped with a norm ‖ · ‖, and let A be a subset of X. The n-width of the set A,dn(A), is defined to bedn(A) := infLnsupx∈Ainfξ∈Ln‖x− ξ‖,where the outer infimum is taken over all n-dimensional subspaces Ln ⊂ X. We willrefer to the quantity infξ∈Ln ‖x−ξ‖ as the deviation of the approximation of x by Ln,supx∈A infξ∈Ln ‖x− ξ‖ as the deviation of the approximation of the set A by Ln, anddn(A) as the optimal deviation of the approximation of A. For small ε > 0, in orderto ensure that dn(A) < ε, n must be at least as large as some simple lower bounddepending on ε, and is, roughly, H(ε, A). More precisely, G. Lorentz [36] showedthatn ≥ H(e2ε, A)− C02 + log 1ε+ log[H(e2ε, A)− C0] ,where C0 is an absolute constant and A is a compact set in a separable Banach spaceX. From the above inequality, we see that, in order to see how n depends on ε forsmall ε > 0, one needs to understand how H depends on ε.Here, we survey some results concerning the asymptotic behaviour of σp(ε) :=H(ε, B(MW p)), the ε-entropy of a specific class of functions with bounded mixedderivatives. As we will see, this asymptotic behaviour can be determined explicitly if28one can prove the continuous version of the small ball conjecture. This topic gaineda lot of attention after the connection between the entropy of this class of functionswith bounded mixed derivatives and small deviation probabilities was discovered (seeTheorem 3.12).Next, we define a specific integration operator that will be used to give the precisedefinition of our space of functions with bounded mixed derivatives.Definition 3.10. The integration operator Td, acting on functions defined on theunit cube [0, 1]d, is defined by(Tdf)(x1, . . . , xd) :=∫ x10· · ·∫ xd0f(y1, . . . , yd)dy1 · · · dyd. (3.16)Let B∞ and B(Lp) denote the unit ball in L∞([0, 1]d) and Lp([0, 1]d) respectively.Then, MW p([0, 1]d) := Td(Lp([0, 1]d)) is the space of functions on [0, 1]d with mixedderivatives ∂df∂x1∂x2···∂xd in Lp, which, throughout, we will equip with the sup-norm,‖ · ‖∞. We then define the subset B(MW p) ⊂MW p by B(MW p) := Td(B(Lp)). Todefine the covering number of the set B(MW p), we must first show that B(MW p)is a precompact set with respect to the L∞-norm.Proposition 3.11. The closure of the set B(MW p), B(MW p)‖·‖∞, is a compact setwith respect to the L∞-metric for 1 < p <∞ and for all d ≥ 1.Proof. To conclude that B(MW p)‖·‖∞is a compact set, we will use the Arzela`-Ascolitheorem. 
To do this, we need to show thatTd(B(Lp)) ⊂ B∞(0, c) (3.17)for some positive constant c, where B∞(0, c) is the ball centred at zero of radius cmeasured in L∞-norm, and that the family of functions {Td(f)}f∈B(Lp) is uniformlyequicontinuous, i.e.,∀ε > 0, ∃δ > 0 such that |Td(f)(~s )− Td(f)(~t )| < ε,∀~s,~t ∈ [0, 1]d with (3.18)29‖~s− ~t ‖2 :=( d∑i=1|si − ti|2)1/2< δ and ∀f ∈ B(Lp).First, for any f ∈ B(Lp), we have that|Td(f)(~t )| ≤∫ t10· · ·∫ td0|f(y)|dy ≤ ‖f‖p ≤ 1,for every ~t ∈ [0, 1]d. This shows that (3.17) holds.Now we will prove condition (3.18). To do this, we will show that|Td(f)(~s )− Td(f)(~t )| ≤d∑i=1|ti − si|1/q ∀~t, ~s ∈ [0, 1]d and ∀f ∈ B(Lp), (3.19)where q = pp−1 is the dual exponent of p, as (3.18) follows from (3.19) immediately.Suppose d ≥ 1 and ~t, ~s ∈ [0, 1]d with ti 6= si for i = 1, . . . d. For any such ~s and ~t, wedefine the following sets:I1 := {i ∈ {1, . . . , d} : si > ti},I2 := {i ∈ {1, . . . , d} : si < ti}.Note that these sets depend on the choice of ~s,~t ∈ [0, 1]d, and that I1∪I2 = {1, . . . , d}.Let [~0, ~s ] :=∏di=1[0, si] and [~0,~t ] :=∏di=1[0, ti]. Note that the corner that is furthestfrom the origin of each of these d-dimensional boxes is given by ~s and ~t respectively,and that the intersection of these boxes is non-empty. In fact,[~0, ~s ]⋂[~0,~t ] =d∏i=1[0,min(ti, si)],wheremin(ti, si) = si, ∀i ∈ I2 and min(ti, si) = ti ∀i ∈ I1.30We have that[~0, ~s ] =([~0, ~s ]⋂[~0,~t ])⋃(⋃i∈I1Ai), (3.20)where⋃i∈I1 Ai is the complement of the set [~0, ~s ]⋂[~0,~t ] in [~0, ~s ], and each Ai dependson the choice of ~s and ~t. In fact, for i ∈ I1, Ai = B1×· · ·×Bd with Bj = [0, sj] ⊂ [0, 1],j 6= i, and Bi = (ti, si]. Moreover,[~0,~t] =([~0, ~s ]⋂[~0,~t ])⋃(⋃i∈I2A˜i), (3.21)where, for i ∈ I2, A˜i = B˜1×· · ·× B˜d with B˜j = [0, tj] ⊂ [0, 1], j 6= i, and B˜i = (si, ti].Note that the complement of the set [~0, ~s ]⋂[~0,~t ] in [~0,~t ] is⋃i∈I2 A˜i. Therefore,using relations (3.20), (3.21), and Ho¨lder’s inequality, we obtain|Td(f)(~s )− Td(f)(~t )| =∣∣∣∣ ∫[~0,~s]f(y)dy −∫[~0,~t]f(y)dy∣∣∣∣≤∑i∈I1∫Ai|f(y)|dy +∑i∈I2∫A˜i|f(y)|dy≤∑i∈I1‖f‖p|ti − si|1/q +∑i∈I2‖f‖p|ti − si|1/q (∗)≤d∑i=1|ti − si|1/q,(3.22)for any f ∈ B(Lp), for all ~s,~t ∈ [0, 1]d with si 6= ti, i = 1, . . . , d.From (3.22), we see that (3.19) holds for points ~s,~t ∈ [0, 1]d with si 6= ti, i =1, . . . , d. Now, we will show that (3.19) holds for general points ~s,~t ∈ [0, 1]d. Fix anytwo points ~s and ~t in [0, 1]d and consider the setI0 = {i ∈ {1, . . . , d} : si = ti},i.e., I0 is the subset of indices in {1, . . . , d} for which the corresponding coordinates ofthe vectors ~s and ~t coincide. Let Ic0 denote the complement of the set I0 in {1, . . . , d}.31Now, letSI0 :=∏i∈I0[0, si], SIc0 :=∏i∈Ic0[0, si], and TIc0 =∏i∈Ic0[0, ti]be boxes corresponding to I0 and Ic0. We will let ySI0 , ySIc0, and yTIc0denote pointsbelonging to SI0 , SIc0 , and TIc0 respectively. Using Fubini’s theorem, we get that|Td(f)(~s)− Td(f)(~t)| =∣∣∣∣ ∫SI0[ ∫SIc0f(y)dySIc0−∫TIc0f(y)dyTIc0]dySI0∣∣∣∣≤∫SI0∣∣∣∣ ∫SIc0f(y)dySIc0−∫TIc0f(y)dyTIc0∣∣∣∣dySI0 .In order to estimate the absolute value of the difference of the two integrals in theabove inequality, we appeal to the case we considered previously. Now, for ySI0 ∈ SI0 ,with the aid of (∗) in (3.22), we get that∣∣∣∣ ∫SIc0f(y)dySIc0−∫TIc0f(y)dyTIc0∣∣∣∣ ≤∑i∈I1(∫SIc0|f(y)|pdySIc0)1/p|ti − si|1/q+∑i∈I2(∫TIc0|f(y)|pdyTIc0)1/p|ti − si|1/q.Here, I1 and I2 are subsets of Ic0 for which Ic0 = I1∪I2. 
Finally, integrating the aboveinequality with respect to ySI0 and applying Jensen’s inequality gives|Td(f)(~s)− Td(f)(~t)| ≤∑i∈I1‖f‖p|ti − si|1/q +∑i∈I2‖f‖p|ti − si|1/q≤ ‖f‖p∑i∈Ic0|ti − si|1/q ≤d∑i=1|ti − si|1/qfor any f ∈ B(Lp) and any ~t, ~s ∈ [0, 1]d. This completes the proof of the proposition.One branch of research in approximation theory is concerned with determiningthe asymptotic behaviour of σp(ε) := log(N(ε, p, d)) as ε→ 0+, where32N(ε, p, d) := min{N : ∃{xk}Nk=1 ⊂ B(MW p), B(MW p) ⊂N⋃k=1(xk + εB∞)}is the covering number of the set B(MW p). Recall that this is the smallest num-ber of balls of radius ε needed to cover the set B(MW p), or, equivalently, the sizeof the smallest ε-net of B(MW p). There is an interesting relation between thecovering number N(ε, p, d) and the packing numberM(ε, B(MW p)). Here, for con-venience, we will denote the packing number of B(MW p) by M(ε, p, d) rather thanM(ε, B(MW p)). In particular, it is easy to show thatM(2ε, p, d) ≤ N(ε, p, d) ≤M(ε, p, d). (3.23)J. Kuelbs and V. Li [33] have established a deep connection between the asymp-totic behaviours of φ(ε) and σp(ε). In particular, from their results, one can concludethe following:Theorem 3.12. Suppose d ≥ 2, b > 0, and ε > 0 is small. Thenφ(ε) ' ε−2(log1ε)bif and only if σ2(ε) ' ε−1(log1ε) b2.Here, the symbol “ ' ” is as in Remark 3.3. As mentioned in the previous section,the upper bound for φ(ε) has been proved for the special case when b = 2d− 1. So,according to Theorem 3.12,σ2(ε) ≤ Cε−1(log1ε) 2d−12for small ε > 0, where C is an absolute constant. Thus, in the setting of approxima-tion theory, the small ball conjecture is equivalent to:33Conjecture 3.13. For all d ≥ 2 and ε > 0 small, we have thatσ2(ε) ≥ C ′ε−1(log1ε) 2d−12,where C ′ is an absolute constant.Conjecture 3.13 was solved in dimension d = 2 by Kuelbs and Li [33]. They wereable to prove the conjecture in d = 2 by using Talagrand’s result on the asymptoticbehaviour of φ(ε) in d = 2 and Theorem 3.12. Also, Temlyakov [50] gave a secondproof of Conjecture 3.13 in d = 2 by using the analogue of the small ball inequalityfor trigonometric polynomials. In particular, he showed that, in d = 2, for all1 < p <∞, and for ε > 0 small,σp(ε) ≥ Cpε−1(log1ε) 32.Conjecture 3.13 remains open in d ≥ 3, though, using Theorem 3.12, it’s not hardto see that Conjecture 3.13 would follow from a proof of Conjecture 3.4 in d ≥ 3.However, there is a way to deduce Conjecture 3.13 from Conjecture 3.4 withoutinvoking Theorem 3.12, which is what we do here.Proposition 3.14. If Conjecture 3.4 is true, then so is Conjecture 3.13.In order to prove Proposition 3.14, we will need a preliminary result. Considera bi-valued function β : Adn → {−1, 1}, Adn = {R ∈ Dd, |R| = 2−n}. Let K = #Adn.Then, by defining βR := β(R) for R ∈ Adn, we can associated β with a vector~β ∈ {−1, 1}K whose “Rth” coordinate is βR . It has already been noted that K =2n#Hdn ' 2nnd−1.Lemma 3.15. There is an integer n0 ≥ 1 such that, if n ≥ n0 and K = 2n#Hdn,then there exists a set X ⊂ {−1, 1}K satisfying:1.∑|R|=2−n |β′R − βR| & K for any ~β 6= ~β′ ∈ X.2. log #X & K.34The proof of Lemma 3.15 will follow the proof of Proposition 3.14.Proof of Proposition 3.14. 
According to inequality (3.23), for a fixed small ε > 0,we need to produce a 2ε-separated collection of functions F˜ = {F˜~β} ⊂ B(MW 2) forwhich log #F˜ & 1ε(log 1ε) 2d−12.Now, to each such function β or equivalently vector ~β ∈ {−1, 1}K , we can furtherassociate the function F˜~β given byF˜~β =c2n2 nd−12∑|R|=2−nβRηR, (3.24)where c > 0 is a small constant and ηR := TduR, as given in the statement ofConjecture 3.4. Moreover, F˜~β ∈ B(MW 2) for every ~β ∈ {−1, 1}K . Indeed, sinceηR = TduR and {uR}R∈Adn is an orthonormal set of functions, we obtainF˜~β = Td(c2n2 nd−12∑|R|=2−nβRuR),and ∥∥∥∥ c2n2 nd−12∑|R|=2−nβRuR∥∥∥∥22=c22nnd−12n ·#Hdn ≤ 1for c sufficiently small.Let X ⊂ {−1, 1}K be a set satisfying the conditions of Lemma 3.15. Then,F˜ = {F˜~β}~β∈X is a 2ε-separated collection of functions in B(MW 2) for whichlog #F˜ ' 1ε(log1ε) 2d−12as ε→ 0+,for a suitable choice of n ≥ n0. Indeed, let ε > 0 be any small number strictly lessthan 12n1/20 2−n0 , where n0 is the integer which appears in Lemma 3.15, and let n ≥ n0be an integer for which n1/22−n ≥ 2ε ≥ (n+ 1)1/22−(n+1). First, we show that F˜ is a2ε-separated family with respect to the L∞-norm. From Conjecture 3.4, if ~β, ~β′ ∈ X35are such that ~β 6= ~β′, then we get that‖F˜~β − F˜~β′‖∞ =∥∥∥∥ c2n2 nd−12∑|R|=2−n(βR − β′R)ηR∥∥∥∥∞&2−n2 n− d−12 · n− d−22 2− 3n2∑|R|=2−n|βR − β′R|&n− 2d−32 2−2n · 2n · nd−1 = n1/22−n.(3.25)Hence, F˜ is a 2ε-separated family with respect to the L∞-norm. Since2ε ≥ (n+ 1)1/22−(n+1), we therefore have thatlog #F˜ = log #X & K & 2nnd−1& 2nn− 12nd− 12& 1ε(log1ε)d− 12.Now, to prove Lemma 3.15, it will be useful to first introduce some notions fromcoding theory.In coding theory, a subset X ⊂ {−1, 1}K is called a binary code of length K, andwe can endow such a binary code with the Hamming distance. This is a distancefunction, dH , defined on X ×X bydH(x, y) = #{j = 1, . . . , K : xj 6= yj},i.e., the Hamming distance between x and y is the number of components in which xand y do not coincide. It is easy to verify that (X, dH) is, in fact, a metric space. Theminimum Hamming distance of a binary code X is then defined to be the smallestHamming distance between its elements, minx,y∈X,x 6=ydH(x, y). So, in the languageof coding theory, to prove Lemma 3.15, we must find a binary code X of length36K = 2n#Hdn, such thatmin~β,~β′∈X,~β 6=~β′dH(~β, ~β′) ≥ K4, (3.26)andlog #X & K. (3.27)Since (3.26) tells us that∑|R|=2−n |β′R − βR| ≥ 2K4 & K for any ~β 6= ~β′ ∈ X, it’sclear that (3.26) implies the first condition on X in Lemma 3.15.It is easy to show that there exists a binary code X satisfying (3.26). If thisbinary code X is equal to {−1, 1}K , then #X = 2K , and (3.27) follows immediately.In general, X will be a strict subset of {−1, 1}K , and so #X will be some fractionof 2K . Gilbert-Varshamov’s theorem, stated below, provides us with a lower boundof this fraction, and this lower bound will be used in the proof of (3.27).Theorem 3.16. (Gilbert-Varshamov’s theorem) Let A(m, k) denote the maximalsize of a binary code of length m and minimum Hamming distance at least k. ThenA(m, k) ≥ 2m∑k−1j=0(mj) .We will postpone the proof of Gilbert-Varshamov’s theorem until we have com-pleted the proof of Lemma 3.15.Proof of Lemma 3.15. We will apply Gilbert-Varshamov’s theorem with m = K andk = m4in order to show that log #X & K. First, note that there is a maximal codeX of length m and minimum Hamming distance m4, so Gilbert-Varshamov’s theoremis applicable. Using Stirling’s formula, m! 
' 1√2pim(me)m, we obtain(mm/4)' 1√m(14)−m4(34)−3m4.Applying Gilbert-Varshamov’s theorem and recalling the fact that(mj)is an increas-37ing function of j when 0 ≤ j < k, we get that#X ≥ 2m∑k−1j=0(mj) ≥ 2mk(mk) = 2mm/4(mm/4)' 1√m2m(14)m/4(34)3m/4& 1√mCm & bm√mCmbm& C˜m,for a sufficiently large value of m, or, equivalently, for a big enough integer n. Here,C = 2(1/4)1/4(3/4)3/4 > 1, b is any number satisfying 1 < b < C, and C˜ = Cb. Hence,log #X & m.Finally, to conclude the section, we prove Gilbert-Varshamov’s theorem.Proof of Gilbert-Varshamov’s Theorem. First, note that, given any x ∈ {−1, 1}m,#{y ∈ X : dH(x, y) = j} =(mj)for j = 1, . . . ,m. If we let BH(x, k) denote theHamming ball centred at x of the radius k, i.e., BH(x, k) = {y | dH(x, y) < k},then #BH(x, k) =∑k−1j=0(mj). Let X be a maximal code of length m and minimumHamming distance at least k. Then⋃x∈X BH(x, k) = {−1, 1}m. As a result, weobtain2m = #⋃x∈XBH(x, k) ≤∑x∈X#BH(x, k) = #Xk−1∑j=0(mj),which completes the proof.3.3 Connection to discrepancy theory3.3.1 Conjectures in discrepancy theoryDiscrepancy theory studies the deviation of the actual data from the ideal data. Oneof the most challenging problems in geometric discrepancy theory is, given a setPN = {p1, p2, . . . , pN} ⊂ [0, 1]d with cardinality #PN = N , to obtain a lower bound38on the discrepancy function,DN(x1, . . . , xd) = #{PN ∩ [0, x1)× · · · × [0, xd)} −Nx1 · · ·xd, (3.28)with respect to some norm. Relation (3.28) represents the difference between theactual and expected number of points in the box [0, x1) × · · · × [0, xd). When thepoints p1, . . . , pN are uniformly distributed, these quantities are the same and soDN(x1, . . . , xd) = 0. Lower bounds of the discrepancy function with respect to theL∞-norm indicate the best case scenario for the distribution of points, i.e., howclose to uniformly distributed the distribution of the points can be. Motivation forstudying the behaviour of the discrepancy function comes from numerical integra-tion. In numerical integration, one attempts to approximate integrals of functionsf : [0, 1]d → R that cannot be computed analytically. Moreover, while doing this,one wants to minimize the error in their approximation. The Koksma-Hlawka in-equality connects the error, which appears in the process of the numerical integra-tion, with the discrepancy function. This inequality says that, for an N -point setP = {p1, . . . , pN} ⊂ [0, 1]d, and for all p ∈ [1,∞],∣∣∣∣ ∫[0,1]df(x)dx− 1NN∑i=1f(pi)∣∣∣∣ ≤ Vq(f)‖DN‖pN . (3.29)Here, q is the dual exponent of p, and Vq(f) is a quantity depending on f and p. Inthe one-dimensional case, d = 1, Vq(f) = ‖f ′‖q when f is a smooth function. Theinterested reader can consult [31, 27] for the precise definition of Vq(f) in dimensionsd ≥ 2. From inequality (3.29), we see that an N -point set with smallest possibleLp-norm gives rise to the smallest error. Hence, we want to find an N -point setPN ⊂ [0, 1]d which minimizes the Lp-norm of DN . This motivates the study ofuniform lower bounds of the discrepancy function DN . The first quantitative resultin this direction, stated below, was obtained by K. F. Roth [39]. Note that we willprove Roth’s theorem in the subsequent subsection.39Theorem 3.17. (Roth, 1954) In all dimensions d ≥ 2, for any N -point set PN ⊂[0, 1]d, one has‖DN‖2 ≥ Cd(logN) d−12 , (3.30)where Cd is an absolute constant that depends only on the dimension d.Furthermore, inequality (3.30) in Theorem 3.17 is sharp in the sense that thereexists an N -point set PN for which ‖DN‖2 ≤ Cd(logN) d−12 . 
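The behaviour described in this section is easy to observe numerically. The sketch below (an editorial illustration, not from the thesis; the point sets, grid, and names are illustrative choices) evaluates the discrepancy function (3.28) in d = 2 for a classical low-discrepancy point set of van der Corput/Hammersley type and for N independent uniform points, and reports grid approximations of ‖DN‖2 and ‖DN‖∞.

```python
import numpy as np

def radical_inverse(i, base=2):
    """Van der Corput radical inverse: reverse the base-b digits of i behind the radix point."""
    x, f = 0.0, 1.0 / base
    while i > 0:
        x += (i % base) * f
        i //= base
        f /= base
    return x

def discrepancy(P, grid=128):
    """D_N(x1, x2) = #(P in [0,x1) x [0,x2)) - N x1 x2, evaluated on a uniform grid of (x1, x2)."""
    N = len(P)
    t = np.arange(1, grid + 1) / grid
    counts = np.array([[np.sum((P[:, 0] < x1) & (P[:, 1] < x2)) for x2 in t] for x1 in t])
    return counts - N * np.outer(t, t)

N = 256
vdc = np.array([[i / N, radical_inverse(i)] for i in range(N)])    # Hammersley-type low-discrepancy set
rnd = np.random.default_rng(5).random((N, 2))                      # N independent uniform points

for name, P in [("low-discrepancy", vdc), ("random", rnd)]:
    D = discrepancy(P)
    print(name, np.sqrt((D ** 2).mean()), np.abs(D).max())
# For N = 256 the benchmarks of this section are (log N)^{1/2} ~ 2.4 and log N ~ 5.5;
# the random point set typically produces norms of order sqrt(N) ~ 16 instead.
```

As stated above, the L2 bound (3.30) is attained, up to constants, by suitably constructed point sets.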
The sharpness of inequality (3.30) was shown by Roth [40, 41], Chen [14] and Frolov [20]. The best known constant $C_d$ in (3.30) was obtained by A. Hinrichs and L. Markhasin [26]. In general, the $L^p$-behaviour of the discrepancy function is well understood for $1 < p < \infty$. In fact, the $L^p$-behaviour of the discrepancy function is similar to the $L^2$-behaviour, i.e., $\|D_N\|_p \geq C(d,p)(\log N)^{\frac{d-1}{2}}$ for $1 < p < \infty$. This was first shown by W. M. Schmidt [42, 43]. He originally considered the values of $p$ for which the dual exponent $q$ is an even integer, using Roth's approach. One can also obtain such lower bounds on the $L^p$-norm of $D_N$ for the full range of $1 < p < \infty$ by using the Littlewood-Paley inequalities. This will be explained in more detail after proving Roth's theorem and after introducing the Littlewood-Paley inequalities. In addition, examples of $N$-point sets with $\|D_N\|_p \leq C(d,p)(\log N)^{\frac{d-1}{2}}$ were constructed by Chen and Skriganov [15, 16] for $p = 2$ and by Skriganov [46] for $p > 1$. Determining the $L^p$-behaviour of the discrepancy function for $p = 1, \infty$ is one of the most challenging open problems in the field of geometric discrepancy.

Conjecture 3.18. ($L^\infty$-Conjecture) For all $d \geq 2$ and all $P_N \subset [0,1]^d$ with $\#P_N = N$,
$$\|D_N\|_\infty \geq C_d(\log N)^{\frac{d}{2}}, \qquad (3.31)$$
where $C_d$ is an absolute constant depending only on the dimension $d$.

Since $L^\infty([0,1]^d)$ is continuously embedded in $L^2([0,1]^d)$, Roth's theorem immediately implies that
$$\|D_N\|_\infty \geq C_d(\log N)^{\frac{d-1}{2}}. \qquad (3.32)$$
Comparing (3.31) and (3.32), we see that there is a gap of order $\sqrt{\log N}$ between the conjectured lower bound of the discrepancy function and the lower bound provided by Roth's theorem. This gap was eliminated by Schmidt [42] in $d = 2$. More precisely, he showed that $\|D_N\|_\infty \geq C \log N$. One decade later, Halász [25] gave another proof of the same result by combining the technique of using Riesz products with Roth's orthogonal function method. This is where the small ball inequality enters the area of discrepancy theory. More precisely, Halász's approach can be transferred to prove the small ball inequality in $d = 2$. After we prove Roth's theorem, we will analyze Halász's proof and, in Chapter 4, show how his technique can be implemented to prove the small ball inequality in $d = 2$. The connection between problems in discrepancy theory and the small ball inequality is based purely on the methods of proof used. Currently, there is no indication that the results in discrepancy theory will yield results concerning the small ball inequality, or vice versa.

In 1989, by employing techniques using Riesz products, Beck [2] was able to make some progress toward improving (3.32) in higher dimensions. Specifically, in dimension $d = 3$, he showed that
$$\|D_N\|_\infty \geq C \log N \cdot (\log\log N)^{\frac{1}{8}-\varepsilon}, \quad \text{for small } \varepsilon > 0. \qquad (3.33)$$
In other words, he improved the lower bound, $\log N$, by a factor of order $(\log\log N)^{\frac{1}{8}-\varepsilon}$, for some small $\varepsilon > 0$. In dimensions $d \geq 3$, almost two decades later, Bilyk, Lacey and Vagharshakyan [3, 4] improved the bound $(\log N)^{\frac{d-1}{2}}$ by a factor of order $(\log N)^{\theta}$ by showing
$$\|D_N\|_\infty \geq C_d(\log N)^{\frac{d-1}{2}+\theta}, \qquad (3.34)$$
where $\theta$ is a small positive constant depending on the dimension $d$. Their work was based on Beck's work and improved his result. It is still unknown whether Conjecture 3.18 is sharp, though examples of sets have been constructed by J. H. Halton and J. M. Hammersley [23, 24] which obey the upper bound $\|D_N\|_\infty \leq C_d(\log N)^{d-1}$. This leads to another $L^\infty$-conjecture:

Conjecture 3.19.
For all d ≥ 2 and all point sets PN in [0, 1]d with #PN = N wehave‖DN‖∞ ≥ Cd(logN)d−1, (3.35)41where Cd is an absolute constant.The L1-behaviour of the discrepancy function is another related open problem.Conjecture 3.20. (L1-Conjecture) For all d ≥ 2 and all N -point sets in [0, 1]d,‖DN‖1 ≥ Cd(logN) d−12 . (3.36)In dimension d = 2, Conjecture 3.20 was solved by Hala´sz [25]. Vagharshakyan[51] improved the constant Cd in Hala´sz’s proof. It has also been conjectured that, for0 < p < 1, the Lp-norm of DN satisfies the lower bound analogous to that appearingin (3.36).3.3.2 Proof of Theorem 3.17Here, we prove that, for any N -point set in [0, 1]d and for all d ≥ 2, the L2-norm ofDN is of order (logN)d−12 . Having expanded the discrepancy function with respectto the orthogonal basis of Haar functions:DN =∑R∈Dd∗〈DN , hR〉‖hR‖22hR,Roth’s approach was to identify the terms of the expansion which do not contributesignificantly to ‖DN‖2. The following proposition quantifies what this means moreprecisely.Proposition 3.21. Let PN ⊂ [0, 1]d be a Npoint-set with #PN = N , and let n ∈ Nbe such that 2n−2 ≤ N < 2n−1. Then, for every ~r ∈ Hdn, there exists an ~r-function,f~r, such that〈DN , f~r〉 :=∫[0,1]dDN · f~rdx ≥ cd > 0, (3.37)where the constant cd depends only on the dimension d.We will first assume that Proposition 3.21 holds to prove Theorem 3.17, and thenwe will prove Proposition 3.21 immediately afterward.42Proof of Theorem 3.17. Here, we use a duality argument and construct a test func-tion, F , such that ‖F‖2 ' log d−12 N and 〈DN , F 〉 & (logN)d−1. Using these twolower bounds, the Cauchy-Schwarz inequality will imply:‖DN‖2 ≥ 〈DN , F 〉‖F‖2 & (logN)d−12 ,which is the estimate stated in Roth’s theorem.Define F byF =∑~r∈Hdnf~r,where f~r is the ~r-function associated to ~r ∈ Hdn which comes from Proposition 3.21.Using the orthogonality of ~r-functions and the fact that logN ' n, we get that‖F‖2 =(∑~r∈Hdn‖f~r‖22)1/2= (#Hdn)1/2 ' nd−12 ' (logN) d−12 .On the other hand, Proposition 3.21 ensures that〈DN , F 〉 ≥ cd#Hdn & nd−1 & (logN)d−1,which completes the proof.Remark 3.22. The same approach can, essentially, be used to prove the Lp lowerbounds of the discrepancy function. This will require us to estimate the Lp- norm ofthe function F , but, using the Littlewood-Paley inequalities, one can show that thishas the same behaviour as the L2-norm of F , i.e., ‖F‖p ' (logN) d−12 . Now, togetherwith Ho¨lder’s inequality and Proposition 3.21, this will give us the estimate provedby Schmidt,‖DN‖p ≥ 〈DN , F 〉‖F‖q & (logN)d−12 , (3.38)where q is the dual exponent of p ∈ (1,∞). The Littlewood-Paley inequalities areneeded to replace the orthogonality argument used in calculating the L2-norm of F .43We refer the reader to Remark 5.8 for more details regarding the Lp-estimates of F .Proof of Proposition 3.21. First, we observe that the discrepancy function associatedto a set PN ⊂ [0, 1)d with #PN = N can be written in an equivalent form:DN(~x) =∑~p∈PN1[~p,~1)(~x)−Nx1 · · ·xd, (3.39)where ~p = (p1, . . . , pd) ∈ PN ⊂ [0, 1)d and [~p,~1) = [p1, 1)× · · · × [pd, 1). Fix a dyadicrectangle R = I1×· · ·× Id ∈ Dd, and assume that the point ~p ∈ PN is not containedin R. Therefore, there exists j ∈ {1, . . . , d} such that pj is not contained in Ij. Asa result,∫[0,1]1[pj ,1)(xj)hIj(xj)dxj = 0, since either pj is to the right of Ij, in whichcase Ij ∩ [pj, 1) = ∅ and so∫[0,1]1[pj ,1)(xj)hIj(xj)dxj = 0, or pj is to the left of Ij and∫[0,1]1[pj ,1)(xj)hIj(xj)dxj =∫[0,1]hIj(xj)dxj = 0. 
Therefore,∫[0,1]d1[~p,~1) hR = 0, andwe get that 〈 ∑~p∈PN∩R=∅1[~p,~1), hR〉= 0. (3.40)Also, since∫IjxjhIjdxj =|Ij |24, j = 1, . . . , d, we get〈x1 · · ·xd, hR〉 = 4−d|R|2. (3.41)Using (3.40) and (3.41), we get a crucial relation which will be essential for the restof the proof,〈DN , hR〉 = −N4−d|R|2, (3.42)for all dyadic rectangles R which do not contain points from PN .Recall that the setR~r = {R ∈ Dd : Rj ∈ D and |Rj| = 2−rj , j = 1, . . . , d}is a collection of disjoint axis-parallel dyadic rectangles whose sides are of prescribedlengths. Now, for each ~r ∈ Hdn, we are ready to define an ~r-function consisting oftwo parts, where the first part is a sum over the dyadic rectangles in R~r which do44not contain points from PN , and the second part is a sum over the dyadic rectanglesin R~r which contain points from PN . That is, we decompose f~r asf~r =∑R∈R~r:R∩PN=∅−1hR +∑R∈R~r:R∩PN 6=∅sgn〈DN , hR〉hR, (3.43)wheresgn(x) =1, if x ≥ 0−1, otherwise .The inner product of the discrepancy function, DN , with the second term of thedecomposition in (3.43) is a positive number. Taking into account relation (3.42),this gives us〈DN , f~r〉 ≥∑R∈R~r:R∩PN=∅N4−d|R|2. (3.44)Since 2n−2 ≤ N < 2n−1 and #R~r = 2n, the number of rectangles in R~r whichintersect the N -point set PN is at most 2n−1. Thus, we have#{R ∈ R~r : R ∩ PN = ∅} ≥ 2n(1− 12)≥ 2n−1.Therefore, recalling that |R| = 2−n, we get the desired estimate:〈DN , f~r〉 ≥∑R∈R~r:R∩PN=∅N4−d|R|2 ≥ 2n−12n−2 · 2−2n4d= cd.3.3.3 Hala´sz’s proof of Conjecture 3.18 in d = 2Here, we give all the essential details of Hala´sz’s proof of Schmidt’s result for thediscrepancy function in d = 2,‖DN‖∞ ≥ C logN. (3.45)45Hala´sz’s approach was to use a duality argument in combination with Riesz producttechniques and Roth’s principle, Proposition 3.21. Using Riesz products turned outto be very fruitful in proving the two-dimensional small ball inequality, as we willsee in the next chapter.To present Hala´sz’s proof, we need two auxiliary lemmas. The first lemma showshow Haar functions interact when multiplied.Lemma 3.23. (Product Rule) If R 6= R′ ∈ D2 are not disjoint dyadic rectanglesand |R| = |R′|, thenhR · hR′ = ±1hR∩R′ ,i.e., the product of two Haar functions is again a Haar function.We let the reader verify the result of this simple lemma. This lemma breaksdown in dimensions d ≥ 3, and the absence of the Product Rule in d ≥ 3 is a majorobstacle in solving the L∞-discrepancy conjecture.Lemma 3.24. Let f~r be any ~r-function with parameter s, i.e., |~r| := r1 + · · ·+rd = s.Then, for some constant bd > 0,〈DN , f~r〉 ≤ bdN2−s.The interested reader can find the proof of the above lemma in [10].Hala´sz’s proof. First, we construct a test function Φ which satisfies two key estimates:‖Φ‖1 . 1, (3.46)and|〈DN ,Φ〉| & logN. (3.47)Then, once we have shown that (3.46) and (3.47) hold, applying Ho¨lder’s inequalityin (3.47) will yield the conjectured estimate (3.45).The function Φ will be a Riesz product, where the building blocks of the productwill be ~r-functions coming from Proposition 3.21. 
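Since the Product Rule drives every estimate involving this Riesz product and its verification was left to the reader, the following sketch checks Lemma 3.23 numerically before the construction of Φ is made precise. The two rectangles below are an arbitrary choice of distinct, non-disjoint dyadic rectangles of equal area, and the grid consists of midpoints so that no sample sits on a dyadic boundary and the comparison is exact.

```python
import numpy as np

def haar_1d(x, a, b):
    """Haar function of the dyadic interval [a, b): -1 on its left half, +1 on its right half."""
    mid = (a + b) / 2
    return ((x >= mid) & (x < b)).astype(float) - ((x >= a) & (x < mid)).astype(float)

def haar_2d(x, y, R):
    (a1, b1), (a2, b2) = R
    return haar_1d(x, a1, b1) * haar_1d(y, a2, b2)

R      = ((0.0, 0.5),  (0.0, 0.25))    # [0,1/2) x [0,1/4), area 1/8
Rprime = ((0.0, 0.25), (0.0, 0.5))     # [0,1/4) x [0,1/2), area 1/8
Rcap   = ((0.0, 0.25), (0.0, 0.25))    # their intersection

xs = (np.arange(256) + 0.5) / 256      # midpoints only
X, Y = np.meshgrid(xs, xs)
product = haar_2d(X, Y, R) * haar_2d(X, Y, Rprime)
target  = haar_2d(X, Y, Rcap)
print(np.array_equal(product, target) or np.array_equal(product, -target))   # True
```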
To be more precise, we fix n of46order logN , i.e., n ' logN , and we define the test function Φ byΦ :=n∏k=0(1 + γfk)− 1, (3.48)where γ ∈ (0, 1) is a small constant which will be determined later, andfk := f(k,n−k) =∑R∈R~rαRhRis the two-dimensional ~r = (k, n− k)-function constructed in Proposition 3.21.We proceed to establish properties (3.46) and (3.47) for our test function Φ.Expanding the product in (3.48), we getΦ˜ :=n∏k=0(1 + γfk) = 1 + Φ1 +n∑k=2Φk, (3.49)whereΦ1 = γn∑k=0fk, (3.50)andΦk = γk∑0≤j1<···<jk≤nfj1 · fj2 · · · fjk , k ≥ 2. (3.51)Note that the functions fk are ±1-valued. Since γ < 1, 1 + γfk > 0 for all k,and so Φ˜ is a positive function. In addition, with the aid of Lemma 3.23, we havethat∫[0,1]dΦkdx = 0 for all k ≥ 1, since a finite product of ~r-functions is a linearcombination of Haar functions. Therefore,‖Φ˜‖1 =∫[0,1]dΦ˜dx = 1. (3.52)Moreover, noticing that Φ = Φ˜− 1, from the triangle inequality we obtain‖Φ‖1 ≤ ‖Φ˜‖1 + 1 = 2, (3.53)47which is (3.46).Now, we concentrate on showing (3.47). Recall that, from Proposition 3.21,〈DN , fk〉 ≥ cd for k = 0, . . . , n, for some constant cd < 1. From this it is easy to seethat〈DN ,Φ1〉 ≥ cdγ logN. (3.54)Hence, to complete the proof of (3.47), it suffices to show that, for some absoluteconstant c, we have〈DN ,n∑k=2Φk〉 ≤ 2cγ2 logN, (3.55)for a small value of γ < 1 such that γ < min(12, cd2c). Indeed, since Φ = Φ1 +∑nk=2 Φk,by applying the reverse triangle inequality, (3.54) and (3.55), we get|〈DN ,Φ〉| ≥ 〈DN ,Φ1〉 − 〈DN ,n∑k=2Φk〉 ≥ C(d, γ) logN, (3.56)where C(d, γ) = cdγ− 2cγ2. Note that C(d, γ) > 0 since γ < cd2c . From the definitionof Φk, k = 2, . . . , n, and Lemma 3.23, we can see that product of ~r-functions, fj1 ·fj2 · · · fjk , is again an ~r-function with ~r = (jk, n − j1). We rewrite the function Φkwith respect to the parameter s asΦk = γk2n∑s=n+1∑0≤j1<j2<···<jk≤n:n−j1+jk=sf~j~r , (3.57)where f~j~r is an ~r-function depending on j1 < j2 < · · · < jk with ~r = (jk, n − j1).Given a value of parameter the s and values of j1 and jk such that s = jk + n− j1,the number of ways to select distinct values of j2 < · · · < jk−1 between j1 and jk isequal to(s−n−1k−2). Furthermore, the number of ways to select the indices j1 and jksuch that jk + n− j1 = s for a given value of s is 2n− s+ 1. Using this, (3.57) and48Lemma 3.24, we get the following estimates:〈DN ,n∑k=2Φk〉 ≤ b2Nn∑k=2γk2n∑s=n+12−s(2n− s+ 1)(s− n− 1k − 2)≤ b2Nγ22n∑s=n+12−s(2n− s+ 1)s−n+1∑k=2(s− n− 1k − 2)γk−2≤ b2Nγ22n∑s=n+12−s(2n− s+ 1)(1 + γ)s−n−1≤ b2γ2N2−n−12n∑s=n+1(1 + γ2)s−n−1(2n− s+ 1)≤ 2nb2γ22n∑s=n+1(1 + γ2)s−n−1≤ 2nb2 2γ21− γ ≤ cγ21− γ logN.Here, we have also used the fact that N2−n−1 < 1, since 2n−2 ≤ N < 2n−1 andn ' logN . Now, choosing γ < min(12, cd2c), we have thatcγ21− γ logN ≤ 2cγ2 logN,where c = 4b2, which completes the proof of (3.55).As we have already mentioned, the L1-conjectured lower bound of the discrepancyfunction was solved by Hala´sz [25] in d = 2, where, once more, he used Riesz producttechniques. For the proof of L1-conjectured lower bound of the discrepancy function,the test function Φ was defined byΦ :=n∏k=0(1 +iγ√logNfk)− 1.In [25], by introducing complex numbers, he showed that ‖Φ‖∞ . 1. As in the49previous calculations, he also proved that |〈DN ,Φ〉| &√logN for an appropriatevalue of γ. 
These two estimates immediately imply the L1-conjectured lower boundof the discrepancy function in d = 2.50Chapter 4The two-dimensional small ballinequalityThe two-dimensional small ball inequality states that, for any collection of coefficients{αR} and any integer n ≥ 1, one must have the following inequality∥∥∥∥ ∑|R|=2−nαRhR∥∥∥∥∞≥ C22−n∑|R|=2−n|αR|, (4.1)where C2 is an absolute constant. Here, we will prove the two-dimensional small ballinequality, and will show that it is sharp. In fact, we will show in Section 4.2 thatfor all d ≥ 2, the d-dimensional small ball inequality is sharp.4.1 Temlyakov’s proof of the two-dimensional smallball inequalityAlthough there are multiple proofs of the two-dimensional small ball inequality [47,49, 11], here, we will only present the one given by Temlyakov [49]. The main reasonthat we focus on the details of Temlyakov’s proof is to highlight the connectionbetween the small ball inequality and discrepancy theory. As we already mentioned,51this connection lies in the method of proofs. In particular, we will demonstrate thesimilarities between Temlyakov’s proof of the small ball conjecture in d = 2 andHala´sz’s proof of the conjectured L∞-lower bound of the discrepancy function ind = 2. As in Hala´sz’s proof, the Product Rule, stated as Lemme 3.23 on page 46,was the key component in Temlyakov’s proof. We can extend this product rule toa finite product of Haar functions, i.e., if {Rj}kj=1 ⊂ D2 is a finite sequence of non-disjoint dyadic rectangles, all of the same volume, then∏kj=1 hRj = ±1h∩kj=1Rj . As aresult, the product of distinct ~r-functions, f~rj , ~rj ∈ H2n, is again an ~r-function, i.e.,∏kj=1 f~rj = f~r. However, here |~r| = r1 + r2 > n.It is easy to see that this rule does not hold in higher dimensions. For example,in d = 3, suppose R and R′ are distinct non-disjoint rectangles of the same volumeand whose first sides coincide, i.e., R1 = R′1. Then the product of the Haar functionssupported on these dyadic rectangles is not a Haar function, since hR1 · hR′1 = h2R1 =1R1 . The failure of this rule in higher dimensions makes the small ball inequality ahard problem in d ≥ 3.Proof of the small ball inequality for d = 2. Here, we solve the small ball conjecturefor d = 2 via a duality approach. To do this, we need to construct a test function,Ψ, which satisfies the following two relations:‖Ψ‖1 ≤ C, (4.2)where C is an absolute constant which does not depend on the choice of the coeffi-cients {αR} or the integer n ≥ 1, and〈Ψ,∑|R|=2−nαRhR〉:=∫[0,1]dΨ ·∑|R|=2−nαRhRdx ≥ C2−n∑|R|=2−n|αR|. (4.3)Once we construct such a test function, the small ball conjecture in d = 2 followsfrom a simple application of Ho¨lder’s inequality to (4.3) for p = 1 and q =∞.As in Hala´sz’s proof, the test function Ψ is a Riesz product. Here, the building52blocks of Ψ are again ~r-functions, but this time they are defined byfk := f(k,n−k) =∑|R|=2−n:|R1|=2−ksgn(αR)hR,and the test function Ψ is defined byΨ =n∏k=0(1 + γfk). (4.4)Here, γ ∈ (0, 1] is any constant. For simplicity, we choose the constant γ to be equalto 1, but we will see that the proof works for any positive constant γ ≤ 1.Now, we are ready to show that Ψ satisfies (4.2) and (4.3). First, since (1 +fk) isnonnegative for all k ∈ {0, . . . , n}, we observe that the test function Ψ is nonnegative.Expanding the product in (4.4) givesΨ = 1 + Ψ1 +n∑k=2Ψk, (4.5)whereΨ1 =n∑k=0fk, Ψk =∑0≤j1<···<jk≤nfj1 · · · fjk , k = 2, . . . , n. (4.6)As we remarked after restating the Product Rule, for all k ∈ {2, . . . 
, n}, Ψk is a sumof ~r-functions. Thus, since∫[0,1]df~r = 0 for any ~r-function, we get that∫[0,1]dΨk = 0for all k ∈ {1, . . . , n}. Therefore, ‖Ψ‖1 =∫[0,1]dΨ = 1 + 0 + 0 = 1, which shows thatΨ satisfies (4.2).Now, set Hn =∑|R|=2−n αRhR and note that 〈Ψ, Hn〉 =∫[0,1]2Hndx+ 〈Ψ1, Hn〉+∑nk=2〈Ψk, Hn〉. Since∫[0,1]2Hndx = 0, in order to finish the proof of (4.3), it sufficesto show〈Ψ1, Hn〉 ≥ C2−n∑|R|=2−n|αR|, (4.7)and〈Ψk, Hn〉 = 0, k = 2, . . . , n. (4.8)53It is easy to check that (4.8) holds. Indeed, as we have already mentioned, fork = 2, . . . , n, Ψk is a sum of ~r-functions with |~r| > n, i.e., Ψk is a linear combinationof Haar functions supported on dyadic rectangles of volume strictly less than 2−n.As a result, for every k ∈ {2, . . . , n}, when we multiply Ψk with Hn we get a linearcombination of functions of the form hRhR′ , where R = R1 × R2, R′ = R′1 × R′2 aretwo-dimensional dyadic rectangles with |R| = 2−n and |R′| < 2−n respectively. Since|R| 6= |R′|, Ri 6= R′i for either i = 1 or i = 2. Hence, hRihR′i is a Haar function. Dueto the fact that the 1-dimensional integral of Haar functions is equal to zero, whenwe integrate the product Ψk ·Hn over the unit square [0, 1]2, we therefore get (4.8).Now, we focus on the proof of (4.7). First, note that the function Hn can bewritten asHn =∑|R|=2−nαRhR =∑~r∈H2n∑R∈R~rαRhR, (4.9)since ∪~r∈H2nR~r = {R ∈ D2 : |R| = 2−n}. Using the above expression for Hn and theorthogonality of Haar functions, we have∫[0,1]2Ψ1Hndx =∑~r∈H2n∑R∈R~r∫[0,1]2|αR|h2Rdx = 2−n∑|R|=2−n|αR|, (4.10)which shows that (4.7) holds.From the proof of the two-dimensional small ball inequality we see that C2 = 1.If we had worked with the test function Ψ =∏nk=0(1 + γfk), γ < 1, then we wouldhave gotten that C2 = γ < 1. In Hala´sz’s proof, the constant γ had a specific value.In particular, he used the test function Φ =∏nk=0(1 + γfk) − 1 = Φ1 +∑nk=2 Φk,where Φ1 and Φk, k ∈ {2, . . . , n} were defined in (3.50) and (3.51) respectively. Thevalue of γ was chosen in such a way that the lower bound of 〈DN ,∑nk=2 Φk〉 wassmaller than the lower bound of 〈DN ,Φ1〉. Also, Ψ and Φ differed in that Hala´sz’s ~r-functions fk := f(k,n−k) were chosen so that they had the property that 〈DN , fk〉 ≥ cdfor all k ∈ {0, . . . , n}, whereas Temlyakov’s didn’t have any specific properties. The‘−1’ term was included in the definition of Φ for a technical reason. Specifically,including that term allowed Hala´sz to avoid calculating the integral∫[0,1]2DN(x)dx54when estimating the lower bound of 〈DN ,Φ〉 :=∫[0,1]2DN(x)Φ(x)dx.In the proof of the two-dimensional small ball inequality we estimated the L∞-norm of Hn from below by estimating the integral〈Ψ, Hn〉 :=∫[0,1]2ΨHndx.The test function Ψ was defined as it was because it identifies small sets where Hn isvery large. As a result, the integral 〈Ψ, Hn〉 is a good estimate for the L∞-norm ofHn. Indeed, consider for simplicity that {αR} ⊂ {−1, 1}, i.e., sgn(αR) = αR. Then,in this case, Hn =∑~r∈H2n f~r =∑nk=0 fk and Ψ =∏nk=0(1 + fk)= 2n+1 1E, whereE = {~x ∈ [0, 1]2 : fk(~x) = 1 for every k ∈ {0, . . . , n}}.In other words, the function Ψ is supported on the set E where Hn is maximized. Asa consequence of this fact, we have that∫[0,1]2ΨHndx =∫EΨHndx = ‖Hn‖∞∫EΨ =‖Hn‖∞, since ‖Ψ‖1 =∫EΨdx = 1.Remark 4.1. Temlyakov actually proved a stronger result. His proof carries throughif one replaces Hn with∑|R|≥2−n αRhR to give∥∥∥∥ ∑|R|≥2−nαRhR∥∥∥∥∞& 2−n∑|R|=2−n|αR|.Riesz products had already been used by Sidon [44, 45] to study lacunary Fourierseries. 
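Before recalling Sidon's construction, the two identities that drive Temlyakov's argument, ‖Ψ‖_1 = 1 and ⟨Ψ, H_n⟩ = 2^{-n} Σ_R |α_R|, can be confirmed numerically. The sketch below is only an illustration: the value of n and the random real coefficients are arbitrary choices, and the grid of midpoints at scale 2^{-(n+1)} makes all the integrals exact because every function involved is constant on those cells.

```python
import numpy as np
import itertools

def haar_1d(t, k, j):
    """Haar function of [j 2^-k, (j+1) 2^-k), taking the values -1 and +1."""
    a, mid, b = j * 2.0**-k, (j + 0.5) * 2.0**-k, (j + 1) * 2.0**-k
    return ((t >= mid) & (t < b)).astype(float) - ((t >= a) & (t < mid)).astype(float)

n = 4
m = 2 ** (n + 1)                       # everything is constant on cells of side 2^-(n+1)
xs = (np.arange(m) + 0.5) / m
X, Y = np.meshgrid(xs, xs, indexing="ij")

rng = np.random.default_rng(1)
Hn = np.zeros_like(X)                  # H_n = sum over |R| = 2^-n of alpha_R h_R
fs = []                                # f_k = sum over |R_1| = 2^-k of sgn(alpha_R) h_R
total_abs = 0.0
for k in range(n + 1):
    fk = np.zeros_like(X)
    for j1, j2 in itertools.product(range(2**k), range(2**(n - k))):
        hR = haar_1d(X, k, j1) * haar_1d(Y, n - k, j2)
        a = rng.standard_normal()      # an arbitrary real coefficient alpha_R
        Hn += a * hR
        fk += np.sign(a) * hR
        total_abs += abs(a)
    fs.append(fk)

Psi = np.ones_like(X)
for fk in fs:
    Psi *= 1 + fk                      # Temlyakov's Riesz product with gamma = 1

cell = 1.0 / m**2
print(np.sum(np.abs(Psi)) * cell)                       # ||Psi||_1 = 1
print(np.sum(Psi * Hn) * cell, 2.0**-n * total_abs)     # <Psi, H_n> = 2^-n sum |alpha_R|
```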
Recall that an increasing sequence {λj}∞j=1 ⊂ N is called lacunary if thereexists q > 1 so that λj+1/λj > q. Sidon used Riesz products to prove that if fhas a lacunary series, i.e., there exists a lacunary sequence Λ such that the Fouriercoefficients of f ,fˆ(k) =∫[0,1]f(x)e−2piikxdx,55are supported on the sequence Λ, then‖f‖∞ &∞∑k=1|fˆ(k)|,and‖f‖1 & ‖f‖2,where f is a bounded 1-periodic function. For the first inequality, he used the RieszproductPN(x) =N∏k=1(1 + cos(2piλkx+ δk)),where δk was chosen so that eiδk = fˆ(k)/|fˆ(k)|. He showed that ‖PN‖1 = 1 for everyN ∈ N and ‖f‖∞ ≥∣∣∣∣ ∫[0,1] f(x)PN(x)∣∣∣∣ = 12 ∑Nk=1 |f(λk)|. He completed the proofby letting N go to infinity. The same ideas were implemented for the proof of thesecond inequality.4.2 The sharpness of the small ball inequality inall dimensionsIn this section, we show that the small ball inequality is sharp, i.e., there exists aconstant Cd > 0 and a collection of {αR}, in fact {αR} ⊂ {−1, 1}, such that∥∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥∥∞≤ Cdn− d−22 · 2−n∑R∈Adn|αR| = Cdn d2 .It suffices to show that the above inequality holds on average, i.e.,E∥∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥∥∞≤ Cdn d2 ,56where {αR(ω)}R∈Adn are iid ±1-valued random variables on a probability space (Ω,P)with P(ω ∈ Ω : αR(ω) = 1) = P(ω ∈ Ω : αR(ω) = −1) = 12 . Indeed, recall Markov’sinequality: Let (X,Σ, µ) be a measure space. If f ∈ L1 we have thatµ({x : |f(x)| > a}) ≤ ‖f‖L1a,and for a = 2‖f‖L1 we have thatµ({x : |f(x)| > a}) ≤ 12.So, when µ = P (P a probability measure) and f(ω) =∥∥∥∑R∈Adn αR(ω)hR∥∥∥∞ we haveP({ω : |f(ω)| > 2E(f)}) ≤ 12which implies thatP({ω : |f(ω)| ≤ 2E(f)}) > 12.This implies that there exists ωo ∈ Ω such that|f(ω0)| =∥∥∥∥∥∥∑R∈AdnαR(ω0)hR∥∥∥∥∥∥∞≤ 2E∥∥∥∥∥∥∑R∈AdnαR(ω)hR∥∥∥∥∥∥∞. n d2 .To present the proof of the sharpness of the d-dimensional small ball inequality, weneed a preliminary lemma. This lemma is a standard fact in probability theory andwe state it here without a proof.Lemma 4.2. (Bernstein’s inequality [28]) If Zi, i = 1, . . . , N , are independent andidentically distributed ±1-valued random variables with P(Zi = 1) = P(Zi = −1) =570.5 and (c1, . . . , cN) ∈ RN , thenP(∣∣∣∣ N∑l=1ciZi∣∣∣∣ > λ)≤ 2 exp(− λ24σ2),where σ2 =∑Ni=1 σ2i with σ2i = E((ciZi)2) and E(Zi) = 0, for i = 1, . . . , N .Theorem 4.3. The small ball inequality is sharp on average, i.e.,E∥∥∥∥∥∥∑R∈AdnαRhR∥∥∥∥∥∥∞≤ Cdn d2 ,where {αR(ω)}R∈Adn are iid ±1 valued random variables on a probability space (Ω,P)with P(αR(ω) = 1) = P(αR(ω) = −1) = 12 .Proof. First, we observe that the function∑R∈Adn αR(ω)hR(·) is constant on dyadiccubes of side length 2−(n+1) for each ω ∈ Ω. This can be seen by noting that eachrectangle of volume 2−n can be covered by the dyadic cubes of side length 2−(n+1),and so each cube will lie in that part of the rectangle where the Haar function isconstant.As a consequence, the total number of such cubes is M = 2(n+1)d. If we denoteeach such cube by Qk, k = 1, . . .M , then we can create M random variablesXk(ω) =∑R∈AdnαR(ω)hR(xk),where xk ∈ Qk for k = 1, . . . ,M . Applying Bernstein’s inequality to Xk, for k =1, . . . ,M , we getP(|Xk| > t) ≤ 2 exp(− t24#Hdn). (4.11)Let ψ(t) = exp(t2) and define the following random variables:Yk(ω) =Xk(ω)Cnd−12,58where the constant C will be chosen later. Taking into account the fact that{ω : ψ(|Yk(ω)|) > t} = {ω ∈ Ω : |Yk(ω)| > ψ−1(t)},we have thatE(ψ(|Yk|)) = ∫ ∞0P(ψ(|Yk(ω)|) > t)dt=∫ ∞0P(|Yk(ω)| > ψ−1(t))dt=∫ ∞0P(|Xk(ω)| > Cnd−12√log t4#Hdn)=∫ ∞0min{1, 2 exp(−C2nd−1 log t4#Hdn)}dt (by (4.11))≤∫ ∞0min{1, 2t−K}dt,where #Hdn ≤ Cdnd−1 and K = C24Cd. 
Now we choose K sufficiently large, or equiva-lently we choose C sufficiently large, so that∫ ∞0min{1, 2t−K}dt ≤ C ′,where C ′ is an absolute constant, so thatE(ψ(|Yk|)) ≤ C ′. (4.12)Since ψ(t) = exp(t2) is a convex, increasing function on [0,∞) andψ(supk=1,...,M{|Yk(ω)|})= supk=1,...,M{ψ(|Yk(ω)|)},59we get the following by applying Jensen’s inequality:ψ(E(supk=1,...,M{|Yk(ω)|})) ≤ E(ψ( supk=1,...,M{|Yk(ω)|}))≤ E(supk=1,...,M{ψ(|Yk(ω)|)})≤ E(M∑k=1ψ(|Yk(ω)|))≤ C ′M = C ′2(n+1)d,where C ′ is an absolute constant, and the last line was obtained by using (4.12).Since ψ−1(t) =√log t, we have thatE(supk=1,...,M{|Yk(ω)|}) ≤ (n+ 1) 12d 12√C ′ log 2,which implies thatE(supk=1,...,M{|Xk(ω)|}) ≤ C ′′n d−12 n 12d 12 = C ′′n d2d 12 ,where C ′′ is an absolute constant. Hence, we have shown thatE∥∥∥∥∥∥∑R∈AdnαR(ω)hR∥∥∥∥∥∥∞. n d2 , (4.13)since supk=1,...,M{|Xk(ω)|} = ∥∥∥∑R∈Adn αR(ω)hR∥∥∥∞ , ∀ω ∈ Ω.60Chapter 5Elements of dyadic harmonicanalysis5.1 Khintchine’s inequalitiesKhintchine’s inequalities allow us to, up to a constant, calculate the p-norms of alinear combination of iid random variables. We will use Khintchine’s inequalities toprove Theorem 1.4.Let {Xi, 1 ≤ i ≤ N} be a collection of ±1-valued iid random variables on aprobability space (Ω,P) such that P(Xi = 1) = P(Xi = −1) = 12 . Then, Khintchine’sinequalities state that there exist constants Ap > 0 and Bp > 0 such thatAp[ N∑i=1|ai|2] 12≤∥∥∥∥ N∑i=1aiXi∥∥∥∥p≤ Bp[ N∑i=1|ai|2] 12, 1 < p <∞,for any finite sequence of real numbers (ai)Ni=1.The proofs of these inequalities rely on two key facts about the distributionfunction of the random variable SN =∑Ni=1 aiXi. In particular, the distribution61function of SN , dSN (λ) = P({ω ∈ Ω : |SN | > λ}), satisfies the sub-Gaussian bounddSN([ N∑i=1a2i] 12λ)≤ 2e− 12λ2 , for all λ > 0,and the Lp-norm of SN can be computed in terms of dSN :‖SN‖pp =∫ ∞0pλp−1dSN (λ)dλ, 0 < p <∞.Khintchine’s inequalities can be extended to the case of complex-valued squaresummable sequences {aj}. When p = 2, the orthogonality of the random variablesXi, 1 ≤ i ≤ N , reduces Khintchine’s inequalities to Parseval’s identity:‖SN‖22 =N∑i=1|ai|2.Moreover, Bp = 1 when 0 < p ≤ 2 and Ap = 1 for p ≥ 2, as the trivial estimates‖SN‖p ≤ ‖SN‖2, 0 < p ≤ 2 and ‖SN‖2 ≤ ‖SN‖p, p ≥ 2 show respectively. For acomplete proof of these inequalities, we refer the interested reader to [21]. For theproof of the optimal values of Ap and Bp, we refer the reader to [22, 53].A direct application of Khintchine’s inequalities shows that the p-norms of alinear combination of Rademacher functions are comparable, i.e., there exist positiveconstants Ci(p, q), i = 1, 2 such thatC1(p, q)∥∥∥∥ N∑k=0akrk∥∥∥∥p≤∥∥∥∥ N∑k=0akrk∥∥∥∥q≤ C2(p, q)∥∥∥∥ N∑k=0akrk∥∥∥∥p, q > p > 0. (5.1)Here, rk =∑I∈D:|I|=2−k hI are Rademacher functions which are iid random variableson the probability space ([0, 1), | · |).We would like to extend the above inequalities to more general functions. Morespecifically, we would like to extend them to∑Nk=0∑|I|=2−k αIhI or, in the multivari-able case, to∑R∈Dd:|R|=2−N αRhR, where the coefficients {αI} or {αR} depend not62only on the scale of the dyadic rectangles, but also on the rectangles themselves. 
Forthis reason, the Littlewood-Paley inequalities will be useful.5.2 The Littlewood-Paley inequalitiesIn this section, we will present the Littlewood-Paley inequalities for martingales.These inequalities can be considered as a generalization of Khintchine’s inequalities.The special case of the Littlewood-Paley inequalities for Haar functions will be usedfor the proof of the higher dimensional analogue of inequalities (5.1). To introduce theLittlewood-Paley inequalities for martingales, we need some preliminary definitionsfrom probability theory.Suppose X ∈ L1(Ω,B,P) is an integrable real-valued random variable and letG ⊂ B a be sub-σ-algebra. We define the conditional expectation of X given G to bea random variable, denoted by E(X|G), which satisfies(i) E(X|G) is G-measurable and integrable.(ii) For all G ∈ G, we have ∫GXdP =∫GE(X|G)dP.We mention only those properties conditional expectations possess that we will usethroughout this chapter.(1) Linearity: If X, Y ∈ L1, and α, β ∈ R, we haveE(αX + βY |G) = αE(X|G) + βE(Y |G).(2) E(E(X|G)) = E(X).(3) Product Rule: Let X, Y be random variables satisfying X, Y ·X ∈ L1. If Y ∈ G,63i.e., Y is measurable with respect to G, thenE(XY |G) = Y E(X|G).(4) Smoothing: If G1 ⊂ G2 ⊂ B, thenE(E(X|G2)|G1) = E(X|G1).(5) If X ∈ L1 is independent of the σ-algebra G, thenE(X|G) = E(X).The proof of these properties can be found in [38].Definition 5.1. Suppose we are given integrable random variables {Xn}n≥0 and σ-algebras {Gn}n≥0 which are sub-σ-algebras of B. Then {(Xn,Gn)}n≥0 is a martingaleif:(i) {Gn}n≥0 is an increasing sequence of σ-algebras, i.e.,G0 ⊂ G1 ⊂ G2 ⊂ · · · ⊂ B.(ii) Xn is adapted, i.e., Xn ∈ Gn, for each n ∈ N0 = {0, 1, 2, . . .}.(iii) For 0 ≤ m < n,E(Xn|Gm) = Xm.By the smoothing property, condition (iii) in the definition of a martingale isequivalent to E(Xn+1|Gn) = Xn. In many applications, Gn is the smallest σ-algebragenerated by {Xi : 0 ≤ i ≤ n}, i.e. Gn = σ(X0, . . . , Xn). Note that every martingale{(Xn,Gn)}n≥0 can be written as Xn =∑nj=0 dj, where {(dj,Gj)}j≥0 is a martingaledifference, i.e., dj ∈ Gj with dj ∈ L1 and E(dj+1|Gj) = 0 for j ≥ 0. Indeed, simplydefine d0 = X0 and dj = Xj−Xj−1, j ≥ 1. We can see that a martingale is a natural64generalization of a sum of independent random variables. Indeed, suppose {Zn}n≥0 isan independent sequence of integrable random variables such that E(Zn) = 0, n ≥ 0.Set Yn =∑ni=0 Zi and Gn = σ(Z0, . . . , Zn). Then {(Yn,Gn)}n≥0 is a martingale sinceE(Yn+1|Gn) = E(Yn|Gn) + E(Zn+1|Gn) = Yn + E(Zn+1) = Yn. Note that, in thiscase, {(Zi,Gi)}i≥0 is a martingale difference. Moreover, we can create a martingalefrom a random variable X ∈ L1 via the smoothing property. Specifically, if we takeXn = E(X|Gn), n ≥ 0, then {(Xn,Gn)}n≥0 is a martingale.Now, if {(Xn,Gn)}n≥0 is a martingale such that E(X2n) < ∞, then {dj} is anorthogonal martingale difference, i.e.,E(didj) = 0, i 6= j.Indeed, using properties (2) and (3) of conditional expectation, if j > i, then we getE(didj) = E (E(didj|Gi))= E(diE(dj|Gi)) = 0.As an immediate consequence, we obtainE(X2n) = E( n∑j=0d2j), n ≥ 0,which is equivalent to‖Xn‖2 =∥∥∥∥( n∑j=0d2j) 12∥∥∥∥2. (5.2)The Littlewood-Paley inequalities are an Lp-version of (5.2), but they can also beviewed as an extension of Khintchine’s inequalities to more general martingales.Theorem 5.2. (Littlewood-Paley inequalities) Let 1 < p < ∞. There are positive65real numbers Mp and Np such that, if {(Xn,Gn)}n≥0 is a martingale, thenMp∥∥∥∥( n∑j=0d2j) 12∥∥∥∥p≤ ‖Xn‖p ≤ Np∥∥∥∥( n∑j=0d2j) 12∥∥∥∥p, n ≥ 0. 
(5.3)If 1 < p <∞ and supn≥0 ‖Xn‖p <∞ then Xn converges to a random variable Xboth almost everywhere and with respect to the Lp-norm. Moreover, (5.3) becomesMp∥∥∥∥( ∞∑j=0d2j) 12∥∥∥∥p≤ ‖X‖p ≤ Np∥∥∥∥( ∞∑j=0d2j) 12∥∥∥∥p. (5.4)Here, the quantityS(X) =( ∞∑j=0d2j) 12is called the square function of X.These inequalities were proven by D. Burkholder [12]. A major component ofBurkholder’s proof of the Littlewood-Paley inequalities the following lemma.Lemma 5.3. If {dj} is a martingale difference and {εj} ⊂ {−1, 1}, then∥∥∥∥ n∑j=0εjdj∥∥∥∥p≤ cp∥∥∥∥ n∑j=0dj∥∥∥∥p, 1 < p <∞. (5.5)We will not prove inequality (5.5), but will show how to derive the Littlewood-Paley inequalities from Lemma 5.3.Proof of the Littlewood-Paley inequalities. Consider the Rademacher functionsrj =∑I∈D:|I|=2−j hI , j = 0, . . . , n on [0, 1]. Using inequality (5.5), we immediatelyget thatc−1p ‖Xn‖p ≤∥∥∥∥ n∑j=0rj(t)dj∥∥∥∥p≤ cp‖Xn‖p, for every t ∈ [0, 1]. (5.6)Raising both sides of the above inequality to the power p and integrating over the66interval [0, 1], we obtain(c−1p )p‖Xn‖pp ≤∫[0,1]∥∥∥∥ n∑j=0rj(t)dj∥∥∥∥ppdt ≤ (cp)p‖Xn‖pp. (5.7)Applying Fubini’s Theorem and Khintchine’s inequalities to the middle expressionof the above inequality, we have(c−1p )p‖Xn‖pp ≤ Bpp∥∥∥∥( n∑j=0d2j) 12∥∥∥∥pp. (5.8)Thus,‖Xn‖p ≤ Bpcp∥∥∥∥( n∑j=0d2j) 12∥∥∥∥p, (5.9)which is the right side of (5.3) with Np = Bpcp. The left side of (5.3) is provensimilarly.Also, Burkholder [13] found the values of the optimal constants Np and Mp oc-curring in the Littlewood-Paley inequalities:Np = max{p,pp− 1}− 1,Mp =(max{p,pp− 1}− 1)−1.5.3 Connection of martingale differences with HaarfunctionsRecall from (1.1) that D is the set of dyadic intervals of [0, 1], i.e.,D =∞⋃k=0A1k, (5.10)67where A1k is the collection of dyadic intervals of length 2−k and D∗ = D∪ [−1, 1]. Wewill be interested in the martingale:{(E(f |∆k),∆k)}k≥0, (5.11)where ∆k = σ(A1k) and f ∈ L1([0, 1]d). Note thatE(f |∆k) =∑I∈A1k〈f〉I 1I(x),where〈f〉I = 1|I|∫If(x)dxis the average of f over I. In the next proposition, we relate the martingale differencedk = E(f |∆k)− E(f |∆k−1)to Haar functions.Proposition 5.4. For every f ∈ L1([0, 1]) and for all k ≥ 1, we havedk(x) =∑I∈A1k−1〈f, hI〉|I| hI(x), x ∈ [0, 1]. (5.12)Proof. First, we observe that, for every I ∈ Ak−1, we have2〈f〉I = 〈f〉Il + 〈f〉Ir , (5.13)where Il and Ir are the left and right halves of the dyadic interval I. Also, we canexpress E(f |∆k) asE(f |∆k) =∑I∈A1k−1〈f〉Il 1Il +〈f〉Ir 1Ir (5.14)68and E(f |∆k−1), using (5.13), asE(f |∆k−1) =∑I∈A1k−112(〈f〉Il + 〈f〉Ir)(1Il +1Ir). (5.15)Subtracting (5.15) from (5.14), we getdk =∑I∈A1k−112(−〈f〉Il + 〈f〉Ir)(−1Il +1Ir) =∑I∈A1k−1〈f, hI〉|I| hI , (5.16)which concludes the proof.It is well-known that the collection of Haar functions, {hI}I∈D∗ , is an uncondi-tional Schauder basis for Lp([0, 1]), 1 < p <∞. As a consequence, for f ∈ Lp([0, 1])and 1 < p <∞, we get thatf =∞∑k=0dk (5.17)with respect to the Lp-norm. Furthermore, the square function of f is given byS(f) =(∣∣∣∣ ∫ 10f(x)dx∣∣∣∣2 +∑I∈D|〈f, hI〉|2|I|2 .1I) 12. (5.18)If f has a Haar expansion,f =∑I∈D∗αIhI , (5.19)then its square function isS(f) =(∑I∈D∗|αI |2 1I) 12, (5.20)and (5.4) becomesMp∥∥∥∥( ∑I∈D∗|αI |2 1I) 12∥∥∥∥p≤ ‖f‖p ≤ Np∥∥∥∥( ∑I∈D∗|αI |2 1I) 12∥∥∥∥p(5.21)69for 1 < p <∞.For functions of the formf(~x) =∑R∈Dd∗αRhR(~x), ~x = (x1, . . . , xd) ∈ [0, 1]d, (5.22)we define the product dyadic square functionSd(f) =( ∑R∈Dd∗|αR|2 1R) 12. (5.23)Inequalities (5.21) hold even when the coefficients αI are elements of some Hilbertspace (see [52]). 
Using this fact, one can prove the d-dimensional version of inequal-ities (5.21):Theorem 5.5. (Product Littlewood-Paley Inequalities [8]) For functions of the formf =∑R∈Dd∗ αRhR, we have(Mp)d‖Sd(f)‖p ≤ ‖f‖p ≤ (Np)d‖Sd(f)‖p, 1 < p <∞. (5.24)The next proposition compares different Lp-norms, p ∈ (0,∞), of a function.In particular, when f is a special linear combination of Haar functions, the nextproposition shows that the norms ‖f‖p are comparable for all p. The proof followsthe strategy outlined in Corollary 1.4 of [48, page 5-6], and we extend that corollaryto dimensions greater than one. Specifically, we establish the following:Proposition 5.6. Let f be a linear combination of Haar functions of the formf(~x) =∑R∈AdnαRhR(~x),such that the square function, Sd(f), is constant on [0, 1]d(i.e., Sd(f)(~x) = c(n, d)for every ~x ∈ [0, 1]d, where the constant c(n, d) depends on the integer n ≥ 1 and onthe dimension d). Then there exist positive constants c1(p, q, d) and c2(p, q, d) such70thatc2‖f‖q ≤ ‖f‖p ≤ c1‖f‖q for every 0 < p < q <∞. (5.25)Proof. First, we observe that the right side of inequality (5.25) is a simple conse-quence of Ho¨lder’s inequality (c1 = 1). Therefore, our goal is to prove thatc2‖f‖q ≤ ‖f‖p for every 0 < p < q <∞.We will show the above inequality by considering the following three cases:Case 1: 1 < p < q <∞.Case 2: 0 < p ≤ 1 < q <∞.Case 3: 0 < p < q ≤ 1.In Case 1, using the Littlewood-Paley inequalities and the fact that Sd(f) is inde-pendent of ~x, we get that‖f‖p ≥ (Ap)d‖Sd(f)‖p = (Ap)d‖Sd(f)‖q ≥(ApBq)d‖f‖q. (5.26)Thus, c2‖f‖q ≤ ‖f‖p with c2 =(ApBq)d.Now, we turn our attention to the proof of Case 2. First, choose α ∈ (0, 1) suchthat p > αq. Set r = pαqand s = rr−1 (i.e.,1r+ 1s= 1). Also, set b = 1 − α so thatα + b = 1. Note that αr = pq< 1, and bs > 1 since (1− α) rr−1 > 1 if and only if1 > αr. Now, applying Ho¨lder’s inequality, we get that‖f‖qq =∫[0,1]d|f(x)|αq|f(x)|bqdx ≤ ‖f‖αqαrq‖f‖bqbsq. (5.27)But bs > 1 shows that bsq > q > 1, and Case 1 implies that there exists a positiveconstant c2 such that(c2)bq‖f‖bqbsq ≤ ‖f‖bqq . (5.28)Combining inequalities (5.27) and (5.28), we get that‖f‖qq ≤ (c2)−bq‖f‖αqαrq‖f‖bqq71which implies that‖f‖q(1−b)q ≤ (c2)−bq‖f‖αqαrq.As a result, ‖f‖q ≤ (c2)− bα‖f‖αrq. Thus, Case 2 follows since αrq = p.Finally, we prove Case 3. Define the numbers α, b, r and s in the same way asin Case 2 and note that inequality (5.27) still holds. Since q ≤ 1 implies bsq ≤ bs,using the right side of inequality (5.25), we get‖f‖bqbsq ≤ ‖f‖bqbs. (5.29)Now, putting together inequalities (5.27) and (5.29), we get that‖f‖qq ≤ ‖f‖αqαrq‖f‖bqbs. (5.30)Since bs > 1 ≥ q, Case 2 gives us that ‖f‖bqbs ≤ (c2)−bq‖f‖bqq . Combining thiswith inequality (5.30), we get that ‖f‖qq ≤ (c2)−bq‖f‖αqαrq‖f‖bqq which is equivalent to‖f‖q(1−b)q ≤ (c2)−bq‖f‖αqarq. As a consequence, ‖f‖q ≤ (c2)−ba‖f‖arq. Since αrq = p,this completes the proof of Case 3.Remark 5.7. To summarize, the Littlewood-Paley inequalities are used in the proof of(5.25) for the case q > p > 1. In particular, the assumption that the square functionis constant on [0, 1]d permits the p-norms of the square function to be comparable.Combined with the Littlewood-Paley inequalities, this fact proves inequality (5.25)for q > p > 1. To prove (5.25) for the full range of the values of p and q, one needsto combine the proof of the special case q > p > 1 and a subtle treatment of Ho¨lder’sinequality. 
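The comparability asserted in Proposition 5.6 is easy to observe numerically. The sketch below is an illustration only: it takes d = 2 with one random choice of ±1 coefficients, so that S_d(f) is constant, and computes ‖H_n‖_p on a grid of midpoints at scale 2^{-(n+1)}, where the computation is exact; every value lands near ‖H_n‖_2 = (n+1)^{1/2}.

```python
import numpy as np
import itertools

def haar_1d(t, k, j):
    a, mid, b = j * 2.0**-k, (j + 0.5) * 2.0**-k, (j + 1) * 2.0**-k
    return ((t >= mid) & (t < b)).astype(float) - ((t >= a) & (t < mid)).astype(float)

n = 6
m = 2 ** (n + 1)
xs = (np.arange(m) + 0.5) / m
X, Y = np.meshgrid(xs, xs, indexing="ij")

rng = np.random.default_rng(3)
Hn = np.zeros_like(X)
for k in range(n + 1):
    for j1, j2 in itertools.product(range(2**k), range(2**(n - k))):
        Hn += rng.choice([-1.0, 1.0]) * haar_1d(X, k, j1) * haar_1d(Y, n - k, j2)

for p in (0.5, 1, 2, 4):
    Lp = np.mean(np.abs(Hn) ** p) ** (1 / p)
    print(f"p = {p}:  ||H_n||_p = {Lp:.3f}")
print("sqrt(n+1) =", np.sqrt(n + 1))
```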
If we assume that the coefficients {αR} satisfy the inequalitiesc1 ≤ |αR| ≤ c2, for every R ∈ Adn,where c1 and c2 are positive constants independent of R ∈ Adn instead, then inequal-ities (5.25) still hold, as it is still true that the p-norms of the square function arecomparable.72Remark 5.8. It is clear that iff(~x) =∑R∈AdnαRhR(~x),with {αR}R∈Adn ⊂ {−1, 1}, then the square function, (Sdf), is independent of ~x ∈[0, 1]d. More precisely,Sd(f)(~x) =[∑R∈Adn|αR|2 1R(~x)] 12=[#Hdn] 12 ,where #Hdn denotes the cardinality of the set Hdn. An immediate consequence of thisis that‖Fr1‖L1([0,1]d−1) & ‖Fr1‖L2([0,1]d−1) & (n− r1)d−22 , r1 = 0, 1, . . . , n, (5.31)whereFr1(~x′) =∑R′∈Ad−1n−r1αR′hR′(~x′) (5.32)with ~x′ = (x2, x3, . . . , xd) ∈ [0, 1)d−1 and {αR′} ⊂ {−1, 1}. Estimate (5.31) will becrucial in showing Lemma 6.4 and Theorem 1.3. Moreover, for r1 = 0, Fr1 is thefunction used in the proof of the lower bound of the Lp-norm of the discrepancyfunction when 1 < p <∞, and, if n ' logN , then ‖Fr1‖p ' (logN)d−22 . In addition,defineAr1 =∑~r′∈Hd−1n−r1∑R∈R~rαRhR,with (r1, ~r′) ∈ Hdn, r1 = 0, 1, . . . , n, and {αR} ⊂ {−1.1}. Using the same reasoningas for Fr1 , we get‖Ar1‖1 & ‖Ar1‖2 & (n− r1)d−22 , (5.33)r1 = 0, 1, . . . , n. These inequalities will be essential in proving Lemma 7.5.73Chapter 6Two proofs of Theorem 1.3The main objective of this chapter is to prove Theorem 1.3. In other words, weprove the signed small ball conjecture for all d ≥ 3 under an additional constrainton the coefficients {αR}R∈Adn . We will give two proofs of Theorem 1.3. The first onewill be based on a direct estimate of the L∞-norm of the hyperbolic sum Hn, andthe second one will be based on a duality argument which uses Riesz products. Thesecond proof is particularly important since the techniques used there can also beused to obtain lower bounds of hyperbolic sums in other function spaces, such asexponential Orlicz spaces. We will discuss this more in Chapter 7. However, beforeproving Theorem 1.3, we first motivate the additional constraint on the coefficientsthat appears in Theorem 1.3, the splitting condition.6.1 Motivation for the splitting conditionRecall that a coefficient αR ∈ {−1, 1} satisfies the splitting condition if αR = αR1 ·αR′with αR1 , αR′ ∈ {−1, 1} and R = R1×R′ = R1×R2× · · · ×Rd. If one assumes thatthe coefficients of the signed hyperbolic sum satisfy the splitting condition, then thesigned hyperbolic sum Hn takes the formHn(~x) =n∑r1=0Br1(x1)Fr1(~x′), ~x = (x1, ~x′) ∈ [0, 1]d, (6.1)74withBr1(x1) =∑|R1|=2−r1αR1hR1(x1)andFr1(~x′) =∑|R′|=2−(n−r1)αR′hR′(~x′), r1 = 0, 1, . . . , n.The functions {Br1}nr1=0 can be viewed as random variables defined on the probabilityspace (Ω,B, P ) = ([0, 1),B([0, 1)), |·|), where |·| is the Lebesgue measure and B([0, 1))is the Borel σ -algebra on the interval [0, 1), as we will explain later. The power of thesplitting condition is that, when used to rewrite the hyperbolic sum as in (6.1), therandom variables {Br1}nr1=0 are independent, and the independence of these randomvariables will facilitate the computation of the supremum norm of Hn and lead tothe gain of n1/2 over the trivial bound. We prove the independence of {Br1}nr1=0 afterthis preliminary discussion.To prove the independence of the generalized Rademacher functions {Br1}nr1=0,we will need to represent them in terms of iid binary random variables. 
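The independence to be established in Proposition 6.1 can be observed empirically first. In the sketch below (an illustration only, with arbitrary random signs α_{R_1} and a modest sample size), each B_r is evaluated through the (r+1)-st binary digit of x_1, which is exactly the representation developed next; the sampled vector (B_0(x_1), ..., B_n(x_1)) behaves like a vector of iid ±1 signs.

```python
import numpy as np

n = 3
rng = np.random.default_rng(4)
alpha = [rng.choice([-1.0, 1.0], size=2**r) for r in range(n + 1)]   # arbitrary signs alpha_{R_1}

def B(r, x):
    """B_r(x) = sum over |I| = 2^-r of alpha_I h_I(x): the sign alpha_I of the interval
    containing x, times -1 or +1 according to the (r+1)-st binary digit of x."""
    interval = np.floor(x * 2**r).astype(int)              # which dyadic interval of length 2^-r
    digit = np.floor(x * 2**(r + 1)).astype(int) % 2       # 0 on its left half, 1 on its right half
    return alpha[r][interval] * np.where(digit == 0, -1.0, 1.0)

x = rng.random(100_000)
vals = np.stack([B(r, x) for r in range(n + 1)])           # shape (n+1, number of samples)
print(np.round(vals.mean(axis=1), 3))                      # each mean is close to 0
print(np.round(np.corrcoef(vals), 2))                      # off-diagonal entries close to 0
patterns, counts = np.unique(vals.T, axis=0, return_counts=True)
print(len(patterns), counts / len(x), 2.0 ** -(n + 1))     # all 16 sign patterns, each ~ 1/16
```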
To give allthe details of this representation, we need to recall some facts from probability theoryabout the dyadic sequence {di(x)}i≥1 ⊂ {0, 1} associated to the dyadic expansion ofa point x ∈ [0, 1),x =∞∑i=1di(x)2i. (6.2)By convention, we use the terminating dyadic expansion for the dyadic rationals{x = k2m: k,m ∈ Z,m ≥ 0, 0 ≤ k < 2m}.Fact 1: Each di is a random variable.Fact 2: The sequence {di, i ≥ 1} is iid. In other words, each di is identicallydistributed with P [di = 1] = P [di = 0] =12, and the di are independent.The proof of these facts can be found in [38, page 99-100].To each such dyadic sequence we associate a sequence of the iid±1-valued randomvariables, {Xi}i≥1, defined by75Xi =−1, if di = 01, if di = 1 ,where i ∈ Z+ = {1, 2, . . .}. For every i ∈ Z+, we let~Xi = (X1, . . . , Xi),~di = (d1, . . . , di),~bi = (b1, b2, . . . , bi) ∈ {−1, 1}i,with the convention that {−1, 1}0 = {0}, ~d0 = 0, ~X0 = 0 and ~b0 = 0. Now, fixk ∈ {0, 1, . . . , n} and a collection of numbers {αI}I∈D:|I|=2−k ⊂ {−1, 1}, and definethe map:ak : {−1, 1}k → {αI}I∈D:|I|=2−k , ak(~bk) = αI , (6.3)where I is the dyadic interval with |I| = 2−k given byI =[k∑i=1bi + 12i+1,k∑i=1bi + 12i+1+ 2−k).Observe that every x ∈ I has the property that ~Xk(x) = ~bk. Note that a0(~b0) = α[0,1)and that this map is onto. Now, it is easy to verify thatBr1(x1) = ar1( ~Xr1(x1))Xr+1(x1), r1 = 0, 1, . . . , n. (6.4)Hence,Hn(~x) =n∑r1=0ar1( ~Xr1(x1))Xr1+1(x1)Fr1(~x′), ~x = (x1, ~x′) ∈ [0, 1]d. (6.5)Proposition 6.1. The random variables {Bk = ak( ~Xk)Xk+1}nk=0 are iid binary ran-dom variables on the probability space ([0, 1),B([0, 1)), P = | · |).76Proof. First, for fixed values of 0 ≤ k ≤ n and b0 ∈ {−1, 1}, we show thatP (Xk+1ak( ~Xk) = b0) =12. We will then use this to show that the random vari-ables {ak( ~Xk)Xk+1}nk=0 are independent.Without loss of generality, we can assume that 0 < k ≤ n. Fixing b0 ∈ {−1, 1},we getP (Xk+1ak( ~Xk) = b0) =∑~bk∈{−1,1}kP(Xk+1ak( ~Xk) = b0, ~Xk = ~bk)=∑~bk∈{−1,1}kP(Xk+1 =b0ak(~bk), ~Xk = ~bk)= 2k12(12)k=12.Now, using the independence of the random variables {Xi}n+1i=1 , for any ~bn+1 ∈{−1, 1}n+1, we get thatP (X1a0( ~X0) = b1, . . . , Xn+1an( ~Xn) = bn+1) = P (X1 = b˜1, . . . , Xn+1 = b˜n+1)=(12)n+1=n∏k=0P (Xk+1ak( ~Xk) = bk+1),where b˜1, . . . , b˜n+1 are ±1-valued deterministic constants depending on b1, . . . , bn+1.This last equality gives the independence of the random variables {ak( ~Xk)Xk+1}nk=0.6.2 The first proof of Theorem 1.3The next proposition gives us one of the main ingredients of the first proof of Theorem1.3: an explicit formula for the L∞-norm of a linear combination of iid ±1-valued77random variables.Proposition 6.2. Let (Ω,F , P ) be a probability space and {Ui}ni=1 be a finite collec-tion of ±1-valued independent random variables for which P (Ui = ±1) > 0 for everyi = 1, . . . , n. If (ci)ni=1 is any finite deterministic sequence of real numbers, then∥∥∥∥∥n∑i=1ciUi∥∥∥∥∥∞=n∑i=1|ci|.Proof. It is clear that |∑ni=1 ciUi(ω)| ≤∑ni=1 |ci| for every ω ∈ Ω. So, to complete theproof, we need to find ω0 ∈ Ω such that Ui(ω0) = sgn(ci), i = 1, . . . , n, where sgn isthe signum function defined in (1.7). Let A =⋂ni=1{ω ∈ Ω : Ui(ω) = sgn(ci)}. Usingthe independence of the random variables {Ui}ni=1, we get that P (A) =∏ni=1 P (Ui =sgn(ci)) > 0, which shows that A is not a set of measure zero. Thus,∥∥∥∥∥n∑i=1ciUi∥∥∥∥∥∞=n∑i=1|ci|.The first proof of Theorem 1.3. 
Using the definition of the supremum norm and (6.5),we get‖Hn‖∞ = sup~x′∈[0,1]d−1supx1∈[0,1]∣∣∣∣∣n∑r1=0Fr1(~x′)ar1(~Xr1(x1))Xr1+1(x1)∣∣∣∣∣ . (6.6)Now, if we can show that‖Hn‖∞ ≥n∑r1=0‖Fr1‖L1([0,1]d−1), (6.7)then we can use (5.31) to obtain the desired inequality. To obtain (6.7), we needto rewrite the inner supremum appearing on the right side in (6.6). Fix ~x′ =(x2, x3, · · · , xd) ∈ [0, 1]d−1. Applying both Proposition 6.1 and Proposition 6.2 to78Ur1 = αr1( ~Xr1)Xr1+1 and cr1 = Fr1(x2, x3, · · · , xd), r1 = 0, 1, . . . , n, we get thatsupx1∈[0,1]∣∣∣∣∣n∑r1=0Fr1(~x′)ar1(~Xr1(x1))Xr1+1(x1)∣∣∣∣∣ =n∑r1=0|Fr1(~x′)|. (6.8)Combining (6.6) and (6.8), we get‖Hn‖∞ = sup~x′∈[0,1]d−1n∑r1=0|Fr1(~x′)|. (6.9)Inequality (6.7) easily follows since‖Hn‖∞ ≥∥∥∥∥∥n∑r1=0|Fr1|∥∥∥∥∥L1([0,1]d−1)=n∑r1=0‖Fr1‖L1([0,1]d−1).Now, using inequality (5.31), estimate (6.7) becomes‖Hn‖∞ ≥ cdn∑r1=0‖Fr1‖L2([0,1]d−1). (6.10)Using the orthogonality of Haar functions, we have that‖Fr1‖L2([0,1]d−1) = ‖Sd−1(Fr1)‖L2([0,1]d−1) & (n− r1)d−22 . (6.11)Finally, putting inequalities (6.10) and (6.11) together, we obtain the desired result.Remark 6.3. The two basic components of the first proof are the following:‖Hn‖∞ = sup~x′∈[0,1]d−1n∑r1=0|Fr1(~x′)|, (6.12)‖Fr1‖L1([0,1]d−1) & ‖Fr1‖L2([0,1]d−1), (6.13)for r1 = 0, 1, . . . , n. According to Remark 5.7, if we consider coefficients {αR1αR′}79in Hn for which αR1 ∈ {−1, 1} and 0 < c1 ≤ |αR′| ≤ c2 for all R1 × R′ ∈ Adn,then inequality (6.13) still holds. Moreover, for the same collection of coefficients,the splitting condition ensures that equality (6.12) also remains true. Based onthese two facts, the small ball conjecture, Conjecture 1.1, can be proved for thisspecial collection of coefficients. Indeed, using (6.12), (6.13), and the Cauchy-Schwarzinequality, we obtain‖Hn‖∞ &n∑r1=0‖Fr1‖L2([0,1]d−1)&n∑r1=0( ∑~r′∈Hd−1n−r1∑R′∈R~r′|αR′|22−(n−r1))1/2& 2−nn−(d−2)/2n∑r1=02r1∑~r′∈Hd−1n−r1∑R′∈R~r′|αR′ |& 2−nn−(d−2)/2n∑r1=0∑~r′∈Hd−1n−r1∑|R1|=2−r1∑R′∈R~r′|αR1||αR′|& 2−nn−(d−2)/2∑|R|=2−n|αR|,which shows the validity of Conjecture 1.16.3 The second proof of Theorem 1.3The second proof of Theorem 1.3 will use a duality argument. That is, for every col-lection of coefficients {αR} ⊂ Asplit, we will construct a test function ψ ∈ L1([0, 1]d)such that‖ψ‖L1 ≤ C,80where C is a constant which does not depend on the choice of coefficients {αR} ⊂Asplit or the integer n ≥ 1. In addition, the test function ψ will also satisfy〈ψ,Hn〉 :=∫[0,1]dψHndx & nd2 ,where Hn =∑|R|=2−n αRhR with {αR} ⊂ Asplit. Theorem 1.3 follows from the abovetwo inequalities by a simple application of Ho¨lder’s inequality to 〈ψ,Hn〉. We willprove the two inequalities in Lemma 6.4. This lemma is also essential in provingTheorem 1.6. The construction of the test function ψ will be similar to that inTemlyakov’s proof of the two-dimensional small ball inequality, given in Section 4.1of Chapter 4.Lemma 6.4. For any n ≥ 1, d ≥ 3, and any choice of coefficients {αR}R∈Adn ⊂ Asplit,there exists a function ψ ∈ L1([0, 1]d) with ‖ψ‖L1 = 1 such that〈ψ,Hn〉 =∫[0,1]dψHndx & nd2 . (6.14)Here, the implicit constant depends only on the dimension d and not on the collectionof coefficients {αR}R∈Adn ⊂ Asplit or the integer n ≥ 1.Proof. Recall the expression of the functionHn(~x) =∑R∈AdnαRhR(~x) (6.15)given in (6.1) and define the test function ψ as the Riesz productψ(~x) =n∏r=0(1 +Br(x1)sgn(Fr(~x′))). (6.16)Here, we have replaced r1 with r for our convenience. We claim that ψ has theproperty that‖ψ‖L1([0,1]d) = 1 (6.17)81and satisfies (6.14). 
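Before giving the proof, the claim can be checked numerically for one small example. The sketch below is an illustration only: it takes d = 3, n = 3 and randomly chosen split coefficients, builds ψ on a grid of midpoints at scale 2^{-(n+1)}, which makes the integrals exact, and confirms both ‖ψ‖_{L^1} = 1 and the identity ⟨ψ, H_n⟩ = Σ_r ‖F_r‖_{L^1} that underlies (6.14).

```python
import numpy as np
import itertools

def haar_1d(t, k, j):
    a, mid, b = j * 2.0**-k, (j + 0.5) * 2.0**-k, (j + 1) * 2.0**-k
    return ((t >= mid) & (t < b)).astype(float) - ((t >= a) & (t < mid)).astype(float)

n, d = 3, 3
m = 2 ** (n + 1)
xs = (np.arange(m) + 0.5) / m
X1, X2, X3 = np.meshgrid(xs, xs, xs, indexing="ij")
rng = np.random.default_rng(5)

B, F = [], []
for r in range(n + 1):
    Br = sum(rng.choice([-1.0, 1.0]) * haar_1d(X1, r, j) for j in range(2**r))
    Fr = np.zeros_like(X1)
    for k in range(n - r + 1):                         # 2D dyadic rectangles of area 2^-(n-r)
        for j2, j3 in itertools.product(range(2**k), range(2**(n - r - k))):
            Fr += rng.choice([-1.0, 1.0]) * haar_1d(X2, k, j2) * haar_1d(X3, n - r - k, j3)
    B.append(Br)
    F.append(Fr)

Hn = sum(B[r] * F[r] for r in range(n + 1))            # split coefficients alpha_R = alpha_R1 alpha_R'
psi = np.ones_like(X1)
for r in range(n + 1):
    psi *= 1 + B[r] * np.where(F[r] >= 0, 1.0, -1.0)   # sgn(x) = 1 for x >= 0, -1 otherwise, as in the text

cell = 1.0 / m**d
print(np.sum(np.abs(psi)) * cell)                                       # ||psi||_1 = 1
print(np.sum(psi * Hn) * cell,                                          # <psi, H_n>
      sum(np.sum(np.abs(F[r])) * cell for r in range(n + 1)))           # = sum_r ||F_r||_1
```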
Indeed, we first observe that, for any ~x ∈ [0, 1]d, ψ(~x) ≥ 0,since each factor in the product forming ψ is nonnegative. Expanding the productin (6.16), we getψ = 1 + ψ1 + ψ2, (6.18)withψ1(~x) =n∑r=0Br(x1)sgn(Fr(~x′)), (6.19)andψ2(~x) =n∑k=2∑0≤s1<···<sk≤nk∏j=1Bsj(x1)sgn(Fsj(~x′)). (6.20)Now, it is easy to see that‖ψ‖L1([0,1]d) =∫[0,1]dψ(x)dx = 1 +∫[0,1]dψ1(x)dx+∫[0,1]dψ2(x)dx.Hence, to complete the proof of (6.17), it suffices to show that∫[0,1]dψ1(x)dx = 0 (6.21)and ∫[0,1]dψ2(x)dx = 0. (6.22)To show (6.21), we first recall from Proposition 6.1 that {Br}nr=0 are indepen-dent random variables of mean zero. Now, for r ∈ {0, . . . , n}, this implies that∫[0,1]Br(x1)dx1 := E(Br) = 0, where E is the expected value of Br with respect tothe one-dimensional Lebesgue measure, and this proves (6.21).Now, fix k ∈ {2, . . . , n} and {sj}kj=1 ⊂ {0, . . . , n} with s1 < · · · < sk. Again, usingthe fact that {Br}nr=0 are independent random variables of mean zero, we get that∫[0,1]∏kj=1Bsj(x1)dx1 = E(∏kj=1 Bsj) =∏kj=1 E(Bsj) =∏kj=1∫[0,1]Bsj(x1)dx1 = 0,which proves (6.22).82To show that ψ satisfies (6.14), we first use (6.18) to get〈ψ,Hn〉 =∫[0,1]dHn(x)dx+ 〈ψ1, Hn〉+ 〈ψ2, Hn〉. (6.23)Now, since {Br}nr=0 are of mean zero, we get that∫[0,1]dHn(x)dx = 0. Therefore, toconclude (6.14), we show that〈ψ1, Hn〉 & n d2 (6.24)and〈ψ2, Hn〉 = 0. (6.25)Since {Br}nr=0 are ±1-valued independent random variables of mean zero, using(5.31), we get that〈ψ1, Hn〉 =∫[0,1]d−1n∑r,r′=0sgn(Fr)Fr′∫[0,1](BrBr′)=n∑r=0‖Fr‖L1([0,1]d−1)&n∑r=0‖Fr‖L2([0,1]d−1)&n∑r=0(n− r) d−22 & n d2 ,(6.26)which proves (6.24). Again, using the fact that {Br}nr=0 are ±1-valued independentrandom variables of mean zero, from the definitions of the functions ψ2 and Hn weget that〈ψ2, Hn〉 =∫[0,1]d−1n∑r=0n∑k=2∑0≤s1<···<sk≤nFrk∏j=1sgn(Fsj)∫[0,1]k∏j=1(BsjBr)=0,(6.27)which completes the proof.83The second proof of Theorem 1.3. Lemma 6.4 ensures the existence of the functionψ with ‖ψ‖L1([0,1]d) = 1 for which∫[0,1]dψHndx & nd2 . (6.28)Applying Ho¨lder’s inequality in (6.28), we get that ‖Hn‖∞ & n d2 .6.4 The sharpness of the small ball inequality withrestricted coefficientsIn this section, we prove Theorem 1.4: the sharpness of the small ball inequality whenthe coefficients satisfy the splitting condition. Precisely, this means we will provethat, for all n ≥ 1 and d ≥ 3, there exists a choice of coefficients {αR}R∈Adn ⊂ Asplitfor which ∥∥∥∥ ∑R∈AdnαRhR∥∥∥∥∞. n d2 . (6.29)The proof of Theorem 1.4 relies on two lemmas: the first establishes the inde-pendence of a product of random variables, and the second estimates the expectedvalue of the point-wise maximum of a collection of random variables.Lemma 6.5. Fix an integer l ≥ 1. Let {Xi, Y ij : 1 ≤ i ≤ m, 1 ≤ j ≤ l} be acollection of independent random variables taking the values ±1 with probability 12.Then the random variables {XiY ij : 1 ≤ i ≤ m, 1 ≤ j ≤ l} are independent.Proof. Let−→W i = (XiYi1 , . . . , XiYil ), i = 1, . . . ,m. First, we claim that, for ~i ∈84{−1, 1}l and i = 1, . . . ,m, P(−→W i = ~i)= 12l. To see this, observe thatP(−→W i = ~i)=P(XiY i1 = i1, . . . , XiY il = il)=∑bi∈{−1,1}P(Y i1 =i1bi, . . . , Y il =ilbi, Xi = bi)=12l22=12l.Now, we prove the independence of {XiY ij : 1 ≤ i ≤ m, 1 ≤ j ≤ l}. To prove this,we need to show thatP(−→W 1 = ~1, . . . ,−→Wm = ~m)=( 12l)m.Let ~b = (b1, . . . , bm) ∈ {−1, 1}m, ~X = (X1, . . . , Xm) and ~Yl,i = (Y i1 , . . . , Y il ), i =1, . . . ,m. The independence of the random variables {Xi, Y ij : 1 ≤ i ≤ m, 1 ≤ j ≤ l}gives us thatP(−→W 1 = ~1, . . . 
,−→Wm = ~m)=∑~b∈{−1,1}mP(~Yl,1 =~1b1, . . . , ~Yl,m =~mbm, ~X = ~b)=( 12l)m2m2m=( 12l)m.Lemma 6.6. Let Y1, . . . , YN be random variables on a probability space (Ω,P) andφ : [0,∞) → R be a convex and strictly increasing function. If Eφ(|Yk|) ≤ C, k =1, . . . , N , then we have thatE(sup1≤k≤N|Yk|)≤ φ−1(N),85where C is an absolute constant and φ−1 is the inverse function of φ.The proof of this lemma, which we omit here, can be found in Lacey’s excellentsurvey [34] on the small inequality and discrepancy theory. The proof is a generaliza-tion of the proof of the special case when φ(t) = exp(t2), t ≥ 0, which was presentedin Chapter 4 in the proof of the sharpness of the d-dimensional small ball inequality.Proof of Theorem 1.4. Suppose that the coefficients {αR1αR′} in Asplit are chosenrandomly and independently, i.e., consider a probability space (Ω,P) where {αR1}and {αR′} are two families of ±1-valued iid random variables which are independentof each other. To prove (6.29), it suffices to show thatE∥∥∥∥ ∑R∈AdnαRhR∥∥∥∥∞. n d2 , (6.30)where E denotes the expected value with respect to the probability P.Let Qk be a dyadic cube in Dd of side-length 2−(n+1), k = 1, . . . , 2(n+1)d. Notethat the collection of these dyadic cubes partitions the unit cube [0, 1)d. SetZk(ω, ~x) =∑|R|=2−nαRhR|Qk =∑~r∈Hdn∑Qk⊂R∈R~rαR1(ω)αR′(ω)hR(~x), (ω, ~x) ∈ Ω×Qk,and fix any ~x ∈ Qk. Note that #{R ∈ Adn, Qk ⊂ R} = #Hdn. Using this factand applying Lemma 6.5 to the random variables {αR1} and {αR′}, we get that Zkis a linear combination of #Hdn many iid ±1-valued random variables. ApplyingKhintchine’s inequalities yieldsC−1d nd−12 ≤ E(|Zk|) ≤ Cdn d−12 . (6.31)Set Yk =|Zk|E|Zk|A , where A is a large constant to be determined. We claim that thereexists A, depending on the dimension d, for whichEφ(Yk) . 1, (6.32)86where φ(t) = exp( t22), t > 0, for every k ∈ {1, . . . , 2(n+1)d}. To proceed, we will firstshow how (6.30) follows from (6.32), and then we will prove the claim.To see how (6.30) follows from the (6.32), let N = 2(n+1)d. Lemma 6.6 impliesthatE(sup1≤k≤N|Yk|).√n. (6.33)Combining (6.31) and (6.33) and using the fact that Zk is constant on the dyadiccube Qk for a fixed ω ∈ Ω, we get thatE∥∥∥ ∑R∈AdnαRhR∥∥∥∞=E(sup1≤k≤N|Zk|)=E(sup1≤k≤NYkE|Zk|A). n d−12 E(sup1≤k≤NYk). n d−12√n.n d2 .Now we prove the claim. First, we estimate the distribution function of φ(Yk) asfollows:P(φ(Yk) > λ) =P(Yk > φ−1(λ))=P(Yk >√2 log λ)=P(|Zk| > E|Xk|A√2 log λ)≤P(|Zk| > AC−1d nd−12√2 log λ).87Therefore, using Bernstein’s inequality, stated in Lemma 4.2, we getP(φ(Yk) > λ) ≤2 exp(nd−1 log(λ−A2C−2d 2)4#Hdn)≤2 exp(nd−1 log(λ−A2C−2d 2)4nd−1C ′d). 1λA2C−2d2C′d. 1λ2,for A chosen large enough. Now, applying the distribution identity,E(φ(Yk)) =∫ ∞0P(φ(Yk) > λ)dλ,we have thatE(φ(Yk)) .∫ 10dλ+∫ ∞1λ−2dλ ≤ C ′′d .88Chapter 7Orlicz space bounds for specialclasses of hyperbolic sumsAs we have already mentioned, the Lp-bounds of the signed hyperbolic sums, Hn =∑|R|=2−n αRhR with {αR} ⊂ {−1, 1}, are equivalent to the L2-bounds. Indeed,applying Proposition 5.6, there exist positive constants Ci(2, p, d), i = 1, 2, such thatC1(2, p, d)‖Hn‖2 ≤ ‖Hn‖p ≤ C1(2, p, d)‖Hn‖2, 0 < p <∞.Using the orthogonality of Haar functions, we obtain ‖Hn‖2 = ‖Sd(Hn)‖2 ' n d−12 .Therefore, ‖Hn‖p ' n d−12 for all p in (0,∞), where the implicit constants depend onthe value of p and the dimension d.In this chapter, we explore lower bounds of signed hyperbolic sums in exponentialOrlicz spaces. 
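Before introducing the Orlicz space machinery, it is instructive to record how far the supremum norm sits above this common L^p order. The sketch below is an illustration only: it takes d = 3 with random split coefficients, relies on identity (6.9) so that the supremum over x_1 is already carried out, and evaluates sup_{x'} Σ_r |F_r(x')| exactly on a grid of midpoints at scale 2^{-(n+1)}; the result is printed next to n^{d/2}, the order given by Theorems 1.3 and 1.4.

```python
import numpy as np
import itertools

def haar_1d(t, k, j):
    a, mid, b = j * 2.0**-k, (j + 0.5) * 2.0**-k, (j + 1) * 2.0**-k
    return ((t >= mid) & (t < b)).astype(float) - ((t >= a) & (t < mid)).astype(float)

rng = np.random.default_rng(6)
for n in (3, 5, 7):
    m = 2 ** (n + 1)
    xs = (np.arange(m) + 0.5) / m
    X2, X3 = np.meshgrid(xs, xs, indexing="ij")
    total = np.zeros_like(X2)                          # sum_r |F_r(x')|
    for r in range(n + 1):
        Fr = np.zeros_like(X2)
        for k in range(n - r + 1):
            for j2, j3 in itertools.product(range(2**k), range(2**(n - r - k))):
                Fr += rng.choice([-1.0, 1.0]) * haar_1d(X2, k, j2) * haar_1d(X3, n - r - k, j3)
        total += np.abs(Fr)
    print(n, total.max(), n ** 1.5)                    # ||H_n||_inf by (6.9), next to n^{d/2}
```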
We have chosen to work with these spaces because, for 1 ≤ p < ∞,exponential Orlicz spaces can be considered as intermediate function spaces betweenLp and L∞, and the L∞-lower bounds of signed hyperbolic sums remain conjecturedfor all d ≥ 3 whereas the Lp-lower bounds are well-understood. Also, lower boundsof the discrepancy function have already been studied in exponential Orlicz spacesin dimension d = 2 [6]. We are interested in obtaining analogous lower bounds inthe case of signed hyperbolic sums. In dimension d = 2, we will adjust Temlyakov’sproof of the two-dimensional small ball inequality for signed hyperbolic sums to the89framework of exponential Orlicz spaces, and show that the lower bounds obtained forthe discrepancy function can also be realized for signed hyperbolic sums. In higherdimensions, we will obtain lower bounds of signed hyperbolic sums in exponentialOrlicz spaces under the additional assumption that the coefficients αR are from theset Asplit.Before proving our main results, we first present the background material onOrlicz spaces needed to follow the proofs of the main theorems of this chapter,Theorem 1.6 and Theorem 7.4.7.1 Orlicz spacesFor p ≥ 1, Orlicz spaces are natural generalizations of Lp-spaces. To see this, notethat |x|p is a convex function for p ≥ 1, and a function f is in Lp(X,µ) if∫X|f |p dµ <∞,whereas, roughly speaking, f is in the Orlicz space associated to a convex functionφ : R→ [0,∞) if ∫Xφ(f) dµ <∞.More precisely, let φ : R → [0,∞] be a convex, even function with φ(0) = 0and φ 6≡ 0. Given a finite measure space (X,M, µ), one defines the Orlicz spaceassociated to the function φ asLφ(X,µ) ={f : X → R measurable : ∃a > 0,∫Xφ(af)dµ <∞}.We endow Orlicz spaces with the so-called Luxembourg norm: for any measurablefunction f ∈ Lφ(X,µ), the norm of f is defined by‖f‖φ := inf{t > 0 :∫Xφ(f/t)dµ ≤ 1}.90This is precisely the Minkowski functional associated to the setV ={f measurable :∫Xφ(f)dµ ≤ 1}.Note thatLφ(X,µ) ={f measurable : ‖f‖φ <∞}.Such a function φ is increasing on R+ := [0,∞). Indeed, let 0 ≤ s1 < s2, then theconvexity of φ implies thatφ(s1) ≤ s2 − s1s2φ(0) +s1s2φ(s2) ≤ φ(s2).From the definition we can see that, to understand the composition of Lφ(X,µ),we only need to know how φ(x) grows as |x| → ∞. More specifically, if φ ∼ h as|x| → ∞, i.e.,C1h ≤ φ ≤ C2h, |x| ≥M,for some positive constants C1, C2,M , then the two quantities ‖f‖φ and ‖f‖h areequivalent and we see that it suffices to know the asymptotic behaviour of φ.Orlicz spaces coincide with Lp spaces for certain choices of φ. If φ(x) = |x|p (1 ≤p <∞), then Lφ(X,µ) = Lp(X,µ) and ‖f‖φ = ‖f‖p, and ifφ(x) =0, if |x| ≤ 1∞, otherwise ,then Lφ(X,µ) = L∞(X,µ) and the ‖f‖φ = ‖f‖∞. We will be interested in the Orliczspaces which are associated with functions of the formφa(x) =e|x|a − 1, |x| ≥ 1(e− 1)|x|, otherwise,where a > 0. These spaces are known as exponential Orlicz spaces and are denotedby exp(La).91To prove our main results, we will need an Orlicz space version of Ho¨lder’s in-equality, i.e., given φ as in the definition of Lφ, we want to find C > 0 and a functionφ∗ such that ∫|fg|dµ ≤ C‖f‖φ‖g‖φ∗ , (7.1)for all f ∈ Lφ and g ∈ Lφ∗. Note that this requires that φ∗ is an even convex functionwith φ∗(0) = 0. To prove this generalization of Ho¨lder’s inequality, φ∗ is taken to bethe function conjugate to φ, i.e., φ∗ : R→ [0,∞] is the function defined byφ∗(y) = sup{xy − φ(x) : x ∈ R}. (7.2)It follows from the definition that φ∗ is convex, even and satisfies φ∗(0) = 0. 
To prove our main results, we will need an Orlicz space version of Hölder's inequality, i.e., given $\varphi$ as in the definition of $L_\varphi$, we want to find $C > 0$ and a function $\varphi^*$ such that
\[
\int |fg|\, d\mu \le C\, \|f\|_\varphi \|g\|_{\varphi^*} \qquad (7.1)
\]
for all $f \in L_\varphi$ and $g \in L_{\varphi^*}$. Note that this requires that $\varphi^*$ is an even convex function with $\varphi^*(0) = 0$. To prove this generalization of Hölder's inequality, $\varphi^*$ is taken to be the function conjugate to $\varphi$, i.e., $\varphi^* : \mathbb{R} \to [0,\infty]$ is the function defined by
\[
\varphi^*(y) = \sup\{ xy - \varphi(x) : x \in \mathbb{R} \}. \qquad (7.2)
\]
It follows from the definition that $\varphi^*$ is convex, even and satisfies $\varphi^*(0) = 0$. Another immediate consequence of this definition is Young's inequality:
\[
xy \le \varphi(x) + \varphi^*(y)
\]
for all $x$ and $y$. Young's inequality implies that
\[
|f(x)g(x)| \le \varphi(f(x)) + \varphi^*(g(x)), \qquad \text{for all } x \in X. \qquad (7.3)
\]
We can now see that (7.1) holds for $C = 2$. Indeed, it suffices to show (7.1) for $\|f\|_\varphi = \|g\|_{\varphi^*} = 1$ and, in such a case, we obtain (7.1) simply by integrating both sides of inequality (7.3).

This generalization of Hölder's inequality will be applied to $\varphi_a$ and $\varphi_a^*$ to prove the main results of the chapter. Note that, by convention, the Orlicz spaces associated to $\varphi_a^*$ are denoted by $L(\log L)^{\frac1a}$, and it can be verified that $\varphi_a^*(y) \sim |y|\big(\log(1+|y|)\big)^{\frac1a}$ as $|y| \to \infty$.

The next proposition deals with estimating Orlicz space norms of an indicator function. A detailed proof can be found in [37].

Proposition 7.1. Let $E \in \mathcal{M}$. Then
\[
\frac12\,\mu(E)\,(\varphi_a)^{-1}\Big(\frac{1}{\mu(E)}\Big) \le \|\mathbf{1}_E\|_{\varphi_a^*} \le \mu(E)\,(\varphi_a)^{-1}\Big(\frac{1}{\mu(E)}\Big),
\]
where $(\varphi_a)^{-1}$ is the inverse of $\varphi_a(x)$, $x \ge 0$.

Applying the above proposition to a probability measure gives us that
\[
\|\mathbf{1}_E\|_{L(\log L)^{\frac1a}} \le \mathbb{P}(E)\cdot\big(1 - \log \mathbb{P}(E)\big)^{\frac1a}. \qquad (7.4)
\]
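The bounds in Proposition 7.1 and (7.4) can be checked numerically from the definitions alone. The sketch below (ours, not from the thesis; $\varphi_a^*$ is obtained by a brute-force Legendre transform, and the helper names are ad hoc) evaluates $\|\mathbf{1}_E\|_{\varphi_a^*}$ for a set of measure $2^{-(n+1)}$ and compares it with the two upper bounds.

```python
# A rough numerical check (ours, not from the thesis) of Proposition 7.1 and (7.4): for an
# indicator function, ||1_E||_{phi_a^*} = inf{ t : mu(E) * phi_a^*(1/t) <= 1 }, which we
# compute by bisection once phi_a^* is known; phi_a^* is approximated by a grid Legendre transform.
import numpy as np

def phi_a(x, a):
    x = np.abs(x)
    return np.where(x >= 1.0, np.exp(x ** a) - 1.0, (np.e - 1.0) * x)

def phi_a_conj(y, a, x_max=10.0, num=20001):
    """phi_a^*(y) = sup_x (x*y - phi_a(x)), approximated on a grid of x values."""
    x = np.linspace(0.0, x_max, num)
    return float(np.max(x * y - phi_a(x, a)))

def indicator_conj_norm(mu_E, a):
    """||1_E||_{phi_a^*} for a set of measure mu_E, by bisection on the Luxembourg definition."""
    lo, hi = 0.0, 1.0
    while mu_E * phi_a_conj(1.0 / hi, a) > 1.0:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mu_E * phi_a_conj(1.0 / mid, a) > 1.0:
            lo = mid
        else:
            hi = mid
    return hi

a, n = 2.0, 10
mu_E = 2.0 ** -(n + 1)
exact = indicator_conj_norm(mu_E, a)
upper = mu_E * np.log(1.0 + 1.0 / mu_E) ** (1.0 / a)     # upper bound in Proposition 7.1
crude = mu_E * (1.0 - np.log(mu_E)) ** (1.0 / a)         # the bound recorded in (7.4)
print(exact, upper, crude)   # expect upper/2 <= exact <= upper <= crude
```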
7.2 The proof of Theorem 1.6

Before we proceed to the proof of Theorem 1.6, note that it follows from the definition of exponential Orlicz spaces that, for each $1 < p < \infty$ and $a > 0$, we have continuous embeddings $L^\infty \subset \exp(L^a) \subset L^p$. Therefore,
\[
n^{\frac{d-1}{2}} \lesssim \|H_n\|_2 \lesssim \|H_n\|_{\exp(L^a)} \qquad (7.5)
\]
holds for any choice of coefficients $\{\alpha_R\} \subset \mathcal{A}_{\mathrm{split}}$. In fact, (7.5) holds for any collection $\{\alpha_R\} \subset \{-1,1\}$. We can see that the lower bound in (7.5) is better than the one given in Theorem 1.6 when $a < 2$, but when $2 \le a < \infty$, inequality (1.16) from the statement of Theorem 1.6 gives us an improvement over the lower bound in (7.5) by a factor of $n^{\frac12-\frac1a}$ for any collection of coefficients $\{\alpha_R\} \subset \mathcal{A}_{\mathrm{split}}$. It is for this reason that Theorem 1.6 is stated only for $a \ge 2$. As we will soon see, the proof of Theorem 1.6 works for all $a > 0$.

Proof of Theorem 1.6. From Lemma 6.4 there is a positive function $\psi$ such that $\|\psi\|_{L^1([0,1]^d)} = 1$ and
\[
\int_{[0,1]^d} \psi H_n\, dx \gtrsim n^{\frac d2}. \qquad (7.6)
\]
Also note that this test function can be written as
\[
\psi = 2^{n+1}\,\mathbf{1}_E,
\]
where
\[
E = \big\{ \vec{x} = (x_1, \vec{x}\,') \in [0,1]^d : B_r(x_1)\cdot\mathrm{sgn}(F_r(\vec{x}\,')) = 1 \text{ for all } r = 0,\ldots,n \big\}.
\]
Furthermore, $|E| = 2^{-(n+1)}$ since $\int_{[0,1]^d} \psi\, dx = 1$. Applying Hölder's inequality for Orlicz spaces to (7.6), we get
\[
\|2^{n+1}\mathbf{1}_E\|_{L(\log L)^{\frac1a}}\, \|H_n\|_{\exp(L^a)} \gtrsim n^{\frac d2}. \qquad (7.7)
\]
Combining (7.4) with the fact that $|E| = \frac{1}{2^{n+1}}$, we obtain
\[
\|2^{n+1}\mathbf{1}_E\|_{L(\log L)^{\frac1a}} \le C_d\, 2^{\frac1a}\, n^{\frac1a}, \qquad n > 1. \qquad (7.8)
\]
Now, using (7.8), (7.7) becomes
\[
\|H_n\|_{\exp(L^a)} \ge (C_d)^{-1}\, 2^{-\frac1a}\, n^{\frac d2 - \frac1a},
\]
which completes the proof.

From the proof of Theorem 1.6 we can see that the constant $C(d,a)$ appearing in (1.16) can be taken to be $(C_d)^{-1}2^{-\frac1a}$. Taking the limit of both sides of inequality (1.16) as the scale of integrability $a$ approaches infinity, we recover the signed small ball inequality when $\{\alpha_R\} \subset \mathcal{A}_{\mathrm{split}}$.

It is worth mentioning that bounds of this type have already appeared in the field of irregularities of distribution of points (see [6]). In particular, Bilyk et al. [6] showed that, in $d = 2$, the discrepancy function
\[
D_N(x_1,x_2) := \#\big(\mathcal{P}_N \cap [0,x_1)\times[0,x_2)\big) - N x_1 x_2,
\]
associated to the $N$-point set $\mathcal{P}_N \subset [0,1]^2$, satisfies the following lower bound:
\[
\|D_N\|_{\exp(L^a)} \gtrsim (\log N)^{1-\frac1a}, \qquad 2 \le a < \infty.
\]
They used a duality argument in the setting of exponential Orlicz spaces in their proof which, in fact, provided the motivation for the proof of Theorem 1.6.

One can also use a duality argument in the framework of exponential Orlicz spaces to prove Conjecture 1.5 in $d = 2$. Indeed, recall Temlyakov's test function used in the proof of the two-dimensional small ball inequality,
\[
\Psi = \prod_{k=0}^{n}\big(1 + f_{(k,n-k)}\big) = 2^{n+1}\,\mathbf{1}_E,
\]
where
\[
E = \big\{ \vec{x} \in [0,1]^2 : f_{(k,n-k)}(\vec{x}) = 1 \text{ for all } k \in \{0,1,\ldots,n\} \big\}.
\]
Adapting the proof of the two-dimensional small ball inequality to the signed hyperbolic sum $H_n = \sum_{k=0}^{n} f_{(k,n-k)}$, we have
\[
\langle \Psi, H_n \rangle \gtrsim n.
\]
Applying Hölder's inequality for Orlicz spaces as in the proof of Theorem 1.6, we get
\[
\|H_n\|_{\exp(L^a)} \gtrsim n^{1-\frac1a}, \qquad 2 \le a < \infty.
\]
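The identities driving this two-dimensional duality argument are exact and can be observed numerically. The sketch below (our own illustration, not part of the thesis; helper names are ad hoc) builds Temlyakov's test function for an arbitrary choice of signs and confirms that $\Psi$ takes only the values $0$ and $2^{n+1}$, that $\int \Psi = 1$, and that $\langle \Psi, H_n\rangle = n+1$, which by Hölder's inequality forces $\|H_n\|_\infty \ge n+1$.

```python
# A quick numerical illustration (ours, not from the thesis) of the two-dimensional duality
# argument: for an arbitrary choice of signs alpha_R, the Riesz product
# Psi = prod_k (1 + f_{(k,n-k)}) takes only the values 0 and 2^{n+1}, has integral 1, and
# satisfies <Psi, H_n> = n + 1; Hoelder's inequality then gives ||H_n||_infty >= n + 1.
import numpy as np

rng = np.random.default_rng(1)
n = 8
m = 2 ** (n + 1)
i = np.arange(m)[:, None]
j = np.arange(m)[None, :]

f = []                                   # f[k] = sum over R of shape (k, n-k) of alpha_R h_R
for k in range(n + 1):
    s = (2 * ((i >> (n - k)) & 1) - 1) * (2 * ((j >> k) & 1) - 1)
    alpha = rng.choice([-1, 1], size=(2 ** k, 2 ** (n - k)))
    f.append(alpha[i >> (n + 1 - k), j >> (k + 1)] * s)

H = sum(f)                               # the signed hyperbolic sum H_n, one value per grid cell
Psi = np.ones((m, m))
for k in range(n + 1):
    Psi *= 1 + f[k]

print(np.unique(Psi))                    # [0, 2^{n+1}]
print(Psi.mean())                        # 1.0, i.e. |E| = 2^{-(n+1)}
print((Psi * H).mean(), n + 1)           # <Psi, H_n> equals n + 1 exactly
print(np.abs(H).max())                   # hence ||H_n||_infty >= n + 1
```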
7.3 Study of signed hyperbolic sums

In this section, we are concerned with exploring $L^\infty$- and $\exp(L^a)$-lower bounds of a linear combination of Haar functions with signed coefficients $\alpha_R \in \{-1,1\}$. To motivate our discussion, we first restate the signed small ball conjecture and Conjecture 1.5 in an equivalent form.

Conjecture 7.2. (Restatement of the signed small ball conjecture) There exists a vector $\vec\mu = (\mu_0,\ldots,\mu_n) \in \{-1,1\}^{n+1}$ such that, for any choice of coefficients $\{\beta_R\} \subset \{-1,1\}$ and any integer $n \ge 1$, we have the inequality
\[
\Big\| \sum_{\vec{r}\in\mathbb{H}^d_n} \mu_{r_1} \sum_{R\in\mathcal{R}_{\vec{r}}} \beta_R h_R \Big\|_\infty \gtrsim n^{\frac d2}, \qquad d \ge 3. \qquad (7.9)
\]

By choosing $\mu_{r_1} = 1$, $r_1 = 0,1,\ldots,n$, and $\beta_R = \alpha_R$, $R \in \mathcal{A}^d_n$, it is immediate that the signed small ball conjecture implies (7.9). To get the reverse implication, that (7.9) implies the signed small ball conjecture, simply set $\beta_R = \mu_{r_1}\alpha_R$ for $R = R_1\times R' \in \mathcal{A}^d_n$ with $|R_1| = 2^{-r_1}$, $r_1 = 0,\ldots,n$.

Conjecture 7.3. (Restatement of Conjecture 1.5) There exists a vector $\vec\mu = (\mu_0,\ldots,\mu_n) \in \{-1,1\}^{n+1}$ such that, for any choice of coefficients $\{\beta_R\} \subset \{-1,1\}$ and any integer $n > 1$, we have the inequality
\[
\Big\| \sum_{\vec{r}\in\mathbb{H}^d_n} \mu_{r_1} \sum_{R\in\mathcal{R}_{\vec{r}}} \beta_R h_R \Big\|_{\exp(L^a)} \gtrsim n^{\frac{d-1}{2}+\frac12-\frac1a}, \qquad 2 \le a < \infty, \qquad (7.10)
\]
for all $d \ge 3$.

The equivalence can be obtained by methods similar to those presented before. Using the techniques developed in the previous section, we will show the existence of $(n+1)$-dimensional vectors $\vec\mu$ and $\vec\nu \in \{-1,1\}^{n+1}$ which satisfy inequalities (7.9) and (7.10), respectively, though these vectors may depend on the choice of coefficients $\{\alpha_R\} \subset \{-1,1\}$. Specifically, we show:

Theorem 7.4. For any integer $d \ge 3$ and any choice of coefficients $\{\alpha_R\} \subset \{-1,1\}$, there exist $(n+1)$-dimensional vectors $\vec\mu = (\mu_0,\ldots,\mu_n) \in \{-1,1\}^{n+1}$, $n \ge 1$, and $\vec\nu = (\nu_0,\ldots,\nu_n) \in \{-1,1\}^{n+1}$, $n > 1$, such that
\[
\Big\| \sum_{\vec{r}\in\mathbb{H}^d_n} \mu_{r_1} \sum_{R\in\mathcal{R}_{\vec{r}}} \alpha_R h_R \Big\|_\infty \gtrsim n^{\frac d2}, \qquad (7.11)
\]
and
\[
\Big\| \sum_{\vec{r}\in\mathbb{H}^d_n} \nu_{r_1} \sum_{R\in\mathcal{R}_{\vec{r}}} \alpha_R h_R \Big\|_{\exp(L^a)} \gtrsim n^{\frac{d-1}{2}+\frac12-\frac1a}, \qquad (7.12)
\]
for $2 \le a < \infty$.

This theorem will be proved by employing a duality argument which uses a test function in the form of a Riesz product. Instead of working with the function
\[
\sum_{\vec{r}\in\mathbb{H}^d_n} \mu_{r_1} \sum_{R\in\mathcal{R}_{\vec{r}}} \alpha_R h_R,
\]
we let $\mu_{r_1} : \Omega \to \{-1,1\}$ be iid random variables with $\mathbb{P}(\mu_{r_1} = \pm 1) = \frac12$, $r_1 = 0,1,\ldots,n$, and consider the function
\[
H_{\vec\mu}(\omega, \vec{x}) = \sum_{\vec{r}\in\mathbb{H}^d_n} \mu_{r_1}(\omega) \sum_{R\in\mathcal{R}_{\vec{r}}} \alpha_R h_R(\vec{x})
\]
defined on the probability space $(\Omega\times[0,1)^d,\ \mathbb{P}\times|\cdot|)$. Then, for a particular choice of $\omega \in \Omega$, $H_{\vec\mu}(\omega,\cdot)$ takes the form of the function inside the $L^\infty$- and $\exp(L^a)$-norms in (7.9) and (7.10), respectively. The usage of these random variables combined with (5.33) will be crucial in getting the lower bounds in (7.11) and (7.12).

The idea of introducing random coefficients for proving existence results is not new; as we saw in Chapter 4, it was already used to prove the sharpness of the conjectured lower bound in the small ball inequality for all $d \ge 3$. However, the randomization used in Chapter 4 required each coefficient in the collection $\{\alpha_R\}$ to be an iid $\pm 1$-valued random variable, and there were about $n^{d-1}2^n$ iid random coefficients needed for that proof, whereas the randomization used here is considerably milder, employing just $n+1$ iid random variables.

Just as we proved the lower bound appearing in Theorem 1.3 separately in Lemma 6.4, we will prove the lower bound appearing in Theorem 7.4 in the following lemma.

Lemma 7.5. Let $d \ge 3$ and $n \ge 1$. For each collection of coefficients $\alpha_R \in \{-1,1\}$, there exists a positive test function $\psi_{\vec\mu} \in L^1(\Omega\times[0,1]^d)$ such that
\[
\|\psi_{\vec\mu}\|_{L^1(\Omega\times[0,1]^d)} = 1 \qquad (7.13)
\]
and
\[
\mathbb{E}_{\vec\mu}\mathbb{E}(\psi_{\vec\mu} H_{\vec\mu}) \gtrsim n^{\frac d2}, \qquad (7.14)
\]
where $\mathbb{E}_{\vec\mu}$ is the expected value with respect to the probability measure $\mathbb{P}$ and $\mathbb{E}(\psi_{\vec\mu}H_{\vec\mu}) = \int_{[0,1]^d}\psi_{\vec\mu}H_{\vec\mu}\,dx$ is the expected value with respect to the Lebesgue measure.

Proof. We want to find a test function $\psi_{\vec\mu} \in L^1(\Omega\times[0,1]^d)$ such that
\[
\|\psi_{\vec\mu}\|_{L^1(\Omega\times[0,1]^d)} = 1 \qquad (7.15)
\]
and
\[
\mathbb{E}_{\vec\mu}\mathbb{E}(\psi_{\vec\mu} H_{\vec\mu}) \gtrsim n^{\frac d2}. \qquad (7.16)
\]
Note first that
\[
H_{\vec\mu}(\omega, \vec{x}) = \sum_{r_1=0}^{n} \mu_{r_1}(\omega)\, A_{r_1}(\vec{x}), \qquad \text{where} \qquad A_{r_1} = \sum_{\vec{r}\,'\in\mathbb{H}^{d-1}_{n-r_1}} \sum_{R\in\mathcal{R}_{\vec{r}}} \alpha_R h_R, \qquad \vec{r} = (r_1, \vec{r}\,') \in \mathbb{H}^d_n.
\]
Consider the function $\psi_{\vec\mu}$ defined by
\[
\psi_{\vec\mu}(\omega, \vec{x}) = \prod_{r=0}^{n} \big(1 + \mu_r(\omega)\,\mathrm{sgn}(A_r(\vec{x}))\big), \qquad (7.17)
\]
where we have replaced $r_1$ with $r$ for our convenience. Observe that, for all $(\omega,\vec{x}) \in \Omega\times[0,1]^d$, $\psi_{\vec\mu}(\omega,\vec{x}) \ge 0$, since each factor in the product is nonnegative. Expanding the product in (7.17), we get
\[
\psi_{\vec\mu} = 1 + \psi_1 + \psi_2, \qquad (7.18)
\]
with
\[
\psi_1(\omega, \vec{x}) = \sum_{r=0}^{n} \mu_r(\omega)\,\mathrm{sgn}(A_r(\vec{x})) \qquad (7.19)
\]
and
\[
\psi_2(\omega, \vec{x}) = \sum_{k=2}^{n+1} \sum_{0\le s_1<\cdots<s_k\le n} \prod_{j=1}^{k} \mu_{s_j}(\omega)\,\mathrm{sgn}(A_{s_j}(\vec{x})). \qquad (7.20)
\]
Hence,
\[
\|\psi_{\vec\mu}\|_{L^1(\Omega\times[0,1]^d)} = \mathbb{E}\mathbb{E}_{\vec\mu}\psi_{\vec\mu} = 1 + \mathbb{E}\mathbb{E}_{\vec\mu}\psi_1 + \mathbb{E}\mathbb{E}_{\vec\mu}\psi_2.
\]
Therefore, to complete the proof of (7.15), it suffices to show that, for every $\vec{x} \in [0,1]^d$,
\[
\mathbb{E}_{\vec\mu}\psi_1 = \mathbb{E}_{\vec\mu}\psi_2 = 0,
\]
but this follows from the fact that $\{\mu_r\}_{r=0}^{n}$ are iid $\pm 1$-valued random variables.

Now, we will show that $\psi_{\vec\mu}$ satisfies (7.16). Using (7.18), we have
\[
\mathbb{E}_{\vec\mu}\mathbb{E}(\psi_{\vec\mu}H_{\vec\mu}) = \mathbb{E}_{\vec\mu}\mathbb{E}H_{\vec\mu} + \mathbb{E}_{\vec\mu}\mathbb{E}(\psi_1 H_{\vec\mu}) + \mathbb{E}_{\vec\mu}\mathbb{E}(\psi_2 H_{\vec\mu}), \qquad (7.21)
\]
and, since $\mathbb{E}_{\vec\mu}(\mu_r) = 0$, $r = 0,\ldots,n$, we get that $\mathbb{E}_{\vec\mu}H_{\vec\mu} = 0$ for every $\vec{x} \in [0,1]^d$. Therefore, it suffices to show that
\[
\mathbb{E}_{\vec\mu}\mathbb{E}(\psi_1 H_{\vec\mu}) \gtrsim n^{\frac d2} \qquad (7.22)
\]
and
\[
\mathbb{E}_{\vec\mu}\mathbb{E}(\psi_2 H_{\vec\mu}) = 0. \qquad (7.23)
\]
Appealing to the fact that $\{\mu_r\}_{r=0}^{n}$ are iid random variables and using (5.33), we obtain (7.22) as follows:
\[
\mathbb{E}_{\vec\mu}\mathbb{E}(\psi_1 H_{\vec\mu}) = \mathbb{E}\sum_{r,r'=0}^{n} \mathrm{sgn}(A_r)\,A_{r'}\,\mathbb{E}_{\vec\mu}(\mu_r\mu_{r'}) = \sum_{r=0}^{n} \|A_r\|_{L^1([0,1]^d)} \gtrsim \sum_{r=0}^{n} \|A_r\|_{L^2([0,1]^d)} \gtrsim \sum_{r=0}^{n} (n-r)^{\frac{d-2}{2}} \gtrsim n^{\frac d2}. \qquad (7.24)
\]
Now, using the definition of the functions $\psi_2$ and $H_{\vec\mu}$, and, again, the fact that $\{\mu_r\}_{r=0}^{n}$ are iid random variables, it is clear that we have
\[
\mathbb{E}_{\vec\mu}\mathbb{E}(\psi_2 H_{\vec\mu}) = \mathbb{E}\sum_{r=0}^{n}\sum_{k=2}^{n+1}\sum_{0\le s_1<\cdots<s_k\le n} A_r\prod_{j=1}^{k}\mathrm{sgn}(A_{s_j})\,\mathbb{E}_{\vec\mu}\Big(\mu_r\prod_{j=1}^{k}\mu_{s_j}\Big) = 0, \qquad (7.25)
\]
which gives (7.23) and completes the proof.
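Before turning to the proof of Theorem 7.4, we note that the two identities established above, $\mathbb{E}_{\vec\mu}\mathbb{E}\psi_{\vec\mu} = 1$ and $\mathbb{E}_{\vec\mu}\mathbb{E}(\psi_1 H_{\vec\mu}) = \sum_r \|A_r\|_1$, can be verified by exact enumeration in small cases. The following sketch (ours, not from the thesis; helper names are ad hoc) does this for $d = 3$ and $n = 4$ by averaging over all $2^{n+1}$ sign vectors $\vec\mu$.

```python
# A small exact check (ours, not from the thesis) of the identities behind Lemma 7.5, in the
# computable case d = 3, n = 4: averaging over all 2^{n+1} sign vectors mu and over the dyadic
# grid, we verify E_mu E psi_mu = 1 and E_mu E(psi_mu H_mu) = sum_r ||A_r||_{L^1}.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 4
m = 2 ** (n + 1)                                   # grid cells per coordinate
idx = [np.arange(m).reshape([-1 if t == s else 1 for t in range(3)]) for s in range(3)]

def haar_layer(shape):
    """sum_{R of the given shape} alpha_R h_R on the grid, for random signs alpha_R."""
    signs = 1
    alpha_index = []
    for s, r in enumerate(shape):
        signs = signs * (2 * ((idx[s] >> (n - r)) & 1) - 1)
        alpha_index.append(idx[s] >> (n + 1 - r))
    alpha = rng.choice([-1, 1], size=tuple(2 ** r for r in shape))
    return alpha[tuple(alpha_index)] * signs

# A[r1] = sum over all shapes (r1, r2, r3) with r1 + r2 + r3 = n; H_mu = sum_r mu_r A[r]
A = [sum(haar_layer((r1, r2, n - r1 - r2)) for r2 in range(n - r1 + 1)) for r1 in range(n + 1)]

mass, pairing = 0.0, 0.0
for mu in itertools.product([-1, 1], repeat=n + 1):           # exact average over mu
    psi = np.ones_like(A[0], dtype=float)
    H = np.zeros_like(A[0], dtype=float)
    for r in range(n + 1):
        psi = psi * (1 + mu[r] * np.sign(A[r]))
        H = H + mu[r] * A[r]
    mass += psi.mean() / 2 ** (n + 1)
    pairing += (psi * H).mean() / 2 ** (n + 1)

rhs = sum(np.abs(A[r]).mean() for r in range(n + 1))          # sum_r ||A_r||_{L^1}
print(mass, pairing, rhs)   # expect mass = 1 and pairing = rhs (up to rounding)
```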
Proof of Theorem 7.4. Applying Hölder's inequality to (7.16), we get
\[
\|H_{\vec\mu}\|_{L^\infty(\Omega\times[0,1]^d)} \gtrsim n^{\frac d2}.
\]
Therefore, there exists $\omega_0 \in \Omega$ such that $\|H_{\vec\mu}(\omega_0,\cdot)\|_{L^\infty([0,1]^d)} \gtrsim n^{\frac d2}$, and we see that (7.11) holds.

To prove (7.12), we note first that the test function can also be written as $\psi_{\vec\mu} = 2^{n+1}\,\mathbf{1}_E$, where
\[
E = \big\{ (\omega,\vec{x}) \in \Omega\times[0,1]^d : \mu_r(\omega)\cdot\mathrm{sgn}\,A_r(\vec{x}) = 1 \text{ for all } r = 0,\ldots,n \big\}.
\]
Since $\mathbb{E}_{\vec\mu}\mathbb{E}\psi_{\vec\mu} = 1$, it is clear that $(\mathbb{P}\times|\cdot|)(E) = 2^{-(n+1)}$. In addition, from (7.4) we have that $\|\psi_{\vec\mu}\|_{L(\log L)^{\frac1a}} \le 2^{\frac1a}n^{\frac1a}$. Now, applying Hölder's inequality for Orlicz spaces gives
\[
\|H_{\vec\mu}\|_{\exp(L^a_{\omega,\vec{x}})} \ge C(d,a)\, n^{\frac d2 - \frac1a}, \qquad (7.26)
\]
where $C(d,a)$ is a positive constant depending on the dimension $d$ and the scale of integrability $a$. Here, $\|\cdot\|_{\exp(L^a_{\omega,\vec{x}})}$ denotes the exponential Orlicz space norm with respect to the measure $\mathbb{P}\times|\cdot|$. We claim that there exists $\omega_0 \in \Omega$ such that $\|H_{\vec\mu}(\omega_0,\cdot)\|_{\exp(L^a)} \ge \frac12 C(d,a)\,n^{\frac d2-\frac1a}$, where $\|\cdot\|_{\exp(L^a)}$ denotes the exponential Orlicz space norm with respect to Lebesgue measure only. Indeed, using the definition of the exponential Orlicz space norm, (7.26) implies that
\[
\int_\Omega\int_{[0,1]^d} \varphi_a\Big(\frac{H_{\vec\mu}}{2^{-1}C(d,a)\,n^{\frac d2-\frac1a}}\Big)\, d\mathbb{P}\, dx > 1.
\]
This shows that there exists $\omega_0 \in \Omega$ such that
\[
\int_{[0,1]^d} \varphi_a\Big(\frac{H_{\vec\mu}(\omega_0,\cdot)}{2^{-1}C(d,a)\,n^{\frac d2-\frac1a}}\Big)\, dx > 1,
\]
so $\|H_{\vec\mu}(\omega_0,\cdot)\|_{\exp(L^a)} \ge \frac12 C(d,a)\,n^{\frac d2-\frac1a}$, which completes the proof of Theorem 7.4.

Chapter 8

Conclusions and future projects

In Chapter 6 we gave two proofs of the signed small ball inequality under the additional assumption that the coefficients are in $\mathcal{A}_{\mathrm{split}}$. Imposing the splitting property allowed us to exploit the independence of the random variables $\{\sum_{|R_1|=2^{-r_1}} \alpha_{R_1} h_{R_1},\ r_1 = 0,\ldots,n\}$ defined on the probability space $([0,1), |\cdot|)$. This, together with the Littlewood-Paley inequalities for Haar functions, enabled us to show that the signed small ball inequality holds in all dimensions $d \ge 3$. In particular, it was shown that
\[
\|H_n\|_\infty \gtrsim \sum_{r_1=0}^{n} \|F_{r_1}\|_1 \gtrsim \sum_{r_1=0}^{n} \|F_{r_1}\|_2 \gtrsim \sum_{r_1=0}^{n} (n-r_1)^{\frac{d-2}{2}} \gtrsim n^{\frac d2}.
\]

It is natural to wonder if the signed small ball inequality might also hold for coefficients which satisfy a different splitting property. That is, consider the collection of coefficients
\[
\mathcal{A}'_{\mathrm{split}} = \big\{ \alpha_R : R \in \mathcal{A}^d_n,\ \alpha_R = \alpha_{R_1\times R_2}\cdot\alpha_{R_3\times\cdots\times R_d} \text{ with } \alpha_{R_1\times R_2},\, \alpha_{R_3\times\cdots\times R_d} \in \{-1,1\} \big\}.
\]

Problem 1: Given any $n \ge 1$ and $d \ge 3$, does the signed small ball inequality hold for every collection of coefficients $\{\alpha_R\} \subset \mathcal{A}'_{\mathrm{split}}$?

The new splitting property elevates the complexity of the problem. The paths which were taken in the proofs of Theorem 1.3 cannot be followed here since, in this case, the corresponding random variables generated by the new splitting property,
\[
\sum_{|R'|=2^{-r_1-r_2}} \alpha_{R'} h_{R'}, \qquad r_1 \in \{0,\ldots,n\},\ r_2 \in \{0,\ldots,n-r_1\},
\]
with $R' = R_1\times R_2$, are not independent. This can be seen in the following way. First, observe that the signed small ball inequality is sharp when the coefficients are restricted to the set $\mathcal{A}'_{\mathrm{split}}$. In other words, there exist a constant $C_d > 0$ and coefficients $\{\alpha_R\} \subset \mathcal{A}'_{\mathrm{split}}$ such that
\[
\Big\| \sum_{|R|=2^{-n}} \alpha_{R_1\times R_2}\,\alpha_{R_3\times\cdots\times R_d}\, h_R \Big\|_\infty \le C_d\, n^{\frac d2}. \qquad (8.1)
\]
This can be shown using the same argument as that given in Theorem 1.4. If the aforementioned random variables were independent, then, as in the proof of Theorem 1.3, we would get that
\[
\|H_n\|_\infty \ge \sum_{r_1=0}^{n}\sum_{r_2=0}^{n-r_1} \|F_{r_1,r_2}\|_{L^1([0,1]^{d-2})}, \qquad (8.2)
\]
where $F_{r_1,r_2} = \sum_{|R''|=2^{-(n-r_1-r_2)}} \alpha_{R''} h_{R''}$ with $R'' = R_3\times\cdots\times R_d$. Since Proposition 5.6 is also applicable to $F_{r_1,r_2}$, we have $\|F_{r_1,r_2}\|_1 \gtrsim \|F_{r_1,r_2}\|_2$, and this together with (8.2) shows that, for any collection of coefficients $\{\alpha_R\} \subset \mathcal{A}'_{\mathrm{split}}$,
\[
\|H_n\|_\infty \ge \sum_{r_1=0}^{n}\sum_{r_2=0}^{n-r_1} \|F_{r_1,r_2}\|_{L^2([0,1]^{d-2})} \gtrsim \sum_{r_1=0}^{n}\sum_{r_2=0}^{n-r_1} (n-r_1-r_2)^{\frac{d-3}{2}} \gtrsim n^{\frac{d+1}{2}}.
\]
However, the above inequality contradicts the validity of (8.1).

As shown in Remark 6.3, the techniques employed in the proofs of Theorem 1.3 can be used to show that the $d$-dimensional small ball inequality holds for a special family of coefficients $\{\alpha_{R_1}\cdot\alpha_{R_2\times\cdots\times R_d}\}$ with $\alpha_{R_1} \in \{\pm 1\}$ and $c_1 \le |\alpha_{R_2\times\cdots\times R_d}| \le c_2$, for given positive constants $c_1$ and $c_2$. If we weaken the above condition so that we have only an upper bound or only a lower bound, one might wonder if (1.8) still holds.
That is, let
\[
\mathcal{B}_{\mathrm{split}} = \big\{ \alpha_R : R \in \mathcal{A}^d_n,\ \alpha_R = \alpha_{R_1}\cdot\alpha_{R_2\times\cdots\times R_d} \text{ with } \alpha_{R_1} \in \{\pm 1\} \big\}.
\]

Problem 2: Does the small ball inequality hold for all $\alpha_R \in \mathcal{B}_{\mathrm{split}}$ which satisfy $0 < c \le |\alpha_{R_2\times\cdots\times R_d}|$, or for all $\alpha_R \in \mathcal{B}_{\mathrm{split}}$ which satisfy $|\alpha_{R_2\times\cdots\times R_d}| \le c$?

Under this weaker restriction on the coefficients, it is not known whether the $L^p$-norms of such hyperbolic sums are comparable, as this cannot be deduced as a direct consequence of Remark 5.7.

It would also be very interesting to know whether the conjectured lower bound of signed hyperbolic sums in exponential Orlicz spaces described in (1.15) is optimal.

Problem 3: If $d \ge 2$ and $n > 1$, is there a positive constant $C(d,a)$ and coefficients $\{\alpha_R\} \subset \{-1,1\}$ for which
\[
\Big\| \sum_{|R|=2^{-n}} \alpha_R h_R \Big\|_{\exp(L^a)} \le C(d,a)\, n^{\frac{d-1}{2}+\frac12-\frac1a}, \qquad 2 \le a < \infty\,? \qquad (8.3)
\]

We would be able to prove the above inequality if we knew that, for any collection of coefficients $\{\alpha_R\} \subset \{-1,1\}$ and $d \ge 2$,
\[
\Big\| \sum_{|R|=2^{-n}} \alpha_R h_R \Big\|_{\exp(L^2)} \le C_d\, n^{\frac{d-1}{2}}. \qquad (8.4)
\]
Indeed, first recall the useful interpolation inequality which says that, for any $0 < b < a < \infty$, we have
\[
\|H_n\|_{\exp(L^a)} \lesssim \|H_n\|_{\exp(L^b)}^{\frac ba}\, \|H_n\|_\infty^{1-\frac ba},
\]
where $H_n = \sum_{|R|=2^{-n}} \alpha_R h_R$. The proof of the above interpolation inequality can be found in [9]. From Section 4.2, we know that there exists a collection of coefficients $\{\alpha_R\} \subset \{-1,1\}$ such that $\|H_n\|_\infty \le C_d\, n^{\frac d2}$. For this collection of coefficients, using the interpolation inequality for $a \ge 2$ and $b = 2$ together with (8.4), we get (8.3). However, (8.4) is not known in general. From [3] we know that, for any collection of coefficients $\{\alpha_R\} \subset \{-1,1\}$, we have $\|H_n\|_{\exp(L^b)} \lesssim n^{\frac{d-1}{2}}$, where $b = \frac{2}{d-1}$, and combining this with the interpolation inequality only gives
\[
\Big\| \sum_{|R|=2^{-n}} \alpha_R h_R \Big\|_{\exp(L^a)} \le C(d,a)\, n^{d-1-\frac1a}, \qquad 2 \le a < \infty. \qquad (8.5)
\]
Note that for $d = 2$ inequalities (8.3) and (8.5) coincide, and this shows that the lower bound in (1.15) is optimal when $d = 2$.

Bibliography

[1] T. W. Anderson, The integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities, Proc. Amer. Math. Soc., 6 (1955), no. 2, 170-176.

[2] J. Beck, A two-dimensional van Aardenne-Ehrenfest theorem in irregularities of distribution, Compositio Math., 72 (1989), no. 3, 269-339.

[3] D. Bilyk, M. Lacey, On the small ball inequality in three dimensions, Duke Math. J., 143 (2008), no. 1, 81-115.

[4] D. Bilyk, M. Lacey and A. Vagharshakyan, On the small ball inequality in all dimensions, J. Funct. Anal., 254 (2008), no. 9, 2470-2505.

[5] D. Bilyk, M. Lacey and A. Vagharshakyan, On the signed small ball inequality, Online J. Anal. Comb., 3 (2008), Art. 6, 7 pp.

[6] D. Bilyk, M. Lacey, I. Parissis and A. Vagharshakyan, Exponential squared integrability of the discrepancy function in two dimensions, Mathematika, 55 (2009), no. 1-2, 1-27.

[7] D. Bilyk, M. Lacey, I. Parissis and A. Vagharshakyan, A three-dimensional small ball inequality, Dependence in probability, analysis and number theory, 73-87, Kendrick Press, Heber City, UT (2010).

[8] D. Bilyk, On Roth's orthogonal function method in discrepancy theory, Unif. Distrib. Theory, 6 (2011), no. 1, 143-184.

[9] D. Bilyk, L. Markhasin, BMO and exponential Orlicz space estimates of the discrepancy function in arbitrary dimension, available at http://arxiv.org/abs/1411.5794.

[10] D. Bilyk, Roth's orthogonal function method in discrepancy theory and some new connections, A panorama of discrepancy theory, 71-158, Lecture Notes in Math., 2107, Springer-Verlag, Cham (2014).

[11] D. Bilyk, N. Feldheim, The two-dimensional small ball inequality and binary nets, available at www.arxiv.org/pdf/1511.07326.pdf.
[12] D. L. Burkholder, Martingale transforms, Ann. Math. Statist., 37 (1966), no. 6, 1494-1504.

[13] D. L. Burkholder, Sharp inequalities for martingales and stochastic integrals, Astérisque, 157-158 (1988), 75-94.

[14] W. W. L. Chen, On irregularities of distribution, Mathematika, 27 (1981), no. 2, 153-170.

[15] W. W. L. Chen, M. M. Skriganov, Explicit constructions in the classical mean squares problem in the irregularities of point distribution, J. Reine Angew. Math., 545 (2002), 67-95.

[16] W. W. L. Chen, M. M. Skriganov, Davenport's theorem in the theory of irregularities of point distribution, J. Math. Sci. (N. Y.), 115 (2003), no. 1, 2076-2084.

[17] E. Csáki, On small values of the square integral of a multiparameter Wiener process, Statistics and probability (Visegrád, 1982), 19-26, Reidel, Dordrecht (1984).

[18] T. Dunker, T. Kühn, M. Lifshits, W. Linde, Metric entropy of the integration operator and small ball probabilities for the Brownian sheet, C. R. Acad. Sci. Paris Sér. I Math., 326 (1998), no. 3, 347-352.

[19] W. Feller, An Introduction to Probability Theory and its Applications, Wiley, New York (1966).

[20] K. K. Frolov, Upper bounds on the discrepancy in Lp, 2 ≤ p < ∞, Dokl. Akad. Nauk SSSR, 252 (1980), 805-807.

[21] L. Grafakos, Classical Fourier Analysis, Third edition, Graduate Texts in Mathematics, Springer, New York, 2014. ISBN: 978-1-4939-1193-6; 978-1-4939-1194-3.

[22] U. Haagerup, The best constants in Khintchine's inequality, Studia Math., 70 (1981), no. 3, 231-283.

[23] J. H. Halton, On the efficiency of certain quasirandom sequences of points in evaluating multidimensional integrals, Numer. Math., 2 (1960), no. 1, 84-90.

[24] J. M. Hammersley, Monte Carlo methods for solving multivariable problems, Ann. New York Acad. Sci., 86 (1960), 844-874.

[25] G. Halász, On Roth's method in the theory of irregularities of point distributions, Recent progress in analytic number theory, Vol. 2, 79-94, Academic Press, London (1981).

[26] A. Hinrichs, L. Markhasin, On lower bounds for the L2-discrepancy, Journal of Complexity, 27 (2011), no. 2, 127-132.

[27] E. Hlawka, Funktionen von beschränkter Variation in der Theorie der Gleichverteilung, Ann. Mat. Pura Appl., 54 (1961), no. 1, 325-333.

[28] W. Hoeffding, Probability inequalities for sums of bounded random variables, J. Amer. Statist. Assoc., 58 (1963), no. 301, 13-30.

[29] D. Karslidis, On the signed small ball inequality with restricted coefficients, Indiana Univ. Math. J., to appear, available at http://www.iumj.indiana.edu/IUMJ/forthcoming.php.

[30] D. Karslidis, Orlicz spaces bounds for special classes of hyperbolic sums, submitted.

[31] J. F. Koksma, Een algemeene stelling uit de theorie der gelijkmatige verdeeling modulo 1, Mathematica B, 11 (1943), 7-11.

[32] A. N. Kolmogorov, V. M. Tikhomirov, ε-entropy and ε-capacity of sets in functional spaces, Uspekhi Matem. Nauk, 14 (1959), no. 2, 3-86.

[33] J. Kuelbs, W. V. Li, Metric entropy and the small ball problem for Gaussian measures, J. Funct. Anal., 116 (1993), no. 1, 133-157.

[34] M. Lacey, Small Ball and Discrepancy Inequalities (2008), available at www.arxiv.org/pdf/math/0609816v1.pdf.

[35] P. Lévy, Problèmes concrets d'analyse fonctionnelle, Gauthier-Villars (1951).

[36] G. G. Lorentz, Metric entropy and approximation, Bull. Amer. Math. Soc., 72 (1966), 903-937.

[37] M. M. Rao, Z. D. Ren, Theory of Orlicz Spaces, Pure and Applied Mathematics, Marcel Dekker, Inc., New York-Basel-Hong Kong, 1991. ISBN: 0-8247-8478-2.

[38] S. I. Resnick, A Probability Path, Birkhäuser Boston, Inc., Boston, MA, 1999. ISBN: 0-8176-4055-X.
[39] K. F. Roth, On irregularities of distribution, Mathematika, 1 (1954), no. 2, 73-79.

[40] K. F. Roth, On irregularities of distribution III, Acta Arith., 35 (1979), no. 4, 373-384.

[41] K. F. Roth, On irregularities of distribution IV, Acta Arith., 37 (1980), no. 1, 67-75.

[42] W. M. Schmidt, Irregularities of distribution VI, Compositio Math., 24 (1972), no. 1, 63-74.

[43] W. M. Schmidt, Irregularities of distribution X, Number theory and algebra, 311-329, Academic Press, New York (1977).

[44] S. Sidon, Verallgemeinerung eines Satzes über die absolute Konvergenz von Fourierreihen mit Lücken, Math. Ann., 97 (1927), 675-676.

[45] S. Sidon, Ein Satz über trigonometrische Polynome mit Lücken und seine Anwendung in der Theorie der Fourierreihen, J. Reine Angew. Math., 163 (1930), 251-252.

[46] M. M. Skriganov, Harmonic analysis on totally disconnected groups and irregularities of point distributions, J. Reine Angew. Math., 600 (2006), 25-49.

[47] M. Talagrand, The small ball problem for the Brownian sheet, Ann. Probab., 22 (1994), no. 3, 1331-1354.

[48] C. Thiele, Lectures on Real Analysis, Bonn 2010/UCLA 2012, available at math.ucla.edu/~thiele/lecturenotes/bonnlectures/lecture4.pdf.

[49] V. N. Temlyakov, Some inequalities for multivariate Haar polynomials, East Journal on Approximations, 1 (1995), no. 1, 61-72.

[50] V. N. Temlyakov, An inequality for trigonometric polynomials and its applications for estimating the entropy numbers, Journal of Complexity, 11 (1995), no. 2, 293-307.

[51] A. Vagharshakyan, Lower bounds for L1 discrepancy, Mathematika, 59 (2013), no. 2, 365-379.

[52] G. Wang, Sharp square-function inequalities for conditionally symmetric martingales, Trans. Amer. Math. Soc., 328 (1991), no. 1, 393-419.

[53] R. M. G. Young, On the best possible constants in the Khintchine inequality, J. London Math. Soc. (2), 14 (1976), no. 3, 496-504.
