SORTING REAL NUMBERS ON RANDOM ACCESS MACHINES

By Daniel S. Blumenthal
B.S. (Computer Science) Syracuse University

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, COMPUTER SCIENCE

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
September 1995
© Daniel S. Blumenthal, 1995

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Computer Science
The University of British Columbia
2366 Main Mall
Vancouver, Canada V6T 1Z4

Date:

Abstract

This work describes a family, Segment-Sort, of algorithms for rapid sequential sorting of real numbers. Two computational models are discussed which correspond to the two main types of Segment-Sort algorithm: deterministic and random. With the deterministic model, the Basic RAM, it is possible to sort input populations randomly chosen from a broad class of common probability distributions in "space" (number of memory words, each able to hold a real number) and average time both linear in the number of real numbers given as input. Included among these distributions are a variety of types containing singularities, unbounded oscillations and points of actual nonzero probability (atoms). With the second model, the Random RAM, one may sort n arbitrarily chosen distinct real numbers in O(n) operations using only O(n) memory words on average.
Except for random integer selection on the Random RAM, both models are confined to simple binary arithmetic. The power of both models appears to stem largely from the combination of left and right shifting operations.

Table of Contents

Abstract ii
Table of Contents iii
Acknowledgement v
1 Introduction 1
1.1 Earlier Work on the Sorting Problem 1
1.2 Terminology and Notational Conventions 2
1.3 Computational Models 3
1.4 The Power of Bidirectional Shifting 4
1.5 Derived Constant-Time Operations 6
1.6 Random Variables Bounded by Input Size and the "Logarithm Limit Rule" 6
2 Fast Deterministic Sorting 8
2.1 Linear-Time Deterministic Sorting 8
2.1.1 Double-Bucketing and Linear-Time Performance 8
2.1.2 The Basic Algorithm 9
2.1.3 Performance of Segment-Sort1 13
2.1.4 Resolving Singularities in Linear Expected Time 18
2.2 Additional Improvements on the Comparison-Based Time Complexity for Sorting 29
2.2.1 Infinite Collections of Atoms 29
2.2.2 Arbitrary Isolated P.D.F. Discontinuities 33
3 Random Sorting of Distinct Real Numbers in Linear Expected Time 35
3.1 The Random-Segment-Sort Algorithm 36
3.2 Analysis of Random-Segment-Sort 39
Bibliography 45

Acknowledgement

Firstly, I thank my supervisor, Dr. Nicholas Pippenger, for helping me at every stage in the conceptual development and writing of this thesis. I thank NSERC for its financial support of this project. I also wish to thank Dr. Alan Wagner, whose guidance and recommendations have been of great assistance in making this a presentable document. In addition, I thank Dr. David Kirkpatrick for personally aiding me in the development of important ideas relating to the latter parts of this work.

Chapter 1

Introduction

1.1 Earlier Work on the Sorting Problem

The problem of sorting items into order by rank or numerical value is one that has been given a great deal of attention, particularly by computer scientists.
A wide variety of methods and models for sorting have been developed. The most widely used and heavily studied of these models are the comparison-based models, in which it is assumed that the only primitive operation available for obtaining information about the relative rankings of elements in a collection of ranked elements is the comparison operation, which takes two elements as operands and reports whether the first element is less than, equal to, or greater than the second. It has been shown that under this assumption, given n real numbers to sort, any possible sorting algorithm will have to perform Ω(n log₂ n) such comparison operations, on average, to put the numbers in order [Knu73].

There are, however, other models for sorting which do not depend solely on comparisons, in which it is possible to sort n real numbers with fewer than O(n log₂ n) primitive instructions, all within the realm of simple arithmetic and random choice. Of particular interest among the algorithms supported by such models are the distributive sorting algorithms (e.g., bucket-sort as described in [CLR90]). The principal algorithms analyzed here fall within this class.

1.2 Terminology and Notational Conventions

In keeping with standard terminology, the number, t, of primitive operations executed by a procedure, P, for any given input S is indicated by saying that P requires or runs in "time" t on input S. Note that when I refer to a quantity, x(n), as being "O(n)," I mean that there is a fixed real number k, independent of all variable quantities, such that for all sufficiently large n: |x(n)| ≤ kn. The precise meaning of "sufficiently large" in this context may, however, be dependent on variable quantities (such as an underlying probability distribution from which inputs are randomly drawn, where n is the number of inputs).
Furthermore, in referring to a quantity, x(n), as being "o(n)," I allow the bounding function implicit in the "little oh" notation to depend, similarly, on variable quantities such as underlying distributions. Also, when I refer to a sort-routine as "standard," I mean that it is a comparison-based sort-routine (e.g., heapsort) which, given n ranked elements as input, will sort them in O(n log₂ n) time in the worst possible case, using O(n) extra memory words.

Present in the text as well are some more mathematical references whose precise meaning may not be apparent. Among these are references to "resolving" singularities and other probability distributions. Generally, by saying that a particular sorting algorithm (A) "resolves" a particular distribution (D) in O(f(n)) time, I mean that the expected time for the sorting by A of n elements randomly drawn from D is O(f(n)). In connection with probability distributions, I also refer to a probability density function as a p.d.f., and to the cumulative distribution function corresponding to any probability distribution as a c.d.f. I also refer to real values taken on by a random variable that have nonzero probabilities of occurrence as "atoms." In a corresponding manner, I refer to buckets (i.e., subsets of the input generated in a bucket-sort [CLR90] or other bucketing algorithm) whose corresponding bucket-intervals contain zero, one, or multiple atoms as "nonatomic," "monatomic," and "multi-atom" buckets, respectively.

Finally, to allow the analysis to proceed at a higher level, I also observe the following conventions. Typically, the primary quantities under discussion will be mathematical (i.e., numbers rather than memory locations or the binary string representations of numbers therein).
Objects to which primitive instructions and other procedural steps are applied will commonly be treated as numbers where the intended meaning is clear (e.g., addition will be discussed in a mathematical context although the actual Add instruction described below, by nature, must operate on memory words). To denote the number of elements in a set or object (S) made up of multiple elements, I use the symbol |S|, consisting of the symbol for the object itself surrounded by vertical bars. I also use surrounding vertical bars to represent the absolute value of a number, allowing the meaning of the bars in each case to be indicated by context.

1.3 Computational Models

Both the Basic RAM and the Random RAM have memory locations that can hold any real numbers. Indirect addressing is assumed on both models (i.e., memory locations passed as arguments may be computed values). The Basic RAM's primitive instruction set is included in that of the Random RAM. Eight operations are regarded as primitive on the Basic RAM. These are:

1. Copy(a,b): Copy the contents of memory location a into memory location b. Both a and b are permitted to be any of the active memory words or any contiguous external file segment.

2. Add(a,b): Add real numbers a and b, and return their sum.

3. Negate(a): Return the additive inverse of real number a.

4. L-Shift(a,j): Return 2^j · a (where a is a real number and j is an integer).

5. R-Shift(a,j): Return 2^(−j) · a (where a is a real number and j is an integer).

6. Floor(a): Return the greatest integer less than or equal to the real number a.

7. Compare(a,b,{L1, L2, L3}): Return the comparative relationship of a to b (where a and b are real numbers). That is, if a < b then return the result "less than," if a = b then return the result "equal," and if a > b then return the result "greater than."
If "less than" is returned, jump (i.e., transfer the flow of program control) to the instruction immediately following location L1 in the program's code. If "equal" is returned, jump to the instruction immediately following L2, and if "greater than" is returned, jump to that immediately following L3.

8. Point-Normalize(a): Return the greatest integer less than or equal to log₂(a).

The Random RAM is the same as the Basic RAM except for having in its primitive instruction set the following additional instruction:

Random(i,j): Return an integer randomly chosen from among those between integers i and j inclusive, with each integer in this range having an equal chance of being chosen.

1.4 The Power of Bidirectional Shifting

The majority of "realistic" computational models which are commonly regarded as sequential, strictly speaking, do things in parallel, since any time one performs a binary operation on two data that are longer than one bit each — like comparing two floating-point numbers — and counts it as a unit operation, all the bits of the two data are implicitly accessed in parallel. The degree of implicit parallelism required reaches an extreme in the case of real number sorting on a digital machine, as even to perform basic actions like comparing two real numbers in finitely many unit steps, one may have to access any number of bits (in the case of two equal irrationals, a literally infinite number of bits) in parallel. Consequently, it is generally not a simple matter to classify a machine that handles real numbers as being strictly sequential, and typically, such classifications must take individual problems' contexts into account. It is, perhaps, more instructive to analyze the power of a machine model relative to other models in the context of a specific problem.
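The arithmetic primitives of Section 1.3 can be sketched concretely as ordinary functions. The following is a Python analogue of mine, not part of the thesis; Compare's jump targets are omitted, since Python has no gotos, and Point-Normalize is computed exactly from the binary exponent rather than by a floating-point logarithm.

```python
import math

# Sketch of the Basic RAM's arithmetic primitives (assumed Python analogues).

def add(a, b):           # Add(a,b): return the sum of a and b
    return a + b

def negate(a):           # Negate(a): additive inverse
    return -a

def l_shift(a, j):       # L-Shift(a,j): 2**j * a
    return math.ldexp(a, j)

def r_shift(a, j):       # R-Shift(a,j): 2**(-j) * a
    return math.ldexp(a, -j)

def floor_op(a):         # Floor(a): greatest integer <= a
    return math.floor(a)

def compare(a, b):       # Compare(a,b): -1, 0, or 1 instead of a jump
    return (a > b) - (a < b)

def point_normalize(a):  # Point-Normalize(a): floor(log2(a)), for a > 0
    m, e = math.frexp(a)     # a = m * 2**e with 0.5 <= m < 1
    return e - 1             # so floor(log2(a)) = e - 1, computed exactly
```

Note that `math.frexp` avoids the rounding hazard of `math.floor(math.log2(a))` near exact powers of two.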
The primitive operation sets of the two machine models presented here, particularly that of the Random RAM (as we shall see in the third chapter), though limited to simple arithmetic (and random integer selection), confer a good deal of power in the context of real number sorting. In particular, the combination of the two shift instructions (L-Shift and R-Shift) appears to be quite powerful. In fact, it has been proven that on a machine whose primitive instructions are Add, Negate, Compare, L-Shift, R-Shift [and Floor] [KR84], it is possible to sort an arbitrary collection of integers in worst-case linear time, but on a machine with only the Add, Negate and Compare instructions, the worst-case complexity for sorting n arbitrarily chosen integers is Ω(n log₂ n) [PS]. It is interesting to note that even when the L-Shift instruction (without R-Shift) is added to these three instructions, the resulting machine model still requires Ω(n log₂ n) time to sort n integers in the worst case [Rei]. Since the results presented here for the Random RAM depend on its ability to sort arbitrary integer-valued input sets in worst-case linear time, it is important when considering these results to keep in mind the power of the two shift instructions combined.

1.5 Derived Constant-Time Operations

There are two particularly convenient operations, Binary-Floor and Binary-Ceiling, which, though not defined as primitive themselves for either the Basic or the Random RAM, may be implemented in constant time using the Basic RAM's primitive operations. Binary-Floor(x) takes as input a positive real number x and returns 2^⌊log₂ x⌋, while Binary-Ceiling(x) takes as input a positive real number, x, and returns 2^⌈log₂ x⌉. It is clear that the exponent ⌊log₂ x⌋ can be computed by a single Point-Normalize operation, and raising 2 to the power of the resultant value amounts to a single shift. A slight modification yields the Binary-Ceiling.
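A minimal Python sketch (mine, not the thesis's) of how these two derived operations might look, assuming the Point-Normalize-plus-shift construction just described:

```python
import math

def point_normalize(a):
    """floor(log2(a)) for a > 0, computed exactly from the binary exponent."""
    m, e = math.frexp(a)          # a = m * 2**e with 0.5 <= m < 1
    return e - 1

def binary_floor(x):
    """Binary-Floor(x) = 2**floor(log2(x)): one Point-Normalize, one shift."""
    return math.ldexp(1.0, point_normalize(x))

def binary_ceiling(x):
    """Binary-Ceiling(x) = 2**ceil(log2(x)).

    The "slight modification": leave exact powers of two unchanged and
    shift one place further otherwise."""
    p = point_normalize(x)
    if math.ldexp(1.0, p) == x:   # x is already an exact power of two
        return x
    return math.ldexp(1.0, p + 1)
```

For example, `binary_floor(5.0)` gives `4.0` and `binary_ceiling(5.0)` gives `8.0`, each in a constant number of the primitive operations above.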
These operations are helpful in proving a number of basic results.

1.6 Random Variables Bounded by Input Size and the "Logarithm Limit Rule"

This section introduces a useful theorem, and a corollary thereof which is applied later on in the text. The corollary, in particular, allows easy establishment, within certain algorithms, of O(n) expected time complexity for sorting the contents of a bucket when its corresponding bucket-interval lies adjacent to a singularity under certain constraints.

Theorem 1 Let X be any random variable whose value is at most n, and let g be any nondecreasing function that assigns a nonnegative value to every nonnegative integer. Then E[X g(X)] ≤ g(n) E[X].

Proof: X is always bounded above by n. Since g is nondecreasing as well, it therefore follows that g(X) ≤ g(n), so that X g(X) ≤ X g(n) (always). Therefore, E[X g(X)] ≤ E[X g(n)] = g(n) E[X].

Since the sorting algorithms used as subroutines at the lowest level by the procedures described herein are standard sort-routines, the application of the preceding theorem to the case where g(x) = log₂(x) has particular relevance. Hence the following corollary:

Corollary 2 (The "Logarithm Limit Rule") Let n and X be as defined in Theorem 1. Then E[X log₂ X] ≤ log₂(n) E[X].

Proof: Just take g(x) = log₂(x) in Theorem 1.

Chapter 2

Fast Deterministic Sorting

2.1 Linear-Time Deterministic Sorting

2.1.1 Double-Bucketing and Linear-Time Performance

The analysis for deterministic linear-time results applies an adaptation of a theorem (Theorem 1.5 in [Dev86]) according to which the following holds: Let f be any probability density function with nonzero values confined to [0,1] for which ∫ f²(x) dx < ∞. Then there is a distribution-independent positive constant k such that there exists a positive integer N, dependent upon f, such that for every integer n ≥ N and every random selection, S, of n elements from f, S may be bucket-sorted in expected time less than kn.
This result is proven by reference to the "double-bucketing" algorithm, which plays a fundamental role in some of the deterministic Segment-Sort algorithms. Theorem 1.5 in [Dev86] entails an assumption of bottom-level quadratic-time sorting within buckets. Here, this theorem is adapted to apply to the case where a standard sort-routine is used at the bottom level, rather than a quadratic-time sort-routine. The distribution-independent nature of double-bucketing's utility was observed independently by Tamminen in [Tam85]. The idea of double-bucketing itself is much older (see [Mac66]).

2.1.2 The Basic Algorithm

Below, a naive Segment-Sort algorithm (Segment-Sort1) is described in a pseudocode-like format. For Segment-Sort1, it is assumed only that the input numbers are randomly chosen from some probability distribution with support confined to the interval [0,1] and made up of two subdistributions or parts, (A) and (B), where part (A) is a probability density function, f, multiplied by some positive real constant c ≤ 1 with ∫ f(t) log₂ f(t) dt < ∞; and part (B) is a discrete distribution whose domain is a finite set of real numbers to each of which a nonzero probability corresponds. A probability distribution of this type may be defined precisely with respect to its cumulative distribution function (c.d.f.) by saying that its c.d.f. is the sum of two parts, (Ac) and (Bc), where: (1) there is a function f — zero-valued outside [0,1] — such that ∫ f(t) log₂ f(t) dt < ∞, and for any real x, Ac(x) = ∫₀ˣ f(t) dt; and (2) Bc is a step function of finitely many steps which rises from zero to its highest step within the interval [0,1]. Either part (A) or part (B) may be empty (i.e., either Ac(x) or Bc(x) may be equal to zero for all x). I refer to this type of probability distribution as an AB-distribution.
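As an illustration only (the mixture weight, atom values, and the uniform choice for part (A) below are my own, not the thesis's), an AB-distribution can be sampled by first deciding whether a draw comes from the density part (A) or the atomic part (B):

```python
import random

def sample_ab(n, c=0.7, atoms=((0.25, 0.1), (0.5, 0.2)), seed=None):
    """Draw n samples from a toy AB-distribution.

    Part (A) is c times the uniform density on [0, 1]; part (B) is the
    listed (value, probability) atoms. c plus the atom probabilities
    must sum to 1 for the distribution to be well defined."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random()
        if u < c:                    # part (A): the continuous component
            out.append(rng.random())
        else:                        # part (B): walk the atom list
            u -= c
            for value, p in atoms:
                if u < p:
                    out.append(value)
                    break
                u -= p
    return out
```

In a sample of a few thousand such draws, the atoms at 0.25 and 0.5 each recur many times, while the part (A) values are (almost surely) all distinct, which is exactly the structure Segment-Sort1's Atom-Handler is designed to detect.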
Let Sn be a population of n numbers randomly chosen from an arbitrary AB-distribution, and let t(Sn) be the time that Segment-Sort1 requires to sort Sn. I prove here that for all sufficiently large n, E[t(Sn)] ≤ kn, where k is a constant independent of the distribution from which the numbers are chosen.

Segment-Sort1 (Input_File, Output_File)

(1) Read the input from Input_File into an input array (In), and count the number (n) of input elements. Step (1) requires O(n) time.

(2) Load the input elements (In[1..n]) into buckets according to their values as in the first stage of an ordinary bucket sort [CLR90], except for using Binary-Ceiling(n) instead of n buckets. For each input element q, the bucket that receives element In[q] will be bucket number ⌊Binary-Ceiling(n) · In[q]⌋ + 1. Also count the elements that fall into each bucket and record the location of the bucket. Step (2) requires O(n) time.

(3) Apply steps (1) and (2) as in a recursive descent (but without exceeding a recursion depth of two) successively to each of the buckets formed in the preceding step that contains more than a certain fixed threshold number, E1, of elements, applying a "bottom-level" sort-routine to the other primary (i.e., initially formed) buckets as in a standard bucket sort. This "double-bucketing" [Dev86] will yield a (possibly empty) collection of secondary buckets within any of the primary buckets that contains more than E1 elements. Step (3) requires O(n) time.

(4) To each of these secondary buckets in turn, apply the Atom-Handler algorithm described below — passing as parameters the bucket and the number of its elements. When this procedure has been applied to each of the secondary buckets in each of the primary buckets, the end result will be a complete ordering of the input. (See the description of the Atom-Handler routine below for a time-complexity analysis of this step of Segment-Sort1.)

(5) Let a terminal bucket be any bucket that has thus far (i.e., by the end of step (4)) been formed but not further subdivided. Read the sorted elements in order (from the first element in the first terminal bucket to the last element in the last terminal bucket) into Output_File. This final step of Segment-Sort1 requires O(n) time.

Atom-Handler (<size of bucket (B)>, <(location of) bucket B>)

(1) If |B| (i.e., the size, or number of elements in B) exceeds a certain threshold number, E2, of elements, then load the subpopulation, S, consisting of B's first ⌊√|B|⌋ elements into an array; otherwise, sort the contents of B using a standard sort-routine and terminate this algorithm. Clearly, step (1) requires O(√|B|) time (per execution) in the worst case.

(2) Use a standard sort-routine to sort S. This requires O(√|B| log₂ |B|) time.

(3) Apply the Find-Atom-Value subroutine (described below) to the (now sorted) sample S. This takes O(|S|) time. Since |S| = ⌊√|B|⌋, this step takes O(√|B|) time.

(4) If an atom-value (rather than the value "nonatomic") has been returned in the preceding step, create three sub-buckets for the elements of B: the first for elements whose values are less than the atom-value ("low" elements), the third for elements whose values exceed the atom-value ("high" elements), and the middle bucket for elements that have precisely the atom-value ("atomic" elements). Establish a pointer to the location of each of these sub-buckets. Only three buckets (if any) are created in this step, so it takes O(1) time.

(5) If three buckets were created in the preceding step (step (4)), read the elements of B, comparing the value of each to the atom-value returned in step (3), and placing the low elements, atomic elements and high elements in their respective sub-buckets created
in step (4). This requires O(|B|) time.

(6) If B has been partitioned into sub-buckets, independently sort the elements in the buckets for high and low elements using a standard sort-routine. The atomic elements are all identical in value and in a common bucket, so they're already sorted in this case. If B has not been subpartitioned, simply use a standard sort-routine to sort all of its elements. The amount of time required by this step is analyzed in the following section of this paper.

(7) Return the pointers created in step (4). This obviously takes constant time.

Find-Atom-Value (<finite nondecreasing sequence of ranked elements (S)>)

(1) Letting the current element value begin as that of the first element of S, set a counter (CT) equal to one, a "current mode count" register (CMC) equal to zero, and a "current mode value" register (CMV) equal to the current element value. This, of course, takes constant time.

(2) Read S from beginning to end. While doing so, increment CT by one at each reading of an element that equals the previous element. At each reading of an element unequal to the previous element, if CT > CMC, set CMC equal to the value of CT and set CMV equal to the value of the run that just ended (i.e., the previous element's value); then reset CT to 1. When the end of S is reached, perform this same comparison-and-update once more, so that a longest run ending at the last element is also recorded.

(3) If CMC ≤ |S|/2, then return the value "nonatomic" (i.e., no elements derived from an atom were detected). Otherwise, return the value in CMV as the atom-value.

By this process, the CMC register will end up containing the size of the largest subset of elements of S that have a common value, and the CMV register will contain that common value. Step (2) clearly takes O(|S|) time. Thus this whole subroutine takes O(|S|) time per execution.

2.1.3 Performance of Segment-Sort1

Let f be an arbitrary AB-distribution, and let Sn be a population of n elements randomly chosen from the distribution f.
As indicated above, part (A) of any AB-distribution is a probability density function, g, scaled by a proper fraction, such that g(x) log₂ g(x) integrates to a finite value, and part (B) of any AB-distribution is a collection of atoms. Let Tn = t(Sn) be the expected time taken by Segment-Sort1 to sort Sn. I prove here that there exists a positive real constant k — not dependent upon f — such that for all sufficiently large n, Tn ≤ kn; i.e., that:

Tn = O(n). (1)

Let Un be the expected total time taken by step (4) of Segment-Sort1 in processing Sn. Each of the first three steps and the fifth step of Segment-Sort1 requires O(n) time (where n is the number of input elements), so Tn = Un + O(n). Thus it will be sufficient to show that there exists a positive real constant k — not dependent upon f — such that if n is sufficiently large, Un ≤ kn; i.e., that:

Un = O(n). (2)

The first three steps of Segment-Sort1 effect a double-bucketing [Dev86] of the input. Step (4) consists in applying the Atom-Handler routine to each of the buckets formed in this double-bucketing process, and therefore, the total fraction of Segment-Sort1's
If a given secondary bucket which the Atom-Handler routine subpartitions contains elements derived from more than one atom, it w i l l not — as described above — succeed in separating the elements of that bucket which part (B) of / is responsible for from those which part (A) is responsible for. How-ever, by Lemma 3 (below), i f n is sufficiently large, no such "multi-atom" secondary buckets w i l l be passed to the Atom-Handler routine. Thus the Atom-Handler routine effectively separates those input elements for which part (A) is responsible from those for which part (B) is responsible. Once the atomic elements of a "monatomic" secondary bucket have been deposited in their own tertiary bucket, they — being al l equal in value — require no further sorting, and al l that remains for Atom-Handler to do with the secondary bucket's elements is the final sorting of those elements derived from part (A) of / . Lemma 3 If n is sufficiently large, no multi-atom buckets will be passed to the Atom-Handler routine during the sorting of Sn by Segment-Sortl. Proof: Chapter 2. Fast Deterministic Sorting 15 Since the number of distinct atoms in the probability distribution / is a finite number, there exists a minimal distance, d, between any two consecutive atoms of / . The distri-bution / is an AB-dis t r ibut ion , so it is effectively confined to the interval [0,1] . Clearly, if we take N — [1/^1, when n > N, no primary bucket wi l l be wide enough to encompass two atoms. The Expected Running T ime of the Atom-Handler Routine B y Lemma 3, if n is sufficiently large, not a single multi-atom bucket gets passed to the Atom-Handler routine. Consequently, I assume, for the purposes of this analysis, that every bucket passed to the Atom-Handler routine is either monatomic or nonatomic. 
Let B0, then, be a (monatomic or nonatomic) bucket created as an end product of the double-bucketing which occurs in steps (2) and (3) of Segment-Sort1 and passed as input to the Atom-Handler routine in step (4) of Segment-Sort1; and let |B0| be the number of elements in B0. The Atom-Handler routine itself consists of seven steps. With B0 as the input bucket under consideration, the following condition holds:

Condition 1: the first four steps and the seventh step of Atom-Handler individually, and thus collectively, require O(|B0|) time per execution.

The two remaining steps, (5) and (6), do the bulk of Atom-Handler's work. In step (5) (which is done iff B0 is monatomic), the elements of B0, as described above, are placed in one of three buckets accordingly as their values are less than, equal to, or greater than that of B0's atom. This obviously requires only a single reading, comparison and writing for each element of B0, which implies the following condition:

Condition 2: step (5) of Atom-Handler requires O(|B0|) time per execution.
whose square integrates to a finite number — in 0(n) expected time. If, in the analysis given there, for each terminal bucket, B , the assumed bottom-level quadratic-time sort-routine is replaced by a standard sort-routine, the time-complexity of | B | 2 for the final sorting of B is correspondingly replaced by a time complexity of 0(\ B | log 2 | B |). If a corresponding analysis is then carried through under this new assumption about bottom-level sorting, the result is that an elementary double-bucketing sort-routine using a standard sort-routine at the lowest level can sort n elements — randomly selected from an arbitrary p.d.f. g for which g(x) l og 2 g(x) integrates to a finite number — in O(n) expected time. The secondary buckets passed to the Atom-Handler routine in step (4) of Segment-Sortl are the end products of an elementary double-bucketing, and part (A) of / is a p.d.f. (g) for which g(x) log 2 g(x) integrates to a finite number. Therefore, if Atom-Handler were to receive as inputs only those net portions of its actual input buckets deriving from part (A) of / , and proceed to sort these net portions without further partit ioning — using a standard sort-routine — it would completely sort al l these contributions from part (A) of / in an expected time no greater than kn where k is a distribution-independent constant. The buckets that get sorted in step (6) of Atom-Handler are formed by a further Chapter 2. Fast Deterministic Sorting 17 partit ioning of the net portions derived from part (A) of / of those secondary buckets received by Atom-Handler as input — and are thus smaller than these net portions. Their collective sorting (by a standard sort-routine) w i l l therefore require at most twice as much time as that of these net portions derived from part (A) . 
Let Vn be the expected total time taken by step (6) of the Atom-Handler routine over the full course of Segment-Sor t l ' s execution in processing the population Sn of n elements randomly chosen from the AB-dis t r ibu t ion / . Then by Theorem 1.5 in [Dev86]: Vn = 0(n). ' (3) Let Wn be the expected total time taken by the first four steps and the seventh step of Atom-Handler over the full course of Segment-Sortl 's execution in processing Sn. Then it follows immediately from Condition 1 that: Wn = 0(n). (4) Let Xn be the expected total time taken by the fifth step of Atom-Handler over the full course of Segment-Sortl 's execution in processing Sn. Then by Condition 2: Xn = 0{n).. (5) It follows from Condition 0 that Un = Vn + Wn + Xn. This together wi th equations (3), (4) and (5) gives us equation (2). Final ly, equation (1) follows since we also have Tn = Un + 0(n). This completes the proof that Segment-Sortl w i l l sort random selections from an arbitrary AB-dis t r ibu t ion in distribution-independent linear expected time. Chapter 2. Fast Deterministic Sorting 18 2.1.4 Resolving Singularities in Linear Expected Time Ordinary Double-Bucketing It is known [DK81] that for any probability density function, / , i f ordinary single-bucketing wi th a bottom-level standard sort-routine is used to sort a collection, Sn, of n elements randomly drawn from / , then the expected time taken by such an algorithm to sort Sn w i l l be asymptotically bounded above by some function of the form M n ( l o g 2 n) where M is real, i f and only i f /. f(x)\og2f(x)dx < oo. In this section, it is shown that, unlike single bucketing, double-bucketing wi l l yield such expected time performance in certain cases where the corresponding integral diverges, i.e. there is a probability density function, / , such that f(x) l og 2 f(x)dx = oo. 
but (nonetheless) if ordinary double-bucketing with a bottom-level standard sort-routine is applied to a collection of n elements randomly drawn from f, then the expected time taken to sort the n elements will be less than Mn log₂ n for some positive real constant M. Let N be the set of all nonnegative integers (i.e., N = {0, 1, 2, ...}). Let a_f = 1/((1/2) + ln 2). Now, let f : R → R be defined by the equations:

f(x) = a_f / (x (log₂ x)²) if 0 < x ≤ 1/2,
f(x) = a_f if 1/2 < x ≤ 1, and
f(x) = 0 otherwise.

Then f is nonnegative real-valued everywhere, and (as shown below) f integrates to unity, so f is a probability density function. Finally, for any Riemann-integrable nonnegative real-valued function g on R such that ∫_R g(x) dx is finite, I define the probability distribution of g's shape, g_p, as the unique probability density function for which there is a constant c such that for every real number x, g_p(x) = c·g(x). The following theorem is a formal statement of the fact that double-bucketing, unlike single-bucketing, yields linear expected time sorting of data randomly drawn from f.

Theorem 4 A population of n elements randomly drawn from the probability density function f: (1) can be sorted in O(n) expected time by ordinary double-bucketing with a standard sort-routine applied to the final buckets thereby generated; and (2) cannot be sorted in O(n) expected time by ordinary single-bucketing with a standard sort-routine applied to the final buckets thereby generated.

Proof: The second assertion of this theorem follows immediately from Devroye and Klincsek's results [DK81], one consequence of which is that: For any probability density function f : R → R with bounded support, if ∫_R f(x) log₂ f(x) dx diverges then, given a random selection of n elements from f as input, the expected running time of a single-bucketing algorithm using a standard sort-routine at the lowest level is ω(n).
Let n be any positive integer and let S_n be a population of n input elements randomly drawn from f. A standard sort-routine runs in Θ(k log₂ k) expected time on an input set of size k, so let h(x) be any real-valued function on R⁺ which is Θ(x log₂ x). If single-bucketing with a "bottom-level" sort-routine that runs in h(k) expected time on an input set of size k is applied to S_n, the expected time required for this single-bucketing sort will be ω(n) if ∫_R f(x) log₂ f(x) dx diverges. Since h(x) is Θ(x log₂ x), this happens iff ∫_R h(f(x)) dx diverges. ∫_R h(f(x)) dx does indeed diverge, hence this theorem's second assertion. Now (to prove the theorem's first assertion), suppose that ordinary double-bucketing with a standard sort-routine applied to the final buckets is used to sort S_n. To begin with, let I = ∫₀^(1/2) f(x) dx, and for every ordered pair (b, c) such that 0 < b < c ≤ 1/2, let I_(b,c) = ∫_b^c f(x) dx. For any such b and c:

I_(b,c) = ∫_b^c a_f / (x (log₂ x)²) dx = a_f (ln 2)² [(1/ln b) − (1/ln c)].

Holding c fixed while decreasing b, we find that (1/ln b) − (1/ln c) approaches −1/ln c as b approaches zero; therefore,

∫₀^c a_f / (x (log₂ x)²) dx = lim_(b→0) I_(b,c) = −a_f (ln 2)² / ln c.

Therefore

I = −a_f (ln 2)² / ln(1/2) = a_f ln 2,

so

∫_R f(x) dx = I + ∫_(1/2)^1 f(x) dx = a_f ln 2 + a_f/2 = (ln 2 + 1/2) / ((1/2) + ln 2) = 1,

which shows that f is, indeed, a p.d.f. Now, consider the number of elements of S_n expected to fall into the primary bucket-interval nearest to zero when ordinary single-bucketing (with (0, 1] regarded as the full domain of possible input values) is applied to S_n. This bucket-interval (I_1) is (0, 1/n], the first subinterval formed by partitioning f's effective domain ((0, 1]) into n segments of equal width.
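The normalization just computed can also be checked numerically. The sketch below (assuming the definition of f above) integrates f on a log-spaced grid and adds the analytically known tail of the integral below the cutoff b:

```python
import math

a_f = 1.0 / (0.5 + math.log(2))  # the normalizing constant defined above

def f(x):
    """The density defined above: a_f/(x (log2 x)^2) on (0, 1/2], a_f on (1/2, 1]."""
    if 0.0 < x <= 0.5:
        return a_f / (x * math.log2(x) ** 2)
    if 0.5 < x <= 1.0:
        return a_f
    return 0.0

# Quadrature on a log-spaced grid over [b, 1]; by the antiderivative above,
# the tail of the integral below b equals a_f (ln 2)^2 / (-ln b), and is
# added analytically.
b, N = 1e-9, 200_000
lo = math.log(b)
total = 0.0
for i in range(N):
    u0 = lo * (1 - i / N)          # log-spaced partition of [b, 1]
    u1 = lo * (1 - (i + 1) / N)
    xm = math.exp((u0 + u1) / 2)   # geometric midpoint of the cell
    total += f(xm) * (math.exp(u1) - math.exp(u0))
tail = a_f * math.log(2) ** 2 / (-math.log(b))
print(abs(total + tail - 1.0) < 1e-3)  # → True
```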
The proportion of the elements of S_n expected to fall into I_1 will therefore equal

∫₀^(1/n) f(x) dx = lim_(b→0) I_(b,1/n) = [1/((1/2) + ln 2)] (ln 2)² [−1/ln(1/n)] = [1/((1/2) + ln 2)] (ln 2)² (1/ln n),

provided that n ≥ 2. Therefore, the actual number of elements of S_n expected to fall into I_1 approaches [(ln 2)²/((1/2) + ln 2)] (n/ln n) (= Θ(n/ln n)) as n approaches infinity. Consequently, by the Logarithm Limit Rule, the expected time taken to sort the contents of the bucket corresponding to I_1 using a standard sort-routine will be O((n/ln n) log₂(n/ln n)) = O(n). Now, for each integer j such that 1 ≤ j ≤ ⌊n/2⌋, let I_j be the jth primary bucket-interval in the partition of (0, 1] induced when single-bucketing is applied to S_n (i.e., the subinterval ((j−1)/n, j/n]). The function f is monotonically decreasing and positive-valued on (0, 1/2], so for every j such that 1 < j ≤ ⌊n/2⌋, 0 < f(j/n) < f((j−1)/n). In particular,

f(j/n) = [1/((1/2) + ln 2)] (n/j) [1/log₂(j/n)]²

and

f((j−1)/n) = [1/((1/2) + ln 2)] (n/(j−1)) [1/log₂((j−1)/n)]²,

so

1 < f((j−1)/n)/f(j/n) = [j/(j−1)] [log₂(j/n) / log₂((j−1)/n)]² ≤ j/(j−1) ≤ 2 (for j ≥ 2).

That is, the value of f at the left end of any given primary bucket-interval beyond the first (up to primary interval number ⌊n/2⌋) is between one and two times as great as the value of f at the right end of the same interval. Since f is monotonic on (0, 1/2], it follows that the value of f at any point in a given primary bucket-interval from the second to the ⌊n/2⌋th is between one and two times as great as the value of f at the right end of the same interval.
Therefore the following condition holds:

Condition 3: Given ordinary double-bucketing of a random selection of n elements from f, each primary bucket-interval from the second to the ⌊n/2⌋th inclusive will be the domain of some restriction, r, of f having so moderate a peakedness [Dev86] that a random selection of k elements from the probability distribution of r's shape can be sorted in O(k) expected time by a single (additional) phase of bucketing.

Since the proportion of elements expected to fall into primary bucket-interval number ⌊n/2⌋ + 1 approaches zero as n approaches infinity, we also have the following simple fact:

Condition 4: Primary bucket-interval number ⌊n/2⌋ + 1 may be neglected in this time-complexity analysis.

Finally, since f is constant on each primary bucket-interval beyond number ⌊n/2⌋ + 1, the following condition holds:

Condition 5: Each primary bucket-interval beyond number ⌊n/2⌋ + 1 is the domain of a uniform subdistribution (i.e., constant-valued restriction), r_c, of f which is such that a random selection of k elements from the probability distribution of r_c's shape can be sorted in O(k) expected time by a single additional phase of bucketing.

If the single-bucketing responsible for generating the primary buckets corresponding to I_1, I_2, ... is just the first phase of an ordinary double-bucketing procedure, every primary bucket containing sufficiently many elements will be partitioned into secondary buckets. As indicated above, the contents of I_1 may be sorted in O(n) time by a standard sort-routine alone, so in the case of an ordinary double-bucketing procedure where a standard sort-routine is used to sort the final buckets, the total time devoted to the sorting of I_1's contents is O(n).
Moreover, it follows from Conditions 3, 4 and 5 that in this case, the total expected time required to sort all those elements that fall into primary buckets whose corresponding bucket-intervals lie beyond I_1 will be O(n). Therefore, the total expected time required to sort all elements of S_n via ordinary double-bucketing is O(n) + O(n) = O(n).

Double-Bucketing with Shrinking Bucket-Intervals

Some probability densities have singularities too severe to be resolved by double-bucketing in linear time when the bucket-intervals designated in each application of the inherent single-bucketing algorithm are of equal width. In many such cases it is still possible to sort a random selection of n elements from the given probability density function in O(n) time by choosing bucket-intervals whose widths contract in the direction of each singularity (within a suitable neighborhood of it) at an appropriate rate. To sort in linear time by applying such a technique, it is, of course, necessary either to know in advance the location of every singularity that warrants a sequence of contracting buckets or to locate all such singularities (with adequate precision) in O(n) time. This section focuses on resolving a target singularity once its location is already known, and assumes, without loss of generality, that this target singularity lies at zero. Since it is possible to have a singularity whose c.d.f. (F(x)) steepens arbitrarily quickly as x approaches zero, it may be necessary and sufficient (for linear-time resolution of the singularity by the kind of technique mentioned in the preceding paragraph) to assign a sequence of bucket-intervals that narrow as an arbitrarily fast-growing function of the sequence index. For example, a sequence of bucket-interval widths such as 1/2, 1/2², 1/2^(2²), 1/2^(2^(2²)), ... may be appropriate; or even a sequence whose jth term is less than 1/A(j, j), where "A" stands for Ackermann's function.
This section illustrates the use of bucket-interval contraction with a simple example: briefly outlining a linear-time technique for partitioning an input set into buckets whose corresponding intervals shrink in accordance with the geometric series {2^(−j)}; defining two probability density functions with singularities that can be resolved in linear time by a form ("EDB-Sort") of double-bucketing with such geometric contraction in the first bucketing phase, but are not amenable to linear-time resolution by ordinary double-bucketing; and then proving that EDB-Sort resolves any member of S_E in linear expected time but that sorting by ordinary double-bucketing does not. Given an input population of n real numbers, S_n, belonging to the interval (0, 1], EDB-Sort first partitions these numbers into primary buckets by placing all input numbers belonging to (2^(−j), 2^(1−j)] into the bucket whose index is j, for 1 ≤ j ≤ n − 1 (and placing all numbers belonging to (0, 2^(1−n)] in the bucket indexed by n). Except for this first phase of bucketing, EDB-Sort is identical to a sort by ordinary double-bucketing. Beyond its first bucketing phase, it has a second bucketing phase in which each primary bucket B of size |B|, where |B| is sufficiently large, is partitioned into a set of |B| secondary buckets whose intervals are of equal width. Then a bottom-level phase of sorting by a standard sort-routine is applied, in which the contents of each nonempty terminal bucket are sorted, and, finally, the elements are output, fully ordered, to a storage device. Determining which primary bucket a given input element x belongs to requires only the (constant-time) computation of Negate(Point-Normalize(Binary-Floor(x))). The value of this expression is the index of the primary bucket into which x should be placed.
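The constant-time primary-bucket computation can be mimicked with floating-point exponent extraction. The sketch below uses Python's math.frexp in place of the RAM primitives Binary-Floor, Point-Normalize and Negate; the boundary handling for exact powers of two is an assumption of the sketch.

```python
import math

def edb_primary_index(x, n):
    """Primary-bucket index for x in (0, 1] under EDB-Sort's geometric
    partition: bucket j holds (2^-j, 2^(1-j)] for j < n, and bucket n
    holds everything at or below 2^(1-n)."""
    # frexp(x) = (m, e) with x = m * 2**e and 0.5 <= m < 1, so for
    # x strictly between 2^-j and 2^(1-j) we get e = 1 - j.
    m, e = math.frexp(x)
    j = 1 - e
    if m == 0.5:       # x is an exact power of two: it is the closed
        j += 1         # right endpoint of the *next* interval
    return min(j, n)   # everything small enough falls into bucket n

print(edb_primary_index(0.75, 10))  # in (1/2, 1]   → 1
print(edb_primary_index(0.5, 10))   # in (1/4, 1/2] → 2
print(edb_primary_index(1e-9, 10))  # below 2^-9    → 10
```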
Thus, EDB-Sort takes O(n) time provided that the actual net time devoted to the bottom-level brute-force sorting is O(n). The two probability densities defined below (as stated formally in the next theorem) are resolvable in linear expected time by EDB-Sort but not by ordinary double-bucketing:

(1) Let g : R → R be defined by the equations:

g(x) = 1 / (x (−log₂ x)^(3/2)) if 0 < x ≤ 1/2,
g(x) = 1 if 1/2 < x ≤ 1, and
g(x) = 0 otherwise.

Let g_p be the probability distribution of g's shape. Then

∫_R g(x) dx = 1/2 + ∫₀^(1/2) dx / (x (−log₂ x)^(3/2))
= 1/2 + (ln 2)^(3/2) [lim_(a→0) ∫_(ln 2)^(−ln a) u^(−3/2) du]
= 1/2 + 2 (ln 2)^(3/2) ((ln 2)^(−1/2) − 0)
= (1/2) + 2 ln 2.

Therefore g_p = [1/((1/2) + 2 ln 2)] g.

(2) Let h : R → R be defined by the equations:

h(x) = 1 / (x (−log₂ x) [log₂(−log₂ x)]^(5/2)) if 0 < x ≤ 1/4,
h(x) = 1 if 1/4 < x ≤ 1, and
h(x) = 0 otherwise.

Let h_p be the probability distribution of h's shape. Then

∫_R h(x) dx = 3/4 + ∫₀^(1/4) dx / (x (−log₂ x) [log₂(−log₂ x)]^(5/2))
= 3/4 + (ln 2)^(7/2) ∫₀^(1/4) dx / (x (−ln x) [ln(−ln x) − ln(ln 2)]^(5/2))
= 3/4 + (ln 2)^(7/2) [lim_(a→0) ∫_(ln(ln 4)−ln(ln 2))^(ln(−ln a)−ln(ln 2)) u^(−5/2) du]
= (3/4) + [2 (ln 2)^(7/2) / 3] [ln(ln 4) − ln(ln 2)]^(−3/2)
= (3/4) + (2/3)(ln 2)².

Therefore h_p = [1/((3/4) + (2/3)(ln 2)²)] h.

Theorem 5 For any input size n, a population of n elements randomly drawn from either g_p or h_p: (1) can be sorted in O(n) expected time by double-bucketing with geometrically shrinking primary bucket-intervals and a standard sort-routine applied to the final buckets thereby generated (i.e., by EDB-Sort); and (2) cannot be sorted in O(n) expected time by either ordinary single-bucketing or ordinary double-bucketing with a standard sort-routine applied to the final buckets thereby generated.
Proof: Consider first the probability density function g_p. Let a_g = 1/((1/2) + 2 ln 2). Then, on the interval (0, 1/2], g_p(x) = a_g / [x (−log₂ x)^(3/2)], which is clearly monotonically decreasing on this interval. Therefore, for any a and b such that 0 < a < b ≤ 1/2, the ratio of 1/a to 1/b will be at least as great as the ratio of g_p(a) to g_p(b). When geometrically contracting bucket-intervals are used as indicated in the description of EDB-Sort above, the ratio of the reciprocal of the lower endpoint of the smaller to that of the lower endpoint of the larger of any two consecutive bucket-intervals will be two. It therefore follows that the ratio of maximum to minimum probability density for g_p attained within any of the primary bucket-intervals lying between zero and 1/2 is at most two. From this it follows (by an argument analogous to that which applied Conditions 3 through 5 of the preceding section) that in the case of an input population of n elements randomly drawn from g_p, the contents of all EDB-Sort-generated primary buckets beyond the first may be sorted in a total of O(n) expected time by a second phase of bucketing followed by bottom-level standard sorting. The first primary bucket-interval is (0, 2^(1−n)]; if one integrates g_p from zero to 2^(1−n), multiplies by n, and then multiplies the resulting product by its logarithm to get an expression for the expected time for standard sorting of its contents, one finds that EDB-Sort will process the contents of the first bucket in O(n) expected time as well. (The integral of g_p from zero to 2^(1−n) is Θ(n^(−1/2)), so the expected number of elements in the first bucket is Θ(n^(1/2)) and the standard-sort time for it is Θ(n^(1/2) log₂ n) = O(n).) Therefore, the total expected time for an EDB-Sort to process a random selection of n elements from g_p is O(n). The proof that ordinary double-bucketing will not yield O(n) expected time sorting of a random selection of n elements from g_p is equally straightforward. The first bucket's interval will extend at least from zero to 1/n² when ordinary double-bucketing is used.
Integrating g_p from zero to 1/n², multiplying by n, and then multiplying by the resulting product's logarithm yields an expression for the expected time for standard sorting of the first bucket's contents which is clearly ω(n). An argument essentially identical to that given here for g_p shows that this theorem's two conditions hold for h_p as well. It is, in fact, possible, by similar reasoning, to demonstrate that the two conditions of this theorem hold for all functions in an (uncountably) infinite class of functions whose denominators extend x(−log₂ x) by further iterated-logarithm factors [log₂(−log₂ x)], [log₂(log₂(−log₂ x))], ..., with exponents determined by parameters k > 0 and e, b_1, ..., b_k in (0, 1).

2.2 Additional Improvements on the Comparison-Based Time Complexity for Sorting

2.2.1 Infinite Collections of Atoms

Another useful property of the Atom-Handler routine described above is that it may be used to sort size-n random selections from domain-bounded but otherwise arbitrary atomic distributions in o(n log₂ n) expected time. A simple algorithm, Bucket-Sort/Test, that does just this is outlined below.

Bucket-Sort/Test(Input_File, Output_File)

(1) Apply the first two steps of Segment-Sort1.

(2) To each bucket formed in the preceding step, apply the Atom-Handler algorithm — passing as parameters the location of the bucket and the number of its elements. This step is the same as step (4) of Segment-Sort1, but is applied here to primary buckets generated by just one level of ordinary bucketing.

(3) Read the sorted elements in order, from the first element in the first bucket to which Atom-Handler returns a pointer to the last element in the last such bucket, into Output_File.

Thus, let f be an arbitrary probability distribution made up entirely of atoms with values in [0, 1]. The atoms of f may, in this case, be infinite in number and may even be dense on the interval [0, 1].
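The "testing phase" relied on below, detecting whether a single atom's value dominates a bucket, can be performed in linear time and constant extra space. The Boyer-Moore majority vote shown here is a standard way to do that; it is a stand-in for illustration, not the thesis's Atom-Handler routine.

```python
def majority_value(bucket):
    """Linear-time, constant-space majority test (Boyer-Moore vote).
    Returns (value, count) if some value occupies a strict majority
    of the bucket, and None otherwise."""
    candidate, votes = None, 0
    for x in bucket:           # first pass: find the only possible candidate
        if votes == 0:
            candidate, votes = x, 1
        elif x == candidate:
            votes += 1
        else:
            votes -= 1
    count = bucket.count(candidate)   # second pass: confirm the candidate
    return (candidate, count) if 2 * count > len(bucket) else None

print(majority_value([3, 3, 1, 3, 2, 3, 3]))  # → (3, 5)
print(majority_value([1, 2, 3]))              # → None
```

Once a majority value is confirmed, its copies can be counted and set aside in linear time, leaving only the minority elements for the bottom-level standard sort-routine.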
The following nonetheless holds:

Theorem 6 For every integer n > 0, let P_n be a population of n elements randomly chosen from f. Then any P_n may be sorted by Bucket-Sort/Test in o(n log₂ n) expected time.

Proof: Let {a_m} be the sequence formed by taking the atoms of f in order of decreasing weight (i.e., probability of occurrence). For any positive integer m, let a_m be the mth term of {a_m}, let w(a_m) be the weight of the atom a_m, and let A_m be the sum of the weights of terms a_1, ..., a_m. Since f is a probability distribution made up of atoms, the total weight of all its atoms is equal to one. Therefore, the sequence of sums {A_m} = A_1, A_2, ... converges to one. This is equivalent to the condition: for every ε > 0 there is an integer N such that for all integers n > N, |A_n − 1| < ε. Now, for every positive integer n and every positive integer j ≤ n, let I_(j,n) = [(j − 1)/n, j/n). (I_(j,n) is the subinterval of [0, 1] corresponding to the jth bucket formed in the single-bucketing phase of any execution of Bucket-Sort/Test on an input set of size n.) Also, let M be an arbitrary fixed positive integer, and for every positive integer n, let J(n) be the integer such that the value of the atom a_M belongs to the subinterval I_(J(n),n). The widths of the intervals in the sequence I_(J(1),1), I_(J(2),2), ... approach zero as n approaches infinity. For each positive integer n, let H(n) be the index of the largest (lowest-indexed) term of {a_m} other than a_M within the subinterval I_(J(n),n). Since every other atom in the sequence {a_m} is a finite nonzero distance from a_M, H(n) approaches infinity as n approaches infinity. For H(n) > M, the total weight of all atoms other than a_M within the subinterval I_(J(n),n) is less than 1 − A_(H(n)−1), because these atoms form a subsequence of the sequence formed by discarding the first H(n) − 1 terms of {a_m}.
It therefore follows that the total weight of all atoms other than a_M within the subinterval I_(J(n),n) approaches zero as n approaches infinity. Consequently, the expected proportion of elements having the value of a_M to fall into a primary bucket generated by Bucket-Sort/Test that corresponds to I_(J(n),n) will approach 1 as n approaches infinity, so the value of a_M will ultimately emerge as the majority value for such a bucket in Bucket-Sort/Test's testing phase. For each integer n > 0, let B_n be the primary bucket generated by Bucket-Sort/Test that corresponds to I_(J(n),n), and let ε(n) be the expected proportion of elements falling into B_n that do not have the value of a_M. Clearly ε(n) approaches zero as n approaches infinity. Since ε(n) is the expected proportion of elements falling into B_n that Bucket-Sort/Test will ultimately sort using its bottom-level standard sort-routine (the rest being found to have the value of a_M and counted in linear time), it follows that Bucket-Sort/Test will sort the contents of B_n in o(|B_n| log₂ |B_n|) expected time. Since a_M is an arbitrary term of {a_m}, what holds for a_M will also hold for each term of {a_m} up to a_j, given an arbitrarily large integer j. Therefore, by the additivity of expected values (i.e., Σ_(k=1)^j E[x_k] = E[Σ_(k=1)^j x_k]), it follows that for any fixed positive integer j, Bucket-Sort/Test will collectively process all input elements having a value equal to that of a member of the set {a_1, ..., a_j} in an arbitrarily small fraction of n log₂ n expected time, given a sufficiently large input size n. Thus, let ε_0 be any number such that 0 < ε_0 < 1. Let C be the constant coefficient implicit in the asymptotic upper bound of O(n log₂ n) for the running time of the bottom-level standard sort-routine used in Bucket-Sort/Test on an input set of n elements. Take j sufficiently large that 1 − A_j < ε_0/(C + 1).
Then, for any integer m > 1,

m(1 − A_j) log₂(m(1 − A_j)) ≤ m(1 − A_j) log₂ m < (ε_0/(C + 1)) m log₂ m.

When the input size n is sufficiently large, the expected time needed by Bucket-Sort/Test to collectively process those elements derived from atoms a_1, ..., a_j will be less than (ε_0/(C + 1)) n log₂ n, and, of course, as the standard sort-routine used requires less than Ck log₂ k time to sort any k ranked elements (given any integer k > 1), the expected time needed to process those elements derived from the remaining atoms (a_(j+1), ...) cannot exceed Cn(1 − A_j) log₂(n(1 − A_j)). Therefore, the expected time needed by Bucket-Sort/Test to process the input elements derived from atoms a_1, ..., a_j, ... (i.e., all the input elements) is less than

(ε_0/(C + 1)) n log₂ n + Cn(1 − A_j) log₂(n(1 − A_j))
< (ε_0/(C + 1)) n log₂ n + (Cε_0/(C + 1)) n log₂((ε_0/(C + 1)) n)
≤ (ε_0/(C + 1)) n log₂ n + (Cε_0/(C + 1)) n log₂ n
= ε_0 n log₂ n.

Since ε_0 may be chosen arbitrarily small, Bucket-Sort/Test processes all the input elements in o(n log₂ n) expected time.

2.2.2 Arbitrary Isolated P.D.F. Discontinuities

Let f(x) be a probability density function that is bounded on every closed interval which is a subset of (0, 1]. (As usual, it is assumed that all positive density is confined to the interval [0, 1].) There may be a discontinuity at x = 0 which results from a singularity, divergence by oscillation, or any other possible cause. Let S_n be a population of n elements randomly drawn from f. Then the following is true.

Theorem 7 Let S_n be given as input to a sorting algorithm consisting of a single phase of ordinary bucketing and the subsequent application of a standard sort-routine to the buckets thus formed (i.e., an ordinary bucket-sort). Then the expected time for sorting S_n is o(n log₂ n).

Proof: After the initial phase of bucketing, for 1 ≤ j ≤ n, the jth bucket (in order of increasing element value) will correspond to the subinterval ((j − 1)/n, j/n].
Since f is a probability density function with nonzero values confined to (0, 1], for every ε ∈ (0, 1] there is a (unique) δ ∈ (0, 1] such that ∫₀^δ f(x) dx = ε. Let ε_0 be any number in (0, 1] and let δ_0 be the number for which ∫₀^(δ_0) f(x) dx = ε_0. Since δ_0 > 0, the range of f's restriction to [δ_0, 1] has an upper bound (U). Let B_0 be the bucket formed in the bucketing of S_n which corresponds to the bucket-interval containing δ_0 (((⌈nδ_0⌉ − 1)/n, ⌈nδ_0⌉/n]). Let B_1 be any bucket thus formed whose corresponding bucket-interval lies to the right of B_0's within (0, 1]. Since f is upper bounded by U over B_1's bucket-interval, the expected number of elements from S_n to fall into B_1 is upper bounded by U. Therefore, since a standard sort-routine is used for the final sorting of B_1's contents, the expected time for this final sorting of B_1's contents is O(U log₂ U). Since B_1 is an arbitrary bucket whose corresponding interval lies beyond that of B_0, and there are at most n − 1 such buckets, it therefore follows that the expected time required for the final sorting of all the buckets beyond B_0 is O(nU log₂ U) = o(n log₂ n). Since δ_0 > 0, it is clear that for sufficiently large n, the lower endpoint of the bucket-interval corresponding to B_0 is strictly positive, from which it follows that the restriction of the range of f to this interval will have an upper bound. Let U′ be such an upper bound. Then the expected number of elements from S_n to fall into B_0 (for sufficiently large n) will be upper bounded by U′. The integral of the restriction of f to any bucket-interval to the left of δ_0's bucket-interval must be at most ε_0, because the interval must lie entirely between zero and δ_0. It follows immediately that the expected number of elements of S_n to fall into any bucket whose corresponding interval lies to the left of B_0's is upper bounded by ε_0 n.
Therefore, the ratio of the expected time taken (by the standard sort-routine) for the final sorting of any such bucket's contents to the expected number of elements of S_n to fall into the bucket is O(log₂(ε_0 n)) = O(log₂ n). Since the expected number of elements to fall into the union of all the buckets with corresponding intervals to the left of B_0's is at most ε_0 n, it follows that the total expected time for the final sorting of the buckets whose corresponding intervals lie to the left of B_0's is O(ε_0 n log₂ n). Therefore, the total expected time for the final sorting of all the buckets is O(ε_0 n log₂ n) + O(U′ log₂ U′) + o(n log₂ n). Since ε_0 may be arbitrarily small, the total expected time for this final phase of sorting is o(n log₂ n). Since the initial bucketing of the elements itself takes O(n) time, the entire sorting of S_n as described here takes o(n log₂ n) expected time.

Chapter 3

Random Sorting of Distinct Real Numbers in Linear Expected Time

In this final chapter, I present a surprisingly general and yet simple result concerning the Random RAM. It turns out that there are algorithms for the Random RAM by which one may sort n distinct real numbers in O(n) expected time regardless of how they are chosen, i.e., even if they are chosen by an enemy. A great deal of credit for this result must be given to Kirkpatrick and Reisch [KR84] for their algorithm by which, for any m ∈ N, an arbitrary population of m distinct integer values may be sorted in O(m) absolute worst-case time. A slightly modified version of Kirkpatrick and Reisch's linear-time integer-sorting algorithm, one that handles duplicate integers by attaching a unique index string to the end of each input element and achieves sorting of the real-valued inputs to Random-Segment-Sort by attaching them to integer-valued sort-keys, is used here as a subroutine for this chapter's main algorithm, Random-Segment-Sort.
The algorithm of Kirkpatrick and Reisch may be used for linear-time sorting of an arbitrary finite set of rational numbers, as these can be scaled up by their least common denominator, making them integers, and then scaled back down after sorting. By itself, it is, however, fundamentally incapable of sorting arbitrary real numbers in linear time (as is immediately apparent from an inspection). Random-Segment-Sort is presented in the usual pseudocode-like format. It is assumed, without loss of generality, that the input numbers lie within the interval [0, 1], as their true domain can be computed and effectively reduced to this interval in O(n) time by finding the extreme input values and applying to all the input numbers a common contraction (by shifting) and translation (by addition) that places these extreme values within [0, 1].

3.1 The Random-Segment-Sort Algorithm

Random-Segment-Sort, as mentioned above, is analyzed under the assumption that the input numbers are distinct. It will clearly have the same average-time performance in the natural context of input randomly drawn from an arbitrary p.d.f. (where the probability of two equal values is zero):

Random-Segment-Sort(Input_File, Output_File)

(1) Read the input from Input_File into an input array (In), counting the number (n) of input elements.

(2) Letting N be the number of elements of the current input array (In_c), set j equal to Floor(Random(0, N)) + 1. The memory variable j now contains a random index (randomly chosen from among all possible indices) for In_c. At the "top," or zeroth recursion level (where the jth recursion level is the total sequence of all primitive instructions executed at a recursion depth of j), the current input array, In_c, equals In and the number of elements in it is n.
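The O(n) domain reduction described above (find the extreme values, then contract and translate) looks like this in miniature, with ordinary division standing in for the shift-based contraction of the RAM model:

```python
def normalize_to_unit(xs):
    """Affine, order-preserving map of arbitrary reals into [0, 1],
    computable in O(n) time: find the extremes, then contract and
    translate (the thesis does the contraction by shifting)."""
    lo, hi = min(xs), max(xs)
    if lo == hi:
        return [0.0] * len(xs)   # all inputs equal: nothing to order
    scale = hi - lo
    return [(x - lo) / scale for x in xs]

print(normalize_to_unit([5.0, -3.0, 1.0]))  # → [1.0, 0.0, 0.5]
```

Because the map is strictly increasing, sorting the normalized values sorts the originals.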
(3) Scan In_c to find the value of the least of its elements strictly exceeding In_c[j] in value, and set V equal to this value, provided that this value exists. If no such value exists (i.e., In_c[j] is maximal within In_c), return to step (2).

(4) At this point, the value of the expression |V − In_c[j]| will be equal to the size of a randomly chosen "gap" between two consecutive values in the ascending linear order that the input numbers comprise. Let gap-size be equal to the value of Binary-Floor(|V − In_c[j]|) and let input-resolution equal 1/gap-size. The values of gap-size and input-resolution may both be computed (in constant time) by Point-Normalize, shifting, negation and addition operations. They will both be integral powers of two.

(5) Allocate the arrays Integer-Parts[1..N] and Initial-Integer-Parts-Indices[1..N]. (This step need be nothing more than setting up pointers to the starting and ending points of the two arrays.)

(6) For every k ∈ {1, ..., N}, set Integer-Parts[k] equal to the value of In_c[k], left-shift Integer-Parts[k] by a number of binary places equal to the value of Point-Normalize(input-resolution) (thereby effecting a multiplication by input-resolution), and then reduce the scaled-up value of Integer-Parts[k] to its integer part (with a floor operation).

(7) For every k ∈ {1, ..., N}, set Initial-Integer-Parts-Indices[k] equal to k. The value of Initial-Integer-Parts-Indices[k] now equals the initial Integer-Parts array-index (k) of the kth element of the sequence comprised by the elements of Integer-Parts prior to their sorting.

(8) If N ≤ 4, sort the elements of Integer-Parts using an insertion-sort [CLR90].
Otherwise, sort the elements of the Integer-Parts array in O(N) time using a version of Kirkpatrick and Reisch's "Algorithm IV" [KR84] modified to handle duplicate integers and carry satellite data as described here. Algorithm IV will always sort m arbitrary input integers in less than 30m operations (for any integer m exceeding 4). In the modified version used here, the input integers are initially left-shifted by Binary-Ceiling(N), and a unique index in the range 1, ..., N (padded on the left with zeroes to a length of log₂(Binary-Ceiling(N))) is attached to the end of each entry in the Integer-Parts array, to distinguish duplicate integers. Also, the initial value of Initial-Integer-Parts-Indices[k] (for every k ∈ {1, ..., N}) is treated as a satellite datum bound to the initial value of Integer-Parts[k], so that the jth term (x_j) of the final sequence of elements in Initial-Integer-Parts-Indices will equal the index of the initial position corresponding to x_j in the Integer-Parts array.

(9) At this point, for every k ∈ {1, ..., N}, the value of Integer-Parts[k] equals the integer part of the value of In_c[Initial-Integer-Parts-Indices[k]] scaled up by a factor of input-resolution. Arrange the elements of In_c so that their scaled-up integer parts are in ascending order as follows: allocate an auxiliary array, Aux-In_c[1..N]; for each k ∈ {1, ..., N}, set Aux-In_c[k] equal to In_c[Initial-Integer-Parts-Indices[k]]; and finally, for each k ∈ {1, ..., N}, write the value of Aux-In_c[k] back into In_c[k].

(10) Allocate an array (Bucket_c[1..N]) to hold pointers to the beginnings of the "buckets" in In_c, i.e., the maximal contiguous segments of In_c whose elements' values all correspond to equal integers in Integer-Parts. Also, allocate an array (Bucket-Size_c[1..N]) to hold the sizes of these buckets.
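Steps (4) and (6), rounding the sampled gap down to a power of two and reducing each input to the integer part of its scaled-up value, can be sketched with frexp and ldexp standing in for the shift operations. The function below is illustrative only; its name and interface are not the thesis's:

```python
import math

def scaled_integer_parts(xs, gap):
    """Steps (4) and (6) in miniature: round the sampled gap down to a
    power of two (Binary-Floor), take its reciprocal as input-resolution,
    and reduce each input to the integer part of its scaled-up value.
    frexp/ldexp stand in for the RAM's shift operations."""
    m, e = math.frexp(gap)                 # gap = m * 2**e, 0.5 <= m < 1
    gap_size = math.ldexp(1.0, e - 1)      # largest power of two <= gap
    resolution = 1.0 / gap_size            # an exact power of two
    return [math.floor(x * resolution) for x in xs]

print(scaled_integer_parts([0.1, 0.37, 0.9], 0.3))  # resolution 4 → [0, 1, 3]
```

Because resolution is a power of two, the multiplication only adjusts binary exponents, mirroring the left-shift of step (6).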
(11) Let the number of "buckets" in In_c as described in step (10) be referred to as N′. Read through Integer-Parts, and for each k ∈ {1, …, N′}, set Bucket_c[k] equal to the In_c array-index of the beginning of the kth bucket (equal to the Integer-Parts array-index at which the kth new value in Integer-Parts begins). Also, record the number of elements in each bucket (the "bucket size") in Bucket-Size_c as you proceed.

(12) Deallocate the three auxiliary arrays Integer-Parts, Initial-Integer-Parts-Indices and Aux-In_c.

(13) To each segment of the In array now corresponding to a bucket [delimited by pointers created in step (11)] that contains more than one element, recursively apply steps (2) through (12) until the elements of In are fully ordered. The recursion should be applied to the buckets from left to right at each level.

(14) Write the N elements of In, in order, into Output-File.

3.2 Analysis of Random-Segment-Sort

It can be verified by inspection that if Random-Segment-Sort terminates, it does so with the correct output, a linear ordering of the real numbers it receives as input. We will show that it runs in expected time O(n), which implies that it terminates with probability one. The body of this analysis is thus devoted to the algorithm's outstanding feature, i.e., its linear expected-time performance for distinct input numbers.

Steps (1) and (14) of the algorithm clearly take O(n) time. I consider now the application of the component procedure (P_s) consisting of steps (2) through (12) to a "current segment" (In_c) of size m > 1 of the principal input array, In, which the algorithm uses for storage of the input at various stages of the sorting process.

Steps (2) and (3): Due to the storage of "bucket sizes" in step (11), the "size," or number of elements, of In_c will always be immediately accessible.
Consequently, step (2) takes constant time per execution. Step (3) also takes constant time per execution. The probability of returning to step (2) at the end of step (3) is only 1/m, so steps (2) and (3) are both executed O(1) times on average per execution of step (4) (which is executed only once at the top level in each recursive call). Therefore, steps (2) and (3) collectively take O(1) expected time per execution of step (4).

Steps (4) and (5): Step (4), as indicated, takes constant time per execution. Step (5) also takes constant time per execution and is performed only once per execution of step (4). Consequently these steps collectively require O(1) time per execution of step (4).

Steps (6) through (9): These four steps each take O(m) time per execution, and each is executed just once per execution of step (4). Therefore the time devoted to these four steps per execution of step (4) is O(m).

Steps (10) through (12): Steps (10) and (12) clearly take constant time per execution. Step (11), which involves reading a size-m array (Integer-Parts) and setting up pointers to a subset of its indices, takes O(m) time per execution. Each of these three steps is executed exactly once per execution of step (4). Therefore the total time devoted to these three steps per execution of step (4) is O(1) + O(1) + O(m) = O(m).

Thus the procedure (P_s) consisting of steps (2) through (12) takes O(m) time in a single application to a size-m segment of the In array, and consequently takes O(n) time in its application to the full In array at the top level. In step (13), P_s is recursively applied to each segment of In corresponding to a bucket of size at least two generated in the initial application of P_s to In.
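The control flow of P_s and its recursion can be condensed into a toy Python rendition. This is a sketch under stated simplifications, not Random RAM code: a dictionary stands in for the Bucket_c/Bucket-Size_c machinery, Python's built-in sort stands in for both the insertion-sort base case and Algorithm IV, all names are illustrative, and distinct inputs are assumed (as in the analysis).

```python
import math
import random

def segment_sort(segment, rng=random):
    """Toy rendition of steps (2)-(13); assumes distinct inputs."""
    if len(segment) <= 4:
        return sorted(segment)                  # step (8)'s small base case
    while True:
        j = rng.randrange(len(segment))         # step (2): random element
        larger = [x for x in segment if x > segment[j]]
        if larger:                              # step (3): retry on the maximum
            break
    gap = min(larger) - segment[j]              # a random consecutive gap
    gap_size = 2.0 ** math.floor(math.log2(gap))   # step (4): Binary-Floor
    buckets = {}                                # steps (6)-(11), condensed:
    for x in segment:                           # bucket by scaled-up integer part
        buckets.setdefault(math.floor(x / gap_size), []).append(x)
    out = []
    for key in sorted(buckets):                 # step (13): buckets left to right
        out.extend(segment_sort(buckets[key], rng))
    return out
```

Because gap_size never exceeds the chosen gap, the two elements spanning that gap always land in different buckets, so every application of the procedure splits its segment into at least two buckets and the recursion terminates on distinct inputs.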
At the top level in each recursive call, the gap size, g ("|V - In_c[j]|" above), is, in effect, randomly chosen from among the sizes of the gaps between consecutive elements (currently held in In_c) in the final ordering of the input, so its expected rank among these gap sizes is the median rank. The effective bucket-interval width, w (referred to as "gap-size" above), is no greater than g, and the relative ranking of w among these gap sizes (expressed as a proper fraction) is directly proportional to the expected fraction of elements of In_c that end up isolated in buckets of size 1 indexed by the Bucket_c array. Therefore, at the top level in each recursive call, the expected proportion of isolated elements is at least 1/2. That is, at least half the elements of In_c are expected to be isolated in buckets of size 1 by a single application of P_s to In_c, while at most half the elements of In_c are expected to end up in multi-element buckets.

Thus Random-Segment-Sort performs a linear expected number (i.e., O(n)) of primitive instructions excluding recursions, and calls itself recursively on subproblems of expected total size at most half the size of the input (within each recursive call). This implies that the expected running time of Random-Segment-Sort is O(n).

It is, in fact, possible to guarantee the termination of Random-Segment-Sort without sacrificing its O(n) average-time performance. Let Random-Segment-Sort_0 be an algorithm identical to the original Random-Segment-Sort, except that whenever the depth of the recursion within any recursive call of Random-Segment-Sort_0 reaches n, the subset of the original input passed to that call is submitted to a standard sort-routine for bottom-level sorting.
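The isolation claim above can be illustrated numerically. The sketch below (illustrative names, and a hand-picked point set rather than the thesis's probabilistic setting) enumerates every equally likely gap choice, buckets the points at width Binary-Floor(gap), and averages the fraction of points left alone in size-1 buckets:

```python
import math

def mean_isolated_fraction(points):
    """Average, over all gap choices, of the fraction of points
    isolated in size-1 buckets of width Binary-Floor(gap)."""
    pts = sorted(points)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    fractions = []
    for g in gaps:                              # each gap is equally likely
        w = 2.0 ** math.floor(math.log2(g))     # Binary-Floor of the gap
        buckets = {}
        for x in pts:
            buckets.setdefault(math.floor(x / w), []).append(x)
        isolated = sum(1 for b in buckets.values() if len(b) == 1)
        fractions.append(isolated / len(pts))
    return sum(fractions) / len(fractions)
```

For the points 0, 1, 2, 4, 8, 16, 32, 64 (gap sizes doubling from 1 to 32), the mean isolated fraction works out to 4.5/7 ≈ 0.64, comfortably above the bound of 1/2 claimed above.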
For any nonnegative integer j, let r_j be the ratio of the fraction of original input elements that get isolated at a recursion depth of j + 1 or higher (in an application of Random-Segment-Sort_0 to a set of n input elements) to the fraction of original input elements that get isolated at a recursion depth of j or higher. Let ε be any number such that 0 < ε < 1. By the nature of the random choice of effective, relative bucket-interval widths (input scale-up factors) in Random-Segment-Sort_0, it is clear that, given values for r_1, …, r_j, the probability that r_{j+1} will exceed 1/2 is less than 1/2. That is, Pr(r_{j+1} > 1/2 | r_1, …, r_j) < 1/2.

Therefore, if n is sufficiently large, the probability that, with n levels of recursion, the r_j's for at least (1 - ε)n of these levels will exceed 1/2 is less than

2^(-(1-ε)n) · C(n, εn) · n ≤ 2^(-(1-ε)n) · n · n^(εn) = 2^(-(1-ε)n) · n^(εn + 1).

Let h(n) = 2^(-n/2) and let ε_n = 1/(log2 n)². Then the probability that, with n levels of recursion, the r_j's for at least (1 - ε_n)n of these levels will exceed 1/2 is less than

2^(-(1-ε_n)n) · n^(ε_n·n + 1) = 2^(-n + n/(log2 n)²) · 2^((log2 n)(n/(log2 n)² + 1)) = 2^(-n + n/(log2 n)² + n/(log2 n) + log2 n) ≤ 2^(-n/2) = h(n)

for sufficiently large n.

If it is not the case that the r_j's for at least (1 - ε_n)n of the levels exceed 1/2, then fewer than (1 - ε_n)n of the r_j's exceed 1/2, which implies that, given n recursive levels, at least ε_n·n = n/(log2 n)² of the r_j's are bounded above by 1/2. This condition implies that the total number of clustered input elements remaining has been divided by a factor of 2^(n/(log2 n)²), which is much greater than n for large n. But the number of clustered elements remaining cannot be divided by more than n. Therefore, when n is large, Random-Segment-Sort_0 will terminate before ever invoking its standard sort-routine,
with a probability of at least 1 - h(n). The probability that the standard sort-routine will be invoked is consequently bounded above by h(n) = 2^(-n/2), so even if it were to start from scratch on the entire original input, taking O(n log2 n) time, it would increase the expected running time of Random-Segment-Sort_0 by at most O((n log2 n)/2^(n/2)) = O(1).

A Word on Space Requirements

Although time requirements are the principal focus of this analysis, it is worth noting that the expected number of memory words used by Random-Segment-Sort is also O(n). This is evident from the following two facts: (1) A single execution of any one of the Random RAM's primitive instructions affects only O(1) memory words. (2) Random-Segment-Sort performs a linear expected number of primitive instructions excluding recursions, and calls itself recursively on subproblems of expected total size at most half the size of the input, within each recursive call.

Since individual memory words can hold real numbers, this O(n) "space" complexity does not have the same significance as it would if, for example, every memory word's contents could be represented with a fixed number of bits. For that reason, I prefer to call it "word" complexity.

The word and time complexities of Random-Segment-Sort are qualified by the fact that Kirkpatrick and Reisch's integer-sorting algorithm, used in step (8), requires a number of bits for intermediate results which is at least exponential in the total number of bits submitted to it as input (it is not a "conservative" algorithm). One may avoid this by adding
a bitwise AND and OR to the Random RAM's primitive instruction set, and substituting the conservative integer-sorting algorithm (signature sort) of Andersson, Hagerup, Nilsson and Raman [AHNR95], which runs in O(n log2 log2 n) time given n input elements, for Kirkpatrick and Reisch's algorithm in step (8) of Random-Segment-Sort. The result would be a modified version of Random-Segment-Sort having only slightly poorer expected-time performance. By effecting an expected 50% or greater reduction, in going from each recursion depth to the next, in the number of input elements to be isolated at higher recursion depths, and having the asymptotic performance at any one recursion level indicated by the use of signature sort in step (8), this modified version of Random-Segment-Sort would achieve complete sorting of n distinct real numbers in O(n log2 log2 n) expected time. The constants associated with signature sort are also reasonably small.

Bibliography

[AHNR95] A. Andersson, T. Hagerup, S. Nilsson, and R. Raman. Sorting in linear time? Proceedings of the Twenty-Seventh Annual ACM Symposium on the Theory of Computing, pages 427-436, 1995.

[AHU74] Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Mass., 1974.

[CLR90] Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algorithms. MIT Press, Cambridge, Mass., 1990.

[DeG87] Morris H. DeGroot. Probability and Statistics. Addison-Wesley, 1987.

[Dev81] L. Devroye. On the average complexity of some bucketing algorithms. Computers and Mathematics with Applications, 7:407-412, 1981.

[Dev86] Luc Devroye. Lecture Notes on Bucket Algorithms. Birkhäuser Boston, Inc., 1986.

[DK81] L. Devroye and T. Klincsek. Average time behavior of distributive sorting algorithms. Computing, 26(1):1-7, 1981.

[EB] Peter van Emde Boas.
Machine Models and Simulations, in: J. van Leeuwen (Ed.), Handbook of Theoretical Computer Science. Elsevier Science Publishers B.V., 1990.

[Ehr82] G. Ehrlich. Searching and sorting real numbers. Journal of Algorithms, 2:1-12, 1982.

[Gal78] J. Galambos. The Asymptotic Theory of Extreme Order Statistics. John Wiley, New York, N.Y., 1978.

[Gon81] G. H. Gonnet. A linear probing sort and its analysis. Proceedings of the Thirteenth Annual ACM Symposium on the Theory of Computing, pages 90-95, 1981.

[Knu73] Donald E. Knuth. The Art of Computer Programming, Vol. 3: Sorting and Searching. Addison-Wesley, Reading, Mass., 1973.

[KR84] D. Kirkpatrick and S. Reisch. Upper bounds for sorting integers on random access machines. Theoretical Computer Science, 28:263-276, 1984.

[Mac66] M. Donald MacLaren. Internal sorting by radix plus sifting. Journal of the Association for Computing Machinery, 13(3):404-411, 1966.

[PS] W. J. Paul and J. Simon. Decision trees and random access machines. Symposium über Logik und Algorithmik, Zürich, 1980.

[Rab] M. Rabin. Probabilistic Algorithms, in: J. Traub (Ed.), Algorithms and Complexity. Academic Press, New York, N.Y., 1976.

[Rei] Stefan Reisch. Komplexität von Sortierproblemen und Anwendungen der Kolmogoroff-Komplexität. Doctoral Thesis, Faculty of Mathematics, University of Bielefeld, 1982.

[Sed77] R. Sedgewick. The analysis of quicksort programs. Acta Informatica, 7:327-355, 1977.

[Tam] Markku Tamminen. Analysis of recursive bucket methods. Report HTKK-TKO-B44, Helsinki University of Technology, Laboratory for Information Processing Science, Otakaari 1, SF-02150 Espoo 15, Finland, 1982.

[Tam85] Markku Tamminen. Two levels are as good as any. Journal of Algorithms, 6(1):138-144, 1985.