Growth Processes on Formulas and Reversible Circuits

by Alexander Brodsky

B.Math. University of Waterloo, 1997
M.Sc. University of British Columbia, 1999

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

The University of British Columbia
June 2003

© Alexander Brodsky, 2003

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
2366 Main Mall
Vancouver, BC Canada V6T 1Z4

Abstract

Among their many uses, growth processes have been used for constructing reliable networks from unreliable components (Moore and Shannon, 1956) and deriving complexity bounds of various families of functions (Valiant, 1984). Hence, analyzing such processes is an important and challenging problem. In this thesis we parameterize a growth process by its initial conditions and characterize it by the existence and shape of a limiting probability distribution that describes the likelihood of it realizing a particular Boolean function. We identify the limiting distributions of several classes of growth processes on formulas, and derive conditions under which results such as Valiant's hold. We consider growth processes that use linear, self-dual, and monotone connectives, completely characterizing those processes that use linear or monotone connectives. In the latter case, we derive a novel technique that combines spectral analysis (Savicky, 1990) with probabilistic arguments; the technique is also applicable to growth processes that use other connectives. Our characterizations also yield bounds on the convergence of these growth processes.

Unfortunately, a comparable definition and characterization of growth processes on general circuits is impractical due to the dependencies between the probabilities associated with various circuit components. However, reversible circuits (Toffoli, 1980) are inherently more structured than general circuits. To study growth processes beyond the formula setting, we propose and characterize growth processes on reversible circuits. Intriguingly, aspects of the characterizations that proved difficult in the former setting—such as proving that the limiting distribution is uniform—turn out to be relatively easy in the latter. In fact, the limiting distribution of a growth process on reversible circuits is characterized completely by its support—the set of functions that the process can generate. Consequently, we also provide bounds on the convergence of these growth processes. Finally, the regular structure of reversible circuits provides ample motivation for considering the reversible circuit complexity of finite Boolean functions—an important issue, since the precursor to such applications as quantum computation is reversible computation.
We derive relationships between reversible circuits and other models of computation such as permutation branching programs (Barrington, 1985), based on a new measure that we call bandwidth. By leveraging these relationships, we exhibit a natural gap between two families of reversible circuits that correspond to width-4 and width-5 permutation branching programs. Based on the same measure, we define a hierarchy of families of reversible circuits that corresponds to the SC class hierarchy—a natural circuit-based definition of the class SC. Lastly, we provide constructions for several common Boolean functions and derive sufficient conditions under which a Boolean function has a polynomial-size realization.

Contents

Abstract
Contents
List of Figures
Acknowledgements
1 Introduction
1.1 Growth Processes on Formulas
1.2 Reversible Circuits and Growth Processes on Reversible Circuits
1.3 Reversible Circuit Complexity
1.4 Results in this Thesis
2 Background
2.1 Boolean Functions, Formulas, and Circuits
2.2 Random Formulas
2.2.1 Probabilistic Amplification
2.2.2 Spectral Analysis
2.2.3 Related Work
2.3 Reversible Computation
2.3.1 Reversible Circuits
2.4 Random Reversible Circuits
2.5 Complexity of Reversible Computation
2.5.1 Space-Time and Other Trade-offs
2.5.2 Relationships with Other Models of Computation
2.5.3 Related Work
3 Growth Processes on Formulas
3.1 Definitions
3.2 Growth Processes that Use Linear Connectives
3.3 Growth Processes that Use Self-Dual Connectives
3.4 Growth Processes that Use Monotone Connectives
3.4.1 Growth Processes that Use Unbalanced Monotone Connectives
3.4.2 Growth Processes that Use Balanced Connectives
3.5 Growth Processes that Use Other Functions
4 Growth Processes on Reversible Circuits
4.1 Definitions
4.2 The Limiting Distribution of Growth Processes on Reversible Circuits
4.3 The Support of Growth Processes on Reversible Circuits
4.3.1 The Support of Growth Processes that are not Bandwidth-limited
4.3.2 The Support of Growth Processes that are Bandwidth-limited
4.4 Convergence of Growth Processes on Reversible Circuits
5 Reversible Circuit Complexity
5.1 Definitions
5.2 Bounded Bandwidth Reversible Circuits
5.2.1 Simulating Reversible Circuits with Branching Programs
5.2.2 On the Power of Bandwidth-2 Reversible Circuits and 4-PBPs
5.3 Polylogarithmic Bandwidth Reversible Circuits
5.4 Unbounded Bandwidth Reversible Circuits
5.4.1 Reversible Circuit Constructions
5.4.2 Sufficient Conditions for Realizing Permutations by Polynomial Size Circuits
5.4.3 Techniques and Heuristics for Reversible Circuit Constructions
6 Conclusion and Future Work
Bibliography

List of Figures

1.1 A block diagram of a reversible circuit
2.1 Some typical characteristic polynomials for networks of three crummy relays
2.2 A NOT (a' = 1 ⊕ a), a controlled-NOT (b' = b ⊕ a), and a Toffoli (c' = a ∧ b ⊕ c)
2.3 A method for resetting garbage lines
2.4 A partial representation of the improved simulation for c = 2
4.1 A four gate construction of a NOT gate
4.2 A four gate construction of a C-NOT gate
4.3 A four gate construction of a C-NOT gate
4.4 A four gate construction of a NOT gate
4.5 A four gate construction of a Toffoli gate
4.6 The four singleton 2-functions
4.7 A (0)-controlled Toffoli
4.8 The circuit C ~ (7 15 31)
4.9 A simulation of one Toffoli gate with another
4.10 Realizations of controlled 2-cycles
5.1 Schematic short forms of block components
5.2 Realizations of a) a half-incrementor and b) a nigh-incrementor
5.3 A realization of the adder
5.4 A realization of the consensus function
5.5 A realization of the threshold function
5.6 Realizations of the threshold function T_{n-1}
5.7 A realization of the threshold function

Acknowledgements

"Research does not occur in a vacuum" is an oft-cited axiom, meaning that research does not occur in isolation. An immediate corollary is that this work would not have been possible without the contributions of a great many people.

Primus inter pares, I thank my supervisor and mentor Nick Pippenger. Without his patience, insight, and encouragement, I doubt that this work would have come to fruition; via example and advice, he was my first guide through the world of theoretical computer science. With equal import, I thank my friend and mentor Bettina Speckmann, whose generosity, steadfastness, ideals, and uncompromising demand for excellence in all aspects of life greatly tempered me. I have no doubt that without her friendship and example, I would be a different person; thank you for the ideas, arguments, schemes, and the multifarious late-night discussions!

My home for the past five years has been the Distributed Systems Group, who accepted me—a theory person—and provided me with a lively and stimulating work environment—though I must admit that I do not have a performance section in my dissertation. Many thanks to my twin brother Dima, my great friend Matt Pedersen, as well as Yvonne, Chamath, Joon, and the rest of the DSG. You made me look forward to spending each day in the lab!

Thank you to the Theory Group for the collaboration, interaction, and feedback that I have received over the years. Special thanks to Mike McAllister for taking me under his wing when I arrived here six years ago, Stephane Durocher and Ellen Gethner for their collaboration on the rectilinear crossing number problem, and Anne Condon, Joel Friedman, and David Kirkpatrick for being on my committee and providing very helpful feedback.

The Department of Computer Science at UBC is a unique and amazing place to work; many of its members have made my tenure here an experience that I would not trade for anything. Many thanks to my fellow grad students and friends, Andrea, Paul, Martin, Lisa, and Brian, for your friendship, and to the faculty, Alan, Norm, and Gail, for sharing your experiences and your advice.

Most importantly, I thank my family for their everlasting support. In my weekly conversations I was always asked three questions: Are you healthy? Have you found someone nice? Are you done yet? I am pleased to say that I can answer affirmatively to the first and the last questions; I am still working on number 2. Thanks to my parents, Larisa and Boris, my grandparents, Bella and Benjamin, my favourite brother Dima, and everyone else for the love, expectations, and care packages!
ALEXANDER BRODSKY
The University of British Columbia
June 2003

Chapter 1

Introduction

Bounding the circuit complexity of Boolean functions is an outstanding and ongoing challenge. Determining the circuit complexity of a Boolean function yields insight not only into the practical aspects of complexity, i.e., how many transistors it takes to realize a given function, but also into the structure and relationships of complexity classes. The techniques for deriving the bounds have themselves become objects of study; generalizing techniques—making them applicable to a wider class of problems—is an important problem in its own right. Thus, it goes without saying that exploration of the applicable techniques furthers the study of complexity.

One well known technique that has come to prominence over the past two decades is the probabilistic method. The probabilistic method, commonly termed probabilistic amplification, has been used to characterize complexity classes [Adl78, BG81, ABO84] and to derive nonconstructive upper bounds on the circuit complexity of specific Boolean functions [Val84, Raz88]. The latter method iteratively constructs, or grows, a circuit via a random process. After some number of iterations the resulting circuit may realize the required function with a nonzero probability. We call the latter form of the probabilistic method a growth process.

To make the analysis of a growth process tractable, the constructions are usually restricted to a subclass of circuits. Otherwise, the dependencies between the probabilities associated with various components of a circuit render the analysis intractable. For example, both Valiant [Val84] and Razborov [Raz88] restricted their attention to fan-out 1 circuits: formulas. Alternatively, if the circuits are reversible—comprising fan-out 1 bijective gates—the growth processes are also amenable to analysis.

Reversible circuits [Lan61, Tof80, FT82] and reversible computation [Ben73, Ben89] constrain every gate and every step of the computation to be completely reversible, so that no information may be lost at any step of the computation. Reversible circuits, which realize a bijection from n inputs to n outputs, comprise n wires that are manipulated by bijective gates. Since quantum computation has come to the forefront of theoretical and applied research, investigations into reversibility and reversible circuits have assumed a more prevalent role [Ben88a, Ben88b] because reversibility is a precondition of quantum computation. Fortuitously, the restrictive nature of reversibility induces a structure on the circuits that is much more amenable to analysis, both within the framework of growth processes as well as for circuit complexity in general.

In this thesis we are concerned with growth processes on formulas, growth processes on reversible circuits, and reversible circuit complexity. In the first two parts of the thesis we investigate and characterize growth processes based on their initial parameters; we formulate general theorems that predict the resulting distributions. The goal is to provide a methodology for constructing a growth process, given some specification. This not only provides a design strategy, but also delineates the natural limitations of growth processes. In the latter part of the thesis we investigate reversible circuit complexity.
We investigate whether there exists a reversible circuit complexity hierarchy analogous to the one comprising the low-lying complexity classes, an important goal for fitting reversible computation within the current complexity hierarchy. We also investigate the relationships between reversible circuits and other models of computation, as well as concrete constructions, which allow us to derive a set of useful rules and heuristics for the general construction of reversible circuits.

1.1 Growth Processes on Formulas

The growth process, elegantly exhibited by Valiant [Val84] and first studied by Moore and Shannon [MS56], is used to nonconstructively prove that a given n-adic function can be realized. The growth process on Boolean formulas is a random process defined on the space of n-adic Boolean functions, whose parameters are the initial distribution on the variables, negations, and constants, and a fixed k-adic Boolean function, called a connective. To grow a formula of depth i, the random process first grows k random formulas of depth i − 1 and then composes them using the connective; formulas of depth 0 are chosen randomly according to the initial distribution. If the probability of growing a random formula that realizes a particular function can be shown to be greater than zero, then the function may be realized in k^i gates—the size of the random formula.

Alternatively, a random Boolean formula generated by i iterations of the growth process comprises a complete k-ary tree of depth i, where each internal node corresponds to the connective in the natural way, and the leaf nodes are randomly assigned according to the initial distribution. Consequently, the realized function only depends on the connective and the assignment of the leaf nodes. For each iteration of the growth process there is a corresponding distribution on the formulas of depth i that induces a distribution on the space of n-adic Boolean functions. If the induced distribution approaches some fixed distribution as i approaches infinity, then the growth process has a limiting distribution, which is approached at some rate as i approaches infinity. The limiting distribution and the rate of approach characterize the growth process. In many applications [Val84, Bop85, GM91, Sav95a, Sav98] the rate of approach is the main issue of interest. As we will show, in almost all cases that we considered, the rate of approach is exponential in n.

In this thesis we are concerned with three questions: the existence of the limiting distribution, the shape of the limiting distribution, and the rate at which the limiting distribution is approached. In most cases, we assume that the initial distribution is uniform over its support and that the support is a subset of the variables, their negations, and constants. For a given support and connective, the goal is to answer the three preceding questions: the existence of, the shape of, and the convergence to a limiting distribution. For example, Savicky [Sav90, Sav95a] formulated broad conditions under which the limiting distribution is uniform over the entire n-adic function space and is approached at an exponential rate.

1.2 Reversible Circuits and Growth Processes on Reversible Circuits

Reversible circuits, first proposed by Landauer [Lan61] and Toffoli [Tof80], must satisfy one criterion: no information may be lost during any step of the computation.
For example, an AND-gate loses information, because given an output of 0, it is impossible to determine whether the inputs were (0,0), (0,1), or (1,0). Consequently, all gates must be bijective and have fan-out 1. Using Toffoli's terminology, a reversible circuit on n inputs comprises n or more wires, called lines, that are manipulated by gates that take some number of lines as inputs and place output on the same set of lines; see Figure 1.1. Additional lines, called garbage (or ancillary) lines, may be used for temporary storage of values.

Figure 1.1: A block diagram of a reversible circuit.

A reversible circuit is simply a sequence of gates that are placed on the lines and manipulate their values as the lines pass through the gates. The length or size of the circuit is equal to the length of the gate sequence. The standard gates—defined in Section 2.3.1—such as the NOT gate, the controlled-NOT gate, and the Toffoli gate, only modify one of the lines and use the remaining lines as read-only control lines. This creates a natural distinction between read-only lines, which are not modified by any gate in the circuit, and read-write lines, which may be modified by at least one gate in the circuit. The width of a circuit is the number of lines and the bandwidth of a circuit is the number of read-write lines.

A growth process on a reversible circuit is a random process defined on the space of permutation functions, whose parameters are the number of lines in the circuit and the distribution on the set of gates that may comprise the circuit. The growth process starts with an empty circuit, which realizes the identity permutation. On each iteration the process selects a gate from the distribution and suffixes it to the circuit from the preceding iteration, growing the circuit by one gate. For each iteration of the process there is a corresponding distribution on the set of reversible functions, which induces a distribution on the set of permutations. Naturally, we consider the same three issues as in the preceding section: the existence of a limiting distribution, the support of the limiting distribution, and the convergence rate to the limiting distribution. Not surprisingly, the characteristics depend exclusively on the width of the circuit and the distribution on the set of gates. As we shall see, under a very broad set of constraints, the limiting distribution exists and is uniform over the set of even permutation functions.

1.3 Reversible Circuit Complexity

The complexity of a reversible circuit is usually parameterized by a combination of circuit depth, width, and bandwidth, which are usually stated as functions of the number of variables. The bandwidth of a circuit corresponds to the amount of state that a circuit must track during the computation and is analogous to the space requirements of a computation. Analogously, circuit length corresponds to the time requirements of a computation.

Reversible circuits, and reversible computation in general, present an interesting space-time trade-off. As Landauer [Lan61] observed, information cannot be thrown away during computation; it must either be stored, or reversibly erased. Bennett [Ben73, Ben89] demonstrated that the latter is accomplished by running the computation that generated the information backwards, leveraging the fact that the computation is reversible.
The trade-off between storing data, reversible erasure, and regeneration is analogous to the trade-off between space and time, i.e., a computation may either use additional space to store the information that was generated during the course of the computation, or use additional time to reversibly erase unnecessary information. Within the framework of reversible Turing machines, Bennett [Ben73, Ben89] and Lange et al. [LMT97, LMT00] derived optimal strategies for both time parsimonious and space parsimonious computation. Additionally, Li and Vitanyi [LV96a, LV96b] and Buhrman et al. [BTV01] derived general space-time trade-offs using versions of Bennett's [Ben89] pebbling games that simulate reversible computation.

Toffoli [Tof80] noted the analogous requirement for reversible circuits. Reversible circuits use some number of garbage lines to store temporary values used during the computation. Before being reused, a garbage line must be erased; this is usually accomplished by reversing the computation that generated the value. Consequently, there is a trade-off between the length of the circuit and the number of garbage lines required.

Various models of computation, such as branching programs [Lee59, Weg87, Weg79, Weg82] and straight line programs [Ost54, Cle90], have been used to investigate various problems in circuit complexity. Similarly, models such as permutation branching programs [CG75, Bar85, Bar89, BS95], programs over groups [BT88, BST90], and reversible straight-line programs [Ost54, Cle90] are used to investigate problems within reversible circuit complexity. Results within the frameworks of these computation models are directly applicable—with a little work—within the context of reversible circuit complexity, and vice versa.

In this thesis, we use the framework of permutation branching programs and programs over groups to investigate reversible circuits with low or constant bandwidth. There is a natural relationship between permutation branching programs and a class of reversible circuits whose variable lines are read-only and whose read-write lines represent the state of the branching program during the computation. On the other hand, reversible straight-line programs are closely related to reversible circuits that comprise mostly read-write lines.

In recent years, reversible circuits have generated greater interest within both the quantum computation and the hardware communities. Reversible computation is a necessary requirement for quantum computing and uses significantly less power [Ben82, BGL+93, Ben88a, Ben88b], an important benefit to today's hardware architects. Consequently we consider concrete realizations of several commonly utilized Boolean functions.

1.4 Results in this Thesis

Our goal, as stated previously, is to improve our understanding of two frameworks: growth processes and reversible circuits. Our approach for the former provides a set of general theorems that characterize a growth process based on its initial parameters. We develop theorems for both growth processes on formulas and growth processes on reversible circuits. Our approach for the latter yields a multifarious set of results: relationships between reversible circuits and various models of computation, new bounds on constant bandwidth reversible circuits, new concrete reversible realizations of commonly used Boolean functions, and new heuristics for constructing reversible circuits.
The third chapter of this thesis derives a set of theorems for characterizing growth processes on formulas. After presenting the notation (Section 3.1), we characterize growth processes that use linear connectives (Section 3.2), self-dual connectives (Section 3.3), and monotone connectives (Section 3.4). We use a novel combination of amplification arguments [Val84, Bop85] and spectral analysis [Raz88, Sav90, Sav95a] to derive a technique for characterizing growth processes using other connectives as well (Sections 3.4.2 and 3.5). Additionally, we derive general convergence bounds for the majority of the growth processes under investigation. We note that in some cases even very minute changes in the connective can result in drastically different convergence rates.

The fourth chapter derives a characterization of growth processes on reversible circuits. We relate growth processes on reversible circuits to random walks on groups [Ald89, KLNS89]. Our characterization of growth processes on reversible circuits is accomplished via the techniques used to analyze the random walks. We define growth processes on reversible circuits (Section 4.1) and derive a broad set of constraints that ensure that the limiting distribution exists and is uniform over the set of even permutation functions (Sections 4.2-4.3). We also show that the convergence rate of such growth processes is exponentially slower than in the case of growth processes on formulas (Section 4.4).

The fifth chapter focuses on reversible circuit complexity. In Section 5.1 we review permutation branching programs [Bar85], three different notions of acceptance, and define the notion of a program transformation. Using these notions, we derive relationships between reversible circuits and permutation branching programs, making our results applicable to both reversible circuits and permutation programs (Section 5.2). Using two different complexity measures on reversible circuits we define a complexity hierarchy and relate it to the low-lying complexity classes (Section 5.3). Additionally, in Section 5.4, we describe several concrete reversible circuit constructions, including: an incrementor, threshold function, consensus function, and adder. We show that all of these have concise constructions and derive a set of heuristics for realizing other Boolean functions. Particularly, in Subsection 5.4.2, we show that if a permutation has a polynomial size cycle representation, then the permutation can be realized by a polynomial size reversible circuit!

Chapter 2

Background

2.1 Boolean Functions, Formulas, and Circuits

A Boolean cube of order n is the set B_n = {0,1}^n of all n-bit vectors, and an n-adic Boolean function is a map f : B_n → B_1, i.e., an n-adic Boolean function takes n Boolean arguments and yields a truth value, 0 (false) or 1 (true). A function is monotone if for any assignment of variables, changing any variable from 0 to 1 does not cause the function to change from 1 to 0.

A Boolean circuit is represented by a directed acyclic graph comprising nodes of positive outdegree and a single node of outdegree 0. Nodes of indegree 0 are labeled by a Boolean variable x_i or its negation, and nodes of positive indegree are labeled by a Boolean function, which is typically a conjunction (∧), disjunction (∨), or negation (¬); the node of outdegree 0 is called the output node.
The circuit computes a truth value in the natural way; the value of a node is either the value of the variable with which it is labeled, or the value of the function with which it is labeled—the function's arguments are the values of the node's predecessors. The truth value of the computation is the value of the output node. A Boolean formula is a Boolean circuit where each node has outdegree (fan-out) of at most 1. A monotone circuit (formula) is a Boolean circuit (formula) such that none of the nodes are labeled by negations or nonmonotone functions.

2.2 Random Formulas

Random Boolean formulas have been studied throughout the last half century. Five years after Shannon's [Sha38] seminal paper relating Boolean algebra and switching circuits, Riordan and Shannon [RS42] proved that "almost all" Boolean functions require an exponential number of circuit elements to be realized. Although Shannon [Sha49] proved an upper bound on the complexity of realizing any Boolean function on n variables, it wasn't until 1959 that Lupanov [Lup61b] obtained an asymptotic upper bound corresponding to the lower bound in [RS42]. There was a quick succession of results in the early 60s on bounding the size of circuits and formulas that realize any Boolean function on n variables [Lup58, Lup61a, Lup61b, Lup65, Kri61, Rez62, Kor65, Kor66, Pip76]. However, apart from the quadratic lower bounds on formulas of Neciporuk [Nec66] and Khrapchenko [Khr71, Khr72], little was accomplished in deriving bounds on the complexity of specific functions.

In the early 80's new techniques for deriving lower bounds were introduced. The main features were restrictions on the class of allowable circuits—bounded circuit depth or monotone circuits—and the use of probabilistic techniques. Furst et al. [FSS81] showed that parity could not be computed by a polynomial size bounded depth circuit, by a probabilistic technique involving random partial assignments, called random projections. Similar methods were used by Yao [Yao83, Yao85] and Hastad [Has86, Has89]. Razborov [Raz85] proved a super-polynomial lower bound on monotone circuits that decide the clique problem. This was extended to an exponential lower bound by Andreev [And85] and by Alon and Boppana [AB87], and used by Tardos [Tar88] to prove an exponential gap between monotone and nonmonotone circuit models.

Not surprisingly, probabilistic techniques are extremely useful in proving upper bounds as well. In 1984 Valiant [Val84], using random formulas, proved that the formula complexity of the majority function was not only polynomial—an implication of a result of Ajtai et al. [AKS83] on sorting networks—but that the size was bounded by O(n^5.3). Four years later, Razborov [Raz88] proved that there exist polynomial size monotone formulas that represent some Ramsey graphs. In the former case, probabilistic amplification was used to prove the existence of the small formula that realizes the majority function. In the latter case, spectral analysis was used to prove that the probability of creating the appropriate small formula was not zero.

2.2.1 Probabilistic Amplification

Following the work of von Neumann [vN56], Moore and Shannon [MS56] demonstrated how to construct "Reliable Circuits from Less Reliable Relays". This two-part work laid the foundations for the probabilistic technique that would later be called probabilistic amplification.
Moore and Shannon [MS56] asked the following question: can a reliable two-terminal circuit—one that behaves as if constructed from reliable relays—be constructed using unreliable "crummy" relays that work with probability p; all relays are controlled by a single source. Their approach was to construct a more reliable relay from a number of crummy relays, iterating the process until the required level of reliability was achieved.

Let p be the probability that a crummy relay works (closes when its coil is activated) and let 1 − p be the probability that the crummy relay does not work (remains open when its coil is activated). Given a network of k relays that connect two terminals, the probability that the network works like a relay—the two terminals become connected when the coils are activated—is equal to the probability that some number of relays work, creating a path from one terminal to the other. This probability is given by the corresponding characteristic polynomial

    h(p) = Σ_{i=0}^{k} B_i p^i (1 − p)^{k−i},

where B_i is the number of ways of selecting i of the k relays that, if closed (working), create a closed circuit. Figure 2.1 illustrates the behaviour of some characteristic polynomials.

Since each relay within the network is also a two-terminal network, one possible way to improve reliability is to replace each relay within the original k-relay network with a copy of the network itself; this is called an iterative composition. Analogously, the probability that the iterative composition will work is equal to the composition of the original k-relay network's characteristic polynomial with itself, i.e., h(h(p)), and the probability that a network that comprises i iterative compositions will work is h^i(p), the i-th composition of h(p) with itself.

Figure 2.1: Some typical characteristic polynomials for networks of three crummy relays.

Moore and Shannon [MS56] showed that iteratively composing the k-relay network with itself monotonically increases (or decreases) the reliability of the overall network, and that the rate of change is determined by the characteristic polynomial of the k-relay network. For example, let N_1, N_2, N_3, and N_4 be the four different 3-relay networks that correspond to the characteristic polynomials in Figure 2.1. In the first case, since h_1(p) > p on the interval (0,1), and hence h_1(h_1^{i−1}(p)) > h_1^{i−1}(p), an iterative composition of N_1 realizes a reliable relay that works with probability approaching 1—as the number of iterations increases—regardless of how crummy the original relays were, as long as they are not completely broken. Similarly, since h_3(p) < p and h_4(p) < p on (0,1), iterative compositions of N_3 or N_4 realize very crummy relays that work with probability approaching 0—unless the crummy relays are not crummy at all. Finally, since h_2(p) has a fixed-point, in this case h_2(1/2) = 1/2, an iterative composition of N_2 realizes a reliable relay if the relays are not half bad (i.e., work with probability greater than 1/2), and realizes a crummy relay if the relays are more than half bad.

Moore and Shannon [MS56] considered relays that not only did not close properly, but also did not open properly, causing a closed circuit even when the relay coils were inactive. Thus, their focus was on networks whose characteristic polynomial had a fixed-point. In [MS56] several important properties of h(p) are proved, including that h(p) has at most one fixed-point on the interval (0,1) and that the slope at the internal fixed-point is strictly greater than 1 unless h(p) = p.
The latter implies that for all p less than the fixed-point, say s, h(p) < p on (0, s), and h(p) > p on (s, 1).

Following Moore and Shannon [MS56], the technique proved useful in showing that certain functions had a small circuit or formula complexity. Following Adleman's [Adl78] characterization of the complexity class RP, Bennett and Gill [BG81] proved a similar result for the class BPP, namely, that the languages in BPP could be decided by polynomial size circuits. Additionally, Ajtai and Ben-Or [ABO84] showed that probabilistic constant depth circuits can be simulated by their deterministic analogs. However, perhaps the most elegant exhibition of the technique was by Valiant [Val84].

Since closing a relay in a circuit cannot open a closed circuit and since opening a relay cannot close an open circuit, two-terminal circuits are equivalent to monotone Boolean formulas. For every k-element network, there is a corresponding k-adic formula, called a connective, whose inputs correspond to the relays of the network. The formula evaluates to true on an assignment if and only if closing the corresponding relays closes the k-element network. Analogously, an iterative composition of the formula replaces each instance of a variable with a copy of the formula—the variables are renamed in the copy. If we assume that the variables are independent and are true with probability p and false with probability 1 − p, then the probability of the formula evaluating to true is equal to h^i(p), where h^i(p) is the i-th composition of h(p), the characteristic polynomial of the k-element network. We call this iterative composition process a growth process.

This framework was used by Valiant [Val84] to prove that the majority function can be realized by a monotone formula of size O(n^5.3). Valiant showed that iteratively composing the connective (G_1 ∨ G_2) ∧ (G_3 ∨ G_4) with itself about 2.65 log(n) times and then randomly assigning the n variables and 0 to the inputs of the resulting formula realizes the majority function with high probability. The distribution used for the variable assignment was p(x_i) = φ and p(0) = 1 − nφ, where φ = s/(n − 1) and s is the fixed-point of the connective. The key to the proof is the probabilistic amplification phenomenon. Since the probability of a variable being true corresponds to the weight of the assignment, the probability of an input being 1 is greater than s if and only if the majority of the variables are 1. Consequently, the iterative composition creates divergence away from the internal fixed-point followed by convergence toward the bounding fixed-points. Thus, with high probability the resulting formula evaluates to true if the majority of the inputs are 1, and false otherwise.

Boppana [Bop85, Bop89] further studied probabilistic amplification, deriving upper and lower bounds on the monotone formula complexity of Boolean threshold functions. He showed that Valiant's [Val84] result was indeed optimal, and that the monotone formula complexity of the m-th threshold function is bounded by O(m^4.3 n log(n)), where 0 < m < n. This was accomplished by analyzing the slope of the characteristic polynomial near its fixed-points, in order to obtain bounds on both the divergence away from the internal fixed-point and the convergence towards the bounding fixed-points.
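The amplification phenomenon is easy to observe numerically. The following sketch (Python, our own illustration rather than code from the thesis; the helper names are ours) iterates the characteristic polynomial h(p) = (1 − (1 − p)^2)^2 of Valiant's connective (G_1 ∨ G_2) ∧ (G_3 ∨ G_4) and shows how starting probabilities below the internal fixed-point s = (3 − √5)/2 ≈ 0.382 are driven towards 0 while those above s are driven towards 1.

```python
# A minimal numerical sketch of probabilistic amplification (illustrative only).
# h is the characteristic polynomial of the connective (G1 v G2) ^ (G3 v G4):
# each input is assumed to be independently true with probability p.

import math

def h(p):
    """Probability that (G1 v G2) ^ (G3 v G4) is true when each input is true w.p. p."""
    return (1.0 - (1.0 - p) ** 2) ** 2

def amplify(p, iterations):
    """Iteratively compose h with itself, returning h^iterations(p)."""
    for _ in range(iterations):
        p = h(p)
    return p

s = (3.0 - math.sqrt(5.0)) / 2.0   # internal fixed-point of h, about 0.3820

# Probabilities slightly below s drift towards 0, slightly above s towards 1;
# the construction above composes the connective roughly 2.65*log(n) times.
for p0 in (s - 0.01, s + 0.01):
    print(p0, [round(amplify(p0, i), 4) for i in range(0, 25, 5)])
```

The printed trajectories make the divergence away from the internal fixed-point, and the convergence towards the bounding fixed-points 0 and 1, directly visible.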
Additionally, Gu and Maruoka [GM91] investigated amplification within the context of formulas comprising alternating levels of AND and OR gates, and Radhakrishnan and Subrahmanyam [RS94] investigated threshold functions within the context of directed contact networks. Dubiner and Zwick [DZ92, DZ97] extended Boppana's [Bop89] lower bounds to nonmonotone formulas. All of the preceding work analyzed growth processes whose limiting distribution became concentrated on a single function, i.e., a threshold function; however, many growth processes have a limiting distribution that is spread over a significant portion of the space of Boolean functions, requiring a different approach.

2.2.2 Spectral Analysis

If the limiting distribution of a growth process is not concentrated on a single function, then proving that a limiting distribution exists, or even that there is a nonzero probability of growing a specific function, requires a different approach. One such approach is spectral analysis, which was used by Razborov [Raz88] and Savicky [Sav88, Sav90, Sav94, Sav95a, Sav95b]. In the former case, Razborov [Raz88] used spectral analysis to prove that some large graphs with Ramsey properties had formula representations that are exponentially smaller; this was later quantitatively improved by Savicky [Sav95b], who demonstrated that these Ramsey properties are far from random.

In 1988, Savicky [Sav88, Sav90] proved that under a broad set of conditions, the limiting distribution of a growth process is uniform over the entire space of n-adic functions. Five years later he further generalized the conditions and determined the asymptotic convergence rate towards the limiting distribution. Under a general set of conditions Savicky [Sav94, Sav95a] showed that the rate of approach of the probability of realizing a particular function f to the limiting probability 2^{−2^n} yields information about f: the rate is fastest for the linear functions, and slowest for the bent functions [Rot76]—bent functions are furthest, in Hamming distance, from the linear functions.

In [Sav88, Sav90] Savicky leverages the fact that the limiting distribution is uniform over the space of n-adic Boolean functions and that the Fourier transform (defined below) of the uniform distribution is concentrated on a single Fourier coefficient. He proves that all but one of the Fourier coefficients of the i-th distribution approach zero as i approaches infinity, implying that the i-th distribution approaches the uniform distribution as i approaches infinity. In [Sav95a] Savicky derives an exponentially decaying upper bound on the magnitudes of the Fourier coefficients, proving that asymptotically, the convergence is exponentially fast.

The Fourier transform Â_i of a probability distribution π_i is defined by

    Â_i(f) = Σ_g (−1)^{⟨f,g⟩} π_i(g),    (2.1)

where the sum ranges over the set of all n-adic Boolean functions and π_i(g) is the probability of selecting function g from the i-th distribution. The vectors f and g are Boolean vectors of size 2^n that represent functions by their truth table. Naturally, the inverse Fourier transform is defined by

    π_i(g) = (1 / 2^{2^n}) Σ_f (−1)^{⟨f,g⟩} Â_i(f).    (2.2)

Since the Fourier coefficient of the zero function, 0, is always equal to one, i.e., Â_i(0) = 1 for all i, Savicky [Sav90] shows that all other coefficients tend to zero as i tends to infinity, proving his result.

The Fourier transform plays a role in many of our results, but it needs to be adapted in various ways to suit different cases.
For example, when dealing with linear functions, we represent the functions f and g in definition (2.1) not as Savicky does, by their truth-tables, but rather by their coefficients as multivariate polynomials over GF(2). In other cases, when establishing a limiting distribution that is uniform over a proper subset of all n-adic Boolean functions, we use what we call "restriction lemmas", which assert relationships that hold among the coefficients of the Fourier transform.

2.2.3 Related Work

Not all growth processes have a limiting distribution. In some cases the processes may have alternating limiting distributions; an example of this is worked out in Section 3.2. In this case, as i approaches infinity, the sequence of distributions π_{2i} converges to one distribution, while the sequence of distributions π_{2i+1} converges to another distribution. This is due to the fact that some functions may not be realizable by formulas of odd (or even) depth. For example, a growth process whose initial distribution is uniform on the projections and whose connective is (G_1 ∨ G_2) ∧ (G_3 ∨ G_4) has an alternating distribution; even depth formulas realize only the monotone functions, while odd depth formulas realize only the negated monotone functions. In general, if the projection functions cannot be realized by formulas of synchronized depth greater than some constant depth, no limiting distribution can exist [HN77, HN79]. This inquiry was begun by Kudryavcev [Kud60b, Kud60a], who investigated the completeness of automata with no feedback—essentially combinational circuits. Some of this work was rediscovered [HR98] by Loomis [Loo65] in 1965. In 1977, Hikata and Nozaki [HN77, HN79] proved that projections are necessary and sufficient to ensure completeness. However, this is not necessarily a sufficient condition for the existence of a limiting distribution.

Using another model of random formulas—where a formula is approximated by an infinite tree—Lefmann and Savicky [LS97] obtained a relationship between the formula complexity of a function and the probability of its occurrence within the infinite tree model, namely that the negative logarithm of the probability differs by a polynomial factor from the formula complexity of the respective function. For a fixed connective, G_1 ∧ G_2 ⊕ G_3 ⊕ G_4, Savicky [Sav98] derived a bound relating the formula complexity of a function to the probability that a random formula, grown using the connective, realizes the function. This was accomplished by deriving the behaviour of the Fourier coefficients from one iteration of the growth process to the next.

2.3 Reversible Computation

In 1948 Shannon and Weaver [Sha48, SW49] formalized the equivalence between information and entropy, a result dating back to Maxwell's Demon [Max71] and the work of Szilard [Szi29]. In 1961 Landauer [Lan61] showed that if a computation is performed reversibly—none of the operations throw away information—then the amount of power required to drive the computation is dependent only on its rate. Landauer [Lan61] also proposed a reversible gate that was capable of simulating traditional irreversible Boolean operations, such as conjunction.

In the early 70's Bennett [Ben73] continued the work on reversible computation within the framework of reversible Turing machines. He proved that any Turing machine could be simulated reversibly by using extra space proportional to the length of the computation.
Using the same framework, the trade-off between space and time was further investigated by Bennett [Ben89], Levine and Sherman [LS90], and Lange et al. [LMT00]. This trade-off was also investigated within the context of pebbling games and information theory [Zur89, BGL+93, LV96a, LV96b, LTV98, ABOIN96, BTV01]. However, not until 1980 was a realizable model of reversible computation proposed.

2.3.1 Reversible Circuits

Reversible circuits were first defined by Toffoli and Fredkin in [Tof80, FT82]. The circuits comprise fan-out 1 gates that implement bijective functions and operate on fixed subsets of lines. The three standard gates are: the unary NOT gate, which negates the value on the wire; the binary controlled-NOT gate, which negates the second line if the value of the first line is 1; and the ternary Toffoli gate (controlled-controlled-NOT), which negates the third line if the first two lines both have a value of 1; see Figure 2.2. A feature of these gates is that each one is its own inverse.

Figure 2.2: A NOT (a' = 1 ⊕ a), a controlled-NOT (b' = b ⊕ a), and a Toffoli (c' = a ∧ b ⊕ c).

Since reversible circuits are bijective, not all n-adic Boolean functions may be realized by an n line reversible circuit. For example, the disjunction on n variables cannot be implemented by an n line circuit where the output is, say, the first line in the circuit. This follows from the fact that n line reversible circuits are bijections on the Boolean cube; hence, for exactly half of the outputs the first line must have a value of 0. Thus, in most cases an additional line, which is initialized to 0 and called a garbage line, is required. Using the following elegant argument, Toffoli [Tof80] showed that one additional line is both necessary and sufficient.

Let F : B_m → B_n be an arbitrary Boolean function. Construct a reversible function R_F : B_{m+n} → B_{m+n} such that F is embedded within it. Begin with a truth table for F = {(x, y) ∈ B_m × B_n | F(x) = y}. Let the truth table for R_F be {((x, z), (x', y')) ∈ B_{m+n} × B_{m+n} | x' = x, y' = y ⊕ z, z ∈ B_n, F(x) = y}. This truth table defines a bijective function, within which F is embedded. In fact, with a little twiddling, it is easy to ensure that R_F is an even permutation function. This is vital because even if an n-adic Boolean function is a permutation function, it may not be realizable by an n line circuit.

Universality

Given a universal set of gates, e.g., {∧, ¬}, any Boolean function can be realized using only gates from the set. In the case of reversible gates and circuits, regardless of what finite set of gates is used, there exist reversible functions that are not realizable without the use of an additional line. This was shown by Coppersmith and Grossman [CG75].

Let k < n. The operation of a k-ary invertible gate within an n line circuit realizes an even permutation—an element of the alternating group A_{2^n}. Assuming that the universal set of gates is finite, the set contains a gate with a maximal number of inputs and outputs, say k. Hence, all n line circuits, where k < n, are composed of gates that only realize even permutations, implying that such circuits can only realize even permutations themselves. Thus, reversible functions that compute odd permutations of the symmetric group S_{2^n} cannot be realized.
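The parity argument above is easy to check directly for small widths. The following sketch (Python, our own illustration, not from the thesis) represents the permutation of the Boolean cube B_n realized by a gate, and verifies that a Toffoli gate is its own inverse, that on exactly 3 lines it is an odd permutation (a single transposition), and that embedded in a circuit with more than 3 lines it becomes an even permutation.

```python
# Our own illustration of the parity argument: gates on n lines are permutations
# of the Boolean cube B_n; a k-ary gate with k < n realizes an even permutation.

from itertools import product

def toffoli(bits, c1, c2, t):
    """Apply a Toffoli gate: negate line t iff lines c1 and c2 are both 1."""
    bits = list(bits)
    if bits[c1] and bits[c2]:
        bits[t] ^= 1
    return tuple(bits)

def permutation(n, gate):
    """The permutation of B_n realized by applying `gate` to every n-bit vector."""
    return {x: gate(x) for x in product((0, 1), repeat=n)}

def is_even(perm):
    """Parity of a permutation, computed from its cycle structure."""
    seen, transpositions = set(), 0
    for start in perm:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        transpositions += length - 1
    return transpositions % 2 == 0

gate = lambda x: toffoli(x, 0, 1, 2)
p3 = permutation(3, gate)                    # Toffoli acting on all 3 lines
p4 = permutation(4, gate)                    # the same gate inside a 4 line circuit

assert all(gate(gate(x)) == x for x in p3)   # the Toffoli gate is its own inverse
assert not is_even(p3)                       # on 3 lines: a single transposition (odd)
assert is_even(p4)                           # on more than 3 lines: an even permutation
print("Toffoli: odd on 3 lines, even once embedded in a wider circuit")
```

The same check applies to any gate acting on fewer lines than the circuit has: the idle lines duplicate its cycles an even number of times.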
By using an additional line, any odd permutation on 2^n points can be embedded into an even permutation on 2^{n+1} points and hence can be realized by an n + 1 line reversible circuit. We say that a set of gates is universal, with respect to reversible circuits, if any permutation in the alternating group A_{2^n} may be realized by an n line reversible circuit comprising these gates. Coppersmith and Grossman [CG75] proved that the Toffoli gate and the NOT gate, which they call 2-functions, form a universal set.

Related Results

Fredkin and Toffoli [FT82] derived similar results for conservative circuits. Conservative circuits are reversible circuits that preserve the weight of the wires—the number of ones and zeros—across every operation. In this case, the number of additional lines required to realize any n-adic Boolean function is equal to the maximum difference between the weight of an input and the weight of its corresponding output.

The results of Toffoli [Tof80] and of Fredkin and Toffoli [FT82] imply no bounds on the size of a reversible circuit that realizes a particular function. However, taking a cue from the framework of reversible Turing machines [Ben73, Ben89, LMT00], there is a trade-off between the number of garbage lines and circuit size. Creating garbage without cleaning it up creates unnecessary entropy; a simple technique, first exhibited by Bennett [Ben73], can be used to clean up the garbage lines, i.e., reversibly resetting them to zero, at the cost of some additional lines. Let C be a reversible circuit comprising a number of input/output lines and additional garbage lines. The construction to reversibly reset the garbage lines to zero is shown in Figure 2.3; here i denotes the input, g denotes the garbage, o denotes the output of circuit C, and C^{-1} denotes circuit C's inverse. The gate sequence of C^{-1} is simply the reverse gate sequence of circuit C. The concept of trading lines (space) for another resource reappears frequently in the literature on the complexity of reversible computation; see Section 2.5.

Figure 2.3: A method for resetting garbage lines.

2.4 Random Reversible Circuits

Unlike growth processes on formulas, the multiple fan-out of gates within a general circuit makes analysis of growth processes on circuits intractable. Although reversible circuits retain the interdependence of general circuits, growth processes on reversible circuits are much more amenable to analysis.

Since all gates are reversible and each gate within an n line reversible circuit realizes a permutation, each gate realizes an element of the symmetric group and the empty circuit realizes the identity. A growth process on reversible circuits starts with the empty circuit and grows a circuit by iteratively suffixing gates to the circuit; the gates are chosen according to a fixed distribution on a set of allowable gates. Thus, a growth process on reversible circuits corresponds to a random walk on the symmetric group. To determine whether a growth process has a limiting distribution, to determine the shape of the distribution, and to bound the convergence rate, we look to the theory of random walks on groups and graphs.

A Markov chain is a discrete stochastic process that is defined by its set of states and a stochastic matrix that defines the probability of transitioning from one state to another [MR95].
An ergodic Markov process is one where every state is reachable from every other state in the process; a random walk on a group is an ergodic Markov process. The periodicity of a process is the greatest common divisor of the lengths of all cycles of the corresponding graph induced by the states and transitions of the process. A process is aperiodic if its periodicity is equal to 1. In our case the induced graph is the Cayley graph and, assuming that all gates are their own inverses, the periodicity is either 2 or 1.

The underlying analysis of such processes can be traced to the theorems of Frobenius [Fro12] and Perron [Per07], which are used to characterize the eigenvalues of the respective transition matrices. Assuming the respective process is ergodic, the eigenvalues of the corresponding transition matrix have magnitude at most 1; one of the eigenvalues is equal to 1, and the remaining eigenvalues are of magnitude strictly less than 1 if and only if the process is aperiodic [MR95]. If the process has a period of 2, then one other eigenvalue is equal to −1 and the remaining eigenvalues are of magnitude strictly less than 1. A stationary or limiting distribution exists if and only if the process is aperiodic. In Chapter 4, using these criteria, we derive the conditions under which a growth process on reversible circuits has a limiting distribution and show that the distribution is uniform over the entire set of realizable functions.

2.5 Complexity of Reversible Computation

The modern notion of computational complexity dates back to Shannon [Sha49], who formalized the circuit complexity of a function, and to the work of Hartmanis and Stearns [HS65], who formalized the complexity of a function within the framework of Turing machines. The first notions of complexity within the framework of reversible computation came from Lecerf [Lec63], who proposed the notion of a reversible Turing machine, and Landauer [Lan61], who showed that the energy consumption of a reversible computation could be arbitrarily reduced by reducing the speed of the computation itself.
Since this definition is unsatisfactory, for the study of low-lying complexity classes, the bandwidth of the circuit is defined as the number of read-write lines in the circuit. This is analogous to the space used by a Turing machine that has a separate tape for the input. Additionally, we generally assume a nonuniform circuit model, i.e., the circuits correspond to Turing machines with some amount of advice, which is a function of the input size. Unlike in general circuit complexity, where circuit depth and circuit size are often orthogonal measures, in the case of reversible circuits the two measures are for the most part synonymous— the number of gates is comparable to the depth of the circuit. Thus, reversible circuits are most closely related to narrow circuits, such as the ones inherent in the definition of the class SC: the set of problems computable by bounded fan-in polynomial size circuits of width 0(log(«)') [Coo79]. Reversible circuits can be simulated by general circuits of comparable size—a Toffoli gate can be realized by an AND gate and an XOR gate. Hence, many of the results applicable to general circuits are also applicable to reversible circuits. Reversible computational complexity attempts to address the same questions as general com putational complexity, for example: determining the trade-offs, e.g., space-time, relating the com putational power of the model with respect to other computation models, and, determining the complexity of a particular function or class of functions. We quickly review the related work done in these endeavours. 2.5.1 Space-Time and Other Trade-offs Space-time trade-offs have been a major theme within the framework of reversible Turing machine complexity [Ben73, Ben89, LS90, LMTOO]. These trade-offs were also further investigated within the framework of pebbling games [LV96a, LV96b, LV97, LTV98] and information theory [Ben82, Zur89, BGL+93]. Given an irreversible computation using time T and space 5, Bennett [Ben73] showed how to reversibly simulate it using time 0(T) and space 0(S+ T) by having the Turing machine keep a history of the computation, which was reversibly erased after the output was computed. One of the major criticisms of this simulation is that the amount of space used is proportional to the length of the computation. Unfortunately, since T could be exponentially larger than S, the space overhead is 2.5. Complexity of Reversible Computation 17 considerable. This overhead is mitigated by trading time for space. Instead of recording the history of the entire computation, the computation is divided into segments and the history is kept only for the current segment. At the end of each segment a checkpoint is saved. The history between the last two checkpoints is reversibly erased and used for the history of the next segment. Depending on the size of the segments, a computation may be performed reversibly using 0(Slog T) space and 0(T 2) time. Bennett [Ben89] tightened these trade-offs to 0(T l+e) time and 0(S\ogT) space, by using a hierarchy of segments and checkpoints to perform the simulation. The computation is divided into segments of size m, where m^S and the number of segments is c", where c> 1. The c" irreversible segments of computation are simulated by (2c— 1)" reversible segments via a recursive computa tion; a partial simulation for c — 2 is illustrated in Figure 2.4. The figure depicts the schedule of creation and deletion of the computation segments. 
Lange, McKenzie, and Tapp [LMT00] demonstrated that a space-parsimonious reversible simulation of a deterministic Turing machine M is also possible. Using a variation of the technique used by Sipser [Sip80], and by Cook and McKenzie [CM87], an Euler tour is performed on the configuration graph of M, where the edges are induced by the transitions between configurations. Since M is space bounded, the induced graph is finite and the space used by the simulation is only an additive constant more than the space used by M. The key point is that the tour can be performed reversibly; once a halting configuration is found, the tour stops. Since the number of configurations is potentially exponential, the time of the computation may also become exponential.

The results of Bennett [Ben73] and Lange et al. [LMT00] correspond directly to the observations made about reversible circuits [Tof80, FT82]. Given a general k-ary circuit of depth d and comprising n gates, we can construct an equivalent reversible circuit either by adding O(n) lines (space), thus preserving the depth, or by adding one line and exponentially increasing the depth. A comparable result to the one in [Ben89] is also possible.

2.5.2 Relationships with Other Models of Computation

In our investigation we commonly restrict the bandwidth of the reversible circuits to be either constant, or polylogarithmic in the number of inputs. General circuits that are restricted to polylogarithmic width or constant width are conjectured to only be able to compute problems in the low-lying complexity classes such as NC [Pip79] and SC [Coo79]. We say "conjectured" because we can't even prove that NC^1 ≠ NP [Joh94, Page 136]. A related model of computation is that of branching programs [Lee59], in particular permutation branching programs [Bar85]. These programs are very similar to reversible circuits with read-only input lines and read-write additional lines on which all the computation is performed.

Complexity Classes NC, AC, and SC

The class NC^i (Nick's Class) is the class of problems computable by bounded fan-in circuits of depth O(log(n)^i) and polynomial size; the classes were named by Cook [Coo79] in honour of Nicholas Pippenger [Pip79]. If we allow gates of unbounded fan-in, i.e., {¬, ∧_n, ∨_n}, where ∧_n = ∧_{i=1..n} x_i and ∨_n = ∨_{i=1..n} x_i for all positive n, the class AC^i, also defined in [Ruz79, Coo85], is the class of functions computable by unbounded fan-in circuits of depth O(log(n)^i) and polynomial size. A direct implication of the definitions is that AC^i ⊆ NC^(i+1). The class NC (AC) is defined as the union of NC^i (AC^i) for all i [Joh94]. The class SC^i (Steve's Class) is the class of problems computable by bounded fan-in circuits of width O(log(n)^i) and polynomial size; the classes were named after Stephen Cook, who first studied them in [Coo79]. The class SC is the union of SC^i over all i.
The class L/poly, nonuniform logarithmic space, is the class of problems solvable by a logarithmic-space bounded automaton with a polynomial amount of advice; note that L/poly = SC^1.

Branching Programs

A branching program (BP), introduced by Lee [Lee59], is a rooted directed acyclic graph comprising interior nodes (out-degree 2) and two leaf nodes (out-degree 0). Each interior node is labeled by a variable, x_i, and its two outgoing arcs are labeled 0 and 1. The two leaf nodes are also labeled 0 and 1. The computation begins at the root and traverses the graph along the arcs; at each interior node the computation follows the arc whose label is the value of the corresponding variable, and it terminates at a leaf node, outputting the leaf's label. The depth of a node is the length of the longest path from the root to the node, and the depth of a branching program is the maximum depth of the two leaf nodes. A branching program is synchronized (leveled) if the difference in depth between any two adjacent nodes is exactly one, and its width is the maximum number of nodes at any depth. A width-w permutation branching program (w-PBP) is a synchronized width-w branching program such that the labels of all arcs incident on the same interior node differ. The class PBP is the class of problems that are solvable by polynomial-size branching programs, and the class PBP_w is the class of problems solvable by polynomial-size, width-w branching programs.

There have been significant attempts to derive lower bounds for branching programs, particularly bounded-width branching programs [CFL83, BDFP83, Pud84, BS95] and polylogarithmic-width branching programs [ABH+86]. Perhaps the best known result is that of Barrington [Bar89], who showed that NC^1 is equal to the class of functions computed by polynomial-size bounded-width permutation branching programs. A w-PBP B can be formulated as a sequence of instructions where the i-th instruction is a tuple (j_i, f_i, g_i), where j_i ∈ Z_n identifies one of the n inputs {x_1, x_2, …, x_n}, and f_i and g_i are permutations on Z_w. If x_{j_i} is true, the i-th instruction yields permutation f_i; otherwise it yields g_i. The output, B(x), of program B on input x is the product of the permutations yielded by the instructions of B. Program B accepts language L if there is a permutation σ such that B(x) = σ whenever x ∈ L and B(x) is the identity whenever x ∉ L.

Using permutations from the alternating group A_5, Barrington's [Bar89] construction simulates an AND operation using a commutator and inductively composes a polynomial-size 5-PBP that realizes any function in NC^1. The other direction follows from the fact that computing the product of a polynomial number of elements from a fixed finite group is in NC^1. Interestingly, 3-PBPs are not as powerful as 5-PBPs; computing the conjunction requires an exponentially long program [Bar85]. Not surprisingly, there has been significant work done to investigate the reasons for the disparity between 3-PBPs and 5-PBPs, and to generalize the lower bounds for bounded-width branching programs and programs over groups [BT88, BST90, BS95]. Perhaps the most fascinating problem is determining if there exists a polynomial-size 4-PBP that can compute the conjunction over n variables. Barrington et al. [BST90] have proven that the answer is negative if the instructions of the program are restricted to the alternating group A_4, i.e., the program is over the group A_4. Intriguingly, the problem for general 4-PBPs remains open!
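To make the instruction format concrete, here is a small sketch (not from the thesis) that evaluates a width-w permutation branching program given as a list of instructions (j_i, f_i, g_i), and that uses the commutator idea behind Barrington-style AND simulation. The particular 5-cycles and the two-variable program are arbitrary choices for illustration; the program yields a non-identity permutation exactly when both variables are true.

    # Illustrative sketch (not from the thesis): evaluating a width-w permutation
    # branching program and the commutator trick for AND.  Permutations are
    # tuples whose i-th entry is the image of i.

    def compose(p, q):
        """Permutation composition: (p*q)(i) = p(q(i))."""
        return tuple(p[q[i]] for i in range(len(p)))

    def inverse(p):
        inv = [0] * len(p)
        for i, pi in enumerate(p):
            inv[pi] = i
        return tuple(inv)

    def evaluate(program, x, w=5):
        """Multiply out the permutations yielded by each instruction."""
        out = tuple(range(w))                    # identity of S_w
        for j, f, g in program:
            out = compose(f if x[j] else g, out)
        return out

    identity = tuple(range(5))
    sigma = (1, 2, 3, 4, 0)                      # a 5-cycle
    tau = (2, 0, 4, 1, 3)                        # another 5-cycle, not commuting with sigma
    assert compose(sigma, tau) != compose(tau, sigma)

    # AND(x0, x1) as a commutator: the product is non-identity exactly when
    # both variables are true, and collapses to the identity otherwise.
    and_program = [(0, sigma, identity), (1, tau, identity),
                   (0, inverse(sigma), identity), (1, inverse(tau), identity)]
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, evaluate(and_program, x) != identity)

In Barrington's actual construction the 5-cycles are chosen so that the commutator is again a 5-cycle, which is what allows the AND gadget to be applied recursively.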
2.5.3 Related Work

For completeness, we mention several related frameworks in which reversible computation has been studied. These include: reversible straight-line programs [Cle90], space-time-reversibility trade-offs using a pebbling model [LV96b, LTV98, BTV01], and information-theoretic analysis of reversible computation [Ben82, Zur89, BGL+93, LV96a].

Reversible Programs

In his doctoral thesis, Cleve [Cle90] investigated reversible computation using a straight-line program model. Following the same formalism, a statement of a program is denoted by

    R_i ← f(R_{j_1}, R_{j_2}, …, R_{j_k}),

where f is a function over some ring that operates on k inputs from registers and stores the result in another register, not necessarily distinct from one of the arguments; a register may hold any element of the ring. Each statement induces a map on the register contents and is reversible if the map is a bijection. The statements are usually drawn from a basis, such as

    R_i ← c · R_i, with c an invertible constant, and
    R_i ← (R_j · R_k) + R_i, with j, k ≠ i;

a program is reversible if all of its statements are reversible. Another concrete example is the basis

    R_i ← 1 ⊕ R_i and
    R_i ← (R_j ∧ R_k) ⊕ R_i, with j, k ≠ i,

over the field Z_2, which computes the negation and the Toffoli gates respectively.

Reversible Pebbling

A technique for proving space-time trade-offs is pebbling [Pip80]. The pebble game consists of a connected digraph G and a fixed number of pebbles. Play consists of placing pebbles on and removing pebbles from the vertices of G using the following rules: a pebble may be removed from a vertex at any time, and a pebble may be placed on a vertex only if all of its predecessors have a pebble on them or if the vertex is a start vertex. The game is won only if every vertex is visited. The pebbles represent registers (space) and the vertices represent steps in the computation. The total number of moves—removing or placing a pebble (erasing or storing a register)—corresponds to the length of the computation. The number of pebbles versus the number of moves corresponds to the space-time trade-off. Using the pebble game, Cook [Coo74], Ladner [Lad75], Hopcroft et al. [HPV75], and others proved space-time trade-offs and completeness results. Additionally, the first space-time trade-off, which was due to Paterson and Hewitt [PH70], could be cast in the framework of a pebbling game [Pip80].

In 1996, Li and Vitanyi [LV96b], and Li et al. [LTV98], used pebbling to show that Bennett's [Ben89] reversible simulation of an irreversible computation is, given some assumptions, optimal. The reversible pebble game uses a modified removal rule: a pebble cannot be removed from a vertex unless all of the predecessors of the vertex have pebbles on them. In essence, information cannot be erased; it may only be cancelled out. The simulation graph is a singly linked chain of T vertices, with each vertex representing a step in the irreversible computation. The main result is that if there are n pebbles and T > 2^n, then there is no winning strategy; the computation cannot be simulated. Additionally, if E erasures are allowed, then the computation can be performed with n − log(E + 1) pebbles, where E is odd; for each missing pebble exponentially more erasures must be performed. Buhrman et al. [BTV01] extended the model to accommodate the result of Lange et al. [LMT00].
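The chain version of the reversible pebble game is small enough to search exhaustively. The sketch below is illustrative and not from the thesis; the rules encoded (a pebble may be placed on or removed from a vertex only if its predecessor is pebbled, with the first vertex always available) are one reasonable reading of the rules above, and the exact winnable chain length depends on such conventions, while the exponential growth in the number of pebbles matches the bound just discussed.

    # Illustrative sketch (not from the thesis): brute-force search of the
    # reversible pebble game on a chain of T vertices with p pebbles.  The goal
    # here is to place a pebble on the last vertex at some point.
    def winnable(T, p):
        start, seen, frontier = frozenset(), set(), [frozenset()]
        seen.add(start)
        while frontier:
            state = frontier.pop()
            if T in state:
                return True
            for v in range(1, T + 1):
                legal = (v == 1) or (v - 1 in state)   # predecessor must be pebbled
                if not legal:
                    continue
                if v in state:
                    nxt = state - {v}                  # reversible removal
                elif len(state) < p:
                    nxt = state | {v}                  # placement
                else:
                    continue
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    for p in (1, 2, 3):
        longest = max(T for T in range(1, 2 ** (p + 1)) if winnable(T, p))
        print(f"{p} pebbles: longest winnable chain = {longest}")
        # under these conventions the longest winnable chain is 2**p - 1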
Reversibility and Information Theory

Bennett et al. [BGL+93] and Li and Vitanyi [LV96a] make the observation that the amount of erasure performed during a computation is a measure of its irreversibility. The irreversibility cost E_I(x,y) of reversibly computing x from y, defined by Zurek [Zur89] and discussed by Bennett et al. [BGL+93], is the number of source and garbage bits required to reversibly perform the computation. They show that E_I(x,y) = K(x|y) + K(y|x), up to an additive logarithmic term, where K(x|y) is the Kolmogorov complexity of x given y [LV97]. This measure highlights a striking relationship between the information content of strings and the resources required to compute them reversibly.

Chapter 3

Growth Processes on Formulas

In this chapter we investigate growth processes on formulas. We classify growth processes primarily by the type of connective used by the process. For example, one class is the class of growth processes that use linear connectives. The remaining parameters, such as the initial distribution, the number of variables, and the specific nature of the connective, are considered simply as parameters of the class of growth processes. We characterize several classes of growth processes, including growth processes that use linear connectives, self-dual connectives, and monotone connectives. We derive criteria that determine if a limiting distribution exists, the shape of the limiting distribution, and the rate at which the limiting distribution is approached. We first cover the basic definitions before proceeding to our results.

3.1 Definitions

Let B_n denote the Boolean cube of size n and let the weight of a Boolean vector x ∈ B_n, denoted |x|, be the number of 1 bits in x. For any two Boolean vectors x, y ∈ B_n, we say that x ≤ y if x_i ≤ y_i for all i = 1…n; we say that x < y if x ≤ y and x_i < y_i for at least one i ∈ [1, n]. A Boolean function is said to be linear if it can be represented as the sum of binary variables and constants modulo 2. A Boolean function is said to be self-dual if it satisfies the equation f(x_1, …, x_n) = ¬f(¬x_1, …, ¬x_n). A Boolean function f is said to be monotone if for all assignments x and y, x ≤ y implies that f(x) ≤ f(y). A Boolean function is balanced if exactly half of the entries in its truth table are ones, and the rest are zeros. Let F_n denote the family of n-adic Boolean functions, let M_n denote the family of n-adic monotone Boolean functions, and let L_n denote the family of n-adic linear functions.

Let k be a positive integer and α be a k-adic Boolean function, which we call the connective. A growth process is denoted by a pair (μ, α), where μ is a distribution on F_n and α is a connective. Distribution μ is called the initial distribution and is uniformly distributed on a subset of the projections, their negations, and constants. This subset contains the n projections {x_1, …, x_n}; it may contain their negations {¬x_1, …, ¬x_n}, and it may contain neither, one, or both of the constants {0, 1}. A growth process induces a sequence of probability distributions π_i on F_n for each i ≥ 0 in the following way. We take π_0 = μ and, for i ≥ 1, π_i(f) is the probability that α ∘ (g_1, …, g_k) = f, where g_1, …, g_k are independent random variables with distribution π_{i−1} and α ∘ (g_1, …, g_k) is the composition of α with the Boolean functions g_1, …, g_k, which for conciseness is denoted α(g_1, …, g_k). The i-th iteration of the growth process induces the probability distribution π_i on F_n. The support of a probability distribution π, denoted supp(π), is the set {f : π(f) > 0}, and the support of a growth process is the set of all functions f ∈ F_n for which π_i(f) > 0 for some i ≥ 0: ∪_i supp(π_i).
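For very small n and k the distributions π_i can be computed exactly, which is a convenient way to see these definitions in action. The following sketch is illustrative and not from the thesis; the choices n = 2, the NAND connective, and an initial distribution uniform on the two projections are arbitrary.

    # Illustrative sketch (not from the thesis): exact computation of the
    # distributions pi_i for a tiny growth process.  Functions are represented
    # by their truth tables over the Boolean cube B_n.
    from itertools import product
    from collections import defaultdict

    n = 2
    points = list(product((0, 1), repeat=n))          # the Boolean cube B_n
    def table(f):                                     # truth table as a tuple
        return tuple(f(x) for x in points)

    alpha = lambda a, b: 1 - (a & b)                  # the NAND connective (k = 2)
    x1 = table(lambda x: x[0])
    x2 = table(lambda x: x[1])

    pi = {x1: 0.5, x2: 0.5}                           # pi_0 = mu, uniform on the projections
    for i in range(20):
        nxt = defaultdict(float)
        for g1, p1 in pi.items():
            for g2, p2 in pi.items():
                f = tuple(alpha(a, b) for a, b in zip(g1, g2))
                nxt[f] += p1 * p2                     # independent draws from pi_{i-1}
        pi = dict(nxt)

    for f in sorted(pi):                              # functions reachable so far
        print(f, round(pi[f], 4))

After a few iterations the support already contains every 2-adic function, since NAND is a complete connective over the two projections.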
If n, tends to n, as i approaches infinity, distribution n is called the limiting distribution of the growth process. We say that a growth process converges to a limiting distribution if a limiting distribution n exists such that Tt, tends to n as / approaches infinity. A process converges rapidly if for i > Calog(«), max/|7i(/) — < 2~n, where Ca is a constant that depends only on a. (Unless otherwise stated, the base of the logarithm is assumed to be 2.) There are also cases—which we shall see later—in which 7t2, and Tr.2,+1 tend to distinct alter nating limiting distributions. When a limiting distribution exists, we can have n(f) > 0 only for / in the support of the growth process. As Valiant's [Val84] result indicates, however, there may be functions in the support for which 7i,-(/) tends to zero as i approaches infinity, so that n(f) = 0. The asymptotic support of a growth process with a limiting distribution n is the set supp{n). In formally, the "result of a growth process" refers to the existence, the identity, and the convergence to, the limiting or alternating distributions. If a is a £-adic connective, the characteristic polynomial of a is Mp) = ivi(n)pi(i-pri 1=0 v1 / where (3, is the fraction of assignments of weight i for which a is true: _ \{x€Bk : |x| = /, a(x) = 1}[ (1) If the arguments of a are assigned independent random binary variables, x = xi,... ,Xk, that are 1 with probability p and 0 with probability 1 — p, then Pr[a(x) = 1] = Aa(p). The /th composition of Aa(p) with itself is denoted by A'a(p) =Aa(Aa(.. .Aa(p)...)). The Fourier transform A,- of a probability distribution 7t, is defined as A,-(W)= X (-i){wJ)m(f). (3.1) For convenience, the inner product (w,f) = ^iwifi is defined to be over the integers, rather than over %i, and unless otherwise noted, Boolean n-adic functions are represented by their truth tables, namely, as Boolean vectors from Bin. Since the probabilities must sum to 1, |A,(w)| < 1, for all iv € f„, A,(0) = 1, and the inverse Fourier transform is m(f) = W X (-l)</,H,)A,-(w). (3.2) w£!F„ The Fourier transform plays a role in many of our results, but needs to be adapted in various ways to suit different cases. When dealing with linear functions, for example, we represent the 3.2. Growth Processes that Use Linear Connectives 24 functions not by their truth-tables, but rather by their coefficients as multivariate polynomials over Z2. However, unless otherwise noted, the standard representation is assumed. 3.2 Growth Processes that Use Linear Connectives A function / is linear if it is of the form f(x\,... ,xn) = CQ © c\x\ © • • • ffi cnxn for some constants co, c\,..., cn € 1*2- We may assume without loss of generality that a depends on all its arguments, so that a(yi,... ,yk) = c©yi © • • • ®yk, where k > 2. The result of the growth process depends on the support of of the initial distribution n, the parity of k, and the constant term c. This claim is proven in Theorem 3.2, which considers supports of fi that do not contain negations of projections, and in Theorem 3.3, which considers supports of /ithat do contain negations of projections. For all cases where a limiting distribution exists, Theorem 3.4 bounds the convergence rate of TI,- to the limiting distribution. Thus, we completely characterize growth processes that use linear connectives. To begin, we derive a recurrence for the Fourier coefficients of the respective probability distri butions TI,-, from which we derive the limiting distribution. 
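The recurrence of Proposition 3.1 is easy to check numerically before turning to these theorems. The sketch below is illustrative and not from the thesis; the choices n = 2, k = 3, c = 1, and an initial support consisting of the two projections are arbitrary, and linear functions are represented by their coefficient vectors as in the proposition.

    # Illustrative sketch (not from the thesis): a numerical check of the
    # recurrence Delta_{i+1}(w) = (-1)^<c w1, w> Delta_i(w)^k for a linear
    # connective, with linear functions represented by coefficient vectors
    # (c0, c1, ..., cn) in B_{n+1}.
    from itertools import product
    from collections import defaultdict

    n, k, c = 2, 3, 1
    vectors = list(product((0, 1), repeat=n + 1))
    const_vec = (c,) + (0,) * n                       # coefficient vector of the constant term

    def xor(u, v):
        return tuple(a ^ b for a, b in zip(u, v))

    def delta(pi, w):                                 # Fourier coefficient of pi at w
        return sum(((-1) ** sum(wi * fi for wi, fi in zip(w, f))) * p
                   for f, p in pi.items())

    pi0 = {(0, 1, 0): 0.5, (0, 0, 1): 0.5}            # mu uniform on the projections
    pi1 = defaultdict(float)
    for gs in product(pi0, repeat=k):                 # k independent draws from pi0
        f = const_vec
        prob = 1.0
        for g in gs:
            f = xor(f, g)                             # composing linear functions XORs their coefficients
            prob *= pi0[g]
        pi1[f] += prob

    for w in vectors:
        lhs = delta(pi1, w)
        rhs = ((-1) ** (c * w[0])) * delta(pi0, w) ** k
        assert abs(lhs - rhs) < 1e-12
    print("recurrence verified for all", len(vectors), "coefficients")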
Since compositions of linear functions are themselves linear, we represent the linear functions by the vector (co, c\,...,cn) of their coeffi cients, and the class of linear functions is represented by the Boolean cube Bn+\. Finally, let w\ denote the constant function 1 {w\ = 100... 0), whereas 1 = 11... 1. Proposition 3.1 Let (n, a) be a growth process, where a is a k-adic linear connective and let w G £„. The Fourier coefficients of the probability distribution Tt, of the growth process are described by the recurrence relation Ai+l(w) = (-iy^^Ai(w)k, where c is the constant term of the connective. Proof; A,.+1w = x TI(+1 (/)(-i)<-^> = x 1 n«i(8/)(-i)M • /GA ' /6A |64 7=1 «(«)=/ = X flniigjX-l)^^-^ = 2 ntt,(g,-)(-l)<^> = (-l){cw>'w)Ai(w)k. In the last step of the derivation, the sum is expanded into a telescopic sum comprising k nested sums that collapse into the final result. • Using Proposition 3.1, the following theorems classify the growth processes on linear connec tives. 3.2. Growth Processes that Use Linear Connectives 25 Theorem 3.2 Let (jit, a) be a growth process, where a(y) = cffiyi © • • • ffiy^, k > 1, and the support of fi does not contain negations ofthe projections. 1. If {0,1} C\supp(iJL) ^ {0,1}, k is odd and c — 1, then the growth process has alternating limiting distributions, each of which is uniform over one half of the support of the growth process (which consists of all linear functions for which 0"=i Cj = I). 2. In all other cases, the limiting distribution is uniform over the support ofthe growth process (which depends on k, c, and the presence of constants in the support). Proof: Two facts are key to this theorem: first, that |A;(w)| < 1, and second, that if |A,(w)| < 1, then limI-_00AI-(w) = 0. Only the nonzero (magnitude 1) coefficients contribute to limiting distribution (equation 3.2); fortunately, these are determined solely by the support of the initial distribution. Depending on which constants are part of the support, there are either one, two, or four coefficients of magnitude 1: If k is odd and c = 1, the recurrence from Proposition 3.1 implies that A,+i(wi) = —A,(wi) and A,+i(l) = —A,(l). Hence, if supp(n) fl {0,1} ^ {0,1}, the resulting distribution is alternating. In the case where one of the constants is missing from the support, only two coefficients have magnitude 1, and thus, the alternating distributions are each uniform over half of L\. Otherwise, if both constants are missing, the alternating distributions are each uniform over one quarter of Ln. If c = 0, k is even, or {0,1} r\supp([i) = {0,1}, the limiting distribution exists because the sign of the magnitude 1 coefficients does not alternate. We can read off the limiting distribution from the Fourier coefficients. If both constants are in the support, then the limiting distribution is uniform over L^. If only one of the constants is present, then the distribution is uniform over half of C, and if neither is present, then the distribution will be uniform over a quarter of Ln. • If the support of ju, contains negations of projections, then using the same proof technique yields the following theorem. Theorem 3.3 Let (n, a) be a growth process, where ct(y) = c@y\ © • • • ®yk, k > 1, and the support of fx contains negations ofthe projections. 1. If {0,1} C\supp(ix) = 0 and k is odd then the limiting distribution is uniform over all linear functions of odd number of variables. 2. 
If {0,1} D supp([x) = 0 and k is even then the limiting distribution is uniform over all linear functions of even number of variables. {0,l}n«<pp(/x) = {0,l} => A0(0) {0,l}nsupp(n) = {0} A0(0) {0,l}nsupp(n) = {l} => A0(0) {0, l}C\supp(n) = 0 A0(0) A0(wi) = 1, 1, A0(l) = -1, An(wi) = 1, A0(l) = A0(w, ©1) = -1. 1 3. Otherwise, the limiting distribution is uniform over all of Ln. 3.3. Growth Processes that Use Self-Dual Connectives 26 Proof: If {0,1} Dsupp(ix) ^ 0, then there is only one coefficient of magnitude 1, An(0) = 1, implying the last case. Otherwise, there is one other magnitude 1 coefficient, Ao(l©wi) = —1. If k is odd, then A,+ i (1 ®w\) = A,-(l © w\)k = —1, implying the first case of the theorem. If k is even, then A,+i (1 © w\) = A,-(l©wi)* = 1, implying the second case. • Note, that if projection negations are present, no alternating distribution can occur. To show that the convergence of 7t,- to n is rapid we use the inverse Fourier transform. Theorem 3.4 Let (n, a) be a growth process, where a is a k-adic linear connective, k > 1, and the support of ju. contains the n projections. If the growth process has a limiting distribution n and i > 2[pgBff» then for any linear function f, \n(f) — < 2~n. Proof: Let D — {w : |Ao(w)| < 1}. Then may be written as: KiV) = 2-"-' X (-l)^A,(w) weBn+i = 2-""1 X(-l)(w,/>A;(w) + 2-"-1 X (-l)<lv,/)A;(w) = 7i(/) + 2-"-1 2(-1)<^'>A,-(W). Thus, for any linear function /, = |2""_1 X (-1)<H,,/>A'(/)I < maxlAoWl*'' < (l-n^f. Solving inequality (1 -n~])k' < 2~", in terms of yields: i > • • 3.3 Growth Processes that Use Self-Dual Connectives Savicky [Sav90] showed that if the connective is balanced (that is, if it assumes the value 1 for just one-half of the combinations of argument values) and nonlinear, and the distribution \i is uniform over all the projections, their negations, and constants, then the limiting distribution will be uniform over all of %. If we remove the constants from the support of n and assume the connective a is self-dual, then the support of the growth process is the set of all self-dual functions. In this case the limiting distribution of the growth process is uniform over this support. Unfortunately, bounding the rate of convergence of these growth processes appears to be too difficult.1 Theorem 3.5 If the connective is nonlinear and self-dual, and the support of n comprises the pro jections and their negations, then the limiting distribution will be uniform over the family of self-dual n-adic functions. 'Savicky [Sav95a] did derive bounds for the growth process discussed in [Sav90] (see Theorem 3.27), these bounds are not applicable because the Fourier coefficients of weight 2 functions are not guaranteed have magnitude strictly less than 1. 3.4. Growth Processes that use Monotone Connectives 27 Proof: Observe that there is a bijection between the set of all functions on n variables and the set of self-dual functions on n + 1 variables, for example, the map f(xi,x2,...,x„) f(xi,x2,...,xn)xn+i yf(x0,xi,...,x„)xn+i. The result follows. • 3.4 Growth Processes that use Monotone Connectives We now focus on growth processes that use monotone connectives. For the rest of this section we assume that a is monotone and the support of fl contains only the projections and possibly constants. We first investigate growth processes that use unbalanced connectives and show that the result of such growth processes is a limiting distribution that is concentrated on a threshold function (Theorem 3.13). 
We show that except in one case (Theorem 3.17), the convergence to the limiting distribution is rapid (Theorem 3.16 and Theorem 3.19). We then investigate growth processes on balanced connectives and show that if the number of arguments, n, is even, then the distribution is not concentrated on a threshold function (Theorem 3.26); in Theorem 3.34, we also bound the rate of convergence. 3.4.1 Growth Processes that Use Unbalanced Monotone Connectives Growth processes that use unbalanced monotone connectives concentrate probability on a threshold function; the type of threshold function depends on the connective and the support. A threshold function Tk(x\,... ,xn) assumes the value 1 if and only if at least k of its n arguments assume the value 1. We consider constant functions Tn+\ = 0 and 7b = 1 to be special cases of threshold functions. The following sequence of propositions and lemmas culminate in a formal statement of this claim, Theorem 3.13. There are two cases to consider: first, when the characteristic polynomial of a, Aa(p), has no fixed-point on the open interval (0,1), and second, when Aa(p) has a fixed-point on (0,1). Proposition 3.6 Let (/x, a) be a growth process, where a is a monotone connective whose charac teristic polynomial, A(p), has no fixed-point on the interval (0,1). Then the limiting distribution will be concentrated on a threshold function. Proof: Since A(p) has no fixed-point on (0,1), either Aa(p) < p throughout (0,1), or Aa(p) > p throughout (0,1). If Aa(p) > p throughout (0,1), then by the standard amplification argument— see Section 2.2.1 and Theorem 3.16—the limiting distribution is concentrated on T\ (disjunction of all variables), or 7b if 1 is in the support of \L Similarly, if Aa(p) < p throughout (0,1), then the limiting distribution is concentrated on Tn (conjunction of all variables), or Tn+\ if 0 is in the support of /x. • Furthermore, we can easily describe connectives whose characteristic polynomials have no fixed-points on (0,1). 3.4. Growth Processes that use Monotone Connectives 28 Lemma 3.7 IfAa(p) / pon the interval (0,1), then a(x) = Xj V a'(x) (when Aa(p) > p), or a(x) = X[ Acc'(x) (when Aa(p) < p). Proof: If a(x) ^x,-Va'(x), by Lemma 3.9, Aa(p) = 0(p 2) which implies that there exists a positive constant Eo such that for all 0 < e < eo, Aa(e) < e. Similarly, if oc(x) / x, A a'(x), then by duality, 1 - Aa(l - p) = 0(p 2), which means that Aa(l - e) > 1 - e for all 0 < e < £i for some ei > 0. Since Aa(p) is continuous, there must be a fixed-point in (0,1), which is a contradiction. • In the second case, where Aa(p) has a fixed-point in (0,1), Moore and Shannon [MS56] have shown that this fixed-point is unique. Not surprisingly, the limiting distribution depends on the fixed-point. Thus, we first derive two facts about the fixed-point of the characteristic polynomial. Lemma 3.8 The characteristic polynomial Aa(p) has a fixed-point of j if and only if the connective a is balanced. Proof: By definition X"=oP<C;) 's tne numDer °f assignments for which a is true. If Aa(j) = ^, thenAa(i) = S?=oP.>(")(2),'(2)"",' = FE?=OP«(") = V Hence' XUfr(") = which means that a is balanced. Conversely, if a is balanced, then Aa(j) = \. m Lemma 3.9 If a. is a monotone connective that is not of the form a(x) = x, V a'(x), then on the interval (0,1), Aa(p) <((*) + l)p2-Proof: Since oc is not of the form a(x) = x, V a'(x), the first two coefficients of Aa(p) are Po = Pi = 0. 
Thus, on the interval (0,1), MP) = PaQy + W < Qp2 + B{p)P 2 < (Q + l)/ since B(p) < 1 on the interval (0,1). • Fact 3.10 If a is a nonconstant monotone connective, then: 1. Aa(p) > pon the interval (0,1) if and only if for some i, oc(x) = x, V cc'(x) if and only z/Pi > 0. 2. Po = 0, $k = 1 and P^_i = |, for some a € [0,k]. 3. Aa(s) = s, s € (0,1), Aa(p) < p, p € (0,5) andAa(p) > p, p € (s, 1) if and only ifA'a(0) = ^'a(l) — 0 if and only J/Pi = 0 and Pi_i = 1. Proof: First, the coefficient Pi is the fraction of singleton assignments on which a is 1. Since a is also monotone, Pi > 0 if and only if a(x) — x, V a'(x) for at some x,; the first part of the equivalence follows from Lemma 3.7. Second, since a is nonconstant, the fractions of weight 0 and weight k assignments for which it is 1, is exactly 0 and 1, respectively; there are also exactly k assignments of weight k— 1. Third, if Aa has a fixed-point s on the interval (0,1), then by part one, Pi = 0. Also, (3^—1 = 1, otherwise a is 0 on at least one assignment of weight k — 1 and can be written as a(x) = x, A a!(x). Consequently, by Lemma 3.7, Aa does not have a fixed-point on the interval (0,1), which is a 3.4. Growth Processes that use Monotone Connectives 29 contradiction. If Pi = 0 and P^_i = 1, then a takes neither of the two forms of Lemma 3.7 and hence, must have a fixed-point on the interval (0,1). Finally, since the derivative of Aa is A'a(p) = U(l-p) k- ]+kp k-\l-^-:) + p(l-p)B(p), where B(p) is some polynomial of p. Therefore, A'a(0) = A'a(l) = 0 if and only if Pi = 0 and P*-i = 1- • Lemma 3.11 If 'Aa(p) is a characteristic polynomial of monotone connective OL, such that A(p) is not identically p, then any fixed-point of Aa(p) on (0,1) is either irrational or |. Proof: By contradiction; without loss of generality assume that the fixed-point po = j < \ and gcd (r, s) = 1; if po > \ we consider Aa(l — p), whose fixed-point is 1 — po < j. Hence, Multiplying both sides by s k, noting from Fact 3.10 that P^ = P^_i = 1, and evaluating the result modulo (s — r) 2 yields rs k~1 = £ P7 ( k) rJ(s - rf-> = r k + kr*-\s-r) + (s- r) 2£ p, f^) r>(s - r) k~^ 2 j=0 \JJ ;=0 \J/ = r k + kr k~\s-r) mod (s-r) 2. Evaluating the left side modulo (s — r) 2 yields rs k~ x = r(r+(s-r)) k- 1 = r^ ( k~  l)r l(s - r)^ 1' 1 = r^ + r{k - iy- 2(s -r) + r(s - r) 2 £ ( k ~ M r> (s - rf^'i j=0 \ J J = r k + (k-l)r k-\s-r) mod (s-r) 2. Therefore, r*-1(?-r) = 0mod (s-r) 2. Since gcd (r,s) = gcd(r, (s — r) 2) = 1, r*-1 ^ 0 mod (s — r) 2; this is a contradiction. • Theorem 3.12 Let (/x, a) be a growth process, where OL is a monotone unbalanced connective whose characteristic polynomial has a fixed-point t G (0,1), and the support of jl contains only the pro jections. The limiting distribution of the growth process is concentrated on the threshold function T\tn\ Proof: Since a is unbalanced and has a fixed-point on (0,1), by Lemma 3.8, the fixed-point is not j. Hence, by Lemma 3.11, the fixed-point is irrational. Since the fraction of variables set to true in any assignment is by definition rational, the fraction will always be strictly greater or strictly less than the fixed-point t. Hence, by the standard amplification argument, the limiting distribution will be concentrated on the threshold function 7jm]. • 3.4. Growth Processes that use Monotone Connectives 30 Theorem 3.12 can easily be modified to cover the cases in which one or both constants are in the support of p. 
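The amplification behind Theorem 3.12 can be seen by iterating the characteristic polynomial directly. The sketch below is illustrative and not from the thesis; the 3-out-of-4 threshold connective is an arbitrary unbalanced monotone example whose characteristic polynomial A(p) = 4p^3(1 − p) + p^4 has the irrational fixed point (1 + √13)/6 on (0, 1), so iterating A drives any rational fraction of true inputs away from the fixed point, toward 0 from below and toward 1 from above.

    # Illustrative sketch (not from the thesis): amplification around the
    # irrational fixed point of an unbalanced monotone connective (the
    # 3-out-of-4 threshold).
    import math

    def A(p):
        return 4 * p ** 3 * (1 - p) + p ** 4

    t = (1 + math.sqrt(13)) / 6                 # the fixed point, ~0.7676
    n = 20                                      # number of variables
    for j in range(1, n + 1):
        p = j / n                               # fraction of inputs set to 1
        for _ in range(200):                    # iterate the characteristic polynomial
            p = A(p)
        side = "above" if j / n > t else "below"
        print(f"|x| = {j:2d}/{n} ({side} t): P[f_i(x) = 1] -> {p:.6f}")

    # Assignments with fewer than t*n ones are driven to 0 and the rest to 1,
    # i.e., the process concentrates on the threshold function T_ceil(t*n).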
Combining proposition 3.6 and theorem 3.12 proves the initial claim: Theorem 3.13 If (/x, a) is a growth process, where a is a monotone unbalanced connective and the support of fi does not contain the negations of projections, then the limiting distribution will be concentrated on a threshold function. Convergence Bounds Except in one case, all these growth processes converge rapidly to their limiting distribution. In the exceptional case the convergence requires Can k iterations where k is the arity of the connective a; we provide specific criteria that determine whether a process will converge rapidly or not. There are two main cases: either Aa(p) has a fixed-point, or not. We first derive bounds for the latter case, and then for the former. Unless explicitly stated, we assume that constants are not in supp(fi), however, the following analysis changes little if constants are in supp(n). When Aa(p) has no Fixed-point on (0,1) In this case, either Aa(p) > p for p G (0,1), or Aa(p) < p, for p G (0,1). Since the two cases are symmetric, the same bounds apply to both. Hence, without loss of generality, assume that Aa(p) < p on the interval (0,1). Lemma 3.14 If a is a monotone connective such that Aa(p) < p on the interval (0,1) and, Aa(p) has degree k > 2 and p^t-i < ^j2-, then A'a{ \ — e) > ff/or  a^ positive e < e* = J^+T-Proof: Begin by differentiating Aa(p): j-p (p k+h-\kP k-\\-P)+kfvt Q P'(i - p)k-ij kP k~ x + pVi£/-2(£- kp - I) - X P,- (k) P1'"1 (1 - p) k-'- } (pk - i), i=0 v 3.4. Growth Processes that use Monotone Connectives 31 and evaluate at 1 — e: k-2 A'a(l-e) = ^l-e)*-2(l-e + r3*-i*e-P*-i)-X^^^ i=o W > kil-e^il-e + ^ke-^-Y^C)^-''^ i=o Vv = k(l- e)k~2(\ - e + pV,*e - B*.!) - ke X P, (*) (e)4-''"2 i=0 W > jfc( 1 - e)*-2 (1 - e + B*_,ke - B*_1) - ke X ( i=o V/ > jfe(i-E)*-2(l-e + ^-^(ike-l))-ikE2k AC = (l-e)*-2(2 + te(Jt-3))-*e2*. For k > 2 and e < e*, (1- e)k"2(2 + ke(k-•3))- *e2* (1- t)k-2{2 + ke(k-•3))- 1 2 (1- 48' 2 35 24' Lemma 3.15 //"a is a monotone connective such that Aa(p) < p on the interval (0,1) and, Aa(p) has degree k> 2 and R^_i = ^p-, ?/ien, /or a// positive e <k~\ l + ek< A'a(l - e) < (1 - e) k- 2(k(k- 2)e +1). Proof: For the lower bound, observe that since A'a(p) = kp k~< - X Pi ( k) P l~ X (1 - Pf- 1'' (p* ~ 0, i=0 W for all p > 1 — k~ x, minimizing A'a(/?) maximizes the coefficients P,, for all i = 2...k — 2. Since the connective is of the form a(x) = Xj A o!(x), P, < (k~l) (*) = k^i. Hence, A'a(p) > kp k~ ] -X ( k~ ^P^il-pf-'-'ipk-i) = 1 + (1 -Pf- 2(kp-I)-i=o V 1 / 3.4. Growth Processes that use Monotone Connectives 32 Thus, for all e<k~l, A'a(l-e) > l + ek-2(k-i-kE) > l + e*~2 > 1 + e*. For the upper bound we minimize all PJfor i — 2... k — 2, i.e., P, = 0. Thus, for all p > 1 — k~1, A'a(p) < kp k~ ] -(k-l)pk-2(pk-k+ 1) = pk-\k(k-2)(\-p) + 1), implying that for all £ < k~1, A'a(l-e)<(l-e)k-2{k(k-2)e+l). m Theorem 3.16 Let (/x, a) be a growth process where a is a k-adic monotone connective such that Aa(p) < p on the interval (0,1), k > 2 and P^-i < ^j2-. There exists a constant ca, such that for all n>0,ifi>3 log(n) + ca, then for all f, - n(f) \ < 2~ n. Proof: Let f be a random variable with the distribution n,. Using an argument that is similar to Valiant's [Val84], we claim that if i > 3 log(n) + ca, then for \x\ = n, P[f(x) = 0] = 0, and for all x such that \x\ < n, P[f(x) = 1] < 2~2". The former follows from the monotonicity of a; regardless of the number of iterations, a false negative will never occur. 
In the latter case, assuming that all variables are independent, if |JC| < n, P[fo(x) = 1] = \x\/n < 1 — n~ x. For / > 0, P[f(x) = 1] =A'a(p), where A'a denotes the r'th composition of Aa(p) with itself. Expanding A a (p) around 1, Aa(p)=Aa(l)+A'a(l)(p-l) + 0((p-l)2), yields: Aa(l-e) = l-eA'a(l) + C(e2). From Lemma 3.14, let y = 35/24 and let zk = -^+1 • There exists an £o < ek such that, for all e < Eo, Aa(l-e) < 1-ey. Since P[f0(x) = 1] < 1 -n~\for i > 21og(n) + 21og(£0) > (log(«) + log(e0))/log(35/24), ^(1 -e) < 1 - EV < 1 - E0. An additional constant number of iterations, say da, yields A^(l-Eo) <c. By Lemma 3.9, Aa(p) < k2p2, thus we fix c < ^ and let j = log(n) + 1. Hence, Ai(c)<()t2c)27'<2-2J' = 2-2". Therefore, for i > 31og(n) + 21og(e0) + da + 1 and all x such that \x\ < n, P[f(x) = 1] < 2"2", implying that -Tt(/)| < 2"". • 3.4. Growth Processes that use Monotone Connectives 33 Unfortunately, if =  k-y-, convergence takes time polynomial in n, e.g., if a(x) = \Z k=2(x\ A xi), convergence takes on the order of n k time! Theorem 3.17 Let (/x, cx) be a growth process where cc is a k-adic monotone connective such that Aa(p) < p on the interval (0,1), k > 2 and B^-i = k-^-. For any En < 0, if A la(l — n~ x) < 1 — En, then for sufficiently large n, i>p(n)(log(n) + log(e0)), where (k- l)(k- 2)n < p{n) < £y. Proof: Let f be a random variable with the distribution 7t,. If \x\ = n — 1 then P[fo(x) = 1] = 1 — n_1. Furthermore, by Lemma 3.15, for sufficiently large n, A'a(l-n- ]) < {\-n- xf- 2{k{k-2)n- x + 1). Since y < A'a(l — n~ l), therefore log(Y) < (k-2)\og{l-n- l) + \og(k{k-2)n- 1 + 1), implying that ^>((^-2)log(l-n-') + log(^-2)n-1 + l))-1>fc2_^ + 2 + Q(l). Thus, if A'a(l — < 1 — En, then for sufficiently large n, A'a(l — n~ l) < En implies that i > (k - 1) (it - 2)2n(log(n) + log(E0)). In fact, this is the best case. If a(x) = \Z ki=2{x\ Ax,), then Aa(p) = p — p(l — p) k~ x. By Lemma 3.15, y < 1 + n 2~ k(k- 1 -kn~ l), implying that log(y) < log(l +n 2~ k(k - l-kn~ x)), and i4y>(log(l + -ta"')))"' >^i. Thus, if A'a(l — n~ x) < 1 — En, then for sufficiently large n, i > |^(log(n) + log (en)). •• Consequently, a growth process that uses a connective whose characteristic polynomial has no fixed-point can be classified as either rapidly converging or slowly converging, with the value of the second last coefficient, p\t_i, determining rate of convergence! When Aa(p) has a Fixed-point on (0,1) When the characteristic polynomial Aa(p) does have a fixed-point on the interval (0,1), a similar analysis is used. Lemma 3.18 Let Aa(p) be the characteristic polynomial of any k-adic monotone connective a. If Aa(p) has a fixed-point s G (0,1), then A'a(s) > 1 + 3.4. Growth Processes that use Monotone Connectives 34 Proof: Consider a projection connective, say %(x) =xt,,b€ [1, n]. The corresponding characteristic polynomial is AX{P) = p = P k+P(i - P) k-' + § (* 7!) - P)*-'", whose fixed-point is everywhere and whose slope is 1. Note that Pi = 1 and P,t_i = k=^-. Let TI(X) = ^A*'^ V (y^Xl AXi-be a fc-adic monotone connective. The corresponding characteristic polynomial HP) = p-p(i-p)k-l + pk-\i-p) = Ax{p)-p{l-p) k- x+p k-\l-p) = AX(P) + (A^-AX(P)) has a fixed-point at \. Not surprisingly, this is almost Ax(p) except that Pi = 0 and Pt_i = 1, i.e., the difference is just two terms. We claim that A\ (< A'a(s). The claim is proved by contradiction. 
Assume that A'TL(^) is not the minimum slope at a fixed-point, then there exists a &-adic monotone connective whose degree k characteristic poly nomial A^(p) has a fixed-point t e (0,1), such that A'^(t) < A'T1(^) and A'^(t) is the minimum slope. Since Pi = 0 and P^-i = 1 must hold for A^, we write A^(p) in a manner similar to Ar](p): At(p) = Ar\{p) + (Ac(p) — Ar\(p))- Specifically we are interested in the differences between A^(p) and A^p). In fact, HP) = Mp)+[pi0(i-p)k-'° + ph(i-p)k-h - p h(i-p) k- jo+p"(i-P) k- j where // < <k — 2 and ji > > 2. Since A^(p) ^ A^(p), and A^(p) has a fixed-point on (0,1), either j'o or jo must exist. Without loss of generality assume that io exists and consider the characteristic polynomial A§(p) = A^(p) — p'°(l — p) k~'°. Since the connective corresponding to A^(p) is monotone, kt < I'O- which implies that the derivative p'0-1 (1 - p) k~ io~ l(i0 - kp) of the term p'°(l - p) k~ io is positive for all t <  lf. Since the fixed-point u of A§(p) is bounded by '-^j^-, A'§(M) < A'^(t), implying that A'^(t) is not the minimum slope, which is a contradiction! In fact, iteratively subtracting the terms p H (1 — p) k~ n and adding the terms p;,(l — p) k~^ 1, reduces A^(p) to Atl(p)!. Noting that A\(\) = 1 + completes the proof. • Theorem 3.19 Let (it, a) be a growth process where a is a k-adic monotone connective such that Aa( s) =  s S (0,1). 77iere erata a constant ca, such that for all n > 0, if i>k2 klog(n) + ca, then for all functions f, |7t,(/) — 7t(/) | < 2~". 3.4. Growth Processes that use Monotone Connectives 35 Proof: Let fi be a random variable with the distribution 7t,. Using an argument that is similar to Valiant's [Val84], we claim that if i > k2 klog(n) + ca, then for all x such that |JC| < sn, P[fi(x) = 1] < 2~2n, and for all x such that \x\ > sn, P[fi{x) = 0] < 2"2". We first argue the former. Assuming that all variables are independent, if \x\ < sn, P[fo(x) = 1] < s — n_1ea(n), where ea(n) = minygz \s — ^ \ = \s — & |. Since s is a root of degree k polynomial, by Liouville's Approxi mation Theorem [Apo97] Ea(n) = Jo s n where the constant ea depends only on the connective. For i > 0, P[fi(x) = 1] = A'a(p). Expanding Aa(p) around s, MP) = Ms) +A'a(s){p -s) + 0((p - s)2), yields Aa(s-e) = s-eA'a(s) + 0{e2). By Lemma 3.18, fix y = 1 + 2~k+l; there exists an £n such that for all e < £o, Aa(s — e) < s — ey. Since P[fo(x) = 1] < s — n~}ea(n), if i > log(n ea(n)-1e0)/log(Y) > log(n*+1e-1e0)/log(Y), then A'a(s - e) < s - ey' < s - e0. An additional constant number of iterations, say da, yields By Lemma 3.9, Aa(p) < k2p2, thus, we fix c < ^ and let j = log(n) + 1. Hence, Mc)<(k2cf <2-2J=2-2". Therefore, if log(Y) for all x such that |x| < sn, P[fi(x) = 1] < 2~2". By the same argument, if \x\ > sn, P[fi{x) = 0] < 2~2". Since |x| > sn, for P[fo(x) = 0] < 1 -s-n-ha(n), P[fi(x) = 0)=Aa{p) = 1 -Aa(l-p), and P[fj(x) = 0}=AJa(p), j > 0. Just as in the preceding case, the composition of Aa with itself first yields a first order divergence from l—s, followed by a second order convergence towards zero. Therefore, — < 2~". • To reduce the constant in front of the log term, one solution is to use a nonuniform initial distribution to ensure that ea(«) is bounded from below by a constant, as is done by Valiant [Val84]. 3.4. 
Growth Processes that use Monotone Connectives 36 3.4.2 Growth Processes that Use Balanced Connectives In this subsection, it will be convenient to assume that the support of it contains both constants, as well as the projections, and to deal later with the cases in which one or both constants are missing from the support of it. If the connective is balanced, then by Lemma 3.8, its characteristic polyno-Hence, by the standard amplification argument, the limiting distribution will be concentrated on the n-adic majority function 7[„/2]- In fact, by Theorem 3.19, the convergence to the majority function is rapid; if i > k2klog(n) + 0(1), then |TI,-(/) — 7t(/)| < 2~n. When the number of variables is even, however, something completely different happens. The family of slice functions, denoted Sm,n and defined by Berkowitz [Ber82], are monotone rc-adic functions that assume the value 1 for all assignments of weight greater than m, assume the value 0 for all assignments of weight less than m, and may take on either value for assignments of weight m. Unlike other growth processes where the distribution is either concentrated on a single function or is uniform on the support of the growth process, the following growth processes have a limiting distribution that is uniform on the set S„/2,n- This set includes a large number, of functions; but according to a result of Korshunov [Kor80], includes only a tiny fraction, less than exp —(((j^2+i)2~"//2), of the support of the growth process, which is the set of all monotone functions Mn. Define the ra-adic functions Claim 3.20 The Fourier coefficients of the probability distribution 71 that is uniform on the slice and otherwise functions in Sn/2,n are given by where the inner product is over Z. Proof: Let c = |5j.J"1 = 2~U). If (f,Xn) = 0, then «e5nifl g€W We combine amplification with Fourier methods to obtain our result in this case. The following "Restriction Lemma" is the key to doing this. 3.4. Growth Processes that use Monotone Connectives 37 Claim 3.21 Let (/x, a) be a growth process, where a is a balanced monotone connective and where n is even, and let fbea Boolean n-adic function. If f(x) = I for some x with \x\ < n/2, or if f(x) = 0 for some x with \x\ > n/2, then lim,-_,oo7i;(/) = 0. Proof: This follows from Theorem 3.19. • Lemma 3.22 (The Restriction Lemma) Let (jii, a) be a growth process where a is a balanced monotone connective. Then for all w£f„, HmA/H = (-1)<U«'W> limA,(wAx„). /—•oo i KX) Proof: We begin with the definition A,-W= X (-i^Kv), V6B„ then rewrite the equation as A/(w)= 2(-l)<^>n,-(v) = X X(-l)^>7t,-(fVM), v€fl„ t<Xn u<Xn and consider the restriction of w to the slice |, that is, w A x«- Since lim,^*,Tt,(r V u) = 0 if u ^ u„, lim,_oo A,-(w A Xn) can be rewritten as limA,-(wAx„) = lim V (-l)^^"^,-^) = !™X X (-i)('Vu'wAZ"W'vu) I->°°'<XI.M<X7 <<Xn This, in conjunction with _ ^ (_i)(tVu-,vA^> limTii(rVM) t<XnU<%H _ y ^^(^..WAX.) iim7t;(f vi>n) t<Xn _ y ^^(^'Ax,)^!)^".^) iimjij(«Vt)„) f<Xn = y (_i)(''wA^himn,(rV\)„) _ y (-1)<''W> limJt,-(*V'uB)-HmA,(w) = limX X (-l)('VB,w>n,-('Vu) = X X (-l)^"'^ hm7i,(r V«) '<Xn U<Xn '<Xn U<Xn = X(-1)<'VU",W>lim7I«-('v"») = XC-^^C-^^'^^^^VDn) •<Xn t<Xn = (_!)<»-,-> £ (-1)<''W> HmTt^V^) = (-i)^.->iimA;(wAx„), f<Xn yields the identity. • 3.4. Growth Processes that use Monotone Connectives 38 Hence, all we need to show is that lim^oo A,-(vi>) = 0 for w such that 0 < w < %n. 
To do this we use Savicky's [Sav90] argument, which uses induction on the weight of w and the recurrence k Ai+1{w)=%aj{w)Ai(w)j+yi(w), (3.3) where teBk \'\=j y<n = s n5«(v(a))nA/(v;)) 0</j<» "<<<>=1 5«W = ^(-^(-i)^. Note that unlike the analysis of growth processes with linear connectives in Proposition 3.1, this recurrence is much more complicated. We first state two small lemmas of Savicky's [Sav90] and then the main proposition. Lemma 3.23 (Savicky, 1990, Lemma 5.1 in [Sav90]) Let xi+\ = £*=iO/x/, such that \XQ\ < 1, X*=i \aj\ < 1.  and \a\ \ < 1. Then \xi+\ \ < a\xi\, where a< \a\| + |xo|(l — \a\ |). Hence, |x,| < a'|xo| and lim,-,,*..*;, = 0. Lemma 3.24 (Savicky, 1990, Lemma 5.2 in [Sav90]) Let xi+\ = yi + 2Zkj=\ ajxi> such that |x,-| < 1, and a = zlkj=\ \aj\ < 1- Then \XJ+\ \ < a\xi \ + y; and lim,_,ooX, = 0. Proposition 3.25 Let (/X, a) be a growth process where a is a monotone balanced nonprojection connective, n is even, and the support of pi comprises the projection functions and constants. If 0 < w < %„, then lim,^*, A,-(w) = 0. Proof: Let w < %n and recall equation 3.1: Ao(w) = X rco(/)(-i)(/'w> = -L- £ (-l)^-f€F„ n^L f€supp(n) To prove this proposition we substitute our base cases into Theorem 5.3 in [Sav90]. We need only show that Ao(vv) = 0 if \w\ = 1, and that |AQ(W)| < 1 if |w| = 2. In the first case, since w < %„, w is true on a single assignment of weight n/2. Hence, (w,x,) = 1 for exactly half the projections, where Xi is the ith projection function. Hence, the projections cancel each other out. Similarly, the two constants annihilate one another. Hence, Ao(vv) = 0 if \w\ = 1. The latter case, \w\ = 2, is only slightly harder. Since the constant 0 is part of the support, there will be at least one positive contribution, — l^'^TtoW = Hence, if |Ao(w)| = 1, all other contributions must also be positive, specifically, (w,x,) = 0 for all x,; by the pigeonhole principle 3.4. Growth Processes that use Monotone Connectives 39 this is not possible. Hence, |Ao(w)| < 1 if \w\ = us to invoke Lemma 3.23. For the case of |w| > 2, by induction lim,\ Lemma 3.24 to complete the proof. • 2. Since \w\ = 2, y,(w) = 0 in equation 3.3 allowing ooy;(w) = 0 and, |A,(w)| < 1, hence we can invoke Proposition 3.25, together with Claim 3.20, yields one of our main results. Theorem 3.26 Let (it, a) be a growth process, where a is a monotone balanced nonprojection connective, n is even and the support of it comprises the projection functions and constants. Then the limiting distribution is uniform on the functions in Sn/2,n-Convergence Bounds To bound the convergence within the slice we use a theorem of Savicky [Sav95a]; the conditions of the theorem are verified in Proposition 3.25. Theorem 3.27 (Savicky, 4.8 in [Sav95a]) If a is balanced and nonlinear, AQ(W) = 0 for every w such that \w\ = 1, Ao(w) < 1 for every w such that \w\ — 2, and there exists a w such that \w\ = 2 and Ao(w) > 0, then max|7i,(/) -n(f)\ = e-AM. A more explicit bound, in terms of the number of arguments and the arity of the connective, is possible. We use a more explicit version of Lemma 3.22 and bound the convergence of the growth processes characterized by Theorem 5.3 in [Sav90]. A corollary of Lemma 3.28 is that the same bound also applies to the growth processes on monotone formulas whose limiting distribution is uniform over the slice functions. We first prove Lemma 3.28, then prove a sequence of lemmas that culminate in the bound: Theorem 3.34. 
Lemma 3.28 Let (ju,ct) be a growth process, where a is a balanced monotone connective, then for some fixed £ < 0 and for all w G jF„, Ai(w) = 0(e2) + (-l)^MwAXn). Proof: Following the proof Lemma 3.22 begin with the definition and rewrite the equation as A,-(w)= X (-l)<v,HV-(v) = X E(-1)<'V",W>II.-('V")-vefl„ t<Xn U<Xn Consider the restriction of w to the slice |, that is, wA%n. By Theorem 3.19, after 0(log(n)) iterations 7t,-(f Vn) = 0(e2'), if u ^ v>n. Hence, for i > 0(log(n)), A,(w A %n) can be rewritten as 3.4. Growth Processes that use Monotone Connectives 40 A,-(wAX„) = 2(-l)^A*»>7t,-(v) v£B„ = X X (-l)<'V"'wAX->7I1-(fVll) l<XnU<xH = Yj(-l)<'Vu»^«)Tri(rVi)n)+X X (-ll^'^^il'V") = XC-lJ^-^Tt^V^+X X 0(e2') = o(e2')+SC-1)^"'^^^^^) = 0(e2')+ X(-l)<''wA^>(-l)<u-wAx->7il-(/Vt)„) l<Xn = o(e2i)+Z(-i)('M)m(tvun) l<Xn = 0(e2i)+2Z(-l){''w)Mtvvn). t<Xn This, in conjunction with AiH = £ X(-l)frv-^('Vu) '<Xn U<Xn ><Xn t<XnU<Xn,U^X>n = K-ij^-^-cv^+x X °(£2'') (<Xn t<XnU<Xn,u¥=V„ = 0(e2i)+Z(-iy^ni(t\/vn) t<Xn = 0(t2')+ X(-l)(f'w>(-l)(u"'tl,>^(fV^) = o(e2') + (-l)<u-H'> Xl-1)^71^^^) = 0(e2) + (-l)^w)MwAXn), yields the identity. • Lemma 3.29 (Savicky, 1990, Lemma 4.7 in [Sav90]) For any k-adic connective a, 1) X/€S* Sa(l)2 = L and 2) a is balanced and nonlinear if and only if Sa (0) = 0 and for all s € Bh \Sa{s) \ < 1. Theorem 3.30 (Savicky, 5.3 in [Sav90]) Let (n, a) be a growth process, where a is a k-adic non linear balanced connective, and assume that the initial distribution n is uniform over the projections, their negations, and constants. For all w £ such that w > 0, lim,-_oo A,-(vf) = 0. 3.4. Growth Processes that use Monotone Connectives 41 Lemma 3.31 Let (/*, a) be a growth process, where a is a k-adic nonlinear balanced connective, and assume that the initial distribution /x is uniform over the projections, their negations, and con stants. For any w € jF„ such that \w\ = 2, and any positive c < 1, if i>\og{c~])n2k, then |A,(w)| < c. Proof: By Theorem 5.3 in [Sav90], A*(w) = Xa;(vv)Ai-i(w>'> 7=1 where ,EBk IEBk l(l=J \l\=j By corollary 3.23, |A;(w)| < a'|A0(w)|, where a < a\(w) + A0(w)(l -a\(w)). By Theorem 5.3 in [Sav90], A0(w) < hence, n—1 2 a < ax{w) + —— (l-ai(w)) = 1 - ——-(1 -a\(w)), n+1 n+1 and since ai (w) < 1 — 2 a< 1 -2"*. ~ n+1 Setting xo = Ao(vv) and solving for i in the inequality a'|Ao(w)| < c yields log(c)-log |x0| i> log (a) and substituting for a on the right, log(c)-log|x0| log(c) — log |x0| log(fl) log (1 - ^2-*) < (("+l)2*-1 + ^)(log(C-1) + log|xo|) < ((n+l)2*-1 + i)log(c-1) < n2*log(c-1). Thus, for i > log(c-])n2k, |A,-(w)| < c. • Lemma 3.32 x,+ 1 = y,- + X*=i a/*/, -™c/z tfiatf |x,| < 1, W a - X*=1 < 1. <(/-/ + 2)*a'_'+1 < a < I for some k>0 and I > 0. 77iew |x;| < (f — / + l)k+la'~l for all i > I. 3.4. Growth Processes that use Monotone Connectives 42 Proof: By Lemma 3.24, \xi+]1 < a|x,-| +yt. Hence, by induction on i \xi+i\ < a('(|x,| + (/-/ + 2)y) < a'-' ^l + y(/-/ + 2)^ < a,'-/(i-Z + 2)*+1. Lemma 3.33 Let (it, a) be a growth process, assume that the conditions of Theorem 3.30 are satis fied, and let <*= 2|Sa(0|3<l-2-*. ieBk If\w\ = d > 2 and ^MO+i^f (3.4, rTzen |A,(w)| < al-^bd[i), where bd(i) = (i - i2 + 2)(*+1)"~3, and b2{i) = 1. Proof: The proof is by induction on j. By Theorem 5.3 in [Sav90], k A«'W = y,-(w)+ Xaj,(w)A,_i(w)J, ;=i where *;M = £ 5a(0H,= X Sa(t)d, {t&Bk: \t\=j} {t€Bk: \t\=j] and yM=l( II 5a(v(a))VnA,(v;)Y v€G* \a€B„\w(a)=\ J \j=\ j where Gw = {v£ #2" : 0 < v < w}. 
Since S,£B,5a(l)2 = 1 and \Sa(s)\< 1 for all s € B*, fl(w) = taj(w) = t S 5«(0M 7=1 y'=Wefl*:|f|=y = S5a(0H < E|5a(0lH refi* teBk t£Bk t€Bk The base case, d = 2, is proved by Lemma 3.31. For the base case, d = 3, we first bound y,-(w). By Theorem 5.3 in [Sav90], A,-(w) = 0 if \w\ = 1 and by Lemma 3.31, A,(w) < a'-^MO if |w| = 2 3.4. Growth Processes that use Monotone Connectives 43 Hence, for all w e Gw, A,(w) < a'-»2*iog(a-1) If we let ^ = ^log^"1), then A,-(w) < ai-hb2(i) and y.-H = s ( n «a(v(a))) (n%)] veG* \a€fl„|w(a)=l / \y=l / < I iWi) < X (^-^rf-iCO)* V6G* = (1,0) ^-'W)' < (2rfa,'-^-'^_i(i)] Solving for i in the inequality (2da,'-«-'^_1(i))* < a yields ^+log(fcrf_1(»)) , (*+l)^ _;<; which reduces to equation 3.4 when evaluated at d = 3. Hence, for i > ij yi{w) < (2V-'"-1^_1(0)* < b^iifa1-1" <a, and by Lemma 3.24 and Corollary 3.32 we complete the base case: |A,-+i(w)| < a\Ai(w)\+yi(w) < a|A,(w)| W~%_i(0 < ^|l+Jjlj < a'_«(i-irf + 2) < a''-w(j_i2 + 2) Assume that the hypothesis holds for all w of weight less than some fixed d and let |w| = d. Repeating the above calculations in terms of d we get an identical bound for y,(w), y,-(w) < (2rfo''-^-'6d_1(i))* < fed_1(i)*fl,'-<-<a, 3.4. Growth Processes that use Monotone Connectives 44 where, by the inductive hypothesis, and hence (k+\)dd . ld = lo^Fiy^-1 |A,-+i(w)| < a|A,(w)|+y,(w) < a|A,(W)|W-%_,(0* < d-* I 1 + £ b\ k d-\ J='d / \ i — T ' J=ld / < d'id(i - id + l)(i - h + 2)(*+1)i"1"3* < a''-^(i - i2 + 2)(i - i2 + 2)(*+""~'~3* = a'-^(i-i2 + 2)(*+1)'"3 completing inductive step. Theorem 3.34 Let (n,oi) be a growth process, assume that the conditions of Theorem 3.30 are satisfied, let a be as in Lemma 3.33, and let / = W2Mog(fl-') + 2'"(*+11f -log (a l) For any positive c < 1, ifw^O and IggM log(,-/ + 2) , log (a) log (a ') fTzen |Aj(w)| < a Proof: By Lemma 3.33 the coefficient of weight 2" has the greatest converging bound: |A/+iW|<ai-^(/-n2/:log(a-1) + 2)(*+1)2", . where J * ... , |N 22"(vt+l)2" < /I2* log a"1 + . \ ' =1. log (a 'j 3.5. Growth Processes that Use Other Functions 45 Solving for i in the inequality a,'-'2"(i-n2*log(a-1) + 2)(A:+1)2" < c completes the proof. • Thus, by equation 2.2, L feTn 22" ' 22" < ^r + i 2 |A,(/)| < i+ max |A,(/)|, 22 /€^„\o implying that for all g <E y>„ \n(g) -7t,(g)| < c if i satisfies equation 3.5. Although, the convergence ofthe growth process is eventually exponentially fast, by Theorem 3.34, it may take an exponential number of iterations (in n) before this occurs. Varying the Initial Distribution Proposition 3.25 can easily be modified to cover the cases in which one of the constants is missing from the support of /x: in these cases there is concentration on a single function when n is even and uniform distribution on a set of slice functions when n is odd. When the support of /x consists only of the projection functions, however, the situation can be more complicated. If a is not self-dual or n is odd, the result is the same as when both constants are present. If a is self-dual and n is even, however, the limiting distribution is uniform on a subset of the slice functions. Theorem 3.35 Let (/x, a) be a growth process where a is a monotone self-dual nonprojection con nective, n is even, and the support of fl comprise the projection functions. Then the limiting distri bution is uniform on the self-dual functions in S„/2,n-Proof: This is similar to that of theorem 3.5. 
• We note that there are 22^"/2) self-dual functions in Sn/2,n- According to Sapozhenko's [Sap89] result, this is only a tiny fraction, less than exp — ((„/2+1) 2—"/2_ 1), of the support of the growth process, which is the set of self-dual monotone functions. 3.5 Growth Processes that Use Other Functions We can use the same method to analyze other growth processes. For example, the uniform distri bution on the set of bipreserving functions (that is, those functions satisfying f(0,... ,0) = 0 and 3.5. Growth Processes that Use Other Functions 46 /(l,..., 1) = 1) can be generated by a growth process that uses the bipreserving selection connec tive a(x,y,z) — xy Viz and an initial distribution that is uniform on the projection functions. The same technique as in the monotone case is sufficient to prove this; the corresponding restriction lemma yields the identity HmA,<w) = (-l)^">A(wAK,,), i—+oo where the two functions are Tin = f\xj and K„ = I \JXJ A f\xj. A similar analysis for the O-preserving and 1-preserving functions follows easily. Chapter 4 Growth Processes on Reversible Circuits In this chapter we investigate growth processes on reversible circuits. The investigation is divided into three parts. In Section 4.2 we show that the result of a growth process on reversible circuits is either a limiting distribution or two alternating distributions. We show that in almost all cases the limiting distribution does exist and is uniform over the set of functions that can be realized by the growth process. In Section 4.3 we investigate what functions are realizable by a growth process on reversible circuits, and in Section 4.4 we bound the convergence rate of growth processes on reversible circuits. We first cover the basic definitions before proceeding to our results. 4.1 Definitions Let 52" denote the symmetric group on 2" points and let A2n denote the alternating group on 2" points; the groups will be regarded as acting on the points of the Boolean cube, Bn. Let C denote an m-gate reversible circuit comprising n lines (wires) and let \C\ = m denote the size of C (the number of gates). A reversible circuit takes an assignment, x, of size n and yields an output, denoted C(x), which is also of size n. Unless noted otherwise, reversible circuits are assumed to comprise the standard reversible gates that include: the identity gate, the unary NOT gate, the binary controlled-NOT gate (C-NOT), and the ternary Toffoli gate (controlled-controlled-NOT gate). An m-gate reversible circuit C is specified by a word of length m, C = g\g2.-.gm, over an alphabet Z representing the set of possible gates; each gate, g,, is specified by its operation and the lines on which it operates. A NOT gate, operating on line i, is denoted 0,-; a controlled-NOT gate, operating on line j and controlled by line i, is denoted ®y, and a Toffoli gate, operating on line k and controlled by lines i and j, is denoted by ®l£J. The identity gate, denoted e, serves an important purpose in our analysis, but performs no operation on the circuit. Since each of the gates is its own inverse, the inverse of a gate, denoted gjx, is g,-. The composition of two n-line reversible circuits C = g\...gm and C = g\. ..g', is the con catenation of the two words C" = CC' = g\. ..gmg\ • ..g'm,. Composition is also denoted as C'(C), and the output of the composition is denoted C"(x) = C'(C(x)). Since each standard gate is its own 47 4.1. 
Definitions 48 inverse, the inverse of a circuit, denoted C , is simply C~X = (C)"1 = {g\-..gmTX =gmX •••8\X = gm---g\-Analogously, the inverse of a composition of two circuits is (CC')-1 = C'_1C_1. Let S be a set of rc-line reversible circuits. Since every element in S realizes a permutation on the Boolean cube Bn, each reversible circuit corresponds to an element of S2n and the closure of S (under composition), corresponds to a subgroup of S2n • Thus, every element in the subgroup is realized by some composition of circuits from S. If a reversible circuit C realizes a permutation a £ S2», then we say that C ~ a; if two circuits, C and C', realize the same permutation, then we say that C ~ C. An immediate consequence of the correspondence between reversible circuits and elements of S2n is that the group axioms also apply to the composition and manipulation of reversible circuits. We use the group-theoretic setting to prove many of our results dealing with reversible circuits. A growth process on reversible circuits is denoted by a pair (fi,X) where it is a uniform distribu tion on a subset of the gate set X. Let X = Xn, where unless otherwise noted, the set Xt,, 1 < b <n, comprises • 1 identity gate: e, (no operation is performed) • bNOT gates: ©,-, i = n-6+1,... ,n, • b(n— 1) controlled-NOT gates: ®^-, j = n — b+ 1,... ,n, i = 1,... ,j — 1, j + 1,... ,n, and • ^("2") Toffoli gates: ®'k Aj, k = n — b+ 1,... ,n, j = 1,... ,k— l,k+ 1,... ,n, i = j+ l,...,k — l,k+ l,...,n. Since a reversible circuit realizes an element of S2n, the ith iteration of the growth process induces a distribution xc, on S2n in the following way. The distribution Tto is concentrated on the identity element e, i.e., TIO(E) = 1, and if a e Sin, then 7t,(a) = Pr [Ca,._,Cg ~ a], where 6",_i is a random variable distributed according to distribution 7t,_i, and | is a random variable distributed accord ing to distribution it; and Q denote the corresponding circuits that realize the elements of the respective sample spaces. Recalling the definitions from Chapter 3, a growth process has a limiting distribution, 71, if 7t, approaches 71 as i (the number of iterations) tends infinity; and a growth process has alternating distributions if 7t2, and n2i+\ approach two distinct distributions as i tends to infinity. The support of a growth process is the set of all functions that are realizable by the process, i.e., U,>oswpp(7i;). We say that a gate set 5 is alternating, if there does not exist a circuit C over S such that \C\ is odd and C ~ e. A gate-symmetric gate set S CXj satisfies the condition that if a Toffoli (controlled-NOT or NOT) gate is in S, then all Toffoli (controlled-NOT or NOT) gates that are in Xj, are also in S. Intuitively this means that no line within the circuit is preferred to any other line. If supp(ix) C Xi,, b < n, the support of a growth process (/t,X) will only contain circuits whose bandwidth is less than n, i.e., some fixed set of lines is guaranteed to be read-only. We call such a growth process bandwidth-limited or a bandwidth-/? growth process, if b is known. If for each of the n lines the support of /t includes gates that modify each of the n lines, then the growth process is not bandwidth-limited. For conciseness, an ra-bit input to a bandwidth-limited circuit is denoted xx1 = (x\,... ,xr,x1 \,... jxfb), where x comprises the read-only inputs and x1 comprises the read-write inputs. 4.2. 
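To make these definitions concrete, the following illustrative Python sketch (a minimal example; the bit ordering with line 1 in the least significant position and all helper names are assumptions of the sketch, not part of the thesis) enumerates the gate set X_b for a small circuit and runs a few iterations of a growth process by composing gates drawn uniformly at random.

```python
# An illustrative sketch of the gate set X_b and of growth-process iterations:
# gates are represented as permutations of the 2^n points of the Boolean cube,
# line 1 is stored in the least significant bit (an assumed encoding), and each
# iteration appends a gate drawn uniformly from the support of mu.
from itertools import combinations
from math import comb
import random

def bit(x, i):                       # value of line i (1-indexed) at point x
    return (x >> (i - 1)) & 1

def gate_set(n, b):
    """Enumerate X_b; only lines n-b+1, ..., n may be modified."""
    targets = range(n - b + 1, n + 1)
    gs = [lambda x: x]                                        # the identity gate
    for t in targets:                                         # b NOT gates
        gs.append(lambda x, t=t: x ^ (1 << (t - 1)))
    for t in targets:                                         # b(n-1) controlled-NOTs
        for c in range(1, n + 1):
            if c != t:
                gs.append(lambda x, t=t, c=c: x ^ (bit(x, c) << (t - 1)))
    for t in targets:                                         # Toffoli gates
        for c1, c2 in combinations([i for i in range(1, n + 1) if i != t], 2):
            gs.append(lambda x, t=t, c1=c1, c2=c2:
                      x ^ ((bit(x, c1) & bit(x, c2)) << (t - 1)))
    return gs

n, b = 4, 2
X = gate_set(n, b)
assert len(X) == 1 + b + b * (n - 1) + b * comb(n - 1, 2)     # gate counts above

circuit = []
for _ in range(10):                  # ten iterations of the growth process
    circuit.append(random.choice(X))
point = 0b0101
for g in circuit:                    # evaluate the random circuit on one input
    point = g(point)
print(len(X), format(point, f"0{n}b"))
```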
The Limiting Distribution of Growth Processes on Reversible Circuits 49 4.2 The Limiting Distribution of Growth Processes on Reversible Circuits Amazingly, the question of whether a growth process on reversible circuits has a limiting distribu tion is decoupled from the question of what that limiting distribution is. Assuming that the distribu tion n on the gate set X is uniform, there is straightforward proof that the limiting distribution, if one exists, will be uniform over some set of permutation functions, which corresponds to a subgroup of 52". In this section we prove the following theorem: Theorem 4.1 Let (n,X) be a growth process on reversible circuits, where fx is a distribution on X and let Z = supp(ix). The growth process has a limiting distribution if and only if 2, is not alternating. Otherwise, the result of the growth process is two alternating distributions. To prove this we first argue that the growth process is an ergodic Markov process with peri odicity at most 2 and then argue that the Markov process is aperiodic if and only if the support of distribution ii is not alternating. Lemma 4.2 If (n,X) is a growth process, where X contains the standard gates and fx is a distri bution on X, then the growth process is an ergodic irreducible Markov process with periodicity at most 2. Proof: The state space of this Markov process is a subgroup of 52" that is generated by the set of elements that are realized by the gates in the support of fi. Assuming that the transition matrix is indexed by the elements of the subgroup, entry M0]X of the transition matrix is equal to it(g) where Cg ~ rj_1x, i.e., the probability of drawing gate g according to distribution it, such that composing a with the element realized by g yields x. Since all gates are their own inverses, MOT = AfT)(J, implying that M is symmetric. Therefore, each state can be returned to after two transitions, which means that, by definition, the period of the process is at most 2. Furthermore, since the state space is equal to the subgroup, every state is reachable from the identity, and the identity is reachable from every state. Hence, the Markov process is ergodic, irreducible, and has a period of at most 2. • Lemma 4.3 Let (fi,X) be a growth process, where X contains the standard gates and let Z = supp(pi). The Markov process is aperiodic if and only J/Z is not alternating. Proof: By Lemma 4.2 the periodicity of the process is at most 2; hence, the process is either aperiodic or has periodicity 2. If Z is not alternating, there exists a circuit C G Z+, such that m = \C\ is odd and C ~ e. Hence, there is an odd length cycle in the graph induced by process. Since gcd(2,m) = 1, the process is aperiodic by definition. Conversely, if the process is aperiodic, there must be at least one odd cycle in the respective graph. Concretely, starting in some state, say o~, after an odd number of transitions, the process can end up in the same in the same state. Let C ~ a and let the transitions be denoted by gi, for i= 1,... ,m, where m is odd. Thus, Cg\g2 ...gm~C. Prefixing C-1 to both sides yields that g\g2 • • -gm ~ C~ XCg\g2 ...gm~ C_1C ~ E, completing the proof. • 4.2. The Limiting Distribution of Growth Processes on Reversible Circuits 50 Corollary 4.4 Let (it,X) be a growth process, where X contains the standard gates and let 2 = supp(ii). The Markov process has periodicity 2 if and only if'2 is alternating. Additionally, we make use of a well known result from the Markov Theory. 
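Before invoking that result, Lemmas 4.2 and 4.3 can be checked directly on a small example. The sketch below (a minimal illustration; the two-line gate set, encoding, and helper names are assumptions) builds the subgroup generated by the gates, confirms that every generator is an involution, so the transition matrix is symmetric, and tests whether the induced Cayley graph is bipartite, which is exactly the alternating, period-2 case.

```python
# A small direct check of Lemmas 4.2 and 4.3 on two lines: every gate is an
# involution, so the transition matrix over the generated subgroup is
# symmetric, and the chain has period 2 exactly when the Cayley graph built
# from the gate set is bipartite (the alternating case).
def compose(p, q):                  # apply p, then q; permutations of {0,1,2,3}
    return tuple(q[p[x]] for x in range(4))

# gates on two lines, with line 1 stored in the least significant bit
NOT1   = tuple(x ^ 1 for x in range(4))
NOT2   = tuple(x ^ 2 for x in range(4))
CNOT12 = tuple(x ^ ((x & 1) << 1) for x in range(4))    # control 1, target 2
CNOT21 = tuple(x ^ ((x >> 1) & 1) for x in range(4))    # control 2, target 1
IDENT  = tuple(range(4))

def closure(gens):                  # subgroup generated by the gate set
    group, frontier = {IDENT}, [IDENT]
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                h = compose(g, s)
                if h not in group:
                    group.add(h)
                    nxt.append(h)
        frontier = nxt
    return group

def bipartite(gens):                # try to 2-colour the Cayley graph
    colour, stack = {IDENT: 0}, [IDENT]
    while stack:
        u = stack.pop()
        for s in gens:
            v = compose(u, s)
            if v not in colour:
                colour[v] = colour[u] ^ 1
                stack.append(v)
            elif colour[v] == colour[u]:
                return False
    return True

gens = [NOT1, NOT2, CNOT12, CNOT21]
assert all(compose(s, s) == IDENT for s in gens)   # gates are self-inverse
print(len(closure(gens)), "reachable states; period 2?", bipartite(gens))
```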
Theorem 4.5 (Theorem 6.2, Page 132 in [MR95]) Any irreducible, finite, and aperiodic Markov process has the following properties: 1. All states are ergodic. 2. There is a unique limiting distribution n such that for each state s, n(s) > 0. 3. IfN(s,t) is the number of times that process visits state s in t steps, then lim,^co  N^ = n(s). Proof of Theorem 4.1: By Lemma 4.2 the growth process is an ergodic and irreducible Markov process. By Lemma 4.3 and by Theorem 4.5, the process has a limiting distribution. The "otherwise" part follows from the Frobenius-Perron Theorem [Frol2, Per07]. Let M be the transition matrix of the process, and let 7to be the initial distribution. The ith distribution may be written as 71, = M'TIQ. Following Lovasz [Lov96], 71/ can be written m 71; = M'Tlo = X KjVjvTno, where m is the size of the subgroup of S2", described in the proof of Lemma 4.2, and VJ is the jth eigenvector of M. We assume that the eigenvectors are orthogonal and that the eigenvalues, A,,, are in decreasing order. If the periodicity of the process is 2, then by the Frobenius-Perron Theorem [Frol2, Per07], all but two of the eigenvalues have magnitude less than 1, X\ = 1, and Xm = — 1. Therefore Iim7i2; - (v\vj + vmv rl)n0, (—too lim7t2l+i = (viv[ - vmvlt)n0, l—too which are the two alternating distributions. • The question of whether growth process (n,X) has a limiting distribution is therefore reduced to whether the support of it is alternating or not. The answer, in most cases, is that the support is not alternating. The following proposition enumerates sufficient conditions that ensure that the support of it is not alternating. Proposition 4.6 Let (fi,X) be a growth process, let 2 = supp(n), and assume that X comprises all standard gates that operate on lines l,...,n. If one of the following conditions hold: 1. the identity gate £ is in 2, 2. n > 2 andfor some distinct i,j G [l,n], the gates ®/, ©;, and ©, are in 2 (see Figure 4.1), 3. n > 3 and for some distinct i,j,k G [l,n], the gates ©/, ®*, and ©f are in 2 (see Figure 4.2), 4.2. The Limiting Distribution of Growth Processes on Reversible Circuits 51 4. n > 3 and for some distinct i,j,k G [l,n], the gates 0*Ay, 0*, and 0f are in Z (see Fig ure 4.3), 5. n > 3 and for some distinct i,j,k G [l,n], the gates 0*A7, 0^, 0;-, and 0,- are in Z (see Figure 4.4), or 6. n>4 and for some distinct i,j,k,l G [l,w], the gates 0*Ay, 0^A*, and 0|M are in £ ('see Figure 4.5), then Z is nor alternating. Proof: By inspection. • (i) CD CD _ -©-a) Wo = — (i)^O (10 — o)-e-© (k) Figure 4.1: A NOT gate. Figure 4.2: A C-NOT gate. Figure 4.3: A C-NOT gate. 0) CD Q CD CD Gi ro-Gi ro-Figure 4.4: A NOT gate. Figure 4.5: A Toffoli gate. If the support of /x on X is gate-symmetric and the circuit width is greater than or equal to four, then Proposition 4.6 applies. Proposition 4.6 does not apply to growth processes on reversible circuits of width 2 and 3 that only use controlled-NOT gates (and Toffoli gates, respectively)— even-though the support of it may be gate-symmetric. In general determining whether the support is alternating may be as difficult as determining the support of the growth process. If a limiting distribution does exist, then it will be a uniform distribution on the support of the growth process. This result stems from the following lemma. 
Lemma 4.7 (Lemma 6.3, Page 132 in [MR95]) Let G = (V,E) be a nonbipartite connected graph, let d(v) denote the degree of vertex v ∈ V, and let M be the transition matrix of a Markov process whose states are the vertices of G, such that

M_{u,v} = 1/d(u) if (u,v) ∈ E, and M_{u,v} = 0 if (u,v) ∉ E,

where u,v ∈ V. The limiting distribution π is given by π(v) = d(v)/(2|E|).

Corollary 4.8 If G = (V,E) is a d-regular, nonbipartite, connected graph, then the limiting distribution of the Markov process defined in Lemma 4.7 is uniform over V.

Using this corollary we can characterize the shape of the limiting distribution of a growth process on reversible circuits by the support of the growth process. The key observation is that a growth process induces a Cayley graph of the subgroup of S_{2^n} that is generated by the elements of Σ = supp(μ); this subgroup is the support of the process. The vertices correspond to the elements of the subgroup, and the degree of each vertex is equal to the size of Σ. The following theorem relates the support of the process to the limiting distribution.

Theorem 4.9 Let (μ,X) be a growth process, where μ is a uniform distribution on Σ ⊆ X and X comprises the standard reversible gates. If Σ is not alternating, then the limiting distribution is a uniform distribution over the support of the growth process.

Proof: Since Σ is not alternating, by Theorem 4.1, a limiting distribution exists. The growth process corresponds to the Markov process in Lemma 4.7, where G is the Cayley graph induced by the growth process. By Corollary 4.8, the limiting distribution is uniform over the vertices of G, which correspond to the support of the process. •

Thus, the limiting distribution is completely characterized by the support of the growth process.

4.3 The Support of Growth Processes on Reversible Circuits

The general problem of determining the support of a growth process reduces to the problem of determining what group is generated by a given set of generating elements; this is a notoriously hard problem. In the present context the generating set corresponds to the support of μ, a uniform distribution on the set X of standard gates. Since it is natural to decouple gate type, i.e., NOT, controlled-NOT, or Toffoli, from the lines on which a gate operates, we assume that the support of μ is gate-symmetric. Additionally, under this assumption we can characterize the support of the growth processes. We first discuss processes that are not bandwidth-limited, and then extend the discussion to bandwidth-limited growth processes. In the former case, we first consider processes on circuits with a small number of lines, n = 2 (Theorem 4.10) and n = 3 (Theorem 4.11), and then we consider processes on larger circuits with various supports of μ (Theorems 4.13-4.15). Similarly, for the bandwidth-limited case, Theorems 4.16, 4.17, and 4.19 characterize growth processes on circuits with bandwidth 1, 3, and 2, respectively, and Theorem 4.18 characterizes growth processes on circuits with bandwidth greater than 3.

4.3.1 The Support of Growth Processes that are not Bandwidth-limited

The maximum fan-in of any gate in the gate set X is three (the fan-in of the Toffoli gate). Consequently, there are three cases, with respect to circuit width, that we need to consider: circuits of width n = 2, n = 3, and n > 3. The first two cases can easily be done by computational enumeration, while the last case requires an analytical approach.
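The enumeration route for the small cases can be carried out mechanically. The sketch below (a minimal illustration; the encoding and helper names are assumptions) computes the closure of each gate-symmetric two-line gate set under composition and reports the group order and fixed points, reproducing the characterization stated next.

```python
# An illustrative enumeration of the two-line case: compute the closure of each
# gate-symmetric gate set under composition and report the group order and the
# fixed points.  Points 0..3 encode the two lines with line 1 in the least
# significant bit (an assumed encoding).
def compose(p, q):
    return tuple(q[p[x]] for x in range(4))

NOTS  = [tuple(x ^ 1 for x in range(4)),                  # NOT on line 1
         tuple(x ^ 2 for x in range(4))]                  # NOT on line 2
CNOTS = [tuple(x ^ ((x & 1) << 1) for x in range(4)),     # control 1, target 2
         tuple(x ^ ((x >> 1) & 1) for x in range(4))]     # control 2, target 1

def closure(gens):
    e = tuple(range(4))
    group, frontier = {e}, [e]
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                h = compose(g, s)
                if h not in group:
                    group.add(h)
                    nxt.append(h)
        frontier = nxt
    return group

for name, gens in [("C-NOT and NOT", CNOTS + NOTS),
                   ("C-NOT only", CNOTS),
                   ("NOT only", NOTS)]:
    G = closure(gens)
    fixed = [x for x in range(4) if all(g[x] == x for g in G)]
    print(f"{name}: order {len(G)}, fixed points {fixed}")
# expected: order 24 (all of S4); order 6 fixing the input 0; order 4 (C2 x C2)
```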
The following two theorems characterize the support of growth processes on two and three lines. 4.3. The Support of Growth Processes on Reversible Circuits 53 Theorem 4.10 Let (tt,X) be a growth process on two lines, where /x is a uniform distribution on a subset ofX such that Z = supp(pL) is gate-symmetric with respect to X. Then 1. If'Z contains both controlled-NOT and NOT gates, then the support of the process corresponds to the group S4. 2. If ' Z contains only controlled-NOT gates, then the support of the process corresponds to the group Sj on the inputs {1,2,3} and the input 0 is a fixed-point. 3.IfE contains only the NOT gates, then the support of the process corresponds to the group C2 x C2. Proof: By enumeration. • Theorem 4.11 Let (ii,X) be a growth process on three lines, where \i, is a uniform distribution on a subset ofX such that Z = supp(fi) is gate-symmetric with respect to X. Then 1. IfE contains both Toffoli and NOT gates, then the support of the process corresponds to the group 5g. 2. IfE contains only Toffoli gates, then the support of the process corresponds to the group S4 on the inputs {3,5,6,7} and the remaining inputs are fixed-points. 3.IfE contains both Toffoli and controlled-NOT gates, then the support of the process corre sponds to the group Sj on all inputs except the input 0, which is a fixed-point. Proof: By enumeration. • If we restrict a growth process on n > 2 lines to using only controlled-NOT gates and/or NOT gates, then the resulting support will correspond to a subgroup of affine transformations on the Boolean cube [CG75]. Hence, all interesting growth processes on n > 2 lines must use Toffoli gates. Consequently, the three cases in Theorem 4.11 are the three sub-cases that need to be considered when analyzing growth processes on n > 2 lines. To do this we use a theorem of Coppersmith and Grossman [CG75]. A ^-function, k < n, is a permutation, a € S2n, on the Boolean cube defined by ((ooa2...an_i)a)m = < flm' miL L ,0<j<n-\, y a;©/(a,-,,a,-2,...,a,J, m = j where a = (aoai•••an-\) € Bn, and / is any &-adic Boolean function. If / is a singleton function, i.e., true for only one input, then the ^-function, a, is also called a singleton. For a fixed k, the basic set comprises the 2i(yt"1) ^-functions, corresponding to each of the 2k singleton functions and the possible choices of inputs and output. Theorem 4.12 (Coppersmith and Grossman, [CG75]) Let Gk,n denote the subgroup ofS2n generated by basic set of k-functions, where n > 1. Then 1. If n > 3 and 1 < k < n — 1, then Gk,n = A2n. 2. G„_i,„ = S2n, (k = n- I). 3. G\}n is the group of affine transformations on Bn, (k-1). 4.3. The Support of Growth Processes on Reversible Circuits 54 A reversible gate that takes k control lines, computes a Boolean value, and performs an XOR with the result on another line, realizes a ^-function. Concretely, a Toffoli gate realizes a singleton 2-function—the conjunction of the two control lines—and placing NOT gates on one or both control lines, before and after the gate, realizes the remaining three singleton 2-functions; see Figure 4.6. Thus, Theorem 4.12 can be used to characterize the support of growth processes on n > 3 lines that use Toffoli and NOT gates. (0) (1) (2) (1)A(2) —( 5— ifTi (1)/ \(2) -0-(1)A(2) (1)A(2) Figure 4.6: The four singleton 2-functions. Theorem 4.13 Let ([i,X) be a growth process onn> 3 lines, where it is a uniform distribution on a subset ofX such that Z = supp(n) is gate-symmetric with respect to X and contains both the Toffoli and NOT gates. 
Then the support of the growth process corresponds to the alternating group A2n. Proof: First, each of the four singleton 2-functions is a realized by a Toffoli gate along with 0, 2, or 4 NOT gates that bracket the control lines of the Toffoli gate. Second, one of the four singleton 2-functions corresponds to a Toffoli gate, and a sequence of the four singleton 2-functions corresponds to the NOT gate. Thus, by Theorem 4.12, the support of the growth process corresponds to the alternating group. • If the support of it does not contain both Toffoli and NOT gates, i.e., the support comprises either solely Toffoli gates or, both Toffoli and controlled-NOT gates, then the support of the corresponding growth process is an alternating group, on the set of inputs of weight greater than one and zero, respectively; a similar phenomenon is observed in our analysis of growth processes on two and three lines. The subsequent two theorems formalize this notion. Let A2}n denote the set of all even permutations on the Boolean cube Bn where all inputs of weight less than two are fixed-points; the set A2^n corresponds to the alternating group A2"-n-\ on all points of Bn that are of weight two or more. Similarly, let A\>n denote the set of all even permutations on the Boolean cube Bn where the input of weight zero is a fixed-point. Naturally, A \ „ corresponds to the alternating group A2«_i on all points of Bn except the one of weight zero. Theorem 4.14 Let (n,X) be a growth process onn>3 lines, where it is a uniform distribution on a subset ofX such that Z = supp(ii) is gate-symmetric with respect to X and contains only the Toffoli gates. Then the support of the growth process is A2^n. Proof: Since Z is a proper subset of the the support of it in Theorem 4.13, the corresponding subgroup must be a subgroup of A2n, and hence cannot have any odd permutations. Furthermore, 4.3. The Support of Growth Processes on Reversible Circuits 55 Toffoli gates are triggered only when both their control lines are 1. Thus, every input of weight less than two will not trigger any Toffoli gate and cannot be permuted by a circuit of Toffoli gates. Consequently, the n + 1 inputs of weight less than two must be fixed-points and the support of the growth process can be at most A2FL. Thus, we only need to argue that the growth process can realize all functions in A2>N. The proof is by induction on n. The base case, n = 4, follows from the fact that the 3-cycles that generate A2,4 can be constructed from at most five Toffoli gates; these are listed in Table 4.1 and were determined via computational enumeration. The generators for A2)A, which correspond to er er ©2M er er ~ (3 ? 11) er er er er er ~ (91113) elA2el^®32M®lA2®lM ~ (5 7 13) ®lA202A4e3A40.A2e3A4 ^ (1Q u u) ®lA2@r®]M®lA2@]M ~ (6 7 14) e^302A4elA302A403A4 ^ (12 14 15) Table 4.1: The 3-cycles that span A2,4. the 12 different Toffoli gates, are listed in Table 4.2; however, by Proposition 4.6, only eight are necessary. For a fixed n > 3 assume that the support of a growth process on n lines is A2}„. We now er ~(3 7)(11 15) 0iA2 ~(3H)(7 15) e^3 ~(5 7)(13 15) eiA3 ~ (5 13)(7 15) e^4 ~(9 11)(13 15) ©^4 ~ (9 13)(11 15) e?A3 ~(6 7)(14 15) er~(6 14)(7 15) ©?M ~ (10 11)(14 15) ©f4-(10 14) (11 15) ©f4~(12 13)(14 15) ©|A4~(12 14)(13 15) Table 4.2: Generators of A2^. argue that the support of growth processes on n + 1 is A2t„+\. 
To prove the inductive step we show that all permutations in A2^n+\, which can be represented by a 3-cycle, can be realized; this implies that all functions A2t,1+i are realizable. In fact, it suffices to show that at least one 3-cycle can be realized. Fix the permutation a' = (3 7 15), let circuit C' ~ a', which by the induction hypothesis can be realized by the growth process on n > 3 lines; and assume that C' comprises the lines labeled 1,... ,n. For a = (7 15 31), we construct C ~ a, by modifying C in the following manner. Assuming that the n + 1 line in C is labeled 0, replace each Toffoli gate in C by the construction of Barenco et al. [BBD+95], exhibited in Figure 4.7. Thus, circuit C, exhibited in Figure 4.8, is a controlled version of C that is controlled by line 0, that is, C~ CT. To construct a permutation represented by the 3-cycle (x 15 31), where |x| > 1, we construct an additional circuit C" such that C" ~ T = (xl)(yz) where y and z are not 15 and 31, and conjugate rj by T to yield TOT-1 = (x 15 31). The circuit for x can be realized in a manner similar to realizing circuit C. Thus, every permutation in A2>n+\, which is represented by a 3-cycle can be realized, implying that every permutation in A2,„+i can be realized by the growth process. • 4.3. The Support of Growth Processes on Reversible Circuits 56 (0) (i) (j) 4—+ (k) -e (i) — Y Figure 4.7: A (0)-controlled Toffoli. (0) (n)H Controlled-C Figure 4.8: The circuit C ~ (7 15 31). Theorem 4.15 Ler a growth process on n > 3 /mes, where fiis a uniform distribution on a subset ofX such that £ = supp(u) is gate-symmetric with respect to X and contains the Toffoli and controlled-NOT gates. Then the support ofthe growth process is A \ >n. Proof: Same argument as in Theorem 4.14. • Since the limiting distribution, if one exists, is uniform over the support of the growth process, the limiting distribution of non-bandwidth-limited is uniform over a large fraction of the symmetric group. One way to reduce the support is to use bandwidth-limited growth processes. 4.3.2 The Support of Growth Processes that are Bandwidth-limited The support of bandwidth-limited growth processes is significantly smaller than the support of non-bandwidth-limited growth processes. Since some fraction of the lines are fixed, i.e., read-only, the realizable permutations correspond to a quotient group of the alternating group A2n. The n lines of a bandwidth-limited growth process comprise r < n read-only lines and b = n — r read-write lines. Without loss of generality, we assume that the first r lines, 1,..., r are read-only and the remain ing lines, r+ 1,... ,n are read-write. Consequently, all realizable permutations are elements of the quotient group A2n/Br, i.e., each element can be factored into 2r disjoint permutations, denoted o\ such that the points of each permutation share the same unique /--bit prefix, x E Br. That is, for input xx1, x remains fixed and x1 is permuted by a*, where cr* is a function of x. Since some lines are read-only, the support of distribution it is naturally restricted to the set of gates that only modify read-write lines. In this context, the notion of gate-symmetry is therefore taken to be with respect to X),, the set of gates that may comprise a circuit of bandwidth b. The analysis of bandwidth-limited growth processes can be divided into four parts. Namely, we consider processes where the number of read-write lines is 1, 2, 3, or greater than 3, i.e., b = 1, b = 2, b = 3, or b > 3. 
In all but the b = 2 case the respective support of the growth processes can be explicitly obtained. In the case of b = 2, determining the support is related to an open question that is discussed in Chapter 5. We discuss the cases where b^2 and then look at what happens when 6 = 2. 4.3. The Support of Growth Processes on Reversible Circuits 57 Theorem 4.16 Let (fi,X\) be a bandwidth-limited growth process on n > 1 lines where b — 1, r = n—l and £ = supp(fi) is gate-symmetric. 1. IfZ contains only the controlled-NOT gates, then the support ofthe process corresponds to permutations CT of the form: (x\,...,xn-i,x'\)<5=(x\,...,xn-\,x'\@f\(x)), where f\ (x) = ©"=/c,-x,- and c,- 6 {0,1}. 2. IfL contains only Toffoli gates, then the support ofthe process corresponds to permutations a of the form: (xi,..:,x„_i,x'i)o-= (xu...,xn-Ux\ ®fz(x)), where f2(x) = ©"=1 ©"=,•+1 CijXjAXj and Cjj G {0,1}. 3.IfZ contains both Toffoli and controlled-NOT gates, then the support of the process corre sponds to permutations CT of the form: (*!,... ,Xn_i,x'i)CT = (*],... ,X„_i,x'i ©/] (x) ffi/2(x)). 4. If the NOT gate is also present then the size of each ofthe supports is doubled, containing the negation of the functions f\ and f2 as well. Proof: Since all gates correspond to elements that commute with each other and each gate is its own inverse, all circuits have a canonical form, i.e., either a circuit has one instance of the gate or not. This corresponds to taking a linear sum (modulo 2) of the outputs of the gates and the last line of the circuit. • In the present context, disallowing certain gates from £ results in the same phenomenon that was observed in the preceding subsection, namely, certain inputs are fixed-points of any permutation realized by the growth process. Since we are interested in the general structure of the respective support, we restrict our attention to growth processes whose distribution, it, is uniform over the entire set Xb, i.e., the set £ is gate-symmetric and contains all the NOT gates, the controlled-NOT gates, and the Toffoli gates. We now characterize the support of growth processes on circuits with three read-write lines and processes on circuits with more than three read-write lines. Theorem 4.17 Let (it, A3) be a bandwidth-limited growth process on n > 3 lines, where b = 3, r = n — 3 and \x. is a uniform distribution on A3. Then, the support ofthe growth process comprises all permutations CT of the form (XX')CT = x(xl TCT*), where T is a permutation on lines n — 2,n—\, and n, which is independent of x, and CT* is a permutation on the three lines that is even and depends on x. Proof: To prove this we use a modification of Barrington's [Bar85, Bar89] technique, which was described in Section 2.5.2. Let p be a permutation on B3, and let circuit C be a realization of p on the lines n — 2, n — 1, and n. We first need to show that if p is even, there is a corresponding controlled circuit C^'K i.e., that there exists a circuit that realizes permutation p on lines n — 2, n — 1, and n, if 4.3. The Support of Growth Processes on Reversible Circuits 58 and only if line i, i < r, has a value of 1. Since we cannot implement a controlled-Toffoli gate on three lines [CG75], we cannot use the construction in Theorem 4.13. However, we can leverage the fact that p is even to avoid this problem. By Theorem 4.11 we know that circuit C exists. 
Since the only gates that realize an odd per mutation on the three read-write lines are Toffoli gates and p must be even, there must be an even number of Toffoli gates in C. Since there are three types of Toffoli gates ®n~2An~\ ©^I^", and ®"~2A"> we replace the latter two with the former, via the construction illustrated in Figure 4.9. Note, that this does not change the number of Toffoli gates. Next replace all controlled-NOT gates with Toffoli gates and all NOT gates with controlled-NOT gates, connecting the additional input to line i, i.e., all controlled-NOT and NOT gates become (i)-controlled gates. Observe that if line i has value 0, then none of the controlled gates will work; this leaves only the Toffoli gates, which are all of the same type. Since Toffoli gates are their own inverses and there is an even number of them, their computations will cancel each other out. Thus, if i has value 0, the resulting permutation is the identity. However, if i has value 1, then all the controlled gates will trigger if and only they would have triggered in the original circuit C; the resulting permutation is p. Thus, we have constructed a controlled circuit that computes the even permutation p on lines n — 2, n — 1, and n. -2An (n-2) (n-1) (n) CD i(b Figure 4.9: A simulation of one Toffoli gate with another. Since we can construct controlled circuits that can compute any permutation p € A%, using Barrington's [Bar89] conjunction construction we can construct a circuit Cx which realizes the per mutation cr*, that is, the circuit performs permutation p on lines n — 2, n — 1, and n if and only if the lines 1,..., r have the value x. A controlled circuit that realizes an odd permutation can not be constructed. Using contra diction, suppose that we were able to construct a controlled circuit where C realizes an odd permutation p. There exists an even permutation x—which can be realized by a controlled circuit— such such that px ~ ®"~2A"_1, i.e., realizes a Toffoli gate. Hence, the controlled circuit realizes a Toffoli gate with three control lines, something that is impossible according to Coppersmith and Grossman [CG75]. To complete the proof we show how an oblivious permutation on the lines n — 1, n — 1, and n is realized. Consider a circuit comprising a C^ control circuit and an oblivious circuit D on the three lines. Let cr*' ~ C^ and let x ~ D. Observe that o^'x = x(x~l)Xiax'xXi = x(x-1ax)*'; note that x_1o~x is also an even permutation. Thus, C^D ~ DC'^l\ where C' ~ x_1ax. Note that the oblivious circuit D can realize any permutation in S° on the lines n — 2, n — 1, and n. m A corollary of Theorem 4.17 is the following theorem, which characterizes the support of bandwidth-Z> growth processes for b > 3. The proof parallels that of Theorem 4.17, except that 4.3. The Support of Growth Processes on Reversible Circuits 59 the Toffoli transformation argument is unnecessary because the construction technique used in The orem 4.14 suffices. Theorem 4.18 Let (lx,Xb) be a bandwidth-limited growth process onn> b>3 lines, r = n — b, and IX is a uniform distribution on X3. Then, the support of the growth process comprises all permutations CT of the form (xr')a = x(x!tax), where x is an even permutation on the b read-write lines that is independent of x, and CT* is an even permutation on the b read-write lines that depends on x. Proof: Since only even permutations are realizable on the lines n— r+ 1,... ,n, the parity of x must also be even. 
Similarly, any even permutation p on B^, can be realized by some circuit C. For any line i, i € [1, r], the circuit can be transformed into an (j)-controlled circuit via the construction depicted in Figure 4.8. The rest of the proof mirrors Theorem 4.17. • Unfortunately, the case when b = 2 is much more challenging. Unlike the two proceeding characterizations, characterizing the support of a bandwidth-2 growth process with currently known techniques is not possible. In fact, such a characterization would go a long way to solving a problem that has remained open for nearly 20 years; see Chapter 5 for an in depth discussion of the problem. We complete this section by proving that a natural set of functions that are in the support of bandwidth-fr growth processes, where b > 2, are not in the support of bandwidth-2 growth processes. Theorem 4.19 Let (lx,X2) be a bandwidth-limited growth process on n > 2 lines, where b — 2, r = n — 2, and \x is a uniform distribution on X2. If a realizable permutation is of the form (xx')c! = x(x! cr*), where cr* is a permutation on lines n — 1 and n, and is performed for only a specific value ofx, then CT* must be even. Proof: The proof consists of two steps. First, we argue that for any i, 1 < i < r, and any permutation on lines n—l and n of a bandwidth-2 circuit, an (i) -controlled and (1)-controlled circuits that realize the permutation can be constructed. Second, we show that if cr*, for a particular x, can be realized, then cr* for any x can be realized. Finally, without loss of generality, we fix x to be 1, and argue that any circuit that realizes cr* must realize an even permutation. Any permutation on B2 may be constructed from three permutations, (01), (02), and (03). Figure 4.10 illustrates the controlled realizations for each of these permutations. Thus, any (i)-controlled or (z')Z-controlled permutation on lines n—l and n is realizable by combining these con structions in the natural way. Suppose we have a circuit C that performs permutation ay \i x = y^ \ and the identity if input x 7^ y. To construct a circuit C' that realizes the same permutation if and only if x = 1, we replace all gates that are controlled by lines i such that y, = 0. Since each such gate realizes an (/)-controlled permutation, we replace it with a circuit that realizes the (/)-controlled permutation. The only exception is a Toffoli gate that is doubly controlled by lines i and j. However, each Toffoli gate can be replaced by a sequence of gates that are not doubly controlled. For example, consider a Toffoli gate that is controlled by lines / and j, i,j<n—l and operates on line n—l. This may be written as ((01)(23))^A(;). Mimicking a technique of Barrington's [Bar85], this gate may be replaced by four singly controlled permutations (012)W, (132)^, (021)W, (123)^. Once all doubly controlled gates are eliminated, the transformation is applied. 4.4. Convergence of Growth Processes on Reversible Circuits 60 (i) • (n-1)-(n) 0*0 (oiy (i) • (n-1)-(n) -0 ——< c I— ^ n Pi \—i —i (Tii I i(Ti 5— (01) (i) (0 2)' 1 c 1 I >) , 1 1 (1 —i (Tii > —( H (0 2)1 ra 3)(i) ra 3)(i) Figure 4.10: Realizations of controlled 2-cycles. Thus, without loss of generality we limit our attention to circuits that compute permutation a1 if x = 1, and e if x ^ 1. Let C be such a circuit and by the preceding argument, assume that C comprises a sequence of uncontrolled and controlled circuits C^'\ I < i < r. 
If any of the r lines are 0, then C performs the identity permutation, e, on the two read-write lines, and if all lines are 1 then the permutation a1 is performed. Thus, if all lines are 0 then only gates that are not controlled by the read-only lines will operate. Since the circuit performs e, which is an even permutation, the uncontrolled parts of the circuit must realize an even permutation. Now consider all singleton inputs, i.e., where line i has value 1 and all other read-only lines are 0. On such an input, all parts of the circuit controlled by line i will operate, however, since C must also perform the identity permutation, the parity of the permutations performed by the controlled parts must be even. Thus, since parity is preserved, a1 must be even. Given any even permutation on the two read-write lines, a circuit of length 0(2") can be constructed by mirroring the construction in [Bar85]. • While the support of bandwidth-2 growth process includes a large number of permutations, it lacks the natural set of permutations that perform a odd permutations for a single input. At the same time, the support is much more complicated than that of bandwidth-1 growth processes. The nature of the support of bandwidth-2 growth processes plays an important role in a well known open problem regarding the computational power of bandwidth-4 permutation branching programs, which are studied in Chapter 5. 4.4 Convergence of Growth Processes on Reversible Circuits We have shown that a growth process on reversible circuits will either have a limiting distribution or two alternating distributions. Unlike in the case of growth process on formulas, the limiting distribution will be uniform on the support of the growth process. In fact, the size of the support is usually quite large, on the order of Q((2b !)2" 'ri~c), where b is the bandwidth of the growth process 4.4. Convergence of Growth Processes on Reversible Circuits 61 and c is some positive constant. Furthermore, the support of it, the distribution on the gate set, is relatively small, on the order of 0(bn2) gates. Hence, it is not surprising that the convergence rate of such processes is relatively slow, compared to growth processes on formulas. To bound the rate of convergence we refer to the current literature on random walks on groups and look at the Cayley graph induced by the growth process. The graph is d-regular, where the degree d, is equal to the cardinality of £ = supp(n), and hence bounded by 0{bn2). Additionally, the size of the graph is equal to the size of the support of the growth process, and hence, is on the order of 0((26!)2" ''n~c). Thus, the diameter of the graph is bounded from below by ^(j^). which means that the convergence rate is also bounded from below by £2 (5^), and for non-bandwidth-limited growth processes will in fact be polynomial in 2". Thus, the convergence of growth processes on reversible circuits is significantly slower than in the case of growth processes on formulas. Using well known bounds we can bound the second eigenvalue; thus, bounding the rate of convergence. The lower bound inequality, which was "discovered by many people" [Fri91] such as McKay [McK81] and Alon [AI086], bounds the second eigenvalue from below, To bound the second eigenvalue from above we use the inequality exhibited by Diaconis and Saloff-Coste [DSC93], versions of which also appeared in [Ald87] and in [Bab91]: where / is the diameter of G. 
Thus, the second eigenvalue is bounded by Thus, the rate of convergence will be polynomial in the diameter of the Cayley graph. Chapter 5 Reversible Circuit Complexity In this chapter we investigate the reversible circuit complexity of finite Boolean functions. This chapter is divided into three parts. In Section 5.2 we investigate the complexity of finite Boolean functions in the context of bounded bandwidth reversible circuits, in particular, when the bandwidth is on the order of a couple of lines. The problem of characterizing the support of growth processes on bandwidth-2 reversible circuits, which was discussed in Chapter 4, is closely related to the problem of determining the complexity of conjunction within the context of bandwidth-2 reversible circuits. Section 5.3 discusses polylogarithmic bandwidth reversible circuits, defines a natural hierarchy of classes of problems that are decidable by reversible circuits with polylogarithmic bandwidth, and relates the hierarchy to the SC hierarchy. Finally, in Section 5.4 we investigate the complexity and constructions of several common families of Boolean functions and obtain some general criteria for bounding the complexity of finite Boolean functions within the context of non-bandwidth-limited reversible circuits. 5.1 Definitions Recall from Chapter 4 that a reversible circuit Conn lines comprises a sequence of m gates, C = g\... gm, each of which is controlled by zero, one, or two lines, and each of which XOR another line with the conjunction of the control lines; the gates are the NOT gate (©,•), the controlled-NOT gate (©/), and the Toffoli gate (©/M), where i, i— l,...,n, label the lines of the circuit. The output of circuit C on input x G B,„ is denoted C(x) G Bn, and the composition and inverse of circuits corresponds respectively to the concatenation and reversal of the circuits' gate sequences. A circuit C is bandwidth-limited if at least one line in the circuit is read-only; a bandwidth-fc circuit, where b < ra, comprises b read-write lines and r = n — b read-only lines. We assume that lines 1,..., r are read-only lines and the remaining lines, r + 1,..., ra, are read-write lines. Each reversible circuit realizes a permutation on the Boolean cube Bn, which corresponds to an element of the group S2" • We write C ~ a G S2" if C realizes a and we write C ~ C if C and C' realize the same permutation. An (/)-controlled circuit, denoted C^'\ performs two different permutations depending on the value of line i. If line i has value 1, the circuit performs a permutation in which line i is fixed, otherwise the circuit performs the identity permutation. Additionally, we say that a circuit 62 5.1. Definitions 63 is x-controlled, for some x G Bk, k < n, if the circuit, denoted Cx, is controlled by a fixed subset of lines of size k, called control lines. If the lines hold the value x, then Cx performs a permutation in which the control lines are fixed, and performs the identity permutation otherwise. Unless otherwise stated, x G Br, and the control lines are the read-only lines, 1,..., r. Using the definitions of Barrington [Bar85, Bar89], a width-w permutation branching program (w-PBP) P on n variables x\,...,xn is a sequence of instructions of the form cr*' or CT1, where instruction cr*' yields permutation a G Sw if x,- = 1 and yields the identity, 8, otherwise; instruction a1 always yields a. On input x, program P yields a permutation P(x) € Sw, where P(x) is the product of the yields of the instructions of P. 
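As a concrete illustration of these definitions, the short sketch below (a hypothetical example, not a program used in the thesis) evaluates a width-3 program on all inputs; the fact that its yield is a fixed non-identity permutation exactly on the accepted input anticipates the notion of strong acceptance introduced next.

```python
# An illustrative width-3 program: the commutator-style instruction sequence
# (01)^{x1} (12)^{x2} (01)^{x1} (12)^{x2} yields the 3-cycle (012) exactly when
# x1 = x2 = 1 and the identity otherwise, so its yield P(x) distinguishes the
# conjunction of the two variables.
from itertools import product

W = 3
E = tuple(range(W))

def compose(p, q):                        # apply p, then q
    return tuple(q[p[i]] for i in range(W))

SWAP01, SWAP12 = (1, 0, 2), (0, 2, 1)
program = [(0, SWAP01), (1, SWAP12), (0, SWAP01), (1, SWAP12)]

def yield_of(program, x):
    out = E
    for var, perm in program:             # controlled instructions sigma^{x_var}
        if x[var] == 1:
            out = compose(out, perm)
    return out

for x in product((0, 1), repeat=2):
    print(x, yield_of(program, x))
# only (1, 1) yields a non-identity permutation, namely (012) = (1, 2, 0)
```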
A program P over group G is defined in a similar manner [BST90], except that each instruction, cr*', yields either a group element a G G, or the identity, and the yield of the program, P(x), is also an element of G. Thus, w-PBPs are also programs over groups where G = Sw. The length of program P, denote \P\, is the number of instructions in P. Let P be a program over group G on n variables, xi,...,xn. Program P strongly accepts lan guage L C Bn if there exists a a € G, a ^ e, such that for all x G B„ P(x) = ( £' X*L . W | CT, x G L Program P weakly accepts language L C S„ if for all x € Bn P(x) = {£' **L . 1 CT^ e, xG L Since every finite group G can be embedded into a permutation group Sw for appropriate w, the following definition makes sense for programs over any finite group. Adapting a definition due to Borodin et al. [BDFP83], program P monotonically accepts L C Bn if there exists a point p G [1, w], such that for all x G B„, p is not a fixed-point of yield P(x) if x G L, and P(x) = e if x 0 L. Observe that all strongly accepting programs are also monotonically accepting and all monotonically accepting programs are also weakly accepting. Additionally, if / is the characteristic Boolean function of L, i.e., f(x) = 1 if and only if x G L, then we say that program P strongly computes, weakly computes, or monotonically computes function /. Just as in the case of reversible circuits, branching programs may be composed in several ways. The concatenation of two programs P and P', denoted PP', is the concatenation of the instruction sequences of P and P'. Since the inverse of a concatenation (PP')-1 is P'~XP~X, the inverse of a program P is the inverted sequence and reversed of the instructions of P. A commutator of programs P and P', [P,P'} = PP'P~~lP'~l, mirrors the group-theoretic definition of a commutator. Finally, the kth power of P is the concatenation of k copies of P, i.e., Pk = PP...P. Let P be a program over group G. A program transformation T takes a program P as an argument and yields a new program via a basic set of operations. A transformation is either 1. a constant: *P(P) = T1 for some T G G, 2. a projection: Y(P) = P, 3. an inverse: VF(P) = P_1, or 4. a concatenation: 'F(P) = 0(P)Y(P) of two transformations. 5.2. Bounded Bandwidth Reversible Circuits 64 For conciseness, the composition of transformations, vF(0(f)), is commonly used, but is not a basic operation because it is constructible from the preceding rules. An important feature is that the resulting program XF(P) has length that is only a constant factor larger than the original program. Concretely, a transformation is a word comprising placeholders and oblivious instructions, e.g., T1, where each placeholder is replaced by an instance of program P, or its inverse, when the transformation is applied to P. Applying a transformation to a constant program, which is a permutation, yields a permutation. Hence, a program transformation is also map from G to itself. This concise characterization is used extensively in our discussion; however, while each transformation is a map from G to itself, not all maps are valid transformations. 5.2 Bounded Bandwidth Reversible Circuits Bounded bandwidth reversible circuits share many similarities with bounded width permutation branching programs (w-PBP). 
Thus, it is not surprising that many of the results obtained within the context of the permutation branching program framework are also applicable to the reversible circuit model; particularly, within the framework of bounded bandwidth reversible circuits. Thus, we first derive the explicit relationship between bounded bandwidth reversible circuits and bounded permu tation branching programs, and then use the PBP framework to characterize bounded bandwidth reversible circuits. Theorems 5.1 and 5.3 show the equivalence between bandwidth-2 reversible circuits and width-4 permutation branching programs. While Theorem 5.16 shows how weakly accepting 4-PBPs can be transformed into monotonically accepting 4-PBPs of comparable size, Theorem 5.18 shows that weakly accepting 4-PBPs cannot be transformed into strongly accepting 4-PBPs. This is in contrast to Theorem 5.24, which shows how to transform a weakly accepting PBP of width 5 or greater into a strongly accepting PBP! 5.2.1 Simulating Reversible Circuits with Branching Programs In order to compare the two models we need to define what it means for one model to simulate the other. A width-w permutation branching program P realizes a map from the Boolean cube to the symmetric group, P : B„ —> Sw, and a reversible circuit C realizes a map C : Bn —• Bn. However, a bandwidth-o reversible circuit C can be viewed in a different manner. Since the circuit comprises b read-write lines and r read-only lines, depending on the values of the r read-only lines, circuit C performs a permutation on the b read-write lines. Thus, C may also be viewed as a map of the form C: Br —> S2i> • We say that a bandwidth-o reversible circuit C simulates a w-PBP on n variables, if C realizes the map C : Br —> S2i,, where r — n and w < 2b, such that Cx performs permutation P(x) on the b read-write lines. Similarly, a w-PBP on n variables simulates a bandwidth-o circuit, if w = 2b, r = n, and P(x) is the permutation that Cx performs on the b read-write lines. One immediate observation is that 2-PBPs cannot simulate bandwidth-1 reversible circuits if the circuits contain Toffoli gates. Since each instruction of a 2-PBP depends on at most one variable, 5.2. Bounded Bandwidth Reversible Circuits 65 i.e., cf', and a G S2 must either be an identity, or the permutation (01), then each 2-PBP yields (01 )/M, where f{x) = c$ ® 0"=i c,x;, and c,- G {0,1}. Namely, 2-PBPs can strongly compute any linear function and cannot compute, even weakly, any nonlinear function. Since the Toffoli gate computes a conjunction of two inputs, no 2-PBP can simulate a Toffoli gate. Another negative simulation result is that if 2b — 2 < w < 2b, then bandwidth-o reversible circuits cannot simulate all w-PBPs. A corollary of the results of Coppersmith and Grossman [CG75] is that odd parity permutations cannot be realized by reversible circuits on more than three lines. Thus, no instruction that yields an odd permutation can be simulated by a reversible circuit. However, if w < 2b — 2, there exist two points, p\ = 2b — 1 and p2 = 2b — 2, such that any odd permutation CT G Sw, can be embedded into A2i, as the even permutation a(p\p2) G A2i,. Thus, the permutations yielded by the program P are preserved. Fortunately, 4-PBPs and bandwidth-2 reversible circuits can simulate each other. For the remainder of our discussion we assume that all gates, Toffoli, controlled-NOT, and NOT gates, are available. 
Theorem 5.1 If P is a 4-PBP on n variables and length m, then there exists a bandwidth-2 re versible circuit of length 4m that simulates P. Proof: There are two types of instructions that we need to consider, controlled instructions, cf, and oblivious instructions, a1. Let cr*' be an instruction of P. By Theorem 4.19, the corresponding controlled circuit, can be realized by concatenating in some order the controlled circuits that realize the permutations (01), (02), and (03). In fact, via computational enumeration, all 24 per mutations can be realized in four gates or less. An oblivious instruction can be realized in nearly identical manner by replacing all controlled gates with uncontrolled ones. Since each instruction can be simulated by at most four gates, the result follows. • An immediate corollary of Theorem 5.1 is that any w-PBP P, w < 2b — 1, can be simulated by a bandwidth-o circuit C; furthermore, |C| G 0(\P\). Corollary 5.2 IfP is a w-PBP on n variables such that w < 2b — 1, for some b, then there exists a bandwidth-b reversible circuit of length 0(\P\) that simulates P. Proof: By Theorems 4.17and 4.18, any even permutation on the b read-write lines may be realized either in a controlled or oblivious form. Since every permutation in Sw can be embedded into A2t, using a number of gates that depends only on b, a w-PBP P can be simulated by a bandwidth-o circuit that comprises Od/5!) gates. • The converse of Theorem 5.1 is also true. Theorem 5.3 If C is a bandwidth-2 circuit comprising n read-only lines and m gates, then there exists a 4-PBP of length 4m that simulates C. Proof: To prove the latter, we need to show that each reversible gate: NOT, controlled-NOT, and Toffoli gate, can be simulated by one or more instructions. There are three classes of gates that we need to consider: gates that are not attached to any of the read-only lines, gates that are attached to exactly one of the read-only lines, and gates that are attached to two read-only lines. Each gate 5.2. Bounded Bandwidth Reversible Circuits 66 that is not attached to any read-only lines and realizes a permutation, CT, on the two lines, can be simulated by a single oblivious instruction CT1. Similarly, each gate that is attached to exactly one read-only line, which performs a permutation CT on the two lines if the control line is 1, can be simulated by a single controlled instruction CT*' . The only gate that is attached to two read-only lines is a Toffoli gate that is controlled by two read-only lines and negates one of the two read-write lines, i.e., ®/M, where j,k < n+ 1 and i = n + 1, n + 2. If both control lines j and k have value 1, then the Toffoli gate performs either permutation (01)(23), if i = n+l, or permutation (02)(13), if i = n + 2, on the two read-write lines. The former Toffoli gate can be simulated via four instructions that form a commutator [(012)% (132)*] = (012)*' (132)* (021)*; (123 )** = ((01) (23))*^', and a similar set of four instructions simulate the latter Toffoli gate. Observe, that if either Xj or xk are 0, the commutator yields the identity and if both are 1, then the commutator yields (01)(23). Thus, every gate can be simulated by either one or four instructions, completing the proof. • A corollary of Theorem 5.3 is that any circuit C of bandwidth b can be simulated by a 2fc-PBP that is at most four times the length of C. Corollary 5.4 If C is a bandwidth-b circuit comprising n read-only lines and m gates, then there exists a 2b-PBP of length 4m that simulates C. 
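The commutator identity used in the proof of Theorem 5.3 is easy to verify mechanically. The sketch below (an illustrative check, assuming yields are multiplied left to right, in the order the instructions appear) confirms that the four instructions simulate a Toffoli gate that is doubly controlled by read-only lines.

```python
# A mechanical check of the commutator identity from the proof of Theorem 5.3:
# the instructions (012)^{x_j} (132)^{x_k} (021)^{x_j} (123)^{x_k} yield
# (01)(23) when x_j = x_k = 1 and the identity otherwise.  Yields are
# multiplied left to right, i.e. in the order the instructions appear.
from itertools import product

E = (0, 1, 2, 3)

def compose(p, q):                  # apply p, then q
    return tuple(q[p[i]] for i in range(4))

def cyc(*pts):                      # permutation given by one cycle on {0,1,2,3}
    p = list(E)
    for a, b in zip(pts, pts[1:] + pts[:1]):
        p[a] = b
    return tuple(p)

program = [(0, cyc(0, 1, 2)), (1, cyc(1, 3, 2)),
           (0, cyc(0, 2, 1)), (1, cyc(1, 2, 3))]
TOFFOLI = (1, 0, 3, 2)              # the permutation (01)(23)

for xj, xk in product((0, 1), repeat=2):
    out = E
    for var, perm in program:
        if (xj, xk)[var] == 1:
            out = compose(out, perm)
    assert out == (TOFFOLI if xj and xk else E), (xj, xk, out)
print("four instructions simulate the doubly controlled Toffoli gate")
```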
Consequently, all characterizations of 4-PBPs are immediately applicable to bandwidth-2 reversible circuits. Additionally, an immediate characterization of all bounded bandwidth reversible circuits, where b > 2, is possible, namely:

Corollary 5.5 The class of problems decidable by bandwidth-b reversible circuits, where b > 2, is exactly NC1 [Bar89].

Proof: This is an immediate consequence of Corollaries 5.2 and 5.4, and Barrington's characterization of w-PBPs, w ≥ 5. •

5.2.2 On the Power of Bandwidth-2 Reversible Circuits and 4-PBPs

The power of 4-PBPs has long been a major open issue. It is well known that a 3-PBP must be of length exponential in n to compute the conjunction of n variables [Bar85], while a 5-PBP requires only a polynomial number of instructions to compute the conjunction of n variables [Bar89]. Thus, a bandwidth-3 reversible circuit can compute the conjunction of n lines using only a polynomial number of gates. In fact, bounded bandwidth reversible circuits of bandwidth greater than two have computing power corresponding to the complexity class NC1. In the case of bandwidth-2 circuits, the question of whether a polynomial length circuit can compute the conjunction of n lines remains open. We further study this problem by investigating programs over groups.

Programs over Subgroups of S4

Our first observation is that programs over the subgroups of S4, i.e., 4-PBPs whose instructions may only yield elements of a subgroup of S4, are inherently not very powerful. There are few interesting subgroups of S4, where interesting is meant in the sense of programs over the respective subgroup. Apart from the groups S3 = D3, D4, and A4, all remaining subgroups of S4 are isomorphic to direct products of cyclic groups, or to the groups D3 and D4 [Bur11]. Since we are primarily interested in strongly accepting programs, programs over direct products have the same computational power as programs over one of the components of the direct product. Thus, we need only consider the three groups listed above.

Barrington [Bar85] showed that programs over S3, which is also equal to D3, the dihedral group of degree 3, can compute the conjunction of n variables, albeit using programs of exponential length. Barrington et al. [BST90] also proved that programs over nilpotent groups are even weaker: they cannot compute the conjunction of n variables for sufficiently large n. Since the dihedral group of degree 4, D4, is nilpotent, no program over D4 can compute the conjunction of n variables either. We give a tight bound, proving that no program over D4 can compute the conjunction of three variables. The lower bound, a conjunction on two variables, is realized by the program

(13)^x1 ((01)(23))^x2 (13)^x1 ((01)(23))^x2.

Recall that an embedding of D4 in S4 is generated by the elements {(0123), (13)}. Using a technique similar to the one in [Bar85], the following sequence of lemmas proves our claim. We first show that the computational power of programs over D4 is equal to the computational power of 4-2-parity circuits, and then argue that a 4-2-parity circuit cannot compute the conjunction function. A 4-2-parity circuit is a two-level circuit comprising a single unbounded fan-in modulo-4 gate fed by unbounded fan-in parity gates, whose inputs are the variables. A 4-2-parity circuit C computes a Boolean function f if the value of the modulo-4 gate is t·f(x) for some fixed t ∈ {1,2,3}.

Lemma 5.6 Let C be a 4-2-parity circuit that computes f on n variables and contains s parity gates. Then, there exists a program over D4 of length O(ns) that strongly computes f.
Proof: Let be the fan-in of the ith parity gate and let x-jk be the £th input to the gate. The gate is simulated by sequence P, of 2n, instructions of the form where CT = (01)(23) and x = (02) are in D4. Since CT and x are their own inverses, the yield of P, is the identity if the parity of the inputs is even. Otherwise, if the parity of the inputs is odd, the yield is CTX = (0123), which is an increment modulo-4. Concatenating the instruction sequences of length 2nj < 2n for each of the s parity gates yields a program of size at most 2ns that strongly computes /. • (13)X1 ((01)(23))*2 (13)" ((01)(23))*2. m = 1 Pi = (fJi CT*^ ... <f'"i x*J'i X*^ ... x*Jai 5.2. Bounded Bandwidth Reversible Circuits 68 Lemma 5.7 Let P be a program over D\ on n variables and of length m that strongly computes function f. Then there is a 4-2-parity circuit C comprising 0(m) parity gates that computes f. Proof: For each instruction cr*' in P, rewrite cr as a product of the two generators: a = Tbpc, where x = (13), p = (0123), b £ Z2, and c e Z4, and hence, cr*' = (x*')fc(p*')c. For conciseness, the ith instruction is written as %a'b<pa'c>, where a,- € {x\,... ,x„, 1}, bj 6 Z2, and c,- € Z4. Thus, program P is a sequence of two types of instructions p _ ^a\b\pa\c\^M2b2pa2C2 ^.ambmpdmcm which we reorder such that all instructions of the form Ta'bi, i = 1,..., n, precede all instructions of the form pa,Ci, / = 1,... ,n. Observe that if c, = 0 mod 2, then the instruction pa'Ci commutes with all instructions of the form xpW—this is a short form for the sequence x^'x*292.. .x*"9", o, 6 Z2, where p(x) — ©"=1<7,x,i, a,- € Z2, is a linear function on n variables. Thus, we need only worry about the case where c, = 1 mod 2. If Ci = 1 mod 2, via the identities px = xp3 and p3x = xp, rewrite pa'c'zp^ as xpW (pZ^Wpa/c/. Starting from the end of the sequence and working towards the front, collect all xa'6' instructions and shuffle them towards the front of the sequence. The resulting sequence is of the form p _ TpMpgiMpS2M_ ._p4'mW) where gj(x) is a function of the form a,(2c?,p,(x) + c;), pi(x) is a parity function that corresponds to a partial collection of instructions yielding x, and = c,- mod 2. For example, the step (m — k) of the shuffle looks like p — ^a\b\paic\^2b2pa2c2 ^akbk pakckxpk+\WpSi+l W pi'mW _ Taii>ipaiciTa2&2pa2f'2 ^A^P/t+l W^p2^atPt+iWpfltQpgi+i W _ _ _ pgmW _ T"ifei pa,ci Ta2*2 p«2C2 _ _ _ TPk{x) pgk W pS<:+l W _ pi'm W _ Finally, since the modulo-4 gate always counts from zero, this corresponds to applying the yield of P to point 0. Since 0 is a fixed-point of permutation x, we can drop the prefix xPM from the transformed program, ending up with the program pl _ pg\ (x) pg2 (x) _ _ pgm (x) _ Since p corresponds to an increment modulo-4, each instruction performs gj(x) increments modulo-4. Hence, we need only show how to compute g(x) using parity gates. Since gi(x) = ai(2diPi(x) + ci) = 2aidiPi(x) + a,c,-, pg,'W = p2a.^p.Wpo,c,> where c(- and dt = ct mod 2 are con stants. If di = 0, the resulting product is pa'Ci which is realized by c,- parity gates, with input a;, feeding into the modulo-4 gate. Otherwise, if d, = 1, the first part of instruction pla<pMpaic< is rewritten as p2aipi{x) _ plplpa,ep,Wpa;©lpl©p,W which is realized using five parity gates; this concludes the proof. • 5.2. Bounded Bandwidth Reversible Circuits 69 An immediate corollary of the preceding lemmas is that programs over D4 and 4-2-parity circuits are equivalent. 
Corollary 5.8 The function f is strongly computable by a program over D4 if and only if the function is computable by a 4-2-parity circuit. Furthermore, the length of the program and the number of parity gates in the circuit are within a factor of O(n) of each other.

To complete our claim we prove that 4-2-parity circuits cannot compute the conjunction of three variables, implying that neither can programs over D4.

Lemma 5.9 No 4-2-parity circuit can compute the conjunction of three variables.

Proof: A 4-2-parity circuit on three variables consists of at most seven different parity gates: the power set of {x0, x1, x2} minus the constant parity gate—since conjunction is 0-preserving, constants cannot be used. We need not consider parity gates that perform negation, since each such parity gate may be simulated by a constant parity gate followed by three copies of the unnegated gate, i.e., if a parity gate computes 1 ⊕ p(x), this is the same, modulo 4, as four parity gates: a constant 1 and three gates that compute p(x). Each type of gate need be used at most three times, since four copies of the same parity gate contribute 0 modulo 4 to the top gate. Hence, to construct the circuit we need to solve the following set of linear equations modulo 4, one for each nonzero input, where a_{p(x)} denotes the number of times the parity gate that computes p(x) is used and the right-hand side denotes the resulting value of the circuit—the circuit must yield a nonzero value if and only if the conjunction is true.

a_{x0} + a_{x0⊕x1} + a_{x0⊕x2} + a_{x0⊕x1⊕x2} ≡ 0 (mod 4)
a_{x1} + a_{x0⊕x1} + a_{x1⊕x2} + a_{x0⊕x1⊕x2} ≡ 0 (mod 4)
a_{x2} + a_{x0⊕x2} + a_{x1⊕x2} + a_{x0⊕x1⊕x2} ≡ 0 (mod 4)
a_{x0} + a_{x1} + a_{x0⊕x2} + a_{x1⊕x2} ≡ 0 (mod 4)
a_{x0} + a_{x2} + a_{x0⊕x1} + a_{x1⊕x2} ≡ 0 (mod 4)
a_{x1} + a_{x2} + a_{x0⊕x1} + a_{x0⊕x2} ≡ 0 (mod 4)
a_{x0} + a_{x1} + a_{x2} + a_{x0⊕x1⊕x2} ≡ y ≢ 0 (mod 4)

Each of the seven unknowns appears in exactly four of these equations, so summing all of them yields 0 on the left-hand side and y on the right-hand side, which is the unsatisfiable equation 0 ≡ y ≢ 0. Hence, a 4-2-parity circuit cannot compute the conjunction of three or more variables. • (A short mechanical confirmation of this case analysis is sketched below, following the discussion of the remaining subgroups.)

This lemma, in conjunction with the preceding corollary, implies our claim.

Theorem 5.10 No program over D4 can strongly compute the conjunction of three variables.

Proof: By Corollary 5.8 the computational powers of 4-2-parity circuits and of programs over D4 are the same. By Lemma 5.9, 4-2-parity circuits cannot compute the conjunction of three or more variables. Hence, neither can programs over D4. •

The remaining proper subgroup of S4 is the alternating group A4. While programs over A4 can compute the conjunction on n variables, via a construction that is similar to the one used in [Bar85], Barrington et al. [BST90] showed that programs over A4 that strongly compute the conjunction of n variables must be of exponential length. In [Bar89, BST90] it is conjectured that no program over a solvable group—all finite groups of order less than 60 are solvable [DH92]—can compute the conjunction on n variables. If the conjecture is true, then the alternating group A5 is the smallest group over which polynomial length programs can compute the conjunction on n variables. This would imply that a reversible circuit must be of bandwidth-3 or greater in order to compute the conjunction over n variables. Since the conjecture remains open, there may yet be a clever construction that realizes a polynomial length program over S4 that computes a conjunction. To get further insight into this problem we differentiate between programs based on modes of acceptance, i.e., strongly accepting, weakly accepting, or monotonically accepting.
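As promised after Lemma 5.9, the impossibility can also be confirmed by exhaustive search. The sketch below follows the reductions made in the proof, considering only the seven nonconstant, unnegated parity gates with multiplicities counted modulo 4, and checks every assignment against the requirement that the modulo-4 gate be nonzero exactly when the conjunction is true; it is a verification aid rather than part of the argument.

from itertools import combinations, product

VARS = 3
PARITIES = [c for r in range(1, VARS + 1) for c in combinations(range(VARS), r)]
assert len(PARITIES) == 7          # the seven nonconstant parity functions

def circuit_value(mult, x):
    # Modulo-4 sum of the parity gates when PARITIES[j] is used mult[j] times.
    return sum(m * (sum(x[i] for i in gate) % 2)
               for m, gate in zip(mult, PARITIES)) % 4

realizable = any(
    all(circuit_value(mult, x) == 0
        for x in product((0, 1), repeat=VARS) if x != (1, 1, 1))
    and circuit_value(mult, (1, 1, 1)) != 0
    for mult in product(range(4), repeat=7)    # multiplicities only matter mod 4
)
print("AND of three variables computable by a 4-2-parity circuit:", realizable)  # False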
We study the computational power of each class of programs and the relationships between the classes. The Computational Power of Weakly Accepting and Monotonically Accepting 4-PBPs The acceptance mode of a permutation branching program significantly affects the computational power of the model. Since each mode of acceptance—weak, monotone, and strong—is subsumed by its predecessors, a natural complexity hierarchy is formed. We show that weakly accepting width-4 branching programs may be transformed into monotone accepting width-4 branching pro grams, and more importantly, that such programs cannot necessarily be transformed into strongly accepting programs. However, in the case of width-w permutation branching programs, w > 4, weak acceptance is equal to strong acceptance. Furthermore, transforming a weakly accepting program into a monotonically accepting or a strongly accepting program requires only a linear increase in size of the original program; the magnification factor only depends on w. Thus there is a natural gap between width-4 and width-5 permutations branching programs. These comparisons are particularly useful in the context of bounded bandwidth reversible circuits because it illustrates a natural trade off between the complexity of the computation and the complexity of the post-processing phase that entails 'reading' the result from the outputs of the circuit. We first compare the computational power of programs that accept weakly versus programs that accept strongly. Namely, we prove that weakly accepting programs can be significantly shorter than their strongly accepting analogues. We first need the following definition and lemma. An ordered sequence of permutations T is said to be faithful if the product of any nonempty subsequence of T is not the identity. Lemma 5.11 For any integer w > 1, there exists a faithful sequence Tw of all distinct 2-cycles ofSw. Proof: The proof is by induction on w. In the base case where, w = 2, S2 only has one 2-cycle. Thus, the only nonempty subsequence of T2 is T2, which contains one 2-cycle and whose product is not the identity; hence T2 is faithful. Assume that for some wo > 1 there exists a faithful sequence TWQ. For w = WQ + 1 we use Two to construct Tw. Each 2-cycle in Sw either belongs to Two, or is of the form (z'w) where i < w. Let Uw be any sequence comprising all 2-cycles of the form (z'w). We claim that the concatenation Tw = TWQUW is faithful. Let x be the product of any nonempty subsequence of TWQ and x> be the product of any nonempty subsequence of Uw. First, v can be represented by a single cycle (i\i2...ik w), ij < w, 5.2. Bounded Bandwidth Reversible Circuits 71 i.e., the product will never be the identity. Second, by the inductive hypothesis, x is not the identity. Finally, since w is not a fixed-point of u and is a fixed-point of x, we have x/u"1, and thus the product of any nonempty subsequence of Tw is not the identity. • Corollary 5.12 There is a linear time algorithm for constructing the sequence Tw. Using the sequence Tw as the basis of our construction, we show that for some languages, such as the disjunction on n variables, the weakly accepting program can be significantly shorter than a strongly accepting one. Theorem 5.13 Let g[n) be the complexity of strongly computing the disjunction on n variables by a w-PBP, where w > 4. The complexity of weakly computing the disjunction of n variables by a w-PBP is at most c-g(^) where c = (2) — 1. 
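Before proving Theorem 5.13, the faithful sequence of Lemma 5.11, on which the proof relies, can be constructed exactly as in the induction and checked by brute force for small widths. In the sketch below points are numbered 0, ..., w-1, permutations are multiplied left to right, and the ordering chosen for the new 2-cycles is an arbitrary illustrative choice, as the lemma allows.

from itertools import combinations

def compose(p, q):
    # Apply p first, then q.
    return tuple(q[p[i]] for i in range(len(p)))

def transposition(i, j, w):
    p = list(range(w))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

def faithful_sequence(w):
    # T_w as in Lemma 5.11: T_2 is the single 2-cycle of S_2, and T_w is
    # T_{w-1} followed by all 2-cycles that move the new point w-1.
    if w == 2:
        return [transposition(0, 1, 2)]
    embedded = [t + (w - 1,) for t in faithful_sequence(w - 1)]   # fix the new point
    return embedded + [transposition(i, w - 1, w) for i in range(w - 1)]

def is_faithful(seq):
    identity = tuple(range(len(seq[0])))
    for r in range(1, len(seq) + 1):
        for subseq in combinations(seq, r):        # subsequences keep the ordering
            prod = identity
            for t in subseq:
                prod = compose(prod, t)
            if prod == identity:
                return False
    return True

for w in (2, 3, 4):        # |T_4| = 6, so only 63 subsequences to check
    T = faithful_sequence(w)
    assert len(T) == w * (w - 1) // 2 and is_faithful(T)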
Proof: First, divide the n variables into groups of size -c and for each group construct a strongly accepting permutation branching program that computes their disjunction such that its complexity is g{"). Furthermore, each of these subprograms, denoted Pj, will yield a specific permutation, a,, if the disjunction is true, and e otherwise. Second, concatenate these subprograms P = PQP\ . ..Pc. If we choose the permutations appro priately, program P will yield e if and only if each of the subprograms, Pj, yields 8, i.e., all of disjunctions are false. It only remains to choose the permutation. By the Lemma 5.11, for any symmetric group Sw there is a sequence of (2) 2-cycles such that the product of a subsequence is equal to £ if and only if the subsequence is empty; let T = XjX2 • • •T('v) be that sequence. If a, = x,x,+i, the sequence 0"iO"2... ac also has this property. Thus, Pi yields a, if and only if the disjunctions of its share of variables is satisfied. Hence, program P yields £ if and only if the disjunction of the n variables is false. • Since the yield P(x) of a strongly accepting program must either be £ or a fixed nonidentity permutation, if the function being computed is nonlinear, in all likelihood the yield is an even permutation. Thus, Theorem 5.13 assumes that each subprogram P, yields an even permutation. In all cases but one, the cycle-structure of Pi(x) is not an issue either. Using a single commutator [Pj(x),ol], where a1 is an oblivious instruction, transforms the yield into a 3-cycle, and conjugating the 3-cycle with another appropriate oblivious instruction yields the required 3-cycle; Table 5.1 lists the six different forms of the yield P,-(x). If the yield of P, is a 3-cycle or 2-cycle, no transformation, apart from the conjugation, is necessary. Otherwise, each program P, can be transformed into one that yields a 3-cycle and whose length is at most 2|P,| + 4. Only yields of the form (rs)(tu) of programs over S4 cannot be transformed. However, one can construct programs over S4—albeit of exponential length—that compute the disjunction and yield a 3-cycle length; in this case Theorem 5.13 is also applicable. In the case of programs over A4 a weakly accepting program P = P1P2, which computes the disjunctions can be constructed, where Pi and P2 compute the disjunction of half the variables each and yield (01) (23) and (02) (13) respectively. Thus, P yields £ if the disjunction is false, or (01)(23), (02)(13), or (03)(12) if the disjunction is true. Furthermore, program P is at most half the size of a strongly accepting program 5.2. Bounded Bandwidth Reversible Circuits 72 Form of P(x) w a1 (r\r2...rn)...,n>3 w>n (rxr2...rm),m = 2[n/2\-\ (rst)(uvw)... w > 6 (rwtus) (rst)(uv)... w>5 (rs)(uv) (rst) w>3 N/A (rs)(tu)(vw)... w > 6 (rtv) (rs)(tu) w>5 (rsv), v^r,s,t,u (rs) w > 3 N/A Table 5.1: Forms of P(x) and the corresponding a1. over A\ that computes the disjunction. However, weakly accepting programs are unsatisfying in the following sense. Suppose C simulates a weakly accepting program over 54. Hence, the read-only lines correspond to the variables and the two read-write lines represent the yield of the program. Suppose that the read-write lines are initially 0. Since the program being simulated is weakly accepting, some of the permutations that are yielded, e.g., (123) and (132), have 0 as a fixed-point. Thus, for those inputs that yield such permutations, the output of the read-write lines of C remains unchanged—assuming the read-write lines were initially 0. 
Hence, determining the result of the computation by reading the output lines is not possible and requires that the computation be performed twice with different initial values on the read-write lines. This requirement seems to inherently violate the notion of what it means for a circuit to compute a Boolean function. To solve this problem we use a slightly stronger notion of acceptance, namely, monotone ac ceptance. Recall that a program monotonically accepts x if it yields a permutation with a particular fixed fixed-point if and only if x is not in the language. Thus, the corresponding circuit first initial izes the read-write lines to the fixed-point, simulates the program, and then XORs the result with the fixed-point. The output is the disjunction of the read-write lines, which will be 0 if and only if the input x to the circuit was not in the language. Fortunately, we can convert a weakly accepting program over S4 into a monotonically accepting one. Theorem 5.14 . Let P be a weakly accepting program over S^, such that for all x, P(x) G A4. Then the program ¥(P) is a monotonically accepting program over S4, where ¥ = P3 ((O^P2^)/*4^))2 (02)(13). Proof: The characteristic map of ¥ on elements of A4 is ¥(e) = e ¥((023)) =¥((01X23)) =¥((031)) = (01)(23) ¥((123)) = ¥((012)) = ¥((02)(13)) = ¥((132)) = ¥((032)) = (02)(13) ¥((021)) =¥((03)(12)) =¥((013)) = (03)(12). Since (01)(23), (02)(13), and (03)(12), have no fixed-points, the result follows. • 5.2. Bounded Bandwidth Reversible Circuits 73 Since transformation *F only works on programs whose yields are in A4, we use the following theorem to transform programs that yield odd permutations into programs that only yield even permutations. Theorem 5.15 The characteristic map of transformation X¥=[(0123),P](P(23))6P6, is such that ^(54) C A4 and ^(c) = E if and only if a = £. Proof: By enumeration of the characteristic map on all 24 elements of S4. • Composing these theorems yields the desired theorem. Theorem 5.16 There exists a transformation W such that for any weakly accepting program over S4, *F(P) is a monotonically accepting program for the same language over S4. In fact, the transformation increases the length of P by a factor of at most 14 x 15 + 1 = 211. This implies that the computational power of programs over 54 that monotonically accept is the same as the power of weakly accepting programs over 54. We can even say something stronger, namely that monotonically accepting programs over A4 have the same power as weakly accepting programs over A4. Corollary 5.17 If P is program over A4 that weakly computes f, there exists a program over A4 that monotonically computes f and is of length 15\P\ + 0(1). Proof: First apply Theorem 5.14 to P. Since the parity of the constant permutations in the trans formation is even, we can collect the permutations at one end of the program via the standard conjugation technique used in Theorem 4.17. Thus, the resulting program will be over A4. • The equivalence between weakly accepting and monotonically accepting width-4 programs begs the comparison between strongly accepting and monotonically accepting width-4 programs. The comparison yields a natural characterization of the difference in computational power of 4-PBPs versus 5-PBPs. Namely, we would like an analog of Theorem 5.16, that is, a general transformation *F that transforms any weakly accepting program into a strongly accepting one. 
Amazingly, such a transformation cannot exist for programs of width 4, but does exist for programs of greater width! This is a natural distinction between the computational powers of 4- PBPs and 5-PBPs! Theorem 5.18 There does not exist a transformation *P such that for any program P over S4, Y(P) is a strongly accepting program computing the same Boolean function as P. Proof: We shall consider weakly accepting programs over 54 whose yields are from the Klein group K = {e,(01)(23), (02)(13), (03)(12)} C 54. By contradiction, assume that there exists a transformation ¥ that converts weakly accepting pro grams into strongly accepting ones. Let 0"i = (12)(34), let o2 = (13)(24), and let Q = o^'cr^2 be 5.2. Bounded Bandwidth Reversible Circuits 74 a program. Now, consider the structure of *F(<2). ^(Q) is a word comprising constant instructions of the form x1 and nonconstant instructions: either a*1 or cr^2. We can rewrite the program, without changing the function, i.e., cs^x1 = x1 (x_1ax)*''. Furthermore, since conjugation preserves structure, the structure of each instruction is preserved. First, since ^(Q) must yield s if Q yields e, all the constant instructions cancel each other out. Second, since all instructions of Q, and hence ofx¥(Q), belong to the Klein group, the instructions commute. Hence, VF(<2) can be rewritten as ^(Q) = x*'x^2, where Xi,x2 E K. Unless X\ — x2, *F((2) is not strongly accepting, and if equality holds then ^(Q) yields e on assignment x\ = x2 = 1, contrary to what Q yields. Hence, cannot exist. • In fact, a similar argument can be extended to programs over A4. If a transformation did exist, then a polynomial length program computing the conjunction of n variables would be possible. Namely, we use Barrington's divide and conquer construction [Bar89]. If P and P' each strongly compute the conjunction on 2k disjoint variables, where the positive yields of P and P' are distinct, then PP' weakly compute the conjunction of 2k+] variables, and  X¥(PP') strongly computes the conjunction of 2k+x variables. Repeating this composition logrc times yields polynomial length program, and this is impossible according to [BST90]. However, a general transformation does exist on programs of width greater than four. Weak Acceptance Equals Strong Acceptance for w-PBPs, w > 4 If w > 4 then a weakly accepting w-PBP can be transformed into a strongly accepting program with only a linear increase in size. To outline the construction, the goal is to construct a transformation that is characterized by the map for some p ^ e and n,p,c e Sw. Hence, given program P, if Q = ^(P), then Q yields p on input x if and only if P yields n on input x, and yields e otherwise. In fact, we need only exhibit the construction for a fixed 7i0, say Tto = (01... w — 1) since Concatenating \SW \ — 1 copies of ^(P) for each n e Sw \e, yields the desired transformation. To implement Y^, the transformation verifies the point order of permutation Jio by comparing the adjacent points within the permutation. The comparison is performed by a simpler transformation YT on transpositions, where Yx(u) = e if and only if i) 7^ x. Computing the conjunction of these comparisons via the standard commutator transformation of [Bar89] completes the construction. We begin with the construction for YT. Lemma 5.19 Let w > 4 and let x E Sw be a 2-cycle. There exists a transformation Yx such that YT(e) = e and for every 2-cycle V E Sw, Yx(,u) ^ e if and only ifx> = x. 
Furthermore, the permutation YT (x) can be represented by two disjoint 2-cycles. ^(^ = ^(^-'710). 5.2. Bounded Bandwidth Reversible Circuits 75 Proof: Let x = (rt) € Sw and let (su) be a 2-cycle such that [(rt),(su)] = e. The straight-line composition of Yx is: If M commutes with (rs)—v is either a disjoint 2-cycle or the identity—then Yn(u) = £, implying that Yi (u) = Y3 (v>) = Y(rt) (u) = e. Otherwise, there are two cases: either x> = (rv), v ^ s or i) = (vs), v r. If v= (rv), then Y0((rv)) = (rvs). If v = u ^ t then Y] ((rv)) = e, and so does Y(rf)((rv)). Oth erwise, Yi((rv)) = (rs)(uv), Y2((rv)) = (ruvs), arfd Y3((rv)) = (rv)(su). If v / t, then Y4((rv)) = (rv)(stu) and hence Y(„)(rv) = e. On the other hand, if v = f, then Y4((rt)) = (rwrf) and hence, Y(rt)(rt) = (rs)(tu). If i) = (vs), then Yo((vs)) = (rsv). If v = u ^ t. then Yi((vs)) = e, and so does Y(rt)((vs)). Otherwise, Yi((vs)) = (rv)(us), Y2((vs)) = (rv), and Y3((vs)) — e, implying that Y(rt)(vs) = e. Hence, Y(rt)(u) ^ e if and only if v = (rt), and Y(rt)((/?)) = (rs)(fw). • To construct the main transformation, *¥n, we need to formalize the behaviour of the conjugate operation. Lemma 5.20 (Selection Lemma) Let a be a cycle ofthe form (rs...tu...) where the ellipses (...) represent zero or more additional points within the cycle. Then G(SU)G~1 = (rt). Proof: We can rewrite O" as: To T, T2 Y3 T4 T(rt) [(rs),v] [(rus),T0] Yi (su) (T2)2 r3(tu) (T4)6 cr = rs.. .tu...) rs.. .t)(ru...) trs.. .)(ru...) trs)(t...)(ru)(r...) trs)(ru)(t ...)(r...) turs)(t ...)(r...). yields the result. • 5.2. Bounded Bandwidth Reversible Circuits 76 Corollary 5.21 IfG is a permutation that is represented by the cycle (rst...), then cy(st)c 1 = (rs). To check the adjacent order of the points in a permutation, we use corollary 5.21 in conjunction with lemma 5.19. To verify that the permutation is (12... w), w checks must be performed, to ensure that 1 follows w, 2 follows 1, and so on. Each of these checks is performed by a conjugation trans formation composed with YT transformation. The conjunction of these w checks is then computed. This formalized in the following theorem. Theorem 5.22 Let w > 4 and let Tto = (01... w — 1) 6 Sw. There exists a width-w program trans formation that is characterized by the map that sends every element except Tto to £ and sends Tto to some nonidentity element. Proof: Let Hr = P(st)P~1, r = 0... w — 1, be the conjugation transformation where s = (r+l mod w) and t = (s+l mod w), i.e., rst corresponds to a triple of points in permutation Tto-Without loss of generality, assume that the characteristic map of Y(„.) maps (rs) to (rs)(tu), where u — (t + 1 mod w). Compose the Er transformation with Y(n.) to construct the adjacency checking transformation Or(P) = 0rY(,y)(Sr(P))^-1, where (j)r = (0r)(01)(0.s)(03)(0;)(04)(0w). Conjugating by §r is characterized by a map that sends £ to £ and sends (rs)(tu) to (01)(34). Finally, is a combination of the transformations <X>r, r = 1... w — 1. Define the recursive commutator of an ordered set of permutations to be [{Pi,p2,---,P*}] = [Pi,[{P2,---,-pJt}]], where [{p„_i,p„}] = [p„_i,p„]. Since [(01)(34),(012)] = (012), the construction of is the recursive commutator of the set {Oi, O2,..., On, (012)}, Yn0 = [{O1(P),O2(p),...,O„(P),(012)}]. The characteristic map of sends Tto to (012) and sends all other permutations to £. If the permutation .is not Tto then the characteristic maps of at least one <£,, i= 1... 
w — 1, will send the permutation to the identity—the recursive commutator of the yields of the Φ_i's is then the identity. On the other hand, if all the Φ_i's yield (01)(34), the yield of the recursive commutator is (012). Finally, note that the size of the transformation depends only on S_w. Hence, the transformation increases program size by a constant factor. •

Corollary 5.23 For all w > 4 the width-w program transformation Ψ_π(P) = Ψ_{π0}(P π^{-1} π0) is characterized by the map that sends every element except π to ε and sends π to (012).

Using the preceding corollary we can now construct the transformation Ψ that transforms any weakly accepting program into a strongly accepting one.

Theorem 5.24 If P is a weakly accepting program over S_w, w > 4, which computes a Boolean function f, then Ψ(P) strongly computes the Boolean function f, where the transformation Ψ is

Ψ(P) = ∏_{π ∈ S_w \ {ε}} Ψ_π(P).

Proof: Each of the Ψ_π transformations transforms P into a program that yields (012) if and only if P yields π. Hence, if P yields a nonidentity permutation, exactly one of the transformed programs yields (012) and the rest yield ε. Thus, for any input x, Ψ(P) yields (012) if and only if P does not yield ε; Ψ(P) yields ε otherwise. The length of Ψ(P) is O(|P|), where the constant factor depends only on w. •

Thus, the computational power of weakly accepting and strongly accepting programs of width 5 or greater is equivalent. The same is true for reversible circuits of bandwidth 3 or greater. Thus, there is a natural distinction between the power of 4-PBPs (bandwidth-2 reversible circuits) and (5+)-PBPs (bandwidth-(3+) reversible circuits)! Finally, by Corollary 5.5 the class of problems decidable by bounded bandwidth reversible circuits of bandwidth greater than two is NC1. We now shift our attention to polylogarithmic bandwidth reversible circuits.

5.3 Polylogarithmic Bandwidth Reversible Circuits

Polylogarithmic bandwidth (polylog-bandwidth) reversible circuits comprise b ∈ O((log n)^i) read-write lines and r = n − b read-only lines. Since such circuits of unlimited length can trivially compute any Boolean function in P, we focus on polynomial size polylog-bandwidth circuits.

Define the class RC^i to be the class of problems solvable by polynomial size reversible circuits of bandwidth O((log n)^i). Note that the class of problems decidable by bounded bandwidth polynomial size reversible circuits, which is also NC1, is the class RC^0, i.e., NC1 = RC^0. Define the class RC = ∪_{i≥0} RC^i to be the class of problems solvable by polynomial size polylog-bandwidth circuits. Since RC^i ⊆ RC^{i+1}, the complexity classes RC^i form a hierarchy, called the RC hierarchy. There is a natural relationship between the RC hierarchy and the SC hierarchy, which comprises the classes SC^i. Recall from Section 2.5.3 that SC^i is the class of problems that is decidable by a Turing machine that uses O((log n)^i) space and a polynomial amount of advice. In this section we prove the equivalence between RC and SC by proving two things: first, we show that RC^i ⊆ SC^i, and second, we show that SC^i ⊆ RC^{2i}.

Theorem 5.25 For all i > 0, RC^i ⊆ SC^i.

Proof: Let C be a polynomial size reversible circuit on n inputs that is of bandwidth O((log n)^i). We construct a Turing machine T that simulates circuit C and uses a work tape of length O((log n)^i), an input tape, and an advice tape.
The input tape contains the input comprising the n input variables to C and the advice tape contains the circuit, encoded in 0(\C\ log(n)) space as a sequence of 4-tuples (g,c\,C2,o), where each 4-tuple comprises four integers, the gate type, the control lines, and the output line. The work tape stores the state of the 0(log(«)') read-write lines of C, and a counter of O(logrc) bits to keep track of the input tape head and locate the read-write lines on the work tape. 5.3. Polylogarithmic Bandwidth Reversible Circuits 78 Machine T simulates circuit C one gate at a time and stores the intermediate result on the work tape. The machine stores the gate type in its finite state, and uses the counter to locate the control and output lines on the work tape. The output is computed within the finite state and written over the previous state of the output line. The final output of T comprises the final state of all the read-write lines. Since C is polynomial in size the computation of T takes (9(|C|nlog2n) steps and uses 0(log(n)') space. Thus, simulating C is in SC, and hence RC C SC. • The converse is not necessarily true since the class SC is defined within the framework of gen eral computation whereas the class RC requires that the problems within it be decided reversibly. However, via Bennett's [Ben89], all problems in SC are decidable by polynomial size reversible circuits of bandwidth 0((logrc)2i). Theorem 5.26 For all i > 0, SC C RC2''. Proof: Let T be any Turing machine that only uses 0((logn)') work space and a polynomial amount of advice, where n is the size of the input. Since i > 0, T may use at least O(logrc) space, we can use Bennett's construction to construct a reversible Turing machine R that uses quadratic space, 0((logrc)2'), and polynomial time to simulate T [Ben89]. Fixing n, and hence the advice to 7?, we argue that R can be simulated by a polynomial size reversible circuit C of bandwidth C((logn)2'). Let c be a constant that depends only the size of the input alphabet, work tape alphabet, finite state, and the degree of the polynomial p[n), where p(n) bounds the length of the advice tape. Circuit C comprises of several parts: input-tape lines: n read-only lines corresponding to the input, advice-head lines: at most c\ogn read-write lines corresponding to the advice tape (counter), input-head lines: at most cXogn read-write lines corresponding to the input tape head (counter), work-head lines: at most clogrc read-write lines corresponding to the work tape head (counter), finite-state lines: at most c log n read-write lines, corresponding to the finite state control of R, and work-tape lines: at most c(logra)2' read-write lines corresponding to the work tape, where each cell of the tape is simulated by 0(1) read-write lines. Using a simplification akin to the one in [Ben73], we assume that the instructions of R can either • obliviously move the work tape head, • read/write a cell of the work tape, • obliviously move the input tape head, • read a cell of the input tape, • obliviously move the advice tape head, • read a cell of the advice tape, or • do nothing. The three oblivious head movements correspond to an increment or decrement of the advice-head, input-head, and work-head counters, a permutation that can be realized in a polynomial number of 5.4. Unbounded Bandwidth Reversible Circuits 79 gates since the counters are at most c\ogn lines wide. The increments and decrements are controlled by the finite-state lines. 
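Each such counter update is itself a small reversible circuit. As an aside, the increment is just a cascade of multi-controlled NOT gates, the same cascade that reappears in the incrementor constructions of Section 5.4.1. The sketch below applies the gates as plain bit operations and ignores the ancilla needed to realize the widest gate (an issue discussed in Section 5.4.1), so it is illustrative only.

def controlled_increment(lines, control):
    # Increment a k-bit counter (bits listed least significant first) by one,
    # conditioned on a control bit: bit j flips iff the control and all
    # lower-order bits are 1.  Processing from the most significant bit down
    # lets each gate test the original values of the lower bits.
    if control:
        for j in reversed(range(len(lines))):
            if all(lines[i] for i in range(j)):
                lines[j] ^= 1
    return lines

assert controlled_increment([1, 1, 0], control=1) == [0, 0, 1]   # 3 + 1 = 4
assert controlled_increment([1, 1, 0], control=0) == [1, 1, 0]   # control off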
Since the advice tape can be thought of as a truth table of a Boolean function, say a, on at most clogn variables, which can be realized by a polynomial size controlled circuit on the advice-head lines. The circuit is controlled by the finite-state lines and the operation comprises three steps: perform a on the advice-head lines, save the output in the finite-state lines, and perform a-1 on the advice-head lines. Similarly, reading the input corresponds to a controlled copy of a single input-tape line into the finite-state lines. The copy is controlled by the input-head lines and the finite-state lines. The reading and writing of the work tape is similarly performed, namely, a copy operation is performed to or from the finite-state lines and is controlled by the work-head lines and the finite-state lines. While the work-head lines are unnecessary because the work-head lines can be embedded into the work-tape lines, their explicit use simplifies the construction. Finally, we only need to note the finite-state lines. A polynomial size circuit on a constant number of read-write lines is sufficient to realize a permutation that corresponds to the transitions of the finite state. There is only one issue. Since the Turing machine is reversible, we're guaranteed that the transition function is a permutation. However, unlike a Turing machine, a circuit cannot cease computing once the halting state is reached. A circuit must perform all m computational steps, where m = maxx6Bn TIME(/?,x). Since the transition function must be a permutation, the circuit simulates 'halting early' via an additional 0(m) halting states that are iterated through for m — TIME(/?,x) steps. The additional halting states perform no operation and serve only as place holders in the transition function. Since R takes polynomial time in n to complete, at most clogra lines suffice to represent the finite state. If we were guaranteed that all computation paths of R on inputs of size n were the same length, then we could make do with 0(1) finite-state lines. Thus, a polynomial size bandwidth-0((log n)2') reversible circuit can simulate a Turing machine T, which uses 0((logn)') space and polynomial time. Thus, SC C RC2'. • An immediate corollary of the preceding two theorems is that polylog-bandwidth circuits can decide exactly the problems contained in class SC. Corollary 5.27 RC = SC. Theorem 5.26 can easily be generalized to Turing machines that use polynomial space; the corresponding circuits essentially have unbounded bandwidth. 5.4 Unbounded Bandwidth Reversible Circuits Unbounded bandwidth reversible circuits—circuits comprising only read-write lines—are more general than their bounded bandwidth analogues. Consequently, characterizing their computational power is much more challenging. By the same argument as the one used in Theorem 5.26, polyno mial size unbounded bandwidth circuits can be used to decide exactly the same set of problems as Turing machines that use polynomial space and time, i.e., the problems of class P. 5.4. Unbounded Bandwidth Reversible Circuits 80 In many physical contexts, such as quantum computation, using garbage lines—in addition to the n lines containing the input—is expensive. Since the number of garbage lines must be kept to a minimum, we focus on circuits that realize functions on n variables and comprise n + c read-write lines, where c is 0, 1, or 2, and the c garbage lines are initialized to 0. 
To compute a Boolean function /, function / is first embedded into a permutation on either Bn, if / is balanced, or Bn+\ otherwise [Tof80]. Usually the exact embedding does not matter as long as the resulting permutation a induces a partition of Bn (or Bn+\) such that for some fixed i G [1,«], for all x £ Bn, (xcr), = f{x). If / is Boolean function of the form / : Bn —> B^, 1 < k < n, a similar criterion can be derived. Unfortunately, many such functions require more than a constant number of garbage lines [Tof80]. A prime example of this is the multiplication function, that takes two rc-bit strings and yields a 2n bit string. Since there are 2"+1 inputs that map to the same output, i.e., multiplication by zero, multiplication can only be embedded into a permutation on Boolean cube of order 4n or larger. We limit our attention to Boolean functions that are either of the form / : Bn —> B\, or permu tations of the form / : Bn —> Bn. In either case / can be embedded into an even permutation on Bn or Bn+\. We first investigate how to realize permutations corresponding to some common func tions; in particular we provide an interesting recursive construction for realizing threshold functions. Following this, we show that if a permutation has a polynomial size cycle representation, then the permutation can be realized by a polynomial size circuit (Theorem 5.34). Based on the observations that we derived from our constructions, we conclude this chapter by describing some heuristics for realizing Boolean functions. 5.4.1 Reversible Circuit Constructions We first investigate how to realize common permutations such as incrementors (decrementors) and adders. After deriving realizations for these components, we use them to realize Boolean functions such as the consensus function and various threshold functions. In most cases the constructions are relatively efficient, requiring 0(n2) gates. The exception is the majority function, which does not seem to have an efficient realization. For conciseness, we use several schematic short forms. First the £-line controlled Toffoli gate, k < n — 1, which computes the conjunction of k lines and XORs the output line. A &-line controlled Toffoli (^-Toffoli) gate can be constructed using 0(k) Toffoli gates [BBD+95] and is illustrated in Figure 5.1a. Second, the controlled &-NOT, comprises k controlled-NOT gates that are all controlled by the same line. In most cases, we will be using the controlled (n — l)-NOT, which is illustrated in Figure 5.1c. Additionally, we use blocks to denote a component of a circuit. A component may either be simple, controlled by another line, as in Theorem 4.14, or the component may control another line, i.e., compute a Boolean function on k lines and XOR another line with the results. The block components are illustrated in Figures 5. Id, 5.1b, and 5.1e. The controlled k-HOT and the fc-Toffoli gate are examples of a controlled component and a component control. 5.4. Unbounded Bandwidth Reversible Circuits 81 c o o I o o I TJ a o o c c cl, E o o Figure 5.1: a) A fc-Toffoli gate; b) a component control; c) a controlled n-NOT; d) a controlled component; e) a simple component. Realization of Various Incrementors We begin by constructing a half-incrementor, a permutation function on Bn that is represented by two disjoint cycles of the form Tt = (0 1 ... 2' n-1 1)(2 n—1 ",n— 1 + 1 ... 2" - 1). Realizing a full incrementor on Bn is impossible because the permutation (0 1 ... 
2" — 1) is odd, and hence cannot be realized using regular Toffoli and NOT gates [CG75]. However, a nigh-incrementor, corresponding to the permutation (0 1 ... 2"-1 — 2), can be realized. The half-incrementor can be realized via a sequence of ^-Toffoli gates, where k = n — 2... 0. Observe that an incrementor modifies the z'th least significant bit of the input if and only if the conjunction of the i— 1 least significant bits of the input is equal to 1. Thus, the circuit comprises n — 1 components (^-Toffoli gates), where the y'th gate is an (n — 2 — j) -Toffoli gate that negates line n—l — j and is controlled by the lines i, i = l...n — 2 — j; the construction is illustrated in Figure 5.2a. TO z —i •—i \— —i i—( i , c \ c CM c V : rv' l c CM O 1 O a b Figure 5.2: Realizations of a) a half-incrementor and b) a nigh-incrementor. The last line in the circuit in Figure 5.2a may seem superfluous, but it is required in order to realize the (n — 2)-Toffoli gate. The line is used as a temporary register and is returned to its original 5.4. Unbounded Bandwidth Reversible Circuits 82 value by the end of the computation of the (n — 2)-Toffoli gate. By straightforward induction on n, it is easy to see that the circuit realizes permutation rt. Since the realization of each &-Toffoli gate requires 0(k) normal Toffoli gates (2-Toffoli gates), the half incrementor may be realized in 0(n2) gates. It follows that if we use an additional garbage line, then a complete incrementor can be realized, otherwise, the best we can hope to realize is a nigh-adder. Not surprisingly, the half-incrementor forms the basis of the realization of a nigh-incrementor. The nigh-incrementor may be realized by concatenating an additional circuit onto the one that realizes a half-incrementor. Begin by noting that the nigh-incrementor corresponds to the permu tation x = (0 1 ... 2n~l - 2). Let p = TT'T = (0 2""1 2" - 1). Thus, the circuit in Figure 5.2b, which realizes the nigh-incrementor, is a half-incrementor concatenated with a circuit that realizes permutation p. By Corollary 5.32, which we prove in a later subsection, any permutation that is represented by a 3-cycle can be realized by a circuit of size 0(n). Thus, the nigh-incrementor can be realized in 0(n2) gates as well. The half-incrementor is also a primary component in the construction of the adder. A Realization of the Adder We consider an adder that takes two rc-bit inputs, on In lines and outputs the result on the latter n lines, n + 1,..., 2n, and the first summand on the former n lines, 1,..., n. The adder comprises a sequence of n controlled half-incrementors; see Figure 5.3. The kth half-incrementor is controlled by line k,ke [l,n], and increments the n — k most significant lines of the second summand, i.e., the increment is performed on lines n + k,...,2n. This follows from the observation, that adding 2-> to an n-bit value corresponds to performing an increment on the n — j most significant bits. The adder does exactly that, performing a controlled increment for each of the n bits of the first summand. Figure 5.3: A realization of the adder. Since each half-incrementor can be realized in 0(n2) gates, the entire adder can be realized in 0(n3) gates. Unfortunately, there seems little that can be done to reduce this bound. While one would think that a ripple adder could be implemented, each stage of the ripple adder loses 5.4. 
Unbounded Bandwidth Reversible Circuits 83 information—the preceding carry—implying that in order for a ripple adder circuit to work re versibly, all carry information needs to saved. Currently, we know of no way to accomplish this. The current realization trades space for time, which is the n stage adder just described. In contrast to the incrementor, to realize an adder requires no additional lines, even though both functions are permutations. The difference being that the adder is an even parity permutation and the incrementor is not. The other sufficient condition for requiring the additional garbage line occurs when the function being realized is not a permutation, such as unbalanced Boolean functions of the form / : B„ —> B\. Two families of such functions, the consensus and the threshold functions are studied next. A Realization of the Consensus Function The consensus function on n inputs evaluates to 0 for all but two inputs, 1 and 0, i.e, fix) = \/XiV l\Xi. i=i i=i Since the function is unbalanced an additional line is required to reversibly realize the function. Without loss of generality, assume that the additional line, which is labeled 0 and initialized to zero, is also the output line. The consensus function corresponds to the permutation 71 = (0 l)(2n+1-2 2"+1-l), the first 2-cycle toggles line 0 if all lines have value 0, and the second 2-cycle toggles line 0 if all lines have value 1. The (n — l)-Toffoli gate that is controlled by the lines 2,...,n and toggles line 0, realizes the permutation l)=(2"+1-4 2n+1 -3)(2"+1 -2 2n+1-l) and differs from the required permutation only in the first 2-cycle. Thus, this gate forms the basis of the circuit. To repair the difference, we use the following observation. For any 2-cycle, T = (st), conjugating x by a = (rsu) and then by p = (qtu) yields yields the 2-cycle parcT_1p-1 = (qtu)(rsu)(st)(rus)(qut) = (qtu)(rt)(qut) = (qr). Setting q = 0, r = 1, s = 2n+1 -4,t = 2"+1 - 3, and u to any other value but q, r, s, t, 2"+1 - 2, and 2"+I — 1 specifies two 3-cycles, such that conjugating u by them, yields permutation n. The consensus function can therefore be realized in the corresponding way, as is illustrated in Figure 5.4. By Corollary 5.32, the circuits that realize the 3-cycles can be realized in 0(n) gates. Hence, the consensus function can be realized in 0(n) gates. As we shall see, the fact that the consensus function is heavily unbalanced means that a concise realization is possible. This trend is also evident in the realization of threshold functions. 5.4. Unbounded Bandwidth Reversible Circuits 84 < > + J + CO 1 + c CM CM c CM CM C CM c CM O o I C~(1 O o o 1 C~(1 C~(1 Figure 5.4: A realization of the consensus function. A Realization ofthe Threshold functions Tn, T\, Tn_i, T2, Tn-2 Recall that a threshold function 7* on n variables is 1 if and only if the weight of the assignment is greater or equal to k. Since the threshold function is unbalanced—unless k = [n/2] and n is odd—an additional line, labeled 0, is required to realize the threshold function. If the threshold functions Tn and T\ are treated like the conjunction and disjunction functions on n variables, which correspond to an odd permutation, a realization of Tn would also realize an n-Toffoli gate on n + 1 lines—something that is not possible [CG75]. However, the threshold function Tn can be embedded into an even permutation, say (2"+1 —2 2"+1 — 1)(0 2). By Corollary 5.33, which we prove later, this permutation is realizable in 0(n) gates. 
Hence, both threshold functions Tn and (by duality) T\ can be realized in 0(n) gates. To gain further insight into reversible circuit constructions, we consider more direct realizations of the threshold functions 7^, 1 < k < n. We first show how to realize the threshold functions and T2. We only show how to realize Tn-\ since the realization of T2 easily follows by negating the inputs and outputs. (0) (1) (2) (3) (n-2) (n-1)-(n) < 3-1 c r< >, , )—( X y \. , c , r J \ , k fl h i '*. : 1 ^ , r —( ) ' 1—1 • ( 1—i 1— Figure 5.5: A realization of the threshold function T„. The realization, which is shown in Figure 5.5, comprises one controlled n-NOT gate and n (n— 1)-Toffoli gates. The circuit is composed of three stages. Stage one contains one (n— 1)-Toffoli gate that is controlled by lines 1,... ,n — 1 and toggles line 0. Stage two consists of the controlled n-NOT component that is controlled by line 0 and negates the lines 1,..., n. Finally, stage three consists of the remaining (n— l)-Toffoli gates, each of which toggles line 0 and is controlled by a 5.4. Unbounded Bandwidth Reversible Circuits 85 different subset of the lines 1,..., n; the subset 11 is naturally excluded as a choice. If the input is of weight less than n — 1, then given that line 0 is initialized to 0, none of the (n — 1)-Toffoli gates performs the XOR; if the input is such that the lines 1,... ,n — 1 have value 1, then the first (n — l)-Toffoli gate toggles line 0, the second stage toggles all lines, which means that none of the (n — 1)-Toffoli gates in stage three will toggle line 0; this also covers the case when the input is of weight n. Otherwise, if the input is of weight (n — 1), and line n is 1, then none of the gates in the first two stages will toggle any of the lines, and only one of the (n — 1)-Toffoli gates in stage three toggles line 0. Consequently, line 0 will be toggled exactly once. Thus, the circuit realizes the threshold function Tn-\ and since the circuit comprises 0{n) controlled-NOT and (n— l)-Toffoli gates, the circuit is of size 0(n2). However, to realize the threshold function Tn-2 requires 0(n3) gates. (0) (1) (2) (3) (n)' * CD n choose n-2 (n-2)-Toffoli gates ± n choose n-2 (n-2)-Toffoli gates n choose n-1 (n-l)-Toffoli gates a) n is even b) n is odd Figure 5.6: Realizations of the threshold function Tn-The realization of the threshold function 7„_2 depends on whether n is odd or even. If n is even then the realization (a) in Figure 5.6 is used, and if n is odd, realization (b) is used. The realizations comprise three and four stages, respectively. The first three stages of both realizations are identical to the ones in the realization of 7j,_]. Stage three of both realizations comprises ( "2) components, each of which is an (n — 2)-Toffoli gate that is controlled by a different subset of the lines 1,... ,n and controls line 0—the gates are too numerous to draw in Figure 5.6. Stage four of the second realization comprises (n"j) components, which are (n— l)-Toffoli gates, each controlled by a different subset of the lines 1,..., n and each of which controls line 0. Both realizations perform identically on inputs whose weight is n, n — 2, or less than n — 2; the dependence on the parity of n occurs only for inputs of weight n — 1. For inputs of weight less than n — 2, none of the (n — l)-Toffoli gates or (n — 2)-Toffoli gates toggle line 0. For inputs of weight n — 2, exactly one (n — 2)-Toffoli gate in stage three toggles line 0. 
For inputs of weight n (or where the lines 1,...,«— 1 all have value 1), the first two stages of both realizations, toggle line 0, and negate all n lines, 1,..., n, preventing all subsequent gates from toggling line 0. For inputs of weight n — 1, where line n have value 1, the realizations perform slightly differently. If n is even and the weight of the input is n — 1, then line 0 will be toggled exactly (""2) —n—l times by the (n — 2)-Toffoli gates in the last stage. Since n is even, n — 1 is odd, and hence line 0 will be toggled an odd number of times. However, if n is odd, then n — 1 is even and line 0 5.4. Unbounded Bandwidth Reversible Circuits 86 is toggled an even number of times. Since line 0 must be toggled an odd number of times if the weight of the input is greater or equal to n — 2, the last stage in realization (b) ensures that the line is toggled one additional time if the input weight is n — 1. Thus, the resulting realizations compute the threshold function Tn-i- The third stage of both realizations comprises 0(n2) (n — 2)-Toffoli gates, thus the size of the circuit is 0(n3). Using these as realizations as base cases we recursively construct threshold functions 7*, where 1 < k < n. Realization of the Threshold functions Tk A realization of threshold function 7}. comprises two realizations of simpler threshold function re alizations. Let T^m denote a threshold function on m variables with threshold k—in the preceding discussion, the dependence on n was implicit, i.e., 7^ = T^n. The two components of a realization of T^n are a controlled realization of of i „_ i, and a controlled realization of Tk,n-1; the composition is illustrated in Figure 5.7. Figure 5.7: A realization of the threshold function 7^. If line n has value 1, then the circuit needs only to check that the weight of the remaining n—l lines is k— 1 or greater. The first controlled component, which realizes 7i_ !_„_], performs this function. Otherwise, if line n has value 0, the weight of the remaining n—l lines must be of weight k or greater if the threshold is to be met. The second controlled component, which realizes T^n-1, controlled by the negation of line n, performs this task. Each of the components are realized in the same way; the base cases, 72iOT and Tm-2,m are realized by the nonrecursive constructions presented in Figure 5.6. Unfortunately, the complexity of this construction, particularly for the majority function, is exponential in n. The recurrence relation R(k,n) = R(k— I,n—l) + R(k,n— 1) describes the com plexity of the construction in terms of the number of m-Toffoli gates, 0 < m < n, where each of the two terms includes one of the two additional NOT gates in the construction. At the nth step of the recursive construction, each of the gates from the realizations of Tk-\t„-\ and T^n-i are ex tended by one control line, which is attached to line n. Since the realizations for T2}„, and r„,_2,m are the base cases with complexity c € 0(n2), R(2,m) = R(m — 2,m) = c. Recurrence R(k,n) is the same as the one for binomial coefficients (with different boundary conditions), namely, R(2,m) = R(m - 2,m) = c = (™) • (c/Q1))- Hence, by inspection, R(k,n) = c • Finally, since each m-Toffoli gate requires 0(m) gates, our realization of the threshold function T^n requires O (n3 (^I*)) standard gates, i.e., NOT, controlled-NOT, and Toffoli gates. Note, that as the thresh-5.4. Unbounded Bandwidth Reversible Circuits 87 old function becomes more balanced—k approaches |—the realization becomes more complex. 
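The case analysis above for the realization of T_{n-1} is mechanical enough to replay by direct simulation. The sketch below is only a sanity check of that three-stage circuit, with line 0 as the output line as in Figure 5.5.

from itertools import product

def threshold_circuit_output(x):
    # Simulate the three-stage realization of T_{n-1} on input lines 1..n
    # given by x; line 0 is the output line, initialized to 0.
    n = len(x)
    lines = [0] + list(x)
    # Stage 1: an (n-1)-Toffoli controlled by lines 1..n-1 toggles line 0.
    if all(lines[i] for i in range(1, n)):
        lines[0] ^= 1
    # Stage 2: a controlled n-NOT, controlled by line 0, negates lines 1..n.
    if lines[0]:
        for i in range(1, n + 1):
            lines[i] ^= 1
    # Stage 3: the remaining (n-1)-subsets of {1..n}, all of which contain
    # line n, control (n-1)-Toffoli gates that toggle line 0.
    for skip in range(1, n):                 # the subset {1..n} minus line `skip`
        if all(lines[i] for i in range(1, n + 1) if i != skip):
            lines[0] ^= 1
    return lines[0]

for n in (3, 4, 5, 6):
    for x in product((0, 1), repeat=n):
        assert threshold_circuit_output(x) == (1 if sum(x) >= n - 1 else 0)

The same style of simulation extends to the T_{n-2} realizations of Figure 5.6, at the cost of enumerating the larger stage-three collections of gates.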
The problem then is to determine when a polynomial realization is possible. 5.4.2 Sufficient Conditions for Realizing Permutations by Polynomial Size Circuits Some of the constructions in Section 5.4.1 assume that a permutation represented by a 3-cycle can realized by a circuit of size polynomial in n. In fact, any 3-cycle can be realized using 0(n) gates. We first prove this result in the subsequent lemma and two theorems and then discuss its implication, namely, that every permutation that can be concisely described using cycle notation, has a polynomial size realization! For the remainder of this discussion, assume that circuit C^z) ~ (xyz), xj,z£ Bn, and in general Ca ~ a. Lemma 5.28 lfC[Q\2) is a reversible circuit on n lines, then for any x,y € Bn, x,y ^ 0, there exists a reversible circuit C of size 0(n), such that the circuit CC(Q\2)C~X ~ C((k>>)-Proof: Select two lines i and j, setting u = XjXj, v = y,y/, u,vG B2, such that u ^ v and u,v e {1,2,3}. Such a choice is possible, otherwise x — 0, y = 0, or x = y, none of which can happen because (Oxy) is a 3-cycle. Call the lines i and j control lines. The circuit C consists of three stages. Stage one comprises |x| — \u\ Toffoli gates plus |y| — |v| Toffoli gates. The first subsequence is bracketed by a pair of NOT gates on line i (j) if x,- (xj) is 0; the second sequence is analogously bracketed if y, (yj) is 0. For each k ^ ij if xy_ = 1 a Toffoli gate ®J.Ay, controlled by lines i and j, toggles line k. The second subsequence of Toffoli gates is analogously specified. Thus, on input x or y, all lines but lines i and j are toggled to 0. Stage two swaps line i with line 1 and line j with line 2. This can be done using 0(1) gates. Finally, stage three manipulates lines 1 and 2 since these lines now hold the value of the control lines. If u ^ 1 and v ^ 2, then stage three maps M to 1 and v to 2; this also takes 0(1) gates. Therefore, circuit C maps input 0 to 0, input x to 1, and input y to 2, using 0(|x| + \y\) Q 0(n) gates. The circuit may permute other points in Bn, but this is of no consequence. Since C ~ (x 1... y 2...), composing circuit C with C(0i2) m the form of a conjugate yields CC{on)C-' ~ (x 1.. .y 2.. .)(012)(x 1.. .y 2...)~] = (Oxy) ~ C[0xy), which completes the proof. • Corollary 5.29 If is a reversible circuit on n lines, then there exists a reversible circuit C of size 0(n), such that the circuit CC^^C-1 ~ C(oi2)-Theorem 5.30 follows easily from the lemma and the corollary. Theorem 5.30 (3-cycle Hardness Theorem) IfC^) is a reversible circuit on n lines, then for any distinct x1 ,y' ,z' € Bn, there exists a circuit C of size 0(n), such that CC^xyz^C~l ~ C^yY)-Proof: Since XORing the input with a constant bit vector can be performed by 0(n) NOT gates, we can transform into Coyz, where y> = y(Bx and z = z@x. Similarly, for a circuit C^f), where y' = y' @x! and f = z1 ©x\ can be transformed into C^yzi) using 0(n) gates. Let Cx be the 5.4. Unbounded Bandwidth Reversible Circuits 88 circuit comprising |JC| not gates such that CxC^XyZ)Cx ~ C(o^) and correspondingly let Cy be such that C(yyz') ~ CyC(0/j')Cc'"1. By Corollary 5.29, circuit C^) can be transformed into C(oi2) and by Lemma 5.28, this circuit can be transformed into C(oy'z')* also in 0(n) gates. Let C\ and C2 be the circuits such that C(on) ~ C\C(Qyt)C\x andC^f) ~ C2C(0i2)C2"1. Since QyyV) ~ Cx*C2C\CxC(xyz)C~xC\ xc2~xcx>~x, setting C = C^CiCxCx, which is of size 0(n), completes the proof. 
• Thus, all 3-cycles are equally hard to realize in the sense that, given a polynomial size realization of one 3-cycle, any other 3-cycle can be realized by using an additional 0{n) gates. Fortunately, a 3-cycle can be realized by a reversible circuit of size 0(n). Theorem 5.31 For n > 1 there exists reversible circuit C(oi2) on n lines of size 0{n). Proof: If n < 3 we can construct a reversible circuit that realizes any permutation and uses a constant number of gates. For n > 3, observe that (012) may be factored into Xi = (01)(63) and x2 = (02)(63). Thus, we need only demonstrate that permutations X, and x2 can be realized in 0(n) gates. First, the permutation (01)(23) (respectively (02)(13)) may be realized by 0(n) gates. The circuit comprises three stages: a negation, followed by a toggling, followed by a negation. Stages one and three negate the n — 2 lines 3,...,n. The middle stage is an (n — 2)-Toffoli gate, con trolled by lines 3,... ,n, that toggles line 1 (respectively line 2); each stage requires 0(n) gates. Let C(oi)(23) ~ (01)(23) and C(02)(i3) ~ (02)(13) respectively. Next, we construct a reversible circuit C(oi)(63) that realizes permutation (01)(63). The re versible circuit CC!l — 0i03A20i realizes a permutation that transposes 2 and 6 and whose fixed-points include all points that are congruent to 0, 1, or 3 modulo 4. Since the conjugate ai(01X23)0, = (01)(63), therefore C(o,)(63) =CaiC(oi)(23)CCTl-1. Similarly, we construct C(0i)(63)- The circuit CG2 = 02 03A2 02 transposes points 1 and 5, with fixed-points comprising all points that are congruent to 0, 2, and 3 modulo 4. Using conjugation, we construct circuit C(02)(53) = Ca2(-(02)(i3)Ca2- The circuit Cp = 02A3 02A3 02A3, switches the values of the lines 1 and 2, using line 3 as the control. Permutation p transposes 5 and 6; only points congruent to 5 or 6 modulo 8 are permuted. Since the conjugate p(02)(53)p_1 = (02)(63), therefore C(02)(63) = CpC(02)(53)Cp x. The required circuit is C(oi2) = C(oi)(63)C(02)(63) is °f size 0(n). • In conjunction with Theorem 5.30, we get the following two corollaries. Corollary 5.32 Any 3-cycle can be realized by a reversible circuit on n > 1 lines of size 0(n). Corollary 5.33 A permutation on Bn comprising two disjoint transpositions, can be realized by a reversible circuit on n lines of size 0(n). Proof: If CT = (ab)(cd), then CT can be factored into two 3-cycles CT = (abc)(cad). m 5.4. Unbounded Bandwidth Reversible Circuits 89 Since every cycle comprising m points can be represented by m — 1 transpositions, another corollary is that a permutation can be realized using 0(nk+l) gates if its cycle representation—the number of non-fixed-points—is 0(nk) in size. Theorem 5.34 A permutation on Bn that permutes 0(nk) points can be realized by a reversible circuit on n lines of size 0(nk+i). Thus, any function that can be embedded into an even permutation that has a polynomial size cycle representation can be realized by a polynomial size reversible circuit. The converse is not true. There are many functions, such as negation or the incrementor, that have an exponential cycle representation, but a concise reversible circuit realization. This chapter culminates with a description of some design heuristics for reversible circuits. 5.4.3 Techniques and Heuristics for Reversible Circuit Constructions Several techniques and heuristics have emerged for realizing functions with reversible circuits. 
These techniques include: using commutators, using conjugation, embedding functions within per mutation groups, and taking advantage of "don't cares" to yield polynomially realizable permuta tions. The commutator of two circuits [C0,Ct] = CCCTCQ1C~}—assuming that permutations CT and T do not commute—provides a useful mechanism for constructing controlled circuits in a bandwidth-limited environment. Although this is not a major issue in the unbounded-bandwidth framework, it is useful for combining circuits that are controlled by distinct sets of control lines into one that is controlled by union of the control lines. This technique is used by Barenco et al. [BBD+95] to construct (n — 2)-Toffoli gates. Conjugating one group element by another, CTTCT-1, preserves the cycle structure of x, but changes the points on which the permutation operates. Conjugating one reversible circuit with another, CACXC^1, performs the same function, the structure of the realized permutation remains unchanged, but the inputs that the resulting circuit permutes are changed. This technique is most useful for adapting a circuit that does 'almost the right thing' to one that performs the required per mutation. Conjugation was heavily used in the preceding subsection, particularly in the construction and transformation of 3-cycles. Conjugation allows the circuit designer to decouple circuit structure from input representation, i.e., if the structure of the permutation that is realized by the circuit is correct, then the circuit can easily be adapted to work on the right set of inputs. Permutation functions induce a strict structure on the reversible circuit, namely, the circuit must perform a very specific permutation, modulo the parity issue—odd parity functions need to be em bedded into even parity permutations. Boolean functions of the form /: Bn —> B\, need to be embed ded into a permutation on either Bn, Bn+\, or Bn+2—in the case of the conjunction and disjunction functions. Since the right embedding allows a much more efficient realization of the function than a wrong embedding, choosing the right embedding is paramount. An example of this is choosing a polynomially representable permutation rather than an exponentially representable one. In this 5.4. Unbounded Bandwidth Reversible Circuits 90 case, we have a simple method to construct a polynomial size circuit. First, decompose the per mutation into transpositions. Create a realization for each transposition using the technique from Corollaries 5.32 and 5.33. Concatenate the transpositions to yield the final circuit. If a function is heavily unbalanced, i.e., the ratio of Is to 0s in the truth table is 2±aW, then the number permutations that must toggle the output line is small. Thus, the circuit can be constructed by realizing a transposition for each entry belonging to the minority of the truth and concatenating the realizations. This does not always yield an optimal circuit; by allowing the circuit to perform additional permutations that do not affect the the output line, a size reduction may be obtained. The realization of the threshold function Tn-\ permutes two of the 2" inputs. If we were to use the transposition composition technique for realizing the circuit, the number of gates, used in the alternative realization would be much greater than in the hand-crafted realization. 
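The first step of this recipe, decomposing the permutation into transpositions, is purely mechanical; a minimal sketch follows, in which the sparse dictionary representation of the permutation is an illustrative choice rather than the thesis's notation.

def to_transpositions(perm):
    # Decompose a permutation, given sparsely as {point: image} over its
    # non-fixed points, into transpositions whose left-to-right product is
    # the permutation; an m-cycle contributes m - 1 transpositions.
    perm = dict(perm)
    transpositions, seen = [], set()
    for start in sorted(perm):
        if start in seen:
            continue
        cycle, nxt = [start], perm[start]
        while nxt != start:
            cycle.append(nxt)
            nxt = perm[nxt]
        seen.update(cycle)
        transpositions += [(cycle[0], c) for c in cycle[1:]]
    return transpositions

def apply_all(transpositions, point):
    # Apply the transpositions left to right to a single point.
    for a, b in transpositions:
        if point == a:
            point = b
        elif point == b:
            point = a
    return point

perm = {0: 1, 1: 2, 2: 0}                    # the 3-cycle (0 1 2)
ts = to_transpositions(perm)                 # [(0, 1), (0, 2)]
assert all(apply_all(ts, p) == perm.get(p, p) for p in range(8))

Pairs of adjacent transpositions can then be realized via Corollaries 5.32 and 5.33, which is how the O(n^{k+1}) bound of Theorem 5.34 arises.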
Since we "don't care" what the circuit does on certain inputs, we can craft the circuit by ignoring entire ranges of inputs; a similar method, involving Karnaugh maps [Kar53], is used for optimizing general combinational circuits.

Unfortunately, if a function is balanced or the permutation is not polynomially representable, it is not known how to determine whether an efficient realization is possible or how to derive one. One approach is to decompose the function into a linear component and a nonlinear component, i.e., f(x) = L(x) ⊕ N(x), where L and N are the linear and nonlinear components. The former can be realized in O(n) gates, and if the latter can be embedded into a polynomially representable permutation, then f can be efficiently realized. However, for some functions, such as the majority function, this technique does not suffice; it is not even clear whether the majority function can be efficiently realized.
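To see why the linear component costs only O(n) gates, note that L is an XOR of some subset of the inputs (possibly complemented), so it can be accumulated onto an output line with one CNOT per participating variable. The following Python sketch is a minimal illustration under the assumption, which is ours and not stated in the thesis, that L is taken to be the degree-at-most-one part of f's algebraic normal form; all names are illustrative.

def anf(truth):
    # Algebraic normal form coefficients of a Boolean function given as a truth
    # table of length 2^n (Moebius transform over GF(2)).
    coeffs = list(truth)
    n = len(truth).bit_length() - 1
    for i in range(n):
        for x in range(len(truth)):
            if x & (1 << i):
                coeffs[x] ^= coeffs[x ^ (1 << i)]
    return coeffs

def linear_part_circuit(truth, n):
    # Gate list realizing the degree-<=1 part L of f on an appended output line:
    # one NOT for the constant term and one CNOT per linear term, i.e., O(n) gates.
    coeffs = anf(truth)
    gates = []
    if coeffs[0]:
        gates.append(("NOT", "out"))
    for i in range(n):
        if coeffs[1 << i]:
            gates.append(("CNOT", i, "out"))   # XOR variable x_i onto the output line
    return gates

# Small check with a hypothetical test function f = (x0 AND x1) XOR x2,
# whose linear component is x2.
n = 3
def f(x):
    return ((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1)

truth = [f(x) for x in range(1 << n)]
gates = linear_part_circuit(truth, n)
for x in range(1 << n):
    out = 0
    for g in gates:
        out ^= 1 if g[0] == "NOT" else (x >> g[1]) & 1
    assert out == (x >> 2) & 1
print("linear component realized with", len(gates), "gate(s)")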
Chapter 6

Conclusion and Future Work

In this thesis we investigated growth processes on formulas and reversible circuits, and the complexity of finite Boolean functions in the reversible circuit model. First, we analyzed growth processes on formulas, characterizing the processes based on the existence and shape of their limiting distribution. Since a comparable characterization of growth processes on general circuits is intractable, due to the dependencies between the probabilities associated with various circuit components, we introduced a growth process over reversible circuits. As before, we characterized these processes based on the existence and shape of their limiting distribution. Second, we investigated the complexity of finite Boolean functions within the framework of reversible circuits and derived relationships between reversible circuits and other models of computation.

In the first part of the thesis we derived a method, applicable under a broad set of conditions, for characterizing growth processes on formulas. The characterizations included growth processes that use linear, self-dual, and monotone connectives. Unlike growth processes that use linear or self-dual connectives, where the limiting distribution is either uniform over the support or concentrated on a single function, we showed that growth processes that use balanced monotone connectives have a limiting distribution that is uniform over the slice functions. To prove this, we created a novel technique for analyzing growth processes that combines amplification arguments [Val84, Bop85] and spectral analysis [Raz88, Sav90, Sav95a]. Additionally, we derived convergence bounds for most of these growth processes and proved that the convergence rate of a process strongly depends on the connective. In most cases we showed that the convergence is logarithmic in the number of variables. Our analysis of the characteristic polynomial of the monotone connectives yielded a well-defined set of monotone connectives that have no internal fixed-points, but whose corresponding growth processes take exponentially more time to converge!

A general theory for all connectives, not just balanced [Sav90], linear, or monotone ones, remains to be developed. Initial empirical work has shown that the limiting distribution of a growth process ceases to be uniform if the characteristic polynomial of the corresponding connective has multiple internal fixed-points; current techniques are inapplicable in this situation. While there is evidence of intriguing relationships between the characteristic polynomial of a connective and the limiting distribution, we cannot make any conjecture at this time. A first step is to consider connectives such that, for every depth d greater than some constant, the projection function can be realized by a formula that is a complete depth-d tree built from the connective.

In contrast to growth processes on formulas, we showed that the limiting distribution of a growth process on reversible circuits, if one exists, is guaranteed to be uniform over the support of the growth process. Furthermore, we showed that either a limiting distribution exists, or the growth process has two alternating distributions. We also derived a broad set of sufficient conditions under which a limiting distribution exists: we introduced the notion of gate-symmetry and showed that, apart from two cases, the limiting distribution exists if the support of the initial distribution is gate-symmetric.

The notion of gate-symmetry also proved useful in characterizing the support of a growth process. We showed that if the support of the initial distribution of a growth process is gate-symmetric, then in all cases but one, the support of the respective growth process could be characterized. As part of the characterization we introduced the notion of the bandwidth of a reversible circuit; the growth processes whose supports were not amenable to characterization were growth processes on bandwidth-2 circuits. Characterizing the support of the growth processes allowed us to bound their convergence rates. We showed that the convergence rate is very slow (polynomial in the diameter of the corresponding Cayley graph), in contrast to the convergence rates of growth processes on formulas. This is because, unlike growth processes on formulas, where the formula is grown by a constant multiple every iteration, growth processes on reversible circuits grow the circuit by an additive constant every iteration. Thus, it is not surprising that the convergence rate of the latter processes is much slower.

In the third part of the thesis we investigated the reversible circuit complexity of finite Boolean functions. We first showed that bounded-bandwidth reversible circuits can be simulated by bounded-width permutation branching programs and vice versa; bandwidth-2 reversible circuits have exactly the same power as width-4 permutation branching programs, and reversible circuits of greater bounded bandwidth are characterized by the class NC1. Within the context of bounded-width permutation branching programs, we introduced a transformation framework under which we showed that weakly accepting and monotonically accepting width-4 programs have the same computational power: a weakly accepting program can be transformed into a monotonically accepting one with only a constant-factor increase in size. However, we proved that such weakly accepting programs cannot be transformed into strongly accepting ones. This is in stark contrast to programs of width 5 or more, for which we derived a transformation that transforms every weakly accepting program into a strongly accepting one. Thus, we demonstrated a natural gap between width-4 and width-5 programs; correspondingly, there is also a natural gap between bandwidth-2 and bandwidth-3 reversible circuits. We considered polylogarithmic bandwidth circuits and defined a natural hierarchy of complexity classes, RC.
We showed that RC^i ⊆ SC^i ⊆ RC^(2i), and hence that the union of all the RC^i, namely RC, is equal to SC. Thus, circuits of polylogarithmic bandwidth provide another natural framework for studying low-lying complexity classes such as SC.

Lastly, within the context of unbounded bandwidth reversible circuits we derived concrete constructions and upper bounds for several common functions, including the incrementor, adder, consensus, and threshold functions. We proved that if a function can be embedded into a permutation that has a polynomial size cyclic representation, then the function has a polynomial size realization; we presented an explicit method for constructing a polynomial size realization of the respective functions. This is in contrast to the space parsimonious reversible construction of Lange et al. [LMT00], which only guarantees an exponential size realization.

There are several tracks along which this research can proceed. Our investigation has yielded new insight into the problem of whether polynomial size width-4 permutation branching programs can compute the conjunction of n variables, but the answer still remains elusive. One approach to this problem is to further investigate various program transformations. If it is possible to transform a weakly accepting program into a strongly accepting one, with a bounded increase in size, then a construction for a polynomial size width-4 program that computes the conjunction follows. Otherwise, this provides further evidence that the conjecture of Barrington [Bar89] is correct.

While the sufficient conditions for polynomial-size realizations are useful, we would also like to ascertain when a function is not polynomially realizable. For example, our construction for the majority function is exponential in the number of variables. The question remains as to whether this bound is tight. In fact, the majority function is a good place to begin for characterizing functions that cannot be realized by polynomial-size reversible circuits (without an additional O(log n) lines).

Bibliography

[AB87] N. Alon and R. B. Boppana. The monotone circuit complexity of Boolean functions. Combinatorica, 7(1):1-22, 1987.
[ABH+86] M. Ajtai, L. Babai, P. Hajnal, J. Komlos, P. Pudlak, V. Rodl, E. Szemeredi, and G. Turan. Two lower bounds for branching programs. In Proceedings of the 18th Annual ACM Symposium on Theory of Computing, pages 30-38, May 1986.
[ABO84] M. Ajtai and M. Ben-Or. A theorem on probabilistic constant depth computations. In Proceedings of the 16th Annual ACM Symposium on Theory of Computing, pages 471-474, 1984.
[ABOIN96] D. Aharonov, M. Ben-Or, R. Impagliazzo, and N. Nisan. Limitations of noisy reversible computation, 1996.
[Adl78] L. Adleman. Two theorems on random polynomial time. In Proceedings of the 19th Annual IEEE Symposium on Foundations of Computer Science, pages 75-83, October 1978.
[AKS83] M. Ajtai, J. Komlos, and E. Szemeredi. An O(n log n) sorting network. In Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pages 1-9, 1983.
[Ald87] D. Aldous. On the Markov chain simulation method for uniform combinatorial distributions and simulated annealing. Probability in the Engineering and Informational Sciences, 1:33-46, 1987.
[Ald89] D. Aldous. Lower bounds for covering times for reversible Markov chains and random walks on graphs. Journal of Theoretical Probability, 2(1):91-100, January 1989.
[Alo86] N. Alon. Eigenvalues and expanders. Combinatorica, 6:83-96, 1986.
[And85] A. Andreev. A method for obtaining lower bounds for the complexity of individual monotone functions.
Dokl. Akad. Nauk USSR (in Russian), 282(5):1033-1037, 1985. English translation in Soviet Math. Dokl., 31(3):530-534, 1985.
[Apo97] T. Apostol. Modular Functions and Dirichlet Series in Number Theory. Springer, 2nd edition, 1997.
[Bab91] L. Babai. Local expansion of vertex-transitive graphs and random generation in finite groups. In Proceedings of the 23rd Annual ACM Symposium on Theory of Computing, pages 164-174, 1991.
[Bar85] D. Barrington. Width-3 permutation branching programs. Technical Memo MIT/LCS/TM-293, Massachusetts Institute of Technology, Laboratory for Computer Science, 1985.
[Bar89] D. Barrington. Bounded-width polynomial-size branching programs recognize exactly those languages in NC1. Journal of Computer and System Sciences, pages 150-164, 1989.
[BBD+95] A. Barenco, C. Bennett, D. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. Smolin, and H. Weinfurter. Quantum gates and circuits. Physical Review A, 52:3457-3467, 1995.
[BDFP83] A. Borodin, D. Dolev, F. Fich, and W. Paul. Bounds for width two branching programs. In Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pages 87-93, April 1983.
[Ben73] C. Bennett. Logical reversibility of computation. IBM Journal of Research and Development, 17:198-200, November 1973.
[Ben82] C. Bennett. The thermodynamics of computation: a review. International Journal of Theoretical Physics, 21:905-940, 1982.
[Ben88a] C. Bennett. Notes on the history of reversible computation. IBM Journal of Research and Development, 32(1), 1988.
[Ben88b] C. Bennett. Notes on the history of reversible computation. IBM Journal of Research and Development, 44(1/2):270-277, January 2000 (reprint of Bennett, 1988).
[Ben89] C. Bennett. Time/space trade-offs for reversible computation. SIAM Journal on Computing, 18(4):766-776, 1989.
[Ber82] S. Berkowitz. On some relationships between monotone and nonmonotone circuit complexity. Technical report, Department of Computer Science, University of Toronto, Toronto, Canada, 1982.
[BG81] C. Bennett and J. Gill. Relative to a random oracle A, P^A ≠ NP^A ≠ co-NP^A with probability 1. SIAM Journal on Computing, 10(1):96-113, February 1981.
[BGL+93] C. Bennett, P. Gacs, M. Li, P. Vitanyi, and W. Zurek. Thermodynamics of computation and information distance. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, 1993.
[Bop85] R. Boppana. Amplification of probabilistic Boolean formulas. In Proceedings of the 26th Annual IEEE Symposium on Foundations of Computer Science, pages 20-29, October 1985.
[Bop89] R. Boppana. Amplification of probabilistic Boolean formulas. Advances in Computing Research 5: Randomness and Computation, pages 27-45, 1989.
[BS95] D. Barrington and H. Straubing. Superlinear lower bounds for bounded-width branching programs. Journal of Computer and System Sciences, 50(3):374-381, June 1995.
[BST90] D. Barrington, H. Straubing, and D. Therien. Non-uniform automata over groups. Information and Computation, 89(2):109-132, December 1990.
[BT88] D. Barrington and D. Therien. Finite monoids and the fine structure of NC1. Journal of the ACM, 35(4):941-952, October 1988.
[BTV01] H. Buhrman, J. Tromp, and P. Vitanyi. Time and space bounds for reversible simulation. arXiv:quant-ph/0101133, 2001.
[Bur11] W. Burnside. Theory of Groups of Finite Order. Cambridge University Press, 2nd edition, 1911.
[CFL83] A. Chandra, M. Furst, and R. Lipton. Multi-party protocols. In Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pages 94-99, April 1983.
[CG75] D. Coppersmith and E. Grossman. Generators for certain alternating groups with applications to cryptography. SIAM Journal on Applied Mathematics, 29(4):624-627, December 1975.
[Cle90] R. Cleve. Methodologies for Designing Block Ciphers and Cryptographic Protocols. PhD thesis, University of Toronto, 1990.
[CM87] S. Cook and P. McKenzie. Problems complete for deterministic logarithmic space. Journal of Algorithms, 8:385-394, 1987.
[Coo74] S. Cook. An observation on time-storage trade-off. Journal of Computer and System Sciences, 9(3):308-316, December 1974.
[Coo79] S. Cook. Deterministic CFL's are accepted simultaneously in polynomial time and log squared space. In Proceedings of the 11th Annual ACM Symposium on Theory of Computing, pages 338-345, May 1979.
[Coo85] S. Cook. A taxonomy of problems with fast parallel algorithms. Information and Control, 64:2-22, 1985.
[DH92] K. Doerk and T. Hawkes. Finite Soluble Groups. de Gruyter, Berlin, 1992.
[DSC93] P. Diaconis and L. Saloff-Coste. Comparison techniques for random walk on finite groups. Annals of Probability, 21:2131-2156, 1993.
[DZ92] M. Dubiner and U. Zwick. Amplification and percolation. In Proceedings of the 33rd Annual IEEE Symposium on Foundations of Computer Science, pages 258-267, October 1992.
[DZ97] M. Dubiner and U. Zwick. Amplification by read-once formulas. SIAM Journal on Computing, 26(1):15-38, January 1997.
[Fri91] J. Friedman. On the second eigenvalue and random walks in random d-regular graphs. Combinatorica, 11, 1991.
[Fro12] G. Frobenius. Über Matrizen aus nicht negativen Elementen. S.-B. Deutsch. Akad. Wiss. Berlin, Math.-Nat. Kl., pages 456-477, 1912.
[FSS81] M. Furst, J. Saxe, and M. Sipser. Parity, circuits, and the polynomial-time hierarchy. In Proceedings of the 22nd Annual IEEE Symposium on Foundations of Computer Science, pages 260-270, October 1981.
[FT82] E. Fredkin and T. Toffoli. Conservative logic. International Journal of Theoretical Physics, 21(3/4):219-253, 1982.
[GM91] Q. Gu and A. Maruoka. Amplification of bounded depth monotone read-once Boolean formulae. SIAM Journal on Computing, 20(1):41-55, February 1991.
[Has86] J. Hastad. Almost optimal lower bounds for small depth circuits. In Proceedings of the 18th Annual ACM Symposium on Theory of Computing, pages 6-20, May 1986.
[Has89] J. Hastad. Almost optimal lower bounds for small depth circuits. ADVCR: Advances in Computing Research, 5, 1989.
[HN77] T. Hikita and A. Nozaki. A completeness criterion for spectra. SIAM Journal on Computing, 6(2):285-297, June 1977.
[HN79] T. Hikita and A. Nozaki. Corrigenda: A completeness criterion for spectra. SIAM Journal on Computing, 8(4):656, November 1979.
[HPV75] J. Hopcroft, W. Paul, and L. Valiant. On time versus space and related problems. In Proceedings of the 16th Annual IEEE Symposium on Foundations of Computer Science, pages 57-64, October 1975.
[HR98] T. Hikita and I. Rosenberg. A completeness criterion for uniformly delayed circuits. Acta Applicandae Mathematicae, 52:49-61, 1998.
[HS65] J. Hartmanis and R. Stearns. On the computational complexity of algorithms. Transactions of the American Mathematical Society, 117:285-306, 1965.
[Joh94] D. Johnson. A catalog of complexity classes. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume A, chapter 2. Elsevier Science Publishers, 1994.
[Kar53] M. Karnaugh. The map method for synthesis of combinational logic circuits. AIEE Transactions, Part I: Communication and Electronics, 72:593-599, November 1953.
[Khr71] V. Khrapchenko.
Complexity of the realization of a linear function in the class of π-circuits. Mat. Zametki (Mathematical Notes of the Academy of Sciences of the USSR), 10:21-23, 1971.
[Khr72] V. Khrapchenko. A method of obtaining lower bounds for the complexity of π-schemes. Mat. Zametki (Mathematical Notes of the Academy of Sciences of the USSR), 11:474-479, 1972.
[KLNS89] J. Kahn, N. Linial, N. Nisan, and M. Saks. The cover time of random walks on graphs. Journal of Theoretical Probability, 2(1):121-128, January 1989.
[Kor65] V. Korobjov. O monotonnykh funktsiyakh algebry logiki. Problemy Kibernetiki, 13:5-28, 1965.
[Kor66] V. Korobjov. Sur le nombre des fonctions booléennes monotones de n variables. C. R. Acad. Sc. Paris, 262:1088-1090, 1966.
[Kor80] A. Korshunov. O chisle monotonnykh bulevykh funktsii. Problemy Kibernetiki, 38:5-100, 1980.
[Kri61] R. Krichevskii. Realizations of functions by superpositions. Problemy Kibernetiki, 2:458-477, 1961.
[Kud60a] V. Kudryavcev. Completeness theorem for a class of automata without feedback couplings. Soviet Math. Doklady, 1:537-539, 1960.
[Kud60b] V. Kudryavcev. Problems of completeness for automatic machine systems. Soviet Math. Doklady, 1:146-149, 1960.
[Lad75] R. Ladner. The circuit value problem is log space complete for P. SIGACT News, 7, 1975.
[Lan61] R. Landauer. Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5:183-191, 1961.
[Lec63] Y. Lecerf. Machines de Turing réversibles. C. R. Acad. Sc. Paris, 257:2597-2600, 1963.
[Lee59] C. Lee. Representation of switching circuits by binary-decision programs. Bell System Technical Journal, 38:985-999, July 1959.
[LMT97] K. Lange, P. McKenzie, and A. Tapp. Reversible space equals deterministic space. In Proceedings of the 12th IEEE Conference on Computational Complexity, pages 45-50, 1997.
[LMT00] K. Lange, P. McKenzie, and A. Tapp. Reversible space equals deterministic space. Journal of Computer and System Sciences, 60(2):354-367, 2000.
[Loo65] H. Loomis, Jr. A theory of high-speed clocked logic. IEEE Transactions on Electronic Computers, EC-14:157-172, 1965.
[Lov96] L. Lovasz. Random walks on graphs: a survey. In D. Miklos, V. T. Sos, and T. Szonyi, editors, Combinatorics, Paul Erdos is Eighty, volume 2, pages 353-398. Janos Bolyai Mathematical Society, Budapest, 1996.
[LS90] R. Levine and A. Sherman. A note on Bennett's time-space tradeoff for reversible computation. SIAM Journal on Computing, 19(4):673-677, 1990.
[LS97] H. Lefmann and P. Savicky. Some typical properties of large and/or Boolean functions. Random Structures and Algorithms, 10:337-351, 1997.
[LTV98] M. Li, J. Tromp, and P. Vitanyi. Reversible simulation of irreversible computation. Physica D, 120:168-176, 1998.
[Lup58] O. Lupanov. A method for synthesizing circuits. Izv. vysshykh uchebnykh zavedenii, Radiofizika, 1:120-140, 1958.
[Lup61a] O. Lupanov. Implementing the algebra of logic functions in terms of bounded depth formulas in the basis {+, *, −}. Soviet Physics Doklady, 6:107-108, 1961.
[Lup61b] O. Lupanov. O realizatsii funktsii algebry logiki formulami iz konechnykh klassov (formulami ogranichennoi glubiny) v bazise &, ∨, ¬. Problemy Kibernetiki, 6:5-14, 1961.
[Lup65] O. Lupanov. Ob odnom podkhode k sintezu upravlyayushchikh sistem. Problemy Kibernetiki, 14:31-110, 1965.
[LV96a] M. Li and P. Vitanyi. Reversibility and adiabatic computation: Trading time and space for energy.
In Proceedings of the Royal Society of London, Series A, volume 452, pages 769-789, 1996.
[LV96b] M. Li and P. Vitanyi. Reversible simulation of irreversible computation. In Proceedings of the 11th IEEE Computational Complexity Conference, pages 306-306, 1996. Submitted to Physica D, 1997.
[LV97] M. Li and P. Vitanyi. An Introduction to Kolmogorov Complexity and its Applications. Springer, 2nd edition, 1997.
[Max71] J. Maxwell. Theory of Heat. Longmans, Green and Co., London, 1871.
[McK81] B. McKay. The expected eigenvalue distribution of a large regular graph. Linear Algebra and its Applications, 40:203-216, 1981.
[MR95] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York, 1995.
[MS56] E. Moore and C. Shannon. Reliable circuits using less reliable relays. Journal of the Franklin Institute, 262(3):191-208, 1956.
[Nec66] E. Neciporuk. A Boolean function. Russian Academy of Sciences Doklady Mathematics, 1, 1966.
[Ost54] A. M. Ostrowski. On two problems in abstract algebra connected with Horner's rule. In O. Taussky-Todd, editor, Studies in Mathematics and Mechanics Presented to Richard von Mises, pages 40-48. Academic Press, New York, 1954.
[Per07] O. Perron. Über Matrizen. Mathematische Annalen, 64:248-263, 1907.
[PH70] M. Paterson and C. Hewitt. Comparative schematology. In Project MAC Conference on Concurrent Systems and Parallel Computation, pages 119-128, December 1970.
[Pip76] N. Pippenger. The realization of monotone Boolean functions. In Proceedings of the 8th Annual ACM Symposium on Theory of Computing, pages 204-210, 1976.
[Pip79] N. Pippenger. On simultaneous resource bounds. In Proceedings of the 20th Annual IEEE Symposium on Foundations of Computer Science, pages 307-311, October 1979.
[Pip80] N. Pippenger. Pebbling. In Proceedings of the 5th IBM Japan Symposium on Mathematical Foundations of Computing, 1980.
[Pud84] P. Pudlak. A lower bound on complexity of branching programs. In Proceedings of the 11th Symposium on Mathematical Foundations of Computer Science, volume 176 of LNCS, pages 480-489, September 1984.
[Raz85] A. Razborov. Lower bounds on the monotone complexity of some Boolean functions. Doklady Akad. Nauk USSR, 282:1033-1037, 1985.
[Raz88] A. Razborov. Formulas of bounded depth in basis (∧, ⊕) and some combinatorial problems. Voprosy Kibernetiki, USSR, pages 149-166, 1988.
[Rez62] V. Reznik. The realization of monotonic functions by means of networks consisting of functional elements. Soviet Physics Doklady, 6(7):558-561, 1962.
[Rot76] O. Rothaus. On bent functions. Journal of Combinatorial Theory, Series A, 20:300-305, 1976.
[RS42] J. Riordan and C. Shannon. The number of two-terminal series-parallel networks. J. Math. Phys., 21:83-93, 1942.
[RS94] J. Radhakrishnan and K. Subrahmanyam. Directed monotone contact networks for threshold functions. Information Processing Letters, 50(4):199-203, May 1994.
[Ruz79] W. Ruzzo. Tree-size bounded alternation. In Proceedings of the 11th Annual ACM Symposium on Theory of Computing, pages 352-359, April 1979.
[Sap89] A. Sapozhenko. O chisle antitsepei v mnogosloinykh ranzhirovannykh mnozhestvakh. Diskretnaya Matematika, 1:110-128, 1989.
[Sav88] P. Savicky. Random Boolean formulas representing any Boolean function with asymptotically equal probability (extended abstract). In Proceedings of the 12th Symposium on Mathematical Foundations of Computer Science, volume 324 of LNCS, pages 512-517. Springer, September 1988.
[Sav90] P. Savicky.
Random Boolean formulas representing any Boolean function with asymptotically equal probability. Discrete Mathematics, 83:95-103, 1990.
[Sav94] P. Savicky. On the bent Boolean functions that are symmetric. European Journal of Combinatorics, pages 407-410, 1994.
[Sav95a] P. Savicky. Bent functions and random Boolean formulas. Discrete Mathematics, 147:211-234, 1995.
[Sav95b] P. Savicky. Improved Boolean formulas for the Ramsey graphs. Random Structures and Algorithms, 6:407-415, 1995.
[Sav98] P. Savicky. Complexity and probability of some Boolean formulas. Combinatorics, Probability and Computing, 7(4):451-463, 1998.
[Sha38] C. Shannon. A symbolic analysis of relay and switching circuits. AIEE Transactions, 57:713-723, 1938.
[Sha48] C. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379-423, July 1948.
[Sha49] C. Shannon. The synthesis of two-terminal switching circuits. Bell System Technical Journal, 28:59-98, 1949.
[Sip80] M. Sipser. Halting space-bounded computations. Theoretical Computer Science, 10(3):335-338, March 1980.
[SW49] C. Shannon and W. Weaver. A Mathematical Theory of Communication. University of Illinois Press, Urbana, Illinois, 1949.
[Szi29] L. Szilard. Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen. Zeitschrift für Physik, 53:829-856, 1929.
[Tar88] E. Tardos. The gap between monotone and non-monotone circuit complexity is exponential. Combinatorica, 8(1), 1988.
[Tof80] T. Toffoli. Reversible computing. In Automata, Languages and Programming, 7th Colloquium, volume 85 of Lecture Notes in Computer Science, pages 632-644, 1980.
[Val84] L. Valiant. Short monotone formulae for the majority function. Journal of Algorithms, 5:363-366, 1984.
[vN56] J. von Neumann. Probabilistic logics and the synthesis of reliable organisms from unreliable components. In C. Shannon and J. McCarthy, editors, Automata Studies. Princeton University Press, 1956.
[Weg79] I. Wegener. Switching functions whose monotone complexity is nearly quadratic. Theoretical Computer Science, 9(1):83-97, July 1979.
[Weg82] I. Wegener. Boolean functions whose monotone complexity is of size n^2 log n. Theoretical Computer Science, 21(2):213-224, November 1982.
[Weg87] I. Wegener. The Complexity of Boolean Functions. Wiley-Teubner Series in Computer Science. John Wiley and Sons, New York, 1987.
[Yao83] A. Yao. Lower bounds by probabilistic arguments (extended abstract). In Proceedings of the 24th Annual IEEE Symposium on Foundations of Computer Science, pages 420-428, 1983.
[Yao85] A. Yao. Separating the polynomial-time hierarchy by oracles (preliminary version). In Proceedings of the 26th Annual IEEE Symposium on Foundations of Computer Science, pages 1-10, October 1985.
[Zur89] W. Zurek. Thermodynamic cost of computation, algorithmic complexity and the information metric. Nature, 341:119-124, September 1989.
