A Panoply of Quantum Algorithms

by

Bartholomew Furrow
B.Sc., Queen’s University, 2004

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Master of Science
in
The Faculty of Graduate Studies
(Physics)

The University of British Columbia
September, 2006

© 2006, Bartholomew Furrow

Abstract

This thesis’ aim is to explore improvements to, and applications of, a fundamental quantum algorithm invented by Grover[1]. Grover’s algorithm is a basic tool that can be applied to a large number of problems in computer science, creating quantum algorithms that are polynomially faster than the fastest known and fastest possible classical algorithms that solve the same problems. Our goal in this thesis is to make these techniques readily accessible to those without a strong background in quantum physics: we achieve this by providing a set of tools, each of which makes use of Grover’s algorithm or similar techniques, that can be used as subroutines in many quantum algorithms.

The tools we provide are carefully constructed: they are easy to use, and they are asymptotically faster than the best tools previously available. The tools that we supersede include algorithms by Boyer, Brassard, Høyer and Tapp[2], Buhrman, Cleve, de Wolf and Zalka[3] and Dürr and Høyer[4].

After creating our tools, we create several new quantum algorithms, each of which is faster than the fastest known classical algorithm that accomplishes the same aim, and some of which are faster than the fastest possible classical algorithm. These algorithms come from graph theory, computational geometry and dynamic programming. We discuss a breadth-first search that is faster than Θ(edges) (the classical limit) in a dense graph, maximum-points-on-a-line in Θ(N^(3/2) lg N) (faster than the fastest classical algorithm known), as well as several other algorithms that are similarly illustrative of solutions in some class of problem.

Through these new algorithms we illustrate the use of our tools, working to encourage their use and the study of quantum algorithms in general.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 A Brief History of Quantum Computing
    1.1.1 Fundamental Work
    1.1.2 Exponential Speedups
    1.1.3 Grover’s Algorithm and BBHT
    1.1.4 Algorithms Using BBHT
  1.2 Conventions, Notation, and Notes
  1.3 Capabilities of our Theoretical Quantum Computer
  1.4 New Material Presented in this Thesis
2 Grover’s Algorithm and Amplitude Amplification
  2.1 Grover’s Algorithm
  2.2 BBHT
  2.3 Amplitude Amplification
  2.4 BCWZ
  2.5 Inside an Iteration: Properties of F
  2.6 Summary
3 Tools
  3.1 Finding a Solution to F, findsol
  3.2 Minimum Finding, minfind
  3.3 Finding all x that Satisfy F, findall
  3.4 Finding a Minimal d Objects of Different Types, mindiff
4 Applications in Graph Theory
  4.1 Breadth-First Search, BFS
  4.2 Depth-First Search, DFS
  4.3 Single-Source Shortest Paths with Negative Edge Weights, SPNW
  4.4 All-Pairs Shortest Paths with Negative Edge Weights, APSP
  4.5 Improvements to Existing Quantum Graph Algorithms
    4.5.1 Query Complexity
    4.5.2 Single-Source Shortest Paths
    4.5.3 Bipartite Matching
5 Applications in Computational Geometry and Dynamic Programming
  5.1 Computational Geometry Algorithms
    5.1.1 Maximum Points on a Line, maxpoints
    5.1.2 Maximum Points on a Line: Z^d
    5.1.3 Maximum Points on a Line: R^2
  5.2 Dynamic Programming Algorithms
    5.2.1 Coin Changer, coinchange
    5.2.2 Maximum Subarray Sum, subarray-sum
6 Summary and Conclusions
  6.1 Future Directions
A BBHT: Running Time and Probability of Failure
Bibliography
List of Tables

6.1 Section 3.1’s findsol compared to classical and quantum alternatives
6.2 Sections 3.2–3.4’s algorithms minfind, findall and mindiff compared to classical and quantum alternatives
6.3 Chapter 4’s algorithms BFS, DFS, SPNW and APSP, compared to classical and quantum alternatives
6.4 Section 4.5’s algorithms, single-source shortest paths and bipartite matching, compared to classical and quantum alternatives
6.5 Chapter 5’s algorithms, maxpoints, coinchange and subarray-sum, compared to classical and quantum alternatives
A.1 Probability of failure and expected running time for BBHT

List of Figures

2.1 Starting state for Grover’s algorithm: N = 6, q = 1
2.2 One Grover iteration: N = 6, q = 1
4.1 (a) A non-maximum matching. (b) An augmenting path. (c) A new matching
4.2 (a) A non-maximum matching. (b) Partway through bipartite matching. (c) Another step. (d) A new matching

Acknowledgements

The author, being a neophyte to the art of writing scientific papers, would particularly like to thank his advisor, Bill Unruh, for being an excellent font of advice and encouragement, as well as being very supportive of the author’s choice of topic. He would also like to thank his second reader, David Kirkpatrick, for a series of excellent suggestions and no small number of corrections.
He would like to thank Yury Kholondyrev, Matthew Chan and others involved in the UBC programming team for some early ideas of problems to tackle, and plenty of practice explaining himself; and finally he would like to thank Dürr and Høyer, authors of [4], for being the first to show him an exciting field, full of potential.

1. Introduction

The field of quantum algorithms is very young, with the very notion of quantum computation having been introduced in the early 1980s[5, 6, 7]. This thesis’ aim is to make the study of quantum algorithms easily accessible to those without a strong background in quantum physics; we will achieve this by providing a set of tools that can be used as subroutines in many quantum algorithms. Great care is taken in the construction of these tools, which are both easy to use and technologically superior to those that are currently available, featuring asymptotically faster running times.

We will accomplish this aim in three steps, firming up work that has already been done and introducing new algorithms along the way. Our first step is to investigate the properties of two quantum search algorithms: the BBHT algorithm and the BCWZ algorithm. With BBHT we devote considerable effort to discovering its probability of error, which had not previously been done; we make use of the result in Chapter 3.

As quantum searches, BBHT and BCWZ take a function F(x), with x taking the values {0, . . . , N − 1} and F(x) taking the values 0 and 1, and find a (typically rare) element of the domain x such that F(x) = 1. We call this a “solution” for F. Equivalently, we can say that F is the characteristic function for some subset of {0, . . . , N − 1}, and quantum search algorithms’ goal is to find some element of that subset.

We will frequently make reference to F, N and M in this section: F refers to a function whose solutions we are trying to find, N is the size of its domain, and M is the number of solutions (equivalently: F refers to the characteristic function of some subset, M is the cardinality of that subset, and F’s domain is {0, . . . , N − 1}). The two algorithms we will examine, BBHT and BCWZ, while both quantum searches, are different in methodology and in their properties: BBHT runs faster than BCWZ when M is large (when there are multiple solutions), and BCWZ runs faster than BBHT when we require a high probability of success. We discuss what it means for these algorithms to fail, as well as the probability with which that happens, below.

The BBHT algorithm[2] was invented by Michel Boyer, Gilles Brassard, Peter Høyer, and Alain Tapp in 1996. It performs a quantum search in Θ(√(N/M)) time (see Section 1.2 for an explanation of O, Θ, Ω notation), making the assumption that at least one solution exists. Since we often do not know a priori if solutions exist, however, BBHT has limited utility. It is easy to modify BBHT so that, instead of assuming that solutions exist, it gives up after a certain point; but this gives it some probability of failure, a chance that it might fail to find a solution when one exists. That modification, not original to this paper[8, 9], is required for most uses of BBHT; in Chapter 2 we are the first to analyze it in depth, calculating its probability of failure and rederiving its running time.
The BCWZ algorithm[3] was the invention of Harry Buhrman, Richard Cleve, Ronald de Wolf and Christof Zalka in 1999. Like BBHT, it performs a quantum search; it also accepts a parameter ε, an upper bound on the probability of failure that we are willing to tolerate. Given that parameter, BCWZ runs in Θ(√(N lg ε⁻¹)) time. The advantage of BCWZ is that the dependence of the running time on the probability of failure ε scales as √(lg ε⁻¹): if one were to search with BBHT instead of BCWZ, trying it multiple times to reduce the probability of failure, one would arrive at Θ(√(N/M) lg ε⁻¹) time. We re-derive BCWZ in Chapter 2, showing a little more detail than the authors did in their original derivation.

Our second step is the introduction of our new tools, founded on BBHT and BCWZ, that perform various kinds of quantum searches. We combine these two, yielding a superior search algorithm that shares the best features of both: the fast running time of BBHT when there are many solutions to be found, and the fast running time of BCWZ when we want a very low probability of failure. Using this improved algorithm, we then derive a new procedure for finding the minimum of a function G(x), based on that of Dürr and Høyer[4]; we also analyze the complexity for finding all M solutions to a function F, and introduce a tool, mindiff, that will be useful for solving problems in graph theory. We do all this in Chapter 3.

Our third objective is to use these tools, showing their application to be straightforward. We invent several new quantum algorithms, each of which is polynomially faster than the fastest known classical algorithm that achieves the same goal, and some of which are polynomially faster than the fastest possible classical algorithm that achieves the same goal. Our new algorithms solve problems that come from the areas of dynamic programming, computational geometry, and graph theory. A summary of those speedups can be found in Chapter 6; perhaps our most impressive result is a dynamic programming algorithm that speeds up a problem that has a Θ(N³) classical solution (there is also a Θ(N³ lg lg N / lg N) algorithm due to Tamaki[10]) to Θ(N²) quantumly.

We also look at quantum algorithms by Dürr, Heiligman, Høyer & Mhalla[8] and Ambainis & Špalek[9], examining how they deal with failure in BBHT. We improve their algorithms by factors of order log N by replacing their use of the BBHT search algorithm with our improved BBHT/BCWZ combination.

Through these steps we hope to introduce the field of quantum algorithms to as broad an audience as possible; there are many exciting problems that remain to be solved, and we hope that this thesis will motivate and aid in such research.

1.1 A Brief History of Quantum Computing

1.1.1 Fundamental Work

Our chronicle begins in the years 1980–1982, with work done independently by Yuri Manin and Richard Feynman[5, 6]. Both researchers considered the problem of simulating the natural world using standard computers, and both came to the conclusion that it was impossible to do efficiently: simulating quantum mechanics with a classical computer would require work exponential in the number of particles being simulated. Feynman observed that if one had a computer whose basic operations were quantum mechanical in nature—what he was the first to call a quantum computer—one could, presumably, simulate a quantum system while only doing work polynomial in the number of particles. What this means is that quantum physics itself is somehow exponentially more powerful than a classical computer; and that something capable of harnessing quantum physics’ strengths might be similarly powerful.
In the early 1930s, a great deal of work was done on the notion of effective computation: what exactly computers can and cannot do. Several independent approaches were taken, pursued in work by Church, Turing and Kleene[11, 12, 13]; ultimately they all came to equivalent conclusions about computers’ abilities, in some sense launching the formal study of computer science. A quantum analog to these conclusions was described in 1985 by David Deutsch, where he stated[14]: “Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means.” In the same paper he introduced such a “universal model computing machine,” and thus launched the formal study of quantum computing.

There has been a great deal of study devoted to probabilistic algorithms: algorithms that employ randomness as part of their logic. Randomness permeates the algorithms in this thesis, since the procedures on which they are based involve operations on and measurements of quantum systems. Every algorithm presented here has some probability of failure: some probability that it will run, terminate, and return an incorrect response. There are different ways of dealing with this; we can repeat the algorithms, or parts of them, to reduce the probability of failure: this gives us a relationship between probability of failure and running time. Throughout this paper, the analyses of our algorithms’ running times will frequently include the symbol ε, indicating how much slower our algorithms become as we require lower and lower probabilities of failure.

1.1.2 Exponential Speedups

Although this thesis focusses on quantum algorithms that offer polynomial speedups over their classical (non-quantum) equivalents, we digress briefly here to discuss some early quantum algorithms that are exponentially faster than their classical counterparts—being either faster than the fastest possible classical algorithm to solve the same problem, or faster than the fastest known classical algorithm. These are, of course, what has made the field of quantum algorithms so exciting.

An early quantum algorithm was invented by Dan Simon in 1994[15]. Simon’s algorithm solved an artificial problem, one whose solution lacked obvious application; however he was able to show that a quantum algorithm would solve it exponentially more rapidly than any classical algorithm could. Moreover, Simon’s algorithm laid the groundwork for Peter Shor’s subsequent development of quantum factoring.

In 1994, Peter Shor published an algorithm that solved two important problems in number theory, both of which are generally believed to be super-polynomial on a classical computer: prime factorization, and the discrete log problem[16]. This algorithm is perhaps the best-known product of quantum computing: because it performs factoring in polynomial time, it could in principle be used to crack modern cryptosystems, such as RSA.

This section, regrettably, contains the last mention of exponential speedups in this thesis. While the study of exponential speedups is exciting, thus far results have been hard to come by, and generalize poorly. Thus one can factor, and do useful things with factoring, but one cannot solve more than one or two similar problems. We will instead focus on polynomial speedups brought by Grover’s algorithm, a very general tool that can be used to solve a seemingly huge number of problems either faster than they can be solved classically, or faster than the fastest known classical solution.
1.1.3 Grover’s Algorithm and BBHT

We will be comparatively brief in this section, since Chapter 2 expands on many of the developments we will discuss here.

With the exponential improvements over the fastest known classical algorithms offered by Simon and Shor, it is natural to ask whether quantum computers provide exponential speedups in general; one could ask if there is some general methodology, exponentially faster on quantum computers than on classical computers, for seeking solutions to some class of problems. In 1994, Charles Bennett, Ethan Bernstein, Gilles Brassard and Umesh Vazirani looked at searching spaces of size N, and concluded that an unstructured search for a unique solution would not be faster than Ω(√N) on a quantum computer, as opposed to Θ(N) on a classical computer[17]. They did not have an algorithm that would achieve this quadratic speedup, but their lower bound would prove prophetic.

In 1996, Lov Grover introduced an algorithm for what he called “searching for a needle in a haystack”: given (for example) a phone book with N randomly-ordered entries, Grover’s algorithm could find a particular name in Θ(√N) accesses to the book[1]. Classically this would take Θ(N) accesses: we would have to look at each entry of the phone book individually (note that we do not normally build phone books this way: traditionally they are alphabetized, allowing for a much more efficient search).

Grover’s algorithm has potential applications that range far further than lookups in poorly-designed phonebooks, but in its original form it is quite limited: while it could find Mr. Zloklikovits in Vancouver without difficulty, it would have a great deal more difficulty with finding an arbitrary one of the many McDonald’s restaurants[18]. Grover, although he did discuss it, did not go into detail about what would happen in the case where there was more than one needle in the haystack; and so his algorithm needed modification.

We now discuss two different enhancements to Grover’s algorithm: the BBHT algorithm and the BCWZ algorithm. Both of them accomplish Grover’s original goal, quickly finding a needle in a haystack; and both work even if there are multiple needles, finding one at random. BBHT is faster than BCWZ when there are many solutions to be found, while BCWZ is faster when a high probability of success is required.

Michel Boyer, Gilles Brassard, Peter Høyer, and Alain Tapp published the BBHT algorithm in 1996[2]. Consider Grover’s phone book example, and our difficulty with finding any one of the M different McDonald’s restaurants. Boyer, Brassard, Høyer and Tapp constructed an algorithm that would find a random McDonald’s, after performing Θ(√(N/M)) accesses to the phone book. More generally, the BBHT algorithm solves the following problem: given a function F (“is this entry a McDonald’s?”) that maps the domain {0, . . . , N − 1} (the phonebook) to {0, 1}, find any x such that F(x) = 1 (find a McDonald’s). The applications for this are very broad, as we shall see throughout this thesis. It is also worth noting that BBHT may fail; there is some probability that, even if a solution (needle, McDonald’s) exists, BBHT will not find one. We will discuss this further in Chapter 2.
Harry Buhrman, Richard Cleve, Ronald de Wolf and Christof Zalka published the BCWZ algorithm in 1999[3]. Their algorithm used a technique known as amplitude amplification, invented by Brassard and Høyer and further explored by Brassard, Høyer, Mosca and Tapp, which we will discuss in Chapter 2. The BCWZ algorithm solves the same problem as BBHT; but unlike BBHT, BCWZ’s speed does not increase with the number of solutions. Like BBHT, it may fail to find a solution when one exists; BCWZ, however, is designed to minimize that probability in an efficient manner.

1.1.4 Algorithms Using BBHT

Shortly after the publication of BBHT, Christoph Dürr and Peter Høyer, also in 1996, demonstrated a use for the algorithm: minimum finding[4]. Given some function G that maps the domain {0, . . . , N − 1} onto some totally ordered set, Dürr and Høyer’s minimum-finding algorithm finds any x such that G(x) is minimal, after Θ(√N) calls to G.

BBHT also provides many speedups in graph theory. Christoph Dürr, Mark Heiligman, Peter Høyer and Mehdi Mhalla wrote the first paper on the topic, which gave quantum solutions to several classic graph problems[8]. The algorithms that they gave are polynomially faster than their classical counterparts. We will examine their algorithm for finding shortest paths in weighted graphs in Section 4.5.2.

Another paper speeding up graph problems was written by Ambainis and Špalek; they also addressed classic graph problems[9]. We will examine their algorithm for maximum bipartite matching in Section 4.5.3, speeding it up slightly. They also addressed problems similar to bipartite matching, such as maximum flow in integer networks, which we will not examine.

1.2 Conventions, Notation, and Notes

There are a few things that we need to discuss for those unfamiliar with the procedures of algorithmic analysis. We create several new algorithms in this thesis, and for each of them we are interested in analyzing its running time: how long it takes to compute a solution. Since our algorithms have randomness in them, we will be interested in the expected time it takes to run our algorithms on some input. Since any given problem can have a very large number of possible inputs, we use a common formalism from complexity analysis:

For any problem, consider the set of all possible inputs and group them together by their size. For any given size, take the worst possible input of that size (the input that causes our algorithm to have the slowest expected running time). Now we have a mapping from input size to time; that tells us how fast our algorithm is as a function of input size, but this function may be erratic or hard to analyze. We need a mechanism for comparing these functions, so we can say that one algorithm is “faster” or “slower” than another. To do so, we compare these functions against well-behaved benchmark functions, such as N, N², N lg N, and 2^N.

Consider such a well-behaved benchmark function, b(N), and a function f(N) that indicates (as discussed above) the worst-case expected running time of some algorithm on input of size N. We say that f(N) is O(b(N)) if, for some constant c, f(N) < cb(N) for all N; this indicates that our algorithm’s running time is no slower than b(N). We say that f(N) is Ω(b(N)) if, for some constant c, f(N) > cb(N) for all N, indicating that our algorithm’s running time is no faster than b(N). Finally, we say that f(N) is Θ(b(N)) if it is both O(b(N)) and Ω(b(N)), which indicates that our algorithm’s running time goes approximately as b(N).
This formalism is convenient for many reasons, not the least of which is that it allows us to dispense with constants and lower-order terms. 2N² + 10000 is Θ(N²), as is 5N² + 50. This lets us dispense with many concerns about machine-specific issues: is division slower than addition? As long as it’s only slower by a constant factor, it doesn’t matter.

Following the convention found in Cormen et al.’s book[19], we assume that basic arithmetic and addressing operations take constant time. This is consistent with other papers on quantum algorithms: see for example [1, 2, 3, 4, 8, 9].

A note on terminology: we frequently use the term “classical” to refer to algorithms that are designed for a classical model of computing. While this is a standard term, it is perhaps unfortunate: in computer science literature, a classical algorithm is one that has proven its worth and stood the test of time. Euclid’s and Dijkstra’s algorithms are perhaps the best examples, being algorithms in common use that were invented hundreds of years ago. We attempt to maintain some of this usage by using the term “classic” to refer to such algorithms.

Finally, wherever the character ‘z’ appears on its own in this document, it is pronounced “zed.”

1.3 Capabilities of our Theoretical Quantum Computer

This thesis’ algorithms are designed based on certain assumptions about what sort of machine they will be run on; we discuss those here. Fundamentally, we take our machine to be equivalent to a “universal model computing machine” by Deutsch’s definition[14]. That assumption alone is insufficient, however, to let us analyze our algorithms: any classical computer is equivalent to a Turing Machine, but there are many operations that are asymptotically faster if one has a standard personal computer using C++.

As such, we assume throughout the paper that the computer on which our algorithms are running is fundamentally very similar to a modern personal computer: specifically that it follows the RAM model outlined by Cormen et al.[19]. To add to that model the sort of capabilities that we associate with an idealized quantum computer, we first say that all our bits are qubits, and make the assumption that we do not have to concern ourselves with decoherence (that the qubits will stay at whatever value they are given); furthermore, that an arbitrary number of qubits can be entangled together. While the assumption that decoherence will not be an issue is perhaps unrealistic, we hope that our algorithms can be appropriately modified, that error correction codes will be sufficiently sophisticated, or that, while not arbitrary, decoherence times will be long enough for many of our algorithms to work.

Our processor should be capable of quantum operations. In particular, it should be capable of some universal set of quantum logic gates as outlined in Nielsen and Chuang’s text[20]. A universal set of quantum logic gates is capable of simulating an arbitrary quantum gate, though the operations in this thesis only require basic arithmetic operations, Hadamard gates, and single-qubit gates.

We require that our random-access memory be accessible in superposition: this means that if we pass a linear combination of number states to our RAM-accessor, such as (|0〉 + |2〉 + |5〉)/√3, it can extract all the appropriate memory entries in parallel. (This notation will be defined and discussed in Section 2.1. Intuitively, |5〉 indicates a series of qubits, each of which is a_i|0〉 + b_i|1〉, set to |0〉 . . . |0〉|1〉|0〉|1〉, the binary representation of 5.) To illustrate this, let us define the operator M, which retrieves qubits from memory.
To access the 5th qubit in memory, we would perform the following operation: M|5〉|0〉. This would give us |5〉|a〉, where a is the content of the 5th qubit in memory. The result would be entangled with the original qubit, still sitting in the 5th slot of memory. By accessing memory in superposition, we mean performing the operation M(|5〉 + |2〉)|0〉/√2 and getting the result (|5〉|a〉 + |2〉|b〉)/√2, where b is the content of the 2nd qubit in memory.
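The behaviour of M on a toy memory can be checked with a short classical simulation. The sketch below is not from the thesis: it assumes M is realized as the reversible lookup |i〉|d〉 → |i〉|d ⊕ m_i〉 (with m_i the bit stored in slot i), builds that operator as a permutation matrix for a hypothetical eight-slot memory, and applies it to (|5〉 + |2〉)|0〉/√2.

    import numpy as np

    mem = [0, 1, 1, 0, 1, 0, 1, 1]        # hypothetical contents of an 8-slot qubit memory
    n_addr, n_data = 8, 2                 # 8 address states, 2 data states -> 16 joint states

    # M: |i>|d> -> |i>|d XOR mem[i]>, written as a permutation matrix.
    M = np.zeros((n_addr * n_data, n_addr * n_data))
    for i in range(n_addr):
        for d in range(n_data):
            M[i * n_data + (d ^ mem[i]), i * n_data + d] = 1.0

    state = np.zeros(n_addr * n_data)
    state[5 * n_data] = state[2 * n_data] = 1 / np.sqrt(2)   # (|5>|0> + |2>|0>)/sqrt(2)

    out = M @ state
    for idx in np.flatnonzero(out):
        print(f"|{idx // n_data}>|{idx % n_data}>  amplitude {out[idx]:+.3f}")

This prints the two terms |2〉|m_2〉 and |5〉|m_5〉, each with amplitude 0.707: a single application of M has read both memory slots.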
Other papers on quantum algorithms[8, 9] choose to focus on analyzing the number of queries to some oracle their quantum algorithms take, eschewing the analysis of running time. We will discuss this further in Section 4.5.1; our results can be similarly analysed, and for the most part our running times are equal to our query complexities. In some algorithms, particularly that of Section 4.5.2, there is a difference; it is because of such differences that we feel running time is a better measure than query complexity of algorithms’ performance, despite a large degree of uncertainty about exactly how quantum computers will be implemented in the future (and what unknown factors will appear in running times).

1.4 New Material Presented in this Thesis

This section is intended to be a quick reference, providing ready access to material that is original to this thesis. We will list this material in the order in which it appears. Recall that F refers to a function whose solutions we are trying to find, N refers to the size of F’s domain, and M refers to the number of solutions.

Our analysis of BBHT’s probability of failure (Section 2.2 and Appendix A) is original, and is one of the most important contributions of this thesis; also original is our detailed presentation of BCWZ (Section 2.4).

Section 3.1’s findsol is original, and is perhaps the most important contribution of this thesis; also original is its application in Section 3.3, findall. Section 3.2’s minfind is Dürr and Høyer’s[4], modified to take advantage of findsol’s capabilities (they used an unmodified BBHT, and aborted it after Θ(√N) time). Section 3.4’s mindiff comes from Dürr, Heiligman, Høyer and Mhalla’s similar algorithm[8], though we provide several of the algorithm’s specifics that did not impact its query complexity, and were thus not studied by the original authors (see Section 4.5.1).

The algorithms in Sections 4.1 and 4.2 (BFS and DFS) are our own, although Ambainis and Špalek have an almost-identical algorithm[9] for breadth-first search that used BBHT instead of findsol.

The algorithm in Section 4.3 (SPNW) is our own, as is Section 4.4’s APSP. Our improvements to Dürr, Heiligman, Høyer and Mhalla’s single-source shortest paths[8] and Ambainis and Špalek’s bipartite matching[9] algorithms are original (see Section 4.5), and we add some additional detail to DHHM’s single-source shortest paths.

The algorithms in Chapter 5, maxpoints, coinchange and subarray-sum, are all original, though they are based on classical algorithms that are not original to this thesis.

2. Grover’s Algorithm and Amplitude Amplification

2.1 Grover’s Algorithm

To describe Grover’s algorithm, we will use Grover’s original metaphor: needle-finding. Grover’s algorithm is a procedure to find a “needle” in a “haystack.” More precisely, given a function F, with value 0 except at some unique q (the “needle”) for which F(q) = 1, acting on the domain {0, . . . , N − 1} (the “haystack”), Grover’s algorithm finds the needle q. In this section we will explain the physics of Grover’s algorithm, taking the needle always to be the integer q.

First we need to define some notation. A qubit is a two-state system, and so we will represent the two states as |0〉 and |1〉. One qubit’s state, then, can be written as a|0〉 + b|1〉, where a and b are complex numbers such that |a|² + |b|² = 1. If we have two qubits, A and B, their joint state can be written as c|0〉_A|0〉_B + d|0〉_A|1〉_B + e|1〉_A|0〉_B + f|1〉_A|1〉_B. We change notation here to write this as c|00〉 + d|01〉 + e|10〉 + f|11〉, and finally change notation one more time to write the above as c|0〉 + d|1〉 + e|2〉 + f|3〉. (There is clearly an ambiguity whenever we write |0〉 or |1〉, since each could represent the state of a single qubit or the state |00 . . . 000〉 (respectively |00 . . . 001〉) of some number of qubits. We hope which is meant will be clear from context.) Note that we started by writing our qubits separately, and ended by representing the qubit-string (or “qudit” as it is sometimes known) as an integer. We will consistently use this final notation throughout the paper. We will use Roman characters and integers to represent eigenstates of the number basis (such as |i〉 and |3〉), and Greek characters to represent arbitrary states.

To implement Grover’s algorithm, we assume that we have implemented F as a quantum black box F. For these purposes, this means that we can pass a state |i〉 into F, and end up with F|i〉 ≡ (−1)^F(i)|i〉. Because quantum physics is linear, we can evaluate F(0), . . . , F(N − 1) “simultaneously” by implementing F and passing in |Ψ〉 ≡ (|0〉 + |1〉 + . . . + |N − 1〉)/√N.

The first step in Grover’s algorithm is to put our system |φ〉 into the following starting state, which we will refer to as |Ψ〉 all throughout this chapter:

|Ψ〉 = (1/√N)|0〉 + (1/√N)|1〉 + . . . + (1/√N)|q〉 + . . . + (1/√N)|N − 1〉    (2.1)

Grover’s algorithm works by gradually nudging the amplitude on the |q〉 term toward 1, and simultaneously nudging the other amplitudes toward 0. The last step of the algorithm is measurement: since |q〉’s amplitude is now near 1, it is likely that our measurement’s result will be q. This series of nudges is called amplitude amplification on |φ〉 (our notation for “our current state”).

We begin our discussion of Grover iterations with a diagram of the vector space in which we will sit. Our starting state is |Ψ〉, the even superposition of all possible inputs; our desired state is |q〉. Considering the span of those two vectors, which is two-dimensional, we can now construct Figure 2.1.

[Figure 2.1: Starting state for Grover’s algorithm: N = 6, q = 1. θ is defined to be the angle between |Ψ〉 and the vertical axis. Note that the vertical axis has no |1〉 term, and is thus orthogonal to the horizontal axis.]

The first step in Grover’s algorithm is to take our initial state |Ψ〉 and apply F to it. Revisiting our definition of F, we realize that this simply reflects our state about the vertical axis. That is the first step in a Grover iteration; the second step is to reflect about our initial state, |Ψ〉 (we will call the operator that does this J, and provide its details later). We can see the results of this process in Figure 2.2.

[Figure 2.2: One Grover iteration: N = 6, q = 1.]

It is easy to see from Figures 2.1 and 2.2 that a single iteration increases the angle between the current state |φ〉 and the vertical axis by 2θ.
θ was defined in Figure 2.1 to be the angle between the vertical axis and |Ψ〉, our starting state; since the probability of measuring |q〉 in our starting state is 1/N, sin²(θ) = 1/N and thus θ = sin⁻¹(1/√N).

We are still missing a few details, but we can now write Grover’s algorithm as the following procedure:

1. Create the black box F such that F|i〉 = (−1)^F(i)|i〉.
2. Let the state of our system be called |φ〉. Initialize |φ〉 to |Ψ〉 ≡ (|0〉 + |1〉 + . . . + |N − 1〉)/√N.
3. Let θ ≡ sin⁻¹(1/√N). Let α = θ.
4. Repeat the following, until α + 2θ > π/2:
   (a) Apply F to |φ〉, followed by J, the reflection about the starting state |Ψ〉.
   (b) Let α = α + 2θ.
5. Measure |φ〉 in the number basis; call the result x.
6. If F(x) = 1, return x. Otherwise go to step 1.

The “goto” in step 6 is necessary because θ may be such that α never exactly reaches π/2, the point at which x would necessarily equal q. This means that every time we run through steps 1–6, there is some probability of “failure”—measuring the wrong x. Since we never measure unless our current angle from the vertical axis is ≥ π/6—if the angle is less, we will do another Grover iteration—this probability is ≤ 2/3, and so in the worst case the number of times we repeat the body of the algorithm has expectation 3.

We still lack two major details: we have not discussed the implementation of J; and while we have a procedure, we would like to know how long it takes (how often step 4 will repeat for a given N). Implementing J is a matter of combining a few quantum gates. J’s purpose is to reflect about |Ψ〉, our initial state; to implement it, we can switch bases so that |Ψ〉 = |0〉, and then reflect about |0〉. We leave the details to Grover’s original paper[1].

Finally, we need a running time. For starting states with a large number of solutions (θ > π/6), we effectively choose a random input and check it against F, for which the expected number of applications of F is a constant. For θ ≤ π/6, we perform at least one Grover iteration. We first note that our number of iterations is ≤ π/(2θ). Since θ ≥ sin θ in that range, we have our number of iterations ≤ π/(2 sin θ) = π√N/2. We can take a lower bound similarly, and arrive at Θ(√N) iterations through step 4, and thus Θ(√N) calls to F.

As for running time that comes from sources other than the calls to F, we have Θ(√N) calls to J. J is a Θ(1) operation, and thus takes no more time than F (which must be Ω(1)). One detail of J is that it only works on an integer number of qubits: executing it leaves us with entanglement over states that are not in the domain of F, and can thus break Grover’s algorithm. To deal with that we can simply extend the domain of F to the nearest power of 2, defining F(x) = 0 for x not in the original domain. This increases N by a factor of two at most, and thus leaves us with the same asymptotic performance.
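Because the state always lives in the two-dimensional span of |Ψ〉 and |q〉, the whole procedure can be simulated with a short classical program. The numpy sketch below is not part of the thesis (N = 64 and the needle position q = 17 are hypothetical); it carries out exactly the loop above—apply F, reflect about |Ψ〉, stop once α + 2θ > π/2—and prints the final probability of measuring the needle.

    import numpy as np

    N, q = 64, 17                      # hypothetical haystack size and needle position
    psi = np.full(N, 1 / np.sqrt(N))   # |Psi>: the even superposition of all inputs
    phi = psi.copy()

    theta = np.arcsin(1 / np.sqrt(N))
    alpha, iterations = theta, 0
    while alpha + 2 * theta <= np.pi / 2:
        phi[q] *= -1                           # apply F: flip the sign of the needle term
        phi = 2 * psi * (psi @ phi) - phi      # apply J: reflect about the starting state |Psi>
        alpha += 2 * theta
        iterations += 1

    print("Grover iterations:", iterations)
    print("probability of measuring q:", phi[q] ** 2)

For N = 64 this performs 5 iterations and leaves roughly a 96% chance of measuring q, which illustrates why the occasional “go to step 1” is needed.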
2.2 BBHT

Grover’s algorithm has a major disadvantage, which is that it is designed for the one-needle case: where F(x) = 1 for only one x. It has the problem that every Grover iteration increases |φ〉’s angle from the vertical axis by 2θ; after about π/(4θ) iterations, each iteration reduces the probability of obtaining the correct result.

When the number of solutions M is known, Grover’s algorithm is easy to adapt. Recall that θ, the angle between the vertical axis and our starting state, was initially sin⁻¹√(1/N). Since we now have M possible solutions, the probability that measuring |Ψ〉 would give one of the solutions is M/N; therefore the square of the projection of |Ψ〉 onto the horizontal axis, sin²θ, must equal M/N. Thus we have that θ = sin⁻¹√(M/N); aside from noting that change, we can proceed normally with Grover’s algorithm, obtaining a random solution when it terminates. If the number of solutions is not known, however, then neither is θ; and Grover’s algorithm as written will not consistently offer us a quadratic speedup.

The BBHT algorithm[2] solves the needle-finding problem when the number of solutions is not known. The concepts it makes use of were introduced in Section 2.1, and so we begin by presenting Boyer, Brassard, Høyer and Tapp’s algorithm, slightly modified for our purposes:

1. Create the black box F such that F|i〉 = (−1)^F(i)|i〉.
2. Initialize m = 1 and set λ = 1.31. (This choice of λ was made to minimize the running time of BBHT under a certain approximation (see Appendix A): any number strictly between 1 and 2 would work.)
3. While m ≤ 2√N, repeat the following unless instructed to return:
   (a) Let the state of our system be called |φ〉. Initialize |φ〉 to |Ψ〉, the equal superposition of all states.
   (b) Choose an integer j uniformly at random such that 0 ≤ j < m.
   (c) Apply j Grover iterations JF to |φ〉.
   (d) Measure |φ〉 in the number basis, and call the result x.
   (e) If F(x) = 1, return x; otherwise, set m to λm.
4. Return false.

Note that we return the special value false if no solution has been found after a sufficiently large number of iterations through step 3. Since M is unknown, it is possible that M = 0 and there are no solutions; thus it is important to give up at some point if no solution has yet been found. That is a modification we have made from the original algorithm: we allow our algorithm to give up if there is probably no solution, which also introduces the possibility of failure (which we explore below); it may give up too soon.

Intuitively, BBHT works by trying several different numbers of Grover iterations, which (depending on how many iterations there were) will yield different probabilities of success for different values of M. The algorithm as a whole will fail with probability < 0.5·M^−0.93, and its total number of calls to F has an expectation of Θ(√(N/M)); we prove this in Appendix A.

BBHT is simple and powerful: given a binary function F and its domain, it finds x so that F(x) = 1 if such an x exists, returns false if it does not, and requires no additional information. Its one disadvantage is that it may fail, returning false incorrectly. BBHT is the primary tool used by Dürr, Heiligman, Høyer and Mhalla[8] and Ambainis and Špalek[9] in constructing their algorithms; we use it in combination with BCWZ (see Section 2.4) to ameliorate the effects of failure on running time.
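On quantum hardware, step 3c would really run j Grover iterations; for experimenting with the classical control flow, the measurement statistics can be imitated using the known success probability sin²((2j + 1)θ). The sketch below is not from the thesis: grover_measure is a hypothetical stand-in that peeks at the solution set only to reproduce those statistics, and bbht transcribes the steps above.

    import math, random

    def grover_measure(F, N, j):
        # Classical stand-in for "prepare |Psi>, apply j Grover iterations, measure":
        # the outcome is a solution with probability sin^2((2j+1) theta).
        sols = [x for x in range(N) if F(x)]
        if not sols:
            return random.randrange(N)
        theta = math.asin(math.sqrt(len(sols) / N))
        if random.random() < math.sin((2 * j + 1) * theta) ** 2:
            return random.choice(sols)
        non_sols = [x for x in range(N) if not F(x)]
        return random.choice(non_sols) if non_sols else random.choice(sols)

    def bbht(F, N, lam=1.31):
        m = 1.0
        while m <= 2 * math.sqrt(N):
            j = random.randrange(math.ceil(m))        # integer j with 0 <= j < m
            x = grover_measure(F, N, j)
            if F(x):
                return x                              # a random solution
            m *= lam
        return False                                  # give up: probably no solution

For example, bbht(lambda x: x % 97 == 0, 1024) usually returns one of the eleven multiples of 97 below 1024, and only rarely gives up and returns False.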
2.3 Amplitude Amplification

Amplitude amplification generically refers to any process where one takes some state a|φ〉 + b|χ〉 and selectively increases the magnitude of one of its amplitudes, a or b. Grover’s algorithm is an amplitude amplification process: we start off with an even superposition of all input states, and amplify the amplitude of the states that satisfy some function F.

An amplitude amplification process is a search, much like Grover’s algorithm or BBHT. Like Grover’s algorithm, an amplitude amplification process starts in some state |β〉, gradually amplifies the amplitude of the terms we are interested in through a process much like a Grover iteration, and then terminates by measuring the state.

In Section 2.4 we will make use of an algorithm called exact search, an amplitude amplification process invented by Brassard and Høyer[21]. One of the difficulties with Grover’s algorithm is that it does not necessarily succeed on the first try, even if we know M; because the probability of measuring a correct answer is sin²((2m + 1)θ) after m Grover iterations, there may not be some integer number of iterations that would guarantee success. If we allow modifications to our Grover iterations, however, we can decrease the amount by which our angle increases, and make our required number of iterations an integer. Brassard and Høyer’s exact search does just that, accepting a parameter M1 > 0 and, if M1 is the number of solutions to the given function F, returning a random such solution in Θ(√(N/M1)) calls to F. If M1 is not the number of solutions to F, it will terminate in Θ(√(N/M1)) calls to F, and return either a solution or false.

Exact search’s strength does not come from the fact that it never fails if the guess for M is right; after all, if M is known then it is easy to adapt Grover’s algorithm to try until it succeeds. Its strength is that if we try exact search for a given M1 and it fails, then the function definitively does not have exactly M1 solutions. We will find this to be useful in Section 2.4, when we explore the BCWZ algorithm.

Amplitude amplification has a number of other uses, such as working with heuristics to solve difficult decision problems; we will not explore those uses here. Suffice it to say that it is a useful procedure in general, and worthy of exploration in its own right.

2.4 BCWZ

The BCWZ algorithm, invented by Buhrman, Cleve, de Wolf and Zalka[3], is another general search algorithm; but where BBHT is speedy when there are many solutions, BCWZ deals well with errors.

Let us examine how we typically deal with errors classically. Consider a function G that finds a solution to some problem. If a solution exists, G finds that solution with probability 1 − p; otherwise it simply returns false. Suppose that we suspect there are no solutions: given that G is our only tool, how can we be sure to within a probability of ε that no solution exists?

The solution is to keep trying G until the probability is less than ε that, if a solution existed, it was not found. Since the probability that G fails to find a solution when one exists is p, if we try r times then the probability that we fail is p^r. Since we want p^r = ε, we have r = log(ε)/log(p). We thus need to repeat G Ω(lg ε⁻¹) times before we can be confident that our probability of failure is less than ε.

The BBHT algorithm, which we discussed in Section 2.2, has all of the properties that we assigned to G. If it finds a solution, we know immediately; if it fails to find one, or if there is none, it will claim no solution exists. It fails with some probability p if there is a solution; so if we want to ensure that a function F has no solutions (and be right with probability 1 − ε) we must try BBHT Ω(lg ε⁻¹) times.

BCWZ does better than that, and (as we will see) reduces our error-dependence to Θ(√(lg ε⁻¹)). We have already introduced the ideas that we need, and so we will begin by writing out the algorithm.
Note that step 1 has a logarithm taken in base 1.5; our choice of 1.5 is not entirely arbitrary, but is important to the analysis of the algorithm. Our ultimate requirement is that our logarithm’s base X is such that 0.61^(log_X ε⁻¹) < ε for all ε in (0, 1), and so we could have taken any base in the range (1, 1.6].

1. Let M0 = ⌈log₁.₅ ε⁻¹⌉.
2. Apply exact search, taking its “guess” for M, M1, to be all integers in [1, M0]. If a result other than false is ever returned, return it.
3. Repeat the following steps M0 times unless instructed to return:
   (a) Let the state of our system be called |φ〉. Initialize |φ〉 to |Ψ〉, the even superposition of states.
   (b) Choose an integer j uniformly at random such that 0 ≤ j < √(N/M0).
   (c) Apply j Grover iterations JF to |φ〉.
   (d) Measure |φ〉 in the number basis, and call the result x.
   (e) If F(x) = 1, return x.
4. Return false.

After step 2, we can be sure that M = 0 or M > M0; consequently, √(N/M0) > √(N/M). We found in our analysis of BBHT (see Appendix A) that every time we go through a series of steps such as step 3 above, we have probability at least 0.39 of finding a solution if one exists. So either M ≤ M0, and we are guaranteed to find a solution in step 2 (the probability of failure is 0); or we arrive at step 3, and after each iteration our probability of failure is reduced by a factor of 0.61. Our probability of failure for M > M0 is thus 0.61^(log₁.₅ ε⁻¹) < ε. As for running time, our number of calls to F is bounded by:

Σ_{M1=1}^{M0} Θ(√(N/M1)) + M0·Θ(√(N/M0)) = Θ(√(N lg ε⁻¹))    (2.2)

Thus demanding a low probability of failure from BCWZ is cheaper than demanding the same from BBHT; but since BBHT is a factor of √M faster than BCWZ when its error probability is a constant, it is rarely practical to use BCWZ on its own. We will explore using the two algorithms jointly in Section 3.1, and thus create a tool that is more powerful than either alone.

It is worth noting that we have made a small modification to the original BCWZ algorithm: instead of our step 3, Buhrman, Cleve, de Wolf and Zalka have a step saying “conduct M0 searches, each with O(√(N/M0)) queries.” They later conclude that “each of the searches in [that step] can be made to have error probability ≤ 1/2.” It is unclear what “can be made to have” means there, but such searches may have probability of success near 0 if an unfortunate choice of ε (and thus M0) is made. Our step 3 avoids that difficulty, and ends up having error probability ≤ 0.61. This can be reduced to 0.5 by the simple expedient of trying it twice; this procedure is presumably what the authors intended, or close to it.
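The same kind of classical stand-in used for BBHT above can model BCWZ’s control flow. In the sketch below (not from the thesis), the exact search of step 2 is idealized: it succeeds precisely when the guess M1 equals the true number of solutions, which this classical model is allowed to count; everything else follows the steps above.

    import math, random

    def bcwz(F, N, eps):
        sols = [x for x in range(N) if F(x)]             # known only to this classical model
        M0 = math.ceil(math.log(1 / eps, 1.5))
        # Step 2: exact search with every guess M1 in [1, M0]; it is guaranteed
        # to return a solution when the guess is exactly right.
        for M1 in range(1, M0 + 1):
            if M1 == len(sols):
                return random.choice(sols)
        # Step 3: M0 rounds of Grover with a random iteration count j < sqrt(N / M0).
        theta = math.asin(math.sqrt(len(sols) / N)) if sols else 0.0
        for _ in range(M0):
            j = random.randrange(max(1, math.ceil(math.sqrt(N / M0))))
            if random.random() < math.sin((2 * j + 1) * theta) ** 2:
                return random.choice(sols)
        return False                                     # wrong with probability < eps when M > 0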
2.5 Inside an Iteration: Properties of F

Each of the search algorithms that we have discussed operates with some function F, and finds some solution x such that F(x) = 1; we have not given much discussion to what properties F may have, however. Can the function access tables of previously-stored information? Can it look things up in a hash table or search tree? Can it write in memory, and have those writings available in the next iteration? To some extent this discussion depends on the capabilities we have chosen for our quantum computer, but there will inevitably be some things we need and some things we can never have.

It is worth noting that in this thesis, the functions we evaluate inside Grover iterations are always fundamentally classical in nature: look up something in a table; multiply two numbers together; if A then do B; and so forth. F is executed as a quantum black box on a superposition of states, so it needs to be implemented with quantum gates—objects that are fundamentally reversible. Furthermore, after F has been evaluated and each term a_i|i〉 has become either a_i|i〉 or −a_i|i〉, we need to have no intermediate qubits from our computation of F tied up with that result. After all, if we took F(|i〉|0〉 + |j〉|0〉) and somehow came out with (−1)^F(i)|i〉|a〉 + (−1)^F(j)|j〉|b〉, then any further manipulation of our state with Grover’s algorithm would not budge |a〉 and |b〉, and thus would not allow interference between |i〉 and |j〉.

Fortunately for us, decades of work have been done on reversible computing, and a paper by Bennett[22] contains what we need: any irreversible procedure may be executed reversibly, as long as we keep any intermediate results. This allows us to perform operations such as multiplication, which has no inverse—given ab we cannot necessarily find a and b—reversibly, by storing ancillary information (given ab and a, we can find b).

Generalizing this process to an arbitrary quantum algorithm F, we undertake the following process: we start by executing F, taking care never to erase any information. If at any point we have to perform a fundamentally irreversible operation like multiplication, we keep ancillary information so that we are able to reverse the operation. Once we have finished our computation and determined k = F(x), our second step is to apply a gate that gives us a total phase of (−1)^k. Finally we take every step involved in the execution of F and reverse it in order from last to first. We then end up in our initial state, but for the phase on |x〉.

The fact that we reverse each step has the effect of undoing any writing that we do to memory; so while we can perform an arbitrary classical computation inside F, and can read from memory set before the beginning of the computation (such as arrays, hash tables and binary search trees), any alterations that we make to that memory will be undone before the next Grover iteration.

2.6 Summary

We have discussed four search algorithms in this chapter. We summarize them here, using our notation from earlier: F is a binary-valued function that we are trying to find a solution for (x such that F(x) = 1), N is the size of F’s domain, and M is the number of solutions. An algorithm is said to fail if it returns that there are no solutions when there is at least one.

Grover’s algorithm is a search that, when M = 1, takes Θ(√N) calls to F and never fails. It is made obsolete by the other three algorithms.

BBHT is a search that does not require knowledge of M, takes Θ(√(N/M)) calls to F, and fails with probability less than 0.5·M^−0.93. If M = 0, it takes Θ(√N) time but never fails.

Exact search is a search that takes a guess for M, which we call M1; if that guess is correct, it takes Θ(√(N/M)) calls to F, and has probability 0 of failure. Otherwise it takes Θ(√(N/M1)) calls to F, and fails with some probability.

BCWZ is a search that does not require knowledge of M, accepts a parameter ε, takes Θ(√(N lg ε⁻¹)) calls to F, and fails with probability ≤ ε.
3. Tools

Here we present some basic algorithms, founded on BBHT and BCWZ (see Section 2.6). These serve as subroutines to be used throughout this thesis, where they will be referred to by their abbreviated names, found in the section headers.

Note that each of the following algorithms operates with some given function often called F, whose evaluation could have some arbitrary time complexity; as such, our standard unit of time for this chapter is the number of calls to the given function. There are, of course, operations in these algorithms outside of the function calls; in each of these algorithms, except for mindiff in Section 3.4, the extra operations are subdominant to the number of calls to the given function. In Section 3.4 we will use t_J and t_K to specify how the running times of the given functions J and K impact the total running time of the algorithm.

Each algorithm in this section is not only polynomially faster than the fastest known classical algorithm that achieves the same goal, but is also polynomially faster than the fastest possible classical algorithm.

3.1 Finding a Solution to F, findsol

Theorem 1 Take a binary-valued function F over the domain {0, . . . , N − 1}. The following algorithm findsol searches for a solution x in the domain such that F(x) = 1. If there are M > 0 solutions, findsol will return a random one with probability > 1 − 0.5·M^−1.86·ε, where ε is some probability of failure that we are willing to tolerate. If it successfully does so, findsol takes an expected Θ(√(N/M) + √(N lg ε⁻¹)·M^−1.86) calls to F; if it is unsuccessful, or if there are no solutions, findsol returns the special value false after Θ(√(N lg ε⁻¹)) calls to F.

In the following we use an extra parameter r, which allows tradeoffs in cost between the case where a solution exists and the case where there is no solution. We found r = 2 to be adequate for our purposes throughout this thesis; we include it as a parameter here in case someone else has use for it.

The principle we use here is very straightforward. First, we acknowledge that we can’t do any better than √(N lg ε⁻¹) (a single BCWZ) in the case where there are no solutions, so we try to optimize for the case where there are solutions and we can hope for Θ(√(N/M)) calls to F. To do this, we try BBHT first due to its faster running time. Then if we have not found a solution, we check for one with BCWZ to make sure.

1. Run BBHT up to r times. If any of those returns a result that satisfies F, immediately return that result.
2. Run BCWZ with parameter ε. If it returns a result that satisfies F, return that result; otherwise return false.

The analysis for this is very straightforward. If there are solutions, step 1 takes an expectation of Θ(√(N/M)) calls to F (we expect it to repeat less than twice). The BBHTs all fail with probability Θ(0.5^r·M^−0.93r); if they do we move on to step 2, which takes √(N lg ε⁻¹) calls to F. This gives us an expectation of Θ(2√(N/M) + 0.5^r·M^−0.93r·√(N lg ε⁻¹)) total calls to F in the case where there are solutions; these reduce to the promised quantities when r = 2. If there are no solutions, step 1 is Θ(r√N) and step 2 is Θ(√(N lg ε⁻¹)).

Looking at the probability of failure, we observe that the algorithm cannot possibly find a solution that does not exist, and therefore cannot fail when there are no solutions (the error is “one-sided”). If there are solutions, the probability of failure is ≤ 0.5^r·M^−0.93r·ε, the probability that the BBHTs and BCWZ all fail.

We chose r = 2 because 2 is the smallest value that gives us a sufficiently small coefficient of √(lg ε⁻¹) in the running time (one proportional to less than M⁻¹). That property is important in Section 3.3; in general, almost any natural number is a reasonable choice for r.
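The two steps transcribe directly into code. The sketch below is not the thesis’ own pseudocode; it assumes bbht(F, N) and bcwz(F, N, eps) behave as described in Sections 2.2 and 2.4 (the classical stand-ins sketched there will do for experiments), and passes them in explicitly so the function stays self-contained.

    def findsol(F, N, eps, bbht, bcwz, r=2):
        # Step 1: up to r runs of BBHT, keeping the first result that satisfies F.
        for _ in range(r):
            x = bbht(F, N)
            if x is not False and F(x):
                return x
        # Step 2: one run of BCWZ with failure parameter eps.
        x = bcwz(F, N, eps)
        return x if x is not False and F(x) else False

On a real quantum computer, bbht and bcwz would be the quantum subroutines themselves rather than passed-in simulations.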
3.2 Minimum Finding, minfind

Theorem 2 Take a function G that maps the domain {0, . . . , N − 1} to some totally-ordered set. The following algorithm minfind finds x in the domain such that G(x) is less than or equal to all G(y) in expected time Θ(√(N lg ε⁻¹)) and with probability ≤ ε of failure.

This algorithm is based on one by Dürr and Høyer[4], and is identical in approach to theirs; but where we use findsol (step 2a, below) they use BBHT lg ε⁻¹ times. The motivation for this algorithm, as with theirs, is repeatedly to find y with smaller and smaller values for G(y). To do this efficiently, we use findsol as introduced in Section 3.1.

1. Pick y uniformly at random from the set {0, . . . , N − 1}.
2. Repeat the following until instructed to return:
   (a) Run findsol with parameter ε to find an element y′ : G(y′) < G(y).
   (b) If findsol returns an element, set y = y′; otherwise return y.

Dürr and Høyer show that the probability that y will ever take on the kth-lowest value is 1/k, and that for different k, those probabilities are independent. With that in mind, we can sum over all values of k to arrive at an expected running time and a probability of failure. For expected running time, we find:

t_minfind = √(N lg ε⁻¹) + Σ_{k=2}^{N} (1/k)·√(N lg ε⁻¹)·k^−1.86
          ≤ √(N lg ε⁻¹) + ∫₁^N (dk/k)·√(N lg ε⁻¹)·k^−1.86
          ≤ √(N lg ε⁻¹) + √(N lg ε⁻¹)

calls to G. We calculate the probability of failure similarly, first noting that P_fail ≤ Σ_k P(k)·P_fail(k):

P_fail ≤ Σ_{k=2}^{N} (1/k)·ε·k^−1.86 ≤ ∫₁^N (dk/k)·ε·k^−1.86 ≤ ε

3.3 Finding all x that Satisfy F, findall

Theorem 3 Take a binary function F over the domain {0, . . . , N − 1}. Let any x for which F(x) = 1 be called a solution for F, and let the number of distinct solutions be called M. The following algorithm, findall, finds all solutions for F in Θ(√(NM) + √(N lg ε⁻¹)) expected calls to F, with probability ≤ ε of failure.

The idea behind this algorithm is to find successive solutions x, striking each off the search as we find it in order to guarantee that we find something different every time. We do this straightforwardly with findsol.

1. Create a direct-address table D to store results found so far. (A direct-address table is a construct like a hash table, allowing us to keep track of already-found elements with Θ(1) operations. Details can be found in Cormen et al.’s book on algorithms[19].)
2. Repeat the following until instructed to return:
   (a) Run findsol with parameter ε to find an element x of {0, . . . , N − 1} such that F(x) = 1 and D does not contain x.
   (b) If findsol returns an element, add it to the result set and D; otherwise, return the result set.

We calculate the number of calls to F with a straightforward integral:

t_findall = √(N lg ε⁻¹) + Σ_{k=1}^{M} (√(N/k) + k^−1.86·√(N lg ε⁻¹))
          ≈ 2√(N lg ε⁻¹) + ∫₁^M dk (√(N/k) + k^−1.86·√(N lg ε⁻¹))
          ≈ 2√(N lg ε⁻¹) + √(NM) + √(N lg ε⁻¹)

We calculate the probability of failure similarly, noting that the probability of failure P_fail ≤ Σ_k P_fail(k):

P_fail ≤ Σ_{k=1}^{M} ε·k^−1.86 ≤ ∫₁^M dk·ε·k^−1.86 ≤ ε
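Both tools are thin loops around findsol. The sketch below is not from the thesis; it assumes a findsol(F, N, eps) with the Section 3.1 interface (returning a solution or False), and the wrappers themselves are purely classical.

    import random

    def minfind(G, N, findsol, eps):
        # Durr-Hoyer-style minimum finding (Section 3.2): repeatedly ask findsol
        # for an element with a strictly smaller G-value than the current best.
        y = random.randrange(N)
        while True:
            better = findsol(lambda x: 1 if G(x) < G(y) else 0, N, eps)
            if better is False:
                return y
            y = better

    def findall(F, N, findsol, eps):
        # Section 3.3: strike each solution off the search as it is found.
        found = set()                      # plays the role of the direct-address table D
        while True:
            x = findsol(lambda z: 1 if F(z) and z not in found else 0, N, eps)
            if x is False:
                return found
            found.add(x)

Here the set found supplies the Θ(1) membership tests that the direct-address table D provides in the text.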
, N−1},and some division of that domain into disjoint subsets (our “destinations”).The following algorithm, mindiff, takes the lowest-cost element of each sub-set (breaking ties arbitrarily) and takes the lowest-cost d of those elements(again breaking ties arbitrarily), and returns the result. mindiff achieves6A direct-address table is a construct like a hash table, allowing us to keep track ofalready-found elements with Θ(1) operations. Details can be found in Cormen et al.’sbook on algorithms[19].24this in O((tJ + tK)(√Nd+√N lg ǫ−1)+ d lgN lg d)time, with probabil-ity ≤ ǫ of failure.In terms of implementation, mindiff accepts a cost function J and asubset-division function K. We say that x and y are in the same subset iffK(x) = K(y). We will call the d elements of mindiff’s result set xi. To findthe elements that we seek, we will start off with some “bad” set of d elementsthat are not a valid result set for mindiff, and repeatedly use findsol to findelements y that “improve” our current result set by satisfying either of thefollowing conditions:1. J(y) < J(xa) and K(y) = K(xa) for some a. This means y and xa arein the same subset, and y is lower-cost than xa.2. J(y) < J(xa) for some a, K(y) 6= K(xi) for any i. This means y isin some subset that doesn’t appear in our result set, but is lower-costthan something that does.The basis for this algorithm comes from Du¨rr, Heiligman, Høyer andMhalla[8]; their algorithm is roughly the same as our step 3 below, but itonly outlines that step, and not the various data structures necessary forits implementation. The principle behind both their algorithm and ours isrepeatedly to find y such that it meets either of the conditions above, andto replace the appropriate element of the result set with the new y.1. Let x be the array of answers. Initially, let the xi be “infinities,” forwhich J(xi) =∞, and K(xi) is unique and not equal to K(y) for anyy in {0, . . . , N − 1}.2. Let D be a direct-address table mapping K(xi) to i, and initialize itas such. Let T be a balanced binary search tree containing the pair(J(xi), i) for all i, sorted by J(xi), and initialize it as such.3. Repeat the following until J has been evaluated Ω(√Nd) times, or theloop has repeated Ω(d lgN) times (whichever happens first):(a) Let τ be the largest J(xk) in T, and k the corresponding index.(b) Use BBHT to find some element of the domain y such that eitherJ(y) < τ and K(y) /∈ D (condition 2), or K(y) ∈ D and J(y) <J(xD(K(y))) (condition 1). Note that J(xD(K(y))) is the cost of thecheapest flight that we have found so far going to y’s destination,if that is currently in our result set.25(c) If condition 1 was met, set xD(K(y)) = y, and update D and Tcorrespondingly. Otherwise, if condition 2 was met, set xk = y,and update D and T accordingly.4. Run findsol with parameter ǫ to check whether there is still a y thatsatisfies either condition as outlined in step 3b. If not, return x. If so,repeat step 3.Terminating the loop in step 3 after Ω(√dN ) calls to J provides proba-bility of success > 12 , which is shown by Du¨rr, Heiligman, Høyer and Mhalla.They also show that Ω(d) iterations suffice to eliminate a constant fractionof the domain from consideration, thus Ω(d lgN) iterations will also provideprobability of success > 12 . In order to improve the probability of success,we run findsol with parameter ǫ to check whether we are yet done; if we arenot, we repeat step 3 until we are. 
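As a structural illustration only, the following sketch condenses the steps above into a single improvement loop driven by findsol. The direct-address table D, the balanced tree T, the split between the BBHT inner loop and the findsol check, and the loop caps of step 3 are all omitted for brevity; findsol(pred, n, eps) is a hypothetical classical stand-in for Section 3.1's routine.

```python
def mindiff_sketch(J, K, N, d, eps, findsol):
    best = {}                                   # subset key K(x) -> cheapest element found

    def improves(y):
        k, c = K(y), J(y)
        if k in best:                           # condition 1: same subset, lower cost
            return c < J(best[k])
        if len(best) < d:                       # we still hold an "infinity" placeholder
            return True
        worst = max(best.values(), key=J)       # condition 2: new subset, beats the worst
        return c < J(worst)

    while True:
        y = findsol(improves, N, eps)
        if y is False:                          # no improving element remains
            return list(best.values())
        k = K(y)
        if k not in best and len(best) == d:    # condition 2: evict the costliest subset
            del best[max(best, key=lambda q: J(best[q]))]
        best[k] = y
```

At termination no element improves the set, so each kept representative is its subset's minimum and the kept subsets are the d cheapest ones.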
Since the probability for step 3 to finishsuccessfully after one pass is≥ 12 , we expect to repeat it – and findsol – an ex-pectation of≤ 2 times. We also have to consider the contribution of updatingand accessing T, which will take Θ(lg d) time with every iteration; thus ourtotal running time is O((tJ + tK)(√dN +√N lg ǫ−1)+ d lgN lg d)withprobability ≥ 1− ǫ of success.Note that if d is greater than the number of distinct values for K (≡ γ),we return γ valid elements and d − γ infinities (fictitious elements of thedomain as defined in step 1).264. Applications in GraphTheoryA graph G = (V,E) consists of a finite set of vertices V = {vi} and a finiteset of edges E = {ei}, where ei = (vj , vk). In a directed graph, each edgeis an ordered pair. In an undirected graph, each edge is an unordered pair.For convenience when analyzing algorithms that run on graphs, we write |V|as V, and |E| as E.Many problems in computer science can be reduced to graph problems.For example, a subway system could be represented as an undirected graph,in which each stop is represented by a vertex and the each line connectinga pair of stops is represented by an edge.A common graph problem is to find a path from one vertex to anotheralong edges in the graph. More formally, a path p between vertices vstartand vend is an ordered set of edges, (ei1 , ei2 , . . . , ein), where eij = (vij1, vij2),such that vi11 = vstart, vin2 = vend, and eij2 = eij+11 for each 1 ≤ i < n(the jth edge in the path ends where the j + 1th edge begins). Often one isinterested in finding the shortest path between two vertices: the path withthe smallest number of edges n.In a variant of the shortest path problem, each edge and/or vertex isassigned a weight, and the shortest path is considered to be the one forwhich the sum of weights of all vertices and edges in the path is minimized.A graph in which the vertices or edges have weights is referred to as aweighted graph.The graph algorithms presented here assume the graphs they operate onwill be simple graphs: they will have at most one edge between any twovertices (or two edges in opposite directions, in the directed case), and no“self-edges” that directly connect a vertex to itself. Most of these algorithmsare very easy to generalize to graphs that do not have those properties, butwe leave that task to the enterprising reader.In this chapter we will focus on quantum versions of long-studied clas-sic problems such as shortest paths, searching through graphs, and graphmatchings (suppose you want to pair up vertices that are connected; what’s27the maximum number of disjoint pairs you can make?).We present the algorithms here for two models of representing graphs,which we will discuss as quantum black boxes. If there is an edge betweenvertices vi and vj, we refer to it as eij. We present these two models becauseone, the edge array model, tends to yield more efficient algorithms; on theother hand, the adjacency matrix model is a common representation, andcould be given to an algorithm as input. The models are:• The adjacency matrix model, as a quantum black box, is passed i, j(0 ≤ i, j < V ) and returns whether eij exists. This is a mathematicalfunction, often represented classically as a V × V matrix with entriesin {0, 1}.• The edge array model, as a quantum black box, is passed i, j andreturns the destination of the jth edge outgoing from vertex vi (weassume for convenience that we know how many edges are outgoingfrom each vertex). 
Classically this is usually represented as a raggedarray, but sometimes is generated mathematically as-needed. We callthe set of edges outgoing from vi di, and its cardinality |di|. The edgearray model is sometimes called the adjacency listmodel; though whenit is referred to by that name, the internal representation of edges maybe a linked list rather than an array.If the graph is weighted, the adjacency matrix and edge array models alsoreturn the weight of the edge queried.For an excellent introduction to graph theory and algorithms therein, seeCormen, Leiserson, Rivest and Stein’s classic introduction to algorithms[19].It contains detailed discussions of breadth-first and depth-first searches, Di-jkstra’s algorithm and the Bellman-Ford algorithm, as well as all-pairs short-est paths. We look at all of these in this chapter, but leave the details tothis reference.4.1 Breadth-First Search, BFSBreadth-first and depth-first search are two of the simplest algorithms forsearching a graph, and find extensive use inside many important graph al-gorithms. The principle behind each is the same: starting at some source,we systematically explore the vertices of our graph, “visiting” each vertexconnected to the origin in some order. By introducing quantum versionsof each here, we tarnish their simplicity but maintain their strength andincrease their speed.28As we mentioned above, BFS and DFS both see extensive use. Both canbe used to determine whether a vertex is connected to the rest of the graph(if there is a path from that vertex to every other vertex), and breadth-firstsearch in particular can be used to compute shortest paths in an unweightedgraph. Depth-first search, on the other hand, can be used to detect “bridges”in a graph: edges which, if they were removed, would sever the graph intotwo pieces with no edges between them. There is a great deal of utility to behad from these two over and above what is discussed here, and both are well-studied techniques in classical computing; both run on classical computersin Θ(E) time, which is provably optimal.To implement a breadth-first search here, we begin with classical BFS:we keep a list of vertices we want to visit, and every time we visit anotherof those vertices we add all of its unvisited neighbours to the list. Throughuse of a boolean array we ensure each vertex is only visited and added once.To choose the order in which the vertices are visited, we let our list bea “queue,” wherein vertices added first are visited first; thus we end upvisiting the vertices in order of how close they are to the origin of our search(breadth-first).To speed up the process of finding all of the unvisited neighbours of eachnode, we use Section 3.3’s findall. This algorithm is based on a BFS fromAmbainis and Sˇpalek[9], though they use repeated BBHTs rather than ourfindall.Theorem 5 The following algorithm BFS executes a breadth-first searchthrough a graph G = (V,E) in O(√V 3 lg(V ǫ−1)) time in the matrix model,O(√V E lg(V ǫ−1)) in the edge array model. There is probability ≤ ǫ that itfails, executing the breadth-first search incorrectly.1. Let the vertex from which we are searching be called va. Let there bea queue of vertices q, and let it initally contain only va. Let there bea boolean array vis of size V, with entries vis[i] = δi,a.2. 
Repeat the following until q is empty:(a) Remove the first element of q and call it vi.(b) Visit vi.(c) With Section 3.3’s findall, find all neighbours vj of vi with vis[j] =false.(d) For each such vj , set vis[j] = true and add vj to q.29Since BFS makes a call to a function that may fail, findall, there is someprobability that it fails. We may pass the BFS function some constant ǫ, aprobability of failure that we are willing to tolerate; since BFS calls findallV times, if we require that findall has probability ≤ ǫ/V of failure each time,the total probability with which BFS fails will be less than ǫ.In the matrix model, each vertex vi is processed at most once, and itsfindall contributes√V ni +√V lg(V ǫ−1) to the running time in the matrixmodel, where ni is the number of elements added to q. In the edge arraymodel, each vertex is processed at most once and contributes√|di|ni +√|di| lg(V ǫ−1). By the Cauchy-Schwartz inequality, we have:∑vi∈V√ni |di| ≤√∑vi∈Vni√∑vi∈V|di| =√V E (4.1)∑vi∈V√|di| lg(V ǫ−1) ≤√∑vi∈V|di|√∑vi∈Vlg(V ǫ−1) ≤√V E lg(V ǫ−1) (4.2)We conclude that BFS in the edge array model runs inO(√V E lg(V ǫ−1))time, and since E < V 2, BFS in the matrix model runs in O(√V 3 lg V ) time.Classically the fastest possible breadth-first search algorithm takes Θ(E)time in the edge array model, Θ(V 2) time in the matrix model; so forbounded error probability, BFS is faster than its classical counterpart inthe matrix model, and faster than its classical counterpart in the edge arraymodel for E ∈ Ω(V lg ǫ−1).4.2 Depth-First Search, DFSClassically depth-first and breadth-first search can have very similar im-plementations, and the same is true in the quantum regime. The simplestimplementation of depth-first search in both regimes, however, is a recursiveone, which we show here.Theorem 6 The following algorithm DFS executes a depth-first searchthrough a graph G = (V,E) in O(√V 3 lg V ) time in the matrix model,O(√V E lg(V ǫ−1)) in the edge array model, with probability ≤ ǫ that it willfail, performing the depth-first search incorrectly.1. Let the vertex from which we are searching be called va. Let therebe a boolean array vis of size V, with its entries initialized to 0. CallDFS-BODY(va).302. Function DFS-BODY(vertex vk):(a) Visit vk. Set vis[k] = true.(b) Repeat the following until instructed otherwise:i. Use Section 3.1’s findsol to find a neighbour of vk that hasnot yet been visited, vi. If findsol returns false, return.ii. Recursively call DFS-BODY(vi).Since DFS makes a call to a function that may fail, findsol, there is someprobability that it fails. We may pass the DFS function some constant ǫ, aprobability of failure that we are willing to tolerate; since DFS calls findsolno more than 2V times—one call to find each vertex, and one call returningfalse from each vertex—if we require that findsol has probability ≤ ǫ/2Vof failure each time, the total probability with which DFS fails will be lessthan ǫ.For each of the calls to findsol that returns false, we find a contribution(in the edge array model) of√|di| lg(2V ǫ−1) to our running time, whichsums over all vertices to O(√V E lg(V ǫ−1)) by the Cauchy-Schwartz in-equality (see equation 4.2). We must also sum the running times of thesuccessful findsols: we note that for each vertex vi, if we end up findingni of its neighbours through DFS-BODY(vi), the running time of that willbe O(∑nik=1√(|di| /k) lg(2V ǫ−1))= O(√|di|ni lg(V ǫ−1)). 
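The recursion of DFS-BODY is compact enough to sketch directly. The following assumes the edge array model, with adj[v] listing the neighbours of v, and a hypothetical classical stand-in findsol(pred, n, eps) for Section 3.1's search, called with failure parameter ǫ/2V as described above.

```python
import sys

def dfs(adj, V, a, eps, findsol):
    sys.setrecursionlimit(2 * V + 100)
    vis = [False] * V
    order = []                                   # vertices in the order they are visited

    def dfs_body(v):
        vis[v] = True
        order.append(v)                          # "visit" v
        while True:
            j = findsol(lambda j: not vis[adj[v][j]],
                        len(adj[v]), eps / (2 * V))
            if j is False:                       # no unvisited neighbour remains
                return
            dfs_body(adj[v][j])

    dfs_body(a)
    return order
```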
Summing thatcontribution over each vertex, we again use the Cauchy-Schwartz inequal-ity (equation 4.2) to arrive at O(√V E lg(V ǫ−1)). In the matrix model wesimply replace E with V 2, arriving at O(√V 3 lg(V ǫ−1)).Classically the fastest possible depth-first search algorithm takes Θ(E)time in the edge array model, and Θ(V 2) time in the adjacency matrixmodel; so for bounded error probability, BFS is faster than its classicalcounterpart in the matrix model, and faster than its classical counterpart inthe edge array model for E ∈ Ω(V lg ǫ−1).4.3 Single-Source Shortest Paths with NegativeEdge Weights, SPNWClassically, the problem of single-source shortest paths with negative edgeweights is solved using an algorithm called Bellman-Ford[23, 24], on whichwe base our algorithm.7 Our algorithm returns an array of shortest distances7Thanks to Yury Kholondyrev for a discussion that led to this algorithm.31to points, or the special value false if there exists a negative-weight cycle inthe graph that can be reached from the source. It also computes an arrayfrom, whose ith element is the index of the vertex previous to vi on theshortest path from va to vi; this allows the shortest path from va to vi to berecovered.Intuitively, we are going to take each edge in turn and see if it helpsour current shortest path to each point (a technique called “relaxation”);we repeat that process V times, at which point each edge will have helpedall it can.Theorem 7 Given a graph G = (V,E), the following algorithm SPNWreturns an array whose ith element is the shortest distance from the sourceva to vertex vi, or ∞ if no such path exists. If there is a negative weightcycle that can be reached from va, instead of an array it returns the specialvalue false. It does this in O(√V 5 lg(V ǫ−1)) time in the matrix model,O(√V 3E lg(V ǫ−1)) in the edge array model, with probability ≤ ǫ that itfails, returning an incorrect result.1. If we are using the edge array model, set up an array f such that f [i][j]is the source of the jth edge incident on i.2. Initialize an array dist, such that dist[i] =∞ for i 6= a, 0 for i = a.3. Initialize an array from, such that from[i] = −1.4. Repeat the following V − 1 times:(a) For each vertex vi, using the algorithm of Section 3.2, minfinda vertex vj such that eji exists, and dist[j] + length(eji) is min-imized. Execute the minfind by searching over f [i] in the edgearray model, V in the matrix model.(b) If dist[j] + length(eji) < dist[i], set dist[i] = dist[j] + length(eji)and set from[i] = j.5. Repeat step 4a one more time. If it changes dist, return false. Other-wise return dist.Since SPNW makes a call to a function that may fail, minfind, thereis some probability that it fails. We may pass the SPNW function someconstant ǫ, a probability of failure that we are willing to tolerate; sinceSPNW calls minfind V 2 times, if we require that minfind has probability32≤ ǫ/V 2 of failure each time, the total probability with which SPNW failswill be less than ǫ.This algorithm, like Bellman-Ford, works due to the fact that all shortestpaths in a graph without negative weight cycles must use fewer than Vedges. Each time through step 4, we ask “could the path to vertex vi beshorter if we were allowed to use one more edge?” Repeating this V − 1times lets us use V − 1 edges, and repeating it a last time lets us checkwhether there is a negative weight cycle. Meanwhile we keep our arrayfrom, which tells us how we got to vi and allows us to recover the wholepath. 
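A sketch of SPNW's control flow, with minfind(key, n, eps) as a hypothetical classical stand-in for Section 3.2's routine; edges_in[i] plays the role of the array f[i] from step 1 (the sources of edges into vertex i) and length(j, i) is the weight of e_ji.

```python
def spnw(V, edges_in, length, a, eps, minfind):
    INF = float('inf')
    dist = [INF] * V
    dist[a] = 0
    frm = [-1] * V

    def relax_round():
        changed = False
        for i in range(V):                       # step 4a: minfind over edges into i
            if not edges_in[i]:
                continue
            k = minfind(lambda k: dist[edges_in[i][k]] + length(edges_in[i][k], i),
                        len(edges_in[i]), eps / V**2)
            j = edges_in[i][k]
            if dist[j] + length(j, i) < dist[i]: # step 4b: relax the best incoming edge
                dist[i] = dist[j] + length(j, i)
                frm[i] = j
                changed = True
        return changed

    for _ in range(V - 1):                       # step 4: V - 1 rounds of relaxation
        relax_round()
    if relax_round():                            # step 5: a further change means a negative cycle
        return False
    return dist, frm
```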
In the edge array model, the running time is V∑i√|di| lg(V ǫ−1) =O(√V 3E lg(V ǫ−1)), again making use of the Cauchy-Schwartz inequality(equation 4.2). In the matrix model, our E becomes a V 2 as usual, and wehave O(√V 5 lg(V ǫ−1)). Since this is greater than V 2, if the graph is sparseit may be worth first converting to the edge array model.Classically there are two fastest known algorithms for single-source short-est paths: Bellman-Ford, discussed above, takes Θ(V E) time in the edgearray model, Θ(V 3) time in the adjacency matrix model; and an algorithmby Zwick[25], which computes all-pairs shortest paths, runs in Θ(V 2.575) inboth models. For bounded error probability, SPNW is faster than both inthe matrix model, faster than Zwick’s algorithm in the edge array model,and faster than Bellman-Ford in the edge array model when E ∈ Ω(V lg V ).4.4 All-Pairs Shortest Paths with Negative EdgeWeights, APSPTheorem 8 Given a graph G = (V,E), the following algorithm APSPreturns an array whose i, jth element is the length of the shortest path betweenvertices vi and vj, ∞ if no such path exists. If there is a negative weightcycle in the graph, instead of an array it returns the special value false. Itdoes this in Θ(√V 5(lg V +√lg(V ǫ−1))+ V 2 lg3 V ) in the matrix model,Θ(√V 3E(lg V +√lg(V ǫ−1))+ V 2 lg3 V ) in the edge array model.We can do this directly with Johnson’s algorithm[19, 26]8, which wewill lay out in classical terms and then make obvious substitutions with ourown quantum subroutines. Johnson’s works by running Dijkstra’s algorithm8Thanks to Wei-Lung Dustin Tseng, Michael Li and Man-Hon “Matthew” Chan forsimultaneously suggesting that looking at Johnson’s might yield something better thanthan the algorithm I was presenting to them at the time.33from every origin point, which gives the shortest paths from all points to allother points; the difficulty is that Dijkstra’s does not work in graphs withnegative-weight edges, so first it is necessary to reweight edges so that all oftheir weights are positive.When reweighting the edges, we need to reweight them in such a waythat running Dijkstra’s algorithm will return a genuine shortest path. It isnot enough, for example, to pick a large number and add it to each edge;this would cause Dijkstra’s algorithm to favour paths that used few edges.We will instead apply a label a[i] to each vertex vi, and let weight′(eij) =weight(eij) + a[i] − a[j]. If we run Dijkstra’s algorithm using the weight′s,any path from vx to vy will be reweighted by a[x] − a[y]; every vertex inbetween will have its a value subtracted and then added. Thus genuineshortest paths will be found, with the only caveat being that the paths’lengths need to be reweighted.Now that we know we can label vertices and reweight edges appropri-ately, we need to decide how to label the vertices. Let a[i] be the lengthof the shortest path from anywhere to vi. This will be 0 at most, since wedefine the distance from vi to itself to be 0. If there is a negative-weight edgegoing to vi, it may be the weight of that edge. We will find these distancesby inserting an imaginary node s into the graph with weight-0 edges goingeverywhere, and then using Bellman-Ford to find the shortest path to eachnode from there. If the shortest path to vi comes from vj , then there is apath of identical length that comes from s. 
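The reweighting argument can be checked mechanically. A small sketch follows, with a toy graph and labels a[i] of the kind SPNW would produce from the fictional source s; the specific numbers are illustrative, not from the thesis.

```python
def reweight(edges, a):
    # Each edge (i, j, w) becomes (i, j, w + a[i] - a[j]); along any path the
    # labels telescope, so a path from x to y changes in length by a[x] - a[y].
    return [(i, j, w + a[i] - a[j]) for (i, j, w) in edges]

# toy check on an assumed 3-edge path 0 -> 1 -> 2 -> 3
edges = [(0, 1, -2), (1, 2, 5), (2, 3, -1)]
a = [0, -2, 0, -1]                  # shortest distances from s with 0-weight edges to all vertices
old = sum(w for (_, _, w) in edges)
new = sum(w for (_, _, w) in reweight(edges, a))
assert new == old + a[0] - a[3]     # reweighted length differs by exactly a[x] - a[y]
assert all(w >= 0 for (_, _, w) in reweight(edges, a))   # all reweighted edges are nonnegative
```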
Using Bellman-Ford also tellsus right away whether there is a negative-weight cycle in the graph.Now we need to show that this labelling causes our reweighting to giveonly nonnegative weights to edges. Let there be an edge eij between vi andvj , with weight w. Our reweighted edge has weight w′ = w+ a[i]− a[j]. a[i]is the shortest distance to vi from anywhere, and so the shortest distance tovj from anywhere is at most a[i]+w; so we have a[j] ≤ a[i]+w, and w′ ≥ 0.Now we have a way of altering our graph that preserves shortest pathsand gives nonnegative weights to all edges. From there, we can run Dijkstra’salgorithm from each vertex, fixing our distances only when we’re done, andso find the shortest paths (and distances) to each vertex from each vertex.The quantum version for this algorithm is a straightforward adaptation.We replace Bellman-Ford with our SPNW algorithm from Section 4.3, andDijkstra’s algorithm with our adaptation of Ambainis and Sˇpalek’s single-source shortest path algorithm from Section 4.5.2.Since APSP makes calls to functions that may fail, SPNW and single-source shortest paths, there is some probability that it fails. We may passthe APSP function some constant ǫ, a probability of failure that we are34willing to tolerate; since APSP calls SPNW and single-source shortest pathsa total of V + 1 times, if we require that both have probability ≤ ǫ/(V +1)of failure each time, the total probability with which APSP fails will be lessthan ǫ.One SPNW and V single-source shortest paths, each with failure proba-bility ≤ ǫ/(V +1), take a total of Θ(√V 5(lg V +√lg(V ǫ−1))+ V 2 lg3 V)in the matrix model, Θ(√V 3E(lg V +√lg(V ǫ−1))+ V 2 lg3 V)in the edgearray model.Classically there are two fastest known algorithms for all-pairs short-est paths with negative edge weights: Johnson’s algorithm takes Θ(V E +V 2 lg V ) time in the edge array model, Θ(V 3) time in the matrix model;Zwick’s algorithm runs in Θ(V 2.575) in both models. So for bounded er-ror probability, APSP is faster than Johnson’s algorithm in the edge arraymodel for E ∈ Ω(V lg4 V ), is always faster in the matrix model than John-son’s algorithm, and is faster than Zwick’s algorithm in both models.4.5 Improvements to Existing Quantum GraphAlgorithmsIt has quickly become to the tradition in the literature[8, 9] to devise quan-tum algorithms with BBHT as though there were no probability that itcould fail, and then to throw a factor of log(N) into the running time atthe end to take the probability of failure into account. Here we give twoexamples of algorithms that can be given faster asymptotic behaviour withcareful treatment of errors (done by using the tools introduced in Chapter3).4.5.1 Query ComplexityIn the algorithms we discuss here, the authors chose to find the quantumquery complexity of their algorithms, rather than the running time. Thequery complexity is the number of times a graph is “queried”: how manytimes the algorithm has to ask “is there an edge between vi and vj?” in theadjacency matrix model, or ask “what is the jth edge coming out of vertexvi?” in the edge array model.We have chosen to analyze running time rather than query complexityfor the algorithms here, as well as the algorithms elsewhere in the paper,because we hope the day will come when quantum and classical algorithms35will be run on the same computer, at the same speed; and so an Θ(√V 3 lg V )quantum algorithm might practically be said to be faster than an Θ(V 2)algorithm. 
There are of course many reasons to prefer both, but this is theone we have chosen.4.5.2 Single-Source Shortest PathsDu¨rr, Heiligman, Høyer and Mhalla (DHHM) discuss algorithms for single-source shortest paths, minimum spanning tree, connectivity and strongconnectivity[8]. The quantum query complexity for their single-source short-est paths, O(√V E lg2 V ), can be improved by using mindiff, whereupon itbecomes O(√V E lg V ). The explanation follows.We will begin by trying to motivate this algorithm as a speedier version ofDijkstra’s Θ(E lg V ) algorithm. That algorithm involves iteratively buildingup a set T of nodes, the shortest path to each of which we already know;each iteration expands that set by one element by finding the next-closestnode to a, the origin of the search. First we will present a classical Dijkstra’salgorithm, then discuss how that can be improved using quantum methods.Our Dijkstra’s and quantum single-source shortest paths will make useof priority queues: these will be implemented as balanced binary searchtrees with Θ(lgN) access, insertion and deletion. The knowledgeable readerwill note that Fibonacci heaps are generally more appropriate for Dijkstra’salgorithm; while that is true, they do no better in the quantum single-source shortest paths algorithm, and so we avoid them for simplicity’s sake.The reader who is used to C++ may imagine our priority queue to be aset〈pair〈int, pair〈int, int〉〉〉.We begin with the classical Dijkstra’s algorithm:1. Construct an array Trace of size V, and initialize its entries to −1.2. Construct an array LeastCost of size V, and initialize its entries to ∞.3. Let there be a priority queue Q of edges, initially containing the(priority, value) pair (0, (a,−1)).4. While Q is not empty, repeat the following:(a) Remove the least value from Q: call its priority cost, and its value(loc, from).(b) If LeastCost[loc] 6= ∞, we have already been to dest; go back tostep 4a.36(c) Set LeastCost[loc] = cost and Trace[loc] = from.(d) For each element dloc[i] of dloc, call its destination v and itsweight c, and do the following: if LeastCost[v] = ∞, insert the(priority, value) pair (cost + c, (v, loc)) into Q.5. LeastCost[i] now contains the length of the shortest path from a to i,or∞ if no such path exists; Trace[i] is the predecessor to i in a shortestpath from a to i (i.e. the path goes a, . . . ,Trace[i], i).Step 4 will repeat V times at most: every time the loop runs, a newelement of LeastCost will be set. Our most time-consuming step is thus step4d, which is at the core of the algorithm: inserting edges into a priorityqueue so that the edge of least cost may be removed. A total of E edgeswill be inserted, and since the priority queue will be of size O(E) this willcost Θ(E lgE) = Θ(E lg V ). Step 4a shares this running time.Our goal in constructing the quantum single-source shortest path, then,will be to reduce the length of time taken by the equivalent of step 4d.Since most of our graph algorithms seem to offer speedups on the order of√E/V speedups, we will shoot for something similar (and will in fact hit√E/V / lg V ).Since the Fibonacci heap implementation of Dijkstra’s algorithm takesΘ(E + V lg V ) time, if we wish to outdo that then we may not look at eachedge classically; instead we must do some sort of minfind over edges. Forthe first few vertices, that seems to work: finding the closest vertex to theorigin vertex a is roughly a Θ(√V ) operation. 
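For reference, the classical loop of steps 1-5 above translates directly into code; a minimal sketch using Python's heapq in place of the balanced-tree priority queue, with adj[v] a list of (neighbour, weight) pairs.

```python
import heapq

def dijkstra(adj, V, a):
    INF = float('inf')
    least = [INF] * V
    trace = [-1] * V
    Q = [(0, (a, -1))]                   # (priority, (vertex, predecessor))
    while Q:
        cost, (loc, frm) = heapq.heappop(Q)
        if least[loc] != INF:            # already settled: skip stale queue entries
            continue
        least[loc] = cost
        trace[loc] = frm
        for v, c in adj[loc]:
            if least[v] == INF:
                heapq.heappush(Q, (cost + c, (v, loc)))
    return least, trace
```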
Of course, finding the next-closest vertex in the same way is Θ(√2V ), and so on up until the last vertex,which will cost Θ(√E). This gives us something roughly Θ(V√E), whichis slower than Dijkstra’s algorithm; somehow we need to reduce the numberof minfinds that gets done.The solution, due to Du¨rr et al., is to group vertices together, run asort of minfind from the group, and then use that result as many times aspossible. Consider the following: initially we just need to find the closestvertex to a, b. Now we can find the closest (“closest” always means “closestto a”) two vertices that use edges coming out of {a, b}, by using mindiff;call the closer of those c, and the further of them d. We still have the next-closest vertex from {a, b}, and can re-use that result: so we find the closestvertex that uses edges coming from {c}, and see whether it or d is closest(assume for the sake of example that this is d). Now we can merge d into{c}, which becomes the same size as {a, b}, so we merge them together toget {a, b, c, d} and find the closest 4 vertices that use edges coming from37that set. We continue in this fashion, merging sets whenever two sets arethe same size, and only ever re-computing the closest 2x vertices to a setwhen it changes. The procedure is formalized below:1. Construct an array Trace of size V, and initialize its entries to −1.2. Construct an array LeastCost of size V, and initialize its entries to ∞.Set LeastCost[a], the cost to get to the origin of the search, to 0.3. Let there be a priority queue Q of edges, initially empty.4. Let there be a stack of vertex sets S, initially containing {a}, the originvertex. We will refer to the current top element of the stack by S.top.5. Repeat the following until instructed to exit the loop:(a) Call the function SSSP–Mindiff, defined below; call the resultingset R.(b) For each element R[i] of R, call its destination v, its source u andits weight c, and insert the (priority, value) pair (c, (v, u)) into Q.(c) Remove the least value from Q: call its priority cost, and itsvalue (loc, from). Repeat this until LeastCost[from] =∞ or Q isempty.(d) If Q is empty, go to step 6.(e) Set LeastCost[loc] = cost and Trace[loc] = from. Push {loc} ontoS.(f) As long as S has at least two elements, and the top two elementsof S have the same size, merge the top two elements of S.6. LeastCost[i] now contains the length of the shortest path from a to i,or∞ if no such path exists; Trace[i] is the predecessor to i in a shortestpath from a to i (i.e. the path goes a, . . . ,Trace[i], i).We would like to run mindiff on the edges outgoing from our sets, butmindiff operates on the domain {0, . . . , N − 1}; as such, we have to adapt itto work on several disjoint collections of objects. This adds a factor of lg Vto our algorithm and to DHHM’s (see below).Function SSSP–Mindiff:1. Create a table T of size |S.top| such that T [i] is ∑ij=0 ∣∣dS.top[j]∣∣.382. Let k = |S.top|, and N = T [k − 1] (the number of edges out from allvertices in S.top).3. Define the mappingH(i) = the ith edge out from all vertices in S.top.H(i) can be computed by doing a binary search on T, followed by somebasic arithmetic.4. Define c(i) to be edge H(i)’s weight, u(i) to be its source, and v(i) tobe its destination.5. Run Section 3.4’s mindiff, using N as above, d = k, F (i) = c(i) +LeastCost[i], and G(i) = v(i). 
Return the resulting set.This algorithm is different from DHHM’s in that our mindiff algorithmis more carefully constructed than the version they use, in two respects.The first is that theirs has constant probability of failure, and they re-peat it Θ(lg(V )) times, leading to slower performance than ours overall.The second is that they do not make some of the specifics of their algo-rithm clear: for example, they do not describe how they maintain theirlist of best answers so far. This will inevitably add to the total run-ning time of their algorithm (though not its queries to the graph, whichis what they chose to analyze), and so their mindiff’s running time endsup as O((tF + tG)√Nd lgN + some quantity), at a contrast to our fasterO((tF + tG)(√Nd+√N lg ǫ−1)+ d lgN lg d).The running time analysis for our mindiff and theirs is not complete; theevaluation of F and G as defined in SSSP–Mindiff is a Θ(lgN) process. Ourmindiff thus takes O(√Nd lgN +√N lg ǫ−1 lgN + d lgN lg d)operationshere, whereas theirs takes O(√Nd lg2N + some quantity).9Since this algorithm makes calls to a function that may fail, mindiff,there is some probability that it fails. We may pass the function someconstant ǫ, a probability of failure that we are willing to tolerate; since itcalls mindiff no more than V times, if we require that each call to mindiffhas probability ≤ ǫ/V of failure each time, the total probability with whichour algorithm fails is less than ǫ.We may now analyze our number of graph queries compared to theirs:as far as querying the graphs go, our algorithms are identical but for the9Rather than performing a binary search as we do, DHHM iterate through all of theedges connected to S.top and label each one individually. Had they cared about runningtime, they doubtless would not have taken this approach.39extra factor of lg V on their mindiff that arises due to errors. This be-comes an extra factor of lg V in the algorithm’s running time, for a to-tal query complexity of O(√V E(lg V +√lg(V ǫ−1))) for our algorithm (vs.O(√V E lg V lg(V ǫ−1)x) for theirs).Now that we know how long our mindiff step takes, we may evaluateour running time. We perform this calculation only for our own algorithm:DHHM’s time complexity is identical but for the extra lg V on the leadingterm, the verification of which is an exercise that the reader is welcome toperform. Our time complexity has two major parts: insertion/removal ofelements in Q, and the function SSSP–Mindiff. Merging sets is briefly worthconsideration, but recall that mindiff is immediately run on sets after theyare merged, and that takes at least Ω(size) time.In order to determine the time complexity for inserting/removing el-ements from Q, we will first examine how Q is updated. Q is updatedevery time a new S.top appears, which is at the beginning of every iterationthrough step 5. The sequence of update sizes follows:1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 16, 1, . . . (4.3)Note that 1 shows up as every second digit, 2 as every fourth, etc. Wecan thereby say that for each size s, it appears no more than V/s timesin the sequence. Since the sizes are powers of two, we have a total of ≤∑lg Vi=0 V/2i × V = V lg V elements in Q. Since insertion takes an averageof Θ(lg V ) time, the time used in the maintenance of the priority queue isbounded from above by O(V lg2 V ).Next we need the time used by mindiff. The frequency of set sizesis identical, so for each size we will sum over all sets of that size that weencounter throughout the algorithm. 
Let m_{ij} be the number of edges coming from the jth set of size 2^i:

\[
t_{\text{mindiff}} \le \sum_{i=0}^{\lg V}\;\sum_{j=0}^{V/2^i}\left(\sqrt{m_{ij}2^i}\,\lg m_{ij} + \sqrt{2^i\lg(V\epsilon^{-1})}\,\lg m_{ij} + i\,2^i\lg m_{ij}\right) \tag{4.4}
\]

Note that no vertex will be in two different sets of the same size, and so \(\sum_{j=0}^{V/2^i} m_{ij} = E\).

\[
t_{\text{mindiff}} \le \sum_{i=0}^{\lg V}\left(\sqrt{E\,2^i}\sqrt{\frac{V}{2^i}}\,\lg V + \sqrt{2^i\lg(V\epsilon^{-1})}\,\frac{V}{2^i}\,\lg V + i\,2^i\,\frac{V}{2^i}\,\lg V\right) \tag{4.5}
\]

\[
t_{\text{mindiff}} \le \sqrt{EV}\lg^2 V + V\lg V\sqrt{\lg(V\epsilon^{-1})} + V\lg^3 V \tag{4.6}
\]

This is the dominant term, and so our algorithm takes O(√(EV) lg² V + V lg³ V + V lg V √(lg(V ǫ⁻¹))) time, and DHHM's takes O(√(EV) lg² V lg(V ǫ⁻¹) + V lg³ V) time.

The fastest known classical solution to this problem, Dijkstra's algorithm, runs in Θ(E + V lg V), Θ(V²) in the matrix model; so for bounded error probability, our algorithm is faster than its classical counterpart in the matrix model, and faster in the edge array model for E ∈ Ω(V lg⁴ V). Note that the fastest possible classical algorithm must be Ω(E), so our algorithm is also faster than that for E ∈ Ω(V lg⁴ V) in the edge array model, and always in the matrix model.

4.5.3 Bipartite Matching

Ambainis and Špalek[9] address bipartite matching, non-bipartite matching and maximum flow. Their algorithm for bipartite matching runs in O(V√(E + V) lg V) time, and is a quantum adaptation of Hopcroft and Karp's classical Θ((E + V)√V) algorithm[27]; we further modify their algorithm, solving the problem here in O(V√((E + V) lg V)) time.

The problem of bipartite matching can be described in several ways: for example, consider a collection of boys and girls to be vertices of a graph, and have an edge in the graph for each (boy, girl) pair that would make a good couple. In bipartite matching, we pair off the boys and girls in such a way that only compatible couples are paired, each person has at most one partner, and there is a maximum number of pairings.

Some basic principles underlie most solutions to this problem. Consider some (non-maximum) matching-so-far M between boys and girls; if we can construct a path P starting at an unmatched boy and ending at an unmatched girl such that all edges in the path are either unmatched (boy, girl) edges or matched (girl, boy) edges, then the old matching can be expanded by one more pair by taking M′ = M ⊕ P (where M ⊕ P means taking all edges in either M or P, but not both). Intuitively, where M and P have an edge in common, we are "unmatching" that (boy, girl) pair, and "rematching" the two using the surrounding edges in the path. Because this path augments M by adding one to its size, it is called an augmenting path. Figure 4.1 illustrates a single execution of this procedure.

[Figure 4.1: (a) A non-maximum matching M. (b) An augmenting path P, using unmatched (boy, girl) edges and matched (girl, boy) edges. (c) A new matching, M′ = M ⊕ P. Dark edges indicate edges that are part of the current matching.]

The principle behind Hopcroft and Karp's algorithm is as follows: suppose that every time we want to find an augmenting path P, we find the shortest such path. They proved that if we do that, we will see at most 2√V different path lengths in the whole process of constructing a maximum matching. So if we devise a process to find a maximal set of disjoint augmenting paths of minimal length (maximal means here that the set cannot be expanded by adding more paths of the same length), each time we execute that procedure, the next set will have longer length.
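The update M′ = M ⊕ P described above is a one-line set operation; a small sketch with hypothetical boy/girl labels follows.

```python
def augment(matching, path):
    # matching: a set of frozenset({boy, girl}) edges; path: the edge list of an
    # augmenting path.  Edges in both are removed ("unmatched"); path-only edges are added.
    return matching.symmetric_difference(frozenset(e) for e in path)

# toy example: boys b0, b1 and girls g0, g1, with b1-g0 already matched
M = {frozenset({'b1', 'g0'})}
P = [('b0', 'g0'), ('g0', 'b1'), ('b1', 'g1')]   # unmatched, matched, unmatched edges
M2 = augment(M, P)
assert M2 == {frozenset({'b0', 'g0'}), frozenset({'b1', 'g1'})}   # one pair larger
```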
We can thusrepeat that process O(√V ) times, seeing that number of distinct lengths,and end up having constructed a maximum matching.The construction of such a set of augmenting paths can be accomplishedthrough the use of a breadth-first search followed by a modified depth-firstsearch, which we will replace by our own BFS and DFS. We outline thesesteps below, and show them diagrammatically in Figures 4.2; we suggestfollowing the steps in the figures in parallel with those written below.1. Find the set of all unmatched girls. Consider there to be an edge toeach of those nodes from a new “fictional” node t.2. Find the set of all unmatched boys. Consider there to be an edge fromeach of those nodes to a new “fictional” node s.3. Let there be an array a of size V whose entries are initialized to ∞.4. Conduct a breadth-first search originating at t, where the only edgespermitted for use are matched (boy, girl) edges, unmatched (girl, boy)edges, and the edges incident on s and t. When visiting a vertex vi,42set a[i] to vi’s distance from t. If the breadth-first search terminateswithout reaching s, return that no augmenting path exists; otherwisestop as soon as the search reaches s. Perform this search using BFS.5. Reverse all the edges incident on s and t.6. Any path running from s to t that uses unmatched (boy, girl) edgesand matched (girl, boy) edges will be an augmenting path. If we alsorequire that each step vi → vj brings us closer to t (a[i] = a[j] + 1),any such path will be an augmenting path of minimal length.7. Conduct a depth-first search originating at s, where an edge eij canonly be used if a[i] = a[j] + 1, and the only such edges permittedfor use are unmatched (boy, girl) edges, matched (girl, boy) edges, andedges incident on s and t. When visiting an unmatched girl (whosedistance from t will be 1), trace back the path taken, mark it as anaugmenting path, and resume the search from s (not visiting any ofthe nodes on the path again).Hopcroft and Karp have done the work for us of proving that the al-gorithm works in principle, leaving us to show that the procedure outlinedabove finds a maximal set of disjoint augmenting paths of minimum length.To show that, we need to show that the paths we find are all of minimumlength; and that after this procedure, there is no augmenting path remainingthat has the same length and is completely disjoint.The latter condition is straightforward. Assume there is such a path P:then at some point it must reach an unmatched girl, and it will connect thatunmatched girl to t.Since our initial breadth-first search computes the distance from eachvertex to t (and thus from each vertex to some unmatched girl), the require-ment that each step reduces that number ensures that all paths found inour depth-first search are of minimum length.The running time for our algorithm is straightforward to compute giventhe results for BFS and DFS. Hopcroft and Karp proved that our procedure(above) needs to be run no more than 2√V times. The procedure itselfcomprises a BFS, a DFS, and some number of backtraces. The backtracesuse each node found in the DFS at most once, and so do not add to theDFS’ running time.Since this algorithm makes calls to functions that may fail, BFS andDFS, there is some probability that it fails. 
We may pass our function some constant ǫ, a probability of failure that we are willing to tolerate; since it calls BFS and DFS no more than a total of 4√V times, if we require that BFS and DFS have probability ≤ ǫ/4√V of failure each time, the total probability with which bipartite matching fails will be less than ǫ. Our breadth-first and depth-first searches thus have a running time of O(√(VE lg(V ǫ⁻¹))).

Ambainis and Špalek's algorithm, on which ours is based, is almost identical; the difference is that where our BFS and DFS use findsol and findall, theirs use repeated applications of BBHT. This causes their breadth-first and depth-first searches to run in O(√(VE) lg(V ǫ⁻¹)) time, whereas ours run in O(√(VE lg(V ǫ⁻¹))) time. Our total running time is thus O(V√(E lg(V ǫ⁻¹))), a factor of √lg V faster than Ambainis and Špalek's O(V√E lg(V ǫ⁻¹)).

Ambainis and Špalek also discuss non-bipartite matching and maximum flow in the same paper; in both cases they ignore errors for the body of their algorithms, and throw on an extra factor of lg V at the end in order to reduce the probability of failure to a constant. While that works, this section shows that it is not necessarily optimal for bipartite matching; and due to the similarity of bipartite matching to the other problems they consider, it is reasonable to conjecture that one could also achieve a speedup on the order of √lg V for general matching and flow.

[Figure 4.2: (a) A non-maximum matching M. (b) After steps 1-4. Nodes are labelled by their distance from t; for the purposes of this step, matched (thick) edges go left-to-right, and unmatched edges go right-to-left. (c) After steps 5-7. Two possible augmenting paths are shown in red. Here, matched edges go left and unmatched edges go right; the thin edge in the middle cannot be used, however, because it would mean increasing distance from t (and thus lead to an augmenting path not of minimal length). (d) A new matching, M′ = M ⊕ {paths}.]

5. Applications in Computational Geometry and Dynamic Programming

5.1 Computational Geometry Algorithms

Geometry problems are a natural area of attack for quantum algorithms, because by defining N points we have implicitly defined Θ(N²) relationships between those points, making it very natural to ask questions whose answers use information Θ(N²) in the size of the question. We will address points, which will be in either Z^d or R², as p_i.

5.1.1 Maximum Points on a Line, maxpoints

This problem is, in all of its generality, a very simple one: given N points, find the line that goes exactly through the maximum number of them. We differentiate here between a solution that is practical for integers and a slightly slower solution that is practical for real numbers, acknowledging that practical computers, however quantum, have precision issues where real numbers are concerned.

Intuitively, each algorithm works by taking a single point p and finding out how many points are on the best line that goes through p. We then use minfind to find the best such p. In the Z^d case, our method is to find the vector from p to each other point, canonicalize it using GCD, and then stick all those vectors into a hash table so that we can quickly count repeats. In the R² case, our method is to sort the points in counterclockwise order about p and look for collinear points, which should now be ordered consecutively.

This is a particularly interesting problem to solve in Z² because it is a member of a class of classical problems called "3SUM-hard"[28].
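The GCD canonicalization just described for the Z^d case can be sketched classically as follows; the full algorithm then applies minfind over the choice of p, for which a classical maximum stands in below, and the point set is purely illustrative.

```python
from math import gcd
from collections import Counter

def mup(p, points):
    # Canonicalize the direction vector to every other point (gcd-reduce it,
    # make the first nonzero component positive) and count repeats in a hash table.
    counts = Counter()
    for q in points:
        a = tuple(qi - pi for qi, pi in zip(q, p))
        if all(c == 0 for c in a):          # q coincides with p; skip it
            continue
        g = 0
        for c in a:
            g = gcd(g, abs(c))
        a = tuple(c // g for c in a)
        first = next(c for c in a if c != 0)
        if first < 0:
            a = tuple(-c for c in a)
        counts[a] += 1
    return max(counts.values()) if counts else 0   # points (besides p) on the best line through p

pts = [(0, 0), (1, 1), (2, 2), (3, 5)]
assert max(mup(p, pts) for p in pts) + 1 == 3      # the line y = x carries 3 points
```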
Of the10Thanks to Kory Stevens for a productive conversation that removed a factor of lgN .46problems belonging to this class, all of the known ones have classical lower-bounds of at most Ω(N), and upper bounds of at least O(N2). All problemsin the class are reducible from the 3SUM problem: given a set S of Nintegers, is there some triplet a, b, c in that set such that a + b + c = 0?That is quite a straightforward problem to solve with findsol in Θ(N), whilewe will solve maxpoints in N1.5; thus we open a gap of√N between twosimilar problems where no such gap existed before. This raises interestingquestions about the maximum points on a line problem, and a number ofother problems in 3SUM-hard. This in turn suggests that many of thealgorithms in 3SUM-hard may be amenable to sub-N2 quantum solutions.5.1.2 Maximum Points on a Line: ZdTheorem 9 Let there be N points in Zd, whose coordinates are bounded by±U . The following algorithm maxpoints finds the straight line on whichlies the maximum number of those points, in Θ(N3/2d lgU√lg ǫ−1) time andwith probability ≤ ǫ that it will fail, returning an incorrect result.1. Use Section 3.2’s minfind to maximize the following function, mup(maximum using p), over all points p. Call the result P .2. Function mup(p):(a) Create an empty hash table H, mapping vectors in Zd (keys) tointegers (values).(b) For each point pi:i. Define −→a = −→pi −−→p .ii. Normalize −→a , keeping its entries in the integers, so that thefirst nonzero component is positive and the gcd of the abso-lute values of the components is 1.iii. If −→a is not yet in H, insert it in H mapping to value 1; if −→ais already in H, increment its value.(c) Return the maximum value in H: the number of points on thebest line going through p.3. Run mup on P, but instead of returning the maximum value in thehash table return its corresponding key, and call it−→V .4. The answer to return is the line−−→X(t) =−→P + t−→V .47In mup, all vectors to other points from p are canonicalized in such away that any pair of points collinear with p will have the same directionvector −→a . mup repeats d gcds N times, for a total of Θ(Nd lgU), and ourmain function’s most costly operation is one minfind that evaluates mupΘ(√N lg ǫ−1) times. Thus our total running time is Θ(N3/2d lgU√lg ǫ−1),and our probability of failure is ǫ. Classically the problem can be solved inΘ(N2d lgU).5.1.3 Maximum Points on a Line: R2Theorem 10 Let there be N points in R2. The following algorithm findsthe straight line on which lies the maximum number of those points inΘ(N3/2 lgN√lg ǫ−1), with probability ≤ ǫ that it will fail, returning someincorrect result.1. Use minfind to maximize the following function, mup2, over all pointsp. Call the result P .2. Function mup2(p):(a) Let −→ai = −→pi − −→p . If −→ai .x < 0, or −→ai .x = 0 and −→ai .y < 0, thenreverse −→ai . This puts all points to the right of p.(b) Sort the −→ai as follows: −→ai < −→aj iff (−→ai ×−→aj ) · ẑ > 0. This has theeffect of sorting the pi in counter-clockwise order about p.(c) Iterate over the sorted array, keeping a running total of how manyconsecutive −→ai have cross product of 0 with one another. Returnthe maximum such total. (Practically, we should see how manyconsecutive −→ai have cross product < δ for some small δ, and loopthrough a second time to catch the nearly-straight-up and nearly-straight-down −→ai ).3. Run mup2 on P, but instead of returning the maximum total, returnsome point (other than P ) on the line giving that total. Call it P ′.4. 
The answer to return is the line−−→X(t) =−→P + t(−→P ′ −−→P ).This algorithm sorts the points about each point p, which has the effectof grouping collinear points together. Then it simply counts how manyconsecutive collinear points it can find. mup2 is Θ(N lgN), and our mostcostly operation is one minfind that evaluates mup2 Θ(√N lg ǫ−1) times,for a total running time of Θ(N3/2 lgN√lg ǫ−1) and probability of failure ǫ.Classically this problem can be solved in Θ(N2 lgN).485.2 Dynamic Programming AlgorithmsDynamic programming (DP) is a technique for solving problems by combin-ing the solutions to subproblems. DP algorithms achieve this by partitioningtheir problems into subproblems, solving the subproblems recursively, andthen combining the solutions to solve the original problem. What distin-guishes dynamic programming from other approaches is that the subprob-lems are not independent: subproblems share sub-subproblems with oneanother. A dynamic programming algorithm solves every sub-subproblemonly once and saves its result in a table, thus eliminating the need to recom-pute the answer for a sub-subproblem every time it is needed[19].Dynamic programming is often used to solve optimization problems.Given some situation (a problem), come up with a choice (each possiblechoice leads to a subproblem) that optimizes some final quantity (way downat the sub-. . . -subproblem level). We will see an example of this in Section5.2.1. Since DP is often used to make some sort of optimal choice, DP al-gorithms in general are obvious candidates for Section 3.2’s minfind, whichreduces the time taken to check all our options.5.2.1 Coin Changer, coinchangeGiven a monetary system with some set of coins and bills, we may wish tomake some precise amount of money – the coin changer problem is to useas few coins and bills as possible. Intuitively, this is easy: with Canadianor American money, for example, to make D cents one can simply take thelargest bill/coin of value v ≤ D, then make D−v cents in the same way. Forexample, to make 40c one would take the largest coin less than 40c (25c),then the largest coin less than the remaining 15c (10c), and finally a 5c coin.This is a greedy approach that works for most real currencies, but it is notalways optimal: for example, should a 20c piece be added to the Canadiansystem, then making 40c only takes two coins, but the greedy approach willstill cause us to use three. Should the reader ever travel to Costa Rica orBhutan, he or she will encounter a non-greedy currency system.11Theorem 11 Given a length C integer array of coin denominations V, aswell as an integer D, the following algorithm coinchange returns the mini-11The Bhutanese ngultrum is divided into 100 chertrum. The three least valuableBhutanese coins in modern circulation are the 20 chertrum coin, the 25 chertrum coin, andthe 50 chertrum coin. Making 40 chertrum is not possible using the greedy technique; evenwith a 5-chertrum coin added, the greedy technique does not yield an optimal solution.49mum number of coins required to make D units, or ∞ if making D units ofcurrency is impossible. It achieves this in Θ(D√C lgD) time.Since we are trying to minimize a quantity, the number of coins used,making D units optimally is a matter of choosing one coin V [i] to use, thenmaking D−V [i] units optimally. To do so we build up a table T, where T [i]is the minimum number of coins needed to make i units. We start by fillingin T [i] with i small, since later entries will depend on earlier ones.1. 
Let there be an array T of size D+1, such that initially T [0] = 0, andT [i 6= 0] =∞.2. For d from 1 to D, DO:(a) Use the algorithm of Section 3.2 to minfind one of the coins V [i]such that d− V [i] ≥ 0, and 1 + T [d− V [i]] is minimal.(b) If such a coin was found, let T [d] = 1 + T [d− V [i]].DONE.3. Return T [D].Here we simply fill in the table as discussed above, by using minfindto determine which coin should be taken first. Since coinchange makes acall to a function that may fail, minfind, there is some probability that itfails. We may pass the coinchange function some constant ǫ, a probabilityof failure that we are willing to tolerate; since coinchange calls minfind Dtimes, if we require that minfind has probability ǫ/D of failure each time,the total probability with which BFS fails will be less than ǫ. Each minfindtakes Θ(√C lg(Dǫ−1)) time, and minfind is repeated D times for a totaltime complexity of Θ(D√C lg(Dǫ−1)).The reason we discuss this example is because it is very representativeof how one can improve dynamic programming algorithms in general usingquantum techniques. Many other problems, for example minimum-operationmatrix chain multiplication, can be solved quickly with a quantum algorithmin much this manner.5.2.2 Maximum Subarray Sum, subarray-sumTheorem 12 Given an N ×N array of real numbers A, the following algo-rithm subarray-sum finds a rectangular subarray such that the sum of the50subarray’s elements is maximized, in Θ(N2√lg ǫ−1) time and with probabil-ity of failure ǫ. We will address the result by its limits: (miny, minx, maxy,maxx).This is another classic problem, for which the fastest known classicalsolution runs in Θ(N3√log logNlogN)and was found by Tamaki[10]. There isa much more straightforward (though still clever) Θ(N3) solution, whichinvolves maximizing the sum of all Θ(N2) possible column ranges, each inΘ(N).Our algorithm begins by creating a table T that makes checking the sumfor an arbitrary rectangle Θ(1), and then simply minfinds over all rectangles.This algorithm, like the classical one, is really greedy rather than dynamicprogramming; we include it in this section because the construction of T isDP.1. Let there be an N ×N array T, whose i, j element will hold the sumfor subarray (0, 0, i, j). Initialize its entries to 0, and define T [i][j] = 0if i or j is negative. The next step will fill in T as desired.2. For i from 0 to n− 1, For j from 0 to n− 1 DO:(a) T [i][j] = A[i][j] + (T [i− 1][j] + T [i][j − 1]− T [i− 1][j − 1]).DONE.3. There are N4 possible rectangular subarrays. The summation overany such array is T [maxy][maxx] − T [maxy][minx − 1] − T [miny −1][maxx]+T [miny−1][minx−1], which is a Θ(1) calculation. Use thealgorithm of Section 3.2 to minfind over all such (miny, minx, maxy,maxx) and find the subarray with the maximum summation, and thenreturn it.The creation of T takes Θ(N2), and the minfind takes Θ(N2√lg ǫ−1)and has probability of failure ǫ. The dynamic programming part of thisalgorithm is the construction of T .516. Summary and ConclusionsWe began the body of this thesis with careful analysis of the BBHT algo-rithm, finding its probability of failure; the result allowed us to constructan algorithm, findsol, that solves unstructured search (the same problem asGrover’s algorithm solves) faster than previous algorithms. 
The benefit ofthis tool is widespread: any algorithm making use of unstructured searchmay use findsol, often saving√lg time factors over BBHT, the previous toolof choice.We summarize findsol in Table 6.1, contrasting it with the best pre-existing quantum algorithm (BBHT) as well as the fastest possible proba-bilistic and deterministic classical algorithms that achieve the same goals.Unstructured Search Solution Exists No SolutionQuantum Θ(√N/M +√N lg ǫ−1/M1.86) Θ(√N lg ǫ−1)Previous Quantum Θ(√N/M ) Θ(√N lg ǫ−1)Classical Probab. Θ(N/M) Θ(N lg ǫ−1)Classical Determ. Θ(N) Θ(N)Table 6.1: Section 3.1’s findsol compared to alternatives. The unit of timeis the number of calls to F ; N is the size of the domain, M is the numberof solutions, and ǫ is the maximum probability of failure we will tolerate.After analyzing BBHT and introducing findsol, we discussed a series ofother tools designed to be used in the construction of algorithms. The toolsare various kinds of quantum searches, allowing us to perform operations likeminimum-finding faster than we could do with a classical computer: theyare summarized in Table 6.2, and contrasted with the pre-existing quantumalgorithms as well as the fastest possible deterministic classical algorithmsachieve the same goals.After constructing these tools, we made use of them, finding applicationsin graph theory. The tools are presumably applicable in a wide range of al-gorithms, and we chose to target some of the more central, classic algorithmsin graph theory: breadth-first search, depth-first search, single-source short-52Minimum Finding Finding all M solutionsQuantum Θ(√N lg ǫ−1) Θ(√NM +√N lg ǫ−1)Previous Quantum Θ(√N lg ǫ−1) Θ(√NM +√N lg ǫ−1)Classical Determ. Θ(N) Θ(N)Finding d Minimal, Different ObjectsQuantum Θ(√Nd+√N lg ǫ−1 + d lgN lg d)Previous Quantum Θ(√Nd+√N lg ǫ−1+?)Classical Determ. Θ(N)Table 6.2: Section 3.2-3.4’s algorithmsminfind, findall andmindiff comparedto alternatives. The unit of time is the number of calls to F, except in thed lgN lg d term of mindiff, where the unit is time. N is the size of thedomain, M is the number of solutions, and ǫ is the maximum probability offailure we will tolerate.est paths (with negative edge weights allowed), and all-pairs shortest paths.We summarize those results in Table 6.3, contrasting with the fastest pos-sible classical algorithms in the case of breadth-first and depth-first search,and the fastest known classical algorithms in the case of single-source andall-pairs shortest paths. Note that several of our graph algorithms can runmore slowly than their classical counterparts for E sufficiently small; ineach such case there is some a such that the quantum algorithm is faster ifE ∈ Ω(V lga V ).Problem Quantum Complexity ClassicalBFS O(√V E lg(V ǫ−1)) Θ(E)DFS O(√V E lg(V ǫ−1)) Θ(E)Single-Src. S.P. O(√V 3E lg(V ǫ−1)) Θ(V E)All-Pairs S.P. O(√V 3E(lg V +√lg(V ǫ−1)) + V 2 lg3 V ) Θ(V 2.575)Table 6.3: Chapter 4’s algorithms, BFS (breadth-first search), DFS (depth-first search), SPNW (single-source shortest paths with negative weights)and APSP (all-pairs shortest paths with negative edge weights) comparedto classical alternatives. Results are presented in the edge array model; V isthe number of vertices in the graph, E is the number of edges, and ǫ is themaximum probability of failure we will tolerate. 
We have also adapted a graph algorithm by Dürr, Heiligman, Høyer and Mhalla, as well as one by Ambainis and Špalek, to use our tools; we have thus given them slight speedups by improving how they deal with the possibility of failure. The results are summarized in Table 6.4, and contrasted with the fastest known classical algorithms.

  Single-Source Shortest Path (+ve edge weights)
  Quantum Adaptation | O(√(EV) lg² V + V lg³ V + V lg V √(lg(V ǫ⁻¹)))
  Previous Quantum   | O(√(EV) lg² V lg(V ǫ⁻¹) + V lg³ V + ?)
  Classical          | Θ(E + V lg V)

  Bipartite Matching
  Quantum Adaptation | O(V √(E lg(V ǫ⁻¹)))
  Previous Quantum   | O(V √E · lg(V ǫ⁻¹))
  Classical          | Θ(E √V)

Table 6.4: Section 4.5's algorithms, compared to the algorithms they were adapted from and the fastest known classical algorithms. Results are presented in the edge array model; V is the number of vertices in the graph, E is the number of edges, and ǫ is the maximum probability of failure we will tolerate. In this table, to convert the complexity from the edge array model to the adjacency matrix model, change E to V².

Finally we addressed a few problems in computational geometry and dynamic programming. In Section 5.1 we took a problem (maxpoints) whose best classical limits are Ω(N) and O(N²), and invented a bounded-error Θ(N^{3/2} lg N) algorithm for it; hopefully there are other problems that can be addressed similarly. In Section 5.2.1 we addressed the coin changer problem, a problem that is commonly used when introducing dynamic programming. This example is illustrative of what one can do for dynamic programming with judicious use of minfind; meanwhile the maximum subarray-sum problem in Section 5.2.2 is a good illustration of how one can make minor adaptations to a classical algorithm that make it amenable to quantum techniques.

  Problem                | Quantum Complexity        | Classical
  -----------------------|---------------------------|-----------
  Points on a Line (Z^d) | N^{3/2} d lg U √(lg ǫ⁻¹)  | N² d lg U
  Points on a Line (R²)  | N^{3/2} lg N √(lg ǫ⁻¹)    | N² lg N
  Coin Changer           | D √(C lg(D ǫ⁻¹))          | D C
  Maximum Subarray Sum   | N² √(lg ǫ⁻¹)              | N³

Table 6.5: Chapter 5's applications in computational geometry and dynamic programming, compared to fastest known classical algorithms. ǫ is the maximum probability of failure we will tolerate.

6.1 Future Directions

In this thesis we have chosen to focus on deriving new algorithms rather than proving lower bounds. As such, it is possible that the algorithms presented here are not optimal, which presents clear directions for future research: searching for lower bounds that approach the upper bounds presented here, and finding faster algorithms.

There are few published quantum algorithms (at least when viewed in the context of the number of published classical algorithms!), which means that there is a great deal of exciting work to be done; and with so many classical algorithms with no quantum counterparts, much of the low-hanging fruit remains untouched.
A. BBHT: Running Time and Probability of Failure

The probability of failure for BBHT is the probability that, for each m up to 2√N, we never successfully find a result when there is one to be found. To calculate that probability, we first need a result originally derived by Boyer, Brassard, Høyer and Tapp[2]: after j Grover iterations, the probability of returning a valid result is sin²((2j + 1)θ). For a given m, j could be any of 0, . . . , m − 1; averaging over those values, we see the following for the probability of failure for any given iteration through step 3:

\begin{align*}
P_{\mathrm{fail},m} &= 1 - \sum_{j=0}^{m-1} \frac{1}{m}\sin^2((2j+1)\theta) \\
&= 1 - \frac{1}{2m}\sum_{j=0}^{m-1}\left[1 - \cos((2j+1)2\theta)\right] \\
&= \frac{1}{2} + \frac{1}{4m}\sum_{j=0}^{m-1}\left(e^{i4j\theta}e^{i2\theta} + e^{-i4j\theta}e^{-i2\theta}\right) \\
&= \frac{1}{2} + \frac{1}{4m}\left(e^{i2\theta}\,\frac{1-e^{i4\theta m}}{1-e^{i4\theta}} + e^{-i2\theta}\,\frac{1-e^{-i4\theta m}}{1-e^{-i4\theta}}\right) \\
&= \frac{1}{2} + \frac{\sin(4m\theta)}{4m\sin(2\theta)}
\end{align*}

which is the probability that an invalid result will be returned, for m an integer. m is of course not actually an integer, but by choosing a random integer 0 ≤ j < m, we treat it as one and can consider it to be one for the purposes of that formula.

We wish to upper-bound the probability of error for BBHT as a whole, and we will start by differentiating between the cases 0 < θ ≤ π/4 (M ≤ N/2) and π/4 < θ ≤ π/2 (M > N/2). For any M ≤ N/2, we wish to find an m₀ such that for each repetition of the outer loop when m > m₀, the probability of failure is less than or equal to some constant. For M > N/2, we will find that the probability of failure is always less than or equal to some constant.

We begin by considering M ≤ N/2. In order to find m₀, first we have to find critical points of fθ(m) ≡ 1/2 + sin(4mθ)/(4m sin(2θ)), the probability that an invalid result will be returned:

\begin{align*}
\frac{df_\theta(m)}{dm} &= 0 \\
\frac{4\theta\cos(4m\theta)}{4m\sin(2\theta)} &= \frac{\sin(4m\theta)}{4m^2\sin(2\theta)} \\
4m\theta &= \tan(4m\theta) \\
4m\theta &= 0,\ 4.49,\ 7.73,\ \ldots
\end{align*}

Now we consider the form of fθ(m). It starts off at fθ(0) = 1/2 + θ/sin(2θ) and decreases from there; we want to find the first maximum it will return to after dipping down, meaning 4mθ = 7.73. Since 0 < θ ≤ π/4, we use sin(2θ) ≥ (4/π)θ, and arrive at (when 4mθ = 7.73) fθ(m₀) ≤ 1/2 + sin(7.73)/((4/π) × 7.73) ≈ 0.6. That does not give us m₀, however: m₀ is when fθ(m) first dips that low. Solving numerically and using sin θ ≤ θ:

\begin{align*}
0.6 &= \frac{1}{2} + \frac{\sin(4m_0\theta)}{\frac{4}{\pi}\,4m_0\theta} \\
4m_0\theta &\leq 2.78 \\
m_0 &\leq 0.69/\sin\theta \\
m_0 &\leq 0.69\sqrt{N/M}
\end{align*}

For π/4 < θ ≤ π/2, although fθ(m) is well-behaved and slowly-oscillating over the space of integer values of m, it oscillates wildly in between; so our previous approach, based on considering fθ as a function acting on the continuum, will not work. To fix this problem, instead of considering θ, we now consider the angle φ ≡ π/2 − θ, first noting that fθ(m) = 1 − fφ(m), meaning that success for θ corresponds to failure for φ:

\[
P_{\mathrm{fail}}(m) = \frac{1}{2} + \frac{\sin(4m\theta)}{4m\sin(2\theta)} = \frac{1}{2} + \frac{\sin\!\left(4m\left(\tfrac{\pi}{2}-\varphi\right)\right)}{4m\sin(\pi-2\varphi)} = \frac{1}{2} - \frac{\sin(4m\varphi)}{4m\sin(2\varphi)}
\]

Now we are back in the elysian realm of 0 ≤ φ < π/4, and we can bound the probability of failure for φ from below and use that result. The procedure here is as before, but instead of 7.73 we use the first root of tan(4mφ) = 4mφ, namely 4.49. For φ < π/4 we use sin(2φ) ≤ 2φ, and arrive at (when 4mφ = 4.49) Pfail ≥ 1/2 + sin(4.49)/(2 × 4.49) ≈ 0.39. That is the lowest the probability of failure fφ(m) ever gets, and correspondingly it is the lowest the probability of success 1 − fθ(m) ever gets.

We now have that, for any given iteration of the outer loop, the probability of failure for M > N/2 is less than or equal to 0.61 for all m, and the probability of failure for M ≤ N/2 is less than or equal to 0.6 for m ≥ m₀ = 0.69√(N/M). We now compute the total probability of failure and running time for each case.
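The closed form above is straightforward to check numerically. The following small Python sketch verifies, for randomly chosen integer m and angles θ, that averaging 1 − sin²((2j + 1)θ) over j = 0, . . . , m − 1 agrees with 1/2 + sin(4mθ)/(4m sin(2θ)), and then prints fθ(m) around m₀ = 0.69√(N/M) for one sample instance to illustrate the dip discussed above; the particular values of N and M are arbitrary.

```python
import math, random

# Check the closed form for the per-iteration failure probability: for
# integer m, (1/m) * sum over j = 0..m-1 of [1 - sin^2((2j+1)theta)]
# should equal 1/2 + sin(4 m theta) / (4 m sin(2 theta)).
random.seed(0)
for _ in range(1000):
    theta = random.uniform(0.001, math.pi / 2 - 0.001)
    m = random.randint(1, 200)
    direct = sum(1 - math.sin((2 * j + 1) * theta) ** 2 for j in range(m)) / m
    closed = 0.5 + math.sin(4 * m * theta) / (4 * m * math.sin(2 * theta))
    assert abs(direct - closed) < 1e-9

# Sample instance with M <= N/2: print f_theta(m) for integer m near
# m0 = 0.69 sqrt(N/M), illustrating that it has already dipped below ~0.6.
N, M = 10**6, 50
theta = math.asin(math.sqrt(M / N))
f = lambda m: 0.5 + math.sin(4 * m * theta) / (4 * m * math.sin(2 * theta))
m0 = 0.69 * math.sqrt(N / M)
print(round(m0, 1), [round(f(m), 3) for m in range(int(m0), int(m0) + 5)])
```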
For M > N/2 the total probability of failure is simply 0.61^{lg_λ(2√N)} ≈ 0.5 N^{−0.26/ln λ}, and the probability of getting to the kth iteration through the main loop is 0.61^k. This gives us a total running time of

\[
\sum_{k=0}^{\lg_\lambda(2\sqrt{N})} \frac{\lambda^k}{2}\,(0.61)^k < \frac{1}{2}\cdot\frac{1}{1-0.61\lambda}.
\]

For M ≤ N/2 the total probability of failure is 0.6^{lg_λ(2√N) − lg_λ(0.69√(N/M))}, which gives us 0.6^{lg_λ(2.8√M)} ≈ (2.8M)^{−0.25/ln λ}. The running time is the sum:

\begin{align*}
t &= \sum_{k=0}^{\lg_\lambda(0.69\sqrt{N/M})} \frac{\lambda^k}{2} + \sum_{k=\lg_\lambda(0.69\sqrt{N/M})}^{\lg_\lambda(2\sqrt{N})} \frac{\lambda^k}{2}\,(0.6)^{\,k-\lg_\lambda(0.69\sqrt{N/M})} \\
&\approx \int_0^{\lg_\lambda(0.69\sqrt{N/M})} \frac{\lambda^k}{2}\,dk + \int_{\lg_\lambda(0.69\sqrt{N/M})}^{\lg_\lambda(2\sqrt{N})} \frac{\lambda^k}{2}\,(0.6)^{\,k-\lg_\lambda(0.69\sqrt{N/M})}\,dk \\
&= \frac{0.69\sqrt{N/M}}{2\ln\lambda} + \left(0.69\sqrt{N/M}\right)^{-\lg_\lambda 0.6} \left[\frac{x^{1+\lg_\lambda 0.6}}{2\,(1+\lg_\lambda 0.6)}\right]_{0.69\sqrt{N/M}}^{2\sqrt{N}} \\
&= \frac{0.69\sqrt{N/M}}{2\ln\lambda} + \frac{(3\sqrt{M})^{\lg_\lambda 0.6}}{1+\lg_\lambda 0.6}\,\sqrt{N} - \frac{\tfrac{1}{2}\sqrt{N/M}}{1+\lg_\lambda 0.6}
\end{align*}

Since we have √(N/M) dependence from the first term, we should choose λ such that the second term contributes no worse, which gives us the condition lg_λ 0.6 < −1, or λ < 1.64. We now have:

\[
t \leq \frac{0.69\sqrt{N/M}}{2\ln\lambda} - \frac{\tfrac{1}{2}\sqrt{N/M}}{1+\lg_\lambda 0.6}
\]

which is minimal for λ ≈ 1.31, and more importantly is Θ(√(N/M)). Boyer, Brassard, Høyer and Tapp arbitrarily chose λ = 8/7, which we would like to say makes the algorithm 50% slower than our choice of λ; but that is only true in this approximation. Furthermore, the optimal value for λ is actually a function of M/N, so there is no one optimal λ in general.

Using λ = 1.31, our results can be summarized in Table A.1. Most important to us is that our running time is Θ(√(N/M)) calls to F, and our probability of failure is less than 0.5 M^{−0.93}. It is also worth noting that our earlier restriction, λ < 1.64, came because we chose a small root for tan(x) = x. If we had chosen a larger root, λ could have been larger, up to an asymptotic maximum of 2.

  Case     | Probability of Failure | Expected Running Time
  ---------|------------------------|----------------------
  M ≤ N/2  | ≤ 0.4 M^{−0.93}        | ≤ 1.9 √(N/M)
  M > N/2  | ≤ 0.5 N^{−0.96}        | ≤ 2.3

Table A.1: Probability of failure and expected running time for BBHT (for λ = 1.31).
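The choice λ ≈ 1.31 and the constants in Table A.1 can be reproduced numerically from the bound above. The following sketch scans the coefficient of √(N/M) over 1 < λ < 1.64 and evaluates the failure bound at λ = 1.31; the grid resolution and helper name are illustrative only.

```python
import math

# Coefficient of sqrt(N/M) in the upper bound derived above:
#   t(lambda) <= [0.69/(2 ln lambda) - (1/2)/(1 + log_lambda 0.6)] * sqrt(N/M)
def coeff(lam):
    return 0.69 / (2 * math.log(lam)) - 0.5 / (1 + math.log(0.6, lam))

grid = [1 + k / 1000 for k in range(10, 640)]     # lambda from 1.01 to 1.639
best = min(grid, key=coeff)
print(round(best, 3), round(coeff(best), 2))      # minimizer ~1.31-1.32, coefficient ~1.84 (< 1.9)
print(round(0.25 / math.log(1.31), 2))            # failure exponent at lambda = 1.31: ~0.93
print(round(2.8 ** (-0.25 / math.log(1.31)), 2))  # failure prefactor: ~0.39, i.e. the .4 of Table A.1
```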
Bibliography

[1] L. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th Annual ACM Symposium on Theory of Computing (STOC), pages 212–219, 1996.

[2] M. Boyer, G. Brassard, P. Høyer, and A. Tapp. Tight bounds on quantum searching. Fortschritte der Physik, 46:493–505, 1998.

[3] H. Buhrman, R. Cleve, R. de Wolf, and Ch. Zalka. Bounds for small-error and zero-error quantum algorithms. In Proceedings of the 40th IEEE Symposium on Foundations of Computer Science (FOCS), pages 358–368, 1999.

[4] C. Dürr and P. Høyer. A quantum algorithm for finding the minimum. quant-ph/9607014, 1996.

[5] Yu. I. Manin. Computable and uncomputable (in Russian). Moscow, Sovetskoye Radio, 1980.

[6] R. Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21(6/7):467–488, 1982.

[7] P. Benioff. Quantum mechanical Hamiltonian models of Turing machines that dissipate no energy. Journal of Mathematical Physics, 22:495.

[8] C. Dürr, M. Heiligman, P. Høyer, and M. Mhalla. Quantum query complexity of some graph problems. In Proceedings of ICALP 2004, Turku, Finland, 2004.

[9] A. Ambainis and R. Špalek. Quantum algorithms for matching and network flows. cs/0508205, 2005.

[10] H. Tamaki and T. Tokuyama. Algorithms for the maximum subarray problem based on matrix multiplication. In Proceedings of the 9th Symposium on Discrete Algorithms (SODA), volume 12, pages 446–452, 1998.

[11] A. Church. An unsolvable problem of elementary number theory. American Journal of Mathematics, 58(2):345–363, 1936.

[12] A. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42:230–265, 1936.

[13] S.C. Kleene. Lambda-definability and recursiveness. Duke Mathematical Journal, 2:340–353, 1936.

[14] D. Deutsch. Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society of London Ser. A, A400:97–117, 1985.

[15] D.R. Simon. On the power of quantum computation. In Proceedings of the 35th Annual IEEE Symposium on the Foundations of Computer Science (FOCS), pages 116–123, 1994.

[16] P. Shor. Algorithms for quantum computation: discrete logarithms and factoring. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science (FOCS), 1994.

[17] C.H. Bennett, E. Bernstein, G. Brassard, and U. Vazirani. Strengths and weaknesses of quantum computing. SIAM Journal on Computing, 26(5):1510–1523, 1997.

[18] Yellow Pages Group Co. Canada 411. http://www.canada411.com, 2006.

[19] T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, 2001.

[20] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, 2000.

[21] G. Brassard and P. Høyer. An exact quantum polynomial-time algorithm for Simon's problem. In Israeli Symposium on Theory of Computing and Systems, pages 12–23, 1997.

[22] C.H. Bennett. Logical reversibility of computation. IBM Journal of Research and Development, 17(6):525–532, 1973.

[23] R. Bellman. On a routing problem. Quarterly of Applied Mathematics, 16(1):87–90, 1958.

[24] L. Ford and D. Fulkerson. Flows in Networks. Princeton University Press, 1962.

[25] U. Zwick. All pairs shortest paths in weighted directed graphs: exact and almost-exact algorithms. In Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), 1998.

[26] D. Johnson. Efficient algorithms for shortest paths in sparse networks. In Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), 1998.

[27] J. Hopcroft and R. Karp. An n^{5/2} algorithm for maximum matching in bipartite graphs. SIAM Journal of Computing, 2(4), December 1973.

[28] A. Gajentaan and M. Overmars. On a class of O(n²) problems in computational geometry. CGTA: Computational Geometry: Theory and Applications, 5, 1995.
