UBC Theses and Dissertations
Fast post processing algorithms for fault tolerant quantum computation using surface codes Zaribafiyan, Arman 2014

Full Text

Fast Post Processing Algorithms for Fault Tolerant Quantum Computation Using Surface Codes

by

Arman Zaribafiyan

BSc, Isfahan University of Technology, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Electrical and Computer Engineering)

The University of British Columbia (Vancouver)

September 2014

© Arman Zaribafiyan, 2014

Abstract

A local architecture of qubits in two-dimensional lattices is known as one of the candidates for running fault-tolerant quantum computation with a high threshold. Topological quantum error correcting codes, such as surface codes, are used to make this architecture robust against various quantum error models. In this research we present a fast, concise, operationally inexpensive and highly parallelizable decoding algorithm for surface codes, without using concatenation, which has a threshold ranging from 8.6% to 10.5% depending on a parameter called OMSS. Thanks to the parallelization of the proposed algorithm, the time complexity scales logarithmically in the lattice size for small OMSS. This can be compared to the work of [9, 10], which has a threshold of 8% for the bit-flip error channel within the same complexity order.

Preface

This dissertation is original and unpublished work by the author, A. Zaribafiyan. The "Adjustment of Weights" idea in chapter 3 was suggested by my supervisor, Prof. Robert Raussendorf; the rest of the work is my own original work and implementation.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Algorithms
Glossary
Acknowledgments
Dedication
1 Introduction
  1.1 Plan of the Thesis
  1.2 Quantum Computers
    1.2.1 Motivation
    1.2.2 A Glimpse of Quantum Mechanics
    1.2.3 Quantum Computation
    1.2.4 Fault Tolerant Quantum Computation and Threshold Theorem
  1.3 Error Correction
    1.3.1 Classical Error Correction
    1.3.2 Linear Codes
    1.3.3 Quantum Error Model and Decoherence
    1.3.4 Quantum Error Correction
    1.3.5 3-qubit Bit-flip Code
    1.3.6 3-qubit Phase-flip Code
    1.3.7 9-qubit Shor Code
    1.3.8 CSS Codes
    1.3.9 General QEC Picture and Concatenated Codes
    1.3.10 Stabilizer Codes
  1.4 Surface Codes
    1.4.1 Geometric Picture
    1.4.2 Surface Codes, Error Correction and Threshold
    1.4.3 Defects, Holes and Scaling
    1.4.4 Fault Tolerant Quantum Computation with Surface Codes
2 Surface Code Simulation
  2.1 Overview of the Error Correcting Algorithm
    2.1.1 Error Channel and Random Generator
    2.1.2 Finding Ising Vortices
    2.1.3 Edmonds Perfect Matching Algorithm
    2.1.4 Path Generator
    2.1.5 Homologically Non-trivial Cycles
  2.2 Monte Carlo Simulation
    2.2.1 Numerical Results
  2.3 Larger Lattice Sizes
  2.4 Crossing Points
  2.5 Error Bars
3 Fast and Simple Error Correction Post Processing
  3.1 Minimum Weight Perfect Matching (MWPM) Algorithm
  3.2 A Simple Heuristic
  3.3 Sub-optimality and Adjustment of Weights
  3.4 Complexity of the Algorithm
    3.4.1 Parallel Processing
    3.4.2 Complexity Analysis
  3.5 Numerical Results: Threshold
    3.5.1 3-Dimensional Topological Memory
  3.6 A Family of Hybrid Algorithms and Improved Threshold
    3.6.1 The General Idea
    3.6.2 Numerical Results: Spectrum of Thresholds
    3.6.3 Large Lattice Size Limit
    3.6.4 The Threshold Jump
  3.7 Approximations to WPM Algorithms and Improved Complexity
4 Conclusions
Bibliography

List of Tables

Table 1.1  Different single- and two-qubit quantum gates (unitary operators) along with their matrix representation and their application to computational basis states |ψ⟩ = α|0⟩ + β|1⟩.
Table 1.2  Encoding table for the 3-bit majority code.
Table 3.1  List of selected comparison-based sorting algorithms.

List of Figures

Figure 1.1  Classes of problems based on their complexity.
Figure 1.2  Symmetric bit-flip error channel. A bit is independently changed with probability p, or it keeps its original value with probability 1−p.
Figure 1.3  A schematic representation of a two-level concatenated code. The logical qubits are encoded into a series of qubits in the first layer. Then a second code is used to encode each individual qubit of the first layer into even more physical qubits in the second layer. [28]
Figure 1.4  The residual error after the n'th encoding layer versus the initial error for a concatenated 3-qubit bit-flip code. Below a certain threshold, as we increase the number of encoding layers the residual error drops down to the desired value.
Figure 1.5  Lattice definitions: X and Z stabilizers and logical X and Z operators.
Figure 1.6  A trivial cycle of Pauli Z operators, shown in solid dark blue lines, as a multiplication of the Z stabilizer operators on a group of plaquettes on the lattice. Each plaquette applies a Pauli Z operator to each of the 4 data qubits adjacent to it. Two adjacent plaquettes share exactly one data qubit because of the square architecture of the lattice, so two adjacent plaquette operators apply the Pauli Z operator twice on the shared qubit. Since σᵢ² = I, a group of adjacent plaquettes acts trivially on the data qubits inside and only applies Pauli operators on a closed loop of data qubits forming the boundary of the plaquettes. This cycle is a trivial cycle on this lattice. The lattice boundaries with the same dashed-line color are identified, which forms a toric topology in this case.
Figure 1.7  The gray edges are the physical data qubits on the lattice. Each black solid line on an edge shows one Pauli Z operator (phase flip) on that specific data qubit. When these errors occur on adjacent qubits they make longer chains of errors.
Figure 1.8  How error chains commute with all stabilizers except at their boundaries. The black lines show Pauli Z errors, the blue crosses show X stabilizers at various sites on the lattice, and the green and red lines show where the stabilizer commutes and anticommutes with errors, respectively. The commutation relations depend on the parity of qubits shared between error chains and stabilizers.
Figure 1.9  Stabilizer syndromes only reveal information about the boundaries of the chains of errors.
Figure 1.10  An example of an error chain with our guess for a recovery chain, having information only about the boundaries of that error chain. The black line is the error chain and the blue dashed line is the recovery chain. The Pauli Z plaquette operators inside the loop are emphasized to show how a combination of stabilizers on the lattice can generate a trivial cycle.
Figure 1.11  Error chains and their boundaries as quasi-particles. These chains can be recognized as logical operators on the surface code.
Figure 1.12  Scaling factors on a lattice and time evolution of a moving defect. An elementary cell is shown for reference on the top left of the front lattice.
Figure 1.13  Implementing a Controlled-NOT gate by twisting a pair of primal and dual holes representing the control and target qubits.
Figure 1.14  The operational overhead of building fault-tolerant gates as a function of circuit size for various gates, at error probability p_error = 10⁻⁴.
Figure 2.1  The error chain E and the recovery chain E′. This image is taken from Dennis et al. [7]
Figure 2.2  L = 5 lattice. The error chain is shown with red solid lines and the recovery chain is shown with blue dashed lines.
Figure 2.3  D = E + E′ builds a homologically non-trivial cycle in this lattice. The solid black lines are the error chains E and the dotted blue lines are the recovery chains E′. The vertices are the ancilla qubits on the sites of the lattice, and the edges (lines) are the data qubits affected by either the error or the recovery chains.
Figure 2.4  The simulator illustrates the lattice graph including the E and E′ chains.
Figure 2.5  Error threshold, averaging 10⁵ error samples.
Figure 2.6  Investigating the crossing point with larger lattice sizes.
Figure 3.1  Random matchings on a complete graph with 8 vertices.
Figure 3.2  A simple, unrealistic and extreme example of the situation where sorting fails to minimize the matching cost.
Figure 3.3  The threshold calculation for the refined sorting-based matching algorithm.
Figure 3.4  Threshold calculations for the 3-dimensional topological quantum memory architecture using surface codes.
Figure 3.5  Threshold values vs. OMSS, for different lattice sizes.
Figure 3.6  Extrapolating the large-size-limit thresholds for a 2% portion of the Edmonds matching algorithm.
Figure 3.7  Asymptotic large-size-limit curve.
Figure 3.8  The success probability of surface code error correction, classified into 3 groups: close to 1 (black), which happens when we are below threshold; close to 0.5 (white), when we are above the threshold; and close to 0.75 (gray), when we are really close to the threshold itself. Data is gathered for a fixed error probability of p_error = 8.1%.

List of Algorithms

3.1  Simple random perfect matching algorithm on a complete graph.
3.2  Perfect matching algorithm on a complete graph with sorted weights.

Glossary

MWPM   Minimum Weight Perfect Matching
OMSS   Optimal Matching Subgraph relative Size
FTQEC  Fault Tolerant Quantum Error Correction

Acknowledgments

First and foremost, I would like to thank Prof. Robert Raussendorf for his priceless guidance, and for being my teacher and fully supportive supervisor. This thesis would not have been possible without his advice, suggestions and constant encouragement.

Furthermore, I would like to extend my gratitude to Prof. Konrad Walus for being my supportive co-supervisor, and to my colleagues in the Quantum Information Group, especially Dr. Leon Loveridge for proofreading the thesis and providing valuable comments. I would also like to thank all of my friends at UBC for their discussions and for creating memorable and pleasant moments during the course of this research.

Finally, a heartfelt and deep thanks go to my family, who have always encouraged me and given their unconditional support throughout my studies at the University of British Columbia. I could not have completed this thesis without their support.

Dedication

To my dearest wonderful Love, Negar, who made me able to conduct this research with her true love and patience, despite the great distance between us, and to my family, who always supported me by all means.

Chapter 1

Introduction

If I have seen farther it is by standing on the shoulders of Giants.
— Sir Isaac Newton (1642–1727)

1.1 Plan of the Thesis

Quantum computers have shown promising results in solving problems which are currently computationally hard for classical computers [19][36][25]. Although there has been groundbreaking progress in realizing such devices [2][20][26][21][27], scaling them to a size that can handle real-world problems is still a high-tech challenge.
This is because quantum computers use the quantum mechanical properties of certain physical systems, which are very fragile and can be distorted by environmental noise. The accumulation of this noise makes it hard to scale these systems up. However, the threshold theorem proves that fault-tolerant quantum computation with noisy devices is possible with arbitrary accuracy, provided that the noise level of the subsystems is below a threshold [1]. Large-scale quantum computation is therefore impossible without accounting for the effects of noise.

Fault tolerance is the idea of using error correction to protect a quantum state from quantum noise. There are various proposed methods for fault-tolerant quantum computation using different error correcting approaches. Different error correcting schemes are mostly compared by the error threshold they can achieve and the complexity of their encoding/decoding algorithms. We will discuss the error threshold in more detail in section 1.4.2. Among all these error correction schemes, surface codes, with their simple architecture and high error threshold (owing to their robustness against local errors), are so far one of the best candidates to be the chosen protocol for fault-tolerant quantum computation.

As will be elaborated in the remainder of this thesis, certain classical processing is involved in quantum error correction with surface codes. The error threshold of a decoding scheme is closely related to this classical processing step. In the case of surface codes, this classical post processing boils down to finding the minimum weight perfect matching on a complete graph. This post processing is essential to fault-tolerant quantum computation, and the quantum speed-up advantage can be lost if we need to spend a lot of time on the classical post processing. Therefore, we need to reduce the time complexity of these classical algorithms as much as we can, in order not to delay the quantum computation with slow classical processing. There is also a trade-off between the complexity and the threshold gained using these algorithms, so an ultimate goal would be to devise a low-complexity post processing algorithm which maintains a high threshold. There have been different approaches in the literature to either reduce the complexity of the classical processing or gain a higher error threshold [11, 12, 24]. We use a greedy algorithm with a weight adjustment step to reduce the complexity to that of [9]. We then show that a hybrid approach combining our greedy approximation with the Edmonds matching algorithm [12] can achieve an error threshold higher than [9] while keeping the complexity at the same level. Our proposed algorithm, its numerical results and comparisons are explained in detail in chapter 3.

In the remainder of the thesis, we first discuss the importance and significance of quantum computers along with a short summary of the quantum mechanics background. Then we introduce the idea of fault-tolerant quantum computation and its use of error correction. A whole section is dedicated to the fundamentals of error correcting codes. Then, with a definition of quantum error models in section 1.3.3, we initiate the discussion of quantum error correcting codes. A series of examples of quantum error correcting codes will help us establish a formal analogy between classical linear codes and quantum codes. Surface codes are a subclass of a more general family of quantum error correcting codes called stabilizer codes.
So in section 1.3.10 we introduce the stabilizer formalism, and we then conclude the first chapter with an introduction to surface codes in section 1.4, which are the main target of this research. This introduction includes the formal definition of surface codes, the geometric picture and how topology plays a role in the robustness of these codes against errors; we then explain how the actual error correction is done using surface codes in section 1.4.2. A formal definition of the error threshold as a measure of the quality of an error correcting code is given in the same section.

In chapter 2 we review the steps of error correction with surface codes in more detail and in an algorithmic fashion. We then introduce our numerical analysis method and how the simulator was developed. We conclude that chapter with a numerical test example to show that our numerical simulation engine is consistent with the literature. In chapter 3 we propose our algorithm along with different ways to improve it, its numerical results, and benchmarks against the current methods in the literature. Finally, we conclude the thesis in chapter 4 with a number of proposed challenges for future work.

1.2 Quantum Computers

1.2.1 Motivation

It's easy to imagine how the birth of computers changed our lives. Smaller yet faster processors were rapidly manufactured one after another, enabling us to do more sophisticated calculations in shorter time using fewer resources. This opened up a wider range of problems that we can tackle efficiently. Despite all this rapid progress, there are classes of problems which are generically very hard to solve [17]. What do we mean by the hardness of a problem? Each paradigm of calculation brings a complexity theory with it. If you have a universal paradigm of computation, then there has to be a mapping between any arbitrary operation on an input and the set of instructions in your computer. Complexity theory looks at the number of instructions you need to employ in order to run a certain operation. The more instructions you need to run a certain operation, the harder it will be for the computer to handle that operation. Looking at the classical gate-model paradigm of computation, complexity theory classifies problems into certain groups based on their hardness:

P problems: The set of problems which can be solved by a Turing machine in polynomial time.

NP problems: The class of computational problems for which solutions can be verified by a deterministic Turing machine, or computed by a non-deterministic Turing machine, in polynomial time.

NP-hard: The class of problems at least as hard as the hardest problems in NP.

NP-complete: The class of problems which are in NP and to which any NP problem can be reduced in polynomial time.

Figure 1.1 illustrates these problem classes and their relations in a Venn diagram. Some problems are too computationally intensive to be solved on a classical computer: although you might be able to check whether a potential answer is correct or not, finding the solution may not happen in polynomial time. [17] has a long list of famous NP-complete problems. The most important fact is that all this complexity discussion comes from our notion of computation, the classical Turing machine. If a problem is hard to solve in the Turing machine model, it does not mean that it is also hard to solve in another computation paradigm.
And this is exactly where an alternative notion of computation becomes important, especially for tackling these hard problems.

Quantum computers are proven to have the computational resources for an alternative paradigm of computation in which classically challenging problems can be solved faster (in some cases the speed-up is even exponential). A list of many NP-complete problems which can be tackled by current realizations of quantum computers [2] can be found in [25]. As other examples, we can point to Shor's factoring algorithm [36] and Grover's search algorithm [19], the most famous quantum algorithms in the literature.

Quantum computation, as a general notion, is the idea of using systems which are governed by the rules of quantum physics and have special properties like tunneling, entanglement and superposition, and of harvesting these properties as computational resources. There are several different ways to realize these quantum systems on a small scale. Photons, ion traps [21], cold atoms in optical lattices [26], and superconducting charge (or flux) qubits [20, 38] are all examples of potential experimental realizations of quantum systems. Although the theory of quantum computation for each of these approaches is rapidly progressing, scientists still face technological limitations in scaling these systems up to a size able to handle challenging real-world problems. In spite of these engineering challenges, some approaches to quantum computation have shown promising results in terms of scalability.

1.2.2 A Glimpse of Quantum Mechanics

In physics, a system is described by its Hamiltonian. The Hamiltonian is an operator that relates the total energy of the system to the energies and forces applied to the particles in the system, either as a result of an external source (e.g., a potential due to a magnetic field) or due to the particles themselves. A quantum system is a physical system whose Hamiltonian is an operator with quantized eigenvalues (energies), acting on a very special vector space called a Hilbert space. By diagonalizing the Hamiltonian operator we can get all information about the energy levels of a physical system. Equation 1.1 shows the Hamiltonian operator for a one-particle quantum system, where V(r,t) is the potential applied to the particle and ∇² is the operator ∂²/∂x² + ∂²/∂y² + ∂²/∂z².

    H = -\frac{\hbar^2}{2m}\nabla^2 + V(r,t)    (1.1)

The quantized eigenvalues are the possible energies that a quantum system can acquire, and the eigenvectors are the states of the system corresponding to a specific energy (eigenvalue). Since states are vectors in the Hilbert space, we can use Dirac bra-ket notation to show a state with a ket |ψ⟩ or its complex conjugate with a bra ⟨ψ|.

    H|ψ⟩ = E_ψ|ψ⟩    (1.2)

Equation 1.2 shows that the Hamiltonian, as an operator, has eigenstate |ψ⟩ with corresponding eigenvalue E_ψ, which is the energy of that state and can be measured. The lowest possible energy eigenvalue and its eigenstate are of significant importance; this is called the ground state of the system. All other energy states with larger energies are called excited states. (This naming convention comes from the fact that a system naturally tends toward the configuration with the lowest possible energy. In order to move a system out of its lowest energy state we have to do work and literally excite the system to put it in a configuration with higher energy. A very simple example is the potential energy stored in a ball hanging at a certain height above the ground: when we drop the ball it moves toward the ground to lose potential energy, and if we want the ball to move in the opposite direction we have to excite it by applying a force.) Although solving a Hamiltonian and finding the ground state of a quantum system can be a hard problem in itself and needs a more detailed discussion, that is outside the scope of this introduction, so for now we assume we are working with systems whose energy eigenstates are known.
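To make the diagonalization step concrete, here is a minimal numerical sketch (not from the thesis; it assumes numpy is available, and the Hamiltonian entries are made-up values in arbitrary energy units) that finds the energies and eigenstates of a toy two-level Hamiltonian:

    import numpy as np

    # A toy two-level Hamiltonian: diagonal entries are bare level energies,
    # the off-diagonal entry couples the two levels (all values hypothetical).
    H = np.array([[0.0, 0.5],
                  [0.5, 1.0]])

    # eigh diagonalizes a Hermitian matrix; eigenvalues come back sorted ascending.
    energies, states = np.linalg.eigh(H)

    print("ground state energy :", energies[0])
    print("ground state vector :", states[:, 0])   # column 0 is the ground state
    print("excited state energy:", energies[1])

The same call scales to any finite-dimensional Hamiltonian, which is why "diagonalize the Hamiltonian" is the standard numerical route to the energy spectrum.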
As an example, we can go through the details of a quantum system consisting of one particle with two energy levels. These two-level systems are called qubits. They are either systems with only two energy levels, or systems with many energy levels of which only the two smallest energies are taken into account. Let's call the lowest energy state (often called the ground state) state 0, or |0⟩, and the second energy state (often called the first excited state) state 1, or |1⟩. Right here an intuitive analogy between bits (binary numbers, either 0 or 1) and qubits (quantum systems in either the 0 or 1 state) can be established. The first, and hence most fundamental, difference is that a qubit (a two-level quantum system) can also exist in a superposition of its eigenstates. If |ψ⟩ denotes the state of a qubit, superposition means the following is a valid state of this system:

    |ψ⟩ = α|0⟩ + β|1⟩    (1.3)

where α and β are, in general, complex numbers. Although this quantum system exists in a superposition of states, when it is measured in the σ_z basis it collapses to one of the eigenstates of that basis (|0⟩ or |1⟩), with a probability proportional to the squared magnitude of the corresponding constant in the superposition relation 1.3. Unlike this situation, an energy eigenvalue can be degenerate. Degeneracy happens when an eigenvalue corresponds to more than one eigenstate of the observable (measurement operator). In general, an eigenvalue corresponds to a subspace of the Hilbert space, called the eigenspace of that eigenvalue. The number of independent eigenstates in the eigenspace defines the degree of the degeneracy. When each eigenvalue of the system corresponds to only one eigenstate, the system is non-degenerate under that observable and the eigenstates of the observable form a basis for the Hilbert space, i.e., each state of the Hilbert space can be represented as a superposition of the eigenstates of that specific observable. Later in this section we will talk more about measurements and the probabilistic nature of observing a certain energy eigenvalue and eigenstate in a quantum system.

All quantum properties of two-level quantum systems (qubits) can be understood using the algebra of the Pauli operators σ_x, σ_y, σ_z and the identity operator I [16]. The Hilbert space of a qubit is two-dimensional, and all operators defined on this space have a 2-by-2 matrix representation. The Pauli operators can be represented as follows:

    \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
    \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
    \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad
    I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.    (1.4)

In fact, equation 1.3 we saw earlier is the general expression for a two-level quantum system diagonalized in the σ_z basis. σ_z is often called the computational basis, and its eigenvectors are written as |0⟩ and |1⟩. Each of the Pauli operators has two eigenvalues, +1 and −1, with the following eigenvectors corresponding to the + and − eigenvalues:

    \psi_{x+} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad
    \psi_{x-} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix},    (1.5)

    \psi_{y+} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix}, \quad
    \psi_{y-} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix},    (1.6)

    \psi_{z+} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad
    \psi_{z-} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.    (1.7)
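As a quick sanity check on equations 1.4 through 1.7 (an illustrative sketch assuming numpy, not code from the thesis), the Pauli eigensystem can be verified numerically:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    for name, op in [("sigma_x", sx), ("sigma_y", sy), ("sigma_z", sz)]:
        vals, vecs = np.linalg.eigh(op)       # Hermitian eigensolver
        print(name, "eigenvalues:", vals)     # always [-1, +1]
        # Check the eigenvalue equation op|v> = lambda|v> for each eigenvector.
        for lam, v in zip(vals, vecs.T):
            assert np.allclose(op @ v, lam * v)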
The commutation relations between these operators are really important when studying their application to a quantum state:

    [σ_x, σ_y] = 2iσ_z,    (1.8)
    [σ_y, σ_z] = 2iσ_x,    (1.9)
    [σ_y, σ_x] = −2iσ_z    (1.10)

Each operator commutes with itself, so [σ_x, σ_x] = 0, and two different Pauli operators anticommute, {σ_x, σ_y} = 0. In general:

    {σ_i, σ_j} = 2δ_ij,  i, j ∈ {x, y, z}    (1.11)

If we have a qubit prepared in the state |ψ⟩ of equation 1.3, then

    σ_z|ψ⟩ = σ_z(α|0⟩ + β|1⟩)    (1.12)
          = α(σ_z|0⟩) + β(σ_z|1⟩)    (1.13)
          = α(+1|0⟩) + β(−1|1⟩)    (1.14)
          = α|0⟩ − β|1⟩    (1.15)

The third line holds since |0⟩ and |1⟩ are eigenstates of the σ_z Pauli operator with eigenvalues +1 and −1, respectively. In fact, applying the σ_z operator is a very simple, yet important, example of a quantum error occurring on a two-level quantum system, changing our qubit state.

So far we have been talking about a closed quantum system. However, we know that in order to use a quantum system, manipulate it, or get information about it, we need to interact with that system. One way that a closed quantum system can interact with its environment is through measurement.

Quantum measurements are also operators applied to a quantum state. It is a postulate of quantum mechanics that an observable M (measurement operator) is an operator that maps the Hilbert space of a quantum system to a subspace of that Hilbert space. As we discussed earlier, if the system is not degenerate under an observable, the eigenvectors of the observable form a basis for the Hilbert space of the quantum system under measurement. The result of a measurement (i.e., what we can expect to see in a lab) can be any of the eigenvalues of the observable. Since they are measurement values, the eigenvalues have to be real, and thus observables must be Hermitian (self-adjoint) operators. A quantum state living in a Hilbert space can be represented as a superposition of the eigenvectors of an observable defined on the same Hilbert space. The Born rule, named after Max Born, states that if an observable M is measured on a quantum system in a normalized state |ψ⟩, then:

• The measurement result is one of the eigenvalues m of the Hermitian operator M, with a certain probability.

• The probability of observing a specific eigenvalue m is Pr(m) = ⟨ψ|P_m†P_m|ψ⟩, where P_m = |V_m⟩⟨V_m| is the projection operator onto the eigenvector V_m corresponding to the eigenvalue m if the eigenspace of m is 1-dimensional (i.e., a non-degenerate eigenvalue), and P_m† is its adjoint. If the eigenvalue m is degenerate with degree g_m, then its eigenspace forms a g_m-dimensional subspace of the Hilbert space. All possible degenerate eigenstates of this eigenvalue are vectors in this space and can be written as linear combinations of its orthonormal basis |E_mi⟩. In this case we can define a similar projection operator P_m = \sum_{i=1}^{g_m} |E_mi⟩⟨E_mi|.

• After the measurement, the state of the system is projected to P_m|ψ⟩ / √(⟨ψ|P_m†P_m|ψ⟩).

• The projection operators satisfy the completeness equation, ∑_m P_m†P_m = 1. The completeness equation reflects the fact that the probabilities of observing the eigenvalues add up to 1:

    1 = ∑_m Pr(m) = ∑_m ⟨ψ|P_m†P_m|ψ⟩    (1.16)

To summarize these points: when we measure an observable of a quantum system, the state of the system is projected to a subspace of the system's Hilbert space. If the eigenvalue is non-degenerate, the system is projected onto an eigenvector of that observable, and the real value we measure and see in the lab is the eigenvalue corresponding to that eigenvector. Observing each eigenvalue has its own probability, and all these probabilities add up to 1 [28].
As an example, suppose a quantum system is initially in the state |ψ⟩ = α|0⟩ + β|1⟩. If we measure this system in the σ_z basis, the system will be projected to either state |0⟩ or state |1⟩, which are the eigenvectors of the observable σ_z. Note that σ_z is indeed a Hermitian operator, and the state of the system |ψ⟩ is already written as a superposition of the eigenvectors of the σ_z operator. Now, according to the Born rule:

• The measurement outcome can be either +1 or −1, which are the eigenvalues of the operator σ_z, with

    V_{+1} = |0⟩ and V_{−1} = |1⟩    (1.17)

and projection operators

    P_{+1} = |0⟩⟨0| and P_{−1} = |1⟩⟨1|    (1.18)

• The probability of each measurement outcome, i.e., of finding the system in the corresponding eigenvector, is:

    Pr(m = +1) = ⟨ψ|P_{+1}†P_{+1}|ψ⟩    (1.19)
              = ⟨ψ|P_{+1}|ψ⟩    (1.20)
              = ⟨ψ|0⟩⟨0|ψ⟩    (1.21)
              = (α*⟨0| + β*⟨1|)|0⟩⟨0|(α|0⟩ + β|1⟩)    (1.22)
              = (\alpha^*\underbrace{\langle 0|0\rangle}_{=1} + \beta^*\underbrace{\langle 1|0\rangle}_{=0})(\alpha\underbrace{\langle 0|0\rangle}_{=1} + \beta\underbrace{\langle 0|1\rangle}_{=0})    (1.23)
              = α*α = |α|²    (1.24)

or, following the same steps:

    Pr(m = −1) = ⟨ψ|1⟩⟨1|ψ⟩ = |β|²    (1.25)

• The system is projected into one of the following states:

    \frac{P_{+1}|\psi\rangle}{\sqrt{\langle\psi|P_{+1}^\dagger P_{+1}|\psi\rangle}} \quad\text{or}\quad \frac{P_{-1}|\psi\rangle}{\sqrt{\langle\psi|P_{-1}^\dagger P_{-1}|\psi\rangle}}    (1.26)

In 1.24 and 1.25 we calculated the quantity under the square root in the denominator. So the state of the system after this measurement is one of the following:

    \frac{P_{+1}|\psi\rangle}{\sqrt{\Pr(m=+1)}} = \frac{P_{+1}|\psi\rangle}{\sqrt{|\alpha|^2}}    (1.27)
     = \frac{|0\rangle\langle 0|\psi\rangle}{|\alpha|}    (1.28)
     = \frac{|0\rangle\langle 0|(\alpha|0\rangle+\beta|1\rangle)}{|\alpha|}    (1.29)
     = \frac{|0\rangle(\alpha\langle 0|0\rangle+\beta\langle 0|1\rangle)}{|\alpha|}    (1.30)
     = \frac{\alpha}{|\alpha|}|0\rangle    (1.31)

or:

    \frac{P_{-1}|\psi\rangle}{\sqrt{\Pr(m=-1)}} = \frac{P_{-1}|\psi\rangle}{\sqrt{|\beta|^2}}    (1.32)
     = \frac{\beta}{|\beta|}|1\rangle    (1.33)

It can be shown that phase factors like β/|β| can be ignored, so the states in 1.31 and 1.33 are in fact the |0⟩ and |1⟩ states.

These calculations show that measuring a quantum system in state |ψ⟩ in the σ_z basis has two possible outcomes: +1 with probability |α|², and −1 with probability |β|². The state of the system after the measurement will be projected to |0⟩ if +1 was measured and to |1⟩ if −1 was measured.
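The same bookkeeping can be checked numerically. The following sketch (an illustration assuming numpy, with arbitrary amplitudes; it is not code from the thesis) computes the outcome probabilities and post-measurement states for a σ_z measurement:

    import numpy as np

    alpha, beta = 0.6, 0.8j      # arbitrary amplitudes with |alpha|^2 + |beta|^2 = 1
    psi = np.array([alpha, beta])

    P_plus = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
    P_minus = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|

    for label, P in [("+1", P_plus), ("-1", P_minus)]:
        prob = np.vdot(psi, P @ psi).real    # <psi|P†P|psi>; P†P = P for a projector
        post = P @ psi / np.sqrt(prob)       # projected, renormalized state
        print(f"outcome {label}: probability {prob:.2f}, post-measurement state {post}")

Running it prints probabilities 0.36 and 0.64, i.e., |α|² and |β|² as derived above.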
For example,{AND,NOT} boolean operations form a universal set for classical computation,meaning any other operation like OR, XOR, NAND, etc. can be built combin-ing only AND and NOT gates. Thus the notion of universality is essential for acomputer in order to run any possible operation on the provided data. The samelogic applies to the gate model approach to quantum computation. A universalset of quantum gates is needed in order to have a functionally complete quantumcomputer.As we mentioned earlier and similar to classical computers, the qubit is the3As an example of a completely different approach to quantum computation we can name theAdiabatic Quantum Computation or AQC in short [14]. This approach works on the basis that aquantum system will stay in its lowest energy state if its Hamiltonian operator evolves very slowly.It’s proved in [13] that any operation can be decomposed to the problem of finding the ground stateof a specific Hamiltonian starting from another Hamiltonian and evolving it slow enough in to be inan adiabatic fashion. It’s easy to see that this approach is different from the gate model approach.Instead of encoding an operation in a combination of a limited number of gate types, AQC encodesa problem into the desired Hamiltonian of a quantum system. The system starts from a known andprepared Hamiltonian and evolves slowly in time. The adiabatic theorem proves that if the systemwas originally in its ground state then if the evolution was slow enough with a high probability it willremain in it’s ground state all the way through the process and what we can measure at the end of theprocess is nothing but the ground state of the desired Hamiltonian.4or bits in the notation of classical computers.13Table 1.1: Different single and two qubit quantum gates (unitary operators)along with their matrix representation and their application on computa-tional basis states |ψ〉= α|0〉+β |1〉.Gate Matrix rep. Example of ApplicationPauli X gate X =(0 11 0)X |ψ〉= α|1〉+β |0〉Pauli Z gate Z =(1 00 −1)X |ψ〉= α|0〉−β |1〉Phase shift gate Rθ =(1 00 eiθ)Rθ |ψ〉= α|0〉+βeiθ |1〉Controlled-NOT gate CNOT =1 0 0 00 1 0 00 0 0 10 0 1 0 If the first qubit is in |1〉it applies a pauli X tothe second qubit, other-wise leaves it as it is. Forexample: |01〉→ |01〉, but|11〉 → |10〉.unit of quantum data, which stores the information about the state of the two-level quantum system (the qubit). Similar to the classical gate model approach,we manipulate these qubits by successive application of different gates from ouruniversal set in order to conduct a certain complex operation. These operationsare called quantum algorithms. A quantum operation in general can be any unitaryoperator acting on the Hilbert state of the quantum computer. Unlike classicalboolean operations, being unitary makes quantum operations reversible. Table 1.1includes a few examples of quantum gates along with their matrix representations.As an example of a quantum algorithm running on a universal gate-modelquantum computer we can point to Shor’s famous factoring algorithm [36]. Havingan ideal quantum computer with its universal set of gates working on sufficientlymany qubits without any noise, Shor’s factoring algorithm makes it possible to fac-tor a large integer N in polynomial time O(log3(N)). One should note that this issuper-polynomially faster than the best classical factoring algorithm general num-ber field sieve [30] 5. 
This quantum algorithm is efficient enough to break RSA5The complexity of this algorithm is of order O{exp(c(log13 (n))(loglog(n))23 )}.14cryptography systems. The security of RSA systems is defined based on infeasi-bility of finding prime factors of large integer numbers using classical computers.The record for the largest number factored using Shor’s algorithm on a quantumdevice was set in 2012 for factoring 21 [27] 6.1.2.4 Fault Tolerant Quantum Computation and Threshold TheoremAlthough experimental realization of ideal quantum computers are possible in pro-totype sizes, large scale quantum computation cannot happen free of noise. Thisis exactly why fault tolerance is an important factor when it comes to scalablequantum computers. The chance of destructive correlation with environment andquantum decoherence7 increases with the size of the quantum system. So it’s al-most inevitable to implement a large scale quantum computer with ideal gates,measurements and interactions in general. Now the question is whether quantumcomputers can still work with faulty operations?Due to the threshold theorem, large scale quantum computation is possible evenwith noisy gates and systems provided that the noise is below a certain threshold[1].Threshold theorem for quantum computation: A quantum circuit con-taining p(n) gates may be simulated with probability of error at most ε usingO(poly(logp(n)ε )p(n)) (1.34)gates on hardware whose components fail with probability at most p, providedp is below some constant threshold, p < pth, and given reasonable assumptionsabout the noise in the underlying hardware.Fault tolerant quantum computation is only possible if there is a quantum errorcorrection step in the computation process which repeatedly brings the faulty sys-tem to the valid code space and does not allow the error to propagate throughout6The largest number factored so far with a quantum computer is 143. But this was done usingthe adiabatic quantum computation paradigm, not the Shor’s algorithm on a gate-model quantumcomputer.7for a detailed formulation of decoherence refer to section 1.3.3.15the computation process. If the error is below threshold and is controlled in thismanner then the faulty quantum system can simulate an ideal quantum system evenwhen the number of qubits scale up. This is the significance of quantum error cor-rection and the reason we are interested to study error correcting codes. Differentfault tolerant quantum computation schemes together with various error correctingcodes can demonstrate different fault tolerance thresholds. High threshold faulttolerant quantum computation is possible using the proper error correcting codes[33].1.3 Error CorrectionQuantum Error correction plays an important role in quantum information process-ing and quantum computation. Exactly like its classical analogue, quantum errorcorrection is the concept of restoring the state of a quantum system after it’s sub-jected to noise. A quantum system can easily be disturbed by its environment if notprotected. This will cause quantum decoherence which can be modeled as an errorchannel affecting quantum bits (qubits) in the system. Quantum error correctioncan be understood as a generalization of error correction for classical communica-tion through noisy channels.1.3.1 Classical Error CorrectionThe best way to approach quantum error correction is to start with the basics ofclassical error correction. 
In information theory, a word or code-word is a repre-sentation of a particular message which is supposed be processed, communicated,stored or etc. The unit of information is often taken to be a bit and can be rep-resented by a vector in Galois field F12 which is basically the field consisting ofvectors [0] and [1]. A unit of information is defined to be the ability of distin-guishing two opposing binary situations, 0 or 1, Yes or No, True or False. A wordor message then can be represented as a series of zeros and ones carrying bits ofinformation of that particular message. So from now on, a word is going to bea vector of length n in Fn2, as an example m1 = 1101 is a message with 4 bits ofinformation.Since in a real world realization of computers and communications systems,16we always have noisy channels and faulty gates, any processing or communicationor etc happening to message m1 can change its content with a certain probabilisticmodel.m1 = 1101Erroneous Procedure−−−−−−−−−−−→ E (m1) = 1001 (1.35)As you can see in 1.35, the second bit in message m1 is flipped. There arevarious error models and this was only a simple example, for now we can supposethat the erroneous procedure is a noisy communication channel over which wewant to transmit message m1. A noisy communication can be modeled as an errorchannel. An error channel affects each individual bit of information independently.Bits can change, remain the same, or be erased with different probabilities. Figure1.2 depicts a simple symmetric bit-flip channel.Figure 1.2: Symmetric bit-flip error channel. A bit is independently changedwith probability p, or it keeps its original value with probability 1-p.Now we use a simple error correction to protect our message against this or anyother one bit flip error like that through the noisy channel. The very summarizedversion of the classical error correction idea is to add redundancy to the represen-tation of a given word. As the simplest example, we use the 3-bit majority code.The encoding En is relatively simple. Make 3 copies of each logical informationbit (adding redundancy), so instead of representing each logical bit of informationwith 1 physical bit, we represent each logical bit of information with 3 physicalbits:17Table 1.2: Encoding table for the 3-bit majority code.Logical information Physical (redundant) representation0 0001 111En(m1 = 1101) = 111111000111 = M1 (1.36)M1 = 111111000111E (.)−−→ E (M1) = 110111010011 = E (M1) (1.37)De(E (M1))Error Correction−−−−−−−−→ 110︸︷︷︸1111︸︷︷︸1010︸︷︷︸0011︸︷︷︸1= 1101 = m1 (1.38)The decoding is as simple as a majority vote. Even if an error happens toone of the physical bits, the remaining two are still representing what the initiallogical information was. So when we receive each 3 physical bits, count 0’s and1’s and who ever has the majority has a higher probability to be the initial bit ofinformation. We should note error correction relies on the probabilistic nature oferror and hence the result of an error correction has a probability to be correct. Ourgoal is to maximize this success probability. The following example can be reallyhelpful to understand the probabilistic behavior of error correction:m0 = 1→M0 = En(m0) = 000 (1.39)m1 = 1→M1 = En(m1) = 111 (1.40)E1→ Two bit flip on first and second bits (1.41)E2→ One bit flip error on third bit. (1.42)E1(M0 = 000) = 110E2(M1 = 111) = 110}Same Syndrome. (1.43)These two messages E1(M0) and E2(M1) are decoded to 1 because of the ma-jority vote decoding, which is only correct for the second case. 
If we assume thateach bit-flip error is an independent random event with probability pe, then the18probability of error E1 is p2e whereas it’s only pe for E2. So the error correctiontries to guess the most probable error and correct for it, this minimizes the errorcorrection failure probability. Thus this code is only able to correct up to one bitflip error and it fails in the event of more than one bit error. Consequently, themajority vote error correction will succeed with probability 1−3p2e +2p3e :Pr{Success}= 1−Pr{Failure} (1.44)= 1− (Pr{Only 2 erros}+Pr{Only 3 errors}) (1.45)= 1− ((32)p2e(1− pe)+(33)p3e) (1.46)= 1−3p2e +2p3e (1.47)More sophisticated codes with more complex encoding and decoding schemeshave been developed for various error models, however, all classical error correct-ing codes share the fundamental ideas of the 3-bit majority-vote code.1.3.2 Linear CodesLinear codes are the most famous and commonly used class of classical error cor-recting codes because the mathematical formalism makes it easier to both constructand analyze these codes. Formal definition of a linear error correcting code is asfollows:Definition 1. A (n,k) linear code is a k-dimensional linear subspace C of n-dimensional vector space Fnq, where Fq is the finite filed with q elements. Forspecial case q = 2 this structure will be a binary linear code.A vector in C is called a code-word. It’s easy to show that there are in generalqk code words in a linear code. Looking back at our notation in the introductory 3-bit repetition code example, a (n,k) binary linear code is able to encode 2k differentmessages by adding n− k redundant bits to the representation of each message(code-word).Since code C is a linear subspace of Fn2 of rank k, it has a basis of rank k, whichmeans the whole code space C can be represented as span of k code-words. The19code generator matrix Gk×n is the row concatenation of these basis code-words.Encoding a k bit message is now as easy as matrix multiplication on the specifiedfinite field. 8M1 = En(m1) = m1.G (1.48)H.MT1 = H.(m1.G)T = 0 (1.49)For each linear code there exists an H(n−k)×n matrix whose kernel is C andit’s used for error correction and decoding. This matrix is called the parity checkmatrix. Since the code basis is the kernel of H, any code-word in C will be zero.On the other hand, if H is multiplied by an erroneous message (a vector outside thecode space) the result will be non-zero which will be used as a syndrome to guesswhich family of errors might have happened and to correct them. The followingare generator and parity check matrices for our previous 3-bit repetition code alsoknown as the (3,1) Hamming code:G =[1 1 1],H =[1 1 00 1 1](1.50)This code has two code-words c0 = [0] and c1 = [1]. It’s easy to verify theproperties we mentioned on this code:c0.G = [0][1 1 1]= [000] =C0 (1.51)c1.G = [1][1 1 1]= [111] =C1 (1.52)H.CT1 =[1 1 00 1 1].111= 0 (1.53)8The reader should note that since the codes are in general vector spaces defined on finite fieldFnq, all encoding, decoding, syndrome detection and etc. algebraic operations are carried out on thesame finite field. So in the very special case of binary codes defined on Fn2 all arithmetic operationsare modulo 2.20we can also apply the parity check matrix to erroneous messages in order toobserve the syndromes:M1 =[0 1 0](1.54)H.MT1 =[1 1 00 1 1].010= 1 6= 0 (1.55)We now detected a possible one bit-flip error on message M1 and try to correctit by mapping it to the closest valid code-word in the code-space. 
This kind ofdecoding is called the nearest-neighbor decoding which in this case is as simple asthe majority vote among the 3 bits.1.3.3 Quantum Error Model and DecoherenceBefore we move on to quantum error correcting codes, first we have to devisea model for quantum errors. Any interaction between a quantum system and itsenvironment can cause the state of the quantum system to change. The interactioncan change a quantum system to a classical state (Like a projective measurementin computational basis 9), or it can just end up flipping a quabit or a relative phasein the superposition. In order to model the quantum errors which are coming fromthe interaction between the system and its environment, we look at these errors asoperators acting on the state of the system. So far we have been looking at quantummechanical systems from a vector space point of view. However, there are otherways to represent a quantum mechanical system. The density operator or oftencalled the density matrix is another way of representing a quantum system whichis mathematically equivalent to the state vector in Hilbert space approach, and it’smostly used when studying systems interacting with their environment.The density operator provides a convenient way for representing quantum sys-tems whose states are not fully known. As an example, let’s assume that the onlyinformation we have about a quantum system is that it is in one of the states |ψi〉9which ruins the superposition property of the quantum state, we discussed this earlier in 1.2.221with a corresponding probability pi. This quantum system is a statistical mixtureof subsystems in different states |ψi〉. Each subsystem is in its own state but whenwe study the union of these subsystems as a whole, the only thing we can say aboutit, is the statistical distribution of the states among these different subsystems. Sowhen this system is observed the statistical nature of the mixture makes the out-come probabilistic. This is why we assign a probability pi to each possible state|ψi〉 in a mixture. The {pi, |ψi〉} is called an ensemble of pure states. Pure state, incontrast to a mixed state, is a quantum state that can not be represented as a con-vex combination (mixture) of other states. The density operator for the mentionedsystem can be written as:ρ ≡∑ipi|ψi〉〈ψi| (1.56)Now let’s study the evolution of this ensemble due to a unitary operator U .10If the system is in state ψ j with probability p j then after the unitary evolution it isgoing to be in state Uψ j with probability p j. According to this and equation 1.56we have:ρ U−→∑ipiU |ψi〉〈ψi|U† =UρU† (1.57)The notion of measurement can also be translated to the density operator lan-guage. If we perform a measurement with projection operators Pm and the ensem-ble is in state |ψi〉 then the probability of observing m is:Pr(m|system in i’th state) = 〈ψi|P†mPm|ψi〉= tr(P†mPm|ψi〉〈ψi|) (1.58)Now that we calculated the measurement probability for one possible state |ψi〉,we can generalize it to the whole ensemble:10Operator U is unitary if and only if U† =U−1.22Pr(m) =∑ipiPr(m|system in i’th state) (1.59)=∑ipitr(P†mPm|ψi〉〈ψi|) (1.60)= tr(P†mPmρ) (1.61)Using what we had in section 1.2.2 on Born’s rule and the same logic we folllowedhere we can show that the density operator for the ensemble after this measurementis:ρm =∑ipiPm|ψi〉〈ψi|P†mtr(P†mPmρ)(1.62)=PmρP†mtr(P†mPmρ)(1.63)All postulates of quantum mechanics can be reformulated in density operatorlanguage [28]. 
The density operator also provides an easy way of looking at subsystems of a composite quantum system. This is exactly what we are looking for, because quantum errors are the result of the quantum system becoming correlated with its environment and forming a composite system with it. If we have two physical systems A and B interacting with each other, then the density operator ρ_AB of the whole composite system can be reduced to the density operator of either system by a partial trace over the other system; for example:

    ρ_B = tr_A(ρ_AB)    (1.64)

Now suppose we have a system of interest (for example, our desired qubits) which we call the quantum computer, or QC for short, and which we represent with the density operator ρ_QC. This system interacts with its environment, with density operator ρ_env, resulting in a total density operator ρ = ρ_QC ⊗ ρ_env. (It is an important assumption that the composite system of the quantum computer and its environment starts in a product state, as stated above. The correlation between system and environment can be much more general and complex, but it is reasonable to assume a product state since it shows up in many practical cases. The same logic and line of thought can be applied to more general correlations, but that is out of the scope of this thesis; we encourage the reader to look at chapter 8 of [28] for a detailed discussion of these situations.) From the reduced density matrix method we saw earlier, the final result E(ρ) of a unitary evolution of the system interacting with the environment can be written as follows:

    \mathcal{E}(\rho) = \mathrm{tr}_{env}[U(\rho_{QC}\otimes\rho_{env})U^\dagger]    (1.65)

Using an orthonormal basis |e_i⟩ for the state space of the environment, and supposing that the environment initially has the density operator ρ_env = |e₀⟩⟨e₀|, we can turn the above equation into a new representation called the operator-sum representation:

    \mathcal{E}(\rho) = \sum_i \langle e_i|U[\rho\otimes|e_0\rangle\langle e_0|]U^\dagger|e_i\rangle    (1.66)
     = \sum_i E_i \rho E_i^\dagger    (1.67)

where E_i = ⟨e_i|U|e₀⟩ is called a Kraus operator acting on the quantum computer's state space. When the sum for E(ρ) contains more than one element, it shows that the quantum computer system is subject to quantum decoherence because of its correlation with its environment. Coherence in a quantum state is the ability to produce interference, which is a direct result of the superposition property. Environmental noise can cause a quantum system to lose coherence and become a classical state, losing all the quantum properties that are useful for quantum computation. Decoherence can be modeled as a loss of information from the quantum computer to its environment. Now that we have developed the mathematical machinery, we can start a counter-attack to preserve the coherence of a quantum state through the quantum computation process. This is exactly what we call quantum error correction.

A comparison between classical error models and the quantum error model is now helpful. Like classical communication channels, the effect of environmental noise on a quantum system can be modeled as an error channel, with the initial state (which carries information) as the input and the noisy transmitted version of that state as the output. In classical communication, while the information is transmitted through the channel, each possible error operation can act on it with a certain probability. Errors act on individual bits, and the probability distributions of errors on different bits are independent yet identical. So each bit goes through the same error channel but ends up with a probabilistic outcome. Using this property, we can model a communication channel as a one-bit error channel. Depending on the channel under study, the bit might remain correct with one probability and change due to different errors (like a bit flip) with another; these probabilities should add up to 1, since eventually exactly one of these events occurs for each bit of information. Figure 1.2 shows an example of a classical error channel called the binary symmetric channel. It is called symmetric because the probability of a 0 changing to 1 is the same as that of a 1 changing to 0. We can use our operator-sum representation to build a quantum bit-flip error channel. According to equation 1.67, if

    E_0 = \sqrt{p}\, I = \sqrt{p}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad
    E_1 = \sqrt{1-p}\, X = \sqrt{1-p}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}    (1.68)

then

    \rho \to \mathcal{E}(\rho) = p\rho + (1-p)X\rho X    (1.69)

is the operator-sum representation of a quantum bit-flip channel: a qubit is preserved in its state with probability p, or its |0⟩ and |1⟩ states are flipped with probability 1−p.
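A quick numerical illustration of the channel in equations 1.68 and 1.69 (a sketch assuming numpy, not thesis code; the input density matrix is arbitrary): the Kraus operators satisfy the completeness relation, so the channel preserves the trace of any density matrix.

    import numpy as np

    p = 0.9
    I2 = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])

    E0 = np.sqrt(p) * I2            # "no error" Kraus operator
    E1 = np.sqrt(1 - p) * X         # "bit flip" Kraus operator

    # Completeness: sum_i E_i† E_i = I, so the channel is trace preserving.
    assert np.allclose(E0.T @ E0 + E1.T @ E1, I2)

    rho = np.array([[0.7, 0.2],
                    [0.2, 0.3]])    # an arbitrary valid single-qubit density matrix
    rho_out = E0 @ rho @ E0.T + E1 @ rho @ E1.T   # eq. (1.69)
    print(rho_out)
    print(np.trace(rho_out))        # 1.0 -- trace preserved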
Unlike classical error models, which only include bit flips, quantum noise operations, as seen in equation 1.67, can be more general than a bit flip. However, it has been shown that an arbitrary independent error on a single qubit can be written as a linear combination of the Pauli operators σ_x (bit flip), σ_z (phase flip), σ_y (both, since σ_y = iσ_xσ_z) and the identity operator I [28]. This reduction makes error analysis and the study of quantum error correction codes easier [8, 28]. Using example 1.15, if |ψ⟩ = α|0⟩ + β|1⟩ is a general quantum state, the following are examples of different quantum errors acting on it:

    σ_z|ψ⟩ = α|0⟩ − β|1⟩  (phase flip)    (1.70)
    σ_x|ψ⟩ = α|1⟩ + β|0⟩  (bit flip)    (1.71)
    σ_y|ψ⟩ = iσ_xσ_z|ψ⟩ = i(α|1⟩ − β|0⟩)  (both errors)    (1.72)

As a general-purpose example, we use the density operator formalism to model the depolarizing quantum error channel. Suppose each of the bit flip, phase flip and σ_y errors has occurrence probability p/3, and with probability 1−p the state remains error-free (identity operator). If a quantum system is initially in a mixed state with density matrix ρ, then the following shows its evolution under this example of a quantum error channel:

    \rho \to \mathcal{E}(\rho) = \underbrace{(1-p)\rho}_{\text{no error}} + \underbrace{\tfrac{p}{3}\sigma_x\rho\sigma_x}_{\text{bit flip}} + \underbrace{\tfrac{p}{3}\sigma_z\rho\sigma_z}_{\text{phase flip}} + \underbrace{\tfrac{p}{3}\sigma_y\rho\sigma_y}_{\text{both errors}}    (1.73)

To conclude this part of the introduction to the quantum error model, we want to discuss the relation between quantum noise, the time evolution of a quantum system, and the Schrodinger equation. In order to do so, we use a pedagogical example of a quantum system whose time evolution gives us the error model of a depolarizing channel.

Suppose we have a 1-qubit quantum computer system ρ_QC which is interacting with its environment ρ_env, forming the total density operator ρ. Using the following identity:

    \frac{I}{2} = \frac{\rho + X\rho X + Y\rho Y + Z\rho Z}{4}    (1.74)

we can think of the depolarizing channel acting on the quantum computer's single-qubit density matrix ρ_QC in such a way that ρ_QC is either left untouched with probability 1−p, or replaced with the completely mixed state I/2 with probability p:

    \mathcal{E}(\rho_{QC}) = p\frac{I}{2} + (1-p)\rho_{QC}    (1.75)
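The identity in equation 1.74 is easy to confirm numerically. A minimal sketch (assuming numpy; the test state is arbitrary and not from the thesis):

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    rho = np.array([[0.6, 0.1 + 0.2j],
                    [0.1 - 0.2j, 0.4]])   # an arbitrary single-qubit density matrix

    # Averaging rho over I, X, Y, Z conjugations yields the maximally mixed state I/2
    # (this holds for any trace-one rho: the off-diagonal terms cancel pairwise).
    avg = (rho + X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z) / 4
    assert np.allclose(avg, np.eye(2) / 2)
    print(avg)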
To conclude this part of the introduction on the quantum error model, we want to discuss the relation between quantum noise and the time evolution of a quantum system under the Schrödinger equation. To do so, we use a pedagogical example of a quantum system whose time evolution produces the error model of a depolarizing channel.

Suppose we have a one-qubit quantum computer system ρQC which is interacting with its environment ρenv, forming the total density operator ρ. Using the identity

    I/2 = (ρ + XρX + YρY + ZρZ)/4                                    (1.74)

we can think of the depolarizing channel acting on the single-qubit density matrix ρQC as either leaving it untouched with probability 1−p or replacing it with the completely mixed state I/2 with probability p:

    E(ρQC) = p I/2 + (1−p) ρQC                                       (1.75)

In order to realize this as the time evolution of the quantum computer and its environment, we take the environment to be a two-qubit system and consider the following Hamiltonian on the total three-qubit system; the first and second qubits are the environment and the third qubit is the quantum computer:

    H = J(t) |1⟩⟨1| (S⃗2 · S⃗3)                                        (1.76)

where J(t)(S⃗2 · S⃗3) = (J(t)/4)(X2X3 + Y2Y3 + Z2Z3) is the Heisenberg Hamiltonian between the second qubit (in the environment) and the third qubit (the quantum computer). It is known that evolving under the Heisenberg Hamiltonian for a time π gives the unitary e^(−iπ S⃗2·S⃗3), which acts as a swap between qubits 2 and 3. Thus the time evolution of the whole system described by Hamiltonian 1.76 implements a controlled-swap operation between the second and third qubits, controlled by the state of the first qubit.

Now suppose the quantum computer qubit, qubit 3, is initially in an arbitrary mixture ρQC, and we initialize the two environment qubits, qubit 1 and qubit 2, in the mixtures (1−p)|0⟩⟨0| + p|1⟩⟨1| and I/2 respectively. If this system evolves according to the Hamiltonian above, the first environment qubit is in state |0⟩ with probability 1−p, which leaves ρQC untouched, or in state |1⟩ with probability p, which swaps the state of the quantum computer with the mixture I/2. Using the identity for I/2 we saw earlier, this is exactly a depolarizing channel acting on our one-qubit quantum computer. (By equation 1.74, replacing ρ with I/2 with probability p produces each Pauli error with probability p/4; relabeling the total error probability 3p/4 as p gives the standard form below.)

    E(ρQC) = (1−p)ρ + (p/3)(XρX + YρY + ZρZ)                         (1.77)

This shows how the time evolution of a system and its environment can, in reality, cause an error model similar to the depolarizing channel we modeled mathematically earlier. Similar examples can be used to show how a physical quantum system evolving in time is exposed to errors that are well described by the error-channel machinery discussed in this part. Now that we have a good understanding of these error models, we can focus on protecting our system from these errors using error correction schemes.

1.3.4 Quantum Error Correction

It is intuitive to try the same ideas we used for classical error correction to protect quantum states against quantum noise, but as we discussed in 1.2.2, quantum mechanics brings odd properties which make information processing different. For example, if we want to use the 3-bit repetition code idea to build a quantum code, we have to note the following issues:

No cloning. We cannot copy quantum states, i.e. starting from a general state |ψ⟩ = α|0⟩ + β|1⟩ we cannot make |ψ⟩⊗3 = |ψ⟩|ψ⟩|ψ⟩.

Measurement destroys quantum information. As we discussed in 1.2.2, measuring the quantum state |ψ⟩ to get information about the error destroys the quantum state itself.

Different errors. Unlike classical information processing, where all errors can be expressed as bit flips, a continuum of quantum errors can happen to a quantum state.

We studied the last item earlier, in the quantum error model section. Now that we have pointed out the differences between classical and quantum error correction and defined the quantum error channel, we are ready to build our first simple quantum error correcting code: the 3-qubit bit flip code, a generalization of the 3-bit repetition code that takes the differences mentioned here into account.

1.3.5 3-qubit Bit-flip Code

This code concentrates only on bit flip (σx) errors, so the error channel is:

    ρ → E(ρ) = p0 ρ + (1−p0) σxρσx    [bit flip]                     (1.78)

We use the same idea as the 3-bit repetition code we studied earlier. The logical |0⟩ and |1⟩ states will be |000⟩ and |111⟩ respectively.
For a general state |ψ⟩ = α|0⟩ + β|1⟩ we have:

    |0⟩ → |0⟩L = |000⟩                                               (1.79)
    |1⟩ → |1⟩L = |111⟩                                               (1.80)
    |ψ⟩ → |ψ⟩L = α|0⟩L + β|1⟩L = α|000⟩ + β|111⟩                     (1.81)

According to the bit flip channel definition in 1.78, a bit flip error can happen on any of the three qubits involved in the code, so one of the following four scenarios may occur:

    |ψ⟩L —I→   α|000⟩ + β|111⟩                                       (1.82)
    |ψ⟩L —σx1→ α|100⟩ + β|011⟩                                       (1.83)
    |ψ⟩L —σx2→ α|010⟩ + β|101⟩                                       (1.84)
    |ψ⟩L —σx3→ α|001⟩ + β|110⟩                                       (1.85)

In this context, the subscript index of each Pauli operator indicates the qubit it acts on. So far we have managed to decompose all different quantum errors into a limited set of Pauli operators, and we have also managed to make a 3-qubit repetition encoding. But what happens when we want to decode and error correct? We know that we cannot measure the quantum state: a measurement would destroy the superposition and would not even give us much information about the original state, since the state only collapses to one of the eigenstates. So instead of measuring the state, we do what we did for classical linear codes: we use the idea of the parity check matrix H and apply an operator which acts trivially on the code space.

Since the |ψ⟩L state is written in the computational basis, we try the σz operator. Knowing that

    σz|0⟩ = +1|0⟩                                                    (1.87)
    σz|1⟩ = −1|1⟩                                                    (1.88)

we apply an even number of σz operators, acting on different qubits, to the general state. For example:

    σz1σz2|ψ⟩L = σz1σz2(α|000⟩ + β|111⟩)                             (1.89)
               = σz1σz2 α|000⟩ + σz1σz2 β|111⟩                       (1.90)
               = α|000⟩ + (−1)² β|111⟩                               (1.91)
               = α|000⟩ + β|111⟩                                     (1.92)
               = |ψ⟩L                                                (1.93)

As far as the 3-qubit code is concerned, we have found an operator which acts much like the parity check matrix in the classical case. Now let us see the action of this operator on an erroneous state E1|ψ⟩ = σx1|ψ⟩ = α|100⟩ + β|011⟩:

    (σz1σz2)E1|ψ⟩ = σz1σz2σx1|ψ⟩                                     (1.94)
                  = σz1σx1σz2|ψ⟩                                     (1.95)
                  = (−1)σx1σz1σz2|ψ⟩                                 (1.96)
                  = −σx1|ψ⟩                                          (1.97)
                  = −E1|ψ⟩                                           (1.98)

The second line holds because two operators on two different qubits act on two disjoint Hilbert spaces and thus commute. The third line holds because different Pauli operators (σx and σz) acting on the same qubit anticommute. This relation shows that the erroneous state E1|ψ⟩L is an eigenstate of the operator σz1σz2 with eigenvalue −1. We also saw that this operator acts trivially on the code space, i.e., any logical state |ψ⟩L is a +1 eigenstate of it. Not every single bit flip error anticommutes with this operator, though. In order to cover all possible bit flip errors we need a set of such operators. The set

    {σz1σz2I3, I1σz2σz3}                                             (1.99)

is the set of operators we are looking for. It is easy to see that any single bit flip error anticommutes with at least one operator in this set, while the operators in the set commute with each other and act trivially on the code space. Decoding is as simple as applying these two operators to a possibly faulty state and measuring the results; the two operators essentially act as the parity check row vectors of a parity check matrix (see example 1.50). A small numerical sketch of this syndrome extraction follows below.

We have now built our first simple quantum code, correcting one bit flip error. The problem with this simple code is that it corrects only bit flips, while, as we discussed earlier, quantum errors are much more general. We reduced individual qubit errors to bit flips, phase flips and their combination. The nice thing about this set of errors is that although they are fundamentally different, they can be corrected with the same logic.
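The following sketch (assuming numpy; not part of the thesis) applies each possible single bit flip to |ψ⟩L and reads off the ±1 eigenvalues of the two check operators of equation 1.99. Each error produces a distinct syndrome, which is what makes it correctable:

    import numpy as np

    I = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.array([[1., 0.], [0., -1.]])
    k0 = np.array([1., 0.])
    k1 = np.array([0., 1.])

    def kron(*ops):
        out = np.array([1.])
        for op in ops:
            out = np.kron(out, op)
        return out

    # |psi>_L = a|000> + b|111>, with a^2 + b^2 = 1 (real amplitudes for brevity).
    psi_L = 0.6 * kron(k0, k0, k0) + 0.8 * kron(k1, k1, k1)

    checks = [kron(Z, Z, I), kron(I, Z, Z)]           # the set (1.99)
    errors = {"I": kron(I, I, I), "X1": kron(X, I, I),
              "X2": kron(I, X, I), "X3": kron(I, I, X)}

    for name, E in errors.items():
        s = E @ psi_L
        syndrome = tuple(int(round(s @ G @ s)) for G in checks)  # <s|G|s> = +-1
        print(name, syndrome)   # I:(1,1)  X1:(-1,1)  X2:(-1,-1)  X3:(1,-1)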
This motivates how we can protect our state against phase flips, using the 3-qubit repetition idea again.

1.3.6 3-qubit Phase-flip Code

The operator for a single phase flip is the σz Pauli operator, so the error channel model is:

    ρ → E(ρ) = p0 ρ + (1−p0) σzρσz    [phase flip]                   (1.100)

We now want to use the 3-qubit repetition idea to encode a general qubit state |ψ⟩ = α|0⟩ + β|1⟩ in 3 physical qubits so that it is protected against a single phase flip. The analogy between the classical 3-bit repetition code and the quantum 3-qubit repetition code was simple because of the shared idea of a bit flip; but σz acting on the computational basis changes only the phase, which is exactly the difficulty here. To make the analogy work, it helps to find a new basis in which σz flips states (just as σx does in the computational basis). To achieve this, consider the X basis, the eigenbasis of the σx operator. From equation 1.5 we can write this basis as follows; we call these states |+⟩ and |−⟩:

    |+⟩ = ψx+ = (1/√2)(|0⟩ + |1⟩)                                    (1.101)
    |−⟩ = ψx− = (1/√2)(|0⟩ − |1⟩)                                    (1.102)

Now let us see what the phase flip operator does to this eigenbasis:

    σz|+⟩ = (1/√2)(σz|0⟩ + σz|1⟩) = (1/√2)(|0⟩ − |1⟩) = |−⟩          (1.103)
    σz|−⟩ = |+⟩                                                      (1.104)

These equations hold because |0⟩ and |1⟩ are the +1 and −1 eigenstates of σz respectively. So if we prepare our qubits in the eigenbasis of σx, the phase flip operator acts like a bit flip operator, changing |+⟩ to |−⟩ and |−⟩ to |+⟩. The rest of the encoding and decoding procedure is exactly identical to that of the 3-qubit bit flip code. We start with the logical information |ψ⟩L = α|+⟩ + β|−⟩ and encode it in 3 copies of the logical eigenbasis:

    |+⟩ → |+⟩L = |+++⟩                                               (1.105)
    |−⟩ → |−⟩L = |−−−⟩                                               (1.106)
    |ψ⟩ → |ψ⟩L = α|+⟩L + β|−⟩L = α|+++⟩ + β|−−−⟩                     (1.107)

Since the rest of the procedure is identical, we show only one example of how a phase flip error acts on this code:

    |ψ⟩L —I→   α|+++⟩ + β|−−−⟩                                       (1.108)
    |ψ⟩L —σz2→ α|+−+⟩ + β|−+−⟩                                       (1.109)

A majority vote decoding can then be used to bring the faulty state back to one of the two logical states |−⟩L or |+⟩L. Exactly as for the 3-qubit bit flip code, we can have a set of check operators giving us error syndromes; an error syndrome gives information about the possible error that occurred, without disturbing the quantum state itself. Since we are working in the σx basis, we use this Pauli operator for the parity check syndromes. As for the 3-qubit bit flip code, consider the application of an even number of Pauli X operators, such as σx1σx2, to the logical states:

    σx1σx2|ψ⟩L = σx1σx2(α|+++⟩ + β|−−−⟩)                             (1.110)
               = σx1σx2 α|+++⟩ + σx1σx2 β|−−−⟩                       (1.111)
               = α|+++⟩ + (−1)² β|−−−⟩                               (1.112)
               = α|+++⟩ + β|−−−⟩                                     (1.113)
               = |ψ⟩L                                                (1.114)

Now let us see the action of this operator on an erroneous state E1|ψ⟩ = σz1|ψ⟩ = α|−++⟩ + β|+−−⟩:

    (σx1σx2)E1|ψ⟩ = σx1σx2σz1|ψ⟩                                     (1.115)
                  = σx1σz1σx2|ψ⟩                                     (1.116)
                  = (−1)σz1σx1σx2|ψ⟩                                 (1.117)
                  = −σz1|ψ⟩                                          (1.118)
                  = −E1|ψ⟩                                           (1.119)

The second line holds because two operators on two different qubits act on two disjoint Hilbert spaces and thus commute. The third line holds because different Pauli operators (σx and σz) acting on the same qubit anticommute. This relation shows that the erroneous state E1|ψ⟩L is an eigenstate of the operator σx1σx2 with eigenvalue −1, while, as we saw before, this operator acts trivially on the code space, i.e., any logical state |ψ⟩L is a +1 eigenstate of it.
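The basis-change logic above can be stated as a one-line identity: conjugating by a Hadamard exchanges Z and X. A tiny check (assuming numpy; an illustration, not from the thesis):

    import numpy as np

    H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.array([[1., 0.], [0., -1.]])

    assert np.allclose(H @ Z @ H, X)  # a phase flip acts as a bit flip in the X basis
    assert np.allclose(H @ X @ H, Z)  # and vice versa

In this sense, the phase flip code really is the bit flip code conjugated by Hadamards on every qubit.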
This is exactly what we had earlier for the bit flip channel. So we can construct a set of these check operators to cover all possible phase flip positions on 3 qubits. The resulting set is similar to that of 1.99, with the exception that here the check syndromes are built from σx operators:

    {σx1σx2I3, I1σx2σx3}                                             (1.120)

So far we have managed to design two separate yet similar codes that correct either one bit flip or one phase flip error on a qubit state. However, as mentioned earlier, these errors can happen on a qubit state at the same time, so a good code should be able to correct all of these error types simultaneously. Although we need a more complex code to achieve this, it has been shown that the same logic behind the phase flip and bit flip codes can be concatenated to make the 9-qubit Shor code [8, 28], which can correct one bit flip and one phase flip error simultaneously.

1.3.7 9-qubit Shor Code

Let us take the depolarizing channel as a reasonable error model in which all different single-qubit errors are possible:

    ρ → E(ρ) = (1−p)ρ + (p/3)σxρσx + (p/3)σzρσz + (p/3)σyρσy         (1.121)
               [no error]  [bit flip]   [phase flip]  [both errors]

The concatenation idea behind the 9-qubit Shor code, protecting against both phase flip and bit flip errors, is simple. Let us start with the 3-qubit phase flip code and its encoding into logical states:

    |+⟩ → |+⟩L = |+++⟩                                               (1.122)
    |−⟩ → |−⟩L = |−−−⟩                                               (1.123)
    |ψ⟩ → |ψ⟩L = α|+⟩L + β|−⟩L = α|+++⟩ + β|−−−⟩                     (1.124)

This encoding protects against one phase flip on any of the 3 qubits involved, but offers no protection against bit flip errors. Now let us write this logical basis in the computational basis:

    |+⟩L = |+++⟩ = (|0⟩+|1⟩)/√2 ⊗ (|0⟩+|1⟩)/√2 ⊗ (|0⟩+|1⟩)/√2        (1.125)
    |−⟩L = |−−−⟩ = (|0⟩−|1⟩)/√2 ⊗ (|0⟩−|1⟩)/√2 ⊗ (|0⟩−|1⟩)/√2        (1.126)
                   [phase qubit 1]  [phase qubit 2]  [phase qubit 3]

Now that we have the logical states in the computational basis, let us protect each of the 3 qubits involved with a 3-qubit bit flip repetition code:

    (|0⟩+|1⟩)/√2  [phase qubit i]  →  (|000⟩+|111⟩)/√2               (1.127)

If we repeat this for all 3 phase-protected qubits, we end up with the following 9-qubit logical states:

    |0⟩Shor = (1/2√2)(|000⟩+|111⟩)⊗(|000⟩+|111⟩)⊗(|000⟩+|111⟩)       (1.128)
    |1⟩Shor = (1/2√2)(|000⟩−|111⟩)⊗(|000⟩−|111⟩)⊗(|000⟩−|111⟩)       (1.129)
              [qubits 1 to 3]     [qubits 4 to 6]  [qubits 7 to 9]

With the logical states in hand, the encoding is simple: a general one-qubit state |ψ⟩ = α|0⟩ + β|1⟩ is encoded as |ψ⟩L = α|0⟩Shor + β|1⟩Shor. Error correction and decoding are also similar to the 3-qubit phase flip and bit flip codes: we follow the same idea of check operators to detect and locate errors (phase or bit flip) on these 9 physical qubits. It is important to note that these 9 physical qubits protect only one logical qubit state against all single-qubit quantum errors. Equation 1.129 shows that consecutive groups of 3 qubits (qubits 1 to 3, 4 to 6, and 7 to 9) each represent one bit flip repetition code, so the check operators for bit flip error syndromes extend over the physical qubits of these groups. In direct analogy with the 3-qubit bit flip check operators of equation 1.99, we can devise the bit flip check operators for this 9-qubit code:

    phase qubit 1:  Z1 Z2 I3 I4 I5 I6 I7 I8 I9
                    I1 Z2 Z3 I4 I5 I6 I7 I8 I9
    phase qubit 2:  I1 I2 I3 Z4 Z5 I6 I7 I8 I9
                    I1 I2 I3 I4 Z5 Z6 I7 I8 I9
    phase qubit 3:  I1 I2 I3 I4 I5 I6 Z7 Z8 I9
                    I1 I2 I3 I4 I5 I6 I7 Z8 Z9                       (1.130)

The same logic can be used to derive the phase flip check operators (error syndrome operators), but with a difference.
Since the phase flip code was the first layer of our two-layer concatenated encoding, the groups of three qubits (1,2,3), (4,5,6) and (7,8,9) now represent the 3 phase flip repetition qubits. So we can derive the 9-qubit phase flip check operators:

    {σx1σx2σx3 σx4σx5σx6 I7I8I9,  I1I2I3 σx4σx5σx6 σx7σx8σx9}        (1.131)

with the first three factors of each operator acting on phase qubit 1, the next three on phase qubit 2, and the last three on phase qubit 3. We can summarize all these check operators in one matrix of Pauli operators:

    Z1 Z2 I3 I4 I5 I6 I7 I8 I9
    I1 Z2 Z3 I4 I5 I6 I7 I8 I9
    I1 I2 I3 Z4 Z5 I6 I7 I8 I9
    I1 I2 I3 I4 Z5 Z6 I7 I8 I9
    I1 I2 I3 I4 I5 I6 Z7 Z8 I9
    I1 I2 I3 I4 I5 I6 I7 Z8 Z9
    X1 X2 X3 X4 X5 X6 I7 I8 I9
    I1 I2 I3 X4 X5 X6 X7 X8 X9                                       (1.132)

Error correction and decoding are now simple: we repeat what we did for the 3-qubit bit flip and phase flip codes. First we apply all these check operators to the physical qubits. If the logical state of the 9 physical qubits is a +1 eigenstate of all 8 check operators, there is no error we can correct for, and the state is one of the possible logical states |0⟩Shor or |1⟩Shor. Otherwise, if any of the syndromes is −1 instead of +1, we detect and locate the error exactly as we did for the repetition codes.

There are a couple of important final remarks on the 9-qubit Shor code. First of all, this code encodes only one qubit state into a group of physical qubits, so one actually needs 9 qubits in order to protect the state of a single qubit carrying quantum information. Second, the 9-qubit Shor code can only correct single-qubit errors; it fails in the event of two or more simultaneous errors. So we need more sophisticated codes in order to correct larger numbers of errors while keeping the redundancy at a reasonable level.

The third point is that this code corrects both bit flip and phase flip errors by concatenating two different codes. As can be seen from the table of check operators, the operators divide perfectly into a set of X-check operators and a set of Z-check operators, for the phase flip and bit flip codes respectively. The idea of running two codes in parallel, one for phase flips and one for bit flips, is an important and practical trick that makes it possible to build quantum codes able to correct all the error types in our model.

The final remark is the relation between these quantum codes and classical codes. The idea of check operators and error syndromes is very similar and analogous to the parity check matrices and syndromes of the classical linear codes we saw earlier. Although the quantum codes we have studied so far were simple generalizations for pedagogical purposes, actual applied quantum codes are also built from classical codes. As an example, Calderbank, Shor and Steane showed a systematic way of building a family of quantum error correcting codes, called CSS codes, directly from classical error correcting codes [28]. Just as we noted here, CSS codes also consist of two classical codes running in parallel, making the quantum code robust against both bit flip and phase flip errors. A small syndrome computation for the Shor code check operators follows below.
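A sketch of syndrome computation for the check operators in 1.132 follows, assuming numpy (an illustration, not the thesis simulator). Pauli strings are stored as binary (x, z) support vectors; a check fires (syndrome −1) exactly when it anticommutes with the error, i.e. when the binary symplectic product xg·ze + zg·xe is odd:

    import numpy as np

    def pauli(s):
        """Parse e.g. 'ZZIIIIIII' into binary x- and z-support vectors."""
        x = np.array([c in 'XY' for c in s], dtype=int)
        z = np.array([c in 'ZY' for c in s], dtype=int)
        return x, z

    checks = [pauli(s) for s in [
        'ZZIIIIIII', 'IZZIIIIII',          # bit-flip checks, qubits 1-3
        'IIIZZIIII', 'IIIIZZIII',          # qubits 4-6
        'IIIIIIZZI', 'IIIIIIIZZ',          # qubits 7-9
        'XXXXXXIII', 'IIIXXXXXX']]         # phase-flip checks (1.131)

    def syndrome(error):
        ex, ez = pauli(error)
        return [(gx @ ez + gz @ ex) % 2 for gx, gz in checks]

    print(syndrome('IXIIIIIII'))   # X on qubit 2 trips the first two Z checks
    print(syndrome('IIIIZIIII'))   # Z on qubit 5 trips both X checks
    print(syndrome('IIIIIIIIY'))   # Y on qubit 9 trips Z- and X-type checks

Note how an X error is seen only by Z-type checks, a Z error only by X-type checks, and a Y error by both, which is the division of labor between the two concatenated codes described above.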
1.3.8 CSS Codes

The analogy we have built up between quantum and classical error correction through various simple examples gives rise to an important family of quantum codes generated out of the classical linear codes we studied earlier. CSS codes, named after the initials of their inventors, are a structured realization of quantum codes based on classical codes. These codes are a subclass of the quantum stabilizer codes we will study later in this chapter, and their structured construction makes it easier to look for a quantum code with desired properties.

A CSS code is built from an [n, k1] and an [n, k2] classical linear code, C1 and C2 respectively, with the conditions that C2 ⊂ C1 and that C1 and C1⊥ can both correct up to t errors. The resulting code is an [n, k1 − k2] quantum code which can correct up to t qubit errors. Now we focus on the logical states and their encoding for a CSS(C1, C2) quantum code. If x is a codeword in C1, the following defines a logical state:¹²

    |x + C2⟩ = (1/√|C2|) Σ_{y∈C2} |x + y⟩                            (1.133)

This code encodes k1 − k2 qubits in n qubits, so it has 2^(k1−k2) such logical encoded states. As mentioned earlier, this code corrects both bit flip and phase flip errors. We choose E1 and E2 to be binary vectors of length n whose i'th element is 1 if a bit flip (for E1) or a phase flip (for E2) happens on the i'th qubit, and 0 otherwise. In this way the two vectors model the random errors happening on the n physical qubits, and the following is an erroneous state:

    E1E2|CSS⟩ = (1/√|C2|) Σ_{y∈C2} (−1)^((x+y)·E2) |x + y + E1⟩      (1.134)

where E1 and E2 (as operators) are the corresponding quantum error operators acting on a logical CSS code state, causing bit flip and phase flip errors located on the supports of E1 and E2. Exactly as in classical error correction, we use the parity check matrix to detect bit flip or phase flip errors. Let us use the parity check matrix H1 of code C1 to detect bit flips. Since we cannot apply it directly to the physical qubits, suppose we have enough ancilla qubits prepared in |0⟩, in a product state with the data qubits. Because the parity check matrix acts trivially on the code space, applying it gives:

    |x + y + E1⟩|0⟩ —H1→ |x + y + E1⟩|H1(x + y + E1)⟩ = |x + y + E1⟩|H1E1⟩    (1.135)

    H1(E1E2|CSS⟩)|0⟩_ancilla = (1/√|C2|) Σ_{y∈C2} (−1)^((x+y)·E2) |x + y + E1⟩|H1E1⟩    (1.136)
                             = (E1E2|CSS⟩)|H1E1⟩                     (1.137)

The last line shows that applying the parity check matrix keeps the data qubits in the state they were in before, and deposits a syndrome of the bit flip errors on the ancilla qubits. We can now measure the ancillas without disturbing the quantum system and learn about the bit flip errors. Once we know which qubits were exposed to bit flips, we simply apply a NOT gate to each of them to correct the error. This brings the state to a CSS code state with only phase flip errors:

    E2|CSS⟩ = (1/√|C2|) Σ_{y∈C2} (−1)^((x+y)·E2) |x + y⟩             (1.138)

The same logic, with a slightly different implementation, uses the parity check matrix H2 of code C2 to get information about phase flip errors; we then correct those errors in the same fashion, bringing the erroneous codeword back to its code space.

A simple example of a CSS code is the [7,1] Steane code, which uses the classical [7,4,3] Hamming code to obtain a [7,4] code as C1 and a [7,3] code as C2. This CSS code encodes 1 logical qubit in 7 physical qubits and can correct one phase flip and one bit flip error. Comparing this code with the 9-qubit Shor code, which used 9 qubits to achieve the same goal, shows how this construction can help design better codes.

Footnote 12: All arithmetic operations between classical codewords x and y are binary, i.e. carried out in Z2.
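The coset construction of equation 1.133 is easy to make concrete. The sketch below (assuming numpy; an illustration, not the thesis simulator) enumerates the codewords of C2 generated by the rows of the Hamming parity check matrix, and builds the two cosets whose uniform superpositions are the Steane logical states listed next:

    import itertools
    import numpy as np

    H = np.array([[0, 0, 0, 1, 1, 1, 1],     # parity check matrix of the
                  [0, 1, 1, 0, 0, 1, 1],     # [7,4,3] Hamming code; its rows
                  [1, 0, 1, 0, 1, 0, 1]])    # generate C2

    C2 = sorted({tuple(int(v) for v in (np.array(c) @ H) % 2)   # all 2^3 codewords
                 for c in itertools.product((0, 1), repeat=3)})

    def coset(x):
        return sorted(tuple(int(v) for v in (np.array(x) + np.array(y)) % 2)
                      for y in C2)

    zero_L = coset([0] * 7)          # the 8 strings in equation (1.139)
    one_L  = coset([1] * 7)          # the 8 strings in equation (1.140)
    for w in zero_L:
        print(''.join(map(str, w)))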
The [7,1] Steane code protects one logical qubit state, so it has only two logical codewords, |0⟩L and |1⟩L:

    |0⟩L = (1/√8)(|0000000⟩ + |1010101⟩ + |0110011⟩ + |1100110⟩
                 + |0001111⟩ + |1011010⟩ + |0111100⟩ + |1101001⟩)    (1.139)

    |1⟩L = (1/√8)(|1111111⟩ + |0101010⟩ + |1001100⟩ + |0011001⟩
                 + |1110000⟩ + |0100101⟩ + |1000011⟩ + |0010110⟩)    (1.140)

This gives us an understanding of how our knowledge of classical error correcting codes can be used to build good quantum error correcting codes. We also learn why we always need two codes running in parallel in order to correct both quantum bit flip and phase flip errors.

The heart of the error detection procedure in a classical linear code is its parity check matrix H. During the last few sections on error correction we have seen how intuitively this useful notion generalizes to check operators in quantum error correcting codes. These check operators work much like parity check matrices, giving us syndromes about errors without disturbing the state carrying the information. In the case of CSS codes it is easy to establish this relation, since these codes are made directly from classical linear codes, whose parity check matrices serve as check operators. Let us now take a closer look at the check operators of the [7,1] Steane code:

    I1 I2 I3 X4 X5 X6 X7
    I1 X2 X3 I4 I5 X6 X7
    X1 I2 X3 I4 X5 I6 X7
    I1 I2 I3 Z4 Z5 Z6 Z7
    I1 Z2 Z3 I4 I5 Z6 Z7
    Z1 I2 Z3 I4 Z5 I6 Z7                                             (1.141)

As we discussed earlier, the two linear codes work at the same time to provide resilience against both bit flip and phase flip errors. The check operators due to each linear code look similar, except that one is a product of Pauli X operators detecting phase flips, while the other is a product of Z operators detecting bit flips. This similarity between the two codes suggests a regrouping and representation of the check operators, as shown in 1.142. In general, for a CSS code CSS(C1, C2) with H1 and H2 the parity check matrices of the linear codes C1 and C2 respectively, we can write the quantum check operator matrix in the following format, replacing the 1's of the first parity check matrix by X operators and the 1's of the second parity check matrix by Z operators:

    [ H1^X   I  ]
    [  I   H2^Z ]                                                    (1.142)

We can test the application of these check operators on the logical states of the Steane code. Let us pick the first check operator and apply it to |0⟩L, investigating its action on each term of the superposition:

    term 1: X4X5X6X7|0000000⟩ = |0001111⟩ → term 5
    term 2: X4X5X6X7|1010101⟩ = |1011010⟩ → term 6
    term 3: X4X5X6X7|0110011⟩ = |0111100⟩ → term 7
    term 4: X4X5X6X7|1100110⟩ = |1101001⟩ → term 8
    term 5: X4X5X6X7|0001111⟩ = |0000000⟩ → term 1
    term 6: X4X5X6X7|1011010⟩ = |1010101⟩ → term 2
    term 7: X4X5X6X7|0111100⟩ = |0110011⟩ → term 3
    term 8: X4X5X6X7|1101001⟩ = |1100110⟩ → term 4                   (1.143)

    → X4X5X6X7|0⟩L = |0⟩L                                            (1.144)

This shows how the check operators act as the identity on logical encoded states; in other words, the encoded states are +1 eigenstates of the check operators. These check operators form a group called the stabilizer group. Each operator is called a stabilizer of the code, because it stabilizes the encoded states, while any state outside the encoded space is a −1 eigenstate of at least one of these stabilizers. This group-theoretic understanding of check operators gave birth to an important family of error correcting codes, the stabilizer codes, which we will discuss in more detail in the remainder of this chapter.
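The permutation of terms in 1.143 can be restated in two lines of code, reusing the zero_L list constructed in the earlier Steane sketch (again an illustration, not from the thesis): applying X4X5X6X7 flips bits 4–7 of every basis string and merely permutes the set of 8 codewords, so the superposition |0⟩L is left unchanged.

    flipped = sorted(tuple(b ^ (1 if i >= 3 else 0) for i, b in enumerate(w))
                     for w in zero_L)
    assert flipped == zero_L   # |0>_L is a +1 eigenstate of the check operator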
Before turning to the stabilizer formalism, the next short subsection reviews a list of common steps shared by all quantum error correcting codes, in particular to address the similarities and differences between classical and quantum error correction.

1.3.9 General QEC Picture and Concatenated Codes

Although various families of quantum error correcting codes have been developed, for various error models and properties, certain fundamental items are shared by all quantum error correcting schemes. The following list is both a take-away summary of what we have said about quantum error correction and a list of facts common to all quantum error correcting codes:

1. The unit of quantum information is the state of a qubit, which is a vector in a two-dimensional vector space (Hilbert space) C².

2. Quantum noise acts through error operators on qubit states. These errors can be discretized into bit flip and phase flip errors acting on qubits.

3. In order to protect the qubits against these quantum errors we need quantum error correction. We cannot make copies of quantum information, but we can make copies of the known eigenbasis in which the states are represented; to encode a quantum state one only needs to represent it in the logical basis instead of the actual physical basis.

4. Using this, we encode the state of a set of qubits in the states of a larger set of physical qubits. The information we want to protect is called the logical or encoded state, and the set of redundant qubits we encode it into are the physical qubits.

5. Classical error correcting codes satisfying special conditions can be used to build quantum error correcting codes. CSS codes are an example of a simple structure for making quantum codes out of existing classical codes; they consist of two separate error correcting codes, one for bit flip and one for phase flip errors.

6. Exactly like the parity check matrix of classical linear codes, quantum codes have a set of check operators. These operators act trivially on states in the code space, but leave error syndromes on ancilla qubits when they hit error operators.

7. The error syndromes on the ancilla qubits can be measured to detect and locate the errors without disturbing the encoded quantum information.

8. After detecting the erroneous physical qubits, we apply the proper operation to the erroneous state to recover it, undoing the effect of the errors and bringing it back to the code space.

9. Exactly as for classical codes, each quantum code has its own parameters: the number of logical qubit states it can encode, the number of physical qubits it needs, the number of phase flip and bit flip errors it can correct, and so on.

Figure 1.3: A schematic representation of a two-level concatenated code. The logical qubits are encoded into a series of qubits in the first layer; a second code is then used to encode each individual qubit of the first layer into further physical qubits in the second layer. [28]

Now that we have a general understanding of quantum error correction and its relation to classical error correcting codes, we can discuss the idea of concatenated coding and explain the threshold theorem in more depth. As the name suggests, concatenated coding is the idea of using error correcting codes repeatedly. As mentioned in previous sections, we can encode quantum information in the state of physical qubits by using a quantum code.
However, each of the physical qubits used in the first encoding carries a certain quantum state, and nothing stops us from using another quantum code to encode those states into an even bigger group of physical qubits. Figure 1.3 illustrates a general two-level concatenation of quantum codes. This repetition can go on, and we can always add another layer of encoding. Any quantum code can correct up to a certain number of quantum errors; the 9-qubit and the 7-qubit codes are both able to correct only a single arbitrary error, which means they fail to protect the state if two or more errors occur on the physical qubits. We actually calculated the success probability of the 3-qubit code, which was 1 − 3pe² + 2pe³, where pe is the probability of a single error on a physical qubit. Thus, with one layer of encoding we reduced the error probability from pe to less than 3pe². By the same logic one can show that each quantum code, depending on its detection and correction capacity, reduces the initial individual error probability pe to at most c·pe^t, where c is a constant and t depends on the code distance. Without loss of generality, consider a code which corrects up to one error, and add another layer of error correction: this time the failure probability is reduced to at most c(c·pe²)². This concatenated repetition can be continued up to an arbitrary number k of layers, in which case the final failure probability is at most (c·pe)^(2^k)/c. Now, if we want this concatenated code to have a certain accuracy ε, the failure probability should be less than ε:

    (c·pe)^(2^k)/c < ε                                               (1.145)

If we choose k large enough, this inequality holds provided that c·pe < 1, i.e. pe < 1/c. This argument shows that arbitrary accuracy of a quantum memory is possible by using enough layers of concatenation, provided that the individual error probability is below a certain threshold pth = 1/c. A similar argument can be used to generalize this result to the case of faulty gates and actual quantum computation. Figure 1.4 illustrates how adding more layers of concatenation decreases the failure probability, provided the initial error value is below a threshold. This is a brief sketch of how the threshold theorem is proved and of how concatenation of error correcting schemes enables fault tolerant quantum computation; a small numerical illustration of the recursion follows below.

Figure 1.4: The residual error after the n'th encoding layer versus the initial error, for a concatenated 3-qubit bit flip code. Below a certain threshold, as we increase the number of encoding layers the residual error drops down to the desired value.

This summary is a good motivation for our next section on stabilizer codes. One of the strengths of stabilizer codes is their very well structured formalism for representing a code: all the loose analogies we made between classical and quantum codes are rigorously formalized in the stabilizer formalism.
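The recursion behind inequality 1.145 is short enough to run directly. In the sketch below (plain Python, not from the thesis) the constant c = 100 is purely illustrative, chosen so the threshold sits at 1/c = 0.01; it is not a number derived from any specific code:

    def residual(p, c=100.0, levels=5):
        # Each concatenation level maps the failure probability p -> c * p^2,
        # giving (c p)^(2^k) / c after k levels.
        for _ in range(levels):
            p = c * p * p
        return p

    for p in (0.005, 0.0099, 0.0101, 0.02):    # threshold is 1/c = 0.01 here
        print(p, residual(p))

Below 0.01 the residual collapses doubly exponentially with the number of levels; above it the iterate grows (and stops being a meaningful probability), which is the qualitative behavior shown in figure 1.4.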
1.3.10 Stabilizer Codes

Stabilizer codes, introduced by Daniel Gottesman [18], are a family of quantum codes with a special group-theoretical structure. This structure makes them easier to find and to analyze. The stabilizer formalism is the powerful group-theoretical framework in which these codes are discussed.

Let us assume a subspace C and error operators Ei such that an erroneous state E1|ψ⟩ is orthogonal to |ψ⟩ for any state |ψ⟩ in C. A measurement can then be conducted to tell whether the system is in state |ψ⟩ or E1|ψ⟩, detecting whether the error E1 has happened. In order to distinguish errors, we can generalize this condition to any two errors acting on arbitrary states of the code space:

    ⟨φ|E1E2|ψ⟩ = 0                                                   (1.146)

If this relation holds for any two states in the subspace C and any two possible single-qubit errors, then C is a valid quantum error correcting code. One way to build such a subspace C with these properties is to fill it with +1 eigenstates of an operator G such that any error pair anticommutes with G:

    ⟨φ|E1E2|ψ⟩ = ⟨φ|E1E2G|ψ⟩                                         (1.147)
               = −⟨φ|GE1E2|ψ⟩ = −⟨φ|E1E2|ψ⟩ = 0                      (1.148)

The second line holds because G and E1E2 anticommute. Note that since C is the +1 eigenspace of the operator G,

    ∀ψ ∈ C:  G|ψ⟩ = |ψ⟩                                              (1.149)

G fixes (acts as the identity operator on) all codewords in C. The set of all operators like G which fix the codewords forms a group S, called the stabilizer of the code [18]. The stabilizer group S of an [n, k] stabilizer code has a minimal representation in terms of n − k independent generators. In order to describe a code in the stabilizer formalism one only needs to find the generators of the group S; all other operators that fix the code can be written as products of these generators. In fact, what we wrote in equation 1.99 was the generating set of the stabilizer group of the 3-qubit bit flip code.

The generators of the stabilizer group S are quantum operators on the physical qubits of the code. These operators consist of combinations of the Pauli operators σx (to anticommute with phase flip errors), σz (to anticommute with bit flip errors), and identity operators. They act trivially on the code space while leaving error syndromes when acting on an erroneous state; this is exactly the formal definition of what we have so far called a check operator. Like every quantum code, each stabilizer code encodes a specific number of logical qubits in a specific number of physical qubits, and it is able to correct a specific number of single-qubit errors happening on the physical qubits. So which error operators can a stabilizer code correct? Before stating the error correction condition we need a couple of well-known definitions and the following theorem:¹³

Theorem 1. The set of tensor products of the Pauli operators σx, σy, σz and I, with a possible overall factor of ±1 or ±i, forms a group G under multiplication.

We use the notation Gn when we refer to the group of tensor products of Pauli operators on n qubits. For example, XIZII is an element of G5 which applies σx to the first qubit and σz to the third one, and acts trivially on the rest of the qubits. As before, we attach an index to each Pauli operator to show which qubit it acts on; for the same example we write X1I2Z3I4I5. For a group like Gn we have the notion of a group normalizer, defined as follows:

Definition 2. The normalizer of a subgroup S in the group G is defined to be N_G(S) = {g ∈ G | gS = Sg}.

The stabilizer group S is a subgroup of the bigger Pauli group Gn. With this definition, the normalizer of this subgroup consists of all elements of Gn that commute with all elements of the stabilizer group S; in other words, all combinations of Pauli operators in Gn on the physical qubits which commute with every stabilizer in S. The following theorem now tells us which errors we can correct using a stabilizer code:

Theorem 2 (Error-correction conditions for stabilizer codes). Let S be the stabilizer group for a stabilizer code C(S).

Footnote 13: For the proofs of the following two theorems we encourage the reader to refer to [18].
Suppose {Ej} is a set of operators in Gn such that Ej†Ek ∉ N(S) − S for all j and k. Then {Ej} is a correctable set of errors for the stabilizer code C(S).

When using a stabilizer code, we encode the desired quantum state in the logical eigenbasis defined by the codewords of the code. In order to correct errors, we apply the generators of the stabilizer group to the system and measure the error syndromes on ancilla qubits, which interact with the physical qubits in the support of each stabilizer. The collected error syndromes give us the information to detect and locate the errors; we then apply the proper operations to undo those errors, preserving the logical quantum state encoded in the physical qubits.

The following is the stabilizer group of a [5,1] stabilizer code, which encodes one logical qubit state in 5 physical qubits and can correct up to one arbitrary single-qubit error. This code has 5 − 1 = 4 stabilizer generators:

    G1 = X1 Z2 Z3 X4 I5
    G2 = I1 X2 Z3 Z4 X5
    G3 = X1 I2 X3 Z4 Z5
    G4 = Z1 X2 I3 X4 Z5                                              (1.150)
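A quick check (assuming numpy; an illustration, not from the thesis) that the four generators in 1.150 mutually commute, using the same binary symplectic product as in the earlier Shor code sketch: two Pauli strings commute iff x1·z2 + z1·x2 is even.

    import numpy as np

    def pauli(s):
        x = np.array([c in 'XY' for c in s], dtype=int)
        z = np.array([c in 'ZY' for c in s], dtype=int)
        return x, z

    gens = [pauli(s) for s in ('XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ')]
    for ax, az in gens:
        for bx, bz in gens:
            assert (ax @ bz + az @ bx) % 2 == 0   # every pair commutes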
1.4 Surface Codes

Quantum surface codes are a family of stabilizer codes [18], introduced by Bravyi and Kitaev [4] as an improved planar version of Alexei Kitaev's toric codes [7]. Fault tolerant quantum computation with a high threshold is possible using the surface code as the error correcting code [33]. The main purpose of this research is to study the classical post processing in the error correction step of fault tolerant quantum computation using surface codes [35] [7]. A thorough and nicely written review of surface codes can be found in Fowler et al. [16].

The underlying principles of quantum error correction are exactly the same for surface codes. However, surface codes have additional advantages, because they also make use of the special geometric positioning of the physical qubits. So far, what we have learned about quantum coding was purely theoretical; we never discussed how the physical qubits are put together, interact, and so on. The surface code is an example that not only demonstrates the principles of quantum error correction for pedagogical purposes but also illustrates how a physical system can exploit these theoretical principles to make a robust quantum memory. Another advantage of surface codes is that they are not only good codes for protecting quantum information from quantum noise; they also make quantum operations possible. The special physical geometry and the interactions of the qubits with each other allow us to realize quantum gates. Raussendorf and colleagues [33] show how we can use surface codes not only for protecting quantum states but also for fault tolerant quantum computation.

Figure 1.5: Lattice definitions: X and Z stabilizers, and logical X and Z operators.

1.4.1 Geometric Picture

Surface codes are formed from a two-dimensional lattice architecture of qubits on a surface of nontrivial topology. The data qubits (the physical qubits which store information about the encoded state) are placed on the edges of a 2D lattice of linear size L, giving 2L² qubits in total. Ancilla qubits are placed on the plaquettes and vertices of the lattice, and there is a nearest-neighbor Ising interaction between each ancilla qubit and its neighboring data qubits. The ancilla qubits can be measured to obtain error syndromes about the data qubits adjacent to them.

The two-dimensional lattice architecture, the local nearest-neighbor interactions, and the high tolerance of surface codes to errors (as high as 1% per operation)¹⁴ make this one of the most realistic methods of building a solid-state quantum computer (Fowler et al. [16]). We use the stabilizer formalism to discuss error correction on surface codes.

Footnote 14: Just as a simple comparison, a two-dimensional lattice implementation of the Steane and Bacon-Shor codes with nearest-neighbor interactions has an error threshold of 2×10⁻⁵, compared to 0.01 for surface codes. [37]

The stabilizer formalism suggests that we introduce the stabilizers of this code first. For lighter notation, we use X and Z instead of σx and σz, respectively. The following defines the stabilizers of the surface code and their commutation relations:

    Z_P = ⊗_{l∈P} Z_l                                                (1.151)
    X_s = ⊗_{l∈s} X_l                                                (1.152)
    [Z_Pi, X_sj] = 0,  ∀ i, j ≤ L                                    (1.153)

The Z_P stabilizers are called plaquette stabilizers and the X_s stabilizers are called site stabilizers. As equations 1.151–1.153 show, these operators are defined on all sites and plaquettes of the lattice. The commutation condition between stabilizer operators is satisfied because, as can be seen in figure 1.5, they either act on 4 different qubits or, if they share qubits, they always share an even number of them. This is the first time we see the geometry of the lattice, and the square or star shape of the stabilizer operators, playing a role in the definition of the code.

Now let us study this geometry further. Applying stabilizers to the data qubits does not change the encoded information, because the quantum state of the system is a +1 eigenstate of the stabilizer operators. How can we interpret this in the geometric picture? Without loss of generality, let us focus on plaquette stabilizers. From 1.2.2 we know that Pauli operators are both unitary and Hermitian, and therefore two consecutive applications of a Pauli operator equal the identity operator, σi² = I for i ∈ {x, y, z}. Combining this fact with the geometric picture of a group of plaquette stabilizers in figure 1.6 shows that the net effect of the stabilizer operators on the lattice of qubits equals a closed loop of Pauli Z operators: the effect on qubits inside the loop is canceled, because each qubit inside the closed loop is hit twice and σz² = I. On the other hand, we know that the encoded quantum state of this surface code is the +1 eigenstate of all stabilizer operators.

Figure 1.6: A trivial cycle of Pauli Z operators, shown as solid dark blue lines, arising as a product of Z stabilizer operators on a group of plaquettes of the lattice. Each plaquette applies a Pauli Z operator to each of the 4 data qubits adjacent to it, and two adjacent plaquettes share exactly one data qubit because of the square architecture of the lattice. Two adjacent plaquette operators therefore apply the Z operator twice to the shared qubit. Since σi² = I, a group of adjacent plaquettes acts trivially on the data qubits inside and applies Pauli operators only on the closed loop of data qubits forming the boundary of the plaquettes. This cycle is a trivial cycle on the lattice. Lattice boundaries drawn with the same dashed line color are identified, which in this case gives a toric topology.
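The cancellation in figure 1.6 is easy to reproduce combinatorially. In the sketch below (plain Python, an illustration rather than the thesis simulator), edges of an L×L torus are labelled ('h'|'v', row, col); multiplying Z-type plaquette stabilizers corresponds to taking the symmetric difference of their edge supports, so a block of plaquettes acts only on its boundary loop:

    L = 8

    def plaquette(r, c):
        # The four edges surrounding the plaquette at (r, c), torus-wrapped.
        return {('h', r % L, c % L), ('h', (r + 1) % L, c % L),
                ('v', r % L, c % L), ('v', r % L, (c + 1) % L)}

    support = set()
    for r in range(2):                  # a 2 x 3 block of plaquettes
        for c in range(3):
            support ^= plaquette(r, c)  # shared interior edges cancel (Z^2 = I)

    print(len(support))                 # 10 edges remain: the 2*(2+3) perimeter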
So this closed loop of Pauli operators is also a stabilizer of the code, since it can be written as a product of the stabilizer group generators of the surface code. The upshot of the special geometric picture of surface codes is that any closed loop of Pauli operators¹⁵ which can be written as a product of stabilizer generators¹⁶ acts trivially on the code space. To state this formally, we need a couple of definitions:

Definition 3. A 1-chain is a mapping from Z2 = {0, 1} to the edges of the lattice.

The set of all 1-chains forms a vector space over Z2; the sum of two 1-chains is the disjoint union of the edges with non-zero value in each 1-chain. A 0-chain is a similar assignment of Z2 values to the sites, and a 2-chain is such an assignment to the plaquettes of the lattice. A boundary operator ∂ is then defined, taking 2-chains to 1-chains and 1-chains to 0-chains: the boundary of a 2-chain (a plaquette) is the sum of the 4 edges (1-chains) surrounding it, and the boundary of a 1-chain is the pair of sites (0-chains) at its ends. This brings us to the notion of a trivial cycle on a surface code.

Definition 4. A chain with trivial boundary is called a cycle.

Any Pauli operator can be expressed as a tensor product of X and I operators times a tensor product of Z and I operators. The tensor product of Z and I operators defines a 1-chain in which the edges acted on by Z operators are mapped to 1 and the remaining edges to 0. As we discussed earlier, such a 1-chain commutes with all plaquette operators; however, it commutes with a site operator only if an even number of Z operators act on the edges adjacent to that site. Thus, for an operator to commute with all site operators, the 1-chain must have trivial boundary, and consequently it is a cycle.¹⁷

Definition 5. A cycle is homologically trivial if it is the boundary of a 2-chain.

So a tensor product of Z (or X) Pauli operators on the lattice which forms a closed loop that can be tiled by stabilizer generators is a trivial cycle, as shown in figure 1.6. There also exist homologically nontrivial cycles on the lattice topology.

Definition 6. A cycle is homologically nontrivial if it cannot be expressed as the boundary of any 2-chain.

Whether a combination of Pauli operators makes a trivial cycle on a surface depends on the boundaries of the surface.

Footnote 15: Whether they arise as random noise or are applied by us as the controllers of the quantum computer.
Footnote 16: Or, in this geometric picture, any closed loop of Pauli operators which can be tiled with stabilizer generators.
Footnote 17: A similar argument can be made on the dual lattice to show that tensor products of X and I operators are also cycles, on the dual lattice. The dual lattice is obtained by exchanging plaquettes with sites, sites with plaquettes, and vertical edges with horizontal edges and vice versa.
However, it acts non-trivially on thecode space which affects the logical quantum information stored in the lattice.Let’s rephrase what we just discussed a take a closer look at it. A non-trivialcycle of operators will not act trivially on the quantum state of the lattice, or inother words the encoded logical state of the system is not a +1 eigenstate of thisoperator. So applying this loop of Pauli operators not only will change the stateof the individual physical qubits on the lattice but it will also change the logicalencoded state of the system. 18 This is in fact the reason we call these operators”Logical” operators, because they act on the system and change the logical stateof the system which we wanted to protect from changing. Similar to the classicalcase of a bit flip, if we know which logical operator has happened on the latticewe can simple apply the same operator again and cancel its effect. However, theproblem with the non-trivial cycles is that they commute with stabilizer operators.As we discussed earlier in 1.3.10, stabilizers act like parity check matrices andgive us syndromes about the errors. They key idea behind the stabilizers is in theircommutation relation. Stabilizers commute with each other and they can detecterror operators which do not commute with them. This is how a stabilizer give usan error syndrome. Now if an operators occurs on the qubits which changes thestate of physical qubits and it can not be detected by the stabilizers it will act asan un-correctable error on the logical information. So if we apply a series of Paulioperators which make a non-trivial cycle on the code surface,it means we haveapplied a logical operator on the encoded state. However, If these operators happendue to a random event (like a random noise) they change the logical informationand can not be detected. Thus they act as un-correctable errors.18Remember that the logical encoded state can remain the same if the state of physical qubitschange up to trivial cycle of Pauli operators.54This discussion of trivial and non-trivial cycles shows how the geometry ofphysical qubits, the shape 19 of stabilizer operators and also the topology of thesurface of the lattice form the unique proprieties of the surface code.1.4.2 Surface Codes, Error Correction and ThresholdWhen we realize the surface code, the physical data qubits on the lattice are subjectto quantum noise. According to our earlier discussion in section 1.3.3, we can dis-cretize quantum noise as combinations of bit flip and phase flip errors on physicalqubits. Errors can happen on a single qubit or they can happen on adjacent qubitsand form a longer chain of error operators20. Figure 1.7 illustrates how single ran-dom errors on adjacanet qubits on the lattice can form long chains of Pauli phaseor Bit flip operators.The key to detect an error is to use error syndromes provided by the stabilizergenerators. As we studied so far, stabilizer generators are like check operators. Onrepeated cycles, we measure the error syndrome of all plaquette and site stabiliz-ers. As it was mentioned in section 1.3.10 if an error operator anticommutes witha stabilizer operator we see a -1 eigenvalue of that stabilizer which is an error syn-drome. However, as it’s illustrated in figure 1.8 chains of error operators commutewith both plaquette and site stabilizers except at their boundaries. So the only in-formation we detect about errors is the location of the boundaries of those errorson the lattice 1.9.The next is to correct for detected errors. 
The next step is to correct the detected errors. If we knew which qubits had been subject to random error operators, we could apply the same operators to those qubits again, eliminating the error and bringing the system back to its encoded state. However, we showed previously that the only information we can get from the error syndromes is the location of the boundaries of the chains of error operators. Although we do not know the exact position of each error chain that occurred, we also showed that trivial cycles of Pauli operators act trivially on the code space; in other words, a trivial cycle of Pauli operators is a product of stabilizer generators, which fixes the encoded state. Thus it does not matter where exactly the error chain is. We can guess what the error chain was and connect its boundaries by a new chain, called the recovery chain, which is simply our guess of the original chain. Since the boundaries of the recovery chain and the original error chain are identical, together they form a cycle of Pauli operators on the lattice. If this cycle is a trivial cycle, then we have successfully eliminated that error chain. Figure 1.10 illustrates how the original, unknown error chain and our guessed recovery chain can make a trivial cycle, eliminating the effect of the error chain.

Figure 1.10: An example of an error chain together with our guess of a recovery chain, given only the boundaries of the error chain. The black line is the error chain and the blue dashed line is the recovery chain. The Pauli Z plaquette operators inside the loop are emphasized to show how a combination of stabilizers on the lattice can generate a trivial cycle.

Since the single-qubit errors are independent random variables with identical error probability distributions, the probability of an error chain decreases as its length grows. Expanding the probability of error chains of a given length and keeping the dominating factor in the large-size limit, one sees that this probability is in fact exponential in the length of the chain, proportional to p^l_error. So the shortest possible recovery chain between two error boundaries has the highest probability of being the actual error. On the other hand, since we do not want to generate nontrivial cycles, we want to avoid long chains as much as we can.
Therefore the criterion for the recovery path is to find the shortest path connecting the pair of boundary points we want to join. We then apply the same Pauli operator to the data qubits along this path, and by completing a trivial cycle we get rid of that error.²¹

Now, if there are many such error chains on the surface, then after measuring all stabilizers we see an even number of these defective boundary points, which we call defects. We do not know which two defects come from the same chain, because we do not know the chains. Globalizing the argument we had earlier, the best guess of the error correction process is to pair up these defects and put recovery chains between the members of each pair in such a way that the total length of the recovery paths is globally minimized. The task of finding a disjoint pairing of the vertices of a graph is called perfect matching, and if we additionally want to minimize the total length of the paths chosen to pair the vertices, it becomes an optimization problem called the minimum weight perfect matching problem. The classical post processing part of the error correction is exactly where we solve this optimization problem, making the best guess for pairing up the defects. This is the precise statement of the problem we want to address in this research; chapter 3 of the thesis is a detailed discussion of an idea for improving the complexity of this post processing step in the error correction on surface codes. A sketch of the matching step follows below.

Footnote 21: A more sophisticated interpretation of error correction on surface codes is that the boundaries of error chains can be thought of as electric and magnetic quasiparticles, which are created at a point and then drift apart along an error chain. The error correcting procedure must find a recovery chain between each pair of quasiparticles of the same type, bringing them back together to annihilate them [23].
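The following sketch performs the matching on the defect list produced by the earlier detection sketch. It assumes the networkx package is available (an assumption, not a dependency stated in the thesis); networkx ships a maximum weight matching, so negating the distances turns it into the minimum weight perfect matching described above. Distances here are periodic (torus) Manhattan distances, since recovery chains may wrap around the lattice:

    import itertools
    import networkx as nx

    def torus_distance(a, b, L=8):
        dr = abs(a[0] - b[0]); dc = abs(a[1] - b[1])
        return min(dr, L - dr) + min(dc, L - dc)

    G = nx.Graph()
    for u, v in itertools.combinations(defects, 2):
        G.add_edge(u, v, weight=-torus_distance(u, v))   # negate: max -> min

    pairs = nx.max_weight_matching(G, maxcardinality=True)
    for u, v in pairs:
        print(u, '<->', v)      # each pair gets a shortest recovery chain

Edmonds' blossom algorithm, which is what networkx implements, is the same algorithm referred to in chapter 2.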
We will have a detailed algorithmic study of error correction on surface codes in section 2.1, where we also discuss how the probability of failure decays exponentially as we make the lattice larger, provided the error probability is below a certain critical point called the error threshold. The existence of this critical point relies on the probabilistic nature of the error correction. When the error probability becomes higher, longer chains become more probable, so we have a greater chance of making the wrong guess and generating a nontrivial cycle on the code surface; this results in failure of the error correction. So intuitively we expect that when we increase the error probability above a certain value, the error correction becomes ever more likely to fail. This threshold quantifies the robustness of an error correction algorithm, and it is one factor for comparing different approaches.

Theorem 3. The success probability of error correction with surface codes can be made arbitrarily close to 1 by making the lattice size large enough, provided that the error probability is below a certain threshold. [7]

This threshold is calculated numerically using Monte Carlo simulation: for a fixed error rate, as we enlarge the lattice size the success probability increases, approaching 1 in the large-size limit.

An advantage of this scheme of syndrome measurement plus classical post processing is that the post processing and recovery chain generation can be postponed, as long as we keep collecting syndrome results at a proper pace. The idea is that if, for example, a physical bit is flipped, we can continue operating the quantum memory and reverse the bit flip only when we need the quantum information. Likewise, we can record the error syndromes at discrete times and, when we need to use the quantum information stored in our lattice, examine the accumulated error syndromes and do the post processing accordingly. Errors can accumulate through time, but by the discussion we just had, that is fine. However, we still need to collect syndrome measurements with a frequency that does not allow the errors to accumulate past the threshold: we know that in the large-size limit the error correction will most likely fail if the error probability exceeds the threshold. This argument, though, only applies when we want a quantum memory.²² What if we want to do a certain computation on this quantum data by applying quantum gates? For the group of gates called Clifford gates²³ it is still possible to trace the error syndromes and do the post processing later. However, this family of gates is not enough for a universal gate set; to achieve universal quantum computation we need to add non-Clifford gates, and the post processing cannot be postponed past a non-Clifford gate. So when we execute actual quantum gates with our surface codes, we can only postpone the post processing until we hit a non-Clifford gate in the circuit of quantum gates we want to apply. Section 1.4.4 briefly explains how we construct a universal set of quantum gates for fault tolerant quantum computation with surface codes; for a more detailed study of how quantum gates can be done with surface codes we encourage the reader to look at [33].

Figure 1.11: Error chains and their boundaries viewed as quasiparticles. These chains can be recognized as logical operators on the surface code.

1.4.3 Defects, Holes and Scaling

As we discussed earlier, the topology and geometry of the lattice define the shape of the stabilizers and the error correction properties of this code. In the simple case of a torus topology, where the horizontal boundaries of the surface are identified and the vertical boundaries are also identified, we have the toric code, with two different types of nontrivial cycles; this means we can encode two logical quantum states in this architecture of 2L² physical qubits. The number of logical qubit states that we can encode depends on the topology of the surface; by changing this topology we change the encoding properties of the code. In the case of the toric topology, the number of nontrivial cycles, and hence the number of logical encoded states, is twice the number of handles (genus) of the torus.

Defects are used to add nontrivial boundary conditions to the topology of the cluster. Defects can be carved out of the cluster by local Z-measurements of all qubits in the region of a defect. We can add encoded information by adding more defects, but the problem is that short cycles wrapping around a defect are homologically nontrivial cycles which can be neither detected nor corrected, so error correction fails in those cases.

Footnote 22: A quantum memory is a device that can store and preserve some quantum information for a desired amount of time, such that one can later read that information off of it.
Footnote 23: The Clifford group consists of the Hadamard, controlled-Z and π/2 rotation gates.
So we have to re-scale the surface to have thicker defects and hence bigger logical cells. The logical cell is re-scaled from the elementary cell of the lattice by a factor of λ, and defects are thickened by a factor of d, figure 1.12.

Figure 1.12: Scaling factors on a lattice and time evolution of a moving defect. An elementary cell is shown for reference on the top left of the front lattice.

1.4.4 Fault Tolerant Quantum Computation with Surface Codes

The surface code we have been studying so far can act as a memory that preserves the logical quantum state against certain quantum errors. However, it is proven that these codes can be used for quantum computation too, Raussendorf and Harrington [33]. Initially the CNOT gate could be conducted with the surface code using a 3D architecture of stacked surface layers. Later, Raussendorf showed that the CNOT gate can be carried out on the surface code by braid transformations on a single surface [33-35]. This is a great improvement. However, in order to have a universal computation resource we need to be able to carry out a universal set of gates; all other complicated quantum computations can then be composed out of the universal gate set. If we add two single-qubit rotations to our two-qubit CNOT gate, this makes a universal set:

CNOT: the controlled-NOT gate.
S gate: rotation by π/4 along the Z axis.
T gate: rotation by π/8 along the Z axis.

The two rotations are not easy to accomplish. They need special states prepared on ancilla qubits and injected into the surface in order to happen. These special states are called magic states, and since they are hard to prepare it is easier to start with a noisy version and then distill a state with higher fidelity through an iterative process called magic state distillation [3]. The following ancilla states must be prepared with high fidelity in order to realize the rotation gates:

    |Y\rangle = \frac{|0\rangle + i\,|1\rangle}{\sqrt{2}}    (1.154)

    |A\rangle = \frac{|0\rangle + e^{i\pi/4}\,|1\rangle}{\sqrt{2}}    (1.155)

Applying magic state distillation each time increases the fidelity of the output states, and one might expect this to lower the overall operational cost of fault tolerance; but that is generally not the case, since it also increases the number of distillation operations and noisy input states exponentially. Each level of magic state distillation has its own scaling factors.

The operational overhead for each gate type is a function of

1. Circuit size
2. Scaling factors
3. Circuit layout

As shown in figure 1.14, as the circuit becomes bigger, after a certain point it is beneficial to add another level of magic state distillation rather than continuing with the same old noisy states. The bumps on certain curves in this figure occur exactly where the overhead with an extra distillation level becomes less than the overhead of continuing with the same level.

Figure 1.13: Implementing a controlled-NOT gate by twisting a pair of primal and dual holes representing the control and target qubits.

Figure 1.14: The operational overhead of building fault tolerant gates as a function of circuit size for various gates, at error probability p_error = 10^-4.

Chapter 2

Surface Code Simulation

It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong.
— Richard P. Feynman (1918 - 1988)

In this chapter we develop a simulator engine for surface codes in order to carry out numerical analysis of the error correcting procedure.
This chapter starts with a brief explanation of the different modules in the error correcting procedure, which helps in understanding both the classical algorithm and the topological robustness of surface codes against quantum errors.

2.1 Overview of the Error Correcting Algorithm

The toric code and the planar surface code have the same accuracy threshold [16], so I started my simulations with the toric code model, which was simpler to work with. Since we are only considering the memory error, I assumed that our measurements are perfect and thus used the Random-Bond Ising Model (RBIM) to compute the accuracy threshold. In order to build the algorithm, I first start with a simpler error channel which has only Z errors:

    \rho \longrightarrow (1 - p_z)\,\rho + p_z\,\sigma_z \rho \sigma_z^\dagger    (2.1)

Generalizing the result to the real depolarizing channel in equation (2.2) is straightforward, since the X errors are assumed to be independent of the Z errors and are treated in exactly the same manner but on the dual lattice. The only important thing to note here is that Y errors introduce error correlations between the primal and dual lattices.

    \rho \longrightarrow (1 - p)\,\rho + \frac{p}{3}\left(\sigma_z \rho \sigma_z^\dagger + \sigma_x \rho \sigma_x^\dagger + \sigma_y \rho \sigma_y^\dagger\right)    (2.2)

A random error generator is used to generate an error sample on an L×L lattice. The chain E is the set of links (qubits) on the lattice which suffered the Z error. As we know, the only important point is to know the boundary ∂E of the one-chain, to locate the Ising vortices. We can imagine Ising vortices as defects which nucleate in pairs and then drift apart along the links of the one-chain. To recover from the errors, we have to generate another one-chain as a recovery chain bounded by the boundary of the one-chain (the positions of the defects). It is like bringing the two defects together again in order to re-annihilate them.

Figure 2.1: The error chain E and the recovery chain E'. This image is taken from Dennis et al. [7]

So if we consider a complete graph with the Ising vortices as its vertices, then the problem of finding a chain to re-annihilate the defects translates to finding a set of edges of that graph in which every vertex has one and only one neighbour. On a complete graph there are lots of such sets, called matchings of the graph. If we compute the physical distance between two defects in the lattice and take it as the weight of the edge linking the two together, then our problem is to find the minimum weight matching of that graph. I used Edmonds' perfect matching algorithm to find the minimum weight matching, which gives our minimum cost recovery chain to overcome the errors.

The recovery is successful only if the chain D = E + E' has a trivial homology class. Otherwise, a homologically non-trivial cycle has occurred on the lattice and the recovery fails. I repeat this procedure for many randomly generated error samples to estimate the failure probability with respect to the error rate. For p < p_th the failure probability approaches 0 exponentially as L → ∞; a more detailed discussion later in this chapter justifies the following relation between the residual topological error e_failure and the linear size L of the lattice:

    e_{failure} \sim e^{-\kappa(p)\,L}    (2.3)

If we use Monte Carlo simulation to estimate the failure probability for different L, we expect to observe that for p < p_th the failure probability decreases as L increases, while for p > p_th the failure probability grows with lattice size L. So if we plot the failure probability versus p for different L, there should be a unique point where all the curves cross each other, which is the estimated value of the threshold p_th.
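To make this overview concrete before the module-by-module discussion, the following is a minimal sketch of the first two steps, sampling an error chain and locating its boundary, on the torus. It is a toy Python illustration with conventions and helper names of my own choosing, not the thesis's actual simulator code:

    import numpy as np

    def sample_defects(L, p, rng):
        # One bit per edge qubit: h[i, j] and v[i, j] are the horizontal
        # and vertical edges of an L x L torus (2 * L**2 qubits in total).
        h = rng.random((L, L)) < p
        v = rng.random((L, L)) < p
        # A site hosts a defect when an odd number of its four incident
        # edges carry an error; np.roll implements the periodic boundary.
        syndrome = h ^ np.roll(h, 1, axis=1) ^ v ^ np.roll(v, 1, axis=0)
        return np.argwhere(syndrome)

    rng = np.random.default_rng(0)
    defects = sample_defects(11, 0.10, rng)
    assert len(defects) % 2 == 0  # boundaries of chains come in pairs

Every edge is counted by exactly two sites, so the defects returned by such a routine always come in pairs, matching the even number of chain boundaries noted earlier.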
2.1.1 Error Channel and Random Generator

In order to generate random errors on the lattice we used the Boost library, which includes MT19937, a variant of the Mersenne Twister, a famous random number generator with a period of 2^19937 - 1 ≈ 10^6000.

2.1.2 Finding Ising Vortices

If we turn the lattice into a giant graph with the lattice sites as the vertices and the error chain as the edges of the graph, then the Ising vortices, or defects, are the nodes with odd degree.

2.1.3 Edmonds Perfect Matching Algorithm

As mentioned before, the problem of finding the recovery chain can be translated into the problem of finding the minimum weight matching on a complete graph with the defects as its vertices. Here the weight of a chain is the sum of the vertical and horizontal distances between its boundary (the defect pair) in the lattice. The most complex part of our simulation is the perfect matching algorithm, with a complexity of O(L^6) [12]. As was mentioned before, this complexity makes it hard to do numerical analysis for larger lattice sizes and smaller error probabilities. So we want to address this problem and, by adding heuristics to the matching algorithm, make the post processing error correction (the decoding step) faster. The entire chapter 3 of this research is dedicated to this new idea along with its numerical results. In the simulator we used an implementation of the Edmonds matching algorithm called Blossom [24].

2.1.4 Path Generator

After we have found which sites are going to be linked with an E' chain, we have to generate paths on the lattice from each defect to its pair.

Figure 2.2: L = 5 lattice. The error chain is shown with red solid lines and the recovery chain is shown with blue striped lines.

2.1.5 Homologically Non-trivial Cycles

In order to see whether we can successfully preserve the lattice from memory errors, we have to check whether the chain D = E + E' builds a homologically non-trivial cycle. The existence of a non-trivial cycle is checked by a simple algorithm which looks for rows or columns of qubits in the lattice that include an odd number of error chains.

Figure 2.3: D = E + E' builds a homologically non-trivial cycle in this lattice. The solid black lines are the error chains E and the dotted blue lines are the recovery chains E'. The vertices are the ancilla qubits on the sites of the lattice and the edges (lines) are the data qubits affected by either the error or the recovery chains.

2.2 Monte Carlo Simulation

Putting all these blocks together, we have a toric code failure probability simulator:

1. Generate an error sample on the lattice (E chain)
2. Locate the defects (Ising vortices)
3. Find the minimum-weight matching of the defects (E' recovery chain)
4. Examine the homology class of the cycle D = E + E'
5. Declare a success or a failure

The following is a sample result of running the simulator for L = 3, p = 0.1. Error sample matrix:

    \tau_{m,n} =
    \begin{pmatrix}
    0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & -1 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & -1 & 0
    \end{pmatrix}

Location of defects (Ising vortices):

    \xi_{m,n} =
    \begin{pmatrix}
    0 & 0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 1 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0
    \end{pmatrix}

Weighted adjacency matrix of the graph with the defects as vertices:

    G_{i,j} =
    \begin{pmatrix}
    0 & 4 & 2 & 2 \\
    4 & 0 & 2 & 4 \\
    2 & 2 & 0 & 2 \\
    2 & 4 & 2 & 0
    \end{pmatrix}

This matrix is fed into the Edmonds perfect matching algorithm to find the minimum cost matching between the defects.
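For reference, weights like these are wrap-around (toroidal) Manhattan distances between defect coordinates. A small sketch of how such an adjacency matrix can be computed; the function name is my own, and the period of 6 corresponds to the doubled 2L grid used in the example above:

    import numpy as np

    def defect_graph(defects, period):
        # Pairwise shortest distances on the torus: along each axis the
        # separation is min(d, period - d) due to the periodic boundary.
        n = len(defects)
        G = np.zeros((n, n), dtype=int)
        for a in range(n):
            for b in range(a + 1, n):
                d = 0
                for x, y in zip(defects[a], defects[b]):
                    step = abs(x - y)
                    d += min(step, period - step)
                G[a, b] = G[b, a] = d
        return G

    defects = [(0, 4), (2, 2), (2, 4), (4, 4)]  # sites marked in xi above
    print(defect_graph(defects, period=6))      # reproduces G above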
The result is a matrix which clarifies which pairs of defects should be matched (consecutive rows pair up):

    M_{i,j} =
    \begin{pmatrix}
    5 & 5 \\
    3 & 5 \\
    3 & 3 \\
    1 & 5
    \end{pmatrix}

which says, for example, that the defect at ξ_{5,5} must be matched to the defect at ξ_{3,5}, and so on. The result is a lattice including both E chains and E' chains:

    \chi_{m,n} =
    \begin{pmatrix}
    0 & 0 & 0 & -1 & 0 & 0 \\
    0 & 0 & -1 & 0 & 0 & 0 \\
    0 & 0 & 0 & -1 & 0 & 0 \\
    0 & 0 & 0 & 0 & -1 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & -1 & 0
    \end{pmatrix}

Then the whole lattice is translated into the adjacency matrix of a giant graph with the sites of the lattice as its vertices and the E or E' chains as the edges:

    A_{m,n} =
    \begin{pmatrix}
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0
    \end{pmatrix}

This matrix is used to check the homology class of the cycle. There is a non-trivial cycle in this example and the simulator returns +1 as a failure:

Total Failure = 1

A graph of the code is also produced by the simulation; for clarification, a schematic sample of this output is shown in figure 2.4.

Figure 2.4: The simulator illustrates the lattice graph including the E and E' chains.

2.2.1 Numerical Results

The key point in finding p_th is to repeat the simulation for a lot of error samples and for different lattice sizes L. After error analysis we found that 10^5 error samples give us reliable results; the detailed error analysis can be found in section 2.5.

The architecture in this numerical simulation is exactly that of [7] for the bit-flip error channel. Section 2.5 discusses the slight difference between this study and the literature. This can be immediately compared with that of [9], which also introduces a new decoding algorithm, with a threshold of 8.2% for the bit-flip error channel, below the 10.5%, but with the advantage of an operational complexity of L^2 log(L) compared to the L^6 of the Edmonds perfect matching algorithm.

From figure 2.5: p_th ≈ 0.1 = 10%

Figure 2.5: Error threshold, averaging 10^5 error samples (P_failure versus P_error for L = 3, 5, 7, 9, 11).

2.3 Larger Lattice Sizes

Although a higher threshold is an advantage for a decoding algorithm, it is important to note that no quantum computation is going to happen with error probabilities close to the threshold. On the other hand, in order to conduct a real world fault tolerant computation we will be working with lattices of size 100 to 1000, and probably bigger, depending on different parameters of the computation such as the tolerable error, the type of gates, the number of gates, the error level, etc. So what we actually need to do is to study the behavior of these codes for small error probabilities (where they would really operate) and in the large lattice size limit.

Both of these studies are computationally very demanding. The number of qubits grows quadratically with the lattice size, and the perfect matching algorithm is O(L^6), which really slows the process down as we go to larger lattice sizes. In the smaller-error case, we need more random samples in order to observe the less probable error chains, and a couple of orders of magnitude more random samples will also dramatically slow down the Monte Carlo simulation. These two reasons make it challenging to gather high quality numerical results. We use several different methods and approaches to make the simulation faster, and the general purpose of this research is also to design a matching algorithm which is less complex than the Edmonds matching algorithm.

2.4 Crossing Points

The simulation results for L = 3, 5, . . .
, 11 give us a threshold of 10%, while we found a 10.5% threshold in the literature [7] for the same error channel. The first step is to investigate the crossing points of the curves for different lattice sizes. As can be seen in figure 2.6, there is an asymptotic approach toward the 0.105 threshold value reported in previous works as we go to larger lattice sizes.

Figure 2.6: Investigating the crossing point with larger lattice sizes (crossing points versus inverse lattice length, for the lattice-size pairs (3, 7), (11, 15), (19, 23), (27, 31) and (35, 39)).

The next important thing is to compare the results with the numerical error in the Monte Carlo simulation.

2.5 Error Bars

After random noise has occurred on our lattice there are only two possible outcomes: either the topological memory survives the errors and preserves the quantum information, or it fails to correct the errors and the information is lost. When the only possibilities are failure or success, we can model our system by a Bernoulli trial. If we repeat a Bernoulli trial N_max times, one can easily deduce that the Bernoulli distribution approaches a Gaussian distribution as N_max goes to infinity. Like any other random variable with a specific distribution, we expect a mean and a variance for this experiment. Our Monte Carlo simulations in this research can be modeled as a chain of N_max Bernoulli trials, and the numerical error E_Numerical in the resulting data is the relative standard deviation of this Bernoulli distribution:

    E_{Numerical} = \frac{\sigma}{\mu}    (2.4)

For a Bernoulli trial with failure probability P_failure and N_max repetitions:

    \mu = N_{max} \times P_{failure}    (2.5)

    \sigma^2 = N_{max} \times P_{failure} \times (1 - P_{failure})    (2.6)

Using these relations together:

    E_{Numerical} = \frac{\sqrt{N_{max} \times P_{failure} \times (1 - P_{failure})}}{N_{max} \times P_{failure}} = \sqrt{\frac{1 - P_{failure}}{N_{max} \times P_{failure}}}    (2.7)

Now we plug the numerical results at P_error = P_threshold into the relation above; the number of repetitions is 10^5 and the failure probability around the threshold point is P_failure = 0.22:

    E_{Numerical} = \sqrt{\frac{1 - 0.22}{0.22 \times 10^5}} = 0.5954\%    (2.8)

As we can see, this numerical error is the main reason behind the 0.5% gap between our results and the previous ones obtained from the literature [7, 35].

Numerical error for 10^5 samples ≈ 0.6%

Chapter 3

Fast and Simple Error Correction Post Processing

Be Fearful When Others Are Greedy and Greedy When Others Are Fearful.
— Warren Buffett

Although the complexity of the post processing algorithm for Fault Tolerant Quantum Error Correction (FTQEC) is polynomial in the linear size of the lattice, it is just one step in a quantum computation, so it has to be comparably fast; otherwise the speed-up gained from the quantum processor would be masked by the long delay of the classical post processing. The other reason we need a fairly fast and simple post processing step is that much of the theoretical study of these quantum architectures involves Monte Carlo numerical calculations which need millions of runs of these algorithms at large lattice sizes. So the computational cost of these algorithms is very important, and even a small speed-up accumulates and shows up as a great advantage in terms of time.

This is basically the motivation behind the study of post processing algorithms. In the following chapter we propose a very simple greedy heuristic algorithm which serves as the post processing engine. Using parallelising techniques it has a time complexity logarithmic in the lattice size, with a high threshold.
The results can be compared to the renormalization group approach of Duclos-Cianci and Poulin [9, 10]. Using the idea of concatenating codes they are able to reach a threshold of almost 8% with a time complexity logarithmic in the lattice size. Our work, however, results in a family of hybrid algorithms with an adjustable threshold varying from 8.6% to 10.5%, with a time complexity that can be asymptotically as low as that of Duclos-Cianci and Poulin [9]. The implementation of our algorithm is much easier, due to the simple heuristic we used, and it is expected to work more efficiently in low error regimes compared to [10]. Another advantage of our work is that it can make use of approximation algorithms to lower the operational complexity when the threshold is adjusted to be higher than 8.6%.

3.1 Minimum Weight Perfect Matching (MWPM) Algorithm

As we discussed earlier, the error correction step on the lattice reduces to pairing up the defects and then applying the proper operations along a path between each pair to annihilate those defects. The pairing has to happen in such a way that the path between each pair is the best guess for the error chain that actually occurred between them. Since the error probability distribution is i.i.d. (independent and identically distributed), the probability of an error chain decreases with its length. So our guess has the highest probability of being the error chain that actually occurred if we choose the shortest possible path between the two defects. This means that defects should be paired in such a way that the distance between each pair of matched defects is minimal.

This pairing problem reduces exactly to a weighted perfect matching problem on a graph whose vertices are the defects. There is an edge between two vertices (defects) if there is a path between them on the lattice. There may be several paths between two defects on the lattice, but we are interested in the shortest one; hence the weight of each edge is the length of the shortest path between the two defects it joins.

Although a WPM always has an optimal answer on a complete graph, the decoding process is, due to the topological properties of the surface code, really robust with respect to this pairing. As we mentioned earlier, only non-trivial cycles make the code fail, so even if a pairing is not the optimal matching of the defects, it will still correctly compensate for the errors as long as no non-trivial cycle is formed, and the error correction succeeds. This is exactly the robustness that we employ to build a sub-optimal matching algorithm without any dramatic reduction in the threshold.

3.2 A Simple Heuristic

Thus we can easily see that the whole post processing step is translated into finding a minimum cost perfect matching on a graph where the vertices are the defects and the weight of each edge is the length of the shortest path between the two defects it joins. Such a minimum cost perfect matching covers all vertices, so all defects are annihilated, and since it globally optimizes the cost (the length of the paths), it always matches vertices in such a way that the total length of all paths is the minimum it can be.
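As a baseline, this optimal matching can be reproduced with an off-the-shelf Blossom-style solver. The following sketch uses the networkx package, which here stands in for the Blossom implementation used in our simulator, on the defect graph of the worked L = 3 example from chapter 2:

    import networkx as nx

    # Edge weights taken from the matrix G in the chapter 2 example.
    weights = {(0, 1): 4, (0, 2): 2, (0, 3): 2,
               (1, 2): 2, (1, 3): 4, (2, 3): 2}
    G = nx.Graph()
    for (a, b), w in weights.items():
        G.add_edge(a, b, weight=w)

    # Exact minimum weight perfect matching (Blossom algorithm inside).
    pairs = nx.min_weight_matching(G)
    print(sorted(map(sorted, pairs)))  # [[0, 3], [1, 2]], total cost 4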
In effect, it tries to match defects which are close to each other, to reduce the total length of the paths.

The resulting defect graph on which we want to run the perfect matching has a very nice property that can be used as a heuristic, added to our post processing algorithm, to dramatically lower the complexity of the matching step. Since the defects lie on a lattice (a connected graph), there is at least one path between any two defects. So the graph that we build from the defects and the paths between them is a complete graph.

Finding a perfect matching on a complete graph with an even number of vertices is simpler than finding a perfect matching on an arbitrary graph. In fact, since there exists an edge between any two vertices of a complete graph, any random pairing of the vertices is a perfect matching. So we have almost no difficulty finding a perfect matching on a complete graph. A simple random perfect matching procedure on a complete graph is written in pseudocode in Algorithm 3.1.

Figure 3.1: Random matchings on a complete graph with 8 vertices.

Algorithm 3.1 Simple random perfect matching algorithm on a complete graph.
  Initialize G, a complete graph of size n
  G_vertices <- vector of vertices in G
  G_edges <- vector of edges (vector of pairs of vertices)
  while the number of vertices in G is not zero do
    x1, x2 <- random pair of vertices from G_vertices
    append [x1, x2] to the matching list
    delete x1 and x2 from G_vertices
    delete all edges adjacent to x1 and x2 except the one between them
  end while

However, as we discussed earlier, we are not interested in just any perfect matching; more precisely, we are interested in finding a minimum cost perfect matching, where the cost is the length of the paths between matched vertices. This makes the problem considerably harder, because it is now mixed with the optimization problem of minimizing the cost. However, we will use a simple heuristic, applying a very small modification to the simple algorithm above, to turn it into a minimum cost perfect matching algorithm.

By adding a greedy heuristic to the simple matching algorithm on a complete graph, the program tries to minimize the cost of the matching. The heuristic is as follows: since we want to minimize the cost of the perfect matching, instead of choosing a random pair of vertices each time, we pick the pair with the lowest cost for the edge between them.

In terms of the error correction on the lattice of qubits, this heuristic means that when starting the error correction we look at all defects and calculate their mutual distances, then match the two defects which are closer to each other than any other pair. We apply the most trivial chain of Pauli operators between them and annihilate them. Then we can assume these two defects have disappeared, go to the next pair with the shortest distance, and repeat this procedure until all defects are paired up.

So the only thing we need to change in the previous trivial algorithm is to sort the list of edges according to their weights; this lets us find the lowest cost pair at each iteration.

3.3 Sub-optimality and Adjustment of Weights

As we discussed earlier, like the other approaches to finding the simplest post processing algorithm, our proposed method is also a sub-optimal approach.
All these sub-optimal approaches are then compared based on the highest threshold that they can reach and the distance of their threshold from the optimal matching threshold. So, as expected, there is a trade-off between the complexity of the algorithm and how close it gets to the optimal threshold.

Algorithm 3.2 Perfect matching algorithm on a complete graph with sorted weights.
  Initialize G, a complete graph of size n
  G_vertices <- vector of vertices in G
  G_edges <- vector of edges (vector of pairs of vertices)
  G_edges <- sort G_edges in ascending order of weight
  while the number of vertices in G is not zero do
    [x1, x2] <- the left-most edge in G_edges
    append [x1, x2] to the matching list
    delete x1 and x2 from G_vertices
    delete all edges adjacent to x1 and x2 except the one between them
  end while

Although the sorting algorithm, with all its simplicity, gives us a relatively good threshold which is comparable to other approaches, we can study why this simple algorithm is sub-optimal and perhaps apply minor modifications to get a slightly more complex algorithm with a significant gain in terms of threshold.

Figure 3.2 shows a simple example where sorting the list of edge weights fails to give us the optimal matching. The sorted list of weights is [1, 2, 2, 100]. Applying our simple greedy matching algorithm, the first edge to be picked is (0, 1) with weight 1. After that we have to erase all other edges adjacent to vertices 0 and 1, so the only edge remaining in the graph is (2, 3) with cost 100. The total cost of this matching is 101; but we can readily see that if we pick the edges (0, 3) and (1, 2) we cover all vertices of the graph with a matching of cost 4.

Figure 3.2: A simple, unrealistic and extreme example of a situation where sorting fails to minimize the matching cost (four vertices 0-3 with edge weights 1, 2, 2 and 100).

This example shows that optimizing the cost of the perfect matching is a global optimization problem. If we use divide and conquer methods to break it into smaller sub-problems over subsets of vertices, optimizing those sub-problems doesn't guarantee that the global problem will also be optimized.

We add a simple step that redefines the weight of each edge in the defect graph in order to partially tackle this problem. The idea behind this weight modification is to consider the edges that we are left with after choosing a specific edge for the matching and removing its endpoints from the defect graph. We iterate over all edges of the graph and check what remains if each edge is chosen for the matching. Then we look at the remaining edges and add a penalty, inversely proportional to the sum of their weights, to the weight of that specific edge. In this way, if choosing an edge removes a lot of small-weight edges it gets more of a penalty than an edge whose choice removes a lot of higher-weight edges.

3.4 Complexity of the Algorithm

The motivation behind this study was to design and develop a simpler and faster algorithm for the post processing step. Thus, it is really important to evaluate the complexity of the proposed algorithm.
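For reference before the complexity discussion, here is a compact Python rendering of Algorithm 3.2; it is a sketch of the heuristic rather than the thesis's production code, with names of my own choosing:

    def greedy_matching(G):
        # G is the symmetric weight matrix of the complete defect graph.
        n = len(G)
        edges = sorted((G[a][b], a, b)
                       for a in range(n) for b in range(a + 1, n))
        unmatched = set(range(n))
        pairs = []
        for _, a, b in edges:  # cheapest surviving edge first
            if a in unmatched and b in unmatched:
                pairs.append((a, b))
                unmatched -= {a, b}
        return pairs

On the four-vertex example of figure 3.2, this returns the cost-101 matching, illustrating the sub-optimality discussed above.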
Going through the operations, one can see that the complexity is dominated by the sorting step, which is the heart of this heuristic.

Table 3.1: List of selected comparison-based sorting algorithms

  Name                 | Best     | Average  | Worst        | Memory
  ---------------------|----------|----------|--------------|--------------------------
  Quick sort           | n log(n) | n log(n) | n^2          | average log(n), worst n
  Merge sort           | n log(n) | n log(n) | n log(n)     | worst case n
  In-place merge sort  | -        | -        | n (log(n))^2 | 1
  Insertion sort       | n        | n^2      | n^2          | 1
  Bubble sort          | n        | n^2      | n^2          | 1
  Selection sort       | n^2      | n^2      | n^2          | 1

Sorting is perhaps the most well-known and oldest class of computer algorithms; in fact, almost all algorithms textbooks start with a chapter on sorting. These algorithms are often classified based on their computational complexity.

Different sorting algorithms are designed for different scenarios. Operation complexity, use of memory, stability, and so on are the parameters that become important when choosing a proper sorting algorithm for a given situation. Our goal is to reduce the complexity of the matching algorithm, so memory is not a concern here; we want to run the algorithm with the lowest time complexity possible.

With sorting as the bottleneck of our algorithm, we gain a lot of freedom in choosing the sorting scheme that best fits the needs and requirements of a specific application. But the advantage is not only the wide variety of available sorting algorithms. Sorting algorithms are also very well studied for parallelisation and multi-threading, and there have been great achievements in reducing their time complexity using parallel techniques. In the next section we explain how we benefit from this advantage.

3.4.1 Parallel Processing

Parallel processing is the idea of dividing up the tasks in an algorithm and running them at the same time on many different processing devices. In this way the operational work of the algorithm is divided among the parallel devices running the different parts of the code, and the overall time complexity of the parallel algorithm is reduced in proportion to the number of parallel devices.

Among all algorithms, those designed around a divide and conquer method have great potential to be parallelized, since they are already designed for a divided work load. Merge sort is a great example, where different levels of multi-threading and parallelisation can be applied to reduce its time complexity from n log(n) to log^3(n). There is a detailed study of the operational complexity of merge sort in section 2.3.1 of Cormen et al. [6]; on page 803 of the same reference a detailed implementation of a multi-threaded parallel merge sort and its complexity analysis can be found.

There are other sophisticated parallel and multi-threaded implementations of sorting algorithms, even hybrid engines combining several of them, that show much better time complexity. David Powers used a CRCW PRAM with n processors to develop an implementation of quicksort which runs with a time complexity of log(n) [31, 32]. Many similar studies have been published over the last 20 years.

In conclusion, sorting algorithms are a well studied subject in computer science and there are plenty of resources on how to optimize their time complexity. Using multi-threading and parallelisation techniques, it is possible to get a random list of n elements sorted in log(n) time.
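As a toy illustration of the divide-and-conquer parallelism discussed above (not an optimal PRAM algorithm), the chunks of a list can be sorted in separate processes and then merged:

    from concurrent.futures import ProcessPoolExecutor
    from heapq import merge

    def parallel_sort(data, workers=4):
        # Split into roughly equal chunks, sort them in parallel
        # processes, then k-way merge the sorted runs in linear time.
        step = max(1, len(data) // workers)
        chunks = [data[i:i + step] for i in range(0, len(data), step)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            runs = list(pool.map(sorted, chunks))
        return list(merge(*runs))

    if __name__ == "__main__":
        import random
        xs = [random.random() for _ in range(100_000)]
        assert parallel_sort(xs) == sorted(xs)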
3.4.2 Complexity Analysis

As discussed earlier, the complexity of the matching algorithm reduces to the complexity of the sorting step, which depends on the size of the list we want to sort. This list consists of all possible edges of a complete graph whose vertices are the defects on the two dimensional qubit lattice. If N is the number of edges:

    N = \binom{v}{2} = \frac{v(v-1)}{2}    (3.1)

where v is the number of defects on the lattice. Now that we have the relation between the number of elements in the sorting list and the number of defects on the lattice, we have to count the defects. As we discussed in chapter 1, defects are drawn from a set of independent and identically distributed (i.i.d.) random variables, so we have to find the expected value (average) of the number of defects on a lattice.

Each qubit can suffer an error with probability p_error or stay as it is with probability (1 - p_error). A lattice of linear size L has exactly 2L^2 qubits on it, so if X is a random variable whose value is the total number of defects on an instance of a lattice of size L:

    v = E[X] = p_{error} \times (\text{total number of qubits}) = 2\, p_{error} L^2    (3.2)

Combining equations 3.1 and 3.2, we get the final relation between the size of the sorted list and the inputs of the matching algorithm. We can write N as a function of the input parameters L and p_error:

    N(L, p_{error}) = \binom{2 p_{error} L^2}{2} = \frac{2 p_{error} L^2 \,(2 p_{error} L^2 - 1)}{2} \approx L^4    (3.3)

Equation 3.3 says that the matching algorithm has to sort a list of size proportional to L^4. For instance, at L = 100 and p_error = 0.01 we expect on average v = 2 x 0.01 x 100^2 = 200 defects, and hence N ≈ 2 x 10^4 edges to sort. Using the techniques discussed in the previous sections, the sorting can then be carried out in log(L) time, so the overall time complexity of this algorithm is logarithmic in L.

3.5 Numerical Results: Threshold

As we discussed in the introduction, since an analytic study of the threshold is unfeasible, it is common in the literature to use Monte Carlo methods to get statistical information about the error profile and the threshold.

As we discussed in the error bar analysis of section 2.5, the uncertainty in the random sample results of a Monte Carlo simulation dictates the minimum number of samples we need in order to have a meaningful distance between our data points. For the following study of the threshold, each data point is the average of 10^5 random samples, having an uncertainty close to 0.05 for e_failure.

The numerical results give a threshold of 7.6% for our pure sorting-based perfect matching algorithm. So far we have designed a very simple greedy heuristic which improves the time complexity of the post processing from almost L^6 to log(L), while only decreasing the threshold from 10.5% to 7.6%.
But this is not the end of the story.

Figure 3.3: The threshold calculation for the refined sorting-based matching algorithm (residual error versus error probability when the optimal subgraph is of size 4, for L = 7, 9, 11, 13, 17, 21, 25, 29, 33; threshold ≈ 0.076).

Although the performance of the proposed algorithm is close to the optimal case, with a lot of improvement in terms of time complexity and simplicity, we want to push this further and implement a hybrid matching engine that reaches even closer to the optimal threshold while maintaining the simplicity of the algorithm. This idea is explained in detail in the next section.

3.5.1 3-Dimensional Topological Memory

So far we have assumed that our measurements are ideal, but in a noisy environment a measurement has an error probability q_e. It has been shown that the 2-dimensional surface code with faulty measurements can be modeled as a three dimensional architecture of qubits: the surface code plus a third dimension along which the measurements happen through time. The error correction can be mapped to a random plaquette Z2-gauge model (RPGM) in 3 dimensions, Dennis et al. [7].

Figure 3.4: Threshold calculations for the 3-dimensional topological quantum memory architecture using surface codes (P_failure versus P_error for L = 3, 5, 7, 9, 11).

As can be seen in figure 3.4, our method has a threshold of 1% for the 3-dimensional surface code and the bit-flip channel. The RPGM fault tolerance error threshold for a depolarizing channel is shown to be 3.2% [29], and the minimum weight perfect matching decoding algorithm has a lower threshold of 2.9% [35]. Unfortunately there is no study of the 3D surface code for the renormalization group decoder of Duclos-Cianci and Poulin [11] to compare with.

3.6 A Family of Hybrid Algorithms and Improved Threshold

Now that we have a very fast and straightforward greedy algorithm to do the matching, let's take a step back and look at what is actually happening on the lattice when we match defects.

As we discussed in section 2.1, chains of errors introduce defects at their boundaries. When we want to eliminate these defects, we need to guess what the error chain was and undo the Pauli operations due to the error. But thanks to the stabilizers we can only see the boundaries of the chains (the defects), not the exact error chains themselves. Again the stabilizer code and the topology of the surface come to our help. Since a trivial cycle of Pauli operators on the surface does not act as a logical operator and keeps the system in its encoded state, we do not need to guess the exact path of the error chains. Given the boundaries, the only thing we need to do is to try to make trivial cycles with the defects. The defects are already connected by a chain of errors, so all we need to do is find a path between two defects and apply the specific Pauli operator along it. This path, added to the chain of errors, makes a homologically trivial cycle on the code surface and eliminates those defects.

While doing this defect pairing we have to be careful about one thing: this process succeeds only if we manage to build trivial cycles with the error chains. A non-trivial cycle applies a logical operation to the surface and changes the logical data that we are storing on it.

Another way of looking at this is to study the length of the error chains.
A non-trivial cycle on a torus, for example, has to wrap around the surface of the code, which means the non-trivial cycle has to be of length L. This is a very long chain of errors, and its probability of occurrence decreases as we go to larger lattice sizes.

So the idea is to avoid generating long chains, because they have a higher probability of becoming a non-trivial cycle. Since we do not know what the error chains actually are, we try to choose the shortest path between the two defects for error correction, so that the error correcting path added to the real error chain makes the smallest possible cycle; in this way we try to minimize the probability of generating a non-trivial cycle.

This argument has two important consequences for us:

1. First, we always have to pick the shortest path between two paired defects, to make sure we minimize the probability of building a non-trivial cycle together with the error chain between them. This leads to the idea of minimum cost perfect matching. Minimizing the cost means always trying to pair the defects in such a way as to have the minimum overall path length, avoiding the building of long chains on the lattice. A more local, hence sub-optimal, version of the same idea is to do it greedily and always pair the two nearest defects, which is what we used to build our simple algorithm.

2. The second point is the building block of the idea behind this section. In addition to the first point, what we can take from this is that the physical system does not care how you pair up the defects as long as you do not make a non-trivial cycle. On the other hand, non-trivial cycles are chains of length L, which are less probable to occur, and their probability is reduced as we go to larger lattice sizes. These two points together suggest that we only need to be careful when we are dealing with a long chain of errors. In other words, when defects are closer than L/2 to each other we do not need to pair them in the most optimal way; but when we are dealing with defects that are far from each other, if we pick a wrong pair then there is a high probability that we end up generating a non-trivial cycle on the lattice.

These two points lead us to the general idea behind a hybrid decoding engine for surface codes which has the highest threshold among all the sub-optimal approaches, while maintaining a time complexity of operations logarithmic in the size of the lattice.

3.6.1 The General Idea

Combining the two facts discussed in the previous section, we use branch and cut techniques to build a hybrid matching algorithm on the defect graph. As we discussed before, when we face long error chains we really need to be careful in the matching; but when it comes to defects which are close to each other, even simple sorting and greedy matching of the defects won't harm the overall performance of the surface code. So we take a branch and cut approach to find a perfect matching on the complete defect graph.

The general idea is to apply the perfect matching in its simplest form until we are left with defects which are all distant from each other. At that stage we need to be careful, because picking a sub-optimal matching might cause a long chain and consequently a non-trivial cycle on the surface.
This is where we stop the greedy algorithm and apply the optimal Edmonds matching algorithm to pair up the rest of the defects.

Please note that shorter chains are more probable on a lattice, and since they are short in comparison to the size of the lattice they have a lower probability of making a non-trivial cycle, even if they are matched sub-optimally. On the other hand, longer chains are less probable, and since they are long and comparable to the size of the lattice they have a higher probability of making a non-trivial cycle. So the idea is to apply the sub-optimal, fast algorithm to the bigger portion of the defects (those that are a short distance from each other) and then apply the optimal, slow algorithm to the remaining small number of defects which are far from each other. In this way we act fast where we can, and act optimally and slowly where we should be careful, and thereby maintain both a high threshold (probability of successful decoding) and a low time complexity.

When we start our greedy sort-based matching algorithm, we choose the shortest edges and pair the defects at their ends. Note that we do the matching process step by step: each time one edge is picked, it is added to the matching list, its vertices and all edges adjacent to those vertices are removed from the original graph, and then the same procedure is repeated on the remaining graph. But there is an important fact to remember: each time we are done with one step, the remaining graph is still a complete graph. In fact, it is a complete graph with 2 fewer vertices than the complete graph before that step.

Now we define a parameter called the Optimal Matching Subgraph relative Size (OMSS) as a barrier. We keep repeating the greedy step until only OMSS percent of the total number of defects on the lattice remains unmatched. Then we turn off the greedy algorithm and apply the optimal Edmonds perfect matching algorithm to the rest of the defects, in order to minimize the risk of building non-trivial cycles between vertices that are a long distance apart.

The OMSS parameter defines what portion of the matching problem is solved with the simple greedy algorithm and what portion is solved with the slower yet optimal Edmonds matching. So this parameter controls a trade-off between time complexity and threshold. A bigger OMSS means the optimal Edmonds matching is applied to a bigger subgraph, so the threshold will be closer to the optimal one, at the cost of the overall algorithm becoming slower. The two extreme cases are OMSS = 0% and OMSS = 100%. The first means all defects are matched using the simple fast greedy algorithm; it gives us the sub-optimal 7.6% threshold and a time complexity of order log(L). The second case, OMSS = 100%, where all defects are matched using the Edmonds algorithm, gives us the optimal 10.5% threshold and, of course, the high complexity of the Edmonds matching, of order at least L^6. Varying the OMSS parameter between 0 and 100% results in a gradual increase of the threshold from 7.6% to 10.5%, at the cost of added complexity.

The next step is to see the behavior of this trade-off: we need to study the change in threshold with respect to increasing OMSS.
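Putting the pieces together, the following sketch shows the shape of the hybrid decoder. It is a toy Python illustration under my own naming; networkx's exact matcher stands in for the Blossom implementation, and the floor of 4 remaining defects reflects the lower bound discussed at the end of this section:

    import networkx as nx

    def hybrid_matching(G, omss):
        # Greedy phase: match the cheapest pairs until only an `omss`
        # fraction of the defects (at least 4) remains unmatched.
        n = len(G)
        edges = sorted((G[a][b], a, b)
                       for a in range(n) for b in range(a + 1, n))
        unmatched, pairs = set(range(n)), []
        for _, a, b in edges:
            if len(unmatched) <= max(4, int(omss * n)):
                break  # hand the long-range leftovers to the optimum
            if a in unmatched and b in unmatched:
                pairs.append((a, b))
                unmatched -= {a, b}
        # Optimal phase: exact minimum weight perfect matching on the
        # remaining complete subgraph of far-apart defects.
        H = nx.Graph()
        H.add_weighted_edges_from((a, b, G[a][b])
                                  for a in unmatched for b in unmatched
                                  if a < b)
        pairs.extend(nx.min_weight_matching(H))
        return pairs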
3.6.2 Numerical Results: Spectrum of Thresholds

We calculated the threshold for various OMSS around a fixed lattice size to see how the threshold improves as the parameter increases. As we can see in figure 3.5, the threshold strictly increases, non-linearly, as OMSS is increased. The other behavior that we observe is the overall shift of the threshold curves as we move toward larger lattice sizes. It appears from figure 3.5 that for a fixed OMSS the threshold approaches a large-size limit as we increase the size of the lattice. This observation is studied in detail in the next section.

Figure 3.5: Threshold values versus OMSS, for different lattice sizes (curves for L = 11-31 and L = 21-41, and raw data for L = 71-91).

3.6.3 Large Lattice Size Limit

In order to investigate the large lattice size limit of the threshold with varying OMSS, we plot fixed-OMSS graphs of the threshold versus 1/L. This lets us extrapolate the infinite size limit of the threshold for a fixed OMSS.

Figure 3.6: Extrapolating the large size limit threshold for a 2% portion of the Edmonds matching algorithm (threshold versus 1/L at OMSS = 2%).

Figure 3.6 shows a sample of the extrapolation method used to obtain the numerical results for a fixed OMSS = 2%; we obtain one extrapolated limiting threshold for each fixed OMSS.

Collecting all the large size limit thresholds obtained via extrapolation and plotting them against their OMSS, we get the large size limit behavior of the threshold as the size of the optimal matching subgraph is varied. Figure 3.7 shows this large size limit behavior. The graph has two important parts: a rapid step-like jump from the 7.6% to the 8.6% threshold at the beginning, and then a parabolic rise from the 8.6% to the 10.5% threshold.

The second, parabolic part of the figure shows the non-linear increase of the threshold as OMSS changes. Although it is interesting in itself that the threshold grows faster than OMSS, the most interesting part of the graph is the step-like jump at the beginning.

In order to study the jump, let's look at the complexity more carefully. OMSS is a constant multiplying the total number of defects. So in the large size limit, when we are calculating the order of complexity, it does not really matter whether we apply the Edmonds matching to the whole system or only to a fixed fraction of it, i.e. O((OMSS x L)^6) = O((c L)^6) = O(L^6), with c a constant. The Edmonds matching still dominates, and hence the overall time complexity of the algorithm is as bad as that of the Edmonds matching itself. This argument shows that if the optimal matching subgraph grows in proportion to the lattice (a fixed OMSS), the time complexity does not dramatically improve over the Edmonds matching. But the good news is that the threshold increases non-linearly when we change OMSS linearly, and most of the non-linearity is observed at the beginning of the figure. We can conclude from the jump in figure 3.7 that the optimal subgraph does not need to grow in proportion to the lattice for us to reach a threshold of 8.6%.

Since the jump suggests that, as long as OMSS is non-zero, in the large size limit we get to the 8.6% threshold, we can choose OMSS as low as possible, so that the complexity of that instance of the algorithm is dominated by the sorting algorithm and not by the Edmonds perfect matching.
Now we need to study how the threshold behaves with increasing lattice size to test this conjecture.

Figure 3.7: Asymptotic large size limit curve (threshold versus the relative size of the perfect matching subgraph, in percent; an asymptotic prediction of the threshold behavior in the large size limit).

In conclusion, we have designed an algorithm that has a time complexity of order log(L) and a large size limit threshold of 8.6% for a bit-flip error channel on a lattice of qubits. This result can be compared to those of [9] and [10].

The only thing to note before using this is that the hybrid algorithm makes sense only when the size of the remaining complete graph for the Edmonds matching is at least 4; the next smaller complete graph is of size 2, which has only one possible perfect matching, so the different algorithms would give the same result. This leads to the following calculation, which suggests a lower bound on the size of the large lattice:

    OMSS \times (2\, p_{error} L^2) \geq 4    (3.4)

    L_{large} \geq \left\lceil \sqrt{\frac{2}{OMSS \times p_{error}}} \right\rceil    (3.5)

3.6.4 The Threshold Jump

As illustrated in figure 3.7, there is a very steep increase in the threshold when we go to OMSS greater than zero. Intuitively, this suggests that by turning the Edmonds perfect matching on over a very small defect subgraph we can obtain a significant increase in the threshold. Now we need to study the size of this small subgraph as we increase the lattice size. We conjecture that, as the lattice size increases, this small subgraph size decays exponentially, and hence in the large size limit the sorting-based matching, not the Edmonds matching, dominates the complexity of the whole algorithm.

Figure 3.8 shows the way we studied our conjecture. We fixed the error probability right in the middle of the steep jump, at p_error = 8.1%, then observed how the success probability of the code changes with the lattice size as we go from OMSS = 0% to OMSS = 2%, where the jump ends. For the configurations where p_error = 8.1% is above their threshold we expect a success probability close to 0.5, and for the cases where p_error = 8.1% is below their threshold we expect a success probability close to 1. The exponential decay of the transition line from below to above threshold in figure 3.8 supports our conjecture. However, we need to go to larger lattice sizes in order to obtain a better estimate of the curve fitted to the transition line.

This basically means that in the large size limit the OMSS needed to reach a threshold of 8.6% decays exponentially; thus the complexity of the algorithm is dominated by the sorting heuristic.

Figure 3.8: The success probability of surface code error correction, classified into 3 groups as a function of lattice size and OMSS (%): close to 1 (black), which happens when we are below threshold; close to 0.5 (white), when we are above the threshold; and close to 0.75 (gray), when we are really close to the threshold itself. Data is gathered for a fixed error probability of p_error = 8.1%.

3.7 Approximations to WPM Algorithms and Improved Complexity

As discussed in [12], matching is one of those applied problems for which there exists a polynomial time solution, while all trivial algorithms take exponential time to solve it.
One of the steps that computer scientists take to reduce the computational complexity of hard optimization problems is to try approximation methods. Approximations become very important when you can tolerate an answer close enough to the optimal one in exchange for a gain in speed. Sometimes an exact algorithm for a problem cannot be parallelized, while an approximation of it is easily parallelised and saves a lot of time. This is another motivation behind approximation algorithms.

In our case, through the various experiments done in this study and other studies in the literature, it is easy to see that fault tolerant quantum computation with cluster states is robust to a change of decoding algorithm, and even the simplest heuristic maintains a threshold highly comparable to that of the optimal case. This suggests that there may be room for improving the time complexity of the decoding algorithm by using approximation methods for the Edmonds MWPM.

A super fast approximation to the matching algorithm is introduced in Holloway et al. [22] for minimum weight perfect matching on a complete graph satisfying the triangle inequality between its weights. This algorithm is highly parallelisable and has a time complexity of O(log^3(n)), where n is the number of vertices in the complete graph. With our complexity calculations of section 3.4.2 this means a time complexity of O(log^3(L)), where L is the linear size of the lattice.

As future work, it would be great to run the Monte Carlo simulations with an implementation of Holloway's algorithm and observe the behavior of the threshold. What we can conjecture is that the threshold is definitely bigger than 8.6%, because this algorithm should perform at least as well as the simple sorting heuristic. The result would be a poly-logarithmic algorithm with a threshold even higher than 8.6%.

Chapter 4

Conclusions

To be an error and to be cast out is a part of God's design.
— William Blake (1757 - 1827)

In summary, we used the following ideas and heuristics to improve the post processing algorithm for FTQEC using surface codes:

Complete graph: The graph made of the defects as its vertices, with the edges between them weighted by the length of the shortest path between the two defects, forms a complete graph. Finding a perfect matching on a complete graph is easier than on a general graph because all vertices are connected.

Sorting and approximating: Because of the above, finding a minimum weight perfect matching on a complete graph can be approximated by heuristics as simple as sorting all the edges and always picking the shortest one.

Parallelising: These approximation algorithms are highly parallelisable, which can help us reach a much lower time complexity.

Hybrid algorithms: Because of the robustness of the lattice against trivial cycles, we only need to be careful when we are decoding long range defects. So we can apply our approximation to the bigger portion of the defects, which are numerically shown to be close to each other, and then apply the slow optimal algorithm to the remaining long range defects, which are again numerically shown to form a small portion of the defects, particularly at large lattice sizes.

Using all these ideas we proposed a family of decoding algorithms with a sliding parameter, OMSS, which can give any threshold between 8.6% and the optimal 10.5% by adding more complexity. The complexity of obtaining the 8.6% threshold is log(L) for large lattice sizes.
Please note that the same idea of hybrid algorithms can be used with other approximation algorithms such as [10], or it can be implemented with [22], to improve both complexity and threshold.

The most important fact about our proposed approach is its compatibility with the nature of fault tolerant quantum computation with surface codes. Although the threshold is a parameter used to compare different methods, the surface codes are not expected to operate anywhere close to the threshold. In a real scenario the surface code will run on a large lattice and at very small error probabilities. Our heuristic fails where long and short chains are close to each other on the surface. However, in a low-error regime we expect most of the error chains to be very short (almost 1 or 2 errors) and distant from each other (almost no long chains, and well separated). Thanks to this and to the topological robustness of the surface codes, our sorting-based matching is expected to work really well even without the hybrid or approximation methods. In fact, this is part of the study we want to propose as very important future work. We can extend and continue this research in several directions, including but not limited to the following:

• Simulate and compare the behavior of the proposed algorithm in low-error regimes, especially comparing the quantum resources needed by each of the different error correcting methods.

• Develop the complete graph MWPM approximation algorithm to measure the potential improvement in threshold and complexity.

• Reduce the operational complexity of the algorithm by doing the error matching on the lattice data structure instead of extracting a new graph from it.

Bibliography

[1] D. Aharonov and M. Ben-Or. Fault-tolerant quantum computation with constant error rate, 1999.

[2] S. Boixo, T. F. Ronnow, S. V. Isakov, Z. Wang, D. Wecker, D. A. Lidar, J. M. Martinis, and M. Troyer. Evidence for quantum annealing with more than one hundred qubits. Nat. Phys., 10:218-224, 2014. doi:10.1038/nphys2900.

[3] S. Bravyi and A. Kitaev. Universal quantum computation with ideal Clifford gates and noisy ancillas. Phys. Rev. A, 71:022316, 2005. doi:10.1103/PhysRevA.71.022316.

[4] S. B. Bravyi and A. Y. Kitaev. Quantum codes on a lattice with boundary, 1998.

[5] H. J. Briegel, D. E. Browne, W. Dur, R. Raussendorf, and M. Van den Nest. Measurement-based quantum computation. Nat. Phys., 2009.

[6] T. H. Cormen, C. E. Leiserson, R. Rivest, and C. Stein. Introduction to Algorithms. The MIT Press, 3rd edition, 2009.

[7] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill. Topological quantum memory. Journal of Mathematical Physics, 43(9):4452-4505, 2002. doi:10.1063/1.1499754.

[8] S. J. Devitt, W. J. Munro, and K. Nemoto. Quantum error correction for beginners. Reports on Progress in Physics, 76(7):076001, 2013.

[9] G. Duclos-Cianci and D. Poulin. Fast decoders for topological quantum codes. Phys. Rev. Lett., 104, 2010.

[10] G. Duclos-Cianci and D. Poulin. A renormalization group decoding algorithm for topological quantum codes. In IEEE Information Theory Workshop (ITW), 2010.

[11] G. Duclos-Cianci and D. Poulin. Fault-tolerant renormalization group decoder for abelian topological codes.
Quantum Information and Computation, 14:721-740, 2014.

[12] J. Edmonds. Paths, trees, and flowers. Canadian Journal of Mathematics, 17, 1965.

[13] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. Quantum computation by adiabatic evolution, 2000. arXiv:quant-ph/0001106.

[14] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda. A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science, 292(5516):472-475, 2001. doi:10.1126/science.1057726.

[15] R. P. Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21(6-7):467-488, 1982. doi:10.1007/BF02650179.

[16] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland. Surface codes: Towards practical large-scale quantum computation. Phys. Rev. A, 86:032324, 2012. doi:10.1103/PhysRevA.86.032324.

[17] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.

[18] D. Gottesman. Stabilizer codes and quantum error correction. PhD thesis, Caltech, 1997.

[19] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing (STOC '96), pages 212-219, New York, NY, USA, 1996. ACM. doi:10.1145/237814.237866.

[20] R. Harris, J. Johansson, A. J. Berkley, M. W. Johnson, T. Lanting, S. Han, P. Bunyk, E. Ladizinsky, T. Oh, I. Perminov, E. Tolkacheva, S. Uchaikin, E. M. Chapple, C. Enderud, C. Rich, M. Thom, J. Wang, B. Wilson, and G. Rose. Experimental demonstration of a robust and scalable flux qubit. Phys. Rev. B, 81:134510, 2010. doi:10.1103/PhysRevB.81.134510.

[21] W. K. Hensinger, S. Olmschenk, D. Stick, D. Hucul, M. Yeo, M. Acton, L. Deslauriers, C. Monroe, and J. Rabchuk. T-junction ion trap array for two-dimensional ion shuttling, storage, and manipulation. Applied Physics Letters, 88(3):034101, 2006. doi:10.1063/1.2164910.

[22] N. W. Holloway, S. Ravindran, and A. M. Gibbons. Approximating minimum weight perfect matchings for complete graphs satisfying the triangle inequality. In Graph-Theoretic Concepts in Computer Science, volume 790 of Lecture Notes in Computer Science, pages 11-20, 1994.

[23] A. Y. Kitaev. Fault tolerant quantum computation by anyons. Annals of Physics, 303:2-30, 2003. doi:10.1016/S0003-4916(02)00018-0.

[24] V. Kolmogorov. Blossom V: a new implementation of a minimum cost perfect matching algorithm. Mathematical Programming Computation, 1(1):43-67, 2009. doi:10.1007/s12532-009-0002-8.

[25] A. Lucas. Ising formulations of many NP problems. Frontiers in Physics, 2(5), 2014. doi:10.3389/fphy.2014.00005.

[26] O. Mandel, M. Greiner, A. Widera, T. Rom, T. W. Hänsch, and I. Bloch. Coherent transport of neutral atoms in spin-dependent optical lattice potentials. Phys. Rev. Lett., 91:010407, 2003. doi:10.1103/PhysRevLett.91.010407.
URLhttp://link.aps.org/doi/10.1103/PhysRevLett.91.010407. → pages 1, 5[27] E. Martin-Lopez, A. Laing, T. Lawson, R. Alvarez, X.-Q. Zhou, and J. L.O’Brien. Experimental realization of shor’s quantum factoring algorithmusing qubit recycling. Nat Photon, 6:773–776, 2012.doi:10.1038/nphoton.2012.259. → pages 1, 15102[28] M. A. Nielsen and I. L. Chuang. Quantum Computation and QuantumInformation. The Cambridge University Press, 1 edition, 2000. ISBN9780521635035. → pages viii, 10, 23, 24, 25, 34, 38, 44[29] T. Ohno, G. Arakawa, I. Ichinose, and T. Matsui. Phase structure of therandom-plaquette Z2 gauge model: accuracy threshold for a toric quantummemory. Nuclear Physics B, 697:462–480, Oct. 2004.doi:10.1016/j.nuclphysb.2004.07.003. → pages 87[30] C. Pomerance. A tale of two sieves. Notices Amer. Math. Soc, 43:1473–1485, 1996. → pages 14[31] D. M. W. Powers. Parallelized quicksort and radixsort with optimal speedup.In Proceedings of International Conference on Parallel ComputingTechnologies, 1991. → pages 85[32] D. M. W. Powers. Parallel unification: Practical complexity. In AustralasianComputer Architecture Workshop, Flinders University, 1995. → pages 85[33] R. Raussendorf and J. Harrington. Fault-tolerant quantum computation withhigh threshold in two dimensions. Phys. Rev. Lett., 98:190504, May 2007.doi:10.1103/PhysRevLett.98.190504. URLhttp://link.aps.org/doi/10.1103/PhysRevLett.98.190504. → pages 16, 49, 61,62[34] R. Raussendorf, J. Harrington, and K. Goyal. A fault-tolerant one-wayquantum computer. Annals of Physics, 321(9):2242 – 2270, 2006. ISSN0003-4916. doi:http://dx.doi.org/10.1016/j.aop.2006.01.012. URLhttp://www.sciencedirect.com/science/article/pii/S0003491606000236. →pages[35] R. Raussendorf, J. Harrington, and K. Goyal. Topological fault-tolerance incluster state quantum computation. New Journal of Physics, 9(6):199, 2007.URL http://stacks.iop.org/1367-2630/9/i=6/a=199. → pages 49, 62, 76, 87[36] P. Shor. Algorithms for quantum computation: discrete logarithms andfactoring. In Foundations of Computer Science, 1994 Proceedings., 35thAnnual Symposium on, pages 124–134, Nov 1994.doi:10.1109/SFCS.1994.365700. → pages 1, 4, 14[37] K. M. Svore, D. P. Divincenzo, and B. M. Terhal. Noise threshold for afault-tolerant two-dimensional lattice architecture. Quantum Info. Comput.,7(4):297–318, May 2007. ISSN 1533-7146. URLhttp://dl.acm.org/citation.cfm?id=2011725.2011727. → pages 51103[38] T. Tanamoto, Y.-x. Liu, S. Fujita, X. Hu, and F. Nori. Producing clusterstates in charge qubits and flux qubits. Phys. Rev. Lett., 97:230501, Dec2006. doi:10.1103/PhysRevLett.97.230501. URLhttp://link.aps.org/doi/10.1103/PhysRevLett.97.230501. → pages 5[39] A. Turing. On computable numbers, with an application to theentscheidungsproblem. Proceedings of the London Mathematical Society,42:230–265, 1936. doi:10.1112/plms/s2-42.1.230. → pages 13104