UBC Undergraduate Research

Robustness of an AKLT State on a Honeycomb Lattice. Mar, Philip Allen 2011

Robustness of an AKLT State on a Honeycomb Lattice
Improvements to Simulation Algorithms

by Philip Allen Mar

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF SCIENCE in The Faculty of Science (Combined Honours in Physics and Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2011

© Philip Allen Mar 2011

Abstract

Cluster states used in measurement based quantum computation do not exist as nondegenerate ground states of two-body Hamiltonians, making them difficult to realize physically [8]. Using Affleck-Kennedy-Lieb-Tasaki (AKLT) states on a honeycomb lattice, a universal resource for performing MBQC can be obtained by transforming these states into cluster states via two transformations. The first transformation turns the AKLT state into a graph state by applying a three-element positive operator value measure (POVM) and analyzing how these operators yield encoded qubits on groups of vertices (domains). The graph state is then transformed to a cluster state as described in [8]. The transformation to a cluster state requires that the graph state contain a connected path from the left side of the original lattice to the right side. Since POVMs are measurements, they may, in a physical implementation of MBQC, sometimes contain errors. Instead of a three-element POVM, we might have a nine-element POVM, with various effects on the groups of vertices to which its elements are applied. This can cause entire groups of vertices to be rendered useless for quantum computation; in particular, it may prevent the existence of a connected path, so that the graph state cannot be transformed to a cluster state. We must make sure the AKLT state is robust, that is, even with certain errors in the POVM it can still be transformed to a cluster state. We ran simulations to confirm that for certain errors in the POVM (dependent on a parameter pdelete), there always exists a connected path. The main result of this study was the formulation of an algorithm that drastically reduces the time needed to run these simulations. Since AKLT states are large entangled systems, it is time consuming to produce 'typical' lattices, but the introduction of this algorithm should help in running further simulations for other POVM outcomes in the future.
Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction and Motivation
  1.1 Quantum Computation
  1.2 Measurement Based Quantum Computation (MBQC)
  1.3 Affleck-Kennedy-Lieb-Tasaki (AKLT) states
  1.4 Robustness of the AKLT state
2 Theory
  2.1 Circuit Model
    2.1.1 Universality
  2.2 Cluster State Quantum Computation
  2.3 Affleck-Kennedy-Lieb-Tasaki (AKLT) States
    2.3.1 One Dimensional AKLT State
    2.3.2 Two Dimensional AKLT State
  2.4 Objectives
    2.4.1 Positive Operator Value Measure (POVM)
    2.4.2 Transformation Rules
    2.4.3 Conditions for Universality
    2.4.4 Calculation
3 Progress
  3.1 Algorithm Overview/Programs
  3.2 Previous Work
  3.3 Testing Algorithms
  3.4 Probability Weighting
  3.5 Brute-Force Algorithm Summary
    3.5.1 Single Lattice
  3.6 Algorithm Improvement
    3.6.1 Domain Grouping
  3.7 Time Improvement
    3.7.1 Single Lattice Test
    3.7.2 Full Simulation Test
  3.8 Accuracy
    3.8.1 Average Size of Domains & Average Domain Degree
    3.8.2 20 × 20 Lattices with Connected Paths at pdelete = 0.35
4 Conclusion
  4.1 Results Summary
  4.2 Future Work
Bibliography

List of Tables

3.1 Simulation Times for Different-Sized Lattices
3.2 Probability Weighting: Single Simulation Times for Different-Sized Lattices
3.3 Lattice Characteristics: Average Domain Size and Degree Using Brute Force Algorithm
3.4 Lattice Characteristics: Average Domain Size and Degree Using New Algorithm

List of Figures

2.1 Cluster State Diagram
2.2 One Dimensional AKLT State
2.3 Two Dimensional AKLT State
2.4 Step by step transformation
2.5 Transformations
2.6 POVM Map
2.7 Transformation Rule 1
2.8 Transformation Rule 2
2.9 Conditions for Universality
3.1 Generating a Hexagonal Lattice
3.2 Transformation Rule 1/Union Find
3.3 Transformation Rule 2/Graph Generation
3.4 Domain Deletion
3.5 Breadth First Search
3.6 Probability Weighting
3.7 Previous Work: Domain Deletion
3.8 Previous Work: Edge Deletion
3.9 No Probability Weighting: 40 by 40 lattice
3.10 No Probability Weighting: 60 by 60 lattice
3.11 No Probability Weighting: 80 by 80 lattice
3.12 No Probability Weighting: 100 by 100 lattice
3.13 No Probability Weighting: 40 by 40 lattice with Threshold
3.14 Probability Weighting: 12 by 12 lattice
3.15 Time for Simulation versus Lattice Height
3.16 Time for Simulation versus Lattice Size
3.17 Algorithm Improvement
3.18 12 × 12 Lattice with Probability Weighting
3.19 40 × 40 Lattice with Probability Weighting
3.20 60 × 60 Lattice with Probability Weighting

Acknowledgements

Thank you to Robert Raussendorf, who was my supervisor for the 449 Thesis Project, and to Tzu-Chieh Wei, whose work was the basis for most of the project. Thank you to Robert Kiefl for his supervision of the 449 thesis class.

Dedication

Dedicated to my parents, who have loved me and raised me in patience; my friend from high school who shared a passion for physics (Eric); my friends in university who are going on to other things (Zion, Yakov); my friends at church; my physics and math teachers (in particular, Jon and John); Auntie Maribel; and to the real Jesus, to whom I wish I could dedicate this with all my heart, because He has given me everything.

Chapter 1: Introduction and Motivation

1.1 Quantum Computation

Quantum computation is concerned with exploiting quantum effects in order to perform certain computational tasks, using specialized algorithms that take these effects into account. The most immediate application is in performing tasks that would otherwise take classical computation much longer, or in running simulations that are inherently quantum mechanical in nature. One such example is Peter Shor's algorithm, which could theoretically factor large numbers into their prime factors in polynomial, rather than superpolynomial, time (the latter being defined as not being bounded by any polynomial) [10][6][7]. It should be noted that the quantum computing resources needed for implementing Shor's algorithm have not yet been achieved. Previous models of quantum computation were experimentally designed by controlling interactions, acting as gates, between various spin-1/2 particles acting as 'bits' or 'quantum bits' (qubits).

1.2 Measurement Based Quantum Computation (MBQC)

In 2000, Robert Raussendorf and Hans J. Briegel proposed a new model of quantum computation that contrasts with these types of experiments: measurement based quantum computation, the 'one-way quantum computer'. Rather than implementing interactions between spin-1/2 particles to perform computation, the computation itself is performed by making local measurements on a prepared, highly entangled state of spin-1/2 systems called a cluster state. It can be shown that cluster states are universal for quantum computation; that is, from a finite set of operations on this cluster state we can approximate any unitary operation on the qubits (i.e. we can do quantum computation on this cluster state). Unfortunately, obtaining a cluster state physically is much harder. As shown in [4], cluster states do not exist as nondegenerate ground states of systems that are easily analyzed (two-body interactions), and so cannot be obtained easily in any real setting [7].

1.3 Affleck-Kennedy-Lieb-Tasaki (AKLT) states

One solution that was recently found is to use what are called AKLT states (for Affleck-Kennedy-Lieb-Tasaki states) on a honeycomb lattice arrangement. AKLT states exist as ground states of Hamiltonians written as summations of adjacent two-body interactions, so that they can be realized physically [7]. It was shown that an AKLT state can be transformed to a cluster state via two transformations (first to a graph state, and second to a cluster state on a square lattice), and thus can be used as a universal resource for quantum computation.

1.4 Robustness of the AKLT state

The two aforementioned transformations require actual physical operations on the AKLT states, and thus are, in real life, subject to errors. The main objective is to investigate how tolerant the AKLT state is to errors; that is, given certain errors in the transformation of the AKLT state, how certain are we that it will still be universal for quantum computation? Specifically, if there are errors in the first transformation, does the resulting graph state still satisfy the conditions for the further transformation to the universal cluster state?

The investigation proceeded primarily through simulations: increasing the probability of an error in the first transformation and checking whether various lattices satisfied the conditions to transform to a universal cluster state. A confirmation of such robustness of the AKLT states is an encouraging, albeit very small, step towards the realization of measurement based quantum computing.¹

¹ This just ensures universality; much more needs to be done in making sure the resulting computation is feasible and error-free as well.
Chapter 2: Theory

2.1 Circuit Model

The most common model for quantum computation is the circuit model, which is essential for understanding the basics of quantum computation. The fundamental object in quantum computation is the qubit, which can be described as a state vector in a two-dimensional Hilbert space², called the state space. Qubits can be expressed as linear combinations of computational basis states: if $|\psi\rangle$ is the ket describing the qubit and the computational basis states are denoted $|0\rangle$ and $|1\rangle$, then the qubit can be expressed as $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, with the normalization condition $|\alpha|^2 + |\beta|^2 = 1$. An essential aspect of quantum computation is the ability of quantum systems to be entangled. To analyze entanglement, we need a way to describe composite systems of several qubits, which is done using tensor product notation: if each individual system is denoted $|\psi_i\rangle$, then the composite system is the tensor product of these states, $\bigotimes_{i=1}^{n} |\psi_i\rangle$ [4].

Quantum computation in the circuit model begins with the preparation of a simple state that initializes the computer, such as $\bigotimes_{i=1}^{n} |0\rangle_i$. In order to perform a computation, operators on these states are defined, analogous to logic gates in classical computing. These operators in the circuit model are unitary³. Finally, we define a means of measurement on the qubits. Extending to operations on multiple qubits, there are controlled operations, where the unitary operation on one or more qubits (the target qubits) depends on the state of other qubits (the control qubits).

² A vector space over some field, together with an inner product, that is complete [3].
³ Unitary operators ($UU^\dagger = U^\dagger U = I$) define deterministic operations on the qubits, in contrast to measurements, which are probabilistic.

2.1.1 Universality

The most important concept is that there exist finite sets of gates that can be used to approximate any unitary operator to arbitrary accuracy⁴. The quantum circuit model represents qubits by quantum wires (moving left to right in time) that are processed by a set of unitary operators (quantum gates). From this model, we find that any quantum circuit can be replaced by one that contains only single-qubit gates and controlled-phase two-qubit gates. A controlled-phase gate is a two-qubit gate that acts on a composite state by switching a sign, for example,

$$|x\rangle \otimes |y\rangle \rightarrow (-1)^{xy}\, |x\rangle \otimes |y\rangle \quad \text{for } x, y \in \{0, 1\}. \tag{2.1}$$

Thus, the single-qubit gates and the controlled-phase two-qubit gate are universal for quantum computation⁵ [2].

⁴ In mathematical terms, this means that there is a finite set of operators in this model of computation such that, for any unitary operator, there exists a finite sequence of compositions of operators from the universal set that approximates the unitary operator to arbitrary accuracy (strong convergence in norm).
⁵ Note that 'single-qubit gates' really means a finite set of single-qubit gates; these are the Hadamard and the π/8 gates. More on this can be found on wikipedia.org under "Quantum gates", or in Nielsen and Chuang's textbook [5].
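As a concrete check of Eq. (2.1), the short MATLAB snippet below (an illustration added here, not part of the thesis programs; the basis ordering |00>, |01>, |10>, |11> is an assumption of the example) builds the controlled-phase gate as a matrix and verifies the sign flip on each computational basis state.

% Controlled-phase (CZ) gate on two qubits in the basis ordering
% |00>, |01>, |10>, |11>; Eq. (2.1): |x>|y> -> (-1)^(xy) |x>|y>.
CZ = diag([1 1 1 -1]);
ket0 = [1; 0];  ket1 = [0; 1];

for x = 0:1
    for y = 0:1
        in = kron((1-x)*ket0 + x*ket1, (1-y)*ket0 + y*ket1);
        assert(isequal(CZ * in, (-1)^(x*y) * in));   % sign flip only on |11>
    end
end

% Applied to |+>|+>, the same gate produces a two-qubit cluster (graph) state.
plus = (ket0 + ket1) / sqrt(2);
cluster2 = CZ * kron(plus, plus);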
2.2 Cluster State Quantum Computation

The cluster state model uses measurements on a prepared system of many entangled qubits. Outputs are expressed in terms of the un-measured qubits of the system. An n-qubit cluster state can be represented by a graph G in which the vertices represent prepared qubits in the state $|+\rangle := (|0\rangle + |1\rangle)/\sqrt{2}$ (where $\{|0\rangle, |1\rangle\}$ is the computational basis) and the edges represent controlled-phase two-qubit gates. Since controlled-phase gates commute, the order in which they are applied to the qubits does not matter. When applying measurements to single qubits in the cluster state, we label them according to the sequence in which they are applied; qubits sharing the same time label may be measured in any order. The time labels also express how qubits with a later time label may depend on the measurement outcomes of qubits labelled with earlier time labels. Measurements must satisfy the conditions that they are single-qubit measurements, that they may depend on the measurement outcomes of previous measurements on other qubits, and that measurement results from previous qubit measurements can be processed by a classical computer (to help determine what new measurements are to be made) [4].

The cluster state model can be shown to be universal for quantum computation. The proof can be found in [4]; it involves verifying a circuit identity, then showing that the output of a quantum circuit built from a specific type of single-qubit operator ($HZ_\theta$) is the same (up to a Pauli matrix) as the output of a cluster-state computation, by showing that this cluster-state computation is in turn equivalent to a quantum circuit consisting of a sequence of the circuit identity. The proof concludes by noting that $HZ_\theta$ and the controlled-phase gate form a universal set for quantum computation. A helpful diagram is given in Fig. 2.1.

Figure 2.1: Measurement Based Quantum Computation on a Cluster State Scheme. Labels 1, 2, ... denote the order in which qubits are measured, and lines between circles represent controlled-phase operations between qubits, entangling them.
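To make the graph picture of a cluster state concrete, here is a minimal MATLAB sketch (illustrative only, not from the thesis): each vertex of the path graph 1-2-3 is prepared in |+>, a controlled-phase gate is applied along each edge, and one of the resulting graph-state stabilizer relations is checked.

% Three-qubit linear cluster state: vertices prepared in |+>, one
% controlled-phase gate per edge of the path graph 1-2-3.
I2 = eye(2);  X = [0 1; 1 0];  Z = [1 0; 0 -1];
plus = [1; 1] / sqrt(2);
CZ = diag([1 1 1 -1]);

psi  = kron(kron(plus, plus), plus);    % |+>|+>|+>
CZ12 = kron(CZ, I2);                    % gate on edge (1,2)
CZ23 = kron(I2, CZ);                    % gate on edge (2,3)
psi  = CZ23 * (CZ12 * psi);             % order irrelevant: the gates commute

% A defining graph-state relation: Z1 X2 Z3 leaves the state unchanged.
K2 = kron(kron(Z, X), Z);
assert(norm(K2 * psi - psi) < 1e-12);

The same construction, applied along the edges of a two-dimensional square lattice, gives the universal cluster state discussed above.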
2.3 Affleck-Kennedy-Lieb-Tasaki (AKLT) States

An Affleck-Kennedy-Lieb-Tasaki (AKLT) state is "the ground state of a quantum antiferromagnet" [1]. To illustrate what an AKLT state is, we first consider a one-dimensional AKLT state [9].

2.3.1 One Dimensional AKLT State

Rigorously, the AKLT state is the ground state of the Hamiltonian

$$H = J \sum_{b=1}^{N-1} \left[ \mathbf{S}_b \cdot \mathbf{S}_{b+1} + \tfrac{1}{3} \left( \mathbf{S}_b \cdot \mathbf{S}_{b+1} \right)^2 \right] \tag{2.2}$$

$$P_{r-j} = \begin{cases} 0 & \text{if } r-j \text{ is odd,} \\ r!\,(-1)^{(r-j)/2} & \text{if } r-j \text{ is even.} \end{cases} \tag{2.3}$$

Note that here the Hamiltonian depends on pairwise interactions between neighbouring spins, and thus is easier to analyze than a five-body Hamiltonian [9].

Suppose we are given N spin-1/2 systems. To construct the AKLT state, first entangle these systems in pairs into singlet states ($|\psi\rangle = |01\rangle - |10\rangle$). Then align these singlet states in a line, end-to-end. Project the neighbouring ends of adjacent singlets onto spin-1 states (spin-1 states are triplet states, with values $|-1\rangle, |0\rangle, |1\rangle$). Fig. 2.2 illustrates this.

Figure 2.2: An AKLT state in one dimension: an entangled state of singlets, with each end of each singlet pair projected into a spin-1 system.

In summary, the AKLT state takes the form

$$|\mathrm{AKLT}\rangle = \left( \bigotimes_{a=1}^{N} P_{S,a} \right) \left( |\psi_L\rangle \left[ \bigotimes_{b=1}^{N-1} |s=0\rangle_{b,b+1} \right] |\psi_R\rangle \right) \tag{2.4}$$

where $|\psi_L\rangle$ and $|\psi_R\rangle$ are the wavefunctions of the leftmost and rightmost spin-1/2 systems [9].

2.3.2 Two Dimensional AKLT State

In this case the AKLT state takes the form of a hexagonal lattice, which we denote by the symbol $L$, with $L = (V(L), E(L))$ expressing $L$ as a pair of sets: $V(L)$ of vertices and $E(L)$ of edges. The one-dimensional construction can be extended to two dimensions. The Hamiltonian in this case is given by

$$H = J \sum_{e \in E(L)} P_{3,e} \tag{2.5}$$

where $P_{3,e}$ is the projection operator onto the spin-3 subspace on each edge. The construction of the AKLT state is now as follows.

Given the spin-1/2 systems of the previous section, again pair them together into singlets. At each vertex $v \in V(L)$, place the ends of three pairs. Thus, on each edge $e \in E(L)$ connecting two vertices there are six spin-1/2 systems. The projection $P_{3,e}$ projects this space onto a spin-3 subspace. In the end, the AKLT state takes the form

$$|\mathrm{AKLT}\rangle := \bigotimes_{v \in V(L)} P_{S,v} \bigotimes_{e \in E(L)} |\phi\rangle_e \tag{2.6}$$

The AKLT state is made by projecting the three spin-1/2 systems at each vertex into a spin-3/2 subspace (4 states). This is represented by the projector $P_{S,v}$, where S denotes the spin-3/2 subspace and v denotes the vertex in question. If $|0\rangle$ and $|1\rangle$ are the two computational basis states of a spin-1/2 system, then $P_{S,v}$ takes the form

$$P_{S,v} := |000\rangle\langle 000| + |111\rangle\langle 111| + |W\rangle\langle W| + |\bar{W}\rangle\langle \bar{W}| \tag{2.7}$$

where $|W\rangle = \frac{1}{\sqrt{3}}(|001\rangle + |010\rangle + |100\rangle)$ and $|\bar{W}\rangle = \frac{1}{\sqrt{3}}(|110\rangle + |101\rangle + |011\rangle)$. The $|\phi\rangle_e$ denote the edges representing the singlet states, pairing one spin-1/2 system in one vertex to another spin-1/2 system in another vertex. Fig. 2.3 illustrates this.

Figure 2.3: An AKLT state in two dimensions: an entangled state of singlets, with each end of each singlet pair projected, together with two other pair-ends, into a spin-3/2 system.
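Since Eq. (2.7) is central to what follows, a quick numerical sanity check can be useful. The MATLAB snippet below (an added illustration, not part of the thesis code) constructs $P_{S,v}$ in the three-qubit computational basis and verifies that it is a projector onto a four-dimensional (spin-3/2) subspace.

% Spin-3/2 projector P_{S,v} of Eq. (2.7) in the three-qubit computational
% basis, and a check that it projects onto a 4-dimensional subspace of the
% 8-dimensional vertex space.
ket0 = [1; 0];  ket1 = [0; 1];
k = @(a, b, c) kron(kron(a, b), c);     % |abc>

W    = (k(ket0,ket0,ket1) + k(ket0,ket1,ket0) + k(ket1,ket0,ket0)) / sqrt(3);
Wbar = (k(ket1,ket1,ket0) + k(ket1,ket0,ket1) + k(ket0,ket1,ket1)) / sqrt(3);
PS   = k(ket0,ket0,ket0)*k(ket0,ket0,ket0)' + k(ket1,ket1,ket1)*k(ket1,ket1,ket1)' ...
     + W*W' + Wbar*Wbar';

assert(norm(PS*PS - PS) < 1e-12);       % idempotent: a projector
assert(abs(trace(PS) - 4) < 1e-12);     % rank 4: the four spin-3/2 states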
2.4 Objectives

Having introduced the circuit model, the cluster-state model and AKLT states, we can present the objectives of the thesis project. Recall that the two-dimensional cluster state on a square lattice is universal for quantum computation; that is, we can approximate any arbitrary unitary operator with finite sequences of a finite set of operations in cluster-state computation. However, a direct physical realization of two-dimensional cluster states is difficult. Thus, we aim to use AKLT states, which can be transformed into cluster states, in order to obtain a substrate that is universal for quantum computation [7] [8].

The overarching objective is to transform the AKLT state to a cluster state. The immediate objective is to ensure that this transformation is robust, that is, that the transformation to a cluster state is still possible even when errors are made in the method of transformation. In summary, the AKLT state is transformed to a cluster state by first transforming it to a graph state $|G\rangle$. This involves so-called Transformation Rules, which are discussed later; these need to be simulated in the computer program, and they constitute the bulk of the algorithm and of the computation time. The graph state $|G\rangle$ is then further transformed to the cluster state via the transformation described in Wei, Raussendorf and Affleck's paper that supplements the primary one, [8]. This transformation requires that the graph state satisfy two conditions, which we will call the conditions for universality, required in the proof that the graph state can indeed be transformed to a cluster state. Satisfying these conditions is the practical 'definition' of the robustness of the AKLT state: if the resulting graph state satisfies these conditions despite errors in the transformation from an AKLT state, then it is robust. Fig. 2.5 illustrates this [7] [8].

Figure 2.4: Step by step transformation.
Figure 2.5: Transforming an AKLT state to a graph state to a cluster state.

2.4.1 Positive Operator Value Measure (POVM)

A Positive Operator Value Measure (POVM) is, informally, a mapping between sets of outcomes of a quantum state and the set of operators on the Hilbert space of that quantum state. It gives a useful method of projecting the state space into a subspace depending on some probabilistic outcome. A special case of a POVM is a projective measurement: for each set of outcomes of the state we obtain a projection operator $P_i$ that we apply to the original state, projecting it to a subspace of the original state space. In general, however, we do not only have projections but, more generally, positive operators⁶ [3] [11]. As such, POVMs are thought of as generalized measurements. Either the mapping itself or the range of the mapping in the set of operators can be thought of as the POVM; so the set $\{F_i\}_{i \in I}$ for an index set $I$ can be called the POVM [11].

The POVM is a valuable tool in the transformation to a graph state. We apply the POVM to every vertex in the AKLT lattice in order to obtain one of three outcome operators, $F_x$, $F_y$ and $F_z$, each projecting the vertex to an encoded spin-1/2 system. Fig. 2.6 illustrates the POVM mapping⁷ [11].

⁶ That is, if A is a positive operator, then for all $|h\rangle \in H$, the state space, $\langle Ah|h\rangle \geq 0$.
⁷ In mathematical terms: let X be the set of all the outcomes of the spin-3/2 state of the vertex, and let M be the sigma-algebra of the measurable subsets of X. Then we have the Hilbert space H of the spin-3/2 states themselves ($|-3/2\rangle, |-1/2\rangle, |1/2\rangle, |3/2\rangle$). Let B(H) be the set of all bounded operators on the Hilbert space H. B(H) contains operators such as projection operators (Hermitian operators on H satisfying the idempotency condition $A^2 = A$ [3]), for example the projection operators onto the spin-1/2 subspaces x, y and z [7] [8]. From Wikipedia, a POVM is a set function (on subsets of M) $F: M \rightarrow B(H)$ such that if Y is a subset of M then F(Y) is a bounded, positive Hermitian operator. In our case, the function F has at least these elements in its range: $P_x$, $P_y$ and $P_z$, the projection operators onto the spin-1/2 subspaces of x, y and z; and the requirement that $\sum_{i \in \{x,y,z\}} P_i P_i^\dagger = I$ is the requirement that $F(X) = I$.

Figure 2.6: The POVM maps the set of outcomes to one of many positive operators on the Hilbert space.
2.4.2 Transformation Rules

To transform the AKLT state to a graph state, we go through each vertex in the original lattice and apply the POVM to it. Each vertex is thereby projected onto a subspace of either x, y or z. We now introduce the two transformation rules that change this lattice of POVM outcomes into a graph state $|G\rangle$ [7] [8]:

1. Group all mutually adjacent vertices with the same outcome into one vertex of the new graph G. These groups of vertices with the same outcome are called domains, to distinguish them from the vertices of the original lattice. Fig. 2.7 illustrates this.

2. In the resulting graph G, all edges of even multiplicity are deleted, and all edges of odd multiplicity are reduced to a single edge between the domains. Fig. 2.8 illustrates this.

Figure 2.7: Grouping Similar Outcomes. One color for each of x, y and z.
Figure 2.8: Deleting even-multiplicity edges and retaining odd-multiplicity edges.

Rules 1 and 2 follow from the properties of the AKLT state, and are proved to be valid transformations in [7] [8]. In short, when adjacent vertices have the same POVM outcome, then, due to the way the spin-1/2 systems are entangled, they act as a single encoded spin-1/2 system, and so each domain acts as a single spin-1/2 system. This also means that edges of even multiplicity are deleted, because their effects essentially cancel each other out for the domains they connect [7] [8].
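As a minimal illustration of Rule 2 (the multiplicities below are made up for the example, not taken from the thesis): if edgeCount(i, j) holds the number of lattice edges between domains i and j, the graph G keeps an edge exactly where that count is odd.

% Rule 2 on a made-up example: edgeCount(i, j) is the number of lattice
% edges between domains i and j, counted before the mod-2 step.
edgeCount = [0 2 1;
             2 0 3;
             1 3 0];

adjacency = mod(edgeCount, 2) > 0;   % keep an edge iff its multiplicity is odd
disp(adjacency)                      % domains 1-3 and 2-3 stay linked; 1-2 does not

A sketch of Rule 1 (domain grouping via Union-Find) appears with the simulation programs in Section 3.1.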
2.4.3 Conditions for Universality

The proof of the conversion of the graph state $|G\rangle$ to a cluster state requires that the graph G satisfy two properties, which we call the Conditions for Universality. The proof is lengthy and is given in detail in [8].

1. The domains in the honeycomb lattice are microscopic for large lattices L; that is, the size of the individual domains is independent of the size of the lattice, $|L|$.

2. The probability of the existence of a connected path (also called a "spanning cluster") traversing G from left to right approaches 1 in the limit of large lattice size $|L|$. Fig. 2.9 illustrates the connected path from left to right.

Figure 2.9: A connected path from left to right through domains.

2.4.4 Calculation

The primary errors under investigation are in the POVMs. In the ideal case, the POVMs always project the spin-3/2 system of a vertex to a spin-1/2 system. However, this is akin to a projective measurement, and so is prone to errors because it is a physical operation. For example, the POVM might not project the system to a spin-1/2 system, but instead to one of the computational basis states, effectively destroying the ability of the vertex to be used in quantum computation, since a qubit must be a spin-1/2 system [7] [8].

An essential property of the POVM is that its elements satisfy the completeness relation $\sum_{i \in \{x,y,z\}} F_i F_i^\dagger = I$. Using this completeness relation, we can define the ideal POVM, a three-element POVM, to be applied to each vertex:

$$F_{v,z} = \sqrt{\tfrac{2}{3}} \left( |000\rangle\langle 000| + |111\rangle\langle 111| \right) \tag{2.8}$$

$$F_{v,x} = \sqrt{\tfrac{2}{3}} \left( |{+}{+}{+}\rangle\langle {+}{+}{+}| + |{-}{-}{-}\rangle\langle {-}{-}{-}| \right) \tag{2.9}$$

$$F_{v,y} = \sqrt{\tfrac{2}{3}} \left( |i,i,i\rangle\langle i,i,i| + |{-i},{-i},{-i}\rangle\langle {-i},{-i},{-i}| \right) \tag{2.10}$$

where $|\pm\rangle := \frac{|0\rangle \pm |1\rangle}{\sqrt{2}}$ and $|\pm i\rangle := \frac{|0\rangle \pm i|1\rangle}{\sqrt{2}}$.

An erroneous POVM would introduce new POVM elements that include the projection operators onto the encoded computational basis states (that is, the eigenstates describing the encoded $|0\rangle$ and $|1\rangle$ basis states, encoded over several qubits). For each of the elements x, y and z there are then 3 POVM elements instead of 1, for a total of 9 POVM elements; for x, for example:

$$\left\{ \sqrt{1-p}\, F_{v,x},\ \sqrt{\tfrac{2p}{3}}\, |{+}{+}{+}\rangle\langle {+}{+}{+}|,\ \sqrt{\tfrac{2p}{3}}\, |{-}{-}{-}\rangle\langle {-}{-}{-}| \right\} \tag{2.11}$$

The value p is an arbitrary parameter associated with the 'probability' that we make such an error in our calculation. The extra POVM elements can be thought of physically as performing an essentially perfect POVM operation followed by measuring a specific state, destroying the spin-1/2 state. The more important effect is that the other vertices with the same POVM outcome (i.e. x, y or z) that are adjacent to this 'deleted' vertex are also deleted, so the entire domain is deleted, not just a vertex, since the domain as a whole acts like one encoded spin-1/2 system.

While this type of vertex/domain-deleting POVM is the primary focus of this study, other types of erroneous POVMs exist, like measuring $(|000\rangle + |111\rangle)(\langle 000| + \langle 111|)/\sqrt{2}$, which may cause other, more complex effects. Another effect previously studied was edge deletion [7] [8].
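The completeness relation quoted above can also be checked numerically. The MATLAB snippet below (an added illustration, not thesis code) builds the three ideal POVM elements of Eqs. (2.8)-(2.10) and verifies that the sum of $F^\dagger F$ reproduces the spin-3/2 projector $P_{S,v}$ of Eq. (2.7), which acts as the identity on the subspace in which each AKLT vertex lives.

% Ideal three-element POVM of Eqs. (2.8)-(2.10), checked against the
% spin-3/2 projector P_{S,v} of Eq. (2.7).
ket0 = [1; 0];  ket1 = [0; 1];
plusX  = (ket0 + ket1)/sqrt(2);    minusX = (ket0 - ket1)/sqrt(2);
plusY  = (ket0 + 1i*ket1)/sqrt(2); minusY = (ket0 - 1i*ket1)/sqrt(2);
trip = @(a) kron(kron(a, a), a);   % |aaa>

Fz = sqrt(2/3) * (trip(ket0)*trip(ket0)'   + trip(ket1)*trip(ket1)');
Fx = sqrt(2/3) * (trip(plusX)*trip(plusX)' + trip(minusX)*trip(minusX)');
Fy = sqrt(2/3) * (trip(plusY)*trip(plusY)' + trip(minusY)*trip(minusY)');

W    = (kron(kron(ket0,ket0),ket1) + kron(kron(ket0,ket1),ket0) ...
      + kron(kron(ket1,ket0),ket0)) / sqrt(3);
Wbar = (kron(kron(ket1,ket1),ket0) + kron(kron(ket1,ket0),ket1) ...
      + kron(kron(ket0,ket1),ket1)) / sqrt(3);
PS = trip(ket0)*trip(ket0)' + trip(ket1)*trip(ket1)' + W*W' + Wbar*Wbar';

% Completeness on the vertex subspace: sum_a F_a' * F_a equals P_{S,v},
% the identity on the spin-3/2 subspace where the AKLT state lives.
assert(norm(Fx'*Fx + Fy'*Fy + Fz'*Fz - PS) < 1e-12);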
Chapter 3: Progress

In order to simulate erroneous POVMs, we focus on vertex deletion and change the parameter p, increasing it to signify 'increasing' error in our POVM application. The objective is to determine the value of p at which the AKLT state fails to satisfy the conditions for universality; thus, p gives a quantitative measure of how robust the AKLT state is with respect to making this type of error. To test the robustness of the AKLT state, we fix the value of p and generate many large lattices. For each of these large lattices, we assign POVM values to each vertex, generate a graph state, and check that it satisfies both conditions for universality. This thesis focused on the existence of a spanning cluster. The process of assigning POVM outcomes to each vertex is the most time consuming part of the simulation; the subtitle of this thesis, "improvements to simulation algorithms", refers to the way the algorithm was improved for greater efficiency, allowing accurate simulations to be run in a realistic amount of time.

3.1 Algorithm Overview/Programs

All these algorithms were implemented in MATLAB, and are presented here starting with the name of the program, followed by a short description listing its intended function (as an algorithm), and its inputs and outputs. (A simplified sketch combining the main steps appears after this list.)

1. HexPOVMTwo.m. Input: N, the vertical size of the lattice. Output: A, an N × (N+1) × 2 array. A function whose input, a value N, denotes the number of 'rows' of a hexagonal lattice. The way a square lattice stores information for a hexagonal lattice is by considering the hexagonal lattice in a brick layout; a separate program assigns the connections between each vertex. The first layer of the array (N × (N+1) × 1) contains all the POVM outcomes, randomly assigned as 1, 2 or 3, denoting x, y and z. These are assigned with probability 1/3 each, but since the vertices are entangled, a probability weighting that shifts these outcomes is required later. The second layer of the array (N × (N+1) × 2) contains the cluster labels of the vertices; in this case we number them from 1 to N × (N+1), since each vertex is, at first, in a separate cluster. A separate algorithm groups them together.

Figure 3.1: Horizontal rectangles denote how vertices are arranged in rows, and vertical rectangles correspond to columns.

2. HexAdjacentTwo.m. Input: N, the vertical size of the lattice, and (i, j), the row and column coordinates of the vertex in question. A function whose input gives the vertex in question and which outputs the vertices adjacent to it in a hexagonal lattice. This is complicated because the vertices on the outermost rows and columns must still have tri-valency (i.e. have three edges coming out of them), and so they are sometimes connected more than once to the same vertex to ensure this (this ensures it is an AKLT state). For example, vertex (1, 1) has two edges to vertex (1, 2).

3. UnionFindThree.m. Input: A, the output of HexPOVMTwo.m. Output: A, the lattice with updated cluster labels. The Union-Find algorithm here is also known as the Hoshen-Kopelman algorithm⁸. No computer-science enhancements were made to this brute-force implementation; in fact, it mostly uses ideas from the Union-Find algorithm rather than being a formal Union-Find algorithm. It goes through each vertex and checks adjacent vertices for whether they share the same POVM outcome; for vertices that do share the same POVM outcome, the algorithm ensures they are labelled with the same cluster label, that is, it changes the value in the second layer of the array A. Fig. 3.2 illustrates this.

⁸ The latter is a special case of the former.

Figure 3.2: Grouping Similar Outcomes. One color for each of x, y and z.
Algorithm Overview/ProgramsFigure 3.4: Deleting domains with probability pdelete. rst search works by checking the neighbors of all the domains with’left-side’ vertices, checking whether any of these neighbors are thedomains with ’right-side’ vertices, and if none of the neighbors are,checking the neighbors of the neighbors for domains with ’right-side’vertices, and so on. Fig. 3.5 illustrates this. Domain 1 has a left-sidevertex, so check its neighbor, domain 5. But domain 5 does not havea right-side vertex, so check the neighbors of 5. Domain 6 does nothave a right-side vertex, so check the neighbors of 6; and domain 4has a right-side vertex. Note that this graph was generated using the’part-lattice’ in Fig. 2.7.7. ProbabilityWeightingBrute.m This is the over-arching simulationprogram. It performs the previous algorithms for many di erent lat-tices of a  xed size. This way, we have a sampling of many lattices.For each set of about 50 lattices (each of size either 40 40 up to100 100), the pdelete is varied from 0:3 to 0:55 (since we suspect thatthe ’percolation’ threshold is around 0:33). This only works since itwas found that this was a percolation problem 9 This program also9Explained in more detail later, but in short, there exists a threshold of pdelete, for213.1. Algorithm Overview/ProgramsFigure 3.5: Domain 1 has a left-side vertex, domain 4 has a right-sidevertex. This graph has a spanning cluster.implements the proper probability weighting for each of the vertices,by using the sub-program BruteProbabilityShift.m.8. BruteProbabilityShift.m Input: Ainput from HexPOVMTwo.m Output:C,A, adjacency matrix and updated array. This program uses bothUnionFindThree.m and GraphGeneratorTwo.m. It implements theprobability weighting necessary to take into account the entanglementbetween the vertices. This is done  rst by choosing a random vertex.Determine the number of domains V and the number of edges E Forthis vertex,  ip, with equal probability, into one of the other POVMoutcomes. Obtain the number of new domains V0 and new edges E0due to this change. Calculate the probability of accepting this changein POVM outcome based on the change of the number of domains andnumber of edges: p ip = maxf1;2jV0j jE0j jVj+jEjg. This means usingUnionFindThree.m and GraphGeneratorTwo.m for each vertex. Thisprogram is the one that takes the longest and scales the worse, becausewhich below there is guaranteed to have a spanning cluster, and above which there is nospanning cluster.223.2. Previous WorkFigure 3.6: Counting old and new no. of domains and edges.the larger the lattice, the more times we must apply these two costlyalgorithms and the longer each algorithm takes because it is appliedto the entire lattice. Fig. 3.6 illustrates this. A very important noteis that when we refer to the counting of number of edges, this is beforethe mod2 operation, that is, we are counting really the total numberof edges and vertices.3.2 Previous WorkThe previous work by Tzu-Chieh Wei and Robert Raussendorf [7] [8]. showedthat this simulation of increasing pdelete and checking for a connected pathyielded a percolation problem. That is, below a threshold probability, a con-nected path always existed, but above the threshold probability, a connectedpath did not exist (for any of the randomly generated lattices). Fig. 3.7illustrates the problem of domain deletion. Notice that for larger lattices,the threshold between a connected path existing and not becomes more de- ned. 
For domain deletion, the percolation threshold was found to be aboutpdelete = 0:33. Fig. 3.8 illustrates the problem of edge deletion. In thiscase, the threshold was about pdelete = 0:43 [7] [8].In this case, the probability weighting was done through the algorithm233.2. Previous WorkFigure 3.7: Probability of Spanning Cluster for Domain DeletionFigure 3.8: Probability of Spanning Cluster for Edge Deletion243.3. Testing AlgorithmsLattice Size Time40  40 129 s60  60 438 s80  80 879 s100  100 1470 sTable 3.1: Simulation Times for Di erent-Sized Lattices.described above, that is, for each vertex, we  ip the POVM outcome andre-group the vertices to get a change in the number of domains and edgesfrom which we can determine the  ipping probability.3.3 Testing AlgorithmsIn order to get a clear idea of how well the algorithm from Section 3.1 works,we  rst tried to reproduce the results of the previous work done. Instead ofimplementing the exact same program (which was written in C), we usedMATLAB to produce results and see whether the results would still matcheven without reference to the original programs.As can be intuitively seen, the algorithm that implements the proba-bility weighting takes the most time. In order to realistically check if thealgorithms give a reasonable answer, we apply the algorithm without theprobability weighting  rst. By checking that we do have a percolation-typethreshold, we can proceed to then apply the probability weighting. Here, wetest the algorithm for 25 lattices for each of the di erent pdelete. The valuesof pdelete increment from 0 to 0.7 in steps of 0.05. We do this because weanticipate that the threshold will be somewhere in between.Fig. 3.9, 3.10, 3.11 and 3.12 are the graphs showing the number oflattices (out of 25) that contained connected paths for di erent values ofpdelete, and for di erent lattice sizes (40  40 to 100  100 10 ) To get anidea of how long each algorithm takes, consult Table 3.1.We  nd that the percolation threshold is approximately 0.4 just by in-10Actually, the smallest multiple of 4 that is larger than or equal to these numbers253.3. Testing AlgorithmsFigure 3.9: Percolation Threshold for 40  40 lattices without the Proba-bility Weighting.Figure 3.10: Percolation Threshold for 60  60 lattices without the Prob-ability Weighting.263.3. Testing AlgorithmsFigure 3.11: Percolation Threshold for 80  80 lattices without the Prob-ability Weighting.Figure 3.12: Percolation Threshold for 100 100 lattices without the Prob-ability Weighting.273.4. Probability WeightingFigure 3.13: Percolation Threshold for 40  40 lattices without the Prob-ability Weighting.spection. In order to get a better idea of where the percolation threshold is;we can increase the resolution around 0.4 (0.35 to 0.45, increment pdeleteby 0.005 instead of 0.05. Fig. 3.13 illustrates this, with 40 by 40 lattices.This con rms that the threshold is approximately around 0.4.Notice that this value of pdelete = 0:4 is higher than that predictedin previous work. However, since we did not implement the probabilityweighting, it can be expected for this discrepancy to occur.3.4 Probability WeightingNow we implement the probability weighting algorithm as outlined above.This implementation takes much longer, and so, to simply demonstrate thealgorithm, we apply this algorithm only for much smaller lattices (12 12)and for only very few of them (10 per point). The fact that we used verysmall and very few lattices is evidenced in Fig. 
3.14; we see the downwardtrend, with 0.4 about half way through the decaying ’slope’, but it does283.5. Brute-Force Algorithm SummaryFigure 3.14: Percolation Threshold for 12  12 lattices with the (Brute)Probability Weighting.not look at all like the percolation problem earlier. This simulation took1343 seconds, almost the time for the non-probabiliy weighted 100  100lattices (each point also had 25, instead of 10 per point). If we want to geta better resolution around the threshold, we need both more lattices, andlarger lattices, which qualitatively will take much longer, so it is not reallyfeasible to implement the probability weighting in this way (in MATLAB atleast).3.5 Brute-Force Algorithm SummaryTable 3.1 shows the times for di erent lattices; but we want to see howthe time scales with increasing sizes of lattices (for the unweighted latticessimulation). For this, we examine how long it takes for lattices of size 20 20,30 30, 50 50 and 70 70 as well.From Fig. 3.15, scaling looks as if it were either polynomial or exponen-tial; it could be polynomial because we are looking only at the scaling with293.5. Brute-Force Algorithm SummaryFigure 3.15: Time for Simulation versus Lattice Heightrespect to height; by graphing against the actual lattice size, there may bea better idea of the simulation time scaling.From Fig. 3.16, the scaling looks roughly linear for increased lattice size,and so the algorithm is good qualitatively in this regard because the timescales linearly with lattice size.3.5.1 Single LatticeApplying the probability weighting algorithm, we  rst test how long it takesfor the algorithm to work on a single 40 40 lattice. We found that it took1300 seconds for the algorithm to perform the probability weighting on thelattice for a single 40 40 lattice which is the same as having the algorithmrun for all the 100 100 lattices without the probability weighting! If we wereto apply the same algorithm of the 40 40 lattices and change for di erentvalues of probability weighting, then the total time for 25 lattices for eachof the 13 di erent values of pdelete would take nearly  ve days to complete,and it would scale much more for 100 100 lattices.303.6. Algorithm ImprovementFigure 3.16: Time for Simulation versus Lattice Size3.6 Algorithm ImprovementTo make the simulation of probability weighting feasible, we have to  nda way to streamline the probability weighting algorithm. Currently, thealgorithm depends on the size of the lattice in two ways:  rst, for eachvertex, we must apply the algorithm of grouping and counting domains andedges on the entire lattice, so for each individual vertex, we depend on thelattice size; second, the number of vertices on which we check increases withincreasing lattice size. The main improvement to the algorithm is that wecan remove the  rst dependence, so that, for each vertex, we can makethe algorithm independent of the lattice size ( rst dependence). This willconsequently make the second dependence also much faster because eachvertex will take much less time.3.6.1 Domain GroupingThe idea is that the Union-Find Algorithm does not need to be applied tothe entire lattice when a vertex is changed, since the vertex POVM outcome313.6. Algorithm ImprovementFigure 3.17: Algorithm Improvement Scheme ipping is a ’local’, rather than ’global e ect’. That is, only a speci c numberof domains will be a ected; and so the change in the number of domainsand edges will be determined only by the change to these speci c numberof ’local’ domains.Fig. 3.17 illustrates this. 
For each vertex, there are at most three otherdomains that are connected to it, as in a hexagonal lattice, each vertex onlyhas three neighbors. Since there are only three POVM outcomes, for thevertex and the neighbors, there are at most three domains that are eitheradjacent or contain the vertex in question (illustrated as the multi-coloredvertex in Fig. 3.17). For simplicity, call the domains that contain all thevertices adjacent to the vertex in question plus the domain containing thevertex in question to be a superdomain. The claim is that the change in the323.6. Algorithm Improvementnumber of domains and edges depends only on the change of the number ofdomains and edges within the superdomain, this depends on the fact thatthe number of edges is counted before the mod2 operation. So considerthe three domains in the superdomain. No matter how the domains arerearranged within the superdomain, all the edges to domains outside thesuperdomain still remain, because this is before the mod2 operation. If wechange the POVM outcome of the vertex in question, only the domains thatare in the superdomain could possibly include or exclude this vertex, and sothe domains and edges within the superdomain are a ected.To implement this we have to modify the Union-Find algorithm and theGraph Generator algorithm to work not on the entire lattice, but on a super-domain that may be of variable shape. This means the localUnionFind.mand localEdgeCount.m must work depending on the POVM outcomes ofthe vertices and their domain labels, not on the  xed shape of the lattice(which the ‘global’ versions did). To implement the probability weighting,we included a few new programs:1. ProbabilityShift.m Input: A, the matrix from HexPOVMTwo.m Output:C, the adjacency matrix and A, the modi ed hexagonal lattice. Thisis an overarching program that goes through each vertex,  nding thecurrent POVM outcome,  ipping it and deciding whether the  ip re-mains. This then generates the  nal probability-weighted adjacencymatrix for the resultant graph C, and for the modi ed hexagonal lat-tice A. This e ectively encapsulates the probability weighting in asingle program.2. adjacentDomain.m Input: A, the hexagonal lattice, (i;j), the coordi-nate of the hexagonal lattice Output: The coordinates of the verticesthat are in the domains adjacent to vertex (i;j), the domain labelsfor each of these vertices and a vector of the di erent domain labels(listing them once). This is used to get all the relevant informationfor where to apply the local Union Find algorithm, as well as a way tocount the number of domains after we apply the Union Find algorithm.333.7. Time Improvement3. localEdgeCount.m Input: The coordinates of the vertices that are inthe domains adjacent to vertex (i;j), the domain labels for each ofthese vertices and a vector of the di erent domain labels (listing themonce), and A, the hexagonal lattice. Output: The total number ofedges E. This algorithm takes in the vertex coordinates to be exam-ined, and their POVM outcomes listed in A, and counts the numberof edges between the domains that contain these vertices only. Thismeans this can be applied to both the superdomain before and afterthe POVM outcome shift.4. localUnionFind.m Input: Same as localEdgeCount.m Output: A,the hexagonal lattice with updated domain labels for the given con g-uration of vertices. This gives the number of domains in the form ofupdating the matrix array A, so that all the vertices are ‘correctly’labelled. 
Essentially, we change the outcome of one vertex in A,and this algorithm merely ‘corrects’ the domain labeling. By usingadjacentDomain.m, we can count the number of domains after thechange.3.7 Time Improvement3.7.1 Single Lattice TestRecall that, for a single 40 40 lattice, it took about 1300 seconds to imple-ment the probability weighting using the brute algorithm. With this localUnion Find algorithm, the single 40 40 lattice takes only 3.65 seconds toimplement the probability weighting.Another useful measure of the speed of the new algorithm is how it scaleswith lattice size. Due to the prohibitive time it takes for the old algorithm torun, we only try lattices up to size 40 40, and see how the time scales withboth the old algorithm and new algorithm, as shown in Table 3.2, whichare ‘fresh’ runs, so the values may di er slightly from those given before.Both these graphs yield roughly quadratic increases in time, so while thenew algorithm does not reduce the polynomial dependence on the height of343.7. Time ImprovementLattice Height Old Algorithm (s) New Algorithm (s)16 34 0.5120 81 0.8324 117 1.2628 320 1.7032 562 2.3536 893 2.9540 1303 3.72Table 3.2: Probability Weighting: Single Simulation Times for Di erent-Sized Lattices.the lattice, it does make the increase much slower for increasing lattice size11. This means we can apply this simulation feasibly for larger lattices (like100 100, which, for this algorithm, takes 56 seconds, so a total of about 5hours for a full simulation).3.7.2 Full Simulation TestFurthermore, recall that for the full simulation on 12 12 lattices took about1343 seconds (with 10 lattices per pdelete value). With this new algorithm,the same simulation only takes 34 seconds. Fig. 3.18 gives the total graphof the 12 12 lattices with probability weighting, using this new algorithm.This only took about 124 seconds, even when more values of pdelete wereused. Since these lattices are very small, the threshold is poorly de ned, butit gives an idea of how much faster the simulation can run using the newalgorithm.Now we apply the full simulation on 60 60 lattices, 25 points per valueof pdelete. Fig. 3.20 gives the result, and it took about an hour to complete(3617 s). We note that the percolation threshold seems to be less than thatof the unweighted lattices, around 0.35-0.37, and yet this is still above theestimated value done in previous work with the brute-force algorithm whichyielded a value of 0.33 for domain deletion. This is an error related to the11Although the new algorithm might look linear, by checking the value for a 100 100lattice, the quadratic nature is shown, albeit a slow one.353.8. AccuracyFigure 3.18: 12 12 Lattices with Probability Weighting using New Algo-rithmaccuracy of the implementation of the algorithm, and must be investigatedfurther. What is (qualitatively) encouraging is that the algorithm at leastreduces the percolation threshold from the overestimate of 0.4 without theweighting.As seen in both Fig. 3.19 12 and 3.20, the percolation threshold is notas well de ned as in the case of unweighted lattices. Thus, for future work,we might require that we go to more lattices (50-100) and larger lattices(100-200 height).3.8 AccuracyWhat other indicators of the accuracy of this algorithm exist other thanonly comparing it with the percolation threshold of previously-run simula-tions? 
Qualitatively, we could perhaps compare the features of ‘average’probability-weighted lattice using the brute algorithm with the one gener-ated by the new algorithm, such as (1) the ratios of POVM outcomes thatresult, (2) the average size of the domains and (3) the number of edges thatresult. Perhaps another good measure would be to compare the number12This took 1037 seconds to run on MATLAB.363.8. AccuracyFigure 3.19: 40 40 Lattices with Probability Weighting using New Algo-rithmFigure 3.20: 60 60 Lattices with Probability Weighting using New Algo-rithm373.8. AccuracyAverage Domain Size Average Domain Degree3.3369 2.24603.3589 2.00963.2018 1.88343.3211 1.92663.2617 1.96263.2294 1.92663.1927 1.92663.4112 2.13203.2430 1.96263.1963 1.9178Table 3.3: Typical Lattice Characteristics: Average Domain Size and De-gree, for 20 20 Lattices; Brute Force Algorithmof lattices that do result with a connected path (4) for a speci c value ofpdelete. If we can show that at least for one value of pdelete we have similaroutcomes, we can have an idea whether the algorithm (and its implementa-tion) need modifying. We want to check whether both algorithms producethe same ‘typical’ graphs with the probability weighting, for example, theaverage size of the domains and the average domain degree.3.8.1 Average Size of Domains & Average Domain DegreeWe can easily calculate the average size of the domains because we alreadyknow the total size of all the domains together, we need only to divide itby the total number of domains. Another good indicator is the averagedegree of the domains; that is, to how many other domains is an averagedomain connected to? Tables 3.3 and 3.4 show the average domain size andaverage domain degree for ten di erent trials for the brute-force and thenew algorithm.For the brute-force algorithm, the average domain size was 3.2753, andthe average domain degree was 1.9894. For the new algorithm, the averagedomain size was 3.2600, and the average domain degree was 2.0253. This isencouraging as the average domain size and domain degree must be integers.383.8. AccuracyAverage Domain Size Average Domain Degree3.3611 1.94443.3299 2.16493.2511 1.85023.2574 2.07923.1728 2.19903.2056 1.96263.3073 2.04883.2709 2.06903.1754 1.99053.2685 1.9444Table 3.4: Typical Lattice Characteristics: Average Domain Size and De-gree, for 20 20 Lattices; New AlgorithmIt might be helpful to repeat the same experiment for much larger lattices,except we only do it for the new algorithm, as the brute-force one wouldtake too long. Previous work was done using the brute force algorithm, sowe have a ‘literature’ value to compare with. For 100 100 lattices (10 ofthem) the average domain size was 3.4687, and the average domain degreewas 2.0167 (average domain degree was 3.52 in previous work).3.8.2 20 20 Lattices with Connected Paths at pdelete = 0:35In order to measure how well the two algorithms agree, we want to testhow many lattices do contain connected paths out of 100 (instead of 25) forvery small lattices (20 20) for a  xed probability of deletion (0.35). This isonly feasible to calculate for small lattices because the old algorithm is tooslow for larger lattices. The primary problem with this type of comparisonis that 20 20 lattices give relatively unstable results compared with largerlattices 13. 
For pdelete = 0:35, 83 of the lattices did contain a connectedpath according to the brute-force algorithm (7839 s), while there were 74of these lattices with a connected path according to the newer algorithm(84.05 s).13which may ‘average out’ the errors that occur in smaller lattices39Chapter 4ConclusionMeasurement based quantum computation is one potential physical imple-mentation of quantum computers. In particular, using AKLT states andtransforming them into cluster states makes this feasible in a physical set-ting, provided we can prove that such a transformation is robust. Therobustness of an AKLT state’s transformation critically depends on the twonecessary properties for the graph state to be transformed to a cluster state:that the domains are microscopic compared with the size of the lattice, andthat there exists a connected path from the left to right of the lattice. Thefocus of the study was on the e ects of erroneous POVMs on the robustnessof the AKLT state, simulated by an increasing value of pdelete that corre-sponded to erroneous POVMs whose e ect was to delete domains. Previouswork had shown that this yields a percolation problem in which, for valuesof pdelete  0:33, there is always a connected path, but beyond that, thereis no connected path .4.1 Results SummaryThe original motivation of the thesis was to explore other e ects of dif-ferent types of erroneous POVMs on the robustness of the AKLT state.However, signi cant problems arose due to the time-consuming nature ofimplementing the probability weighting algorithm on lattices that simulatethe entanglement of the vertices in the lattice with one another. Thus, thenew motivation was to try to speed up this part of the algorithm. A newalgorithm was proposed in which the preparation of probability-weighted lat-tices could be produced quickly by performing the vertex POVM outcome ipping only on local ‘superdomains’, rather than on the entire lattice. This404.2. Future Workwas achieved by writing up a ‘local’ Union Find algorithm that re-labelledthe relevant domains for the new POVM outcome.We checked the speed increase of this new algorithm. Conceptually, itshould reduce one ‘layer’ of dependence on the lattice size. Checking fordi erent lattice sizes, it was found that the new algorithm appeared to stillhave the same polynomial power dependence on the lattice size as the oldalgorithm. However, the increase in time to complete the task for a givenincrease in lattice size was much smaller than with the older algorithm.It was found that for a single 40 40 lattice, the new algorithm took 3.65seconds while the older algorithm took 1300 seconds.Next, we checked the accuracy in two ways. First by an overview methodof actually varying pdelete and  nding how many lattices (out of 25) still hada connected path. This resulted in a graph that showed a decreased percola-tion threshold (compared with the lattices without probability weighting), ofabout 0.37, but still not the 0.33 value that was obtained before. Second, wechecked the properties of individual lattices by checking the average domainsize and average domain degree. 
This yielded encouraging results becauseboth the brute force algorithm (which we know is correct, but may havehad wrong implementation) and the new algorithm yielded similar valuesfor both.This algorithm for probability weighting is useful in setting up the latticeeven before the POVM e ects are considered; thus, by speeding up the sim-ulation, this algorithm allows us to more e ciently explore di erent POVMe ects in the future.4.2 Future WorkOverall, a general theoretical understanding of the underlying mathematicswould be very good to help examine how accurate the algorithms are, andgive a theoretical idea of how quickly the algorithms should run. Percolationproblems are a problem in probability theory, and so it may be possible toexamine other e ects theoretically other than domain deletion. Using big-Onotation to provide a framework to analyze the e ciency of the algorithms414.2. Future Workwould help us understand whether the new scheme does provide any sort ofadvantage for producing probability-weighted lattices.Another important work that must be done is checking the accuracy ofthe algorithm. Here we presented preliminary checks of accuracy. Clearly ifwe were to use many more lattices and much larger lattices, we could obtainmore information on how accurate the new algorithm is compared with the(correct) brute-force algorithm. Another way to check for the accuracy is touse forced outcomes (rather than probabilistic ones) and see the resultantlattice.The Union-Find algorithm is well known in computer science and manyimplementations are far more e cient than the one used here, so in thefuture, we could use these to increase the e ciency even further, and ensurethat it is implemented correctly.Finally, it would be hoped that more POVM e ects could be analyzedboth theoretically and computationally in the future. It is unfortunate thatthe algorithms took so long to implement, but it is hoped that this improve-ment to the e ciency to the preparation of weighted lattices will be helpfulin future simulations.42Bibliography[1] Ian A eck, Tom Kennedy, Elliot Lieb, and Hal Tasaki. Rigorous re-sults on valence-bond ground states in antiferromagnets. 59(7):799{802,1987.[2] J-L Brylinski and Ranee Brylinski. Universal quantum gates. August2001.[3] J.B. Conway. A Course in Functional Analysis, 2nd ed. Springer, 2007.[4] M.A. Nielsen. Cluster-state quantum computation. April 2005.[5] M.A. Nielsen and I. L. Chuang. Quantum Computation and QuantumInformation.[6] John Preskill. Course Notes Physics 229: Introduction and Overview.1998.[7] T.C. Wei, Ian A eck, and R. Robert. The 2d aklt state is a universalquantum computational resource. November 2010.[8] T.C. Wei, Ian A eck, and R. Robert. The 2d aklt state is a universalquantum computational resource{supplementary information. Novem-ber 2010.[9] T.C. Wei, Ian A eck, and R. Robert. Universal one-way quantumcomputation with two-dimensional aklt states. 2010.[10] Wikipedia. http://en.wikipedia.org/wiki/Shor’s algorithm.[11] Wikipedia. http://en.wikipedia.org/wiki/POVM.43
