Investigation of the Correlated Dynamics of Quantum-Dot Cellular Automata Circuits and Systems

by

Faizal Karim

B.Eng., Ryerson University, 2005
M.A.Sc., The University of British Columbia, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2014

© Faizal Karim 2014

Abstract

Quantum-dot Cellular Automata (QCA) provides a basis for classical computation without transistors. Many simulations of QCA rely upon the Intercellular Hartree Approximation (ICHA), which neglects the possibility of entanglement between cells. While simple and computationally efficient, the ICHA's many shortcomings make it difficult to accurately model the dynamics of large systems of QCA cells. On the other hand, solving a full Hamiltonian for each circuit, while more accurate, becomes computationally intractable as the number of cells increases. This work explores an intermediate solution that exists somewhere in the solution space spanned by the ICHA and the full Hamiltonian.

The solution presented in this thesis builds on the work done by Toth [1] and studies the role that correlations play in the dynamics of QCA circuits. Using the coherence-vector formalism, we show that we can accurately capture the dynamical behaviour of QCA systems by including two-cell correlations.

In order to capture the system's interaction with the environment, we introduce a new method for computing the steady-state configurations of a QCA system using well-known stochastic methods, and use the relaxation-time approximation to drive the QCA system to these configurations. For relatively low temperatures, we show that this approach is accurate to within a few percent, and can be computed in linear time.

QCADesigner, the de facto simulation tool used in QCA research, has been used and cited in hundreds of papers since its creation in 2004.
By implementing computationally accurate and efficient algorithms in the existing simulation engines present in QCADesigner, this research is expected to make a significant contribution to the future of QCA circuit design. In particular, researchers in the field will be able to identify a whole new set of design rules that will lead to more compact circuit designs, realistic clocking schemes, and crosstalk-tolerant layouts. In addition, proper estimates of the power dissipation, pipelining, and limitations of room-temperature operation will now be feasible for QCA circuits of any size; a huge step forward for QCA design.

[1] G. Toth and C. S. Lent, "Role of correlation in the operation of quantum-dot cellular automata," J. Appl. Phys., vol. 89, pp. 7943–7953, June 2001.

Preface

In all chapters, Dr. Konrad Walus and Dr. André Ivanov acted in a supervisory role. Several parts of this thesis have either been previously published, been submitted for publication, or are currently in print.

Portions of Chapter 3 have been published previously in collaboration with Aryan Navabi. I am the principal researcher of this work. I performed the literature review, identified the research problem, developed the research ideas, and wrote all code. I performed all the simulations in this work (except for the data given in Figures 3.3 and 3.4), analyzed the data, and identified the known issues with this method.

Chapter 4 involved collaboration with Marco Taucer under the supervision of Dr. Robert A. Wolkow of the University of Alberta. The key ideas and primary contributions of this chapter belong to Marco Taucer. The experimental designs, data analysis, and interpretation were performed equally by both authors of this work. I wrote the MATLAB code and performed all simulations for the figures shown in Chapter 4, as well as for those submitted as part of the co-authored work.

All figures in this work are used with permission from applicable sources.

List of Publications Included in this Work

F. Karim and K. Walus, "Modelling techniques for simulating large QCA circuits," in Field-Coupled Nanocomputing: Paradigms, Progress, and Perspectives, LNCS, vol. 8280. Springer, Heidelberg, 2014.

F. Karim and K. Walus, "Efficient Simulation of Correlated Dynamics in Quantum-Dot Cellular Automata (QCA)," IEEE Trans. Nanotechnol., vol. 13, no. 2, pp. 294–307, Mar. 2014.

F. Karim and K. Walus, "Calculating the Steady State Polarizations of Quantum Cellular Automata (QCA) Circuits," J. Comput. Electron., 13(37), Apr. 2014.

M. Taucer, F. Karim, R. Wolkow, and K. Walus, "Consequences of Many-cell Correlations in Clocked Quantum-dot Cellular Automata," arXiv, Jul. 2012.

F. Karim, A. Navabi, K. Walus, and A. Ivanov, "Quantum mechanical simulation of QCA with a reduced Hamiltonian," in Proceedings of the 8th IEEE Conference on Nanotechnology, 2008.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
1 Introduction
  1.1 Background And Motivation
  1.2 Thesis Overview
2 Simulation of QCA Systems
  2.1 Full-Basis Quantum Mechanical Treatment
  2.2 Two-State Approximation
  2.3 Intercellular Hartree Approximation
3 Augmented ICHA
  3.1 Numerical Results
  3.2 Known Issues
  3.3 Summary
4 Loss of Polarization in Isolated Bit Packets
  4.1 Summary
5 Coherent Cell-Cell Dynamics
  5.1 The Liouville Equation
  5.2 Dissipative Coupling to the Heat Bath
    5.2.1 The Born-Markov Approximation
    5.2.2 The Lindblad Equation
  5.3 Summary
6 Two-Point Correlations
  6.1 Correlation Proper
  6.2 Approximating Second-Order Correlations
    6.2.1 System of ODEs
  6.3 ODE Solver
  6.4 Summary
7 Steady-State Solutions
  7.1 Monte Carlo Method
    7.1.1 Results
  7.2 Simulated Annealing
    7.2.1 Results
  7.3 Genetic Algorithm
    7.3.1 Results
  7.4 Summary
8 Results
  8.1 Justifying Approximations
  8.2 ICHA vs 2-point Correlations
    8.2.1 Logical Outputs and Dynamics
    8.2.2 Power Dissipation
    8.2.3 Complexity Analysis
    8.2.4 Latency
    8.2.5 Crosstalk
    8.2.6 Depolarization
  8.3 Summary
9 Conclusions
Bibliography

List of Tables

6.1 Butcher Tableau for Dormand-Prince Method
7.1 Steady-State Polarizations for each Circuit in Figure 7.1
7.2 Average Error Per Cell

List of Figures

1.1 Schematic of the basic four-site cell. (a) The geometry of the cell. The inter-dot distance is designated by d. (b) Coulombic repulsion causes the electrons (red) to align along the diagonals of the cell. These two basis states have polarizations of P = +1 and P = −1, respectively.

1.2 Example of a work-around solution to the shortcomings of the current treatments of QCA circuits. The different patterns on the QCA cells represent different clocking zones attached to the cell. The cells in the 2nd clocking zone do not latch onto a polarization until each cell in the 1st clocking zone is fully latched.

1.3 Example of a simple 4-bit processor that requires 24 clock cycles in order to simulate successfully in QCADesigner.

2.1 Number of elements in the 16^N × 16^N Hamiltonian vs. the number of cells in the system.

2.2 Number of elements in the 2^N × 2^N Hamiltonian vs. the number of cells in the system.

2.3 Nonlinear cell-to-cell response function. The output cell is almost completely polarized for even a small driver polarization.

2.4 Parametric simulations of a driven two-cell wire for different tunnelling rates.
(a) The two-cell wire being simulated. (b) Polarization of Cell 1 as a function of the driver cell polarization, calculated using the ICHA. The ICHA predicts a hysteretic response, indicating a memory effect. Where two solutions exist, the one with lowest energy is shown in bold. (c) Polarization of Cell 1 as a function of the driver cell polarization, calculated using the more complete quantum mechanical treatment described by Equation 2.3. This more complete simulation shows no hysteresis. We have taken kBT ≈ Ek/4, which corresponds to Ek ≈ 100 meV for room temperature.

3.1 Example of a majority gate for which the ICHA produces the incorrect output.

3.2 Example of a wire experiencing crosstalk for which the ICHA produces the incorrect output.

3.3 The two-cell correlation propers and kink energy for each cell in the majority gate. Cells with the highest kink energy also appear to have the largest two-cell correlation propers with neighbouring cells.

3.4 The two-cell correlation propers and kink energy for each cell in the wire. Cells with the highest kink energy also appear to have the largest two-cell correlation propers with neighbouring cells.

3.5 Pseudo-code for grouping together cells with large Ek.

3.6 Pseudo-code for grouping together cells with high correlations.

3.7 Layout of the QCA full adder with no crossover.

3.8 Cell groups modelled using a multi-cell Hamiltonian.

3.9 Polarization of cell 11 in the majority gate using the ICHA and the Augmented ICHA methods.

3.10 Polarization of cell 9 in the wire using the ICHA and the Augmented ICHA methods.

3.11 Results for the full adder using the ICHA and the Augmented ICHA methods.
3.12 Layout of two consecutive majority gates.

3.13 Layout of two consecutive majority gates.

3.14 Layout of two consecutive majority gates.

3.15 QCA interconnect built 3-cells wide.

4.1 (a) Spectra of unbiased lines of interacting QCA cells, ranging in length from one to eight cells, with γ/Ek = 0.1. The spectra are solutions of Eq. 2.3, with a constant (N − 1)Ek/2 added for ease of interpretation. The inset shows the difference in energy between the two lowest energy levels for each line. (b) Conceptually, the array can be viewed as a single two-state system with a barrier that increases with the number of cells [46].

4.2 Coherent oscillations of the first cell in unperturbed one-, three-, and five-cell lines. The cells are all initially in the P = 1 state. In each line, the cells oscillate together, so the polarization of the first cell gives a good representation of the polarization of all its neighbours. γ = 10 meV and Ek = 100 meV [46].

4.3 Simulations of a six-cell line with periodic boundary conditions. The value of γ is adiabatically raised and then lowered between 1 < γ < 200 meV, with a period of 217ħ/Ek, with a different phase for each cell. The circled line shows the polarization of each cell in the absence of any energy dissipation. The solid line shows the polarization of each cell with an energy relaxation time of 434ħ/Ek. For reference, a decaying exponential as a function of τ is shown with the dashed line.

4.4 Simulations of a six-cell line with periodic boundary conditions. The value of γ is adiabatically raised and then lowered between 1 < γ < 1000 meV, with a period of 217ħ/Ek, with a different phase for each cell.
The circled line shows the polarization of each cell in the absence of any energy dissipation. The solid line shows the polarization of each cell with an energy relaxation time of 434ħ/Ek. For reference, a decaying exponential as a function of τ is shown with the dashed line. Half a period of a coherent oscillation can be seen over the course of the simulation, flipping the polarization state from negative to positive.

5.1 The s = 15 basis operators for a two-cell system.

5.2 The number of state variables that need to be solved as a function of the number of cells in the circuit. Plots are shown for the different quantum mechanical treatments discussed.

6.1 The maximum value for each of the 27 three-cell correlation propers for wires of different length. The largest three-cell correlation propers do not exceed M = 0.03 for any of the simulated wires.

6.2 The two-cell correlation propers for all pairs of cells in a five-cell wire.

6.3 The two-cell correlation propers for all pairs of cells in a five-cell cross.

6.4 The maximum two-point correlation proper terms for wires of increasing length.

7.1 Three QCA circuits used to compare the accuracy of the three considered stochastic methods.

7.2 Circuit A. The average error per cell versus the number of iterations using the Monte Carlo algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.

7.3 Circuit B. The average error per cell versus the number of iterations using the Monte Carlo algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.
7.4 Circuit C. The average error per cell versus the number of iterations using the Monte Carlo algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.

7.5 The average error per cell versus Ek/kBT for M = 10,000.

7.6 The steady-state polarizations of each cell in (a) a 10-cell array and (b) a 20-cell array for different initial guesses. The steady-state polarizations are averaged over 1,000 simulations. The number of iterations was kept fixed at M = 1,000 for each simulation.

7.7 The run-times of the Monte Carlo method versus the number of cells in an array. The number of iterations for each simulation was adjusted in order to maintain an average error per cell below ±0.01. Simulations were run 1,000 times for each array length. The dashed curve represents a best-fit curve.

7.8 Circuit A. The average error per cell versus the number of iterations using simulated annealing for two exponentially decaying temperature functions. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.

7.9 Circuit B. The average error per cell versus the number of iterations using simulated annealing for two exponentially decaying temperature functions. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.

7.10 Circuit C. The average error per cell versus the number of iterations using simulated annealing for two exponentially decaying temperature functions. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.

7.11 The average error per cell versus Ek/kBT for M = 10,000. The temperature function decays exponentially from 300 K → 1 K.
7.12 The steady-state polarizations of each cell in (a) a 10-cell array and (b) a 20-cell array for different initial guesses. The steady-state polarizations are averaged over 1,000 simulations. The number of iterations was kept fixed at M = 1,000 for each simulation.

7.13 The run-times of the simulated annealing method versus the number of cells in an array. The number of iterations for each simulation was adjusted in order to maintain an average error per cell below ±0.01. Simulations were run 1,000 times for each array length. The dashed curve represents a best-fit curve.

7.14 Circuit A. The average error per cell versus the number of iterations using a genetic algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.

7.15 Circuit B. The average error per cell versus the number of iterations using a genetic algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.

7.16 Circuit C. The average error per cell versus the number of iterations using a genetic algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.

7.17 The average error per cell versus Ek/kBT for M = 1,000. A population size of 10,000 individuals was used.

7.18 The run-times of the genetic algorithm versus the population size at M = 1,000. Simulations were run 1,000 times for each population. The dashed curve represents a best-fit curve.

7.19 The run-times of the genetic algorithm versus the number of cells in an array at M = 1,000. The population size in each simulation was adjusted until the average error per cell dropped below ±0.01. Simulations were run 1,000 times for each array length. The dashed curve represents a best-fit curve.

8.1 Simulations of a QCA wire. The outputs of each wire are shown for different quantum mechanical treatments.
While the solution produced by the ICHA (dashed line) diverges significantly as the length of the wire increases, the 2PC method using only 2-point correlations matches well with the full quantum mechanical solution for all wire lengths.

8.2 The error in the polarization of the output cell of the QCA wire (a) when using the ICHA and (b) when including 2-point correlations. The difference between the 2PC method and the full quantum mechanical solution suggests that modelling QCA circuits with 2-point correlations is sufficient for calculating the dynamics of any given system.

8.3 Two of the simulated circuits. (a) 40-cell wire. (b) A majority gate with uneven input legs.

8.4 The difference in the polarization of the output cell of a QCA wire and majority gate when modelling each with and without nearest-neighbour coupling.

8.5 Polarization of each cell in a 40-cell wire. (a) Modelled using the ICHA. (b) Modelled using the 2PC method. While the ICHA is able to predict the correct logical output, it is unable to accurately predict the dynamics of the wire.

8.6 Polarization of each cell in a majority gate. (a) Modelled using the ICHA. (b) Modelled using the 2PC method. The ICHA is unable to predict the correct logical output or dynamics of the majority gate.

8.7 Power dissipated in a 40-cell wire and a majority gate. (a) Power dissipated as predicted by the ICHA. (b) Power dissipated as predicted by the 2PC method. The ICHA underestimates the amount of power dissipated in the circuits since it does not properly account for the cell-cell interactions.

8.8 Simulation times vs. the circuit size for different numbers of neighbours.
8.9 Example of a 2-bit 4×4 systolic matrix multiplier that requires 33 clock cycles in order to simulate successfully in QCADesigner [74]. © 2010.

8.10 Example of the cm82a MCNC benchmark circuit that requires 6 clock cycles in order to simulate successfully in QCADesigner [75]. © 2008.

8.11 Example of a 4-bit Montgomery modular multiplier circuit that requires 10 clock cycles in order to simulate successfully in QCADesigner [76]. © 2010.

8.12 A Full Adder circuit designed with (a) one clock zone and (b) six clock zones.

8.13 A 2-4 Decoder circuit designed with (a) one clock zone and (b) four clock zones.

8.14 A 4-1 Multiplexer circuit designed with (a) four clock zones and (b) eight clock zones.

8.15 Simulation of crosstalk due to a wire.

8.16 The minimum value of α required to produce a polarization Pout ≥ 0.75, as a function of β, when using the ICHA and 2PC methods.

8.17 3-cell wire with each cell belonging to its own clocking zone.

8.18 Simulations of a three-cell wire. The value of γ is adiabatically raised and then lowered between Ek < γ < 100Ek, with each cell phase-shifted by 90°. The polarization of each cell is shown when modelled using (a) the ICHA, (b) 2-point correlations, and (c) the density matrix approach. The 2PC model is capable of predicting the depolarization that occurs for cells decoupled from the driver cell, whereas the ICHA is not.
Chapter 1
Introduction

1.1 Background And Motivation

In the past several years, incredible advances in the availability of nanofabrication processes have been witnessed, demonstrating molecular-scale production beyond the usable limit for CMOS process technology [1–3]. This has led to the research and early development of a wide range of novel computing paradigms at the nanoscale; amongst them, quantum-dot cellular automata (QCA) [4]. QCA is a nanoelectronic computing paradigm in which an array of cells, each electrostatically interacting with its neighbours, is employed in a locally interconnected manner to implement general-purpose digital circuits [4]. A QCA cell can consist of four quantum dots arranged at the corners of a square, as shown in Figure 1.1. Each cell contains two mobile electrons, which can tunnel between dots. The electrons experience mutual repulsion and, at sufficiently low temperatures, will take either of the two possible antipodal configurations (or a superposition thereof). Because of the symmetry of the cell, neither configuration is preferred in the absence of a perturbation, and the cell can be thought of as a bistable switch: a small push toward one or the other diagonal will cause the electrons to "switch" almost completely. The interaction between cells tends to align the electronic configuration of one cell with that of its neighbours. A line of cells thus acts as a binary wire. Different geometrical arrangements of cells correspond to logic gates, and together enable the design of a universal computer capable of very low power operation [4–8].

Several proof-of-concept QCA devices have been fabricated using silicon-on-insulator (SOI) [9], metallic island devices operating in the Coulomb blockade regime [10–16], and nano-magnetics [17–22].
In recent years, research into implementing these devices using single molecules has also begun to generate significant interest [1,7,23–28], and most recently, it was demonstrated that silicon atomic dangling bonds (DBs), on an otherwise hydrogen-terminated silicon crystal surface, can serve as quantum dots [1].

Figure 1.1: Schematic of the basic four-site cell. (a) The geometry of the cell. The inter-dot distance is designated by d. (b) Coulombic repulsion causes the electrons (red) to align along the diagonals of the cell. These two basis states, |1⟩ and |0⟩, have polarizations of P = +1 and P = −1, respectively.

Research into QCA has been motivated by the potential of reaching the limits of physical scaling, as well as the predictions of ultra-low power consumption, a major problem in the industry [4, 8, 15, 23, 29, 30]. It has been reported that molecular and atomic implementations can achieve device densities on the order of 10^14/cm² (for 1 nm² devices) [31]. The power dissipated by current-switched devices such as FETs operating at GHz speeds could melt the chip at those densities [32]; however, molecular QCA has been predicted to reduce the power dissipation by several orders of magnitude [8,33]. Additionally, as the size of the cells gets smaller, the interaction energies between cells increase. At the molecular (and atomic) scale, these energies are expected to be in the 0.2–0.5 eV range [1,4,34,35], which allows for room-temperature operation, since these energies are greater than the ambient thermal energy (i.e., thermal noise), kBT (∼25 meV at room temperature), where kB is Boltzmann's constant and T is the temperature in kelvin (T = 293 K at room temperature).
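As a quick numerical check of the comparison just quoted, the snippet below (plain Python; the helper name is ours and purely illustrative) evaluates kBT at 293 K and compares it against a mid-range molecular-scale interaction energy of 250 meV:

```python
# Sanity check of the thermal-noise comparison in the text:
# kB*T at room temperature vs. a molecular-scale interaction
# energy in the quoted 0.2-0.5 eV range.

KB_EV_PER_K = 8.617333e-5  # Boltzmann constant, eV/K

def thermal_energy_mev(temperature_k: float) -> float:
    """Return kB*T in meV for a temperature in kelvin."""
    return KB_EV_PER_K * temperature_k * 1000.0

kbt = thermal_energy_mev(293.0)
print(f"kB*T at 293 K: {kbt:.1f} meV")        # ~25 meV, as quoted
print(f"a 250 meV kink energy is {250.0 / kbt:.1f}x larger")
```

The roughly tenfold margin over thermal noise is the Ek/kBT ratio of about 10 that the free-energy argument below relies on.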
Furthermore, while the separation between the ground state and first excited state remains the same as the number of cells, N, in a circuit increases, the increasing degeneracy of the excited state means it is increasingly likely that the system at nonzero temperature will be found in an excited state, likely resulting in an incorrect measurement at the outputs. This can be quantified by considering the Helmholtz free energy of the system [36]:

∆F = Ek[1 − (kBT/Ek) ln(N)],    (1.1)

where Ek is the energy of the first excited state. As the free energy difference approaches zero, thermal smearing of the output cells becomes significant and it becomes harder to unambiguously measure the outputs. However, by controlling the ratio between kBT and Ek (either lowering the temperature or shrinking the cell size so that Ek increases), we can accommodate an arbitrarily large number of cells. If this ratio is 1/10 (approximately the ratio for molecular/atomic-scale cells at room temperature), then N can be as large as 22,000 cells. Thus, for room-temperature operation, a molecular (or atomic) scale implementation is a must.

While the development of nanoscale devices is promising, a number of technical challenges, including the choice of molecules [7,28], the design of proper interfacing mechanisms [37], and clocking technology [38,39], amongst other things, remain to be solved before QCA devices can be implemented. Thus, the study of the dynamics of QCA systems has remained mainly theoretical up until now.

A great deal of theoretical and modelling work has been done on the topic of QCA [40–44]. This research has aimed to capture the qualitative and quantitative characteristics of QCA cells and of arrays of cells. Because of the difficulties inherent in solving the complete quantum mechanical problem, a number of simplifying assumptions are typically made.
These include a reduction of the Hilbert space to two states per cell [42], treatment of inter-cell interactions via a mean-field approach [40,41], and finally an assumption of exponential energy relaxation [8,43,45]. Although much insight has already been gained on the nature and importance of quantum mechanical calculations that go beyond these approximations [42,43], some important features of clocked QCA systems have not been fully studied [46]. In this work, we address these shortcomings and propose a solution based on the work of Toth et al. [43] for modelling the dynamics of QCA systems whilst maintaining the low computational overhead associated with the current treatments. In [43], Toth introduces the coherence vector as a means to separate the state variables of a QCA system, and to remove those that do not significantly contribute to its dynamic response. In this thesis, we further develop this methodology, and use it to model the dynamics of any arbitrary QCA circuit. Furthermore, we develop a new theoretical treatment of the interaction of a QCA system with its environment using well-known stochastic methods. The resulting methodology is able to accurately simulate large QCA circuits in linear time. The results presented here will have implications for molecular- and atomic-scale QCA device design and will change the way QCA circuits are designed and modelled.

The exploration of the dynamics is important because it allows one to accurately predict the power consumption and inherent switching speeds of a given device. And while advances in nanofabrication processes have paved the way for molecular and atomic implementations of QCA cells, functional electronic QCA building blocks and circuits have yet to be demonstrated at the nanoscale; thus, the need for reliable numerical methods to accurately model this technology is paramount.
A simulation tool, QCADesigner, currently exists for this technology [47–50] and has been applied towards the high-level design and exploration of both sequential and combinational circuits [5,6,51]. QCADesigner is indeed the de facto simulation tool used in QCA research; however, in its current form, since no simulation engine in QCADesigner can accurately simulate the behaviour of all QCA circuits, circuit designers are working around the existing shortcomings through modifications to their circuits. One such work-around is shown in Figure 1.2. Here, the designers clock the cross-region of the majority gate separately, which effectively waits for all signals to arrive prior to allowing them to influence the gate cell. However, in practice, it will be very difficult to achieve such a high-resolution clocking zone. Furthermore, even if it were possible to clock QCA cells with such a high resolution, clocking each majority gate in a circuit this way will lead to deep pipelining, which will result in high circuit latency. To illustrate this point, consider the simple 4-bit processor in Figure 1.3, where the cross-region of each majority gate in this design is clocked separately. In total, 96 clock zones were required to correctly simulate this circuit in QCADesigner, resulting in a latency of 24 clock cycles (96 clock zones / 4 clock zones per cycle) per instruction. Such measures prevent the modelling of practically realizable circuits, and thus limit the progress that can be made at the design level. By implementing computationally accurate and efficient algorithms in the existing simulation engines present in QCADesigner, this research is expected to make a significant contribution to the future of QCA circuit design. As part of this work, an upgrade to QCADesigner's current time-dependent solver has been implemented.

Figure 1.2: Example of a work-around solution to the shortcomings of the current treatments of QCA circuits.
The different patterns on the QCA cells represent the different clocking zones attached to the cells. The cells in the 2nd clocking zone do not latch onto a polarization until each cell in the 1st clocking zone is fully latched.

Figure 1.3: Example of a simple 4-bit processor that requires 24 clock cycles in order to simulate successfully in QCADesigner. (The figure reproduces the full QCA layout of the processor from its source; each component, including the interconnect between components, has some latency associated with it, creating a system-wide pipeline.)

1.2 Thesis Overview

The rest of this thesis is organized as follows:

In Chapter 2 we summarize the various numerical methods used for solving systems of QCA cells, including the full basis treatment, two-state approximation, and the intercellular Hartree approximation (ICHA). In Chapter 3, we describe a method we call the augmented ICHA, which extends the ICHA by including multi-cell correlations in the highly correlated regions of a circuit.
The loss of polarization in isolated bit packets is explored in Chapter 4, and the coherence vector and its application towards solving for the dynamics and thermal relaxation of QCA circuits is described in detail in Chapter 5. We discuss the effect of higher-order correlations in Chapter 6, and stochastic methods for computing the steady-state polarizations of a QCA circuit in Chapter 7. Lastly, the thesis concludes with results from our proposed method in Chapter 8.

Chapter 2
Simulation of QCA Systems

In the proposed QCA implementations, a QCA cell will feature 4–6 quantum dots and two mobile electrons [1,23,52]. A schematic diagram of a single QCA cell is shown in Figure 1.1(a). This figure shows a cell consisting of four quantum dots arranged in a square pattern. The two basis states of the QCA cell considered in this work are also shown. While cells containing five or six dots have been shown to improve the behaviour of QCA devices [23,30,38], they greatly increase the numerical complexity of the cell model, and thus we limit the scope of this work to four-dot cells. The two mobile electrons are free to tunnel between adjacent dots; however, tunnelling out of the cell (or between cells) is assumed to be completely suppressed.

2.1 Full-Basis Quantum Mechanical Treatment

For the single four-dot cell considered in this work (with two electrons of opposite spin), there are a total of sixteen underlying basis vectors. Modelling a single cell within the full sixteen-dimensional Hilbert space makes it possible to capture the full dynamics of the cell, and represents the most complete way of modelling a QCA cell [42]. To maintain the full many-electron degrees of freedom of an N-cell system, however (including the correlation and exchange effects between cells), one must treat the entire N-cell system as a single quantum system by constructing a new basis set consisting of the direct product of all combinations of the single-cell basis vectors.
Thus, a complete basis set for an N-cell system would require 16^N basis vectors. A 16^N × 16^N tight-binding Hubbard-type Hamiltonian for this system can be constructed using the standard second-quantized notation [42]. In this model, quantum dots are treated as sites, ignoring any degrees of freedom internal to the dot. The Hamiltonian for an N-cell circuit can be written as

\hat{H} = \sum_{i,\sigma,m} E_0\,\hat{n}_{i,\sigma}(m)
- \sum_{i>j,\sigma,m} t_{i,j}\left(\hat{a}^{\dagger}_{i,\sigma}(m)\hat{a}_{j,\sigma}(m) + \hat{a}^{\dagger}_{j,\sigma}(m)\hat{a}_{i,\sigma}(m)\right)
+ \sum_{i,m} E_Q\,\hat{n}_{i,\uparrow}(m)\hat{n}_{i,\downarrow}(m)
+ \sum_{i>j,\sigma,\sigma',m} V_Q \frac{\hat{n}_{i,\sigma}(m)\hat{n}_{j,\sigma'}(m)}{|\mathbf{r}_i(m)-\mathbf{r}_j(m)|}
+ \sum_{i>j,\sigma,\sigma',k>m} V_Q \frac{\hat{n}_{i,\sigma}(m)\hat{n}_{j,\sigma'}(k)}{|\mathbf{r}_i(m)-\mathbf{r}_j(k)|},   (2.1)

where the operator \hat{a}_{i,\sigma}(m) (\hat{a}^{\dagger}_{i,\sigma}(m)) annihilates (creates) an electron in the ith site of cell m with spin σ, the operator \hat{n}_{i,\sigma}(m) \equiv \hat{a}^{\dagger}_{i,\sigma}(m)\hat{a}_{i,\sigma}(m) is the number operator for the ith site of cell m, and V_Q = q_e^2/(4\pi\epsilon) is a constant, where q_e is the charge of the electron and \epsilon is the electrical permittivity of the medium. The first term in Eq. (2.1) represents the on-site energy of a dot. The second term describes electron tunnelling, where t_{i,j} is a hopping constant (with units of energy) between neighbouring sites i and j, determined from the structure of the potential barriers between the dots in the cell. The third term in Eq. (2.1) accounts for the energetic cost, E_Q, of putting two electrons of opposite spin at the same site, and the final two terms are related to the Coulombic interactions between electrons in the same cell and in neighbouring cells, respectively. The polarization of each cell can be found by evaluating

P_m = \frac{(\rho^m_1 + \rho^m_3) - (\rho^m_2 + \rho^m_4)}{\rho^m_1 + \rho^m_2 + \rho^m_3 + \rho^m_4},   (2.2)

where \rho^m_i is the expectation value of the number operator on the ith site of cell m, i.e., \rho^m_i = \langle\hat{n}_i(m)\rangle. Using this treatment, it has been shown that the dynamics of small arrays can indeed be solved for directly while retaining the full many-electron degrees of freedom [42].
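As a small numerical illustration of Eq. (2.2), the helper below (a sketch, not taken from the thesis code) computes a cell's polarization from the four dot occupations, assuming dots 1 and 3 sit on one diagonal of the cell and dots 2 and 4 on the other:

```python
import numpy as np

def cell_polarization(rho):
    """Polarization of one four-dot cell from the dot-occupation
    expectation values rho = [rho1, rho2, rho3, rho4] (Eq. 2.2).
    Dots 1 and 3 lie on one diagonal, dots 2 and 4 on the other."""
    rho = np.asarray(rho, dtype=float)
    return ((rho[0] + rho[2]) - (rho[1] + rho[3])) / rho.sum()

# A cell fully localized on the 1-3 diagonal is fully polarized:
print(cell_polarization([1.0, 0.0, 1.0, 0.0]))  # -> 1.0
```

A fully delocalized cell (all four occupations equal) gives P = 0, matching the "off" state discussed later in this chapter.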
However, the exponential growth in the basis set makes it computationally prohibitive to model any device larger than just a few cells. Figure 2.1 shows the number of elements in the 16^N × 16^N Hamiltonian as a function of the number of cells in the system. As we can see, the number of elements in the Hamiltonian exceeds 1 billion for just four cells. For this reason, if we expect to model the dynamics of large QCA systems, then some approximate techniques are needed in order to reduce the size of the basis set required for larger circuits. The rest of this chapter describes the most commonly used of these approximations.

Figure 2.1: Number of elements in the 16^N × 16^N Hamiltonian vs. the number of cells in the system.

2.2 Two-State Approximation

The first of such approximations is the two-state approximation. As shown in [42], the ground state of a single cell remains almost completely contained within a two-dimensional subspace of the full sixteen-dimensional Hilbert space. For simplicity, we will consider idealized cells with saturation polarizations of ±1. A more realistic treatment would have the polarizations spanning a slightly smaller range; however, the basic argument that follows would be unchanged. We therefore refer to our reduced basis as the polarization basis, and denote the two states as |0⟩ and |1⟩, as shown in Figure 1.1. The Hamiltonian in the polarization basis for a system of N interacting QCA cells, under the influence of driver cells, is described by a 2^N × 2^N Ising-like Hamiltonian:

\hat{H} = -\sum_{i=1}^{N}\gamma_i\,\hat{\sigma}_x(i) - \frac{1}{2}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} E^{i,j}_k\,\hat{\sigma}_z(i)\hat{\sigma}_z(j) + \frac{1}{2}\sum_{i=1}^{N}\sum_{D\in\Omega_i} E^{i,D}_k P_D\,\hat{\sigma}_z(i),   (2.3)

where \Omega_i represents the neighbourhood of driver cells that have an appreciable effect on cell i, and P_D is the polarization of those drivers. The driver cells have polarizations that can range from −1 to +1. The tunnelling energy,
γ, takes the place of the transverse magnetic field in the Ising spin model, and is determined from the nature of the potential barriers between the dots in the cell. When the potential barriers are raised, γ is small, and the cells are allowed to become polarized (i.e., take on one of the antipodal configurations). Cells in this state are considered to be “on”, and capable of influencing neighbouring cells. When the potential barriers are lowered, the opposite is true, and the electrons become delocalized; i.e., the cells are unpolarized, and are considered to be “off”. By modulating the tunnelling barriers in each cell, we can dictate which cells are “on” and which ones are “off”, and thus control the propagation of bit packets throughout a QCA circuit. It is expected that the clocking of QCA circuits will be achieved in this way. E^{i,j}_k is the kink energy between cells, and is the energetic cost of two cells having opposite polarization. In general, the interaction between cells can be described by a quadrupole–quadrupole interaction in which the kink energy decreases as the fifth power of the distance between cells. Thus it is typical to only consider nearest-neighbour coupling within the Hamiltonian, as the kink energy between next-to-nearest neighbours is reduced by a factor of 1/32. The kink energy is analogous to the coupling constant, J, in the Ising spin model. The Pauli operators, \hat{\sigma}_a(i), a = x, y, z, represent the tensor product of N 2×2 identity operators, with the ith identity operator replaced by the usual Pauli matrix shown below. For example, \hat{\sigma}_y(2) \equiv \mathbb{1}\otimes\hat{\sigma}_y\otimes\mathbb{1}\otimes\ldots\otimes\mathbb{1}. The polarization of a cell, i, is defined as P_i = -\langle\hat{\sigma}_z(i)\rangle.

\hat{\sigma}_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \hat{\sigma}_y = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}, \quad \hat{\sigma}_z = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.

In further chapters of this thesis, we will use the two-state approximation to study systems of interacting QCA cells.
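As an illustration, the embedded Pauli operators and the Ising-like Hamiltonian of Eq. (2.3) can be assembled with Kronecker products. The Python/NumPy sketch below is illustrative (not QCADesigner's implementation); the argument names `gamma`, `Ek`, and `drive` are our own, where `drive[i]` stands for the precomputed driver sum \sum_D E^{i,D}_k P_D for cell i.

```python
import numpy as np

# Pauli matrices in the polarization basis, matching the text's convention
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[-1, 0], [0, 1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site, op, N):
    """Embed a single-cell operator at position `site` in an N-cell
    tensor product, e.g. sigma_z(i) = 1 x ... x sz x ... x 1."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == site else I2)
    return out

def ising_hamiltonian(gamma, Ek, drive):
    """2^N x 2^N Hamiltonian of Eq. (2.3).
    gamma[i]: tunnelling energies; Ek[i][j]: kink energy between
    cells i and j (upper triangle used); drive[i]: driver term
    sum_D Ek(i, D) * P_D for cell i. Names are illustrative."""
    N = len(gamma)
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        H -= gamma[i] * op_on(i, sx, N)               # tunnelling term
        H += 0.5 * drive[i] * op_on(i, sz, N)          # driver term
        for j in range(i + 1, N):                      # cell-cell kink term
            H -= 0.5 * Ek[i][j] * op_on(i, sz, N) @ op_on(j, sz, N)
    return H
```

Diagonalizing the returned matrix (e.g. with `np.linalg.eigvalsh`) gives the system's energy spectrum, and the 2^N scaling of the matrix dimension is visible directly in the construction.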
Even within this approximation, however, the exponential growth in the basis set limits its application to systems containing only a small number of interacting cells. From Figure 2.2, a fifteen-cell system, for instance, would yield over a billion elements in our Hamiltonian. In practice, QCA circuits will likely require millions of cells. The simple 4-bit processor shown in Figure 1.3, for example, was designed using almost 30,000 cells. Thus, further approximations are still required in order to reduce the computational complexity of modelling large QCA systems.

Figure 2.2: Number of elements in the 2^N × 2^N Hamiltonian vs. the number of cells in the system.

2.3 Intercellular Hartree Approximation

In the more complete quantum mechanical treatments discussed above, arrays of QCA cells are solved for directly while retaining the full many-electron degrees of freedom. The inclusion of all of these additional variables contributes to the exponential growth in the basis set as the system grows with the number of cells, N. Thus, in order to mitigate this growth, some quantum degrees of freedom must be removed during calculations. One such approach is to use the intercellular Hartree approximation (ICHA), for which the effects arising from quantum correlations and exchange are fully neglected [40,41]. In this treatment, an N-cell system is decoupled into a set of N single-cell subsystems that are assumed to interact classically through the expectation values of their polarization without any quantum mechanical coherence between neighbouring cells. Using this approximation, it is only necessary to diagonalize N 2×2 Hamiltonians as opposed to one 2^N × 2^N Hamiltonian.
This allows large circuits, which would otherwise be intractable using a full quantum mechanical model, to be simulated on a classical computer. The Hamiltonian (in the polarization basis) for a single cell, i, is simply

\hat{H}_i = -\gamma_i\,\hat{\sigma}_x + \frac{1}{2}\sum_{j\in\Omega} E^{i,j}_k P_j\,\hat{\sigma}_z,   (2.4)

where P_j is the polarization of cell j, and is found by evaluating P_j = -\langle\hat{\sigma}_z\rangle. Because the solutions of one cell's Hamiltonian define parameters that enter its neighbours' Hamiltonians, the system of coupled Schrödinger equations must be solved iteratively to obtain self-consistency. Typically, when calculating the state of the system at any given instant in time, the initial guess is taken to be the previous state of the system. If a circuit evolves adiabatically and cells remain in the ground state, then this problem simplifies even further by recognizing that the polarization of any cell, i, can be evaluated numerically using [53]

P_i = \frac{\frac{1}{2\gamma}\sum_j E^{i,j}_k P_j}{\sqrt{1 + \left(\frac{1}{2\gamma}\sum_j E^{i,j}_k P_j\right)^2}},   (2.5)

which results in the well-known nonlinear cell-to-cell response function shown in Figure 2.3. Thus the ICHA carries with it extremely low computational complexity, making it very desirable when solving large arrays of QCA cells. However, while the ICHA is generally capable of arriving at the correct ground state of an array of QCA cells in a single clocking zone with a fixed driver, it has been shown that it does not always arrive at the correct ground state for many other types of circuits [43,44], and it can falsely predict a latching mechanism within a group of cells that allows them to retain (and even obtain) a polarization in the absence of a perturbing cell. This goes beyond previously noted inaccuracies of the ICHA, such as in dynamics and finite temperature behaviour [42], where the many-cell excited states are needed to get quantitatively correct results.

To illustrate the effects of the ICHA on the calculated ground state of QCA arrays, consider the driven two-cell wire shown in Figure 2.4(a).
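The self-consistent iteration behind Eqs. (2.4)–(2.5) is simple to sketch numerically. The following Python sketch is illustrative (not QCADesigner's implementation): it iterates the response function of Eq. (2.5) over a driven wire, assuming nearest-neighbour coupling only and an all-zero initial guess.

```python
import numpy as np

def response(x):
    """Nonlinear cell-to-cell response of Eq. (2.5), where
    x = (1 / 2 gamma) * sum_j Ek(i, j) * P_j."""
    return x / np.sqrt(1.0 + x * x)

def icha_wire(P_driver, n_cells, Ek=1.0, gamma=0.05, tol=1e-10):
    """Self-consistently solve a driven wire of n_cells with the ICHA,
    coupling each cell only to its nearest neighbours. A minimal
    sketch; parameter names and values are illustrative."""
    P = np.zeros(n_cells)  # initial guess: all cells unpolarized
    while True:
        P_new = np.empty_like(P)
        for i in range(n_cells):
            left = P_driver if i == 0 else P[i - 1]
            right = 0.0 if i == n_cells - 1 else P[i + 1]
            P_new[i] = response(Ek * (left + right) / (2.0 * gamma))
        if np.max(np.abs(P_new - P)) < tol:  # self-consistency reached
            return P_new
        P = P_new

print(icha_wire(1.0, 4))  # all cells strongly polarized for small gamma
```

With γ ≪ E_k the fixed point saturates near ±1, reproducing the sharply nonlinear response of Figure 2.3; sweeping `P_driver` while warm-starting each solve from the previous solution is what produces the hysteresis discussed next.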
Two simulations were conducted on the wire: the first using the ICHA, and a second using the more complete quantum mechanical treatment discussed in Chapter 2.2. For each simulation, the driver cell polarization was swept between −1 ≤ P ≤ +1, and repeated for cycles with different tunnelling rates. The polarization of the first cell in each of these simulations is plotted in Figures 2.4(b)–(c).

Figure 2.3: Nonlinear cell-to-cell response function. The output cell is almost completely polarized for even a small driver polarization.

Let us first consider the results from the ICHA simulation. When using the ICHA, one has freedom in choosing an initial guess for the polarizations. The most common method in dynamic simulations is to use the solution from the previous time step as an initial guess for the current time step. Cell 1's response to the driver cell follows the hysteresis curve shown in Figure 2.4(b), which is characteristic of memory, and typical of ferromagnetic rather than electronic materials. When a magnetic field is applied to a ferromagnet, electron spins in the ferromagnetic material align themselves with it. When the field is removed, part of the alignment is retained, and the material remains (at least partially) magnetized until a magnetic field in the opposite direction is applied. The ICHA predicts a similar behaviour for QCA cells, with the role of the external magnetic field, in this case, played by the driver cell. When the driver cell is turned on, both the driven cells align themselves with the driver polarization. As the driver cell polarization is removed, the coupling between the two driven cells (through expectation values) allows them to drive each other and retain part of their polarization even as the driver cell polarization reaches zero.
Only after a sufficiently strong driver polarization in the opposite direction do both cells completely depolarize, i.e., when \sum_j E^{i,j}_k P_j = 0. The strength of the driver polarization required to relax the driven cells depends solely on the ratio γ/E_k; the lower the tunnelling rate, the larger the residual polarization of the driven cells as the driver cell's polarization is removed. In other words, at slow tunnelling rates, a QCA cell can hold onto its polarization well after the driver polarization has been removed. Higher tunnelling rates, however, would likely require the cells to be clocked in the same clock zone in order to preserve the polarization down a line of cells. The other method that can be used with the ICHA is to sample the space of polarizations in search of the self-consistent solution with the lowest energy. For certain driver cell polarizations, there are two self-consistent solutions. In Figure 2.4(b), the lowest energy solution is shown in bold. If the lowest energy solution is always used, the ICHA predicts a discontinuity in the response of the driven cells at zero driver cell polarization. As the driver's polarization crosses zero, the cells respond by abruptly “snapping” to the other polarization state. The ICHA is not typically used in this way, however.

For the same simulations conducted using a more complete quantum mechanical treatment, we witness no such hysteresis, as shown in Figure 2.4(c). Here, as the driver polarization is removed, Cell 1 also relaxes to zero polarization. Thus, there is a “depolarizing” effect that occurs when the driver polarization is removed that is not predicted when treating QCA systems using the ICHA. This issue will be discussed in further detail in Chapter 4 of this thesis.
While the ICHA offers a great deal of simplicity and low computational overhead, it represents, in a sense, the minimum inclusion of quantum mechanical effects, in that correlations are completely ignored except within the Hilbert space of each cell. For these reasons, it is useful to develop other techniques if we aim to accurately model the time-dependence of many-cell devices. It should be noted, however, that for such cases where the tunnelling rates are very slow (≳ 100 µs), we witness a crossover from a regime where quantum tunnelling is important to one where electrons are completely localized. The normal relaxation dynamics are suppressed because the system is strongly coupled to the environment through σ_z, which has the effect of localizing charge [54]. In such cases, the ICHA is capable of predicting the correct ground state of a QCA system. However, in molecular and atomic implementations, it is likely that the range of tunnelling rates that allows this crossover from quantum dynamics to classical dynamics by directly modulating tunnel barriers will not be achievable. While these simulations present a critical issue for classical computation using clocked QCA at the molecular and atomic scale, it is important to note that this loss of polarization occurs only when a set of cells becomes isolated from a perturbing influence such as a fixed driver cell, and thus QCA systems operating within a single clocking zone will not experience such an effect.

Figure 2.4: Parametric simulations of a driven two-cell wire for different tunnelling rates. (a) The two-cell wire being simulated. (b) Polarization of Cell 1 as a function of the driver cell polarization, calculated using the ICHA.
The ICHA predicts a hysteretic response indicating a memory effect. Where two solutions exist, the one with lowest energy is shown in bold. (c) Polarization of Cell 1 as a function of the driver cell polarization, calculated using the more complete quantum mechanical treatment described by equation 2.3. This more complete simulation shows no hysteresis. We have taken k_BT ≈ E_k/4, which corresponds to E_k ≈ 100 meV for room temperature.

Chapter 3
Augmented ICHA²

In the previous chapter, we established that for cells with fast tunnelling rates, the ICHA is likely limited to circuits consisting of a single clock zone. However, even for circuits operating under these conditions, the ICHA can still predict incorrect ground state polarizations. As an example, consider the majority gate in Figure 3.1, which is known to produce the incorrect ground state result when modelled using the ICHA [56]. One of the input legs is only one cell long and is coupled to driver cell Pdriver3, which has a polarization P = −1. The other two input legs are each four cells in length and are driven by cells Pdriver1 and Pdriver2, each having polarization P = +1. When simulated using the ICHA, the output cell takes on the polarization as determined solely by Pdriver3. The output is shown in Figure 3.9(a), with time measured in the natural units of ħ/E_k. Since the ICHA couples cells only via \sum_j E^{i,j}_k P_j (see Eq. (2.4)), cells will only interact with each other once they have obtained a polarization. As such, the bit packets travelling from Pdriver1 and Pdriver2 take much longer to arrive at the gate cell (cell 5) than the one travelling from Pdriver3; hence, the output is dominated by the closest input. However, this does not represent the lowest energy state of the circuit [56].

The ICHA's shortcomings are not limited to majority gates with uneven input legs, but can also be shown to occur in other QCA circuitry such as the layout shown in Figure 3.2.
Here, a long wire is experiencing crosstalk due to the presence of a single cell in a neighbouring wire. In this scenario, the ICHA results in cell 8 being so strongly influenced by the interfering cell that it takes on a polarization as determined by this cell (whose polarization is P = +1) and not by the driver cell (P = −1), as would be expected from solving the full Hamiltonian for the system. The output of cell 9 is shown in Figure 3.10(a). Again, we measure time in the natural units of ħ/E_k.

While the many shortcomings of the ICHA have been well documented, the computational ease of the ICHA should still allow it to serve as a logic simulator and play a role in quickly verifying circuit behaviour. To this effect, a heuristic method for solving QCA circuits for which the ICHA incorrectly predicts the ground state wave function was initially proposed in [43,57] and further pursued in our work [44].

² This chapter is the outcome of joint research [55].

Figure 3.1: Example of a majority gate for which the ICHA produces the incorrect output.

Figure 3.2: Example of a wire experiencing crosstalk for which the ICHA produces the incorrect output [44].

Here, we proposed modelling the highly-correlated areas of a QCA circuit with a multi-body Hamiltonian, and the remainder of the circuit using the ICHA. In theory, since the correlated behaviour of QCA circuits would be treated with a more complete quantum mechanical treatment, this method would overcome the primary shortcoming of the ICHA.

To identify the highly-correlated cells in a given circuit, we calculate the correlation proper between each pair of neighbouring cells using Eq. (3.1) [57]:

M_{zz}(i,j) = K_{zz}(i,j) - \lambda_z(i)\lambda_z(j),   (3.1)

where K_{zz}(i,j) is the two-point correlation term between cells i and j, and \lambda_z is the single-cell coherence term. K_{zz} and \lambda_z can be found by computing

K_{zz}(i,j) = \langle\hat{\sigma}_z(i)\hat{\sigma}_z(j)\rangle,   (3.2)

and

\lambda_z(i) = \langle\hat{\sigma}_z(i)\rangle,   (3.3)

respectively.
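For a pure state in the polarization basis, Eqs. (3.1)–(3.3) can be evaluated directly. The Python/NumPy sketch below is illustrative (not the thesis code); it builds the embedded σ̂_z operators by Kronecker products and evaluates the correlation proper for an N-cell state vector.

```python
import numpy as np

sz = np.array([[-1.0, 0.0], [0.0, 1.0]])

def op_on(site, op, N):
    """Embed a 2x2 operator at cell `site` in an N-cell tensor product."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, op if k == site else np.eye(2))
    return out

def correlation_proper(psi, i, j, N):
    """Mzz(i, j) = <sz(i) sz(j)> - <sz(i)><sz(j)>  (Eqs. 3.1-3.3)
    for a pure N-cell state psi in the polarization basis."""
    Kzz = psi.conj() @ (op_on(i, sz, N) @ op_on(j, sz, N)) @ psi
    lam_i = psi.conj() @ op_on(i, sz, N) @ psi
    lam_j = psi.conj() @ op_on(j, sz, N) @ psi
    return float((Kzz - lam_i * lam_j).real)

# A two-cell state (|00> + |11>)/sqrt(2) is maximally correlated:
psi = np.zeros(4)
psi[0] = psi[3] = 1.0 / np.sqrt(2)
print(correlation_proper(psi, 0, 1, 2))  # close to 1 (maximum)
```

A product state (e.g. |00⟩) gives M_zz = 0 even though K_zz = 1, which is why the single-cell terms must be subtracted before using the result as a correlation measure.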
The maximum correlation proper between any pair of cells is one, which indicates that the polarizations of the two cells are completely co-dependent. However, calculating the correlation proper requires information that is only available to us after solving the circuit using the full two-state Hamiltonian described in Eq. (2.3). Thus, for this method to be effective, we need to identify the highly-correlated cells in the circuit a priori.

In [44], we proposed a rudimentary algorithm that pre-determined highly correlated groups of cells based on their total intercellular coupling. The choice of using the cell's kink energy as a metric to determine how strongly correlated it is to its neighbouring cells is justified from the results shown in Figures 3.3 and 3.4. These figures show the maximum correlation propers (Eq. (3.1)) and the magnitude of the total kink energy (i.e., \sum|E_k|) for each cell in the majority gate and wire, respectively. The cells in Figures 3.3(a) and 3.4(a) are split to show each cell's correlations to all of its neighbouring cells. For both circuits, the cells with the largest E_k are also the most correlated with their neighbouring cells. However, the progressive increase in cell-cell correlations is not reflected in the kink energy.

Figure 3.3: The two-cell correlation propers and kink energy for each cell in the majority gate. Cells with the highest kink energy also appear to have the largest two-cell correlation propers with neighbouring cells.

Figure 3.4: The two-cell correlation propers and kink energy for each cell in the wire. Cells with the highest kink energy also appear to have the largest two-cell correlation propers with neighbouring cells.

Two algorithms were written for this work. The first of these algorithms groups cells in a circuit together based on their E_k. The second refines
these groups, post-simulation, based on the two-cell correlations exhibited between the cells within these groups. The pseudo-codes for both of these algorithms are shown in Figures 3.5 and 3.6. In Algorithm 1, the input to createHam() is a circuit, C. In the algorithm, the total intercellular coupling, \sum|E_k|, for each cell in C is computed and compared to a threshold value to determine whether or not the cell should be included in a separate Hamiltonian. If a given cell, c, is not within a radius, r, of any of the existing cells in cell_group(i), then it is checked against the next cell group, cell_group(i+1), and so on until a cell group is found. If no existing cell group contains any cells within the specified radius, a new group is formed beginning with cell c. Once every cell has been observed, all cell groups are collected into H_group. H_group now consists of all the groups of cells that will require a separate Hamiltonian during simulation. In the second algorithm, regroupHam() takes in the previously found H_group as its input, and uses the two-cell correlation proper equation to measure how correlated each pair of cells is to one another. If, for two given cells, c1 and c2, M_zz(c1, c2) is above the specified threshold, then both c1 and c2 are kept in their current cell grouping.
If any cell is not highly correlated to any of its neighbours, then it is removed from the cell group and refinement is achieved.

Algorithm 1 createHam(C)
1:  E0k = kink energy between two neighbouring cells (intercellular coupling);
2:  δ = threshold value;
3:  r = radius of effect;
4:  while there exist unobserved cells in C do
5:      i = 1;
6:      c = a cell in C that has not been observed;
7:      |Eck| = sum of the kink energies between cell c and all other cells in C;
8:      if Eck > 2|E0k| + δ
9:          while i ≤ total number of cell groups do
10:             if distance between cell c and any cell in cell_group(i) ≤ r
11:                 place cell c in cell_group(i);
12:                 break;
13:             else
14:                 i++;
15:             end if
16:         end while
17:         if i == total number of cell groups
18:             place cell c in cell_group(i + 1);
19:         end if
20:     end if
21: end while
22: H_group = cell_group;
23: return H_group;

Figure 3.5: Pseudo-code for grouping together cells with large E_k.

Algorithm 2 regroupHam(H_group)
1:  ε = threshold value;
2:  i = 1;
3:  while i ≤ number of cell groups in H_group do
4:      while there exists an unobserved pair of cells in H_group(i) do
5:          c1, c2 = pair of cells in H_group(i) that has not been observed;
6:          if M_zz(c1, c2) > ε
7:              place cells c1, c2 in cell_group(i);
8:          end if
9:      end while
10:     i++;
11: end while
12: H_group = cell_group;
13: return H_group;

Figure 3.6: Pseudo-code for grouping together cells with high correlations.

3.1 Numerical Results

The majority gate, wire, and full adder were simulated using the algorithm. The layout of the full adder is shown in Figure 3.7. Based on the layouts of the simulated circuits, our algorithm grouped cells 4, 5, 6, and 10 together for the majority gate, and cells 7 and 8 for the long wire, as shown in Figure 3.8. The cell groupings for the full adder are shown in Figure 3.8(c). In all cases, each circuit was simulated by modelling each of its cells as a 2×2 Hamiltonian and coupling them classically through expectation values (ICHA).
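The grouping step of Algorithm 1 can be sketched compactly in Python. The `Cell` container, its attribute names (`pos`, `total_Ek`), and the `dist` helper below are illustrative assumptions, not taken from QCADesigner's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Cell:
    pos: tuple      # (x, y) position of the cell -- illustrative field name
    total_Ek: float # sum of kink energies to all other cells, |Ec_k|

def dist(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def create_ham_groups(cells, E0k, delta, r):
    """Sketch of Algorithm 1 (createHam): collect strongly coupled cells
    into groups, merging a cell into the first existing group that has a
    member within radius r, and starting a new group otherwise."""
    groups = []
    for c in cells:
        if c.total_Ek <= 2 * abs(E0k) + delta:
            continue  # weakly coupled cell: leave it to the plain ICHA
        for g in groups:
            if any(dist(c.pos, other.pos) <= r for other in g):
                g.append(c)
                break
        else:
            groups.append([c])  # no nearby group found: start a new one
    return groups
```

Each returned group corresponds to one multi-cell Hamiltonian; the post-simulation refinement of Algorithm 2 would then prune group members whose two-cell correlation propers fall below the correlation threshold.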
The grouped cells, which are enclosed in the boxes, were modelled using separate Hamiltonians (a 16×16 Hamiltonian for the majority gate, for example), and then coupled classically to the rest of the system. For each simulation, γ is slowly lowered (which has the effect of increasing the size of the tunnelling barriers), allowing the cells in each circuit to polarize. Once the cells are fully polarized, γ is slowly increased, allowing cells to return to their initial unpolarized state. The majority gate was simulated using the input pattern P = {+1, +1, −1} for inputs Pdriver1, Pdriver2 and Pdriver3, respectively, while the lone input for the wire was P = −1. The full adder was simulated using inputs ABCin = {−1, −1, +1}. The results are shown in Figures 3.9–3.11. In each case, using a threshold value δ = E_k^max/10, our algorithm was able to settle to the correct ground state when the ICHA failed to do so.

Further refinement was achieved for the majority gate, reducing the number of grouped cells such that it only included cells 4 and 5. Further refinement was also achieved for the full adder. In both cases, the reduced Hamiltonian model was able to correctly find the ground state.

Figure 3.7: Layout of the QCA full adder with no crossover.

Figure 3.8: Cell groups modelled using a multi-cell Hamiltonian. (a) Majority Gate. (b) Crosstalk. (c) Full Adder.

Figure 3.9: Polarization of cell 11 in the majority gate using the ICHA and the Augmented ICHA methods.

Figure 3.10: Polarization of cell 9 in the wire using the ICHA and the Augmented ICHA methods.
Figure 3.11: Results for the full adder using the ICHA and the Augmented ICHA methods.

This algorithm has since been included in QCADesigner as part of a new simulation engine available to the user. In this new simulation engine, the user can either manually select which cells to group together or have the simulator do it automatically based on the kink energies. Post-simulation refinement is also performed based on the two-point correlations. The kink energy threshold, δ, and the correlation threshold, ε, are both user-defined parameters in the options menu. Using this simulation engine, circuits with more realistic clocking zones can now be designed and simulated with a higher level of accuracy.

3.2 Known Issues

While this method proved useful for a number of small circuits and QCA building blocks, it contains a few shortcomings that limit its overall effectiveness. The first known issue occurs in circuits for which there are multiple groups of correlated cells. Because the different groups of cells evolve independently of one another, their outputs can cause kinks (i.e., neighbouring cells of opposing polarity) in the circuit, which typically does not represent the lowest-energy state of the system. As an example, consider the double majority gate circuit shown in Figure 3.12. Based on the proposed algorithm, multi-cell Hamiltonians will be solved for the groups of cells identified in Figure 3.13.

Figure 3.12: Layout of two consecutive majority gates.

We simulate this circuit first using the input pattern P = {+1, +1, −1, +1, −1} for inputs Pdriver1, Pdriver2, Pdriver3, Pdriver4 and Pdriver5, respectively. The input combination at the first majority gate will produce an output of P = +1 at cell 5.
This output will propagate down the line of cells towards the second majority gate, break the symmetry caused by the opposing polarizations of Pdriver4 and Pdriver5, and yield an output of P = +1 at cell 16, as we would expect. Now consider the input pattern P = {+1, +1, −1, −1, −1}. For this input combination, the output of both majority gates is already determined: P = +1 for cell 5, and P = −1 for cell 16. These polarizations propagate outwards from cells 5 and 16 and eventually meet somewhere on the line of cells between them, causing a kink. The exact location of the kink depends on the circuit layout and simulation parameters. In the worst-case scenario, the output of the second majority gate can arrive at cell 11 before the first majority gate has arrived at its final polarization. In this case, cell 5 will be neighboured by four cells of equal and opposite polarizations, and thus will not obtain a polarization. This issue can be remedied by segregating the majority gates into separate clock zones or by forming a single group of cells containing all the cells from both groups, plus the ones that connect them, as shown in Figure 3.14. The former solution is undesirable because it represents a workaround for a circuit that can indeed operate as expected under a single clocking zone. The latter is also undesirable as it renders the simulation prohibitively inefficient due to the size of the many-body Hamiltonian required. This result significantly limits the augmented ICHA's usefulness in computing the correct ground state solutions for many QCA circuits.

Figure 3.13: Layout of two consecutive majority gates.

The other known issue with this algorithm occurs with wider interconnects. Consider the interconnect shown in Figure 3.15. The interconnect depicted in this figure is built 3 cells wide (a potential design method to decrease the sensitivity to fabrication defects).
In this design, since most cells are completely surrounded by neighbouring cells, the sum Σ|Ek| for each cell will be high, and thus the algorithm will group the entire interconnect together into one large Hamiltonian.

Figure 3.14: Layout of two consecutive majority gates, with both gates and the connecting cells grouped into a single multi-cell Hamiltonian.

This is undesirable for two reasons: (1) it will not be feasible to diagonalize the Hamiltonian for large interconnects, since the number of cells will be quite large; and (2) the ICHA is perfectly capable of arriving at the correct ground state for this interconnect, and thus does not require any cells to be grouped together and solved for using a separate Hamiltonian. This particular example again demonstrates the need to develop a more robust method for computing the ground state of QCA circuits.

Figure 3.15: QCA interconnect built three cells wide, with input cell IN and output cell OUT.

3.3 Summary

In order to overcome the shortcomings of the ICHA, we proposed an algorithm to determine a priori the highly correlated areas of a QCA circuit. The ICHA, which neglects quantum correlations between neighbouring cells, is modified such that highly correlated groups of cells in a QCA circuit are modelled using a separate Hamiltonian and then coupled classically to the rest of the system. We also included a refinement algorithm which attempts to reduce the size of these Hamiltonians using two-point correlations after the initial simulation has completed. Simulation results show that our algorithm can correctly find the ground state of common QCA building blocks such as the majority gate and wire, as well as the QCA full adder, all of which have been shown to settle to meta-stable ground states when using the ICHA. However, the algorithm can break down for circuits which require many multi-cell Hamiltonians. In such cases, the outputs of these Hamiltonians produce kinks in the circuit which do not represent the lowest-energy state of the system.
Grouping together the multi-cell Hamiltonians fixes this problem; however, it does so at the expense of additional computational complexity, which limits its usefulness. The other known issue of this algorithm occurs for circuits which are built several cells wide. In such cases, the proposed algorithm will group the entire circuit into one large Hamiltonian, which becomes computationally intractable after only a few cells. Therefore, while the algorithm is suitable for small, finely clocked circuits, it does not meet the general requirements for an efficient and accurate simulation tool.

Chapter 4
Loss of Polarization in Isolated Bit Packets³

As we discussed briefly in Chapter 2.3, lines of QCA cells will "depolarize" if left isolated from a driver cell. In this chapter, we address this phenomenon by briefly exploring the qualitative features of some small systems of interacting QCA cells, once again considering the two-state Hamiltonian described in Eq. (2.3). For an unbiased line of N cells, there are always two lowest-energy states without kinks [46]. For a single cell, these represent the only two states and correspond to the symmetric (|ψs⟩ = (1/√2)(|0⟩ + |1⟩)) and anti-symmetric (|ψa⟩ = (1/√2)(|0⟩ − |1⟩)) combinations of the polarization states (with the former being the ground state configuration). At finite temperatures, the total wave function of the cell is best described by

    |Ψ(t)⟩ = c₁|ψs⟩ + e^{iδ} c₂|ψa⟩,    (4.1)

where δ = ΔEt/ħ and c₁² + c₂² = 1. ΔE is the difference in energy between the two lowest-lying states. At low temperatures (i.e., T ≈ 0 K), the cell will be almost completely localized in the ground state (i.e., c₁ ≈ 1); a single quantum measurement of σz will yield a −1 or a +1 with equal probability, and the polarization of the cell will be P = 0. At higher temperatures, if we assume that both c₁ and c₂ are real, then a quantum measurement of σz on the cell will yield a polarization of P(t) = 2c₁c₂ cos(ΔEt/ħ).
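The oscillating polarization can be checked directly by unitary evolution of the two-state cell. The Python sketch below (not the thesis code, which was written in MATLAB) works in units where ħ = 1 and assumes H = −γσ̂x as the unbiased single-cell Hamiltonian, which reproduces the splitting ΔE = 2γ between |ψs⟩ and |ψa⟩. Starting the cell in the fully polarized state |0⟩ = (|ψs⟩ + |ψa⟩)/√2, i.e., c₁ = c₂ = 1/√2, it compares ⟨σ̂z⟩(t) against 2c₁c₂ cos(ΔEt):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = 0.01          # tunnelling energy (arbitrary units, hbar = 1)
H = -gamma * sx       # assumed unbiased single-cell Hamiltonian
dE = 2 * gamma        # energy splitting between |psi_s> and |psi_a>

# Fully polarized initial state |0>, i.e. c1 = c2 = 1/sqrt(2).
psi0 = np.array([1.0, 0.0], dtype=complex)

evals, evecs = np.linalg.eigh(H)

def sigma_z_expectation(t):
    """<sigma_z>(t) under |psi(t)> = exp(-iHt)|psi(0)>, via eigendecomposition."""
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
    return (psi_t.conj() @ sz @ psi_t).real

for t in [0.0, 10.0, 50.0, 200.0]:
    print(t, sigma_z_expectation(t), np.cos(dE * t))  # the two columns agree
```

The exact evolution and the closed-form 2c₁c₂ cos(ΔEt/ħ) coincide at every time, as expected from Eq. (4.1).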
Thus, at finite temperatures, the polarization of an unbiased cell oscillates between the two polarization states at a frequency proportional to ΔE.

For lines of two or more unbiased cells, the two lowest-lying states represent entangled states. Specifically, they are the symmetric and anti-symmetric combinations of the state with all cell polarizations aligned along one diagonal and the state with all cell polarizations aligned along the other, i.e.,

    |ψ_{s,a}⟩ ≈ (1/√2)(|000…0⟩ ± |111…1⟩).    (4.2)

The equality in Eq. (4.2) is exact when γ = 0. The wavefunction for a line of unbiased cells can be well-approximated by the superposition of the two lowest-lying states as described in Eq. (4.1), since c₁² + c₂² ≈ 1. Thus, at finite temperatures, the polarization of any cell in the line can still be approximated using P(t) ≈ 2c₁c₂ cos(ΔEt/ħ). Again, at low temperatures, the line of cells will be (almost) completely localized in the ground state, and it is easy to show that the polarization of any cell along the line is P = 0.

The energy difference, ΔE, between the two lowest-lying states decreases with each added cell. For a single unbiased cell, the two lowest-lying (and only) energy states are separated by 2γ. With each added cell, this energy splitting (and, by association, the frequency of oscillation between polarization states) decreases by a factor of approximately 2γ/Ek, i.e.,

    ΔE(N) = Ek (2γ/Ek)^N.    (4.3)

The total energy spectrum for lines of unbiased interacting QCA cells is shown in Figure 4.1(a), with the inset depicting the first excitation energy as a function of the number of cells. At finite temperatures, we can think of a group of unbiased interacting cells as a two-state system, analogous to a double well, as shown in Figure 4.1(b). As the number of interacting cells increases, so does the barrier separating the two aligned states.

³This chapter is the outcome of joint research [46].
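Eq. (4.3) can be checked by exact diagonalization of short lines. The sketch below assumes a nearest-neighbour form of the two-state line Hamiltonian, H = −γ Σᵢ σ̂x(i) − (Ek/2) Σᵢ σ̂z(i)σ̂z(i+1) (Eq. (2.3) itself is not reproduced in this chapter, so this particular form is an assumption), and compares the computed first excitation energy against Ek(2γ/Ek)^N:

```python
import numpy as np

def line_hamiltonian(n, gamma, ek):
    """Assumed two-state Hamiltonian for an unbiased line of n cells:
    H = -gamma * sum_i sigma_x(i) - (ek/2) * sum_i sigma_z(i) sigma_z(i+1)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    I = np.eye(2)

    def embed(op, site):
        # Place a single-cell operator at `site` via Kronecker products.
        out = np.array([[1.0]])
        for k in range(n):
            out = np.kron(out, op if k == site else I)
        return out

    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n):
        H -= gamma * embed(sx, i)
    for i in range(n - 1):
        H -= 0.5 * ek * embed(sz, i) @ embed(sz, i + 1)
    return H

gamma, ek = 0.01, 0.1   # gamma/Ek = 0.1, the ratio used in Figure 4.1
for n in range(1, 7):
    evals = np.linalg.eigvalsh(line_hamiltonian(n, gamma, ek))
    dE = evals[1] - evals[0]
    print(n, dE, ek * (2 * gamma / ek) ** n)
```

For each line length the exact splitting tracks Eq. (4.3) closely (the formula is the leading perturbative term in 2γ/Ek), and the splitting shrinks geometrically with each added cell.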
Thus, it follows that longer lines have slower effective tunnelling rates and display a larger degree of bistability. Figure 4.2 shows the unitary time evolution of one-, three-, and five-cell systems without any energy dissipation or loss of phase coherence (i.e., the relaxation time τ = ∞). All cells start in a polarized state and oscillate in unison within each wire. Single cells (and small groups of cells) at the molecular or atomic scale will likely exhibit coherent oscillations much faster than the measurement time, which will lead to the loss of classical information. Since longer lines of cells exhibit longer oscillation times, increasing the length of a wire will allow the polarization state to be maintained for an arbitrary length of time. For the case of an infinite number of interacting cells (i.e., ΔE = 0), coherent tunnelling is completely suppressed and the line of cells will retain its initial polarization [46, 58]. However, in all cases, loss of information will likely be determined by environment-induced phase decoherence and/or energy relaxation.

Figure 4.1: (a) Spectra of unbiased lines of interacting QCA cells, ranging in length from one to eight cells, with γ/Ek = 0.1. The spectra are solutions of Eq. (2.3), with a constant (N − 1)Ek/2 added for ease of interpretation. The inset shows the difference in energy between the two lowest energy levels for each line. (b) Conceptually, the array can be viewed as a single two-state system with a barrier that increases with the number of cells [46].

To further illustrate this point, Figures 4.3 and 4.4 show the results of two simulations of six cells in a one-dimensional chain, with periodic boundary conditions and nearest-neighbour coupling only. γ is modulated between 1 meV and 200 meV for the first simulation, and between 1 meV and 1000 meV in the second. Ek = 108.5 meV in both simulations. The value of Ek was selected such that it was sufficiently greater than kBT at room temperature (≈ 25 meV). The two chosen ranges for γ were selected to illustrate the impact that the ratio Ek/γ has on QCA operation. The Hamiltonian in Eq. (2.3) is used, so that intercellular correlations are completely included. In both simulations, the tunneling barriers in cells 1, 2, and 3 are initially high (meaning that γ is low), while the tunneling barriers in cells 4, 5, and 6 are low (so that γ is high).

Figure 4.2: Coherent oscillations of the first cell in unperturbed one-, three-, and five-cell lines. The cells are all initially in the P = 1 state. In each line, the cells oscillate together, so the polarization of the first cell gives a good representation of the polarization of all its neighbours. γ = 10 meV and Ek = 100 meV [46].

As time progresses, the tunneling barriers in cell 4 are raised, allowing it to polarize, and the barriers in cell 1 are lowered, which causes it to depolarize. Next, cell 5 has its barriers raised as the barriers in cell 2 are lowered, and so on. Because of the periodic boundary conditions, the bit packet moves cyclically through the six cells. Throughout the process, the Hamiltonian changes quasi-adiabatically, by which we mean that dγ/dt ≪ Ek²/ħ. For our initial state, we create a nearly full negative polarization in the three active cells by taking the normalized sum of the two lowest energy eigenstates. The resulting cell polarizations are plotted as a function of time, with and without dissipation, in each case.
A third curve plotting the exponential decay, proportional to e^{−t/τ}, is also shown in each plot as a reference.

In the simulation of Figure 4.3, as time progresses, the bit packet travels along the periodic line, and a polarization is maintained when there is no dissipation. Because the Hamiltonian is changed quasi-adiabatically, no kinks are created as the bit packet moves, and the bit packet evolves qualitatively the same as it would if it were stationary. Because the tunneling barriers are never lowered completely (γ has a maximum value of 200 meV), the cells never reach zero polarization. The interaction between the three active cells at a given time, plus the residual polarization in the "inactive" cells, slows the coherent oscillations to the point where the bit packet remains fully polarized over the period of the simulation, as long as there is no dissipation. However, the steady state still has zero polarization in all cells, so the bit packet loses polarization exponentially when relaxation is included in the model. The slow oscillation frequency demonstrated in this simulation simply permits the cells to maintain their polarizations over several clock cycles.

In the simulation of Figure 4.4, since the tunnelling barriers are lowered enough to completely depolarize the cells (γ is now allowed to reach 1000 meV), fewer cells are "on" at any given time, and the coherent oscillations from the negative to the positive polarization state are sufficiently fast to be observed over the time scale of the simulation. As in Figure 4.3, the dissipation of energy does indeed bring the system exponentially to its steady state of zero polarization. In this simulation, because of the visible coherent oscillations, the polarization does not follow the decaying exponential as it does in Figure 4.3; instead, the envelope of the oscillation is proportional to the exponential decay.

4.1 Summary

In this chapter, we explored the dynamics of unbiased lines of cells.
When decoupled from a driver, cells will oscillate coherently between the two lowest-lying energy states, which are the symmetric and anti-symmetric combinations of the state with all cell polarizations aligned along one diagonal and the state with all cell polarizations aligned along the other. Higher-energy states play a limited role in the dynamics of unbiased cells and therefore were not included in this discussion. A quantum measurement of σz on any cell in the unbiased line thus yields an oscillating polarization whose frequency depends on the first excitation energy of the line. Increasing the number of interacting cells decreases this excitation energy and slows the frequency of the oscillations. Thus, longer lines of interacting cells are able to keep their polarization for an arbitrary length of time. Oscillations are completely suppressed only in the limit of an infinite number of interacting QCA cells. However, even in the absence of coherent oscillations, undriven cells will still experience an exponential decay towards zero polarization due to dissipation into the environment. And while cells in the previous clock zone can serve as drivers for these cells, they can only do so as long as they remain latched and driven themselves.

Figure 4.3: Simulations of a six-cell line with periodic boundary conditions. The value of γ is adiabatically raised and then lowered in the range 1 meV < γ < 200 meV, with a period of 217ħ/Ek and a different phase for each cell. The circled line shows the polarization of each cell in the absence of any energy dissipation. The solid line shows the polarization of each cell with an energy relaxation time of 434ħ/Ek. For reference, a decaying exponential as a function of τ is shown with the dashed line.

Figure 4.4: Simulations of a six-cell line with periodic boundary conditions. The value of γ is adiabatically raised and then lowered in the range 1 meV < γ < 1000 meV, with a period of 217ħ/Ek and a different phase for each cell. The circled line shows the polarization of each cell in the absence of any energy dissipation. The solid line shows the polarization of each cell with an energy relaxation time of 434ħ/Ek. For reference, a decaying exponential as a function of τ is shown with the dashed line. Half a period of a coherent oscillation can be seen over the course of the simulation, flipping the polarization state from negative to positive.

Lastly, it is important to note that unbiased lines of cells have a unique ground state. This contrasts with the commonly made assertion that cells and lines have two degenerate states [59], namely, the aligned states. The ground state is in fact an entangled state, and is not accessible to the ICHA [8]; this is why the ICHA fails to predict depolarization. Thus, for most cases, the ICHA is not suitable for modelling QCA dynamics where lines of cells will be decoupled from the set of driver cells in the circuit.

Chapter 5
Coherent Cell-Cell Dynamics

By ignoring correlations and coherence, the ICHA can often fail to predict the correct ground state and dynamics of QCA systems. However, as previously discussed, including both coherence and correlation results in an exponential growth of the basis set, making solving large systems computationally intractable. In order to mitigate this effect, Toth et al.
developed a model employing the so-called coherence vector formalism, which allows us to separate the state variables of our system into groups corresponding to the states of the individual cells and into groups corresponding to the two-point, three-point, etc., correlations [43]. Using this model, we can remove any higher-order correlation terms deemed negligible to the system's evolution, and thus reduce the overall computational requirements for solving the dynamic response of the QCA circuit.

The coherence vector can be constructed by first recognizing that the density operator in a Hilbert space of dimension n, like any Hermitian operator, can be represented by the s = n² − 1 generating (basis) operators of SU(n) [60]. For an N-cell system (i.e., a system with dimension n = 2^N), the density operator ρ̂ can be represented as [60]:

    ρ̂ = (1/2^N) 1̂ + (1/2^N) Σ_{i=1}^{s} λᵢ λ̂ᵢ,    (5.1)

where λ̂ᵢ represents the ith generating operator of SU(2^N), and the elements λᵢ are defined by

    λᵢ = ⟨λ̂ᵢ⟩ = Tr{ρ̂ λ̂ᵢ},    (5.2)

and are the components that form the coherence vector. For any system of dimension n = 2^N, the λ̂ᵢ basis operators take the form

    λ̂ᵢ = Λ̂(1) ⊗ Λ̂(2) ⊗ ··· ⊗ Λ̂(N),    (5.3)

where each term of the direct product can be any of the generators of the SU(2) group, namely the σ̂x, σ̂y, and σ̂z Pauli spin matrices, or the unit matrix:

    Λ̂(i) ∈ {1̂, σ̂x, σ̂y, σ̂z}.    (5.4)

The special case for which λ̂ᵢ = 1̂ ⊗ 1̂ ⊗ ··· ⊗ 1̂ is not allowed.
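The operator construction of Eqs. (5.3) and (5.4) is straightforward to sketch with Kronecker products. In the Python fragment below, the string labels (e.g. "z1" for σ̂z(1) ⊗ 1̂) are our own illustrative convention, not notation from the text:

```python
import itertools
import numpy as np

# The four single-cell choices of Eq. (5.4): the unit matrix and the Pauli matrices.
LAMBDA = {
    "1": np.eye(2, dtype=complex),
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def basis_operators(n_cells):
    """All s = 4^N - 1 direct-product operators of Eq. (5.3),
    excluding the disallowed all-identity combination."""
    ops = {}
    for labels in itertools.product("1xyz", repeat=n_cells):
        if all(l == "1" for l in labels):
            continue  # the all-identity product is excluded
        op = LAMBDA[labels[0]]
        for l in labels[1:]:
            op = np.kron(op, LAMBDA[l])
        ops["".join(labels)] = op
    return ops

def coherence_vector(rho, ops):
    """Components lambda_i = Tr{rho * lambda_hat_i} of Eq. (5.2)."""
    return {k: np.trace(rho @ op).real for k, op in ops.items()}

ops = basis_operators(2)
print(len(ops))  # 15 operators for a two-cell system, as in Figure 5.1
```

Every generated operator is traceless, and for a two-cell system the construction yields exactly the fifteen operators of Figure 5.1.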
Thus, there are a total of s = 4^N − 1 (= 2^{2N} − 1) possible combinations, which is consistent with our earlier definition of s. For a simple two-cell system, i.e., N = 2, the λ̂ᵢ basis operators are shown in Figure 5.1.

Figure 5.1: The s = 15 basis operators for a two-cell system: the single-cell operators σ̂x(1), σ̂y(1), σ̂z(1), σ̂x(2), σ̂y(2), σ̂z(2), and the two-cell operators σ̂a(1)σ̂b(2) for a, b = x, y, z.

There are three single-cell operators for each of the two cells, and nine two-cell operators. Again, the expectation values of the s = 15 basis operators are the components that make up the coherence vector:

    λ⃗ = [λ⃗(1), λ⃗(2), K⃗(1, 2)]ᵀ,    (5.5)

where λ⃗(1) and λ⃗(2) are single-cell coherence vectors containing the expectation values of the single-cell operators, and K⃗(1, 2) is the two-cell coherence vector containing the two-point correlation terms (i.e., the expectation values of the two-cell operators). The single- and two-cell coherence vectors are shown below:

    λ⃗(1) = [λx(1) λy(1) λz(1)]ᵀ,
    λ⃗(2) = [λx(2) λy(2) λz(2)]ᵀ,
    K⃗(1, 2) = [Kxx(1, 2) Kxy(1, 2) Kxz(1, 2) Kyx(1, 2) Kyy(1, 2) Kyz(1, 2) Kzx(1, 2) Kzy(1, 2) Kzz(1, 2)]ᵀ.    (5.6)

Once the coherence vector has been computed, the polarization of the ith cell can be found by evaluating Pᵢ = −λz(i). It is worthwhile to point out that Kab(i, j) = Kba(j, i) (a, b = x, y, z), and as such, in the above example, it is not necessary to compute the two-cell coherence vector K⃗(2, 1). If we increase the size of our system to three cells, i.e., N = 3, then our coherence vector expands to include the expectation values of the additional basis operators of our system, i.e.,

    λ⃗ = [λ⃗(1), λ⃗(2), λ⃗(3), K⃗(1, 2), K⃗(1, 3), K⃗(2, 3), K⃗(1, 2, 3)]ᵀ,    (5.7)

where K⃗(1, 2, 3) is the three-cell coherence vector containing the 27 three-point correlation terms (i.e., the expectation values of the three-cell operators).
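The component counts follow directly from this bookkeeping: 3 single-cell terms per cell, 9 two-point terms per pair of cells, 27 three-point terms per triple, and 4^N − 1 components in total (for N = 3, 9 + 27 + 27 = 63). A small helper (the function name and dictionary keys are ours, chosen for illustration) tallies the cases considered in this chapter, including the nearest-neighbour restriction:

```python
def state_variable_counts(n_cells):
    """Number of coherence-vector components kept under each treatment:
    3 single-cell terms per cell, 9 two-point terms per pair of cells,
    and 4^N - 1 components in the full coherence vector."""
    n = n_cells
    return {
        "ICHA (single-cell only)": 3 * n,
        "two-point, nearest neighbours": 12 * n - 9,           # 3n + 9(n - 1)
        "two-point, all pairs": 3 * n + 9 * n * (n - 1) // 2,  # = 4.5n^2 - 1.5n
        "full coherence vector": 4 ** n - 1,
    }

print(state_variable_counts(2))  # a two-cell system has 15 components in full
```

For two cells all of the two-point treatments coincide with the full 15-component vector, and the counts diverge rapidly as cells are added.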
Note that there are now three two-cell coherence vectors, corresponding to the three unique two-cell groupings possible within a three-cell circuit. The increasing number of unique two-cell, three-cell, etc., groupings as the circuit grows is what leads to the exponential growth of the coherence vector. Comparatively, since the ICHA neglects any correlation between cells, only the single-cell coherence vector elements appear in the coherence vector, and thus the coherence vector grows only linearly with the number of cells (i.e., s = 3N). While the exponential growth of the coherence vector due to the inclusion of correlation terms makes solving larger systems intractable, it has been shown in [43] that the contribution of the higher-order (i.e., three-point, four-point, etc.) correlation terms in the coherence vector is negligible in calculating the dynamics of QCA systems. Thus, if we remove these higher-order correlation terms, then our coherence vector grows according to s = 4.5N² − 1.5N, which offers a much more manageable growth in state variables and is far more suitable for solving QCA systems with a large number of cells. If we limit the scope of our Hamiltonian to nearest-neighbour (NN) coupling only, then the number of state variables in our system can be reduced even further, to s = 12N − 9. We will verify both of these scenarios in the later chapters of this thesis. A comparison of the number of state variables being solved for in each approximation is shown in Fig. 5.2.

Figure 5.2: The number of state variables that need to be solved as a function of the number of cells in the circuit.
Plots are shown for the different quantum mechanical treatments discussed.

5.1 The Liouville Equation

The problem of solving for the dynamics of the coherence vector simplifies considerably if we formulate it in the Heisenberg picture. In the Heisenberg picture of quantum mechanics, the state vector, |ψ⟩, does not change with time, and an observable, Â, satisfies

    d/dt Â(t) = (i/ħ)[Ĥ, Â],    (5.8)

where Ĥ is the Hamiltonian and [·,·] is the commutator of Ĥ and Â(t). Eq. (5.8) is known as the Liouville equation of motion. By using this equation and substituting in the Hamiltonian defined in Eq. (2.3), we can derive a set of ordinary differential equations (ODEs) describing the evolution of each of the basis operators of our system, i.e.,

    d/dt σ̂x(j) = (i/ħ)[Ĥ, σ̂x(j)],
    d/dt σ̂y(j) = (i/ħ)[Ĥ, σ̂y(j)],
    ...
    d/dt σ̂x(j)σ̂y(k) = (i/ħ)[Ĥ, σ̂x(j)σ̂y(k)],
    ...
    d/dt σ̂z(j)σ̂z(k) = (i/ħ)[Ĥ, σ̂z(j)σ̂z(k)],
    ...
    d/dt σ̂z(j)···σ̂z(n) = (i/ħ)[Ĥ, σ̂z(j)···σ̂z(n)].    (5.9)

The dynamics of the coherence vector components can be obtained by simply taking the expectation values of both sides of the equations listed above, i.e.,

    d/dt λx(j) = (i/ħ)⟨[Ĥ, σ̂x(j)]⟩,
    d/dt λy(j) = (i/ħ)⟨[Ĥ, σ̂y(j)]⟩,
    ...
    d/dt Kxy(j, k) = (i/ħ)⟨[Ĥ, σ̂x(j)σ̂y(k)]⟩,
    ...
    d/dt Kzz(j, k) = (i/ħ)⟨[Ĥ, σ̂z(j)σ̂z(k)]⟩,
    ...
    d/dt Kz…z(j, …, n) = (i/ħ)⟨[Ĥ, σ̂z(j)···σ̂z(n)]⟩.    (5.10)

As an example, let us consider the dynamics of the σ̂y(j) basis operator. In doing so, the following commutation property for Pauli matrices will prove useful:

    [σ̂a, σ̂b] = 2i ε_abc σ̂c,    (5.11)

where ε_abc is the Levi-Civita symbol, defined as

    ε_abc = +1 if (a, b, c) is (x, y, z), (y, z, x), or (z, x, y);
    ε_abc = −1 if (a, b, c) is (z, y, x), (x, z, y), or (y, x, z);
    ε_abc = 0 if a = b, b = c, or a = c.    (5.12)

Using this property and Eq. (5.9), the time dependence of σ̂y(j) is then

    d/dt σ̂y(j) = (i/ħ)[Ĥ, σ̂y(j)]
               = (1/ħ)[2γⱼ σ̂z(j) − Σ_m E_k^{j,m} σ̂x(j)σ̂z(m)].    (5.13)

Taking the expectation values of both sides, we get:

    ħ d/dt λy(j) = 2γⱼ λz(j) − Σ_m E_k^{j,m} Kxz(j, m).    (5.14)

In addition to λz, we notice that Kxz also appears on the right-hand side of Eq. (5.14), and therefore its dynamics must also be computed in order to capture the full dynamic behaviour of λy. Similarly, when we derive the dynamical equation for Kxz, various two- and three-point correlation terms will appear, and thus we will need to compute the dynamics of those terms as well, and so on. In the end, we are left with a linear system of coupled ODEs that must all be solved in order to compute the system's dynamics. We will discuss the ODE solver used in this work in the next chapter of this thesis.

5.2 Dissipative Coupling to the Heat Bath

To fully model the dynamics of QCA systems, we must also account for the dissipative effects due to the interaction between the cells and the environment. This is very important, as it is this interaction that allows the system to relax to its ground state. However, modelling the time evolution of a quantum system in contact with a thermal reservoir requires details on the spectral density of the environmental coupling, on the temperature, and on the specific properties of the system [42, 54, 61]. Thus, we cannot expect to solve this problem in its entirety. However, in the case of weak system-reservoir interactions and a short reservoir correlation time, Markovian quantum master equations are a very useful tool for describing the relaxation dynamics of open quantum systems [54, 61, 62]. These constraints are collectively known as the Born-Markov approximation and are explored in detail in the following section.

5.2.1 The Born-Markov Approximation

In the Born-Markov approximation, quantum master equations are generally thought to satisfy the following conditions [54, 60-63]: (i) at some initial time t = 0, there exists no correlation between the undamped quantum system, S, and the thermal reservoir, R, i.e., ρ̂(0) ≡ ρ̂S(0) ⊗ ρ̂R(0).
At later times, correlations between S and R will arise due to the coupling between the system and the reservoir. However, if we assume that this coupling is very weak, then at all times we can continue to approximate the coupled system using ρ̂(t) ≈ ρ̂S(t) ⊗ ρ̂R(t). In other words, we expect ρ̂(t) to show deviations only on the order of the uncorrelated interactions between S and R. This is the Born approximation. (ii) R is assumed to be a large system whose state is virtually unaffected by its coupling to S (though, of course, we expect the state of S to be significantly affected by R). (iii) It is assumed that the future behaviour of S depends only on its present state. This is the Markov approximation. Potentially, S could depend on its past history, because its earlier states would become imprinted as changes in the reservoir state through the interaction between S and R (earlier states would then be reflected back onto the future evolution of S as it interacts with the changed reservoir). If, however, the reservoir is large compared to S (as we have assumed it to be) and maintained in thermal equilibrium, then we do not expect it to preserve the minor changes brought on by its interaction with S long enough to significantly affect the future evolution of S. (iv) Lastly, these Markovian quantum master equations can be written in the Lindblad form as [64, 65]

    d/dt ρ̂(t) = −(i/ħ)[Ĥ(t), ρ̂(t)] + (1/2) Σ_{i,j=1}^{s} A_ij ([L̂ᵢ, ρ̂(t)L̂ⱼ†] + [L̂ᵢρ̂(t), L̂ⱼ†]),    (5.15)

where, for an n-state system, s = n² − 1 and

    Tr{L̂ᵢ} = 0,  Tr{L̂ᵢL̂ⱼ†} = δᵢⱼ.    (5.16)

The L̂ᵢ operators are referred to as the Lindblad operators, which represent the system-reservoir interactions ("damping channels"). The L̂ᵢ, together with the unit operator, 1̂, may be taken to form a complete orthogonal set, as with the SU(n) operators [60]. The parameter matrix A_ij is Hermitian and positive semidefinite, i.e., A_ii ≥ 0 (i = 1, 2, …
, s), |A_ij|² ≤ A_ii A_jj.

5.2.2 The Lindblad Equation

The Lindblad operators, L̂ᵢ, describe the effect of the environment in the Born-Markov approximation. The form of the Lindblad operators specifies the nature of the dissipative process [45]. We expect the dissipation to occur via two channels: (i) dephasing and (ii) longitudinal energy relaxation. Here, we define dephasing as the process in which the coherence within and amongst the cells decays after the tunnelling barriers have been raised (i.e., when γ ≪ Ek). The coherence of a cell is captured by the off-diagonal elements of the density matrix, and is initially created by lowering the tunnelling barriers (i.e., when γ ≫ Ek). The coherence terms decay with the dephasing time, T2. Longitudinal energy relaxation is defined as the process in which the distribution of higher-energy states returns to the Boltzmann distribution. The possible states of the QCA system are captured by the diagonal terms of the density matrix. Energy relaxation occurs with a relaxation time, T1. For weakly-coupled systems, we can assume T1 ≈ T2 [54, 66].

For a 2^N-state system, the dephasing and energy relaxation can be captured by the normalized generating operators of SU(2^N) [45]. Here, the loss of phase coherence into the environment is captured via the σ̂x and σ̂y operators, while the longitudinal energy relaxation is captured via the σ̂z operators. Earlier in this thesis, we expressed the density matrix as a combination of the generating operators of SU(n) (Eq. (5.1)). If we insert Eq. (5.1) into Eq. (5.15) and take the trace with each generating operator, we arrive at the damped Bloch equation given in [43, 45], which results in a damping term added to each of the ODEs in Eq. (5.10), so that

    d/dt λᵢ = (i/ħ)⟨[Ĥ, λ̂ᵢ]⟩ − (1/τ)(λᵢ − λᵢ,ss),    (5.17)

where τ is the relaxation/dephasing time (T1, T2) and describes the relaxation of the coherence vector towards the steady-state coherence vector element λᵢ,ss.
The value of τ depends on the specific implementation details of the QCA system as well as the nature of the coupling to the thermal environment, and would need to be determined experimentally. The λᵢ,ss term can be found by evaluating

    λᵢ,ss = Tr{ρ̂ss λ̂ᵢ},    (5.18)

where ρ̂ss is the steady-state density matrix, defined as

    ρ̂ss ≡ e^{−Ĥ(t)/kBT} / Tr{e^{−Ĥ(t)/kBT}}.    (5.19)

Since we treat an idealized QCA that is not tied to a specific implementation, we will not pursue a precise value of τ in this work. However, regardless of the specifics of the relaxation/decoherence time, the density matrix in Eq. (5.19) will very often represent the real steady state of atomic and molecular systems. We therefore attempt to calculate the correct steady-state behaviour, acknowledging that the dynamics and the specific value of τ will be implementation-dependent. Solving Eq. (5.19) is critical to producing the correct dynamical response for QCA devices. Calculating the wrong steady-state density matrix here will drive the coherence vector to the incorrect ground state and falsely predict the polarization of the QCA cells. For large arrays of QCA cells, solving for ρ̂ss becomes intractable due to the size of the Hamiltonian, and thus additional approximations are needed in order to reduce the calculations. This presents a separate challenge and will be the main focus of the following chapter of this thesis.

From an implementation perspective, besides the Born-Markov approximation, there is another severe limitation on the usefulness of the above quantum master equations. Namely, the relaxation dynamics towards the steady-state values is correctly described only when the system's relaxation times are large compared to the thermal time (i.e., the time it takes for the reservoir to arrive at thermal equilibrium) [54, 61].
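Eqs. (5.18) and (5.19) amount to a matrix exponential and a trace. A minimal Python sketch for a single unbiased cell follows; the Hamiltonian H = −γσ̂x is an assumed illustrative choice (energies in meV, with kBT ≈ 25 meV at room temperature as noted earlier):

```python
import numpy as np

def steady_state_density_matrix(H, kT):
    """Thermal steady state of Eq. (5.19): rho_ss = exp(-H/kT) / Tr{exp(-H/kT)},
    computed via the eigendecomposition of the Hermitian H."""
    evals, evecs = np.linalg.eigh(H)
    w = evecs @ np.diag(np.exp(-evals / kT)) @ evecs.conj().T
    return w / np.trace(w)

def lambda_ss(rho_ss, basis_op):
    """Steady-state coherence vector element of Eq. (5.18)."""
    return np.trace(rho_ss @ basis_op).real

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma, kT = 10.0, 25.0   # meV; kT ~ room temperature
rho_ss = steady_state_density_matrix(-gamma * sx, kT)
print(lambda_ss(rho_ss, sz))  # 0: an unbiased cell has zero steady-state polarization
print(lambda_ss(rho_ss, sx))  # tanh(gamma/kT): thermal coherence along x
```

Driving the coherence vector towards these λᵢ,ss values through the −(1/τ)(λᵢ − λᵢ,ss) term of Eq. (5.17) is what relaxes an undriven cell to zero polarization.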
Thus, this method is not applicable to most problems of solid-state physics at low temperatures. However, in spite of these limitations, the Lindblad equation still provides a reasonable description in many cases, e.g., in NMR, in quantum optics, and in a variety of chemical reactions [45, 61]. More importantly, it is useful here due to the potential for room-temperature operation of atomic QCA [1].

5.3 Summary

In this chapter, we introduced the coherence vector formalism, which allows each of the 4^N − 1 basis operators of a system to be treated individually. In doing so, it offers the flexibility to select which of the basis operators we want to use to compute the system's behaviour, while allowing us to neglect any others. The time dependence of each basis operator is computed using the Liouville equation of motion. The collection of these equations results in a linear set of ODEs which needs to be solved in order to determine the dynamic behaviour of the entire system. Lastly, the interaction between the cells and the environment is accounted for via the use of a Markovian quantum master equation. Expressed in the Lindblad form, the quantum master equation relaxes the state of the system towards a thermal equilibrium state determined by a Boltzmann distribution. While the use of this quantum master equation is not new to QCA, its use had never been fully justified beyond its inclusion within the ICHA. The use of this quantum master equation assumes that the system and the environment are only weakly coupled, and it is not applicable at low temperatures.
However, given the nature of the proposed QCA systems and the potential for room-temperature operation, this quantum master equation provides a reasonable description for our purposes.

Chapter 6
Two-Point Correlations

In a classical multi-particle system, the number of degrees of freedom required to describe a state increases linearly with the number of particles. However, as we have already discussed, for a quantum mechanical system with N QCA cells, the information necessary to fully describe the system increases exponentially with the number of cells (4^N − 1 degrees of freedom, to be exact). These extra degrees of freedom come from the intercell correlations, for which there is no classical analogue. By using the coherence vector description, we can separate the state variables of our system into groups corresponding to the states of the individual cells and into groups corresponding to the two-point, three-point, etc., correlations. This feature of the coherence vector description helps us to determine which correlation terms are important from the point of view of the dynamics and which can be neglected. Thus, the coherence vector is ideal for constructing an intermediate model between the ICHA (which does not consider any intercell correlations) and the exact model (which considers all intercell correlations). In Chapter 5, we discussed limiting our treatment of QCA devices to second-order correlations, as it was shown in [43] that the higher-order correlation terms do not affect the dynamic response of QCA systems in any meaningful way. This level of quantum mechanical treatment is attractive because it represents the next approximation up from the ICHA, while at the same time it includes all necessary intercell correlations required to accurately model the dynamics of a QCA system.
In this chapter, we will investigate this approximation in greater detail.

6.1 Correlation Proper

For mixed states, the correlation proper between any pair (or group) of cells is a measure of how correlated those cells are, and it is entirely analogous to a classical system of mixed probabilities (i.e., the correlations are classical). The correlation propers are defined as

    M(\hat{\sigma}_{\alpha_i}(i), \hat{\sigma}_{\alpha_j}(j), \ldots) = \langle (\hat{\sigma}_{\alpha_i}(i) - \lambda_{\alpha_i}(i))(\hat{\sigma}_{\alpha_j}(j) - \lambda_{\alpha_j}(j)) \cdots \rangle,    (6.1)

where α_n = x, y, z. If we always choose the operator labels to be in the order i, j, …, then Eq. (6.1) can be shortened to

    M_{\alpha_i \alpha_j \ldots}(i, j, \ldots) = \langle (\hat{\sigma}_{\alpha_i} - \lambda_{\alpha_i})(\hat{\sigma}_{\alpha_j} - \lambda_{\alpha_j}) \cdots \rangle.    (6.2)

By using Eq. (6.2), we can easily find the equations for all the N-cell correlation propers. As an example, consider the three-cell correlation proper:

    M_{\alpha_i \alpha_j \alpha_k}(i, j, k) = \langle (\hat{\sigma}_{\alpha_i} - \lambda_{\alpha_i})(\hat{\sigma}_{\alpha_j} - \lambda_{\alpha_j})(\hat{\sigma}_{\alpha_k} - \lambda_{\alpha_k}) \rangle.    (6.3)

By using the linearity property of expectations, Eq. (6.3) simplifies to

    M_{\alpha_i \alpha_j \alpha_k}(i, j, k) = \langle \hat{\sigma}_{\alpha_i} \hat{\sigma}_{\alpha_j} \hat{\sigma}_{\alpha_k} \rangle - \langle \hat{\sigma}_{\alpha_i} \hat{\sigma}_{\alpha_j} \rangle \langle \hat{\sigma}_{\alpha_k} \rangle - \langle \hat{\sigma}_{\alpha_i} \hat{\sigma}_{\alpha_k} \rangle \langle \hat{\sigma}_{\alpha_j} \rangle - \langle \hat{\sigma}_{\alpha_j} \hat{\sigma}_{\alpha_k} \rangle \langle \hat{\sigma}_{\alpha_i} \rangle + 2 \langle \hat{\sigma}_{\alpha_i} \rangle \langle \hat{\sigma}_{\alpha_j} \rangle \langle \hat{\sigma}_{\alpha_k} \rangle
                            = K_{\alpha_i \alpha_j \alpha_k}(i, j, k) - K_{\alpha_i \alpha_j}(i, j)\,\lambda_{\alpha_k}(k) - K_{\alpha_i \alpha_k}(i, k)\,\lambda_{\alpha_j}(j) - K_{\alpha_j \alpha_k}(j, k)\,\lambda_{\alpha_i}(i) + 2\,\lambda_{\alpha_i}(i)\,\lambda_{\alpha_j}(j)\,\lambda_{\alpha_k}(k).    (6.4)

For example, to compute the correlated dynamics of the operators σ̂_x(1), σ̂_z(4), and σ̂_y(5), one would use

    M_{xzy}(1, 4, 5) = K_{xzy}(1, 4, 5) - K_{xz}(1, 4)\,\lambda_y(5) - K_{xy}(1, 5)\,\lambda_z(4) - K_{zy}(4, 5)\,\lambda_x(1) + 2\,\lambda_x(1)\,\lambda_z(4)\,\lambda_y(5).    (6.5)

If M_{α_i α_j α_k}(i, j, k) ≈ 0, then the dynamics of the operators σ̂_{α_i}, σ̂_{α_j} and σ̂_{α_k} are not considered to be correlated, and we can approximate the three-point correlation term using

    K_{\alpha_i \alpha_j \alpha_k}(i, j, k) \approx K_{\alpha_i \alpha_j}(i, j)\,\lambda_{\alpha_k}(k) + K_{\alpha_i \alpha_k}(i, k)\,\lambda_{\alpha_j}(j) + K_{\alpha_j \alpha_k}(j, k)\,\lambda_{\alpha_i}(i) - 2\,\lambda_{\alpha_i}(i)\,\lambda_{\alpha_j}(j)\,\lambda_{\alpha_k}(k).    (6.6)

Using this approximation, the three-point terms can be eliminated from the dynamical equations and approximated using only single-cell and two-cell correlation terms.
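To make Eqs. (6.4) and (6.6) concrete, the following is a short Python illustration (the function names and sample values are our own, not part of the thesis): it computes the three-cell correlation proper from given two- and three-point expectation values, and shows that the approximation of Eq. (6.6) is, by construction, exact whenever the correlation proper vanishes.

```python
# Illustrative sketch (not the thesis' MATLAB code). Inputs:
#   lam = (l_a, l_b, l_c): single-operator expectations <sigma_a> etc.
#   K2  = (K_ab, K_ac, K_bc): two-point expectations <sigma_a sigma_b> etc.
#   K3: the three-point expectation <sigma_a sigma_b sigma_c>.

def correlation_proper_3(K3, K2, lam):
    """Eq. (6.4): M = K3 - K_ab*l_c - K_ac*l_b - K_bc*l_a + 2*l_a*l_b*l_c."""
    K_ab, K_ac, K_bc = K2
    l_a, l_b, l_c = lam
    return K3 - K_ab * l_c - K_ac * l_b - K_bc * l_a + 2.0 * l_a * l_b * l_c

def approx_K3(K2, lam):
    """Eq. (6.6): the three-point term rebuilt from lower-order terms."""
    K_ab, K_ac, K_bc = K2
    l_a, l_b, l_c = lam
    return K_ab * l_c + K_ac * l_b + K_bc * l_a - 2.0 * l_a * l_b * l_c

# Sanity check with arbitrary sample values: plugging the approximated
# three-point term back into Eq. (6.4) gives a vanishing correlation proper.
K2, lam = (0.12, -0.05, 0.30), (0.9, -0.4, 0.7)
assert abs(correlation_proper_3(approx_K3(K2, lam), K2, lam)) < 1e-12
```

This makes explicit the sense in which Eq. (6.6) is the "uncorrelated" closure: it is the unique value of the three-point term for which M of Eq. (6.4) is exactly zero.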
To determine which three-point terms can in fact be eliminated, we simulated lines of QCA cells ranging from three to six cells in length, and computed all 27 three-cell correlation propers for all possible groups of cells. Figure 6.1 shows the maximum values of each three-cell correlation proper for each line of cells. We can see from Figure 6.1 that the maximum value of any three-cell correlation proper never exceeds M = 0.03. Therefore, given these relatively small values, we can use Eq. (6.6) to eliminate the three-point correlation terms from our dynamical equations. By eliminating the three-point correlation terms, the four-point correlation terms that would have appeared in those dynamical equations no longer need to be solved for. Similarly, since there are no longer any four-point correlation terms, any five-point correlation terms that would have appeared in those dynamical equations no longer need to be solved for, and so on. Thus, we are effectively removing all higher-order correlation terms by eliminating three-point correlations.

[Figure 6.1: The maximum value for each of the 27 three-cell correlation propers for wires of different length. The largest three-cell correlation propers do not exceed M = 0.03 for any of the simulated wires.]

6.2 Approximating Second-Order Correlations

In the previous section, we showed that we could eliminate three-point correlations from our calculations since the three-cell correlation propers were approximately equal to zero. Thus, our dynamical equations are now limited to including only up to two-cell correlations. In doing so, we reduce the number of state variables that need to be solved for from 4^N − 1 down to 4.5N^2 − 1.5N.
While this reduction is significant, we seek to reduce this number even further. Using Eq. (6.2) and the linearity property of expectations, the two-cell correlation propers can be expressed as

    M_{\alpha_i \alpha_j}(i, j) = \langle \hat{\sigma}_{\alpha_i} \hat{\sigma}_{\alpha_j} \rangle - \langle \hat{\sigma}_{\alpha_i} \rangle \langle \hat{\sigma}_{\alpha_j} \rangle = K_{\alpha_i \alpha_j}(i, j) - \lambda_{\alpha_i}(i)\,\lambda_{\alpha_j}(j).    (6.7)

As was the case with the three-cell correlation proper, if M_{α_i α_j}(i, j) ≈ 0, then the dynamics of the operators σ̂_{α_i} and σ̂_{α_j} are not considered to be correlated, and the two-cell correlation terms can be approximated using

    K_{\alpha_i \alpha_j}(i, j) \approx \lambda_{\alpha_i}(i)\,\lambda_{\alpha_j}(j).    (6.8)

To determine which two-cell correlation terms can be eliminated using Eq. (6.8), we simulated lines of QCA cells ranging from 2 to 7 cells in length and a 5-cell cross. Figures 6.2 and 6.3 show the 9 two-cell correlation propers for all unique pair combinations of a 5-cell wire and cross, respectively. Note that since M_{ab}(i, j) = M_{ba}(j, i), we only plot the two-cell correlation propers of pairs for which i < j. We also plotted the maximum value of each two-cell correlation proper as a function of the wire length in Figure 6.4.

By quick inspection of Figures 6.2-6.4, we can see that K_xy, K_yx, K_yz and K_zy are all good candidates to be approximated using Eq. (6.8). Therefore, we will only compute the dynamics for the remaining five two-cell correlation terms. It should be noted that the ICHA approximates all two-cell correlation terms using Eq. (6.8).

6.2.1 System of ODEs

Using Eq. (5.10), we can set up a linear set of ODEs that will be used to compute the dynamics of our QCA system. As we have already discussed, all three-cell correlation terms (and higher) are completely excluded from our calculations, and so our system of ODEs will only include the dynamical equations for the three single-cell coherence-vector terms and five of the nine
two-cell correlation terms (the remaining four will be approximated using Eq. (6.8)).

[Figure 6.2: The two-cell correlation propers for all pairs of cells in a five-cell wire.]

[Figure 6.3: The two-cell correlation propers for all pairs of cells in a five-cell cross.]

[Figure 6.4: The maximum two-point correlation proper terms for wires of increasing length.]

The ODEs for the three single-cell coherence-vector terms are

    \hbar \frac{d}{dt} \lambda_x(i) = \sum_j^N E_k^{i,j} K_{yz}(i, j) \approx \sum_j^N E_k^{i,j} \lambda_y(i)\,\lambda_z(j),    (6.9)

    \hbar \frac{d}{dt} \lambda_y(i) = 2\gamma_i \lambda_z(i) - \sum_j^N E_k^{i,j} K_{xz}(i, j),    (6.10)

    \hbar \frac{d}{dt} \lambda_z(i) = -2\gamma_i \lambda_y(i).    (6.11)

Next, we will use Eq. (5.10) to determine the ODEs for all the relevant two-point coherence-vector terms.
We begin with the K_xz correlation term since it is the only one to appear on the right-hand side of any of the ODEs above. Again, where necessary, we use Eqs. (6.6) and (6.8) to approximate the three- and two-cell correlation terms, respectively:

    \hbar \frac{d}{dt} K_{xz}(i, j) = -2\gamma_j K_{xy}(i, j) + E_k^{i,j} \lambda_y(i) + \sum_m E_k^{i,m} K_{yzz}(i, j, m) - \sum_d E_k^{i,d} P_d K_{yz}(i, j)
                      \approx -2\gamma_j \lambda_x(i)\,\lambda_y(j) + E_k^{i,j} \lambda_y(i) + \sum_m E_k^{i,m} K_{zz}(j, m)\,\lambda_y(i) - \sum_d E_k^{i,d} P_d \lambda_y(i)\,\lambda_z(j).    (6.12)

The only new term that appears on the right-hand side of Eq. (6.12) is K_zz. The ODE for K_zz can be expressed as

    \hbar \frac{d}{dt} K_{zz}(i, j) = -2\gamma_i K_{yz}(i, j) - 2\gamma_j K_{zy}(i, j) \approx -2\gamma_i \lambda_y(i)\,\lambda_z(j) - 2\gamma_j \lambda_z(i)\,\lambda_y(j).    (6.13)

Since no new correlation terms appear on the right-hand side of Eq. (6.13), our work is done. The remaining two-point correlation terms (K_xx, K_yy, and K_zx) appear only in the dynamical equations for K_xy, K_yx, K_yz or K_zy, which are unused, and therefore are not required to compute the dynamic response of our QCA system. Thus, the total number of state variables that we need to solve for is now N^2 + 2N. If we limit the cell-cell interaction to nearest-neighbour interaction only, then this number reduces even further to 5N − 2, which scales very well even for large circuits.

6.3 ODE Solver

The numerical solution of systems of ODEs is a large research field, and there exists a vast amount of literature on the theoretical and computational aspects of the various solvers [67-69]. In this work, we implement the widely used Dormand-Prince solver, which belongs to the large family of methods known as the Runge-Kutta (RK) methods. For a general initial-value problem of the form

    \frac{dy}{dt} = f(t, y), \quad t > t_0,    (6.14)

    y(t_0) = y_0,    (6.15)

RK methods compute a number of intermediate values of f(t, y) in the interval [t_n, t_{n+1}], and approximate the integral as a weighted sum of these values. Denoting the intermediate approximations of f(t, y) by k_i, the approximation of the integral of the right-hand side in Eq.
(6.14) becomes

    \int_{t_n}^{t_{n+1}} f(t, y)\, dt \approx \Delta t \sum_{i=1}^{s} b_i k_i,    (6.16)

for given weights b_i and a number of stages s. The step size, given by Δt = t_{n+1} − t_n, can be fixed or variable depending on the choice of implementation. A general RK method with s stages can be written as

    k_i = f\left(t_n + c_i \Delta t,\; y_n + \Delta t \sum_{j=1}^{s} a_{ij} k_j\right), \quad i = 1, 2, \ldots, s,    (6.17)

    y_{n+1} = y_n + \Delta t \sum_{i=1}^{s} b_i k_i.    (6.18)

Here c_i, b_i, and a_{ij}, for i, j = 1, …, s, are method-specific, given coefficients. These coefficients are typically given in a Butcher tableau, which has the form:

    c_1 | a_11  a_12  …  a_1s
    c_2 | a_21  a_22  …  a_2s
     ⋮  |  ⋮     ⋮    ⋱    ⋮
    c_s | a_s1  a_s2  …  a_ss
        | b_1   b_2   …  b_s

The Dormand-Prince method is an explicit RK method, for which a_{ij} = 0 for j ≥ i. The coefficients for the Dormand-Prince method are given in the Butcher tableau in Table 6.1 (where obvious zeros have been omitted) [67]. The first row of b coefficients gives the fifth-order accurate solution and the second row gives the fourth-order accurate solution. Six function evaluations are used to calculate the fourth- and fifth-order accurate solutions, where the difference between these solutions is then taken to be the error of the (fourth-order) solution and is used to determine the next step size. (There are actually seven stages; however, the last stage is evaluated at the same point as the first stage of the next step, and thus only six new function evaluations are required.)

Table 6.1: Butcher tableau for the Dormand-Prince method

    0     |
    1/5   | 1/5
    3/10  | 3/40         9/40
    4/5   | 44/45        -56/15        32/9
    8/9   | 19372/6561   -25360/2187   64448/6561   -212/729
    1     | 9017/3168    -355/33       46732/5247   49/176        -5103/18656
    1     | 35/384       0             500/1113     125/192       -2187/6784     11/84
          | 35/384       0             500/1113     125/192       -2187/6784     11/84      0
          | 5179/57600   0             7571/16695   393/640       -92097/339200  187/2100   1/40
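As an illustration of how the tableau in Table 6.1 is used, the following is a minimal single Dormand-Prince step in Python (our own sketch; the adaptive step-size control and the reuse of the last stage as the first stage of the next step are omitted for brevity). Production implementations of this same pair include MATLAB's ode45 and SciPy's RK45.

```python
import math

# Dormand-Prince coefficients of Table 6.1: a_ij (rows of A), c_i, and the
# fifth- (B5) and fourth-order (B4) weight rows.
A = [[],
     [1/5],
     [3/40, 9/40],
     [44/45, -56/15, 32/9],
     [19372/6561, -25360/2187, 64448/6561, -212/729],
     [9017/3168, -355/33, 46732/5247, 49/176, -5103/18656],
     [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84]]
C = [0, 1/5, 3/10, 4/5, 8/9, 1, 1]
B5 = [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84, 0]
B4 = [5179/57600, 0, 7571/16695, 393/640, -92097/339200, 187/2100, 1/40]

def dp_step(f, t, y, dt):
    """One explicit RK step per Eqs. (6.17)-(6.18): returns the fifth-order
    y_{n+1} and the error estimate |y5 - y4| used for step-size control."""
    k = []
    for i in range(7):
        yi = y + dt * sum(a * kj for a, kj in zip(A[i], k))
        k.append(f(t + C[i] * dt, yi))
    y5 = y + dt * sum(b * kj for b, kj in zip(B5, k))
    y4 = y + dt * sum(b * kj for b, kj in zip(B4, k))
    return y5, abs(y5 - y4)

# Example: dy/dt = -y with y(0) = 1; compare one step against exp(-dt).
y1, err = dp_step(lambda t, y: -y, 0.0, 1.0, 0.1)
assert abs(y1 - math.exp(-0.1)) < 1e-8
```

Note that the seventh stage row of A equals the B5 row and c_7 = 1, which is exactly the "last stage equals the first stage of the next step" property discussed above.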
The solver's suitability for non-stiff problems, along with its high degree of accuracy and stability, made it a good choice for our problem. Furthermore, compared to other well-known methods such as Euler's method or Crank-Nicolson, the Dormand-Prince method is capable of using larger time steps and, as such, is able to converge faster than its counterparts. It is also worth noting that the Dormand-Prince method is both MATLAB's and GNU Octave's default 4th-order solver and has also been used to solve for the uncorrelated dynamics of 3-state QCA systems [70].

6.4 Summary

In this chapter, we explored the role that correlations play in the dynamics of QCA circuits. In particular, the correlation proper in Eq. (6.2) was used as a metric to gauge the correlation between a set of p operators. pth-order operators are required to compute the correlation proper of p operators; however, in cases where the correlation proper is approximately zero, those operators are considered to be uncorrelated and can be approximated using (p − 1)th-order operators. Using Eq. (6.2), it was found that only the 2nd-order operators were sufficiently large to play a significant role in the dynamics, and thus all higher-order correlation terms can be either neglected or approximated using 2nd-order operators. By removing the higher-order correlation terms from our calculations, the complexity of solving for the dynamics of an N-cell system drops from O(4^N) to O(N^2). Further reductions in complexity are possible if only nearest-neighbour interactions are considered. In such cases, the complexity of our problem drops to O(N), which represents a very manageable growth in state variables when solving QCA systems on a classical computer.

Lastly, a set of co-dependent ODEs was constructed using Eq. (5.17) in order to model the time dependence of the remaining operators of our system.
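To make the construction of this ODE system concrete, the retained dynamical equations, Eqs. (6.9)-(6.13), can be packed into a single right-hand-side function for a standard solver. The following is a hedged NumPy sketch under our own naming and state-packing conventions (not the thesis' MATLAB code); energies are taken in units where ħ = 1, and the full N×N pair matrices are kept for clarity rather than the reduced nearest-neighbour set.

```python
import numpy as np

def qca_rhs(t, y, N, E, gamma, E_drv, P_drv):
    """Right-hand sides of Eqs. (6.9)-(6.13), with hbar = 1.
    E[i,j]: cell-cell kink energies; gamma[i]: tunnelling energies;
    E_drv[i,d], P_drv[d]: driver-cell kink energies and driver polarizations.
    State packing: [lambda_x | lambda_y | lambda_z | K_xz (N*N) | K_zz (N*N)]."""
    lx, ly, lz = y[0:N], y[N:2*N], y[2*N:3*N]
    Kxz = y[3*N:3*N + N*N].reshape(N, N)
    Kzz = y[3*N + N*N:].reshape(N, N)
    drive = E_drv @ P_drv                         # sum_d E_k^{i,d} P_d
    dlx = (E * np.outer(ly, lz)).sum(axis=1)      # Eq. (6.9), K_yz -> ly*lz
    dly = 2*gamma*lz - (E * Kxz).sum(axis=1)      # Eq. (6.10)
    dlz = -2*gamma*ly                             # Eq. (6.11)
    # Eq. (6.12), with K_yzz(i,j,m) approximated by K_zz(j,m)*lambda_y(i)
    dKxz = (-2*np.outer(lx, gamma*ly) + E*ly[:, None]
            + ly[:, None]*(E @ Kzz.T) - np.outer(drive*ly, lz))
    # Eq. (6.13), with the K_yz and K_zy terms approximated via Eq. (6.8)
    dKzz = -2*np.outer(gamma*ly, lz) - 2*np.outer(lz, gamma*ly)
    return np.concatenate([dlx, dly, dlz, dKxz.ravel(), dKzz.ravel()])
```

The (t, y) call signature matches the convention used by common ODE integrators (e.g., SciPy's solve_ivp), so this one function is all that a Dormand-Prince solver needs to propagate the correlated dynamics.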
The well-known Dormand-Prince method, owing to its suitability for non-stiff problems, its wide applicability, and its extensive documentation, was the ODE solver best suited to our system, and it was described in detail in this chapter.

Chapter 7
Steady-State Solutions

To solve for the dynamics of a QCA system, we will solve the ODE shown in Eq. (5.17) for each state variable involved in the evolution of the system. The challenge in solving these ODEs, however, is calculating the steady-state terms, which requires diagonalizing the full Hamiltonian of the system. Thus, while we may have succeeded in reducing the complexity of solving the dynamics of a closed QCA system, driving the system towards its steady-state solution remains computationally intractable, since the Hamiltonian dimension grows exponentially with the number of cells. This is not an issue when modelling QCA systems using the ICHA, since it uses the 2×2 Hamiltonians described in Eq. (2.4) to calculate the steady state of each individual cell. However, the solutions using this method are not always reliable [43]. One thing that will prove useful is recognizing that, in the polarization basis, the steady-state density matrix is diagonal; thus, taking its trace with any of the generating operators of SU(n) (as required in order to compute the steady-state values of the coherence vector; see Eq. (5.18)) will only result in a non-zero value for those generating operators which have diagonal elements. Therefore, only the λ_z^ss and K_zz^ss terms are non-zero. (Note that this is consistent with our earlier assertion that the off-diagonal terms describe the coherence of our system, which we expect to go to zero.) Furthermore, since our system will be approaching its classical limit, and thus be almost entirely incoherent once this relaxation occurs, we can use Eq. (6.8) to approximate the K_zz^ss terms as K_zz^ss(i, j) ≈ λ_z^ss(i) λ_z^ss(j).
Thus, our problem reduces to simply finding the Boltzmann distribution for each individual cell.

To do this, we require an efficient way to explore the configuration space spanned by an N-cell system in order to compute the steady-state density matrix shown in Eq. (5.19). Fortunately, many of the 2^N possible configurations of an N-cell system represent high-energy states, and thus are unlikely to occur. Therefore, only a partial exploration of the configuration space is required in order to capture the thermal ground state that we would expect in practice. In this work, we consider three well-known stochastic methods to accomplish this, and examine them in closer detail in the following sections of this thesis. Partial exploration of a QCA system's configuration space using stochastic methods has been considered in previous work [71], where the authors used simulated annealing to find the instantaneous ground state of a QCA system. In this work, we are not interested in simply finding the ground state, but instead in using stochastic methods to approximate the Boltzmann distribution of the system.

7.1 Monte Carlo Method

For a defined set of possible configurations, the Monte Carlo (MC) method randomly generates configurations from a probability distribution over that set, and aggregates the results. The simplest of the Monte Carlo methods is implemented using the Metropolis procedure, where configurations are generated from a previous state using a transition probability (i.e., a Markov chain) which depends only on the energy difference between the current and new state [72]. To understand why this is useful, consider a QCA circuit for which the states are any one of the 2^N possible configurations of the system, and satisfy [72]:

    P_n W_{n \to m} = P_m W_{m \to n},    (7.1)

where P_n is the probability of the system being in state n, and W_{n→m} is the transition probability for n → m.
The probability of the nth state occurring is given by the Boltzmann distribution,

    P_n = \frac{e^{-E_n / k_B T}}{Z},    (7.2)

where Z is the canonical partition function, defined as

    Z = \sum_n e^{-E_n / k_B T}.    (7.3)

This probability is usually not exactly known because of the denominator; however, we can avoid this difficulty by generating a Markov chain of states, i.e., generating each new state directly from the preceding state. If we produce the nth state from the mth state, then the relative probability is the ratio of the individual probabilities, and the denominator cancels. As a result, only the energy difference between the two states is needed, i.e.,

    \Delta E = E_n - E_m.    (7.4)

Therefore, the Monte Carlo algorithm is particularly useful for systems where there exists a large number of possible configurations. The Monte Carlo algorithm can be executed in the following steps:

1. Choose an initial configuration m (i.e., any one of the 2^N possible configurations of the QCA system) and set it as the current state
2. Choose a cell i from the current state, and create a new state n by reversing the polarization of that cell
3. Calculate the energy change ΔE = E_n − E_m which results when the polarization at cell i is reversed
4. If ΔE < 0, set the current state to state n, and return to Step 2
5. Otherwise, generate a random number r such that 0 < r < 1
6. If r < e^{−ΔE/k_BT}, set the current state to state n; otherwise retain state m as the current state
7. Return to Step 2

By repeating the process many times, one simulates the thermal progression of the QCA system in contact with a heat bath at temperature T. After M iterations, one is left with a set of M intermediate solutions that make up a subset of the possible configurations of the QCA system, with the most likely (i.e., the lowest-energy) states making up the majority of the population.
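The steps above can be sketched as follows. This is a hedged, self-contained Python illustration with our own simplified energy model (nearest-neighbour kink-energy couplings along a wire, plus a fixed driver cell); it is not the QCADesigner or MATLAB implementation used in this work.

```python
import math, random

def energy(pol, Ek, P_d):
    """Toy wire energy: each aligned neighbour pair (driver included)
    lowers the energy by Ek; polarizations are +/-1."""
    return -Ek * sum(a * b for a, b in zip([P_d] + pol[:-1], pol))

def metropolis(N, Ek, kBT, M, P_d=1, seed=0):
    rng = random.Random(seed)
    state = [1] * N                        # Step 1: initial configuration
    E_cur = energy(state, Ek, P_d)
    avg = [0.0] * N
    for _ in range(M):
        i = rng.randrange(N)               # Step 2: flip one cell
        state[i] = -state[i]
        E_new = energy(state, Ek, P_d)
        dE = E_new - E_cur                 # Step 3, with Eq. (7.4)'s sign
        if dE < 0 or rng.random() < math.exp(-dE / kBT):
            E_cur = E_new                  # Steps 4-6: accept the new state
        else:
            state[i] = -state[i]           # reject: undo the flip
        for j in range(N):                 # aggregate the M configurations
            avg[j] += state[j] / M
    return avg                             # per-cell steady-state estimate

# 1 nm cells at 300 K, as in Section 7.1.1: Ek ~ 300 meV, kBT ~ 25 meV.
P = metropolis(N=8, Ek=0.300, kBT=0.025, M=20000)
```

For clarity the sketch recomputes the full energy at every step; in practice ΔE can be computed locally from the flipped cell's neighbours, which is what keeps each iteration cheap.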
The steady-state polarization is the average of the resulting M configurations.

7.1.1 Results

In this section, we investigate the accuracy of the Monte Carlo algorithm when solving for the steady-state values of three different QCA circuits. The three QCA circuits considered are shown in Figure 7.1. The steady-state values for each circuit were found by solving Eq. (5.18) and compared to those computed using the Monte Carlo algorithm. Simulations were conducted on 1 nm cells spaced 2 nm apart (E_k ≈ 300 meV) at 300 K (k_BT ≈ 25 meV). The steady-state polarization for each cell under these conditions is listed in Table 7.1. The values shown in Table 7.1 were computed by solving Eq. (5.18) using a full N-cell Hamiltonian.

[Figure 7.1: Three QCA circuits used to compare the accuracy of the three considered stochastic methods.]

The Monte Carlo algorithm was run on each circuit for iteration counts ranging from M = 100 to 1×10^6. The absolute difference between the polarizations predicted by the Monte Carlo algorithm and those found in Table 7.1 was calculated and averaged over the total number of cells to determine the average error per cell. Due to the inherent randomness associated with stochastic methods, the Monte Carlo algorithm was run 1,000 times at each iteration count, and the average error per cell for each run was plotted in Figures 7.2-7.4. In order to quantify the quality of the results, three metrics are also included in each figure. The first metric, P(Error ≤ m) = 50% (i.e., the median), indicates the maximum error one is expected to see for 50% of the simulations. Similarly, P(Error ≤ m) = 75% and P(Error ≤ m) = 90% indicate the maximum error one is expected to see for 75% and 90% of simulations, respectively.
Table 7.1: Steady-state polarizations for each circuit in Figure 7.1

    Cell | Circuit A | Circuit B | Circuit C
    -----|-----------|-----------|----------
    1    | +1        | +0.8      | +1
    2    | +1        | +0.6      | +1
    3    | +1        | +0.4      | +1
    4    | +1        | +0.2      | +1
    5    | +1        |  0        | +1
    6    | +1        | -0.2      | +1
    7    | +1        | -0.4      | +1
    8    | -1        | -0.6      | +1
    9    |           | -0.8      | +1
    10   |           |           |  0
    11   |           |           | +1

[Figure 7.2: Circuit A. The average error per cell versus the number of iterations using the Monte Carlo algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.]

[Figure 7.3: Circuit B. The average error per cell versus the number of iterations using the Monte Carlo algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.]

[Figure 7.4: Circuit C. The average error per cell versus the number of iterations using the Monte Carlo algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.]

Circuits A and C have the lowest average error per cell, and require only 1,000 iterations of the algorithm before the majority of the simulations yield an average error of less than 2%. Comparatively, Circuit B requires over 100,000 iterations in order to achieve the same degree of accuracy.
The additional work that the Monte Carlo algorithm must do on the latter circuit is not unexpected, since the steady-state polarizations for both Circuits A and C are almost entirely ±1 (with the lone exception being cell 10 in Circuit C), and thus do not require a delicate balance of intermediate solutions to produce polarizations |P_i| < 1. Circuit B, however, represents the worst-case scenario for determining the steady-state values, and is not likely to present itself in a realistic QCA circuit.

The simulations above were performed at relatively low temperatures, that is, for E_k ≫ k_BT. However, the ratio E_k/k_BT determines the likelihood of accepting a new state, and can significantly impact the results and hence the accuracy of this method. Thus, for a second simulation, the Monte Carlo algorithm was run on each circuit for different values of E_k/k_BT. The number of iterations was kept fixed at M = 10,000, and the simulations were repeated 1,000 times and averaged at each E_k/k_BT. The results of these simulations are shown in Figure 7.5.

[Figure 7.5: The average error per cell versus E_k/k_BT for M = 10,000.]

By quick inspection, we can see that the Monte Carlo algorithm is only valid for E_k ≳ 9.5 k_BT. At first sight, this condition seems restrictive; however, the requirement that E_k ≫ k_BT is a reasonable one, since this is necessary for error-free operation of QCA devices in general. To add further perspective, current work in atomic QCA has shown the ability to fabricate cells smaller than 1 nm. At room temperature, the ratio E_k/k_BT for 1 nm cells (each separated by 2 nm) is E_k/k_BT ≈ 11.5. Thus, this approximation falls within the practical limits of QCA operation.

All the results to this point have been achieved by starting with an initial candidate solution that closely represents the desired result.
However, for cases where it may not be possible to predict the actual steady-state polarizations, it is worthwhile to see how well the Monte Carlo algorithm performs when the simulation is not seeded with a good initial guess. Thus, for a third set of simulations, we simulate the steady-state polarizations of a 10- and a 20-cell array (driven by P_D = 1) using four different initial guesses:

    P_i = {1, 1, 1, …} (zero kinks)
    P_i = {−1, 1, −1, 1, …} (N kinks)
    P_i = {1, 1, …, 1, −1, …, −1, −1} (single kink, mid-array)
    P_i = {−1, −1, −1, …} (single kink, start of array)

For each initial guess, the Monte Carlo algorithm was run 1,000 times on each array, and the average steady-state polarization of each cell was plotted in Figure 7.6. The number of iterations for each simulation was kept fixed at M = 1,000.

The results in Figure 7.6 show that the Monte Carlo algorithm has a difficult time recovering from a bad initial guess, particularly for larger circuits. More intriguing, however, is how the simulation performs for the different initial guesses. Two factors appear to dictate the quality of an initial guess: (a) the number of cells that start with the correct polarization, and (b) the energy of the initial guess. The former condition is intuitive: the more cells that begin in the correct polarization, the fewer bit flips are required to arrive at the ground state. For this reason, the initial guess that featured a kink in the middle of the array (and half of its cells initially set to P = 1) was able to produce a final steady-state solution closer to the desired result than the initial guess with the single kink at the beginning of the array (and none of its cells initially set to P = 1). The second condition favours those initial guesses that represent higher-energy states. The simple explanation for this is that lower-energy states offer fewer potential bit flips that lead to lower-energy states, and thus new states are less likely to be accepted by the algorithm.
If the initial guess is close or equal to the desired result, then this can be advantageous (as was the case for the initial guess which featured zero kinks). However, if the initial guess is not close to the desired result, then the algorithm can get stuck in a local minimum and may not be able to escape from the initial conditions. This second condition explains why the initial guess with N kinks was able to produce better steady-state solutions than those with only a single kink, in spite of being a much higher-energy state.

[Figure 7.6: The steady-state polarizations of each cell in (a) a 10-cell array and (b) a 20-cell array for different initial guesses. The steady-state polarizations are averaged over 1,000 simulations. The number of iterations was kept fixed at M = 1,000 for each simulation.]

Thus, a good initial guess is critical to the success of the algorithm. Therefore, we will use a simple logic simulator to guess the polarization of as many cells as possible in the QCA circuit prior to simulation. For those circuits for which a good initial guess cannot be made, increasing the number of iterations can help mitigate the error in the final solution. However, this comes at the expense of increased simulation time, which could make this approach computationally prohibitive.

For a last set of simulations, we investigate the computational complexity of the Monte Carlo method. To determine the time dependence as a function of the circuit size, a set of simulations was conducted on arrays of cells ranging from 1 to 50 cells.
The number of iterations for each simulation was iteratively increased until an average error per cell of ±0.01 was achieved. Simulations were run 1,000 times at each array length and plotted in Figure 7.7. The dashed line represents the best-fit curve.

[Figure 7.7: The run-times of the Monte Carlo method versus the number of cells in an array. The number of iterations for each simulation was adjusted in order to maintain an average error per cell below ±0.01. Simulations were run 1,000 times for each array length. The dashed curve represents a best-fit curve.]

A quick inspection of Figure 7.7 reveals that the best-fit curve is approximately linear with a slope of m ≈ 2×10^−3 s/cell. It should be noted that these simulations were run in a MATLAB environment, and thus do not necessarily represent the run-times one would expect to see in QCADesigner. However, for the purposes of comparison, we will keep track of the slope for each of the discussed stochastic methods. Most importantly, the trend, which we expect to stay the same regardless of whether we run this code in QCADesigner or MATLAB, grows linearly with the number of cells.

7.2 Simulated Annealing

Simulated annealing (SA) is executed in almost an identical manner to the Monte Carlo algorithm described in the previous section [73]. Like the Monte Carlo algorithm, new states are generated from the current state and retained with probability min(1, e^−ΔE/k_BT). However, whereas the Monte Carlo algorithm kept a fixed temperature T for each iteration, the simulated annealing process consists of first "melting" the system being optimized at a high effective temperature, then lowering the temperature by slow stages until the system "freezes" and no further changes occur.
Simulated annealing, as implemented by the Metropolis procedure, benefits from being able to transition into a larger subset of states due to the high temperatures in early iterations, thus reducing the likelihood of the procedure getting "stuck" in local minima. However, one of the consequences of having access to a larger subset of states is that a larger number of iterations may be required in order to average out the higher-energy states that appear in the final population. Like the Monte Carlo algorithm, after M iterations, the steady-state polarization is computed by taking the average of the resulting M configurations.

7.2.1 Results

In this section, we investigate the accuracy of the simulated annealing algorithm when solving for the steady-state values of the three QCA circuits shown in Figure 7.1. Simulations were again conducted on 1 nm cells spaced 2 nm apart at 300 K, and the results were compared to the steady-state polarizations listed in Table 7.1.

Two exponentially decaying temperature functions were used for the simulated annealing algorithm in order to determine an optimal temperature range for simulating QCA circuits. In the first function, the temperature ranges from T_s1 = 600 K → 1 K, and in the second, from T_s2 = 300 K → 1 K. For each temperature function, the simulated annealing algorithm was run 1,000 times on each circuit for iteration counts ranging from M = 100 to 1×10^6. The average error per cell for each run is plotted in Figures 7.8-7.10 for both temperature functions.

Between the two temperature functions, the second function provided greater accuracy when compared to the actual steady-state values of each circuit. This is likely due to the fact that the first temperature function,
which started at 600 K, allowed too many high-energy states to be retained early in the simulation, resulting in a non-realistic distribution of states at its conclusion. Compared to the Monte Carlo algorithm, the second temperature function was also able to find the steady-state values with a greater degree of accuracy. The improvement can be explained by the drop in temperature, and hence in the acceptance probability, of the simulated annealing algorithm.

[Figure 7.8: Circuit A. The average error per cell versus the number of iterations using simulated annealing for two exponentially decaying temperature functions, (a) T_s1 = 600 K → 1 K and (b) T_s2 = 300 K → 1 K. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.]

[Figure 7.9: Circuit B. The average error per cell versus the number of iterations using simulated annealing for two exponentially decaying temperature functions, (a) T_s1 = 600 K → 1 K and (b) T_s2 = 300 K → 1 K. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.]
As the simulation carries on, it becomes harder for higher-energy states to be retained; thus, late in the simulation, only states whose energies are equal to or lower than that of the current state contribute to the final distribution of states. Therefore, the most likely states of the QCA system make up the majority of the final population.

[Figure 7.10: Circuit C. The average error per cell versus the number of iterations using simulated annealing for two exponentially decaying temperature functions, (a) Ts1 = 600 K → 1 K and (b) Ts2 = 300 K → 1 K. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.]

For a second set of simulations, the simulated annealing algorithm was run at different ratios of Ek/kBT to compare the performance of the algorithm for circuit layouts at 300 K. Again, the number of iterations was kept fixed at M = 10,000, and the simulations were repeated 1,000 times and averaged at each Ek/kBT. The temperature function Ts2 = 300 K → 1 K was kept fixed for all Ek/kBT. The results of these simulations are shown in Figure 7.11.

[Figure 7.11: The average error per cell versus Ek/kBT for M = 10,000. The temperature function decays exponentially from 300 K → 1 K.]

From Figure 7.11, it appears that the simulated annealing algorithm performs better than the Monte Carlo algorithm at larger ratios of Ek/kBT, which represents the region of interest for correct QCA operation.
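The annealing loop just described — single-state Metropolis updates under an exponentially decaying temperature schedule, with the steady-state polarization taken as the average over all M visited configurations — can be sketched in a few lines. This is an illustrative toy only: the one-dimensional energy function, the kink energy `ek`, and the unitless temperatures below are hypothetical stand-ins, not the thesis's QCA Hamiltonian or its physical parameters.

```python
import math
import random

def chain_energy(config, driver=-1.0, ek=1.0):
    """Toy stand-in for the circuit energy: neighbouring cells (and the
    driver at the left end) lower the energy by ek when they agree."""
    e = -ek * driver * config[0]
    for a, b in zip(config, config[1:]):
        e -= ek * a * b
    return e

def anneal(n_cells, m_iters, t_start, t_end, kb=1.0, seed=0):
    """Metropolis annealing: propose single-cell flips, always accept
    downhill (or equal-energy) moves, accept uphill moves with probability
    exp(-dE / kB*T), and average polarizations over all M configurations."""
    rng = random.Random(seed)
    config = [rng.choice((-1.0, 1.0)) for _ in range(n_cells)]
    sums = [0.0] * n_cells
    for step in range(m_iters):
        # exponentially decaying schedule from t_start down to t_end
        t = t_start * (t_end / t_start) ** (step / (m_iters - 1))
        i = rng.randrange(n_cells)
        trial = config[:i] + [-config[i]] + config[i + 1:]
        d_e = chain_energy(trial) - chain_energy(config)
        if d_e <= 0 or rng.random() < math.exp(-d_e / (kb * t)):
            config = trial
        for j, p in enumerate(config):
            sums[j] += p
    return [s / m_iters for s in sums]
```

With strong coupling relative to the final temperature, the averaged polarizations settle near the driver's value; raising `t_start` admits more high-energy states early in the run, mirroring the Ts1 versus Ts2 comparison discussed in this section.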
In fact, using simulated annealing, one can accurately compute the steady-state values for Ek/kBT ≳ 7.5, which represents about a 25% improvement over the Monte Carlo algorithm. Again, this can likely be explained by the lower acceptance rate of the simulated annealing algorithm as the simulation carries on.

For the next set of simulations, we consider the performance of the simulated annealing algorithm when seeded with different initial guesses. We use the same four initial guesses as we did when exploring the performance of the Monte Carlo algorithm in the previous section, and apply them to 10- and 20-cell arrays. Simulations were repeated 1,000 times and averaged to produce the steady-state polarizations shown in Figure 7.12. The number of iterations for each simulation was kept fixed at M = 1,000.

[Figure 7.12: The steady-state polarizations of each cell in (a) a 10-cell array and (b) a 20-cell array for different initial guesses (zero kinks; all kinks; one kink in the middle; one kink at the beginning). The steady-state polarizations are averaged over 1,000 simulations, with M = 1,000 iterations per simulation.]

The simulated annealing algorithm produces results very similar to those computed using the Monte Carlo algorithm, with the latter performing marginally better. The difference in performance can be attributed to the decreasing temperature function used in the former. The lower temperature towards the end of the simulation makes it harder for higher-energy states to be accepted by the algorithm. When a good initial guess is made, this is advantageous, since any accepted new state will likely resemble the desired steady state of the system.
However, if a bad initial guess is made, then the lower acceptance rates can make it hard for the algorithm to escape from any local minima created by those initial conditions. Again, it is beneficial to use a simple logic simulator to estimate the correct polarizations a priori, or to increase the number of iterations used by the algorithm.

Lastly, to determine the time dependence of the simulated annealing algorithm as a function of circuit size, a set of simulations was conducted on arrays of cells ranging from 1 to 50 cells. Again, the number of iterations for each simulation was iteratively increased until an average error per cell of ±0.01 was achieved. Simulations were run 1,000 times at each array length and plotted in Figure 7.13. The dashed line represents the best-fit curve.

[Figure 7.13: The run-times of the simulated annealing method versus the number of cells in an array. The number of iterations for each simulation was adjusted in order to maintain an average error per cell below ±0.01. Simulations were run 1,000 times for each array length. The dashed curve represents a best-fit curve.]

The best-fit curve for the simulated annealing algorithm is almost identical to that of the Monte Carlo method, which is to be expected since both are implemented using the Monte Carlo procedure. Like the Monte Carlo algorithm, the complexity of the simulated annealing algorithm is approximately linear, with a slope of m ≈ 2×10^-3 sec/cell.

7.3 Genetic Algorithm

Genetic Algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics. The basic concept of a genetic algorithm is to simulate the processes in natural systems necessary for evolution, in particular those that follow the "survival of the fittest" principle.
As such, they represent a distinctive method of performing a randomized search within a defined search space to solve a problem.

In a genetic algorithm, a population of candidate solutions (called "individuals") to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its "chromosomes") which can be mutated and altered. The evolution typically starts from a population of randomly generated individuals and proceeds in generations. In each generation, the fitness of every individual in the population is evaluated, and two individuals (parents) are randomly selected from the current population, with fitter individuals more likely to be selected. Each parent's chromosomes are modified to form two new individuals (children), who are then inserted into the existing population along with their parents. Mutation is often performed at this stage as well, whereby one of the chromosomes of each child is randomly selected and modified before re-entering the population. This is done to prevent the algorithm from getting stuck in a local minimum and to provide more diversity within the population. Once the parents and children have re-entered the population, the two weakest individuals (according to fitness) are removed from the new population in order to maintain the original population size. The new population is then used in the next iteration of the algorithm.

For a QCA system, the initial population consists of a random collection of the 2^N possible configurations (individuals). The chromosomes of each individual are the polarizations of each cell in its configuration (i.e., ±1). The fitness of each individual in the population is given by

    Fitness(i) = exp(-Ei / kBT) / Z',    (7.5)

where Z' is an abbreviated partition function consisting only of the configurations in the population.
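Equation (7.5) and the population-weighted average it feeds can be sketched directly. This is an illustrative fragment only — the helper names and the small example population are hypothetical, and the energies are unitless stand-ins for Ei/kBT:

```python
import math

def boltzmann_fitness(energies, kb_t=1.0):
    """Eq. (7.5): fitness_i = exp(-E_i / kB*T) / Z', where Z' is the
    abbreviated partition function summed over the population only."""
    weights = [math.exp(-e / kb_t) for e in energies]
    z_prime = sum(weights)
    return [w / z_prime for w in weights]

def weighted_polarizations(population, energies, kb_t=1.0):
    """Fitness-weighted average polarization of each cell, taken over
    the configurations (individuals) in the current population."""
    fitness = boltzmann_fitness(energies, kb_t)
    n_cells = len(population[0])
    return [sum(f * cfg[j] for f, cfg in zip(fitness, population))
            for j in range(n_cells)]
```

Because each individual's weight falls off exponentially with its energy, a handful of low-energy configurations dominates the average — which is why the population does not need to span the full 2^N configuration space.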
The final solution is the weighted average of the final population after M iterations.

There are several notable limitations to genetic algorithms. In particular, genetic algorithms do not scale well with complexity. As the number of potential configurations increases, so too does the typical population size required to obtain a reasonable solution. However, for QCA circuits, only a small subset of the configuration space actually represents the likely states of the system, and therefore the population size does not need to scale directly with the problem size. Another limitation of genetic algorithms is the computational cost of evaluating the fitness function for each individual. This is typically the most expensive part of most genetic algorithms, and it is the primary limiting factor here as well.

Table 7.2: Average error per cell versus population size and number of iterations.

                       Number of iterations
           Pop. size   100       1,000     10,000    100,000   1x10^6
Circuit A  100         0.8152    0.8261    0.8269    0.8269    0.8253
           1,000       0.0326    0.0316    0.0317    0.0317    0.0316
           10,000      0.00021   0.00022   0.00020   0.00022   0.00021
Circuit B  100         0.4035    0.4194    0.4198    0.4190    0.4198
           1,000       0.1340    0.1369    0.1370    0.1367    0.1370
           10,000      0.0403    0.0370    0.0370    0.0371    0.0369
Circuit C  100         0.6816    0.6984    0.6991    0.6991    0.6989
           1,000       0.4709    0.4642    0.4637    0.4646    0.4645
           10,000      0.0219    0.0214    0.0214    0.0216    0.0215

7.3.1 Results

In this section, we investigate the accuracy of the genetic algorithm when solving for the steady-state values of the three QCA circuits shown in Fig. 7.1. Simulations were again conducted on 1 nm cells spaced 2 nm apart at 300 K, and the results were compared to the steady-state polarizations listed in Table 7.1. For the genetic algorithm, the critical variable is the population size. From Table 7.2, we can see that the accuracy of the genetic algorithm is essentially independent of the number of iterations, and thus the population size is our only concern.

To determine a suitable population size, the genetic algorithm was run on each circuit for three different population sizes.
The number of iterations was fixed at M = 1,000. At each population size, the algorithm was run 1,000 times, and the average error per cell for each run was recorded and plotted in Figures 7.14-7.16. As was the case with the previous two methods, a second set of simulations was also conducted to determine the dependence on the ratio Ek/kBT. For these simulations, the number of iterations was kept fixed at M = 1,000, and the population size was kept fixed at 10,000 individuals. The results of these simulations are shown in Figure 7.17.

[Figure 7.14: Circuit A. The average error per cell versus the population size using a genetic algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.]

Compared to the previously discussed stochastic methods, for Ek >> kBT and a population size of 10,000, the genetic algorithm produces the most accurate steady-state values for Circuit A, but significantly worse (albeit still good) results for Circuits B and C. The primary difference between these circuits is that Circuit A is the only one of the three whose steady-state polarizations are entirely ±1. Thus, it appears that the genetic algorithm breaks down for circuits whose steady-state polarizations satisfy |Pss| < 1. Better results could potentially be achieved by increasing the population size. However, the computational cost of running a genetic algorithm rather than one of the other previously mentioned methods may offset this improvement. As discussed earlier in this section, computing the fitness function for each individual can be prohibitive.
And since we would require a population size of around 10,000 individuals (and likely more for large circuits), this may be a limiting factor in the usefulness of a genetic algorithm.

[Figure 7.15: Circuit B. The average error per cell versus the population size using a genetic algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.]

[Figure 7.16: Circuit C. The average error per cell versus the population size using a genetic algorithm. The maximum error for 50%, 75%, and 90% of the simulations run is also shown.]

To determine the run-time of the genetic algorithm, two sets of simulations were conducted. For the first, using a 5-cell array, the genetic algorithm was repeated 1,000 times for population sizes ranging from 100 → 1×10^6. The number of iterations was kept fixed at M = 1,000, and the run-time for each simulation was recorded and plotted in Figure 7.18.

[Figure 7.17: The average error per cell versus Ek/kBT for M = 1,000. A population size of 10,000 individuals was used.]

[Figure 7.18: The run-times of the genetic algorithm versus the population size at M = 1,000. Simulations were run 1,000 times for each population size. The dashed curve represents a best-fit curve.]

The run-times of the genetic algorithm increase non-linearly with the
population size. Given that large population sizes are needed to achieve a high degree of accuracy, this, along with the expensive run-times, represents the limiting factor of the genetic algorithm. For the second set of simulations, the genetic algorithm was run on arrays of cells ranging from 1 to 50 cells. The population size was iteratively adjusted in each simulation until an average error per cell of ±0.01 was achieved. The number of iterations was kept fixed at M = 1,000. Simulations were run 1,000 times at each array length and plotted in Figure 7.19. The dashed line represents the best-fit curve.

[Figure 7.19: The run-times of the genetic algorithm versus the number of cells in an array at M = 1,000. The population size in each simulation was adjusted until the average error per cell dropped below ±0.01. Simulations were run 1,000 times for each array length. The dashed curve represents a best-fit curve.]

Unlike the previous two methods, the run-time of the genetic algorithm increases exponentially with the number of cells. Considering this, along with the inferior accuracy of the genetic algorithm and the high computational cost of using large population sizes, the genetic algorithm does not seem well suited for efficiently and accurately approximating the steady-state polarizations of QCA circuits.

Lastly, since the genetic algorithm creates a random population of states containing both high- and low-quality solutions, the results themselves demonstrate its ability to recover from poor states; it therefore does not require the same investigation as the previously discussed methods.

7.4 Summary

Three well-known stochastic methods were considered in this chapter: the Monte Carlo method, simulated annealing, and the genetic algorithm. Populations of potential states of a QCA circuit are created during the simulations, and states are kept with a probability determined by the ratio Ek/kBT. The resulting population forms a Boltzmann distribution of states which can be used to approximate the steady-state polarizations of the circuit. The population of states in the Monte Carlo and simulated annealing methods is created "on-the-fly" using the Metropolis procedure, where new states are generated from the current state. In contrast, the genetic algorithm randomly generates a population of potential states, and updates this population with better candidates as the simulation progresses. The latter process is considerably more computationally expensive, and is therefore limited to circuits which can be adequately solved using a small population of states.

A quick inspection of the results from this chapter suggests that the simulated annealing algorithm performs marginally better than the other considered methods when finding the steady-state values of a QCA circuit. In addition to producing the most accurate results, it also demonstrated the most robustness to changes in Ek/kBT. Comparatively, the other two methods were only capable of producing accurate results when Ek ≳ 9.5 kBT, which requires an Ek/kBT approximately 25% larger than when using simulated annealing. Furthermore, the complexity of the simulated annealing algorithm is linear, which means that even for large circuits the algorithm should complete within a reasonable time frame.

The primary drawback of the simulated annealing algorithm, however, is that it must be seeded with a good initial guess. We define a good guess by the number of cells that are initially assigned the correct polarization. To this end, we have implemented a simple logic simulator that approximates the polarization of each cell based on the set of input drivers.
This is sufficient for providing the simulation with a good initial solution.

Chapter 8

Results

In this chapter, we use the ODEs given in Eqs. (6.9)-(6.13) to solve for the correlated dynamics of QCA circuits. We use the relaxation-time approximation described in Section 5.2 to model the interaction between the cells and the environment, and approximate the steady-state values of our system operators using the simulated annealing method described in Chapter 7. We refer to this method as the two-point correlation (2PC) method. The proposed method was implemented as an upgrade to the current coherence-vector simulation engine in the widely used simulation tool QCADesigner [48, 50].

8.1 Justifying Approximations

As a first simulation, we simulated the dynamic response of a QCA wire using the two-point correlation method. We conducted the simulation for wires of varying length and compared the results to those predicted by the ICHA and a full-basis quantum mechanical treatment. To maintain consistency, the Dormand-Prince method was used to compute the solution for all three treatments. Figure 8.1 shows the polarization of the output cell for each wire using each of the three computational methods. As expected, the ICHA is only accurate up to a single cell. As additional cells are added to the wire, the correlated behaviour of the cells goes unaccounted for in the ICHA, and thus it is unable to predict the dynamics within any desired level of accuracy. Figure 8.2(a) compares the solutions produced by the ICHA and the full quantum mechanical treatment, and shows the error in the predicted output for each wire. As the length of the wire increases, so does the error produced by the ICHA. Comparatively, the 2PC method, which includes pair correlations, is able to predict the polarization of the output cell to within ≈ 0.15% for all of the simulated wires (see Figure 8.2(b)). The fact that the 2PC method is able to predict the dynamic behaviour of the two-cell wire is expected.
However, that the predicted dynamics for longer wires match the full quantum mechanical solution so closely confirms that higher-order correlations are not necessary in order to capture the correct dynamic response of QCA wires, as acknowledged in [43]. Furthermore, as mentioned in the previous section, we have chosen to include only the Kxz and Kzz correlation terms, as the remaining seven can be approximated using the single-cell coherence-vector terms. Again, the accuracy of our solution confirms that this is indeed a reasonable approximation.

[Figure 8.1: Simulations of QCA wires of one to five cells. The outputs of each wire are shown for the different quantum mechanical treatments (full quantum mechanical solution, ICHA, and 2PC). While the solution produced by the ICHA (dashed line) diverges significantly as the length of the wire increases, the 2PC method using only 2-point correlations matches well with the full quantum mechanical solution for all wire lengths.]

[Figure 8.2: The error in the polarization of the output cell of the QCA wire when (a) using the ICHA and (b) including 2-point correlations. The difference between the 2PC method and the full quantum mechanical solution suggests that modelling QCA circuits with 2-point correlations is sufficient for calculating the dynamics of any given system.]

Earlier in this thesis, we discussed the computational benefits of considering only nearest-neighbour coupling.
By assuming that cells interact only with their nearest neighbours (as opposed to with all the cells in the circuit), the complexity of our problem reduces from O(N^2) to O(N). Thus, for a second set of simulations, we simulated a 40-cell wire and a majority gate (see Fig. 8.3) twice: first accounting for all cell-cell interactions, and then accounting only for nearest-neighbour interactions. The difference between the two solutions is plotted in Figure 8.4. The wire was simulated with an input of PD = -1, and the majority gate with inputs P1 = P2 = -1 and P3 = 1. For both the wire and the majority gate, the relatively small margin of error between the two methods suggests that the influence between distant cells is negligible. This is another important result, as it provides another way for us to reduce the overall computational requirements of solving QCA circuits.

[Figure 8.3: Two of the simulated circuits. (a) A 40-cell wire with input driver PD. (b) A majority gate with uneven input legs (inputs P1, P2, P3; output at cell 11).]

[Figure 8.4: The difference in the polarization of the output cell of a QCA wire and majority gate when modelling each with and without nearest-neighbour coupling.]

8.2 ICHA vs 2-point Correlations

The results above indicate that we can accurately simulate the dynamics of QCA circuits by including select 2-point correlations and considering only nearest-neighbour coupling. This method offers the accuracy of a full quantum mechanical simulation but with the low computational complexity of the ICHA.
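The scaling argument above can be made concrete by counting the interacting pairs each treatment must track. The following is a minimal sketch for a linear wire (the function name is hypothetical, introduced only for illustration):

```python
def interaction_pairs(n_cells, nearest_only=True):
    """Index pairs of interacting cells in an N-cell wire. All-to-all
    coupling yields N(N-1)/2 pairs, i.e. O(N^2); keeping only nearest
    neighbours leaves N-1 pairs, i.e. O(N)."""
    if nearest_only:
        return [(i, i + 1) for i in range(n_cells - 1)]
    return [(i, j) for i in range(n_cells)
            for j in range(i + 1, n_cells)]
```

For the 40-cell wire, this is 39 pairs instead of 780 — the small difference reported in Figure 8.4 is the price of that reduction.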
The next set of simulations compares both the polarizations and dynamics predicted by the 2PC method against those predicted by the ICHA.

8.2.1 Logical Outputs and Dynamics

As a first set of simulations, we simulated the 40-cell wire and majority gate shown in Figure 8.3 using the same set of inputs as before. The polarization of each cell in both circuits is plotted in Figures 8.5 and 8.6. The outputs produced by the ICHA and the 2PC method for the 40-cell wire are shown in Figures 8.5(a) and (b), respectively. The polarizations of the cells in Figure 8.5(a) switch in sequence, with the cell furthest from the driver switching last. This inherent delay in the line is an artifact of the ICHA and does not exist when modelling the wire using 2-point correlations. Secondly, the abrupt switching of the cells to a polarized state is due to the nonlinear cell-cell response predicted by the ICHA, and again, is not present in the 2PC method.

[Figure 8.5: Polarization of each cell in a 40-cell wire. (a) Modelled using the ICHA. (b) Modelled using the 2PC method. While the ICHA is able to predict the correct logical output, it is unable to accurately predict the dynamics of the wire.]

Figures 8.6(a) and (b) show the polarization of each cell of the majority gate using each method. Again, the abrupt switching and delays present in the ICHA model are a function of the nonlinear cell-cell response and are not