Phase coherence in graphene

by

Mark Brian Lundeberg

B.Sc., The University of Northern British Columbia, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Physics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 2013

© Mark Brian Lundeberg 2013

Abstract

The phase coherent properties of electrons in low temperature graphene are measured and analyzed. I demonstrate that graphene is able to coherently transport spin-polarized electrons over micrometer distances, and prove that magnetic defects in the graphene sheet are responsible for limiting spin transport over longer distances. It is shown that these magnetic defects are also partly responsible for the high decoherence (phase loss) observed at low temperature, and that another (as yet unknown) non-magnetic mechanism is required to explain the remainder.

Similar measurements are used to probe and characterize the size scales of the roughness of the graphene sheet. The effects of an in-plane magnetic field threading through the rough graphene sheet are analogous to the effects of the built-in strain; I argue that the observed large valley-dependent scattering rates are a consequence of this built-in strain.

I also describe an original, robust technique for extracting coherence information from conductance fluctuations. The technique is demonstrated in experiments on graphene, used to efficiently detect the presence of magnetic defects. This new approach to studying phase coherence can be easily carried over to other mesoscopic semiconducting systems.

Preface

This thesis describes work associated with the following publications:

[1] Lundeberg, M. B. and Folk, J. A., Spin-resolved quantum interference in graphene. Nature Physics 5, 894 (2009). [Chapter 5]

[2] Lundeberg, M. B. and Folk, J. A., Rippled Graphene in an In-Plane Magnetic Field: Effects of a Random Vector Potential. Phys. Rev. Lett. 105, 146804 (2010).
[Chapter 6, Sec. 8.1, Sec. 8.2]

[3] Lundeberg, M. B., Renard, J., Folk, J. A., Conductance fluctuations in quasi-two-dimensional systems: A practical view. Phys. Rev. B 86, 205413 (2012). [Secs. 8.3, D.3.1, Appx. H]

[4] Lundeberg, M. B., Yang, R., Renard, J., Folk, J. A., Defect-mediated spin relaxation and dephasing in graphene. Phys. Rev. Lett. 110, 156601 (2013). [Chapter 7]

The initial direction of research (leading to Ref. [1]) was identified by my supervisor, Prof. Folk; after that I identified the research directions leading to Refs. [2–4].

I developed the graphene fabrication procedure and built the graphene devices used in all three experiments [1, 2, 4]. I helped design and build the low temperature electrical filtering and wire cooling modules used in the final experiment [4]. I performed all of the data taking in the first two experiments. In the third experiment [4], J. Renard took preliminary data that were ultimately not published; I acquired the first half of the published data, and R. Yang the second half.

The theoretical publication, Ref. [3], grew out of discussions with J. Renard. The derivations and numerical simulation work were done by me.

For all four publications, I performed the data analyses and developed the interpretations and conclusions. The drafts were prepared by me and edited together with the listed coauthors.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
1 Introduction
2 Principles of electronic transport in graphene
  2.1 Hamiltonian of an electron in ideal graphene
    2.1.1 Chemistry and lattice
    2.1.2 The tight binding model
    2.1.3 Linear dispersion and the valley approximation
    2.1.4 Isospin, pseudospin, and spin
    2.1.5 Minor terms
    2.1.6 Energy spectrum of a graphene sheet
  2.2 Potentials and disorder
    2.2.1 Electric potentials (smooth)
    2.2.2 Magnetic fields
    2.2.3 Valley-coupled disorder
    2.2.4 Spin-coupled disorder
  2.3 Graphene as a two-dimensional electron gas
    2.3.1 Electron transport at the Fermi surface
    2.3.2 Graphene doping and gating
  2.4 Semiclassical scattering, diffusion, and resistivity
    2.4.1 Einstein relation (Kubo conductivity)
    2.4.2 Semiclassical scattering-diffusion
    2.4.3 An example: scattering from a weak scalar potential
    2.4.4 What is the main source of scattering in graphene?
    2.4.5 Pseudospin and spin relaxation
3 Theory of coherent effects
  3.1 Basics of mesoscopic theory
    3.1.1 Ensemble averages
    3.1.2 Diffusons and the Diffuson approximation
    3.1.3 Cooperons and quantum crossings
    3.1.4 Dephasing
    3.1.5 Geometry dependence and dimensionality
  3.2 Weak localization (WL)
    3.2.1 Sensitivity to perpendicular magnetic fields
    3.2.2 Formulas for quasi-2D case
    3.2.3 Formulas for quasi-1D case
  3.3 Conductance fluctuations
    3.3.1 General two-conductance correlations
    3.3.2 General many-conductance correlations
  3.4 Coherence with spin, isospin, and pseudospin
    3.4.1 Isospin and pseudospin dephasing
    3.4.2 Spin dephasing
4 Introduction to experiments
  4.1 Sample preparation
  4.2 Low-temperature measurement setup
  4.3 Electrical measurements
5 Experiment: Spin-splitting
  5.1 Experimental setup
  5.2 Direct observation of spin splitting
  5.3 Autocorrelation analysis of splitting
  5.4 Minimum density of states
  5.5 Hints of ripples in graphene
  5.6 Retrospective
  5.7 Conclusion
6 Experiment: Ripples and random vector potentials
  6.1 Experimental setup
  6.2 Dephasing effect of in-plane field
  6.3 Semiclassical scattering from in-plane field
    6.3.1 Other sources of in-plane magnetoconductance
  6.4 Extraction of ripple size
  6.5 Analogy to strain-related dephasing
  6.6 Retrospective
  6.7 Conclusion
7 Experiment: The limits of coherence
  7.1 Experimental setup
  7.2 Dephasing of conductance fluctuations (CF)
  7.3 Differences in dephasing for CF and WL
  7.4 Dephasing of weak localization (WL)
  7.5 Conclusions
8 New theoretical results
  8.1 Anisotropic magnetoresistance
    8.1.1 Comparison to literature
  8.2 Random-strain dephasing in graphene
  8.3 Analysis of quasi-2D conductance fluctuation correlations
    8.3.1 How correlations are probed in experiment
    8.3.2 Theoretical quasi-2D correlation
    8.3.3 Field and energy correlation lengths
    8.3.4 Statistical errors in autocorrelations
    8.3.5 Multiple dephasing modes
    8.3.6 Notes on the quasi-1D case
    8.3.7 Comparison to literature
9 Outlook
  9.1 Mysteries
    9.1.1 Large anisotropy factor
    9.1.2 Wide energy correlations at low temperature
    9.1.3 Late turn-off of magnetic decoherence at low temperature
    9.1.4 Non-magnetic decoherence saturation
  9.2 Ideas
    9.2.1 Other ways to extract coherence information
    9.2.2 Controlling decoherence
10 Conclusion
Bibliography
A Non-magnetic disorder in graphene
  A.1 On-site energy variations
  A.2 Hopping energy variations (strains)
B Experimental details
  B.1 Sample preparation
  B.2 Measurements
    B.2.1 Field alignment
    B.2.2 Conductance measurement
    B.2.3 Device overheating by bias
  B.3 Low-temperature apparatus
    B.3.1 RC filter boards
    B.3.2 Shield
    B.3.3 Microwave attenuator
    B.3.4 Cold finger
    B.3.5 Magnetic heating of cold finger materials
C Detailed interpretation of Chapter 7 data
  C.1 Background subtraction
  C.2 Correspondence between inflection point and decoherence rate
  C.3 Quasi-2D CF correlations in an in-plane magnetic field
  C.4 Spin-orbit interactions
D Computation of conductance fluctuation correlations
  D.1 The effect of thermal smearing
  D.2 Decomposition into dephasing modes
  D.3 Zero-temperature correlations (single-mode)
    D.3.1 Quasi-2D case
    D.3.2 Quasi-1D case
  D.4 Time-correlations and measurement averaging
E Spin/pseudospin dephasing modes
  E.1 Internal-state dephasing (general)
    E.1.1 Internal-state dephasing of Diffusons
    E.1.2 Internal-state dephasing of Cooperons
    E.1.3 Dephasing and coherent precession
  E.2 Spin dephasing (low field)
    E.2.1 Modelling random spin rotations
    E.2.2 Spin-orbit interaction
    E.2.3 Other spin-orbit mechanisms
    E.2.4 Unpolarized magnetic defects
  E.3 Isospin and pseudospin dephasing
    E.3.1 Isospin dephasing
    E.3.2 Pseudospin dephasing
  E.4 Dephasing in high magnetic fields
    E.4.1 Polarized dynamic magnetic defects
    E.4.2 Polarized "static" magnetic defects
    E.4.3 Ripples/random magnetic field dephasing
F Digamma function reference
  F.1 Definitions
  F.2 Occurrence
  F.3 Properties and identities
  F.4 Specific values
  F.5 Approximation and computation
G Generating correlated random data
  G.1 General matrix method
  G.2 Stationary processes (Fourier method)
    G.2.1 A technicality: periodic boundary conditions
  G.3 Simulating mixed Cooperon and Diffuson correlations
  G.4 Semi-stationary processes
  G.5 Adding on more data
H Theory of statistical errors in autocorrelation functions
  H.1 Definitions
  H.2 Background subtraction (type 2) errors
  H.3 Random (type 3) errors

List of Tables

6.1 Device parameters.
8.1 Asymptotic field and energy correlation lengths.
B.1 Curie constants for various cryogenic materials.
E.1 Dephasing rates and eigenmodes due to spin-orbit interaction.
E.2 Dephasing rates and eigenmodes due to unpolarized magnetic defects.
E.3 Dephasing modes with polarized magnetic defects.

List of Figures

2.1 Graphene lattice structure.
2.2 Electron dispersion relation of graphene's π and π* electrons.
2.3 K and K′ points of graphene.
2.4 Filling of electronic states in doped graphene.
3.1 Paths contributing to the product ψψ*.
3.2 Graphical definition of the Diffuson correlation.
3.3 The Diffuson approximation.
3.4 Example of a non-identical path pair that contributes to ψψ*.
3.5 Graphical definition of the Cooperon correlation.
3.6 Outline of CF diagrams.
3.7 The four diagrams contributing to CF correlations.
3.8 Histogram of experimental conductance fluctuations.
3.9 Value of weak (anti-)localization correction for graphene in a magnetic field.
4.1 Optical microscope images of graphene device.
4.2 Photograph of chip carrier.
4.3 Cross-sectional view of low-temperature measurement setup.
4.4 Two-terminal and four-terminal conductance measurements.
5.1 Experimental setup.
5.2 Zeeman effect in graphene.
5.3 Autocorrelation extraction of spin splitting.
5.4 Magnetic field- and gate-dependence of spin splitting.
5.5 Reproduction of spin-splitting effect in a second device.
5.6 Minimum density of states near Dirac point.
5.7 Loss of time reversal symmetry due to in-plane field.
6.1 Simulation of a rippled graphene sheet.
6.2 Experimental setup.
6.3 Weak localization magnetoconductance measured at different in-plane fields.
6.4 Increase in dephasing rate from B∥.
6.5 Anisotropic in-plane magnetoresistivity.
7.1 Experimental setup.
7.2 Conductance fluctuations and autocorrelations.
7.3 Dependence of τ⁻¹CF on temperature.
7.4 Dependence of τ⁻¹CF on rms total magnetic field.
7.5 Dependence of τ⁻¹mag on carrier density.
7.6 Weak localization as a function of temperature.
7.7 Dependence of τ⁻¹WL on B∥².
8.1 Dephasing rate dependence of several characteristic scales of the quasi-2D CF correlation function in magnetic field.
8.2 Mapping of several energy correlation lengths to dephasing rate.
8.3 Example of statistical errors in CF autocorrelation.
8.4 Guidelines for minimum total scan length in CF experiments.
8.5 Guidelines for minimum background field-smoothing length in CF experiments.
. . . . . . . . . . . . . . . 1058.6 Effect of suppressed dephasing modes on CF correlations. . . 1068.7 Effect on field correlation?s inflection point from the combi-nation of two CF modes. . . . . . . . . . . . . . . . . . . . . . 1079.1 Energy autocorrelations of CF at very low temperature. . . . 111B.1 Bias overheating effect as seen in CF variance . . . . . . . . . 133B.2 Electron-electron interaction contribution to conductivity . . 134B.3 Energy autocorrelations of CF. . . . . . . . . . . . . . . . . . 136B.4 Signal filtering topology for each sample wire in the cryostat. 137C.1 Simulated conductance fluctuation correlations. . . . . . . . . 152C.2 Comparison of the experimental ??1CF rate with simulation. . . 153C.3 Theoretical dependence of ??1WL on B? with various forms ofspin disorder. . . . . . . . . . . . . . . . . . . . . . . . . . . . 155G.1 Example of correlated data extrapolation. . . . . . . . . . . . 185xiChapter 1IntroductionGraphene is the two-dimensional crystalline form of carbon, its atoms ar-ranged in a honeycomb lattice. Since 2004, when it was first experimen-tally isolated and electrically characterized[5], graphene has attracted theattention of a wide segment of the condensed matter physics community.Theoretically, the low kinetic energy electrons in graphene are interesting inthat they behave as massless Dirac fermions, analogous to hyper-relativisticelectrons or positrons[6]. The electronic phenomena in graphene thereforegain unusual features compared to the massive Schr?dinger particles seenin ordinary semiconductors. Practically, graphene is a transparent high-mobility semiconductor with potential applications in high-speed analogtransistors[7, 8], touch screens[9], light detectors[10], among other things.A major theme in graphene research has been understanding and con-trolling disorder. 
In particular, it was expected that electrons in graphene should maintain their spin orientation for a long time: the low atomic number of carbon and the symmetry of graphene result in a small spin-orbit interaction for mobile electrons [11, 12], and the interactions between electrons and nuclear spins are extremely weak [13]. For this reason, an exciting future application of graphene would be as a medium for spintronic circuits, where information is carried by the spin of an electron. The fact that real graphene devices turned out to have significant spin relaxation was therefore a surprise [14, 15].

Measurements of phase coherent transport have long been used to characterize disorder in semiconductors and metals [16–19]. Research in graphene has continued this tradition with detailed studies of coherent localization [20–33] and coherent conductance fluctuations [34–37]. These studies have revealed important information about the way in which disorder couples to the special symmetries of electrons in graphene, such as their "valley" degree of freedom (also known as pseudospin).

My thesis work falls within this program of using experimental measurements of phase coherent effects to analyze disorder in graphene. Two main types of disorder are examined: disorder related to the ripples (roughness) of the graphene sheet, and disorder related to the spin of the electrons. I investigated how the ripples can be quantified by coherence experiments in magnetic fields, and how ripple-induced strains in the graphene have a significant symmetry-breaking effect. I also demonstrated conclusively that, although spin-orbit interactions do not play a significant role in graphene spin physics, there are magnetic defects in graphene that are responsible for the spin relaxation.
At the moment, these magnetic defects pose a significant roadblock to spintronic and coherent devices in graphene.

Structure of this thesis

After this introduction, the first two chapters review the basic physics of electronic transport in graphene. Chapter 2 starts with a derivation of the quantum "equation of motion" (Hamiltonian) of a delocalized electron inside graphene. I describe the relevant ways in which environmental factors induce disorder that disturbs the electron's motion. Standard concepts from semiconductor physics (band structure, doping, Fermi level, etc.) are reviewed insofar as they apply to graphene. To give a first understanding of electrical conduction, the electrical conductance of graphene is derived in a classical approximation.

Chapter 3 delves deeper into the physics that determines electrical conductance, introducing the phase coherent effects that give corrections to the classical approximation. This chapter reviews two such phase coherent effects that appear unmistakably in the electrical conductance, weak localization (WL) and conductance fluctuations (CF), and remarks on basic principles and symmetries that are relevant to experiments. Weak localization and conductance fluctuations are used as tools to probe disorder in the following experiments.

The experiments in this thesis involved measurements of electrical conductance (G) of microfabricated graphene flakes that were capacitively coupled to a backgate (voltage VBG), over a range of very low temperatures (T) and under the influence of a tilted magnetic field (vector B, consisting of in-plane B∥ and out-of-plane B⊥ components).
Chapter 4 introduces the experimental portion of this thesis by reviewing the experimental apparatus used to produce the low temperatures and magnetic fields (common to all three experiments) and describing the electrical measurements.

The first experiment, Chapter 5, was a study of the evolution of conductance fluctuations in the presence of an in-plane magnetic field, focussing on the direct coupling of the magnetic field to the electron spin. This study demonstrated the possibility of interferometric spin transport in graphene devices, at least over the micron scale.

Chapter 6 documents the follow-up experiment, looking at effects of the in-plane field on weak localization due to the coupling of the field to the electrons' motion (rather than their spin). This coupling is only possible due to the rippled texture of real graphene devices (where the graphene is on a rough substrate), and so this study yielded information about the roughness of the graphene sheet.

The last experiment, described in Chapter 7, was a precision study of the phase coherence lifetime of electrons in graphene, using both weak localization and conductance fluctuations as probes. This experiment provided conclusive evidence that there are magnetic defects in graphene that cause spin flips in the mobile electrons, as well as showing that these magnetic defects are (at least partly) responsible for the lack of long phase coherence times at very low temperature.

Associated with these experiments are several new theoretical results, discussed in Chapter 8. The majority of Chapter 8 is concerned with the proper statistical analysis of conductance fluctuations, which enables experiments of the type in Ch. 7. Also discussed are short but important derivations relating to the interpretation of Ch.
6: modelling the anisotropic scattering from a random magnetic field, and correctly computing the symmetry-breaking rate from random strains.

Chapter 9 gives an outlook on prospects for future research along these lines.

Chapter 2
Principles of electronic transport in graphene

We begin by describing the Hamiltonian of an electron in an ideal graphene sheet (Sec. 2.1), which can be quickly derived from basic microscopic considerations. As a consequence of the graphene lattice symmetry, the electrons in graphene effectively behave as massless particles and gain two spin-like characteristics, isospin and pseudospin, in addition to their ordinary spin. These three internal spin characteristics, combined with their associated symmetries, make phase coherence in graphene a complex and interesting subject (more on this in the next chapter).

The graphene devices studied in this thesis are disordered, mostly due to effects from the environment around the graphene. Section 2.2 discusses the ways of classifying disorder in graphene (namely, by its coupling to isospin/pseudospin/spin), and describes the microscopic mechanisms that can generate these different types of disorder. The distinctions between these disorder classes will be seen clearly in the coherent effects described in later chapters.

Graphene is an example of a two-dimensional electron system (2DES), akin to the active areas of the field effect transistors in conventional silicon and gallium arsenide structures. Section 2.3 describes the usual 2DES properties (carrier density, Fermi level, etc.) as they appear in graphene, and how they can be controlled by a gate voltage. As a result of the linear dispersion relation in graphene, some of these properties differ from those of a traditional 2DES.

Finally, Sec. 2.4 describes how the disorder in graphene gives rise to scattering and resistivity. A simple semiclassical view is presented, leading towards the coherent effects that will be described in the following chapter.
2.1 Hamiltonian of an electron in ideal graphene

2.1.1 Chemistry and lattice

Graphene is the planar (two-dimensional) allotrope of carbon. The crystal structure of graphene can be represented as a Bravais lattice by placing two carbon atoms in the unit cell (Fig. 2.1). The lattice vectors are

    a₁ = (√3/2) a x̂ + (3/2) a ŷ,   a₂ = −(√3/2) a x̂ + (3/2) a ŷ,   (2.1)

where a = 0.142 nm is the spacing between carbon neighbours. The distance between adjacent unit cell centers is |a₁| = |a₂| = √3 a = 0.246 nm.

[Figure 2.1: Graphene lattice structure. Carbon atom sites are shown as circles, and nearest-neighbor bonds as thick lines. The dashed rhombus is the boundary of the unit cell at location r.]

With this bonding structure, the carbon atom's outer orbitals {2s, 2p³} are hybridized into a {2sp², 2pz} configuration. Each atom contributes four electrons to these outer orbitals: one electron for each of the three sp² orbitals and one electron to the pz orbital. Neighbouring carbon atoms in graphene form a strong σ covalent bond due to the symmetric combination of their sp² orbitals. Since each neighbour contributes one electron to the bond, these σ bonds are filled and do not transport charge.¹ This leaves one pz orbital and one free electron per carbon atom; the neighbouring pz orbitals hybridize to form π and π* bonds. The electrons in the pz orbitals transport charge and are responsible for the electrical properties of graphene.

¹ The antisymmetric σ* band is high energy and hence unfilled.

2.1.2 The tight binding model

The simplest Hamiltonian for modelling mobile electrons in graphene is found by constructing a state-space consisting of only two states per carbon atom: spin-up and spin-down in pz. We then allow for hybridization between the pz orbitals of adjacent atoms, to represent the π and π* bonding. For brevity we omit the spin degree of freedom in the derivation below.
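As a quick numerical aside (not part of the thesis), the unit-cell spacing |a₁| = √3 a quoted above follows directly from the lattice vectors of Eq. (2.1); a minimal check in Python, using the nearest-neighbour spacing a = 0.142 nm from the text:

```python
import math

a = 0.142  # nearest-neighbour carbon spacing in nm (value from the text)

# Lattice vectors of Eq. (2.1):
#   a1 = (sqrt(3)/2) a x + (3/2) a y,   a2 = -(sqrt(3)/2) a x + (3/2) a y
a1 = (math.sqrt(3) / 2 * a, 3 / 2 * a)
a2 = (-math.sqrt(3) / 2 * a, 3 / 2 * a)

# Both vectors have length sqrt(3)*a, the distance between unit cell centers.
print(round(math.hypot(*a1), 3))                         # -> 0.246
print(math.isclose(math.hypot(*a2), math.sqrt(3) * a))   # -> True
```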
Each unit cell in the Bravais lattice (labelled by its position r) contains two pz orbital states, |r,A⟩ at the A atom, and |r,B⟩ at the B atom (four states total, once spin degeneracy is included).

We begin the derivation of band structure by defining two candidate lattice plane waves, built out of the orbital states |r,A⟩ and |r,B⟩:²

    |k,A⟩ = Σ_r (1/√N) e^{ik·r} |r,A⟩,   |k,B⟩ = Σ_r (1/√N) e^{ik·(r + a ŷ)} |r,B⟩,   (2.2)

where N is the number of unit cells in the graphene. Here, the notation Σ_r indicates a summation over all unit cells, labelled by their position r. The plane waves here are not energy eigenstates, but they do make a convenient basis to represent the eigenstates.

Next, we allow for the electrons to hop from atom to atom, due to the overlap of the pz orbitals of nearby atoms. This is modelled simply by considering a hopping term −t between nearest-neighbor pz orbitals. Since each atom has three nearest neighbours, each of which is in the opposite sublattice, the Hamiltonian is

    H₀ = E₀ − t Σ_r |r,A⟩ [ ⟨r,B| + ⟨r − a₁,B| + ⟨r − a₂,B| ] + h.c.   (2.3)

The magnitude of this hopping term is about t ≈ 3 eV. Here E₀ is the potential energy, which is influenced by the orbital binding energy as well as externally induced electrostatic potentials, discussed further in Sec. 2.2.1.

² In some literature a different basis, without the +a ŷ term in the |k,B⟩ state, is used. This gives subtly different expressions for some operators, so one should be careful when mixing calculations from different bases. [38]

[Figure 2.2: Electron dispersion relation of graphene's π and π* electrons, (2.7), plotted over k space. The transparent plane indicates E = E₀, which is the energy of the Dirac point.]

From translational symmetry we have ⟨k,i|H|k′,j⟩ = 0 when k ≠ k′ (i and j here each refer to either sublattice A or B).
This means that we can diagonalize this Hamiltonian in the momentum basis, as

    H₀ = ∑_k ∑_{ij∈{AA,AB,BA,BB}} H_ij(k) |k,i⟩⟨k,j|.    (2.4)

From (2.2) and (2.3), these matrix elements are

    H_AB(k) = ⟨k,A| H₀ |k,B⟩
            = −t [ e^{i k_y a} + e^{−i k_y a/2 + i√3 k_x a/2} + e^{−i k_y a/2 − i√3 k_x a/2} ]
            = −t [ e^{i k_y a} + 2 e^{−i k_y a/2} cos(√3 k_x a/2) ]
    H_BA(k) = −t [ e^{−i k_y a} + 2 e^{i k_y a/2} cos(√3 k_x a/2) ]
    H_AA(k) = H_BB(k) = E₀.    (2.5)

From this momentum space representation we can easily find the energy eigenvalues of H₀, by diagonalizing the 2×2 matrix at each k:

    E(k) = E₀ ± |H_AB(k)|    (2.6)
         = E₀ ± t √[ 1 + 4 cos²(√3 k_x a/2) + 4 cos(√3 k_x a/2) cos(3 k_y a/2) ].    (2.7)

This expression is the idealized dispersion relation of the pz electrons in graphene, and is plotted in Fig. 2.2. There are two symmetrical bands: one with positive energy (π*-bonding), and the other with negative energy (π-bonding).

Although we have found the dispersion relation for a large range of energy, transport experiments are in fact only concerned with the properties of electrons very near the Fermi level. Where does the Fermi level lie in graphene? When the graphene is charge neutral, each carbon atom contributes one electron and one pz orbital. Each orbital has two available spin states, however, and so the electrons fill precisely half of the available states. Looking at the states in Fig. 2.2, we see that this will fill all states with energy below E₀, leaving the positive energy states unfilled. Thus, at charge neutrality we have E₀ equal to the Fermi level. Although we can vary E₀ by use of a gate voltage (more on this in Sec. 2.3.2), in practice the difference E_F − E₀ is never more than 0.3 eV or so (small compared to the |t| ≈ 3 eV scale of (2.7)). Thus, we are only concerned with the properties of electrons for small energies |E − E₀| ≪ |t|.
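As a sanity check on (2.7), the following numerical sketch (mine, not from the thesis; t and a set to 1) confirms that the bands reach E₀ ∓ 3t at k = 0, that they touch at the K point discussed in the next section, and that the slope near the touching point recovers the velocity v = 3ta/2ℏ quoted later:

```python
import math

def bands(kx, ky, t=1.0, E0=0.0):
    """Dispersion of Eq. (2.7) with a = 1; returns (E-, E+)."""
    u = math.cos(math.sqrt(3) * kx / 2)
    f = 1 + 4 * u * u + 4 * u * math.cos(3 * ky / 2)
    w = math.sqrt(max(f, 0.0))  # clip tiny negative rounding error at the K point
    return (E0 - t * w, E0 + t * w)

print(bands(0.0, 0.0))  # band extremes at k = 0: (-3.0, 3.0), i.e. E0 -/+ 3t

# At the K point, kx = 4*pi/(3*sqrt(3)) in units of 1/a: the bands touch at E0
Kx = 4 * math.pi / (3 * math.sqrt(3))
Em, Ep = bands(Kx, 0.0)
print(abs(Em) < 1e-9 and abs(Ep) < 1e-9)  # True

# Slope near K tends to 3t/2 (in units of t*a), i.e. v = 3ta/2hbar
q = 1e-6
print(round(bands(Kx + q, 0.0)[1] / q, 3))  # 1.5
```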
This sets the stage for the low-energy approximation described in the next section, which will be used in the remainder of this thesis.

2.1.3 Linear dispersion and the valley approximation

At certain points in graphene's dispersion relation, the positive and negative energy bands meet at an energy level known as the Dirac point (E = E₀). The locations in k-space where this meeting occurs are referred to as the K points. The collection of states nearby the K points (near the Dirac point) are known as valleys, a term borrowed from conventional semiconductors. Since the Fermi level never moves very far from the Dirac point, transport in graphene is exclusively concerned with electrons that are in the valleys.

How many distinct K points are there? Due to lattice periodicity, the states at k are identical (at least, up to phase) to the states found at k + G₁ and k + G₂, where G₁ = (2π/√3a) x̂ + (2π/3a) ŷ and G₂ = (2π/√3a) x̂ − (2π/3a) ŷ in graphene. To avoid recounting identical states we can mark out a Brillouin zone, a shape which can be tiled to cover k-space by shifts G₁ and G₂. The Brillouin zone includes each distinct k value exactly once. Figure 2.3 replots the dispersion relation (2.7) over a Brillouin zone centered away from the origin. As can be seen, there are in fact two distinct K points, which are called K and K′. Although distinct, the K and K′ points look quite similar, a symmetry that we will explore in more detail in this section.

To examine the physics in the valleys, we can rewrite the matrix elements

Figure 2.3: K and K′ points of graphene. Solid lines are a contour plot of |E(k) − E₀|/|t| from (2.7).
The dashed unshaded area indicates a Brillouin zone shape centered away from zero; this area contains each unique k exactly once. The locations of the K and K′ vectors, defined in (2.9), are noted.

(2.5) for momenta p defined with respect to the valley centers. To do so, we define a new basis of plane waves based on the states in the valleys. We define these in both bra-ket and matrix notation (both are convenient) to represent the valley and sublattice freedom of these states:

    |p,KA⟩  ≡ (1,0,0,0)ᵀ = ∑_r (1/√N) e^{i(K+p/ℏ)·r} |r,A⟩
    |p,KB⟩  ≡ (0,1,0,0)ᵀ = ∑_r (1/√N) e^{i(K+p/ℏ)·(r+aŷ)} |r,B⟩
    |p,K′B⟩ ≡ (0,0,1,0)ᵀ = ∑_r (1/√N) e^{i(K′+p/ℏ)·(r+aŷ)} |r,B⟩
    |p,K′A⟩ ≡ (0,0,0,1)ᵀ = ∑_r (1/√N) e^{i(K′+p/ℏ)·r} |r,A⟩    (2.8)

where p is the electron momentum defined with respect to the valley center, and K and K′ are reciprocal space vectors of the K and K′ points respectively. Throughout this thesis, we fix the locations of the K and K′ vectors to be³

    K = (1/3)(G₁ + G₂) = (4π/(3√3 a)) x̂,
    K′ = 2K = (8π/(3√3 a)) x̂.    (2.9)

We can Taylor expand the matrix elements (2.5) in this new basis, for small |p| ≪ ℏ/a, to obtain a linearized low-energy Hamiltonian,

    H₀ = ∑_p |p⟩ [ E₀            v(p_x − i p_y)   0               0
                   v(p_x + i p_y)   E₀            0               0
                   0              0               E₀              v(−p_x + i p_y)
                   0              0               v(−p_x − i p_y)    E₀          ] ⟨p|,    (2.10)

to first order, where we have defined the number v = (3/2) t a/ℏ ≈ 1.0×10⁶ m/s. The eigenvalues of the low-energy Hamiltonian are simply

    E(p) = E₀ ± v|p|.    (2.11)

This is the famous linear (massless) dispersion relation of low-energy electrons in graphene. The meaning of the constant v is clear here: it is the speed (group velocity) of the electrons.

³ Note: due to our choice of B sublattice phase in (2.2), some operators may be expressed differently if we make a different choice of K and K′.[38]

The eigenstates of (2.10) are easily obtained. For positive energies E = E₀ + v|p| we have eigenstates |p,K+⟩ and |p,K′+⟩ at the K and K′ points respectively:

    |p,K+⟩ = (1/√2)[ e^{−iθ_p/2} |p,KA⟩ + e^{iθ_p/2} |p,KB⟩ ]
    |p,K′+⟩
           = (1/√2)[ e^{−iθ_p/2} |p,K′B⟩ − e^{iθ_p/2} |p,K′A⟩ ]    (2.12)

where θ_p is the direction of p, such that p = |p|(cos θ_p x̂ + sin θ_p ŷ). For negative energies E = E₀ − v|p| we have similarly

    |p,K−⟩  = (1/√2)[ e^{−iθ_p/2} |p,KA⟩ − e^{iθ_p/2} |p,KB⟩ ]
    |p,K′−⟩ = (1/√2)[ e^{−iθ_p/2} |p,K′B⟩ + e^{iθ_p/2} |p,K′A⟩ ].    (2.13)

We will be using this linearized approximation as the standard point of view throughout this thesis. It is worth noting that our interpretation of position r will be taking on a slightly relaxed meaning. Up until this point, r was a discrete lattice quantity corresponding to the location of the A atom in a unit cell (Fig. 2.1); however, now that we have taken |p| ≪ ℏ/a we cannot localize our wave functions down to individual unit cells. As a result, we will take the attitude that r is effectively a continuously varying quantity, though keeping in mind that its literal meaning is the location of a unit cell in the tight binding model.

2.1.4 Isospin, pseudospin, and spin

It is common in the literature to represent (2.10) in short form as H₀ = v σᵛ_z σᴸ · p, where σᵛ_{x,y,z} and σᴸ_{x,y,z} are Pauli matrices defined on the valley (K,K′) and sublattice (A,B or B,A) spaces, respectively. This Pauli matrix structure gives the first hint of spin-like properties; however, in the Hamiltonian the two matrices are mixed together and it is not obvious whether σᵛ or σᴸ represent actual degrees of freedom.

The spin-like properties of the Hamiltonian are clarified by defining matrix sets Σ (isospin) and Λ (pseudospin), given by:[21]

    Σ_x = [ 0  1  0  0     Σ_y = [ 0 −i  0  0     Σ_z = [ 1  0  0  0
            1  0  0  0             i  0  0  0             0 −1  0  0
            0  0  0 −1             0  0  0  i             0  0  1  0
            0  0 −1  0 ],          0  0 −i  0 ],          0  0  0 −1 ],

    Λ_x = [ 0  0  1  0     Λ_y = [ 0  0 −i  0     Λ_z = [ 1  0  0  0
            0  0  0 −1             0  0  0  i             0  1  0  0
            1  0  0  0             i  0  0  0             0  0 −1  0
            0 −1  0  0 ],          0 −i  0  0 ],          0  0  0 −1 ].    (2.14)

The matrices Σ_{x,y,z} and Λ_{x,y,z} form mutually commuting Pauli algebras: [Σ_i, Λ_j] = 0, [Σ_i, Σ_j] = 2i ε_ijk Σ_k, and [Λ_i, Λ_j] = 2i ε_ijk Λ_k. With these new matrices, the Hamiltonian becomes simply H₀ = E₀ + v(Σ_x p_x + Σ_y p_y) = E₀ + v Σ · p.
(2.15)

The isospin Σ (representing A,B sublattice freedom) is thus locked to the direction of momentum: Σ parallel to p for states with positive kinetic energy, and Σ antiparallel to p for negative kinetic energy. The isospin leads to some unusual physics for electrons in graphene. Electrons that travel in a simple loop must rotate their isospin by ±360° (a unitary transformation of e^{iπΣ_z} = −1), thereby acquiring a geometric phase π. This phase shift modifies coherent effects such as weak localization (Sec. 3.4), and the formation of Landau levels[39]. Another feature of isospin is that it inhibits direct back-scattering from smooth scalar potentials (Sec. 2.4.3).

The pseudospin Λ (representing K, K′ valley freedom), on the other hand, does not appear in the idealized Hamiltonian (2.15) at all. Pseudospin is thus a true degree of freedom for an electron, analogous to its real spin; electrons can be pseudospin up (|K⟩), down (|K′⟩), right (the superposition (1/√2)(|K⟩ + |K′⟩)), and so on. The pseudospin plays an important role in the phase coherent properties of graphene, because of the prevalence of pseudospin-coupled disorder: intervalley scattering (pseudospin-flips) counteracts the antilocalization from isospin, leading to the appearance of ordinary localization at low magnetic fields (Sec. 3.4).

The real spin, of course, is another degree of freedom of the electrons, represented by σ, a third Pauli algebra, orthogonal to Σ and Λ. Pseudospin and spin together give a 4-fold degeneracy to a state at a given E, p. How does the pseudospin differ from the real spin? Most importantly,

• Spin is associated with a magnetic moment and thus couples directly to magnetic fields (Sec. 2.2.2), whereas pseudospin is a non-magnetic degree of freedom.

• Although pseudospin is unaffected by ordinary smooth disorder, it does couple to certain types of microscopic disorder that turn out to be common in real graphene devices (Sec. 2.2).
Disorder that affects the real spin is much weaker (Chapter 7).

2.1.5 Minor terms

Though the idealized Hamiltonian of graphene electrons, H₀ = v Σ · p, captures the primary intrinsic characteristics of graphene, we have omitted other terms that can become important in some situations. This section briefly describes non-disorder terms that appear in the full Hamiltonian. In real devices, the effects of these additional terms are usually overwhelmed by disorder, the topic of the next section.

• At high momenta, the small-p Hamiltonian starts to lose accuracy. Expanding the tight-binding matrix element (2.5) to the next higher order in p gives the trigonal warping Hamiltonian term[21]

    H_w = −(va/4ℏ) Λ_z [ Σ_x(p_x² − p_y²) − 2Σ_y p_x p_y ] + O(p³).    (2.16)

These higher order terms gradually change the constant-energy contour of graphene from a circle to a triangle, as can be seen in Fig. 2.3. Note the presence of Λ_z, meaning that these terms affect the two valleys oppositely: one valley becomes a "left" triangle, and the other a "right" triangle (Fig. 2.3). These distortions play a minor role in a transport experiment, since for |E − E₀| ≲ |t|/10 the valleys differ in kinetic energy by less than 1.5%.

• Another high-energy inaccuracy comes from the neglect of next-nearest neighbor hopping (from A to A, or B to B). The main effect of this extended hybridization is to raise somewhat the energy of k = 0 states (far away from the K points), both in the π and π* bands. Near the K points, the effect is quite minor: including this hopping (with the appropriate hopping integral, t_nnn ≈ (1/10)t ≈ 0.3 eV) adds a quadratic term[40]

    H′ = p²/(2m*),  where m* = 2ℏ²/(9 t_nnn a²),    (2.17)

in the small-p approximation. For states relevant to transport (with |E − E₀| ≲ 0.1|t|), this only results in an energy shift < 1%. Including further hopping terms (next-next-nearest, next-next-next-nearest, etc.) will simply adjust the effective values of v and m* by a small amount.
• A weak spin-orbit coupling is expected to exist in graphene. Kane and Mele[11] noted that graphene symmetry admits the existence of spin-isospin (effectively, spin-orbit) couplings of the form

    H_so = λ_I σ_z Σ_z + λ_BR (σ_x Σ_y − σ_y Σ_x),    (2.18)

where λ_I and λ_BR are constants at small p. The first term can exist intrinsically (λ_I) in graphene, while the latter (λ_BR, Bychkov-Rashba) can be generated extrinsically by breaking the out-of-plane (z → −z) symmetry of graphene. Recent studies indicate that the strongest contribution to λ_I comes from a slight hybridization of carbon d orbitals into the π molecular orbitals, rather than from the intrinsic spin-orbit interaction of the pz orbital itself[12]. The resulting λ_I = 25–50 μeV is still fairly weak. As for the extrinsic spin-orbit, it can be created by applying an out-of-plane electric field; however, it is expected that λ_BR < 50 μeV for realistic electric fields (< 1 V/nm).

2.1.6 Energy spectrum of a graphene sheet

How many distinct eigenstates are there at a given energy? We can deduce the energy spectrum for a large, clean graphene sheet by counting up the possible momentum states (which are finely spaced when the graphene has large dimensions), and using the dispersion relation (2.7) [or (2.11) at low energy]. For the ideal system with a periodic lattice, each energy eigenstate can be assigned to a particular momentum value. Each distinct momentum value occupies a p-space "area" of δ²p = 4π²ℏ²/Ω, where Ω is the real-space area of the graphene.

For a circularly-symmetric dispersion relation [as in (2.11)], the magnitude of the momentum is a function of the state energy, |p| = p(E), with group velocity v(E) = 1/(dp/dE). Therefore, the set of states between E and E + dE lie within a thin circular shell in momentum space, with radius p(E) and with thickness dp = [1/v(E)] dE.
This shell has an area of 2π p(E) dp; dividing this by the per-momentum area δ²p and normalizing by the area of the system, we arrive at the density of momentum states per unit energy, per unit area:

    ρ₀(E) = p(E) / (2π ℏ² v(E)).    (2.19)

This holds for any circularly symmetric dispersion, not just in graphene.

In Sec. 2.1.3 we saw for graphene that for each p, E there are four possible internal states for the electron (spin g_s = 2 and pseudospin g_v = 2). Also, recall that we found linear dispersion, meaning v is a constant with |E − E₀| = vp. Thus, the total density of states (per unit energy, per unit area) in an ideal large graphene sheet is

    ρ(E) = g_s g_v ρ₀(E) = g_s g_v p(E)/(2π ℏ² v) = g_s g_v |E − E₀| / (2π ℏ² v²).    (2.20)

In reality, there are large-scale inhomogeneities in the potential energy in real graphene devices, so that the magnitude of an electron's momentum changes from place to place (as kinetic energy is traded for potential energy). The eigenstates of graphene do not correspond to precise momentum values. As a result, the actual density of states is different than indicated in (2.20), especially for low momentum. We will see experimentally in Chapter 5 that (2.20) is fairly accurate for highly doped graphene, but not for low doping: unlike in (2.20), the density of states never actually falls to zero.

2.2 Potentials and disorder

The Hamiltonian of an electron in a realistic graphene device is not strictly a function of momentum p, but also varies from place to place depending on position r. Such variations can be created intentionally (e.g., applying a magnetic field), or can originate from inhomogeneities in the graphene's environment. These disorders and potentials appear as a position-dependent term H_V(r) adding to the ideal free Hamiltonian (2.15):

    H = v Σ · p + H_V(r).    (2.21)

Naturally, the presence of a spatially-varying potential means that p is not a good quantum number for the energy eigenstates. In other words, there will be mixing between momentum states.
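Looking back at the density of states (2.20), it is straightforward to put numbers on it; the sketch below is my own arithmetic (not a thesis figure), using g_s g_v = 4 and v = 1.0×10⁶ m/s from Sec. 2.1.3, and also integrating (2.20) from the Dirac point to obtain the carrier density:

```python
import math

hbar = 1.0545718e-34   # J·s
eV = 1.602176634e-19   # J
v = 1.0e6              # Fermi velocity in m/s, from Sec. 2.1.3
gs, gv = 2, 2          # spin and pseudospin degeneracies

def dos(E):
    """Eq. (2.20): density of states per joule per m^2 at |E - E0| = E (joules)."""
    return gs * gv * E / (2 * math.pi * hbar**2 * v**2)

def carrier_density(EF):
    """Integral of Eq. (2.20) from the Dirac point: n = (EF - E0)^2 / (pi hbar^2 v^2)."""
    return EF**2 / (math.pi * hbar**2 * v**2)

E = 0.100 * eV  # 100 meV from the Dirac point
print(dos(E) * eV / 1e4)          # states per eV per cm^2
print(carrier_density(E) / 1e4)   # electrons per cm^2, of order 1e11-1e12
```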
This section describes how potentials and disorders arise, and how they appear in the Hamiltonian for an electron. The primary effect of these disorders, scattering, will be examined in Sec. 2.4.

Disorder in graphene can come from many sources: remote electric fields, hybridization with adatoms, changes in carbon-carbon hybridization due to strains, carbon vacancies or substitutions, etc. It may be simple to write down a microscopic (lattice) Hamiltonian for these effects, but to understand these potentials and disorders in the framework of the small-p regime they need to be rewritten in terms of their coupling to isospin (sublattice), pseudospin (valley), and spin.

This section also includes the effects of two uniform fields (the electric field from gating, and the external magnetic field), since they also appear as additive terms independent of p and thus can be included in the potential H_V(r).

2.2.1 Electric potentials (smooth)

The most important potential for electrons in graphene is the ordinary, scalar electric potential. As shown in Appendix A, if the electric potential varies smoothly from atom to atom, then it appears in the small-p approximation as:

    H = v Σ · p + E₀(r),    (2.22)

in other words, by giving a position-dependent value of the energy offset E₀ that was already introduced in (2.3).

Since the first term v Σ · p can be understood as the kinetic energy component of the electron's energy, E₀(r) can be understood as its potential energy, now dependent on position r.⁴ As shown in Appendix A, the value of E₀(r) is determined by the average of the on-site energies of the A atom and B atom inside the unit cell at r.
Empirically, as long as the graphene surface is clean, the value of E₀ is 4.6 eV lower than −eφ_vac, where φ_vac is the electrostatic potential in the vacuum just outside the graphene surface.[41]⁵

As with any other conductor, the electron potential energy E₀ in the graphene can be directly controlled by simply applying a voltage through attached electrodes. The electrode sets the Fermi level (chemical potential of electrons), and E₀ follows the change in Fermi level. The value of E₀ is, however, also sensitive to environmental factors that aren't necessarily in direct electrical contact with the graphene. Significant factors include:

• Voltages applied to gate electrodes around the graphene (capacitive coupling).

• Electric charge imbalances in the dielectric substrate, from trapped charges, unsatisfied internal bonds, or leaking-out polar crystal potentials.

• Contaminants adsorbed on the graphene or adsorbed on the substrate surface.

For these environmental factors, screening within the graphene itself plays a large role in determining the eventual E₀(r) profile.[42] Screening refers to the collective tendency in an electron gas for the electrons to collect in regions of lower potential energy and avoid regions of higher potential

⁴ Of course, the actual electrostatic potential contains large periodic subatomic-scale variations from atom to atom, plus a disorder landscape of ambient electric fields which add smoother potential gradients that vary from atom to atom. The potential E₀(r), roughly, is the average electrostatic potential over the unit cell at r, plus various offsets due to orbital kinetic energy, dynamic electron-electron screening pseudopotential, etc. These offsets are approximately constants but might vary slightly with strain, doping, dielectric environment, etc. To avoid these complications, we just take E₀(r) to be defined by (2.22).

⁵ The parameter −eφ_vac − E₀ is known as the electron affinity, in semiconductor terminology.
energy; this redistribution of charge electrostatically modifies the potential landscape and tends to smooth out the potential variations.⁶

The screening is not perfect: graphene has only a small finite density of states, which means that the existence of a static screening charge requires a significant local shift in E₀(r) (the local charge density depends on band filling, as will be described in more detail in Sec. 2.3). The requirement of consistency between band filling and electrostatics means that the ambient electric environment will leave a significant residual imprint in E₀(r).⁷ As a result, the mobile electrons in the graphene still feel effects from remote charges, even when they are located nanometers away from the graphene. By contrast, in a highly conductive metal such as gold (high density of states) the screening of a remote charge requires only a tiny shift in internal potential, and so it does not result in a noticeable change of the metal's electronic properties.

The sensitivity of E₀(r) to environmental factors is important since it allows control over the electronic properties of the graphene (doping and gating, see Sec. 2.3.2). Unfortunately it also means that the graphene is sensitive to random disorderly charges in the substrate, and so in real graphene devices there are large variations in E₀(r) that result in microscopic scattering (see discussion in Sec. 2.4.4) as well as macroscopic inhomogeneities[43].

2.2.2 Magnetic fields

The magnetic field B modifies the Hamiltonian in two ways. First, the field couples to electron motion in the usual way, by replacing the momentum operator p with p − qA(r), where A(r) is the magnetic vector potential at location r, and q = −e is the charge of the electrons. Second, the field couples directly to the electron's magnetic moment associated with its spin. Thus the Hamiltonian in magnetic field is

    H = v Σ · (p + eA(r)) + μ_e σ · B
      = H₀ + ev Σ · A(r) + μ_e σ
 · B,    (2.23)

where μ_e is the electron magnetic moment and σ its spin Pauli matrix.⁸ Note that the vector potential here must be written in a gauge where the out-of-plane component is zero, A_z(r) = 0.

The appearance of the charge-coupling to magnetic field is somewhat unusual in graphene, since it appears as just a single additive term that is independent of p (this can be compared to the usual two-term coupling for the nonrelativistic free electron, (1/2m)(p + eA)² = p²/2m + (e/m) p · A + (e²/2m) A²). We will use the charge-coupled magnetic term later on in two contexts: to include a uniform perpendicular magnetic field B⊥, and also to express the effective "random magnetic field" disorder due to the combination of a rippled graphene sheet and a uniform in-plane magnetic field B∥. Taking these fields together we have[18]

    A(r) = (B⊥ r_x − B∥ z(r)) ŷ    (2.24)

or equivalently (by gauge freedom)

    A(r) = (B∥ z(r) − B⊥ r_y) x̂    (2.25)

where z(r) is the height field of the graphene sheet. The perpendicular and in-plane field components have distinct effects, as discussed theoretically in more detail in Sec. 8.1 and Ref. [18].

The μ_e σ · B coupling of the magnetic field to the spins of the graphene electrons is of central importance in understanding the results of experimental Chapter 5, and plays a small role in Chapter 7.

⁶ Technically the screening is a dynamic phenomenon; however, in this thesis we will almost always consider screening in the mean-field (static) sense, as is often done in semiconductor physics. This neglect of dynamic electron-electron correlations is what allows us to use the single-electron picture described in this chapter.

⁷ This same self-consistency relation is used to explain band bending in semiconductors. It also leads to the quantum capacitance effect wherein the work function of a non-metallic conductor varies with applied electric field, an effect observed in graphene.[41]

⁸ The Pauli matrix is defined as usual so that the electron spin operator is s = (ℏ/2)σ.
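The magnitudes involved in the μ_e σ · B spin coupling can be tabulated with a few lines; this is my own quick check (not from the thesis), taking the electron moment to be one Bohr magneton:

```python
mu_B = 5.7883818060e-5  # Bohr magneton in eV/T; electron moment |mu_e| taken as mu_B
k_B = 8.617333262e-5    # Boltzmann constant in eV/K

# Zeeman splitting 2*mu_e*|B| in meV over a typical experimental field range
for B in (0.01, 0.1, 1.0, 10.0):  # tesla
    print(f"B = {B:5.2f} T  ->  2*mu_B*B = {2 * mu_B * B * 1e3:.4f} meV")

# Thermal broadening at 100 mK, for comparison
print(f"k_B * 100 mK = {k_B * 0.1 * 1e3:.4f} meV")
```

The splitting runs from about 0.001 meV at 0.01 T to about 1 meV at 10 T, exceeding the ~0.01 meV thermal broadening at 100 mK only at the high end of that range.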
For now we note that, neglecting other spin effects, the effect of the spin-coupling would be simply to split the kinetic Hamiltonian (2.15) apart into two copies, differing in energy by the Zeeman splitting 2μ_e|B|. One band (for electrons with σ · B eigenvalue of +|B|, corresponding to spin aligned with magnetic field) has a reduced Hamiltonian of

    H↑ = v Σ · p + E₀ + μ_e|B|,    (2.26)

and the other (for electrons with σ · B eigenvalue of −|B|, corresponding to spin pointing opposite to magnetic field) has

    H↓ = v Σ · p + E₀ − μ_e|B|.    (2.27)

(Note: though the two bands differ in energy, they share the same Fermi level.)

In the experiments in this thesis, the energy scale of the spin coupling is 2μ_e|B| ≈ 0.001–1 meV (|B| ≈ 0.01–10 T). This scale is small to intermediate: at high field it is large enough to exceed the thermal broadening at 100 mK temperature (kT ≈ 0.01 meV), yet it is much smaller than the typical disorder variations in E₀, which are of order 100 meV. As a result, for most effects one expects the spin-up electron population and the spin-down electron population to behave in nearly the same way (yet we perceive unmistakably their energy difference in Chapter 5).

2.2.3 Valley-coupled disorder

In an ordinary conductor, non-magnetic potentials can be effectively represented simply as variations in E₀ as in (2.22): local shifts of the band, upwards and downwards. Graphene is not quite so simple, as the pseudospin Λ is a non-magnetic degree of freedom, tied to the lattice. Non-magnetic potentials could couple to Λ, in which case they cannot be understood as changes in E₀. These same potentials also couple to Σ, a consequence of time reversal symmetry.

Considering all combinations of {Σ_x, Σ_y, Σ_z} and {Λ_x, Λ_y, Λ_z}, there are up to nine possible potentials of this type, which (combined with E₀) gives a total of ten non-magnetic potential terms:[21]

    H = v Σ · 
p + E₀(r) + ∑_{s=x,y,z} ∑_{l=x,y,z} Σ_s Λ_l V_sl(r),    (2.28)

where V_sl(r) represents the local strength of that potential. The ten potentials can be grouped into three classes based on their coupling to Λ_z. In a practical device, the most important disorder potential is the scalar potential E₀(r), as it dominates the scattering and the overall conductivity. The second strongest class, of valley-distinguishing potentials, consists of the three potentials coupling to Λ_z. The weakest class, of valley-mixing potentials, is made up of the six potentials coupling to Λ_x and Λ_y. The relative strengths of these three classes are easily observed in weak localization experiments[21, 23].

Appendix A describes how all ten non-magnetic disorder terms may be generated from just two kinds of microscopic disorder: variations in the on-site energy of the pz orbitals, or variations in the interatom pz-pz hopping energy. Based on this we can construct microscopic pictures as to how the different classes may arise:

• Local variations in electric potential, from nearby charges in the substrate or weakly attached adatoms, may generate an atomically smooth electric potential in the graphene (E₀) without generating other non-scalar terms. They are however subject to screening (see previous discussion in Sec. 2.2.1).

• If the graphene is placed on a rough substrate we will see the creation of local strains and changes in the hopping energies. These variations generate valley-distinguishing terms Σ_x Λ_z and Σ_y Λ_z [44]. The quantitative effect of these is discussed in Sec. 8.2.

• If the graphene is placed on a crystalline, lattice-matched substrate such as hexagonal silicon carbide or hexagonal boron nitride, we will see preferential effects that distinguish one sublattice over the other (e.g., A over B).
This will generate a valley-distinguishing term Σ_z Λ_z, which corresponds to the creation of a band gap at the Dirac point[45]. If the substrate is crystalline but is not lattice matched (or is rotated), then we will see Moiré pattern variations in the magnitude of the Σ_z Λ_z term.[46]

• One can imagine carbon atoms missing from the graphene, or adatoms bonded to random carbon atoms in the graphene. These cause atomic-scale variations in on-site and hopping energies. Sharp interfaces in the graphene, such as lattice grain boundaries and atomically rough edges, will also cause sharp energy variations. These sharp atomic-scale disorders generate contributions to all ten disorder potentials, but most importantly they are the only source of the valley-mixing potentials (Σ_x Λ_x, Σ_y Λ_x, Σ_z Λ_x, Σ_x Λ_y, Σ_y Λ_y, Σ_z Λ_y)[21].

2.2.4 Spin-coupled disorder

The experiments in Chapter 7 are concerned with spin-coupled disorders in graphene. How do these disorders appear in the electronic Hamiltonian? Similar to the pseudospin-coupled disorder discussed in the previous section, spin disorders can take the form σ_i Σ_j for some indices i, j. These are spin-orbit interactions, as they couple the electron's spin (σ) to its direction of motion (Σ). Spin-orbit interaction disorders tend to fall into one of two classes:⁹

• Out-of-plane spin-orbit disorder has the form of the intrinsic spin-orbit interaction σ_z Σ_z (Sec. 2.1.5) but is enhanced by disorder such as adatoms[47]. It is the only possible type of spin-orbit interaction when the z → −z symmetry of the graphene is preserved.

• In-plane spin-orbit disorders, of the form σ_x Σ_j or σ_y Σ_j, can be created by a variety of mechanisms that break the z → −z symmetry of the graphene.

⁹ If the spin-orbit disorder is associated with an atomically sharp scatterer, it may of course also generate σ_i Λ_j terms (mixing spin and valley); a term such as σ_i Σ_j Λ_k is however not possible, since spin-orbit interactions must respect time reversal symmetry.
This includes out-of-plane electric fields[12], curvature of the graphene (ripples)[48], adatoms[47], and so on.

The primary microscopic mechanism for these spin-orbit disorders is the hybridization of the carbon pz orbitals with other orbitals: orbitals within the graphene itself, or orbitals within adatoms. Fortunately, it is generally possible to derive effective tight-binding models on the graphene lattice that capture the essential physics of these hybridizations. These effective models lead to the above simple representations in the small-p approximation (σ_i Σ_j).[12, 47] For the most part these spin-orbit terms are quite small; however, it has been suggested that adatoms in particular may lead to detectable spin-orbit interaction.[47]

There is another type of spin disorder which has no analogue in pseudospin: the electronic spin can couple to other spins in the graphene (e.g., the spins in magnetic defects, impurities, and even in atomic nuclei). The simplest model of a coupling of this type is a contact dipole-dipole coupling,

    H_s = J δ(r − r_s) σ · S/ℏ,    (2.29)

where S is the spin operator of the magnetic defect, J is the coupling constant (units of energy times area), and r_s is the location of the magnetic defect. This coupling term can be used to model both remote dipole-dipole coupling and direct exchange interactions with electronic magnetic defects.¹⁰ In practice a significant value of J is only obtained with exchange interactions (magnetic defects directly inside or adsorbed on the graphene).

In principle, the interaction of an electron with a defect spin is a many-body effect (as many electrons simultaneously interact with the defect spin) and can lead to nontrivial results such as the Kondo effect[49] and interesting coherent transport physics[50].

¹⁰ Although the remote interaction and exchange interaction seem very different, they give the same type of interaction Hamiltonian because the electrons are fundamentally indistinguishable.
It turns out that, at least for the purposes of the graphene devices studied in this thesis, the coupling of electrons to the magnetic defect is weak enough that a highly simplified model suffices: we may think of the defect spin S as a quasi-static classical moment (a fixed vector) which causes a spin-dependent elastic scattering of electrons.[50]

2.3 Graphene as a two-dimensional electron gas

We have derived the single particle physics of an electron, but in fact a piece of graphene contains numerous electrons. How do these many electrons work together to determine measurable properties such as electrical resistivity?

For the most part, the electrons in graphene interact with each other in the mean field sense, without strong quantum correlations. In other words, electron-electron interactions are a significant contribution to the static potential E₀(r) (e.g., in screening), but dynamic energy exchanges between electrons are a perturbation. Given this, the electrons can be considered to live in independent single-particle states, to fair accuracy. Therefore, graphene is a textbook solid state material (like a semiconductor) in that the electrons fill up the available single-particle eigenstates that can be obtained from the single-particle Hamiltonian.
(Note that, since the Hamiltonian itself is influenced by electrostatic effects, the eigenstates change somewhat depending on the filling of the band structure, in a self-consistent manner.) The Pauli exclusion principle allows at most one electron to occupy each single-particle eigenstate, and so the electrons find their lowest total free energy configuration by filling the available energy levels roughly up to an energy level E_F, known as the Fermi level, or total chemical potential for electrons.¹¹ The probability f(E) of a single-particle state at energy E being occupied is given by the Fermi-Dirac function,

    f(E) = 1 / [1 + exp((E − E_F)/(k_B T))],    (2.30)

where T is the absolute temperature and k_B is the Boltzmann constant. Energy levels that are more than a few k_B T above E_F are never occupied, and states more than a few k_B T below E_F are always occupied.

The thermodynamic meaning of a body's E_F is the work required to reversibly add an electron to it, while it is in contact with a reservoir, taking the electron from a distant state with zero energy. At equilibrium E_F is everywhere constant throughout the graphene and its electrodes, regardless of inhomogeneities and other details.

¹¹ This is the semiconductor physicist's definition of the term "Fermi level", and should not be confused with the Fermi energy of metal physics, usually defined as the largest kinetic energy of a fermion in a disorder-free Fermi gas at absolute zero.

2.3.1 Electron transport at the Fermi surface

The Fermi level E_F is directly connected with electronic transport, as variations in E_F are responsible for the flow of electrons.
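As a quick numerical illustration (a sketch of my own; the constants and function names are not from the text), Eq. (2.30) can be evaluated to confirm the "few k_B T" rule of thumb:

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant, eV/K

def fermi_dirac(E, EF, T):
    """Occupation probability of a level at energy E (eV), Eq. (2.30)."""
    return 1.0 / (1.0 + np.exp((E - EF) / (kB * T)))

# Occupation at 300 K for levels near EF = 0:
f_at_EF = fermi_dirac(0.0, 0.0, 300.0)               # exactly 1/2 at the Fermi level
f_above = fermi_dirac(+5 * kB * 300.0, 0.0, 300.0)   # a few kB*T above: nearly empty
f_below = fermi_dirac(-5 * kB * 300.0, 0.0, 300.0)   # a few kB*T below: nearly full
```

Already 5 k_B T away from E_F the occupation is within 1% of 0 or 1, which is why only levels near E_F matter for transport.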
We can directly control the E_F of electrodes by applying a voltage from a battery or other power supply; without loss of generality we can define voltage as V = −E_F/e.¹² If a conductive path is provided between the electrodes, then charge will flow from the higher to the lower voltage, i.e., electrons will flow from low voltage to high voltage.

If the voltage difference is small, then we can describe the device as being in quasi-equilibrium. Quasi-equilibrium means that it is possible to accurately describe the local population of energy levels with a local value of E_F and T, though these values may vary from location to location. In a classical quasi-equilibrium resistive device, there is a smooth, monotonic gradient in E_F and the electrical current is simply determined by Ohm's law:

    J = σ∇E_F/e,    (2.31)

where σ is the local conductivity and J is the current density. Here it is assumed that T is constant in the device so that thermoelectric effects can be neglected.

Importantly, although there are many electrons in the graphene, only those with energy near E_F contribute to conductivity. The reason is that the lower and higher energy levels are either always filled or always empty (|E − E_F| ≫ k_B T), and hence they do not exchange charge with their surroundings. The energy levels near E_F, on the other hand, can give and take electrons from the various electrodes. Quantitatively, if we associate a conductivity σ(E) to electrons at energy level E, then the measurable Ohm's-law conductivity is given by a thermal average[51, 52]

    σ = −∫ dE (df(E)/dE) σ(E) = (1/(k_B T)) ∫ dE f(E)(1 − f(E)) σ(E),    (2.32)

for f(E) as defined above. The function −df(E)/dE is peaked at E_F, with a width of order k_B T. We will see later that the averaging in (2.32) plays an important role in the statistics of conductance fluctuations (Sec. 8.3). To first approximation the conductivity varies smoothly with energy, however, so the typical conductivity is given by σ ≈ σ(E_F).

¹² This "voltage"
is distinct from the electrostatic potential E0, but note that differences in V (not E0) are what drive electric current. When a voltmeter is attached to two points, it measures precisely the difference in V between those points.

2.3.2 Graphene doping and gating

The number of electrons in the graphene can be controlled by shifting the band structure relative to the Fermi level. As in semiconductors, this control of the electron number is referred to as doping or gating (although in the graphene community these two terms are used almost interchangeably). Given (2.30) and the density of states in the graphene, ν(E), we can compute the sheet carrier density n_s of the graphene, relative to some reference density n_0:

    n_s = ∫_{−∞}^{∞} dE f(E) ν(E) − n_0.    (2.33)

We choose n_0 to be equal to the electron density in charge-neutral graphene (exactly one electron per carbon atom in the π band). Thus, the meaning of n_s is the per-area number of extra conduction electrons in the graphene, relative to charge neutrality. Similar to a semiconductor, n_s in graphene can be tuned to be positive or negative. When n_s > 0 the graphene is said to be electron-doped, whereas it is hole-doped when n_s < 0. The integral in (2.33) is illustrated in Fig. 2.4.

The carrier density is something that we can directly control in experiment, by capacitive gating. The graphene rests on an SiO2 dielectric with a conductive silicon substrate (called the backgate) underneath. When we apply a voltage difference V_BG between the backgate and graphene, then we have a parallel plate capacitor and¹³

    n_s = (ε/(e d)) (V_BG − V_0),    (2.34)

where ε is the permittivity of the SiO2 and d its thickness. The offset V_0 is introduced phenomenologically, representing the voltage necessary to cancel the built-in contamination charge of the graphene.

Since V_BG (and hence n_s) is the experimental knob, it is important to understand how theoretically relevant variables such as E0, p, etc., relate to n_s.
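For a sense of scale, Eq. (2.34) can be evaluated numerically. The sketch below assumes a typical 300 nm SiO2 backgate dielectric (illustrative numbers of my choosing, not the parameters of a specific device):

```python
# Sheet carrier density induced by the backgate, Eq. (2.34): ns = eps/(e*d) * (VBG - V0).
e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 3.9        # relative permittivity of SiO2
d = 300e-9         # assumed oxide thickness, m

def carrier_density_cm2(VBG, V0=0.0):
    """Carrier density in cm^-2 for a given backgate voltage (V)."""
    ns_per_m2 = eps0 * eps_r / (e * d) * (VBG - V0)
    return ns_per_m2 * 1e-4  # convert m^-2 -> cm^-2
```

For these parameters the lever arm is the familiar figure of roughly 7 × 10¹⁰ cm⁻² of carriers per volt of backgate, with the sign of n_s following the sign of V_BG − V_0.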
¹³ Note the approximation here that the graphene fixes the local electrostatic potential, as if it were a metal plate. This is valid since the oxide is so thick, so that the work function of graphene only changes slowly with respect to V_BG.[41] To draw a connection to the earlier discussion of screening in the graphene, note that the process of charging the graphene necessarily involves E0 shifting with respect to E_F. Since the electrostatic potential outside the graphene is locked to E0 (with a 4.6 eV offset), this means that the electrostatic potential just outside the graphene also shifts slightly (by less than a volt) under the influence of a gate. The electric field between the graphene and the backgate is therefore a little less than V_BG/d. This small correction is known as quantum capacitance and for simplicity we neglect it.

[Figure 2.4: Filling of electronic states in doped graphene. Upper: Plot of the density of states in clean graphene [(2.20), as solid line] with shaded curves showing the density of filled levels, f(E)ν(E), at 300 K and for various levels of doping: E_F − E0 = −0.2 eV, E_F − E0 = 0, and E_F − E0 = 0.2 eV. Lower: The same plot, using a more realistic density of states for a graphene device with inhomogeneous E0(r). In the axis, E0 refers to the average or typical value of the potential energy E0(r). In both plots, the left red curve shows the case of negative n_s, commonly called p-doped, and the right blue curve shows positive n_s (n-doped). The center green curve shows the charge-neutral case. While the p-doped and n-doped cases appear more or less similar in ideal and disordered graphene, the charge-neutral case is distinct since the density of states does not go to zero in disordered graphene.]

To first approximation, the energy spectrum of graphene is given by
the disorder-free density of states (2.20), whose offset is determined by a uniform potential energy E0. In this case, n_0 = ∫_{−∞}^{E0} dE ν(E), so that n_s = 0 when E_F = E0. We define the quantity

    μ = E_F − E0,    (2.35)

known as the internal chemical potential.¹⁴ At moderate or high levels of doping, |μ| ≫ k_B T, the state population is much like that in a metal or degenerately doped semiconductor, since f(E) changes faster with energy than ν(E) does. We can approximate f(E) ≈ Θ(E_F − E), a step function, and so in (2.32) we have simply σ ≈ σ(E_F). For the sheet carrier density we get a straightforward result:

    n_s ≈ ∫_{E0}^{E_F} dE ν(E) = (μ²/(π ℏ² v²)) sgn(μ),    (2.36)

and we can also define the Fermi momentum p_F:

    p_F = |μ|/v = ℏ √(π |n_s|).    (2.37)

When μ = 0 the graphene is said to be undoped or charge neutral (n_s = 0). There are many theoretical predictions regarding unusual behaviour in graphene when it is undoped, as in this case there would only be a small number of thermally excited electrons above E0 and holes below E0 (Fig. 2.4). Unfortunately, in real graphene devices there is a significant level of contamination, so that there are large-scale variations in E0 (and hence μ) of order 100 meV, giving rise to the appearance of "charge puddles": variations in the local charge density from place to place.[41] In the average density of states, this appears as a smearing out of the Dirac point (Fig. 2.4). By uniform gating, it is only possible to make |μ| ≈ 0 in a minority of the graphene. As a result, special undoped-graphene physics is rarely observed unless it involves energy scales larger than these potential variations (an example of this is the anomalous quantum Hall effect of graphene[5]).

2.4 Semiclassical scattering, diffusion, and resistivity

Ultimately what we want to find from theory is the electrical conductivity of graphene, to compare with experiments. The electrical conductivity as a

¹⁴ In the literature this μ
is sometimes simply called the "chemical potential", despite the fact that it is not constant at thermal equilibrium.

scalar was introduced in Sec. 2.3.1. More generally, the conductivity σ is a matrix relating the current vector J and the local vector gradient in the Fermi level, ∇E_F:

    (Jx, Jy) = (1/e) (σxx σxy; σyx σyy) (∂x E_F, ∂y E_F),    (2.38)

or J = σ∇E_F/e for short. We can also define the resistivity matrix ρ = σ⁻¹.

Graphene devices are full of disorder, and as a result the electrons cannot travel very far before being scattered. In a typical device the mean free path is l_tr ≈ 0.1 μm, whereas the device size is more than a few μm. Thus the electrons do not neatly fly across the graphene but instead diffuse, scattering thousands of times as they move from electrode to electrode. Although the electrons scatter frequently, they travel for at least a few wavelengths before their momentum is changed (p l_tr ≫ ℏ). This is the weak disorder limit of quantum scattering. In this limit, we can think about electronic motion in semiclassical terms, where the electrons move in a particle-like manner, occasionally experiencing random and independent scattering events. Phase correlations thus play a minor role in determining the resistivity (see Chapter 3).

2.4.1 Einstein relation (Kubo conductivity)

The Einstein relation gives the electrical conductivity tensor in terms of the diffusion of electrons[52]:

    σ_ij = e² ν D_ij,    (2.39)

where ν is the density of states per unit energy, per unit area [see (2.20) for its value in ideal graphene], and D_ij is the diffusion tensor for electrons at equilibrium. The diffusion tensor is calculated from the time evolution of an electron's velocity components v_x and v_y, as it randomly diffuses:

    D_ij = ∫_0^∞ ⟨v_i(t) v_j(0)⟩ dt,    (2.40)

where the average ⟨· · ·⟩ indicates an average over all possible starting positions and headings. An alternative but equivalent definition involves tracking the electron's position, r(t) = ∫_0^t dt′ v(t′).
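To make Eq. (2.40) concrete, here is a small numerical sketch with assumed values. For isotropic scattering at rate 1/τ the velocity correlation decays exponentially, ⟨v_x(t) v_x(0)⟩ = (v²/2) e^{−t/τ} in 2D (a standard result, consistent with the relaxation modes derived in the following subsections):

```python
import numpy as np

v = 1.0e6      # assumed Fermi speed, m/s
tau = 1.0e-13  # assumed scattering time, s

# Evaluate Eq. (2.40), xx component, by direct numerical integration.
t = np.linspace(0.0, 50.0 * tau, 500001)
dt = t[1] - t[0]
corr = 0.5 * v**2 * np.exp(-t / tau)   # <v_x(t) v_x(0)> for isotropic scattering
D_xx = np.sum(corr) * dt               # ~ v^2 * tau / 2

expected = 0.5 * v**2 * tau            # the closed-form answer
```

The numerical integral reproduces D = v²τ/2, the 2D diffusion constant that reappears later as Eq. (2.44).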
At very large times the position is described by the covariance relation

    ⟨r_i(t) r_j(t)⟩ = 2 D_ij t,  for t ≫ τ_tr,    (2.41)

which shows that the electron's probability spreads diffusively; thus the naming of D as the diffusion tensor. The next few sections will explore how to calculate the way in which an electron's velocity becomes randomized over time, and ultimately D_ij.

In a broader context, the Einstein relation (2.39) is an example of a Kubo formalism, relating the non-equilibrium linear response of a system (e.g., current per unit electric field) to its fluctuations at equilibrium (e.g., charge diffusion). This is a powerful idea, and convenient, since non-equilibrium properties can be difficult to analyze directly (requiring, say, a full Boltzmann scattering formalism). Most importantly, the semiclassical conductivity (2.39) has a close quantum counterpart known as the Kubo-Greenwood formula, which fully includes coherent effects. The coherent effects (described in the next chapter) can be interpreted as corrections to σ and D.

2.4.2 Semiclassical scattering-diffusion

The semiclassical physics of electrons diffusing in a weakly disordered environment can be described by the rate of scattering Γ_{pα→p′β} from an initial state (p, α) to a final state at (p′, β). These states are energy eigenstates of the unperturbed Hamiltonian; the α and β indices represent the internal states of the electron (pseudospin and spin). Neglecting coherent effects, each scattering event is independent of the others.

Once Γ_{pα→p′β} has been computed (an example is shown in the next subsection), we can use it to predict the statistical evolution of the velocity and calculate D_ij. In graphene, the velocity is tied to the direction of the momentum p, and due to the circularly symmetric dispersion relation we actually only need to keep track of the electron's heading θ.
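Eq. (2.41) can be checked directly with a toy Monte Carlo simulation (my own illustration, not from the text): a particle moving at speed v whose heading is fully randomized at Poisson-distributed times with mean τ should spread with ⟨x²⟩ = 2Dt, where D = v²τ/2 in two dimensions (the isotropic-scattering result derived in the following subsections):

```python
import numpy as np

rng = np.random.default_rng(1)
v, tau, T, N = 1.0, 1.0, 100.0, 4000   # speed, mean free time, total time, walkers

x2 = np.empty(N)
for i in range(N):
    t, x, y = 0.0, 0.0, 0.0
    theta = rng.uniform(0.0, 2.0 * np.pi)
    while t < T:
        dt = min(rng.exponential(tau), T - t)  # free flight, truncated at time T
        x += v * dt * np.cos(theta)
        y += v * dt * np.sin(theta)
        t += dt
        theta = rng.uniform(0.0, 2.0 * np.pi)  # isotropic scattering: fresh heading
    x2[i] = x * x

D_est = x2.mean() / (2.0 * T)  # estimate of D via Eq. (2.41), per component
```

With these units the expected value is D = v²τ/2 = 0.5, and the simulated spread matches it to within statistical error.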
We define a differential angular scattering rate by integrating out the non-angular variables:

    S(θ, θ′) dθ′ = Σ_{β; p′ near θ′} Γ_{pα→p′β} = Σ_β (ν0 v/2π) [∫_0^∞ dp′ Γ_{pα→p′β}] dθ′,    (2.42)

where ν0 is the density of momentum states (2.19). The value of S(θ, θ′) dθ′ is the rate of scattering from an initial state p = (p_F, θ) to all final states (any momentum) travelling in a direction between θ′ and θ′ + dθ′.

Now, consider an electron launched in a direction θ_0 at time t = 0. At a later time t, the probability that the electron's direction is between θ and θ + dθ is represented by the distribution P(θ, t) dθ. Since the scattering has no long-term memory, we can write an evolution equation for P due to scattering S:

    ∂P(θ, t)/∂t = ∫ dθ′ S(θ, θ′) P(θ′, t) − ∫ dθ′ S(θ′, θ) P(θ, t),    (2.43)

where the first term is scattering into θ and the second is scattering out of θ. This equation conserves the total probability 1 = ∫ dθ P(θ, t). The initial condition (travelling in a direction θ_0) is that P(θ, 0) = δ(θ − θ_0). Now, the problem is: how do we compute the diffusion constant matrix D (2.40) from this evolution equation?

Isotropic scattering

If S(θ, θ′) only depends on the difference between θ and θ′, then the scattering is isotropic (it has no preferred axes). In this case we can use (2.43) to obtain a simple expression for the diffusion constant in terms of S.

We define the function S(φ) = S(θ, θ′) where φ = θ − θ′. Since S must be 2π-periodic and even-symmetric, we can expand it as a Fourier series: S(φ) = ½ s_0 + Σ_{n=1}^{∞} s_n cos(nφ). It is then straightforward to show that the functions P_n^{(x)}(θ, t) = e^{−γ_n t} cos(nθ) and P_n^{(y)}(θ, t) = e^{−γ_n t} sin(nθ) for all n ≥ 0 are eigensolutions of (2.43), where γ_n = π(s_0 − s_n). Any solution of (2.43) can be represented as a superposition of these eigensolutions.
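The eigensolutions can be verified numerically. The sketch below (with an assumed sample rate S(φ) = 1 + cos φ, i.e. s0 = 2, s1 = 1, and all higher sn = 0) discretizes the right-hand side of (2.43) on a periodic grid and confirms the decay rates γn = π(s0 − sn):

```python
import numpy as np

n = 1024
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dtheta = theta[1] - theta[0]

S = lambda phi: 1.0 + np.cos(phi)   # s0 = 2, s1 = 1, higher sn = 0

def rhs(P):
    """Right-hand side of Eq. (2.43): scattering in minus scattering out."""
    M = S(theta[:, None] - theta[None, :])   # S(theta - theta') on the grid
    gain = (M @ P) * dtheta                  # integral of S(theta, theta') P(theta')
    loss = P * np.sum(S(theta)) * dtheta     # P(theta) times the total out-rate
    return gain - loss

def decay_rate(P):
    """Decay rate of a mode: -<P, dP/dt> / <P, P>."""
    return -(P @ rhs(P)) / (P @ P)

gamma1 = decay_rate(np.cos(theta))       # expect pi*(s0 - s1) = pi
gamma2 = decay_rate(np.cos(2 * theta))   # expect pi*(s0 - s2) = 2*pi
```

Since the integrals of trigonometric polynomials are computed exactly by the periodic rectangle rule, the rates come out equal to π and 2π to machine precision.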
The two modes with n = 1 are the only modes contributing to transport, however. This leads to a well-known equation for the diffusion constant D_ij = D δ_ij in an isotropic 2D medium:

    D = (v²/2) τ_tr,  where  τ_tr = γ_1⁻¹ = [∫_0^{2π} dφ (1 − cos φ) S(φ)]⁻¹.    (2.44)

The time τ_tr is called the transport time, and its meaning is the lifetime of momentum.¹⁵ It corresponds directly to the electron mean free path via l_tr = v τ_tr. The conductivity in this case is simply σ = ½ e² ν v² τ_tr.

¹⁵ Note that τ_tr is not the same as the average time between scattering events,

    τ_el = [∫_0^{2π} dφ S(φ)]⁻¹    (2.45)

(this is called the elastic time, and may be much shorter than τ_tr). The distinction between τ_tr and τ_el is important: it is possible for an electron to "scatter" (receiving, say, a phase shift or a spin rotation) without changing direction at all. Such forward scattering events do not contribute to τ_tr but they do affect τ_el. The time τ_el plays a role in some transport properties[53] which are not the main focus of this thesis.

2.4.3 An example: scattering from a weak scalar potential

Assuming that the disorder potential is a weak perturbation H_V, the Fermi golden rule gives the average scattering rate (here and below, an average over disorder configurations is implied) as

    Γ_{pα→p′β} = (2π/ℏ) |⟨p′, β|H_V|p, α⟩|² δ(E_p − E_{p′}) = (2π/(ℏv)) |⟨p′, β|H_V|p, α⟩|² δ(|p| − |p′|).    (2.46)

Note that (2.46) cannot be used to compute scattering rates from strong potentials: for example, short-range resonant scatterers effectively cause multiple correlated scattering events. Also, strong longer-ranged potentials (such as the charge puddles in graphene) actually "curve" the electron's trajectory gradually instead of causing single scattering events.

Consider a scalar potential H_V = E0(r) which is random, and which can be characterized by a correlation function E0(r′)E0(r) = C_V(r −
r′). Using (2.46) with the small-p energy eigenstates (2.12), the scalar potential is found to cause no scattering between valleys, and a scattering rate within each valley of

    Γ_{pK→p′K} = (2π/(ℏv)) C̃_V(q) cos²((θ_p − θ_{p′})/2) δ(|p| − |p′|),    (2.47)

where q = (p − p′)/ℏ and we have introduced the disorder power spectrum C̃_V(q), which is defined from the disorder correlation function as

    C̃_V(q) = ∫ dr_x dr_y e^{−iq·r} C_V(r).    (2.48)

The angular scattering function (2.42) is then found to be

    S(θ, θ′) = (ν0/ℏ) C̃_V(q) cos²((θ − θ′)/2).    (2.49)

In this case, q is defined by its magnitude, q = (2p_F/ℏ) sin(|θ − θ′|/2), and direction θ_q = ½(π + θ + θ′). Note that the cos² term in (2.49) is not present in ordinary 2D electron systems. This term prevents direct back-scattering (θ − θ′ = ±π), and is a manifestation of the Klein tunnelling effect for Dirac electrons[54].

By symmetry, one would expect the correlations of the disorder to be isotropic, in which case we can assume that C̃_V(q) depends only on the magnitude q = |q|. This means that we can use relation (2.44) and obtain the momentum relaxation rate

    τ_tr⁻¹ = (ν0/(2ℏ)) ∫_{−π}^{π} dθ [1 − cos²θ] C̃_V((2p_F/ℏ) sin(θ/2))    (2.50)

and, via (2.39), a contribution to the resistivity

    ρ = (2/(e² v² ν)) τ_tr⁻¹ = (1/(g_s g_v e² ℏ v²)) ∫_{−π}^{π} dθ [1 − cos²θ] C̃_V((2p_F/ℏ) sin(θ/2)).    (2.51)

In the next section we compare this result to the actual resistivity in graphene.

2.4.4 What is the main source of scattering in graphene?

The dependence of graphene's resistivity on doping gives insight into what sort of disorder is the primary cause of scattering. Recall that we can control the carrier density in graphene by capacitively gating it. At high carrier densities where |n_s| rises above the charge-puddling level, we can expect relation (2.37) to hold, meaning that the current-carrying electrons (those at the Fermi level) all have momentum with magnitude p_F = ℏ√(π|n_s|).

From (2.51) we can consider two approximate regimes for the correlations in weak disorder:
• Short range: For disorder with rapid spatial variations (compared to ℏ/p_F), C̃_V(q) is approximately constant with respect to q, so the integral in (2.51) is then dominated by θ ≈ π/2:

    ρ ≈ (π/(g_s g_v e² ℏ v²)) C̃_V(√2 p_F/ℏ).    (2.52)

In this case the resistivity ρ does not depend directly on p_F. For very short range disorder we would have simply ρ ∝ p_F⁰.

• Long range: In the opposite limit, of a smooth disorder with fluctuations longer than ℏ/p_F, the integral in (2.51) is dominated by small values of θ. As an approximation, we have in this case θ ≈ ℏq/p_F:

    ρ ≈ (ℏ²/(g_s g_v e² v² p_F³)) ∫_0^{p_F/ℏ} dq q² C̃_V(q).    (2.53)

For very smooth potentials the upper limit can be set effectively to ∞, predicting that ρ ∝ 1/p_F³.

In fact, experiments typically observe the following dependence on Fermi level, at high carrier density (see for example Figs. 5.1, 6.2, 7.1 in the experimental sections):

    ρ ∝ 1/n_s ∝ 1/p_F².    (2.54)

This would seem to be inconsistent with the p_F⁰ and p_F⁻³ relationships above.

To generate a ρ ∝ p_F⁻² dependence we can simply take a potential that behaves as C̃_V(q) ∝ 1/q² for values of q of order p_F/ℏ. This falls between the short-range and long-range cases. It turns out that Coulomb potentials from random charges near the graphene sheet (whose exact nature is not known), combined with screening, would produce this kind of medium-range potential. In other words, as we change the value of p_F in the graphene, it is better able to screen the disorder, and so E0(r) and C̃_V(q) actually change with the doping n_s. This Coulomb-scattering model has been studied in great detail, and is successful in explaining the ρ ∝ n_s⁻¹ dependence as well as the saturating behaviour of the resistivity near the Dirac point.[42, 55] It also describes the experimentally observed effects of adding more Coulomb scatterers to the graphene[56].

It is worth mentioning that an alternative model exists to explain the ρ ∝
1/p_F² dependence: resonant scattering.[57] If the disorder consists of short-ranged but very strong local potentials, then the golden-rule scattering rate (2.52) is not accurate. Instead of a constant resistivity (as for weak short-range scatterers), these strong resonant scatterers give a resistivity dependence quite close to ρ ∝ p_F⁻², as seen in experiment.

The debate about the source of the scalar potential in graphene (resonant scattering vs. screened electrostatic disorder) continues to this day, and there is little certainty about what exactly is the environmental source of the disorder.[53, 55, 58–60] Fortunately, the interpretation of the experiments in this thesis does not rely on the exact nature of the scattering, but merely on the fact that transport in graphene is diffusive over the micron length scale.

2.4.5 Pseudospin and spin relaxation

Scalar disorder relaxes the momentum but it does nothing to the electron's internal degrees of freedom (pseudospin and spin). When we consider the valley-coupled and spin-coupled disorders (Secs. 2.2.3 and 2.2.4), we can also compute rates of scattering between different internal states. Given a disorder Hamiltonian, we can compute spin-scattering rates and pseudospin-scattering rates, which give the rate of scattering from spin- (pseudospin-) sensitive disorder. These are not simply described by one rate: given the planar symmetry of the graphene, it is important to distinguish in-plane and out-of-plane degrees of freedom.

We define the out-of-plane spin scattering rate as the total rate of scattering (to any final state) from terms including s_z, and the in-plane spin scattering rate as that from terms including s_x (by symmetry, the same rate as from s_y). These rates appear in coherence dephasing modes (Sec. 3.4) and also correspond closely to the spin-flip rate.
Similar rates are defined for pseudospin disorder, though labelled instead as intravalley (terms including τ_z) and intervalley (terms including τ_x and τ_y) rates.

For example, consider an electron interacting with a classical magnetic spin S as in (2.29). From the Fermi golden rule (2.46) we have, for a collection of such magnetic defects,¹⁶

    τ_zs⁻¹ = (π/ℏ) ν0 J² S_z² n_d,   τ_xs⁻¹ = (π/ℏ) ν0 J² S_x² n_d,    (2.55)

where n_d is the number density of these spins (assumed to be randomly placed). This type of analysis is useful to compute, for example, the expected spin relaxation from the hyperfine (electron-nucleus) interaction. For ¹³C atoms (S = 1/2 with 1% concentration, or n_d ≈ 1 nm⁻²), the expected hyperfine interaction constant is of order J ≈ 1 × 10⁻⁷ eV nm².[13] Given a typical density of momentum states in graphene, ν0 = 0.1 eV⁻¹ nm⁻², we therefore only expect an electron spin flip rate of order τ_s⁻¹ ≈ 1 s⁻¹ due to hyperfine interactions, an extremely low rate!

¹⁶ Compared to ordinary systems,[50] this is lower by a factor of two in graphene since the Hamiltonian (2.29) is unable to cause backscattering, as explained near Eq. (2.49). The prefactor becomes larger when the magnetic defect is atomically sharp, as it can start to also couple to σ, τ.

Chapter 3

Theory of coherent effects

The last chapter concluded with a discussion of how the scattering of electrons in graphene causes resistivity, from a semiclassical viewpoint. A critical assumption was made that the electron's phase plays no role; this is equivalent to assuming that the phase becomes totally and unpredictably randomized after only a few scattering events. While this is true at temperatures of order 100 K or above (due to frequent electron-electron interactions), the assumption becomes invalid as temperature is reduced.

At low temperatures, almost all scattering events are elastic processes, where the electron scatters off a static potential. In such an event, the electron's phase is changed "randomly"
but repeatably, and so it may form stable interference patterns. To be accurate, then, we should consider the electron as a diffusing wave rather than a diffusing particle. Interference complicates matters, of course: the phase correlations prevent us from treating every scattering event as independent. How important is it that we include these phase-coherent correlations? It turns out that, even for a system with perfect coherence, the relative difference in resistance between the semiclassical and quantum calculations tends to be of order ℏ/(p l_tr), which in graphene is about 1% to 10%. We can therefore think of the semiclassical conductance (resistance) as a "background level" that becomes slightly perturbed by coherence.

This chapter describes the two ways in which coherence manifests in transport measurements: weak localization (WL) and conductance fluctuations (CF). The two effects (WL and CF) are closely related, and the same theoretical tools are used to describe them. It is not within the scope of this thesis to fully describe the theories behind weak localization and conductance fluctuations. Rather, this section provides a brief introduction to the established framework, insofar as is necessary to explain experimentally relevant formulas (for Chapters 5, 6, 7) and motivate the discussion in Sec. 8.3. For those interested in the complete mathematical details, I recommend the book Mesoscopic Physics of Electrons and Photons[52], which gives an in-depth and consistent treatment.

3.1 Basics of mesoscopic theory

Before looking into the specific predictions, we will first look at some of the basic concepts that underlie mesoscopic theory. The theory is built on the notion of ensemble averages, as the calculation of average quantities turns out to be much simpler than calculating the unaveraged counterpart. In order to include the effects of diffusion, two mathematical objects (the Diffuson and Cooperon correlations) are introduced.
These allow a simple diagrammatic expression for quantities such as average conductance, conductance correlations, and so on. The concept of dephasing is introduced, and the importance of the geometrical regime is stressed.

3.1.1 Ensemble averages

As a first attempt to calculate the conductivity of a quantum system, we might use the semiclassical relationship (2.39), σ_ij = e²ν D_ij, but calculate the diffusion constant D_ij using a quantum approach: the diffusion constant can be obtained from the growing size of the probability cloud that is associated with a wave packet evolving in the disorder potential.

Specifically, we start with a wave packet ψ(r, 0) that is localized near a position r = 0 at time t = 0, and then evolve it using Schrödinger's equation (let's examine the scalar particle case for the moment). We measure its location at a later time (described by the probability distribution |ψ(r, t)|²), and plug it into formula (2.41) to get the diffusion constant:

    2 D_ij t = ⟨r_i(t) r_j(t)⟩ = ∫ dr r_i r_j |ψ(r, t)|².    (3.1)

This approach is conceptually simple but unfortunately difficult to realize. The Schrödinger equation provides no easy solution in a random potential, and we must integrate (3.1), in which the integrand fluctuates over small spatial scales. If we average our result over many possible disorder realizations, however, the problem becomes much simpler. To calculate the average conductivity, all we need is the average probability P(r, t), the ensemble average of |ψ(r, t)|², a function that is much smoother and easier to integrate than its unaveraged form.

Throughout this thesis, the notation X̄ (overbar) will be used to represent the ensemble average of a quantity X, i.e., the value of X averaged over all disorder configurations. Theories of mesoscopic physics are able to make sense of a highly complex problem (wave evolution in a random potential) by directly computing such averaged quantities without all the work of
examining individual disorder realizations. For instance, mesoscopic theory allows us to directly calculate P(r, t) (and thereby the average conductivity) without needing the messy details of the un-averaged |ψ(r, t)|². The theory also allows calculations of higher-order statistics, such as the variance of conductance (the ensemble average of G² minus Ḡ²), conductance correlations (the ensemble average of G_A G_B), and so on.

[Figure 3.1: Paths contributing to the product ψψ*. Left: Example of a path pair that would contribute to ψψ* but not to its ensemble average. The phase of this contribution to ψψ* would be highly sensitive to the length difference, (L2 + L3) − (L1 + L4), which depends sensitively on the disorder configuration. Right: Example of a pair of identical paths, which do contribute to the average of ψψ*.]

3.1.2 Diffusons and the Diffuson approximation

How exactly do we go about computing these ensemble averages? We can express ψ(r, t) as a superposition of many Feynman paths, all starting at 0 and leading to location r; a path may visit any number of scatterers along the way. To calculate the product ψ(r, t)ψ*(r, t) we should therefore consider superpositions of two paths: one for ψ and one for ψ*. The two paths may be completely unrelated, visiting different scattering locations, or may overlap partially, or may be completely identical (Fig. 3.1).

The types of paths that can contribute to the ensemble-averaged probability are restricted. As shown in Fig. 3.1, two paths that travel along different tracks will acquire a disorder-dependent phase difference. Their contribution to ψ(r, t)ψ*(r, t) will therefore be zero on average. On the other hand, we could consider the same path for both ψ and ψ* (Fig. 3.1, right panel); these paths will contribute positively to the average, since the phase, whatever it is, must be identical for the two paths. In order to add up the contributions from all such identical pairs of paths (which may involve
any number of scattering events), we define a mathematical object called the Diffuson (Fig. 3.2), which explicitly only considers identical paths for ψ and ψ*. Figure 3.3 graphically depicts the process used to calculate P(r, t); mathematical details can be found in Ref. [52].

[Figure 3.2: Graphical definition of the Diffuson correlation (darkened wavy area) connecting scattering events at r1 and r2. The Diffuson compares the phases of two waves which traverse the same path, in the same direction. This allows a generalization of the random walk of Fig. 3.1. The Diffuson is defined recursively: it adds together the contributions to the average of ψψ* from one scattering event (where r1 = r2), or arbitrarily many scattering events in series, considering only pairs of identical paths in between.]

[Figure 3.3: The Diffuson approximation: an "equation" for calculating the ensemble-averaged probability to be in position r at time t, in terms of the Diffuson: P(r, t) = direct propagation + indirect propagation. The electron can either reach its destination without scattering (first term), or travel in an indirect path made up of one or more scattering events (second term). The first term only contributes for small times t ≲ τ_el.]

It turns out that the evolution of the Diffuson is essentially a diffusive process, described by the semiclassical diffusion constant D_ij[52]. The contribution of the Diffuson to the average conductivity is precisely the same as was found semiclassically, in (2.39). So, what have we gained in this quantum diffusion picture? The limits of the semiclassical result are clear: it corresponds to only considering identical paths for ψ and ψ* (neglecting the Cooperon-type paths described in the next section). The procedure also introduces a new mathematical object: the Diffuson.
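The pairing argument of Fig. 3.1 can be mimicked in a few lines (illustrative numbers of my choosing, not from the text): pairs of distinct paths carry a disorder-dependent phase k[(L2 + L3) − (L1 + L4)] and average to zero over disorder realizations, while identical pairs have zero phase difference and survive the average:

```python
import numpy as np

rng = np.random.default_rng(42)
k = 50.0      # wavenumber, arbitrary units (many wavelengths per unit length)
N = 100000    # number of disorder realizations

# Distinct paths: the path-length difference varies between realizations.
dL = rng.normal(0.0, 1.0, size=N)
unpaired = abs(np.mean(np.exp(1j * k * dL)))   # averages to ~0

# Identical paths: the phase difference is exactly zero in every realization.
paired = abs(np.mean(np.exp(1j * k * np.zeros(N))))   # stays at 1
```

Only the paired contribution survives, which is exactly why the Diffuson restricts attention to identical path pairs.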
Although we used it here to calculate a probability, more generally the Diffuson is a tool that we can use to calculate two-wave correlations ⟨ψ₁ψ₂*⟩, even when the two amplitudes do not come from the same wave function.

Figure 3.4: Example of a non-identical path pair that contributes to ⟨ψψ*⟩. Even though the two paths are not identical, they have the same phase.

Figure 3.5: Graphical definition of the Cooperon correlation. It adds together the contributions from one or more scattering events encountered in opposite order. The Cooperon compares the phases of two waves which traverse the same path, but in opposite directions. This allows a generalization of the looping part of the random walk in Fig. 3.4. Compare to the Diffuson definition, Fig. 3.2.

3.1.3 Cooperons and quantum crossings

The Diffuson does not capture all paths that contribute to ⟨ψψ*⟩. Figure 3.4 shows a pair of non-identical paths that actually do contribute to ⟨ψψ*⟩. The first path crosses itself, and therefore it contains a loop. The second path is identical except that the looping part is reversed. These two paths have identical lengths, and they visit the same scattering sites, thus they will each have the same phase (as long as the Hamiltonian preserves time reversal symmetry).

To describe these types of paths in general, we introduce two new mathematical objects. One new object is called the Cooperon, which compares the phase for counter-propagating paths (Fig. 3.5). It is just like the Diffuson except one path is reversed. The other new object we introduce is the quantum crossing, a.k.a. Hikami box, where we connect together four paths (involving eight amplitudes). When we take into account Cooperons when computing P(r, t), we encounter weak localization (Sec.
3.2).

3.1.4 Dephasing

When we compare two wave amplitudes (ψ₁, ψ₂) in the form ⟨ψ₁ψ₂*⟩ we obtain coherent objects that extend over arbitrarily long distances: the Diffuson and Cooperon correlations. The Diffuson and Cooperon by design are able to persist after many random elastic scattering events, giving them a lifetime much longer than τ_el or τ_tr.

Diffuson and Cooperon correlations are limited not by τ_el but by dephasing processes, which are defined as any processes that induce a difference in phase between the two paths. Dephasing causes ⟨ψ₁ψ₂*⟩ to approach its semiclassical value, suppressing coherent contributions. For instance, magnetic fields cause dephasing of the Cooperon by introducing an Aharonov-Bohm phase difference between the counter-propagating paths. Other types of dephasing will appear as we go along: inelastic dephasing due to many-body interactions, symmetry-breaking disorders, dynamic defects, and more.

For a very simple picture of dephasing we could consider "total phase randomization" events that could occur at any time or place. The coherent part of ⟨ψ₁ψ₂*⟩ would then decay exponentially as a function of time t, as exp(−t/τ_φ). We refer to this exponential time constant τ_φ as the coherence lifetime. Its inverse, τ_φ⁻¹, is the dephasing rate: the rate at which these phase loss events occur. Not all dephasing mechanisms lead to this simple exponential decay in time, however. For example, the Aharonov-Bohm phase acquired from a perpendicular magnetic field depends not on time but on the area enclosed by the path, so small-area paths are dephased very slowly by the field. As another example, with disorder coupling to spin and pseudospin we also see the appearance of dephasing modes, giving a multi-exponential decay of coherence.

It is worth clearing up a point of semantic confusion here, as there exist a variety of definitions of the terms "dephasing" and "decoherence".
To be consistent I will follow the definitions seen in the Akkermans and Montambaux book[52] and Boris Altshuler's KITP talk[61]:

• The term dephasing will apply to a situation where a coherent effect is suppressed, for any reason. This can include scattering differences due to the waves' internal states (spin, pseudospin, etc.), time reversal asymmetry (for the Cooperon), or a difference in the environment seen by the two waves (even when the difference is intended by the experimenter). Not all types of dephasing can be described by a dephasing rate.

• The term decoherence (a subset of dephasing) is restricted to those types of dephasing which arise from irreversible and uncontrollable dynamic changes in the environment. A static system would have no decoherence but may have other types of dephasing. Decoherence is typically microscopic in origin so we can generally model decoherence as a dephasing rate.

• The term inelastic dephasing (a further subset of decoherence) describes cases where the diffusing particle leaves an imprint on its environment. This may include some energy transfer between the particle and its environment, although the energy transfer is not essential.

Only this last type of dephasing (inelastic) is connected with the deeper questions about decoherence: entanglement between a particle and its quantum environment, notions of wave function collapse, etc. Inelastic dephasing occurs in the interactions of electrons with phonons, magnetic defects, and even with other electrons. A full description of inelastic dephasing is not trivial since it involves many-body and thermodynamic considerations[50, 52, 62]. Fortunately, in many cases it can be effectively represented by a simple dephasing rate.

3.1.5 Geometry dependence and dimensionality

Phase coherent effects rely on the existence of quantum crossings between diffusive paths, and it is quite important to establish how likely these crossings are.
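How likely such crossings are is controlled by how often a diffusing particle revisits its own trail. As a toy numerical sketch (my own illustration, not a calculation from the thesis), a simple lattice random walk shows the strong dimensionality dependence of returns to the origin:

```python
import random

def mean_returns(dim, steps=2000, walkers=300, seed=1):
    """Average number of returns to the origin for a nearest-neighbour
    lattice random walk in `dim` dimensions (a toy model of diffusion)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(walkers):
        pos = [0] * dim
        for _ in range(steps):
            pos[rng.randrange(dim)] += rng.choice((-1, 1))
            if not any(pos):  # back at the origin
                total += 1
    return total / walkers

r1d, r2d, r3d = mean_returns(1), mean_returns(2), mean_returns(3)
print(r1d, r2d, r3d)  # many returns in 1D, a few in 2D, rarely in 3D
```

The 1D count grows like √t, the 2D count like ln t, and the 3D count saturates, matching the wire/film/bulk hierarchy described in the text.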
Even classically, the probability of diffusive trajectories intersecting each other depends strongly on the dimensionality of the medium. For instance, a particle diffusing in three dimensional space has a large phase space to explore and it typically never returns to its origin point. In a narrow wire, by contrast, diffusion is constrained so that a particle returns to its origin repeatedly. Two dimensional samples are a special case where the number of returns is only a few, growing logarithmically with time.

The relevant cut-off time for coherent diffusion is the dephasing time (τ_φ). It is crucial to establish whether the diffusion is significantly constrained by the sample boundaries during this time. This leads to a natural classification for 2D devices that are rectangular with length L between contacts, and with a width W between the un-contacted edges. We determine the dimensional class by comparing L and W to the characteristic coherence length L_φ = √(Dτ_φ). There are two cases which are particularly interesting for graphene:¹⁷

• Two-dimensional (2D or quasi-2D) regime: l_tr ≪ L_φ ≪ L, W.

• Dirty quasi-one-dimensional (quasi-1D) regime: l_tr ≪ W ≪ L_φ ≪ L.

My experiments were typically within the 2D regime or at the edge of this regime, so this thesis will focus on 2D results. I will also include theoretical details about the dirty quasi-1D case, as graphene samples of this type could be easily made. Hereafter it will be assumed that l_tr ≪ L, W, so that "quasi-1D" always refers to the dirty quasi-1D regime.

3.2 Weak localization (WL)

By taking into account only Diffusons in the diagrams for average conductance, we get precisely the same average conductance as found semiclassically. Including the Cooperon gives a phase coherent effect: a correction to the semiclassical average conductance G_sc known as weak localization,

G = G_sc + G_WL    (3.2)

where, from (2.39), G_sc = (W/L) e²νD.

Theoretically, weak localization can be understood from a few different viewpoints:[52]

1.
An enhancement of the probability to return to the origin. Given full time reversal symmetry, the probability of return to the origin will be double the semiclassical value, as all loops will interfere constructively with their reversed counterparts.

2. A suppression of the probability to propagate to remote points. Mathematically we must renormalize the Diffuson by including Cooperon perturbations.

3. An enhancement of back-scattering in momentum space. This viewpoint lends itself naturally to optical experiments on diffusive media, where the coherently enhanced backscattering is directly observed.

All of these viewpoints are mathematically equivalent: 1 and 2 must agree in order to conserve probability, whereas 3 is simply a momentum-space version of 1. In essence, weak localization modifies the semiclassical conductivity e²νD by modifying the diffusion constant D.

[Footnote 17: Other dimensional classes do exist with their own special considerations, not mentioned here.]

3.2.1 Sensitivity to perpendicular magnetic fields

The weak localization correction depends on the existence of the Cooperon correlation, which is suppressed by magnetic fields. Thus, weak localization is strongest at zero magnetic field, and it becomes suppressed as the magnetic field is turned on. Weak localization is particularly sensitive to the application of perpendicular magnetic fields B_⊥, and it is instructive to consider why this is so.

Qualitatively, the effect of a perpendicular field can be understood as introducing a new cutoff for coherent diffusion, in this case a cutoff area given by L_B² = ℏ/(eB_⊥). Trajectories enclosing this area receive an Aharonov-Bohm phase of order 1. This cutoff becomes relevant, and weak localization starts to be affected, when the cutoff area is comparable to the phase coherent area: L_φ² in 2D (or WL_φ in quasi-1D). For a typical 2D device with L_φ ≈
1 µm (typical at temperatures around a Kelvin), this already occurs for B_⊥ ≈ 1 mT!

3.2.2 Formulas for quasi-2D case

Let's look at the quantitative results for a two dimensional sample with dephasing rate τ_φ⁻¹ in the presence of a perpendicular magnetic field B. For now we will consider ordinary electrons (with spin but no pseudospin/isospin), and without spin disorder. The case of graphene's Dirac particles and/or spin relaxation adds some further complications to the result, as discussed in Sec. 3.4.

The 2D weak localization contribution to conductance is[52]

G_WL(B) = −(W/L)(e²/πh) [ψ(1/2 + ℏ/(4eDBτ_tr)) − ψ(1/2 + ℏ/(4eDBτ_φ))].    (3.3)

Here, ψ(z) is the digamma function (Appx. F). Note that the τ_tr appearing in (3.3) is an artifact of a short-time cut-off that was introduced in the derivation: it may be that 2τ_tr or τ_tr/2, for example, would be more appropriate.

We can consider a few limits in magnetic field by selectively expanding the digamma functions into their asymptotic form, ψ(z) → ln z (for z ≫ 1):

• At zero field we have

G(0) = G_sc − (e²/πh)(W/L) ln(τ_φ/τ_tr).    (3.4)

Since τ_φ is usually temperature dependent, this expression gives a temperature-dependent conductance. In practice this temperature dependence at zero magnetic field is difficult to distinguish from another effect (the electron-electron interaction density of states anomaly) which also gives a conductance contribution of order (e²/h) ln T [52].

• For intermediate and low fields [B ≪ ℏ/(eDτ_tr)] we can write the field-dependent part (called magnetoconductance) as

G(B) − G(0) ≈ −(e²/πh)(W/L) [ln(B_φ/B) − ψ(1/2 + B_φ/B)]    (3.5)

where we have defined B_φ = ℏτ_φ⁻¹/(4eD). Remarkably, this expression can be characterized entirely by the one field scale B_φ, and it does not depend on the approximate cutoff time τ_tr. Hence, (3.5) is suitable for obtaining τ_φ by fitting low-field conductance data.

• Alternatively, for intermediate and high fields, B ≫ B_φ, the conductance is independent of τ_φ:

G(B) ≈ G_sc − (e²/πh)(W/L) [ψ(1/2 + ℏ/(4eDBτ_tr)) − ψ(1/2)].
(3.6)

The Cooperon here is primarily suppressed by the magnetic field, so that the coherence time is irrelevant. This fact is useful for probing temperature- and field-dependent effects other than weak localization, as (3.6) provides an easily subtracted background conductance.

• At very high fields, B ≳ ℏ/(eDτ_tr), we have G(B) ≈ G_sc, meaning that weak localization has been completely suppressed and the semiclassical value is restored. We therefore expect conductance to reach a plateau at high field. Once the field is increased too far, however, we will start to see conductance changes from strong field effects such as Shubnikov-de Haas oscillations[39], which start around B ≈ ℏ/(eDτ_el).

3.2.3 Formulas for quasi-1D case

The WL correction in a quasi-1D system (with spin degeneracy, but no pseudospin/isospin) takes the form[52]

G_WL(B) = −(e²/h)(1/L) [((Dτ_φ)⁻¹ + e²B²W²/(3ℏ²))^(−1/2) − l_tr].    (3.7)

Like the quasi-2D case, the appearance of l_tr here is approximate. It is interesting that, unlike the quasi-2D case, the effect of the magnetic field simply appears in addition to the dephasing rate in equation (3.7).

It is important to note that (3.7) is only valid for small fields. Even if the device starts out in the quasi-1D regime at zero field, it transitions to the quasi-2D regime for fields of order B ≳ ℏ/(eW²). At these high fields, the effective phase coherence length has been reduced to the point where it is smaller than the channel width, and so we should use expression (3.3) instead.

3.3 Conductance fluctuations

Experimentally, what we measure is the conductance G of a particular device, with a particular potential landscape. This conductance is by no means an ensemble average, and it depends on un-averaged probabilities ψψ* that are sensitive to the potential landscape. What we can say with certainty is that the measured conductance can be decomposed into two parts: an ensemble average (⟨G⟩) part, and a device-dependent part (δG).
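The quasi-2D formulas above are straightforward to evaluate numerically. Below is a sketch of Eq. (3.3) using SciPy's digamma function, with illustrative parameter values (not data from this thesis); it also checks the zero-field logarithmic limit of Eq. (3.4):

```python
import numpy as np
from scipy.special import digamma

hbar, e = 1.0546e-34, 1.602e-19

def g_wl_2d(B, D, tau_phi, tau_tr, W_over_L=1.0):
    """2D weak-localization correction of Eq. (3.3), in units of e^2/h.
    B in tesla, D in m^2/s, times in seconds."""
    b = hbar / (4 * e * D * B)  # hbar/(4 e D B)
    return -(W_over_L / np.pi) * (digamma(0.5 + b / tau_tr)
                                  - digamma(0.5 + b / tau_phi))

# Illustrative values: D = 0.03 m^2/s, tau_tr = 0.1 ps, tau_phi = 10 ps
D, tau_tr, tau_phi = 0.03, 1e-13, 1e-11
g0 = g_wl_2d(1e-7, D, tau_phi, tau_tr)        # essentially zero field
print(g0, -np.log(tau_phi / tau_tr) / np.pi)  # compare to Eq. (3.4) limit
```

In the zero-field limit the two digammas reduce to logarithms and the correction approaches −(W/L)(e²/πh) ln(τ_φ/τ_tr), which is why the low-field fit of Eq. (3.5) ends up insensitive to τ_tr.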
These contributions δG are known as conductance fluctuations:

δG ≡ G − ⟨G⟩.    (3.8)

The value of δG not only varies from device to device, but it is also highly sensitive to externally-controlled parameters such as magnetic fields and gate voltages. When we scan one of these experimental parameters, take magnetic field for example, we obtain a trace which appears to contain random noise (see Figs. 5.1, 5.3, 5.7, 7.2 for experimental examples). The conductance fluctuations are however quite unlike noise: as long as the device does not contain slow instabilities, rescanning the same field range will produce exactly the same fluctuation pattern.

Analysis of conductance fluctuations forms an important part of this thesis (Sec. 8.3, Chs. 5, 7). Mesoscopic theory can provide predictions about the nature of δG, as long as we ask for something expressed as an ensemble average:

• What is a typical magnitude of δG? Quantitatively, this is probed by the rms fluctuation amplitude, √⟨(δG)²⟩.

• How sensitive is δG to changes in the magnetic field? To probe this we compute a correlation function, ⟨δG(B_1) · δG(B_2)⟩. Similar correlation functions exist for other parameters (such as V_BG).

• How does δG change over time in an unstable device? This too is characterized by a correlation function, ⟨δG(t) · δG(t + Δt)⟩, in time t.

• What is the probability distribution of δG? Are the fluctuations Gaussian? This can be answered by computing the variance and higher-order moments: ⟨(δG)²⟩, ⟨(δG)³⟩, ⟨(δG)⁴⟩, and so on.

As can be seen, fluctuations are associated with many more statistical metrics compared to the single characteristic ⟨G⟩ of weak localization. From this one might expect that fluctuations present many more opportunities to extract useful information. In practice, not all metrics yield useful information, and only a few metrics can be easily measured to the required precision (more on this in Sec.
8.3).

3.3.1 General two-conductance correlations

The studies of conductance fluctuations in this thesis all involve quantities that can be written as ⟨δG_A · δG_B⟩, comparing two conductance fluctuations measured under different circumstances which we label A and B. Not only can we express the variance ⟨δG²⟩ = ⟨δG · δG⟩ in this form, but also correlations in magnetic field ⟨δG(B_A) · δG(B_B)⟩, gate voltage ⟨δG(V_BG,A) · δG(V_BG,B)⟩, time ⟨δG(t_A) · δG(t_B)⟩, and so on. We can even look at correlations where two parameters differ, e.g., ⟨δG(B_A, V_BG,A) · δG(B_B, V_BG,B)⟩. Any of these two-conductance correlations can be calculated using a general procedure, though it is technically involved (Appx. D). Here we will examine briefly how these conductance correlations appear in mesoscopic theory.

Recall that to determine a conductance (probability) we consider many different path pairs that travel from point r_i to point r_f. When we compare two conductances G_A and G_B, then, we must consider four paths: two paths going from r_Ai to r_Af, and two from r_Bi to r_Bf.

Figure 3.6: Outline of CF diagrams. To calculate the average product ⟨G_A · G_B⟩, we need to consider all possible diagrams in the obscured region.

Note the relation

⟨δG_A δG_B⟩ = ⟨[G_A − ⟨G_A⟩][G_B − ⟨G_B⟩]⟩ = ⟨G_A G_B⟩ − ⟨G_A⟩⟨G_B⟩.    (3.9)

Any diagram in which particle A's path is unrelated to B's path will appear in both terms, and thus be cancelled out. Therefore, ⟨δG_A δG_B⟩ includes only diagrams where the paths overlap for some length. It turns out that there are four dominant contributions to ⟨δG_A δG_B⟩, each involving two quantum crossings (Fig. 3.7).

The diagrams in Fig. 3.7 can be classified by whether the shared paths are Cooperons or Diffusons, and whether the shared paths are self-returning loops or they propagate between distinct locations. The field dephasing of the Cooperon diagrams depends on the sum B_A + B_B, whereas the Diffuson diagrams depend on the field difference B_A − B_B; other than this, the Diffuson and Cooperon diagrams behave similarly.
If we interpret conductivity as (2.39) σ = e²νD, then roughly speaking the looping diagrams represent fluctuations in the density of states (δν), while the non-looping diagrams represent fluctuations in the diffusion constant (δD). The δD diagrams are generally more robust to dephasing by thermal effects and energy differences, compared to the δν diagrams.

Unlike weak localization, there is no analytical expression for conductance correlations at finite temperature. Given the wide variety of differences that could exist between circumstances A and B, there are many factors that go into determining ⟨δG_A δG_B⟩: temperatures, magnetic fields, kinetic energies, and dephasing rates. We can however come quite close to an analytic result in the quasi-1D and quasi-2D cases: theory is able to provide an analytic form for the zero-temperature, single-mode correlation functions F₀ (Diffuson) and C₀ (Cooperon).

Figure 3.7: The four diagrams contributing to CF correlations. Filling in the obscured region from Fig. 3.6, we arrive at the four dominant kinds of path correlation that contribute to conductance fluctuation correlations ⟨δG_A δG_B⟩: (a) Diffuson δD term, (b) Cooperon δD term, (c) Diffuson δν term, (d) Cooperon δν term.

Figure 3.8: Histogram of experimental conductance fluctuations, taken from a portion of the data contributing to the T = 200 mK, B_⊥ = 0 data point in Fig. 7.4, showing Gaussian behaviour. (Note: conductance measurements were correlated due to oversampling, so the counts' noise is larger than Poissonian.)

These zero-temperature correlation functions can be used as building blocks for the computation of correlations in a more complicated situation (finite temperature and multiple dephasing modes).
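Because the measured fluctuations are Gaussian (Fig. 3.8), all of their higher-order statistics reduce to pair correlations; this is the content of the Isserlis (Wick) factorization used in the next section. A quick Monte Carlo check (my own sketch, with an arbitrary positive-definite covariance standing in for the pair correlator ⟨δG_i δG_j⟩):

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean correlated Gaussian "fluctuations" dG_A .. dG_D.
cov = np.array([[1.0, 0.5, 0.3, 0.2],
                [0.5, 1.0, 0.4, 0.1],
                [0.3, 0.4, 1.0, 0.6],
                [0.2, 0.1, 0.6, 1.0]])
samples = rng.multivariate_normal(np.zeros(4), cov, size=400_000)
dG_A, dG_B, dG_C, dG_D = samples.T

three_point = np.mean(dG_A * dG_B * dG_C)        # odd correlator: vanishes
four_point = np.mean(dG_A * dG_B * dG_C * dG_D)  # factorizes into pairs
wick = cov[0, 1]*cov[2, 3] + cov[0, 2]*cov[1, 3] + cov[0, 3]*cov[1, 2]
print(three_point, four_point, wick)
```

The sampled four-point correlator matches the sum of the three pair-products to within Monte Carlo error, while the odd correlator averages to zero.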
Appendix D describes the detailed process of computing conductance correlations.

3.3.2 General many-conductance correlations

It turns out that in weakly disordered samples (p_F l_tr ≫ ℏ) the fluctuations are normally distributed.[52] This is the case in the experiments of this thesis (see Fig. 3.8 for an example). This means that we can rewrite higher-order correlations ⟨δG_A · δG_B · · · δG_n⟩ in terms of the two-point correlation function described in the last section, using the Isserlis theorem (a.k.a. Wick theorem). For example, the three-conductance correlator vanishes,

⟨δG_A · δG_B · δG_C⟩ = 0,    (3.10)

whereas the four-conductance correlator divides into three terms:

⟨δG_A · δG_B · δG_C · δG_D⟩ = ⟨δG_A · δG_B⟩⟨δG_C · δG_D⟩ + ⟨δG_A · δG_C⟩⟨δG_B · δG_D⟩ + ⟨δG_A · δG_D⟩⟨δG_B · δG_C⟩.    (3.11)

The non-Gaussian contributions to these higher-order terms tend to be of order (p_F l_tr/ℏ)² times smaller than the typical Gaussian value, as they necessarily involve at least two more quantum crossings[52].

3.4 Coherence with spin, isospin, and pseudospin

So far in this chapter, we have focussed on coherence for scalar particles. In graphene however it is absolutely necessary to consider isospin, and with practical devices we must also account for the effects of pseudospin-coupled disorder. The spin degree of freedom also comes into play in Chapters 5 and 7, complicating matters even further.

Appendix E shows how dephasing modes naturally arise when we try to generalize wave correlations ⟨ψ₁ψ₂*⟩ to include the evolution of internal state (spin, isospin, or pseudospin). Each dephasing mode is associated with a dephasing rate and may have an energy offset. The final expression for the weak localization conductance G_WL = ⟨G⟩ − G_sc appears as a sum over these modes,

G_WL = G_WL-1 + G_WL-2 + · · · + G_WL-N,

where each mode G_WL-i can be written in terms of the scalar solution, such as (3.3). The conductance fluctuation correlation ⟨δG_A δG_B⟩ also breaks apart into distinct modes:

⟨δG_A δG_B⟩ = F₁^AB + C₁^AB + F₂^AB + C₂^AB + · · ·
+ F_N^AB + C_N^AB,

and likewise each term is written in terms of the scalar solution (Appx. D).

When we consider spin, isospin, and pseudospin together, we potentially have (2 × 2 × 2)² = 64 dephasing modes. Fortunately, the isospin is not an actual degree of freedom (it is locked to momentum) and so at most we have N = g_s² g_v² = 16 modes. Sixteen modes is still unwieldy, so we first focus on pseudospin alone (at most g_v² = 4 modes).

Figure 3.9: Value of the weak (anti-)localization correction for graphene in a magnetic field [G_WL(B) in units of e²/h, plotted for B from 0.01 mT to 1000 mT; curves labelled "Intrinsic antilocalization", "Ordinary localization", and "Graphene device"]. These curves were computed from (3.12) or (3.3) for a 2D device with aspect ratio W/L = 1, using B_φ = 0.05 mT and B_tr = 90 mT in each case. Field scales are defined as B_i = ℏ/(4eDτ_i) for i = {φ, iv, zv, tr}. The upper curve corresponds to no pseudospin disorder (B_zv = B_iv = 0), while the solid curve has B_zv = 30 mT, B_iv = 1 mT.

3.4.1 Isospin and pseudospin dephasing

The fact that isospin is locked to momentum has an important consequence for weak localization[21]: it changes the sign of weak localization from localization (G_WL < 0) to antilocalization (G_WL > 0, though see the discussion of pseudospin below). In order to back-scatter, the isospin must be reversed, but the phase of the final result depends on whether the isospin was rotated by π (counterclockwise) or −π (clockwise). Therefore the counter-propagating loops determining weak localization (Cooperons) gain a sign of −1; this sign indicates destructive (rather than constructive) interference for backscattering. As a result, the probability of return to the origin is reduced compared to the semiclassical value. The rate of diffusion (D) is increased, as is the conductance (Fig. 3.9).

Pseudospin-coupled disorder (Sec. 2.2.3) changes the story yet again.
Including pseudospin disorder, the expression for the weak localization conductance contribution separates into four dephasing modes (two of which are identical)[21]:

G_WL(B) = G₁(B, τ_φ⁻¹) − G₁(B, τ_φ⁻¹ + 2τ_iv⁻¹) − 2G₁(B, τ_φ⁻¹ + τ_iv⁻¹ + τ_zv⁻¹),    (3.12)

where G₁ is the single-mode WL conductance seen in Sec. 3.2 [e.g., (3.3) for the 2D regime]. The sign change from isospin is incorporated into (3.12). Valley-distinguishing disorder (associated with scattering rate τ_zv⁻¹) strongly suppresses the intrinsic antilocalization, and intervalley scattering disorder (rate τ_iv⁻¹) causes the return of weak localization. The weak localization dip is limited in its sharpness only by τ_φ⁻¹, the rate of time reversal symmetry breaking. Figure 3.9 shows how weak localization appears in graphene for realistic values of the scattering rates. Note that at low magnetic fields the magnetoconductance shape is quite similar to the ordinary localization case.

For conductance fluctuations, the isospin and pseudospin give less dramatic effects, as there is no sign change. Like weak localization, there are four modes; two modes are suppressed by τ_φ⁻¹ + τ_iv⁻¹ + τ_zv⁻¹, one mode by τ_φ⁻¹ + 2τ_iv⁻¹, and the last mode is only suppressed by τ_φ⁻¹. The rates τ_zv⁻¹ and τ_iv⁻¹ have exactly the same values as in weak localization, however τ_φ⁻¹ may be different[34, 35]. The pseudospin-dephased modes are suppressed and their fluctuations are slow in magnetic field. Due to thermal smearing effects, however, these modes can still give a significant contribution to the fluctuation variance, and so they can only be ignored in certain circumstances (see discussion in Sec. 8.3.5).

3.4.2 Spin dephasing

The inclusion of the real spin in weak localization and conductance fluctuations is considerably more complicated than pseudospin. For small magnetic fields, we should consider spin-orbit interactions and magnetic defects[16, 63].
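Equation (3.12) is easy to explore numerically. The sketch below composes the single-mode 2D result (3.3) into the graphene mode sum, using the field scales quoted in the Fig. 3.9 caption (B_φ = 0.05 mT, B_tr = 90 mT, B_iv = 1 mT, B_zv = 30 mT); it reproduces the sign flip from intrinsic antilocalization to ordinary localization when pseudospin disorder is switched on:

```python
import numpy as np
from scipy.special import digamma

def g1(B, B_rate, B_tr=0.09, W_over_L=1.0):
    """Single-mode 2D WL conductance, Eq. (3.3), in units of e^2/h,
    written in terms of field scales B_i = hbar/(4 e D tau_i) (tesla)."""
    return -(W_over_L / np.pi) * (digamma(0.5 + B_tr / B)
                                  - digamma(0.5 + B_rate / B))

def g_wl_graphene(B, B_phi, B_iv, B_zv):
    """Eq. (3.12): since dephasing rates add, the field scales add."""
    return (g1(B, B_phi) - g1(B, B_phi + 2 * B_iv)
            - 2 * g1(B, B_phi + B_iv + B_zv))

B = 1e-4  # 0.1 mT
clean = g_wl_graphene(B, 5e-5, 0.0, 0.0)    # no pseudospin disorder
dirty = g_wl_graphene(B, 5e-5, 1e-3, 3e-2)  # B_iv = 1 mT, B_zv = 30 mT
print(clean, dirty)  # antilocalization (>0) vs weak localization (<0)
```

With no pseudospin disorder the mode sum collapses to −2G₁, giving the positive (antilocalization) correction; strong B_zv and moderate B_iv suppress those modes and the surviving term restores the ordinary negative sign, as in Fig. 3.9.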
Kondo effects can suppress the magnetic-defect dephasing at low temperature[49]. At high magnetic fields we also need to consider Zeeman splitting of the energy levels for the mobile electrons, as well as the polarization and precession of the magnetic defects[33, 50]. Because of all these effects, there are a number of subtleties in spin dephasing, too complex to describe in this section; they will be discussed in the remainder of the thesis as they become relevant.

Fortunately, it is generally possible to treat spin dephasing separately from pseudospin dephasing. Due to the strong pseudospin disorder, the pseudospin-related dephasing modes play a minor role, and so the spin physics in graphene are much like those in an ordinary electron gas without pseudospin or isospin.[33] Thus, it is possible to model the effects of spin dephasing in graphene using the established theories regarding conventional semiconductors.

Chapter 4

Introduction to experiments

The following three chapters will describe a series of experiments on graphene, carried out at UBC in the Quantum Devices laboratory. The apparatus and measurements were very similar in all of these experiments, and so this chapter will give a short and basic description of the common measurement details. Appendix B goes into further technical detail about the experimental setup.

4.1 Sample preparation

We fabricated graphene devices using the famous Scotch tape exfoliation technique, the same technique that was used to make the very first graphene samples.[5] Graphenes were exfoliated onto dies cut from Si wafers with an outer ~280 nm layer of SiO2 (this is the most common substrate for graphene devices). This type of substrate performs two roles: the SiO2 thin film provides a good background to allow the identification of the graphene layers in an optical microscope, and the conductive Si substrate allows us later to capacitively control the graphene charge carrier density n_s (recall Sec.
2.3.2).

After exfoliation, and once we had identified a suitable graphene monolayer (Fig. 4.1(a)), we used electron beam lithography with poly(methyl methacrylate) resist to deposit Cr/Au films in a designed wire pattern. These metal films form Ohmic electrical contacts to the graphene where they overlap it. For the last device (Chapter 7) a further step of electron beam lithography was used to mask the graphene for an oxygen plasma etch step, which allowed us to shape the graphene into a well-defined channel (Fig. 4.1).

Once the dies were prepared, we bonded them into chip carriers (Fig. 4.2), allowing the graphene device to be easily inserted into corresponding sockets in our measurement equipment. Typically, before measuring, we annealed the graphene device at 200 °C in air to evaporate water and other volatiles. The later device was also annealed before measurement, at 400 °C in a reducing atmosphere, to remove resist residue.

Figure 4.1: Optical microscope images of a graphene device, (a) before and (b) after lithography (metal film deposition and plasma etching); labelled features include the graphene, graphite, SiO2, Au/Cr film, and etched gaps; scale bars 10 µm. This device (specifically, the smaller channel on the right) was measured in the final experiment (Chapter 7).

Figure 4.2: Photograph of chip carrier with die mounted and wire bonded, penny for scale. Wire-bonding pads are visible as rows on the die.

Figure 4.3: Cross-sectional view of the low-temperature measurement setup (diagram not drawn exactly to scale), showing the boiling helium bath (4.2 K), mixing chamber (20 mK), low- and high-frequency filter modules, radiation shield, device (chip carrier), large magnet (B_∥), small magnet (B_⊥), wire cooling module, and vacuum can (roughly 8 cm across).
The measurement wiring path and the locations of the superconducting magnet coils are indicated.

4.2 Low-temperature measurement setup

The primary measurements were all carried out with the graphene device inside an Oxford Instruments ³He:⁴He dilution refrigerator. Figure 4.3 shows the low-temperature part of the measurement setup. After inserting the device, the initially warm refrigerator is covered in a vacuum can and together the assembly is cooled in a boiling ⁴He bath. The bath contains a large superconducting magnet which supplies a vertical magnetic field component (B_∥, up to 12 T); this is approximately in the plane of the graphene since the device is oriented vertically. A secondary, smaller superconducting magnet is glued to the vacuum can and supplies a horizontal magnetic field component (B_⊥, up to 200 mT), aligned to the normal of the device plane. The small magnet was used to correct the alignment of the larger magnet (see Sec. B.2.1).

Dilution refrigerators are able to reach temperatures below 0.02 K. Ensuring that the device is able to reach that temperature, however, is not trivial, since the device needs to be connected to the outside world in some way in order to perform measurements. We have engineered a series of wiring filters and an electromagnetic radiation shield (Fig. 4.3) to block high frequency interference and thermal noise that threaten to overheat the device, described in further detail in Sec. B.3. Since the device is cooled entirely through its wiring, all of the measurement wires pass through a wire cooling stage just before connecting to the device.

Figure 4.4: Two-terminal and four-terminal conductance measurements. Black filled circles (•) indicate the Ohmic contacts to the graphene flake, which is represented by the rightmost resistor.
These were designed to optimize the dissipation of heat from, e.g., eddy currents and magnetocaloric heating due to magnetic field sweeps (see Sec. B.3.5).

4.3 Electrical measurements

The experimental data set presented in this thesis consists of measurements of conductance under various conditions (fields, temperatures, bias voltages, gate voltages). We used two different approaches to measure the conductance.

One approach to measure conductance is the two-terminal setup (Fig. 4.4): a voltage V_bias was applied to one of the device terminals, and another terminal was grounded through a current meter that observes the current I. The other device electrodes, if present, are left floating. Although this immediately yields a conductance, I/V_bias, the result depends on the resistance of the sample wiring plus the contact resistance. If we know these series resistances (total R_s) then we can compensate to obtain the conductance of the graphene itself: G = (V_bias/I − R_s)⁻¹. The problem in this approach is that the device's contacts add an intrinsic contribution to R_s which cannot be separated from the graphene resistance, and so typically one can only make an educated guess as to the value of R_s.

More often a four-terminal approach was used (Fig. 4.4); this allowed us to measure a specific segment of the graphene without the measurement being affected by the series resistance. A current I_bias is forced into one of the device terminals, and comes out of another terminal that is grounded. A voltage meter is used to measure the voltage difference V between two other terminals of the same device. The conductance in this case is defined as G = I_bias/V. (In fact, sinusoidal biases were applied to avoid certain electrical errors; see Sec. B.2.2.)

The overall doping of the graphene (Sec. 2.3.2) was controlled using a backgate. The backgate voltage, V_BG ≈ −80 V to
The backgate voltage VBG, in the range −80 V to +80 V, was applied through one of the sample wires that had been bonded to the conductive Si substrate.

To study temperature dependences, an electrical heater was used to heat up the mixing chamber of the dilution refrigerator, and we used the vendor-supplied RuO2 resistance thermometer to measure the resulting temperature of the mixing chamber. It was assumed that the sample holder would thermalize to the mixing chamber; for the temperature of the graphene itself, it was necessary to correct for the local heating of the graphene by Joule heating from the electrical biases (see Sec. B.2.3).

Chapter 5
Experiment: Spin-splitting

This chapter describes my first successful experiment in graphene, involving spin-splitting of conductance fluctuations in graphene.[1]

At the time of this experiment (late 2008) the spin properties of graphene were starting to be explored for the first time.[14] We were keen to use our magnet setup to observe spin physics in mesoscopic phenomena. These types of experiments had been carried out successfully in metals and semiconductors previously,[64-66] and were something that my supervisor had previous experience with.[67] There was also the possibility of significant band-structure changes induced by the in-plane field, an effect that had been predicted long before the realization of graphene devices.[6]

The experiment described in this chapter gave a pleasant surprise: not only did we see the usual effect on conductance fluctuations, a decrease in their amplitude, but we also saw the direct Zeeman splitting of the conductance fluctuations. This splitting had never been seen before in other materials. The predicted upward conductance shift from band-structure changes was not visible, a result of the large amount of disorder. Instead we saw a weak decrease in conductance due to the graphene's ripples, an effect that we investigated more carefully in a subsequent experiment (see next chapter).

The following text is adapted from Ref.
[1], an article published by the Nature Publishing Group.

5.1 Experimental setup

The overall experimental setup is described in the previous chapter (Chapter 4). At this point in time, the filtering and wire cooling stages had not been installed in the refrigerator, and we were not especially careful about bias overheating. As a result, although the dilution refrigerator temperature was 20 mK, the temperature of the device may have been significantly higher (100-300 mK); in any case, the heightened temperature was not relevant to the present study.

Figure 5.1: Experimental setup. (a) Schematic of single-layer graphene device, showing the orientation of the magnetic fields B∥ and B⊥. (b) Two-terminal conductance, G, measured as a function of backgate voltage VBG at 20 mK, showing the Dirac conductance minimum. Inset: a sample of conductance fluctuations over a narrow range of gate voltage, at B⊥ = B∥ = 20 mT.

The conductance was measured in a two-terminal configuration assuming a series resistance Rs = 3.2 kΩ. This value of series resistance was chosen such that the conductance increased linearly for VBG < 0 (Fig. 5.1b), as that is the normal behaviour for graphene. It can be seen in Fig. 5.1b, however, that the conductance did not increase symmetrically for electron and hole doping. This is a natural consequence of the pinning of carrier density under the contacts, leading to a variation in contact resistance.[68] In any case, the exact value of conductance was not important to this study, as the effects were normalized and thus insensitive to Rs.

The primary device (called device A) and the field orientation are sketched in Fig. 5.1a. As can be seen in Fig. 5.1b, the conductance showed a rich pattern of fluctuations (on top of the semiclassical background conductance) as the gate voltage was changed. As was described in Sec.
3.3, these fluctuations are a random interference pattern, appearing universally in small diffusive conductors. These conductance fluctuations are the subject of this study.

5.2 Direct observation of spin splitting

The electrical conductance G of graphene includes contributions from spin-up and spin-down carriers. In the absence of spin flips, each electron maintains its spin, so we can decompose the conductance into spin-up and spin-down parts: G = G↑ + G↓. The fluctuations in G↑ and G↓ do not depend directly on VBG but rather on the Fermi wavelengths, and therefore the carrier densities, of the spin-up and spin-down electron populations

Figure 5.2: Zeeman effect in graphene. (a,b) Diagrams of the spin-splitting effect. A simulated conductance trace at zero field (b) splits into two offset contributions from spin-up and spin-down at finite field (a), giving doubled features in the total conductance. (c,d) Two examples of conductance fluctuations spin-splitting in an in-plane field, taken over two ranges of gate voltage (B⊥ = 80 mT). For both images, a smooth background was subtracted to highlight fluctuations. The offset (indicated by dashed lines as guides to the eye) due to spin is larger around −6 V compared with −2 V because the density of states increases with density: the dashed lines correspond to ν values of (c) 12×10⁹ and (d) 7×10⁹ cm⁻²/meV.

respectively (Sec. 3.3). At zero field, spin-up and spin-down carriers have the same density, n↑ = n↓, and we expect G↑ = G↓ (Fig. 5.2b).

Applying a magnetic field partially polarizes the graphene: it induces a difference between spin-up and spin-down carrier densities. The total density ns = n↑ + n↓ = α(VBG − V0) is set by the backgate voltage; the difference in the densities, n↑ − n↓ = ½ν gμB B, is set by the magnetic field, where ν
is the density of states and gμB B is the Zeeman energy. Together these yield

n↑/↓ = (α/2)(VBG − V0 ± V_BG^offset/2),  where V_BG^offset = (1/α) ν gμB B.   (5.1)

As a result, spin-up and spin-down conductance contributions at finite field are offset in gate voltage by V_BG^offset, leading to Zeeman splitting of interference features in a gate voltage trace G(VBG) (Fig. 5.2a).

Zeeman splitting of experimental graphene conductance fluctuations can be seen in Figs. 5.2c and 5.2d. The 'V' shapes in the data correspond to spin-resolved conductance: left-moving (right-moving) arms reflect interference for a particular density of spin-down (spin-up) carriers. Although conductance fluctuations have been observed for several decades in a wide variety

Figure 5.3: Autocorrelation extraction of spin splitting. (a) Features are doubled in an image of conductance fluctuations at B∥ = 8 T. (b) Autocorrelations of the windows indicated in (a) show that the spin-split offset (the location of the autocorrelation side-peak) changes from V_BG^offset = 0.22 V at VBG ≈ −16 V, to V_BG^offset = 0.19 V at VBG ≈ −12 V. The autocorrelation function is symmetric, so side-peaks appear at ±V_BG^offset.

of materials, a direct measurement of their Zeeman splitting has never before been reported. Several factors must conspire to make this effect visible: a low density of states enhances the visibility of conductance fluctuations and reduces the gate voltage offset between spin-split features; a small spin-orbit interaction is required for the approximation of two independent spin populations to be valid; atomic-scale flatness and precise alignment of the in-plane field reduce the effective Aharonov-Bohm flux entrained by an in-plane magnetic field.
The first three factors are naturally present in graphene; the fourth was enabled by a two-axis magnet (see Sec. 5.5, Sec. B.2.1).

5.3 Autocorrelation analysis of splitting

A statistical analysis of the spin-split offset at particular values of B∥ was obtained using an ensemble of traces, δG(VBG), collected for a range of out-of-plane fields, 3 mT ≤ B⊥ ≤ 120 mT (≪ B∥). The offset is somewhat visible in the raw conductance data (Fig. 5.3a), but can be seen more clearly by computing an autocorrelation f(ΔVBG) = ⟨δG(VBG) δG(VBG + ΔVBG)⟩. Side-peaks in the autocorrelation at ΔVBG = ±V_BG^offset (Fig. 5.3b) reflect similar features offset in the conductance traces (Fig. 5.3a) due to the difference in spin-up and spin-down carrier densities.

The side-peaks shifted outward with increasing B∥ (Fig. 5.4a), while the height of the central peak (f(0), the variance of the fluctuations) dropped by a factor of two (Fig. 5.4b). The drop in f(0) reflects a suppression of conductance fluctuations when the Zeeman energy, gμB B, exceeds the energy broadening due to temperature or dephasing. This effect has been used to characterize spin degeneracy in other systems,[64-67] where splitting could not be observed directly. The suppression factor of two indicates that spin degeneracy was intact at zero field, after taking into account broken time-reversal symmetry due to the small out-of-plane field. The shift of the side-peak location, V_BG^offset, with B∥ was expected from Eq. (5.1), a statistical confirmation of the linear spin-splitting seen in Figs. 5.2c,d.

Using Eq. (5.1), the density of states can be extracted directly from V_BG^offset at a given field. We take g = 2 for graphene[69-71] and the value α = 8.05 ± 0.05 × 10¹⁰ cm⁻²/V determined from quantum Hall measurements of our device. V_BG^offset(VBG) was recorded over the full gate voltage range by computing the autocorrelation f(ΔVBG) within a sliding window in VBG (Fig. 5.4c).
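The offset and autocorrelation analysis above can be demonstrated on synthetic data. The sketch below assumes a toy fluctuation pattern (a fixed sum of random sinusoids standing in for the real interference pattern); all parameter values are illustrative, not measured ones.

```python
import numpy as np

# Toy fluctuation pattern vs gate voltage.
rng = np.random.default_rng(0)
freqs = rng.uniform(5.0, 40.0, 60)        # fluctuation "frequencies" (1/V)
phases = rng.uniform(0.0, 2 * np.pi, 60)
amps = rng.normal(size=60)

def delta_g(v):
    """Smooth pseudo-random fluctuation pattern of gate voltage."""
    return np.sum(amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * v
                                         + phases[:, None]), axis=0)

# Per Eq. (5.1), the spin-up and spin-down contributions are the same
# pattern offset in gate voltage by +-V_BG^offset/2.
v = np.linspace(-2.0, 2.0, 4001)
v_offset = 0.20                                    # V, assumed
g = delta_g(v - v_offset / 2) + delta_g(v + v_offset / 2)
g -= g.mean()

# Autocorrelation f(dV) = <dG(V) dG(V + dV)>; side peaks at +-V_BG^offset.
f = np.correlate(g, g, mode="full") / len(g)
dv = (np.arange(len(f)) - (len(g) - 1)) * (v[1] - v[0])
window = (dv > 0.1) & (dv < 0.4)
side_peak = dv[window][np.argmax(f[window])]
print(side_peak)       # recovers a value close to the 0.20 V offset

# Eq. (5.1) converts the side-peak position into a density of states,
# nu = alpha * V_BG^offset / (g * mu_B * B).
alpha = 8.05e10        # cm^-2 / V
mu_B = 5.788e-5        # eV / T
nu = alpha * side_peak / (2.0 * mu_B * 8.0)        # g = 2, B = 8 T
print(nu)              # ~1.7e13 cm^-2/eV for this toy offset
```

The same windowed-autocorrelation step, applied to real δG(VBG) traces, is what produces the sliding-window maps discussed in the text.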
The resulting square-root lineshape can be compared to the density of states expected from graphene's dispersion relation (see (2.20) and Sec. 2.3.2),

ν(VBG) = (2/ħvF) √(α/π) √|VBG − V0|,   (5.2)

with a Fermi velocity vF that would be independent of density for an ideal Dirac band structure.

Because V_BG^offset can be determined very accurately from f(ΔVBG), spin-split fluctuations provide an accurate measure of vF. Experimental error bars were typically one or two percent of the measured value (Fig. 5.4d), dominated by uncertainty in the charge neutrality point, V0.¹⁸ Both devices showed vF > 9×10⁵ m/s away from the Dirac peak, with vF > 1.05×10⁶ m/s throughout most of the gate voltage range for flake A (Fig. 5.4d) and for the electron-doped gate-voltage range of flake B (see Fig. 5.5). These values are similar to those reported elsewhere[43, 72, 73] but considerably larger than the expected vF ≈ 0.8×10⁶ m/s from non-interacting band structure

¹⁸The value of V0, which is the gate voltage VBG required to dope the graphene to charge neutrality (Sec. 2.3.2), was determined from the minimum of the conductance vs. VBG curve (Fig. 5.1).

Figure 5.4: Magnetic field- and gate-dependence of spin splitting. (a) Autocorrelations, f(ΔVBG, B∥), of fluctuations in the range VBG = −5…−4.4 V, averaged over an ensemble of traces for 3 mT ≤ B⊥ ≤ 120 mT, demonstrate the in-plane field dependence of the side-peak location, V_BG^offset. The dashed red line is a fit to V_BG^offset, and corresponds to a density of states 1.04×10¹³ cm⁻²/eV. (b) Variance f(ΔVBG = 0, B∥) (filled squares) and autocorrelation side-peak height f(ΔVBG = V_BG^offset, B∥) (open squares) from (a). (c) Autocorrelations computed for a sliding 0.75 V-wide window in VBG (c.f. Fig.
5.3), and averaged over an ensemble of traces for 3 mT ≤ B⊥ ≤ 120 mT, show how the density of states depends on density. (d) The Fermi velocity vF extracted from the side-peak locations, V_BG^offset, in (c) is enhanced near the Dirac point. Error bars are primarily due to uncertainty in the charge neutrality point, V0 = 1.0 ± 0.3 V.

Figure 5.5: Reproduction of the spin-splitting effect in a second device (flake B). The data here are presented as in Fig. 5.4.

calculations.[74] Enhancements in vF have been predicted due to many-body effects.[75-77]

On the other hand, differences in the Fermi speeds for devices A and B suggest that the apparent Fermi speed in realistic graphene samples also depends strongly on disorder and other sample details. Larger values of vF are observed in Fig. 5.4d at lower densities, similar to a report of infrared spectroscopy measurements,[73] but in flake B the trend was reversed, providing further evidence for sample-to-sample variations in Fermi speed.

5.4 Minimum density of states

Figure 5.6 shows the behaviour of the side-peak near to the Dirac point. Throughout this region we can see a side peak appearing at V_BG^offset ≈ 0.04 V, corresponding to ν ≈ 3.5 ± 0.4 × 10⁹ cm⁻²/meV. These side peaks are not an artifact of the analysis: in the same region we can directly see the splitting of conductance fluctuations (Fig. 5.6b). The scanning probe experiments in Ref. [43] observed a very similar ν ≈ 3×10⁹ cm⁻²/meV near the Dirac point.

Figure 5.6: Minimum density of states near the Dirac point. (a) Sliding-window autocorrelations (like Fig. 5.4c) for data from flake A near the Dirac point, with a 0.2 V window, show a side peak signal at the Dirac point.
(b) Visible splitting of conductance fluctuations with increasing B∥, near the Dirac point; the dashed line corresponds to the ν implied by (a).

5.5 Hints of ripples in graphene

In order to observe V's like those shown in Figs. 5.2c,d, interference features must shift, but not otherwise change, as a function of in-plane field. In other words, the primary influence of the magnetic field on the interference must be through its effect on the densities of spin-up and spin-down carriers, rather than a change in Aharonov-Bohm (AB) flux. The alignment of B∥ was monitored using weak localization (WL), the coherent enhancement of backscattering associated with time-reversal symmetry at B⊥ = 0. This was used to correct the values of B⊥ (see Sec. B.2.1).

In addition to shifting, the WL dip also decreased in magnitude with B∥, by a factor of two at 1 T, and below detectable levels above 4 T (Fig. 5.7ab). The complete collapse of the WL dip, over a field range where the variance of conductance fluctuations decreases only by a factor of two, implies that time-reversal symmetry is broken even by an in-plane field.[78, 79] The disappearance of the symmetry g(B⊥) = g(−B⊥) at finite B∥ (Fig. 5.7cd) provided further evidence of time-reversal symmetry breaking.[79] Similar phenomena have been observed in semiconductor 2D electron gases, and have been associated with finite thickness and nanometer-scale undulations

Figure 5.7: Loss of time-reversal symmetry due to in-plane field. (a) Weak localization in the averaged conductance (over VBG = −5…−3 V) is present at B = 0 but disappears for increasing B⊥ or B∥. This indicates a breaking of time-reversal symmetry by either component, but at very different field scales. Curves at different B∥ are vertically offset to align G(B⊥ > 10 mT). (b) Negative magnetoconductance for B⊥
> 10 mT is unaffected by B∥. The box encloses the range from (a). (c,d) Conductance is symmetric in B⊥ at B∥ = 0 (c) but not at B∥ = 2 T (d), providing further evidence of time-reversal symmetry breaking by the in-plane field.

in the 2DEG,[78] allowing B∥ to thread an AB flux through the conductor. The B∥-scale of WL collapse in graphene corresponds to an effective thickness of ≈ 1 nm,¹⁹ in agreement with previous measurements of the intrinsic ripple size in graphene sheets.[80, 81]

5.6 Retrospective

Since this experiment I have gained a greater understanding of what determines the autocorrelation function of conductance fluctuations in this type of experiment (see Sec. 8.3, Appx. D, Appx. E). Besides the basic picture described in the preceding text, there are a few more details to consider:

• Spin-orbit interactions and/or magnetic defects will cause spin-up electrons to behave differently from spin-down electrons. This causes dephasing of the side-peak correlation terms,[50] especially at high magnetic fields when the magnetic defects have been polarized. Thus the magnetic disorder detected in graphene (Chapter 7) may explain why the side peak height in Fig. 5.4b is weaker than expected.

• In this experiment, the background subtraction procedure was quite aggressive, which helped to clear up some of the long-range random fluctuations in Figs. 5.4ac, 5.5. Background subtraction unfortunately causes biases that shift the autocorrelation downwards (Sec. 8.3), giving another reason for the reduced height of the side peaks in Fig. 5.4b.

• In Fig. 5.4 and Fig. 5.6 the side peaks disappear near the charge neutrality point. In Fig. 5.5 they seem to be completely absent at charge neutrality. It is possible that spin disorder is stronger at low carrier densities, but a more likely explanation is that the electrostatic disorder pattern changes strongly with gate voltage in this region. As carrier densities become lower, the electrons' ability to screen disorder is reduced.
Thus, when VBG is changed in this region, it does not result in a uniform shift of the potential landscape E0(r) across the graphene. Since the fluctuations are so sensitive to disorder, they could randomize on a shorter VBG scale than the spin-splitting. The screening effect may also reduce the side peak height even at higher backgate voltages.

¹⁹This 1 nm number was calculated using a crude approach. A more careful analysis of the same data will be shown in the next chapter.

• Polarized magnetic defects generate a mean-field exchange, which modifies the Zeeman energy that determines the location of the side peaks.[50] This means that at high fields (saturated defect polarization) we expect the Zeeman energy to be

EZ(B∥) = gμB B∥ + Em.   (5.3)

The data in Fig. 5.4a essentially measure this dependence EZ(B∥). The linearity of the side peaks' position with B∥ indicates that Em plays a minor role. It is possible that Em is a significant fraction of EZ at low fields (B∥ ≲ 1 T), but certainly not at higher fields.

5.7 Conclusion

The ability to distinguish conductance fluctuations associated with spin-up transport from those associated with spin-down transport, using the magnetic field dependence of their position in gate voltage, may enable the development of interference-based spin filters in graphene. The maximum spin polarization of current injected through such a device is set by the ratio of conductance fluctuation amplitude to the conductance itself.
Although this ratio was typically < 10% in the devices measured here, it could be increased by using constrictions or alternative device geometries to decrease the overall conductance, while the amplitude of conductance fluctuations remains large as long as the device size does not exceed the phase coherence length.

Chapter 6
Experiment: Ripples and random vector potentials

During the course of the 2008 experiment we noticed that the in-plane magnetic field was causing the loss of time-reversal symmetry in our graphene device, evident as a significant suppression of weak localization (Fig. 5.7). At the time we did not know precisely how to analyze this effect, although we attributed it to the graphene not being flat, allowing the in-plane field to couple to the electrons' orbital freedom (such an effect had been seen before in rough 2DESs such as Si inversion layers[78]). It was known that graphene devices, though smooth on the atomic scale, have a significant nanometer-scale roughness called 'ripples'.[80-82] Devices such as ours inherit this roughness from the SiO2 substrate.[83]

Figure 6.1 shows a visualization of the ripples and the orbital coupling of the in-plane field. The uniform field B∥, applied in the average plane of the graphene, has a perpendicular component δB⊥ that varies randomly from place to place, depending on the local slope. This field δB⊥ can be described by a random vector potential;[18] the electrons are able to gather an Aharonov-Bohm phase (e/ħ)∮ dℓ·A from this random vector potential, which would be zero if the graphene were flat.

Figure 6.1: Simulation of a rippled graphene sheet with a correlation length, R, ten times its rms height, Z. The uniform in-plane field B∥, applied to the rippled topography of graphene, leads to a random surface-normal field δB⊥. The Z/R ratio shown is exaggerated compared to real graphene. [© American Physical Society[2]]
Figure 6.2: Experimental setup. (a) Schematics of graphene devices A (left) and B (right), showing the orientation of the applied fields B∥ and B⊥, and the electrical measurement setups (two-probe for A, four-terminal for B). Unused/broken electrodes are indicated by dashed edges. Scale bars are 5 μm. (b) Conductance G(ns) at B = 0, flake B. [© American Physical Society[2]]

We set out in 2009 to look more carefully at the ripple-related effects of an in-plane field, with measurements on a second device (called flake B). By taking precise measurements of the weak localization magnetoconductance curve, we were able to confirm the theoretically expected effect:[18] that the loss of time-reversal symmetry from B∥ can be described by an effective dephasing rate τφ⁻¹ ∝ B∥².

We had also noticed that the conductivity would generally decrease at high B∥, though by a small amount. This too could be attributed to the random magnetic field, in this case through semiclassical scattering by the random Lorentz forces. This motivated a careful study of the theoretically expected effect, where it was found that the change in conductivity should depend on the angle between the in-plane field and the current (Sec. 8.1). A final follow-up measurement confirmed that this anisotropy does occur. Interestingly, by combining the two measurements (dephasing and scattering), we could arrive at specific values of Z and R, the ripple height and ripple length (Fig. 6.1).

This chapter describes results that we published in Physical Review Letters.[2] Much of the following text is © American Physical Society. The text has been adapted to fit in with the overall thesis.

6.1 Experimental setup

The experimental setup in this experiment was identical to the previous chapter (Chapter 5). The data come from the same two graphene devices (called A and B), and we re-analyzed the device A data from Fig. 5.7.
Figure 6.2 shows the measurement setup.

In this case we were interested in obtaining precisely the value of G, the ensemble-averaged conductance. By definition the conductance fluctuations do not appear in G; however, it is not practical to actually prepare and measure a large ensemble of devices with similar amounts of disorder. We instead averaged the conductance over a number of different gate voltages (i.e. carrier densities). The data consisted of a number N of traces Gi(B) measured at gate voltages VBG = Vi, so the gate-averaged conductance is

⟨G(B)⟩ = (1/N) Σ_{i=1}^{N} G_i(B).   (6.1)

Some residual conductance fluctuations do appear in ⟨G⟩ due to finite N.

In this chapter we have chosen to use the sheet carrier density ns,

ns = (cBG/e)(VBG − V0),   (6.2)

rather than the backgate voltage VBG, since the different measurements had different contamination offsets V0. Using ns allows an easier comparison between the different measurements. The conversion factor (capacitance) was cBG = 8.0×10¹⁰ e/cm²/V.

6.2 Dephasing effect of in-plane field

Figure 6.3 shows typical ⟨G⟩(B⊥) curves for B∥ = 0 as well as B∥ = 4 T. Before we discuss the fitting of these curves, it is important to point out what is the 'background' semiclassical conductance level and what is weak localization. The average conductance can be separated into semiclassical and weak localization contributions (Sec. 3.2),

G(B⊥) = Gsc + GWL(B⊥).   (6.3)

The dominant contribution to this sum is the first term (2.39),

Gsc = (W/L) e² ν D,   (6.4)

where W/L is the device aspect ratio, ν is the density of states, and D = ½vF²τtr is the diffusion constant. We note two properties of the semiclassical conductance: First, Gsc dominates the conductance, so we can use G to estimate D, a number that will be needed later on. Second, Gsc does not
change significantly for small values of B⊥, so the B⊥-dependence of G(B⊥) comes entirely from weak localization.

Figure 6.3: Weak localization magnetoconductance measured at different in-plane fields. Shown are measurements of flake B at low density and various in-plane fields: zero (+), −4 T, and +4 T. Two fits to (6.5) are shown as solid lines; the fitted rates are τφ⁻¹ = 40 ns⁻¹ at B∥ = 0 and 180 ns⁻¹ at B∥ = 4 T, with τiv⁻¹ ≈ 150 ns⁻¹ and τzv⁻¹ ≈ 3000 ns⁻¹. Applying the in-plane field dulls the central dip and decreases the overall conductance. These effects are attributed to dephasing and scattering by the random δB⊥. [© American Physical Society[2]]

As explained in Sec. 3.2.2, in order to extract dephasing rates from weak localization we examine the change in conductance G(B⊥) − G(0) for small values of B⊥. This removes various offsets that are not known precisely. For diffusive, spin-degenerate graphene in the quasi-2D limit, the WL magnetoconductance has been calculated to be (Sec. 3.4, Sec. 3.2.2):[21]

G(B⊥) − G(0) = GWL(B⊥) − GWL(0)
  = (W/L)(e²/πh) [ F(τB⁻¹/τφ⁻¹) − F(τB⁻¹/(τφ⁻¹ + 2τiv⁻¹)) − 2F(τB⁻¹/(τφ⁻¹ + τiv⁻¹ + τzv⁻¹)) ],   (6.5)

where F(z) = ln(z) + ψ(1/z + 1/2) for the digamma function ψ(x). Equation (6.5) depends on four rates {τB⁻¹, τφ⁻¹, τiv⁻¹, τzv⁻¹} characterizing different mechanisms that suppress WL. The diffusive accumulation of Aharonov-Bohm phases from a uniform B⊥ gives τB⁻¹ = 4DeB⊥/ħ. The time-reversal symmetry breaking rate (τφ⁻¹), inter-valley scattering rate (τiv⁻¹), and intra-valley scattering rate (τzv⁻¹) each originate from scattering processes that each break

  Parameter   Units      Flake B:        Flake B:        Flake B:       Flake A:
                         hole            low density     electron       hole
  ns          10¹¹/cm²   −13…−5          −2…2            5…13           −5…−3
  W/L         -          2.4 ± 0.8       2.0 ± 0.7       1.6 ± 0.6      0.7 ± 0.3
  D           m²/s       0.03 ± 0.01     0.025 ± 0.01    0.03 ± 0.01    0.05 ± 0.02
  τφ⁻¹        10⁹/s      11 ± 1          35 ± 8          11 ± 1         11 ± 2
  τiv⁻¹       10⁹/s      70 ± 50         170 ± 70        120 ± 80       20 ± 10
  τzv⁻¹       10¹²/s     5.3 ± 0.4       2.7 ± 0.5       2.1 ± 0.4      4.0 ± 0.3

Table 6.1: Device parameters. D and the rates are extracted from ⟨G⟩(B⊥, B∥ = 0) at 40 mK (e.g. Fig. 6.3), the conductance averaged over the specified density ranges.
[© American Physical Society[2]]

path-reversal symmetry in a different way.[21]

The WL scattering rates were extracted from the measured ⟨G⟩(B⊥) curves by fitting to Eq. (6.5), and are listed in Table 6.1 for the B∥ = 0 case. Values of D (used to scale τB⁻¹), as computed from Gsc ≈ ⟨G⟩, were the primary systematic error, since the aspect ratio W/L was difficult to determine.²⁰ The longest time scale, τφ, may have been saturated by the device dimensions at high doping, with Lφ = √(Dτφ) ≈ 2 μm. The other characteristic lengths, Liv ≈ 600 nm, Lzv ≈ 100 nm, and vFτtr < 100 nm, were not influenced by the sample geometry.

Adding an in-plane field, B∥ = 4 T, changed ⟨G⟩(B⊥) in two distinct ways (Fig. 6.3). First, the dephasing rate was increased, visible as a suppression of the WL dip at small B⊥. Second, Gsc was reduced, causing an overall downward shift in the conductance at large B⊥. These effects can both be attributed to the ripple-induced random vector potential.

The additional dephasing effect of an in-plane field due to Gaussian-correlated ripples was calculated in Ref. [18]. Whereas a uniform B⊥ affects WL through the diffusive rate τB⁻¹, the random vector potential affects WL as a micro-scattering rate (contributing to τφ⁻¹), since the ripples are uncorrelated beyond short distances (R ≪ vFτtr ≈ 100 nm):

τφ⁻¹ → τφ⁻¹ + √π (e²/ħ²) vF Z²R B∥².   (6.6)

Eq. (6.5) was fit to multiple ⟨G⟩(B⊥) − ⟨G⟩(0) curves at finite B∥, allowing only τφ⁻¹ to change from the B∥ = 0 fits [Fig. 6.4(a,b)], in order to extract the B∥ effect.

²⁰As can be seen in Fig. 6.2, the devices were not shaped to control the path of the current, leading to the considerable uncertainty in W/L.
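Equations (6.5) and (6.6) are straightforward to evaluate numerically. The sketch below uses SciPy's digamma function, with parameter values borrowed from Table 6.1 (flake B, low density) purely for illustration; the aspect ratio and Fermi velocity are assumed values.

```python
import numpy as np
from scipy.special import digamma

hbar = 1.055e-34   # J s
e_ch = 1.602e-19   # C

def F(z):
    """F(z) = ln(z) + psi(1/z + 1/2), as in Eq. (6.5)."""
    return np.log(z) + digamma(1.0 / z + 0.5)

def wl_correction(B_perp, D, r_phi, r_iv, r_zv, aspect_WL):
    """Eq. (6.5): G(B_perp) - G(0) in units of e^2/h, for B_perp != 0.
    The r_* arguments are the inverse times tau^-1 in 1/s."""
    r_B = 4.0 * D * e_ch * np.abs(B_perp) / hbar     # diffusive AB rate
    return (aspect_WL / np.pi) * (F(r_B / r_phi)
                                  - F(r_B / (r_phi + 2.0 * r_iv))
                                  - 2.0 * F(r_B / (r_phi + r_iv + r_zv)))

def ripple_rate(r_phi0, B_par, Z2R, vF=1.0e6):
    """Eq. (6.6): dephasing rate enhanced by the random vector
    potential, with Z2R = Z^2*R in m^3."""
    return r_phi0 + np.sqrt(np.pi) * (e_ch / hbar)**2 * vF * Z2R * B_par**2

# The WL dip at B_perp = 50 mT fills in as B_par grows from 0 to 4 T.
dip_0T = wl_correction(50e-3, D=0.025, r_phi=35e9, r_iv=170e9,
                       r_zv=2.7e12, aspect_WL=2.0)
dip_4T = wl_correction(50e-3, D=0.025,
                       r_phi=ripple_rate(35e9, 4.0, 1.7e-27),
                       r_iv=170e9, r_zv=2.7e12, aspect_WL=2.0)
print(dip_0T, dip_4T)   # both positive; the 4 T dip is shallower
```

In a fitting routine, `wl_correction` would be the model function and `r_phi` the only free parameter at each B∥, mirroring the procedure described for Fig. 6.4(a,b).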
Figure 6.4: Increase in dephasing rate from B∥. (a,b) B⊥-magnetoconductance for small B⊥, at various values of B∥. Fits to Eq. (6.5) were computed assuming τiv, τzv, and D are independent of B∥ (as in Table 6.1), while τφ⁻¹ was a free parameter. (a) and (b) correspond to the low-density (Fig. 6.3) and hole-doped regions, respectively. (c) For both devices, the extracted values of τφ⁻¹ increase in proportion to B∥², as predicted from Eq. (6.6). Error bars plotted here do not include the larger systematic error from W/L. [© American Physical Society[2]]

Figure 6.4(c) confirms the Δτφ⁻¹(B∥) ∝ B∥² dependence in Eq. (6.6), with a density-independent Z²R = 1.7 ± 0.5 nm³ extracted for devices A and B (Fig. 6.4); the uncertainty is dominated by the uncertainty in W/L. Although the WL rate τφ⁻¹ is commonly associated with inelastic scattering and loss of phase information, in this case it is enhanced by elastic scattering from the random vector potential, which only scrambles phase information deterministically. The distinction can be seen in conductance fluctuations, which are softened by decoherence but only scrambled by elastic scattering. As reported in Chapter 5, the fluctuations were scrambled by B∥, and only decreased by ½ in variance due to broken spin symmetry.

6.3 Semiclassical scattering from in-plane field

Besides suppressing localization, B∥ caused an overall downward shift in the B⊥-magnetoconductance trace away from zero field (Fig. 6.3).
This shift indicates a change in the semiclassical conductance Gsc; it was isolated from the dephasing effect by examining the B⊥-magnetoconductance for values of |B⊥| > 50 mT (Fig. 6.5), where the total WL correction GWL is essentially unaffected by the changes in τφ⁻¹ (see Sec. 3.2.2). The effect of the random vector potential on the semiclassical conductivity can be understood as a decrease in D due to scattering by the random Lorentz forces (Fig. 6.5(a)). A similar random-field resistivity has been observed in 2DESs subject to random vector potentials originating from nearby magnetic particles or superconducting vortices.[84-86]

This effect is most conveniently measured as a change in resistivity (rather than conductance). The expected magnetoresistivity Δρ(B∥) = (W/L)[1/G(B∥) − 1/G(0)] for Gaussian ripples, due to Lorentz forces from the random vector potential, can be calculated by a Boltzmann approach, assuming kFR ≫ 1 (high doping):

Δρ(n, θ, B∥) = [(sin²θ + 3cos²θ)/4] (Z²/R) B∥² / (ħ|ns|^{3/2}),   (6.7)

where θ is the angle of the current flow relative to B∥. Equation (6.7) is derived in Sec. 8.1.

The density dependence, Δρ(n) ∝ |ns|^{−3/2}, predicted from Eq. (6.7) can already be seen in the unaveraged experimental data (Fig. 6.5(b)), on top of conductance (resistance) fluctuations due to the in-plane field (Chapter 5). The resistivity saturated for |ns| ≲ 10¹² cm⁻², perhaps due to the breakdown of classical scattering when kFR ≲ 1.

Figure 6.5: Anisotropic in-plane magnetoresistivity at B⊥ > 50 mT in flake A at 4 K. (a) Upper: a simulated symmetric bump in the graphene sheet, with the uniform field B∥ applied in the x direction. Lower: The resulting surface-normal field, δB⊥, is antisymmetric (positive on the right). Simulated trajectories show how an electron's x-velocity is randomized more quickly than its y-velocity.
(b) Density-dependence of ?(8 T) ? ?(0 T)at B? = 50 mT, ? = 20?. Predicted large-n behaviour in (6.7) forZ2/R = 0.15 nm shown as dashed curve. (c) Averages ?~|ns|3/2??(n,B?)?nsshow the dependence on the magnitude and direction of B?. The currentpath (depicted in inset) was measured at in-plane field orientations ? ? 20?(?) and ? ? 70? (+). (d) Measurements on a different pair of electrodes(WL ? 1.6) confirm that the anisotropy depends on the anglular difference,?, between field and current. [ c?American Physical Society[2]]756.3. Semiclassical scattering from in-plane fieldof classical scattering when kFR . 1. In order to get a clean measurementof (6.7), we minimized fluctuations by measuring at 4 K and averaging thequantity ~|ns|3/2??(n, ?,B?) over ns = (?3.5 . . .?1) ? 1012 cm?2. Theresults of these averages are shown in Figures 6.5(c,d). It was confirmedthat the magnitude of the effect did not change from 4 K down to 40 mK,though the amplitude of the fluctuations increased at low temperature asexpected.These averages allowed us to probe the ?-dependence implied by (6.7).Flake A was measured with two current paths along ? ? 20? and 70?. Thedevice was then re-cooled in a 90?-rotated orientation, to change ? ? ?+90?.Fits of magnetoresistance curves to Eq. (6.7) gave a range Z2/R ? 0.05?0.2 nm for flake A (Fig. 6.5cd). The measured anisotropy ??(70?)/??(20?)was approximately 0.13?0.01 for one current path (Fig. 6.5c) and 0.26?0.03for the other (Fig. 6.5d), whereas Eq. (6.7) predicts 0.44. In the singlemeasurement of flake B, Z2/R ? 0.02?0.04 nm.6.3.1 Other sources of in-plane magnetoconductanceThe in-plane field couples to spins as well, leading to the possibility of spin-related effects on transport. We can compare the ripple-related magnetore-sistance (6.7) and dephasing (6.6) to the expected spin effects.? Zeeman splitting of the band structure implies altered populations ofspin-up and spin-down electrons (see Chapter 5). 
In ideal graphene this would split the Dirac point and greatly reduce resistance for low carrier density.[6] In disordered graphene the situation is more complicated. At the usual carrier densities, the spin-up and -down populations screen charged impurities less efficiently, leading to an increased resistance[87]

$$\Delta\rho(B_\parallel)/\rho(0) \approx (2\times10^{5}\ {\rm cm^{-2}\,T^{-2}})\,B_\parallel^2/|n_s| \qquad (6.8)$$

for densities larger than the impurity broadening. Quantitatively, this effect is too small to be observed in our measurements.

• Spin-flip scattering off magnetic impurities can lead to decoherence, adding to $\tau_\varphi^{-1}$. This effect would be field dependent, as $B \gtrsim B_i$ will freeze impurities[88], disabling spin-flip dephasing at $B_i \approx k_{\rm B}T/g\mu_{\rm B} \approx 100$ mT. Such an effect would show up as a peak in $\tau_\varphi^{-1}$ at zero field in data such as Fig. 6.4(c). A peak is not observed here; however, see Chapter 7, where the peak is observed.

• Magnetic impurities may also generate a random vector potential by their localized magnetic fields, but the strength of this potential (and hence its contribution to $\Delta\rho$) would be extremely weak and anyway would not change when the impurities align to $B_\parallel$.

6.4 Extraction of ripple size

Using the value $Z^2R \approx 1.7$ nm³ from the analysis in Fig. 6.3 and the range of values of $Z^2/R$ reported above, we have

$$Z = 0.6 \pm 0.1\ {\rm nm}, \qquad R = 4 \pm 2\ {\rm nm} \qquad (6.9)$$

assuming Gaussian-correlated rippling of device A (the spread in $Z^2/R$ is incorporated into the uncertainties for $Z$ and $R$).

The values for $Z$ and $R$ in (6.9) can be compared to values obtained from atomic force microscope (AFM) measurements on our own device as well as other reported devices in the literature. After the measurements described above, AFM scans were performed on device A using an Asylum MFP3D-SA, after annealing the device at 400 °C in a low-pressure N₂/H₂ gas mixture to remove resist residues[81]. These measurements gave $Z = 0.13 \pm 0.02$ nm and $R = 10 \pm 5$ nm; limitations of vibration and drift prevented more accurate measurements.
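The arithmetic behind (6.9) is a simple inversion of the two measured combinations: given $Z^2R$ from the dephasing analysis and $Z^2/R$ from the magnetoresistance fits, $Z = (Z^2R \cdot Z^2/R)^{1/4}$ and $R = \sqrt{(Z^2R)/(Z^2/R)}$. A minimal sketch using central values only (the quoted uncertainties come from propagating the spread in $Z^2/R$, which is not done here):

```python
# Invert the two measured ripple combinations into height Z and length R.
# Central values from the text: Z^2*R ~ 1.7 nm^3 (Fig. 6.3 analysis) and
# Z^2/R ~ 0.1 nm (midpoint of the 0.05-0.2 nm fit range).
z2r = 1.7        # Z^2 * R, in nm^3
z2_over_r = 0.1  # Z^2 / R, in nm

Z = (z2r * z2_over_r) ** 0.25  # ripple height, nm
R = (z2r / z2_over_r) ** 0.5   # ripple correlation length, nm
print(round(Z, 2), round(R, 1))  # -> 0.64 4.1
```

The result is consistent with the quoted $Z = 0.6 \pm 0.1$ nm, $R = 4 \pm 2$ nm.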
AFM measurements on similar devices (graphene/SiO₂) in Ref. [81] gave $Z = 0.19$ nm and $R = 32$ nm. These would imply much smoother ripples than seen in (6.9); see the Retrospective section below for an explanation of this disagreement.

Scanning tunnelling microscope (STM) measurements are able to probe the ripples with atomic resolution, and observe a higher roughness compared to the above AFM measurements. The ripples observed in Ref. [82] gave $Z = 0.35$ nm, $R \approx 5$ nm,²¹ whereas Ref. [83] saw $Z = 0.37$ nm, $R \approx 5$ nm. These observations are closer to what we see in (6.9).

²¹These values are extracted from data provided by the authors of Ref. [82].

6.5 Analogy to strain-related dephasing

Finally, we turn to an important analogy that can be drawn between the effects of an in-plane field and those of strain due to ripples. Since both the in-plane field and ripple strain generate random vector potentials that are directly correlated with ripple topography[18, 20], similar effects can be expected. In particular, strain is expected to suppress weak anti-localization[21], but there is disagreement about how to estimate the magnitude of the effect.[20, 22, 23] In Sec. 8.2 it is argued that the suppression occurs through a short-range dephasing process much like that in Eq. (6.6), and that strain-induced dephasing can fully explain the observed suppression of anti-localization (the large value of $\tau_{zv}^{-1}$).

6.6 Retrospective

At the time of this experiment, it was puzzling why the observed magnetoresistance (Fig. 6.5) was so large: orders of magnitude more than expected given the AFM measurements of the day. It turned out that the AFM measurements were probably all under-resolved[83], and that STM measurements were more accurate. Our own AFM measurements, for instance, were likely influenced strongly by lateral vibrations of the AFM probe. At this point all reliable measurements of graphene ripples on SiO₂ indicate a high roughness, $Z \approx 0.4$ nm and $R \approx 5$ nm.[2, 82, 83] For future quantitative studies of the effects of the rippling, the actual correlation shape (non-Gaussian) of the ripples should be taken into account, to give a closer comparison.

As a side note, this convergence of results lends support to the conclusion that strain is the reason why antilocalization is highly suppressed in graphene (Sec. 8.2).

It is curious that in Fig. 6.4 we see a monotonic increase in $\tau_\varphi^{-1}(B_\parallel)$, whereas later measurements (in Chapter 7) saw the low-field non-monotonic signature of magnetic defects (Fig. 7.7). It could be that in Fig. 6.4 the magnetic scattering rate was not significant for those carrier densities, or that a difference in the treatment of the devices may have resulted in fewer magnetic defects.

6.7 Conclusion

Transport measurements of graphene flakes in an in-plane magnetic field showed effects due to the magnetic flux threaded through the ripples. The use of an auxiliary out-of-plane field allowed two distinct effects to be separated: weak localization suppression (by dephasing) and overall anisotropic magnetoresistance (by Lorentz-force scattering). Besides allowing a determination of the ripples' typical height and length scale, these measurements provide insight as to how other short-range random vector potentials (such as that due to ripple strain) might affect transport in graphene.

Chapter 7

Experiment: The limits of coherence

Around the time of the last experiment, it had become clear that there were some mysteries surrounding decoherence and spin relaxation in graphene. Weak localization experiments had been showing a saturation in the electron phase coherence time at low temperature,[24] and (non-coherent) spin current experiments were detecting significant spin relaxation[14, 15].
Spin relaxation has significant effects in weak localization[16], and so it was expected that there was likely one common cause for both decoherence and spin relaxation: either spin-orbit interactions, or magnetic defects.

Spin-orbit interactions, though purely elastic, can mimic decoherence in weak localization if they only couple to the out-of-plane spin component[16, 33]. The intrinsic spin-orbit term in graphene is of this type, but it is expected to be extremely weak (Sec. 2.1.5). To obtain a sufficiently strong spin-orbit interaction would require either unusual adatoms[47] or, more likely, strong in-plane electric fields near scattering sites (Elliott-Yafet scattering).[89]

The problem of coherence saturation in graphene is akin to that in metals. Over a decade ago, the electron coherence time was observed to saturate in pure metals such as Cu, Ag, and Au[19, 90]. It was eventually found that very dilute magnetic impurities (e.g., Fe or Mn) were responsible for the coherence saturation in these metals. It would be quite strange if such transition metal impurities consistently appeared on graphene devices, but graphene could possess a more general kind of magnetic defect, such as an unpaired spin due to an unsatisfied bond, or some other form of odd-electron localized state.

We decided to sort out this issue of spin-orbit vs. magnetic defects by reexamining the conductance fluctuations, for which spin-orbit interactions cannot mimic decoherence.[52] Once the dust had settled, it was clear that magnetic defects were the main cause of spin relaxation. Surprisingly, we also found evidence of a second decoherence saturation mechanism that is seemingly non-magnetic.

This chapter describes results that we published in Physical Review Letters[4]. Much of the following text is © American Physical Society. The text has been adapted to fit in with the overall thesis.

Figure 7.1: Experimental setup. (a) Simplified device geometry and measurement setup. Purple regions are graphene, gold indicates leads, and light-shaded regions have been etched away. (b) Conductance as a function of gate voltage; the shaded region indicates the studied interval. [© American Physical Society[4]]

7.1 Experimental setup

The experimental setup, again, was as described in Chapter 4. Prior to these measurements, we built the filtering and wire-cooling modules (see Chapter 4) and installed them into two fridges; these ensured that the observed saturating physics were not an artifact of an overheated device. Some of the data come from a second cooldown of the same device in a different, but basically equivalent, cryostat.

In this experiment it was particularly important to be careful about overheating by bias current (Sec. B.2.3). In the first cooldown, overheating had its greatest relevance for the lowest temperatures T = {110 mK, 220 mK, 310 mK} of the CF $B_\perp$-correlation data set discussed in Chapter 7; these temperatures were achieved by applying $I_{\rm bias}$ = 20 nA with $T_{\rm cryostat}$ = {13 mK, 190 mK, 290 mK}. At higher temperatures we used higher currents (up to 45 nA). Similar bias currents were used for the WL measurements (except for the 110 mK data, which used 10 nA). Similar bias currents were used in the second cooldown.

In this case, the graphene flake had been etched into a long conducting channel with narrow graphene channels contacting it along the edge (Fig. 7.1(a)). This eliminated the confusion about aspect ratio (a source of uncertainty in the previous experiment), and allowed us to measure G of the graphene alone: by measuring conductance G in a four-terminal configuration (Fig. 7.1(a,b)), we ensured that the result was not influenced by the gold contacts. The conductance background subtraction procedure is
described in Sec. C.1.

Figure 7.2: Conductance fluctuations and autocorrelations. (a) A typical conductance trace in perpendicular field (310 mK, $B_\parallel = 0$, $V_G = 0$). (b) A typical autocorrelation function (solid curve, with $B_{\rm IP} = 0.45$ mT) and its derivative (dotted curve, no vertical scale plotted). The vertical spike in the derivative at $\Delta B = 0$ corresponds to the noise peak in the autocorrelation. The dashed line indicates the inflection point, used as a measure of coherence through Eq. (7.1). [© American Physical Society[4]]

7.2 Dephasing of conductance fluctuations (CF)

As explained in Sec. 8.3.3, conductance correlations in field contain information about dephasing, but correlations in gate voltage do not. Thus, whereas in Chapter 5 we looked at the conductance fluctuations in gate voltage $\delta G(V_{\rm BG})$, this time we examined the fluctuations in perpendicular magnetic field $\delta G(B_\perp)$ (Fig. 7.2(a)). We measured conductances from ~50 mT to ~150 mT, for several values of $V_{\rm BG}$ over a narrow $V_{\rm BG}$ range (Fig. 7.1), and computed autocorrelations (Sec. C.1). Fig. 7.2(b) shows a typical autocorrelation in perpendicular magnetic field, $f(\Delta B)$. The inflection point of $f(\Delta B)$ provides a robust metric of the conductance fluctuation decoherence rate (Secs. 8.3.3, C.2):

$$\tau_{\rm CF}^{-1} \equiv \frac{2eDB_{\rm IP}}{3\hbar}, \quad \text{where} \quad \left.\frac{d^2f}{d\Delta B^2}\right|_{\Delta B = B_{\rm IP}} = 0, \qquad (7.1)$$

where $D = 0.03$ m²/s is the diffusion constant that was calculated from G.

Figure 7.3: Dependence of the rate $\tau_{\rm CF}^{-1}$ on temperature for $B_\parallel = 0$ and for $B_\parallel = 6$ T. Dotted lines have slope 9.0 ns⁻¹/K and offsets $\tau_{\rm CF}^{-1}(T{=}0) = 10.9$ ns⁻¹ (upper line), $\tau_{\rm CF}^{-1}(T{=}0) = 6.2$ ns⁻¹ (middle line). The lower, dashed line shows how $\tau_{\rm CF}^{-1}$ would appear without saturation, $\tau_{\rm CF}^{-1}(T{=}0) = 0$. [© American Physical Society[4]]

As seen in Fig. 7.3, the temperature dependence of $\tau_{\rm CF}^{-1}$ is linear over the range $T = 0.1$-$1.4$ K with a slope that is close to the value predicted for electron-electron interactions[23, 62], and a nonzero extrapolated offset $\tau_{\rm CF}^{-1}(T{=}0)$ that implies a low-temperature saturation of the dephasing rate.²² The finite $\tau_{\rm CF}^{-1}(T{=}0)$ observed in this experiment indicates the presence of dynamic degenerate defects, that is, defects that do not freeze into a single state as temperature is decreased. The remainder of this work probes the nature of these degenerate defects: are they magnetic, and how strongly do they interact with the conduction electrons (how fast do they change state)?

Performing the CF measurement with an in-plane magnetic field shows clearly that some of the defects are magnetic: dephasing is reduced at $B_\parallel = 6$ T (Fig. 7.3), indicating that the magnetic moments have been polarized to a static configuration and no longer contribute to $\tau_{\rm CF}^{-1}$. Quantitatively, the net change in $\tau_{\rm CF}^{-1}$ with large $B_\parallel$ is the magnetic scattering rate, $\tau_{\rm mag}^{-1} \approx 4.7 \pm 0.5$ ns⁻¹. This rate is, in itself, an important finding, as it corresponds to the spin-flip rate for conduction electrons due to unpolarized magnetic defects.[52]

²²The corresponding length $\sqrt{D\tau_{\rm CF}(T{=}0)} = 1.7$ μm is much smaller than the flake dimensions [Fig. 7.1(a)], confirming that the quasi-2D interpretation (7.1) is correct.

Figure 7.4: Dependence of $\tau_{\rm CF}^{-1}$ on rms total magnetic field $B_{\rm tot} = (B_\perp^2 + B_\parallel^2)^{1/2}$, for various T (second cooldown; T = 0.08, 0.2, 0.5, and 1 K). Curves show the theoretical crossover for a scattering rate $\tau_{\rm mag}^{-1} = 5$ ns⁻¹ from spin-½ magnetic defects with g = 2 (solid), or g = 1 (dotted). Note that the errors are not completely uncorrelated in this case since the conductance fluctuations do not totally randomize from field to field (recall Ch. 5). [© American Physical Society[4]]

The data in Fig. 7.3 thus prove that magnetic defects induce sufficient spin relaxation to explain previous spin transport measurements in monolayer graphene.[14, 15, 89]

The crossover to full defect polarization was analyzed in a second cooldown of this device (Fig. 7.4). The field required to turn off the dephasing grows with temperature, as expected for the thermodynamics of free magnetic moments. We obtain the theoretical curves in Fig. 7.4 by applying the definition of $\tau_{\rm CF}^{-1}$ in Eq. (7.1) to numerically simulated CF with spin-½ defects (Appx. C.3). At high temperatures the behaviour is consistent with the defects having the free-electron magnetic moment. A significant departure is seen at 200 mK and below, indicating the need for a more careful theoretical treatment of the magnetic defects as quantum objects.[49, 50]

Figure 7.5 shows the carrier density dependence of $\tau_{\rm CF}^{-1}$. The low-to-high field decrease in $\tau_{\rm CF}^{-1}$, corresponding to $\tau_{\rm mag}^{-1}$, is observed to be approximately linear in $n_s$.

Figure 7.5: Dependence of $\tau_{\rm mag}^{-1}$ on carrier density (second cooldown, 200 mK; $V_{\rm BG}$ = −40 V, −20 V, and 0). For a given cooldown, extracted $\tau_{\rm mag}^{-1}$ values (inset) depend linearly on gate voltage (or carrier density $n_s$), though the high-field saturation of $\tau_{\rm CF}^{-1}$ is apparently independent of $V_{\rm BG}$.

The fact that $\tau_{\rm CF}^{-1}(T{=}0)$ does not go to zero at high field indicates an additional saturation mechanism that is apparently non-magnetic, with dephasing rate $\tau_{\rm CF}^{-1}(T{=}0, B_\parallel{=}6\,{\rm T}) \approx 6(4)$ ns⁻¹ [Figs. 7.3(7.4) for the first (second) cooldown]. The data presented so far do not allow us to say more about this non-magnetic mechanism. Is it merely device flicker noise that limits CF?
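As a numerical aside, the rates quoted in this section follow from Eq. (7.1) by direct substitution. A minimal sketch (physical constants hard-coded; D and $B_{\rm IP}$ are the values quoted above for the 310 mK trace of Fig. 7.2):

```python
# Evaluate the CF dephasing rate of Eq. (7.1): tau_CF^-1 = 2*e*D*B_IP / (3*hbar).
e = 1.602176634e-19     # elementary charge (C)
hbar = 1.054571817e-34  # reduced Planck constant (J s)
D = 0.03                # diffusion constant (m^2/s), from the text
B_IP = 0.45e-3          # inflection field (T), Fig. 7.2(b)

rate = 2 * e * D * B_IP / (3 * hbar)  # dephasing rate, s^-1
print(round(rate * 1e-9, 1))          # in ns^-1 -> 13.7
```

The ≈13.7 ns⁻¹ result is consistent with the linear trend of Fig. 7.3 at 0.31 K (offset 10.9 ns⁻¹ plus slope 9.0 ns⁻¹/K); this is a consistency check, not a fit.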
Is it a more fundamental inelastic mechanism, such as the two-channel Kondo dephasing that was predicted for metals a decade ago?[91] Along the same lines, it is difficult to ascertain from the CF data whether the magnetic dephasing results from a Kondo-type interaction of a few defect spins strongly coupled to the electron gas, or from a large number of slowly fluctuating magnetic moments. To address these questions we compare the CF results to an analogous measurement based on WL.

7.3 Differences in dephasing for CF and WL

We should be precise about the meaning of the CF decoherence rate $\tau_{\rm CF}^{-1}$, and how it may relate to WL. Conductance fluctuations are dephased by any degrees of freedom in a conduction electron's environment that change faster than the measurement bandwidth (hertz); see Sec. D.4. The rate $\tau_{\rm CF}^{-1}$ thus represents the sum of conduction-electron scattering rates from uncontrollable dynamic sources. At low temperatures, the dominant contributions to $\tau_{\rm CF}^{-1}$ are from other conduction electrons and dynamic defects (magnetic or non-magnetic) in the device. Electron-phonon interactions appear only at much higher temperatures[26].

Weak localization too measures a "decoherence rate", but this rate describes time-reversal symmetry (TRS) breaking (whereas CF probes the unreliability of interference at different times). Like CF, weak localization may be dephased by a dynamic environment, but only when those dynamics occur faster than the dephasing timescale: a cutoff time nine orders of magnitude shorter than the analogous timescale for CF. Unlike CF, weak localization can be dephased by a totally static environment, if the environment does not preserve time-reversal symmetry (e.g., magnetic fields, or spin-flip processes from unpolarized magnetic moments). Spin-orbit interactions do not break time-reversal symmetry; however, a spin-orbit interaction that only couples to out-of-plane spin effectively appears as time-reversal symmetry breaking, since each spin population is independent.[16]

Figure 7.6: Weak localization as a function of temperature. (a) Conductance, averaged over the same $V_{\rm BG}$ range as used in Fig. 7.2. (b) Comparison of characteristic rates $\tau_{\rm WL}^{-1}$ [squares, extracted from (a)] and $\tau_{\rm CF}^{-1}$ (from Fig. 7.3) at $B_\parallel = 0$. The dotted lines are WL and CF rates expected for electron-electron parameter 9.0 ns⁻¹/K, collision rate $\tau_{\rm mag}^{-1} = 4.7$ ns⁻¹ with slow magnetic defects, and non-magnetic offsets of 3.5 ns⁻¹ (WL), 6.2 ns⁻¹ (CF). [© American Physical Society[4]]

7.4 Dephasing of weak localization (WL)

Graphene's magnetoconductance (Fig. 7.6(a)) is typically fit to a WL theory[21] that includes only non-magnetic dephasing mechanisms, but the CF data demonstrate that graphene also suffers from significant magnetic dephasing. If the magnetic defects vary slowly, they distinguish the spin-singlet and -triplet channels of the WL correction,[16, 52, 63] which complicates the fitting. The dephasing can be more reliably characterized by extracting the zero-field magnetoconductance curvature to obtain a single rate $\tau_{\rm WL}^{-1}$, defined as

$$\tau_{\rm WL}^{-1} \equiv \frac{eD}{\hbar}\left(\frac{3\pi}{4}\frac{h}{e^2}\frac{L}{W}\left.\frac{d^2G}{dB^2}\right|_{B_\perp=0}\right)^{-\frac{1}{2}}, \qquad (7.2)$$

where $L/W = 1.05$ is the device aspect ratio and G is the average conductance. For slow unpolarized magnetic defects ($B = 0$),[16] one expects

$$\tau_{\rm WL}^{-1} \approx \left(\frac{3}{2}\left(\tau_{\rm TRS}^{-1} + \tfrac{2}{3}\tau_{\rm mag}^{-1}\right)^{-2} - \frac{1}{2}\left(\tau_{\rm TRS}^{-1} + 2\tau_{\rm mag}^{-1}\right)^{-2}\right)^{-\frac{1}{2}} \qquad (7.3)$$

where $\tau_{\rm mag}^{-1}$ is defined the same as for CF and $\tau_{\rm TRS}^{-1}$ is the summed dephasing rate from other scattering mechanisms that break time-reversal symmetry. For fast magnetic defects, on the other hand, $\tau_{\rm WL}^{-1}$ is simply the sum of rates $\approx \tau_{\rm TRS}^{-1} + \tau_{\rm mag}^{-1}$.[63]

Similar to the conductance fluctuations, $\tau_{\rm WL}^{-1}$ is seen to increase with temperature, and a zero-temperature offset is clearly observed (Fig. 7.6(b)). This confirms the saturation in weak localization dephasing that has been seen in many studies[24, 30, 32]. The common slope with respect to T reflects the equal effect of electron-electron interactions on weak localization and conductance fluctuations[62]; this confirms that definitions (7.1) and (7.2) are not miscalibrated.

The effect of an in-plane field on WL (Fig. 7.7) is more complicated than the analogous measurement for CF (Fig. 7.4) due to graphene's ripples, which convert the uniform in-plane field to a random vector potential (see Chapter 6). This breaks time-reversal symmetry,[18] giving $\tau_{\rm TRS}^{-1}(B_\parallel) = \tau_{\rm TRS}^{-1}(0) + \gamma B_\parallel^2$, where $\tau_{\rm TRS}^{-1}(0)$ is the inelastic dephasing rate from non-magnetic sources and $\gamma$ describes the ripple geometry (c.f. Chapter 6).

Figure 7.7: Dependence of $\tau_{\rm WL}^{-1}$ on $B_\parallel^2$ (second cooldown, 0.08 K; same $V_{\rm BG}$ range as used in Fig. 7.3). The solid line is $\tau_{\rm TRS}^{-1}(B_\parallel) = 4.3\ {\rm ns^{-1}} + (4.0\ {\rm ns^{-1}/T^2})B_\parallel^2$, a fit to the $B_\parallel \geq 1$ T data. Inset: the same data at low field, plotted linearly in $B_\parallel$. [© American Physical Society[4]]

In the data $\tau_{\rm WL}^{-1}$ increases sharply from 0 to 50 mT, which can be explained by the suppression of two WL channels by Zeeman splitting and the resultant transition from Eq. (7.3) to $\tau_{\rm WL}^{-1} = \tau_{\rm TRS}^{-1} + \frac{2}{3}\tau_{\rm mag}^{-1}$.[50] Above 50 mT, $\tau_{\rm WL}^{-1}$ decreases at first as the magnetic defects polarize and their dephasing effect vanishes.²³ For much higher fields ($B_\parallel > 0.5$ T) the defects are fully polarized and $\tau_{\rm WL}^{-1}$ has collapsed to $\tau_{\rm TRS}^{-1}$, giving the $B_\parallel^2$ dependence seen in Fig. 7.7.

²³With CF the magnetic dephasing vanishes because the moments freeze. With WL the dephasing vanishes for a different reason: once the magnetic defects are aligned to a single axis, their induced spin rotations commute and preserve TRS.[50] The behaviour of WL with changing $B_\parallel$ is described in more detail in Sec. C.4.

7.5 Conclusions

Taking CF and WL data together, we can draw several conclusions about the mechanisms of spin relaxation and low-temperature dephasing in graphene (Appx. C discusses these in more detail):

1. Scattering from magnetic defects induces a spin-flip rate $\tau_{\rm mag}^{-1} = 5$ ns⁻¹. This is seen directly as the field-induced suppression of $\tau_{\rm CF}^{-1}$ (Fig. 7.4). The smaller field-induced suppression of $\tau_{\rm WL}^{-1}$ ($\approx 2$ ns⁻¹, Fig. 7.7) is consistent with the weaker contribution of $\tau_{\rm mag}^{-1}$ to $\tau_{\rm WL}^{-1}$ for slow magnetic defects, i.e., those that change slowly on the dephasing timescale but fast enough to dephase CF.

2. Spin-orbit interactions can be excluded as a significant contribution to spin relaxation in graphene. Spin-orbit coupling would generate either antilocalization at $B_\parallel = 0$ or a much larger decrease in $\tau_{\rm WL}^{-1}$ for small $B_\parallel$, depending on the spin-orbit symmetry[33] (see Sec. C.4 for a visualization of the effects of spin-orbit interactions); moreover, the temperature dependence $\tau_{\rm CF}^{-1}(T)$ would be nonlinear (see Sec. 8.3.5). None of these effects are observed.

3. WL and CF each indicate a non-magnetic component to the saturation in dephasing. For the second cooldown, the non-magnetic rate for WL dephasing was $\tau_{\rm TRS}^{-1}(T{=}0) \approx 3.5 \pm 1$ ns⁻¹ after subtracting the contribution from electron-electron interactions, identical to the corresponding rate for CF, $\tau_{\rm sat}^{-1}(T{=}0, B_\parallel{=}6\,{\rm T}) \approx 3.8 \pm 0.2$ ns⁻¹. This indicates that the non-magnetic source breaks time-reversal symmetry, and must therefore change rapidly on the dephasing timescale. For the first cooldown, the CF rate was higher ($\tau_{\rm sat}^{-1}(T{=}0, B_\parallel{=}6\,{\rm T}) \approx 6.2 \pm 0.3$ ns⁻¹) while the WL rate was $\tau_{\rm TRS}^{-1}(T{=}0) \approx 3.5 \pm 1$ ns⁻¹; the higher CF rate may be attributed to defects with dynamics too slow to break time-reversal symmetry.

Both magnetic and non-magnetic dephasing mechanisms limit coherence in graphene below 1 K. The magnetic scattering rate is too large to be explained by remote magnetic moments, requiring instead that the magnetic defects are electronically coupled to the graphene.
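The consistency claimed in conclusion 1 can be checked against Eq. (7.3) directly: plugging the fitted $\tau_{\rm TRS}^{-1} = 4.3$ ns⁻¹ (Fig. 7.7) and $\tau_{\rm mag}^{-1} = 5$ ns⁻¹ into the slow-defect formula predicts the zero-field WL rate, and hence the size of the field-induced suppression. This is a rough check only; it ignores the Zeeman-split intermediate regime between 50 mT and 0.5 T.

```python
# Slow-defect WL rate, Eq. (7.3):
# tau_WL^-1 = [ (3/2)(t_TRS + (2/3) t_mag)^-2 - (1/2)(t_TRS + 2 t_mag)^-2 ]^(-1/2)
t_trs = 4.3  # ns^-1, TRS-breaking rate from the fit in Fig. 7.7
t_mag = 5.0  # ns^-1, magnetic scattering rate from the CF analysis

t_wl = (1.5 * (t_trs + 2/3 * t_mag)**-2 - 0.5 * (t_trs + 2 * t_mag)**-2) ** -0.5
suppression = t_wl - t_trs  # the part that vanishes once the defects polarize
print(round(t_wl, 1), round(suppression, 1))  # -> 6.6 2.3
```

The predicted suppression of ≈2.3 ns⁻¹ is close to the ≈2 ns⁻¹ drop quoted in conclusion 1, whereas fast defects (a simple sum of rates) would give the full 5 ns⁻¹.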
Recent WL data[32] suggest that the magnetic defects may be midgap states at the Dirac point, formed at vacancies or edges. For the non-magnetic dephasing, we can rule out bistable charge systems in the SiO₂ substrate; their broadly distributed level splittings would produce a rate proportional to T.[91] The data instead suggest a class of nearly degenerate non-magnetic defects in the graphene itself, whose microscopic origin is yet to be determined.

Chapter 8

New theoretical results

During the course of my experimental work it was necessary at times to build some theory to accurately model the behaviour of the graphene. A few of these theoretical results turned out to be useful and original enough to merit publication and mention in this thesis:

1. A scattering formalism quantifying the resistance increase due to the in-plane magnetic field threading through the ripples, predicting an anisotropic resistivity.

2. A remark on the rate of valley-dependent scattering events due to the strain in graphene's ripples (these may be the dominant reason why antilocalization is suppressed in graphene).

3. An in-depth numerical investigation showing how to accurately measure coherence times from conductance fluctuations, in quasi-2D systems.

The majority of this chapter is concerned with the third topic, which was published as its own paper.

8.1 Anisotropic magnetoresistance induced by an in-plane field

This section is based on work that was published as a supplement to Ref. [2].

Chapter 6 describes an experiment probing the orbital effects of a magnetic field $B_\parallel\hat{x}$ applied in the plane of a rippled graphene device. This field appears effectively as a random vector potential,[18]

$$A(r) = -B_\parallel z(r)\,\hat{y} \qquad (8.1)$$

where $z(r)$ is the height of the graphene sheet at position $r$. This vector potential appears in the Hamiltonian as a potential term (Sec. 2.2.2)

$$H'(r) = -ev\sigma_y B_\parallel z(r).$$

The ripples $z(r)$ themselves are statistically characterized by a correlation function $C_z(r - r') = \overline{z(r)z(r')}$, and a power spectrum:

$$\tilde{C}_z(q) = \int dr_x\,dr_y\, e^{-iq\cdot r}\,C_z(r) \qquad (8.2)$$

Here we briefly derive the correct form of the resistance change due to scattering by this random potential (these results are published in the supplement of Ref. [2]).

We use the basic semiclassical diffusion formalism described in Sec. 2.4. It is assumed that the disorder is relatively weak, and so we use the Fermi golden rule (2.46) with the eigenstates (2.12). The angular scattering rate (2.42) due to $B_\parallel$ evaluates to:

$$S'(\theta,\theta') = \frac{2\pi\nu_0}{\hbar}\,\sin^2\!\left(\frac{\theta+\theta'}{2}\right)e^2v^2B_\parallel^2\,\tilde{C}_z(q) \qquad (8.3)$$

Here, $q$ is defined by its magnitude, $q = (2p_{\rm F}/\hbar)\left|\sin\frac{\theta-\theta'}{2}\right|$, and direction $\phi_q = \frac{1}{2}(\pi + \theta + \theta')$, where $p_{\rm F}$ is the Fermi momentum controlled by doping (Sec. 2.3.2). Note that this is anisotropic scattering, since we cannot write it in terms of $\theta - \theta'$ alone. As a result, we cannot directly use the isotropic result (2.44) and must examine how $S'$ affects the probability evolution equation (2.43). This is not a trivial problem to solve in general, since the $v_x$ and $v_y$ velocity components in (2.40) no longer necessarily decay as simple exponentials.

To first order, we can look at the effect of (8.3) as a perturbation on the much stronger ordinary isotropic scattering that exists in graphene. Equation (2.43) can be seen as the linear equation $\partial_t P(\theta,t) = S[P](\theta,t)$, where $S$ is a linear operator on these probability functions:

$$S[P](\theta) = \int d\theta'\,\left[S(\theta,\theta')P(\theta') - S(\theta',\theta)P(\theta)\right]. \qquad (8.4)$$

The two important transport modes from the isotropic scattering case are represented as $P_x(\theta) = \cos(\theta)/\sqrt{\pi}$ and $P_y(\theta) = \sin(\theta)/\sqrt{\pi}$. Technically speaking, the anisotropic scattering in (8.3) mixes these modes with other modes (e.g., $\cos(2\theta)$). We neglect these other modes, however, and examine the $S$ operator (now a 2×2 matrix) in the reduced basis of just $P_x$ and $P_y$. These matrix elements are rates $\tau_{ij}^{-1} = -\int d\theta\, P_i(\theta)\,S[P_j](\theta)$.
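The reduced-basis rates just defined can be evaluated numerically as a sanity check. The sketch below (grid resolution and the value of $k_{\rm F}R$ are arbitrary choices; Gaussian ripple correlations and an overall prefactor of 1 are assumed, since only the ratio matters) recovers the exactly threefold anisotropy of the relaxation rates asserted by Eq. (8.5):

```python
import numpy as np

# Numerical check that the ripple kernel of Eq. (8.3) relaxes the P_x mode
# three times faster than P_y, for isotropic (Gaussian) ripple correlations.
N = 400
th = np.linspace(0, 2*np.pi, N, endpoint=False)
T, Tp = np.meshgrid(th, th, indexing='ij')   # theta, theta'
kR = 2.0  # k_F * R; arbitrary, the ratio is independent of this choice

# Kernel ~ sin^2((th+th')/2) * C~_z(q), with q = 2 k_F |sin((th'-th)/2)|
# and Gaussian power spectrum C~_z(q) ~ exp(-q^2 R^2 / 4); prefactors dropped.
qR = 2*kR*np.abs(np.sin((Tp - T)/2))
K = np.sin((T + Tp)/2)**2 * np.exp(-qR**2/4)   # symmetric in theta <-> theta'

dth = th[1] - th[0]
def mode_rate(P):
    # tau_jj^-1 = -int dth P(th) S[P](th), with S[P] as in Eq. (8.4)
    SP = (K @ P - K.sum(axis=1)*P) * dth
    return -np.sum(P * SP) * dth

Px = np.cos(th)/np.sqrt(np.pi)
Py = np.sin(th)/np.sqrt(np.pi)
ratio = mode_rate(Px) / mode_rate(Py)
print(round(ratio, 3))  # -> 3.0
```

The ratio stays at 3 for any isotropic power spectrum, which is the statement that the induced resistivity anisotropy $\Delta\rho_{xx}/\Delta\rho_{yy} = 3$ is independent of the ripple correlation shape.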
The elements ofthe inverse diffusion tensor are then given by [D?1]ij = v2??1ij /2, leading tothe resistivity tensor (2.39) ?ij = [D?1]ij/(e2?).918.1. Anisotropic magnetoresistance. . .Here we examine only the case of isotropic ripples C?z(q) = C?z(|q|),for which the off-diagonal restivities ?xy, ?yx remain zero. After some cal-culation (using various changes of variables in the angular integrals), theanisotropy is found to be exactly threefold:?xx = ?0 + 32???yy = ?0 + 12??,(8.5)where ?0 is the zero-field resistivity, and ?? may be written as a real-spaceintegral of the height correlator:?? = piB2?~? ?0 dr rW (kr)Cz(r), (8.6)where k = pF/~,and where we have defined the functionW (z) =? 2pi0 d?J0(2z sin?2 ) sin2?2 . (8.7)Note that the procedure above (considering only Px and Py modes) issomewhat arbitrary. We have however confirmed the threefold anisotropy ofrelaxation by simulating a classical charge moving in the x-y plane with out-of-plane magnetic field B = ?B? dhdx z?, for random isotropic z(r). Simulationand experiment in Ref. [85] also show ??xx = 3??yy.We next consider special cases of (8.6), based on the typical ripple cor-relation length R as compared to the reduced electron wavelength, 1/kF,which is controlled by carrier density. The function W (z) is approximatelyconstant for z 1, and so ?? ? (ZRB?)2/~ at low densities where kF 1/R. At higher density (kF 1/R), the oscillatory W (z) may be inte-grated out via a Hankel transform. This yields the classically-expecteddependence ?? ? k?3F . For ripples with a Gaussian correlation function,Cz(r) = Z2 exp(?r2/R2), we have?? = 12~|n|3/2Z2R B2? . (8.8)This concludes the derivation.8.1.1 Comparison to literatureThe threefold anisotropy derived above was observed in a GaAs 2DEG whenin-plane field lines were rippled by nearby ferromagnets[85]. In that case,928.2. 
Random-strain dephasing in grapheneeach individual magnetic ripple included equal parts positive and negativemagnetic flux, oriented to the in-plane field. The resulting random vectorpotential had essentially the same qualities as rippled graphene with an in-plane field. At that time, simulations were used to explain the anisotropicresistance, and the effect was explained in terms of the formation of ?snakestates? induced by channeling of electrons along zero-field contours. Asshown by the above derivation, however, the threefold anisotropy occurseven in the weak scattering limit.Recently, in Ref. [92], our derivation was extended and investigated inmore theoretical detail, obtaining exact results (for all k) for two typesof correlation shape: the Gaussian correlation Cz(r) = Z2 exp(?r2/R2),and exponential correlation Cz(r) = Z2 exp(?r/R). The factor of threeanisotropy was found to indeed be a general feature of this effect.8.2 Random-strain dephasing in grapheneThis section elaborates on a short discussion published in Ref. [2].As derived in Appx. A, random variations in the hopping energy of thegraphene sheet create a disorder potential of the formHV = ?x?zVxz(r) +?x?zVyz(r). (8.9)Comparing to (2.23), we can see that (8.9) behaves just like the vectorpotential from a magnetic field, except that it acts oppositely on the twovalleys (as evident by the ?z term). We can write HV = ev?z? ?A?, wherethe effective vector potential here is A? = (Vxz(r)x?+ Vyz(r)y?)/(ev).Since graphene tends to be rippled in a real device, we can expect localstrains that stretch and bend the bonds. This changes the overlap betweenatomic orbitals, which in turn changes the hopping energies. Thus, rip-ples are a likely source of the hopping-disorder potential in (8.9). A fullyaccurate calculation of A? requires consideration of the various tensile mod-uli and orbital overlap functions of graphene, which are not known withconfidence[44, 93?95]. 
We can make an order-of-magnitude estimate[20] that $|\tilde A| \sim \hbar Z^2/(eaR^2)$, where Z and R are the typical ripple height and length, respectively.

Since the ripples are random, the resulting vector potential $\tilde A$ will be random as well. By analogy with the random magnetic field dephasing seen in Chapter 6, we expect dephasing from this random vector potential. However, since the strain potential couples to $\tau_z$, this dephasing does not break time reversal symmetry. The resulting valley-dependent dephasing affects the rate $\tau_{zv}^{-1}$ (Sec. 3.4) instead[21]. Making an analogy to Eq. (6.6), where the random potential was of order $|A| \sim ZB_\parallel$, the strain dephasing rate should be

\tau_{zv}^{-1} \sim v_F Z^4/(a^2 R^3).   (8.10)

Using this expression, the ripple dimensions extracted from our in-plane field measurements (Sec. 6.4) or from STM measurements[82] may fully explain the large $\tau_{zv}^{-1} \approx 1{-}10~\mathrm{ps}^{-1}$ observed in most graphene WL magnetoresistance experiments (see Table 6.1 and Refs. [23, 25, 27]).

The idea that strains cause this sort of dephasing is not new; however, the form of (8.10) contrasts with previous estimates of the intra-valley effect of ripple strain.[20, 22, 23] Those estimates assumed that the strain-induced effective magnetic field is truly random, with no requirement for flux compensation over multiple correlation lengths. That assumption would imply unphysical long-range correlations in the vector potential. The dephasing measurements in Chapter 6 show that the ripples and the resulting potential correlations are instead short-ranged, as expected for adhesion to a polished wafer. For that reason, the framework for the derivation of $\tau_\varphi$ in Eq. (6.6) should apply also to $\tau_{zv}$.

8.3 Analysis of quasi-2D conductance fluctuation correlations

How can we extract coherence times from conductance fluctuations? At nonzero temperatures, no closed-form expression exists for the conductance fluctuation correlation function in a quasi-2D system.
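Stepping back briefly to Eq. (8.10): the estimate is easy to evaluate numerically. The ripple dimensions Z and R below are illustrative values assumed for this sketch (of the general scale discussed for graphene on oxidized wafers), not numbers taken from the measurements in this thesis.

```python
# Order-of-magnitude evaluation of (8.10): tau_zv^-1 ~ v_F Z^4 / (a^2 R^3).
# Z and R are assumed illustrative ripple dimensions, not fitted values.
v_F = 1.0e6      # graphene Fermi velocity, m/s
a   = 0.142e-9   # carbon-carbon bond length, m
Z   = 0.5e-9     # assumed typical ripple height, m
R   = 15e-9      # assumed typical ripple correlation length, m

rate = v_F * Z**4 / (a**2 * R**3)   # strain dephasing rate, s^-1
print(f"tau_zv^-1 ~ {rate / 1e12:.2f} ps^-1")   # of order 1 ps^-1 here
```

For these assumed inputs the rate lands near the lower end of the $1{-}10~\mathrm{ps}^{-1}$ range quoted above; the strong $Z^4/R^3$ dependence means modest changes in ripple geometry move the estimate by an order of magnitude.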
Although a quantitative numerical study of correlations has been performed previously,[96] its results are not well known and in any case are difficult to apply to a realistic measurement. This has led to considerable misunderstanding in past and present experimental studies of coherence: in principle one can extract phase coherence information from conductance fluctuations (just as with weak localization), but this requires robust and easy-to-use theoretical predictions. Adding to the confusion, important asymptotic behaviours are described incorrectly in the well-cited literature (see Sec. 8.3.7).

I set out to remedy this situation in order to interpret my experimental measurements, where conductance fluctuations are found in abundance. The results of this theoretical study are described in this section. The results not only enabled my final experiment (Chapter 7) to produce clear data with hard conclusions, but should also be useful for future coherence experiments in graphene and other quasi-2D systems (metal films, semiconductor heterostructures, oxide heterostructures), especially when magnetic disorder is present.

This section describes results that we published in Physical Review B[3]. Much of the following text is © American Physical Society. The text has been adapted to fit in with the overall thesis.

8.3.1 How correlations are probed in experiment

The experimental data that goes into a conductance fluctuation analysis is typically a measurement of conductance G versus perpendicular magnetic field B, over some interval $B_1$ to $B_2$ (footnote 24). Aside from noise, the data is essentially continuous and can be represented by a function G(B). From this trace, the fluctuations $\delta g(B) = G(B) - g_b(B)$ are estimated by subtracting off some kind of smooth background function $g_b(B)$. Appx. C.1 describes a practical protocol for background subtraction, used in Chapter 7. The fluctuations'
autocorrelation function $f(\Delta B)$ is then computed:

f(\Delta B) \equiv \frac{1}{B_2 - B_1 - |\Delta B|} \int_{B_1}^{B_2-\Delta B} dB\, [\delta g(B)\,\delta g(B+\Delta B)]   (8.11)

Typically, the result of (8.11) is averaged together for multiple G(B) traces, taken at different sample settings (e.g., different gate voltages), in order to reduce the statistical error.

The idea behind computing (8.11) is that the averaging over sample parameters is similar to the ensemble average of mesoscopic theory. Thus, we expect to have $f(\Delta B) \approx \overline{\delta G(B)\,\delta G(B+\Delta B)}$, so we should be able to make a direct comparison of $f(\Delta B)$ to theory. One would like to fit the theoretical shape to $f(\Delta B)$, as is done with weak localization, but:

- There is no analytical formula for $\overline{\delta G(B)\,\delta G(B+\Delta B)}$ (unlike the G(B) of weak localization), so fitting $f(\Delta B)$ requires extensive numerical computations.

- The statistical bias of background subtraction will distort the shape of $f(\Delta B)$. The fluctuations may even be non-ergodic, meaning that parameter averaging will never give the same result as ensemble averaging.

- If the correlation function is long-ranged, then it will be very time consuming to measure enough G(B) traces to obtain a precise $f(\Delta B)$.

[Footnote 24: Similar concerns apply to autocorrelations in other parameters, for example $f(\delta V_{BG})$ from $G(V_{BG})$ scans over gate voltage, or multi-dimensional autocorrelations.]

For these reasons, directly fitting $f(\Delta B)$ to theory is tedious, inefficient, and error-prone.

A more robust way to compare $f(\Delta B)$ with theory is to compute its correlation length, or some other metric, which produces a single number that can be compared against pre-tabulated theoretical values. A carefully chosen metric will avoid the problems of fitting noted above. The following sections investigate, theoretically, which metric is best suited for interpreting experimental results.

8.3.2 Theoretical quasi-2D correlation

Diffuson vs. Cooperon

Consider two conductance fluctuations $\delta G(\mu, B)$ and $\delta G(\mu', B')$
measured at internal chemical potentials $\mu$ and $\mu'$ and perpendicular fields B and B'. The theoretical correlation function takes the form

\overline{\delta G(\mu,B)\,\delta G(\mu',B')} = F(\mu-\mu',\, B-B') + C(\mu-\mu',\, B+B')   (8.12)

where F and C represent the Diffuson and Cooperon contributions, respectively.

Which correlation (Cooperon or Diffuson) do we probe with the autocorrelation (8.11)? Since the autocorrelation is usually averaged over a wide range of magnetic fields to increase accuracy, the Cooperon contribution to the autocorrelation is strongly diluted. This is especially true if the Cooperon is already suppressed by some form of time reversal symmetry breaking, such as a magnetic field offset. Thus, autocorrelations effectively only measure the Diffuson correlations. [Footnote 25: See Ref. [96] for a discussion of alternative methods that effectively probe the Cooperon, e.g., the self-convolution of conductance.] For this reason, the following sections focus on the Diffuson contribution exclusively and neglect the Cooperon.

The Diffuson correlation function

For now, we compute the Diffuson correlation function for the case of a simple system (spin-less and valley-less) with only one dephasing mode, characterized by the dephasing rate $\tau_\varphi^{-1}$. [Footnote 26: It is assumed that $\tau_\varphi^{-1}$ is a constant, without energy dependence. It is also assumed that there is no modal energy offset (this is usually the case for the dominant CF mode).] The quasi-2D case is assumed, where the dephasing length $L_\varphi = \sqrt{D\tau_\varphi}$ is smaller than the sample length (L, between source-drain contacts) and width (W, between the vacuum edges).

We define the following dimensionless variables for energy, field, and temperature:

\delta\tilde\mu \equiv \delta\mu\cdot\tau_\varphi/\hbar, \qquad \beta \equiv |\Delta B| \cdot 2eD\tau_\varphi/\hbar, \qquad \tilde T \equiv k_B T \cdot \tau_\varphi/\hbar.   (8.13)

These will help to shorten the following expressions. As derived in Appendix D, we can express the quasi-2D correlation function as

F(\delta\mu, \Delta B) = \frac{e^4}{h^2}\,\frac{W D \tau_\varphi}{L^3}\, F_{\tilde T}(\delta\tilde\mu, \beta),   (8.14)

where

F_{\tilde T}(\delta\tilde\mu, \beta) = \int_{-\infty}^{\infty} d\epsilon\, \frac{\Xi(\epsilon/\tilde T)}{\tilde T}\, F_0(\delta\tilde\mu - \epsilon, \beta)
(8.15)

for $\Xi(x) = \frac12\left(\frac{x}{2}\coth\frac{x}{2} - 1\right)/\sinh^2\frac{x}{2}$, and (D.10)

F_0(\tilde\epsilon, \beta) = \frac{1}{\pi\tilde\epsilon}\,\mathrm{Im}\!\left[\psi\!\left(\frac12 + \frac{1+i\tilde\epsilon}{\beta}\right)\right] + \frac{1}{2\pi\beta}\,\mathrm{Re}\!\left[\psi'\!\left(\frac12 + \frac{1+i\tilde\epsilon}{\beta}\right)\right].

Here $\psi(z)$ is the complex digamma function (see Appx. F).

Weakly smeared behaviour ($\tilde T \ll 1$)

For $\tilde T \ll 1$, the thermal smearing is negligible and we have $\Xi(\epsilon/\tilde T)/\tilde T \to \delta(\epsilon)$, so

F_{\tilde T}(\delta\tilde\mu, \beta) = F_0(\delta\tilde\mu, \beta).

Therefore the characteristics of the high-dephasing-rate/low-temperature case are fairly trivial, requiring no numerical integration. We have $F_0(0,0) = 3/(2\pi)$, and so the variance is[52]

F(0,0) = \frac{3}{2\pi}\,\frac{e^4}{h^2}\,\frac{WD\tau_\varphi}{L^3}.

Highly smeared behaviour ($\tilde T \gg 1$)

Note that $F_0(\tilde\epsilon, \beta)$ falls only as $\sim|\tilde\epsilon|^{-1}$ as $\tilde\epsilon$ becomes large. This slow decay in the energy dependence plays a critical role when the convolution (8.15) is computed for $\tilde T \gg 1$. In particular, it causes the long-ranged and multi-scale nature of $F(0, \Delta B)$. For $\delta\tilde\mu = 0$ and $\tilde T \gg 1$, we can find some approximate results from (8.15) by taking $\Xi(x) \approx \Xi(0) = \frac16$ for $|x| \lesssim 1$ and $\Xi(x) = 0$ for $x \gtrsim 1$:

F_{\tilde T}(0,\beta) \approx 2\int_0^{\tilde T} d\epsilon\, \frac{1}{6\tilde T}\, F_0(\epsilon, \beta),   (8.16)

then plugging in an approximate form for $F_0$.

An immediate consequence of the $|\tilde\epsilon|^{-1}$ behaviour is the logarithmic form of the variance under thermal smearing.[52, 96] Since the region $|\tilde\epsilon| \gg 1$, where $F_0(\tilde\epsilon, \beta) \approx |2\tilde\epsilon|^{-1}$, gives the dominant contribution to (8.16), we have

F_{\tilde T}(0,0) \approx 2\int_1^{\tilde T} d\epsilon\, \frac{1}{6\tilde T}\,\frac{1}{2\epsilon} \approx \frac{1}{6\tilde T}\ln(C_0 \tilde T), \quad \tilde T \gg 1,   (8.17)

for some constant $C_0$ that depends on how the integration limits are chosen. We find numerically that $C_0 = 4.1$ accurately describes the asymptotic behaviour of $F_{\tilde T}(0,0)$. This gives an extremely weak $\tau_\varphi$-dependence in the measured variance, $F(0,0) \propto \ln(T\tau_\varphi)/T$.

How does smearing affect the field dependence? For $\beta \neq 0$ we can approximate $F_0$ by taking $\frac12 + \frac{1+i\tilde\epsilon}{\beta} \approx \frac12 + i\frac{\tilde\epsilon}{\beta}$ for $\beta \gg 1$, giving a $\tau_\varphi$-independent correlation:

F_0(\tilde\epsilon, \beta) \approx \frac{\pi}{2\beta}\, f(\pi\tilde\epsilon/\beta), \quad \beta \gg 1 \text{ or } \tilde\epsilon \gg 1,   (8.18)

where Euler's reflection formula (Appx. F) gives $f(x) = \frac{1}{x}\tanh x + \frac12\,\mathrm{sech}^2 x$. When a high amount of thermal smearing is applied, the first term in f(x) dominates.
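The expressions above can be verified with a short standalone script. This is a sketch: the digamma and trigamma routines below are generic recurrence-plus-asymptotic-series implementations, an assumption of mine rather than the method of Appx. F.

```python
import cmath
import math

def psi(z):
    """Complex digamma, by recurrence into the asymptotic region."""
    z, s = complex(z), 0j
    while z.real < 8.0:
        s -= 1.0 / z
        z += 1.0
    return s + cmath.log(z) - 1/(2*z) - 1/(12*z**2) + 1/(120*z**4) - 1/(252*z**6)

def psi1(z):
    """Complex trigamma psi'(z), same recurrence-plus-series approach."""
    z, s = complex(z), 0j
    while z.real < 8.0:
        s += 1.0 / z**2
        z += 1.0
    return s + 1/z + 1/(2*z**2) + 1/(6*z**3) - 1/(30*z**5) + 1/(42*z**7)

def F0(eps, beta):
    """Eq. (D.10): unsmeared Diffuson correlation vs dimensionless
    energy (eps) and field (beta) separations."""
    z = 0.5 + (1.0 + 1j * eps) / beta
    if eps == 0.0:  # analytic eps -> 0 limit of the Im[psi]/(pi*eps) term
        return 3.0 / (2 * math.pi * beta) * psi1(z).real
    return psi(z).imag / (math.pi * eps) + psi1(z).real / (2 * math.pi * beta)

# Variance prefactor: F0(0, beta -> 0) = 3/(2 pi), as quoted above.
assert abs(F0(0.0, 1e-3) - 3 / (2 * math.pi)) < 1e-5

# Euler reflection at Re z = 1/2: Im psi(1/2 + i y) = (pi/2) tanh(pi y),
# the identity behind the f(x) of Eq. (8.18).
y = 0.8
assert abs(psi(0.5 + 1j * y).imag - (math.pi / 2) * math.tanh(math.pi * y)) < 1e-8

# Slow 1/(2 eps) energy tail at small beta, the origin of the
# logarithmic thermal-smearing behaviour (8.17).
assert abs(F0(30.0, 1e-3) * 60.0 - 1.0) < 0.05
```

The same routine also reproduces the unsmeared energy half-width of Table 8.1: $F_0(1.67, \beta\!\to\!0)$ comes out at half of $F_0(0,0)$ to within a fraction of a percent.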
In that regime the first term of f(x) yields the intermediate-field behaviour of the field correlation function in the smeared limit,

F_{\tilde T}(0,\beta) \approx \frac{1}{6\tilde T}\ln(C_1 \tilde T/\beta), \quad \tilde T \gg \beta \gg 1,   (8.19)

where we find numerically that $C_1 = 29.1$.

The field derivatives of the correlation function behave much differently under thermal smearing. The unsmeared correlation function derivative $\partial_\beta F_0(\tilde\epsilon, \beta)$ approaches zero rapidly, as $\sim\mathrm{sech}^2(\pi\tilde\epsilon/\beta)$, for large $\tilde\epsilon$. As a result, the thermal smearing convolution imposes a simple behaviour on this field derivative:

\partial_\beta F_{\tilde T}(0,\beta) \approx \frac{h(\beta)}{6\tilde T}, \quad \tilde T \gg 1, \beta,   (8.20)

for a function $h(\beta) = \int_{-\infty}^{\infty} d\tilde\epsilon\, \partial_\beta F_0(\tilde\epsilon, \beta)$. [Footnote 27: It can be shown that this function $h(\beta)$ is closely related to the derivative of the weak localization magnetoconductance.[62]] Remarkably, this means that field scales defined in terms of $\partial_\beta F_{\tilde T}(0,\beta)$, such as the inflection point, do not depend on $\tilde T \gg 1$.

8.3.3 Field and energy correlation lengths

We begin by discussing the metrics of $F(0,\Delta B)$, which we will call $F_B(\Delta B)$. Since this correlation function is long-ranged and has multi-scale behaviour, we compare three different field scales of the correlation function (Fig. 8.1 inset):

- The half-width $\Delta B_{1/2}$, defined by $F_B(\Delta B_{1/2}) = \frac12 F_B(0)$, is the point where the correlation has fallen to 50% of the variance.

- The roundness $\Delta B_r = |2F_B(0)/F_B''(0)|^{1/2}$ characterizes correlations at very small field separation, where $F_B''(\Delta B) = d^2F_B/d\Delta B^2$.

- The inflection point $\Delta B_i$, defined as the point where $F_B''(\Delta B_i) = 0$, is the field separation at which the correlation changes the fastest.

It is customary in CF studies to characterize $F_B(\Delta B)$ by its half-width. As we show below, the inflection point makes a much better metric for several reasons.

Figure 8.1 shows how these three field scales, calculated from the theoretical $F_B(\Delta B)$ of quasi-2D CF, depend on dephasing in the case that only one dephasing rate $\tau_\varphi^{-1}$ is relevant (the case of multiple dephasing rates will be discussed in Section 8.3.5).
Immediately one can see that the three scales are not proportional, illustrating the multi-scale nature of $F_B(\Delta B)$. The field scales are expressed here in terms of a characteristic thermal field $k_BT/(2eD)$, and the dephasing rate in terms of the thermal time, to make this plot general for any quasi-2D system.

Table 8.1 lists the asymptotic behaviour of each field scale in the thermally smeared limit ($\hbar\tau_\varphi^{-1} \ll k_BT$) and the unsmeared limit ($\hbar\tau_\varphi^{-1} \gg k_BT$). The unsmeared limit is rarely encountered at low temperatures, where dephasing is typically dominated by the contribution of electron-electron interactions[62], giving $\tau_\varphi^{-1} = \gamma k_BT/\hbar$ for some $\gamma$ less than unity. In the smeared limit, $\Delta B_{1/2}$ depends equally on T and $\tau_\varphi^{-1}$ (cf. Ref. [96]), while the roundness $\Delta B_r$ depends logarithmically on T. Remarkably, $\Delta B_i$ has no direct T-dependence in either limit. This is a desirable characteristic because a measurement of $\Delta B_i$ then yields the value of $\tau_\varphi^{-1}$ directly, without needing exact knowledge of T.

Figure 8.1: Dephasing rate dependence of several characteristic scales of the quasi-2D CF correlation function in magnetic field. Results for the inflection point $\Delta B_i$ (solid blue), roundness $\Delta B_r$ (dotted red), and half-width $\Delta B_{1/2}$ (dashed black) are indicated by thick lines. Thin lines indicate the asymptotic forms in Table 8.1. The inset shows the correlation function for $k_BT = 3\hbar\tau_\varphi^{-1}$ and graphically depicts the definitions of $\Delta B_{1/2}$, $\Delta B_r$, and $\Delta B_i$. [© American Physical Society[3]]

Energy correlation lengths $\{\delta\mu_{1/2}, \delta\mu_r, \delta\mu_i\}$ are shown in Fig. 8.2, computed from the theoretical $F_\mu(\delta\mu)$, following definitions analogous to the $\Delta B$ correlations. Asymptotic forms are listed in Table 8.1. For strong thermal smearing ($k_BT \gg \hbar\tau_\varphi^{-1}$), $F_\mu(\delta\mu)/F_\mu(0)$ converges to a universal function $\Phi(\delta\mu/(k_BT))$, which is independent of $\tau_\varphi^{-1}$; this gives the universal correlation lengths $\delta\mu_{1/2,r,i}$ listed in Table 8.1. As a result, $F_\mu(\delta\mu)$ is not useful for measuring $\tau_\varphi^{-1}$, but can instead be used as a thermometer.[36]

The strategy of using CF as a primary thermometer has been shown to be effective for quasi-1D systems.[36] For the quasi-2D correlation function, however, convergence to the universal form is very gradual, and all three metrics deviate significantly from their asymptotic values even when $\hbar\tau_\varphi^{-1} < 0.1 k_BT$ (Fig. 8.2). The deviation is particularly severe for the metric $\delta\mu_{1/2}$ identified in Ref. [36], e.g., 20% at $\hbar\tau_\varphi^{-1} = 0.05 k_BT$. Somewhat faster convergence is observed for the inflection point $\delta\mu_i$ (e.g., 8% deviation at $\hbar\tau_\varphi^{-1} = 0.05 k_BT$). For the highest accuracy, both $\Delta B_i$ and $\delta\mu_i$ should be measured: together these provide unique values for both T and $\tau_\varphi$.

Measure | Smeared limit ($k_BT \gg \hbar\tau_\varphi^{-1}$) | Unsmeared limit ($k_BT \ll \hbar\tau_\varphi^{-1}$)
$\Delta B_{1/2} \cdot 2eD$ | $14.4\,(k_BT\,\hbar\tau_\varphi^{-1})^{1/2}$ | $6.21\,\hbar\tau_\varphi^{-1}$
$\Delta B_r \cdot 2eD$ | $\left(24\ln\frac{4.1\,k_BT\tau_\varphi}{\hbar}\right)^{1/2}\hbar\tau_\varphi^{-1}$ | $3.48\,\hbar\tau_\varphi^{-1}$
$\Delta B_i \cdot 2eD$ | $3.01\,\hbar\tau_\varphi^{-1}$ | $1.53\,\hbar\tau_\varphi^{-1}$
$\delta\mu_{1/2}$ | $2.72\,k_BT$ | $1.67\,\hbar\tau_\varphi^{-1}$
$\delta\mu_r$ | $3.16\,k_BT$ | $1.36\,\hbar\tau_\varphi^{-1}$
$\delta\mu_i$ | $2.14\,k_BT$ | $0.68\,\hbar\tau_\varphi^{-1}$

Table 8.1: Asymptotic field and energy correlation lengths. Prefactors have been numerically determined, whereas exponents are analytically derived (see Sec. 8.3.2). [© American Physical Society[3]]

8.3.4 Statistical errors in autocorrelations

Statistical errors play a major role in experimental studies of CF. Even when the conductance $G(\mu,B)$ is measured exactly (without noise), there are two statistical error sources that affect the quality of the analysis: random errors due to a limited data set, and systematic errors due to the background subtraction procedure. The technical procedure used to predict these errors can be found in Appendix H.

In this section we examine the two types of errors as they apply to experiments measuring $F_B(\Delta B)$. Careful statistical analysis is required to estimate the errors in $F_B(\Delta B)$ with any degree of accuracy, a nonintuitive result that comes from the multi-range nature of the fluctuations.
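As a consistency check, the unsmeared column of Table 8.1 can be reproduced directly from (D.10), since in that limit $F_B$ reduces to $F_0(0,\beta) = \frac{3}{2\pi\beta}\,\mathrm{Re}\,\psi'(\frac12 + \frac1\beta)$. A standard-library sketch; the finite-difference steps and bisection search are my own choices, not the thesis's numerical method:

```python
import math

def trigamma(x):
    """Real trigamma psi'(x), via recurrence plus asymptotic series."""
    s = 0.0
    while x < 8.0:
        s += 1.0 / (x * x)
        x += 1.0
    return s + 1/x + 1/(2*x**2) + 1/(6*x**3) - 1/(30*x**5) + 1/(42*x**7)

def FB(beta):
    """Unsmeared quasi-2D field correlation F0(0, beta),
    with beta = |dB| * 2 e D tau_phi / hbar."""
    return 3.0 / (2 * math.pi * beta) * trigamma(0.5 + 1.0 / beta)

def bisect(pred, lo, hi, n=200):
    """Bisection: pred is True below the sought crossing."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pred(mid) else (lo, mid)
    return 0.5 * (lo + hi)

F0_var = FB(1e-4)   # beta -> 0 limit: should equal 3/(2 pi)
assert abs(F0_var - 3 / (2 * math.pi)) < 1e-6

# Half-width: FB(beta_half) = FB(0)/2
beta_half = bisect(lambda b: FB(b) > F0_var / 2, 1e-3, 50.0)

# Roundness: |2 FB(0) / FB''(0)|^(1/2), curvature by finite differences
h = 1e-3
curv0 = (FB(2 * h) - 2 * FB(h) + F0_var) / h**2
beta_r = math.sqrt(abs(2 * F0_var / curv0))

# Inflection point: zero of the second derivative
curv = lambda b: FB(b + h) - 2 * FB(b) + FB(b - h)
beta_i = bisect(lambda b: curv(b) < 0, 0.2, 5.0)

print(beta_half, beta_r, beta_i)  # compare Table 8.1 unsmeared column
```

The three values land close to the tabulated 6.21, 3.48, and 1.53 (small residual differences reflect the finite-difference steps used here).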
Inspection alone generally overestimates the number of effectively independent samples that contribute to the autocorrelation function, often by orders of magnitude. Moreover, this number depends strongly on which field scale is being extracted from the correlation function ($\Delta B_{1/2}$ vs. $\Delta B_i$, etc.). Errors associated with measurements of $F_\mu(\delta\mu)$ are typically less severe due to its short-range nature, but this does not help us in extracting $\tau_\varphi$, as $F_\mu(\delta\mu)$ contains little phase coherence information.

Figure 8.2: Mapping of several energy correlation lengths to dephasing rate, analogous to Fig. 8.1. Results for $\delta\mu_i$ (solid blue), $\delta\mu_r$ (dotted red), and $\delta\mu_{1/2}$ (dashed black) are indicated by thick lines. Thin lines indicate the asymptotic forms in Table 8.1. The inset shows the correlation function for $k_BT = 3\hbar\tau_\varphi^{-1}$ and graphically depicts the definitions of $\delta\mu_{1/2}$, $\delta\mu_r$, and $\delta\mu_i$. [© American Physical Society[3]]

Random errors appear in the correlation function when it is estimated from a finite data set. Figure 8.3 shows an example of random errors, which appear as fluctuations in the autocorrelation function. These fluctuations in turn cause uncertainties in all derived parameters such as correlation lengths or variance. The magnitudes of the random errors depend on the total scanned length of data, $B_{tot}$, which may be distributed over multiple independent scans. As seen in Fig. 8.4, the total scan length required for a reliable estimate differs by many orders of magnitude among the various statistical metrics.

Systematic errors (biases) occur when the CF are extracted from conductance data by subtracting an experimentally determined background, such as the mean conductance or fitted/smoothed conductance data.
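A minimal simulation makes the background-subtraction bias concrete. The synthetic traces below are generic Gaussian-correlated noise (unit true variance), not the CF simulation protocol of Appx. G; mean subtraction over the full scan corresponds to $B_{sm}$ equal to the scan range.

```python
import numpy as np

# Gaussian-correlated synthetic traces, correlation length ell (in samples),
# generated from an eigendecomposition of the target covariance.
rng = np.random.default_rng(1)
N, ell, n_traces = 512, 20, 300
idx = np.arange(N)
cov = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / ell) ** 2)
w, V = np.linalg.eigh(cov)
L = V * np.sqrt(np.clip(w, 0.0, None))   # L @ L.T reproduces cov

est = []
for _ in range(n_traces):
    g = L @ rng.standard_normal(N)       # one "scan" with true variance 1
    dg = g - g.mean()                    # mean background subtraction
    est.append(np.mean(dg ** 2))         # the f(0) variance estimate
var_est = float(np.mean(est))
print(var_est)  # systematically below the true variance of 1
```

The estimate comes out around 10% low for this ratio of correlation length to scan length: the subtracted mean absorbs genuine long-ranged fluctuations, exactly the negative bias discussed below.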
Background subtraction is necessary for isolating the CF, but as a side effect it captures and removes genuine fluctuations that are longer than the "smoothing length scale" $B_{sm}$. This length scale is determined by the procedure used to estimate the background conductance (see Appx. H.2). The consistent loss of long-ranged fluctuations causes a negative bias in the autocorrelation function (Fig. 8.3).

Figure 8.3: Example of statistical errors in CF autocorrelation for the case T = 1 K, $\tau_\varphi$ = 100 ps, and D = 0.03 m²/s. The solid line indicates the true CF correlation function $F_B(\Delta B)$. The dotted lines show the autocorrelations of two simulated G(B) traces (see Appx. G) over a 1 T range, with mean background subtraction ($B_{tot} = B_{sm}$ = 1 T). The dashed line is an average of 100 such autocorrelations ($B_{tot}$ = 100 T, $B_{sm}$ = 1 T). Inset: Expansion of the boxed region, graphically depicting the magnitudes of the two types of error in variance. [© American Physical Society[3]]

This bias in turn directly impacts the accuracy of the variance $F_B(0)$ as well as the metrics $\Delta B_{1/2}$ and $\Delta B_r$, whose definitions rely on the variance. A large $B_{sm}$ is required to reach low bias for these metrics (Fig. 8.5). The inflection point, on the other hand, offers a field scale that does not depend on offsets to the correlation function and is therefore nearly immune to this type of bias. As a result, errors in $\Delta B_i$ are much smaller than in the other metrics, for a given $B_{sm}$.

Figures 8.3-8.5 demonstrate the dramatically different sensitivity to errors for the half-width and the inflection point. To put this difference in practical terms, consider a typical low temperature CF measurement of $\tau_\varphi$ in a disordered semiconductor, where one might have T = 1 K, $\tau_\varphi$ = 100 ps, and D = 0.03 m²/s.
To achieve 10% accuracy (systematic error) using $\Delta B_{1/2}$, an extremely large $B_{sm}$ = 3 T would be required even though the value of $\Delta B_{1/2}$ itself is just 5.2 mT. Using $\Delta B_i$, on the other hand, 10% accuracy would be obtained for $B_{sm}$ = 5 mT, smaller by three orders of magnitude compared to the $\Delta B_{1/2}$ case. Similarly, 10% precision in $\tau_\varphi$ using $\Delta B_i$ would require a total scan length $B_{tot}$ = 300 mT, compared to $B_{tot}$ = 45 T for $\Delta B_{1/2}$.

Figure 8.4: Guidelines for minimum total scan length in CF experiments ($B_{tot}$), for reaching 10% standard deviation in the various metrics of $F_B(\Delta B)$. As expected, the standard deviation of any metric falls as $B_{tot}^{-1/2}$ when $B_{tot}$ is increased. [© American Physical Society[3]]

Another source of statistical error is experimental noise. Uncorrelated noise adds a sharp peak to the experimental correlation function at zero field (energy) difference. It will thus introduce direct errors when measuring the variance, the half-width, or the roundness. The inflection point, in contrast, is only affected (weakly) by the tails of the noise peak.
Besides the noise peak, noise also adds random fluctuations to the experimental correlation function. The quantitative errors from noise should be evaluated on a case-by-case basis, as they depend on many experimental parameters (the signal-to-noise ratio, the noise spectrum, the scanning speed, the passband of filters, etc.) that vary from apparatus to apparatus.

8.3.5 Multiple dephasing modes

Most systems of practical interest involve multiple dephasing modes due to the presence of some static disorder that couples to an electron's internal degrees of freedom, e.g., spin-orbit coupling,[97] frozen magnetic impurities,[63] or valley-orbit coupling.[34, 35] Sections 8.3.3 and 8.3.4 have only considered the case of a single dephasing rate, and one may ask whether those results are generally applicable. This section explores the effect of additional dephasing modes and establishes when it is appropriate to neglect these modes, allowing the use of simple results such as Table 8.1.

Figure 8.5: Guidelines for the minimum background field-smoothing length in CF experiments ($B_{sm}$) such that the background subtraction bias in each of the various metrics is less than 10%. Numerical values presented here are calculated for the specific case of mean background subtraction, where $B_{sm}$ is simply the scan range (see Appx. H.2), but similar results are obtained for other background subtraction protocols. Biases in $\Delta B_{1/2}$, $\Delta B_r$, $F_B(0)$ fall as $\sim\log(B_{sm})/B_{sm}$ when $B_{sm}$ is increased. Bias in $\Delta B_i$ falls as $B_{sm}^{-2}$. [© American Physical Society[3]]
We focus primarily on the correlation function in magnetic field, $F_B(\Delta B)$.

With symmetry-breaking disorder, the full form of the correlation function becomes the sum of independent modes with a set of distinct dephasing rates $\{\tau_i^{-1}\}$ and degeneracies $\{N_i\}$,

F(\delta\mu, \Delta B) = N_1 F^{[\tau_1^{-1}]}(\delta\mu, \Delta B) + N_2 F^{[\tau_2^{-1}]}(\delta\mu, \Delta B) + N_3 F^{[\tau_3^{-1}]}(\delta\mu, \Delta B) + \cdots.   (8.21)

These modes are often known as the Diffuson singlets and Diffuson triplets.[34, 35, 52, 63, 97] The measured correlation field scales are then determined by a complicated mixture of the temperature and the various dephasing rates, so that the considerations of Sec. 8.3.3 may not directly apply. Nevertheless, each independent mode $F^{[\tau_i^{-1}]}$ is of the type described in Sec. 8.3.2, so it is straightforward to compute Eq. (8.21) and numerically extract the field scales.

Figure 8.6: Effect of suppressed dephasing modes on CF correlations. Contributions to the magnetic field correlation function from three CF modes at T = 2 K, D = 0.03 m²/s. The dotted curves show separate modes with $\tau_{1,2,3}^{-1} = \{20~\mathrm{ns}^{-1}, 220~\mathrm{ns}^{-1}, 2120~\mathrm{ns}^{-1}\}$ and $N_{1,2,3} = \{1, 1, 2\}$, and the solid curve is their sum [Eq. (8.21)]. [© American Physical Society[3]]

Figure 8.6 shows an example involving the three modes appropriate for graphene. We have chosen typical[23] values for T = 2 K: a decoherence rate of $\tau_\varphi^{-1} = 20~\mathrm{ns}^{-1}$, an intervalley rate of $\tau_{iv}^{-1} = 100~\mathrm{ns}^{-1}$, and an intravalley rate of $\tau_*^{-1} = 2000~\mathrm{ns}^{-1}$. The resulting modal dephasing rates are[34, 35] $\tau_1^{-1} = \tau_\varphi^{-1}$, $\tau_2^{-1} = \tau_\varphi^{-1} + 2\tau_{iv}^{-1}$, and $\tau_3^{-1} = \tau_\varphi^{-1} + \tau_{iv}^{-1} + \tau_*^{-1}$. Although the rates are greatly different in magnitude, each mode has a significant contribution to $F_B(\Delta B)$ because of thermal smearing (see expression (8.17)). This causes the variance, half-width, and roundness to differ greatly from the value expected of the dominant mode ($\tau_1$) alone.
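As a small arithmetic check, the modal rates used in Fig. 8.6 follow directly from the combination rules just quoted:

```python
# Modal dephasing rates for graphene CF (all rates in ns^-1), combining the
# decoherence, intervalley, and intravalley rates quoted in the text.
tau_phi_inv, tau_iv_inv, tau_star_inv = 20.0, 100.0, 2000.0

tau1_inv = tau_phi_inv                                   # N1 = 1
tau2_inv = tau_phi_inv + 2 * tau_iv_inv                  # N2 = 1
tau3_inv = tau_phi_inv + tau_iv_inv + tau_star_inv       # N3 = 2

print(tau1_inv, tau2_inv, tau3_inv)  # 20.0 220.0 2120.0, as in Fig. 8.6
```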
For instance, $\Delta B_{1/2}$ is twice as large, which would be misinterpreted (by the considerations of Sec. 8.3.3) as a dephasing rate four times larger than the actual $\tau_1^{-1}$. The inflection point, however, remains a reliable measure of $\tau_1^{-1}$ even when the additional rates are neglected, with only a 4% error.

Figure 8.7 examines how $\Delta B_i$, computed for the case of only two modes, depends on the relative dephasing rates of the two modes. Here $\tau_1^{-1}$ might be the decoherence rate from dynamic scatterers that affects all CF modes, while $(\tau_2^{-1} - \tau_1^{-1})$ could be the extra static symmetry-breaking rate affecting only the second mode. When the symmetry-breaking rate is comparable to the decoherence rate, the inflection point may be displaced from the value expected of $\tau_1^{-1}$ alone. The degree of displacement, however, never exceeds a factor of 1.8 (this only occurs if $N_2/N_1 = 3$ and $\hbar\tau_1^{-1} \gg k_BT$). Fig. 8.7 also gives a simple rule for $\Delta B_i$: the secondary mode can be neglected when it dephases much more rapidly than the primary mode, $\tau_2^{-1} \gtrsim 10\tau_1^{-1}$; such a simple rule does not apply for other aspects, e.g., $F_B(0)$ or $\Delta B_{1/2}$. In the opposite limit, when the dephasing rates for all modes are similar, $\tau_2^{-1} \approx \tau_1^{-1}$, the field scales are determined by the considerations of Sec. 8.3.3 with the single dephasing rate $\tau_1^{-1}$ (or $\tau_2^{-1}$), and $F_B(0)$ is only increased by a trivial factor.[52]

Figure 8.7: Effect on the field correlation's inflection point from the combination of two CF modes, in the smeared ($k_BT \approx 10\hbar\tau_1^{-1}$) and unsmeared ($k_BT \approx 0.02\hbar\tau_1^{-1}$) limits, for $N_2/N_1 = 1$ and $N_2/N_1 = 3$. Dashed lines show the unperturbed inflection point when there is no secondary mode ($N_2 = 0$). [© American Physical Society[3]]

The statistical errors (Sec. 8.3.4) are also modified by the presence of symmetry-breaking disorder, often becoming much larger. The considerations of Appx.
H may be used to evaluate errors in a general correlation function with multiple dephasing rates.

8.3.6 Notes on the quasi-1D case

The characteristics of quasi-2D CF can be compared to the quasi-1D regime, which occurs when the material is shaped as a long and very narrow strip with a width W that is much smaller than the dephasing length $L_\varphi = \sqrt{D\tau_\varphi}$, yet where the length between contacts is longer than $L_\varphi$. An advantage of quasi-1D CF (especially useful in metals) is that shaping the material into a wire produces a lower background conductance, which allows CF to appear with higher contrast. The quasi-1D correlation function $F_B(\Delta B)$ demonstrates essentially single-scale behaviour, falling as $1/|\Delta B|^3$ at high $\Delta B$.[98] This explains why the half-width performs well as a measure of coherence in the quasi-1D system: its statistical errors are only somewhat higher than those of the inflection point for a given range of field.

For completeness, we note the values of the inflection point for the quasi-1D system in the "dirty" regime, where the elastic mean free path is smaller than W. The quasi-1D energy correlation function in Ref. [36] gives the inflection point $\delta\mu_i = 0.549\hbar/\tau_\varphi$ in the unsmeared limit; this converges on the universal value $\delta\mu_i = 2.14 k_BT$ in the smeared limit. As for the magnetic field inflection point, the formulas in Ref. [98] yield $\Delta B_i = \sqrt{3}\,\hbar/(eWL_\varphi)$ in the unsmeared limit, and $\Delta B_i = \sqrt{6}\,\hbar/(eWL_\varphi)$ in the smeared limit.

8.3.7 Comparison to literature

In the introduction to this work it was mentioned that CF correlations are sometimes misunderstood in the literature. The confusion can be traced back to the early investigations of quasi-2D CF correlations[99, 100] that misidentified some crucial asymptotic behaviours. [Footnote 28: Significantly, my quasi-2D results disagree with equations (2.17), (3.8b), (3.18) from Ref. [100].] It had been found that the thermal rate $k_BT/\hbar$ would effectively add to the dephasing rate $\tau_\varphi^{-1}$, so that in the smeared limit it would be impossible to observe $\tau_\varphi$. For example, it was predicted that for quasi-2D CF,

\Delta B_{1/2} \sim \frac{1}{eD} \times \begin{cases} \hbar\tau_\varphi^{-1}, & \text{for } k_BT\tau_\varphi \ll \hbar \text{ (correct)} \\ k_BT, & \text{for } k_BT\tau_\varphi \gg \hbar \text{ (incorrect)} \end{cases}   (8.22)

A simple counterargument explains why $k_BT/\hbar$ does not count as an effective dephasing rate.[96] Consider a two-path interferometer (such as Fig. 3.7(a)). Interference will be suppressed by the dephasing rate proper if either path takes longer than $\tau_\varphi$ (the dephasing time) to complete. Thermal smearing (initial energy uncertainty), on the other hand, only suppresses interference if the arrival time difference is longer than $\hbar/k_BT$; it does not restrict the total path length. The interference can therefore be very sensitive to magnetic field (large enclosed area of order $D\tau_\varphi$) but totally insensitive to temperature (small time difference).

Correct results for the quasi-1D case did appear soon after.[17, 98] Later on, the numerical work by Bergmann[96] in 1994 identified the correct T-dependence for the quasi-2D $\Delta B_{1/2}$, in numerical agreement with the values found here. Unfortunately, to this day many experimenters still misapply the CF theory,[29, 101, 102] especially in the quasi-2D case, as Bergmann's paper seems to have gone unnoticed.

My work extends Bergmann's results by simplifying the method of computing the CF correlation function, proving the asymptotic forms (see Table 8.1), and investigating the most practical way to extract coherence information. The inflection point $\Delta B_i$ is robust and easy to use, and I hope that others in the coherence field will take note.

One interesting outcome was the surprisingly strong effect of background subtraction on variance (Sec. 8.3.4, more detail in Appx. H). This statistical bias could be the explanation for recent experimental observations of smaller variance in conductance fluctuations $\delta G(B)$ compared to $\delta G(V_{BG})$ (the bias effect in gate voltage is much weaker).
In those papers[29, 37] the difference was attributed to non-ergodicity, something that theoretically could be expected in a near-insulator[103] but not in a weakly-disordered system like graphene. To establish true non-ergodicity, experimenters should take extreme care with their background subtraction procedure, to rule out the mundane explanation of statistical bias.

Chapter 9

Outlook

This chapter collects together various ideas for future research along this line. First, I note the various mysteries in the thesis: points of disagreement between experimental data and theoretical understanding. A solution to these mysteries will likely require more careful experimental design as well as a deeper theoretical picture. Second, I remark on several avenues for further exploration that are within reach of the present capabilities of the Quantum Devices lab.

9.1 Mysteries

9.1.1 Large anisotropy factor

In Sec. 6.3, the change in semiclassical resistance from $B_\parallel$ was analyzed in terms of the anisotropic resistance formula (6.7), based on scattering from the ripple-field random vector potential. A very clear anisotropy was observed, but it was even larger than in theory (Fig. 6.5). This is surprising, since the predicted threefold anisotropy is an essential, inflexible feature of the ripple theory (Sec. 8.1).

A simple way to obtain this large anisotropy would be if there were another negative magnetoresistance effect which was isotropic. This would subtract from the ripple effect and enhance the anisotropy, or perhaps even change its sign. Such an isotropic effect could be expected as a result of the spin coupling of the in-plane field; however, it is unclear which exact mechanism would produce this negative magnetoresistance, since the known mechanisms are far too weak (Sec.
6.3.1).9.1.2 Wide energy correlations at low temperatureFigure 9.1 shows conductance fluctuation autocorrelations in energy, com-puted from G(VBG) scans taken during the last experiment (Chapter 7).The energy variable has been computed by taking into account the appro-priate density of states. As explained in Sec. 8.3.3, energy autocorrelations1109.1. Mysteries0.80.60.40.20fE(dE) (e4/h2)6004002000-200dE (10-6 eV)Figure 9.1: Energy autocorrelations of CF at very low temperature. Auto-correlations were computed from ?G(VBG) measured over VBG = ?5 ? ? ? 5 V,with Ibias = 4 nA and with a cryostat temperature of 13 mK. The curvesdiffer in the magnetic field at the time of measurement, with (B?, B?) valuesof (0, 0), (0, 50 mT), and (6 T, 50 mT). [ c?American Physical Society[4]]are expected to work well as thermometers[36]. We find that this works wellat ?hot? temperatures above 100 mK.The unusual thing about Fig. 9.1 is that the autocorrelations are toowide in energy, for these very low temperatures. Taking into account theheating by the bias current, we expect a device temperature of 26 mK,whereas the energy correlations indicate a device temperature of 50?70 mK,depending on the in-plane field. It is not clear why this would occur.9.1.3 Late turn-off of magnetic decoherence at lowtemperatureWhile we found that the decrease in CF decoherence with magnetic fieldwas consistent with localized electron defects (g = 2, spin-12) for higher tem-peratures (Fig. 7.4), the fit with this model became poor at the lower tem-peratures below 500 mK. The decoherence turn-off required higher magneticfields than would be expected for those low temperatures, and it convergedto the high-field value quite slowly. A corresponding slow behaviour wasalso seen in the WL dephasing rate (Fig. 
7.7) at the single low temperaturewe investigated.There are mechanisms that might accelerate the turn-off for low tem-peratures (such as the defect exchange field), but we have yet to find amechanism to explain the opposite.1119.2. Ideas9.1.4 Non-magnetic decoherence saturationPerhaps the most remarkable result in Chapter 7 was the finding that thecoherence saturation remains, even when the magnetic defects have beenturned off by the in-plane field. This residual non-magnetic decoherencewas found in both CF and WL, indicating that the mechanism responsibleis dynamic, and moreover that its dynamics are rapid enough to break timereversal symmetry. If this effect is real, then it may reopen a controversialdebate from over a decade ago, regarding the possibility of electron-electroninteraction decoherence at zero temperature[104].9.2 IdeasThis section lists some of the ideas that I have come up with during thecourse of my work, but which I did not have the time to try out in experi-ment.9.2.1 Other ways to extract coherence informationField wigglingAn alternative way to measure the CF correlation inflection point would beto measure the differential voltage in field, V ?(B?) = dV (B?)/dB? (in afour-terminal measurement). For stationary fluctuations, we haveV ?(B?)V ?(B? + ?B) = dd?BV ?(B?)V (B? + ?B)= dd?BV ?(B? ? ?B)V (B?)= ? d2d?B2V (B? ? ?B)V (B?) (9.1)So, the zero of the autocorrelation of V ?(B?) is ?Bi?the inflection pointin the autocorrelation of V (B?). The advantage of this technique is thatV ?(B?) can be measured directly by oscillating B? with a direct-currentIbias applied. If the field can be oscillated rapidly enough, this would allowto escape low frequency device noise. The disadvantage would be possibleeddy current heating if the B? oscillation frequency is too large.Cooperon correlationsIf we have a trace G(B?) that goes from negative to positive field, thenit is possible to extract the Cooperon contribution. Rather than taking1129.2. 
Ideasthe autocorrelation (which would give the usual Diffuson contribution), weperform a self-convolution:c(?B) = 1B2 ?B1? B2B1dBG(B)G(?B + ?B) (9.2)Note that, to avoid having the Diffuson correlations contaminating the re-sult, we should choose B1 and B2 such that the integral does not includeconductances measured near zero field.This function c(?B) would share many of the properties of its Diffusonsibling f(?B). The difference is that c(?B) is sensitive to time reversalsymmetry whereas f(?B) is not. Thus, c(?B) would become dephased inlarge B?, due to the random vector potential (Chapter 6). In fact, in a four-terminal measurement c(?B) can already be suppressed relative to f(?B)even if time reversal symmetry is not otherwise broken.The technical difficulty with this analysis would be in ensuring that thefield is precisely linear with applied magnet current.Side-peak dephasingOnce magnetic defects have been fully polarized by B? two CF modes willno longer be dephased. The other two other CF modes will have a de-phasing rate that is 2??1mag greater than the unaffected modes. These lattermodes are the Zeeman-split side peaks (Chapter 5). Since the side peaksoccur at a distinct energy, it should be possible to measure their dephasingrate. The way this would be done is to carefully perform two-parameterconductance scans G(B?, VBG), similar to Fig. 5.3 but with emphasis onB? resolution rather than VBG resolution. The data would be analyzed bya two-dimensional autocorrelation f(?E, ?B?), from which the central peakand side-peak ?Bi inflection points could be extracted.One possible obstacle would be inhomogeneous density of states in thedevice. This would effectively appear as larger thermal smearing for theside peaks, reducing their amplitude. The side peaks? inflection point infield would not be biased in any way (it is insensitive to thermal smearing)but it would gain a larger random error. This error could possibly be reducedby deliberately averaging f(?E, ?B?) 
over some range of ?E.1139.2. IdeasSide-peak offset from magnetic defect exchangeWhen polarized, magnetic defects give rise to an mean-field precession thatadds to the normal Zeeman energy:[50]EZ = ?eB ? 2ndJ?Sz?, (9.3)where ?e is the mobile electrons? magnetic moment, and ?Sz?/S is the defectpolarization. This EZ is precisely what determines the position of the CFside peaks (Chapter 5), and also what causes the loss of singlet and onetriplet in weak localization near B? = 0 (Fig. 7.7). It may be possible fromthese effects to measure EZ with sufficient accuracy to determine ndJ .The value of ndJ combined with the rate ??1mag gives a lot of informationabout the magnetic defects. According to the standard scattering theory formagnetic defects, the magnetic scattering rate in an ordinary electron gaswould be[50]??1mag = 2pi~ ?0ndJ2S(S + 1), (9.4)where nd is the concentration of magnetic defects, J is the exchange constantof the interaction,29 and S is the defects? spin. Note that the prefactorof this rate could change somewhat in graphene depending on the latticesymmetries of the defect coupling (compare Sec. 2.4.5). Also, note that this??1mag is precisely the quantity measured in Chapter 7.If we can measure both ??1mag and EZ then (assuming we have some ideaabout S) we obtain both nd and J . This is useful since these same numbersdetermine other physics involving magnetic defects, allowing us to predict:? The defect relaxation rate due to electrons (Korringa rate):[52]??1K = pi(?J)2T.? The interaction between two magnetic defects via the electrons: H =JRKKYS1 ? S2, where[105]JRKKY = ?J28pi?k2[J0(kR)Y0(kR) + J1(kR)Y1(kR)].Here R is the distance between the defects and k = pF/~; J1(x) andY1(x) are Bessel and modified Bessel functions. If kBT . JRKKY thena spin-glass may be created; conductance fluctuations would no longerbe dephased.29The electron Hamiltonian interacts with a defect at location rd via a HamiltonianHd = J? ? S?(r ? rd).1149.2. Ideas? 
The Kondo temperature:[52]TK ?1kBE exp(?1/|2?0J |)where E is an appropriate energy scale whose value isn?t obvious ingraphene (perhaps equal to t = 3 eV, or perhaps the Fermi energy).Quantum thermoelectric fluctuationsOne aspect which has not been discussed at all in this thesis is the thermo-electric effect. The thermoelectric Seebeck coefficient s is in fact very closelyrelated to the electrical conductance, with the Mott relation[51] giving anexpression in terms of ?(E) very similar to (2.32):s = ?kBe1??dE E ? EFkBT f(1? f)?(E) (9.5)with variables defined as in (2.32). From this one might expect that therewould be quantum fluctuations in the thermoelectric coefficient, analogousto conductance fluctuations. These indeed have been observed in othermaterials.[106]Graphene is an interesting material for thermoelectric studies, with (forexample) thermoelectric effects playing a key role in photoresponse.[107]Based on (9.5), one may expect that the fluctuations in the device?s end-to-end thermopower30 S could reach an order of magnitude?S2 ? kBe1?2?G2 (9.6)when the temperature is chosen optimally such that kBT ? ~??1? whichtends to occur in graphene around 100 mK (see Ch. 7). For kBT ~??1? thefluctuations in S are suppressed by too much thermal smearing (averagingmany fluctuations), while for kBT ~??1? they are suppressed by the usualclassical decrease of S with T . For a device with size comparable to thecoherence length (?G2 ? e4/h2) and graphene near charge neutrality (? ?4e2/h), the fluctuations may be as large as ??S2 ? 
1?10?V/K.These thermoelectric fluctuations should be measurable in a carefullydesigned device, and it would be interesting to study (in experiment andtheory) whether coherence information can be extracted from them, andwhether they may induce artifacts when the graphene is overheated by biascurrents such as at the lowest temperatures investigated in Chapter 7.30Thermopower S is to Seebeck coefficient s, as conductance G is to conductivity ?.1159.2. Ideas9.2.2 Controlling decoherenceMore lattice defectsIf lattice defects really are responsible for the magnetic moments, then weshould be able to create more of them. One could etch tiny holes (? 30 nm,or however small the lithographic technique allows), bombard the graphenewith high-energy ions, or expose the graphene to ozone.AdatomsThe deposition of indium or thallium atoms should generate localized spin-orbit interactions, of the Kane-Mele form, Hi = ??zsz?(r ? ri).[47] Thevalue of ? is of order 1 eV?2. Since these atoms generate a perpendicularelectric field they can also introduce a localized Rashba interaction contain-ing ?xsy ??ysx.Different substratesWe can expect that graphene on smooth substrates such as hexagonal boronnitride, or silicon carbide, would lie very flat (as long as it is not wrinkled).This has been confirmed in scanning probe experiments of graphene on mica,for example[108]. Without ripples, the in-plane field would no longer coupleto charge and not show any of the effects in Chapter 6. This would allowan even more direct investigation into electron spin and magnetic defectphysics. One might also expect that the ??1zv rate would disappear (dueto lack of strain), however these substrates are likely to induce sublatticesymmetry breaking ?z?z which brings back the ??1zv dephasing.Time/ensemble-dependent disorderInstead of performing autocorrelations, we can cross-correlate G(B?) tracestaken at different times (say, days apart). 
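As a quick numerical sketch of this cross-correlation idea (the function name and the overlap-count normalization are my own, not from the thesis), the zero lag can be kept at the center of the output so that the left and right sides are directly comparable:

```python
import numpy as np

def cross_correlation(dG1, dG2):
    """Cross-correlation of two background-subtracted traces dG1(B), dG2(B)
    measured over the same field range at different times.  Zero lag sits at
    index len(dG1) - 1; each lag is normalized by its overlap count."""
    n = len(dG1)
    counts = np.concatenate([np.arange(1, n), np.arange(n, 0, -1)])
    return np.correlate(dG1, dG2, mode="full") / counts
```

For identical traces this reduces to the symmetric autocorrelation, with its peak at zero lag; an evolved disorder landscape lowers and broadens the peak, and a uniform field offset between the two scans shifts the peak away from zero lag, which is why one would measure the left and right inflection points separately.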
The inflection point³¹ of this cross-correlation will be larger than the autocorrelation's, and the difference indicates how much the disorder landscape has changed in the intervening time (Sec. D.4). If the device is otherwise stable, then we can perform an irreversible operation on the device (such as annealing) to intentionally change the disorder; this gives information about the thermal activation energy of the disorder.

³¹ Unlike the autocorrelation, the cross-correlation need not be symmetric. To compensate for field shifts one should measure the left and right inflection points of the cross-correlation, and take the half difference.

Chapter 10

Conclusion

This thesis investigated in detail the effects of in-plane magnetic fields on the phase coherent properties of electrons in graphene. The in-plane magnetic field couples to the electrons' spin and orbital degrees of freedom in roughly equal amounts, and by a careful analysis of experimental data the spin and orbital coupling effects were separated.

The orbital coupling of the in-plane field occurs due to the rippling of the graphene sheet. The effects of this coupling, time reversal symmetry breaking and random scattering, were analyzed in detail to characterize the size scales of the rippling (Chapter 6), and provided insight into how other random vector potentials affect transport in graphene (Sec. 8.2).

The in-plane field couples directly to electron spin by breaking the spin degeneracy. Due to the low intrinsic spin relaxation rates in graphene and the ability to electrically tune graphene's potential energy with a gate, direct signatures of spin-polarized phase coherent transport could be seen in micron-sized graphene devices (Chapter 5).

As shown in Chapter 7, there is non-negligible spin relaxation in graphene due to magnetic defects. These magnetic defects not only explain why spin relaxation occurs in graphene, they are also partly responsible for the limitation on low temperature phase coherence times.
The decoherence from these magnetic defects is turned off by the in-plane field as they too become polarized by the field, but the residual decoherence seen at high fields remains a mystery.

The conductance fluctuation analysis technique described in Sec. 8.3 is not specific to graphene, and may be used in other semiconductors. Its application in experiment (Chapter 7) proves that conductance fluctuations are a robust tool for determining coherence information, with perhaps even wider applicability than the traditional weak localization approach.

Bibliography

[1] M. B. Lundeberg and J. A. Folk, Nature Physics 5, 894 (2009), arXiv:0904.2212.
[2] M. B. Lundeberg and J. A. Folk, Phys. Rev. Lett. 105, 146804 (2010), arXiv:0910.4413.
[3] M. B. Lundeberg, J. Renard, and J. A. Folk, Phys. Rev. B 86, 205413 (2012).
[4] M. B. Lundeberg, R. Yang, J. Renard, and J. A. Folk, Phys. Rev. Lett. 110, 156601 (2013).
[5] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004), arXiv:cond-mat/0410550.
[6] G. W. Semenoff, Phys. Rev. Lett. 53, 2449 (1984).
[7] Y.-M. Lin, K. A. Jenkins, A. Valdes-Garcia, J. P. Small, D. B. Farmer, and P. Avouris, Nano Letters 9, 422 (2009), arXiv:0812.1586.
[8] S. Vaziri, G. Lupina, C. Henkel, A. D. Smith, M. Östling, J. Dabrowski, G. Lippert, W. Mehr, and M. C. Lemme, Nano Letters 13, 1435 (2013).
[9] S. Bae, H. Kim, Y. Lee, X. Xu, J.-S. Park, Y. Zheng, J. Balakrishnan, T. Lei, H. Ri Kim, Y. I. Song, Y.-J. Kim, K. S. Kim, B. Özyilmaz, J.-H. Ahn, B. H. Hong, and S. Iijima, Nature Nanotechnology 5, 574 (2010).
[10] E. J. H. Lee, K. Balasubramanian, R. T. Weitz, M. Burghard, and K. Kern, Nature Nanotechnology 3, 486 (2008).
[11] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005).
[12] S. Konschuh, M. Gmitra, and J. Fabian, Phys. Rev. B 82, 245412 (2010).
[13] O. V. Yazyev, Nano Letters 8, 1011 (2008).
[14] N. Tombros, C. Jozsa, M. Popinciuc, H. T. Jonkman, and B. J.
vanWees, Nature 448, 571 (2007), arXiv:0706.1948 .[15] W. Han, K. Pi, K. M. McCreary, Y. Li, J. J. I. Wong, A. G. Swartz,and R. K. Kawakami, Phys. Rev. Lett. 105, 167202 (2010).[16] S. Hikami, A. I. Larkin, and Y. Nagaoka, Prog. Theor. Phys. 63, 707(1980).[17] C. W. J. Beenakker and H. van Houten, Solid State Physics 44, 1(1991).[18] H. Mathur and H. U. Baranger, Phys. Rev. B 64, 235325 (2001).[19] F. Pierre, A. B. Gougam, A. Anthore, H. Pothier, D. Esteve, andN. O. Birge, Phys. Rev. B 68, 085413 (2003).[20] S. V. Morozov, K. S. Novoselov, M. I. Katsnelson, F. Schedin, L. A.Ponomarenko, D. Jiang, and A. K. Geim, Phys. Rev. Lett. 97, 016801(2006), arXiv:cond-mat/0603826 .[21] E. McCann, K. Kechedzhi, V. I. Fal?ko, H. Suzuura, T. Ando, andB. L. Altshuler, Phys. Rev. Lett. 97, 146805 (2006).[22] X. Wu, X. Li, Z. Song, C. Berger, and W. A. de Heer, Phys. Rev.Lett. 98, 136801 (2007).[23] F. V. Tikhonenko, D. W. Horsell, R. V. Gorbachev, and A. K.Savchenko, Phys. Rev. Lett. 100, 056802 (2008).[24] D.-K. Ki, D. Jeong, J.-H. Choi, H.-J. Lee, and K.-S. Park, Phys. Rev.B 78, 125409 (2008).[25] J. Eroms and D. Weiss, New Journal of Physics 11, 095021 (2009),arXiv:0901.0840 .[26] F. V. Tikhonenko, A. A. Kozikov, A. K. Savchenko, and R. V. Gor-bachev, Phys. Rev. Lett. 103, 226801 (2009).[27] Y.-F. Chen, M.-H. Bae, C. Chialvo, T. Dirks, A. Bezryadin, andN. Mason, Journal of Physics Condensed Matter 22, 205301 (2010),arXiv:0910.3737 .119Bibliography[28] A. A. Kozikov, A. K. Savchenko, B. N. Narozhny, and A. V. Shytov,Phys. Rev. B 82, 075424 (2010).[29] C. Ojeda-Aristizabal, M. Monteverde, R. Weil, M. Ferrier, S. Gu?ron,and H. Bouchiat, Phys. Rev. Lett. 104, 186802 (2010).[30] S. Lara-Avila, A. Tzalenchuk, S. Kubatkin, R. Yakimova, T. J. B. M.Janssen, K. Cedergren, T. Bergsten, and V. Fal?ko, Phys. Rev. Lett.107, 166602 (2011).[31] X. Hong, K. Zou, B. Wang, S.-H. Cheng, and J. Zhu, ArXiv e-prints(2012), arXiv:1204.1775 .[32] A. A. Kozikov, D. W. Horsell, E. McCann, and V. I. 
Fal?ko, Phys.Rev. B 86, 045436 (2012).[33] E. McCann and V. I. Fal?ko, Phys. Rev. Lett. 108, 166606 (2012).[34] K. Kechedzhi, O. Kashuba, and V. I. Fal?ko, Phys. Rev. B 77, 193403(2008).[35] M. Y. Kharitonov and K. B. Efetov, Phys. Rev. B 78, 033404 (2008).[36] K. Kechedzhi, D. W. Horsell, F. V. Tikhonenko, A. K. Savchenko,R. V. Gorbachev, I. V. Lerner, and V. I. Fal?ko, Phys. Rev. Lett.102, 066801 (2009).[37] G. Bohra, R. Somphonsane, N. Aoki, Y. Ochiai, R. Akis, D. K. Ferry,and J. P. Bird, ArXiv e-prints (2012), arXiv:1203.6385 .[38] C. Bena and G. Montambaux, New Journal of Physics 11, 095003(2009), arXiv:0712.0765 .[39] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnel-son, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature 438,197 (2005).[40] Y. F. Suprunenko, E. V. Gorbar, V. M. Loktev, and S. G. Sharapov,Low Temperature Physics 34, 812 (2008).[41] Y.-J. Yu, Y. Zhao, S. Ryu, L. E. Brus, K. S. Kim, and P. Kim, NanoLetters 9, 3430 (2009), pMID: 19719145.[42] S. Adam, E. H. Hwang, V. M. Galitski, and S. Das Sarma, Pro-ceedings of the National Academy of Science 104, 18392 (2007),arXiv:0705.1540 .120Bibliography[43] J. Martin, N. Akerman, G. Ulbricht, T. Lohmann, J. H. Smet, K. vonKlitzing, and A. Yacoby, Nature Phys. 4, 144 (2008), arXiv:0705.2180.[44] F. Guinea, B. Horovitz, and P. Le Doussal, Phys. Rev. B 77, 205421(2008), arXiv:0803.1958 .[45] G. Giovannetti, P. A. Khomyakov, G. Brocks, P. J. Kelly, andJ. van den Brink, Phys. Rev. B 76, 073103 (2007).[46] D. L. Miller, K. D. Kubista, G. M. Rutter, M. Ruan, W. A. de Heer,P. N. First, and J. A. Stroscio, Phys. Rev. B 81, 125427 (2010).[47] C. Weeks, J. Hu, J. Alicea, M. Franz, and R. Wu, Physical Review X1, 021001 (2011), arXiv:1104.3282 .[48] D. Huertas-Hernando, F. Guinea, and A. Brataas, Phys. Rev. B 74,155426 (2006).[49] T. Micklitz, T. A. Costi, and A. Rosch, Phys. Rev. B 75, 054406(2007).[50] M. G. Vavilov and L. I. Glazman, Phys. Rev. B 67, 115310 (2003).[51] M. Cutler and N. F. 
Mott, Phys. Rev. 181, 1336 (1969).[52] E. Akkermans and G. Montambaux, Mesoscopic Physics of Electronsand Photons (Cambridge University Press, 2007).[53] M. Monteverde, C. Ojeda-Aristizabal, R. Weil, K. Bennaceur, M. Fer-rier, S. Gu?ron, C. Glattli, H. Bouchiat, J. N. Fuchs, and D. L. Maslov,Phys. Rev. Lett. 104, 126801 (2010), arXiv:0903.3285 .[54] M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nature Physics2, 620 (2006), arXiv:cond-mat/0604323 .[55] E. M. Hajaj, O. Shtempluk, V. Kotchtakov, A. Razin, and Y. E.Yaish, Phys. Rev. B accepted (2013).[56] J.-H. Chen, C. Jang, S. Adam, M. S. Fuhrer, E. D. Williams, andM. Ishigami, Nature Physics 4, 377 (2008), arXiv:0708.2408 .[57] T. Stauber, N. M. R. Peres, and F. Guinea, Phys. Rev. B 76, 205423(2007), arXiv:0707.3004 .121Bibliography[58] J. Yan and M. S. Fuhrer, Phys. Rev. Lett. 107, 206601 (2011).[59] Q. Li, E. H. Hwang, and E. Rossi, Solid State Communications 152,1390 (2012), arXiv:1202.5546 .[60] T. M. Radchenko, A. A. Shylau, and I. V. Zozoulenko, Phys. Rev. B86, 035418 (2012).[61] B. Altshuler, ?Dephasing and decoherence: What?s it all about??KITP Glasses ?03 talk (available online). (2003).[62] I. L. Aleiner and Y. M. Blanter, Phys. Rev. B 65, 115317 (2002).[63] A. A. Bobkov, V. I. Fal?ko, and D. E. Khmel?nitskii, Sov. Phys. JETP71, 393 (1990).[64] N. O. Birge, B. Golding, and W. H. Haemmerle, Phys. Rev. Lett. 62,195 (1989).[65] P. Debray, J.-L. Pichard, J. Vicente, and P. N. Tung, Phys. Rev. Lett.63, 2264 (1989).[66] J. S. Moon, N. O. Birge, and B. Golding, Phys. Rev. B 56, 15124(1997).[67] J. A. Folk, S. R. Patel, K. M. Birnbaum, C. M. Marcus, C. I. Duru?z,and J. S. Harris, Phys. Rev. Lett. 86, 2102 (2001).[68] B. Huard, N. Stander, J. A. Sulpizio, and D. Goldhaber-Gordon,Phys. Rev. B 78, 121402 (2008).[69] G. Wagoner, Phys. Rev. 118, 647 (1960).[70] K. Matsubara, T. Tsuzuku, and K. Sugihara, Phys. Rev. B 44, 11845(1991).[71] J. Stankowski, Solid State Communications 115, 489 (2000).[72] R. S. 
Deacon, K.-C. Chuang, R. J. Nicholas, K. S. Novoselov, andA. K. Geim, Phys. Rev. B 76, 081406 (2007).[73] Z. Li, E. A. Henriksen, Z. Jiang, Z. Hao, M. C. Martin, P. Kim,H. Stormer, and D. N. Basov, Nature Phys. 4, 532 (2008).[74] N. M. R. Peres, F. Guinea, and A. H. Castro Neto, Phys. Rev. B 72,174406 (2005).122Bibliography[75] S. D. Sarma, E. H. Hwang, and W.-K. Tse, Phys. Rev. B 75, 121406(2007).[76] P. E. Trevisanutto, C. Giorgetti, L. Reining, M. Ladisa, and V. Ole-vano, Phys. Rev. Lett. 101, 226405 (2008).[77] M. Polini, R. Asgari, Y. Barlas, T. Pereg-Barnea, and A. MacDonald,Solid State Comm. 143, 58 (2007).[78] P. M. Mensz and R. G. Wheeler, Phys. Rev. B 35, 2844 (1987).[79] D. M. Zumb?hl, J. B. Miller, C. M. Marcus, V. I. Fal?ko, T. Jungwirth,and J. S. Harris, Phys. Rev. B 69, 121305 (2004).[80] J. C. Meyer, A. K. Geim, M. I. Katsnelson, K. S. Novoselov, T. J.Booth, and S. Roth, Nature 446, 60 (2007), arXiv:cond-mat/0701379.[81] M. Ishigami, J. H. Chen, W. G. Cullen, M. S. Fuhrer,and E. D. Williams, Nano Letters 7, 1643 (2007),http://pubs.acs.org/doi/pdf/10.1021/nl070613a .[82] V. Geringer, M. Liebmann, T. Echtermeyer, S. Runte, M. Schmidt,R. R?ckamp, M. C. Lemme, and M. Morgenstern, Phys. Rev. Lett.102, 076102 (2009).[83] W. G. Cullen, M. Yamamoto, K. M. Burson, J. H. Chen, C. Jang,L. Li, M. S. Fuhrer, and E. D. Williams, Phys. Rev. Lett. 105, 215504(2010).[84] A. K. Geim, S. J. Bending, I. V. Grigorieva, and M. G. Blamire, Phys.Rev. B 49, 5749 (1994).[85] A. W. Rushforth, B. L. Gallagher, P. C. Main, A. C. Neumann,M. Henini, C. H. Marrows, and B. J. Hickey, Phys. Rev. B 70, 193313(2004).[86] S. Wada, N. Okuda, and J. Wakabayashi, Physica E Low-DimensionalSystems and Nanostructures 42, 1138 (2010).[87] E. H. Hwang and S. Das Sarma, Phys. Rev. B 80, 075417 (2009).[88] J. S. Meyer, V. I. Fal?ko, and B. Altshuler, Nato Science Series II 72,117 (2002), arXiv:cond-mat/0206024 .123Bibliography[89] W. Han and R. K. Kawakami, Phys. Rev. Lett. 
107, 047207 (2011).[90] P. Mohanty, E. M. Q. Jariwala, and R. A. Webb, Phys. Rev. Lett.78, 3366 (1997).[91] A. Zawadowski, J. von Delft, and D. C. Ralph, Phys. Rev. Lett. 83,2632 (1999).[92] K. Genma and M. Katori, ArXiv e-prints (2012), arXiv:1211.2046 .[93] J. L. Ma?es, Phys. Rev. B 76, 045430 (2007).[94] K. V. Zakharchenko, M. I. Katsnelson, and A. Fasolino, Phys. Rev.Lett. 102, 046808 (2009).[95] M. Gibertini, A. Tomadin, M. Polini, A. Fasolino, and M. I. Katsnel-son, Phys. Rev. B 81, 125437 (2010).[96] G. Bergmann, Phys. Rev. B 49, 8377 (1994).[97] V. Chandrasekhar, P. Santhanam, and D. E. Prober, Phys. Rev. B42, 6823 (1990).[98] C. W. J. Beenakker and H. van Houten, Phys. Rev. B 37, 6544 (1988).[99] B. Altshuler and D. Khmelnitskii, JETP Lett. 42, 359 (1985).[100] P. A. Lee, A. D. Stone, and H. Fukuyama, Phys. Rev. B 35, 1039(1987).[101] D. Rakhmilevitch, M. Ben Shalom, M. Eshkol, A. Tsukernik,A. Palevski, and Y. Dagan, Phys. Rev. B 82, 235119 (2010).[102] A. Nath Pal, V. Kochat, and A. Ghosh, ArXiv e-prints (2012),arXiv:1206.3866 .[103] R. Rangel and E. Medina, European Physical Journal B 30, 101(2002).[104] D. S. Golubev and A. D. Zaikin, Phys. Rev. Lett. 81, 1074 (1998).[105] B. Fischer and M. W. Klein, Phys. Rev. B 11, 2025 (1975).[106] B. L. Gallagher, T. Galloway, P. Beton, J. P. Oxley, S. P. Beaumont,S. Thoms, and C. D. W. Wilkinson, Phys. Rev. Lett. 64, 2058 (1990).124[107] N. M. Gabor, J. C. W. Song, Q. Ma, N. L. Nair, T. Taychatanapat,K. Watanabe, T. Taniguchi, L. S. Levitov, and P. Jarillo-Herrero,Science 334, 648 (2011), arXiv:1108.3826 .[108] C. H. Lui, L. Liu, K. F. Mak, G. W. Flynn, and T. F. Heinz, Nature462, 339 (2009).[109] J. Jobst, D. Waldmann, I. V. Gornyi, A. D. Mirlin, and H. B. Weber,Phys. Rev. Lett. 108, 106601 (2012).[110] F. Pobell, Matter and Methods at Low temperature (Springer, 1996).[111] D. S. Matsumoto, C. L. Reynolds, and A. C. Anderson, Phys. Rev.B 16, 3303 (1977).[112] F. R. 
Fickett, Advances in Cryogenic Engineering 38B, 1191 (1992).
[113] R. Yee and G. O. Zimmerman, Rev. Sci. Instr. 42, 233 (1971).
[114] L. J. Azevedo, Rev. Sci. Instr. 54, 1793 (1983).
[115] T. Castner, Jr., G. S. Newell, W. C. Holton, and C. P. Slichter, J. Chem. Phys. 32, 668 (1960).
[116] Y. Volokitin, R. Thiel, and L. de Jongh, Cryogenics 34, 771 (1994).
[117] T. Herrmannsdörfer and R. König, J. Low Temp. Phys. 118, 45 (2000).
[118] R. Tarasenko, A. Orendáčová, M. Orendáč, M. Kajňaková, and A. Feher, Acta Physica Polonica A 118, 1067 (2010).
[119] A. D. Stone, Phys. Rev. B 39, 10736 (1989).
[120] Abramowitz and Stegun, Handbook of Mathematical Functions (public domain, 1964).

Appendix A

Non-magnetic disorder in graphene

In an ordinary semiconductor, non-magnetic disorder can often be simplified to an effective scalar potential H_V = V(r) in the real-space basis, or alternatively H_V = Σ_{p,p′} V(q) |p⟩⟨p′| in the momentum basis, where q = p − p′ and V(q) = (1/N) Σ_r e^{−iq·r} V(r) is the Fourier transform of the scalar potential V(r). For graphene in the small-p approximation, however, a scalar potential is insufficient to describe disorder, since each electron has internal non-magnetic properties σ and τ, introduced in Sec. 2.1.4.

In this appendix we will develop an effective model for the potentials in graphene. We start with the microscopic (tight-binding) description of these potentials, and then rewrite the potentials in a way that is useful in the valley approximation. In doing so, we will see that potentials with ordinary microscopic properties can couple to pseudospin and isospin. There are ten possible non-magnetic potential terms in graphene: nine of the form V_sl(r) σ_s τ_l with s, l ∈ {x, y, z} that couple to isospin and pseudospin, and one scalar potential V_00(r) (called E_0(r) in Chapter 2). Different symmetry classes of the potentials can be identified:

• Any potential coupling to τ causes rotations of the pseudospin. Potentials coupling to τ_x or τ_y cause inter-valley mixing. Those with τ_z are valley-dependent (but do not mix the valleys).

• Potentials coupling to σ_x or σ_y can be understood as vector potentials analogous to the magnetic vector potential, since they resemble the magnetic field terms (Sec. 2.2.2). Potentials coupling to σ_z are "mass terms" since they create a gap at the K points.

A.1 On-site energy variations

A basic model of disorder in graphene is that of a scalar potential that adjusts the energy of each p_z orbital relative to the rest of the graphene. Most generally this adds a perturbation to the Hamiltonian defined by two sublattice potentials V_A(r) and V_B(r):

H_V = Σ_r [ V_A(r) |r, A⟩⟨r, A| + V_B(r + a ŷ) |r, B⟩⟨r, B| ]. (A.1)

We are only interested in how this affects the small-p behaviour, so by computing matrix elements such as ⟨p, K, A| H_V |p′, K′, A⟩ we can write the potential in the momentum/valley/sublattice basis (2.8) as

H_V = Σ_{p,p′} |p⟩ [ V_A(q), 0, 0, V_A(q − K) ;
                    0, V_B(q), V_B(q − K), 0 ;
                    0, V_B(q + K), V_B(q), 0 ;
                    V_A(q + K), 0, 0, V_A(q) ] ⟨p′|, (A.2)

(rows separated by semicolons), where V_A(k) and V_B(k) are the Fourier transforms of the potentials.

In the isospin/pseudospin basis this matrix can be divided into separate classes like so:

H_V = Σ_{p,p′} |p⟩ [ V_00(q) + V_zz(q) σ_z τ_z (diagonal terms) + Σ_{s,l∈{x,y}} V_sl(q) σ_s τ_l (off-diagonal terms) ] ⟨p′|, (A.3)

where we have separated out six potentials

V_00(q) = ½ [V_A(q) + V_B(q)],
V_zz(q) = ½ [V_A(q) − V_B(q)],
V_xx(q) = ¼ [−V_A(q + K) + V_B(q + K) + V_B(q − K) − V_A(q − K)],
V_xy(q) = (i/4) [V_A(q + K) + V_B(q + K) − V_B(q − K) − V_A(q − K)],
V_yx(q) = (i/4) [V_A(q + K) − V_B(q + K) + V_B(q − K) − V_A(q − K)],
V_yy(q) = ¼ [V_A(q + K) + V_B(q + K) + V_B(q − K) + V_A(q − K)]. (A.4)

These have the property that V_sl(−q) = V_sl(q)*, and so each represents a real potential that can be computed by inverse Fourier transform, V_sl(r)* = V_sl(r) = Σ_k e^{ik·r} V_sl(k).

A.2 Hopping energy variations (strains)

What happens if the graphene's carbon-carbon bonds vary in strength, perhaps due to strains that stretch and bend the bonds?
This allows a secondtype of disorder potential, distinct from the on-site energy variations con-sidered above. We take (2.3) and allow the hopping integrals to change byt ? t + ti for a perturbation ti, which can be complex. Generally, thisperturbation may be written asHV = ?r|r,A? [t0(r + ?0) ?r,B|+ t1(r + ?1) ?r ? a1, B|+ t2(r + ?2) ?r ? a2, B| ]+ h.c.,(A.5)where ?0 = ay?/2, ?1 = (ay??a1)/2, and ?2 = (ay??a2)/2 are three vectorsrepresenting the half-way point between A and B atoms. Note that thereare three variables t0, t1, t2 representing the three different bond directions.We follow a similar procedure as the on-site potential calculation. Thematrix elements evaluate toHV = ?p,p?|p??????0 a(p,p?) b(p,p?) 0a(p?,p)? 0 0 b(p,p?)b(p?,p)? 0 0 a(p,p?)0 b(p?,p)? a(p?,p)? 0??????p?| , (A.6)where for brevity we?ve defined two functions a and b,a(p,p?) = t0(q)ei(p+p?)??0 + t1(q)ei(p+p?)??1+i2pi/3+ t2(q)ei(p+p?)??2?i2pi/3? t0(q) + t1(q)ei2pi/3 + t2(q)e?i2pi/3(A.7)b(p,p?) = t0(q ?K)ei(p+p?)??0 + t1(q ?K)ei(p+p?)??1+ t2(q ?K)ei(p+p?)??2 (A.8)Writing this result in the isospin/pseudospin basis, we haveHV = ?p,p?|p????a terms? ?? ?Vxz(q)?x?z + Vyz(q)?y?z+ Vzx(q)?z?x + Vzy(q)?z?y? ?? ?b terms??? ?p?| ,(A.9)128A.2. Hopping energy variations (strains)containing four different kinds of potential. Two of these potentials arevalley-dependent, with spatial parts given byVxz(q) = 12 [a(q) + a(?q)?] Vyz(q) = i2 [a(q)? a(?q)?]. (A.10)The other two potentials are valley-mixing, withVzx(q) = 12 [b(q) + b(?q)?] Vzy(q) = i2 [b(q)? b(?q)?]. (A.11)Again we have Vsl(?q) = Vsl(q)? 
and so each represents a real potential.129Appendix BExperimental detailsThis appendix gathers together a number of experimental details which arenot essential for understanding the main arguments of the thesis.B.1 Sample preparationThe steps done to fabricate our graphene devices were generally carriedout in the UBC cleanroom at AMPEL, except for metal film evaporationthat was done in the Quantum Devices Lab?s evaporator. The subtrateswere 5 ? 5 mm dies cut from highly doped Si wafers that were purchasedfrom Nova wafers. These wafers were thermally oxidized by the supplier,to a depth of ?280 nm. We began preparing the wafers by depositing5 nm Cr/100 nm Au wire-bonding pads and alignment marks, using opticallithography (these are visible as rows on the die in Fig. 4.2).Immediately before exfoliation of graphene, we cleaned the dies witha RCA-1/RCA-2 process. RCA-1 is a heated mixture of NH4OH(30%) /H2O2(30%) / H2O (ratio 1:1:5, at 70?C), and RCA-2 is a heated mixtureof HCl(30%) / H2O2(30%) / H2O (ratio 1:1:6, at 70?C). These remove or-ganic and particulate contaminants (RCA-1) and remove ionic contaminants(RCA-2). The RCA-1 clean was done for 10 minutes, but the RCA-2 onlyfor 2 minutes since it slowly erodes the extant CrAu films. This processleaves the SiO2 surface very clean and in a well-defined hydrophilic state.The exfoliation was performed using the classic procedure[5] with 3MMagic Scotch tape: a small piece of graphite is cleaved onto a ? 20 cm pieceof tape, 5 cm from one end. The tape is then repeatedly (7 to 10 times)pressed back on itself, each step essentially doubling the number of graphiteflakes. The cleaned Si dies were then placed lightly onto the adhesive surfaceat places of dense graphite coverage. The tape with attached dies was thenflipped over and firmly pressed onto the dies by finger pressure in a rollingmotion, to squeeze out air bubbles. 
Immediately after exfoliation, the dies were put into hot acetone for five minutes, in order to dissolve most of the tape residue. They were then rinsed with isopropanol, blow dried, and stored.

The exfoliation procedure generates a random assortment of graphene, few-layer graphene, and many-layer graphene over the entire chip. Graphene monolayers were identified by searching over the surface with an optical bright-field microscope with camera. The optimal magnification was found to be a 10× objective, giving an overall on-screen magnification of 500×. This lower magnification allowed fast identification of large monolayers (despite the low contrast) due to their sharp edges; one 5 × 5 mm die can be searched in 10 minutes. This procedure misses small (≲ 10 µm²) graphene flakes; however, such flakes are in any case difficult to use for devices.

The electron beam lithography was carried out in two or three stages. The first stage deposited millimeter-long wires from the wire-bonding pads to near the graphene; these were also 5 nm Cr / 100 nm Au films. At the same time a high-resolution alignment pattern was deposited next to the graphene. After this step, a high-resolution optical micrograph was taken of the graphene and alignment pattern. Based on this image, the final high-resolution steps (metallization and etching) were designed. The second stage deposited the high-resolution metal which overlapped the graphene and made electrical contact. This step was crucial, as it is not trivial to establish good low-resistance contacts: the metal evaporator was pumped overnight to < 1 × 10⁻⁷ mbar, and 0.5 nm Cr / 30 nm Au was evaporated. This second-stage metallization can be seen in Fig. 4.1. The third, optional stage of electron beam lithography was used to mask a short burst of oxygen plasma etching.

B.2 Measurements

B.2.1 Field alignment

The large magnet's field is not exactly aligned to the device plane, and so the true out-of-plane field B⊥ (seen by the graphene) will be
\[
B_\perp = B_{\rm small} + B_\parallel \sin\theta,
\tag{B.1}
\]
where Bsmall is the field from the small magnet, and B∥ sin θ is the contribution from the big magnet. The angle θ, which lies in the range −1° to 1°, is the angle between the large magnet axis and the graphene plane.

The strategy used to correct the misalignment was as follows: for a given B∥, we used the weak localization dip to measure the offset B∥ sin θ; the weak localization dip is always centered at B⊥ = 0. By correcting for this offset we obtain the actual value of B⊥ used in the graphs.

One complication is that the torque between the two magnets causes θ to vary slightly with Bsmall and B∥, so the relationship between B⊥ and Bsmall can be slightly non-linear. Torque issues also prevented us from fully energizing both magnets at the same time, as we were worried that the large torques would mechanically damage the refrigerator or magnet.

B.2.2 Conductance measurement

DC measurements are susceptible to errors from thermoelectric voltage offsets and the high noise of amplifiers at low frequency. The classic technique to measure resistance without DC errors is to apply an AC bias and measure the resulting AC output. Note that this means the input and output values we discuss are actually rms values; e.g., for a four-terminal measurement, Ibias refers to the rms bias and V would be the in-phase rms voltage difference.

To generate the AC bias and to measure the rms amplitude we used an SR830 lock-in amplifier. The chosen AC frequency was in the range 100 Hz to 1000 Hz, the value depending on whichever frequency was, at the time, least influenced by interference. Typically the measurement bandwidth was around 0.3 Hz to 3 Hz, as the lock-in was set up with a time constant of 30 ms or 100 ms, on the 24 dB/oct filter slope setting.

B.2.3 Device overheating by bias

This section has been adapted from the supplement published in Physical Review Letters[4].
Much of the following text is © American Physical Society.

The application of bias current or bias voltage necessarily generates Joule heating in the device. In the experiment of Chapter 7 the temperature played a critical role, and so it was necessary to rule out overheating effects. For a two-terminal device, it can be shown from the Wiedemann-Franz law that the temperature at the hottest point (center) of the device will be[19]
\[
T_{\rm device} = \sqrt{T_{\rm wire}^2 + T_{\rm bias}^2}, \qquad T_{\rm bias} = \frac{\sqrt{3}}{2\pi}\,\frac{eV}{k_B},
\tag{B.2}
\]
where V is the source-drain voltage and Twire is the temperature of the terminals.

Thus, in the last experiment, when we wanted to avoid heating our device (resistance ≈ 1750 Ω) above 100 mK, we could not apply much more than 10 nA of bias current. Figure B.1 shows the variance of CF for different bias currents and temperatures. So long as kBT/ħ is not much less than the decoherence rate, the variance is sensitive to temperature[52, 96], and so the accuracy of (B.2) can easily be checked by observing the variance. The figure shows how the temperature T ≈ 110 mK (which has CF variance 0.43 e⁴/h²) can be produced either by a warm Tcryostat ≈ 100 mK or a high bias current Tbias ≈ 110 mK, as predicted by (B.2). The variance saturates below ≈ 100 mK, which is consistent with kBT/ħ falling below the saturated decoherence rate (Sec. 8.3.3).

[Figure B.1: Bias overheating effect as seen in CF variance. The variance, computed from G(VBG) over VBG = −5 ⋯ 5 V, is shown as a function of temperature. Here B⊥ = 0, B∥ ≈ 100 mT, and Ibias = 5 nA or 20 nA. Open symbols show the uncorrected Tcryostat, and filled symbols the corrected temperature T via (B.2) with R = 1750 Ω. © American Physical Society[4]]

Test of non-saturating temperature: e-e interaction correction to conductivity

There are other aspects of the device which demonstrate that the temperature can be reduced appropriately.
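Equation (B.2) can be checked against the numbers quoted later in this appendix (a 20 nA bias through the ≈1750 Ω device, with a 13 mK cryostat). A minimal sketch, not thesis code; only standard constants and values stated in the text are used:

```python
from math import sqrt, pi

kB = 1.380649e-23     # Boltzmann constant (J/K)
e  = 1.602176634e-19  # electron charge (C)

def T_bias(V):
    """Eq. (B.2): bias-induced temperature T_bias = (sqrt(3)/(2*pi)) * e*V / kB."""
    return sqrt(3) / (2 * pi) * e * V / kB

R = 1750.0                      # device resistance (ohms), from the text
Tb = T_bias(R * 20e-9)          # 20 nA of bias current
print(Tb)                       # ~0.11 K, the quoted T_bias
print(sqrt(0.013**2 + Tb**2))   # with T_cryostat = 13 mK: ~0.113 K, as predicted by (B.2)
```

The same relation shows why the bias had to stay near 10 nA to keep the device close to 100 mK.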
One common transport test of low temperatures is to examine the temperature dependence of the ensemble-averaged conductivity at nonzero magnetic field. A non-saturating contribution to the conductivity, proportional to ln T, is expected from electron-electron interactions:
\[
\bar\sigma = \sigma_0 + \frac{e^2}{\pi h}\left(1 + c\left[1 - \frac{\ln(1+F)}{F}\right]\right)\ln T,
\tag{B.3}
\]
where σ0 is an offset, F = −0.1 is predicted for graphene on SiO2, and c = 3 at the low temperatures we investigate here.[28, 109] This effect should not be confused with the electron-electron decoherence which indirectly causes a temperature-dependent conductance via the WL effect. In fact, measurements of (B.3) should be carried out in a nonzero magnetic field, to break time-reversal symmetry and thus suppress the coherence-sensitive part of WL; this allows a simple comparison with (B.3).

In Figure B.2 we show the conductivity over a small range in B⊥ near zero, averaged over VBG; this is a wider range of the WL data displayed in Chapter 7 (from the first cooldown). Above 1 mT the WL contribution to conductivity is insensitive to the coherence and thus is temperature-independent; here, WL merely causes a B-dependence of the offset σ0 in Eq. (B.3). Thus, by averaging the conductivity over a consistent range of magnetic fields above 1 mT, we can test (B.3).

[Figure B.2: Electron-electron interaction contribution to conductivity. (a) VBG-averaged magnetoconductance vs. B⊥ (mT), expanded from Fig. 7.6, for 110 mK and 1400 mK; shaded regions indicate the field-averaging ranges used for panel b. (b) Conductivity (e²/h) averaged over both VBG and B⊥ (circles), plotted against log temperature. The dashed line shows the slope predicted by theory (B.3). © American Physical Society[4]]

This VBG- and B⊥-averaged conductivity (Fig.
B.2) shows the expected logarithmic temperature dependence, and the slope is close to the expected value.

Test of non-saturating temperature: CF thermometry

As a second test of temperature, we note the recent report[36] that CF themselves can provide information on the electron temperature in back-gated devices such as ours. In this case, we measure conductance as a function of back-gate voltage (not magnetic field), and subtract a linear background to yield the conductance fluctuations δg(VBG). Then, VBG is converted to internal chemical potential µ using the ideal graphene relation µ = ħv√(πα|VBG − V0|), where v = 10⁶ m/s is the Fermi speed, α = 8 × 10¹⁰ cm⁻²/V is the gate capacitance density, and V0 = 45 V is the presumed location of the Dirac point in this device (see Sec. 2.3.2). This gives us the CF in kinetic energy, δg(µ), from which we calculate the autocorrelation function in energy:
\[
f_E(\Delta E) = \frac{1}{E_2-E_1}\int_{E_1}^{E_2} \delta g(\mu)\,\delta g(\mu+\Delta E)\,d\mu.
\tag{B.4}
\]
When kBT/ħ greatly exceeds the decoherence rate τφ⁻¹, the shape of the correlation function is entirely determined by thermal smearing, so the half-width of f_E(ΔE) is expected[36] to be E1/2 = 2.72 kBT, and the inflection point is expected to be at EIP = 2.14 kBT (Sec. 8.3.3). In fact, at low temperatures this condition is not satisfied: we have kBT/ħ ∼ τφ⁻¹ due to the saturating coherence which is the topic of Chapter 7, so the decoherence rate also influences the width of f_E(ΔE). In this situation, E1/2 (or EIP) can still be converted to T, though in a more complicated manner, by taking into account the effect of τφ⁻¹, which is known from BIP (see Sec. 8.3.3).

Figure B.3 shows the energy autocorrelation from two conductance traces in the first cooldown at T ≈ 100 mK, where τφ⁻¹ = 12 ns⁻¹. By taking this measured value of τφ⁻¹ into account, we compute the temperature from the energy widths (Sec. 8.3.3). In one case the temperature is mainly due to Tbias, and its energy widths correspond to 120 mK (whereas (B.2) predicts 113 mK).
In the other case the temperature is mainly due to Tcryostat, and its energy width corresponds to 115 mK (whereas (B.2) predicts 104 mK). The use of either E1/2 or EIP yielded the same temperature to within 4%, for these correlation functions. Very low temperatures showed some deviation in energy width from the expected temperature (see Sec. 9.1.2).

[Figure B.3: Energy autocorrelations of CF for B⊥ = 0, B∥ ≈ 100 mT. The black curve (E1/2 = 43.5 µeV, EIP = 26.5 µeV) corresponds to Ibias = 5 nA with Tcryostat = 100 mK. The red curve (E1/2 = 44.3 µeV, EIP = 28.0 µeV) corresponds to Ibias = 20 nA with Tcryostat = 13 mK. The small peak on top of the black curve is an artifact of measurement noise. Correlations were computed from G(VBG) measured over VBG = −5 ⋯ 5 V. These are two of the data points in Fig. B.1. © American Physical Society[4]]

B.3 Low-temperature apparatus

Since we were concerned about overheating of our graphene device by electromagnetic interference and other sources, we decided to construct filters, coolers, and shields to attach to the mixing chamber of the dilution refrigerator. These are schematically pictured in Fig. 4.3. As is usually the case for cryogenic apparatus, this involved a number of materials challenges. The design avoided magnetic materials as much as possible, to avoid magnetocaloric heating (see Sec. B.3.5).

The cryostat comes factory-supplied with ≈ 100 Ω wires that travel from room temperature to near the mixing chamber. Their resistance limits the heat currents from room temperature, and at several points the wires are glued to the cryostat for heat sinking. Thus, the provided wires are reasonably cold at their lower end. Unfortunately these factory wires cannot simply be attached to the chip:
- The heat sinks function very poorly at low temperatures, as interfacial thermal resistances (between the electrical conductors and insulators) become quite large at sub-kelvin temperatures[110]. The wires and device may easily be at, say, 50 mK when the mixing chamber is at 10 mK.
- If any heat is generated at the chip, it must be dissipated through the wires. The resistance of the wires now becomes a problem, since it obstructs the conduction of heat away from the sample.
- Even if the wires were perfectly cold, high-frequency electrical interference and noise can propagate from room temperature and impinge on the device. This generates local heating in the resistive graphene device even when no electrical bias is intended (see Sec. B.2.3).

[Figure B.4: Signal filtering topology for each sample wire in the cryostat: factory wiring, then an RC-filter board (three 332 Ω resistors with 100 pF, 100 pF, and 10 pF capacitors), then the microwave attenuator.]

The fact that magnetic field sweeps formed the primary type of measurement led to further concerns:

- Fast magnetic field sweeps, which are desirable for speeding up a measurement, lead to eddy-current heating in conductive materials.
- For fields of 10 mT to 100 mT, paramagnetic materials become heated and cooled through the magnetocaloric effect (Sec. B.3.5).
- For fields above a few tesla, nuclei in conductors (especially copper) cause another magnetocaloric effect.

To deal with these problems, we built four new components into the low-temperature part of the refrigerator (schematically shown in Fig. 4.3).

B.3.1 RC filter boards

The wires first enter a circuit board with resistor-capacitor (RC) low-pass filters. Each wire passes through the sequence shown in Figure B.4. For reasons of compactness and frequency response, these filters were made using surface-mount components. The resistors and capacitors maintained their
characteristics at low temperature, as they were of metal-film and porcelain construction, respectively.

This filter begins to cut off around 1 MHz, and by 100 MHz the attenuation is ≈ 10⁻³. Above 1 GHz, however, the level of attenuation begins to worsen due to the parasitic inductance (around 1-2 nH) of the capacitors. By about 10 GHz the boards no longer provide significant filtering, which is unfortunate as the wires themselves may still be able to partially transport signals beyond this range. These high frequencies are removed in the next stage, described below (Sec. B.3.3).

The RC filter boards were mechanically affixed to the side of the mixing chamber. It was not crucial to ensure good heat sinking of these boards, as they did not receive heating from the magnetic fields.

B.3.2 Shield

Beginning at the circuit boards, the wires are fully enclosed in a metallic shield which is cooled by the mixing chamber. The shield performs two functions: it prevents ≈ 1-100 GHz high-frequency radiation (which can bounce around in the cryostat) from re-entering the sample wires, and also prevents ≈ 1 THz blackbody radiation (which leaks down from the room-temperature parts of the refrigerator) from directly impinging on the sample.

The shield extends down past the sample location as well (fully surrounding the sample), into the magnet bore. It was deemed important for this shield not to heat by eddy currents when the magnetic field was swept, as this would heat up the mixing chamber. Thus, the part of the shield inside the magnet was constructed from phenolic paper composite (as a mechanical support), with thin copper adhesive tape applied to the outside. The tape was arranged in overlapping vertical strips without electrical contact between strips;
this prevented the circulation of eddy currents from the large magnet (which has a vertical axis).

In the end, it was found that the mixing chamber would begin to noticeably heat up for magnet sweeps of 1 mT/s, and significant heating would occur at 5 mT/s.

B.3.3 Microwave attenuator

Inside the shield, the signals travel through a long, coiled loop of resistive constantan wire sandwiched between copper tape (Fig. B.4). This construction is a lossy transmission line: even though it presents a relatively low capacitance and resistance (100 pF, 100 Ω) to low-frequency measurements, it is an efficient absorber for all radiation above 1 GHz.

The reason for the high losses at microwave frequencies is the skin effect: at high frequency, signal currents are only carried in the outer layer of the conductors. In constantan, for example, the skin depth at 1 GHz is only 10 µm; in general the depth is δ = √(ρ/(πfµ)), where ρ is the bulk resistivity of the wire, f is the frequency, and µ is the magnetic permeability of the wire. As a result, the resistance of a wire becomes amplified far beyond its DC value. It can be shown that the resulting resistance for a cylindrical wire of diameter d is given by
\[
R \approx \frac{2}{d}\sqrt{\frac{\mu\rho f}{2\pi}}\times \text{length}.
\tag{B.5}
\]
The resulting attenuation factor can be calculated from the solution of the telegrapher's equations:
\[
\text{attenuation} = \exp\!\left(-\tfrac{1}{2}\,R/Z_0\right),
\tag{B.6}
\]
where Z0 = √(L/C) is the characteristic impedance of the transmission line, determined by the inductance per unit length (L) and capacitance per unit length (C).

In our case the attenuator was estimated to have Z0 ≈ 100 Ω. The 1.5 m long, 110 µm diameter constantan wire had a resistance of R = 272 Ω × √(f/1 GHz). Thus, at 10 GHz (where the RC circuit boards are ineffective), the attenuator attenuates by 10⁻²; at higher frequencies the attenuation strengthens rapidly, e.g. 10⁻⁶ at 100 GHz. The weak point in this filtering system is thus the 3-30 GHz band, and one may ask whether 10⁻² attenuation is enough.
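The numbers in this subsection are straightforward to reproduce. A sketch, assuming a textbook constantan resistivity of 4.9 × 10⁻⁷ Ω·m (the exact value used in the thesis is not stated here) and the quoted R(f) and Z0:

```python
from math import sqrt, pi, exp

rho = 4.9e-7      # constantan resistivity (ohm*m); textbook value, assumed
mu0 = 4e-7 * pi   # permeability (constantan is essentially non-magnetic)

def skin_depth(f):
    """delta = sqrt(rho / (pi * f * mu)) for a non-magnetic conductor."""
    return sqrt(rho / (pi * f * mu0))

print(skin_depth(1e9) * 1e6)    # ~11 um at 1 GHz, the "only 10 um" quoted above

# Attenuation of the lossy line, Eq. (B.6), using the measured R(f) and Z0:
Z0 = 100.0
def attenuation(f):
    R = 272.0 * sqrt(f / 1e9)   # ohms, the quoted wire resistance scaling
    return exp(-R / (2 * Z0))

print(attenuation(10e9))        # ~1e-2 at 10 GHz
print(attenuation(100e9))       # ~1e-6 at 100 GHz
```

Both quoted attenuation figures follow directly from the exponential dependence on √f.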
In reality, the overall attenuation is likely stronger than this: the resistive wires passing down from room temperature are also unable to support such high frequencies, and at several points there are mismatched impedances which would produce reflections.

The attenuator is mounted on a copper-laminated phenolic stick, like that described below for the cold finger, in order to keep it near the mixing chamber temperature. Note that the attenuator acts as a one-dimensional "black body" for high frequencies, and so it emits blackbody radiation (Johnson noise) at those frequencies that it blocks.

B.3.4 Cold finger

After passing through the attenuators, the constantan wires terminate at a connector. On the other side of this connector is the cold finger, the plate that contains the chip socket (sample holder) and ensures that the sample wires are cold.

The cold finger is built on a homemade substrate similar to printed circuit board material. The substrate is a ≈ 30 cm long, 25 mm wide, and 1.6 mm thick piece of phenolic paper composite which has been laminated with a 100 µm thick copper foil, using Stycast 2850 epoxy as the glue. The copper laminate runs continuously around the upper edge of the substrate (near to where it is bolted to the refrigerator), in order to provide good thermal anchoring to both sides. On the lower end of the cold finger is a socket for our CLCC32 chip carriers, aligned to the center of the magnet.

The cold finger must cool the socket pins as close as possible to the base temperature, since the sample is cooled entirely through its wires. To ensure this, the signals pass from the electrical connector to the socket along long, insulated, high-quality copper wires³². These wires travel up and down the length of the cold finger (a total ≈ 50 cm distance), and are glued to the copper foil substrate with Stycast 2850 epoxy.
At low temperatures, the interfacial thermal resistance (copper to PVDF, PVDF to epoxy, then epoxy to copper) gives poor heat sinking; the long distance, and hence large interface area, compensates for this.

A quick thermal analysis gives the performance of these wires. The copper wires are 400 µm in diameter with a measured residual resistivity ratio of 104 (i.e., their resistance at low temperature is 104 times less than at room temperature). By the Wiedemann-Franz law, the per-length thermal resistance of one of these wires at 10 mK is therefore R/L = 5.3 mK/nW per meter, a number that increases linearly with temperature. We can also estimate the degree of heat sinking from the wire to the laminated copper substrate. At very low temperature, there is significant thermal boundary resistance between the copper and insulator due to phonon mismatch effects; a typical per-area thermal resistance is (10 cm²K⁴/W)/T³ (copper-epoxy-copper)[111]. Taking the contact width to be the diameter of the wire, the per-length heat conductance of this heat sink is G/L = 0.4 nW/mK per meter.

To connect the wires to the socket, they are wrapped around the socket pins. Wire wrapping is a less-used alternative to solder in electronic circuit construction: although it forms extremely high-quality connections, it is labour-intensive and not easily miniaturized. In this case, avoiding solder gives a large advantage in thermal conductivity at sub-kelvin temperatures: solder becomes superconducting below 7 K, but unfortunately Cooper pairs do not conduct heat.

[32: These are wire-wrap wires: PVDF (polyvinylidene fluoride) insulation over silver-plated copper.]

B.3.5 Magnetic heating of cold finger materials

A magnetic material in a magnetic field is a thermodynamic system, and may be heated or cooled by changes in magnetic field (this is known as the magnetocaloric effect). As most of the experiments in the cryostat involve magnetic field sweeps, the magnetocaloric effect was a concern.
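The per-length thermal resistance quoted in the wire analysis above follows from the Wiedemann-Franz law. A sketch, assuming a textbook room-temperature copper resistivity of 1.7 × 10⁻⁸ Ω·m (not stated in the text):

```python
from math import pi

L0 = 2.44e-8        # Lorenz number (W*ohm/K^2)
rho_300K = 1.7e-8   # room-temperature Cu resistivity (ohm*m); textbook value, assumed
RRR = 104           # measured residual resistivity ratio, from the text
d = 400e-6          # wire diameter (m)
T = 0.010           # temperature (K)

rho = rho_300K / RRR                 # low-temperature resistivity
kappa = L0 * T / rho                 # Wiedemann-Franz thermal conductivity (W/m/K)
area = pi * (d / 2) ** 2
R_per_L = 1 / (kappa * area)         # thermal resistance per unit length (K/W per m)
print(R_per_L * 1e-6)                # ~5.3 mK/nW per meter, as quoted
```

Since κ ∝ T, this resistance indeed grows linearly as the temperature is lowered further.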
This subsection presents a detailed analysis of magnetic heating in the present cold finger design, demonstrating that it is a concern both at low and high magnetic fields. Fortunately, a practical workaround to this problem is identified.

It can be shown from the thermodynamic Maxwell relations that, for a slow isothermal increase in magnetic field B by an amount dB, a paramagnetic system will release an amount of heat dQ given by[110]
\[
dQ = -VT\left(\frac{\partial M}{\partial T}\right)_B dB,
\tag{B.7}
\]
where T is the temperature, M is the magnetization (dipole moment per unit volume) of the system, and V is the system volume. This process is reversible: upon isothermally decreasing the field, the magnetic system will absorb the same amount of heat.

For low enough fields, or high enough temperatures, paramagnetic systems typically follow the Curie law in their temperature and field dependence:
\[
M = \left(\chi_0/\mu_0 + C/T\right) B.
\tag{B.8}
\]
Here χ0 is the temperature-independent part of the magnetic susceptibility and µ0 is the vacuum magnetic permeability; from (B.7) it can be seen that χ0 does not contribute to heating. The Curie constant C, on the other hand, does directly contribute to heating, and so we expect for a change in field from Bi to Bf that
\[
Q = \frac{VC}{2T}\left(B_f^2 - B_i^2\right).
\tag{B.9}
\]
Table B.1 lists Curie constants as measured for selected cryogenic materials that were used in (or considered for) our cold finger. Data on many other materials can be found in the literature: Ref. [112] examines many common metals and plastics, and Ref. [110] lists susceptibility data for standard cryogenic construction materials. Purely organic materials (clear plastics, paper, cloth, wood) generally maintain a small susceptibility. Pure elemental Cu, Ag, Au, and many Al alloys are also excellent non-magnetic materials. Unsurprisingly, alloys that include Cr, Mn, Fe, Co, or Ni tend to become magnetic even if they are not magnetic at room temperature (e.g., stainless steel).
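As an illustration of Eq. (B.9) (as reconstructed here, with the factor of 2T that follows from integrating (B.7) with the Curie law): the heat released by a small field ramp on the most magnetic item in Table B.1, the ceramic chip carrier. The 5 mT endpoint is chosen to stay roughly within the Curie-law regime discussed below; all numbers besides the table entry are illustrative:

```python
from math import pi

mu0 = 4e-7 * pi
mu0C_per_rho = 9100e-6 * 1e-6   # ceramic chip carrier, K*cm^3/g converted to K*m^3/g
mass_g = 0.45                   # chip carrier mass (g), from the text
T = 0.020                       # temperature (K)
Bi, Bf = 0.0, 5e-3              # field ramp (T)

CV = mu0C_per_rho * mass_g / mu0         # C*V in the convention of Eq. (B.8)
Q = CV / (2 * T) * (Bf**2 - Bi**2)       # Eq. (B.9)
print(Q * 1e6)                           # heat released, in microjoules (~2 uJ)
```

Even a microjoule-scale release is substantial at 20 mK, given the weak thermal link to the chip carrier described below.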
Those are not the only problematic materials, however: surprisingly, most types of brass also become magnetic, as do many glasses and ceramics.

The magnetism in glasses and ceramics creates an engineering problem for low-temperature experiments in magnetic field. The construction of cryogenic apparatus generally requires an insulating material with thermal expansion matched to copper. Such materials are nearly always composites of glass (low thermal expansion) with a thermoplastic or epoxy (high thermal expansion). The glass material is rarely of high quality and usually contains significant paramagnetism, as seen in the table. An even larger concern in our apparatus is the chip carrier (see Table B.1), where the magnetism is large and the cooling is quite poor.

Paramagnetic impurities in glasses at low temperature and low field

How large a problem are the glasses and ceramics, and in which field range? All magnetic materials saturate at some point, a fact that is not included in the field dependence of the Curie law. Moreover, the 1/T temperature dependence does not necessarily extrapolate to lower temperatures. In order to use the data in Table B.1 for design of the cold finger at low temperature and high magnetic field, it is worthwhile to consider the nature of the magnetism.

For these glasses, the magnetism can be modelled as an ensemble of independent magnetic spins in a non-magnetic host material. The magnetization of these spins from statistical thermodynamics is described by a Curie law for low fields,[110]
\[
C = \frac{\mu^2}{k_B}\,\frac{J+1}{3J}\,n,
\tag{B.10}
\]
where the spins' angular momentum number is J, the magnetic moment is µ = gJµB, and n is the number density (per unit volume) of the spins. The Curie law only holds for fields B ≪ kBT/µ, above which the paramagnetic moments start to freeze into a fixed orientation. The entropy per unmagnetized spin is kB ln(2J + 1), so a total heat of Q = kBT ln(2J + 1) per spin is released during the field-freezing process. For electronic paramagnets (e.g.
materials contaminated with Fe), the freezing occurs around 10 mT to 100 mT, for typical dilution refrigerator temperatures.

As a point of focus, we look at the dominant source of magnetism in glass and ceramic, generally believed to be contamination by Fe3+ ions[115-

Table B.1: Curie constants for various cryogenic materials, normalized to material density (ρ) and multiplied by the magnetic constant µ0. Except for cited values, these were measured by me using the Quantum Devices SQUID magnetometer in the Superconductivity lab. Curie constants were extracted from the fitted magnetization vs. T⁻¹ for T = 5 ⋯ 30 K, in a field B = 100 mT. The SI susceptibility convention is used here; results for C in Gaussian units are 4π times smaller.

Material | Details | µ0C/ρ (µK·cm³/g)
wire wrap wire | PVDF on Ag-plated Cu | -0.3
Au-plated BeCu | socket pin (Ni-free) | 4
phosphor bronze | foil | 80
brass | nut from online store | 90
brass | screw from PHAS store | 80000
constantan | | non-Curie[113]
Cu tape | #1185 from 3M | -1
clear epoxy | Stycast 1266 | 2 [114]
silver epoxy | Epotek H20E | 30
glass-filled epoxy | Stycast 2850 | ≈250 [110]
nylon | screw from PHAS store | -4
Vespel SP1 | bead from Supercon lab | 24
paper composite | Garolite XX | 200
fibreglass composite | Garolite G-10 | 2700
fibreglass composite | circuit board | 5300
glass-filled PPS | socket from Locknest | 760
ceramic chip carrier | "non-magnetic" (Ni-free) | 9100

118]. In Fe3+, the five unpaired electrons tend to collectively behave as a J = 5/2, µ = 5µB free spin, according to Hund's rules³³. The saturation of magnetization in ceramics at high fields confirms these values[117]. From (B.10), a measured value of Cµ0/ρ = 1000 × 10⁻⁶ K·cm³/g in Table B.1 corresponds to 1.095 × 10¹⁹ Fe3+ ions per gram, assuming these are the only magnetic ions in the material. The 0.45 g ceramic chip carrier thus would contain about 4.5 ×
10¹⁹ of these ions.

Extrapolation of the magnetic behaviour of these iron-contaminated glasses to sub-kelvin temperatures is not trivial. It becomes necessary to consider the crystal field splittings, in which the local electric environment gives a slight preference between different spin directions, mediated by the spin-orbit interaction[115]. These field splittings form an easy axis for the spin, partially breaking its degeneracy on a 0.5 K energy scale. For low temperatures, this reduces the number of thermally available spin states from six to two. This phenomenon has been clearly observed as a Schottky anomaly in ceramics[116, 118]: a large peak in the heat capacity around 0.5 K, as the per-ion entropy changes from kB ln 6 to kB ln 2. Although the Fe3+ now effectively has only two spin states, as if J = 1/2, it does not magnetize like a normal free electron, since it can only be magnetized along its easy axis. The easy axis is oriented according to the local crystal axes, which are random in an amorphous glass. One can approximate this by averaging over random directions and taking the magnetic moment to be reduced to µ ≈ 2.5µB for these J = 1/2 low-temperature Fe3+ ions.

We now have the numbers necessary to calculate the heat releases from the chip carrier. The total heat released is Q = kBT ln 2 per ion when isothermally increasing the field from zero to saturation. For the ceramic chip carrier and its 4.5 × 10¹⁹ Fe3+ ions this is 19 µJ at 20 mK. It can take hours to release this entropy in an isothermal manner, since the typical thermal conductance away from the chip carrier is only of order 10 nW/mK. A slightly faster method is to increase the field quickly to ≈
100 mT: much more heat will be released and the chip carrier will heat up quite a bit, to 100 mK or 200 mK, but the cooling rates at these temperatures are quite fast, and the chip returns to base temperature after an hour or so.

The field where the most magnetic heating occurs (the maximum in (∂S/∂B)_T) is quite small for these Fe3+ ions: Bpeak ≈ T × (0.5 mT/mK). This fact presents a simple solution to the magnetic heating problem, which can be used for most experiments (especially for conductance fluctuations): once the spins have been polarized and the chip cooled back down, do not return to low magnetic fields. The paramagnets will remain polarized, following the field orientation, and will no longer cause heating. For example, at 20 mK one simply needs to keep the total magnetic field above 20 mT.

[33: The orbital angular momentum is effectively locked to the crystal environment around the Fe ion, and so it does not mix together with the spin; only the spins are free to rotate, so their g-factor is two[115].]

Nuclear paramagnetic heating at high fields

At millikelvin temperatures and high magnetic fields (above one tesla or so), it is important to also consider the contribution to magnetism from atomic nuclei. Much like magnetic ions in a glass, the magnetic moments of nuclei behave thermodynamically like free, independent dipoles. Hyperfine splittings are small enough to be neglected, and so it is straightforward to take the known nuclear characteristics and compute the Curie constant from (B.10). For example, both natural isotopes of Cu have J = 3/2 with µ ≈ 0.00083µB, and a piece of copper contains 9.5 × 10²¹ nuclei per gram. The contribution to the Curie constant from nuclei in copper is thus only Cµ0/ρ = 0.085 µK·cm³/g (normalized to density; compare to Table B.1).

Even though the nuclear Curie constants are very small, the nuclei do not easily polarize compared to paramagnetic ions. As a result, significant amounts of heat are exchanged at high fields.
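Equation (B.10), used for both the Fe3+ impurities and the nuclei, can be checked numerically: inverting it for the spin density reproduces the Fe3+ ion count quoted earlier. A sketch using standard constants and the reconstruction of (B.10) given above:

```python
from math import pi

kB  = 1.380649e-23   # J/K
muB = 9.274010e-24   # Bohr magneton (J/T)
mu0 = 4e-7 * pi      # vacuum permeability

def ions_per_gram(mu0C_per_rho, J, mu):
    """Invert Eq. (B.10), C = (mu^2/kB)*((J+1)/(3J))*n, for the spin density n.

    mu0C_per_rho: density-normalized Curie constant in K*m^3/g (SI convention)."""
    return mu0C_per_rho * kB / (mu0 * mu**2 * (J + 1) / (3 * J))

# Fe3+ free spins (J = 5/2, mu = 5*muB); a table value of 1000e-6 K*cm^3/g:
n = ions_per_gram(1000e-6 * 1e-6, J=2.5, mu=5 * muB)
print(n)   # ~1.09e19 ions per gram, matching the number quoted above
```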
For example, at 10 mK the heat released from a gram of copper by increasing the field from 9 T to 10 T is 130 µJ! This effect is a concern in the current cold finger design, as the wire-wrap wires (which set the sample temperature) are made of copper, and are only cooled through an insulating material.

Avoiding copper in a design is quite difficult; a suitable alternative would be silver (similar electrical conductivity but a much reduced nuclear Curie constant compared to copper), however wires and foils of high-quality electrical silver are costly and not as readily available. The best workaround to nuclear magnetic heating is to ramp the magnetic field up to the highest desired value and wait several hours until the system has cooled down (it truly takes hours, as the nuclear heat capacity is enormous in these conditions). Once the nuclei have cooled down, the experiment is carried out in successively decreasing fields, with the field changing slowly enough to avoid eddy-current heating. This was the strategy used during all of the high-field measurements in this thesis.

Appendix C: Detailed interpretation of Chapter 7 data

This appendix has been adapted from the supplement that we published in Physical Review Letters[4]. Much of the following text is © American Physical Society.

This appendix covers auxiliary topics relevant to Chapter 7. These topics are covered in the following sections:

1. The conductance background subtraction procedure is described.
2. We discuss the accuracy of the CF autocorrelation inflection point when used as a measure of the decoherence rate.
3. We describe the classical magnetic defect polarization model of CF, which was used to interpret the crossover of τCF⁻¹ in B∥.
4.
We consider the effects of spin-orbit interaction on weak localization in the presence of in-plane magnetic field and defect-induced dephasing, in order to provide an upper limit on the spin-orbit strength in graphene.

C.1 Background subtraction

Conductance fluctuation data were analyzed by their autocorrelation in perpendicular magnetic field, f(ΔB), defined as

    f(ΔB) = ⟨δG(B⊥, V_BG) δG(B⊥ + ΔB, V_BG)⟩_{B⊥, V_BG}   (C.1)
          = [1/(B₂ − ΔB − B₁)] · [1/N_{V_BG}] Σ_{V_BG} ∫_{B₁}^{B₂−ΔB} δG(B⊥, V_BG) δG(B⊥ + ΔB, V_BG) dB⊥.   (C.2)

Here, δG(B⊥, V_BG) is the fluctuating part of the conductance, obtained from the raw data G(B⊥, V_BG) by subtracting off a smooth background function. The conductance was scanned over a range B₁ ≈ 50 mT to B₂ ≈ 150 mT, for ten to twenty different values of V_BG (N_{V_BG} values in total) spread over the narrow range of gate voltage noted above; the averaging over V_BG was done to gather more fluctuations and thereby improve the statistical accuracy of (C.2). The smooth background, used to obtain δG, was obtained by fitting G(B⊥, V_BG) to a polynomial of the form

    g₀ + g₁B⊥ + g₂B⊥² + g₃V_BG + g₄V_BG² + g₅B⊥V_BG   (C.3)

for free parameters {g_i}.

C.2 Correspondence between inflection point and decoherence rate

The rate τ_CF⁻¹ defined from the inflection point B_IP,

    τ_CF⁻¹ ≡ 2eD·B_IP/3ħ,   (C.4)

only corresponds to the true CF decoherence rate τ_φ⁻¹ under certain conditions, as explained in Sec. 8.3. We list several conditions below; some conditions are nearly violated and hence there are some small systematic errors in Def. (C.4) (error meaning the difference between τ_φ⁻¹ and τ_CF⁻¹). Some of these errors are known well enough to be corrected, however we have chosen not to correct them in order to retain the simplicity of definition (C.4).
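For concreteness, Def. (C.4) amounts to a simple unit conversion from an inflection field to a rate. A minimal sketch; the diffusion constant and inflection field below are illustrative values, not the ones measured in this thesis.

```python
# Convert a CF inflection point B_IP into a rate via Def. (C.4):
# tau_CF^-1 = 2 e D B_IP / (3 hbar).
E_CHARGE = 1.602176634e-19  # elementary charge, C
HBAR = 1.054571817e-34      # reduced Planck constant, J s

def inflection_rate(D, B_ip):
    """Decoherence-rate estimate (s^-1) from diffusion constant D (m^2/s)
    and inflection field B_ip (T), per Def. (C.4)."""
    return 2 * E_CHARGE * D * B_ip / (3 * HBAR)

# Illustrative (assumed) numbers: D = 0.02 m^2/s and B_IP = 50 uT
# give a rate of about 1 ns^-1.
print(inflection_rate(0.02, 50e-6) / 1e9, "ns^-1")
```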
In any case, none of these error sources are able to artificially generate the saturation that is the topic of Chapter 7.

Limiting case: quasi-2D, thermally-smeared

Definition C.4 assumes the quasi-2D thermally smeared regime, where the coherence length √(Dτ_φ) is smaller than the device dimensions, yet is larger than the "thermal length" √(Dħ/kT).

The transition from quasi-2D behaviour (B_IP^2D = (3/2)ħ/[eDτ_φ]) to quasi-1D behaviour (B_IP^1D = √6·ħ/[eW√(Dτ_φ)] for width W) should occur in the vicinity of τ_φ⁻¹ ≈ D/W². In the present device W = 4.3 μm, so the crossover would be when τ_φ⁻¹ drops below 1.6 ns⁻¹. The lowest τ_CF⁻¹ measured was 5 ns⁻¹, at high B∥ and the lowest temperature; this is still a factor of three higher than the crossover value, but it is perhaps possible that W begins to influence τ_CF⁻¹ at this point. We are unaware of any predictions of behaviour in the crossover and so there is no simple way to quantify this error. As an unlikely worst case the crossover could be a gradual quadratic crossover (B_IP = B_IP^2D · [1 + (8/3)Dτ_φ/W²]^(1/2)), meaning that all of our extracted τ_CF⁻¹ values would be 2 ns⁻¹ higher than the actual τ_φ⁻¹.

As can be seen in Fig. 8.1, the thermal smearing limit holds up to B_IP ≲ kT/eD. For the lowest temperatures (≈ 100 mK) at zero B∥, the values become as large as B_IP = 1.3 kT/eD; in this situation the error in τ_CF due to the thermal smearing approximation is 10%. For all other data points we have B_IP ≤ 0.7 kT/eD, giving errors less than 4%.

Background subtraction biases

The systematic errors originating from the conductance background estimation are very small for the inflection point, compared to other aspects of the correlation function such as variance or half-width. Based on the considerations in Sec. 8.3, the resulting bias errors in τ_CF⁻¹ are negligible: < 3% for the high-temperature data and < 1% for the low-temperature data.

Perturbation by symmetry-breaking disorder

When there are static symmetry-breaking forms of disorder, some (but not all) modes of CF may be suppressed. If the symmetry-breaking rate is comparable to the decoherence rate τ_φ⁻¹, then the correlation function's shape is distorted and Def. (C.4) loses some accuracy (Sec. 8.3.5). The symmetry-breaking rate causes no change if it is much smaller than the decoherence rate; conversely, if the rate is very large then the suppressed modes are too weak to contribute.

Graphene's two valley symmetry breaking rates[34] were measured by the usual WL fitting formula[21] at fields above 1 mT. We extract τ_i⁻¹ = 70 ± 20 ns⁻¹ and τ_*⁻¹ = 4000 ± 200 ns⁻¹, much like seen in other studies (see Ref. [23] and Table 6.1). These rates are very large compared to τ_CF⁻¹ and hence the valley-related modes have little influence. The influence of the τ_i-suppressed mode on τ_CF⁻¹ can be calculated (Sec. 8.3.5) and is less than 10% for the high temperature (1.4 K) data, and less than 4% for the low temperature data. The larger rate τ_*⁻¹ gives even smaller effects that matter only above 10 K.

Graphene's spin-orbit rates are expected to be very low (≲ 1 ns⁻¹) and as yet have not been experimentally determined. If stronger than expected, then spin-orbit interactions would dephase some CF modes and cause some deviations in τ_CF⁻¹. A very large (≳ 5 ns⁻¹) spin-orbit rate would however produce a specific temperature-dependent shift in τ_CF⁻¹ (Fig. 8.7) which is not observed in the data.

In conclusion, τ_CF⁻¹ measures the rate τ_φ⁻¹ with typical accuracy better than 10% for the B∥ = 0 and B∥ = 6 T data. There is however another form of symmetry-breaking disorder that is present when B∥
is applied, which does lead to significant changes in the crossover of τ_CF⁻¹(B_tot); this is the topic of the following section.

C.3 Quasi-2D CF correlations in an in-plane magnetic field

A large in-plane magnetic field induces a Zeeman splitting E_Z = 2μ_B·B_tot of conductance fluctuations, while simultaneously the magnetic defects gain an average polarization.[50] This causes the CF field correlation function f(ΔB) to evolve in a nontrivial manner as B_tot is turned on. What determines the inflection point τ_CF⁻¹ in this case?

Model

To answer this question, we simulate the theoretical correlation function of CF including high-field effects, and apply Def. (C.4) to predict the value of τ_CF⁻¹. This simulation requires us to move beyond the one-parameter correlation of Chapter 7 to the more general two-parameter correlation function of CF (Sec. 8.3)

    F(ΔB, ΔE) = ⟨δG(B⊥, E) δG(B⊥ + ΔB, E + ΔE)⟩,   (C.5)

where δG(B⊥, E) is the fluctuating part of conductance at perpendicular magnetic field B⊥ and energy E. The ordinary one-parameter magnetoconductance correlation, (7.1) in the main text, corresponds to a measurement of f(ΔB) = F(ΔB, 0), i.e. it corresponds to ΔE = 0, since we only compute the correlation between conductances taken at the same V_BG (same E).

In the analysis of CF where the spin degeneracy is broken (by e.g. spin-orbit interactions[97], frozen magnetic impurities[63], polarized impurities[50], or Zeeman splitting[50]), the full F correlation function is written in terms of up to four distinct modes, and each of these modes can be written in terms of the spinless correlation F̄ which is given by the generic CF theory. This spinless correlation function depends on a single dephasing rate τ_φ⁻¹ and temperature T, and is fully specified as F̄[τ_φ⁻¹, T](ΔB, ΔE). In the procedure below we use the quasi-2D spinless F̄ computed by the procedure in Sec. 8.3, since our device is in the quasi-2D regime.
The results can be extended to the quasi-1D case by using the spinless correlation function found in Ref. [36] (with the field-dependence noted in Ref. [98]).

The expressions below are derived in some detail in Appx. D, E. For the case of classical magnetic defects and no spin-orbit interaction[50],

    F(ΔB, ΔE) = 2F̄[τ↑↑⁻¹, T](ΔB, ΔE)
              + F̄[τ↑↓⁻¹, T](ΔB, ΔE + E_Z)
              + F̄[τ↑↓⁻¹, T](ΔB, ΔE − E_Z).   (C.6)

The first term is comprised of two identical modes, each indicating the correlation between two conduction electrons with the same spin (up/up and down/down). The latter two terms express the correlations between electrons with opposite spins (up/down and down/up).

Because of defect polarization in the magnetic field, the rates τ↑↑⁻¹ and τ↑↓⁻¹ differ. The rate τ↑↑⁻¹, indicating the loss of coherence between two electrons which are identical (same spin), though passing through the graphene at different times, is given by[50]

    τ↑↑⁻¹(B_tot) = τ₀⁻¹ + [1 − P(B_tot)²] τ_mag⁻¹.   (C.7)

This may be interpreted as the decoherence rate proper (i.e. the dephasing by the dynamic environment). Here τ₀⁻¹ is the field-independent part of the decoherence rate (from e.g. electron-electron interactions, or non-magnetic dynamic defects) and τ_mag⁻¹ is the rate of collision with polarizable magnetic defects. P(B_tot) is the average polarization of the magnetic defects, due to the total field B_tot.

The correlation between conduction electrons of opposite spin (but equal kinetic energy) is dephased by a larger rate[50]

    τ↑↓⁻¹(B_tot) = τ₀⁻¹ + [1 + P(B_tot)²] τ_mag⁻¹.   (C.8)

Note that τ↑↓⁻¹ is always dephased by magnetic impurities, even once they have been fully polarized (P = 1); this extra dephasing is however not decoherence, since it arises from static defects. Rather, the extra dephasing in τ↑↓⁻¹ simply indicates that spin-up electrons consistently scatter differently than spin-down (this is analogous to the effect of spin-orbit interactions in CF).
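The field dependence of these two rates is simple to tabulate. A minimal sketch, using the spin-½ polarization function adopted later in (C.9); the g-factor and the rates τ₀⁻¹, τ_mag⁻¹ are illustrative inputs, not fitted values.

```python
import math

MU_B_OVER_KB = 0.6717  # Bohr magneton / Boltzmann constant, in K/T

def dephasing_rates(B_tot, T, g=1.4, rate0=5.0, rate_mag=5.0):
    """Rates of (C.7) and (C.8) in ns^-1, with P = tanh(g mu_B B_tot / 2 kB T).
    g, rate0 (tau_0^-1) and rate_mag (tau_mag^-1) are illustrative values."""
    P = math.tanh(g * MU_B_OVER_KB * B_tot / (2 * T))
    rate_same = rate0 + (1 - P**2) * rate_mag   # (C.7), same-spin
    rate_opp  = rate0 + (1 + P**2) * rate_mag   # (C.8), opposite-spin
    return rate_same, rate_opp

# At B_tot = 0 the two rates coincide (P = 0); at high field the defects
# polarize, so the same-spin rate drops to tau_0^-1 while the opposite-spin
# rate saturates at tau_0^-1 + 2 tau_mag^-1.
print(dephasing_rates(0.0, 0.31))  # (10.0, 10.0)
print(dephasing_rates(6.0, 0.31))  # close to (5.0, 15.0)
```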
The Zeeman splitting of conduction electrons is also evident in Eq. (C.6): a shift of the opposite-spin CF modes to energies ΔE = ±E_Z, so these modes appear as side-peaks in the CF energy correlation function. In the theoretical literature these side-peaks are known as m = ±1 diffuson triplets[49, 50]. We have directly observed these side-peaks in graphene in Chapter 5, and the present device shows side-peaks as well, though weakly (Fig. 9.1).

Figure C.1 shows how Eq. (C.6) evolves as the field is increased. Since it is not easy to see the inflection point in F itself, we instead plot its second derivative in field, ∂²F(ΔB, ΔE)/∂ΔB². Again, we remind that f(ΔB) only measures a cross section along ΔE = 0, thus the inflection point B_IP of f(ΔB) corresponds to the point where ∂²F(ΔB, 0)/∂ΔB² = 0. We draw attention to a few important features of Fig. C.1:

- For very low fields we have E_Z = 0 and P = 0 (thus τ↑↑ = τ↑↓). Here F(ΔB, ΔE) = 4F̄[τ↑↑⁻¹, T](ΔB, ΔE) has the spinless correlation function's shape. The inflection point here is (trivially) a good measure of decoherence, as τ_CF⁻¹ = τ↑↑⁻¹ = τ↑↓⁻¹.

- At intermediate fields where E_Z ≲ 8kT, the side-peaks are close enough to influence the inflection point. This causes τ_CF⁻¹ to be influenced by both τ↑↑⁻¹ and τ↑↓⁻¹ for these fields. Figure C.2 shows precisely how τ_CF⁻¹ can deviate from the decoherence rate (τ↑↑⁻¹) over a range of intermediate fields.

- When E_Z is large enough (E_Z ≳ 8kT), the larger τ↑↓⁻¹ does not matter as the side-peaks are too far away. The inflection point coincides with 3ħτ↑↑⁻¹/2eD once again, thus we measure the decoherence by τ_CF⁻¹.

The error τ_CF⁻¹/τ↑↑⁻¹ at intermediate fields is influenced by all parameters T, E_Z, P, τ₀⁻¹
, τ_mag⁻¹, so the only way to find its value is the direct numerical simulation as described above.

Figure C.1: Simulated conductance fluctuation correlations. Second derivative ∂²F(ΔB, ΔE)/∂ΔB² of the simulated CF correlation function (C.6), for a case at 310 mK. The vertical dashed lines indicate the fields 3ħτ↑↑⁻¹/2eD and 3ħτ↑↓⁻¹/2eD. (Top-left): Near-zero-field case, where E_Z = 0 and P = 0. (Top-right): Intermediate field case where E_Z = 2.6kT and P = 71%. (Bottom-left): Intermediate field case where E_Z = 6.5kT and P = 98%. (Bottom-right): High field case, where E_Z = 17kT and P = 100%. [© American Physical Society[4]]

Figure C.2: Comparison of the experimental τ_CF⁻¹ rate with simulation. The experimental rate is extracted from experimental 310 mK data (first cooldown), and compared to the corresponding simulation with g = 1.4 as well as the decoherence rate τ↑↑⁻¹ from the same simulation. [© American Physical Society[4]]

Choice of polarization function

We have chosen to use a polarization function of the form

    P(B_tot) = tanh(gμ_B·B_tot/2k_B·T),   (C.9)

which corresponds to the average magnetization of free spin-½ moments, normalized to P(∞) = 1. One might ask whether this is appropriate, given that the model used above assumes classical magnetic defects and is not necessarily applicable to spin-½ defects.[50] We have compared the classical defect model to the much more involved quantum calculation[50] that includes the effects of inelastic scattering.
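As an aside, the field scales quoted in the Fig. C.1 caption follow directly from E_Z = 2μ_B·B_tot and the polarization function (C.9). A quick check at 310 mK, assuming the defect g-factor of 1.4 from Fig. C.2 (the function names here are mine):

```python
import math

MU_B_OVER_KB = 0.6717  # Bohr magneton / Boltzmann constant, in K/T

def zeeman_over_kT(B_tot, T):
    """E_Z / kT for conduction electrons (g = 2), i.e. E_Z = 2 mu_B B_tot."""
    return 2 * MU_B_OVER_KB * B_tot / T

def defect_polarization(B_tot, T, g=1.4):
    """P(B_tot) from (C.9), assuming the defect g-factor of Fig. C.2."""
    return math.tanh(g * MU_B_OVER_KB * B_tot / (2 * T))

for B in (0.6, 1.5, 4.0):
    print(B, round(zeeman_over_kT(B, 0.31), 1),
          round(100 * defect_polarization(B, 0.31)))
# close to the values quoted in the Fig. C.1 caption
```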
At least for the central mode (a quantum calculation for the side-peaks is not available at this time[50]), the obtained τ_CF⁻¹ is nearly the same for the quantum and classical models, even for the spin-½ case, provided that the P = tanh(gμ_B·B_tot/2k_B·T) polarization function is used. (Such a correspondence does not extend to WL.)

Parameters used in Fig. 7.4

In order to generate the simulated curves of Fig. 7.4, we chose τ_mag⁻¹ = 5 ns⁻¹ for all temperatures, based on the typical difference between low and high field. The value of τ₀⁻¹, which contains all non-magnetic decoherence mechanisms such as electron-electron interactions, is set by hand in each case to match the high-field τ_CF⁻¹. The following table shows the values that were used in the simulations to produce the curves drawn in Fig. 7.4:

    T (K)    τ_mag⁻¹ (ns⁻¹)    τ₀⁻¹ (ns⁻¹)
    0.08     5                 4.8
    0.2      5                 6.4
    0.5      5                 9.2
    1        5                 14.5

C.4 Spin-orbit interactions

Besides magnetic defects, spin-orbit interactions can also cause spin relaxation in graphene. It is well known that magnetic defects and spin-orbit interactions have very different effects on coherent properties.[16]

This section describes in detail how spin-orbit interactions would affect weak localization. We first establish that spin-orbit interactions alone (without magnetic defects) cannot account for the observed B∥-dependence of WL. (In any case, the in-plane field dependence of CF requires the existence of magnetic defects, but here we show that the WL data proves the same point.) Next, taking the presence of magnetic defects as a given, we find an upper bound on the spin relaxation rates due to spin-orbit interactions. We allow for two types of spin-orbit interaction: those coupling to the in-plane components of spin (scattering rates τ_SOx⁻¹, τ_SOy⁻¹), or those coupling to the out-of-plane component of spin (rate τ_SOz⁻¹). Note that, by symmetry, τ_SOx⁻¹ = τ_SOy⁻¹.

With B∥ = 0, the WL magnetoconductance at small B⊥
takes theform[16, 33]G(B?)?G(0) =FWL([??1TRS + 23??1mag + ??1SOx + ??1SOy]?B)+ FWL([??1TRS + 23??1mag + ??1SOx + ??1SOz]?B)+ FWL([??1TRS + 23??1mag + ??1SOy + ??1SOz]?B)? FWL([??1TRS + 2??1mag]?B),(C.10)(see also Appx. E) where in 2D systems we define ??1B = 4eD|B?|/~ andFWL(z) = WLe22pih [digamma(12 + z)? ln(z)].Note that Eq. (C.10) omits twelve other terms related to valley dephasing[33].Due to the strong intervalley scattering, these other terms do not signifi-cantly modify the low-field conductance curve.From Eq. (C.10) it can be seen that the temperature-dependence of WLalone (at B? = 0) does not rule out spin-orbit interactions. Purely out-of-plane spin-orbit interactions are able to generate coherence saturationby mimicking inelastic scattering (??1TRS). In-plane spin-orbit interactionsalso give the appearance of saturation until ??1TRS falls below 1.5??1SOx; pastthis point an antilocalization peak at B? = 0 appears (we do not observeantilocalization).154C.4. Spin-orbit interactions86420tWL-1 (ns-1)10.80.60.40.20B|| (T)Figure C.3: Theoretical dependence of ??1WL on B? with various forms of spindisorder. We take ??1TRS(0) = 0.8 ns?1 (appropriate for electron-electroninteractions) except where noted. The three upper solid curves show thecase of polarizable, g = 1 magnetic defects (with ??1mag = 5 ns?1), combinedwith unpolarizable magnetic defects (additional ??1mag = 3 ns?1 independentof field), fast non-magnetic defects (??1TRS(0) = 2.6 ns?1 including electron-electron interactions) , or in-plane spin-orbit (??1SOx = ??1SOy = 1.3 ns?1); thelower solid curve shows out-of-plane spin-orbit (??1SOz = 3.3 ns?1). Dashedcurves show the case of spin-orbit interactions without magnetic defects(lower: ??1SOz = 4.0 ns?1, upper: ??1SOx = 1.1 ns?1, ??1TRS(0) = 1.7 ns?1). Therates stated above have been chosen such that ??1WL(0) = 6.2 ns?1 for eachcurve. Markers (?) show 80 mK data, repeated from Fig. 7.7. 
[ c?AmericanPhysical Society[4]]When an in-plane field is applied (without loss of generality, take it tobe applied along x?), Zeeman splitting suppresses the latter two terms ofEq. (C.10), giving[33, 50]G(B?)?G(0) =FWL([??1TRS + 23??1mag + ??1SOx + ??1SOy]?B)+ FWL([??1TRS + 23??1mag + ??1SOx + ??1SOz]?B).(C.11)The crossover from Eq. (C.10) to Eq. (C.11) occurs at a small field, of orderB? ? ~??1s /?B (here, ??1s is the total spin relaxation rate). At higher fields,we see the magnetic defects polarize[50] (effectively ??1mag ? 0 in Eq. (C.11))and the onset of ripple dephasing (??1TRS(B?) = ??1TRS(0) + ?B2? , see Ch. 6).Figure C.3 plots the results of extracting the curvature-defined rate ??1WL(extracted in the same way as in (7.2), for consistency) from theoretical mag-155C.4. Spin-orbit interactionsnetoconductance curves based on Eqs. (C.10), (C.11). The two crossovers(Zeeman splitting, defect polarization) are modelled approximately,34 sincethe detailed computations are not simple to perform.[33, 50] It is immedi-ately evident that, without magnetic defects, the behaviour of ??1WL wouldnot match our data (Fig. C.3 dashed curves).We now assume the presence of magnetic defects (scattering rate ??1mag =5 ns?1 from CF). These magnetic defects (combined with the known electron-electron interaction dephasing rate) are however not enough to quantita-tively match the data, so it is necessary to invoke an additional dephasingmechanism. We consider four different possibilities, the first three beingnearly indistinguishable in Fig. C.3:1. Non-magnetic dephasing from degenerate disorder with dynamics rapidenough to break time reversal. This dephasing mechanism provides acontribution to ??1TRS that is independent of temperature and field.2. Additional magnetic dephasing from unpolarizable magnetic defects,providing a contribution to ??1mag that is insensitive to field.3. In-plane spin-orbit interactions (??1SOx, ??1SOy).4. 
Out-of-plane spin-orbit interactions (??1SOz).Option 4, out of plane spin-orbit interaction (blue solid line in Fig. C.3),cannot match the observed lineshape at a quantitative level. The interactionstrength chosen for Fig. C.3, ??1SOz = 3.3 ns?1, provided a better fit to thedata than other interaction strengths, but even in this case the curve fallsconsistently outside the error bars.Options 1-3 provide a reasonable fit to the data, but none of these mod-els are particularly satisfactory on physical grounds: spin-orbit interactionsare expected to be very weak in graphene[12]; non-magnetic dephasing asso-ciated with very fast defect dynamics is unprecedented; unpolarizable mag-netic defects could be nuclear spins, but based on the hyperfine interactionof graphene and concentration of 13C one expects hyperfine dephasing ratesbetween micro- and milliseconds[13].34We model the Zeeman splitting by adding a dephasing term (g?BB?/~)2?s to the lattertwo modes in Eq. (C.10). The magnetic defects are modelled by taking a B?-dependentvalue of ??1mag, as ??1mag(B?) = [x/ sinh x]??1mag(0), where x = g?BB?/(kBT ). The value g = 1is chosen to provide a reasonable match to data.156Appendix DComputation of conductancefluctuation correlationsThe goal of this appendix is to explain, in detail, how to compute generalconductance fluctuation correlations of the form?GA?GB = GAGB ?GAGB. (D.1)Here, GA is the conductance measured in a given device configuration calledA, and GB is the conductance in device configuration B. The phrase ?deviceconfiguration? here is meant to encompass the entire set of parameters thatinfluence that conductance measurement. Thus, for label A, we will referto temperature TA, internal chemical potential ?A (controlled by doping),magnetic field BA, time of measurement tA, and so on. Likewise, the sameset of parameters are defined for B.The problem of computing ?GA?GB systematically breaks down into thefollowing steps:1. 
Based on the differences and similarities between A and B, discover the set of dephasing modes that arise due to spin- and pseudospin-coupled disorder (Appx. E). The modes will usually depend on every aspect of the A and B configurations. This step mostly involves careful thinking about the dephasing mechanisms, followed by some basic matrix algebra.

2. For each mode, calculate the zero-temperature correlation function F₀ (Diffuson mode) or C₀ (Cooperon mode). The form of this function can be found exactly for the quasi-2D or quasi-1D geometrical limits. This function needs to be evaluated at many different values of energy, for the next step.

3. Add together the contributions of each mode, then apply thermal smearing by integrating the result over energy, weighted by Fermi functions.

4. If the system has some dynamics on the timescale of the measurements, then we should take into account time-averaging. As explained in Sec. D.4 this can usually be represented by a larger effective dephasing rate.

The third step (thermal smearing) requires numerical integration. As a result, although the first step (discovering dephasing modes) can be done by hand, the latter steps are best performed on a computer.

D.1 The effect of thermal smearing

At finite temperature, electrical current is carried by electronic states over a range of energies, roughly within k_B·T of the Fermi level E_F. Quantitatively, the measured conductance G at chemical potential E_F is determined by a weighted average of conductances at different energies (2.32),

    G = ∫ dE f′_T(E − E_F) G₀(E),   (D.2)

where f′_T(ΔE) = (1/4k_B·T) sech²(ΔE/2k_B·T) is the derivative of the Fermi function. The unsmeared conductance G₀(E) is the conductance that would be measured at energy E, if we were hypothetically able to inject electrons at that exact energy.
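A direct numerical transcription of (D.2), with a hypothetical G₀(E) supplied by the caller, might look like the following sketch. The cutoff of ±20 k_B·T is my own choice, justified by the exponential decay of the kernel.

```python
import math

def fermi_deriv(dE, kT):
    """f'_T(dE) = sech^2(dE / 2kT) / (4kT), the smearing kernel of (D.2)."""
    return 1.0 / (4 * kT * math.cosh(dE / (2 * kT)) ** 2)

def smear(G0, E_F, kT, half_width=20.0, n=4001):
    """Numerically evaluate G = integral dE f'_T(E - E_F) G0(E).
    The kernel decays exponentially, so integrating over +-20 kT suffices."""
    dE = 2 * half_width * kT / (n - 1)
    return sum(fermi_deriv(i * dE - half_width * kT, kT)
               * G0(E_F + i * dE - half_width * kT)
               for i in range(n)) * dE

# The kernel is normalized and even: smearing a constant G0 returns the same
# constant, and smearing a linear G0 returns its value at E_F.
print(smear(lambda E: 1.0, E_F=0.3, kT=0.05))
```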
It is not actually possible to measure G₀(E) directly.

As we will see, mesoscopic theory is able to provide exactly the two-energy correlation function of unsmeared conductances, ⟨δG₀^A(E) δG₀^B(E′)⟩, where δG₀^A(E) and δG₀^B(E′) are the unsmeared conductance fluctuations in device configurations A and B respectively. To translate this into an experimentally accessible quantity, we need to compute the correlation of smeared conductance fluctuations δG^A and δG^B:

    ⟨δG^A δG^B⟩ = ∫ dE ∫ dE′ f′_{T_A}(E − E_F^A) f′_{T_B}(E′ − E_F^B) ⟨δG₀^A(E) δG₀^B(E′)⟩.   (D.3)

Equation (D.3) is an exact consequence of the relation (D.2) and can be used to take into account thermal smearing for any conductance correlation. Its usefulness is limited, however, by the fact that it is time-consuming to evaluate numerically over a large set of configurations.

It is also worth noting one complication regarding screening. It is assumed in (D.3) that the location of E_F, relative to the energy eigenstates, can be changed without also changing the nature of the energy eigenstates. This is not exactly true: although we can change the relative positions of E_F and the band structure, doing so involves significantly changing the number of electrons in the graphene, and that will change the disorder landscape due to screening (making G₀(E) change in shape as doping changes). We nevertheless assume that we can change the quantity μ = E_F − E₀ by doping (see Sec. 2.3.2) without changing the nature of the energy eigenstates (i.e., neglecting screening). Based on the visibility of the side-peaks in Chapter 5, this is not an entirely incorrect assumption, at least for small changes in energy.

We can often simplify (D.3): a common special case is when the unsmeared correlation only depends on the energy difference ε = E − E′, i.e.,

    ⟨δG₀^A(E) δG₀^B(E′)⟩ = F₀^AB(ε),   (D.4)

and also T_A = T_B. In this case we can simplify (D.3) to a one-dimensional convolution over that energy difference,[52]

    ⟨δG^A δG^B⟩ = ∫ dε (1/k_B·T) Θ(ε/k_B·T) F₀^AB(μ_A − μ_B − ε),   (D.5)

where Θ(x) = [(x/2) coth(x/2) − 1] / [2 sinh²(x/2)]. Moreover, if F₀^AB(ε) does not itself depend on μ_A and μ_B, then we can use (D.5) combined with the Fourier convolution theorem [Footnote 35] to efficiently compute ⟨δG^A δG^B⟩ over a wide range of values of μ_A − μ_B, all in one step.

[Footnote 35: The transform ∫ dε (1/a) Θ(ε/a) e^(iεt) = [πat/sinh(πat)]² allows thermal smearing to be easily applied in Fourier space.]

Note that inelastic dephasing processes (e.g., electron-electron interactions, or quantum magnetic defects), strictly speaking, do not satisfy (D.4), since the dephasing rates vary with energy. It is possible however in some cases to find an effective constant dephasing rate allowing the use of (D.5).

D.2 Decomposition into dephasing modes

By separating into dephasing modes (Appx. E), we arrive at a set of rates τ_Dn⁻¹ (Diffuson) and τ_Cn⁻¹ (Cooperon), as well as a set of associated energy shifts ΔE_Dn and ΔE_Cn. In terms of these modes, the total correlation function
?G0(E,B) ?G0(E + ?E,B + ?B)= C e4h24D2L4?n[ 1|?n|2+ 12Re1?2n], (D.7)where ?n are the eigenvalues of the diffusion equation[D(i?? e~A(r))2 + ??1? ? i?E~]Q(r) = ?nQ(r). (D.8)with Dirichlet and Neumann boundary conditions at the contact and vacuumedges, respectively. The vector potential here represents the field difference,that is, ? ? A = ?Bz?. The spectrum ?n (and therefore, F0) takes on adifferent form depending on the geometrical regime.D.3.1 Quasi-2D caseThis subsection is partly c?American Physical Society. Adapted from Ref. [3].160D.3. Zero-temperature correlations (single-mode)The quasi-2D limit is defined by ?D??1? L,W , i.e., when the dephas-ing length is smaller than either device dimension. The primary assumptionof the quasi-2D case is that we can ignore the boundary conditions, in whichcase the eigenfunctions Q(r) of (D.8) are Landau levels. The ?cyclotron fre-quency? for these Landau levels is 2De|?B|/h, giving the series of eigenvalues?n = 2De|?B|~(n+ 12)+ ??1? ? i?E~ , (D.9)for non-negative integers n, with the degeneracy of e|?B|LW/h for eachlevel.Plugging these ?n into (D.7) and rearranging, we arrive at an expressionwhere L and W appear only in the prefactor of F0. Moreover the functionaldependence on three parameters ??, ?E, ?B can be reduced to two scale-independent variables ? and ?,? ? ?E ? ??/~, ? ? |?B| ? 2eD??/~.In terms of these scale-independent variables, F0 has a functional formF0(??1? , ?E, ?B) = e4h2WD??L3 F0(?, ?).where we have introduced the scale-independent correlation function F0(?, ?).The sum in (D.7) may be solved exactly in terms of the complex digammafunction ?(z) and its derivative, ??(z) (see Appendix F). For F0 this solutionis written compactly asF0(?, ?) = 1pi? Im[?(12 +1 + i??)]+ 12pi?Re[??(12 +1 + i??)]. (D.10)When either ? or ? are zero, (D.10) is undefined. Taking limits, we obtainone-parameter correlationsF0(?, 0) = tan?1 ?pi? +12pi (1 + ?2)?1, (D.11)F0(0, ?) = 32pi???(12 +1?). 
(D.12)and the variance F0(0, 0) = 32pi .The Landau-level approach to quasi-2D coherence, as used here, is well-known.[100] Surprisingly, it seems that until now nobody had obtained theanalytic result in (D.10), although the special cases for ?B = 0 (D.11) and?E = 0 (D.12) had been derived.[52, 96, 119] The full form (D.10) is veryuseful as it allows efficient numerical computation of the smeared correlationfunction at non-zero ?B.161D.4. Time-correlations and measurement averagingD.3.2 Quasi-1D caseThe energy-dependent CF correlation function for the quasi-1D case canbe found in Ref. [36]. The effect of magnetic fields on the quasi-1D CFcorrelation function is simple to incorporate into the energy-dependent re-sult: much like weak localization (Sec. 3.2.3), the perpendicular field affectsquasi-1D correlations by effectively adding a rate ??1B ? B2 to ??1? [98].D.4 Time-correlations and measurementaveragingThe conductance of a mesoscopic device can fluctuate in time due to themotion of impurities. For example, there may be flickering charges nearthe graphene (perhaps charges trapped in the insulating substrate that areable to switch between different positions). Or, there may be paramagneticmoments which are free to orient in any direction. As time goes on, moreand more of these dynamic impurities will change state as the environmentcontinuously ?forgets? parts of the disorder configuration. As shown below,these motions effectively appear as a larger decoherence rate in conductancefluctuations.Given the instantaneous conductance ?G(t) at time t, we define a corre-lation function in time,36F (?t) = ?G(t) ? ?G(t+ ?t).The value of this function can be interpreted as F (?t) = F [??1? (?t)], whereon the right hand side we have the variance which depends on the decoher-ence rate. The reason that we can make this interpretation is that we cancharacterize the environmental dynamics by a time-dependent decoherencerate ??1? (?t). 
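As a quick numerical check of Sec. D.3.1: the closed form (D.10) should reproduce its limits (D.11), (D.12) and the variance 3/2π. A self-contained sketch (the function names are mine; the complex digamma is built from the standard recurrence and asymptotic series, and ψ′ is taken by central difference):

```python
import cmath
import math

def cdigamma(z):
    """Complex digamma via upward recurrence plus the asymptotic series."""
    s = 0j
    while z.real < 10:          # psi(z) = psi(z + 1) - 1/z
        s -= 1 / z
        z += 1
    w = 1 / (z * z)
    return (s + cmath.log(z) - 1 / (2 * z)
            - w * (1/12 - w * (1/120 - w * (1/252 - w * (1/240 - w / 132)))))

def F0_2d(eps, beta, h=1e-4):
    """Scale-independent quasi-2D CF correlation, Eq. (D.10).
    psi'(z) is approximated by a central difference of the digamma."""
    z = 0.5 + (1 + 1j * eps) / beta
    psi1 = (cdigamma(z + h) - cdigamma(z - h)) / (2 * h)
    return (cdigamma(z).imag / (math.pi * eps)
            + psi1.real / (2 * math.pi * beta))

# Small-argument limits recover (D.11), (D.12) and the variance 3/(2 pi):
print(F0_2d(1e-6, 1e-6))   # close to 3/(2*pi)
print(F0_2d(1.0, 1e-6))    # close to atan(1)/pi + 1/(4*pi), per (D.11)
```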
This rate represents the portion of the scattering rate (τ_el⁻¹) that typically changes after time δt. In practice, the disorder changes on a wide variety of time scales, so τ_φ⁻¹(δt) tends to depend smoothly on log(|δt|).

Any experimental measurement of conductance necessarily involves averaging a current, or a voltage, over a certain period of time, T. As a result the experimental conductance δG̃_T(t) measured at time t is an average of the instantaneous conductance over the preceding time period,

δG̃_T(t) = (1/T) ∫_{t−T}^{t} dt′ δG(t′)   (D.13)

³⁶ F(δt) has no explicit dependence on the time offset t, since the equilibrium system has no knowledge of the absolute time.

To compare to experiments, we should analyze the statistics of G̃(t) (not G(t)). The variance can be computed in a straightforward manner:

⟨[δG̃_T(t)]²⟩ = ⟨δG̃_T(t) δG̃_T(t)⟩
  = (1/T²) ∫_{t−T}^{t} dt′ ∫_{t−T}^{t} dt″ ⟨δG(t′) δG(t″)⟩
  = (2/T) ∫₀^T d(δt) (1 − δt/T) F(δt)   (D.14)

The value of (D.14) is determined mainly by times δt of order T. Given that F(δt) tends to depend logarithmically on δt, then to a very good degree of accuracy we can say ⟨[δG̃_T(t)]²⟩ ≈ F(T). The precise details of τ_φ⁻¹(t) are not important.

In other words, the time-averaged conductance fluctuations behave just like the instantaneous conductance fluctuations, though with a slightly higher value of the decoherence rate [the value being ≈ τ_φ⁻¹(T) instead of τ_φ⁻¹(0)]. This conclusion can be easily generalized to correlation functions of time-averaged conductances at different times, or different experimental parameters. In general the result has an effective decoherence rate of τ_φ⁻¹(T), where T is either the averaging time, or the time between measurements (whichever is larger).[52]

Appendix E
Spin/pseudospin dephasing modes

When we want to compute dephasing rates of Diffusons or Cooperons in the presence of valley-coupled and/or spin-coupled disorder, we find it necessary to compare the relative phase between two electrons (called A and B) that are experiencing disorderly (pseudo-)spin rotations. Exactly what constitutes the "relative phase" here is not obvious, as it depends on the internal states of the electrons involved.

The end result can be seen in Sec. 3.4 and Appx. D: the weak localization correction and the conductance fluctuation correlations separate out into multiple modes with distinct dephasing rates (and possibly energy shifts). This appendix gives a simplified but quantitative explanation of why the mode separation occurs. We adopt the Diffuson/Cooperon approximation, and we make the simplification of forcing the particles A and B to move along the same path (although the path itself is random). Removing the motional degree of freedom allows us to focus on general spin dephasing mechanisms, bringing together the various considerations seen in Refs. [16, 33, 50, 52].

Technically speaking, the motional degree of freedom can be modelled by a motional dephasing rate D|q|², where ℏq is the difference in momenta for the two paths (Diffuson), or the sum of momenta (Cooperon)[52]. Except for q ≈ 0 this motional dephasing rate is very large and dominates any other dephasing mechanism. It is therefore only necessary to compute dephasing rates for q ≈ 0, where the two paths are aligned and have equally large momenta.

E.1 Internal-state dephasing (general)

We start by considering the evolution of each particle separately. Particle A starts out in an internal state (spin, pseudospin, etc.) |ψ_Ai⟩. As it travels along a given path, its internal state is randomly perturbed by the environment, and so at some later time t we find it in the state U_A|ψ_Ai⟩, where
U_A is a unitary matrix describing the full series of random events on that path (conditional, of course, on that path being taken). At the end of the path, we project the resulting state onto a final candidate state, |ψ_Af⟩. This projection gives us an amplitude A_A defined as

A_A = ⟨ψ_Af| U_A |ψ_Ai⟩,   (E.1)

and similarly for the second particle we define the amplitude,

A_B = ⟨ψ_Bf| U_B |ψ_Bi⟩.   (E.2)

For Diffusons, the path of particle B is identical to A although it may see a slightly different environment, if particle B passes along this path at an earlier or later time than particle A. For Cooperons, the path of B will be reversed from A. To quantify the relative phase and the dephasing, we examine the conjugate-product quantity A_A A_B*, averaged over disorder. This is analogous to what we do for the scalar case; defined in this way, we cancel out any phase contributions shared by A and B. Dephasing is seen as a decay in the value of ⟨A_A A_B*⟩ over time.

To proceed we will distinguish the Diffuson and Cooperon cases. We can write the product in terms of the AB common space in two different ways, depending on how we look at the conjugate in A_B*. Seeing it as a Hermitian conjugate, we have

A_A A_B* = ⟨ψ_Af| U_A |ψ_Ai⟩ ⟨ψ_Bi| U_B† |ψ_Bf⟩   (E.3)
         = ⟨ψ_Af ψ_Bi| U_A U_B† |ψ_Ai ψ_Bf⟩,   (E.4)

in the A ⊗ B product space, where |ψ_Ai ψ_Bf⟩ is shorthand for |ψ_Ai⟩ ⊗ |ψ_Bf⟩. Note how the initial and final states for B are reversed compared to A; this will come in useful for the Cooperon. Alternatively, we can simply distribute the complex conjugate, giving a result in the A ⊗ B* product space, useful for the Diffuson,

A_A A_B* = ⟨ψ_Af| U_A |ψ_Ai⟩ ⟨ψ_Bf|* U_B* |ψ_Bi⟩*   (E.5)
         = ⟨ψ_Af ψ_Bf*| U_A U_B* |ψ_Ai ψ_Bi*⟩.   (E.6)

Thus, to compute the average product ⟨A_A A_B*⟩ we need either ⟨U_A U_B†⟩ or ⟨U_A U_B*⟩, whichever is easier to obtain.

So, how does dephasing appear? Dephasing causes the averaged matrix ⟨U_A U_B†⟩ (or ⟨U_A U_B*⟩) to lose unitarity as time increases, with one or more of its eigenvalues decreasing in magnitude; by comparison, note that the unaveraged matrix must be unitary. In the following sections, we examine how to compute these dephasing matrices when the dephasing is caused by short-range correlated disorder.

E.1.1 Internal-state dephasing of Diffusons

For Diffusons, the spinors A and B experience a nearly identical chain of events, in the same order. We break U_A and U_B into N segments which each take a short time Δt = t/N:

U_A = U_A^(N) ⋯ U_A^(3) U_A^(2) U_A^(1),  and  U_B = U_B^(N) ⋯ U_B^(3) U_B^(2) U_B^(1).   (E.7)

Using commutativity of A and B operators, the conjugate product can be reordered as

U_A U_B* = U_A^(N) U_B^(N)* ⋯ U_A^(3) U_B^(3)* U_A^(2) U_B^(2)* U_A^(1) U_B^(1)*.   (E.8)

Now, we make two crucial assumptions. First, we assume that the randomness is uncorrelated between different times, so we have

⟨U_A U_B*⟩ = ⟨U_A^(N) U_B^(N)*⟩ ⋯ ⟨U_A^(3) U_B^(3)*⟩ ⟨U_A^(2) U_B^(2)*⟩ ⟨U_A^(1) U_B^(1)*⟩.   (E.9)

In the limit of very short times we can write

⟨U_A^(n) U_B^(n)*⟩ ≈ 1 − R^(n) Δt.   (E.10)

Moreover, if we assume that there are no preferred times, then R^(n) is the same for all n and

⟨U_A U_B*⟩ = (1 − Rt/N)^N → exp(−Rt).   (E.11)

E.1.2 Internal-state dephasing of Cooperons

For Cooperons, the chain of events is reversed between A and B. We again break U_A and U_B into N segments which each take a short time t/N, however we reverse the labelling of the B events to reflect the reversed order of events.

U_A = U_A^(N) ⋯ U_A^(3) U_A^(2) U_A^(1),  and  U_B = U_B^(1) ⋯ U_B^(N−2) U_B^(N−1) U_B^(N).   (E.12)

This allows us to write the Hermitian-conjugate product as

U_A U_B† = U_A^(N) U_B^(N)† ⋯ U_A^(3) U_B^(3)† U_A^(2) U_B^(2)† U_A^(1) U_B^(1)†.   (E.13)

It is apparent here why we have chosen to reverse the order of labelling in B's events.
It allows us to place the U_A^(n) event adjacent to the U_B^(n) event, concomitant with the fact that these events are physically located in the same place.

We make similar assumptions as in the Diffuson case. First, we assume that the randomness is uncorrelated between different locations, however we allow the A and B events sharing a label (i.e., at the same place) to be correlated. Thus,

⟨U_A U_B†⟩ = ⟨U_A^(N) U_B^(N)†⟩ ⋯ ⟨U_A^(3) U_B^(3)†⟩ ⟨U_A^(2) U_B^(2)†⟩ ⟨U_A^(1) U_B^(1)†⟩,   (E.14)

and again we write

⟨U_A^(n) U_B^(n)†⟩ ≈ 1 − R^(n) Δt.   (E.15)

If we assume that there are no preferred times, then R^(n) = R is the same for all n:

⟨U_A U_B†⟩ = (1 − Rt/N)^N → exp(−Rt).   (E.16)

E.1.3 Dephasing and coherent precession

For both Diffuson and Cooperon, we have shown that the dephasing process can be represented by a matrix called R. Each eigenmode of R represents a separate dephasing mode. For each mode, the value of exp(−Rt) decays as exp(−λt) where λ is the eigenvalue. The real part τ_φ⁻¹ = Re λ gives the dephasing rate: the decay of coherence in that mode. The imaginary part ω = Im λ gives the precession rate, indicating a difference ΔE = ℏω in energy.

E.2 Spin dephasing (low field)

E.2.1 Modelling random spin rotations

Following the general introduction above, we look at some specific examples. The simplest type of dephasing to analyze is a random spin rotation event. This could occur due to spin-orbit scattering near a defect with high electric fields (such as a heavy adatom on graphene), or from a magnetic defect.

We can write down the random rotation event as U_evA = exp(i v_A · σ_A) for particle A, where σ_A = σ_Ax x̂ + σ_Ay ŷ + σ_Az ẑ is the set of spin Pauli matrices for particle A, and the vector v_A represents the axis and strength of the rotation (note that |v_A| = π corresponds to a full rotation). For particle B we have likewise U_evB = exp(i v_B · σ_B).

In the limit of weak rotations (|v_A|, |v_B| ≪ 1), we can Taylor-expand the exponentials to second order in v_A and v_B,

U_evA U_evB* = exp(i v_A · σ_A) exp(−i v_B · σ_B*)
  ≈ 1 − (v_A² + v_B²)/2 + (v_A · σ_A)(v_B · σ_B*) + i(v_A · σ_A − v_B · σ_B*)   (E.17)

for the Diffuson, and the analogue (σ_B* → σ_B) for the Cooperon.

Supposing that these random spin rotation events occur at an average rate of τ_ev⁻¹, we will have

R ≈ τ_ev⁻¹ [1 − ⟨U_evA U_evB*⟩]

for the Diffuson and the analogue (σ_B* → σ_B) for the Cooperon. Note that τ_ev⁻¹ is not the dephasing rate.

E.2.2 Spin-orbit interaction

The key properties of spin-orbit interactions are that they are a static form of disorder and they respect time reversal symmetry: repeated traversals of the same path will encounter the same event (same v), but when the path (momentum) is reversed, v changes sign. The two rotations will be the same (v_B = v_A) in the Diffuson case, and be the opposite (v_B = −v_A) in the Cooperon case due to the reversed path. The vector itself will of course be random, since it depends on the randomly-oriented path directions.

We take the situation examined by Hikami[16], considering a weak spin-orbit interaction that is allowed to have out-of-plane anisotropy. This is appropriate in general for two-dimensional electron gases like graphene, which are isotropic in the plane. In this case it is assumed that v_x² = v_y², and we allow v_z² to assume a separate value, where ẑ is the out-of-plane direction. Spin-orbit interactions are unpolarized in general so ⟨v⟩ = 0. For weak rotations (E.17),

⟨U_soA U_soB*⟩ ≈ 1 − v_x²(2 − σ_Ax σ_Bx* − σ_Ay σ_By*) − v_z²(1 − σ_Az σ_Bz*),   (E.18)
⟨U_soA U_soB†⟩ ≈ 1 − v_x²(2 + σ_Ax σ_Bx + σ_Ay σ_By) − v_z²(1 + σ_Az σ_Bz),   (E.19)

for the Diffuson and Cooperon, respectively. Note the presence of the sign flip in the Cooperon case, due to the reversal symmetry of the spin-orbit interaction. Table E.1 shows the corresponding dephasing rates for the weak

(a) Diffuson case:
  Eigenvalue of R                Eigenmodes
  0                              (1/√2)(|↑↑⟩ + |↓↓⟩)
  4 v_x² τ_ev⁻¹                  (1/√2)(|↑↑⟩ − |↓↓⟩)
  [2 v_x² + 2 v_z²] τ_ev⁻¹       |↑↓⟩, |↓↑⟩

(b) Cooperon case:
  Eigenvalue of R                Eigenmodes
  0                              (1/√2)(|↑↓⟩ − |↓↑⟩)
  4 v_x² τ_ev⁻¹                  (1/√2)(|↑↓⟩ + |↓↑⟩)
  [2 v_x² + 2 v_z²] τ_ev⁻¹       |↑↑⟩, |↓↓⟩
(b) Cooperon case (cf. Ref.
[16]).

Table E.1: Dephasing rates and eigenmodes due to spin-orbit interaction, according to (E.18) and (E.19). The states |↑⟩ and |↓⟩ refer to the positive and negative eigenstates of σ_z.

spin-orbit interactions. Note that both the Diffuson and Cooperon have the same dephasing rates, although the eigenmodes are different. Each has an eigenmode that is not dephased by spin-orbit rotations.

More generally, the spin-orbit rotation events may be strong (meaning that |v_A| is of order unity or greater), in which case we must average the U_soA U_soB* over the possible values of v_A. The identity exp(i v·σ) = cos|v| + i v̂·σ sin|v| can come in useful in this case. In the extreme, if the spin-orbit interaction has a random but strong magnitude, then it causes complete spin rotation at each scattering event. We then have ⟨U_soA U_soB*⟩ = (1/6)(3 + σ_A·σ_B*) and ⟨U_soA U_soB†⟩ = (1/6)(3 − σ_A·σ_B). This induces a dephasing rate of (2/3)τ_ev⁻¹ for the three susceptible modes listed in Table E.1, and no dephasing for the singlet mode (cf. Ref. [52]).

E.2.3 Other spin-orbit mechanisms

In the above discussion we modelled spin-orbit scattering as occurring in random pointlike events. For many systems, however, the spin-orbit relaxation actually occurs because of a consistent spin-orbit term in the Hamiltonian. As the particle travels in a straight line, its spin precesses around a certain spin-orbit axis with angular frequency ω. The actual relaxation occurs because of ordinary scattering, which changes the momentum direction (and thereby changes the spin-orbit axis).

Roughly speaking, we can represent this relaxation as random rotations (like described above) but with an event rate set by momentum scattering, τ_ev⁻¹ ≈ τ_tr⁻¹. The spin rotation will be given by v² ≈ (τ_tr ω)².
This gives the famous Dyakonov-Perel spin relaxation rate,

τ_so⁻¹ ∼ { τ_tr ω²   for v² ≪ 1
        { τ_tr⁻¹    for v² ≳ 1   (E.20)

E.2.4 Unpolarized magnetic defects

Magnetic defects at first glance are similar to spin-orbit interactions, since they induce spin relaxation. The way they cause dephasing, however, is a little different. Like spin-orbit interactions they induce rotations of the form U_magA = exp(i v_A·σ_A) and U_magB = exp(i v_B·σ_B). In contrast, the vectors v_A and v_B may be uncorrelated in the case of magnetic defects. Moreover, the spin rotation from a magnetic defect does not reverse sign under path reversal, and so they break time reversal symmetry.

In this case we simplify by assuming that the defects are unpolarized and randomly oriented, so that ⟨v_Ai²⟩ = ⟨v_Bi²⟩ = (1/3)v² for i = x, y, z, and ⟨v_A⟩ = ⟨v_B⟩ = 0. Magnetic defects can change over time, which adds a new complication: although A and B see the same magnetic defect, they may arrive at different times and see a different orientation for that defect. We model this possibility of change by setting ⟨v_Ai v_Bj⟩ = (1/3)C v² δ_ij for some parameter C ∈ [0, 1] representing the degree of correlation. For the case C = 1, the magnetic defects are oriented identically for A and B. For C = 0, the defects have a completely independent orientation. Typically for conductance fluctuations the C = 0 case applies, whereas C = 1 is the usual case for weak localization.

Under these assumptions we have

⟨U_magA U_magB*⟩ ≈ 1 − v²(1 − (1/3)C σ_A·σ_B*)   (E.21)
⟨U_magA U_magB†⟩ ≈ 1 − v²(1 − (1/3)C σ_A·σ_B).   (E.22)

The corresponding dephasing rates are shown in Table E.2. It is interesting to compare these results to the spin-orbit case of the preceding section. For C = 0, the magnetic defects induce a constant dephasing rate for all modes. For the Diffuson at C = 1, the result appears identical to the spin-orbit case. For the Cooperon, however, we see a striking difference: none of the modes are immune to unpolarized magnetic defects, regardless of C.
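The C = 0 statement, that uncorrelated defect orientations dephase every mode at the same rate, can be checked with a small Monte Carlo (an illustrative sketch of my own, not thesis code): build U_A and U_B from independent random rotations of fixed strength |v| = θ and average the amplitude product of Sec. E.1. Each event should then suppress the averaged product by cos²θ ≈ e^(−θ²), i.e. a per-event dephasing of v² for every mode:

```python
import math, random

def rotate(state, axis, theta):
    """Apply exp(i*theta*(n.sigma)) to a 2-spinor: cos(theta)*psi
    + i*sin(theta)*(n.sigma)*psi, for a unit axis n = (nx, ny, nz)."""
    a, b = state
    nx, ny, nz = axis
    sa = nz * a + (nx - 1j * ny) * b      # first component of (n.sigma)psi
    sb = (nx + 1j * ny) * a - nz * b      # second component
    c, s = math.cos(theta), math.sin(theta)
    return (c * a + 1j * s * sa, c * b + 1j * s * sb)

def random_axis(rng):
    x, y, z = (rng.gauss(0, 1) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    return (x / r, y / r, z / r)

rng = random.Random(42)
theta, n_events, trials = 0.2, 50, 5000
acc = 0.0
for _ in range(trials):
    up = (1.0 + 0j, 0j)
    psi_a, psi_b = up, up
    for _ in range(n_events):   # independent rotations for A and B: C = 0
        psi_a = rotate(psi_a, random_axis(rng), theta)
        psi_b = rotate(psi_b, random_axis(rng), theta)
    # amplitude product A_A * conj(A_B) with initial = final = spin-up
    acc += (psi_a[0] * psi_b[0].conjugate()).real
estimate = acc / trials
predicted = math.cos(theta) ** (2 * n_events)   # approx exp(-n v^2)
```

Since ⟨exp(i v·σ)⟩ = cos θ for a uniformly random axis, the exact ensemble average of the product is cos²ⁿθ; the simulation should reproduce it within statistical error.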
This demonstrates that magnetic defects always break time reversal symmetry.

(a) Diffuson case:
  Eigenvalue of R                Eigenmodes
  (1 − C) v² τ_ev⁻¹              (1/√2)(|↑↑⟩ + |↓↓⟩)
  (1 + (1/3)C) v² τ_ev⁻¹         (1/√2)(|↑↑⟩ − |↓↓⟩), |↑↓⟩, |↓↑⟩

(b) Cooperon case (cf. Ref. [52]):
  Eigenvalue of R                Eigenmodes
  (1 + C) v² τ_ev⁻¹              (1/√2)(|↑↓⟩ − |↓↑⟩)
  (1 − (1/3)C) v² τ_ev⁻¹         (1/√2)(|↑↓⟩ + |↓↑⟩), |↑↑⟩, |↓↓⟩

Table E.2: Dephasing rates and eigenmodes due to unpolarized magnetic defects.

E.3 Isospin and pseudospin dephasing

We now move on to describing the dephasing of isospin and pseudospin, to obtain the results found in Ref. [21]. The results are analogous to spin-orbit scattering as described above.

E.3.1 Isospin dephasing

We can consider the isospin (associated with Pauli matrices Σ) as a "degree of freedom" in order to allow for the superposition of electron and hole states at a given momentum. The isospin is relaxed very quickly, through the Dyakonov-Perel mechanism described in Sec. E.2.3. In this case the graphene Hamiltonian H₀ = vΣ·p provides the precession of isospin (with a momentum-dependent axis). For a typical graphene measurement situation (ω = v|p|/ℏ ∼ 10¹⁴ s⁻¹ and τ_tr⁻¹ ∼ 10¹³ s⁻¹), the total isospin rotation between scattering events is large, and so the isospin relaxation rate is τ_tr⁻¹ [21]. This is an extremely large dephasing rate, and it completely suppresses the isospin triplet modes. In other words, we need not consider such electron-hole mixed states in a practical device.

Just as with the spin-orbit interaction (Table E.1), there is an isospin singlet mode which remains unaffected by the isospin relaxation. For the Diffuson this singlet is (1/√2)(|↑↑⟩ + |↓↓⟩), while for the Cooperon it is (1/√2)(|↑↓⟩ − |↓↑⟩). In this case the states |↑⟩ and |↓⟩ refer to the positive and negative eigenstates of Σ_z.

E.3.2 Pseudospin dephasing

Pseudospin (associated with Pauli matrices Λ)
is a degree of freedom of the undisordered Hamiltonian, unlike isospin, so we do not expect any relaxation of the Dyakonov-Perel type. Instead, pseudospin relaxes due to pseudospin-coupled disorder, by the mechanism of random pseudospin rotation events. The symmetry of graphene allows for two types of pseudospin (valley) disorder (Sec. 2.2.3): valley-sensitive disorder that couples to Λ_z, and intervalley disorder that couples to Λ_x or Λ_y. These two types of disorder are exactly analogous to the out-of-plane (σ_z coupled) and in-plane (σ_x or σ_y coupled) spin-orbit rotations described in Sec. E.2.2. It is not surprising then that the dephasing rates and modes appear identical to the spin-orbit case[21].

E.4 Dephasing in high magnetic fields

E.4.1 Polarized dynamic magnetic defects

We next consider the case of magnetic defects which are fully randomized between particles A and B, but which may have a polarization bias: ⟨v_A⟩ ≠ 0 and/or ⟨v_B⟩ ≠ 0. This case is highly applicable to the measurements of dephasing of conductance fluctuations in this thesis.

Assuming that each magnetic defect is only free to change its orientation (not its strength), we have ⟨v_A²⟩ = ⟨v_B²⟩ = ⟨v²⟩ and we define the rms strength of the defects as v_rms = √⟨v²⟩. We then further restrict the polarization axis to be the same for A and B. Without loss of generality, choose the x̂ axis, so that ⟨v_Ay⟩ = ⟨v_Az⟩ = ⟨v_By⟩ = ⟨v_Bz⟩ = 0. Finally, define two average polarization numbers P_A = ⟨v_Ax⟩/v_rms and P_B = ⟨v_Bx⟩/v_rms.

In this case, for weak coupling we find

⟨U_magA U_magB*⟩ ≈ 1 − v_rms²(1 − P_A P_B σ_Ax σ_Bx*) − i v_rms(P_B σ_Bx* − P_A σ_Ax)   (E.23)
⟨U_magA U_magB†⟩ ≈ 1 − v_rms²(1 − P_A P_B σ_Ax σ_Bx) − i v_rms(P_B σ_Bx − P_A σ_Ax)   (E.24)

The resulting dephasing rates are shown in Table E.3. Note that in this case we have the appearance of an imaginary term. This is a result of the magnetic defects creating, on average, a slightly lower energy for polarized electron spins.

Eigenvalue of R = τ_ev⁻¹(1 − ⟨U_magA U_magB*⟩)         Eigenmode
[(1 − P_A P_B) v_rms² + i v_rms(P_B − P_A)] τ_ev⁻¹     |++*⟩
[(1 − P_A P_B) v_rms² − i v_rms(P_B − P_A)] τ_ev⁻¹     |−−*⟩
[(1 + P_A P_B) v_rms² − i v_rms(P_B + P_A)] τ_ev⁻¹     |+−*⟩
[(1 + P_A P_B) v_rms² + i v_rms(P_B + P_A)] τ_ev⁻¹     |−+*⟩

Table E.3: Dephasing modes with polarized magnetic defects. The states |+⟩ and |−⟩ here are the positive and negative eigenstates of σ_x. The Diffuson case is shown here (cf. Ref. [50]), and the Cooperon results are identical although without the complex conjugate in the eigenstate.

In the case where P_A = P_B = P (when the polarization-inducing field is the same for A and B), the |++*⟩ and |−−*⟩ modes lose their energy offset and their dephasing rate is simply (1 − P²) v_rms² τ_ev⁻¹. As the polarization grows towards its maximal value of P² = 1, these modes lose their dephasing completely. In contrast, the |+−*⟩ and |−+*⟩ modes gain dephasing, and their energy shift grows, as the polarization increases.³⁷ This energy shift adds to (or subtracts from) the direct Zeeman effect on the electrons A and B.

E.4.2 Polarized "static" magnetic defects

While the above discussion for polarized defects (with dynamics) is suitable for conductance fluctuations, weak localization requires a different treatment as the defects do not (typically) relax over the time of the loop. This does not necessarily mean that the defects are static, however: once we apply a magnetic field to polarize the defects, they begin to precess. The precession makes it much more complicated to calculate the dephasing properties, since different segments of the Cooperon will have different statistical properties.

If a magnetic defect is associated with the vector

v_A = v_Ax x̂ + v_Ay ŷ + v_Az ẑ   (E.25)

³⁷ The energy shift is to be expected, but the increase in dephasing is somewhat surprising. The reason for this dephasing to remain at full polarization is that the magnetic defects are in random locations. Each path visits a random number of magnetic defects, and so the dephasing is a result of the uncertainty in the energy shift.
for one loop direction, then the opposite loop direction will see

v_B = v_Bx x̂ + v_By ŷ + v_Bz ẑ   (E.26)
    = v_Ax x̂ + [v_Ay cos(ω δt) − v_Az sin(ω δt)] ŷ   (E.27)
    + [v_Ay sin(ω δt) + v_Az cos(ω δt)] ẑ.   (E.28)

Here, the polarization direction is x̂, the precession frequency is ω/(2π), and δt is the difference in time between the two encounters on this magnetic defect.

Because the defect is precessing, it can transfer energy to the electron. Once we allow for energy transfers, however, the treatment of the dephasing becomes much more complicated. It is necessary to consider whether there is an available final state at the appropriate energy difference ℏω; if not, then dephasing cannot occur. These issues are treated in depth in Ref. [50].

E.4.3 Ripples/random magnetic field dephasing

The random magnetic fields that arise from an in-plane field passing through graphene ripples dephase the Cooperon, as described in Ref. [18] and measured in Chapter 6. Once we start to compare conductance fluctuations at different magnetic fields, however, the Diffuson becomes dephased as well. Recall that Diffusons are sensitive to orbital effects from the field difference B_A − B_B, and Cooperons to the total field B_A + B_B. To generalize the argument of Ref. [18], then, we have for the Diffuson a dephasing rate

τ_φ⁻¹ = (√π/4)(e²/ℏ²) v Z² R (B_∥A − B_∥B)²

and

τ_φ⁻¹ = (√π/4)(e²/ℏ²) v Z² R (B_∥A + B_∥B)²

for the Cooperon. Note that this assumes a Gaussian height-height correlation and short-range ripples, as in (6.6).

Appendix F
Digamma function reference

The digamma function ψ(z) (also known as the psi function) shows up frequently in the analysis of phase coherent processes in diffusive two-dimensional systems. As the properties of ψ(z) may not be generally familiar, this appendix gathers together some useful equations, many of which can be found in Ref. [120].

F.1 Definitions

The name "digamma" comes from the standard definition of this function: ψ(z) = (d/dz) ln Γ(z), where Γ(z) is the well-known gamma function.
The digamma function appears as the result of infinite series in n containing terms such as 1/(n + z). A practical definition of the digamma function is:

ψ(z) = −γ + Σ_{n=0}^{∞} [1/(n+1) − 1/(n+z)],   (F.1)

where γ = 0.5772156649... is the Euler-Mascheroni constant. The digamma function is defined for all complex z, except for the nonpositive integers (0, −1, −2, ...).

In fact, in a typical derivation one almost always obtains digamma functions in pairs. For example, the infinite series:

Σ_{n=0}^{∞} [1/(n+z₁) − 1/(n+z₂)] = ψ(z₂) − ψ(z₁),   (F.2)

or in finite sums that have been cut off at some large N:

Σ_{n=0}^{N} 1/(n+z) = ψ(z+N+1) − ψ(z).   (F.3)

The derivative of the digamma function, ψ′(z), is known as the trigamma function. Following from definition (F.1) we have

ψ′(z) = Σ_{n=0}^{∞} 1/(n+z)².   (F.4)

F.2 Occurrence

The complex digamma function and its derivatives can be used to express the result of any series of a rational function,

Σ_{n=0}^{∞} P(n)/Q(n),

where P(n) and Q(n) are polynomials in n, as long as that series converges. One uses partial fraction decomposition to transform into terms of the form 1/(n+z)^p, then applies either (for p = 1) definition (F.1) or (for p > 1)

Σ_{n=0}^{∞} 1/(n+z)^p = [(−1)^p/(p−1)!] ψ^(p−1)(z)

where ψ^(p−1)(z) is the (p−1)th derivative of ψ(z). The final expression is a finite sum of digamma derivatives, containing at most d terms where d is the degree of Q(n).

For example, the quasi-2D conductance correlation function involves the sum Σ_{n=0}^{∞} 1/|n+z|². For real z, this is simply the trigamma function. For complex z, we use partial fraction expansion:

Σ_{n=0}^{∞} 1/|n+z|² = Σ_{n=0}^{∞} 1/[(n+z)(n+z*)]
  = Σ_{n=0}^{∞} [1/(z−z*)] [1/(n+z*) − 1/(n+z)]
  = [1/(z−z*)] [ψ(z) − ψ(z*)]
  = Im ψ(z) / Im z   (F.5)

F.3 Properties and identities

The digamma function is analytic over the complex plane except at the singular points z = 0, −1, −2, −3, .... It is single-valued and does not have branch cuts. Furthermore, ψ(z) is real-valued for all real inputs. As can be seen in (F.1), complex conjugation commutes with function application:

ψ(z)* = ψ(z*).   (F.6)

There are several identities that relate digamma functions at different points. The Euler reflection relation states that

ψ(1−z) − ψ(z) = π cot(πz)   (F.7)
ψ′(1−z) + ψ′(z) = π²/sin²(πz),   (F.8)

the recurrence relation states that

ψ(z+1) = ψ(z) + 1/z   (F.9)
ψ′(z+1) = ψ′(z) − 1/z²,   (F.10)

and the duplication relation states³⁸ that

ψ(2z) = (1/2)[ψ(z) + ψ(z + 1/2)] + ln 2   (F.11)
ψ′(2z) = (1/4)[ψ′(z) + ψ′(z + 1/2)].   (F.12)

F.4 Specific values

In general there exists a closed-form expression for the value of ψ(z) and its derivatives when z = m/n is a rational number. A few special values are of note:

ψ(1) = −γ       ψ′(1) = π²/6   (F.13)
ψ(1/2) = −γ − ln 4       ψ′(1/2) = π²/2   (F.14)

For arguments of the form z = 1/2 + iy, where y is real, we can use analyticity and the Euler reflection relation to obtain

Im[ψ(1/2 + iy)] = [ψ(1/2 + iy) − ψ(1/2 − iy)]/(2i)
  = π tan(πiy)/(2i)
  = (π/2) tanh(πy).   (F.15)

No simple expression exists, however, for Re[ψ(1/2 + iy)]. We can differentiate (F.15) to obtain a similar identity for trigamma:

Re[ψ′(1/2 + iy)] = (π²/2) sech²(πy)   (F.16)

Similar identities exist for arguments with real part of 1.

³⁸ A more general duplication relation exists, expressing ψ(kz) in terms of the sum [ψ(z) + ψ(z + 1/k) + ψ(z + 2/k) + ⋯ + ψ(z + (k−1)/k)], for integer k.

F.5 Approximation and computation

For large arguments x ≫ 1, we have ψ(x) ≈ ln x. The difference between ψ(z) and ln(z) can be expanded in powers of 1/z to obtain, for Re(z) > 0,

ψ(z) = ln z − 1/(2z) − 1/(12z²) + 1/(120z⁴) − 1/(252z⁶) + ⋯
     = ln z − Σ_{n=1}^{∞} B⁽²⁾_n/(n zⁿ),   (F.17)

where B⁽²⁾_n are the second Bernoulli numbers.³⁹ This series is interesting since for any given z the infinite sum fails to converge.
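The truncated expansion (F.17), pushed into its region of validity with the recurrence (F.9) and reflection (F.7) identities, gives a compact evaluator. The following is an illustrative sketch of my own (thresholds and term count chosen for double precision), not a reference implementation:

```python
import cmath, math

# Coefficients B_2k/(2k) for the asymptotic series (F.17); odd n > 1 vanish.
# (B2=1/6, B4=-1/30, B6=1/42, B8=-1/30, B10=5/66, B12=-691/2730)
_ASYMP = [1/12, -1/120, 1/252, -1/240, 1/132, -691/32760]

def digamma(z):
    """Evaluate psi(z) for complex z via recurrence + asymptotic series."""
    z = complex(z)
    if z.real < 0.5:
        # reflection (F.7): psi(z) = psi(1-z) - pi*cot(pi*z)
        return digamma(1 - z) - math.pi / cmath.tan(math.pi * z)
    s = 0.0
    while abs(z) < 7:            # recurrence (F.9): psi(z+1) = psi(z) + 1/z
        s -= 1 / z
        z += 1
    # asymptotic expansion (F.17): psi(z) ~ ln z - 1/(2z) - sum B_2k/(2k z^2k)
    w = 1 / (z * z)
    r = cmath.log(z) - 0.5 / z
    t = w
    for c in _ASYMP:
        r -= c * t
        t *= w
    return s + r
```

With the threshold |z| ≥ 7 and six even-order terms, the truncation error of the series is of order 10⁻¹³, comparable to double-precision roundoff.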
On the other hand, for any chosen partial sum (cut off at n_max) the approximation becomes highly accurate as |z| increases.

This leads to the widely-used algorithm for evaluating ψ(z) anywhere in the complex plane: If Re z is positive and |z| is larger than some chosen threshold Z_thresh, then evaluate (F.17) with the summation cut off at n_max. If Re z is negative, or if |z| is too small, then use identities (F.7) and/or (F.9), the latter repeatedly if necessary, to express the result in terms of ψ(z′), where z′ has a positive real part and its magnitude is larger than Z_thresh. For a practical implementation limited by the precision of IEEE double-precision floating-point arithmetic, the values Z_thresh = 7 and n_max = 18 are sufficient.

³⁹ B⁽²⁾_{1,2,3,...,20} = 1/2, 1/6, 0, −1/30, 0, 1/42, 0, −1/30, 0, 5/66, 0, −691/2730, 0, 7/6, 0, −3617/510, 0, 43867/798, 0, −174611/330

Appendix G
Generating correlated random data

Theory tells us that conductance fluctuations are well-described by a correlation function ⟨δG_A δG_B⟩ and that they have a Gaussian distribution. Other random aspects of the measurement (noise, ripples) also are expected to be Gaussian. If we want to visualize any of these fluctuations, then it is useful to generate a random pattern satisfying the prescribed correlation function. This appendix describes a handful of techniques that are useful for generating such correlated random fluctuations.

We first discuss (in Sec. G.1) a general method to create arbitrarily correlated multivariate normal data. Since that method is quite slow for large data sets, the subsequent sections (Secs. G.2, G.3, G.4) describe how to take advantage of symmetries to generate the data much more quickly, using Fourier transforms.

G.1 General matrix method

Suppose that we want to generate a random sample containing N correlated measurements, {Z_n} with index n = 1⋯N.
We can represent this sample as an N-vector Z:

Z = (Z₁, Z₂, ..., Z_N)ᵀ   (G.1)

For now we take the Z_n values to be complex without any preferred phase, to obtain a result with wider applicability. At the end of this section the case of real-valued Z is also examined. It is also assumed that ⟨Z⟩ = 0, as a mean background can be easily added to the generated fluctuations.

In generating Z, we must respect its correlations. Finite correlations of the form F_nm = ⟨Z_n Z_m*⟩ can exist for any possible pair of n and m, and so to treat the general problem we define an N×N correlation matrix (or covariance matrix):

F ≡ ⟨Z Z†⟩.   (G.2)

There are two important properties of this matrix: F must be Hermitian (F = F†), and it must be positive semi-definite:⁴⁰

v†Fv ≥ 0, for all v.   (G.3)

If the measurements Z_n (or linear combinations thereof) are all normally distributed variables, then the correlation matrix contains all the statistical information about Z; the higher-order correlations are fixed (see Sec. 3.3.2). The goal of this section can be restated as follows: "Given F, generate a Z."

Since there are only N complex degrees of freedom in the solution, we look to generate a Z in the form

Z = Ar,   (G.4)

where r is an N-vector of completely random, uncorrelated numbers with unity variance (i.e. ⟨r r†⟩ = 1). The matrix A, a non-random N×N matrix to be determined, creates the appropriate correlations out of these random numbers. Since it is easy to generate the set of random {r_n} using a computer random number generator, the problem now reduces to finding A. Plugging (G.4) into (G.2), we obtain

F = AA†.   (G.5)

Any matrix A we can find that satisfies this property will suffice. These A matrices are, in some sense, the "square roots" of F. At least one valid A exists (since F is positive semi-definite) but it is not unique.

There are various computer algorithms that yield a valid A, given F. The Cholesky decomposition of F directly yields a value for A, however it can be unstable when F is singular.
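For a well-conditioned F the Cholesky route is the simplest. A minimal real-valued sketch (my own illustration, assuming F is positive definite so that plain Cholesky is stable):

```python
import math, random

def cholesky(F):
    """Lower-triangular A with A A^T = F, for a real symmetric
    positive-definite F: the 'square root' needed in (G.4)/(G.5)."""
    n = len(F)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = F[i][j] - sum(A[i][k] * A[j][k] for k in range(j))
            A[i][j] = math.sqrt(s) if i == j else s / A[j][j]
    return A

def correlated_sample(A, rng):
    """Z = A r, with r a vector of independent unit-variance normals."""
    n = len(A)
    r = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(A[i][k] * r[k] for k in range(i + 1)) for i in range(n)]

rng = random.Random(0)
F = [[1.0, 0.8, 0.3],
     [0.8, 1.0, 0.5],
     [0.3, 0.5, 1.0]]          # target covariance matrix (my example choice)
A = cholesky(F)
samples = [correlated_sample(A, rng) for _ in range(100000)]
# the sample covariance of components 0 and 1 should approach F[0][1] = 0.8
c01 = sum(z[0] * z[1] for z in samples) / len(samples)
```

The same structure carries over to the complex case by replacing the transpose with the Hermitian conjugate and drawing complex r.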
A more robust method (four to eight times slower than the Cholesky decomposition) is to compute the spectral decomposition F = PDP† with a unitary P and diagonal D. We can then take A = PD^{1/2}, where D^{1/2} is the matrix formed by taking the element-wise square root of D.

Once A has been computed, it is a simple matter to generate the random r and obtain correlated fluctuations Z = Ar. This matrix method is fully general, but in practice it is very time consuming when N is large, since the algorithms for computing A generally require O(N³) arithmetic operations. The following sections will explore special cases where a great deal of speed can be gained, making it possible to efficiently generate large random data sets.

⁴⁰ Proof: v†Fv = v†⟨ZZ†⟩v = ⟨v†ZZ†v⟩ = ⟨|Z†v|²⟩ ≥ 0.

Note on generating real-valued fluctuations

The above method generates complex-valued Z_n satisfying (G.2). There are a couple of ways to generate real-valued random fluctuations X_n that satisfy F = ⟨XXᵀ⟩:

• Take the Z_n from the complex-valued procedure, then change it to real: X_n = √2 Re Z_n, or X_n = Re Z_n + Im Z_n.

• Alternatively, we may simply perform every step with real algebra: compute a real-valued A matrix and only generate real-valued r.

G.2 Stationary processes (Fourier method)

When the correlations have a natural translation symmetry, we can write

F_nm = f(n − m),   (G.6)

described by a simple one-dimensional function f(Δn) satisfying f(−Δn) = f(Δn)*. This situation occurs with noise, random ripples, conductance fluctuations in time or gate voltage, and even the conductance fluctuations in magnetic field (aside from Cooperon correlations). Such translation-symmetric fluctuations are known as stationary processes.

If f(Δn) is also N-periodic, then we can make a massive simplification: we automatically can write down an eigenbasis of F_nm. This means effectively that the PDP† approach, described earlier, has a guaranteed P. One guaranteed basis is the Fourier basis: in the matrix representation we can write the unitary matrix P_nm = exp(2πi nm/N)/√N, so that the diagonal matrix D is determined by the discrete Fourier transform of f(Δn), with 1/√N normalization.

We can describe this Fourier approach in the conventional language of summation notation. We define the Fourier components c_k as:

c_k = (1/N) Σ_{n=1}^{N} Z_n exp(−2πi nk/N).   (G.7)

Observe the correlation function of the Fourier components:

⟨c_k c_l*⟩ = (1/N²) Σ_{n=1}^{N} Σ_{m=1}^{N} ⟨Z_n Z_m*⟩ exp(−2πi(nk − ml)/N)   (G.8)
  = δ_kl · (1/N) Σ_{d=1}^{N} f(d) exp(−2πi dk/N)   [d = n − m]   (G.9)
  = δ_kl f̂(k)   (G.10)

The Fourier components c_k are thus uncorrelated random numbers, akin to the r_n of the preceding section, though with variances given by ⟨|c_k|²⟩ = f̂(k). We can generate the values of these c_k simply by taking independent random unity-variance complex numbers, and multiplying them by √f̂(k). To obtain the desired Z_n we simply perform an inverse Fourier transform Z_n = Σ_{k=1}^{N} c_k e^{2πink/N}.

It is straightforward and efficient to compute f̂(k) by using the Fast Fourier Transform of f(Δn), especially if we choose N to be a power of 2. We speed up our generation of random data to O(N log N), compared to the matrix method's O(N³). Note that f̂(k), the Fourier transform of f(d), must be everywhere real and nonnegative since the correlation matrix is positive semi-definite.

G.2.1 A technicality: periodic boundary conditions

Usually, theory gives us a correlation function f̃(Δn) that extends to an infinite distance. This is at odds with the Fourier approach, which demands periodic boundary conditions f(Δn) = f(Δn + N).

To first approximation, we can simply choose a very large data range N, and then truncate the theoretical function:

f(Δn) = { f̃(Δn)       for 1 ≤ Δn ≤ N/2
        { f̃(N − Δn)   for N/2 < Δn ≤ N   (G.11)

If the theoretical function f̃(Δn) decays quickly for large Δn, then the discontinuity causes only a small problem: tiny negative values in f̂(k) at high k, that can be safely zeroed.
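Putting the Fourier recipe and this truncation together gives a short generator. This is my own sketch (a plain O(N²) transform stands in for the FFT, and the Gaussian correlation and its length ξ are arbitrary example choices):

```python
import cmath, math, random

def generate_stationary(f_hat, rng):
    """Given nonnegative variances f_hat[k] (the DFT of the N-periodic
    correlation f), synthesize one realization Z_n = sum_k c_k e^{2πink/N}
    with independent complex c_k of variance f_hat[k] (cf. G.7-G.10)."""
    N = len(f_hat)
    c = [math.sqrt(v) * complex(rng.gauss(0, math.sqrt(0.5)),
                                rng.gauss(0, math.sqrt(0.5)))
         for v in f_hat]
    return [sum(c[k] * cmath.exp(2j * math.pi * n * k / N) for k in range(N))
            for n in range(N)]

N, xi = 32, 3.0
# Gaussian correlation f(Δn), wrapped around the boundary as in (G.11)
f = [math.exp(-min(d, N - d) ** 2 / xi ** 2) for d in range(N)]
# f_hat(k) = (1/N) Σ_d f(d) e^{-2πi d k/N}; real, with any tiny negative
# values (from the truncation) zeroed
f_hat = [max(0.0, sum(f[d] * cmath.exp(-2j * math.pi * d * k / N)
                      for d in range(N)).real / N) for k in range(N)]
rng = random.Random(1)
Z = generate_stationary(f_hat, rng)
```

Averaging ⟨Z_n Z_m*⟩ over many realizations recovers f(n − m); a single realization shows the prescribed correlation length but with random phases.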
As a more advanced technique, one can superimpose copies of the function f_∞ that are "wrapped" around the boundaries (as done in Ewald summation), which removes the boundary problem.

G.3 Simulating mixed Cooperon and Diffuson correlations

The combination of Cooperon and Diffuson terms naturally gives rise to a conductance-conductance correlation of the form

    ⟨g(B₁)g(B₂)⟩ = D(B₁ − B₂) + C(B₁ + B₂),    (G.12)

in magnetic field B, for two functions D(ΔB) and C(ΔB) which may or may not be the same.

This correlation is not stationary; however, we can use a trick to represent the conductance as a superposition of two independent stationary parts. We look for an answer of the form

    g(B) = s(B) + s(−B) + n(B)    (G.13)

(note the sign reversal in the second term). Here, s(B) and n(B) are independent stationary fluctuations, described by correlations

    ⟨s(B₁)s(B₂)⟩ = S(B₁ − B₂) = S(B₂ − B₁),
    ⟨n(B₁)n(B₂)⟩ = N(B₁ − B₂) = N(B₂ − B₁),    (G.14)
    ⟨s(B₁)n(B₂)⟩ = 0,

for some S(ΔB) and N(ΔB) to be discovered.

In terms of these pieces, we find that

    ⟨g(B₁)g(B₂)⟩ = 2S(B₁ − B₂) + 2S(B₁ + B₂) + N(B₁ − B₂),    (G.15)

and comparing to (G.12), we must have

    S(ΔB) = ½C(ΔB) − ½C₀,
    N(ΔB) = D(ΔB) − C(ΔB) + 2C₀.    (G.16)

Here, C₀ is a constant that may be chosen freely (though, see the technical note below). To summarize, all we need to do is generate two stationary fluctuation sets s(B) and n(B) that respect the above correlation functions, and plug them into (G.13) to obtain the result.

As a technical point, it appears from (G.16) that the subtraction leading to N(ΔB) may produce an invalid solution (allowable correlation functions must have a strictly nonnegative Fourier transform). In fact, this problem will not occur as long as the original correlation ⟨g(B₁)g(B₂)⟩ is possible, i.e., positive semi-definite.⁴¹ The constant C₀ in (G.16) should be chosen so that both ∫dx N(x) ≥ 0 and ∫dx S(x) ≥ 0.

⁴¹ Proof: Let x and y represent magnetic fields. It can be calculated that

    ∫dx ∫dy ⟨g(x)g(y)⟩ sin(ωx) sin(ωy) = (L/2) ∫dx N(x) cos(ωx),    (G.17)

for all nonzero frequencies ω. The left-hand side of (G.17) is a test of the positive semi-definiteness of the original correlation, using sin(ωx) as a "test vector".

G.4 Semi-stationary processes

In order to generate conductance fluctuations with respect to in-plane field B∥, we must consider the effects of magnetic impurities, Zeeman splitting, spin-orbit interactions, and so on. The fluctuations with respect to B∥ are then, unfortunately, not stationary. If we want to generate a two-parameter conductance pattern G(B∥, B⊥), or G(B∥, V_BG), then the fluctuations overall are not stationary, even if the second axis does have stationary symmetry.

For such a situation we can still take advantage of the stationary axis to make the data generation more efficient. We represent the data vector as Z_{n,p}, with two indices: n ∈ {1···N_n}, representing the non-stationary axes, and p ∈ {1···N_s}, representing the stationary axes. Note that Z contains a total of N_n N_s entries. We then, as usual, compute the covariance matrix ⟨Z_{n,p} Z_{m,q}*⟩ = F_{nm,pq} with the stationary characteristic F_{nm,pq} = f_{nm}(p − q), where f_{nm}(Δp) is N_s-periodic and f_{nm}(−Δp) = f_{mn}(Δp)*.

We proceed by calculating the Fourier transform along the stationary axes (as in Sec. G.2), separately for each pair of nonstationary values n, m. This results in a new matrix,

    ⟨c_{n,k} c_{m,l}*⟩ = f̃_{nm}(k) δ_kl = δ_kl (1/N_s) Σ_{d=1}^{N_s} f_{nm}(d) exp(−2πi kd/N_s).    (G.18)

Each Fourier frequency is still independent; however, we now have multiple components at each frequency, indexed by n, m. Moreover (unlike Sec. G.2), f̃_{nm}(k) may be complex valued, though it does satisfy f̃_{nm}(k) = f̃_{mn}(k)*. In any case, the problem is reduced to applying the matrix method to the matrix f̃_{nm}(k), separately at each frequency k. This produces the required set of random amplitudes c_{n,k}, which are inverse-transformed to produce the final data set Z_{n,p}.
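The superposition trick of Sec. G.3 can be sketched numerically. In this illustrative Python sketch (function names, the choice C₀ = mean(C), and the example correlations are my own), the stationary parts s and n are generated with the FFT method of Sec. G.2, and s(−B) is realized by an index reversal on the periodic field grid:

```python
import numpy as np

def _stationary_real(corr, rng):
    # FFT generator (Sec. G.2) for real fluctuations with <x_n x_m> = corr(n - m)
    N = len(corr)
    ftilde = np.clip(np.fft.fft(corr).real / N, 0.0, None)
    c = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    z = N * np.fft.ifft(c * np.sqrt(ftilde))
    return np.sqrt(2) * z.real

def mixed_cf(D, C, rng=None):
    """Generate g on a periodic field grid with
    <g(B1) g(B2)> = D(B1 - B2) + C(B1 + B2), via g(B) = s(B) + s(-B) + n(B)."""
    rng = np.random.default_rng() if rng is None else rng
    C0 = np.mean(C)              # a free constant; this choice keeps both
                                 # sum(S) >= 0 and sum(N) >= 0
    S = 0.5 * (C - C0)           # Eq. (G.16)
    Ncorr = D - C + 2 * C0
    s = _stationary_real(S, rng)
    n = _stationary_real(Ncorr, rng)
    s_rev = np.roll(s[::-1], 1)  # s(-B) on the periodic index grid
    return s + s_rev + n
```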
Returning to the technical point of Sec. G.3: the identity (G.17) guarantees that its right-hand side (the Fourier transform of N) is non-negative for all ω ≠ 0. For the special case of ω = 0, we simply need to choose an appropriate C₀ in (G.16).

Returning to the semi-stationary method of Sec. G.4: this separation along the stationary axis allows computation to proceed with only [O(N_n³ N_s) + O(N_n² N_s ln N_s)] operations, rather than the O(N_n³ N_s³) that would be required with the full matrix method. The computation can be quite efficient if only a small number N_n of points is desired.

Figure G.1 (plot not reproduced): Example of correlated data extrapolation. A real-valued data set X is extrapolated to a sample real-valued data set Y. Here we take correlation functions ⟨X_z X_z′⟩ = ⟨X_z Y_z′⟩ = ⟨Y_z Y_z′⟩ = [1 + (z − z′)²/25]⁻¹. The prior data set {X_z}, ranging over z ∈ {−95 ··· 4}, was first generated using the method of Sec. G.1. Next, the conditional values of μ_z = ⟨Y_z⟩_given X and ⟨(Y_z − μ_z)(Y_z′ − μ_z′)⟩_given X were computed using the results of Sec. G.5, with z ∈ {0 ··· 99} (note the overlap). Finally, a few random Y were generated satisfying these statistics. The thick smooth line shows the value of μ_z, and the gray bands around it indicate 68% and 95% confidence bands for the extrapolated interval.

G.5 Adding on more data

The method in Sec. G.1 generates a fixed-size data set. If we want to look beyond the edges of the generated data set, we need to add on more points that are correlated with the known points. This section describes how to generate the new data properly. Let's use the vector X to describe the prior data and the vector Y to describe the new data. These two data sets may have different sizes. Note that the length of Fourier-generated data cannot be extended.

Before proceeding we need to know: what were the correlations involved in creating X, what sort of correlations need to exist within the new Y, and what are the inter-correlations between X and Y. These are all captured
in the total correlation matrix, written in block-matrix form (semicolons separating block rows) as

    F = ⟨[X; Y][X; Y]†⟩ = [⟨XX†⟩, ⟨XY†⟩; ⟨YX†⟩, ⟨YY†⟩] = [F₁₁, F₁₂; F₂₁, F₂₂].    (G.19)

Note that these correlations must be computed using an average ⟨···⟩ over an ensemble that includes all of the possible X, not just the measured one.

Once F is known we may proceed. Since [X, Y] is a multivariate normal distribution, the conditional probability

    P(Y, given X) dY = [P(Y and X, prior to observing X) dX dY] / [P(X, prior to observing X) dX]    (G.20)

also turns out to be a multivariate normal distribution. The distribution of the conditional Y has non-zero mean, given by

    μ = ⟨Y⟩_given X = F₂₁ F₁₁⁻¹ X,    (G.21)

and its covariance around the mean is given by

    ⟨(Y − μ)(Y − μ)†⟩_given X = F₂₂ − F₂₁ F₁₁⁻¹ F₁₂.    (G.22)

Note that we have used the notation ⟨···⟩_given X to denote averages over a reduced ensemble that is conditional on the observed X. Also note the presence of the matrix inverse F₁₁⁻¹, which does not exist if F₁₁ is singular; it is only strictly necessary, however, to compute the product F₂₁F₁₁⁻¹, and this can be obtained with a pseudo-inverse when the proper inverse does not exist.

In any case, this means that we merely need to generate new data δY with the appropriate covariance matrix (G.22), using the usual procedure. The new data is then given by Y = μ + δY.

Appendix H

Theory of statistical errors in autocorrelation functions

This appendix is © American Physical Society. Adapted from Ref. [3].

H.1 Definitions

This appendix explores the statistical errors that occur in the estimation of the correlation function from a generalized fluctuating quantity G(x). Theoretically, one divides G(x) into its average (sample-independent) part ⟨G(x)⟩ and its fluctuating (sample-dependent) part δG(x). The correlation function of δG(x) is then F(Δx) = ⟨δG(x) δG(x + Δx)⟩ (neglecting Cooperon correlations when x is field).
The ⟨···⟩ notation here refers specifically to an ensemble average, i.e., an average over all possible disorder configurations.

An ensemble average (over an infinity of similar devices) is not possible in practice. Experimental data analysis instead usually consists of averaging over x, using data taken from one device. This is an approximation that incurs errors compared to an ensemble average. The goal of this appendix is to quantify the differences (errors) between the ideal quantities and the experimental estimates just described. We use upper-case letters (G, F, δG) to represent error-free quantities, and lower-case letters (f, δg, etc.) to represent estimated values that one would obtain from experimental analysis.

A typical experiment measures conductance G(x) (assumed to be noise-free) over a limited range in x of length L: x = x₀ ··· x₀ + L. Here x is a parameter such as V_BG or B. Next, a slowly-varying background estimate g_b(x) is computed from G(x), then subtracted to yield the estimated fluctuations, δg(x) = G(x) − g_b(x). Finally, the autocorrelation function f(Δx), defined as

    f(Δx) ≡ [1/(L − Δx)] ∫_{x₀}^{x₀+L−Δx} dx δg(x) δg(x + Δx),    (H.1)

is computed, and the correlation lengths x_½, x_r and x_i are extracted from f(Δx).

- The half-width x_½ is defined by f(x_½) = ½f(0).
- The roundness is x_r = |2f(0)/f″(0)|^(1/2), where f″(x) = (d²/dx²) f(x).
- The inflection point x_i is defined by f″(x_i) = 0.

The estimate f(Δx) and its correlation lengths in general differ from the true values [F(Δx) and its correlation lengths] for three reasons:

1. δg(x) may contain a remnant contribution from the background conductance, so that ⟨δg(x)⟩ ≠ 0. This would occur, for example, if the true background were linear, ⟨G(x)⟩ = G₀ + G₁x, but one allowed for only a constant background estimate, g_b(x) = g₀. Errors of this type will systematically offset f(Δx) upwards compared to F(Δx).

2. The protocol for computing g_b(x) [from G(x)] always causes g_b(x) to be somewhat correlated with δG(x). Thus, δg(x) will lose some fluctuations compared to δG(x). This systematically offsets f(Δx) downwards compared to F(Δx).

3. The limited quantity of data leads to random fluctuations in f(Δx), depending on the particular realization of δG(x).

The experimentalist typically minimizes errors of type 1 by adding more degrees of freedom to the background fit, at the expense of increasing type 2 errors. Errors of type 1 depend on sample details and thus are difficult to quantify in a general way. Errors of type 2 and 3, on the other hand, are quantifiable solely in terms of F(Δx). The following sections give a general treatment of type 2 and 3 errors.

H.2 Background subtraction (type 2) errors

In this section we calculate the bias that would be induced by the simplest possible background subtraction protocol: subtracting the mean of G(x) over the measured interval L. Considering the error mechanisms listed above, this protocol would give the lowest possible errors of type 2, while potentially leaving errors of type 1, depending on the details of the system. The results presented below could be extrapolated to a more general background fitting protocol by using an effective cutoff length L_eff instead of the scan range L. For example, if a higher-order polynomial were fit to G(x) in order to reduce type 1 errors, then L_eff ≈ L/(n + 1), where n is the order of the polynomial [e.g., n = 2 for a parabolic g_b(x)]. Alternatively, L_eff could be an effective smoothing length if g_b(x) is taken to be a smoothed version of G(x).

For mean value subtraction, the estimated background is simply the constant function g_b(x) = (1/L) ∫_{x₀}^{x₀+L} dx′ G(x′). The estimated fluctuations then have an error δg(x) − δG(x) = E₁(x) + E₂, composed of the type 1 error E₁(x) = ⟨g_b(x)⟩ − ⟨G(x)⟩ and the type 2 error E₂ = −(1/L) ∫_{x₀}^{x₀+L} dx′ δG(x′). We assume E₁(x) = 0 here.
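On a discrete grid, the estimation procedure just described (mean-value background subtraction followed by the estimator (H.1)) can be sketched as follows. This is an illustrative Python sketch; the function name is my own:

```python
import numpy as np

def autocorr_estimate(G, max_lag):
    """Mean-value background subtraction, then the estimator f(Δx) of Eq. (H.1),
    for data G sampled on a uniform grid; lags Δx = 0 ... max_lag - 1."""
    dg = G - np.mean(G)   # δg(x) = G(x) - g_b, with g_b = mean of G over the scan
    L = len(G)
    # each lag k averages δg(x) δg(x+k) over the overlapping range, as in (H.1)
    return np.array([np.mean(dg[:L - k] * dg[k:]) for k in range(max_lag)])
```

The correlation lengths x_½, x_r, and x_i can then be read off the returned array; f[0] is the (biased, per the next section) variance estimate.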
The error E₂ is correlated with δG(x), and thus it systematically affects the autocorrelation. To first order, the systematic error in (H.1) for Δx ≪ L is

    Bias{f(Δx)} ≡ ⟨f(Δx)⟩ − F(Δx) ≈ −F(0) x_L/L,    (H.2)

where x_L is a characteristic correlation length defined as

    x_L = ∫_{−L}^{L} dz (1 − |z|/L) F(z)/F(0).    (H.3)

Approximately, this x_L is the area under the normalized correlation function from −L/2 to L/2.

For short-ranged correlation functions, x_L would be a constant for large L, and so the systematic error (H.2) would fall as 1/L; this bias would then be analogous to the well-known sample variance bias from independent sample statistics, agreeing with the intuition of G(x) containing "many independent fluctuations", each having length x_L. In the quasi-2D CF case, however, F(Δx) only falls as 1/Δx [see (D.11), (D.12)], so the value of x_L diverges logarithmically as L increases. Hence the analogy with independent sample statistics does not hold for the quasi-2D CF variance bias, as there is no well-defined "independence length".

The bias in the variance f(0) leads to direct effects on the half-width estimate (x_½) and the roundness estimate (x_r), as these are both sensitive to the absolute variance. The roundness estimate x_r is biased by

    Bias{x_r} ≈ −½ x_r x_L/L.    (H.4)

The bias in x_½ is given by

    Bias{x_½} ≈ [F(0)/(2F′(x_½))] x_L/L.    (H.5)

The inflection point estimate x_i depends only on the second derivative of f(Δx), so to first order x_i has no bias; taking into account higher order terms omitted from (H.2), we obtain

    Bias{x_i} ≈ [2/(L² F‴(x_i))] [F(x_i) − F(0) x_L/L].    (H.6)

Note that if F(Δx) does not go to zero at large Δx, then the fluctuations are non-ergodic and it is impossible for (H.1) to converge to F(Δx). This would manifest in the above framework as an L-independent contribution to the type 2 error (H.2). Experimentally, it may be difficult to distinguish true non-ergodicity from ordinary type 2 errors on ergodic fluctuations, especially if F(Δx) goes to zero very slowly.
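The characteristic length (H.3) and the leading-order variance bias (H.2) are easy to evaluate numerically for a model correlation function. A minimal sketch, assuming an integer sampling grid; the exponential example is my own:

```python
import numpy as np

def variance_bias(Fcorr, L):
    """x_L from Eq. (H.3) and the first-order bias of f(0), Eq. (H.2),
    for a correlation function Fcorr(z) sampled on an integer grid."""
    z = np.arange(-L + 1, L)
    xL = np.sum((1.0 - np.abs(z) / L) * Fcorr(z) / Fcorr(0))
    return xL, -Fcorr(0) * xL / L

# Example: exponential correlations with length 5 give x_L ≈ 10, so the
# variance estimate f(0) is biased low by about 1% when L = 1000
xL, bias = variance_bias(lambda z: np.exp(-np.abs(z) / 5.0), 1000)
```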
For example, magnetoconductance fluctuations are ergodic, but at high values of thermal smearing the quasi-2D F_B(ΔB) approaches zero only logarithmically for ΔB ≲ kT/(eD) [see (8.19)]. The bias (H.2) would be almost independent of L until L ≫ kT/(eD), which could be mistaken for non-ergodicity. In any case, the inflection point bias (H.6) is unaffected by non-ergodicity (real or apparent).

H.3 Random (type 3) errors

Next we suppose the background has been determined perfectly, giving us the exact fluctuations: δg(x) = δG(x). Although we have ⟨f(Δx)⟩ = F(Δx) in this case, the measured f(Δx) will have random deviations from F(Δx) due to the limited data set. The random fluctuations in f(Δx) can be expressed in terms of a two-point correlator ⟨f(Δx₁)f(Δx₂)⟩. If δG(x) is gaussian (as is the case for CFs, see Sec. 3.3.2), then we have (by Isserlis' theorem)

    ⟨f(Δx₁)f(Δx₂)⟩ − ⟨f(Δx₁)⟩⟨f(Δx₂)⟩ = (1/L)[H(Δx₁ − Δx₂) + H(Δx₁ + Δx₂)],    (H.7)

for an ergodic dataset of large length L ≫ Δx₁, Δx₂, where H(δΔx) is a higher-order correlator defined as

    H(δΔx) = ∫_{−∞}^{∞} dz F(z) F(z + δΔx).    (H.8)

Equation (H.7) and its derivatives allow the determination of random errors in any aspect of f(Δx), including its correlation lengths. For instance, the random error in the variance [f(0)] is given by Var{f(0)} = 2H(0)/L.

The half-width estimate x_½ is sensitive to the errors in both f(x_½) and f(0), modulated by the local slope F′(x_½), giving

    Var{x_½} = Var{[f(x_½) − ½f(0)]/F′(x_½)}
             = (1/L) [(3/2)H(0) + H(2x_½) − 2H(x_½)] / F′(x_½)².    (H.9)

The error in the inflection point estimator x_i depends on the fluctuation of f″(x_i), modulated by the local slope F‴(x_i):

    Var{x_i} = Var{f″(x_i)/F‴(x_i)}
             = (1/L) [H⁗(0) + H⁗(2x_i)] / F‴(x_i)².    (H.10)

The roundness estimator, x_r = √(2f(0)/|f″(0)|), is influenced by changes in both f(0) and f″(0):

    Var{x_r} = Var{ f(0)/√(2F(0)|F″(0)|) + √F(0) f″(0)/√(2|F″(0)|³) }
             = (1/L) [ H(0)/(F(0)|F″(0)|) + F(0)H⁗(0)/|F″(0)|³ + 2H″(0)/F″(0)² ].    (H.11)
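The simplest of these predictions, Var{f(0)} = 2H(0)/L, is again straightforward to evaluate for a model F(Δx). A minimal sketch (the exponential example correlation and the grid cutoff are my own choices):

```python
import numpy as np

def var_f0(Fcorr, L, zmax=10000):
    """Var{f(0)} = 2 H(0) / L, with H(0) = ∫ dz F(z)² evaluated on an
    integer grid, per Eqs. (H.7)-(H.8)."""
    z = np.arange(-zmax, zmax + 1)
    H0 = np.sum(Fcorr(z) ** 2)
    return 2.0 * H0 / L

# Example: exponential correlations with length 5 and scan length L = 1000:
# the standard deviation of the variance estimate f(0) is about 10% of F(0)
v = var_f0(lambda z: np.exp(-np.abs(z) / 5.0), 1000)
```

The same H-based bookkeeping, with the derivative formulas (H.9)-(H.11), extends this to the error bars on x_½, x_i, and x_r.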