Finite Temperature Phase Transitions in the Quenched BFSS Matrix Model

by Simon Andrew Yewchuk
B.Sc., University of Alberta, 2004

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in The Faculty of Graduate Studies (Physics)

The University of British Columbia
January 2007

© Simon Andrew Yewchuk, 2007

Abstract

A lattice Monte Carlo simulation of the BFSS matrix model of M-theory with fermions removed is discussed. It is expected that a Hagedorn/deconfinement type phase transition exists in this model and that the deconfined state is related to the black hole states of type IIA supergravity. The black hole phase is expected to be recovered in the 't Hooft large N limit of the BFSS model. The numerical simulation is constructed by investigating the behaviour of various toy models, using the Polyakov loop as an order parameter for the transition. A Faddeev-Popov gauge fixing is used to handle the gauge fields. First order phase transition behaviour can readily be seen in this model. At finite N it appears as an analytic step function, which is expected to become a true step function in the large N limit. The final Monte Carlo simulations are done on a twenty site lattice, using a value of N = 40. The phase diagram of the transition is mapped out for dimensions two through nine, and the phase behaviour for nine dimensions is investigated for when a mass term is added to the action. The phase behaviour of a related model, the pp-wave matrix model, is also discussed.
Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgements
Dedication

Chapter 1 Introduction
1.1 String Theory and M-theory
1.2 BFSS Matrix Model
1.3 Motivation: Connection to Black Hole Thermodynamics
1.4 Phase Transitions
1.5 Numerical Approach
1.5.1 Remove Fermions
1.5.2 Euclidean Action
1.5.3 Gauge Fixing
1.5.4 Path Integral Monte Carlo
1.6 Plane Wave Matrix Model

Chapter 2 Monte Carlo Techniques for Solving Path Integrals
2.1 Monte Carlo Integration
2.2 Metropolis Algorithm
2.3 Heat Bath Algorithm
2.4 Simulation Details
2.4.1 Sweeps and Updating
2.4.2 Thermalization and Starting Conditions
2.4.3 Autocorrelation

Chapter 3 Harmonic Potential
3.1 Metropolis for Harmonic Potential
3.2 Heat Bath for Harmonic Potential
3.3 Heat Bath/Metropolis Comparison
3.3.1 Run-Time History
3.3.2 Autocorrelation
3.3.3 2-Point Correlation Function
3.3.4 Remarks

Chapter 4 Commutator Potential
4.1 Commutator Metropolis
4.2 Auxiliary Field Heat Bath
4.2.1 Auxiliary Fields
4.2.2 Diagonal Elements
4.2.3 Off-Diagonal Elements
4.3 Heat Bath Without Auxiliary Fields
4.3.1 Diagonal Elements
4.3.2 Off-Diagonal Elements
4.4 Comparison to Published Results
4.4.1 Non-Periodic Boundary Conditions
4.4.2 Results

Chapter 5 Integration Over Gauge Variables
5.1 Derivation of Gauge Fixing
5.2 Toy Models
5.2.1 Phases
5.2.2 Metropolis for Gauge Variables
5.2.3 Vary λ′ Parameter
5.2.4 Vary g Parameter

Chapter 6 Gauge Fields & Harmonic Potential
6.1 Exact Form for Harmonic Potential
6.2 Full Integration Over Matrices and Gauge Variables
6.2.1 Updating Gauge Variables
6.2.2 Gauge Boundary Condition for Matrices
6.3 Results

Chapter 7 Quenched BFSS Matrix Model
7.1 Setup
7.1.1 Lattice Size
7.1.2 Size of N
7.1.3 Autocorrelation
7.1.4 Save and Restart
7.1.5 Hysteresis
7.2 Fitting
7.2.1 Bootstrap Error
7.2.2 Systematic Error
7.3 BFSS Model
7.4 Commutator & Mass Term Potential

Chapter 8 Quenched Plane Wave Matrix Model
8.1 PP-Wave Heat Bath
8.1.1 Auxiliary Fields
8.1.2 Diagonal Elements
8.1.3 Off-Diagonal Elements
8.2 PP-Wave Phase Diagram

Chapter 9 Conclusion

Bibliography

Appendix A Analytic Calculations for Matrix Integrals
A.1 One Dimensional Harmonic Potential
A.1.1 Solving Via Discrete Fourier Transform
A.1.2 Solving Via Functional Determinant
A.1.3 2-Point Correlation Function
A.2 Matrix Harmonic Potential
A.3 Matrix Harmonic Potential With Diagonal Gauge - Exact Form

List of Tables

4.1 Comparison of the Metropolis, auxiliary field heat bath and direct heat bath simulations with published results for O₄
7.1 Comparison of the behaviour of the Polyakov loop operator for different lattice sizes
7.2 Comparison of the fit of β_c for the harmonic potential to the exact result
7.3 Location of the nine dimensional phase transition in the BFSS model
List of Figures

3.1 A run-time history of O₂ comparing the Metropolis and heat bath simulations to the exact value
3.2 Autocorrelation of the Metropolis and heat bath simulations for O₂
3.3 The 2-point lattice correlation function for the harmonic potential comparing Metropolis and heat bath simulations to the exact value
5.1 Expected toy model phase diagram in the large N limit
5.2 Run-time history of the Polyakov loop for different values of λ′ with g = 0
5.3 Expectation value of the Polyakov loop operator varying λ′ for different values of N with g = 0
5.4 A sample confined and deconfined state
5.5 Distribution of eigenvalues for different values of λ′
5.6 Expectation value of the Polyakov loop operator for different values of g with λ′ held fixed
5.7 First and second derivative of the Polyakov loop operator with respect to g
6.1 Expectation value of the Polyakov loop for the harmonic potential varying β
7.1 Comparison of phase transition behaviour of the Polyakov loop for N = 20 and N = 40
7.2 Hysteresis effects in the phase transition region
7.3 Fit of the phase transition for N = 20 and N = 40
7.4 Histogram showing the distribution of bootstrap fits for β_c
7.5 Polyakov loop expectation value in the BFSS model for different dimensions
7.6 Phase diagram for varying dimension in the BFSS model
7.7 Polyakov loop expectation value for fixed values of mass and commutator coupling
7.8 Phase diagram comparing mass coupling to commutator coupling
8.1 Polyakov loop expectation value varying β for the pp-wave model
8.2 Phase diagram for the pp-wave model

Glossary

Acceptance Rate  In Metropolis Monte Carlo, the ratio of how often an update is accepted to the total number of updates is referred to as the acceptance rate. It is useful in monitoring the efficiency of the Monte Carlo simulation.
Autocorrelation  The correlation of a quantity with itself, as separated by a number of sweeps.

Auxiliary Field  An additional field with no physical significance that is added to the action for calculation purposes.

Confinement/Deconfinement  In QCD, confinement is the regular phase of quarks, such that they cannot exist in isolation. In the BFSS model, confinement refers to the fact that the states of the Hamiltonian must be singlets under the gauge symmetry. This requirement is then broken by the Hagedorn transition.

D0-Brane  In string theory, a D-brane is a multidimensional hypersurface on which strings can begin and end, as characterised by Dirichlet boundary conditions. A D0-brane is then a single point.

Euclidean Space  A substitution of the time variable by t′ = it transforms the action from Minkowski space to Euclidean space. This sometimes results in a problem that is easier to solve and is useful for analyzing the thermodynamic properties of a system.

Field Configurations  The purpose of the Monte Carlo simulation is to create a sequence of field states, called configurations. Each configuration is a unique state of the fields.

Hagedorn Temperature  The limiting temperature of certain systems above which the partition function diverges. This may then also be the limiting temperature for a phase transition.
Heat Bath Algorithm  The heat bath algorithm isolates one variable at a time and relies on direct distribution sampling as a means of generating a probability distribution.

Hysteresis  Hysteresis is a lag in the response of a system to changes in the forces acting on it. This causes the behaviour of the system to depend on its immediate history.

Importance Sampling  Importance sampling is a weighted Monte Carlo simulation. Preference is given to parts of the domain where the function of integration is expected to be more substantial.

Markov Chain  A Markov chain is a sequence of states generated from a stochastic process that has the Markov property. This property dictates that future states are dependent only on the current state and not on any past states. Thus, the path leading up to the current state is not important in determining future states.

Metropolis Algorithm  An algorithm for sampling an arbitrary probability distribution. It involves proposing small steps and taking a weighted random walk through configuration space.

Monte Carlo Integration  An integration technique that uses random configuration sampling to get an estimate for the value of an integral.

M-Theory  M-theory is an 11-dimensional non-perturbative theory that is conjectured to be a unification of the five different superstring theories. The M is as yet undefined.

Perturbation Theory  A mathematical technique for finding an approximate solution to a problem that cannot be solved exactly. It involves taking a power series expansion in some small parameter that deviates from an exact solution.
Polyakov Loop  The Polyakov loop is an order parameter for the breaking of centre symmetry and can be used to identify when a gauge theory is deconfined.

Pseudo-Random  A computer generated sequence of random numbers that conforms to statistical tests of randomness, but is nonetheless generated via some algorithm, is said to be pseudo-random.

Quenched Approximation  In lattice gauge theory, a simulation in which the fermion determinant is set to one, effectively ignoring the contribution of fermions, is known as the quenched approximation.

Run-Time History  A run-time history of an observable is a record of the value of that observable after every sweep, starting from the beginning of the simulation.

Standard Model  The standard model of particle physics is the current theory which describes the strong, weak, and electromagnetic fundamental forces, as well as the fundamental particles that make up all matter.

Sweep  The process of updating the entire lattice once is called a sweep.

Thermalization  The initial process of bringing the Monte Carlo simulation to a dynamic equilibrium is referred to as the thermalization of the system.

Updating  Updating is the process of replacing a field variable with a new random variable, in accordance with an algorithm, for construction of the next configuration.

Acknowledgements

I would like to thank my supervisors: Gordon Semenoff for his patience and insight, and Richard Woloshyn for his thoroughness and countless hours spent in meetings at TRIUMF. This work was made possible through contributions from NSERC and WestGrid. I would also like to thank my parents and sisters for their encouragement and support.

To my uncle Wally. He will be missed.

Chapter 1 Introduction

1.1 String Theory and M-theory

String theory is a modern attempt to come up with a theory of everything. Its ultimate goal is to unify gravity with the other three fundamental forces.
The idea behind string theory is that the fundamental degrees of freedom are one-dimensional objects called strings. This goes beyond the Standard Model of particle physics, where the fundamental objects are zero-dimensional point particles. This extension is somewhat analogous to going from the real line to the complex plane. String theory is still in the developmental stages. It has yet to yield any real experimental proof or prediction, leading to some debate over the validity of the theory. There are five different types of consistent superstring theories. These are type I, type IIA, type IIB, heterotic SO(32) and heterotic E₈ × E₈. These differ depending on whether the strings are open/closed, whether they are oriented and on how they treat electrical charge. Each of these is a 10-dimensional theory and is set up as a perturbative expansion in terms of the string length ℓ_s and string coupling g_s. Various correspondences have been discovered between these different types of string theory, and this has led to the conjecture that they are all part of some larger, yet still undiscovered 11-dimensional non-perturbative theory called M-theory [1], which also includes 10-dimensional supergravity as an additional limit. The M is undefined and may be made to refer to things such as "matrix" or "membrane" as desired.

1.2 BFSS Matrix Model

Conjectured in 1996 by Banks, Fischler, Shenker, and Susskind [2], the BFSS matrix model proposes an equivalence between a system of N D0-branes and M-theory in the infinite momentum frame. The model is supersymmetric quantum mechanics with action [2, 3]

S = ∫ dt tr[ (1/2R)(DX^i)² + (R/4ℓ_pl⁶)[X^i, X^j]² ] + fermion terms.   (1.1)

Here i, j = 1, ..., 9 are spatial dimensions and all degrees of freedom are N × N Hermitian matrices. This is a gauge invariant theory, where D = d/dt − i[A, ·] represents the covariant time derivative and A is the 1d gauge field. The coupling is written in terms of the compactification radius R and the eleven dimensional Planck length ℓ_pl.
Alternatively, this can be written in terms of the 10-dimensional string coupling g_s and string length ℓ_s = √α′, where

ℓ_pl = g_s^{1/3} ℓ_s,   R = g_s ℓ_s.   (1.2)

This action already describes the dynamics of N D0-branes of type IIA superstring theory and should represent M-theory in the N → ∞ limit.

1.3 Motivation: Connection to Black Hole Thermodynamics

One motivation for work in this area is the relation between the finite temperature states of the BFSS model and the black hole states of type IIA supergravity [4, 5]. This idea has been explored in papers by Kabat and Lowe [6, 7]. Beginning with the Bekenstein-Hawking entropy of a ten-dimensional, non-extremal black hole, which is the area of the event horizon in Planck units, they used a Gaussian density approach to calculate the entropy of a black hole in strongly coupled quantum mechanics. Written in terms of gauge parameters in the classical limit, this expression has the form

F/T = −4.115 N² (T³/λ)^{3/5}.   (1.3)

There are a couple of reasons why this formula is of interest. First is its dependence on the 't Hooft large N limit. This limit is N → ∞, g_YM → 0, holding λ := g²_YM N fixed [8], and is the appropriate limit to look for black hole behaviour in the BFSS model. Second, in the formula the dependence of the free energy is of order N². This is the expected behaviour of a gauge system in the deconfined phase, which may be equivalent to a black hole state. A reproduction of this result from the matrix model would be very significant. To find this behaviour in the matrix model, one must look in the strong coupling regime, which is not accessible to perturbation theory, and as such no analytic derivation of this expression has been made. An alternative idea is to study the BFSS model via numerical simulation. This is a fairly novel approach and is not subject to the same limitations as perturbation theory.
1.4 Phase Transitions

For systems with a density of states that grows exponentially, there exists an upper limiting temperature referred to as the Hagedorn temperature, above which the partition function diverges and no longer exists, such that lim_{T→T_H⁻} tr[e^{−βH}] = ∞. A Hagedorn-like density of states is common to basic string theories and is found in the matrix harmonic oscillator [9]. Above the cutoff temperature T_H, there is an exponential growth in the asymptotic density of states and the thermodynamic canonical ensemble does not exist. However, it can be made to exist by keeping N large but finite. At this point, the energy and entropy would be dominated by states at and above the cutoff scale and the free energy, which is normally zero, would jump to order N² in the N → ∞ limit [3]. This is similar to a large-N Gross-Witten type phase transition [10], which is a 3rd order phase transition behaviour for unitary matrices. In large-N gauge theories, such as weakly coupled Yang-Mills theory [11-13], a Hagedorn transition has been found and is referred to as a deconfinement type transition. It is expected that a similar deconfinement type transition can be found in the BFSS matrix model [3]. Although there is no spatial extent in the BFSS model in which the particles can be separated, confinement refers to the fact that the quantum states of the Hamiltonian must be singlets under the gauge symmetry [3], which can then be broken by the Hagedorn transition. In adjoint models, this transition is associated with the breakdown of centre symmetry, A(t) ↦ A(t) + c·1. There is an order parameter for this type of symmetry breaking, the Polyakov loop [14]. It is the trace of the holonomy of the gauge field around the finite temperature Euclidean time circle,

P = (1/N) tr 𝒫 exp(i ∮ A).   (1.4)

This operator is gauge invariant and obtains a nonzero expectation value when a gauge theory is deconfined.
In the BFSS model, this deconfined state can then possibly be compared to a black hole. It will be interesting to look at the behaviour of this operator in the matrix model.

1.5 Numerical Approach

1.5.1 Remove Fermions

The focus of this thesis is thus to develop a numerical simulation of the BFSS model. As a first step, the effect of fermions in the action will be ignored. There are various computational difficulties that make inclusion of the fermions a considerable task. One is the issue of fermion doubling. Putting the fermions on the lattice causes the fermions to behave as multiple degenerate particles. One means of managing this problem, and perhaps the oldest, is to add a Wilson term to the action. This causes the degenerate states to decouple, but is not without additional difficulties. Another issue is that although the fermions can be integrated out of the action easily, because of their chiralities, doing so leaves behind a fermion determinant which is complex. Usually the fermion determinant is already very computationally intensive. To complicate matters, it is not even clear how to handle a complex functional integral, which needs to be real in order to have sensible physical meaning. Thus, as a result of these problems, calculations involving fermions are extremely difficult. Until recently (thanks to the advent of even more computing power), dropping the fermions was a fairly standard approach in lattice QCD calculations and is known as the quenched approximation. It usually results in uncontrollable 10 or 20 percent errors. There are some encouraging reasons as to why the effects of fermions can be ignored in this model. For one, due to anti-periodic boundary conditions there is a gap in the fermion spectrum at low temperatures for which fermions cannot be excited, leaving only the behaviour of the bosons.
Alternatively, at high enough temperatures, the fermions will decouple and should behave in a similar manner to that of bosons. This gives two regimes in which the fermions are not that important. Another interesting issue is that the effects of supersymmetry predict that the phase transition should occur at Temp = 0. Supersymmetry is unstable at Temp ≠ 0 and should be broken by temperature. By dropping the fermions, the phase transition temperature will be shifted to some finite non-zero value. Simulating the bosonic part of the action is a problem in its own right and will be the focus of the work herein. It is hoped that the absence of fermions will not alter results too much and that studying the bosonic part of the system will still reflect the behaviour of the system as a whole. This work may also serve as a basis to which fermion corrections can be added at some point in the future.

1.5.2 Euclidean Action

The thermodynamics of the system can be investigated by looking at the action in Euclidean space and compact Euclidean time. This is a Wick rotation of the time variable (t′ = it) in the regular Minkowski space action and has the effect of changing the signs of a few of the terms in the action. The system then exists at some finite temperature Temp = 1/β. The thermodynamics of the system can be described by means of the partition function, which is an expression that effectively counts all possible states of the system. The partition function and action for this model are

Z = ∫ [dX][dA] e^{−S_E}   (1.5)

and

S_E = (N/2λ) ∫₀^β dt tr[ (DX^i)² + μ²(X^i)² − ½[X^i, X^j]² ],   (1.6)

where D = d/dt + i[A, ·] and the boundary conditions are periodic in time, X^i(t + β) = X^i(t) and A(t + β) = A(t). The commutator coupling has been written in terms of the 't Hooft coupling λ = g²_YM N, which is designed to make the behaviour of the system invariant under different values of N. As a further modification of equation (1.1), a mass term μ²(X^i)² has been added to the system.
Without the commutator term, equation (1.6) then represents a matrix harmonic oscillator, which is a useful sub-problem for studying this system.

1.5.3 Gauge Fixing

The next step is to select a choice of gauge. The gauge symmetry in the action reflects a redundancy of the dynamical variables. This can create difficulties when performing the integration because the partition function will take on an infinite number of states, which is not compatible with a numerical simulation. In lattice gauge theory, a common means of handling the gauge symmetry is to represent the gauge fields as link variables between the sites of the lattice, which then integrate to a finite number of states. The quenched BFSS model was investigated using this technique by Bialas and Wosiek [15], for low values of d and N. One of their main results was to show that the transition temperature is consistent with the 't Hooft scaling. For this thesis, an alternate means of handling the gauge fields is used. Hopefully this will provide the means to study the system at larger values of N. The gauge is chosen so that the gauge variables interact only with the matrices at the boundaries of the matrix integral. Next, it is diagonalized so that the gauge variables behave as angles on the unit circle. This introduces a Faddeev-Popov determinant into the partition function. The exact details for doing so are given in chapter 5. The partition function and action then have the form

Z = ∫ ∏_a dθ_a ∏_{a<b} sin²((θ_a − θ_b)/2) ∫ [dX] e^{−S_E}   (1.7)

and

S_E = (N/2λ) ∫₀^β dt tr[ (Ẋ^i)² + μ²(X^i)² − ½[X^i, X^j]² ],   (1.8)

with boundary conditions X^i(t + β)_{ab} = X^i(t)_{ab} e^{i(θ_a − θ_b)}. Here the θ_a = βA_a are the eigenvalues of the system. This sets up the framework for the numerical simulation of this problem. The phases of the system can then be investigated by examining the behaviour of the eigenvalues. The Polyakov loop in discrete form is

P = (1/N) Σ_a e^{iθ_a}.   (1.9)

This operator should average to zero in the confined phase and to a nonzero value in the deconfined phase.
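As an illustration of how this order parameter could be measured in a simulation, the following sketch evaluates the 1/N-normalized loop for an array of gauge eigenvalues θ_a. The function name and the two example configurations are hypothetical, chosen only to show the two limiting behaviours; NumPy is used for convenience.

```python
import numpy as np

def polyakov_loop(theta):
    """|P| = |(1/N) * sum_a exp(i * theta_a)| for gauge eigenvalues theta_a."""
    return abs(np.exp(1j * np.asarray(theta)).mean())

N = 40
# Confined-like configuration: eigenvalues spread uniformly around the circle.
confined = 2 * np.pi * np.arange(N) / N
# Deconfined-like configuration: eigenvalues clustered at a single point.
deconfined = np.zeros(N)
```

Uniformly distributed eigenvalues give |P| ≈ 0, while clustered eigenvalues give |P| ≈ 1, matching the confined and deconfined behaviour described above.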
1.5.4 Path Integral Monte Carlo

The numerical work of the problem is to now put the partition function on a lattice and calculate the expectation value of observables by solving path integrals. For a general path integral of the form Z = ∫[dX] e^{−S_E[X]}, the expectation value of an observable is calculated as

⟨O⟩ = (1/Z) ∫ [dX] O[X] e^{−S_E[X]}.   (1.10)

Integrals of this form can be evaluated using Monte Carlo integration, a technique which relies on random number generation to estimate the value of the integral. The content of this thesis focuses on building up a simulation of equations (1.7) and (1.8) and is done in a sequence of steps. To begin, the basics of Monte Carlo integration are covered in chapter 2, where the properties of two general algorithms, Metropolis and heat bath, are outlined. In chapter 3 a toy model using a harmonic potential μ²X², with gauge variables set to zero, is investigated. This is an exactly solvable model and is a good starting point for testing numerical integration. Next, integration over the commutator term [X^i, X^j]² is covered in chapter 4. There are three possible algorithms for simulating this term and a cross-check between the three will provide a good test of the validity of the simulation up to this point. In chapter 5, the integration over the gauge variables is investigated using a toy model to represent the matrix integration. This toy model will give a good idea of what to expect for the behaviour of the final simulation. Integration over both the gauge variables and a harmonic potential is studied in chapter 6, and finally in chapter 7 the full simulation of the bosonic BFSS matrix model is studied.

1.6 Plane Wave Matrix Model

A slight mathematical extension of the BFSS model is the matrix model on a pp-wave spacetime. Like Minkowski space, the pp-wave spacetime is maximally supersymmetric. This model can be studied perturbatively, but again not in the strong coupling regime.
It is itself somewhat of a toy model for study of the BFSS model. The action is [16, 17]

S_pp = ∫ dt tr[ (1/2R)(DX^i)² + (R/4)[X^i, X^j]² − (1/2R)(μ/3)²(X^I)² − (1/2R)(μ/6)²(X^{I′})² − i(μ/3)ε_IJK X^I X^J X^K ] + fermion terms.   (1.11)

Here D = d/dt − i[A, ·], all degrees of freedom are N × N matrices and i, j = 1, ..., 9 are spatial dimensions, which are then also split into three dimensional and six dimensional subspaces with I, J, K = 1, 2, 3 and I′ = 4, ..., 9. The problem is set up in the same manner as before. After ignoring the fermions and writing the coupling in terms of the 't Hooft coupling, the Euclidean action has the form

S_E = (N/2λ) ∫₀^β dt tr[ (DX^i)² − ½[X^i, X^j]² + (μ/3)²(X^I)² + (μ/6)²(X^{I′})² + 2i(μ/3)ε_IJK X^I X^J X^K ],   (1.12)

where D = d/dt + i[A, ·] and all other notation is as it was previously. This action is very similar to that of equation (1.6). The phase transition behaviour of this model is studied in chapter 8. Various dynamical aspects of the plane wave matrix model have recently been investigated in work by Kawahara and Nishimura [17].

Chapter 2 Monte Carlo Techniques for Solving Path Integrals

This chapter introduces the computational techniques needed to simulate a path integral system. To begin, consider a general partition function in terms of matrix variables/fields X(t) that are continuous in time,

Z = ∫ [dX] e^{−S_E[X]},   (2.1)

and the Euclidean action for a general potential,

S_E = ∫₀^β dt tr[ ½(Ẋ^i)² + V(X^i) ],   (2.2)

where the boundary conditions are periodic, X(t + β) = X(t). Here X^i is an N × N matrix and i = 1, ..., d are spatial dimensions. For now, the gauge fields are ignored and their inclusion will be discussed in a later chapter. The partition function is a means of counting how many states are accessible to a particular system and e^{−S_E} behaves as a weighting function for each particular state. In this form, the system is at finite temperature with β = 1/Temp. This formulation is useful in thermodynamics or statistical mechanics and can ideally be used to calculate interesting quantities such as entropy or free energy. The partition function is to be integrated over all fields at all times.
The standard means for doing so is to divide the time variable into slices and evaluate each spatial integral at a fixed time [18, 19]. The first step is to discretize the time variable into a lattice of T points separated by spacing a. Throughout this paper, T will represent the number of lattice sites and is not to be confused with the system temperature. Instead of continuous fields X(t), the fields will only exist at discrete lattice points X_t. The partition function on the lattice is then

Z_lat = ( ∏_{t=1}^T ∫ dX_t ) e^{−S_lat[X]}.   (2.3)

The action must also be changed to its lattice form. To do so, the integrals with respect to time are replaced by summations and the time derivatives are replaced by difference equations. The prescription for this is:

∫₀^β dt → a Σ_{t=1}^T,   (d/dt)X(t) → (X_t − X_{t−1})/a.   (2.4)

Thus the lattice action will have the form

S_lat = a Σ_{t=1}^T [ ½((X_t − X_{t−1})/a)² + V_lat(X_t) ].   (2.5)

Here, the sum over spatial dimensions is implicit. Care must also be taken to handle the boundary conditions. In the case of periodic boundary conditions on the lattice, they are just X_{t+T} = X_t. Additional subtleties may be required for other types of boundary conditions. This setup should describe the behaviour of the partition function up to a multiplicative constant, provided that a suitable number of lattice sites is used. The expectation value of an observable can be calculated as

⟨O⟩ = ∫[dX] O[X] e^{−S_E[X]} / ∫[dX] e^{−S_E[X]}.   (2.6)

This is just the integral over all possible values of the observable with e^{−S_E} as a weighting factor, normalized by the total volume of the system. The numerical problem is to now solve path integrals of this form.

2.1 Monte Carlo Integration

A path integral such as the above may typically have on the order of 10⁵ integration directions.
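Before turning to integration techniques, the lattice prescription of equations (2.4) and (2.5) can be sketched in code. This is a minimal evaluation for a single scalar degree of freedom (no trace or spatial sum), with the periodic boundary X₀ identified with X_T via np.roll; the function name and test values are hypothetical.

```python
import numpy as np

def lattice_action(X, a, V):
    """S_lat = a * sum_t [ 0.5 * ((X_t - X_{t-1}) / a)**2 + V(X_t) ],
    with periodic boundary conditions X_{t+T} = X_t."""
    kinetic = 0.5 * ((X - np.roll(X, 1)) / a) ** 2
    return float(a * np.sum(kinetic + V(X)))

# Free field (V = 0) on a two-site lattice with spacing a = 1:
# the two hops each contribute 0.5, so S = 1.
S = lattice_action(np.array([0.0, 1.0]), 1.0, lambda x: 0.0 * x)
```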
If one were to use a mesh of only 10 sites per direction to get a brute force estimate of the integral, it would require 10^100000 summations! A more clever way is needed to calculate such integrals. One such technique is Monte Carlo integration, which relies on random number generation to get an estimate of the integral. Consider the calculation of the average value of the function f(x) on volume space V. As an integral, this is just ⟨f⟩ = (1/V) ∫_V f(x) dx. The idea behind Monte Carlo integration is that an estimate for this value can be calculated by choosing values in the domain at random, x_i ∈ V, and averaging the values of the function at those points. Thus,

(1/V) ∫_V f(x) dx ≈ (1/n) Σ_{i=1}^n f(x_i).   (2.7)

This algorithm has a convergence of 1/√n, where n is the number of sample points. The convergence remains unchanged regardless of the size of the system, making this technique ideal for very large systems such as this one. For statistically independent sample points, the error is the standard deviation of the values of f(x_i), divided by √n. Thus the error in the estimate of ⟨f⟩ is

σ_⟨f⟩ = (1/√n) [ (1/n) Σ_{i=1}^n (f(x_i) − f̄)² ]^{1/2}.   (2.8)

Numerous algorithms exist for generating pseudo-random numbers in the interval (0,1). These algorithms will typically generate a long period (e.g. 2³²) sequence of numbers, each between 0 and 1. They are referred to as pseudo-random because they conform to most statistical tests of randomness, but are nonetheless generated from an algorithm. The reader is referred to the text Numerical Recipes [20] for further details and implementation. Another key Monte Carlo concept is that of importance sampling [21]. Rather than
If the function of integration is sharply peaked, it is possible to get a better (and faster) estimate of the integral, if preference is given to the region where the function is substantial. Otherwise, much time will be wasted by sampling regions where the function value is negligible. To see how the idea of importance sampling works, let p(x) be probability distribution, normalized such that fvp(x)dx = 1. Then if points in V are sampled according to this distribution, which is equivalent to the change of variables <f> : V i—• V, where y = <f>(x) = Jx p(x')dx'; dy = p(x)dx, equation (2.7) becomes 1 f(Xi) ,-1, x « - Z . - 7 - T . Xi = 4> (yi). n p(xi) This procedure is optimized when p(x) closely resembles the function of integration. These ideas naturally extend to equation (2.6). In order to calculate an observable, e~Ss can be used as the probability distribution for sampling points in the domain. This is highly appropriate since C ~ S B is already a weighting function. The estimate for the value of an observable is then just The problem is to now generate a set of field configurations, {X^}, in accordance with the weighting e~SE^xh Once a means of generating these configurations is created, it will be possible in principle to measure any observable that is a function of the fields, simply by applying equation (2.10). Thus the bulk of the numerical simulation is in the creation of the field configurations. Techniques for doing so are presented in the following two sections. 2.2 Metropolis Algorithm Developed in 1953 by Metropolis et al. [22], the Metropolis algorithm is a simple algorithm for generating an arbitrary probability distribution. It works by proposing a random change 12 Chapter 2. Monte Carlo Techniques for Solving Path Integrals to the current configuration and accepting the change a certain proportion of the time. The result is a weighted random walk through configuration space. 
This produces a sequence of configurations known in mathematics as a Markov chain. It is necessary to create such a sequence for the matrix fields, $\{X^{(i)}\}$, satisfying the probability distribution $P(X) = e^{-S_E[X]}$. The following four-step approach outlines the process of going from configuration $X^{(i)}$ to configuration $X^{(i+1)}$ in the creation of the chain.

1. Given $X^{(i)}$,
2. create a new point $X'$ that is a small random distance from $X^{(i)}$;
3. let $a = P(X')/P(X^{(i)})$;
4. then take

$$X^{(i+1)} = \begin{cases} X' & \text{if } a \geq 1, \\ X' \text{ with probability } a,\ \text{otherwise } X^{(i)} & \text{if } a < 1. \end{cases}$$

For the second step, a new trial point $X'$ is created by proposing a small random change to $X^{(i)}$, such as adding to it a random matrix $A_R$: $X' = A_R + X^{(i)}$. It is necessary that the proposed random change be reversible with equal probability. That is, if the current state is $X_1$, the probability of proposing $X_2$ as the next trial state should be the same as the probability of proposing $X_1$ if $X_2$ were the current state. The probability that either $X_1$ or $X_2$ is then accepted as the next state is determined by the third step. The Metropolis algorithm is also possible without this restriction, but this would make the comparison in the third step slightly more complicated.

In steps three and four, the effect of the proposed change on the value of the probability distribution is compared. If the change causes the value of $P(X)$ to increase, it is accepted; otherwise it is only accepted a certain proportion of the time. This is implemented by drawing a random number $u \in (0,1)$: if $a > u$ the change is accepted, otherwise the original configuration is kept.

2.3 Heat Bath Algorithm

The exact details of a heat bath implementation vary depending on the particular system being studied. The main idea is that a direct means of generating the probability distribution for a single variable is needed.
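The four-step Metropolis rule of section 2.2 can be sketched for a single real variable. The toy target $P(x) \propto e^{-x^2/2}$, the seed, and the proposal width below are illustrative choices, not anything taken from the thesis:

```python
import math
import random

def metropolis_chain(n_steps, step=2.0, rng=random.Random(1)):
    """Weighted random walk sampling P(x) ~ exp(-x^2/2) via the Metropolis rule."""
    x, chain = 0.0, []
    for _ in range(n_steps):
        x_new = x + step * (rng.random() - 0.5)        # symmetric, reversible proposal
        a = math.exp((x * x - x_new * x_new) / 2.0)    # a = P(x') / P(x)
        if a >= 1.0 or rng.random() < a:               # accept with probability min(1, a)
            x = x_new
        chain.append(x)                                # a rejection repeats the old value
    return chain

chain = metropolis_chain(200_000)
```

After many steps the histogram of `chain` approaches the unit Gaussian, even though only ratios of $P$ were ever evaluated; this is the property that carries over to the matrix fields.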
The approach is to isolate a single variable and assign a random value to it while holding all other variables fixed. This process is then repeated for each variable in turn. It requires a direct means of generating the probability distribution, which may not always be available. The process is similar to bringing each variable into equilibrium with a thermal reservoir, which is how the technique gets its name. The matrix systems being studied have individual components that are quadratic and that can be reduced to the form of a Gaussian:

$$e^{-S_E(x_i)} \propto e^{-\frac{(x_i-\rho)^2}{2\sigma^2}}. \qquad (2.11)$$

The commutator-squared term, $[X^i,X^j]^2$, in equation (1.8) may appear to be quartic, but the contributions from matrices of different spatial dimensions, $X^i$ and $X^j$, are actually quadratic and can be reduced to the above form. The algorithm for generating a Gaussian random deviate $\gamma$ (with unit standard deviation) is straightforward; the reader is referred to Numerical Recipes [20] for details. This distribution can then be modified to correspond to a Gaussian distribution with arbitrary centre and width. A heat bath is created by isolating the contribution of a single variable to the action and writing it in the form of equation (2.11). The new random value assigned to the variable is

$$x_i^{new} = \gamma\sigma + \rho. \qquad (2.12)$$

The heat bath proceeds by calculating the Gaussian probability distribution for each variable and updating each variable in this fashion.

2.4 Simulation Details

The following section outlines some additional basic concepts relevant to Monte Carlo simulations. If some of these concepts seem unclear, a more concrete example is given in the next chapter.

2.4.1 Sweeps and Updating

In the simulation, the lattice exists as a set of matrix variables $\{X_t\}$ for $t = 1,\ldots,T$. This represents $N^2 T$ complex variables and requires a fair amount of computer memory for large $N$.
Usually, it is practical to maintain only a single copy of the fields in their entirety. Thus there are some similarities between the simulation and an actual physical system, because at any given point in time the simulation exists in one particular state only. To perform the calculation of observables, it is necessary to produce a sequence of configurations. This is done by updating the various individual matrices and matrix components and replacing them with new values. In the Metropolis algorithm, it is necessary to do multiple updates per variable before moving on to the next, whereas in the heat bath, each component need only be updated once. A sweep over the lattice is completed after the entire lattice has been updated and replaced by a new configuration. After each sweep, only the values of relevant observables are stored for later use, not the details of the entire system state.

2.4.2 Thermalization and Starting Conditions

Another important consideration is the system initialization. Some possibilities include starting with all variables initially zero or all variables chosen at random. After the choice of starting conditions, the simulation must be run through multiple sweeps and brought to an equilibrium distribution before being used to generate data. This is known as thermalization. The number of sweeps required to thermalize a particular system depends on the details of the system. Typically this process is much slower for a Metropolis-type simulation than for a heat bath. In general, it is not the choice of initial conditions that matters; what is important is that the same equilibrium distribution must be reachable regardless of the starting conditions.

2.4.3 Autocorrelation

Finally, it is important to note that each step in the sequence of configurations is not independent; it depends strongly on the previous configuration.
If one were simply to average the value of the observables after every single sweep, the result would be a slightly skewed value of the observable, and the convergence of the algorithm would be worse than expected. Instead, it is necessary to run enough sweeps between data points that they are no longer correlated with the previous data point. The correlation of an observable with itself over successive sweeps can be measured with a quantity known as the autocorrelation. It is defined as [23]

$$C(\tau) = \langle A_n A_{n+\tau}\rangle - \langle A_n\rangle\langle A_{n+\tau}\rangle = \left\langle \left(A_n - \bar A\right)\left(A_{n+\tau} - \bar A\right)\right\rangle, \qquad (2.13)$$

where $\tau$ is the number of sweeps of separation. The autocorrelation function can be used to identify how many sweeps of separation are needed to produce statistically independent measurements.

Chapter 3 Harmonic Potential

As a first Monte Carlo simulation, consider a bosonic system that has only a mass term as the potential. The action is

$$S_E = \frac{1}{2}\int_0^\beta dt\,\mathrm{tr}\left[\dot X^2 + \mu^2 X^2\right], \qquad (3.1)$$

with periodic boundary conditions $X(t+\beta) = X(t)$. Here the $X$s are $N\times N$ Hermitian matrices. This system represents $N^2$ uncoupled harmonic oscillators. It can easily be extended so that the $X$s live in multiple spatial dimensions if desired, but because there is no interaction between dimensions, this would effectively just be multiple copies of the same system. This problem is exactly solvable; a discussion of how to do so is included in appendices A.1 and A.2, but for now the focus is on how to run a Monte Carlo simulation. Applying the discretization prescription given in equation (2.4), the lattice action has the form

$$S_{lat} = \frac{a}{2}\sum_{t=1}^{T}\mathrm{tr}\left[\left(\frac{X_{t+1}-X_t}{a}\right)^2 + \mu^2 X_t^2\right]. \qquad (3.2)$$

The task is now to generate field configurations weighted according to this action. The expectation value of any field-dependent observable can then be calculated simply by using equation (2.10).

3.1 Metropolis For Harmonic Potential

Using the framework outlined in the previous chapter, this section develops a Metropolis-style procedure for generating field configurations.
The idea is to propose a random change to some part of the system and then compare the effect on the action. This can be done by proposing a small change to any or all of the field components. The form of equation (3.2) suggests that it is advantageous to update an entire matrix at one lattice site, $X_t$. For each comparison, it is necessary to calculate only the change to a local part of the action, rather than the action as a whole. Isolating the contributions of $X_t$ in equation (3.2), the local part of the action for $X_t$ is

$$S_{loc} = \frac{a}{2}\mathrm{tr}\left[\left(\frac{2}{a^2}+\mu^2\right)X_t^2 - \frac{2}{a^2}X_t\left(X_{t+1}+X_{t-1}\right)\right]. \qquad (3.3)$$

The proposed random change has the form

$$X_t' = X_t^{old} + \delta A_t. \qquad (3.4)$$

Here $A_t$ is a random Hermitian matrix, where each element (both real and imaginary parts) has been generated as a random number in $(-0.5, 0.5)$, and $\delta$ is a configurable scaling factor. When $e^{-S_{loc}(X_t')}$ is compared to $e^{-S_{loc}(X_t^{old})}$, the change is accepted if

$$e^{-S_{loc}(X_t') + S_{loc}(X_t^{old})} > u, \qquad (3.5)$$

where $u \in (0,1)$ is a uniform random deviate. Otherwise the change is rejected. It is computationally efficient to repeat this process numerous times (20-30) for a given lattice site before moving on to the next one, the reason being that the value of $S_{loc}(X_t^{old})$ does not need to be recalculated each time. A sweep over the lattice is completed after each lattice site has been updated in this fashion. The $\delta$ parameter is tuned to increase the efficiency of this process. It is set based on the acceptance rate, the ratio of accepted updates to total proposed updates. If $\delta$ is large, the simulation takes a small number of large steps in configuration space, whereas if $\delta$ is small, it takes a large number of tiny steps. Ideally, some middle ground is desirable. An acceptance rate of 50%-70% is a reasonable guideline for getting a good sample of the configuration space.
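A sketch of this site-by-site Metropolis update in Python/NumPy follows. This is an illustration only, not the thesis code: the helper names, the seed, and the default parameters are choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_hermitian(N):
    """Random Hermitian matrix built from uniform deviates in (-0.5, 0.5)."""
    A = (rng.random((N, N)) - 0.5) + 1j * (rng.random((N, N)) - 0.5)
    return (A + A.conj().T) / 2

def s_loc(X, left, right, a, mu):
    """Local action (3.3) for the matrix at one site; left/right are its neighbours."""
    return (a / 2) * np.trace((2 / a**2 + mu**2) * (X @ X)
                              - (2 / a**2) * (X @ (left + right))).real

def metropolis_sweep(lat, a, mu, delta=0.5, hits=10):
    """One sweep over the lattice per eqs. (3.3)-(3.5), with several hits per site."""
    T = len(lat)
    for t in range(T):
        left, right = lat[t - 1], lat[(t + 1) % T]     # periodic boundaries
        s_old = s_loc(lat[t], left, right, a, mu)
        for _ in range(hits):
            X_new = lat[t] + delta * random_hermitian(lat[t].shape[0])
            s_new = s_loc(X_new, left, right, a, mu)
            # Acceptance test (3.5), capped to avoid overflow for favourable moves.
            if rng.random() < np.exp(min(0.0, s_old - s_new)):
                lat[t], s_old = X_new, s_new
```

After thermalization, averaging $\frac{1}{N^2 T}\sum_t \mathrm{tr}(X_t^2)$ over sweeps should fluctuate around the exact harmonic-oscillator value, roughly $1/(2\mu)$ at low temperature.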
3.2 Heat Bath for Harmonic Potential

For the heat bath, individual elements need to be isolated and reduced to the form of a Gaussian. This has essentially already been done in the local action of equation (3.3); it is then a simple matter to complete the square and put the expression in the form $e^{-\frac{(X_t-\rho)^2}{2\sigma^2}}$. In principle each component of the matrices, $(X_t)_{ab}$, must be updated individually. For this problem, however, since there is no interaction between the components of an individual matrix, they can all be updated simultaneously. As outlined in equation (2.12), the new value assigned to each $(X_t)_{ab}$ is $(X_t)_{ab}^{new} = \gamma\sigma + \rho_{ab}$, where

$$\sigma^{(diag)} = \frac{1}{\sqrt{a\alpha}}, \qquad \sigma^{(off\text{-}diag)} = \frac{1}{\sqrt{2a\alpha}}, \qquad \alpha = \frac{2}{a^2}+\mu^2, \qquad \rho = \frac{X_{t+1}+X_{t-1}}{a^2\alpha}. \qquad (3.6)$$

This expression involves complex numbers, so each $\gamma$ is a pair of Gaussian random deviates, a different set of which is generated for each element. This also takes into account the fact that the matrices are Hermitian: it is only necessary to update the off-diagonal elements for $a < b$ and then set $(X_t)_{ba} = (X_t)_{ab}^*$. This redundancy explains the differences in the factors of 2 in the $\sigma$ terms between the diagonal and off-diagonal elements. Each element needs to be updated only once before moving on to the next, because this process is already fully randomized. A sweep is completed after the entire lattice has been updated.

3.3 Heat Bath/Metropolis Comparison

A numerical simulation can now be created using each of these algorithms. It is a good idea to set up a shared framework for the non-algorithm-specific parts of the code, while leaving the algorithm-specific parts as modules which can be plugged in as desired. This makes development of the code easier in the long run.

3.3.1 Run-Time History

One useful check is to follow the behaviour of an observable after every sweep, starting from the beginning of the simulation. This is known as the run-time history of that observable.
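The heat-bath sweep of section 3.2 can be sketched in Python/NumPy as follows. Again this is an illustrative implementation with arbitrary parameter choices, not the thesis code; it draws every matrix element directly from the Gaussian of equation (3.6).

```python
import numpy as np

rng = np.random.default_rng(4)

def heat_bath_sweep(lat, a, mu):
    """Draw every site matrix directly from its Gaussian, per eqs. (2.12) and (3.6)."""
    T = len(lat)
    N = lat[0].shape[0]
    alpha = 2 / a**2 + mu**2
    sig_d = 1 / np.sqrt(a * alpha)        # diagonal width
    sig_o = 1 / np.sqrt(2 * a * alpha)    # off-diagonal width (halved by Hermiticity)
    for t in range(T):
        rho = (lat[(t + 1) % T] + lat[t - 1]) / (a**2 * alpha)   # Gaussian centre
        X = np.zeros((N, N), dtype=complex)
        for b in range(N):
            X[b, b] = sig_d * rng.standard_normal() + rho[b, b].real
            for c in range(b + 1, N):
                g = rng.standard_normal() + 1j * rng.standard_normal()
                X[b, c] = sig_o * g + rho[b, c]
                X[c, b] = np.conj(X[b, c])                       # keep X Hermitian
        lat[t] = X
```

Because each draw is exact, only one update per element is needed per sweep, and the chain decorrelates far faster than the Metropolis version of the same system.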
For example, consider the run-time history of a simple observable for the average size of $X^2$. The observable is

$$O_2 = \frac{1}{N^2 T}\sum_{t=1}^{T}\mathrm{tr}\left(X_t^2\right). \qquad (3.7)$$

Figure 3.1 shows the run-time history comparing the Metropolis and heat bath algorithms for $O_2$. In both cases, the starting conditions were all fields set to zero. The graph shows the evolution of the observable over time. In both cases the observable approaches and then oscillates about the theoretical value. For the Metropolis, these oscillations are typically much longer than for the heat bath. If enough data points are taken, the average should eventually converge to the theoretical value. It can also be seen that the heat bath simulation thermalizes much more quickly than the Metropolis simulation, reaching equilibrium after only a few tens of sweeps, whereas the Metropolis requires a few thousand sweeps to do so.

Figure 3.1: A run-time history of $O_2$ comparing the Metropolis and heat bath simulations to the exact value. Here $N = 16$, $T = 32$, $d = 1$, $\mu = 1$ and $a = 0.3$. The exact value is drawn as a straight line. The Metropolis simulation begins at zero, slowly thermalizes to the exact value and then has long-period oscillations about this line. The heat bath thermalizes to the exact value quickly and then has relatively fast oscillations about this line.

3.3.2 Autocorrelation

It is clear from the previous graph that the value of an observable after each sweep is not independent; it is related to the values that preceded it. Multiple sweeps must separate observables before they become statistically independent of each other. The autocorrelation, defined in equation (2.13), can be used as a measure of how many sweeps are necessary for this to happen.
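The estimator (2.13) is a direct transcription into code; the sketch below (illustrative only) measures the autocorrelation of a run-time history at a given sweep separation.

```python
def autocorrelation(series, tau):
    """C(tau) = <(A_n - mean)(A_{n+tau} - mean)> over a run-time history, eq. (2.13)."""
    n = len(series) - tau
    mean = sum(series) / len(series)
    return sum((series[k] - mean) * (series[k + tau] - mean) for k in range(n)) / n
```

For a perfectly alternating series the autocorrelation at $\tau = 1$ is exactly the negative of its value at $\tau = 0$, which makes a convenient sanity check; for simulation data one instead looks for the $\tau$ at which $C(\tau)$ has decayed to the noise level.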
Figure 3.2 plots the autocorrelation for $O_2$. There is an exponential drop-off in the autocorrelation as the number of sweeps of separation increases. This drop-off is much quicker for the heat bath than for the Metropolis, owing to the differences in the nature of the two simulations. From these results, it can be concluded that the heat bath needs about a 25-sweep separation to produce independent data points, whereas for the Metropolis, a separation of about 1400 sweeps should be used.

Figure 3.2: Autocorrelation of the Metropolis and heat bath simulations for $O_2$. For the Metropolis, the autocorrelation curve begins at one and slowly drops off exponentially as the number of sweeps of separation increases. After about 1400 sweeps, it levels off and then fluctuates about zero. The heat bath curve begins at the same value and drops off much more quickly. It levels off after about 25 sweeps and then continues to fluctuate about zero.

3.3.3 2-Point Correlation Function

A useful observable for this system is the correlation function between matrices at different lattice sites. This value can be calculated exactly; for reference, the derivation of the result is in appendix A.1.3. Figure 3.3 shows this correlator on a 32-site lattice. Both the Metropolis and heat bath simulations agree with the exact values to within error. This is convincing evidence that both algorithms are in working order.

Figure 3.3: The 2-point lattice correlation function for the harmonic potential comparing Metropolis and heat bath simulations to the exact value. Here $N = 16$, $T = 32$, $d = 1$, $\mu = 1.0$ and $a = 0.3$. The exact value is a V-shaped curve and is drawn with a continuous line.
The Metropolis and heat bath values of the correlator are plotted with error bars at each lattice site. In most cases, the agreement between the three values is dead-on; at the very base of the V, the Metropolis and heat bath values differ slightly from the exact value, but are still within error.

3.3.4 Remarks

The primary reason for using two different algorithms is that it provides a means of cross-checking results. Overall, the heat bath approach is faster per sweep and needs less time to produce statistically independent configurations, and is thus in general preferable. The main advantages of the Metropolis are that it is easy to construct and that it is possible to do so for almost any system. The heat bath, by contrast, generally requires that the problem be reducible to a suitably nice function, such as a Gaussian, which is not always the case. When the commutator term is added in the next chapter, the Metropolis simulation will remain easy to create, requiring the addition of only a single term, whereas the heat bath will become vastly more complicated. In summary, the heat bath approach is fast and useful for in-depth simulations, whereas the Metropolis is easy to implement and useful for getting a problem started.

Chapter 4 Commutator Potential

Now that a framework has been set up for doing a Monte Carlo simulation, the potential can be switched to a commutator-squared potential. The action is

$$S_E = \frac{1}{2}\int_0^\beta dt\,\mathrm{tr}\left[\sum_i\left(\dot X^i\right)^2 - \lambda N\sum_{i<j}\left[X^i, X^j\right]^2\right]. \qquad (4.1)$$

Here $i,j = 1,\ldots,d$ and $\lambda = g_{ym}^2 N$ is the 't Hooft coupling; in the lattice formulas below the coupling enters only through the combination $\lambda N$, so it effectively appears as a single parameter. The action also introduces multiple spatial dimensions. The final simulation will be in nine spatial dimensions, but for now the setup is designed for an arbitrary number of dimensions. It will be useful to look at smaller numbers of dimensions, such as two or four, during the design stages.
The gauge variables are still ignored, and the partition function is just $Z = \int[dX]\,e^{-S_E}$. Metropolis and heat bath simulations can be constructed and the consistency of their results compared. This particular model has been studied recently, and one means of comparison will be published results [24]. There are actually two ways of constructing a heat bath for this model: a direct approach and an auxiliary-field approach, both of which are presented. The auxiliary-field technique is the fastest of the three algorithms and is what will be used in later simulations. Using the discretization prescription given in equation (2.4), the lattice action is

$$S_{lat} = \frac{a}{2}\sum_{t=1}^{T}\mathrm{tr}\left[\sum_i\left(\frac{X_t^i - X_{t-1}^i}{a}\right)^2 - \lambda N\sum_{i<j}\left[X_t^i, X_t^j\right]^2\right]. \qquad (4.2)$$

This system is invariant under the linear transformation $X^i \mapsto X^i + c\,\mathbb{1}$. As such, it is necessary to set a zero mode for the system. This is done by setting the global trace of the lattice to zero:

$$\sum_{t=1}^{T}\mathrm{tr}\left(X_t^i\right) = 0 \qquad \text{for each } i. \qquad (4.3)$$

If a $\mu^2 X^2$ term is present in the action, it acts to lift the zero mode of the system, so this condition should be used whenever the $\mu^2 X^2$ term is absent.

4.1 Commutator Metropolis

The Metropolis simulation is relatively easy to code. A single term representing the commutator interaction is added to the previous setup, and the mass term is removed or set to zero. The local change to the action for the site $X_t^i$ (with $i$ and $t$ held fixed) is then

$$S_{loc} = \frac{a}{2}\mathrm{tr}\left[\frac{2}{a^2}\left(X_t^i\right)^2 - \frac{2}{a^2}X_t^i\left(X_{t+1}^i + X_{t-1}^i\right) - \lambda N\sum_{j\neq i}\left[X_t^i, X_t^j\right]^2\right]. \qquad (4.4)$$

The simulation then proceeds in much the same way as in the previous chapter, using the above equation for the Metropolis comparison. It is also necessary to update the matrices of each spatial dimension in turn.
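The new commutator contribution to the local action, $-\frac{a}{2}\lambda N\sum_{j\neq i}\mathrm{tr}[X_t^i, X_t^j]^2$, can be sketched as a small helper (a hypothetical illustration, not the thesis code):

```python
import numpy as np

def commutator_term(X, i, lam, N, a):
    """-(a/2) * lam * N * sum_{j != i} tr([X^i, X^j]^2).
    For Hermitian matrices the commutator is anti-Hermitian, so tr(C @ C) <= 0
    and this contribution to the action is non-negative."""
    total = 0.0
    for j, Xj in enumerate(X):
        if j == i:
            continue
        C = X[i] @ Xj - Xj @ X[i]
        total += np.trace(C @ C).real
    return -(a / 2) * lam * N * total
```

For commuting (e.g. simultaneously diagonal) matrices the term vanishes, so the potential has flat directions; for the Pauli matrices $\sigma_x,\sigma_y$ one has $\mathrm{tr}[\sigma_x,\sigma_y]^2 = -8$, giving the value $8$ for $a = \lambda = 1$, $N = 2$.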
4.2 Auxiliary Field Heat Bath

To create a heat bath, it is necessary to isolate the contributions to each single matrix element $(X^i)_{ab}$. With the addition of the commutator term, there is now coupling between the internal elements of the matrix $X_t^i$, and also across different spatial dimensions with the matrices $X_t^j$ for $j \neq i$. Thus the updating is not as straightforward as in the previous chapter: it is no longer possible to update an entire matrix simultaneously, and instead each element must be updated individually. An approach for creating a heat bath for this system using a set of auxiliary fields is as follows [25]. The derivation and implementation of this heat bath are somewhat tedious, but the result is a fairly efficient algorithm. First, the commutator term in the action can be replaced by an anti-commutator term:

$$\sum_{i<j}\mathrm{tr}\left[X^i,X^j\right]^2 = \sum_{i<j}\mathrm{tr}\left[\left\{X^i,X^j\right\}^2 - 4\left(X^i\right)^2\left(X^j\right)^2\right]. \qquad (4.5)$$

Next, consider the addition of a set of auxiliary fields $Q^{ij}$ to the action, where each $Q^{ij}$ is an $N\times N$ Hermitian matrix for $i<j$. For notational simplicity, define $Q^{ji} \equiv Q^{ij}$ as a redundant copy of the fields for $i>j$. The auxiliary fields can be discretized in the same way as the regular fields; on the lattice, $Q^{ij}(t) \rightarrow Q_t^{ij}$. These auxiliary fields are added to the action in the form of the following quadratic term:

$$\mathcal{L}_{aux} = \frac{\lambda N}{2}\sum_{i<j}\mathrm{tr}\left[Q^{ij}-\left\{X^i,X^j\right\}\right]^2 = \frac{\lambda N}{2}\sum_{i<j}\mathrm{tr}\left[\left(Q^{ij}\right)^2 - 2Q^{ij}\left\{X^i,X^j\right\} + \left\{X^i,X^j\right\}^2\right], \qquad (4.6)$$

where the Lagrangian density $\mathcal{L}$ is related to the action by $S = \int dt\,\mathcal{L}$. To calculate an observable, integration is performed over the auxiliary fields in addition to the regular fields:

$$\langle \mathcal{O} \rangle = \frac{\int[dX][dQ]\,\mathcal{O}[X]\,e^{-\int_0^\beta dt\,\left(\mathcal{L}_E+\mathcal{L}_{aux}\right)}}{\int[dX][dQ]\,e^{-\int_0^\beta dt\,\left(\mathcal{L}_E+\mathcal{L}_{aux}\right)}}. \qquad (4.7)$$

The auxiliary fields can be added in this manner because they were introduced in the form of a Gaussian.
When the integration over the auxiliary fields is done, it produces a constant in the numerator and denominator, which cancels and does not affect the final value of an $X$-dependent observable. When $\mathcal{L}_{aux}$ is added to the original Lagrangian, the anti-commutator squared terms cancel. The new composite Lagrangian density is then

$$\mathcal{L} = \frac{1}{2}\mathrm{tr}\left[\sum_i\left(\dot X^i\right)^2 + \lambda N\sum_{i<j}\left[\left(Q^{ij}\right)^2 - 2Q^{ij}\left\{X^i,X^j\right\} + 4\left(X^i\right)^2\left(X^j\right)^2\right]\right]. \qquad (4.8)$$

From this expression, the new lattice action has the form

$$S_{lat} = \frac{a}{2}\sum_{t=1}^{T}\mathrm{tr}\left[\sum_i\left(\frac{X_t^i - X_{t-1}^i}{a}\right)^2 + \lambda N\sum_{i<j}\left[\left(Q_t^{ij}\right)^2 - 2Q_t^{ij}\left\{X_t^i,X_t^j\right\} + 4\left(X_t^i\right)^2\left(X_t^j\right)^2\right]\right]. \qquad (4.9)$$

It is now possible to create a heat bath for this lattice action. To do the updating, it is necessary to alternate between sweeps over the auxiliary fields and sweeps over the regular fields. Each element of the regular fields must be updated separately, except for the real and imaginary components of a given element, which can be updated simultaneously. The updating procedures for the auxiliary fields, diagonal elements and off-diagonal elements are as follows.

4.2.1 Auxiliary Fields

The heat bath for the auxiliary fields is straightforward and follows directly from equation (4.6). Each element of the auxiliary fields is updated according to $(Q_t^{ij})_{ab} = \gamma\sigma + \rho_{ab}$ (for fixed $i$, $j$, $a$, $b$ and $t$), where the diagonal and off-diagonal elements must be differentiated between. Here

$$\sigma^{(diag)} = \frac{1}{\sqrt{a_{lat}\lambda N}}, \qquad \sigma^{(off\text{-}diag)} = \frac{1}{\sqrt{2a_{lat}\lambda N}}, \qquad \rho_{ab} = \left\{X_t^i,X_t^j\right\}_{ab}, \qquad (4.10)$$

and $\gamma$ is a complex pair of Gaussian random deviates. The indices $a$ and $b$ refer to the components of each $N\times N$ matrix. To avoid possible confusion with the lattice spacing, the latter is written as $a_{lat}$ when necessary. Because there is no coupling between the internal elements of the matrix $Q_t^{ij}$, this entire matrix can be updated simultaneously.
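The auxiliary-field update (4.10) can be sketched as follows (an illustrative helper with arbitrary couplings and seed, not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(5)

def update_auxiliary(Xi, Xj, lam, N, a_lat):
    """Heat-bath draw of Q^{ij} per eq. (4.10): a Gaussian centred on the
    anticommutator {X^i, X^j}. No element couples to any other, so the whole
    matrix is drawn at once."""
    n = Xi.shape[0]
    rho = Xi @ Xj + Xj @ Xi                       # centre {X^i, X^j}
    sig_d = 1 / np.sqrt(a_lat * lam * N)          # diagonal width
    sig_o = 1 / np.sqrt(2 * a_lat * lam * N)      # off-diagonal width
    Q = np.zeros((n, n), dtype=complex)
    for b in range(n):
        Q[b, b] = sig_d * rng.standard_normal() + rho[b, b].real
        for c in range(b + 1, n):
            g = rng.standard_normal() + 1j * rng.standard_normal()
            Q[b, c] = sig_o * g + rho[b, c]
            Q[c, b] = np.conj(Q[b, c])            # Q stays Hermitian
    return Q
```

In a full sweep this draw alternates with the $X$ updates of sections 4.2.2 and 4.2.3; since the draw is exact, no acceptance test is needed.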
Also, since the auxiliary fields are Hermitian, it is only necessary to update values for $a<b$ and then set $(Q_t^{ij})_{ba} = (Q_t^{ij})_{ab}^*$.

4.2.2 Diagonal Elements

The derivation of the heat bath for the regular fields follows from equation (4.9). The details are fairly tedious, so only the results are given. A diagonal element is updated as $(X_t^i)_{aa} = \gamma\sigma^{(diag)} + \rho_{aa}$ (with $i$, $a$ and $t$ fixed), where

$$\sigma^{(diag)} = \frac{1}{\sqrt{a_{lat}\alpha}}, \qquad \alpha = \frac{2}{a_{lat}^2} + 4\lambda N\sum_{j\neq i}\left(X^{j\,2}\right)_{aa}, \qquad (4.11)$$

$$\rho_{aa} = \frac{1}{\alpha}\left[\left(\frac{X_{t+1}^i + X_{t-1}^i}{a_{lat}^2}\right)_{aa} + \lambda N\sum_{j\neq i}\left(-4\sum_{b\neq a}\mathrm{Re}\left[X^i_{ab}\left(X^{j\,2}\right)_{ba}\right] + \left\{X^j,Q^{ij}\right\}_{aa}\right)\right],$$

and $\gamma$ is a Gaussian random deviate. Here the summations over indices are explicit and run only over the indicated index; thus $\sum_{j\neq i}$ is only a sum over $j$, since $i$ is held fixed. All fields are taken at lattice site $t$ unless stated otherwise, and $(X^{j\,2})_{ba}$ means to square the matrix $X^j$ first and then take the $ba$ component.

4.2.3 Off-Diagonal Elements

The off-diagonal elements are similar. For $i$, $a$, $b$ and $t$ fixed, the procedure is to update $(X_t^i)_{ab} = \gamma\sigma^{(off\text{-}diag)} + \rho_{ab}$, where

$$\sigma^{(off\text{-}diag)} = \frac{1}{\sqrt{2a_{lat}\alpha}}, \qquad \alpha = \frac{2}{a_{lat}^2} + 2\lambda N\sum_{j\neq i}\left[\left(X^{j\,2}\right)_{aa} + \left(X^{j\,2}\right)_{bb}\right], \qquad (4.12)$$

$$\rho_{ab} = \frac{1}{\alpha}\left[\left(\frac{X_{t+1}^i + X_{t-1}^i}{a_{lat}^2}\right)_{ab} + \lambda N\sum_{j\neq i}\left(-2\sum_{c\neq a,\,c\neq b}\left[X^i_{ac}\left(X^{j\,2}\right)_{cb} + \left(X^{j\,2}\right)_{ac}X^i_{cb}\right] + \left\{X^j,Q^{ij}\right\}_{ab}\right)\right],$$

and $\gamma$ is a pair of Gaussian random deviates. Here $(X_t^i)_{ab}$ is complex, and both the real and imaginary parts are updated simultaneously. The same notational conventions apply as in the previous section; in particular, the sum $\sum_{c\neq a,\,c\neq b}$ means the sum over all $c$ except for the two cases $c=a$ and $c=b$.

4.3 Heat Bath Without Auxiliary Fields

It is also possible to construct a heat bath which does not involve auxiliary fields. This heat bath poses additional difficulties because it allows for mixing between the real and imaginary components of individual elements. It also contains numerous double sums which make the algorithm complexity an order of $N$ worse than the previous heat bath.
As such, the auxiliary-field heat bath is generally simpler and faster, and this heat bath is presented solely for comparison purposes. To begin, the commutator term is expanded:

$$-\sum_{i<j}\mathrm{tr}\left[X^i,X^j\right]^2 = \sum_{i<j}\mathrm{tr}\left[-2X^iX^jX^iX^j + 2\left(X^i\right)^2\left(X^j\right)^2\right]. \qquad (4.13)$$

The heat bath for the $(X^i)^2(X^j)^2$ term was already derived in the previous section, so it only remains to add to it the heat bath for the $X^iX^jX^iX^j$ term. Again, the derivation is somewhat tedious; the results are as follows.

4.3.1 Diagonal Elements

For fixed $i$, $a$ and $t$, the diagonal terms are updated as $(X_t^i)_{aa} = \gamma\sigma^{(diag)} + \rho_{aa}$, where

$$\sigma^{(diag)} = \frac{1}{\sqrt{a_{lat}\alpha}}, \qquad \alpha = \frac{2}{a_{lat}^2} + 2\lambda N\sum_{j\neq i}\left[\left(X^{j\,2}\right)_{aa} - \left(X^j_{aa}\right)^2\right], \qquad (4.14)$$

$$\rho_{aa} = \frac{1}{\alpha}\left[\left(\frac{X_{t+1}^i + X_{t-1}^i}{a_{lat}^2}\right)_{aa} + \lambda N\sum_{j\neq i}\left(2\sum_{(b,c)\neq(a,a)}\mathrm{Re}\left[X^j_{ab}X^i_{bc}X^j_{ca}\right] - 2\sum_{b\neq a}\mathrm{Re}\left[X^i_{ab}\left(X^{j\,2}\right)_{ba}\right]\right)\right],$$

and $\gamma$ is a Gaussian random deviate. All sums over indices are explicit, and the lattice index is $t$ unless stated otherwise. The auxiliary fields from before have been replaced by a double sum over $b$ and $c$: $\sum_{(b,c)\neq(a,a)}$ means the sum over all $b$ and all $c$, except for the single case $b=a$, $c=a$, which is omitted. It is this double sum that ruins the efficiency of the algorithm at large $N$; sweeps over the auxiliary fields end up being computationally more efficient.

4.3.2 Off-Diagonal Elements

For the off-diagonal elements, the real and imaginary parts must be updated separately. For fixed $i$, $a$, $b$, $t$ and component, the off-diagonal elements are updated as $\mathrm{Re}[(X_t^i)_{ab}] = \gamma\sigma + (\rho_R)_{ab}$ and $\mathrm{Im}[(X_t^i)_{ab}] = \gamma\sigma + (\rho_I)_{ab}$, where

$$\sigma = \frac{1}{\sqrt{2a_{lat}\alpha}}, \qquad \alpha = \frac{2}{a_{lat}^2} + 2\lambda N\sum_{j\neq i}\left[\left(X^{j\,2}\right)_{aa} + \left(X^{j\,2}\right)_{bb} - 2X^j_{aa}X^j_{bb}\right], \qquad (4.15)$$

and $(\rho_R)_{ab}$ and $(\rho_I)_{ab}$ are, respectively, the real and imaginary parts of

$$\tilde\rho_{ab} = \frac{1}{\alpha}\left[\left(\frac{X_{t+1}^i + X_{t-1}^i}{a_{lat}^2}\right)_{ab} - \lambda N\sum_{j\neq i}\sum_{c\neq a,\,c\neq b}\left[X^i_{ac}\left(X^{j\,2}\right)_{cb} + \left(X^{j\,2}\right)_{ac}X^i_{cb}\right] - \lambda N\sum_{j\neq i}\left(X^{j\,2}\right)_{ab}\left(X^i_{aa}+X^i_{bb}\right) + 2\lambda N\sum_{j\neq i}\sum_{(c,d)\neq(a,b),(b,a)}X^j_{ac}X^i_{cd}X^j_{db}\right],$$

together with cross terms proportional to $\mathrm{Re}[X^j_{ab}]$ and $\mathrm{Im}[X^j_{ab}]$ respectively, which are responsible for the mixing between the real and imaginary components; each $\gamma$ is a Gaussian random deviate.
The final sum, $\sum_{(c,d)\neq(a,b),(b,a)}$, means the sum over all $c$ and $d$ except for the two cases $c=a$, $d=b$ and $c=b$, $d=a$.

4.4 Comparison to Published Results

So far, three ways of simulating the commutator term have been presented. The Metropolis is the easiest to code, the auxiliary-field heat bath is the fastest of the three, and the direct heat bath serves as an additional check. To validate the results, the simulations are compared to those shown in a recent paper by Kawahara and Nishimura [24]. In this paper, the commutator model of equation (4.1) is investigated, with the exception that non-periodic lattice boundary conditions are imposed. Before making the comparison, it is necessary to adjust for these boundary conditions.

4.4.1 Non-Periodic Boundary Conditions

Instead of a compact lattice, consider a lattice with two fixed endpoints. The difference from periodic boundary conditions is that the interaction between the end points of the lattice must be removed. This change manifests itself in the kinetic term, which is modified as

$$\sum_{t=1}^{T}\left(\frac{X_t^i - X_{t-1}^i}{a}\right)^2 \longrightarrow \sum_{t=2}^{T}\left(\frac{X_t^i - X_{t-1}^i}{a}\right)^2. \qquad (4.16)$$

Previously, the local expansion of this term for fixed $X_t^i$ was $\frac{2}{a^2}(X_t^i)^2 - \frac{2}{a^2}X_t^i(X_{t+1}^i + X_{t-1}^i)$, as appeared in equation (4.4). Making the correction requires two main changes. First, the coefficient of the $X^2$ term for the boundary elements is reduced by half:

$$\frac{2}{a^2}\left(X_t^i\right)^2 \longrightarrow \frac{1}{a^2}\left(X_t^i\right)^2 \qquad \text{for } t=1,\ t=T. \qquad (4.17)$$

Next, the connection to neighbouring lattice sites is altered such that

$$X_{t+1}^i + X_{t-1}^i \longrightarrow \begin{cases} X_{t+1}^i & t = 1, \\ X_{t-1}^i & t = T, \\ \text{no change} & \text{otherwise.} \end{cases} \qquad (4.18)$$

These modifications require small changes to the Metropolis and heat bath implementations, which need to be made only for the boundary lattice sites; the internal lattice sites remain unchanged. Care should be taken that these additions can easily be turned on and off, so that periodic boundary conditions can be reinstated later.
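The boundary modification (4.18) amounts to a small branch in the neighbour sum; a sketch with a hypothetical helper (scalar sites for brevity, and 0-based indexing rather than the text's 1-based labels):

```python
def neighbour_sum(lat, t, periodic=True):
    """X_{t+1} + X_{t-1}, with end points dropped under the fixed-end
    (non-periodic) boundary conditions of eq. (4.18). Sites are indexed
    0..T-1 here, versus 1..T in the text."""
    T = len(lat)
    if periodic:
        return lat[(t + 1) % T] + lat[(t - 1) % T]
    if t == 0:
        return lat[1]          # t = 1 in the text: only the right neighbour
    if t == T - 1:
        return lat[T - 2]      # t = T in the text: only the left neighbour
    return lat[t + 1] + lat[t - 1]
```

Keeping the branch behind a single `periodic` flag makes it easy to switch back to periodic boundary conditions later, as the text recommends.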
4.4.2 Results

The Kawahara paper includes a graph that plots the expectation value of the observable

$$O_4 = \frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{d}\mathrm{tr}\left[\left(X_t^i\right)^2\right]$$

for various values of the lattice spacing $a$ and inverse temperature $\beta$. This is similar to $O_2$ from the previous chapter, except that here there is a sum over spatial dimensions that is not averaged. The values $a = 0.5$, $\beta = 5$ and $a = 1$, $\beta = 10$ were selected for comparison. Table 4.1 shows the attempts to reproduce these results. The final row gives estimates of the results from the Kawahara paper; since these values have been read off a graph, their accuracy is rather limited. The results show that the Metropolis and heat bath simulations agree within error, and it is highly probable that they are converging to the same point. There is also reasonable agreement between these results and the Kawahara results. Based on this data and other observations, it is a safe conclusion that the commutator integration is in working order.

$\langle O_4 \rangle$                  $a = 0.5$, $\beta = 5$     $a = 1$, $\beta = 10$
Metropolis                       1.4349 +/- 0.0004          1.3448 +/- 0.0003
Auxiliary field heat bath        1.4354 +/- 0.0004          1.3448 +/- 0.0003
Direct heat bath                 1.4354 +/- 0.0004          1.3449 +/- 0.0003
Kawahara paper (est.)            1.44 +/- 0.01              1.35 +/- 0.01

Table 4.1: Comparison of the Metropolis, auxiliary-field heat bath and direct heat bath simulations with published results for $O_4$. Here $N = 16$, $T = 10$ and $d = 3$, with $\beta = aT$. The simulation data in this table are averages over 500k sweeps, taking every 100th point. For both $a = 0.5$, $\beta = 5$ and $a = 1$, $\beta = 10$, the results from the three Monte Carlo simulations are very close and within error, and there is reasonable agreement between these results and the results read from the Kawahara paper.

Chapter 5 Integration Over Gauge Variables

5.1 Derivation of Gauge Fixing

The next step is to reintroduce the gauge fields.
The original partition function with the gauge fields and matrix variables is

$$Z = \int[dA][dX]\,e^{-S_E}, \qquad (5.1)$$

where the action is

$$S_E = \frac{1}{2}\int_0^\beta dt\,\mathrm{tr}\left[\left(DX^i\right)^2 + V(X^i)\right]. \qquad (5.2)$$

Here $DX^i = \dot X^i + i[A, X^i]$ is the covariant time derivative, and the boundary conditions are periodic: $X(t+\beta) = X(t)$ and $A(t+\beta) = A(t)$. The action is invariant under the gauge transformation

$$X^i(t) \mapsto U(t)X^i(t)U^\dagger(t), \qquad A(t) \mapsto U(t)\left(A(t) - i\frac{d}{dt}\right)U^\dagger(t), \qquad (5.3)$$

where $U$ is a unitary matrix ($U(t)U^\dagger(t) = 1$). As a result of the gauge symmetry, the partition function integral is poorly defined: evaluating it would mean redundantly integrating over an infinite number of equivalent states, which is nonsensical in a Monte Carlo simulation. One way of solving this issue is to fix the gauge so that each independent gauge configuration is counted only once. The periodic boundary conditions prevent the choice of $A = 0$ as the gauge. The next best choice is a static diagonal gauge,

$$\frac{d}{dt}A(t) = 0, \qquad A_{ab} = A_a\delta_{ab}. \qquad (5.4)$$

The covariant time derivative in this gauge is

$$\left(DX^i\right)_{ab} = \dot X^i_{ab} + i\left(A_a - A_b\right)X^i_{ab}, \qquad (5.5)$$

which is equivalent to the non-covariant derivative action $S_E = \frac{1}{2}\int_0^\beta dt\,\mathrm{tr}\left[(\dot X^i)^2 + V(X^i)\right]$ with boundary conditions $X^i(t+\beta)_{ab} = X^i(t)_{ab}\,e^{i\beta(A_a - A_b)}$. This gauge fixing can be done using the Faddeev-Popov trick, the details of which are not elaborated upon here; for a discussion, see references [9, 17]. The resulting determinant is similar to a Vandermonde determinant and introduces a factor of $\prod_{a<b}\sin^2\left(\frac{\beta(A_a - A_b)}{2}\right)$ into the partition function. Defining $\theta_a := \beta A_a$, the final form of the partition function is

$$Z \sim \int d\theta_1\cdots d\theta_N \prod_{a<b}\sin^2\left(\frac{\theta_a - \theta_b}{2}\right)\int[dX]\,e^{-S_E}, \qquad (5.6)$$

with boundary conditions $X^i(t+\beta)_{ab} = X^i(t)_{ab}\,e^{i(\theta_a - \theta_b)}$. The gauge variables can now be thought of as angles on the unit circle, and the phase behaviour of the system can be investigated by examining their behaviour.
As discussed in the introduction, an order parameter for the phase transition is the Polyakov loop, equation (1.4). In this gauge, this expression has the form

$$P = \frac{1}{N} \sum_{a=1}^{N} e^{i\beta A_a}. \qquad (5.7)$$

Squaring this expression, $|P|^2 = \frac{1}{N^2} \sum_{a,b} e^{i\beta(A_a - A_b)}$, and using $\theta_a = \beta A_a$ gives

$$|P|^2 = \frac{1}{N^2} \sum_{a,b} e^{i(\theta_a - \theta_b)}. \qquad (5.8)$$

The above observable for the Polyakov loop (squared) will serve as the primary means of investigating the phase transition behaviour in this system.

5.2 Toy Models

To investigate the behaviour of the gauge fields, consider the following two parameter model representing the integral over the matrices:

$$\int [dX]\, e^{-S[X]} \sim e^{\lambda' \sum_{a,b} e^{i(\theta_a - \theta_b)} + gN \sum_a \cos(\theta_a)}, \qquad (5.9)$$

where $\lambda'$ and $g$ are coupling constants and $\lambda'$ is not the same as the 't Hooft coupling. The $e^{\lambda' \sum_{a,b} e^{i(\theta_a - \theta_b)}}$ factor is derived as a simplification of the matrix harmonic potential (Appendix A.3), while the $e^{gN \sum_a \cos(\theta_a)}$ term is a simple variation that behaves as a source term. This model allows the gauge fields to be studied independently of the matrix integration and provides a good framework for what behaviour to expect later.

5.2.1 Phases

When this model is put into the partition function, the result is a balancing effect between the sine squared term, $\sin^2\left( \frac{\theta_a - \theta_b}{2} \right)$, and the exponential terms. The sine squared term acts to repel the eigenvalues, spreading them out, while the exponential terms act to attract the eigenvalues, causing them to cluster together. The $e^{gN \sum_a \cos(\theta_a)}$ term has the additional effect of causing the eigenvalues to cluster about $\theta = 0$. It is expected that the exponential terms will dominate when $\lambda' + |g| > 1$, resulting in the deconfined phase. Figure 5.1 shows the expected phase regions of this model for large $N$. The order of the phase transition will differ depending on whether the $\lambda'$ or $g$ parameter is varied.
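In a simulation, the Polyakov loop observable of equation (5.8) is a one-line computation on the stored angles. A short illustration (NumPy; the function name is ours):

```python
import numpy as np

def polyakov_sq(theta):
    """|P|^2 = (1/N^2) sum_{a,b} e^{i(theta_a - theta_b)} = |(1/N) sum_a e^{i theta_a}|^2."""
    return np.abs(np.mean(np.exp(1j * theta))) ** 2

# Clustered angles (deconfined-like) give |P|^2 near 1;
# uniformly spread angles (confined-like) give |P|^2 near 0.
clustered = polyakov_sq(np.zeros(40))
spread = polyakov_sq(np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False))
```

The second equality in the docstring is just the absolute square of the sum, which reduces the cost from $O(N^2)$ to $O(N)$ per measurement.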
5.2.2 Metropolis for Gauge Variables

To set up a Monte Carlo simulation of this system, it is necessary to use both the sine squared term and the exponential terms as the weighting factors for producing field configurations. It is not possible to create a heat bath for the sine squared term, so the sole means of updating the gauge variables will be via a Metropolis simulation.

Figure 5.1: Expected toy model phase diagram in the large $N$ limit. The confined phase is inside the triangle, defined by $\lambda' + |g| < 1$. The deconfined phase is outside, defined by $\lambda' + |g| > 1$.

The Metropolis is set up by isolating one gauge variable in the partition function at a time. After doing so for $\theta_a$, the local partition function for updating $\theta_a$ is

$$Z_{loc} = \prod_{b \neq a} \sin^2\left( \frac{\theta_a - \theta_b}{2} \right) \cdot e^{2\lambda' \sum_{b} \cos(\theta_a - \theta_b) + gN\cos(\theta_a)}. \qquad (5.10)$$

Here, the product and sum are over all $b$ indices, holding $a$ fixed. As an alternate form of equation (5.9), $\sum_{a,b} e^{i(\theta_a - \theta_b)} = \sum_a 1 + 2\sum_{a<b} \cos(\theta_a - \theta_b)$ has been substituted into the expression.

The Metropolis proceeds as outlined in chapter 2. Beginning with the first gauge variable, a change is proposed to it and the effect on the action is compared. The change is then accepted or rejected as appropriate. Because the sine squared term and exponential terms have vastly different orders of magnitude, it is best to calculate the changes to these terms separately and then multiply the resulting ratios together. Each variable should then be updated multiple times (20-30) before moving on to the next one. A sweep is completed when this has been done for each variable in turn.

For the updating, the random change proposed to each variable is

$$\theta_a' = \theta_a^{old} + 2\pi\delta u. \qquad (5.11)$$

Here $u$ is a random number in $(-0.5, 0.5)$ and $\delta$ is a configurable scaling factor. When $\delta = 1$, the choice of angles is over the entire circle, but this would result in a poor acceptance rate.
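A sketch of this update in code, using the local weight (5.10) in log form (NumPy; the fixed hit count and $\delta$ here are illustrative choices, not the thesis settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_log_weight(theta, a, lam, g):
    """Log of Z_loc for angle theta_a: sin^2 measure times the toy-model exponent (eq. 5.10)."""
    d = theta[a] - np.delete(theta, a)
    return (np.sum(2.0 * np.log(np.abs(np.sin(d / 2.0))))
            + 2.0 * lam * np.sum(np.cos(d)) + g * len(theta) * np.cos(theta[a]))

def metropolis_sweep(theta, lam, g, delta=0.1, hits=20):
    """One sweep: propose theta_a -> theta_a + 2*pi*delta*u several times per variable."""
    for a in range(len(theta)):
        for _ in range(hits):
            old = theta[a]
            w_old = local_log_weight(theta, a, lam, g)
            theta[a] = old + 2.0 * np.pi * delta * (rng.random() - 0.5)
            # accept with probability min(1, exp(w_new - w_old))
            if np.log(rng.random()) >= local_log_weight(theta, a, lam, g) - w_old:
                theta[a] = old  # reject: restore the old angle
    return theta
```

Working entirely with log weights is an equivalent way of handling the vastly different magnitudes of the two factors; the thesis instead computes the two ratios separately and multiplies them.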
To get a reasonable acceptance rate, the $\delta$ parameter must be much smaller, perhaps as small as $\delta = 0.01$, depending on the details of the simulation. Since many different simulations of this model will be run, each with different parameters and different couplings, our code has been set up to monitor the acceptance rate and adjust the value of $\delta$ so that the rate always stays between 50% and 70%.

5.2.3 Vary $\lambda'$ Parameter

For a first toy model simulation, consider the behaviour of this model for different values of $\lambda'$ with $g$ fixed. With $g$ set to zero, this model is actually a simplification of the actual matrix model and should be a precursor of what to expect for the full simulation. Figure 5.2 shows the run-time histories for the Polyakov loop operator for selected values of $\lambda'$. Immediately, the different phases can be seen in the model. For the confined phase, $\lambda' < 1$, the observable fluctuates about zero and for the deconfined phase, $\lambda' > 1$, it fluctuates about some non-zero number. This is basically the expected behaviour of the Polyakov loop. In the phase transition region, $\lambda' \approx 1$, there are large fluctuations between both types of states.

Figure 5.2: Run-time history of the Polyakov loop for different values of $\lambda'$ with $g = 0$. The system size is $N = 40$. For $\lambda' = 0.5$ the observable fluctuates very close to zero. For $\lambda' = 1.5$ the observable oscillates about some non-zero number that is of order $N^2$ and for $\lambda' = 1.0$ the observable has very large oscillations between zero and some non-zero value.

This can be expanded upon further by looking at the expectation value of the Polyakov loop for many values of $\lambda'$ and for different values of $N$. Figure 5.3 shows what happens to the Polyakov loop when this is done. For $\lambda' < 1$ the Polyakov loop expectation value is approximately zero. There is a sharp jump near $\lambda' = 1$ and for $\lambda' > 1$ the operator takes on values that are order $N^2$. The other interesting feature of this graph is that it shows the behaviour for different values of $N$. The observable is basically invariant under changes in $N$, except in the phase transition region. Near $\lambda' = 1$ the Polyakov loop expectation value jumps to order $N^2$ and for larger values of $N$, this jump gets steeper. The shape of this graph suggests that in the large $N$ limit, there is a first order phase transition near $\lambda' = 1$. The large fluctuations seen in figure 5.2 are also characteristic of a first order phase transition.

Another thing of interest is what the confined and deconfined states might look like in terms of the behaviour of the eigenvalues on the unit circle. Figure 5.4 shows an example of how the gauge variables might behave in a typical simulation. In the confined phase, the eigenvalues are spread out over the unit circle, whereas in the deconfined phase, they are clustered about a single point. The degree of clustering increases or decreases in proportion to the strength of the coupling. Figure 5.5 expands on this idea further by showing the distribution of eigenvalues on the unit circle for various couplings. The confined state appears as a flat line on the graph, whereas the deconfined state appears as a peak that gets sharper with increased coupling. Near the phase transition region, $\lambda' = 1$, the distribution appears as a superposition of clustered and unclustered states. This is mostly a finite $N$ effect and it is not clear what distribution should be seen in the large $N$ limit.

Figure 5.3: Expectation value of the Polyakov loop operator varying $\lambda'$ for different values of $N$ with $g = 0$. Each point on this graph has been calculated by means of a separate simulation at that particular value of $\lambda'$ and is the average of 1000 data points.
The $N = 20$ case has been drawn as points with error bars and the $N = 40$ and $N = 80$ cases have been drawn as lines. There is good agreement everywhere between the three cases. Near $\lambda' = 1$, the data have the appearance of a smeared step function. The $N = 20$ data have a fairly pronounced smearing effect. The $N = 40$ and $N = 80$ data are very similar everywhere, although the $N = 80$ data appear to have a slightly steeper step.

Figure 5.4: (a) A sample confined state. The eigenvalues are spread out over the circle. Here $\lambda' = 0.5$ and $N = 40$. (b) A sample deconfined state. The eigenvalues are clustered about one point, which has been set as $\theta = 0$. Here $\lambda' = 1.5$ and $N = 40$.

Figure 5.5: Distribution of eigenvalues for different values of $\lambda'$. Here $N = 40$ and $g = 0$. The graph has been normalized so that the total area under a curve is $2\pi$ and the distributions are set to be centred at $\theta = 0$. For $\lambda' = 0.5$ (the confined state) the distribution of eigenvalues is constant. For $\lambda' = 1.5$ (the deconfined state) the distribution is peaked at $\theta = 0$ and falls off on the outsides. For $\lambda' = 1.0$ the distribution has a peak that extends over nearly the entire range of angles and approaches a non-zero value on the wings.

5.2.4 Vary $g$ Parameter

The next step is to investigate the behaviour of this model by varying $g$ while holding $\lambda'$ fixed. This model does not directly relate to future work, but is presented for comparison purposes. Figure 5.6 shows the expectation value of the Polyakov loop for different values of $g$. The behaviour is quite different from that of the previous case. Instead of a quick jump at the phase transition region, the progression is gradual. This is characteristic of a phase transition that is other than first order.
On this graph, the location of the phase transition, $\lambda' + |g| = 1$, corresponds to when the Polyakov loop operator is approximately 0.25.

Figure 5.6: Expectation value of the Polyakov loop operator for different values of $g$ with $\lambda'$ held fixed. Here $N = 40$ and each data point is the average of 1000 sample points. The phase transition occurs when the expectation value is 0.25 and this is drawn as a straight line. For each of the three cases, instead of a sharp jump, the expectation value slowly increases with $g$.

The order of the phase transition can be investigated by looking for a discontinuity in some derivative of the observable with respect to $g$. Separating the $g$ dependent component of the action, an observable will have the form

$$\langle O \rangle = \frac{\int [dX]\, O[X]\, e^{-S[X] + g S_g[X]}}{\int [dX]\, e^{-S[X] + g S_g[X]}}. \qquad (5.12)$$

Then, differentiating with respect to $g$ produces

$$\frac{\partial}{\partial g}\langle O \rangle = \frac{\int [dX]\, O[X] S_g[X]\, e^{-S[X] + g S_g[X]}}{\int [dX]\, e^{-S[X] + g S_g[X]}} - \frac{\int [dX]\, O[X]\, e^{-S[X] + g S_g[X]} \int [dX]\, S_g[X]\, e^{-S[X] + g S_g[X]}}{\left( \int [dX]\, e^{-S[X] + g S_g[X]} \right)^2} = \langle O S_g \rangle - \langle O \rangle \langle S_g \rangle. \qquad (5.13)$$

This is just the correlator of $O$ and $S_g$, which can also be written as $\langle (O - \bar O)(S_g - \bar S_g) \rangle$. This process can be repeated to the $n$th derivative. The result is

$$\frac{\partial^n}{\partial g^n} \langle O \rangle = \langle (O - \bar O)(S_g - \bar S_g)^n \rangle. \qquad (5.14)$$

Figures 5.7a and 5.7b show the first and second derivatives of the Polyakov loop with respect to $g$. The errors get increasingly large as this procedure is used for higher order derivatives, so it is necessary to average many more data points in order to get manageable errors. A discontinuity can be seen in the second derivative, giving evidence that the phase transition across the $g$ parameter is third order.

Figure 5.7: First and second derivatives of the Polyakov loop operator with respect to $g$. Here $N = 40$, $\lambda' = 0$ and each data point is the average of 200k sample points. a) First derivative. No discontinuity is present. b) Second derivative. There appears to be a discontinuity at $g = 1$.
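Equations (5.13) and (5.14) translate directly into connected-correlator estimators over paired Monte Carlo samples of $O$ and $S_g$. A sketch on synthetic data (NumPy; the function name is ours):

```python
import numpy as np

def deriv_estimate(O, Sg, n=1):
    """Estimate the n-th derivative of <O> with respect to g as the
    connected correlator <(O - Obar)(Sg - Sgbar)^n> of eq. (5.14)."""
    O = np.asarray(O, dtype=float)
    Sg = np.asarray(Sg, dtype=float)
    return np.mean((O - O.mean()) * (Sg - Sg.mean()) ** n)

# Synthetic sanity check: for identical samples the first-derivative estimate
# is the variance; for independent samples it vanishes in expectation.
rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
same = deriv_estimate(x, x, n=1)                            # close to var(x) = 1
indep = deriv_estimate(x, rng.normal(size=100_000), n=1)    # close to 0
```

The growing noise at higher $n$ mentioned in the text comes from the higher moments in the estimator, which is why far more samples are averaged for the second derivative.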
Also, the errors in this method get increasingly large with each order of derivative used.

Chapter 6
Gauge Fields & Harmonic Potential

It is now possible to proceed to integration over the matrices and gauge fields. As a first model, consider the harmonic potential from chapter 3. The combined partition function has the form

$$Z = \int d\theta_1 \cdots d\theta_N \prod_{a<b} \sin^2\left( \frac{\theta_a - \theta_b}{2} \right) \int [dX]\, e^{-\frac{1}{2}\int_0^\beta dt\, \mathrm{tr}\left[ (\dot X^i)^2 + \mu^2 (X^i)^2 \right]}, \qquad (6.1)$$

with boundary conditions $X^i_{ab}(t+\beta) = X^i_{ab}(t)\, e^{i(\theta_a - \theta_b)}$. This model also has an exact form solution for the matrix interaction. It will be possible to create a full simulation as well as a partial simulation that uses the exact form solution and compare the behaviour of the two.

6.1 Exact Form for Harmonic Potential

The exact form for the harmonic potential matrix interaction, as derived in appendix A.3, is

$$\int [dX]\, e^{-S_E} \sim e^{-d \sum_{a,b} \ln\left( 1 - e^{-\beta\mu + i(\theta_a - \theta_b)} \right)}. \qquad (6.2)$$

This model will be dominated by the exponential term, and thus be in the deconfined phase (also derived in appendix A.3), when

$$\mu\beta < \ln d. \qquad (6.3)$$

This exact form for the harmonic potential can be simplified to the toy model from chapter 5. This is done by taking the first order in the logarithm, $\ln\left( 1 - e^{-\beta\mu + i(\theta_a - \theta_b)} \right) \approx -e^{-\beta\mu + i(\theta_a - \theta_b)}$. Then, noting that the $a = b$ terms are just multiplicative constants and setting $\lambda' = d\, e^{-\beta\mu}$ gives

$$\sim e^{\lambda' \sum_{a,b} e^{i(\theta_a - \theta_b)}}, \qquad (6.4)$$

which reproduces the first factor of the toy model in equation (5.9).

Equation (6.2) can now be used to set up a Metropolis simulation. The local partition function for updating fixed $\theta_a$ is

$$Z_{loc} = \prod_{b \neq a} \sin^2\left( \frac{\theta_a - \theta_b}{2} \right) \cdot \prod_{b \neq a} \left[ 1 - 2e^{-\beta\mu}\cos(\theta_a - \theta_b) + e^{-2\beta\mu} \right]^{-d}, \qquad (6.5)$$

where $e^{-d\sum_{a,b} \ln\left( 1 - e^{-\beta\mu + i(\theta_a - \theta_b)} \right)} = \prod_{a<b} \left( 1 - 2e^{-\beta\mu}\cos(\theta_a - \theta_b) + e^{-2\beta\mu} \right)^{-d}$, up to a multiplicative constant, has been substituted in as a more numerically friendly expression. The parameter that now determines the system behaviour is the inverse temperature $\beta$, which is related to the number of lattice sites and lattice spacing by $\beta = a_{lat} T$.
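Equation (6.5) can be coded directly; evaluating the logarithm of the local weight keeps the product over $b$ numerically stable. A sketch (NumPy; the function name is ours):

```python
import numpy as np

def log_local_weight(theta, a, beta, mu, d):
    """Log of Z_loc in eq. (6.5): the sin^2 measure times the exact
    harmonic-potential interaction factor, for a fixed index a."""
    diff = theta[a] - np.delete(theta, a)
    measure = np.sum(2.0 * np.log(np.abs(np.sin(diff / 2.0))))
    x = np.exp(-beta * mu)  # the interaction factor is (1 - 2x cos + x^2)^{-d}
    interaction = -d * np.sum(np.log(1.0 - 2.0 * x * np.cos(diff) + x * x))
    return measure + interaction
```

For $\mu\beta$ much larger than $\ln d$ the interaction factor tends to one and only the repulsive measure survives, consistent with the confined side of condition (6.3).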
A Metropolis simulation can now be run over the gauge variables using equation (6.5) as the local partition function. Similar to the previous chapter where $\lambda'$ was varied, it is now possible to vary $\beta$ (or equivalently $a_{lat}$) and investigate the behaviour of the Polyakov loop.

6.2 Full Integration Over Matrices and Gauge Variables

Consider now the Monte Carlo simulation for full integration over the matrices and gauge variables. The entire path integral, discretized in time, has the form

$$Z = \int d\theta_1 \cdots d\theta_N \prod_{a<b} \sin^2\left( \frac{\theta_a - \theta_b}{2} \right) \int \left( \prod_{t=1}^{T} dX_t \right) e^{-S_{lat}}, \qquad (6.6)$$

where the kinetic part of the lattice action with the gauge boundary conditions taken into account is

$$S_{lat} = \frac{1}{2a_{lat}} \sum_{i=1}^{d} \left[ \sum_{a,b} \left| (X_1^i)_{ab} - (X_T^i)_{ab}\, e^{-i(\theta_a - \theta_b)} \right|^2 + \sum_{t=2}^{T} \mathrm{tr}\left( X_t^i - X_{t-1}^i \right)^2 \right] + (\text{potential terms, unchanged from chapter 3}). \qquad (6.7)$$

Here, the interaction between the gauge variables and the matrices only occurs at the boundary of the kinetic term and, as such, the interaction is independent of the potential. The matrix integration is otherwise entirely the same. Performing the integration will now require alternating between sweeps over the gauge variables, which is done using Metropolis, and sweeps over the matrices, which is done using a heat bath. Thus, after the framework is set up for this action, it should be easy to switch to the commutator potential.

6.2.1 Updating Gauge Variables

The gauge variable potential is different from the toy model potential from chapter 5, but the idea is still the same - run a Metropolis simulation that compares the weight of the sine squared term in the partition function to the weight of the action. Ignoring all other matrix terms, the $\theta$ dependent part of the action is

$$-\frac{1}{a_{lat}} \sum_{i=1}^{d} \sum_{a,b} \mathrm{Re}\left[ (X_1^i)_{ab} (X_T^i)_{ba}\, e^{i(\theta_a - \theta_b)} \right]. \qquad (6.8)$$

Isolating the variable $\theta_a$ then gives the following local expression for the partition function:

$$Z_{loc} = \prod_{b \neq a} \sin^2\left( \frac{\theta_a - \theta_b}{2} \right) \cdot e^{\frac{2}{a_{lat}} \sum_{i=1}^{d} \sum_{b \neq a} \mathrm{Re}\left[ (X_1^i)_{ab} (X_T^i)_{ba}\, e^{i(\theta_a - \theta_b)} \right]}. \qquad (6.9)$$

The above equation can now be used to run a Metropolis simulation for the gauge variable part of the integration.
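For illustration, the local weight (6.9) involves only the first and last time slices. A sketch for a single spatial dimension (NumPy; storing X as a T x N x N complex Hermitian array is our convention, and the sum over the d dimensions is omitted for brevity):

```python
import numpy as np

def log_gauge_local_weight(theta, a, X, a_lat):
    """Log of Z_loc in eq. (6.9) for one spatial dimension: the sin^2 measure
    times the boundary coupling between the first (X[0]) and last (X[-1]) slices."""
    diff = theta[a] - np.delete(theta, a)
    measure = np.sum(2.0 * np.log(np.abs(np.sin(diff / 2.0))))
    b = np.arange(len(theta)) != a  # all indices except a
    coupling = (2.0 / a_lat) * np.sum(
        np.real(X[0][a, b] * X[-1][b, a] * np.exp(1j * (theta[a] - theta[b]))))
    return measure + coupling
```

Because only the $t = 1$ and $t = T$ slices enter, each gauge-variable hit is cheap compared with a matrix sweep, which is why multiple updates per gauge variable are affordable.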
6.2.2 Gauge Boundary Conditions for Matrices

Similar to the set up for handling non-periodic boundary conditions in chapter 4, it is necessary to modify the matrix integration to handle the gauge boundary interactions. The necessary changes are only to the $(X_t - X_{t-1})^2$ term of the action and are fairly straightforward. The diagonal elements are left unchanged:

$$(X_{t+1}^i)_{aa} + (X_{t-1}^i)_{aa} \to \text{no change}, \qquad (6.10)$$

and the off-diagonal elements are modified as:

$$(X_{t+1}^i)_{ab} + (X_{t-1}^i)_{ab} \to \begin{cases} (X_2^i)_{ab} + (X_T^i)_{ab}\, e^{-i(\theta_a - \theta_b)} & t = 1 \\ (X_{T-1}^i)_{ab} + (X_1^i)_{ab}\, e^{i(\theta_a - \theta_b)} & t = T \\ \text{no change} & \text{otherwise.} \end{cases} \qquad (6.11)$$

Apart from these differences, the matrix integration is entirely as it was in chapter 3. If all the gauge variables are set to zero, these boundary conditions will reduce to periodic boundary conditions.

6.3 Results

It is now possible to compare the results of the full simulation to the exact form for the matrices. Figure 6.1 shows the comparison while varying $\beta$ and plotting the expectation value of the Polyakov loop observable. A simulation of this magnitude is fairly slow and the full simulation requires several hundred sweeps between data points to generate statistically independent results. The same type of phase transition behaviour can be seen as appears in the first toy model. There is a clear separation between the confined and deconfined phases with a sharp, step-like jump in between. Equation (6.3) predicts a phase transition at $\beta = \frac{\ln d}{\mu} = 1.0986$ and this is firmly where the phase transition can be seen in both simulations. Overall, the agreement between the full simulation and the exact form simulation is very impressive.
Figure 6.1: Expectation value of the Polyakov loop for the harmonic potential varying $\beta$. Here $N = 40$, $T = 20$, $d = 9$, $\mu = 2$ and $a_{lat} = \beta/T$. Each point on this graph is the average of 200 sample points. The full matrix simulation is plotted as points with error bars and the matrix approximation is plotted as a line. The exact phase transition location is plotted as a vertical line. The shape of the curves resembles that of the first toy model. A step-like jump can be seen clearly and is centred on the location of the exact phase transition. There is a slight blip in the exact form solution for $\beta < 0.25$; this was a result of variable overflow in the simulation and is not representative of the physics of the problem. Elsewhere, there is very good agreement between the full simulation and the analytic form for the matrices.

Chapter 7
Quenched BFSS Matrix Model

With the constituents tested, a framework is now in place to combine integration over the gauge variables with integration over matrices with the commutator potential. The partition function has the form

$$Z = \int d\theta_1 \cdots d\theta_N \prod_{a<b} \sin^2\left( \frac{\theta_a - \theta_b}{2} \right) \int [dX]\, e^{-S_E}, \qquad (7.1)$$

with boundary conditions $X^i_{ab}(t+\beta) = X^i_{ab}(t)\, e^{i(\theta_a - \theta_b)}$. The action is

$$S_E = \frac{1}{2} \int_0^\beta dt\, \mathrm{tr}\left[ (\dot X^i)^2 + \mu^2 (X^i)^2 - \frac{1}{2}[X^i, X^j]^2 \right]. \qquad (7.2)$$

The bosonic part of the BFSS model is recovered when $\mu = 0$. The mass term has been included in the action because it will prove useful in exploring the phase diagram of the model later in this chapter. The process of doing the integration is the same as in the previous chapter, but now the matrix integration is switched to the auxiliary field heat bath. The primary focus of study will be the Polyakov loop operator and phase transition behaviour in this system.

7.1 Set Up

Before running the final simulations, it is necessary to touch on some important preparatory concepts. The discussion here is mixed with some foreknowledge of what the final results will look like.
7.1.1 Lattice Size

The first issue is the size of lattice that is necessary for the simulation. By putting the system on a lattice, the idea is to try to model a continuous system using a discrete one. As such, enough lattice sites need to be used to make the lattice system approximately smooth. However, each lattice site contains an $N \times N$ matrix in nine spatial dimensions, so for computing purposes there are practical limits to how large the lattice can be. The algorithmic complexity of increasing the number of lattice sites is linear in $T$ - an extra lattice site is simply one more site to sweep over. However, a larger lattice also results in slower information propagation rates and consequently more total sweeps must be done in order to eliminate autocorrelation effects. Thus the complexity of increasing the lattice size may be as bad as order $T^2$.

Table 7.1 compares the expectation value of the Polyakov loop for different lattice sizes. Three sample values of $\beta$ have been chosen in the region where the Polyakov loop is non-zero, near the phase transition ($\beta_c \approx 1.016$). For $\beta = 0.8$, furthest away from the phase transition, the value has converged to the same value for all three lattice sizes. However, closer to the phase transition the agreement is not as good. For $\beta = 0.9$, the results for different values of $T$ have spread out, but are still within error. For $\beta = 1.0$, almost on top of the phase transition, the values for $T = 20$ and $T = 30$ are within error, but the value for $T = 10$ differs by about three times the error, suggesting that ten lattice sites is a bit too small. Based on this and other observations, $T = 20$ will be used as the lattice size for the final simulations. It is expected that twenty lattice sites will be more reliable than ten and that $T = 30$ would produce only modest improvements that are not worth the extra run-time required to make the system that big.
Altogether, it is expected that the choice of $T$ will contribute a systematic effect to the overall simulation and, as such, this should be compensated for by investigating the systematic error of the final results.

         β = 0.8        β = 0.9        β = 1.0
T = 10   0.666 ± 0.001  0.551 ± 0.002  0.359 ± 0.003
T = 20   0.665 ± 0.002  0.548 ± 0.002  0.348 ± 0.003
T = 30   0.666 ± 0.001  0.542 ± 0.002  0.343 ± 0.003

Table 7.1: Comparison of the behaviour of the Polyakov loop operator for different lattice sizes. The data are for three different values of $\beta$ approaching the phase transition region. Here $N = 40$, $d = 9$, $\mu = 1$, $\lambda = 1$, $a_{lat} = \beta/T$ and each result is the average of 100 points. For $\beta = 0.8$, furthest from the phase transition, there is agreement for all values of $T$. For $\beta = 0.9$, the values for different $T$ are spread out but still within error. Finally, for $\beta = 1.0$, closest to the phase transition, the values for $T = 20$ and $T = 30$ are within error, while the value for $T = 10$ differs by about three times the error.

7.1.2 Size of N

The next issue is the size of $N$ that should be used in the simulation. M-theory is supposed to exist in the $N \to \infty$ limit, whereas the simulation can only be run at large, but finite, $N$. Anything said about the phase transition should reflect the limiting behaviour of the system. Realistically, the size of $N$ can only be made so large because the algorithmic complexity of increasing $N$ is quite bad. It is order $N^4$ for integration over the matrices and this forms the bulk of the computational burden.

Figure 7.1 shows a comparison of the Polyakov loop for two different values of $N$. Both the $N = 20$ and $N = 40$ curves show a similar step-like behaviour. The values of the Polyakov loop for the different values of $N$ overlap almost perfectly, except in the phase transition region, where there is a difference in the steepness of the step. The $N = 20$ case has a more gradual slope to it, while the $N = 40$ case is steeper.
This is encouraging because it suggests that the step is getting steeper for increasing values of $N$. It is conceivable that this system will have a first order phase transition in the $N \to \infty$ limit. The quality of the data for the $N = 40$ case is quite good. This curve took about one week on forty processors to create, where each point on the graph was computed through an independent simulation and is the average of 200 sub-points. This is already pushing the limits of what can be computed in a reasonable amount of time. Since the data are already of good quality and the step is reasonably steep, $N = 40$ will form the basis for future trials.

Figure 7.1: Comparison of the phase transition behaviour of the Polyakov loop for $N = 20$ and $N = 40$. Here $T = 20$, $d = 9$, $\mu = 1$, $\lambda = 1$ and $a_{lat} = \beta/T$. The points of the graph represent the average of 200 Monte Carlo points. The curves show a step-like jump near $\beta = 1$. For $N = 20$, this step is slightly more gradual than it is for $N = 40$. In the deconfined phase, $\beta < \beta_c$, the data from the two sets line up almost exactly.

7.1.3 Autocorrelation

With the desired lattice size and size of $N$ in mind, some time was spent trying to optimize the simulation for autocorrelation times. There isn't all that much that can be done; it is simply a matter of alternating between sweeps over the matrix field, auxiliary field and gauge variables, and it will require a large number of sweeps to do so. One thing that is important is to do multiple updates for each gauge variable. The gauge variable integration is the smallest part of the computational burden and doing this integration effectively will translate into improved performance everywhere. But other than this, the system is just really big and changes are slow to propagate, so the autocorrelation times are going to be very large regardless.
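The autocorrelation times discussed here can be measured with an integrated autocorrelation time estimator; a sketch (the simple windowing rule used below — sum $\rho(k)$ until it first turns negative — is a common choice, not necessarily what the thesis code used):

```python
import numpy as np

def tau_int(series):
    """Integrated autocorrelation time: 0.5 + sum_k rho(k), summed until the
    estimated autocorrelation rho(k) first drops below zero."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    var = np.mean(x * x)
    tau = 0.5
    for k in range(1, len(x) // 2):
        rho = np.mean(x[:-k] * x[k:]) / var
        if rho <= 0.0:
            break
        tau += rho
    return tau

rng = np.random.default_rng(2)
noise = rng.normal(size=20_000)                                 # uncorrelated: tau near 0.5
smooth = np.convolve(noise, np.ones(50) / 50.0, mode="valid")   # correlated: tau much larger
```

Data points separated by a few times $\tau_{int}$ are effectively independent, which is the logic behind the sweep separation chosen for the final runs.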
For our set up, it was found that the autocorrelation time for the Polyakov loop was on the order of a few hundred sweeps for most values of $\beta$, with the exception that this sometimes approached a few thousand sweeps in the phase transition region. As such, a separation of 500 sweeps between data points will be used to produce the Monte Carlo data for the final simulation. The number of sweeps of separation is a relative measurement that depends on many details of the actual code; the point to be made is that this is a large enough separation that fairly good data will be produced everywhere. However, this may result in a slight underestimate of the errors in the data in the phase transition region because the autocorrelation times there are slightly larger. Again, any discrepancy here should be accounted for by adding a systematic error to the final results. In conjunction with the 500 sweeps separation, 200 data points will be collected for each value of $\beta$. This produces an error of less than one percent for most values of $\beta$ and errors of less than five percent in the phase transition region. As explained in the next section, more data can be added to these sets as needed.

7.1.4 Save and Restart

Because the code takes such a long time to run, a useful computational idea is to set up the simulation so that the entire state of the system can be saved and then restarted at a later time. As such, the data generation can be chopped into smaller sections as required. For example, the first 100 points of a data set could be generated in one session and then another session could be run at a later date to create a second 100 points for the same trial. This should be done in such a manner that the data are identical whether they were created in two 100-point sessions or one 200-point session.

7.1.5 Hysteresis

An important characteristic common to first order phase transitions is a hysteresis effect.
Such an effect exists in the phase transition region of this system. If the system temperature is changed, the system will hold the properties of the previous state for some time. Figure 7.2 is an illustration of the hysteresis effect for a four dimensional version of the BFSS model. Figure 7.2a plots the expectation value of the Polyakov loop over 500 sweeps while changing $\beta$ in increments of 0.1 and overlays a comparison between increasing $\beta$ and decreasing $\beta$. In the transition region, there is a clear difference in the measured values depending upon which direction the temperature change came from. Figure 7.2b shows the same thing, but now twice as many sweeps are run before moving to a new temperature. The result is that the hysteresis effect is kept to a smaller temperature region. Thus the lag effect begins to dissipate after an increased number of sweeps.

Figure 7.2: Hysteresis effects in the phase transition region. Here $N = 40$, $T = 20$, $d = 4$, $\mu = 0$, $\lambda = 1$ and $a_{lat} = \beta/T$. a) Expectation value of the Polyakov loop operator while increasing or decreasing $\beta$, where each data point is the average over 500 sweeps. While increasing $\beta$, the system tends to hold the deconfined state longer and while decreasing $\beta$ it tends to hold the confined state. b) Expectation value of the Polyakov loop operator while increasing or decreasing $\beta$, where each data point is the average over 1000 sweeps. Again, the system tends to hold its state, but the region for this is smaller because more sweeps have elapsed between changes in temperature.

The hysteresis effect is important because it shows that it is necessary to be careful in how measurements are taken and that it is necessary to bring the system to a good equilibrium before doing so.
To avoid having this effect interfere with the final results, such as for the data in figure 7.1, a separate simulation is run for each value of $\beta$ and data are not generated by moving from one temperature to the next.

7.2 Fitting

This section develops a procedure for systematically identifying the phase transition temperature, such as for the data in figure 7.1. The idea is to fit a curve to the phase transition data, for which one of the fitting parameters is the phase transition temperature $\beta_c$. The result of the fit should be the value of $\beta_c$ given in some confidence interval.

The actual function for the Polyakov expectation value curve is unknown and instead the following fitting model is presented. It is motivated by what the transition should look like in the large $N$ limit. In this limit, it is expected that the graph will contain a true step function, but at finite $N$ what is seen instead is some sort of smeared step function. The solution is to fit the transition with an analytic step function that has a limiting parameter to control the sharpness of the step and match the smoothing effect seen in the finite $N$ cases. The choice of step function for this purpose is

$$\theta(\beta - \beta_c) := \frac{1}{2} + \frac{1}{\pi} \arctan\left( k (\beta_c - \beta) \right). \qquad (7.3)$$

Here $\beta_c$ is the phase transition temperature that is the focus of the fit and $k$ is the parameter that controls the sharpness of the step. This function approaches 1 for $\beta < \beta_c$, 0 for $\beta > \beta_c$ and has a value of 0.5 at $\beta_c$. There are other choices of step function available, but this one has been chosen because the step slopes off in a similar manner to what is seen in the data.

Next it is necessary to propose a suitable model for the remainder of the curve. Our choice for this is the three-parameter function $f(\beta)$ of equation (7.4), with parameters $a_1$, $a_2$ and $a_3$. This function goes to one at $\beta = 0$ and to zero at $\beta = a_1$, above which its value is set to zero. Further, the restriction that $a_1 > \beta_c$ is imposed, since it is necessary that this function project slightly past the phase transition value. Other choices of fitting function may be possible. This particular fitting function has been chosen because it serves the purpose of trying to extract information about the phase transition location from the data.

The fit is then a five parameter model. The complete fitting function is $f(\beta) \cdot \theta(\beta - \beta_c)$, where the parameters are $a_1$, $a_2$, $a_3$, $k$ and $\beta_c$. The particular values of the first four parameters are not that important; the main interest is in the value of $\beta_c$. A chi-square test is used to identify the goodness of fit and a simplex algorithm is used to do the fitting. The simplex technique is a multidimensional fitting routine that wanders around the parameter space trying to find the best fit. It can be prone to finding local minima, so it is necessary to do multiple restarts in order to find the global minimum.

Figure 7.3 shows sample fits for $N = 20$ and $N = 40$. The fits are a reasonable match to the shape of the phase transition and the analytic step function has the desired behaviour of smoothing out the phase step. The location of the phase transition corresponds to when $\theta(\beta - \beta_c) = 0.5$. For $N = 40$, the $f(\beta)$ curve is much closer to the data and the analytic step function is steeper than it is in the $N = 20$ case.

7.2.1 Bootstrap Error

An estimate of the statistical error in the fitting process can be found using bootstrap errors. The bootstrap technique works by re-sampling and refitting the original data multiple times. Each data point on figure 7.3 was the average of $n$ (in this case $n = 200$) sample points. This data, when fitted, gives a value $\beta_c^{(0)}$ for the critical temperature. To run the bootstrap, $n$ points are chosen at random from the original data, where repetitions are allowed.
This produces a new data set. The fitting procedure can then be repeated, leading to a value βc^(j) for the phase transition temperature. This re-sampling/refitting procedure is repeated m times (throughout, m = 200 is used) to produce a set of possible phase transition values {βc^(j)} for j = 1, ..., m. The idea is that these values should form a distribution around the original value βc^(0). The statistical error is then taken as the 66% confidence interval of this distribution.

Figure 7.4 shows the distribution of the possible fit values of βc. The case N = 20 is compared to N = 40 to get an idea of the error in the fit. The resulting distributions are fairly narrow, and there is reasonable overlap between the two cases, which is encouraging. The shape of the N = 40 distribution is quite good, and it is roughly centred about the value of βc^(0). The N = 20 distribution is a little problematic because it appears to be bimodal. A closer inspection of the N = 20 fitting showed that part of the problem is that the gaps in β between some of the data points in figure 7.3a were a bit too large, causing the fit distribution to appear discrete rather than continuous. The small number of points used in averaging the data may also be a factor. Ideally one would use on the order of a thousand bootstrap samples, which would act to smooth out such irregularities.

Figure 7.3: Fit of the phase transition for N = 20 and N = 40. Here T = 20, d = 9, μ = 1, λ = 1 and a_lat = β/T. Each data point is the average of 200 sub-points. a) N = 20 case. The data are plotted as points with error bars, and the fit is plotted as a line over the data. The graph also shows the fit function f(β)
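The resampling loop itself is short. A sketch, with the mean of a synthetic sample standing in for the full simplex refit that returns βc:

```python
import random

def bootstrap(sample, statistic, m=200, coverage=0.66, seed=2):
    """Re-sample with replacement m times, recompute the statistic each
    time, and return the central `coverage` interval of the resulting
    distribution (66%, as in the text)."""
    rng = random.Random(seed)
    n = len(sample)
    values = sorted(statistic([rng.choice(sample) for _ in range(n)])
                    for _ in range(m))
    lo_idx = int(m * (1.0 - coverage) / 2.0)
    hi_idx = m - 1 - lo_idx
    return values[lo_idx], values[hi_idx]

# example: 66% bootstrap interval for the mean of synthetic measurements
data = [1.05, 0.98, 1.02, 1.10, 0.95, 1.01, 0.99, 1.04, 0.97, 1.03]
lo, hi = bootstrap(data, lambda s: sum(s) / len(s))
```

In the thesis the statistic is the entire refit, so each of the m iterations is itself a full simplex minimization.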
and the step function θ(β − βc). The fitted location of βc corresponds to where θ(β − βc) = 0.5, and this falls firmly in the middle of the phase transition region. b) N = 40 case. The graph is presented in the same way as the N = 20 case. The step function is now steeper, and the function f(β) fits the data more closely and is steeper as well.

Figure 7.4: Histogram showing the distribution of bootstrap fits for βc. Here T = 20, d = 9, μ = 1, λ = 1, a_lat = β/T, and 200 bootstrap samples were done for each case. The curves are normalized to 1 count per unit β. The N = 40 distribution has a single peak, with the confidence interval shown above it. The N = 20 distribution partly overlaps the N = 40 distribution, but has a larger confidence interval and appears to be bimodal.

These problems diminish as the distribution of the data gets sharper, and for N = 40 the bootstrap fitting is already fairly consistent. In that case the error is already fairly small, about ±0.0003 about the critical temperature. To complement this statistical error, it is also necessary to estimate the systematic error in the simulation.

7.2.2 Systematic Error

Some systematic error is expected in this approach. The choice of fitting model, the lattice size and the size of N may each shift the results in one direction or another, and it is not necessarily clear which of these factors contributes at any given time. To get an idea of how much systematic error exists, the fitting procedure is applied to the data for the harmonic potential model from chapter 6. In contrast to the example studied thus far, this model has an exact solution (μβc = ln d), so it is interesting to see how close a fit comes to the exact value.

Table 7.2 gives the results of the fit to the data shown in figure 6.1. The difference between the fit and the exact result is about 0.002.
This is about ten times greater than the statistical error, which suggests that the technique for identifying the phase transition is precise, but that the process as a whole is somewhat less accurate.

Table 7.2: Comparison of the fit of βc for the harmonic potential to the exact result.
    Exact:      βc = ln(9)/2 = 1.0986
    Fit:        1.1007 ± 0.0001
    Difference: about 0.002
Here N = 40, T = 20, d = 9, μ = 2, λ = 0 and a_lat = β/T. Note that this is a different case from the one used for the bootstrap fits: here there is no commutator coupling, and as such the exact value is known. The difference between the exact value and the fitted value, about 0.002, is an order of magnitude greater than the statistical error found in the fit.

The value calculated in the table is a reasonable estimate of the systematic effects caused by the lattice spacing and the size of N. When the commutator interaction is included, the overall systematic effects are not expected to be much different. To estimate the systematic error due to the fitting procedure itself, some of the final data from the following sections was refit using different step functions and different fit functions. The worst-case systematic error in βc found this way was about ±0.004, comparable to the result shown in table 7.2. Thus a systematic error of ±0.004, combined with the statistical error, will serve as the error estimate for the fitting process.

7.3 BFSS Model

It is now possible to run and fit the full simulation for the quenched BFSS model. One way of studying this model is to set μ = 0 and run it for different numbers of spatial dimensions. Figure 7.5 shows the Polyakov loop expectation value for varying dimension. Dimensions five through eight are not shown because they clutter the image; their curves simply lie between those for d = 4 and d = 9. Each curve has the characteristic phase shape that has been seen for this model.
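The numbers in table 7.2 are easy to reproduce: with d = 9, μ = 2 and λ = 0 the exact transition sits at βc = ln d / μ.

```python
import math

beta_c_exact = math.log(9) / 2.0  # beta_c = ln d / mu with d = 9, mu = 2
beta_c_fit = 1.1007               # fitted value quoted in table 7.2
difference = beta_c_fit - beta_c_exact
print(round(beta_c_exact, 4), round(difference, 4))  # prints: 1.0986 0.0021
```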
The effect of varying the dimension is that the phase transition is pushed to higher β (lower temperature) as the dimension increases, with the shift lessening for each successive dimension. For the main case of interest, nine dimensions, the data are quite good and the errors are fairly small. For dimensions two through four, the data have slightly larger errors because autocorrelation effects have not been completely eliminated. The curve for each dimension can now be fit to find the location of the phase transition.

Figure 7.5: Polyakov loop expectation value in the BFSS model for different dimensions (d = 2, 3, 4 and 9). Here N = 40, T = 20, μ = 0, λ = 1 and a_lat = β/T. Each data point is the average of 200 Monte Carlo points, separated by 500 sweeps, and the data are plotted as points with error bars. The curve for each dimension shows the phase-step behaviour previously seen in the model. For increasing dimension, the phase transition shifts to a higher value of β.

With μ = 0, the model actually has only one parameter. The action can be rescaled with the transformation dt → β dt' and X → √β X', giving

    S_E = (1/2) ∫₀¹ dt' tr( Ẋ'² − (λβ³/2) [X'^i, X'^j]² ).    (7.5)

The fits of the data can therefore be expressed in terms of this one-parameter model. Figure 7.6 plots the critical value of the coupling against the dimension: the critical coupling increases with dimension. The size of the errors in the figure is predominantly a result of the systematic error. Some attempts were made to fit this curve as a function of d; its shape has some similarity to the logarithm of d, although nothing conclusive was found.
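The one-parameter claim can be checked term by term. With t = β t' and X = √β X' (so dt = β dt' and Ẋ = β^{−1/2} Ẋ'):

```latex
\int_0^\beta dt\,\mathrm{tr}\,\dot X^2
  = \beta \cdot \frac{1}{\beta} \int_0^1 dt'\,\mathrm{tr}\,\dot X'^2
  = \int_0^1 dt'\,\mathrm{tr}\,\dot X'^2,
\qquad
\frac{\lambda}{2}\int_0^\beta dt\,\mathrm{tr}\,[X^i,X^j]^2
  = \beta\cdot\beta^2\cdot\frac{\lambda}{2}\int_0^1 dt'\,\mathrm{tr}\,[X'^i,X'^j]^2
  = \frac{\lambda\beta^3}{2}\int_0^1 dt'\,\mathrm{tr}\,[X'^i,X'^j]^2.
```

The kinetic term loses its β dependence entirely, so the only surviving parameter is λβ³, which is why the transition location can be quoted as λ^{1/3} βc.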
Figure 7.6: Phase diagram for varying dimension in the BFSS model with μ = 0, showing the confined and deconfined regions. The points are obtained from the fits to the data shown in figure 7.5. The critical coupling increases with dimension. The errors on the graph are predominantly due to the estimate of the systematic error in the system.

From the curve in figure 7.6 it is possible to make a first estimate of what the behaviour of fermions should be. If the fermions decouple in this temperature region, their effect should be similar to having 25 dimensions of bosons, which does not look to push the critical temperature much beyond λ^(1/3) βc = 1.2 - 1.4. The supersymmetric prediction, by contrast, is that the transition temperature should go all the way down to zero. For convenience, the numerical nine-dimensional result is displayed in table 7.3. Trivially, the transition has the correct dependence on the dimensionless coupling λ^(1/3)/T.

Table 7.3: Location of the nine-dimensional phase transition in the BFSS model.
    λ^(1/3) βc = 1.108 (statistical error ±0.0002, systematic error ±0.004)

7.4 Commutator & Mass Term Potential

Alternatively, simulations can be run using both the mass term and the commutator term. Data for these cases are shown in figure 7.7. This is now a two-parameter model, and the action can be rescaled in the same way as in the previous section. The action then has the form

    S_E = (1/2) ∫₀¹ dt' tr( Ẋ'² + μ²β² X'² − (λβ³/2) [X'^i, X'^j]² ).    (7.6)

It is possible to use the fits for βc to construct a phase diagram with the mass parameter on one axis and the commutator coupling on the other. Figure 7.8 shows the phase transition line as these two parameters are varied. The point on the y-axis is an overlay of the simulation result with the exact value. The points define a smooth separation between the confined and deconfined regions.
Figure 7.7: Polyakov loop expectation value for fixed values of mass and commutator coupling (μ = 0, 1, 2, 3, each with λ = 1). Here N = 40, T = 20, d = 9 and a_lat = β/T. The data are plotted as points with error bars, and each data point is the average of 200 Monte Carlo data points. Each curve has the same characteristic shape that has been seen for this model. The effect of increasing the mass is to make the phase transition steeper and to shift the transition to a lower value of β.

Figure 7.8: Phase diagram comparing mass coupling to commutator coupling. The points on the graph are the results of fits to the data shown in figure 7.7. The data are plotted with xy error bars and form a relatively smooth curve. The exact value for λ = 0 is plotted as a large circle, which is overlapped by the value from the simulation.

Chapter 8

Quenched Plane Wave Matrix Model

As a further extension, a simulation can be set up for the matrix model on a pp-wave background. The Euclidean action for this model, with fermions removed and 't Hooft coupling, is given in equation (8.1) below, where I, J, K = 1, 2, 3 and I' = 4, ..., 9. The system can be put on the lattice and a Monte Carlo simulation run using the auxiliary field heat bath technique. The auxiliary field is introduced for the commutator term in the same manner as before. The only differences between this model and those of the previous chapter are the separation of the spatial dimensions into two groups and the addition of an antisymmetric term, both of which require only small modifications. For convenience, the entire heat bath is repeated here.
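All of the updates below share one schematic form: each element is refreshed by a single draw from its Gaussian conditional distribution. In the sketch, the names alpha and rho are generic placeholders for the coefficient and mean assembled from the neighbour, mass and auxiliary-field terms; they are not the thesis's exact expressions.

```python
import random

def heat_bath_draw(alpha, rho, rng):
    """One heat-bath refresh of a variable whose conditional action is
    S(x) = alpha * (x - rho)**2 + const, i.e. x ~ N(rho, 1/(2*alpha))."""
    return rng.gauss(rho, (2.0 * alpha) ** -0.5)

# demo: the draws reproduce the target mean and variance
rng = random.Random(3)
samples = [heat_bath_draw(2.0, 1.0, rng) for _ in range(40000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Unlike a Metropolis step, the draw is always accepted, which is what makes the heat bath attractive whenever the conditional distribution is Gaussian.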
    S_E^pp = (1/2) ∫₀^β dt tr[ (Ẋ^I)² + (Ẋ^{I'})² + (μ/3)² (X^I)² + (μ/6)² (X^{I'})² + (2iμ√λ/3) ε_{IJK} X^I X^J X^K − (λ/2) [X^i, X^j]² ].    (8.1)

8.1 PP-Wave Heat Bath

8.1.1 Auxiliary Fields

Each of the auxiliary field elements is updated as (Q^{ij})_ab = γσ + ρ_ab (for fixed a, b and t), where

    σ^(diag) = 1/√(a_lat λ_N),    σ^(off-diag) = 1/√(2 a_lat λ_N),    ρ = {X^i, X^j},    (8.2)

with λ_N the rescaled coupling defined as before and γ a unit Gaussian random number. All auxiliary fields can be updated simultaneously.

8.1.2 Diagonal Elements

For a diagonal element the procedure is to update (X^i)_aa = γσ_i + ρ_aa (with i, a and t fixed). The width σ_i and mean ρ_aa have the same structure as in the commutator heat bath of chapter 4, with two modifications. First, the width acquires a mass contribution, a_lat(μ/3)² for i = I = 1, 2, 3 and a_lat(μ/6)² for i = I' = 4, ..., 9. Second, for i = I = 1, 2, 3 the mean acquires a term proportional to iμ[X^J, X^K]_aa from the cubic term, where I, J, K is an even permutation of 1, 2, 3; the I' directions receive no such term. As before, ρ_aa collects the nearest-neighbour terms (X^i_{t+1} + X^i_{t−1})_aa and the auxiliary-field couplings {X^j, Q^{ij}}_aa, while the coefficient α_i collects 2/a_lat, the 4λ_N Σ_j (X^{j2})_aa auxiliary contribution and the mass term. The antisymmetric term can also be written as [X^J, X^K]_aa = 2i Im(X^J X^K)_aa.

8.1.3 Off-Diagonal Elements

For the off-diagonal elements the update is (X^i)_ab = γσ_i + ρ_ab (for i, a, b and t fixed), with the same case structure: the I = 1, 2, 3 directions pick up the iμ[X^J, X^K]_ab contribution and the a_lat(μ/3)² mass term, while the I' = 4, ..., 9 directions pick up the a_lat(μ/6)² mass term. The mean ρ_ab again collects the nearest-neighbour terms, the sums Σ_{c≠a,b} [ X^j_ac (X^j)_cb + (X^j)_ac X^j_cb ] and the auxiliary couplings {X^j, Q^{ij}}_ab, while the width involves 2/a_lat + 2λ_N Σ_j [ (X^{j2})_aa + (X^{j2})_bb ] plus the mass term.

8.2 PP-Wave Phase Diagram

This model can be studied by investigating the behaviour of the Polyakov loop. This is done by fixing the values of μ and λ and then varying β. Figure 8.1 shows the results of the simulation for this model. A phase transition behaviour similar to that seen before is visible. For μ = 0, the rightmost curve on the graph, the model reduces to the BFSS model. For the other three curves, the errors in the data points near the phase transition region are quite large, probably due to the effect of the antisymmetric term. Nonetheless, the transition region is still fairly well defined and it is possible to repeat the fitting procedure outlined in the previous chapter. With 't Hooft coupling, this model also behaves as a two-parameter model.
Rescaling, the action takes the form

    S_E^pp = (1/2) ∫₀¹ dt' tr[ (Ẋ'^I)² + (Ẋ'^{I'})² + (μβ/3)² (X'^I)² + (μβ/6)² (X'^{I'})² + (2i/3) μβ √(λβ³) ε_{IJK} X'^I X'^J X'^K − (λβ³/2) [X'^i, X'^j]² ],    (8.5)

so the two parameters are μβ and λβ³. Using the fitting procedure from the previous chapter, it is possible to plot the phase diagram of this model for varying mass and commutator coupling; this is shown in figure 8.2. When there is no commutator coupling, the model is just a harmonic potential with two different masses, μ/3 and μ/6, and its phase transition is the solution of the quadratic condition for condensation of the first mode,

    3 e^{−μβ/3} + 6 e^{−μβ/6} = 1.    (8.6)

Figure 8.1: Polyakov loop expectation value as β is varied in the pp-wave model, for μ = 0, 6, 12 and 18 with λ = 1. Here N = 40, T = 20, d = 9, a_lat = β/T, and each data point is the average of 200 sample points. Each of the four curves shows a step-like jump over some small range of β. The μ = 0 case reduces to the BFSS model. The other three curves have fairly large errors in the phase transition region, but the location of the phase transition is nonetheless fairly well defined.

Figure 8.2: Phase diagram for the pp-wave model, separating the confined and deconfined regions. The data are plotted with xy error bars and are obtained by fitting the phase transition data displayed in figure 8.1. The exact value for zero commutator coupling is plotted as a circle on the y-axis, and its location is in line with the curve created by the other four points.

Thus, for zero commutator coupling, the phase transition should be at

    μβc = −6 ln( 2/√3 − 1 )    (8.7)
        = 11.12.

This point is also plotted in figure 8.2, in line with the curve created by the other four points. Altogether the curve has a shape similar to the phase diagram of chapter 7. With fermions included and no commutator coupling, the exact value of μβc can be calculated as [3]:

    μβc = 12 ln 3    (8.8)
        = 13.18.
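The comparison between equations (8.7) and (8.8) can be checked directly from the quoted values:

```python
import math

mu_beta_c_fermionic = 12.0 * math.log(3.0)  # exact result with fermions, eq. (8.8)
mu_beta_c_bosonic = 11.12                   # quenched value quoted in eq. (8.7)
rel_diff = (mu_beta_c_fermionic - mu_beta_c_bosonic) / mu_beta_c_fermionic
print(round(mu_beta_c_fermionic, 2), round(100 * rel_diff))  # prints: 13.18 16
```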
This is a 16% difference from the bosonic case. Thus, for commutator coupling small relative to the field mass, it can be expected that the quenched Monte Carlo simulations are representative of the system with fermions included. However, this gives no evidence about the effect of the fermions at stronger commutator coupling.

Chapter 9

Conclusion

The goal of this research was to construct a working model for the bosonic part of the BFSS model, and this has been achieved. Using the Polyakov loop to investigate the large N behaviour of this model at different temperatures, it is possible to see a Hagedorn/deconfinement transition clearly in the quenched form of the model. The large-N behaviour suggests that the transition will be first order in the large N limit. The quality of the data generated in the simulation is quite good, and it is possible to identify the transition location with good accuracy. Here the final calculations were done for N = 40, but step-like behaviour is already observable at lower values such as N = 20, although this makes the transition more gradual.

A question still unanswered is whether the deconfined state found in the phase diagram corresponds to collapse to a black hole. The black hole state is expected to exist in the low temperature regime, and it is not clear whether the transition temperature found here is low enough. The inclusion of fermions should have the effect of pushing the phase transition to a lower temperature, much like adding extra bosonic dimensions, but how far remains unclear.

A fair amount of future work is possible from these results. For instance, from the path integral Monte Carlo setup it is a straightforward extension to create a numerical calculation of the entropy or free energy [26, 27], the results of which could then be compared to the appropriate black hole entropy or free energy.
The most important extension of this work will be an accurate description of the effect of fermions. This may be possible through an approximation of the fermion determinant or, with enough innovative ideas, through a full dynamical simulation of the BFSS model. Further analysis of the fermion behaviour will ultimately lead to a better understanding of this and related models.

Bibliography

[1] Duff, M. J., M-theory (The Theory Formerly Known as Strings), Int. J. Mod. Phys. A 11 (1996) 5623 [hep-th/9608117].
[2] Banks, T., Fischler, W., Shenker, S. H., Susskind, L., M Theory as a Matrix Model: A Conjecture, Phys. Rev. D 55 (1997) 5112 [hep-th/9610043].
[3] Semenoff, G. W., Matrix Model Thermodynamics, hep-th/0405107.
[4] Klebanov, I. R., Susskind, L., Schwarzschild Black Holes in Various Dimensions from Matrix Theory, Phys. Lett. B 416 (1998) 62 [hep-th/9709108].
[5] Banks, T., Fischler, W., Klebanov, I. R., Susskind, L., Schwarzschild Black Holes in Matrix Theory II, JHEP 9801 (1998) 008 [hep-th/9711005].
[6] Kabat, D., Lifschytz, G., Lowe, D. A., Black Hole Thermodynamics from Calculations in Strongly Coupled Gauge Theory, Phys. Rev. Lett. 86 (2001) 1426; Int. J. Mod. Phys. A 16 (2001) 856 [hep-th/0007051].
[7] Kabat, D., Lifschytz, G., Lowe, D. A., Black Hole Entropy from Non-Perturbative Gauge Theory, Phys. Rev. D 64 (2001) 124015 [hep-th/0105171].
[8] 't Hooft, G., A Planar Diagram Theory for Strong Interactions, Nucl. Phys. B 72 (1974) 461.
[9] Furuuchi, K., Schreiber, E., Semenoff, G. W., Five-Brane Thermodynamics from the Matrix Model, hep-th/0310286.
[10] Gross, D. J., Witten, E., Possible Third-Order Phase Transition in the Large-N Lattice Gauge Theory, Phys. Rev. D 21 (1980) 446.
[11] Sundborg, B., The Hagedorn Transition, Deconfinement and N=4 SYM Theory, Nucl. Phys. B 573 (2000) 349 [hep-th/9908001].
[12] Polyakov, A. M., Gauge Fields and Space-Time, Int. J. Mod. Phys. A 17S1 (2002) 119 [hep-th/0110196].
[13] Aharony, O., Marsano, J., Minwalla, S., Papadodimas, K., Van Raamsdonk, M., The Hagedorn/Deconfinement Phase Transition in Weakly Coupled Large N Gauge Theories, Adv. Theor. Math. Phys. 8 (2004) 603 [hep-th/0310285].
[14] Polyakov, A. M., Thermal Properties of Gauge Fields and Quark Liberation, Phys. Lett. B 72 (1978) 477.
[15] Bialas, P., Wosiek, J., Lattice Study of the Simplified Model of M-Theory for Larger Gauge Groups, hep-lat/0109031.
[16] Berenstein, D., Maldacena, J., Nastase, H., Strings in Flat Space and PP Waves from N=4 Super Yang Mills, JHEP 0204 (2002) 013 [hep-th/0202021].
[17] Kawahara, N., Nishimura, J., Yoshida, K., Dynamical Aspects of the Plane-Wave Matrix Model at Finite Temperature, JHEP 0606 (2006) 052 [hep-th/0601170].
[18] Peskin, M. E., Schroeder, D. V., An Introduction to Quantum Field Theory (Westview Press, Boulder, Colorado, 1995).
[19] Zee, A., Quantum Field Theory in a Nutshell (Princeton University Press, Princeton, 2003).
[20] Press, W. H., et al., Numerical Recipes in C (Cambridge University Press, Cambridge, 1992).
[21] Rothe, H. J., Lattice Gauge Theories: An Introduction, 3rd ed. (World Scientific, Singapore, 2005).
[22] Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., Teller, E., Equation of State Calculations by Fast Computing Machines, J. Chem. Phys. 21 (1953) 1087.
[23] Montvay, I., Munster, G., Quantum Fields on a Lattice (Cambridge University Press, Cambridge, 1994).
[24] Kawahara, N., Nishimura, J., The Large N Reduction in Matrix Quantum Mechanics: A Bridge between BFSS and IKKT, JHEP 0509 (2005) 040 [hep-th/0505178].
[25] Hotta, T., Nishimura, J., Tsuchiya, A., Dynamical Aspects of Large N Reduced Models, Nucl. Phys. B 545 (1999) 543 [hep-th/9811220].
[26] Sogo, K., Kimura, N., A Monte Carlo Computation of the Entropy, the Free Energy, and the Effective Potential. Use of Distribution Functions, Phys. Lett. A 115 (1986) 221.
[27] Kimura, N., Sogo, K., A Monte Carlo Computation of "Entropy" and "Free Energy" for the Z(3) Lattice Gauge Model, Phys. Lett. B 178 (1986) 84.
[28] Gradshteyn, I. S., Ryzhik, I. M., Table of Integrals, Series, and Products, 6th ed. (Academic Press, San Diego, 2000).

Appendix A

Analytic Calculations for Matrix Integrals

This appendix outlines analytic calculations related to this problem. To begin, the path integral for the one-dimensional harmonic potential is solved. This is then extended to a complex harmonic potential in N dimensions. Finally, an exact expression involving the harmonic potential with gauge variables is derived.

A.1 One-Dimensional Harmonic Potential

To begin, consider a simple scalar field harmonic potential with periodic boundary conditions. The Euclidean action is

    S_E = (1/2) ∫₀^β dt ( φ̇² + μ² φ² ),    (A.1)

with periodic boundary conditions φ(t + β) = φ(t). The partition function is then

    Z = ∫ [dφ] e^{−S_E}.    (A.2)

This path integral can be solved in two ways: either by using a discrete Fourier transform or by using a functional determinant. Both methods are presented.

A.1.1 Solving via Discrete Fourier Transform

The standard prescription for evaluating a path integral is to divide the time interval into small slices and then integrate over the field variables at each slice. For this, the time variable is discretized into a lattice of T points and each φ_t is integrated over. The prescription for doing so is

    φ(t) → φ_t,    ∫₀^β dτ → a Σ_{t=1}^T,    dφ/dτ → (φ_t − φ_{t−1})/a.    (A.3)

With this transformation, the partition function is

    Z = ( Π_{t=1}^T ∫ dφ_t ) exp( −(a/2) Σ_t [ ((φ_t − φ_{t−1})/a)² + μ² φ_t² ] ),    (A.4)

with boundary conditions φ_{t+T} = φ_t.

This path integral can be solved analytically by finding the eigenfunctions of the action and boundary conditions. The appropriate transformation is the discrete Fourier transform

    φ_t = (1/√T) Σ_k φ̃_k e^{2πikt/T}    (A.5)

with inverse transform

    φ̃_k = (1/√T) Σ_{t=1}^T φ_t e^{−2πikt/T}.    (A.6)

Applying this transformation and making use of the identity

    (1/T) Σ_t e^{2πi(k−k')t/T} = δ_{kk'}    (A.7)
gives

    Π_{t'=1}^T ∫ dφ_{t'} = Π_{k'=1}^T ∫ dφ̃_{k'},    (A.8)

    Σ_t φ_t² = Σ_k |φ̃_k|²    (A.9)

and

    Σ_t (φ_t − φ_{t−1})² = Σ_k 4 sin²(πk/T) |φ̃_k|².    (A.10)

Thus, after the transformation, the partition function can be written as

    Z = ( Π_{k'=1}^T ∫ dφ̃_{k'} ) exp( −(1/2) Σ_k [ (4/a) sin²(πk/T) + aμ² ] |φ̃_k|² ).    (A.11)

Letting A_k = (4/a) sin²(πk/T) + aμ² gives

    Z = ( Π_{k'=1}^T ∫ dφ̃_{k'} ) e^{ −(1/2) Σ_{k=1}^T φ̃_k* A_k φ̃_k }.    (A.12)

Since φ̃_k is complex, this integral only makes sense if the measure is actually dφ̃_k dφ̃_k*. The situation is remedied by changing some of the φ̃_k modes to complex conjugate modes. Since φ_t is real, φ̃_k = φ̃_{−k}*, and applying the periodic boundary conditions gives φ̃_k = φ̃_{T−k}*. Thus the top half of the modes can be written in terms of equivalent complex conjugate modes, and the path integral becomes

    Z = ( Π_{k'=1}^{T/2} ∫ dφ̃_{k'} dφ̃_{k'}* ) e^{ −Σ_{k=1}^{T/2} φ̃_k* A_k φ̃_k }.    (A.13)

This expression can be solved using basic Gaussian integrals. To be thorough, the calculation is done in terms of the component variables of φ̃_k. Writing φ̃_k = a_k + i b_k,

    ( φ̃_k, φ̃_k* )ᵀ = ( (1, i), (1, −i) ) ( a_k, b_k )ᵀ,    (A.14)

so the change of measure contributes a constant Jacobian factor of 2 per mode, and

    Z = ( Π_{k'=1}^{T/2} 2 ∫ da_{k'} db_{k'} ) e^{ −Σ_{k=1}^{T/2} (a_k² + b_k²) A_k }.    (A.15)

The integration factorizes mode by mode, leaving a product of T/2 two-dimensional Gaussian integrals:

    Z = Π_{k=1}^{T/2} 2 ∫ da_k db_k e^{ −(a_k² + b_k²) A_k }.    (A.16)

Using

    ∫ da db e^{ −(a² + b²) A } = π/A,    (A.17)

the result is

    Z = Π_{k=1}^{T/2} 2π/A_k.    (A.18)

This is the value of the partition function obtained by dividing the integral into time slices. Ideally it should be possible to restore the continuum limit a → 0, T → ∞ with β held fixed.

A.1.2 Solving via Functional Determinant

An alternative way to solve this problem is to leave the fields continuous and express the partition function as a functional determinant:

    Z = ∫ [dφ] e^{ −(1/2) ∫₀^β dt (φ̇² + μ²φ²) } = 1 / det^{1/2}( −d²/dt² + μ² ).    (A.19)
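As an aside, the diagonalization in equation (A.10) is easy to verify numerically for a random periodic configuration, using the unitary transform convention of (A.5)-(A.6):

```python
import cmath, math, random

def dft(phi):
    # unitary DFT of eq. (A.6): phi~_k = (1/sqrt(T)) sum_t phi_t e^{-2 pi i k t / T}
    T = len(phi)
    return [sum(phi[t] * cmath.exp(-2j * math.pi * k * t / T) for t in range(T))
            / math.sqrt(T) for k in range(T)]

rng = random.Random(4)
phi = [rng.gauss(0.0, 1.0) for _ in range(8)]   # one periodic lattice configuration
phi_k = dft(phi)

# left side: nearest-neighbour differences; phi[-1] wraps around periodically
lhs = sum((phi[t] - phi[t - 1]) ** 2 for t in range(len(phi)))
# right side: 4 sin^2(pi k / T) |phi~_k|^2, as in eq. (A.10)
rhs = sum(4.0 * math.sin(math.pi * k / len(phi)) ** 2 * abs(phi_k[k]) ** 2
          for k in range(len(phi)))
```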
The solution is then the product of the eigenvalues of the determinant, found by solving ( −d²/dt² + μ² ) ψ = λ_n ψ. The eigenfunctions satisfying this equation and the periodic boundary conditions are ψ = e^{2πint/β}, and thus λ_n = (2πn/β)² + μ² for n ∈ Z. Note that these are the same eigenfunctions needed in the previous section, except that there is now no restriction on how large n can be; the two techniques are essentially different ways of doing the same thing. The resulting partition function is

    Z = Π_{n=−∞}^{∞} [ (2πn/β)² + μ² ]^{−1/2}.    (A.20)

Using a product formula from the Gradshteyn and Ryzhik table of integrals (p. 43) [28],

    sinh z = z Π_{n=1}^{∞} ( 1 + z²/(n²π²) ),    (A.21)

and manipulating the partition function to match the form of this product reveals that the partition function is a hyperbolic sine, to within a multiplicative constant:

    Z ∝ 1 / sinh(βμ/2).    (A.22)

The exact result for a quantum harmonic oscillator is Z = 1/(2 sinh(βμ/2)), so the necessary constant in this expression is 1/2. This expression can also be derived using a contour integral technique, but the details are not elaborated upon here.

A.1.3 2-Point Correlation Function

It is often more practical to calculate the value of an observable rather than just the value of the partition function itself. For the one-dimensional harmonic potential, an observable is calculated as

    ⟨O⟩ = (1/Z) ∫ [dφ] O e^{ −(1/2) ∫₀^β dt (φ̇² + μ²φ²) }.    (A.23)

For instance, consider the two-point correlation function

    ⟨O(n, m)⟩ = ⟨φ_n φ_m⟩.    (A.24)
Using the discrete Fourier transform from the previous section gives

    φ_n φ_m = (1/T) Σ_p Σ_{p'} e^{2πi(np − mp')/T} φ̃_p φ̃_{p'}*    (A.25)

and

    ⟨φ_n φ_m⟩ = (1/T) Σ_p Σ_{p'} e^{2πi(np − mp')/T} (1/Z) ( Π_{k'=1}^{T/2} ∫ dφ̃_{k'} dφ̃_{k'}* ) φ̃_p φ̃_{p'}* e^{ −Σ_k φ̃_k* A_k φ̃_k }.    (A.26)

Noting that when p ≠ p' the integration is over an odd function and thus vanishes, this reduces to

    ⟨φ_n φ_m⟩ = (1/T) Σ_p e^{2πi(n−m)p/T} (1/Z) ( Π_{k'=1}^{T/2} ∫ dφ̃_{k'} dφ̃_{k'}* ) |φ̃_p|² e^{ −Σ_k φ̃_k* A_k φ̃_k }.    (A.27)

The right-hand side can be evaluated using Gaussian integrals, as in the previous section:

    (1/Z) ( Π_{k'} ∫ dφ̃_{k'} dφ̃_{k'}* ) |φ̃_p|² e^{ −Σ_k φ̃_k* A_k φ̃_k } = 1/A_p.    (A.28)

After relabelling p → k and substituting in, the result is

    ⟨φ_n φ_m⟩ = (1/T) Σ_k e^{2πi(n−m)k/T} / [ (4/a) sin²(πk/T) + aμ² ].    (A.29)

This is the exact result for the 2-point correlation function that should appear in a Monte Carlo simulation of this problem.

A.2 Matrix Harmonic Potential

Next, consider a matrix harmonic potential with action

    S_E = (1/2) ∫₀^β dt tr( Ẋ²(t) + μ² X²(t) )    (A.30)

and periodic boundary conditions X(t + β) = X(t). Here the Xs are N × N Hermitian matrices; the system can easily be extended so that the Xs live in multiple spatial dimensions if desired. The partition function is

    Z = ∫ [dX] e^{ −(1/2) ∫₀^β dt tr( Ẋ² + μ² X² ) }.    (A.31)

To begin, the trace and integral measure are broken down into the components of the matrices:

    tr(X²) = X_ab X_ba = 2 Σ_{a<b} |X_ab|² + Σ_a (X_aa)²    (A.32)

and

    [dX] = ( Π_{a<b} dX_ab dX_ab* ) ( Π_a dX_aa ).    (A.33)

The partition function then factorizes over the matrix components,

    Z = ( Π_{a<b} ∫ dX_ab dX_ab* ) ( Π_a ∫ dX_aa ) e^{ −∫₀^β dt [ Σ_{a<b} ( |Ẋ_ab|² + μ²|X_ab|² ) + (1/2) Σ_a ( Ẋ_aa² + μ² X_aa² ) ] },    (A.34)

and the problem can be treated as an integral over (N² − N)/2 complex components and N real components. Performing a discrete Fourier transform as in the one-dimensional case shows that the partition function is just the product of N² one-dimensional harmonic potential partition functions:

    Z = Π_{k=1}^{T/2} ( 2π/A_k )^{N²}.    (A.35)
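The single-component propagator (A.29), which also underlies the matrix-trace correlator below, can be evaluated directly; the imaginary parts cancel pairwise between the k and T − k modes. A small sketch (the lattice parameters chosen here are arbitrary):

```python
import cmath, math

def two_point(n, m, T=16, a=0.25, mu=1.0):
    """Exact lattice 2-point function of eq. (A.29):
    <phi_n phi_m> = (1/T) sum_k e^{2 pi i (n-m) k / T} / A_k."""
    total = sum(cmath.exp(2j * math.pi * (n - m) * k / T)
                / ((4.0 / a) * math.sin(math.pi * k / T) ** 2 + a * mu * mu)
                for k in range(T))
    return (total / T).real   # imaginary parts cancel in the k-sum

# the correlator depends only on |n - m| and decays with separation
g0, g4, g8 = two_point(0, 0), two_point(4, 0), two_point(8, 0)
```

A Monte Carlo run for the discretized action should reproduce these values within statistical errors.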
The 2-point correlation function for the trace of the matrices is then just

    ⟨ tr[X(n) X(m)] ⟩ = (N²/T) Σ_k e^{2πi(n−m)k/T} / [ (4/a) sin²(πk/T) + aμ² ].    (A.36)

Alternatively, applying the functional determinant method gives

    Z ∝ ( 1 / sinh(βμ/2) )^{N²}    (A.37)

for the partition function.

A.3 Matrix Harmonic Potential With Diagonal Gauge - Exact Form

Finally, consider the addition of the gauge variables to the matrix harmonic oscillator. It is possible to derive an exact form for the matrix integration. The partition function is

    Z = ∫ dθ₁ ⋯ dθ_N Π_{a<b} sin²( (θ_a − θ_b)/2 ) ∫ [dX^i] e^{ −(1/2) ∫₀^β dt Σ_i tr( Ẋ^i² + μ_i² X^i² ) },    (A.38)

with boundary conditions X^i_ab(β) = X^i_ab(0) e^{i(θ_a − θ_b)}. Here each spatial dimension has been given a different mass coefficient μ_i. Decomposing the trace as in the previous section gives

    Z = ∫ dθ₁ ⋯ dθ_N Π_{a<b} sin²( (θ_a − θ_b)/2 ) Π_i ( Π_{a<b} ∫ dX^i_ab dX^i_ab* ) e^{ −∫₀^β dt ( |Ẋ^i_ab|² + μ_i² |X^i_ab|² ) },    (A.39)

where the diagonal terms X^i_aa can be disregarded as constants because they do not interact with the θs. At this point one could attempt to solve this integral using a discrete Fourier transform, but this does not lead to anything useful. Instead, consider the functional determinant approach. The partition function is then

    Z = ∫ dθ₁ ⋯ dθ_N Π_{a<b} sin²( (θ_a − θ_b)/2 ) Π_{i=1}^d Π_{a<b} 1 / det( −d²/dt² + μ_i² )_ab.    (A.40)

It is necessary to find solutions of ( −d²/dt² + μ_i² ) ψ_ab = λ_ab ψ_ab with boundary conditions ψ_ab(β) = e^{i(θ_a − θ_b)} ψ_ab(0). The eigenfunctions satisfying this equation are

    ψ_ab(t) = exp[ i ( 2πn + θ_a − θ_b ) t / β ],    (A.41)
To see this, consider the relation 7rcot(-7r^) = E z + n (A.45) Manipulating for z = gives that „ ( l T + A*') ^ c o t f ^ 2 V 2 (A.46) 81 Appendix A. Analytic Calculations for Matrix Integrals and thus (A.47) = e In sin ( £ f - ) + c i A * ' + C 2 Alternatively, this could be derived by matching the residues of equation (A.44) and per-forming a contour integration, but the details are not elaborated upon. Setting ci and c2 to zero, the partition function is Z = j d0i • • • d0jv JI s i n ( J a<b ^ d a - 0 b \ e _ S e f f : (A.48) where , S e f f nti I t f o sinh O - ^ ) - n h (Sp + ^ Simplifying this expression gives (A.49) d N >eff = EE i=l a<b d N = EEN1 l n s . ] h , _ i«>«-h)\ + l n s i n h + i(».-e>) Z Z I \ Z Z (A.50) ,-p~iii-i(6a-6 i=l a<b N ) ) + , „ ( 1 - e -0y,i+i(0a-4^N{N -\)a i=l d j > ( l - e - ^ + i ^ - ° » ) ) + N { N ~ l ) B p d . a^b In the last step all dimensions are taken to have the same mass parameter u. and the last term can be disregarded as a zero point energy. The final expression for integration over the matrices is then Z = f d6, • • • d6N H sin 2 f6—^) e - d ^ m ( i - e - ^ ^ - ^ ) . ( A . 5 1 ) •* a<b ^ ' The location of the phase transition in this model can be calculated exactly. To be-gin, the sine term is included back into the effective action, using r i a < 6 s i n 2 i^a~2h) ~ 82 Appendix A. Analytic Calculations for Matrix Integrals Ua^b I 1 - e^-6^). The effective action is then 5 ' e / / = E [ - l n ( l - e ^ ) ) + d l n ( l - -Pp,+i{6a.-eb) (A.52) where Z = J d9\ • • - dO^e S"ff- The behaviour of this system can be studied by using the power series expansion of the logarithm: - l n ( l - z ) = E - - (A.53) n = l Assuming that the first term in the effective action converges, the effective action is then S'eff ~ E n-l a=j£b N (A.54) As the temperature of the system is raised from zero, the first mode to condense is n — 1, which breaks the symmetry of the system. This will occur at 1 = de~^. 
As such, the system is in the deconfined phase when
\[
\mu\beta < \ln d. \tag{A.55}
\]
The first order of the logarithm can act as a first approximation to the above model, where \(\ln\!\left( 1 - e^{-\beta\mu + i(\theta_a - \theta_b)} \right) \approx -e^{-\beta\mu + i(\theta_a - \theta_b)}\). The partition function for this first-order approximation is then
\[
Z \approx \int d\theta_1 \cdots d\theta_N \prod_{a<b} \sin^2\!\left( \frac{\theta_a - \theta_b}{2} \right) \exp\!\left[ d\,e^{-\beta\mu} \sum_{a\neq b} e^{i(\theta_a - \theta_b)} \right]. \tag{A.56}
\]
This approximation will serve as part of the toy model in chapter 5.
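Since the first-order model (A.56) involves only the N angles, it can be sampled directly. The following is a minimal Metropolis sketch, not the simulation code of chapter 5: it samples \(e^{-S}\) with \(S(\theta) = -2\sum_{a<b}\ln|\sin((\theta_a-\theta_b)/2)| - d\,e^{-\beta\mu}\,|\sum_a e^{i\theta_a}|^2\) (constants dropped) and measures the Polyakov loop \(|u| = |\sum_a e^{i\theta_a}|/N\) on either side of the predicted transition at \(\mu\beta = \ln d\). The values of N, the sweep count, and the step size are illustrative and far smaller than the production runs:

```python
import cmath
import math
import random

def action(theta, coupling):
    """S = -2*sum_{a<b} ln|sin((t_a - t_b)/2)| - coupling*|sum_a e^{i t_a}|^2."""
    n = len(theta)
    s = 0.0
    for a in range(n):
        for b in range(a + 1, n):
            sab = abs(math.sin((theta[a] - theta[b]) / 2))
            if sab == 0.0:
                return float("inf")  # eigenvalue repulsion forbids coincident angles
            s -= 2.0 * math.log(sab)
    loop = sum(cmath.exp(1j * t) for t in theta)
    return s - coupling * abs(loop) ** 2

def polyakov_loop(d, beta_mu, n=10, sweeps=2000, step=0.5, seed=1):
    """Metropolis estimate of <|u|>, u = (1/N) sum_a e^{i theta_a}."""
    rng = random.Random(seed)
    coupling = d * math.exp(-beta_mu)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    s_cur = action(theta, coupling)
    samples = []
    for sweep in range(sweeps):
        for a in range(n):  # one sweep = one trial move per angle
            old = theta[a]
            theta[a] = (old + rng.uniform(-step, step)) % (2 * math.pi)
            s_new = action(theta, coupling)
            ds = s_new - s_cur
            if ds < 0 or rng.random() < math.exp(-ds):
                s_cur = s_new
            else:
                theta[a] = old
        if sweep >= sweeps // 2:  # discard the first half as thermalization
            samples.append(abs(sum(cmath.exp(1j * t) for t in theta)) / n)
    return sum(samples) / len(samples)

# d = 9: the predicted transition sits at beta*mu = ln 9, roughly 2.2.
print(polyakov_loop(9, beta_mu=0.5))  # deconfined side: |u| close to 1
print(polyakov_loop(9, beta_mu=5.0))  # confined side: |u| small
```

At this small N the step is heavily smoothed, as the abstract notes, but the loop is already near 1 well below the critical \(\mu\beta\) and near 0 well above it.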
Item Metadata

Title | Finite temperature phase transitions in the quenched BFSS matrix model |
Creator | Yewchuk, Simon Andrew |
Publisher | University of British Columbia |
Date Issued | 2007 |
Language | eng |
Degree | Master of Science - MSc |
Program | Physics |
DOI | 10.14288/1.0084945 |
URI | http://hdl.handle.net/2429/31394 |
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use. |