UBC Theses and Dissertations

Asymptotic and numerical modeling of magnetic field profiles in superconductors with rough boundaries (2010)
Asymptotic and Numerical Modeling of Magnetic Field Profiles in Superconductors with Rough Boundaries and Multi-Component Gas Transport in PEM Fuel Cells

by Michael Robert Lindstrom
B.Sc. (Hons.), University of British Columbia, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)
August 2010
© Michael Robert Lindstrom 2010

Abstract

This thesis is a combination of two research projects in applied mathematics, both of which use numerical and asymptotic analysis to study real-world problems.

The first problem is in superconductivity. This section is motivated by recent experimental results at the Paul Scherrer Institute. Here, we need to determine how the surface roughness of a superconductor influences the penetration properties of an externally applied magnetic field. We apply asymptotic analysis to study these influences, and then verify the accuracy, even going well beyond the limits of the asymptotics, by means of computational approximations. Through our analysis, we are able to offer insights into the experimental results, and we characterize the influence of a few particular surface geometries.

The second problem is in gas diffusion. The application for this study is in fuel cells. We compare two gas diffusion models in a particular fuel cell component, the gas diffusion layer, which allows transport of reactant gases from channels to reaction sites. These two models have very different formulations, and we explore the question of how they differ qualitatively in computing concentration changes of gas species. We make use of asymptotic analysis, but also use computational methods to verify the asymptotics and to study the models more deeply. Our work leads us to a deeper understanding of the two models, both in how they differ and what similarities they share.
Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements

1 Introduction to Mathematical Modeling
  1.1 Introduction to Numerical and Asymptotic Analysis
    1.1.1 Numerical Analysis
    1.1.2 Asymptotic Analysis
  1.2 Nondimensionalization

2 Superconductivity Modeling
  2.1 Introduction to Superconductivity
    2.1.1 Superconducting Properties
    2.1.2 Relevant Equations
    2.1.3 Physical Dead Layer
  2.2 Asymptotic Analysis
    2.2.1 Asymptotic Formulation
    2.2.2 General Procedure
    2.2.3 Geometry One: Surface with Roughness in One Spatial Direction and Parallel Applied Magnetic Field
    2.2.4 Geometry Two: Surface with Roughness in One Spatial Direction and Applied Field Not Uniformly Parallel to Surface
    2.2.5 Geometry Three: Surface with Roughness in Two Spatial Directions
  2.3 Finite Difference Program for Geometry One
    2.3.1 Numerical Formulation
    2.3.2 Validation
    2.3.3 Results
  2.4 Finite Difference Program for General Sinusoidal Surface
    2.4.1 Numerical Formulation
    2.4.2 Validation
    2.4.3 Results
  2.5 Conclusions of Superconductor Modeling
    2.5.1 More Complicated Field Behaviour
    2.5.2 Orientation of the Roughness Affects the Profile
    2.5.3 Effective Dead Layer

3 Fuel Cell Modeling
  3.1 Modeling of Gas Diffusion in Fuel Cells
    3.1.1 PEM Fuel Cell Overview
    3.1.2 The Model
    3.1.3 Standard Operating Conditions of a Fuel Cell
    3.1.4 Diffusion Equations
  3.2 Asymptotic Formulation
    3.2.1 Nondimensionalization
  3.3 Asymptotic Analysis
    3.3.1 Asymptotic Analysis of Fick Diffusion
    3.3.2 Asymptotic Analysis of Maxwell-Stefan
  3.4 Numerical Analysis of Diffusion Models
    3.4.1 Discretizing the Diffusion Equations
    3.4.2 Verification of Program Results
  3.5 Exploring the Fundamental Differences between the Models
  3.6 Conclusions of Gas Diffusion Modeling
    3.6.1 Formulations
    3.6.2 Quantitative Differences

4 Summary and Future Work
  4.1 Summary
    4.1.1 Superconductor Research Summary
    4.1.2 Gas Diffusion Research Summary
  4.2 Future Work
    4.2.1 Future Work for Superconductor Project
    4.2.2 Future Work for Gas Diffusion Project

Bibliography

Appendices
A Eigenvalues of Finite Difference Matrix
B The General Interface
C Dead Layers and Averages
D Proof of Periodicity of g̃
E Proof of Existence of a Null Vector

List of Tables

2.1 Estimated orders of convergence for (ω_x, ω_y) = (π, 0).
2.2 Estimated orders of convergence for (ω_x, ω_y) = (π, π).
3.1 Physical constants in our gas diffusion model.
3.2 Nondimensionalized and rescaled parameters and variables for Fick diffusion.
3.3 Nondimensionalized and rescaled parameters and variables for Maxwell-Stefan diffusion.
3.4 Different modeling predictions for the relative changes in the concentrations.

List of Figures

2.1 The magnetic field is constant up to the interface and then decays exponentially in magnitude. Left: a visual representation. Right: plot of field magnitude vs. z, with z = 0 as the interface.
2.2 Reprinted figure with permission from [6]. Copyright (2010) by the American Physical Society. This figure displays an experimentally measured magnetic field profile. The a and b represent different magnetic field orientations, with slightly different decay length scales. These scales are believed to be due to an anisotropy in the YBCO superconducting material. In both cases, there is a lag in the exponential decay.
2.3 A sketch of the effective dead layer, in this case δ.
2.4 The overall geometry, PDEs, and boundary conditions we are solving.
2.5 There is no control over the error at x = 0 but there is control at x = ε. To ensure O(ε^n) accuracy everywhere we use x = ε instead of x = 0 as the point where we switch from ℓ_left to ℓ_right.
2.6 The applied field is parallel to the surface.
2.7 A profile of the field magnitude with ε = 0.05 and ω = 2π in the y–t plane. Here the perturbation is quite obvious, but as t gets large the perturbation gradually disappears.
2.8 Peak and valley profiles.
2.9 A profile of the field magnitude with ε = 0.05 and ω = 2π in the peak and valley cases, and the average field profile.
2.10 Left: the effective dead layer at fixed ε = 0.05. Right: the effective dead layer at fixed ω = π.
2.11 The applied field has nonzero perpendicular components with respect to the surface.
2.12 Left: a profile of b₁ with ε = 0.05 and ω = 2π in the peak and valley cases. Right: a profile of b₃ with ε = 0.05, ω = 2π and x = −0.26 fixed.
2.13 A profile of |b|_avg with ε = 0.05 and ω = 2π.
2.14 Left: the effective dead layer at fixed ε = 0.05. Right: the effective dead layer at fixed ω = π.
2.15 Top left: a profile of b₁ from peak and valley. Top right: profile of b₂ with (x, y) = (−0.27, −0.27). Bottom left: profile of b₃ with (x, y) = (−0.27, −0.50). Bottom right: difference between average field profile in perturbed geometry and flat geometry. All figures computed with ε = 0.05 and (ω_x, ω_y) = (2π, 2π).
2.16 Left: the effective dead layer at fixed ε = 0.05 and ω_x = ω_y. Right: the effective dead layer at fixed (ω_x, ω_y) = (π, π).
2.17 The effective dead layer at fixed ε = 0.05 and (ω_x² + ω_y²)^{1/2} = 8π.
2.18 Near t = 0 the grid is square, but as t goes farther out the spacing in the t-direction increases.
2.19 A plot of τ = f(t).
2.20 A verification that the two-dimensional code has the right behaviour in the flat geometry.
2.21 Checking second-order convergence by observing the error behaviour where the exact solution exp(−z) is known. Here ε = 0.1 and ω = 2π.
2.22 Checking the convergence order of the asymptotics. Here ω = π.
2.23 Profiles of the field magnitude at ε = 0.05 and ω = 2π. Figure displays decay from a peak and valley, and the average magnitude.
2.24 Profiles of the field magnitude at ε = 0.2 and ω = 100π. Figure displays decay from a peak and valley and shows the flat geometry solution.
2.25 Formulation for three-dimensional code.
2.26 The mesh in the three-dimensional system. Note how the g̃ and b meshes interlock. The circles with crosses indicate points that occur deeper into the page than the circles with the dots. At each circle with a cross, three components are specified. At each circle with a dot, the scalar value of g̃ is specified.
2.27 Visual confirmation that the three-dimensional program correctly handles the flat interface, N = 11 and ω_x = ω_y = π.
2.28 Left: resolution of first-order asymptotic term for first geometry with ω = π. Centre: resolution of first-order asymptotic term for second geometry with ω = π. Right: resolution of first-order asymptotic term for third geometry with (ω_x, ω_y) = (π, π).
2.29 Top left: a profile of b₁ from peak and valley. Top right: a profile of b₂ with (x, y) = (−0.27, −0.27). Bottom left: profile of b₃ with (x, y) = (−0.27, −0.50). Bottom right: difference between average field profile in perturbed geometry and flat geometry. All figures computed with ε = 0.05 and (ω_x, ω_y) = (2π, 2π).
2.30 The field profiles for different roughness orientations with ε = 0.1.
2.31 The asymptotic and numeric mean field profiles. The two are quite close given the large ε and ω values.
3.1 A cross-section of a PEM fuel cell. Hydrogen and oxygen diffuse primarily in the XY-plane, but there is diffusion in the Z-direction as well.
3.2 Our one-dimensional model of the gas diffusion layer.
3.3 Verifying second-order convergence with the Fick code.
3.4 The concentration profile of oxygen in the GDL for both gas diffusion models.
3.5 The concentration profile of water vapor in the GDL for both gas diffusion models.
3.6 The concentration profile of nitrogen in the GDL for both gas diffusion models.

Acknowledgements

A huge thanks to my thesis supervisor, Brian Wetton, for all his help. Whenever I had questions, he was willing to give guidance and direction.
He really knows his stuff, and without his generosity in time and help, I'd likely still be stuck at the beginning.

I would also like to thank Rob Kiefl for his supervision in the superconductor work. This modeling work was initially part of a summer job, which eventually became part of my thesis. Many thanks to Rob for his insights on the problem, too.

Thanks to Jon Chapman, who made a critical observation in our early struggles to understand the proper formulation of the superconductor problem.

I would also like to thank Michael Ward, Joerg Rottler, and Matt Choptuik for their help along the way.

Chapter 1: Introduction to Mathematical Modeling

1.1 Introduction to Numerical and Asymptotic Analysis

This thesis explores two physical problems. The first pertains to modeling the effects of surface roughness on superconductors. In our work, we consider the surface roughness as a small perturbation from a perfectly flat superconductor with well-known properties. The effect of such a small perturbation can be studied by means of asymptotic analysis. Then, to reach perturbations that go beyond the region where the asymptotic work is reliable, we use finite difference discretizations to numerically approximate the solution with a computer. The starting point for all of this is nondimensionalizing the system.

Our second project involves modeling gas diffusion in a porous medium. We have two diffusion models to work with, and by nondimensionalizing their equations, we are able to use asymptotic methods to compare them. This work is followed up by numerical computations to verify the accuracy and explore the models from various angles.
With this basic context, we begin by providing a brief overview of computational approximation of solutions to differential equations using finite difference discretizations, the principles of asymptotic analysis, and the method of nondimensionalization.

1.1.1 Numerical Analysis

Numerical analysis is a powerful tool for real-world problems that become too complicated to solve (or even approximate) analytically. Much of this thesis involves the approximation of derivatives by finite differencing. A good reference for this material is reference [1].

The idea is that a differential equation (ordinary or partial) can be written in a discretized form, with a "small" error. Consider a real-valued function f : ℝ → ℝ, i.e. f is a scalar function that takes a single real number as input and returns a single real number. If we assume that f ∈ C² (f is twice continuously differentiable), then the limit

  lim_{h→0} (f(x + h) − 2f(x) + f(x − h)) / h²

exists everywhere and is in fact equal to f″(x).

The above limit is exact. However, if we only know f on a discrete set of points, we can only approximate the second derivative. Assume we have values of f at [x₀, x₁, ..., x_n] with x_j = x₀ + jh, where h, the spacing between grid points, is a small positive number. Denote these values of f by f₀, ..., f_n. An approximation to f″(x_j), denoted D_h² f_j for j ∈ {1, ..., n − 1}, is

  D_h² f_j = (f_{j+1} − 2f_j + f_{j−1}) / h².   (1.1)

It turns out this is a very good approximation, and the error is bounded by a constant times h², as long as f is smooth enough. If we assume f ∈ C⁴, then we can write f_{j±1} = f_j ± h f′(x_j) + (h²/2) f″(x_j) ± (h³/6) f‴(x_j) + (h⁴/24) f⁽⁴⁾(t), with t some number between x_j and x_j ± h. If we substitute this into (1.1) we get:

  D_h² f_j = f″(x_j) + (h²/12) f⁽⁴⁾(t).
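As a quick illustration (ours, not from the thesis), the second-order behaviour of (1.1) can be observed numerically on a function whose second derivative is known; halving h should reduce the error by roughly a factor of four:

```python
import math

def d2(f, x, h):
    """Central difference D_h^2 f(x) = (f(x+h) - 2 f(x) + f(x-h)) / h^2,
    the approximation (1.1) applied at a single point."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# f = sin has f'' = -sin, so the exact error is available.
x = 1.0
exact = -math.sin(x)
e1 = abs(d2(math.sin, x, 0.1) - exact)    # error with h = 0.1
e2 = abs(d2(math.sin, x, 0.05) - exact)   # error with h = 0.05
ratio = e1 / e2  # close to 4 for a second-order method
```

Here the error ratio tracks the (h²/12) f⁽⁴⁾ term derived above.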
Thus, the error is bounded by Ch² for some constant C, and we write D_h² f_j = f″(x_j) + O(h²). So if h decreases by a factor of two, then the (bound on the) error in the approximation should decrease by a factor of 4. As we make h smaller (tending to zero), the error in our approximation also gets smaller (and tends to zero).

Being able to discretize in this way means that differential equations can be approximated by systems of algebraic equations. We will consider the simple example of solving y″(x) = x with y(0) = y(1) = 1 on the interval [0, 1]. We consider the points [x₀ = 0, x₁ = h, ..., x_n = nh = 1] and let y_j be an approximation to y(x_j). The boundary conditions tell us that y₀ = y_n = 1, and in the interior (1 ≤ j ≤ n − 1) we can impose y″(x_j) ≈ (y_{j+1} − 2y_j + y_{j−1})/h² = x_j. This allows us to write the (n + 1) × (n + 1) system L_h ŷ = f̂, where L_h is the tridiagonal matrix

  L_h =
  [  1      0      0     ...    0      0   ]
  [ 1/h²  −2/h²   1/h²   ...    0      0   ]
  [  0     1/h²  −2/h²   ...    0      0   ]
  [ ...                  ...              ]
  [  0      0     ...   1/h²  −2/h²   1/h² ]
  [  0      0     ...    0      0      1   ]

ŷ = (y₀, y₁, ..., y_n)^T, and f̂ = (1, h, 2h, ..., (n − 2)h, (n − 1)h, 1)^T is the load vector. Here and throughout this thesis, the superscript T will represent the transpose operator.

Solving this matrix system will produce a set of y_j's that approximate the exact solution y(x_j) to within an error bounded by a constant times h². To see this, note that the exact solution vector y = (y(x₀), ..., y(x_n))^T will satisfy L_h y = f̂ + O(h²) (where O(h²) denotes a vector with each component bounded by a constant times h²), and if we subtract the equation L_h ŷ = f̂ then L_h δ = O(h²), where δ = y − ŷ is the error in the approximation. Multiplying by the inverse matrix yields δ = L_h⁻¹ O(h²).
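The system above can be assembled and solved in a few lines. The following Python sketch is our own illustration (the solver routine, grid size, and variable names are ours, not the thesis's); it eliminates the boundary rows and solves the interior tridiagonal block by forward elimination and back substitution, then compares against the exact solution y = x³/6 − x/6 + 1:

```python
def solve_bvp(n):
    """Solve y'' = x, y(0) = y(1) = 1 with the finite difference
    system from the text (tridiagonal / Thomas algorithm)."""
    h = 1.0 / n
    x = [j * h for j in range(n + 1)]
    # Interior rows: y_{j-1} - 2 y_j + y_{j+1} = h^2 x_j, j = 1, ..., n-1,
    # with the known boundary values y_0 = y_n = 1 moved to the right side.
    a = [1.0] * (n - 1)              # sub-diagonal
    b = [-2.0] * (n - 1)             # main diagonal
    c = [1.0] * (n - 1)              # super-diagonal
    d = [h * h * x[j] for j in range(1, n)]
    d[0] -= 1.0                      # from y_0 = 1
    d[-1] -= 1.0                     # from y_n = 1
    for i in range(1, n - 1):        # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    y = [0.0] * (n - 1)              # back substitution
    y[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        y[i] = (d[i] - c[i] * y[i + 1]) / b[i]
    return x, [1.0] + y + [1.0]

# Exact solution of y'' = x with y(0) = y(1) = 1 is y = x^3/6 - x/6 + 1.
exact = lambda s: s**3 / 6.0 - s / 6.0 + 1.0
xs, ys = solve_bvp(50)
err = max(abs(yj - exact(xj)) for xj, yj in zip(xs, ys))
# Since the exact solution is cubic, the (h^2/12) y'''' truncation term
# vanishes, and the discrete solution matches it to rounding error.
```

For this particular right-hand side the scheme happens to be exact (the solution is a cubic, so f⁽⁴⁾ = 0); for a general smooth solution the error decreases as O(h²), as the text argues.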
As long as the matrix norm ||L_h⁻¹||_∞ stays bounded for all h (or equivalently for all n), the error δ is of order h². The proof that the matrix norm is bounded is generally not trivial. However, for a symmetric matrix, if all its eigenvalues are bounded away from zero, then its inverse has a bounded norm. The eigenvalues of this symmetric matrix are all bounded away from zero, and in fact as n gets large the eigenvalues of smallest magnitude can be approximated by −π², −4π², −9π², .... Therefore, ||L_h⁻¹||_∞ is bounded. We include a proof for the values of the eigenvalues in appendix A.

1.1.2 Asymptotic Analysis

Asymptotic analysis is a means to study the behaviour of solutions to equations in response to important parameters. These parameters arise in an equation after it has been nondimensionalized, i.e. rewritten so that there are as few constants as possible in the equation (see the next section).

In asymptotic analysis, we generally know the solution of a particular equation, but are interested in how the solution changes if the equation is perturbed a little away from the case we know how to solve. Good background material can be found in reference [2]. Supposing we had an equation f(u; ε) = 0 for the variable/function u, with small parameter ε, we could assume that u can be written as an asymptotic series in ε: u = Σ_{j=0}^∞ f_j(ε) u_j, where f_{j+1} = o(f_j), i.e. f_{j+1}(ε)/f_j(ε) → 0 as ε → 0. The most obvious example of an asymptotic series would be a Taylor series, for example u = u₀ + ε u₁ + ε² u₂ + .... Asymptotic series are far more general, however. Their terms can be functions, and even if their terms are constants, in some cases peculiar expansions such as u = u₀ + ε log(1/ε) u₁ + sin(ε) u₂ + ε^{3/2} u₃ + ... could be necessary.

Here, by means of a simple example, we will show how asymptotic series can be generated. We will find one that converges and one that diverges.
We consider the error function, which is important in statistics and probability:

  erf(z) = (2/√π) ∫₀^z exp(−t²) dt.

For small |z|, i.e. |z| ≪ 1, we can use a Taylor series to approximate the error function. In this case we are perturbing the integrand away from z = 0:

  erf(z) = (2/√π) ∫₀^z ( Σ_{n=0}^∞ (−1)ⁿ t²ⁿ/n! ) dt = (2/√π) Σ_{n=0}^∞ (−1)ⁿ z^{2n+1} / ((2n+1) n!).

This Taylor series (which is also an asymptotic series) converges for all z, as it comes from term-by-term integration of a power series that converges uniformly on every compact subset of ℂ (the series for exp(−t²)). This is not, however, a useful way of approximating erf(z) if |z| is large.

As erf(z) is an odd function, we will consider z ≫ 1 here (in this case, if we wanted a small parameter, we could take ε = 1/z). Given that erf(∞) = 1, we can consider a "small perturbation" from z = ∞ and write:

  erf(z) = 1 − (2/√π) ∫_z^∞ exp(−t²) dt.

We will write the integrand in a form conducive to integration by parts, (−2t exp(−t²))(−1/(2t)). Focusing on the integral, we integrate by parts (a couple of times):

  (−1/(2t)) e^{−t²} |_z^∞ − ∫_z^∞ (1/(2t²)) e^{−t²} dt
    = (−1/(2t)) e^{−t²} |_z^∞ + (1/(4t³)) e^{−t²} |_z^∞ + ∫_z^∞ (3/(4t⁴)) exp(−t²) dt,

where in the second step we wrote (1/(2t²)) e^{−t²} = (−2t exp(−t²))(−1/(4t³)) and integrated by parts again. The boundary terms are zero when evaluated at t = ∞. We thus furnish an asymptotic expansion for the error function for large z:

  erf(z) = 1 − (2 exp(−z²)/√π) (1/(2z) − 1/(4z³) + ...).

By induction we could go further to write:

  erf(z) = 1 − (exp(−z²)/(√π z)) (1 + Σ_{n=1}^∞ (−1)ⁿ (1)(3)···(2n−1) / (2ⁿ z²ⁿ)).

This asymptotic series diverges. The coefficients c_n of (z⁻²)ⁿ are given by c_n = (1)(3)···(2n−1)/2ⁿ, and the ratio test gives lim_n c_n/c_{n+1} = lim_n 2/(2n+1) = 0. The radius of convergence is zero.

Although the series diverges, it still obeys the defining property of an asymptotic series. The ratio of term n+1 to term n in absolute value is ((2n+1)/2) z⁻², which tends to 0 as z → ∞ (or as ε = 1/z tends to 0). Moreover, this is an excellent means of approximating the error function for large z. If z = 2 then erf(z) ≈ 0.9953, and had we used the asymptotic expansion taking only the first two nonzero terms we would have obtained 0.9948. By taking 5 terms we arrive at the best asymptotic approximation, 0.9954; after 5 terms the error increases. To reach the same precision with the Taylor series, we would have needed 14 terms to obtain the precision found with only 2 terms of the asymptotic series, and 15 terms to obtain the precision found with 5 terms of the asymptotic series. The Taylor series will, however, converge to the exact value.

Asymptotic analysis is a reliable tool for gaining insight into the behaviour of perturbed systems. Not all asymptotic series converge, but even when they do not, they can still provide valuable insights into a perturbed system. In the cases where the series diverges, there is an optimal number of terms needed to obtain a "best" approximation.

Finding an asymptotic series for a problem without proving its convergence is known as formal asymptotics. In this thesis we are primarily concerned with finding only the first two or three nonzero terms of an asymptotic series, so we will not be concerned with proving convergence, and all our expansions will be formal. In these formal expansions, our test of whether we have found a suitable series is whether we can actually compute its terms.

1.2 Nondimensionalization

Nondimensionalization is used in the study of differential equations to reduce the number of constants appearing in the equation and to gain insight into the quantities that govern the system's behaviour. The procedure consists of replacing the variables with dimensionless numbers (i.e. getting rid of the units).
More details can be found in reference [3]. As an example, we consider the heat equation

  u_t = σ u_xx + f(t)  for x ∈ [0, L] and t ≥ 0,

with u(0, t) = a, u(L, t) = b, and u(x, 0) = u₀(x). The heat equation models the temperature u inside a material as a function of time t and spatial position x. The coefficient σ here is the thermal diffusivity, describing how efficiently energy can move through the system, with units of length squared over time. The function f is the external heating; here, we can think of it as a uniform heating that varies in time, with units of temperature divided by time. Physically, the temperatures a and b represent fixed temperatures on either end of the material, and u₀ is an initial temperature distribution.

We begin by writing f = f̄ F, where F has no dimensions and f̄ is a dimensional quantity holding a representative value for f. Later we will select the size of f̄ to make the equation as simple as possible. We can rewrite the PDE as:

  (1/f̄) u_t = (σ/f̄) u_xx + F(t).

We similarly rescale the independent variables, x = x̄ X and t = t̄ T, along with the dependent variable u = ū U. This yields:

  (ū/(f̄ t̄)) U_T = (σ ū/(f̄ x̄²)) U_XX + G(T),

where G(T) = F(t̄ T). By equating the coefficients of the U terms, we can select t̄ = x̄²/σ. We can also make one final selection in letting ū = f̄ x̄²/σ. The resulting PDE to solve is:

  U_T = U_XX + G(T)

with U(0, T) = a/ū, U(L/x̄, T) = b/ū, and U(X, 0) = ũ₀/ū, where ũ₀(X) = u₀(x̄ X) and X ∈ [0, L/x̄].

This equation is in nondimensionalized form, but it can also be convenient to use the characteristic scales in the problem. We can choose x̄ = L, and divide the equation by an upper bound on |G|, say M.
Replacing U by U/M gives

  U_T = U_XX + H(T)

for some H(T) taking values in [−1, 1], with boundary conditions U(0, T) = A, U(1, T) = B and initial condition U(X, 0) = U₀, where A = a/(ū M), B = b/(ū M), U₀ = ũ₀/M, X ∈ [0, 1], and T ≥ 0.

The problem is a lot simpler now. Our spatial variable ranges between 0 and 1, we no longer need to deal with the constant σ, and the function H is nicely bounded between −1 and 1. Had the problem been more complicated, some dimensionless parameters could have appeared in the PDE itself, which could then be used as asymptotic parameters (this will be apparent in the next two chapters).

Chapter 2: Superconductivity Modeling

2.1 Introduction to Superconductivity

2.1.1 Superconducting Properties

A superconductor is a material that, when cooled to a sufficiently low temperature (near absolute zero), exhibits a phase transition to a state with zero electrical resistance. This means that an electric current can run through a superconductor without generating any heat, as long as the current does not exceed some critical value. Superconductors fit into two categories: type 1 and type 2.

Another defining property of a superconductor is the Meissner effect, whereby the material expels magnetic flux provided the external field does not exceed some critical value [4]. Experimentally, this means that if there is an applied magnetic field outside of a superconductor, it can only penetrate a little way into the substance before becoming negligibly small. A sketch of this behaviour is given in figure 2.1: the magnetic field is constant up to the interface and then decays exponentially in magnitude inside the superconductor.

In an experimental setting (such as in muon spin rotation), a beam of polarized, low-energy muons is sent through a vacuum region and enters
the sample, where the magnitude of the local magnetic field causes them to precess. Muons are unstable particles with a very short half-life, and the position and distribution of their decay products can be measured by detectors. Based on the decay properties and an experimentally controlled depth at which they decay, muons act as a probe to measure the average magnetic field at a given depth within a superconductor [5]. They cannot measure the exact field magnitude and direction at specific points.

2.1.2 Relevant Equations

The governing equations for these experimental settings are Maxwell's equations describing electromagnetic phenomena, which hold everywhere in space, and the London equation (within the superconductor). In the region of the vacuum, the magnetic field B (in SI units) will satisfy:

  ∇ • B = 0   (2.1)

and

  ∇ × B = μ₀ J = 0   (2.2)

where μ₀ is the permeability of free space and J is the local current density. In the vacuum, J = 0 as there are no currents. These experimental settings do not require consideration of the electric field E, as it is constant.

Within the superconductor we have the London equation, which states:

  ∇ × ∇ × B = −B/λ²   (2.3)

where λ is the London penetration depth [4]. This length scale λ describes how far an applied field can penetrate into the sample; typically it is of the order of 100 nm [6]. The London equation was discovered by the London brothers, and it was initially derived in the effort to describe an experimentally verified relation within superconductors, (m/(n e²)) J̇ = E (where the dot signifies a time derivative, m is the mass of an electron, n is the number density of electrons, and e is their charge) [7]. Their work found two equations, but (2.3) is often referred to as the London equation in many books [4].

By using the vector identity ∇ × ∇× = ∇(∇ •) − Δ, where Δ is the Laplacian operator, along with (2.1) and (2.3), we find that inside the
superconductor we can work with:

  ΔB = B/λ².   (2.4)

In the superconducting region, both (2.1) and (2.4) must be imposed. We do not combine (2.1) and (2.2) into a Laplacian equation in the vacuum region, where we would rightfully be able to state that ΔB = 0, because it turns out that we lose information that way: all B for which ∇ × B = 0 and ∇ • B = 0 satisfy ΔB = 0, but there are some B for which ΔB = 0 and ∇ • B = 0 while ∇ × B ≠ 0 (take B = (z, 0, 0)^T for example). Physically, all components of the magnetic field must be continuous across the vacuum-superconductor interface.

If we have a flat superconductor occupying the region z > 0 with vacuum in z ≤ 0, and an applied magnetic field B₀ that is parallel to the superconductor surface (i.e. it has no z-component), then the field should be constant in the vacuum, and inside the superconductor it is given by B₀ e^{−z/λ}.

This is a well-known result that is easily derived by noting that for a flat superconductor all x- and y-derivatives must vanish (due to translational invariance in x and y). Then in the vacuum, (2.2) implies B₁ and B₂ are constant, and (2.1) implies B₃ is constant. As B₃ = 0 at z = −∞, B₃ is zero everywhere in the vacuum; as it must vanish at z = ∞, B₃ is zero everywhere in the superconductor. So it must be zero everywhere. Looking at B₁ and B₂, they will have the same value on the interface as they did at z = −∞, and the PDEs (∂_zz B₁ = B₁/λ², ∂_zz B₂ = B₂/λ²) with the Dirichlet conditions at z = ∞ imply that they decay exponentially in
Experiments at the Paul Scherrer Institute (PSI) have found experimental profiles that differ quite significantly from the field profile predicted by this London theory with a flat interface [6]. See figure 2.2. The research carried out in the first portion of this thesis has been directed at modeling surface roughness, which is believed to be a potential explanation of the field perturbations. By surface roughness, we mean a surface that has bumps or wiggles instead of being completely flat. The field seems to decay more slowly than an exponential near the surface, and there is the notion of a dead layer: a distance over which the field magnitude does not decay, even within the superconductor, and after which the field decays exponentially. The cartoon function (in dimensionless units) describing this is

|b| = 1 if t ≤ δ,  exp(−(t − δ)) if t > δ.    (2.5)

A sketch of this cartoon function is given in figure 2.3. We remark that the modeling in this chapter is independent of time; the variable t will always represent a nondimensionalized depth and not time.

Figure 2.2: Reprinted figure with permission from [6]. Copyright (2010) by the American Physical Society. This figure displays an experimentally measured magnetic field profile. The a and b represent different magnetic field orientations, with slightly different decay length scales. These scales are believed to be due to an anisotropy in the YBCO superconducting material. In both cases, there is a lag in the exponential decay.

Figure 2.3: A sketch of the effective dead layer, in this case δ.

One way of defining a dead layer, δeff, would be to compute the area under the curve |b| from t = 0 to t = ∞. In (2.5), the area is 1 + δ. In our asymptotic work to come, after finding the average field magnitude at depth t, |b|avg, we can use this to find an effective dead layer:

δeff = ∫₀^∞ |b|avg(t) dt − 1.
(2.6)

As we will see soon, the definition we give in (2.6) has many of the properties we would like in a dead layer (in particular, it becomes zero if the roughness approaches zero).

For any surface that is not flat, it is extremely difficult, if not impossible, to obtain an exact solution to the problem. However, by considering a small perturbation (small with respect to λ) it is possible to obtain an approximate analytic solution by means of an asymptotic series. This means it is possible to have reasonably accurate analytic results for certain geometries. In the next section we consider the procedure of finding an asymptotic solution. In subsequent parts of this thesis, we numerically compute approximate solutions for particular surface geometries.

2.2 Asymptotic Analysis

2.2.1 Asymptotic Formulation

Nondimensionalization of the Problem

In general, the vacuum-superconductor interface could be parametrized by z = ah(x, y), where |h| ≤ 1 is dimensionless and a is a length (the amplitude). This describes a perturbed flat boundary. However, working with such a general surface will generally not lead to analytic formulas that give insight into the problem, and by considering a surface given by a cos(ωx x) cos(ωy y) we are still working in quite general terms. Indeed, as long as h and its powers can be Fourier transformed (where we allow the space of distributions), this is still general. When we refer to h, we refer to this Fourier form cos(ωx x) cos(ωy y). See appendix B for a demonstration of the computation for a general surface. Vacuum is found in the region z ≤ ah(x, y) and superconductor in the region z > ah(x, y). We will require that the field approach an applied field far away from the superconductor, B(z = −∞) = B0 = |B0|θ̂ = |B0|(θ1, θ2, 0)ᵀ, and that the Meissner effect is observed, B(z = ∞) = 0.
We remark that θ̂ has no z-component, as we would otherwise not be able to obtain a solution in the flat geometry. The asymptotics rely upon being able to use the flat-geometry solution as a base solution, so our applied field cannot have a z-component. We will explain this in more detail shortly. Another condition we require is that the magnetic field be continuous across the superconductor-vacuum interface, [B] = 0, where [•] denotes the jump of •.

We now nondimensionalize as in section 1.2. We select the representative units of the magnetic field to be |B0| and the representative unit of length to be λ. We write b = B/|B0| and (x′, y′, z′) = (1/λ)(x, y, z). Then ∂x = (1/λ)∂x′, and similarly for the other spatial coordinates. So we have ∇• = (1/λ)∇′•, ∇× = (1/λ)∇′×, and ∆ = (1/λ²)∆′, where the prime signifies that we are working in the prime coordinates. This allows us to write the London equation as ∆′b = b and the vacuum equations as ∇′ • b = 0 and ∇′ × b = 0. We define ε = a/λ, and if we work in these nondimensionalized prime coordinates and drop the primes, we have

∇ • b = 0 and ∇ × b = 0, if z ≤ εh
∇ • b = 0 and ∆b = b, if z > εh    (2.7)

with boundary conditions b(x, y, −∞) = θ̂, b(x, y, ∞) = 0, [b(x, y, εh(x, y))] = 0. See figure 2.4 for a visual summary of the problem we are solving.

Figure 2.4: The overall geometry, PDEs, and boundary conditions we are solving.

To explain the no-z-component requirement, suppose there were an applied field (0, 0, 1)ᵀ and we tried to solve the flat geometry. Then all x- and y-derivatives vanish (translational invariance) and we only work with z-derivatives. If the field were b = (b1, b2, b3), then by (2.7) in this reduced case with ∂x = ∂y = 0, the curl and divergence conditions give ∂zb1 = ∂zb2 = ∂zb3 = 0 in the vacuum. Thus the field is constant there. It must be the value (0, 0, 1)ᵀ because of the boundary condition at z = −∞.
In the superconductor, from the divergence condition we also have ∂zb3 = 0, so that b3 is constant. We have b3(z = ∞) = 0, and so b3 = 0 in the superconductor. But b3 = 1 in the vacuum, so b is not continuous and we do not have a solution.

Physically, given the divergenceless condition, b can be thought of as the flux of a fluid. By mass conservation, the fluid must go somewhere. If there is some nonzero b3 at z = −∞ and there can be no flux in the x or y directions, there must be some b3 at z = ∞, which cannot come about due to the Meissner effect. The apparent paradox arises because we are considering a superconductor of infinite size. If it had a finite size, then the field could bend around the superconductor.

Finding a Basis in Fourier Components

A solution can be expressed as a sum (or an integral) of functions with particular Fourier frequencies in the x- and y-directions. What we do here is establish some properties of a general term in the sum. If θ̂ has no z-component, then there is a base solution for a flat interface given by

b(0) = θ̂ if z ≤ 0,  θ̂ exp(−z) if z > 0    (2.8)

which satisfies the boundary conditions at ±∞. So the far-field boundary conditions for the extra Fourier components (which would be added to b(0) to yield a solution) are to vanish at z = ±∞ and to bring about continuity along the interface (the base solution is not continuous if z ≤ 0 and z > 0 are replaced by z ≤ εh and z > εh). If a Fourier component of the field in the vacuum took the form b = f(z)e^{iαxx}e^{iαyy} for a vector-valued f having value 0 at z = −∞, then there would be certain restrictions upon f. The divergence-free condition of (2.7) yields

(iαxf1 + iαyf2 + ∂zf3)e^{iαxx}e^{iαyy} = 0

so we require:

iαxf1 + iαyf2 + ∂zf3 = 0.
(2.9) And the irrotional condition (the fact that there is a vanishing curl) of (2.7) yields (x̂(iαyf3 − ∂zf2) + ŷ(∂zf1 − iαxf3) + ẑ(iαxf2 − iαyf1))eiαxxeiαyy = 0 so we can read off 3 more equations: ∂zf2 = iαyf3 (2.10a) ∂zf1 = iαxf3 (2.10b) iαxf2 = iαyf1 (2.10c) From (2.9), and (2.10a) and (2.10b) we have the linear equations: ∂zf = ∂z  f1 f2 f3  =  0 0 iαx 0 0 iαy −iαx −iαy 0  ︸ ︷︷ ︸ F  f1 f2 f3  . We have a first order linear differential equation in matrix form. In 22 2.2. Asymptotic Analysis solving this problem, we first need to find the eigenvalues of F . They are found by setting the characteristic polynomial det(F − λI) to 0 : −λ(λ2 − (α2x + α2y)) = 0. We read the eigenvalues off as λ = {0,±α} where we have α = √ α2x + α2y. If vλ is the eigenvector with eigenvalue λ then: f = c0v0 + c−αv−αe−αz + cαvαeαz for c′s being constants. To ensure the that f goes to 0 at z = −∞ only cα can be nonzero and thus we only seek the eigenvector with this eigenvalue. If vλ is an eigenvector of F with eigenvalue λ then (F − λI)vλ = 0. So we find the null space of  −α 0 iαx 0 −α iαy −iαx −iαy −α  . If β is in the null space then row 1 tells us that αβ1 = iαxβ3 and row 2 tells us that αβ2 = iαyβ3. Then we can parametrize β with one degree of freedom, β3:  iαx iαy α β3 23 2.2. Asymptotic Analysis For ease of notation later, we’ll rename β3 as β1 and conclude that solu- tions on the vacuum side take on the form: b = β1  iαx iαy√ α2x + α2y  e √ α2x+α 2 yzeiαxxeiαyy. It’s interesting to note that (2.10c) holds even though we didn’t directly solve it. Indeed we see that if f = (iαx, iαy, √ α2x + α2y) T e √ α2x+α 2 yz then (iαx)f2 = −αxαye √ α2x+α 2 yz = (iαy)f1. Now we consider a solution on the superconducting side g(z)eiαxxeiαyy vanishing at z = ∞. 
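The eigenvalue computation above can be confirmed numerically; note that for real frequencies the matrix F is Hermitian, so its eigenvalues are real. A quick sketch with arbitrary illustrative frequencies (αx, αy) = (1.3, −0.7):

```python
import numpy as np

# Check that F has eigenvalues {-alpha, 0, +alpha} with
# alpha = sqrt(ax^2 + ay^2); the frequencies are illustrative.
ax, ay = 1.3, -0.7
F = np.array([[0, 0, 1j * ax],
              [0, 0, 1j * ay],
              [-1j * ax, -1j * ay, 0]])
alpha = np.hypot(ax, ay)
eigs = np.linalg.eigvalsh(F)   # F is Hermitian; sorted real eigenvalues
```

Only the e^{+αz} mode survives the boundary condition at z = −∞, exactly as argued in the text.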
By having zero divergence we arrive at an equation similar to (2.9): iαxg1 + iαyg2 + ∂zg3 = 0 (2.11) and by imposing the Laplacian part of (2.7) we have −α2xgi − α2ygi + ∂2zzgi = gi for i = 1, 2, 3 after cancelling out the exponential terms. Then we find that ∂2zzgi = α 2gi where α ≡ √ 1 + α2x + α2y. And this tells us that g = w−αe−αz + wαeαz. 24 2.2. Asymptotic Analysis For g to be 0 at ∞ we set wα to 0. To satisfy (2.11) we require iαxw−α1 + iαyw−α2 − αw−α3 = 0. We can parametrize g with two degrees of freedom, β2 and β3: g = (  α 0 iαx β2 +  0 α iαy β3)e−αz. So solutions on the superconducting side have the form: b = (  √ 1 + α2x + α2y 0 iαx β2 +  0√ 1 + α2x + α2y iαy β3)e− √ 1+α2x+α 2 yzeiαxxeiαyy. In the asymptotic version of the problem, we will later need to consider 25 2.2. Asymptotic Analysis a jump condition at z = 0 for a function b of form: b = eiαxxeiαyy  β1  iαx iαy√ α2x + α2y  e √ α2x+α 2 yz if z ≤ 0 (β2  √ 1 + α2x + α2y 0 iαx + β3  0√ 1 + α2x + α2y iαy )e − √ 1+α2x+α 2 yz if z > 0. (2.12) We can find the jump for given αx and αy frequencies: [b] =  −iαx √ 1 + α2x + α2y 0 −iαy 0 √ 1 + α2x + α2y − √ α2x + α2y iαx iαy  ︸ ︷︷ ︸ Mαx,αy  β1 β2 β3  eiαxxeiαyy. (2.13) This is useful for us because it means if we know the jump at z = 0 for a piecewise Fourier component (of the form expressed in (2.12)), then we can quickly solve for the β′s and determine its exact form. All we really need is the inverse matrix. It’s worth noting that when (αx, αy) = (0, 0) thenMαxαy is not invertible. However, in such a case, as long as there is no z−component to the jump, we can still find a solution as β2(1, 0, 0)T+β3(0, 1, 0)T still spans R2 × {0}. 26 2.2. 
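The jump-matrix machinery can be exercised numerically. The sketch below (sign conventions read off from (2.13); the frequency pair (0, −ω) anticipates the first geometry, and the expected β matches the value found in section 2.2.3) solves Mβ = c and confirms the singularity at (αx, αy) = (0, 0):

```python
import numpy as np

# Build M_{ax,ay} as in (2.13) and solve a sample jump condition.
def M(ax, ay):
    a = np.hypot(ax, ay)                  # vacuum-side rate alpha
    abar = np.sqrt(1.0 + ax**2 + ay**2)   # superconductor-side rate
    return np.array([[-1j * ax, abar, 0.0],
                     [-1j * ay, 0.0, abar],
                     [-a, 1j * ax, 1j * ay]])

w = 2 * np.pi
c = np.array([0.5, 0.0, 0.0], dtype=complex)   # jump arising in geometry one
beta = np.linalg.solve(M(0.0, -w), c)
# beta should be (0, 1/(2*sqrt(1+w^2)), 0), as found in section 2.2.3
det00 = np.linalg.det(M(0.0, 0.0))             # the singular case (0, 0)
```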
Later on we will use the notation M₀,₀⁻¹c, by which we mean the vector (0, β2, β3) for which the jump condition [b] = (β2, β3, 0)ᵀ = c is satisfied.

2.2.2 General Procedure

In this section we present a general asymptotic procedure, and illustrate the solution method up to second order. We will take the interface to have form z = εh(x, y). We will apply formal asymptotics (as discussed in section 1.1.2) and express b as a regular asymptotic expansion in ε:

b = b(0) + εb(1) + ε²b(2) + O(ε³)    (2.14)

We make use of a Taylor expansion to evaluate b along the surface:

b(x, y, εh) = b(x, y, 0) + εh ∂zb(x, y, 0) + (1/2)ε²h² ∂²zzb(x, y, 0) + O(ε³).    (2.15)

By substituting (2.14) into (2.15) we can approximate the field to second order in ε along the surface:

b(x, y, εh) = (b(0) + ε(b(1) + h ∂zb(0)) + ε²(b(2) + h ∂zb(1) + (1/2)h² ∂²zzb(0)))|z=0 + O(ε³).

Our solution for a flat superconductor is given below:

b(0) = θ̂ if z ≤ 0,  θ̂ exp(−z) if z > 0.

By demanding continuity of b along the interface we arrive at

[b(x, y, εh)] = ([b(0)] + ε[b(1) + h ∂zb(0)] + ε²[b(2) + h ∂zb(1) + (1/2)h² ∂²zzb(0)])|z=0 + O(ε³) = 0

which gives us a set of equations at each order in ε:

O(1): [b(0)]|z=0 = 0    (2.16a)
O(ε): [b(1)]|z=0 = −h[∂zb(0)]|z=0    (2.16b)
O(ε²): [b(2)]|z=0 = −h[∂zb(1)]|z=0 − (1/2)h²[∂²zzb(0)]|z=0    (2.16c)

Of course, (2.16a) already holds. From (2.16b) we can find the jump in b(1), and from (2.16c) we can find the jump in b(2). The most general case of a jump is something of the form

[b(j)]|z=0 = Σ_{αx,αy} c(j)_{αx,αy} e^{iαxx} e^{iαyy}

for the c's being some constant vectors (which can be found for different perturbed geometries as in the following sections). But from (2.13) we can
Asymptotic Analysis actually find the form of b(j) : b(j) = ∑ αx,αy   iαx iαy α pi1M −1 αx,αyc (j) αxαye αzeiαxxeiαyy if z ≤ 0 (  α 0 iαx pi2 +  0 α iαy pi3)(M −1 αx,αyc (j) αxαy)e−αzeiαxxeiαyy if z > 0 (2.17) where for each (αx, αy) we have that α = √ α2x + α2y and α = √ 1 + α2x + α2y and where pij = (ej , •) for j = 1, 2, 3 is the jth coordinate (pi2(3, 4, 1)T = 4 for example). For shorthand we may later write β(j)αx,αy in place of M−1αx,αyc(j)αx,αy and refer to the vectors (iαx, iαy, α)T , (α, 0, iαx)T , (0, α, iαy)T as γ (1) αx,αy , γ (2) αx,αy , and γ(3)αx,αy respectively. As a final comment on the solutions, we replace the conditions z ≤ 0 and z > 0 with z ≤ h(x, y) and z > h(x, y) respectively. This is because the partial differential equations that need to be satisfied hold for z ≤ h and z > h and not for z ≤ 0 and z > 0. This also ensures continuity to within O(3). The sketch in figure 2.5 should explain this a bit better in a one-dimensional sense. The figure depicts the use of this convention: for x ≤ 0 there is a function `left (the straight horizontal line) and for x > 0 there is a function `right (the other curve) and we assume they are chosen so that at x =  there is continuity 29 2.2. Asymptotic Analysis to within O(n) for some n > 0. Figure 2.5: There is no control over the error at x = 0 but there is control at x = . To ensure O(n) accuracy everywhere we use x =  instead of x = 0 as the point where we switch from `left to `right. If we chose to take `left for x ≤ 0 and `right for x > 0 then we wouldn’t have control over the jump at x = 0. But by taking `left for x ≤  and `right for x >  there is only one jump and it is zero to within O(n). In terms of applying these computations to the understanding of physical experiments, the vector magnetic field at coordinates (x, y, z) is not useful (see sections 2.1.1 and 2.1.3). 
Instead, the average field magnitude at a given depth t = z − h(x, y) beyond the interface is useful, because this is what is actually measured. Therefore in later sections, after obtaining the asymptotic solution in terms of (x, y, z) we will need to convert to a field magnitude expressed in terms of (x, y, t) and average over x and y. 30 2.2. Asymptotic Analysis In this last portion of the section, we introduce a few final equations and pieces of notation that will be useful later. The inverse ofMjωx,kωy (defined in (2.13)) is given as: Mjωx,kωy−1 = Γj,k  jiωj,kωx kiωj,kωy −ωj,k2 k2ω2y + ωj,kωj,k −jkωxωy −jiωj,kωx −jkωxωy j2ω2x + ωj,kωj,k −kiωj,kωy  where we define ωj,k = √ 1 + j2ω2x + k2ω2y , ωj,k = √ j2ω2x + k2ω2y , and Γj,k = (ωj,kωj,k2 + j2ω2xωj,k + k 2ω2yωj,k) −1. In the following subsections, we consider some specific geometries with increasing complexity. It may seem peculiar that the analysis is presented in this order - instead of immediately going to the most general (although highly complex) geometry - but it turns out each geometry needs to be analyzed independently. Due to a lack of uniform convergence, the results of the simpler geometries cannot be obtained from the general geometry. This will become apparent very soon. 2.2.3 Geometry One: Surface with Roughness in One Spatial Direction and Parallel Applied Magnetic Field Here we explore a simple geometry in the asymptotic regime. We consider an applied magnetic field that is parallel to the surface. Due to the simplic- ity, much of the algebra and work can easily be shown. In more complex geometries the same techniques are applied but the computations are too lengthy to include in full detail and we also need to make heavy use of the 31 2.2. Asymptotic Analysis notational shorthand. First-Order Term Figure 2.6: The applied field is parallel to the surface. 
In our non-dimensionalized regime, we consider a surface of form z =  cos(ωy) = 2(e iωy + e−iωy) with applied field θ̂ = (1, 0, 0)T . See figure 2.6. In this case we have that (ωx, ωy) = (0, ω). Then b(0) =  (1, 0, 0)T if z ≤ 0 (1, 0, 0)T e−z if z > 0 is the solution at zeroth order in . Then (2.16b) tells us: 32 2.2. Asymptotic Analysis [b(1)]|z=0 = − cos(ωy)[∂zb(0)]|z=0 = (cos(ωy), 0, 0)T = 1 2 (1, 0, 0)T︸ ︷︷ ︸ c (1) 0,−ω e−iωy + 1 2 (1, 0, 0)T︸ ︷︷ ︸ c (1) 0,ω eiωy. There is a term with e−iωy and we will need β (1) 0,−ω =M−10,−ω  1 2 0 0  =  0 1 2 √ 1+ω2 0  . It turns out that β(1)0,ω = β (1) 0,−ω. Then we can arrive at a first-order pertur- bation by using (2.17): 33 2.2. Asymptotic Analysis b(1) =   0 −iω |ω|  ︸ ︷︷ ︸ γ (1) 0,−ω (0)︸︷︷︸ pi1β (1) 0,−ω e|ω|ze−iωy +  0 iω |ω|  ︸ ︷︷ ︸ γ (1) 0,ω (0)︸︷︷︸ pi1β (1) 0,ω e|ω|ze−iωy if z ≤ 0 (  √ 1 + ω2 0 0  ︸ ︷︷ ︸ γ (2) 0,−ω ( 1 2 √ 1 + ω2 )︸ ︷︷ ︸ pi2β (1) 0,−ω +  0 √ 1 + ω2 −iω  ︸ ︷︷ ︸ γ (3) 0,−ω (0)︸︷︷︸ pi3β (1) 0,−ω )e− √ 1+ω2ze−iωy +(  √ 1 + ω2 0 0  ︸ ︷︷ ︸ γ (2) 0,ω ( 1 2 √ 1 + ω2 )︸ ︷︷ ︸ pi2β (1) 0,ω +  0 √ 1 + ω2 iω  ︸ ︷︷ ︸ γ (3) 0,ω (0)︸︷︷︸ pi3β (1) 0,ω )e− √ 1+ω2zeiωy if z > 0. This expression simplifies greatly when we use Euler’s identity to get our final answer: b(1) =  0 if z ≤ 0 cos(ωy) 0 0  e −√1+ω2z if z > 0. 34 2.2. Asymptotic Analysis Second-Order Term We apply the same procedure at second order. [b(2)]|z=0 = − cos(ωy)[∂zb(1)]|z=0−12 cos 2(ωy)[∂2zzb (0)]|z=0 = (− cos2(ωy)(− √ 1 + ω2+ 1 2 ), 0, 0)T = 1 4 ( √ 1 + ω2−1 2 )(1, 0, 0)T e−2iωy+ 1 2 ( √ 1 + ω2−1 2 )(1, 0, 0)T+ 1 4 ( √ 1 + ω2−1 2 )(1, 0, 0)T e2iωy where we again expressed the jump as a sum of complex exponentials. Of course the middle term has ω = 0. 
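A quick numerical spot-check of this first-order result (centered differences at an arbitrary point; the step size is an illustrative choice): on the superconducting side, b(1)₁ = cos(ωy)e^{−√(1+ω²)z} should satisfy ∆b = b, and its jump at z = 0 should reproduce (2.16b), i.e. equal cos(ωy).

```python
import math

# First-order term of geometry one: f = cos(w*y) * exp(-sqrt(1+w^2) * z).
w = 2 * math.pi
s = math.sqrt(1.0 + w * w)
f = lambda y, z: math.cos(w * y) * math.exp(-s * z)

h = 1e-4
y0, z0 = 0.13, 0.37
# centered second differences approximate the Laplacian in (y, z)
lap = ((f(y0 + h, z0) - 2.0 * f(y0, z0) + f(y0 - h, z0)) / h**2
       + (f(y0, z0 + h) - 2.0 * f(y0, z0) + f(y0, z0 - h)) / h**2)
pde_residual = lap - f(y0, z0)          # should vanish: Delta b = b
jump = f(y0, 0.0) - 0.0                 # vacuum-side perturbation is zero
jump_error = jump - math.cos(w * y0)    # should vanish by (2.16b)
```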
In looking at (2.17) we need the terms M−10,−2ωc(2)0,−2ω = M−10,2ωc(2)0,2ω = 1 4 √ 1+4ω2 ( √ 1 + ω2−12)(0, 1, 0)T , andM−10,0c (2) 0,0 = 1 2 √ 1+4ω2 ( √ 1 + ω2−12)(0, 1, 0)T . After we apply (2.17) we conclude that: b(2) =  0 if z ≤ 0 1 2( √ 1 + ω2 − 12)(e−z + cos(2ωy)e− √ 1+4ω2z) 0 0  if z > 0. We can now write the asymptotic expansion for the magnetic field to second order (after replacing z ≤ 0, z > 0 with z ≤  cos(ωy), z >  cos(ωy)): 35 2.2. Asymptotic Analysis b =   1 0 0  if z ≤  cos(ωy) 1 0 0  e −z +   cos(ωy) 0 0  e −√1+ω2z + 2(12( √ 1 + ω2 − 12))(  cos(2ωy) 0 0  e −√1+4ω2z +  e−z 0 0 ) if z >  cos(ωy) +O(3). (2.18) Asymptotic Results for First Geometry The depth past the surface, t = z − h(x, y) is of more physical relevance than the z−coordinate. We therefore expand (2.18) to second order in  (so the zeroth-order term is expanded to second order, the first-order to first order, and the second-order to zeroth order) using Taylor expansions under the substitution that z = t+  cos(ωy). Here we’ll illustrate the procedure for the zeroth-order term, but the same idea holds for all the terms. 36 2.2. Asymptotic Analysis   1 0 0  if t+  cos(ωy) ≤  cos(ωy) 1 0 0  e −t− cos(ωy) if t+  cos(ωy) >  cos(ωy) =   1 0 0  if t ≤ 0 1 0 0  e −t(1−  cos(ωy) + 122 cos2(ωy)) if t > 0 =   1 0 0  if t ≤ 0 1 0 0  e −t if t > 0 +  0 if t ≤ 0 − cos(ωy) 0 0  e −t if t > 0 +2  0 if t ≤ 0 1 2  cos2(ωy) 0 0  e −t if t > 0 +O(3). In the end, after combining and simplifying the terms up to second order 37 2.2. 
in ε, we arrive at:

b1(y, t) = { 1 if t ≤ 0;  e^{−t} if t > 0 }
 + ε { 0 if t ≤ 0;  (e^{−√(1+ω²)t} − e^{−t}) cos(ωy) if t > 0 }
 + ε² { 0 if t ≤ 0;  (1/2)(√(1+ω²) − 1/2)(e^{−t} + e^{−√(1+4ω²)t} cos(2ωy)) + ((1/2)e^{−t} − √(1+ω²) e^{−√(1+ω²)t}) cos²(ωy) if t > 0 }
 + O(ε³)    (2.19)

with b2 and b3 identically 0. We would finally like to take (2.19) and find the average field magnitude (averaged in this case over y, as x does not play a role) at a given depth t. As we only need to consider one component and it is positive, we already have the field magnitude: it is just b1. The average field is given here as |b|avg = (ω/2π) ∫_{−π/ω}^{π/ω} |b|(y, t) dy. Recalling that the cosine function integrated over a period yields zero and that the average value of cos² over a period is 1/2, this immediately yields:

|b|avg(t) = { 1 if t ≤ 0;  e^{−t} + ε² (1/2)√(1+ω²)(e^{−t} − e^{−√(1+ω²)t}) if t > 0 } + O(ε³).    (2.20)

We can now determine the profile of the magnetic field for small ε from (2.19) and (2.20). As for what "small ε" means, we need to consider this carefully. For a fixed ω, the asymptotic results for the field and average field are accurate to within O(ε³), and the error only approaches zero as ε → 0. As |ω| → ∞, unless ε decreases, the accuracy will not remain. This is because the second-order term goes like ε²√(1+ω²); if |ω| ∼ 1/ε, then the second-order term becomes first-order and the asymptotics break down. Thus the asymptotics for ε ↓ 0 are not uniformly valid as |ω| → ∞. However, the breakdown is not necessarily so severe: for large |ω|, the Fourier components decay very quickly, so the region over which these larger |ω| values would cause problems is very small. If ε ≪ 1 then we generally seek |ω| ≪ 1/ε. From now on we will always assume ω > 0.

We now consider the dead layer. We compute ∫₀^∞ |b|avg(t) dt = 1 + (ε²/2)(√(1+ω²) − 1), so that δeff = (ε²/2)(√(1+ω²) − 1) by (2.6).
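The dead-layer formula can be cross-checked by integrating the average profile (2.20) numerically (trapezoid rule on a truncated interval; ε and ω are the illustrative values used in the figures):

```python
import math

# Integrate the average profile (2.20) over t > 0 and compare with
# delta_eff = (eps^2/2)(sqrt(1+w^2) - 1) from (2.6).
eps, w = 0.05, 2 * math.pi
s = math.sqrt(1.0 + w * w)

def b_avg(t):                        # equation (2.20) for t > 0
    return math.exp(-t) + eps**2 * 0.5 * s * (math.exp(-t) - math.exp(-s * t))

n, T = 200000, 40.0                  # [0, 40] stands in for [0, infinity)
h = T / n
area = 0.5 * (b_avg(0.0) + b_avg(T)) + sum(b_avg(i * h) for i in range(1, n))
area *= h
delta_eff = area - 1.0
closed_form = 0.5 * eps**2 * (s - 1.0)
```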
If ε = 0 or if the spatial frequency goes to zero (so that the interface looks flat), we should recover the flat solution and the dead layer should be zero. This formula is in agreement.

We have selected a value of ε that seems physically reasonable. In speaking with the experimentalists, there is reasonable confidence that the amplitude of the perturbation, ε, is at most 0.05 [8]. We then chose ω subject to the condition that εω is small.

Our first plot depicts the field magnitude profile predicted by the asymptotics, showing how the magnitude varies both with y and with t. This is seen in figure 2.7. We can see some effects due to the surface roughness.

Figure 2.7: A profile of the field magnitude with ε = 0.05 and ω = 2π in the y-t plane. Here the perturbation is quite obvious, but as t gets large the perturbation gradually disappears.

Figure 2.8: Peak and Valley Profiles.

We now consider the field profile for fixed values of y. When we refer to a peak profile, we refer to a profile of the field magnitude as a function of t where cos(ωy) takes on a maximum value, and when we refer to a valley, we refer to a point where cos(ωy) takes on a minimum. See figure 2.8. Intuitively, the field should decay more slowly (with respect to t) from a valley than from a peak. This is because in this geometry the field will not decay in the vacuum, and points located at t > 0 in a valley profile are closer to the vacuum region (where the first component is always 1) than points at t > 0 in a peak profile.

Our next plot, figure 2.9, shows the peak and valley profiles. It also includes the plot of the average field magnitude after averaging over y. The average lies right in the middle of the two extreme profiles.

Figure 2.9: A profile of the field magnitude with ε = 0.05 and ω = 2π in the peak and valley cases, and the average field profile.

Given our definition of the dead layer, δeff = (ε²/2)(√(1+ω²) − 1), we plotted it.
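As a further sanity check on the expansion (2.18) behind these profiles, the field should be continuous across the interface z = ε cos(ωy) to within O(ε³). A sketch (same illustrative ε = 0.05, ω = 2π; the tolerance is a loose bound on the O(ε³) mismatch):

```python
import math

# Evaluate the two-sided expansion (2.18) for b1 just above and just below
# the interface z = eps*cos(w*y); the jump should be O(eps^3).
eps, w = 0.05, 2 * math.pi
s1 = math.sqrt(1.0 + w * w)
s2 = math.sqrt(1.0 + 4.0 * w * w)
c2 = 0.5 * (s1 - 0.5)

def b1(y, z):
    if z <= eps * math.cos(w * y):                  # vacuum side
        return 1.0
    return (math.exp(-z)
            + eps * math.cos(w * y) * math.exp(-s1 * z)
            + eps**2 * c2 * (math.cos(2 * w * y) * math.exp(-s2 * z)
                             + math.exp(-z)))

jump = max(abs(b1(y, eps * math.cos(w * y) + 1e-12)
               - b1(y, eps * math.cos(w * y) - 1e-12))
           for y in [k / 100.0 for k in range(100)])
# dropping the eps^2 terms instead would leave an O(eps^2) mismatch
```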
We held  fixed at 0.05 and varied ω from 0 to 8pi. We also held ω fixed at pi and varied  from 0 to 0.15. The effective dead layer is plotted in figure 2.10. This may seem like an extreme range of values given that the asymp- totics are valid for small  and ω of order one. However, in computing the maximum difference between the asymptotic and numeric solutions (which we explain in section 2.3) at the upper end of these ranges, the sup-norm of the difference was 0.0027 (for  = 0.15 and ω = pi) and 0.0137 (for  = 0.05 and ω = 8pi), demonstrating that the asymptotics are reasonably accurate even in these more extreme limits. 42 2.2. Asymptotic Analysis Figure 2.10: Left: the effective dead layer at fixed  = 0.05. Right: the effective dead layer at fixed ω = pi. 2.2.4 Geometry Two: Surface with Roughness in One Spatial Direction and Applied Field Not Uniformly Parallel to Surface First-Order Term We now consider a surface of form z =  cos(ωx) = 2(e iωx + e−iωx) with applied field θ̂ = (1, 0, 0)T . We note that in this case the applied field is no longer parallel to the surface as can be seen in figure 2.11. In this regime (ωx, ωy) = (ω, 0). Then our zeroth-order term is b(0) =  (1, 0, 0)T if z ≤ 0 (1, 0, 0)T e−z if z > 0. By (2.16b), [b(1)]z=0 = − cos(ωx)[∂zb(0)]z=0 = 12(1, 0, 0)T e−iωx+12(1, 0, 0)T eiωx 43 2.2. Asymptotic Analysis Figure 2.11: The applied field has nonzero perpendicular components with respect to the surface. so we compute β (1) ±ω,0 = 1 2 Γ1,0  ±iω1,0ω ω1,1ω1,1 0  . Using these terms as per (2.17), our resulting first order perturbation is b(1) = Γ1,0   −ω2ω1,0 cos(ωx) 0 −ω1,0ω1,0ω sin(ωx)  eω1,0z if z ≤ 0 ω1,0ω1,0 2 cos(ωx) 0 −ω1,0ω1,0ω sin(ωx)  e−ω1,0z if z > 0. 44 2.2. Asymptotic Analysis Already we see new behavior emerging: the mixing of field components (b3 is no longer zero). We also note the magnetic field in the vacuum is perturbed. 
Second-Order Term By carrying out similar computations as for the case of the perturbation z =  cos(ωy) now that we know b(1) and b(0) we find: b(2) =   4ωE2,0,1 cos(2ωx) 0 2ω2,0E2,0,1 sin(2ωx)  eω2,0z if z ≤ 0 2ω2,0E2,0,2 cos(2ωx) 0 −4ωE2,0,2 sin(2ωx)  e−ω2,0z +  E0,0,2 0 0  e−z if z > 0 where E2,0,1 = Γ2,0(−12ω2,0ωσ1 − 14ω2,02σ3), E2,0,2 = Γ2,0(14ω2,0ω2,0σ1 − 1 2ω2,0ωσ3), E0,0,2 = 1 2σ1 and where σ1 = −Γ1,0(−ω1,0ω1,03 + ω1,0ω1,0ω2)− 12 and σ3 = −Γ1,0(ω1,0ω1,02ω + ω1,02ω1,0ω). Asymptotic Results for Second Geometry We have expressed b to second order in  with the variable t = z−  cos(ωx). Although we do not write the expression here, it is of the form b = b̃(0)(t) + b̃(1)(t) + 2b̃(2)(t) +O(3). 45 2.2. Asymptotic Analysis To find the field magnitude to second order in  we first compute |b|2 = (b̃(0) 2 1 + b̃(0) 2 2 + b̃(0) 2 3) + (2b̃(0)1b̃(1)1 + 2b̃(0)2b̃(1)2 + 2b̃(0)3b̃(1)3) +2(2b̃(0)1b̃(2)1 + 2b̃(0)2b̃(2)2 + 2b̃(0)3b̃(2)3 + b̃(1) 2 1 + b̃(1) 2 2 + b̃(1) 2 3) +O( 3). This is in full generality. Many of the terms are zero. We label the zeroth order term as u0, the first order as u1 and the second order as u2 so that |b| = √ |b|2 = √ u0 + u1 + 2u2 +O(3) = √ u0(1+ u1 2u0 +2( u2 2u0 − u 2 1 8u20 ))+O(3). (2.21) Given this, we could compute the average field magnitude by averaging over x. The result is given below: |b|avg(t) =  1 +  2(ρ1e 2ω1,0t − ρ2eω1,1t) if t ≤ 0 e−t + 2(ρ3e−t − ρ4e−ω1,0t + ρ5e(1−2ω1,0t)) if t > 0 where we define ρ1 = 14Γ 2 1,0ω1,0 2ω1,0 2ω2, ρ2 = 12Γ1,0ω1,0ω1,0ω 2, ρ3 = (E0,0,2+ 1 4), ρ4 = 1 2Γ1,0ω1,0ω1,0 3, and ρ5 = 14Γ 2 1,0ω1,0 2ω1,0 2ω2. The effective dead layer is computed by (2.6) and we obtain δeff = 2(ρ3 − ρ4 ω1,0 + ρ5 2ω1,0 − 1). In this second geometry, we plot the first component of the field in a peak and valley profile. At both the peak and valley, the third component has value zero so we need to select another value of x (in this case −0.26). Here, 46 2.2. 
peaks and valleys refer to slices of constant x which maximize or minimize cos(ωx), respectively. The plot of both the first and third components is given in figure 2.12.

Figure 2.12: Left: a profile of b1 with ε = 0.05 and ω = 2π in the peak and valley cases. Right: a profile of b3 with ε = 0.05, ω = 2π, and x = −0.26 fixed.

The profile of the average field is given in figure 2.13. We also show the effective dead layer in figure 2.14, over the same ranges of ε and ω as in the previous geometry. The respective sup-norm errors, verified numerically in section 2.3, for ε = 0.15 with ω = π and for ε = 0.05 with ω = 8π, are 0.00462 and 0.00641.

Figure 2.13: A profile of |b|avg with ε = 0.05 and ω = 2π.

Figure 2.14: Left: the effective dead layer at fixed ε = 0.05. Right: the effective dead layer at fixed ω = π.

This geometry is a lot more interesting. We see that, with respect to the first component of the field, the decay begins even before the interface (when looking at the peak profile). We also note that the individual components can take on values that exceed their value at z = −∞. It might be surprising, but there is nothing that says this cannot happen. Indeed, we have ∆b = 0 in the vacuum region, and therefore each component should obey the maximum/minimum principle (because it is harmonic). The maximum principle states that if f is a harmonic function in a domain D, then its maxima and minima lie on ∂D (the boundary). In our case, the maxima of b1 and b3 lie on the boundary of the vacuum region, right on the interface. The dead layer is actually negative here. By adding roughness in the direction of the field, the field is no longer constant on the vacuum side, and the net effect is to diminish the average field magnitude as compared to the flat geometry (so that δeff < 0).
We also note that the magnitude of the dead layer in this geometry is much smaller than in the first geometry. 2.2.5 Geometry Three: Surface with Roughness in Two Spatial Directions First-Order Term Again we take the applied field as θ̂ = (1, 0, 0)T and we have the same base solution as in the previous geometries. By (2.16b), after decomposing the surface z =  cos(ωxx) cos(ωyy) into components z = 14(e i(−ωx−ωy) + ei(−ωx+ωy) + ei(ωx−ωy) + ei(ωx+ωy)) we require [b(1)]|z=0 = 14(e i(−ωx−ωy) + ei(−ωx+ωy) + ei(ωx−ωy) + ei(ωx+ωy))(1, 0, 0)T . We then compute four terms such terms asM−1ωx,ωy  1 4 0 0  and use (2.17) to get the first-order perturbation: 49 2.2. Asymptotic Analysis b(1) =  Γ1,1  −ω11ω2x cos(ωxx) cos(ωyy) ω1,1ωxωy sin(ωxx) sin(ωyy) −ω1,1ω1,1ωx sin(ωxx) cos(ωyy)  eω1,1z if z ≤ 0 Γ1,1  ω1,1(ω1,1ω1,1 + ω2y) cos(ωxx) cos(ωyy) ω1,1ωxωy sin(ωxx) sin(ωyy) −ω1,1ω1,1ωx sin(ωxx) cos(ωyy)  e−ω1,1z if z > 0. Second-Order Term The second order term can be worked out using the same algorithms as in previous sections and we find 50 2.2. 
Asymptotic Analysis b(2) =   −8E2,2,1ωx cos(2ωxx) cos(2ωyy) 8E2,2,1ωy sin(2ωxx) sin(2ωyy) −4E2,2,1ω2,2 sin(2ωxx) cos(2ωyy)  eω2,2z +  −4E2,0,1ωx cos(2ωxx) 0 −2E2,0,1ω2,0 sin(2ωxx)  eω2,0z if z ≤ 0 4E2,2,2ω2,2 cos(2ωxx) cos(2ωyy) −4E2,2,3ω2,2 sin(2ωxx) sin(2ωyy) (−8E2,2,2ωx − 8E2,2,3ωy) sin(2ωxx) cos(2ωyy)  e−ω2,2z +  2E2,0,2ω2,0 cos(2ωxx) 0 −4E2,0,2ωx sin(2ωxx)  e−ω2,0z +  2E0,2,2ω0,2 cos(2ωyy) 0 0  e−ω0,2z +  σ1 4 0 0  e−z if z > 0 where we define the E’s by: E2,2,1 = Γ2,2(18ω2,2ωxσ1− 18ω2,2ωyσ2+ 116ω2,22σ3), E2,2,2 = Γ2,2( 116(4ω 2 y+ω2,2ω2,2)σ1+ 1 4ωxωyσ2−18ω2,2ωxσ3), E2,2,3 = Γ2,2(−14ωxωyσ1− 1 16(4ω 2 x + ω2,2ω2,2)σ2 − 18ω2,2ωyσ3), E2,0,1 = Γ2,0(14ω2,0ωxσ1 + 18ω2,02σ3), E2,0,2 = Γ2,0(18ω2,0ω2,0σ1−14ω2,0ωxσ3), andE0,2,2 = Γ0,2(18(4ω2y+ω0,2ω0,2)σ1). We also have that σ1 = −12 − J1, σ2 = Γ1,1(ω1,12ωxωy + ω1,1ω1,1ωxωy), σ3 = Γ1,1(−ω1,1ω1,12ωx−ω1,12ω1,1ωx), and J1 = Γ1,1(−ω1,12(ω2y+ω1,1ω1,1)+ ω1,1ω1,1ω 2 x). 51 2.2. Asymptotic Analysis Asymptotic Results for Third Geometry We can now expand the solution in powers of  with the variable t = z−  cos(ωxx) cos(ωyy). We don’t show the result here as it is very complex, but we emphasize that our result has been verified thoroughly with Maple. We first checked that before re-expressing in terms of t, the terms on both the vacuum and superconducting sides satisfied the PDEs. Taking these ex- pressions, we expanded to second order in  with z = t+ cos(ωxx) cos(ωyy). Then we substituted t = 0 into the asymptotic solution and verified that for randomly chosen x, y, ωx, and ωy the solution was continuous at each order in . Using maple, we can also compute |b|avg(t) = ωxωy4pi2 ∫ pi/ωy −pi/ωy ∫ pi/ωx −pi/ωx |b|(x, y, t)dxdy where |b| has been expressed in powers of  up to second order (by means of (2.21)). 
The result is given below:

|b|_avg(t) = 1 + ε^2 (ρ_1 e^{2ω_{1,1} t} − ρ_2 e^{ω_{1,1} t}) if t ≤ 0,
|b|_avg(t) = e^{−t} + ε^2 (−ρ_3 e^{−ω_{1,1} t} + ρ_4 e^{(1−2ω_{1,1}) t} − ρ_5 e^{−t}) if t > 0, (2.22)

where we have the definitions ρ_1 = (1/8) Γ_{1,1}^2 (ω_{1,1}^2 ω_x^2 ω_y^2 + ω_{1,1}^2 ω_{1,1}^2 ω_x^2), ρ_2 = (1/4) Γ_{1,1} ω_{1,1} ω_{1,1} ω_x^2, ρ_3 = (1/4) Γ_{1,1} (ω_{1,1}^2 ω_y^2 + ω_{1,1} ω_{1,1}^3), ρ_4 = (1/8) Γ_{1,1}^2 (ω_{1,1}^2 ω_{1,1}^2 ω_x^2 + ω_{1,1}^2 ω_x^2 ω_y^2), and ρ_5 = J_1/4.

By (2.6) we find in this model that

δ_eff = ε^2 (−ρ_3/ω_{1,1} + ρ_4/(1 − 2ω_{1,1}) − ρ_5).

We now compute profiles for the field components and the average field with ε = 0.05 and (ω_x, ω_y) = (2π, 2π). In this geometry, peaks and valleys are not uniquely defined in terms of fixed x and y values. We define peaks as occurring at x = −π/ω_x and y = −π/ω_y, and valleys as occurring at x = 0 and y = −π/ω_y. Both the second and third components vanish at peaks and valleys, so to obtain any profile we selected pairs of (x, y) where they do not vanish. Figure 2.15 displays these results. In this regime, the average field still looks very much like the flat solution, so we do not include its plot. Again we see b_1 exceeding its value of b_1(z = −∞) = 1.

The dead layer is another interesting point to investigate. We explore it from three angles, displayed in figures 2.16 and 2.17. First, we fix ε = 0.05, set ω_x = ω_y = ω, and plot how the dead layer varies with what we could call the net spatial frequency √(ω_x^2 + ω_y^2). We also fix (ω_x, ω_y) = (π, π) and plot the dead layer as a function of ε. As verified numerically, the sup-norms of the errors at the upper ends of these plots were 0.0174 and 0.00662, respectively. Then, for fixed √(ω_x^2 + ω_y^2) = 8π and ε = 0.05, we explore how the dead layer varies with the ratio ω_x/ω_y. As ω_x → 0, we would expect to recover the first geometry, and as ω_y → 0, we would expect to recover the second geometry. The plot depicts qualitatively what we expect. As the ratio gets small, the

Figure 2.15: Top left: a profile of b_1 from a peak and valley.
Top right: profile of b_2 with (x, y) = (−0.27, −0.27). Bottom left: profile of b_3 with (x, y) = (−0.27, −0.50). Bottom right: difference between the average field profile in the perturbed geometry and the flat geometry. All figures computed with ε = 0.05 and (ω_x, ω_y) = (2π, 2π).

Figure 2.16: Left: the effective dead layer at fixed ε = 0.05 and ω_x = ω_y. Right: the effective dead layer at fixed (ω_x, ω_y) = (π, π).

Figure 2.17: The effective dead layer at fixed ε = 0.05 and (ω_x^2 + ω_y^2)^{1/2} = 8π.

dead layer increases. As it gets big, the roughness acts more and more in the direction of the field and the dead layer decreases. More interestingly, as the ratio reaches zero, the dead layer for (ω_x, ω_y) = (8π, 0) is approximately half of the dead layer in the first geometry with ω = 8π. Also, as the ratio reaches its maximum value, the dead layer reaches a value of −2.95 × 10^{−4} (off the plot range of the graph), which is also about half of the dead layer size in the second geometry with (0, 8π). This may seem very surprising, but we show exactly where this factor of 1/2 comes from in appendix C. Ultimately, taking a limit as a spatial frequency approaches zero in the third geometry is unphysical experimentally, because physicists can only average over a finite range (if a frequency goes to zero, the region over which averaging needs to take place grows without bound). In the next section we turn to numerical computations.

2.3 Finite Difference Program for Geometry One

Here we consider an interface z = ε cos(ωy) with applied field (1, 0, 0)^T, which we studied asymptotically in section 2.2.3.

2.3.1 Numerical Formulation

Constant Field on the Vacuum Side

We take the field to be constant on the vacuum side; here we show that a solution with this property exists.
Uniqueness is not proven here, but given the three-dimensional numerical work that follows (section 2.4), it seems highly likely. If we take

b = (1, 0, 0)^T if z ≤ ε cos(ωy), and b = (b(y, z), 0, 0)^T if z > ε cos(ωy),

with the condition [b]|_{z = ε cos(ωy)} = 0, then as long as Δb(y, z) = b(y, z), b(y, ε cos(ωy)) = 1, and b(y, z) → 0 as z → ∞ (which we will solve for numerically), all PDEs and boundary conditions are satisfied.

Coordinate Transformations

Trying to implement a finite difference mesh on a sinusoidal surface is extremely difficult. In addition, the main variable of interest is the depth past the sample surface, not the z-coordinate. We can solve both problems by a transformation of coordinates. Here we define σ = y and t = z − ε cos(ωy). We now need to find the Laplacian (∂_yy + ∂_zz) in the new coordinates. First,

∂_y = ∂_yσ ∂_σ + ∂_yt ∂_t = ∂_σ + εω sin(ωy) ∂_t = ∂_σ + εω sin(ωσ) ∂_t.

Then

∂_yy = ∂_σ(∂_σ + εω sin(ωσ) ∂_t) + εω sin(ωσ) ∂_t(∂_σ + εω sin(ωσ) ∂_t)
     = ∂_σσ + εω^2 cos(ωσ) ∂_t + εω sin(ωσ) ∂_tσ + εω sin(ωσ) ∂_σt + ε^2 ω^2 sin^2(ωσ) ∂_tt.

And ∂_z = ∂_zσ ∂_σ + ∂_zt ∂_t = ∂_t, so clearly ∂_zz = ∂_tt. We arrive at the Laplacian in the new coordinates (after assuming all functions are C^2 and making use of the equality of mixed partial derivatives):

Δ = ∂_σσ + (1 + ε^2 ω^2 sin^2(ωσ)) ∂_tt + 2εω sin(ωσ) ∂_σt + εω^2 cos(ωσ) ∂_t.

There is one more thing that needs to be considered. Given that we know the field is zero at t = ∞, it must decay far from the interface. Near the interface itself is where the most interesting phenomena occur. In terms of a mesh, we need more points close to the interface to numerically resolve all the intricacies in the solution, and fewer points farther away. This will save computation time. It is also helpful because numerical results can be unreliable if two dimensions of a mesh differ significantly.
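The change of variables above can be spot-checked symbolically. Below is a minimal sketch in Python (sympy assumed; ε and ω written as `eps` and `w`), applying both forms of the operator to an arbitrary concrete test function; this is an illustration, not the verification procedure used in the thesis.

```python
import sympy as sp

y, z, eps, w = sp.symbols('y z epsilon omega')
sig, t = sp.symbols('sigma t')

# Arbitrary smooth test function of the new coordinates (any choice works).
F = sp.exp(-t) * sp.sin(sig)

# Compose with sigma = y, t = z - eps*cos(w*y) and take the (y, z) Laplacian.
composed = F.subs({sig: y, t: z - eps*sp.cos(w*y)})
lap_yz = sp.diff(composed, y, 2) + sp.diff(composed, z, 2)

# The transformed operator derived in the text, applied in (sigma, t),
# then evaluated at sigma = y, t = z - eps*cos(w*y).
transformed = (sp.diff(F, sig, 2)
               + (1 + eps**2*w**2*sp.sin(w*sig)**2)*sp.diff(F, t, 2)
               + 2*eps*w*sp.sin(w*sig)*sp.diff(F, sig, t)
               + eps*w**2*sp.cos(w*sig)*sp.diff(F, t))
transformed = transformed.subs({sig: y, t: z - eps*sp.cos(w*y)})

print(sp.simplify(lap_yz - transformed))  # -> 0
```

The difference simplifies to zero, confirming the chain-rule computation term by term.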
This aspect of minimizing grid points is of paramount importance later on in our three-dimensional code. To achieve these extra grid properties, one more coordinate transformation is needed. We will do this transformation in the next section as we describe how we build our mesh.

The Grid

We will have a mesh in (σ, τ)-coordinates, where τ is a transformation of t. We will enforce periodicity in σ with spatial period 2π/ω. We need only consider the single nonzero component of b in this analysis; instead of calling it b_1 we can just call it b. On the vacuum side (up to t = 0) we have b = 1, so we focus on the superconducting side in our mesh. Numerically we impose t = ∞ at t = M ≫ 1. We now pick a natural number N and discretize in σ. We choose

h_σ = 2π/(Nω)

Figure 2.18: Near t = 0 the grid is square, but as t goes farther out the spacing in the t-direction increases.

and choose σ_i = −π/ω + (i − 1)h_σ for i = 1, 2, ..., N. Because we want both grid dimensions to scale with N, we choose a similar discretization for τ:

τ_j = α + (j − 1)h_σ for all j ∈ N for which τ_j ≤ β.

We call the number of τ points N_τ. We have some freedom over the choice of α and β and will select them later. Now we consider what we want in terms of a mesh. We would like the t-spacing to be roughly equal to the σ-spacing near the interface (to resolve details), but far away we would like the t-spacing to scale roughly with M/N. See figure 2.18. Near the interface, if h_t denotes the t-spacing, then we want h_t ≈ h_σ. Thus, given that τ is incremented in units of h_σ near the interface, dt/dτ ≈ 1.

Figure 2.19: A plot of τ = f(t).

Far from the interface, we want h_t ≈ M/N = (Mω/2π) h_σ, so dt/dτ ≈ Mω/(2π). We define η = 2π/(Mω) to be the value of dτ/dt near t = M. The algebra is a little easier if we consider τ = f(t), where f is invertible and satisfies f(0) = α < β = f(M).
We consider f of the form a log(t − b). See figure 2.19 for the general shape of f. From the derivative conditions we have a/(−b) = 1 and a/(M − b) = η, which shows that a = ηM/(1 − η) > 0 and b = −ηM/(1 − η) < 0. Knowing a and b, we calculate α = a log(−b) and β = a log(M − b). Under this transformation,

∂_t = f'(t) ∂_τ = (a/(t − b)) ∂_τ = a e^{−τ/a} ∂_τ

(this comes from observing that if τ = a log(t − b) then 1/(t − b) = e^{−τ/a}). We also have ∂_tt = e^{−2τ/a}(−a ∂_τ + a^2 ∂_ττ). Substituting the τ-derivatives in place of t-derivatives gives:

Δ = ∂_σσ + ξ^{(1)} ∂_ττ + ξ^{(2)} ∂_στ + ξ^{(3)} ∂_τ,

where ξ^{(1)} = a^2 (1 + ε^2 ω^2 sin^2(ωσ)) e^{−2τ/a}, ξ^{(2)} = 2aεω sin(ωσ) e^{−τ/a}, and ξ^{(3)} = a(εω^2 cos(ωσ) e^{−τ/a} − (1 + ε^2 ω^2 sin^2(ωσ)) e^{−2τ/a}).

So a generic point on the mesh can be described by (σ_i, τ_j). It is also necessary to map the (i, j) coordinates to a single number, K(i, j) = j + (i − 1)N_τ. This way we can turn the whole PDE system into a matrix equation of the form Mu = f, where M is a matrix that comes from the discretized equations, u is our unknown, and f is the load vector (see section 1.1.1).

The Discretized Equations

Conditions along the Interface and at z = ∞  The interface corresponds to j = 1. The field being 1 on the interface corresponds to M_{K(i,1),K(i,1)} = 1 with f_{K(i,1)} = 1 for all i. The far-field equation is for j = N_τ. The field being 0 at τ_{N_τ} corresponds to M_{K(i,N_τ),K(i,N_τ)} = 1 and f_{K(i,N_τ)} = 0 for all i.

Laplacian  Here we consider 2 ≤ j ≤ N_τ − 1. In this regime, we have the following second-order approximations:

∂_σσ b(σ_i, τ_j) = (u_{K(i+1,j)} − 2u_{K(i,j)} + u_{K(i−1,j)})/h_σ^2 + O(h_σ^2)
∂_ττ b(σ_i, τ_j) = (u_{K(i,j+1)} − 2u_{K(i,j)} + u_{K(i,j−1)})/h_σ^2 + O(h_σ^2)
∂_στ b(σ_i, τ_j) = (u_{K(i+1,j+1)} + u_{K(i−1,j−1)} − u_{K(i+1,j−1)} − u_{K(i−1,j+1)})/(4h_σ^2) + O(h_σ^2)
∂_τ b(σ_i, τ_j) = (u_{K(i,j+1)} − u_{K(i,j−1)})/(2h_σ) + O(h_σ^2)

for all i, where to impose periodicity, when i = 1 we replace i − 1 with N, and when i = N we replace i + 1 with 1.
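The spacing behaviour of this logarithmic stretching is easy to check numerically. A sketch, assuming the parameter choices M = 9.25, ω = π, N = 50 used later in the validation (any values would do):

```python
import numpy as np

# Stretched grid on [0, M]: tau = a*log(t - b), uniform spacing h_sigma in tau.
M, omega, N = 9.25, np.pi, 50
h_sigma = 2*np.pi / (N*omega)          # sigma spacing
eta = 2*np.pi / (M*omega)              # target d(tau)/dt near t = M
a = eta*M / (1 - eta)
b = -eta*M / (1 - eta)                 # note a = -b > 0
alpha, beta = a*np.log(-b), a*np.log(M - b)

tau = np.arange(alpha, beta + 1e-12, h_sigma)   # uniform in tau
t = np.exp(tau / a) + b                          # mapped back to t

dt = np.diff(t)
print(dt[0] / h_sigma)    # close to 1: near-interface spacing matches h_sigma
print(dt[-1] * N / M)     # close to 1: far-field spacing scales like M/N
```

The spacing grows monotonically from roughly h_σ at the interface to roughly M/N at the far boundary, which is exactly the behaviour the transformation was designed to produce.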
Inside the superconductor we impose the discretized version of Δb = b by choosing the appropriate rows of M. Defining discretized ξ variables (so that ξ^{(2)}_{i,j} = 2aεω sin(ωσ_i) e^{−τ_j/a}, for example), our matrix M is assembled as:

M_{K(i,j),K(i,j)} = −2ξ^{(1)}_{i,j}/h_σ^2 − 2/h_σ^2 − 1
M_{K(i,j),K(i±1,j)} = 1/h_σ^2
M_{K(i,j),K(i,j±1)} = ξ^{(1)}_{i,j}/h_σ^2 ± ξ^{(3)}_{i,j}/(2h_σ)
M_{K(i,j),K(i+1,j+1)} = ξ^{(2)}_{i,j}/(4h_σ^2)
M_{K(i,j),K(i+1,j−1)} = −ξ^{(2)}_{i,j}/(4h_σ^2)
M_{K(i,j),K(i−1,j+1)} = −ξ^{(2)}_{i,j}/(4h_σ^2)
M_{K(i,j),K(i−1,j−1)} = ξ^{(2)}_{i,j}/(4h_σ^2)

for all i. We also set f_{K(i,j)} = 0.

Finding the Results

We define a discretization of t: t_j = e^{τ_j/a} + b for j = 1, ..., N_τ. The approximate magnetic field at (y = σ_i, t = t_j) is given by u_{K(i,j)}. Where necessary, an average field at depth t_j can be found by averaging u_{K(i,j)} over i with j fixed.

2.3.2 Validation

The program can handle a larger range of values of ω and ε than the asymptotics. However, we first test that the program is providing reasonable results. We found that M = 9.25 and N = 50 provided results accurate to within 10^{−4}, an error we feel is very acceptable. Generally these are the parameters we set.

Order of Convergence with Respect to Exact Solution

If ε = 0 then for any ω we have a flat superconductor, so the solution should decay exponentially past the superconducting surface as exp(−t). We selected ω = π with N = 50 and plotted the result. The numerical results are shown in figure 2.20.

Figure 2.20: A verification that the two-dimensional code has the right behaviour in the flat geometry.

We can also choose boundary conditions so that the exact solution is exp(−z) even for nonzero ε. We observe that if b(y, z) = exp(−z) then Δb = b and b(y, ε cos(ωy)) = b(σ, t = 0) = exp(−ε cos(ωσ)). That is what we choose as our boundary condition.
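In the flat case ε = 0 the scheme reduces to the ordinary five-point discretization of Δb = b on a periodic strip, which makes a compact self-contained check possible. Below is a sketch in Python with scipy (not the thesis's actual code); the grid sizes are illustrative, and the index map is a 0-based variant of K(i, j) = j + (i − 1)N_τ.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Flat-interface check (eps = 0): solve  b_ss + b_tt = b  on a strip,
# periodic in s, with b = 1 at t = 0 and b = 0 at t = M.
Ns, M = 16, 9.25
Nt = 186                      # number of t points
ht = M / (Nt - 1)
hs = 2*np.pi / Ns

K = lambda i, j: j + i*Nt     # flatten (i, j) -> single 0-based index
rows, cols, vals = [], [], []
f = np.zeros(Ns*Nt)

for i in range(Ns):
    for j in range(Nt):
        if j == 0:            # interface: b = 1
            rows.append(K(i, j)); cols.append(K(i, j)); vals.append(1.0)
            f[K(i, j)] = 1.0
        elif j == Nt - 1:     # far field: b = 0
            rows.append(K(i, j)); cols.append(K(i, j)); vals.append(1.0)
        else:                 # discretized  b_ss + b_tt - b = 0
            for di in (1, -1):     # periodic neighbours in s
                rows.append(K(i, j)); cols.append(K((i+di) % Ns, j)); vals.append(1/hs**2)
            for dj in (1, -1):     # neighbours in t
                rows.append(K(i, j)); cols.append(K(i, j+dj)); vals.append(1/ht**2)
            rows.append(K(i, j)); cols.append(K(i, j))
            vals.append(-2/hs**2 - 2/ht**2 - 1.0)

A = sp.csr_matrix((vals, (rows, cols)), shape=(Ns*Nt, Ns*Nt))
u = spla.spsolve(A, f).reshape(Ns, Nt)

t = np.linspace(0, M, Nt)
err = np.abs(u - np.exp(-t)).max()
print(err)   # small: the numerical solution tracks exp(-t)
```

The computed field is σ-independent and decays like exp(−t), mirroring the flat-geometry validation described above.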
If the error E = ||b_num − b_ex||_∞ is of order h_σ^2 (which is of order N^{−2}), then on a log-log plot we would expect log E to be a linear function of log N with slope −2. The subscripts "num" and "ex" denote the numerical and exact solutions respectively; later on, "asy,k" will signify the kth-order asymptotic solution. The plot verifying second-order convergence is given in figure 2.21.

Figure 2.21: Checking second-order convergence by observing the error behaviour where the exact solution exp(−z) is known. Here ε = 0.1 and ω = 2π.

From these results, the error at N = 60 (not plotted) was 1.42 × 10^{−4}, which is very near the error made in setting b(M) = exp(−9.25) ≈ 9.61 × 10^{−5} to 0. With the same amplitude and frequency, we set M = 11 and increased N until the error plateaued. The error was 2.26 × 10^{−5}, and exp(−11) ≈ 1.67 × 10^{−5}. These results give us estimates of the errors coming from approximating the far-field condition at a finite distance from the interface; the error roughly scales with exp(−M).

Comparison with Asymptotic Solution

By selecting large N, in this case N = 50 with M = 9.25, the numerical solution should be very close to the exact solution. We would therefore expect the asymptotics, when taken to the second-order term, to converge at a rate of ε^3 to the numerical solution. However, this is a very delicate computation (and it is even more delicate in the three-dimensional code). We have b_num = b_ex + O(h_σ^2) + O(λ) (where λ represents the far-field error). Also, b_asy,2 = b_ex + O(ε^3). Subtracting the numerical and second-order asymptotic solutions yields a difference O(h_σ^2) + O(λ) + O(ε^3). In general, the error terms O(h_σ^2) and O(λ) can depend upon ε. If ε is too small then the O(ε^3) term cannot be detected; once ε gets too large, the fourth-order term in the asymptotics can exceed the third-order term and again we would not detect O(ε^3).
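The slope read off such a log-log plot can also be computed directly from errors on two grids. A small sketch with synthetic second-order data (the constant C is arbitrary and purely illustrative):

```python
import math

def observed_order(h1, e1, h2, e2):
    """Estimate p in E = C*h**p from errors measured on two grids."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# Synthetic second-order errors, E = C*h^2.
C = 3.7
h1, h2 = 1/50, 1/100
p = observed_order(h1, C*h1**2, h2, C*h2**2)
print(round(p, 10))  # -> 2.0
```

With real data the estimate fluctuates around 2 because of the far-field error O(λ) riding on top of the O(h_σ^2) term.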
For this code, choosing ε in increments of 0.05 from 0.05 to 0.3, we can verify that the order of convergence is O(ε^3) by computing the sup-norm of the difference between the asymptotic and numerical solutions at the different values of the amplitude. See figure 2.22.

Figure 2.22: Checking the convergence order of the asymptotics. Here ω = π.

2.3.3 Results

Having established the validity of the simulation, and having some understanding of the level of discretization needed to obtain a desired accuracy, we are now in a position to examine some physical problems. We start by picking a case we also studied in the asymptotic analysis of the first geometry: ε = 0.05 and ω = 2π. A plot depicting the field magnitude on average and from a peak and valley can be found in figure 2.23. The results are nearly identical to the asymptotic values.

Figure 2.23: Profiles of the field magnitude at ε = 0.05 and ω = 2π. The figure displays decay from a peak and valley, and the average magnitude.

We can also select values of ε and ω that are quite a bit larger. In particular, we choose a case with ε = 0.2 and ω = 100π; this plot is found in figure 2.24. Figure 2.24 depicts what we would expect intuitively. Given the very high spatial frequency, we would expect the field to be nearly constant until past the deepest peaks (because it must be continuous and have the constant value of 1 in the vacuum). As a result, the field decay should be delayed when looking along a valley profile. In the plot, the field is almost constant in the valley profile to a scaled depth of nearly 0.4. Even the peak profile is significantly larger than the flat profile.

Figure 2.24: Profiles of the field magnitude at ε = 0.2 and ω = 100π. The figure displays decay from a peak and valley and shows the flat geometry solution.
2.4 Finite Difference Program for General Sinusoidal Surface

Here we consider the surface z = ε cos(ω_x x) cos(ω_y y) with applied field (1, 0, 0)^T.

2.4.1 Numerical Formulation

In this three-dimensional setting, all three components of the field need to be solved for on both the vacuum and superconducting sides. This higher-dimensional setting, along with having two regions with different properties to consider, makes this work considerably more difficult. We consider the formulation in parts. Our overall formulation (based on the subsequent subsections) is displayed in figure 2.25.

Figure 2.25: Formulation for three-dimensional code.

Vacuum Side Interior

Within the vacuum we need to simultaneously satisfy ∇ × b = 0 and ∇ · b = 0. We make use of the vanishing curl to write b = ∇g, where g : R^3 → R is a scalar function (which for our purposes we will assume is at least C^2). This allows us to replace ∇ · b = ∇ · (∇g) = 0 with Δg = 0.

Superconducting Side Interior

In the superconductor we require that both

∇ · b = 0

and

Δb = b. (2.23)

In the interior we choose to impose only the Laplacian condition, and we impose the divergence-free condition on the boundary, as noted below. As the divergence and vector Laplacian operators commute, i.e. ∇ · Δ = Δ ∇·, taking the divergence of (2.23) gives ∇ · b = ∇ · Δb = Δ(∇ · b), so that Δq = q where q = ∇ · b. If q = 0 on the boundary of the superconducting region and at z = +∞ with Δq = q, then q = 0 everywhere in the interior. Thus, we aim to numerically implement ∇ · b = 0 on the interface and at z = +∞.

Boundary Conditions at z = ±∞

Given that b = ∇g = (∂_x g, ∂_y g, ∂_z g)^T, along z = −∞ it is only possible to specify ∂_x g and ∂_y g. We recall, however, that in the asymptotic analysis no solutions were possible if the third component of b was nonzero at z = −∞.
As it turns out, we do not need to impose the value of the third component at z = −∞, and our results are still consistent. To impose b(x, y, −∞) = (1, 0, 0)^T, it would at first seem appropriate to choose g(x, y, −∞) = x; however, given that our numerical formulation is based upon periodic boundary conditions, this cannot work because x is not a periodic function (of x). We introduce g̃ = g − x. Then b = ∇g̃ + (1, 0, 0)^T and Δg = Δg̃. This b still satisfies all the PDEs and boundary conditions: if Δg̃ = 0 then ∇ · b = ∇ · (∇g̃ + (1, 0, 0)^T) = Δg̃ = 0, and given that b is a gradient plus a constant vector, we must have ∇ × b = 0. There is a potential concern that this g̃ variable may not be periodic. However, given that b must be periodic, we can show that g̃ is, too. We give this proof in appendix D.

Our far-field vacuum boundary condition is g̃ = 0 at z = −∞. At z = ∞, we need to impose both b = 0 and ∇ · b = 0. We impose the zero conditions on the first two components directly, i.e. b_1(x, y, ∞) = b_2(x, y, ∞) = 0. This also forces ∂_x b_1(x, y, ∞) = ∂_y b_2(x, y, ∞) = 0. We then focus on the divergence condition and demand that ∂_z b_3(x, y, ∞) = 0. Given the analysis in the asymptotic work, the Fourier components involve exponentials of multiples of z, and ∂_z can only go to zero if all the exponentials decay. Thus ∂_z b_3(x, y, z) → 0 as z → ∞ also weakly imposes b_3(x, y, ∞) = 0.

Interface Conditions

At the interface we need to specify a scalar equation (representing the vacuum-side interface condition) and a vector equation (representing the superconducting-side interface condition). We impose ∇ · b = 0 on the superconducting side of the interface, and b = ∇g̃ + (1, 0, 0)^T on the interface, as discussed in section 2.4.1.
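The commutation identity ∇ · Δ = Δ ∇· used in the formulation can be spot-checked symbolically on any smooth field. A sketch with sympy (the sample field is arbitrary; the identity holds for any C^3 field):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# An arbitrary smooth sample vector field.
b = sp.Matrix([sp.sin(x)*sp.exp(-z), x*y*z, sp.cos(y)*z**2])

lap = lambda F: sp.diff(F, x, 2) + sp.diff(F, y, 2) + sp.diff(F, z, 2)
div = lambda V: sp.diff(V[0], x) + sp.diff(V[1], y) + sp.diff(V[2], z)

lhs = div(sp.Matrix([lap(b[0]), lap(b[1]), lap(b[2])]))  # div of vector Laplacian
rhs = lap(div(b))                                        # Laplacian of divergence
print(sp.simplify(lhs - rhs))  # -> 0
```

Because the operators commute, imposing q = ∇ · b = 0 only on the boundary and at infinity, together with Δq = q inside, forces q = 0 throughout the interior, as argued above.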
Coordinate Transformation Again, we note that the most interesting phenomena occur near the interface and so, along with considering the depth relative to the interface, t = z −  cos(ωxx) cos(ωyy), we also stretch our coordinates as in the two dimensional 73 2.4. Finite Difference Program for General Sinusoidal Surface code. We define our coordinates as ρ = x, σ = y, and τ =  av log(−t− bv) if t ≤ 0 as log(t− bs) if t > 0. with av, bv, as, bs being determined by the numerical approximations to ±∞. This provides us with a new set of differential operators in the (ρ, σ, τ) parameters. Much of the work is very similar as in the two dimensional code so we summarize the results here: ∂x = ∂ρ±ωx sin(ωxρ) cos(ωyσ)ae−τ/a︸ ︷︷ ︸ ξ(1) ∂τ ∂y = ∂σ ±ωy cos(ωxρ) sin(ωyσ)ae−τ/a︸ ︷︷ ︸ ξ(2) ∂τ ∂z = ±ae−τ/a︸ ︷︷ ︸ ξ(3) ∂τ and 4 = ∂ρρ + ∂σσ ±2ωx sin(ωxρ) cos(ωyσ)ae−τ/a︸ ︷︷ ︸ ξ(4) ∂τρ ±2ωy cos(ωxρ) sin(ωyσ)ae−τ/a︸ ︷︷ ︸ ξ(5) ∂τσ+ [((ω2x + ω 2 y)ω 2 y cos(ωxρ) cos(ωyσ))(±ae−τ/a)+ (2ω2x sin 2(ωxρ) cos2(ωyσ) + 2ω2y cos 2(ωxρ) sin2(ωyσ) + 1)(−ae−2τ/a)]︸ ︷︷ ︸ ξ(6) ∂τ+ 74 2.4. Finite Difference Program for General Sinusoidal Surface [(2ω2x sin 2(ωxρ) cos2(ωyσ) + 2ω2y cos 2(ωxρ) sin2(ωyσ) + 1)(a2e−2τ/a)]︸ ︷︷ ︸ ξ(7) ∂ττ where a = av on the vacuum side and as on the superconducting side and we choose − for the vacuum side and + for the superconducting side. In this transformation we have a flat interface in the new coordinates. The Grid Another difficulty in this numerical work is in setting up a grid. Because the magnetic field is given in terms of the gradient of the vacuum g̃ function, it is necessary to consider two grids that interlock at the interface. We are given ωx, ωy, and . We define a parameter N describing the number of points in the ρ− and σ− directions and Mm and Mp describing how far the grid spans in the vacuum and superconducing sides respectively. 
Given N, we proceed to define: Our spacings, hρ = 2piNxωx , hσ = 2pi Nyωy , hτ = min{hρ, hσ}; and our stretch- ing parameters, ηv = NhτMm , ηs = Nhτ Mp , bv = ηgMm+ hτ 2 ηg−1 , hI = Mp N , bs = ηsMp+ hI 2 ηs−1 , av = βg+ hτ 2 , as = −βs, αv = av log(Mm−bv), βv = av log(hτ2 −bv), αs = as log(−bs), βs = as log(Mp − bs). The careful selection of these α and β parameters allows for the grids to interlock in the right way. When ωx or ωy are zero we defined hρ or hσ respectively to be 1. We set ρi = − pi ωx + (i− 1)hρ for 1 ≤ i ≤ N, 75 2.4. Finite Difference Program for General Sinusoidal Surface σj = − pi ωy + (j − 1)hσ for 1 ≤ j ≤ N, ρ i = ρi + hρ 2 , σj = σj + hσ 2 and if ωx and/or ωy are zero then we only choose three coordinates −1, 0, 1 for the ρ′s and/or σ′s. We set Nρ to be the number of ρ− points and Nσ to be the number of σ−points. We set Nv = [βv−αvhτ ] + 1 and Ns = [ βs−αs hτ ] + 1 with [ ] being the integer floor function and define τNv−k+1 = βv − (k − 1)hτ for k = 1, ..., Nv and τk = αs + (k − 1)hτ for k = 1, ..., Ns. In the end we have NxNy(Nv + 3Ns) variables and equations to define. We define the mapping K that takes four coordinates and maps them to a single number: K(i, j, k, `) =  (k − 1)NxNy + (j − 1)Nx + i if ` = 0NxNyNv + (`− 1)NxNyNs + (k − 1)NxNy + (j − 1)Nx + i if ` 6= 0 where to impose the periodic boundary conditions, if i = 0 then it must be replaced by i = Nρ and if i = Nρ+1 it is replaced by 1 and similarly for j with Nσ. Based on this numbering, the i, j, and k describe the ρ, σ, and τ positions. The number ` = 0 describes the g̃ variable (the variable in the vacuum including its ghost points), and ` = 1, 2, 3 describes the first, second, and third components of the magnetic field respectively in the superconducting 76 2.4. Finite Difference Program for General Sinusoidal Surface Figure 2.26: The mesh in the three-dimensional system. Note how g̃ and b meshes interlock. 
The circles with crosses indicate that the points occur deeper into the page than the circles with the dots. At each circle with a cross, three components are specified. At each circle with a dot, the scalar value of g̃ is specified.

coordinates. To discretize t, the depth past the surface, we define:

t_k = −e^{(τ_k + τ_{k+1})/(2a_v)} − b_v for 1 ≤ k ≤ N_v − 2,
t_k = e^{τ_{k+2−N_v}/a_s} + b_s for N_v − 1 ≤ k ≤ N_v + N_s − 3.

This imposes t_1 ≈ −M_m + h_I/2, t_{N_v−1} = 0, and t_{N_v+N_s−3} ≈ M_p. Figure 2.26 illustrates the mesh.

The Discretized Equations

Here we present how the equations were handled. We define u as the unknown vector, M as the matrix, and f as the load vector. The equations change based upon the τ position, so in each slice the equations hold for all i and j. We also discretize the ξ variables, as in the two-dimensional code, with indices (i, j, k, ℓ). For example, ξ^{(2)}_{i,j,k,0} = −εω_y cos(ω_x ρ_i) sin(ω_y σ_j) a_v e^{−τ_k/a_v}.

Conditions at −∞  This corresponds to k = 1 and ℓ = 0. We need to impose g̃(ρ, σ, −∞) = 0. To do so we set f_{K(i,j,1,0)} = 0 with M_{K(i,j,1,0),K(i,j,1,0)} = 1.

Conditions at +∞  This corresponds to k = N_s and ℓ = 1, 2, 3. We need to impose b_1(ρ, σ, ∞) = b_2(ρ, σ, ∞) = 0 and ∂_z b_3(ρ, σ, ∞) = ξ^{(3)} ∂_τ b_3 = 0. This is an equation for the points with τ coordinate number N_s. We impose f_{K(i,j,N_s,ℓ)} = 0. The point at ∞ lies between regular grid points and ghost points. We impose an average condition for the first two components (their average must be zero) and a second-order derivative condition for the third component:

M_{K(i,j,N_s,ℓ),K(i,j,N_s−1,ℓ)} = 1/2
M_{K(i,j,N_s,ℓ),K(i,j,N_s,ℓ)} = 1/2

for ℓ = 1, 2, and

M_{K(i,j,N_s,3),K(i,j,N_s−1,3)} = −ξ^{(3)}_{i,j,N_s,3}/h_τ
M_{K(i,j,N_s,3),K(i,j,N_s,3)} = ξ^{(3)}_{i,j,N_s,3}/h_τ.
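The four-index-to-flat mapping K with its periodic wrapping can be sketched directly; the grid counts below are illustrative, not the thesis's values:

```python
# Sketch of the four-index-to-flat mapping used to assemble M u = f.
Nx, Ny, Nv, Ns = 5, 5, 7, 9   # illustrative grid counts

def K(i, j, k, ell):
    """Map (rho, sigma, tau, component) indices (1-based, as in the text)
    to a single 1-based equation number; i and j wrap periodically."""
    i = (i - 1) % Nx + 1
    j = (j - 1) % Ny + 1
    if ell == 0:   # the g-tilde variable on the vacuum side
        return (k - 1)*Nx*Ny + (j - 1)*Nx + i
    return Nx*Ny*Nv + (ell - 1)*Nx*Ny*Ns + (k - 1)*Nx*Ny + (j - 1)*Nx + i

# Every (i, j, k, ell) gets a distinct number in 1..Nx*Ny*(Nv + 3*Ns):
all_ids = [K(i, j, k, 0) for k in range(1, Nv+1)
           for j in range(1, Ny+1) for i in range(1, Nx+1)]
all_ids += [K(i, j, k, l) for l in (1, 2, 3) for k in range(1, Ns+1)
            for j in range(1, Ny+1) for i in range(1, Nx+1)]
print(sorted(all_ids) == list(range(1, Nx*Ny*(Nv + 3*Ns) + 1)))  # -> True
```

The wrap reproduces the rule in the text: i = 0 is replaced by N_ρ and i = N_ρ + 1 by 1, and similarly for j.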
The discretization methods for mixed partial derivatives are the same here as for the two-dimensional code. For the matrix values, we have: MK(i,j,k,0),K(i±1,j,k,0) = 1 h2ρ MK(i,j,k,0),K(i,j±,k,0) = 1 h2σ MK(i,j,k,0),K(i,j,k±1,0) = ±ξ(6)i,j,k,0 2hτ MK(i,j,k,0),K(i,j,k,0) = −2 h2ρ − 2 h2σ − 2ξ (7) i,j,k,0 h2τ MK(i,j,k,0),K(i+1,j,k+1,0) = ξ (4) i,j,k,0 4hρhτ MK(i,j,k,0),K(i−1,j,k−1,0) = ξ (4) i,j,k,0 4hρhτ 79 2.4. Finite Difference Program for General Sinusoidal Surface MK(i,j,k,0),K(i+1,j,k−1,0) = −ξ(4)i,j,k,0 4hρhτ MK(i,j,k,0),K(i−1,j,k+1,0) = −ξ(4)i,j,k,0 4hρhτ MK(i,j,k,0),K(i,j+1,k+1,0) = ξ (5) i,j,k,0 4hσhτ MK(i,j,k,0),K(i,j−1,k−1,0) = ξ (5) i,j,k,0 4hσhτ MK(i,j,k,0),K(i,j+1,k−1,0) = −ξ(5)i,j,k,0 4hσhτ MK(i,j,k,0),K(i,j−1,k+1,0) = −ξ(5)i,j,k,0 4hσhτ Superconductor Laplacian Here, k = 2, ..., Ns − 1 and ` = 1, 2, 3. This is identical to the discretization of the vacuum Laplacian above except that the 0 is replaced by ` = 1, 2, 3 and the slight change in now having and extra −1 in these entries: MK(i,j,k,0),K(i,j,k,0) = −2 h2ρ − 2 h2σ − 2ξ (7) i,j,k,0 h2τ − 1. Zero Divergence on the Interface This is the scalar equation corre- sponding to the g̃-ghost points. We have k = Nv and ` = 0. We chose to impose zero divergence at the g̃−ghost points. These points, however, are defined between the b−points. To resolve this issue, we define the divergence at ghost points by taking averages. Consider- ing each spatial direction x̂, ŷ, ẑ individually, every ghost point is surrounded 80 2.4. Finite Difference Program for General Sinusoidal Surface by four edges along which a derivative can be taken. Taking the average of these four discrete derivatives gives an approximation to the derivative (in one of the directions) at the ghost point. 
For example, to compute the z− derivative of b1 at (ρi, σj , τNv , 0) we would compute the average of the discrete derivatives ξ (3) i,j,3/2,1 hτ (uK(i,j,2,1) − uK(i,j,1,1)), ξ (3) i,j,3/2,0 hτ (uK(i−1,j,2,1)−uK(i−1,j,1,1)), ξ (3) i,j,3/2,0 hτ (uK(i,j−1,2,1)−uK(i,j−1,1,1)), ξ (3) i,j,3/2,0 hτ (uK(i−1,j−1,2,1) − uK(i−1,j−1,1,1)) where ξ(3)i,j,3/2,1 is defined by the av- erage 12(ξ (3) i,j,1,1 + ξ (3) i,j,2,1). Referring back to the figure 2.26 should clarify. The divergence is zero and we are looking at the equations for the vacuum ghost points. Therefore we set fK(i,j,Nv ,0) = 0. The entries of M corresponding to the terms in ∂xb1 are shown below and the other terms are similar. For the matrix M, we impose: M(K(i, j,Nv, 0),K(i, j, 2, 1)) = 1 4hρ + ξ (1) i,j,3/2,1 4hτ M(K(i, j,Nv, 0),K(i, j, 1, 1)) = −1 4hρ + ξ (1) i,j,3/2,1 4hτ M(K(i, j,Nv, 0),K(i− 1, j, 2, 1)) = 14hρ + ξ (1) i−1,j,3/2,1 4hτ M(K(i, j,Nv, 0),K(i− 1, j, 1, 1)) = −14hρ + ξ (1) i−1,j,3/2,1 4hτ M(K(i, j,Nv, 0),K(i, j − 1, 2, 1)) = 14hρ + ξ (1) i,j−1,3/2,1 4hτ 81 2.4. Finite Difference Program for General Sinusoidal Surface M(K(i, j,Nv, 0),K(i, j − 1, 1, 1)) = −14hρ + ξ (1) i,j−1,3/2,1 4hτ M(K(i, j,Nv, 0),K(i− 1, j − 1, 2, 1)) = 14hρ + ξ (1) i−1,j−1,3/2,1 4hτ M(K(i, j,Nv, 0),K(i− 1, j − 1, 1, 1)) = −14hρ + ξ (1) i−1,j−1,3/2,1 4hτ where the factor of 14 comes from taking averages. The Gradient Condition This corresponds to k = 1 and ` = 1, 2, 3. The condition is that b = ∇g̃ + (1, 0, 0)T or that ∇g̃ − b = (−1, 0, 0)T . These are equations for b on the interface. Therefore we set fK(i,j,1,1) = −1 and fK(i,j,1,2) = fK(i,j,1,3) = 0. As with the divergence, we need to take averages in finding the deriva- tives. For an illustration, we will show the equations for the third component of b. 
They correspond to setting the entries of M as follows: MK(i,j,1,3),K(i,j,Nv ,3) = ξ (3) i,j,Nv−1/2,0 4hτ MK(i,j,1,3),K(i,j,Nv−1,3) = −ξ(3)i,j,Nv−1/2,0 4hτ MK(i,j,1,3),K(i−1,j,Nv ,3) = ξ (3) i−1,j,Nv−1/2,0 4hτ MK(i,j,1,3),K(i−1,j,Nv−1,3) = −ξ(3)i−1,j,Nv−1/2,0 4hτ 82 2.4. Finite Difference Program for General Sinusoidal Surface MK(i,j,1,3),K(i,j−1,Nv ,3) = ξ (3) i,j−1,Nv−1/2,0 4hτ MK(i,j,1,3),K(i,j−1,Nv−1,3) = −ξ(3)i,j−1,Nv−1/2,0 4hτ MK(i,j,1,3),K(i−1,j−1,Nv ,3) = ξ (3) i−1,j−1,Nv−1/2,0 4hτ MK(i,j,1,3),K(i−1,j−1,Nv−1,3) = −ξ(3)i−1,j−1,Nv−1/2,0 4hτ MK(i,j,1,3),K(i,j,1,3) = −1. We similarly use the notation ξ(3)i,j,Nv−1/2,0 = 1 2(ξ (3) i,j,Nv−1,0 + ξ (3) i,j,Nv ,0 ). Finding the Results This averaging actually brings about a null vector for M when N is even. As a result, the program is restricted to odd values of N . We explain how this null vector comes about in appendix E for the flat geometry. The same ill-conditioning also occurs for rough geometries. Given the system Mu = f we found u = M−1f (by implementing a direct solve routine for sparse matrices in Matlab). This gives us a magnetic field on the superconducting side, but only gives us g̃ on the vacuum side. To get b on the vacuum side, we took the discretized gradient of g̃ just as above, thereby finding the magnetic field at values at the centres of cubes with g̃ values. We also added 1 to the first component. Where necessary, an average field magnitude at depth tk can be found by averaging the field magnitude over i and j with k fixed. 83 2.4. Finite Difference Program for General Sinusoidal Surface 2.4.2 Validation To test that the code results are accurate, we compare the results to the asymptotically computed solutions. As our problem of interest is a three-dimensional vector problem, the simulation is extremely difficult to run, due to memory constraints. This limitation stems from the use of a direct-solver for the linear system. 
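For context on the direct-solver bottleneck mentioned below: an iterative Krylov method trades the memory cost of a sparse factorization for repeated matrix-vector products. A sketch on a small one-dimensional analogue of the discretized operator (scipy assumed; this is not the thesis's Matlab code):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D model system (I - D2) u = f, structurally analogous to the thesis's
# discretized Helmholtz-type operator; sizes are illustrative.
n, h = 2000, 0.01
main = np.full(n, 1 + 2/h**2)
off = np.full(n - 1, -1/h**2)
A = sp.diags([off, main, off], [-1, 0, 1], format='csr')
f = np.ones(n)

u_direct = spla.spsolve(A, f)      # sparse LU: fast here, memory-hungry in 3D
u_iter, info = spla.cg(A, f)       # CG applies since A is symmetric positive definite
print(info, np.abs(u_direct - u_iter).max())
```

In one dimension the two agree to the solver tolerance; in the three-dimensional problem an iterative solver (or a preconditioned one) is the standard way to push past the memory limit of direct factorization.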
As a result, in the fully three-dimensional problem, the resulting system of equations can only be solved with values of N up to about 21. When either ω_x or ω_y is zero, we can take N much larger, but still nowhere near the values possible in the two-dimensional code. In all computations, we tried to take N as large as possible. In terms of M_p and M_m, we generally choose M_p = 9.25 and set M_m = M_p/√(ω_x^2 + ω_y^2). This choice of M_m ensures that even if ε were as large as 1, the decay of the first-order perturbation (which, in the asymptotic case, decays with exp(√(ω_x^2 + ω_y^2) t)) would be of the same order as the error at M_p.

The most basic check is how the program behaves with a flat interface. For ε = 0, the field magnitude should be identically 1 on the vacuum side and decay exponentially with e^{−t} on the superconducting side. Taking N = 11 we see the plot in figure 2.27.

Comparison with Asymptotic Solutions

We now compare the numerical approximations to the asymptotic solution for different values of ε.

Figure 2.27: Visual confirmation that the three-dimensional program correctly handles the flat interface. N = 11 and ω_x = ω_y = π.

Our first-order asymptotic solution satisfies b_asy,1 = b^{(0)} + εb^{(1)} = b_ex + O(ε^2), where b^{(0)} and b^{(1)} are the zeroth- and first-order terms of the asymptotic expansion and b_ex is the exact solution. Our numerical solution b_num satisfies b_num = b_ex + O(h_τ^2) + O(λ), where λ is the far-field error. Taking the difference of b_asy,1 and b_num, dividing by ε max_i ||b^{(1)}_i||_∞, and taking norms yields an error

E = ||(b_num − b_asy,1)/(ε max_i ||b^{(1)}_i||_∞)||_∞ = O(ε) + O(h_τ^2/ε) + O(λ/ε).

Provided h_τ^2 is small enough (by taking N large) and λ is small, this should be a linear function of ε. To verify the linear behaviour, we plot E vs ε to see that E varies roughly
Figure 2.28: Left: resolution of the first-order asymptotic term for the first geometry with ω = π. Centre: resolution of the first-order asymptotic term for the second geometry with ω = π. Right: resolution of the first-order asymptotic term for the third geometry with (ωx, ωy) = (π, π).

linearly with ε and that E ↓ 0 as ε ↓ 0. From this, we see the program is remarkably accurate. Even for ε ≈ 0.3 the program can discern the first-order term to within 15%. See figure 2.28.

As in section 2.3.2, the second-order asymptotic solution b_asy,2 should be near the numerical solution to within an error of order O(hτ²) + O(ε³) + O(λ). Being able to test this higher order of convergence is extremely delicate (due to memory constraints). When ωx, ωy ≠ 0, we can only take N as large as 21. Another difficulty is that the magnetic field is essentially comprised of six parts: three components on each of the two sides of the interface. Each part has its own terms and could presumably have optimal ranges over which the O(ε³) term could be detected by using second-order asymptotics.

We begin this test by taking the three-dimensional code with the conditions (ωx, ωy) = (π, 0) so that we can test how it converges with respect to the results of section 2.2.4. This also means we can take N much larger (in this case we take N = 41). We allow ε to range from 0.05 to 0.40 with increments of 0.05. From this we obtain the asymptotic errors for each side of the interface, E = ||b_num,i − b_asy,2,i||∞ (i = 1, 2, 3), and we can perform a least-squares regression of the form log E = c1 log ε + c0. We do our fitting on the range of data that seems to best detect the O(ε³) behaviour. We tabulate the results for c1 in table 2.1.

Table 2.1: Estimated orders of convergence for (ωx, ωy) = (π, 0).

Field part           Range of ε     Slope
b1 vacuum            [0.1, 0.25]    2.77
b1 superconductor    [0.2, 0.4]     3.08
b3 vacuum            [0.05, 0.2]    2.71
b3 superconductor    [0.1, 0.25]    2.86

Seeing these very convincing results, we also test the case where (ωx, ωy) = (π, π), keeping the same range of ε values as above. Again, we perform a least-squares linear fit to the best-behaved subset of the data. Results are tabulated in table 2.2.

Table 2.2: Estimated orders of convergence for (ωx, ωy) = (π, π).

Field part           Range of ε     Slope
b1 vacuum            [0.2, 0.4]     2.94
b1 superconductor    [0.15, 0.4]    3.03
b2 vacuum            [0.25, 0.4]    2.98
b2 superconductor    [0.15, 0.25]   2.71
b3 vacuum            [0.15, 0.35]   2.74
b3 superconductor    [0.1, 0.2]     2.50

These results show a convincing agreement between the computational and asymptotic results in the three-dimensional setting, which validates both approaches.

2.4.3 Results

Choosing N, we show the plots of the field profile. We begin by selecting (ωx, ωy) = (2π, 2π) with ε = 0.05 to see that the profiles (shown in figure 2.29) are similar to those of the asymptotics. We also have the freedom to choose much larger values of ε and the ω's.

To examine how the profile changes when the frequencies are out of proportion, we fix ε = 0.1 and select (ωx, ωy) = (π, 8π) and (ωx, ωy) = (8π, π). The results are plotted together along with the flat-geometry solution in figure 2.30. We see the mean field magnitudes in both perturbed geometries exceed that of the flat geometry after a certain depth. With ωx ≪ ωy we see a similar shape to that of the experimental results (see figure 2.2), although the dead layer is considerably smaller. With ωx ≫ ωy we see a shape that resembles the experimental plot quite closely, but the decay in magnitude begins even before the superconducting interface. The experimental results in figure 2.30 only measure the field profile in the superconductor.
However, it is possible to coat the superconductors with silver to measure the field outside [8].

From this, we observe that by rotating the applied magnetic field (assuming that we have a single value of λ that does not depend on orientation), the field profile changes purely because the roughness differs in the two directions. However, the amount by which the field profile differs from the flat geometry is not large enough to account for the experimental results unless ε were larger, i.e. the surface is rougher than experimentalists believe.

As a final plot for this section, we show how well the asymptotics and numerics agree. With ε = 0.1 and (ωx, ωy) = (8π, 8π), their predicted mean field profiles are plotted together in figure 2.31.

Figure 2.29: Top left: a profile of b1 from peak and valley. Top right: a profile of b2 with (x, y) = (−0.27, −0.27). Bottom left: a profile of b3 with (x, y) = (−0.27, −0.50). Bottom right: difference between the average field profile in the perturbed geometry and the flat geometry. All figures computed with ε = 0.05 and (ωx, ωy) = (2π, 2π).

Figure 2.30: The field profiles for different roughness orientations with ε = 0.1.

2.5 Conclusions of Superconductor Modeling

Recent experiments have suggested the need for more sophisticated models of a superconductor. Our work examined the notion of surface roughness and the effects this would have on the magnetic field profile in an experimental setting. Through analytic and numerical techniques, we have shown that surface roughness does indeed play a role in perturbing the field magnitude. Not only does it perturb the field, but the orientation of the geometry itself plays a big role in the nature of the perturbation. Here we briefly summarize the most interesting of these results.

Figure 2.31: The asymptotic and numeric mean field profiles.
The two are quite close given the large ε and ω values.

2.5.1 More Complicated Field Behaviour

The standard model with a flat superconductor and parallel applied magnetic field yields a constant solution within the vacuum region and a field pointing only in one direction, decaying exponentially once within the superconductor.

Our results show that in general the field magnitude would not be constant in the vacuum, and that once within the superconducting region the magnitude could decay differently than a purely exponential decay: decays with multiple length scales, and field components with sinusoidal variations in the longitudinal directions. The individual components can rise above or fall below their value at −∞. In addition, the field components themselves become mixed, and the field is no longer pointing in a single direction throughout the experimental region.

2.5.2 Orientation of the Roughness Affects the Profile

For a given roughness that is not equal in both spatial dimensions, depending on whether the applied field is pointing in a rough or smooth direction, the amount by which it decays in the vacuum region could be significant or nearly negligible.

If the experiments were precise enough, this would mean that if a superconductor did have a rough surface (which was not equal in both spatial directions), then by changing the field orientation by 90° (assuming the penetration depth remained nearly the same), the field profile should be different. Having to take into account the anisotropy in the sample would make this more difficult.

2.5.3 Effective Dead Layer

Based on an asymptotic regime, the effective dead layer (a length over which we can think of the field magnitude remaining constant beyond the interface) is of order ε². Recall, ε is the scaled size of the surface roughness defined in section 2.2.1.
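To make this concrete, one hedged way to phrase the effective dead layer d (our restatement of the definition above, not a formula taken from the thesis) is as the offset in an exponential fit to the mean field magnitude inside the superconductor:

mean |b|(t) ≈ e^(−(t − d)) for t beyond the interface, with d = O(ε²),

so a positive d reads as a layer of thickness d over which the field has effectively not yet begun to decay, while a negative d reads as decay setting in early.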
The actual dependence on the spatial frequencies is quite complicated, and in some cases the effective dead layer is actually negative. Experimentally the dead layer appears to be on the order of 0.05. Our results show that a surface roughness of 0.05 cannot account for dead layers of the size shown in figure 2.30. However, results for larger roughnesses are qualitatively similar to the experimental profiles.

Chapter 3

Fuel Cell Modeling

3.1 Modeling of Gas Diffusion in Fuel Cells

Hydrogen fuel cells hold a promising future for producing clean energy. If Hydrogen and Oxygen are used in the fuel cell, they react electrochemically, producing water (as the only by-product) and energy. For this to take place, these gases must diffuse from channels to reaction sites. Being able to set ideal running conditions for hydrogen fuel cells to maximize efficiency requires an in-depth understanding of the efficiency of the gas diffusion, among other things. Running experiments can be costly, and in recent years a lot of emphasis has been placed on modeling fuel cells numerically. The question then arises as to how to model this gas diffusion.

Fickian diffusion, wherein a species diffuses in proportion to the gradient of its mole fraction, has traditionally been chosen. However, it is believed that a more sophisticated (and more complex) formulation known as Maxwell-Stefan diffusion yields more physically accurate results. This diffusion model takes into account the interspecies competition in diffusion. Although the theory behind these models is different, they are used to model the same phenomena.

Due to the interaction terms and mathematical subtleties, it is not clear how to best pose a Maxwell-Stefan diffusion problem, and exactly how much it differs from Fickian diffusion in situations relevant to fuel cells is unknown.
These issues are explored in this chapter. We begin this portion of the paper with a brief overview of the relevant Physics involved in PEM fuel cells and proceed to develop our model from there.

3.1.1 PEM Fuel Cell Overview

A Polymer Electrolyte Membrane (PEM) fuel cell generates power through a reaction between Hydrogen (H2) and Oxygen (O2) producing water (H2O). The simplest cartoon is that the two gases enter through two separate channels and diffuse through a concentration gradient to arrive at catalyst sites that facilitate the reaction [9]. See figure 3.1.1.

Hydrogen gas present in the Hydrogen flow channel enters the gas diffusion layer (GDL) anode, where it diffuses through a concentration gradient until it reaches a catalyst layer (typically comprised of Platinum or a Platinum alloy). Here, it oxidizes via the reaction H2 → 2H+ + 2e−. The electrons become part of the electric current drawn from the fuel cell, and the protons diffuse through the polymer electrolyte membrane (PEM) until reaching the cathode Oxygen catalyst layer.

The other reacting gas is Oxygen, which diffuses from the Oxygen flow channel through the GDL cathode. Once it reaches the catalyst, an Oxygen molecule combines with four protons (from the Hydrogen side) and four electrons (generated through the current) to produce water under the reaction O2 + 4e− + 4H+ → 2H2O. This reaction is also enhanced by Platinum or a Platinum alloy.

Figure 3.1: A cross-section of a PEM fuel cell. Hydrogen and Oxygen diffuse primarily in the XY-plane, but there is diffusion in the Z-direction as well.

The properties of the GDL have a large impact on the diffusion. The GDL is not an open channel. Instead, it is a thin layer of teflonated carbon fibre paper, which can be considered a porous medium, through which the gases are conducted. The efficiency of this transport depends on how "open" these pathways are.
These pathways make for a greater overall distance for the molecules to travel. They cannot travel straight; instead, they must pass through tortuous pathways to reach the catalyst site.

3.1.2 The Model

In our work here we will look at the GDL of the cathode. We will model the concentration profile of the gases and explore how the Fick and Maxwell-Stefan diffusion laws differ. We will consider the presence of three gas species, O2, H2Ovap (water vapor), and N2, in the GDL at steady-state transport. We approximate the system as isothermal (the temperature is assumed to be constant). We also approximate the system as one-dimensional: the gases have prescribed concentrations at the channels and travel in the X-direction to reach the catalyst layer, where the fluxes are known.

We consider the different concentrations of the gas species Ci, i = 1, 2, 3, where 1 indicates O2, 2 indicates H2Ovap, and 3 indicates N2. Together we consider the vector of concentrations C = (C1, C2, C3)^T. The total molar concentration is the sum of the concentrations of the individual species, C1 + C2 + C3, which we will denote by ||C||. Unless otherwise stated, the norm will always represent the L1-norm of a vector. The mass density ρi of a species i is found by multiplying its molar mass Mi by its molar concentration Ci. The temperature (assumed constant) will be denoted by T, the total pressure by P, and the mass-averaged velocity by U.

Important physical constants in this setting are the ideal gas constant R, the porosity ε, the viscosity µ, and the permeability κ. In Fickian diffusion, D will represent the diffusion coefficient, and in the Maxwell-Stefan formulation, Dij, (i, j) ∈ {1, 2, 3}², will be the binary diffusivities.

The important fluxes are the molar flux with respect to the molar-averaged velocity, J*, and the molar flux with respect to the mass-averaged velocity, J.
Both J and J* are in general rank-2 tensors (in one dimension they are vectors).

In our model, the channel concentrations are specified at position X = 0, so that C(0) = C* is known. A known current is drawn from the reactions, and based upon this current, under standard operating conditions, the fluxes Ni of the individual species at the catalyst layer (at X = L) are known. We will now determine our parameters of interest.

3.1.3 Standard Operating Conditions of a Fuel Cell

Here, based upon standard operating conditions (see references [9], [10], [11], and [12]), we compute our channel concentrations and fluxes. The main equation we need here is the ideal gas law ((3.2) in the next section).

We take the pressure within the fuel cell channels to be 3 atm (approximately 3 × 10^5 Pa). The temperature is taken to be 350 K. Based on this we can compute the total molar concentration of the gases as approximately 100 mol m^-3. These are typical conditions for fuel cells being developed for the automotive sector [11].

At this same temperature, the saturation pressure of water vapor is 3.8 × 10^4 Pa. This corresponds to a saturation concentration Csat of 13 mol m^-3. If we assume 75% humidity, then within the fuel cell the molar density of water vapor is approximately 10 mol m^-3. We then assume the remaining 90 mol m^-3 are comprised of Oxygen at 21% and Nitrogen at 79%. Our channel concentrations are: C1 = 19 mol m^-3, C2 = 10 mol m^-3, and C3 = 71 mol m^-3.

A typical current density drawn from a fuel cell is 1 A cm^-2, which is 10^4 C s^-1 m^-2. Based on the reaction in the GDL, we see that for every four electrons consumed, one Oxygen molecule is consumed. Taking the current

Table 3.1: Physical constants in our gas diffusion model.
Constant   Value
C1       19 mol m^-3
C2       10 mol m^-3
C3       71 mol m^-3
D        8 × 10^-6 m^2 s^-1
D1,2     1.24 × 10^-5 m^2 s^-1
D1,3     1.04 × 10^-5 m^2 s^-1
D2,3     1.23 × 10^-5 m^2 s^-1
ε        0.74
F        9.6 × 10^4 C mol^-1
κ        10^-12 m^2
L        10^-4 m
M1       32 g mol^-1
M2       18 g mol^-1
M3       28 g mol^-1
N1       2.6 × 10^-2 mol m^-2 s^-1
N2       −5.2 × 10^-2 mol m^-2 s^-1
N3       0 mol m^-2 s^-1
µ        2.24 × 10^-5 kg m^-1 s^-1
R        8.314 kg m^2 mol^-1 s^-2 K^-1
T        350 K

density and dividing by 4F (where F is Faraday's constant, 9.6 × 10^4 C mol^-1), we find the Oxygen flux at the catalyst layer to be N1 = 0.026 mol m^-2 s^-1. The water vapor flux should be in the opposite direction and have twice the magnitude, so N2 = −0.052 mol m^-2 s^-1. Nitrogen does not react, so its flux is zero.

Our particular choices of parameters are summarized in table 3.1. Aside from the constants we used or obtained above, we obtained the other constants from references [9] and [11]. In the next section we will review the relevant gas diffusion equations.

3.1.4 Diffusion Equations

Here we give the formulations of Fick and Maxwell-Stefan diffusion. Although our model is one-dimensional, the equations here are given in full generality. For further reference, we suggest reference [13].

Einstein summation convention is used throughout, in that repeated indices in products are summed over (aij bj = Σj aij bj, for example).

Basic Equations

In both the Fick and Maxwell-Stefan settings, we will assume Darcy's law, which applies to porous media [9], and the ideal gas law. Respectively, they are stated below:

U = −(κ/µ) ∇P,    (3.1)
P = ||C|| R T.    (3.2)

These equations can be combined nicely by substituting (3.2) into (3.1) to yield a modified Darcy law:

U = −(κRT/µ) ∇||C||.    (3.3)

In later sections we will define the constant σ = κRT/µ.

The mass-averaged velocity is given by U = ρi Vi/||ρ||. The molar-averaged velocity is given by U* = Ci Vi/||C||. Here the Vi are the velocities of the individual species.
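Before specializing to each diffusion law, the operating-condition arithmetic of section 3.1.3 can be cross-checked in a few lines. This is our own sketch (variable names are ours; the text's rounding of the totals is noted in comments):

```python
# Cross-check of the operating-condition arithmetic in section 3.1.3.
# All inputs are values stated in the text.

R = 8.314       # ideal gas constant, J mol^-1 K^-1
T = 350.0       # temperature, K
P = 3.0e5       # channel pressure, Pa (about 3 atm)
P_sat = 3.8e4   # saturation pressure of water vapour at 350 K, Pa
F = 9.6e4       # Faraday's constant, C mol^-1
I = 1.0e4       # current density, C s^-1 m^-2 (1 A cm^-2)

C_total = P / (R * T)        # ideal gas law: ~103 mol m^-3 (text rounds to 100)
C_sat = P_sat / (R * T)      # saturation concentration: ~13 mol m^-3
C_water = 0.75 * C_sat       # 75% humidity: ~10 mol m^-3
C_dry = 90.0                 # text's rounded remainder, 100 - 10
C_O2, C_N2 = 0.21 * C_dry, 0.79 * C_dry   # ~19 and ~71 mol m^-3

N_O2 = I / (4 * F)           # one O2 per four electrons: ~0.026 mol m^-2 s^-1
N_H2O = -2 * N_O2            # opposite direction, twice the magnitude
```

Each computed value lands on the figure quoted in table 3.1 once rounded as the text rounds.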
Fickian Diffusion

Fick's law describes diffusive fluxes. The formulation we adopt is the same as that implemented by Berg et al. [11]. For each species of gas, we have:

∂t Ci + ∇ • (Ci U − D ||C|| ∇(Ci/||C||)) = 0.    (3.4)

The term Ci U is a convective term: the gas species move with the mass-averaged velocity. The term D ||C|| ∇(Ci/||C||) is a diffusive term, where diffusion is assumed to be driven by the gradient of the mole fractions, Ci/||C||.

Fickian diffusion cannot actually be reconciled with Darcy's law as stated [14]. By summing (3.4) over i we find

∂t ||C|| + ∇ • (||C|| U − D ||C|| ∇(||C||/||C||)) = 0,

and the latter term inside the divergence is actually zero (∇(1) = 0). Therefore we see that ∂t ||C|| = −∇ • (||C|| U), which only makes sense if U is the molar-averaged velocity U* (different from how Darcy's law is posed, where U is the mass-averaged velocity). In spite of this, many researchers continue to use Fickian diffusion in conjunction with Darcy's law [9]. As we see next, Maxwell-Stefan diffusion is compatible with Darcy's law.

Maxwell-Stefan Diffusion

The Maxwell-Stefan diffusion law is expressed below:

∂t Ci + ∇ • (Ci U + Ji) = 0,    (3.5)

where we define J through a series of transformations. To start,

Aij J̃j = ∇(Ci/||C||)    (3.6)

gives a relationship between the molar diffusive flux J̃ (with respect to an arbitrary velocity) and the gradients of the mole fractions, where

Aij(C) = (1/||C||²) × { Σ_{ℓ≠i} Cℓ/Diℓ  if i = j,   −Ci/Dij  if i ≠ j }.

The Dij for (i, j) ∈ {1, 2, 3}² are the binary diffusivities. We will denote D̂ = max_{ij} Dij.

The matrix A is not invertible; the vector (C1, C2, C3)^T spans its null space. To find the general solution of (3.6), we will pick a base point (J*1, J*2, J*3) by using a projection and replacing one of the rows of A by a multiple of (1, 1, 1), and then add an arbitrary multiple ζ(C1, C2, C3)^T to the solution.
We find:

(J̃1, J̃2, J̃3)^T = (PA + B)^-1 P ∇(C/||C||) + ζ (C1, C2, C3)^T,

where the first term is the particular solution (J*1, J*2, −J*1 − J*2)^T and

P =
[ 1  0  0
  0  1  0
  0  0  0 ],
B = (1/(D̂ ||C||)) ×
[ 0  0  0
  0  0  0
  1  1  1 ].

The factor of 1/(D̂ ||C||) within B may seem strange, but it is needed to keep consistency with the dimension and scaling of A. The usefulness of this will become apparent in the asymptotic and numerical work to come (section 3.2.1). Also note that the −J*1 − J*2 in the third component of the particular solution comes from our requirement that J*1 + J*2 + J*3 = 0 (i.e. that J* is the molar flux with respect to the molar-averaged velocity).

We have found the molar diffusive fluxes (which contain an arbitrary parameter). We would like (3.5) to involve a mass-averaged velocity [14]. In this case, if we multiply (3.5) by Mi and add, we find:

∂t ρ + ∇ • (ρ U + Mi Ji) = 0,

where ρ is the total density, Ci Mi (summed over i). For a mass-averaged U we require that Mi Ji = 0. For a particular choice of ζ, the J̃ we found is the molar flux with respect to the mass-averaged velocity, J. So we now find ζ by solving

M1 (J*1 + C1 ζ) + M2 (J*2 + C2 ζ) + M3 (−J*1 − J*2 + C3 ζ) = 0.

We obtain ζ = ((M3 − M1) J*1 + (M3 − M2) J*2)/ρ. Using this ζ allows us to write:

(J1, J2, J3)^T = S (J*1, J*2, J*3)^T,

where

S =
[ S11      S12      0
  S21      S22      0
  −M1/M3   −M2/M3   0 ]

and Sij = δij − (Ci Mj/ρ)(1 − M3/Mj), as noted by Stockie et al. [9]. Overall we can find J as shown below:

(J1, J2, J3)^T = S (PA + B)^-1 P ∇(C/||C||),

and we denote Q = S (PA + B)^-1 P. The Maxwell-Stefan system is now expressed as:

∂t Ci + ∇ • (Ci U + Qij(C) ∇(Cj/||C||)) = 0.    (3.7)

3.2 Asymptotic Formulation

Our analysis will be in a one-dimensional setting, so that ∇(•) = •′ with ′ being the spatial derivative. We begin by nondimensionalizing the systems of equations as we explained in section 1.2.
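Before nondimensionalizing, the flux construction of section 3.1.4 can be sanity-checked numerically. The following sketch uses the values of table 3.1, an arbitrary test gradient (chosen to sum to zero, as a mole-fraction gradient must), and our own variable names; it verifies that the particular solution sums to zero and that the ζ-shifted flux is mass-consistent:

```python
# Numerical sanity check of the Maxwell-Stefan flux construction above.
import numpy as np

C = np.array([19.0, 10.0, 71.0])        # channel concentrations, mol m^-3
Mw = np.array([32.0, 18.0, 28.0])       # molar masses M1, M2, M3, g mol^-1
D = np.zeros((3, 3))                    # binary diffusivities D_ij, m^2 s^-1
for (i, j), d in {(0, 1): 1.24e-5, (0, 2): 1.04e-5, (1, 2): 1.23e-5}.items():
    D[i, j] = D[j, i] = d
Dhat = D.max()
nC = C.sum()                            # ||C||

# A_ij = (1/||C||^2) * ( sum_{l != i} C_l/D_il  if i == j,  -C_i/D_ij  otherwise )
A = np.array([[sum(C[l] / D[i, l] for l in range(3) if l != i) if i == j
               else -C[i] / D[i, j] for j in range(3)] for i in range(3)]) / nC**2

P = np.diag([1.0, 1.0, 0.0])            # projection dropping the third row
B = np.zeros((3, 3)); B[2, :] = 1.0 / (Dhat * nC)

grad = np.array([0.5, -1.0, 0.5])       # arbitrary test gradient of mole fractions
J_star = np.linalg.solve(P @ A + B, P @ grad)   # particular solution; sums to 0

rho = (Mw * C).sum()                    # total mass density
zeta = ((Mw[2] - Mw[0]) * J_star[0] + (Mw[2] - Mw[1]) * J_star[1]) / rho
J = J_star + zeta * C                   # flux w.r.t. the mass-averaged velocity
# By construction, Mw @ J == 0 (mass consistency) and J_star sums to zero.
```

The third row of (PA + B) enforces J*1 + J*2 + J*3 = 0, and the choice of ζ enforces Mi Ji = 0, exactly as in the derivation.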
3.2.1 Nondimensionalization

Fickian Diffusion

Let our spatial coordinate be X. By combining (3.3) and (3.4), we obtain the equation

∂t Ci + (−σ Ci ||C||′ − D ||C|| (Ci/||C||)′)′ = 0,    (3.8)

which, in steady state (i.e. with ∂t C = 0) and after integrating with respect to X, yields

−σ Ci ||C||′ − D ||C|| (Ci/||C||)′ = Ni,    (3.9)

where the N's are fluxes. Defining the barred symbols as dimensional quantities with a representative scale as in section 1.2, we write X = X̄x, C = C̄c, and N = N̄n. Substituting this into the equation and dividing by N̄ yields

−(σC̄²/(X̄N̄)) ci ||c||′ − (DC̄/(X̄N̄)) ||c|| (ci/||c||)′ = n,    (3.10)

where ′ now represents the derivative with respect to x. By equating the coefficients of the terms on the left side of (3.10), we find σC̄²/(X̄N̄) = DC̄/(X̄N̄), which can be satisfied by setting C̄ = D/σ. We will choose the length scale X̄ = L, the length of the diffusion layer. Requiring that the left side of (3.10) be dimensionless means that DC̄/(LN̄) = 1, which can be satisfied by selecting N̄ = D²/(σL).

With these substitutions, we find

−ci ||c||′ − ||c|| (ci/||c||)′ = n.

We define γ = 1/||c*|| (where c* = c(0)) and rescale with c̃ = γc. We also define the number ε = γ²||n|| and the vector ñ = n/||n||, so that γ²n = εñ with ||ñ|| = 1. Writing (3.10) with the tilde variables and removing the tildes yields

−ci ||c||′ − γ ||c|| (ci/||c||)′ = εni    (3.11)

for x ∈ [0, 1] and with ||c(0)|| = ||c*|| = 1. Note that ε is not the porosity here. Our problem is displayed in figure 3.2.

Figure 3.2: Our one-dimensional model of the gas diffusion layer.

It turns out ε is very small, and so is γ, although γ ≫ ε. In this case, the Fickian term is a small perturbation to the Darcy term (which alone could not be solved). Our parameters are given in table 3.2.

Table 3.2: Nondimensionalized and rescaled parameters and variables for Fick diffusion.
Parameter   Value
c*1        0.19
c*2        0.10
c*3        0.71
||c*||     1
n1         0.333
n2         −0.667
n3         0
γ          4.56 × 10^-4
ε          4.44 × 10^-6

Maxwell-Stefan Diffusion

Although the problems have very different structures, the nondimensionalization procedure is very similar. We begin by combining (3.1) and (3.5) to obtain

∂t Ci + (−σ Ci ||C||′ + Qij(C) (Cj/||C||)′)′ = 0,    (3.12)

which becomes

−σ Ci ||C||′ + Qij (Cj/||C||)′ = Ni    (3.13)

in steady state. We again use the bar notation to refer to a dimensional parameter with a representative scale, and write C = C̄c, X = X̄x, and N = N̄n. Noting that (PA + B) ∝ (D̂||C||)^-1, so that (PA + B)^-1(C) ∝ D̂||C||, and applying the same techniques (with D̂ replacing D) that led to (3.10), we find

−ci ||c||′ + Qij(c) (cj/||c||)′ = ni.    (3.14)

Rescaling as in the Fickian case, we can write equation (3.14) as

−ci ||c||′ + γ Qij(c) (cj/||c||)′ = εni,    (3.15)

where ||c(0)|| = ||c*|| = 1 and where we consider x ∈ [0, 1]. Again, ε and γ are small with γ ≫ ε. Our parameters are given in table 3.3. Figure 3.2 applies here as well.

Table 3.3: Nondimensionalized and rescaled parameters and variables for Maxwell-Stefan diffusion.

Parameter   Value
c*1        0.19
c*2        0.10
c*3        0.71
||c*||     1
n1         0.333
n2         −0.667
n3         0
γ          7.06 × 10^-4
ε          4.44 × 10^-6

3.3 Asymptotic Analysis

3.3.1 Asymptotic Analysis of Fick Diffusion

By expanding the derivatives of (3.11), we have

−ci ||c||′ − γ ||c|| (ci/||c||)′ = −ci (Σj c′j) − γ c′i + γ (ci/||c||)(Σj c′j) = εni.

It is possible to rearrange this in the form

M_F c′ = εn

for a matrix M_F. We see that −ci Σj c′j = Dij c′j, where Dij = −ci; that −γ c′i = −γ δij c′j; and that γ (ci/||c||)(Σj c′j) = F̃ij c′j, where F̃ij = γ ci/||c||. Thus, if we define M_F = D + F with Dij = −ci and Fij = γ (ci/||c|| − δij), then

M_F(c) c′ = εn.
(3.16)

Now we expand c = c(0) + εc(1) + O(ε²) and substitute this into (3.16) to get

M_F|_{c(0) + εc(1) + O(ε²)} (c(0) + εc(1) + O(ε²))′ = εn,

which yields an equation at each order of ε:

O(1):  M_F|_{c(0)} c(0)′ = 0,    (3.17a)
O(ε):  M_F|_{c(0)} c(1)′ + (∂M_F/∂ci |_{c*} c(1)i) c(0)′ = n.    (3.17b)

As M_F(c(0)) is invertible, we obtain c(0)′ = 0, so c(0) is a constant vector. Given the boundary conditions here, that c(0) = c*, we have c(0) = c*. This makes (3.17b) much simpler, because all the terms with c(0)′ are zero (as shown by Promislow et al. [10]). So the order-ε equation is simply M_F|_{c*} c(1)′ = n, which can be solved to find c(1)′:

c(1)′ = M_F|_{c*}^-1 n,

which is constant. As the boundary conditions are already satisfied with c(0), we need c(1)(0) = 0, and the first-order approximation to the concentration is:

c(x) = c* + εc(1)′ x + O(ε²).    (3.18)

Substituting x = 1 into (3.18), we obtain an approximation to the concentrations at the catalyst site. The most interesting information for us in computing the difference between the two diffusion models is the relative change in concentrations from the channel values. So we compute ri = (ci(1) − c*i)/c*i for the three species. The computations yield

r1 = −0.0203,  r2 = 0.0617,  r3 = −0.00325.

The signs of these quantities are generally what we would expect. As water vapor (component 2) is being produced at the catalyst layer, its concentration should be higher there than in the channel. Oxygen, the first component, which reacts at the catalyst layer to produce water, decreases in concentration as we move along the GDL from the channel. Interestingly enough, although Nitrogen does not react, because of the diffusion and mass conservation it still has a concentration gradient.

3.3.2 Asymptotic Analysis of Maxwell-Stefan

The analysis here is nearly identical to Fick. Instead of γ we have γQij.
By expanding the derivatives in (3.7) we can write:

M_MS c′ = εn    (3.19)

with M_MS = D + G, where Dij = −ci and Gij = γ Qik (−ck/||c||² + δkj/||c||).

By expanding c asymptotically with c = c(0) + εc(1) + O(ε²), the identical analysis holds as in the equations for Fickian diffusion. We again have that c(0) is a constant equalling c*, and c(1)′ = M_MS|_{c*}^-1 n, so that our first-order solution is identical to (3.18) but with the different value for c(1)′. We again compute the values ri = (ci(1) − c*i)/c*i and find this time:

r1 = −0.0140,  r2 = 0.0406,  r3 = −0.00196.

The signs are again what we expect, but the results are quite different from Fick's law: the magnitudes of these relative changes are smaller than in the Fickian model.

3.4 Numerical Analysis of Diffusion Models

Here we present some numerical computations used to verify the validity of the asymptotic results.

3.4.1 Discretizing the Diffusion Equations

We use (3.16) and (3.19) to express the derivative at each point as the inverse of a matrix (dependent upon the local concentration) times the fluxes. Given N, the number of grid points, we define h = 1/N and set xi = (i − 1)h for i = 1, ..., N + 1. We define ci to be the approximation to c at xi. We now use a modified Euler stepping scheme [15], beginning with c1 = c*. To get ci+1 from ci we define:

k1 = M|_{ci}^-1 εn,
k2 = M|_{ci + h k1}^-1 εn,

and

ci+1 = ci + (h/2)(k1 + k2).

Figure 3.3: Verifying second-order convergence with the Fick code.

The approximate concentration vector at x = 1 is cN+1. From here we can also approximate ri = (cN+1,i − c1,i)/c1,i.

3.4.2 Verification of Program Results

The modified Euler method is second-order accurate. To test the approximation, we compute the ri values at N = 10, 20, 40, 80 and N = 200. Given the simplicity of the problem, we take the result at N = 200 as the "exact" answer and compute the maximum magnitude of the error for each component of r for N = 10, 20, 40, 80.
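The stepping scheme of section 3.4.1 is compact enough to sketch in full. The following is our own Python rendering (the thesis code is in Matlab) of the Fickian case, using M_F from section 3.3.1 and the parameters of table 3.2; the placement of ε follows our reading of the rescaled system M_F(c) c′ = εn:

```python
# Modified-Euler (Heun) march for the Fickian system M_F(c) c' = eps * n,
# with nondimensional parameters from table 3.2. Names are ours.
import numpy as np

c_star = np.array([0.19, 0.10, 0.71])   # nondimensional channel concentrations
n = np.array([1/3, -2/3, 0.0])          # rescaled fluxes (n1, n2, n3)
gamma, eps = 4.56e-4, 4.44e-6

def MF(c):
    # (M_F)_ij = -c_i + gamma*(c_i/||c|| - delta_ij), as derived in section 3.3.1
    return np.outer(-c + gamma * c / c.sum(), np.ones(3)) - gamma * np.eye(3)

def march(N):
    h = 1.0 / N
    c = c_star.copy()
    for _ in range(N):                  # from x_1 = 0 up to x_{N+1} = 1
        k1 = np.linalg.solve(MF(c), eps * n)
        k2 = np.linalg.solve(MF(c + h * k1), eps * n)
        c = c + 0.5 * h * (k1 + k2)
    return c

r_num = (march(200) - c_star) / c_star                   # numerical relative changes
r_asy = eps * np.linalg.solve(MF(c_star), n) / c_star    # first-order asymptotics
# Both land near (-0.0203, 0.0617, -0.00325), the values reported in the text.
```

Since ε is tiny, M_F barely changes along the march, which is why the numerical and asymptotic answers agree so closely here.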
We can plot the error trend on a log-log plot to verify the second-order convergence. The plot is given for the Fickian code in figure 3.3.

We can also test the programs against the asymptotic solutions. Running the program (at N = 200) in the Fickian and Maxwell-Stefan models yields concentrations that are identical to the asymptotically computed solutions to within 1.00 × 10^-4 and 3.29 × 10^-5, respectively. We now have a high degree of confidence in both the asymptotic work and the coding.

3.5 Exploring the Fundamental Differences between the Models

There are numerous questions that arise from the analysis, and we will now carry out some investigations. In speaking with fuel cell engineers, we learned that modern fuel cells actually have a permeability of 10^-15 m^2 [8]. This would make the Darcy-law transport less efficient, and it could impact the way the models behave.

There is a question of what happens to the Maxwell-Stefan predictions when we use the "wrong" fluxes (the fluxes that Fick's law inevitably uses). We can redo the calculations for the Maxwell-Stefan model by replacing J, the molar flux with respect to the mass-averaged velocity, by J*, the molar flux with respect to the molar-averaged velocity. In this regime, Q (as in (3.15)) is simply (PA + B)^-1 P.

We also remark that the diffusivity D as used by Promislow et al. [10] is noticeably smaller than the binary diffusivities used in the Maxwell-Stefan formulation. We wish to see what happens if we replace D by the arithmetic mean of the binary diffusivities, 1.17 × 10^-5 m^2 s^-1, and compute the Fickian relative changes.

Our results are tabulated in table 3.4, and they were achieved by means

Table 3.4: Different modeling predictions for the relative changes in the concentrations.
i   (M, κ)     (M, κ̃)     (M̃, κ)     (M̃, κ̃)     (F, κ)     (F, κ̃)     (F̃, κ)     (F̃, κ̃)
1   −0.0140    −0.0142    −0.0143    −0.0137    −0.0203    −0.0188    −0.0139    −0.0124
2    0.0406     0.0404     0.0403     0.0410     0.0617     0.0632     0.0422     0.0437
3   −0.00196   −0.00211   −0.00185   −0.00118   −0.00325   −0.00177   −0.00222   −0.000743

of the numerical programs in section 3.4. Notationally, (X, Y) is used, where X can be M (the standard Maxwell-Stefan formulation), M̃ (Maxwell-Stefan with the incorrect flux), F (Fick's law with D as initially stated), or F̃ (Fick's law with the updated value of D); and Y can be κ (the permeability initially stated) or κ̃ (the smaller permeability that is possible in more modern fuel cells). The i indicates the relative change ri.

From table 3.4 it is very clear that, by choosing the updated value of D, the Fickian model is in better agreement with Maxwell-Stefan. With a smaller permeability, the updated Fickian model fares worse. This is likely because Darcy's law is weaker there, and simply updating the value of D is not good enough: we start to see the different interactions contained in the Maxwell-Stefan law. We also note that the changes from using the wrong flux in the Maxwell-Stefan setting are negligible.

We include the plots (figures 3.4 to 3.6) of the different concentration profiles for the gas species in both the Fick and Maxwell-Stefan settings, where we have chosen the improved value of the diffusivity and the smaller value of the permeability. We see that in both cases the plots are nearly linear, and the asymptotics are a good match.

Figure 3.4: The concentration profile of Oxygen in the GDL for both gas diffusion models.

3.6 Conclusions of Gas Diffusion Modeling

3.6.1 Formulations

Maxwell-Stefan diffusion, in its purest form, is a highly nonlinear system of equations with many intricacies. Fick diffusion is nonlinear, but has fewer difficulties involved in its computation.
Fick's law has formulation inconsistencies when coupled with Darcy's law, and it does not take into account the individual competitions between diffusing species. Instead, it lumps all the binary diffusivities together into a single constant (which can agree quite well with Maxwell-Stefan if properly selected).

Although they differ, the dominant driving force in both models is the same: the bulk diffusion of gas species given by Darcy's law in porous media. The system with Darcy's law alone would not be solvable; both Fickian and Maxwell-Stefan diffusion are perturbations to a singular system.

Figure 3.5: The concentration profile of water vapor in the GDL for both gas diffusion models.

3.6.2 Quantitative Differences

We saw that, in a very simple setting (one dimension, isothermal, steady-state conditions), there are very small differences between the two diffusion models when the Fick diffusivity constant is judiciously selected and the permeability is 10^-12 m^2. For the smaller permeability of 10^-15 m^2 the differences are no longer so small (although they are much smaller than if the diffusivity had not been modified).

Figure 3.6: The concentration profile of nitrogen in the GDL for both gas diffusion models.

It is important to note that the differences we obtained could be larger in more realistic settings. Our one-dimensional model has many limitations, one being that it does not take into account the difficult diffusion pathways of the gas species: the effective length each species must diffuse could be as much as four or five times larger than what our computations were based upon. It is also possible the differences could be more significant in a higher-dimensional setting or with more physically realistic conditions (temperature variations, etc.).
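The path-length effect can be illustrated with a back-of-the-envelope sketch: in a steady one-dimensional problem with a fixed molar flux, the concentration drop grows linearly with the diffusion length, so a tortuosity factor of four or five amplifies the relative changes by the same factor. All numbers below except the diffusivity quoted in the text are hypothetical.

```python
# Hypothetical parameters for a steady 1D fixed-flux diffusion problem.
N_flux = 1.0e-2   # molar flux, mol m^-2 s^-1 (hypothetical)
c_tot = 40.0      # total molar concentration, mol m^-3 (hypothetical)
D = 1.17e-5       # effective diffusivity, m^2/s (value quoted in the text)
L = 2.0e-4        # straight-line layer thickness, m (hypothetical)

def rel_change(length):
    """Relative concentration change across a diffusion path of given length."""
    return N_flux * length / (c_tot * D)

base = rel_change(L)        # straight-through path
tortuous = rel_change(5 * L)  # path five times longer
print(base, tortuous, tortuous / base)  # the ratio is the tortuosity factor, 5
```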
Chapter 4

Summary and Future Work

4.1 Summary

Here we provide a summary of the research results and discuss future extensions to the work done.

4.1.1 Superconductor Research Summary

In superconductor systems governed by the London equation, many physical properties are well-understood under certain assumptions on the surface geometry of the superconductor. In particular, if a magnetic field is applied parallel to a superconducting interface, and the interface is flat, then the field magnitude should decay exponentially with the distance into the superconductor (when in the Meissner state). Recent experiments have measured magnetic field profiles that deviate from this exponential decay. Our research project in chapter 2 explored a possible explanation for the deviations: the notion of surface roughness.

To reach our conclusions, careful asymptotics were done that accurately describe field profiles in superconductors with sinusoidal surface roughness. Novel numerical methods with a carefully chosen mesh were used to verify the asymptotic results and then provide results beyond what the asymptotics allow.

Two new phenomena are predicted by the study. The first is that the individual field components (and even the field magnitude), in particular regions of the vacuum-superconductor interface, can exceed the values given by the applied field (experimentally this cannot be detected because the experiments only measure the average field). The second is that if there is a rough interface, then in general the field in the vacuum region is also perturbed - and none of the components is identically zero, either in the vacuum or in the superconductor.

4.1.2 Gas Diffusion Research Summary

In the modeling of gas transport, there exist a number of diffusion models. Fick diffusion is often implemented in modeling due to its relative simplicity.
However, another diffusion model known as Maxwell-Stefan diffusion is believed to be more accurate due to the various interspecies interaction terms that appear in its formulation. This model is a great deal more complicated. We sought to determine the qualitative and quantitative differences between the two models - to find out whether, in fact, there is a significant difference when computations are done with one model or the other in situations relevant to fuel cell operation.

Using a simple, one-dimensional model of a gas diffusion layer in a fuel cell, we computed the relative changes in concentration for three gas species with both sets of equations. Our work here (chapter 3) was carried out in an asymptotic regime and verified with numerical programs. The numerical programs were also used to further study the predictions of the models.

We showed that the differences between the relative concentration changes predicted by the two models are negligible if the Fickian diffusivity is chosen correctly and the permeability is not too small. The models do, however, show small differences in their predictions for smaller (more modern) permeability values. We saw that both models are dominated by the bulk diffusion of the entire system and not the diffusion of individual species (the results of Stockie et al. [9] are in agreement).

4.2 Future Work

4.2.1 Future Work for Superconductor Project

Through various validations, we have seen that the asymptotic and numerical computations are in agreement with each other and with physical intuition. Even with a small number of grid points, the three-dimensional code we wrote shows a high level of accuracy. If, in the future, this code needed to be even more accurate, we would need to maximize the number of grid points in the three-dimensional finite difference code. However, the program and environment both pose limitations.
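The accuracy validations mentioned above amount to estimating an observed order of convergence from errors at successive resolutions, as one would read off a log-log error plot. A minimal sketch of that computation follows; the error values are synthetic stand-ins obeying an O(h^2) model, not output from our programs.

```python
import numpy as np

# Synthetic errors for a second-order scheme at successive grid resolutions.
Ns = np.array([25, 50, 100, 200])
errors = 3.0 / Ns**2  # stand-in for measured errors, O(h^2) with h = 1/N

# The slope of log(error) against log(h) = log(1/N) is the convergence order.
order = np.polyfit(np.log(1.0 / Ns), np.log(errors), 1)[0]
print("observed order of convergence:", order)  # approximately 2
```

With real data the slope is only approximately the order, so in practice one checks that it settles near the expected value as the grid is refined.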
We started with a Matlab implementation to test the formulation, but rewriting the code in a more computationally friendly language such as C would be a natural next step: Matlab does not have the memory or speed to deal with systems of the size we desire, and we need the computation to be as fast as possible. By making use of the known solution for a flat interface, we could use fast Fourier transforms to come up with an efficient preconditioner for the matrix equations, which could then be solved iteratively using a Krylov subspace method such as GMRES [16].

Exploring symmetries could also help reduce the number of unknowns. Given that the surfaces we examined were sinusoidal, there are planes of symmetry for the solution. This could reduce the number of grid points needed by up to a factor of 4.

Analytically, another consideration is the asymptotic analysis. One physical limit that was inaccessible in the asymptotics was that of very large spatial frequencies. To analyse these limits, it is possible that homogenization theory holds the answers. Research has been done on solving Maxwell's equations with very rough boundaries [17]; this could be possible in our case as well.

4.2.2 Future Work for Gas Diffusion Project

Having seen that there are such small differences between Fick diffusion and Maxwell-Stefan diffusion in our model, some serious questions arise. At what point do the two models differ significantly (to the point where Maxwell-Stefan diffusion could be necessary for modeling)? Also, what is the best choice of the Fickian diffusivity when used in conjunction with Darcy's law, given the set of binary diffusivities? A next course of action would be to model different systems of gases and determine where the two models differ, and how to select the diffusivity.

It would also be valuable to consider more realistic conditions. Not only would we ideally take the model here to a higher-dimensional setting, but
we would need to add temperature variations, as well as the presence of liquid water, to the calculations.

Bibliography

[1] Morton, K. and Mayers, D., Numerical Solution of Partial Differential Equations, Cambridge University Press: 1994.

[2] Bender, C. and Orszag, S., Advanced Mathematical Methods for Scientists and Engineers: Asymptotic Methods and Perturbation Theory, Springer-Verlag New York, Inc.: 1999.

[3] Fowler, A., Mathematical Models in the Applied Sciences, Cambridge Texts in Applied Mathematics: 1997.

[4] Ashcroft, N. and Mermin, N., Solid State Physics, Saunders College Publishing: 1976.

[5] Sonier, J., "Muon Spin Rotation/Relaxation/Resonance (µSR)" (http://musr.org/~jess/musr/muSRBrochure.pdf).

[6] Kiefl, R.; Hossain, M.; Wojek, B.; Dunsiger, S.; Morris, G.; Prokscha, T.; Salman, Z.; Baglo, J.; Bonn, D.; Liang, R.; Hardy, W.; Suter, A.; and Morenzoni, E., "Direct Measurement of the London Penetration Depth in YBa2Cu3O6.92 Using Low-Energy µSR", Physical Review B 2010, 81:18, 180502. Abstract can be viewed at: http://link.aps.org/doi/10.1103/PhysRevB.81.180502

[7] London, F. and London, H., "The Electromagnetic Equations of the Supraconductor", Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 1935, 149:886, 71-88.

[8] Private communication.

[9] Stockie, J.; Promislow, K.; and Wetton, B., "A Finite Volume Method for Multicomponent Gas Transport in a Porous Fuel Cell Electrode", International Journal for Numerical Methods in Fluids 2003, 41, 577-599.

[10] Promislow, K.; Stockie, J.; and Wetton, B., "A Sharp Interface Reduction for Multiphase Transport in a Porous Fuel Cell Electrode", Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science 2006, 462, 789-816.

[11] Berg, P.; Promislow, K.; St. Pierre, J.; Stumper, J.; and Wetton, B., "Water Management in PEM Fuel Cells", Journal of the Electrochemical Society 2004, 151:3, A341-A353.
[12] CRC Handbook of Chemistry and Physics, 90th ed., CRC Press: 2009.

[13] Taylor, R. and Krishna, R., Multicomponent Mass Transfer, Wiley Series in Chemical Engineering: 1993.

[14] Bear, J. and Bachmat, Y., Introduction to Modelling Transport Phenomena in Porous Media, Kluwer Academic: 1990.

[15] Hamming, R., Numerical Methods for Scientists and Engineers, General Publishing Company: 1973.

[16] Quarteroni, A.; Sacco, R.; and Saleri, F., Numerical Mathematics, Springer-Verlag, Inc.: 2000.

[17] Nevard, J. and Keller, J., "Homogenization of Rough Boundaries and Interfaces", SIAM Journal on Applied Mathematics 1997, 57:6, 1660-1686.

Appendix A

Eigenvalues of Finite Difference Matrix

We seek the eigenvalues of the $(n+1) \times (n+1)$ matrix $M$ in section 1.1.1. The matrix is clearly invertible (by simple row reduction it can be seen to have $n+1$ pivots).

If $Mu = \lambda u$ then $u_1 = \lambda u_1$, $u_{n+1} = \lambda u_{n+1}$, and $u_{j+1} - 2u_j + u_{j-1} = h^2 \lambda u_j$ for $2 \le j \le n$. The endpoint equations tell us that $u_1 = u_{n+1} = 0$ since $\lambda \neq 0$ ($M$ is invertible). On the interior we look for a solution of the form $u_j = \theta^j$. We then easily find
$$\theta^{j-1}(\theta^2 - \beta\theta + 1) = 0 \tag{A.1}$$
where $\beta = 2 + h^2\lambda$. If $\beta^2 - 4 \ge 0$ then $u_j = A\theta_+^j + B\theta_-^j$ (where $\theta_\pm$ are the roots of the quadratic, for $\beta^2 - 4 > 0$) or $u_j = Aj + B$ (when $\beta^2 - 4 = 0$), neither of which can lead to anything other than the zero vector under the restrictions $u_1 = u_{n+1} = 0$.

On the other hand, if $\beta^2 - 4 < 0$ then we have two roots that are complex conjugates, each of modulus 1 (since their product must be 1). We denote these roots $\exp(\pm i\alpha)$. We require $Ae^{i\alpha} + Be^{-i\alpha} = 0$, so that $A = -Be^{-2i\alpha}$. Also we need $Ae^{i(n+1)\alpha} + Be^{-i(n+1)\alpha} = 0$, so $A = -Be^{-i(2n+2)\alpha}$. Dividing the two equations for $A$, we see this is only possible if $e^{2ni\alpha} = 1$, so that $2n\alpha = 2\pi k$ for an integer $k$. Thus $\alpha = \pi k/n$, $k = 1, 2, \ldots, n-1$. The integer $k$ cannot be zero (or $n$) because we require two distinct complex roots.
Solving the quadratic for $\theta$ in (A.1) and taking the real part of each root, we find $\mathrm{Re}\,\theta = \beta/2$. The real part of $e^{ik\pi/n}$ is $\cos(k\pi/n)$. This allows us to find the eigenvalues:
$$\beta/2 = 1 + h^2\lambda_k/2 = \cos(k\pi/n),$$
which means (with $h = 1/n$)
$$\lambda_k = \frac{2\cos(k\pi/n) - 2}{h^2} = 2n^2\left(1 - \frac{\pi^2 k^2}{2n^2} + O(n^{-4}) - 1\right) = -\pi^2 k^2 + O(n^{-2}).$$
The eigenvalue of smallest magnitude occurs at $k = 1$ and is given by $-\pi^2$ as $n \to \infty$. Therefore the matrix $M^{-1}$ has a bounded sup-norm.

Appendix B

The General Interface

Here we show how to asymptotically compute the solution for a general interface $z = \epsilon h(x,y)$, under the assumption that $h$ and its powers $h^2, h^3, \ldots$ can be Fourier transformed. We use the symmetric Fourier transform throughout. For all surfaces, the zeroth-order term $b^{(0)}$ will be the base solution given in (2.8).

We can write
$$h(x,y) = \frac{1}{2\pi} \int_{\mathbb{R}^2} \hat{h}(\omega_x, \omega_y)\, e^{i(\omega_x x + \omega_y y)}\, d\omega_x\, d\omega_y,$$
where $\hat{h}$ is the Fourier transform of $h$. Then by (2.16b) we have
$$[b^{(1)}]|_{z=0} = \frac{1}{2\pi} \int_{\mathbb{R}^2} \hat{h}(\omega_x, \omega_y)\, e^{i(\omega_x x + \omega_y y)}\, d\omega_x\, d\omega_y.$$
We could also proceed to the equations at higher powers of $\epsilon$.

By modifying (2.17) to its Fourier-integral analogue, we can write
$$b^{(j)} = \frac{1}{2\pi} \int_{\mathbb{R}^2} d\omega_x\, d\omega_y\; e^{i(\omega_x x + \omega_y y)} \begin{cases} \gamma^{(1)} \pi_1 \beta^{(j)} e^{\omega z} & \text{if } z \le 0 \\ (\gamma^{(2)} \pi_2 + \gamma^{(3)} \pi_3)\,\beta^{(j)} e^{-\bar\omega z} & \text{if } z > 0 \end{cases} \tag{B.1}$$
where $\omega(\omega_x,\omega_y) = \sqrt{\omega_x^2 + \omega_y^2}$, $\bar\omega(\omega_x,\omega_y) = \sqrt{1 + \omega_x^2 + \omega_y^2}$, $\gamma^{(1)}_{\omega_x,\omega_y} = (i\omega_x, i\omega_y, \omega)^T$, $\gamma^{(2)}_{\omega_x,\omega_y} = (\bar\omega, 0, i\omega_x)^T$, $\gamma^{(3)}_{\omega_x,\omega_y} = (0, \bar\omega, i\omega_y)^T$, and $\beta^{(j)}_{\omega_x,\omega_y} = M^{-1}_{\omega_x,\omega_y}\, \widehat{[b^{(j)}]}|_{z=0}$. We can thus find the perturbation terms at every order.

As an easy example, we consider a surface with roughness in only one direction, $z = \epsilon\,\mathrm{sinc}\, y$, with applied field $(1,0,0)^T$, and we compute the first-order perturbation. Recall $\mathrm{sinc}\, y = (\sin y)/y$. The symmetric Fourier transform of $\mathrm{sinc}\, y$ is $\sqrt{\pi/2}\,\chi_{[-1,1]}(\omega)$, where $\chi$ is the characteristic function. Therefore
$$\widehat{[b^{(1)}]}|_{z=0} = -\hat{h}\,[\partial_z b^{(0)}]_{z=0} = \left(\sqrt{\pi/2}\,\chi_{[-1,1]},\, 0,\, 0\right)^T.$$
Carrying out the computation, we find $\gamma^{(j)}_{\omega_x,\omega_y}\, \pi_j\, \beta^{(1)}_{\omega_x,\omega_y} = \sqrt{\pi/2}\,\chi_{[-1,1]}\,\delta_{2,j}$.
Then (by (B.1)) we conclude that, for $b^{(1)}$, all components in the vacuum are zero, and all components except $b^{(1)}_1$ are zero in the superconductor. We find
$$b^{(1)}_1 = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \sqrt{\pi/2}\, \chi_{[-1,1]}\, e^{i\omega y}\, e^{-\sqrt{1+\omega^2}\,z}\, d\omega = \frac{1}{2} \int_{-1}^{1} e^{-\sqrt{1+\omega^2}\,z}\, e^{i\omega y}\, d\omega.$$
If we expand the complex exponential as $\cos(\omega y) + i\sin(\omega y)$ and note that the imaginary part of the integrand is an odd function of $\omega$ integrated over a symmetric range (and hence zero), then we obtain
$$b^{(1)}_1 = \frac{1}{2} \int_{-1}^{1} e^{-\sqrt{1+\omega^2}\,z} \cos(\omega y)\, d\omega = \int_{0}^{1} e^{-\sqrt{1+\omega^2}\,z} \cos(\omega y)\, d\omega.$$
To first order (neglecting $b_2$ and $b_3$, which remain identically zero) we can write
$$b_1 = \begin{cases} 1 & \text{if } z \le \epsilon\,\mathrm{sinc}\, y \\ e^{-z} + \epsilon \int_0^1 e^{-\sqrt{1+\omega^2}\,z} \cos(\omega y)\, d\omega & \text{if } z > \epsilon\,\mathrm{sinc}\, y. \end{cases}$$
This process could be continued to obtain higher-order approximations.

Appendix C

Dead Layers and Averages

The terms of importance in taking the averages are the constants (with respect to $x$ and $y$) and those involving the squares of sine and cosine (all others average to zero). Suppose a term had a factor of $\cos^2(\omega_x x)\cos^2(\omega_y y)$ present in the third geometry (i.e., neither $\omega_x$ nor $\omega_y$ is zero). Then the average of this over $x$ and $y$ is trivially $1/4$. However, if $\omega_x$ were zero then we would be computing the average value of $\cos^2(\omega_y y)$, which is actually $1/2$. The issue here is that $\cos^2(\omega_x x)$ does not approach $\cos^2(0) = 1$ uniformly as $\omega_x \to 0$.

Appendix D

Proof of Periodicity of g̃

We wish to prove that if $b: \mathbb{R}^3 \to \mathbb{R}^3$ (where $b = b(x,y,z)$) is periodic in $x$ and $y$ with periods $T_x$ and $T_y$, and is defined by $b = \nabla g$ for some $C^\infty$ scalar function $g$ satisfying $g(x,y,-\infty) = x$, then the function $\tilde{g} = g - x$ is periodic in $x$ and $y$ with the same periods as $b$.

The boundary condition at $z = -\infty$ is clearly $\tilde{g}(x,y,-\infty) = 0$. As $b$ is periodic (and $C^\infty$) we have, for any fixed $x_0$, $y_0$ and $z_0$:
$$\int_{x_0}^{x_0+T_x} \partial_x b(x, y_0, z_0)\, dx = \int_{y_0}^{y_0+T_y} \partial_y b(x_0, y, z_0)\, dy = 0.$$
Replacing $b$ with $\nabla \tilde{g} + (1,0,0)^T$ and looking at the third component, we obtain:
$$\int_{x_0}^{x_0+T_x} \partial_{zx}\tilde{g}(x, y_0, z_0)\, dx = \int_{y_0}^{y_0+T_y} \partial_{zy}\tilde{g}(x_0, y, z_0)\, dy = 0. \tag{D.1}$$
To show that $\tilde{g}$ is periodic, it suffices to prove that $\int_{x_0}^{x_0+T_x} \partial_x \tilde{g}(x,y_0,z_0)\, dx$ and $\int_{y_0}^{y_0+T_y} \partial_y \tilde{g}(x_0,y,z_0)\, dy$ are both zero. We prove the first is zero; the second follows by the same methodology.

We write
$$\int_{x_0}^{x_0+T_x} \partial_x \tilde{g}(x,y_0,z_0)\, dx = \int_{x_0}^{x_0+T_x} \left( \partial_x \tilde{g}(x,y_0,-\infty) + \int_{-\infty}^{z_0} \partial_{xz}\tilde{g}(x,y_0,z)\, dz \right) dx,$$
and by Fubini's theorem (to reorder the integrations) and Clairaut's theorem (for the equality of the mixed partial derivatives), along with the fact that $\tilde{g}(x,y_0,-\infty) = 0$, we find
$$\int_{x_0}^{x_0+T_x} \partial_x \tilde{g}(x,y_0,z_0)\, dx = \int_{-\infty}^{z_0} \int_{x_0}^{x_0+T_x} \partial_{zx}\tilde{g}(x,y_0,z)\, dx\, dz = 0,$$
where the last equality comes from the fact that the inner integral is zero by equation (D.1). We have shown that $\tilde{g}$ is periodic with the same periodicity as $b$.

Appendix E

Proof of Existence of a Null Vector

We show here that, for even $N$, the three-dimensional code matrix (in the flat geometry) admits a null vector, making the system unsolvable. We use the same notation as in section 2.4.1.

Here we consider an unstretched coordinate system with $x$, $y$ and $z$ as the variables. The transformed coordinate system would still approximate the same system (so if there is a null vector in Cartesian coordinates, there is also a null vector in the stretched coordinates, to within an error imposed by the numerical system).

We wish to find a vector $u$ so that $Mu = 0$. Such an equation would imply (in a discretized sense) the following: firstly, $\tilde{g}(x,y,-\infty) = 0$; secondly, $\Delta \tilde{g} = 0$; thirdly, $\nabla \tilde{g} - b = 0$ along the interface; fourthly, $\nabla \cdot b = 0$ along the interface on the superconducting side; fifthly, $\Delta b = b$ in the superconducting region; and sixthly, $b_1(x,y,\infty) = b_2(x,y,\infty) = \partial_z b_3(x,y,\infty) = 0$.
On the vacuum side, we begin by finding a solution vector with zero Laplacian (in the discretized setting) of the form $u_{K(i,j,k,0)} = (-1)^{i+j}\lambda^k$ for $k = 2, \ldots, N_v - 1$, with a uniform mesh in all directions. Here, a point on the mesh can be described by $(x_i, y_j, z_k)$ with spacing $h$.

Computing the discretized Laplacian and setting it equal to zero, we have
$$\frac{u_{i+1,j,k} + u_{i-1,j,k} + u_{i,j+1,k} + u_{i,j-1,k} + u_{i,j,k+1} + u_{i,j,k-1} - 6u_{i,j,k}}{h^2} = 0.$$
Thus, $(-1)^{i+j}\lambda^{k-1}(\lambda^2 - 6\lambda + 1) = 0$, which has roots in reciprocal pairs, $\lambda = 3 \pm \sqrt{32}/2 = 3 \pm 2\sqrt{2}$. If $\alpha$ is small (close to machine epsilon) then $\alpha(-1)^{i+j}(3 + 2\sqrt{2})^k$ is nearly zero at $k = 1$ and grows exponentially in magnitude with $k$.

Because of how we discretized $\nabla \tilde{g} - b$ (by taking averages at the interface), the discretized $\nabla \tilde{g}$ is zero along the interface, making all components of $b$ zero there. Then, if the discretized $b$ is zero everywhere on the superconducting side, the remaining conditions are upheld.

Note that the periodicity implies $(-1)^{1+j} = (-1)^{N+1+j}$ for fixed $j$ (and similarly with $i$ and $j$ interchanged), and this can only happen if $N$ is even. Therefore, $u_{K(i,j,k,\ell)} = \alpha(-1)^{i+j}\lambda^k\delta_{\ell 0}$ is a null vector for even values of $N$.
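As a quick numerical sanity check of the algebra above (an illustrative sketch, not part of the thesis code), one can confirm that the stated roots satisfy the quadratic, form a reciprocal pair, and produce the claimed exponential growth in k:

```python
import math

# Roots of lambda^2 - 6*lambda + 1 = 0 as stated: 3 +/- sqrt(32)/2 = 3 +/- 2*sqrt(2).
lam_plus = 3 + 2 * math.sqrt(2)
lam_minus = 3 - 2 * math.sqrt(2)

assert abs(lam_plus**2 - 6 * lam_plus + 1) < 1e-12   # both satisfy the quadratic
assert abs(lam_minus**2 - 6 * lam_minus + 1) < 1e-12
assert abs(lam_plus * lam_minus - 1.0) < 1e-12       # reciprocal pair

# A small multiple of the growing root is near machine epsilon at k = 1
# but grows exponentially with k, as used in the null-vector construction.
alpha = 1e-16
values = [alpha * lam_plus**k for k in (1, 10, 20)]
assert values[0] < values[1] < values[2]
print(values)
```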
