Asymptotic and Numerical Modeling of Magnetic Field Profiles in Superconductors with Rough Boundaries and Multi-Component Gas Transport in PEM Fuel Cells

by

Michael Robert Lindstrom

B.Sc. (Hons.), University of British Columbia, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2010

© Michael Robert Lindstrom 2010

Abstract

This thesis is a combination of two research projects in applied mathematics, both of which use numerical and asymptotic analysis to study real-world problems.

The first problem is in superconductivity and is motivated by recent experimental results at the Paul Scherrer Institute. We need to determine how the surface roughness of a superconductor influences the penetration properties of an externally applied magnetic field. We apply asymptotic analysis to study these influences, and then verify the accuracy, even well beyond the limits of the asymptotics, by means of computational approximations. Through our analysis, we are able to offer insights into the experimental results, and we discover the influence of a few particular surface geometries.

The second problem is in gas diffusion; the application for this study is in fuel cells. We compare two gas diffusion models in a particular fuel cell component, the gas diffusion layer, which allows transport of reactant gases from channels to reaction sites. The two models have very different formulations, and we explore how they differ qualitatively in computing concentration changes of gas species. We make use of asymptotic analysis, but also use computational methods to verify the asymptotics and to study the models more deeply. Our work leads us to a deeper understanding of the two models, both in how they differ and in what similarities they share.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction to Mathematical Modeling
  1.1 Introduction to Numerical and Asymptotic Analysis
    1.1.1 Numerical Analysis
    1.1.2 Asymptotic Analysis
  1.2 Nondimensionalization
2 Superconductivity Modeling
  2.1 Introduction to Superconductivity
    2.1.1 Superconducting Properties
    2.1.2 Relevant Equations
    2.1.3 Physical Dead Layer
  2.2 Asymptotic Analysis
    2.2.1 Asymptotic Formulation
    2.2.2 General Procedure
    2.2.3 Geometry One: Surface with Roughness in One Spatial Direction and Parallel Applied Magnetic Field
    2.2.4 Geometry Two: Surface with Roughness in One Spatial Direction and Applied Field Not Uniformly Parallel to Surface
    2.2.5 Geometry Three: Surface with Roughness in Two Spatial Directions
  2.3 Finite Difference Program for Geometry One
    2.3.1 Numerical Formulation
    2.3.2 Validation
    2.3.3 Results
  2.4 Finite Difference Program for General Sinusoidal Surface
    2.4.1 Numerical Formulation
    2.4.2 Validation
    2.4.3 Results
  2.5 Conclusions of Superconductor Modeling
    2.5.1 More Complicated Field Behaviour
    2.5.2 Orientation of the Roughness Affects the Profile
    2.5.3 Effective Dead Layer
3 Fuel Cell Modeling
  3.1 Modeling of Gas Diffusion in Fuel Cells
    3.1.1 PEM Fuel Cell Overview
    3.1.2 The Model
    3.1.3 Standard Operating Conditions of a Fuel Cell
    3.1.4 Diffusion Equations
  3.2 Asymptotic Formulation
    3.2.1 Nondimensionalization
  3.3 Asymptotic Analysis
    3.3.1 Asymptotic Analysis of Fick Diffusion
    3.3.2 Asymptotic Analysis of Maxwell-Stefan
  3.4 Numerical Analysis of Diffusion Models
    3.4.1 Discretizing the Diffusion Equations
    3.4.2 Verification of Program Results
  3.5 Exploring the Fundamental Differences between the Models
  3.6 Conclusions of Gas Diffusion Modeling
    3.6.1 Formulations
    3.6.2 Quantitative Differences
4 Summary and Future Work
  4.1 Summary
    4.1.1 Superconductor Research Summary
    4.1.2 Gas Diffusion Research Summary
  4.2 Future Work
    4.2.1 Future Work for Superconductor Project
    4.2.2 Future Work for Gas Diffusion Project
Bibliography
Appendices
A Eigenvalues of Finite Difference Matrix
B The General Interface
C Dead Layers and Averages
D Proof of Periodicity of g̃
E Proof of Existence of a Null Vector

List of Tables

2.1 Estimated orders of convergence for (ωx, ωy) = (π, 0).
2.2 Estimated orders of convergence for (ωx, ωy) = (π, π).
3.1 Physical constants in our gas diffusion model.
3.2 Nondimensionalized and rescaled parameters and variables for Fick diffusion.
3.3 Nondimensionalized and rescaled parameters and variables for Maxwell-Stefan diffusion.
3.4 Different modeling predictions for the relative changes in the concentrations.

List of Figures

2.1 The magnetic field is constant up to the interface and then decays exponentially in magnitude. Left: a visual representation. Right: plot of field magnitude vs. z, with z = 0 as the interface.
2.2 Reprinted figure with permission from [6]. Copyright (2010) by the American Physical Society. This figure displays an experimentally measured magnetic field profile. The a and b represent different magnetic field orientations, with slightly different decay length scales. These scales are believed to be due to an anisotropy in the YBCO superconducting material. In both cases, there is a lag in the exponential decay.
2.3 A sketch of the effective dead layer, in this case δ.
2.4 The overall geometry, PDEs, and boundary conditions we are solving.
2.5 There is no control over the error at x = 0 but there is control at x = ε. To ensure O(ε^n) accuracy everywhere we use x = ε instead of x = 0 as the point where we switch from the left function to the right function.
2.6 The applied field is parallel to the surface.
2.7 A profile of the field magnitude with ε = 0.05 and ω = 2π in the y-t plane. Here the perturbation is quite obvious, but as t gets large the perturbation gradually disappears.
2.8 Peak and valley profiles.
2.9 A profile of the field magnitude with ε = 0.05 and ω = 2π in the peak and valley cases, and the average field profile.
2.10 Left: the effective dead layer at fixed ε = 0.05. Right: the effective dead layer at fixed ω = π.
2.11 The applied field has nonzero perpendicular components with respect to the surface.
2.12 Left: a profile of b1 with ε = 0.05 and ω = 2π in the peak and valley cases. Right: a profile of b3 with ε = 0.05, ω = 2π and x = −0.26 fixed.
2.13 A profile of |b|avg with ε = 0.05 and ω = 2π.
2.14 Left: the effective dead layer at fixed ε = 0.05. Right: the effective dead layer at fixed ω = π.
2.15 Top left: a profile of b1 from peak and valley. Top right: profile of b2 with (x, y) = (−0.27, −0.27). Bottom left: profile of b3 with (x, y) = (−0.27, −0.50). Bottom right: difference between average field profile in perturbed geometry and flat geometry. All figures computed with ε = 0.05 and (ωx, ωy) = (2π, 2π).
2.16 Left: the effective dead layer at fixed ε = 0.05 and ωx = ωy. Right: the effective dead layer at fixed (ωx, ωy) = (π, π).
2.17 The effective dead layer at fixed ε = 0.05 and (ωx² + ωy²)^{1/2} = 8π.
2.18 Near t = 0 the grid is square but as t goes farther out the spacing in the t-direction increases.
2.19 A plot of τ = f(t).
2.20 A verification that the two-dimensional code has the right behaviour in the flat geometry.
2.21 Checking second order convergence by observing the error behaviour where the exact solution exp(−z) is known. Here ε = 0.1 and ω = 2π.
2.22 Checking the convergence order of the asymptotics. Here ω = π.
2.23 Profiles of the field magnitude at ε = 0.05 and ω = 2π. Figure displays decay from a peak and valley, and the average magnitude.
2.24 Profiles of the field magnitude at ε = 0.2 and ω = 100π. Figure displays decay from a peak and valley and shows the flat geometry solution.
2.25 Formulation for three-dimensional code.
2.26 The mesh in the three-dimensional system. Note how the g̃ and b meshes interlock. The circles with crosses indicate points that occur deeper into the page than the circles with the dots. At each circle with a cross, three components are specified. At each circle with a dot, the scalar value of g̃ is specified.
2.27 Visual confirmation that the three-dimensional program correctly handles the flat interface. N = 11 and ωx = ωy = π.
2.28 Left: resolution of first-order asymptotic term for first geometry with ω = π. Centre: resolution of first-order asymptotic term for second geometry with ω = π. Right: resolution of first-order asymptotic term for third geometry with (ωx, ωy) = (π, π).
2.29 Top left: a profile of b1 from peak and valley. Top right: a profile of b2 with (x, y) = (−0.27, −0.27). Bottom left: profile of b3 with (x, y) = (−0.27, −0.50). Bottom right: difference between average field profile in perturbed geometry and flat geometry. All figures computed with ε = 0.05 and (ωx, ωy) = (2π, 2π).
2.30 The field profiles for different roughness orientations with ε = 0.1.
2.31 The asymptotic and numeric mean field profiles. The two are quite close given the large ε and ω values.
3.1 A cross-section of a PEM fuel cell. Hydrogen and Oxygen diffuse primarily in the XY-plane, but there is diffusion in the Z-direction as well.
3.2 Our one-dimensional model of the gas diffusion layer.
3.3 Verifying second-order convergence with the Fick code.
3.4 The concentration profile of Oxygen in the GDL for both gas diffusion models.
3.5 The concentration profile of water vapor in the GDL for both gas diffusion models.
3.6 The concentration profile of Nitrogen in the GDL for both gas diffusion models.

Acknowledgements

A huge thanks to my thesis supervisor, Brian Wetton, for all his help. Whenever I had questions, he was willing to give guidance and direction.
He really knows his stuff, and without his generosity in time and help, I'd likely still be stuck at the beginning. I would also like to thank Rob Kiefl for his supervision of the superconductor work. This modeling work was initially part of a summer job, which eventually became part of my thesis. Many thanks to Rob as well for his insights on the problem. Thanks to Jon Chapman, who made a critical observation in our early struggles to understand the proper formulation of the superconductor problem. I would also like to thank Michael Ward, Joerg Rottler, and Matt Choptuik for their help along the way.

Chapter 1
Introduction to Mathematical Modeling

1.1 Introduction to Numerical and Asymptotic Analysis

This thesis explores two physical problems. The first pertains to modeling the effects of surface roughness on superconductors. In our work, we consider the surface roughness as a small perturbation from a perfectly flat superconductor with well-known properties. The effect of such a small perturbation can be studied by means of asymptotic analysis. Then, to reach perturbations that go beyond the region where the asymptotic work is reliable, we use finite difference discretizations to numerically approximate the solution with a computer. The starting point for all of this is nondimensionalizing the system.

Our second project involves modeling gas diffusion in a porous medium. We have two diffusion models to work with, and by nondimensionalizing their equations, we are able to use asymptotic methods to compare them. This work is followed up by numerical computations to verify the accuracy and explore the models from various angles.

With this basic context, we begin by providing a brief overview of the premise of computational approximations to the solutions of differential equations using finite difference discretizations, the principles of asymptotic analysis, and the method of nondimensionalization.

1.1.1 Numerical Analysis

Numerical analysis is a powerful tool for solving real-world problems that are too complicated to solve (or even approximate) analytically. Much of this thesis involves the approximation of derivatives by finite differencing; a good reference is [1]. The idea is that a differential equation (ordinary or partial) can be written in a discretized form, with a "small" error.

Let us consider a real-valued function f : ℝ → ℝ, i.e. f is a scalar function that takes a single real number as input and returns a single real number. If we assume that f ∈ C² (f is twice continuously differentiable), then the limit

$$\lim_{h \to 0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$$

exists everywhere and is in fact equal to f''(x).

The above limit is exact. However, if we only know f on a discrete set of points, we can only come up with an approximation to the second derivative. Let us assume that we have values for f at [x_0, x_1, ..., x_n] with x_j = x_0 + jh, where h, the spacing between grid points, is a small positive number. Denote these values of f by f_0, ..., f_n. An approximation to f''(x_j), denoted by D_h² f_j, where j ∈ {1, ..., n−1}, is

$$D_h^2 f_j = \frac{f_{j+1} - 2f_j + f_{j-1}}{h^2}. \tag{1.1}$$

It turns out this is a very good approximation: the error is bounded by a constant times h², as long as f is smooth enough.
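As a quick empirical check of this claim, the following sketch (my own illustration, not code from the thesis; the test function sin x, the evaluation point, and the grid spacings are arbitrary choices) applies the centered difference (1.1) to a smooth function and watches the error fall by roughly a factor of four each time h is halved.

```python
# Sketch (not from the thesis): check that the centered difference
# (f(x+h) - 2 f(x) + f(x-h)) / h^2 approximates f''(x) with O(h^2) error.
import numpy as np

def centered_second_difference(f, x, h):
    """Centered finite-difference approximation to f''(x) with spacing h."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

x0 = 1.0
exact = -np.sin(x0)                       # f''(x) for f = sin
for h in [0.1, 0.05, 0.025, 0.0125]:
    err = abs(centered_second_difference(np.sin, x0, h) - exact)
    print(f"h = {h:<7} error = {err:.3e}")
# Halving h should divide the error by roughly 4, consistent with O(h^2).
```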
To see where this bound comes from, assume f ∈ C⁴. Then we can write

$$f_{j\pm 1} = f_j \pm h f'(x_j) + \frac{h^2}{2} f''(x_j) \pm \frac{h^3}{6} f'''(x_j) + \frac{h^4}{24} f^{(4)}(t),$$

with t being some number between x_j and x_j ± h. If we substitute this into (1.1) we get

$$D_h^2 f_j = f''(x_j) + \frac{h^2}{12} f^{(4)}(t).$$

Thus, the error is bounded by Ch² for some constant C, and we write D_h² f_j = f''(x_j) + O(h²). So if h decreases by a factor of two, the (bound on the) error in the approximation should decrease by a factor of four. As we make h smaller (tending to zero), the error in our approximation also gets smaller (and tends to zero).

Being able to discretize in this way means that differential equations can be approximated by systems of algebraic equations. We consider the simple example of solving y''(x) = x with y(0) = y(1) = 1 on the interval [0, 1]. We take the points [x_0 = 0, x_1 = h, ..., x_n = nh = 1] and let y_j be an approximation to y(x_j). The boundary conditions tell us that y_0 = y_n = 1, and in the interior (1 ≤ j ≤ n−1) we impose y''(x_j) ≈ (y_{j+1} − 2y_j + y_{j−1})/h² = x_j. This allows us to write the (n+1) × (n+1) system

$$\underbrace{\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 & 0 \\
\frac{1}{h^2} & \frac{-2}{h^2} & \frac{1}{h^2} & \cdots & 0 & 0 & 0 \\
\vdots & \ddots & \ddots & \ddots & & & \vdots \\
0 & 0 & 0 & \cdots & \frac{1}{h^2} & \frac{-2}{h^2} & \frac{1}{h^2} \\
0 & 0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix}}_{L_h}
\begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_{n-1} \\ y_n \end{pmatrix}
=
\begin{pmatrix} 1 \\ h \\ 2h \\ \vdots \\ (n-1)h \\ 1 \end{pmatrix}.$$

Solving this matrix system produces a set of y_j's that approximate the exact solution y(x_j) to within an error bounded by a constant times h². To see this, note that we have written the equation L_h ŷ = f̂, where ŷ = (y_0, ..., y_n)^T and f̂ = (1, h, ..., (n−1)h, 1)^T is the load vector. Here and throughout this thesis, the superscript T represents the transpose operator. The exact solution, y = (y(x_0), ..., y(x_n))^T, satisfies L_h y = f̂ + O(h²) (where O(h²) denotes a vector with each component bounded by a constant times h²), and subtracting the equation L_h ŷ = f̂ gives L_h δ = O(h²), where δ = y − ŷ is the error in the approximation. Multiplying by the inverse matrix yields δ = L_h^{−1} O(h²). As long as the matrix norm ‖L_h^{−1}‖_∞ stays bounded for all h (or equivalently for all n), the error δ is of order h². The proof that the matrix norm is bounded is generally not trivial. However, for a symmetric matrix, if all its eigenvalues are bounded away from zero then its inverse has a bounded norm. The eigenvalues of this matrix are all bounded away from zero, and in fact as n gets large the eigenvalues of smallest magnitude can be approximated by −π², −4π², −9π², .... Therefore ‖L_h^{−1}‖_∞ is bounded. We include a proof of the values of the eigenvalues in appendix A.
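As a concrete companion to this boundary-value example, here is a minimal sketch (my own illustration, not the finite difference programs used later in the thesis). Note that the thesis's example y'' = x has a cubic exact solution, so the centered-difference scheme happens to reproduce it to round-off; the sine-forced variant in the sketch is a manufactured problem I added purely to exhibit the generic O(h²) convergence.

```python
# Sketch (illustrative, not the thesis's program): assemble L_h and solve
# y''(x) = g(x) on [0, 1] with y(0) = y(1) = 1 by centered differences.
import numpy as np

def solve_bvp(n, g):
    """Solve y'' = g(x), y(0) = y(1) = 1, on a uniform grid with n intervals."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    L = np.zeros((n + 1, n + 1))
    f = np.ones(n + 1)                       # boundary rows give y_0 = y_n = 1
    L[0, 0] = L[n, n] = 1.0
    for j in range(1, n):                    # interior rows: (1, -2, 1) / h^2
        L[j, j - 1 : j + 2] = [1.0 / h**2, -2.0 / h**2, 1.0 / h**2]
        f[j] = g(x[j])
    return x, np.linalg.solve(L, f)

# Thesis example: g(x) = x, exact solution y = x^3/6 - x/6 + 1 (scheme is exact
# here since y'''' = 0).  Manufactured example: y = sin(pi x) + 1, so that
# g(x) = -pi^2 sin(pi x); this one shows the usual O(h^2) error decay.
for n in [10, 20, 40, 80]:
    x, y = solve_bvp(n, lambda s: s)
    err_cubic = np.max(np.abs(y - (x**3 / 6 - x / 6 + 1)))
    x, y = solve_bvp(n, lambda s: -np.pi**2 * np.sin(np.pi * s))
    err_sine = np.max(np.abs(y - (np.sin(np.pi * x) + 1)))
    print(f"n = {n:<4} error (y''=x): {err_cubic:.1e}   error (sine case): {err_sine:.2e}")
# The sine-case error drops by about a factor of 4 each time h is halved.
```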
1.1.2 Asymptotic Analysis

Asymptotic analysis is a means of studying the behaviour of solutions to equations in response to important parameters. These parameters arise in an equation after it has been nondimensionalized, i.e. rewritten so that there are as few constants as possible in the equation (see the next section).

In asymptotic analysis, we generally know the solution of a particular equation, and we are interested in how the solution changes if the equation is perturbed a little away from the case we know how to solve. Good background material can be found in reference [2].

Supposing we had an equation f(u; ε) = 0 for the variable or function u, with small parameter ε, we could assume that u can be written as an asymptotic series in ε. We write

$$u = \sum_{j=0}^{\infty} f_j(\epsilon)\, u_j, \qquad \text{where } f_{j+1} = o(f_j), \text{ i.e. } \frac{f_{j+1}(\epsilon)}{f_j(\epsilon)} \to 0 \text{ as } \epsilon \to 0.$$

The most obvious example of an asymptotic series is a Taylor series, for example u = u_0 + εu_1 + ε²u_2 + .... Asymptotic series are far more general, however: their terms can be functions, and in some cases peculiar gauge functions such as log(1/ε), sin(ε), or ε^{3/2} can appear in the expansion.

Here, by means of a simple example, we show how asymptotic series can be generated; we will find one that converges and one that diverges. We consider the error function, which is important in statistics and probability,

$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z \exp(-t^2)\, dt.$$

For small |z|, i.e. |z| ≪ 1, we can use a Taylor series to approximate the error function. In this case we are perturbing the integrand away from z = 0:

$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z \sum_{n=0}^{\infty} \frac{(-1)^n t^{2n}}{n!}\, dt = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n+1}}{(2n+1)\, n!}.$$

This Taylor series (which is also an asymptotic series) converges for all z, as it comes from term-by-term integration of a power series that converges uniformly on every compact subset of ℂ (the series for exp(−t²)). It is not, however, a useful way of approximating erf(z) when |z| is large.

As erf(z) is an odd function, we consider z ≫ 1 here (if we wanted a small parameter in this case, we could take ε = 1/z). Given that erf(∞) = 1, we can consider a "small perturbation" from z = ∞ and write

$$\operatorname{erf}(z) = 1 - \frac{2}{\sqrt{\pi}} \int_z^{\infty} \exp(-t^2)\, dt.$$

We write the integrand in a form conducive to integration by parts, (−2t exp(−t²))(−1/(2t)). Focusing on the integral and integrating by parts a couple of times,

$$\int_z^{\infty} e^{-t^2}\, dt = \left. \frac{-1}{2t} e^{-t^2} \right|_z^{\infty} - \int_z^{\infty} \frac{1}{2t^2} e^{-t^2}\, dt = \left. \frac{-1}{2t} e^{-t^2} \right|_z^{\infty} + \left. \frac{1}{4t^3} e^{-t^2} \right|_z^{\infty} + \int_z^{\infty} \frac{3}{4t^4} e^{-t^2}\, dt.$$

The boundary terms are zero when evaluated at t = ∞. We thus furnish an asymptotic expansion of the error function for large z:

$$\operatorname{erf}(z) = 1 - \frac{2 \exp(-z^2)}{\sqrt{\pi}} \left( \frac{1}{2z} - \frac{1}{4z^3} + \cdots \right).$$

By induction we could go further and write

$$\operatorname{erf}(z) = 1 - \frac{\exp(-z^2)}{\sqrt{\pi}\, z} \left( 1 + \sum_{n=1}^{\infty} (-1)^n \frac{(1)(3)\cdots(2n-1)}{(2z^2)^n} \right).$$

This asymptotic series diverges. The coefficients c_n of (z^{−2})^n are c_n = (1)(3)⋯(2n−1)/2^n, and the ratio test gives lim_{n→∞} c_n/c_{n+1} = lim_{n→∞} 2/(2n+1) = 0, so the radius of convergence is zero.

Although the series diverges, it still obeys the properties of being asymptotic: the ratio of term n+1 to term n in absolute value is (2n+1)/(2z²), which tends to 0 as z → ∞ (or as ε = 1/z tends to 0).

This is also an excellent means of approximating the error function for large z. If z = 2 then erf(z) ≈ 0.9953; using the asymptotic expansion with only the first two nonzero terms gives 0.9948, and taking 5 terms gives the best asymptotic approximation, 0.9954. Beyond 5 terms the error increases. By comparison, the Taylor series needs 14 terms to match the precision of the two-term asymptotic approximation, and only one more, 15 terms, to match the precision of the five-term asymptotic approximation at this large z.
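The following sketch (an added illustration, not code from the thesis) reproduces this comparison by evaluating partial sums of both series at z = 2 against Python's built-in math.erf.

```python
# Sketch (illustrative): compare the convergent Taylor series and the divergent
# asymptotic series for erf(z) at z = 2.
import math

z = 2.0

def erf_taylor(z, n_terms):
    """Partial sum of erf(z) = (2/sqrt(pi)) * sum (-1)^n z^(2n+1) / ((2n+1) n!)."""
    s = sum((-1)**n * z**(2*n + 1) / ((2*n + 1) * math.factorial(n))
            for n in range(n_terms))
    return 2.0 / math.sqrt(math.pi) * s

def erf_asymptotic(z, n_corrections):
    """1 - exp(-z^2)/(sqrt(pi) z) * [1 - 1/(2 z^2) + 3/(2 z^2)^2 - ...],
    keeping 1 plus n_corrections terms of the bracketed series."""
    prefactor = math.exp(-z**2) / (math.sqrt(math.pi) * z)
    s, term = 1.0, 1.0
    for n in range(1, n_corrections + 1):
        term *= -(2*n - 1) / (2.0 * z**2)
        s += term
    return 1.0 - prefactor * s

print(f"exact erf(2)                        = {math.erf(z):.6f}")
print(f"asymptotic, first two nonzero terms = {erf_asymptotic(z, 0):.6f}")
print(f"asymptotic, first five nonzero terms= {erf_asymptotic(z, 3):.6f}")
print(f"Taylor, 14 terms                    = {erf_taylor(z, 14):.6f}")
print(f"Taylor, 15 terms                    = {erf_taylor(z, 15):.6f}")
```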
The Taylor series does converge to the exact value, however, given enough terms, whereas the divergent asymptotic series never will.

Asymptotic analysis is a reliable tool for gaining insight into the behaviour of perturbed systems. Not all asymptotic series converge, but even when they do not, they can still provide valuable insights into a perturbed system; in the cases where the series diverges, there is an optimal number of terms that gives a "best" approximation. Finding an asymptotic series for a problem without proving its convergence is known as formal asymptotics. In this thesis we are primarily concerned with finding only the first two or three nonzero terms of an asymptotic series, so we will not be concerned with proving convergence, and all our expansions will be formal. In these formal expansions, our test of whether we have found a suitable series is whether we can actually compute the terms in the series.

1.2 Nondimensionalization

Nondimensionalization is used in the study of differential equations to reduce the number of constants appearing in the equation and to gain insight into the quantities that govern the system's behaviour. The procedure consists of replacing the variables with dimensionless numbers (i.e. getting rid of the units). More details can be found in reference [3].

As an example, we consider the heat equation

$$u_t = \sigma u_{xx} + f(t)$$

for x ∈ [0, L] and t ≥ 0, with u(0, t) = a, u(L, t) = b, and u(x, 0) = u_0(x). The heat equation models the temperature u inside a material as a function of time t and spatial position x. The coefficient σ is the thermal diffusivity, describing how efficiently energy can move through the system, with units of length squared over time. The function f is the external heating: here, we can think of it as a spatially uniform heating that varies in time; its units are temperature divided by time. Physically, the temperatures a and b represent fixed temperatures at either end of the material, and u_0 is the initial temperature distribution.

We begin by writing f = f̄ F, where F has no dimensions and f̄ is a dimensional quantity holding a representative value for f. Later we will select the size of f̄ to make the equation as simple as possible. We can rewrite the PDE as

$$\frac{1}{\bar{f}} u_t = \frac{\sigma}{\bar{f}} u_{xx} + F(t).$$

We similarly rescale the independent variables with x = x̄ X and t = t̄ T, along with the dependent variable u = ū U. This yields

$$\frac{\bar{u}}{\bar{f}\,\bar{t}} U_T = \frac{\sigma \bar{u}}{\bar{f}\,\bar{x}^2} U_{XX} + G(T),$$

where G(T) = F(t̄ T). By equating the coefficients of the U terms, we can select t̄ = x̄²/σ. We can also make one final selection in letting ū = f̄ x̄²/σ. The resulting PDE to solve is

$$U_T = U_{XX} + G(T)$$

with U(0, T) = a/ū, U(L/x̄, T) = b/ū, and U(X, 0) = ũ_0/ū, where ũ_0(X) = u_0(x̄ X) and X ∈ [0, L/x̄].

This equation is in nondimensionalized form, but it is also convenient to use the characteristic scales of the problem. We can choose x̄ = L and divide the equation by an upper bound on |G|, say M.
Replacing U by U/M gives

$$U_T = U_{XX} + H(T)$$

for some H(T) taking values in [−1, 1], with boundary conditions U(0, T) = A, U(1, T) = B and initial condition U(X, 0) = U_0, where A = a/(ū M), B = b/(ū M), U_0 = ũ_0/(ū M), X ∈ [0, 1], and T ≥ 0.

The problem is a lot simpler now: the spatial variable ranges between 0 and 1, we no longer need to deal with the constant σ, and the function H is nicely bounded between −1 and 1. Had the problem been more complicated, we could have had some dimensionless parameters appearing in the PDE itself, which could be used as asymptotic parameters (this will be apparent in the next two chapters).

Chapter 2
Superconductivity Modeling

2.1 Introduction to Superconductivity

2.1.1 Superconducting Properties

A superconductor is a material that, when cooled to a sufficiently low temperature (near absolute zero), exhibits a phase transition to a state with zero electrical resistance. This means that an electric current can run through a superconductor without generating any heat, as long as the current does not exceed some critical value. Superconductors fit into two categories: type I and type II.

Another defining property of a superconductor is the Meissner effect, whereby the material expels magnetic flux provided the external field does not exceed some critical value [4]. Experimentally, this means that an applied magnetic field outside a superconductor can only penetrate a short distance into the material before becoming negligibly small. A sketch of this behaviour is given in figure 2.1.

Figure 2.1: The magnetic field is constant up to the interface and then decays exponentially in magnitude. Left: a visual representation. Right: plot of field magnitude vs. z, with z = 0 as the interface.

In an experimental setting (such as Muon Spin Rotation), a beam of polarized, low-energy muons is sent through a vacuum region and enters the sample, where the magnitude of the local magnetic field causes them to precess. Muons are unstable particles with a very short half-life, and the position and distribution of their decay products can be measured by detectors. Based on the decay properties and an experimentally controlled depth at which they decay, muons act as a probe to measure the average magnetic field at a given depth within a superconductor [5]. They cannot measure the exact field magnitude and direction at specific points.

2.1.2 Relevant Equations

The governing equations for these experimental settings are Maxwell's equations describing electromagnetic phenomena, which hold everywhere in space, and the London equation (within the superconductor). In the vacuum region, the magnetic field B (in SI units) satisfies

$$\nabla \cdot \mathbf{B} = 0 \tag{2.1}$$

and

$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} = 0, \tag{2.2}$$

where μ_0 is the permeability of free space and J is the local current density; in the vacuum, J = 0 as there are no currents. These experimental settings do not require consideration of the electric field E, as it is constant. Within the superconductor we have the London equation, which states

$$\nabla \times \nabla \times \mathbf{B} = -\mathbf{B}/\lambda^2, \tag{2.3}$$

where λ is the London penetration depth [4]. This length scale λ describes how far an applied field can penetrate into the sample. Typically it is of the order of 100 nm [6].
The London equation was discovered by the London brothers, and it was initially derived in an effort to describe an experimentally verified relation within superconductors,

$$\frac{m}{n e^2} \dot{\mathbf{J}} = \mathbf{E}$$

(where the dot signifies a time derivative, m is the mass of an electron, n is the number density of electrons, and e is their charge) [7]. Their work produced two equations, but (2.3) is the one often referred to as the London equation in many books [4].

By using the vector identity ∇ × ∇× = ∇∇· − Δ, where Δ is the Laplacian operator, along with (2.1) and (2.3), we find that inside the superconductor we can work with

$$\Delta \mathbf{B} = \mathbf{B}/\lambda^2. \tag{2.4}$$

In the superconducting region both (2.1) and (2.4) must be imposed. We do not combine (2.1) and (2.2) to use the Laplacian in the vacuum region, where we would rightfully be able to state that ΔB = 0, because it turns out that we lose information that way: all B for which ∇ × B = 0 and ∇ · B = 0 satisfy ΔB = 0, but there are some B for which ΔB = 0 and ∇ · B = 0 while ∇ × B ≠ 0 (take B = (z, 0, 0)^T for example).

Physically, all components of the magnetic field must be continuous across the vacuum-superconductor interface. If we have a flat superconductor occupying the region z > 0 with vacuum in z ≤ 0, and an applied magnetic field B_0 that is parallel to the superconductor surface, i.e. it has no z-component, then the field is constant in the vacuum and inside the superconductor it is given by B_0 e^{−z/λ}. This is a well-known result that is easily derived by noting that for a flat superconductor all x- and y-derivatives must vanish (due to translational invariance in x and y). Then in the vacuum, (2.2) implies B_1 and B_2 are constant, and (2.1) implies B_3 is constant; as B_3 = 0 at z = −∞, B_3 is zero everywhere in the vacuum. In the superconductor B_3 is likewise constant in z and must vanish at z = ∞, so it is zero there as well, and hence B_3 is zero everywhere. Looking at B_1 and B_2, they have the same value on the interface as they did at z = −∞, and the PDE (∂_{zz} B_1 = B_1/λ², ∂_{zz} B_2 = B_2/λ²) with the Dirichlet conditions at z = ∞ implies that they decay exponentially in magnitude.

2.1.3 Physical Dead Layer

Until recently, it was assumed that in physical experiments the superconductors were completely flat and hence the fields would decay exponentially once within the superconductor. Experiments at the Paul Scherrer Institute (PSI) have found experimental profiles that differ quite significantly from the field profile predicted by this London theory with a flat interface [6]; see figure 2.2. The research carried out in the first portion of this thesis has been directed at modeling surface roughness, which is believed to be a potential explanation for the field perturbations. By surface roughness, we mean a surface that has bumps or wiggles instead of being completely flat.

The field seems to decay more slowly than an exponential near the surface, and there is the notion of a dead layer: a distance over which the field magnitude does not decay, even within the superconductor, after which the field decays exponentially. The cartoon function (in dimensionless units) describing this is

$$|b| = \begin{cases} 1 & \text{if } t \leq \delta \\ \exp(-(t - \delta)) & \text{if } t > \delta. \end{cases} \tag{2.5}$$

A sketch of this cartoon function is given in figure 2.3. We remark that the modeling in this chapter is independent of time: the variable t will always represent a nondimensionalized depth and not time.
Introduction to Superconductivity  Figure 2.2: Reprinted figure with permission from [6]. Copyright (2010) by the American Physical Society. This figure displays an experimentally measured magnetic field profile. The a and b represent different magnetic field orientations, with slightly different decay length scales. These scales are believed to be due to an anisotropy in the YBCO superconducting material. In both cases, there is a lag in the exponential decay.  16  2.1. Introduction to Superconductivity  Figure 2.3: A sketch of the effective dead layer, in this case δ.  One way of defining a dead layer, δeff , would be to compute the area under the curve |b| from t = 0 to t = ∞. In (2.5), the area is 1 + δ. In our asymptotic work to come, after finding the average field magnitude at depth t, |b|avg , we can use this to find an effective deadlayer: ∞  |b|avg (t)dt − 1.  δeff =  (2.6)  0  As we will see soon, the definition we give in (2.6) observes many of the properties we would like in a dead layer (in particular becoming zero if the roughness approaches zero). For any surface that is not flat, it is extremely difficult if not impossible to obtain an exact solution to the problem. However, by considering a small perturbation (small with respect to λ) then it is possible to obtain 17  2.2. Asymptotic Analysis an approximate analytic solution by means of an asymptotic series. This means it is possible to have reasonably accurate analytic results for certain geometries. In the next section we consider the procedure of finding an asymptotic solution. In subsequent parts of this paper, we numerically compute approximate solutions for particular surface geometries.  2.2  Asymptotic Analysis  2.2.1  Asymptotic Formulation  Nondimensionalization of the Problem In general, the vacuum-superconductor interface could be parametrized by z = ah(x, y) where |h| ≤ 1 is dimensionless and a is a length (the amplitude). This describes a perturbed flat boundary. However, working with such a general surface generally will not lead to analytic formulas that give insight into the problem and by considering a surface given by a cos(ωx x) cos(ωy y), we are still working in quite general terms. Indeed, as long as h and its powers can be Fourier transformed (where we allow the space of distributions), this is still general. When we refer to h, we refer to this Fourier form cos(ωx x) cos(ωy y). See appendix B for a demonstration of computing a general surface. Vacuum is found in the region z ≤ ah(x, y) and superconductor in the region z > ah(x, y). We will require that the field approach an applied field far away from the superconductor, B(z = −∞) = B0 = |B0 |θˆ = |B0 |(θ1 , θ2 , 0)T , and that the Meissner effect is observed, B(z = ∞) = 0.  18  2.2. Asymptotic Analysis We remark that θˆ has no z−component as we would otherwise not be able to obtain a solution in the flat geometry. The asymptotics rely upon being able to use the flat geometry solution as a base solution, so our applied field cannot have a z−component. We will explain this in more detail shortly. Another condition we require is that the magnetic field is continuous across the superconductor-vacuum interface, [B] = 0 where here [•] denotes the jump of •. We now nondimensionalize as in section 1.2. We select the representative units of the magnetic field to be |B0 | and the representative units of length to be λ. B |B0 |  We write b =  and (x , y , z ) =  1 λ (x, y, z).  Then ∂x =  similarly for the other spatial coordinates. 
So we have ∇• = 1 λ∇  × and  =  1 λ2  1 λ∇  1 λ ∂x  and  •, ∇× =  where the prime signifies we’re looking in the prime-  coordinates. This allows us to write 1 λ∇  b = 0. We define  b=  1 λ2  =  1 b, λ2  ∇•b = λ1 ∇ •b = 0, and ∇×b =  = a/λ, and if we work in these nondimensionalized  prime coordinates and get rid of the primes we have    ∇ • b = 0 and ∇ × b = 0, if z ≤ h  (2.7)    ∇ • b = 0 and ∆b = b, if z > h ˆ b(x, y, ∞) = 0, [b(x, y, h(x, y)] = with boundary conditions b(x, y, −∞) = θ, 0. See figure 2.4 for a visual summary of the problem we are solving. To explain the no z−component requirement, we suppose there were an  19  2.2. Asymptotic Analysis  Figure 2.4: The overall geometry, PDEs, and boundary conditions we are solving.  applied field (0, 0, 1)T and we try to solve the flat geometry. Then all x− and y−derivatives vanish (translational invariance) and we only work with z− derivatives. If the field were b = (b1 , b2 , b3 ) then by (2.7) in this reduced case with ∂x = ∂y = 0 by using the curl and divergence conditions we have that ∂z b1 = ∂z b2 = ∂z b3 = 0 in the vacuum. Thus the field is constant there. It must be the value (0, 0, 1)T because of the boundary condition at z = −∞. In the superconductor from the divergence condition we also have ∂z b3 = 0 so that b3 is constant. We have b3 (z = ∞) = 0 and so b3 = 0 in the superconductor. But b3 = 1 in the vacuum and so b is not continuous, and we don’t have a solution. Physically, given the divergenceless condition, b can be thought of as 20  2.2. Asymptotic Analysis the flux of a fluid. By mass conservation, the fluid must go somewhere. If there is some nonzero b3 at z = −∞ and there can be no flux in the x or y directions there must be some b3 at z = ∞ which cannot come about due to the Meissner effect. The apparent paradox comes about because we are considering a superconductor of infinite size. If it had a finite size then the field could bend around the superconductor.  Finding a Basis in Fourier Components A solution can be expressed as a sum (or an integral) of functions with particular Fourier frequencies in the x− and y− directions. What we do here is establish some properties of a general term in the sum. If θˆ has no z−component then there is a base solution for a flat interface given as  b(0) =     θˆ if z ≤ 0  (2.8)    ˆ −z if z > 0 θe which satisfies the boundary conditions at ±∞. So the far-field boundary conditions for these extra Fourier components (which would be added to b(0) to yield a solution) would be to vanish at z = ±∞ and to bring about continuity along the interface (the base solution is not continuous if z ≤ 0, > 0 are replaced by z ≤ h, > h). If a Fourier component of the field in the vacuum took on the form b = f (z)eiαx x eiαy y for a vector-valued f having value 0 at z = −∞ then there would be certain restrictions upon f.  21  2.2. Asymptotic Analysis The divergence-free condition of (2.7) yields  (iαx f1 + iαy f2 + ∂z f3 )eiαx x eiαy y = 0  so we require:  iαx f1 + iαy f2 + ∂z f3 = 0.  (2.9)  And the irrotional condition (the fact that there is a vanishing curl) of (2.7) yields  (ˆ x(iαy f3 − ∂z f2 ) + yˆ(∂z f1 − iαx f3 ) + zˆ(iαx f2 − iαy f1 ))eiαx x eiαy y = 0  so we can read off 3 more equations:  ∂z f2 = iαy f3  (2.10a)  ∂z f1 = iαx f3  (2.10b)  iαx f2 = iαy f1  (2.10c)  From (2.9), and (2.10a) and (2.10b) we have the linear equations:      f 0 0 iα x  f1   1           ∂z f = ∂z  0 iαy  f2  =  0  f2  . 
     f3 f3 −iαx −iαy 0 F  We have a first order linear differential equation in matrix form. In  22  2.2. Asymptotic Analysis solving this problem, we first need to find the eigenvalues of F. They are found by setting the characteristic polynomial det(F − λI) to 0 :  −λ(λ2 − (αx2 + αy2 )) = 0. We read the eigenvalues off as λ = {0, ±α} where we have α =  αx2 + αy2 .  If vλ is the eigenvector with eigenvalue λ then:  f = c0 v0 + c−α v−α e−αz + cα vα eαz for c s being constants. To ensure the that f goes to 0 at z = −∞ only cα can be nonzero and thus we only seek the eigenvector with this eigenvalue. If vλ is an eigenvector of F with eigenvalue λ then (F − λI)vλ = 0. So we find the null space of    0 iα −α x     0 −α iαy   .   −iαx −iαy −α If β is in the null space then row 1 tells us that αβ1 = iαx β3 and row 2 tells us that αβ2 = iαy β3 . Then we can parametrize β with one degree of freedom, β3 :     iαx    iα  β3  y   α  23  2.2. Asymptotic Analysis For ease of notation later, we’ll rename β3 as β1 and conclude that solutions on the vacuum side take on the form:    iαx   b = β1     iαy αx2 + αy2   √  α2 +α2 z iα x iα y e x y e x e y .    It’s interesting to note that (2.10c) holds even though we didn’t directly √ 2 2 solve it. Indeed we see that if f = (iαx , iαy , αx2 + αy2 )T e αx +αy z then √ 2 2 (iαx )f2 = −αx αy e αx +αy z = (iαy )f1 . Now we consider a solution on the superconducting side g(z)eiαx x eiαy y vanishing at z = ∞. By having zero divergence we arrive at an equation similar to (2.9):  iαx g1 + iαy g2 + ∂z g3 = 0  (2.11)  and by imposing the Laplacian part of (2.7) we have  2 −αx2 gi − αy2 gi + ∂zz gi = gi for i = 1, 2, 3  after cancelling out the exponential terms. 2 g = α2 g where α ≡ Then we find that ∂zz i i  1 + αx2 + αy2 . And this tells  us that g = w−α e−αz + wα eαz .  24  2.2. Asymptotic Analysis For g to be 0 at ∞ we set wα to 0. To satisfy (2.11) we require  iαx w−α 1 + iαy w−α 2 − αw−α 3 = 0.  We can parametrize g with two degrees of freedom, β2 and β3 :          α   0       β2 +  α  β3 )e−αz . g = ( 0         iαx iαy So solutions on the superconducting side have the form:   1+   b = (    αx2 0  iαx    +  αy2         β2 +        0   √  − 1+α2x +α2y z iαx x iαy y e e . β )e 1 + αx2 + αy2  3   iαy  In the asymptotic version of the problem, we will later need to consider  25  2.2. Asymptotic Analysis a jump condition at z = 0 for a function b of form:          iαx    √     α2 +α2 z     e x y if z ≤ 0   β1   iα   y            αx2 + αy2     b = eiαx x eiαy y    0    1 + αx2 + αy2         √       − 1+α2 +α2 z      )e x y 2 2 + β (β if z > 0.  3 2 0 1 + α + α     x y                iαx iαy (2.12) We can find the jump for given αx and αy frequencies:     [b] =     −iαx  1+  αx2  −iαy  0  − αx2 + αy2  iαx  +  αy2    0  β1     iα x iα y   x e y . 1 + αx2 + αy2   β2  e   β3 iαy  Mαx ,αy  (2.13) This is useful for us because it means if we know the jump at z = 0 for a piecewise Fourier component (of the form expressed in (2.12)), then we can quickly solve for the β s and determine its exact form. All we really need is the inverse matrix. It’s worth noting that when (αx , αy ) = (0, 0) then Mαx αy is not invertible. However, in such a case, as long as there is no z−component to the jump, we can still find a solution as β2 (1, 0, 0)T +β3 (0, 1, 0)T still spans R2 × {0}. 
26  2.2. Asymptotic Analysis Later on we will use the notation M−1 0,0 c in which case we mean the vector (0, β2 , β3 ) where the jump condition [b] = (β2 , β3 , 0)T = c is satisfied.  2.2.2  General Procedure  In this section we present a general asymptotic procedure, and illustrate the solution method up to second order. We will take the interface to have form z = h(x, y). We will apply formal asymptotics (as discussed in section 1.1.2) and express b as a regular asymptotic expansion in :  b = b(0) + b(1) +  2 (2)  b  + O( 3 )  (2.14)  We make use of a Taylor expansion to evaluate b along the surface:  b(x, y, h) = b(x, y, 0) + h∂z b(x, y, 0) +  1 2 2 2 h ∂zz b(x, y, 0) + O( 3 ). (2.15) 2  By substituting (2.14) into (2.15) we can approximate the field to second order in  along the surface:  b(x, y, h) = (b(0) + (b(1) + h∂z b(0) ) +  2  1 2 (0) (b(2) + h∂z b(1) + h2 ∂zz b ))|z=0 2  +O( 3 ). Our solution for a flat superconductor is given below:  27  2.2. Asymptotic Analysis  b(0) =     θˆ if z ≤ 0   ˆ −z if z > 0. θe  By demanding continuity of b along the interface we arrive at: 1 2 (0) b ])|z=0 +O( 3 ) = 0 [b(x, y, h)] = ([b(0) ]+ [b(1) +h∂z b(0) ]+ 2 [b(2) +h∂z b(1) + h2 ∂zz 2 which gives us a set of equations at each order in :  O(1) : [b(0) ]|z=0 = 0  (2.16a)  O( ) : [b(1) ]|z=0 = −h[∂z b(0) ]|z=0  (2.16b)  1 2 (0) O( 2 ) : [b(2) ]|z=0 = −h[∂z b(1) ]|z=0 − h2 [∂zz b ]|z=0 2  (2.16c)  Of course (2.16a) already holds. From (2.16b) we can find the jump in b(1) and from (2.16c) we can find the jump in b(2) . The most general case of a jump is something of the form  [b(j) ]|z=0 =  iαx x iαy y c(j) e αx ,αy e αx ,αy  for c s being some constant vectors (which can be found for different perturbed geometries as in the following sections). But from (2.13) we can  28  2.2. Asymptotic Analysis actually find the form of b(j) :  b(j)      iαx            (j) αz iαx x eiαy y if z ≤ 0 iα  π1 M−1   αx ,αy cαx αy e e    y           α     =   αx ,αy    0   α               (j) −αz eiαx x eiαy y if z > 0   α  π3 )(M−1   π + (  2 0 αx ,αy cαx αy )e                    iαx iαy (2.17)  where for each (αx , αy ) we have that α =  αx2 + αy2 and α =  1 + αx2 + αy2  and where πj = (ej , •) for j = 1, 2, 3 is the j th coordinate (π2 (3, 4, 1)T = 4 for example). (j)  (j)  For shorthand we may later write βαx ,αy in place of M−1 αx ,αy cαx ,αy and (1)  (2)  refer to the vectors (iαx , iαy , α)T , (α, 0, iαx )T , (0, α, iαy )T as γαx ,αy , γαx ,αy , (3)  and γαx ,αy respectively. As a final comment on the solutions, we replace the conditions z ≤ 0 and z > 0 with z ≤ h(x, y) and z > h(x, y) respectively. This is because the partial differential equations that need to be satisfied hold for z ≤ h and z > h and not for z ≤ 0 and z > 0. This also ensures continuity to within O( 3 ). The sketch in figure 2.5 should explain this a bit better in a one-dimensional sense. The figure depicts the use of this convention: for x ≤ 0 there is a function straight horizontal line) and for x > 0 there is a function curve) and we assume they are chosen so that at x =  right  lef t  (the  (the other  there is continuity  29  2.2. Asymptotic Analysis to within O( n ) for some n > 0.  Figure 2.5: There is no control over the error at x = 0 but there is control at x = . To ensure O( n ) accuracy everywhere we use x = instead of x = 0 as the point where we switch from lef t to right .  
If we chose to take  lef t  for x ≤ 0 and  right  for x > 0 then we wouldn’t  have control over the jump at x = 0. But by taking for x >  lef t  for x ≤ and  right  there is only one jump and it is zero to within O( n ).  In terms of applying these computations to the understanding of physical experiments, the vector magnetic field at coordinates (x, y, z) is not useful (see sections 2.1.1 and 2.1.3). Instead, the average field magnitude at a given depth t = z − h(x, y) beyond the interface is useful, because this is what is actually measured. Therefore in later sections, after obtaining the asymptotic solution in terms of (x, y, z) we will need to convert to a field magnitude expressed in terms of (x, y, t) and average over x and y. 30  2.2. Asymptotic Analysis In this last portion of the section, we introduce a few final equations and pieces of notation that will be useful later. The inverse of Mjωx ,kωy (defined in (2.13)) is given as:  jiωj,k ωx Mjωx ,kωy  −1  kiωj,k ωy    2 2 = Γj,k  −jkωx ωy k ωy + ωj,k ωj,k  −jkωx ωy j 2 ωx2 + ωj,k ωj,k  where we define ωj,k =  1 + j 2 ωx2 + k 2 ωy2 , ωj,k =   −ωj,k 2   −jiωj,k ωx    −kiωj,k ωy  j 2 ωx2 + k 2 ωy2 , and Γj,k =  (ωj,k ωj,k 2 + j 2 ωx2 ωj,k + k 2 ωy2 ωj,k )−1 . In the following subsections, we consider some specific geometries with increasing complexity. It may seem peculiar that the analysis is presented in this order - instead of immediately going to the most general (although highly complex) geometry - but it turns out each geometry needs to be analyzed independently. Due to a lack of uniform convergence, the results of the simpler geometries cannot be obtained from the general geometry. This will become apparent very soon.  2.2.3  Geometry One: Surface with Roughness in One Spatial Direction and Parallel Applied Magnetic Field  Here we explore a simple geometry in the asymptotic regime. We consider an applied magnetic field that is parallel to the surface. Due to the simplicity, much of the algebra and work can easily be shown. In more complex geometries the same techniques are applied but the computations are too lengthy to include in full detail and we also need to make heavy use of the  31  2.2. Asymptotic Analysis notational shorthand.  First-Order Term  Figure 2.6: The applied field is parallel to the surface.  In our non-dimensionalized regime, we consider a surface of form z = cos(ωy) = 2 (eiωy + e−iωy ) with applied field θˆ = (1, 0, 0)T . See figure 2.6. In this case we have that (ωx , ωy ) = (0, ω). Then b(0) =     (1, 0, 0)T if z ≤ 0   (1, 0, 0)T e−z if z > 0  is the solution at zeroth order in . Then (2.16b) tells us:  32  2.2. Asymptotic Analysis  [b(1) ]|z=0 = − cos(ωy)[∂z b(0) ]|z=0 = (cos(ωy), 0, 0)T = 1 1 (1, 0, 0)T e−iωy + (1, 0, 0)T eiωy . 2 2 (1)  (1)  c0,−ω  c0,ω  There is a term with e−iωy and we will need   (1)  β0,−ω  (1)  1 2       0         1  = M−1 0,−ω  0  =  2√1+ω 2  .     0 0 (1)  It turns out that β0,ω = β0,−ω . Then we can arrive at a first-order perturbation by using (2.17):  33  2.2. 
Asymptotic Analysis  b(1)         0  0               |ω|z −iωy  iω  (0) e|ω|z e−iωy if z ≤ 0 −iω  (0) e e  +              (1)   π1 β (1)  π1 β    0,−ω 0,ω  |ω|  |ω|       (1) (1)    γ0,ω  γ0,−ω     √    0   1 + ω2          √      1 √ 2    ( 2  (0) )e− 1+ω z e−iωy √ + ) ( 0 1 + ω  2 1 + ω2        =     π3 β (1)   0,−ω (1)  π β  0 −iω 2 0,−ω       (2) (3)   γ0,−ω γ0,−ω         √   2  0   1+ω           √      1 √  − 1+ω 2 z eiωy if z > 0.      2 √ ( + (0) )e ) +(  0  1+ω    2 1 + ω2             π3 β (1)   0,ω (1)  π β  0 iω 2 0,ω       (2)  γ0,ω  (3)  γ0,ω  This expression simplifies greatly when we use Euler’s identity to get our final answer:  b(1)      0 if z ≤ 0          cos(ωy)  √ =    − 1+ω2 z    e if z > 0.  0               0  34  2.2. Asymptotic Analysis Second-Order Term We apply the same procedure at second order. 1 1 2 (0) b ]|z=0 = (− cos2 (ωy)(− 1 + ω 2 + ), 0, 0)T [b(2) ]|z=0 = − cos(ωy)[∂z b(1) ]|z=0 − cos2 (ωy)[∂zz 2 2 1 1 1 1 1 1 = ( 1 + ω 2 − )(1, 0, 0)T e−2iωy + ( 1 + ω 2 − )(1, 0, 0)T + ( 1 + ω 2 − )(1, 0, 0)T e2iωy 4 2 2 2 4 2 where we again expressed the jump as a sum of complex exponentials. Of course the middle term has ω = 0. (2)  (2)  −1 In looking at (2.17) we need the terms M−1 0,−2ω c0,−2ω = M0,2ω c0,2ω = √ √ 2 − 1 )(0, 1, 0)T , and M−1 c(2) = √ 1 √ 1 ( ( 1 + ω 1 + ω 2 − 12 )(0, 1, 0)T . 0,0 0,0 2 4 1+4ω 2 2 1+4ω 2  After we apply (2.17) we conclude that:  b(2)     0 if z ≤ 0        √ √  2   1 ( 1 + ω 2 − 1 )(e−z + cos(2ωy)e− 1+4ω z ) 2 2  =       if z > 0.   0              0  We can now write the asymptotic expansion for the magnetic field to second order (after replacing z ≤ 0, z > 0 with z ≤ cos(ωy), z > cos(ωy)):  35  2.2. Asymptotic Analysis       1              0 if z ≤ cos(ωy)             0            1 cos(ωy)    √     − 1+ω2 z    −z    b = 0 e +  0  + e                0 0            e−z             + 0    ) if z > cos(ωy)            0      cos(2ωy)  √   − 1+4ω2 z  √ 2 ( 1 ( 1 + ω 2 − 1 ))( e 0 2 2       0  +O( 3 ).  (2.18)  Asymptotic Results for First Geometry The depth past the surface, t = z − h(x, y) is of more physical relevance than the z−coordinate. We therefore expand (2.18) to second order in  (so  the zeroth-order term is expanded to second order, the first-order to first order, and the second-order to zeroth order) using Taylor expansions under the substitution that z = t + cos(ωy). Here we’ll illustrate the procedure for the zeroth-order term, but the same idea holds for all the terms.  36  2.2. 
Asymptotic Analysis       1              0 if t + cos(ωy) ≤ cos(ωy)             0       1         −t− cos(ωy)   0 e if t + cos(ωy) > cos(ωy)                0       1           0 if t ≤ 0                0 =       1          0 e−t if t > 0                0       1               0 if t ≤ 0             0 =       1          −t   0 e (1 − cos(ωy) +                0      0 if t ≤ 0          − cos(ωy)  +    −t     e if t > 0  0              0  1 2 cos2 (ωy)) 2      0 if t ≤ 0          2  cos (ωy)   + 2   −t  1    e if t > 0  0  2              0  +O( 3 ). In the end, after combining and simplifying the terms up to second order  37  if t > 0  2.2. Asymptotic Analysis in , we arrive at:  b1 (y, t) =     1 if t ≤ 0  +      0 if t ≤ 0 √  +     e−t if t > 0  (e− 1+ω2 t − e−t ) cos(ωy) if t > 0    0 if t ≤ 0    √ √ 2 1 2 − 1 )(e−t + e− 1+4ω 2 t cos(2ωy))+ 1 + ω ( 2 2   √  √  1  ( e−t − 1 + ω 2 e− 1+ω2 t ) cos2 (ωy) if t > 0 2 +O( 3 ) (2.19)  with b2 and b3 identically 0. We would finally like to take (2.19) and find the average field magnitude (averaged in this case over y as x doesn’t play a role) at a given depth t. As we only need to consider one component and it is positive, we already have the field magnitude: it is just b1 . The average field is given here as  |b|avg =  ω 2π  π/ω  |b|(y, t)dy. −π/ω  Recalling that the cosine function when integrated over a period yields zero and that the average value of cos2 over a period is 12 , this immediately yields:  |b|avg (t) =     1 if t ≤ 0   e−t +  √ 21 2  1 + ω 2 (e−t − e−  √  + O( 3 ). (2.20) 1+ω 2 t  ) if t > 0 38  2.2. Asymptotic Analysis We can now determine the profile of the magnetic field for small (2.19) and (2.20). As far as what small  from  means, we need to consider this  carefully. For a fixed ω, the asymptotic results for the field and average field are accurate to within O( 3 ) and the accuracy only approaches zero as  → 0.  As |ω| → ∞, unless  decreases, the accuracy will not remain. This is √ because the second-order term goes like 2 1 + ω 2 . But if |ω| ∼ 1 then the  second-order becomes first-order and the asymptotics break down. Thus the asymptotics for  ↓ 0 is not uniformly valid for |ω| → ∞.  However, the breakdown isn’t necessarily so severe. For large |ω|, the Fourier components decay very quickly, so the region over which these larger |ω| values would cause problems would be very small. If  1 then we generally seek |ω|  1  . From now on we will always  assume ω > 0. ∞  We now consider the deadlayer. We compute 0 |b|avg (t)dt = 1 + √ 2 2 √ 2 2 2 ( 1 + ω − 1) so that δeff = 2 ( 1 + ω − 1) by (2.6). If  = 0 or if the spatial frequency goes to zero (so that the interface  looks flat) we should recover the flat solution and the dead layer should be zero. This formula is in agreement. We have selected a value of that seems physically reasonable. In speaking with the experimentalists, there is reasonable confidence the amplitude of the perturbation, is at most 0.05 [8]. We then chose ω with the condition that ω is small. Our first plot depicts the field magnitude profile predicted by the asymptotics - showing how the magnitude varies both with y and with t. This is 39  2.2. 
Asymptotic Analysis  Figure 2.7: A profile of the field magnitude with = 0.05 and ω = 2π in the y − t. Here the perturbation is quite obvious, but as t gets large the perturbation gradually disappears.  40  2.2. Asymptotic Analysis  Figure 2.8: Peak and Valley Profiles.  seen in figure 2.7. We can see some effects due to the surface roughness. We now consider the field profile for fixed values of y. When we refer to a peak profile, we will refer to a profile of the field magnitude as a function of t where cos(ωy) takes on a maximum value and when we refer to a valley we refer to a point where cos(ωy) takes on a minimum. See figure 2.8. Intuitively, the field should decay slower (with respect to t) from a valley than a peak. This is because in this geometry the field will not decay in the vacuum and points located at t > 0 in a valley profile are closer to the vacuum region (where the first component is always 1) than points at t > 0 in a peak profile. Our next plot, figure 2.9 shows the peak and valley profiles. It also includes the plot of the average field magnitude after averaging over y. The  41  2.2. Asymptotic Analysis  Figure 2.9: A profile of the field magnitude with = 0.05 and ω = 2π in the peak and valley cases, and the average field profile.  average lies right in the middle of the two extreme profiles. 2 √ Given our definition of the dead layer δeff = 2 ( 1 + ω 2 − 1), we plotted it. We held  fixed at 0.05 and varied ω from 0 to 8π. We also held ω fixed  at π and varied  from 0 to 0.15. The effective dead layer is plotted in figure  2.10. This may seem like an extreme range of values given that the asymptotics are valid for small  and ω of order one. However, in computing the  maximum difference between the asymptotic and numeric solutions (which we explain in section 2.3) at the upper end of these ranges, the sup-norm of the difference was 0.0027 (for  = 0.15 and ω = π) and 0.0137 (for  = 0.05  and ω = 8π), demonstrating that the asymptotics are reasonably accurate even in these more extreme limits. 42  2.2. Asymptotic Analysis  Figure 2.10: Left: the effective dead layer at fixed effective dead layer at fixed ω = π.  2.2.4  = 0.05. Right: the  Geometry Two: Surface with Roughness in One Spatial Direction and Applied Field Not Uniformly Parallel to Surface  First-Order Term We now consider a surface of form z =  cos(ωx) =  iωx 2 (e  + e−iωx ) with  applied field θˆ = (1, 0, 0)T . We note that in this case the applied field is no longer parallel to the surface as can be seen in figure 2.11. In this regime (ωx , ωy ) = (ω, 0). Then our zeroth-order term is  b(0) =     (1, 0, 0)T if z ≤ 0   (1, 0, 0)T e−z if z > 0.  By (2.16b), [b(1) ]z=0 = − cos(ωx)[∂z b(0) ]z=0 = 12 (1, 0, 0)T e−iωx + 21 (1, 0, 0)T eiωx  43  2.2. Asymptotic Analysis  Figure 2.11: The applied field has nonzero perpendicular components with respect to the surface.  so we compute (1) β±ω,0       ±iω1,0 ω  1 = Γ1,0   ω1,1 ω1,1 2  0    .    Using these terms as per (2.17), our resulting first order perturbation is  b(1) = Γ1,0                                           −ω 2 ω   1,0 cos(ωx)  0 −ω1,0 ω1,0 ω sin(ωx) 2 cos(ωx)   ω1,0 ω1,0   0   −ω1,0 ω1,0 ω sin(ωx)    ω1,0 z e if z ≤ 0      −ω z  e 1,0 if z > 0.    44  2.2. Asymptotic Analysis Already we see new behavior emerging: the mixing of field components (b3 is no longer zero). We also note the magnetic field in the vacuum is perturbed.  
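Before turning to the second-order term for this geometry, we note that the closed-form results of the first geometry are simple enough to evaluate directly. The MATLAB fragment below is a minimal sketch (it is not the code used to produce the thesis figures, and the variable names are ours) showing how the average profile (2.20) and the corresponding effective dead layer can be tabulated for a given ε and ω.

% Evaluate the second-order average-field profile (2.20) and the effective
% dead layer for the first geometry (surface z = eps*cos(omega*y), applied
% field along x).  Minimal illustration only -- not the thesis plotting code.
epsr  = 0.05;                 % scaled roughness amplitude epsilon
omega = 2*pi;                 % scaled spatial frequency
t     = linspace(0, 5, 501);  % scaled depth past the surface (t > 0)

s      = sqrt(1 + omega^2);
b_avg  = exp(-t) + 0.5*epsr^2*s*(exp(-t) - exp(-s*t));  % equation (2.20)
b_flat = exp(-t);                                       % flat-interface decay
delta_eff = 0.5*epsr^2*(s - 1);                         % effective dead layer

plot(t, b_flat, '--', t, b_avg, '-')
xlabel('scaled depth t'), ylabel('average field magnitude')
legend('flat interface', 'rough interface')
fprintf('effective dead layer = %.3g\n', delta_eff)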
Second-Order Term By carrying out similar computations as for the case of the perturbation z = cos(ωy) now that we know b(1) and b(0) we find:  b(2) =                                               4ωE2,0,1 cos(2ωx)   ω2,0 z  e  if z ≤ 0 0     2ω2,0 E2,0,1 sin(2ωx)    2ω2,0 E2,0,2 cos(2ωx)   E0,0,2    −ω z   e 2,0 +  0  e−z if z > 0 0       −4ωE2,0,2 sin(2ωx) 0  where E2,0,1 = Γ2,0 (− 12 ω2,0 ωσ1 − 14 ω2,0 2 σ3 ), E2,0,2 = Γ2,0 ( 14 ω2,0 ω2,0 σ1 − 1 2 ω2,0 ωσ3 ),  E0,0,2 = 12 σ1 and where σ1 = −Γ1,0 (−ω1,0 ω1,0 3 + ω1,0 ω1,0 ω 2 ) −  1 2  and σ3 = −Γ1,0 (ω1,0 ω1,0 2 ω + ω1,0 2 ω1,0 ω). Asymptotic Results for Second Geometry We have expressed b to second order in with the variable t = z − cos(ωx). Although we do not write the expression here, it is of the form b = b(0) (t) + b(1) (t) +  2 b(2) (t)  + O( 3 ).  45  2.2. Asymptotic Analysis To find the field magnitude to second order in 2  2  we first compute  2  |b|2 = (b(0) 1 + b(0) 2 + b(0) 3 ) + (2b(0) 1 b(1) 1 + 2b(0) 2 b(1) 2 + 2b(0) 3 b(1) 3 ) 2  2  2  + 2 (2b(0) 1 b(2) 1 + 2b(0) 2 b(2) 2 + 2b(0) 3 b(2) 3 + b(1) 1 + b(1) 2 + b(1) 3 ) + O( 3 ). This is in full generality. Many of the terms are zero. We label the zeroth order term as u0 , the first order as u1 and the second order as u2 so that  |b| =  |b|2 =  2u  u0 + u1 +  2  + O( 3 ) =  √  u0 (1+  u2 u2 u1 + 2( − 12 ))+O( 3 ). 2u0 2u0 8u0 (2.21)  Given this, we could compute the average field magnitude by averaging over x. The result is given below:  |b|avg (t) =      1+    e−t +  2 (ρ  2 (ρ  −t 3e  2ω1,0 t 1e  − ρ2 eω1,1 t ) if t ≤ 0  − ρ4 e−ω1,0 t + ρ5 e(1−2ω1,0 t )) if t > 0  where we define ρ1 = 14 Γ21,0 ω1,0 2 ω1,0 2 ω 2 , ρ2 = 12 Γ1,0 ω1,0 ω1,0 ω 2 , ρ3 = (E0,0,2 + 1 4 ),  ρ4 = 21 Γ1,0 ω1,0 ω1,0 3 , and ρ5 = 14 Γ21,0 ω1,0 2 ω1,0 2 ω 2 . The effective dead layer is computed by (2.6) and we obtain  δeff =  2  (ρ3 −  ρ5 ρ4 + ). ω1,0 2ω1,0 − 1  In this second geometry, we plot the first component of the field in a peak and valley profile. At both the peak and valley, the third component has value zero so we need to select another value of x (in this case −0.26). Here, 46  2.2. Asymptotic Analysis  Figure 2.12: Left: a profile of b1 with = 0.05 and ω = 2π in the peak and valley cases. Right: a profile of b3 with = 0.05, ω = 2π and x = −0.26 fixed.  peaks and valleys refer to slices of constant x which maximize or minimize cos(ωx) respectively. The plot of both the first and third component is given in figure 2.12. The profile of the average field is given in figure 2.13. We also show the effective dead layer in figure 2.14, over the same ranges of  and ω as in the previous geometry. The respective sup-norm errors  verified numerically in section 2.3 for  = 0.15 with ω = π and for  = 0.05  with ω = 8π are 0.00462 and 0.00641 respectively. This geometry is a lot more interesting. We see that with respect to the first component of the field, the decay begins even before the interface (when looking at the peak profile). We also note that the individual components can take on values that exceed their value at z = −∞. It might be surprising, but there’s nothing that says this cannot happen. Indeed, we have that  b = 0 in the vacuum  region and therefore each component should obey the maximum/minimum 47  2.2. Asymptotic Analysis  Figure 2.13: A profile of |b|avg with  = 0.05 and ω = 2π.  Figure 2.14: Left: the effective dead layer at fixed effective dead layer at fixed ω = π.  = 0.05. Right: the  48  2.2. 
Asymptotic Analysis principle (because it is harmonic). The maximum principle states that if f is a harmonic function in a domain D then its maxima and minima lie on ∂D (the boundary). In our case, the maxima of b1 and b3 lie on the boundary of the vacuum region - right on the interface. The dead layer is actually negative here. By adding roughness in the direction of the field, the field is no longer constant on the vacuum side, and the net effect is to diminish the average field magnitude as compared to the flat geometry (so that δeff < 0). We also note that the magnitude of the dead layer in this geometry is much smaller than in the first geometry.  2.2.5  Geometry Three: Surface with Roughness in Two Spatial Directions  First-Order Term Again we take the applied field as θˆ = (1, 0, 0)T and we have the same base solution as in the previous geometries. By (2.16b), after decomposing the surface z =  cos(ωx x) cos(ωy y) into components z =  1 i(−ωx −ωy ) 4 (e  +  ei(−ωx +ωy ) + ei(ωx −ωy ) + ei(ωx +ωy ) ) we require 1 [b(1) ]|z=0 = (ei(−ωx −ωy ) + ei(−ωx +ωy ) + ei(ωx −ωy ) + ei(ωx +ωy ) )(1, 0, 0)T . 4  We then compute four terms such terms as  M−1 ωx ,ωy  1 4         0  and use (2.17)     0  to get the first-order perturbation:  49  2.2. Asymptotic Analysis  b(1) =                      Γ1,1     −ω11 ωx2 cos(ωx x) cos(ωy y) ω1,1 ωx ωy sin(ωx x) sin(ωy y) −ω1,1 ω1,1 ωx sin(ωx x) cos(ωy y)       ω1,1 z e if z ≤ 0     2     ω1,1 (ω1,1 ω1,1 + ωy ) cos(ωx x) cos(ωy y)      Γ1,1   ω1,1 ωx ωy sin(ωx x) sin(ωy y)         −ω1,1 ω1,1 ωx sin(ωx x) cos(ωy y)     −ω z  e 1,1 if z > 0.    Second-Order Term The second order term can be worked out using the same algorithms as in previous sections and we find  50  2.2. Asymptotic Analysis  b(2) =                                                                     +                                    −8E2,2,1 ωx cos(2ωx x) cos(2ωy y)   ω2,2 z  e  8E ω sin(2ω x) sin(2ω y) 2,2,1 y x y     −4E2,2,1 ω2,2 sin(2ωx x) cos(2ωy y)    −4E2,0,1 ωx cos(2ωx x)    ω2,0 z e + if z ≤ 0 0     −2E2,0,1 ω2,0 sin(2ωx x)   4E2,2,2 ω2,2 cos(2ωx x) cos(2ωy y)    −ω z   e 2,2  ω sin(2ω x) sin(2ω y) −4E 2,2 x y 2,2,3     (−8E2,2,2 ωx − 8E2,2,3 ωy ) sin(2ωx x) cos(2ωy y)    2E2,0,2 ω2,0 cos(2ωx x)   2E0,2,2 ω0,2 cos(2ωy y)   −ω z   −ω z  e 2,0 +   e 0,2 0 0       −4E2,0,2 ωx sin(2ωx x) 0   σ1   4    −z  +  0  e if z > 0   0  1 ω2,2 2 σ3 ), where we define the E’s by: E2,2,1 = Γ2,2 ( 81 ω2,2 ωx σ1 − 18 ω2,2 ωy σ2 + 16 1 (4ωy2 +ω2,2 ω2,2 )σ1 + 14 ωx ωy σ2 − 81 ω2,2 ωx σ3 ), E2,2,3 = Γ2,2 (− 14 ωx ωy σ1 − E2,2,2 = Γ2,2 ( 16 1 2 16 (4ωx  + ω2,2 ω2,2 )σ2 − 18 ω2,2 ωy σ3 ), E2,0,1 = Γ2,0 ( 41 ω2,0 ωx σ1 + 18 ω2,0 2 σ3 ),  E2,0,2 = Γ2,0 ( 18 ω2,0 ω2,0 σ1 − 14 ω2,0 ωx σ3 ), and E0,2,2 = Γ0,2 ( 18 (4ωy2 +ω0,2 ω0,2 )σ1 ). We also have that σ1 = − 21 − J1 , σ2 = Γ1,1 (ω1,1 2 ωx ωy + ω1,1 ω1,1 ωx ωy ), σ3 = Γ1,1 (−ω1,1 ω1,1 2 ωx −ω1,1 2 ω1,1 ωx ), and J1 = Γ1,1 (−ω1,1 2 (ωy2 +ω1,1 ω1,1 )+ ω1,1 ω1,1 ωx2 ).  51  2.2. Asymptotic Analysis Asymptotic Results for Third Geometry We can now expand the solution in powers of  with the variable t =  z − cos(ωx x) cos(ωy y). We don’t show the result here as it is very complex, but we emphasize that our result has been verified thoroughly with Maple. 
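To give a concrete sense of what such a symbolic check involves, here is the analogous (much simpler) verification for the first-order term of the first geometry, written with MATLAB's Symbolic Math Toolbox purely as an illustration; the thesis checks themselves were carried out in Maple on the full third-geometry expressions.

% Check symbolically that the first-order term of the first geometry,
%   b1 = cos(omega*y) * exp(-sqrt(1+omega^2)*z)   for z > 0,
% satisfies the London equation  Laplacian(b1) = b1  in the superconductor.
syms y z omega real
b1 = cos(omega*y) * exp(-sqrt(1 + omega^2)*z);
residual = simplify(diff(b1, y, 2) + diff(b1, z, 2) - b1);
disp(residual)   % displays 0, so the PDE is satisfied identically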
We first checked that before re-expressing in terms of t, the terms on both the vacuum and superconducting sides satisfied the PDEs. Taking these expressions, we expanded to second order in with z = t+ cos(ωx x) cos(ωy y). Then we substituted t = 0 into the asymptotic solution and verified that for randomly chosen x, y, ωx , and ωy the solution was continuous at each order in . Using maple, we can also compute  |b|avg (t) =  ωx ωy 4π 2  π/ωy  π/ωx  −π/ωy  −π/ωx  |b|(x, y, t)dxdy  where |b| has been expressed in powers of  up to second order (by means of  (2.21)). The result is given below:  |b|avg (t) =      1+    e−t +  2 (−ρ  2 (ρ  2ω1,1 t 1e  −ω1,1 t 3e  (2.22)  + ρ4 e(1−2ω1,1 )t − ρ5 e−t ) if t > 0  where we have the definitions ρ1 = 1 2 4 Γ1,1 ω1,1 ω1,1 ωx ,  − ρ2 eω1,1 t ) if t ≤ 0  1 2 2 2 2 8 Γ1,1 (ω1,1 ωx ωy  + ω1,1 2 ω1,1 2 ωx2 ), ρ2 =  ρ3 = 14 Γ1,1 (ω1,1 2 ωy2 + ω1,1 ω1,1 3 ), ρ4 = 18 Γ21,1 (ω1,1 2 ω1,1 2 ωx2 +  ω1,1 2 ωx2 ωy2 ), and ρ5 =  J1 4 .  52  2.2. Asymptotic Analysis By (2.6) we find in this model that  δeff =  2  (−  ρ4 ρ3 + − ρ5 ). ω1,1 1 − 2ω1,1  We now compute profiles for the field components and average field with = 0.05 and (ωx , ωy ) = (2π, 2π). In this geometry, peaks and valleys are not uniquely defined in terms of fixed x and y values. We define peaks as occurring at x = −π/ωx and y = −π/ωy and valleys as occurring at x = 0 and y = −π/ωy . Both the second and third components vanished at peaks and valleys so to obtain any profile, we selected pairs of (x, y) where they did not vanish. Figure 2.15 displays these results. In this regime, the average field still looks very much like the flat solution and we do not include its plot. Again we see b1 exceeding its value of b1 (z = −∞) = 1. The dead layer is another interesting point to investigate. We explore it from three angles, displayed in figures 2.16 and 2.17. Initially, we fix  = 0.05 and set ωx = ωy = ω and plot how the dead  layer varies with what we could call the net spatial frequency  ωx2 + ωy2 .  We also fix (ωx , ωy ) = (π, π) and plot the dead layer as a function of . As verified numerically, the sup-norm of the errors at the upper end of these plots were 0.0174 and 0.00662 respectively. Then, for fixed  ωx2 + ωy2 = 8π and  layer varies with the ratio  ωx ωy .  = 0.05, we explore how the dead  As ωx → 0, we would expect recover the first  geometry, and as ωy → 0, we would expect to recover the second geometry. The plot depicts qualitatively what we expect. As the ratio gets small, the  53  2.2. Asymptotic Analysis  Figure 2.15: Top left: a profile of b1 from peak and valley. Top right: profile of b2 with (x, y) = (−0.27, −0.27). Bottom left: profile of b3 with (x, y) = (−0.27, −0.50). Bottom right: difference between average field profile in perturbed geometry and flat geometry. All figures computed with = 0.05, and (ωx , ωy ) = (2π, 2π).  54  2.2. Asymptotic Analysis  Figure 2.16: Left: the effective dead layer at fixed = 0.05 and ωx = ωy .. Right: the effective dead layer at fixed (ωx , ωy ) = (π, π).  Figure 2.17: The effective dead layer at fixed = 0.05 and (ωx2 +ωy2 )1/2 = 8π.  55  2.2. Asymptotic Analysis dead layer increases. As it gets big then the roughness acts more and more in the direction of the field and the dead layer decreases. More interestingly, as the ratio reaches zero, the dead layer of (ωx , ωy ) = (8π, 0) is approximately half of the dead layer in the first geometry with ω = 8π. 
Also, as the ratio reaches its maximum value, the dead layer reaches a value of −2.95 × 10−4 (this value is off the plot range of the graph), which is also about half of the dead layer size in the second geometry with (0, 8π). This may seem very surprising, but we show exactly where this factor of 1 2  comes from in appendix C. Ultimately, taking a limit as a spatial frequency  approaches zero in the third geometry is unphysical experimentally because physicists can only average over a finite range (if a frequency goes to zero, the region over which averaging needs to take place grows without bound). In the next section we turn to numerical computations.  56  2.3. Finite Difference Program for Geometry One  2.3  Finite Difference Program for Geometry One  Here we are considering an interface z = cos(ωy), with applied field (1, 0, 0)T which we studied asymptotically in section 2.2.3.  2.3.1  Numerical Formulation  Constant Field on the Vacuum Side We shall take the field to be constant on the vacuum side. Here we show that there is a solution where the field is constant on the vacuum side. Uniqueness is not proven here, but given the three-dimensional numerical work that follows (section 2.4) it seems highly likely. If we take       1             0 if z ≤ cos(ωy)             0  b=     b(y, z)            0  if z > cos(ωy)                0  with the condition [b]|z=  cos(ωy)  = 0, then as long as  b(y, z) = b(y, z),  b(y, cos(ωy)) = 1 and b(y, z) → 0 as z → ∞ (which we will solve for numerically) all PDEs and boundary conditions are satisfied.  57  2.3. Finite Difference Program for Geometry One Coordinate Transformations Trying to implement a finite difference mesh on a sinusoidal surface is extremely difficult. In addition, the main variable of interest is the depth past the sample surface - not the z−coordinate. We can solve both problems by a transformation of coordinates. Here we will define σ = y and t = z − cos(ωy). We now need to find the Laplacian (∂yy + ∂zz ) in the new coordinates.  ∂y = ∂y σ∂σ + ∂y t∂t = ∂σ + ω sin(ωy)∂t = ∂σ + ω sin(ωσ)∂t  Then  ∂yy = ∂σ (∂σ + ω sin(ωσ)∂t ) + ω sin(ωσ)∂t (∂σ + ω sin(ωσ)∂t ) = ∂σσ + ω 2 cos(ωσ)∂t + ω sin(ωσ)∂tσ + ω sin(ωσ)∂σt +  2 2  2 ω sin2 (ωσ)∂tt .  And ∂z = ∂z σ∂σ + ∂z t∂t = ∂t . So clearly ∂zz = ∂tt . We arrive at the Laplacian in the new coordinates (after assuming all functions are C 2 and making use of the equality of mixed partial derivatives):  = ∂σσ + (1 +  2 2  ω sin2 (ωσ))∂tt + 2 ω sin(ωσ)∂σt + ω 2 cos(ωσ)∂t . 58  2.3. Finite Difference Program for Geometry One There’s one more thing that needs to be considered. Given that we know the field is zero at t = ∞, it must decay far from the interface. Near the interface itself is where the most interesting phenomena will occur. In terms of a mesh, we need more points closer to the interface to numerically resolve all the intricacies in the solution and fewer points farther away. This will save us computation time. It is also helpful as numerical results can be unreliable if two dimensions of a mesh differ significantly. This aspect of minimizing grid points is of paramount importance later on in our three-dimensional code. To achieve these extra grid properties, one more coordinate transformation is needed. We will do this transformation in the next section as we describe how we are building our mesh.  The Grid We will have a mesh in (σ, τ )−coordinates where τ is a transformation of t. 
We will enforce periodicity in σ with spatial period 2π/ω. We need only consider the single nonzero component of b in this analysis. Instead of calling it b1 we can just call it b. On the vacuum side (up to t = 0) we have b = 1. We will focus on the superconducting side in our mesh. Numerically we will impose t = ∞ as t = M  1.  We now pick a natural number N and we discretize in σ. We choose  hσ =  2π Nω  59  2.3. Finite Difference Program for Geometry One  Figure 2.18: Near t = 0 the grid is square but as t goes farther out the spacing in the t−direction increases.  and choose σi = −  π + (i − 1)hσ for i = 1, 2, ...N. ω  Because we want both grid dimensions to scale with N we will choose a similar discretization for τ :  τj = α + (j − 1)hσ for all j ∈ N for which τj ≤ β. We call the number of τ points Nτ . We have some freedom over the choice of α and β and will select them later. Now we consider what we want in terms of a mesh. We would like that the t−spacing is roughly equal to σ−spacing near the interface (to resolve details), but then far away we would like the t−spacing to scale roughly with M/N. See figure 2.18. Near the interface if ht denotes the t−spacing then we want ht ≈ hσ . Thus, given that τ is incremented in units of hσ near the interface  dt dτ  ≈ 1. 60  2.3. Finite Difference Program for Geometry One  Figure 2.19: A plot of τ = f (t)  Far from the interface, we want ht ≈ define η =  2π Mω  to be the value of  dτ dt  M N  =  Mω 2π hσ .  So  dt dτ  ≈  Mω 2π .  We will  near t = M.  The algebra is a little easier if we consider τ = f (t) where f is invertible and satisfies f (0) = α < β = f (M ). We will consider f to have the form a log(t − b). See figure 2.19 for the general shape of f. From the derivative conditions we have shows us that a =  ηM 1−η  > 0 and b =  −ηM 1−η  a −b  = 1 and  a M −b  = η which  < 0.  Knowing a and b, we now calculate α = a log(−b) and β = a log(M − b). Under this transformation, ∂t = f (t)∂τ =  a t−b ∂τ  = ae−τ /a ∂τ (this comes  from observing if τ = a log(t − b) then 1/(t − b) = 1/eτ /a ). We also have ∂tt = e−2τ /a (−a∂τ + a2 ∂τ τ ). Substituting the τ − derivatives in place of t−derivatives gives:  61  2.3. Finite Difference Program for Geometry One  ∆ = ∂σσ + a2 (1 +  ω sin2 (ωσ))e−2τ /a ∂τ τ + 2a ω sin(ωσ)e−τ /a ∂στ +  2 2  ξ (1)  ξ (2)  a( ω 2 cos2 (ωσ)e−τ /a − (1 +  ω sin2 (ωσ))e−2τ /a ) ∂τ .  2 2  ξ (3)  So a generic point on the mesh could be described by (σi , τj ). It is also necessary to map (i, j) coordinates to just a single number, K(i, j) = j +(i− 1)Nτ . This way we can turn the whole PDE system into a matrix equation of form M u = f where M is a matrix that comes from the discretized equations, u is our unknown, and f is the load vector (see section 1.1.1).  The Discretized Equations Conditions along the Interface and at z = ∞  The interface corre-  sponds to j = 1. The field being 1 on the interface corresponse to MK(i,1),K(i,1) = 1 with fK(i,1) = 1 for all i. The far-field equation is for j = Nτ . The field being 0 at τNτ corresponds to MK(i,Nτ ),K(i,Nτ ) = 1 and fK(i,Nτ ) = 0 for all i. Laplacian Here we consider 2 ≤ j ≤ Nτ − 1. In this regime, we have the following second order approximations:  ∂σσ b(σi , τj ) =  uK(i+1,j) − 2uK(i,j) + uK(i−1,j) + O(h2σ ) h2σ  ∂τ τ b(si , tj ) =  uK(i,j+1) − 2uK(i,j) + uK(i,j−1) + O(h2σ ) h2σ  62  2.3. 
Finite Difference Program for Geometry One  ∂στ b(si , tj ) =  uK(i+1,j+1) + uK(i−1,j−1) − uK(i+1,j−1) − uK(i−1,j+1) + O(h2σ ) 4h2σ ∂τ b(σi , tj ) =  uK(i,j+1) − uK(i,j−1) + O(h2σ ) 2hσ  for all i, where to impose periodicity, when i = 1 we replace i − 1 with Ns and when i = Ns we replace i + 1 with 1. Inside the superconductor we impose the discretized version of ∆b = b by choosing the appropriate rows for M . Defining discretized ξ variables (so (2)  that ξi,j = 2a ω sin(ωsi )e−τj /a , for example) our matrix M is assembled as: (1)  MK(i,j),K(i,j) =  −2ξi,j h2σ  −  MK(i,j),K(i±1,j) =  2 −1 h2σ  1 h2σ (3)  (1)  MK(i,j),K(i,j±1) =  ξi,j  h2σ  ±  ξi,j  2hσ  (2)  MK(i,j),K(i+1,j+1) =  ξi,j  4h2σ (2)  MK(i,j),K(i+1,j−1) = −  ξi,j  4h2σ (2)  MK(i,j),K(i−1,j+1) = −  ξi,j  4h2σ (2)  MK(i,j),K(i−1,j−1) =  ξi,j  4h2σ  for all i. We also set fK(i,j) = 0.  63  2.3. Finite Difference Program for Geometry One Finding the Results We define a discretization of t :  tj = eτj /a + b for j = 1, ..., Nτ .  The approximate magnetic field at (y = σi , t = tj ) is given by uK(i,j) . Where necessary, an average field at depth tj can be found by averaging uK(i,j) over i with j fixed.  2.3.2  Validation  The program can handle a larger range values of ω and  than the asymp-  totics. However, we test the program first to see it is providing reasonable results. We found that M = 9.25 and N = 50 provided results that were accurate to within 10−4 - an error we feel is very acceptable. Generally these are the parameters we set.  Order of Convergence with Respect to Exact Solution If = 0 then for any arbitrary ω we have a flat superconductor so the solution should decay exponentially past the superconducting surface at exp(−t). We selected ω = π with N = 50 and plotted result. The numerical results are shown in figure 2.20.  64  2.3. Finite Difference Program for Geometry One  Figure 2.20: A verification that the two-dimensional code has the right behaviour in the flat geometry.  We can also choose boundary conditions so that the exact solution is exp(−z) even for nonzero . We observe that if b(y, z) = exp(−z) then b = b and b(y, cos(ωy)) = b(σ, t = 0) = exp(− cos(ωσ)). That’s what we choose as our boundary condition. If the error E = ||bnum − bex ||∞ is of the order h2σ (which is of the order N −2 ) then on a log-log plot, we would expect that log E is a linear function of log N with slope −2. The subscripts “num” and “ex” denote the numerical and exact solutions respectively. Later on “asy,k” will signify the k th -order asymptotic solution. The plot verifying second order convergence is given in figure 2.21. From these results, the error at N = 60 (not plotted) corresponded to 1.42 × 10−4 which is very near the error in setting b(M ) = exp(−9.25) ≈ 65  2.3. Finite Difference Program for Geometry One  Figure 2.21: Checking second order convergence by observing the error behaviour where the exact solution exp(−z) is known. Here = 0.1 and ω = 2π.  66  2.3. Finite Difference Program for Geometry One 9.61 × 10−5 to 0. With the same amplitude and frequency, we set M = 11 and increased N until the error plateaued. The error was 2.26 × 10−5 and exp(−11) ≈ 1.67 × 10−5 . These results give us estimates on the errors coming from the approximation of the far field condition at a finite length from the interface; it roughly scales with exp(−M ).  Comparison with Asymptotic Solution By selecting large N, in this case N = 50 and M = 9.25, the numerical solution should be very close to the exact solution. 
We would therefore expect the asymptotics when taken to the second order term to converge at a rate of  3  to the numerical solution. However, this is a very delicate  computation (and it is even more delicate in the three-dimensional code). We have bnum = bex + O(h2σ ) + O(λ) (where λ represents the far-field error). Also, we have basy,2 = bex + O( 3 ). Subtracting the numerical and second-order asymptotic solutions yields a difference O(h2σ ) + O(λ) + O( 3 ). In general, the error terms O(h2σ ) and O(λ) can depend upon . If too small then the O( 3 ) term cannot be detected. Once  is  gets too large,  the fourth-order term in the asymptotics could exceed the third-order term and once again we wouldn’t detect O( 3 ). For this code, by choosing in increments of 0.05 from 0.05 to 0.3, we can verify the order of convergence is O( 3 ) by computing the sup-norm of the difference between the asymptotic and numerical solution at the different values of the amplitude. See figure 2.22.  67  2.3. Finite Difference Program for Geometry One  Figure 2.22: Checking the convergence order of the asymptotics. Here ω = π.  2.3.3  Results  Having established the validity of the simulation and having some understanding of the level of discretization needed to obtain a desired accuracy, we are now in a position to examine some physical problems. We start off by picking a case we have also studied in the asymptotic analysis of the first geometry. We choose  = 0.05 and ω = 2π. A plot  depicting the field magnitude on average and from a peak and valley can be found in figure 2.26. The results are nearly identical to the asymptotic values. We can also select values of  and ω that are quite a bit larger. In  particular, we choose a case with = 0.2 and ω = 100π. This plot is found in  68  2.3. Finite Difference Program for Geometry One  Figure 2.23: Profiles of the field magnitude at = 0.05 and ω = 2π. Figure displays decay from a peak and valley, and the average magnitude.  figure 2.24. Figure 2.24 depicts what we would expect intuitively. Given the very high spatial frequency, we would expect the field to be nearly constant until past the deepest peaks (because it must be continuous and have the constant value of 1 in the vacuum). As a result, the field decay should be delayed when looking along a valley profile. In the plot, the field seems to be almost constant in the valley profile for a scaled depth of nearly 0.4. Even the peak profile is significantly larger than the flat profile.  69  2.4. Finite Difference Program for General Sinusoidal Surface  Figure 2.24: Profiles of the field magnitude at = 0.2 and ω = 100π. Figure displays decay from a peak and valley and shows the flat geometry solution.  2.4  Finite Difference Program for General Sinusoidal Surface  Here we consider the surface z =  cos(ωx x) cos(ωy y) with applied field  (1, 0, 0)T .  2.4.1  Numerical Formulation  In this three-dimensional setting, all three components of the field need to be solved for both on the vacuum and superconducting sides. This higherdimensional setting along with having two regions with different properties to consider makes this work considerably more difficult. We consider the formulation in parts. Our overall formulation (based on 70  2.4. Finite Difference Program for General Sinusoidal Surface  Figure 2.25: Formulation for three-dimensional code.  the subsequent subsections) is displayed in figure 2.4.1.  Vacuum Side Interior Within the vacuum we need to simultaneously satisfy ∇×b = 0 and ∇•b = 0. 
We make use of the vanishing curl to write b = ∇g where g : R3 → R is a scalar function (which for our purposes we will assume is at least C 2 ). This then allows us to replace ∇ • b = ∇ • (∇g) with  g = 0.  Superconducting Side Interior In the superconductor we require that both  ∇•b=0  71  2.4. Finite Difference Program for General Sinusoidal Surface and b = b.  (2.23)  In the interior we choose to only impose the Laplacian condition and impose the divergenceless condition on the boundary. This is noted below. As the divergence and vector Laplacian operators commute i.e. ∇ • ∇• then we can take the divergence of (2.23) to get ∇• b = ∇•b = so that  = ∇•b  q = q where q = ∇ • b.  If q = 0 on the boundary of the superconducting region and at z = +∞ with  q = q, then q = 0 everywhere in the interior. Thus, we aim to  numerically implement ∇ • b = 0 on the interface and at z = +∞. Boundary Conditions at z = ±∞ Given that b = ∇g = (∂x g, ∂y g, ∂z g)T , along z = −∞ it is only possible to specify ∂x g and ∂y g. We recall, however, that in the asymptotic analysis that no solutions were possible if the third component of b were nonzero at z = −∞. As it turns out, we don’t need to impose the value of the third component at z = −∞ and our results are still consistent. To impose b(x, y, −∞) = (1, 0, 0)T , it would at first seem appropriate to choose g(x, y, −∞) = x; however, given that our numerical formulation is based upon periodic boundary conditions this cannot work because x is not a periodic function (of x). We introduce g˜ = g − x. Then b = ∇˜ g + (1, 0, 0)T and  g=  This b still satisfies all the PDEs and boundary conditions. If then ∇ • b = ∇ • ∇(˜ g + (1, 0, 0)T ) =  g˜. g=0  g = 0. And given that b = ∇g we  72  2.4. Finite Difference Program for General Sinusoidal Surface must have that ∇ × b = 0. There is a potential concern that this g˜ variable may not be periodic. However, given that the b must be periodic, we can show that g˜ is, too. We give this proof in appendix D. Our far-field vacuum boundary conditions are g˜ = 0 at z = −∞. At z = ∞, we need to impose both b = 0 and ∇ • b = 0. We impose the zero conditions on the first two components directly i.e. b1 (x, y, ∞) = b2 (x, y, ∞) = 0. This also forces ∂x b1 (x, y, ∞) = ∂y b2 (x, y, ∞) = 0. We now focus on the divergence condition and demand that ∂z b3 (x, y, ∞) = 0. Given the analysis in the asymptotic work, we see that the Fourier components would involve exponentials of multiples of z and ∂z could only go to zero if all the exponentials decayed. Thus ∂z b3 (x, y, z) → 0 as z → ∞ also weakly imposes b3 (x, y, ∞) = 0. Interface Conditions At the interface we need to specify a scalar equation (representing the vacuum-side interface condition) and a vector equation (representing the superconducting side interface condition). We impose ∇ • b = 0 on the superconducting side of the interface, and b = ∇˜ g + (1, 0, 0)T on the interface as discussed in section 2.4.1.  Coordinate Transformation Again, we note that the most interesting phenomena occur near the interface and so, along with considering the depth relative to the interface, t = z − cos(ωx x) cos(ωy y), we also stretch our coordinates as in the two dimensional 73  2.4. Finite Difference Program for General Sinusoidal Surface code. We define our coordinates as ρ = x, σ = y, and τ =     av log(−t − bv ) if t ≤ 0   as log(t − bs ) if t > 0.  with av , bv , as , bs being determined by the numerical approximations to ±∞. 
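These stretching constants play the same role as a and b in the two-dimensional code. As a simplified, one-sided illustration of how such a grid is generated (following the construction of section 2.3.1 with its M, N and ω; the two-sided constants av, bv, as and bs of the three-dimensional code are obtained in the same spirit, and this sketch is not the thesis code):

% One-sided logarithmic stretching tau = a*log(t - b), as in section 2.3.1:
% t-spacing ~ h_sigma near the interface, ~ M/N near t = M.
M      = 9.25;  N = 50;  omega = 2*pi;
h_sig  = 2*pi/(N*omega);          % sigma spacing
eta    = 2*pi/(M*omega);          % value of d(tau)/dt near t = M
a      =  eta*M/(1 - eta);
b      = -eta*M/(1 - eta);
tau    = a*log(-b) : h_sig : a*log(M - b);   % uniform grid in tau
t      = exp(tau/a) + b;                     % induced stretched grid in t
% t(1) = 0 at the interface; t(end) is approximately M.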
This provides us with a new set of differential operators in the (ρ, σ, τ ) parameters. Much of the work is very similar as in the two dimensional code so we summarize the results here:  ∂x = ∂ρ ± ωx sin(ωx ρ) cos(ωy σ)ae−τ /a ∂τ ξ (1)  ∂y = ∂σ ± ωy cos(ωx ρ) sin(ωy σ)ae−τ /a ∂τ ξ (2)  ∂z = ±ae−τ /a ∂τ ξ (3)  and  = ∂ρρ + ∂σσ ±2 ωx sin(ωx ρ) cos(ωy σ)ae−τ /a ∂τ ρ ξ (4)  ±2 ωy cos(ωx ρ) sin(ωy σ)ae−τ /a ∂τ σ + ξ (5)  [( (ωx2 + ωy2 )ωy2 cos(ωx ρ) cos(ωy σ))(±ae−τ /a )+ ( 2 ωx2 sin2 (ωx ρ) cos2 (ωy σ) +  2 ω 2 cos2 (ω ρ) sin2 (ω σ) x y y  + 1)(−ae−2τ /a )]  ∂τ +  ξ (6)  74  2.4. Finite Difference Program for General Sinusoidal Surface [( 2 ωx2 sin2 (ωx ρ) cos2 (ωy σ) +  2 2 ωy cos2 (ωx ρ) sin2 (ωy σ)  + 1)(a2 e−2τ /a )] ∂τ τ  ξ (7)  where a = av on the vacuum side and as on the superconducting side and we choose − for the vacuum side and + for the superconducting side. In this transformation we have a flat interface in the new coordinates.  The Grid Another difficulty in this numerical work is in setting up a grid. Because the magnetic field is given in terms of the gradient of the vacuum g˜ function, it is necessary to consider two grids that interlock at the interface. We are given ωx , ωy , and . We define a parameter N describing the number of points in the ρ− and σ− directions and Mm and Mp describing how far the grid spans in the vacuum and superconducing sides respectively. Given N, we proceed to define: Our spacings, hρ =  2π Nx ωx ,  hσ =  2π Ny ωy ,  ing parameters, ηv =  N hτ Mm ,  ηs =  N hτ Mp ,  h ηs Mp + 2I ηs −1  hτ = min{hρ , hσ }; and our stretchbv =  ηg Mm + h2τ ηg −1  , hI =  Mp N ,  bs =  , av = βg + h2τ , as = −βs , αv = av log(Mm −bv ), βv = av log( h2τ −bv ),  αs = as log(−bs ), βs = as log(Mp − bs ). The careful selection of these α and β parameters allows for the grids to interlock in the right way. When ωx or ωy are zero we defined hρ or hσ respectively to be 1. We set ρi = −  π + (i − 1)hρ for 1 ≤ i ≤ N, ωx  75  2.4. Finite Difference Program for General Sinusoidal Surface σj = −  π + (j − 1)hσ for 1 ≤ j ≤ N, ωy ρi = ρ i +  hρ , 2  σj = σj +  hσ 2  and if ωx and/or ωy are zero then we only choose three coordinates −1, 0, 1 for the ρ s and/or σ s. We set Nρ to be the number of ρ− points and Nσ to be the number of σ−points. v s We set Nv = [ βvh−α ] + 1 and Ns = [ βsh−α ] + 1 with [ ] being the integer τ τ  floor function and define τ Nv −k+1 = βv − (k − 1)hτ for k = 1, ..., Nv and τ k = αs + (k − 1)hτ for k = 1, ..., Ns . In the end we have Nx Ny (Nv + 3Ns ) variables and equations to define. We define the mapping K that takes four coordinates and maps them to a single number:  K(i, j, k, ) =      (k − 1)Nx Ny + (j − 1)Nx + i if  =0    Nx Ny Nv + ( − 1)Nx Ny Ns + (k − 1)Nx Ny + (j − 1)Nx + i if where to impose the periodic boundary conditions, if i = 0 then it must be replaced by i = Nρ and if i = Nρ + 1 it is replaced by 1 and similarly for j with Nσ . Based on this numbering, the i, j, and k describe the ρ, σ, and τ positions. The number  = 0 describes the g˜ variable (the variable in the vacuum  including its ghost points), and  = 1, 2, 3 describes the first, second, and  third components of the magnetic field respectively in the superconducting 76  =0  2.4. Finite Difference Program for General Sinusoidal Surface  Figure 2.26: The mesh in the three-dimensional system. Note how g˜ and b meshes interlock. The circles with crosses indicate the points occur deeper into the page than the circles wit the dots. 
At each circle with a cross, three components are specified. At each circle with a dot, the scalar value of g˜ is specified.  coordinates. To discretize t, the depth past the surface, we define:  tk =      tk = −e(τ k +τ k+1 )/(2av ) − bv for 1 ≤ k ≤ Nv − 2    tk = e(τ k+2−Nv )/as + bs for Nv − 1 ≤ k ≤ Nv + Ns − 3.  This imposes t1 ≈ −Mm +  hI 2 ,  tNv −1 = 0, and tNv +Ns −3 ≈ Mp . Figure  2.26 illustrates the mesh.  77  2.4. Finite Difference Program for General Sinusoidal Surface The Discretized Equations Here we present how the equations were handled. We will define u as the unknown vector, M as the matrix, and f as the load vector. The equations change based upon the τ position so in each slice the equations hold for all i and j. We also discretize the ξ variables, like in the two-dimensional code, with (2)  indices (i, j, k, ). For example, ξi,j,k,0 = − ωy cos(ωx ρi ) sin(ωy σ j )av e−τ k /av Conditions at −∞ This corresponds to k = 1 and  = 0.  We need to impose g˜(ρ, σ, −∞) = 0. To do so we set fK(i,j,1,0) = 0 with MK(i,j,1,0),K(i,j,1,0) = 1. Conditions at +∞  This corrresponds to k = Ns and  = 1, 2, 3.  We need to impose b1 (ρ, σ, ∞) = b2 (ρ, σ, ∞) = 0 and ∂z b3 (ρ, σ, ∞) = ξ (3) ∂τ b3 = 0. This is an equation for the points with τ coordinate number Ns . We impose fK(i,j,Ns , ) = 0. The point at ∞ is between regular grid points and ghost-points. We impose an average condition for the first two components (imposing their average be zero) and a regular second-order derivative condition for the third component:  MK(i,j,Ns ,  ),K(i,j,Ns −1, )  MK(i,j,Ns ,  ),K(i,j,Ns , )  =  =  1 2  1 2  78  2.4. Finite Difference Program for General Sinusoidal Surface for  = 1, 2, and  MK(i,j,Ns ,3),K(i,j,Ns −1,3) = − MK(i,j,Ns ,3),K(i,j,Ns ,3) =  1 (3) ξ hτ i,j,Ns ,3  1 (3) ξ . hτ i,j,Ns ,3  Vacuum Laplacian Here, k = 2, ..., Nv − 1 and  = 0.  The Laplacian is zero in the vacuum region so fK(i,j,k,0) = 0. The discretization methods for mixed partial derivatives are the same here as for the two-dimensional code. For the matrix values, we have:  MK(i,j,k,0),K(i±1,j,k,0) =  1 h2ρ  MK(i,j,k,0),K(i,j±,k,0) =  1 h2σ (6)  MK(i,j,k,0),K(i,j,k±1,0) =  ±ξi,j,k,0 2hτ (7)  MK(i,j,k,0),K(i,j,k,0) =  2ξi,j,k,0 2 −2 − 2 − 2 hρ hσ h2τ (4)  MK(i,j,k,0),K(i+1,j,k+1,0) =  ξi,j,k,0 4hρ hτ (4)  MK(i,j,k,0),K(i−1,j,k−1,0) =  ξi,j,k,0 4hρ hτ  79  2.4. Finite Difference Program for General Sinusoidal Surface (4)  MK(i,j,k,0),K(i+1,j,k−1,0) =  −ξi,j,k,0 4hρ hτ (4)  MK(i,j,k,0),K(i−1,j,k+1,0) =  −ξi,j,k,0 4hρ hτ (5)  MK(i,j,k,0),K(i,j+1,k+1,0) =  ξi,j,k,0 4hσ hτ (5)  MK(i,j,k,0),K(i,j−1,k−1,0) =  ξi,j,k,0 4hσ hτ (5)  MK(i,j,k,0),K(i,j+1,k−1,0) =  −ξi,j,k,0 4hσ hτ (5)  MK(i,j,k,0),K(i,j−1,k+1,0) =  −ξi,j,k,0 4hσ hτ  Superconductor Laplacian Here, k = 2, ..., Ns − 1 and  = 1, 2, 3.  This is identical to the discretization of the vacuum Laplacian above except that the 0 is replaced by  = 1, 2, 3 and the slight change in now  having and extra −1 in these entries: (7)  MK(i,j,k,0),K(i,j,k,0)  2ξi,j,k,0 −2 2 = 2 − 2 − − 1. hρ hσ h2τ  Zero Divergence on the Interface  This is the scalar equation corre-  sponding to the g˜-ghost points. We have k = Nv and  = 0. We chose to  impose zero divergence at the g˜−ghost points. These points, however, are defined between the b−points. To resolve this issue, we define the divergence at ghost points by taking averages. Considering each spatial direction x ˆ, yˆ, zˆ individually, every ghost point is surrounded 80  2.4. 
Finite Difference Program for General Sinusoidal Surface by four edges along which a derivative can be taken. Taking the average of these four discrete derivatives gives an approximation to the derivative (in one of the directions) at the ghost point. For example, to compute the z− derivative of b1 at (ρi , σj , τ Nv , 0) we (3)  would compute the average of the discrete derivatives (3) ξi,j,3/2,0  (3) ξi,j,3/2,0  (uK(i−1,j,2,1) −uK(i−1,j,1,1) ), hτ (3) ξi,j,3/2,0 (uK(i−1,j−1,2,1) − uK(i−1,j−1,1,1) ) where hτ  uK(i,j,1,1) ),  (3)  hτ  ξi,j,3/2,1 (uK(i,j,2,1) hτ  −  (uK(i,j−1,2,1) −uK(i,j−1,1,1) ),  (3)  ξi,j,3/2,1 is defined by the av-  (3)  erage 12 (ξi,j,1,1 + ξi,j,2,1 ). Referring back to the figure 2.26 should clarify. The divergence is zero and we are looking at the equations for the vacuum ghost points. Therefore we set fK(i,j,Nv ,0) = 0. The entries of M corresponding to the terms in ∂x b1 are shown below and the other terms are similar. For the matrix M, we impose: (1)  ξi,j,3/2,1 1 M (K(i, j, Nv , 0), K(i, j, 2, 1)) = + 4hρ 4hτ (1)  ξi,j,3/2,1 −1 + M (K(i, j, Nv , 0), K(i, j, 1, 1)) = 4hρ 4hτ (1)  ξi−1,j,3/2,1 1 M (K(i, j, Nv , 0), K(i − 1, j, 2, 1)) = + 4hρ 4hτ (1)  ξi−1,j,3/2,1 −1 M (K(i, j, Nv , 0), K(i − 1, j, 1, 1)) = + 4hρ 4hτ (1)  ξi,j−1,3/2,1 1 M (K(i, j, Nv , 0), K(i, j − 1, 2, 1)) = + 4hρ 4hτ  81  2.4. Finite Difference Program for General Sinusoidal Surface (1)  ξi,j−1,3/2,1 −1 M (K(i, j, Nv , 0), K(i, j − 1, 1, 1)) = + 4hρ 4hτ  (1)  ξi−1,j−1,3/2,1 1 M (K(i, j, Nv , 0), K(i − 1, j − 1, 2, 1)) = + 4hρ 4hτ (1)  ξi−1,j−1,3/2,1 −1 M (K(i, j, Nv , 0), K(i − 1, j − 1, 1, 1)) = + 4hρ 4hτ where the factor of  1 4  comes from taking averages.  The Gradient Condition  This corresponds to k = 1 and  = 1, 2, 3.  The condition is that b = ∇˜ g + (1, 0, 0)T or that ∇˜ g − b = (−1, 0, 0)T . These are equations for b on the interface. Therefore we set fK(i,j,1,1) = −1 and fK(i,j,1,2) = fK(i,j,1,3) = 0. As with the divergence, we need to take averages in finding the derivatives. For an illustration, we will show the equations for the third component of b. They correspond to setting the entries of M as follows: (3)  MK(i,j,1,3),K(i,j,Nv ,3) =  ξi,j,Nv −1/2,0 4hτ (3)  MK(i,j,1,3),K(i,j,Nv −1,3) =  −ξi,j,Nv −1/2,0 4hτ (3)  MK(i,j,1,3),K(i−1,j,Nv ,3) =  ξi−1,j,Nv −1/2,0 4hτ (3)  MK(i,j,1,3),K(i−1,j,Nv −1,3) =  −ξi−1,j,Nv −1/2,0 4hτ 82  2.4. Finite Difference Program for General Sinusoidal Surface  (3)  MK(i,j,1,3),K(i,j−1,Nv ,3) =  ξi,j−1,Nv −1/2,0 4hτ (3)  MK(i,j,1,3),K(i,j−1,Nv −1,3) =  −ξi,j−1,Nv −1/2,0 4hτ (3)  MK(i,j,1,3),K(i−1,j−1,Nv ,3) =  ξi−1,j−1,Nv −1/2,0 4hτ (3)  MK(i,j,1,3),K(i−1,j−1,Nv −1,3) =  −ξi−1,j−1,Nv −1/2,0 4hτ  MK(i,j,1,3),K(i,j,1,3) = −1. (3)  (3)  (3)  We similarly use the notation ξi,j,Nv −1/2,0 = 12 (ξi,j,Nv −1,0 + ξi,j,Nv ,0 ). Finding the Results This averaging actually brings about a null vector for M when N is even. As a result, the program is restricted to odd values of N . We explain how this null vector comes about in appendix E for the flat geometry. The same ill-conditioning also occurs for rough geometries. Given the system M u = f we found u = M −1 f (by implementing a direct solve routine for sparse matrices in Matlab). This gives us a magnetic field on the superconducting side, but only gives us g˜ on the vacuum side. To get b on the vacuum side, we took the discretized gradient of g˜ just as above, thereby finding the magnetic field at values at the centres of cubes with g˜ values. We also added 1 to the first component. 
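The assembly-and-solve mechanics are the standard sparse-triplet pattern: entries of the matrix are accumulated as (row, column, value) triplets indexed through K, assembled with sparse, and the system is solved with MATLAB's sparse direct solver. As a self-contained one-dimensional stand-in (this is not the thesis code), the same pattern applied to the flat-interface problem b'' = b on 0 < t < M with b(0) = 1, b(M) = 0 looks as follows:

% Sparse-triplet assembly and direct solve for b'' = b, b(0) = 1, b(M) = 0.
n = 200;  M_far = 9.25;  h = M_far/n;
rows = []; cols = []; vals = [];
f = zeros(n+1, 1);
rows(end+1) = 1;    cols(end+1) = 1;    vals(end+1) = 1;  f(1)   = 1;  % b(0) = 1
rows(end+1) = n+1;  cols(end+1) = n+1;  vals(end+1) = 1;  f(n+1) = 0;  % b(M) = 0
for i = 2:n                        % interior rows: discretized b'' - b = 0
    rows(end+1) = i;  cols(end+1) = i-1;  vals(end+1) =  1/h^2;
    rows(end+1) = i;  cols(end+1) = i;    vals(end+1) = -2/h^2 - 1;
    rows(end+1) = i;  cols(end+1) = i+1;  vals(end+1) =  1/h^2;
end
A = sparse(rows, cols, vals, n+1, n+1);
b = A \ f;                         % sparse direct solve
t = (0:h:M_far)';
max_err = max(abs(b - exp(-t)));   % a few times 1e-4: O(h^2) plus the exp(-M)
                                   % far-field error, as quoted in section 2.3.2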
Where necessary, an average field magnitude at depth tk can be found by averaging the field magnitude over i and j with k fixed. 83  2.4. Finite Difference Program for General Sinusoidal Surface  2.4.2  Validation  To test that the code results are accurate, we compare the results to the asymptotically computed solutions. As our problem of interest is a three-dimensional vector problem, the simulation is extremely difficult to run, due to memory constraints. This limitation stems from the use of a direct-solver for the linear system. As a result, in the fully three-dimensional problem, the resulting system of equations can only be solved with values of N up to about 21. When this code has either ωx or ωy zero, we can take N much larger, but still nowhere near the values possible in the two-dimensional code. In all computations, we tried to take N as large as possible. In terms of Mp and Mm , we generally choose Mp = 9.25 and set Mm = Mp /  ωx2 + ωy2 . This choice of Mm ensures that even if  were as large as  1 that the decay of the first order perturbation (which, in the asymptotic case, decays with exp(  ωx2 + ωy2 t) would be of the same order as the error  at Mp . The most basic check is how the program behaves with a flat interface. For = 0, the field magnitude should be identically 1 on the vacuum side and decay exponentially with e−t on the superconducing side. Taking N = 11 we see the plot in figure 2.27  Comparison with Asymptotic Solutions We now compare the numerical approximations to the asymptotic solution for different values of .  84  2.4. Finite Difference Program for General Sinusoidal Surface  Figure 2.27: Visual confirmation that the three-dimensional program correctly handles the flat interface N = 11, and ωx = ωy = π.  Our first-order asymptotic solution satisfies basy,1 = b(0) + b(1) = bex + O( 2 ) where b(0) and b(1) are the zeroth and first-order terms of the asymptotic expansion and bex is the exact solution. Our numerical solution bnum satisfies bnum = bex + O(h2τ ) + O(λ) where λ is the far-field error. Taking the difference of basy,1 and bnum and dividing (1)  by maxi ||bi ||∞ and taking norms yields an error:  E = ||  bex − basy,1  ||∞ (1) maxi ||bi ||∞  = O( ) + O(h2τ / ) + O(λ/ ).  Provided h2τ is small enough (by taking N large) and λ is small, then this should be a linear function of epsilon. To verify the linear nature, we plot E vs  to see that E varies roughly 85  2.4. Finite Difference Program for General Sinusoidal Surface  Figure 2.28: Left: resolution of first-order asymptotic term for first geometry with ω = π. Centre: resolution of first-order asymptotic term for second geometry with ω = π. Right: resolution of first-order asymptotic term for third geometry with (ωx , ωy ) = (π, π). linearly with  and that the E ↓ 0 as  is remarkably accurate. Even for  ↓ 0. From this, we see the program  ≈ 0.3 the program can discern the first-  order term to within 15%. See figure 2.28. As in section 2.3.2, the second-order asymptotic solution basy,2 should be near the numerical solution to within an error of order O(h2τ )+O( 3 )+O(λ). Being able to test this higher order of convergence is extremely delicate (due to memory constraints). When ωx , ωy = 0, we can only take N as large as 21. Another difficulty is that the magnetic field is essentially comprised of six parts: three components on two sides of the interface. 
Each part has its own terms and could presumably have optimal ranges over which the O(ε³) term could be detected by using second-order asymptotics. We begin this test by taking the three-dimensional code with the conditions (ωx, ωy) = (π, 0) so that we can test how it converges with respect to the results of section 2.2.4. This also means we can take N much larger (in this case we take N = 41). We allow ε to range from 0.05 to 0.40 in increments of 0.05. From this we obtain the asymptotic errors for each side of the interface, E = ||b_num,i − b_asy,2,i||∞ (i = 1, 2, 3), and we can perform a least-squares regression for log E = c1 log ε + c0. We do our fitting on the range of data that seems to best detect O(ε³). We tabulate the results of c1 in table 2.1.

Table 2.1: Estimated orders of convergence for (ωx, ωy) = (π, 0).
Field part             Range of ε      Slope
b1 vacuum              [0.1, 0.25]     2.77
b1 superconductor      [0.2, 0.4]      3.08
b3 vacuum              [0.05, 0.2]     2.71
b3 superconductor      [0.1, 0.25]     2.86

Seeing these very convincing results, we also test the case where (ωx, ωy) = (π, π), keeping the same range of ε values as above. Again, we perform a least-squares linear fit to the best-behaved subset of the data. Results are tabulated in table 2.2.

Table 2.2: Estimated orders of convergence for (ωx, ωy) = (π, π).
Field part             Range of ε      Slope
b1 vacuum              [0.2, 0.4]      2.94
b1 superconductor      [0.15, 0.4]     3.03
b2 vacuum              [0.25, 0.4]     2.98
b2 superconductor      [0.15, 0.25]    2.71
b3 vacuum              [0.15, 0.35]    2.74
b3 superconductor      [0.1, 0.2]      2.50

These results show a convincing agreement between the computational and asymptotic results in the three-dimensional setting, which validates both approaches.

2.4.3 Results

Choosing N, we show the plots of the field profile. We begin by selecting (ωx, ωy) = (2π, 2π) with ε = 0.05 to see that the profiles (shown in figure 2.29) are similar to those of the asymptotics. We also have the freedom to choose much larger values of ε and the ω's. To examine how the profile changes when the frequencies are out of proportion, we fix ε = 0.1 and select (ωx, ωy) = (π, 8π) and (ωx, ωy) = (8π, π). The results are plotted together along with the flat geometry solution in figure 2.30. We see the mean field magnitudes in both perturbed geometries exceed that of the flat geometry after a certain depth.

With ωx ≪ ωy we see a similar shape to that of the experimental results (see figure 2.2), although the dead layer is considerably smaller. With ωx ≫ ωy we see a shape that resembles the experimental plot quite closely, but the decay in magnitude begins even before the superconducting interface.

The experimental results in figure 2.30 only measure the field profile in the superconductor. However, it is possible to coat the superconductors with silver to measure the field outside [8]. From this, we observe that by rotating the applied magnetic field (assuming that we have a single value of λ that does not depend on orientation), the field profile changes purely because the roughness differs in the two directions. However, the amount by which the field profile differs from the flat geometry is not large enough to account for the experimental results unless ε could be larger, i.e. the surface is rougher than experimentalists believe.

As a final plot for this section, we show how well the asymptotics and numerics agree.
With ε = 0.1 and (ωx, ωy) = (8π, 8π), their predicted mean field profiles are plotted together in figure 2.31.

Figure 2.29: Top left: a profile of b1 from peak and valley. Top right: a profile of b2 with (x, y) = (−0.27, −0.27). Bottom left: profile of b3 with (x, y) = (−0.27, −0.50). Bottom right: difference between the average field profile in the perturbed geometry and the flat geometry. All figures computed with ε = 0.05 and (ωx, ωy) = (2π, 2π).

Figure 2.30: The field profiles for different roughness orientations with ε = 0.1.

2.5 Conclusions of Superconductor Modeling

Recent experiments have suggested the need for more sophisticated models of a superconductor. Our work examined the notion of surface roughness and the effects this would have on the magnetic field profile in an experimental setting. Through analytic and numerical techniques, we have shown that surface roughness does indeed play a role in perturbing the field magnitude. Not only does it perturb the field, but the orientation of the geometry itself plays a large role in the nature of the perturbation. Here we briefly summarize the most interesting of these results.

Figure 2.31: The asymptotic and numeric mean field profiles. The two are quite close given the large ε and ω values.

2.5.1 More Complicated Field Behaviour

The standard model, with a flat superconductor and a parallel applied magnetic field, yields a constant solution within the vacuum region and a field pointing in only one direction that decays exponentially once within the superconductor. Our results show that, in general, the field magnitude is not constant in the vacuum, and that within the superconducting region the magnitude can decay differently than a pure exponential: the decay involves multiple length scales, and the field components vary sinusoidally in the longitudinal directions. The individual components can rise above or fall below their value at −∞. In addition, the field components themselves become mixed, and the field is no longer pointing in a single direction throughout the experimental region.

2.5.2 Orientation of the Roughness Affects the Profile

For a given roughness that is not equal in both spatial dimensions, depending on whether the applied field points in a rough or a smooth direction, the amount by which it decays in the vacuum region can be significant or nearly negligible. If the experiments were precise enough, this would mean that if a superconductor had a rough surface (not equal in both spatial directions), then changing the field orientation by 90° (assuming the penetration depth remained nearly the same) should change the field profile. Having to take into account the anisotropy in the sample would make this more difficult.

2.5.3 Effective Dead Layer

Based on an asymptotic regime, the effective dead layer (a length over which we can think of the field magnitude remaining constant beyond the interface) is of the order ε². Recall that ε is the scaled size of the surface roughness defined in section 2.2.1. The actual dependence on the spatial frequencies is quite complicated, and in some cases the effective dead layer is actually negative. Experimentally, the dead layer appears to be on the order of 0.05.
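To give a rough sense of scale (using, for illustration, the first-geometry formula of section 2.2.3, so the number is indicative only): with ε = 0.05 and ω = 2π, δeff = (ε²/2)(√(1 + ω²) − 1) ≈ 0.00125 × 5.4 ≈ 0.007, well below this value, and the (negative) dead layers of the second geometry are smaller still in magnitude.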
Our results show that a surface roughness of 0.05 cannot account for dead layers of the size shown in figure 2.30. However, results for larger roughnesses are qualitatively similar to the experimental profiles.

Chapter 3

Fuel Cell Modeling

3.1 Modeling of Gas Diffusion in Fuel Cells

Hydrogen fuel cells hold considerable promise for producing clean energy. If Hydrogen and Oxygen are used in the fuel cell, they react electrochemically, producing water (as the only by-product) and energy. For this to take place, these gases must diffuse from channels to reaction sites. Setting ideal running conditions for hydrogen fuel cells to maximize efficiency requires, among other things, an in-depth understanding of the gas diffusion. Running experiments can be costly, and in recent years a great deal of emphasis has been placed on modeling fuel cells numerically.

The question then arises as to how to model this gas diffusion. Fickian diffusion, wherein a species diffuses in proportion to the gradient of its mole fraction, has traditionally been chosen. However, it is believed that a more sophisticated (and more complex) formulation known as Maxwell-Stefan diffusion yields more physically accurate results. This diffusion model takes into account the interspecies competition in diffusion. Although the theory behind these models is different, they are used to model the same phenomena. Due to the interaction terms and mathematical subtleties, it is not clear how best to pose a Maxwell-Stefan diffusion problem, and exactly how much it differs from Fickian diffusion in situations relevant to fuel cells is unknown. These issues are explored in this chapter.

We begin this portion of the thesis with a brief overview of the relevant physics involved in PEM fuel cells and proceed to develop our model from there.

3.1.1 PEM Fuel Cell Overview

A Polymer Electrolyte Membrane (PEM) fuel cell generates power through a reaction between Hydrogen (H2) and Oxygen (O2) producing water (H2O). The simplest cartoon is that the two gases enter through two separate channels and diffuse through a concentration gradient to arrive at catalyst sites that facilitate the reaction [9]. See figure 3.1.

Hydrogen gas present in the Hydrogen flow channel enters the gas diffusion layer (GDL) anode, where it diffuses through a concentration gradient until it reaches a catalyst layer (typically comprised of Platinum or a Platinum alloy). Here, it oxidizes with the reaction H2 → 2H+ + 2e−. The electrons become part of the electric current drawn from the fuel cell, and the protons diffuse through the polymer electrolyte membrane (PEM) until reaching the cathode Oxygen catalyst layer.

The other reacting gas is Oxygen, which diffuses from the Oxygen flow channel through the GDL cathode. Once reaching the catalyst, an Oxygen molecule combines with four protons (from the Hydrogen side) and four electrons (generated through the current) to produce water under the reaction O2 + 4e− + 4H+ → 2H2O. This reaction is also enhanced by Platinum or a Platinum alloy.

Figure 3.1: A cross-section of a PEM fuel cell. Hydrogen and Oxygen diffuse primarily in the XY-plane, but there is diffusion in the Z-direction as well.

The properties of the GDL have a large impact on the diffusion. The GDL is not an open channel.
Instead, it is a thin layer of teflonated carbon fibre paper, which can be considered a porous medium, through which the gases are conducted. The efficiency of this transport depends on how "open" these pathways are. The pathways make for a greater overall distance for the molecules to travel: they cannot travel straight; instead, they must pass through tortuous pathways to reach the catalyst site.

3.1.2 The Model

In our work here we will look at the GDL of the cathode. We will model the concentration profiles of the gases and explore how the Fick and Maxwell-Stefan diffusion laws differ.

We will consider the presence of three gas species, O2, H2Ovap (water vapor), and N2, in the GDL, at steady state. We approximate the system as isothermal (the temperature is assumed to be constant). We also approximate the system as one-dimensional: the gases have prescribed concentrations at the channels and travel in the X-direction to reach the catalyst layer, where the fluxes are known.

We consider the concentrations of the gas species Ci, i = 1, 2, 3, where 1 indicates O2, 2 indicates H2Ovap, and 3 indicates N2. Together we consider the vector of concentrations C = (C1, C2, C3)^T. The total molar concentration is the sum of the concentrations of the individual species, C1 + C2 + C3, which we will denote by ||C||. Unless otherwise stated, the norm will always represent the L1-norm of a vector. The mass density ρi of a species i is found by multiplying its molar mass Mi by its molar concentration Ci.

The temperature (assumed constant) will be denoted by T, the total pressure by P, and the mass-averaged velocity by U. Important physical constants in this setting are the ideal gas constant, R, the porosity, ε, the viscosity, µ, and the permeability, κ. In Fickian diffusion, D will represent the diffusion coefficient, and in the Maxwell-Stefan formulation the Dij, (i, j) ∈ {1, 2, 3}², will be the binary diffusivities.

The important fluxes are the molar flux with respect to the molar-averaged velocity, J*, and the molar flux with respect to the mass-averaged velocity, J. Both J and J* are in general rank 2 tensors (in one dimension they are vectors).

In our model, the channel concentrations are specified at position X = 0, so that C(0) = C* is known. A known current is drawn from the reactions and, based upon this current, under standard operating conditions the fluxes Ni of the individual species at the catalyst layer (at X = L) are known. We will now determine our parameters of interest.

3.1.3 Standard Operating Conditions of a Fuel Cell

Here, based upon standard operating conditions (see references [9], [10], [11], and [12]), we compute our channel concentrations and fluxes. The main equation we need here is the ideal gas law ((3.2) in the next section). We take the pressure within the fuel cell channels to be 3 atm (approximately 3 × 10⁵ Pa). The temperature is taken to be 350 K. Based on this we can compute the total molar concentration of the gases as approximately 100 mol m⁻³. These are typical conditions for fuel cells being developed for the automotive sector [11]. At this same temperature, the saturation pressure of water vapor is 3.8 × 10⁴ Pa. This corresponds to a saturation concentration, Csat, of 13 mol m⁻³. If we assume 75% humidity, then within the fuel cell the molar density of water vapor is approximately 10 mol m⁻³.
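As a quick check, these numbers follow directly from the ideal gas law (3.2): ||C|| = P/(RT) = (3 × 10⁵)/(8.314 × 350) ≈ 103 mol m⁻³, which we round to 100 mol m⁻³, while Csat = (3.8 × 10⁴)/(8.314 × 350) ≈ 13 mol m⁻³ and 0.75 × 13 ≈ 10 mol m⁻³.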
We then assume the remaining 90 mol m−3 are comprised of Oxygen at 21% and Nitrogen at 79%. Our channel concentrations are: C1 = 19 mol m−3 , C2 = 10 mol m−3 , and C3 = 71 mol m−3 . A typical current density drawn from a fuel cell is 1 A cm−2 which is 104 C s−1 m−2 . Based on the reaction in the GDL, we see that for every four electrons consumed, one Oxygen molecule is consumed. Taking the current 98  3.1. Modeling of Gas Diffusion in Fuel Cells  Table 3.1: Physical constants in our gas diffusion model. value constant C1 19 mol m−3 10 mol m−3 C2 C3 71 mol m−3 8 × 10−6 m2 s−1 D D1,2 1.24 × 10−5 m2 s−1 D1,3 1.04 × 10−5 m2 s−1 D2,3 1.23 × 10−5 m2 s−1 0.74 4 F 9.6 × 10 C mol−1 10−12 m2 κ L 10−4 m M1 32 g mol−1 18 g mol−1 M2 M3 28 g mol−1 2.6 × 10−2 mol m−2 s−1 N1 N2 −5.2 × 10−2 mol m−2 s−1 N3 0 mol m−2 s−1 µ 2.24 × 10−5 kg m−1 s−1 R 8.314 kg m2 mol−1 s−2 K−1 T 350 K density and dividing by 4 F (where F is Faraday’s constant, 9.6 × 104 C mol−1 ) we find the Oxygen flux at the catalyst layer to be N1 = 0.026 mol m−2 s−1 . The water vapor flux should be in the opposite direction and have twice the magnitude. So N2 = −0.052 mol m−2 s−1 . Nitrogen does not react so its flux is zero. Our particular choice of parameters are summarized in table 3.1. Aside from the constants we used or obtained above, we obtained the other constants from references [9] and [11]. In the next section we will review the relevant gas diffusion equations. 99  3.1. Modeling of Gas Diffusion in Fuel Cells  3.1.4  Diffusion Equations  Here we give the formulations of Fick and Maxwell-Stefan diffusion. Although our model is one-dimensional, the equations here are given in full generality. For further reference, we suggest reference [13]. Einstein summation convention is used throughout in that repeated indices in products are summed over (aij bj =  j  aij bj for example).  Basic Equations In both the Fick and Maxwell-Stefan settings, we will assume Darcy’s law, which applies to porous media [9], and the Ideal Gas law. Respectively, they are stated below: κ ∇P µ  (3.1)  P = ||C||RT.  (3.2)  U =−  These equations can be combined nicely by substituting (3.2) into (3.1) to yield a modified Darcy Law:  U =−  κRT ∇||C||. µ  (3.3)  In later sections we will define the constant σ = The mass-averaged velocity is given by U = velocity is given by U ∗ =  Ci Vi ||C|| .  KRT µ .  ρi Vi ||ρ|| .  The molar-averaged  Here the Vi are the velocities of the individual  species. 100  3.1. Modeling of Gas Diffusion in Fuel Cells Fickian Diffusion Fick’s law describes diffusive fluxes. The formulation we adopt is the same as that implemented by Berg et al. [11]. For each species of gas, we have:  ∂t Ci + ∇ • (Ci U − D||C||∇(  Ci )) = 0. ||C||  (3.4)  The term Ci U is a convective term - the gas species move with the massCi ) is a diffusive term, where diffuaveraged velocity. The term D||C||∇( ||C||  sion is assumed to be driven by the gradient of the mole fractions (Ci /||C||). Fickian diffusion cannot actually be reconciled with Darcy’s law as stated [14]. By summing (3.4) over i we find:  ∂t ||C|| + ∇ • (||C||U − D||C||∇(  ||C|| )) = 0 ||C||  and the latter term in the diffusive term is actually zero (∇(1) = 0.) Therefore we see that ∂t ||C|| = −∇ • (||C||U ) which only makes sense if U is the molar-averaged velocity U ∗ (different to how Darcy’s law is posed where U is the mass-averaged velocity). In spite of this, many researchers continue to use Fickian diffusion in conjunction with Darcy’s law [9]. 
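Before turning to the Maxwell-Stefan formulation, the catalyst-layer fluxes of table 3.1 can be checked in the same spirit. The sketch below is illustrative only; it uses nothing beyond the current density, Faraday's constant, and the reaction stoichiometry stated in section 3.1.3:

```python
# Catalyst-layer fluxes from the drawn current (4 e- per O2, 2 H2O per O2).
i_cell = 1.0e4            # current density, A m^-2 (1 A cm^-2)
F = 9.6e4                 # Faraday's constant as used above, C mol^-1
N1 = i_cell / (4.0 * F)   # O2 flux, ~0.026 mol m^-2 s^-1
N2 = -2.0 * N1            # water vapor flux, opposite direction, ~-0.052
N3 = 0.0                  # nitrogen is inert, so its flux is zero
print(N1, N2, N3)
```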
As we see next, Maxwell-Stefan diffusion is compatible with Darcy’s law.  101  3.1. Modeling of Gas Diffusion in Fuel Cells Maxwell-Stefan Diffusion The Maxwell-Stefan diffusion law is expressed below:  ∂t Ci + ∇ • (Ci U + Ji ) = 0,  (3.5)  where we define J through a series of transformations. To start,  Aij J˜j = ∇(  Ci ) ||C||  (3.6)  gives a relationship between the molar diffusive flux J˜ (with respect to an arbitrary velocity) and the gradients of the mole fractions, where  Aij (C) =      1 ||C||2    C =i Di  if i = j  − DCiji if i = j.  The Dij for (i, j) ∈ {1, 2, 3}2 are binary diffusivities. We will denote ˆ = maxij Dij . D The matrix A is not invertible. The vector (C1 , C2 , C3 )T spans its null space. To find the general solution of (3.6) we will pick a base point, (J1∗ , J2∗ , J3∗ ), by using a projection and replacing one of the rows of A by a multiple of (1, 1, 1) and then add an arbitrary multiple of ζ(C1 , C2 , C3 )T to the solution. We find: C (J˜1 , J˜2 , J˜3 )T = (PA + B)−1 P∇( ) +ζ(C1 , C2 , C3 )T ||C|| (J1∗ ,J2∗ ,−J1∗ −J2∗ )T  102  3.1. Modeling of Gas Diffusion in Fuel Cells where   1 0 0      P=  0 1 0    0 0 0   and       0 0 0     0 0 0 . B=   ˆ D||C||   1 1 1 1  The factor of  1 ˆ D||C||  within B may seem strange but it is needed to keep  consistency with the dimension and scaling of A. The usefulness of this will become apparent in the asymptotic and numerical work to come (section 3.2.1). Also note that the −J1∗ − J2∗ for the third component of the particular solution comes from our requirement that J1∗ + J2∗ + J3∗ = 0 (i.e. that J ∗ is the molar flux with respect to the molar-averaged velocity). We have found the molar-diffusive fluxes (which have an arbitrary parameter). We would like (3.5) to involve a mass-averaged velocity [14]. In this case, if we multiply (3.5) by Mi and add we find:  ∂t ρ + ∇ • (ρU + Mi Ji ) = 0 where ρ is the total density, Ci Mi (the sum). For a mass-averaged U we require that Mi Ji = 0. For a particular choice of ζ, the J˜ we found is the molar flux with respect to the mass-averaged velocity, J. So we now find ζ by solving  103  3.2. Asymptotic Formulation M1 (J1∗ + C1 ζ) + M2 (J2∗ + C2 ζ) + M3 (−J1∗ − J2∗ + C3 ζ) = 0. We obtain ζ =  (M3 −M1 )J1∗ +(M3 −M2 )J2∗ . ρ    J  1   S11     J = S  2   21    M1 −M J3 3  Using this ζ allows us to write:    Ci M j ρ (1  where Sij = δij −  −  M3 Mj )  S12 S22 2 −M M3    0   J1∗     ∗  0    J2    J3∗ 0  as noted by Stockie et. al. [9].  Overall we can find J as shown below:          J1   S11     J = S 2    21    1 J3 −M M3  S12 S22 M2 −M 3  0   C −1 0   (PA + B) P ∇( ||C|| ).  0 Q  The Maxwell-Stefan system is now expressed as:  ∂t Ci + ∇ • (Ci U + Qij (C)∇(  3.2  Cj )) = 0. ||C||  (3.7)  Asymptotic Formulation  Our analysis will be in a one-dimensional setting so that ∇(•) = • with being the spatial derivative. We begin by nondimensionalizing the systems of equations as we explained in section 1.2.  104  3.2. Asymptotic Formulation  3.2.1  Nondimensionalization  Fickian Diffusion Let our spatial coordinate be X. By combining (3.3) and (3.4), we obtain the equation  ∂t Ci + (−σCi ||C|| − D||C||(  Ci )) =0 ||C||  (3.8)  which, in steady state (i.e. with ∂t C = 0), and after integrating with respect to X yields  −σCi ||C|| − D||C||(  Ci ) = Ni ||C||  (3.9)  where the N ’s are fluxes. Defining the bars as dimensional quantities with a representative scale ¯ C = Cc, ¯ and N = N ¯ n. 
Substituting as in section 1.2, we write X = Xx, ¯ yields this into equation and dividing by N σ C¯ 2 DC¯ ci − ¯ ¯ ci ||c|| − ¯ ¯ ||c||( ) =n ||c|| XN XN  (3.10)  where now represents the derivative with respect to x. By equating the dimensions of the terms on the left side of (3.10) we find  ¯2 σC ¯N ¯ X  =  ¯ DC ¯X ¯ N  which can be satisfied by setting C¯ =  D σ.  ¯ = L as the length of the diffusion We will choose the length scale X layer. Requiring that there be no dimension on the left side of (3.10) means that  ¯ DC ¯ LN  ¯ = = 1, which can be satisfied in selecting N  D2 σL .  105  3.2. Asymptotic Formulation  Figure 3.2: Our one-dimensional model of the gas diffusion layer.  With these substitutions, we find  −ci ||c|| − ||c||(  We define γ =  1 ||c∗ ||  ci ) = n. ||c||  (where c∗ = c(0)) and rescale with c˜ = γc. We also  define the number = γ 2 ||n|| and the vector n ˜=  n ||n||  so that γ 2 n = n ˜ with  ||˜ n|| = 1. Writing (3.10) with the tilde variables and removing the tildes yields.  ci ||c|| − γ||c||(  ci ) = ni ||c||  for x ∈ [0, 1] and with |c(0)| = |c∗ | = 1. Note that  (3.11) is not the porosity here.  Our problem is displayed in figure 3.2. It turns out  is very small, and so is γ, although γ  . In this case,  the Fickian term is a small perturbation from the Darcy term (which alone could not be solved). Our parameters are given in table 3.2.  106  3.2. Asymptotic Formulation  Table 3.2: Nondimensionalized and rescaled parameters and variables for Fick diffusion. Paramater Value c∗1 0.19 ∗ 0.10 c2 c∗3 0.71 ||c∗ || 1 n1 0.333 −0.667 n2 n3 0 4.56 × 10−4 γ 4.44 × 10−6 Maxwell-Stefan Diffusion Although the problems have very different structures, the nondimensionalization procedure is very similar. We begin by combining (3.1) and (3.5) to obtain  ∂t Ci + (−σCi ||C|| + Qij (C)(  Ck )) =0 ||C||  (3.12)  which becomes  −σCi ||C|| + Qij (  Cj ) = Ni ||C||  (3.13)  in steady-state. We again use the ¯ • notation to refer to a dimensional parameter with a ¯ X = Xx, ¯ and N = N ¯ n. Noting representative scale for • and write C = Cc, −1 so that (PA + B)−1 (C) ∝ D||C|| ˆ ˆ that (PA + B) ∝ (D||C||) and applying  107  3.3. Asymptotic Analysis  Table 3.3: Nondimensionalized and rescaled parameters and variables for Maxwell-Stefan diffusion. Paramater Value c∗1 0.19 ∗ 0.10 c2 c∗3 0.71 ||c∗ || 1 n1 0.333 −0.667 n2 n3 0 7.06 × 10−4 γ 4.44 × 10−6 ˆ replacing D) that led to (3.10) we find the same techniques (with D  −ci ||c|| + Qij (c)(  cj ) = ni . ||c||  (3.14)  Rescaling as in the Fickian case we can write equation 3.14 as:  −ci ||c|| + γQij (c)(  cj ) = ni ||c||  where |c(0)| = |c∗ | = 1 and where we consider x ∈ [0, 1]. Again, γ are small with γ  (3.15) and  . Our parameters are given in table 3.3. Figure 3.2  applies here as well.  3.3 3.3.1  Asymptotic Analysis Asymptotic Analysis of Fick Diffusion  ci ) = −ci ( By expanding the derivatives of (3.11) we have −ci ||c|| −||c||( ||c|| ci ( γci + γ ||c||  j cj )  j cj )−  = ni . It is possible to rearrange this in the form of  108  3.3. Asymptotic Analysis MF c = n for a matrix MF . We see −ci  j cj  ci = Dij cj where Dij = −ci , −γci = −δij cj , and γ ||c|| (  j cj )  ci . F˜ij cj where F˜ij = γ ||c||  Thus, if we define MF = D + F with  Dij = −ci and Fij = γ(  ci − δij ) ||c||  then  MF (c)c = n.  
(3.16)  Now we expand c = c(0) + c(1) + O( 2 ) and substitute this into (3.16) to get MF |c(0) +  (0) c(1) +O( 2 ) (c  + c(1) + O( 2 )) = n which yields an equation  at each order of epsilon:  O(1) : MF |c(0) c(0) = 0 O( ) : MF |c(0) c(1) + (  (3.17a)  ∂MF (1) |c∗ ci )c(0) = n. ∂ci  (3.17b)  As MF (c(0) ) is invertible we obtain that c(0) = 0 so c(0) is a constant vector. Given the boundary conditions here that c(0) = c∗ we have c(0) = c∗ . This makes (3.17b) much simpler because all the terms with c(0) are zero (as shown by Promislow et al. [10]). So the order  equation is simply  MF |c∗ c(1) = n which can be solved to find c(1) :  109  =  3.3. Asymptotic Analysis  c(1) = MF |−1 c∗ n which is constant. As the boundary conditions are already satisfied with c(0) we need that c(1) (0) = 0 and the first-order approximation to the concentration is:  c(x) = c∗ + c(1) x + O( 2 ).  (3.18)  Substituting x = 1 into (3.18), we obtain an approximation to the concentrations at the catalyst site. The most interesting information for us in computing the difference between the two diffusion models is the relative change in concentrations from channel values. So we compute ri =  ci (1)−c∗i c∗i  for the three species. The computations yield  r1 = −0.0203  r2 = 0.0617 r3 = −0.00325. The signs of these quantities are generally what we would expect. As water vapor (component 2) is being produced on the catalyst layer, its concentration should be higher than in the channel. Oxygen, the first component, which is reacting on the catalyst layer to produce water would decrease in concentration as we move along the GDL from the channel. Interestingly  110  3.3. Asymptotic Analysis enough, although Nitrogen does not react, because of the diffusion and mass conservation, it still has a concentration gradient.  3.3.2  Asymptotic Analysis of Maxwell-Stefan  The analysis here is nearly identical to Fick. Instead of γ we have γQij . By expanding the derivatives in (3.7) we can write:  MMS c = n  (3.19)  with MMS = D + G where  Dij = −ci and Gij = γQik (−  δkj ck + ). ||c||2 ||c||  By expanding c asymptotically with c = c(0) + c(1) + O( 2 ) the identical analysis holds as in the equations for Fickian diffusion. We again have that c(0) is a constant equalling c∗ and c(1) = MMS |c∗ −1 n so that our first-order solution is identical to (3.18) but with the different value for c(1) . We again compute the ri =  ci (1)−c∗i c∗i  values and find this time:  r1 = −0.0140 r2 = 0.0406  111  3.4. Numerical Analysis of Diffusion Models r3 = −0.00196 The signs are again what we expect, but the results are quite different from Fick’s law. The magnitude of these relative changes is smaller than in the Fickian model.  3.4  Numerical Analysis of Diffusion Models  Here we present some numerical computations used to verify the validity of the asymptotic results.  3.4.1  Discretizing the Diffusion Equations  We use (3.16) and (3.19) to express the derivative at each point as the inverse of a matrix (dependent upon the local concentration) times the fluxes. Given N, the number of grid points, we define h = 1/N and set xi = (i − 1)h for i = 1, ..., N + 1. We define ci to be the approximation to c at xi . We now use a modified Euler stepping scheme [15] beginning with c1 = c∗ . To get ci+1 from ci we define:  k1 = M|−1 ci n k2 = M|−1 ci +hk1 n and ci+1 = ci +  h (k1 + k2 ). 2  112  3.4. Numerical Analysis of Diffusion Models  Figure 3.3: Verifying second-order convergence with the Fick code.  
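To make these steps concrete, the sketch below (written in Python purely for illustration; it is not the code used to produce the thesis results) computes both the first-order asymptotic prediction of section 3.3.1 and the modified Euler march just described, for the Fickian system, reading (3.16) as MF(c)c' = εn and taking γ and ε as listed in table 3.2. The Maxwell-Stefan computation is identical with MF replaced by MMS.

```python
import numpy as np

# Scaled parameters for the Fickian model (tables 3.1-3.2).
c_star = np.array([0.19, 0.10, 0.71])    # channel concentrations, ||c*|| = 1
n = np.array([0.333, -0.667, 0.0])       # scaled fluxes, ||n|| = 1
gamma, eps = 4.56e-4, 4.44e-6            # as read from table 3.2

def MF(c):
    """Matrix of the steady-state Fickian system, MF(c) c' = eps * n."""
    s = c.sum()   # ||c|| in the L1 sense (all components stay positive)
    return -np.outer(c, np.ones(3)) + gamma * (np.outer(c, np.ones(3)) / s - np.eye(3))

# First-order asymptotics: c(x) ~ c* + eps * MF(c*)^{-1} n * x, evaluated at x = 1.
c1 = np.linalg.solve(MF(c_star), n)
r_asym = eps * c1 / c_star
print(r_asym)            # approx [-0.0203, 0.0617, -0.0033], as in section 3.3.1

# Modified Euler (Heun) march of c'(x) = eps * MF(c)^{-1} n on [0, 1].
N = 200
h = 1.0 / N
c = c_star.copy()
for _ in range(N):
    k1 = eps * np.linalg.solve(MF(c), n)
    k2 = eps * np.linalg.solve(MF(c + h * k1), n)
    c += 0.5 * h * (k1 + k2)
print((c - c_star) / c_star)   # numerical relative changes, close to r_asym
```

Both outputs should reproduce the relative changes quoted above and agree closely with each other, consistent with the verification described in section 3.4.2.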
The approximate concentration vector at x = 1 is cN +1 . From here we can also approximate ri =  3.4.2  cN +1 i −c1i . c1i  Verification of Program Results  The modified Euler method is second-order accurate. To test the approximation, we compute the ri values at N = 10, 20, 40, 80 and N = 200. Given the simplicity of the problem, we take the result at N = 200 as the “exact” answer and compute the maximum magnitude of the error for each component of r for N = 10, 20, 40, 80. We can plot the error trend on a log-log plot to verify the second-order convergence. The plot is given for the Fickian code in figure 3.3. We can also test the programs against the asymptotic solutions. Running  113  3.5. Exploring the Fundamental Differences between the Models the program (at N = 200) in the Fickian and Maxwell-Stefan models yields concentrations that are identical to the asymptotically computed solutions to within 1.00 × 10−4 and 3.29 × 10−5 respectively. We now have a high degree of confidence in both the asymptotic work and the coding.  3.5  Exploring the Fundamental Differences between the Models  There are numerous questions that arise from the analysis and we will now carry out some investigations. In speaking with fuel cell engineers, modern fuel cells actually have a permeability of 10−15 m2 [8]. This would make the Darcy’s law less efficient and it could impact the way the models behave. There is a question of what happens to the Maxwell-Stefan predictions when we use the “wrong” fluxes (the fluxes that Fick’s law inevitably uses). We can redo the calculations for the Maxwell-Stefan model by replacing J, the molar flux with respect to the mass-averaged velocity, by J ∗ , the molar flux with respect to the molar-averaged velocity. In this regime, Q (as in (3.15)) is simply (PA + B)−1 P. We also remark that the diffusivity D as used by Promislow et al. [10] is noticeably smaller than the binary diffusivities used in the Maxwell-Stefan formulation. We wish to see what happens if we replace D by the arithmetic mean of the binary diffusivities, 1.17×10−5 m2 s−1 and compute the Fickian relative changes. Our results are tabulated in table 3.4 and they were achieved by means 114  3.5. Exploring the Fundamental Differences between the Models  Table 3.4: Different modeling predictions for the relative changes in the concentrations. i 1 2 3  (M, κ) −0.0140 0.0406 −0.00196  (M, κ) −0.0142 0.0404 −0.00211  ˜ , κ) (M −0.0143 0.0403 −0.00185  ˜ , κ) (M −0.0137 0.0410 −0.00118  (F, κ) −0.0203 0.0617 −0.00325  (F, κ) −0.0188 0.0632 −0.00177  (F˜ , κ) −0.0139 0.0422 −0.00222  (F˜ , κ) −0.0124 0.0437 −0.000743  of the numerical programs in section 3.4. Notationally, (X, Y ) is used where ˜ (the MaxwellX can be M (the standard Maxwell-Stefan formulation), M Stefan with the incorrect flux), F (Fick’s law with D as initially stated), F˜ (Fick’s law with the updated value of D); and Y can be κ (the permeability initially stated) or κ (the smaller permeability that is possible in more modern fuel cells). The i indicates relative change ri . From table 3.4 it is very clear that by choosing the updated value of D, the Fickian model is in better agreement with Maxwell-Stefan. With a smaller permeability, the updated Fickian model fares worse. This is likely due to the fact that Darcy’s law is weaker and simply updating the value of D isn’t good enough because we start to see the different interactions contained in the Maxwell-Stefan law. 
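One remark helps explain the Fickian columns of table 3.4 at the larger permeability. Since C̄ = D/σ, the rescaled parameter γ is proportional to the Fickian diffusivity, while ε (which depends on D only through γ²||n||) is not, and to leading order the relative changes of section 3.3.1 scale like ε/γ. Updating D therefore simply rescales the Fickian predictions by the ratio of diffusivities, which can be checked directly (an illustrative sketch, using only values quoted in sections 3.3.1 and 3.5):

```python
# At leading order the Fickian relative changes scale like eps/gamma ~ 1/D
# (gamma is proportional to D through Cbar = D/sigma, while eps is not),
# so the updated-diffusivity column of table 3.4 follows by rescaling.
D_old, D_new = 8.0e-6, 1.17e-5              # m^2 s^-1, original and averaged values
r_fick = [-0.0203, 0.0617, -0.00325]        # (F, kappa) values from section 3.3.1
r_updated = [r * D_old / D_new for r in r_fick]
print(r_updated)    # approx [-0.0139, 0.0422, -0.0022], the (F~, kappa) column
```

The rescaled values agree with the (F̃, κ) entries of table 3.4 to the precision shown.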
We also note that the changes in using the wrong flux in the MaxwellStefan setting are negligible. We include the plots (figures 3.4 to 3.6) of the different concentration profiles for the gas species in both the Fick and Maxwell-Stefan settings, where we have chosen an improved value of the diffusivity and the smaller value of the permeability. We see in both cases, the plots are nearly linear and we see the asymptotics are a good match.  115  3.6. Conclusions of Gas Diffusion Modeling  Figure 3.4: The concentration profile of Oxygen in the GDL for both gas diffusion models.  3.6 3.6.1  Conclusions of Gas Diffusion Modeling Formulations  Maxwell-Stefan diffusion in its purest form, is a highly nonlinear system of equations with many intricacies. Fick diffusion is nonlinear, but has fewer difficulties involved in its computation. Fick’s law has formulation inconsistencies when coupled with Darcy’s law, and it does not take into account the individual competitions between diffusing species. Instead, it lumps all the binary diffusivities together into a single constant (which can agree quite well with Maxwell-Stefan if properly selected). Although they differ, the dominant driving force in both models is the 116  3.6. Conclusions of Gas Diffusion Modeling  Figure 3.5: The concentration profile of water vapor in the GDL for both gas diffusion models.  same: the bulk diffusion of gas species given in Darcy’s law in porous media. The system with Darcy’s law alone would not be solvable; both Fickian and Maxwell-Stefan diffusion are perturbations to a singular system.  3.6.2  Quantitative Differences  We saw that in a very simple setting with one dimension and an isothermal, steady state set of conditions that there are very small differences between the two diffusion models when the Fick diffusivity constant is judiciously selected and for a permeability 10−12 m2 . For the smaller permeability of 10−15 m2 the differences are no longer so small (although much smaller than if the diffusivity had not been modified). It’s important to note that the differences we obtained could be larger 117  3.6. Conclusions of Gas Diffusion Modeling  Figure 3.6: The concentration profile of Nitrogen in the GDL for both gas diffusion models.  in more realistic settings. Our one dimensional model has many limitations, one being that it doesn’t take into account the difficult diffusion pathways of the gas species. The effective length each species must diffuse could be as many as four or five times larger than what our computations were based upon. It’s also possible the differences could be more significant in a higher dimensional setting or with more physically realistic conditions (temperature variations, etc.).  118  Chapter 4  Summary and Future Work 4.1  Summary  Here we provide a summary of the research results and discuss future extensions to the work done.  4.1.1  Superconductor Research Summary  In superconductor systems governed by the London equation, many physical properties are well-understood with certain assumptions on the surface geometry of the superconductor. In particular, if a magnetic field is applied parallel to a superconducting interface, and the interface is flat, then the field magnitude should decay exponentially with the distance into the superconductor (when in the Meissner state). Recent experiments have measured magnetic field profiles that deviate from this exponential decay. 
Our research project in chapter 2 explored a possible explanation for the deviations: the notion of surface roughness. To reach our conclusions, careful asymptotics were done that accurately describe field profiles in superconductors with sinusoidal surface roughness. Novel numerical methods with a carefully chosen mesh were used to verify the asymptotic results and then provide results beyond what the asymptotics 119  4.1. Summary allow. Two new phenomena are predicted by the study. The first is that the individual field components (and even the field magnitude), on particular regions on the vacuum-superconducting interface can their exceed values given by the applied field (experimentally this cannot be detected because the experiments only measure the average field). The second is that if there is a rough interface, then in general the field in the vacuum region would also be perturbed - and none of the components will be identically zero either in the vacuum or the superconductor.  4.1.2  Gas Diffusion Research Summary  In the modeling of gas transport, there exist a number of diffusion models. Fick diffusion is often implemented in modeling due to its relative simplicity. However, another diffusion model known as Maxwell-Stefan diffusion is believed to be more accurate due to various interspecies interaction terms that appear in its formulation. This model is a great deal more complicated. We sought to determine the qualitative and quantitative differences between the two models, to find out if in fact there is a significant difference when computations are done with one model or the other in situations relevant to fuel cell operation. Using a simple, one-dimensional model of a gas diffusion layer in a fuel cell, we computed the relative changes in concentration for three gas species with both sets of equations. Our work here (chapter 3) was carried out in an asymptotic regime and verified with numerical programs. The numerical programs were also used to further study the predictions of the models. 120  4.2. Future Work We showed that the differences between relative concentration changes predicted by the two models are negligible if the Fickian diffusivity is chosen correctly and if the permeability is not too small. The models do, however, show small differences in their predictions for smaller (more modern) permeability values. We saw that both models are dominated by the bulk diffusion of the entire system and not the diffusion of individual species (results of Stockie et al. [9] are in agreement).  4.2 4.2.1  Future Work Future Work for Superconductor Project  Through various validations, we have seen that the asymptotic and numerical computations are in agreement and agree with physical intuition. Even with a small number of grid points, the three-dimensional code we wrote shows a high level of accuracy. However, in the future if this code needed to be even more accurate, we would need to maximize the number of grid points in the three-dimensional finite difference code. However, the program and environment both pose limitations. We started with a Matlab implementation to test the formulation, but rewriting the code in a more computationally friendly language such as C would be a next step. Matlab doesn’t have enough memory or speed to deal with systems of the size we desire. In addition, we need the computation to be as fast as possible. By making use of the known solution for 121  4.2. 
Future Work a flat-interface, we could use Fast Fourier Transformations to come up an efficient preconditioner for the matrix equations which could then be solved iteratively using a Krylov subspace method such as GMRES [16]. Exploring symmetries could also help reduce the number of unknowns. Given the surfaces we examined were sinusoidal, there are be planes of symmetry for the solution. This could reduce the number of grid points needed by up to a factor of 4. Analytically, another consideration is the asymptotic analysis. One physical limit that was inaccessible in the asymptotics was if the spatial frequencies became very large. To analyse these limits, it’s possible that homogenization theory holds the answers. Research has been done in solving Maxwell’s equations with very rough boundaries [17]. This could be possible in our case as well.  4.2.2  Future Work for Gas Diffusion Project  Having seen there is such a small differences between Fick Diffusion and Maxwell-Stefan diffusion in our model, some serious questions arise. At what point do the two models differ significantly (to the point where MaxwellStefan diffusion could be necessary for modeling)? Also, what is the best choice of the Fickian diffusivity when used in conjunction with Darcy’s law, given the set of binary diffusivities? A next course of action would be to model different systems of gases and determine where the two models differ, and how to select the diffusivity. It would also be nice to consider more realistic conditions. Not only would we ideally take the model here to a higher dimensional setting, but 122  4.2. Future Work we would need to add temperature variations, as well as the presence of liquid water, to the calculations.  123  Bibliography [1] Morton, K. and Mayers, D., Numerical Solutions of Partial Differential Equations. Cambridge University Press: 1994. [2] Bender, C. and Orzag, A., Advanced Mathematical Methods for Scientists and Engineers: Asymptotic Methods and Perturbation Theory, Springer-Verlag New York, Inc.: 1999. [3] Folwer, A., Mathematical Models in the Applied Sciences, Cambridge Texts in Applied Mathematics: 1997. [4] Ashcroft, N. and Mermin, N., Solid State Physics, Saunders College Publishing: 1976. [5] Sonier, J., “Muon Spin Rotation/Relaxation/Resonance (µSR)” (http://musr.org/ jess/musr/muSRBrochure.pdf). [6] Kiefl, R.; Hossain, M.; Wojek, B.; Dunsiger, S.; Morris, G.; Prokscha, T.; Salman, Z.; Baglo, J.; Bonn, D.; Liang, R.; Hardy, W.; Suter, A.; and Morenzoli, E., “Direct Measurement of the London Penetration Depth in Y Ba2 Cu3 O6.92 Using Low-Energy µSR”, Physical Review B 2010, 81:18, 180502. Abstract can be viewed at: http://link.aps.org/doi/10.1103/PhysRevB.81.180502 124  Bibliography [7] London, F. and London, H., “The Electromagnetic Equations of the Supraconductor”, Series A, Mathematical and Physical Sciences 1935, 149:886, 71-88. [8] Private communication [9] Stockie, J.; Promislow, K.; and Wetton, B., “A Finite Volume Method for Multicomponent Gas Transport in a Porous Fuel Cell Electrode”, International Journal for Numerical Methods in Fluids 2003, 41, 577599. [10] Promislow, K.; Stockie, J.; and Wetton, B., “A Sharp Interface Reduction for Multiphase Transport in a Porous Fuel Cell Electrode”, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science 2006, 462, 789-186. [11] Berg, P.; Promislow, K.; St. 
Pierre, J.; Stumper, J.; and Wetton, B., “Water Management in PEM Fuel Cells”, Journal of the Electrochemical Society 2004, 151:3, A341-A353. [12] CRC Handbook of Chemistry and Physics, 90th ed., CRC Press: 2009. [13] Taylor, R. and Krishna, R., Multicomponent Mass Transfer, Wiley Series in Chemical Engineering: 1993. [14] Bear, J. and Bachmat, Y., Introduction to Modelling Transport Phenomena in Porous Media, Kluwer Academic: 1990. [15] Hamming, R., Numerical Methods for Scientists and Engineers, General Publishing Company: 1973. 125  [16] Quarteroni, A. and Sacco, R.; and Saler, F., Numerical Mathematics, Springer-Verlag, Inc.: 2000. [17] Nevard, J. and Keller, J., “Homogenization of Rough Boundaries and Interfaces”, SIAM Journal on Applied Mathematics 1997, 57:6, 16601686.  126  Appendix A  Eigenvalues of Finite Difference Matrix We seek the eigenvalues of the (n + 1) × (n + 1) matrix M in section 1.1.1. The matrix is clearly invertible (by simple row reduction it can be seen to have n + 1 pivots). If M u = λu then u1 = λu1 , un+1 = λun+1 and uj+1 −2uj +uj−1 = h2 λuj for 2 ≤ j ≤ n − 1. The endpoints tell us that u1 = un+1 = 0 since λ = 0 (M is invertible). On the interior we will look for a solution of the form uj = θj . We then easily find θj−1 (θ2 − βθ + 1) = 0  (A.1)  where β = (2 + h2 λ). j j If β 2 − 4 ≥ 0 then uj = Aθ+ + Bθ− (where θ± satisfies the quadratic  equation for β 2 − 4 > 0) or uj = Aj + B (when β 2 − 4 = 0) neither of which can lead to anything other than the zero vector with the restrictions u1 = un+1 = 0. On the other hand, if β 2 −4 < 0 then we have two roots that are complex conjugates, each of which has modulus 1 (since their product must be 1). 127  Appendix A. Eigenvalues of Finite Difference Matrix We denote these roots as exp(±iα). We require that Aeiα + Be−iα = 0 so that A = −Be−2iα . Also we need Aei(n+1)α + Be−i(n+1)α = 0 so A = −Be−i(2n+2)α . By dividing the two equations for A we see this is only possible if e2niα = 1 so that 2niα = 2πi. Thus α =  πk n ,  k = 1, 2, ..., n−1. The constant α cannot  be zero (or n) because we require two distinct complex roots. Solving the quadratic for θ in (A.1) and taking the real part, we find θ = β/2. The real part of eikπ/n is cos(kπ/n). This allows us to find the eigenvalues: β/2 = 1 + h2 λk /2 = cos(kπ/n) which means  λk =  2 cos(kπ/n) − 2 1 π2 k2 2 = 2n (1 − + O(n−4 ) − 1) = −π 2 k 2 + O(n−2 ). h2 2 n2  The eigenvalue of smallest magnitude occurs at k = 1 and is given by −π 2 as n → ∞. Therefore the matrix M −1 is has a bounded sup-norm.  128  Appendix B  The General Interface Here we show how to asymptotically compute the solution for a general interface z = h(x, y) with the assumption that h and its powers h2 , h3 , ... can be Fourier transformed. We will use the symmetric Fourier transform throughout. For all surfaces, the zeroth order term b(0) will be the base solution given in (2.8). We can write the interface 1 z=√ 2π  ˆ x , ωy )eiωx x+iωy y dωx dωy , h(ω R2  ˆ is the Fourier transform of h. Then by (2.16b) we have where h 1 [b(1) ]|z=0 = √ 2π  ˆ x , ωy )ei(ωx x+ωy y) dωx dωy . h(ω R2  We could also go to the equations at higher powers of .  129  Appendix B. 
The General Interface By modifying (2.17) for its Fourier integral analogue, we can write:  1 b(j) = √ 2π  dωx dωy ei(ωx x+ωy y) R2     γ (1) π1 β (j) eωz if z ≤ 0   (γ (2) π2 + γ (3) π3 )β (j) e−ωz if z > 0 (B.1)  where ω(ωx , ωy ) = (2)  ωx2 + ωy2 , ω(ωx , ωy ) =  (1)  1 + ωx2 + ωy2 , γωx ,ωy = (iωx , iωy , ω)T ,  (3)  (j)  (j) γωx ,ωy = (ω, 0, iωx )T , γωx ,ωy = (0, ω, iωy )T , and βωx ,ωy = M−1 ωx ,ωy [b ]|z=0 .  We can thus find the perturbation terms at every order. For an easy example we will consider a surface with roughness in only one direction, z =  sinc y, with applied field (1, 0, 0)T and we will compute  the first-order perturbation. Recall sinc y = (sin y)/y. The symmetric Fourier transform of sinc y is  π 2 χ[−1,1] (ω),  where χ is  the characteristic function. Therefore ˆ z b(0) ]z=0 = ( [b(1) ]|z=0 = −h[∂  π χ , 0, 0)T . 2 [−1,1] (j)  (1)  By carrying out the computation, we find γωx ,ωy πj βωx ,ωy =  π 2 χ[−1,1] δ2,j .  Then (by (B.1)) we conclude, for b(1) , all components in the vacuum are (1)  zero and all components except for b1 are zero in the supercondor. We find 1 (1) b1 = √ 2π  ∞ −∞  √ π 1 2 χ[−1,1] eiωy e− 1+ω z dω = 2 2  1  e−  √  1+ω 2 z iωy  e  dω.  −1  If we expand the complex exponential as cos(ωy) + i sin(ωy) and take note that the imaginary part of the integrand is an odd function of ω integrated  130  Appendix B. The General Interface over a symmetric range (and hence zero) then we obtain (1)  b1 =  1 2  1  e−  √  1+ω 2 z  1  cos(ωy)dω =  −1  e−  √  1+ω 2 z  cos(ωy)dω.  0  To first-order (neglecting b2 and b3 which remain identically zero) we can write:  b1 =       e−z +  1 if z ≤  sinc y  √  1 − 1+ω 2 z 0 e  cos(ωy)dω if z >  sinc y  This process could go on to obtain higher-order approximations.  131  Appendix C  Dead Layers and Averages The terms of importance in taking the averages are the constants (with respect to x and y) and those involving the squares of sine and cosine (otherwise they would average to zero). Let us suppose a term had a factor of cos2 (ωx x) cos2 (ωy y) present in it in the third geometry (i.e. neither ωx nor ωy is zero). Then, taking the average of this over x and y is trivially 14 . However, if ωx were zero then we would be computing the average value of cos2 (ωy y) which is actually 21 . One of the issues here is that cos2 (ωx x) does not approach cos2 (0) = 1 uniformly as ωx → 0.  132  Appendix D  Proof of Periodicity of g˜ We wish to prove that if b : R3 → R3 (where b = b(x, y, z)) is periodic in x and y with periods Tx and Ty and it is defined by b = ∇g for some C ∞ scalar function g satisfying g(x, y, −∞) = x, then the function g˜ = g − x is periodic in x and y satisfying the same periodicity conditions as b. The boundary condition at z = −∞ is clearly g˜(x, y, −∞) = 0. As b is periodic (and C ∞ ) we have the following equations for any x0 , y0 and z0 fixed: x0 +Tx  y0 +Tx  ∂x b(x, y0 , z0 )dx =  ∂y b(x0 , y, z0 )dy = 0.  x0  y0  Replacing b with ∇˜ g + (1, 0, 0)T and looking at the third component we obtain:  x0 +Tx  y0 +Ty  ∂zx g˜(x, y0 , z0 )dx = x0  ∂zy g˜(x0 , y, z0 )dy = 0.  To show that g˜ is periodic, it would suffice to prove that both and  y0 +Ty y0  (D.1)  y0 x0 +Tx x0  ∂x g˜(x, y0 , z0 )dx  ∂y g˜(x0 , y, z0 )dy = 0.  We prove the first is zero, as the second is done by the same methodology.  133  Appendix D. 
Proof of Periodicity of g˜ We write x0 +Tx  x0 +Tx  x0  z0  (∂x g˜(x, y0 , −∞)+  ∂x g˜(x, y0 , z0 )dx =  ∂xz g˜(x, y0 , z)dz)dx −∞  x0  and by Fubini’s theorem (to re-order the integrations) and Claurant’s theorem (in the equality of the mixed partial derivatives) along with noting that g˜(x, y0 , −∞) = 0 we find  x0 +Tx  z0  x0 +Tx  ∂x g˜(x, y0 , z0 )dx = x0  ∂zx g˜(x, y0 , z)dxdz = 0 −∞  x0  where the last equality comes from the fact that the inner integral is zero by equation D.1. We have shown that g˜ is periodic with the same periodicity as b.  134  Appendix E  Proof of Existence of a Null Vector We show here that for even N, the three dimensional code matrix (in the flat geometry) admits a null vector, making the system unsolvable. We will use the same notation as in section 2.4.1. Here we consider an unstretched coordinate system with x, y and z as the variables. The transformed coordinate system would still be approximating the same system (so if there is a null vector in Cartesian coordinates, there is also a null vector in the stretched coordinates to within an error imposed by the numerical system). We wish to find a vector u so that M u = 0. Such an equation would imply (in a discretized sense) the following: firstly, g˜(x, y, −∞) = 0; secondly, g˜ = 0; thirdly, ∇˜ g − b = 0 along the interface; fourthly, ∇ • b = 0 along the interface on the superconducting side; fifthly,  b = b in the superconducting  region; and sixthly, b1 (x, y, ∞) = b2 (x, y, ∞) = ∂z b3 (x, y, ∞) = 0. On the vacuum side, we begin by finding a solution vector with zero Laplacian (in the discretized setting) of form uK(i,j,k,0) = (−1)i+j λk for k = 2, ..., Nv − 1 with a uniform mesh in all directions. Here, a point on the  135  Appendix E. Proof of Existence of a Null Vector mesh can be described by (xi , yj , zk ) with spacing h. Computing the discretized Laplacian and setting it equal to zero we have ui+1,j,k + ui−1,j,k + ui,j+1,k + ui,j−1,k + ui,j,k+1 + ui,j,k−1 − 6ui,j,k = 0. h2 Thus, (−1)i+j λk−1 (λ2 − 6λ + 1) = 0, which has roots in reciprocal pairs, λ = √ 3 ± 32/2. √ If α is small (close to machine epsilon) then α(−1)i+j (3 + 32/2)k is nearly zero at k = 1 and grows exponentially in magnitude with k. Because of how we discretized ∇˜ g −b by taking averages, at the interface, the discretized ∇˜ g is zero along the interface making all components of b zero along the interface. Then if the discretized b is zero everywhere on the superconducting side then the remaining conditions are upheld. Note the periodicity implies that (−1)1+j = (−1)N +1+j for fixed j (and similarly switching i and j) and this could only happen if N is even. Therefore, uK(i,j,k, ) = α(−1)i+j λk δ  0  is a null vector for even values of N.  136  
