UBC Theses and Dissertations

Preconditioners for incompressible magnetohydrodynamics Wathen, Michael P. 2018




Preconditioners for incompressible magnetohydrodynamics

by

Michael P. Wathen

M.Sci, Mathematics, University of Birmingham, 2012
M.Sc, Computer Science, The University of British Columbia, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
in
The Faculty of Graduate and Postdoctoral Studies
(Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

December 2018

© Michael P. Wathen 2018

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled:

Preconditioners for incompressible magnetohydrodynamics

submitted by Michael Wathen in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science.

Examining Committee:

Chen Greif, Computer Science (Supervisor)
Robert Bridson, Computer Science (Supervisory Committee Member)
Uri Ascher, Computer Science (Supervisory Committee Member)
Ian Mitchell, Computer Science (University Examiner)
Brian Wetton (University Examiner)
Hans De Sterck, Monash University (External Examiner)

Abstract

The main goal of this thesis is to design efficient numerical solutions to incompressible magnetohydrodynamics (MHD) problems, with focus on the solution of the large and sparse linear systems that arise. The MHD model couples the Navier-Stokes equations, which govern fluid dynamics, and Maxwell's equations, which govern the electromagnetic effects.

We consider a mixed finite element discretization of an MHD model problem. Upon discretization and linearization, a large block 4-by-4 nonsymmetric linear system needs to be (repeatedly) solved. One of the principal challenges is the presence of a skew-symmetric term that couples the fluid velocity with the magnetic field.

We propose two distinct preconditioning techniques. The first approach relies on utilizing and combining effective solvers for the mixed Maxwell and the Navier-Stokes sub-problems.
The second approach is based on algebraic approximations of the inverse of the matrix of the linear system. Both approaches exploit the block structure of the discretized MHD problem. We perform a spectral analysis for ideal versions of the proposed preconditioners, and develop and test practical versions. Large-scale numerical results for linear systems of dimensions up to 10^7 in two and three dimensions validate the effectiveness of our techniques.

We also explore the use of the Conjugate Gradient (CG) method for saddle-point problems with an algebraic structure similar to the time-harmonic Maxwell problem. Specifically, we show that for a nonsingular saddle-point matrix with a maximally rank-deficient leading block, there are two sufficient conditions that allow for CG to be used.

An important part of the contributions of this thesis is the development of numerical software that utilizes state-of-the-art software packages. This software is highly modular, robust, and flexible.

Lay Summary

The goal of this thesis is to develop new, efficient solution techniques for large-scale multi-physics problems arising from fluid dynamics and electromagnetics. These problems appear in many physical applications, from glacial to astrophysical applications, and thus the development of models and simulations is of great importance.

For such applications, it is either impossible or impractical to consider a pen-and-paper approach, and hence, a computer-based solution is typically required. Large systems of unknowns describe the physical flow, and sophisticated computational solution techniques are needed.

We focus on developing modern solution algorithms and a large computer code that combines several state-of-the-art numerical software packages. Our algorithms exploit the underlying numerical and physical properties of the model.
Our analysis illustrates favorable convergence properties, and by applying our algorithms to several challenging physical problems, we show high scalability for problems of tens of millions of unknowns.

Preface

This thesis describes results in three research articles:

1. M. Wathen, C. Greif, and D. Schötzau. Preconditioners for Mixed Finite Element Discretizations of Incompressible MHD Equations. SIAM Journal on Scientific Computing, 39(6):A2993–A3013, 2017.

2. C. Greif and M. Wathen. Conjugate Gradient for Nonsingular Saddle-Point Systems with a Maximally Rank-Deficient Leading Block. Under revision for Journal of Computational and Applied Mathematics (13 pages).

3. C. Greif and M. Wathen. A Scalable Approximate Inverse-Based Preconditioner for Incompressible Magnetohydrodynamics. In final stages of preparation (15 pages).

Paper 1 has been published and is described in Chapter 2. Paper 3 is in the final stages of preparation and is presented in Chapter 3. Paper 2 is currently under revision and forms Chapter 4. We choose to present the papers in this order because Paper 1 and Paper 3 are strongly related to each other. I have taken a central role in the research, the analysis of the methods and the generated data, and the methodology for all three pieces of work. In Paper 1, my two senior co-authors provided guidance on the discretization process and the numerical algorithms. Papers 2 and 3 were co-authored with my research supervisor. In all three papers, I am responsible for the numerical solution algorithms and techniques derived, which is the core of my thesis. I have received guidance and advice from my research supervisor.
All three pieces of work have been written in majority by me, with the assistance and editing of the other authors.

This work includes numerical software written solely by me, which effectively combines the finite element software FEniCS [77] with the PETSc4py package (the Python interface for PETSc [7–9, 30]), the multigrid package HYPRE [37, 38], and the sparse direct solver MUMPS [4, 5]. These papers appear largely as they are published. However, for consistency purposes we have altered notation throughout the thesis.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Programs
Acknowledgements
Dedication

1 Introduction
  1.1 Model problem
    1.1.1 Incompressible magnetohydrodynamics
    1.1.2 Navier-Stokes equations
    1.1.3 Maxwell's equations
  1.2 Finite element discretization of PDEs
    1.2.1 Laplacian example
    1.2.2 Mixed discretizations of the model problems
  1.3 Iterative solution of sparse linear systems
    1.3.1 Krylov subspace methods
    1.3.2 Preconditioning
    1.3.3 Review of preconditioners for saddle-point systems
  1.4 Numerical software
    1.4.1 PDE discretization: FEniCS
    1.4.2 Solution of the linear system: PETSc
  1.5 Outline and contributions
  1.6 Notation

2 Preconditioners for Mixed Finite Element Discretizations of Incompressible MHD Equations
  2.1 Discretization
    2.1.1 Mixed finite element approximation
    2.1.2 Picard iteration
    2.1.3 Decoupling
    2.1.4 The linear systems
  2.2 Review of preconditioning techniques for the sub-problems
    2.2.1 Fluid flow preconditioner
    2.2.2 Maxwell preconditioner
  2.3 Preconditioners for the MHD system
    2.3.1 Reordering
    2.3.2 From an ideal to a practical preconditioner
    2.3.3 Implementation
  2.4 Numerical results
    2.4.1 2D smooth solution
    2.4.2 2D smooth solution parameter tests
    2.4.3 2D smooth solution on L-shaped domain
    2.4.4 2D singular solution on L-shaped domain
    2.4.5 2D Hartmann flow
    2.4.6 3D smooth solution

3 An Approximate Inverse-Based Preconditioner for Incompressible Magnetohydrodynamics
  3.1 Newton's method discretization of the MHD model
  3.2 A new formula for the inverse of the MHD coefficient matrix
  3.3 A new approximate inverse-based preconditioner
    3.3.1 A sparse approximation of the Schur complement
    3.3.2 A practical preconditioner
    3.3.3 Spectral analysis
  3.4 A block triangular preconditioner
  3.5 Numerical experiments
    3.5.1 3D Cavity driven flow
    3.5.2 Fichera corner
    3.5.3 MHD generator

4 Conjugate gradient for nonsingular saddle-point systems with a maximally rank-deficient leading block
  4.1 Problem statement
  4.2 Krylov Subspace
  4.3 Null-space decoupling
  4.4 Eigenvalue Analysis
  4.5 Numerical Experiments
    4.5.1 Krylov Subspace Solver Test
    4.5.2 Divergence and Non-Divergence Free Right-Hand-Side
    4.5.3 Variable Coefficients
    4.5.4 Fichera Corner Problem
    4.5.5 Gear Domain

5 Conclusions and future work
  5.1 Conclusions
  5.2 Future work

Bibliography

List of Tables

Table 1.1 Notation for block systems
Table 1.2 Notation for superscripts
Table 1.3 Notation for subscripts
Table 2.1 Algebraic multiplicity of eigenvalues for preconditioned matrices associated with M^MHD_S and M̃^MHD_S. Note that n_c = n_u − n_b + m_b.
Table 2.2 Orders of matrix entries for relevant discrete operators
Table 2.3 Solution methods for systems associated with separate block matrices within (2.35)
Table 2.4 2D smooth: Number of nonlinear iterations and number of iterations to solve the MHD system with tolNL = 1e-4, κ = 1, ν = 1 and νm = 10.
Table 2.5 Number of nonlinear iterations for various values of ν with κ = 1 and νm = 10.
Table 2.6 Average linear solver time for various values of ν with κ = 1 and νm = 10.
Table 2.7 Number of nonlinear iterations for various values of κ with tolNL = 1e-5, ν = 1 and νm = 10.
Table 2.8 Average linear solver time for various values of κ with ν = 1 and νm = 10.
Table 2.9 2D L-shaped: Number of nonlinear iterations and number of iterations to solve the MHD system with tolNL = 1e-4, κ = 1, ν = 1 and νm = 10. The iteration was terminated before completion for ℓ = 9 due to the computation reaching the prescribed time limit.
Table 2.10 2D singular solution on an L-shaped domain: Number of nonlinear iterations and number of iterations to solve the MHD system with tolNL = 1e-4, κ = 1, ν = 1 and νm = 10.
Table 2.11 2D Hartmann: Number of nonlinear iterations and number of iterations to solve the MHD system with tolNL = 1e-4, κ = 1, ν = 1 and νm = 1000.
Table 2.12 3D smooth: Number of nonlinear iterations and number of iterations to solve the MHD system with tolNL = 1e-4, κ = 1, ν = 1 and νm = 10.
Table 3.1 Block coefficient matrices, their corresponding continuous operators and approximate norms for the Fluid matrices
Table 3.2 Block coefficient matrices, their corresponding continuous operators and approximate norms for the Magnetic matrices
Table 3.3 Block coefficient matrices, their corresponding continuous operators and approximate norms for the Newton matrices
Table 3.4 Solution method for block systems associated with the preconditioners
Table 3.5 3D Cavity Driven using both the approximate inverse and block triangular preconditioner with parameters κ = 1, ν = 1, νm = 1 and Ha = 1.
Table 3.6 3D Cavity Driven using both the approximate inverse and block triangular preconditioner with parameters κ = 1e1, ν = 1e-1, νm = 1e-1 and Ha = √1000.
Table 3.7 Numerical cost of using M^MHD_A and M^MHD_B for mesh level ℓ = 6 in Table 3.6
Table 3.8 Fichera corner using the approximate inverse preconditioner. Setup 1: κ = 1e1, ν = 1e-2, νm = 1e-2 and Ha = √1e5, and Setup 2: κ = 1e1, ν = 1e-2, νm = 1e-3 and Ha = 1000.
Table 3.9 MHD generator using the approximate inverse preconditioner with parameters κ = 1, ν = 1e-1, νm = 1e-1 and Ha = 10
Table 4.1 Eigenpairs and algebraic multiplicities for the preconditioned matrices M̂⁻¹_I K and M⁻¹_I K. We use the notation bE ∈ Null(E), bN ∈ Null(N), rR ∈ Null(R), and l = Dim(Null(R)).
Table 4.2 Krylov solver test: time and iteration results for using one iteration of the preconditioner with FCG, MINRES and GMRES. The viscosity for these tests is νm = 1e-2.
Table 4.3 Block preconditioner test: time and iteration results using the block diagonal (M_P), block upper triangular (M_U) and block lower triangular (M_L) preconditioners with νm = 1e-2.
Table 4.4 Divergence-free vs. non-divergence-free right-hand-side: time and iteration results using the block diagonal preconditioner, M_P, for divergence and non-divergence free right-hand-sides with νm = 1e-2.
Table 4.5 Variable coefficients: time and iteration results using the block diagonal preconditioner, M, for various different values of a.
Table 4.6 Variable coefficients: time and iteration results using the block diagonal and triangular preconditioners for a = 100.
Table 4.7 Fichera corner: time and iteration results using the block diagonal preconditioner, M_P, for various different values of νm.
Table 4.8 Gear domain: time and iteration results using the block diagonal preconditioner, M_P, for various different values of νm.
List of Figures

Figure 1.1 Unit square mesh generated by FEniCS
Figure 1.2 L-shaped domain generated using mshr. We locally refine cells that are within a distance of 0.4 from the origin.
Figure 2.1 Eigenvalues of preconditioned matrices for the smooth solution given in this section. The number of degrees of freedom for these matrices is 724.
Figure 3.1 Preconditioned eigenvalue plots for (a) the approximate inverse preconditioner in (3.12) and (b) the block triangular preconditioner in (3.21) using the smooth solution given in (3.31). The dimension of the matrices in this example is 1399 × 1399.
Figure 4.1 Eigenvalue distribution of the preconditioned matrix M̃⁻¹_I K using randomly generated blocks with n = 100, m = 20, and l = 5
Figure 4.2 Fichera corner domain for mesh level ℓ = 1
Figure 4.3 Gear domain for mesh level ℓ = 5

List of Programs

1.1 mshr: construction of a locally refined mesh on an L-shaped domain in Figure 1.2. We refine the cells that are within a certain distance from the origin.
1.2 FEniCS: construction of the bilinear forms for the MHD model in (1.1)
1.3 PETSc: our implementation of a block diagonal preconditioner
1.4 PETSc: example of using a Python class as a preconditioner

Acknowledgements

First and foremost I would like to thank my supervisor, Chen Greif. Your guidance, help, and encouragement have been invaluable throughout my time at UBC.
To my supervisory committee, Uri Ascher and Robert Bridson – thank you for your time, support and helpful feedback.

There are too many friends that have supported me along the way, but I would like to say a special thank you to Mike Mitchell, Michael Firmin, Anna Mittelholz, Anna Grau Galofre, Thibaut Astic, Rowan Cockett, Hannah Loch, Greg Bex, Lili Geelhand, Joli Fooken, Colin Rowell, Keelin Scully, Julie Nutini, Jilmarie Stephens, and Luna. You are all a little crazy, but I suppose that's why we have been friends for so many years. Next, I would like to thank two very important places, Bean Around The World and Boulevard coffee shops. I shudder to think how much I have spent at these places in the six years I have been in Vancouver, but what I do know is that without either place this thesis wouldn't be what it is today.

Finally, I would like to thank my family for their unwavering support throughout my PhD. To Grandpa, I've enjoyed every one of our "Skrype" talks, but now we'll have to make do with just telephone calls instead. To my parents, it's simple: this thesis wouldn't have happened without you.

To Granny and Elizabeth – the old and the new

Chapter 1

Introduction

The central topic of this thesis is the numerical solution of elliptic partial differential equations (PDEs) that model electromagnetics problems. Developing the continuous model, namely the PDE and its associated boundary conditions, requires significant physical and mathematical effort. Once the continuous model is given, the main steps for the numerical solution may be described as follows:

1. discretization and linearization (when applicable);
2. solution of the linear system;
3. implementation.

Our interest is in the numerical solution of the problem, and we primarily focus on items 2 and 3.

In this introductory chapter, we first look at the model problems that we consider (Section 1.1). Following that, we briefly review finite element discretizations and relevant literature (Section 1.2).
We then give an overview of the iterative solution of large and sparse linear systems (Section 1.3). Next, we discuss numerical software for solving these problems (Section 1.4). Finally, we provide an outline and the contributions of this thesis (Section 1.5).

1.1 Model problem

We consider the incompressible magnetohydrodynamics (MHD) model that describes the behavior of electrically conductive incompressible fluids (liquid metals, plasma, salt water, etc.) in an electromagnetic field. It has a number of important applications in technology and industry, along with geophysical and astrophysical applications. Some such applications are electromagnetic pumping, aluminum electrolysis, the Earth's molten core, and solar flares. See [45, 111] for a comprehensive description of the physical problem.

The two main components of the MHD model are the Navier-Stokes equations, which govern the fluid dynamics, and Maxwell's equations, which govern the electromagnetics. These two problems are strongly coupled in the MHD model.

1.1.1 Incompressible magnetohydrodynamics

Following the formulation considered in [49, 95], the governing equations for the steady-state incompressible MHD model are:

    −ν∆u + (u · ∇)u + ∇p − κ(∇ × b) × b = f   in Ω,   (1.1a)
    ∇ · u = 0   in Ω,   (1.1b)
    κνm∇ × (∇ × b) + ∇r − κ∇ × (u × b) = g   in Ω,   (1.1c)
    ∇ · b = 0   in Ω.   (1.1d)

Here, Ω is a bounded Lipschitz polygonal or polyhedral domain of R^d for d = 2, 3 with boundary ∂Ω. The unknowns are: the velocity u, the hydrodynamic pressure p, the magnetic field b, and the Lagrange multiplier r associated with the divergence constraint on the magnetic field. We comment on the role of r in Section 1.1.3. The functions f and g represent external forcing terms. The equations (1.1) are characterized by three dimensionless parameters: the hydrodynamic Reynolds number Re = 1/ν, the magnetic Reynolds number Rm = 1/νm, and the coupling number κ.

In the case of liquid metals, the ratio between the magnetic viscosity, νm, and the fluid viscosity, ν, tends to be small.
For example, Mercury has a ratio of about 10^-7, with νm ≈ 10^4–10^5, ν ≈ 10^-2–10^-4 and κ ≈ 10^2–10^9; cf. [6]. In [6] the authors define strong coupling for 10^2 ≤ κ ≤ 10^9. For more discussion of the physical parameters we refer the reader to [6, 44, 45, 72, 90].

For simplicity, we consider homogeneous Dirichlet boundary conditions:

    u = 0,   n × b = 0,   r = 0   on ∂Ω,   (1.2)

with n being the unit outward normal on ∂Ω.

1.1.2 Navier-Stokes equations

The steady-state incompressible Navier-Stokes equations that govern incompressible fluid flow are given by:

    −ν∆u + (u · ∇)u + ∇p = f   in Ω,   (1.3a)
    ∇ · u = 0   in Ω.   (1.3b)

Again, here u and p are the velocity and pressure of the fluid, f is the forcing term and ν is the kinematic viscosity. For a classical overview of the Navier-Stokes equations we refer the reader to [1], and for a comprehensive numerical study of this model, see [35].

One of the key numerical challenges in the approximation and numerical solution of the Navier-Stokes equations is the presence of the nonlinear convection term (u · ∇)u. This term requires the use of a nonlinear iteration scheme. The simplest common choice of a nonlinear solver is the linearly convergent Picard iteration. For a given initial guess u^0, the approximate solution at the (k+1)st iteration, u^(k+1), is given by:

    −ν∆u^(k+1) + (u^k · ∇)u^(k+1) + ∇p = f   in Ω,
    ∇ · u^(k+1) = 0   in Ω,

where k = 0, 1, . . .. Here, the scalar pressure, p, appears in linear form and is not iterated upon.

An alternative to the Picard iteration is the locally quadratically convergent Newton's method. That is, Newton's method will converge quadratically if the initial guess is "close enough" to the solution. To form a suitable initial guess, it is common to use the solution of the Picard scheme after a few iterations. Newton's method is highly effective, and is considered to be the method of choice for problems such as the Navier-Stokes equations. This scheme is slightly more elaborate than the Picard iteration; see [35, Section 8.2.2].
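The contrast between the linearly convergent Picard (fixed-point) iteration and the quadratically convergent Newton's method can be illustrated on a scalar stand-in problem. The following is a toy sketch, not the discretized Navier-Stokes system; the model equation u = cos(u) is chosen purely for illustration:

```python
import math

def picard(g, u0, tol=1e-12, max_iter=200):
    """Fixed-point (Picard-type) iteration u_{k+1} = g(u_k)."""
    u, k = u0, 0
    while abs(g(u) - u) > tol and k < max_iter:
        u, k = g(u), k + 1
    return u, k

def newton(f, df, u0, tol=1e-12, max_iter=50):
    """Newton's method for f(u) = 0: u_{k+1} = u_k - f(u_k)/f'(u_k)."""
    u, k = u0, 0
    while abs(f(u)) > tol and k < max_iter:
        u, k = u - f(u) / df(u), k + 1
    return u, k

# Toy nonlinear problem: find u with u = cos(u), i.e. f(u) = u - cos(u) = 0.
u_p, it_p = picard(math.cos, 1.0)
u_n, it_n = newton(lambda u: u - math.cos(u),
                   lambda u: 1.0 + math.sin(u), 1.0)
print(f"Picard: u = {u_p:.10f} after {it_p} iterations")
print(f"Newton: u = {u_n:.10f} after {it_n} iterations")
```

Picard needs dozens of iterations to reach the tolerance while Newton needs only a handful, mirroring the linear versus quadratic convergence behavior discussed above.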
In Chapter 3, we will discuss Newton's method for the MHD model.

Upon discretization, the convection term (u · ∇)u causes nonsymmetry, or possibly skew-symmetry, of the matrix within the discrete Navier-Stokes model. We also note that as ν → 0, convection dominates the diffusion term (ν∆u), which causes turbulent (nonviscous) flow. This may cause significant difficulties both in terms of the solution method and the discretization used. See [35] for an extensive discussion.

1.1.3 Maxwell's equations

The second coupled subproblem in the MHD model (1.1) is the time-harmonic Maxwell equation in mixed form [51, 60, 78]. It is given as follows:

    ∇ × νm∇ × b + ∇r = g   in Ω,   (1.5a)
    ∇ · b = 0   in Ω,   (1.5b)

where b is the magnetic field and r is the Lagrange multiplier associated with the divergence constraint on the magnetic field. The constant νm is the magnetic viscosity. We note that it is not necessary to introduce Maxwell's equations with an incompressibility constraint and a Lagrange multiplier; see for example [60, 61, 88]. The following identities involving the curl operator are useful:

    ∇ · (∇ × a) = 0   and   ∇ × (∇φ) = 0,   (1.6)

where a is an arbitrary vector field and φ is an arbitrary scalar field. Taking the divergence of (1.5a) and using (1.6), we obtain the Poisson problem

    ∆r = ∇ · g   in Ω,   r = 0   on ∂Ω.

Since g is divergence-free in many physically relevant applications, the solution r of the above Poisson problem is zero. Nevertheless, from a numerical point of view the introduction of the Lagrange multiplier may be beneficial in terms of numerical stability; see [33, 49]¹.

From (1.6), the range of the gradient operator lies in the kernel of the curl operator. This may cause numerical difficulties due to the large dimension of the null-space of the discrete curl operator. Numerical methods often attempt to exploit this null-space. This is the approach we take.

1.2 Finite element discretization of PDEs

In this section, we briefly discuss the essential parts of the numerical discretization of PDEs of the kind we consider.
Throughout, we consider boundary value problems andsteady-state PDEs. We cover linear and nonlinear problems.Given a continuous model expressed as a PDE, a typical numerical solution proce-dure includes a discretization process. We first generate a mesh. Two common choicesof tessellation of this mesh are quadrilaterals or triangles. An example of a simpleuniform square mesh with a triangular tessellation is given in Figure 1.1.Once the mesh is generated, the next goal is to approximate the continuous problemby a discrete model. In practice this amounts to converting the differential equationto a difference equation. There are three common choices of discretization techniques:finite differences [79, 100], finite volumes [42, 70], and finite elements [17, 99]. Wefocus on finite element discretizations, which are based on considering solutions in weakform, expressed in terms of basis functions of a specific space. These methods have arich mathematical theory behind the numerical approximation, and can relatively easilyhandle complex domains and unstructured meshes.1The stability result is discussed in detail in [33, Section 3.3], where a nonzero wave number isincorporated into the equation and is then taken to zero. The Helmholtz decomposition can be usedalong with an inf-sup stability analysis.4Figure 1.1: Unit square mesh generated by FEniCSA general finite element method approach follows the steps given bellow:1. Obtain a weak formulation2. Choose an approximation space for the so-called trial functions3. Choose an approximation space for the so-called test functions4. Solve the linear systemThe meaning of trial and test functions is illustrated in the example that follows.1.2.1 Laplacian exampleHere we present a finite element discretization of the “hello world” numerical test prob-lem: Poisson’s equation. For simplicity we consider homogeneous Neumann boundaryconditions to form the problem as follows:∆u = f in Ω with∂u∂n= 0 on ∂Ω. 
(1.7)

The standard weak form of the Poisson problem can be obtained by multiplying (1.7) by a smooth test function v, to obtain:

    −∫_Ω v∆u = ∫_Ω vf.   (1.8)

The function u is considered the trial function. Using integration by parts and the divergence theorem, the left-hand side of (1.8) can be written as:

    −∫_Ω v∆u = ∫_Ω ∇u · ∇v − ∫_Ω ∇ · (v∇u) = ∫_Ω ∇u · ∇v − ∫_∂Ω v ∂u/∂n,

where ∂u/∂n denotes the directional derivative in the direction of the outward normal on the boundary. Using the homogeneous Neumann condition (∂u/∂n = 0 on ∂Ω), the weak form is given by:

    ∫_Ω ∇u · ∇v = ∫_Ω vf.   (1.9)

The next step for a finite element discretization is to determine the approximation spaces for the trial (u) and test (v) functions. For a given suitable Sobolev space, V, consider a solution u ∈ V. The most intuitive choice of approximation space for the trial functions is the same as that for the test functions; thus, v ∈ V. This is called the Galerkin finite element method. We define φ_i as basis functions of V and represent the test and trial functions with respect to this basis as:

    v = ∑_{j=1}^n v_j φ_j   and   u = ∑_{i=1}^n u_i φ_i.   (1.10)

Substituting (1.10) into the weak form in (1.9) gives the matrix form as:

    ∑_{i=1}^n ∑_{j=1}^n v_j ∫_Ω ∇φ_i · ∇φ_j u_i = ∑_{j=1}^n v_j ∫_Ω φ_j f.   (1.11)

Since (1.11) holds for arbitrary coefficients v_j, we can re-write it as:

    ∑_{i=1}^n ∫_Ω ∇φ_i · ∇φ_j u_i = ∫_Ω φ_j f   for j = 1, 2, . . . , n.

From this an n × n linear system is defined:

    Ku = b,

where

    K_{i,j} = ∫_Ω ∇φ_i · ∇φ_j   and   b_j = ∫_Ω φ_j f.

Here, K and b are known as the stiffness matrix and the load vector, respectively.

1.2.2 Mixed discretizations of the model problems

It is common in many PDE settings that the continuous model has multiple unknowns. For example, the Navier-Stokes equations (1.3) have two variables, velocity and pressure. For such models it is common to consider different finite elements for each variable. This is called a mixed finite element discretization. This thesis considers mixed finite element discretizations for electromagnetic and fluid flow problems [35, 95, 111].

In Section 1.1 we discussed the three model problems that are our focus.
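Returning to the Laplacian example, the Galerkin assembly of the stiffness matrix K and load vector b can be sketched in one dimension with piecewise-linear "hat" basis functions. This is a minimal NumPy illustration, not code from the thesis (which uses FEniCS), and it imposes homogeneous Dirichlet conditions instead of the Neumann conditions above so that the resulting stiffness matrix is invertible:

```python
import numpy as np

# 1D Galerkin assembly with piecewise-linear hat functions on [0, 1],
# solving -u'' = f with u(0) = u(1) = 0 (Dirichlet, so K is nonsingular).
n_el = 16                            # number of elements
h = 1.0 / n_el
nodes = np.linspace(0.0, 1.0, n_el + 1)

K = np.zeros((n_el + 1, n_el + 1))   # stiffness matrix K_ij = int phi_i' phi_j'
b = np.zeros(n_el + 1)               # load vector b_j = int phi_j f

f = lambda x: 1.0                    # source term

for e in range(n_el):                # element-by-element assembly
    i, j = e, e + 1
    # local stiffness: integrals of phi_a' phi_b' over one element
    K_loc = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    # local load via midpoint quadrature of phi_a * f
    xm = 0.5 * (nodes[i] + nodes[j])
    b_loc = f(xm) * h * np.array([0.5, 0.5])
    K[np.ix_([i, j], [i, j])] += K_loc
    b[[i, j]] += b_loc

# impose u(0) = u(1) = 0 by restricting the system to interior nodes
interior = np.arange(1, n_el)
u = np.zeros(n_el + 1)
u[interior] = np.linalg.solve(K[np.ix_(interior, interior)], b[interior])

# for f = 1 the exact solution is u(x) = x(1 - x)/2
print(np.max(np.abs(u - nodes * (1 - nodes) / 2)))
```

For f = 1 the piecewise-linear Galerkin solution reproduces the exact solution at the nodes, a well-known property of P1 elements in one dimension.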
Research into the discretization of these model problems is a very active area, ranging from classical methods to new discretizations. Here we give a brief overview of possible mixed discretization strategies for each problem.

Navier-Stokes equations

Computational fluid dynamics is of huge importance. Examples of applications are flood modeling, computational aerodynamics, glacial flows and biomedical engineering. There are many choices for a mixed finite element discretization of the Navier-Stokes equations (1.3). For mixed discretizations it is important to consider what are known as stable discretizations. Suppose the weak formulation of the Navier-Stokes equations is defined using the finite-dimensional spaces X_h and M_h. Then for (X_h, M_h) to be a stable discretization, the finite-dimensional spaces must satisfy the Ladyzhenskaya-Babuška-Brezzi (LBB) (or inf-sup) compatibility condition [19]. For a positive γ independent of the mesh size h, the LBB condition is:

    min_{q_h ≠ constant} max_{v_h ≠ 0} |(q_h, ∇ · v_h)| / (‖v_h‖_{1,Ω} ‖q_h‖_{0,Ω}) ≥ γ,

where (v_h, q_h) ∈ (X_h, M_h), ‖v_h‖_{1,Ω} = (∫_Ω v_h · v_h + ∇v_h : ∇v_h)^{1/2}, ‖q_h‖_{0,Ω} = ‖q_h − (1/|Ω|)∫_Ω q_h‖, ∇u : ∇v = ∑_{i,j=1}^d (∇u)_{ij}(∇v)_{ij}, and d is the spatial dimension.

One of the most widely used LBB-stable mixed elements is the Taylor-Hood element [101]. The lowest-order Taylor-Hood element is (P2/P1), where the velocity field is approximated with nodal quadratic polynomials and the pressure variable with nodal linear polynomials. This is the discretization we use throughout the thesis. We note, though, that low-order stabilized elements are also widely used; see, for example, [35, Section 8.3].

Maxwell's equations

Similarly to the Navier-Stokes equations, the mixed Maxwell problem is discretized using a mixed finite element method. The mixed discretization of the Maxwell problem (1.5) has been extensively studied; see, for example, [24, 84, 106]. It is well known that for nonconvex domains, nodal approximations of the magnetic field, b, often fail to capture the singular solution [28].
However, for smooth domains nodal approximations appear to be effective. Thus, for the mixed formulation, the common choice of finite element spaces is to use H(curl) conforming elements for the magnetic field and H¹ elements for the multiplier. These are more natural for the approximation of the curl operator and can be applied in a seamless fashion, without penalty or stabilization, to problems that involve nonsmooth domains, singularities, and other challenging settings. This leads to a mixed finite element discretization based on H(curl) conforming edge elements, known as Nédélec [81] elements, for b, and nodal elements of equal order for r. We consider first order Nédélec elements for the magnetic field and linear nodal elements for the multiplier variables.

MHD model

The mixed finite element discretization of the MHD model incorporates many of the proposed techniques for the two subproblems. The common discretizations for the MHD model are derived from the three-field formulation (u, p, b) or the four-field formulation given in (1.1). For the three-field formulation of the MHD model, (1.1) is considered without the Lagrange multiplier. In [44, 53] the authors propose an exact penalty formulation of a stationary MHD model. The exact penalty formulation reduces the curl-curl operator in (1.1c) and the incompressibility condition in (1.1d) to a vector Laplacian using the vector identity:

    ∆a = ∇(∇ · a) − ∇ × (∇ × a).                                          (1.12)

This discretization is based on standard nodal elements for the approximation of the magnetic field (b). As with Maxwell's equations in isolation, nodal discretizations of the magnetic field are effective in smooth settings, but they often fail to capture singularities in nonsmooth domains; see [95] and the references therein.

In [95], the author introduces a mixed function space which utilizes H¹ elements for the velocity field and multiplier variable, L² elements for the pressure variable and H(curl) elements for the magnetic field.
Therefore, the proposed mixed finite element discretization we use incorporates the two mixed discretizations given for the Navier-Stokes and Maxwell subproblems. This entails using the lowest order Taylor-Hood elements for (u, p) and the lowest order mixed Nédélec approximation for (b, r).

1.3 Iterative solution of sparse linear systems

In Section 1.2, we saw that the finite element discretization leads to large and sparse linear systems. In this section, we discuss iterative solution methods for such linear systems. Consider the linear system²

    Kx = b,                                                               (1.13)

where K is a nonsingular matrix of dimension n × n and b is a vector of length n. If the dimension n is sufficiently small, direct solvers are the method of choice. However, we are considering large and sparse linear systems arising from three-dimensional problems, for which direct methods (that is, techniques based on matrix decompositions) are infeasible to use due to computational time and memory requirements. Such systems therefore require an iterative solution method. The state-of-the-art class of iterative methods is the class of Krylov subspace methods. There are many excellent books in this area; see, for example, [48, 93].

1.3.1 Krylov subspace methods

For a given initial guess, x_0, the initial residual of (1.13) is given as

    r_0 = b − Kx_0.

The k-dimensional Krylov subspace is defined as:

    K_k(K; r_0) = span{r_0, Kr_0, K²r_0, . . . , K^{k−1}r_0}.

Krylov subspace methods are based on finding an approximate solution, x_k, in the Krylov subspace:

    K^{−1}b ≈ x_k = x_0 + Q_k y_k,                                        (1.14)

where Q_k ∈ R^{n×k} is a matrix whose columns form an orthogonal basis for the subspace K_k(K; r_0) and y_k is a vector of length k. The procedure that forms the orthogonal basis of the Krylov subspace is the Arnoldi process for nonsymmetric matrices and the Lanczos process for symmetric matrices. Here we will give a brief outline of the Arnoldi and Lanczos methods, which respectively form the foundation of the GMRES [94] and MINRES [83] methods.
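Before outlining those processes, the minimum-residual idea can be illustrated with a small NumPy sketch (not from the thesis code) that forms the Krylov subspace explicitly and minimizes the residual over it. This is purely illustrative: practical methods never form the powers K^j r_0, which quickly become numerically dependent; GMRES instead builds the basis with the Arnoldi recurrence described below.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
K = np.eye(n) + 0.3 * rng.standard_normal((n, n))   # small nonsymmetric test matrix
b = rng.standard_normal(n)                          # take x0 = 0, so r0 = b

residuals = []
for k in range(1, n + 1):
    # Orthonormalize the power basis {r0, K r0, ..., K^{k-1} r0} of K_k(K; r0)
    V = np.column_stack([np.linalg.matrix_power(K, j) @ b for j in range(k)])
    Q, _ = np.linalg.qr(V)
    # Minimize ||b - K(x0 + Q y)||_2 over the k-dimensional subspace
    y, *_ = np.linalg.lstsq(K @ Q, b, rcond=None)
    residuals.append(np.linalg.norm(b - K @ (Q @ y)))

print(residuals)   # non-increasing, and essentially zero once k = n
```

Because the subspaces are nested, the minimized residual norm cannot grow with k, and once k = n the subspace is the whole space and the residual vanishes.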
Both GMRES and MINRES are minimum residual methods; the approximate solution at the kth iteration is given by the minimization problem:

    min_{x_k ∈ x_0 + K_k(K; r_0)} ‖b − Kx_k‖₂.                            (1.15)

GMRES and MINRES are methods that form an orthogonal basis of the Krylov subspace as a first step. Other methods, such as BiCG [39], BiCGstab [104] and QMR [41], form a bi-orthogonal basis which relies on matrix-vector products using both K and Kᵀ; we will not consider these methods.

² In this section we switch to a slightly different notation, with x signifying the vector of unknowns.

The Arnoldi process is written as:

    KQ_k = Q_{k+1} H_{k+1,k},                                             (1.16)

where H_{k+1,k} ∈ R^{(k+1)×k} is an upper Hessenberg matrix. Here, Q_{k+1} ∈ R^{n×(k+1)} contains k + 1 orthogonal vectors spanning the (k+1)-dimensional Krylov subspace, and Q_k is the matrix that contains the first k columns of Q_{k+1}. This is indeed the matrix referred to in (1.14). If we pre-multiply (1.16) by Q_kᵀ then we obtain:

    Q_kᵀ K Q_k = H_{k,k},                                                 (1.17)

where H_{k,k} contains the first k rows of H_{k+1,k}. For the symmetric case it is clear that H_{k,k} in (1.17) is symmetric; a symmetric upper Hessenberg matrix is tridiagonal, so in this case we write T_{k+1,k} in place of H_{k+1,k}, where the leading block T_{k,k} is tridiagonal and T_{k+1,k} has one extra row. This is called the Lanczos process and is written as

    KQ_k = Q_{k+1} T_{k+1,k}.

The tridiagonal matrix T_{k+1,k} is significantly sparser than the upper Hessenberg matrix H_{k+1,k}, and indeed carrying out the Lanczos process in the symmetric case is considerably cheaper than the Arnoldi process in the nonsymmetric case.

From now on we will refer to the Arnoldi process for nonsymmetric matrices, noting that the same can be done for the Lanczos process. Substituting (1.16) into (1.15), y_k must solve the least-squares problem:

    min_{y_k} ‖b − K(x_0 + Q_k y_k)‖₂ = min_{y_k} ‖r_0 − KQ_k y_k‖₂
                                      = min_{y_k} ‖r_0 − Q_{k+1} H_{k+1,k} y_k‖₂
                                      = min_{y_k} ‖β ξ_1 − H_{k+1,k} y_k‖₂,     (1.18)

where β = ‖r_0‖₂ and ξ_1 is the first column of the identity matrix of dimension k + 1. Computing y_k in (1.18) is thus a small least-squares problem. The common way of solving it is via a QR factorization of H_{k+1,k}.
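The Arnoldi relations (1.16) and (1.17) are easy to check numerically. The following is an illustrative NumPy sketch of the Arnoldi process (modified Gram-Schmidt variant), not the implementation used in the thesis:

```python
import numpy as np

def arnoldi(K, r0, k):
    """k steps of the Arnoldi process with modified Gram-Schmidt.
    Returns Q (n x (k+1), orthonormal columns) and the upper Hessenberg
    H ((k+1) x k) satisfying K Q[:, :k] = Q H, i.e. relation (1.16)."""
    n = r0.size
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(k):
        w = K @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)     # assumes no breakdown (> 0)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(1)
n, k = 50, 10
K = rng.standard_normal((n, n))
r0 = rng.standard_normal(n)
Q, H = arnoldi(K, r0, k)

print(np.allclose(K @ Q[:, :k], Q @ H))          # relation (1.16) holds
print(np.allclose(Q.T @ Q, np.eye(k + 1)))       # orthonormal Krylov basis

# For a symmetric matrix the projected matrix is tridiagonal (Lanczos case)
S = K + K.T
Qs, Hs = arnoldi(S, r0, k)
print(np.max(np.abs(np.triu(Hs[:k, :], 2))))     # ~0 above first superdiagonal
```

Running it confirms the factorization and the orthonormality of the basis, and that for a symmetric matrix the entries of H above the first superdiagonal vanish to rounding error, as claimed for the Lanczos process.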
The QR factorization of H_{k+1,k} is performed via Givens rotations in a sequential fashion, exploiting the structure to optimize the computations. For more details on this we refer the reader to [48, Chapter 2.4].

In summary, minimum residual methods are based on minimizing the residual with respect to x_k ∈ x_0 + K_k(K; r_0). They start by constructing an orthogonal basis of the Krylov subspace. Using this, they compute the approximate iterates via small least-squares problems which use an upper Hessenberg matrix in the nonsymmetric case or a tridiagonal matrix in the symmetric case.

Eigenvalues

The following description is standard and can be found in sparse linear algebra textbooks; see, for example, [48, 93]. The solution at the kth iteration is given in (1.14), where we recall that Q_k is made up of orthonormal vectors that span the Krylov subspace; thus,

    x_k − x_0 ∈ K_k(K; r_0)   for k = 1, 2, . . . .

Therefore, for some coefficients β_i, i = 0, 1, . . . , k − 1, the approximate solution iterates can be expressed as

    x_k − x_0 = Σ_{i=0}^{k−1} β_i K^i r_0.

We can thus write the solution iterate in terms of a polynomial of K:

    x_k = x_0 + q_{k−1}(K) r_0,                                           (1.19)

where q_{k−1} is a polynomial of degree k − 1. By premultiplying (1.19) by K and subtracting from b, we can represent the residual at the kth iteration in terms of a polynomial of K acting on the initial residual:

    b − Kx_k = b − Kx_0 − K q_{k−1}(K) r_0,
         r_k = p_k(K) r_0,                                                (1.20)

where p_k(x) = 1 − x q_{k−1}(x) is a polynomial of degree k with p_k(0) = 1. Since GMRES and MINRES are both minimum residual methods,

    min_{x_k ∈ x_0 + K_k(K; r_0)} ‖b − Kx_k‖₂ = min_{p_k ∈ P_k, p_k(0)=1} ‖p_k(K) r_0‖₂.   (1.21)

Suppose that K is diagonalizable; then

    K = XΛX^{−1},                                                         (1.22)

where the columns of X are the right eigenvectors of K and Λ is a diagonal matrix of the eigenvalues of K.
Substitution of (1.22) into (1.21) gives

    min_{p_k ∈ P_k, p_k(0)=1} ‖p_k(K) r_0‖ = min_{p_k ∈ P_k, p_k(0)=1} ‖X p_k(Λ) X^{−1} r_0‖
                                           ≤ ‖X‖ ‖X^{−1}‖ min_{p_k ∈ P_k, p_k(0)=1} ‖p_k(Λ)‖ ‖r_0‖.   (1.23)

In the symmetric case, X is orthogonal and therefore (1.23) reduces to

    min_{p_k ∈ P_k, p_k(0)=1} ‖p_k(K) r_0‖ ≤ min_{p_k ∈ P_k, p_k(0)=1} ‖p_k(Λ)‖ ‖r_0‖.

To obtain fast convergence, we require p_k to be small at the diagonal elements of Λ. This can be readily accomplished if K has a small number of distinct eigenvalues, or if the eigenvalues are strongly clustered.

1.3.2 Preconditioning

We saw in Section 1.3.1 that Krylov subspace methods may rapidly converge if the eigenvalue distribution of K is favorable. However, in general, we have no control over the spectral distribution of K. Therefore, we introduce a matrix M, known as a preconditioner, to enable us to efficiently cluster eigenvalues and speed up convergence of the Krylov subspace solver. The preconditioned linear system is given by

    M^{−1}Kx = M^{−1}b,                                                   (1.24)

where M is the preconditioner that approximates K to some degree. This is called left preconditioning because the inverse of the preconditioner acts on the left. Instead of left preconditioning as in (1.24), it is possible to apply right preconditioning,

    KM^{−1}y = b,   x = M^{−1}y,

or split preconditioning,

    M_1^{−1} K M_2^{−1} y = M_1^{−1} b,   x = M_2^{−1} y,

where M = M_1 M_2. When either left, right or split preconditioning can be applied, there appears to be little evidence that any one of these techniques speeds up the convergence of the Krylov subspace solver more than the others. However, there are differences among them in terms of the norm that is minimized and the ability to use flexible methods [92]. Henceforth, we will focus on left preconditioning.

Generally speaking, an effective preconditioner has the following properties:

1. the construction and application of M^{−1} is computationally efficient;
2. the eigenvalues of the matrix M^{−1}K are clustered.

It is often the case that it is infeasible to explicitly construct either M^{−1} or the preconditioned matrix M^{−1}K.
In practice, then, the iterative solver operates on K and applies M^{−1} implicitly at each iteration. Thus, the action of M^{−1} on a single vector, which occurs within the iterative solver, needs to be "fast".

The simplest examples of preconditioners arise from sparse factorizations or approximations of K. For example, these methods include the Jacobi preconditioner, which takes the diagonal of K as the preconditioner, and the Gauss-Seidel preconditioner, which uses the lower triangular part of K as the preconditioner. These methods can work well for a small class of problems; see, for example, [107]. However, in general the effectiveness of these preconditioners is limited. More sophisticated approaches are sought, such as incomplete factorizations, multigrid methods, and so on. For a full review of current preconditioning methods we refer the reader to [108].

Generally speaking, preconditioning can arguably be split into two main categories:

1. operator-based or PDE preconditioning,
2. "black-box" preconditioning.

Operator-based preconditioning typically arises from PDE-based problems and utilizes the underlying physical properties and discretization of the problem. The aim of such preconditioning approaches is to have scalable iterations. That is, as the mesh is refined and correspondingly the matrix dimensions increase, the number of iterations that the Krylov subspace solver takes to converge remains constant. In contrast, the "black-box" preconditioning approach is based on the algebraic nature of the problem and constructs preconditioners using the matrix entries. Scalable "black-box" preconditioning approaches are typically harder to develop because there is no underlying application, and hence, there is less information on the matrix properties.
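Returning to the simple algebraic preconditioners mentioned above, the Jacobi idea can be illustrated with a toy NumPy example on a synthetic symmetric positive definite matrix whose diagonal entries vary over four orders of magnitude; diagonal scaling then dramatically improves the conditioning, and hence the clustering of the preconditioned spectrum. The matrix is artificial and chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# A synthetic SPD matrix with a badly scaled diagonal
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.diag(10.0 ** rng.uniform(0.0, 4.0, n))

# Jacobi preconditioner M = diag(A); its symmetric application is
# D^{-1/2} A D^{-1/2}, which has the same eigenvalues as M^{-1} A
d = np.sqrt(np.diag(A))
A_prec = A / np.outer(d, d)

print(np.linalg.cond(A))        # large: the diagonal spans four decades
print(np.linalg.cond(A_prec))   # far smaller after diagonal scaling
```

This is exactly the situation where Jacobi helps: the ill-conditioning comes from scaling. When the ill-conditioning has a different origin (e.g., a discretized Laplacian on a fine mesh), diagonal scaling alone gains little, which is why the more sophisticated approaches above are needed.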
In this thesis, we mainly consider operator-based preconditioning techniques.

1.3.3 Review of preconditioners for saddle-point systems

The mixed discretizations considered in this thesis lead to what are known as saddle-point systems [10, 108]. These linear systems are block structured and they feature two block variables, u and v. We call u and v the primary and secondary variables, respectively. For example, in the Navier-Stokes equations the primary variable is the fluid velocity and the secondary variable is the pressure of the fluid. These systems are comprised of block matrices of the form:

    ( F   Bᵀ ) (u)   (f)
    ( B   0  ) (v) = (g),                                                 (1.25)

where the coefficient matrix is denoted by K, F ∈ R^{n×n}, and B ∈ R^{m×n} with m < n. We consider both nonsingular and singular leading blocks F.

We primarily consider two preconditioning methods which arise from block LDL decompositions of (1.25), or a regularized version thereof with −W in the trailing block. First, if rank(F) = n, then it is possible to directly invert F, and thus an ideal preconditioner for (1.25) would be of the form:

    M_1 = ( F   0          )
          ( 0   BF^{−1}Bᵀ  ),                                             (1.26)

where BF^{−1}Bᵀ is known as the dual Schur complement. It has been shown in [68, 80] that the preconditioned matrix M_1^{−1}K has three distinct eigenvalues. If F is singular then it is not possible to consider preconditioners similar to (1.26). For such problems, one often considers preconditioners of the form:

    M_2 = ( F + BᵀW^{−1}B   0 )
          ( 0               W ),                                          (1.27)

where F + BᵀW^{−1}B is known as the primal Schur complement for the regularized saddle-point matrix

    ( F   Bᵀ )
    ( B   −W ),

and W is an arbitrary symmetric positive definite matrix. There are various natural choices for W based on the underlying discrete operators; see, for example, [87]. The authors in [50] show that if rank(F) = n − m then the preconditioned matrix M_2^{−1}K has two distinct eigenvalues.

In general, the construction of both the primal and dual Schur complements will lead to dense matrices which are computationally impractical to form and subsequently solve with.
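Before turning to such approximations, the three-eigenvalue result for the ideal preconditioner M_1 quoted from [68, 80] can be verified numerically on a small synthetic saddle-point system; the sketch below (with an SPD leading block F and a full-rank B, both random) forms M_1 exactly and inspects the preconditioned spectrum.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 3
R = rng.standard_normal((n, n))
F = R @ R.T + n * np.eye(n)            # SPD leading block, rank(F) = n
B = rng.standard_normal((m, n))        # full row rank (almost surely)

K = np.block([[F, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(F, B.T)        # dual Schur complement B F^{-1} B^T
M1 = np.block([[F, np.zeros((n, m))],
               [np.zeros((m, n)), S]])

eigs = np.real(np.linalg.eigvals(np.linalg.solve(M1, K)))
print(np.unique(np.round(eigs, 6)))
```

Up to rounding, the computed eigenvalues take exactly the three values 1 and (1 ± √5)/2, independently of n and m, which is why a Krylov method with the ideal preconditioner converges in a handful of iterations.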
State-of-the-art methods therefore aim to design effective sparse approximations to F + BᵀW^{−1}B or BF^{−1}Bᵀ. These approximations are often based on mimicking the essential spectral properties of the Schur complement.

Navier-Stokes equations

The numerical solution of the Navier-Stokes problem is of great importance. Three-dimensional fluid applications often lead to systems in excess of tens of millions of degrees of freedom. In Section 1.3.3 we mentioned how effective Schur complement based preconditioners can be. For the Navier-Stokes problem, F in (1.25) is the discrete convection-diffusion operator and thus rank(F) = n. Therefore, the aim is to approximate the dual Schur complement in (1.26).

The “SIMPLE” preconditioner in [103] uses the approximation

    X = B diag(F)^{−1} Bᵀ.

This preconditioner is known to work well for moderate to large values of the kinematic viscosity. However, for small values of ν the preconditioned iterations degrade. Other “SIMPLE”-type preconditioners use other algebraic approximations to F^{−1}.

In this thesis, we utilize the approaches from [35], where the authors introduce the pressure convection-diffusion (PCD) and least-squares commutator (LSC) preconditioners. These approaches are based on minimizing a commutator operator; see Section 2.2.1 for more details.

An alternative to the Schur complement approximations presented in [35, 103] are monolithic multigrid methods. In [105] Vanka introduced an “all-at-once” multigrid approach to the Navier-Stokes problem.

Mixed Maxwell problem

Applications of computational electromagnetics, just like those of computational fluid dynamics, lead to large sparse systems and arise widely in industrial and geophysical settings. For the mixed Maxwell problem, F in (1.25) is the discrete curl-curl operator and therefore rank(F) = n − m. Thus, in [51] the authors propose a preconditioner based on (1.27), where they introduce the approximation F + BᵀW^{−1}B ≈ F + X.
Here X is a mass matrix (the finite element identity operator), so that F + X is a shifted curl-curl operator; see Section 2.2.2 for more details. Due to the large null space of the discrete curl-curl operator, effective “black-box” scalable solvers which can be used in the Navier-Stokes case do not work as well. Thus, more specialized solvers are required.

There are several effective multigrid solvers designed specifically for the shifted curl-curl operator. The method proposed in [59] is based on geometric multigrid, using specialized smoothers. More recent work has been done using algebraic multigrid for unstructured meshes, or for settings where a sequence of meshes is hard to construct; see [14, 43, 63, 88]. These methods use smoothers developed for the geometric multigrid case in [59]. The nodal auxiliary space preconditioner introduced in [61] has proven to be particularly effective. It reduces the solution of the shifted curl-curl problem to a nodal shifted vector Laplacian solve and a scalar Laplacian solve. In Chapter 2, the numerical results are produced with an in-house implementation of [61], whereas for Chapters 3 and 4 we use the more efficient Hypre [37, 38] implementation. A full description of the implementation aspects of [61] can be found in [67, 72].

MHD model

The development of block preconditioning techniques for this model is relatively limited. A number of attempts have been made to develop scalable solution methods for the MHD model, with limited success [20–22, 66, 76, 96, 97]. We are particularly interested in the class of block preconditioning methods. These techniques are often based on effective Schur complement approximations [10, 50, 80]. Such methods are known to be robust with respect to mesh refinement and nondimensional parameters.

In recent years, interest in the development of block preconditioning methods for the MHD model has increased; see [2, 29, 71, 85, 86, 109, 110]. The work that is the most relevant to ours is [85, 86].
The authors in those papers consider approaches based on utilizing effective preconditioners for the separate subproblems, and then incorporating the coupling terms. Our work in [110] capitalizes on a useful tensor identity introduced in [85, Equation (3.15)] and also uses block preconditioners. However, in contrast to [85], we formulate a different block matrix and use other approximations to the Schur complements. Similarly to the “all-at-once” multigrid for the Navier-Stokes equations in [105], the authors in [2] develop “all-at-once” multigrid methods for the MHD model. In [29], the aim is to use block factorizations to reduce the dimension of the model and thus develop block preconditioning techniques based on this reduced model.

So far there appear to be limited block preconditioning strategies that are fully scalable for large-scale three-dimensional problems. This thesis introduces progress on this front.

1.4 Numerical software

The number of open-source numerical software packages that deal with the discretization of, and linear solution methods for, PDE-based problems has risen dramatically in the last few decades. These packages are used in a variety of different settings: from the design of new and more complex solution methods for a variety of different models, to users that require a “black-box” type solution method for PDEs.

Numerical software for the linear or linearized elliptic PDEs of the kind considered in this thesis may be split into the following components:

1. PDE discretization, including domain construction and mesh generation;
2. solution of the linear system.

1.4.1 PDE discretization: FEniCS

There are many open-source software packages that consider discretizations of PDEs; most of these are based on either finite elements or finite volumes. A few examples can be found in [3, 13, 25, 27, 54, 77], but by no means is this an exhaustive list.
In this thesis, we limit our discussion to FEniCS [77], which is a finite element package. The FEniCS project started in 2003 with the aim of creating software that automates the finite element discretization and solution of differential equations. One of the main strengths of the FEniCS package is its easy-to-use code for creating efficient large-scale numerical examples for any type of PDE-based problem. The main underlying code base for FEniCS is written in C++ and Python, with interfaces in both languages.

FEniCS supports a large range of different finite element function spaces, thus allowing it to be used for a variety of different applications, including electromagnetics and fluid flow problems, which are our focus of interest. We use FEniCS primarily for its discretization modules, but note that it is possible to use its linear solver packages as well. For our specialized block-based linear solvers we use PETSc.

As mentioned in Section 1.2, the first step of numerical discretization is domain construction and mesh triangulation. Often, domains of interest for PDEs are standard rectangles in 2D or boxes in 3D. Such domains are commonly built into finite element discretization software. However, for certain physical applications, nonstandard domains and meshes are typically required.

For the more complex geometries and meshes, we use the mshr mesh generation software [65]. Using Constructive Solid Geometry (CSG) [40], mshr generates 2D and 3D meshes using both CGAL [102] and Tetgen [98] as the mesh generation backends. This enables the user to define several points as the domain, and mshr will then form a quasi-uniform mesh triangulation from those points. It is possible to use the CGAL or Tetgen libraries in isolation. However, since mshr incorporates both and links seamlessly into FEniCS, it is the natural choice of meshing software for FEniCS discretizations.

For complicated physical domains and meshes, local mesh refinement is often required.
There are a number of different types of mesh refinement. These include graded mesh refinement, adaptive mesh refinement and cell-wise refinement; see, for example, [89]. In Program 1.1 we give an example of cell-wise mesh refinement using mshr and FEniCS. This sort of refinement is done by flagging cells depending on their location in the mesh and refining them. For example, in Program 1.1 we flag the cells which are within a certain distance from the singular point and refine the mesh in that area.

One of the most common complex domains considered is an L-shaped domain; see
The seamless fashion of implementingthe variational form is one of the main strengths of the FEniCS software package, asevident from Program 1.2. The program constructs the bilinear form for the Navier-Stokes equations (1.3), the mixed Maxwell model (1.5) and the coupling terms definedin (1.1).1.4.2 Solution of the linear system: PETScThree of the main and most widely used linear algebra software packages are Trilinos[56, 57], Eigen [52], and PETSc [7, 9, 30]. Theses three packages are all available inthe C++ and Python programming languages. Each package incorporate additionalexternal solvers (direct methods, multigrid methods, etc.) within their solver classes.This therefore allows users to incorporate state-of-the-art external software packages allwithin one framework. In this thesis, we are particularly interested in the development18Program 1.2 FEniCS: construction of the bilinear forms for the MHD model in (1.1)1 (u, p, b, r) = TrialFunctions(W)2 (v, q, c, s) = TestFunctions(W)34 # the bilinear form for the mixed Maxwell’s equations in (1.5)5 Maxwell = kappa*nu_m*inner(curl(b),curl(c))*dx + inner(c,grad(r))*dx +inner(b,grad(s))*dx↪→67 # the bilinear form for the mixed Navier-Stokes equations in (1.4)8 NavierStokes = nu*inner(grad(u), grad(v))*dx + inner((grad(u)*u_k),v)*dx +(1./2)*div(u_k)*inner(u,v)*dx - (1./2)*inner(u_k,n)*inner(u,v)*ds - div(v)*p*dx -div(u)*q*dx↪→↪→910 # the bilinear form for the coupling terms of the MHD model in (1.1)11 Coupling = kappa*inner(cross(v, b_k), curl(b))*dx - kappa*inner(cross(u, b_k),curl(c))*dx↪→1213 # the bilinear form associated with the Newton linearization of the MHD model in(1.1)↪→14 Newton = inner((grad(u_k)*u), v)*dx + (1./2)*div(u)*inner(u_k, v)*dx -(1./2)*inner(u, n)*inner(u_k, v)*ds + kappa*inner(cross(v, b), curl(b_k))*dx -kappa*inner(cross(u_k, b), curl(c))*dx↪→↪→1516 # the bilinear form for the MHD model in (1.4)17 MHD = Maxwell + NavierStokes + Coupling + Newton1819 # assembling MHD coeffi matrix20 K = 
assemble(MHD)Figure 1.2: L-shaped domain generated using mshr. We locally refine cells that are adistance 0.4 from the origin.of block-based preconditioning techniques. All three of these packages have block-basedpreconditioning classes, however within the Python interface, PETSc seemed the most19usable package.The three key components which we use PETSc for are its library of Krylov subspacesolvers, its interface with external packages, and the ability to apply block precondi-tioners in a seamless fashion. A few examples of external packages are:• Hypre [37] - the LLNL preconditioner library.• MUMPS [4, 5] - MUltifrontal Massively Parallel sparse direct Solver.• ParMeTiS [64] - parallel graph partitioner.• PaStiX [55] - a parallel LU and Cholesky solver package.• SuiteSparse, including KLU [32], UMFPACK [31], and CHOLMOD [23] - sparsedirect solvers, developed by Timothy A. Davis.• SuperLU [73, 74] and SuperLU_Dist [112] - robust and efficient sequential andparallel direct sparse solves.• Trilinos/ML [57] - ML: Multilevel Preconditioning Package. Sandia’s main multi-grid preconditioning package.Several of these packages have been used throughout this project, e.g., Hypre, MUMPS,PaStiX, ML and UMFPACK. For the eventual software package that is being exper-imented with, we use Hypre and MUMPS. We use Hypre primarily as the multigridpackage due to its large range of different smoother types and, particularly, its imple-mentation of the auxiliary space preconditioner in [61]. The auxillary space precondi-tioner has been proven to be highly robust for shifted curl-curl problems, and thus, is animportant component in the new preconditioning techniques we propose. 
MUMPS is a widely used direct solver for many problems, and for exact inverses we use this software package.

The primary focus of this work is on the development of block-based linear solvers, and as such, it was necessary to use software features that support this methodology. PETSc has a built-in function known as fieldsplit, which is used to apply block-based preconditioners. A key feature of the application of our block-based preconditioning techniques is the ability to apply a nested set of linear system solves for an individual block. In this work, we use various Schur complement approximations that give rise to nested sets of matrix solves; see Section 1.3.3. It appears that this sort of nested solving is not well supported in fieldsplit; therefore, we create a Python class which can be used to define the inverse of block preconditioners. A generic example of the application of a block diagonal preconditioner is given in Programs 1.3 and 1.4. The code examples demonstrate how it is possible to split the application of a block diagonal preconditioner of the form:

    ( F   0 )
    ( 0   L ).

ksp1 and ksp2 in Program 1.3 are the actions of F^{−1} and L^{−1}, respectively, and IS denotes the indices of the two variables.
Program 1.4 is set up so that a linear system Kx = b is solved using FGMRES [92], with the application of the preconditioner done using a Python class.

Program 1.3 PETSc: our implementation of a block diagonal preconditioner.

    import numpy as np

    class ApplyD(object):

        def __init__(self, ksp1, ksp2, IS):
            # ksp1 and ksp2 are the individual solvers for the (1,1) and
            # (2,2) blocks, respectively
            self.ksp1 = ksp1
            self.ksp2 = ksp2
            # IS is the index set where IS[0] denotes the indices associated
            # with the primary variable and IS[1] the secondary variable
            self.IS = IS

        def setUp(self, pc):
            A, P = pc.getOperators()
            self.B = A.getSubMatrix(self.IS[1], self.IS[0])

        def apply(self, pc, x, y):
            x1 = x.getSubVector(self.IS[0])
            y1 = x1.duplicate()
            x2 = x.getSubVector(self.IS[1])
            y2 = x2.duplicate()

            self.ksp1.solve(x1, y1)
            self.ksp2.solve(x2, y2)
            y.array = np.concatenate([y1.array, y2.array])

Using classes similar to the one in Program 1.3, it is possible to create sophisticated block preconditioners for the four-field MHD model considered in this work.

Program 1.4 PETSc: example of using a Python class as a preconditioner

    ksp = PETSc.KSP()
    ksp.create(comm=PETSc.COMM_WORLD)
    pc = ksp.getPC()

    ksp.setType('fgmres')
    pc.setType('python')
    pc.setPythonContext(ApplyD(ksp1, ksp2, IS))

    ksp.setTolerances(rtol=rtol, atol=atol, divtol=divtol, max_it=max_it)
    ksp.setOperators(K)

    ksp.setFromOptions()

    ksp.solve(b, x)

1.5 Outline and contributions

This thesis is organized into five chapters. In Chapter 2, we introduce a new block triangular preconditioner for the MHD model (1.1)–(1.2). We use a finite element approach based on Taylor-Hood elements [101] for (u, p) and the lowest order Nédélec [81] pair for (b, r) in (2.2). Using this discretization, we propose a preconditioning approach that combines well-known preconditioners for the two subproblems, together with a novel approach for dealing with the coupling terms and approximating the Schur complements.
We analytically show that the proposed preconditioning technique effectively clusters the eigenvalues of the preconditioned matrix. Finally, we numerically test this preconditioner and demonstrate its merits.

In Chapter 3, we derive a new formula for the inverse of the MHD matrix (2.8). The formula uses the nullity of both the coupling terms and the curl-curl operator, which is the leading block of the Maxwell subproblem. Along with showing that the exact inverse formula has a number of zero blocks, we are able to efficiently approximate it using a sparse matrix, by dropping small entries of the inverse. We show several three-dimensional numerical results which illustrate the strong scalability of this approach.

In Chapter 4, we consider the use of the conjugate gradient method (CG) [58] for a class of indefinite matrices. The class of matrices considered consists of nonregularized or regularized nonsingular saddle-point problems with a highly rank-deficient leading block. We show that there are two sufficient conditions under which CG can be used. The conditions rely on the null space of the leading block of the saddle-point problem. We show that these conditions are perfectly satisfied by the Maxwell problem in (1.5). Our extensive numerical tests show that these conditions can be relaxed slightly while still enabling the use of CG for the example we consider, namely the mixed Maxwell problem (1.5).

In the final chapter we offer concluding remarks and areas for future work.

The main contributions are:

1. We propose a new block preconditioner that utilizes state-of-the-art preconditioners for the Navier-Stokes and Maxwell subproblems. Using a result from [85], we form a practical version of it and test its robustness with respect to the matrix dimension. The numerical results show strong scalability with respect to two-dimensional problems, and a small increase in iterations with respect to the mesh for three-dimensional problems³.

2.
We numerically show that for moderate nondimensional parameters it is possible to decouple the MHD model into its subproblems: either the Navier-Stokes and Maxwell problems, or the Stokes and Maxwell problems.

3. We design a new approximate inverse-based preconditioner for the MHD matrix. To our knowledge, this is one of the only strongly scalable preconditioning approaches for large-scale three-dimensional problems with strong coupling.

4. Given a symmetric saddle-point matrix with a leading block that is maximally rank deficient while the matrix itself is still nonsingular, we show that there are sufficient conditions on the construction of block diagonal and block triangular preconditioners that enable the use of CG. This work builds upon and complements [36, 51, 72].

5. I have developed a large-scale code to solve the MHD problem (1.1)–(1.2) and its essential subproblems. This code combines several well-known open-source software packages to produce large-scale numerical examples within the Python programming language.

1.6 Notation

We follow the notation rules given in Tables 1.1 to 1.3.
We will denote closely related matrices (either permutations of the original, or with an extra off-diagonal block) with a tilde or a hat.

Notation   Meaning
K          coefficient matrix
M          preconditioner
S          Schur complement

Table 1.1: Notation for block systems

³ Black-box preconditioners such as Jacobi and Gauss-Seidel do not utilize the underlying physical properties of the PDE; hence, the increase in iterations is directly proportional to the mesh level.

Superscript   Meaning                      Used for
NS            Navier-Stokes                K, M
S             Stokes                       K, M
MX            Maxwell                      K, M
MHD           Magnetohydrodynamics         K, M
C             Coupling for MHD             K
N             (1,1) block rank deficient   K

Table 1.2: Notation for superscripts

Subscript   Meaning                                 Used for
I           ideal                                   M
P           practical                               M
S           Schur complement based                  M
A           approximate inverse                     M
B           2-by-2 block triangular (lower/upper)   M

Table 1.3: Notation for subscripts

Chapter 2
Preconditioners for Mixed Finite Element Discretizations of Incompressible MHD Equations

This chapter consists of our new preconditioning results, which were published in the SIAM Journal on Scientific Computing [110]. The structure of the chapter is as follows. In Section 2.1, we introduce the finite element discretization, consider decoupling schemes for the nonlinear iterations, and discuss the resulting matrix systems. In Section 2.2, we review relevant preconditioners for the Navier-Stokes and Maxwell sub-problems. Our new preconditioning approach for the fully coupled MHD discretization is introduced and analyzed in Section 2.3, where we also give details of a scalable implementation of the proposed preconditioners. In Section 2.4 we show a series of numerical results in two and three dimensions. The two-dimensional results show good scalability with respect to the mesh.
However, for three-dimensional problems,we observe a small increase in iterations with respect to the mesh.2.1 DiscretizationIn this section we specify the mixed finite element discretization for the incompressibleMHD model (1.1)–(1.2), the nonlinear Picard iteration, decoupling approaches for thenonlinear problem, and the resulting linear systems arising in each iteration step.2.1.1 Mixed finite element approximationOur mixed finite element approximation is based on the variational formulation for (1.1)–(1.2) introduced and analyzed in [95]. It consists in finding a weak solution (u, p, b, r)25in the standard Sobolev spacesu ∈ V = {v ∈ H1(Ω)d : v = 0 on ∂Ω},p ∈ Q = { q ∈ L2(Ω) : (q, 1)Ω = 0},b ∈ C = {c ∈ L2(Ω)d : ∇× c ∈ L2(Ω)d¯, n× c = 0 on ∂Ω},r ∈ S = {r ∈ H1(Ω) : r = 0 on ∂Ω}.(2.1)Here and in the following, we write (·, ·)Ω for all L2-inner products, and use d¯ = 2d− 3to define the curls simultaneously for two- and three-dimensional vector fields [49].Now let the domain Ω be divided into regular meshes Th = {K} consisting oftriangles (d = 2) or tetrahedra (d = 3) with mesh size h. We consider Taylor-Hoodelements for (u, p) and the lowest order Nédélec pair for (b, r). The corresponding finiteelement spaces are:V h = {u ∈ V : u|K ∈ P2(K)d, K ∈ Th },Qh = { p ∈ Q ∩H1(Ω) : p|K ∈ P1(K), K ∈ Th },Ch = { b ∈ C : b|K ∈ R1(K), K ∈ Th },Sh = { r ∈ S : r|K ∈ P1(K), K ∈ Th }.(2.2)Here Pk(K) denotes the space of polynomials of total degree at most k on K andR1(K) = {a+ b× x : a ∈ Rd, b ∈ Rd} denotes the linear edge element space on K interms of the position vector x on K.With the finite element spaces in place, our mixed finite element approximation toproblem (1.1)–(1.2) reads: find (uh, ph, bh, rh) ∈ V h ×Qh ×Ch × Sh such thatA(uh,v) +O(uh;uh,v) + C(bh;v, bh) +B(v, ph) = (f ,v)Ω,B(uh, q) = 0,M(bh, c)− C(bh;uh, c) +D(c, rh) = (g, c)Ω,D(bh, s) = 0,(2.3)for all (v, q, c, s) ∈ V h×Qh×Ch×Sh. Here, we denote by (·, ·)Ω the inner product inL2(Ω)d. 
The bilinear forms A, B, M and D are given byA(u,v) = ν(∇u,∇v)Ω, B(u, q) = −(∇ · u, q)Ω,M(b, c) = κνm(∇× b,∇× c)Ω, D(b, s) = (b,∇s)Ω.26Moreover, the trilinear forms O and C are defined asO(w;u,v) =((w · ∇)u,v)Ω+12((∇ ·w)u,v)Ω,C(d;v, b) = κ(v × d,∇× b)Ω.The forms A, B, and O are associated with the variational formulation of the incom-pressible Navier-Stokes sub-problem, M and D with that of the Maxwell sub-problemin mixed form, and C is the coupling form which combines the two sub-problems intothe full MHD system. We note that the convection form O appears in a standard skew-symmetric fashion. As a consequence, the discretization (2.3) is energy-stable withoutviolating consistency.The discrete problem (2.3) falls into the class of conforming mixed discretizationstudied in [95]. Hence, it is stable and has a unique solution for small data (i.e., forsufficiently large ν, νm, κ and forcing terms f and g with sufficiently small L2-norms).Moreover, we have optimal-order error estimates in natural norms, both for smooth andnon-smooth solutions. In particular, the strongest singularities of the curl-curl operatorin non-convex domains are correctly captured and resolved.Remark 1. The pairs V h ×Qh and Ch × Sh form standard and optimally convergentmixed discretizations for the fluid and magnetic equations in isolation. However, theapproximation properties of these pairs are not properly matched for the fully coupledsystem (2.3). Specifically, the optimal order O(h2) of the Taylor-Hood spaces V h ×Qh (for the H1-norm velocity errors and the L2-norm pressure errors) are potentiallyreduced due to the coupling with the lower-order magnetic spaces Ch × Sh in (2.2).Nonetheless, we have chosen to work with the spaces in (2.2) due to computationalconsiderations and availability of fast solvers. 
In particular, we avoid the need forstabilized or enriched fluid elements, and are able to use the well established auxiliaryspace preconditioner [61] for the lowest-order Nédélec pair.2.1.2 Picard iterationA common choice for dealing with the nonlinearity within the incompressible Navier-Stokes equations in isolation is to perform Picard or Oseen iterations [35]. We adapt thisapproach for the fully coupled MHD system, and linearize around the current velocityand magnetic fields. Hence, given a current iterate (uh, ph, bh, rh), we solve for updates27(δuh, δph, δbh, δrh) and introduce the next iterate by settinguh → uh + δuh, ph → ph + δph,bh → bh + δbh, rh → rh + δrh.The updates (δuh, δph, δbh, δrh) ∈ V h×Qh×Ch×Sh are found by solving the PicardsystemA(δuh,v) +O(uh; δuh,v) + C(bh;v, δbh) +B(v, δph) = Ru(uh, bh, ph;v),B(δuh, q) = Rp(uh; q),M(δbh, c) +D(c, δrh)− C(bh; δuh, c) = Rb(uh, bh, rh; c),D(δbh, s) = Rr(bh; s),(2.4)for all (v, q, c, s) ∈ V h × Qh × Ch × Sh. Note that this system is linearized around(uh, bh). The right-hand side linear forms correspond to the residual at the currentiteration (uh, ph, bh, rh) and are defined byRu(uh, bh, ph;v) =(f ,v)Ω −A(uh,v)−O(uh;uh,v)− C(bh;v, bh)−B(v, ph),Rp(uh; q) =−B(uh, q),Rb(uh, bh, rh; c) =(g, c)Ω −M(bh, c) + C(bh;uh, c)−D(c, rh),Rr(bh; s) =−D(bh, s),(2.5)for all (v, q, c, s) ∈ V h × Qh ×Ch × Sh. For small data, the iteration (2.4) convergesfor any initial guess [95].2.1.3 DecouplingLet us consider two important cases where simplifications to the Picard iteration (2.4)can be used. We introduce the following variants, referred to as magnetic decouplingand complete decoupling.As mentioned in the Introduction, [6] discusses the notion of strong coupling, ac-cording to the value of κ: cases where κ < 100 are considered to have weak coupling.Otherwise, the problem is treated as one with strong coupling. We have found this tobe useful from a computational point of view, too. 
For κ < 100 we may converge to asolution by taking the coupling terms explicitly, i.e., we omit them in the LHS of (2.4).Therefore, for a given solution (uh, ph, bh, rh), neglecting the coupling terms in (2.4),28results in solving for the updates (δuh, δph, δbh, δrh) ∈ V h ×Qh ×Ch × Sh such thatA(δuh,v) +O(u; δuh,v) +B(v, δph) = Ru(uh, bh, ph;v)B(δuh, q) = Rp(uh; q),M(δbh, c) +D(c, δrh) = Rb(uh, bh, rh; c),D(δbh, s) = Rr(bh; s),(2.6)where Ru, Rp, Rb and Rr are defined in (2.5). We call this magnetic decoupling (MD).When we have both weak coupling and small convection terms in the system (2.4),the simplest strategy is to take all the nonlinear terms explicitly. This is the simplesttechnique, as it removes all nonlinear terms. For a given solution (uh, ph, bh, rh), re-moving the coupling and convection terms in (2.4) results in solving for the updates(δuh, δph, δbh, δrh) ∈ V h ×Qh ×Ch × Sh such thatAh(δuh,v) +B(v, δph) = Ru(uh, bh.ph;v)B(δuh, q) = Rp(uh; q),M(δbh, c) +D(c, δrh) = Rb(uh, bh, rh; c),D(δbh, s) = Rr(bh; s),(2.7)where again Ru, Rp, Rb and Rr are given in (2.5). We call this complete decoupling (CD).2.1.4 The linear systemsFor the matrix representation of (2.4)–(2.5), we introduce the basis function for thefinite element spaces in (2.2):V h = span〈ψj〉nuj=1, Qh = span〈αi〉mui=1,Ch = span〈φj〉nbj=1, Sh = span〈βi〉mbi=1.The aim is to find the coefficient vectors u = (u1, . . . , unu) ∈ Rnu , p = (p1, . . . , pmu) ∈Rmu , b = (b1, . . . , bnb) ∈ Rnb , and r = (r1, . . . , rmb) ∈ Rmb of the finite element functions(uh, ph, bh, rh) in terms of the chosen bases. 
To this end, we define the following stiffness29matrices and load vectors:Ai,j = A(ψj ,ψi), 1 ≤ i, j ≤ nu,Bi,j = B(ψj , αi), 1 ≤ i ≤ mu, 1 ≤ j ≤ nu,Mi,j = M(φj ,φi), 1 ≤ i, j ≤ nb,Di,j = D(φj , βi), 1 ≤ i ≤ mb, 1 ≤ j ≤ nb,fi = (f ,ψi)Ω, 1 ≤ i ≤ nu,gi = (g,φi)Ω, 1 ≤ i ≤ nb.We define the stiffness matrices for the two nonlinear forms, O and C, with respect tothe current finite element iterates uh ∈ V h and bh ∈ Ch and their associated coefficientvectors u and b asO(u)i,j = O(uh;ψj ,ψi), 1 ≤ i, j ≤ nu,C(b)i,j = C(bh;ψj ,φi), 1 ≤ i ≤ nb, 1 ≤ j ≤ nu.We denote by (u, p, b, r) and (δu, δp, δb, δr) the coefficient vectors associated with(uh, ph, bh, rh) and (δuh, δph, δbh, δrh), respectively. Then it can be readily seen thatthe Picard iteration (2.4) amounts to solving the matrix systemA+O(u) BT C(b)T 0B 0 0 0−C(b) 0 M DT0 0 D 0δuδpδbδr =rurprbrr , (2.8)withru = f −Au−O(u)u− C(b)T b−BT p,rp = −Bu,rb = g −Mu+ C(b)b−DT r,rr = −Db.(2.9)At each nonlinear iteration, the right hand side vectors and matrices O(u) and C(b)in (2.8), (2.9) must be assembled with the solution coefficient vectors (u, p, b, r) ofthe current iterate. Here, the matrix A is symmetric positive definite, O(u) is non-symmetric and −C(b), C(b)T appear in a skew symmetric fashion. We also note thatM is symmetric positive semidefinite with nullity mb corresponding to the dimensionof the scalar space of the discrete gradients, see [78]. In the sequel, we shall often omit30the dependence of O(u) and C(b) on u and b, respectively, and write O and C.The linear system associated with the magnetic decoupling scheme in (2.6) then is:A+O BT 0 0B 0 0 00 0 M DT0 0 D 0δuδbδpδr =rurbrprr , (2.10)with the right-hand side quantities as defined in (2.9). While still non-symmetric, thesystem decouples into a Navier-Stokes block and a Maxwell block. 
Thus, allowing theproblems to be solved with well known specifically designed preconditioners and possiblyin parallel.The linear system connected with the complete decoupling scheme in (2.7) is:A BT 0 0B 0 0 00 0 M DT0 0 D 0δuδbδpδr =rurbrprr , (2.11)again with the right-hand side quantities as defined in (2.9). The system is now sym-metric and decouples into a linear Stokes problem and a Maxwell problem. Then, wemay apply MINRES to both of the sub-problems.2.2 Review of preconditioning techniques for thesub-problemsThe linear systems in Section 2.1.3 are associated with a few important sub-problems, asdiscussed. These are the Navier-Stokes, Stokes, and Maxwell problems. In this sectionwe review preconditioners for each of these sub-problems.2.2.1 Fluid flow preconditionerFor the magnetic decoupling scheme (2.10), the governing equations for the fluid floware the Navier-Stokes equations. Their associated discretized and linearized operator isgiven byKNS =(F BTB 0), (2.12)31with F = A + O. One of the principal preconditioning approaches in the literature isbased on approximations to the Schur complement. Using [35, 80], we look at precon-ditioners of the formMNSI =(F BT0 −S), (2.13)where S = BF−1BT . In practice, the leading block, F , and the Schur complement, S,are approximated by linear operators that are easier to invert. Two common choices foran approximation to the Schur complement, S, are the least squares commutator (LSC)and pressure-convection diffusion (PCD). The approximations are based on finding anoperator Fp which minimizes the commutator:E = BF − FpBT =⇒ (BF−1BT )−1 ≈ (BBT )−1Fp, (2.14)where F and B are the continuous convection-diffusion and divergence operators, re-spectively. Loosely speaking, one can split up LSC and PCD in the following two ways:PCD: Works with the continuous operator (2.14) and tries to find Fp such that Eis small. 
Discretization is done once the continuous operator Fp is found.LSC: Discretizes the left equation in (2.14) straight away and then finds thediscrete analog to Fp, which effectively minimizes the discretized commutator.We use the PCD approximation developed in [35] which has proven to be robustwith respect to viscosity, different choices of mixed finite elements and type of meshtriangulation (i.e., squares or triangles in 2D). The approximation is based onS = BF−1BT ≈ Ap F−1p Qp, (2.15)where the matrix Ap is the pressure Laplacian, Fp is the pressure convection-diffusionoperator and Qp is the pressure mass matrix:(Ap)i,j = (∇αj ,∇αi)Ω 1 ≤ i, j ≤ mu,(Fp)i,j = ν(Ap)i,j + (uh · ∇αj , αi)Ω, 1 ≤ i, j ≤ mu,(Qp)i,j = (αj , αi)Ω, 1 ≤ i, j ≤ mu,where uh ∈ V h is the given velocity field in the current iteration step. Note that Apand Fp are well-defined since we work with continuous pressure elements. An effectiveimplementation of this preconditioning approach is discussed in Section 2.3.3.For the complete decoupling scheme (2.11), the governing equations for the fluid flow32are the Stokes equations. Their discrete form is given by the matrixKS =(A BTB 0). (2.16)Here we opt to use a standard block diagonal (i.e., withBT zero in (2.13)) and symmetricpositive definite preconditioner of the formMSP =(A 00 Ŝ).A natural choice for Ŝ is Ŝ = 1νQp, where Qp is the pressure mass matrix in (2.15).It is important here to note that the system is completely decoupled, hence, using asymmetric positive definite preconditioner allows use of MINRES and therefore shortrecurrences within the Krylov solver. This preconditioner allows one to obtain mesh-independent convergence rates; see [35].2.2.2 Maxwell preconditionerA key part of each decoupling scheme is an efficient preconditioner for the discreteMaxwell sub-problem, whose associated matrix is given byKMX =(M DTD 0). (2.17)In [51], it was shown that an ideal block diagonal positive definite preconditioner isMMXI =(M +DTL−1D 00 L). 
(2.18)Here L is the scalar Laplacian on Sh defined asLi,j = (∇βj ,∇βi)Ω, 1 ≤ i, j ≤ mb. (2.19)Using this preconditioner yields precisely two distinct eigenvalues, 1 and −1, hencea symmetric preconditioned Krylov solver such as MINRES will converge within twoiterations in the absence of roundoff errors.Remark 2. Given that the MHD problem is nonsymmetric, and hence we would haveto use a nonsymmetric solver anyway, one may be tempted to ask whether it would makesense to incorporate the (1,2) block of the coefficient matrix (2.17) into the precondi-33tioner, namely replace (2.18) byM˜MXI =(M +DTL−1D DT0 L).Interestingly, it turns out that there is no advantage in doing so in terms of eigenvaluedistribution and consequently, the convergence of the iterative solver. The preconditionedeigenvalues of (M˜MXI )−1KMX are 1 and 1±√52 , that is there are three of them, whereasthe block diagonal preconditioner (2.18) gives rise to two eigenvalues. Therefore, theadditional computational cost entailed in a matrix-vector product with DT does nottranslate into a benefit in terms of iteration counts.Inverting the (1,1) block of the preconditioner,ML = M +DTL−1D, (2.20)is typically computationally prohibitive. Let X be the scalar mass matrix on Ch,defined asXi,j = (φj ,φi)Ω, 1 ≤ i, j ≤ nb. (2.21)Then, it has been shown in [51, Theorem 3.3] that ML and M + X are spectrallyequivalent. Hence, we may use the preconditionerMMXP =(MX 00 L), where MX = M +X. (2.22)The preconditioner MMXP still has rather attractive spectral properties; in particular,the preconditioned operator (MMXP )−1KMX has the two eigenvalues 1 and −1 with al-gebraic multiplicity mb each, and the rest of the eigenvalues are bounded by a constantindependent of the mesh size; cf. [51]. We thus will use MMXP as a preconditioner forthe Maxwell sub-problem. 
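The two-eigenvalue property behind (2.18) can be illustrated on a small synthetic example. The sketch below (plain NumPy; the matrices are random stand-ins with the right structure, not an actual Maxwell discretization) builds a saddle-point matrix whose leading block is SPSD with nullity equal to the number of constraints — the maximally rank-deficient case analyzed in [51] — forms the augmentation-based block diagonal preconditioner, and checks that the preconditioned spectrum consists of exactly 1 and −1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2                        # leading-block size and number of constraints

# SPSD leading block with nullity exactly m (a stand-in for the curl-curl matrix M)
Y = rng.standard_normal((n, n - m))
A = Y @ Y.T
# Full-rank constraint block (a stand-in for D) and SPD weight (a stand-in for L)
B = rng.standard_normal((m, n))
W = np.eye(m)

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
P = np.block([[A + B.T @ np.linalg.solve(W, B), np.zeros((n, m))],
              [np.zeros((m, n)), W]])

# Eigenvalue 1 with multiplicity n and eigenvalue -1 with multiplicity m
eigs = np.sort(np.linalg.eigvals(np.linalg.solve(P, K)).real)
print(eigs)
```

A MINRES-type solver preconditioned this way would therefore converge in two iterations in exact arithmetic, consistent with the discussion above.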
As discussed in Section 2.3.3, we shall use certain approxi-mations to MX and L to achieve maximal scalability with respect to computing timeand problem size.2.3 Preconditioners for the MHD systemIn this section, we propose a preconditioning approach for the discrete MHD sys-tem (2.8), which is based on keeping the coupling matrix C in the preconditioner,and on applying the preconditioners discussed in Sections 2.2.1 and 2.2.2 to the Navier-Stokes and Maxwell sub-problems. Leaving the coupling terms in and applying these34known preconditioners to each of the sub-problems in the MHD system yields the idealpreconditioner:MMHDI =F BT CT 00 −S 0 0−C 0 ML 00 0 0 L ,where ML and S are defined as the Schur complement approximations in (2.20) and(2.15), respectively.2.3.1 ReorderingWe note that by reordering the blocks inMMHDI , such that the solution vector is of thefollowing form (u, b, p, r), we obtain a 2× 2 block triangular preconditioner of the form:M˜MHDI =F CT BT 0−C ML 0 00 0 −S 00 0 0 L . (2.23)If we are to use this preconditioner, then we must reorder the linear system accordingly:F CT BT 0−C M 0 DTB 0 0 00 D 0 0δuδbδpδr =rurbrprr , (2.24)with the right-hand side quantities as defined in (2.9). Let us denote by KMHD the co-efficient matrix defined in (2.24). From this point on, we consider the reordered system.The computational bottleneck is solving systems associated with the matrix(F CT−C ML), (2.25)in the (1, 1) block matrix of (2.23). To invert the matrix in (2.25), we apply a blocktriangular preconditioner based on the Schur complement given by(F +MC CT0 ML), where MC = CTM−1L C.35Using the above approximation in M˜MHDI yields:MMHDS =F +MC CT BT 00 ML 0 00 0 −S 00 0 0 L . (2.26)To analyze the spectral properties of MMHDS , we refer to vectors b ∈ null(M) asdiscrete gradients. 
With the discrete Helmholtz decomposition, it follows that for eachb ∈ null(M) there is a unique r ∈ Rmb such that b = Gr for a discrete gradient matrixG ∈ Rnb×mb ; cf. [51, Section 2]. Hence, dim(null(M)) = mb. The following result holdstrue.Theorem 1. The matrix (MMHDS )−1KMHD has an eigenvalue λ = 1 with algebraicmultiplicity of at least nb + nc where nc is the dimension of the nullspace of C = C(b)and an eigenvalue λ = −1 with algebraic multiplicity of at least mb. The dimensionof the nullspace of C is nc = nu − nb + mb. The corresponding eigenvalue-eigenvector(λ,x) pairs are:λ = 1, xT = (uTc , bT , (−S−1Buc)T , (L−1Db)T ),with uc ∈ null(C) and b ∈ Rnb arbitrary, andλ = −1, xT = (0, bTg , 0, (−L−1Dbg)T ),with bg = Gr a discrete gradient for r ∈ Rmb.Proof. The corresponding eigenvalue problem isF CT BT 0−C M 0 DTB 0 0 00 D 0 0ubpr = λF +MC CT BT 00 ML 0 00 0 −S 00 0 0 Lubpr .The four block rows of the generalized eigenvalue problem can be written as(1− λ)(Fu+BT p+ CT b)− λCT (M +DTL−1D)−1Cu = 0, (2.27)−Cu+ (1− λ)Mb− λDTL−1Db+DT r = 0, (2.28)Bu = −λS p, (2.29)Db = λLr. (2.30)36If λ = 1, (2.27) is satisfied ifCT (M +DTL−1D)−1Cu = 0.This only happens when u ∈ Null(C). Using uc to denote a nullspace vector of C then(2.29) simplifies to:p = −S−1Buc.Equation (2.30) leads to r = L−1Db. If this holds, (2.28) is readily satisfied. Therefore,(uTc , bT , (−S−1Buc)T , (L−1Db)T ) is an eigenvector corresponding to λ = 1. There existnc linearly independent such vectors u and nb linearly independent such vectors b.Hence, it follows that λ = 1 is an eigenvalue with algebraic multiplicity of at leastnu +mb.If λ = −1, (2.30) leads to r = −L−1Db. Substituting it into (2.28), we obtainCu = Mb. If b = bg is a discrete gradient then Mb = 0 and CT b = 0. If we take u = 0,then Cu = 0 and the requirement Cu = Mb is satisfied. If u = 0 and b = bG is adiscrete gradient, equation (2.27) becomes BT p = 0. 
Since B has full row rank, thisimplies p = 0. Therefore, if b = bg is a discrete gradient, then (0, bTg , 0, (−L−1Dbg)T )is an eigenvector corresponding to λ = −1. There are mb discrete gradients. Thereforeλ = −1 is an eigenvalue with algebraic multiplicity at least mb.Remark 3. In the spirit of Remark 2, looking at (2.26) one may ask whether incorpo-rating DT into the preconditioner, such thatM˜MHDS =F +MC CT BT 00 ML 0 DT0 0 −S 00 0 0 L ,may generate a slightly better eigenvalue distribution for the preconditioned system.Consistently with Remark 2, it turns out that doing so does not practically generate animprovement of the eigenvalue distribution. From Table 2.1, we see that both precondi-tioned systems yield exactly the same number of eigenvalues. However, (MMHDS )−1KMHDonly has 2 distinct eigenvalues whereas (M˜MHDS )−1KMHD has 3 distinct eigenvalues.Thus, the insertion of DT does not theoretically decrease the number of iterations for aKrylov subspace method to converge; this has been confirmed by numerical experiments.Since incorporating DT into the preconditioner slightly increases the cost of a singleiteration, we opt for usingMMHDS rather than M˜MHDS .37(MMHDS )−1KMHD (M˜MHDS )−1KMHD1 nb + nc nu−1 mb 01+√52 0 mb1−√52 0 mbTotal nu + 2mb nu + 2mbTable 2.1: Algebraic multiplicity of eigenvalues for preconditioned matrices associatedwithMMHDS and M˜MHDS . Note that nc = nu − nb +mb2.3.2 From an ideal to a practical preconditionerWe now consider further simplifications ofMMHDS in (2.26), to make the preconditionercomputationally feasible. Effective sparse approximations are required for the relevantSchur complements that arise. We use the approximations in Sections 2.2.1 and 2.2.2for S and ML. For approximating MC , we follow a similar approach to that takenin [85, Section 3.1]. 
For a given magnetic field b, let Cb be the continuous differentialoperator analogue of MC = CTM−1L C:Cbu = κ(∇× ((κνm∇×∇×+∇∆−1∇ ·)−1κ∇× (u× b)))× b= κ2∇× (κνm∇×∇×+∇∆−1∇ · )−1∇× (u× b)× b. (2.31)The discretization of (2.31) isCbu = κ2G(κνmGTG+DTL−1D)−1GT (u× b)× b, (2.32)where G is a discrete curl matrix and u and b are vectors of velocity and magneticcoefficients, respectively.DiscreteOrderOperatorG O(h−1)L O(h−2)D O(h−1)Table 2.2: Orders of matrix entries for relevant discrete operatorsIn Table 2.2 we state the order of the discrete differential operators that are involvedin (2.32). We observe that the discrete curl-curl matrix, GTG, contains entries ofmagnitude O(h−2) and DTL−1D is order O(1). Thus for small h and moderate values38of κ and νm the curl-curl matrix will be the dominant term. Therefore we consider:Cbu ≈ κν−1m G(GTG)−1GT (u× b)× b.Furthermore, G(GTG)−1GT is an orthogonal projector onto the range space of GT ,hence, it acts as an identity operator within that space. We therefore use the approxi-mationCbu ≈ κνm−1b× (u× b). (2.33)From (2.33), Cb can approximated by a scaled mass matrix determined by the coefficientsof the magnetic field b. We thus approximate MC by a scaled mass matrix, which wedenote as QS , and whose elements are:(QS)i,j = κν−1m (bh ×ψj , bh ×ψi)Ω, 1 ≤ i, j ≤ nu. (2.34)Combining the sparse Schur complement approximations in (2.15), (2.22) and (2.34)in the MHD preconditioner (2.26) gives the approximate preconditioner:MMHDP =F +QS CT BT 00 M +X 0 00 0 −Ap F−1p Qp 00 0 0 L . (2.35)In the transition from the ideal preconditioner (2.23) to the practical preconditioner(2.35), some spectral clustering is inevitably lost. But as we show, the spectral struc-ture of the preconditioned matrix associated with MMHDP is still very appealing. Weillustrate this in Section 2.4.2.3.3 ImplementationSo far we have introduced the matrix systems with possible preconditioners, but havenot discussed practical implementation considerations. 
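Incidentally, the projection identity used in deriving (2.33) — that G(GᵀG)⁻¹Gᵀ is an orthogonal projector acting as the identity on the relevant range space — is easy to sanity-check numerically. Here is a quick sketch with a random full-column-rank matrix G as a hypothetical stand-in for the discrete curl matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((10, 4))           # random tall matrix with full column rank

P = G @ np.linalg.solve(G.T @ G, G.T)      # orthogonal projector onto range(G)

x = rng.standard_normal(4)
v = G @ x                                  # a vector already in range(G)
w = rng.standard_normal(10)
w_perp = w - P @ w                         # component of w orthogonal to range(G)

print(np.allclose(P @ P, P),               # idempotent
      np.allclose(P, P.T),                 # symmetric, hence an orthogonal projector
      np.allclose(P @ v, v),               # identity on range(G)
      np.allclose(P @ w_perp, 0))          # annihilates the orthogonal complement
```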
One of our main goals is to provide a fully scalable solution method. To this end, we consider mesh-independent solvers for the separate block matrices within the preconditioners. Table 2.3 outlines the methods we use to solve the systems associated with Qp, Ap, F + QS, M + X and L, which are the block diagonal matrices in (2.35).

Matrix    Implementation method
Qp        diagonal of αQp, where α = 0.75
Ap        single AMG V-cycle
F + QS    single AMG V-cycle
M + X     AMG method developed in [61]
L         single AMG V-cycle

Table 2.3: Solution methods for systems associated with the separate block matrices within (2.35)

Let us provide a few additional comments:

1. For solving systems involving Qp, we have experimentally observed that scaling by a multiplicative scalar α smaller than 1 results in a slight decrease in iteration counts, especially in the 3D case. In our numerical experiments we provide results that correspond to using α = 0.75.

2. For solving systems involving M + X, the method developed in [61] aims to overcome the difficulties that standard AMG methods face with the discrete curl-curl operator, by using an auxiliary space approach [113] for H(curl) finite element discretizations of elliptic problems. The construction of the auxiliary space multigrid (ASM) preconditioner relies on three additional matrices: the discrete Nédélec interpolation operator P ∈ R^(nb×d·mb) (where d is the spatial dimension), the discrete gradient operator G ∈ R^(nb×mb), and the mass matrix Q defined on the scalar space Sh; see [67, 72].

2.4 Numerical results

This section examines the efficiency of our preconditioning approaches for the MHD model (1.1)–(1.2). The auxiliary space multigrid is used as a preconditioner within a conjugate gradient solver. Since this practically means that the Krylov subspace associated with the preconditioned iterations varies in every outer solve, a flexible solver for nonsymmetric matrices is required. We chose to use Flexible GMRES (FGMRES) [92].
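To sketch how such a block preconditioner is wired into a Krylov solver in practice, the toy example below applies an ideal block upper triangular preconditioner — leading block and exact Schur complement, which is affordable only at this tiny size — inside SciPy's GMRES via a LinearOperator. SciPy ships no FGMRES, so plain preconditioned GMRES stands in for it here, and all matrices are random stand-ins rather than finite element matrices:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(2)
n, m = 8, 3

# Nonsymmetric but well-conditioned leading block (stand-in for F = A + O)
A0 = rng.standard_normal((n, n))
F = A0 @ A0.T + n * np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((m, n))            # full-rank divergence-type block

K = np.block([[F, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(F, B.T)            # exact Schur complement (toy size only)

def apply_precond(r):
    """Back-substitution with the block upper triangular matrix [[F, B^T], [0, -S]]."""
    r1, r2 = r[:n], r[n:]
    p = -np.linalg.solve(S, r2)
    u = np.linalg.solve(F, r1 - B.T @ p)
    return np.concatenate([u, p])

M = LinearOperator((n + m, n + m), matvec=apply_precond)
rhs = rng.standard_normal(n + m)
x, info = gmres(K, rhs, M=M)

print(info, np.linalg.norm(K @ x - rhs))   # info == 0 signals convergence
```

With the exact Schur complement, the preconditioned operator has a single eigenvalue, so GMRES converges in a couple of iterations; the practical preconditioners above replace these exact block solves with the spectrally equivalent approximations of Table 2.3.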
In all experiments, unless otherwise stated, we use a 2-norm absolute tolerance of 1e-4 for the nonlinear solver and a relative tolerance of 1e-5 for both FGMRES and the auxiliary space multigrid [61].

All numerical experiments have been carried out using the finite element software FEniCS [77] in conjunction with the petsc4py package (the Python interface for PETSc [7, 9]) and the multigrid package HYPRE [38].

We test our methods on problems with inhomogeneous Dirichlet boundary conditions in the hydrodynamic variables, even though the analysis has been carried out solely for the homogeneous Dirichlet case. Other boundary conditions may be handled by our finite element framework, and our preconditioning approaches can be extended accordingly.

In the subsequent tables we use the following notation:

• timesolve is the average FGMRES time to solve systems associated with K^MHD;
• timeNL is the total nonlinear solve time (including all matrix assemblies, FGMRES solves and updates of the nonlinear iteration);
• itNL is the number of nonlinear iterations;
• it*av is the average number of FGMRES iterations, where * is either I or D. Superscript I denotes an approximate/iterative application of the preconditioner, for example a multigrid solve; see Table 2.3 for a complete list of methods used to solve systems associated with the block diagonal matrices of the preconditioner. Superscript D denotes a direct solve for the block diagonal matrices of the preconditioner.

The time and nonlinear iteration columns (timesolve, timeNL, and itNL) are computed using an iterative application of the preconditioner (itIav). We run the program twice, once for itIav and once for itDav.

Mesh sequences. In our tests, we consider sequences of uniformly refined simplicial grids (i.e., triangles in 2D and tetrahedra in 3D). We define ℓ to be the mesh level, such that there are 2^ℓ edges along each boundary. In our tables, we usually show the grid level ℓ in the first column.
We also note that DoFs (in the second column) refers to the total number of degrees of freedom of the linear system.

Stopping criterion. Throughout, we enforce the following nonlinear stopping criterion for the updates:

‖δu‖₂ + ‖δp‖₂ + ‖δb‖₂ + ‖δr‖₂ < tolNL,

where ‖·‖₂ is the 2-norm of a vector and tolNL = 1e-4.

Initial guess tolerance. For all numerical experiments, we form the initial guess by iteratively solving a Stokes problem and a mixed Maxwell problem in isolation. We choose a tight Krylov 2-norm relative tolerance of 1e-10 to ensure that the approximations of the inhomogeneous boundary conditions are sufficiently accurate; see [109, Chapter 2.5]. This guarantees the accuracy of the initial solution on the boundaries: subsequent Picard iterations are solved with homogeneous boundary conditions, and hence any errors in the initial guess would be carried throughout the Picard iteration.

2.4.1 2D smooth solution

The first example considered is a simple domain with a structured mesh. We use the unit square domain Ω = [0, 1]². We take ν = κ = 1 and νm = 10; the source terms f, g and the inhomogeneous Dirichlet boundary conditions on ∂Ω are then defined from the analytical solution:

u(x, y) = ( xy exp(x + y) + x exp(x + y),  −xy exp(x + y) − y exp(x + y) ),
p(x, y) = exp(y) sin(x),
b(x, y) = ( exp(x + y) cos(x),  exp(x + y) sin(x) − exp(x + y) cos(x) ),
r(x, y) = x sin(2πx) sin(2πy).

We note that r ≠ 0 in this example. Indeed, while the right-hand side is often divergence-free in applications, which leads to an identically zero Lagrange multiplier, for testing purposes we set r as above and test convergence to the exact solution.

To illustrate the eigenvalue distribution, we compute the eigenvalues of the preconditioned matrix (M̃_I^MHD)⁻¹K^MHD in Figure 2.1a and of (M_P^MHD)⁻¹K^MHD in Figure 2.1b. We note that the eigenvalues in Figure 2.1a have no imaginary parts.
From Figure 2.1b we see that we still observe strong clustering of eigenvalues around 1 and −1, with only a few complex conjugate pairs. Thus, the spectral structure is still rather appealing in terms of eigenvalue clustering, and the preconditioner M_P^MHD appears to be a viable approximation to M̃_I^MHD.

To test the scalability of our method, we considered a uniformly refined sequence of meshes. The results are presented in Table 2.4. The iterations appear to remain fairly constant with increasing mesh level for both itDav and itIav.

2.4.2 2D smooth solution parameter tests

We next test the performance of the three nonlinear iteration schemes Picard (P), magnetic decoupling (MD) and complete decoupling (CD) introduced in Section 2.1. The convergence of the nonlinear iterations is likely to be affected by the parameter setup of the problem, i.e., by the values of the fluid viscosity (ν), the magnetic viscosity (νm) and the coupling number (κ). By varying κ and ν, we examine the robustness of the three schemes with respect to these parameters. If the nonlinear iterations do not converge, this is denoted by “-” in the tables.

(a) Eigenvalues of the preconditioned matrix (M̃_I^MHD)⁻¹K^MHD associated with the ideal preconditioner. (b) Eigenvalues of the preconditioned matrix (M_P^MHD)⁻¹K^MHD associated with the practical preconditioner; the eigenvalues in this case are complex: the blue curves represent their real parts, and the red curves their imaginary parts.

Figure 2.1: Eigenvalues of preconditioned matrices for the smooth solution given in this section.
The number of degrees of freedom for these matrices is 724.

ℓ    DoFs        timesolve  timeNL   itNL  itIav  itDav
4    3,556       0.33       2.7      7     24.4   20.1
5    13,764      1.11       9.2      7     25.9   20.4
6    54,148      4.48       37.2     7     27.1   20.9
7    214,788     20.32      166.4    7     28.4   21.4
8    855,556     94.29      762.0    7     31.3   21.8
9    3,415,044   486.53     3835.0   7     34.3   -
10   13,645,828  2231.71    17944.6  7     34.0   -

Table 2.4: 2D smooth: Number of nonlinear iterations and number of iterations to solve the MHD system with tolNL = 1e-4, κ = 1, ν = 1 and νm = 10.

Viscosity test

As a first test we consider varying the fluid viscosity, ν, for tolNL = 1e-4, κ = 1 and νm = 10. The nonlinear iteration results are shown in Table 2.5 and the average linear solve times are shown in Table 2.6.

As the fluid viscosity ν decreases, the fluid flow equations (1.1a) and (1.1b) become more convection-dominated. Thus we see that the (CD) scheme breaks down for small ν, as in this decoupling scheme the convection term is taken explicitly. On the other hand, the Picard and (MD) schemes perform similarly. We note that for smaller ν both (P) and (MD) have trouble converging without a sufficiently refined mesh. From
From43ν = 1 ν = 0.1 ν = 0.01` DoFs (P) (MD) (CD) (P) (MD) (CD) (P) (MD) (CD)4 3,556 5 5 9 7 7 - 11 11 -5 13,764 5 5 9 15 7 - 11 11 -6 54,148 5 5 9 7 7 - 13 11 -7 214,788 5 5 9 7 7 - 11 11 -8 855,556 5 5 9 7 7 - 11 11 -Table 2.5: Number of nonlinear iterations for various values of ν with κ = 1 andνm = 10.ν = 1 ν = 0.1 ν = 0.01` DoFs (P) (MD) (CD) (P) (MD) (CD) (P) (MD) (CD)4 3,556 0.04 0.04 0.03 0.08 0.04 - 0.04 0.04 -5 13,764 0.18 0.17 0.14 0.97 0.18 - 0.19 0.18 -6 54,148 0.90 0.91 0.67 0.95 0.92 - 2.05 0.93 -7 214,788 4.48 4.48 3.49 4.68 4.57 - 5.02 4.48 -8 855,556 25.52 22.81 16.67 25.63 23.03 - 25.01 22.61 -Table 2.6: Average linear solver time for various values of ν with κ = 1 and νm = 10.Table 2.6 we see that the direct solve for the Picard (P) system and magnetic decouplingare very similar, but as expected the complete decoupling solve time is shorter.Coupling number testκ = 1 κ = 10 κ = 100` DoFs (P) (MD) (CD) (P) (MD) (CD) (P) (MD) (CD)4 3,556 5 5 9 7 10 17 8 - -5 13,764 5 5 9 7 10 18 8 - -6 54,148 5 5 9 7 10 18 8 - -7 214,788 5 5 9 7 10 18 8 - -8 855,556 5 5 9 7 10 18 8 - -Table 2.7: Number of nonlinear iterations for various values of κ with tolNL = 1e-5,ν = 1 and νm = 10.The next parameter test examines the effects of the coupling terms in the three44κ = 1 κ = 10 κ = 100` DoFs (P) (MD) (CD) (P) (MD) (CD) (P) (MD) (CD)4 3,556 0.06 0.06 0.04 0.06 0.06 0.03 0.06 - -5 13,764 0.23 0.23 0.21 0.24 0.25 0.19 0.24 - -6 54,148 1.12 1.08 0.80 1.01 1.05 0.79 0.98 - -7 214,788 4.80 4.92 3.65 5.05 4.92 3.74 5.52 - -8 855,556 25.21 25.93 18.03 25.49 26.03 18.49 26.75 - -Table 2.8: Average linear solver time for various values of κ with ν = 1 and νm = 10.nonlinear iteration schemes. We expect the Picard scheme to outperform the otherschemes for large values of κ. The results in Table 2.7 show that this is indeed the case.Both the (MD) and (CD) schemes completely break down for κ ≥ 100. This is thepoint at which the Picard iteration (P) becomes the most viable option. 
Altogether, as expected, the full Picard iteration is more robust than (CD) and (MD). We see a similar trend in the timing results of Table 2.8 as in the viscosity timing results of Table 2.6.

2.4.3 2D smooth solution on L-shaped domain

To further test the robustness of our preconditioning technique, we consider non-convex domains. We first consider a problem with a smooth solution on the L-shaped domain Ω = (−1, 1)² \ ([0, 1) × (−1, 0]). We prescribe the same analytical solution as in Section 2.4.1. The results in Table 2.9 show very good scalability with respect to mesh refinement for direct solves of the preconditioner. On the other hand, we report that when using iterative inner solvers with a reasonably loose tolerance, the iterations dramatically deteriorate and scalability is lost. We speculate that the AMG solver cannot handle well the non-convexity of the domain. In such cases it may be necessary to apply a rather strict convergence tolerance.

  ℓ    DoFs         it_NL   it_av^D
  5    12,880       5       24.4
  6    51,678       5       26.0
  7    203,712      5       27.4
  8    809,705      5       29.6
  9    3,219,082    -       -

Table 2.9: 2D L-shaped: number of nonlinear iterations and number of iterations to solve the MHD system with tol_NL = 1e-4, κ = 1, ν = 1 and ν_m = 10. The iteration was terminated before completion for ℓ = 9 due to the computation reaching the prescribed time limit.

2.4.4 2D singular solution on L-shaped domain

We next consider the model singular solution from [49] on the L-shaped domain Ω = (−1, 1)² \ ([0, 1) × (−1, 0]). That is, taking ν = κ = 1 and ν_m = 10, we set the forcing terms and the boundary conditions such that the analytic solution is given by the strongest corner singularities of the underlying Stokes and Maxwell operators, subject to the boundary conditions (1.2) on the two inner sides meeting at the reentrant corner. In polar coordinates (ρ, φ) centered at the origin, the fluid solution (u, p) is then of the form

    u(ρ, φ) = ρ^λ Ψ_u(φ),    p(ρ, φ) = ρ^{λ−1} Ψ_p(φ),    (2.36)

with the singular exponent λ ≈ 0.54448373678246, where (Ψ_u, Ψ_p) are smooth functions of the angle φ. Similarly, the magnetic pair (b, r) (with ∇·b = 0 and ∇×b = 0) is of the form

    b(ρ, φ) = ρ^{−1/3} Ψ_b(φ),    r ≡ 0.    (2.37)

Detailed expressions for the fluid components (u, p) and the magnetic field b can be found in [49, Section 5.2]. Using Nédélec elements allows us to properly capture this singular solution, whereas applying standard nodal elements for b fails to do so. As with the smooth solution on the L-shaped domain in Section 2.4.3, we only consider direct applications of the preconditioner. The results are shown in Table 2.10. We draw similar conclusions with respect to mesh independence; overall, the iterations show good scalability.

  ℓ    DoFs         it_NL   it_av^D
  3    740          4       13.8
  4    2,724        4       14.5
  5    10,436       4       15.8
  6    40,836       4       17.5
  7    161,540      4       18.5
  8    642,564      4       20.0
  9    2,563,076    4       21.8

Table 2.10: 2D singular solution on an L-shaped domain: number of nonlinear iterations and number of iterations to solve the MHD system with tol_NL = 1e-4, κ = 1, ν = 1 and ν_m = 10.

2.4.5 2D Hartmann flow

As a final 2D numerical example, we consider the two-dimensional Hartmann flow problem, which involves a steady unidirectional flow in the channel Ω = (0, 10) × (−1, 1) under the constant transverse magnetic field b_D = (0, 1) on ∂Ω. We impose the analytical solution given in [49, Section 5.3] with Dirichlet boundary conditions for u on the entire boundary ∂Ω. The MHD solution then takes the form

    u(x, y) = (u(y), 0),    p(x, y) = −Gx + p_0(y),
    b(x, y) = (b(y), 1),    r(x, y) ≡ 0.    (2.38)

The exact solution is given by (2.38) with

    u(y)   = G / (ν Ha tanh(Ha)) · (1 − cosh(y Ha)/cosh(Ha)),
    b(y)   = G/κ · (sinh(y Ha)/sinh(Ha) − y),
    p_0(y) = −G²/(2κ) · (sinh(y Ha)/sinh(Ha) − y)²,

where Ha = √(κ/(ν ν_m)) is the Hartmann number.
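The exact profiles above are straightforward to evaluate; the following small sketch does so in plain Python (the default parameter values here are illustrative, not those of any particular experiment in this section):

```python
import math

def hartmann_exact(y, G=1.0, nu=1.0, nu_m=1.0, kappa=1.0):
    """Exact Hartmann-flow profiles u(y), b(y), p0(y) for y in [-1, 1]."""
    Ha = math.sqrt(kappa / (nu * nu_m))  # Hartmann number
    s = math.sinh(y * Ha) / math.sinh(Ha) - y
    u = G / (nu * Ha * math.tanh(Ha)) * (1.0 - math.cosh(y * Ha) / math.cosh(Ha))
    b = G / kappa * s
    p0 = -G**2 / (2.0 * kappa) * s**2
    return u, b, p0
```

At the walls y = ±1 both u and the induced field b vanish, consistent with the no-slip and boundary conditions above; u is even in y and b is odd.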
We impose inhomogeneous Dirichlet boundary conditions taken from the exact solution.

The results are reported in Table 2.11. We observe that for the two-dimensional Hartmann flow example the solver achieves an excellent degree of scalability. For the last two mesh levels the preconditioner approximation improves in terms of lower iteration counts, it_av^I. Fully understanding the reason for this requires more exploration of the conditioning of the problem and the norm of the (preconditioned) residual; the large size of the problems presents a challenge in exploring this fully.

  ℓ    DoFs         time_solve   time_NL    it_NL   it_av^I   it_av^D
  2    1,212        0.66         1.58       2       18.0      13.5
  3    4,500        1.71         3.82       2       17.5      13.0
  4    17,316       5.13         11.04      2       17.5      12.5
  5    67,908       21.06        44.73      2       18.5      12.5
  6    268,932      97.16        204.36     2       19.0      13.0
  7    1,070,340    447.66       935.03     2       19.0      12.5
  8    4,270,596    921.90       1001.37    1       8.0       7.0
  9    17,060,868   1459.82      1778.95    1       3.0       -

Table 2.11: 2D Hartmann: number of nonlinear iterations and number of iterations to solve the MHD system with tol_NL = 1e-4, κ = 1, ν = 1 and ν_m = 1000.

2.4.6 3D smooth solution

We next consider a 3D example with a smooth solution on Ω = [0, 1]³. Let ν = κ = 1, ν_m = 10, and let the analytical solution be given by

    u(x, y, z) = ( −xy e^{x+y+z} + xz e^{x+y+z},
                    xy e^{x+y+z} − yz e^{x+y+z},
                   −xz e^{x+y+z} + yz e^{x+y+z} ),
    p(x, y, z) = e^{x+y+z} sin(y),
    b(x, y, z) = ( −e^{x+y+z} sin(y) + e^{x+y+z} sin(z),
                    xy e^{x+y+z} − yz e^{x+y+z},
                   −e^{x+y+z} sin(x) + e^{x+y+z} sin(y) ),
    r(x, y, z) = sin(2πx) sin(2πy) sin(2πz).

The source terms f and g and the inhomogeneous boundary conditions are then defined from this analytical solution. The corresponding results are shown in Table 2.12. We observe good scalability when we use direct solves for the preconditioner.
However, the average iteration counts degrade when multigrid methods are used to apply the preconditioner, and we do not obtain full scalability for the three-dimensional results.

  ℓ    DoFs        time_solve   time_NL    it_NL   it_av^I   it_av^D
  1    527         0.03         0.9        4       18.8      18.0
  2    3,041       0.22         3.5        3       26.7      22.3
  3    20,381      1.77         26.6       3       37.0      24.7
  4    148,661     22.11        237.0      3       40.7      26.0
  5    1,134,437   206.43       2032.7     3       44.3      -
  6    8,861,381   2274.28      19662.0    3       50.0      -

Table 2.12: 3D smooth: number of nonlinear iterations and number of iterations to solve the MHD system with tol_NL = 1e-4, κ = 1, ν = 1 and ν_m = 10.

Chapter 3

An Approximate Inverse-Based Preconditioner for Incompressible Magnetohydrodynamics

This chapter is currently unpublished and presents a new block approximate inverse preconditioner for the MHD model (1.1)–(1.2). The structure of the chapter is as follows. In Section 3.1, we introduce the Newton system for the MHD model. In Section 3.2 we derive a new formula for the inverse of the MHD matrix. In Section 3.3, we form an effective sparse approximation to the inverse and analyze spectral properties for an ideal case. A new block triangular approach is derived and analyzed in Section 3.4. Finally, in Section 3.5 we present several numerical experiments which demonstrate the strong scalability of this approach.

3.1 Newton's method discretization of the MHD model

In this chapter, we consider two nonlinear iteration schemes for the MHD model (1.1)–(1.2), namely, Picard iteration and Newton's method.
Using the notation

    u^{k+1} = u^k + δu,    p^{k+1} = p^k + δp,
    b^{k+1} = b^k + δb,    r^{k+1} = r^k + δr,

where the superscript k+1 denotes the (k+1)st iterate of either the Picard or Newton iteration, the linearized system becomes

    −ν∆δu + (u^k·∇)δu + α(δu·∇)u^k + ∇δp − κ(∇×δb)×b^k − ακ(∇×b^k)×δb = r_u,
    ∇·δu = r_p,
    κν_m ∇×(∇×δb) + ∇δr − κ∇×(δu×b^k) − κα∇×(u^k×δb) = r_b,
    ∇·δb = r_r,

where

    r_u = f − [−ν∆u^k + (u^k·∇)u^k + ∇p^k − κ(∇×b^k)×b^k],
    r_p = −∇·u^k,
    r_b = g − [κν_m ∇×(∇×b^k) + ∇r^k − κ∇×(u^k×b^k)],
    r_r = −∇·b^k.

Here we note that the Picard system corresponds to α = 0 and the Newton system to α = 1.

We consider a finite element discretization of the MHD model (1.1)–(1.2) in which the hydrodynamic unknowns (u and p, respectively) are discretized with any stable mixed finite element pair, and the magnetic and multiplier unknowns are discretized through a mixed edge and nodal element pair. Using the same format as [110], we use Taylor–Hood elements [101] for (u, p) and the lowest-order Nédélec pair [81] for (b, r). Upon discretization, we obtain the following matrix system to solve:

    [ F + αF_NT    B^T    C^T + αC_NT^T    0   ] [δu]   [r_u]
    [ B            0      0                0   ] [δp] = [r_p]
    [ −C           0      M + αM_NT        D^T ] [δb]   [r_b]
    [ 0            0      D                0   ] [δr]   [r_r]    (3.1)

with

    r_u = f − F u^k − C^T b^k − B^T p^k,
    r_p = −B u^k,
    r_b = g − M b^k + C u^k − D^T r^k,
    r_r = −D b^k.

Similarly to [86], the individual matrix blocks in (3.1) are defined in Tables 3.1 to 3.3.

  Matrix–vector product   Continuous operator              Approximate norm
  F δu                    −ν∆δu + (u^k·∇)δu                ν/h² + ‖u^k‖/h
  B^T δp                  ∇δp                              1/h
  B δu                    ∇·δu                             1/h
  C^T δb                  −κ(∇×δb)×b^k                     κ‖b^k‖/h

Table 3.1: Block coefficient matrices, their corresponding continuous operators and approximate norms for the fluid matrices.

  Matrix–vector product   Continuous operator              Approximate norm
  M δb                    κν_m ∇×(∇×δb)                    κν_m/h²
  D^T δr                  ∇δr                              1/h
  D δb                    ∇·δb                             1/h
  −C δu                   −κ∇×(δu×b^k)                     ‖b^k‖/h

Table 3.2: Block coefficient matrices, their corresponding continuous operators and approximate norms for the magnetic matrices.

  Matrix–vector product   Continuous operator              Approximate norm
  F_NT δu                 (δu·∇)u^k                        ‖∇u^k‖
  C_NT^T δb               −κ(∇×b^k)×δb                     κ‖∇×b^k‖
  M_NT δb                 −κ∇×(u^k×δb)                     ‖u^k‖/h

Table 3.3: Block coefficient matrices, their corresponding continuous operators and approximate norms for the Newton matrices.

The recent work in [86] expanded the class of block preconditioners to the 4×4 block formulation of the incompressible MHD model.
Using the formulation in [95], the authors derive a block triangular preconditioning approach based on Schur complement approximations, as well as an additive mass matrix shift of the curl-curl operator. The results show a moderate increase in linear iterations with respect to mesh refinement. Our approach instead derives an approximate inverse preconditioner for the same 4×4 block system.

3.2 A new formula for the inverse of the MHD coefficient matrix

To preserve the null space properties of the block system, we derive the new approximate inverse preconditioner for the Picard iteration, α = 0. In practice, the numerical results produced in this chapter use the Newton nonlinear iteration scheme. Let us denote by K^MHD the coefficient matrix in the MHD model (3.1) and write it as

    K^MHD = [ K_NS    (K_C)^T ]
            [ −K_C    K_M     ],

where K_NS is the Navier–Stokes sub-problem, K_C is the block for the coupling and K_M is the Maxwell sub-problem:

    K_NS = [ F  B^T ],    K_M = [ M  D^T ],    (K_C)^T = [ C^T  0 ],    K_C = [ C  0 ].
           [ B  0   ]          [ D  0   ]               [ 0    0 ]            [ 0  0 ]

Then, by [10, Equation (3.4)], the inverse is given by

    (K^MHD)^{-1} = [ (K_NS)^{-1} − (K_NS)^{-1}(K_C)^T S^{-1} K_C (K_NS)^{-1}    −(K_NS)^{-1}(K_C)^T S^{-1} ]
                   [ S^{-1} K_C (K_NS)^{-1}                                      S^{-1}                     ],    (3.2)

where S denotes the Schur complement,

    S = K_M + K_C (K_NS)^{-1} (K_C)^T.    (3.3)

The inverses (K_NS)^{-1} and S^{-1} appear multiple times in (3.2), and we now derive explicit formulas that further reveal their block structure.
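The blockwise inverse and Schur complement structure above can be sanity-checked numerically on small random dense blocks (the block sizes here are arbitrary stand-ins; the diagonal shifts simply keep the random blocks nonsingular):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5
Kns = rng.standard_normal((n, n)) + n * np.eye(n)   # Navier-Stokes block (nonsingular)
Km  = rng.standard_normal((m, m)) + m * np.eye(m)   # Maxwell block
Kc  = rng.standard_normal((m, n))                   # coupling block

K = np.block([[Kns, Kc.T], [-Kc, Km]])
S = Km + Kc @ np.linalg.solve(Kns, Kc.T)            # Schur complement (3.3)

Kns_inv, S_inv = np.linalg.inv(Kns), np.linalg.inv(S)
K_inv = np.block([
    [Kns_inv - Kns_inv @ Kc.T @ S_inv @ Kc @ Kns_inv, -Kns_inv @ Kc.T @ S_inv],
    [S_inv @ Kc @ Kns_inv,                             S_inv],
])
assert np.allclose(K_inv @ K, np.eye(n + m))  # the blockwise inverse is exact
```

The same check works for any nonsingular K_NS and S, which is all the blockwise formula requires.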
Notably, using results that have appeared in [36], we show that S^{-1} has a zero (2,2) block and can be expressed in terms of a free matrix parameter. Let us write

    (K_NS)^{-1} = [ K1  K2 ]
                  [ K3  K4 ].    (3.4)

We then have the following useful result.

Theorem 2. Let (K_NS)^{-1} be written in block form as in (3.4). Then

    S^{-1} = [ M_F^{-1}(I − D^T W^{-1} G^T)    G W^{-1} ]
             [ W^{-1} G^T                       0        ],    (3.5)

where W is a (free) symmetric positive definite matrix,

    M_F = M + D^T W^{-1} D + C K1 C^T    and    G = M_F^{-1} D^T.

Proof. Writing out all the matrices involved in formula (3.3) for S, we have

    S = [ M + C K1 C^T    D^T ]
        [ D               0   ].

Since the discrete gradients span the common null space of M and C^T, we have

    dim(null(M + C K1 C^T)) = m_b,

where m_b is defined as the number of rows of the magnetic discrete divergence matrix D. Thus the (1,1) block of the Schur complement has the maximum nullity that still leads to a nonsingular saddle-point system. Therefore, using [36, Equation (3.6)], the inverse of the Schur complement is given by (3.5).

Using the inverse formulas (3.2) and (3.5) together gives the exact expression for the inverse of (3.1) as

    (K^MHD)^{-1} = [ K1 − K1 V K1    K2 − K1 V K2    −K1 C^T M_F^{-1} H    0        ]
                   [ K3 − K3 V K1    K4 − K3 V K2    −K3 C^T M_F^{-1} H    0        ]
                   [ M_F^{-1} C K1   M_F^{-1} C K2    M_F^{-1} H           G W^{-1} ]
                   [ 0               0                W^{-1} G^T           0        ],    (3.6)

where V = C^T M_F^{-1} C and H = I − D^T W^{-1} G^T.

3.3 A new approximate inverse-based preconditioner

In this section we aim to sparsify the inverse formula in (3.6), exploiting the null space properties and the approximate orders of the block system.

3.3.1 A sparse approximation of the Schur complement

Recall that K1 is the (1,1) block of the inverse of the Navier–Stokes sub-problem. Forming it explicitly is computationally very costly and will often yield a dense matrix. Therefore, C K1 C^T is a major bottleneck within M_F, which is an integral part of the Schur complement S. Defining

    M_F = M_W + C K1 C^T,    where M_W = M + D^T W^{-1} D,

and using a generalization of the Sherman–Morrison–Woodbury theorem, the binomial inverse formula, we can rewrite M_F^{-1} as

    M_F^{-1} = M_W^{-1} − M_W^{-1} C K1 (K1 + K1 C^T M_W^{-1} C K1)^{-1} K1 C^T M_W^{-1}.    (3.7)

Using (3.7) for G in (3.5), we obtain

    G = (M_W^{-1} − M_W^{-1} C K1 (K1 + K1 C^T M_W^{-1} C K1)^{-1} K1 C^T M_W^{-1}) D^T
      = M_W^{-1} D^T − M_W^{-1} C K1 (K1 + K1 C^T M_W^{-1} C K1)^{-1} K1 C^T M_W^{-1} D^T
      = M_W^{-1} D^T,

since M_W^{-1} D^T (a representation of the discrete gradients; see [36, Proposition 3.6]) is the null space matrix of M and C^T.

In order to approximate M_F^{-1}, we again use the binomial inverse formula and note that the approximation orders for C, M_W and K1 are O(h^{-1}), O(h²) and O(h²), respectively. Therefore it can be seen that

    M_W^{-1} ≈ O(h²)    and    M_W^{-1} C K1 (K1 + K1 C^T M_W^{-1} C K1)^{-1} K1 C^T M_W^{-1} ≈ O(h⁴).

Thus, we use the approximation

    M_F^{-1} ≈ M_W^{-1}.

The final step is to consider what matrix to use for W. From [36] we take W = DG, recalling that G is the matrix of null vectors of M. Since G is made up of discrete gradients, by [51, Proposition 2.2] W is the scalar Laplacian. Also, as shown in [51], the vector mass matrix X is spectrally equivalent to D^T W^{-1} D. We therefore set W to be the scalar Laplacian and change the notation to L. Using these two results, together with the observation that a multiplication with the Schur complement involves multiplications of the leading block with M (and G^T M = 0), the simplified inverse Schur complement becomes

    S^{-1} ≈ S_P^{-1} = [ M_X^{-1}      G L^{-1} ]
                        [ L^{-1} G^T    0        ],    (3.8)

where M_X = M + X and G = M_X^{-1} D^T.

3.3.2 A practical preconditioner

We observe that the application of the inverse (3.6) involves multiplications of H = I − D^T W^{-1} G^T with M, where G^T spans the null space of M. Thus, in a similar fashion to (3.8), we can reduce H to the identity. Also, we note that

    K_i V K_j ∼ O(h³)

for any i, j = 1, 2, 3, 4. Hence, removing these terms, we form the first step of the approximation of (3.6) as

    (M̃_A^MHD)^{-1} = [ K1              K2              −K1 C^T M_X^{-1}    0        ]
                      [ K3              K4              −K3 C^T M_X^{-1}    0        ]
                      [ M_X^{-1} C K1   M_X^{-1} C K2    M_X^{-1}           G L^{-1} ]
                      [ 0               0                L^{-1} G^T         0        ].    (3.9)

The final step in approximating (3.9) is to consider the inverse of the Navier–Stokes system K_NS.
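When K1 is invertible, the binomial-inverse manipulation of M_F above reduces to the classical Sherman–Morrison–Woodbury identity, which a few lines of NumPy confirm (all matrices here are random SPD stand-ins chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
nb, nu = 7, 9
A = rng.standard_normal((nb, nb)); Mw = A @ A.T + nb * np.eye(nb)  # SPD stand-in for M_W
C = rng.standard_normal((nb, nu))                                  # stand-in coupling block
B = rng.standard_normal((nu, nu)); K1 = B @ B.T + np.eye(nu)       # SPD stand-in for K1

Mf = Mw + C @ K1 @ C.T
Mw_inv = np.linalg.inv(Mw)
# Woodbury: (Mw + C K1 C^T)^{-1} = Mw^{-1} - Mw^{-1} C (K1^{-1} + C^T Mw^{-1} C)^{-1} C^T Mw^{-1}
inner = np.linalg.inv(K1) + C.T @ Mw_inv @ C
Mf_inv = Mw_inv - Mw_inv @ C @ np.linalg.inv(inner) @ C.T @ Mw_inv
assert np.allclose(Mf_inv, np.linalg.inv(Mf))
```

The binomial inverse form used in the text avoids the explicit K1^{-1} appearing here, which is what makes it applicable when K1 is singular.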
For this we return to the exact inverse formula for a block matrix in (3.2). Applying it to the Navier–Stokes system gives the precise expression for the inverse

    (K_NS)^{-1} = [ F^{-1} − F^{-1} B^T S_NS^{-1} B F^{-1}    F^{-1} B^T S_NS^{-1} ]
                  [ S_NS^{-1} B F^{-1}                        −S_NS^{-1}           ].    (3.10)

Practically, we use the pressure convection-diffusion (PCD) approximation developed in [35]. Substituting (3.10) into the expression for (M̃_A^MHD)^{-1} in (3.9) gives

    (M̃_A^MHD)^{-1} = [ K                   F^{-1} B^T S_NS^{-1}               −K C^T M_X^{-1}                     0        ]
                      [ S_NS^{-1} B F^{-1}  −S_NS^{-1}                          −S_NS^{-1} B F^{-1} C^T M_X^{-1}   0        ]
                      [ M_X^{-1} C K        M_X^{-1} C F^{-1} B^T S_NS^{-1}     M_X^{-1}                           G L^{-1} ]
                      [ 0                   0                                   L^{-1} G^T                         0        ],    (3.11)

where

    K = F^{-1} − F^{-1} B^T S_NS^{-1} B F^{-1}.

As with the approximation of M_F^{-1} in Section 3.3, we consider the approximate orders of the individual blocks of (3.11). Removing the O(h³) terms in the (1,3) and (3,1) blocks of (3.11) yields the approximation

    (M_A^MHD)^{-1} = [ K                   F^{-1} B^T S_NS^{-1}               0                                   0        ]
                     [ S_NS^{-1} B F^{-1}  −S_NS^{-1}                          −S_NS^{-1} B F^{-1} C^T M_X^{-1}   0        ]
                     [ 0                   M_X^{-1} C F^{-1} B^T S_NS^{-1}     M_X^{-1}                           G L^{-1} ]
                     [ 0                   0                                   L^{-1} G^T                         0        ].    (3.12)

3.3.3 Spectral analysis

To analyze the spectral properties of our approximate inverse preconditioner, we investigate the eigenspectrum of (M̃_A^MHD)^{-1} K^MHD, where (M̃_A^MHD)^{-1} is given in (3.11). We consider the O(h³) approximate inverse (3.11) to simplify the complex eigenvalue analysis. The preconditioned matrix has the form

    (M̃_A^MHD)^{-1} K^MHD = [ I_u + K C^T M_L^{-1} C              0     K C^T (I − M_L^{-1} M)                0   ]
                            [ S_NS^{-1} B F^{-1} C^T M_L^{-1} C   I_p   S_NS^{-1} B F^{-1} C^T (I − M_L^{-1} M)  0   ]
                            [ 0                                   0     M_L^{-1}(M + C K C^T)                 G   ]
                            [ 0                                   0     0                                     I_r ],    (3.13)

where M_L is the exact primal Schur complement M + D L^{-1} D^T. Let us introduce a few identities utilizing the null space properties of C^T and M_L; Proposition 1 is particularly useful for simplifying (3.13).

Proposition 1. The following relations hold:
(i) C^T (I − M_L^{-1} M) = 0,
(ii) C^T (I + M_L^{-1} M) = 2 C^T.

Proof. Since C^T and M_L have the same null space (the space of discrete gradients), using the Helmholtz decomposition

    b = d + ∇φ,

we obtain

    C^T (I − M_L^{-1} M) b = C^T (I − M_L^{-1} M) d,

where d ∉ Null(M).
Since M_L^{-1} M is zero on the null space of M and the identity on the range space of M, the range of (I − M_L^{-1} M) lies in Null(M). Therefore, identity (i) holds because the null spaces of C^T and M are the same. Using similar arguments, identity (ii) can also be shown to be true.

Using Proposition 1, (3.13) simplifies to

    (M̃_A^MHD)^{-1} K^MHD = [ I_u + K C^T M_L^{-1} C              0     0                         0   ]
                            [ S_NS^{-1} B F^{-1} C^T M_L^{-1} C   I_p   0                         0   ]
                            [ 0                                   0     M_L^{-1}(M + C K C^T)     G   ]
                            [ 0                                   0     0                         I_r ],    (3.14)

where we recall that G = M_L^{-1} D^T and K = F^{-1} − F^{-1} B^T S_NS^{-1} B F^{-1}.

Theorem 3. The matrix (M̃_A^MHD)^{-1} K^MHD has an eigenvalue λ = 1 of algebraic multiplicity at least n_u − n_b + 3m_b + m_u. The corresponding eigenvectors {v_i}, i = 1, …, n_u − n_b + 3m_b + m_u, are given by

    v_i = (u_i, p_i, b_i, r_i),

where u_i ∈ Null(C), b_i ∈ Null(M), and p_i and r_i are free.

Proof. Since (3.14) is block diagonal and its two diagonal blocks are block triangular, the generalized eigenvalue problem

    (M̃_A^MHD)^{-1} K^MHD v = λ v,    v = (u, p, b, r),

can be reduced to solving

    λu = (I_u + K C^T M_L^{-1} C) u,    (3.15)
    λp = p,    (3.16)
    λb = M_L^{-1}(M + C K C^T) b,    (3.17)
    λr = r.    (3.18)

Consider λ = 1; then (3.16) and (3.18) are trivially satisfied. Taking u ∈ Null(C) and b ∈ Null(M + C K C^T), (3.15) and (3.17) also hold. Since the null spaces of M and C^T are the same, we choose b ∈ Null(M).

Figure 3.1a shows the eigenvalues of the preconditioned matrix for (M_A^MHD)^{-1} in (3.12), using the PCD approximation (from [35]) for the fluid Schur complement and the vector mass matrix approximation of the magnetic Schur complement. Here we see that the clustering around the eigenvalue λ = 1 is very strong.

3.4 A block triangular preconditioner

In addition to the approximate inverse preconditioner, we introduce a Schur complement based preconditioner for (3.1). Again we consider the simpler Picard case, α = 0, but in practice use the Newton nonlinear scheme. We follow the well-known setting of [62, 80] for experimental comparison. Let us define M̃_B^MHD as

    M̃_B^MHD = [ K_NS    (K_C)^T ]
               [ 0        −S     ].    (3.19)

From [62, 80], the preconditioned matrix (M̃_B^MHD)^{-1} K^MHD has precisely two eigenvalues, ±1, and is diagonalizable. We would therefore expect an appropriate Krylov subspace solver to converge within two iterations in exact precision.

Figure 3.1: Preconditioned eigenvalue plots for (a) the approximate inverse preconditioner in (3.12) and (b) the block triangular preconditioner in (3.21), using the smooth solution given in (3.31). In each plot the blue curves represent the real parts of the eigenvalues and the red curves the imaginary parts. The dimension of the matrices in this example is 1399 × 1399.

To use M̃_B^MHD as a preconditioner, we require a direct Navier–Stokes solve and a Schur complement solve. The direct solve for the Navier–Stokes system is too costly, so we approximate K_NS with the Schur complement system

    M_I^NS = [ F    B^T    ]
             [ 0    −S_NS  ],    (3.20)

where S_NS = B F^{-1} B^T is the fluid Schur complement. Using (3.20), we obtain the more practical preconditioner

    M_B^MHD = [ M_I^NS    (K_C)^T ]
              [ 0          −S     ].    (3.21)

Theorem 4. The matrix (M_B^MHD)^{-1} K^MHD has an eigenvalue λ = 1 of algebraic multiplicity at least n_u, and an eigenvalue λ = −1 of algebraic multiplicity at least n_b. The corresponding (known) eigenvectors are given as follows:

λ = 1: eigenvectors {v_i}, i = 1, …, n_b − m_b, and {v_j}, j = n_b − m_b + 1, …, n_u, with

    v_i = (u_i, −S_NS^{-1} B u_i, b_i, 0)    and    v_j = (u_j, −S_NS^{-1} B u_j, 0, 0),

where b_i ∈ null(D), b_i ≠ 0, C u_i = (2M + C K1 C^T) b_i, and u_j ∈ null(C).

λ = −1: eigenvectors {v_i}, i = 1, …, n_b − m_b, and {v_j}, j = n_b − m_b + 1, …, n_b, with

    v_i = (u_i, 0, b_i, r_i)    and    v_j = (0, 0, b_j, r_j),    (3.22)

where u_i ∈ null(B), u_i ≠ 0, F u_i + C^T b_i = 0, b_j ∈ null(M), and r_i, r_j free.

Proof. The corresponding generalized eigenvalue problem is

    [ F    B^T   C^T   0   ] [u]       [ F    B^T     C^T          0    ] [u]
    [ B    0     0     0   ] [p]   =λ  [ 0    −S_NS   0            0    ] [p]
    [ −C   0     M     D^T ] [b]       [ 0    0       −(M + K_C)   −D^T ] [b]
    [ 0    0     D     0   ] [r]       [ 0    0       −D           0    ] [r],

where K_C = C K1 C^T.
The four block rows of the generalized eigenvalue problem can be written as

    (1 − λ)(F u + B^T p + C^T b) = 0,    (3.23)
    B u = −λ S_NS p,    (3.24)
    (1 + λ)(M b + D^T r) + λ C K1 C^T b − C u = 0,    (3.25)
    (1 + λ) D b = 0.    (3.26)

If λ = 1, (3.23) is automatically satisfied. Equation (3.24) simplifies to

    p = −S_NS^{-1} B u.

From (3.26) we have D b = 0, hence b ∈ null(D). Let us take r = 0; then (3.25) yields

    C u = (2M + C K1 C^T) b.    (3.27)

Case 1: Consider 0 ≠ b ∈ null(D); then C u = (2M + C K1 C^T) b. Since the ranks of C and 2M + C K1 C^T are n_b − m_b, condition (3.27) admits at least n_b − m_b linearly independent eigenvectors.

Case 2: Consider b = 0; then C u = 0, so u must lie in the null space of C. Since

    dim(null(C)) = n_u − n_b + m_b,

this accounts for n_u − n_b + m_b such eigenvectors.

Therefore λ = 1 is an eigenvalue of algebraic multiplicity at least n_u.

If λ = −1, (3.26) is satisfied and hence r is free. Simplifying (3.25) gives

    C K1 C^T b + C u = 0.    (3.28)

Let us take u ∈ null(B); then p = 0, and the condition on b is

    F u + C^T b = 0.    (3.29)

Under the condition u ∈ null(B), (3.29) satisfies the equality (3.28).

Case 1: Consider u ∈ null(B), u ≠ 0; then from (3.29) we have u = −F^{-1} C^T b. Since the rank of C^T is n_b − m_b and F has full rank, there are only n_b − m_b linearly independent b's that determine u. Hence, for this case we obtain at least n_b − m_b such eigenvectors.

Case 2: Consider u = 0; then for (3.29) to hold, C^T b = 0, so we take b ∈ null(C^T). Since the null space of C^T is made up of discrete gradients,

    dim(null(C^T)) = m_b.

Therefore λ = −1 is an eigenvalue of algebraic multiplicity at least n_b.

Remark 4. Note that in (3.22), {u_i} is a subset of the null vectors of B.

An approximation of the fluid Schur complement S_NS is needed to create a practical preconditioner. For S_NS we will use the PCD preconditioner developed in [35].
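The two-eigenvalue property of the ideal triangular preconditioner (3.19) claimed from [62, 80] is easy to confirm on a small random system with the same block sign pattern (the dense blocks below are arbitrary stand-ins, shifted to be nonsingular):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 6                                        # sizes of the two sub-problems
Kns = rng.standard_normal((n, n)) + n * np.eye(n)   # Navier-Stokes block (nonsingular)
Km  = rng.standard_normal((m, m)) + m * np.eye(m)   # Maxwell block
Kc  = rng.standard_normal((m, n))                   # coupling block

S = Km + Kc @ np.linalg.solve(Kns, Kc.T)            # exact Schur complement (3.3)
K = np.block([[Kns, Kc.T], [-Kc, Km]])
M_tri = np.block([[Kns, Kc.T], [np.zeros((m, n)), -S]])

lam = np.linalg.eigvals(np.linalg.solve(M_tri, K))
# every eigenvalue is (numerically) either +1 or -1
assert np.all(np.minimum(np.abs(lam - 1), np.abs(lam + 1)) < 1e-8)
```

With the exact Schur complement, +1 appears with multiplicity n and −1 with multiplicity m; it is replacing S and K_NS by approximations that perturbs this ideal spectrum.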
The approximation is based on

    S_NS = B F^{-1} B^T ≈ A_p F_p^{-1} Q_p,

where the matrix A_p is the pressure Laplacian, F_p is the pressure convection–diffusion operator and Q_p is the pressure mass matrix.

M_B^MHD is defined in (3.21), but in practice we use the approximation of the inverse of the Schur complement from Section 3.3, which is

    S^{-1} ≈ S_P^{-1} = [ M_X^{-1}      G L^{-1} ]
                        [ L^{-1} G^T    0        ].

The eigenvalues of the preconditioned matrix (M_B^MHD)^{-1} K^MHD are represented in Figure 3.1b. As with the eigenvalues of the approximate inverse preconditioned matrix, we see a small degradation of the eigenvalue clusters. However, we still see strong clustering around 1 and −1.

3.5 Numerical experiments

In this section, we present several 3D numerical results to illustrate the performance of our preconditioning approaches. We use FEniCS [77], a finite element software package, to create the matrix systems, and PETSc [7, 9] and HYPRE [38] to solve the resulting systems.

In line with [86], we set the nonlinear stopping tolerance to 1e-4 and the linear solve tolerance to 1e-3. For all experiments we use FGMRES [92] and the Newton (α = 1) nonlinear iteration scheme. This means that every solve associated with F and every multiplication associated with C^T is replaced by F̂ = F + F_NT and Ĉ^T = C^T + C_NT^T, respectively.
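FGMRES is required here because the inner solves in the preconditioner are themselves iterative, so the preconditioner changes from one outer iteration to the next. A minimal right-preconditioned flexible GMRES (no restarts) can be sketched in a few lines of NumPy; this is a textbook sketch, not the PETSc implementation used for the experiments:

```python
import numpy as np

def fgmres(A, b, apply_prec, tol=1e-3, maxit=100):
    """Minimal flexible GMRES with right preconditioning and zero initial guess.

    apply_prec(v) may differ at every step (e.g. an inner iterative solve),
    which is exactly the situation the flexible variant is designed for."""
    n = b.size
    beta = np.linalg.norm(b)
    V = np.zeros((n, maxit + 1))       # Arnoldi basis
    Z = np.zeros((n, maxit))           # stored preconditioned vectors
    H = np.zeros((maxit + 1, maxit))   # Hessenberg matrix
    V[:, 0] = b / beta                 # r0 = b for x0 = 0
    for j in range(maxit):
        Z[:, j] = apply_prec(V[:, j])
        w = A @ Z[:, j]
        for i in range(j + 1):         # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # small least-squares problem min || beta*e1 - H y ||
        e1 = np.zeros(j + 2); e1[0] = beta
        y = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)[0]
        rnorm = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if rnorm / beta < tol or H[j + 1, j] < 1e-14:
            return Z[:, :j + 1] @ y, j + 1
        V[:, j + 1] = w / H[j + 1, j]
    return Z[:, :maxit] @ y, maxit
```

The solution is recovered as x = Z y (not V y), which is the only structural difference from standard right-preconditioned GMRES.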
Table 3.4 details the methods which we use to solve the systems associated with the block preconditioners.

  Matrix    Implementation method
  Q_p       single AMG V-cycle
  A_p       single AMG V-cycle
  F̂         AMG-preconditioned GMRES with tolerance 1e-2
  M + X     AMG method developed in [61] with tolerance 1e-2
  L         single AMG V-cycle

Table 3.4: Solution methods for the block systems associated with the preconditioners.

We use the following notation: ℓ is the mesh level, DoFs is the total number of degrees of freedom, time is the average time to solve the systems associated with K^MHD at each nonlinear iteration, it_NL is the number of nonlinear (Newton) iterations to reach the stopping criterion, it_O is the average number of linear (FGMRES) iterations to solve K^MHD, it_MX is the average number of CG/auxiliary space iterations to solve M + X, and it_F is the average number of GMRES iterations to solve F̂. An A or B superscript on time or it_* denotes the approximate inverse or block triangular preconditioner, respectively.

3.5.1 3D cavity driven flow

The first example we consider is the classic lid driven cavity problem [35], defined by the following Dirichlet boundary conditions:

    u = (1, 0, 0)     on z = 1,
    u = (0, 0, 0)     on x = ±1, y = ±1, z = −1,
    n × b = n × b_N   on ∂Ω,
    r = 0             on ∂Ω,    (3.30)

where b_N = (−1, 0, 0).

Scalability results

Tables 3.5 and 3.6 show the time and iteration results using the approximate inverse and block triangular preconditioners. The approximate inverse preconditioner exhibits near-perfect scaling of the FGMRES iterations in both tables, whereas the FGMRES iterations for the block triangular preconditioner increase with each mesh level for the harder problem, Ha = √1000. For the easier parameter setup, Table 3.5, both preconditioners exhibit scalable iterations. We note that on the finest mesh level (ℓ = 6) in both Tables 3.5 and 3.6 the approximate inverse preconditioner yields the faster solution method.
In fact, for the more difficult problem (Ha = √1000) the approximate inverse preconditioner is almost exactly two times faster than the block triangular preconditioner at ℓ = 6.

  ℓ   DoFs        time^A    it_NL^A  it_O^A  it_MX^A  it_F^A   time^B    it_NL^B  it_O^B  it_MX^B  it_F^B
  1   14,012      1.18      3        10.0    1.70     1.61     0.81      4        21.5    2.00     1.96
  2   28,436      2.79      3        10.0    1.78     1.65     1.93      4        21.5    2.00     1.96
  3   64,697      11.43     3        9.7     1.85     1.78     6.90      4        21.0    2.00     2.00
  4   245,276     34.64     4        11.0    2.11     1.85     13.65     3        18.3    2.00     2.00
  5   937,715     255.25    4        10.5    2.21     1.95     135.88    3        19.0    2.47     2.00
  6   5,057,636   1979.22   3        9.7     2.71     2.11     2273.48   3        22.3    3.00     2.50

Table 3.5: 3D cavity driven flow using both the approximate inverse and block triangular preconditioners with parameters κ = 1, ν = 1, ν_m = 1 and Ha = 1.

  ℓ   DoFs        time^A     it_NL^A  it_O^A  it_MX^A  it_F^A   time^B     it_NL^B  it_O^B  it_MX^B  it_F^B
  1   14,012      7.58       4        57.0    1.96     2.00     5.58       4        146.2   2.00     2.00
  2   28,436      22.21      4        56.2    1.98     2.00     14.88      4        147.2   2.00     2.00
  3   64,697      65.95      4        56.0    1.99     2.00     47.79      4        154.2   2.00     2.00
  4   245,276     271.48     4        56.0    2.15     2.00     205.63     4        160.5   2.23     2.00
  5   937,715     1255.15    4        55.5    2.79     2.01     1003.92    4        168.8   2.91     2.00
  6   5,057,636   17656.36   4        58.5    2.99     2.07     35563.15   4        217.5   3.03     2.03

Table 3.6: 3D cavity driven flow using both the approximate inverse and block triangular preconditioners with parameters κ = 1e1, ν = 1e-1, ν_m = 1e-1 and Ha = √1000.

Numerical cost of preconditioners

From the definition of the approximate inverse preconditioner it is clear that each of its iterations is more expensive than one iteration of the block triangular preconditioner. However, as seen in Table 3.6, the better scalability of the approximate inverse preconditioner on harder and larger problems yields smaller overall timings.

Table 3.7 shows the number of solves and matrix–vector products associated with the individual block matrices that make up the approximate inverse and the block triangular preconditioners for mesh level ℓ = 6 in Table 3.6. We can see that the approximate inverse preconditioner requires fewer solves for both the vector-valued (F̂ and M_X) and scalar-valued (A_p, Q_p and L) systems.
Also, the total number of matrix–vector products is smaller for the approximate inverse preconditioner. This is reflected in the time it takes to solve the system, where the approximate inverse preconditioner is faster than the block triangular one. We also note that the iterations in Table 3.6 remain constant for the approximate inverse preconditioner, in contrast to the block triangular one. Therefore, for harder problems on larger meshes, the approximate inverse preconditioner appears more efficient with respect to both time and iterations.

  Linear operation                M_A^MHD              M_B^MHD
  F̂^{-1}                         4 × 58.5 = 234.0     1 × 217.5 = 217.5
  A_p^{-1}                        3 × 58.5 = 175.5     1 × 217.5 = 217.5
  Q_p^{-1}                        3 × 58.5 = 175.5     1 × 217.5 = 217.5
  M_X^{-1}                        2 × 58.5 = 117.0     1 × 217.5 = 217.5
  L^{-1}                          2 × 58.5 = 117.0     1 × 217.5 = 217.5
  Ĉ^T or C                        2 × 58.5 = 117.0     1 × 217.5 = 217.5
  B^T or B                        4 × 58.5 = 234.0     1 × 217.5 = 217.5
  G^T or G                        2 × 58.5 = 117.0     2 × 217.5 = 435.0
  Average total FGMRES
  iteration time                  17656.36             35563.15

Table 3.7: Numerical cost of using M_A^MHD and M_B^MHD for mesh level ℓ = 6 in Table 3.6.

3.5.2 Fichera corner

The second solution we consider is a smooth solution on a non-convex domain. Specifically, the domain is a cube with a missing corner, Ω = (0, 1)³ \ ([0.5, 1) × [0.5, 1) × [0.5, 1)). The exact solution is

    u = ∇ × (u_1, u_1, u_1)                      on Ω,
    p = xyz(x − 1)(y − 1)(z − 1) e^x             on Ω,
    b = ∇ × (b_1, b_1, b_1)                      on Ω,
    r = xyz(x − 1)(y − 1)(z − 1) e^{x+y+z}       on Ω,    (3.31)

where

    u_1 = x²y²z²(x − 1)²(y − 1)²(z − 1)² cos(x),
    b_1 = x²y²z²(x − 1)²(y − 1)²(z − 1)² sin(y),

which defines the inhomogeneous Dirichlet boundary conditions and the forcing terms f and g.

Table 3.8 shows the timing and iteration results for the following two setups:
• Setup 1: κ = 1e1, ν = 1e-2, ν_m = 1e-2 and Ha = √1e5;
• Setup 2: κ = 1e1, ν = 1e-2, ν_m = 1e-3 and Ha = 1000.

We can see that for Setup 1 (Ha = √1e5) the outer FGMRES iterations remain constant. However, when considering Setup 2 (Ha = 1000) we start to see a large degradation in the iteration counts.
This is not unexpected for such problems.

                   Setup 1                                     Setup 2
  ℓ   DoFs         time^A     it_NL^A it_O^A  it_MX^A it_F^A   time^A      it_NL^A it_O^A  it_MX^A it_F^A
  1   34,250       15.64      4       29.2    2.56    2.00     102.34      7       211.7   2.04    2.00
  2   57,569       30.41      4       29.2    2.74    2.00     242.27      7       237.0   2.10    2.00
  3   89,612       52.90      4       28.8    2.91    2.00     440.66      7       252.1   2.15    2.00
  4   332,744      232.23     4       27.8    2.96    2.00     2361.45     7       294.3   2.40    2.00
  5   999,269      1026.31    4       27.8    2.98    2.95     8657.89     7       303.9   2.76    2.12
  6   5,232,365    11593.47   5       28.6    2.99    3.00     111675.33   7       321.4   2.93    2.48

Table 3.8: Fichera corner using the approximate inverse preconditioner. Setup 1: κ = 1e1, ν = 1e-2, ν_m = 1e-2 and Ha = √1e5; Setup 2: κ = 1e1, ν = 1e-2, ν_m = 1e-3 and Ha = 1000.

3.5.3 MHD generator

This problem describes unidirectional flow in a duct which induces an electromagnetic field. We consider the channel [0, 5] × [0, 1] × [0, 1]. On the left and right boundaries we enforce the boundary condition u = (1, 0, 0), and on the other walls a no-slip boundary condition is applied. Defining δ = 0.1, b_0 = 1, x_on = 2 and x_off = 2.5, the boundary condition associated with the magnetic unknowns is n × b = n × (0, b_y, 0), where

    b_y = (b_0/2) [ tanh((x − x_on)/δ) − tanh((x − x_off)/δ) ].

The timing and iteration results for the approximate inverse preconditioner are reported in Table 3.9. From the table we can see that the iteration counts decrease as the problem gets larger.
We suspect this is because the mesh size h becomes small enough for mesh levels ℓ ≥ 3 that the fluid and magnetic viscosities are correctly captured on these meshes.

    ℓ  DoFs         timeA     itANL  itAO   itAMX  itAF
    1  2,199           1.50     3    172.7  1.50   1.92
    2  13,809         13.32     3    108.0  1.54   1.99
    3  96,957        260.81     4    105.2  1.95   2.00
    4  724,725      1693.26     3     70.7  1.98   2.79
    5  5,600,229    8515.71     3     68.0  2.10   2.64

Table 3.9: MHD generator using the approximate inverse preconditioner with parameters κ = 1, ν = 1e-1, νm = 1e-1 and Ha = 10.

Chapter 4

Conjugate gradient for nonsingular saddle-point systems with a maximally rank-deficient leading block

The final chapter is currently under revision for publication in the Journal of Computational and Applied Mathematics. In Section 4.1, we introduce the saddle-point matrix and the specific properties considered. In Section 4.2 we derive conditions under which this saddle-point system can be solved using preconditioned conjugate gradient. In Section 4.3, we consider a null-space decoupling of (4.1). In Section 4.4 we analyze the spectral properties of the block preconditioners. Section 4.5 presents several numerical results for the mixed Maxwell problem (1.5) using the preconditioned conjugate gradient method.

4.1 Problem statement

In this chapter, we derive two sufficient conditions that allow the use of the conjugate gradient (CG) [58] method to solve an indefinite saddle-point system with a maximally rank-deficient leading block. Our work builds on [36], where the authors developed an indefinite approximate inverse preconditioner for such problems. We expand this to consider the family of block diagonal and block triangular preconditioners from [50, 51].

Consider the regularized saddle-point system

    ( N   E^T ) ( b )   ( h )
    ( E   -R  ) ( r ) = ( 0 ),        (4.1)

whose coefficient matrix we denote by K_N, where N ∈ R^{n×n}, E ∈ R^{m×n} and R ∈ R^{m×m} with m < n. In many situations we have R = 0; however, for some PDE discretizations a symmetric positive semi-definite matrix R ≠ 0 can be used as a stabilization procedure.
We focus our investigation on symmetric positive semi-definite matrices N. In particular, we are interested in maximally rank-deficient leading blocks that still yield a nonsingular block matrix K_N. For R = 0, we require the following properties to ensure that (4.1) is invertible:

    rank(N) = n − m,   rank(E) = m,   and   ker(N) ∩ ker(E) = {0}.

For R ≠ 0, the requirement on rank(E) can be relaxed.

Applying CG to indefinite linear systems has been extensively considered; see for example [11, 12, 18, 34, 47, 69, 75, 91]. In [75], the authors show that negating the second block row of a symmetric saddle-point matrix makes the resulting saddle-point matrix real positive. The authors use this property to derive conditions for positive definiteness with respect to a certain bilinear form. The authors in [11, 12, 34] show that for the class of constraint-based preconditioners the preconditioned matrix has all positive eigenvalues. Thus, even though the preconditioner and saddle-point matrix are indefinite, CG can still be used. Finally, the authors in [18, 69] use a non-standard inner product, and those in [47, 91] start (and remain) on a certain manifold, both to ensure that CG is applicable.

In this work, we consider the class of maximally rank-deficient regularized and unregularized saddle-point systems. The class of preconditioners we use does not rely on the spectral structure of the preconditioner matrix, and thus does not lead to techniques similar to those mentioned above. Therefore, the approach we take relies on the construction of the Krylov subspace:

    K_k(M^{-1}K, r0) = span{r0, M^{-1}K r0, . . . , (M^{-1}K)^{k−1} r0},

where r0, K and M are the preconditioned initial residual, the coefficient matrix, and the preconditioner, respectively. If K and M are both SPD then clearly it is possible to use a solver such as CG for SPD matrices. In this work K is indefinite and M is either SPD or structurally nonsymmetric (block triangular preconditioners), thus M^{-1}K will not be SPD or even real positive.
However, by explicitly forming the Krylov subspace, we show that under two sufficient conditions the Krylov subspace is constructed from products of SPD and SPSD matrices arising from the individual blocks of K and M. These conditions rely heavily on the maximal nullity of the leading block of (4.1). Thus, even though (M^{-1}K)^k is not SPD, the application of (M^{-1}K)^k r0 can be written as a product of SPD and SPSD matrices acting on the preconditioned residual. This means that we can use CG even though the preconditioned system has both positive and negative eigenvalues and is also structurally nonsymmetric.

As we show in subsequent sections, the above properties are critical for accomplishing our main stated goal, which is to establish conditions under which CG may be used despite the matrix K_N being indefinite. We accompany our analytical observations with numerical experiments.

4.2 Krylov Subspace

In [50], the authors show that an ideal preconditioner for saddle-point systems of the form (4.1) with a maximally rank-deficient leading block is the augmented preconditioner:

    ( N + E^T U^{-1} E   0 )
    ( 0                  U ),        (4.2)

where U is an arbitrary symmetric positive-definite matrix. The authors of [47] promote the use of projected conjugate gradient.

Our goal is to find conditions on an approximation Q to the augmented term E^T U^{-1} E such that we can use preconditioned CG. We consider a preconditioner of the following form:

    M_P = ( N + Q   0 )
          ( 0       U ),

where Q is chosen so that N + Q is SPD. It is possible to use CG to solve (4.1) when the preconditioned matrix M_P^{-1} K_N is symmetrizable and its symmetrized version is positive definite. Given an initial preconditioned residual r0, the Krylov subspace is given by:

    K_k(M_P^{-1} K_N, r0) = span{r0, M_P^{-1} K_N r0, . . . , (M_P^{-1} K_N)^{k−1} r0}.        (4.3)

Considering a zero initial guess, the initial preconditioned residual is given by:

    r0 = M_P^{-1} ( h )   ( (N + Q)^{-1} h )
                  ( 0 ) = ( 0             ).
(4.4)

To construct (4.3), an essential step is multiplication with the preconditioned matrix, which is given by:

    M_P^{-1} K_N = ( N + Q   0 )^{-1} ( N   E^T )   ( (N + Q)^{-1} N   (N + Q)^{-1} E^T )
                   ( 0       U )      ( E   -R  ) = ( U^{-1} E         -U^{-1} R        ).

Theorem 5. Given the symmetric indefinite block 2 × 2 matrix and the symmetric positive-definite preconditioner

    K_N = ( N   E^T )          M_P = ( N + Q   0 )
          ( E   -R  )   and          ( 0       U ),

and assuming a zero initial guess, for the preconditioned residual r0 given in (4.4) the multiplications with the preconditioned matrix simplify as follows:

    [M_P^{-1} K_N]^i r0 = ( [(N + Q)^{-1} N]^i (N + Q)^{-1} h )
                          ( 0                                 )    for i = 0, 1, . . .        (4.5)

as long as the following conditions hold:

    Z^T h = 0,        (4.6a)
    N Z = 0,          (4.6b)

where Z = (N + Q)^{-1} E^T.

Proof. The proof follows by induction. For i = 0 we have r0, and for i = 1 we have

    M_P^{-1} K_N r0 = ( (N + Q)^{-1} N (N + Q)^{-1} h )
                      ( U^{-1} E (N + Q)^{-1} h       ).        (4.7)

Thus by condition (4.6a) we see that (4.5) holds for i = 1. Now assuming

    [M_P^{-1} K_N]^{i−1} r0 = ( [(N + Q)^{-1} N]^{i−1} (N + Q)^{-1} h )
                              ( 0                                     ),        (4.8)

it readily follows that

    [M_P^{-1} K_N]^i r0 = [M_P^{-1} K_N] [M_P^{-1} K_N]^{i−1} r0
                        = ( (N + Q)^{-1} N   (N + Q)^{-1} E^T ) ( [(N + Q)^{-1} N]^{i−1} (N + Q)^{-1} h )
                          ( U^{-1} E         -U^{-1} R        ) ( 0                                     )
                        = ( [(N + Q)^{-1} N]^i (N + Q)^{-1} h                             )
                          ( U^{-1} E (N + Q)^{-1} N [(N + Q)^{-1} N]^{i−2} (N + Q)^{-1} h )
                        = ( [(N + Q)^{-1} N]^i (N + Q)^{-1} h )
                          ( 0                                 ),

where in the last step we used (4.6b). This completes the proof.

From (4.6b) in Theorem 5 it readily follows that the columns of Z = (N + Q)^{-1} E^T lie in the null-space of N. Multiplying by N + Q, we can express the approximation Q in terms of E^T and the null-space of N.

Corollary 1. To satisfy (4.6b) in Theorem 5, Q must be chosen such that:

    E^T = QZ.

Remark 5. In Theorem 5, we considered left preconditioning. For right preconditioning the preconditioned matrix is K_N M_P^{-1} and the associated Krylov subspace would be

    K_k(K_N M_P^{-1}, r) = span{r, K_N M_P^{-1} r, . . . , (K_N M_P^{-1})^{k−1} r},

where r is the initial residual for a zero initial guess. Using the conditions (4.6) one can prove a result similar to Theorem 5, that is:

    [K_N M_P^{-1}]^i r = ( [N (N + Q)^{-1}]^i h )
                         ( 0                    )    for i = 0, 1, . . ..

Thus, it is possible to use CG with left or right preconditioning.

Remark 6.
The matrix Q does not necessarily have to be positive definite for the conditions in (4.6) to be satisfied. For example, if Q = E^T U^{-1} E then Z is defined as:

    Z = (N + E^T U^{-1} E)^{-1} E^T.        (4.9)

In [36, Proposition 3.6] the authors show that if Z is defined as in (4.9), then NZ = 0. Thus, condition (4.6b) in Theorem 5 is automatically satisfied.

We now show that we are not restricted to the block diagonal preconditioner to enable the use of CG.

Proposition 2. Consider the upper and lower triangular preconditioners:

    M̃_P = ( N + Q   E^T )          M̂_P = ( N + Q   0 )
           ( 0       U   )   and          ( D       U ).

Then

    [(∗M)^{-1} K_N]^k r0 = ( [(N + Q)^{-1} N]^k (N + Q)^{-1} h )
                           ( 0                                 ),

where ∗ denotes the upper or lower triangular preconditioner, holds under the same conditions as in Theorem 5.

The proof follows by induction. The preconditioned matrices are

    M̃_P^{-1} K_N = ( (N + Q)^{-1} N − (N + Q)^{-1} E^T U^{-1} E   (N + Q)^{-1} E^T + (N + Q)^{-1} E^T U^{-1} R )
                    ( U^{-1} E                                     -U^{-1} R                                     );

    M̂_P^{-1} K_N = ( (N + Q)^{-1} N                          (N + Q)^{-1} E^T                       )
                    ( U^{-1} E − U^{-1} E (N + Q)^{-1} N      -U^{-1} E (N + Q)^{-1} E^T − U^{-1} R  ),

and it follows that

    [(∗M)^{-1} K_N]^k r0 = ( [(N + Q)^{-1} N]^k (N + Q)^{-1} h )
                           ( 0                                 ),

where ∗ denotes the upper or lower triangular preconditioner.

4.3 Null-space decoupling

Consider the saddle-point form of (4.1) with R = 0. Since the null-space of N is of dimension m, there is a matrix Z ∈ R^{n×m} whose columns form a basis for the null-space of N. Using this null-space matrix, it is possible to decouple the saddle-point system (4.1) by multiplying the first block row by Z^T to obtain Z^T E^T r = Z^T h. The solution procedure for (4.1) becomes:

    Z^T E^T r = Z^T h,                        (4.10a)
    N b = h − E^T r   with   E b = 0.         (4.10b)

The solution of (4.10) is unique if and only if EZ is nonsingular and N is positive definite on the null-space of E.

To aid the discussion of the null-space decoupling, we introduce a Helmholtz-type decomposition similar to [36]. The intersection of the null-spaces of N and E is trivial, thus:

    ker(N) ⊕ ker(E) = R^n.        (4.11)

This decomposition provides key insights into the null-space decoupling of (4.1).
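The algebraic conditions (4.6a)-(4.6b) and the block structure (4.5) are easy to check numerically on a small dense example. The sketch below is our own illustration, not code from this thesis; all sizes and variable names are arbitrary. It builds a random maximally rank-deficient N, takes the exact augmentation Q = E^T U^{-1} E so that Corollary 1 and Remark 6 apply, projects h so that Z^T h = 0, and confirms that every Krylov vector generated by M_P^{-1} K_N keeps a zero second block.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 4

E = rng.standard_normal((m, n))          # full row rank (generically)
W = rng.standard_normal((n - m, n))
N = W.T @ W                              # SPSD with rank n - m
U = np.eye(m)                            # any SPD U

Q = E.T @ np.linalg.solve(U, E)          # exact augmentation Q = E^T U^{-1} E
Z = np.linalg.solve(N + Q, E.T)          # Z = (N + Q)^{-1} E^T

# Condition (4.6b): N Z = 0 holds automatically for this choice of Q.
assert np.linalg.norm(N @ Z) < 1e-8

# Condition (4.6a): project a random h so that Z^T h = 0.
h0 = rng.standard_normal(n)
h = h0 - Z @ np.linalg.solve(Z.T @ Z, Z.T @ h0)

# Assemble K_N (with R = 0) and M_P, then form the Krylov vectors.
K = np.block([[N, E.T], [E, np.zeros((m, m))]])
M = np.block([[N + Q, np.zeros((n, m))], [np.zeros((m, n)), U]])
v = np.linalg.solve(M, np.concatenate([h, np.zeros(m)]))   # r_0 as in (4.4)
for _ in range(6):
    assert np.linalg.norm(v[n:]) < 1e-8  # second block stays zero, cf. (4.5)
    v = np.linalg.solve(M, K @ v)
```

If Q is chosen so that E^T ≠ QZ, the second block fills in after one multiplication and the reduction to products of SPD/SPSD blocks no longer applies.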
Using (4.11), we write the primary variable solution as

    b = b_N + b_E,

where b_N ∈ ker(N) and b_E ∈ ker(E).

By [36, Proposition 3.6] and [36, Theorem 3.5], there exists a formulation of the null-space of N such that U = EZ is SPD, which allows the use of CG. From the Helmholtz-type decomposition, (4.10b) can be written as:

    N b_E = h − E^T r   with   E b_N = 0.

Since b_N ∈ ker(N) and ker(N) ∩ ker(E) = {0}, we have E b_N ≠ 0 unless b_N = 0. Thus the decoupled system becomes

    Z^T E^T r = Z^T h   and   N b_E = h − E^T r.        (4.12)

Since N is positive definite on the null-space of E, it is possible to use a CG-type solver for the second equation in (4.12). In practice, since N is singular, one may apply a null-space method, which amounts to applying CG on:

    V^T N V b_E = V^T h   where   E V = 0.

We note that a null-space matrix V does not need to be explicitly constructed; see [46].

We conclude this section with a few comments for the case when Z^T h = 0. In this setting (dim(null(N)) = m), it is very common that the secondary variables r arise from a Lagrange multiplier formulation (4.1); see for example [33, 49] for the mixed formulation of Maxwell's equations and [16] for norm minimization problems with equality constraints. The construction of h is then often independent of the secondary variables, which leads to Z^T h = 0 for such formulations. Thus, the solution to (4.10a) is zero, and the solution to (4.12) becomes:

    N b_E = h.        (4.13)

Considering a preconditioner of the form N + Q for (4.13) would form the same block Krylov subspace defined in (4.5), with r = 0.

For a non-zero r, the discussion around (4.12) may indicate that the condition (4.6a) can be relaxed for the preconditioned saddle-point system.

4.4 Eigenvalue Analysis

We now analyze the ideal forms of the preconditioner, where we do not use the approximation Q but rather the augmented term E^T U^{-1} E. We consider the upper block triangular preconditioner:

    M̃_I = ( N + E^T U^{-1} E   E^T )
           ( 0                  U   ).

Extension to the block diagonal and block lower triangular preconditioners can easily be done using the same techniques.

Theorem 6.
The preconditioned matrix M̃_I^{-1} K has the eigenvalue λ = 1 with algebraic multiplicity n − m, and the eigenvalues λ = (−1 ± √5)/2 (the roots of λ^2 + λ − 1 = 0), each with algebraic multiplicity m − l, where l = dim(Null(R)). The corresponding eigenpairs (λ, x) are:

    λ = 1,  x = (b_E, 0),   and   λ = (−1 ± √5)/2,  x = (b_N, r_R),

where b_E ∈ Null(E), b_N ∈ Null(N), and r_R ∈ Null(R).

Proof. The corresponding generalized eigenvalue problem is given by:

    ( N   E^T ) ( b )       ( N + E^T U^{-1} E   E^T ) ( b )
    ( E   -R  ) ( r ) = λ   ( 0                  U   ) ( r ).        (4.14)

Since U and R are SPD and SPSD, respectively, λU + R is nonsingular for all λ ≠ 0. Substituting r = (λU + R)^{-1} E b into the first block row of (4.14), we obtain:

    (1 − λ) N b + (1 − λ) E^T (λU + R)^{-1} E b = λ E^T U^{-1} E b.

From this, it follows immediately that if b ∈ Null(E) then λ = 1 is an eigenvalue. Now, assume that λ ≠ 1 and E b ≠ 0. Substituting E b = (λU + R) r into (4.14) gives:

    (λ − 1) N b + (λ^2 + λ − 1) E^T r + λ E^T U^{-1} R r = 0.

Thus, by setting b ∈ Null(N) and r ∈ Null(R), we have λ = (−1 ± √5)/2.

From Theorem 6 it follows that if R = 0, we have the full spectrum of the preconditioned matrix, with just three distinct eigenvalues that have high algebraic multiplicities.

Corollary 2. For the non-regularized saddle-point form of (4.1), with R = 0, M̃_I^{-1} K has the eigenvalue λ = 1 with algebraic multiplicity n − m and the eigenvalues λ± = (−1 ± √5)/2, each with algebraic multiplicity m.

Proposition 3, given below, outlines the algebraic multiplicities of the eigenvalues and their corresponding eigenvectors for the block diagonal and block lower triangular preconditioners. The full eigenvalue analysis for these preconditioners is omitted since it is very similar to that of Theorem 6.

Proposition 3.
Consider the preconditioners

    M_I = ( N + E^T U^{-1} E   0 )          M̂_I = ( N + E^T U^{-1} E   0 )
          ( 0                  U )   and           ( D                  U ).

Then the eigenpairs and algebraic multiplicities of the preconditioned matrices are given in Table 4.1.

    Preconditioned matrix   Eigenvalue      Eigenvector   Algebraic multiplicity
    M_I^{-1} K              1               (b, r_R)      n − l
    M_I^{-1} K              −1              (b_N, r_R)    m − l
    M̂_I^{-1} K              1               (b_E, 0)      n − m
    M̂_I^{-1} K              (−1 + √5)/2     (b_N, r_R)    m − l
    M̂_I^{-1} K              (−1 − √5)/2     (b_N, r_R)    m − l

Table 4.1: Eigenpairs and algebraic multiplicities for the preconditioned matrices M̂_I^{-1} K and M_I^{-1} K. We use the notation b_E ∈ Null(E), b_N ∈ Null(N), r_R ∈ Null(R), and l = dim(Null(R)).

In Figure 4.1 we show an example of the eigenvalue distribution of the preconditioned matrix.

Figure 4.1: Eigenvalue distribution of the preconditioned matrix M̃_I^{-1} K using randomly generated blocks with n = 100, m = 20, and l = 5.

4.5 Numerical Experiments

In this section we use the time-harmonic Maxwell equations in mixed form [51, 78] to illustrate our findings. The continuous problem is given as follows:

    ∇× νm ∇× b + ∇r = g   in Ω,
    ∇ · b = 0             in Ω,        (4.15)

where b is the magnetic field and r is the Lagrange multiplier associated with the divergence constraint on the magnetic field. The constant νm is the magnetic viscosity. This model is well understood and gives rise, upon finite element discretization, to a symmetric system; see [51, 72]. It also arises in coupled electromagnetic flows such as magnetohydrodynamics; see [86, 110] and the references therein.

We consider a finite element discretization of (4.15) which uses Nédélec elements for the magnetic field and nodal elements for the multiplier variable; see [51, 72] for more details. Upon discretization, N, E and E^T are defined to be the discrete curl-curl, magnetic divergence and magnetic gradient operators, respectively. Thus, the null-space of N is the range of the discrete gradient operator and has dimension m. The resulting discretization of (4.15) falls into the class of saddle-point systems with a maximally rank-deficient leading block.
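Before turning to the Maxwell discretization, the spectrum predicted for the ideal block diagonal preconditioner can be reproduced in a few lines. The following sketch is our own check with arbitrary random blocks in the unregularized case R = 0 (so the preconditioned matrix has exactly the two eigenvalue clusters ±1 of Table 4.1, with −1 of multiplicity m):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4

E = rng.standard_normal((m, n))
W = rng.standard_normal((n - m, n))
N = W.T @ W                               # SPSD with nullity m
U = np.eye(m)
Q = E.T @ np.linalg.solve(U, E)           # ideal augmented term

K = np.block([[N, E.T], [E, np.zeros((m, m))]])      # R = 0
M = np.block([[N + Q, np.zeros((n, m))],
              [np.zeros((m, n)), U]])

# M is SPD, so M^{-1} K is similar to a symmetric matrix: real spectrum.
lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)

# Two distinct eigenvalues: -1 (multiplicity m) and +1 (multiplicity n).
assert np.allclose(lam[:m], -1.0, atol=1e-6)
assert np.allclose(lam[m:], 1.0, atol=1e-6)
```

The same script, with M replaced by the triangular variants, reproduces the corresponding rows of Table 4.1.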
We note that a sparse construction of the discrete gradient operator for first-order Nédélec elements is possible [72, Section 2].

In [51] it was shown that the vector mass matrix and scalar Laplacian are appropriate choices for Q and U, respectively. Since U is now the scalar Laplacian, we use the notation U = L. We now show that for these choices the conditions (4.6a)-(4.6b) in Theorem 5 hold. In [51, Proposition 2.2] the authors prove the identity

    E^T = QZ,

where we recall that E^T and Z are the magnetic gradient and discrete gradient operators, respectively. Thus, by Corollary 1, (4.6b) holds. Typically, the right-hand side is divergence-free for physical applications; see [49, 110]. Since Z is the null-space operator of the curl-curl matrix, Z^T is a discrete divergence operator. Hence, the divergence-free condition (4.6a) holds.

Therefore, for the Maxwell problem of the form (4.15) the conditions (4.6a)-(4.6b) hold and enable the use of CG with either a block diagonal, upper or lower block triangular preconditioner. The coefficient matrix and block diagonal preconditioner for the mixed Maxwell problem in (1.5) are:

    K = ( N   E^T )          M = ( N + Q   0 )
        ( E   0   )   and        ( 0       L ),

where we recall that N, E, E^T, Q, and L are the curl-curl operator, the magnetic divergence operator, the magnetic gradient operator, the vector mass matrix, and the scalar Laplacian, respectively.

Let us consider a 3D example with a smooth solution on Ω = [0, 1]^3. The analytical solution is set to be

    b(x, y, z) = ( −exp(x + y + z) sin(y) + exp(x + y + z) sin(z) )
                 (  xy exp(x + y + z) − yz exp(x + y + z)         )
                 ( −exp(x + y + z) sin(x) + exp(x + y + z) sin(y) ),
    r(x, y, z) = 0.        (4.16)

Then the source term g in (4.15) and the inhomogeneous boundary conditions are defined from the analytical solution. By construction, we have ensured that the right-hand side is divergence-free.
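The sparse discrete gradient makes the identity N Z = 0 very concrete: on a grid, the discrete curl of a discrete gradient vanishes loop by loop. As a hedged illustration (our own indexing and orientation conventions on a structured hexahedral node grid, not the Nédélec assembly of [72]), the sketch below builds the node-to-edge gradient incidence G and the edge-to-face curl incidence C and checks that C G = 0, so any curl-curl matrix of the form N = C^T M_f C annihilates the range of the gradient.

```python
import numpy as np

def incidence(nx, ny, nz):
    """Node-to-edge gradient G and edge-to-face curl C incidence
    matrices on an nx-by-ny-by-nz grid of nodes (lexicographic)."""
    nid = lambda i, j, k: i + nx * (j + ny * k)
    edges, eid = [], {}
    for d, (dx, dy, dz) in enumerate([(1, 0, 0), (0, 1, 0), (0, 0, 1)]):
        for k in range(nz - dz):
            for j in range(ny - dy):
                for i in range(nx - dx):
                    eid[(d, i, j, k)] = len(edges)
                    edges.append((nid(i, j, k), nid(i + dx, j + dy, k + dz)))
    G = np.zeros((len(edges), nx * ny * nz))
    for e, (tail, head) in enumerate(edges):
        G[e, head], G[e, tail] = 1.0, -1.0
    # Each face is a signed loop of its four boundary edges.
    loops = []
    for k in range(nz):                      # xy-faces
        for j in range(ny - 1):
            for i in range(nx - 1):
                loops.append([(0, i, j, k, +1), (1, i + 1, j, k, +1),
                              (0, i, j + 1, k, -1), (1, i, j, k, -1)])
    for k in range(nz - 1):                  # yz-faces
        for j in range(ny - 1):
            for i in range(nx):
                loops.append([(1, i, j, k, +1), (2, i, j + 1, k, +1),
                              (1, i, j, k + 1, -1), (2, i, j, k, -1)])
    for k in range(nz - 1):                  # zx-faces
        for j in range(ny):
            for i in range(nx - 1):
                loops.append([(2, i, j, k, +1), (0, i, j, k + 1, +1),
                              (2, i + 1, j, k, -1), (0, i, j, k, -1)])
    C = np.zeros((len(loops), len(edges)))
    for f, loop in enumerate(loops):
        for d, i, j, k, s in loop:
            C[f, eid[(d, i, j, k)]] = s
    return G, C

G, C = incidence(3, 3, 3)
assert np.allclose(C @ G, 0.0)     # discrete curl of a gradient vanishes
N = C.T @ C                        # curl-curl with a unit face mass matrix
assert np.allclose(N @ G, 0.0)     # hence N Z = 0 with Z = G
```

In practice G is stored sparsely (it has two nonzeros per row), which is exactly the cheap construction referred to above.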
We use this setup as a basis for all numerical experiments and outline any alterations (e.g., to the domain or the multiplier variable) we make for each specific example.

In each experiment we use a tolerance of 1e-6 for the outer flexible CG (FCG) iteration [82]. For the inner solves (N + Q and L), we use the auxiliary space preconditioner of [61]. This entails a CG solve for each of N + Q and L with a tolerance of 1e-3. In the subsequent tables we use the following notation:

• ℓ: mesh level;
• DoFs: number of degrees of freedom for the magnetic field plus the multiplier variables;
• Time: total time for FCG to converge to the specified tolerance;
• it: number of outer FCG iterations to converge to the specified tolerance;
• it1: number of inner CG/auxiliary space iterations for N + Q;
• it2: number of inner CG/auxiliary space iterations for L.

4.5.1 Krylov Subspace Solver Test

The first numerical experiment we consider tests the robustness and performance of FCG compared to other Krylov subspace methods. We set the inner tolerance to 1e-4 and test against MINRES [83] and GMRES [94]. The results are shown in Table 4.2. We observe that the iteration counts are similar for all three methods tested, and FCG is slightly superior in terms of computational time.
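FCG differs from classical CG only in how the new search direction is formed: rather than trusting the short recurrence, it explicitly A-orthogonalizes the preconditioned residual against the previous direction, which tolerates a preconditioner that changes from one iteration to the next (here, inner solves to a loose tolerance). The following minimal FCG(1) sketch is our own, not the solver used for the tables; the variable-sweep Jacobi "preconditioner" is an artificial stand-in for the inner auxiliary space solve.

```python
import numpy as np

def fcg(A, b, apply_prec, tol=1e-6, maxit=200):
    """Flexible CG: the new direction is explicitly A-orthogonalized
    against the previous one, so apply_prec may vary between calls."""
    x = np.zeros_like(b)
    r = b.copy()
    d = Ad = None
    for it in range(maxit):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = apply_prec(r)
        d = z if d is None else z - (z @ Ad) / (d @ Ad) * d
        Ad = A @ d
        alpha = (r @ d) / (d @ Ad)
        x = x + alpha * d
        r = r - alpha * Ad
    return x, maxit

# SPD test matrix (diagonally dominant so Jacobi sweeps converge).
rng = np.random.default_rng(2)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + 10 * n * np.eye(n)
b = rng.standard_normal(n)

calls = [0]
def prec(r):
    # Variable inner work: 1-3 Jacobi sweeps, different on each call,
    # mimicking an inner iterative solve to a loose tolerance.
    calls[0] += 1
    z = np.zeros_like(r)
    for _ in range(1 + calls[0] % 3):
        z = z + (r - A @ z) / np.diag(A)
    return z

x, its = fcg(A, b, prec)
assert np.linalg.norm(A @ x - b) <= 1e-5 * np.linalg.norm(b)
```

Compared with standard preconditioned CG, the extra cost is one inner product and one vector update per iteration, which is why the per-iteration price of FCG stays below that of MINRES or GMRES in the comparisons that follow.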
Given that the cost of a single iteration is lower for FCG than for MINRES, and given that the memory consumption of FCG is significantly lower than that of GMRES, we conclude that FCG is the most effective method of the three, although not by a significant margin.

               FCG                        MINRES                     GMRES
    ℓ  DoFs         Time    it  it1  it2    Time    it  it1  it2    Time    it  it1  it2
    3  4,913        0.06    14  2.4  1.4    0.07    13  3.0  1.9    0.07    12  3.0  1.9
    4  35,937       0.70    14  2.7  1.7    0.72    13  3.6  1.9    0.78    13  3.5  2.1
    5  274,625      5.82    13  3.1  2.2    7.42    14  3.9  2.9    7.57    13  3.9  2.8
    6  2,146,689    61.78   15  3.3  2.1    60.36   12  4.2  2.8    78.06   14  4.2  2.9
    7  16,974,593   580.22  14  3.9  2.5    601.85  13  4.5  2.9    712.28  13  4.9  2.9

Table 4.2: Krylov solver test: time and iteration results using one iteration of the preconditioner with FCG, MINRES and GMRES. The viscosity for these tests is νm = 1e-2.

Let us now consider preconditioned CG with diagonal and upper/lower triangular preconditioners. The results are given in Table 4.3. From Theorem 5 and Proposition 2, we have shown that the Krylov subspaces for the diagonal and block triangular preconditioners are the same; thus we would expect the number of FCG iterations to converge to be identical. From the table, we can see that the block diagonal preconditioner performs slightly better with respect to the outer FCG iterations. We have shown that the preconditioner with the exact augmented term (Q = E^T U^{-1} E) yields two distinct eigenvalues for the preconditioned matrix, whereas the block triangular preconditioner yields three. This may explain the slight difference in performance.

4.5.2 Divergence and Non-Divergence Free Right-Hand Side

Let us now consider divergence and non-divergence free right-hand sides. The results are given in Table 4.4.
               M_P                        M_U                        M_L
    ℓ  DoFs         Time    it  it1  it2    Time    it  it1  it2    Time    it  it1  it2
    2  729          0.01    12  1.8  1.0    0.01    13  1.7  1.1    0.01    13  1.7  1.1
    3  4,913        0.07    14  2.3  1.4    0.05    13  2.2  1.5    0.05    13  2.1  1.6
    4  35,937       0.64    14  2.5  1.7    0.57    14  2.5  1.7    0.56    14  2.5  1.7
    5  274,625      5.95    14  2.7  1.7    6.11    16  2.6  1.6    5.41    14  2.6  1.8
    6  2,146,689    56.92   14  2.9  1.8    60.92   16  2.9  1.7    59.89   16  2.9  1.7
    7  16,974,593   574.34  14  3.3  2.3    589.14  16  3.2  2.2    569.86  16  3.2  2.1

Table 4.3: Block preconditioner test: time and iteration results using the block diagonal (M_P), block upper triangular (M_U) and block lower triangular (M_L) preconditioners with νm = 1e-2.

We observe that there is almost no difference in the iteration results between the divergence and non-divergence free right-hand-side solutions. This shows that even though our analysis requires a divergence-free right-hand side, in practice the solver is robust with respect to right-hand-side vectors that violate this condition. We do not have a theoretical justification for this; if the right-hand side is not divergence-free then the second block in (4.7) does not vanish and the construction of the Krylov subspace is significantly more involved. In that case, there is no easy way to discern algebraic structure and proceed with deriving results such as (4.5) in Theorem 5. However, in Section 4.3 we show that it is possible to decouple such saddle-point systems into two SPD linear systems without the divergence constraint being satisfied. Therefore, Theorem 5 only provides sufficient conditions, and we suspect that the divergence-free condition can be relaxed in practice.

               Divergence Free            Non-divergence Free
    ℓ  DoFs         Time    it  it1  it2    Time    it  it1  it2
    2  729          0.01    12  1.5  0.9    0.01    12  1.5  0.9
    3  4,913        0.06    14  2.3  1.4    0.06    14  2.3  1.4
    4  35,937       0.64    14  2.5  1.7    0.64    14  2.5  1.7
    5  274,625      7.41    14  2.7  1.7    7.41    14  2.7  1.7
    6  2,146,689    55.02   14  2.9  1.8    55.02   14  2.9  1.8
    7  16,974,593   544.91  14  3.3  2.3    644.91  14  3.3  2.6

Table 4.4: Divergence-free vs.
non-divergence-free right-hand side: time and iteration results using the block diagonal preconditioner, M_P, for divergence and non-divergence free right-hand sides with νm = 1e-2.

4.5.3 Variable Coefficients

We now let the magnetic viscosity be defined as:

    νm = 1/a     if x < 0.5 and y < 0.5 and z < 0.5,
         1/2a    if x > 0.5 and y < 0.5 and z < 0.5,
         1/3a    if x < 0.5 and y > 0.5 and z < 0.5,
         1/4a    if x > 0.5 and y > 0.5 and z < 0.5,
         1/5a    if x < 0.5 and y < 0.5 and z > 0.5,
         1/6a    if x > 0.5 and y < 0.5 and z > 0.5,
         1/7a    if x < 0.5 and y > 0.5 and z > 0.5,
         1/8a    otherwise,

where a is a constant. The results are shown in Tables 4.5 and 4.6. Table 4.5 shows iteration and timing results for the block diagonal preconditioner for various values of a. We can see from the table that as a increases, the number of outer FCG iterations increases, while the inner auxiliary space iterations remain relatively constant, with only a slight increase through the levels. For a = 10 or 100 the outer FCG iterations remain approximately constant, but for a = 1000 the outer iterations start to degrade slightly. Table 4.6 shows the iteration and timing comparisons between the block diagonal and triangular preconditioners for a = 100. Again, we see that the outer FCG iterations appear to be slightly better for the block diagonal preconditioner than for the triangular versions. We also note that solve times for the triangular versions are significantly higher, especially for the larger mesh levels.
This is due to the extra multiplications with E or E^T for the block triangular preconditioners. Thus, it is beneficial to use the block diagonal preconditioner.

               a = 10                    a = 100                    a = 1000
    ℓ  DoFs         Time    it  it1/it2    Time     it  it1/it2    Time     it   it1/it2
    2  729          0.01    14  1.9/0.9    0.02     32  1.4/0.9    0.06     71   0.9/0.6
    3  4,913        0.10    15  2.3/1.5    0.18     38  2.0/1.3    0.36     98   1.4/1.2
    4  35,937       0.71    15  2.8/1.7    1.51     39  2.2/1.6    3.23     110  1.8/1.5
    5  274,625      6.46    15  3.0/1.9    13.96    40  2.6/1.7    34.18    116  2.2/1.6
    6  2,146,689    63.63   15  3.8/1.9    132.51   40  2.9/1.8    328.45   116  2.4/1.8
    7  16,974,593   581.03  15  3.9/2.2    1306.93  40  3.3/2.3    3267.59  122  2.7/2.3

Table 4.5: Variable coefficients: time and iteration results using the block diagonal preconditioner, M, for various values of a.

               M                          M_U                        M_L
    ℓ  DoFs         Time     it  it1/it2    Time     it  it1/it2    Time     it  it1/it2
    2  729          0.02     32  1.4/0.9    0.01     32  1.3/0.8    0.02     34  1.3/0.9
    3  4,913        0.18     38  2.0/1.3    0.15     38  1.9/1.3    0.14     38  1.8/1.3
    4  35,937       1.51     39  2.2/1.6    1.43     39  2.3/1.6    1.43     40  2.1/1.6
    5  274,625      13.96    40  2.6/1.7    14.62    41  2.6/1.7    14.64    41  2.4/1.7
    6  2,146,689    132.51   40  2.9/1.8    147.63   41  3.0/1.8    150.59   43  2.9/1.8
    7  16,974,593   1306.93  40  3.3/2.3    1557.95  42  3.4/2.4    1572.68  44  3.2/2.3

Table 4.6: Variable coefficients: time and iteration results using the block diagonal and triangular preconditioners for a = 100.

4.5.4 Fichera Corner Problem

For our next numerical illustration, we consider the same exact solution as in (4.16); however, the domain is now a cube with a missing corner. That is, the domain is Ω = (−1, 1)^3 \ [0, 1) × [0, 1) × [0, 1), with local refinement in the corner. A visualization of this domain is given in Figure 4.2.

Figure 4.2: Fichera corner domain for mesh level ℓ = 1.

We investigate how the magnetic viscosity affects the iterations on such a domain. Table 4.7 presents the results. The table shows that as the magnetic viscosity decreases, the number of outer FCG iterations increases.
As with the previous example, the inner auxiliary space iterations appear to be scalable.

               νm = 1e-1                 νm = 1e-2                  νm = 1e-3
    ℓ  DoFs        Time    it  it1  it2    Time     it  it1  it2    Time     it  it1  it2
    1  14,636      0.41    13  3.8  2.0    0.73     31  3.7  2.3    1.69     80  3.9  2.6
    2  111,315     3.84    13  4.4  2.5    7.23     31  4.2  3.0    17.01    81  4.5  3.4
    3  869,397     45.74   13  5.9  2.7    85.37    32  5.2  3.3    181.89   72  5.0  3.6
    4  6,874,601   614.06  15  8.1  3.4    1027.89  31  6.4  3.7    2248.88  69  6.4  4.2

Table 4.7: Fichera corner: time and iteration results using the block diagonal preconditioner, M_P, for various values of νm.

4.5.5 Gear Domain

For our final experiment, we consider the same exact solution as in (4.16), however with a quasi-uniform three-dimensional gear as the domain. The domain is bounded by Ω = [−1, 1] × [−1, 1] × [0, −0.2]. A visualization of this domain is given in Figure 4.3.

Figure 4.3: Gear domain for mesh level ℓ = 5.

Table 4.8 shows iteration and timing results for constant coefficients. From the table we can see that as we decrease νm, the outer FCG iterations increase slightly but appear to remain constant with respect to the mesh size. For this example, the inner iterations start to increase slightly more with every mesh level.
However, the number of iterations is relatively small and still gives a fast solution procedure.

               νm = 1e-1                  νm = 1e-2                   νm = 1e-3
    ℓ  DoFs        Time     it  it1   it2    Time     it  it1   it2    Time     it  it1  it2
    6  16,680      0.23     8   2.2   1.4    0.41     19  2.6   1.5    0.83     42  2.3  1.4
    7  106,940     2.62     10  4.0   2.0    4.70     21  4.2   2.5    9.08     44  3.0  2.3
    8  774,871     69.85    10  8.8   2.5    77.25    17  5.2   3.2    173.12   46  4.4  3.7
    9  5,950,932   1387.91  10  17.2  4.5    1554.78  18  10.2  5.1    3227.96  48  7.8  6.0

Table 4.8: Gear domain: time and iteration results using the block diagonal preconditioner, M_P, for various values of νm.

Chapter 5

Conclusions and future work

5.1 Conclusions

We have derived two block preconditioning approaches for the MHD model (1.1)-(1.2). We have also shown that for a specific class of symmetric saddle-point problems, it is possible to use preconditioned CG as the Krylov subspace solver.

The block triangular preconditioner derived in Chapter 2 demonstrates good eigenvalue clustering, as shown in the spectral analysis of Section 2.3. We show numerical scalability for 2D problems; however, for 3D problems we observe a small increase in iterations per mesh level.

Our second block preconditioning approach is the block approximate inverse preconditioner given in Chapter 3. Using the results in [36], we derive a new inverse formula for the MHD model (1.1)-(1.2). Considering the orders of the discrete operators in the exact inverse, we form an effective sparse approximation. We show that for an ideal case the eigenvalues are very strongly clustered around 1. Developing robust solvers for this problem is a highly challenging task, and we believe that our approach shows promise in terms of its ability to strongly scale and to tackle large-scale 3D problems with high Hartmann numbers.

The final piece of work considered is the use of preconditioned CG for nonsingular saddle-point problems with a maximally rank-deficient leading block.
We show that it is possible to use the symmetric block diagonal or the nonsymmetric upper/lower block triangular preconditioners, provided they satisfy two algebraic conditions. Thus, for the mixed Maxwell problem in (1.5) we show that with the well-known preconditioners of [51, 72] it is possible to use preconditioned CG for this indefinite saddle-point system.

5.2 Future work

1. Throughout this thesis we have considered Taylor-Hood elements [101] for the velocity/pressure variables and the lowest-order Nédélec [81] pair for the magnetic/multiplier variables. This is by no means the only choice of finite element discretization for this model. For example, the authors in [49] use the formulation in [95], which leads to divergence-free Brezzi-Douglas-Marini (BDM) elements [19, 26] for the velocity. We leave the exploration of other finite element discretizations for the MHD model as a topic of future work.

2. The numerical examples focus solely on Dirichlet boundary conditions for the MHD model. To incorporate alternative, more complex boundary conditions (Neumann or Robin boundary conditions), it may be necessary to alter the Schur complement approximations in a fashion similar to [15] and [35, Chapter 9] to correctly capture these settings.

3. Unlike block triangular preconditioners, it is possible to solve for each block of the approximate inverse preconditioner independently of the others. In particular, we note that each block column of the approximate inverse preconditioner in (3.12) has some repetition in the sequence of systems which are solved. Therefore, one could solve each block column of (3.12) in parallel. This may greatly reduce the computational cost of applying the inverse preconditioner to 1 or 2 vector solves per block column, which is comparable to the block triangular preconditioners.

4. During the "sparsification" of the inverse formula for the MHD model in Chapter 3, we drop block matrices based on the mesh order.
However, depending on the magnitude of the non-dimensional parameters (ν, νm or κ), a different "sparsification" may be used. Thus, it would be possible to design a preconditioning approach based on the non-dimensional parameter setup for a specific problem, in addition to a "black-box" approach.

5. For the Maxwell example in Chapter 4, we solve the preconditioner to a tolerance. It may be possible to reduce the overall computational time by loosening the linear solver's tolerance and applying an inexact solution procedure.

Bibliography

[1] D. J. Acheson. Elementary fluid dynamics. Oxford Applied Mathematics and Computing Science Series. The Clarendon Press, Oxford University Press, New York, 1990.

[2] J. H. Adler, T. R. Benson, E. C. Cyr, S. P. MacLachlan, and R. S. Tuminaro. Monolithic multigrid methods for two-dimensional resistive magnetohydrodynamics. SIAM Journal on Scientific Computing, 38(1):B1-B24, 2016.

[3] G. Alzetta, D. Arndt, W. Bangerth, V. Boddu, B. Brands, D. Davydov, R. Gassmoeller, T. Heister, L. Heltai, K. Kormann, M. Kronbichler, M. Maier, J.-P. Pelteret, B. Turcksin, and D. Wells. The deal.II library, version 9.0. Journal of Numerical Mathematics, 2018, accepted.

[4] P. R. Amestoy, I. S. Duff, J. Koster, and J.-Y. L'Excellent. A fully asynchronous multifrontal solver using distributed dynamic scheduling. SIAM Journal on Matrix Analysis and Applications, 23(1):15-41, 2001.

[5] P. R. Amestoy, A. Guermouche, J.-Y. L'Excellent, and S. Pralet. Hybrid scheduling for the parallel solution of linear systems. Parallel Computing, 32(2):136-156, 2006.

[6] F. Armero and J. C. Simo. Long-term dissipativity of time-stepping algorithms for an abstract evolution equation with applications to the incompressible MHD and Navier-Stokes equations. Computer Methods in Applied Mechanics and Engineering, 131(1):41-90, 1996.

[7] S. Balay, M. F. Adams, J. Brown, P. Brune, K. Buschelman, V. Eijkhout, W. D. Gropp, D. Kaushik, M. G. Knepley, L. C. McInnes, K. Rupp, B. F. Smith, and H. Zhang.
PETSc User’s Manual. Technical Report ANL-95/11 - Revision 3.4, Argonne National Laboratory, 2013.
[8] S. Balay, W. D. Gropp, L. C. McInnes, and B. F. Smith. Efficient management of parallelism in object oriented numerical software libraries. In E. Arge, A. M. Bruaset, and H. P. Langtangen, editors, Modern Software Tools in Scientific Computing, pages 163–202. Birkhäuser Press, 1997.
[9] S. Balay, M. F. Adams, J. Brown, P. Brune, K. Buschelman, V. Eijkhout, W. D. Gropp, D. Kaushik, M. G. Knepley, L. C. McInnes, K. Rupp, B. F. Smith, and H. Zhang. PETSc Web page. http://www.mcs.anl.gov/petsc, 2014.
[10] M. Benzi, G. Golub, and J. Liesen. Numerical solution of saddle point problems. Acta Numerica, 14:1–137, 2005.
[11] L. Bergamaschi, M. Ferronato, and G. Gambolati. Mixed Constraint Preconditioners for the iterative solution of FE coupled consolidation equations. Journal of Computational Physics, 227(23):9885–9897, 2008.
[12] L. Bergamaschi, J. Gondzio, M. Venturin, and G. Zilli. Inexact constraint preconditioners for linear systems arising in interior point methods. Computational Optimization and Applications, 36:137–147, 2007.
[13] M. Blatt, A. Burchardt, A. Dedner, C. Engwer, J. Fahlke, B. Flemisch, C. Gersbacher, C. Gräser, F. Gruber, C. Grüninger, D. Kempf, R. Klöfkorn, T. Malkmus, S. Müthing, M. Nolte, M. Piatkowski, and O. Sander. The Distributed and Unified Numerics Environment, Version 2.4. Archive of Numerical Software, 4(100):13–29, 2016.
[14] P. B. Bochev, C. J. Garasi, J. J. Hu, A. C. Robinson, and R. S. Tuminaro. An improved algebraic multigrid method for solving Maxwell’s equations. SIAM Journal on Scientific Computing, 25(2):623–642, 2003.
[15] N. Bootland, A. Bentley, C. Kees, and A. Wathen. Preconditioners for Two-Phase Incompressible Navier-Stokes Flow. arXiv preprint arXiv:1710.08779, 2017.
[16] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[17] D. Braess.
Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics. Cambridge University Press, third edition, 2007.
[18] J. H. Bramble and J. E. Pasciak. A Preconditioning Technique for Indefinite Systems Resulting from Mixed Approximations of Elliptic Problems. Mathematics of Computation, 50(181):1–17, 1988.
[19] F. Brezzi and M. Fortin. Mixed and Hybrid Finite Element Methods. Springer-Verlag New York, Inc., New York, NY, USA, 1991.
[20] L. Chacón. An optimal, parallel, fully implicit Newton–Krylov solver for three-dimensional viscoresistive magnetohydrodynamics. Physics of Plasmas, 15(5):056103, 2008.
[21] L. Chacón. Scalable parallel implicit solvers for 3D magnetohydrodynamics. Journal of Physics: Conference Series, 125(1):012041, 2008.
[22] L. Chacón, D. A. Knoll, and J. M. Finn. An Implicit, Nonlinear Reduced Resistive MHD Solver. Journal of Computational Physics, 178(1):15–36, 2002.
[23] Y. Chen, T. A. Davis, W. W. Hager, and S. Rajamanickam. Algorithm 887: CHOLMOD, Supernodal Sparse Cholesky Factorization and Update/Downdate. ACM Transactions on Mathematical Software, 35(3):22:1–22:14, October 2008.
[24] Z. Chen, Q. Du, and J. Zou. Finite element methods with matching and non-matching meshes for Maxwell equations with discontinuous coefficients. SIAM Journal on Numerical Analysis, 37(5):1542–1570, 2000.
[25] Clawpack Development Team. Clawpack software, 2018. Version 5.5.0.
[26] B. Cockburn, G. Kanschat, and D. Schötzau. A note on discontinuous Galerkin divergence-free solutions of the Navier-Stokes equations. Journal of Scientific Computing, 31(1-2):61–73, 2007.
[27] R. Cockett, S. Kang, L. J. Heagy, A. Pidlisecky, and D. W. Oldenburg. SimPEG: An open source framework for simulation and gradient based parameter estimation in geophysical applications. Computers & Geosciences, 85:142–154, 2015.
[28] M. Costabel and M. Dauge. Singularities of electromagnetic fields in polyhedral domains. Archive for Rational Mechanics and Analysis, 151(3):221–276, 2000.
[29] E. C.
Cyr, J. N. Shadid, R. S. Tuminaro, R. P. Pawlowski, and L. Chacón. A new approximate block factorization preconditioner for two-dimensional incompressible (reduced) resistive MHD. SIAM Journal on Scientific Computing, 35(3):B701–B730, 2013.
[30] L. D. Dalcin, R. R. Paz, P. A. Kler, and A. Cosimo. Parallel distributed computing using Python. Advances in Water Resources, 34(9):1124–1139, 2011. New Computational Methods and Software Tools.
[31] T. A. Davis. A column pre-ordering strategy for the unsymmetric-pattern multifrontal method. ACM Transactions on Mathematical Software, 30(2):165–195, June 2004.
[32] T. A. Davis and E. Palamadai Natarajan. Algorithm 907: KLU, A Direct Sparse Solver for Circuit Simulation Problems. ACM Transactions on Mathematical Software, 37(3):36:1–36:17, September 2010.
[33] L. Demkowicz and L. Vardapetyan. Modelling of electromagnetic absorption/scattering problems using hp-adaptive finite elements. Computer Methods in Applied Mechanics and Engineering, 152(1):103–124, 1998.
[34] C. Durazzi and V. Ruggiero. Indefinitely preconditioned conjugate gradient method for large sparse equality and inequality constrained quadratic problems. Numerical Linear Algebra with Applications, 10(8):673–688, 2003.
[35] H. C. Elman, D. J. Silvester, and A. J. Wathen. Finite Elements and Fast Iterative Solvers: with Applications in Incompressible Fluid Dynamics. Oxford University Press, second edition, 2014.
[36] R. Estrin and C. Greif. On Nonsingular Saddle-point Systems with a Maximally Rank Deficient Leading Block. SIAM Journal on Matrix Analysis and Applications, 36(2):367–384, 2015.
[37] R. Falgout, A. Barker, H. Gahvari, T. Kolev, R. Li, D. Osei-Kuffuor, J. Schroder, P. Vassilevski, L. Wang, and U. M. Yang. hypre: High Performance Preconditioners. Lawrence Livermore National Laboratory. http://www.llnl.gov/CASC/hypre/.
[38] R. D. Falgout and U. Yang. hypre: A library of high performance preconditioners.
In Computational Science — ICCS 2002, volume 2331 of Lecture Notes in Computer Science, pages 632–641. Springer Berlin Heidelberg, 2002.
[39] R. Fletcher. Conjugate gradient methods for indefinite systems. In G. Alistair Watson, editor, Numerical Analysis, volume 506 of Lecture Notes in Mathematics, pages 73–89. Springer Berlin Heidelberg, 1976.
[40] J. D. Foley. Constructive Solid Geometry. Computer Graphics: Principles and Practice, Addison-Wesley Professional, pages 557–558, 1996.
[41] R. W. Freund and N. M. Nachtigal. QMR: a quasi-minimal residual method for non-Hermitian linear systems. Numerische Mathematik, 60(3):315–339, 1991.
[42] R. Eymard, T. Gallouët, and R. Herbin. Finite volume methods. Handbook of Numerical Analysis, 7:713–1018, 2000.
[43] M. W. Gee, C. M. Siefert, J. J. Hu, R. S. Tuminaro, and M. G. Sala. ML 5.0 Smoothed Aggregation User’s Guide. Technical Report SAND2006-2649, Sandia National Laboratories, 2006.
[44] J.-F. Gerbeau. A stabilized finite element method for the incompressible magnetohydrodynamic equations. Numerische Mathematik, 87(1):83–111, 2000.
[45] J.-F. Gerbeau, C. Le Bris, and T. Lelièvre. Mathematical Methods for the Magnetohydrodynamics of Liquid Metals. Oxford University Press, 2006.
[46] N. Gould, M. Hribar, and J. Nocedal. On the solution of equality constrained quadratic programming problems arising in optimization. SIAM Journal on Scientific Computing, 23(4):1376–1395, 2001.
[47] N. Gould, D. Orban, and T. Rees. Projected Krylov methods for saddle-point systems. SIAM Journal on Matrix Analysis and Applications, 35(4):1329–1343, 2014.
[48] A. Greenbaum. Iterative methods for solving linear systems, volume 17 of Frontiers in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
[49] C. Greif, D. Li, D. Schötzau, and X. Wei.
A mixed finite element method with exactly divergence-free velocities for incompressible magnetohydrodynamics. Computer Methods in Applied Mechanics and Engineering, 199(45-48):2840–2855, 2010.
[50] C. Greif and D. Schötzau. Preconditioners for saddle point linear systems with highly singular (1, 1) blocks. Electronic Transactions on Numerical Analysis, Special Volume on Saddle Point Problems, 22:114–121, 2006.
[51] C. Greif and D. Schötzau. Preconditioners for the discretized time-harmonic Maxwell equations in mixed form. Numerical Linear Algebra with Applications, 14(4):281–297, 2007.
[52] G. Guennebaud, B. Jacob, et al. Eigen v3. http://eigen.tuxfamily.org, 2010.
[53] M. D. Gunzburger, A. J. Meir, and J. S. Peterson. On the existence, uniqueness, and finite element approximation of solutions of the equations of stationary, incompressible magnetohydrodynamics. Mathematics of Computation, 56(194):523–563, 1991.
[54] F. Hecht. New development in FreeFem++. Journal of Numerical Mathematics, 20(3-4):251–265, 2012.
[55] P. Hénon, P. Ramet, and J. Roman. PaStiX: A Parallel Sparse Direct Solver Based on a Static Scheduling for Mixed 1D/2D Block Distributions. In Proceedings of the 15 IPDPS 2000 Workshops on Parallel and Distributed Processing, IPDPS ’00, pages 519–527, London, UK, 2000. Springer-Verlag.
[56] M. Heroux, R. Bartlett, V. Howle, R. Hoekstra, J. Hu, T. Kolda, R. Lehoucq, K. Long, R. Pawlowski, E. Phipps, A. Salinger, H. Thornquist, R. Tuminaro, J. Willenbring, and A. Williams. An Overview of Trilinos. Technical Report SAND2003-2927, Sandia National Laboratories, 2003.
[57] M. A. Heroux and J. M. Willenbring. Trilinos Users Guide. Technical Report SAND2003-2952, Sandia National Laboratories, 2003.
[58] M. R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 49:409–436, 1952.
[59] R. Hiptmair. Multigrid method for Maxwell’s equations.
SIAM Journal on Numerical Analysis, 36(1):204–225, 1999.
[60] R. Hiptmair. Finite elements in computational electromagnetism. Acta Numerica, 11:237–339, 2002.
[61] R. Hiptmair and J. Xu. Nodal auxiliary space preconditioning in H(curl) and H(div) spaces. SIAM Journal on Numerical Analysis, 45(6):2483–2509, 2007.
[62] I. Ipsen. A note on preconditioning nonsymmetric matrices. SIAM Journal on Scientific Computing, 23(3):1050–1051, 2001.
[63] J. Jones and B. Lee. A multigrid method for variable coefficient Maxwell’s equations. SIAM Journal on Scientific Computing, 27(5):1689–1708, 2006.
[64] G. Karypis and K. Kumar. A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs. SIAM Journal of Scientific Computing, 20(1):359–392, December 1998.
[65] B. Kehlet. Mshr - Mesh Generation in FEniCS. Talk at FEniCS’14 workshop in Paris, June 2014.
[66] R. Keppens, G. Tóth, M. A. Botchev, and A. van der Ploeg. Implicit and semi-implicit schemes: algorithms. International Journal for Numerical Methods in Fluids, 30(3):335–352, 1999.
[67] T. V. Kolev and P. Vassilevski. Parallel auxiliary space AMG for H(curl) problems. Journal of Computational Mathematics, 27(5):604–623, 2009.
[68] J. Korzak. Eigenvalue relations and conditions of matrices arising in linear programming. Computing. Archives for Scientific Computing, 62(1):45–54, 1999.
[69] P. Krzyzanowski. On block preconditioners for saddle point problems with singular or indefinite (1, 1) block. Numerical Linear Algebra with Applications, 18:123–140, 2011.
[70] R. J. LeVeque. Finite volume methods for hyperbolic problems. Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge, 2002.
[71] D. Li. Numerical Solution of the Time-Harmonic Maxwell equations and Incompressible Magnetohydrodynamics Problems. PhD thesis, University of British Columbia, 2010.
[72] D. Li, C. Greif, and D. Schötzau. Parallel numerical solution of the time-harmonic Maxwell equations in mixed form.
Numerical Linear Algebra with Applications, 19(3):525–539, 2012.
[73] X. S. Li. An overview of SuperLU: Algorithms, implementation, and user interface. ACM Transactions on Mathematical Software, 31(3):302–325, 2005.
[74] X. S. Li, J. W. Demmel, J. R. Gilbert, L. Grigori, M. Shao, and I. Yamazaki. SuperLU Users’ Guide. Technical Report LBNL-44289, Lawrence Berkeley National Laboratory, 1999.
[75] J. Liesen and B. N. Parlett. On nonsymmetric saddle point matrices that allow conjugate gradient iterations. Numerische Mathematik, 108(4):605–624, 2008.
[76] P. T. Lin, J. N. Shadid, R. S. Tuminaro, M. Sala, G. L. Hennigan, and R. P. Pawlowski. A parallel fully coupled algebraic multilevel preconditioner applied to multiphysics PDE applications: drift-diffusion, flow/transport/reaction, resistive MHD. International Journal for Numerical Methods in Fluids, 64(10-12):1148–1179, 2010.
[77] A. Logg, K. A. Mardal, and G. N. Wells, editors. Automated solution of differential equations by the finite element method, volume 84 of Lecture Notes in Computational Science and Engineering. Springer, Heidelberg, 2012.
[78] P. Monk. Finite Element Methods for Maxwell’s Equations. Oxford University Press, 2003.
[79] K. W. Morton and D. F. Mayers. Numerical solution of partial differential equations. Cambridge University Press, Cambridge, second edition, 2005. An introduction.
[80] M. F. Murphy, G. H. Golub, and A. J. Wathen. A note on preconditioning for indefinite linear systems. SIAM Journal on Scientific Computing, 21(6):1969–1972, 2000.
[81] J. C. Nédélec. Mixed finite elements in R3. Numerische Mathematik, 35(3):315–341, 1980.
[82] Y. Notay. Flexible Conjugate Gradients. SIAM Journal on Scientific Computing, 22(4):1444–1460, 2000.
[83] C. C. Paige and M. A. Saunders. Solution of sparse indefinite systems of linear equations. SIAM Journal on Numerical Analysis, 12(4):617–629, 1975.
[84] I. Perugia, D. Schötzau, and P. Monk. Stabilized interior penalty methods for the time-harmonic Maxwell equations.
Computer Methods in Applied Mechanics and Engineering, 191(41-42):4675–4697, 2002.
[85] E. G. Phillips, H. C. Elman, E. C. Cyr, J. N. Shadid, and R. P. Pawlowski. A block preconditioner for an exact penalty formulation for stationary MHD. SIAM Journal on Scientific Computing, 36(6):B930–B951, 2014.
[86] E. G. Phillips, H. C. Elman, E. C. Cyr, J. N. Shadid, and R. P. Pawlowski. Block Preconditioners for Stable Mixed Nodal and Edge finite element Representations of Incompressible Resistive MHD. SIAM Journal on Scientific Computing, 38(6):B1009–B1031, 2016.
[87] C. E. Powell and D. Silvester. Optimal preconditioning for Raviart-Thomas mixed formulation of second-order elliptic problems. SIAM Journal on Matrix Analysis and Applications, 25(3):718–738, 2003.
[88] S. Reitzinger and J. Schöberl. An algebraic multigrid method for finite element discretizations with edge elements. Numerical Linear Algebra with Applications, 9(3):223–238, 2002.
[89] M. Rivara. Mesh Refinement Processes Based on the Generalized Bisection of Simplices. SIAM Journal on Numerical Analysis, 21(3):604–613, 1984.
[90] P. H. Roberts. An Introduction to Magnetohydrodynamics. Longmans, London, 1967.
[91] M. Rozlozník and V. Simoncini. Krylov Subspace Methods for Saddle Point Problems with Indefinite Preconditioning. SIAM Journal on Matrix Analysis and Applications, 24(2):368–391, 2002.
[92] Y. Saad. A flexible inner-outer preconditioned GMRES algorithm. SIAM Journal on Scientific Computing, 14(2):461–469, 1993.
[93] Y. Saad. Iterative Methods for Sparse Linear Systems. SIAM, Philadelphia, PA, second edition, 2003.
[94] Y. Saad and M. H. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM Journal on Scientific and Statistical Computing, 7(3):856–869, 1986.
[95] D. Schötzau. Mixed finite element methods for stationary incompressible magneto-hydrodynamics. Numerische Mathematik, 96(4):771–800, 2004.
[96] J. N. Shadid, R. P. Pawlowski, J. W. Banks, L. Chacón, P. T. Lin, and R. S.
Tuminaro. Towards a scalable fully-implicit fully-coupled resistive MHD formulation with stabilized FE methods. Journal of Computational Physics, 229(20):7649–7671, 2010.
[97] J. N. Shadid, R. P. Pawlowski, E. C. Cyr, R. S. Tuminaro, L. Chacón, and P. D. Weber. Scalable implicit incompressible resistive MHD with stabilized FE and fully-coupled Newton-Krylov-AMG. Computer Methods in Applied Mechanics and Engineering, 304:1–25, 2016.
[98] H. Si. TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator. ACM Transactions on Mathematical Software, 41(2):11:1–11:36, February 2015.
[99] G. Strang and G. Fix. An analysis of the finite element method. Wellesley-Cambridge Press, Wellesley, MA, second edition, 2008.
[100] J. C. Strikwerda. Finite difference schemes and partial differential equations. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edition, 2004.
[101] C. Taylor and P. Hood. A numerical solution of the Navier-Stokes equations using the finite element technique. Computers and Fluids, 1(1):73–100, 1973.
[102] The CGAL Project. CGAL User and Reference Manual. CGAL Editorial Board, 4.12.1 edition, 2018.
[103] M. ur Rehman, C. Vuik, and G. Segal. SIMPLE-type preconditioners for the Oseen problem. International Journal for Numerical Methods in Fluids, 61(4):432–452, 2009.
[104] H. A. Van der Vorst. Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM Journal on Scientific and Statistical Computing, 13(2):631–644, 1992.
[105] S. P. Vanka. Block-implicit multigrid solution of Navier-Stokes equations in primitive variables. Journal of Computational Physics, 65(1):138–158, 1986.
[106] L. Vardapetyan and L. Demkowicz. hp-adaptive finite elements in electromagnetics. Computer Methods in Applied Mechanics and Engineering, 169(3-4):331–344, 1999.
[107] A. J. Wathen. Realistic eigenvalue bounds for the Galerkin mass matrix. IMA Journal of Numerical Analysis, 7(4):449–457, 1987.
[108] A. J. Wathen.
Preconditioning. Acta Numerica, 24:329–376, 2015.
[109] M. Wathen. Iterative Solution of a Mixed Finite Element Discretisation of an Incompressible Magnetohydrodynamics Problem. Master’s thesis, University of British Columbia, 2014.
[110] M. Wathen, C. Greif, and D. Schötzau. Preconditioners for Mixed Finite Element Discretizations of Incompressible MHD Equations. SIAM Journal on Scientific Computing, 39(6):A2993–A3013, 2017.
[111] X. Wei. Mixed discontinuous Galerkin finite element methods for incompressible magnetohydrodynamics. PhD thesis, University of British Columbia, 2011.
[112] S. L. Xiaoye and J. W. Demmel. SuperLU_DIST: A Scalable Distributed-Memory Sparse Direct Solver for Unsymmetric Linear Systems. ACM Transactions on Mathematical Software, 29(2):110–140, June 2003.
[113] J. Xu. The auxiliary space method and optimal multigrid preconditioning techniques for unstructured grids. Computing, 56(3):215–235, 1996.
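The conclusions above note that a symmetric block diagonal preconditioner allows a short-recurrence Krylov method (preconditioned CG or MINRES) to be applied to an indefinite saddle-point system. As a minimal self-contained illustration of that principle (this is not code from the thesis; the toy matrices, sizes, and tolerances are invented for the sketch), the following Python/SciPy example applies the ideal block diagonal preconditioner diag(A, BA⁻¹Bᵀ) in the spirit of Murphy, Golub and Wathen [80] to a small saddle-point system and counts MINRES iterations:

```python
# Illustrative sketch only: block-diagonal preconditioning of a toy
# saddle-point system K = [[A, B^T], [B, 0]] with MINRES, in the spirit
# of the Murphy-Golub-Wathen preconditioner [80]. All data is invented.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 100, 40
rng = np.random.default_rng(0)

# SPD (1,1) block (a 1D discrete Laplacian) and a full-rank constraint block.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
B = (sp.random(m, n, density=0.2, random_state=0) + sp.eye(m, n)).tocsc()

K = sp.bmat([[A, B.T], [B, None]], format="csc")  # symmetric indefinite

# Exact Schur complement S = B A^{-1} B^T; dense is affordable at toy sizes.
lu_A = spla.splu(A)
S = B @ lu_A.solve(B.T.toarray())

# Applying P^{-1} = diag(A, S)^{-1} costs one solve with A and one with S.
def apply_Pinv(x):
    return np.concatenate([lu_A.solve(x[:n]), np.linalg.solve(S, x[n:])])

P = spla.LinearOperator(K.shape, matvec=apply_Pinv)

b = rng.standard_normal(n + m)
its = []
x, info = spla.minres(K, b, M=P, maxiter=200, callback=lambda xk: its.append(1))
print(info, len(its))  # the iteration count stays very small
```

With the exact Schur complement, the preconditioned matrix has only three distinct eigenvalues, so MINRES converges in a handful of iterations; practical preconditioners of this family replace the actions of A⁻¹ and S⁻¹ with multigrid or other inexpensive approximations.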
