Stability Analysis and Stabilization of Unstructured Finite Volume Method (Chen, Liang, 2016)

Stability Analysis and Stabilization of Unstructured Finite Volume Method

by Liang Chen

BEng, Northwestern Polytechnical University (Xi'an, China), 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Mechanical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2016

© Liang Chen 2016

Abstract

In this thesis, we develop a stability analysis model for the unstructured finite volume method. This model employs the matrix method to implement stability analysis. For the full discretization, where the temporal discretization employs backward Euler time-stepping, a linearization is used to construct the model. Both for the full discretization and the semi-discretization, the stability condition is expressed in terms of eigenvalues. The validity of the stability analysis model is verified for both linear and nonlinear cases. The analysis in this thesis also explains the phenomenon that the defined energy can locally increase in the energy stability analysis method. This model can be applied to both linear and nonlinear problems; in this thesis, we focus on the 2D Euler equations.

We also develop a stabilization methodology, aimed at optimizing the eigenvalues of the Jacobian matrix by changing the coordinates of the interior vertices of a mesh. Specifically, for an unstable spatial discretization, we can shift the unstable eigenvalues into the stability region by changing the mesh. The stabilization methodology is verified by numerical cases as well. The success of this stabilization relies on a newly developed method to change the eigenvalues of a matrix in a quantitative and controllable way. This method is a general approach to optimizing the eigenvalues of a matrix, which means it can be applied to other systems as well.

Preface

The topic of the research in this thesis was chosen by Dr.
Carl Ollivier-Gooch. The research for this thesis was implemented by the author, Liang Chen, under the supervision of Dr. Carl Ollivier-Gooch. The manuscript preparation was done by Liang Chen with guidance from Dr. Carl Ollivier-Gooch.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Symbols
Acknowledgements
Dedication
1 Introduction
  1.1 Preliminary
  1.2 Stability Analysis Methods
    1.2.1 The Von Neumann Method
    1.2.2 The Energy Method
    1.2.3 The Matrix Method
    1.2.4 Remarks
  1.3 Literature Review
  1.4 Motivation and Outline
2 Essential Background and Stability Analysis
  2.1 Governing Equations and Numerical Methods
    2.1.1 Physical Governing Equations
    2.1.2 Numerical Discretization For Spatial Terms
    2.1.3 Temporal Discretization and Time Stepping Methods
    2.1.4 Summary of Numerical Methods
  2.2 Stability Analysis Methodology
    2.2.1 Mathematical Model to Implement Stability Analysis at Fixed Point
    2.2.2 Eigenanalysis of the Model
  2.3 Stability of the Method of Lines
3 Numerical Validation for Stability Methodology
  3.1 Numerical Testing Methodology
  3.2 Precision Estimation
  3.3 Test Case No. 1 — Poisson's Equation
    3.3.1 Numerical Results and Data Analysis
    3.3.2 Summary
  3.4 Test Case No. 2 — 2D Advection-Diffusion Problem
    3.4.1 Numerical Results and Data Analysis
    3.4.2 Summary
  3.5 Test Case No. 3 — One Stable 2D Euler Equations
    3.5.1 Numerical Results and Data Analysis
    3.5.2 Summary
  3.6 Test Case No. 4 — Another Stable 2D Euler Equations
    3.6.1 Numerical Results and Data Analysis
    3.6.2 Summary
  3.7 Test Case No. 5 — Unstable 2D Euler Equations
    3.7.1 Numerical Results and Data Analysis
    3.7.2 Summary
  3.8 Summary of Stability Analysis
4 Stabilization at Fixed Point
  4.1 Finding the Derivative of the Eigenvalue
  4.2 Prediction on the Difference of Eigenvalue under the Difference of a Matrix
  4.3 Numerical Verification of the Eigenvalue Difference Calculation Method
    4.3.1 Procedure to Implement Tests
    4.3.2 Numerical Results
  4.4 Stabilization Methodology
    4.4.1 Procedure of Systematic Stabilization
  4.5 Technical Results of Stabilization
    4.5.1 Test Case No. 1
    4.5.2 Test Case No. 2
  4.6 The Region That Causes the Instability
  4.7 Summary
5 Stabilization at Non-fixed Point
  5.1 Test Case No. 1 — the Unstable 2D Euler Case with Mach=1.2, Angle=0.0 Free Stream
  5.2 Test Case No. 2 — the Unstable 2D Euler Case with Mach=1.3, Angle=0.0 Free Stream
  5.3 Summary and Further Discussion
6 Conclusions and Future Work
  6.1 Conclusions
  6.2 Future Work for Stability and Stabilization
  6.3 Open Questions: Spectral Analysis and Optimization — Beyond Stability and Stabilization
Bibliography
Appendices
A The Mappings for the Solution Update and the Flux Integral
  A.1 The Mapping for the Solution Update
  A.2 The Mapping for the Flux Integral
B Angle Error Estimation

List of Tables

4.1 Relative error of the prediction on the rightmost eigenvalue change for both methods under the movement of vertex 683
4.2 Relative error of the prediction on the rightmost eigenvalue change for both methods under the movement of vertex 1417
4.3 Time cost for mesh optimization of the 2D Euler case with Mach=1.2

List of Figures

1.1 1D uniform mesh with K = 10
3.1 Mesh for solving Poisson's equation
3.2 Solution to Poisson's equation
3.3 First twenty largest eigenvalues for Poisson's equation case
3.4 Convergence history for Poisson's equation
3.5 Amplification factors for Poisson's equation
3.6 The difference between the actual amplification factor and the norm of the largest eigenvalue (∆γ) for the Poisson problem
3.7 The difference between the actual angle and 0 or π (∆θ) for the Poisson problem
3.8 Comparison among computational vectors and eigenvector for Poisson's equation case
3.9 Mesh for the advection-diffusion problem
3.10 Solution to the advection-diffusion problem
3.11 First twenty largest eigenvalues for the advection-diffusion problem
3.12 Convergence history for the advection-diffusion problem
3.13 Amplification factors of two successive iterations for the advection-diffusion problem
3.14 A closeup of Figure (3.13)
3.15 The average amplification factors of 34 iterations for the advection-diffusion problem
3.16 Full image of the mesh for solving the two dimensional Euler case with Mach=0.75, angle=0.5
3.17 Zoom of the mesh around the airfoil for solving the two dimensional Euler case with Mach=0.75, angle=0.5
3.18 Mach field for the two dimensional Euler equations with Mach=0.75, angle=0.5 free stream
3.19 Zoom of the Mach field around the airfoil for the two dimensional Euler equations with Mach=0.75, angle=0.5 free stream
3.20 The first twenty largest eigenvalues for the two dimensional Euler case with Mach=0.75, angle=0.5
3.21 Convergence history for the two dimensional Euler case with Mach=0.75, angle=0.5
3.22 Amplification factor comparison for the two dimensional Euler case with Mach=0.75, angle=0.0
3.23 The difference between the amplification factor and the norm of the largest eigenvalue (∆γ) for the two dimensional Euler case with Mach=0.75, angle=0.0
3.24 The difference between the actual angle and 0 or π (∆θ) for the two dimensional Euler case with Mach=0.75, angle=0.5
3.25 Comparison for computational vectors and eigenvector for the 2D Euler case with Mach=0.75, angle=0.5 (density component, rotated for better visualization)
3.26 Mach field of the 2D Euler case with Mach=1.05, angle=0.5 free stream
3.27 Zoom of the Mach field around the airfoil of the 2D Euler case with Mach=1.05, angle=0.5 free stream
3.28 First twenty largest eigenvalues for the 2D Euler case with Mach=1.05, angle=0.5
3.29 The convergence history for the 2D Euler equations with Mach=1.05, angle=0.5
3.30 The amplification factors of two successive iterations for the 2D Euler case with Mach=1.05, angle=0.0
3.31 The average amplification factors for the two dimensional Euler case with Mach=1.05, angle=0.5
3.32 Mach field for the 2D Euler case with Mach=1.2, angle=0.0 free stream
3.33 Zoom of the Mach field around the airfoil of the 2D Euler case with Mach=1.2, angle=0.0 free stream
3.34 First twenty largest eigenvalues for the 2D Euler equations with Mach=1.2, angle=0.0 free stream
3.35 Zoom of the first two eigenvalues for the two dimensional Euler equations with Mach=1.2, angle=0.0 free stream
3.36 First twenty rightmost eigenvalues of the spatial Jacobian matrix for the two dimensional Euler case with Mach=1.2, angle=0.0 free stream
3.37 The convergence history for the 2D Euler case with Mach=1.2, angle=0.0
3.38 The amplification factor for the 2D Euler case with Mach=1.2, angle=0.0
3.39 The difference between actual amplification factors and the norm of the largest eigenvalue (∆γ) for one unstable Euler equations case with Mach=1.2, angle=0.0
3.40 The difference between the actual angle and 0 or π (∆θ) for one unstable Euler equations case with Mach=1.2, angle=0.0
3.41 Comparison among the computational vectors and the eigenvector associated with the largest eigenvalue (density component) for the 2D Euler case with Mach=1.2, angle=0.0
4.1 The perturbed vertices
4.2 Change of eigenvalue when moving vertex 683 in the x-direction for the 2D Euler case with Mach=1.2, angle=0.0 free stream
4.3 Change of eigenvalue when moving vertex 1417 in the x-direction for the 2D Euler case with Mach=1.2, angle=0.0 free stream
4.4 The derivative of the rightmost eigenvalue's real part with respect to the normalized x-coordinate, dRe(λ_0)/dx_s, for the 2D Euler case with Mach=1.2, angle=0.0 free stream
4.5 The eigenvector associated with the rightmost eigenvalue for the 2D Euler case with Mach=1.2, angle=0.0 free stream
4.6 Mesh comparison for the stabilization for the 2D Euler case with Mach=1.2 and angle=0.0 (The red line is for the new mesh, and the black line is for the original mesh.)
4.7 Zoom of Figure (4.6) near the airfoil
4.8 The new mesh arising from stabilization for the 2D Euler case with Mach=1.2 and angle=0.0
4.9 Convergence history comparison for the 2D Euler case associated with different meshes (Mach=1.2, angle=0.0)
4.10 First twenty rightmost eigenvalues for the 2D Euler case associated with the new mesh (Mach=1.2, angle=0.0)
4.11 Mach field for the 2D Euler case with Mach=1.3, angle=0.0 free stream
4.12 First twenty rightmost eigenvalues for the 2D Euler case associated with the original mesh (Mach=1.3, angle=0.0)
4.13 The derivative of the 14th rightmost eigenvalue's real part with respect to the normalized x-coordinate, dRe(λ_0)/dx_s
4.14 Mesh comparison for the stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream (The black line is for the original mesh, and the red line is for the new mesh.)
4.15 The new mesh after stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream
4.16 Rightmost eigenvalues for the 2D Euler case associated with the new mesh (Mach=1.3, angle=0.0)
4.17 Convergence history comparison for the 2D Euler case for different meshes (Mach=1.3, angle=0.0)
4.18 The nonzero pattern comparison among the derivatives and eigenvectors for the 2D Euler case with Mach=1.2, angle=0.0 free stream
4.19 The coordinates' movement to stabilize the unstable 2D Euler case with Mach=1.2, angle=0.0 free stream
4.20 Comparison among the derivatives of eigenvalues and the eigenvectors for the 2D Euler case with Mach=1.3, angle=0.0 free stream
4.21 The coordinates' movement to stabilize the unstable 2D Euler case with Mach=1.3, angle=0.0 free stream
5.1 First twenty rightmost eigenvalues of the Jacobian matrix of the fourth order scheme at the initial solution for the 2D Euler case with Mach=1.2, angle=0.0 free stream
5.2 Eigenvectors and the derivatives for the 2D Euler case with Mach=1.2, angle=0.0 free stream
5.3 The normalized coordinates' change for the non-fixed point stabilization for the 2D Euler case with Mach=1.2, angle=0.0 free stream
5.4 Mesh comparison for the stabilization at the initial solution for the 2D Euler case with Mach=1.2, angle=0.0 (The red line is for the new mesh, and the black line is for the original mesh.)
5.5 New mesh for the stabilization at the initial solution for the 2D Euler case with Mach=1.2, angle=0.0 free stream
5.6 Convergence history comparison for the stabilization at a non-fixed point for the 2D Euler case (Mach=1.2, angle=0.0)
5.7 First twenty rightmost eigenvalues of the Jacobian matrix on the converged solution of the fourth order scheme on the new mesh for the 2D Euler case with Mach=1.2, angle=0.0 free stream
5.8 First twenty rightmost eigenvalues of the Jacobian matrix of the fourth order scheme at the initial solution for the 2D Euler case with Mach=1.3, angle=0.0 free stream
5.9 Eigenvectors and derivatives for the 2D Euler case with Mach=1.3, angle=0.0 free stream
5.10 The normalized coordinates' change for non-fixed point stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream
5.11 The mesh comparison for the non-fixed stabilization for the 2D Euler case with Mach=1.3, angle=0.0 (The red line is for the new mesh, and the black line is for the original mesh.)
5.12 The new mesh arising from non-fixed point stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream
5.13 Convergence history comparison for the non-fixed stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream
5.14 First twenty rightmost eigenvalues of the Jacobian matrix on the converged solution of the fourth order scheme on the new mesh for the 2D Euler case with Mach=1.3, angle=0.0 free stream

List of Symbols

Roman Symbols
Ū: The solution vector of the discretized system, or the computational solution vector
Ū_p: The computational solution after perturbation
Ū_s: The solution of the discretized system and the fixed point
∆t: The time step
∆x: The space step in the x-direction
δx: The adjustment to the x-coordinate of one vertex in the mesh
δy: The adjustment to the y-coordinate of one vertex in the mesh
∆Ū: The computational error
δŪ: The solution update
∆Ū_p: The perturbation to the solution
∂R̄/∂Ū: The Jacobian matrix
n̂: The outward unit normal vector
d: Dimension
E: Numerical energy
E: Total energy
E_∆λ0: The relative error of the predicted eigenvalue change
F: The flux vector
G: The amplification factor defined in the von Neumann method
I: The identity matrix and the imaginary unit
L_u: The unit length at a vertex
R: The real numbers
R(Ū): The residual term
S: The source term in Poisson's equation
S: The surface of the control volume
T: The period of the amplification factor associated with the case where the largest eigenvalues are a conjugate pair
U: The conservative variables vector
u: Velocity in the x-direction
V: The control volume
v: Velocity in the y-direction
x: The right eigenvector of a matrix
y: The left eigenvector of a matrix
P: The pressure

Greek Symbols
γ̄: The average amplification factor
∆γ: The difference between the actual amplification factor and the norm of the largest eigenvalue
∆θ: The difference between the actual angle and π or 0
∆ζ: The vector of coordinate change
ε: The perturbation of a coordinate of one vertex in the mesh
γ: The amplification factor
Γ_k: The ratio of the norm of the computational error at the kth iteration to that of the 0th iteration
λ: Eigenvalue of a matrix
Ω: The spatial domain
∂Ω: The boundary of the spatial domain
ρ: Density
θ: The angle associated with the real part and the imaginary part of an eigenvalue
ζ: The vector of the coordinates of the vertices of a mesh

Superscripts
n: Time level

Subscripts
i: Index
j: Index

Acknowledgements

Here I first would like
to thank Dr. Carl Ollivier-Gooch for his constant, patient supervision and his tolerance of different views. His insightful strategic perspective always inspired me to carry on the research. I also would like to thank my colleagues from the laboratory and others who have provided their help and suggestions. The author also thanks ANSYS Canada and NSERC for their financial support from January 2013 to December 2015.

Dedication

Graduate study is not easy. The constant love and support of my family always gave me the courage to deal with the tough times and to overcome all kinds of difficulties to make this thesis possible during the course of my graduate study.

Chapter 1
Introduction

1.1 Preliminary

The partial differential equations (PDEs) that describe physical phenomena are usually posed on continuous spatial and temporal domains. The PDEs usually consist of two parts: temporal terms and spatial terms. We express the PDEs in the following way:
\[\frac{\partial U}{\partial t} + \frac{\partial f(U)}{\partial x} = 0, \qquad (1.1)\]
\[t \ge 0, \quad x \in \Omega \subseteq \mathbb{R}^d, \qquad (1.2)\]
\[U(x,0) = U_0, \quad U(\partial\Omega, t) = U_{\partial\Omega}, \qquad (1.3)\]
where U is the conservative variables vector, x is the vector of spatial coordinates, f(U) is the flux vector, Ω is the spatial domain, ∂Ω is the boundary of the spatial domain, \(\mathbb{R}\) denotes the real numbers, and d is the dimension of the problem. If the temporal term \(\partial U / \partial t\) is zero, the problem reduces to a steady-state problem.

To solve the equations numerically, we must discretize the spatial and temporal domains. The spatial discretization generates a mesh. There are two categories of meshes: structured and unstructured. The finite difference method, the spectral method, the finite element method, and the finite volume method are the most common ways to represent the spatial terms on the discretized spatial domain.
Usually the finite difference method and the spectral method cannot be applied to unstructured meshes, while the finite element method and the finite volume method can be applied to both mesh types.

After the spatial discretization, a system of ordinary differential equations corresponding to the original partial differential equations (1.1) is generated:
\[\frac{d\bar{U}}{dt} = R(\bar{U}, x). \qquad (1.4)\]
The variable \(\bar{U}\) in Equation (1.4) is the computational solution vector, or discrete solution vector:
\[\bar{U} = \begin{bmatrix} \bar{U}_0 \\ \bar{U}_1 \\ \bar{U}_2 \\ \vdots \\ \bar{U}_{n-1} \end{bmatrix} \qquad (1.5)\]
where \(\bar{U}_i\) is the component of the discrete solution vector at the ith node or control volume of the mesh. Equation (1.4) is the semi-discretization of the original equation, also called the method of lines. Semi-discretization is a good model for isolating the influence of the temporal discretization when studying the spatial discretization. There are many ways to discretize the temporal terms, for instance, the explicit Euler and the implicit Euler temporal discretizations. Applying the explicit Euler temporal discretization to Equation (1.4) yields
\[\frac{\bar{U}^{n+1} - \bar{U}^n}{\Delta t} = R(\bar{U}^n) \qquad (1.6)\]
where ∆t is the time step and \(\bar{U}^n\) is the computational solution at time level n. Suppose \(\bar{U}^n_s\) is the exact solution to the fully discretized equation (1.6) at time level n.

Definition 1. The computational error \(\Delta\bar{U}^n\) is the difference between the exact solution to the fully discretized equation, \(\bar{U}^n_s\), and the computational solution \(\bar{U}^n\) at the nth time level:
\[\Delta\bar{U}^n = \bar{U}^n_s - \bar{U}^n. \qquad (1.7)\]
\(\Delta\bar{U}\) refers to the computational error in general.

Stability, consistency, and convergence are the most important properties of numerical schemes. For a well-posed problem, these three properties are related to each other by the Lax Equivalence Theorem:

Theorem 1.
Equivalence Theorem of Lax: For a well-posed initial value problem and a consistent discretization scheme, stability is the necessary and sufficient condition for convergence.

This statement is quoted directly from Hirsch (2007) [9]. The Lax equivalence theorem, also called the Lax-Richtmyer equivalence theorem, was presented originally by Lax and Richtmyer (1956) [12]. A description of this theorem can also be found in Richtmyer and Morton (1967) [23].

Definition 2. A numerical scheme is stable if and only if the norm of the computational solution or the associated computational error remains bounded as the iteration process marches to infinity, i.e.,
\[\left|\bar{U}^n\right| \le C_1 \ \text{as} \ n \to \infty, \qquad (1.8)\]
or
\[\left|\Delta\bar{U}^n\right| \le C_2 \ \text{as} \ n \to \infty, \qquad (1.9)\]
where \(C_1\) and \(C_2\) are constants independent of n. This is a commonly accepted definition, but it is not very rigorous. For non-normal operators, Condition (1.8) and Condition (1.9) are too strict to be satisfied. To remedy this, a weaker condition, allowing the error to grow at most linearly with the iteration number n, can be used:
\[\left|\Delta\bar{U}^n\right| \le n\,C_3, \qquad (1.10)\]
where \(C_3\) is independent of n.

1.2 Stability Analysis Methods

Condition (1.8), Condition (1.9), and Condition (1.10) for stability are quite abstract, in that one cannot easily determine under what conditions a scheme is stable. To deal with this, there are three main methods that simplify the stability analysis: the von Neumann method, the energy method, and the matrix method. In this section, we give a brief introduction to each method.

1.2.1 The Von Neumann Method

The von Neumann method is the classical method for studying the stability of a numerical scheme. The earliest published descriptions of this method can be found in Crank and Nicolson (1947) [4] and Charney et al. (1950) [3]. A comprehensive and recent description can be found in Hirsch (2007) [9].

The basic idea of the von Neumann method is to decompose the computational solution or the computational error into a discrete Fourier series.
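As a purely illustrative sketch of this idea (the use of NumPy and the sample grid function are assumptions, not part of the thesis), the snippet below decomposes a periodic grid function into its discrete Fourier modes and reconstructs it exactly; the von Neumann analysis then tracks how each such mode is amplified by the scheme.

```python
import numpy as np

# Hypothetical illustration: a periodic grid function on a uniform 1D mesh is
# exactly represented by its discrete Fourier modes, mirroring the
# decomposition used in the von Neumann method.
K = 16
x = 2.0 * np.pi * np.arange(K) / K        # uniform periodic mesh points
u = np.sin(3 * x) + 0.5 * np.cos(2 * x)   # sample computational solution

V = np.fft.fft(u) / K                     # harmonic amplitudes V_j
u_rec = np.real(np.fft.ifft(V) * K)       # superpose the harmonics again

assert np.allclose(u, u_rec)              # the decomposition loses nothing
```

Because the modes evolve independently under a linear constant-coefficient scheme with periodic boundaries, it suffices to track the amplification of each mode separately, which is what the derivation below does.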
We take the one-dimensional advection-diffusion equation as an example:
\[\frac{\partial u}{\partial t} + a\frac{\partial u}{\partial x} = \alpha\frac{\partial^2 u}{\partial x^2}, \qquad (1.11)\]
\[x \in [-\pi, \pi], \quad u(x,0) = u_0, \quad u(-\pi) = u(\pi), \qquad (1.12)\]
\[a > 0, \quad \alpha > 0. \qquad (1.13)\]
The mesh is a one-dimensional uniform mesh with \(\Delta x = 2\pi/K\), which has K+1 nodes. For K = 10, the mesh is shown in Figure (1.1).

Figure 1.1: 1D uniform mesh with K = 10

The first order upwind finite difference for the advection term and the second order central finite difference for the diffusion term are employed to discretize the spatial terms, yielding the semi-discrete equation
\[\frac{d\bar{U}_i}{dt} + a\frac{\bar{U}_{i-1}-\bar{U}_i}{\Delta x} = \alpha\frac{\bar{U}_{i-1}-2\bar{U}_i+\bar{U}_{i+1}}{\Delta x^2}. \qquad (1.14)\]
Explicit Euler temporal discretization is used to discretize the temporal term, giving the fully discrete equation
\[\frac{\bar{U}_i^{n+1}-\bar{U}_i^n}{\Delta t} + a\frac{\bar{U}_{i-1}^n-\bar{U}_i^n}{\Delta x} = \alpha\frac{\bar{U}_{i-1}^n-2\bar{U}_i^n+\bar{U}_{i+1}^n}{\Delta x^2}. \qquad (1.15)\]
Rearranging Equation (1.15), we have
\[\bar{U}_i^{n+1} = \left(\alpha\frac{\Delta t}{\Delta x^2}-a\frac{\Delta t}{\Delta x}\right)\bar{U}_{i-1}^n + \left(1+a\frac{\Delta t}{\Delta x}-2\alpha\frac{\Delta t}{\Delta x^2}\right)\bar{U}_i^n + \alpha\frac{\Delta t}{\Delta x^2}\,\bar{U}_{i+1}^n. \qquad (1.16)\]
The discrete solution \(\bar{U}^n\) can be decomposed into a finite Fourier series. For each component, we have
\[\bar{U}_i^n = \sum_{j=0}^{K} V_j^n\, e^{I\frac{ij\pi}{K}} \qquad (1.17)\]
where \(I = \sqrt{-1}\). Substituting Equation (1.17) into Equation (1.16) gives
\[\sum_{j=0}^{K} V_j^{n+1} e^{I\frac{ij\pi}{K}} = \left(\alpha\frac{\Delta t}{\Delta x^2}-a\frac{\Delta t}{\Delta x}\right)\sum_{j=0}^{K} V_j^n e^{I\frac{(i-1)j\pi}{K}} + \left(1+a\frac{\Delta t}{\Delta x}-2\alpha\frac{\Delta t}{\Delta x^2}\right)\sum_{j=0}^{K} V_j^n e^{I\frac{ij\pi}{K}} + \alpha\frac{\Delta t}{\Delta x^2}\sum_{j=0}^{K} V_j^n e^{I\frac{(i+1)j\pi}{K}}. \qquad (1.18)\]
For the jth harmonic, we have
\[V_j^{n+1} = \left(\alpha\frac{\Delta t}{\Delta x^2}-a\frac{\Delta t}{\Delta x}\right)V_j^n e^{-I\frac{j\pi}{K}} + \left(1+a\frac{\Delta t}{\Delta x}-2\alpha\frac{\Delta t}{\Delta x^2}\right)V_j^n + \alpha\frac{\Delta t}{\Delta x^2}\,V_j^n e^{I\frac{j\pi}{K}}. \qquad (1.19)\]
We define the amplification factor
\[G_j \triangleq \frac{V_j^{n+1}}{V_j^n}. \qquad (1.20)\]
To be stable, the following condition needs to be satisfied:
\[\left|G_j\right| \le 1 \quad \text{for all } j = 0,\dots,K. \qquad (1.21)\]
This condition is the von Neumann stability condition (Hirsch (2007) [9]). Combining Equation (1.19) and Equation (1.20) gives
\[G_j = \left(\alpha\frac{\Delta t}{\Delta x^2}-a\frac{\Delta t}{\Delta x}\right)e^{-I\frac{j\pi}{K}} + \left(1+a\frac{\Delta t}{\Delta x}-2\alpha\frac{\Delta t}{\Delta x^2}\right) + \alpha\frac{\Delta t}{\Delta x^2}\,e^{I\frac{j\pi}{K}}. \qquad (1.22)\]
By Euler's formula, we have
\[G_j = \left(1+a\frac{\Delta t}{\Delta x}-2\alpha\frac{\Delta t}{\Delta x^2}\right) + \left(2\alpha\frac{\Delta t}{\Delta x^2}-a\frac{\Delta t}{\Delta x}\right)\cos\left(\frac{j\pi}{K}\right) + I\,a\frac{\Delta t}{\Delta x}\sin\left(\frac{j\pi}{K}\right).\]
To be stable, \(|G_j| \le 1\), i.e.,
\[\left(\left(1+a\frac{\Delta t}{\Delta x}-2\alpha\frac{\Delta t}{\Delta x^2}\right)+\left(2\alpha\frac{\Delta t}{\Delta x^2}-a\frac{\Delta t}{\Delta x}\right)\cos\left(\frac{j\pi}{K}\right)\right)^2 + \left(a\frac{\Delta t}{\Delta x}\sin\left(\frac{j\pi}{K}\right)\right)^2 \le 1 \quad \text{for } j = 0,\dots,K. \qquad (1.23)\]

1.2.2 The Energy Method

The energy method has a long history of use in the stability analysis of partial differential equations. An early example of the energy method's application to numerical methods can be found in Richtmyer and Morton (1967) [23]. The energy method usually defines a norm of the computational solution as an energy and provides the condition under which the energy monotonically decreases, thereby yielding a stable system.

For linear problems, and for nonlinear problems like the Euler equations that are homogeneous of degree one, the term \(R(\bar{U})\) in Equation (1.4) can be rewritten as
\[R(\bar{U}) = \frac{\partial R}{\partial \bar{U}}\bar{U}. \qquad (1.24)\]
Substituting Equation (1.24) into Equation (1.4) gives
\[\frac{d\bar{U}}{dt} = \frac{\partial R}{\partial \bar{U}}\bar{U}. \qquad (1.25)\]
We define the standard norm of the computational solution as the energy:
\[E = \bar{U}^{\mathsf{T}}\bar{U}. \qquad (1.26)\]
Multiplying both sides of Equation (1.25) by \(\bar{U}^{\mathsf{T}}\) gives
\[\bar{U}^{\mathsf{T}}\frac{d\bar{U}}{dt} = \frac{1}{2}\frac{dE}{dt} = \bar{U}^{\mathsf{T}}\frac{\partial R}{\partial \bar{U}}\bar{U}. \qquad (1.27)\]
To be stable, the energy E should not increase with time t, i.e., \(dE/dt \le 0\). As a result, we need
\[\bar{U}^{\mathsf{T}}\frac{\partial R}{\partial \bar{U}}\bar{U} \le 0. \qquad (1.28)\]
This condition is sufficient for a normal operator. For an ill-conditioned non-normal operator, this condition is not sufficient. D. Levy and E. Tadmor (1998) [14] proposed a strengthened condition
\[\bar{U}^{\mathsf{T}}\frac{\partial R}{\partial \bar{U}}\bar{U} \le -\eta\left|\frac{\partial R}{\partial \bar{U}}\bar{U}\right|. \qquad (1.29)\]
Lenferink and Spijker (1991) [13] came up with another approach to deal with ill-conditioned non-normal operators.

1.2.3 The Matrix Method

In general, for a linear problem, the evolution of the computational error between two successive iterations can be mapped by the following relation:
\[\Delta\bar{U}^{n+1} = M\,\Delta\bar{U}^n + B, \qquad (1.30)\]
where M is a matrix whose size equals the number of unknowns, and B is a vector associated with the boundary conditions. For some boundary conditions and boundary condition enforcement methods, B is zero. We do not take B into consideration here.
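With B = 0, the error propagation (1.30) can be simulated directly. The sketch below is a hypothetical illustration (the 2×2 matrices, iteration counts, and tolerances are assumptions, not taken from the thesis) of how the boundedness of the error is governed by the eigenvalues of M.

```python
import numpy as np

def error_history(M, dU0, n_steps):
    """Return |dU^n| for n = 0..n_steps under dU^{n+1} = M dU^n (B = 0)."""
    norms, dU = [float(np.linalg.norm(dU0))], dU0
    for _ in range(n_steps):
        dU = M @ dU
        norms.append(float(np.linalg.norm(dU)))
    return norms

# Upper-triangular toy matrices: the eigenvalues sit on the diagonal.
M_stable = np.array([[0.5, 1.0],
                     [0.0, 0.6]])    # eigenvalues 0.5, 0.6: all inside the unit circle
M_unstable = np.array([[1.1, 1.0],
                       [0.0, 0.6]])  # eigenvalue 1.1: outside the unit circle

dU0 = np.array([1.0, 1.0])
assert error_history(M_stable, dU0, 100)[-1] < 1e-6   # error decays toward zero
assert error_history(M_unstable, dU0, 100)[-1] > 1e3  # error grows without bound
```

The decaying case corresponds to the situation needed for a steady solver, where the error must eventually reach machine zero, as discussed next.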
Suppose the initial computational error is \(\Delta\bar{U}^0\). For a linear problem, M is constant; therefore, \(\Delta\bar{U}^n = M^n \Delta\bar{U}^0\). To be stable, the norm of the computational error \(\Delta\bar{U}^n\) needs to be bounded as n becomes infinite:
\[\left|\Delta\bar{U}^n\right| = \left|M^n \Delta\bar{U}^0\right| \le C \quad \text{as } n \to \infty.\]
We also have
\[\left|\Delta\bar{U}^n\right| = \left|M^n \Delta\bar{U}^0\right| \le \left|M^n\right|\left|\Delta\bar{U}^0\right|,\]
so the behavior of \(\Delta\bar{U}\) is determined by M. For \(\left|\Delta\bar{U}^n\right|\) to be bounded, the following conditions need to be satisfied (see, e.g., Horn and Johnson (1990) [10]):

1. The largest modulus of the eigenvalues of M should not be larger than one.
2. If that modulus equals one, then the associated eigenvalues must be semisimple.

For a steady flow problem, the solution evolution starts from a guessed solution, which does not satisfy the discretized equations. To be able to converge to the solution of the discretized equations, the computational error needs to reach machine zero as n becomes sufficiently large. Therefore, we need the largest modulus of the eigenvalues to be smaller than one.

A more detailed description of this method can be found in Hirsch (2007) [9].

For an ill-conditioned operator, even when this condition is satisfied, there can be a large transient growth before the eventual decay. This phenomenon is well documented in Reddy and Trefethen (1992) [22], in which an approach to deal with the non-normality is presented.

For a nonlinear problem, if a linearization is used, we can obtain a mapping for the computational error of two successive iterations with the same form as Equation (1.30).

1.2.4 Remarks

Among these three methods, the von Neumann method is the most convenient one, providing information about how various factors affect the stability and accuracy properties of a numerical scheme. One drawback of the von Neumann method is that it can only be applied to linear problems with periodic boundary conditions on uniform structured meshes.
However, the von Neumann method can provide valuable guidelines on the stability and accuracy of more general structured-mesh cases, including nonlinear problems. In general, the von Neumann method cannot be applied to unstructured discretizations.

The energy method can be applied to both structured and unstructured discretizations, and it can reveal general information about the stability properties of a scheme. One drawback of the energy method is that its success relies on a suitable form of the defined energy, i.e., a suitable norm of the computational error or solution (see, e.g., Giles (1997) [6] and F. Haider et al. (2009) [7]). In addition, the analysis associated with the energy method is usually carried out in the manner of functional analysis, which is not easy, and it is also hard to take boundary conditions into consideration. For high-order schemes, the analysis is much more complicated than for first- and second-order schemes.

For both the von Neumann method and the energy method, one must study the structure of the system to perform the analysis. For the matrix method, by contrast, one only needs to calculate eigenvalues; in general one does not need to study the structure of the system, and boundary conditions are easily incorporated. One drawback of the matrix method is that the eigenvalue calculation is in general expensive. It also usually does not provide general guidelines on how mesh type, reconstruction, the coordinates of the mesh, and similar factors affect stability.

For a numerical scheme on a structured mesh applied to a linear problem with periodic boundary conditions, the corresponding matrix is circulant, and its eigenvalues can be calculated analytically. In this case the matrix method is equivalent to the von Neumann method, suggesting that the von Neumann method is a special case of the matrix method. Compared with a normal matrix, a non-normal matrix is much more difficult to deal with.
For further information, see Reddy and Trefethen (1992) [22], Giles (1997) [6], and the references therein.

The problem of interest in this thesis is the steady compressible inviscid flow problem, and we employ a high-order (third- or fourth-order) finite volume method to discretize the spatial terms on unstructured meshes. The Jacobian matrix of the spatial discretization is therefore non-symmetric, and its condition number is usually large. In practice, however, we do not encounter large transient growth before the eventual decay, and for a steady problem the intermediate solution during time-marching is not of central concern. Though the condition number is large, the eigenvalue calculation is still reliable. Based on these considerations and observations, we use the matrix method in this thesis to implement stability analysis and stabilization, owing to its convenience.

1.3 Literature Review

By using stability analysis, we can determine whether a scheme is stable, and for a full discretization we can find the maximum stable time step. It is also quite valuable to study how various factors (for instance, the reconstruction method and the mesh type) affect the stability properties of a numerical scheme. In this section, we give a brief review of the current literature on the stability analysis of the finite volume method on unstructured meshes. Note that there are not many articles on this topic.

M. B. Giles (1987) [5] developed a framework for performing energy stability analysis of unstructured discretizations, using the 1D convection equation as the model problem. The basic ideas are: first, present properties that suffice for stability of the PDEs, the semi-discretization, and the full discretization respectively; if the corresponding properties are satisfied, the PDEs, the semi-discretization, and the full discretization are stable automatically.
The framework then proves that the PDEs of interest and the semi-discretization satisfy the designed properties under some assumptions, and are therefore stable; for the full discretization to satisfy the properties, a time-step limit can be obtained for multi-stage explicit time-stepping. The author extended the method to first-order systems of equations and to the Euler equations. It is expected to give more appropriate time steps than classical von Neumann analysis for unstructured discretizations employing multi-stage time-stepping. However, the analysis in that article is not verified by numerical results.

M. B. Giles (1997) [6] developed a method for time-step stability analysis of unstructured discretizations of the Navier-Stokes equations. In that article, the energy method is used to obtain the maximum stable time step. To deal with large transient growth, the algebraic stability condition defined in Lenferink and Spijker (1991) [13] is employed. The basic ideas of the approach are: first, assume the flow is a uniform flow with periodic boundary conditions; second, prove this flow is stable under small perturbations; third, prove the semi-discretization is stable as well; fourth, to obtain a stable full discretization, choose the time step to satisfy the algebraic stability condition. The author obtained time-step limits for both global and local time steps for Runge-Kutta time-stepping applied to a Galerkin discretization of the Navier-Stokes equations on unstructured meshes.

P. Moinier and M. B. Giles (2002) [18] implemented stability analysis for the preconditioned discrete Euler equations with Runge-Kutta time-stepping on unstructured meshes using the energy method, again based on the algebraic stability condition of Lenferink and Spijker (1991) [13]. The approach is similar to that of M. B. Giles (1997) [6]. Compared with M. B.
Giles (1997) [6], Jacobi preconditioning and low-Mach preconditioning are included in the analysis, and solid-wall boundary conditions are also taken into consideration. Moinier and Giles performed numerical tests showing that the presented method gives a time-step limit close to the actual one. However, for some cases the predicted time-step limit is too large, possibly because the flow contains strong shock waves, violating the uniform-flow assumption.

In the analyses of M. B. Giles (1997) [6] and P. Moinier and M. B. Giles (2002) [18], uniform flow and periodic boundary conditions are assumed. In practice, these conditions are usually not satisfied; therefore, the results given by this method might not be correct, as seen in P. Moinier and M. B. Giles (2002). It is also complicated to extend the energy method to higher-order schemes and to take boundary conditions into consideration.

By combining the energy method with the matrix method, F. Haider et al. (2009) [7] studied the stability of the MUSCL ([26, 27]) discretization of the linear advection equation on general unstructured meshes and investigated the influence of the slope reconstruction method, the mesh type, and the stencil size on stability. The authors proposed a new approximate, qualitative criterion to measure how piecewise linear slope reconstruction affects the stability of the MUSCL scheme, for two reasons: even if the scheme is stable, the defined energy can locally increase when using the energy method; and the relations between the slope reconstruction and the eigenvalues are too complicated to analyze for unstructured discretizations when using the matrix method.
Their investigation of stability under this new criterion yields two important conclusions: (1) least-squares slope reconstruction is the most stable method; (2) extending the reconstruction stencil can increase the stability of a scheme, which holds at least for the least-squares slope reconstruction scheme. Extensive numerical tests were implemented and verified both conclusions. The energy stability analysis in that article studies stability by analyzing the structure of the reconstruction in the manner of functional analysis. The conclusions drawn by this approach can be quite general; however, the analysis is quite hard, and would be harder still if extended to high-order schemes. Also, the success of the energy-method analysis relies on a suitable definition of energy, which led the authors to seek a new criterion.

The current literature contains no efforts to optimize the spatial discretization in a quantitative and controllable way, which is one of the topics of this thesis.

1.4 Motivation and Outline

Structured meshes have clear efficiency and accuracy advantages, but because of their flexibility and automatic generation, unstructured meshes are commonly used in simulations on geometrically complex domains. In practice, however, failure to converge to the solution happens often, and a long trial-and-error process is sometimes required to generate a suitable mesh that reaches a converged solution. An approach to optimize the mesh for stability would therefore be quite valuable. As mentioned previously, the study of the unstructured finite volume method is currently far from sufficient. It is worthwhile to develop a model to study the stability properties of the unstructured finite volume method and a method to stabilize unstable numerical schemes.

In this thesis, we develop a model based on the matrix method to implement stability analysis for the discretized Euler equations on unstructured meshes.
We also develop a method to optimize the mesh to stabilize unstable spatial discretization cases.

The remainder of this thesis is organized in five chapters. Chapter 2 presents a model to implement stability analysis. Chapter 3 verifies the stability analysis model. Chapter 4 presents the method to stabilize an unstable spatial discretization at the fixed point. Chapter 5 presents the results of stabilization at a non-fixed point. Chapter 6 summarizes the study in this thesis and presents a prospective view of future work.

Chapter 2
Essential Background and Stability Analysis

2.1 Governing Equations and Numerical Methods

The physical problems of interest in this thesis are steady compressible flow problems, for which the governing equations are the Euler equations for inviscid flows and the Navier-Stokes equations for viscous flows. As a good starting problem, only the two-dimensional Euler equations are studied in this thesis; the presented methods can be extended readily to the two-dimensional Navier-Stokes equations and to the three-dimensional Euler and Navier-Stokes equations. In this thesis, the Euler equations are solved numerically using a finite volume scheme employing least-squares solution reconstruction [19] and Roe flux difference splitting [24] on unstructured meshes. In this section, a brief introduction is given that is sufficient for expressing the new ideas presented in this thesis.

2.1.1 Physical Governing Equations

The Euler equations describe the motion of compressible inviscid flows. For the convenience of applying the finite volume method, the conservation form is given here:

\[
\frac{d}{dt}\int_{V_i} U \, dV + \oint_{S_i} F \, dS = 0, \tag{2.1}
\]

\[
U = \begin{pmatrix} \rho \\ \rho u \\ \rho v \\ E \end{pmatrix}, \qquad
F = \begin{pmatrix} \rho u_n \\ \rho u u_n + P \hat{n}_x \\ \rho v u_n + P \hat{n}_y \\ (E + P) u_n \end{pmatrix}, \tag{2.2}
\]

where U is the vector of conservative variables, V is the control volume, S is the surface of the control volume, and F is the flux vector. In Equation (2.2), ρ, u, v, and E are the density, velocity in the x-direction, velocity in the y-direction, and
total energy, respectively. Let n̂ be the outward unit normal vector, with n̂x its projection in the x-direction and n̂y its projection in the y-direction. In Equation (2.2), u_n = u n̂x + v n̂y. The pressure P is related to the conservative variables by the ideal gas equation of state:

\[
P = (\gamma - 1)\left( E - \frac{1}{2}\rho \left( u^2 + v^2 \right) \right). \tag{2.3}
\]

2.1.2 Numerical Discretization of the Spatial Terms

In this thesis, we consider an unstructured finite volume scheme employing least-squares solution reconstruction and Roe flux difference splitting [24]. The spatial discretization generates a nonlinear real-valued vector function:

\[
\oint_{S_i} F \, dS = R(\bar{U}), \tag{2.4}
\]

where R(Ū) is the residual term and Ū is the solution vector, or computational solution vector. Detailed information on the spatial discretization can be found in [19]. A numerical method for solving the steady Euler equations seeks a solution vector Ūs satisfying

\[
R(\bar{U}_s) = 0. \tag{2.5}
\]

The residual R(Ū) can also be interpreted as a function of the coordinates of the vertices of the mesh, if the mesh's influence on the system is taken into consideration; this viewpoint is of central interest to this thesis:

\[
\oint_{S_i} F \, dS = R(\bar{U}) = R(\bar{U}, \zeta), \tag{2.6}
\]

where ζ denotes the vector of coordinates of the vertices of the mesh discretizing the spatial domain.

2.1.3 Temporal Discretization and Time-Stepping Methods

Since the problems of interest are steady and we expect only one solution to Equation (2.5), the accuracy of the temporal discretization affects only the transient path to the converged solution, not the solution accuracy. Among the many temporal discretization methods, only the backward Euler method is studied in this thesis.

Backward Euler Time-Stepping Method

The governing equations discretized in space can be written as

\[
\frac{d\bar{U}}{dt} = -R(\bar{U}).
\]
(2.7)

Applying the backward Euler method to the temporal terms in Equation (2.7) gives

\[
\frac{\bar{U}^k - \bar{U}^{k-1}}{\Delta t^k} = -R\left(\bar{U}^k\right), \tag{2.8}
\]

where k is the iteration count (the kth time level) and Δt is the time step. The right-hand side of Equation (2.8) can be expanded in a Taylor series:

\[
R\left(\bar{U}^k\right) = R\left(\bar{U}^{k-1}\right) + \left.\frac{\partial R}{\partial \bar{U}}\right|_{\bar{U}=\bar{U}^{k-1}} \left(\bar{U}^k - \bar{U}^{k-1}\right) + O\left(\left(\bar{U}^k - \bar{U}^{k-1}\right)^2\right), \tag{2.9}
\]

where ∂R/∂Ū evaluated at Ū = Ū^{k−1} is the Jacobian matrix. Combining Equation (2.8) and Equation (2.9) and dropping the high-order terms, we get

\[
\frac{\bar{U}^k - \bar{U}^{k-1}}{\Delta t^k} = -\left( R\left(\bar{U}^{k-1}\right) + \left.\frac{\partial R}{\partial \bar{U}}\right|_{\bar{U}=\bar{U}^{k-1}} \left(\bar{U}^k - \bar{U}^{k-1}\right) \right), \tag{2.10}
\]

where the term Ū^k − Ū^{k−1} is denoted by the solution update δŪ. As a result, we obtain the concise form

\[
\left( \frac{I}{\Delta t^k} + \left.\frac{\partial R}{\partial \bar{U}}\right|_{\bar{U}=\bar{U}^{k-1}} \right) \delta \bar{U}^k = -R\left(\bar{U}^{k-1}\right), \tag{2.11}
\]

where I is the identity matrix. The solution update δŪ^k at the kth iteration is obtained by solving this linear equation, and the solution is updated as

\[
\bar{U}^k = \bar{U}^{k-1} + \alpha\, \delta \bar{U}^k, \tag{2.12}
\]

where α is a positive factor, equal to one for the backward Euler method; α > 1 corresponds to overrelaxation and α < 1 to underrelaxation.

2.1.4 Summary of Numerical Methods

Here we summarize the whole procedure for applying a numerical scheme to solve the two-dimensional Euler equations:

(1) Discretize the spatial terms, and discretize the temporal terms with the backward Euler method:

\[
\left( \frac{I}{\Delta t^k} + \left.\frac{\partial R}{\partial \bar{U}}\right|_{\bar{U}=\bar{U}^{k-1}} \right) \delta \bar{U}^k = -R\left(\bar{U}^{k-1}\right). \tag{2.13}
\]

Starting from k = 1 with an initial guess Ū = Ū⁰, we obtain the solution update δŪ^k by solving Equation (2.13).

(2) Update the solution by δŪ^k:

\[
\bar{U}^k = \bar{U}^{k-1} + \alpha\, \delta \bar{U}^k. \tag{2.14}
\]

If the computational solution satisfies the convergence criterion (for instance, the L2 norm of R(Ū) is smaller than 10⁻¹⁰), stop; otherwise, return to step (1).

Hence, the mapping between the computational solutions of two successive iterations is

\[
\bar{U}^k = \bar{U}^{k-1} + \left( \frac{I}{\Delta t^k} + \left.\frac{\partial R}{\partial \bar{U}}\right|_{\bar{U}=\bar{U}^{k-1}} \right)^{-1} \left( -R\left(\bar{U}^{k-1}\right) \right).
\]
(2.15)

Therefore, the mapping between the computational solution at the kth iteration and the initial guess is

\[
\bar{U}^k = \underbrace{f\left(f\left(f \dots \left(f\left(\bar{U}^0\right)\right)\right)\right)}_{k \text{ applications of } f}, \tag{2.16}
\]

where we apply k iterations of the nonlinear function

\[
\bar{U}^{j+1} = f\left(\bar{U}^{j}\right) = \bar{U}^{j} + \left( \frac{I}{\Delta t^{j}} + \left.\frac{\partial R}{\partial \bar{U}}\right|_{\bar{U}=\bar{U}^{j}} \right)^{-1} \left( -R\left(\bar{U}^{j}\right) \right). \tag{2.17}
\]

We can also denote Equation (2.16) more compactly:

\[
\bar{U}^k = f^k\left(\bar{U}^0\right). \tag{2.18}
\]

The central focus of this thesis is studying the stability of this solution procedure.

2.2 Stability Analysis Methodology

For the Euler equations, Mapping (2.18) is nonlinear. The inherent nonlinearity of the Euler equations makes the analysis much more complicated than for a linear system. In general, for a nonlinear system, analysis around the converged solution (the fixed point) is applied to study the system. Instead of focusing on the computational solution, a typical fixed-point analysis takes the computational error ΔŪ as the object of study. If the computational error converges to 0 as k approaches infinity, the system is stable; otherwise it is unstable. This simplifies the analysis.

In this section, we present in detail the model originally developed in this thesis to implement stability analysis for the discretized systems of physical equations. The main ideas were obtained through communication in October 2014 with Dr. Christopher Hill at ANSYS Corporation [8]. The methodology originally proposed by C. Hill is aimed at stability analysis and stabilization for the dynamical system coupling the spatial discretization with the temporal discretization, which includes not only simple temporal discretizations such as backward Euler time-stepping but also more sophisticated methods. Besides presenting the originally proposed model, this thesis focuses on the backward Euler time-stepping method and constructs a stability condition solely for the spatial discretization.
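The solution procedure of Equations (2.13) and (2.14) can be sketched on a small stand-in problem. The residual R, its Jacobian, and the parameter values below are illustrative assumptions, not the thesis's Euler discretization:

```python
import numpy as np

def R(u):
    """Stand-in nonlinear residual; the steady state satisfies R(u) = 0."""
    return np.array([u[0]**2 - 1.0, u[0]*u[1] - 2.0])

def jacobian(u):
    """Analytic Jacobian dR/dU of the stand-in residual."""
    return np.array([[2.0*u[0], 0.0],
                     [u[1],     u[0]]])

dt, alpha = 10.0, 1.0               # time step and relaxation factor (assumed values)
u = np.array([2.0, 0.5])            # initial guess U^0
for k in range(100):
    A = np.eye(2)/dt + jacobian(u)  # (I/dt + dR/dU), as in Equation (2.13)
    du = np.linalg.solve(A, -R(u))  # solve the linear system for the update
    u = u + alpha*du                # Equation (2.14)
    if np.linalg.norm(R(u)) < 1e-10:
        break                       # convergence criterion met
print(u)                            # approaches the root (1, 2)
```

For large Δt the update approaches a Newton step, so the iteration converges rapidly once it is near the fixed point; the stability question studied below is what happens when this loop is restarted from a perturbed state.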
One of the remarkable points of the developed model is that, instead of the conventional way of implementing fixed-point analysis, a simpler approach is developed that drops the high-order terms.

2.2.1 Mathematical Model for Stability Analysis at the Fixed Point

As is usual in fixed-point analysis, instead of studying Mapping (2.16) relating the initial computational solution Ū⁰ to the computational solution Ū^k at the kth iteration, we focus on the behavior of the computational error. We start from the mapping of the computational solution between two successive iterations and then shift to the mapping of the computational error. Recall Equation (2.13) and Equation (2.12):

\[
\left( \frac{I}{\Delta t^k} + \left.\frac{\partial R}{\partial \bar{U}}\right|_{\bar{U}=\bar{U}^{k-1}} \right) \delta \bar{U}^k = -R\left(\bar{U}^{k-1}\right), \tag{2.19}
\]

\[
\bar{U}^k = \bar{U}^{k-1} + \alpha\, \delta \bar{U}^k. \tag{2.20}
\]

Suppose that for a certain k, Ū^{k−1} is the converged solution of Equation (2.5) and the fixed point of the time-stepping defined by Equations (2.19) and (2.20); then R(Ū^{k−1}) = 0, δŪ^k = 0, and Ū^k ≡ Ū^{k−1}.¹ A stable system should be able to recover the converged solution after a small perturbation. Following this idea, to investigate stability we perturb the solution randomly (so the perturbation is rich in directions) and feed the perturbed solution into the system to investigate its effect on the time-stepping process:

\[
\bar{U}_p = \bar{U}_s + \Delta \bar{U}_p, \tag{2.21}
\]

where ΔŪp is the perturbation, Ūs is the steady solution, and Ūp is the computational solution after perturbation. According to Definition 1, ΔŪp is also the computational error ΔŪ. To study the stability of the time-stepping process, we insert the perturbed solution into it. After k iterations, we can relate the computational solution to the converged solution at the kth iteration:

\[
\bar{U}^k_p = \bar{U}_s + \Delta \bar{U}^k.
\]
(2.22)By the definition of the vector norm, we have:∣∣U¯kp∣∣= ∣∣U¯s+∆U¯k−1∣∣≤ |U¯s|+ ∣∣∆U¯k−1∣∣ (2.23)where the norm of the fixed point solution U¯s is fixed, which implies that the behav-ior of the norm of the computational solution U¯p inherits from the computationalerror ∆U¯ . As a result, we can just focus on the asymptotic behavior of the compu-tational error ∆U¯ . Insert Equation (2.22) into Equation (2.19), and we get:(I∆tk+1 +∂ R¯∂U¯k∣∣∣U¯=U¯kp)δU¯k+1 =−R(U¯kp)⇓,(I∆tk+1+∂ R¯∂U¯k∣∣∣∣∣U¯=U¯s+∆U¯k)δU¯k+1 =−R(U¯s+∆U¯k). (2.24)The Jacobian matrix on the left-hand side is calculated based on the computationalsolution U¯s +∆U¯k. To reduce the complexity of analysis, this Jacobian matrix isreplaced by the Jacobian matrix calculated based on the fixed point solution. Theright-hand terms can be linearized by the Taylor expansion as:1In the context of scientific computation, there is no real ” 0 ” . Here “ 0 ” is a vector or scalar asclose to 0 as can be reached in the system, constrained by the precision of data and the system itself.Therefore, we can say after a certain k, U¯k is the fixed point since the changes in U¯k are meaninglessafterward.172.2. Stability Analysis Methodology−R(U¯s+∆U¯k)=−(R(U¯s)+∂ R¯∂U¯∣∣∣∣U¯=U¯s(∆U¯k))+O(∆U¯k)2(2.25)where R(U¯s) = 0. Combining these, an equation with a more clear form can begenerated after some simple algebraic operations:(I∆tk+1+∂ R¯∂U¯∣∣∣∣U¯=U¯s)δU¯k+1 =−(∂ R¯∂U¯∣∣∣∣U¯=U¯s(∆U¯k))+O(∆U¯k)2. (2.26)Since we aim at studying the asymptotic behavior of the computational error, andthere is another variable δU¯k+1 in the equation, we need to express the variablesolution update δU¯k+1 in terms of the variable ∆U¯k. Retrieve equation (2.12) andequation (2.22):U¯k = U¯k−1+αδU¯k, (2.27)U¯kp = U¯s+∆U¯k. (2.28)Combine these two equations, and we get:∆U¯k−∆U¯k−1α= δU¯k. 
(2.29)By using Equation (2.29) in Equation (2.26), an equation relating the computa-tional errors of two successive iterations can be derived :(I∆tk+1+∂ R¯∂U¯∣∣∣∣U¯=U¯s)∆U¯k+1 =(I∆tk+1+(1−α) ∂ R¯∂U¯∣∣∣∣U¯=U¯s)∆U¯k+O(∆U¯k)2.(2.30)Drop the higher order term O(∆U¯k)2, replace k+1 by k , and we can get a conciseformula :(I∆tk+∂ R¯∂U¯∣∣∣∣U¯=U¯s)∆U¯k =(I∆tk+(1−α) ∂ R¯∂U¯∣∣∣∣U¯=U¯s)∆U¯k−1. (2.31)If the backward Euler time stepping method is applied, the time step ∆t is constantand α = 1, resulting in a simpler form of Equation (2.31) as following:182.2. Stability Analysis Methodology(I∆t+∂ R¯∂U¯∣∣∣∣U¯=U¯s)∆U¯k =1∆t∆U¯k−1. (2.32)We can easily see that System (2.31) is linear, which simplifies the analysis of thesystem dramatically, and as shown by the data in Chapter 3, this linearity is validin the neighbor around the converged solution. Assume the matrix on the left-handside of equation (2.31) is not singular, which is valid for the problems of interestin this thesis, then we can have a more direct mapping of ∆U¯ of two successiveiterations:∆U¯k =(I∆tk+∂ R¯∂U¯∣∣∣∣U¯=U¯s)−1(I∆tk+(1−α) ∂ R¯∂U¯∣∣∣∣U¯=U¯s)∆U¯k−1. (2.33)With the help of the recurrence relation of ∆U¯ of Equation (2.33), the mapping ofthe initial computational error ∆U¯0 and the computational error at the kth iteration∆U¯k can be deduced as following:∆U¯k =( I∆tk+∂ R¯∂U¯∣∣∣∣U¯=U¯s)−1(I∆tk+(1−α) ∂ R¯∂U¯∣∣∣∣U¯=U¯s)k∆U¯0. (2.34)Since the term(I∆tk +∂ R¯∂U¯∣∣∣U¯=U¯s)−1(I∆tk +(1−α) ∂ R¯∂U¯∣∣∣U¯=U¯s)is a constant matrix,we use a letter A to denote it to obtain a concise formulation of Equation (2.31):∆U¯k = Ak∆U¯0. (2.35)We get an approximate mapping relating the initial computational error ∆U¯0 andthe computational error at the kth iteration ∆U¯k. In fact, for the solution update ∆U¯and flux integral R(U¯), we can also derive mappings similar to Equation (2.35),the details of which can be seen in Appendix A. 
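The matrix A of Equation (2.35) can be formed explicitly for a small hypothetical Jacobian to see how the time step and relaxation factor place its eigenvalues. The matrix J below is an arbitrary stand-in with eigenvalues in the right half-plane, not a Jacobian from an actual discretization:

```python
import numpy as np

J = np.array([[2.0, -1.0],
              [0.5,  1.0]])  # stand-in for dR/dU at the fixed point (eigenvalues 1.5 +/- 0.5i)

def error_map(J, dt, alpha=1.0):
    """A = (I/dt + J)^(-1) (I/dt + (1 - alpha) J), as in Equation (2.33)."""
    n = J.shape[0]
    return np.linalg.solve(np.eye(n)/dt + J, np.eye(n)/dt + (1.0 - alpha)*J)

# With backward Euler (alpha = 1) and Re(lambda(J)) > 0, the eigenvalues of A
# are 1/(1 + dt*lambda), so the spectral radius stays below one for any dt > 0.
for dt in (0.1, 1.0, 100.0):
    rho = max(abs(np.linalg.eigvals(error_map(J, dt))))
    print(f"dt = {dt:6.1f}: spectral radius = {rho:.4f}")
```

Larger time steps shrink the spectral radius here, consistent with backward Euler's unconditional stability for a spatially stable discretization; an unstable Jacobian (an eigenvalue with negative real part, in this sign convention) would instead produce a spectral radius above one for small enough I/Δt.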
Naturally, the next question isunder what circumstances the computational error converges to 0, and thereforethe system is stable; and under what circumstances it diverges.2.2.2 Eigenanalysis of the ModelA common way to study the behavior of Mapping (2.35) is spectral analysis. Inthis subsection, we discuss in detail how we employ spectral analysis to derive thestability conditions.192.2. Stability Analysis MethodologyWithout loss of generality, suppose matrix A is a n×n square matrix with a setof eigenvalues (λ0, λ1, λ2, . . . ,λn−1), a set of right eigenvectors (x0, x1, x2, . . . ,xn−1)and a set of left eigenvectors(y∗0, y∗1, y∗2, . . . ,y∗n−1), satisfying the following equa-tions by definition:Axi = λixi i = 0, 1, 2, . . . ,n−1, (2.36)yiA = λiyi i = 0, 1, 2, . . . ,n−1. (2.37)Suppose there are no repeated eigenvalues for matrix A, which always holds forthe problems of interest in this thesis — the discretized equations inherited fromthe steady 2D Euler equations. Since the right eigenvectors (x0, x1, x2, . . . ,xn−1)are linearly independent, they can span a vector space of dimension n. Similar tothe right eigenvectors, the set of left eigenvectors(y∗0, y∗1, y∗2, . . . ,y∗n−1)can span avector space of dimension n as well. Therefore, we can decompose ∆U by the righteigenvectors in the following way:∆U¯ =i=n−1∑i=0aixi (2.38)where ai are the eigendecomposition coefficients, calculated with the help of theleft eigenvectors:y∗k∆U¯ = y∗k(i=n−1∑i=0aixi)k = 0, 1, 2, . . . ,n−1. (2.39)Applying the orthogonality of the left and the right eigenvectors:y∗kxi ={0 k 6=iC4 k=i(2.40)where C4 is a scalar not equal to zero, we getai =y∗i ∆U¯y∗i xii = 0, 1, 2, . . . ,n−1. (2.41)Now, we study the behavior of ∆U under Mapping (2.35) in the eigendecom-position. Multiply Equation (2.38) by the matrix A from the left:A∆U¯ =i=n−1∑i=0aiAxi. (2.42)By applying Equation (2.36), we get202.2. Stability Analysis MethodologyA∆U¯ =i=n−1∑i=0aiAxi =i=n−1∑i=0aiλixi. 
(2.43)Hence, after multiplying k times as in Equation (2.35), We get:Ak∆U¯ =i=n−1∑i=0aiAxi =i=n−1∑i=0aiλ ki xi (2.44)Therefore, the computational error at the kth iteration ∆U¯k can be mapped to theinitial computational error ∆U0 as :∆U¯k = Ak∆U¯0 =i=n−1∑i=0a0i λki xi (2.45)where a0i are the eigendecomposition coefficients associated with the initial com-putational error ∆U¯0. If not explicitly stated, a0i is denoted by ai hereafter.Next, we will show the behavior of the computational error under Mapping(2.35) is dominated by the largest eigenvalues in norm and the associated eigen-vectors. From now on, in this thesis, largest eigenvalue(s) means the largest eigen-value(s) in norm if not specifically stated. Given a previous assumption that thereare no repeated eigenvalues for matrix A, the largest eigenvalues are either a single-ton real eigenvalue or a conjugate complex pair. Suppose the eigenvalues are sortedin the descending order of magnitude with respect to the count i. Because complexeigenvalues are common for the problems of interest, we use Euler’s formula topresent eigenvalues in a convenient form:λi = rieIθi (2.46)where ri is the norm of eigenvalue λi; I is the imaginary unit so that I =√−1 ; andθi is the associated angle in the complex plane. We analyze the behaviors of themappings by sorting them into two categories in terms of the characteristics of thelargest eigenvalues.(1) We suppose the largest eigenvalue of the matrix A is a singleton real scalar.Rewrite Equation (2.45) and separate the largest eigenvalue from the rest:∆U¯k = Ak∆U¯ =i=n−1∑i=0aiλ ki xi = λk0i=n−1∑i=0ai(λiλ0)kxi (2.47)By applying Equation (2.46), we have:∆U¯k = rk0i=n−1∑i=0ai(rir0)keIkθixi. (2.48)212.2. Stability Analysis MethodologyTo study the behavior of the computational error ∆U¯k, an asymptotic analysis isimplemented. 
For∣∣∆U¯k∣∣, we have∣∣∆U¯k∣∣ = ∣∣∣∣∣rk0i=n−1∑i=0 ai(rir0)keIkθixi∣∣∣∣∣=√√√√((rk0(i=n−1∑i=0ai(rir0)keIkθixi))∗rk0(i=n−1∑i=0ai(rir0)keIkθixi))=√√√√((rk0(a0x0+i=n−1∑i=1ai(rir0)keIkθixi))∗rk0(a0x0+i=n−1∑i=1ai(rir0)keIkθixi))=√√√√(r2k0(a∗0a0x∗0x0+O(r1r0)k))= rk0√√√√(a∗0a0x∗0x0+O(r1r0)k)(2.49)For∣∣∆U¯k−1∣∣, similarly, we have∣∣∆U¯k−1∣∣= rk−10√√√√(a∗0a0x∗0x0+O(r1r0)k−1). (2.50)The amplification factor γ is defined as the ratio of the norm of the computationalerrors of two successive iterations. Therefore, as k becomes large, the amplificationfactor at the kth iteration can be obtained:γk =∣∣∆U¯k∣∣|∆U¯k−1| =rk0 |a0| |x0|√(1+O(r1r0)k)rk−10 |a0| |x0|√(1+O(r1r0)k−1) = r0(1+O(r1r0)k−1).(2.51)As k becomes large, the term(r1r0)k−1is exponentially small. Hence, we can con-clude that the amplification factor γk→ r0 as k becomes large. As a result, if r0 islarger than one, the norm of ∆U¯ increases exponentially, resulting in instability andif is smaller than one, the norm of ∆U¯ decreases exponentially, resulting in stabil-ity. If the norm of r0 is one, convergence is still not feasible. Hence, for solution222.2. Stability Analysis Methodologyconvergence, the norm of the largest eigenvalue needs to be smaller than one forthe case where the largest eigenvalue is real.We can also calculate the angle between the vector ∆U¯k and the eigenvectorassociated with the largest eigenvalue. The angle is defined asθk = arccos(x∗0∆U¯k∣∣x∗0∣∣ |∆U¯k|). (2.52)According to Appendix B, we haveθk = pi or 0+O((r1r0)k)(2.53)Analogous to the norm, for large k, the angle converges to 0 or pi . Therefore, thetwo vectors converge to being parallel. Actually, this is the basis of the powermethod for eigenpair calculation.(2) We also need to consider the case where the largest eigenvalues are a conju-gate pair, which is common for the Jacobian matrices of flow problems, and morecomplicated to analyze. We can implement a similar analysis. 
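The convergence of the amplification factor γ_k to r₀ in the real-dominant-eigenvalue case (Equation (2.51)) is easy to confirm numerically; the diagonal matrix and initial error below are illustrative choices, not from an actual discretization:

```python
import numpy as np

A = np.diag([0.9, 0.5, -0.3])   # largest eigenvalue r0 = 0.9 is real and simple
dU = np.array([1.0, 1.0, 1.0])  # initial computational error exciting all modes
prev = np.linalg.norm(dU)
for k in range(60):
    dU = A @ dU                          # one application of the error mapping
    gamma = np.linalg.norm(dU)/prev      # amplification factor of Equation (2.51)
    prev = np.linalg.norm(dU)
print(f"gamma -> {gamma:.6f}")           # converges to r0 = 0.9
```

The subdominant modes decay like (r₁/r₀)^k relative to the dominant one, so after a modest number of iterations the ratio of successive error norms is indistinguishable from r₀, exactly as the asymptotic analysis predicts.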
Suppose the largesteigenvalues are a conjugate pair:λ0 = r0eIθ0 ,λ1 = r0e−Iθ0 .Since the computational error ∆U is real, the eigendecomposition coefficients as-sociated with a conjugate pair should be a conjugate pair as well. Here the coeffi-cients of eigendeomposition associated with the largest eigenvalues are presentedby a0eIα0 and a0e−Iα0 . The associated eigenvectors are a conjugate pair as well.Recall Equation (2.45) and replace a0i by a0 to get∆U¯k = Ak∆U¯ =i=n−1∑i=0aiλ ki xi.Isolating the largest eigenvalues from the rest, we geti=n−1∑i=0aiλ ki xi = rk0(a0eI(α0+kθ0)x0+a0e−I(α0+kθ0)x¯0+i=n−1∑i=2ai(rir0)keIkθixi)where x¯0 is the conjugate of x0. We can also calculate the amplification factor ofthe norm of the computational error of two successive iterations as we have donefor the case where the largest eigenvalue is real:232.2. Stability Analysis Methodologyγk =∣∣∆U¯k∣∣|∆U¯k−1| =∣∣∣∣rk0((a0eI(α0+kθ0)x0+a0e−I(α0+kθ0)x¯0)+ i=n−1∑i=2ai(rir0)keIkθixi)∣∣∣∣∣∣∣∣rk−10 ((a0eI(α0+(k−1)θ0)x0+a0e−I(α0+(k−1)θ0)x¯0)+ i=n−1∑i=2ai(rir0)k−1eI(k−1)θixi)∣∣∣∣= r0∣∣∣∣(a0eI(α0+kθ0)x0+a0e−I(α0+kθ0)x¯0)+ i=n−1∑i=2ai(rir0)keIkθixi∣∣∣∣∣∣∣∣(a0eI(α0+(k−1)θ0)x0+a0e−I(α0+(k−1)θ0)x¯0)+ i=n−1∑i=2ai(rir0)k−1eI(k−1)θixi∣∣∣∣ .Suppose the norm of the eigenvector equals one : |x0|= 1 . Therefore, we have∣∣∣∣∣(a0eI(α0+kθ0)x0+a0e−I(α0+kθ0)x¯0)+ i=n−1∑i=2 a0i(rir0)keIkθixi∣∣∣∣∣2=((a0eI(α0+kθ0)x0+a0e−I(α0+kθ0)x¯0)+i=n−1∑i=2a0i(rir0)keIkθixi)∗((a0eI(α0+kθ0)x0+a0e−I(α0+kθ0)x¯0)+i=n−1∑i=2a0i(rir0)keIkθixi)=((a0e−I(α0+kθ0)x¯ᵀ0 +a0eI(α0+kθ0)xᵀ0)(a0eI(α0+kθ0)x0+a0e−I(α0+kθ0)x¯0))+O((r1r0)k)=(2a20+a20e−2I(α0+kθ0)x¯ᵀ0 x¯0+a20e2I(α0+kθ0)xᵀ0x0)+O((r1r0)k)(2.54)where the operator ()ᵀ is getting the transpose of a vector. In Equation (2.54), x¯ᵀ0 x¯0and xᵀ0x0 are a conjugate pair of complex scalars. Given that |x0|= 1, we havex¯ᵀ0 x¯0 = e−2Iβ0 ,xᵀ0x0 = e2Iβ0 .Consequently, we have242.2. 
Stability Analysis Methodology∣∣∣∣∣(a0eI(α0+kθ0)x0+a0e−I(α0+kθ0)x¯0)+ i=n−1∑i=1 a0i(rir0)keIkθixi∣∣∣∣∣=√√√√(2a20+a20e−2I(α0+kθ0+β0)+a20e2I(α0+kθ0+β0))+O((r1r0)k).By applying the Euler’s formula, we have√(2a20+a20e−2I(α0+kθ0+β0)+a20e2I(α0+kθ0+β0))=√(2a20+2a20 cos(2(α0+ kθ0+β0))).thus,∣∣∣∣∣(a0eI(α0+kθ0)x0+a0e−I(α0+kθ0)x¯0)+ i=n−1∑i=1 a0i(rir0)keIkθixi∣∣∣∣∣=√(2a20+2a20 cos(2(α0+ kθ0+β0)))+O((r1r0)k). (2.55)Similarly, for the iteration of k−1, we have∣∣∣∣∣(a0eI(α0+(k−1)θ0)x0+a0e−I(α0+(k−1)θ0)x¯0)+ i=n−1∑i=2 a0i(rir0)k−1eI(k−1)θixi∣∣∣∣∣=√(2a20+2a20 cos(2(α0+(k−1)θ0+β0)))+O((r1r0)k−1). (2.56)Combing Equation (2.55) and Equation (2.56), we getγk =∣∣∆U¯k∣∣|∆U¯k−1| = r0√(2a20+2a20 cos(2(α0+ kθ0+β0)))+O((r1r0)k)√(2a20+2a20 cos(2(α0+(k−1)θ0+β0)))+O((r1r0)k−1)= r0(∣∣∣∣ cos(α0+ kθ0+β0)cos(α0+(k−1)θ0+β0)∣∣∣∣+O(r1r0)k−1)(2.57)= r0(|cos(θ0)− sin(θ0) tan(α0+(k−1)θ0+β0)|+O((r1r0)k−1)). (2.58)252.2. Stability Analysis MethodologyDropping the high order terms in equation (2.58), we haveγk = γ (k,θ0,α0,β0) = r0 |cos(θ0)− sin(θ0) tan(α0+(k−1)θ0+β0)| . (2.59)Unlike the case where the largest eigenvalue is real, in which the amplification fac-tor of the norm of ∆U¯ of two successive iterations converges to the norm of thelargest eigenvalue, in this case, the amplification factor associated with two suc-cessive iterations also depend on the number of iterations k, and from Equation(2.59), we can see the amplification factor will oscillate around r0 since tan() is aperiodic function. Since stability describes the asymptotic properties of a scheme,a small number of iterations is not informative. On the contrary, we must focus onthe asymptotic ratios of the norm of the computational error over a large numberof iterations. With the help of Equation (2.57), we can calculate the amplificationfactor of the norm of the computational error of two largely distant iterations. As-sume that if k > m, the higher order terms in Equation (2.57) can be ignored. WedefineΓk =∣∣∆U¯k∣∣|∆U¯0| . 
For a number n > m, we have
\[
\Gamma_n = \frac{\left|\Delta \bar{U}_n\right|}{\left|\Delta \bar{U}_0\right|} = \Gamma_m \prod_{k=m+1}^{n} \gamma_k.
\tag{2.61}
\]
Recalling Equation (2.57) for γ_k and dropping the higher order terms, we have
\[
\Gamma_n = \Gamma_m \prod_{k=m+1}^{n} r_0 \left| \frac{\cos(\alpha_0 + k\theta_0 + \beta_0)}{\cos(\alpha_0 + (k-1)\theta_0 + \beta_0)} \right|
= \Gamma_m \, r_0^{\,n-m} \left| \frac{\cos(\alpha_0 + n\theta_0 + \beta_0)}{\cos(\alpha_0 + m\theta_0 + \beta_0)} \right|.
\tag{2.62}
\]
From Equation (2.62), we conclude that r_0 determines the asymptotic growth rate of the computational error ∆Ū. Therefore, to have a stable system, the following condition is needed:
\[
r_0 < 1.
\tag{2.63}
\]
If r_0 is smaller than one, the scheme is stable. However, from Equation (2.59), we expect that the norm of the computational error ∆Ū can increase locally, which explains the phenomenon reported by F. Haider et al. (2009) [7] that the energy might increase locally even if the scheme is stable. The variation of the direction of the computational error ∆Ū is complicated enough that we do not discuss it here. Though we have analyzed only the computational error, the conclusions should be the same for the other two computational vectors, the flux integral and the solution update.

2.3 Stability of the Method of Lines

In Section 2.2, we presented the stability condition for the full discretization and related the behavior of the computational vectors to the eigenvalues of the mapping. In practice, it is more usual to focus on the stability properties of the spatial discretization, i.e., the method of lines. In this section, we present the stability condition for the spatial discretization.

The semi-discretization of the 2D Euler equations can also be written as
\[
\frac{d\bar{U}(t)}{dt} = -R\left(\bar{U}(t)\right).
\tag{2.64}
\]
Since this equation is nonlinear, we focus on the stability of the steady state solution Ū_s, which is also a fixed point. For a small perturbation to the steady solution, Ū_p = Ū_s + ∆Ū_p, we have
\[
\frac{d\,\Delta\bar{U}_p(t)}{dt} = -\left.\frac{\partial R}{\partial \bar{U}}\right|_s \Delta\bar{U}_p(t) + h\left(\Delta\bar{U}_p(t),\, t\right),
\]
where h(∆Ū_p(t), t) is a function representing the difference between d∆Ū_p(t)/dt and −(∂R/∂Ū)|_s ∆Ū_p(t).
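The smallness assumption on h can be checked concretely. The sketch below (plain Python; the residual R and its steady state are hypothetical stand-ins, not the 2D Euler residual) forms h(∆Ū_p) = −R(Ū_s + ∆Ū_p) + (∂R/∂Ū)|_s ∆Ū_p for a 2×2 nonlinear system and confirms that |h|/|∆Ū_p| shrinks in proportion to |∆Ū_p|, i.e., the remainder is second order:

```python
import math

# Hypothetical nonlinear residual with steady state u_s = (1, 1): R(u_s) = 0.
def R(u):
    return [u[0]**2 + u[1] - 2.0, u[0] + u[1]**2 - 2.0]

def J(u):  # analytic Jacobian dR/du of the model residual
    return [[2.0 * u[0], 1.0], [1.0, 2.0 * u[1]]]

u_s = [1.0, 1.0]
Js = J(u_s)

def remainder_ratio(eps):
    """|h| / |du| for du = eps*(1,1)/sqrt(2), with h = -R(u_s+du) + Js*du."""
    du = [eps / math.sqrt(2.0)] * 2
    Ru = R([u_s[0] + du[0], u_s[1] + du[1]])
    Jdu = [Js[0][0] * du[0] + Js[0][1] * du[1],
           Js[1][0] * du[0] + Js[1][1] * du[1]]
    h = [-Ru[i] + Jdu[i] for i in range(2)]
    return math.hypot(*h) / eps

r1, r2 = remainder_ratio(1e-2), remainder_ratio(1e-3)
print(r1, r2)  # the ratio drops by ~10x when |du| drops by 10x
```

Shrinking the perturbation by a factor of ten shrinks |h|/|∆Ū_p| by the same factor, which is exactly the uniform limit assumed in the theorem that follows.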
Theorem 4.3 in [1] addresses the stability of a system like Equation (2.64). We follow this theorem to discuss the stability of the fixed point solution of Equation (2.64). We assume h(∆Ū_p(t), t) and −(∂R/∂Ū)|_s are continuous in (t, ∆Ū_p) for 0 ≤ t < ∞ and |∆Ū_p| < k, where k is a small positive constant. We also assume that h(∆Ū_p(t), t) satisfies
\[
\lim_{|\Delta\bar{U}_p| \to 0} \frac{\left|h\left(\Delta\bar{U}_p(t),\, t\right)\right|}{\left|\Delta\bar{U}_p(t)\right|} = 0, \quad \text{uniformly for } t \in [0, \infty).
\]
To have a stable fixed point solution, we need all the eigenvalues of the matrix −(∂R/∂Ū)|_s to have negative real parts, i.e., Re(Λ(−(∂R/∂Ū)|_s)) < 0, where Λ(−(∂R/∂Ū)|_s) denotes the spectrum of the Jacobian matrix −(∂R/∂Ū)|_s.

We need to use the stability condition of the semi-discretization for stabilization both at a fixed point and at a non-fixed point. For both cases, h(∆Ū_p(t), t) is small; therefore, it is sufficient to use Re(Λ(−(∂R/∂Ū)|_s)) < 0 as the stability condition.

Chapter 3
Numerical Validation for Stability Methodology

3.1 Numerical Testing Methodology

In this chapter, we test the validity of the stability analysis methodology of Chapter 2 by numerical examples. We use two linear problems, Poisson's equation and the advection-diffusion equation, and a nonlinear problem, the two dimensional Euler equations, as test problems. For Poisson's equation and the advection-diffusion equation, least squares solution reconstruction [19] is employed. For the two dimensional Euler equations, least squares reconstruction [19] and Roe flux difference splitting [24] are used. The spatial discretization of the PDEs is implemented using the Advanced Numerical Simulation Library (ANSLib) [21]. We use the method in [17] to implement pseudo-timestepping to reach the steady state solution. The implementation of the time-stepping employs the Portable, Extensible Toolkit for Scientific Computation (PETSc) [25], a linear algebra library, to reach the converged solution. Once the converged solution is reached, we perturb the solution randomly and apply backward Euler time-stepping. For mesh generation,
For mesh generation,we use the Generation and Refinement of Unstructured, Mixed-Element Meshes inParallel (GRUMMP) [20] package.We also need the explicit formation of the Jacobian matrix and its eigenval-ues to test the methodology. The Jacobian matrix is calculated analytically bythe method originally developed by Michalak and Olliver-Gooch (2010) [17] andstored explicitly. The eigenvalue calculation is implemented by the Scalable Li-brary for Eigenvalue Problem Computations [2] (SLEPc).The mappings of the solution update, the computational error, and the flux in-tegral are identical if the backward Euler time stepping method is employed. Asa result, the stability conditions inherited from these three mappings are the same,which is reasonable. In Chapter 2, we have developed a model for the asymptoticbehavior of three variables: the solution update δU¯ , the computational error ∆U¯ ,and the flux integral R(U¯), and also constructed the corresponding stability con-dition; we have also established a stability condition based solely on the spatial283.1. Numerical Testing Methodologydiscretization. In this chapter, we test if the prediction of the behavior of thosevariables based on the presented model is consistent with their actual behavior. Ifthe prediction coincides with the actual behavior, the methodology is valid.The three variables, solution update, computational error, and flux integral,are all real vectors. As a vector, the norm and the direction are of interest to us.Mapping (2.33) models how the norm and direction change after a large number ofiterations, i.e., asymptotically. In previous analysis, we have drawn the followingconclusions. (1) If the largest eigenvalue is real, as the iteration becomes a largenumber, the amplification factor converges to the norm of the largest eigenvaluefor each variable, and the direction of the three vectors converge to being parallelto the direction of the eigenvector associated with the largest eigenvalue. 
(2) If the largest eigenvalues are a conjugate complex pair, the norm and the direction behave differently from the case where the largest eigenvalue is real. In this case, neither the norm nor the direction approaches a constant as the iteration count k becomes large, so we use a different approach to test the validity indirectly.

Recall Equation (2.62), and take the higher order term into consideration:
\[
\Gamma_n = \frac{\left|\Delta\bar{U}_n\right|}{\left|\Delta\bar{U}_0\right|}
= \Gamma_m \left( r_0^{\,n-m} \left| \frac{\cos(\alpha_0 + n\theta_0 + \beta_0)}{\cos(\alpha_0 + m\theta_0 + \beta_0)} \right| + O\!\left(\left(\frac{r_1}{r_0}\right)^{\!m}\right) \right).
\tag{3.1}
\]
Thus, for two large numbers j and k, the ratio of the norms of the computational error at these two iterations is
\[
\Gamma_{k-j} = \frac{\left|\Delta\bar{U}_k\right|}{\left|\Delta\bar{U}_j\right|}
= r_0^{\,k-j} \left| \frac{\cos(\alpha_0 + k\theta_0 + \beta_0)}{\cos(\alpha_0 + j\theta_0 + \beta_0)} \right| + O\!\left(\left(\frac{r_1}{r_0}\right)^{\!j}\right).
\tag{3.2}
\]
Suppose (k−j)θ_0 = lπ + φ, where l is an integer and |φ| < |θ_0| < π. Thus, we have
\[
\Gamma_{k-j} = \frac{\left|\Delta\bar{U}_k\right|}{\left|\Delta\bar{U}_j\right|}
= r_0^{\,k-j} \left| \frac{\cos(\alpha_0 + j\theta_0 + \beta_0 + l\pi + \varphi)}{\cos(\alpha_0 + j\theta_0 + \beta_0)} \right| + O\!\left(\left(\frac{r_1}{r_0}\right)^{\!j}\right),
\tag{3.3}
\]
in which we have
\[
\begin{aligned}
\left| \frac{\cos(\alpha_0 + j\theta_0 + \beta_0 + l\pi + \varphi)}{\cos(\alpha_0 + j\theta_0 + \beta_0)} \right|
&= \left| \frac{\cos(\alpha_0 + j\theta_0 + \beta_0 + \varphi)}{\cos(\alpha_0 + j\theta_0 + \beta_0)} \right| \\
&= \left| \frac{\cos(\alpha_0 + j\theta_0 + \beta_0)\cos\varphi - \sin(\alpha_0 + j\theta_0 + \beta_0)\sin\varphi}{\cos(\alpha_0 + j\theta_0 + \beta_0)} \right| \\
&= \left| \cos\varphi - \tan(\alpha_0 + j\theta_0 + \beta_0)\sin\varphi \right| \\
&\approx 1 - \tfrac{1}{2}\varphi^2 - \tan(\alpha_0 + j\theta_0 + \beta_0)\left(\varphi - \tfrac{1}{6}\varphi^3\right).
\end{aligned}
\tag{3.4}
\]
Therefore, we have
\[
\Gamma_{k-j} = \frac{\left|\Delta\bar{U}_k\right|}{\left|\Delta\bar{U}_j\right|}
\approx r_0^{\,k-j} \left( 1 - \tfrac{1}{2}\varphi^2 - \tan(\alpha_0 + j\theta_0 + \beta_0)\,\varphi \right) + O\!\left(\left(\frac{r_1}{r_0}\right)^{\!j}\right).
\tag{3.5}
\]
We define an averaged amplification factor of two successive iterations as
\[
\bar{\gamma} = \Gamma_{k-j}^{\,1/(k-j)}.
\tag{3.6}
\]
The terms φ²/2, tan(α_0 + jθ_0 + β_0)φ, and O((r_1/r_0)^j) are small. Substituting Equation (3.5) into Equation (3.6) gives
\[
\bar{\gamma} = r_0 \left( 1 + O\!\left( \frac{-\tfrac{1}{2}\varphi^2 - \tan(\alpha_0 + j\theta_0 + \beta_0)\,\varphi}{k-j} \right) \right)
+ O\!\left( \frac{1}{k-j}\left(\frac{r_1}{r_0}\right)^{\!j} \right).
\tag{3.7}
\]
Given Equation (3.7), we can test whether γ̄ matches r_0 to assess the validity of the methodology. From Equation (2.59), we can also see that the convergence history will oscillate, and that the amplification factor associated with two successive iterations will oscillate around the norm of the largest eigenvalue. As it is complicated to
analyze the variation of the direction of the computational error, solution update, and flux integral at large iteration counts, we do not implement numerical tests for it.

Since the direction of the three vectors is evaluated by the angle between the vector and the eigenvector associated with the largest eigenvalue, we use the term angle in place of the term direction.

3.2 Precision Estimation

It would be misleading to state that one set of data matches another while the degree of coincidence is not explicitly presented. In this section, we estimate analytically the extent to which the actual data matches the analytical value, both for the amplification factor and for the angle. We begin by estimating the precision of the model.

For a linear problem, for instance Poisson's equation, the derivation of Mapping (2.31) does not need to drop any higher order terms; thus, the mapping precisely describes the relation of the three variables (computational error, solution update, and flux integral) between two successive iterations. For a nonlinear problem, specifically the Euler equations in this thesis, the dropped higher order terms are of O((∆Ū_k)²), which is sufficiently small that it is not the principal factor affecting the precision.

Next, we need to estimate how close the actual amplification factor will be to the expected value, the norm of the largest eigenvalue, and likewise how close the angle will be to its expected value (0 or π). Again, we divide the discussion into two categories: the case where the largest eigenvalue is real, and the case where the largest eigenvalues are a conjugate complex pair.

(1) We consider first the case where the largest eigenvalue is real. Recalling Equation (2.51), we have
\[
\gamma_k = r_0 \left( 1 + O\!\left(\left(\frac{r_1}{r_0}\right)^{\!k-1}\right) \right).
\tag{3.8}
\]
Seen from this equation, the extent to which the actual amplification factor coincides with r_0 is constrained primarily by the term O((r_1/r_0)^{k−1}).
We use ∆γ_k to denote the difference between the actual amplification factor at the kth iteration, γ_k, and r_0:
\[
\Delta\gamma_k = \left|\gamma_k - r_0\right|.
\tag{3.9}
\]
∆γ refers to this difference when the iteration is not specifically expressed. From Equation (3.8), we can see that ∆γ_k behaves like (r_1/r_0)^{k−1}. Simply put, if ∆γ_k is small, meaning γ_k is close to r_0, and the decay rate of ∆γ_k is close to r_1/r_0, then Equation (3.8) holds, which verifies the validity of the stability analysis model.

For the angle, we have similar conclusions. Recall Equation (2.53):
\[
\theta_k = \pi \text{ or } 0 + O\!\left(\left(\frac{r_1}{r_0}\right)^{\!k}\right).
\]
The difference between the actual angle and π or 0 is denoted by ∆θ, which for the kth iteration is
\[
\Delta\theta_k = |\theta_k| \text{ or } |\theta_k - \pi| = O\!\left(\left(\frac{r_1}{r_0}\right)^{\!k}\right).
\tag{3.10}
\]
Similar to ∆γ_k, if ∆θ_k is small and its decay rate is close to r_1/r_0, this verifies the validity of Equation (2.53), and therefore the validity of the stability analysis model.

(2) We consider the case where the largest eigenvalues are complex. For this case, we only study the amplification factor. Recalling Equation (3.7),
\[
\bar{\gamma} = r_0 \left( 1 + O\!\left( \frac{-\tfrac{1}{2}\varphi^2 - \tan(\alpha_0 + j\theta_0 + \beta_0)\,\varphi}{k-j} \right) \right)
+ O\!\left( \frac{1}{k-j}\left(\frac{r_1}{r_0}\right)^{\!j} \right).
\tag{3.11}
\]
From this equation, we see that the precision with which γ̄ matches r_0 is constrained by the larger of the two O(·) terms. Both terms are complicated to evaluate, but if the difference between γ̄ and r_0 is small, we can be confident of the validity of the model for the case where the largest eigenvalues are complex.

3.3 Test Case No. 1 — Poisson's Equation

The form of Poisson's equation is
\[
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = S,
\tag{3.12}
\]
where
\[
S = 2\pi^2 \sin(\pi x)\sin(\pi y).
\tag{3.13}
\]
The spatial domain is a unit square. The boundary condition is u = 0 on the boundary.

Due to the inherent linearity, Equation (2.32) holds exactly for Poisson's equation, since no higher order terms are dropped. The problem is solved on an unstructured mesh consisting of 784 triangles.
Cell-centered control volumes are employed for flux integration. The fourth order reconstruction scheme is used.

The mesh on which the equation is solved is shown in Figure (3.1), and the solution is shown in Figure (3.2). Once the converged solution is reached, we perturb the solution randomly and then apply backward Euler time-stepping with dt = 0.01, starting from the perturbed solution. The first twenty largest eigenvalues are shown in Figure (3.3), in which the unit circle is also shown. If all the eigenvalues lie inside the unit circle, the scheme is stable; otherwise, it is not stable.

Figure 3.1: Mesh for solving Poisson's equation
Figure 3.2: Solution to Poisson's equation
Figure 3.3: First twenty largest eigenvalues for Poisson's equation case (4th order, cell-centered, dt=0.01)

3.3.1 Numerical Results and Data Analysis

As demonstrated previously, the mappings of the three variables are identical as long as the backward Euler temporal discretization with a constant time-step is applied, and the convergence rate, i.e., the amplification factor, is expected to be the norm of the largest eigenvalue. Therefore, we can plot the convergence histories for all three variables, together with the convergence rate predicted by the eigenvalue for comparison. To obtain a more detailed sense of the convergence rates, we plot the amplification factors of two successive iterations as well.

As seen in Figure (3.4), the convergence histories of the flux integral, the computational error, and the solution update are consistent with the convergence rate predicted by the largest eigenvalue.
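The unit-circle test and the predicted convergence rate can be rehearsed on a one-dimensional stand-in for the discretization. The sketch below is illustrative only: a 1D Laplacian replaces the unstructured finite volume Jacobian, and none of the names refer to the ANSLib/PETSc/SLEPc implementation used in this chapter. It builds the backward Euler error-propagation matrix M = (I − dt·J)⁻¹, confirms that its spectrum lies inside the unit circle, and checks that the successive amplification factor of a perturbation converges to max|λ(M)|, as Equation (3.8) predicts:

```python
import numpy as np

n, dt = 50, 0.01
h = 1.0 / (n + 1)
# 1D Laplacian with Dirichlet BCs: Jacobian of du/dt = u_xx - S (model only).
J = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
# Backward Euler error propagation: e_{k+1} = (I - dt*J)^{-1} e_k.
M = np.linalg.inv(np.eye(n) - dt * J)
r0 = np.max(np.abs(np.linalg.eigvals(M)))  # norm of the largest eigenvalue

rng = np.random.default_rng(0)
e = rng.standard_normal(n)  # random perturbation of the converged solution
gammas = []
for _ in range(120):
    e_new = M @ e
    gammas.append(np.linalg.norm(e_new) / np.linalg.norm(e))
    e = e_new
print(r0, gammas[-1])  # the observed factor settles on the largest |lambda|
```

Since all eigenvalues of J are negative here, every eigenvalue of M is inside the unit circle, and the observed γ_k approaches r_0 at the rate (r_1/r_0)^k, mirroring the behavior seen in Figure (3.5).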
We also compare the amplification factors associated with two successive iterations against the norm of the largest eigenvalue in Figure (3.5). From Figure (3.5), we can see that after the 25th iteration, the amplification factors agree well with the norm of the largest eigenvalue for all three variables. For the flux integral, the degree of coincidence decreases after the 40th iteration, as the computational error approaches machine precision. Given Equation (A.10) in Appendix A, it is reasonable that the amplification factor of the flux integral coincides with the amplification factor of the solution update.

To gain a clearer understanding of how close the amplification factors are to the norm of the largest eigenvalue, we calculate ∆γ_k and plot it in Figure (3.6). From Figure (3.6), we can draw the following conclusions: for all three variables, the norm of ∆γ_k is small after the 25th iteration, and the decay rates of ∆γ_k are consistent with r_1/r_0; however, for the flux integral, the norm of ∆γ_k increases after the 40th iteration, and its growth rate there does not coincide well with r_1/r_0.

Similar to the amplification factor, for the angle we calculate ∆θ_k to obtain Figure (3.7). We can see that for all three variables, ∆θ_k is small. For the computational error, the decay rate of ∆θ_k is consistent with r_1/r_0 from the 25th to the 44th iteration. For the solution update, the decay rate of ∆θ_k is consistent with r_1/r_0 from the 25th to the 42nd iteration. For the flux integral, the decay rate of ∆θ_k is consistent with r_1/r_0 from the 25th to the 33rd iteration. The precision of coincidence decreases as the computational error approaches machine precision.

We also visualize the eigenvector associated with the largest eigenvalue in Figure (3.8a). The vector of the flux integral at the 30th iteration is plotted in Figure (3.8b).
Figure (3.8c) is the image of the solution update at the 45th iteration, and Figure (3.8d) is the image of the computational error at the 45th iteration. We can observe that their shapes are similar and that the direction of the solution update is opposite to that of the other three vectors, which is expected from Equation (A.10).

Figure 3.4: Convergence history for Poisson's equation
Figure 3.5: Amplification factors for Poisson's equation
Figure 3.6: The difference between the actual amplification factor and the norm of the largest eigenvalue (∆γ) for the Poisson problem
Figure 3.7: The difference between the actual angle and 0 or π (∆θ) for the Poisson problem
Figure 3.8: Comparison among computational vectors and eigenvector for Poisson's equation case: (a) the eigenvector associated with the largest eigenvalue; (b) the flux integral at the 30th iteration; (c) the solution update at the 45th iteration; (d) the computational error at the 45th iteration

3.3.2 Summary

The data and analysis above give us the following conclusions:

1. For the amplification factor: ∆γ_k is small for all three variables, and the decay rate of ∆γ_k is consistent with r_1/r_0 for all three variables.

2. For the angle: ∆θ_k is small for all three variables, and the decay rate of ∆θ_k is consistent with r_1/r_0 for all three variables.

Combined with the numerical validation methodology discussed previously, these conclusions strongly demonstrate the validity of the stability methodology for Poisson's equation. In this case, the largest eigenvalue is real. In the next section, we consider the case where the largest eigenvalues are complex.

3.4 Test Case No. 2 — 2D Advection-Diffusion Problem

Compared with Poisson's equation, the Jacobian matrix associated with this case features a conjugate complex pair of largest eigenvalues, which is also often seen in the Jacobian matrix associated with a flow problem. The equation for the 2D advection-diffusion problem is
\[
\frac{\partial u}{\partial t} + a(x, y)\frac{\partial u}{\partial x} + b(x, y)\frac{\partial u}{\partial y}
= \alpha\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right),
\tag{3.14}
\]
where α = 0.005, a(x, y) = 1.0, and b(x, y) = 0.0. The spatial domain is a rectangle with length L equal to three in the x-direction and height H equal to one in the y-direction. The boundary conditions are
\[
u(0, y) = \sin(\pi y), \qquad u(x, 0) = u(x, H) = 0, \qquad \frac{\partial u(L, y)}{\partial x} = 0.
\]
Backward Euler time-stepping with dt = 0.01, fourth order reconstruction, and cell-centered control volumes are employed. The mesh, which is shown in Figure (3.9), contains 222 triangles. The solution is presented in Figure (3.10). The first twenty largest eigenvalues are shown in Figure (3.11).
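Before examining the results, the behavior predicted by Equation (2.59) for a complex dominant pair can be reproduced with a tiny model iteration. The 2×2 matrix below is an arbitrary non-normal example (not the actual Jacobian of this case): its eigenvalues are 0.6 ± i√0.4, so r_0 = √0.76 ≈ 0.872 < 1. The successive ratios γ_k oscillate around r_0, the norm can even grow locally, and yet the averaged factor recovers r_0, exactly the features discussed in Sections 2.2 and 3.1:

```python
import math

# Non-normal 2x2 map with complex conjugate eigenvalues 0.6 +/- i*sqrt(0.4).
A = [[0.6, 1.0], [-0.4, 0.6]]
r0 = math.sqrt(0.76)  # |lambda_0| < 1, so the iteration is stable

x = [1.0, 0.0]
norms = [1.0]
for _ in range(60):
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
    norms.append(math.hypot(*x))

# Successive amplification factors oscillate around r0 (Eq. (2.59)).
gammas = [norms[k] / norms[k - 1] for k in range(1, len(norms))]
print(norms[2] < norms[3])   # True: the norm grows from step 2 to step 3
print(norms[60] < norms[0])  # True: but it decays asymptotically
print((norms[60] / norms[0]) ** (1.0 / 60))  # averaged factor, close to r0
```

The local growth of the norm despite r_0 < 1 is the same phenomenon, in miniature, as the local energy increase discussed after Equation (2.63).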
Figure 3.9: Mesh for advection-diffusion problem
Figure 3.10: Solution to advection-diffusion problem
Figure 3.11: First twenty largest eigenvalues for the advection-diffusion problem (4th order, cell-centered, dt=0.01)

3.4.1 Numerical Results and Data Analysis

As in the last case, we plot the convergence histories in Figure (3.12) and the amplification factors in Figure (3.13). Different from the last case, here we see the convergence history oscillate, which is predicted in theory, but the asymptotic convergence rates are consistent with the rate predicted by the norm of the largest eigenvalue. From Figure (3.13), we see the amplification factors oscillate around the norm of the largest eigenvalue, which is consistent with the previous discussion. Figure (3.14) is a zoom of the amplification factors from the 580th iteration to the 697th iteration. From Figure (3.14), we see the amplification factors behave roughly periodically.

According to the previous discussion, we can calculate the period in the following way. First, the angle associated with the largest eigenvalues is
\[
\theta = \arctan\left(\frac{\mathrm{Im}(\lambda_0)}{\mathrm{Re}(\lambda_0)}\right) = 0.09193.
\tag{3.15}
\]
The approximate period for the amplification factor is
\[
T = \frac{\pi}{\theta} \approx 34.
\tag{3.16}
\]
This period agrees with the period measured in Figure (3.14).

Figure 3.12: Convergence history for the advection-diffusion problem
Figure 3.13: Amplification factors of two successive iterations for the advection-diffusion problem
Figure 3.14: A closeup of Figure (3.13)

To investigate the asymptotic behavior of the amplification factors, we calculate the average amplification factors defined by Equation (3.6). The procedure is:

1. Calculate the total amplification factor for two iterations k and k + T:
\[
\Gamma_{\text{total}} = \frac{\left|\Delta\bar{U}_{k+T}\right|}{\left|\Delta\bar{U}_k\right|}.
\tag{3.17}
\]

2. Calculate the average amplification factor
\[
\bar{\gamma} = \Gamma_{\text{total}}^{\,1/T}.
\]

3. Repeat steps (1)-(2) for a range of k to obtain the average amplification factor as a function of the iteration count k, which is visualized in Figure (3.15); note that the scale of Figure (3.15) is different from that of Figure (3.13) and Figure (3.14).

In Figure (3.15), k starts from 500 and T = 34. Although the averaged amplification factors still oscillate, the oscillation is much smaller, and the largest difference is below 0.01.
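The procedure above can be sketched in a few lines. Here the error-norm history is synthetic, generated from the model of Equation (2.57) as |∆Ū_k| ∝ r_0^k |cos(α_0 + kθ_0 + β_0)|; θ_0 is the value from Equation (3.15), while r_0, α_0, and β_0 are arbitrary illustrative constants, not values fitted to this test case:

```python
import math

r0, theta0 = 0.955, 0.09193       # r0 illustrative; theta0 from Eq. (3.15)
alpha0, beta0 = 0.3, 0.2          # arbitrary phase constants
T = round(math.pi / theta0)       # oscillation period, Eq. (3.16): 34

def err_norm(k):
    """Synthetic |dU_k| following the model of Eq. (2.57)."""
    return r0**k * abs(math.cos(alpha0 + k * theta0 + beta0))

ks = range(500, 660)
raw = [err_norm(k + 1) / err_norm(k) for k in ks]                  # gamma_k
avg = [(err_norm(k + T) / err_norm(k)) ** (1.0 / T) for k in ks]   # Eq. (3.6)

raw_dev = max(abs(g - r0) for g in raw)
avg_dev = max(abs(g - r0) for g in avg)
print(T, raw_dev, avg_dev)  # averaging shrinks the deviation dramatically
```

The raw successive factors swing widely whenever the cosine passes near zero (the tan(·) blow-up in Equation (2.59)), while the windowed average Γ^(1/T) stays near r_0, which is why the thesis compares γ̄ rather than γ_k against the eigenvalue norm in this case.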
Therefore, the validity of Equation (3.11) is verified.

Figure 3.15: The average amplification factors over 34 iterations for the advection-diffusion problem

3.4.2 Summary

Because the largest eigenvalues are complex, we only consider the amplification factors. From the data and analysis in this section, we can draw the following conclusions:

1. As predicted in theory, the convergence rate of the three variables does not converge to a constant, but the asymptotic rate is consistent with the norm of the largest eigenvalue.

2. The amplification factors of two successive iterations oscillate around the norm of the largest eigenvalue, which coincides with the theoretical prediction.

3. The amplification factors of two successive iterations behave periodically with the predicted period.

4. The average amplification factor defined in this chapter is close to the norm of the largest eigenvalue.

For the case where the largest eigenvalues are complex, we have Equation (2.57), Equation (2.58), and Equation (2.59) to calculate the amplification factors of two successive iterations. Though we do not have direct evidence of the validity of these three equations, conclusions (1)-(4) show their validity indirectly.

3.5 Test Case No. 3 — One Stable 2D Euler Equations

The previous two cases demonstrate the validity of the methodology for linear problems, for both real and complex largest eigenvalues. However, the problem of interest is the two dimensional Euler equations.
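Both stability checks used in the Euler test cases reduce to simple predicates on a computed spectrum: the full discretization is stable when every eigenvalue of the time-stepping map lies inside the unit circle (Equation (2.63)), and the semi-discretization is stable when every eigenvalue of −∂R/∂Ū has a negative real part (Section 2.3). A minimal helper, exercised here on illustrative spectra only:

```python
def full_discretization_stable(eigs):
    """Eq. (2.63)-style check: all |lambda| < 1 for the time-stepping map."""
    return max(abs(z) for z in eigs) < 1.0

def semi_discretization_stable(eigs):
    """Method-of-lines check: all Re(lambda) < 0 for -dR/dU."""
    return max(z.real for z in eigs) < 0.0

# A stable spectrum and an unstable one (illustrative numbers only):
ok_full = full_discretization_stable([0.9 + 0.1j, 0.9 - 0.1j, 0.5])
ok_semi = semi_discretization_stable([-2.0 + 3.0j, -2.0 - 3.0j])
bad_full = full_discretization_stable([1.02 + 0.05j, 1.02 - 0.05j])
print(ok_full, ok_semi, bad_full)
```

In the thesis workflow the spectra fed to such checks come from SLEPc applied to the explicitly stored Jacobian; the helper itself is just the final comparison.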
In the following sections, we test the validity of the methodology applied to the discretized time-stepping systems inherited from the two dimensional Euler equations. We begin with a stable case where the largest eigenvalue is real.

The governing equations are the 2D Euler equations, which are shown in Equation (2.1), Equation (2.2), and Equation (2.3). The spatial domain is the region around the NACA 0012 airfoil. A slip boundary condition is applied along the airfoil; the free stream is steady flow with Mach number equal to 0.75; and the angle of attack is 0.5 degrees.

Fourth order reconstruction, Roe flux difference splitting, cell-centered control volumes, and backward Euler time-stepping with dt = 10 are used. The mesh, shown in Figure (3.16), contains 3063 triangles; a zoom of the part around the airfoil is shown in Figure (3.17). The solution in terms of the Mach field is presented in Figure (3.18), and a zoom of the Mach field around the airfoil is shown in Figure (3.19). The first twenty largest eigenvalues are plotted in Figure (3.20).

Figure 3.16: Full image of the mesh for solving the two dimensional Euler case with Mach=0.75, angle=0.5
Figure 3.17: Zoom of the mesh around the airfoil for solving the two dimensional Euler case with Mach=0.75, angle=0.5

3.5.1 Numerical Results and Data Analysis

Figure (3.21) shows the convergence histories. The amplification factors associated with two successive iterations are shown in Figure (3.22). Both figures show that the actual convergence rates coincide well with the norm of the largest eigenvalue.
Figure 3.18: Mach field for the two dimensional Euler equations with Mach=0.75, angle=0.5 free stream
Figure 3.19: Zoom of the Mach field around the airfoil for the two dimensional Euler equations with Mach=0.75, angle=0.5 free stream
Figure 3.20: The first twenty largest eigenvalues for the two dimensional Euler case with Mach=0.75, angle=0.5

The difference between the actual amplification factors and the norm of the largest eigenvalue, ∆γ, is plotted against the iteration count in Figure (3.23). From Figure (3.23), we see that ∆γ is small for all three variables and is roughly of O((r_1/r_0)^{k−1}). We also see oscillation of ∆γ in Figure (3.23). The second largest eigenvalues are a conjugate pair, which explains this oscillation. However, the asymptotic decay rates of ∆γ coincide with r_1/r_0. For the flux integral, we see the precision decreases when the computational error approaches machine precision.

For the angle, we calculate ∆θ to obtain Figure (3.24). We can see that ∆θ is small after the 31st iteration. A pattern of oscillation can be seen in ∆θ for all three variables, but the asymptotic rates are consistent with r_1/r_0. Here we also see the precision of coincidence decreases near convergence.

We also present the visualization of the vectors in Figures (3.25a) through (3.25d). There are four variables in each control volume, but only the density vector is displayed.
Since the entries with relatively large norm are all near the airfoil, only a zoom of the area near the airfoil is presented. We can see that the shapes are similar and that the solution update vector is opposite in direction to the other three vectors.

Figure 3.21: Convergence history for the two dimensional Euler case with Mach=0.75, angle=0.5
Figure 3.22: Amplification factor comparison for the two dimensional Euler case with Mach=0.75, angle=0.5
Figure 3.23: The difference between the amplification factor and the norm of the largest eigenvalue (∆γ) for the two dimensional Euler case with Mach=0.75, angle=0.5
Figure 3.24: The difference between the actual angle and 0 or π (∆θ) for the 2D Euler case with Mach=0.75, angle=0.5

3.5.2 Summary

Although the two dimensional Euler equations are nonlinear, the data and analysis above give us the following conclusions:

1. For the amplification factor: ∆γ_k is small for all three variables; there is a pattern of oscillation in ∆γ_k, but the asymptotic rate matches r_1/r_0 for all three variables.

2. For the angle: ∆θ_k is small for all three variables; there is a pattern of oscillation in ∆θ_k, but the asymptotic rate is consistent with r_1/r_0 for all three variables.

These conclusions demonstrate that the methodology is valid for the two dimensional Euler equations in a case where the largest eigenvalue of the associated Jacobian matrix is real.

3.6 Test Case No. 4 — Another Stable 2D Euler Equations

Different from Test Case No. 3, in this case the Mach number is 1.05 and the angle of attack is 0.5 degrees. Vertex-centered control volumes and backward Euler time-stepping with dt = 5 are used. Other settings are the same as those in Test Case No. 3.

Figure 3.25: Comparison of computational vectors and eigenvector for the 2D Euler case with Mach=0.75, angle=0.5 (density component, rotated for better visualization): (a) the eigenvector associated with the largest eigenvalue; (b) the computational error at the 40th iteration; (c) the solution update at the 40th iteration; (d) the flux integral at the 40th iteration

The solution in terms of the Mach field is presented in Figure (3.26), and a zoom of the Mach field around the airfoil is shown in Figure (3.27).
The first twenty largest eigenvalues are plotted in Figure (3.28). The largest eigenvalues of the Jacobian matrix in this case are a conjugate complex pair, which is different from Test Case No. 3.

Figure 3.26: Mach field of the 2D Euler case with Mach=1.05, angle=0.5 free stream
Figure 3.27: Zoom of the Mach field around the airfoil of the 2D Euler case with Mach=1.05, angle=0.5 free stream

3.6.1 Numerical Results and Data Analysis

Figure (3.29) shows the actual convergence histories. We can see from the figure that the asymptotic convergence rates of the three variables are consistent with the rate predicted from the eigenvalue. In Figure (3.30), the amplification factors associated with two successive iterations are plotted, and we can see they oscillate around the norm of the largest eigenvalue. If we calculate the average amplification factors defined by Equation (3.6), we obtain the results in Figure (3.31). Compared with Figure (3.30), the average amplification factor shows much less oscillation; even in the worst case, it agrees with the norm of the largest eigenvalue to two digits.

Figure 3.28: First twenty largest eigenvalues for the 2D Euler case with Mach=1.05, angle=0.5
Figure 3.29: The convergence history for the 2D Euler equations with Mach=1.05, angle=0.5
Figure 3.30: The amplification factors of two successive iterations for the 2D Euler case with Mach=1.05, angle=0.5

Figure 3.31: The average amplification factors for the two-dimensional Euler case with Mach=1.05, angle=0.5

3.6.2 Summary

Though this case is nonlinear, the conclusions are similar to those for the advection-diffusion problem, and we do not repeat them here.

3.7 Test Case No. 5 — Unstable 2D Euler Equations

The previous cases are all stable. In this section, we present a case of an unstable discretization arising from the 2D Euler equations. Unlike Test Case No. 3, in this case the Mach number is 1.2 and the angle of attack is 0.0. Cell-centered control volumes and backward Euler time-stepping with dt = 1 are used. Other settings are the same as in Test Case No. 3. This case is not stable for backward Euler time-stepping with some time steps. However, in this thesis, we use the pseudo time-stepping method in [17] to reach the first-order scheme solution, then use that solution as the initial solution for time-stepping the fourth-order scheme with the same pseudo time-stepping method.
For some cases in which the spatial discretization is not stable, this approach can still reach a solution.

The solution in terms of the Mach field is presented in Figure (3.32), and a zoom of the Mach field around the airfoil is shown in Figure (3.33). The first twenty largest eigenvalues are plotted in Figure (3.34). A zoom of the first two largest eigenvalues is shown in Figure (3.35). Figure (3.36) shows the first twenty rightmost eigenvalues of the spatial Jacobian matrix. Figure (3.35) shows that two eigenvalues of the full discretization system are larger than one, which implies the system is not stable. In Figure (3.36), there are two eigenvalues in the right half plane, which implies that the semi-discretization is not stable.

3.7.1 Numerical Results and Data Analysis

The convergence histories in Figure (3.37) show that this case is not stable, which is consistent with the prediction from the eigenvalues of both the full Jacobian matrix and the spatial Jacobian matrix. We can also see that the growth rates of the three variables coincide well with the rate predicted by the norm of the largest eigenvalue. Since dt = 1, the convergence history of the flux integral coincides with the convergence history of the solution update. Figure (3.38) presents the amplification factors associated with two successive iterations. We can see that after the 30th iteration, the amplification factors coincide with the norm of the largest eigenvalue.

As in previous cases, we calculate the difference between the actual amplification factors and the norm of the largest eigenvalue, ∆γ, to generate Figure (3.39).
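The quantities plotted in these figures can be recovered from a residual-norm history alone. A minimal sketch follows; the eigenvalue norm and the synthetic residual sequence are illustrative stand-ins for the measured data, not the thesis's actual values:

```python
import numpy as np

# Synthetic residual-norm history standing in for measured convergence data:
# ||r_k|| ~ C * |lambda|^k with a decaying oscillation, as observed in the
# test cases (all numbers here are hypothetical).
lam_norm = 1.1459                       # |largest eigenvalue| (illustrative)
k = np.arange(80)
r = 1e-3 * lam_norm**k * (1.0 + 0.05 * 0.8**k * np.cos(0.7 * k))

gamma = r[1:] / r[:-1]                  # amplification factor of successive iterations
dgamma = np.abs(gamma - lam_norm)       # the difference Delta-gamma_k
# Past the transient, gamma_k settles onto |lambda|, so dgamma decays.
```

With a growing residual (|λ| > 1, the unstable case) the same ratio tracks the growth rate instead of a convergence rate, which is exactly how Figure (3.38) is read.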
Figure 3.32: Mach field for the 2D Euler case with Mach=1.2, angle=0.0 free stream

Figure 3.33: Zoom of the Mach field around the airfoil for the 2D Euler case with Mach=1.2, angle=0.0 free stream

Figure 3.34: First twenty largest eigenvalues for the 2D Euler equations with Mach=1.2, angle=0.0 free stream

Figure 3.35: Zoom of the first two eigenvalues for the two-dimensional Euler equations with Mach=1.2, angle=0.0 free stream

Figure 3.36: First twenty rightmost eigenvalues of the spatial Jacobian matrix for the two-dimensional Euler case with Mach=1.2, angle=0.0 free stream

We can see that ∆γ is below 10^{-2} for all three variables between iterations 30 and 74. Even the worst case in the figure is much smaller than (r_1/r_0)^{k-1}. For the decay rates, there is no similarity between the computed variables and (r_1/r_0)^{k-1}.

For the angle, we can carry out a similar analysis, which generates Figure (3.40). Compared with the amplification factors, the difference associated with the angle is larger, but it is still much smaller than (r_1/r_0)^k. The decay rate of ∆θ_k is likewise not consistent with r_1/r_0.
That neither the decay rate of ∆γ_k nor the decay rate of ∆θ_k is consistent with r_1/r_0 might be caused by the shock wave in the flow field. However, we still conclude that the methodology is valid, since both ∆γ_k and ∆θ_k are small.

Figure (3.41a) visualizes the density component of the eigenvector associated with the largest eigenvalue; Figure (3.41b) visualizes the computational error at the 60th iteration in terms of density; Figure (3.41c) visualizes the solution update at the 60th iteration in terms of density; and Figure (3.41d) visualizes the flux integral at the 60th iteration in terms of density. Comparing the shapes of the four vectors, we can see that they are similar, and that the one associated with the flux integral points in the opposite direction.

Figure 3.37: The convergence history for the 2D Euler case with Mach=1.2, angle=0.0
Figure 3.38: The amplification factors for the 2D Euler case with Mach=1.2, angle=0.0

Figure 3.39: The difference between the actual amplification factors and the norm of the largest eigenvalue (∆γ) for the unstable Euler case with Mach=1.2, angle=0.0

Figure 3.40: The difference between the actual angle and 0 or π (∆θ) for the unstable Euler case with Mach=1.2, angle=0.0

3.7.2 Summary

For this unstable nonlinear case, the following conclusions can be drawn from the data and analysis above:

1. For the amplification factor: ∆γ_k is small for all three variables, but the decay rate does not coincide with r_1/r_0.

2. For the angle: ∆θ_k is small for all three variables, but the decay rate does not coincide with r_1/r_0.

Though the decay rates do not coincide with r_1/r_0, we still conclude that the methodology holds, since both ∆γ_k and ∆θ_k are small.

3.8 Summary of Stability Analysis

In Chapter 2, we developed and presented a new stability analysis methodology and constructed the corresponding stability condition. In this chapter, by testing
In this chapter, by testinglinear problems and the nonlinear 2D Euler equations cases, we have verified thevalidity of the stability methodology.583.8. Summary of Stability Analysis(a) The eigenvector associated with the largesteigenvalue(b) Density component of the computation errorat 60th iteration(c) Density component of the solution update atthe 60th iteration(d) Density component of flux integral at the60th iterationFigure 3.41: Comparison among the computational vectors and the eigenvectorassociated with the largest eigenvalue (density component) for the 2D Euler casewith Mach=1.2, angle=0.0.59Chapter 4Stabilization at Fixed PointIn Chapter 2, we developed a stability analysis model, and constructed the associ-ated stability condition in terms of eigenvalues. In addition, we also constructeda stability condition for the semi-discretization, which is independent of time-stepping. To have a stable ODE system, we need all the eigenvalues to lie in theleft half plane. To stabilize an unstable ODE system, we need shift the eigenvaluesin the right half plane into the left half plane. To reach this objective, we needchange the Jacobian matrix. There are many ways to change the Jacobian matrix,but an easy and controllable way is to change the coordinates of the vertices of themesh. The next question is how the changes of vertices’ coordinates change theeigenvalues. Can we obtain a quantitative relation?4.1 Finding the Derivative of the EigenvalueTo determine how the eigenvalue changes quantitatively corresponding to a changeof the matrix, we need the derivative of the eigenvalue. By the definition of righteigenvalue, we haveAx = λx. (4.1)For the left eigenvalue, we havey∗A = λy∗. (4.2)Taking the first derivative of both sides of Equation (4.1), we haveA′x+Ax′ = λ′x+λx′. (4.3)Multiplying both sides from the left by the left eigenvector yieldsy∗A′x+ y∗Ax′ = λ′y∗x+λy∗x′. (4.4)Multiplying Equation (4.2) from the right by the derivative of the right eigenvectorgives604.2. 
y∗Ax′ = λy∗x′. (4.5)

Substituting Equation (4.5) into Equation (4.4), we have

y∗A′x = λ′y∗x. (4.6)

Dividing both sides by y∗x, we have

λ′ = (y∗A′x) / (y∗x). (4.7)

By Equation (4.7), the derivative of an eigenvalue can be found from the derivative of the matrix and the left and right eigenvectors. The earliest published derivation of Equation (4.6) may be Lancaster (1964) [11].

4.2 Prediction on the Difference of Eigenvalue under the Difference of a Matrix

We take only the derivative with respect to the coordinates of the mesh's vertices into consideration. We use finite differences to approximate the matrix's derivative with respect to a coordinate:

A′ = (A(ζ + ε) − A(ζ)) / ε + O(ε), (4.8)

where ζ is the vector of vertex coordinates and ε is a perturbation of one coordinate of one vertex, which is a good starting point. (The perturbation can in principle involve multiple coordinates of multiple vertices, but here it is applied to a single coordinate of a single vertex.) Substituting Equation (4.8) into Equation (4.7) yields

λ′ ≈ (y∗(A(ζ + ε) − A(ζ))x) / (ε y∗x). (4.9)

Equation (4.9) approximates the eigenvalue's derivative by finite differences. With the derivative at hand, we can predict the eigenvalue of the new Jacobian matrix after a mesh perturbation by a Taylor series, keeping only the first-order term:

λ_new = λ_old + λ′_old ε + O(ε²). (4.10)

Rearranging Equation (4.10), the difference of the eigenvalue can be predicted by

∆λ = λ_new − λ_old = λ′ε. (4.11)

For a specific eigenvalue λ_i, where i is the index, we have

∆λ_i = λ_new,i − λ_old,i = λ′_i ε. (4.12)

Substituting Equation (4.9) into Equation (4.11) gives

∆λ = λ′ε = (y∗(A(ζ + ε) − A(ζ))x) / (y∗x) = (y∗∆A x) / (y∗x). (4.13)

For a specific eigenvalue λ_i, we have

∆λ_i = (y_i∗ ∆A x_i) / (y_i∗ x_i).
(4.14)

The difference of the eigenvalue and the difference of the Jacobian matrix are related by Equation (4.14). For a given perturbation of the matrix, there are two ways to calculate the difference of the eigenvalue. One is to apply Equation (4.14) directly; that is, the difference of the eigenvalue is calculated from the difference of the matrix and the left and right eigenvectors. The other is to obtain the derivative of the eigenvalue λ′ from Equation (4.9) and then the difference of the eigenvalue from Equation (4.12).

4.3 Numerical Verification of the Eigenvalue Difference Calculation Method

In this section, we test the reliability of Equation (4.11) and determine the maximum perturbation ε for which the difference ∆λ is predicted within an acceptable tolerance. We start directly with the two-dimensional Euler equations rather than a linear problem. For the two-dimensional Euler equations, if the coordinates are changed, the converged solution changes accordingly, and therefore the Jacobian matrix changes as well. However, we do not take the change of the converged solution into consideration. We consider only the rightmost eigenvalue λ_0 for testing.

For an irregular mesh, the length of the edges changes with location. Also, eigenvalues have different sensitivities to the coordinates at different locations. Generally, the solution changes rapidly where the mesh is dense, so a small change of a vertex location in those regions can bring significant changes to the eigenvalues. On the other hand, the solution changes slowly where the mesh is sparse, so a relatively large change of a vertex location does not necessarily change the eigenvalues significantly. For clarity, we normalize the change of coordinate ε. We define the following length.

Definition 3.
The unit length at a vertex is defined as 0.1 times the length of the shortest edge connected to the vertex: L_u = 0.1|e_shortest|.

We normalize the change of the coordinate ε by the unit length:

ε_s = ε / L_u. (4.15)

4.3.1 Procedure to Implement Tests

To implement the numerical tests, we first calculate the derivative using finite differences. The procedure is:

1. Calculate the rightmost eigenvalue of the Jacobian matrix and the associated left and right eigenvectors.

2. We use a central finite difference to approximate the derivative. Adjust one coordinate of one vertex by a small quantity ε_s to obtain a new Jacobian matrix A(ζ + ε_s). By Equation (4.14), we obtain one difference for the rightmost eigenvalue, ∆λ_{0,1}.

3. Adjust the same coordinate of the same vertex by the opposite quantity −ε_s to obtain another new Jacobian matrix A(ζ − ε_s). By Equation (4.14), we obtain another difference for the rightmost eigenvalue, ∆λ_{0,2}.

4. Calculate the derivative with respect to the normalized length: λ′ = (∆λ_{0,1} − ∆λ_{0,2}) / (2ε_s).

We can then construct a linear function for the difference of the rightmost eigenvalue ∆λ_0 in terms of the perturbation of one coordinate of one vertex:

∆λ_0 = λ′_0 ε_s. (4.16)

By Equation (4.16), we can predict the change of an eigenvalue arising from a perturbation. Besides the perturbations used to construct Mapping (4.16), we apply other perturbations to verify the validity of Equations (4.16) and (4.14). There are three ways to obtain the difference of the eigenvalue: by Equation (4.16), by Equation (4.14), and by recomputing the eigenvalue of the corresponding new Jacobian matrix and taking the difference. We use the third to evaluate the validity of the methods associated with Equations (4.14) and (4.16).

For the tests, each time we perturb only one coordinate of one vertex. For each coordinate, there are eight different perturbations. Two vertices are tested. The
Theresults are presented in the following subsection.634.3. Numerical Verification of the Eigenvalue Difference Calculation Method4.3.2 Numerical ResultsThe red point in Figure (4.1a) is the vertex being perturbed. The number 683 is theindex of the vertex. Figure (4.2) shows the change of the first rightmost eigenvalueunder the perturbation of the x-coordinate of vertex 683. The unit for the x-axisis the Lu defined at the vertex 683. The perturbation associated with the length0.01 and -0.01 are used to construct the linear map defined by Equation (4.16).Table (4.1) presents the relative error of the predicted change to the actual changeassociated with the largest four perturbations for both methods. The relative errorof the predicted eigenvalue change is defined asE∆λ0 =∣∣∣∆λ predicted0 −∆λ actual0 ∣∣∣∣∣∆λ actual0 ∣∣ ×100%. (4.17)From Table (4.1), we can see if the absolute value of the perturbation is notlarger than one unit length, the prediction from linear Mapping (4.16) coincideswith the actual change, i.e., the linearity is well preserved; (2) the prediction byEquation (4.14) matches the actual change as well for the absolute value of theperturbation not larger than one unit length. Even for a perturbation whose absolutevalue is 2 unit length, the relative error is still not too large. For this case, both thelinear Mapping (4.16) and Equation (4.14) demonstrate a good prediction.The red point in Figure (4.1b) is another point being perturbed, of which theindex is 1417. Similar to vertex 683, we also generate Figure (4.3) and Table (4.2).For this case, we can see that if the absolute value of the change is within one unitlength, the relative error of the prediction from both methods are acceptable. If theabsolute value of the movement is as large as two unit length, the error is relativelylarge. 
However, if a movement within one unit length in absolute value cannot reach an objective, a larger movement of 1.5 or 2 unit lengths can also be considered.

From the two examples, we can see that if a movement is not larger than one unit length (positive or negative), both methods give good predictions. For a movement larger than one unit length, the prediction might not be sufficiently accurate, but it preserves the correct trend. Such a movement can be used if a smaller one cannot reach the objective. We assume this conclusion applies to other eigenvalues, other vertices, and other coordinates. For a movement that simultaneously changes multiple coordinates, we assume superposition can be applied; we do not confirm this by numerical tests.

Figure 4.1: The perturbed vertices: (a) vertex 683; (b) vertex 1417

Figure 4.2: Change of eigenvalue when moving vertex 683 in the x-direction for the 2D Euler case with Mach=1.2, angle=0.0 free stream
Figure 4.3: Change of eigenvalue when moving vertex 1417 in the x-direction for the 2D Euler case with Mach=1.2, angle=0.0 free stream

Table 4.1: Relative error of the prediction of the rightmost eigenvalue change for both methods under the movement of vertex 683

                  −2L_u   −L_u    L_u    2L_u
Mapping (4.16)      31%    13%    10%     16%
Equation (4.14)     23%     9%     6%    9.5%

Table 4.2: Relative error of the prediction of the rightmost eigenvalue change for both methods under the movement of vertex 1417

                  −2L_u   −L_u    L_u    2L_u
Mapping (4.16)      25%    14%    18%     38%
Equation (4.14)     31%    14%    14%     29%

4.4 Stabilization Methodology

From the last section, we know that if a coordinate movement is around one unit length, or even two unit lengths, the eigenvalue change can be predicted well by the linear Mapping (4.16). For multiple-coordinate movements, we can apply superposition. To stabilize an unstable case, the eigenvalues in the right half plane need to be shifted into the left half plane while the originally negative eigenvalues are not shifted into the right half plane.

4.4.1 Procedure of Systematic Stabilization

The next question is what adjustment of each vertex shifts the eigenvalues with positive real part into the left half plane. Each vertex has two coordinates, which we consider independently. Suppose m x-coordinates and n y-coordinates are adjusted.
The adjustment of an x-coordinate is denoted by δx_i, and the adjustment of a y-coordinate by δy_j; as indices over the coordinates to be moved, i ranges from 0 to m−1 and j from 0 to n−1. We combine the nonzero adjustments of all coordinates into a vector ∆ζ = (δx_0, δx_1, δx_2, ..., δx_{m−1}, δy_0, δy_1, ..., δy_{n−1}). To shift the eigenvalues with positive real part into the left half plane, we need to find the corresponding ∆ζ. Suppose the derivatives of the first several rightmost eigenvalues with respect to the coordinates of each vertex are available; we need only consider their real parts. For the coordinates to be adjusted, we can construct a matrix. Suppose we take the first l rightmost eigenvalues into consideration: (λ_0, λ_1, λ_2, ..., λ_{l−1}). For each eigenvalue λ_k, the derivative of its real part with respect to one normalized x-coordinate is dRe(λ_k)/dx_{s,i}, and with respect to one normalized y-coordinate is dRe(λ_k)/dy_{s,j} (both coordinates are normalized by the unit length). We construct the l × (m+n) matrix

dRe(Λ)/dζ =
[ dRe(λ_0)/dx_{s,0}      ...  dRe(λ_0)/dx_{s,m−1}      dRe(λ_0)/dy_{s,0}      ...  dRe(λ_0)/dy_{s,n−1}     ]
[ dRe(λ_1)/dx_{s,0}      ...  dRe(λ_1)/dx_{s,m−1}      dRe(λ_1)/dy_{s,0}      ...  dRe(λ_1)/dy_{s,n−1}     ]
[        ...                          ...                      ...                          ...             ]
[ dRe(λ_{l−1})/dx_{s,0}  ...  dRe(λ_{l−1})/dx_{s,m−1}  dRe(λ_{l−1})/dy_{s,0}  ...  dRe(λ_{l−1})/dy_{s,n−1} ]  (4.18)

For convenience, we use D to denote the matrix dRe(Λ)/dζ. For a vertex adjustment ∆ζ, the changes of the real parts of the eigenvalues are approximately Re(∆λ) = D∆ζ. In practice, ∆ζ is unknown, and Re(∆λ) is an objective set in advance such that all the eigenvalues end up with negative real part:

Re(λ) + Re(∆λ) < 0. (4.19)

Therefore, we know D and Re(∆λ), and ∆ζ is unknown.
∆ζ can be obtained by solving an optimization problem:

min_{∆ζ} (1/2) ∆ζᵀ∆ζ   such that   D∆ζ ≤ Re(∆λ),   lb ≤ ∆ζ ≤ ub. (4.20)

We want to minimize the vertex adjustment, since a larger adjustment means a larger deviation from the predicted eigenvalue changes and a greater chance of making the mesh invalid. The vectors lb and ub bound the adjustment of the vertices.

We summarize the procedure as:

1. Calculate the derivatives dRe(λ_k)/dx_{s,i} and dRe(λ_k)/dy_{s,j} for the eigenvalues of interest and all vertices in the mesh.

2. Set Re(∆λ) such that Condition (4.19) is satisfied.

3. Solve the optimization problem of Equation (4.20) to obtain ∆ζ.

4. Apply ∆ζ to the mesh.

4.5 Technical Results of Stabilization

In this section, we use the procedure above to stabilize two unstable two-dimensional Euler equation cases.

4.5.1 Test Case No. 1

This case is the unstable one documented in Chapter 3, with Mach=1.2, angle=0.0 free stream; we do not repeat the description here. The derivatives dRe(λ)/dε_s are calculated for all the interior vertices and the first thirty rightmost eigenvalues. Here ε_s is a normalized x- or y-coordinate. We do not move the vertices on the boundary, since this would change the domain, and we set the derivatives associated with boundary vertices to zero. The derivatives with respect to the x-coordinate or the y-coordinate of all vertices can be interpreted as a vector whose dimension equals the number of vertices. It is common to visualize flow variables, for instance the density, on the mesh. For visualization purposes, we can define a pseudo field variable similar to a flow variable, whose average value on the vertex-centered control volume equals the derivative of the eigenvalue's real part with respect to one of the normalized coordinates of that vertex, dRe(λ)/dε_s.
Therefore, if we visualize this variable on the mesh like other flow variables, the color of each vertex-centered control volume represents the value of dRe(λ)/dε_s. Other variables that can be mapped to the mesh can be visualized in this manner as well. As an example, Figure (4.4) shows the derivative of the real part of the rightmost eigenvalue with respect to the x-coordinate of each vertex. We can see that most of the derivatives are close to zero.

Figure 4.4: The derivative of the rightmost eigenvalue's real part with respect to the normalized x-coordinate, dRe(λ_0)/dx_s, for the 2D Euler case with Mach=1.2, angle=0.0 free stream

The optimization problem is solved by the function quadprog [15] in MATLAB [16]. The full description of the problem that quadprog solves is given in [15] as:

min_x (1/2) xᵀHx + fᵀx   such that   A·x ≤ b,   Aeq·x = beq,   lb ≤ x ≤ ub. (4.21)

Corresponding to our problem, we set the matrix H to be the identity, the matrix A to be D, the vector f to be zero, Aeq and beq to be empty, x to be ∆ζ in Equation (4.20), and b to be Re(∆λ) in Equation (4.20); lb and ub have the same meaning in Equations (4.20) and (4.21). Specifically, here we set lb to be a vector with all entries equal to −1.5 and ub to be a vector with all entries equal to 1.5, i.e., the bound is 1.5 unit lengths in absolute value. For Re(∆λ), if the real part of an eigenvalue is larger than zero, we set the associated component of Re(∆λ) to −1.1 times that real part; if the real part is smaller than zero, we set the associated component to −0.1 times that real part.

Figure 4.5: The eigenvector associated with the rightmost eigenvalue for the 2D Euler case with Mach=1.2, angle=0.0 free stream
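An equivalent setup can be sketched with SciPy's SLSQP solver in place of quadprog; the matrix D, the eigenvalue real parts, and the problem sizes below are illustrative stand-ins, while the −1.1/−0.1 target factors and the ±1.5 unit-length bounds follow the settings just described:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-ins (not the real 2D Euler Jacobian data): 3 tracked
# eigenvalue real parts and a 3 x 4 sensitivity matrix D = dRe(Lambda)/dzeta.
re_lam = np.array([0.15, 0.02, -0.08])
D = np.array([[ 0.40, -0.10,  0.05,  0.00],
              [ 0.02,  0.30, -0.20,  0.10],
              [-0.05,  0.00,  0.10, -0.30]])

# Target Re(dlambda): -1.1x the real part for unstable modes, -0.1x for
# stable ones, so Re(lambda) + Re(dlambda) < 0 (Condition 4.19) holds.
target = np.where(re_lam > 0.0, -1.1 * re_lam, -0.1 * re_lam)

# Equation (4.20): minimize 0.5 * dzeta' dzeta subject to D dzeta <= target
# and bounds of 1.5 unit lengths on every coordinate adjustment.
res = minimize(fun=lambda z: 0.5 * z @ z,
               x0=np.zeros(D.shape[1]),
               jac=lambda z: z,
               bounds=[(-1.5, 1.5)] * D.shape[1],
               constraints=[{"type": "ineq",
                             "fun": lambda z: target - D @ z}])
dzeta = res.x   # minimal vertex adjustment meeting the shift targets
```

Because the objective is the squared norm of ∆ζ, the solver automatically leaves insensitive coordinates near zero, matching the observation that only a small fraction of coordinates is actually moved.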
The objective is to require that the real part of each shifted eigenvalue whose original real part is positive be less than −0.1 times the original real part, and that the real part of each shifted eigenvalue whose original real part is negative be less than 0.9 times the original real part. If this objective is satisfied, all the eigenvalues have negative real part. We take the first 15 rightmost eigenvalues into consideration; for each coordinate, we have 15 derivatives dRe(λ_i)/dε_s, i.e., i ranges from 0 to 14. For a specific eigenvalue, most of the derivative components are close to zero and cannot be used, since too large a movement and too many coordinates would be needed to shift the eigenvalue to a target value. We therefore select the coordinates whose derivatives are relatively large. Here the criterion for selecting a coordinate is that at least one of its 15 derivatives is larger than 0.0005 but smaller than 2 (the upper bound is needed to avoid possible singularities). Under this criterion, 98 of the 3216 coordinates of the mesh are selected. As a result, the size of matrix A is 15×98. However, the function quadprog needs matrix A to be a square matrix. To remedy this, we extend the size to 98×98 and assign all of the extended entries to be zero. Accordingly, we extend the dimension of vector Re(∆λ) from
We can see that the new mesh looks nor-mal. Figure (4.9) shows the convergence history both for the time-stepping asso-ciated with the original mesh and the time-stepping associated with the new mesh.From the convergence history comparison, we can clearly see the original unstabletime-stepping has been stabilized. Figure (4.10) shows the first twenty rightmosteigenvalues calculated on the converged solution after the mesh movement; com-pared with Figure (3.36), we can see that the original eigenvalues with positive realpart have been shifted into left half plane. For those eigenvalues of which the origi-nal reals part are positive, the corresponding predictions Re(λ )+Re(∆λ ) from theoptimization problem do not coincide well with the actual values of the updatedeigenvalues. We do not study this further.Figure 4.6: Mesh comparison for the stabilization for the 2D Euler case withMach=1.2 and angle=0.0 (The red line is for the new mesh, and the black lineis for the original mesh.)714.5. Technical Results of StabilizationFigure 4.7: Zoom of Figure (4.6) near the airfoilFigure 4.8: The new mesh arising from stabilization for the 2D Euler case withMach=1.2 and angel=0.0724.5. Technical Results of Stabilization0 50 100 150 200 250 300 350−12−10−8−6−4−20IterationsThelog10oftheNormConvergence History Comparison for the Stabilization at Fixed Point for the 2D Euler Case(Mach=1.2, angle=0.0)  Flux Integral R(U¯)for New MeshFlux Integral R(U¯)for Original MeshFigure 4.9: Convergence history comparison for the 2D Euler case associated withdifferent meshes ( Mach=1.2, angle=0.0 )−0.075 −0.07 −0.065 −0.06 −0.055 −0.05 −0.045 −0.04 −0.035 −0.03 −0.025−0.4−0.3−0.2−0.100.10.20.3First Twenty Rightmost Eigenvalues for the 2D Euler Case Associated With the New Mesh(Mach=1.2, angle=0.0)Re(λ)Im(λ)Figure 4.10: First twenty rightmost eigenvalues for the 2D Euler case associatedwith the new mesh (Mach=1.2, angle=0.0)734.6. The Region That Causes the Instability4.5.2 Test Case No. 
2

This is also an unstable 2D Euler case. The parameters are all the same as in Test Case No. 1, except that the Mach number of the free stream is 1.3. The solution in terms of the Mach field is displayed in Figure (4.11). The density component of the eigenvector associated with the first rightmost eigenvalue is shown in Figure (4.20b), and that associated with the second rightmost eigenvalue in Figure (4.20d). The first twenty rightmost eigenvalues calculated on the converged solution are plotted in Figure (4.12). The settings for the optimization problem are the same as in the case with Mach=1.2, except that only the first ten eigenvalues are taken into consideration.

We take only the first ten eigenvalues into consideration because the derivatives associated with some eigenvalues from index 11 to index 15 are very large and may arise from singularities, as shown in Figure (4.13). This can be studied in future work. After solving the optimization problem, we obtain the movement of the x-coordinates, shown in Figure (4.21a), and the movement of the y-coordinates, shown in Figure (4.21b). 134 of the 3216 coordinates are changed. Both the original mesh and the new mesh are shown in Figure (4.14) for comparison. The vertices near the airfoil are not moved. The new mesh is shown separately in Figure (4.15); it looks normal. The first twenty rightmost eigenvalues calculated on the converged solution on the new mesh are shown in Figure (4.16). Compared with Figure (4.12), we can see that the eigenvalues with positive real part have been shifted into the left half plane. The convergence history comparison
The convergence history comparison is shown in Figure (4.17), and we can clearly see that the originally unstable time-stepping has been stabilized.

4.6 The Region That Causes the Instability

In these two stabilization cases, the calculation of the derivative dRe(λ)/dε_s goes through all the vertices of the mesh, since we do not know in advance which ones will be selected. However, in practice, only a small portion of the coordinates is needed. For the case with Mach=1.2, only 98 of 3216 coordinates, i.e. 3.05%, are used for stabilization; for the case with Mach=1.3, only 134 of 3216, i.e. 4.17%, are used. Less than 5% of the coordinates are actually used. The CPU time cost of mesh optimization for the 2D Euler case with Mach=1.2, angle=0.0 is presented in Table (4.3). In Table (4.3), Time-Stepping means the time-stepping to reach the converged solution of the fourth order scheme. To calculate the derivatives, we need the eigenvalues and the left and right eigenvectors.

Figure 4.11: Mach field for the 2D Euler case with Mach=1.3, angle=0.0 free stream.

Figure 4.12: First twenty rightmost eigenvalues for the 2D Euler case associated with the original mesh (Mach=1.3, angle=0.0).

Figure 4.13: The derivative of the 14th rightmost eigenvalue's real part with respect to the normalized x-coordinate, dRe(λ)/dx_s.

Figure 4.14: Mesh comparison for the stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream (the black line is the original mesh; the red line is the new mesh).
Figure 4.15: The new mesh after stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream.

Figure 4.16: Rightmost eigenvalues for the 2D Euler case associated with the new mesh (Mach=1.3, angle=0.0).

Figure 4.17: Convergence history comparison for the 2D Euler case for different meshes (Mach=1.3, angle=0.0).

Since we cannot calculate the left and right eigenvectors at the same time, we transpose the Jacobian matrix to calculate the left eigenvectors. The 384 s in the table includes both the right eigenvalue problem and the left eigenvalue problem. In the derivative calculation, we calculate the derivatives of the first 30 rightmost eigenvalues with respect to both coordinates of all the vertices. The CPU time used to solve the optimization problem is only 0.2 s. In this case, 98% of the CPU time is spent in the derivative calculation. If we only calculated the derivatives for the coordinates which will actually be used in the optimization, much CPU time could be saved. For instance, if we only calculated 10% of the coordinates, which is already more than twice the portion used in the current cases, 88% of the computation time could be saved.
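Transposing the Jacobian to obtain the left eigenvectors works because A and Aᵀ share the same eigenvalues, and the eigenvectors of Aᵀ are the left eigenvectors of A. A small sketch of the pairing step, with a stand-in matrix in place of the Jacobian:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))

lam, X = np.linalg.eig(A)     # right eigenpairs: A x = lam x
lamT, Y = np.linalg.eig(A.T)  # eigenvectors of A^T: y^T A = lam y^T

# Pair each right eigenvector with the left eigenvector of the same eigenvalue.
order = [np.argmin(np.abs(lamT - l)) for l in lam]
Y = Y[:, order]

# Biorthogonality: y_i^T x_j vanishes for i != j (distinct simple eigenvalues),
# which is what makes the eigenvalue-derivative formula well defined per mode.
G = Y.T @ X
off = G - np.diag(np.diag(G))
print(np.max(np.abs(off)))  # near machine precision
```

The same pairing is needed whether the eigensolver is numpy, MATLAB, or SLEPc, since the two eigenvalue problems are solved independently.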
The computation is executed in serial; if run in parallel, the mesh optimization would take much less time to complete.

Items                               Time (seconds)
Time-Stepping                       42
Eigenvalue/Eigenvector Calculation  384
Derivative Calculation              21,748

Table 4.3: Time cost for mesh optimization of the 2D Euler case with Mach=1.2.

The question is how we can know which coordinates will be used and which will not. Figure (4.4) shows a similar nonzero pattern to Figure (4.5), suggesting that there might be some relation between them. Usually the entries of an eigenvector of the Jacobian matrix arising from the spatial discretization of flow problems are not uniform in absolute value. In fact, the distribution of the entries is so far from uniform that most of the entries are small and only a small fraction are much larger than the rest. Combining this fact with Equation (4.7) and Equation (4.14), we can conclude that only a small part of the vertices of the mesh have a significant influence on the sensitivity of the eigenvalues. These are the vertices which lie in and around the areas with large values in the visualization of the associated eigenvector. We examine more cases. The derivatives associated with the second rightmost eigenvalue, shown in Figure (4.18a), show a similar nonzero pattern to the eigenvector associated with the second rightmost eigenvalue, shown in Figure (4.18b), except for the area around the airfoil.
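The link between the derivative pattern and the eigenvector pattern can be made concrete at the matrix level: the sensitivity of a simple eigenvalue to an individual matrix entry is dλ/dA[p,q] = y_p x_q / (yᵀx), an outer product of the left and right eigenvectors, so entries (and hence vertices) where both eigenvectors are small contribute almost nothing. A sketch with a stand-in matrix, checking one entry against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))

lam, X = np.linalg.eig(A)
lamT, Y = np.linalg.eig(A.T)
i = np.argmax(lam.real)                      # track the rightmost eigenvalue
y = Y[:, np.argmin(np.abs(lamT - lam[i]))]
x = X[:, i]

# Sensitivity of lam_i to every matrix entry: an outer product.
S = np.outer(y, x) / (y @ x)                 # S[p, q] = d(lam_i)/d(A[p, q])

# Finite-difference check on one entry.
h = 1e-6
E = np.zeros((n, n)); E[2, 3] = h
lam_h = np.linalg.eig(A + E)[0]
fd = (lam_h[np.argmin(np.abs(lam_h - lam[i]))] - lam[i]) / h
print(abs(fd - S[2, 3]))  # small: the first-order formula matches the finite difference
```

Since S is an outer product, its large entries cluster exactly where the eigenvector entries are large, mirroring the nonzero-pattern coincidence observed in the figures.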
For the 2D case with Mach=1.3, we can see that the derivatives of the first rightmost eigenvalue's real part, shown in Figure (4.20a), show a similar nonzero pattern to the real part of the eigenvector associated with the first rightmost eigenvalue, shown in Figure (4.20b); the derivatives of the second rightmost eigenvalue's real part, shown in Figure (4.20c), show a similar nonzero pattern to the real part of the eigenvector associated with the second rightmost eigenvalue,5 visualized in Figure (4.20d). The nonzero pattern of the derivatives of an eigenvalue with respect to the y-coordinates is similar to that of the derivatives with respect to the x-coordinates. From the analysis and numerical results, we conclude that the nonzero pattern of an eigenvalue's derivatives with respect to the vertex coordinates (x and y) coincides with the nonzero pattern of the associated eigenvector.

Since we primarily move those vertices whose associated derivatives are larger, we expect the nonzero patterns of the movement, both for the x-coordinates and for the y-coordinates, to be similar to the superposition of the nonzero patterns of the associated eigenvectors. For the 2D Euler case with Mach=1.2, the x-coordinates' movement is shown in Figure (4.19a) and the y-coordinates' movement is shown in Figure (4.19b), both of which coincide with the superposition of the nonzero patterns of the eigenvector associated with the first rightmost eigenvalue, shown in Figure (4.5), and the eigenvector associated with the second rightmost eigenvalue, shown in Figure (4.18b).6 For the 2D Euler case with Mach=1.3, from Figure (4.20b), Figure (4.20d), Figure (4.21a), and Figure (4.21b), we can draw the same conclusion. Therefore, the areas with relatively large entries of the eigenvectors

5 In this thesis, the unstable eigenvalues are all real. Therefore, the real part of the eigenvector is equivalent to the eigenvector.

6 We take the first 15 eigenvalues into consideration.
However, the first two rightmost eigenvalues are positive in real part and are the primary ones taken into consideration; therefore, the contribution of the remaining eigenvectors to the pattern of the movement might be trivial.

associated with the unstable eigenvalues can be interpreted to cause the instability, since if we move those vertices, the unstable eigenvalues can be shifted to stable ones.

As a result, we may only need to consider the vertices associated with the relatively large entries of the eigenvectors of interest.7 We do not take this further here, though we suggest it be done in future work.

7 Here the eigenvectors of interest are the eigenvectors associated with the eigenvalues being taken into consideration. Perhaps we only need to consider the unstable eigenvalues, though in practice we take the first 15 or 10 rightmost eigenvalues into consideration.

Figure 4.18: Nonzero pattern comparison between the derivatives and the eigenvectors for the 2D Euler case with Mach=1.2, angle=0.0 free stream. (a) The derivative of the second rightmost eigenvalue's real part with respect to the x-coordinates. (b) The eigenvector associated with the second rightmost eigenvalue.

4.7 Summary

From this chapter, we have the following conclusions:

1. The changes in the eigenvalues of the Jacobian matrix caused by moving the vertices of the mesh can be predicted quantitatively.

2. The unstable eigenvalues of the Jacobian matrix can be shifted to stable eigenvalues by solving an associated optimization problem for a perturbation of the mesh.

3. Only a small area of the whole mesh causes instability. If we only calculate the derivatives associated with the area primarily contributing to the instability, much computation time can be saved.

Figure 4.19: The coordinates' movement to stabilize the unstable 2D Euler case with Mach=1.2, angle=0.0 free stream. (a) X-coordinate movement. (b) Y-coordinate movement.
Figure 4.20: Comparison between the derivatives of the eigenvalues and the eigenvectors for the 2D Euler case with Mach=1.3, angle=0.0 free stream. (a) The derivative of the first rightmost eigenvalue's real part with respect to the x-coordinates. (b) The density component of the eigenvector associated with the first rightmost eigenvalue. (c) The derivative of the second rightmost eigenvalue's real part with respect to the x-coordinates. (d) The density component of the eigenvector associated with the second rightmost eigenvalue.

Figure 4.21: The coordinates' movement to stabilize the unstable 2D Euler case with Mach=1.3, angle=0.0 free stream. (a) X-coordinate movement. (b) Y-coordinate movement.

Chapter 5

Stabilization at Non-fixed Point

In Chapter 2, we developed a stability analysis model, and its validity was verified by numerical tests in Chapter 3. This model is based on the fixed point of the time-stepping. In Chapter 4, we developed a method to stabilize unstable time-stepping by shifting the unstable eigenvalues into the stable region. This stabilization method is also applied at the fixed point. Fixed point analysis and stabilization at the fixed point can provide rich information about the system. However, to implement fixed point analysis, the fixed point must be reached, and for some cases the fixed point might not be feasible. In this chapter, we try to stabilize unstable time-stepping at a non-fixed point.

Typically, the first order scheme does not have instability issues. Higher order schemes, such as the second, third, and fourth order schemes, have instability issues under some circumstances. To have a good initial solution, we usually set the first order solution as the initial solution for a high order scheme time-stepping.
In this chapter, we try to stabilize high order schemes at the initial solution, which is the converged solution of the first order scheme. We use the same procedure as for stabilization at the fixed point, but here we implement the stabilization at the initial solution.

5.1 Test Case No. 1 — the Unstable 2D Euler Case with Mach=1.2, Angle=0.0 Free Stream

The boundary conditions, the spatial discretization, and the time-stepping are the same as those of the 2D Euler case with Mach=1.2, angle=0.0 free stream in Chapter 4. Different from the stabilization in Chapter 4, in this chapter we try to stabilize this unstable fourth order scheme at the initial solution, which is the fixed point of the first order scheme time-stepping. The first twenty rightmost eigenvalues of the Jacobian matrix at the initial solution are shown in Figure (5.1). The eigenvector associated with the rightmost eigenvalue is shown in Figure (5.2a), and the eigenvector associated with the second rightmost eigenvalue is shown in Figure (5.2c). The derivative dRe(λ0)/dx_s is shown in Figure (5.2b) and the derivative dRe(λ1)/dx_s is shown in Figure (5.2d); here λ0 denotes the rightmost eigenvalue and λ1 the second rightmost eigenvalue.

Figure 5.1: First twenty rightmost eigenvalues of the Jacobian matrix of the fourth order scheme at the initial solution for the 2D Euler case with Mach=1.2, angle=0.0 free stream.

The settings for the optimization problem are the same as those for the 2D Euler case with Mach=1.2 in Chapter 4. After solving the optimization problem, the change in the mesh is obtained.
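The core of the optimization step can be sketched as follows: given a sensitivity matrix D with D[i, s] = dRe(λ_i)/dε_s and target real-part shifts that push the unstable eigenvalues to a margin below zero, a minimum-norm coordinate movement satisfying the linearized constraints as equalities is the least-squares solution of D ε = Δr. This is a simplification of the thesis's quadratic program, which also bounds the movement; all numbers below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
n_eig, n_coord = 2, 40
D = rng.standard_normal((n_eig, n_coord))  # D[i, s] = dRe(lam_i)/d(eps_s), stand-in values
re_lam = np.array([0.02, 0.01])            # current (unstable) real parts
margin = 0.01                              # desired stability margin below zero

dr = -margin - re_lam                       # required change in each real part
eps = np.linalg.lstsq(D, dr, rcond=None)[0] # minimum-norm movement with D @ eps = dr

re_new = re_lam + D @ eps                   # first-order prediction after the movement
print(re_new)                               # -> [-0.01, -0.01], both at the margin
```

Because the system is underdetermined (many more coordinates than constrained eigenvalues), the minimum-norm solution automatically concentrates the movement on the coordinates with the largest sensitivities.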
The change in the x-coordinates is shown in Figure (5.3a) and the change in the y-coordinates is shown in Figure (5.3b). The original mesh and the new mesh are shown together in Figure (5.4) for comparison, and the new mesh is shown in Figure (5.5). The new mesh looks normal, though the largest movement of a single coordinate is 1.5 unit lengths.

Once we obtain the updated mesh, we implement time-stepping from the initial solution, i.e., the solution of the first order scheme. The convergence histories in terms of the flux integral, both for the time-stepping associated with the original mesh and for the time-stepping associated with the new mesh, are shown in Figure (5.6), from which we can see that the unstable time-stepping has been stabilized. The first twenty rightmost eigenvalues of the Jacobian matrix calculated on the converged solution on the new mesh are shown in Figure (5.7). The eigenvalues all have negative real part, whereas there are unstable eigenvalues in the spectrum associated with the original mesh, shown in Figure (3.36).

Figure 5.2: Eigenvectors and derivatives for the 2D Euler case with Mach=1.2, angle=0.0 free stream. (a) The density component of the eigenvector associated with the first rightmost eigenvalue of the Jacobian matrix of the fourth order scheme at the initial solution. (b) The derivative of the first rightmost eigenvalue's real part with respect to the normalized x-coordinate, dRe(λ0)/dx_s. (c) The density component of the eigenvector associated with the second rightmost eigenvalue of the Jacobian matrix of the fourth order scheme at the initial solution. (d) The derivative of the second rightmost eigenvalue's real part with respect to the normalized x-coordinate, dRe(λ1)/dx_s.
Figure 5.3: The normalized coordinates' change for the non-fixed point stabilization for the 2D Euler case with Mach=1.2, angle=0.0 free stream. (a) X-coordinate change. (b) Y-coordinate change.

Figure 5.4: Mesh comparison for the stabilization at the initial solution for the 2D Euler case with Mach=1.2, angle=0.0 (the red line is the new mesh; the black line is the original mesh).

Figure 5.5: New mesh for the stabilization at the initial solution for the 2D Euler case with Mach=1.2, angle=0.0 free stream.

Figure 5.6: Convergence history comparison for the stabilization at a non-fixed point for the 2D Euler case (Mach=1.2, angle=0.0).

Figure 5.7: First twenty rightmost eigenvalues of the Jacobian matrix on the converged solution of the fourth order scheme on the new mesh for the 2D Euler case with Mach=1.2, angle=0.0 free stream.

5.2 Test Case No. 2 — the Unstable 2D Euler Case with Mach=1.3, Angle=0.0 Free Stream

The boundary conditions, the spatial discretization, and the time-stepping are the same as those of the 2D Euler case with Mach=1.3, angle=0.0 free stream in Chapter 4. We try to stabilize this unstable case at the initial solution, which is the solution of the first order scheme.
The first twenty rightmost eigenvalues of the Jacobian matrix at the initial solution are shown in Figure (5.8). The eigenvector associated with the rightmost eigenvalue is shown in Figure (5.9a), and the eigenvector associated with the second rightmost eigenvalue is shown in Figure (5.9c). The derivatives dRe(λ0)/dx_s are shown in Figure (5.9b) and the derivatives dRe(λ1)/dx_s are shown in Figure (5.9d); here λ0 denotes the rightmost eigenvalue and λ1 the second rightmost eigenvalue.

For the associated optimization problem, we only take the first four rightmost eigenvalues into consideration.8 The criterion for selecting a coordinate is that there exists a derivative dλi/dε_s larger than 0.0001 and smaller than 2 for the first four eigenvalues, i.e. i = 0, 1, 2, 3.

8 If we take more eigenvalues into consideration, for instance the first ten rightmost eigenvalues, the derivatives dRe(λ)/dx_s and dRe(λ)/dy_s for some eigenvalues may have singularity issues. The singularity may arise from a shock wave, or the solution may be non-physical, since we do not employ any technique to guarantee that the computational solution is physically valid. Blending stabilization with a technique to guarantee that the computational solution is physically valid is a suggested topic of future work.

Figure 5.8: First twenty rightmost eigenvalues of the Jacobian matrix of the fourth order scheme at the initial solution for the 2D Euler case with Mach=1.3, angle=0.0 free stream.
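The selection rule above is a simple magnitude filter on the sensitivity table. A sketch of the filter, with illustrative stand-in derivative values:

```python
import numpy as np

rng = np.random.default_rng(4)
n_eig, n_coord = 4, 12
# D[i, s] = d(lam_i)/d(eps_s) for the first four eigenvalues (stand-in magnitudes).
D = 10.0 ** rng.uniform(-6.0, 1.0, size=(n_eig, n_coord))

lo, hi = 1e-4, 2.0
# Keep coordinate s if some eigenvalue i has lo < D[i, s] < hi.
selected = np.any((D > lo) & (D < hi), axis=0)
print(np.flatnonzero(selected))  # indices of the coordinates entering the optimization
```

The upper bound excludes the suspiciously large (possibly singular) derivatives mentioned in the footnote, while the lower bound drops coordinates with negligible influence.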
The other settings for the optimization problem are the same as those for the 2D Euler case with Mach=1.2, angle=0.0 free stream in Chapter 4.

After solving the optimization problem, we obtain the change in the x-coordinates, shown in Figure (5.10a), and the change in the y-coordinates, shown in Figure (5.10b). Both the original mesh and the new mesh are shown in Figure (5.11). The new mesh is shown separately in Figure (5.12), and we see that it looks normal. Applying the new mesh to the time-stepping, with the solution of the first order scheme as the initial solution, produces the convergence history shown in Figure (5.13), from which we can see that the originally unstable time-stepping has been stabilized. We also plot the eigenvalues of the Jacobian matrix based on the converged solution of the fourth order scheme on the new mesh, shown in Figure (5.14); compared with the eigenvalues in Figure (4.12), all the unstable eigenvalues have been shifted into the left half plane.

Figure 5.9: Eigenvectors and derivatives for the 2D Euler case with Mach=1.3, angle=0.0 free stream. (a) The density component of the eigenvector associated with the first rightmost eigenvalue of the Jacobian matrix of the fourth order scheme at the initial solution. (b) The derivative of the first rightmost eigenvalue's real part with respect to the normalized x-coordinate, dRe(λ0)/dx_s. (c) The density component of the eigenvector associated with the second rightmost eigenvalue of the Jacobian matrix of the fourth order scheme at the initial solution. (d) The derivative of the second rightmost eigenvalue's real part with respect to the normalized x-coordinate, dRe(λ1)/dx_s.

Figure 5.10: The normalized coordinates' change for the non-fixed point stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream. (a) X-coordinate change. (b) Y-coordinate change.

Figure 5.11: Mesh comparison for the non-fixed point stabilization for the 2D Euler case with Mach=1.3, angle=0.0 (the red line is the new mesh; the black line is the original mesh).

Figure 5.12: The new mesh arising from non-fixed point stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream.

Figure 5.13: Convergence history comparison for the non-fixed point stabilization for the 2D Euler case with Mach=1.3, angle=0.0 free stream.

Figure 5.14: First twenty rightmost eigenvalues of the Jacobian matrix on the converged solution of the fourth order scheme on the new mesh for the 2D Euler case with Mach=1.3, angle=0.0 free stream.

5.3 Summary and Further Discussion

In this chapter, by the same method used for the fixed point stabilization, we successfully stabilized two unstable fourth order cases at the initial solution, i.e., the converged first order solution. In fact, if we compare the results of the non-fixed point stabilization with the results of the fixed point stabilization, we can see that they are quite similar. We first compare eigenvalues and eigenvectors. Comparing Figure (5.1) with Figure (3.36), we can see that the first several rightmost eigenvalues of the Jacobian matrix of the fourth order scheme calculated on the converged solution of the fourth order scheme are close to those calculated on the initial solution.
From Figure (4.5), Figure (4.18b), Figure (5.2a), and Figure (5.2c), we can draw similar conclusions for the eigenvectors associated with the first two rightmost eigenvalues as for the eigenvalues. For the 2D Euler case with Mach=1.3, we can draw similar conclusions for the rightmost eigenvalues and the eigenvectors associated with the first two rightmost eigenvalues as well. The Jacobian matrix depends on the solution, and the difference between the converged solution of the fourth order scheme and the converged solution of the first order scheme is small; it is therefore reasonable that the eigenvalues and the eigenvectors are similar. For both cases, we can also see similarities in the derivatives of the eigenvalues, the mesh movement, and the first several rightmost eigenvalues after the mesh movement.

Chapter 6

Conclusions and Future Work

6.1 Conclusions

In this thesis, we developed a fixed point stability analysis model and verified its validity.

1. If the largest eigenvalue of the mapping relating the computational error of two successive iterations is real, the rate of convergence of the norm of the computational error converges to the norm of the largest eigenvalue. The direction of the computational error converges to the direction of the eigenvector associated with the largest eigenvalue, or its opposite.

2. If the largest eigenvalues of the mapping are a complex conjugate pair, the rate of convergence of the norm of the computational error oscillates around the norm of the largest eigenvalues. The asymptotic decrease rate of the norm of the computational error is consistent with the norm of the largest eigenvalues.

For the flux integral and the solution update, the conclusions are the same. This stability model can be applied both to linear problems and to nonlinear problems.

We also developed a methodology to change the eigenvalues of a matrix in a quantitative and controllable way.
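In the abstract matrix setting, "quantitative and controllable" can be illustrated with a rank-one update: for a simple eigenvalue λ with right eigenvector x and left eigenvector y, the matrix A + δ·x yᵀ/(yᵀx) has eigenvalue λ + δ with the other eigenvalues unchanged, by biorthogonality of the remaining left eigenvectors with x. This is not the thesis's mesh-based mechanism, only a sketch of controllable spectral shifting with a stand-in matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n))

lam, X = np.linalg.eig(A)
lamT, Y = np.linalg.eig(A.T)
i = np.argmax(lam.real)                       # shift the rightmost eigenvalue
x = X[:, i]
y = Y[:, np.argmin(np.abs(lamT - lam[i]))]

delta = -0.5                                  # prescribed shift into the left half plane
B = A.astype(complex) + delta * np.outer(x, y) / (y @ x)

# Expected spectrum: identical except lam[i] -> lam[i] + delta.
expected = lam.copy()
expected[i] += delta
lam_B = np.linalg.eig(B)[0]
err = max(min(abs(lb - le) for lb in lam_B) for le in expected)
print(err)  # near machine precision: only the chosen eigenvalue moved, by exactly delta
```

Unlike this idealized update, a mesh movement perturbs the whole Jacobian at once, which is why the thesis resorts to an optimization problem over many first-order sensitivities instead.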
Specifically, for the Jacobian matrix arising from the spatial discretization of the Euler equations on an unstructured mesh, we demonstrated that if a coordinate's change is no greater than one-tenth of the shortest incident edge length, the changes in the eigenvalues can be predicted within an acceptable tolerance. In practice, we may need larger movements for stabilization.

We used this method to change the eigenvalues of the Jacobian matrix by changing the coordinates of the vertices of a mesh. By solving a defined optimization problem, we obtained the movement for the mesh and applied it to the time-stepping. The results showed that the unstable cases were successfully stabilized and the unstable eigenvalues were shifted into the stable region. We implemented this stabilization both at the fixed point and at a non-fixed point for two 2D Euler cases.

6.2 Future Work for Stability and Stabilization

There are several problems which are mentioned in previous chapters but not resolved. They are interesting topics for future work.

1. We need to study the reason for the emergence of singularities in the eigenvalues' derivatives with respect to the coordinates, and how to deal with them.

2. Only a small part of the coordinates is needed for stabilization. We could reduce the time for stabilization significantly if only the coordinates most responsible for the instability were considered.

3. We need to find out why the eigenvalues' changes after mesh movement do not always coincide with the changes predicted from the optimization problem. To make the eigenvalues' changes predictable, we may need multiple smaller movements for stabilization.

4. For practical application, we may need to blend the stability analysis and the stabilization method with a technique which guarantees that the solution is physically valid, for instance, a limiter.
Similarly, we can blend the stability analysis and stabilization method with other techniques as well.

It is worth extending the study in this thesis to the 2D Navier-Stokes equations, the 3D Euler equations, the 3D Navier-Stokes equations, other reconstruction schemes, etc.

It is well known that the matrix method is misleading for ill-conditioned operators and that the computation of the eigenvalues of an ill-conditioned matrix is not reliable in finite precision arithmetic. In future work, we may also need to study how to implement stability analysis and stabilization for ill-conditioned operators.

Qualitatively speaking, stability is determined by three aspects: the physical problem, the reconstruction method, and the mesh. It would be useful to develop guidelines for generating a mesh that produces a stable system for a given physical problem and a given reconstruction method.

It would also be promising to improve the stability of a shock-wave capturing scheme so that a weaker limiter is sufficient, which is good for the resolution of the solution.

6.3 Open Questions: Spectral Analysis and Optimization — Beyond Stability and Stabilization

In general, eigenvalues and eigenvectors have wide application, so it is natural that optimization of eigenvalues and eigenvectors will find wide application as well. The approach used in this thesis proposes a framework to optimize the eigenvalues of a matrix. We can implement similar optimization for other purposes and for other problems as well.

Bibliography

[1] Fred Brauer and John A. Nohel. The Qualitative Theory of Ordinary Differential Equations: An Introduction. W. A. Benjamin, Inc., 1969.

[2] Carmen Campos, Jose E. Roman, Eloy Romero, and Andres Tomas. SLEPc home page. http://www.grycap.upv.es/slepc/, 2011.

[3] J. G. Charney, R. Fjortoft, and J. Von Neumann. Numerical integration of the barotropic vorticity equation.
Tellus, 2:237–254, 1950.

[4] J. Crank and P. Nicolson. A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type. Mathematical Proceedings of the Cambridge Philosophical Society, 43(01):50–67, January 1947.

[5] M. B. Giles. Energy stability analysis of multi-step methods on unstructured meshes. CFDL Report 1, MIT Dept. of Aero. and Astro., 1987.

[6] M. B. Giles. Stability analysis of a Galerkin/Runge-Kutta Navier-Stokes discretisation on unstructured tetrahedral grids. J. Comput. Phys., 132(2):201–214, April 1997.

[7] Florian Haider, Jean-Pierre Croisille, and Bernard Courbet. Stability analysis of the cell centered finite-volume MUSCL method on unstructured grids. Numerische Mathematik, 113(4):555–600, 2009.

[8] Christopher Hill. Personal communication, October 2014.

[9] Charles Hirsch. Numerical Computation of Internal and External Flows: Fundamentals of Computational Fluid Dynamics, volume 1. Butterworth-Heinemann, second edition, 2007.

[10] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1990.

[11] P. Lancaster. On eigenvalues of matrices dependent on a parameter. Numerische Mathematik, 6(1):377–387, December 1964.

[12] P. D. Lax and R. D. Richtmyer. Survey of the stability of linear finite difference equations. Commun. Pure Appl. Math., 9:267–293, 1956.

[13] H. W. J. Lenferink and M. N. Spijker. On the use of stability regions in the numerical analysis of initial value problems. Mathematics of Computation, 57(195):221–237, July 1991.

[14] Doron Levy and Eitan Tadmor. From semidiscrete to fully discrete: stability of Runge-Kutta schemes by the energy method. SIAM Rev., 40(1):40–73, 1998.

[15] Mathworks. quadprog documentation. http://www.mathworks.com/help/optim/ug/quadprog.html, December 2015.

[16] The MathWorks, Inc., Natick, Massachusetts. MATLAB version 7.14.0.739 (R2012a), 2012.

[17] Christopher Michalak and Carl Ollivier-Gooch.
Globalized matrix-explicit Newton-GMRES for the high-order accurate solution of the Euler equations. Computers and Fluids, 39:1156–1167, 2010.

[18] P. Moinier and M. B. Giles. Stability analysis of preconditioned approximations of the Euler equations on unstructured meshes. J. Comput. Phys., 178(2), 2002.

[19] Amir Nejat and Carl Ollivier-Gooch. A high-order accurate unstructured finite volume Newton-Krylov algorithm for inviscid compressible flows. Journal of Computational Physics, 227(4):2592–2609, 2008.

[20] Carl F. Ollivier-Gooch. GRUMMP — Generation and Refinement of Unstructured, Mixed-element Meshes in Parallel. http://tetra.mech.ubc.ca/GRUMMP, 1998–2010.

[21] Carl F. Ollivier-Gooch. A toolkit for numerical simulation of PDEs: I. Fundamentals of generic finite-volume simulation. Computer Methods in Applied Mechanics and Engineering, 192:1147–1175, February 2003.

[22] Satish C. Reddy and Lloyd N. Trefethen. Stability of the method of lines. Numerische Mathematik, 62:235–267, 1992.

[23] Robert D. Richtmyer and K. W. Morton. Difference Methods for Initial-Value Problems. Interscience Publishers, second edition, 1967.

[24] P. L. Roe. Approximate Riemann solvers, parameter vectors, and difference schemes. Journal of Computational Physics, 43:357–372, 1981.

[25] Barry Smith, Satish Balay, Jed Brown, Lisandro Dalcin, Dmitry Karpeev, Matthew Knepley, and Hong Zhang. PETSc home page. http://www.mcs.anl.gov/petsc, 2011.

[26] Bram van Leer. Towards the ultimate conservative difference scheme. IV. A new approach to numerical convection. J. Comput. Phys., 23(3):276–299, March 1977.

[27] Bram van Leer. Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov's method.
Journal of Computational Physics, 32:101–136, 1979.

Appendix A

The Mappings for the Solution Update and the Flux Integral

A.1 The Mapping for the Solution Update

Throughout this appendix, the Jacobian \(\partial\bar{R}/\partial\bar{U}\) is evaluated at \(\bar{U} = \bar{U}_s\). Recalling Equation (2.26), we have

\[ \left(\frac{I}{\Delta t_{k+1}} + \frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k+1} = -\frac{\partial\bar{R}}{\partial\bar{U}}\,\Delta\bar{U}^{k} + O\big(\Delta\bar{U}^{k}\big)^{2}. \]

Replacing k by k-1 gives

\[ \left(\frac{I}{\Delta t_{k}} + \frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k} = -\frac{\partial\bar{R}}{\partial\bar{U}}\,\Delta\bar{U}^{k-1} + O\big(\Delta\bar{U}^{k-1}\big)^{2}. \]

Combining these two equations and dropping the high order terms yields

\[ \left(\frac{I}{\Delta t_{k+1}} + \frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k+1} - \left(\frac{I}{\Delta t_{k}} + \frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k} = -\frac{\partial\bar{R}}{\partial\bar{U}}\left(\Delta\bar{U}^{k} - \Delta\bar{U}^{k-1}\right). \quad (A.1) \]

Retrieving Equation (2.12) and Equation (2.22),

\[ \bar{U}^{k} = \bar{U}^{k-1} + \alpha\,\delta\bar{U}^{k}, \qquad \bar{U}^{k}_{p} = \bar{U}_{s} + \Delta\bar{U}^{k}, \]

and combining these two equations, we obtain

\[ \Delta\bar{U}^{k} - \Delta\bar{U}^{k-1} = \alpha\,\delta\bar{U}^{k}. \quad (A.2) \]

Using Equation (A.2) in Equation (A.1), we have

\[ \left(\frac{I}{\Delta t_{k+1}} + \frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k+1} = \left(\frac{I}{\Delta t_{k}} + \frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k} - \alpha\,\frac{\partial\bar{R}}{\partial\bar{U}}\,\delta\bar{U}^{k}, \]

or, in compound form,

\[ \left(\frac{I}{\Delta t_{k+1}} + \frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k+1} = \left(\frac{I}{\Delta t_{k}} + (1-\alpha)\,\frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k}. \quad (A.3) \]

For backward Euler time-stepping, Δt is constant, and replacing k by k-1 gives

\[ \left(\frac{I}{\Delta t} + \frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k} = \left(\frac{I}{\Delta t} + (1-\alpha)\,\frac{\partial\bar{R}}{\partial\bar{U}}\right)\delta\bar{U}^{k-1}. \quad (A.4) \]

A.2 The Mapping for the Flux Integral

The computational solution at the kth iteration is denoted by \(\bar{U}^{k}\), and based on this solution the flux integral is \(R(\bar{U}^{k})\) (A.5). Recalling Equation (2.22), \(\bar{U}^{k}_{p} = \bar{U}_{s} + \Delta\bar{U}^{k}\), and applying a Taylor expansion, we get

\[ R(\bar{U}^{k}) = R(\bar{U}_{s} + \Delta\bar{U}^{k}) = R(\bar{U}_{s}) + \frac{\partial\bar{R}}{\partial\bar{U}}\,\Delta\bar{U}^{k}. \]

Since \(R(\bar{U}_{s}) = 0\), we have

\[ R(\bar{U}^{k}) = \frac{\partial\bar{R}}{\partial\bar{U}}\,\Delta\bar{U}^{k}. \quad (A.6) \]

Replacing k by k-1, we have

\[ R(\bar{U}^{k-1}) = \frac{\partial\bar{R}}{\partial\bar{U}}\,\Delta\bar{U}^{k-1}. \quad (A.7) \]

Recalling Equation (2.31),

\[ \left(\frac{I}{\Delta t_{k}} + \frac{\partial\bar{R}}{\partial\bar{U}}\right)\Delta\bar{U}^{k} = \left(\frac{I}{\Delta t_{k}} + (1-\alpha)\,\frac{\partial\bar{R}}{\partial\bar{U}}\right)\Delta\bar{U}^{k-1}, \]

and applying Equation (A.6) and Equation (A.7), we obtain

\[ \frac{I}{\Delta t_{k}}\left(\Delta\bar{U}^{k} - \Delta\bar{U}^{k-1}\right) = (1-\alpha)\,R(\bar{U}^{k-1}) - R(\bar{U}^{k}). \quad (A.8) \]

Using Equation (A.2) in Equation (A.8), we have

\[ \frac{I}{\Delta t_{k}}\left(\alpha\,\delta\bar{U}^{k}\right) = (1-\alpha)\,R(\bar{U}^{k-1}) - R(\bar{U}^{k}). \quad (A.9) \]

In this thesis, α = 1 and Δt is constant, so we have

\[ \frac{I}{\Delta t}\,\delta\bar{U}^{k} = -R(\bar{U}^{k}), \quad (A.10) \]

or

\[ \delta\bar{U}^{k} = -\Delta t\,R(\bar{U}^{k}). \]
Substituting Equation (A.11) into Equation (A.4) and applying $\alpha=1$, we have
\[
\left(\frac{I}{\Delta t}+\left.\frac{\partial\bar{R}}{\partial\bar{U}}\right|_{\bar{U}=\bar{U}_{s}}\right)R\left(\bar{U}^{k}\right)
=\frac{1}{\Delta t}\,R\left(\bar{U}^{k-1}\right).
\tag{A.12}
\]

Appendix B

Angle Error Estimation

The angle between the computational error and the eigenvector $x_{0}$ is defined as
\[
\theta=\arccos\left(\frac{x_{0}^{*}\,\Delta\bar{U}^{k}}{\left|x_{0}^{*}\right|\left|\Delta\bar{U}^{k}\right|}\right).
\tag{B.1}
\]
Applying the decomposition of $\Delta\bar{U}^{k}$ from Equation (2.48), for $x_{0}^{*}\Delta\bar{U}^{k}$ we have
\[
x_{0}^{*}\Delta\bar{U}^{k}
=x_{0}^{*}\left(r_{0}^{k}\sum_{i=0}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right)
=x_{0}^{*}\,r_{0}^{k}\left(x_{0}+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right)
=r_{0}^{k}\left(x_{0}^{*}x_{0}+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{0}^{*}x_{i}\right).
\tag{B.2}
\]
Similarly, for $\left|x_{0}^{*}\right|\left|\Delta\bar{U}^{k}\right|$ we have
\[
\left|x_{0}^{*}\right|\left|\Delta\bar{U}^{k}\right|
=\left|x_{0}^{*}\right|\left|r_{0}^{k}\sum_{i=0}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right|.
\tag{B.3}
\]
Carrying the derivation further,
\[
\left|x_{0}^{*}\right|\left|r_{0}^{k}\sum_{i=0}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right|
=\sqrt{x_{0}^{*}x_{0}}\,\sqrt{\left(r_{0}^{k}\sum_{i=0}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right)^{*}\left(r_{0}^{k}\sum_{i=0}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right)}.
\tag{B.4}
\]
Therefore,
\begin{align*}
\left(\left|x_{0}^{*}\right|\left|r_{0}^{k}\sum_{i=0}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right|\right)^{2}
&=x_{0}^{*}x_{0}\left(r_{0}^{k}\sum_{i=0}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right)^{*}\left(r_{0}^{k}\sum_{i=0}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right)\\
&=r_{0}^{2k}\left(x_{0}^{*}x_{0}\right)\left(x_{0}+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right)^{*}\left(x_{0}+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right)\\
&=r_{0}^{2k}\left(x_{0}^{*}x_{0}\right)\left(x_{0}^{*}x_{0}+2\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}^{*}x_{0}+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{2k}e^{2Ik\theta_{i}}x_{i}^{*}x_{i}+O\left(\frac{r_{1}}{r_{0}}\frac{r_{2}}{r_{0}}\right)^{k}\right)\\
&=r_{0}^{2k}\left(\left(x_{0}^{*}x_{0}\right)\left(x_{0}^{*}x_{0}\right)+2\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}\left(x_{i}^{*}x_{0}\right)\left(x_{0}^{*}x_{0}\right)+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{2k}e^{2Ik\theta_{i}}\left(x_{i}^{*}x_{0}\right)\left(x_{i}^{*}x_{0}\right)\right.\\
&\qquad\left.{}+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{2k}e^{2Ik\theta_{i}}\left(\left(x_{i}^{*}x_{i}\right)\left(x_{0}^{*}x_{0}\right)-\left(x_{i}^{*}x_{0}\right)\left(x_{i}^{*}x_{0}\right)\right)+O\left(\frac{r_{1}}{r_{0}}\frac{r_{2}}{r_{0}}\right)^{k}\right)\\
&=r_{0}^{2k}\left(\left(\left(x_{0}^{*}x_{0}\right)+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}\left(x_{i}^{*}x_{0}\right)\right)^{2}+O\left(\frac{r_{1}}{r_{0}}\right)^{2k}\right).
\tag{B.5}
\end{align*}
As a result,
\[
\left|x_{0}^{*}\right|\left|r_{0}^{k}\sum_{i=0}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{i}\right|
=r_{0}^{k}\left(\left|\left(x_{0}^{*}x_{0}\right)+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}\left(x_{i}^{*}x_{0}\right)\right|+O\left(\frac{r_{1}}{r_{0}}\right)^{2k}\right).
\tag{B.6}
\]
Therefore, we have
\[
\frac{x_{0}^{*}\Delta\bar{U}^{k}}{\left|x_{0}^{*}\right|\left|\Delta\bar{U}^{k}\right|}
=\frac{r_{0}^{k}\left(x_{0}^{*}x_{0}+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{0}^{*}x_{i}\right)}
{r_{0}^{k}\left(\left|x_{0}^{*}x_{0}+\sum_{i=1}^{n-1}a_{0i}\left(\frac{r_{i}}{r_{0}}\right)^{k}e^{Ik\theta_{i}}x_{0}^{*}x_{i}\right|+O\left(\frac{r_{1}}{r_{0}}\right)^{2k}\right)}
=\pm 1+O\left(\left(\frac{r_{1}}{r_{0}}\right)^{2k}\right).
\tag{B.7}
\]
For the angle, we have
\[
\theta=\arccos\left(\frac{x_{0}^{*}\Delta\bar{U}^{k}}{\left|x_{0}^{*}\right|\left|\Delta\bar{U}^{k}\right|}\right)
=\arccos\left(\pm 1+O\left(\left(\frac{r_{1}}{r_{0}}\right)^{2k}\right)\right)
=\pi\ \text{or}\ 0+O\left(\left(\frac{r_{1}}{r_{0}}\right)^{k}\right).
\tag{B.8}
\]
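The mappings derived above can be checked numerically on a linear model problem. The sketch below is an illustrative assumption, not the thesis's solver: it takes a linear residual R(U) = J (U − U_s) with a symmetric stand-in Jacobian J of prescribed spectrum (the names J, dt, n, Us, Q are all invented for the example), runs backward Euler updates with α = 1, and verifies Equations (A.11) and (A.12) at every step. It also checks the Appendix B prediction: since the error is propagated by M = (I + Δt J)⁻¹, it aligns with the dominant eigenvector x₀, so the cosine in Equation (B.1) approaches ±1, i.e. θ → 0 or π per Equation (B.8).

```python
import numpy as np

# Linear model residual R(U) = J (U - Us), so dR/dU = J and R(Us) = 0.
rng = np.random.default_rng(0)
n = 8
lam = np.arange(1.0, n + 1)              # prescribed eigenvalues of J (assumed)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
J = Q @ np.diag(lam) @ Q.T               # symmetric Jacobian with known spectrum
dt = 0.5                                 # constant time step (backward Euler)
Us = rng.standard_normal(n)              # steady state
R = lambda U: J @ (U - Us)               # linear "flux integral"

A = np.eye(n) / dt + J                   # I/dt + dR/dU
U = Us + Q @ np.ones(n)                  # initial error spread over all eigenmodes

for k in range(20):
    U_prev = U
    dU = np.linalg.solve(A, -R(U))       # solve (I/dt + J) dU = -R(U^{k-1})
    U = U + dU
    # Equation (A.11): dU^k = -dt R(U^k) (exact for a linear residual, alpha = 1)
    assert np.allclose(dU, -dt * R(U))
    # Equation (A.12): (I/dt + dR/dU) R(U^k) = (1/dt) R(U^{k-1})
    assert np.allclose(A @ R(U), R(U_prev) / dt)

# Appendix B: Delta U^k = M^k Delta U^0 with M = (I + dt J)^{-1}, whose dominant
# eigenvector is J's smallest-eigenvalue eigenvector Q[:, 0]; the angle of
# Equation (B.1) therefore tends to 0 or pi (Equation (B.8)).
x0 = Q[:, 0]
err = U - Us
cos_theta = x0 @ err / (np.linalg.norm(x0) * np.linalg.norm(err))
print(abs(cos_theta))                    # approaches 1 as k grows
```

With the eigenvalue ratio r₁/r₀ = (1 + Δt·1)/(1 + Δt·2) = 0.75 here, the misalignment decays like 0.75ᵏ, matching the O((r₁/r₀)ᵏ) rate in Equation (B.8).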
