A Parallel Active-Set Method for Solving Frictional Contact Problems

by

Joshua Alexander Litven

B.Sc., The University of Waterloo, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

February 18, 2013

© Joshua Alexander Litven 2012

Abstract

Simulating frictional contact is a challenging computational task, and a variety of techniques exist to do so. One such technique, the staggered projections algorithm, requires the solution of two convex quadratic program (QP) subproblems at each iteration. We introduce a method, SCHURPA, which employs a primal-dual active-set strategy to efficiently solve these QPs based on a Schur-complement method. A single factorization of the initial saddle point system and a smaller dense Schur complement are maintained to solve subsequent saddle point systems. Exploiting the parallelizability and warm-starting capabilities of the active-set method, as well as the problem structure of the QPs, yields a novel approach to the problem of frictional contact. Numerical results of a parallel GPU implementation using NVIDIA's CUDA, applied to a physical simulator of highly deformable bodies, are presented.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
List of Algorithms
Acknowledgments
1 Introduction
  1.1 Problem Statement
  1.2 Solution Methodology
    1.2.1 The Primal-Dual Active-Set Method
    1.2.2 SCHURPA
  1.3 Contributions of Thesis
    1.3.1 Notation
2 Active-Set Methods for Quadratic Programming
  2.1 Quadratic Programming
  2.2 Overview of Active-Set Methods
  2.3 Classical ASMs
  2.4 The Primal-Dual ASM
    2.4.1 Derivation via the Semismooth Newton Method
    2.4.2 Comparison to Classical ASMs
3 SCHURPA: A Method for Solving Saddle Point Systems Arising from PDASM
  3.1 Limitations of Updating Factorizations
  3.2 The Schur-Complement Method
  3.3 Details of the SCHURPA Algorithm
4 The Eulerian Solids Simulator
  4.1 Overview
    4.1.1 Simulating Objects Without Contact
    4.1.2 Simulating Objects With Contact
  4.2 Adding Friction Using Staggered Projections
    4.2.1 Friction Model
    4.2.2 The Staggered Projections Algorithm
  4.3 Applying SCHURPA to Frictional Contact
  4.4 The SCHURPA-SP Algorithm
5 Solving the Contact and Friction QPs
  5.1 Saddle Point Systems Arising in Contact and Friction
    5.1.1 Contact
    5.1.2 Friction
  5.2 Banded SPD Systems
  5.3 Block Solver for Banded SPD Systems
6 GPU Implementation
  6.1 CUDA Programming on the GPU
    6.1.1 CUDA Programming Model
  6.2 Custom Kernels for SCHURPA
  6.3 CUDA Libraries
7 Results
  7.1 Randomly Generated QPs
  7.2 Frictional Contact in the Simulator
8 Conclusion
Bibliography
Appendices
A Data Storage Formats
B Additional Code

List of Tables

6.1 CPU vs. GPU comparison.
7.1 Statistics of the GPU-based implementations of SCHURPA and DIRECT.
A.1 Data storage formats for dense and symmetric banded matrices.

List of Figures

4.1 Nodal shape functions for a contact cell Ω_c on a 2D grid.
5.1 Example of cell adjacency.
5.2 Typical banded structure of the contact Hessian.
5.3 Comparison of the banded Cholesky factorization and solver runtimes.
6.1 Computational grid of threads executed in parallel.
6.2 Runtimes of the symbgmm operation.
7.1 Runtimes of DIRECT and SCHURPA algorithms.
7.2 Decomposition of the GPU-based SCHURPA runtimes.
7.3 Simulation: Cylinder collision.
7.4 Simulation: Proportion of correct indices in A0_c and A0_f.
7.5 Simulation: Number of contacts, dimensions of G_C and G_F.
7.6 Simulation: Number of SCHURPA-SP solves.
7.7 Simulation: DIRECT and SCHURPA runtimes.
7.8 Simulation: Speedup ratios of SCHURPA to DIRECT.

List of Algorithms

2.1 Generic Active-set Method
2.2 Primal-dual Active-set Method
3.1 SCHURPA-INIT
3.2 SCHURPA
4.1 Staggered Projections
4.2 SCHURPA-SP
5.1 C0_solve
5.2 F0_solve
5.3 Block Substitution
Acknowledgments

First and foremost, I would like to thank both of my supervisors, Chen Greif and Dinesh K. Pai, for their outstanding guidance. Their curiosity, insights, and passion are truly inspiring. I greatly appreciate the thoughtfulness and understanding that they have shown me. I would like to thank David I.W. Levin for teaching me many important things about life. Special thanks go to Ye Fan, who coded the Eulerian solids simulator. I am also grateful to Uri Ascher for reading this thesis and providing excellent feedback. Finally, I would like to thank my family for their unconditional love and support throughout the years.

Chapter 1: Introduction

1.1 Problem Statement

Physical simulation, the computational process of simulating the physical objects we observe in reality, is an important and wide-ranging area of modern research. As such, efficient and scalable methods for simulating physical phenomena of ever increasing size and complexity are highly desirable. In particular, simulating realistic frictional contact has applications ranging from computer graphics to biomechanics.

An Eulerian solids simulator, presented in (Levin et al., 2011), simulates highly deformable solid bodies undergoing frictionless contact. The simulator utilizes the finite volume method, permitting a parallel implementation on the GPU using CUDA (NVIDIA, 2012b). However, the simulator is currently incapable of simulating frictional contact, which prevents it from capturing important phenomena such as hands grasping objects or fingers sliding across a surface. Incorporating frictional contact into the simulator is therefore highly desirable.

Resolving frictional contact dynamics is a challenging task, and several techniques have been proposed to do so (Jean, 1999; Stewart and Trinkle, 1996; Kaufman et al., 2008). The staggered projections (SP) algorithm, introduced in (Kaufman et al., 2008), is an attractive method because it is robust and often converges rapidly. Since the simulator implementation is parallel and scalable, the frictional contact solution procedure should share these properties.

The work presented in this thesis details an efficient parallel solution framework which incorporates the staggered projections algorithm to resolve frictional contact dynamics in the Eulerian solids simulator. We demonstrate the effectiveness and scalability of our method with a GPU implementation.

1.2 Solution Methodology

The staggered projections algorithm resolves frictional contact by iteratively solving a pair of convex quadratic program (QP) subproblems. A general QP may be given as

\[
\begin{aligned}
\underset{x}{\text{minimize}} \quad & \tfrac{1}{2}x^T H x - c^T x \\
\text{subject to} \quad & A^T x \le b, \\
& E^T x = d,
\end{aligned} \tag{1.1}
\]

where $c, x \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, $d \in \mathbb{R}^p$, $A \in \mathbb{R}^{n \times m}$, $E \in \mathbb{R}^{n \times p}$, and $H \in \mathbb{R}^{n \times n}$. The QP (1.1) is convex if $H$ is symmetric and positive semi-definite.

QPs are ubiquitous in a diverse array of fields, and arise as subproblems of general-purpose optimization schemes such as the sequential quadratic programming method (Nocedal and Wright, 1999, Chapter 18). As such, solving these problems efficiently has been the subject of decades of research and remains an active area to this day.
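To make the form (1.1) concrete before specialized methods are introduced, the following minimal sketch solves a tiny, made-up instance (inequality constraints only) with SciPy's general-purpose SLSQP solver; the data and the use of SLSQP are illustrative assumptions, not part of this thesis's method.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up 2-variable instance of (1.1) with one inequality constraint
# x1 + x2 <= 1 and no equality constraints.
H = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([2.0, 3.0])
A = np.array([[1.0], [1.0]])
b = np.array([1.0])

res = minimize(
    lambda x: 0.5 * x @ H @ x - c @ x,
    x0=np.zeros(2),
    jac=lambda x: H @ x - c,
    constraints=[{"type": "ineq", "fun": lambda x: b - A.T @ x}],
    method="SLSQP",
)
print(res.x)  # ~ (0.25, 0.75): the unconstrained optimum (1, 1.5) is infeasible
```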
The fruit of these efforts is a plethora of algorithmic methodologies, each with strengths and weaknesses in their ability to handle sparse data, scalability, robustness, and accuracy. One such methodology is the class of algorithms known as active-set methods (ASMs).

A particularly attractive trait of these methods is their ability to warm start: a good initial guess of the solution to the QP being solved significantly improves the convergence rate of these algorithms. In physical simulation, a good guess of an optimal solution arises naturally due to temporal coherence. More precisely, at each time step of the simulation loop, the induced QPs are in some sense a small perturbation of the QP solved at the previous time step, so the previous solution provides a natural initial estimate. The size of this perturbation depends on the specific implementation, the time step size, and other factors. Thus ASMs are a natural approach to solving QPs arising in physical simulation. However, there are two fundamental limitations of the classical ASMs:

• they cannot handle sparse data, which limits their applicability to large-scale problems, and
• they cannot be parallelized easily, often requiring many cheap but sequential iterations.

In contrast, the class of interior point methods (IPs) can handle sparse data, and IPs have consequently become the more popular method for solving large problems. However, IPs lack the warm-starting capabilities ASMs enjoy (Wright, 1987, Chapter 11). To overcome the limitations of the classical ASM approaches, we leverage the primal-dual active-set method (PDASM) (Ito and Kunisch, 2008) to retain the benefits of ASMs while being able to solve large, sparse QPs efficiently. PDASM is central to the work of this thesis.

1.2.1 The Primal-Dual Active-Set Method

Typically, each iteration of an ASM selects a working set consisting of a subset of the constraints of (1.1). This working set serves as a guess of the optimal active set and yields an equality-constrained subproblem to be solved, or equivalently, a saddle point linear system. Unlike classical ASMs, which ensure primal or dual feasibility across iterates, PDASM can violate both primal and dual feasibility. Instead, PDASM induces the subproblem by enforcing complementarity to hold. One can show that PDASM is equivalent to a semismooth Newton method applied to the KKT conditions of (1.1) (Hintermüller et al., 2002) (see Section 2.4). Two fundamental differences between classical ASMs and PDASM are:

• PDASM allows multiple changes to the working set per iteration, enabling it to quickly identify the optimal active set, and
• being a semismooth Newton method, PDASM converges super-linearly; typically very few iterations are required.

These properties imply that PDASM overcomes the previously mentioned limitations of classical ASMs. Super-linear convergence allows the method to scale well with problem size, and in Chapter 3 we exploit the multiple changes to the working set in our parallel implementation.

Recently, PDASM has been successfully applied to constrained optimal control (Bergounioux et al., 1999; Kunisch and Rösch, 2002), contact and friction (Brunssen et al., 2007; Hüeber and Wohlmuth, 2005), and other problems (Hintermüller, 2004). The convergence properties of PDASM are an active area of research; Kunisch and Ito showed global convergence for certain classes of Hessians (Ito and Kunisch, 2008).
In (Kunisch and Rendl, 2003), it was observed that PDASM quickly eliminates the inactive constraints from the current estimate of the active set at a given iteration. This observation served as a motivation for our work.

1.2.2 SCHURPA

The efficiency of an ASM is determined by the method used to solve the saddle point systems. In classical methods, a factorization of the initial saddle point system $K_0$ is produced, and the factors are then updated, e.g., by orthogonal factorizations. This updating process is what can potentially destroy sparsity.

Another approach uses a factorization of $K_0$ and, rather than updating factors, solves subsequent saddle point systems in an expanded form, given as

\[
\begin{bmatrix} K_0 & U \\ U^T & V \end{bmatrix}
\begin{bmatrix} y \\ \pi \end{bmatrix}
=
\begin{bmatrix} c \\ w \end{bmatrix}. \tag{1.2}
\]

Equation (1.2) is solved via the factorization of $K_0$ and the Schur-complement matrix

\[
C = V - U^T K_0^{-1} U.
\]

This Schur-complement method was used in (Gill et al., 1990) and (Bartlett and Biegler, 2006), which describe primal and dual methods, respectively. Assuming the initial working set is sufficiently close to the optimal active set, the Schur-complement matrix $C$ remains small. This assumption depends on the ability of the warm-starting strategy to accurately reflect the changes of the active set over time. The motivation of the previously mentioned work was to solve the quadratic programming subproblems arising in the sequential quadratic programming (SQP) method for nonlinear optimization. In SQP, the nonlinear inequality constraints of the original problem are linearized, and as the solution converges, the active sets of the linearized constraints also converge. Thus, employing warm starts for the QPs results in rapid convergence of the ASMs. The primal Schur-complement approach was successfully implemented in (Betts, 1994) to solve sparse nonlinear problems.

In classical ASMs, a single change of the working set occurs per iteration. This corresponds to a single solve of the form $K_0^{-1}u$ to form the Schur-complement matrix $C$. In contrast, PDASM makes multiple changes to the working set per iteration, modifying the Schur-complement method to solves of the form $K_0^{-1}U$, where $U$ is a matrix encoding the changes of the working set. If warm-starting is used, one expects that the number of columns of $U$ will not be large. Computing $K_0^{-1}U$ amounts to $l$ solves of the form

\[
K_0 x_i = u_i, \quad i = 1, \dots, l,
\]

which direct methods can solve efficiently once $K_0$ is factorized. Furthermore, a parallel implementation of the solve with $K_0$ can significantly improve the performance of the Schur-complement method. Ideally, if the solves were fully parallelized, an iteration of PDASM and an iteration of a classical ASM would take roughly the same time; since PDASM requires far fewer iterations than classical ASMs to make the equivalent changes to an initial working set, PDASM is expected to perform much better. We demonstrate this improvement via a parallel implementation, called SCHURPA (for SCHUR-complement method in PArallel), on the GPU using NVIDIA's CUDA.
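The following minimal dense sketch shows the linear algebra just described: one factorization of $K_0$ is reused for every expanded system (1.2), and only the small dense Schur complement is formed per solve. The function names are ours, and the dense LU stands in for whatever application-specific factorization backs K0_solve later in the thesis.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def expanded_solve(K0_lu, U, V, c, w):
    """Solve the expanded system (1.2) reusing one factorization of K0.

    Only the small dense Schur complement C = V - U^T K0^{-1} U is formed
    and factorized per call; K0 itself is never refactorized.
    """
    R = lu_solve(K0_lu, U)        # l independent solves K0 r_i = u_i,
                                  # performed in one multi-RHS (BLAS-3) call
    v = lu_solve(K0_lu, c)
    C = V - U.T @ R
    pi = np.linalg.solve(C, w - U.T @ v)
    y = v - R @ pi                # equals K0^{-1} (c - U pi)
    return y, pi

# Usage: factor once, then reuse for every subsequent saddle point system.
# K0_lu = lu_factor(K0); y, pi = expanded_solve(K0_lu, U, V, c, w)
```

The multi-right-hand-side solve is exactly the step that parallelizes: each column of $U$ is an independent solve with the same factors.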
1.3 Contributions of Thesis

Our contribution in this thesis is a solution framework which utilizes SCHURPA to solve the QP subproblems of SP. More specifically, we:

• introduce SCHURPA, a Schur-complement approach to solving the saddle point systems arising in PDASM;
• integrate SCHURPA into SP: we reformulate the problem of friction to be amenable to SCHURPA, and show that in the case of the simulator the saddle point systems may be phrased as banded SPD systems; and
• demonstrate the effectiveness of SCHURPA with a GPU implementation.

The thesis is organized as follows. Chapter 2 overviews the general class of active-set methods and introduces the primal-dual active-set method, showing its equivalence to a semismooth Newton method. Chapter 3 introduces SCHURPA, our method for solving the saddle point systems in PDASM, which is conducive to a parallel implementation. We then describe the Eulerian solids simulation framework in Chapter 4, where the problem of adding friction to the simulator is discussed along with the staggered projections algorithm. Chapter 5 describes how we exploit the problem structure of the QP subproblems in SP to solve them efficiently. Our parallel GPU implementation is discussed in Chapter 6. Results of SCHURPA on synthetic data and in the simulator are presented in Chapter 7. Conclusions and future work are given in Chapter 8.

1.3.1 Notation

We use upper case for matrices and lower case for vectors, and Greek letters generally refer to scalars. We denote the standard basis of $\mathbb{R}^n$ by $\{e_1, \dots, e_n\}$, i.e., $e_i = [\delta_{ij}]_j \in \mathbb{R}^n$. The notation $V = \{v_i\}_{i \in S}$ defines the matrix $V$ to have column vectors $v_i$ indexed by the set $S$, and $V = \{v_i^T\}_{i \in S}$ is defined analogously with row vectors $v_i^T$. Quantities with bars denote values at the next iteration (e.g., $\bar{x}$), while initial quantities are subscripted by 0 (e.g., $x_0$). The active set of a primal variable $x$, denoted $\mathcal{A}(x)$, is the subset of constraints which hold with equality at $x$. We denote an optimal solution to an optimization problem by $x^*$.

Chapter 2: Active-Set Methods for Quadratic Programming

Active-set methods (ASMs) are a class of algorithmic methodologies for solving QPs. A particularly attractive trait of ASMs is their ability to warm-start with an estimate of the solution. In many applications an estimate of $\mathcal{A}(x^*)$, the active set at the optimal solution, is known; as discussed in Section 1.2, ASMs can effectively utilize this estimate to drastically reduce the number of iterations required for convergence.

In this chapter we outline ASMs in the context of solving convex QPs, which are described in Section 2.1. Section 2.2 outlines the background and derivation of ASMs and discusses the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for optimality of a convex optimization problem. We then briefly review the classical ASMs, the primal ASM and the dual ASM, in Section 2.3, and describe the method of updating factorizations common to ASMs. Unsatisfied with the limitations of these approaches, we turn in Section 2.4 to the primal-dual ASM; details of that algorithm can be found in (Ito and Kunisch, 2008).

2.1 Quadratic Programming

Convex quadratic programs (QPs) are an important class of optimization problems which may be stated as

\[
\begin{aligned}
\underset{x}{\text{minimize}} \quad & \tfrac{1}{2}x^T H x - c^T x \\
\text{subject to} \quad & a_i^T x \le b_i, \quad i \in I,
\end{aligned} \tag{P}
\]

where $c, x \in \mathbb{R}^n$, $I = \{1, 2, \dots, m\}$, $b = \{b_i\}_{i \in I} \in \mathbb{R}^m$, $A = \{a_i\}_{i \in I} \in \mathbb{R}^{n \times m}$ is the constraint Jacobian, and $H \in \mathbb{R}^{n \times n}$, the Hessian, is a symmetric positive semidefinite matrix, so that the problem is convex. The problem could also include equality constraints, but for the purposes of this thesis, and without loss of generality, they are left out. In this work we assume strict convexity (i.e., $H$ is positive definite) and feasibility of (P), so that the optimal solution $x^*$ exists and is unique.
2.2 Overview of Active-Set Methods

The class of ASMs (as well as most optimization algorithms) attempts to satisfy the Karush-Kuhn-Tucker conditions (Nocedal and Wright, 1999), which for (P) are given by

\[
\begin{aligned}
Hx^* - c + \sum_{i \in I} \lambda_i^* a_i &= 0 && (2.1a)\\
a_i^T x^* &\le b_i, \quad i \in I && (2.1b)\\
\lambda_i^* &\ge 0, \quad i \in I && (2.1c)\\
(a_i^T x^* - b_i)\,\lambda_i^* &= 0, \quad i \in I. && (2.1d)
\end{aligned}
\]

For convex problems these are necessary and sufficient conditions for primal and dual variables $x^*$ and $\lambda^*$, respectively, to be an optimal primal-dual pair, assuming some constraint qualification is satisfied. For example, the linear independence constraint qualification (LICQ) assumes the set $\{a_i\}_{i \in \mathcal{A}^*}$ is linearly independent. When the inequality and equality constraint functions are affine, as in (P), no constraint qualification is needed. The KKT conditions can equivalently be written

\[
\begin{aligned}
Hx^* - c + \sum_{i \in \mathcal{A}^*} \lambda_i^* a_i &= 0 && (2.2a)\\
a_i^T x^* &= b_i, \quad i \in \mathcal{A}^* && (2.2b)\\
a_i^T x^* &< b_i, \quad i \in I \setminus \mathcal{A}^* && (2.2c)\\
\lambda_i^* &\ge 0, \quad i \in \mathcal{A}^* && (2.2d)\\
\lambda_i^* &= 0, \quad i \in I \setminus \mathcal{A}^*. && (2.2e)
\end{aligned}
\]

Here we have used the fact that $\lambda_i^* = 0$ for $i \in I \setminus \mathcal{A}^*$, which follows from Equations (2.1b) and (2.1d). Consider the equality-constrained subproblem

\[
\begin{aligned}
\underset{x}{\text{minimize}} \quad & \tfrac{1}{2}x^T H x - c^T x \\
\text{subject to} \quad & a_i^T x = b_i, \quad i \in \mathcal{A}^*.
\end{aligned} \tag{2.3}
\]

The KKT conditions of (2.3) are

\[
\begin{aligned}
H\hat{x} - c + \sum_{i \in \mathcal{A}^*} \hat{\lambda}_i a_i &= 0 && (2.4a)\\
a_i^T \hat{x} &= b_i, \quad i \in \mathcal{A}^*. && (2.4b)
\end{aligned}
\]

By strict convexity, $\hat{x}$ is unique. Since the conditions (2.4) are a subset of the KKT conditions (2.2), the optimal solution of (P) is also the optimal solution of (2.3), and since both solutions are unique, solving the subproblem also solves the full QP. Thus, if $\mathcal{A}^*$ were known a priori, one could simply solve (2.3) to solve the original problem. Active-set methods employ strategies for finding $\mathcal{A}^*$.

Beginning with a primal-dual pair $(x_0, \lambda_0)$, each iteration $k$ of an ASM updates a working set $\mathcal{A}_k \subseteq I$ of constraints which represents a guess of $\mathcal{A}^*$. The subproblem

\[
\begin{aligned}
\underset{x}{\text{minimize}} \quad & \tfrac{1}{2}x^T H x - c^T x \\
\text{subject to} \quad & a_i^T x = b_i, \quad i \in \mathcal{A}_k
\end{aligned} \tag{2.5}
\]

is solved, resulting in an optimal primal-dual pair $(\hat{x}, \hat{\lambda})$ for the subproblem. The step directions

\[
p = \hat{x} - x^k, \qquad \mu = \hat{\lambda} - \lambda^k
\]

are then used to update the primal and dual iterates. A generic ASM is given in Algorithm 2.1.

Algorithm 2.1 Generic Active-set Method
  Input: x_0, λ_0, 𝒜_0
  for k = 0, 1, 2, ... do
    Solve (2.5) defined by 𝒜_k for the step directions p and µ
    Step x_{k+1} := x_k + αp, α ∈ [0, 1]
         λ_{k+1} := λ_k + βµ, β ∈ [0, 1]
    if the KKT conditions of (P) are satisfied then
      DONE; x* = x_{k+1} and λ* = λ_{k+1}
    else
      Choose 𝒜_{k+1}
    end if
  end for
  Output: x*, λ*

The computational "meat" of Algorithm 2.1 lies in solving (2.5), which may be rewritten as the symmetric indefinite linear system (called the saddle point system)

\[
\begin{bmatrix} H & A_k \\ A_k^T & 0 \end{bmatrix}
\begin{bmatrix} x \\ \lambda \end{bmatrix}
=
\begin{bmatrix} c \\ b_k \end{bmatrix}, \tag{2.6}
\]

where $A_k = \{a_i\}_{i \in \mathcal{A}_k}$ and $b_k = \{b_i\}_{i \in \mathcal{A}_k}$. The saddle point matrix

\[
K_k = \begin{bmatrix} H & A_k \\ A_k^T & 0 \end{bmatrix}
\]

is a special case of the symmetric indefinite matrix

\[
M = \begin{bmatrix} A & B \\ B^T & 0 \end{bmatrix}.
\]

The following theorem is stated without proof (see, e.g., (Nocedal and Wright, 1999)).

Theorem 2.1. $M$ is nonsingular iff $A$ is positive definite on the nullspace of $B^T$.

Due to the assumption that $H$ is positive definite, we get the following corollary.

Corollary 2.1. $K_k$ is nonsingular iff $A_k^T$ has full row rank.

From Corollary 2.1 it follows that $\mathcal{A}_0$ must prescribe a linearly independent set of constraints to ensure Equation (2.6) is solvable.
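The following sketch assembles and solves (2.6) directly for a given working set. It is dense and purely illustrative (the function name and conventions are ours), and it assumes the working-set columns of $A$ are linearly independent so that $K_k$ is nonsingular, per Corollary 2.1.

```python
import numpy as np

def solve_subproblem(H, c, A, b, W):
    """Assemble and solve the saddle point system (2.6) for working set W
    (an integer index array). Multipliers off the working set are zero."""
    n = H.shape[0]
    Ak = A[:, W]
    K = np.block([[H, Ak],
                  [Ak.T, np.zeros((len(W), len(W)))]])
    sol = np.linalg.solve(K, np.concatenate([c, b[W]]))
    x = sol[:n]
    lam = np.zeros(A.shape[1])
    lam[W] = sol[n:]              # multipliers vanish off the working set
    return x, lam
```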
2.3 Classical ASMs

The two classical variants of ASMs are the primal and dual algorithms. The primal ASM, described in (Nocedal and Wright, 1999), chooses the primal step length $\alpha$ such that $x_{k+1}$ is always feasible. The dual ASM, described in (Goldfarb and Idnani, 1983), chooses the dual step length $\beta$ ensuring $\lambda_{k+1}$ is feasible for the dual problem.

To ensure convergence, the primal ASM assumes the active constraints are linearly independent at each feasible vertex; the dual requires that $H$ be positive definite. Convergence proofs for these ASMs are based on showing that $f(x_{k+1}) < f(x_k)$ in the primal version and $f(x_{k+1}) > f(x_k)$ in the dual version; thus cycling can never occur. Since there are a finite number of active-set subsets, the algorithms converge (Nocedal and Wright, 1999). While this theoretical argument allows for a possibly exponential number of steps in the size of the input data, in practice this is rarely observed. ASMs typically add correct constraints and remove incorrect ones from the working set; thus their running time is proportional to $|\mathcal{A}_0 - \mathcal{A}^*|$. This is why ASMs benefit from warm-starting with a good estimate of $\mathcal{A}^*$.

In the primal and dual algorithms, one constraint is added to or removed from the working set $\mathcal{A}_k$ per iteration. Each saddle point system is not solved from scratch; rather, factorizations are updated as constraints are added and removed. In typical implementations these factors are dense, as maintaining sparse factorizations is difficult (Gill et al., 1987). We now give an example demonstrating the factorization update process. Consider the QP

\[
\begin{aligned}
\underset{x}{\text{minimize}} \quad & \tfrac{1}{2}x^T H x - c^T x \\
\text{subject to} \quad & x \ge 0,
\end{aligned} \tag{2.7}
\]

which is a special case of (P) with $A = -I$ and $b = 0$.

At iteration $k$ of Algorithm 2.1, $\mathcal{A}_k$ defines the sets of bound and free components of $x$, which we denote by $x_B$ and $x_F$, respectively; i.e., $B = \mathcal{A}_k$ and $F = I \setminus \mathcal{A}_k$. The resulting saddle point system is then

\[
\begin{bmatrix} H_{F,F} & H_{B,F}^T \\ H_{B,F} & H_{B,B} \end{bmatrix}
\begin{bmatrix} x_F \\ x_B \end{bmatrix}
-
\begin{bmatrix} \lambda_F \\ \lambda_B \end{bmatrix}
=
\begin{bmatrix} c_F \\ c_B \end{bmatrix}. \tag{2.8}
\]

Since $x_B = 0$ and $\lambda_F = 0$, we get

\[
H_{F,F}\, x_F = c_F, \tag{2.9}
\]

which can then be used to solve for $\lambda_B$ via

\[
\lambda_B = -c_B + H_{B,F}\, x_F. \tag{2.10}
\]

Solving the saddle point system this way reduces the problem to a symmetric positive definite system of size $|F|$, the number of free variables.

Suppose iteration $k+1$ binds the $i$th variable of $x_F$. Dropping the $F$ subscript for simplicity, we can write the current system (2.9) as

\[
\begin{bmatrix} H_{11} & h_{21} & H_{31}^T \\ h_{21}^T & h_{22} & h_{32}^T \\ H_{31} & h_{32} & H_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_i \\ x_3 \end{bmatrix}
=
\begin{bmatrix} c_1 \\ c_i \\ c_3 \end{bmatrix}.
\]

Then we can solve the next saddle point system by solving $\bar{H}\bar{x} = \bar{c}$, where

\[
\bar{H} = \begin{bmatrix} H_{11} & 0 & H_{31}^T \\ 0^T & 1 & 0^T \\ H_{31} & 0 & H_{33} \end{bmatrix}, \qquad
\bar{c} = \begin{bmatrix} c_1 \\ 0 \\ c_3 \end{bmatrix}.
\]

Suppose we have a Cholesky factorization of $H_{F,F}$, which we write as

\[
LL^T =
\begin{bmatrix} L_{11} & & \\ l_{21}^T & \alpha & \\ L_{31} & l_{32} & L_{33} \end{bmatrix}
\begin{bmatrix} L_{11}^T & l_{21} & L_{31}^T \\ & \alpha & l_{32}^T \\ & & L_{33}^T \end{bmatrix}
=
\begin{bmatrix} H_{11} & h_{21} & H_{31}^T \\ h_{21}^T & h_{22} & h_{32}^T \\ H_{31} & h_{32} & H_{33} \end{bmatrix}. \tag{2.11}
\]

We now wish to determine the Cholesky factorization of $\bar{H}$:

\[
\bar{L}\bar{L}^T =
\begin{bmatrix} \bar{L}_{11} & & \\ \bar{l}_{21}^T & \bar{\alpha} & \\ \bar{L}_{31} & \bar{l}_{32} & \bar{L}_{33} \end{bmatrix}
\begin{bmatrix} \bar{L}_{11}^T & \bar{l}_{21} & \bar{L}_{31}^T \\ & \bar{\alpha} & \bar{l}_{32}^T \\ & & \bar{L}_{33}^T \end{bmatrix}
=
\begin{bmatrix} H_{11} & 0 & H_{31}^T \\ 0^T & 1 & 0^T \\ H_{31} & 0 & H_{33} \end{bmatrix}.
\]

Writing out the relevant equations of (2.11) gives

\[
L_{11}L_{11}^T = H_{11}, \qquad
L_{31}L_{31}^T + l_{32}l_{32}^T + L_{33}L_{33}^T = H_{33}.
\]

We can now solve for the components of the updated Cholesky factors:

\[
\begin{aligned}
\bar{L}_{11}\bar{L}_{11}^T &= H_{11} = L_{11}L_{11}^T &&\Rightarrow\; \bar{L}_{11} = L_{11}\\
\bar{L}_{31}\bar{L}_{11}^T &= H_{31} = L_{31}L_{11}^T &&\Rightarrow\; \bar{L}_{31} = L_{31}\\
\bar{L}_{11}\bar{l}_{21} &= 0 &&\Rightarrow\; \bar{l}_{21} = 0\\
\bar{l}_{21}^T\bar{l}_{21} + \bar{\alpha}^2 &= 1 &&\Rightarrow\; \bar{\alpha} = 1\\
\bar{L}_{31}\bar{l}_{21} + \bar{\alpha}\bar{l}_{32} &= 0 &&\Rightarrow\; \bar{l}_{32} = 0\\
\bar{L}_{31}\bar{L}_{31}^T + \bar{l}_{32}\bar{l}_{32}^T + \bar{L}_{33}\bar{L}_{33}^T &= H_{33} &&\Rightarrow\; \bar{L}_{33}\bar{L}_{33}^T = L_{33}L_{33}^T + l_{32}l_{32}^T.
\end{aligned}
\]

Thus, as the last equation shows, binding a variable amounts to a rank-one update of a Cholesky factorization. It can be shown that freeing a variable corresponds to a rank-one downdate (Davis and Hager, 2006). Updating Cholesky factorizations after rank-one changes is detailed in the seminal work (Gill et al., 1974). All updating procedures require $O(n^2)$ floating point operations, as opposed to the $O(n^3)$ required for a full factorization.
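For reference, here is a standard $O(n^2)$ rank-one Cholesky update in the spirit of the schemes in (Gill et al., 1974); this sketch is our own and is not the thesis's implementation.

```python
import numpy as np

def chol_update(L, x):
    """Given lower-triangular L with L L^T = B, return the Cholesky factor
    of B + x x^T using Givens-style rotations, in O(n^2) operations."""
    L, x = L.copy(), x.copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```

Binding a variable in the example above corresponds to calling this routine on the trailing block, i.e., updating $L_{33}$ with $x = l_{32}$.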
In summary, classical ASMs admit several advantages over competing methods such as interior point methods (IPs). As mentioned, in practice ASMs can take advantage of warm-starting, whereas IPs have difficulty doing so (Wright, 1987, Chapter 11). Also, updating factors is cheap compared to the full factorizations required at each IP iteration. The tradeoff is the large number of iterations ASMs need when the working set requires many changes. We are therefore limited by classical ASMs because:

• they do not exploit sparsity, which limits their applicability to large-scale problems, and
• their iterations are inherently sequential, and therefore they cannot be parallelized.

As we will show in Chapter 3, the Schur-complement approach ameliorates the first limitation by solving Equation (2.6) in a different way: rather than updating factors, an initial saddle point system is factorized, and subsequent saddle point systems are expressed in terms of solves with the original saddle point matrix and a smaller Schur-complement matrix. We now address the second issue.

2.4 The Primal-Dual ASM

In an attempt to retain the advantages of ASMs, we turn to a more recent approach: the primal-dual active-set method (PDASM). A key divergence from its classical counterparts is the ability to make multiple changes to the active set per iteration. We will see that combining this with a linear algebra technique for solving the saddle point systems results in the parallel ASM we have been searching for (see Chapter 3).

2.4.1 Derivation via the Semismooth Newton Method

We follow (Hintermüller et al., 2002) in deriving PDASM. In a nutshell, PDASM is Algorithm 2.1 with $\alpha = \beta = 1$ and the next working set selected as

\[
\mathcal{A}_{k+1} = \{ i \in I \mid b_i - a_i^T x^{k+1} - \lambda_i^{k+1} < 0 \}. \tag{2.12}
\]

PDASM can be derived from a generalized Newton method applied to a set of semismooth equations. A comprehensive overview of semismooth equations and algorithms can be found in (Ulbrich, 2011).

We call a function semismooth if it is differentiable almost everywhere, mathematically speaking (that is, everywhere except on a set of measure zero). Consider the function

\[
\phi(a, b) = \min(a, b),
\]

which is differentiable everywhere except along the line $a = b$, and thus is semismooth. Other examples of semismooth functions include convex functions and piecewise differentiable functions. We have the following useful property.

Lemma 2.4.1. $\phi(a, b) = 0 \;\Leftrightarrow\; ab = 0,\; a \ge 0,\; b \ge 0$.

Proof. Suppose $\phi(a, b) = 0$. Then

\[
\min(a, b) = 0 \;\Leftrightarrow\; (a = 0 \text{ or } b = 0),\; a \ge 0,\; b \ge 0 \;\Leftrightarrow\; ab = 0,\; a \ge 0,\; b \ge 0,
\]

as required.

Corollary 2.4.2. The KKT conditions of (P) may be reformulated as the solution of the equations

\[
F(x, \lambda) = \begin{bmatrix} Hx + A\lambda - c \\ \Phi(x, \lambda) \end{bmatrix} = 0, \tag{2.13}
\]

where

\[
\Phi_i(x, \lambda) = \phi(b_i - a_i^T x,\; \lambda_i). \tag{2.14}
\]
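Corollary 2.4.2 gives a practical optimality test: the norm of the residual $F$ can be checked directly. A minimal sketch (names ours):

```python
import numpy as np

def semismooth_residual(H, c, A, b, x, lam):
    """Residual F(x, lam) of Equation (2.13). By Lemma 2.4.1, F = 0 exactly
    when (x, lam) satisfies the KKT conditions (2.1) of problem (P), so
    norm(F) serves as a numerical optimality test."""
    stationarity = H @ x + A @ lam - c
    complementarity = np.minimum(b - A.T @ x, lam)   # Phi_i of (2.14)
    return np.concatenate([stationarity, complementarity])
```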
Applying a generalized Newton method to semismooth equations was introduced in (Qi and Sun, 1993), and we briefly discuss the approach here. Let $F : \mathbb{R}^n \to \mathbb{R}^n$ and let $D_F \subseteq \mathbb{R}^n$ be the set of points where $F$ is differentiable. The set

\[
\partial_B F(x) \equiv \left\{ \lim_{x^k \to x} \nabla F(x^k) \;\middle|\; \{x^k\} \subseteq D_F \right\}
\]

is called the B-subdifferential of $F$ at $x$, where $\nabla F$ denotes the Jacobian of $F$. Intuitively, this is the set of limiting Jacobians at points approaching $x$. If $F$ is differentiable at $x$, then clearly $\partial_B F(x) = \{\nabla F(x)\}$. We may now formulate a generalized Newton iteration as

\[
x^{k+1} = x^k - V_k^{-1} F(x^k), \tag{2.15}
\]

where $V_k \in \partial_B F(x^k)$. If $F$ is semismooth at a solution $x^*$ of $F(x) = 0$ and all $V \in \partial_B F(x^*)$ are nonsingular, local super-linear convergence can be proved (Ito and Kunisch, 2008). Recall that in the differentiable case, quadratic convergence occurs if $\nabla F(x^*)$ is nonsingular.

We now apply the generalized Newton method to Equation (2.13). The subdifferential of $\Phi$ is given by

\[
\partial_B \Phi_i(x, \lambda) = \partial_B \phi(b_i - a_i^T x,\; \lambda_i) =
\begin{cases}
\{(-a_i,\, 0)^T\} & \text{if } b_i - a_i^T x < \lambda_i, \\
\{(0,\, e_i)^T\} & \text{if } b_i - a_i^T x > \lambda_i, \\
\{(-a_i,\, 0)^T,\; (0,\, e_i)^T\} & \text{if } b_i - a_i^T x = \lambda_i.
\end{cases}
\]

We arbitrarily choose $V_i = (0, e_i)^T \in \partial_B \Phi_i(x, \lambda)$ in the final case. Let

\[
\begin{aligned}
\mathcal{A} &= \{ i \in I \mid b_i - a_i^T x^k - \lambda_i^k < 0 \}, \\
\bar{\mathcal{A}} &= \{ i \in I \mid b_i - a_i^T x^k - \lambda_i^k \ge 0 \}.
\end{aligned}
\]

Assume without loss of generality that the equations are ordered first by $\mathcal{A}$ and subsequently by $\bar{\mathcal{A}}$. Then, taking the partial subdifferentials of $F$, the generalized Newton iteration (2.15) applied to (2.13) gives

\[
\begin{bmatrix} H & A \\ -A_{\mathcal{A}}^T & 0 \\ 0 & I_{\bar{\mathcal{A}}} \end{bmatrix}
\begin{bmatrix} x^{k+1} - x^k \\ \lambda^{k+1} - \lambda^k \end{bmatrix}
= -\begin{bmatrix} Hx^k + A\lambda^k - c \\ b_{\mathcal{A}} - A_{\mathcal{A}}^T x^k \\ \lambda_{\bar{\mathcal{A}}}^k \end{bmatrix},
\]

where $A_{\mathcal{A}} = \{a_i\}_{i \in \mathcal{A}}$, $I_{\bar{\mathcal{A}}} = \{e_i^T\}_{i \in \bar{\mathcal{A}}}$, and $b_{\mathcal{A}} = \{b_i\}_{i \in \mathcal{A}}$. Equivalently,

\[
\begin{aligned}
Hx^{k+1} + A\lambda^{k+1} &= c; && (2.16a)\\
A_{\mathcal{A}}^T x^{k+1} &= b_{\mathcal{A}}; && (2.16b)\\
\lambda_{\bar{\mathcal{A}}}^{k+1} &= 0, && (2.16c)
\end{aligned}
\]

and after substituting Equation (2.16c) into Equation (2.16a) we get

\[
\begin{aligned}
Hx^{k+1} - c + \sum_{i \in \mathcal{A}} \lambda_i^{k+1} a_i &= 0, \\
a_i^T x^{k+1} &= b_i, \quad i \in \mathcal{A}.
\end{aligned}
\]

Notice that these are exactly the KKT conditions for the equality-constrained QP

\[
\begin{aligned}
\underset{x}{\text{minimize}} \quad & \tfrac{1}{2}x^T H x - c^T x \\
\text{subject to} \quad & a_i^T x = b_i, \quad i \in \mathcal{A}.
\end{aligned} \tag{2.18}
\]

Thus we obtain an active-set procedure with the working set determined by $\mathcal{A}$, which is exactly the update defined in Equation (2.12). Intuitively, rather than maintaining primal or dual feasibility, PDASM satisfies the complementarity condition

\[
(a_i^T x^{k+1} - b_i)\,\lambda_i^{k+1} = 0, \quad i \in I,
\]

which follows from Equations (2.16b) and (2.16c). Since either $b_i - a_i^T x^{k+1} = 0$ or $\lambda_i^{k+1} = 0$, the working set $\mathcal{A}$ chooses the former to hold if $b_i - a_i^T x^k < \lambda_i^k$; otherwise it ensures the latter holds. The super-linear convergence of the generalized Newton method applies to PDASM, and in practice very few iterations are required to obtain convergence (see Chapter 7). Combined with being an active-set method, and therefore amenable to warm-starting, these properties make PDASM a very attractive approach for solving QPs. Algorithm 2.2 describes this strikingly simple yet powerful algorithm.

Algorithm 2.2 Primal-dual Active-set Method
  Input: x_0, λ_0, 𝒜_0
  for k = 0, 1, 2, ... do
    Solve (2.5) defined by 𝒜_k to compute the solution pair (x_{k+1}, λ_{k+1})
    if the KKT conditions of (P) are satisfied then
      DONE; x* = x_{k+1} and λ* = λ_{k+1}
    else
      𝒜_{k+1} = { i ∈ I | b_i − a_i^T x_{k+1} − λ_i^{k+1} < 0 }
    end if
  end for
  Output: x*, λ*
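As a concrete reference, here is a minimal dense sketch of Algorithm 2.2. The names and defaults are ours; each saddle point system is re-solved from scratch, whereas Chapter 3 replaces these solves with the Schur-complement machinery.

```python
import numpy as np

def pdasm(H, c, A, b, W0, max_iter=50, tol=1e-10):
    """Primal-dual active-set method (Algorithm 2.2), dense sketch.
    W0 is the initial working set as a boolean mask over m constraints."""
    n, m = A.shape
    W = W0.copy()
    for _ in range(max_iter):
        idx = np.flatnonzero(W)                # solve (2.5) defined by W
        Ak = A[:, idx]
        K = np.block([[H, Ak],
                      [Ak.T, np.zeros((idx.size, idx.size))]])
        sol = np.linalg.solve(K, np.concatenate([c, b[idx]]))
        x, lam = sol[:n], np.zeros(m)
        lam[idx] = sol[n:]
        # KKT test: stationarity holds by construction, so it suffices to
        # check the complementarity function of Lemma 2.4.1.
        if np.linalg.norm(np.minimum(b - A.T @ x, lam)) <= tol:
            return x, lam
        W = (b - A.T @ x - lam) < 0            # update (2.12): possibly many
                                               # working-set changes at once
    raise RuntimeError("PDASM did not converge (possible cycling)")
```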
Global convergence of PDASM has been proven for several classes of problems in (Ito and Kunisch, 2008), e.g., problems with M-matrix Hessians. The two pitfalls for successful convergence of PDASM are cycling and a potentially singular saddle point system.

Assuming that all saddle point systems are uniquely solvable (which is true, for example, if the constraint Jacobian has full rank), the algorithm converges if and only if it is free of cycles. Typically, global convergence proofs rely on a merit function which measures the progress of the algorithm toward the solution and decreases monotonically, ensuring cycling cannot occur (Nocedal and Wright, 1999). The quantity $\|F(x^k, \lambda^k)\|$ is a natural choice, but unfortunately it is inadequate without additional assumptions on the input data. In practice, convergence occurs for well-conditioned problems, and we observe unconditional global convergence of PDASM in our application (see Chapter 7).

2.4.2 Comparison to Classical ASMs

Here we recap the similarities and differences between PDASM and the classical ASMs discussed in Section 2.3.

Similarities:

• being an ASM, PDASM yields the benefits of warm-starting and accurate solutions;
• the standard techniques for computing and updating factorizations may be applied to PDASM; and
• under certain conditions, global convergence can be proved. PDASM is observed to converge as long as the saddle point systems are nonsingular.

Differences:

• there is no longer any guarantee of primal or dual feasibility;
• multiple changes to the working set are made each iteration; and
• due to super-linear convergence, few iterations are required, and thus PDASM scales well to large problems.

In the next chapter, we exploit these unique features of PDASM to develop a parallel algorithm for solving QPs.

Chapter 3: SCHURPA: A Method for Solving Saddle Point Systems Arising from PDASM

The central ingredient of our parallel solver is the method used to solve the saddle point system (2.6). In this chapter we first show, in Section 3.1, that the standard approach of updating factorizations for the saddle point systems in PDASM cannot be parallelized. We then introduce the Schur-complement method in Section 3.2, originally developed in (Gill et al., 1990), where it was used within a primal ASM to solve large, sparse problems. We show that with PDASM, the Schur-complement method not only permits the use of sparse data but also yields an efficient parallel solver, which we call SCHURPA. Accuracy and complexity of SCHURPA are discussed in Section 3.3.

3.1 Limitations of Updating Factorizations

Our goal is to use the information PDASM provides at each iteration, a set of $l$ changes to the working set, to develop a parallel algorithm. Recall that in the classical regime of ASMs, a single change to the working set corresponds to an update-factorization procedure. In principle, one could apply $l$ such updates sequentially, but this would squander the use of parallelism and revert the algorithm's behaviour to that of the classical methods: each update would be cheap, but they could be numerous and inherently sequential.
Let us return to the example discussed in Section 2.3, where Cholesky factors of the reduced Hessian were updated via a rank-one change corresponding to a single change of the working set. A natural approach in the case of multiple changes would be to update the Cholesky factors in parallel, accounting for all changes to the working set simultaneously. For example, suppose we bind two variables so that

\[
LL^T =
\begin{bmatrix}
L_{11} & & & & \\
l_{21}^T & \alpha & & & \\
L_{31} & l_{32} & L_{33} & & \\
l_{41}^T & a & l_{43}^T & \beta & \\
L_{51} & l_{52} & L_{53} & l_{54} & L_{55}
\end{bmatrix}
\begin{bmatrix}
L_{11}^T & l_{21} & L_{31}^T & l_{41} & L_{51}^T \\
 & \alpha & l_{32}^T & a & l_{52}^T \\
 & & L_{33}^T & l_{43} & L_{53}^T \\
 & & & \beta & l_{54}^T \\
 & & & & L_{55}^T
\end{bmatrix}
=
\begin{bmatrix}
H_{11} & h_{21} & H_{31}^T & h_{41} & H_{51}^T \\
h_{21}^T & h_{22} & h_{32}^T & h_{42}^T & h_{52}^T \\
H_{31} & h_{32} & H_{33} & h_{43} & H_{53}^T \\
h_{41}^T & h_{42} & h_{43}^T & h_{44} & h_{54}^T \\
H_{51} & h_{52} & H_{53} & h_{54} & H_{55}
\end{bmatrix}.
\]

The updated factors for the next iteration are

\[
\bar{L}\bar{L}^T =
\begin{bmatrix}
\bar{L}_{11} & & & & \\
0^T & 1 & & & \\
\bar{L}_{31} & 0 & \bar{L}_{33} & & \\
0^T & 0 & 0^T & 1 & \\
\bar{L}_{51} & 0 & \bar{L}_{53} & 0 & \bar{L}_{55}
\end{bmatrix}
\begin{bmatrix}
\bar{L}_{11}^T & 0 & \bar{L}_{31}^T & 0 & \bar{L}_{51}^T \\
 & 1 & 0^T & 0 & 0^T \\
 & & \bar{L}_{33}^T & 0 & \bar{L}_{53}^T \\
 & & & 1 & 0^T \\
 & & & & \bar{L}_{55}^T
\end{bmatrix}
=
\begin{bmatrix}
H_{11} & 0 & H_{31}^T & 0 & H_{51}^T \\
0^T & 1 & 0^T & 0 & 0^T \\
H_{31} & 0 & H_{33} & 0 & H_{53}^T \\
0^T & 0 & 0^T & 1 & 0^T \\
H_{51} & 0 & H_{53} & 0 & H_{55}
\end{bmatrix}.
\]

Working out the equations as in the previous example yields the same rank-one update

\[
\bar{L}_{33}\bar{L}_{33}^T = L_{33}L_{33}^T + l_{32}l_{32}^T,
\]

as well as

\[
\begin{aligned}
\bar{L}_{53}^T &= \bar{L}_{33}^{-1}\left( l_{32}l_{52}^T + L_{33}L_{53}^T \right), \\
\bar{L}_{55}\bar{L}_{55}^T &= L_{55}L_{55}^T + l_{52}l_{52}^T + l_{54}l_{54}^T + L_{53}L_{53}^T - \bar{L}_{53}\bar{L}_{53}^T.
\end{aligned}
\]

The last equation requires two rank-one updates as well as two further updates the size of $L_{53}$. If the variables being bound are close together, or the second variable is near the end, this is a low-rank update, but such structure is unknown a priori. As a second attempt, one could try to re-order the factors to produce a single column-row block change, inducing a rank-$l$ update. For example, suppose we bind $l$ variables so that

\[
PHP^T = \begin{bmatrix} \bar{H}_{11} & \bar{H}_{21}^T \\ \bar{H}_{21} & \bar{H}_{22} \end{bmatrix}, \qquad
P\bar{H}P^T = \begin{bmatrix} \bar{H}_{11} & \\ & I_l \end{bmatrix} \tag{3.1}
\]

for some permutation matrix

\[
P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}.
\]

We wish to find the Cholesky factors $\bar{H}_{11} = \bar{L}_{11}\bar{L}_{11}^T$ by updating the Cholesky factors

\[
H = \begin{bmatrix} L_{11} & \\ L_{21} & L_{22} \end{bmatrix}
\begin{bmatrix} L_{11}^T & L_{21}^T \\ & L_{22}^T \end{bmatrix}. \tag{3.2}
\]

Substituting Equation (3.2) into Equation (3.1) yields

\[
P_{11}^T \bar{L}_{11}\bar{L}_{11}^T P_{11}
= L_{11}L_{11}^T - P_{11}^T \bar{H}_{21}^T P_{21} - P_{21}^T \bar{H}_{21} P_{11} - P_{21}^T \bar{H}_{22} P_{21}. \tag{3.3}
\]

Unfortunately, by construction $P_{11}$ is not invertible unless $P = I$. Therefore Equation (3.3) cannot be used as a factorization-update equation. We abandon the dream of updating factors and instead turn to another idea: the Schur-complement method.

3.2 The Schur-Complement Method

Recall that each iteration $k$ of an ASM computes the solution of a saddle point system of the form

\[
\begin{bmatrix} H & A_k \\ A_k^T & 0 \end{bmatrix}
\begin{bmatrix} x \\ \lambda \end{bmatrix}
=
\begin{bmatrix} c \\ b_k \end{bmatrix}.
\]

A Schur-complement approach, developed in (Gill et al., 1990) for solving saddle point systems arising in a primal active-set method, computes an initial (sparse) factorization of the saddle point matrix

\[
K_0 = \begin{bmatrix} H & A_0 \\ A_0^T & 0 \end{bmatrix} \in \mathbb{R}^{(n+p)\times(n+p)}, \tag{3.4}
\]

where $p = |\mathcal{A}_0|$ and $A_0 = \{a_i\}_{i \in \mathcal{A}_0}$ is given by the initial working set $\mathcal{A}_0$. The method relies on the following key observation: adding or removing a constraint in the working set is equivalent to symmetrically appending a column and a row to the current saddle point system.

To see this, consider the first iteration after the initial factorization of $K_0$. Suppose we add a constraint $i$. Then the new saddle point system is

\[
\begin{bmatrix} H & A_0 & a_i \\ A_0^T & 0 & 0 \\ a_i^T & 0 & 0 \end{bmatrix}
\begin{bmatrix} x \\ \lambda \\ \mu \end{bmatrix}
=
\begin{bmatrix} c \\ b_0 \\ b_i \end{bmatrix},
\]

where $\mu$ is the Lagrange multiplier corresponding to the added constraint. Now suppose an initially active constraint $i$ is removed from the working set.
The new saddle point system can be written as

\[
\begin{bmatrix} H & A_0 & 0 \\ A_0^T & 0 & e_i \\ 0 & e_i^T & 0 \end{bmatrix}
\begin{bmatrix} x \\ \lambda \\ \gamma \end{bmatrix}
=
\begin{bmatrix} c \\ b_0 \\ 0 \end{bmatrix}.
\]

This appends the constraint $\lambda_i = 0$, which is required for an inactive constraint, and modifies the equation $a_i^T x = b_i$ to $a_i^T x + \gamma = b_i$, which can be satisfied for any $x$. The constraint is therefore "ignored", and the dummy variable $\gamma$ may be discarded.

In general, suppose that by iteration $k$ there have been $l$ changes between $\mathcal{A}_0$ and $\mathcal{A}_k$. The system to be solved has the form

\[
\begin{bmatrix} K_0 & U \\ U^T & 0 \end{bmatrix}
\begin{bmatrix} y \\ \pi \end{bmatrix}
=
\begin{bmatrix} c \\ w \end{bmatrix}, \tag{3.5}
\]

where $U \in \mathbb{R}^{(n+p)\times l}$ and $w \in \mathbb{R}^l$. The variables $x^k$ and $\lambda^k$ can be obtained by "unscrambling" $y^k$ and $\pi^k$. More precisely, $x_i^k = y_i^k$ for $i = 1, \dots, n$, and

\[
\lambda_i^k =
\begin{cases}
\pi_s^k & \text{if constraint } i \text{ was the } s\text{th constraint added,} \\
0 & \text{if constraint } i \text{ was the } s\text{th constraint removed,} \\
y_{n+i}^k & \text{otherwise.}
\end{cases}
\]

Although the system grows in size, by using the Schur-complement matrix defined as

\[
C \equiv -U^T K_0^{-1} U,
\]

solving Equation (3.5) is equivalent to solving the following systems:

\[
\begin{aligned}
K_0 v &= c, \\
C\pi &= w - U^T v, \\
K_0 y &= c - U\pi.
\end{aligned} \tag{3.6}
\]

Thus, the work required at each iteration involves two solves with $K_0$ (plus updating and solving with $C$). However, by solving for $R = K_0^{-1}U$ and noticing that $v = K_0^{-1}c$ is fixed throughout the algorithm, we can rewrite Equation (3.6) as

\[
\begin{aligned}
C &= -U^T R, \\
C\pi &= w - U^T v, \\
y &= v - R\pi,
\end{aligned} \tag{3.7}
\]

and we see that $l$ solves with $K_0$ suffice to form $C$ and obtain $y$ and $\pi$.

As mentioned, an advantage of the Schur-complement approach is its abstraction of the solve with $K_0$ in Equation (3.5). More precisely, we black-box the solution of systems with $K_0$ via a function K0_solve, so that

\[
R = \texttt{K0\_solve}(U).
\]

The function K0_solve is application-dependent, and implementations can take advantage of specific hardware such as the GPU. In general, computing the function requires a matrix factorization, and solving requires solving with the computed factors (see Section 3.3). We emphasize that, in contrast to updating factors, the solves

\[
r_i = \texttt{K0\_solve}(u_i), \quad i = 1, \dots, l,
\]

are naturally parallelizable.

There is one final ingredient needed to make the algorithm more efficient and to ensure nonsingularity of the Schur-complement matrix. Suppose a constraint $i \in \mathcal{A}_0$ is removed from $\mathcal{A}_k$ at some iteration $k$, adding the $l$th column to $U$ as described above. If at some later iteration the constraint is added back to $\mathcal{A}_k$, appending another column to $U$ would yield a rank-deficient matrix, and $C$ would be singular. Instead of adding another column to $U$, we may simply remove the $l$th column from $U$, as well as the $l$th row and column of $C$. Each modification to the working set of this form decreases the number of solves required by one. A similar argument holds for a constraint $i \notin \mathcal{A}_0$ which is added and subsequently removed from $\mathcal{A}_k$. In addition to avoiding a solve, removing a column also reduces the memory cost of the algorithm. In our GPU implementation, $U$ is stored as a dense matrix, and minimizing the memory footprint on the GPU is crucial.
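The column construction and cancellation bookkeeping just described can be made concrete as follows. This is our own sketch, not the thesis's implementation: pos0 maps an initially active constraint to its row position in $K_0$, and log records which change produced each current column of $U$.

```python
import numpy as np

def u_column(kind, i, A, pos0, p):
    """Column of U in Equation (3.5) for one working-set change: appending
    constraint i contributes (a_i, 0); dropping constraint i, which sits in
    row pos0[i] of the initial working set, contributes (0, e_{pos0[i]})."""
    u = np.zeros(A.shape[0] + p)
    if kind == "add":
        u[:A.shape[0]] = A[:, i]
    else:
        u[A.shape[0] + pos0[i]] = 1.0
    return u

def apply_change(U, log, kind, i, A, pos0, p):
    """Grow U by one column, unless this change undoes the change that
    created an existing column; then that column is deleted instead (the
    matching row and column of C are dropped as well), which keeps C
    nonsingular and saves one solve with K0."""
    undo = ("remove" if kind == "add" else "add", i)
    if undo in log:
        s = log.index(undo)
        log.pop(s)
        return np.delete(U, s, axis=1)
    log.append((kind, i))
    return np.column_stack([U, u_column(kind, i, A, pos0, p)])
```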
We are now ready to combine PDASM with the Schur-complement method; we call the result SCHURPA, given in Algorithm 3.2. An initialization step, given in Algorithm 3.1, computes the function K0_solve and solves the initial saddle point system. We describe application-specific implementations of K0_solve for the solids simulator in Chapter 5. The properties of Algorithm 3.2 are the topic of the next section.

Algorithm 3.1 SCHURPA-INIT
  Input: 𝒜_0
  Factorize K_0 given by Equation (3.4) to produce the function K0_solve
  Solve the initial saddle point system: (x*, λ*) = K0_solve((c, b_0))
  Output: x*, λ*, K0_solve

3.3 Details of the SCHURPA Algorithm

By solving the saddle point systems via the Schur-complement method, SCHURPA avoids the need to update factorizations. The approach works well when the cost of computing K0_solve significantly exceeds the cost of solving with it.

Algorithm 3.2 SCHURPA
  Input: 𝒜_0, x_0, λ_0, K0_solve
  for k = 0, 1, 2, ... do
    if the KKT conditions of (P) are satisfied then
      DONE; x* = x_k and λ* = λ_k
    else
      𝒜_{k+1} = { i ∈ I | b_i − a_i^T x_k − λ_i^k < 0 }
    end if
    Compute U based on the changes between 𝒜_k and 𝒜_{k+1}
    Compute C = −U^T R, where R = K0_solve(U)
    Solve Equation (3.5) via Equation (3.7) to obtain (y_{k+1}, π_{k+1})
    Use y_{k+1} and π_{k+1} to obtain x_{k+1}, λ_{k+1}
  end for
  Output: x*, λ*

This was the case in (Gill et al., 1990) and (Bartlett and Biegler, 2006), which employed the primal and dual methods, respectively, to solve large sparse QPs arising in the sequential quadratic programming (SQP) method: an initial sparse factorization is computed, and updating factors is avoided. As the QPs in SQP converge, few active-set changes are required to find the optimal solution. Our motivation, in contrast, is physical simulation, where due to temporal coherence the change in active sets between time steps may be small but never ceases. We therefore expect a nontrivial number of active-set changes at each iteration, which suits the parallel linear solve of Algorithm 3.2. We also emphasize that the earlier work was aimed at solving sparse problems sequentially, while our aim is solving structured dense problems in parallel (see Chapter 4). This exemplifies the utility of the black-box abstraction of K0_solve.

The computational complexity of SCHURPA depends on the number of iterations, the cost of computing K0_solve, and the cost of solving with K0_solve. Due to the super-linear convergence of PDASM, Algorithm 3.2 typically converges in $O(1)$ iterations; in practice, fewer than 10 iterations are usually required. For a generic implementation where a sparse matrix $K_0 \in \mathbb{R}^{n \times n}$ is factorized, Algorithm 3.1 has a complexity of $O(n^3)$, and solving with the resulting factors has complexity $O(n^2)$. The advantage is that only one factorization is required throughout the entire algorithm.

As the number of changes between $\mathcal{A}_0$ and $\mathcal{A}_k$ grows, so does the size of the Schur-complement matrix $C$. If $C$ gets too large, the Schur-complement method becomes inefficient, and a re-factorization of the current saddle point system should be computed to create a new K0_solve. Additionally, since $U$ is stored as a dense matrix, memory limitations must be considered in a GPU implementation. Therefore, if either $C$ grows sufficiently large or $U$ cannot be stored in memory, we compute a new K0_solve from the current iteration. In our implementation, $C$ is re-computed and factored each iteration, as in (Bartlett and Biegler, 2006).

Finally, we note that the Schur-complement method is less robust than methods updating factors via orthogonal transformations. If the initial saddle point system is ill-conditioned, all subsequent saddle point solves will suffer a loss of accuracy through the Schur-complement approach. One possible amelioration is fixed-precision iterative refinement applied when constraint drift occurs. However, as our QPs are found to be reasonably well-conditioned and non-degenerate in practice, iterative refinement was not required to identify the optimal active set (see Chapter 7).
One possible amelioration to this is fixed-precision iterative refinement applied when constraint drift oc- curs. However, as our QPs are found to be reasonably well-conditioned and non-degenerate in practice, iterative refinement was not required to identify the optimal active set (see Chapter 7). 33 Chapter 4 The Eulerian Solids Simulator Equipped with SCHURPA, a parallel PDASM for solving QPs, we describe our solution procedure for the frictional contact problem arising in an Eulerian deformable solid bodies simulator, the subject of this chapter. Deformable body simulations typically work by keeping track of material points associated with the object and updating their positions in space. This approach is called Lagrangian simulation. In contrast, Eulerian simulation fixes points in space and simulates objects moving through this space by advecting them through the object’s velocity field. Introduced in (Levin et al., 2011), the Eulerian solids simulator handles complex, highly deformable solid objects undergoing frictionless contact. By discretizing the spatial domain into a regular grid, the simulator utilizes the finite volume method (Versteeg and Malalasekera, 2007). A parallel implemen- tation of the simulator was developed on the GPU using CUDA (NVIDIA, 2012b). However, the contact was assumed to be frictionless. Simulating friction is crucial for many interesting and complex phenomena. Consider hands grasping an object, fingers sliding across a surface, hair blowing in the wind, or cloth sliding off of a surface. Our goal is to imbue the simulator with the ability to simulate friction. 34 4.1. Overview Section 4.1 provides a description of the simulator and a derivation of the contact QP solved at each time step of the simulation. We then introduce the frictional contact problem, and the staggered projections algorithm of (Kaufman et al., 2008) as our solution methodology, in Section 4.2. By appropriately refor- mulating the linearized friction cone in Section 4.3, we show that SCHURPA can be applied to the QPs that result from SP. The final SCHURPA-SP algorithm is outlined in Section 4.4. 4.1 Overview 4.1.1 Simulating Objects Without Contact Here we briefly describe the method of simulating a single solid body in the simulator, the details of which can be found in (Levin et al., 2011). Suppose we are interested in simulating an object starting at time t0. The object’s configuration at time t0 is said to be in its reference configuration. A mapping between the object’s reference configuration and its configuration at time t is given by x : (R3,R+)→ (R3,R+) x(X, t) = X + u(X, t). The reference coordinate X is the coordinate of a point on the object in its reference configuration, and u(X, t) is the change, or deformation, of that point at time t. In typical Lagrangian solids simulations, the object is discretized into particles. The reference coordinates X of these particles act as degrees of freedom (DOFs) of the system, completely determining the configuration of the object. The spatial positions of these particles are computed by integrating the 35 4.1. Overview momentum equation ρ dv(X) dt = ∇·σ+ρb, where v is the velocity of the particle, ρ is the density, σ is the Cauchy stress, and b are the body forces acting on the particle. However, in the Eulerian framework, degrees of freedom are no longer reference coordinates, but rather spatial coordinates fixed on a grid in space. 
4.1.2 Simulating Objects With Contact

Contact between bodies in the simulation is resolved by invoking Gauss' principle of least constraint (Lanczos, 1986), a variational formulation of classical mechanics, which states that the velocity is the solution of the optimization problem

\[
v^{t+1} = \underset{v \in V}{\operatorname{argmin}}\; \tfrac{1}{2} v^T M v - v^T f^*, \tag{4.3}
\]

where $V$ is the constraint set. In the case of the simulator, the constraints enforce non-interpenetration between objects. Consider a colliding pair of objects $A$ and $B$ with contact surface $\Gamma_{AB}$. Non-interpenetration constraints are formulated at the velocity level, so that the relative velocity along the unit normal is always nonnegative along the contact surface:

\[
v_{\mathrm{rel}}\cdot n \ge 0, \tag{4.4}
\]

where $v_{\mathrm{rel}} = v^A - v^B$ is the relative velocity and $n$ is the unit normal to the surface. In typical FEM fashion, we integrate Equation (4.4) over each grid cell $\Omega_c$ where contact occurs to obtain

\[
\int_{\Gamma_c} v_{\mathrm{rel}}\cdot n\, d\Gamma = \int_{\Gamma_c} \left(v^A - v^B\right)\cdot n\, d\Gamma \ge 0, \tag{4.5}
\]

where $\Gamma_c = \Omega_c \cap \Gamma_{AB}$. Expressing velocities in terms of nodal shape functions, we get

\[
v(x) = \sum_{N=1}^{L} \phi_N(x)\, v_N, \tag{4.6}
\]

where $v_N \in \mathbb{R}^3$, $L = L_x L_y L_z$, and $\phi_N$ is a scalar shape function; in the implementation of the simulator we use trilinear shape functions as in (Levin et al., 2011). Substituting Equation (4.6) into Equation (4.5) yields

\[
\sum_{N=1}^{L}\left(\int_{\Gamma_c}\phi_N n\, d\Gamma\right)\cdot v_N^A
- \sum_{N=1}^{L}\left(\int_{\Gamma_c}\phi_N n\, d\Gamma\right)\cdot v_N^B
= j_c^T v,
\]

where

\[
j_c = \left\{\left(\int_{\Gamma_c}\phi_N n_x\, d\Gamma,\; \int_{\Gamma_c}\phi_N n_y\, d\Gamma,\; \int_{\Gamma_c}\phi_N n_z\, d\Gamma\right)^T\right\}_{N=1,\dots,L} \in \mathbb{R}^n.
\]

The constraints are assembled into a global constraint Jacobian matrix $J = \{j_c\}_{c=1,\dots,m} \in \mathbb{R}^{n\times m}$, and the resulting QP solved at each time step is given by

\[
\begin{aligned}
v^{t+1} = \underset{v}{\operatorname{argmin}}\quad & \tfrac{1}{2} v^T M v - v^T f^* \\
\text{subject to}\quad & J^T v \ge 0.
\end{aligned} \tag{4.7}
\]

Because contact occurs only on the surfaces of objects, the number of constraints is much smaller than the total number of degrees of freedom, i.e., $m \ll n$.
In such cases it is useful to transform (4.7) into its dual formulation, given by

\[
\begin{aligned}
\alpha^{t+1} = \underset{\alpha}{\operatorname{argmin}}\quad & \tfrac{1}{2}\alpha^T J^T M^{-1} J\alpha + \alpha^T\!\left(J^T M^{-1} f^*\right) \\
\text{subject to}\quad & \alpha \ge 0,
\end{aligned} \tag{4.8}
\]

where $\alpha \in \mathbb{R}^m$. The velocities can then be recovered via

\[
v^{t+1} = M^{-1}\left(f^* + c^{t+1}\right), \tag{4.9}
\]

where

\[
c^{t+1} \equiv J\alpha^{t+1}
\]

can be interpreted as the generalized contact impulse, with $\alpha$ giving the magnitudes of those impulses. Reducing to the dual results in a problem of size $m$, and due to the diagonal structure of $M$, forming (4.8) is computationally tractable. We now turn to the more challenging case of simulating frictional contact.

4.2 Adding Friction Using Staggered Projections

4.2.1 Friction Model

We begin our derivation of frictional contact with Coulomb's law of friction. Consider a contact point $p$ between two objects $A$ and $B$ in contact. Coulomb's law states that the frictional impulse $f_p$ must directly oppose the relative velocity $v_{\mathrm{rel}}$ at the contact point and must lie in the feasible set

\[
\{ f_p \in T_p : \|f_p\| \le \mu\alpha \},
\]

where $T_p$ is the tangent plane at the contact point, $\alpha$ is the magnitude of the normal impulse, and $0 \le \mu \le 1$ is the coefficient of friction, which depends on the material properties. We model friction using a linearized, isotropic Coulomb friction law (Stewart, 2000). We define a set of $2l$ symmetric unit-length vectors $\{T_i\}_{i=1,\dots,2l} \subset T_p$ such that

\[
T_{i+1} = -T_i, \quad i = 1, 3, \dots, 2l-1.
\]

The frictional impulse is then expressed as

\[
f_p = T\beta_p, \tag{4.10}
\]

where $T = \{T_i\}_{i=1,\dots,2l} \in \mathbb{R}^{3\times 2l}$ and $\beta_p \in \mathbb{R}^{2l}$ gives the magnitude along each tangent direction. Note that, due to the symmetry of the tangent vectors $T_i$, $T$ is necessarily rank deficient; we address the algorithmic issues this implies in Section 4.3. We discretize $\beta_p$ using piecewise constants, so that for each contact cell $c$ the frictional impulse is $f_p = T\beta_c$, where $\beta_c \in \mathbb{R}^{2l}$ is constant within the contact cell. The linearized Coulomb law may then be described by the set of feasible points

\[
F_c = \left\{ f_p = T\beta_c : e^T\beta_c \le \mu_c\alpha_c,\; \beta_c \ge 0 \right\}, \tag{4.11}
\]

where $e = (1, \dots, 1)^T$, $\mu_c \ge 0$ is the coefficient of friction, and $\alpha_c \ge 0$ is the contact magnitude. $F_c$ represents the convex hull of the columns of $T$. The feasible set over all contacts can now be written in terms of the tangent magnitudes as

\[
\left\{ \beta \mid E^T\beta \le \operatorname{diag}(\mu)\,\alpha,\; \beta \ge 0 \right\}, \tag{4.12}
\]

where $\beta \in \mathbb{R}^{2lm}$ is the global tangent magnitude vector, $E \in \mathbb{R}^{2lm\times m}$ is globally assembled from $e$, and $\operatorname{diag}(\mu)$ is the diagonal matrix of the globally assembled vector $\mu$.

Turning from feasibility to optimality, we rephrase Coulomb's law as a variational principle, the Maximal Dissipation Principle (Moreau, 1973): for a contact point $p$, the frictional force $f$ maximizes the dissipation among all feasible forces. The dissipation is the rate of negative work, $-f_p^T v_{\mathrm{rel}}$. We integrate over the contact surface $\Gamma_c$ of contact cell $c$ to obtain the dissipation for a given contact:

\[
\int_{\Gamma_c} -f_p^T v_{\mathrm{rel}}\, d\Gamma
= -\left(\int_{\Gamma_c} f_p^T v^A\, d\Gamma - \int_{\Gamma_c} f_p^T v^B\, d\Gamma\right)
= -\beta_c^T\left(\sum_{N=1}^{L}\int_{\Gamma_c} T^T\phi_N v_N^A - \sum_{N=1}^{L}\int_{\Gamma_c} T^T\phi_N v_N^B\right) \tag{4.13}
\]
\[
= -\beta_c^T T_c^T v, \tag{4.14}
\]

where

\[
T_c = \left\{\left(\int_{\Gamma_c}\phi_N T_x^T\, d\Gamma,\; \int_{\Gamma_c}\phi_N T_y^T\, d\Gamma,\; \int_{\Gamma_c}\phi_N T_z^T\, d\Gamma\right)^T\right\}_{N=1,\dots,L} \in \mathbb{R}^{n\times 2l}.
\]

Equation (4.13) follows from expressing the velocities as linear combinations of the shape functions defined by Equation (4.6). Summing over all contacts yields the total dissipation:

\[
-\sum_{c=1}^{m}\beta_c^T T_c^T v = -\beta^T T^T v,
\]

where

\[
T = \{T_c\}_{c=1,\dots,m} \in \mathbb{R}^{n\times 2lm} \tag{4.15}
\]

is the globally assembled subspace of generalized friction impulses, and $f := T\beta$ is the generalized friction impulse.
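For intuition, the following sketch constructs a set of $2l$ symmetric unit tangent directions for a single contact and tests membership in the linearized cone (4.11). It is our own illustration: the pairing convention appends negated directions as a second block rather than interleaving them, and the choice of the first tangent is arbitrary.

```python
import numpy as np

def tangent_basis(n_hat, l):
    """2l symmetric unit tangent directions spanning the plane orthogonal
    to the unit contact normal n_hat, returned as a (3, 2l) matrix."""
    t1 = np.cross(n_hat, [1.0, 0.0, 0.0])
    if np.linalg.norm(t1) < 1e-8:         # normal parallel to the x-axis
        t1 = np.cross(n_hat, [0.0, 1.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n_hat, t1)
    angles = np.pi * np.arange(l) / l
    T = np.cos(angles) * t1[:, None] + np.sin(angles) * t2[:, None]
    return np.column_stack([T, -T])

def feasible(beta, mu, alpha):
    """Membership test for the linearized cone (4.11):
    beta >= 0 and e^T beta <= mu * alpha."""
    return bool(np.all(beta >= 0) and beta.sum() <= mu * alpha)
```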
where
\[
T = \{T_c\}_{c=1,\dots,m} \in \mathbb{R}^{n\times 2lm} \tag{4.15}
\]
is the globally assembled subspace of generalized friction impulses and $f := T\beta$ is the generalized friction impulse. Maximizing the dissipation over the feasible set, Equation (4.12), yields the optimization problem
\[
\beta^{t+1} = \operatorname*{argmax}_{\beta}\; -\tfrac12\beta^T T^T v^{t+1} \quad\text{subject to}\quad \beta\ge 0,\;\; E^T\beta \le \operatorname{diag}(\mu)\,\alpha. \tag{4.16}
\]
We can transform Equation (4.16) into a QP in $\beta$ by adding the generalized frictional impulse to the momentum Equation (4.9):
\[
v^{t+1} = M^{-1}(f^* + c^{t+1} + f^{t+1}) = M^{-1}(f^* + c^{t+1} + T\beta^{t+1}),
\]
yielding the friction QP
\[
\beta^{t+1} = \operatorname*{argmin}_{\beta}\; \tfrac12\beta^T T^T M^{-1} T\beta + \beta^T\!\left(T^T M^{-1}(c^{t+1} + f^*)\right) \quad\text{subject to}\quad \beta\ge 0,\;\; E^T\beta \le \operatorname{diag}(\mu)\,\alpha^{t+1}. \tag{4.17}
\]
Notice that (4.17) depends on the solution of the contact QP (4.8). Conversely, taking into account the generalized frictional impulse $f^{t+1}$ modifies (4.8) to
\[
\alpha^{t+1} = \operatorname*{argmin}_{\alpha}\; \tfrac12\alpha^T J^T M^{-1} J\alpha + \alpha^T\!\left(J^T M^{-1}(f^{t+1} + f^*)\right) \quad\text{subject to}\quad \alpha\ge 0. \tag{4.18}
\]
We now have two coupled convex QPs: (4.17), yielding the optimal friction impulse magnitudes $\beta^{t+1}$ given the contact impulse $c^{t+1}$, and (4.18), yielding the optimal contact impulse magnitudes $\alpha^{t+1}$ given the friction impulse $f^{t+1}$. This intrinsic coupling is what makes frictional contact a challenging problem: finding the solution of the coupled problem is equivalent to globally minimizing a non-convex QP, which is in general NP-hard (Kaufman et al., 2008). We adopt the method of staggered projections to solve the frictional contact problem.

4.2.2 The Staggered Projections Algorithm

Staggered projections (SP), introduced in (Kaufman et al., 2008), is a robust, computationally efficient method for simulating discrete frictional contact response for a class of systems with finite degrees of freedom described in generalized coordinates. We briefly outline the algorithm now. Recall the coupled friction and contact QPs (4.17) and (4.18), respectively. Beginning with an initial estimate $f^0 = T\beta^0$ of the friction impulse, SP solves the following pair of QPs at iteration $s$:
\[
\alpha^{s+1} = \operatorname*{argmin}_{\alpha}\; \tfrac12\alpha^T J^T M^{-1} J\alpha + \alpha^T\!\left(J^T M^{-1}(f^s + f^*)\right) \quad\text{subject to}\quad \alpha\ge 0, \tag{C}
\]
\[
\beta^{s+1} = \operatorname*{argmin}_{\beta}\; \tfrac12\beta^T T^T M^{-1} T\beta + \beta^T\!\left(T^T M^{-1}(c^{s+1} + f^*)\right) \quad\text{subject to}\quad \beta\ge 0,\;\; E^T\beta \le \operatorname{diag}(\mu)\,\alpha^{s+1}. \tag{F}
\]
Note that if $\beta^0 = 0$ this reduces to the frictionless contact case discussed in Section 4.1.2. The staggered projections algorithm is given in Algorithm 4.1. The convergence criterion is
\[
\mathrm{rel\_error} = \frac{(f_c^{s+1} - f_c^{s})^T M^{-1} (f_c^{s+1} - f_c^{s})}{f^{sT} M^{-1} f^{s}} \;\le\; \mathrm{tol}, \tag{4.19}
\]
where rel_error is the relative kinetic-metric error and tol is a user-specified tolerance. Similar to ASMs, SP benefits greatly from warm starts: speed-ups of up to two orders of magnitude were observed in Kaufman (2009). Since each QP subproblem is strictly convex and feasible, the algorithm never fails on a given iteration. While global convergence is not guaranteed, in practice few iterations are required, especially if warm-starting is employed. SP can be interpreted as a fixed-point predictor-corrector scheme (Kaufman, 2009). The predicted impulse $f^*$ is corrected by the coupled projections
\[
f^{s+1} = P_{F(\alpha^s)}(f^* - c^s), \qquad c^{s+1} = P_{C}(f^* - f^{s+1}),
\]
where $P$ is the projection operator and
\[
F(\alpha) := \left\{\, T\beta \;\middle|\; E^T\beta \le \operatorname{diag}(\mu)\,\alpha,\;\; \beta\ge 0 \,\right\}, \qquad C := \{\, J\alpha : \alpha\ge 0 \,\}
\]
represent the sets of admissible friction and contact impulses, respectively. We are now in a position to incorporate SCHURPA into SP.
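Before doing so, the following minimal sketch shows how the SP fixed-point loop looks in code. It is our illustration only, not the thesis implementation: the callbacks solveC and solveF are hypothetical stand-ins for solvers of (C) and (F) that return the generalized impulses $c = J\alpha$ and $f = T\beta$, and Minv holds the diagonal of $M^{-1}$.

    #include <functional>
    #include <utility>
    #include <vector>

    using Vec = std::vector<double>;
    using ContactSolver  = std::function<Vec(const Vec&)>; // solves (C) given friction impulse f
    using FrictionSolver = std::function<Vec(const Vec&)>; // solves (F) given contact impulse c

    // Relative kinetic-metric error, cf. (4.19); Minv is the diagonal of M^{-1}.
    static double relError(const Vec& fNew, const Vec& fOld, const Vec& Minv) {
        double num = 0.0, den = 1e-300; // tiny guard against division by zero
        for (std::size_t i = 0; i < fNew.size(); ++i) {
            const double d = fNew[i] - fOld[i];
            num += d * Minv[i] * d;
            den += fOld[i] * Minv[i] * fOld[i];
        }
        return num / den;
    }

    // SP outer loop (cf. Algorithm 4.1): alternate between the contact QP (C)
    // and the friction QP (F) until the friction impulse stops changing in the
    // kinetic metric.
    void staggeredProjections(const ContactSolver& solveC, const FrictionSolver& solveF,
                              const Vec& Minv, Vec& c, Vec& f,
                              double tol = 1e-8, int maxIter = 50) {
        for (int s = 0; s < maxIter; ++s) {
            c = solveC(f);            // contact responds to the current friction impulse
            Vec fNext = solveF(c);    // friction responds to the updated contact impulse
            const double err = relError(fNext, f, Minv);
            f = std::move(fNext);
            if (err < tol) break;     // converged in the kinetic metric
        }
    }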
Assuming for the moment that SCHURPA can solve (C) and (F), notice that only the linear terms of the objective functions and the right-hand sides of the contact and friction QPs change at each iteration of Algorithm 4.1. In other words, the Hessians and constraint matrices remain constant. Thus the initial factorization computed in SCHURPA can be reused not only within a single QP solve, but throughout the entire staggered projections algorithm! This idea is formalized in Section 4.4. However, we must first investigate the applicability of SCHURPA to the contact and friction QPs. In particular, the friction QP (F) must be reformulated to ensure that the saddle point systems occurring in SCHURPA are solvable, which we do in the next section.

Algorithm 4.1 Staggered Projections
  Given $\beta^0$, $A^0_c$, $A^0_f$, tol
  rel_error = $\infty$
  for $s = 0, 1, 2, \dots$ do
    Solve the contact QP (C) for $\alpha^{s+1}$
    Solve the friction QP (F) for $\beta^{s+1}$
    Compute rel_error given by Equation (4.19)
    if rel_error < tol then
      BREAK; $\alpha^{t+1} = \alpha^{s+1}$ and $\beta^{t+1} = \beta^{s+1}$
    end if
  end for
  $f^{t+1} = T\beta^{t+1}$
  $c^{t+1} = J\alpha^{t+1}$
  $v^{t+1} = M^{-1}(f^* + f^{t+1} + c^{t+1})$

4.3 Applying SCHURPA to Frictional Contact

Recall from Section 2.4 that local convergence of PDASM is guaranteed under the assumption that the saddle point systems are all nonsingular. For the contact QP (C) this assumption holds, which we prove in Theorem 4.1 using the following lemma.

Lemma 4.1. The Jacobian matrix $J$ of (4.7) is full rank.

Proof. We prove the lemma on a 2D grid using bilinear shape functions by assuming, for a contradiction, that the Jacobian has linearly dependent columns; the extension to 3D using trilinear shape functions as in the simulator is straightforward.

[Figure 4.1: Nodal shape functions for a contact cell $\Omega_c$ on a 2D grid.]

Consider the four shape functions $\phi_N$ for $N\in\mathcal{N}^c := \{N_1, N_2, N_3, N_4\}$ providing local support to a contact cell $\Omega_c$; see Figure 4.1. Assume the associated contact surface $\Gamma_c$ has a point strictly in its interior, so that
\[
j_N = \left(\int_{\Gamma_c}\phi_N n_x\, d\Gamma,\;\int_{\Gamma_c}\phi_N n_y\, d\Gamma,\;\int_{\Gamma_c}\phi_N n_z\, d\Gamma\right)^T \ne (0,0,0)^T, \tag{4.20}
\]
where $j = j_c$ is the contact constraint and $N\in\mathcal{N}^c$. Let $C\subseteq\{1,\dots,m\}$ be a subset of the constraints containing a linearly dependent set, so that
\[
\sum_{c\in C}\alpha_c j_c = 0 \tag{4.21}
\]
for scalars $\alpha_c\ne 0$, $c\in C$. We define $(x_c, y_c)$ to be the center of the grid cell $\Omega_c$. The idea now is to find a constraint $\bar c\in C$ for which $\bar j = j_{\bar c}$ has nonzero elements in positions where no other constraint of $C$ does, thus contradicting Equation (4.21). One can always find such a constraint $\bar c$ by choosing the grid cell $\Omega_{\bar c}\in\{\Omega_c\}_{c\in C}$ such that
\[
x_{\bar c} \ge x_c,\;\; c\in C, \qquad y_{\bar c} \ge y_c,\;\; c\in K,
\]
where $K = \{c\in C \mid x_c = x_{\bar c}\}$. In other words, $\bar c$ selects the constraint cell with maximum $x$ coordinate and, among those, maximal $y$ coordinate. Notice that this cell is the only cell of $C$ with support from the shape function $\phi_{N_2}$, where $N_2\in\mathcal{N}^{\bar c}$ (see Figure 4.1). This implies that the entry $\bar j_{N_2}\ne 0$ (nonzero due to Equation (4.20)) is the only nonzero entry among the constraints of $C$ in that position. Thus
\[
\bar j \ne -\frac{1}{\alpha_{\bar c}}\sum_{c\in C\setminus\bar c}\alpha_c j_c,
\]
contradicting Equation (4.21). We therefore conclude that the columns of $J$ are linearly independent, and $J$ is full rank.

Theorem 4.1. The saddle point systems resulting from PDASM applied to (C) will always be nonsingular, and PDASM will locally converge super-linearly.

Proof. From Lemma 4.1, $J$ is full rank, and thus the Hessian of (C) is nonsingular.
Since the constraint matrix of (C) is simply the identity, any subset of constraints is linearly independent. Therefore, by Theorem 2.1, the saddle point systems are nonsingular, as required.

Because SCHURPA is a PDASM, the following corollary is an immediate consequence of the theorem.

Corollary 4.1. SCHURPA applied to the contact QP (C) locally converges super-linearly.

We now tackle the friction QP. Unfortunately, in the form given by (F), the saddle point systems can become singular because of the Hessian matrix. Recall that the tangent matrix is given by $T = \{T_c\}_{c=1,\dots,m}$, where
\[
T_c = \left\{\left(\int_{\Gamma_c}\phi_N T_x^T\, d\Gamma,\;\int_{\Gamma_c}\phi_N T_y^T\, d\Gamma,\;\int_{\Gamma_c}\phi_N T_z^T\, d\Gamma\right)^T\right\}_{N=1,\dots,L} \in \mathbb{R}^{n\times 2l}.
\]
As previously mentioned, the symmetry of the unit-length vectors $T_i$ used to construct the linearized friction cone causes the matrices $T_c$, and hence $T$, to be rank deficient. This implies that (F) is a semidefinite QP with a singular Hessian. Infinitely many solutions arise from the nullspace of $T$, which induces infinitely many representations of a single frictional impulse. More precisely, the symmetry implies
\[
T_{i+1} = -T_i, \qquad i = 1, 3, \dots, 2ml-1, \tag{4.22}
\]
and thus
\[
f = T\beta = \sum_{i=1,\dots,2ml}\beta_i T_i = \sum_{i=1,3,\dots,2ml-1}(\beta_{i+1} - \beta_i)\,T_i. \tag{4.23}
\]
Even if we assume the set $\{T_i\}_{i=1,3,\dots,2ml-1}$ is linearly independent, only the differences $\beta_{i+1} - \beta_i$ uniquely define a friction impulse, and the Hessian $T^T M^{-1} T\in\mathbb{R}^{2lm\times 2lm}$ of (F) has rank $lm$. To make matters worse, we have $2ml + m$ constraints versus $2ml$ unknowns. Both the rank deficiency of the Hessian and the possible linear dependence of working-set constraints can cause a singular saddle point system in the course of PDASM, prohibiting its direct application to (F). These deficiencies are not just theoretical: in (Daryina and Izmailov, 2009) it was observed that if the number of constraints exceeds the number of variables, PDASM fails a majority of the time. Their experiments on generic QPs also showed that PDASM becomes unreliable if the Hessian is substantially rank deficient. Therefore, we must reformulate (F) in order to apply PDASM.

We can reformulate the friction QP as an equivalent convex QP with a positive definite Hessian for the case $l = 2$ (we shall henceforth assume $l = 2$). Let $\hat T$ be an orthonormal basis for the tangent plane $T_p$ of a contact point $p$. We can then rewrite the tangent impulse as
\[
f_p = \hat T\beta_c, \tag{4.24}
\]
where $\beta_c\in\mathbb{R}^2$ is taken as constant in a contact cell $c$, as before. In the case $l = 2$, the linearized friction cone is simply expressed by the $\ell_1$-norm:
\[
F_c = \left\{\, \hat T\beta_c \;\middle|\; \|\hat T\beta_c\|_1 \le \mu_c\alpha_c \,\right\}, \tag{4.25}
\]
which can be rewritten as
\[
F_c = \left\{\, \hat T\beta_c \;\middle|\; \hat E_c^T\beta_c \le \operatorname{diag}(\hat\mu_c)\,\hat\alpha_c \,\right\}, \tag{4.26}
\]
where
\[
\hat E_c = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}, \qquad
\hat\mu_c = (\mu_c,\mu_c,\mu_c,\mu_c)^T, \qquad
\hat\alpha_c = (\alpha_c,\alpha_c,\alpha_c,\alpha_c)^T. \tag{4.27}
\]
As before, we assemble the global quantities $\hat E\in\mathbb{R}^{2m\times 4m}$, $\hat\mu\in\mathbb{R}^{4m}$, and $\hat\alpha\in\mathbb{R}^{4m}$. The global feasible set is
\[
\left\{\, \beta \;\middle|\; \hat E^T\beta \le \operatorname{diag}(\hat\mu)\,\hat\alpha \,\right\}, \tag{4.28}
\]
and the generalized tangent matrix is
\[
\hat T = \{\hat T_c\}_{c=1,\dots,m} \in \mathbb{R}^{n\times 2m}, \tag{4.29}
\]
where
\[
\hat T_c = \left\{\left(\int_{\Gamma_c}\phi_N\hat T_x^T\, d\Gamma,\;\int_{\Gamma_c}\phi_N\hat T_y^T\, d\Gamma,\;\int_{\Gamma_c}\phi_N\hat T_z^T\, d\Gamma\right)^T\right\}_{N=1,\dots,L} \in \mathbb{R}^{n\times l}.
\]
The modified friction problem is now
\[
\beta^{s+1} = \operatorname*{argmin}_{\beta}\; \tfrac12\beta^T\hat T^T M^{-1}\hat T\beta + \beta^T\!\left(\hat T^T M^{-1}(c^{s+1} + f^*)\right) \quad\text{subject to}\quad \hat E^T\beta \le \operatorname{diag}(\hat\mu)\,\hat\alpha^{s+1}. \tag{F2}
\]
The advantage of the formulation (F2) is that $\hat T$ has full rank.

Lemma 4.2. The tangent matrix $\hat T$ of (F2) is full rank.

Proof.
The proof is analogous to that of Lemma 4.1, with quantities corresponding to the constraint $j_c$ replaced by quantities corresponding to $\hat T_c$.

Since $\hat T$ is full rank, the Hessian $\hat T^T M^{-1}\hat T$ of (F2) is positive definite, and the problem size has been reduced by half, from $4m$ in (F) to $2m$. Although we have $4m$ constraints, the following theorem guarantees that PDASM never selects a linearly dependent working set, and therefore the saddle point systems remain nonsingular.

Theorem 4.2. Assuming the initial working set $A^0$ yields a linearly independent set of constraints, the saddle point systems resulting from PDASM applied to (F2) will always be nonsingular, and PDASM will locally converge super-linearly.

Proof. We show by induction that each step of Algorithm 2.2 selects a set of linearly independent constraints, which by Theorem 2.1 implies that the resulting saddle point system is nonsingular.

Base case: By assumption, $A^0$ is a linearly independent set of constraints, so the initial saddle point system is nonsingular.

Induction step: Assume at step $k$ of Algorithm 2.2 that $A^k$ defines a linearly independent set of constraints from (F2), and for a contradiction assume $A^{k+1}$ selects a linearly dependent set. The form of $\hat E_c$ implies that $A^{k+1}$ must contain a pair of constraints of the form
\[
\beta_i + \beta_{i+1} = \gamma_i \quad\text{and}\quad -\beta_i - \beta_{i+1} = \gamma_i \tag{4.30}
\]
or
\[
\beta_i - \beta_{i+1} = \gamma_i \quad\text{and}\quad -\beta_i + \beta_{i+1} = \gamma_i \tag{4.31}
\]
for some $i = 1, 3, \dots, 2m-1$ and $\gamma_i := \mu_{(i+1)/2}\,\alpha_{(i+1)/2} > 0$. Assume WLOG that the pair in Equation (4.30) is chosen, and denote these constraints by $j$ and $j+1$. Then, by the definition of Algorithm 2.2,
\[
\lambda_j^{k+1} > \gamma_i - (\beta_i^{k+1} + \beta_{i+1}^{k+1}) \tag{4.32}
\]
\[
\lambda_{j+1}^{k+1} > \gamma_i + \beta_i^{k+1} + \beta_{i+1}^{k+1}. \tag{4.33}
\]

Case 1: $\lambda_j^{k+1}\ne 0$ or $\lambda_{j+1}^{k+1}\ne 0$. Assume WLOG that $\lambda_j^{k+1}\ne 0$. By assumption, $A^k$ defined a linearly independent set, so $j\in A^k$ and $j+1\notin A^k$, whence
\[
\lambda_{j+1}^{k+1} = 0 \;\;(\text{since } j+1 \text{ was not active}), \qquad
\gamma_i = \beta_i^{k+1} + \beta_{i+1}^{k+1} \;\;(\text{since } j \text{ was active}).
\]
Combining these with Equation (4.33) yields
\[
0 = \lambda_{j+1}^{k+1} > \gamma_i + \beta_i^{k+1} + \beta_{i+1}^{k+1} = 2\gamma_i > 0, \tag{4.34}
\]
a contradiction.

Case 2: $\lambda_j^{k+1} = \lambda_{j+1}^{k+1} = 0$. Combining Equation (4.32) and Equation (4.33) yields
\[
\lambda_j^{k+1} + \lambda_{j+1}^{k+1} = 0 > 2\gamma_i > 0, \tag{4.35}
\]
again a contradiction. Thus $A^{k+1}$ cannot select a linearly dependent set of constraints. Therefore, by induction, Algorithm 2.2 induces a nonsingular saddle point system at each iteration, and locally converges super-linearly.

Again, as in the case of contact, the following corollary is an immediate consequence of Theorem 4.2.

Corollary 4.2. SCHURPA applied to the friction QP (F2) locally converges super-linearly.

Corollary 4.1 and Corollary 4.2 give us the theoretical confidence to integrate SCHURPA into SP, solving the contact and friction QPs arising at each iteration.
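To make the reformulated cone concrete, the following worked example (ours, added for illustration) spells out the four inequalities encoded by $\hat E_c$ for a single contact cell. Writing out $\hat E_c^T\beta_c$ with $\beta_c = (\beta_1,\beta_2)^T$,
\[
\hat E_c^T\beta_c =
\begin{pmatrix} 1 & 1 \\ -1 & -1 \\ 1 & -1 \\ -1 & 1 \end{pmatrix}
\begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}
=
\begin{pmatrix} \beta_1 + \beta_2 \\ -\beta_1 - \beta_2 \\ \beta_1 - \beta_2 \\ -\beta_1 + \beta_2 \end{pmatrix}
\le \mu_c\alpha_c\begin{pmatrix}1\\1\\1\\1\end{pmatrix}
\;\Longleftrightarrow\;
|\beta_1| + |\beta_2| \le \mu_c\alpha_c,
\]
since the maximum of $\pm\beta_1\pm\beta_2$ over the four sign choices is exactly $|\beta_1| + |\beta_2|$. For instance, with $\mu_c\alpha_c = 1$ and $\beta_c = (0.5, -0.4)^T$, the four left-hand sides are $0.1$, $-0.1$, $0.9$, $-0.9$, all at most $1$, matching $|0.5| + |{-0.4}| = 0.9 \le 1$: the impulse lies inside the linearized cone.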
4.4 The SCHURPA-SP Algorithm

We now summarize the entire frictional contact algorithm, which employs SCHURPA to solve the QP subproblems induced by SP. Recall that SP can benefit from warm-starting by providing initial guesses of the working sets $A^0_c$ and $A^0_f$ close to the optimal active sets
\[
A_c^{t+1} := \{\, i \mid \alpha_i^{t+1} = 0 \,\}, \qquad
A_f^{t+1} := \{\, i \mid \hat E_i^T\beta^{t+1} = \hat\mu_i\,\hat\alpha_i^{t+1} \,\}.
\]
In such a case, $A^0_c$ and $A^0_f$ will be close to the optimal active sets of all subproblem QPs arising in SP. We are therefore motivated to generalize SCHURPA's initial factorization $K^0$ across all SP iterations, so that only one factorization for contact and one factorization for friction is needed.

As discussed in Section 3.2, SCHURPA performs only one factorization, of the initial saddle point system $K^0$; subsequent iterations use this factorization together with the Schur-complement method to solve subsequent saddle point systems. As noted in Section 4.2, the Hessians and constraint matrices of the contact and friction QPs do not change within the SP algorithm. Therefore, we need only call SCHURPA-INIT during the first SP iteration to compute the initial solutions via the functions C0_solve and F0_solve for contact and friction, respectively. Subsequent saddle point systems across all SP iterations are solved using these functions along with the Schur-complement method, as in the SCHURPA algorithm. The combined algorithm is given in Algorithm 4.2. Notice that at each SP iteration, initial active sets must be given to SCHURPA. These are determined by applying exactly the same prediction scheme, Equation (2.12), of PDASM to the current SP iterates, i.e.,
\[
A_c^{s+1} = \{\, i \mid -\alpha_i^{s+1} - \lambda_i^{s+1} < 0 \,\} \tag{4.36}
\]
\[
A_f^{s+1} = \{\, i \mid \hat\mu_i\,\hat\alpha_i^{s+1} - \hat E_i^T\beta^{s+1} - \nu_i^{s+1} < 0 \,\}, \tag{4.37}
\]
where $\lambda$ and $\nu$ are the Lagrange multipliers of the contact and friction QPs, respectively. SCHURPA-SP defines our solution procedure for frictional contact. With the high-level description in place, we turn to the efficient implementation of the C0_solve and F0_solve functions in the next chapter.

Algorithm 4.2 SCHURPA-SP
  Input: $A^0_c$, $A^0_f$, tol
  rel_error = $\infty$
  Run SCHURPA-INIT($A^0_c$) to compute $\alpha^0$, $\lambda^0$ and C0_solve
  Run SCHURPA-INIT($A^0_f$) to compute $\beta^0$, $\nu^0$ and F0_solve
  for $s = 0, 1, 2, \dots$ do
    Solve the contact QP: $(\alpha^{s+1}, \lambda^{s+1})$ = SCHURPA($A^s_c$, $\alpha^s$, $\lambda^s$, C0_solve)
    Solve the friction QP: $(\beta^{s+1}, \nu^{s+1})$ = SCHURPA($A^s_f$, $\beta^s$, $\nu^s$, F0_solve)
    Compute rel_error given by Equation (4.19)
    if rel_error < tol then
      BREAK; $\alpha^{t+1} = \alpha^{s+1}$ and $\beta^{t+1} = \beta^{s+1}$
    end if
    $A_c^{s+1} = \{ i \mid -\alpha_i^{s+1} - \lambda_i^{s+1} < 0 \}$
    $A_f^{s+1} = \{ i \mid \hat\mu_i\,\hat\alpha_i^{s+1} - \hat E_i^T\beta^{s+1} - \nu_i^{s+1} < 0 \}$
  end for
  $f^{t+1} = T\beta^{t+1}$; $c^{t+1} = J\alpha^{t+1}$; $v^{t+1} = M^{-1}(f^* + f^{t+1} + c^{t+1})$
  Output: $v^{t+1}$

Chapter 5 Solving the Contact and Friction QPs

In this chapter we describe the implementation details of Algorithm 4.2, our method for solving frictional contact arising in the simulator described in Chapter 4. Crucial to the efficiency of SCHURPA is the method of solving systems with the initial saddle point matrix, which in the case of staggered projections corresponds to the implementation of the functions that solve the initial contact and friction saddle point systems. We will show that, due to the structure of the contact and friction QPs, both functions require a factorization of a banded SPD matrix, and applying them amounts to solving with these factors. Section 5.1 describes the initial saddle point systems for the contact and friction quadratic programs; these may be formulated as SPD systems. We then show in Section 5.2 that the aforementioned SPD matrices exhibit a banded structure due to the Hessian matrices of the QPs. In particular, we utilize the nullspace method to solve the friction QP, which results in a banded reduced Hessian. Since SCHURPA requires multiple simultaneous solves, we are motivated to design an efficient specialized parallel solver for dense SPD banded systems with multiple right-hand sides. A parallel solver based on block substitution is outlined in Section 5.3.
5.1 Saddle Point Systems Arising in Contact and Friction

The SCHURPA-SP loop given by Algorithm 4.2 requires the solution of the initial saddle point systems for the contact and friction QPs, performed in the functions C0_solve and F0_solve, respectively. In both cases we can rephrase these systems as the solution of an SPD system. Let us first investigate the case of contact.

5.1.1 Contact

Consider the contact QP solved at each iteration of SCHURPA-SP, restated here for convenience:
\[
\alpha^{s+1} = \operatorname*{argmin}_{\alpha}\; \tfrac12\alpha^T H^C\alpha - \alpha^T c \quad\text{subject to}\quad \alpha\ge 0, \tag{C}
\]
where
\[
c = -J^T M^{-1}(\hat T\beta^s + f^*), \qquad H^C = J^T M^{-1} J. \tag{5.1}
\]
Recall that $A^0$ defines the initial working set and hence the initial saddle point system. We define
\[
F = I\setminus A^0, \qquad B = A^0
\]
to be the initial sets of free and bound indices of $\alpha$, respectively, so that the initial saddle point system for (C) is
\[
\begin{pmatrix} H^C_{F,F} & H^C_{F,B} & 0 \\ H^C_{B,F} & H^C_{B,B} & -I_B \\ 0 & -I_B & 0 \end{pmatrix}
\begin{pmatrix} \alpha_F \\ \alpha_B \\ \lambda \end{pmatrix}
=
\begin{pmatrix} c_F \\ c_B \\ 0 \end{pmatrix}. \tag{5.2}
\]
SCHURPA requires the solution of Equation (5.2) with arbitrary right-hand sides, i.e.,
\[
\begin{pmatrix} H^C_{F,F} & H^C_{F,B} & 0 \\ H^C_{B,F} & H^C_{B,B} & -I_B \\ 0 & -I_B & 0 \end{pmatrix}
\begin{pmatrix} \alpha_F \\ \alpha_B \\ \lambda \end{pmatrix}
=
\begin{pmatrix} c_F \\ c_B \\ b \end{pmatrix}. \tag{5.3}
\]
Simplifying Equation (5.3) yields
\[
\alpha_B = -b, \tag{5.4}
\]
\[
H^C_{F,F}\,\alpha_F = c_F - H^C_{F,B}\,\alpha_B, \tag{5.5}
\]
\[
\lambda = -c_B + H^C_{B,F}\,\alpha_F + H^C_{B,B}\,\alpha_B. \tag{5.6}
\]
Thus solving the initial saddle point system of the contact QP amounts to solving an SPD system of size $|F|\times|F|$, given by Equation (5.5). Algorithm 5.1 gives C0_solve. Note that a factorization of $H^C_{F,F}$ must be computed to solve Equation (5.5); this is performed in SCHURPA-INIT of Algorithm 4.2.

Algorithm 5.1 C0_solve
  Input: $(c, b)$
  Set $\alpha_B = -b$
  Solve the SPD system $H^C_{F,F}\,\alpha_F = c_F - H^C_{F,B}\,\alpha_B$
  Set $\lambda = -c_B + H^C_{B,F}\,\alpha_F + H^C_{B,B}\,\alpha_B$
  Output: $(\alpha, \lambda)$
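Algorithm 5.1 translates almost directly into code. The following dense, CPU-side sketch is our illustration only: solveSPD is a hypothetical stand-in for the banded Cholesky solve of Section 5.3, and the partition into free and bound indices is assumed to be precomputed.

    #include <functional>
    #include <vector>

    using Vec = std::vector<double>;
    using Mat = std::vector<std::vector<double>>; // dense, for clarity only
    using SPDSolver = std::function<Vec(const Mat&, const Vec&)>;

    // C0_solve (Algorithm 5.1): solve the initial contact saddle point system
    // for an arbitrary right-hand side (c, b), given H^C partitioned by the
    // free indices F and bound indices B.
    void c0Solve(const Mat& HFF, const Mat& HFB, const Mat& HBF, const Mat& HBB,
                 const Vec& cF, const Vec& cB, const Vec& b,
                 const SPDSolver& solveSPD,
                 Vec& alphaF, Vec& alphaB, Vec& lambda) {
        const std::size_t nB = b.size(), nF = cF.size();

        // (5.4): alpha_B = -b
        alphaB.resize(nB);
        for (std::size_t i = 0; i < nB; ++i) alphaB[i] = -b[i];

        // (5.5): H^C_{F,F} alpha_F = c_F - H^C_{F,B} alpha_B
        Vec rhs = cF;
        for (std::size_t i = 0; i < nF; ++i)
            for (std::size_t k = 0; k < nB; ++k)
                rhs[i] -= HFB[i][k] * alphaB[k];
        alphaF = solveSPD(HFF, rhs); // the only factorized solve in C0_solve

        // (5.6): lambda = -c_B + H^C_{B,F} alpha_F + H^C_{B,B} alpha_B
        lambda.assign(nB, 0.0);
        for (std::size_t i = 0; i < nB; ++i) {
            lambda[i] = -cB[i];
            for (std::size_t k = 0; k < nF; ++k) lambda[i] += HBF[i][k] * alphaF[k];
            for (std::size_t k = 0; k < nB; ++k) lambda[i] += HBB[i][k] * alphaB[k];
        }
    }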
5.1.2 Friction

Due to the simple structure of the reformulated friction constraints, a sparse nullspace can be computed explicitly; this allows us to form the reduced Hessian without destroying sparsity. We can then reformulate the initial saddle point system of the friction QP as an SPD system via the nullspace method (Nocedal and Wright, 1999). Recall the friction QP of SCHURPA-SP, given by
\[
\beta^{s+1} = \operatorname*{argmin}_{\beta}\; \tfrac12\beta^T H^F\beta - \beta^T d \quad\text{subject to}\quad \hat E\beta \le \gamma, \tag{F2}
\]
where
\[
d = -\hat T^T M^{-1}(c^{s+1} + f^*), \qquad \gamma = \operatorname{diag}(\hat\mu)\,\hat\alpha^{s+1}, \qquad H^F = \hat T^T M^{-1}\hat T. \tag{5.7}
\]
Let $p = |A^0|$ denote the number of initially active constraints and
\[
\hat E_{A^0} = \begin{pmatrix} \hat E^1_{A^0} & & & \\ & \hat E^2_{A^0} & & \\ & & \ddots & \\ & & & \hat E^m_{A^0} \end{pmatrix} \in \mathbb{R}^{p\times 2m}, \tag{5.8}
\]
where $\hat E^c_{A^0}$ is the (possibly empty) sub-matrix of
\[
\hat E_c = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}^T
\]
defined by $A^0$. Theorem 4.2 ensures that SCHURPA selects at most two linearly independent constraints of $\hat E_c$ to be active. If exactly two are selected, they completely determine the unknowns $\beta_{2c-1}$ and $\beta_{2c}$, which can then be removed from the problem. Therefore we may assume
\[
\hat E^c_{A^0}\in\mathbb{R}^{q\times 2}, \qquad 0\le q\le 1. \tag{5.9}
\]
If $q = 1$ we let
\[
Z_c = \begin{cases} (1,-1)^T & \text{if } \hat E^c_{A^0} = \pm(1,1)^T \\[2pt] (1,1)^T & \text{if } \hat E^c_{A^0} = \pm(1,-1)^T. \end{cases} \tag{5.10}
\]
Notice that $\hat E^c_{A^0} Z_c = 0$. If $q = 0$ there is no reduction, and we let
\[
Z_c = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \tag{5.11}
\]

Proposition 5.1.1. The matrix
\[
Z = \begin{pmatrix} Z_1 & & & \\ & Z_2 & & \\ & & \ddots & \\ & & & Z_m \end{pmatrix} \in \mathbb{R}^{2m\times(2m-p)} \tag{5.12}
\]
forms a basis for the nullspace of $\hat E_{A^0}$.

Proof. By construction $Z$ is full rank, since each $Z_c$ is full rank. Computing the product,
\[
\hat E_{A^0} Z = \begin{pmatrix} \hat E^1_{A^0}Z_1 & & & \\ & \hat E^2_{A^0}Z_2 & & \\ & & \ddots & \\ & & & \hat E^m_{A^0}Z_m \end{pmatrix} = 0, \tag{5.13}
\]
where we define the product $\hat E^c_{A^0}Z_c$ to be the empty matrix whenever $\hat E^c_{A^0}$ is empty. Thus $Z$ is a basis for the nullspace of $\hat E_{A^0}$.

Equipped with a nullspace basis of $\hat E_{A^0}$, we can solve the saddle point system via the nullspace method, which we briefly outline here. The initial saddle point system induced by $A^0$ is
\[
\begin{pmatrix} H^F & \hat E_{A^0}^T \\ \hat E_{A^0} & 0 \end{pmatrix}
\begin{pmatrix} \beta \\ \nu \end{pmatrix}
=
\begin{pmatrix} d \\ \gamma_{A^0} \end{pmatrix}. \tag{5.14}
\]
SCHURPA requires the solution of Equation (5.14) with arbitrary right-hand sides, i.e.,
\[
\begin{pmatrix} H^F & \hat E_{A^0}^T \\ \hat E_{A^0} & 0 \end{pmatrix}
\begin{pmatrix} \beta \\ \nu \end{pmatrix}
=
\begin{pmatrix} d \\ b \end{pmatrix}. \tag{5.15}
\]
Any solution to $\hat E_{A^0}\beta = b$ can be written as
\[
\beta = \beta_p + Z\beta_z, \tag{5.16}
\]
where $\beta_p$ is a particular solution. Substituting Equation (5.16) into Equation (5.15) and multiplying the first equation by $Z^T$ yields the reduced Hessian equation
\[
(Z^T H^F Z)\,\beta_z = -(Z^T H^F\beta_p + Z^T d). \tag{5.17}
\]
Note that we can easily compute $\beta_p$ by choosing
\[
\beta_p = \frac{\hat E_A^T b}{2}, \tag{5.18}
\]
since
\[
\hat E_A\beta_p = \frac{\hat E_A\hat E_A^T b}{2} = b. \tag{5.19}
\]
Upon solving for $\beta_p$ and $\beta_z$, $\beta$ can be computed from Equation (5.16), and the corresponding Lagrange multipliers are
\[
\nu = \frac{\hat E_A(d + H^F\beta)}{2}. \tag{5.20}
\]
As in the contact QP, we have reduced the problem to an SPD system, Equation (5.17), of size $(2m-p)\times(2m-p)$. The function F0_solve is given in Algorithm 5.2 and, as in the case of contact, requires a factorization of $Z^T H^F Z$, which is performed in SCHURPA-INIT of Algorithm 4.2.

Algorithm 5.2 F0_solve
  Input: $(d, b)$
  Set $\beta_p = \hat E_A^T b / 2$
  Solve the SPD system $(Z^T H^F Z)\,\beta_z = -(Z^T H^F\beta_p + Z^T d)$
  Set $\beta = \beta_p + Z\beta_z$
  Set $\nu = \hat E_A(d + H^F\beta)/2$
  Output: $(\beta, \nu)$

While nullspace matrices are typically dense and the reduced Hessian usually loses the sparsity structure of the original Hessian, that is not the case here. We show in the next section that, due to the structure of $H^C$ and $H^F$, the SPD matrices in Equation (5.5) and Equation (5.17) are banded. This banded structure is critical to the efficient implementation of SCHURPA-SP.
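Because $Z$ is block diagonal with $2\times 2$ identity or $2\times 1$ sign blocks, products with $Z$ never need to be formed explicitly. The following sketch is our illustration of this point (the enum names and storage layout are our own, not the thesis implementation):

    #include <vector>

    // Per-contact nullspace block Z_c of (5.10)-(5.11): either the 2x2
    // identity (q = 0) or a single signed column (q = 1).
    enum class BlockType { Identity,   // q = 0: Z_c = I_2
                           SumFree,    // q = 1: Z_c = (1,-1)^T
                           DiffFree }; // q = 1: Z_c = (1, 1)^T

    // Computes beta = Z * betaZ without forming Z. beta has length 2m;
    // betaZ has length 2m - p, laid out contact by contact.
    std::vector<double> applyZ(const std::vector<BlockType>& blocks,
                               const std::vector<double>& betaZ) {
        std::vector<double> beta(2 * blocks.size());
        std::size_t z = 0; // read position in betaZ
        for (std::size_t c = 0; c < blocks.size(); ++c) {
            switch (blocks[c]) {
            case BlockType::Identity: // two free components copied through
                beta[2*c]     = betaZ[z++];
                beta[2*c + 1] = betaZ[z++];
                break;
            case BlockType::SumFree:  // one reduced component, Z_c = (1,-1)^T
                beta[2*c]     =  betaZ[z];
                beta[2*c + 1] = -betaZ[z++];
                break;
            case BlockType::DiffFree: // one reduced component, Z_c = (1, 1)^T
                beta[2*c]     =  betaZ[z];
                beta[2*c + 1] =  betaZ[z++];
                break;
            }
        }
        return beta;
    }

Applying $Z^T$ is the mirror image (summing the two components of each contact with the appropriate signs), which is why the reduced Hessian $Z^T H^F Z$ can be assembled directly from the $2\times 2$ blocks of $H^F$.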
5.2 Banded SPD Systems

The computational "meat" of SCHURPA-SP is the factorization of, and subsequent solving with, the SPD matrices
\[
G^C = H^C_{F,F}, \tag{5.21}
\]
\[
G^F = Z^T H^F Z. \tag{5.22}
\]
These matrices are in fact banded. We first prove that $H^C$ and $H^F$ exhibit a banded structure.

Proposition 5.1. Upon discretization using trilinear shape functions, $H^C$ is a banded matrix.

Proof. We prove the proposition on a 2D grid using bilinear shape functions; the extension to 3D using trilinear shape functions as in the simulator is straightforward. Consider the four shape functions $\phi_N$ for $N\in\mathcal{N}^i := \{N_1, N_2, N_3, N_4\}$ providing local support to a given contact cell $\Omega_i$, as pictured in Figure 4.1. The locality of the shape functions implies
\[
J_{iN}\ne(0,0,0)^T \iff N\in\mathcal{N}^i.
\]
We define grid cells $\Omega_i$ and $\Omega_j$ to be adjacent iff they share a common shape function with local support, as exemplified in Figure 5.1.

[Figure 5.1: Example of cell adjacency on a 2D grid. Grid cells $\Omega_i$ and $\Omega_j$ are adjacent as they share a common shape function, while $\Omega_i$ and $\Omega_k$ are not adjacent.]

Thus if $\Omega_i$ and $\Omega_j$ are not adjacent,
\[
J_{iN}J_{jN} = 0,\;\; N = 1,\dots,L \quad\Rightarrow\quad H^C_{ij} = \sum_k \frac{J_{ik}J_{jk}}{M_{kk}} = 0.
\]
In other words, $H^C$ has a bandwidth of at most $\rho_{\max}$, where
\[
\rho_{\max} = \max_{i,j}\left\{\,|i-j| \;:\; \text{grid cells } \Omega_i \text{ and } \Omega_j \text{ are adjacent}\,\right\}, \tag{5.23}
\]
as required.

We denote the bandwidth of $H^C$ by $\rho$. The constraint cells are ordered by the standard triplet ordering $(x, y, z)$ (i.e., first by $x$, then by $y$, then by $z$ coordinate). Since this ordering roughly groups adjacent constraint cells together, $\rho$ should be small relative to $m$. Figure 5.2 shows a typical sparsity pattern of $H^C$, exhibiting a banded structure. A bandwidth-minimizing procedure such as Cuthill-McKee reordering could also be applied to decrease $\rho$.

[Figure 5.2: Typical banded sparsity pattern of the contact Hessian $H^C$ arising in the simulator.]

Proposition 5.2. The matrix $H^F$ is banded with a bandwidth of at most $2\rho$.

Proof. The proof is analogous to that of Proposition 5.1, with quantities corresponding to $J$ replaced by quantities corresponding to $\hat T$.

The following lemma is an immediate consequence of Proposition 5.1.

Lemma 5.1. The matrix $G^C$ is banded and SPD with a bandwidth of at most $\rho$.

Proof. Since $G^C$ is a sub-matrix of $H^C$, its bandwidth is at most that of $H^C$, which is at most $\rho$.

Finally, we show that $G^F$ is also banded.

Lemma 5.2. The matrix $G^F$ is banded and SPD with a bandwidth of at most $2\rho+1$.

Proof. Recall that $Z = \operatorname{diag}(Z_1, Z_2, \dots, Z_m)$, where $Z_c\in\mathbb{R}^{2\times q}$ with $q = 1$ or $q = 2$. We decompose $H^F$ into $2\times 2$ blocks $H^F_{IJ}$, $I, J = 1,\dots,m$; this block form captures the $2\rho$ bandwidth of $H^F$, with $H^F_{IJ} = 0$ whenever $|I-J| > \rho$. Now we simply compute the product
\[
G^F = Z^T H^F Z = \left( Z_I^T H^F_{IJ} Z_J \right)_{I,J=1,\dots,m},
\]
which has the same block-sparsity pattern as $\left(H^F_{IJ}\right)$. From this we see that $G^F$ has a bandwidth of at most $2(\rho+1) - 1 = 2\rho+1$, as required.

We emphasize that while the reduced Hessian typically loses sparsity, this is not the case here due to the particular structure of the nullspace matrix $Z$. The final section of this chapter describes a parallel solver for the banded SPD systems arising in the contact and friction QPs.

5.3 Block Solver for Banded SPD Systems

Both the contact and friction QPs require the solution of a banded SPD system. In the case of contact we have
\[
G^C\alpha_F = c_F, \tag{5.26}
\]
and for friction we have
\[
G^F\beta_z = -(Z^T H^F\beta_p + Z^T d). \tag{5.27}
\]
Recall that SCHURPA requires solving systems with multiple right-hand sides within the functions C0_solve and F0_solve. Within these functions we must solve Equation (5.26) and Equation (5.27), respectively, implying that $c_F$, $\beta_p$, and $d$ may be matrices, whose number of columns depends on the number of changes to the working set. We are thus interested in developing a parallel solver for systems of the form
\[
HX = B, \tag{5.28}
\]
where $H\in\mathbb{R}^{n\times n}$ is a banded SPD matrix with bandwidth $\rho\ll n$, and $X, B\in\mathbb{R}^{n\times k}$, where $k\ll n$ is the number of right-hand sides. We write the Cholesky factorization $LL^T = H$; note that $L$ is also banded. A solution to Equation (5.28) can then be computed via
\[
LY = B, \qquad L^T X = Y.
\]
Unfortunately, the backward/forward substitution method is inherently sequential. Recently, approaches to parallelizing sparse triangular solves have been investigated (Naumov, 2011), wherein one reorders the matrix by grouping equations that may be solved independently. However, the results offer only a modest speedup (e.g., 2× on average) and do not apply to the dense banded case discussed here.
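To see why substitution serializes, consider the forward sweep $LY = B$ written out for a banded $L$. The sketch below (our illustration; $L$ is stored densely for clarity) makes the data dependence explicit: row $i$ cannot be computed before rows $i-\rho,\dots,i-1$.

    #include <vector>

    // Sequential forward substitution L*Y = B for a lower-triangular banded L
    // with bandwidth rho. Y overwrites B. The dependence of row i on the rho
    // rows above it forces the outer loop to run strictly in order, which is
    // what makes this step hard to parallelize.
    void bandedForwardSolve(const std::vector<std::vector<double>>& L,
                            std::vector<std::vector<double>>& B, int rho) {
        const int n = static_cast<int>(B.size());
        const int k = static_cast<int>(B[0].size());
        for (int i = 0; i < n; ++i) {                 // rows, strictly in order
            const int c0 = (i > rho) ? i - rho : 0;   // first nonzero column of row i
            for (int j = 0; j < k; ++j) {             // right-hand sides (independent)
                double s = B[i][j];
                for (int c = c0; c < i; ++c)          // subtract already-known entries
                    s -= L[i][c] * B[c][j];
                B[i][j] = s / L[i][i];
            }
        }
    }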
For large, very narrow banded systems, the SPIKE algorithm (Polizzi and Sameh, 2006) can work extremely well. However, our bandwidths are not fixed and may be small to medium sized, depending on the simulation data (see Figure 5.2). We opt for a simple yet effective parallel approach based on block substitution. We decompose the Cholesky factor into blocks of size $s$ as follows:
\[
L = \begin{pmatrix} L_{11} & & & \\ L_{21} & L_{22} & & \\ & \ddots & \ddots & \\ & & L_{p,p-1} & L_{pp} \end{pmatrix}, \tag{5.29}
\]
where $L_{ij}\in\mathbb{R}^{s\times s}$ and $p = \lceil n/s\rceil$. Note that we require $s\ge\rho$ to ensure that the two-block-banded structure of Equation (5.29) contains all nonzero elements of $L$. Extracting the block diagonals into the matrices
\[
C = \begin{pmatrix} L_{21} \\ \vdots \\ L_{p,p-1} \end{pmatrix} \in \mathbb{R}^{(p-1)s\times s}, \qquad
D = \begin{pmatrix} L_{11} \\ \vdots \\ L_{pp} \end{pmatrix} \in \mathbb{R}^{ps\times s},
\]
we solve Equation (5.28) via Algorithm 5.3, which is simply a block version of substitution. We implement the algorithm on the GPU using CUDA, as described in the next chapter.

Algorithm 5.3 Block Substitution
  Input: Block matrices $C, D$ from the Cholesky factorization $LL^T = H$
  $Y_1 = D_1^{-1}B_1$
  for $i = 2, \dots, p$ do
    $Y_i = D_i^{-1}(B_i - C_{i-1}Y_{i-1})$
  end for
  $X_p = D_p^{-T}Y_p$
  for $i = p-1, \dots, 1$ do
    $X_i = D_i^{-T}(Y_i - C_i^T X_{i+1})$
  end for
  Output: Solution to $HX = B$

Upon computing $D_i^{-1}$ for $i = 1,\dots,p$ in parallel as a pre-processing step, Algorithm 5.3 requires $4p-2$ sequential matrix-matrix multiplications, each of which can be performed efficiently in parallel. By choosing appropriately sized blocks, the solving phase of SCHURPA can be significantly improved. The only additional cost is the pre-processing step of computing the inverse diagonal blocks $D_i^{-1}$. This cost is small relative to the Cholesky factorization and need only be incurred once during the initialization step of SCHURPA, after the initial factorization phase. Explicit inversion of the diagonal blocks was also done in (Tomov et al., 2010) to perform triangular solves on the GPU, where performance gains exceeding 50× over NVIDIA's CUBLAS triangular solvers were observed. Figure 5.3 shows the factorization and solve runtimes of the block banded solver implemented on the GPU, compared against sequential substitution implemented on the CPU, for a fixed bandwidth of $\rho = 1000$ and $k = 10$ right-hand sides, as a function of the matrix dimension. We used Matlab to perform the banded Cholesky factorization on the CPU; for the GPU version we utilized the CULA library (Humphrey et al., 2010) for CUDA. For matrices of size $n\approx 8000$ we observe that the GPU performs the factorization twice as fast as the CPU. Solving a banded system sequentially runs in $O(n\rho)$ per right-hand side, whereas our block banded solver (in an ideally parallelized setting) runs in $O(ps) = O(n)$. Thus, while both the CPU and GPU solving runtimes are linear in $n$, the GPU solver is independent of the bandwidth. Figure 5.3 demonstrates the scalability of the GPU block substitution solver. The gains in the solving phase come at the cost of the pre-processing step during factorization which, as mentioned, is small relative to the cost of factorization.

[Figure 5.3: Comparison of (a) the banded Cholesky factorization and (b) the block banded solver CPU and GPU runtimes, applied to banded matrices as a function of matrix size. The bandwidth ρ = 1000 and number of right-hand sides k = 10 are fixed across matrix dimensions.]
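To connect Algorithm 5.3 to the GPU implementation of the next chapter, the following sketch shows how its forward sweep maps onto a chain of dense multiplications via cuBLAS. This is our illustration under assumed storage conventions (not the thesis code): Dinv holds the precomputed inverses $D_i^{-1}$, each $s\times s$ block is stored contiguously in column-major order, all pointers are device memory, and B is overwritten with intermediate values.

    #include <cublas_v2.h>

    // Forward sweep of Algorithm 5.3: Y_1 = Dinv_1 * B_1 and
    // Y_i = Dinv_i * (B_i - C_{i-1} * Y_{i-1}), i = 2..p.
    void blockForwardSweep(cublasHandle_t h, const double* Dinv, const double* C,
                           double* B, double* Y, int s, int k, int p) {
        const double one = 1.0, minusOne = -1.0, zero = 0.0;
        // Y_1 = Dinv_1 * B_1
        cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, s, k, s,
                    &one, Dinv, s, B, s, &zero, Y, s);
        for (int i = 1; i < p; ++i) {
            const double* Ci    = C + (size_t)(i - 1) * s * s;
            const double* Di    = Dinv + (size_t)i * s * s;
            double*       Bi    = B + (size_t)i * s * k;
            const double* Yprev = Y + (size_t)(i - 1) * s * k;
            double*       Yi    = Y + (size_t)i * s * k;
            // B_i <- B_i - C_{i-1} * Y_{i-1}  (one dense multiply)
            cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, s, k, s,
                        &minusOne, Ci, s, Yprev, s, &one, Bi, s);
            // Y_i <- Dinv_i * B_i             (apply precomputed inverse)
            cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, s, k, s,
                        &one, Di, s, Bi, s, &zero, Yi, s);
        }
    }

The backward sweep is analogous, applying $D_i^{-T}$ and $C_i^T$ by setting the corresponding transpose flags in the cublasDgemm calls. Each multiplication is an $s\times s$ by $s\times k$ product that the GPU parallelizes internally; only the $4p-2$ calls themselves are sequential.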
Chapter 6 GPU Implementation

General-purpose computing on Graphics Processing Units (GPUs), or GPGPU, is a relatively recent endeavor taken up by the scientific computing community. Before common API frameworks existed, customizing GPU functionality was arduous and left primarily to specialists. However, two important developments have made GPGPU popularity explode:

• the advent of programming interfaces such as OpenCL (Stone et al., 2010) and NVIDIA's CUDA (NVIDIA, 2012b), and
• double precision capability on GPUs.

No longer limited to computer graphics, GPUs are fast becoming a staple of scientific computing. This chapter outlines a parallel implementation of SCHURPA applied to the simulator described in Chapter 4. We implement SCHURPA in the GPGPU framework using CUDA. Our motivation for a GPU implementation is two-fold: the simulator is also GPU-based, so CPU-to-GPU memory overhead is avoided, and SCHURPA is amenable to parallelism, as discussed in Chapter 3. Dense linear algebra algorithms run on GPUs can achieve orders-of-magnitude acceleration compared to standard CPU implementations (Tomov et al., 2010), and this performance gap will only widen as the many-core paradigm continues to pervade contemporary programming.

Section 6.1 overviews CUDA, describing the C++ extension syntax. In Section 6.2, we give a taste of the custom functionality implemented on the GPU for SCHURPA by detailing a symmetric-banded general matrix-matrix multiplication operation. Off-the-shelf GPU libraries were also used extensively in SCHURPA; we outline them in Section 6.3.

6.1 CUDA Programming on the GPU

CUDA (Compute Unified Device Architecture), introduced by NVIDIA in 2006, refers to a massively parallel programming model and architecture, along with an API to program GPUs using high-level programming languages such as C++ and Fortran. It is freely available (NVIDIA, 2012b) and requires a CUDA-capable NVIDIA GPU (GeForce 8 or greater, Tesla, Quadro, etc.). CUDA uses a SIMD (Single Instruction Multiple Data) execution paradigm to run many concurrent threads (individual processing units) from a single call of a kernel: a function programmed in CUDA which is compiled for and run on the GPU. The many cores on a GPU chip allow it to perform thousands of tasks on large sets of data in parallel, whereas CPU programs execute sequentially (multi-core CPUs and hyper-threading attempt to mitigate this limitation). Due to the increased number of cores on GPUs, theoretical floating point throughput (GFLOP/s) is orders of magnitude greater than that of the CPU. Table 6.1 summarizes the differences between the CPU and GPU programming paradigms.

SCHURPA was written using the extended C++ CUDA API. GPU memory, called device memory, is managed by the GPU; CPU, or host, memory can be copied to device memory and vice versa via the CUDA API.
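As a concrete, illustrative example of this memory model (ours, not from the thesis), the following snippet allocates a device array, copies host data to it, and copies results back using the standard CUDA runtime calls:

    #include <cuda_runtime.h>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<float> hostData(n, 1.0f);   // host (CPU) memory

        // Allocate device (GPU) memory and copy the host data to it.
        float* deviceData = nullptr;
        cudaMalloc(&deviceData, n * sizeof(float));
        cudaMemcpy(deviceData, hostData.data(), n * sizeof(float),
                   cudaMemcpyHostToDevice);

        // ... launch kernels operating on deviceData here ...

        // Copy results back to the host and release the device memory.
        cudaMemcpy(hostData.data(), deviceData, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(deviceData);
        return 0;
    }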
Host-to-device and device-to-host memory copies are slow and are therefore minimized. As the simulator computes all data on the GPU, and device-to-device memory transfer is fast, we need not be concerned with any host-memory transfer overhead.

                        CPU        GPU
    Number of cores     Several    Hundreds
    Number of threads   Several    Thousands
    Thread speed        Fast       Slow
    Cache size          Large      Small

    Table 6.1: CPU vs. GPU comparison.

There are several types of GPU memory, including global, shared, register and local memory. Global memory is the largest, with the slowest read/write access; pointers passed to kernel functions are typically arrays stored in global memory. Shared memory is much smaller and is shared among the threads of a single processing block. Its access speed is much faster than that of global memory, and it can be thought of as a cache. Registers store local data for each individual thread and have the fastest access. Local memory is again restricted to a thread and stores any data that cannot fit into register memory. There is also texture and constant memory, which we shan't discuss here.

6.1.1 CUDA Programming Model

Kernel functions are configured to execute threads partitioned into computational blocks of one, two or three dimensions, which exist on a two-dimensional grid (see Figure 6.1). The number of threads per block is limited, as they share resources residing on the core executing the block. Blocks cannot communicate or synchronize with one another; this allows them to be run independently in any order. The computational grid is conceptually useful for decomposing a problem domain such as a fluid simulation, or two-dimensional data such as a matrix.

[Figure 6.1: Computational grid of threads, partitioned into blocks, executed in parallel on the GPU. From the CUDA Programming Guide (NVIDIA, 2012b).]

Kernels require two additional launch parameters placed within "<<< >>>" brackets preceding the regular parameter list: a dim3 specifying the number of blocks on the two-dimensional grid, and a dim3 specifying the number of threads per block. For example, suppose we want to add two matrices $A\in\mathbb{R}^{N\times N}$ and $B\in\mathbb{R}^{N\times N}$. Each thread will compute one element of the resulting matrix $C\in\mathbb{R}^{N\times N}$. We decompose the matrices into 16×16 blocks mapped onto the grid and run the kernel addMatrices via the following syntax:

    // assume N x N matrices A, B, C have been defined in device memory
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);
    addMatrices<<<numBlocks, threadsPerBlock>>>(A, B, C);

    Listing 6.1: Calling a CUDA kernel.

In Listing 6.1 the number of blocks is computed to ensure there are $N^2$ threads (assuming $N$ is divisible by 16). Listing 6.2 shows the function body of the addMatrices kernel. The "__global__" identifier specifies a kernel function, which must have a void return type (here, $N$ is assumed to be a compile-time constant).

    __global__ void addMatrices(const float A[N][N], const float B[N][N],
                                float C[N][N])
    {
        int i = blockIdx.y * blockDim.y + threadIdx.y; // row index
        int j = blockIdx.x * blockDim.x + threadIdx.x; // column index
        if (i < N && j < N)
            C[i][j] = A[i][j] + B[i][j];
    }

    Listing 6.2: Body of a CUDA kernel.

CUDA provides built-in variables to determine the unique thread and block indices, given by threadIdx and blockIdx, respectively. Notice that in Listing 6.2 these indices are used to compute the global row and column indices.
Each thread executes the kernel in parallel on the GPU, writing its result to the two-dimensional array C. The arrays are all stored in global memory on the GPU.

6.2 Custom Kernels for SCHURPA

Several operations not available in the current versions of the CUDA libraries (see Section 6.3) were required for the implementation of SCHURPA. Here we describe one kernel, which multiplies a symmetric banded matrix by a dense matrix, to give a flavor of the kernel programming process in CUDA. We denote the following operation by symbgmm, for symmetric-banded general matrix-matrix multiplication:
\[
C = BA, \tag{6.1}
\]
where $B\in\mathbb{R}^{n\times n}$ is a symmetric banded matrix, and $A, C\in\mathbb{R}^{n\times m}$ are dense matrices. We define $B$ to have at most $\rho$ nonzero sub/super-diagonals, i.e., $B_{ik} = 0$ whenever $|i - k| > \rho$, so that Equation (6.1) can be rewritten as
\[
C_{ij} = \sum_{k=1}^{n} B_{ik}A_{kj} = \sum_{k=\max(1,\,i-\rho)}^{\min(n,\,i+\rho)} B_{ik}A_{kj}.
\]
Such an operation is required due to the banded structure of the contact and friction Hessians described in Chapter 4, and currently the CUBLAS library only supports general dense matrix-matrix multiplication. A sequential version of symbgmm written in C++ is given in Listing 6.3; the bodies of the helper functions are omitted for brevity (see Appendix B).

    struct Matrix {
        int width;
        int height;
        int ld;          // leading dimension
        float* elements;
    };

    struct BandedMatrix : Matrix {
        int rho;         // number of sub/super-diagonals
    };

    // helper functions (bodies in Appendix B)
    float getElement(const Matrix A, int i, int j);
    float getElement(const BandedMatrix B, int i, int j);
    void  setElement(Matrix A, int i, int j, float value);

    // computes C = B * A, where B is a symmetric banded matrix
    void symbgmmHost(BandedMatrix B, Matrix A, Matrix C) {
        float Cval;
        // loop through each element of C
        for (int i = 0; i < C.height; i++) {
            for (int j = 0; j < C.width; j++) {
                Cval = 0.0f;
                // dot product of the i-th row of B with the j-th column of A,
                // restricted to the band
                for (int k = max(0, i - B.rho); k <= min(B.width - 1, i + B.rho); k++)
                    Cval += getElement(B, i, k) * getElement(A, k, j);
                setElement(C, i, j, Cval);
            }
        }
    }

    Listing 6.3: Sequential function of the symbgmm operation.

A straightforward port to CUDA is given in Listing 6.4. As in the matrix-addition example of Listing 6.2, the two outer loops over rows $i$ and columns $j$ are replaced by a unique thread for each element $C_{ij}$. However, due to slow global memory access this kernel is not ideal: for example, a row $b_i^T$ is required by multiple threads computing the dot products $b_i^T a_j$, yet it need not be read from global memory by each of them. Rewriting Equation (6.1) in block form yields
\[
C_{IJ} = \sum_{K} B_{IK}A_{KJ}. \tag{6.2}
\]
We map the computation of each $s\times s$ block $C_{IJ}$ to a block on the computational grid. To avoid redundant memory fetches, the sub-matrices $B_{IK}$ and $A_{KJ}$ are loaded into shared memory for each block, with each thread loading one sub-matrix element in parallel. A thread computes an element of the sub-matrix product $C_S = B_{IK}A_{KJ}$ and stores the result in register memory. Summing all such products over $K$ yields the desired block $C_{IJ}$, which is then written to device memory. Due to the banded structure of $B$ we have $K = (k_0 : k_0 + s), \dots, (k_f : k_f + s)$, where $k_0 = \max(1, I_1 s - \rho)$ and $k_f = \min(n - s, I_1 s + \rho)$. A kernel using this shared-memory approach is given in Listing 6.5.
    // computes C = B * A, where B is a symmetric banded matrix
    __global__ void symbgmmNaive(BandedMatrix B, Matrix A, Matrix C) {
        int i = blockDim.y * blockIdx.y + threadIdx.y; // row index
        int j = blockDim.x * blockIdx.x + threadIdx.x; // column index
        if (i >= C.height || j >= C.width)
            return;
        float Cval = 0.0f;
        // dot product of the i-th row of B with the j-th column of A,
        // restricted to the band
        for (int k = max(0, i - B.rho); k <= min(B.width - 1, i + B.rho); k++)
            Cval += getElement(B, i, k) * getElement(A, k, j);
        setElement(C, i, j, Cval);
    }

    Listing 6.4: Naive kernel of the symbgmm operation.

    #define BLOCK_SIZE 16 // block size loaded into shared memory

    // computes C = B * A, where B is a symmetric banded matrix
    __global__ void symbgmmShared(BandedMatrix B, Matrix A, Matrix C) {
        int blockRow = blockIdx.y;
        int blockCol = blockIdx.x;
        int row = threadIdx.y;
        int col = threadIdx.x;
        int i = blockRow * BLOCK_SIZE + row; // row index
        int j = blockCol * BLOCK_SIZE + col; // column index
        // note: no early return here; every thread of the block must reach
        // __syncthreads(), so out-of-range threads still help load data
        // (the helpers return 0 for out-of-range reads)
        float Cval = 0.0f;
        // loop through each nonzero block of block-row blockRow
        int startBlock = max(0, blockRow * BLOCK_SIZE - B.rho);
        int endBlock = min(B.width - 1,
                           blockRow * BLOCK_SIZE + B.rho + BLOCK_SIZE - 1);
        for (int m = startBlock; m <= endBlock; m += BLOCK_SIZE) {
            __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];
            __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
            // each thread loads one element of the sub-matrices
            Bs[row][col] = getElement(B, i, m + col);
            As[row][col] = getElement(A, m + row, j);
            __syncthreads(); // ensure sub-matrices are loaded
            // accumulate the sub-matrix product Bs * As
            for (int k = 0; k < BLOCK_SIZE; k++)
                Cval += Bs[row][k] * As[k][col];
            __syncthreads(); // ensure the product is computed before reloading
        }
        if (i < C.height && j < C.width)
            setElement(C, i, j, Cval);
    }

    Listing 6.5: Kernel of the symbgmm operation using shared memory.

The __shared__ prefix on the sub-matrix arrays places them in shared memory. The __syncthreads() function synchronizes all threads of a given block, ensuring all preceding code has executed. The observant reader will notice that reading sub-matrix elements may fail when, for example, an element below the band of $B$ is requested, or when the block size $s$ does not divide $n$ or $m$. In such cases the sub-matrix element is set to 0, which produces the desired result; these details are abstracted away in the helper bodies, given in Appendix B.

Figure 6.2 compares the runtimes of the different symbgmm routines, along with NVIDIA's CUBLAS matrix multiplication for general matrices. The shared-memory implementation significantly outperforms the naive implementation.

[Figure 6.2: Runtimes of symbgmm, the symmetric-banded general matrix-matrix multiplication operation, as a function of the matrix size. For comparison, we also show the performance of CUBLAS (NVIDIA, 2012a) for general matrix-matrix multiplication.]

Along with the symbgmm operation, several other custom kernels were written to perform the following operations:

• forming the contact and friction Hessians $H^C$ and $H^F$, respectively,
• forming the free contact Hessian $G^C$, and
• forming the reduced friction Hessian $G^F$.

The details of these functions are left out of this thesis, but they all follow design principles similar to Listing 6.5.

6.3 CUDA Libraries

Several publicly available libraries were crucial to the efficacy of the SCHURPA GPU implementation. These include:

• CUBLAS (NVIDIA, 2012a), a GPU implementation of the BLAS API, developed by NVIDIA,
• CUSPARSE (NVIDIA, 2012c), a GPU library for manipulating sparse data and performing sparse matrix operations, developed by NVIDIA,
• CULA (Humphrey et al., 2010), a GPU implementation of the LAPACK API, developed by EM Photonics, and
• Thrust (NVIDIA, 2012d), a high-level templated interface analogous to the C++ STL, developed by Jared Hoberock and Nathan Bell.

Thrust was used for device memory management as well as vector operations such as reduction. CUSPARSE handled the sparse matrix data and operations, including the constraint Jacobian and tangent matrices in the simulator. CUBLAS provided low-level functionality such as device-to-device memory copies, dot products, and matrix-vector products in the SCHURPA algorithm. The highly optimized banded Cholesky solver in CULA was used for the factorization of the matrices $G^C$ and $G^F$.

Chapter 7 Results

We demonstrate the effectiveness of our SCHURPA implementation in the context of generic QPs and of the friction and contact QP subproblems arising in the staggered projections algorithm. We compare our method with a direct approach, which we call DIRECT, that computes a new factorization in each iteration of PDASM.

In Section 7.1 we randomly generate and solve QPs of fixed dimension. Runtimes are compared across CPU- and GPU-based implementations of DIRECT and SCHURPA. We show that as the initial working set $A^0$ approaches the optimal active set $A^*$, SCHURPA significantly outperforms the direct version. We then combine the staggered projections algorithm with SCHURPA to resolve frictional contact in the Eulerian solids simulator discussed in Chapter 4; the results are given in Section 7.2. All experiments were performed on an Intel i7 2.80GHz CPU running a 64-bit Windows 7 OS, with an NVIDIA GeForce GTX 580 GPU. A Matlab interface was created using mex (Matlab executable) files to allow fast prototyping of the experiments as well as visualization.

7.1 Randomly Generated QPs

To investigate the warm-starting capabilities of SCHURPA, we randomly generated QPs of the form
\[
\text{minimize}\;\; \tfrac12 x^T H x - c^T x \quad\text{subject to}\quad x\ge 0, \tag{7.1}
\]
where $x, c\in\mathbb{R}^n$ and $H\in\mathbb{R}^{n\times n}$ is a banded SPD matrix with bandwidth $\rho$. In our experiments, $n = 8000$ and $\rho = 1000$ were held fixed. The solution of a randomly generated QP was first computed to determine $A^*$, and $A^0$ was constructed by altering $A^*$. Specifically, we reverse the inclusion of the first $\delta$ constraints to obtain $A^0$, so that $|A^0 - A^*| = \delta$, where $A^0 - A^*$ denotes the set difference. CPU and GPU implementations of SCHURPA and DIRECT were then run on the QPs using $A^0$. The CPU implementations were done in Matlab. The saddle point systems of (7.1) were solved using the reduced Hessian method, as discussed in Section 5.1.1.
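One simple way to generate such a banded SPD test Hessian, shown in the sketch below, is $H = WW^T + \delta I$ with $W$ banded. This is our construction for illustration; the thesis does not specify how its random QPs were generated.

    #include <algorithm>
    #include <random>
    #include <vector>

    // Builds a dense n x n banded SPD matrix H = W*W^T + delta*I, where W is a
    // random banded matrix with half-bandwidth b, so H has bandwidth <= 2b.
    std::vector<std::vector<double>> randomBandedSPD(int n, int b, double delta,
                                                     unsigned seed = 42) {
        std::mt19937 gen(seed);
        std::uniform_real_distribution<double> dist(-1.0, 1.0);

        // Random banded W (stored densely for clarity).
        std::vector<std::vector<double>> W(n, std::vector<double>(n, 0.0));
        for (int i = 0; i < n; ++i)
            for (int j = std::max(0, i - b); j <= std::min(n - 1, i + b); ++j)
                W[i][j] = dist(gen);

        // H = W * W^T + delta * I; entries vanish when |i - j| > 2b.
        std::vector<std::vector<double>> H(n, std::vector<double>(n, 0.0));
        for (int i = 0; i < n; ++i)
            for (int j = std::max(0, i - 2 * b); j <= std::min(n - 1, i + 2 * b); ++j) {
                double s = (i == j) ? delta : 0.0;
                for (int k = std::max(0, std::max(i, j) - b);
                     k <= std::min(n - 1, std::min(i, j) + b); ++k)
                    s += W[i][k] * W[j][k]; // (W W^T)_{ij}
                H[i][j] = s;
            }
        return H;
    }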
Figure 7.1 shows the runtimes of the algorithms as a function of the working-set perturbation $|A^0 - A^*|$. We observe that the GPU versions significantly outperform their CPU counterparts, with the GPU version of SCHURPA being the most efficient. Notice that as the perturbation increases, the runtimes of both versions of SCHURPA increase, but the GPU version significantly less so than the CPU version; this is due to the parallel block solver discussed in Section 5.3.

[Figure 7.1: Runtimes of the CPU and GPU implementations of the DIRECT and SCHURPA algorithms, applied to (7.1), as a function of |A0 − A*|. For these experiments the matrix size n = 8000 and bandwidth ρ = 1000 were held fixed.]

Figure 7.2 shows a breakdown of the GPU-based SCHURPA runtimes. We observe that the factorization runtime is constant, as only one factorization occurs, independent of the iteration count. The solve runtime increases with the perturbation due to a larger Schur-complement solve.

[Figure 7.2: Decomposition of the GPU-based SCHURPA runtimes, applied to (7.1), as a function of |A0 − A*|. For these experiments the matrix size n = 8000 and bandwidth ρ = 1000 were held fixed.]

Table 7.1 details the statistics of the GPU-based DIRECT and SCHURPA algorithms. We denote factorization runtime by f(ms) and factorization count by #f, saddle-point-system solve runtime by s(ms) and solve count by #s, and total runtime by t(ms). The numbers of iterations of the two methods are identical, as both are PDASMs. Notice that one factorization of SCHURPA is more expensive than one factorization of DIRECT, due to the pre-processing step of the block banded solver explained in Section 5.3. Finally, we observe that the speedup SCHURPA attains over DIRECT is proportional to the number of iterations required by PDASM. For the case $|A^0 - A^*| \le 50$, a speedup of over 2× is observed.

                             SCHURPA                                      DIRECT
    |A0−A*|   f(ms)    #f   s(ms)    #s   t(ms)      f(ms)    #f   s(ms)    #s   t(ms)     #Iterations
       0      279.574   1    44.5667   1   350.912    193.128   1    45.9466   1   263.926       1
      10      282.66    1   169.643   15   484.104    767.985   4   183.745    4  1004.63        4
      20      278.669   1   227.727   31   540.222    938.094   5   229.483    5  1229.05        5
      30      281.972   1   257.621   42   574.07     967.221   5   229.828    5  1260.15        5
      40      284.377   1   274.522   54   593.343    959.715   5   230.715    5  1253.28        5
      50      283.504   1   359.458   72   678.072    972.877   5   229.687    5  1265.25        5

    Table 7.1: Statistics of the GPU-based implementations of SCHURPA and DIRECT applied to (7.1). We denote factorization runtimes by f(ms) and factorization count by #f, saddle point system solve runtimes by s(ms) and the number of solves by #s, and total runtimes by t(ms).

These results show that SCHURPA can successfully take advantage of warm-starting, and that our GPU implementation scales well as a function of $|A^0 - A^*|$. Compared to DIRECT, SCHURPA is significantly more efficient, especially when the number of PDASM iterations is large.

7.2 Frictional Contact in the Simulator

We now show the results of the SCHURPA-SP method, Algorithm 4.2, applied to a simulation involving frictional contact in the simulator. The simulation involves a collision of a 3×3 grid of cylinders, as depicted in Figure 7.3. Each cylinder has a length of 5m and a diameter of 3m, and moves toward the center with a speed of 4m/s. A frictional coefficient of $\mu = 1$ was used throughout the simulation, with a constant time step of $\Delta t = 0.005$ seconds, for a total time of $T = 0.5$ seconds.

[Figure 7.3: Cylinder collision simulation. (a) The initial setup of the cylinders. (b) Contact begins, denoting t = t0 = 0 seconds. (c) Resolution of the collision at t = T = 0.5 seconds.]

We compared the performance of DIRECT and SCHURPA by solving the contact and friction QP subproblems in SP using GPU-based implementations of both methods. Recall that Algorithm 4.2 requires the initial working sets $A^0_c$ and $A^0_f$ for contact and friction, respectively, and the quality of these estimates is crucial for effective warm-starting. For our warm-starting strategy, we chose a constraint $c$ to be active if its corresponding grid cell $\Omega_c$ had an active constraint in the previous time step's optimal active set. As contact is initiated, this strategy fails because there are no constraints from the previous time step; more sophisticated warm-starting techniques that predict contact could address this limitation. Figure 7.4 shows the proportions $A^0_c/A^*_c$ and $A^0_f/A^*_f$ over the course of the simulation. Once contact begins, the ratios stay above 0.8 throughout the simulation, yielding very good initial estimates. As a result, SP required at most 2 iterations, and SCHURPA required at most 8 iterations for contact and 11 iterations for friction throughout the simulation, demonstrating the utility of warm-starting both SP and SCHURPA. Warm-starting also reduces the number of solves required by Algorithm 4.2, shown for the contact and friction phases in Figure 7.6. Observe that near t = 0.35 seconds, $A^0_c/A^*_c$ and $A^0_f/A^*_f$ increase, which improves the warm-starting, and as a result the number of solves decreases. If the number of solves required by SCHURPA exceeds a threshold max_solve, we reset the algorithm as described in Section 3.3; in our implementation we set max_solve = 500. Over the course of the simulation, resetting was required at eight time steps in the friction solving phase. These occurred near the beginning of the simulation, when the warm-starting algorithm failed to adequately predict $A^*_f$. In the contact solving phase, restarting never occurred.

[Figure 7.4: The proportion of correct indices in A0c and A0f over the course of the simulation.]

Let $\rho^C$ denote the bandwidth of the contact Hessian $H^C$; the ratio $r^C = \rho^C/m$, where $m$ is the number of contacts, determines the sparsity of $H^C$. Over the simulation we observed $\max(r^C) = 0.48$, $\min(r^C) = 0.04$ and $\operatorname{mean}(r^C) = 0.18$, corroborating our earlier postulate that the bands are small to medium sized. Solving the contact and friction QPs requires factorizing and solving with the free Hessian $G^C$, Equation (5.21), and the reduced Hessian $G^F$, Equation (5.22), respectively. The number of contacts and the dimensions of $G^C$ and $G^F$ are given in Figure 7.5. These quantities peak at the time of maximum impact (≈ 0.1 seconds), and then decrease as the cylinders bounce apart (see Figure 7.3).
[Figure 7.5: The number of contacts, dimensions of GC and dimensions of GF over the course of the simulation.]

[Figure 7.6: The number of solves performed in SCHURPA-SP for the contact and friction phases over the course of the simulation.]

Figure 7.7 gives runtime comparisons of DIRECT and SCHURPA applied to the QP subproblems arising in SP. During the peak impact phase (≈ 0.1 seconds), SCHURPA outperforms DIRECT substantially, due to its more efficient factoring phase. Notice that the solving runtimes of SCHURPA are only marginally higher than those of DIRECT, even though SCHURPA requires many solves per time step (see Figure 7.6); in contrast, DIRECT performed at most 8 and 11 solves in the contact and friction phases, respectively. This exemplifies the parallelization of the banded block solver.

[Figure 7.7: Runtimes of DIRECT and SCHURPA for the contact (left) and friction (right) solving phases over the course of the simulation. The total, factor and solve runtimes are given by the top, middle and bottom subfigures, respectively.]

The total runtime speedup ratios of SCHURPA over DIRECT are given in Figure 7.8. We observe approximately 4× and 3× speedups during maximum impact for the contact and friction solving phases, respectively.

[Figure 7.8: Total runtime speedup ratios of SCHURPA to DIRECT for the contact and friction phases over the course of the simulation.]

The presented results show the utility of SCHURPA applied to the contact and friction QP subproblems of SP. Equipped with a simple yet effective warm-starting technique, the numbers of iterations in SCHURPA and SP remained low. The multiple solves required by SCHURPA were handled efficiently by the parallel block solver, yielding superior performance compared with DIRECT.

Chapter 8 Conclusion

In this thesis we have developed a solution framework for frictional contact which incorporates the staggered projections algorithm, along with an efficient parallel implementation that exploits the underlying matrix structure of the QP subproblems. Doing so gave an Eulerian solids simulator the ability to simulate frictional contact dynamics.

We first argued the merits of using active-set methods for solving QPs arising in physical simulation, due to their warm-starting capabilities.
In particular, we desired a parallel active-set method to solve the QP subproblems of the staggered projections algorithm (Kaufman et al., 2008) to complement the parallel implementation of the simulator (Levin et al., 2011). We showed that there were two fundamental limitations to parallelizing classical ASMs and their implementations:

• classical ASMs could require many sequential iterations, and
• these iterations updated factorizations, an inherently sequential process.

To overcome these limitations, we introduced SCHURPA: a parallel implementation of PDASM using the Schur-complement method. The super-linear convergence property of PDASM overcomes the first limitation above, and the Schur-complement approach to solving the saddle point systems arising in PDASM overcomes the second by circumventing the factorization updating process.

We then reformulated the friction problem so as to be solvable by PDASM, and therefore by SCHURPA. The reformulation incorporated a simplified linearized friction cone as the feasible set of the friction QP; convergence of PDASM applied to the reformulation could then be proven. We integrated SCHURPA into staggered projections to produce SCHURPA-SP, which requires only a single factorization each for contact and friction to solve all QP subproblems arising in staggered projections.

Further investigation of the structure of the contact and friction QPs in the simulator showed that both problems may be phrased as banded SPD systems. A simple yet effective parallel block substitution method for banded SPD systems was designed and incorporated into a SCHURPA-SP implementation on the GPU using CUDA (NVIDIA, 2012b), imbuing the simulator with the ability to handle frictional contact.

The merits of SCHURPA were confirmed by the speedup attained over DIRECT, an implementation of PDASM which produces a factorization at each iteration. Experiments on generic randomly generated QPs demonstrated that, when using an initial working set defined by relatively small perturbations of the optimal active set, SCHURPA excels over DIRECT in both CPU and GPU implementations. The GPU implementation showed better performance for larger perturbations, due to the parallelizability of the block solver; such perturbations are to be expected for large problems arising in physical simulation, since the active set is unknown and, unlike in SQP methods utilizing warm starts, there is no active set to which the QPs are converging.

Finally, we showed the improved performance of SCHURPA over DIRECT on a large-scale frictional contact simulation using the simulator.

Several avenues of future work could improve SCHURPA's usability. More sophisticated warm-starting techniques could significantly reduce the runtime of SCHURPA. A heterogeneous CPU-GPU implementation would not only improve performance, but also broaden the algorithm's applicability to simulations not confined to a GPU implementation. Along these lines, SCHURPA could be integrated into other simulation frameworks. For example, rigid body simulations share many of the same properties as the Eulerian solids simulator. Integrating SCHURPA would require only the function that solves the initial saddle point system, and could be extremely beneficial.
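To make this integration point concrete, the following is a minimal dense sketch of the Schur-complement bookkeeping at SCHURPA's core, written against the Eigen library with hypothetical class and method names. It assumes, for simplicity, that the system is bordered one column at a time and that the bordering block C is diagonal; the actual implementation of Chapter 3 operates on the saddle point systems themselves and handles more general working-set changes.

#include <Eigen/Dense>

using Eigen::MatrixXd;
using Eigen::VectorXd;

// Sketch of a Schur-complement solver: K0 is factored exactly once, and each
// working-set change borders the system and grows a small dense Schur
// complement S = C - B^T K0^{-1} B instead of triggering a refactorization.
class SchurSolverSketch {
public:
    explicit SchurSolverSketch(const MatrixXd& K0)
        : luK0_(K0), n_(static_cast<int>(K0.rows())), S_(0, 0), B_(K0.rows(), 0) {}

    // Border the system with column b and diagonal entry c:
    //   [ K0   B ] [z]   [rhs]
    //   [ B^T  C ] [y] = [ d ]   (C assumed diagonal in this sketch)
    void addColumn(const VectorXd& b, double c) {
        const VectorXd w = luK0_.solve(b);       // one solve with the fixed K0 factors
        const int k = static_cast<int>(S_.rows());
        MatrixXd Snew(k + 1, k + 1);
        Snew.topLeftCorner(k, k) = S_;
        const VectorXd s = -B_.transpose() * w;  // new off-diagonal entries of S
        Snew.topRightCorner(k, 1) = s;
        Snew.bottomLeftCorner(1, k) = s.transpose();
        Snew(k, k) = c - b.dot(w);               // new diagonal entry of S
        S_ = Snew;
        B_.conservativeResize(n_, k + 1);
        B_.col(k) = b;
    }

    // Solve the bordered system for z via the small dense Schur complement.
    VectorXd solve(const VectorXd& rhs, const VectorXd& d) const {
        const VectorXd u = luK0_.solve(rhs);
        const VectorXd y = S_.lu().solve(d - B_.transpose() * u);
        return luK0_.solve(rhs - B_ * y);
    }

private:
    Eigen::PartialPivLU<MatrixXd> luK0_;  // factorization of the initial system, reused throughout
    int n_;
    MatrixXd S_;  // small dense Schur complement
    MatrixXd B_;  // bordering columns accumulated so far
};

Each working-set change thus costs one solve against the fixed K0 factors plus a small dense update; only when the accumulated work grows too large (the max_solve reset of Section 3.3) is a fresh factorization computed.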
Finally, ideas from SCHURPA could be used in a hybrid iterative-direct solver for the saddle point systems arising in ASMs, in order to solve large, sparse problems.

Bibliography

Bartlett R.A. and Biegler L.T. QPSchur: a dual, active-set, Schur-complement method for large-scale and structured convex quadratic programming. Optimization and Engineering, 7(1):5–32, 2006.

Bergounioux M., Ito K., and Kunisch K. Primal-dual strategy for constrained optimal control problems. SIAM Journal on Control and Optimization, 37(4):1176, 1999.

Betts J.T. A sparse nonlinear optimization algorithm. Journal of Optimization Theory and Applications, 82(3):519–541, 1994.

Brunssen S., Schmid F., Schäfer M., and Wohlmuth B. A fast and robust iterative solver for nonlinear contact problems using a primal-dual active set strategy and algebraic multigrid. International Journal for Numerical Methods in Engineering, 69(3):524–543, 2007.

Daryina A.N. and Izmailov A.F. Semismooth Newton method for quadratic programs with bound constraints. Computational Mathematics and Mathematical Physics, 49(10):1706–1716, 2009.

Davis T.A. and Hager W.W. Row modifications of a sparse Cholesky factorization. SIAM Journal on Matrix Analysis and Applications, 26(3):621–639, 2006.

Gill P.E., Golub G., Murray W., and Saunders M. Methods for modifying matrix factorizations. Mathematics of Computation, 28(126):505–535, 1974.

Gill P.E., Murray W., Saunders M.A., and Wright M.H. Maintaining LU factors of a general sparse matrix. Linear Algebra and its Applications, 88:239–270, 1987.

Gill P.E., Murray W., Saunders M.A., and Wright M.H. A Schur-complement method for sparse quadratic programming. In Reliable Numerical Computation, Oxford Sci. Publ., pages 113–138. Oxford University Press, New York, 1990.

Goldfarb D. and Idnani A. A numerically stable dual method for solving strictly convex quadratic programs. Mathematical Programming, 27(1):1–33, 1983.

Hintermüller M. The primal-dual active set method for a crack problem with non-penetration. IMA Journal of Applied Mathematics, 69(1):1–26, 2004.

Hintermüller M., Ito K., and Kunisch K. The primal-dual active set strategy as a semismooth Newton method. SIAM Journal on Optimization, 13(3):865–888, 2002.

Hüeber S. and Wohlmuth B. A primal-dual active set strategy for non-linear multibody contact problems. Computer Methods in Applied Mechanics and Engineering, 194(27):3147–3166, 2005.

Humphrey J.R., Price D.K., Spagnoli K.E., Paolini A.L., and Kelmelis E.J. CULA: hybrid GPU accelerated linear algebra routines. In SPIE Defense, Security, and Sensing. International Society for Optics and Photonics, 2010.

Ito K. and Kunisch K. Lagrange Multiplier Approach to Variational Problems and Applications. Society for Industrial and Applied Mathematics, 2008.

Jean M. The non-smooth contact dynamics method. Computer Methods in Applied Mechanics and Engineering, 177(3):235–257, 1999.

Kaufman D.M. Coupled Principles for Computational Frictional Contact Mechanics. Ph.D. thesis, Rutgers, The State University of New Jersey, New Brunswick, 2009.

Kaufman D.M., Sueda S., James D.L., and Pai D.K. Staggered projections for frictional contact in multibody systems. ACM Transactions on Graphics (TOG), 27(5):164:1–164:11, 2008.

Kunisch K. and Rendl F. An infeasible active set method for quadratic problems with simple bounds. SIAM Journal on Optimization, 14(1):35–52, 2003.

Kunisch K. and Rösch A. Primal-dual active set strategy for a general class of constrained optimal control problems. SIAM Journal on Optimization, 13(2):321–334, 2002.
Lanczos C. The Variational Principles of Mechanics. Dover Publications, 1986.

Levin D.I.W., Litven J., Jones G.L., Sueda S., and Pai D.K. Eulerian solid simulation with contact. ACM Transactions on Graphics (TOG), 30(4):36, 2011.

Moreau J. On unilateral constraints, friction and plasticity. New Variational Techniques in Mathematical Physics, pages 172–322, 1973.

Naumov M. Parallel solution of sparse triangular linear systems in the preconditioned iterative methods on the GPU. NVIDIA Technical Report NVR-2011-0, 2011.

Nocedal J. and Wright S. Numerical Optimization. Springer Verlag, 1999.

NVIDIA. CUBLAS: http://docs.nvidia.com/cuda/cublas/index.html, 2012a.

NVIDIA. CUDA: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html, 2012b.

NVIDIA. CUSPARSE: http://docs.nvidia.com/cuda/cusparse/index.html, 2012c.

NVIDIA. Thrust: http://docs.nvidia.com/cuda/thrust/index.html, 2012d.

Polizzi E. and Sameh A.H. A parallel hybrid banded system solver: the SPIKE algorithm. Parallel Computing, 32(2):177–194, 2006.

Qi L. and Sun J. A nonsmooth version of Newton's method. Mathematical Programming, 58(1-3):353–367, 1993.

Stewart D.E. Rigid-body dynamics with friction and impact. SIAM Review, 42(1):3–39, 2000.

Stewart D.E. and Trinkle J.C. An implicit time-stepping scheme for rigid body dynamics with inelastic collisions and Coulomb friction. International Journal for Numerical Methods in Engineering, 39(15):2673–2691, 1996.

Stone J.E., Gohara D., and Shi G. OpenCL: a parallel programming standard for heterogeneous computing systems. Computing in Science and Engineering, 12(3):66–73, 2010.

Tomov S., Nath R., Ltaief H., and Dongarra J. Dense linear algebra solvers for multicore with GPU accelerators. In Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW), 2010 IEEE International Symposium, pages 1–8. IEEE, 2010.

Ulbrich M. Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces. Society for Industrial and Applied Mathematics, 2011.

Versteeg H. and Malalasekera W. An Introduction to Computational Fluid Dynamics: The Finite Volume Method. Prentice Hall, 2007.

Wright S.J. Primal-Dual Interior-Point Methods. Society for Industrial and Applied Mathematics, 1997.

Appendix A

Data Storage Formats

Dense matrices are stored in column-major order, which is defined for a matrix A as

    Aij = a[i + j*lda],

where a is the associated pointer to device memory and lda is the leading dimension, or stride, of the allocated memory storing A. A symmetric banded matrix B with kb sub-diagonals is stored in a (kb + 1) × n matrix whose row i contains the ith sub-diagonal, beginning with the main diagonal, so that

    Bij = b[(i − j) + j*ldb],   max(0, i − kb) ≤ j ≤ i,

where ldb is the leading dimension of B. Table A.1 summarizes the data storage formats for dense and banded matrices on the GPU.

    Matrix   Type               Array   Data mapping
    A        Dense              a       Aij = a[i + j*lda]
    B        Symmetric banded   b       Bij = b[(i − j) + j*ldb],  max(0, i − kb) ≤ j ≤ i

Table A.1: Data storage formats for dense and symmetric banded matrices. For matrices A and B, a and b are their two-dimensional stored representations in GPU memory, respectively. lda and ldb are the leading dimensions of the arrays a and b, respectively, and kb is the number of sub/super-diagonals of the matrix B.
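As a quick sanity check of the mappings in Table A.1, the following host-side snippet (illustrative only; the dimensions and values are arbitrary) stores and retrieves one element under each format:

#include <cassert>
#include <vector>

int main() {
    // Dense column-major: A(i, j) lives at a[i + j*lda].
    const int m = 3, n = 2, lda = m;
    std::vector<float> a(lda * n, 0.0f);
    a[2 + 1 * lda] = 5.0f;              // store A(2, 1)
    assert(a[2 + 1 * lda] == 5.0f);

    // Symmetric banded, kb sub-diagonals, stored in a (kb + 1) x n array:
    // B(i, j) lives at b[(i - j) + j*ldb] for max(0, i - kb) <= j <= i.
    const int nb = 4, kb = 1, ldb = kb + 1;
    std::vector<float> b(ldb * nb, 0.0f);
    const int i = 2, j = 1;             // entry on the first sub-diagonal
    b[(i - j) + j * ldb] = 7.0f;        // store B(2, 1) (= B(1, 2) by symmetry)
    assert(b[(i - j) + j * ldb] == 7.0f);
    return 0;
}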
Appendix B

Additional Code

Listing B.1 contains the helper function bodies used by the kernels of Listings 6.3, 6.4, and 6.5. If a memory access outside the legal matrix elements is attempted, 0 is returned, which pads the block with zeros. The resulting multiplication of the sub-matrices in Equation (6.2) is then

\begin{bmatrix} B_{IK} & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} A_{KJ} & 0 \\ 0 & 0 \end{bmatrix}
=
\begin{bmatrix} C_S & 0 \\ 0 & 0 \end{bmatrix},

obtaining the desired sub-matrix.

// Helper functions.

// Returns A(i, j) for a dense matrix, or 0 if (i, j) lies outside A.
__device__ float getElement(const Matrix A, int i, int j) {
    float val = 0.0f;
    if (i < A.height && j < A.width)
        val = A[i + j * A.ld];
    return val;
}

// Returns B(i, j) for a symmetric banded matrix, or 0 if (i, j) lies
// outside B or outside its band.
__device__ float getElement(const BandedMatrix B, int i, int j) {
    float val = 0.0f;
    // If j > i, swap i and j: only the lower triangle is stored.
    if (j > i) {
        int temp = j;
        j = i;
        i = temp;
    }
    int min_j = max(0, i - B.rho);  // band of width rho below the diagonal
    if (i < B.height && j < B.width && min_j <= j)
        val = B[i - j + j * B.ld];
    return val;
}

// Writes A(i, j) = value if (i, j) lies inside A; otherwise does nothing.
__device__ void setElement(Matrix A, int i, int j, float value) {
    if (i < A.height && j < A.width)
        A[i + j * A.ld] = value;
}

Listing B.1: Helper bodies for kernels.
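To show how these helpers compose, the following is a hypothetical, simplified kernel (not one of Listings 6.3-6.5, whose shared-memory tiling it omits) in which each thread computes one entry of C = BA; threads that overhang the matrix edges simply accumulate zeros and skip their writes:

// A hypothetical sketch: each thread forms one entry of C = B*A using the
// padded accessors, so out-of-range reads contribute zero and out-of-range
// writes are discarded by setElement.
__global__ void paddedProductSketch(const BandedMatrix B, const Matrix A,
                                    Matrix C) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;  // row of C
    int j = blockIdx.x * blockDim.x + threadIdx.x;  // column of C
    float sum = 0.0f;
    for (int k = 0; k < A.height; ++k)
        sum += getElement(B, i, k) * getElement(A, k, j);
    setElement(C, i, j, sum);
}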
