Isosurface Stuffing Improved: Acute Lattices and Feature Matching

by

Crawford Doran

B. Software Engineering, University of Waterloo, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Master of Science

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Computer Science)

The University of British Columbia
(Vancouver)

October 2013

© Crawford Doran, 2013

Abstract

We present two improvements to Labelle and Shewchuk's isosurface stuffing algorithm for tetrahedral mesh generation. First, we use an acute tetrahedral lattice known as the A15 lattice in place of the original BCC lattice. In the uniform case, the higher-quality tetrahedra of A15 significantly improve the quality of the resulting mesh; BCC remains the best choice for adaptive meshes, as the A15-based adaptive lattices we have designed are not able to outperform it. Second, we extend the method to match features such as corners and sharp creases. Feature lines and points are matched by snapping nearby mesh vertices onto them. A final vertex smoothing pass mitigates the loss of mesh quality due to feature matching. Our experiments show that for input surfaces with reasonable crease angles, this is generally sufficient to restore mesh quality.

Preface

This thesis consists primarily of independent work by the author, Crawford Doran. Initial ideas and implementation of the material in Chapter 3 came from Robert Bridson. Athena Chang provided additional help in creating and visualizing the results found in Chapter 6.

A version of the material in Chapter 3, Chapter 5, and Chapter 6 was presented as a Technical Talk at SIGGRAPH 2013, but none of the original material contained herein has seen official publication.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
Dedication
1 Introduction
2 Overview of Mesh Generation
  2.1 Background and Motivation
  2.2 Mesh Quality
  2.3 Related Work
    2.3.1 Delaunay Refinement
    2.3.2 Advancing Front
    2.3.3 Octree Methods
    2.3.4 Lattice Methods
3 A15 Isosurface Stuffing
  3.1 Improving on the BCC Lattice
  3.2 TCP and Edge Valence Analysis
  3.3 A15 Lattice
  3.4 Algorithm
4 Adaptivity
  4.1 Background
  4.2 Tetrahedral Subdivision
  4.3 A15 Point Lattice
  4.4 Adaptive Red-Green A15
5 Feature Matching
  5.1 Algorithm
  5.2 Feature Endpoints
  5.3 Feature Paths
  5.4 Mesh Smoothing
6 Results
  6.1 Uniform A15 Isosurface Stuffing
  6.2 Feature Matching
  6.3 Adaptive A15 Lattice
7 Conclusion
Bibliography
Appendix A: A15 Tile

List of Tables

Table 6.1 Mesh quality for smooth inputs
Table 6.2 Smooth inputs with optimization
Table 6.3 Nonsmooth inputs with feature matching, after vertex snapping
Table 6.4 Nonsmooth inputs with feature matching, after smoothing
Table 6.5 Quality of lattices produced with the A15 Point Lattice algorithm
Table 6.6 Quality of lattices produced with the Red-Green A15 algorithm

List of Figures

Figure 2.1 Comparison of equilateral and sliver tetrahedra.
Figure 3.1 Illustration of the BCC lattice.
Figure 3.2 A single tile of the A15 lattice viewed from multiple angles.
Figure 3.3 A15 isosurface stuffing in action.
Figure 4.1 Adaptive meshing using the BCC lattice.
Figure 4.2 Red-green subdivision stencils.
Figure 4.3 A15 point lattice (and the octree from which it was generated).
Figure 4.4 Transforming one face of an A15 tile to double resolution.
Figure 4.5 Subdivided bridge cells in the red-green A15 algorithm.
Figure 5.1 Feature points (black) and curves (pink) of the Fandisk model.
Figure 5.2 Fandisk meshed with and without feature matching.
Figure 5.3 Example of feature curve matching.
Figure 6.1 Cutaway view of Sphere input, meshed with h = 0.2.
Figure 6.2 Dragon input, h = 0.02, with dihedral angle histogram.
Figure 6.3 Bunny input, h = 1.0, with dihedral angle histogram.
Figure 6.4 Cube, h = 0.1, after smoothing. Cutaway view at right.
Figure 6.5 Joint, h = 0.03, after smoothing. Cutaway view at right.
Figure 6.6 Block, h = 0.05, after smoothing. Cutaway views on bottom.
Figure 6.7 Fandisk, h = 0.03, after smoothing. Cutaway view at right.
Figure 6.8 Adaptive lattice (Bunny) from the A15 Point Lattice algorithm.
Figure 6.9 Adaptive lattice (Block) from the Red-Green A15 algorithm.
Figure A.1 A15 tile overlaid on cube (thick black lines).
Figure A.2 Mapping of A15 tile elements to cube.

Glossary

BCC  Body-Centred Cubic lattice, a space-filling tetrahedral lattice formed by two offset Cartesian grids

CDT  Constrained Delaunay Tetrahedralization, an algorithm for finding a volumetric triangulation of a space that is constrained to contain a given set of vertices, edges, and faces

FEM  Finite Element Method, a procedure for solving partial differential equations on domains discretized into an unstructured mesh of elements

CAD  Computer-Aided Design, commonly used in the creation of industrial machine components

TCP  Tetrahedral Close-Packed structures, a family of space-filling tetrahedral lattices composed of near-equilateral elements

SDF  Signed Distance Field, a data structure that represents an isosurface by storing the signed distance to the surface at regular grid locations

Acknowledgments

Thank you to Prof.
Robert Bridson for providing insight, encouragement, patience, and inspiration in my continuing exploration of the world of physics-based animation.

Thank you to Athena Chang for writing test scripts for my algorithms, and for being a sounding board for my ideas over the course of this project.

Thanks also to Essex Edwards for many enlightening conversations on a variety of topics in math, physics, and computer science.

A big thank you to James Jacobs for giving me the opportunity to be part of an exciting new project while being incredibly accommodating as I wrote this thesis.

Thank you to NSERC for enabling my research through its funding.

It is hard for me to imagine successfully completing this project without the love and support of my wife Gwen. Thank you for putting up with the long nights and for reminding me to eat, sleep, and enjoy the sunshine every so often.

And lastly, a big thank you to my family, whose support has meant the world to me, even from the other side of the continent.

Dedication

In loving memory of my father, David Doran, to whom I owe my inspiration to pursue higher education.

Chapter 1

Introduction

Tetrahedral meshes have become an important tool in physics-based animation, particularly as discretizations of volumetric domains for fluid and elastic solid simulations. The task of creating a good-quality tetrahedral mesh describing an arbitrary domain is prohibitively difficult and time-consuming by hand, hence the motivation to create automatic mesh generators. However, high-quality tetrahedral mesh generation is very much a non-trivial task, and many different techniques have been tried over the years to solve this difficult problem.

The problem of high-quality tetrahedral mesh generation is essentially a conflict between two opposing forces: the need for a mesh to have tetrahedral elements of good quality, and the need for a mesh to adequately describe a volumetric domain, particularly the surface describing its boundary.
The ideal, best-quality tetrahedron in most applications is the equilateral tetrahedron. Unfortunately, it is effectively impossible to mesh even relatively simple volumetric domains using only equilateral elements, and as such it becomes necessary to compromise on quality in order to achieve domain conformance. In some applications, such as modeling of smooth biological tissues or boundary-embedded simulation methods, faithful recreation of the domain boundary is less important, allowing for additional quality gains. In others, such as Computer-Aided Design (CAD) or high-accuracy offline Finite Element Method (FEM) simulations, the domain must be modeled precisely, and element quality usually suffers as a result.

One of the most notable recent results in tetrahedral mesh generation is the isosurface stuffing algorithm of Labelle and Shewchuk [13]. Isosurface stuffing begins with a tetrahedral lattice covering the domain (the Body-Centred Cubic (BCC) lattice is used), then deforms or cuts elements to place vertices on the domain boundary. The algorithm is conceptually quite simple, and relatively easy to implement compared to many other meshing algorithms. Remarkably, it is nevertheless one of the few published tetrahedral mesh generators to give guaranteed bounds on element quality. Although the bounds are impressive, there remains room for improvement in the quality of elements produced by the algorithm.

The attractive results of isosurface stuffing come with a caveat: the input surface to isosurface stuffing is assumed to be smooth and of adequately thick shape (e.g. with minimum radius of curvature greater than or comparable to the spacing of the tetrahedral lattice). If this assumption does not hold, isosurface stuffing "rounds off" non-smooth features such as corners and ridges.
For those applications where non-smooth features are required to be represented in the tetrahedral mesh, isosurface stuffing is unsuitable.

In this document, we present the work that we have undertaken to improve the isosurface stuffing algorithm. These improvements fall into two categories: improvement of the quality of elements produced by the algorithm, and extension of the algorithm to include non-smooth features in the resulting mesh.

In order to improve the quality of elements produced by isosurface stuffing, we replace the commonly-used BCC lattice with an acute tetrahedral tiling known as the A15 lattice. We describe the A15 lattice and our use of it in Chapter 3. In the case of generating a uniform tetrahedral mesh, the A15 lattice allows for large gains in the overall quality of the result. However, one of the strengths of the original isosurface stuffing algorithm using the BCC lattice is its adaptive meshing scheme. Chapter 4 describes our attempts to produce an adaptive scheme based on the A15 lattice that improves on the BCC-based adaptive scheme.

To resolve non-smooth features in input surfaces, we choose vertices in the mesh and snap them onto the features wherever possible. Care must be taken not to invert tetrahedra or degrade quality below acceptable levels. Our approach does not guarantee that all features will be resolved completely, but our experiments show that this procedure maintains very reasonable levels of quality while producing much better representations of non-smooth input surfaces.
Our feature-matching algorithm is presented in Chapter 5, and our experimental results can be found in Chapter 6.

We begin in Chapter 2 with a discussion of previously published mesh generation techniques, focusing specifically on quality of results as a basis for comparison.

Chapter 2

Overview of Mesh Generation

2.1 Background and Motivation

The problem of tetrahedral mesh generation has been studied for several decades. Much of the preliminary work comes from the engineering literature, motivated by the need to automatically create volumetric discretizations for use with the finite element method (FEM). More recently, with increased interest in simulation methods for physics-based animation, the topic has also been treated in the computer graphics literature. Some of the more common uses for tetrahedral meshes in computer graphics are simulations of fluids and deformable solid objects.

In order to assess and differentiate between the diversity of mesh generation methods, we look primarily at the quality of elements generated. Ease of implementation and flexibility are also important secondary considerations. We present here a more detailed discussion of how we measure tetrahedral mesh quality, and then use this definition to analyze several of the mesh generation methods most relevant to our own work.

2.2 Mesh Quality

It has been shown that the equilateral tetrahedron is the ideal, highest-quality tetrahedral element in the typical isotropic case. A thorough treatment of this subject, focusing on FEM, is given in Shewchuk [19]; we give a brief summary of relevant points here.

An equilateral tetrahedron, by its nature, has all edges of equal length, all altitudes equal (the distance between a vertex and its opposite face), and all dihedral angles equal (the angle between two faces sharing an edge). In fact, any one of the above properties is by itself a sufficient condition for the equilateral property, and implies the others to be true.
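These properties are easy to check numerically. The following sketch (our own illustration, with hypothetical helper names, not code from this thesis) computes all six dihedral angles of a regular tetrahedron directly from vertex coordinates, confirming that they are all equal:

```python
import math

def sub(p, q):
    # Component-wise difference p - q of two 3D points.
    return [p[i] - q[i] for i in range(3)]

def dot(p, q):
    return sum(p[i] * q[i] for i in range(3))

def dihedral_angle(a, b, c, d):
    """Interior dihedral angle (degrees) along edge ab of tetrahedron abcd.

    Projects c and d onto the plane perpendicular to ab, then measures
    the angle between the two projected direction vectors.
    """
    u = sub(b, a)
    ulen2 = dot(u, u)
    ca, da = sub(c, a), sub(d, a)
    v1 = sub(ca, [dot(ca, u) / ulen2 * ui for ui in u])
    v2 = sub(da, [dot(da, u) / ulen2 * ui for ui in u])
    cos_theta = dot(v1, v2) / math.sqrt(dot(v1, v1) * dot(v2, v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# A regular tetrahedron inscribed in the cube [-1, 1]^3.
tet = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

# Each of the 6 edges (i, j), with the remaining two vertices (k, l).
edges = [(0, 1, 2, 3), (0, 2, 1, 3), (0, 3, 1, 2),
         (1, 2, 0, 3), (1, 3, 0, 2), (2, 3, 0, 1)]
angles = [dihedral_angle(tet[i], tet[j], tet[k], tet[l]) for i, j, k, l in edges]
# All six angles come out equal, at arccos(1/3).
```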
Dihedral angles are a very popular and useful indicator of tetrahedron quality. All dihedral angles of an equilateral tetrahedron are equal to arccos(1/3) ≈ 70.53°. As the maximum and minimum dihedral angles diverge further from this value, the tetrahedron becomes further from equilateral, and thus of lower quality. Figure 2.1 illustrates the difference between a high-quality and a low-quality tetrahedron.

Furthermore, small dihedral angles cause poor conditioning in finite element methods, slowing convergence to a solution of the partial differential equation in question. Even worse, large dihedral angles reduce the accuracy of the solutions produced. These are concrete examples of why tetrahedral quality is important, and they also motivate many to use dihedral angles as their primary measure of element quality. Unfortunately, this makes it easy to adopt too narrow a focus in measuring quality, for example only maximizing the minimum dihedral angle of a mesh (e.g., Burkhart et al. [6]).

Figure 2.1: Comparison of equilateral and sliver tetrahedra. (Source: Si [21])

It is a much better approach to adopt a quality measure that indicates in a single scalar value how close to equilateral a tetrahedron is. One such measure is the aspect ratio of a tetrahedron, defined as the longest edge of a tetrahedron divided by its smallest altitude. As advocated by Shewchuk [19], we compute the aspect ratio using the signed volume of the tetrahedron, then invert it and normalize it with respect to the aspect ratio of an equilateral tetrahedron ((√(3/2))⁻¹). This gives a quality metric with a maximum of one for an equilateral tetrahedron, a value of zero for degenerate tetrahedra with no volume, and a value less than zero for an inverted tetrahedron (with orientation opposite the expected orientation). This is a quality function that can be used as the basis for a maximization problem; when the quality measure is maximized over a mesh, we know that its elements are as close to equilateral as possible.
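As a concrete illustration (again our own sketch, not the thesis implementation), the normalized metric just described can be realized as follows, assuming the longest-edge-over-smallest-altitude definition of aspect ratio given above. Since the smallest altitude of a tetrahedron with volume V is 3V/A_max (A_max being the largest face area), the inverted aspect ratio is 3V/(e_max · A_max), and multiplying by √(3/2) normalizes the equilateral case to exactly one:

```python
import itertools
import math

def _sub(p, q):
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def _dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def _norm(u):
    return math.sqrt(_dot(u, u))

def tet_quality(a, b, c, d):
    """Normalized inverse aspect ratio: 1 for equilateral, 0 for degenerate,
    negative for inverted tetrahedra."""
    verts = (a, b, c, d)
    # Signed volume (positive for the expected orientation).
    vol = _dot(_sub(b, a), _cross(_sub(c, a), _sub(d, a))) / 6.0
    # Longest edge over all 6 vertex pairs.
    e_max = max(_norm(_sub(p, q)) for p, q in itertools.combinations(verts, 2))
    # Largest face area over all 4 faces.
    a_max = max(_norm(_cross(_sub(q, p), _sub(r, p))) / 2.0
                for p, q, r in itertools.combinations(verts, 3))
    return math.sqrt(1.5) * 3.0 * vol / (e_max * a_max)

# Regular, positively oriented tetrahedron: quality is exactly 1.
reg = [(1, 1, 1), (1, -1, -1), (-1, -1, 1), (-1, 1, -1)]
q = tet_quality(*reg)
```

Swapping any two vertices of `reg` negates the signed volume, so the same call reports an inverted element with quality −1.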
Shewchuk [19] gives many different quality measures that can be used in this way, including aspect ratio. Our experiments use aspect ratio as the basis for quality measurement.

Note that, unless otherwise mentioned, the overall quality of a tetrahedral mesh is computed as the quality of its worst single element. A mesh can also be measured in terms of the average quality of its elements; however, the worst-element metric is generally preferred because in practice it correlates better with the accuracy and robustness of FEM simulations performed on the mesh. Mesh optimization is therefore treated as an exercise in improving the quality of the worst tetrahedron in the mesh.

2.3 Related Work

2.3.1 Delaunay Refinement

The well-known two-dimensional Delaunay criterion states that, for a given set of points on a plane, there exists a unique triangulation such that the circumcircle of every triangle contains no other points. In two dimensions, this has been shown to produce optimal triangulations. It is therefore natural to expect that its three-dimensional analogue, which uses the circumsphere of a tetrahedron, would be similarly useful for producing three-dimensional meshes.

Arguably the most popular tetrahedral mesh generation software available today, TetGen, uses Delaunay refinement techniques [21]. Specifically, it uses a technique called Constrained Delaunay Tetrahedralization (CDT) to produce a tetrahedral mesh that conforms to a given input polygonal surface. The algorithm is also augmented with Steiner point insertions that place additional vertices into the mesh in order to improve quality. The overall algorithm is anything but trivial to implement, and we will not go into detail here; the interested reader should consult Si [21], Shewchuk [20], and Alliez et al. [1].

Perhaps the greatest strength of Delaunay refinement is its ability to faithfully reproduce an input surface in the output mesh.
This ability comes from the fact that Delaunay refinement preserves the vertices and faces from the input (although sharp "needle-like" features can require special handling). It is also relatively easy to control the density of tetrahedra in different regions of the domain; more vertices are inserted in areas of higher density, while low density areas are left unrefined.

However, mesh quality is the area where Delaunay refinement falters. The key issue is that Delaunay refinement does not in fact optimize tetrahedra toward the ideal equilateral property. Instead, Delaunay methods minimize the circumradius-to-shortest-edge ratio of a tetrahedron, which admits a specific class of poor-quality tetrahedra known as slivers [20]. This has led to investigation of techniques for removing slivers from tetrahedral meshes, for example Cheng et al. [7] and Edelsbrunner and Guoy [9]. Furthermore, the fact that Delaunay refinement preserves input vertices and faces means that the quality of the result is dependent on the quality of the initial vertex placement and face triangulations. As a result, Delaunay refinement offers no guarantees on element quality, and in practice cannot consistently be relied upon to produce high-quality tetrahedral meshes.

2.3.2 Advancing Front

Advancing front methods begin at the boundary of a given domain and incrementally insert vertices and tetrahedra further and further inside the surface until the domain has been completely meshed. For an example from the graphics literature, see Möller and Hansbo [16]. Like Delaunay refinement, this technique explicitly preserves the input surface in the result. The difficult step involved in this algorithm is the merging of "fronts" as they inevitably intersect inside the domain. It is this step that prevents advancing front methods from offering any guarantees of output quality.
Advancing front is not particularly relevant to our work, so we will not discuss it further.

2.3.3 Octree Methods

Looking back into the engineering literature reveals a long history of meshing algorithms which work with a Cartesian grid structure, particularly quadtrees or octrees. Yerry and Shephard [28] first applied this technique in two dimensions, encoding the input curve into a quadtree structure and then triangulating each quadtree cell. The approach was then applied to three dimensions by Yerry and Shephard [29] and further refined by Shephard and Georges [18].

Algorithms of this class begin by encoding the input surface into an adaptive grid structure, i.e., an octree. This effectively converts the surface into a volumetric representation made up of cubical octree cells, plus information about the input surface in the octree cells that contain it. To achieve a tetrahedral mesh, each octree cell is meshed locally in such a way that adjacent cells conform to each other. Octree cells containing the input surface are treated specially, and meshed in such a way that the surface is represented in the output. The details of how each cell is meshed are what distinguish the different octree algorithms.

One of the more popular octree methods is the Finite Octree method, which takes great care to reproduce all features of the input surface, including non-smooth features [18]. Its ability to accurately reproduce CAD-style non-smooth models with relatively high quality elements has led to its adoption for engineering applications. Other notable highlights include provably optimal (up to potentially large constant factors) meshers by Bern et al. [2] in two dimensions and Mitchell and Vavasis [14] in three dimensions.
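To make the shared first step of these methods concrete, here is a minimal sketch (our own illustration, not any of the cited implementations) of classifying the cells of a uniform grid — the leaves of a degenerate octree — against a surface given implicitly, so that only cells straddling the surface receive special meshing treatment:

```python
import math

def classify_cell(sdf, x0, y0, z0, h):
    """Classify a cubical cell of side h by sampling a signed distance
    function at its 8 corners: 'inside' (all non-positive), 'outside'
    (all positive), or 'boundary' (mixed signs: the cell straddles the
    surface and would be meshed specially)."""
    signs = [sdf(x0 + i * h, y0 + j * h, z0 + k * h) <= 0.0
             for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    if all(signs):
        return "inside"
    if not any(signs):
        return "outside"
    return "boundary"

# Example input: the unit sphere as an implicit surface (negative inside).
sphere = lambda x, y, z: math.sqrt(x * x + y * y + z * z) - 1.0

inner = classify_cell(sphere, -0.2, -0.2, -0.2, 0.4)   # fully inside
outer = classify_cell(sphere, 2.0, 2.0, 2.0, 0.4)      # fully outside
cut = classify_cell(sphere, 0.8, -0.2, -0.2, 0.4)      # straddles the surface
```

Corner sampling can of course miss surface detail finer than the cell size, which is why these methods refine the octree until each boundary cell is small relative to the local surface features.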
Although octree methods offer great promise for element quality, in practice there is still significant room for improvement, both in terms of quality and in terms of algorithmic complexity.

2.3.4 Lattice Methods

A later class of methods that we will refer to as lattice methods can be thought of as spiritual successors to octree methods. Instead of starting with a Cartesian structure for the initial volumetric representation of the domain, a space-filling tetrahedral lattice is used. This is a major breakthrough because it eliminates the need to do local meshing of octree cells; the domain is already filled with a tetrahedral lattice, and no additional work needs to be done on the interior of the domain. Only the tetrahedral elements intersecting the input surface need to be treated in order to resolve the boundary. In addition, better quality elements are possible because there is no longer any need to conform to the cubical cells of an octree.

Most of the literature exploring these methods deals primarily with smooth inputs, and is aimed at applications where it is either unnecessary to match every detail of an input surface, or it is assumed that the input surface has no sharp features. Velho et al. [25] began with the Freudenthal subdivision of a grid, and deformed it to match an implicit surface boundary. Molino et al. [15] introduced the BCC lattice, with red-green refinement used to achieve adaptivity. These methods accomplish the task of matching the input surface through deformation of the tetrahedra in the initial lattice.

The isosurface stuffing algorithm of Labelle and Shewchuk [13] makes the crucial observation that tetrahedra can also be cut in order to resolve the input surface. If lattice vertices lie close enough to the surface then they are warped onto the surface in the same way as in previous methods.
However, if an intersecting tetrahedron has no vertices close to the surface, new vertices are inserted that lie on the surface and the tetrahedron is cut according to a small number of context-specific stencils. This results in better quality tetrahedra at the boundary than can be achieved through vertex warping alone. In fact, isosurface stuffing gives provable bounds on the quality of the resulting tetrahedra, making it one of the only tetrahedral meshing algorithms to guarantee high-quality elements, and likely the simplest conceptually to do so.

Isosurface stuffing forms the basis of our work on tetrahedral mesh generation, and as such we go into more detail on how we have implemented the method in Chapter 3. Our work focuses on starting from the already-impressive level of quality achieved by the algorithm and seeking to improve it even further. We also look to imbue isosurface stuffing with the ability to resolve non-smooth input features, a capability found in methods such as Delaunay refinement, but so far absent from any lattice methods.

Chapter 3

A15 Isosurface Stuffing

3.1 Improving on the BCC Lattice

Probably the most popular tetrahedral lattice used in mesh generation today is the Body-Centred Cubic (BCC) lattice. The vertices of the BCC lattice can be thought of as two regular grids, one offset from the other by half the grid resolution [15]. This makes the BCC lattice relatively intuitive to visualize and work with. This same cubical, grid-like property that makes BCC nice to work with is also the source of its primary limitation in terms of element quality: all tetrahedra in the lattice have dihedral angles of 90°.

The nature of the isosurface stuffing algorithm is such that tetrahedra not near the isosurface are not deformed from their initial shape in the starting lattice.
It therefore follows that choosing a lattice with higher-quality tetrahedra will result in generally better elements in the output mesh, particularly in the interior of the volume. Moreover, starting with better-quality elements is expected to limit the degradation in quality caused by the deformation and cutting of tetrahedra near the boundary. Stated another way, a higher-quality initial lattice allows for more deformation before quality becomes too poor, a fact that is particularly important for our sharp feature-matching algorithm presented in Chapter 5. It is with this motivation that we look to replace the BCC lattice with a lattice composed of higher-quality tetrahedra.

Figure 3.1: Illustration of the BCC lattice. (Source: Labelle and Shewchuk [13])

3.2 TCP and Edge Valence Analysis

There are many space-filling tetrahedral tilings, of which BCC is only one choice [24]. The topic of space-filling geometric tiles has received much study in the literatures of both mathematics and chemistry, in the form of geometric foams and crystalline molecular structures, respectively [22]. Of particular interest is the family of Tetrahedral Close-Packed (TCP) structures, since their structure is composed of nearly-equilateral tetrahedra. By definition, the Voronoi cells of a TCP lattice have faces that are either pentagonal or hexagonal. As a consequence, all edges in a TCP lattice have valence five or six (in this context, the valence of an edge is the number of tetrahedra that share that edge).

It is worth noting at this point that it is not possible to tile three-dimensional space using only equilateral tetrahedra. As noted before, the dihedral angles of an equilateral tetrahedron are all equal to arccos(1/3) ≈ 70.53°. If we consider a single edge on the interior of a tetrahedral lattice, it becomes apparent that the sum of the dihedral angles at that edge must be equal to 360°. However, because 360°/arccos(1/3) ≈
5.1 is not an integer, it is not possible for an edge to be shared only by equilateral tetrahedra; at least one of the dihedral angles at the edge must be larger or smaller than the ideal arccos(1/3) in order for the sum to be 360°. The best we can do is either an edge of valence five, with an optimal maximum dihedral angle of 360°/5 = 72°, or an edge of valence six, with an optimal minimum dihedral angle of 360°/6 = 60°. To have edges of valence four or less is an immediate indication of poorer quality, since it is impossible to have a maximum angle less than 90° incident to a valence-four edge. Valences greater than six should also be avoided where possible due to the small dihedral angles they induce.

Edge valence analysis of the BCC lattice reveals that its 90° dihedral angles are the result of valence-four edges; even more strongly, it allows us to state that it is impossible to improve the quality of the BCC lattice further without changing its topology, since 90° is the optimal angle at a valence-four edge. Edge valence analysis can also be applied to other mesh generation techniques; the majority of techniques do not take edge valences into account during execution, and as such are prone to producing edges with valences that are either too small or too large. This is an indication of poor-quality tetrahedra, and also of limits on quality improvement that can be achieved without significant topological modifications.

3.3 A15 Lattice

With the insight of edge valence analysis available to us, we can clearly classify the TCP lattices as superior alternatives to BCC in terms of structural quality; all edges in a TCP lattice have valence five or six, meaning that for a space-filling tile its tetrahedra will be as close to equilateral as we can reasonably expect. There are three basic TCP structures, differentiated by the specific arrangement and types of their Voronoi cells, known as A15, Z, and C15 [22].
All three have been observed in nature as molecular structures.

We have chosen the A15 lattice to use in our mesh generation algorithm. This choice comes from the fact that the vertices of A15 can be defined in terms of integral coordinates on a grid, making it easier to work with in practice. By choosing A15 we are also able to leverage the insights of Williams [27], who used the A15 tile for a "marching tets" isosurfacing algorithm.

The A15 tile has all dihedral angles between 53° and 79°, making it a significant improvement over BCC. A rendering of the tile can be seen in Figure 3.2. It is also locally Delaunay, which means that performing Delaunay refinement on the vertices of the A15 tile will reproduce the A15 tile exactly. It is not possible to improve the tile locally by means of vertex smoothing; the vertices are already at local maxima for the quality of their adjacent tetrahedra. A full description of the geometry of the A15 tile can be found in Appendix A.

Figure 3.2: A single tile of the A15 lattice viewed from multiple angles.

3.4 Algorithm

Other than replacing the BCC lattice with the A15 lattice in the initial domain-filling step, our mesh generation algorithm is very similar to the original isosurface stuffing algorithm of Labelle and Shewchuk [13]. We give a brief overview of our version of the algorithm here.

1. Convert the input to a Signed Distance Field (SDF) representing the domain boundary as a zero-isosurface (if it is not already given in this format). Most commonly, the input surface is given in the form of a triangle mesh; in this case we use the fast sweeping method described in Bridson [4] and Zhao [30] to convert the triangles to an SDF. We use the convention that negative distances indicate that a point is inside the domain, and positive distances are outside. The resolution of the SDF is given by a desired resolution parameter, h. It is understood that features of the input of finer resolution than h will be lost and not represented in the output.

2.
Cover the volumetric domain with the uniform A15 lattice; this is accomplished by filling the domain's bounding box with the tetrahedral lattice, then removing all tetrahedra that lie completely outside the domain (i.e., have all four vertices with positive values in the SDF). The size of the tetrahedra corresponds to the resolution of the SDF, h, so that the isosurface information encoded in the SDF can be sampled accurately with no aliasing artifacts.

3. All edges of the lattice that intersect the isosurface are found and added to the set of "violated" edges. The point along the edge where the intersection occurs is denoted as the "cut point" of the violated edge. We measure the distance of the cut point to the endpoints of the violated edge as a ratio of the total edge length. These distance ratios determine how the violated edge is handled.

4. If the cut point of a violated edge occurs within a certain distance ratio threshold α of one of the edge's endpoints, then the violated edge is resolved by warping that endpoint to the cut point. The warped vertex now lies on the isosurface, and any other violated edges that contain that vertex are marked as resolved as well. All vertex warping operations are performed before moving to the next step.

5. All remaining violated edges now have cut points that are not close to either endpoint of their violated edge. These violated edges are resolved by performing a cutting operation on the tetrahedra that contain them, adding the cut points to the mesh in the process. Tetrahedra are cut according to a stencil determined by the number and relative placement of cut points on their edges. The stencils ensure that the resulting tetrahedra are conforming and create a well-formed mesh. Unlike the BCC-based isosurface stuffing algorithm, we do not have a concept of "red" and "black"
edges in the A15 lattice; we consider all edges equally, so fewer stencils are needed.

At the completion of the final step, no violated edges remain, and the tetrahedral mesh has been warped and cut so that its boundary conforms to the isosurface of the domain boundary. Figure 3.3 illustrates the effect of the process. As noted previously, non-smooth features and features under-resolved by the SDF are not reproduced.

Figure 3.3: A15 isosurface stuffing in action.

One of the biggest successes of the BCC-based isosurface stuffing paper was the derivation of worst-case quality bounds for the resulting mesh. The bounds vary depending on the metric being optimized and other algorithm parameters, but most guarantee that all dihedral angles will fall within the range of approximately 9° to 160° [13]. These bounds are obtained by a computer-assisted proof. We leave as future work the task of deriving similar bounds for the A15-based version of isosurface stuffing. For the time being, we rely on our earlier assertion that a higher-quality initial lattice will improve the final mesh. At the very least, we expect the average element quality to improve significantly, because the high-quality A15 tetrahedra on the interior of the domain are left unchanged by the algorithm. We provide representative experimental results in Section 6.1.

Chapter 4

Adaptivity

4.1 Background

It is generally desirable in mesh generation to have a finer resolution of elements in areas of interest of the domain, including the domain boundary. This adaptive meshing approach uses fewer elements in areas where detail is not required, thus saving both space and time when operating on the mesh.
The original isosurface stuffing algorithm includes an adaptive meshing scheme based on the BCC lattice. In Chapter 3, we presented an improved uniform isosurface stuffing algorithm using the A15 lattice; a natural next step, therefore, is to produce an analogous A15-based adaptive meshing scheme.

Adaptivity in isosurface stuffing is achieved by substituting an adaptive lattice for the uniform lattice in the first step. The original algorithm of Labelle and Shewchuk [13] uses a customized octree structure to construct a graded lattice based on BCC.

Figure 4.1: Adaptive meshing using the BCC lattice. Source: Labelle and Shewchuk [13].

The lattice uses BCC tiles in areas of uniform resolution; only in the gaps between these regions are modifications made in order to "bridge" the gap between tiles of different resolutions. These bridge tetrahedra are generated algorithmically based on the topology of the octree at the interface between cells of different sizes. The result is an adaptive lattice with all dihedral angles between 45° and 120°. Crucially, the lattice is generated such that the input surface only intersects BCC tetrahedra. This means that the same proofs of quality bounds used with the uniform lattice are applicable here, because the warping and cutting operations will operate on the same tetrahedra in both cases. Figure 4.1 shows an example of the BCC adaptive lattice used for isosurfacing.

We have already demonstrated that the A15 lattice outperforms the BCC lattice in terms of quality in the uniform case. Unfortunately, we have been unable to find any mention of adaptive variants of TCP structures in the literature, and the creation of an adaptive lattice reflecting the quality of A15 appears to be an open problem.
In the absence of useful mathematical theory or examples from nature, adaptive meshes are generally achieved by methods such as octree algorithms [18, 29], subdivision operations [15, 23], heuristic remeshing operations [26], or some combination of these.

The remainder of this chapter documents our efforts to produce an adaptive A15-based tetrahedral lattice using these and other methods. As described in Section 3.2, we use edge valences as our primary measure of whether a particular lattice emulates the structural quality of A15. The ideal lattice has only edges with valence five or six. Edges with valence four are considered undesirable but acceptable due to how commonly they occur in tetrahedral subdivision schemes. Valences greater than six are similarly undesirable but acceptable, but valences of three or less are considered unacceptable due to the large dihedral angles they induce.

4.2 Tetrahedral Subdivision

Subdivision schemes are a classic method for increasing and decreasing resolution in geometric applications. In volumetric subdivision, an initial cell (or small set of cells) is recursively split into smaller cells until a certain resolution threshold is met. In the case of adaptivity, the desired resolution threshold varies spatially. Adaptive subdivision is much more difficult for meshes than for octrees, however, because the regions between cells of different resolutions must be handled specially to ensure conformance of the elements of the mesh. For this reason, uniform subdivision schemes are much easier to devise than adaptive ones.

Probably the most popular tetrahedral subdivision scheme is red-green refinement [3, 8]. The algorithm is so named because it consists of regular "red" subdivisions, performed to increase resolution, and irregular "green" subdivisions, performed to preserve conformity of the mesh.
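The edge-valence criterion above is straightforward to check mechanically. As a rough illustration (not code from our implementation), the following sketch counts the valence of every edge in a tetrahedral mesh given as 4-tuples of vertex indices; for an interior edge, the valence is simply the number of tetrahedra sharing that edge:

```python
from collections import defaultdict
from itertools import combinations

def edge_valences(tets):
    """Count, for each edge, the number of tetrahedra sharing it.

    Each tetrahedron is a 4-tuple of vertex indices; each of its six
    edges contributes one to that edge's valence.
    """
    valence = defaultdict(int)
    for tet in tets:
        for u, v in combinations(sorted(tet), 2):
            valence[(u, v)] += 1
    return dict(valence)
```

Under our criteria, a lattice whose interior edges all have valence five or six is ideal; interior valences of three or less would be flagged as unacceptable.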
Red subdivisions divide one tetrahedron into eight by inserting a new vertex on each edge, splitting the tetrahedron into four smaller tetrahedra (one at each corner) and a central octahedron. There are three choices of diagonal to split the octahedron into four tetrahedra, and the choice can affect the quality of the resulting tetrahedra. Regardless of the choice of diagonal, red subdivision can be performed uniformly on adjacent tetrahedra and give a conforming result. However, unsubdivided neighbours of red tetrahedra cause three-dimensional T-junctions that need to be resolved with green refinement. Figure 4.2 shows the different tetrahedral subdivisions used in the red-green scheme.

Figure 4.2: Red-green subdivision stencils. Source: Molino et al. [15].

Molino et al. [15] use red-green refinement on an initial uniform BCC lattice to generate an adaptive BCC-based lattice. It seems simple enough to take the same approach but substitute in an A15 lattice. Unfortunately, red-green refinement produces valence-four edges, which severely undermines the structural quality of A15 (this is not a problem with the BCC lattice, which already has valence-four edges). As a result, successive refinements yield fine-resolution tetrahedra that do not retain the high-quality properties of the initial A15 lattice. For this reason, we did not pursue red-green refinement of an initial A15 lattice as an adaptive meshing scheme.

There are other, lesser-known tetrahedral subdivision schemes in the literature. One such scheme comes from Burkhart et al. [6], which seeks to produce a volumetric analogue to the well-known √3-subdivision scheme of Kobbelt [12]. However, analysis of this subdivision scheme shows that it produces valence-three edges, which we consider unacceptable. We note that the quality of the published results of this method is measured in terms of minimum dihedral angle; the inevitably large maximum dihedral angles that result from such low-valence edges are not mentioned.
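For concreteness, the 1-to-8 red split described above can be sketched as follows. This is an illustrative sketch rather than our implementation: it fixes one of the three octahedron diagonals arbitrarily, and `midpoint` is a hypothetical helper returning the id of the vertex inserted on an edge:

```python
def red_subdivide(tet, midpoint):
    """Red-subdivide one tetrahedron (a 4-tuple of vertex ids) into eight.

    `midpoint(u, v)` returns the id of the new vertex on edge (u, v).
    """
    a, b, c, d = tet
    # Midpoints of the six edges.
    ab, ac, ad = midpoint(a, b), midpoint(a, c), midpoint(a, d)
    bc, bd, cd = midpoint(b, c), midpoint(b, d), midpoint(c, d)
    # One corner tetrahedron at each original vertex.
    corners = [(a, ab, ac, ad), (b, ab, bc, bd),
               (c, ac, bc, cd), (d, ad, bd, cd)]
    # The central octahedron (ab, ac, ad, bc, bd, cd) is split into four
    # tetrahedra around one of its three diagonals; we fix the diagonal
    # (ab, cd) here, though the choice affects element quality.
    octa = [(ab, cd, ac, ad), (ab, cd, ad, bd),
            (ab, cd, bd, bc), (ab, cd, bc, ac)]
    return corners + octa
```

Applying this uniformly to adjacent tetrahedra is conforming because neighbouring tetrahedra agree on edge midpoints; the green stencils handle the unsubdivided neighbours.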
Because this method seems not to have been designed with the same definition of tetrahedral quality that we use, we do not pursue it further.

After exhausting the literature in search of a tetrahedral subdivision scheme that would preserve or emulate the quality of the A15 lattice, we attempted to design our own. We analyzed numerous prototypes based on combinations of edge subdivisions, face subdivisions, and point insertions on the interior of the tetrahedron. The following criteria were used for acceptance of a scheme:

1. The scheme must be able to tile uniformly (i.e., the "red" case).

2. The scheme must have finite propagation in the adaptive case (i.e., the "green" case).

3. The scheme must produce no edges with valence less than five.

The first two requirements are necessary of any adaptive tetrahedral subdivision scheme. The final requirement is specific to our goal of achieving high-quality tetrahedra, and is the property by which a scheme may be judged superior to standard red-green refinement.

Despite several prototypes that could satisfy up to two of the criteria listed above, we were unable to devise a subdivision scheme that satisfied all three. We do not claim to have performed a full search of the space of possible subdivision schemes; we leave this, as well as exploration of the possibility that no such scheme exists, as future work. For the time being, we conclude that standard red-green refinement is the best tetrahedral subdivision scheme in terms of tetrahedral quality, and turn our focus to other methods of creating an adaptive lattice.

4.3 A15 Point Lattice

As mentioned in Section 3.3, the A15 lattice is locally Delaunay. Thus, given a set of points located at the vertices of the A15 tile, performing Delaunay tetrahedralization will reproduce the A15 lattice.
This insight forms the basis of our next adaptive meshing algorithm, in which we generate an adaptive point lattice based on A15, then find its Delaunay tetrahedralization.

Before describing the algorithm, we note that it depends on the ability to overlay an A15 tile onto an octree cell. The version of the A15 tile we use (the listing of which is in Appendix A) has vertices at the corners of the cube with side length four. To overlay the tile, we scale and translate it so that those corner vertices align with the corners of the octree cell. It is also important to note that we establish a mapping between the edges and faces of the octant and the corresponding edges and faces of the A15 tile. This allows us to, for example, determine which vertices are shared by two A15 tiles overlaid on adjacent octree cells. An illustration of this mapping can also be found in Appendix A.

We begin by generating an octree representation of the input surface. The octree is then balanced so that adjacent cells differ in depth by at most one level. This is done to limit the complexity of the problem of bridging the gap between regions of different resolutions. We then iterate over all cells in the octree from smallest to largest. Each octree cell is filled with the vertices of an A15 tile; however, the vertices corresponding to any octree corners, edges, or faces that have already been visited are omitted to prevent duplicate and conflicting points. The result of this procedure is an adaptive A15 point lattice, an example of which can be seen in Figure 4.3. Finally, we use TetGen [21] to compute the Delaunay tetrahedralization of the point lattice to obtain the adaptive tetrahedral lattice.

Figure 4.3: A15 point lattice (and the octree from which it was generated).

This algorithm succeeds at creating an adaptive conforming tetrahedral mesh, and it also succeeds for the most part at replicating the structure of the A15 lattice in regions of uniform resolution.
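The scale-and-translate overlay of a tile onto an octree cell amounts to a simple affine map. A minimal sketch, assuming the tile vertices are given as integer coordinates inside the side-four cube (the actual vertex listing is in Appendix A and is not reproduced here):

```python
def overlay_tile(tile_vertices, cell_min, cell_size):
    """Map A15 tile vertices (integer coordinates in a side-4 cube)
    into an octree cell with minimum corner `cell_min` and edge
    length `cell_size`."""
    s = cell_size / 4.0  # one tile unit in world space
    return [(cell_min[0] + s * x,
             cell_min[1] + s * y,
             cell_min[2] + s * z) for (x, y, z) in tile_vertices]
```

Because corner, edge, and face vertices of the tile land exactly on the corresponding octree entities, vertices shared between adjacent cells map to identical coordinates, which is what makes the duplicate-suppression step above well defined.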
Unfortunately, in our experiments with the method, the "bridge" regions connecting regions of different resolutions exhibit very poor quality. Our results are presented in Section 6.3. The limitations of Delaunay refinement methods arise here, as the method does not optimize for equilateral tetrahedra, nor are any guarantees made about edge valences. Indeed, we observed frequent occurrences of valence-three edges in the meshes produced by this method, as well as many poor-quality sliver tetrahedra. There is room for further work in this direction, such as more sophisticated point-lattice generation techniques and post-processing operations for improving the quality of a Delaunay mesh. We chose to focus instead on a hybrid strategy for adaptive lattice generation, detailed below.

4.4 Adaptive Red-Green A15

The final adaptive lattice generation algorithm we tried is a hybrid of octree and subdivision methods. Observing that the poor quality in the A15 point lattice method described above occurs in the "bridge" regions of the adaptive lattice, we sought to design an algorithm that would give more control over the quality of the triangulation of those regions. We begin as before with an octree, and label all cells with subdivided neighbours as "bridge" cells. All non-bridge cells are filled with the standard A15 tile. The bridge cells must then be treated specially in order to conform to the A15 tiles of different resolutions surrounding them.

We achieve this by noting that the faces of one side of the A15 tile can be transformed to the faces of a finer-resolution set of four A15 tiles by the following operations:

1. Subdivide each face 1-to-4.

2. Translate vertices to match the finer-resolution tiles.

3.
Flip the long edges that result in the middle of the tile.

Figure 4.4: Transforming one face of an A15 tile to double resolution: red-green subdivision, then moving vertices, then flipping edges.

By performing three-dimensional analogues of these operations, an A15 tile can be made to conform to the smaller A15 tiles overlaid on adjacent subdivided octree cells. See Figure 4.4 for an illustration of this process on a face of an octree cell. We therefore overlay A15 tiles on all bridge cells, merge all bridge cells of the same size into a single tetrahedral mesh, and then perform the above steps on each bridge mesh to make its boundaries conform to the neighbouring A15 tiles.

The subdivision step is performed using red-green subdivision as described above. In this case, the valence-four edges caused by this scheme are considered acceptable because they only occur in the bridge regions; the uniformly high-resolution regions in this algorithm are meshed directly with the A15 tile. All faces that require subdivision are marked as being adjacent to virtual red-refined tetrahedra. The subdivision scheme is then propagated through the mesh as in Bey [3] (most tetrahedra whose faces require subdivision will be marked green, though some may require a full red refinement).

The edge flipping operation depends on the valence of the edge to be flipped. If the edge is of valence two, the 2-2 face swapping operation found in Freitag and Ollivier-Gooch [10] is used (recall that all edges to be flipped are boundary edges). If the edge is of valence three, we first insert a tetrahedron consisting of the existing edge and the "flipped" edge, effectively inserting the desired edge into the mesh. We then do a 4-4 edge flip as described in Klingner and Shewchuk [11] around the original edge, choosing the new diagonal so as to maximize the quality of the adjacent four tetrahedra.

Once all the required edges have been flipped, vertices are translated to match smaller adjacent A15 tiles.
At this point the bridge mesh should be able to conform with all adjacent tiles. Figure 4.5 shows a mesh at this stage of the algorithm. Once all bridge meshes have been processed, all tiles can be merged together into the final mesh.

Figure 4.5: Subdivided bridge cells in the red-green A15 algorithm.

By itself, this procedure produces a conformal adaptive mesh with A15 tiles in the regions of uniform resolution. It does not, however, produce good-quality elements in the bridge regions. This is largely due to the final vertex translation step, which can cause the tetrahedra created through subdivision to deform substantially. We have no choice but to move the boundary vertices in this way to achieve conformance with adjacent tiles; however, interior vertices of the bridge mesh are not constrained in this way. As such, we introduce a vertex optimization step to improve the quality of the result. We use a simplified version of the technique described in Freitag and Ollivier-Gooch [10], doing a steepest-descent search to find a new location for each unconstrained vertex. The search terminates whenever the worst adjacent tetrahedron is improved to the same quality as at least one other adjacent tetrahedron. We use the inverted aspect ratio quality measure introduced in Section 2.2 and compute the gradient using the second-order central difference method.

Even with the introduction of vertex optimization, our results consistently contained poor-quality tetrahedra, leading us to introduce additional mesh improvement techniques. We examine all valence-four edges of the mesh, performing a 4-4 edge flip wherever this would improve quality.
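The central-difference gradient used by the steepest-descent search can be sketched as follows; `local_quality` is a hypothetical callback returning the quality of the worst tetrahedron adjacent to the vertex when the vertex is placed at a given point:

```python
def quality_gradient(local_quality, p, eps=1e-4):
    """Second-order central-difference gradient of a local quality
    function at vertex position p = (x, y, z)."""
    grad = []
    for i in range(3):
        lo, hi = list(p), list(p)
        lo[i] -= eps
        hi[i] += eps
        grad.append((local_quality(hi) - local_quality(lo)) / (2.0 * eps))
    return grad
```

The step size `eps` is an assumption of this sketch; in practice it should be scaled relative to the local edge length.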
We also insert a Steiner point in place of a valence-four edge when doing so would improve quality, a step motivated by the fact that situations can arise where every possible choice of diagonal in a 4-4 edge flip yields degenerate or inverted tetrahedra.

These steps, combined with vertex optimization, are able to produce adaptive A15 meshes of reasonable quality for relatively simple input surfaces. Unfortunately, our experiments show that more complex inputs can cause our algorithm to create degenerate or very poor-quality tetrahedra. Even in the simple cases, the quality of the adaptive mesh does not exceed the quality of the adaptive BCC lattice of Labelle and Shewchuk [13]. Our results are presented in Section 6.3. We postulate that the formulation of our algorithm overconstrains the vertices and topology of the mesh, leading to the lack of quality despite numerous optimization steps. Future work could explore additional strategies based on conversion of octrees into A15-based meshes. For now, we conclude that the highest-quality adaptive lattice available is the BCC-based adaptive lattice.

Chapter 5

Feature Matching

5.1 Algorithm

Our second avenue of improvement to isosurface stuffing is the inclusion of feature matching operations that attempt to include sharp corners and creases of the input surface in the output mesh. Standard isosurface stuffing guarantees accurate matching of the smooth regions of the input surface, while rounding off sharp features. Our approach is to start with the smoothed-off mesh, and then resolve the sharp features that have been missed by moving vertices onto them. We will refer to this process as "snapping," and vertices that have been moved onto a feature are referred to as "snapped vertices."

In addition to the usual signed distance field representing the surface we wish to reproduce, we include as input a list of curves and points indicating sharp features on the surface.
Figure 5.1 shows the set of sharp features we used with the Fandisk model.

Figure 5.1: Feature points (black) and curves (pink) of the Fandisk model.

Our feature-matching meshing algorithm can be summarized as follows:

1. Perform isosurface stuffing using the signed distance field to produce an initial tetrahedral mesh that matches the input except at sharp features.

2. For each endpoint of a feature in the input, find the closest vertex on the boundary of the mesh and snap it to the feature endpoint.

3. For each feature curve, find a path through the boundary of the mesh between the vertices now situated at each of the feature's endpoints. Snap each vertex along the path onto the feature.

4. Perform a round of vertex smoothing on the mesh to restore element quality. Vertices moved onto features are constrained to those features, and vertices on the mesh boundary are constrained to lie on the isosurface.

As before, we assume an isotropic domain, so that the optimal tetrahedron is equilateral with all dihedral angles equal. We use the normalized, inverted tetrahedral aspect ratio from Shewchuk [19] as our quality measure for the purposes of vertex snapping and smoothing. Signed volume and orientation of tetrahedra are computed using Shewchuk's geometric predicates [17] to ensure accuracy and robustness.

Figure 5.2 shows a before-and-after comparison of the Fandisk model meshed with and without our feature matching algorithm.

5.2 Feature Endpoints

Assuming that the initial tetrahedral lattice is of sufficiently high resolution, the closest vertex to each feature endpoint should be unique; in other words, a mesh vertex should not be the closest vertex to more than one feature endpoint. Thus, step (2) of our algorithm is very simple: find the closest mesh vertex to each feature endpoint and snap it to that endpoint.

Practically speaking, if two or more feature endpoints do end up in close enough proximity that they are competing for the same vertex, two options are available.
First, the algorithm may fail gracefully, reporting that additional resolution is required to correctly resolve the given features. Alternatively, the offending feature endpoints can be "merged" into a single endpoint, and the closest vertex snapped to that merged endpoint. That vertex is then treated as the de facto endpoint of the feature for the remainder of the algorithm's execution, even though it may no longer be at the same location as the endpoint given as input.

Figure 5.2: Fandisk meshed with and without feature matching.

5.3 Feature Paths

Next, we find paths through the boundary of the mesh between the endpoints of each feature. Each vertex along such a path is snapped onto the closest point on the feature. The final result is that the vertices and edges of the path follow the feature, thus including it in the output tetrahedral mesh. Optimally, the path chosen to snap to the feature should follow the feature's shape as closely as possible. We can accomplish this by treating the problem as a shortest-path problem. First, set the weight of each vertex to be the distance from the vertex to the feature. The problem is then to find the path that minimizes the sum of vertex weights, which can be solved using Dijkstra's algorithm. This process is illustrated in Figure 5.3 as a two-dimensional view of a feature curve and the nearby mesh boundary.

Figure 5.3: Example of feature curve matching.

It is worth noting, however, that finding the optimal path is not as simple if we wish to avoid creating low-quality tetrahedra. For example, snapping three vertices of the same tetrahedron onto a feature that is a straight line instantly makes that tetrahedron degenerate. Furthermore, from this scenario we can see that the decision to add a candidate vertex to a path in the search space will depend on the previous vertices of that path.
A vertex that inverts a tetrahedron when included in one path may be perfectly acceptable when included in a different path. This dependence between the previous path and vertex admissibility is not handled by traditional shortest-path graph algorithms, such as Dijkstra's, that guarantee an optimal result.

However, in this application we do not believe that optimality of the path is a necessary condition for success; any path that is sufficiently close to optimal and does not reduce element quality too much will serve. As such, an algorithm such as greedy best-first search is probably enough in most cases. Our implementation uses a modified Dijkstra algorithm that does not add vertices to the search front if adding them would deform a tetrahedron excessively. This does not guarantee optimality, but has proven to give acceptable quality in our experiments.

Our algorithm always first seeks a close-to-optimal feature-following path between endpoints that does not reduce tetrahedron quality below a certain threshold, and also does not include vertices that have already been snapped to other features. This can be a very restrictive set of constraints on the path-finding search algorithm, and will often fail for non-trivial inputs. If it does fail, we try again while relaxing the constraint that the path must not use any vertices that have already been snapped. If this also fails, we try one last time with the tetrahedron quality constraint removed. If all attempts to find a path fail, the feature is declared unresolvable given our constraints, and we move on to the next feature. If a path was found, we snap each vertex on that path to the feature. If we had to relax our constraints to find the path, however, not every vertex will be admissible to snap; vertices already snapped to other features are not snapped again, and any snapping operation that would degrade tetrahedron quality below a given threshold is not performed.
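The constrained path search can be sketched as a modified Dijkstra that refuses to expand inadmissible vertices. This is an illustrative approximation rather than our implementation (the real admissibility test depends on the path taken so far, as discussed above); `neighbours`, `weight`, and `admissible` are hypothetical callbacks:

```python
import heapq

def feature_path(start, goal, neighbours, weight, admissible):
    """Find a low-cost vertex path from start to goal, minimizing the
    sum of per-vertex weights (e.g., distance to the feature curve).
    Vertices failing `admissible` are never added to the search front."""
    dist = {start: weight(start)}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == goal:
            path = [v]
            while v in prev:
                v = prev[v]
                path.append(v)
            return path[::-1]
        if d > dist.get(v, float("inf")):
            continue  # stale heap entry
        for n in neighbours(v):
            if n != goal and not admissible(n):
                continue
            nd = d + weight(n)
            if nd < dist.get(n, float("inf")):
                dist[n] = nd
                prev[n] = v
                heapq.heappush(heap, (nd, n))
    return None  # no admissible path between the endpoints
```

Returning `None` corresponds to a failed attempt, after which the constraints encoded in `admissible` can be relaxed and the search retried.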
This quality threshold should at minimum prevent snapping operations that would invert tetrahedra; this is the threshold used in our experiments. More conservative thresholds could prevent snapping operations that would increase the aspect ratio of a tetrahedron above, for example, 25.

We note that a major limitation of our algorithm appears at feature endpoints shared by multiple features. The surface valence of the mesh vertex snapped to a particular endpoint is bounded by the nature of the isosurface stuffing process, and has no relationship to the number of feature curves incident to that point. It is easy to construct inputs where the number of features incident to a particular endpoint exceeds the valence of the closest vertex, making it impossible to resolve all of those features fully without changing the topology of the mesh. Even in cases where the valence of the vertex is greater than the number of incident features, the specifics of a particular scenario and the order in which features are resolved can easily lead to "stranded" features with no unused vertices left to form a path to the endpoint. Features can also be stranded due to the minimum quality threshold imposed on the algorithm. Areas with a high density of features are where tetrahedron quality tends to degrade the furthest, and such areas occur naturally around endpoints shared by multiple features. This has led us to observe many features that are perfectly resolved on their interiors, but missing one or two edges right beside their endpoints. We leave it as future work to explore methods for changing mesh topology to fully resolve features at high-incidence feature endpoints.

5.4 Mesh Smoothing

Once features have been resolved as well as possible given our quality and topology constraints, we perform mesh smoothing operations to improve the quality of the mesh elements as much as possible.
Any schedule of mesh optimization operations will do, but our experiments have shown that a simplified local vertex smoothing operation generally suffices. The most important consideration is that vertices be correctly constrained during the optimization process; vertices snapped to features must remain on those features, and vertices on the boundary must be constrained to remain on the isosurface.

The smoothing algorithm we implemented is not particularly sophisticated. A small cube around the current vertex position is created, with side lengths proportional to the tetrahedral mesh resolution. A random sampling of points within this cube is then selected and projected onto the constrained space of the vertex (the constrained space is described in more detail below). For each sample point, the quality of each tetrahedron containing the current vertex is calculated as if the vertex were moved to the sample point. The sample point that yields the best local tetrahedra is chosen. Finally, if moving the vertex to the chosen sample point improves the quality of its incident tetrahedra, the vertex is moved. This local optimization scheme ensures that no tetrahedron is ever made worse by vertex smoothing, an approach referred to as "smart" smoothing by Freitag and Ollivier-Gooch [10]. Smoothing operations are repeated until either mesh quality ceases to improve or a fixed maximum number of iterations is reached.

For a boundary vertex, each sample point is constrained to the isosurface. This is done by first finding the numerical gradient of the signed distance field at the vertex, and using it to project each sample point onto the plane tangent to the isosurface at the vertex position. We then compute the gradient of the signed distance field at each sample point and do a line search along that gradient until a point at or near the isosurface is found. If the numerical gradient is zero at the sample point, it is discarded.
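The sampling loop for a single vertex can be sketched as follows; `project` and `local_quality` are hypothetical callbacks for the constraint projection and the worst-adjacent-tetrahedron quality, respectively:

```python
import random

def smart_smooth_vertex(pos, cube_size, project, local_quality, n_samples=32):
    """One "smart" smoothing step for a single vertex.

    `project(p)` maps a candidate point onto the vertex's constrained
    space (isosurface, feature curve, or fixed endpoint); it may return
    None to discard the sample. `local_quality(p)` returns the quality of
    the worst tetrahedron incident to the vertex when placed at p. The
    vertex only moves if the best sample improves on the current position,
    so no tetrahedron is ever made worse.
    """
    best_p, best_q = pos, local_quality(pos)
    for _ in range(n_samples):
        candidate = tuple(c + random.uniform(-0.5, 0.5) * cube_size
                          for c in pos)
        candidate = project(candidate)
        if candidate is None:
            continue  # e.g., zero SDF gradient at the sample
        q = local_quality(candidate)
        if q > best_q:
            best_p, best_q = candidate, q
    return best_p
```

The sample count and cube scaling here are assumptions of the sketch; the text above only requires that the cube side lengths be proportional to the mesh resolution.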
A vertex snapped onto a feature is similarly constrained to that feature. Sample points are first projected onto the line tangent to the feature at the vertex position, then projected onto the feature itself. Vertices snapped to feature endpoints are the easiest case; they are fully constrained and should not be moved at all.

Our mesh smoothing algorithm was designed for simplicity of implementation rather than efficiency or effectiveness. We are confident that any number of more sophisticated mesh improvement algorithms, such as those used by Freitag and Ollivier-Gooch [10] and Klingner and Shewchuk [11], could be used with even better results. The key insight from our experiments, however, has been that snapping vertices onto sharp features degrades element quality but preserves the inherently well-structured mesh produced by isosurface stuffing. Therefore, only very simple mesh smoothing operations are required to restore element quality to acceptable levels.

Chapter 6

Results

6.1 Uniform A15 Isosurface Stuffing

We tested our algorithms on a small set of inputs that included both smooth surfaces and surfaces with sharp features. Each surface began as a triangle mesh and was then converted to a signed distance field using the fast sweeping method described by Zhao [30]. Each input was meshed using several different lattice resolutions (h) for the sake of comparison. We provide representative results of these tests below.

We first tested our version of isosurface stuffing augmented with the uniform A15 tetrahedral lattice. Table 6.1 shows mesh quality results for smooth inputs that require no feature matching. We provide minimum and maximum dihedral angles, as well as the maximum aspect ratio. In each case our results show acceptable angle bounds for all of our inputs, with no dihedral angles smaller than 13° and none larger than 152°.

Figure 6.1: Cutaway view of Sphere input, meshed with h = 0.2.

Table 6.1: Mesh quality for smooth inputs

Input   h     Min. Dihed.  Max. Dihed.  Max. Aspect  Time
Sphere  0.2   19.90°       126.12°      4.78         0.001s
Sphere  0.1   18.59°       137.11°      6.09         0.01s
Bunny   3.0   13.47°       151.42°      9.42         0.03s
Bunny   1.0   14.34°       147.33°      8.73         0.67s
Dragon  0.02  13.94°       148.46°      9.70         0.04s
Dragon  0.01  13.35°       150.29°      10.51        0.33s

It is interesting to note that higher resolutions do not necessarily result in better-quality tetrahedra.

The Dragon model is also used by Labelle and Shewchuk [13] in their experiments, allowing for a direct comparison between our methods. The Dragon model meshed with a uniform BCC lattice gives a minimum dihedral angle of 14.9° and a maximum dihedral angle of 157.5° (aspect ratio data is not provided). Using the A15 lattice, we produce a mesh with minimum dihedral angle 13.94° and maximum dihedral angle 148.46°, an improvement of roughly nine degrees on the maximum dihedral angle. A more marked improvement is observed in the dihedral angle histograms of the two meshes; the A15 dragon (seen in Figure 6.2) has the vast majority of its angles between 55° and 80°, whereas the BCC dragon has a much wider distribution, with most angles between 45° and 100°.

Figure 6.2: Dragon input, h = 0.02, with dihedral angle histogram.

We also applied the mesh smoothing algorithm described in Section 5.4 to the meshes in Table 6.1. The results can be seen in Table 6.2, and are quite striking. After vertex smoothing, no dihedral angle is smaller than 20°, and none is larger than 127°. This speaks to the fact that isosurface stuffing, particularly when using our acute lattice, produces a mesh of very high quality. Even if some poor dihedral angles exist near the mesh boundary, the structure of the mesh is such that they can usually be improved significantly by simple smoothing operations.

Figure 6.3: Bunny input, h = 1.0, with dihedral angle histogram.
This optimization step is not cheap, however, and can take several minutes to execute for more complex meshes (admittedly, our optimization code could likely be made much faster, for example by using gradient descent or by restricting optimization to take place only near the boundary).

Table 6.2: Smooth inputs with optimization

Input   h     Min. Dihed.  Max. Dihed.  Max. Aspect  Time (Total)
Sphere  0.2   30.18°       110.65°      3.19         0.1s
Sphere  0.1   28.20°       115.70°      3.43         0.86s
Bunny   3.0   25.64°       126.35°      4.05         8.15s
Bunny   1.0   22.51°       125.14°      4.21         414.49s
Dragon  0.02  20.13°       123.04°      3.91         12.56s
Dragon  0.01  20.70°       123.62°      3.85         195.58s

6.2 Feature Matching

Figure 6.4: Cube, h = 0.1, after smoothing. Cutaway view at right.

Next we tested our feature matching algorithm from Chapter 5 on inputs with nonsmooth features. Uniform A15 isosurface stuffing was used as a starting point for vertex snapping operations.

Table 6.3: Nonsmooth inputs with feature matching, after vertex snapping

Input     h     Min. Dihed.  Max. Dihed.  Max. Aspect  Time
Cube      0.25  26.57°       109.47°      4.47         0.001s
Cube      0.1   26.57°       109.47°      3.67         0.04s
Block     1.0   0°           180°         n/a          0.87s
Block     0.5   6.12°        164.65°      17.53        6.19s
Joint*    0.03  3.20°        168.27°      37.06        1.93s
Joint*    0.02  3.25°        171.42°      46.87        6.09s
Fandisk*  0.1   0.78°        175.38°      132.16       2.55s
Fandisk*  0.05  0.93°        178.26°      185.36       22.7s

*features not all resolved

Table 6.4: Nonsmooth inputs with feature matching, after smoothing

Input     h     Min. Dihed.  Max. Dihed.  Max. Aspect  Time (Total)
Cube      0.25  31.08°       117.55°      3.66         0.01s
Cube      0.1   26.79°       116.11°      3.23         0.13s
Block     1.0   13.45°       146.74°      7.29         2.19s
Block     0.5   15.60°       127.08°      4.59         19.5s
Joint*    0.03  16.87°       132.47°      5.01         7.51s
Joint*    0.02  20.51°       126.22°      4.71         25.09s
Fandisk*  0.1   1.61°        173.95°      88.15        4.49s
Fandisk*  0.05  15.85°       145.03°      6.35         48.64s

*features not all resolved
The feature points and curves to match were provided by an automated process that analyzed the input surface mesh, computing the curvature at each vertex and the face angle across each edge; vertices and edges whose curvature exceeds a particular threshold (tuned by hand for each input) are marked as features to be matched.

Mesh quality results are given in Table 6.3 and Table 6.4. We have included quality statistics for the meshes before and after the smoothing step. This serves to show that, although snapping vertices onto features produces low-quality elements, smoothing is usually sufficient to restore the lost quality.

Figure 6.5: Joint, h = 0.03, after smoothing. Cutaway view at right.

The inputs we tested on are designed to have features that get progressively more difficult to resolve. The Cube (Figure 6.4) is the simplest input, and the Fandisk (Figure 6.7) is the most complicated, with many ridges and corners spaced relatively close together. The relative difficulty of the inputs is reflected in the results; tetrahedral quality declines as the input features become more complicated. The worst case by far is the Fandisk meshed with resolution h = 0.1, which yields dihedral angles less than 2° and greater than 173°.

Figure 6.6: Block, h = 0.05, after smoothing. Cutaway views on bottom.

We note, however, that increasing the resolution to h = 0.05 gives a much better mesh, suggesting that the uncharacteristically poor result for the h = 0.1 mesh is probably due to under-resolving the input surface.

We denote with asterisks those meshes that did not successfully resolve all features in the input.

Figure 6.7: Fandisk, h = 0.03, after smoothing. Cutaway view at right.

As described above, this occurred when no vertex could be snapped onto a feature without creating inverted or degenerate tetrahedra. In our tests, this happened with two or three features on the most difficult inputs.
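The automated feature-detection pass described above, which thresholds the face angle across each surface edge, can be sketched as follows. This is our own illustration; the 40° default stands in for the per-input hand-tuned threshold, and the curvature test at vertices is omitted:

```python
import math
import numpy as np

def sharp_edges(vertices, triangles, angle_threshold_deg=40.0):
    """Mark surface-mesh edges whose two faces meet at a crease sharper
    than the threshold (i.e., their normals differ by more than it)."""
    vertices = np.asarray(vertices, dtype=float)
    normals = {}
    edge_faces = {}
    for t, (a, b, c) in enumerate(triangles):
        n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        normals[t] = n / np.linalg.norm(n)
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(t)
    sharp = []
    for edge, faces in edge_faces.items():
        if len(faces) != 2:
            continue  # boundary or non-manifold edge: skip
        cosang = np.clip(np.dot(normals[faces[0]], normals[faces[1]]), -1.0, 1.0)
        if math.degrees(math.acos(cosang)) > angle_threshold_deg:
            sharp.append(edge)
    return sharp
```

Chaining the returned edges into curves, and taking their endpoints and isolated sharp vertices as feature points, would complete the input the snapping stage needs.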
The algorithm still snaps as many vertices as possible, resulting in partially matched features.

Overall, our results show that our algorithm preserves sharp features much more successfully than isosurface stuffing on its own, while still giving tetrahedra of reasonable quality. Unfortunately, we cannot guarantee that all features in the input will be resolved, because the algorithm fails gracefully in cases where mesh quality would be significantly compromised by our snapping operations. Although this makes our method unsuitable for some engineering applications, we believe it is acceptable in computer animation, where it is desirable to trade total accuracy for improved efficiency; in this case the gains in speed come from using higher-quality tetrahedra.

6.3 Adaptive A15 Lattice

We present here the quality statistics of adaptive lattices generated with the algorithms described in Chapter 4. In our experiments, we generated adaptive lattices based on a subset of the input surfaces used in Section 6.1 and Section 6.2. Because adaptivity permits higher-resolution tetrahedra at the input surface, the h sizes used are smaller than in the uniform cases.

The quality results presented are from the lattices themselves, before isosurface stuffing is performed. Because both algorithms guarantee that only regular A15 tiles intersect the input surfaces, the worst angles generated by warping and cutting operations will be the same as seen above. We give quality statistics of the lattices so as to distinguish the poor-quality tetrahedra that result from the adaptivity algorithm from those produced by the isosurface stuffing process.

Quality results for the A15 point lattice algorithm can be found in Table 6.5, and an example lattice can be seen in Figure 6.8. The results clearly show identical levels of quality for several different meshes.
This occurs because of the tiled nature of the adaptive point lattice; particular configurations of tiles occur in each of the test cases, leading to identical Delaunay tetrahedralizations in those regions, and thus identical quality measures for the worst-quality tetrahedra in each mesh.

Table 6.5: Quality of lattices produced with the A15 Point Lattice algorithm
Input    h      Min. Dihed.  Max Dihed.  Max Aspect  Time
Cube     0.05   2.55°        174.87°     47.24       0.51s
Sphere   0.1    0°           180°        n/a         0.47s
Sphere   0.05   0°           180°        n/a         2.57s
Bunny    2.0    2.55°        174.87°     47.24       12.98s
Block    0.5    2.55°        174.87°     47.24       7.22s
Block    0.25   2.55°        174.87°     47.24       36.71s
Joint    0.02   2.55°        174.87°     47.24       3.59s
Joint    0.01   0°           180°        n/a         20.51s
Fandisk  0.05   0°           180°        n/a         11.73s
Fandisk  0.025  0°           180°        n/a         67.74s

Table 6.6: Quality of lattices produced with the Red-Green A15 algorithm
Input    h      Min. Dihed.  Max Dihed.  Max Aspect  Time
Cube     0.05   10.924°      160.42°     12.19       1.82s
Sphere   0.1    9.97°        160.71°     12.35       1.89s
Sphere   0.05   6.38°        164.9°      18          26.3s
Bunny    2.0    0°           180°        n/a         204.33s
Block    0.5    1.03°        178.32°     162.28      120.51s
Block    0.25   0°           180°        n/a         1,882.08s
Joint    0.02   10.41°       161.34°     13.83       19.66s
Joint    0.01   4.01°        171.8°      29.07       388.59s
Fandisk  0.05   0°           180°        n/a         122.44s
Fandisk  0.025  0°           180°        n/a         1,841.39s

The results for the red-green A15 algorithm are in Table 6.6, and an example lattice is shown in Figure 6.9 (note that here the lattice is shown filling the entire bounding box of the domain, whereas in Figure 6.8 all tetrahedra outside the domain have been trimmed away). Due to the amount of optimization performed in this algorithm, each generated mesh is unique in terms of quality, and no pattern emerges as in the point lattice algorithm. This also explains the much longer running times of the algorithm.

Figure 6.8: Adaptive lattice (Bunny) from the A15 Point Lattice algorithm.

As has already been stated, our results show that these algorithms do not reliably produce high-quality tetrahedra.
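For reference, the regular ("red") 1:8 refinement underlying the red-green algorithm splits a tetrahedron at its six edge midpoints, following Bey [3]. A minimal sketch (our own illustration, not the thesis implementation):

```python
import numpy as np

def red_refine(tet_pts):
    """Bey's regular 1:8 ('red') refinement of one tetrahedron.
    tet_pts: four corner positions. Returns eight (4,3) child
    tetrahedra that cover the parent exactly."""
    p = [np.asarray(q, dtype=float) for q in tet_pts]
    m = {(i, j): 0.5 * (p[i] + p[j]) for i in range(4) for j in range(i + 1, 4)}
    x0, x1, x2, x3 = p
    m01, m02, m03 = m[(0, 1)], m[(0, 2)], m[(0, 3)]
    m12, m13, m23 = m[(1, 2)], m[(1, 3)], m[(2, 3)]
    children = [
        # Four corner tetrahedra (similar copies of the parent).
        (x0, m01, m02, m03), (m01, x1, m12, m13),
        (m02, m12, x2, m23), (m03, m13, m23, x3),
        # The interior octahedron, split along the m02-m13 diagonal.
        (m01, m02, m03, m13), (m01, m02, m12, m13),
        (m02, m03, m13, m23), (m02, m12, m13, m23),
    ]
    return [np.vstack(c) for c in children]

def tet_volume(c):
    """Unsigned volume of a (4,3) tetrahedron."""
    return abs(np.dot(c[1] - c[0], np.cross(c[2] - c[0], c[3] - c[0]))) / 6.0
```

The four corner children are similar to the parent, while the shape of the four octahedral children depends on which octahedron diagonal is chosen; managing that choice, plus the "green" transition elements at interfaces, is where the quality bookkeeping of a red-green scheme lives.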
Furthermore, they do not improve on the adaptive BCC lattice algorithm of Labelle and Shewchuk [13], which has maximum dihedral angles of 120°.

Figure 6.9: Adaptive lattice (Block) from the Red-Green A15 algorithm.

Chapter 7
Conclusion

In summary, we have presented two improvements to the isosurface stuffing method of tetrahedral mesh generation. The first is the use of an acute tetrahedral lattice known as the A15 lattice, which produces better-quality tetrahedra, particularly in the interior of the mesh. The second is a feature-matching algorithm that can restore most, if not all, of an input's sharp creases without sacrificing too much element quality.

Use of the A15 lattice in place of the BCC lattice is a relatively simple way of achieving large gains in overall quality in the uniform isosurface stuffing algorithm. Because the initial lattice has higher-quality elements, the tetrahedra in the interior of the final mesh are necessarily of higher quality. Furthermore, the degradation of quality at the domain boundary caused by warping and cutting operations is mitigated by the higher-quality initial elements. We have established that A15 isosurface stuffing gives better-quality results in practice than standard BCC isosurface stuffing. An important future task will be to establish the worst-case quality bounds of A15 isosurface stuffing, which will determine whether the algorithm is provably better in all cases.

We have successfully applied the A15 lattice to the generation of uniform tetrahedral meshes; the natural next step is to use it to create high-quality adaptive meshes. We explored numerous techniques for doing so, including tetrahedral subdivision, Delaunay tetrahedralization, and octree techniques. Using these techniques, we designed two algorithms for generating adaptive meshes based on A15.
One was based on finding the Delaunay tetrahedralization of an adaptive A15 point lattice; the other uses red-green subdivision and tetrahedral edge flipping operations to create conformal interfaces between A15 tiles of different sizes. Unfortunately, neither of our algorithms is able to consistently produce meshes with quality elements. There remains much potential for future work to discover high-quality adaptive tetrahedral meshing algorithms, either by pursuing additional combinations of the techniques mentioned above or through novel research into a customized A15-based adaptive scheme. For the time being, however, we conclude that the adaptive BCC lattice presented as part of the original isosurface stuffing algorithm is the best adaptive scheme currently available.

Finally, we have introduced an algorithm to restore sharp features of an input surface that are smoothed away by the isosurface stuffing process. First, feature points and the endpoints of feature curves are resolved by snapping vertices of the mesh onto them. Next, paths through the boundary edges of the mesh are found between the endpoints of each feature curve, with the paths chosen to follow the curves as closely as possible. Finally, the feature curves are resolved by snapping as many of the vertices on the paths to the curves as possible. Vertices are not snapped if doing so would degrade quality below a given threshold, allowing control over the trade-off between feature matching and quality. Our algorithm does not guarantee that all sharp features will be resolved in the output mesh. Nonetheless, it greatly expands the set of inputs that can be effectively meshed using an isosurface stuffing approach to include many with sharp ridges and corners.

Future work in this area would focus on guaranteeing that all features are matched, while still preserving quality as much as possible.
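The quality-gated snapping described above can be sketched as an accept-or-roll-back operation. This is our own illustration, using strictly positive element volume as a stand-in for whatever quality floor is chosen:

```python
import numpy as np

def signed_volume(a, b, c, d):
    """Signed volume of tetrahedron (a, b, c, d)."""
    return np.dot(b - a, np.cross(c - a, d - a)) / 6.0

def try_snap(vertices, tets, v, target, min_volume=1e-8):
    """Move vertex v onto a feature point `target` only if every
    incident tetrahedron keeps a safely positive volume (i.e. no
    inverted or near-degenerate elements). Returns True if accepted."""
    old = vertices[v].copy()
    vertices[v] = np.asarray(target, dtype=float)
    incident = [t for t in tets if v in t]
    ok = all(signed_volume(*(vertices[i] for i in t)) > min_volume
             for t in incident)
    if not ok:
        vertices[v] = old  # reject: the snap would compromise quality
    return ok
```

A real implementation would test a dihedral-angle or aspect-ratio threshold rather than bare volume, but the structure, tentatively move, check the incident elements, and roll back on failure, is the same.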
Operations that modify the mesh topology should also be explored as a means of improving on our feature-matching results, since our algorithm only modifies vertex positions. It seems likely that an approach making use of the entire initial lattice, not just the already-processed isosurface stuffing mesh, similar to the work done by Bronson et al. [5], could also yield significant improvements.

Bibliography

[1] P. Alliez, D. Cohen-Steiner, M. Yvinec, and M. Desbrun. Variational tetrahedral meshing. ACM Transactions on Graphics, 24(3):617–625, July 2005.
[2] M. Bern, D. Eppstein, and J. Gilbert. Provably good mesh generation. Journal of Computer and System Sciences, 48(3):384–409, 1994.
[3] J. Bey. Tetrahedral grid refinement. Computing, 55(4):355–378, 1995.
[4] R. Bridson. Fluid Simulation. AK Peters, 2008.
[5] J. R. Bronson, J. A. Levine, and R. T. Whitaker. Lattice cleaving: Conforming tetrahedral meshes of multimaterial domains with bounded quality. In Proceedings of the 21st International Meshing Roundtable, pages 191–209. Springer, 2013.
[6] D. Burkhart, B. Hamann, and G. Umlauf. Adaptive and feature-preserving subdivision for high-quality tetrahedral meshes. In Computer Graphics Forum, volume 29, pages 117–127. Wiley Online Library, 2010.
[7] S.-W. Cheng, T. K. Dey, H. Edelsbrunner, M. A. Facello, and S.-H. Teng. Sliver exudation. Journal of the ACM, 47(5):883–904, 2000.
[8] H. De Cougny and M. S. Shephard. Parallel refinement and coarsening of tetrahedral meshes. International Journal for Numerical Methods in Engineering, 46(7):1101–1125, 1999.
[9] H. Edelsbrunner and D. Guoy. An experimental study of sliver exudation. Engineering with Computers, 18(3):229–240, 2002.
[10] L. Freitag and C. Ollivier-Gooch. Tetrahedral mesh improvement using swapping and smoothing. International Journal for Numerical Methods in Engineering, 40(21):3979–4002, 1997.
[11] B. M. Klingner and J. R. Shewchuk. Aggressive tetrahedral mesh improvement.
In Proceedings of the 16th International Meshing Roundtable, pages 3–23, Seattle, Washington, Oct. 2007.
[12] L. Kobbelt. √3-subdivision. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pages 103–112. ACM Press/Addison-Wesley, 2000.
[13] F. Labelle and J. R. Shewchuk. Isosurface stuffing: Fast tetrahedral meshes with good dihedral angles. ACM Transactions on Graphics, 26(3), July 2007.
[14] S. A. Mitchell and S. A. Vavasis. Quality mesh generation in three dimensions. In Proceedings of the Eighth Annual Symposium on Computational Geometry, SCG '92, pages 212–221, New York, NY, USA, 1992. ACM.
[15] N. Molino, R. Bridson, and R. Fedkiw. Tetrahedral mesh generation for deformable bodies. In Proc. Symposium on Computer Animation, 2003.
[16] P. Möller and P. Hansbo. On advancing front mesh generation in three dimensions. International Journal for Numerical Methods in Engineering, 38(21):3551–3569, 1995.
[17] J. R. Shewchuk. Adaptive precision floating-point arithmetic and fast robust geometric predicates. Discrete & Computational Geometry, 18(3):305–363, 1997.
[18] M. S. Shephard and M. K. Georges. Automatic three-dimensional mesh generation by the finite octree technique. International Journal for Numerical Methods in Engineering, 32(4):709–749, 1991.
[19] J. R. Shewchuk. What is a good linear finite element? Interpolation, conditioning, anisotropy, and quality measures. In Eleventh International Meshing Roundtable, pages 115–126, 2002.
[20] J. R. Shewchuk. Tetrahedral mesh generation by Delaunay refinement. In Proceedings of the Fourteenth Annual Symposium on Computational Geometry, pages 86–95. ACM, 1998.
[21] H. Si. TetGen, a quality tetrahedral mesh generator and three-dimensional Delaunay triangulator, January 2006.
[22] J. M. Sullivan. The geometry of bubbles and foams. In Foams and Emulsions, pages 379–402. Springer, 1999.
[23] J. Teran, N. Molino, R. Fedkiw, and R. Bridson.
Adaptive physics based tetrahedral mesh generation using level sets. Engineering with Computers, 21(1):2–18, 2005.
[24] A. Üngör. Tiling 3D Euclidean space with acute tetrahedra. In Proc. 13th Canadian Conference on Computational Geometry (CCCG '01), pages 169–172, 2001.
[25] L. Velho, J. de Miranda Gomes, and D. Terzopoulos. Implicit manifolds, triangulations and dynamics. Neural, Parallel & Scientific Computations, 5(1-2):103–120, 1997.
[26] M. Wicke, D. Ritchie, B. M. Klingner, S. Burke, J. R. Shewchuk, and J. F. O'Brien. Dynamic local remeshing for elastoplastic simulation. ACM Transactions on Graphics, 29(4):49, 2010.
[27] B. Williams. Fluid surface reconstruction from particles. Master's thesis, The University of British Columbia, 2008.
[28] M. Yerry and M. Shephard. A modified quadtree approach to finite element mesh generation. IEEE Computer Graphics and Applications, 3(1):39–46, 1983.
[29] M. A. Yerry and M. S. Shephard. Automatic three-dimensional mesh generation by the modified-octree technique. International Journal for Numerical Methods in Engineering, 20(11):1965–1990, 1984.
[30] H. Zhao. A fast sweeping method for eikonal equations. Mathematics of Computation, 74(250):603–627, 2005.

Appendix A
A15 Tile

We provide here a listing of the geometry of the modified A15 tile used in our implementations. It is a cleaned-up version of the tile described in Williams [27], which is in turn derived using the methodology of Üngör [24].
However, we position our tile such that the corners of the cube of side length 4 are all represented in the lattice; this makes it easier to overlay tiles on the cells of an octree.

Our tile contains 27 vertices:

 0: ( 0, 0, 4)    7: ( 2, 0, 0)   14: ( 3, 2, 3)   21: (-1, 4, 2)
 1: ( 2, 0, 4)    8: ( 4, 0, 0)   15: (-1, 2, 1)   22: ( 1, 5, 2)
 2: ( 4, 0, 4)    9: (-1, 2, 5)   16: ( 1, 2, 0)   23: ( 3, 4, 2)
 3: (-1, 0, 2)   10: ( 1, 2, 4)   17: ( 3, 2, 1)   24: ( 0, 4, 0)
 4: ( 1, 1, 2)   11: ( 3, 2, 5)   18: ( 0, 4, 4)   25: ( 2, 4, 0)
 5: ( 3, 0, 2)   12: (-1, 2, 3)   19: ( 2, 4, 4)   26: ( 4, 4, 0)
 6: ( 0, 0, 0)   13: ( 1, 3, 2)   20: ( 4, 4, 4)

Figure A.1: A15 tile overlaid on cube (thick black lines).

The 46 tetrahedra that connect the vertices are:

 0: (12, 4, 0, 3)   12: ( 6, 4,15, 3)   24: (18,12,10, 9)   36: (15,24,13,16)
 1: ( 0,12, 9,10)   13: (15, 4,12, 3)   25: (13,19,18,10)   37: (24,15,13,21)
 2: ( 4,12, 0,10)   14: (12,13, 4,15)   26: (13,12,18,21)   38: (24,25,22,13)
 3: ( 1, 4, 0,10)   15: (16,15, 6, 4)   27: (12,13,18,10)   39: (24,22,21,13)
 4: (13,12, 4,10)   16: (13,15,16, 4)   28: (18,22,13,21)   40: (25,24,15,13)
 5: (14, 1, 2,11)   17: ( 4,17, 7,16)   29: (19,13,14,10)   41: (25,22,13,23)
 6: ( 1,14, 4,10)   18: (17, 4, 7, 5)   30: (19,11,10,14)   42: (13,17,16,25)
 7: (14, 1, 4,10)   19: (17,13,16, 4)   31: (11,19,20,14)   43: (25,13,17,23)
 8: (14,13, 4,10)   20: ( 4,14, 5,17)   32: (14,19,20,23)   44: (13,14,17,23)
 9: ( 1,14, 2, 5)   21: ( 8, 5,17, 7)   33: (19,22,23,13)   45: (23,26,17,25)
10: ( 1,11,14,10)   22: (13,17,14, 4)   34: (13,19,14,23)
11: ( 6, 7,16, 4)   23: (22,19,18,13)   35: (13,12,21,15)

When overlaying tiles on an octree, it's important to know which vertices are shared by adjacent tiles. We found the easiest way to accomplish this was to establish a mapping between the A15 tile and a cubical, grid-like mesh with the same topology. This in turn allows a mapping from faces, edges, and corners of an octree cell to the vertices of the A15 tile.
The grid analogue is achieved by repositioning the vertices of the A15 tile as follows:

 0: (0, 0, 4)    7: (2, 0, 0)   14: (4, 2, 2)   21: (0, 4, 2)
 1: (2, 0, 4)    8: (4, 0, 0)   15: (0, 2, 0)   22: (2, 4, 2)
 2: (4, 0, 4)    9: (0, 2, 4)   16: (2, 2, 0)   23: (4, 4, 2)
 3: (0, 0, 2)   10: (2, 2, 4)   17: (4, 2, 0)   24: (0, 4, 0)
 4: (2, 0, 2)   11: (4, 2, 4)   18: (0, 4, 4)   25: (2, 4, 0)
 5: (4, 0, 2)   12: (0, 2, 2)   19: (2, 4, 4)   26: (4, 4, 0)
 6: (0, 0, 0)   13: (2, 2, 2)   20: (4, 4, 4)

Figure A.2: Mapping of A15 tile elements to cube.
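With the grid analogue, identifying vertices shared between adjacent tiles reduces to matching grid positions on touching cube faces. A sketch (our own illustration, assuming same-size neighbouring tiles):

```python
# Grid-analogue positions of the 27 A15 tile vertices, from the listing
# above: every vertex snaps to the even lattice {0, 2, 4}^3 of a cube of
# side 4, so adjacent tiles share vertices wherever their cubes touch.
GRID = {
    0: (0, 0, 4), 1: (2, 0, 4), 2: (4, 0, 4), 3: (0, 0, 2), 4: (2, 0, 2),
    5: (4, 0, 2), 6: (0, 0, 0), 7: (2, 0, 0), 8: (4, 0, 0), 9: (0, 2, 4),
    10: (2, 2, 4), 11: (4, 2, 4), 12: (0, 2, 2), 13: (2, 2, 2), 14: (4, 2, 2),
    15: (0, 2, 0), 16: (2, 2, 0), 17: (4, 2, 0), 18: (0, 4, 4), 19: (2, 4, 4),
    20: (4, 4, 4), 21: (0, 4, 2), 22: (2, 4, 2), 23: (4, 4, 2), 24: (0, 4, 0),
    25: (2, 4, 0), 26: (4, 4, 0),
}

def face_vertices(axis, value):
    """Tile-vertex indices lying on one cube face of the grid analogue,
    e.g. face_vertices(0, 4) is the +x face, shared with the +x neighbour."""
    return sorted(i for i, g in GRID.items() if g[axis] == value)

def shared_pairs(axis):
    """Pair each vertex on this tile's + face with the - face vertex of
    the neighbouring tile occupying the same position in space."""
    lo, hi = face_vertices(axis, 0), face_vertices(axis, 4)
    match = {}
    for i in hi:
        target = list(GRID[i])
        target[axis] = 0  # the neighbour's - face coincides with our + face
        match[i] = next(k for k in lo if list(GRID[k]) == target)
    return match
```

Each cube face carries nine tile vertices in the grid analogue, so stitching two uniform tiles together merges exactly nine vertex pairs per shared face.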