Simplifying Terrain Models and Measuring Terrain Model Accuracy

by David Scott Andrews
B.Sc., University of British Columbia, 1993

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

The University of British Columbia
April 1996
© David Scott Andrews, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
Vancouver, Canada

Abstract

We describe a set of TIN simplification methods that enable the use of the triangulation hierarchy introduced by Kirkpatrick [Kir83] and modified by de Berg and Dobrindt [dBD95a, dBD95b]. This triangulation hierarchy can be used to form a terrain model combining areas with varying levels of detail. One variant of the delete simplification method formed simplifications with accuracy close to that of the greedy method. We also investigated different variables that can be used to measure the accuracy of our simplified terrain models. Although the use of derivative statistics did not significantly alter our evaluation of the performance of our simplification methods, we recommend that any future comparisons be aware of these alternative variables of surface characterization.
Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
2 Digital Terrain Models (DTMs)
  2.1 Terrain Characteristics
  2.2 Digital Line Graphs (DLGs)
  2.3 Digital Elevation Models (DEMs)
  2.4 Triangular Irregular Networks (TINs)
    2.4.1 Triangulations
  2.5 Comparison of DTMs
3 Work with Polygonal Lines
  3.1 Simplification of Polygonal Lines
    3.1.1 Douglas-Peucker Algorithm for Line Simplification
  3.2 Concavity Measures for Polygonal Lines
    3.2.1 Secant Concavity Measure
    3.2.2 Disk Concavity Measure
    3.2.3 Comparison of Concavity Measures
  3.3 Transferring Ideas to TINs
    3.3.1 Simplification
    3.3.2 Concavity Measures
4 Simplification of a Triangular Irregular Network (TIN)
  4.1 Hierarchical Triangulations
  4.2 Hierarchical TINs
    4.2.1 Measuring Vertex Importance
    4.2.2 Removing Vertices
  4.3 Delete Simplification Methods
    4.3.1 Deleting Vertices and Retriangulating Neighbourhoods
    4.3.2 Selecting Vertices
  4.4 Insert Simplification
  4.5 Rendering Terrain with Varying Levels of Detail
    4.5.1 Data Structures
    4.5.2 Rendering
5 Surface Accuracy
  5.1 Parametric Curves and Surfaces
    5.1.1 Parametric Curves
    5.1.2 Parametric Surfaces
  5.2 Computing Curvature
    5.2.1 Computing Curvature on a B-spline Surface
  5.3 Computing Correlation
    5.3.1 Using Correlation to Evaluate Surface Accuracy
  5.4 Choosing Study Areas
  5.5 The Experiment
6 Results
  6.1 Speed
    6.1.1 Select Times
    6.1.2 Delete Times
  6.2 Accuracy
    6.2.1 Elevation
    6.2.2 Mean Curvature
7 Conclusion
Bibliography

List of Tables

5.1 Study Areas

List of Figures

2.1 Mark's DTM typology
2.2 Digital Line Graph (DLG)
2.3 Digital Elevation Model (DEM)
2.4 Triangular Irregular Network (TIN)
2.5 Adding a vertex to a triangulation
2.6 Swapping a diagonal in a triangulation
2.7 Delaunay triangle
2.8 Delaunay triangulation
2.9 Voronoi diagram
2.10 Delaunay triangulation and Voronoi dual
3.1 Area covered by a line segment
3.2 Area covered by a polyline
4.1 Neighbourhood of a vertex
4.2 Subdividing a triangle into four children
4.3 Subdividing a triangle into three children
4.4 Links from triangles to their group polygon
4.5 Links from the polygon to its reject triangles
4.6 Link from the polygon to its accept vertex
4.7 Links from the polygon to its accept triangles
4.8 Only one of the three triangles in this polygon is in the queue. It is removed from the queue and rendered
4.9 All three of the triangles in this polygon are in the queue but the reject vertex is not accepted. All three triangles are removed from the queue and rendered
4.10 All three of the triangles in this polygon are in the queue and the reject vertex is accepted.
All three triangles are removed from the queue and five new triangles are inserted into the queue
4.11 The initial terrain model with 20002 triangular facets
4.12 A simplified model with 3360 triangular facets
4.13 A terrain model combining regions of varying levels of detail from the hierarchy of the initial terrain model and its simplifications. It has 5280 triangular facets
6.1 Time to select 20% of the vertices using the three delete methods
6.2 Time to delete 20% of the vertices using the three delete methods
6.3 Linear scale on the elevation correlation axis
6.4 Logarithmic scale on the elevation correlation axis
6.5 Elevation correlations using the degree method with two delete percentages
6.6 Elevation correlations using the normal method with two delete percentages
6.7 Elevation correlations using the volume method with two delete percentages
6.8 Comparing the elevation correlations of the insert method and the delete simplification methods
6.9 A terrain with higher elevation correlations and higher relief
6.10 Linear scale on the mean curvature correlation axis
6.11 Logarithmic scale on the mean curvature correlation axis
6.12 A terrain with better performance when using the normal method
6.13 Mean curvature correlations using the degree method with two delete percentages
6.14 Mean curvature correlations using the normal method with two delete percentages
6.15 Mean curvature correlations using the volume method with two delete percentages

Acknowledgements

I would like to thank my supervisor, Jack Snoeyink, for providing opportunity, enthusiasm and valuable support throughout my stay at UBC. I would also like to thank my second reader, David Kirkpatrick, and my student reader, Michael McAllister, for their insightful and constructive comments.
Thanks to my fellow office-mates and lab members, especially Kevin, Gordon, Paul and Kurt, who have made my time at UBC both educational and entertaining. Thanks to my family for their continual support and encouragement. And thanks to Annie Wong for her love and patience.

David Scott Andrews

The University of British Columbia
April 1996

Chapter 1

Introduction

There are many fields that work with models of terrain. As computers were incorporated into these fields, various Digital Terrain Models were proposed and developed. One popular terrain model, introduced in 1978, is the Triangular Irregular Network [PFLM78]. It is based on irregularly distributed surface-specific points. The selection of these points (as well as the triangulation criteria) has been well studied.

A single terrain model might not be appropriate for all users because of constraints on computing space and time. A solution is to simplify the model, providing a balance of size and detail to suit each individual. The user could select the best model from a set of precomputed simplifications, or they could create a new simplified model that meets their own requirements.

There exist many ways to simplify the model of a terrain. To compare the effectiveness of these methods, we need to examine their speed and their accuracy. Previous comparisons have used only the elevation error of the models to grade their performance. But other fields, such as geomorphometry (the study of landform) and the geometry of surfaces, indicate that there are other variables that can be used to characterize a terrain. The effect of a simplification on these variables should also be observed.

Chapter 2 describes and compares three Digital Terrain Models. The next chapter covers some of the work we did on simplifying polygonal lines and measuring the concavity of polygonal lines.
Chapter 4 describes various methods for Triangular Irregular Network simplification and Chapter 5 describes our surface accuracy measures. The results of our experiments are in Chapter 6 and our conclusions are in Chapter 7.

Chapter 2

Digital Terrain Models (DTMs)

A terrain is the physical features of a tract of land. Its topography is its physical or natural features and their structural relationships. If the earth's surface were regular, it could be perfectly modelled by mathematical functions. Unfortunately, it is continuous but irregular, requiring the use of models based on samples of the surface [Kum94]. A Digital Terrain Model (DTM) is a representation of terrain and topography. A DTM can be stored on a computer as a data structure and a set of associated procedures. Computer applications containing DTMs are used by people who specialize in terrains, such as geomorphologists, surveyors, photogrammetrists, cartographers and mathematicians [Mar78]. DTMs are also a vital component of Geographical Information Systems (GISs).

The first section of this chapter lists some of the terrain characteristics that can be obtained from a DTM. The next three sections describe three common DTMs: the Digital Line Graph (DLG), the Digital Elevation Model (DEM) and the Triangular Irregular Network (TIN). The final section compares these three DTMs.

2.1 Terrain Characteristics

The following paragraphs describe some of the problems whose solutions require DTMs. Obtaining various terrain characteristics from a DTM helps to form the solutions to these problems. The data structure and procedures of a DTM should support the computation of these terrain characteristics. Some sample characteristics are listed below.

• elevation
• slope
• aspect
• visibility
• drainage

A slope map is a map of the slope magnitude at each point of the terrain. Hammond [Ham64] notes that slope has a strong effect on land use.
One of the variables he uses to classify landform is the frequency of gentle slope. At his upper limit of eight percent, machine cultivation becomes difficult, the effects of erosion become troublesome and vehicle movement becomes impeded. A DTM should be able to compute the slope at a given point or generate a slope map for the complete surface.

De Berg and van Kreveld [dBvK93] define a height level map to be built from a DTM. This map can be used to answer path queries with height restrictions. Elevation has an effect on temperature. The effects of slope have been noted in the paragraph above. A path that satisfies elevation and slope constraints might be preferred to the Euclidean shortest distance path.

De Floriani, Marzano and Puppo [DFMP94] study line-of-sight visibility problems. Their example problem asks for the arrangement of a set of microwave transceiver stations in such a way that a signal transmitted from any station can be received by any other station. Since a signal propagates along a straight line and is interrupted by physical obstructions, the towers must be mutually visible. Other related line-of-sight problems include the location of radar, laser or sonar surveillance systems and the location of television transmitters. They use a discrete visibility map consisting of a set of candidate points with edges connecting points that are mutually visible.

The viewshed of a point on the terrain is the area of the terrain visible to that point. Fisher [Fis94] considers a variant of the binary viewshed. The smoke from a forest fire might be in the viewable area of an observation tower even if the fire is not in the line-of-sight. The same situation applies to the visual impact of a tall structure built beyond the horizon. His variant would encode the distance between the horizon and the surface.
The height of the smoke of a forest fire or the height of a building could be compared to this distance to obtain a more accurate viewable area than a simple line-of-sight viewshed.

Mark [Mar78] proposes a typology for DTM data structures with an initial classification based on how elevation is computed. His typology has been reproduced in Figure 2.1. Tabular or discrete data structures obtain the elevation by interpolating from the surrounding data. Mathematical or continuous data structures obtain the elevation by evaluating a modelling function. The difference between the two branches of his initial division is minor since interpolation can be considered as the evaluation of a function. He also notes that the digital data structure of a continuous function is represented by a discrete set of coefficients. We only use a subdivision of his typology, as the data structures of the three DTMs discussed below are all part of the tabular or discrete data structure division.

[Figure 2.1: Mark's DTM typology]

A discrete data structure requires an interpolation method. The interpolation method depends on the data structure and the desired level of continuity. Different interpolation methods can affect the continuity of the surface and its derivatives. For example, the TIN model discussed in Section 2.4 commonly uses linear interpolation. The surface of this model is continuous but not differentiable across triangle edges, so its slope is discontinuous.

2.2 Digital Line Graphs (DLGs)

[Figure 2.2: Digital Line Graph (DLG)]

A Digital Line Graph (DLG) is a set of contour lines. Figure 2.2 shows a perspective view of a DLG. Each contour line is a list of x and y coordinate vertices with a common z coordinate altitude. Each vertex requires the storage of two coordinates.
DLGs are part of Mark's [Mar78] Tabular - Lines - Horizontal DTM subdivision.

Terrain and topography information is commonly stored in non-automated contour maps [Eva72]. Data files of contours are widely available because digitizing contour lines from existing maps is a popular input method. Other DTMs are often constructed from digitized contour lines because of their availability. Elevation contours can be formed from other DTMs, but this is only done because contour lines are a familiar visual representation of an elevation surface [Kum94].

Terrain characteristics can be quickly computed if they only depend on a single contour line. Unfortunately, most terrain characteristics, like the sample characteristics listed in Section 2.1, are much more difficult to compute because they require identifying nearby contours [Eva72].

2.3 Digital Elevation Models (DEMs)

[Figure 2.3: Digital Elevation Model (DEM)]

A Digital Elevation Model (DEM) or regular grid is a matrix of z coordinate altitudes with x and y coordinates implicit in the grid. Figure 2.3 shows a perspective view of a DEM with edges connecting neighbouring vertices. Each vertex requires the storage of only one coordinate. DEMs are part of Mark's [Mar78] Tabular - Points - Regular - Constant Density DTM subdivision.

DEMs are widely available. The United States Geological Survey produces DEM data files corresponding to its published maps. These are created by manually profiling stereo models or by using an automatic image correlator [Kum94]. DEMs are widely used because they are convenient for programming and for machine storage [Mar78]. Many problems requiring a DTM benefit from the fixed data resolution of a DEM. Information concerning a restricted area, a neighbourhood, can be easily selected since the sequence of storage is known and regular [Eva72]. But a fixed resolution is also a disadvantage.
The resolution of the regular grid may not adequately model rough areas of the surface while being redundant in smooth areas.

Kumler [Kum94] notes that many early favorable comparisons of DEMs and other DTMs concerned the trade-off between the speed of processing and the cost of storing the structure. Later comparisons considered accuracy and required storage space.

2.4 Triangular Irregular Networks (TINs)

[Figure 2.4: Triangular Irregular Network (TIN)]

Peucker, Fowler, Little and Mark [PFLM78] were motivated to develop an alternative model because of problems with the Digital Elevation Model (DEM). A DEM must use a fine regular grid to accurately model rough terrain; this grid can be highly redundant in smooth terrain. Also, the DEM data structure doesn't correspond with the "structure of the phenomenon" [Mar78]. The design of a DTM data structure should reflect the terrain it will represent, unlike the DEM, which was designed with the problems it solves and the machines it uses in mind. But the neighbourhood of a point in a DEM can be reconstructed without searching the entire model. Any successful alternative must store this useful information explicitly.

A Triangular Irregular Network (TIN) is a set of irregularly distributed vertices linked together by straight lines to form a continuous, non-overlapping set of triangular elements [Mar78]. Figure 2.4 shows a perspective view of a TIN. Each vertex requires the storage of all three coordinates. TINs are part of Mark's [Mar78] Tabular - Points - Irregular DTM subdivision.

Peucker, Fowler, Little and Mark [PFLM78] base their TIN on surface-specific vertices. Ridges and channels connecting the peaks, pits and passes of the surface form the skeleton of the TIN. Additional vertices are added to improve spot height accuracy and the set is then connected in a Delaunay triangulation.
Although DEMs have better elevation detail and DEM files are widely available, TINs have the advantage of efficient storage and better handling of intervisibility analysis and extraction of hydrological terrain features [Lee91]. Variations of TINs exist with different vertex distributions and different triangulation criteria. The following section reviews triangulations in general and the Delaunay triangulation in particular.

2.4.1 Triangulations

A triangulation of a set of vertices V is a set of closed triangles T formed by edges connecting pairs of vertices in V that satisfy the following conditions [DLR90].

• The set of all vertices of triangles in T is V.
• Each edge of a triangle in T contains only two vertices (its endpoints) in V.
• The union of all triangles in T is equal to the convex hull of V.
• The intersection of the interiors of any two distinct triangles in T is empty.

Lee and Schachter [LS80] present an iterative algorithm for constructing a triangulation based on two primitive operations, adding a vertex and swapping a diagonal. In Figure 2.5, a new vertex is connected to each of the three vertices of its enclosing triangle in the existing triangulation.

[Figure 2.5: Adding a vertex to a triangulation]

If the quadrilateral formed by the union of a new triangle and its neighbour is convex, it will have an alternate diagonal. The diagonal should be swapped with the alternate if required by a local optimization procedure. Figure 2.6 shows how a diagonal is swapped.

[Figure 2.6: Swapping a diagonal in a triangulation]

Different triangulations can be defined by different local optimization procedures. If a diagonal is swapped, two new triangles are formed with quadrilaterals that have to be similarly optimized. Some of the many possible local optimization procedures are listed below.
• minimize the maximum or sum of perimeter²/area of the two triangles [Law72]
• maximize the minimum angle in the two triangles [LS80]
• minimize the angle between the normals of the two triangles [DLR90]

A local cost function on each pair of neighbouring triangles can be used as an optimization procedure. A global cost function for the triangulation can be formed by using either the maximum or the sum of the local cost functions. Lawson proved that any triangulation of a finite vertex set can be transformed into another triangulation of the same set with a finite number of diagonal swaps [Law72]. This theorem shows that any triangulation can be transformed into a triangulation that minimizes a given global cost function. Unfortunately, minimizing the local cost function for each diagonal does not guarantee that the global cost function will be minimized.

There are triangulations that do not have a local optimization procedure. One example is the Minimum-Weight Triangulation [PS85]. This triangulation is formed by minimizing the total length of the triangulation edges. Locally minimizing the length of the edges of each pair of triangles does not guarantee that the total length will also be minimized.

Delaunay Triangulation

A Delaunay triangle is a triangle whose circumcircle does not contain any other vertex in V. This property is called the empty-circle property. A Delaunay triangulation is a triangulation in which every triangle in T is a Delaunay triangle.

[Figure 2.7: Delaunay triangle]
[Figure 2.8: Delaunay triangulation]

The Delaunay triangulation is related to the Voronoi diagram. The Voronoi diagram for a set of vertices V is a set of Voronoi cells (convex polygons), one about each vertex. Each Voronoi cell contains all points closer to its vertex than any other vertex in V.

[Figure 2.9: Voronoi diagram]

By definition, an edge between two Voronoi cells is equidistant from two vertices.
The point where three edges meet is equidistant from three vertices.

The Delaunay triangulation is the graph dual of the Voronoi diagram. An edge in the Delaunay triangulation connects vertices with adjacent Voronoi cells. A triangle in the Delaunay triangulation has a circumcircle centered at a point where three Voronoi edges meet.

[Figure 2.10: Delaunay triangulation and Voronoi dual]

By definition, the Voronoi diagram for a set of vertices is unique. Since the Delaunay triangulation is the dual of the Voronoi diagram, the Delaunay triangulation is also unique under general position assumptions.

The Delaunay triangulation can be constructed by using the empty-circle property of Delaunay triangles. It can also be constructed by using the local optimization procedure of maximizing the minimum angle in each pair of triangles. This triangulation is used for fitting triangular faceted surfaces to digital terrain data by Lee and Schachter because it minimizes computation time and produces a good visual display [LS80]. The resulting TIN is used for flight simulators.
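The empty-circle property lends itself to a direct computational test. The following sketch is not from the thesis; it uses the standard incircle determinant from computational geometry, with a function name of my own choosing, to report whether a fourth point violates the Delaunay property of a counter-clockwise triangle:

```python
def in_circumcircle(a, b, c, d):
    """Return True if point d lies strictly inside the circumcircle of
    triangle abc. Assumes a, b, c are given in counter-clockwise order.
    Points are (x, y) pairs; this is the standard incircle determinant."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0
```

A diagonal-swap construction of the Delaunay triangulation can use this predicate directly: if the opposite vertex of a neighbouring triangle lies inside a triangle's circumcircle, the shared diagonal should be swapped. In floating point the test is sensitive near degeneracy, which is why production implementations use exact arithmetic for this determinant.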
Single contour operations are straightforward but tasks requiring multiple contours are much more difficult because moving from contour to contour is very awkward. D E M s are very easy to use. They are in a matrix form that is easy to access and manipulate. TINs are more complex than D E M s but they are also better for solving some of the more complex D T M problems. Storage D E M s only need to store one coordinate per point while TINs need to store all three coordinates as well as adjacency information. But the number of points required to accurately describe a surface with a T I N is usually smaller than the number of points required by a D E M . Kumler [Kum94] studies this tradeoff by comparing 15 the accuracy of a "Super T I N " with 50000 points to D E M requiring equal data storage space with 170000 points. 16 Chapter 3 Work with Polygonal Lines We started working with both D L G s and TINs but we were unable to transfer ideas useful in forming a simplification of a surface and measuring the accuracy of a simplified surface from one model to the other. The main obstacle was the number of dimensions of the primitives in each model; each polyline in a D L G is 2 - D while a triangular facet and its neighbours in a T I N are 3 - D . We present some of our work on simplifying and measuring the concavity of the polylines in a D L G in the first two sections of this chapter. The third section discusses the ideas that were transferred to our work with TINs. Mokhtarian [Mok90] proposes a multi-scale shape representation technique for planar curves based on curvature. This work can be related to both polyline simplification discussed in Section 3.1 and shape measurement discussed in Section 3.2. He uses an multi-scale approach to find a quick, approximate match at a coarse, simplified scale and then uses the finer scales to obtain more accurate matches. 
He also uses a technique based on curvature to represent the shape of the curve that is invariant under rotation, uniform scaling and translation.

3.1 Simplification of Polygonal Lines

A polyline is a sequence of vertices v_0, v_1, ..., v_n connected by a sequence of straight line segments v_0v_1, v_1v_2, ..., v_{n-1}v_n. We will refer to the section of the polyline from v_i to v_j as v_i ... v_j.

We wanted a method for simplifying a polyline. The method should produce a polyline with a reduced number of vertices but still a good caricature of the original polyline. The algorithm presented by Douglas and Peucker [DP73] performs well.

3.1.1 Douglas-Peucker Algorithm for Line Simplification

Douglas and Peucker [DP73] present a method for line simplification. The method computes a simplification value for each vertex of the original polyline. A vertex is included in a simplification if its simplification value is more than a given threshold. We implemented an algorithm that computes the simplification values of the n vertices of a polyline in O(n²) time, and we also implemented an algorithm that forms a simplified polyline in O(n) time after the simplification values have been computed. The same simplification values can be used with different thresholds to form polylines with different levels of detail. The vertices that are not included in the simplification are all within a small perpendicular distance of the line segments connecting the vertices that are included.

Algorithm notes

This algorithm recursively sets the simplification value of the vertex furthest from the line segment connecting the endpoints of the polyline. It is described below.

1. Initialize the simplification value of the two endpoints of the polyline to infinity.

2. Form a line segment v_0v_n with the two endpoints of the polyline v_0 ... v_n.

3. Find the vertex v_i in the polyline v_0 ... v_n with the largest perpendicular distance to the line segment v_0v_n.

4.
Set the simplification value for v_i to the minimum of the perpendicular distance from v_i to the line segment v_0v_n and the simplification values of the two endpoints.

5. Repeat steps 2 to 5 with the polyline v_0 ... v_i and the polyline v_i ... v_n until every vertex has a simplification value.

If the method is applied to a set of contour lines, the simplified contour lines may intersect. Also, the method retains the points where a contour crosses a ridge or a valley almost too well, resulting in too many vertices concentrated on those features and not enough for the other parts of the surface [Kum94].

3.2 Concavity Measures for Polygonal Lines

The simplification of a polyline should preserve key properties of the original polyline. One property we examined was concavity. To observe the effect of the simplification of a polyline on concavity, we needed a concavity measure for points on a polyline. A good measure should not depend on the level of detail in the polyline. In addition, it should not be affected by translation or rotation of the polyline, and it should be scalable. The window is the section of the polyline around a given point that is used to determine the concavity of that point. The measure should have a parameter to control the size of the window. The size of the window can be set so that local changes do not dominate the concavity measure. We developed two measures, the secant measure and the disk measure.

3.2.1 Secant Concavity Measure

The secant concavity measure of a point on a polyline is based on the ratio of the area between the polyline and a secant to the square of the length of the secant. The area between the polyline and the secant is calculated by finding the difference between the area covered by the polyline and the area covered by the secant. The area covered by a directed line segment is equal to the area of a triangle formed by the line segment and a point.
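The recursive assignment of simplification values described in steps 1 to 5 of Section 3.1.1 can be sketched as follows. This is an illustrative reconstruction under my own naming and point representation, not the thesis implementation:

```python
import math

def perp_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b.
    If a and b coincide, fall back to the distance from p to a."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - ax, py - ay)
    # The cross product gives twice the triangle area; divide by the base.
    return abs(dx * (py - ay) - dy * (px - ax)) / length

def simplification_values(polyline):
    """Compute a Douglas-Peucker simplification value for every vertex.
    Endpoints get infinity; each interior vertex gets the minimum of its
    perpendicular distance and the values of its enclosing endpoints."""
    values = [0.0] * len(polyline)

    def recurse(i, j):
        if j - i < 2:
            return
        # Vertex between i and j furthest from the segment (i, j).
        k = max(range(i + 1, j),
                key=lambda m: perp_distance(polyline[m], polyline[i], polyline[j]))
        d = perp_distance(polyline[k], polyline[i], polyline[j])
        values[k] = min(d, values[i], values[j])
        recurse(i, k)
        recurse(k, j)

    values[0] = values[-1] = math.inf
    recurse(0, len(polyline) - 1)
    return values

def simplify(polyline, threshold):
    """Keep vertices whose simplification value exceeds the threshold."""
    vals = simplification_values(polyline)
    return [p for p, v in zip(polyline, vals) if v > threshold]
```

Keeping the values separate from the threshold mirrors the design described above: compute the values once in O(n²), then extract a simplification at any level of detail in O(n).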
The area is positive if the point lies on the left hand side of the segment and negative if the point lies on the right hand side. The area covered by a polyline is equal to the sum of the areas covered by the directed line segments of the polyline for the same point. The area can be positive or negative. These areas are dependent on the location of the point, but the difference of these areas is not.

[Figure 3.1: Area covered by a line segment]
[Figure 3.2: Area covered by a polyline]

Algorithm notes

Given a point p on the polyline and a window distance d:

1. Find the two points that are d along the polyline in both directions from p and compute the area covered by the section of the polyline between these two points. The measure is undefined if the distance along the polyline from p to an endpoint is less than d.

2. Form a secant between these two points and compute the area covered by the secant.

3. Calculate the difference of the area covered by the section of the polyline and the area covered by the secant. This difference represents the area between the polyline and the secant. It can be positive or negative.

4. Form the ratio of this area and the square of the length of the secant.

Using the square of the length of the secant makes the measure scalable. It also makes the ratio an area:area ratio. A large positive ratio indicates a convex point relative to the left side of the polyline near p. A large negative ratio indicates a concave point. The ratio has no finite lower or upper bound.

As a polyline is simplified, the region specified by the window parameter gets larger. One solution to this problem is to keep the ratio of the window distance and the length of the polyline constant. If the secant concavity measure with a window distance of d is used on a polyline of length l, then a simplified polyline of length s should use the measure with a scaled window distance of (s/l)d.
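The signed areas used in steps 1 to 3 can be computed with the standard signed-triangle (shoelace) construction. The sketch below is my own illustration; the function names, the reference-point handling, and the resulting sign convention are assumptions rather than the thesis code:

```python
def covered_area(points, ref=(0.0, 0.0)):
    """Signed area 'covered' by a directed polyline: the sum of the signed
    areas of the triangles formed by each directed segment and a reference
    point. A segment contributes positively when ref lies on its left."""
    total = 0.0
    rx, ry = ref
    for (ax, ay), (bx, by) in zip(points, points[1:]):
        # Half the cross product of (a - ref) and (b - ref).
        total += 0.5 * ((ax - rx) * (by - ry) - (bx - rx) * (ay - ry))
    return total

def secant_concavity(section):
    """Ratio of the area between a polyline section and its secant to the
    squared secant length. The difference of covered areas is independent
    of the reference point, as noted in the text."""
    a, b = section[0], section[-1]
    sq_len = (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2
    if sq_len == 0:
        raise ValueError("secant has zero length")
    return (covered_area(section) - covered_area([a, b])) / sq_len
```

The difference in `secant_concavity` equals the shoelace area of the closed loop formed by the section followed by the reversed secant, which is why the choice of reference point cancels out.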
Implementation notes

This implementation can classify every point (not just the vertices) on a polyline with n vertices in O(n) time. It maintains the secant concavity measure for the "window" point, the "tail" point a distance d behind the "window" point, and the "head" point a distance d ahead of the "window" point.

• Initialize the points and the concavity measure.
  - Initialize the "tail" point to one endpoint of the polyline.
  - Initialize the "window" point d along the polyline.
  - Initialize the "head" point 2d along the polyline.
  - Initialize the concavity measure using the polyline section and the secant between the "tail" point and the "head" point.

• Maintain the concavity measure while moving the three points along the polyline. The concavity measure is a ratio of two areas. As the "window", "tail" and "head" points move along the polyline, this ratio forms a quadratic function.
  - If the "tail" point or the "head" point reaches a polyline vertex, the rate of change of the ratio changes.
  - If the ratio reaches a given positive or negative threshold, note the change of classification.

3.2.2 Disk Concavity Measure

The disk concavity measure of a point on a polyline is based on the ratio of the area on the left side of a directed polyline within a circle of given radius to the area
There is no need to scale the window radius as the polyline is simplified.

Implementation notes

Our implementation only classified the vertices of the polyline. It calculates the disk concavity of each vertex independently. The upper time bound is O(n^2) because calculating the disk concavity measure for a single point is an O(n) operation and this must be repeated for all n vertices.

3.2.3 Comparison of Concavity Measures

The implementation of our disk measure only computes the concavity for polyline vertices while the implementation of our secant measure algorithm computes concavity for all points on the polyline. Also, our secant measure implementation has a better time bound than our disk measure algorithm. The ratio bounds are different so ratios cannot be directly compared.

3.3 Transferring Ideas to TINs

3.3.1 Simplification

A useful feature of the Douglas-Peucker line simplification method is its use of recursion. The algorithm subdivides the polyline into two sections and simplifies these sections independently. This would also be a useful feature of a TIN simplification method. The line simplification method uses the distance from a point to a line segment of the polyline to determine if the line should be subdivided. With a TIN, the distance from a point to the planar facet of a triangle can be used as a measure of the importance of that point.

3.3.2 Concavity Measures

The ideas behind the secant measure cannot be easily transferred to a polyhedral surface. The secant measure uses a secant, which can be defined for a polyline or a curve but not for a surface. The ideas behind the disk measure can be transferred but the computation is difficult. Finding the volume of the intersection of a ball and a set of planes is harder than computing the intersection of the disk and the area inside the polyline.

Chapter 4

Simplification of a Triangular Irregular Network (TIN)

This chapter describes the methods we used to simplify TINs.
The first two sections review past work on hierarchical triangulations and hierarchical TINs. Section 4.3 describes our methods for selecting and removing vertices to form a hierarchical TIN that can combine different levels of detail. We used another TIN simplification method, described in Section 4.4, to compare the surface accuracy of our simplifications. The details of our implementation of this hierarchical method are described in Section 4.5.

4.1 Hierarchical Triangulations

Kirkpatrick [Kir83] presents an algorithm for optimal search in planar subdivisions that uses a triangular subdivision hierarchy. A planar subdivision is a finite collection of line segments inducing a partition of the plane into polygonal regions. A triangular subdivision is a finite collection of finite line segments inducing a partition of the plane into regions bounded by three line segments. This is equivalent to a triangulation. A planar subdivision can be reduced to a triangular subdivision in two steps. In the first step, the planar subdivision is intersected with a triangle chosen to contain all intersections of the planar subdivision. In the second step, each interior region of this triangle is triangulated.

The neighbourhood of a vertex v in a triangular subdivision is the star-shaped polygon formed by the union of the triangles incident to v.

Figure 4.1: Neighbourhood of a vertex

A triangular subdivision can be simplified by removing a vertex and retriangulating its neighbourhood. The parents of a triangle t in the retriangulated neighbourhood are the triangles in the original neighbourhood that intersect t. If a vertex v with degree deg(v) is removed, each triangle in the retriangulated neighbourhood will have at most deg(v) parents regardless of how the neighbourhood is retriangulated.

The simplification achieved by removing a single vertex is minimal. If a set of vertices is removed, the simplification can be more substantial.
The neighbourhoods of non-adjacent vertices do not intersect except possibly along edges. The vertices in an independent set can be removed in parallel and their neighbourhoods can be retriangulated independently.

The triangulation hierarchy is a sequence of successively simplified triangular subdivisions. This hierarchy is used to find the region containing a given test point. At each level of the hierarchy, the search algorithm locates the region containing the test point by searching through a set of candidates. The parents of that region become the new candidates for the search at the next level. To optimize the search algorithm, the number of levels and the size of the set of search candidates or parents should be minimized. The degree of a vertex affects the size of the set of search candidates and the number of vertices removed affects the height of the hierarchy. The size of a set of independent vertices with degree at most 11 would be at least n/18 where n is the number of vertices in the current triangulation. The proof is below.

Notation

V      = set of vertices
V_high = {v_i in V : deg(v_i) > 11}
V_low  = {v_i in V : deg(v_i) <= 11}
n_v    = number of vertices
n_e    = number of edges
n_t    = number of triangles
n_h    = number of edges in hull (minimum 3)

Proof

n_t - n_e + n_v - 1 = 0                      (Euler's formula)
3n_t + n_h = 2n_e                            (counting edges)
n_t = (2n_e - n_h)/3
(2n_e - n_h)/3 - n_e + n_v - 1 = 0           (substitute n_t in Euler's formula)
2n_e - n_h - 3n_e + 3n_v - 3 = 0
n_e = 3n_v - n_h - 3
n_e <= 3n_v - 6                              (minimum n_h is 3)
average degree = 2n_e/n_v <= 2(3n_v - 6)/n_v < 6
total degree = 2n_e < 6|V|
3|V_low| + 12|V_high| <= 2n_e < 6|V|         (total degree)
9|V_high| < 3|V|                             (subtract 3|V|)
|V_high| < |V|/3
|V_low| = |V| - |V_high| > 2n_v/3
independent set size >= |V_low|/12 > n_v/18  (vertex and 11 neighbours)

Kirkpatrick's optimal search algorithm has O(n) preprocessing time, O(lg n) search time and O(n) space.
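The n/18 bound suggests a simple greedy selection: scan the vertices, take any unmarked vertex of degree at most 11, and mark it together with its neighbours. A sketch over an adjacency-list graph (the representation is an assumption of ours, not taken from Kirkpatrick's paper):

```cpp
#include <cassert>
#include <vector>

// Greedily select an independent set of vertices with degree <= 11.
// Each selected vertex excludes itself and at most 11 neighbours, which is
// where the |V_low|/12 > n/18 bound in the proof comes from.
std::vector<int> lowDegreeIndependentSet(const std::vector<std::vector<int>>& adj) {
    std::vector<int> picked;
    std::vector<bool> blocked(adj.size(), false);
    for (std::size_t v = 0; v < adj.size(); ++v) {
        if (blocked[v] || adj[v].size() > 11) continue;
        picked.push_back(static_cast<int>(v));
        blocked[v] = true;
        for (int w : adj[v]) blocked[w] = true; // neighbours become ineligible
    }
    return picked;
}
```

On a path 0-1-2-3, for example, the scan picks vertices 0 and 2, which are indeed independent.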
4.2 Hierarchical TINs

A hierarchical TIN attempts to combine the useful features of a hierarchical data structure with a TIN. The result should be able to approximate the surface at different resolutions while keeping the benefits associated with TINs, such as irregularly-distributed vertices, efficient storage and better handling of visibility and drainage problems. This section reviews different approaches to forming a hierarchical TIN.

Gomez and Guzman [GG79] present a model for three-dimensional surface representation that uses a tree of triangles. If the surface represented by a planar triangle does not satisfy a prespecified error tolerance it is subdivided. Figure 4.2 shows the four child triangles formed by adding new vertices to the three edges of the parent triangle.

Figure 4.2: Subdividing a triangle into four children

This hierarchical model uses more vertices to describe areas of rough terrain but the surface of this model is not continuous. Discontinuities can appear along the edge shared by a triangle that is subdivided and a neighbouring triangle that is not subdivided.

De Floriani, Falcidieno, Nagy and Pienovi [DFFNP84] present a different hierarchical structure. A triangle is subdivided by inserting the vertex that will minimize the interpolation error of the remaining vertices within that triangle. Figure 4.3 shows the three child triangles formed by the edges connecting the new vertex to the three vertices of the parent triangle.

Figure 4.3: Subdividing a triangle into three children

This model includes more points in rough areas and it forms a continuous surface at each level of the hierarchy. The major problem with this model is its tendency to form elongated triangles that can lead to inaccuracies in numerical interpolation [dBD95a]. In Section 4.2.1, this hierarchy is used as a method for building a TIN from a DEM.
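The four-way split of Figure 4.2 connects the midpoint of each edge of the parent; each child then covers a quarter of the parent's plan area. A sketch of the planar subdivision step (the midpoint elevations are linearly interpolated here as a placeholder, which is our assumption, not a detail from [GG79]):

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };
using Tri = std::array<V3, 3>;

V3 midpoint(const V3& a, const V3& b) {
    return { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 };
}

// Plan (xy) area of a triangle via the cross product.
double planArea(const Tri& t) {
    return std::fabs((t[1].x - t[0].x) * (t[2].y - t[0].y) -
                     (t[2].x - t[0].x) * (t[1].y - t[0].y)) / 2.0;
}

// Split a parent triangle into four children: three corner triangles plus
// the central triangle formed by the three edge midpoints.
std::array<Tri, 4> subdivide(const Tri& t) {
    V3 m01 = midpoint(t[0], t[1]);
    V3 m12 = midpoint(t[1], t[2]);
    V3 m20 = midpoint(t[2], t[0]);
    return { Tri{t[0], m01, m20}, Tri{m01, t[1], m12},
             Tri{m20, m12, t[2]}, Tri{m01, m12, m20} };
}
```

Because the split happens inside a single triangle, a subdivided triangle's new edge midpoints need not match an unsubdivided neighbour, which is exactly the source of the discontinuities described above.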
These two structures have been modified in attempts to address the problems of surface discontinuities and elongated triangles.

Scarlatos and Pavlidis [SP90] aim to triangulate while looking more carefully at the terrain features that are being approximated. They also wish to avoid thin triangles that can appear in other hierarchical triangulations. The refinement technique they present inserts vertices and splits edges of a single triangle to approximate pit or peak points and ridge or channel lines. Scarlatos and Pavlidis compare their technique to the hierarchical structure presented by De Floriani, Falcidieno, Nagy and Pienovi. They find a substantial decrease in the number of thin triangles because their technique allows edge splitting. Their technique also results in a decrease in the number of levels in the hierarchy required to achieve a given degree of approximation because each triangle can be subdivided into more than three children.

De Floriani and Puppo [DFP92] refine each triangle by inserting new vertices along the edges of the parent triangle and then inserting vertices in the interior. These new vertices are connected in a local Delaunay triangulation. Elongated triangles are avoided but the union of the triangles in these local triangulations does not necessarily form a global Delaunay triangulation. The surface is continuous at each level of the hierarchy but discontinuities may appear if areas from different levels of the hierarchy are combined.

The triangulation hierarchy presented with Kirkpatrick's optimal search algorithm was modified by de Berg and Dobrindt [dBD95a, dBD95b] to be used for polyhedral terrains or TINs. They present a method for displaying terrains combining areas with different levels of detail. The details of our implementation of their method are described in Section 4.5. The other hierarchical TINs presented above replace a single triangle with a set of triangles.
These hierarchies have the advantage of a tree structure but the disadvantage of producing either skinny triangles or surface discontinuities. The hierarchical TIN presented by de Berg and Dobrindt replaces a group of triangles with another group of triangles. This hierarchy is not a tree but a directed acyclic graph, which makes it more difficult to combine parts. But it has the advantage of allowing the use of a global Delaunay triangulation at every level. An importance value is assigned to each vertex so that peaks, passes, pits and other points important to the shape of the terrain are not deleted. Vertices are then selected and removed in order of increasing importance. Their method also allows the user to select a set of "fixed" vertices that are never removed.

4.2.1 Measuring Vertex Importance

De Berg and Dobrindt use a method for assigning an importance value to each vertex mentioned by Lee [Lee91]. Lee reviews four methods for building TINs from DEMs. These methods are summarized in this section.

The Skeleton method [FL79] uses peaks, pits, passes, ridges and valleys to form the skeleton of the TIN. A 3 x 3 point window is used to define peak and pit points. The center point of a 3 x 3 point window is defined to be a peak if it is a relative local maximum. Similarly, the center point is defined to be a pit if it is a relative local minimum. A 2 x 2 point window is used to define ridge and valley points. A point appears in four 2 x 2 windows. It is defined to be a ridge point if it is never the minimum point of these four windows. Similarly, a point is defined to be a valley point if it is never the maximum point. Ridge and valley lines are formed by connecting ridge and valley points. Next, the Douglas-Peucker algorithm for line simplification described in Section 3.1.1 is used to reduce the number of points needed to describe these lines. Support points are then added to improve differences in elevation.
This complex method was excluded from Lee's comparison because it required too many tolerances.

The Filter method [CG89] discards points that can be closely interpolated from the points in their 3 x 3 grid neighbourhoods. Since each point is visited only once this method is faster than the other methods in Lee's review. The disadvantage of this method is that only local information is used to select the points.

The Hierarchy method [DFFNP84] recursively subdivides each triangle by inserting the point with the largest difference between the original and interpolated elevations. This method does not use a Delaunay triangulation and tends to produce long and thin triangles but it has the advantage of a hierarchical data structure.

The Drop Heuristic method [Lee89] iteratively discards the point which will cause the least difference in elevation when dropped. The advantage of this method is that it uses the Delaunay neighbours rather than the grid neighbours of a vertex to find its importance in a global context. Unfortunately, this method is susceptible to drift; small errors introduced by discarding points can accumulate into large errors.

The Filter method, the Hierarchy method and the Drop Heuristic method each have two possible stopping conditions. Vertices can be inserted or deleted until either the TIN reaches a pre-set number of points or the TIN reaches a pre-set tolerance of elevation error. The Drop Heuristic method can be considered a TIN simplification method instead of a method for building a TIN from a DEM because it begins by connecting the entire DEM into a TIN. The Hierarchy method can also be considered a TIN simplification method because it has no constraints on the distribution of its input data. The Skeleton and the Filter methods are not TIN simplification methods because they require gridded data as input but they are of interest to us because of their different definitions of important points.
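The Filter method's single pass can be sketched as follows: compare each interior grid elevation with a value interpolated from its 3 x 3 neighbours and discard the point if the difference is within tolerance. The interpolant used here (the average of the eight surrounding cells) is our assumption, not a detail from [CG89]:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One-pass filter over a DEM grid: an interior point is kept only if its
// elevation differs from the average of its eight 3 x 3 neighbours by more
// than tol. Returns a keep/discard mask (border points are always kept).
std::vector<std::vector<bool>>
filterDEM(const std::vector<std::vector<double>>& z, double tol) {
    std::size_t rows = z.size(), cols = z[0].size();
    std::vector<std::vector<bool>> keep(rows, std::vector<bool>(cols, true));
    for (std::size_t r = 1; r + 1 < rows; ++r)
        for (std::size_t c = 1; c + 1 < cols; ++c) {
            double sum = 0.0;
            for (int dr = -1; dr <= 1; ++dr)
                for (int dc = -1; dc <= 1; ++dc)
                    if (dr != 0 || dc != 0)
                        sum += z[r + dr][c + dc]; // eight neighbours
            keep[r][c] = std::fabs(z[r][c] - sum / 8.0) > tol;
        }
    return keep;
}
```

On a planar grid every interior point is exactly the average of its neighbours, so all interior points are discarded, which illustrates the purely local nature of the selection.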
4.2.2 Removing Vertices

Kirkpatrick simplifies the levels in his triangular subdivision hierarchy by removing vertices and retriangulating their neighbourhoods. In de Berg and Dobrindt's modification, these two tasks must be completed while retaining a Delaunay triangulation.

Midtbø [Mid94] studies three algorithms for removing vertices from a Delaunay triangulation. He describes a simple algorithm for retriangulating the neighbourhood of a deleted vertex. This algorithm uses pairs of edges along the hull of the neighbourhood to form triangles. The radius of the circumcircle of each potential triangle is calculated and an edge is added to complete the triangle with the smallest radius. This process is repeated until the neighbourhood is completely retriangulated. The simple algorithm retriangulates a neighbourhood with d vertices with a worst case running time of O(d^2) while the other two algorithms he describes have worst case running times close to O(d log d). But the average d is small enough that the actual running times for the simple algorithm are faster than the running times for the other algorithms he tested.

4.3 Delete Simplification Methods

We wish to simplify a TIN so the hierarchy described by Kirkpatrick and modified by de Berg and Dobrindt can be used. We describe a set of delete simplification methods that meet this constraint. The simplification is formed in two steps. The first step selects a set of vertices to be deleted. The second step deletes these vertices and retriangulates their neighbourhoods. These two steps can be repeated for further simplification. Since our method of deleting vertices places constraints on how these vertices are selected, we will present the two steps of the simplification in the reverse order. Section 4.3.1 provides the details on how selected vertices are deleted from the TIN and Section 4.3.2 provides the details on how vertices are selected.
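The circumradius at the heart of the simple algorithm follows from R = abc / 4K, where a, b, c are the side lengths and K the triangle area. The sketch below computes that radius and clips the smallest-radius ear of a convex neighbourhood hull; it is a simplification of Midtbø's algorithm, which must additionally reject ears that fall outside a non-convex star-shaped hull and consider the ears that wrap around the vertex list:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

struct P { double x, y; };

double dist(const P& a, const P& b) { return std::hypot(b.x - a.x, b.y - a.y); }

// Circumcircle radius R = abc / (4K), with 2K from the cross product.
double circumradius(const P& a, const P& b, const P& c) {
    double area2 = std::fabs((b.x - a.x) * (c.y - a.y) -
                             (c.x - a.x) * (b.y - a.y)); // 2K
    return dist(a, b) * dist(b, c) * dist(c, a) / (2.0 * area2);
}

// Greedy retriangulation of a convex hull polygon: repeatedly clip the ear
// (three consecutive hull vertices) with the smallest circumradius.
std::vector<std::array<P, 3>> retriangulate(std::vector<P> hull) {
    std::vector<std::array<P, 3>> tris;
    while (hull.size() > 3) {
        std::size_t best = 0;
        double bestR = circumradius(hull[0], hull[1], hull[2]);
        for (std::size_t i = 1; i + 2 < hull.size(); ++i) {
            double r = circumradius(hull[i], hull[i + 1], hull[i + 2]);
            if (r < bestR) { bestR = r; best = i; }
        }
        tris.push_back({hull[best], hull[best + 1], hull[best + 2]});
        hull.erase(hull.begin() + best + 1); // clip the ear's middle vertex
    }
    tris.push_back({hull[0], hull[1], hull[2]});
    return tris;
}
```

A hull of d vertices always yields d - 2 triangles, so the double scan above is where the O(d^2) worst case comes from.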
4.3.1 Deleting Vertices and Retriangulating Neighbourhoods

If a vertex is deleted from a triangulation, its neighbourhood must be retriangulated. A local retriangulation is a retriangulation that does not affect the triangles outside of the neighbourhood. A local retriangulation is always possible, but there is no guarantee that there will be any local retriangulation that can satisfy a given triangulation criterion.

If a vertex is deleted from a Delaunay triangulation, the retriangulation required to form the Delaunay triangulation of the remaining vertices is a local retriangulation [Mid94]. The triangles outside of the neighbourhood are unaffected so the retriangulation must be local. The proof is below.

Notation

V     = set of vertices including v_del
v_del = deleted vertex
t     = any triangle on V

Proof

t is Delaunay on V <=> for all v_i in V, v_i outside circumcircle of t
t is Delaunay on V <=> t in Delaunay triangulation of V

if t does not have v_del as a vertex then

t in Delaunay triangulation of V
  => t is Delaunay on V
  => for all v_i in V, v_i outside circumcircle of t
  => for all v_i in V - {v_del}, v_i outside circumcircle of t
  => t is Delaunay on V - {v_del}
  => t in Delaunay triangulation of V - {v_del}

We will be using the simple fast retriangulating algorithm described by Midtbø [Mid94].

4.3.2 Selecting Vertices

If the retriangulation is local and no two vertices in the set of vertices to be deleted are adjacent, each delete and retriangulate operation is independent. This allows us to delete a set of vertices without worrying about the interactions among pairs of delete and retriangulate operations. Our retriangulating algorithm may work on some intersecting neighbourhoods of adjacent vertices but will work on all neighbourhoods of independent vertices. Each of the following methods uses a different importance measure to select vertices. All three of the measures are local; they only use the neighbourhood of the vertex to measure its importance.
The only criterion for selecting vertices in Kirkpatrick's [Kir83] original triangulation hierarchy is the degree of the vertex. We name this method the degree delete simplification method. We deleted a set of independent vertices with degree 11 or less. This simplification method only uses a binary measure for each vertex. This method does not use elevation information to select the vertices. We do not expect this method will form accurate simplifications but it is included for comparison purposes.

The next two methods use a "flatness" measure to select vertices with low importance. The volume delete simplification method uses the difference in volume as the "flatness" measure. The absolute difference in the volume under the neighbourhood of the vertex with and without the vertex is computed. If the absolute difference is small, deleting that vertex will have a relatively small effect on the model. This method is similar to the algorithms evaluated by Lee [Lee91] and Kumler [Kum94]. Those algorithms built TINs from DEMs and used the absolute difference in elevation at grid locations. By calculating the difference in volume, this method considers the difference in elevation at all points and not just the differences at vertices of a TIN or DEM.

The normal delete simplification method uses the normal vectors of the triangular facets to compute the "flatness" measure. The unit normal of each triangle in the neighbourhood of the vertex is computed. Another unit normal is formed from the average of these normals. The mean of the dot products of each individual unit normal and the average unit normal forms the "flatness" measure. Equal unit normals have a dot product of one. If the mean of the dot products is near one, the terrain around that vertex is flat and deleting that vertex will have a relatively small effect on the model. This "flatness" measure is harder to interpret than the measure used by the volume method.
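The normal method's "flatness" value can be sketched directly: compute each facet's unit normal, normalize their average (normalizing is our reading of "another unit normal is formed from the average"), and take the mean dot product against that average:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 normalize(const Vec3& v) {
    double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / n, v.y / n, v.z / n };
}

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Mean dot product of each facet's unit normal with the (normalized) average
// normal. A value near 1 means the neighbourhood is nearly flat and the
// vertex is a good candidate for deletion.
double flatness(const std::vector<Vec3>& unitNormals) {
    Vec3 avg = {0, 0, 0};
    for (const Vec3& n : unitNormals) {
        avg.x += n.x; avg.y += n.y; avg.z += n.z;
    }
    avg = normalize(avg);
    double sum = 0.0;
    for (const Vec3& n : unitNormals) sum += dot(n, avg);
    return sum / unitNormals.size();
}
```

Coplanar facets give exactly 1.0; any disagreement among the normals pulls the mean below 1, consistent with the interpretation above.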
We know the best "flatness" value is one but we do not know what range of values to expect in our experiments.

4.4 Insert Simplification

We wanted to test the delete simplification methods against another type of simplification method. Garland and Heckbert [GH95] examine the greedy insertion algorithm, which we rename the insert simplification method. The insert simplification method builds a simplification of a TIN by repeatedly inserting the vertex with the largest associated error. This error is measured as the vertical distance between the current TIN and the vertex. The process is continued until the desired number of vertices is reached. The initial TIN is the flat plane defined by four vertices at elevation zero. This simplification method does not guarantee that the hierarchy described by de Berg and Dobrindt can be formed.

4.5 Rendering Terrain with Varying Levels of Detail

We implemented a program to render a terrain with varying levels of detail based on de Berg and Dobrindt's [dBD95a, dBD95b] hierarchy for TINs. Each of the delete simplification methods in Section 4.3 generates a sequence of terrain models which can be used to form the hierarchy. This hierarchy can be used to form a terrain using less data, resulting in faster rendering. It can also be used to form a terrain with non-uniform levels of simplification. These models and the links between them are stored in the data structures explained below.

4.5.1 Data Structures

As the TIN is simplified, vertices are deleted and their star-shaped neighbourhoods are retriangulated. A simplified TIN can be refined by reversing this process. These refinements are stored by the hierarchy. Each triangle belongs to a group of triangles that form the retriangulated neighbourhood of a deleted vertex. These triangles are refined by inserting the deleted vertex and restoring the triangulation.
In our implementation, each triangle has a link to its group polygon and each polygon has links to its unrefined reject triangles, the accept vertex that is used to refine these triangles and its refined accept triangles. The triangle links are shown in Figure 4.4 and the polygon links are shown in Figure 4.5, Figure 4.6 and Figure 4.7. We implemented the program in C++. The header files for the four classes are in the sections below.

Figure 4.4: Links from triangles to their group polygon.

Figure 4.5: Links from the polygon to its reject triangles.

Figure 4.6: Link from the polygon to its accept vertex.

Figure 4.7: Links from the polygon to its accept triangles.

Vertex

class _vertex {
public:
    /* constructor */
    _vertex();

    /* access functions */
    void set_v(float v0, float v1, float v2);
    void set_level(int level0);

    /* returns true if the distance from v0 to the implicit vertex is less
       than the product of the level of the implicit vertex and d0 */
    int test_accept(const _vertex &v0, float d0) const;

    /* render the triangle specified by the implicit vertex, v1 and v2 */
    void draw(const _vertex &v1, const _vertex &v2) const;
};

Triangle

class _triangle {
public:
    /* constructor */
    _triangle();

    /* access functions */
    void set(_vertex *v0, _vertex *v1, _vertex *v2,
             _polygon *group0, int queue0);
    void set_queue(int queue0);
    int get_queue() const;
    _polygon *get_group() const;

    /* render the triangle using _vertex::draw(_vertex, _vertex) */
    int draw() const;
};

Polygon

class _polygon {
public:
    /* constructor */
    _polygon();

    /* access functions */
    void set_accept(_vertex *accept0);
    int add_reject(_triangle *tr0);
    int add_accept(_triangle *ta0);

    /* if all reject triangles are in the queue and the accept vertex is
       accepted then remove all reject triangles and insert all accept
       triangles, else remove and render any reject triangles in the queue */
    int draw(const _vertex &v0, float d0) const;
};

Level

class _level {
public:
    /* constructor */
    _level();

    /* access functions */
    int set_size(int nt0);
    int set_triangle(int i, _vertex *v0, _vertex *v1, _vertex *v2,
                     _polygon *g0, int q0) const;
    void set_queue(int q0) const;

    /* for each triangle in the queue: if the triangle has no associated
       polygon then render the triangle, else process the polygon using
       _polygon::draw(_vertex, float) */
    int draw(const _vertex &v0, float d0) const;
};

4.5.2 Rendering

Each time the TIN is simplified, another level in the hierarchy is formed. The original TIN forms the lowest level and each simplification forms a higher level. The level of a vertex is equal to the highest level of the hierarchy that still includes that vertex. Initially, each triangle in the highest level of the hierarchy is inserted in a queue. To render the terrain, each triangle in the queue is processed. A triangle is processed by either rendering the triangle or replacing the triangle and the other reject triangles in its polygon with the set of accept triangles. The union of the enqueued triangles and the rendered triangles will always cover the terrain. If all the reject triangles of a polygon are in the queue, the accept vertex can be tested. In our implementation, the vertex is accepted if the distance between the accept vertex and a user-defined center of detail is less than the product of the level of the accept vertex and a user-defined detail dropoff distance.
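The acceptance test amounts to a single distance comparison per polygon. A sketch as a free function rather than the _vertex method, with Euclidean distance assumed:

```cpp
#include <cassert>
#include <cmath>

struct Vertex { float x, y, z; int level; };

// A vertex is accepted (i.e. its polygon is refined) when its distance to
// the centre of detail is below level * dropoff: low-level vertices are
// accepted only near the centre, high-level vertices even far away.
bool testAccept(const Vertex& v, const Vertex& centre, float dropoff) {
    float dx = v.x - centre.x, dy = v.y - centre.y, dz = v.z - centre.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) < v.level * dropoff;
}
```

Because the threshold grows with the vertex's level, refinement is dense near the centre of detail and coarse far from it, which is the behaviour the rendering section relies on.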
This test includes vertices from the lower levels of the hierarchy close to the centre of detail while allowing only vertices from higher levels further away. Other tests based on local polygon information can also be used.

Three of the four possible cases are illustrated in Figure 4.8, Figure 4.9 and Figure 4.10. The fourth case occurs when the enqueued triangle is in the lowest level of the hierarchy. No more refinement can be done so it is rendered. Enqueued triangles have a grey fill style and rendered triangles have a grid fill style. In every case, the area covered before and after the triangle's group polygon is processed is the same.

Figure 4.8: Only one of the three triangles in this polygon is in the queue. It is removed from the queue and rendered.

Figure 4.9: All three of the triangles in this polygon are in the queue but the accept vertex is not accepted. All three triangles are removed from the queue and rendered.

Figure 4.10: All three of the triangles in this polygon are in the queue and the accept vertex is accepted. All three triangles are removed from the queue and five new triangles are inserted into the queue.

The results of this rendering method are displayed in three figures. Figure 4.11 shows the initial terrain model with 20002 triangular facets. It has a high level of detail, even in the background where it is not needed. This terrain model is repeatedly simplified to form the model shown in Figure 4.12. This simplified model has a low level of detail. The hierarchy is used to form the terrain model in Figure 4.13. It combines areas of high detail in the foreground with areas of low detail in the background.

Figure 4.12: A simplified model with 3360 triangular facets.

Figure 4.13: A terrain model combining regions of varying levels of detail from the hierarchy of the initial terrain model and its simplifications. It has 5280 triangular facets.
Chapter 5

Surface Accuracy

To evaluate the effectiveness of these simplification methods, we examined their accuracy and their speed. Lee [Lee91] and Kumler [Kum94] evaluate surface accuracy by computing the differences in elevation between the original surface and the approximate surface at a set of test points. In our surface evaluations, we used measures of curvature as well as elevation.

Lee [Lee91] reviews and evaluates four methods for building TINs from DEMs. Each approximate TIN surface is compared to the original DEM surface by computing the elevation difference at each DEM point. In addition to measuring the mean and standard deviation of the differences, Lee analyses the spatial pattern of the differences. He uses Moran's index, a spatial autocorrelation coefficient, to measure how clustered or how randomly the differences are distributed. Tests for the significance of randomization and normality are also used. He finds that none of the four methods he reviews has a clear overall advantage. It is interesting to note that the Drop Heuristic, a TIN building method that is closely related to the delete simplification methods, has the best Moran's index values.

Kumler [Kum94] compares eight different TIN and DEM construction methods. He uses DLGs as the source data. Each approximate DTM surface is compared to the original terrain by computing the elevation differences at points in three test sets. The first test set is the set of DLG spot heights included in each DLG data file. Kumler created the other two sets of test points for his comparison. The second set is created by using a set of coarse grid points "jiggled" so that each point falls on a DLG contour line. The third set is created by obtaining a set of dispersed, irregularly-distributed points and visually estimating the elevations with approximate interpolation between contour lines.
Although he believes that TINs "look better" than DEMs, he finds that DEMs estimate spot heights better than TINs. He suspects that TINs may "look better" because TINs are better at modelling derivative statistics such as slope and aspect.

In attempting to find a basis for quantitative comparison of landscapes, Evans [Eva72] describes the field of general geomorphometry. Geomorphometry is the measurement and analysis of land-form characteristics applicable to any continuous rough surface. Evans calls for simple variables that are standardized for comparison, integrated and statistically stable. He also prefers to use point measures rather than measures defined in relation to an arbitrary area. He presents five basic variables for general geomorphometry.

• altitude, z
• gradient, z'_v
• aspect, z'_h
• vertical convexity, z''_v
• horizontal convexity, z''_h

Lee and Kumler only use altitude as a basis for comparing surfaces. The other four variables, all derivative statistics, are ignored. We wanted to use both elevation and a derivative statistic to evaluate the accuracy of the simplification methods. We decided that mean curvature was a good derivative statistic because it is a meaningful characteristic value of the shape operator reviewed in Section 5.2. Elevation can be interpolated directly from a TIN surface but mean curvature is only defined at a point on a surface with continuous second partial derivatives. To overcome this obstacle, we fit a degree three B-spline surface to the surface of the TIN.

Evaluating the surface accuracy consisted of four steps. We first interpolated a set of gridded test points from the original TIN and the simplified TIN by locating the triangle that contains the xy point and using the three vertices of that triangle to linearly interpolate the z coordinate. We then fitted a B-spline surface to both these interpolated sets. At each test point, we calculated the elevation and mean curvature.
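The first step, linear interpolation of z within the containing triangle, is a barycentric combination of the three vertex elevations. A sketch (point location is omitted; the containing triangle is given):

```cpp
#include <cassert>
#include <cmath>

struct P3 { double x, y, z; };

// Linearly interpolate the elevation at (x, y) from the plane through the
// three vertices of the containing triangle, using barycentric weights.
double interpolateZ(const P3& a, const P3& b, const P3& c, double x, double y) {
    double det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    double wa  = ((b.y - c.y) * (x - c.x) + (c.x - b.x) * (y - c.y)) / det;
    double wb  = ((c.y - a.y) * (x - c.x) + (a.x - c.x) * (y - c.y)) / det;
    double wc  = 1.0 - wa - wb; // weights sum to one
    return wa * a.z + wb * b.z + wc * c.z;
}
```

At a vertex the interpolant reproduces that vertex's elevation, and at the centroid it returns the mean of the three elevations, both easy sanity checks.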
Finally, we computed the correlation coefficients of the elevations and curvatures. The following sections provide a background to these steps. Section 5.1 describes parametric curves and parametric surfaces. Section 5.2 reviews the definition and calculation of curvature and Section 5.3 reviews the use of the correlation coefficient. Kumler's comparison notes the importance of testing simplification methods over a wide range of study areas [Kum94]. Our approach to choosing study areas is outlined in Section 5.4.

5.1 Parametric Curves and Surfaces

This section describes a representation of curves and surfaces [FvDFH90]. We start by reviewing parametric curves and then extend the curve definition to parametric surfaces. We conclude by describing the properties of the specific surface we used, the B-spline surface.

5.1.1 Parametric Curves

Polylines are first-degree, piecewise linear approximations to curves. Large numbers of points may be needed to achieve reasonable accuracy. Parametric curves use higher-degree functions as a representation of curves. They are still only approximations, but they use less storage than linear functions. Cubic polynomials are most often used because lower-degree polynomials give too little flexibility in controlling the shape of the curve and higher-degree polynomials can introduce unwanted wiggles and also require more computation.

The cubic polynomials that define a curve segment Q(t) are of the following form.

x(t) = a_x t^3 + b_x t^2 + c_x t + d_x
y(t) = a_y t^3 + b_y t^2 + c_y t + d_y
z(t) = a_z t^3 + b_z t^2 + c_z t + d_z,    0 <= t <= 1

This is rewritten in matrix form.

T = [ t^3  t^2  t  1 ]

C = | a_x  a_y  a_z |
    | b_x  b_y  b_z |
    | c_x  c_y  c_z |
    | d_x  d_y  d_z |

Q(t) = [ x(t)  y(t)  z(t) ] = T · C

A curve segment is defined by constraints on endpoints, tangent vectors and continuity between adjoining curve segments. If two curve segments join together, the curve has G^0 geometric continuity.
If the directions (but not necessarily the magnitudes) of the two segments' tangent vectors are equal at a join point, the curve has G^1 geometric continuity. If the tangent vectors of two curve segments are equal at the segments' join point, the curve has first-degree continuity and is said to be C^1 continuous. If the direction and magnitude of the first through to the nth derivative d^n/dt^n [Q(t)] are equal at the join point, the join is called C^n continuous. The coefficient matrix C can be rewritten as C = M · G, where M is the basis matrix and G is the geometry vector. The coefficients of the geometry vector, G_1, G_2, G_3 and G_4, are also vectors.

Q(t) = [x(t) y(t) z(t)] = T · C = T · M · G

                  | m_11 m_12 m_13 m_14 |   | G_1 |
  = [t^3 t^2 t 1] | m_21 m_22 m_23 m_24 | · | G_2 |
                  | m_31 m_32 m_33 m_34 |   | G_3 |
                  | m_41 m_42 m_43 m_44 |   | G_4 |

B-spline Curves

Cubic B-splines are defined by four control points. They have C^1 and C^2 continuity and come close to, but generally do not interpolate, their control points. The continuity conditions are achieved by sharing control points between segments. A spline with m + 1 control points P_0, P_1, ..., P_m has m − 2 curve segments Q_3, Q_4, ..., Q_m. Curve segment Q_i(t) with a curve parameter t is defined between t_i = i − 3 and t_{i+1} = i − 2 and depends on points P_{i−3}, P_{i−2}, P_{i−1} and P_i. The index i ranges from 3 to m.

T_i = [(t − t_i)^3  (t − t_i)^2  (t − t_i)  1]

           | −1  3 −3  1 |
M_BS = 1/6 |  3 −6  3  0 |
           | −3  0  3  0 |
           |  1  4  1  0 |

G_BS,i = [P_{i−3}  P_{i−2}  P_{i−1}  P_i]^T

Q_i(t) = T_i · M_BS · G_BS,i,   t_i ≤ t < t_{i+1}

5.1.2 Parametric Surfaces

Parametric surfaces are a generalization of parametric curves. By replacing the constant vector coefficients of the geometry matrix G with parametric curves, a surface of two variables is formed.

S_i = [(s − s_i)^3  (s − s_i)^2  (s − s_i)  1]

    | g_11 g_12 g_13 g_14 |
G = | g_21 g_22 g_23 g_24 |
    | g_31 g_32 g_33 g_34 |
    | g_41 g_42 g_43 g_44 |

Q(s, t) = S · M · G · M^T · T^T

B-spline Surfaces

Just as the degree 3 B-spline curve has C^1 and C^2 continuity, the B-spline surface has C^1 and C^2 continuity.
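One segment of the cubic B-spline curve defined above can be evaluated directly from the basis matrix. A sketch for a single scalar coordinate (the same blending weights apply to x, y and z); the matrix follows the standard uniform cubic B-spline convention, including the 1/6 factor:

```python
# Uniform cubic B-spline basis matrix M_BS, with the 1/6 factor folded in.
M_BS = [[-1/6,  3/6, -3/6, 1/6],
        [ 3/6, -6/6,  3/6, 0.0],
        [-3/6,  0.0,  3/6, 0.0],
        [ 1/6,  4/6,  1/6, 0.0]]

def bspline_point(p0, p1, p2, p3, t):
    """Evaluate one scalar cubic B-spline segment Q_i(t) = T_i . M_BS . G_BS,i
    for control values p0..p3 and local parameter 0 <= t <= 1."""
    T = (t ** 3, t ** 2, t, 1.0)
    # T . M_BS gives the four blending weights for the control points;
    # the weights always sum to one (partition of unity).
    w = [sum(T[r] * M_BS[r][c] for r in range(4)) for c in range(4)]
    return w[0] * p0 + w[1] * p1 + w[2] * p2 + w[3] * p3
```

Because the weights sum to one, coincident control points reproduce themselves exactly, and equally spaced collinear control points yield a straight line; both are quick sanity checks on the basis matrix.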
This means its first and second partial derivatives will be continuous. Other parametric surfaces interpolate their control points but they cannot easily provide the required level of continuity. The one variable in fitting a B-spline to the grid of interpolated points is the grid spacing of the points. Using a small grid spacing and a large number of points will result in a surface that has many flat areas corresponding to the planar triangular facets. Using a large grid spacing will result in triangular facets in the TIN that are not represented by any point in the grid.

5.2 Computing Curvature

This section reviews the definition and calculation of curvature [O'N66]. We used mean curvature as our derivative statistic to measure surface accuracy. The shape of a curve can be measured by its curvature and torsion functions. The curvature vector of a point on a curve represents the turning of the curve at that point. The torsion vector of a point on a curve represents the twisting of the curve at that point. The analogous measurement of a surface M is its shape operator S. The shape operator of a point on a surface represents the bending of the surface at that point. Surfaces with the same shape operator are congruent. The shape operator S_p(v) at a point p on the surface is defined for all vectors v tangent to the surface at that point. At each point p on the surface there are two unit normals U and −U and two corresponding shape operators S_p and −S_p. The normal curvature k(u) in the u direction is the dot product of the shape operator and the vector u:

k(u) = S(u) · u

If the normal curvature is positive the surface bends toward the normal. If the normal curvature is negative the surface bends away from the normal. The normal curvature of a sphere with a radius r and an outward oriented normal is −1/r for any vector u at any point.
The minimum and maximum values of the normal curvature at a point are called the principal curvatures k_1 and k_2. The vectors in which these values occur are called the principal vectors. These principal vectors are orthogonal. The Gaussian curvature K is the product of the two principal curvatures. The formula for Gaussian curvature is calculated from constants formed from the first and second partial derivatives of the surface M with parameters s and t at that point:

E = M_s · M_s
F = M_s · M_t
G = M_t · M_t
W = ||M_s × M_t||
U = (M_s × M_t) / W
l = U · M_ss
m = U · M_st
n = U · M_tt

K = (ln − m^2) / (EG − F^2)

The mean curvature is the average of the two principal curvatures. The formula for mean curvature H at a point is defined by the same constants that were used to compute the Gaussian curvature:

H = (Gl + En − 2Fm) / (2(EG − F^2))

These two formulae do not require the calculation of the principal curvatures. It is infeasible to use the shape operator as a measure of surface accuracy because it is an operator, not a single number. But both the Gaussian curvature and the mean curvature are meaningful characteristic values of the shape operator. Gaussian curvature is independent of the orientation of the normal but mean curvature is dependent. Since we wanted a derivative statistic that was dependent on the direction of the normal and since we could easily orient the surface, we chose to use mean curvature.

5.2.1 Computing Curvature on a B-spline Surface

We can easily calculate the first and second partial derivatives at each interpolated test point on the B-spline surface. We can then substitute these values into the above formulae to calculate the mean curvature. We calculate a first partial derivative as an example below.
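The formulae for K and H can be checked numerically. The sketch below is ours, not the thesis code: it approximates the partial derivatives of a parametric surface M(s, t) by central differences and applies the constants above. For a sphere of radius r, the Gaussian curvature should come out as 1/r^2 regardless of the normal's orientation, and the magnitude of the mean curvature as 1/r:

```python
from math import sin, cos

def curvatures(M, s, t, h=1e-4):
    """Gaussian curvature K and mean curvature H of a parametric surface
    M(s, t) -> (x, y, z), using central finite differences for the partials."""
    def add(u, v, a=1.0):
        return tuple(ui + a * vi for ui, vi in zip(u, v))
    def scale(u, a):
        return tuple(a * ui for ui in u)
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

    # First and second partial derivatives by central differences.
    Ms  = scale(add(M(s + h, t), M(s - h, t), -1.0), 1 / (2 * h))
    Mt  = scale(add(M(s, t + h), M(s, t - h), -1.0), 1 / (2 * h))
    Mss = scale(add(add(M(s + h, t), M(s - h, t)), M(s, t), -2.0), 1 / h**2)
    Mtt = scale(add(add(M(s, t + h), M(s, t - h)), M(s, t), -2.0), 1 / h**2)
    Mst = scale(add(add(M(s + h, t + h), M(s - h, t - h)),
                    add(M(s + h, t - h), M(s - h, t + h)), -1.0), 1 / (4 * h**2))

    # The constants E, F, G, W, U, l, m, n from the text.
    E, F, G = dot(Ms, Ms), dot(Ms, Mt), dot(Mt, Mt)
    N = cross(Ms, Mt)
    W = dot(N, N) ** 0.5
    U = scale(N, 1 / W)
    l, m, n = dot(U, Mss), dot(U, Mst), dot(U, Mtt)

    K = (l * n - m * m) / (E * G - F * F)
    H = (G * l + E * n - 2 * F * m) / (2 * (E * G - F * F))
    return K, H
```

Only the absolute value of H is asserted below, since its sign flips with the orientation of the normal implied by the chosen parametrization.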
Q(s, t) = S · M_BS · G · M_BS^T · T^T

Q_s(s, t) = [3s^2  2s  1  0] · M_BS · G · M_BS^T · T^T

Q_s(0, 0) = [0  0  1  0] · M_BS · G · M_BS^T · [0  0  0  1]^T

5.3 Computing Correlation

To evaluate surface accuracy, we computed the linear regression correlation coefficients of the elevations and the mean curvatures at test points on the B-spline surfaces fitted to the original and simplified TINs. The correlation coefficient r [MM89] measures the strength of the linear association between a set of n observations of two variables x and y with means x̄ and ȳ and standard deviations s_x and s_y:

r = (1 / (n − 1)) Σ_i ((x_i − x̄) / s_x) ((y_i − ȳ) / s_y)

The coefficient is a unitless number between −1 and 1. A positive value indicates a positive association and a negative value indicates a negative association. A value of −1 or 1 indicates a perfect negative or positive linear association. The square of the correlation coefficient, r^2, is the fraction of the variation of x that is explained by the least squares regression of y on x. Correlation is a symmetrical relationship so this statement is also true if x and y are interchanged. The slope b of the regression line of y on x can be expressed in terms of the two standard deviations and the correlation coefficient:

b = r (s_y / s_x)

5.3.1 Using Correlation to Evaluate Surface Accuracy

The correlation coefficient only measures linear association. Identical surfaces should not only have a perfect linear association but should also have a regression line passing through the origin with a slope of one. By definition, the regression line passes through the point (x̄, ȳ). The standard deviations s_x and s_y must be equal for a regression line with perfect positive association to have a slope of one. The means x̄ and ȳ must be equal for a regression line with slope one to pass through the origin.
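The correlation coefficient defined above can be computed directly from its definition; a minimal sketch using sample standard deviations:

```python
def correlation(xs, ys):
    """Correlation coefficient r = 1/(n-1) * sum(((x_i - xbar)/s_x) * ((y_i - ybar)/s_y)),
    where s_x and s_y are the sample standard deviations."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sx = (sum((x - xbar) ** 2 for x in xs) / (n - 1)) ** 0.5
    sy = (sum((y - ybar) ** 2 for y in ys) / (n - 1)) ** 0.5
    return sum((x - xbar) * (y - ybar)
               for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)
```

A perfectly linear increasing relationship gives r = 1 and a perfectly linear decreasing relationship gives r = −1, matching the bounds stated above.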
While checking the correlation coefficients, we also checked the difference in means and standard deviations to ensure that these additional conditions were almost met.

5.4 Choosing Study Areas

We wanted to study the effectiveness of the delete simplification methods over a large range of terrain types. Fenneman [Fen28] describes the physiographic divisions of the United States. He lists 25 homogeneous provinces, designated by the initial geological structure, the erosion process and the stage reached in the cycle of changes by erosion. This information captures the history of the terrain as well as the topography. These provinces extend into Canada and Mexico but are specific to North America. Hammond [Ham64] prepares a map of the land form of the United States using indices of four properties. He chooses properties that describe the visual aspects and the land-use possibilities of the surface.

Name       Location                  Hammond         Fenneman
ashby      Ashby Gap, VA             C5              5
belle      Belle Creek, MT-WY        C4              13
camas      Camas, WA-OR              B3 low          24
crater     Crater Lake West, OR      C6              23
eloise     Eloise, FL                A1              3
entriken   Entriken, PA              C5              6
hogans     Hogansburg, NY            B2              7
honolulu   Honolulu, HI              B6 low          -
jabez      Jabez, KY                 C3              11
jackson    Jackson, SC               A2              3
newbrit    New Britain, CO           B4 low/C4 low   9
tiefort    Tiefort Mountains, CA     B5 low          22

Table 5.1: Study Areas

The four properties are frequency of gentle slope, local relief, profile type and surface material. The combination of the indices of these four properties forms a classification system that divides the United States into over 300 regions. These regions each fall into a class defined by the ninety-six combinations of the indices. Twenty-one of the ninety-six classes appear in significant quantity in the United States. This classification system can easily be extended to the land form of other countries with the possibility that new combinations of indices may appear.
Kumler [Kum94] uses these two terrain classification systems to select DLGs that represent the variety of terrain types found in the United States. His 25 study areas cover 18 of 25 Fenneman provinces and 20 of 21 Hammond classes. The 12 DEMs listed in Table 5.1 were created from a subset of these DLGs. The 12 DEM grids were roughly 300 × 450 with a 30 meter grid spacing. A set of 15000 point TINs was formed by taking one random point from each 3 × 3 grid square. Each point was perturbed between 0.00 and 0.99 meters in both x and y coordinates to avoid degenerate cases in the Delaunay triangulation. We defined these TINs to be the original surfaces.

5.5 The Experiment

Each delete simplification method required two parameters. The delete percentage is the maximum percentage of vertices to delete at each iteration of a delete simplification. The stopping percentage is the percentage of vertices remaining in the TIN when the iterations stop. We tested the three delete simplification methods with delete percentages of 10% and 20% and a stopping percentage of 5% over the set of 12 TINs. We plotted the elevation and curvature correlation coefficients against the number of vertices for each TIN with a constant correlation range. The insert simplification method was used for comparison purposes. It was implemented by Will Evans.

Chapter 6

Results

6.1 Speed

We examined the speed of the delete simplification methods by plotting the times required by the two steps in each method. The select time is the time required to select a percentage of vertices to be deleted. The delete time is the time required to delete these vertices and retriangulate their neighbourhoods. We plotted the mean select time and the mean delete time against the number of vertices for each delete simplification method. The mean times were used because we did not feel that the select and delete times depended on the type of terrain. The delete percentage was 20%.
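The delete percentage and stopping percentage described in Section 5.5 drive an iteration that can be sketched as follows. The TIN interface here (`vertex_count`, and the `select` and `delete` callbacks) is hypothetical, standing in for the thesis implementation:

```python
def delete_simplify(tin, select, delete, delete_pct=0.20, stopping_pct=0.05):
    """Sketch of a delete-simplification driver (hypothetical interface).
    select(tin, k) returns up to k mutually nonadjacent vertices to remove;
    delete(tin, vs) removes them and retriangulates their neighbourhoods."""
    original = tin.vertex_count()
    floor = int(stopping_pct * original)   # stop once this many vertices remain
    counts = []
    while tin.vertex_count() > floor:
        # Delete at most delete_pct of the current vertices per iteration,
        # but never drop below the stopping percentage.
        k = min(int(tin.vertex_count() * delete_pct),
                tin.vertex_count() - floor)
        victims = select(tin, k)
        if not victims:
            break
        delete(tin, victims)
        counts.append(tin.vertex_count())
    return counts
```

With a 20% delete percentage and a 5% stopping percentage, a 1000-vertex TIN shrinks to 800 after the first iteration and bottoms out at exactly 50 vertices.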
6.1.1 Select Times

In Figure 6.1, the time to select 20% of n vertices appeared to be linear for all three methods.

Figure 6.1: Time to select 20% of the vertices using the three delete methods

The volume method and normal method select vertices in two steps. The first calculates a "flatness" value for each vertex. The second selects a set of nonadjacent vertices with the best "flatness" values. The size of the set is controlled either by specifying this size or by specifying a threshold "flatness" value. If a threshold "flatness" value is used, each vertex is checked only once. If a vertex v is not adjacent to a selected vertex and the "flatness" value of v is better than the threshold, then v is selected. This method selects a set of vertices from an n point TIN in O(n) running time, but the number of vertices with "flatness" values better than the threshold may vary with the type of terrain. We decided to specify the size of the set of vertices to be selected so that the speed and accuracy of different delete simplification methods and different terrains could be easily compared. A fixed percentage of the current vertices was selected. A vertex is eligible if it has not been selected and it is not adjacent to a selected vertex. This method repeatedly selects the eligible vertex with the best "flatness" value until the specified number of vertices is selected. To quickly find the vertex with the best "flatness" value, we used a heap. It takes O(log n) time to insert a single element into a heap of size n. Since we build the heap by repeatedly inserting elements, the total time to build the heap of n elements is O(n log n). After building the heap, the vertex with the best "flatness" value can be found in O(1) time. After finding the best vertex, it is deleted from the heap. The time to repair the heap after a single element is deleted is O(log n).
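The heap-based selection can be sketched with Python's `heapq`. This is a sketch, not the thesis code: `heapify` builds the heap in O(n) rather than by O(n log n) repeated insertion, which does not change the overall O(n log n) bound of the selection loop; smaller "flatness" values are taken to be better:

```python
import heapq

def select_vertices(flatness, adjacency, count):
    """Select up to `count` mutually nonadjacent vertices, repeatedly taking
    the eligible vertex with the best (smallest) "flatness" value.
    flatness: dict vertex -> value; adjacency: dict vertex -> neighbours."""
    heap = [(value, v) for v, value in flatness.items()]
    heapq.heapify(heap)          # O(n) heap construction
    selected = set()
    blocked = set()              # vertices adjacent to an already selected vertex
    while heap and len(selected) < count:
        _, v = heapq.heappop(heap)   # O(log n) per extraction
        if v in blocked or v in selected:
            continue                 # lazily discard ineligible entries
        selected.add(v)
        blocked.update(adjacency[v])
    return selected
```

For example, on the path 0–1–2–3–4 with flatness values {0: 5, 1: 1, 2: 4, 3: 2, 4: 3}, vertex 1 is taken first, which blocks 0 and 2, so vertex 3 is taken next.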
Therefore, the total cost to find and delete a fixed percentage of vertices from the heap is O(n log n). Since both the heap build and the heap find-and-delete times are O(n log n), the select time must be O(n log n). But both the volume method and the normal method in Figure 6.1 appear to be O(n), indicating that the constants associated with the O(n) time to compute the "flatness" value of each vertex outweigh the constants associated with the O(n log n) time to order and select the vertices using the heap. Even though the time to select vertices appears linear, it is actually O(n log n). In our experiments, the time to select vertices depends on the time to compute the "flatness" values rather than the time to order the "flatness" values. This means that the "flatness" value can affect both the speed and accuracy of the simplification methods. As the number of vertices is increased, the time to select vertices will eventually depend on the O(n log n) time to order the "flatness" values, not the O(n) time to compute the "flatness" values. At this point, the "flatness" value will only affect the accuracy of the simplification method. The degree method selected vertices by calculating the degree of each vertex and selecting vertices less than or equal to a fixed maximum degree. This algorithm is similar to the threshold method described above using the maximum degree as a threshold. Its time cost is O(n).

6.1.2 Delete Times

Figure 6.2: Time to delete 20% of the vertices using the three delete methods

The mean delete times appeared to be linear. The speed of the algorithm we used to retriangulate the neighbourhood of a deleted vertex depended on the degree of the vertex. The average degree of a vertex is less than 6 so the average time to delete a single vertex is O(1) and the time to delete a fixed percentage of vertices is O(n). This is supported by Figure 6.2. The mean delete times were lower than the mean select times.
This indicates that significant improvements to the select speed would result in significant improvements to the overall speed of the simplification method.

6.2 Accuracy

We examined the accuracy of the simplification methods by plotting the correlation coefficients for both elevation and mean curvature against the number of vertices.

6.2.1 Elevation

We plotted the elevation correlation coefficients against the number of vertices in the TIN. We used the logarithm of the difference between the correlation coefficient and one instead of the correlation coefficient. This transformation of the vertical axis provides more detail at correlation values close to one. Figure 6.3 shows a graph using a linear scale and Figure 6.4 shows the transformed scale. The vertical axis ranges from 0.99 to 0.9999, indicating that the correlation of the elevation of the simplified surface and the original surface is good.

Figure 6.3: Linear scale on the elevation correlation axis

With all three delete simplification methods, the correlation coefficients decrease as the number of vertices in the TIN decreases. This was expected, as a TIN with fewer vertices cannot represent all of the detail present in the original TIN. The correlation decreases slowly at first but tends to deteriorate faster when the TIN simplification has fewer than 30% of the original vertices.

Figure 6.4: Logarithmic scale on the elevation correlation axis

The volume method tends to yield better correlation coefficients than the normal method but both are better than the degree method. This is not surprising since the degree method works independently of the z coordinate data. There is a quality versus speed tradeoff between the volume and the normal methods; the normal method works faster but with less accuracy. Changing the delete percentage from 20% to 10% created some interesting effects.
Figure 6.5 shows that the correlation coefficients of the TINs simplified with the degree delete simplification method in 10% steps were slightly worse than the correlation coefficients of the TINs simplified in 20% steps. But the results using the normal method and the volume method are different. Figure 6.6 shows the normal method with 10% steps is initially better than the 20% steps but eventually becomes worse, and Figure 6.7 shows that the volume method with 10% steps is better than the 20% steps at all levels. Selecting a smaller percentage of vertices would increase the accuracy because the selected vertices would have better importance values. But deleting a smaller percentage of vertices results in more simplifications. Errors due to drift can appear because the importance of a vertex is measured in relation to the current simplification of the terrain model instead of the original terrain model.

Figure 6.5: Elevation correlations using the degree method with two delete percentages

Two other studies [Lee91, Kum94] find that the insert simplification method is better than other tested methods at minimizing mean and maximum elevation error. But in Figure 6.8, the performance of the volume delete simplification method is very close to, and sometimes better than, the performance of the insert simplification method. Both methods have O(n log n) running times but the volume method has the advantage of the simplification hierarchy. The terrains with the higher correlations tended to be the terrains with high relief. An example is shown in Figure 6.9. But not all terrains with high relief had high correlations.
This was the only difference observed when evaluating the simplification methods over the different terrains.

Figure 6.6: Elevation correlations using the normal method with two delete percentages

6.2.2 Mean Curvature

We plotted the mean curvature correlation coefficients against the number of vertices in the TIN. Figure 6.10 and Figure 6.11 show the same logarithmic transformation on the vertical axis for the mean curvature correlation coefficients. The scale of the axis had to be changed because the correlation of mean curvature was much lower than the correlation of elevation. Almost every method dropped below 0.9 before the model is simplified to 40% of the original vertices. The correlation coefficients again decreased as the number of vertices decreased but in this instance the decrease was smoother. Unlike the elevation correlations, there was no sudden deterioration at 30% of the original vertices.

Figure 6.7: Elevation correlations using the volume method with two delete percentages

The differences between the volume method and the normal method were much smaller. On some graphs, including Figure 6.12, the normal method even outperformed the volume method. Both methods were much better than the degree method. Changing the delete percentages had a different set of effects than those observed when plotting elevation correlations. The mean curvature correlations of the degree method in Figure 6.13 were slightly better when using a 20% delete percentage compared to a 10% delete percentage. But the 10% delete percentage was better when used with the normal method in Figure 6.14 and the volume method in Figure 6.15. The overall range of mean curvature correlations was lower than expected.
This could be the result of our choice of grid spacing for the control points of the B-spline surface. An alternative approach would be to use natural neighbour interpolation to fit a surface through the vertices of the TIN. Implementing this method would be more complicated but we would avoid the problems of finding an appropriate grid spacing.

Figure 6.8: Comparing the elevation correlations of the insert method and the delete simplification methods

Figure 6.9: A terrain with higher elevation correlations and higher relief

Figure 6.10: Linear scale on the mean curvature correlation axis

Figure 6.11: Logarithmic scale on the mean curvature correlation axis

Figure 6.12: A terrain with better performance when using the normal method

Figure 6.13: Mean curvature correlations using the degree method with two delete percentages

Figure 6.14: Mean curvature correlations using the normal method with two delete percentages

Figure 6.15: Mean curvature correlations using the volume method with two delete percentages

Chapter 7

Conclusion

We described a set of TIN simplification methods that we named delete simplification methods. They allow the use of the triangulation hierarchy introduced by Kirkpatrick and modified by de Berg and Dobrindt to form a TIN with varying levels of detail. We also investigated different variables that can be used to measure surface accuracy.
Acting on the recommendations of Evans [Eva72] and Kumler [Kum94], we chose to use a derivative statistic, mean curvature, in addition to elevation to grade the performance of our simplification methods. Since mean curvature is only defined at a point on a surface with continuous second partial derivatives, we were also required to fit a B-spline surface to a TIN. One variant of the delete simplification methods, the volume method, formed simplifications with elevation and mean curvature correlations equal to or better than the greedy method. It has the added advantage of the hierarchical triangulation mentioned above. The use of mean curvature as a variable to measure surface accuracy did not significantly alter our evaluation of the performance of these simplification methods. However, we recommend that any future comparisons of simplified terrains should be aware of these other surface characterization variables.

Bibliography

[CG89] Zi-Tan Chen and J. Armando Guevara. Systematic selection of very important points (VIP) from digital terrain models for constructing triangular irregular networks. In AUTO-CARTO 8: Proceedings of the Eighth International Symposium on Computer-Assisted Cartography, pages 50-56, 1989.

[dBD95a] Mark de Berg and Katrin Dobrindt. On levels of detail in terrains. Technical report, Utrecht University, 1995.

[dBD95b] Mark de Berg and Katrin Dobrindt. On levels of detail in terrains. In Proceedings of the Eleventh Annual ACM Symposium on Computational Geometry, pages C26-C27, 1995.

[dBvK93] Mark de Berg and Marc van Kreveld. Trekking in the Alps without freezing or getting tired. In Algorithms - ESA '93, Lecture Notes in Computer Science 726, pages 121-132, 1993.

[DFFNP84] Leila De Floriani, Bianca Falcidieno, George Nagy, and Caterina Pienovi. A hierarchical structure for surface approximation. Computers and Graphics, 8(2):183-193, 1984.

[DFMP94] Leila De Floriani, Paolo Marzano, and Enrico Puppo.
Line-of-sight communication on terrain models. International Journal of Geographical Information Systems, 8(4):329-342, 1994.

[DFP92] Leila De Floriani and Enrico Puppo. A hierarchical triangle-based model for terrain description. In Theories and Methods of Spatio-Temporal Reasoning in Geographic Space, Lecture Notes in Computer Science 639, pages 236-251, 1992.

[DLR90] Nira Dyn, David Levin, and Samuel Rippa. Data dependent triangulations for piecewise linear interpolation. IMA Journal of Numerical Analysis, 10(1):137-154, 1990.

[DP73] David H. Douglas and Thomas K. Peucker. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. The Canadian Cartographer, 10(2):112-122, 1973.

[Eva72] Ian S. Evans. General geomorphometry, derivatives of altitude, and descriptive statistics. In Spatial Analysis in Geomorphology. Methuen & Co Ltd, 1972.

[Fen28] Nevin M. Fenneman. Physiographic divisions of the United States. Annals of the Association of American Geographers, 18(4):261-353, 1928.

[Fis94] Peter Fisher. Stretching the viewshed. In Proceedings of the Sixth International Symposium on Spatial Data Handling, volume 2, pages 725-738, 1994.

[FL79] Robert J. Fowler and James J. Little. Automatic extraction of irregular network digital terrain models. In Computer Graphics: ACM SIGGRAPH '79 Proceedings, pages 199-207, 1979.

[FvDFH90] James D. Foley, Andries van Dam, Steven K. Feiner, and John F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley Publishing Company, 1990.

[GG79] Dora Gomez and Adolfo Guzman. Digital model for three-dimensional surface representation. Geo-Processing, 1(1):53-70, 1979.

[GH95] Michael Garland and Paul S. Heckbert. Fast polygonal approximation of terrains and height fields. Technical report, Carnegie Mellon University, 1995.

[Ham64] Edwin H. Hammond. Analysis of properties in land form geography: An application to broad-scale land form mapping.
Annals of the Association of American Geographers, 54(1):11-19, 1964.

[Kir83] David Kirkpatrick. Optimal search in planar subdivisions. SIAM Journal on Computing, 12(1):28-35, 1983.

[Kum94] Mark Kumler. An intensive comparison of Triangulated Irregular Networks (TINs) and Digital Elevation Models (DEMs). Cartographica, 31(2), 1994.

[Law72] Charles L. Lawson. Transforming triangulations. Discrete Mathematics, 3:365-372, 1972.

[Lee89] Jay Lee. A drop heuristic conversion method for extracting irregular network for digital elevation models. In GIS/LIS '89 Proceedings, volume 1, pages 30-39, 1989.

[Lee91] Jay Lee. Comparison of existing methods for building triangular irregular network models of terrain from grid digital elevation models. International Journal of Geographical Information Systems, 5(3):267-285, 1991.

[LS80] D. T. Lee and B. J. Schachter. Two algorithms for constructing a Delaunay triangulation. International Journal of Computer and Information Sciences, 9:219-242, 1980.

[Mar78] David M. Mark. Concepts of "data structure" for digital terrain models. In Proceedings of the Digital Terrain Models (DTM) Symposium, pages 24-31, 1978.

[Mid94] Terje Midtbø. Removing points from a Delaunay triangulation. In Proceedings of the Sixth International Symposium on Spatial Data Handling, volume 2, pages 739-750, 1994.

[MM89] David S. Moore and George P. McCabe. Introduction to the Practice of Statistics. W. H. Freeman and Company, 1989.

[Mok90] Farzin Mokhtarian. A Theory of Multi-Scale, Curvature and Torsion Based Shape Representation for Planar and Space Curves. PhD thesis, University of British Columbia, 1990.

[O'N66] Barrett O'Neill. Elementary Differential Geometry. Academic Press, 1966.

[PFLM78] Thomas K. Peucker, Robert J. Fowler, James J. Little, and David M. Mark. The triangulated irregular network. In Proceedings of the Digital Terrain Models (DTM) Symposium, pages 516-532, 1978.

[PS85] Franco P. Preparata and Michael Ian Shamos.
Computational Geometry: An Introduction. Springer-Verlag, 1985.

[SP90] Lori Scarlatos and Theo Pavlidis. Hierarchical triangulation using terrain features. In Proceedings of the First IEEE Conference on Visualization, pages 168-175, 1990.