UBC Theses and Dissertations

Computer Assisted 3D Craniofacial Reconstruction. Bullock, William David, 1999.


Full Text

Computer Assisted 3D Craniofacial Reconstruction

by David William Bullock
B.Sc. (Hon.), Simon Fraser University, 1996

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

The University of British Columbia
October 1999
© David William Bullock, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
2366 Main Mall
Vancouver, BC, Canada V6T 1Z4

Abstract

This thesis describes a methodology for forensic craniofacial reconstruction, defined as the reproduction of a subject's facial appearance from the skull and mandible for the purposes of identification. The focus of this work is on reproducing three-dimensional facial shape from the shape of the skull and mandible of a subject. This task can be broken into two parts. First, a technique for determining soft tissue depth at any point on the skull and mandible surface is presented. Second, two complementary methods of producing three-dimensional facial shape are presented: a method which generates a surface using the skull and mandible as the skeleton of an implicit surface, and a method which produces a hierarchical B-spline surface fit to facial surface locations.
Surface material properties and facial features can then be added to and modified on these models before being rendered and distributed. Results of these processes are presented.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Previous Work
  1.2 The Reconstruction Pipeline
2 Interpolating Tissue Depths
  2.1 Formalizing the Reconstruction Process
  2.2 Interpolating Tissue Depth
3 Reconstructing Facial Models with Implicit Surfaces
  3.1 Defining an Implicit Surface
  3.2 Polygonizing the Isosurface
    3.2.1 Marching Cubes
    3.2.2 Bounding Box Optimization
    3.2.3 Min-Max Optimization
  3.3 Problems and Solutions
  3.4 Results
4 Reconstructing Facial Models with Hierarchical B-Splines
  4.1 Overview of Hierarchical B-Spline Surfaces
  4.2 Fitting Hierarchical B-Spline Surfaces
  4.3 Adding Facial Features
  4.4 Results
5 Conclusion
  5.1 Results
  5.2 Summary
  5.3 Future Work
Bibliography
Appendix A Tissue Thickness Measurements

List of Tables

A.1 Tissue thickness measurements (in mm), for American Caucasoids
A.2 Tissue thickness measurements (in mm), for American Blacks
A.3 Tissue thickness measurements (in mm), for Southwestern Indians

List of Figures

1.1 Standard skull and mandible landmarks
1.2 Digitized skull
2.1 Standard dowel network
2.2 A cutting plane between two dowels
2.3 Barycentric interpolation technique on two dowel areas
2.4 Interpolation method applied to skull
3.1 Single triangle implicit surface
3.2 Multiple triangle implicit surface
3.3 Marching Cubes cases
3.4 Emitter cache
3.5 Original and filled skull for varying t
3.6 Polygon-reduced implicit surfaces
3.7 The final implicit surface
4.1 Local refinement
4.2 Generic face hierarchical B-spline surface
4.3 H-spline surface fit to standard dowel locations
4.4 H-spline surface fit to interpolated dowels
5.1 Results of the implicit surface and hierarchical B-spline fitting techniques
5.2 Implicit surface method results for different weight types
5.3 Hierarchical B-spline fitting method results for different weight types

Acknowledgements

I would like to thank the following people for their help and support while completing my thesis:

• Jack Snoeyink, for accepting me as a student so late in my degree, his help in developing the material, and quick, valuable feedback on my drafts;

• David Forsey, for all his help and for making the effort to meet me after moving to industry;

• Alain Fournier, for his feedback on my thesis, though this is just a small part of all he's done to help me;

• Bill Gates, Robert Scharein, Lisa Streit, Roger Tam, and Robert Walker for their reviews of my thesis drafts, and thanks to them again, plus Paul Fearing and Gene Lee, for offers of couches to sleep on; the moral support from Imager lab members has been almost overwhelming during my efforts to complete my thesis;

• Donald and Florence Bullock, my parents, not only for their support through
this degree, but for instilling a love for learning throughout my whole life; and

• Tracey Chew. At the best of times I could not possibly convey what your love and support have meant to me over the last two years, let alone after being awake for 50 hours.

DAVID WILLIAM BULLOCK
The University of British Columbia
October 1999

Chapter 1

Introduction

This thesis describes a methodology for forensic craniofacial reconstruction, the reproduction of human facial appearance from the skull and mandible. We will focus on reproducing 3D facial shape from the 3D shape of the skull.

Our goal is to perform a task that is commonly executed in forensic craniofacial reconstruction: accurately reconstruct the look and shape of a human head from a skull and mandible. We do not wish to replace the forensic artist; the long-term goal of this project is to provide the practical foundation for tools which forensic artists could use to produce facial reconstructions. Section 1.1 provides background information and previous work on this subject. Section 1.2 describes the reconstruction process, in both the traditional and computational settings, to provide a context for this work. This is followed by Chapter 2, which outlines a method for interpolating tissue depth given any location on the skull or mandible; this fundamental concept is used throughout the remainder of the thesis. Chapters 3 and 4 each describe a different method of reconstructing facial shape, using implicit surface and hierarchical B-spline techniques, respectively. Final conclusions and future work are provided in Chapter 5.

1.1 Previous Work

Craniofacial reconstruction has been used for over a century to identify remains and to reconstruct busts of historical figures, in archaeological and anthropological applications and for identification of human remains in the forensic sciences. It is the latter application that this work focuses on.
Forensic craniofacial reconstructions are generated from the remains of a subject in an attempt to acquire a likeness of the subject when alive. The reconstruction is then used to attain leads which, in turn, will hopefully result in the positive identification of the subject in conjunction with other techniques such as dental records [NM98]. For the remainder of this thesis, craniofacial reconstruction and facial reconstruction are used to refer to "the scientific art of visualizing faces on skulls for the purpose of individual identification" [Geo87].

There are four general techniques used for facial surface reconstruction: (i) two-dimensional artistic techniques, (ii) photographic or video super-imposition, (iii) three-dimensional facial reconstruction, and (iv) volumetric deformation techniques [NM98]. Techniques (i), (ii), and (iii) rely on associating skull and mandible landmark locations¹ with facial soft tissue locations. These landmark tissue depths are acquired experimentally based on race, age, gender, and body type using a variety of techniques [QCS+97, KI86, Geo93]. Refer to Figure 1.1 and Table A.1 for the standard landmark locations and tables of tissue depths used throughout this thesis.

¹These are the standard landmarks from [Rhi84, KI86]. Landmark definitions and locations differ slightly from study to study.

Figure 1.1: Standard skull and mandible landmarks (from [Rhi84]).

Technique (iv) uses a variety of volumetric deformation techniques to transform
For a complete discussion of all these tech-niques see [TEC+97, HRPM93 , NM98, KI86, Ste79]. Computer-aided facial reconstruction has been growing in popularity over the last decade in both two dimensional and three dimensional domains. Volumetric deformation methods of technique (iy) are inherently computational. Previous work on computer-aided variants of technique (iii) can be found in [Arc97, VBL+89]. It is hoped that tools used to produce reconstructions will eventually allow forensic artists to produce reconstructions in less time and with greater flexibility in presentation than current methods. As well, we wish to provide an easier means of producing alternative reconstructions when factors such as subject race, weight, and facial features are not known. 1.2 The Reconstruction Pipeline This work focuses on the reproduction of facial shape. The reconstruction process begins with a subject's skull and mandible from which we wish to ascertain the subject's pre-death appearance. Forensic scientists use clues from the scene, skull shape, and local demographic information to deduce the subject's gender, race, and approximate weight and age. We will first describe the standardized Krogman method used to produce a facial reconstruction on a physical skull and mandible followed by a computa-tional process to accomplish the same task. In the Krogman method, dowels are cut and placed on the standard skull and mandible landmark positions. The length of each dowel is determined by charts of soft tissue thickness indexed by the sub-ject's gender, race, and weight. Next, strips of sculpting material such as clay are placed between the dowels to interpolate the soft tissue thickness between adjacent landmarks. Strips are further placed in a systematic fashion between landmarks or strips until there are only small gaps in the surface. These gaps are then filled in and the whole surface is smoothed to produce the final facial surface. 
Additional facial features such as hair are added to complete the model.

To produce a computational reconstruction, the skull and mandible are digitized by laser scanning, computed tomography (CT) scans, magnetic resonance imaging (MRI) scans, or other 3D digitization techniques. This produces a full three-dimensional polygonal model of the skull and mandible which is used as the basis of the reconstruction. Figure 1.2 shows the skull used for reconstruction throughout this thesis. This is the skull of an average-weight male caucasoid.

Figure 1.2: Digitized skull.

The next step is to produce a representation of the facial soft tissue. First, virtual dowels are placed on the polygonal skull model at standard landmarks. Again, the length of each dowel is dictated by experimentally determined charts of soft tissue depths for the standard landmarks. Then, a pre-death facial model is produced in the form of a polygonal model using processes such as those discussed in Chapters 2, 3, and 4. The interpolation between dowel locations will be more consistent and less prone to human error with this computational model.

After the facial shape is obtained, the facial model can be exported to a commercial 3D animation package where material properties can be applied to the skin. Hair and ears can be added to produce a more realistic approximation of the subject's appearance. Animations of the reconstruction could then be used and distributed for the purpose of identification. As well, there will be a practical means of distributing models in 3D form in the future as conventions such as VRML mature and become more widely available. Alternatively, static images of the reconstructed face can be produced. These images could then be coloured and enhanced by a forensic artist using professional digital paint programs.

Chapter 2

Interpolating Tissue Depths

This chapter addresses the problem of determining tissue depth in regions between dowels.
The first section formalizes terms needed in the reconstruction process; the next section describes how to calculate the tissue depth of an arbitrary point.

2.1 Formalizing the Reconstruction Process

We will now provide a formalization of the techniques used for craniofacial reconstruction as described in Section 1.2. Three-dimensional craniofacial reconstruction starts with the placement of dowels of set length (Tables A.1, A.2, and A.3) at standard facial landmark locations (Figure 1.1). The computational reconstruction system simulates the process of physically placing dowels on the skull and mandible by allowing the user to attach virtual dowels to the skull model. By default, dowel direction is normal to the surface it is placed on, but it can be adjusted.

Definition 2.1 A dowel d_i is defined as a tuple (p_i, n_i, ℓ_i), where p_i is the base, n_i is the direction in the form of a unit vector, and ℓ_i is the length of d_i.

The endpoint of a dowel can be calculated as p_i + ℓ_i n_i.

In the standardized Krogman method, the forensic artist fills in the facial skin once these dowels have been placed. The forensic artist runs modelling material such as clay from dowel to dowel in a standard network and then fills in the areas of this network to produce a smooth surface. These areas are filled in using standard rules for interpolation and the forensic artist's knowledge of facial form. The system of strips is used to reduce the subjectivity of the interpolation. Since tissue depths can be attained computationally, and therefore more objectively, a global interpolation method based on this network of dowels is provided. This network is formalized as follows:

Definition 2.2 A dowel area is an area of influence defined for three dowels. Tissue depths in the interior of a given dowel area are an interpolation of the three dowel tissue depths.

Definition 2.3 A dowel network is a collection of dowel areas incident on a set of dowels.

The dowel network is standardized and shown in Figure 2.1.
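Definition 2.1 translates directly into code. The following is an illustrative sketch, not part of the thesis; the names `Dowel` and `endpoint` are ours:

```python
import numpy as np

class Dowel:
    """A dowel (p, n, l): base point, unit direction, and tissue depth."""

    def __init__(self, base, direction, length):
        self.p = np.asarray(base, dtype=float)
        n = np.asarray(direction, dtype=float)
        self.n = n / np.linalg.norm(n)   # enforce the unit-vector requirement
        self.l = float(length)

    def endpoint(self):
        """Soft-tissue surface point implied by the dowel: p + l*n."""
        return self.p + self.l * self.n

# A dowel at the origin pointing along +z with 4.25 mm of tissue depth:
d = Dowel([0.0, 0.0, 0.0], [0.0, 0.0, 2.0], 4.25)
print(d.endpoint())   # -> [0.   0.   4.25]
```

Note that the direction is normalized on construction, so the non-unit input `[0, 0, 2]` still yields an endpoint 4.25 units above the base.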
2.2 Interpolating Tissue Depth

Given a point p on the skull model, the tissue depth t associated with p must be determined. The first step is to determine which dowel area p belongs to. That dowel area is then used to interpolate the tissue depth at p. The soft tissue surface can be thought of as sitting at p + t n, where n is the unit normal vector at p. The normal n can be calculated either by using the skull model surface normal at p or by interpolating the dowel directions as well.

Figure 2.1: Standard dowel network (from [Rhi84], modified).

We will start with a few definitions before defining which dowel area a point belongs to.

Definition 2.4 A cutting plane is defined as a plane that divides adjacent dowel areas (i.e., dowel areas that share a common edge). If d_i and d_j are the dowels shared between two adjacent dowel areas, the cutting plane between these dowel areas is defined to be the plane that is parallel to n_i + n_j and contains points p_i and p_j (p_i and n_i are defined on d_i as described in Section 2.1).

Figure 2.2 provides a visual example of a cutting plane between two dowels.

Figure 2.2: A cutting plane between two dowels.

The distance of p to a dowel area a is defined as the shortest distance between p and the plane prescribed by the base points of the three dowels of a. We say p is inside a cutting plane of a if p lies in the same half-space of the cutting plane as the points defining a. We say p belongs to a dowel area a when the following conditions are met:

1. p lies inside each of the three cutting planes of a; and
2. the distance of p to a is less than the distance of p to any other dowel area that meets condition 1.

Once we determine that p belongs to some dowel area a, we calculate the tissue depth as an interpolation of the three dowels of a: d_1, d_2, and d_3. The interpolation is conducted on the plane prescribed by p_1, p_2, and p_3 (the bases of d_1, d_2, and d_3, respectively); we will call this the interpolation plane.
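The cutting-plane construction and half-space test of Definition 2.4 can be sketched as follows. This is our own reading of the definition (a plane parallel to n_i + n_j through both bases has normal (n_i + n_j) × (p_j − p_i)); the function names are ours:

```python
import numpy as np

def cutting_plane(p_i, n_i, p_j, n_j):
    """Plane parallel to n_i + n_j containing bases p_i and p_j.

    Returned as (unit normal, point on plane). The normal is perpendicular
    to both the summed dowel directions and the base-to-base edge.
    """
    normal = np.cross(n_i + n_j, p_j - p_i)
    return normal / np.linalg.norm(normal), p_i

def same_side(plane, p, reference):
    """True if p lies in the same half-space of the plane as `reference`."""
    normal, origin = plane
    return np.dot(normal, p - origin) * np.dot(normal, reference - origin) >= 0.0

# Two dowels with bases on the x-axis, both pointing along +z: the cutting
# plane is the xz-plane through them, so points with opposite signs of y
# fall in different half-spaces.
pi, pj = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
nz = np.array([0.0, 0.0, 1.0])
plane = cutting_plane(pi, nz, pj, nz)
print(same_side(plane, np.array([0.5, 1.0, 0.0]), np.array([0.5, 2.0, 0.0])))   # True
print(same_side(plane, np.array([0.5, -1.0, 0.0]), np.array([0.5, 2.0, 0.0])))  # False
```

Condition 1 of the membership test then amounts to calling `same_side` once per cutting plane of the candidate dowel area, with any of the area's defining base points as the reference.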
The projection p′ of point p is calculated first. Then, the tissue depth of p is calculated using barycentric interpolation in the triangle △(p_1, p_2, p_3), as described below.

Dowel area a contains three cutting planes. Cutting plane c_{1,2} is the cutting plane defined on dowels d_1 and d_2; cutting planes c_{2,3} and c_{3,1} are similarly defined. We can formulate the cutting planes using the standard plane equations as follows:

A_{1,2} x + B_{1,2} y + C_{1,2} z + D_{1,2} = 0
A_{2,3} x + B_{2,3} y + C_{2,3} z + D_{2,3} = 0
A_{3,1} x + B_{3,1} y + C_{3,1} z + D_{3,1} = 0

or, in matrix form,

M (x, y, z)^T + (D_{1,2}, D_{2,3}, D_{3,1})^T = 0,  where M = [ A_{1,2} B_{1,2} C_{1,2} ; A_{2,3} B_{2,3} C_{2,3} ; A_{3,1} B_{3,1} C_{3,1} ].

If det M = 0 then all three dowel normals are parallel¹. In this case p′, the projection of p on the interpolation plane, will be the intersection of the line through p and p + n (where n is the common dowel direction) with the interpolation triangle. Otherwise, det M ≠ 0 and the cutting planes intersect at a point

p_c = M⁻¹ (−D_{1,2}, −D_{2,3}, −D_{3,1})^T.

¹Note that if all dowel normals are n, then the three plane normals are (2n) × (p_2 − p_1), (2n) × (p_3 − p_2), and (2n) × (p_1 − p_3), so the plane normals are coplanar and det M = 0.

Figure 2.3: Barycentric interpolation technique on two dowel areas. Interpolated tissue depths of the four dowels (represented as narrow cones) are added to the locations of the points on the plane to produce a new surface. The regions of influence of the two dowel areas are evident.

This p_c will be used as a center of projection. The intersection of the line through p and p_c with △(p_1, p_2, p_3) will be the interpolation point p′. Once we have p′, we calculate the tissue depth using barycentric interpolation:

t = u t_1 + v t_2 + w t_3

where

u = Area(p′, p_2, p_3) / Area(p_1, p_2, p_3)
v = Area(p_1, p′, p_3) / Area(p_1, p_2, p_3)
w = Area(p_1, p_2, p′) / Area(p_1, p_2, p_3).

One can see how this interpolation works on a plane in Figure 2.3.
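The barycentric step above can be sketched directly from the area ratios; this is an illustrative implementation with names of our choosing:

```python
import numpy as np

def tri_area(a, b, c):
    """Area of triangle (a, b, c) in 3D."""
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def interpolate_depth(p_proj, bases, depths):
    """Barycentric interpolation of tissue depth at projected point p'.

    bases  = (p1, p2, p3): dowel base points spanning the interpolation plane
    depths = (t1, t2, t3): tissue depths of the three dowels
    """
    p1, p2, p3 = (np.asarray(b, dtype=float) for b in bases)
    total = tri_area(p1, p2, p3)
    u = tri_area(p_proj, p2, p3) / total
    v = tri_area(p1, p_proj, p3) / total
    w = tri_area(p1, p2, p_proj) / total
    return u * depths[0] + v * depths[1] + w * depths[2]

# Depth at the centroid is the mean of the three dowel depths:
tri = (np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0]))
centroid = sum(tri) / 3.0
print(interpolate_depth(centroid, tri, (3.0, 6.0, 9.0)))  # -> 6.0
```

At a dowel base the weights collapse to (1, 0, 0), so the interpolated depth reproduces that dowel's measured depth exactly, as expected of barycentric interpolation.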
S h o w n is (a) the o r i g i n a l s k u l l , (b) the s k u l l w i t h t h e t i s sue d e p t h s a d d e d t o each p o i n t o f t h e s k u l l , a n d (c) the sur face p r o d u c e d by e x t r a c t i n g those faces f r o m the s k u l l t h a t were effected. F i n a l l y , we c a n a p p l y t h i s i n t e r p o l a t i o n to the fu l l s k u l l m o d e l . G i v e n a p o i n t Pi w i t h n o r m a l d i r e c t i o n ?!; a n d i n t e r p o l a t e d t i ssue d e p t h we c a n m o v e th i s p o i n t by a d e l t a F i g u r e 2.4 (b) s h o w s us a s k u l l m o d e l w i t h these i n t e r p o l a t e d t i ssue d e p t h s a d d e d as a d e l t a t o each p o i n t . W e c o m p u t e a new p o i n t pfew = pi + £ini for each pt. S e p a r a t i n g t h i s d e l t a mesh f r o m the o r i g i n a l s k u l l m o d e l c l e a r l y d i s p l a y s h o w these d e l t a s affect t h e m e s h ; see 2.4 (c ) . N o w t h a t we c a n i n t e r p o l a t e t i ssue d e p t h a t a n y l o c a t i o n o f the s k u l l , we m u s t p r o v i d e a s m o o t h a n d a c c u r a t e sur face w h i c h i n t e r p o l a t e s these soft t i ssue l o c a t i o n s . T h i s is d e m o n s t r a t e d in C h a p t e r s 3 a n d 4 . 13 Chapter 3 Reconstructing Facial Models wi th Implicit Surfaces In this chapter we describe how to reconstruct the facial model with the use of implicit surfaces. We are attempting to produce a facial model that sits a given dis-tance (i.e., tissue depth) from the skull model. The facial model must not contain the noise and sharp corners of the polygonal skull model. Emission-based implicit surface modelling techniques seem well suited to solving this problem; a set of skele-ton emitters generally produce a smooth surface, even for jagged skeletons. In Section 3.1 we show how to define an implicit surface for the purposes of facial reconstruction. 
In Section 3.2 we discuss a standard method used to polygonize the implicit surface and extensions to it to speed the polygonization process. Next, Section 3.3 describes problems with the implicit surface technique and solutions to these problems. Finally, Section 3.4 shows results using this technique. 14 Figure 3.1: A triangle and the implicit surface produced from it. 3.1 Defining an Implicit Surface First we define an implicit surface. Given a function f(p) defined on 3-dimensional Euclidean space, an implicit surface is the isosurface associated with a particular real value of / . If we want the isosurface at value u, we are interested in the surface produced by the union of the points where f(p) = v. The skull model is used as the implicit surface skeleton. Each polygon i will emit a tissue depth £; calculated as the average tissue depth of the polygon points. Tissue depth of the polygon points is calculated as described in Chapter 2. The field of an emitter 1 is defined for polygon i as 0 if p is behind polygon i ei(p) (3.1) [-4)* otherwise where d is the shortest distance between p and polygon i , and is a constant. The isosurface of this single emitter is calculated at e,(/j) = 1 with e8(p) > 1 inside and e (^p) < 1 outside the object. The interior of the object produced by the ^ h e terms emitter and skeleton are used interchangeably in implicit surface literature. Though the term skeleton is more commonly used, emitter produces less confusion given the context of this thesis. 15 (a) (b) ( t ) Figure 3.2: Multiple triangle implicit surface. The top two rows depict the implicit surface produced by two adjacent triangles. The bottom row depicts the implicit surface cross section of three adjacent triangles. For column (a) k = 3.0, (b) k = 6.0, and (c) k = 15.0. polygon is the volume union of the polygon extruded by in its facing direction, cylinders along the edges, and balls on the points of radius l± as seen in Figure 3.1. 
The volume is halved along the plane of the emitter polygon. The sum of the emitters at a point p produces the total field function e as i with the isosurface taken at ei(p) = 0; a threshold value of 0 is used to follow conven-tion. This summing of emitter values produces a blending between the individual isosurfaces. The constant k encapsulates the idea of locality of emitter influence. The higher the value of k, the faster e2(p) decreases outside distance £{, and there-fore the less e,-(p) can contribute to the field past a distance £$. As k increases, the resulting isosurface comes closer to the union of the isosurfaces of the individual 16 emitters. Figure 3.2 shows an example of two emitters with different values of k. Note that as k is increases from 3.0 in 3.2(a) to 15.0 in 3.2(c) that the center bulge where the two emitters meet is reduced. 3.2 Polygonizing Isosurface For the purpose of forensic reconstruction we need to convert this implicit function to an editable and renderable form. Given an implicit function, a polygonal surface can be produced at the desired isosurface level; the subsystem that accomplishes this task will be referred to as the polygonizer. There are a number of techniques that can be used to accomplish this including Marching Cubes (MC) [LC87] (and variants), and Shrinkwrap [vOW93]. While these techniques work well there are some problems for this partic-ular application. The Shrinkwrap technique works on only one surface at a time. Although we only wish to produce one full facial model, incomplete and segmented models are sometimes produced during the reconstruction process due to insufficient sample rates, small tissue depths, and interior disjoint elements that the skull and mandible models can contain. As well, due to the nature of the implicit surfaces, an inside and outside model is produced; one of the models is the desired skin model while the other is produced by the back-faces of the emitters. 
Changes in topology also add complexity to the Shrinkwrap algorithm.

There are problems with using traditional MC as well. The ratio of tissue depth to maximal skull extent can reach levels of two hundred to one. A sampling rate of at least this ratio is required in each extent of the marching volume in order to ensure that no surface is skipped over. Thus, MC would have to check n³ = 200³ = 8 million cubes during the march. Even worse, all emitters need to be evaluated at each corner of the cubes. This would be a prohibitive amount of work. To reduce the amount of work required to produce the polygonized implicit surface, two extensions to the MC algorithm will be presented.

Traditionally, the isosurface generator can only obtain information from the field emitters in the form of a function f : ℝ³ → ℝ that evaluates the implicit function at a given point. This is a nice separation in terms of design; it provides a simple interface between the implicit surface evaluator and the polygonizer. In addition, isosurface polygonization algorithms that use such an interface also work on volumetric data. However, the work required for a march can be reduced by allowing the isosurface generator to query the individual field emitters. An additional interface is provided between the implicit function evaluator and the polygonizer. The polygonizer evaluates each of the individual emitters and is responsible for summing them up. As well, the polygonizer can ask for additional information from each emitter, as described in Sections 3.2.2 and 3.2.3. An octree data structure [Sam89, pp. 1-9] is used to keep track of the spatial influence of each emitter and to query the value of the implicit function.

Definition 3.1 An octree is a recursive spatial tree structure defined on a rectilinear bounding volume v. Each node represents a volume; the root of the tree represents all of v. Each internal node has eight children which contain each of the eight equal-size octants of the parent node's volume.

The MC algorithm and optimizations to MC are presented in the following sections.
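The emitter field of Equation 3.1 and the shifted sum of Section 3.1 can be sketched as follows. For brevity the sketch uses point emitters rather than polygons, so `d` is a point-to-point distance; that simplification and all names are ours:

```python
import numpy as np

def emitter_field(p, base, normal, depth, k):
    """Field of one emitter: (depth/d)^k in front of the emitter, 0 behind.

    Simplification: the emitter is a point `base` facing along `normal`,
    rather than a polygon as in the thesis.
    """
    p, base = np.asarray(p, dtype=float), np.asarray(base, dtype=float)
    if np.dot(p - base, normal) < 0.0:      # p is behind the emitter
        return 0.0
    d = np.linalg.norm(p - base)
    return float('inf') if d == 0.0 else (depth / d) ** k

def total_field(p, emitters, k):
    """Summed field shifted by the threshold, so the isosurface sits at 0."""
    return sum(emitter_field(p, b, n, l, k) for b, n, l in emitters) - 1.0

# A single emitter of depth 2 pointing along +z: the field crosses the
# isosurface exactly 2 units in front of it.
em = [(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 2.0)]
print(total_field([0.0, 0.0, 2.0], em, k=6))       # -> 0.0 (on the isosurface)
print(total_field([0.0, 0.0, 1.0], em, k=6) > 0)   # True: inside
print(total_field([0.0, 0.0, 4.0], em, k=6) < 0)   # True: outside
```

Raising k makes the falloff beyond distance ℓ steeper, which is exactly the locality behaviour described for Figure 3.2.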
Each internal node has eight children which contain each of the eight equal size octants of the parent node's volume. The M C algorithm and optimizations to M C are presented in the following sections. 18 Figure 3.3: Marching Cubes cases. Shown are the 15 cases for cubes in the Marching Cubes algorithm; all possible combinations of inside/outside corners are rotation and inverse equivalent to these cases. Black corners represent points inside the object, white corners outside the object. 3.2.1 M a r c h i n g C u b e s The standard M C algorithm will be presented in this section. Before the "march" 2 is conducted, a rectilinear volume is calculated based on the maximal extents of the surface we wish to produce. This volume is partitioned into n rectilinear boxes in each dimension to produce a three dimensional matrix of n 3 boxes. To polygonize the surface, each box is sequentially evaluated as follows. Each of the corners of the box is evaluated using the emission function to determine whether the corner is inside or outside the object. There exist 2 8 = 256 possible combinations of inside/outside corners on the 8 corners a box. These 256 variations 2 The term "march" will be used to refer to the process of polygonizing the surface using the M C algorithm or one of its variants. 19 are rotation and inverse equivalent to the 15 cases shown in Figure 3.3. A box contributes polygons to the isosurface mesh based on which case the box is equivalent to; no polygons are added if all points are inside or outside the object (i.e., case 0). The actual location of the point on an edge of the box can be calculated by linear interpolation of the corner values or by binary search of the cross-over point; the latter technique is used for this application. In order to avoid recalculating the Values at the box corners and edge point locations of adjacent boxes, an n2 level of boxes is kept in memory during the march. The march is conducted one level at a time. 
While marching a level, point values and edge points can be stored and reused. When the level is completed, the corner values and edge points adjacent to the next n² level of boxes are retained.

3.2.2 Bounding Box Optimization

In this optimization the polygonizer is allowed to query each emitter for a bounding box of influence. This bounding box is considered to be the maximal volume of influence of the emitter; emitters are assumed to provide no field strength outside this volume.

While emitter field strength never reaches zero, we can choose an arbitrarily small maximal contribution an emitter can make by increasing the size of the bounding box surrounding it. A distance d is chosen such that it would take m emitters to produce a surface at distance d. By the definition of emitter strength,

m (ℓ/d)^k = 1  ⟹  d = ℓ m^(1/k),

where ℓ and k are defined for the emitter; we set m = 500 for the purposes of this application. This d is then used to calculate the emitter bounding box, which in turn is used in the octree data structure.

Definition 3.2 An MCOctree is an octree whose nodes contain a list of emitters incident on the node volume (i.e., whose bounding boxes intersect the node volume).

An MCOctree can be constructed from a given list of emitters and a bounding volume. The initial bounding volume is computed from the extents of the emitter bounding boxes. A node in the octree is only expanded during octree construction if the node's list of emitters is greater than 1 and the depth of the node is less than ⌈log₂ n⌉, where n is the sampling rate of the march. This rule for expansion works for depths of up to eight; tree memory usage beyond this point is prohibitive, even if intersecting emitter lists are kept only at the leaves and bounding volumes are implicitly stated. Since MC only requires that one n² level of cubes be in memory at any time during the march, only the nodes that intersect the current level of the march are expanded, in order to preserve memory. As we move from one n² level to the next, the octree is updated, with nodes outside the level deleted and nodes inside expanded.

Once the octree is constructed, a point can be evaluated by traversing the tree until the leaf that contains the point is found. The emitters for the leaf are evaluated and summed at the point and used as the field strength. If there are no emitters in the leaf, then 0 is returned.

If we could attain bounding boxes of the interior of the emitters, the MCOctree could be constructed so that any node volume entirely covered by the interior of emitters would not be expanded. No surface would be produced in this volume, since the emitter value would always be larger than the threshold inside the inner emitter bounding box. This further improvement to the algorithm would produce no gain for this application, as the emitter size is small relative to the smallest node volume.
Given a rectilinear bounding volume v, these maximum and minimum values can be calculated with the functions maxᵢ(v) and minᵢ(v) respectively as follows:

maxᵢ(v) = e_max if v intersects emitter polygon i; eᵢ(p_closest) otherwise
minᵢ(v) = min{ eᵢ(p) : p is a corner of bounding box v }

where p_closest is the closest point in v to the emitter polygon for emitter i. The maximal value will be capped at some e_max ≫ 1. Any value smaller than some e_min will be considered to be equivalent to zero and therefore not to affect the volume. We can now define a new octree with a new expansion rule.

Definition 3.3 A MinMax MCOctree is an octree whose nodes contain a list of emitters whose maximal contribution is at least e_min for the node. During tree construction, a node with volume v and emitter list L is only expanded if Σ_{i∈L} maxᵢ(v) ≥ 1 and Σ_{i∈L} minᵢ(v) ≤ 1.

No surface could possibly be produced if this is not the case, as the crossover value does not exist in v. Just as with the MCOctree, the MinMax MCOctree need only be expanded for a single marching level at a time in order to conserve memory.

An additional optimization is required in order to achieve a speedup with the min-max method. Since the emitters' minimal and maximal values must be computed at the corners of the octree bounding boxes, and since these corners can be incident on up to eight nodes of the octree for up to ⌈log₂ n⌉ levels of the octree, we would like to cache these emitter values. We will align the octree structure so that its lowest level bounding volumes coincide with the marching cubes themselves. The top/front/left box of the lowest level of the octree will be aligned with the (0, 0, 0) box of the marching matrix. There will be ⌈log₂ n⌉ levels to the octree, with the extra 2^⌈log₂ n⌉ − n octree boxes in each extent extruding past the end of the marching volume. The emitter value would then only need be computed once for any location.
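A minimal sketch of the expansion test of Definition 3.3, assuming the per-emitter bounds maxᵢ(v) and minᵢ(v) have already been computed for the node volume:

```python
def should_expand(max_vals, min_vals):
    """Definition 3.3 expansion test for a node volume v: the iso-surface
    (crossover value 1) can only pass through v if the summed maximal
    contributions can reach 1 while the summed minimal contributions do
    not already exceed 1 everywhere in v."""
    return sum(max_vals) >= 1.0 and sum(min_vals) <= 1.0

# Two emitters contributing at most 0.25 each: the summed maximum is 0.5,
# the threshold is never reached, and the node is pruned.
assert not should_expand([0.25, 0.25], [0.0, 0.0])
# A volume lying deep inside the emitters (all minima above 1) is also
# pruned: the field never drops back down to the threshold there.
assert not should_expand([5.0, 5.0], [2.0, 2.0])
```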
Keep in mind that we would still have to compute the emitter values along the edges of the marching cubes, but this is only done once for each edge during the march.

Emitter values are cached at the corners of the octree. An index (i, j, k) into the (n + 1)³ list of corners of the octree and marching matrix is converted into an index (i, j, d) into the cache. Let k be the index into the current n² level of boxes during the march. The value of d is set to be the minimum depth of any of the corners of the k-th level (n + 1)² list of corners. This can be calculated directly from k such that d = ⌈log₂ n⌉ − i, where i is the index of the lowest order bit set in k. Because of the order in which the cubes are marched, k will either be at the top or bottom of our current n² level of boxes. We need to maintain the emitter values for the ⌈log₂ n⌉ depths of the octree that surround the current n² level of marching boxes. These octree depths are reused when the octree is re-evaluated as we move from one marching level to the next. These ⌈log₂ n⌉ octree depths will include the corners of the current marching level.

Figure 3.4: Emitter cache. On the left we see a sample of a marching volume for n = 8. On the right is the octree surrounding the third level of the march. (i, j, k) is the index into the array of corners and d is the minimum depth of a point of a level of corners. Note that levels of the octree are completely filled in, assuming emitters are active on all boxes of the current level.

3.3 Problems and Solutions

This section discusses some of the problems associated with the implicit surface reconstruction method and solutions to these problems.

The implicit surface method cannot be used on the skull and mandible directly. The indents around the skull temples and the gap between the mandible and skull (Figure 3.5(a)) show up in the final implicit surface (Figure 3.5(c) and (e)).
These gaps in the surface can be manually filled in (Figure 3.5(b)) to produce a more realistic resulting surface (Figure 3.5(d) and (f)).

Another problem associated with this form of reconstruction is that final facial shape is not independent of the number of emitters. Even if two emitter skeletons form exactly the same shape, the resulting surface could be quite different. Bulging can occur if the emitter skeleton size is small compared to its field distance, as shown in Figure 3.2. As Figure 3.2 shows, bulging can be reduced by increasing the value k in the emitter formula; however, even high values of k do not remove this bulging, as this does not eliminate the source of the problem. A new emitter is defined for polygon i based on a split distance t, 0 ≤ t < 1, as follows:

eᵢ(p) = e_max                  if 0 ≤ dᵢ ≤ tℓᵢ and p projects onto polygon i
        ((1 − t)ℓᵢ / d̂ᵢ)^k    if dᵢ > tℓᵢ
        0                      otherwise                                (3.2)

where dᵢ is the signed distance of p from polygon i's plane, with positive dᵢ indicating distances in front of the plane and negative dᵢ giving distances behind the plane, and d̂ᵢ is the distance of p to a polygon constructed by moving the points of the emitter polygon by tℓᵢ in the direction of the emitter polygon normal. If we set t = 0, the emitter is exactly as defined in equation 3.1. However, as we increase the value of t, we limit the scope of the emitter.

This new emitter can be visualized as follows. Take the volume of polygon i extruded along its normal a distance tℓᵢ. If p is in this volume, we return e_max; if p is past distance tℓᵢ in the facing polygon direction, we evaluate the emitter exactly as if it were an emitter of length (1 − t)ℓᵢ for a polygon skeleton placed at the end of the volume extrusion. If p is anywhere else in the space, we say the point is outside the surface by assigning the value 0. Figure 3.5 shows the resulting implicit surface for different values of t.
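Equation 3.2 can be sketched as follows, with the geometric queries (the signed plane distance, the projection test, and the distance to the offset polygon) assumed to be supplied by the caller; the function name and argument layout are illustrative, not from the thesis code.

```python
def split_emitter(d_signed, projects_onto_polygon, d_hat, ell, k, t,
                  e_max=1e6):
    """Evaluate the split-distance emitter of equation 3.2 for a point p.
    d_signed -- signed distance of p from the emitter polygon's plane
    projects_onto_polygon -- whether p projects onto the polygon
    d_hat -- distance from p to the polygon moved t*ell along its normal
    """
    if 0.0 <= d_signed <= t * ell and projects_onto_polygon:
        return e_max                  # inside the extruded volume
    if d_signed > t * ell:
        # behaves like an emitter of length (1 - t)*ell placed at the
        # end of the extrusion
        return ((1.0 - t) * ell / d_hat) ** k
    return 0.0                        # behind or beside: outside the surface

# A point one unit in front of a unit emitter with t = 0.5 sees the
# shortened emitter at distance 0.5, giving field strength 1 for k = 2:
# split_emitter(1.0, True, 0.5, 1.0, 2.0, 0.5) -> 1.0
```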
Figure 3.5: Original and filled skull for varying t. (a) Original skull and mandible; (b) skull with jaw and temples filled in; (c) and (d) n = 150, k = 6.0, t = 0.0; (e) and (f) n = 150, k = 6.0, t = 0.5. (c) and (e) use the original skull; (d) and (f) use the filled skull.

Another problem encountered when using emitters is bulging around edges where emitters meet. This occurs because emitters are in proximity to each other along these edges (and in fact are usually coincident), causing larger field strength sums than for points that project onto the middle of emitter polygons. This can be seen in both 2D and 3D in Figure 3.2. Although the new emitter described by equation 3.2 reduces this bulging, this is an inherent shortcoming of skeletal emitters. The value of k can be increased to reduce this bulging, but too high a value will reduce the surface blending effect of the implicit surface for emitters in close proximity to each other. It is anticipated that a new emitter definition can be constructed to remove this restriction. At the very least it should be possible to define a polygonal emitter so that the resulting surface is flat when adjacent polygon emitters are co-planar.

Polygon reduction is also used on the resulting mesh. Since we must conduct the march at the highest resolution, the resulting mesh has an extremely high polygon count of tiny polygons. The method described by Garland and Heckbert [GH97] is used to reduce the resulting mesh, which also provides the added bonus of smoothing out the aforementioned ridges produced by adjacent polygons. Since these ridges are small features compared to the overall facial shape, they disappear as a result of the mesh reduction process. Figure 3.6 shows examples of meshes before and after these mesh decimations.

As the implicit surface sampling rate gets arbitrarily high, the act of visiting all n³ cubes will begin to affect the speed of the implicit surface polygonizer.
This is true even if most cubes can be trivially ignored and don't require emitter evaluation on their corners. To avoid this, we can traverse the octree to visit only those boxes which have emitters that contribute to the implicit function on the current n² level of boxes. A list of these visited boxes can be maintained so that only these boxes are initialized between marching levels.

3.4 Results

The final produced implicit surface can be seen in Figure 3.7. This is marched at a resolution of 150 marching cubes in each extent and produced with the parameters k = 6.0 and t = 0.5 to produce a model of 350,679 polygons. This is reduced using an iterative contraction of vertex pairs [GH97] down to 14,000 polygons. Facial features can be added to the implicit model by fusing polygonal facial features to the polygonized implicit surface. These facial features can be constructed in standard 3D modelling packages or by acquiring 3D scans of template subjects.

Figure 3.6: Polygon reduced implicit surfaces. The above shows the fully polygonized implicit surface models, each with greater than 340,000 polygons (left), and the corresponding models reduced to 14,000 polygons (right). t = 0.0 for (a) and (b), t = 0.5 for (c) and (d); n = 150 and k = 6.0 for all models.

Figure 3.7: The final implicit surface. n = 150, k = 6.0, t = 0.5, reduced to 14,000 polygons.

Chapter 4

Reconstructing Facial Models with Hierarchical B-Splines

Performing a craniofacial reconstruction with a hierarchical B-spline (H-spline) surface is the focus of this chapter. In [Arc97], Archer proposed a first formalization of computational 3D craniofacial reconstruction and an algorithm to fit hierarchical B-splines to standard dowel locations. She found the number of dowels in the standard dowel set to be insufficient to produce a satisfactory fit. This chapter is a continuation of that work.
Section 4.1 provides background information on hierarchical B-splines, followed by Section 4.2, which describes Archer's fitting algorithm and the fitting process. Section 4.3 discusses issues related to facial features. The chapter concludes with results in Section 4.4.

4.1 Overview of Hierarchical B-Spline Surfaces

Hierarchical B-splines are an extension to B-splines.¹ Refer to [FvDFH90] for a good overview of B-splines and [FB88] for a complete description of hierarchical B-splines. A brief overview of hierarchical B-splines is provided for the convenience of the reader.

¹Throughout this work, the term "B-splines" is used to refer to bicubic B-splines.

Refinement of a B-spline surface will be described first. If we start with an m × n array of control vertices which defines a B-spline surface, we can redefine the surface by increasing the size of the array in either the m or n extent. This produces a surface of (m + 1) × n or m × (n + 1) control vertices that represents the same surface. This process, discussed in [WW92], is referred to as refinement of the surface. Hierarchical B-splines provide a hierarchical means of refining and editing a surface locally.

Definition 4.1 A local refinement about a B-spline control vertex x is a 7 × 7 array of B-spline control vertices centered at x which redefines exactly the four B-spline patches adjacent to (u₀, v₀), the parametric location of x.

Definition 4.1 implies that the u and v spacing of this new local surface is half that of the spacing for the original surface. See Figure 4.1 for a pictorial example of a local refinement.

Definition 4.2 An overlay is a local refinement of a surface that allows the central control point, but no other, to be edited.

The essential idea behind H-splines is that of the overlay. This concept will be illustrated by an example. Take a control point p on a surface. If we wished to
edit p on a higher level of detail, we would produce a new B-spline surface around p that exactly matches the patches adjacent to p, as discussed above. Note that as long as p is the only point edited at this higher resolution, only the region of influence of p (i.e., the four adjacent lower level patches) can be modified. Evaluating this surface using only the most refined patch surrounding a given parametric (u, v) location results in a C² continuous surface. By only moving the central control vertex, only the surfaces created by the local refinement in Definition 4.2 will be modified.

Figure 4.1: Local refinement. The original control vertices and patches are shown as circles and solid lines and the locally refined control vertices and patches are shown as crosses and lines (both solid and dashed), respectively.

Overlays can be extended into composite overlays.

Definition 4.3 A composite overlay is created when two composite overlays of a given level overlap (i.e., share control vertices). The two composite overlays are merged into a new composite overlay with coincident control vertices treated as one. A control vertex may only be edited in a composite overlay if it is the center control vertex of a 7 × 7 array of control vertices. An overlay is the simplest form of composite overlay.

These composite overlays can in turn be treated as B-spline surfaces and be refined further using overlays. A surface can then be specified with an arbitrary amount of detail by progressively refining the surface.

There remains an important aspect of H-splines to be covered that pertains to editing: offset referencing. If we kept the H-spline representation as a series of composite overlays in absolute coordinates, it would be difficult to edit a control vertex which has been refined (i.e., which has the same (u, v) value as a higher level control vertex).
If this point is moved, we would like all lower level overlays to move with the lower detail surface in an intuitive manner. To provide an easy means of editing, vertices at higher levels of detail are stored by means of offset referencing. Instead of storing detail vertex locations in world coordinates, a vertex x is stored as an offset o. Let (u₀, v₀) be the parametric location at which x has the greatest influence. The offset is calculated from f, the next-lower surface evaluated at (u₀, v₀), in a frame of reference defined by the u direction and v direction tangent vectors at (u₀, v₀). Let F be the transform from this frame to world coordinates. The actual location of x can then be calculated as:

x = f + Fo

When control vertices at lower levels of detail influence location (u₀, v₀) on the surface, frame F is implicitly modified in a way that ensures the influence of x follows the surface.

Figure 4.2: Generic face hierarchical B-spline surface. The original face model is shown (a) with its parametric associations to the standard dowels and (b) rendered separately.

4.2 Fitting the Hierarchical B-Spline Surface

The algorithm which fits the hierarchical B-spline surface to dowel endpoints is treated as a black box for the purposes of the Craniofacial Reconstruction program. In this work, the algorithm from [Arc97] is used to fit the hierarchical B-spline to the dowel locations. A quick overview of this algorithm is provided.

The fit starts with a generic facial model as shown in Figure 4.2. Each of the dowels is treated as a data point to be interpolated by the hierarchical B-spline. A parametric (u, v) location is associated with each dowel manually; this parametric association on the standard dowel set is used for all fits on all skull models.
The head is translated and scaled around the skull so that the hierarchical B-spline nose is approximately positioned in its final position and the standard dowels are relatively close to their associated (u, v) locations on the hierarchical B-spline.

Fitting is done as an iterative process. A pair of parameters is passed to each iteration: L, the level of the H-spline to fit, and f, a fit fraction, 0 < f < 1. The L-th level of the H-spline is treated as a simple B-spline and fit by least squares. The fit fraction f is used to limit the effect of the iteration on the overall fitting process. The position of control vertex i after an iteration of the fit is cᵢ + f(cᵢ′ − cᵢ), where cᵢ and cᵢ′ are the positions of control vertex i before and after the least-squares fit.

Figure 4.3: H-spline surface fit to standard dowel locations. The fit facial model is shown (a) with its parametric associations to the standard dowels and (b) rendered separately.

The standard dowels are not sufficient to produce a satisfactory fit, as demonstrated in Figure 4.3. While the H-spline surface does take shape about the standard dowel locations from iterations of the fitting algorithm on lower detail levels of the H-spline surface, there are many higher detail control vertices that have no parametric associations over their area of influence. Such control vertices will not be moved during the fitting process. Since only the high detail control vertices near standard dowel locations can be moved, ridges and dents occur in the H-spline surface around these locations.

To ensure a reasonable fit, dowels must be inserted both outside and inside the dowel areas' region of influence. Dowels outside this area of influence are given a default small tissue depth; for the purposes of this application this tissue depth was the same as that for the supraglabella dowel on the mid-upper forehead.
Since the hierarchical B-spline surface is approximately placed in the correct location and scaled around the skull, we can calculate a (u, v) location to associate with the dowel by intersecting a ray cast by the dowel with the H-spline surface. This association can be modified if necessary to reduce distortion of the fit on the hierarchical B-spline surface. When a dowel is inserted inside a dowel area's region of influence, its tissue depth is interpolated as described in Section 2.2. The (u, v) coordinate of the dowel is interpolated as well; however, this will not always produce a correct location. Since the parametric surface is not spaced evenly, the distance along the parametric surface will not directly correspond to the interpolated u and v. The dowel's corresponding parametric location can be edited manually if necessary.

The quality of the fit is highly sensitive to the quality of the dowel placement. Too few data points can result in both surface buckling and poor overall surface shape, as seen in Figure 4.3. Too many data points will overload the fitting algorithm and again produce a poor interpolation. It appears the best results are attained when a data point exists for each patch at the highest level of fitting; this is level two of the hierarchical B-spline.²

²Hierarchical B-spline levels are by convention counted from 0, the lowest level of detail.

Another problem encountered when fitting the generic H-spline face is distortion of facial features. Since we wish to preserve the structure of the face in order to easily edit facial features, it is important to keep the facial features mostly symmetrical and unchanged. Offsets at levels of the H-spline higher than two are not modified during the fitting process. Even if one takes these pitfalls into account, it is inevitable that the model will require touch-ups. This task can be completed with relative ease, as hierarchical B-splines allow for easy editing of the model.
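The damped update at the heart of each fitting iteration described in this section can be sketched as below; the least-squares solve itself stays the black box the text describes, passed in here as a plain function.

```python
def fit_iteration(controls, solve_least_squares, f):
    """One (L, f) iteration of the fit: solve for the least-squares
    control positions c', then move each control vertex only a fraction
    f of the way there: c_i + f * (c_i' - c_i)."""
    target = solve_least_squares(controls)
    return [tuple(c[j] + f * (t[j] - c[j]) for j in range(3))
            for c, t in zip(controls, target)]

# A stub solver pulling every control vertex toward the origin:
moved = fit_iteration([(1.0, 1.0, 1.0)],
                      lambda cs: [(0.0, 0.0, 0.0)] * len(cs), f=0.4)
# moved[0] is ~(0.6, 0.6, 0.6): 40% of the way to the solved position
```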
4.3 Adding Facial Features

Facial features such as the nose, eyes, and mouth are an implicit part of the initial facial model. The only issues to address are the addition of different facial features and the effects of fitting.

Facial features can be modified and distorted during the fitting process. It is therefore desirable to provide a means of limiting the fitting effect on such regions. At first, it would seem like a good idea to ensure that the fitting algorithm does not move such vertices; however, this often produces poor results, as a greater weight is put on other control vertices near facial feature offsets. Instead, fitting is conducted as if facial feature offsets can be moved; the original offsets are restored after fitting is complete. This produces the most visually satisfying results.

Changing facial features is handled in a similar manner. A library of different facial features could be maintained as a database of control vertex offsets. Though not implemented for this work, a tool for blending between facial feature offsets would provide added flexibility to the artist. As well, the hierarchical B-spline surface can be touched up after facial features have been added.

Figure 4.4: H-spline surface fit to interpolated dowels. Presented are (a) a semi-transparent fit surface with interpolation points, (b) a rendering of the H-spline surface fit to the dowel locations, and (c) the model in (b) edited to reduce facial distortions.

4.4 Results

A final fit model is presented in Figure 4.4(b); the same model, modified manually to remove visual distortions, is shown in Figure 4.4(c). Given as tuples in the form (fit level, fit fraction), the iterations of the fit were (0, 0.4), (1, 0.35), (2, 0.35), (0, 0.35), (1, 0.3), and (2, 0.3).

As mentioned previously, the fitting process is highly sensitive to the parametric locations to which dowels are associated.
Currently, the parametric locations of the dowels are manually modified in order to produce an undistorted fit. This process can be improved in the future. One possible approach would be an iterative relaxation-based approach: parametric locations can be moved along the surface based on a penalty formula. The fitting process would be conducted at each iteration, and (u, v) locations moved in areas that cause surface creases, bunching, and distortions. Such a technique would reduce the subjectivity of parametric locations.

Chapter 5

Conclusion

This chapter presents the final results of the computational facial reconstruction techniques in Section 5.1, a summary of the thesis in Section 5.2, and finishes with a discussion of future directions for the research in Section 5.3.

5.1 Results

For comparative purposes, the final facial models for both the implicit surface and hierarchical B-spline fitting techniques are provided in Figures 5.1(a) and (b), respectively. These figures show the raw outputs of each technique without modifications to the facial features or model touch-ups. Note that the hierarchical B-spline includes facial features in the generic surface and these features are carried on to the final result. As well, Figures 5.2 and 5.3 show the implicit surface and hierarchical B-spline fitting techniques for different subject weights.

Figure 5.1: Results of (a) the implicit surface and (b) hierarchical B-spline fitting techniques for an average weight male caucasoid.

Figure 5.2: Implicit surface method results for different weight types.

Each of the two techniques presented has its strengths and weaknesses. The implicit surface technique provides an almost automated means of producing facial shape. Once the standard dowels are placed, the algorithm generates the three dimensional model.
In contrast, the H-spline fitting technique requires manual placement of additional dowels, modification of dowel parametric placement to remove facial distortions, and experimentation with fitting parameters. These additional tasks add a further element of subjectivity to the process.

However, to its merit, the H-spline fitting technique is better suited to the addition of facial features and editing of the output model. Unlike the implicit surface reconstruction method, for which facial features must be added as a separate process, facial features are integral elements of both the original generic and output H-spline surfaces. As detailed in Section 4.3, a library of facial features can be maintained and blended between; each facial detail is stored as H-spline offsets for specific control vertices. As well, the resulting H-spline surface is much easier to modify than the raw mesh produced from the implicit surface technique.

Figure 5.3: Hierarchical B-spline fitting method results for different weight types.

5.2 Summary

This work has detailed the problem of computer assisted 3D craniofacial reconstruction in a series of phases. Chapter 1 describes the context of the problem; the long-term goal of this work is to provide the theoretical foundations of tools to produce a subject's pre-death appearance. This thesis concentrates on the reproduction of three dimensional facial shape from the skull and mandible.

The reproduction of facial shape is accomplished in two steps. The first is to provide a means of determining the facial tissue depth at any point on the skull; this is based on the lengths of the dowels at the standard facial landmarks and is described in Chapter 2. The second step is to generate the facial model. Two complementary methods are detailed. Chapter 3 describes a technique to reconstruct the facial shape using the polygons of the skull and mandible as emitters of an implicit surface.
Chapter 4 describes the task as the problem of hierarchical B-spline surface fitting. Each of the standard dowels is given a corresponding parametric location on the H-spline surface; this parametric location indicates which location on the H-spline surface is to be fit to the end point of the dowel. Additional dowels are manually added with the length and parametric location determined computationally. The parametric location will sometimes need to be adjusted manually.

5.3 Future Work

Future work could proceed in two directions: validation of the reconstruction process and improvement of reconstruction techniques. Each will be discussed.

Currently, the tissue depth of a given point on the skull is determined by barycentric interpolation of the point projected onto a triangle prescribed by three dowels. The results of this technique, shown in Figure 2.4, were acceptable according to RCMP forensic artists. While promising, a less subjective analysis of the results is desirable. It could be that expert knowledge of the forensic artist would have to be incorporated into any automatic system for estimating tissue depths.

An analysis of interpolated tissue depths should be conducted on cadavers or living subjects. The tissue depth of a subject can be measured at a large number of facial locations, including the standard dowel set. These measurements could be acquired using traditional invasive techniques and CT scans on cadavers, or safe tissue depth methods such as MRI scans on living subjects. These measurements can be compared to the interpolated tissue depths; the lengths of the standard dowels would be set to the measured standard landmark tissue depths for the subject. Once there is a means of validating the interpolation method, other interpolation techniques such as natural neighbour interpolation [WP87, Wat92] can be investigated.

As well, there is currently no validation of the resulting facial model to the original subject.
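The barycentric depth interpolation summarized above can be sketched in a few lines; projecting the query point into the triangle's plane is assumed to have been done already, so the example works in 2D.

```python
def interpolate_depth(p, a, b, c, depth_a, depth_b, depth_c):
    """Barycentric interpolation of tissue depth: p is the query point
    already projected into the plane of the triangle of dowel sites
    a, b, c (all given here as 2D coordinates in that plane)."""
    def signed_area2(u, v, w):
        # twice the signed area of triangle (u, v, w)
        return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])
    area = signed_area2(a, b, c)
    wa = signed_area2(p, b, c) / area
    wb = signed_area2(a, p, c) / area
    wc = signed_area2(a, b, p) / area
    return wa * depth_a + wb * depth_b + wc * depth_c

# At the centroid the three dowel depths are weighted equally:
d = interpolate_depth((1/3, 1/3), (0, 0), (1, 0), (0, 1), 3.0, 6.0, 9.0)
# d is ~6.0 mm
```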
Both a quantitative and qualitative analysis of facial reconstruction results is needed. As already mentioned, an MRI scan can be taken of a living subject. The facial model of the subject, as extracted from the scan, can be compared with the reconstructed facial model. A metric, such as the volume between the scanned and reconstructed facial models, can be computed to measure reconstruction error.

In addition, a qualitative study is required to determine whether the reconstructed facial models look similar to the subject according to human perception. This is as much a test of the validity of forensic craniofacial reconstruction as a test of an individual reconstruction technique. Case studies and reviews of physical reconstructions have been conducted (e.g., [HRPM93, TEC+97]); however, since reconstructions can be produced computationally, can be done on living subjects, and can be produced with relative ease, a proper statistical experiment for the purposes of identification can be conducted on a scale previously impractical.

Once a means of validating a reconstruction is established, extensions and new techniques for reconstruction can be explored. For example, the manual insertion of dowels on the skull in the H-spline fitting method is a time consuming process. Instead of inserting fitting dowels on the skull, points on the output implicit surface model could be automatically added. Regions such as the nose, mouth, and orbitals would be masked so that points in these regions would not be added. A better means of associating these points with H-spline parametric locations, such as that discussed in Section 4.4, should be explored.

Other reconstruction methods which should be explored include a relaxation-based spring model approach and volumetric deformation techniques. The relaxation-based spring model would represent the facial surface as a network of springs.
Vertices of the network would be kept a distance from the skull, either by holding them in place with springs whose resting lengths equal the tissue depth or by not allowing the vertices to cross approximate meshes (such as those in Figures 2.4(b) and 3.5) of the facial surface. Particular vertices in the spring network would be locked to specific facial locations such as the standard landmark positions and the edges of facial features such as the eyes, nose, and mouth. Models of facial features could be chosen from a database and automatically added to the resulting surface since facial feature locations are set in the network. Volumetric deformation techniques, as discussed in Section 1.1, can be extended. Different deformation techniques, for both the three dimensional skull/facial models and for two dimensional cross-sections of the volume such as those produced by MRI scans, could be developed and compared.

It is evident the problem of forensic craniofacial reconstruction is an active area of research that requires much future work.

Bibliography

[Arc97] Katrina Archer. Craniofacial reconstruction using hierarchical B-spline interpolation. Master's thesis, University of British Columbia, 1997.

[FB88] David R. Forsey and Richard H. Bartels. Hierarchical B-spline refinement. In Proceedings of SIGGRAPH, pages 205-212, 1988.

[FvDFH90] James D. Foley, Andries van Dam, Steven K. Feiner, and John F. Hughes. Computer Graphics: Principles and Practice, chapter 11. Addison-Wesley, 1990.

[Geo87] Robert M. George. The lateral craniographic method of facial reconstruction. Journal of Forensic Science, 32:1305-1330, 1987.

[Geo93] Robert M. George. Forensic Analysis of the Skull, chapter 16: Anatomical and Artistic Guidelines for Forensic Facial Reconstruction, pages 215-227. Wiley-Liss, 1993.

[GH97] Michael Garland and Paul S. Heckbert. Surface simplification using quadric error metrics. In Proceedings of SIGGRAPH, pages 209-216, 1997.

[HRPM93] Richard P.
Helmer, S. Rohricht, D. Petersen, and F. Mohr. Forensic Analysis of the Skull, chapter 17: Assessment of the Reliability of Facial Reconstruction, pages 229-246. Wiley-Liss, 1993.

[KI86] W. M. Krogman and M. Y. Iscan. The Human Skeleton in Forensic Medicine. Charles C. Thomas, Springfield, IL, 2nd edition, 1986.

[LC87] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In Proceedings of SIGGRAPH, pages 163-169, 1987.

[NM98] L. A. Nelson and S. D. Michael. The application of volume deformation to three-dimensional facial reconstruction: A comparison with previous techniques. Forensic Science International, 94(3):167-181, June 1998.

[QCS+97] G. Quatrehomme, S. Cotin, G. Subsol, H. Delingette, Y. Garidel, G. Grevin, M. Fidrich, P. Bailet, and A. Ollier. A fully three dimensional method for facial reconstruction based on deformable objects. Journal of Forensic Science, 42(4):649-652, 1997.

[Rhi84] Stanley Rhine. Tissue thickness measures: American caucasoids, American blacks, southwestern indians. Physical Anthropology Laboratories, Maxwell Museum of Anthropology, University of New Mexico, 1982, 1983, 1984.

[Sam89] Hanan Samet. Applications of Spatial Data Structures. Addison-Wesley Publishing Company, 1989.

[Ste79] T. D. Stewart. Essentials of Forensic Anthropology. Charles C. Thomas, Springfield, IL, 1979.

[TEC+97] Andrew J. Tyrell, Martin P. Evison, Andrew T. Chamberlain, Michael A. Green, and F. R. C. Path. Forensic three-dimensional facial reconstruction: Historical review and contemporary developments. Journal of Forensic Science, 42(4):653-661, 1997.

[VBL+89] P. Vanezis, R. W. Blowes, A. D. Linney, A. C. Tan, R. Richards, and R. Neave. Application of 3D computer graphics for facial reconstruction and comparison with sculpting techniques. Forensic Science International, 42:69-84, 1989.

[vOW93] Kees van Overveld and Brian Wyvill.
Potentials, Polygons and Penguins: An efficient adaptive algorithm for triangulating an equi-potential surface. In Proc. 5th Annual Western Computer Graphics Symposium (SKIGRAPH 93), pages 31-62, 1993.

[Wat92] David F. Watson. Contouring: A Guide to the Analysis and Display of Spatial Data. Pergamon, 1992.

[WP87] D. F. Watson and G. M. Philip. Neighborhood-based interpolation. Geobyte, 2(2):12-16, 1987.

[WW92] Alan H. Watt and Mark Watt. Advanced Animation and Rendering Techniques: Theory and Practice. Addison-Wesley, 1992.

Appendix A

Tissue Thickness Measurements

This appendix reprints tissue depths from studies conducted by Stanley Rhine [Rhi84].

     Measurement              Emaciated       Normal          Obese
No.  Name                     Male   Female   Male   Female   Male   Female
 1   Supraglabella             2.25   2.50     4.25   3.50     5.50   4.25
 2   Glabella                  2.50   4.00     5.25   4.75     7.50   7.50
 3   Nasion                    4.25   5.25     6.50   5.50     7.50   7.00
 4   End of Nasals             2.50   2.25     3.00   2.75     3.50   4.25
 5   Mid Philtrum              6.25   5.00    10.00   8.50    11.00   9.00
 6   Upper Lip Margin          9.75   6.25     9.75   9.00    11.00  11.00
 7   Lower Lip Margin          9.50   8.50    11.00  10.00    12.75  12.25
 8   Chin-Lip Fold             8.75   9.25    10.75   9.50    12.25  13.75
 9   Mental Eminence           7.00   8.50    11.25  10.00    14.00  14.25
10   Beneath Chin              4.50   3.75     7.25   5.75    10.75   9.00
11   Frontal Eminence          3.00   2.75     4.25   3.50     5.50   5.00
12   Supraorbital              6.25   5.25     8.25   7.00    10.25  10.00
13   Suborbital                2.75   4.00     5.75   6.00     8.25   8.50
14   Inferior Malar            8.50   7.00    13.25  12.75    15.25  14.00
15   Lateral Orbit             5.00   6.00    10.00  10.75    13.75  14.75
16   Zygomatic Arch, midway    3.00   3.50     7.25   7.50    11.75  13.00
17   Supraglenoid              4.25   4.25     8.50   8.00    11.25  10.50
18   Gonion                    4.50   5.00    11.50  12.00    17.50  17.50
19   Supra M2                 12.00  12.00    19.50  19.25    25.00  23.75
20   Occlusal Line            12.00  11.00    18.25  17.00    23.50  20.25
21   Sub M2                   10.00   9.50    16.00  15.50    19.75  18.75

Table A.1: Tissue thickness measurements (in mm), for American Caucasoids.

     Measurement              Emaciated       Normal          Obese
No.  Name                     Male   Female   Male   Female   Male   Female
 1   Supraglabella             4.00   5.00     5.00   4.50     5.00   3.50
 2   Glabella                  5.25   6.00     6.25   6.00     7.50   6.00
 3   Nasion                    5.25   5.25     6.00   5.25     5.25   4.75
 4   End of Nasals             3.00   3.25     3.75   3.75     3.25   3.00
 5   Mid Philtrum             11.75  10.00    12.25  11.25    11.75  12.00
 6   Upper Lip Margin         12.50  12.00    14.25  12.50    12.50  15.25
 7   Lower Lip Margin         13.75  12.25    15.50  15.00    15.50  12.00
 8   Chin-Lip Fold            11.75   9.50    11.75  12.25    13.00  12.25
 9   Mental Eminence          11.25  11.00    11.50  12.50    15.25  13.00
10   Beneath Chin              8.00   6.50     8.25   8.00     9.50   8.50
11   Frontal Eminence          3.75   3.25     5.00   4.00     5.50   5.00
12   Supraorbital              7.75   7.25     8.50   8.00    11.75   8.50
13   Suborbital                5.75   6.50     7.75   8.25     9.25   9.00
14   Inferior Malar           14.00  14.50    16.50  16.75    17.50  18.75
15   Lateral Orbit            10.50  12.00    13.25  13.00    20.00  12.75
16   Zygomatic Arch, midway    6.75   8.00     8.25   9.50    13.75   9.25
17   Supraglenoid              9.50   9.75    11.00  11.50    17.50  17.25
18   Gonion                   11.50  11.00    13.00  13.50    24.00  17.50
19   Supra M2                 19.00  20.50    22.00  20.25    24.00  23.50
20   Occlusal Line            16.75  17.75    19.00  19.25    30.00  20.00
21   Sub M2                   13.50  14.25    16.50  17.00    23.50  20.00

Table A.2: Tissue thickness measurements (in mm), for American Blacks.

     Measurement              Emaciated       Normal          Obese
No.  Name                     Male   Female   Male   Female   Male   Female
 1   Supraglabella             5.75   4.00     5.00   4.50     4.50   4.25
 2   Glabella                  5.75   4.75     5.75   4.50     6.00   4.50
 3   Nasion                    5.75   6.50     6.86¹  7.00     6.50   5.00
 4   End of Nasals             2.75   2.50     3.50   2.50     3.25   3.25
 5   Mid Philtrum              7.50  10.00     9.75  10.00     9.25   8.51¹
 6   Upper Lip Margin          8.25   9.50     9.75  11.00     9.25  10.00
 7   Lower Lip Margin          9.25  12.00    11.00  12.25     8.75  11.25
 8   Chin-Lip Fold             8.50   9.00    11.50  10.00     9.75  11.00
 9   Mental Eminence           8.00  11.00    12.00  13.00    12.50  13.25
10   Beneath Chin              5.25   8.00     8.00   8.00     8.00   7.75
11   Frontal Eminence          4.75   4.75     4.25   4.00     4.50   4.20¹
12   Supraorbital              6.75   5.00     9.00   8.50     8.50   8.25
13   Suborbital                3.75   3.25     7.50   6.25     7.75   6.75
14   Inferior Malar           10.00   9.00    14.00  12.00    15.75  15.00
15   Lateral Orbit             8.00   8.25    12.50  11.50    11.75  13.75
16   Zygomatic Arch, midway    6.00   5.75     7.50   7.00     8.75   9.00
17   Supraglenoid              5.75   4.50     8.50   6.25     9.75   7.75
18   Gonion                    7.75   6.25    13.25  10.50    15.40¹ 12.75
19   Supra M2                 14.25  11.75    21.50  18.00    23.50  19.00
20   Occlusal Line            15.50  12.25    20.75  17.50    22.75  19.25
21   Sub M2                   12.50  10.50    19.25  17.00    18.50  15.75

Table A.3: Tissue thickness measurements (in mm), for Southwestern Indians.
¹ This may be a typographical error within the original data.
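Tabulated depths like these lend themselves to direct programmatic lookup. The sketch below stores a few rows of Table A.1 (American Caucasoid depths, in mm) and queries them by landmark, build, and sex; it is only an illustration, and the structure and function names are assumptions, not part of the software described in the thesis.

```python
# Illustrative lookup over a subset of Table A.1 (American Caucasoids, mm).
# Each row lists depths in the order: emaciated M/F, normal M/F, obese M/F.
# CAUCASOID_DEPTHS_MM and tissue_depth are hypothetical names.
CAUCASOID_DEPTHS_MM = {
    "Supraglabella": (2.25, 2.50, 4.25, 3.50, 5.50, 4.25),
    "Glabella":      (2.50, 4.00, 5.25, 4.75, 7.50, 7.50),
    "Nasion":        (4.25, 5.25, 6.50, 5.50, 7.50, 7.00),
    "Gonion":        (4.50, 5.00, 11.50, 12.00, 17.50, 17.50),
}

_BUILDS = ("emaciated", "normal", "obese")
_SEXES = ("male", "female")

def tissue_depth(landmark: str, build: str, sex: str) -> float:
    """Return the tabulated soft tissue depth (mm) at one landmark."""
    row = CAUCASOID_DEPTHS_MM[landmark]
    # Columns are grouped in build order, with male preceding female.
    return row[2 * _BUILDS.index(build) + _SEXES.index(sex)]
```

A reconstruction system could use such a lookup to assign a depth to each landmark marker placed on the skull model, interpolating between the build categories when an intermediate body type is assumed.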

