New Surface Descriptors for Matching and Recognition of Three-dimensional Rigid Objects

by

Hossein B. Darbandi
M.A.Sc., University of British Columbia, 2002

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Doctor of Philosophy
in
The Faculty of Graduate Studies
(Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)
December 2009
© Hossein B. Darbandi, 2009

Abstract

In this work we propose new surface descriptors for matching and recognition of three-dimensional rigid objects by encoding the fluctuation of the surface and the variation of its normal around an oriented surface point. The surface of the object is encoded into three feature vectors that serve as surface signatures at each point of the surface, and the collection of signatures is then used to match and recognize the object. The signatures encode the curvature, symmetry, and convexity of the surface around an oriented point. The new descriptors are robust to noise, sampling density, scale, rotation, clutter, and occlusion. In this work we use Computer Aided Design (CAD) to create models and test objects represented by triangular meshes.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Chapter 1: Introduction
  1.1 The Research Problem
  1.2 Significance of the Study
  1.3 Thesis Organization
Chapter 2: Review of Literature
  2.1. Introduction
  2.2. View-Based Representation
  2.3. Model-Based Representation
    2.3.1. Boundary Representation
    2.3.2. Volumetric Representation
    2.3.3. Axial Representation
    2.3.4. Surface Representation
  2.4. Comparison
  2.5. Chapter 2 Conclusions
Chapter 3: Representing and Matching
  3.1. Descriptors
  3.2. Surface Representation
  3.3. Creation of the Signature
  3.4. Sampling Density
  3.5. Measurements
  3.6. Surface Matching
  3.7. Geometric Verification
  3.8. Registration
  3.9. Estimating the Transformation
  3.10. Matching
  3.11. Parameters
  3.12. Models and Settings
Chapter 4: The Accumulative Method
  4.1. Accumulative FOC Signatures
  4.2. Experiments
    4.2.1. Recognition Results
    4.2.2. Effect of Parameter M on the Recognition Rate
    4.2.3. Effect of Alignment Error and Candidate Selection
    4.2.4. Cluttered Surfaces
  4.3. Chapter 4 Conclusions
Chapter 5: The Differential Method
  5.1. Differential FOC Signatures
  5.2. Smoothing
  5.3. Experiments
    5.3.1. Recognition Results
    5.3.2. Cluttered Surface
  5.4. Comparisons
    5.4.1. Accumulative and Differential Methods
    5.4.2. Spin Images and Differential Method
  5.5. Chapter 5 Conclusions
Chapter 6: Analysis of the Signatures
  6.1. Properties of the Signatures
    6.1.1. Flatness Signatures
      6.1.1.1. Mathematical Representation
    6.1.2. Orientation Signatures
  6.2. Discriminatory Power of the Signatures
  6.3. Sampling Densities
  6.4. Noise Effects
    6.4.1. Specific Case
    6.4.2. General Case
    6.4.3. Noise Effect on Flatness Signatures
    6.4.4. Noise Effect on Convexity Signatures
    6.4.5. Experiments
    6.4.6. Noise Effect on the Alignment Error
  6.5. Sampling Intervals (∆R)
    6.5.1. Accumulative Signatures
    6.5.2. Differential Signatures
  6.6. Chapter 6 Conclusions
Chapter 7: Clustered Signatures
  7.1. Pooling Data
  7.2. Object Recognition
  7.3. Geometric Verification
  7.4. Registration
  7.5. Experiments
    7.5.1. Results
    7.5.2. Cluttered Surfaces
    7.5.3. Comparison
  7.6. Scalability
  7.7. Complexity
  7.8. Chapter 7 Conclusions
Chapter 8: Overall Conclusions
  8.1. Summary
  8.2. Contributions
  8.3. Further Investigations
    8.3.1. Symmetrical Plane of Objects
    8.3.2. Object Classification
    8.3.3. Consistency Recognition
    8.3.4. General Representation Technique
References
Appendices
  Appendix A: Models
  Appendix B: Noise Effect
  Appendix C: Clustering
  Appendix D: Intersection of a Triangular Mesh and a Sphere
  Appendix E: Alternative Method to Calculate FOC Signatures
    E.1. Approximated Approach
    E.2. Worst Case Error in 2D
    E.3. Worst Case Error in 3D
    E.4. Experiments
    E.5. FOC Signatures
    E.6. Error Rate
    E.7. Appendix E Conclusions

List of Tables

Table 2-1: Comparing different methods
Table 3-1: List of the models used in the experiments
Table 6-1: Comparing accumulative and differential methods
Table A-1: Parameters of the models used in the thesis

List of Figures

Figure 2-1: Spin images for three oriented points on the surface of a rubber duck model
Figure 3-1: Representation technique
Figure 3-2: Sampling effect
Figure 3-3: Sampling effect on a surface
Figure 3-4: Flatness value of the Peaks surface versus the sampling density
Figure 3-5: Cylindrical coordinates of point p relative to oriented point q
Figure 3-6: Matching and verification process
Figure 3-7: Test models and test objects
Figure 3-8: Sample of cluttered test objects
Figure 3-9: Support angle
Figure 3-10: Models used in the experiments
Figure 4-1: Accumulative signatures
Figure 4-2: Sample signatures
Figure 4-3: Convexity signatures
Figure 4-4: Average recognition results for support angle set to π
Figure 4-5: Average recognition results for support angle set to π/3
Figure 4-6: Effect of parameter M on the recognition rate and processing time
Figure 4-7: Effect of candidate selection on the recognition rate, support angle set to π
Figure 4-8: Recognition results for different levels of clutter
Figure 5-1: Differential method
Figure 5-2: Sample flatness and orientation signatures
Figure 5-3: Convexity signatures
Figure 5-4: Filter used to smooth flatness and convexity signatures in the differential method
Figure 5-5: Average recognition results for support angle set to π
Figure 5-6: Average recognition results for support angle set to π/3
Figure 5-7: Recognition results for different levels of clutter
Figure 5-8: Comparing recognition results of the accumulative and differential methods
Figure 5-9: Comparing recognition rates of the accumulative and differential methods for different clutter levels
Figure 5-10: Comparing recognition results of the spin image and differential methods
Figure 6-1: Differential FOC signatures of a sphere with radius R
Figure 6-2: Convex and concave parabolic surfaces
Figure 6-3: A surface and an oriented point
Figure 6-4: Calculating the flatness signature of a sphere with radius R in intervals R1 and R1+dR1
Figure 6-5: A sample orientation signature of a peak surface
Figure 6-6: Symmetrical axes of the sample surfaces identified by their orientation signatures
Figure 6-7: Normalized incremental histogram of the accumulative and differential methods
Figure 6-8: Normalized incremental histogram of the signatures of the accumulative and differential methods
Figure 6-9: Models sampled with different sampling densities
Figure 6-10: Normalized incremental histogram of the signatures vs. sampling density
Figure 6-11: Noise effect
Figure 6-12: Noise effect vs. support distance
Figure 6-13: Model M48 and the noisy test objects created with different noise levels
Figure 6-14: Average and variation of the alignment error for different noise levels
Figure 6-15: Components of flatness and convexity signatures
Figure 6-16: Effect of re-sampling on accumulative signatures
Figure 6-17: Sample differential flatness signature calculated with different intervals
Figure 6-18: Flatness and orientation signatures of the differential method calculated in 20, 40, and 80 intervals
Figure 6-19: Effect of sampling intervals on differential signatures
Figure 7-1: Clusters
Figure 7-2: Average recognition results
Figure 7-3: Average recognition results for cluttered objects
Figure 7-4: Recognition gained by clustered signatures
Figure 7-5: Comparing recognition rates of the proposed technique and spin images for different levels of clutter
Figure 7-6: Comparing the size of the library of descriptors used by different methods
Figure 7-7: True-positive recognition rate
Figure 7-8: New set of test objects
Figure 7-9: True-positive recognition rate
Figure 7-10: All 207 models used in the experiments
Figure 8-1: Symmetrical plane of an object
Figure A-1: The set of models used in the thesis
Figure C-1: Clustering tree used in the thesis
Figure D-1: Intersection of a triangle and a circle, cases 1 to 7
Figure D-2: Intersection of a triangle and a circle, case 8
Figure E-1: Intersection of a circle and a triangle
Figure E-2: Models used in the experiments
Figure E-3: Mean and standard deviation of the error rate in calculating the intersection of triangular patches and a sphere
Figure E-4: Error rate versus processing time

Acknowledgements

I feel indebted to the many people who supported and sustained me through the inception and completion of this work. First and foremost, I must express my gratitude to my research supervisor, Dr. M. Ito, whose academic devotion contributed greatly to my commitment to bring this chapter of my life to a successful conclusion. I am also thankful to Dr. J. Little, my co-supervisor, for his academic insights and guidance throughout this work. My warmest gratitude also goes to Dr.
A. Bagheri, whose unconditional support has always enriched my academic and career pursuits. My heartfelt gratitude also goes to my family: to my wife, Nazi, whose patience and encouragement have turned my dreams into possibilities, and to Manna, my incredibly mature and considerate daughter, who has comforted me through difficult moments. And finally, as this work is brought to fruition, I would like to dedicate it to my beloved mother, who recently concluded a fulfilled life and moved on. My dear mother and my everlasting source of inspiration, you will be deeply missed.

Chapter 1
Introduction

1.1 The Research Problem

The topic of free-form object representation and recognition has been pursued by many researchers in recent years. Although there have been great improvements in object representation and recognition, research in the field is still at a preliminary stage compared to our natural vision system, in which objects are easily recognized [SKCK05]. The diversity of objects, and the variety of parameters such as noise, lighting, pose, and context that affect both the representation and recognition processes, make object recognition one of the most difficult computational problems. A key computational issue in object recognition is discriminating between different objects while remaining tolerant to object transformations such as scaling, translation, illumination, viewpoint changes, changes in context and clutter, non-rigid transformations, and, in the case of categorization, shape variations. To address this problem, object recognition is divided into representation and matching processes. Due to its computational complexity, brute-force point representation cannot be used in the recognition process [B90]. Instead, the object should be well described in the representation process, so that the description can be used efficiently in the matching process.
A variety of representation techniques have been introduced in the last few decades for different computer vision applications [CF01] [YP05]. In recent years, new techniques for acquiring 3D data with laser rangefinders, multiple cameras, rotating line cameras [KS05], or combinations of these techniques [TPR04] [LH06] have inspired researchers to describe, recognize, and track 3D objects in real time. An effective three-dimensional object representation technique should be robust to noise, sampling density, occlusion, clutter, orientation, and scale, and it should also be efficient.

1.2 Significance of the Study

In this thesis, several effective representation techniques are investigated, and the drawbacks of each method are discussed. Then a simple but efficient descriptive technique is introduced for developing features of points on surfaces represented by triangular meshes. The technique can describe free-form objects and discriminate effectively between concave and convex surfaces. It is robust to scale, orientation, occlusion, clutter, surface sampling density, and noise. Unlike spin images, which map a three-dimensional surface into a two-dimensional histogram, the proposed descriptors are one-dimensional, leading to a smaller representation size and a more efficient matching process. Furthermore, the histograms developed by spin images do not effectively encode surface geometric properties, whereas the proposed descriptors encode the curvature, symmetry, and convexity of the surface around an oriented point into three discriminative vectors, which makes the matching process more accurate and can be used in other related computer vision applications. The proposed technique is then used to represent and recognize three-dimensional objects, and the results are examined.
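For context on the comparison with spin images: the classic spin-image mapping of Johnson and Hebert reduces each surface point to two coordinates relative to an oriented point (p, n) — the signed height β above the tangent plane at p and the radial distance α from the normal line through p — and accumulates these pairs into a 2D histogram. The sketch below is an illustrative implementation assuming NumPy and a point cloud stored as an (n, 3) array; function names are ours, and it is not the descriptor proposed in this thesis.

```python
import numpy as np

def spin_coordinates(points, p, n):
    """Map 3D points to spin-image coordinates (alpha, beta) relative
    to an oriented point (p, n): beta is the signed height above the
    tangent plane at p, alpha the radial distance from the normal line."""
    n = n / np.linalg.norm(n)
    d = points - p                 # vectors from p to each surface point
    beta = d @ n                   # signed distance along the normal
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta ** 2, 0.0))
    return alpha, beta

def spin_image(points, p, n, bins=16, size=1.0):
    """Bin the (alpha, beta) pairs into a 2D histogram -- the spin image."""
    alpha, beta = spin_coordinates(points, p, n)
    hist, _, _ = np.histogram2d(alpha, beta, bins=bins,
                                range=[[0.0, size], [-size, size]])
    return hist
```

Note that the histogram is two-dimensional (bins × bins values per oriented point), which is the representation-size overhead the one-dimensional signatures above avoid.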
The results of the experiments with the proposed technique and with spin images are then compared, and it is shown that while the descriptors created by the proposed technique are much smaller than those created by spin images, the proposed technique achieves a higher recognition rate. Finally, we show that the discriminative vectors created by the proposed technique can be shared among different objects by creating pools of data to describe and recognize a variety of objects. This process not only decreases the size of the descriptors used to represent the library of models, but also further increases the recognition rate.

1.3 Thesis Organization

This section presents the structure of the thesis. Chapter one serves as an introduction. Chapter two reviews the related theoretical and methodological work in the field. Chapter three describes the details of the proposed descriptive technique. Chapter four discusses the accumulative variant of the technique [DIL06]. Chapter five introduces the differential descriptive technique [DIL07] and compares it with the accumulative technique. Chapter six compares the two proposed methods, and then compares their recognition rate and efficiency with those of spin images. Chapter seven discusses the extension of the descriptive technique through the creation of a pool of shared descriptors to represent and recognize a variety of objects [DIL08]. Finally, chapter eight concludes the thesis and discusses further research that could extend the applications of the proposed technique.

Chapter 2
Review of Literature

2.1. Introduction

Object recognition can be divided into object representation (modeling) and feature matching (recognition) procedures.
This section discusses the main representation techniques introduced in the literature, to put the descriptive technique proposed in this thesis in context. Free-form is a general term that applies to objects whose surfaces cannot be modeled by simple mathematical expressions. Besl [B90] described a free-form surface as one where the normal is continuous everywhere except at vertices, edges, and transition points. Regardless of the application, the following three factors are considered in the representation process [CF01]:
• Completeness: the representation method should be able to extract enough features of the surface of an object to describe it entirely.
• Efficiency: the representation should be compact, and it should be possible to generate and use it efficiently.
• Uniqueness: there is only one way to represent the surface. If a method is complete and unique, then there is a one-to-one correspondence between the object and its representation.
Depending on the application domain, one or more of the above factors may be sacrificed. For example, in graphical applications completeness is more important than uniqueness, while in object recognition the representation should be unique.
Three-dimensional object representation can be classified into two main categories: view-based (appearance-based) representation and model-based representation. View-based representation makes no assumption about the geometry of the object; an object is represented as a collection of images taken from different views. In the matching process, the 2D image of the object as it appears from one view is compared to the stack of images taken of the object in the representation process. Model-based representation, in turn, uses the geometry of the three-dimensional object for representation.
In this method, the geometric properties of an object and their relations are extracted and stored as the descriptors of the object. In the matching process, the same procedure is applied to the test object, and its geometric properties and relations are compared against those of the stored descriptors for identification.

2.2. View-Based Representation

In view-based representation, a 3D object is viewed from different directions, and the features of the object's image in each view are stored as part of the object's descriptors. In the matching process, the features of the image of the object in the scene are compared with the collection of features of the library models. Techniques such as Aspect Graphs have been used to group 2D views of a 3D object into topologically distinct subcategories [PCPS99] [CK04]. However, the enormous size and complexity of aspect graphs for even simple objects have limited the use of this technique in the representation and recognition of 3D objects. Oracle, developed by Ming [N80], is one of the earlier systems that used a learning mechanism in the representation and recognition process: a learning engine adds the features of an unknown object to a database and later uses them to recognize the object. Principal Component Analysis (PCA) [FP03] is used in the representation process to reduce the size of the descriptors of the models. In view-based object recognition, an object is viewed from different directions, and its features are encoded in a one- or higher-dimensional space calculated from statistics of the training data. In the recognition process, the image of the object is projected into the same space, and the object is identified by finding its nearest match among the training data.
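The eigenspace projection and nearest-neighbour identification described above can be sketched as follows. This is a minimal illustration assuming NumPy, with each training view flattened into a row vector; the function names are ours and not taken from any cited system.

```python
import numpy as np

def fit_eigenspace(train, k):
    """train: (n_views, n_pixels) matrix of flattened training views.
    Returns the mean view and the top-k principal directions via SVD
    of the mean-centered data."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]              # rows of vt[:k] are the basis vectors

def project(x, mean, basis):
    """Project a flattened view into the k-dimensional eigenspace."""
    return basis @ (x - mean)

def nearest_view(query, train, mean, basis):
    """Identify the query by its nearest training view in the subspace."""
    coords = (train - mean) @ basis.T
    q = project(query, mean, basis)
    return int(np.argmin(np.linalg.norm(coords - q, axis=1)))
```

In practice the training set holds many views per object, so the nearest view simultaneously estimates both identity and approximate pose, which is how Murase and Nayar's parametric eigenspace was used.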
This simple but powerful method has been used by many object recognition systems. Kirby and Sirovich [KS90] used this method to find the most efficient space, named eigen-pictures, in which to project the training data. This method was later used to model and recognize faces, under the names eigen-faces and eigen-images [SM95] [SVSS03]. It has been shown by Murase and Nayar [MK95] that a subspace of 20 dimensions is sufficient for pose estimation and object recognition. However, due to the lack of a verification method and its sensitivity to occlusion, this method has high false positive rates [MLPZ96]. These problems were later partially handled by using eigen-windows [OI97] and the hypothesize-and-test paradigm [LB96]. Eigen-pictures also show poor recognition in the event of clutter [GK00]. The problem of clutter has been partially solved by using local features of the object [L04]. View-based recognition has also been applied to range images [LHK98]. In this method, range data is used instead of images. Principal component analysis is then used to project the range data to eigen-shapes, the same way it is used to create eigen-images and eigen-faces. The advantage of using range data over images is that, in most cases, illumination has no effect on range data. The boosting method is basically used to find discriminative features to describe an object across different views. Yokono and Poggio [YP05] used this method to recognize faces with high accuracy. Boosting has been pursued by other researchers to successfully detect a face and its pose; please refer to [ZLQ04] and [YSGZ04] for more boosting techniques. Gaussian derivative filters were used by Lowe [L99] [L04] to develop a local descriptor called SIFT. SIFT is invariant to rotation and scale. Lowe used this method to successfully identify objects in a cluttered scene. The system can learn an object model from a single image.
Inspired by his work, using local features for object recognition has become a trend, and several recognition systems combined with statistical learning have been proposed [LMT07] [ANP07]. The main problem with the SIFT method is that the surface of the object being investigated needs to be highly textured. Recently, more researchers are pursuing view-based representation techniques for classifying and recognizing objects. Cyr and Kimia [CK04] employed an aspect-graph structure based on the shape similarity between images to recognize 3D objects from 2D images. Bai et al. [BYL08] used shape similarity to detect and recognize contour parts in images by retrieving the most similar contour parts in a database of known contour segments. Moment invariants are also used to recognize partially occluded objects in an image. In this method, a database is initially built to store the information of a variety of objects after calculating their features represented by moment invariants. Then the features of an image are calculated and compared with the features stored in the database for identification [KFS07] [LBJ08]. View-based representation suffers from redundancy when modeling known objects. The size of the representative features of even a simple three-dimensional object is often too large to be practical for multiple object representation and recognition.

2.3. Model-Based Representation

Model-based representation extracts features of a three-dimensional object, and uses the collection of the features and their geometric relations for representation purposes. The main issue in representation is to extract enough features of an object so that the object can be recognized reliably during the matching process.
Although there is often no clear distinction between different types of model-based representations, this method is classified into different categories, such as boundary versus interior modeling [P80], numeric versus nonnumeric methods [L98], and global versus local feature representation, which uses the global or local features, respectively, of an object for representation [CF01]. In another classification schema, model-based representation techniques are divided into four subgroups: boundary, volumetric, axial, and surface representation techniques [BM00].

2.3.1. Boundary Representation

Boundary-based representation describes a three-dimensional object by its boundaries. Wire-frame representation, solid modeling [CT04], and invariant geometric properties [YF02] are different variations of this category. Representing boundaries as a triple (a set of surfaces, a set of space curves that represent the intersections of the surfaces, and a graph that represents their relations) is another technique to represent and recognize an object from its boundaries. The complex autoregressive model [XZ00] uses complex partial correlation features on the boundaries of the object for representation. The moments-based representation [L98][LSL07] and the Fourier description employ a set of curve moments and Fourier descriptors, respectively, to specify the silhouette of an object [SN06][KLR06]. The main problem with boundary representation is that this method is often insufficient to describe 3D objects well enough to provide reliable feature detection. Furthermore, objects are not simple wire frames or intersections of boundaries. In reality, objects are sets of surfaces, often with no distinct surface intersections and boundary segments.

2.3.2. Volumetric Representation

Voxels, constructive solid geometry, and super-quadrics are examples of volumetric representation.
Voxels are small volume primitives that are used to describe an object by their space occupancy. These small primitives are usually cubes. Bonfort and Sturm [BS03] used voxels to describe three-dimensional objects. In this representation, space is divided into 3D cubic grids, and each grid is marked as full, empty, or partial. Partial grids are in turn subdivided into smaller grids, and the process continues recursively. The Constructive Solid Geometry (CSG) method, which creates solids from cubes, cylinders, and spheres, is another category of volumetric representation [BC03]. These primitives are combined by their union, intersection, and difference. The model is stored as a binary tree, in which each leaf node represents a primitive volume, and each internal node represents an operation on its children. The super-quadric [GB92] is a mathematical form of describing an object by volumetric primitives, with the parametric form

S(α, β) = [x(α, β), y(α, β), z(α, β)] = [a1 cos^b(α) cos^c(β), a2 cos^b(α) sin^c(β), a3 sin^b(α)],

where −π/2 ≤ α ≤ π/2 and −π ≤ β ≤ π. The aspect-ratio parameters a1, a2, and a3 are in the range [0, 1], and they control the deformation of the shape; the parameters b ≥ 0 and c ≥ 0 specify whether the shape is close in nature to a cube or a sphere. Kosmopoulos [KK06] used range images and employed super-quadrics to localize box-like objects. Biegelbauer [GM07] used a similar method to detect 3D objects. The main problem with super-quadrics is that they are symmetric, so their ability to represent free-form objects is limited. The main problem with volumetric modeling in general is the ambiguity in joining volumetric primitives in both the representation and recognition processes.

2.3.3. Axial Representation

In this technique, the representation is based on the structure of an object. The Medial Axis Transform (MAT) [BPA03] is mainly used to skeletonize and describe animate objects.
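The super-quadric parametric form given earlier in this section can be evaluated numerically as in the following sketch. The signed-power convention used here is an assumption (it keeps the surface well defined for all angles); with b = c = 1 and unit aspect ratios the form reduces to an ordinary sphere.

```python
import numpy as np

def superquadric_point(alpha, beta, a1, a2, a3, b, c):
    """Evaluate one surface point of a super-quadric at angles
    (alpha, beta); a1..a3 are aspect ratios, b and c control how
    cube-like or sphere-like the shape is."""
    def spow(x, e):
        # Signed power: sgn(x) * |x|**e, an assumed convention that
        # avoids complex values for fractional exponents.
        return np.sign(x) * np.abs(x) ** e
    x = a1 * spow(np.cos(alpha), b) * spow(np.cos(beta), c)
    y = a2 * spow(np.cos(alpha), b) * spow(np.sin(beta), c)
    z = a3 * spow(np.sin(alpha), b)
    return np.array([x, y, z])
```

Sampling alpha over [−π/2, π/2] and beta over [−π, π] on a grid produces a point cloud of the whole surface.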
Generalized cylinders [DLB97] are defined by a space curve F(s), which represents the axis of a primitive, and C(s, θ), which represents a curve in a plane normal to F(s) at point s as the cross section of F(s). To describe an object, a set of space curves and cross sections may be used. This method is suitable for representing elongated objects. However, most objects cannot be described with this method, and its domain is limited. Generalized cones and ribbons (a 2D version of generalized cones) are examples of this category. In these methods, the medial axis transformation is used to simplify the shape representation while preserving most of the topological information of the object [YBM04]. Axial representation has shown its impact on skeletonizing and classifying objects based on their internal structure. The skeleton is mainly used for representing and recognizing flexible objects such as living objects. Recently, more researchers are pursuing the medial axis transformation for classifying and matching a variety of objects [WKW02][BPA03].

2.3.4. Surface Representation

Surface representation is the most logical approach to describe an object, because what we infer from an object is based on its surface. This method is preferred over all other methods discussed in this section for representing 3D free-form objects. There are many different methods to represent an object from its surface features. Curvature is one of the most prominent features for describing the surface of an object. Curvature is characterized as the change of the orientation of the normal of the surface. The principal curvatures, k1 and k2, encode the rate of change of surface orientation in two extreme directions [CF01]. The Gaussian curvature is defined as K = k1·k2, and the mean curvature as Km = (k1 + k2)/2. Thirion [T96] used curvature extrema to represent free-form objects. Extreme values are defined as zero crossings between minima and maxima of Gaussian curvature.
Besl and Jain [BJ85] used Gaussian curvature and mean curvature to classify the surface into eight categories. Dorai and Jain [DJ97] extended this earlier work by defining two new measurements: the shape index,

S = 1/2 − (1/π) arctan((k1 + k2)/(k1 − k2)),

and the curvedness,

R = sqrt((k1^2 + k2^2)/2).

The shape index classifies the surface into nine distinct types, and the curvedness measures the magnitude of the curvature change. Boyer [BSF02] used range data and segmented the surface into four saliency classes based on their mean and Gaussian curvature. Curvature has extensive functionality in different fields of computer vision, especially in object representation and object recognition. However, curvature is very sensitive to noise, and robustly calculating the surface curvature remains an open area pursued by many researchers. Mathematical surfaces are mainly used in computer graphics. In this method, a mathematical representation is used to represent a region of a surface. Among mathematical representations, parametric surfaces [BB06] are widely used in computer-aided design and manufacturing (CADM). A generic parametric form of a 3D surface is

S(u, v) = [x(u, v), y(u, v), z(u, v)],

where u and v are two parametric variables. The most common parametric formulation of a surface is the non-uniform rational B-spline (NURBS) [FVFH95], in which a surface is represented as

S(u, v) = Σ_i Σ_j N_i,p(u) N_j,q(v) P_i,j,

where N_i,p and N_j,q are the B-spline basis functions of degrees p and q, and P_i,j are homogeneous control points. The availability of homogeneous control points makes this representation quite flexible. However, it is difficult to fit a parametric surface to an arbitrary region of a real object. This means that trimming curves are needed to cut and fit different parametric surfaces on different regions of the object.
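The curvature measurements defined above (Gaussian curvature, mean curvature, shape index, and curvedness) translate directly into code. The following sketch assumes the principal curvatures k1 ≥ k2 are already known; computing them robustly from a mesh is, as noted, the hard part.

```python
import numpy as np

def gaussian_curvature(k1, k2):
    """K = k1 * k2."""
    return k1 * k2

def mean_curvature(k1, k2):
    """Km = (k1 + k2) / 2."""
    return (k1 + k2) / 2.0

def shape_index(k1, k2):
    """Shape index in [0, 1]; undefined when k1 == k2 (umbilic points)."""
    return 0.5 - (1.0 / np.pi) * np.arctan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Magnitude of the curvature change."""
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
```

For a symmetric saddle (k1 = −k2) the shape index is 0.5, the midpoint of the scale, while the curvedness still reports how strongly the surface bends.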
This process can be accomplished in computer graphics, but it is not easy in computer vision, because trimming curves are not easily detectable. B-splines were used by Liao and Medioni [LM92] to represent the surface of an object from range data. Drapikowski and Nowakowski [DN02] used B-splines to build a 3D volumetric representation of a physical space based on data acquired by an IR range finder. B-splines have also been used for the segmentation and recognition of boundaries belonging to anatomic structures in medical imagery by matching them with prototype templates [AD03]. Ma and Pollick [MPH04] used B-splines to describe the fingers of a person, and used them, together with other geometric measurements of the hand, as the signature of that person. They used the signature to verify or identify a person by locating the closest hand image in the database. Although B-splines are a very useful tool for modeling in computer graphics, Cohen and Wang [CW94] showed in their work that this method is not suitable for object recognition, because different parameters and different knots (control points) may create the same parametric surface. B-splines are also used to analyze curvature and segment the surface of an object [SER04]. Another mathematical surface representation is the class of algebraic implicit surfaces. In this method, a surface is represented as the zero-set of an arbitrary function f(x, y, z):

S = {(x, y, z) | f(x, y, z) = 0}.

A method for the surface segmentation and fitting process is discussed in [BLCC00] and [HBM04], and the steps of object recognition with this method are discussed in [SG92]. The class of objects that can be represented by a single algebraic surface is very limited. Surface orientation is another method of free-form object representation. The Gaussian image (GI), Extended Gaussian Image (EGI) [H84], and Complex Extended Gaussian Image (CEGI) [KI93] are examples of this method.
In all of these techniques, a three-dimensional object is represented as a collection of normals with weights proportional to the size of the surface. Lowekamp and Rheingans [LRY02] created a Gaussian map directly from polygonal meshes and combined it with visualization techniques to amplify surface details. The main problem with these techniques is that they are generally non-local, and they react poorly in the event of clutter in the recognition process. (Locality is an important feature in representing objects: methods that do not use local features of an object for modeling are not suitable for object recognition in cluttered environments.) Furthermore, all of the above-mentioned methods, except CEGI, are not unique. A common method for the representation of a 3D surface is the use of polygonal meshes. In this method, the object is defined by a pair of ordered lists,

L = [P, V],

where V = {v1, …, vN} is a list of N three-dimensional vertices vi = {xi, yi, zi}, and P = {p1, …, pM} is a list of polygons, in which each polygon specifies a list of vertex indices: pi = {v1, …, vLi}. Different techniques have been developed to represent free-form surfaces as a collection of meshes, such as the spherical representation [DHI93] and the inflating balloon implemented by Chen [CL92]. Methods of mesh development from range data are discussed in [SOK00]. There are also methods developed to generate mesh approximations from other geometric primitives such as implicit surfaces [PLLF06], parametric surfaces [KL96], and volume data [FDB07]. Goes and Bergo [GBFG06] developed a framework for constructing and maintaining a time-varying adapted mesh structure that conforms to deformable surfaces. In all these methods, the surface is represented with connected small patches, and then a variety of techniques are used to extract salient features of the surface from this representation for describing 3D objects.
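The vertex-list/polygon-list structure L = [P, V] described above can be sketched concretely. The tetrahedron data below is a hypothetical example; any closed mesh follows the same pattern of coordinates plus index tuples.

```python
# V: a list of 3D vertices sampled from the surface.
vertices = [
    (0.0, 0.0, 0.0),  # v0
    (1.0, 0.0, 0.0),  # v1
    (0.0, 1.0, 0.0),  # v2
    (0.0, 0.0, 1.0),  # v3
]

# P: a list of polygons, each a tuple of indices into `vertices`.
polygons = [
    (0, 1, 2),
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]

def polygon_points(p_index):
    """Resolve a polygon's vertex indices into coordinates."""
    return [vertices[i] for i in polygons[p_index]]
```

Storing indices rather than coordinates in each polygon keeps shared vertices consistent and the representation compact.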
Recently, due to the decreasing cost of 3D scanners, more complex methods that use patches created from dense images have been introduced in the literature. The splash, introduced by Stein and Medioni [SM95], employs the normals of the patches as the signature of an object. In this technique, points of high curvature on the surface are identified, and then the normals of the patches within a constant radius are calculated. This information is then used as a signature for matching purposes. A similar technique, known as Point Signature, was developed by Chua [CD97]. In this technique, the intersection of a sphere of constant radius and the surface of interest defines a contour on the surface around the point of interest. Johnson developed the spin image descriptor [J97] in his Ph.D. thesis. In this technique, the surface is represented as a histogram of the cylindrical coordinates of the surface along an axis normal to the surface. A spin image is created for an oriented point using cylindrical coordinates: the radial coordinate a is defined as the perpendicular distance to the line through the surface normal, and the elevation coordinate b is defined as the signed perpendicular distance to the tangent plane defined by the vertex normal and position. The cylindrical angular coordinate is omitted, because it cannot be defined robustly and unambiguously on planar surfaces. A 2D histogram indexed by a and b is first created, and then the coordinates (a, b) are computed for each vertex in the surface mesh. The resulting histogram can be thought of as an image; dark areas in the image correspond to bins that contain many projected points. Figure 2-1 shows the projected (a, b) 2D coordinates and spin images for three oriented points on a duck model. For surface matching, spin images are constructed for every vertex in the surface mesh [JH99].

Figure 2-1: Spin images for three oriented points on the surface of a rubber duck model.
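The spin image accumulation just described can be sketched as follows. The binning scheme (shifting b so the histogram is centered on the tangent plane) is an assumption for illustration; published implementations also weight contributions bilinearly across bins, which is omitted here for brevity.

```python
import numpy as np

def spin_image(p, n, points, bin_size, n_bins):
    """Accumulate a 2D spin-image histogram for oriented point (p, n).

    For each surface point x: b is the signed distance along the
    normal n (elevation), a the distance from the line through n
    (radius); the angular coordinate is dropped.
    """
    img = np.zeros((n_bins, n_bins))
    for x in points:
        d = x - p
        b = np.dot(n, d)                              # elevation
        a = np.sqrt(max(np.dot(d, d) - b * b, 0.0))   # radial distance
        i = int((b + n_bins * bin_size / 2) / bin_size)  # row index
        j = int(a / bin_size)                            # column index
        if 0 <= i < n_bins and 0 <= j < n_bins:
            img[i, j] += 1
    return img
```

Because a and b are defined relative to the oriented point alone, the histogram is invariant to rigid motion of the object.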
Surface point signatures [SL01], harmonic shape contexts [SA99], spherical spin images [RSM01], and multidimensional table representations known as tensors, which describe and recognize 3D objects in cluttered environments [AB06], are other techniques of surface representation. Geometric invariant features [TS89] [SJ06] [LBJ08] are another method to represent and recognize objects. In this method, invariant features on the surface of an object are used for representation and recognition. The color matching technique uses the color histogram of the objects [CK99] for matching purposes. Finally, CAD-based representation employs a CAD system to create a graph of the object's surface and features. In this method, the CAD system is mainly used for modeling purposes [DI93] [F93]. There are a variety of other methods suggested by authors that are mainly combinations and/or extensions of the methods mentioned in this section. A good survey of different techniques can be found in [CF01] and [PM05].

2.4. Comparison

The pros and cons of the methods mentioned in the previous section are summarized in Table 2-1. As indicated in the table, view-based representation suffers from redundant data for describing an object. Boundary representation is the compact form of object representation, and it is excellent for describing geometric planar objects. However, in reality we are not dealing with such simple objects, and for the majority of objects in the real world this method is not sufficient. Volumetric representations have ambiguity in both the representation and matching processes. Finally, axial representation is too sensitive to noise, and a small amount of noise on the surface of an object changes the entire structure of the medial axis.
Due to the decreasing cost of 3D scanners, more complex methods that use surface features for object representation have recently been introduced in the literature [PM05]. With improvements in both hardware and software technology, efficiently representing the surface of three-dimensional objects with patches has opened new possibilities for more complex object representation using surface features. Furthermore, the availability of models used in CAM (Computer Aided Manufacturing) and CAD systems in manufacturing processes and virtual reality makes methods that use surface features for object recognition more desirable in the near future. In this project, we have found that surface representation is a more logical approach for object representation, since what we infer from an object comes primarily from its surface features rather than from other features such as volume or boundaries.

Table 2-1: Comparing different methods.

Method     | Pros and Cons
View-based | Redundant information
Boundary   | Data is compact, but it is often insufficient to match reliably
Volumetric | Ambiguity in joining and modeling internal features that are not relevant in recognition
Surface    | A more logical approach
Axial      | Sensitive to noise and insufficient to distinguish between similar objects

2.5. Chapter 2 Conclusions

Different methods of surface representation are discussed in this section. Object recognition systems that use surface signatures are effective methods for describing objects, since they can describe a variety of objects without restriction. These methods are particularly suitable for describing free-form objects that cannot be represented by parametric surfaces. A surface signature encodes the surface's geometric properties around a given point on the surface.
Splash, spin images, surface point signatures, harmonic shape contexts, and spherical spin images, discussed in the previous section, are examples of surface signature methods. Of the more recently introduced methods that employ surface signatures, the spin image is perhaps the most accurate and the easiest to implement. The spin image reacts very well in the event of occlusion and clutter, and it is also robust to noise. However, the spin image representation is sensitive to the resolution and sampling of the models and the scene. An improvement to this technique [CHH99] partially overcomes this sensitivity. Although the spin image is an effective method for representing and recognizing 3D free-form objects, spin images map a 3D surface into a 2D histogram, and the histograms do not effectively encode surface geometric properties. Furthermore, 2D histograms lead to large representation sizes and inefficient matching processes. To deal with the size of the representation, the spin image method uses Principal Component Analysis (PCA) to compress the descriptive data. The compressed data decrease the recognition rate, and even the compressed spin images are too large to create an effective object recognition system. An effective representation technique should address the following issues:
• Free-form objects: The method should be able to encode a variety of objects without restriction.
• Concavity and convexity: The representation technique should distinguish between convex and concave surfaces effectively.
• Orientation and scale: The method should be robust to the orientation and the size of the model.
• Occlusion and clutter: In the real world, we cannot see the entire surface of an object due to occlusion and clutter. As a result, the representation should be robust to occlusion and clutter.
• Noise: Noise is a natural part of image processing. An effective method should be robust to noise.
• Sampling density: An object may be sampled with different densities in different circumstances. An effective representation technique should be robust to sampling density.
• Efficiency: The representation technique should allow accurate description of an object using a reasonable amount of data. Many representation techniques we have studied in the previous section use vast amounts of data to describe a single object. This limitation makes it impossible to create an object recognition system that represents and recognizes a variety of objects.
• Scalability: The representation technique should be scalable to handle any number of models. Many representation systems work fine with a limited number of models, but they fail due to their inefficiency in handling a large number of models.
In this thesis, we introduce an effective method to represent a 3D free-form object that not only addresses all of the above-mentioned issues, but also encodes the curvature, symmetry, and convexity of the surface around an oriented point, which can be used in other related computer vision applications. Furthermore, the proposed technique encodes the surface into three discriminative vectors, which makes the matching process more efficient and accurate.

Chapter 3

Representing and Matching

The proposed descriptive technique and the creation of signatures from a surface represented as triangular meshes are discussed in this section. A method for calculating the area of a triangular mesh circumscribed inside a sphere is first discussed, and then the flatness and orientation signatures are introduced. The chapter continues by describing the method used to compare signatures. Surface matching is achieved by finding corresponding oriented points (a point on the surface along with its normal) using the proposed signatures. Since the matching algorithm makes no assumption about the topology of the test object, many surface points match with the signatures of the test object.
These points are then passed to a verification engine to eliminate the wrong matches. The group of oriented points is then used to register the test object with the matched model. The chapter ends with a discussion of the test objects and the models used in the experiments.

3.1. Descriptors

The basic element used to represent and recognize an object in this work is an oriented point. An oriented point is a point P on the surface of an object along with its normal N to the surface at P. Consider the disk D in the left column of Figure 3-1, with an area equal to S and a normal N at point P, an arbitrary point on the surface of the disk. Then

F = A/S, 0 ≤ F ≤ 1, (3-1)

where A is the projection of S on a plane normal to N. F specifies the flatness of the area around a specific point P on the surface. For a planar surface, F is equal to one, and for a curved surface, F is less than one. The more curved the surface, the lower the value of F.

Figure 3-1: Representation technique

By putting a sphere with center P and radius R on point P (right column of Figure 3-1), we can find the area S of the surface circumscribed inside the sphere. Then, by projecting S onto a plane Π normal to N, we find A and calculate F from Equation (3-1). As R increases, the ratio A/S changes and creates a graph that is the flatness signature of the surface around point P. The flatness signature of a planar area is a horizontal line. Similarly, the angle of the normal, θ, which is equal to the average of the normals of the triangles enclosed in the sphere, also fluctuates from its original position N as R increases, creating another curve: the orientation signature. For a symmetric surface, the orientation signature is a horizontal line if P is set on the symmetry axis of the surface.
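For a triangulated surface, the flatness ratio F = A/S of Equation (3-1) can be computed from per-triangle areas: a triangle's area is half the norm of the cross product of two of its edges, and its projection onto the plane normal to N scales that area by the cosine of the angle between the triangle's normal and N. The following sketch assumes the patch does not fold back over itself.

```python
import numpy as np

def flatness(triangles, normal):
    """Flatness F = A / S for a list of triangles, each a 3x3 array
    of vertex coordinates. S is the total area; A is the area
    projected onto the plane normal to `normal`."""
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    S = A = 0.0
    for t in triangles:
        t = np.asarray(t, dtype=float)
        cross = np.cross(t[1] - t[0], t[2] - t[0])
        S += 0.5 * np.linalg.norm(cross)       # triangle area
        A += 0.5 * abs(np.dot(cross, normal))  # projected area
    return A / S
```

A patch lying in the plane normal to N gives F = 1 exactly; tilting the patch lowers F, matching the behavior described for planar versus curved surfaces.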
The combination of the orientation signature and the flatness signature describes the object at point P on its surface. The collection of these signatures is used to represent the entire object.

3.2. Surface Representation

In this work, triangular meshes are used to model the surface of an object. In a surface mesh, a vertex corresponds to a 3D point sampled from the surface of an object, and edges are created by connecting two adjacent vertices. Given enough samples, any free-form surface can be represented with this method. The surface can be sampled with different techniques such as stereopsis [SS02], laser range finders, and multi-sensor systems [IOV05]. Using triangular meshes, the normal of each triangle is first calculated. For consistency, the normals of the triangles should point outward from the object in both the representation and matching processes [FVFH93]. To calculate the signatures at each step of the signature creation, the area of the object's surface circumscribed inside a sphere must be calculated first. In this work, we use the simple geometric fact that the intersection of an arbitrary plane and a sphere is a circle. As a result, calculating the area of a triangular mesh circumscribed by a sphere is reduced to calculating the intersection of a circle and a triangle. A precise approach for calculating the area of a mesh inside a sphere was used in this work. However, the method can be modified to perform the calculations more efficiently depending on the sampling density. The method is described in Appendix D. An alternative method for efficient calculation is introduced in Appendix E, and its results are compared with those of the precise method.

3.3. Creation of the Signature

By finding the area of the object's surface circumscribed inside the sphere and the normal of the oriented point, the flatness and orientation signatures are calculated.
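The per-triangle normal computation mentioned in Section 3.2 follows from the cross product of two edge vectors; a minimal sketch, assuming the mesh's vertices are wound counter-clockwise when viewed from outside the object (the usual convention for outward-pointing normals):

```python
import numpy as np

def triangle_normal(v0, v1, v2):
    """Unit normal of a triangle. The direction follows the vertex
    winding: counter-clockwise winding, seen from outside, yields an
    outward-pointing normal."""
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)
```

If a mesh arrives with inconsistent winding, the normals must be reoriented before signature creation, since both the representation and matching processes assume outward normals.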
Each signature is a vector of d elements, where d is the number of intervals used to generate the signature. In each interval of the signature creation, the radius of the sphere is set to

Ri = Ri−1 + ΔR.

To find the signatures for an oriented point on a vertex, the normal of the vertex, N, is calculated first. To decrease noise effects, the normals of the triangles are averaged around the vertex inside a sphere with a specific radius called the base radius. Having found the normal N and Π, the plane normal to N, two vectors S = [s1, …, sd], si ≥ 0, and A = [a1, …, ad], ai ≥ 0, are calculated. The flatness signature F = [f1, …, fd], 1 ≥ fi ≥ 0, of the oriented point P is then calculated from F = A/S, where F = (f1, f2, …, fd) is a vector of dimension d and

fi = ai/si for i = 1, 2, …, d.

Concurrently, the orientation signature O = [o1, …, od], oi ≥ 0, is calculated from

Oi = (1/m) Σ_{j=1}^{m} sj Nj,

where sj is the area of patch j, Nj is its normal, and m is the number of patches. The collection of the F and O signatures models the surface of the object at the selected vertex.

3.4. Sampling Density

The proposed representation technique is robust to the mesh sampling density. The flatness signature is calculated from A/S, where S is the area inside the sphere, and A is the projection of S on a plane normal to N.

Figure 3-2: Sampling effect. Left: Sampling a circle. Right: Flatness value of a circle versus its sampling density.

The left column of Figure 3-2 displays a semicircle sampled with four and five sample points. As indicated in the figure, by increasing the sampling density, the circle is represented more accurately. The right column of the figure shows the ratio of the boundary of the circle to its diameter. As indicated in the figure, by increasing the sampling density, the ratio reaches a steady-state value, which is the flatness value of a semicircle.
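The signature-creation loop of Section 3.3 can be sketched as follows. The helper `area_inside(tri, p, r)` stands in for the precise circle-triangle intersection of Appendix D and is assumed, not implemented here; storing the angle between the area-weighted mean normal and N as each orientation element is one plausible reading of the Oi formula.

```python
import numpy as np

def signatures(p, n, triangles, base_r, dr, d, area_inside):
    """Build flatness (F) and orientation (O) signatures of length d
    at oriented point (p, n), growing the sphere by dr per interval.
    area_inside(tri, p, r): assumed helper returning the area of a
    triangle inside the sphere of radius r centered at p."""
    F, O = [], []
    r = base_r
    for _ in range(d):
        r += dr
        s = a = 0.0
        weighted = np.zeros(3)
        m = 0
        for tri in triangles:
            si = area_inside(tri, p, r)
            if si <= 0:
                continue
            ni = np.cross(tri[1] - tri[0], tri[2] - tri[0])
            ni = ni / np.linalg.norm(ni)
            s += si
            a += si * abs(np.dot(ni, n))  # projected area contribution
            weighted += si * ni           # area-weighted normals
            m += 1
        F.append(a / s)                   # f_i = a_i / s_i
        mean_n = weighted / m             # O_i = (1/m) sum s_j N_j
        mean_n = mean_n / np.linalg.norm(mean_n)
        # Assumed scalarization: angle between the mean normal and n.
        O.append(np.arccos(np.clip(np.dot(mean_n, n), -1.0, 1.0)))
    return np.array(F), np.array(O)
```

On a flat patch both signatures are constant (F identically 1, O identically 0), which is the horizontal-line behavior described for planar and symmetric surfaces.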
The experiments indicate that, given enough samples to represent a surface, the flatness signatures are consistent over a range of sampling densities. Figure 3-3 shows the Peaks surface (the peaks function in Matlab) created with two different sampling densities. The surface shown in the left column is sampled with 900 vertices and 1682 triangles, and the surface shown in the right column is sampled with 36 vertices and 50 triangles.

Figure 3-3: Sampling effect on a surface. Left: Surface sampled with 30x30 vertices. Right: The same surface sampled with 6x6 vertices.

The flatness value of the entire surface with different sampling densities is shown in Figure 3-4. As indicated in the figure, the flatness value of the surface reaches a steady state with sampling densities over six samples in each of the x and y directions. The same property is valid for the orientation signatures, and it can be shown that the orientation signature is also robust to the sampling density.

Figure 3-4: Flatness value of the Peaks surface versus the sampling density.

The effect of the sampling density is discussed in more detail in Section 6 of the thesis.

3.5. Measurements

Our experiments indicate that, among a variety of disparity metrics for comparing two signatures, the Euclidean distance provides the best disparity measurement for the proposed signatures. The dissimilarity (distance) of two signatures, s1 and s2, is calculated as the normalized Euclidean distance between them:

dissimilarity(s1, s2) = (1/d) sqrt( Σ_{i=1}^{d} (s1i − s2i)^2 ), (3-2)

where d is the dimension of the signatures.
To compare the signatures of two oriented points, the Euclidean distance between the dissimilarity measurements is used:

$\text{unlikeness} = \left( \text{dissimilarity}(f_1, f_2)^2 + \text{dissimilarity}(o_1, o_2)^2 \right)^{1/2}$  (3-3)

where f1, f2 and o1, o2 are the flatness and orientation signatures of oriented points 1 and 2, respectively. Unlikeness is the measurement used to compare two oriented points.

3.6. Surface Matching

The unlikeness measurement provides a means of finding corresponding points on the surfaces of a test object and the models. To match the signatures, the flatness and orientation signatures created from a sample oriented point on the surface of the test object are first compared to the flatness and orientation signatures of all models using Equation (3-2). Their unlikeness is then measured using Equation (3-3). During the matching process, a sample point can be matched to more than one point for several reasons: both points may be located on symmetrical parts of an object, similar surfaces create similar signatures, and noise affects the signature creation. To counter this problem, the matched points are geometrically verified.

3.7. Geometric Verification

Different methods for filtering outliers have been suggested in the literature. In one method, all correspondences whose dissimilarity exceeds a certain fraction of the maximum dissimilarity over all correspondences are eliminated [JH99]. In practice, this fraction is set to the median of the dissimilarities. This approach does not require reasoning about the matched points to decide which correspondences are best. In another method (the hypothesize, test, and alignment paradigm [G90]), unlikely correspondences are removed by using geometric consistency between the matched points. In this method, the likelihood of two correspondences is measured by grouping them together to calculate a transformation of the model to the test object.
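Equations (3-2) and (3-3) can be sketched directly in a few lines of Python; the function names are ours, and the signatures are plain sequences of d numbers.

```python
import math

def dissimilarity(s1, s2):
    """Eq. (3-2): normalized squared Euclidean distance of two d-element signatures."""
    d = len(s1)
    return sum((u - v) ** 2 for u, v in zip(s1, s2)) / d

def unlikeness(f1, f2, o1, o2):
    """Eq. (3-3): Euclidean combination of the flatness and orientation
    dissimilarities of two oriented points."""
    return math.hypot(dissimilarity(f1, f2), dissimilarity(o1, o2))
```

Identical signatures give a dissimilarity of zero, and the unlikeness reduces to whichever of the two dissimilarities is nonzero when the other vanishes.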
If the correspondences are not geometrically consistent, then they are eliminated. Assume that $s_x$ and $m_y$ represent a match between a point x in the scene and a point y on the surface of a model. Now, let $C_1 = [s_1, m_1]$ and $C_2 = [s_2, m_2]$ be two correspondences between the scene and the model. The geometric consistency between the two matches is calculated as follows [JH97]:

$d(P_1, P_2) = \dfrac{\left\| S_{m_2}(m_1) - S_{s_2}(s_1) \right\|}{\left\| S_{m_2}(m_1) \right\| + \left\| S_{s_2}(s_1) \right\|}$  (3-4)

where

$S_q(p) = [a, b] = \left[ \sqrt{\|p - q\|^2 - (n \cdot (p - q))^2},\; n \cdot (p - q) \right]$  (3-5)

$S_q(p)$ is the cylindrical coordinate of point p relative to point q, and n is the surface normal at point q [JH98] (please refer to Figure 3-5).

Figure 3-5: Cylindrical coordinate of point p relative to oriented point q.

The geometric consistency between two correspondences P1 and P2 is then calculated as follows:

$D = \max(d(P_1, P_2),\, d(P_2, P_1))$  (3-6)

Since d is not symmetric, the maximum of the two distances is used to define the geometric consistency, D. When D is small, P1 and P2 are geometrically consistent: the scene and model points in P1 and P2 are the same distance apart, and their surface normals make the same angle with each other. Correspondences whose D is greater than that of at least one quarter of the other correspondences are eliminated as outliers, and the result is a list of correspondences [P1, P2, ..., Pn] that are the most likely to be geometrically consistent. In order to find the transformation, the correspondences should be grouped together. Correspondences whose points are close to each other generate transformations that are more sensitive to noise in the point positions [YF99]. As a result, correspondence points that are far apart are more suitable for calculating the transformation.
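The consistency test of Equations (3-4) to (3-6) can be sketched as below. This is a minimal illustration under our reading of the reconstructed equations: a correspondence is represented as a tuple (scene point, scene normal, model point, model normal), and the function names are hypothetical.

```python
import numpy as np

def cyl(p, q, n):
    """Eq. (3-5): cylindrical coordinates [a, b] of point p relative to
    the oriented point (q, n)."""
    d = np.asarray(p, float) - np.asarray(q, float)
    b = float(np.dot(n, d))
    a = float(np.sqrt(max(np.dot(d, d) - b * b, 0.0)))
    return np.array([a, b])

def d_gc(P1, P2):
    """Eq. (3-4): asymmetric consistency distance between correspondences
    P = (scene point, scene normal, model point, model normal)."""
    s1, _, m1, _ = P1
    s2, ns2, m2, nm2 = P2
    u, v = cyl(m1, m2, nm2), cyl(s1, s2, ns2)
    return np.linalg.norm(u - v) / (np.linalg.norm(u) + np.linalg.norm(v))

def D_gc(P1, P2):
    """Eq. (3-6): symmetric consistency; small D means the pair is
    geometrically consistent."""
    return max(d_gc(P1, P2), d_gc(P2, P1))
```

Two correspondences produced by the same rigid transformation give D = 0; perturbing one scene point raises D.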
Two correspondences are geometrically consistent and far apart for small values of W:

$W(P_1, P_2) = \max(w(P_1, P_2),\, w(P_2, P_1))$  (3-7)

where

$w(P_1, P_2) = \dfrac{d(P_1, P_2)}{1 - e^{-\left( \left\| S_{m_2}(m_1) \right\| + \left\| S_{s_2}(s_1) \right\| \right)/2}}$  (3-8)

Using Equation (3-8), more outliers are eliminated, and the remaining correspondences are then passed on to the registration process.

3.8. Registration

The final stage of the matching process is registration. In this process, the surface of the test object is aligned with the surface of the model. Iterative Closest Point (ICP) [NN97] is a nonlinear optimization procedure widely used for registration. The iterative descent procedure used in ICP minimizes the sum of the squared distances between the points in the two views. The correspondence problem is solved by assuming that the scene is approximately aligned with the model, so that each scene point corresponds to its closest model point. The parameter space explored by ICP contains many local minima. To force ICP to converge to the global minimum, the first estimate should be close to the true value. Jost and Hugli [JH02] presented a modified version of ICP to speed up the registration process. Nishino and Ikeuchi [NI02] suggested an alternative method for converging multiple range image registration to a global minimum using a conjugate gradient search with an M-estimator. For a more recent survey on surface registration, refer to [MS04]. We have found that ICP is most suitable for evaluating the locality of the matched points. If the sample points are correctly matched with those on the surface of the model, then the registration will succeed, and the matched points are correctly localized. However, if the matched points are not localized correctly, then the registration process will fail despite the fact that the correct candidate is selected.
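The grouping criterion of Equations (3-7) and (3-8) can be sketched as follows. This is an illustration of our reconstruction of the equation, with hypothetical names: `d12` is the consistency distance of Eq. (3-4) and `sep` stands for the separation term $\|S_{m_2}(m_1)\| + \|S_{s_2}(s_1)\|$.

```python
import math

def w_gc(d12, sep):
    """Eq. (3-8) sketch: the consistency distance d12 is divided by a factor
    that approaches 1 as the separation sep of the two correspondences grows,
    so nearby pairs are penalized."""
    return d12 / (1.0 - math.exp(-sep / 2.0))

def W_gc(d12, d21, sep):
    """Eq. (3-7): symmetric grouping criterion; small W means the pair is
    both geometrically consistent and far apart."""
    return max(w_gc(d12, sep), w_gc(d21, sep))
```

For the same consistency distance, a widely separated pair receives a smaller W than a nearby pair, which is exactly the preference stated in the text.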
The difference between the correct candidates and the true-positive matches is a way to evaluate the localization power of the proposed descriptors. Given a set of corresponding points $(x_i, y_i)$, $i = 1, \ldots, N$, the registration problem consists of computing the transformation that registers each $x_i$ with its corresponding point $y_i$. When corresponding points are aligned with each other, the rigid transformation can be written as

$y_i = R x_i + t$

where

$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}$

is a rotation matrix, and $t = [t_1, t_2, t_3]^T$ is a translation vector. Because of noise effects, the registration problem can be described as an error minimization problem with the error function

$f(R, t) = \sum_i \| y_i - R x_i - t \|^2$

Although it is computationally easier to represent the matrix R as a vector, it is difficult to maintain the constraints of a rotation matrix, which are

$R^T R = I$ and $|R| = 1$

For this reason, the rotation matrix R is represented as a quaternion [K93]. A quaternion q is defined by its real part, a scalar p, and its imaginary part, a vector $\beta = ai + bj + ck$ in $R^3$, and it is denoted as

$q = p + \beta$

A rotation by angle $\theta$ about a vector u is written in quaternion form as

$q = \cos\frac{\theta}{2} + \sin\frac{\theta}{2}\, u$

where u is a unit vector, $\|u\| = 1$. If v is a vector in $R^3$, then

$R v = q v \bar{q}$  (3-9)

where $\bar{q} = p - \beta$, and R is the corresponding rotation matrix.

3.9. Estimating the Transformation

Given n corresponding points $x_i$ and $y_i$, $i = 1, \ldots, n$, the error function is defined as

$E = \sum_{i=1}^{n} \| y_i - R x_i - t \|^2$

where R is a rotation matrix and t is a translation vector. To minimize the value of E, $\frac{\partial E}{\partial t}$ is set to zero:

$\frac{\partial E}{\partial t} = 0 \Rightarrow -2 \sum_{i=1}^{n} (y_i - R x_i - t) = 0$

Simplifying yields

$n t = \sum_{i=1}^{n} y_i - R \sum_{i=1}^{n} x_i$

which is rewritten as

$t = \bar{y} - R \bar{x}$

where $\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i$ and $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ are the centroids of the sets $y_i$ and $x_i$, respectively.
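The quaternion rotation of Equation (3-9) can be verified with a short sketch; the quaternion is stored as (p, a, b, c) for $q = p + ai + bj + ck$, and the function names are ours.

```python
import numpy as np

def qmul(q1, q2):
    """Hamilton product of quaternions stored as (p, a, b, c) = p + ai + bj + ck."""
    p1, v1 = q1[0], np.array(q1[1:])
    p2, v2 = q2[0], np.array(q2[1:])
    return np.array([p1 * p2 - np.dot(v1, v2),
                     *(p1 * v2 + p2 * v1 + np.cross(v1, v2))])

def rotate(v, u, theta):
    """Eq. (3-9): Rv = q v q_bar with q = cos(theta/2) + sin(theta/2) u,
    where u is a unit axis and v is embedded as a pure quaternion."""
    u = np.asarray(u, float) / np.linalg.norm(u)
    q = np.array([np.cos(theta / 2.0), *(np.sin(theta / 2.0) * u)])
    q_bar = q * np.array([1.0, -1.0, -1.0, -1.0])  # conjugate: p - beta
    return qmul(qmul(q, np.array([0.0, *v])), q_bar)[1:]
```

A quarter turn about the z axis carries the x axis onto the y axis, and any vector along the rotation axis is left fixed.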
Setting $Y_i = y_i - \bar{y}$ and $X_i = x_i - \bar{x}$ yields

$E = \sum_i \| Y_i - R X_i \|^2$  (3-10)

Rewriting the error function of Equation (3-10) using the quaternion form of Equation (3-9) yields

$E = \sum_{i=1}^{n} \| Y_i - q X_i \bar{q} \|^2$

where q is the quaternion associated with the matrix R, and $\|q\|^2 = q\bar{q} = \bar{q}q = 1$. Then

$E = \sum_{i=1}^{n} \| Y_i - q X_i \bar{q} \|^2 \|q\|^2$

and multiplying through by q on the right gives

$E = \sum_{i=1}^{n} \| Y_i q - q X_i \|^2$  (3-11)

Equation (3-11) can be rewritten as $E = q^T B q$, where

$B = \sum_{i=1}^{n} A_i^T A_i$

and

$A_i = \begin{bmatrix} 0 & (X_i - Y_i)^T \\ Y_i - X_i & [Y_i + X_i]_\times \end{bmatrix}$

where the cross-product matrix of a vector v is

$[v]_\times = \begin{bmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{bmatrix}$

Minimizing E under the constraint $\|q\|^2 = 1$ is a linear least squares problem, and the solution is the eigenvector of B associated with the smallest eigenvalue of the matrix.

3.10. Matching

The signatures of the oriented points are calculated and compared with the signatures of the models, which are calculated in the same way. Our experiments show that the signatures created from an object with the proposed modeling technique are more similar to the signatures created from the same object than to the signatures created from other objects. To match the signatures, the flatness and orientation signatures of a sample point on the surface of the test object are first compared to the flatness and orientation signatures of all models using Equation (3-2), and the final match is then calculated using Equation (3-3). For each sample point, the top 0.1% of the best matches are selected. Then at most M top matches within the top 0.1% of the best matches for each model are chosen, and outliers are removed by geometric verification as described in Section 3.7. A voting mechanism counts the number of times each model is matched with the test object. The model that receives the highest vote is selected as the candidate model. The candidate model is then passed to the registration process.
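The eigenvector solution for the optimal quaternion can be sketched as below, assuming already-centered point sets $X_i$ and $Y_i$ as in Equation (3-10); the function names are ours.

```python
import numpy as np

def cross_matrix(v):
    """[v]x such that cross_matrix(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_quaternion(X, Y):
    """Minimize E = sum ||Y_i q - q X_i||^2 = q^T B q with B = sum A_i^T A_i;
    the optimal unit q is the eigenvector of B for the smallest eigenvalue."""
    B = np.zeros((4, 4))
    for x, y in zip(X, Y):
        A = np.zeros((4, 4))
        A[0, 1:] = x - y
        A[1:, 0] = y - x
        A[1:, 1:] = cross_matrix(y + x)
        B += A.T @ A
    eigvals, eigvecs = np.linalg.eigh(B)  # eigenvalues in ascending order
    return eigvecs[:, 0]

def quat_to_R(q):
    """Rotation matrix of the unit quaternion q = (p, a, b, c)."""
    p, a, b, c = q
    return np.array([
        [p*p + a*a - b*b - c*c, 2*(a*b - p*c),         2*(a*c + p*b)],
        [2*(a*b + p*c),         p*p - a*a + b*b - c*c, 2*(b*c - p*a)],
        [2*(a*c - p*b),         2*(b*c + p*a),         p*p - a*a - b*b + c*c]])
```

For noise-free data $Y_i = R X_i$, the matrix B has a zero eigenvalue whose eigenvector is the quaternion of R (up to sign, which leaves the recovered matrix unchanged).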
If the average error of the final alignment falls below a threshold value, then the candidate model is selected as the matched model. The details of the matching process are shown in Figure 3-6 and Algorithm 3.1.

Figure 3-6: Matching and verification process

Algorithm 3.1: Matching Process
Matching process of a test object compared with the signatures of the library models.

BEGIN
  flatnessStack ← flatness signatures of all models in the library
  orientationStack ← orientation signatures of all models in the library
  E ← select N sample points on the surface of the test object
  for each sample p in E
    Fs ← flatness signature of p
    Os ← orientation signature of p
    Fd ← dissimilarity of Fs with flatnessStack
    Od ← dissimilarity of Os with orientationStack
    unlikeness ← unlikeness(Fd, Od)
    topMatch ← top 0.1% of the matches
    M ← top M matches for each model from topMatch
  topVerified ← Geometric_verification(M)
  voted ← votingMechanism(topVerified)
  candidate ← model with the highest value in voted
  avgError ← Registration(candidate)
  if avgError < threshold
    FinalMatch ← candidate
END

3.11. Parameters

Signatures created with the proposed technique depend on two parameters, also used in creating spin images, that limit the descriptive area of a surface around an oriented point. These two parameters are called the support distance and the support angle [JH99]. The support distance limits the value of R. Small values of R model the local deformation around point P, and large values of R model the global deformation of the surface relative to the oriented point P. The support angle parameter accounts for the fact that, due to occlusion, the entire object cannot be seen from a single viewpoint. It is logical to assume that if the normal of a triangular patch makes an angle greater than a threshold value with the orientation of point P, the patch cannot be seen from the same viewpoint that sees point P.
Therefore, those patches cannot be used to represent the object from that viewpoint. The normal to the surface at a vertex is calculated by averaging the normals of the triangles around the vertex. To decrease noise effects, the normals of the triangles are averaged inside a sphere with a specific radius called the base-radius.

3.12. Models and Settings

The Princeton Benchmark(4) 3D models were used in our experiments. The list of the models used in our experiments is shown in Figure 3-10 and Figure 7-10. Please refer to Figure A-1 and Table A-1 in Appendix A for more details of the models and their dimensions. Models M32, M48, M49, M88, M111, M605, M1147, and M1622 were used as test models in our experiments. Normal noise with standard deviation equal to 1 was added to each vertex of the surface of the test models to create the test objects (please refer to Figure 3-7). To demonstrate the robustness of the signatures to clutter, a large portion of each test object was removed by selecting a random patch as the seed patch on the surface of the test object. Surface patches totalling up to 50 percent of the total surface of the object were then removed.

4 http://shape.cs.princeton.edu/benchmark

Figure 3-7: Test Models and Test Objects. Odd columns: original models (M32, M48, M49, M88, M111, M605, M1147, M1622). Even columns: the corresponding test objects.

Figure 3-8: Sample of cluttered test objects. Left: 10% clutter. Middle: 30% clutter. Right: 50% clutter.

Figure 3-8 shows sample test objects that are cluttered randomly over 10 to 50 percent of their surface. The black patches were removed and were not considered in the signature creation.
The clutter level was measured as a percentage of the total surface area of the object. We used CAD models for the modeling and signature creation process. The base-radius was set to 2.5% of the maximum dimension of the models, and the models were sampled from the base-radius to half of their maximum dimension in 20 equal intervals. In the real world, we assume that the dimensions of the test objects under experiment are known, and the parameters of the descriptor signatures can be adjusted accordingly.

Figure 3-9: Support angle. Left: M48 with an oriented point, the base-radius, and a sphere with radius equal to the maximum support distance. Right: Angle between two normals.

A sample oriented point, P, and a sphere with a base-radius equal to 2.5% of the maximum dimension of a model are shown in the left column of Figure 3-9. As indicated in the figure, this is the small portion of the surface of the model that is used to calculate the surface normal by averaging the normals of the triangles around the oriented point. The shaded area around the object shows the size of the sphere when it reaches its maximum radius (the maximum support distance), which is equal to half of the maximum dimension of the object. The right column of Figure 3-9 shows a sphere with center O and radius $R_O$ with an oriented point (P, N) on its surface. Now assume a sphere with center P and radius R (a support distance equal to R). The angle between N and $N_O$ (the normal on the surface of sphere O) satisfies

$\forall R \le R_O \Rightarrow \angle(N, N_O) \le \pi/3$

For R larger than $R_O$, the angle between N and $N_O$ can be larger than $\pi/3$. It is logical to assume that if the normal of a mesh patch makes an angle greater than $\pi/3$ with the orientation of point P, it cannot be seen from the same viewpoint that sees point P. Therefore, those patches cannot be used for modeling the object from that viewpoint.
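The $\pi/3$ bound can be checked numerically with a short sketch (ours, not from the thesis): sample points on a sphere of radius $R_O$, keep those within chord distance $R \le R_O$ of the oriented point P, and measure the angle between the radial normals.

```python
import numpy as np

# Numerical check of the support-angle bound: for points on a sphere of
# radius R_O within chord distance R <= R_O of an oriented point P, the
# angle between the surface normals at P and at the point stays <= pi/3.
R_O = 1.0
P = np.array([0.0, 0.0, R_O])          # oriented point; its normal is +z
rng = np.random.default_rng(0)
pts = rng.normal(size=(20000, 3))
pts *= R_O / np.linalg.norm(pts, axis=1, keepdims=True)   # uniform on the sphere
near = pts[np.linalg.norm(pts - P, axis=1) <= R_O]        # support distance R = R_O
cosines = near @ P / R_O**2            # normals are radial: cos(angle(N, N_O))
max_angle = float(np.arccos(np.clip(cosines, -1.0, 1.0)).max())
```

The chord-angle relation $R = 2 R_O \sin(\phi/2)$ makes the bound exact: $R \le R_O$ implies $\sin(\phi/2) \le 1/2$, hence $\phi \le \pi/3$.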
In this thesis, the maximum support distance (R) is set to half of the maximum dimension of the objects to implicitly account for self-occlusion. It is worth mentioning that all models and test objects are normalized so that they all have a maximum dimension of 200 units.

Figure 3-10: Models used in the experiments.

Table 3-1: List of the models used in the experiments.
M27 M30 M32 M37 M39 M48 M49 M51 M52 M61 M62 M88 M100 M104 M111 M125 M385 M490 M496 M499 M593 M605 M761 M992 M1119 M1121 M1123 M1132 M1136 M1137 M1142 M1146 M1147 M1151 M1153 M1156 M1157 M1161 M1163 M1169 M1179 M1186 M1191 M1193 M1198 M1202 M1204 M1302 M1304 M1310 M1337 M1340 M1341 M1342 M1344 M1346 M1348 M1350 M1413 M1494 M1504 M1543 M1549 M1563 M1570 M1583 M1603 M1813 M1622

Chapter 4
The Accumulative Method

The method introduced in the previous chapter creates identical signatures for both convex and concave surfaces. This is a logical outcome, since convex and concave surfaces with the same parameters are the same surface; the only difference between them is the direction of view, one from the outside and the other from the opposite side (please refer to Figure 6-2). To distinguish between these two surfaces, the convexity signature is introduced in this chapter, and the signatures are then used to match and recognize the test objects.

4.1. Accumulative FOC(5) Signatures

In the accumulative representation technique, the flatness and orientation signatures of an object are calculated by placing a sphere on the surface of the object, as described in the previous chapter (please refer to the left column of Figure 4-1). Figure 4-2 displays sample flatness and orientation signatures on the surface of an object at the same oriented points. Experiments indicate that the flatness signatures are more descriptive than the orientation signatures for this representation technique.
5 FOC: an abbreviation for flatness, orientation, and convexity

Chapter 4: Accumulative Modeling 45

Figure 4-1: Accumulative signatures. Left: Accumulative method. Right: Sample signatures on the surface of an object.

Figure 4-2: Sample signatures. Left: Sample flatness signatures of M48. Right: Sample orientation signatures at the same oriented points.

Figure 4-3: Convexity signatures

The method introduced so far creates identical signatures for both concave and convex surfaces. To distinguish between concave, convex, and planar surfaces, the triangular meshes of the surface are divided into positive, negative, and neutral patches (corresponding to convex, concave, and planar surfaces, respectively) based on the direction of their normals relative to an oriented point. Consider the cross-section of surface S in Figure 4-3. O1 is a point on the surface, and N1 is its normal. Let us assume that we are interested in finding the signatures of the surface relative to the oriented point O1. First, the neutral patches are identified by comparing the normal of each patch to the normal N1. If

$\angle(N_1, N_x) < \alpha$

where $\alpha$ is a minimum angle that depends on the noise level, then the patch with normal $N_x$ is labelled as a neutral patch. To find the convexity or concavity of the surface at points O2 and O3 relative to O1, connect O1 to O2 and O1 to O3 to create two vectors, O1O2 and O1O3. Then find the projections T2 and T3 of N2 and N3 onto O1O2 and O1O3, respectively. If O1Ox and Tx point in the same direction, then the surface at point Ox is convex relative to point O1. If O1Ox and Tx point in opposite directions, then the surface at point Ox is concave relative to O1. For example, the surface at O2 is convex and the surface at O3 is concave relative to the oriented point O1.
The convexity signature at vertex i is calculated as

$C_i = \dfrac{\sum_{c \in \text{convex patches}} S_c - \sum_{x \in \text{concave patches}} S_x}{\sum_{j \in \text{total patches}} S_j}$

where $S_c$, $S_x$, and $S_j$ are the convex, concave, and total surface areas, respectively, circumscribed inside the sphere. The convexity signature is calculated by dividing the difference between the convex area and the concave area by the total area at each step of the signature creation. The convexity value lies between 1 and -1. A convexity value of 1 means that the area considered for signature creation is completely convex. A value of -1 means that the area is completely concave, and a value of zero means that either the area is flat or its average is a planar surface. In addition to the flatness and orientation signatures, the total convex, concave, and neutral areas at each step of signature creation are stored separately. The calculation of the convex and concave areas has no major effect on the processing time, since the same procedures are used for calculating the flatness and orientation signatures. The convexity signatures, along with the flatness and orientation signatures, are used to describe the surface around an oriented point. To accommodate the convexity signatures, Equation (3-3) is modified as

$\text{unlikeness} = \left( \text{dissimilarity}(f_1, f_2)^2 + \text{dissimilarity}(o_1, o_2)^2 + \text{dissimilarity}(c_1, c_2)^2 \right)^{1/2}$  (4-1)

where f1, f2, o1, o2, c1, and c2 are the flatness, orientation, and convexity signatures of oriented points 1 and 2, respectively. Unlikeness is the final measurement used to compare two oriented points in our experiments.

4.2. Experiments

In each experiment, a maximum of six sample points are randomly selected on the surface of the test object, and their signatures are created with the same parameters used to create the signatures of the library of models. The signatures are then compared to the signatures of the models.
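The patch classification and the convexity value can be sketched as follows; the function names and the default $\alpha$ are our illustrative choices.

```python
import numpy as np

def patch_label(o1, n1, ox, nx, alpha=0.1):
    """Label the patch at point ox relative to the oriented point (o1, n1):
    0 = neutral (normals nearly parallel), +1 = convex, -1 = concave,
    using the sign of the projection of Nx on the vector O1Ox."""
    n1 = np.asarray(n1, float)
    nx = np.asarray(nx, float)
    if np.arccos(np.clip(np.dot(n1, nx), -1.0, 1.0)) < alpha:
        return 0
    t = np.dot(nx, np.asarray(ox, float) - np.asarray(o1, float))
    return 1 if t > 0 else -1

def convexity(convex_area, concave_area, total_area):
    """Convexity value C_i = (convex - concave) / total, in [-1, 1]."""
    return (convex_area - concave_area) / total_area
```

On a unit sphere viewed from outside (outward normals), every non-neutral patch is labelled convex relative to any oriented point on the sphere; flipping the normals inward (a bowl interior) flips the label.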
For each sample point, the top 0.1% of the best matches are selected. Then a maximum of M top matches within the top 0.1% of the best matches for each model are chosen, and outliers are removed by geometric verification as described in Section 3.7. A voting mechanism counts the number of times each model is matched with the test object. The model that receives the highest vote is selected as the candidate model. The candidate model is then passed to the registration process. In the experiments, the positive candidate refers to the correct candidate, and the true-positive match refers to the correct match when the average alignment error between the positive candidate and the test object falls below a threshold value. Since we were considering only one object, the results were either true-positive or false-negative. In the experiments, the base-radius was set to 5, the support angle was set to $\pi$ and $\pi/3$, and the support distance was varied from 5 to 100. The library of models consists of 69 toys (please refer to Figure 3-10), comprising 55406 vectors for each of the flatness, orientation, and convexity signatures. The dimensions and details of each model are shown in Table A-1 and Figure A-1 of Appendix A.

4.2.1. Recognition Results

Figure 4-4 and Figure 4-5 show the average recognition results for the test objects created from M32, M48, M49, M88, M111, M605, M1147, and M1622 by adding normal noise with standard deviation equal to 1 to each vertex of the surface of the models (please refer to Figure 3-7). The upper curves in the figures show the rate of positive candidate selection, and the bottom curves show the true-positive recognition results. The alignment error threshold after registration was set to 5% of the size of the test object. With this threshold, the maximum alignment error is about 8 degrees between the test object and the matched model.
This threshold value is sufficient to confirm correct recognition, and the two objects can be aligned further by more iterations of the registration process. The maximum top candidate selection (parameter M) was set to 4. Each experiment was conducted 50 times for each support distance and each test object with a different randomly selected set of sample points. Comparing the average recognition results shown in Figure 4-4 and Figure 4-5 indicates that the recognition rates achieved with the wider support angle ($\pi$) are substantially better than the recognition rates achieved with the limited support angle ($\pi/3$) using the accumulative descriptors. This problem is addressed in Chapter 5 of the thesis.

Figure 4-4: Average recognition results for the support angle set to $\pi$.

Figure 4-5: Average recognition results for the support angle set to $\pi/3$.

Figure 4-6: Effect of parameter M on the recognition rate and processing time

4.2.2. Effect of Parameter M on the Recognition Rate

During matching, a single point can be matched to more than one point for two reasons. First, the symmetry of the object surface can cause the generation of two identical FOC signatures on both sides of an object. Second, spatially close points may have similar FOC signatures. As a result, there are often more than two points that match the sample point on the surface of the test object. Our experiments indicate that selecting the four top matches (M equal to 4) for each sample oriented point is enough for recognition purposes at the noise level used in our experiments. Figure 4-6 shows the effect of the parameter M on the recognition rate and processing time. As indicated in the figure, setting M equal to 4 is the optimum for this parameter in our experiments. Increasing the value of M beyond 4 does not increase the recognition rate significantly, but it does increase the processing time.
The processing time axis has been trimmed for clarity.

4.2.3. Effect of Alignment Error and Candidate Selection

It is obvious that by extending the candidate selection to the second candidate and beyond, the recognition rate increases. The effect of candidate selection on the recognition rate is shown in Figure 4-7. As indicated in the figure, extending the candidate selection from the top candidate to the second candidate increases the recognition rate by about 6% on average. Similarly, by increasing the allowed alignment error, the recognition rate also increases, and it asymptotically reaches the rate of positive candidate selection.

4.2.4. Cluttered Surfaces

In these experiments, a large portion of the surface of the test objects was removed by selecting a random triangular patch as a seed on the surface of the test object. The surface elements that were removed comprised up to 50% of the total surface of the object.

Figure 4-7: Effect of candidate selection on the recognition rate. Support angle set to $\pi$.

Figure 4-8: Recognition results for different levels of clutter. Top: Average positive candidate selection. Bottom: Average true-positive recognition results.

Since the signatures close to the cluttered area of the test objects are not suitable for the recognition process, nine random sample points (instead of the six sample points in the previous experiments) were selected on the surface of the test objects, and their signatures were calculated and compared with the signatures of the library models. The top row of Figure 4-8 shows the average positive candidate selection for different levels of clutter. The bottom row of the figure shows the true-positive recognition results when the average final alignment error falls below a threshold value. In our experiments, the threshold value was set to 10 units (5% of the size of the object).
Comparing the average positive candidate selection and true-positive recognition rates shown in the figure indicates that the true-positive recognition rate is about 20% lower than the positive candidate selection rate for the same level of clutter. This means that although the correct candidate was selected, the registration process failed for the specified threshold value. This indicates that the proposed representation technique is not able to localize the correct match accurately, despite the fact that it can select the correct candidate model. This problem could be fixed by using an alternative registration process that converges to the global minimum instead of the one used in this work. However, we have used this method in order to study the localization accuracy of the proposed method.

4.3. Chapter 4 Conclusions

The recognition experiments for the test objects were conducted using the following parameter values:
1. The support distance is set to 100.
2. The support angle is set to $\pi$ and $\pi/3$.
3. The base-radius is set to 5 (2.5% of the size of the test object).
4. The number of sample points is set to six, and to nine for the cluttered surfaces.
5. The parameter M is set to 4.
6. The candidate selection is set to the first candidate.
7. The final alignment error threshold is set to 5% of the size of the test objects.

Our experiments indicate that increasing the number of sample points on the surface of the test objects, increasing the value of parameter M, extending the candidate selection to the second and third candidates, and increasing the threshold of the final alignment error all improve the recognition results. However, the processing time needed for recognition also increases. In the test cases, a major portion of the processing time is devoted to the signature calculation, while the other items have little effect on the processing time.
Chapter 5
The Differential Method

The flatness and orientation signatures of the accumulative method are accumulated incrementally at each step of the signature creation process. With this representation technique, the signatures become less descriptive as the support distance increases. To overcome this problem, rather than using the whole surface, the surface of the object circumscribed between two steps of the support distance is used for signature creation. In this chapter, the differential method is discussed, followed by the recognition experiments. The recognition results are then compared with those achieved by the accumulative method and by spin images.

5.1. Differential FOC Signatures

The accumulative flatness signatures are created by dividing the projection of the surface by the surface area circumscribed inside a sphere:

$\text{Accumulative flatness signature} = \dfrac{A + \Delta a}{S + \Delta s}$  (5-1)

where $\Delta a$ and $\Delta s$ are the projected area and the surface area added at each increment of $R_i$. As the support distance increases, the values of S and A increase, and the effect of $\Delta s$ and $\Delta a$ in Equation (5-1) decreases. As a result, the signatures reach a relatively steady-state value toward the end of the graph (please refer to Figure 4-2).

Chapter 5: Differential Method 57

Figure 5-1: Differential method

To overcome this problem, rather than accumulating the surface, the surface of the object circumscribed between two steps of the support distance is used for signature creation. Instead of a single sphere, two concentric spheres are used, as shown in Figure 5-1, and the area circumscribed between the two spheres of radii R1 and R2 is used for signature creation:

$\text{Differential flatness signature} = \dfrac{\Delta a}{\Delta s + \varepsilon}$

where $\varepsilon$ is added to each element of S to avoid division by zero.
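The saturation argument above can be made concrete with a short sketch (ours, with illustrative per-shell areas): a sharp change in a single shell barely moves the accumulative value, but shows up at full strength in the differential value.

```python
import numpy as np

def accumulative_flatness(delta_a, delta_s):
    """Eq. (5-1): flatness from the running sums (A + Δa) / (S + Δs)."""
    return np.cumsum(delta_a) / np.cumsum(delta_s)

def differential_flatness(delta_a, delta_s, eps=1e-9):
    """Differential flatness: per-shell ratio Δa / (Δs + ε)."""
    return np.asarray(delta_a, float) / (np.asarray(delta_s, float) + eps)

# A surface that is flat in every shell except shell 10, where it tilts sharply.
ds = np.ones(20)
da = np.ones(20)
da[10] = 0.2
acc = accumulative_flatness(da, ds)     # barely dips at shell 10
diff = differential_flatness(da, ds)    # drops to 0.2 at shell 10
```

The accumulative value at shell 10 is (10 + 0.2)/11 ≈ 0.93, while the differential value is 0.2, which is why the differential signatures remain descriptive at large support distances.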
The orientation signature of the differential method at vertex i is calculated as

$O_i = \frac{1}{m} \sum_{j=1}^{m} \Delta s_j N_j$

where $\Delta s_j$ is the area of patch j circumscribed between the two spheres in the interval $R_j - R_{j-1}$, and $N_j$ is its normal. Figure 5-2 shows sample flatness and orientation signatures for both the accumulative and differential methods on the surface of M48 at the same oriented points. Comparing the signatures of the two methods indicates that the signatures created by the differential method are more discriminatory than those created by the accumulative method.

Figure 5-2: Sample flatness and orientation signatures on the surface of M48. Top row: Sample flatness and orientation signatures of the accumulative method. Bottom row: Sample flatness and orientation signatures of the differential method at the same oriented points.

Similarly, the convexity signature is calculated as follows:

$C_i = \dfrac{\sum_{c \in \text{convex patches}} \Delta S_c - \sum_{x \in \text{concave patches}} \Delta S_x}{\sum_{j \in \text{total patches}} \Delta S_j}$

where $\Delta S_c$, $\Delta S_x$, and $\Delta S_j$ are the convex, concave, and total surface areas circumscribed between the two spheres. The left-hand column of Figure 5-3 displays the total area, convex area, and concave area for a sample oriented point on the surface of a peaks(6) object. The neutral area is not stored as part of the representation parameters, since it can be calculated from the other parameters. The right column of Figure 5-3 displays the convexity signature of a sample oriented point. The signature shown in the figure indicates that the concavity of the surface increases with the support distance, reaching pure concavity at a support distance of 35. The concavity of the surface then decreases as the support distance increases, and the surface becomes purely convex at a support distance of 55.
The convexity of the surface decreases again from support distance 70 to support distance 80, and then increases to a pure convex area at support distance 90. (The peaks surface is a sample surface generated by the peaks function of Matlab.)

Figure 5-3: Convexity signatures. Left: total, convex, and concave area of a sample surface. Right: convexity signature.

5.2. Smoothing

Experiments show that the signatures created by the differential method are more sensitive to noise than those created by the accumulative method. Assume that $S$ and its projection $A$ are two vectors of dimension $d$, $S = (\Delta s_1, \Delta s_2, \ldots, \Delta s_d)$ and $A = (\Delta a_1, \Delta a_2, \ldots, \Delta a_d)$ respectively. Then the flatness signature is calculated by

$$F = \frac{A}{S + \varepsilon}$$

where $F = (f_1, f_2, \ldots, f_d)$ is a vector of dimension $d$, and

$$f_i = \frac{\Delta a_i}{\Delta s_i} \quad \text{for } i = 1, 2, \ldots, d.$$

When $\Delta s_i$ and $\Delta a_i$ are too small, small perturbations caused by noise on the surface of the object affect the flatness signature. To fix this problem, the flatness and convexity signatures are smoothed with a Gaussian-shaped filter (Figure 5-4), where

$$\mu = \mathrm{mean}(S) \quad \text{and} \quad \delta = \mathrm{std}\big(\{s_i : s_i < \mu\}\big).$$

Figure 5-4: Filter used to smooth flatness and convexity signatures in the differential method.

5.3. Experiments

The parameters of the signatures and the sample points selected on the surface of the test objects were exactly the same as in the experiments conducted in Section 4.

5.3.1. Recognition Results

Figure 5-5 and Figure 5-6 show the average recognition results for the test objects. The curve at the top of each figure shows the rate of positive candidate selection, and the curve at the bottom shows the true-positive recognition results, where the alignment threshold error was set to 5% of the size of the test objects. The maximum top candidate selection (M) was set to 4, and each experiment was conducted 50 times for each support distance and each test object with different randomly selected sample points.
As indicated in the figures, there is no major difference between the recognition rates for support angles of π and π/3 with the differential method.

Figure 5-5: Average recognition results for support angle set to π
Figure 5-6: Average recognition results for support angle set to π/3

5.3.2. Cluttered Surface

Figure 5-7 shows the recognition results for test objects whose surfaces are cluttered by up to 50 percent of the original surfaces. The graph in the top row of the figure shows the recognition results for positive candidate selection at different clutter levels. The graph in the bottom row shows the true-positive recognition results. The experiments were conducted with exactly the same parameters as in Section 4.

Figure 5-7: Recognition results for different levels of clutter. Top: average positive candidate selection. Bottom: true-positive recognition results.

5.4. Comparisons

In this section the recognition results of the accumulative and differential methods are compared. Since the spin image is known as one of the best three-dimensional descriptive techniques, the recognition results achieved by the proposed descriptors were also compared with those achieved by the spin image descriptors. The experiments were conducted with exactly the same parameters and the same sample points on the surfaces of the test objects.

5.4.1. Accumulative and Differential Methods

Figure 5-8 shows the difference between the recognition rates of the accumulative and differential methods for both of the support angles used. The parameters of the two experiments and the sample points were exactly the same. In both figures, the average recognition rate of the accumulative method was subtracted from that of the differential method at each support distance.
Each graph = (differential recognition rates) − (accumulative recognition rates)

As indicated in Figure 5-8, the positive candidate selection and true-positive recognition rates of the differential method are both higher than those of the accumulative method. As indicated in the top row of the figure, the average true-positive recognition rate of the differential method is 11 to 27 percent better than that of the accumulative method for the support angle set to π. For the support angle set to π/3, the difference is between 16 and 42 percent. Since a limited support angle is more practical for creating the signatures and for recognition, the differential method is the more appropriate representation method.

Figure 5-8: Comparing recognition results of the accumulative and differential methods. Top: support angle set to π. Bottom: support angle set to π/3.

Figure 5-9 shows the difference between the recognition rates of the differential and accumulative methods for different levels of clutter. Each graph in the figure is the result of subtracting the recognition rates of the accumulative method from those of the differential method. The top row of the figure shows the positive candidate selection, and the bottom row shows the true-positive recognition rates. As indicated in both figures, except for clutter levels of 50 percent and more, the recognition rates of the differential method are better than those of the accumulative method.

Figure 5-9: Comparing recognition rates of the accumulative and differential methods for different clutter levels. Support angle was set to π/3. Top: positive candidate selection. Bottom: true-positive recognition rates.

However, due to the cumulative nature of the accumulative descriptors, they are less sensitive to clutter than the differential descriptors.
Experiments indicate that for clutter levels of 50 percent and more, the recognition rates of the accumulative method are better than those of the differential method. The figures only show the results for clutter levels of 10, 30, and 50 percent; the results for the other clutter levels are omitted for clarity.

5.4.2. Spin Images and Differential Method

For comparison purposes, the spin images of the test objects and models were created with exactly the same parameters. The support angle for both sets of models was set to π/3, and the bin size of the spin images was set to ∆R. Each spin image is a histogram of 20 × 40 bins. The sample points selected in both experiments were the same, and the experiments were conducted with both methods using exactly the same parameters and procedure.

The top row of Figure 5-10 shows the true-positive recognition results of the experiments conducted with the differential method and the spin image. As indicated in the figure, the recognition rates of the differential method are, on average, about 6 percent better than those of the spin image. The bottom row of Figure 5-10 shows the comparison results for cluttered surfaces. Each curve in the figure was calculated by subtracting the recognition rates of the spin images from those of the differential method:

Each graph = (rec. rate of differential method) − (rec. rate of spin image)

As indicated in the figure, the recognition rates of the spin image are much better than those of the differential method for cluttered surfaces.
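For readers unfamiliar with the baseline, a minimal spin-image sketch in the style of Johnson and Hebert follows. It is not the thesis's exact implementation; the binning details (bin size, the 20 × 40 layout, and centring of the β axis) are assumptions:

```python
import numpy as np

def spin_image(points, p, n, bin_size, n_alpha=20, n_beta=40):
    """Sketch of a spin image for the oriented point (p, n): each surface point
    x is mapped to alpha (radial distance from the axis through p along n) and
    beta (signed height along n), then accumulated into a 2D histogram."""
    d = points - p
    beta = d @ n                                        # signed distance along the normal
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta**2, 0.0))
    img = np.zeros((n_alpha, n_beta))
    ia = (alpha / bin_size).astype(int)
    ib = ((beta / bin_size) + n_beta // 2).astype(int)  # center the beta bins on p
    ok = (ia >= 0) & (ia < n_alpha) & (ib >= 0) & (ib < n_beta)
    np.add.at(img, (ia[ok], ib[ok]), 1.0)
    return img
```

Because the (α, β) map is invariant to rotation about the normal, spin images match under arbitrary pose, which is what makes them a strong baseline for cluttered scenes.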
Figure 5-10: Comparing recognition results of the spin image and the differential method. Top: true-positive recognition. Bottom: true-positive recognition for different levels of clutter.

The size of the library of descriptors used by the differential method was about 7.5% of the size of the library of descriptors used by the spin images. In Chapter 7, it will be shown that the size of the proposed descriptors can be reduced further while, at the same time, the recognition rates are increased.

5.5. Chapter 5 Conclusions

Because the signatures created by the differential method are more discriminative than those of the accumulative method, they yield a better recognition rate, especially for limited support angles. Furthermore, the differences between the positive candidate selection and true-positive recognition rates of the differential method are smaller than those of the accumulative method. This means that the differential method can localize the correct matches better than the accumulative method. In the next chapter the two methods are compared in more detail.

It was also shown that the recognition rates attained by the proposed representation techniques are slightly better than those achieved by spin images. However, when the objects are cluttered, the recognition rates of the spin image are substantially better than those of the proposed method. This problem will be addressed in Chapter 7 of the thesis.

Chapter 6

Analysis of the Signatures

In Sections 4 and 5 the recognition rates of the accumulative and differential methods were compared, and it was shown that the recognition rate of the differential method is better than that of the accumulative method. In this section the properties of the signatures and the effects of sampling densities, sampling intervals (∆R), and noise on the signatures created by the two methods are studied and compared.

6.1.
Properties of the Signatures

The proposed methods encode the surface of an object into three primitive discriminative signatures at each vertex. The flatness signature encodes the average curvature of the surface relative to a reference point, the orientation signature encodes the fluctuation and symmetry properties of the surface, and the convexity signature encodes the convexity and concavity of the surface.

Figure 6-1 shows the differential FOC signatures of a sphere with the support angle set to π/2. Since a sphere is a symmetrical object, it can be represented by a single set of FOC signatures. As indicated in the figure, the orientation signature of a sphere is a horizontal line with value equal to zero. A sphere is also a convex object, and its convexity signature is a horizontal line with value equal to one.

6.1.1. Flatness Signatures

The flatness signature specifies the average curvature of the surface relative to an oriented point. For a planar surface the flatness signature is equal to one, and for an extremely peaked surface it is equal to zero. Figure 6-1 shows the flatness signature of a sphere with radius R, with the support angle set to π/2. As indicated in the figure, the flatness signature of a sphere begins at the value 1 at the oriented point and decreases as the support distance increases, reaching zero at a support distance equal to or beyond $\sqrt{2}\,R$.

The top row of Figure 6-2 shows two paraboloid surfaces with an oriented point on each surface. The bottom row of the figure shows the signatures of both surfaces at the oriented points shown in the top row. As indicated in the figure, both surfaces have identical flatness and orientation signatures. The only difference between the two surfaces is their convexity signatures: the convexity signatures of surfaces A and B are horizontal lines with values equal to 1 and −1 respectively.
The flatness signatures of both surfaces indicate that the curvature of the surfaces changes within a support distance equal to r. Both surfaces in the top row of the figure were vertically shrunk for a better view.

Figure 6-1: Differential FOC signatures of a sphere with radius R.
Figure 6-2: Top: convex and concave parabolic surfaces. Bottom: FOC signatures of both surfaces A and B.
Figure 6-3: Top: a surface and an oriented point. Middle: the flatness and convexity signatures of the oriented point. Bottom: the flatness signatures of the oriented points on the circle.

The top row of Figure 6-3 shows a surface and an oriented point on it. The figure in the middle row shows the flatness and convexity signatures of the surface referred to the oriented point P. As indicated in the figure, the surface changes from a planar surface to a concave area at a support distance equal to 27.5. By examining the signatures of the surface at oriented points (P1, P2, …, Pn) located on a circle with center P and radius equal to 27.5, the point P4 is identified. As indicated in the figure, by moving the oriented point on the circle closer to the high-curvature point P4, the changes in the flatness signature move toward a support distance equal to zero.

6.1.1.1. Mathematical Representation

In this section the mathematical representation of the flatness signature of a sphere is calculated directly, and then the general case for calculating the flatness signature of implicit surfaces is presented.

Direct Approach

Assume a sphere with a radius equal to R (please refer to the top row of Figure 6-4). To calculate the flatness signature of the sphere in the interval between R1 and R1+dR1, assume the coordinate system is located at oriented point P, with normal N.
The surface of the sphere circumscribed between R1 and R1+dR1 is equal to [MA69]

$$S = 2\pi R\, dh = 2\pi R \big( R\cos\alpha - R\cos(\alpha + d\alpha) \big)$$

By expanding the equation,

$$S = -2\pi R^2 \big( -\sin\alpha \, d\alpha \big) = 2\pi R^2 \sin\alpha \, d\alpha \qquad (6\text{-}1)$$

Figure 6-4: Calculating the flatness signature of a sphere with radius R in the interval between R1 and R1+dR1.

The radii of the smaller and larger circles are equal to $R\sin\alpha$ and $R\sin(\alpha + d\alpha)$ respectively (please refer to the bottom row of Figure 6-4), and the projected annulus S′ is equal to

$$S' = \pi \big( R^2\sin^2(\alpha + d\alpha) - R^2\sin^2\alpha \big) \qquad (6\text{-}2)$$

According to the Mean Value Theorem, if a function f(x) is differentiable on the interval [a, b], then

$$f(b) - f(a) = f'(m)(b - a)$$

where m is a point between a and b. By applying the Mean Value Theorem to Equation (6-2),

$$S' = 2\pi R^2 \sin\alpha \cos\alpha \, d\alpha \qquad (6\text{-}3)$$

Using Equations (6-1) and (6-3), the flatness can be calculated as

$$\text{flatness} = \frac{S'}{S} = \frac{2\pi R^2 \sin\alpha\cos\alpha\, d\alpha}{2\pi R^2 \sin\alpha\, d\alpha} = \cos\alpha = \frac{Z}{R} \qquad (6\text{-}4)$$

Now we can calculate Z in terms of R1 as $d\alpha \to 0$.
Referring to the bottom row of Figure 6-4,

$$R_1^2 = R^2(1 - \cos\alpha)^2 + R^2\sin^2\alpha = R^2\big(1 - 2\cos\alpha + \cos^2\alpha + \sin^2\alpha\big)$$

and after simplification

$$R_1^2 = 2R^2(1 - \cos\alpha) \;\Rightarrow\; \cos\alpha = 1 - \frac{R_1^2}{2R^2}$$

So, for a sphere, the flatness is equal to

$$\frac{S'}{S} = \frac{2R^2 - R_1^2}{2R^2}$$

This is the same result found experimentally, as shown in Figure 6-1, where

$$R_1 = 0 \Rightarrow \frac{S'}{S} = 1 \qquad \text{and} \qquad R_1 = \sqrt{2}\,R \Rightarrow \frac{S'}{S} = 0$$

General Form

Assume the implicit function of a surface is

$$f(x, y, z) = 0$$

Then we can find its local parametric representation as

$$r(x, y) = \big(x,\; y,\; z(x, y)\big)$$

Based on the Implicit Function Theorem, we can find the derivatives of z with respect to x and y as follows:

$$\frac{\partial z}{\partial x} = -\frac{\partial f/\partial x}{\partial f/\partial z} \qquad (6\text{-}5)$$

and

$$\frac{\partial z}{\partial y} = -\frac{\partial f/\partial y}{\partial f/\partial z} \qquad (6\text{-}6)$$

Then the area of S, circumscribed between the two spheres, can be calculated as

$$S = \iint_{S'} \lVert r_x \times r_y \rVert \, dx\, dy$$

where

$$r_x = \Big(1,\; 0,\; \frac{\partial z}{\partial x}\Big), \qquad r_y = \Big(0,\; 1,\; \frac{\partial z}{\partial y}\Big)$$

and

$$r_x \times r_y = \begin{vmatrix} i & j & k \\ 1 & 0 & \partial z/\partial x \\ 0 & 1 & \partial z/\partial y \end{vmatrix} = -\frac{\partial z}{\partial x}\, i - \frac{\partial z}{\partial y}\, j + k$$

so that

$$\lVert r_x \times r_y \rVert = \sqrt{\Big(\frac{\partial z}{\partial x}\Big)^2 + \Big(\frac{\partial z}{\partial y}\Big)^2 + 1} \qquad (6\text{-}7)$$

By substituting $\partial z/\partial x$ and $\partial z/\partial y$ from Equations (6-5) and (6-6) into Equation (6-7),

$$\lVert r_x \times r_y \rVert = \sqrt{\Big(\frac{\partial f/\partial x}{\partial f/\partial z}\Big)^2 + \Big(\frac{\partial f/\partial y}{\partial f/\partial z}\Big)^2 + 1}$$

and after simplification

$$\lVert r_x \times r_y \rVert = \frac{\sqrt{\big(\partial f/\partial x\big)^2 + \big(\partial f/\partial y\big)^2 + \big(\partial f/\partial z\big)^2}}{\big|\partial f/\partial z\big|}$$

The flatness is thus equal to

$$\frac{S'}{S} = \frac{S'}{\displaystyle\iint_{S'} \frac{\sqrt{f_x^2 + f_y^2 + f_z^2}}{|f_z|}\, dx\, dy} \qquad (6\text{-}8)$$

For a sphere, $f(x, y, z) = x^2 + y^2 + z^2 - R^2$, and the partial derivatives of f can be easily calculated:

$$\frac{\partial f}{\partial x} = 2x, \qquad \frac{\partial f}{\partial y} = 2y, \qquad \frac{\partial f}{\partial z} = 2z \qquad (6\text{-}9)$$

Since $\sqrt{f_x^2 + f_y^2 + f_z^2} = 2R$ is constant everywhere on the surface of a sphere, and z is approximately constant over the thin band between the two spheres, the integrand in the denominator of Equation (6-8) can be taken out of the integral.
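The closed form for the sphere derived above can be checked numerically against the underlying geometry (a small self-contained sketch; the helper name is hypothetical):

```python
import math

def sphere_flatness(R, R1):
    """Closed-form flatness of a sphere of radius R at chord distance R1:
    S'/S = (2R^2 - R1^2) / (2R^2), i.e. cos(alpha) with R1^2 = 2R^2(1 - cos(alpha))."""
    return (2 * R * R - R1 * R1) / (2 * R * R)

R = 10.0
# For several alphas, recover R1 from the chord geometry and compare with cos(alpha)
for alpha in (0.2, 0.7, 1.2):
    R1 = math.sqrt(2 * R * R * (1 - math.cos(alpha)))
    assert abs(sphere_flatness(R, R1) - math.cos(alpha)) < 1e-12

# Boundary values match Figure 6-1: flatness 1 at R1 = 0, and 0 at R1 = sqrt(2) R
assert sphere_flatness(R, 0.0) == 1.0
assert abs(sphere_flatness(R, math.sqrt(2) * R)) < 1e-12
```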
Substituting the values of the partial derivatives from Equation (6-9) into Equation (6-8) then gives

$$\frac{S'}{S} = \frac{2z}{\sqrt{4x^2 + 4y^2 + 4z^2}} \qquad (6\text{-}10)$$

and since $x^2 + y^2 + z^2 = R^2$, Equation (6-10) becomes

$$\frac{S'}{S} = \frac{2z}{2R} = \frac{z}{R}$$

which is the same result that we found from the direct calculation (please refer to Equation (6-4)). It can be shown, with more extensive calculations, that similarly simple results can be derived for rotated surfaces.

6.1.2. Orientation Signatures

Orientation signatures encode the symmetry of the surface around an oriented point. As indicated in Figure 6-1, the orientation signature of a sphere is a horizontal line with value equal to zero. By investigating the orientation signatures of a surface, the symmetry axes of the surface can be identified. Figure 6-5 shows a sample orientation signature of a peak-surface (please refer to Figure 3-3). The signature indicates that the surface is symmetrical, as seen from the selected oriented point, up to a support distance equal to R. The small fluctuation of the orientation signature around zero is caused by noise.

Figure 6-5: A sample orientation signature of a peak-surface
Figure 6-6: Symmetry axes of the sample surfaces identified by their orientation signatures.

6.2. Discriminatory Power of the Signatures

To study the discriminatory power of the signatures, random vertices were selected on the surface of the test objects, and their signatures were compared with the signatures of the original models at the same oriented points using Equation (3-2) and Equation (4-1). For comparison purposes, random vertices were also selected on the surface of the library models, and their signatures were compared with the signatures of other randomly selected vertices. The normalized incremental histograms of the dissimilarity and unlikeness of both sets of signatures are shown in Figure 6-7 and Figure 6-8.
The top row of Figure 6-7 shows the comparison results for the flatness signatures. As indicated in the figure, for both the accumulative and differential methods, the dissimilarity histograms of the flatness signatures of the test objects compared with those of the original models have a clear margin relative to the dissimilarity histograms of the randomly selected signatures. The histograms indicate that signatures created from noisy test objects can be compared with the signatures of the library models and recognized. The bottom row of Figure 6-7 shows the same results for the orientation signatures. It is worth mentioning that a smaller unlikeness or dissimilarity value for two signatures means that the signatures are more similar to each other.

Figure 6-7: Normalized incremental histograms of the accumulative and differential methods. Top: dissimilarity measurement of the flatness signatures. Bottom: dissimilarity measurement of the orientation signatures.

Figure 6-8: Normalized incremental histograms of the signatures of the accumulative and differential methods. Top: dissimilarity measurement of the convexity signatures. Bottom: unlikeness measurement of the signatures.

The top row of Figure 6-8 shows the normalized incremental histogram of the dissimilarity values (please refer to Equation (3-2)) of the convexity signatures for both methods. As indicated in the figure, for both the accumulative and differential methods, the dissimilarity histograms of the convexity signatures of the noisy test objects compared with those of the original models have a clear margin relative to those of the signatures randomly selected from the library of all signatures.
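The term "normalized incremental histogram" is read here as a cumulative histogram normalized by the number of comparisons; under that assumption it can be sketched as:

```python
import numpy as np

def normalized_incremental_histogram(values, bins=50, value_range=None):
    """One plausible reading of the normalized incremental histograms in the
    figures: for each dissimilarity/unlikeness threshold (bin edge), the
    fraction of comparisons that fall at or below it."""
    counts, edges = np.histogram(values, bins=bins, range=value_range)
    return np.cumsum(counts) / len(values), edges
```

With discriminative signatures, the curve for true matches rises much earlier than the curve for randomly paired signatures, which is the "clear margin" described above.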
The bottom row of Figure 6-8 shows the normalized incremental histogram of the unlikeness values (please refer to Equation (4-1)) between the signatures of the noisy test objects compared with the signatures of the original models, and between randomly selected signatures. As indicated in the figure, the signatures of the differential method are more discriminatory than those of the accumulative method. For example, in the differential method, while 80% of the signatures of the noisy test objects compared with the signatures of the original models have an unlikeness value less than a threshold value, only 0.6% of the randomly selected signatures compared with other randomly selected signatures have the same unlikeness value (please refer to the bottom row of Figure 6-8).

Based on the graphs shown in the bottom row of Figure 6-8, if we choose only 6 oriented points on the surface of the test object, compare their signatures against the signatures in the library of descriptors, and select only the best 0.6% of the matches for each oriented point for verification, then, based on the binomial distribution of Bernoulli trials, the probability of having at least three correct matches out of the 6 randomly selected oriented points is

$$p = \sum_{i=3}^{6} \binom{6}{i} (0.8)^i (0.2)^{6-i} \approx 0.98$$

This estimate is for an exact match, where a sample oriented point on the surface of the test object matches its exact counterpart on the surface of the model. In reality, oriented points close to each other on the surface of an object have similar FOC signatures. As a result, if the signatures of an oriented point on the surface of the test object match the signatures of oriented points close to the corresponding oriented point on the surface of the model, then the correct transformation between the test object and the matched model can still be established.
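The binomial probability above can be verified directly with the standard library:

```python
from math import comb

# Probability of at least 3 correct matches out of 6 oriented points, where each
# point matches correctly with probability 0.8 (independent Bernoulli trials)
p = sum(comb(6, i) * 0.8 ** i * 0.2 ** (6 - i) for i in range(3, 7))
assert abs(p - 0.98304) < 1e-12   # rounds to the 0.98 quoted in the text
```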
Our experiments indicate that, for recognition purposes with the set of models used in this thesis, selecting only the best 0.1% of the matches for the verification process yields a 98% correct recognition rate. The same comparison for the accumulative signatures gives a ratio of 80% to 4.5%: about 4.5% of the randomly selected signatures have the same unlikeness value as 80% of the signatures of the noisy test objects compared with the signatures of the original models.

6.3. Sampling Densities

The sampling density of the meshes has no significant effect on the signatures as long as there are enough samples to preserve the details of the surface of the models. To illustrate the effect of mesh resolution on the signatures, the models were re-sampled with 1000, 800, 600, 400, and 200 triangles (516, 415, 316, 216, and 113 vertices respectively). Here we used reducepatch(…) from the Matlab library, which reduces the number of faces of a patch while attempting to preserve the overall shape of the original object; the function uses an edge-deletion algorithm and retains a subset of the original vertices. Figure 6-9 shows sample models created with different sampling densities. The signatures of the re-sampled models were calculated at each vertex and compared with the signatures of the original models at the same oriented points. The normalized incremental histogram of the unlikeness values of the re-sampled models compared with the signatures of the original models is shown in Figure 6-10. The top row of Figure 6-10 shows the normalized incremental histogram of the accumulative signatures based on their unlikeness values.
As indicated in the figure, the signatures of the models re-sampled with 1000, 800, 600, 400, and 200 triangles, compared with the original models, have a different unlikeness histogram than that of the noisy test objects compared with the original models. However, the set of signatures of the re-sampled models is still distinct from the other signatures randomly selected from the library of signatures. The figure indicates that the unlikeness value of the signatures is increased by re-sampling; however, the unlikeness values remain essentially the same across the models sampled with 1000, 800, 600, 400, and 200 triangles.

Figure 6-9: Models sampled with different sampling densities.

Figure 6-10: Normalized incremental histogram of the signatures vs. sampling density. Top: accumulative method. Bottom: differential method.

The bottom row of Figure 6-10 shows similar results for the signatures of the differential method. As indicated in the figure, re-sampling has no effect on the signatures of the models as long as there are enough samples to preserve the details of the surface of the models. The figure indicates that the signatures begin to change when the sampling density falls below 400 triangles. However, as can be seen in the figure, the signatures of the models re-sampled with 200 triangles are still distinctive compared with the randomly selected signatures.

6.4. Noise Effects

Because of the averaging nature of the proposed descriptive technique, noise is suppressed. The top row of Figure 6-11 shows sample patches with their normal vectors, and the bottom row of the figure shows the same patches perturbed by noise.
The orientation signature at each step of the signature creation is calculated as follows:

$$O_i = \frac{1}{m}\sum_{j=1}^{m} s_j N_j = \frac{1}{m}\sum_{j=1}^{m} a_j b_j \begin{bmatrix} u_j \\ v_j \end{bmatrix} \qquad (6\text{-}11)$$

where $s_j = a_j b_j$ is the surface of the patch, $N_j$ is its normal, m is the number of patches, and $u_j$ and $v_j$ are the components of $N_j$ in the u and v directions. For the purposes of this manuscript, the calculations are simplified by assuming that noise displaces two adjacent vertices in the v direction only, by the same amount $g_j$, as shown in the bottom row of Figure 6-11. Then

$$s'_j = b_j\sqrt{a_j^2 + g_j^2}$$

where $s'_j$ is the size of the patch affected by noise.

Figure 6-11: Noise effect. Top: patches along with their normals. Bottom: the same patches perturbed by noise.

The perturbed orientation signature is

$$O'_i = \frac{1}{m}\sum_{j=1}^{m} s'_j N'_j = \frac{1}{m}\sum_{j=1}^{m} \frac{s'_j}{a_j^2 + g_j^2} \begin{bmatrix} p_j & -q_j \\ q_j & p_j \end{bmatrix} N_j \qquad (6\text{-}12)$$

where $p_j = a_j\sqrt{a_j^2 + g_j^2}$, $q_j = g_j\sqrt{a_j^2 + g_j^2}$, and $N_j$ is the normal of the patch before the noise effect. By expanding Equation (6-12),

$$O'_i = \frac{1}{m}\sum_{j=1}^{m} b_j \begin{bmatrix} a_j u_j - g_j v_j \\ a_j v_j + g_j u_j \end{bmatrix} = \frac{1}{m}\sum_{j=1}^{m} a_j b_j \begin{bmatrix} u_j \\ v_j \end{bmatrix} + \frac{1}{m}\sum_{j=1}^{m} g_j b_j \begin{bmatrix} -v_j \\ u_j \end{bmatrix}$$

The effect of noise on the orientation signature can therefore be measured as

$$e = O'_i - O_i = \frac{1}{m}\sum_{j=1}^{m} g_j b_j \begin{bmatrix} -v_j \\ u_j \end{bmatrix} \qquad (6\text{-}13)$$

6.4.1. Specific Case

Assume that the $u_j$ component is zero (the surface is a plane); then $v_j = 1$, since $u_j^2 + v_j^2 = 1$, and

$$e = \frac{1}{m}\sum_{j=1}^{m} g_j b_j \begin{bmatrix} -1 \\ 0 \end{bmatrix} = \frac{b}{m} \begin{bmatrix} -\sum_{j=1}^{m} g_j \\ 0 \end{bmatrix}$$

where, for simplicity, it is assumed as in Figure 6-11 that $b_1 = b_2 = \cdots = b_m = b$. If we assume that the distribution of the noise is a normal distribution with mean equal to zero, then $\sum_{j=1}^{m} g_j \to 0$ as $m \to \infty$, and consequently

$$e \to 0 \qquad (6\text{-}14)$$

6.4.2. General Case

Using Equation (6-13) and applying the Cauchy–Schwarz inequality,

$$\lVert e \rVert = \Bigg\lVert \frac{1}{m}\sum_{j=1}^{m} g_j b_j \begin{bmatrix} -v_j \\ u_j \end{bmatrix} \Bigg\rVert \;\le\; \frac{1}{m}\sqrt{\sum_{j=1}^{m} b_j^2}\,\sqrt{\sum_{j=1}^{m} g_j^2} \;=\; b\,\sqrt{\frac{1}{m}\sum_{j=1}^{m} g_j^2} \;=\; b\,\delta \qquad (6\text{-}15)$$

where it is again assumed that $b_1 = b_2 = \cdots = b_m = b$. As shown in Equation (6-15), by increasing the number of samples, and consequently decreasing the size of b, the noise effect can be controlled.
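The bound of Equation (6-15) can be exercised numerically (a sketch with an assumed noise level and patch width; the values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
m, b = 10000, 0.05                      # many small patches of equal width b
g = rng.normal(0.0, 0.01, size=m)       # zero-mean vertex noise (assumed model)
theta = rng.uniform(0.0, 2 * np.pi, m)  # arbitrary unit normals (u_j, v_j)
u, v = np.cos(theta), np.sin(theta)

# Error vector of Eq. (6-13): e = (1/m) * sum_j g_j * b * (-v_j, u_j)
e = np.array([np.sum(-g * b * v), np.sum(g * b * u)]) / m

delta = np.sqrt(np.mean(g ** 2))        # RMS noise, the delta of Eq. (6-15)
assert np.linalg.norm(e) <= b * delta + 1e-15   # Cauchy-Schwarz bound holds
```

In practice the error is far below the bound, since the zero-mean noise terms largely cancel, which is exactly the averaging effect the section describes.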
We have shown the effect of noise on an orientation signature with noise applied in one dimension. However, it can be demonstrated that the findings also hold when noise is applied in all three dimensions.

6.4.3. Noise Effect on Flatness Signatures

The flatness signatures are calculated by the following formula:

$$F = \frac{A}{S} = \frac{(a_1, a_2, \ldots, a_d)}{(s_1, s_2, \ldots, s_d)}$$

where A is the projection of the surface S onto the plane perpendicular to the oriented point on the surface. Noise has no effect on A, but it affects S. As a result, the noisy flatness signature is equal to

$$F_{\text{noisy}} = \frac{A}{S + \text{Noise}} = \frac{(a_1, a_2, \ldots, a_d)}{(s_1 + g_1,\; s_2 + g_2,\; \ldots,\; s_d + g_d)} \qquad (6\text{-}16)$$

where $g_i$ is the noise added to each element of S. For simplicity, assume that the noise affects the patches in one direction only, as shown in Figure 6-11, and that $b_1 = b_2 = \cdots = b_m = b$. Then

$$F_{\text{noisy}} = \frac{\sum_j a_j b_j}{\sum_j b_j\sqrt{a_j^2 + g_j^2}} = \frac{\sum_j a_j}{\sum_j \sqrt{a_j^2 + g_j^2}}$$

where $s'_j = b_j\sqrt{a_j^2 + g_j^2}$ is the surface of a patch affected by noise, and $s_j = a_j b_j$ is its projection. If the standard deviation of the noise is a fraction of the sampling density, so that $g_j = \rho a_j$, and the surface is sampled in equal intervals, $a_1 = a_2 = \cdots = a_m = a$, then

$$F_{\text{noisy}} = \frac{\sum a}{\sum a\sqrt{1 + \rho^2}} = \frac{1}{\sqrt{1 + \rho^2}}$$

If the value of ρ in $g_j = \rho a_j$ is small compared with $a_j$, that is, $\rho\, a_j \ll a_j$ for $j = 1, \ldots, d$, then

$$F_{\text{noisy}} \approx F \qquad (6\text{-}17)$$

In reality, the surface of an object is passed through a smoothing process to reduce the effect of ρ on the flatness signatures.

6.4.4.
Noise Effect on Convexity Signatures

The convexity signature is calculated by the following formula:

$$\text{convexity signature} = \frac{\text{convex surface} - \text{concave surface}}{\text{total surface}}$$

For simplicity, assume that the noise affects the patches in one direction only, as shown in Figure 6-11, and that $b_1 = b_2 = \cdots = b_m = b$. Then

$$\text{original convexity signature} = \frac{\sum_{j=1}^{C} a_j b_j}{\sum_{j=1}^{T} a_j b_j} = \frac{\sum_{j=1}^{C} a_j}{\sum_{j=1}^{T} a_j}$$

and

$$\text{noisy convexity signature} = \frac{\sum_{j=1}^{C} b_j\sqrt{a_j^2 + g_j^2}}{\sum_{j=1}^{T} b_j\sqrt{a_j^2 + g_j^2}} = \frac{\sum_{j=1}^{C} \sqrt{a_j^2 + g_j^2}}{\sum_{j=1}^{T} \sqrt{a_j^2 + g_j^2}} \qquad (6\text{-}18)$$

where C is the total number of convex and concave patches relative to an oriented point, and T is the total number of patches in each iteration of signature creation. As indicated in Equation (6-18), as C approaches T, the noise effect on the convexity signature decreases. When C = T, the noise effect is completely eliminated, assuming that the noise does not change the convexity or concavity of the patches.

6.4.5. Experiments

To show the effect of noise on the signatures, the unlikeness of the signatures of the noisy test objects was compared with the signatures of the original models at the same oriented points. The results are shown in Figure 6-12. As indicated in the figure, for both proposed descriptors, the noise effect decreases as the support distance increases. However, since the number of patches used in calculating the differential signatures is smaller than the number used to calculate the accumulative signatures, noise affects the differential signatures more than it affects the accumulative signatures (please refer to Appendix B).

Figure 6-12: Noise effect vs. support distance. In the figure, Diff and Acc stand for the differential and accumulative methods respectively, and π/3 and π are their support angles.

6.4.6.
Noise Effect on the Alignment Error

Registration using FOC signatures is affected by noise, which perturbs the position and direction of the oriented points. The position problem is eliminated by geometric verification, but a small perturbation in the direction of an oriented point can affect the FOC signatures in both matching and registration. The effect of noise on the direction of the oriented point is reduced by averaging the normals of the patches circumscribed inside a sphere with radius equal to the base-radius.

To show the effect of noise on the alignment error, noisy test objects were created by adding normally distributed noise with different standard deviations in the X, Y, and Z directions to each vertex of the models. Sample points were then selected on the surface of the test objects, and their signatures were compared with the signatures of the original models. The registration process was applied to align the test object and the model, and the average alignment error was used as a measure of the effect of noise on the registration process. Here, the parameter Distance denotes the average alignment error between the test object and the model. Sensor errors are generally systematic, and Gaussian noise is a very simple scene-noise model; however, it is sufficient for demonstrating the effect of noise on the FOC signatures.

Figure 6-13: Model M48 and the noisy test objects created with different noise levels.

Examples of corrupted patches of M48 along with their noise levels are shown in Figure 6-13, and Figure 6-14 shows the average and the variation of the alignment error for each noise level. The surfaces of the test objects were smoothed by averaging and interpolating the neighbouring vertices. As indicated in Figure 6-14, the alignment error (Distance) increases as the standard deviation of the noise increases.
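The noisy-test-object construction and the neighbour-averaging smoothing can be sketched as follows (hypothetical helper names; the neighbour map and the smoothing weight are assumptions, since the exact smoothing formula is not specified):

```python
import numpy as np

def perturb_vertices(vertices, sigma, seed=0):
    """Create a noisy test object by adding zero-mean Gaussian noise with the
    given standard deviation to every vertex in X, Y, and Z."""
    rng = np.random.default_rng(seed)
    return vertices + rng.normal(0.0, sigma, size=vertices.shape)

def neighbour_average_smooth(vertices, neighbors, alpha=0.5):
    """Smooth by blending each vertex with the mean of its mesh neighbours,
    one plausible form of the averaging/interpolation smoothing described
    above. `neighbors` maps a vertex index to its neighbour indices."""
    out = vertices.copy()
    for i, nbrs in neighbors.items():
        out[i] = (1 - alpha) * vertices[i] + alpha * vertices[list(nbrs)].mean(axis=0)
    return out
```

A single pass with a moderate alpha already damps high-frequency vertex noise while largely preserving low-frequency shape, which is the trade-off the experiments rely on.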
However, the results of the experiments indicate that the signatures can tolerate this type of noise. In our experiments the average alignment error threshold was set to 10 (5% of the size of the object). As indicated in the figure, the proposed descriptors can tolerate normal noise with standard deviation equal to or less than 4 for the models used in our experiments.

Figure 6-14: Average and variation of the alignment error for different noise levels.

6.5. Sampling Intervals (∆R)

Different sampling intervals (∆R) create different flatness and convexity signatures. To deal with this problem, instead of storing the flatness and convexity signatures, their components are stored as primitives of the descriptors, and the signatures are then calculated from these components. The flatness and convexity signatures are calculated by the following formulas:

Flatness = A / S = (A_convex + A_concave + A_neutral) / (S_convex + S_concave + S_neutral)

and

Convexity = (S_convex − S_concave) / (S_convex + S_concave + S_neutral)

where A_convex, A_concave, A_neutral, S_convex, S_concave, and S_neutral are the A and S components of the convex, concave, and neutral surfaces, each with dimension d. Figure 6-15 shows the components of a sample flatness and convexity signature on the surface of M48. As indicated in the figure, the values of the components are monotonically increasing, and they can be re-sampled accurately within any interval. As a result, instead of storing flatness and convexity signatures, six discriminative vectors, A_convex, S_convex, A_concave, S_concave, A_neutral, and S_neutral, are stored as components of the signatures of the model on each oriented point, and these components can then be re-sampled within any interval. The flatness and convexity signatures of both the accumulative and differential methods are then calculated from these components.
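Computing the two signatures from the six stored component vectors can be sketched as below, together with a simple linear re-sampler for the monotone components; this is an illustrative sketch under our own naming, not the thesis implementation:

```python
def flatness_from_components(A_cvx, A_ccv, A_neu, S_cvx, S_ccv, S_neu):
    # Flatness = A / S, summing the convex, concave, and neutral components element-wise
    return [(ax + ac + an) / (sx + sc + sn)
            for ax, ac, an, sx, sc, sn in zip(A_cvx, A_ccv, A_neu, S_cvx, S_ccv, S_neu)]

def convexity_from_components(S_cvx, S_ccv, S_neu):
    # Convexity = (S_convex - S_concave) / (S_convex + S_concave + S_neutral)
    return [(sx - sc) / (sx + sc + sn) for sx, sc, sn in zip(S_cvx, S_ccv, S_neu)]

def resample(comp, new_d):
    # Linear interpolation of a monotonically increasing component vector to new_d samples
    d = len(comp)
    out = []
    for k in range(new_d):
        t = k * (d - 1) / (new_d - 1)
        i, frac = int(t), t - int(t)
        out.append(comp[-1] if i >= d - 1 else comp[i] * (1 - frac) + comp[i + 1] * frac)
    return out
```

For a perfectly flat, fully convex patch, A equals S at every interval and the flatness is identically 1, while the convexity is also 1.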
The normal component (N_j) of the orientation signatures cannot be stored and re-sampled like the components of the flatness and convexity signatures. However, our experiments indicated that orientation signatures can be down-sampled safely. Note that orientation signatures cannot be up-sampled.

Figure 6-15: Components of flatness and convexity signatures.

6.5.1. Accumulative Signatures

Because of the cumulative nature of the components of the accumulative method, A_convex, S_convex, A_concave, S_concave, A_neutral, and S_neutral (please refer to Figure 6-15) can be re-sampled, and the flatness and convexity signatures can be calculated accurately. Experiments indicate that the sampling intervals have no major effect on the orientation signatures, and orientation signatures can only be safely down-sampled for matching purposes. Figure 6-16 shows the normalized incremental histogram of the unlikeness values of randomly selected signatures sampled with 40 and 80 intervals, where R/∆R equals 40 and 80 respectively. They were then re-sampled and compared with the same signatures sampled with 20 intervals. As indicated in the figure, the effect of the sampling intervals is negligible compared to the effect of the noise added to the test objects.

Figure 6-16: Effect of re-sampling on accumulative signatures.

6.5.2. Differential Signatures

In the differential method, changing the sampling intervals results in different signatures. Figure 6-17 shows a flatness signature calculated with different intervals (∆R). As indicated in the figure, an oriented point has a different flatness signature when it is sampled with a different interval.
As in the accumulative method, instead of storing the flatness and convexity signatures of the differential method, their accumulative components are stored as components of the flatness and convexity signatures on each oriented point, and the differential flatness and convexity signatures with the required intervals are then calculated. The differential flatness signature is calculated by

F = A / (S + ε)

where S = (∆s_1, ∆s_2, ..., ∆s_d) is the surface circumscribed between two spheres with radii R_i and R_{i+1}, ∆R = R_{i+1} − R_i, and A = (∆a_1, ∆a_2, ..., ∆a_d) is the projection of S onto the plane Π. F is a vector of dimension d, with

f_i = ∆a_i / (∆s_i + ε)  for i = 1, 2, ..., d.

In the accumulative method, the elements of S and A are calculated by s_{i+1} = s_i + ∆s_{i+1} and a_{i+1} = a_i + ∆a_{i+1} respectively. Consequently, the components of the flatness signatures of the differential method can be calculated from the components of the accumulative signatures for each interval: ∆s_1 = s_2 − s_1 and ∆a_1 = a_2 − a_1. The same rules apply to the convexity signatures, so instead of storing the convexity signatures, the accumulative components are stored, and the convexity signatures of the differential method are then calculated from these components.

Figure 6-17: Sample differential flatness signature calculated with different intervals. Top: ∆R = 5. Middle: ∆R = 2.5. Bottom: ∆R = 1.25.

The signatures shown in the middle and bottom rows of Figure 6-17, which were sampled with 40 and 80 intervals respectively, were re-sampled with 20 intervals using the above-mentioned method. The left column of Figure 6-18 shows the flatness signatures re-sampled with 20 intervals from those shown in Figure 6-17. As indicated in the figure, the re-sampled signatures are very similar to the original signatures. The right column of the figure shows an orientation signature, first sampled with 80 and 40 intervals, and then re-sampled with 20 intervals.
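Deriving a differential signature at a coarser interval from the stored accumulative components can be sketched as follows; this is illustrative only, and the step parameter and ε value are our choices:

```python
def differential_from_accumulative(s_acc, a_acc, step=1, eps=1e-9):
    """Compute differential flatness f_i = (a_{i+step} - a_i) / ((s_{i+step} - s_i) + eps)
    from accumulative components; step > 1 yields a coarser sampling interval."""
    s = s_acc[::step]
    a = a_acc[::step]
    ds = [s2 - s1 for s1, s2 in zip(s, s[1:])]
    da = [a2 - a1 for a1, a2 in zip(a, a[1:])]
    return [dai / (dsi + eps) for dai, dsi in zip(da, ds)]
```

With step = 2 or step = 4 this reproduces the down-sampling from 80 or 40 intervals to 20 intervals described above.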
As indicated in the figure, all three orientation signatures are very similar.

Figure 6-18: Flatness and orientation signatures of the differential method calculated in 20, 40, and 80 intervals. The signatures calculated in 40 and 80 intervals are down-sampled to 20 intervals using their accumulative components.

Figure 6-19 shows the effect of changing the sampling intervals on differential signatures. As indicated in the figure, the effect of different sampling intervals on the unlikeness value is less than the effect of the normal noise added to the test objects. The figure also indicates that the signatures become more distinctive as the sampling ratio increases. For example, the signatures sampled with 20 (original signatures) and 40 intervals (ratio of ½) are more similar than the signatures sampled with 20 and 80 intervals (ratio of ¼). Down-sampling of the orientation signature in the differential method can be done as safely as the down-sampling of the accumulative orientation signatures. Experiments indicate that 20 intervals are enough to match and to recognize the models used in this work.

Figure 6-19: Effect of sampling intervals on differential signatures.

6.6. Chapter 6 Conclusions

In this chapter, we argued that the FOC signatures can tolerate a range of sampling densities, a range of sampling intervals, and noise. Table 6-1 summarizes the comparison between the signatures of the accumulative and differential methods; in the table, one symbol stands for better and a second symbol stands for much better. As indicated in the table, the accumulative method is more robust to noise than the differential method, and it resists changes in the sampling intervals (∆R) much better than the differential method.
Conversely, the differential method shows better results against changes in sampling density, and its signatures are more discriminating than the signatures produced by the accumulative method. Furthermore, the differential method works more efficiently in limited-support-angle situations, and it has better locality power than the accumulative method. Thus, due to the discriminatory power of the differential signatures, the differential method is more suitable for object recognition.

Table 6-1: Comparing the accumulative and differential methods (criteria: noise; sampling intervals (∆R); sampling density; discriminatory power; limited support angle; locality).

Finally, it has been shown that the descriptors introduced in this thesis adequately address the requirements for an effective representation for object recognition, as described in Chapter 1. These requirements include free-form object representation, concavity and convexity distinction, orientation and scale independence, robustness to occlusion and clutter, tolerance to an acceptable noise level, and the ability to handle different sampling densities and sampling intervals. In the next chapter, it will be shown that the proposed descriptors are also efficient and scalable; in other words, this technique can be used to represent and to reliably recognize a variety of 3D objects with a shared pool of data.

Chapter 7

Clustered Signatures

In the last three chapters we have compared and contrasted the properties of the two methods introduced in this thesis, and we found that the recognition rate of the differential method is better than the recognition rate of the accumulative method. The number of descriptors created by all descriptive techniques, including the proposed method, increases linearly as more models are added to the list of library models. As a result, efficiency still remains one of the main problems with these methods.
This problem increases the size of the library of descriptors and makes the recognition process too costly. Since a limited support angle is more practical, in this chapter we use the differential method with the support angle set to pi/3. It will be shown that the flatness, orientation, and convexity signatures (FOC signatures) can be shared to create a pool of shared data that represents a variety of objects. A probabilistic method is then used to match and recognize objects reliably. It will also be shown that this method not only makes the library of descriptors much smaller, but also increases the recognition rates. The parameters and models used in this chapter are exactly the same as the ones used in Chapter 5 for the differential method and spin images with the support angle set to pi/3.

7.1. Pooling Data

Experiments indicate that oriented points close to each other on the surface of an object have similar FOC signatures. Furthermore, similar surfaces have similar FOC signatures. As a result, the FOC signatures can be shared to describe a variety of objects. The signatures are clustered using a method similar to BIRCH (please refer to Appendix C). Since the number of clusters is not known, we set a threshold for grouping similar signatures into a cluster. To calculate the threshold value, we refer to the experiments in Section 6.2 of the previous chapter. The threshold value is set to the median of the dissimilarity values of the noisy signatures compared with the original signatures, as shown in Figure 6-7 and Figure 6-8:

threshold = median(dissimilarity(noisy_signatures, original_signatures))

The threshold is used as the maximum inter-cluster distance between the signatures of the clusters [A04]. Here a common covariance matrix (Σ) is used for the clusters.
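The threshold-based grouping can be illustrated with a minimal single-pass clustering sketch in the spirit of BIRCH; the actual clustering used in the thesis follows Appendix C, and this greatly simplified version is only meant to show the role of the threshold:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def threshold_cluster(signatures, threshold):
    """Assign each signature to the nearest existing centroid if it lies within
    the threshold; otherwise start a new cluster. Centroids are running means."""
    centroids, counts, assignment = [], [], []
    for s in signatures:
        best, best_d = -1, threshold
        for i, c in enumerate(centroids):
            d = euclidean(s, c)
            if d <= best_d:
                best, best_d = i, d
        if best < 0:
            centroids.append(list(s))
            counts.append(1)
            assignment.append(len(centroids) - 1)
        else:
            counts[best] += 1
            n = counts[best]
            centroids[best] = [(c * (n - 1) + x) / n for c, x in zip(centroids[best], s)]
            assignment.append(best)
    return centroids, assignment
```

With a well-chosen threshold, signatures from similar surface regions collapse into a shared cluster, which is exactly the pooling effect exploited in this chapter.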
The shared covariance matrix is calculated by the following equation:

Σ = (Σ_i n_i σ_i) / (Σ_i n_i),  ∀ σ_i | det(σ_i) ≠ 0

where σ_i is the covariance matrix of cluster i, and n_i is the number of signatures in the cluster. The top row of Figure 7-1 shows the number of clusters versus the number of signatures for each set of F, O, and C signatures. As indicated in the figure, the number of clusters does not increase linearly as more signatures are added to the list of signatures. The bottom row of Figure 7-1 shows the histogram of signatures per cluster. As indicated in the figure, half of the signatures are grouped in clusters with 22 signatures or more, and the other half in clusters with 22 signatures or fewer. In our experiments, since the objects are symmetrical, clusters with only one signature are deleted as outliers.

Figure 7-1: Clusters. Top: Number of clusters versus number of signatures. Bottom: Histogram of the number of signatures per cluster.

7.2. Object Recognition

The probability of an object, given signature s, from a list of n models is calculated by Bayes' rule:

P(O_n | s) = P(s | O_n) P(O_n) / Σ_n P(s | O_n) P(O_n)   (7-1)

Since each signature may come from any of the clusters,

P(s | O_n) = Σ_i P(s | C_i) P(C_i)

where C_i is cluster i,

P(s | C_i) = 1 / ((2π)^{d/2} |Σ|^{1/2}) · exp(−(1/2) (s − µ_i)ᵀ Σ⁻¹ (s − µ_i))

and

P(C_i) = r_mi / N_m

where r_mi is the number of signatures of model m in cluster i, and N_m is the total number of signatures of model m in all clusters.
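The shared covariance and the Gaussian cluster likelihood above can be sketched with NumPy; this is illustrative only, and singular cluster covariances are skipped, as the det(σ_i) ≠ 0 condition requires:

```python
import numpy as np

def pooled_covariance(covs, counts):
    """Shared covariance: sum of n_i * sigma_i over sum of n_i, restricted to
    clusters whose covariance matrix is non-singular."""
    keep = [(n, np.asarray(c, float)) for n, c in zip(counts, covs)
            if abs(np.linalg.det(c)) > 1e-12]
    num = sum(n * c for n, c in keep)
    return num / sum(n for n, _ in keep)

def gaussian_likelihood(s, mu, sigma):
    """P(s | C_i) for a d-dimensional Gaussian with shared covariance sigma."""
    s, mu = np.asarray(s, float), np.asarray(mu, float)
    d = s.size
    diff = s - mu
    norm = 1.0 / ((2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(sigma)))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff)
```

At the cluster mean with an identity covariance in two dimensions, the likelihood is 1/(2π), which is a quick sanity check on the normalization constant.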
To simplify the calculation, assume all objects have the same prior probability; then Equation (7-1) becomes

P(O_n | s) = Σ_i P(s | C_i) P(C_i) / Σ_n Σ_i P(s | C_i) P(C_i)   (7-2)

Since there are three different types of signatures and clusters, the probability of model O_n, given the signature set FOC, is equal to

P(O_n | foc) = Π_{j=1}^{3} P(O_n | s_j) ≡ Σ_{j=1}^{3} log P(O_n | s_j)   (7-3)

where s_j is one of the f, o, and c signatures. Since one outlier with probability equal to zero in Equation (7-3) would affect all further calculations, a zero probability is substituted with the lowest probability calculated over all clusters in Equation (7-3). For the N sample points selected on the surface of the test object, the probability of object O_n given the signature sets FOC is equal to

P(O_n) = Π_{j=1}^{N} P(O_n | foc_j) ≡ Σ_{j=1}^{N} log P(O_n | foc_j)

The model with the highest value of P(O_n) is chosen as the candidate model.

7.3. Geometric Verification

Because geometric consistency between the test object and the candidate model has not yet been checked, the positive candidate model selected in the previous section may not be the correct match. To make the recognition more flexible, the top M most probable candidates are selected and passed on to the geometric verification process. For geometric verification, corresponding oriented points on the surfaces of the candidate models are needed for each set of FOC signatures. Since the flatness signatures are more descriptive than the orientation and convexity signatures, the corresponding oriented points are selected from the most probable clusters of the flatness signatures. In this way, there may be many corresponding points (r_mi may be large) for each candidate model, so the corresponding oriented points are clustered based on their distances and the angles of their oriented points before the geometric verification process.
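The per-model scoring described above, with the zero-probability substitution of Equation (7-3), can be sketched as follows; the probability values in the example are placeholders, not experimental data:

```python
import math

def log_score(probs):
    """Sum of log-probabilities; a zero probability is replaced by the smallest
    nonzero probability in the list so that one outlier does not zero the product.
    Assumes at least one probability is nonzero."""
    floor = min(p for p in probs if p > 0)
    return sum(math.log(p if p > 0 else floor) for p in probs)

def best_model(per_model_probs):
    """per_model_probs[n] lists P(O_n | foc_j) over the sample points; returns
    the index of the model with the highest total log score."""
    scores = [log_score(ps) for ps in per_model_probs]
    return max(range(len(scores)), key=scores.__getitem__)
```

The substitution makes a single unmatched signature merely lower a model's score instead of eliminating the model outright.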
This process is necessary to limit the number of samples, particularly if the models are sampled at a fine resolution; the clustering reduces the number of sample points to a manageable number. In our experiments, we clustered the matched points only if their distance was less than 5 units (2.5% of the size of the object) and their angle was less than pi/18. To verify the oriented points geometrically, we use the same method used to verify spin images [JH99]. Assume that s_x and m_y represent a match between a point x in the scene and a point y on the surface of a model. Now, let C_1 = [s_1, m_1] and C_2 = [s_2, m_2] be two correspondences between the scene and the model. The geometric consistency between the two matches is calculated as follows:

d_{2,1}(P_1, P_2) = ||S_{m2}(m_1) − S_{s2}(s_1)|| / ||S_{m2}(m_1) + S_{s2}(s_1)||

where

S_p(q) = (a, b) = (√(||q − p||² − (n_p · (q − p))²), n_p · (q − p))

where p and q are two points in 3D, and n_p is the normal at point p. Since d is not symmetric, the maximum of the two distances is used to define the geometric consistency D:

D = max(d_{2,1}(P_1, P_2), d_{1,2}(P_2, P_1))

When D is small, P_1 and P_2 are considered to be geometrically consistent: the scene and model points in P_1 and P_2 are the same distance apart, and their surface normals form the same angle with respect to each other.

7.4. Registration

For the registration process, only three oriented points are needed to estimate the transformation matrix, since three points and three independent normals are enough to calculate the transformation [FP03]. The remaining matches between the test object and the model are then grouped into triples of consistent oriented points (triangulated), and the sum of their D values is calculated:

D_triangulated = D_12 + D_13 + D_23

Since this is a simple table look-up, the process is fast and efficient.
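The spin coordinates S_p(q) and the geometric-consistency distance can be sketched directly from the definitions above; this is an illustrative implementation, with our own function names:

```python
import math

def spin_coords(p, n, q):
    """S_p(q) = (a, b): b is the signed height of q along the (unit) normal n at p,
    and a is the radial distance of q from the line through p along n."""
    d = [qi - pi for qi, pi in zip(q, p)]
    b = sum(ni * di for ni, di in zip(n, d))
    a = math.sqrt(max(sum(di * di for di in d) - b * b, 0.0))
    return a, b

def d_pair(c_model, c_scene):
    """Normalized distance between two spin-coordinate pairs."""
    num = math.hypot(c_model[0] - c_scene[0], c_model[1] - c_scene[1])
    den = math.hypot(c_model[0] + c_scene[0], c_model[1] + c_scene[1])
    return num / den if den else 0.0

def consistency(sm2_m1, ss2_s1, sm1_m2, ss1_s2):
    """D = max of the two directed distances; small D means the two correspondences
    place their points the same distance apart with consistent normals."""
    return max(d_pair(sm2_m1, ss2_s1), d_pair(sm1_m2, ss1_s2))
```

For example, a point at (3, 0, 4) seen from an oriented point at the origin with normal (0, 0, 1) has spin coordinates (3, 4), and perfectly matching correspondences yield D = 0.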
The consistent oriented points with the lowest D_triangulated values are selected and used for the initial estimation of the transformation matrix. Since calculating the rotation while keeping |R| = 1 and R Rᵀ = I is a well-known problem, a quaternion [K93] is used to calculate the rotation matrix R. The ICP method is then used to register the test object in the scene to the model.

7.5. Experiments

In our experiments the support distance was set between 5 and 100 with intervals of 5, the support angle was set to pi/3, and the base-radius was set to 5. Six random oriented points were selected on the surface of the noisy test objects, and their FOC signatures were created. The signatures were then compared and matched to the library of descriptors used in the previous experiments, which consists of 69 toys comprising 55406 vectors for each of the F, O, and C descriptive signatures. The signatures were grouped into 9906, 7060, and 2523 clusters for the F, O, and C signatures respectively. Each noisy test object was tested 50 times for each support distance with a new set of randomly selected oriented points.

7.5.1. Results

The top row of Figure 7-2 shows the results of positive candidate selection from the first to the fourth candidate. As indicated in the figure, extending the candidate selection to the second candidate and beyond increases the positive recognition rate. The bottom row of the figure shows the true-positive recognition rates when the average alignment error between the test object and the selected model falls below a threshold value after the registration process; the threshold is set to 10 units in our experiments. The false-positive cases are those in which similar objects are recognized as the target model.
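The rotation estimate used in the registration step of Section 7.4 (a rotation R with |R| = 1 and R Rᵀ = I recovered via a unit quaternion) can be sketched with Horn's closed-form quaternion method; this is an illustrative stand-in for the thesis code:

```python
import numpy as np

def horn_rotation(P, Q):
    """Least-squares rotation R (det(R) = 1, R Rᵀ = I) mapping the point set P
    onto Q, via the unit-quaternion method of Horn (1987)."""
    P = P - P.mean(axis=0)  # remove translation by centering both sets
    Q = Q - Q.mean(axis=0)
    S = P.T @ Q             # cross-covariance matrix
    # 4x4 symmetric matrix whose top eigenvector is the optimal unit quaternion
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],        S[2,0]-S[0,2],        S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0],        S[2,0]+S[0,2]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],        -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[2,0]+S[0,2],        S[1,2]+S[2,1],        -S[0,0]-S[1,1]+S[2,2]]])
    w, v = np.linalg.eigh(N)
    q = v[:, -1]            # eigenvector of the largest eigenvalue = unit quaternion
    w0, x, y, z = q
    return np.array([
        [1-2*(y*y+z*z), 2*(x*y-w0*z), 2*(x*z+w0*y)],
        [2*(x*y+w0*z), 1-2*(x*x+z*z), 2*(y*z-w0*x)],
        [2*(x*z-w0*y), 2*(y*z+w0*x), 1-2*(x*x+y*y)]])
```

Given three or more non-collinear correspondences, this recovers a proper rotation even from noisy matches, and its result can seed the subsequent ICP refinement.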
A comparison of the recognition results shown in Figure 7-2 with the recognition results of the method used in Chapter 5 (please refer to Figure 5-6) indicates that while the size of the representative data became much smaller, resulting in an improvement in processing time, the recognition rates also increased. Furthermore, the recognition rates for positive candidate selection and the true-positive recognition rates are much closer with this method, indicating that the proposed descriptive technique localizes the matched points better than the methods used in the previous sections. This is partially the result of isolating outliers in the clusters with low r_mi, and consequently decreasing their effect in Equation (7-2).

Figure 7-2: Average recognition results. Top: Average positive candidates. Bottom: Average true-positive recognition rate.

7.5.2. Cluttered Surfaces

Figure 7-3 shows the recognition rates for cluttered test objects. The top row of the figure shows the average positive candidate selections for different levels of clutter, and the bottom row shows the average true-positive recognition rates when the alignment error falls below 10 units. In the recognition experiments for the cluttered objects, nine sample points were randomly selected on the surface of each cluttered test object to reduce the effect of sample points selected near the cluttered area, since sample points near the cluttered area are not reliable for matching purposes.

Figure 7-3: Average recognition results for cluttered objects. Top: Positive candidate selection. Bottom: True-positive recognition rate.

7.5.3. Comparison

For comparison purposes, the spin images of the test objects and models were created with exactly the same parameters: the support angle was set to pi/3, and the bin size of the spin images was set to ∆R.
Each spin image was created as a histogram of 20 x 40 bins. The sample points selected in the experiments were the same, and the experiments were conducted with exactly the same parameters as in Chapter 5. Figure 7-4 shows the difference between the recognition rates of the proposed method and those of the differential method and the spin image method. Each graph is the result of subtracting the recognition rates of the differential method and of the spin images from the recognition rates of the clustered signatures proposed in this chapter. The top curve shows the recognition rate gained by the clustered signatures over the spin images, and the bottom curve shows the gain over the differential method described in Chapter 5. Figure 7-5 shows the difference in true-positive recognition rates between the clustered signatures and the spin images for different levels of clutter. As indicated in the figure, the recognition rate of the proposed technique is considerably better than that of the spin images for low clutter levels and small support distances; however, the recognition rate of the spin images is better for higher clutter levels and large support distances. Figure 7-6 compares the sizes of the libraries of descriptors, each comprising 69 toys (please refer to Figure 3-10), created and used by each method. As indicated in the figure, the size of the differential method's library is 7.5% of the size of the spin image library, and the size of the clustered signatures is only 0.88% of the size of the spin image library. Nevertheless, the recognition rates of the clustered signatures are better than those of the differential method and of the spin image method. One possible explanation for the increased recognition rate of the clustered signatures compared with the differential method is that the outliers were removed in the clustering process.
Figure 7-4: Recognition gained by clustered signatures.

Figure 7-5: Comparing recognition rates of the proposed technique and spin images for different levels of clutter.

Figure 7-6: Comparing the sizes of the libraries of descriptors used by different methods.

7.6. Scalability

So far, the library models used in this thesis comprise 69 toys. An effective representation technique should be scalable to any number of models without sacrificing efficiency. In this experiment the number of models in the library is increased to 207 (please refer to Figure 7-10), three times the number used in the previous experiments. The new set of descriptors consists of 126403 vectors for each of the F, O, and C descriptive signatures. The signatures are grouped into 22300, 15899, and 5795 clusters for the F, O, and C signatures respectively. The complete list of the models and their specifications is given in Appendix A. Experiments were conducted with the same test objects and the same parameters as before. The true-positive recognition rate of the test objects is shown in Figure 7-7. A comparison of the recognition results in Figure 7-7 with those in Figure 7-2 indicates that increasing the number of models has no noteworthy effect on the recognition results. In another experiment, the number of test objects was increased from 8 models, as shown in Figure 3-7, to 21 models (please see Figure 7-8), and the experiment was conducted with the same parameters on the new set of test objects. The true-positive recognition rate is shown in Figure 7-9. A comparison between the recognition rates of Figure 7-7 and Figure 7-9 indicates that the recognition rates with the new set of test objects are slightly degraded. The problem is mainly caused by the long legs of the bug models shown in Figure 7-8.
Establishing corresponding oriented points on the surface of elongated objects is a difficult task in the presence of noise. This problem can partially be addressed by applying smoothing techniques and sampling the elongated objects with a higher density. However, an overall comparison of the recognition rates of Figure 7-2, Figure 7-7, and Figure 7-9 indicates that neither the size of the library of descriptors nor the number of test objects has any major effect on the recognition rates.

Figure 7-7: True-positive recognition rate with no clutter.

Figure 7-8: New set of test objects.

Figure 7-9: True-positive recognition rate with no clutter.

7.7. Complexity

The processing time of creating each signature depends on two parameters, p and n, in the following formula:

T ≈ p n

where p is the number of triangles that form the surface of the object, and n is the number of intervals used in signature creation. The complexity of creating each signature is O(np), and the complexity of creating the descriptors for an object with p triangles is O(np²). The computational complexity of establishing correspondences is

O(SC + MS² + L log L)

where S is the number of sample points selected on the surface of the test object, C is the number of clusters, M is the number of top candidates selected for the verification process, and L log L is a time factor for clustering and sorting the matched sample points. The processing time of calculating the signatures as described in Appendix D can be considerably decreased by approximating the sizes of the intersection of a sphere and a triangular mesh based on the sizes of the triangles and the radius of the circle. In our experiments, we have used both methods.
Based on our experiments, the precise calculation of each signature of model M48, which consists of 671 patches, takes about 6 seconds on average on a PC with a Centrino 1.8 GHz processor and 1 GB of RAM. With the efficient method discussed in Appendix D, each signature of M48 takes a fraction of a second on the same hardware.

7.8. Chapter 7 Conclusions

Our experiments indicate that FOC signatures can be shared to represent a variety of objects. The shared data not only decreases the size of the library of descriptors, and consequently increases the efficiency of the recognition process, but also increases the recognition rate. While the size of the library of descriptors created by the proposed representation technique is much smaller than that created by spin images, the recognition rates are better than those achieved by spin images, except for large support distances and clutter levels of 50 percent or more. This latter problem can be addressed by applying selective rather than random selection of the sample points; this solution is achievable only if the boundaries of the objects are segmented. It was also shown that the proposed representation technique is efficient and scalable, and that adding more models to the list of library models has no major effect on the recognition rate. However, the number of incorrect top candidates increases as more similar models are added to the library, and the consistency check should then be extended to cover more of the top candidates. Our experiments also indicated that the number of clusters grows as a second-order polynomial:

f(x) = a x² + b x + c

where x is the number of signatures and f(x) is the number of clusters. The ranges of the parameters for the clusters used in our experiments are [-0.0008 -0.0016], [0.1240 0.4020], and [-0.2280 0.178] for parameters a, b, and c respectively.
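The quadratic growth model f(x) = ax² + bx + c can be fitted to the observed (signature count, cluster count) pairs with an ordinary least-squares polynomial fit; the data in the test below are synthetic, chosen only to show the mechanics:

```python
import numpy as np

def fit_cluster_growth(num_signatures, num_clusters):
    """Least-squares fit of f(x) = a*x^2 + b*x + c to the observed cluster counts;
    returns the coefficients (a, b, c)."""
    a, b, c = np.polyfit(np.asarray(num_signatures, float),
                         np.asarray(num_clusters, float), 2)
    return a, b, c
```

On noise-free quadratic data the coefficients are recovered exactly, and on the real measurements the fit yields the parameter ranges quoted above.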
Experiments also indicated that when the descriptors of a new set of models are added to the collection of clusters, the number of clusters increases linearly in the beginning but tends toward a saturation level as more similar models are added to the library. The candidate models selected by the proposed modeling technique are based on the similarity of the signatures and not on their geometric consistency.

Figure 7-10: All 207 models used in the experiments.

Chapter 8

Overall Conclusions

8.1. Summary

This thesis presents a new descriptive method for representing and recognizing three-dimensional rigid objects. The proposed method can represent a variety of free-form objects effectively. The effectiveness of the new method is primarily due to its descriptive power and the small size of the descriptors it creates. The effectiveness of the proposed technique is demonstrated through the recognition of multiple objects with a library of 207 rigid three-dimensional objects.

8.2. Contributions

The major contribution of this thesis is the development of the FOC signature, a new representation for three-dimensional surfaces. The proposed method builds on the spin-image, splash, and surface-curvature representations. It has many advantages and the potential to make a significant contribution to the field of computer vision.

FOC signatures are general. The proposed descriptive technique uses triangular meshes directly to represent an object. With this method, a wide variety of three-dimensional rigid objects can be described without restrictions.

FOC signatures are robust to occlusion. The support angle used in the representation technique accounts for occlusions.
Since the entire object cannot be seen from a single viewpoint, it is logical to assume that if the normal of a surface mesh makes an angle greater than a threshold value with the orientation of point P, it cannot be seen from the same viewpoint that sees point P. Therefore, those triangles cannot be used to represent the object from that viewpoint. By setting an appropriate value for the support angle, the self-occlusion problem in representation is resolved. In most of the experiments in this thesis the support angle was set to pi/3.

FOC signatures are robust to clutter. Clutter is surface area that does not belong to the object being investigated. It is mainly caused by a nearby object or an object located between the object and the viewer, and it degrades the recognition rate in the matching process. There are two methods for dealing with clutter in object recognition: one type of recognition system uses a segmentation process to remove the clutter from the object of interest before recognition, while the other type handles the clutter directly and localizes the features of the object. In this thesis, we have used the first method by removing a large portion of the noisy test objects. However, since adding and removing surface patches have a similar effect on the creation of FOC signatures, FOC signatures can also be used directly to deal with clutter without segmentation by adjusting the value of the support distance parameter. To demonstrate the robustness of the signatures to clutter, the results of the extensive recognition experiments in Section 7.5.2 indicate that the recognition results for objects cluttered over less than 50 percent of their surface are better than the recognition results achieved by spin images. The parameters, matching algorithm, and registration processes used for both descriptive techniques are similar in this work.
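The self-occlusion filter described at the start of this section (patches whose normals exceed the support angle are excluded from the signature) can be sketched as a simple visibility test; this is an illustrative helper whose names are our own:

```python
import math

def patches_within_support_angle(patch_normals, point_normal, support_angle):
    """Return the indices of patches whose normal makes an angle of at most
    support_angle with the oriented point's normal; the remaining patches are
    treated as self-occluded and excluded."""
    def unit(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    n0 = unit(point_normal)
    cos_limit = math.cos(support_angle)
    return [i for i, n in enumerate(patch_normals)
            if sum(a * b for a, b in zip(unit(n), n0)) >= cos_limit]
```

With the thesis default of pi/3, a patch facing the oriented point directly or tilted up to 60 degrees is kept, while side-facing and back-facing patches are dropped.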
As the cluttered surface and the support distance increase to 50 percent of the surface and 35 percent of the length of the object respectively (please refer to Figure 7-5), the recognition results degrade. This problem can be addressed by selecting oriented points on the surface of the test objects as far from the cluttered area as possible, rather than selecting them randomly.

FOC signatures are robust to patch resolution. The proposed method is robust to the mesh sampling density as long as there are enough samples to preserve the surface details. The robustness of the signatures to sampling resolution removes any restriction requiring the same sampling density in the representation and matching processes. As shown in Section 6.3, the signatures created from objects sampled with different densities are the same, and the sampling density has no major effect on the creation of the signatures.

FOC signatures are robust to sampling intervals. The experiments in this work show that the sampling interval has almost no effect on the signatures created with the accumulative method, as long as there are enough samples to effectively describe the surface of the object. The experiments also show that the signatures of the differential method can easily be created from the signatures of the accumulative method with any required sampling interval. The robustness of the differential FOC signatures to sampling intervals was also demonstrated through an extensive comparison of signatures sampled with different intervals (please refer to Section 6.5).

FOC signatures tolerate sensor noise. The appearance of FOC signatures can be affected by noise perturbations on the surface patches and consequently on the directions of the normals of the oriented points. The averaging nature of the proposed technique has a suppressing effect on the surface perturbation caused by noise. This effect is discussed in Section 6.4.
Noise can also affect the direction of the normal of an oriented point, and small changes in the direction of the normal can affect the appearance of the FOC signature. This problem is also partially addressed by averaging the surface patches circumscribed inside a sphere whose radius equals the base radius. In this thesis, it was assumed that the sensor noise is systematic and can be modeled as normally distributed noise. It was shown that the proposed descriptive technique can tolerate this type of noise.

FOC signatures can be shared in a pool of data

Since similar objects have similar FOC signatures, the FOC signatures created by the proposed descriptive technique can be shared to create a pool of data that represents a variety of objects. Consequently, the number of models in the library can be scaled without a major effect on the recognition process. Pooling the data decreases the size of the library of descriptors and consequently increases the efficiency of the recognition process. Furthermore, it was shown that clustering similar signatures eliminates the effect of outlier signatures caused by noise, increasing the recognition rate as a result. The model library created with 207 objects was used efficiently to recognize the test objects. The recognition rate of the proposed technique is much better than that achieved by spin images when no or low clutter is present. Furthermore, the size of the library of descriptors used by the proposed representation technique is only 0.88 percent of the size of the library of descriptors used by spin images.

FOC signatures encode the surface geometric properties

The signatures created by the proposed descriptive technique encode geometric properties, such as the average curvature, symmetry, and convexity of the surface around an oriented point, rather than generating a mass of data as descriptors.
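The signature-pooling idea described above can be sketched as an incremental clustering of similar signatures into shared centroids. This is an illustrative sketch using a simple distance threshold and a running mean; the thesis uses its own clustering scheme, and the names and the tolerance value here are assumptions.

```python
import numpy as np

def pool_signatures(signatures, tol=0.5):
    """Merge similar signatures into shared cluster centroids so the
    library stores one representative per cluster (minimal sketch)."""
    centroids, counts = [], []
    for s in signatures:
        s = np.asarray(s, dtype=float)
        for i, c in enumerate(centroids):
            if np.linalg.norm(s - c) <= tol:
                # running mean keeps the centroid representative
                counts[i] += 1
                centroids[i] = c + (s - c) / counts[i]
                break
        else:
            centroids.append(s)
            counts.append(1)
    return centroids

# Four signatures collapse into two shared library entries.
sigs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
pooled = pool_signatures(sigs)
```

Averaging within a cluster also dampens isolated outlier signatures, which is the effect credited above with improving the recognition rate.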
Flatness and orientation signatures indicate the average curvature and symmetry of the surface around an oriented point, while convexity signatures indicate the direction of view to a surface. With the proposed descriptive technique, convex and concave surfaces have the same flatness and orientation signatures; the convexity signatures of a convex and a concave surface with the same parameters are symmetrical.

8.3. Further Investigations

8.3.1. Symmetrical Plane of Objects

FOC signatures can be used to find the symmetrical plane of an object. When an object is symmetrical, the FOC signatures of the two sides of the object are similar, but their normals have opposite directions. FOC signatures, therefore, can be used to identify the symmetrical plane of an object. The top row of Figure 8-1 shows M48 with two oriented points matched on its surface. The other oriented points were rejected by examining their relative coordinates and the directions of their normals. The point M, the bisector of the two oriented points, was then identified. The symmetrical plane of the object was identified by finding more matched points and applying RANSAC [Z04] to eliminate the outliers. The bottom row of the figure shows the symmetrical plane of the object.

Figure 8-1: Symmetrical plane of an object. Top: M48 with two oriented points matched. Bottom: Symmetrical plane of the object.

8.3.2. Object Classification

The signatures created with the proposed descriptive technique show promising results for object classification. Experiments indicate that by filtering the signatures, removing the variations caused by the specific details of an object, the signatures can be clustered and used to classify objects. The classification process can also improve object identification rates: objects can first be classified to narrow down the search space, and then identified efficiently.
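The filtering step suggested above, removing object-specific detail before clustering, can be sketched with a simple moving-average filter. This is purely illustrative: the thesis does not prescribe a particular filter, and the function name, window width, and example signatures are assumptions.

```python
import numpy as np

def smooth_signature(sig, width=3):
    """Low-pass filter a signature so fine, object-specific detail is
    removed before clustering for classification (illustrative sketch)."""
    kernel = np.ones(width) / width
    return np.convolve(np.asarray(sig, dtype=float), kernel, mode="same")

# Two signatures from the same hypothetical class, differing only in
# fine detail; smoothing brings their class-level shapes closer.
a = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
gap_raw = np.linalg.norm(a - b)
gap_smooth = np.linalg.norm(smooth_signature(a) - smooth_signature(b))
```

After smoothing, the residual distance reflects coarse shape rather than surface detail, which is what a class-level clustering should group on.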
Experiments also indicate that the set of signatures needed for classification differs from the set of signatures needed for identification. This means that a separate set of signatures is needed for each purpose. This finding needs further investigation.

8.3.3. Consistency Recognition

The recognition process in almost every method includes two separate procedures: first, the signatures are matched with the set of signatures of the library models, and then the geometric consistency of the matched signatures is checked. This is a major source of inefficiency in the recognition process: processing cycles are spent matching signatures that will later be rejected in the consistency-checking step. This not only increases the processing time, but also adds outliers to the list of matched candidates. The probabilistic method used in Chapter 7 of the thesis can be extended to combine matching and verification through the following steps:

1. Select a random point on the surface of the test object and match it with the signatures of the models in the library.

2. Select the matches from the previous step.

3. Select another point on the surface of the test object, and match it only with those signatures that have the same geometric relation to the first one.

4. Continue with a third sample point consistent with the first two sample points, and so on for further samples.

There is a possibility that the first match is a wrong one. As a result, a probabilistic method, such as a variation of the Markov model, should be used to narrow down the search space. The complexity of the search space can be trimmed by eliminating low-probability branches.

8.3.4. General Representation Technique

As discussed in the introductory chapter of the thesis, there are mainly two different representation techniques: view-based representation and model-based representation.
View-based representation techniques use 2D images of the same object viewed from different directions; model-based representation techniques, on the other hand, use the geometric properties of a 3D object. There is a gap in all the proposed techniques: they use either a 2D view of an object or the 3D geometry of the object for representation. An effective representation technique should be able to fill the gap between these two methods. For example, it should be able to represent a 3D object and match the 3D model with its 2D views. Many representation techniques [CTL06] have tried to fill this gap, but a general method for effective 3D object representation is still missing. This is, however, far beyond the scope of this thesis, and it is left for further research.

References:

[A04] E. Alpaydin; Introduction to machine learning, MIT Press, 2004. [AB06] A.S. Mian; M. Bennamoun; R. Owens; Three-dimensional model-based object recognition and segmentation in cluttered scenes. Pattern Analysis and Machine Intelligence, IEEE Transactions on, Volume 28, Issue 10, Oct. 2006, Page(s): 1584-1601. [AD03] A. Tarek; Unsupervised iterative segmentation and recognition of anatomic structures in medical imagery using second-order B-spline descriptors and geometric Quasi-Invariants. Third IEEE Symposium on BioInformatics and BioEngineering (BIBE'03), 2003, pp. 231. [ANP07] P. Antonopoulos; N. Nikolaidis; I. Pitas; Hierarchical Face Clustering using SIFT Image Features. Computational Intelligence in Image and Signal Processing, 2007. CIISP 2007. IEEE Symposium on, 1-5 April 2007, Page(s): 325-329. [B88] G. Borgefors; Hierarchical chamfer matching: a parametric edge matching algorithm. IEEE Trans. Pattern Anal. Machine Intell., vol. 10, pp. 849-865, Nov. 1988. [B90] P.J. Besl; The free-form surface matching problem in Machine Vision for Three-Dimensional Scenes (H. Freeman, Ed.), pp. 25–71.
Academic Press, San Diego, 1990. [BB06] R.H. Bartels; J.C. Beatty; An introduction to splines for use in computer graphics and geometric modeling; The Morgan Kaufmann Series in Computer Graphics, 2006. [BC03] S.F. Buchele; R.H. Crawford; Three-dimensional halfspace constructive solid geometry tree construction from implicit boundary representations. Symposium on Solid Modeling and Applications 2003: 135-144 [BJ85] P.J. Besl; R. Jain; Three dimensional object recognition. Computing surveys, 17(1): 75-145, March 1985. [BLCC00] M.M. Blane; Z. Lei; H. Civi; D.B. Cooper; The 3L algorithm for fitting implicit polynomial curves and surfaces to data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000. [BM00] M. Bennamoun; G.J. Mamic; Object recognition: fundamentals and case studies. Springer, 2000. [BM92] P.J. Besl; N.D. McKay; A method for registration of 3-D shapes, IEEE Transaction on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992. [BNA89] J. Brady; N. Nandhakumar; J. Aggrawel; Recent progress in object recognition from range data. Image and vision computation, 7(4):295-307, November 1989. [BPA01] A. Bonnassie; F. Peyrin; D. Attali; Shape description of three-dimensional images based on medial axis. International Conference on Image Processing, Volume: 3, 7-10 Oct. 2001. Pages:931 - 934. [BPA03] A. Bonnassie; F. Peyrin; D. Attali; A new method for analyzing local shape in three-dimensional images based on medial axis transformation. IEEE Transactions, Aug. 2003. page(s): 700- 705. Volume: 33, Issue: 4. [BPC98] B.F. Bulle; J. Prenninger; C. Eitzinger; Efficient tracking of 3D-robot positions by dynamic triangulation. Instrumentation and Measurement Technology Conference, 1998. IMTC/98. IEEE Volume 1, 18-21 May 1998 Page(s):446 - 449 vol.1. [BS03] T. Bonfort, P. Sturm; Voxel Carving for specular surfaces, Ninth IEEE International Conference on Computer Vision (ICCV'03) - Volume 1, pp. 591, 2003. [BSA91] S.O.
Belkasim; M. Shirdhar; M. Ahmadi; Pattern recognition with moment invariants. Pattern Recognition, Volume 24, Issue 12, 1991, Pages 1117-1138. [BSF02] K.L. Boyer; R. Srikantiah; P.J. Flynn; Multiscale surface organization and description for free-form object recognition. 16th International Conference on Pattern Recognition, Volume 3, Issue 2002 Page(s): 569 - 572. [BYL08] X. Bai; X. Yang; L.J. Lateckib; Detection and recognition of contour parts based on shape similarity. HuaZhong University of Science and Technology, Wuhan, China. 2008. [CD97] C. Chin; C. Dyer; Point signatures: a new representation for 3D object recognition. International Journal of computer vision, 25(1) 63-85. 1997. [CF01] R.J. Campbell; P.J. Flynn; A survey of free-form object representation and recognition techniques. Computer Vision and Image Understanding 81, 166–210, (2001). [CHH99] O. Carmichael; D. Huber; M. Hebert; Large data sets and confusing scenes in 3-D surface matching and recognition. International Conference on 3-D Digital Imaging and Modeling, pp. 358-367, 1999. [CJ96] C. Chua; R. Jarvis; Point Signatures: A new representation for 3D object recognition, International Journal of Computer Vision, vol. 25, no. 1, pp. 63-85, 1996. [CK04] C.M. Cyr; B.B. Kimia; A similarity-based Aspect-Graph approach to 3D object recognition. International Journal of Computer Vision. Volume 57, Issue 1. pp. 5 – 22, 2004. [CK99] P. Chang; J. Krumm; Object recognition with color cooccurrence histograms. Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on Volume: 2 , 23-25 June 1999. Pages: 504. [CL92] L.H. Chen; W.C. Lin; Pattern Recognition (ICPR92), 30 Aug.-3 Sept. 1992, volume 1, Pages:272 – 275. [CRF94] D.E. Chelen; D.W. Ruck; K.H. Fielding; Three dimensional object recognition using a complex autoregressive model. Aerospace and Electronics Conference (NAECON 1994). Proceedings of the IEEE 1994 National , 23-27 May 1994. Pages:259 - 266 vol.1. [CT04] W. Congli; M.G. 
Tsuzuki; Representation of curves and surface in B-Rep solid modelers, ABCM Symposium Series in Mechatronics, Vol.1, pp.498-507. [CTL06] C.B. Chong; T. Tan; F.L. Lim; A model-based approach for rigid object recognition. 18th International Conference on Pattern Recognition (ICPR'06) Volume 3. 2006. pp. 116-120. [CW94] F. Cohen; J. Wang; Modeling image curves using invariant 3-D object curve models-a path to 3-D recognition and shape estimation from image contours. IEEE transactions on pattern analysis and machine intelligence, 16(1): 1-12, January 1994. [DBC94] S. Das; B. Bhanu; H. Chih-Cheng; Generic object recognition using CAD-based multiple representations. CAD-Based Vision Workshop, 1994, Pages:202 – 209. [DHI93] H. Delingette; M. Hebert; K. Ikeuchi; A spherical representation for the recognition of curved objects; Computer Vision, 1993. Volume, Issue, 11-14 May 1993. Pages: 103 – 112. [DI93] T. Damergi; D. Ionescu; A method for subpart decomposition for overlapped object identification. Electrical and Computer Engineering (CCECE93), Vol.2, pages 991-994, 1993. [DIL06] H.B. Darbandi; M.R. Ito; J. Little; Flatness and Orientation Signature for Modeling and Matching 3D Objects; 3DPVT'06, IEEE Computer Society, Pages 161-167. [DIL07] H.B. Darbandi; M.R. Ito; J. Little; Surface Signature-Based Method for Modeling and Recognizing Free-Form Objects, ICV07, Springer Berlin, pages 447-458 [DIL08] H.B. Darbandi; M.R. Ito; J. Little; Clustered Signatures for Modeling and Recognizing 3D Rigid Objects; CVISP08, International Journal of Computer Science and Engineering, Volume 2 Number 4, 2008, pages 232 [DJ97] C. Dorai; A.K. Jain; COSMOS—a representation scheme for 3D free-form objects, IEEE Trans. Pattern Analysis and Machine Intelligence. V.19, 1997, 1115–1130. [DLB97] D. Dion; D. Laurendeau; R. Bergevin; Generalized cylinders extraction in a range image. 3-D Digital Imaging and Modeling (3DIM97), 12-15 May 1997 Page(s):141 – 147. [DN02] P.
Drapikowski; T. Nowakowski; 3D object modelling in mobile robot environment using B-spline surfaces. First International Symposium on 3D Data Processing Visualization and Transmission (3DPVT'02), 2002. pp. 676. [E03] C. Ericson; Real-time collision detection. Elsevier, 2003. [F93] O. Faugeras; Three dimensional computer vision: A geometric viewpoint. MIT Press, 1993. [FDB07] M. Fournier; J.M. Dischler; D. Bechmann; A new volumetric implicit surface data structure and its triangulation algorithm applied to mesh integration. 15th Pacific Conference on Computer Graphics and Applications (PG'07). 2007 ,pp. 445-448. [FJ91] P.J. Flynn; A.K. Jain; CAD-based computer vision: from CAD models to relational graphs. Pattern Analysis and Machine Intelligence, IEEE Transactions on, Volume: 13, Issue: 2, Feb. 1991. Pages:114 – 132. [FP03] D.A. Forsyth; J. Ponce; Computer vision a modern approach. Prentice Hall. 2003. [FVFH95] J.D. Foley; A.V. Dam; S.K. Feiner; J.F. Hughes; Computer Graphics: Principles and Practice in C (2nd Edition); Addison-Wesley Publication, 1995. [FVFH93] J.D. Foley; A.V. Dam; S.K. Feiner; J.F. Hughes; Introduction to Computer Graphics; Addison-Wesley Publication, 1993. [G01] M. Gondim; Dynamic simulations of multi-body systems, Springer 2001. [G90] W.E.L. Grimson; Object recognition by computer: the role of geometric constraints, MIT Press, Cambridge, MA, 1990. [GB92] A. Gupta; R. Bajcsy; Surface and volumetric segmentation of range images using biquadrics and superquadrics. Pattern Recognition, 30 Aug.-3 Sept. 1992. Vol.1, Pages:158 – 162. [GB99] M. Greenspan; P. Boulangar; Efficient and reliable template set matching for 3-D object recognition. 3-D Digital Imaging and Modeling. Issue , 1999 Page(s):230 – 239. [GBFG06] F.D. Goes; F.P.G. Bergo; A.X. Falcao; S. Goldenstein; Adapted dynamic meshes for deformable surfaces. XIX Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'06). 2006, pp. 213-220. [GFWR02] R.C. Gonzalez; R.E. 
Woods; Digital image processing (second edition). Prentice Hall 2002. [GK00] P. Giblin; B.B. Kimia; A formal classification of 3D medial axis points and their local geometry. Computer Vision and Pattern Recognition, 13-15 June 2000. Pages:566 - 573 vol.1. [GK03] P.J. Giblin; B.B. Kimia; On the intrinsic reconstruction of shape from its symmetries. Pattern Analysis and Machine Intelligence, Volume: 25 , Issue: 7 , July 2003. Pages:895 – 911. [GM07] G. Biegelbauer; M. Vincze; Efficient 3D object detection by fitting superquadrics to range image data for robot’s object manipulation, IEEE International Conference on Robotics and Automation Roma, Italy, 10-14 April 2007. [H84] B. Horn; Extended Gaussian Image. Proceedings of the IEEE, 72:1671-1686, 1984. [HBM04] A. Helzer; M. Barzohar; D. Malah; Stable fitting of 2D curves and 3D surfaces by implicit polynomials. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004. [HH00] U. Hillenbrand; G. Hirzinger; Object recognition and pose estimation from 3D-geometric relations. Knowledge-Based Intelligent Engineering Systems and Allied Technologies, 2000. Page(s):113 - 116 vol.1. [HS93] R.M. Haralick , L.G. Shapiro; Computer and robot vision. Addison-Wesley, 1993. [HYCG04] H. Yongli; Y. Baocai; C. Shiquan; G. Chunliang; A 3D facial combination model based on mesh resampling. Signal Processing, 2004. ICSP '04. Page(s):1231 - 1234 vol.2 [IOV05] F. Isgro; F. Odone; A. Verri; An open system for 3D data acquisition from multiple sensor. Computer Architecture for Machine Perception, CAMP 2005. 4-6 July 2005 Page(s):52 – 57. [J97] A.E. Johnson; Spin Image: A representation for 3D surface matching. Phd Thesis, Carnegie Mellon University, 1997. [JH02] T. Jost; H. Hugli; A multi-resolution scheme icp algorithm for fast shape registration. In First International Symposium on 3D Data Processing Visualization and Transmission, pages 540– 543, 2002. [J90] J. Foley; A.V. Dam; S. Feiner; J.
Hughes; Computer graphics- principles and practice. Addison-Wesley, 1990. [JH97] A.E. Johnson; M. Hebert; Recognizing objects by matching oriented points. Computer Vision and Pattern Recognition, 1997. Page(s):684 – 689. [JH98] A.E. Johnson; M. Hebert; Surface matching for object recognition in complex three-dimensional scenes. Image and Vision Computing 16, p 635-651, 1998. [JH99] A. Johnson; M. Hebert; Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Tr. on Pattern Analysis and Machine Intelligence. Vol. 21, N0 5. May 1999. [K93] J. B. Kuipers; Quaternions and rotation sequences: a primer with applications to orbits, Princeton University Press, 1993. [K94] P.M. Kelly; An algorithm for merging Hyper-ellipsoidal clusters. Los Alamos National Laboratory, 1994. [KFS07] J. Kautsky; J. Flusser; F. Sroubek; Implicit invariants and object recognition. 9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications (DICTA 2007). pp. 462-469. [KI93] S.B. Kang; K. Ikeuchi; The complex EGI: A new representation for 3D pose determination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(7): 707-721, July 1993. [KK06] D. Katsoulas; D. Kosmopoulos; Box-like super quadratic recovery in range images by fusing region and boundary information. ICPR 2006. Volume 1, 20-24 Aug. 2006 Page(s):719 – 722. [KL96] V. Krishnamurthy; M. Levoy; Fitting smooth surfaces to dense polygon meshes, SIGGRAPH ’96 New Orleans, Louisiana, August 1996, pp. 313–324. [KLR06] I. Kunttu; L. Lepistö; J. Rauhamaa; Fourier-based object description in defect image retrieval. Machine Vision and Applications archive. Volume 17 , Issue 4, Pages: 211 - 218, 2006. [KS05] R. Klette; K. Scheibe; Combinations of range data and panoramic images - new opportunities in 3D scene modeling. Computer Graphics, Imaging and Vision: New Trends, 2005. 26-29 July 2005 Page(s):3 – 10. [KS90] M. Kirby; L.
Sirovich; Application of the Karhunen-Loeve procedure for the characterization of human faces, IEEE Trans. Pattern Anal. Mach. Intell. 12, 1990, 103–108. [L98] S. Loncaric; A survey of shape analysis techniques. Department of EECE, university of Zagreb. [L99] D.G. Lowe. Object recognition from local scale-invariant features. Int. Conf. Vision (ICCV’99), pp.1150–1157, 1999. [L04] D.G. Lowe; Distinctive image features from scale-invariant key points. International Journal of Computer Vision, Vol. 60, No. 2. (November 2004), pp. 91-110. [LB96] A. Leonardis; H. Bischof; Dealing with occlusions in the eigen space approach, Computer Vision and Pattern Recognition, San Francisco, California, June 1996, pp. 453–458. [LBJ08] K. Lichun; L.K. Bin; Y. Jin; A use of curve moment invariants in recognition of partially occluded objects. IEEE Pacific-Asia Workshop on Computational Intelligence and Industrial Application. 2008. pp. 594-597. [LC87] W.E. Lorenson; H.E. Cline; Marching cubes: A high resolution 3D surface construction algorithm, in Computer Graphics, Proceedings of SIGGRAPH ’87, Vol. 21, pp. 163–169, 1987. [LH06] H.Y. Lin; C.Y. Ho; 3D model acquisition based on projections of level curves. ICPR 2006. Volume 4, Page(s):853 – 856. [LHK98] O.I. Camps; C.Y. Huang; T. Kanungo; Hierarchical organization of appearance-based parts and relations for object recognition, Computer Vision and Pattern Recognition, Santa Barabra, California, June 1998, pp. 685–691. [LHXX02] H. Lei; C.Y. Han; W. Xun; L. Xiaokun; W.G. Wee; A skeleton based shape matching and recovery approach. Image Processing. 2002. Volume: 3 , 24-28 June 2002. Pages:789 - 792 vol.3. [LM92] C. Liao; G. Medioni; Representation of range data with Bspline surface patches. 11th International conference on pattern recognition, Hague, 1992. [LMT07] J. Luo; Y. Ma; E. Takikawa; S. Lao; M. Kawade; B.L. Lu; Person-Specific SIFT Features for Face Recognition. Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. 
IEEE International Conference on Volume 2, 15-20 April 2007 [LSL07] M. Lange; S. Ganebnykh; A. Lange; Moment-based pattern representation using shape and grey scale features. Springer Berlin, Heidelberg. 2007. [LSM94] A. Leonardis; F. Solina; A. Macerl; A direct recovery of super quadratic models in range images using recover-and-select paradigm, Computer Vision — ECCV '94, pp.309-318. [LRY02] B. Lowekamp; P. Rheingans; T.S. Yoo; Exploring surface characteristics with interactive Gaussian images. 13th IEEE Visualization 2002 (VIS 2002). [OI97] K. Ohba; K. Ikeuchi; Detectability, uniqueness, and reliability of eigen windows for stable verification of partially occluded objects, IEEE Trans. Pattern Anal. Mach. Intell. 19, 1997, 1043–1048. [MA69] T.M. Apostol; Calculus Volume 2 ( Multivariable Calculus and Linear Algebra with Applications), Second Edition, John Wiley and Sons, 1969. Section 12.6, Problem 10, p.430. [MB97] J. Ming; B. Bhanu; ORACLE: An integrated learning approach for object recognition. International Journal of Pattern Recognition and Machine Intelligence. 11(6):961-990, 1997. [MK95] H. Murase; S. Nayar; Visual learning and recognition of 3-D objects from appearance, Int. J. Comput. Vision 14, 1995, 5–24. [MLPZ96] J. Mundy; A. Liu; N. Pillow; A. Zisserman; S. Abdallah; S. Utcke; S. Nayar; C. Rothwell; An experimental comparison of appearance and geometric model based recognition, in Object Representation in Computer Vision II pp. 247–269. Springer-Verlag, Cambridge, UK, 1996. [MPH04] Y. Ma; F. Pollick; W.T. Hewitt; Using B-spline curves for hand recognition. 17th International Conference on Pattern Recognition (ICPR'04) - Volume 3. 2004, pp. 274-277. [MS04] C. Matabosch; J. Salvi; X. Pinsach; J. Pag; A comparative survey on free-form surface registration. pages 308-312, February 2004. [N80] V. Nawla; A Guided tour of computer vision. Addison-Wesley Publishing Company, 1993. [NB77] R. Nevatia, T.
Binford; Description and recognition of curved objects. Artificial Intelligence, 8:77-98, 1997. [NI02] K. Nishino; K. Ikeuchi; Robust simultaneous registration of multiple range images, ACCV2002: The 5th Asian Conference on Computer Vision, 2002. [NN97] S.A. Nene; S.K. Nayar; A simple algorithm for nearest neighbour search in high dimensions. IEEE Trans. Pattern Analysis and Machine Intelligence, 19(9), pp. 999-1003, 1997. [RM94] H. Rom; G. Medioni; Part decomposition and description of 3D shapes. Computer Vision & Image Processing, Volume: 1 , 9-13 Oct. 1994 .Pages:629 - 632 vol.1 [RSM01] S.R. Correa; L.G. Shapiro; M. Melia; A new signature-based method for efficient 3-D object recognition. Computer Vision and Pattern Recognition, 2001. CVPR 2001, Volume: 1 , 8- 14 Dec. 2001. Pages:I-769 - I-776 vol.1 [P80] T. Pavlidis; A review of algorithms for shape analysis. Computer Graphics and image processing, 7:243-258, 1978. [PCPS99] J. Ponce; M. Cepeda; S. Pae; S. Sullivan; Shape models and object recognition. D.A. Forsyth et al. (Eds.): Shape, Contour. LNCS 1681, pp. 31-57, 1999. Springer-Verlag, Berlin, Heidelberg. [PDHMS97] K. Pulli; T. Duchamp; H. Hoppe; J. McDonald; L. Shapiro; W. Stuetzle; Robust meshes from multiple range maps. 3-D Digital Imaging and Modeling, 3DIM97, 12-15 May 1997 Page(s):205 – 211. [PDGP05] P. Claes; D. Vandermeulen; L.V. Gool; P. Suetens; Robust and accurate partial surface registration based on variational implicit surfaces for automatic 3D model building. 3-D Digital Imaging and Modeling, 3DIM 2005. 13-16 June 2005 Page(s):385 - 392 . [PLLF06] A. Paiva; H. Lopes; T. Lewiner; L.H. Figueiredo; Robust adaptive meshes for implicit surfaces; XIX Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'06). 2006. pp. 205-212. [PM05] B.M. Planitz; A.J. Maeder; J.A. Williams; The correspondence framework for 3D surface matching algorithms, Computer Vision and Image Understanding, 2005. [PSZ99] M. Pelillo; K. Siddiqi; S.W. 
Zucker; Matching hierarchical structures using association graphs. Pattern Analysis and Machine Intelligence, Volume: 21 , Issue: 11 , Nov. 1999. Pages:1105 – 1120. [PP92] S. Petitjean; J. Ponce; D. Kriegman; Computing exact aspect graphs of curved objects: algebraic surfaces, Int. J. of Comp. Vision 9(3), 231-255, 1992. [S03] H. Samet; Depth-first k-nearest neighbour finding using the MaxNearestDist estimator. IEEE Transactions, Image Analysis and Processing, 2003. [S90] H. Samet; The Design and Analysis of Spatial Structures. Addison Wesley, Massachusetts, 1990. [SA99] S.M. Yamany; A. Farag; Freeform surface registration using surface signatures. Proc. Int. Conf. on Computer Vision, 2: 1098-1104, 1999. [SG92] F. Stein; G. Medioni; Structural indexing: Efficient 3-D object recognition, IEEE Trans. Pattern Anal. Mach. Intell. 14, 1992, 125–145. [SER04] O. Soldea; G. Elber; E. Rivlin; Global curvature analysis and segmentation of volumetric data sets using trivariate B-spline functions. Geometric Modeling and Processing 2004, Page(s): 217 – 226. [SJ06] S. Se; P. Jasiobedzki; Photo-realistic 3D model reconstruction, IEEE International Conference on Robotics and Automation Orlando, Florida. 2006. [SK87] L. Sirovitch; M. Kirby; Low dimensional procedure for the characterization of human faces. Journal of the Optical Society of America A, 2:519--524, 1987. [SKCK05] T. Serre; M. Kouh; C. Cadieu; U. Knoblich; G. Kreiman; T. Poggio; A theory of object recognition: computations and circuits in the feed forward path of the ventral stream in primate visual cortex. MIT-CSAIL-TR-2005-082. [SL01] S. Correa; L. Shapiro; A new signature-based method for efficient 3d object recognition. proc. IEEE Conf. Computer Vision and Pattern Recognition, 1: 769-776, 2001. [SOK00] R. Sagawa; K. Okada; S. Kagami; M. Inaba; H. Inoue; Incremental mesh modeling and hierarchical object recognition using multiple range images. Intelligent Robots and Systems, 2000. (IROS 2000).
[SM95] F. Stein; G. Medioni; Structural indexing: Efficient 3D object recognition of a set of range views. IEEE transactions on pattern analysis and Machine Intelligence, 17(4):344-359, 1995. [SN06] N.D. Salih; D.C.L. Ngo; H. Mellah; 2D object description with discrete segments, Journal of Computer Science 2 (7): 572-576, 2006. [SPR04] J. Salvi; X. Pinsach; R. Garcia; Surface registration from range image fusion Matabosch, Robotics and Automation, 2004. Proceedings. ICRA '04. 2004 IEEE International Conference on Volume 1, 2004 Page(s):678 - 683 Vol.1. [SS02] D. Scharstein; R. Szeliski; "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms," International Journal of Computer Vision, 47 (2002), pp. 7-42. [SSDS99] K. Siddiqi; A. Shokoufandeh; S.J. Dickinson; S.W. Zucker; Shock Graphs and Shape Matching. International Journal of Computer Vision, vol. 35, num.1, pp. 13-32, 1999. [SSL05] J.K. Seo; G.C. Sharp; S.W. Lee; Range data registration using photometric features. Computer Vision and Pattern Recognition, 20-25 June 2005 Page(s):1140 - 1145 vol. 2 [SVSS03] S.K. Singh; M. Vatsa; R. Singh; K.K. Shukla; A comparative study of various face recognition algorithms (feature based, eigen based, line based, neural network approaches). Computer Architectures for Machine Perception, 2003 IEEE International Workshop on 12-16 May 2003 Page(s):12 pp. - 171 [T91] G. Taubin; Estimation of planar curves, surfaces, and nonplanar space curves, defined by implicit equations with applications to edge and range image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 13, 1991, 1115–1138. [TP91] M. Turk; A. Pentland; Face recognition using eigenfaces. IEEE conf. on computer vision and pattern recognition. pp. 586-591, 1991. [T96] J.P. Thirion; The extremal mesh and the understanding of 3D surfaces, Int. J. Comput. Vision 19, 1996, 115–128. [TPR04] G. Tornola; M. Parazzini; P. Ravazzani; F. Grandori; C.
Svelto; 3D-coordinate measurement system and image reconstruction of anatomical parts from unorganized data clouds. Instrumentation and Measurement Technology Conference, 18-20 May 2004 Page(s):487 - 490 Vol.1 [TS89] A. Taza; C. Suen; Discrimination of planar shapes using shape matrices. IEEE Transactions on SMC, 1989. [TSK01] B. Thomas; P.N Sebastian; B.K. Benjamin; D. Kimia; Recognition of shapes by editing shock graphs. ICCV 2001. 7-14 July 2001. Pages:755 - 762 vol.1 [WKW02] W.P. Choi; K.M. Lam; W.C. Siu; An efficient algorithm for the extraction of a Euclidean skeleton. Acoustics, Speech, and Signal Processing, (ICASSP '02), 13-17 May 2002 .Pages:IV-3241 - IV-3244 vol.4 [XZ00] S.S. Xiong; Z.Y. Zhou; A complex nonlinear exponential autoregressive model approach to shape recognition using neural networks Instrumentation and Measurement, IEEE Transactions on Volume 49, Issue 6, Dec 2000 Page(s):1298 – 1304. [YBM04] Y. Yang; O. Brock; R.N. Moll; Efficient and robust computation of an approximated medial axis. ACM Symposium on Solid Modeling and Applications. 2004. [YF02] S.M. Yamany; A. Farag; Surface signatures: an orientation independent free-form surface representation scheme for the purpose of objects registration and matching. Pattern Analysis and Machine Intelligence, Volume: 24 , Issue: 8 , Aug. 2002. Pages:1105 – 1120. [YF99] S.M. Yamany, A. Farag; Freeform surface registration using surface signatures. Proc. Int. Conf. on Computer Vision, 2: 1098-1104, 1999. [YP05] J.J. Yokono; T. Poggio; Boosting a biologically inspired local descriptor for geometry- free face and full multi-view 3D object recognition. Massachusetts institute of technology — computer science and artificial intelligence laboratory. 2005. [YSGZ04] P. Yang; S. Shan; W. Gao; S. Li; D. Zhang; Face recognition using Ada-Boosted Gabor features. IEEE Int. Conf. On Automatic Face and Gesture Recognition (FG2004), pp356- 361, 2004. [Z04] A. Zisserman; Multi-view geometry in computer vision. 
Cambridge University Press, March 2004. [Z95] A. Zisserman; D. Forsyth; J. Mundy; 3D object recognition using invariance. Artificial Intelligence, 78: 239-288. 1995. [ZLQ04] L. Zhang; S.Z. Li; Z.Y. Qu; X. Huang; Boosting local feature based classifiers for face recognition. IEEE Workshop on Face Processing in Video, 2004. [ZRL96] T. Zhang; R. Ramakrishnan; M. Livny; BIRCH: An efficient data clustering method for very large databases. SIGMOD Conference 1996: 103-114. [ZPK02] Y. Zhang; J. Paik; A. Koschan; M.A. Abidi; D. Gorsich; Simple and efficient algorithm for part decomposition of 3-D triangulated models based on curvature analysis. Image Processing, 24-28 June 2002. Pages:III-273 - III-276 vol.3 [ZY96] S.C. Zhu; A.L. Yuille; FORMS: A flexible object recognition and modeling system. International Journal of Computer Vision 20(3), 187-212, 1996.

APPENDICES

Appendix A: Models

Set of the models used in the experiments.

M0 M1 M3 M7 M8 M9 M11 M12 M13 M14 M15 M21 M22 M23 M24 M27 M30 M31 M32 M33 M34 M35 M37 M38 M39 M42 M43 M44 M46 M47 M48 M49 M50 M51 M52 M61 M62 M86 M87 M88 M93 M99 M100 M101 M102 M104 M105 M106 M107 M108 M109 M110 M111 M112 M125 M385 M490 M496 M499 M511 M525 M526 M527 M532 M550 M586 M593 M598 M605 M661 M612 M613 M614 M633 M642 M646 M665 M658 M761 M992 M1110 M1119 M1121 M1123 M1124 M1125 M1126 M1127 M1128 M1130 M1131 M1132 M1133 M1134 M1135 M1136 M1137 M1138 M1139 M1142 M1146 M1147 M1150 M1151 M1153 M1154 M1156 M1157 M1158 M1161 M1163 M1164 M1165 M1166 M1167 M1168 M1169 M1172 M1174 M1175 M1176 M1177 M1179 M1182 M1184 M1186 M1188 M1191 M1193 M1194 M1196 M1198 M1200 M1202 M1204 M1207 M1209 M1211 M1213 M1215 M1217 M1219 M1221 M1223 M1225 M1227 M1229 M1230 M1231 M1232 M1233 M1234 M1235 M1236 M1237 M1238 M1240 M1241 M1243 M1244 M1245 M1246 M1247 M1248 M1249 M1250 M1251 M1300 M1302 M1304 M1310 M1331 M1337 M1340 M1341 M1342 M1344 M1346 M1348 M1350
M1411 M1413 M1472 M1481 M1494 M1504 M1515 M1520 M1521 M1526 M1532 M1543 M1549 M1563 M1567 M1570 M1578 M1583 M1599 M1603 M1605 M1608 M1620 M1622 M1625 M1630 M1813

Figure A-1: The set of models used in the thesis.

Table A-1: Parameters of the models used in the thesis.

Model  Length  Height  Width  Number of Patches  Number of Vertices  Average Patch Area
m0  130.0  110.0  200.0  492  298  60.7
m1  84.2  75.8  200.0  410  255  54.7
m3  190.8  111.9  200.0  912  486  35.2
m7  114.9  116.3  200.0  506  265  90.1
m8  95.5  200.0  170.4  698  423  86.1
m9  169.6  200.0  96.3  902  557  28.9
m11  200.0  101.9  16.0  724  368  35.1
m12  200.0  74.2  113.4  500  278  47.3
m13  111.2  32.1  200.0  1125  491  14.0
m14  200.0  145.5  157.6  1681  891  13.4
m15  200.0  82.5  100.8  724  368  39.6
m21  200.0  105.1  169.1  556  346  78.1
m22  200.0  85.0  138.7  620  333  68.4
m23  94.9  84.4  200.0  878  462  26.1
m24  200.0  85.0  175.0  284  163  167.3
m27  189.5  185.6  200.0  1024  341  171.2
m30  70.4  105.2  200.0  800  390  35.3
m31  101.4  200.0  104.4  1000  524  49.6
m32  75.3  133.3  200.0  2245  1129  15.0
m33  108.9  140.4  200.0  1000  562  59.0
m34  200.0  48.9  109.3  462  233  36.6
m35  161.6  128.6  200.0  1394  699  47.5
m37  80.0  34.7  60.7  1130  567  4.6
m38  200.0  49.6  56.3  930  464  12.7
m39  180.8  200.0  49.4  1156  649  25.2
m42  200.0  39.0  99.4  265  116  47.7
m43  189.9  122.4  200.0  60  41  174.5
m44  200.0  91.3  50.8  798  401  23.6
m46  200.0  62.5  143.1  1094  576  12.0
m47  141.2  25.9  200.0  165  101  71.9
m48  97.5  90.8  200.0  1307  671  24.7
m49  199.9  163.4  140.2  1999  1117  43.1
m50  80.2  113.5  200.0  182  93  194.2
m51  200.0  95.7  110.9  598  309  71.3
m52  200.0  125.4  186.3  560  294  139.5
m61  34.2  88.1  200.0  337  216  54.3
m62  22.4  152.5  200.0  478  241  53.6
m86  34.3  126.8  200.0  245  176  97.2
m87  59.1  165.6  200.0  1331  691  33.5
m88  11.5  31.2  50.0  3220  1618  0.6
m93  200.0  133.2  57.2  464  278  187.8
m99  66.2  102.5  200.0  1000  506  44.9
m100  112.2  91.3  200.0  1208  606  42.3
m101  80.2  138.3  200.0  1000  553  119.3
m102  79.2  89.6  200.0  226  127  198.0
m104  200.0  159.3  54.2  461  233  95.8
m105  51.5  200.0  194.0  260  146  167.0
m106  39.7  133.4  200.0  690  346  35.4
m107  50.3  180.5  200.0  1000  521  42.4
m108  200.0  185.9  54.4  1000  504  43.2
m109  169.5  200.0  92.6  1000  528  104.1
m110  93.8  146.2  200.0  902  453  78.6
m111  82.0  137.5  200.0  2442  1238  22.1
m112  191.7  200.0  131.8  999  529  87.8
m125  86.3  70.2  250.0  2500  1250  13.9
m385  184.0  200.0  126.6  918  674  138.3
m490  35.6  100.0  35.6  1076  574  14.4
m496  32.4  50.0  30.9  1000  502  6.8
m499  148.0  148.0  200.0  636  339  110.1
m511  195.2  200.0  188.6  790  294  304.9
m525  200.0  58.5  56.0  1285  711  34.0
m526  200.0  133.5  100.0  226  120  290.5
m527  200.0  200.0  200.0  348  176  388.5
m532  83.8  83.8  200.0  480  240  90.2
m550  200.0  156.0  194.1  96  64  2315
m586  171.5  66.4  200.0  1372  696  10.6
m593  122.9  200.0  129.2  1352  686  97.8
m598  139.7  94.6  200.0  1148  727  74.3
m605  100.4  200.0  170.9  1300  531  40.5
m611  172.4  200.0  91.4  200  87  273.3
m612  154.7  185.9  200.0  1106  437  106.0
m613  200.0  191.5  57.5  200  136  76.1
m614  78.1  200.0  188.4  684  336  56.3
m633  92.3  200.0  13.2  1728  524  23.5
m642  140.2  164.5  200.0  1104  496  100.4
m646  135.3  141.8  200.0  360  227  439.6
m655  26.1  115.9  200.0  704  364  36.3
m658  34.7  124.7  200.0  560  293  159.5
m761  57.1  100.0  57.1  210  104  84.7
m992  187.3  102.2  200.0  644  491  39.3
m1110  25.6  200.0  77.4  1687  850  11.6
m1119  200.0  74.6  166.4  1550  863  33.8
m1121  161.5  200.0  63.9  1913  1062  19.0
m1123  141.6  200.0  61.8  1756  1015  28.2
m1124  128.0  200.0  45.4  572  390  58.8
m1125  163.0  200.0  52.5  656  434  79.4
m1126  160.7  200.0  44.4  1316  770  33.9
m1127  165.5  200.0  41.0  520  354  75.2
m1128  200.0  198.3  45.7  1232  718  37.3
m1130  200.0  91.4  163.3  4520  2303  13.9
m1131  143.3  200.0  36.2  768  482  49.3
m1132  146.9  200.0  38.9  1428  882  24.7
m1133  156.8  200.0  60.3  1024  624  37.4
m1134  108.9  200.0  31.2  512  338  45.8
m1135  109.7  200.0  32.9  360  264  98.5
m1136  112.4  200.0  43.9  1628  904  10.4
m1137  92.0  200.0  36.8  524  408  53.0
m1138  200.0  191.2  76.5  3586  2050  20.6
m1139  131.0  200.0  52.5  4234  2920  10.0
m1142  200.0  41.4  103.6  2498  1322  15.9
m1146  181.8  62.6  200.0  2995  1700  9.1
m1147  200.0  64.5  181.0  2320  1149  11.4
m1150  151.2  37.1  200.0  1228  622  16.2
m1151  200.0  51.0  194.7  1660  858  15.7
m1153  161.3  71.5  200.0  1478  743  22.7
m1154  173.6  54.6  200.0  1000  520  23.8
m1156  200.0  39.6  169.9  2452  1335  7.5
m1157  200.0  142.9  33.9  2088  1264  9.1
m1158  186.0  90.3  200.0  608  336  61.9
m1161  190.9  63.4  200.0  1692  1486  18.1
m1163  190.7  58.5  200.0  2778  1014  11.0
m1164  181.4  48.6  200.0  620  272  39.5
m1165  200.0  162.0  52.0  768  448  35.1
m1166  178.3  47.8  200.0  772  349  33.5
m1167  141.6  54.4  200.0  3992  2838  9.0
m1168  131.9  46.2  200.0  1106  500  26.0
m1169  134.4  59.9  200.0  2237  1198  22.0
m1172  200.0  157.0  37.7  4928  1965  15.1
m1174  102.7  40.5  200.0  552  281  35.2
m1175  134.8  62.4  200.0  4592  2344  6.5
m1176  164.2  41.2  200.0  642  213  61.3
m1177  175.2  59.5  200.0  1010  507  29.1
m1179  123.0  40.6  200.0  1994  1203  10.0
m1182  194.8  200.0  47.0  354  179  97.9
m1184  102.1  56.5  200.0  382  182  69.1
m1186  98.8  200.0  46.5  1449  729  14.1
m1188  160.5  52.7  200.0  538  279  52.5
m1191  200.0  182.1  31.8  2444  1295  13.2
m1193  104.5  39.2  200.0  2030  1020  8.6
m1194  200.0  152.7  47.7  1188  757  30.9
m1196  200.0  137.1  42.8  804  523  44.9
m1198  134.7  45.1  200.0  462  224  62.4
m1200  96.6  58.0  200.0  450  197  93.5
m1203  98.3  200.0  53.4  874  438  25.1
m1205  200.0  157.7  40.4  556  369  83.7
m1207  200.0  26.0  75.3  669  412  25.1
m1209  191.9  52.7  200.0  480  244  96.9
m1211  188.0  61.6  200.0  852  486  37.8
m1213  91.1  31.7  200.0  504  254  29.5
m1215  45.6  141.0  200.0  308  162  100.9
m1217  122.4  52.3  200.0  594  305  37.9
m1219  32.2  147.9  200.0  364  185  75.3
m1221  200.0  173.0  40.9  688  372  61.0
m1223  139.1  40.1  200.0  2230  1117  12.5
m1225  200.0  148.7  32.7  340  183  176.3
m1227  200.0  156.5  28.2  324  265  82.6
m1229  100.3  200.0  42.1  6422  3388  4.9
m1230  200.0  124.2  60.3  4325  2360  5.6
m1231  128.6  35.7  200.0  132  79  218.0
m1232  194.6  45.8  200.0  418  203  73.6
m1233  102.6  50.0  200.0  516  233  41.8
m1234  174.2  44.2  200.0  492  236  53.2
m1235  200.0  45.0  194.4  376  193  84.5
m1236  134.3  40.8  200.0  828  396  31.5
m1237  200.0  59.8  192.0  480  212  62.6
m1238  101.7  46.9  200.0  396  196  47.4
m1240  176.2  56.5  200.0  378  182  70.6
m1241  200.0  142.9  50.3  720  480  67.2
m1243  137.3  47.1  200.0  596  282  49.2
m1244  123.0  55.6  200.0  442  186  59.4
m1245  200.0  60.6  152.5  426  215  59.6
m1246  165.6  64.8  200.0  784  366  38.8
m1247  138.0  66.1  200.0  1546  777  20.9
m1248  200.0  76.5  128.4  558  296  57.1
m1249  200.0  127.5  75.0  616  310  53.3
m1250  178.9  42.3  200.0  4970  2539  8.6
m1251  134.4  59.9  200.0  2237  1198  22.0
m1300  147.1  31.3  200.0  76  39  564.2
m1302  39.3  55.7  200.0  1876  1058  6.1
m1304  50.8  73.3  200.0  2094  1066  10.3
m1310  157.8  57.7  200.0  2666  1362  6.6
m1331  200.0  54.4  152.0  51  51  789.8
m1337  181.9  200.0  181.9  340  201  305.9
m1340  72.1  200.0  72.1  1457  778  14.2
m1341  74.6  200.0  74.6  816  425  32.2
m1342  123.5  200.0  123.0  2305  1252  27.9
m1344  136.7  200.0  136.7  1930  1093  32.1
m1346  62.5  69.4  200.0  1374  742  20.4
m1348  62.5  69.5  200.0  1382  795  20.2
m1350  200.0  56.1  59.0  1976  1097  15.0
m1411  200.0  56.7  77.3  1368  794  39.3
m1413  92.8  79.9  200.0  566  324  144.4
m1472  80.3  114.1  200.0  846  524  20.5
m1481  68.0  116.6  200.0  1038  547  45.6
m1494  100.4  73.1  200.0  2016  830  44.3
m1504  44.8  19.8  100.0  3269  1621  6.1
m1515  99.2  44.9  200.0  391  250  86.6
m1520  104.9  200.0  48.5  904  378  97.3
m1521  81.5  51.5  200.0  580  308  118.3
m1526  85.3  55.1  200.0  560  320  100.1
m1532  89.9  78.2  200.0  134  132  413.4
m1543  200.0  54.4  91.0  548  386  105.3
m1549  69.4  37.0  200.0  1412  381  41.2
m1563  141.8  100.7  200.0  1108  616  117.1
m1567  99.8  137.8  200.0  216  162  523.0
m1570  43.8  70.9  200.0  3479  1445  27.6
m1578  38.7  72.6  200.0  574  333  85.2
m1583  62.9  114.3  200.0  573  309  119.7
m1599  89.3  200.0  89.3  496  250  74.7
m1603  75.7  200.0  75.7  772  389  45.5
m1605  133.3  200.0  133.3  168  89  378.1
m1608  168.4  200.0  168.4  480  242  220.9
m1620  164.4  200.0  190.3  888  301  133.7
m1622  197.1  178.3  200.0  2130  1054  40.4
m1625  200.0  121.9  124.6  1027  672  25.0
m1630  182.5  88.9  200.0  1120  562  71.2
m1813  140.1  200.0  186.0  1469  773  68.2

Appendix B: Noise Effect

Assume that X is the range data of an object, and that N(0, \delta) is white noise with mean \mu = 0 and standard deviation \delta added to the data:

X_i = X + N(0, \delta)

By averaging m noisy data points X_i we have

\bar{X} = \frac{1}{m}\sum_{i=1}^{m} X_i = X + \frac{1}{m}\sum_{i=1}^{m} N(0, \delta)

The expected value and the variance of the averaged noise are

E\left[\frac{1}{m}\sum N(0,\delta)\right] = 0, \qquad Var\left[\frac{1}{m}\sum N(0,\delta)\right] = \frac{m\,\delta^2}{m^2} = \frac{\delta^2}{m} \;\Rightarrow\; N\!\left(0, \frac{\delta}{\sqrt{m}}\right) \quad (B-1)

As indicated in Equation (B-1), averaging m noisy data points suppresses the standard deviation of the noise by a factor of \sqrt{m}.

Appendix C: Clustering

To cluster FOC signatures, we have used a modified BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies) algorithm [ZRL96]. The algorithm has been slightly modified to produce the best-quality clustering results for our purposes. BIRCH uses a single scan and a CF tree to cluster large amounts of data efficiently. The concepts of Clustering Feature and CF tree are at the core of BIRCH's incremental clustering. A Clustering Feature is a triple that summarizes the information maintained about a cluster. Given N d-dimensional data points {X_i}, i = 1, 2, ..., N, in a cluster, the Clustering Feature (CF) vector of the cluster is defined as the triple

CF = (N, \vec{L}, S), \quad \text{where } \vec{L} = \sum_{i=1}^{N} X_i \text{ and } S = \sum_{i=1}^{N} X_i^2

The CF vector is sufficient for calculating the parameters required for fast and efficient clustering.
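The CF triple (N, L, S) can be maintained incrementally as points arrive, and cluster statistics such as the centroid and radius can be derived from it without revisiting the data. A minimal sketch, in Python rather than the Matlab used in the thesis, with hypothetical class and method names:

```python
import numpy as np

class ClusteringFeature:
    """BIRCH-style CF triple (N, L, S): count, linear sum, sum of squared norms."""
    def __init__(self, dim):
        self.N = 0
        self.L = np.zeros(dim)   # linear sum of the points
        self.S = 0.0             # sum of squared norms of the points

    def add(self, x):
        """Absorb one point into the summary."""
        x = np.asarray(x, dtype=float)
        self.N += 1
        self.L += x
        self.S += float(x @ x)

    def centroid(self):
        return self.L / self.N

    def radius(self):
        # RMS distance of the points to the centroid, computable from (N, L, S)
        c = self.centroid()
        return np.sqrt(max(self.S / self.N - float(c @ c), 0.0))
```

Because two CF triples can also be merged by componentwise addition, the summary supports the incremental, single-scan behaviour that BIRCH relies on.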
In this thesis, instead of storing the CF vector, we store six elements for each cluster:

CF = (N, \mu, \Sigma, vertex, normal, r_m)

where
- N is the number of elements in the cluster,
- \mu is the mean of the cluster,
- \Sigma is the covariance of the cluster,
- vertex is the list of the 3D coordinates of each oriented point in the cluster,
- normal is the list of the normals of each vertex in the cluster,
- r_m is the vector of weights of each model m in the cluster.

Given N d-dimensional data points {X_i}, i = 1, 2, ..., N, in a cluster, the mean \mu of the cluster is calculated as

\mu = \frac{1}{N}\sum_{i=1}^{N} X_i

and the covariance as

\Sigma_{ij} = E\left[(X_i - \mu_i)(X_j - \mu_j)\right]

Given two clusters i and j, with N_i and N_j elements, the mean \mu and covariance \Sigma of the merged cluster are calculated by [K94]:

N_{new} = N_i + N_j

\mu_{new} = \frac{N_i}{N_{new}}\mu_i + \frac{N_j}{N_{new}}\mu_j

\Sigma_{new} = \frac{N_i - 1}{N_{new} - 1}\Sigma_i + \frac{N_j - 1}{N_{new} - 1}\Sigma_j + \frac{N_i N_j}{N_{new}(N_{new} - 1)}(\mu_i - \mu_j)(\mu_i - \mu_j)^T

Figure C-1: Clustering tree used in the thesis.

As indicated in Figure C-1, the clusters are formed at the leaf nodes. If the number of elements in a cluster grows beyond a threshold, the cluster is split into two equal clusters with maximum inter-cluster distance. A new element is added to a cluster only if the intra-cluster distance does not increase beyond a threshold value. The threshold value for clustering the signatures is set to the median of the dissimilarity values of the noisy signatures compared with the original signatures.

Appendix D: Intersection of a Triangular Mesh and a Sphere

There are methods, such as Hodgman's polygon clipping algorithm, to efficiently calculate the intersection of a polygonal mesh with a cube [J90]. Ericson suggested a very efficient method to detect the collision of a sphere with a triangular mesh [E03]. A similar method was suggested by Gondim [G01].
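Returning briefly to the cluster-merge rule of Appendix C: it can be checked numerically against a direct computation over the pooled points. A sketch in Python (the thesis implementation is in Matlab; the function name is hypothetical), using sample covariances:

```python
import numpy as np

def merge_clusters(n_i, mu_i, cov_i, n_j, mu_j, cov_j):
    """Merge two Gaussian cluster summaries (count, mean, sample covariance).

    Implements the update attributed to [K94] in Appendix C:
      mu  = (n_i*mu_i + n_j*mu_j) / n
      cov = ((n_i-1)*cov_i + (n_j-1)*cov_j) / (n-1)
            + n_i*n_j / (n*(n-1)) * (mu_i-mu_j)(mu_i-mu_j)^T
    """
    n = n_i + n_j
    mu = (n_i * mu_i + n_j * mu_j) / n
    d = (mu_i - mu_j).reshape(-1, 1)
    cov = ((n_i - 1) * cov_i + (n_j - 1) * cov_j) / (n - 1) \
          + (n_i * n_j) / (n * (n - 1)) * (d @ d.T)
    return n, mu, cov
```

For any two point sets, the merged (n, mu, cov) agrees exactly with the count, mean, and sample covariance of the concatenated data, which is why the per-point lists never need to be rescanned when clusters are combined.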
In this section a precise method is used to calculate the area of a free-form object circumscribed inside a sphere by finding the intersection of the sphere with the polygonal mesh. To find the area of a mesh circumscribed inside a sphere, a coordinate system that places the center of the sphere at the origin is used; this can be accomplished by a translation. The implicit equation of the sphere then becomes

x^2 + y^2 + z^2 = r^2 \quad (D-1)

and the implicit equation of the plane of a mesh patch is

Ax + By + Cz = D

After normalizing the coefficients, if necessary, so that a^2 + b^2 + c^2 = 1, the equation of the plane becomes

ax + by + cz = d \quad (D-2)

where d is the minimum distance of the plane from the origin (the center of the sphere). The relation between r and d is one of the following cases:

- if d > r, there is no intersection between the sphere and the plane;
- if d = r, the two primitives intersect at a single point, (da, db, dc);
- if d < r, the intersection is a circle with center (da, db, dc) and radius \sqrt{r^2 - d^2}.

Finding the intersection of the sphere and the mesh is thus reduced to finding the intersection of a circle and a triangle in 2D. In total, nine different cases must be considered.

Case 1: The circle is circumscribed by the triangle. The intersection of the circle and the triangle is equal to the area of the circle (first row of Figure D-1).

Case 2: The triangle is completely inside the circle. The area of the triangle ABC is calculated by any conventional method (first row of Figure D-1).

Cases 3A & 3B: Only one vertex of the triangle is located inside the circle (first and second rows of Figure D-1). Assume that vertex A is located inside, and vertices B and C are located outside of the circle. The area ADpE is calculated as

ADpE = ADE + ODpE - ODE

where ODpE is the sector of the circle between OD and OE, and p is the circular arc between D and E.
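The quantity ODpE − ODE that recurs in these case formulas is a circular segment: the sector between the two radii minus the isosceles triangle over the chord. A small helper, assuming the central angle subtended by the chord is known (function names are illustrative, not from the thesis):

```python
import math

def sector_area(r, theta):
    """Area of a circular sector (e.g. ODpE) of radius r with central angle theta (radians)."""
    return 0.5 * r * r * theta

def segment_area(r, theta):
    """Area between the chord DE and the arc p: sector ODpE minus triangle ODE."""
    return 0.5 * r * r * (theta - math.sin(theta))
```

For theta = pi the chord is a diameter, so both the sector and the segment equal half the disk; for theta = 2*pi the segment degenerates to the full circle, matching the limits of the case analysis.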
Cases 4A & 4B: Only one vertex of the triangle is located inside the circle, and one side of the triangle intersects the circle (second row of Figure D-1). Assume that vertex A is located inside, and BC intersects the circle at E and F. The area ADEFG is calculated as

ADEFG = ABC - BED - CGF

since in this case AFpG \approx AFG.

Cases 5A & 5B: Two vertices of the triangle are located inside the circle (third row of Figure D-1). Assume that vertices B and C are located inside the circle, and vertex A is outside of the circle. The area BCDpE is calculated as

BCDpE = ABC - AED + ODpE - ODE

Case 6A: All vertices of the triangle are located outside of the circle, only one side of the triangle intersects the circle, and the center of the circle is located outside the triangle (third row of Figure D-1). Assume that edge AC of triangle ABC intersects the circle at points D and E. The area EpD is calculated as

EpD = OEpD - OED

Figure D-1: Intersection of a triangle and a circle, cases 1 to 7.

Case 6B: All vertices of the triangle are located outside of the circle, only one side of the triangle intersects the circle, and the center of the circle is located inside the triangle (fourth row of Figure D-1). Assume that edge BC of triangle ABC intersects the circle at points D and E. The area EpD is calculated as

EpD = \pi r^2 + ODE - ODsE

where r is the radius of the circle and s is the arc between D and E on the far side of the chord.

Cases 7A & 7B: All vertices of the triangle are located outside of the circle, and two sides of the triangle intersect the circle (fourth row of Figure D-1). Assume that edges AB and AC of triangle ABC intersect the circle at points G, D and E, F, respectively. The intersection of the triangle and the circle is

DpEFsG = DEFG + ODpE - ODE + OFsG - OFG

where s is the arc between F and G.

Case 8: All vertices of the triangle are located outside of the circle, and all three sides of the triangle intersect the circle (Figure D-2).
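Dispatching among the nine cases begins with two counts: how many triangle vertices lie inside the circle, and how many edges cross it. An illustrative sketch of that first step (this is not the thesis code; the full case analysis also needs containment tests for cases 1 and 9):

```python
import math

def classify(tri, r, center=(0.0, 0.0)):
    """Count triangle vertices inside the circle and edges that cross it.

    tri is three (x, y) vertices; returns (vertices_inside, crossing_edges).
    """
    cx, cy = center
    inside = sum(math.hypot(x - cx, y - cy) < r for x, y in tri)
    crossings = 0
    for k in range(3):
        (x1, y1), (x2, y2) = tri[k], tri[(k + 1) % 3]
        # Intersect the segment with the circle by solving |P1 + t*(P2-P1) - O|^2 = r^2
        dx, dy = x2 - x1, y2 - y1
        fx, fy = x1 - cx, y1 - cy
        a = dx * dx + dy * dy
        b = 2 * (fx * dx + fy * dy)
        c = fx * fx + fy * fy - r * r
        disc = b * b - 4 * a * c
        if disc > 0:
            sq = math.sqrt(disc)
            ts = [(-b - sq) / (2 * a), (-b + sq) / (2 * a)]
            if any(0 < t < 1 for t in ts):
                crossings += 1
    return inside, crossings
```

For example, a right triangle with its right-angle vertex at the circle center's side gives one vertex inside and two crossing edges, which is the configuration of cases 3A/3B.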
These cases are calculated in the same way as the previous cases.

Case 9: All vertices of the triangle are located outside of the circle, there is no intersection between the triangle and the circle, and the circle is not circumscribed by the triangle. In this case the patch lies outside of the sphere, and the intersection of the circle and the triangle is empty.

Figure D-2: Intersection of a triangle and a circle, case 8.

Appendix E: Alternative Method to Calculate FOC Signatures

A main problem in calculating FOC signatures is the processing time, since calculating the surface area inside a sphere is a costly process. In this section it is shown that the flatness, orientation, and convexity signatures can be calculated efficiently within a limited error rate, which is negligible compared with the effect of noise on signature creation.

E.1. Approximated Approach

The area circumscribed inside the sphere can be approximated, without precise calculation, by summing the areas of the triangles located inside the sphere, provided the triangles are much smaller than the circumscribing sphere. Each triangle can be divided into smaller equal triangles by connecting the midpoints of its sides. The left column of Figure E-1 shows triangle abc divided into four equal triangles, S1, S2, S3, and S4. The intersection of triangle abc and a sphere with radius R can be calculated by the following formula:

\Delta_{abc} \cap R = \sum_i S_i + e \quad \text{for all } S_i \text{ with } \lVert C_i - O_R \rVert \le R \quad (E-1)

where C_i is the centroid of each smaller triangle, O_R is the center of the sphere, and e is the error. For example, the intersection of sphere R and triangle abc is equal to S2 + S4 + e, where e = e_3 - (e_2 + e_4).

Figure E-1: Intersection of a circle and a triangle.

If the triangles are small enough compared to the radius of the sphere, the result of Equation (E-1) approaches the true area of the intersection of R and the triangle.

E.2.
Worst-Case Error in 2D

Consider the triangle abc in the middle column of Figure E-1. If o is the centroid of the triangle and de is parallel to bc, then it can be shown that A_{ade} = \frac{4}{9} A_{abc} and A_{bcde} = \frac{5}{9} A_{abc}, since de divides ab and ac in the ratio 2:1. Now assume that the sampling is uniform, so that ab = bc = ac (which holds locally for all sampling methods in each small region of the surface), and that the circle intersects all triangles at their d and e points (right column of Figure E-1). Then the relative error is bounded by

e \le \frac{\pi (R + \Delta)^2 - \pi R^2}{\pi R^2} = \frac{2\Delta}{R} + \frac{\Delta^2}{R^2} \quad (E-2)

It can be shown that in this case

\Delta = 3^{-3/4} \sqrt{S} \quad (E-3)

where S is the surface area of the mesh triangle. Substituting Equation (E-3) into Equation (E-2), the error rate can be written as

e \le \frac{\sqrt{S}}{3^{3/4} R}\left(2 + \frac{\sqrt{S}}{3^{3/4} R}\right)

If R is large enough compared with the size of S, then e \to 0.

Figure E-2: Models used in the experiments.

E.3. Worst-Case Error in 3D

The worst case in 3D occurs when the centroids of all patches are adjacent to the surface of the sphere. In this case e = 4\pi r^2. However, the probability of this case is so small that it can be dismissed.

E.4. Experiments

Figure E-3 shows the error rate of calculating the intersection of a sphere with the surface mesh patches for different ratios of triangle size to sphere size. The horizontal axis of the figure shows the ratio of the area of the great circle of the sphere to the area of the triangles.

Figure E-3: Mean and standard deviation of the error rate in calculating the intersection of triangular patches and a sphere.

In the experiments the area of the triangles was set to 3.14 units and the radius of the sphere was varied from 5 to 100 units in steps of 5 units.
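Under one reading of the (partially garbled) Equations (E-2) and (E-3), the worst-case relative error is 2Δ/R + (Δ/R)^2 with Δ = 3^{-3/4}√S. A small sketch of the experiment's radius sweep under that assumption, showing the bound shrinking as the sphere grows relative to the triangles:

```python
import math

S = 3.14                             # triangle area used in the experiment (units^2)
delta = 3 ** -0.75 * math.sqrt(S)    # characteristic triangle size, per Equation (E-3)

def error_bound(R, d=delta):
    """Worst-case relative error as reconstructed from Equation (E-2)."""
    return 2 * d / R + (d / R) ** 2

# Sweep the sphere radius from 5 to 100 units in steps of 5, as in the experiment.
bounds = [error_bound(R) for R in range(5, 105, 5)]
```

The bound decreases monotonically with R, consistent with the measured average error falling below 2 percent for the larger radii in Figure E-3.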
As indicated in the figure, the average error of calculating the area of the triangles inside a sphere with the approximated method is less than 2 percent of the true value.

E.5. FOC Signatures

The main step in calculating FOC signatures is finding the surface of an object circumscribed inside a sphere. For consistency, the normals of the triangles should point out of the object in both the modeling and the matching processes. The direction of the normal of each patch can be found by assuming different viewpoints far from the object; the directions of the normals can then be determined by a method similar to ray tracing [FVFH93].

To speed up processing, the patches can be divided into smaller patches as described above. Since the surface of each smaller patch is equal to 1/4 of the surface of its parent patch, the area of each patch can be computed efficiently until the patches reach the desired size. The normal of each smaller patch is the same as the normal of the original patch, so no further calculation is needed. After dividing the surface patches to the desired size, each face is identified by a triple F(Vcen, A, N), where Vcen is the centroid of the face, A is the area of the face, and N is the normal of the face.

Assume that we want to find the FOC signatures of the surface at vertex v with orientation Nv, within support angle θ and support distance r from rStart to rEnd in steps of rStep. Algorithm E-1 shows the procedure:

1. All faces whose normal makes an angle greater than θ with Nv are eliminated:
   Fθ = all Fi such that angle(Ni, Nv) ≤ θ
2. The coordinate system is translated to v:
   Vvcen = Vcen − Vv for all triples of Fθ
3. The distance of the new origin [0 0 0] from each Vvcen is calculated:
   D = distance([0 0 0], Vvcen)
4. All faces whose centroid lies beyond the maximum support distance, rEnd, are eliminated:
   Fθd = all Fθ such that D ≤ rEnd
5. The indices of the convex and concave faces, consistent with matrix D and relative to the orientation Nv, are separated:
   [IndFθdVex, IndFθdCav] = indexOfConvexConcaveFaces(Fθd, Nv)
6. The projection of the face areas onto the plane normal to Nv is calculated:
   Pθd = Project(Fθd(A), Nv)
7. Using matrix D, Fθd, Pθd, and the indices IndFθdVex and IndFθdCav, the FOC signatures are calculated.

The proposed algorithm can be implemented efficiently in Matlab.

E.6. Error Rate

Figure E-4 shows the error rate versus processing time; the horizontal axis shows the size of the triangles. The support distance was set to 100 units in the experiment. As indicated in the figure, as the triangles get smaller the error rate decreases, but the processing time increases. It is worth mentioning that calculating each signature takes on average 15.78 seconds with the precise method for the test cases used in this experiment, so the proposed method is substantially faster than the exact method.

E.7. Appendix E Conclusions

The proposed algorithm provides an efficient way to calculate FOC signatures within an acceptable error rate. The algorithm can be implemented efficiently in Matlab, and its processing time is at least 15 times faster than the original method for error rates below 2 percent. This level of error is negligible compared with the effect of noise on signature creation.

Figure E-4: Error rate versus processing time.

Algorithm E-1: Calculating FOC signatures for vertex v with support angle θ and support distance d. Each face is identified by a triple F(Vcen, A, N), where Vcen is the centroid of the face, A is the area of the face, and N is the normal of the face.
Nv = normalOfVertex(F, v, base-radius)          % find normal of vertex v
Fθ = all Fi such that angle(Ni, Nv) ≤ θ         % support angle = θ
Vvcen = Vcen − Vv for all triples of Fθ          % move the origin to v
D = distance([0 0 0], Vvcen)
Fθd = all Fθ such that D ≤ rEnd
DFθd = the D of Fθd                              % for consistency of the indices
[IndFθdVex, IndFθdCav] = indexOfConvexConcaveFaces(Fθd, Nv)
Pθd = Project(Fθd(A), Nv)
% calculate the FOC signatures for each support distance
for d = rStart to rEnd step rStep
    indD = indices of all DFθd < d               % faces within support distance d
    % calculate the components of the signatures
    Pconvex  = sum(Pθd(IndFθdVex ∧ indD))
    Pconcave = sum(Pθd(IndFθdCav ∧ indD))
    Aconvex  = sum(Fθd(IndFθdVex ∧ indD))
    Aconcave = sum(Fθd(IndFθdCav ∧ indD))
    % calculate the signatures
    Orientation = normalize(Fθd(A(indD)), Fθd(N(indD)))
    Flatness  = (Pconvex + Pconcave) / (Aconvex + Aconcave + ε)
    Convexity = Aconvex / (Aconvex + Aconcave)
end
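The per-distance loop of Algorithm E-1 can be sketched with array masks. The following is a hypothetical Python/NumPy port (the thesis implements the algorithm in Matlab); the convex/concave test used here, the sign of Nv·(centroid − v), is a simplifying assumption, and the orientation component is omitted because its normalization is not fully specified above:

```python
import numpy as np

def foc_signature(centroids, areas, normals, v, n_v,
                  theta, r_start, r_end, r_step, eps=1e-9):
    """Flatness and convexity signatures of vertex v with normal n_v (sketch)."""
    # Step 1: keep faces whose normal is within the support angle of n_v
    keep = (normals @ n_v) >= np.cos(theta)
    C, A, N = centroids[keep], areas[keep], normals[keep]
    # Steps 2-3: move the origin to v and compute centroid distances
    C = C - v
    D = np.linalg.norm(C, axis=1)
    # Step 4: drop faces beyond the maximum support distance
    keep = D <= r_end
    C, A, N, D = C[keep], A[keep], N[keep], D[keep]
    # Step 5: convex vs. concave faces (assumption: sign of n_v . (centroid - v))
    convex = (C @ n_v) <= 0
    # Step 6: face areas projected onto the plane normal to n_v
    P = A * np.abs(N @ n_v)
    # Step 7: accumulate the signatures for each support distance
    sig = []
    for d in np.arange(r_start, r_end + r_step / 2, r_step):
        ind = D < d
        p_vex, p_cav = P[ind & convex].sum(), P[ind & ~convex].sum()
        a_vex, a_cav = A[ind & convex].sum(), A[ind & ~convex].sum()
        flatness = (p_vex + p_cav) / (a_vex + a_cav + eps)
        convexity = a_vex / (a_vex + a_cav + eps)
        sig.append((flatness, convexity))
    return sig
```

On a planar patch viewed along its normal, the projected and true areas coincide, so the flatness signature approaches 1 at every support distance, which matches the intent of the flatness measure.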