Open Collections — UBC Theses and Dissertations
Distortion-free tolerance-based layer setup optimization for layered manufacturing Chen, Jack Szu-Shen 2010

DISTORTION-FREE TOLERANCE-BASED LAYER SETUP OPTIMIZATION FOR LAYERED MANUFACTURING

by

Jack Szu-Shen Chen

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate Studies (Mechanical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2010

© Jack Szu-Shen Chen, 2010

ABSTRACT

Layered manufacturing (LM) has emerged as a highly versatile process for producing complex parts that conventional manufacturing processes either cannot make economically or cannot make at all. However, this relatively new manufacturing process is characterized by a few outstanding issues that have kept it from being widely adopted. The most detrimental is the lack of a reliable method, at the computational geometry level, for predicting the resulting part error. Layer setup, in terms of the contour profile and thickness of each layer, is often left to operator judgment. As a result, the accuracy of the manufactured part is not guaranteed and the build time is not easily optimized. Even when a scheme to predict the resulting finished part is available, the optimal layer setup cannot be determined. Current practice generates the layer contours by simply intersecting a set of parallel planes with the computer model of the design part. The volumetric geometry of each layer is then constructed by extruding the layer contour by the layer thickness in the part building direction. This practice often leads to distorted part geometry due to the unidirectional bias of the extruded layers. Because of this, excessive layers are often employed to alleviate the effect of the part distortion. This form of distortion, referred to as systematic distortion, needs to be removed during layer setup. This thesis proposes methods to first remove the systematic distortion and then to determine the optimal layer setup based on a tolerance measure.
A scheme to emulate the final polished part geometry is also presented. Case studies are performed to validate the proposed method. The proposed scheme is shown to significantly reduce the number of layers needed to construct an LM part while satisfying a user-specified error bound. Accuracy is therefore better guaranteed, owing to the error measure and control, and efficiency is greatly increased.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1. Introduction
   1.1 Basic concept of layered manufacturing
       1.1.1 Pre-process
       1.1.2 Process
       1.1.3 Post-process
   1.2 Advantages and disadvantages of layered manufacturing
   1.3 Existing issues pertaining to layer manufacturing
   1.4 Literature review
   1.5 Research goal
2. Systematic Distortion Elimination
   2.1 Prelude
       2.1.1 Layer geometry approximation
       2.1.2 Staircase effect
       2.1.3 Systematic distortion
   2.2 Prior methods for systematic distortion reduction
   2.3 Proposed method
   2.4 Boundary contour generation
       2.4.1 Existing methods
       2.4.2 Necessary concepts
       2.4.3 Proposed algorithm
   2.5 Case studies
       2.5.1 Case 1: sphere
       2.5.2 Case 2: slanted cylinder
       2.5.3 Case 3: slanted concave cylinder
       2.5.4 Case 4: s-shaped cylinder
3. Tolerance-Based Layer Setup Optimization for Axis Symmetric Objects
   3.1 Prelude
       3.1.1 Finishing and its implications
       3.1.2 Current methods
   3.2 Methodology
   3.3 Emulation
       3.3.1 Distinct monotonic region
       3.3.2 Convex region
       3.3.3 Concave region
       3.3.4 Start and end region
       3.3.5 Fitting
   3.4 Layer setup determination
       3.4.1 Layer thickness and position
       3.4.2 Number of layers
   3.5 Case studies
       3.5.1 Case 1: convex only axis symmetric object
       3.5.2 Case 2: s-shaped axis symmetric object
4. Tolerance-Based Layer Setup Optimization for Non-Axis Symmetric Objects
   4.1 Prelude
   4.2 Methodology
   4.3 Procedure
       4.3.1 Curve based model
       4.3.2 Mapping
       4.3.3 Layer setup determination
   4.4 Case studies
       4.4.1 Case 1: compensation vs. optimization
       4.4.2 Case 2: non-axis symmetric object with elliptical contour
       4.4.3 Case 3: non-axis symmetric object with concave contour
5. Conclusion
   5.1 Research contribution
   5.2 Limitation and future work
References

LIST OF TABLES

Table 3.1 Number of layers and error comparison
Table 4.1 Maximum layer error and corresponding computation time for case 1
Table 4.2 Maximum layer error and corresponding number of layers for case 2
Table 4.3 Maximum layer error and corresponding computation time for case 2

LIST OF FIGURES

Figure 1.1 Model slicing for layer generation.
Figure 1.2 Tool path to physical layer relationship.
Figure 1.3 Current layer geometry generation methods: (a) top-down slicing; and (b) bottom-up slicing.
Figure 1.4 Selective laser sintering technology.
Figure 1.5 Fused deposition modeling technology.
Figure 1.6 Stereolithography technology.
Figure 1.7 Layer geometry resulting from: (a) top-down slicing; and (b) bottom-up slicing.
Figure 1.8 Final smoothed geometry resulting from: (a) top-down slicing; and (b) bottom-up slicing.
Figure 2.1 Staircase effect in LM parts.
Figure 2.2 Systematic distortion in LM parts: (a) distortion caused by the top-down slicing strategy; and (b) distortion-free geometry obtained by a proper layer generation method.
Figure 2.3 Layer contour generation: (a) by the existing merging method; and (b) correct contour for distortion-free LM parts (by the proposed method).
Figure 2.4 Proposed systematic distortion reduction procedure.
Figure 2.5 Definition of: (a) a convex set; and (b) a non-convex set.
Figure 2.6 Elastic band analogy of convex hull.
Figure 2.7 Delaunay triangulation with circumcircles shown.
Figure 2.8 Proposed boundary contour generation algorithm.
Figure 2.9 Convex hull and initial pair of segments of the boundary contour.
Figure 2.10 Point subset for determining the next boundary contour segment.
Figure 2.11 Next contour segment determination: (a) Delaunay triangulation; (b) unwanted Delaunay edge removal; and (c) preferred direction for the next contour segment.
Figure 2.12 Inflection point check.
Figure 2.13 Mid-layer boundary contour generation for a sphere: (a) layer extracted; and (b) projected points and the resulting boundary contour.
Figure 2.14 Layer boundary contour generation for a slanted cylinder: (a) layer extracted; and (b) projected points and the resulting boundary contour.
Figure 2.15 Layer boundary contour generation for a concave cylinder: (a) layer extracted; and (b) projected points and the resulting boundary contour.
Figure 2.16 Layer generation for an s-shaped cylinder and the corresponding post-processed geometry: (a) proposed method; and (b) top-down slicing.
Figure 3.1 Cusp height vs. tolerance control.
Figure 3.2 Difference in layer error due to layer positions with respect to geometry.
Figure 3.3 Similar layer setups result in desirable layer error to number of layers relationship.
Figure 3.4 Permitted regions for control point extrapolation.
Figure 3.5 Convex region control point estimation: (a) possible control point solutions; and (b) estimated control point solution utilizing adjacent slopes.
Figure 3.6 Concave region control point extrapolation: (a) estimated solution when favorable layer position present; and (b) when favorable layer position not present.
Figure 3.7 Start and end layer control point extrapolation: (a) infinite number of possible solutions; and (b) minimization of start/end layer thickness to better facilitate control point extrapolation.
Figure 3.8 Permitted regions for emulated model interpolation.
Figure 3.9 Similar layer distribution at area of different curvature values.
Figure 3.10 Unconstrained layer distributions: (a) initial layer setup; (b) compensated layer setup; and (c) optimized layer setup.
Figure 3.11 Unconstrained layer error: (a) initial layer setup; (b) compensated layer setup; and (c) optimized layer setup.
Figure 3.12 Constrained layer distributions: (a) initial layer setup; (b) compensated layer setup; and (c) optimized layer setup.
Figure 3.13 Constrained layer error: (a) initial layer setup; (b) compensated layer setup; and (c) optimized layer setup.
Figure 3.14 S-curved profile.
Figure 3.15 Layer error vs. number of layers.
Figure 3.16 Layer error vs. number of layers with concaved area removed.
Figure 4.1 Region needing interpolation during polish model emulation.
Figure 4.2 Defining point extraction at distinct monotone regions for non-axis symmetric objects.
Figure 4.3 Ambiguity in vertical profile extraction.
Figure 4.4 Curve-based model.
Figure 4.5 Initial point determination for contour point matching.
Figure 4.6 Shifting origin.
Figure 4.7 Curve mapping: (a) alignment of normal for second point; and (b) shifting of origin to facilitate next transformation.
Figure 4.8 Original and mapped curves.
Figure 4.9 Mapped layer thickness.
Figure 4.10 Max curvature from curve-based model.
Figure 4.11 Axis symmetric test object.
Figure 4.12 Layer error distribution of case 1.
Figure 4.13 Non-axis symmetric test object.
Figure 4.14 Original and mapped curves for one single vertical profile.
Figure 4.15 Layer error distribution of: (a) compensation only method; and (b) optimized method.
Figure 4.16 Curve-based model of non-axis symmetric test case.
Figure 4.17 Layer error with respect to number of layers.
Figure 4.18 Layer error vs. number of layers for constrained case with optimization method.

ACKNOWLEDGEMENTS

I thank my supervisor, Dr. Hsi-Yung (Steve) Feng, for his continuous support and guidance throughout my degree. His style suits me well and has helped me develop significantly in more than just academics; I better understand not only my field of research, but academia as a whole. The Mechanical Engineering Department here at UBC has also been great: the faculty and staff are extremely friendly and approachable, making the whole journey that much better. I also thank my parents for their undying support over the years. They have been there for me every step of the way, through the hard times and the fun times. Special thanks to my brother, who has both helped me with my studies and distracted me from them. He is the pioneer of graduate school in our family, and I have benefited from his experiences in more ways than I would like to admit. Finally, the financial support of this work was partially provided by a CGS-M scholarship from the Natural Sciences and Engineering Research Council of Canada (NSERC).

DEDICATION

To my family
1. INTRODUCTION

Layer manufacturing (LM) is a fabrication technology that has shown much potential in specific manufacturing areas over the past decade due to its unique method of part production. Objects fabricated via this method are built layer-by-layer in an additive manner, using tool paths created by processing a computer-aided design (CAD) model into a layer model, with the number of layers governed either by a user-specified layer thickness or by the layer model surface roughness [1, 2]. This method of object manufacturing yields a 2.5D approximation of the originally intended 3D model, so some geometric information is lost [2]. However, the advantages of LM in specific cases outweigh its disadvantages, allowing LM to gain more ground as manufacturing technology pushes forward [3]. The following sections outline the basic concept, the advantages and disadvantages, and the inherent issues of current LM technologies, followed by a literature review and the research goal.

1.1 Basic concept of layered manufacturing

There are three main phases in all LM technologies: pre-process, process, and post-process [1]. Pre-process involves preparing the computer design model into LM tool paths, which are fed into the LM system. The physical part is then constructed by the given LM technology; this phase is named the process phase. After the build, the steps taken to further enhance the object, or rather to increase its accuracy, constitute the post-process phase. A detailed description of each phase is given in the following sections.

1.1.1 Pre-process

Early in the production cycle of an LM part, a computer design model is processed into a finite number of layer geometries (extruded/offset contours) before being fed into a given LM system. By intersecting the CAD model with a given number of planes along the build direction, as shown in Figure 1.1, intersection contours of the CAD model are found [2, 4].
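The plane-intersection step described above can be sketched for a single triangular facet of an STL-style mesh. This is an illustrative sketch under simplified assumptions — the function name and data layout are hypothetical, and degenerate cases such as a vertex lying exactly on the slicing plane are ignored — not code from the thesis:

```python
# Sketch: intersecting one triangular facet of a CAD mesh with a
# horizontal slicing plane z = z_plane, as in the contour-generation
# (slicing) step. A full slicer would apply this to every facet and
# chain the resulting segments into closed intersection contours.

def slice_triangle(tri, z_plane):
    """Return the 2D segment where the plane z = z_plane crosses the
    triangle, or None if the plane misses it. `tri` is three (x, y, z)
    vertices; degenerate cases (vertex exactly on plane) are ignored."""
    pts = []
    for i in range(3):
        (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
        if (z0 - z_plane) * (z1 - z_plane) < 0:   # edge crosses the plane
            t = (z_plane - z0) / (z1 - z0)        # linear interpolation
            pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return tuple(pts) if len(pts) == 2 else None

# One facet of a unit-height wedge, sliced halfway up:
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
print(slice_triangle(tri, 0.5))   # ((0.5, 0.0), (0.0, 0.0))
print(slice_triangle(tri, 2.0))   # None (plane above the facet)
```

Repeating this over all facets at each plane height yields the peripheral tool path contours described above.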
The positions of these planes are determined by the user-specified build layer thickness, which is set during the contour generation process, also known as slicing. The choice of layer thickness depends on machine capability: each machine has its own layer thickness range, which depends on the type of LM technology used and the achievable system accuracy [1–4]. These intersection contours are used as the peripheral tool path of a given layer; the internal tool path still needs to be generated. Various internal filling tool path patterns are available, but typically a simple raster pattern is used to fill the interior region [1].

Figure 1.1 Model slicing for layer generation.

Once the intersection contours are found, the layer geometries are generated. The term layer geometry describes how the intersection contours are offset in order to achieve the desired LM build using a squared layer edge approximation. To better understand the concept of layer geometry, the positional relationship between the physical layers and the generated tool path needs to be described here. Since the intersection contours found are 2D and planar, but the physical layer has a specified thickness, there exists a positional relationship between the physical layer built and the planar tool path contour generated. For LM technologies, the contour tool path is set at the top of the layer, as shown in Figure 1.2. The material is assumed to form right beneath the tool path, with the layer approximated in the build direction using zeroth-order edges. Thus, depending on how the contours/tool paths are offset, different physical parts result. This tool path to physical layer positional relationship is used as the convention throughout the thesis. Furthermore, when describing layer geometry in later sections, complete layers are illustrated rather than just the tool path.
The physical layers are said to be the extruded geometry of the extracted contour.

Figure 1.2 Tool path to physical layer relationship.

If no tool path offset is present, top-down slicing is achieved; this type of slicing is shown in Figure 1.3 (a). If the contour is offset in the positive build direction by a full layer height, bottom-up slicing is achieved [5], as shown in Figure 1.3 (b). These two slicing methods are the current industrial LM layer geometry generation methods [5–7]. Depending on the geometry of the object to be built, a different slicing method is chosen; offsets between those of top-down and bottom-up slicing can also be used.

Figure 1.3 Current layer geometry generation methods: (a) top-down slicing; and (b) bottom-up slicing.

1.1.2 Process

After the desired layer geometry is generated, the tool paths are sent to the LM machine for the build. The machine reads the tool paths created during pre-processing and lays down successive physical layers of material, constructing the final object through a series of contours and their corresponding internal filling tool path patterns. These layers are then automatically fused together or bonded during the build to create the final product. Depending on the LM technology used, curing might be necessary after layer creation. Various grades of LM systems are available commercially: some are aimed at prototyping purposes only, while others are intended for end-user part manufacturing [1, 4, 8]. Their process phases differ from each other. The following paragraphs give a brief overview of the four main technologies: selective laser sintering (SLS), fused deposition modeling (FDM), stereolithography (SLA), and 3D printing (3DP).

SLS is an LM technology that utilizes a high-power laser to sinter/fuse particles of material layer-by-layer to create the desired physical part, as shown in Figure 1.4 [9].
Particles of material, such as polymer, ceramic, or metal, reside in a powder bed on top of a movable platform. The laser scans the powder one layer at a time. The layer thickness is controlled by both the power of the laser and the per-layer movement of the platform. Once a layer is created, the platform is lowered and powder is added to the powder bed. This process repeats until the object is complete. Because the object under construction is submerged in the same material powder from which it is created, support structures for overhanging features are not necessary. SLS is also one of the only LM technologies that can create metal parts with close to 100% density, which allows SLS-created parts to possess high strength. However, the surface finish of SLS parts is quite poor due to heat-affected zones: during the sintering process, neighboring powder is affected by the heat from the laser and can sinter, or partially sinter, to the object being built. Surface refinement is necessary after the build to ensure the desired accuracy is reached.

Figure 1.4 Selective laser sintering technology.

Another LM technology meant for creating end-user parts is the FDM system. FDM creates parts by extruding softened plastic in successive layers [1, 10]. Coiled strands of polymer are fed into a heated extrusion head before being deposited on a movable platform, as shown in Figure 1.5. The thickness of the layers is governed by both the extrusion tip size and the heat applied to the plastic. The platform moves down by one layer thickness after each successive layer is created. FDM parts are of high strength: less than SLS parts, but stronger than SLA and 3DP parts. The system is more compact, less energy intensive, and can operate in non-shop environments, such as directly in an office. The surface finish is also rather poor, so post-processing is required. Furthermore, a support structure deposited by a separate extrusion head is necessary to support the softened polymer before it hardens.
FDM allows for an economical way of producing low-volume end-user polymer parts.

Figure 1.5 Fused deposition modeling technology.

The first two LM technologies mentioned are geared towards end-user parts, unlike SLA, which is mostly used for prototyping purposes [11]. SLA hardens polymer resin using ultraviolet light, as shown in Figure 1.6. The ultraviolet light scans across the bath of resin to create the part. Final curing is necessary to make the part strong enough for handling. SLA possesses one of the highest accuracies among LM methods but suffers from a lack of part strength. A support structure, which needs to be created separately, is necessary for overhangs.

Figure 1.6 Stereolithography technology.

Another prototyping LM method is 3D printing, which consists of bonding sand together with adhesive one layer at a time [12]. This method is largely used for visual checks only; part strength is very low, and care needs to be taken when handling 3DP-created parts.

1.1.3 Post-process

After the physical part is built by one of the above technologies, post-processing is applied. Because the object is created in a layered fashion with a large layer thickness (restricted by technology capabilities), the surface finish of LM-made objects is usually quite poor, and a surface finishing step is often required [13–17]. Typically, the object is polished until the stair-like surface is eliminated; such polishing is applied only until the stairs just disappear, as further polishing can cause inaccuracy. Surface polishing is usually accomplished by humans or by autonomous robots through sanding and grinding. Other processes to smooth the object surface include sand blasting, chemical smoothing, machining, and surface filling. Post-processing also includes object preparation other than surface smoothing, such as trimming, hole tapping, and any other process necessary to bring the LM-produced object into its final production form.
Some LM technologies rely on support structures during the process stage to ensure that areas such as overhangs do not deform before the object completely cures or hardens. Removal of the support material and cleaning of the supported area are also considered post-processing tasks [1]. After post-processing, the cycle is complete: the part is ready to be tested as a prototype, used as a visual aid, or sold as an end-user part.

1.2 Advantages and disadvantages of layered manufacturing

The advantages of LM technologies have been widely realized in various industries, ranging from medical to aerospace [1, 3, 18]. Due to LM's additive production method, parts that cannot be easily realized with conventional manufacturing methods are easily constructed with LM. Objects with internal features or complex, hard-to-access regions are easily realized [3], rendering
This makes the prediction of build time very straightforward, allowing better process planning and increased efficiency. An object with complex geometry can be built at a similar speed to a simple cube of equal volume. With conventional manufacturing methods, manufacturing time usually increases exponentially with complexity for low- to mid-volume manufacturing [19]. Due to LM's insensitivity to geometry, it is the ideal technology for complex part production. However, LM is not without faults. One of these is the actual part build time (process time excluding pre-process and post-process time). Compared to most conventional manufacturing methods, the process time of LM is significantly higher [1]. Operations such as machining can be performed repeatedly very quickly once set up; this is currently not possible for LM. Therefore, for large-volume manufacturing, where setup and tooling time is less significant, LM technology is not sensible. Furthermore, the build time increases rapidly with the number of layers, meaning parts with high accuracy are extremely time consuming to construct. This points to one of the primary problems that have kept LM from becoming more dominant in manufacturing: accuracy. Decreasing the layer thickness to increase part accuracy can be detrimental to part build time. If the number of layers is kept low, the process is sped up at the price of very poor accuracy, caused by the combined error due to the lack of layers, shrinkage and warping effects [20, 21]. Consequently, feature information can on occasion be completely lost due to insufficient resolution. Thus, steps need to be taken to increase the efficiency and accuracy of LM technologies. Part strength is also an issue [1]: LM-created parts have lower strength than, say, their machined counterparts.
FDM can achieve at best 80% of the strength of its machined counterpart, signaling the need for a strengthened design when an LM part is used instead of a machined part. Furthermore, the endurance limit of LM-produced parts is low due to the possibility of delamination of the layered structure. Nevertheless, research to increase LM part strength is ongoing with promising results.

1.3 Existing issues pertaining to layer manufacturing

Layer manufacturing is still an emerging technology that has yet to completely mature; thus, various issues exist. Due to the slicing methods outlined in the previous section, top-down and bottom-up slicing, an issue known as systematic distortion occurs because of inconsistent containment [5 – 7]. Figure 1.7 (a) and (b) shows the layer model of an object sliced with top-down and bottom-up slicing, respectively. It can be seen that in both cases, in specific regions, the layers fully contain the model being sliced, while in other regions the computer design model fully contains the layer model. This is caused by the two methods being unidirectional in layer generation, which introduces a bias. This bias is carried through to the post-processing step, causing the final object to be distorted as shown in Figure 1.8 (a) and (b). Distortion of this sort during layer generation is highly undesirable, since the original intended shape is not conveyed appropriately by the layer model: the model the layer geometry represents has strayed from the original computer design model. Moreover, such distortion increases the final part error significantly, which is extremely undesirable.

Figure 1.7  Layer geometry resulting from: (a) top-down slicing; and (b) bottom-up slicing.

Figure 1.8  Final smoothed geometry resulting from: (a) top-down slicing; and (b) bottom-up slicing.
Furthermore, as already mentioned, the efficiency of LM is rather low and the build time increases rapidly with the number of layers; yet, industry-wise, very little effort has been put into better determining the layer setup in the pre-process stage to reduce the number of layers necessary to create the intended part. A small reduction in the number of layers can result in a large reduction in build time. Especially with LM systems now capable of constructing layers of varying thickness on-the-fly during part production, determination of the optimum layer setup can increase efficiency drastically. Uniform layer thickness is still the most popular industrial method for generating layers: a straightforward but inefficient method. For FDM, halving the number of layers necessary to construct a given part results in a 75% reduction in build time. Additionally, the final post-process geometry is not considered during the layer generation process. This lack of consideration is passable for prototyping purposes; however, with technological advancement, LM has now evolved to manufacture end-user parts. Surface smoothing is almost always required to remove staircase effects. Without consideration of the post-processing phase, accuracy is not controlled and the optimum layer setup based on the final part cannot be determined. Furthermore, the inability to control the final part error usually causes an overly conservative layer setup to be chosen, increasing part build time dramatically. To effectively increase efficiency, the complete LM manufacturing cycle should be considered. In addition, during the creation and bonding of physical layers, deformation caused by shrinkage occurs. Materials used for LM shrink as they cool or cure, which not only causes the dimensions of the built object to decrease but also causes warping, since each layer cools at a different rate. Such warpage also creates internal stresses.
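As an aside on the build-time scaling claimed above for FDM: a 75% reduction from halving the layer count is exactly what a quadratic build-time model predicts (halving the layer thickness doubles the number of layers and, roughly, doubles the deposition path length per unit height). The sketch below is purely illustrative; the quadratic model and the constant k are assumptions for demonstration, not measurements from this thesis.

```python
def fdm_build_time(n_layers, k=1.0):
    """Illustrative quadratic build-time model for FDM: t = k * n^2."""
    return k * n_layers ** 2

# Halving the number of layers (doubling the layer thickness)
# cuts the modeled build time by 75%.
reduction = 1.0 - fdm_build_time(100) / fdm_build_time(200)
print(reduction)  # 0.75
```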
Shrinkage and warping further reduce part accuracy and degrade part strength, which is especially undesirable for LM given that the strength of layered objects is already lower than that of counterparts created using conventional manufacturing methods such as machining.

1.4 Literature review

Kulkarni and Dutta [5] were the first to realize the significance of systematic distortion and proposed a method in which both top-down and bottom-up slicing are used for layer generation of a single part. By applying each method in the appropriate regions, a more consistent layer model can be determined. However, this method is very limited in its application, since allowing both top-down and bottom-up slicing in a single part still does not guarantee full containment for any given shape. Thus, Chiu and Liao [6, 7] proposed another method in which the top and bottom layer contours are combined to form the contour used for part building. This method attempts to address the shortcoming of Kulkarni and Dutta's method; however, full containment is still not achieved and systematic distortion still results. To date, no method has been proposed that completely eliminates systematic distortion. Much of the work has instead focused on the efficiency aspect of LM. Previous work on efficient layer setup determination was carried out by numerous researchers at a stage when LM was primarily used for prototyping purposes. The first concept introduced to reduce the number of layers, and therefore increase efficiency, is the cusp height tolerance concept. Dolenc and Makela [22] noticed that by allowing the layer height to vary adaptively with respect to some error measure, the number of layers can be reduced compared to the traditional uniform layer size setup. The error measure proposed is cusp height: the approximated maximum difference between the layer model and the original design model at each given layer.
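The cusp-height idea just defined can be made concrete with a small sketch. Under the standard first-order relation c = t·|n_z| (where t is the layer thickness and n_z is the build-direction component of the unit surface normal), a cusp-height tolerance translates directly into an adaptive layer thickness. The function names, clamping bounds and hemisphere example below are my own illustrative choices, not Dolenc and Makela's implementation.

```python
def allowable_thickness(nz, cusp_tol, t_min, t_max):
    """Largest layer thickness satisfying the cusp bound c = t * |n_z|,
    clamped to the machine's thickness range [t_min, t_max]."""
    if abs(nz) < 1e-12:
        return t_max                      # vertical wall: no staircase cusp
    return max(t_min, min(t_max, cusp_tol / abs(nz)))

def slice_hemisphere(R, cusp_tol, t_min, t_max):
    """Sequentially slice the upper hemisphere of a sphere of radius R
    (the unit normal at height z has n_z = z / R); returns the cut heights."""
    z, cuts = 0.0, [0.0]
    while z < R:
        t = allowable_thickness(z / R, cusp_tol, t_min, t_max)
        z = min(R, z + t)
        cuts.append(z)
    return cuts

cuts = slice_hemisphere(10.0, 0.1, 0.05, 1.0)
# Layers are thick near the equator (steep wall) and thin near the pole.
```

Note that clamping at t_min can locally violate the cusp bound near the pole; a real adaptive slicer would flag this rather than silently accept it.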
The gradient in the build direction at the intersecting planes is used to approximate the original design geometry within a given layer. A first-order approximation is used and the cusp height is determined based on the approximated model. Using cusp height as a constraint, the layer geometry is determined sequentially. Sabourin et al. [23] later modified Dolenc and Makela's method to solve the layer geometry in a more global sense, in the hope of increasing the accuracy of the layer model and better capturing part features. Tyberg and Bohn [24] furthered their work by segmenting the parts using feature detection before applying Dolenc and Makela's cusp height layer generation method. Pandey [25] later proposed the use of a roughness value as the control instead of cusp height, since that conforms better to industrial standards. Area deviation between slices has also been proposed as the control parameter. Kulkarni and Dutta [5] proposed the use of curvature instead to approximate the design geometry within a given layer to determine the cusp height and thus better determine the layer geometry. However, none of the methods proposed to date to increase efficiency has taken the final polished model into account. Post-processing has become a required phase in order to achieve an acceptable surface finish for LM parts; thus, the final part error is the deviation between the post-processed model and the original computer design model. Furthermore, the effect of distortion is ignored: error is calculated under the assumption that systematic distortion does not exist. The final part resulting from the above methods cannot properly convey the original intended shape of the computer design model.

1.5 Research goal

The aim of this research is to increase both the accuracy and efficiency of LM technologies on a computational geometry level through consideration of the complete LM build cycle, with the process cycle assumed perfect since material effects are not considered in this research.
Due to this assumption, the process cycle is ignored. In order to maximize efficiency, the least number of layers necessary to build a part within a given tolerance bound must be found. Furthermore, because of the necessity of the smoothing process, appropriately determining the optimum layer setup (size, position and number) requires the layer setup calculation to be related to the final geometry; thus, a method to predict the final polished model is needed. Hence, the optimum layer setup must be determined based on a given tolerance bound with respect to the final post-process geometry, to better guarantee accuracy and further improve efficiency. Any artifact created during layer generation needs to be eliminated. Consequently, to achieve the research goal, three phases are executed. The first and foremost phase is the reduction of systematic distortion caused by biased layer geometry generation. Current LM layer determination methods possess a biasing problem which causes the final product to be distorted. Elimination of systematic distortion is necessary for LM parts to be capable of realizing the original shape intended by the computer design geometry. Only when the appropriate shape can be captured does error calculation become meaningful. Furthermore, elimination of systematic distortion reduces the total resulting final part error. Larger layer sizes can thus be used to satisfy an equal tolerance requirement; in other words, fewer layers are necessary, and layer setup optimization is better facilitated. Once systematic distortion is eliminated, accuracy and efficiency can be enhanced by determining the optimum layer setup for a given object; thus, the second phase. The second phase attempts to improve accuracy and efficiency by emulating the final post-process physical geometry from the layer model.
The layer setup (thickness and number) is then determined based on the deviation found between the original computer design model and the emulated final physical model. The second phase focuses on smooth axisymmetric objects. The third phase extends the second phase to the determination of the optimum layer setup for more complicated objects without any axis of symmetry: model emulation is extended to non-axisymmetric objects, as are the layer setup determination methods. Thus, the accuracy and efficiency of LM technologies are improved through an improved pre-process phase. Each of the sections following the literature review outlines one of the phases listed here.

2. SYSTEMATIC DISTORTION ELIMINATION

This section introduces a novel scheme to eliminate systematic distortion. Specifics of the method are outlined and a case study with subsequent discussion is presented.

2.1 Prelude

Layered manufacturing has emerged as a highly versatile process to produce complex parts compared to conventional manufacturing processes, which are either too costly to implement or simply not possible [3]. However, this relatively new manufacturing process is characterized by a few outstanding issues that have kept it from being widely applied. One such issue is the reduced part accuracy caused by the primitive method of generating the layer contours. Current practice generates the layer contours by simply intersecting a set of parallel planes through the computer model of the design part. The volumetric geometry of each layer is then constructed by extruding the layer contour by the layer thickness in the part building direction. This practice often leads to distorted part geometry due to the unidirectional bias of the extruded layers [5 – 7]. Such distortion prevents the layer geometry from properly conveying the original intended design shape; thus, accuracy is lost.
Furthermore, the ultimate goal of this research is to improve the accuracy and efficiency of LM parts through consideration of the final smoothed part geometry, and tolerance-based layer generation is envisioned in order to minimize the number of layers. The existence of such a bias effect is carried through to the final geometry. Depending on the layer thickness, the final part deviation caused by distortion can be quite large and could become the dominant source of part error. Consequently, any form of tolerance-based layer setup generation method would determine the optimal layer geometry based largely on error caused by distortion: more layers than necessary result, leading to a great increase in build time, and the resulting part is still distorted. In order to fully comprehend the severity of such distortion, further understanding of the mechanics behind layer geometry generation is necessary. The following sections give an in-depth description of layer geometry approximation, the staircase effect and systematic distortion.

2.1.1 Layer geometry approximation

Currently, in all LM processes, a CAD model is converted into layer geometries by intersecting the model with a set of parallel planes perpendicular to the part building direction. The intersections yield planar contours, which can then be extruded to approximate the volumetric geometry of the CAD model layers with square-edged layers [2]. To examine the layer approximation in more detail, one layer is isolated. After the planar intersection, a layer contains a top intersection contour and a bottom intersection contour. To fill the approximated layer volume, the top contour can be extruded downwards or the bottom contour can be extruded upwards; these are respectively known as the top-down and bottom-up slicing strategies in practice [6]. The resulting layer geometry derived from these two strategies is evidently different, and hence the physical parts built by the two strategies will be dissimilar.
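The containment behaviour of the two strategies can be illustrated on a sphere (the example used later in Fig. 2.2). For a layer spanning heights [z0, z1], top-down slicing extrudes the cross-section radius at z1, bottom-up slicing the radius at z0, and even taking the larger of the two contours still misses a maximum (the equator) that falls strictly inside a layer. This is an illustrative sketch of my own, not code from the thesis:

```python
import math

def profile(z, R):
    """Cross-section radius of a sphere (centred at z = 0) at height z."""
    return math.sqrt(max(0.0, R * R - z * z))

def layer_radii(z0, z1, R):
    """Radii assigned to the layer [z0, z1] by each slicing strategy."""
    top_down  = profile(z1, R)              # top contour extruded downwards
    bottom_up = profile(z0, R)              # bottom contour extruded upwards
    larger    = max(top_down, bottom_up)    # bigger of the two contours
    # minimum circumscribed radius: the true maximum anywhere in the layer
    true_max  = R if z0 <= 0.0 <= z1 else larger
    return top_down, bottom_up, larger, true_max

# A layer straddling the equator: even the larger contour under-contains.
td, bu, lg, tm = layer_radii(-0.1, 0.1, 1.0)
```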
2.1.2 Staircase effect

Since the original CAD model is approximated by a finite number of square-edged layers, a step-like surface morphology known as the staircase effect arises, as shown in Fig. 2.1. The thicker the layers are, the more prominent the effect is and the worse the surface finish becomes. This is a non-ideal characteristic of LM and is usually mitigated during the post-processing stage by polishing [5].

Figure 2.1  Staircase effect in LM parts (boundary of the CAD model versus boundary of the LM part).

2.1.3 Systematic distortion

After removal of the staircases in LM part post-processing, another undesirable characteristic of current LM layer generation methods becomes apparent: systematic distortion of the resulting part [5 – 7, 14]. The systematic part distortion is caused by inconsistent layer geometry containment, where the approximated extruded square-edged layers do not all correspond to the minimum circumscribed volume that fully contains the CAD model. Thus, when polishing is applied to remove the staircases, the resulting part becomes geometrically distorted. This phenomenon is illustrated in Fig. 2.2a, where a sphere is sliced using the top-down slicing strategy. The final physical object obtained after post-processing is shown in light grey on the right. The top of the sphere not only deviates from the original CAD model but also does not represent the original intended shape. Since the layer thickness of most LM processes ranges from 0.05 mm to 0.3 mm, this distortion effect can be quite significant. Figure 2.2b shows the ideal layer generation results and the final post-processed object geometry. Note that the systematic distortion discussed here is just one component of part distortion. As mentioned previously, this research focuses on the pre-processing step of LM, which implies that part distortion resulting from issues in the processing or post-processing steps is not examined.
The systematic distortion in the context of this research refers only to the distortion arising in the pre-processing step.

Figure 2.2  Systematic distortion in LM parts: (a) distortion caused by the top-down slicing strategy; and (b) distortion-free geometry obtained by a proper layer generation method.

Because of the systematic distortion, various efforts have been made to identify the optimum part building orientation such that critical part features are not distorted [26 – 28]. A non-distorted part surface is important, especially when mating of parts is required. However, optimum part building orientation only provides the best solution for the given situation. This section aims at eliminating the systematic distortion altogether. If this is achievable, distortion at the critical surface, or any surface for that matter, is no longer of concern. Hence, increased emphasis can be placed on part strength and material properties in optimizing the LM process. The present work is focused on eliminating the systematic distortion for objects that can be sliced into smooth single-contour layers.

2.2 Prior methods for systematic distortion reduction

Kulkarni and Dutta [5] were the first researchers to attempt to address the systematic distortion issue, introducing a layer generation method in which both the top-down and bottom-up slicing strategies were applied to the CAD model. Surface normal vectors were used to determine which strategy should be applied to a specific layer of the sliced CAD model. Most importantly, a surface normal sign assumption was made for this method to work properly. This assumption required that the surface normal vectors be radially consistent within the given layer. Radial consistency in this context means that the direction of the surface normal vectors with respect to the part building direction cannot change throughout the sliced layer surface.
This essentially signifies that one of the two contours (top and bottom) in the given layer should fully contain the other when projected along the part building direction. Thus, a linear extrusion of the larger contour will result in full containment of the sliced layer volume. It is evident that, with this assumption, the part geometry to which this method can be applied is quite limited. Chiu and Liao [6] and Chiu et al. [7] later attempted to relax the surface normal sign assumption by utilizing a tessellated geometric model of the part. By using a mesh model, the part was decomposed radially. Kulkarni and Dutta's method [5] discussed above could then be applied in a local manner, radially around the layer. For this method to work properly, each surface triangle of the associated layer from the triangular mesh model of the part must intersect both the top and bottom slicing planes so as to yield two linear edges within the triangle. From the normal vector of the triangle face, the "outer" edge was chosen. This was done for all the surface triangles of the associated part layer, and the desired layer contour was constructed by combining all the individually chosen edges. In essence, the top and bottom layer contours were combined and any edge that fell within either of the two contours was removed. It should be noted that this method assumes that the top and bottom planes of a layer pass through the same set, or at least the same number, of triangles. This is necessary since the method is based on the idea of edge picking. For common triangular mesh models, however, the probability of passing two planes through the exact same set or number of triangles is extremely low. Because of this constraint, the part models to which this method can be applied are also quite limited. As discussed above, both existing methods have limited applications. More specifically, they cannot deal with axial inconsistency of the sliced layer surface in the part building direction.
Axial inconsistency in the context of this research refers to the situation where the surface normal direction changes sign through the layer thickness. This indicates that the sliced layer surface is highly curved along the part building direction, with a local maximum or minimum at mid-layer thickness. In order to produce distortion-free layer geometry, the local maximum within the layer has to be found and included in the layer contour. Chiu et al. [7] did attempt to resolve this issue by taking finer slices. However, this approach simply increases the slicing resolution locally and does not actually find the maximum of the part model within a layer. The previously outlined issues with axial and radial inconsistency are illustrated in Fig. 2.3. Figure 2.3a shows the contour obtained by merging the top and bottom contours of the shown layer [6], and Fig. 2.3b shows the contour necessary to eliminate the systematic distortion. The slanted cylinder in the figure has radially inconsistent surface normal vectors and the sphere has axially inconsistent surface normals. In both cases, the straightforward merging method fails to capture the proper contour and the systematic distortion still exists in the final LM part.

Figure 2.3  Layer contour generation: (a) by the existing merging method; and (b) correct contour for distortion-free LM parts (by the proposed method).

2.3 Proposed method

In order to eliminate the systematic distortion, each layer needs to be constructed as the minimum circumscribed extruded volume that fully contains the CAD model of the associated layer. This volume can be derived by solving for the outer boundary of the CAD model of the sliced layer projected onto the plane perpendicular to the part building direction. For each layer of the sliced CAD model, the boundary contour is then extruded to approximate the geometry of its corresponding layer.
By using the projected outer boundary of each layer, the minimum circumscribed extruded volume is found; thus, the condition of full CAD model containment is satisfied and the systematic distortion is eliminated, with both radial and axial inconsistencies dealt with. Solving for the outer boundary of a layer of a sliced CAD model is, however, not a trivial task. It is very difficult, if at all possible, to derive a closed-form solution, and approximation methods are often employed. Qin et al. [29] attempted to compute the projected boundary of a complete closed CAD model utilizing a tessellated surface model that approximated the original CAD model. If this method were to be implemented for the present case, the tessellated mesh model would need to be sliced into individual layer models, and the resulting layer model surface might need to be re-tessellated in order to construct a closed model for applying the method. The existing algorithm to slice the tessellated mesh model with parallel planes of arbitrary orientation, and the subsequent re-tessellation and closure of the sliced layer model, is not robust and at times produces invalid results. Because of this, an approximated tessellated or triangulated surface model is not employed to solve for the outer boundary of a sliced CAD model layer. The outer boundary of the sliced CAD model layer in the projected plane, which is perpendicular to the part building direction, corresponds to the outer profile of the layer model silhouette on the projected plane. By extracting the outer profile of the silhouette, the desired minimum circumscribed extruded volume is readily obtained. To achieve this, a point cloud based surface modeling data format is used. First, the point cloud representation of the original CAD model surface is generated. The point cloud is segmented into layer point data and the points within each layer are projected onto a plane that is perpendicular to the part building direction.
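These two steps, segmentation of the surface point cloud into layers and orthographic projection along the build direction, amount to little more than binning by height and dropping the z coordinate. A minimal sketch (the build direction is assumed to be +z, and uniform-thickness binning is my own simplification):

```python
def segment_and_project(points, z0, thickness):
    """Bin 3D surface points into layers along z and project each layer
    onto the (x, y) plane perpendicular to the build direction."""
    layers = {}
    for x, y, z in points:
        idx = int((z - z0) // thickness)           # layer index along build axis
        layers.setdefault(idx, []).append((x, y))  # dropping z projects the point
    return layers

pts = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.6), (0.0, 1.0, 0.4)]
layers = segment_and_project(pts, 0.0, 0.5)  # layer 0: two points, layer 1: one
```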
The outer boundary contour of the projected point data is then extracted. This outer contour represents the approximated solution of the original layer model silhouette. A flow chart summarizing the primary steps of the proposed method is shown in Fig. 2.4. One advantage of using point cloud data is that it can easily be segmented into layer groups without the need to calculate intersection solutions. More importantly, the discrete point representation of the layer surface decomposes the layer model in both the radial and axial directions at the highest possible resolution. This means that both the radial and axial inconsistencies described previously can be easily dealt with. By finding the outer boundary of the projected point set, the layer contour that eliminates the systematic distortion is found.

Figure 2.4  Proposed systematic distortion reduction procedure: point cloud data → point cloud segmentation → layer point data projection → layer boundary contour generation → LM contours.

2.4 Boundary contour generation

The main challenge of the proposed scheme lies in the generation of the outer boundary contour for a given point data set. This is because the point distribution after projection becomes quite irregular compared to that of the original 3D points. Specifically, the projected points on the 2D plane are not characterized by the same topological relationship as the original point cloud in 3D space. Solving for the outer boundary contour of a projected 2D point data set is thus not a trivial task and will be discussed further in the following subsections.

2.4.1 Existing methods

Currently, there are three methods reported in the literature that can be utilized to capture the outer boundary of a 2D point data set. The oldest and still the most popular is the alpha shape method [30].
An alpha shape can be considered a piecewise linear interpolant based on the Delaunay triangulation [31] of the point set, with the associated alpha circles acting as a point selection filter. An alpha circle is a circle with a user-specified radius that has two data points from the data set on its perimeter but does not contain any other data point from the set within itself. The biggest drawback of the alpha shape stems from the user-specified radius value. Given a data set, this value is usually not intuitive and cannot be automatically determined; the user needs to adjust the radius until the best value is found. In the absence of any other information, this best radius is determined simply from visual examination of the results on the computer screen. As a result, different users applying the alpha shape method to the same data set will most likely acquire different solutions. Furthermore, the alpha shape is not adaptive: it cannot keep the captured level of detail consistent when the point distribution is irregular. A change in the point distribution can cause the alpha shape to capture too much or too little detail. B-spline curve fitting is another option [32]. This method finds the best-fitted B-spline polynomial expression that encapsulates a 2D point data set. For the application presented in this section, however, every point in the point cloud representation of the layer model surface is sampled directly from the original CAD model. As a result, these points are accurate surface points and should be included exactly rather than best fitted. In addition, B-spline curve fitting itself is still an active research area with many issues yet to be resolved; in other words, existing B-spline curve fitting algorithms are not yet robust enough to be fully automated [33]. The voxel method is the third option, popular in computer visualization [34].
It is used extensively in the medical imaging field and can be applied to the application presented in this thesis. To describe its main concept, a grid of constant spacing is constructed over a plane containing a given 2D point data set, and each grid cell that contains one or more data points is assigned a value of 1. Cells that do not contain any data points receive a value of 0. In addition, the internal cells, those surrounded by cells of value 1, are assigned a different value, as are the external cells. By connecting the cells with a value of 1 that are adjacent to external cells, the outer boundary contour can be created very efficiently. However, the initial determination of the grid size is critical and not intuitive. If the cells are too small, there is a high probability that voids will form and a broken contour will be created. Conversely, if the grid spacing is too large, fine features in the contour cannot be captured and the resulting accuracy is poor. Determination of the grid size for the voxel method is primarily operator based; thus, the optimal grid size for a given 2D point data set is not readily attainable. Because of the drawbacks of the existing methods stated above, a new and validated method of outer boundary contour generation for a given 2D point data set has been developed in this work. Two important mathematical concepts are introduced in the next subsection, with the subsequent subsection outlining the underlying principle and detailed procedure of the proposed method.

2.4.2 Necessary concepts

Before outlining the proposed algorithm, two key mathematical concepts need to be described. The first is the convex hull [35]. For any given set of points, the convex hull is the minimum convex set that contains the points. A set is convex if, for every pair of points within it, the line segment that connects them also resides within it. Fig. 2.5a shows a convex set and Fig.
2.5b shows a non-convex set. Thus, the convex hull is the minimum piecewise convex contour constructed from the given data points that encapsulates the complete point set. A simpler analogy is the elastic band analogy: when an elastic band is stretched over an object or a set of points and released, the band forms the minimum convex shape necessary to encompass the object or point set, as shown in Fig. 2.6. The convex hull is an important concept for determining the notion of shape.

Figure 2.5  Definition of: (a) a convex set; and (b) a non-convex set.

Figure 2.6  Elastic band analogy of the convex hull.

The second concept is Delaunay triangulation [31], which was mentioned previously but is now explained in more detail. The Delaunay triangulation of a point set is constructed using circumcircles. A circumcircle is the circle defined by the three vertices of a triangle. In order to construct the Delaunay triangulation, circumcircles are constructed using the given point set. The Delaunay condition states that the corresponding triangle of a circumcircle is part of the Delaunay triangulation if and only if the circumcircle does not encapsulate any other data point. Because of this, Delaunay triangulation avoids the construction of skinny triangles, which is ideal when interpolating connectivity information for a point set; skinny triangles usually signal improper determination of connectivity information. Furthermore, a unique triangulation for a given data set is found through Delaunay triangulation. Fig. 2.7 shows a Delaunay triangulation with its corresponding circumcircles.

Figure 2.7  Delaunay triangulation with circumcircles shown.

2.4.3 Proposed algorithm

In order to properly capture the outer boundary of a projected layer point data set, there are specific requirements that need to be satisfied by the associated algorithm.
First, the algorithm must capture the most probable (or closest to theoretical) points in the data set, with respect to those on the theoretical CAD silhouette curve. Second, the algorithm needs to be adaptive to changing point density in the data set in order to keep the captured level of detail consistent. Third, the algorithm needs to capture the boundary contour accurately without user input. To elaborate on the above requirements, a special projected point data set is used, where the discretized theoretical solution of the outer boundary contour is a subset of the projected points. Thus, only the theoretical solution points should be extracted. The first requirement sets the algorithm to identify the most probable points. However, without knowing the variation in point density distribution, there is no indication whether the points captured are a subset of the theoretical solution points or the inverse. Therefore, the second requirement needs to be put in place to ensure the proper level of detail is captured. The last requirement is imposed to make certain that there is no human-contributed error. If the user's judgment is integrated into the calculation, there is a high chance of inaccurate contour generation due to subjective reasoning from the user, and this should be avoided. Furthermore, this requirement implicitly specifies that a projected point data set corresponds to a unique boundary contour. With the above three requirements, we propose a marching algorithm to generate the piecewise linear boundary contour via known curvature information. Based on the assumption of minor curvature variations along the boundary contour, the algorithm can march forward and find the most probable point that should be included in the boundary contour independently of the point density distribution. The boundary contour can thus be captured in a consistent and adaptive manner.
Figure 2.8 shows the flow chart summarizing the main aspects of the proposed marching algorithm.

Figure 2.8 Proposed boundary contour generation algorithm. The flow chart comprises: projected layer points; convex hull construction; initial curvature determination from the two shortest edges of the convex hull; then, in a loop, forward circular search range determination; data segmentation using the determined search range; Delaunay triangulation of the point subset; unnecessary triangle and edge elimination; new edge/point search using the previous step's curvature information; inflection point checking; curvature information determination; and a step forward. The loop ends when the new point equals the starting point.

Before outlining the proposed contour generation algorithm in detail, the underlying assumptions and constraints need to be stated. Since the algorithm uses curvature information and assumes minor curvature variations along the boundary contour, tangential discontinuities cannot exist; the boundary curve has to be smooth. The algorithm is also limited to the construction of a single closed contour for each layer. As the projected point data is not considered to have more than one boundary contour, multiple and/or internal contours cannot be constructed. Furthermore, the point cloud representation of the CAD model surface has to be adequately sampled, meaning the point density has to be high enough that all geometric features are properly represented. It should be noted that determining the applicable sampling density is an active research subject by itself and beyond the scope of this study. Without knowing the threshold sampling density, the CAD models in the case studies have been sampled with high point density. In order to initialize the marching search, initial curvature information is needed. To determine some initial curvature values on the boundary curve, the convex hull [35] of the point set is constructed, as shown in Fig. 2.9.
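In implementation terms, this initialization, together with the search for the pair of adjacent hull edges of shortest combined length that Eq. (1) below formalizes, can be sketched with SciPy. The function name `initial_segment_pair` is ours:

```python
import numpy as np
from scipy.spatial import ConvexHull

def initial_segment_pair(pts):
    """Return the point indices (a, b, c) spanning the pair of adjacent
    convex-hull edges with the shortest combined length, i.e. the pair
    minimising L(i) + L(i+1) over the hull edges (cyclically)."""
    v = ConvexHull(pts).vertices            # CCW-ordered hull vertex indices
    edges = pts[np.roll(v, -1)] - pts[v]    # edge i runs v[i] -> v[i+1], cyclic
    L = np.linalg.norm(edges, axis=1)
    i = int(np.argmin(L + np.roll(L, -1)))  # minimise L(i) + L(i+1)
    n = len(v)
    return int(v[i]), int(v[(i + 1) % n]), int(v[(i + 2) % n])
```

The angle at the middle vertex `b` of the returned triple then supplies the initial curvature estimate for the march.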
Since a smooth curve constraint is imposed, there are certainly segments of the convex hull that coincide with segments of the boundary contour. For a pair of such adjacent segments, local curvature values can be evaluated. In order to find the first pair of segments for the marching search, the algorithm evaluates all pairs of adjacent convex hull edges for the shortest combined length via:

min { L(i) + L(i+1) },  i ∈ [1, N]  (1)

where L(i) is the length of the ith edge of the convex hull with a total of N edges and L(N+1) = L(1). It should be noted here that the pair of shortest edges is not necessarily close to the region of highest curvature, since projection causes irregularity in the point distribution. As is evident in Fig. 2.9, the longer the edges are, the more likely they are not part of the outer boundary contour [36]. Conversely, the shortest segments have the highest probability of being part of the outer boundary contour. Furthermore, as the CAD model is assumed to be smooth and adequately sampled, any convex curvature on the model will be represented by at least three consecutive points, or two adjacent segments, in the convex hull. Thus, the pair of shortest segments in the convex hull should be part of the outer boundary contour. As shown in the exploded view in Fig. 2.9, the angle between the two shortest consecutive segments is calculated and used to represent the curvature magnitude; the algorithm then marches in a counter-clockwise direction.

Figure 2.9 Convex hull and initial pair of segments of the boundary contour.

Specification of the marching vector needs a direction as well as a magnitude. The initial angle, or curvature information, gives the most probable direction to search for the next point on the boundary contour but does not indicate the range of the search. In order to determine the range,
By taking the inverse of the point density, an area per point is calculated. This area can be roughly represented by a circle of the same area. The minimum density value of the point set (the corresponding maximum circle size) is used in order to ensure neighboring points can always be found. A scalar multiple of the radius of this maximum circle is used as the range to implement the forward search for the most probable next point on the boundary contour (Fig. 2.10). As curvature information of the previous point is used to search for the next point, the result is in fact not really affected by the circle size as long as it is larger than the area corresponding to the local minimum density. The nearest neighbor search algorithm [37] is employed to efficiently identify the subset of points within the circle. Figure 2.10 Point subset for determining the next boundary contour segment.  Once the subset of neighboring points is found using the forward half circle (Fig. 2.10) with radius determined from the (global) minimum density, Delaunay triangulation is applied to the subset of points as shown in Fig. 2.11a. Delaunay triangulation leads to a unique triangulation of the data set. Since only the Delaunay edges containing the current point are of interest, all the  36  other edges are removed. The remaining Delaunay edges are shown in Fig. 2.11b, marking the end points connected to the current point. These end points are all probable candidates as the next point on the boundary contour. Now, using the angle (representing curvature) evaluated at the previous point, a preferred direction for the next contour segment is determined as shown in Fig. 2.11c. This preferred direction is defined as the most probable direction in which the contour polygon should grow towards under the assumption of minor curvature variations along the boundary contour. It is set by considering the curvature-representing angle of the current point the same as that of the previous point. 
The difference in angle between the preferred direction and each candidate segment is then calculated, and the segment with the smallest angular difference to the preferred direction is chosen as the new contour segment.

Figure 2.11 Next contour segment determination: (a) Delaunay triangulation; (b) unwanted Delaunay edge removal; and (c) preferred direction for the next contour segment.

It should be mentioned that before the chosen Delaunay edge can be deemed one of the boundary contour segments, an important check needs to be performed. Since the proposed algorithm uses the previous curvature information to predict and find the next contour segment, there is a possibility that the algorithm can pick an incorrect Delaunay edge, in particular at inflection points where the sign of curvature changes. To mitigate this inherent problem due to the assumption of minor curvature variations along the boundary contour, a second search for the next contour segment is performed. This time, the already determined segment length is employed to carry out a sweep outwards, as shown in Fig. 2.12. If the outward sweep intersects a point, this indicates that the currently chosen segment is not correct. The Delaunay edge associated with the newly found point from the outward sweep is then taken as the next contour segment. If the sweep does not intersect a point, then the currently chosen Delaunay edge is the correct boundary segment. The marching algorithm continues until the starting point is met.

Figure 2.12 Inflection point check.

2.5 Case studies

The proposed layer contour generation method to reduce the systematic distortion of LM parts has been implemented and tested. Four specifically devised case studies were carried out to validate the effectiveness of the proposed method.
The first three case studies focus on confirming the capability of the presented boundary contour generation algorithm to correctly establish the boundary contour for a single projected layer data set. The unique advantage of the point cloud based representation of the layer model surface in dealing with both axial and radial inconsistencies is evident from these case studies. The elimination of the systematic distortion for a complete part model is demonstrated in the last case study.

2.5.1 Case 1: sphere

The middle layer of a sphere was selected as the first case study (Fig. 2.13a). This is a special layer geometry for which both the top-down and bottom-up slicing strategies fail to capture the minimum circumscribed extruded volume or, equivalently, the minimum circumscribed contour for the silhouette of the projected layer. This particular issue was referred to previously as the axial inconsistency. The proposed method alleviates this issue with ease.

Figure 2.13 Mid-layer boundary contour generation for a sphere: (a) layer extracted; and (b) projected points and the resulting boundary contour.

To validate the presented boundary contour generation algorithm, the points were sampled such that the discretized theoretical solution points on the mid-sphere great circle are a subset of the projected layer data set. Thus, with the theoretical solutions known, the proposed boundary contour generation algorithm can be evaluated without difficulty. The mid-layer of the sphere model contained 9,345 sampled points, of which 448 were the theoretical solution points. These 448 points were all output by the algorithm, as can be seen in Fig. 2.13b. The presented algorithm did not capture any extra unwanted points or miss any solution points. The algorithm is thus deemed effective when dealing with axial inconsistency in simple convex part models.

2.5.2 Case 2: slanted cylinder

The second case study was performed on a slanted cylinder.
A typical layer containing 12,006 sampled points was extracted, of which 2,013 were the solution points. Figure 2.14a illustrates this layer, in which the radial inconsistency is clearly present. The radial inconsistency essentially means that there is a transition region in the correct boundary contour where the outer boundary is part of neither the top nor the bottom contour of the extracted layer. This transition region cannot be captured simply by point selection. The presented algorithm was able to identify all 2,013 solution points, as shown in Fig. 2.14b, confirming that the proposed layer contour generation method is able to accurately capture convex layer geometry with transition regions.

Figure 2.14 Layer boundary contour generation for a slanted cylinder: (a) layer extracted; and (b) projected points and the resulting boundary contour.

2.5.3 Case 3: slanted concave cylinder

The previous case studies validated the presented method as effective for simple convex part models containing axial or radial inconsistency. Now, the method needs to be examined for part models containing concave layer geometry. In particular, the effectiveness of the marching algorithm at inflection points needs to be evaluated. The part model shown in Fig. 2.15a was created for this purpose. As in the previous case studies, the model was sampled in such a way that the solution points were a subset of the sampled point data set. The layer extracted from the model contained 13,060 sampled points, of which 1,323 were the solution points. The presented method accurately identified all 1,323 solution points, as shown in Fig. 2.15b. This confirms that the method is capable of capturing concave layer geometry, and the inflection point check procedure is deemed effective.

Figure 2.15 Layer boundary contour generation for a concave cylinder: (a) layer extracted; and (b) projected points and the resulting boundary contour.
2.5.4 Case 4: s-shaped cylinder

In order to ensure that the presented method is effective in eliminating the systematic distortion, every layer generated needs to be the minimum circumscribed extruded volume that fully contains the corresponding layer model. To demonstrate this critical feature of the presented method, an s-shaped cylinder was devised whose layer geometry is characterized by both axial and radial inconsistency. Using the proposed method, 15 layers were generated. The layered and the resulting post-processed part geometry are shown in Fig. 2.16a. As a comparison, Fig. 2.16b shows the layered and post-processed part geometry using the conventional top-down slicing approach. In Fig. 2.16b, the dashed line represents the original CAD model and the solid line the resulting post-processed geometry (an estimate of the part geometry after polishing). As can be observed in this figure, the systematic distortion is eliminated by the proposed method. This method is applied in the subsequent section to ensure that systematic distortion does not become a factor during layer setup optimization.

Figure 2.16 Layer generation for an s-shaped cylinder and the corresponding post-processed geometry: (a) proposed method; and (b) top-down slicing.

3. TOLERANCE-BASED LAYER SETUP OPTIMIZATION FOR AXIS-SYMMETRIC OBJECTS

After the elimination of systematic distortion, the final part error can be better determined since the resulting deviation is no longer heavily affected by an artifact created during layer generation. For an identical layer setup, the layer error of the final part is reduced. Thicker layers can thus be used to achieve equal part tolerance. Therefore, an excess number of layers to compensate for the error caused by distortion is no longer necessary.

3.1 Prelude

This section of the thesis introduces a tolerance-based method to generate the optimum layer setup required to build layered manufacturing (LM) end-user parts for maximized efficiency.
To achieve this, the deviation between the final smoothed LM part geometry and the original design model is formulated and controlled. Maximized layer thicknesses are then realized through optimization of layer position with respect to geometry and maximization of the allowable deviation for each layer, which in turn leads to minimization of the build time. Current LM layer setup methods lack such capabilities, rendering layer thickness selection to operator-deemed best. Without the ability to optimize layer position with respect to the final geometry, layer thickness selection is often overly conservative, causing more layers than necessary to be used. Given that the LM build time increases rapidly with the number of layers, efficiency is greatly reduced with a conservative layer setup.

3.1.1 Finishing and its implications

Since the physical object is created through stacking of successive layers, a well-known staircase effect occurs on the surface of the LM part [1 – 4, 18]. This was not previously looked upon as an issue, since LM was initially used for prototyping purposes due to low built-part strength. However, owing to recent technological advancements, LM parts have achieved part strength of approximately 80% of their machined equivalents. This drove LM towards manufacturing of end-user products rather than prototypes [3], signaling a need to improve LM part accuracy and surface finish [13]. The first solution to improve accuracy and surface finish is to decrease the layer thickness. However, the build time of an LM part is largely governed by the volume of the part to be built and the layer thicknesses used, since setup and tooling time are almost eliminated [1]. The choice of layer thickness thus becomes vital in achieving an efficient build process. Furthermore, the number of layers affects the build time through a squared relationship, whereby twice as many layers cause an increase in build time by a factor of four.
Therefore, decreasing layer thickness to decrease surface roughness can be detrimental to LM build time. In addition, the minimum layer thickness achievable by current LM machines is inadequate to produce parts with low enough surface roughness to meet current industry needs [13 – 17]. Thus, polishing has become the primary and most effective post-processing finishing method used to decrease the surface roughness of LM parts. Other processes such as water vapor polishing [38], sand blasting, and surface filling have also been used. With LM parts needing finishing processes, a layer setup determined through the deviation between the original CAD model and the layer model is no longer adequate. The staircases are polished away during the finishing process, arriving at a geometry that no longer possesses the surface roughness dictated by the layer thicknesses, as shown in Fig. 3.1. Utilizing the deviation between the layer model and the CAD model to determine the optimum layer setup thus becomes meaningless. The actual part surface tolerance is the deviation between the CAD model and the polished model derived from smoothing of the layer model. In order to obtain the optimum layer setup necessary to satisfy given geometric design requirements for a part, the polished model needs to be emulated and the deviation between it and the CAD model calculated. Note that the layer error referred to in this thesis is the maximum deviation between the CAD model and the final part for a given layer. Therefore, each layer has one associated layer error value, and the maximum layer error is the maximum of the layer error values.

Figure 3.1 Cusp height vs. tolerance control.

3.1.2 Current methods

Since LM has become widely regarded as a very capable manufacturing process, numerous research efforts to increase efficiency regarding layer setup have been carried out.
When LM was primarily used for prototyping purposes, Dolenc and Makela in 1994 recognized the significant time saving achievable through non-uniform layer setup [22], and the concept of adaptive slicing was introduced. In their method, each layer thickness is determined based on the cusp height constraint specified by the user and the surface normal at the preceding intersecting plane. The definition of cusp height is shown in Fig. 3.1. The layer thicknesses are then sequentially determined. Many variations have stemmed from the same concept since then; even now, it remains one of the primary methods of layer setup determination aimed at build time reduction. This method, nonetheless, optimizes the layer setup based on the layer model for prototyping purposes, not on the final polished geometry for an end-user product. The optimum layer setup cannot be determined without the integration of the polished geometry. Furthermore, the layer thicknesses are calculated sequentially, utilizing only information found at the intersection planes. The optimum solution cannot be guaranteed due to the lack of consideration for geometries between the planes. In addition, the method's solution varies depending on the direction the sequential approach takes; thus, the globally optimal solution is not found. During the same period, Hope et al. proposed a method similar to that of Dolenc and Makela, except with sloping layer surfaces [39]. It is recognized that further build time reduction can be realized if the prototyping system is able to construct sloping-surface layer models. The deviation between the original intended CAD model and the layer geometry is reduced; because of this, thicker layers can be used given the same tolerance constraint. Hope et al.'s method suffers from similar issues as Dolenc and Makela's, since the sloping-surface layer model still exhibits poor surface quality and a subsequent finishing process is still needed.
The layer setup determination is not optimized in accordance with the finished geometry, and a sequential method is also used. In 2000, Kulkarni and Dutta attempted to integrate layer manufacturing and a material removal process to decrease surface roughness and deviation [17]. A ball-end mill is used to remove the staircase effect and to bring the surface finish to the equivalent of machining. The presented method improves the surface finish but not the overall efficiency. The initial, pre-machined LM part's layer setup is again not optimized with respect to the final finished geometry. The optimum solution for minimum build time for a given geometry cannot be determined because the link between layer setup and final geometry is not exploited; only a method to best machine the staircase surface with a ball-end mill was presented.

3.2 Methodology

The main goal is to minimize build time while satisfying the user-specified geometric tolerance for the final polished geometry. To best achieve this, layer thicknesses are allowed to vary, thus permitting an adaptive method to capture the intended geometry. However, the heart of the research is not in the realization of variable layer thickness but rather in the means to determine the layer thicknesses, positions, and number required to satisfy the above goal. The important thing to realize is that the maximum layer error after polishing is not dependent on the number of layers alone but also on layer thickness and position relative to the geometry. The effect of layer position relative to geometry is shown in Fig. 3.2. Currently, industry mostly uses uniform layer thickness LM methods in the belief that a decrease in layer thickness (an increase in the number of layers) results in a decrease in maximum layer error [1]. However, depending on the relative positioning of the layers with respect to the geometry, different maximum layer errors can result.
Thus, it is not necessarily true, in a local sense, that an increase in the number of layers allows for a decrease in maximum layer error. If an increase in the number of layers causes less favorable layer positions, the maximum error will in fact increase as the number of layers increases. In a global sense, however, increasing the number of layers eventually decreases the layer error, since as the number of layers approaches infinity, the maximum layer error approaches zero. Overall, the relationship between the number of layers and the layer error for a uniform layer setup possesses a decaying oscillatory nature. Therefore, to decrease the layer error, a significant increase in the number of layers is needed, creating excessive layers for the given tolerance constraint. For finding the minimum number of layers needed to satisfy the given geometric tolerance, it is ideal to have a relationship between maximum layer error and number of layers that does not oscillate locally and, preferably, is a single-valued function. This is achieved through variable layer thickness and position such that the positions of the layers relative to the geometry are similar between solutions with different numbers of layers. Only then would any increase in the number of layers to decrease the maximum layer error make sense, as shown in Fig. 3.3. As a result, a need to decrease error can be easily met through an increase in the number of layers. One thing to keep in mind is that the layer thicknesses and positions are interdependent: in order to change a layer position, the adjacent layer thicknesses have to be varied.

Figure 3.2 Difference in layer error due to layer positions with respect to geometry.

Figure 3.3 Similar layer setups result in a desirable layer error to number of layers relationship.

Therefore, layer setups between different numbers of layers that result in similar layer distributions relative to the finished geometry are necessary.
By minimizing the deviation between the polished model and the layer model, such similar layer allocations can be achieved. Minimizing the deviation yields a uniform layer error distribution, allowing every layer to achieve the same layer error. This in turn provides the layer setups for any given number of layers with identical goals to satisfy, which leads to similar layer thickness and positioning results across different numbers of layers. With this, varying the number of layers to find the layer setup that satisfies the user-given tolerance becomes possible. A compensation method is proposed along with a subsequent optimization, both nested in a number-of-layers marching loop, in order to determine the layer setup with the least number of layers necessary to satisfy the user-set geometric tolerance constraint. A much more globally oriented solution can be achieved in this manner, solving not only for the optimum layer thicknesses and the number of layers necessary to describe the geometry but for the layer positions as well. Similar layer-position solutions result with the proposed method, allowing an almost non-oscillatory relationship between the number of layers and the maximum layer error, thus making it possible to march on the number of layers. Prior to outlining the detailed procedures of the proposed method, polished geometry emulation is needed.

3.3 Emulation

Before the deviation between the CAD model and the polished geometry can be calculated and the layer setup determined, emulation of the polished geometry is necessary. A consistent method to best approximate the resulting polished geometry is desired. The following section outlines the proposed emulation method. For the purpose of proof of concept, the geometries being emulated are simple, at least G1-continuous, revolved objects where a single profile defines the part.
The profile itself is defined by a single-valued function in the build direction; thus, features resulting in multiple contours or non-distinct contours during layer generation are not possible. Note that the proposed method can be applied to non-revolved objects if a profile in the build direction can be extracted, as shown in [24, 40]. However, for the purpose of this research, a single profile of a revolved geometry is considered. To emulate the polished geometry, a layer model is determined first. This layer model represents the LM part created before the smoothing operation. To smooth this geometry, the excess staircase material needs to be removed. However, depending on the shape of the profile, a different amount of excess material removal is necessary. Properly determining the amount of excess material removal needed in any given region serves to determine the proper control points necessary for subsequent profile fitting. As shown in Fig. 3.4, depending on the amount of material removed, the control point for each layer falls somewhere in the permitted control point extraction region. In the LM finishing process, the object is polished until the staircases are completely removed. Thus, the control points being extracted should ideally be the intersection points between the layer model and the CAD model. However, the layer model gives somewhat limited information for shape emulation, resulting in difficulty in control point extraction in some regions. Specifically, for the type of profile this research deals with, there are four cases that need to be considered separately for control point extraction, three of which need interpolation. These four cases are the convex region, the concave region, the distinct monotonic region, and the start/end region.

Figure 3.4 Permitted regions for control point extrapolation.
3.3.1 Distinct monotonic region

The simplest control point extraction case is that of the distinct monotonic regions. For these regions, the extracted control points, where the CAD model intersects the layer model, are the concave intersections of the vertical and horizontal edges of the stairs. The staircases are well defined in these regions, leaving no ambiguity as to where the CAD model and the layer model intersect. Simply by selecting the inner corner points of the staircase model, the proper control points are found. A control point can be found for each corresponding layer.

3.3.2 Convex region

As for the convex region, straightforward point selection cannot be used, since the layer model does not provide sufficient information in these regions to allow for proper control point extraction. As shown in Fig. 3.5a, the control point can have an infinite number of solutions: it can be located anywhere on the vertical edge and give a valid but inaccurate solution. Some form of estimation needs to be implemented. Studying the layer model, the only information available is the stair profile, so any form of interpolation needs to stem from the staircase model. As mentioned previously, extraction of control points in distinct monotonic regions is accurate and straightforward. Furthermore, the regions immediately adjacent to any convex region consist of distinct monotonic features. Thus, the control points in these adjacent regions can aid in the control point extraction for the convex region. By connecting the adjacent control points with linear segments, slope approximations of the original CAD model are determined. This slope information is then used to approximate the location of the control point for the layer in the convex region, as shown in Fig. 3.5b.
Note that it is assumed there is a sufficient number of layers adjacent to the convex region for slope approximation. This is a valid assumption since, for any LM process to be capable, adequate layers to properly describe the geometry are needed. A single control point for the layer that encapsulates the convex feature is found.

Figure 3.5 Convex region control point estimation: (a) possible control point solutions; and (b) estimated control point solution utilizing adjacent slopes.

3.3.3 Concave region

The concave region follows a similar method of control point extraction, except with one main difference: the control points in the concave regions are not on the vertical stair edges, as shown in Fig. 3.6. Since the layer model always fully encapsulates the CAD model, in the concave regions the control point falls within the layer model and thus not on any of the vertical edges. It is no longer an intersection point between the CAD model and the layer model. However, just as for the convex region, the features adjacent to any concave region are distinct monotonic regions, which allow slope approximation. These slopes are used to estimate the location of the control point. The estimation quality for the concave region can be much worse than that for the convex region. As shown in Fig. 3.6a, the extracted control point can be a good approximation of the CAD model; however, Fig. 3.6b shows a case where the extracted control point is far from the actual one, resulting in large error. This is caused by the layer position. In Fig. 3.6a, the layer is situated at a location that allows the layer model to properly convey the location of the concave region control point, where the adjacent layer points are actually the intersection points between the CAD and layer models. For Fig. 3.6b this is not the case.
The layer setup is such that control point extraction for the adjacent distinct monotonic regions does not yield points that lie on the intersection of the CAD and layer models. There is no straightforward way of eliminating this issue: as long as the layer is not properly positioned, the layer model cannot convey the information needed for reliable control point extraction. This reflects directly on the physical polished model, since the proposed control point extraction method emulates the physical polishing process. If the layer setup is inappropriate, polishing can only be done by expert-deemed-best judgment, and larger than desired error occurs. This makes the concave region the most crucial when solving for the layer setup.

Figure 3.6  Concave region control point extrapolation: (a) estimated solution when a favorable layer position is present; and (b) when a favorable layer position is not present.

3.3.4 Start and end region
Lastly, start and end control point selection is discussed. As shown in Fig. 3.7a, the start and end points can be located anywhere on the horizontal face of the first/last layer, and the only information available for approximating these points comes from the adjacent layers. The adjacent slopes and curvatures can thus be used to estimate the location of the start/end point. However, this estimation is one-sided, so it can be very crude and result in large error, especially if the geometry possesses large curvature changes within the layers or simply a curvature sign change. To reduce this effect, the start and end layer thicknesses are minimized, as shown in Fig. 3.7b. Nevertheless, the start and end point approximations remain crude and unreliable, and they are ignored during compensation and optimization. Minimizing the layer thickness aims to reduce the end-effect errors as much as possible.
Since the start and end layers cannot shift or be repositioned, minimizing their thickness best guarantees the smallest possible error.

Figure 3.7  Start and end layer control point extrapolation: (a) infinite number of possible solutions; and (b) minimization of start/end layer thickness to better facilitate control point extrapolation.

3.3.5 Fitting
After control point extraction, a curve fitting method is needed to interpolate the profile. Since the layer model fully encapsulates the CAD geometry, the fitted curve has to reside within the layer model. Furthermore, for this research, the profiles are single-valued functions in the build direction. These constraints restrict the fitted curve to specific permitted areas, as shown in Fig. 3.8. The fitting method also needs to be able to inflect within any given layer, in order to better capture inflection points, and to be oscillation-free while interpolating the profile between control points. Given these constraints, the best suited interpolation method is the monotone cubic Hermite spline [41]: a cubic interpolation whose tangents are determined such that monotonicity of the fitted curve between the defining points is preserved. This method is used throughout the rest of the section.

Figure 3.8  Permitted regions for emulated model interpolation.

3.4 Layer setup determination
Once the emulated polished geometry is determined from a given layer model, the deviation between the CAD model and the emulated polished model can be calculated. With the ability to determine the layer error of every layer for any given layer setup, the optimum layer setup can be solved for.
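A minimal sketch of the monotone cubic Hermite interpolation used for the fitting step, in the spirit of [41]; this variant uses a harmonic-mean tangent rule (one common Fritsch-Carlson-style choice), and the function name is an assumption:

```python
from bisect import bisect_right

def monotone_cubic(xs, ys):
    """Monotone cubic Hermite interpolant.  Tangents are chosen so the
    curve preserves monotonicity between control points and cannot
    overshoot them, which is what keeps the fitted profile inside the
    layer model's permitted region."""
    n = len(xs)
    d = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i - 1] * d[i] > 0:                    # same-sign secants
            m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])
        # otherwise a local extremum: tangent stays zero

    def f(x):
        i = min(max(bisect_right(xs, x) - 1, 0), n - 2)
        h = xs[i + 1] - xs[i]
        t = (x - xs[i]) / h
        h00 = (1 + 2 * t) * (1 - t) ** 2           # Hermite basis
        h10 = t * (1 - t) ** 2
        h01 = t * t * (3 - 2 * t)
        h11 = t * t * (t - 1)
        return (h00 * ys[i] + h10 * h * m[i]
                + h01 * ys[i + 1] + h11 * h * m[i + 1])
    return f
```

Because the tangent is zeroed wherever adjacent secants change sign, a flat run of control points stays exactly flat, and the curve can still inflect inside a layer through the cubic segments on either side.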
As mentioned previously, if layer positions are similar between solutions with different numbers of layers, increasing the number of layers decreases the maximum layer error. To determine the optimum solution, the scheme therefore first optimizes layer thickness and position for any given number of layers. Once the maximum layer error can be minimized for any given number of layers, this method is nested inside a number-of-layers marching loop in which the number of layers is increased until the layer error constraint is satisfied.

3.4.1 Layer thickness and position
For any given number of layers, the layer setup in the unconstrained case should ideally result in a uniform layer error distribution, in other words, a minimized maximum error. To achieve this, a three-step approach is proposed. The first step determines a crude initial layer setup that is independent of the emulated polished model; its main goal is to distribute the layers to the appropriate regions, which in turn provides an enhanced input for the second step. Curvature is used for this step: regions of relatively high curvature require more layers to capture the shape appropriately than regions of relatively low curvature. However, this statement is not entirely accurate, since the optimum layer setup is defined through uniform layer error distribution, which implies that small features carry the same layer error values as larger features. Thus, higher curvature regions do not necessarily contain a higher number of layers, as shown in Fig. 3.9. Curvature alone is therefore not sufficient; instead, curvature integrated with respect to the build direction coordinate over constant intervals is used. To determine the value of this interval, the curvature is integrated from the minimum to the maximum build direction coordinate.
Dividing this value by the current number of layers gives a constant area value that is in turn used to sequentially find the thickness of each layer by integration, yielding a crude initial layer distribution. Since the layer setup is determined sequentially and independently of the emulated polished geometry, the layer thicknesses and positions need to be adjusted for better results.

Figure 3.9  Similar layer distribution at areas of different curvature values.

The above is the unconstrained case, where layer thickness can vary without upper or lower bound. In practice, however, an LM machine imposes maximum and minimum allowed layer thicknesses. Thus, when calculating the initial layer distribution for the constrained case, a calculated layer smaller than the minimum allowed thickness is set to the minimum, and a calculated layer larger than the maximum allowed thickness is set to the maximum. A constrained initial layer distribution is thereby found.

The second step is a method of compensation. Since the emulated polished model cannot be found without the layer model, and the layer error cannot be determined without the emulated polished model, this step is best solved iteratively. Layer thickness and position are adjusted simultaneously (they are in fact dependent). At this step, the layer errors associated with any given layer setup can be calculated, and these errors indicate how the layers should be adjusted: if a layer's error is large, its thickness should most likely be reduced, or the adjacent layer thicknesses increased, and vice versa. Thus, layer position and thickness adjustments can be made based on the current layer error values.
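The first step's curvature-integral distribution can be sketched as follows: integrate |curvature| over the build direction, split the total into equal areas, and read off the layer boundaries. The dense-sampling representation and function name are assumptions; curvature samples are assumed strictly positive:

```python
def initial_layers(z, kappa, n_layers):
    """Step one: place n_layers so each spans an equal share of the
    integral of |curvature| along the build direction.  z and kappa
    are dense samples of the build coordinate and profile curvature."""
    cum = [0.0]                                  # trapezoidal cumulative integral
    for i in range(1, len(z)):
        cum.append(cum[-1] + 0.5 * (abs(kappa[i]) + abs(kappa[i - 1]))
                   * (z[i] - z[i - 1]))
    target = cum[-1] / n_layers                  # equal curvature-area per layer
    bounds, j = [z[0]], 0
    for k in range(1, n_layers):
        goal = k * target
        while cum[j + 1] < goal:                 # sample interval holding goal
            j += 1
        frac = (goal - cum[j]) / (cum[j + 1] - cum[j])
        bounds.append(z[j] + frac * (z[j + 1] - z[j]))
    bounds.append(z[-1])
    return bounds                                # n_layers + 1 layer boundaries
```

With uniform curvature this degenerates to uniform layers; where curvature is concentrated, the boundaries cluster there, matching the biased distribution of Fig. 3.10a.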
A linear relationship is assumed. To adjust the layers based on the current error values, the difference between each layer error and the mean layer error is found and normalized for each layer:

d_i = (e_i − ē) / ē    (3.1)

where e_i is the layer error and ē is the mean layer error. The normalization ensures that the sum of the total adjustment is zero, thus maintaining the build direction dimension. d_i could be applied directly to scale the layers; however, because the adjustment relationship is assumed linear, some layers might become negative after adjustment. To alleviate this potential issue, a scaling factor s is used, determined by

s = min_i(t_i) / (c · max_i|d_i|)    (3.2)

where t_i represents the individual layer thicknesses and c is an arbitrary constant chosen to ensure the adjusted layer thicknesses do not become zero. Different values can be used for c; the choice only affects the number of iterations needed to find the solution. A value of ten gives good results and is used. The actual adjustment made to the current layer thicknesses, as shown by equation 3.3, is then

Δt_i = s · d_i    (3.3)

A new layer setup is then found; the layer error is checked again and readjustments are made. After some iterations, the difference between the maximum and minimum error is reduced. However, there is a limit to the amount of reduction; when the maximum reduction is achieved, compensation stops.

For the constrained case, compensation follows the identical rules, except that an adjustment is only made when it does not push the layer thickness beyond the lower or upper bound set by the machine. If an adjustment would place the layer thickness outside the bounds, it is not made, and the layer is ignored when calculating d_i and thus Δt_i. The layer is considered again in the next iteration.

Keep in mind that the compensation step is a method to determine a good initial condition for the final step, optimization.
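A single compensation pass under the update rule above can be sketched as follows. The exact normalization is interpreted as d_i = (e_i − ē)/ē with s = min(t)/(c · max|d|), an assumption about the reconstructed equations; the sketch shows the mechanics: a zero-sum adjustment, scaled so every thickness stays positive:

```python
def compensate(thicknesses, errors, c=10.0):
    """One compensation pass: layers with above-mean error get thinner,
    layers with below-mean error get thicker.  The scaling factor keeps
    every thickness positive and the total build height unchanged."""
    mean = sum(errors) / len(errors)
    d = [(e - mean) / mean for e in errors]      # eq. (3.1); sums to zero
    peak = max(abs(v) for v in d)
    if peak == 0.0:                              # error already uniform
        return list(thicknesses)
    s = min(thicknesses) / (c * peak)            # eq. (3.2)
    return [t - s * v for t, v in zip(thicknesses, d)]  # t_i - s*d_i, eq. (3.3)
```

Because the d_i sum to zero, the total height is preserved exactly; because |s·d_i| is at most min(t)/c, no layer can lose more than a tenth of the thinnest layer per pass with c = 10.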
Due to the number of variables, the objective function of the problem is highly unlikely to be convex. To avoid local minima, an initial condition close to the solution needs to be determined. How well the compensation method determines such an initial condition can be gauged by how the optimization routine performs in the unconstrained case: since the unconstrained layer error distribution should come out uniform, the ability of the compensation method to find a good initial condition for optimization can be assessed from the optimization results. Optimization is performed with the objective function

min ( max_i(e_i) − min_i(e_i) )    (3.4)

where e_i is the layer error.

The unconstrained case gives a good indication of the capabilities of the proposed method, which in turn increases confidence in the method for the constrained case. For the constrained case, the optimization is performed bounded: lower and upper bounds are applied to the layer thicknesses, and the difference between the maximum and minimum layer error is minimized. The optimum layer thicknesses and positions are thus found for a given number of layers.

Note that the start and end layers are ignored during layer setup determination, since control point extraction in these regions is unreliable. For the unconstrained case, arbitrary layer sizes two orders of magnitude smaller than the object's build direction dimension are used there. For the constrained case, the minimum layer thickness is used at these regions, with the assumption that such a setup gives the minimum possible errors for the start and end layers.

3.4.2 Number of layers
With the capability of determining the optimum layer thickness and position for any given number of layers, increasing the number of layers now results in a decrease in layer error. Marching can then be used to determine the number of layers necessary to satisfy a user-specified tolerance.
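The marching loop of this section can be sketched as follows, with `optimize_setup(n)` standing in for the nested compensation/optimization step (a hypothetical callable returning the minimized maximum layer error for n layers); the starting estimate and the backward stepping on overshoot follow the procedure described here:

```python
def march(optimize_setup, n_start, tol, n_min=3):
    """Step the layer count by one until the minimized maximum layer
    error first drops below tol.  If the initial estimate overshoots
    (already under tolerance), step backwards until the error would
    first exceed tol and return that smallest satisfying count."""
    n = max(n_start, n_min)
    if optimize_setup(n) > tol:
        while optimize_setup(n) > tol:           # march forward
            n += 1
        return n
    while n > n_min and optimize_setup(n - 1) <= tol:   # overshoot: back up
        n -= 1
    return n
```

Both directions converge on the same answer, the smallest layer count whose optimized error satisfies the tolerance, so the quality of the starting estimate only affects how many expensive optimization calls are made.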
However, it is quite inefficient to march from a number of layers far from the solution, and the optimization is computationally intensive: if the solution consists of a hundred layers, starting the marching from three layers equates to ninety-seven iterations. It is more desirable to first estimate a close-to-solution number of layers, which greatly reduces computation time.

Two important characteristics of the mean layer error with respect to the number of layers allow a quick iterative method to estimate the starting point for marching. First, the mean of the layer error distribution obtained from the integration of curvature with respect to the build direction is within the same order of magnitude as that of the unconstrained uniform-layer-error solution. Second, the mean layer error exhibits a decaying, nearly non-oscillatory trend with respect to the number of layers. Combining these, a number of layers can be quickly found using the mean layer error values; the resulting estimate is within ten percent of the final solution.

Marching starts from the above initial estimate for both the constrained and unconstrained cases. The number of layers is increased by one per step until the layer error just becomes less than the specified tolerance. The starting estimate can, however, overshoot the solution number of layers. In such a case, the marching steps backwards until the layer error is just larger than the user-specified value, where it stops and sets the previous iteration as the solution. This is deemed the optimum layer setup for the given user-specified geometric tolerance.

3.5 Case studies
The proposed method is implemented on the following case studies in order to validate the scheme and to test its capabilities.
The first case study performs a step-by-step check of the method and confirms that the proposed method can find the optimum layer setup using the deviation between the CAD model and the emulated polished geometry. Furthermore, the number of layers determined by the proposed method is compared with that of the current industry uniform layer method, to analyze the method's effectiveness in efficiency. The second case study illustrates the importance of layer position relative to geometry, and how optimum results can be obtained through the proposed method.

3.5.1 Case 1: convex only axis symmetric object
For Case Study 1, a simple object with the revolved profile shown in Fig. 3.10 is used. A geometric tolerance of 0.03 mm is given, and the layer setup necessary to satisfy the tolerance is found for both the constrained and unconstrained cases. The result of each individual step is discussed and later compared.

For the unconstrained case, the proposed method found the starting marching number of layers to be sixteen and the resulting final number of layers satisfying the given geometric tolerance to be eighteen. The following paragraphs outline the determination of layer thickness and position at each individual step for the final number-of-layers solution; the outcomes of the nested compensation and optimization within the intermediate marching calculations are not shown. The unconstrained case is tested first because its ideal error distribution is known: it is meant to prove the validity of the method before more complicated scenarios are tested.

In the first step, a crude layer setup based on the integration of curvature with respect to the build direction coordinate is found, as shown in Fig. 3.10a. As seen from the figure, layers are concentrated in the area of higher curvature.
Even compared with the results of the compensation and optimization steps, shown in Fig. 3.10b and c respectively, the concentration is much higher. This biased characteristic offers a good starting point for the second step, compensation: it is more robust for the compensation step to adjust layers starting from areas of higher curvature than from areas of lower curvature, since it is easier to discover the finer features first using step one than to find them through compensation alone. As seen in Fig. 3.11a, the error after the first step is small for the middle layers and large for the edge layers, which is to be expected given the over-concentration of layers in the higher curvature region. As a reminder, the start and end layers are ignored.

Figure 3.10 Unconstrained layer distributions: (a) initial layer setup; (b) compensated layer setup; and (c) optimized layer setup.

Once the initial layer setup is found sequentially, it is then fed into the second step, compensation. As shown in Fig. 3.10b, the compensation step adjusts the layer thicknesses, and in turn positions, to reduce the difference between the maximum and minimum layer error, shown in Fig. 3.11b. Further adjustment is made through the last step, optimization. For the unconstrained case there are no lower and upper bounds on the individual layer thicknesses, so the initial condition found by compensation should allow the optimization step to find a solution with uniform layer error. As shown in Fig. 3.11b, this is the case: the layer errors are within 5% of the mean. If the accuracy of the optimization step is increased further, the percentage difference from the mean is further reduced, at the cost of increased computation time. Fig. 3.10c shows the resulting layer distribution.
Compared to the layer distribution found by compensation, only minor adjustments are made; thus, the compensation method is capable of adjusting the layer thicknesses and positions close to the final solution.

Figure 3.11 Unconstrained layer error: (a) initial layer setup; (b) compensated layer setup; and (c) optimized layer setup.

As for the constrained case, a vastly different layer distribution results, with the minimum allowable layer thickness set at 0.3 mm and the maximum allowable layer thickness constrained to 1 mm. The initial layer setup is shown in Fig. 3.12a, with the corresponding layer error distribution shown in Fig. 3.13a. Because of the constraints, the number of layers for the optimum solution that satisfies the geometric tolerance is twenty.

Figure 3.12 Constrained layer distributions: (a) initial layer setup; (b) compensated layer setup; and (c) optimized layer setup.

After the initial layer setup, compensation takes place, reducing the difference between the maximum and minimum error and resulting in the layer setup shown in Fig. 3.12b. However, as clearly shown in Fig. 3.13b, the layer error is largest at the area of highest curvature and smallest at the area of lowest curvature, and even after optimization this behavior persists, as shown in Fig. 3.13c. The distribution is caused by the layer thicknesses being bounded: where a layer larger than the maximum allowable thickness should be used, the error is small because the layer cannot grow to a size where the layer error would increase; where a layer smaller than the minimum allowable thickness should be used, the error is large because the layer cannot shrink to a size where the layer error would decrease. Due to the layer constraints, a uniform layer error cannot result.
The optimum layer setup solution for the given layer thickness constraints is shown in Fig. 3.12c. Again, compensation alone brings the layer setup relatively close to the final solution.

Figure 3.13 Constrained layer error: (a) initial layer setup; (b) compensated layer setup; and (c) optimized layer setup.

As shown by the results of the three layer thickness and position determination steps, the proposed method's capability to find the optimum layer setup for a given number of layers is validated, and the first two steps' ability to determine a good initial condition for the optimization step is confirmed. The optimization routine achieved a uniform layer error distribution for the unconstrained case, which further solidifies the ability of the proposed method and the quality of the constrained layer setup result.

Most significantly, the proposed method increases the efficiency of the LM process. Table 3.1 shows the number of layers necessary to satisfy the geometric tolerance for the uniform layer thickness case, the variable layer thickness constrained case, and the variable layer thickness unconstrained case. Forty layers are necessary for most current LM methods to construct the part, while the proposed method uses only twenty layers constrained and merely eighteen unconstrained. Realistically, only the constrained result should be compared, since the unconstrained eighteen layers are possible only when all machine constraints are ignored. The resulting build time for the constrained case is four times less than that of current LM uniform layer setup methods. Efficiency is greatly increased.
Table 3.1  Number of layers and error comparison

                        Uniform    Proposed Method    Proposed Method
                                   Constrained        Unconstrained
  Maximum Error (mm)    0.0278     0.0297             0.0291
  Number of layers      40         20                 18

3.5.2 Case 2: s-shaped axis symmetric object
The second case study applies the proposed method to an S-shaped profile, shown in Fig. 3.14, where both convex and concave features are present. As mentioned before, layer positioning at the concave region is crucial, and ignoring this fact can result in an oscillating relationship between the number of layers and the layer error. This is clearly shown in Fig. 3.15, where the error plot for the uniform layer case oscillates with respect to the number of layers. The overall trend does show a decrease in layer error, as expected; locally, however, increasing the number of layers can mean an increase in layer error. Thus, layer-by-layer jumps cannot be made with guaranteed results; large jumps in the number of layers are needed to guarantee a proper decrease in layer error, meaning excessive layers are used and an inefficient build results.

This oscillatory trend is caused by the layers at the concave region not being able to situate themselves at the optimum location. As the number of layers increases, the layers are situated differently with respect to the geometry, and at some specific numbers of layers a layer can land in a position where the maximum layer error is much reduced. However, when the number of layers is further increased, the layers again fall into a setup where the layer error at the concave region increases drastically.

Figure 3.14 S-curved profile.

With the proposed method, constrained, this issue does not appear until the number of layers comes close to the maximum allowed, that is, the maximum number of layers with which the machine can produce this particular part. The same problem then occurs because the layers can no longer shift around to find the optimum locations relative to the geometry.
Below a certain threshold this is still possible; thus, an increase in the number of layers still causes a decrease in layer error. Furthermore, the rest of the layers are also capable of situating themselves at better locations.

Figure 3.15 Layer error vs. number of layers (uniform, optimized, and optimized constrained).

If the concave region is removed, the oscillatory effect disappears, as shown in Fig. 3.16. The uniform layer thickness method still results in the highest error value for a given number of layers, the unconstrained case the lowest, and the constrained case in between. Layer position is thus extremely important at concave regions. Concave regions are features that exist on most parts, and the large error caused by current LM layer generation methods cannot be ignored. A method capable of resolving this issue is proposed, and the results are validated.

Figure 3.16 Layer error vs. number of layers with concave area removed (uniform and optimized constrained, both with the concave area removed, and optimized).

4. TOLERANCE-BASED LAYER SETUP OPTIMIZATION FOR NON-AXIS SYMMETRIC OBJECTS
Previously, a method to greatly increase efficiency and better guarantee accuracy for axis symmetric objects was proposed. However, the practicality of such a method alone is limited. Moreover, LM has the advantage of being very capable at producing complex geometries [3]; a method that fully utilizes this capability is therefore needed. The potential of the process should not be diminished by a lack of capability in the geometric processing phase.

4.1 Prelude
Given the promising results in the axis symmetric case, an attempt is made to extend the proposed method to non-axis symmetric geometry.
The assumptions that slicing the object yields a single contour per slice and that the surface geometry is smooth still stand; geometries with inner contours are likewise not considered in this work. In order to extend the axis symmetric method to non-axis symmetric objects, some modifications are necessary. These modifications should, however, be made such that the effectiveness of the method is retained, with similar results observed between the two cases. If the previous method could not be easily ported to the non-axis symmetric case, a completely different approach would be necessary; as shown in the following sections, however, the axis symmetric method is successfully extended to non-axis symmetric cases.

To apply the current method in three dimensions, the polished geometry must be readily emulated, and the deviation between the polished and CAD geometry must be straightforwardly calculable despite the extra dimension. An extended polished model emulation method is proposed in the subsequent subsections, together with a method to determine the layer deviation and a similar method to determine the optimal layer setup.

4.2 Methodology
Determination of the polished geometry becomes more difficult for non-axis symmetric objects, since a single 2D spline no longer fully defines the object being processed: 3D surfaces now exist in place of the 2D spline. In order to emulate the polished model in 3D space, defining points used to create the polished model surface have to be extracted from the surfaces of the CAD object. However, this is not easily achieved due to the ambiguity resulting from the interaction of layers. As mentioned for the axis symmetric case, ambiguities exist in the polished model's defining point extraction at the concave region, the convex region, and the start and end regions; those ambiguities are caused by the lack of information from the layer model in the build direction only.
In the non-axis symmetric case, ambiguity in defining point extraction can exist both in the build direction and in the two directions orthogonal to it. This additional ambiguity is different from that in the build direction; rather, it compounds the uncertainty of the build direction ambiguity. To better understand it, a quick look back at the defining point extraction uncertainty in the axis symmetric case is necessary. Of the four stated classes of regions for defining point extraction, three need interpolation: the convex region, the concave region, and the start and end regions, as shown in Fig. 4.1. In these three regions, interpolation is performed to find the best possible defining points, and a method to extract these points consistently is proposed in section 3.3. Such interpolation is necessary in the present case as well, since the layer model lacks topological information. In order to interpolate and extract these defining points consistently, information resolved from the other regions of the geometry is needed.

Figure 4.1  Regions needing interpolation during polished model emulation.

Excluding the ambiguous regions leaves only the distinct monotone regions. For both the axis and non-axis symmetric cases, the defining points found in these regions are the exact intersection points of the CAD and layer models. This is easily seen in Fig. 4.1 for the axis symmetric case and Fig. 4.2 for the non-axis symmetric case: apart from the layers containing the concave, convex, and start/end geometry, the defining points lie exactly on the CAD model. By utilizing geometric information extracted from these regions, the defining points in the ambiguous regions can be best approximated.

Figure 4.2  Defining point extraction at distinct monotone regions for non-axis symmetric objects.
For the non-axis symmetric case, however, one extra ambiguity arises that makes this more difficult. For a single planar profile, geometric information from the layers directly adjacent to the layer in need of interpolation is used to extract the defining point. This works because the planar profile constrains the layer information to lie in one plane, allowing unique neighboring defining points to exist. Conversely, in the non-axis symmetric case, the layer information needed to interpolate a defining point in an ambiguous layer can come from any defining point in the immediately adjacent layers, as shown in Fig. 4.3; unique adjacent points for interpolation do not exist. Thus, depending on the combination of adjacent geometric information used, different defining points are extracted in the ambiguous regions. Even so, this does not pose a problem as long as the connections made to the adjacent layers are consistent and distinct, allowing the extracted ambiguous points to be consistent as well. In layman's terms, it is much like drawing curves on an object: as long as the curves are drawn consistently and densely, without intersecting each other, the geometric information of the object is very well approximated by the set of curves. Therefore, a constraint similar to the plane in the axis symmetric case is necessary, whereby a specific defining point and its layer information are uniquely associated with the adjacent defining points.

Figure 4.3  Ambiguity in vertical profile extraction.

In order to apply the above concept, a method to solve for unique connectivity information in the build direction between defining layer points is necessary. Since the layer model is created from the CAD geometry and the defining points are extracted from the layer model, the CAD model is the source of geometric information.
To find such build direction connectivity information, a curve-based model is proposed: the original CAD model is converted into a set of 3D curves constructed in the build direction, as shown in Fig. 4.4. Given the layer thicknesses used to slice the object, each curve can be individually processed to determine its curve-based layer model; the complete layer model is the combination of all the individual layer model curves. The staired curves are referred to as layer model curves in this research. Once the curve-based layer model is extracted, the polished model is determined by processing each individual layer model curve, and the found polished model curves combine to form the complete polished model. Since each curve of the curve-based model is associated with its own layer model curve and polished model curve, the deviation between the emulated polished model and the original CAD geometry is readily determined. The maximum deviation for each layer is calculated, and thus controlled, in order to optimize the layer setup for a given object.

Figure 4.4  Curve-based model.

The same fitting constraints imposed in the 2D curve case also apply to the 3D curves: the emulated polished model has to be fully contained by the layer model, and oscillation in fitting must be avoided. Oscillation is quite easily mitigated by a careful choice of fitting method; full containment, or lack of overshoot, is much more difficult to handle in 3D space, since depending on the method used to fit the 3D curves, overshoot can occur in various directions and the fitted curve is no longer constrained to a plane. One important realization is that the overshoot constraint is imposed along the surface normal, which implies no overshoot in the surface normal direction.
Furthermore, surface error is found by calculating the deviation of the polished model from the CAD model in the direction normal to the CAD model surface, which entails that the emulated polished geometry of interest is the portion that intersects the normals of the CAD surface. By this reasoning, the emulated polished geometry of interest is essentially made up of fitted curves that always lie along the normals of the CAD curves. Thus, a method is proposed to determine the emulated polished model by mapping the 3D curves of the curve-based model into 2D space using surface normal information. Once mapped, the CAD curve becomes planar with all associated normal vectors aligned. To reiterate an important point made before: as long as the curves of the curve model are generated consistently and densely without intersecting one another, the geometric information of the computer design model is captured appropriately, and the polished model is determined on this basis as well. Such a one-to-one mapping enables the curve-based layer model and the curve-based emulated polished model to be determined in 2D space instead of 3D space, which simplifies the problem and allows the method proposed in Section 3 to be readily applied. Furthermore, only the necessary calculations are made to determine the final optimum layer setup, and error-prone 3D surface construction is avoided [42]. Once the deviation between the final polished geometry and the original CAD model can be determined, the previously proposed compensation and optimization method to minimize layer error is used. A linear relationship between layer size and layer error is assumed, and the maximum error for each layer is used for layer size adjustment.

4.3 Procedure

In order to validate the above proposed method, the curve-based model has to be calculated first. Each curve, along with its associated layer thickness, is mapped from 3D space to 2D space.
The emulated polished model is determined, and the deviation is calculated and controlled.

4.3.1 Curve-based model

To extract the curve-based model from the original CAD geometry, the CAD model is densely sampled in the build direction. Depending on the accuracy needed, a chosen densely spaced finite number of slice planes are intersected with the computer design model and the corresponding intersection contours are found. Every planar contour extracted contains an equal number of points in order to facilitate point matching between the layers when creating the curve-based model. To match the number of points per contour, the inter-point distance d for a given contour is given by

d = L / N    (4.1)

where L is the perimeter length of the contour and N is the number of points per contour. With an equal number of points per layer, the points are distinctly connected in the build direction. The error associated with this sampling process is not studied in this work; to minimize sampling error, a very high sampling rate is used. Before connecting the points vertically to create the curve-based model, point matching between layers is necessary. Since an equal number of points per layer is used and the sampled contour points for each layer are arranged sequentially in the same direction, matching the first point automatically matches all other points to create the curve model. Exactly which point matches which is not of high importance, as long as the matching is consistent throughout all contours and is done consecutively; this holds because the sampling density is very high and the subsequent mapping step utilizes normal vectors. As the sampling density approaches infinity, the curve model approaches the CAD model. To match the first point for a smooth one-contour-per-layer object, a centroid method is used. For any given contour, the centroid is determined and the origin shifted to the centroid location.
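The equal-arc-length resampling implied by Eqn. 4.1 can be sketched as follows. This is a minimal illustration, not the thesis implementation; the function name and the use of linear interpolation between sampled contour vertices are assumptions.

```python
import numpy as np

def resample_contour(points, n):
    """Resample a closed planar contour to n equally spaced points.

    Implements the spirit of Eqn. 4.1: the inter-point spacing is
    d = L / n, where L is the contour perimeter.
    """
    pts = np.asarray(points, dtype=float)
    closed = np.vstack([pts, pts[:1]])               # close the loop
    seg = np.diff(closed, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    s = np.concatenate([[0.0], np.cumsum(seg_len)])  # cumulative arc length
    d = s[-1] / n                                    # Eqn. 4.1
    targets = np.arange(n) * d
    x = np.interp(targets, s, closed[:, 0])
    y = np.interp(targets, s, closed[:, 1])
    return np.column_stack([x, y])

# A unit square (perimeter 4) resampled to 8 points spaced 0.5 apart.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
pts = resample_contour(square, 8)
```

Using the same point count for every contour, as above, is what makes the subsequent layer-to-layer point matching a simple index correspondence.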
If the centroid is located within the contour, the first intersection between the contour and the positive direction of the chosen axis is found, as shown in Fig. 4.5a. If the centroid falls outside the contour, the first intersection between the contour and the negative direction of the chosen axis is found, as shown in Fig. 4.5b. This intersection point is found for all contours and connected to create the first curve. The other points match automatically, creating the curve-based model ready for mapping into 2D space.

Figure 4.5  Initial point determination for contour point matching: (a) centroid inside the contour; (b) centroid outside the contour.

4.3.2 Mapping

Once the curve model is found, the curves need to be mapped into 2D space. A numerical approach is taken to achieve this mapping. The initial method of determining the curve-based model results in a finite number of build-direction curves which define the object. Each of these curves is a piece-wise approximation of the original CAD surface. During construction of these piece-wise curves, the normal information at any given point is retained. By utilizing the available normal information and applying an amount of rotation derived from the normal vectors to each curve segment, the 3D curve can be mapped into 2D; the 3D curve is straightened into a planar curve. The mapping is achieved with a shifting origin. For rotating the first segment, the origin is set at the first point of the curve and the normal vector is aligned with the x-axis, as shown in Fig. 4.6. The position vectors of the points defining the curve are updated with respect to the new origin. The angle between the normal vector and the position vector is locked. Furthermore, the relative position and angle of the points after the second are kept constant with respect to the second point. A two-step transformation is applied to align the normal vector. According to the coordinate system shown in Fig.
4.6, a rotation about the x-axis is first applied to align the position vector with the xz-plane. The angle between the second point's normal vector and the xz-plane is determined with respect to its position vector, and a rotation through the calculated angle is applied to align the normal vector. Since the positions of the remaining points are locked with respect to the second point, the curve after the second point experiences the same rotation. The transformed segments are shown in Fig. 4.7a. Once the rotation is complete, the origin shifts to the second point, aligns with the normal vector, and the process repeats (Fig. 4.7b). Points that have already been processed are not affected by further transformations. When the transformation is complete at the last point, the 3D piece-wise curve has been mapped to 2D space. Figure 4.8 illustrates the mapped curve and the original curve.

Figure 4.6  Shifting origin.

Figure 4.7  Curve mapping: (a) alignment of normal for second point; and (b) shifting of origin to facilitate next transformation.

Figure 4.8  Original and mapped curves.

Once the curve is mapped into 2D space, the layer setup used to create the layer geometry also needs to be mapped in order to determine the appropriate layer model in 2D space, which facilitates emulated polished curve determination. To map the layer setup into 2D space, the segment of the 3D curve that falls within a given layer in 3D space is marked. After mapping, the marking allows the corresponding layer size in the mapped 2D space to be determined, as shown in Fig. 4.9.

Figure 4.9  Mapped layer thickness.

After mapping the complete set of 3D curves into 2D space, the layer model pertaining to each curve is derived and the subsequent emulated polished model for each individual curve is calculated.
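The segment-by-segment flattening above can be illustrated with a simplified sketch. The thesis method drives the rotations from surface normals; the version below is a reduced stand-in that preserves only segment lengths and the unsigned bending angle between consecutive segments, which is enough to show how a 3D polyline is developed into a plane while retaining dimensional accuracy along the curve.

```python
import numpy as np

def develop_curve(points3d):
    """Flatten a 3D polyline into the plane (simplified sketch of the
    Section 4.3.2 mapping): segment lengths and the bending angle between
    consecutive segments are preserved, so deviations measured on the
    planar curve approximate deviations on the 3D curve.

    Note: the thesis uses normal vectors to fix the sign of each rotation;
    here all turns are taken with the same (positive) sign.
    """
    p = np.asarray(points3d, dtype=float)
    seg = np.diff(p, axis=0)
    lengths = np.linalg.norm(seg, axis=1)
    # unsigned turning angle between consecutive 3D segments
    angles = []
    for a, b in zip(seg[:-1], seg[1:]):
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.arccos(np.clip(c, -1.0, 1.0)))
    # rebuild in 2D: start along +x, then turn by each angle in sequence
    out = [np.zeros(2)]
    heading = 0.0
    for i, length in enumerate(lengths):
        if i > 0:
            heading += angles[i - 1]
        step = length * np.array([np.cos(heading), np.sin(heading)])
        out.append(out[-1] + step)
    return np.array(out)
```

Because each segment keeps its 3D length, the total arc length of the planar curve equals that of the original curve, which is the property that lets the mapped layer thickness stand in for the true build-direction extent.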
The emulated polished model is found by the axis-symmetric method proposed in Section 3.3, where the defining points are first extracted from the layer model and a monotone cubic Hermite spline [41] is fitted to the defining points. Once completed, the deviation between the emulated polished curves and the mapped CAD curves is determined. One thing to note is that after mapping, the dimension in the build direction is no longer equal from curve to curve as it was before mapping. Depending on the surface normal, each curve is transformed differently; thus, the mapped layer thickness values are also no longer equal from curve to curve. The change in length is expected, since each curve is extracted from a different region of the object where dissimilar features are present. Such a lengthening effect in the build direction is a natural outcome of the proposed mapping method's attempt to retain dimensional accuracy. This allows the error calculated in mapped space to be a good approximation of the error in 3D space. The mapping method also retains all transformation information so that the curves can be mapped back.

4.3.3 Layer setup determination

With the capability to determine the maximum layer error at any given layer, an approach similar to that of the axis-symmetric method is taken to determine the optimal layer setup, where compensation and optimization are used within a nested number-of-layers marching loop. As mentioned in Section 3.2, if the maximum layer error is minimized by adjusting both the position and the thickness of the layers for different number-of-layers solutions, an increase in the number of layers results in a decrease of the part error. Thus, to determine the optimum solution, the scheme first focuses on optimizing layer thickness and position for any given number of layers.
After the optimal solution, or a means that minimizes the maximum layer error for any given number of layers, can be determined, the method is nested inside a number-of-layers marching loop where the number of layers is increased until the layer error constraint is satisfied.

4.3.3.1 Layer thickness and position

For any given number of layers, a rough initial layer setup is determined based on curvature. Since the object of interest is non-axis-symmetric, a combination of the maximum curvature values over all curves in the curve-based model is used. The curvature values determined from each of the curves are superimposed and the maximum curvature curve is found, as shown in Fig. 4.10 (only the curvature of two curves is shown for illustrative purposes; in the actual calculation, all curves in the curve-based model are used). The maximum curvature curve is integrated with respect to the build-direction dimension. The total is divided by the current number of layers, yielding a constant area value of curvature vs. build-direction dimension. Layers are then generated sequentially using this value: each layer thickness grows until the constant calculated area is reached, and the next layer begins. This repeats until the maximum build height is reached, and the initial layer setup is found.

Figure 4.10  Max curvature from curve-based model.

Once the initial layer setup is found, compensation is used. A linear relationship between layer thickness and layer error is assumed. The compensation method adjusts each layer based on the corresponding calculated layer error and the mean layer error: if the error at a particular layer is larger than the current mean error, the layer thickness is decreased, and if it is smaller than the mean, the layer thickness is increased.
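The two steps just described, equal-curvature-area initial layer placement and a single compensation adjustment, can be sketched as follows. The exact compensation equations (Eqns. 3.1-3.3) are defined in Section 3 and not reproduced here, so the adjustment formula below is a hypothetical form consistent with the description; function and variable names are likewise assumptions.

```python
import numpy as np

def initial_layer_bounds(z, kappa_max, n_layers):
    """Equal-curvature-area initial layer setup (sketch of Section 4.3.3.1).

    z: sample heights along the build direction.
    kappa_max: pointwise maximum curvature over all model curves at each z.
    Layer boundaries are placed so each layer spans an equal share of the
    integral of curvature with respect to build height.
    """
    z = np.asarray(z, dtype=float)
    k = np.asarray(kappa_max, dtype=float)
    # cumulative integral of curvature vs. height (trapezoid rule)
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(z))])
    targets = np.linspace(0.0, cum[-1], n_layers + 1)
    # invert the cumulative curve to get layer boundary heights
    return np.interp(targets, cum, z)

def compensate(thickness, layer_error, gain):
    """One compensation step: layers whose maximum error exceeds the mean
    error are thinned, layers below the mean are thickened (assuming a
    linear error/thickness relation), and total build height is preserved.
    """
    t = np.asarray(thickness, dtype=float)
    e = np.asarray(layer_error, dtype=float)
    mean_e = e.mean()
    t_new = t * (1.0 + (mean_e - e) / (gain * mean_e))
    return t_new * t.sum() / t_new.sum()   # rescale to original height
```

With a constant curvature profile the initial bounds degenerate to uniform spacing, which matches the intuition that curvature-weighted placement only redistributes layers where the geometry demands it.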
For the axis-symmetric scheme, the maximum deviation within each layer is used to facilitate compensation. The same holds for the non-axis-symmetric case, with the only difference being that each layer is defined by multiple curve segments rather than one; the maximum deviation among these curve segments is found for each layer and used for compensation. The equations defining the compensation method are the same as those for the 2D case; thus, Eqns. 3.1-3.3 are used. After compensation, optimization is performed using the objective function given by Eqn. 3.4, and the maximum error is minimized. However, it was later discovered that the compensation method alone is very capable of acquiring an answer similar to that of the optimization method if the constant in Eqn. 3.2 is increased after a certain number of iterations. Initially, for the axis-symmetric case, this constant is set at 10 in order to determine the initial conditions for subsequent optimization. By increasing it by an order of magnitude after every set of iterations, the resulting layer error is quickly minimized and the solution approaches that of the optimization method. The compensation-only method is tested and compared against the optimization method in the case studies. For the constrained case, where the maximum and minimum allowed layer thicknesses are bounded, the same method applies except that the layer size is bounded: for initial layer setup determination, the layer size is not allowed to exceed the maximum when growing sequentially, nor to fall below the minimum; if it exceeds the maximum, the maximum allowed layer size is used, and if it falls below the minimum, the minimum layer size is used. The same layer thickness constraints apply to the compensation method, and bounded optimization is then performed.
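The compensation-only schedule described above, where the adjustment constant starts at 10 and grows by an order of magnitude after every set of iterations, can be schematized as below. The `error_fn` callback and all names are hypothetical stand-ins; in the thesis, the per-layer maximum error comes from the emulated polished curves, and the adjustment follows Eqns. 3.1-3.3.

```python
import numpy as np

def compensation_only(thickness, error_fn, sets=3, iters_per_set=20, gain0=10.0):
    """Compensation-only solver sketch.

    thickness: initial layer thickness vector.
    error_fn:  hypothetical callback mapping a thickness vector to the
               per-layer maximum errors (linear error/thickness assumed).
    The gain divides the adjustment, so increasing it tenfold per set
    damps the step size and lets the layer errors settle toward uniform.
    """
    t = np.asarray(thickness, dtype=float)
    total = t.sum()
    gain = gain0
    for _ in range(sets):
        for _ in range(iters_per_set):
            e = np.asarray(error_fn(t), dtype=float)
            mean_e = e.mean()
            t = t * (1.0 + (mean_e - e) / (gain * mean_e))
            t *= total / t.sum()      # preserve total build height
        gain *= 10.0                  # order-of-magnitude escalation
    return t
```

For an unconstrained setup the fixed point of this iteration is a uniform layer error distribution, which is exactly the behavior reported for Case 1.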
The compensation method is also found to be very capable in the constrained case, as shown in the case studies section, and can find an equal or better solution in less computational time.

4.3.3.2 Number of layers

To determine the initial guess for the number of layers, in order to reduce computational load, exactly the same approach as in the axis-symmetric case is taken. The mean layer error of the initial layer setup is close to that of the final layer setup; thus, a marching method utilizing the mean layer error determined from the initial layer setup is used. Such a scheme can find a solution within ten percent of the final answer; for details, refer back to Section 3.4.2. The same marching method proposed in Section 3.4.2 is used to determine the number of layers necessary to satisfy the user-specified tolerance.

4.4 Case studies

The proposed method is implemented on the following case studies in order to validate the scheme and to test its capabilities. The first case study aims to show, in a simple 2D sense, that when no concave feature is present, the compensation method alone achieves a result equal to that of optimization in a much reduced calculation time. The second case study aims to validate the capability of the non-axis-symmetric method with mapping; furthermore, the number of layers determined through the proposed method is compared with the current industry uniform-layer method in order to analyze the method's efficiency. The third case study illustrates the importance of layer position relative to geometry and how the proposed method better obtains optimal results. For Cases 1 and 2, the error constraint is set at 0.015 mm; for the constrained case, the minimum allowed layer thickness is 0.3 mm and the maximum is 1 mm.
For Case 3, the minimum allowed layer thickness is 0.4 mm and the maximum is 1 mm. The test cases are implemented in Matlab 2007b on an ASUS G51j with an Intel® i7 1.6-2.85 GHz processor, an Nvidia® GeForce® GTX 260M 1 GB graphics card, and 4 GB of RAM. For the sake of clarity, the compensation-only method is referred to as the compensation method for the rest of the thesis, and the compensation-and-optimization method is referred to as the optimization method.

4.4.1 Case 1: compensation vs. optimization

Before testing the proposed non-axis-symmetric method, the capability of the compensation method needs to be validated, since it shows promise in determining similar solutions in much reduced calculation time. By using a simple case for which the optimization is guaranteed to find the appropriate answer, a proper comparison can be made. Thus, the test is done on a 2D spline without any concave features, as shown in Fig. 4.11. For this profile, if the compensation method is capable, the same or a very similar layer error distribution will result in a much reduced computational time.

Figure 4.11  Axis symmetric test object.

The resulting layer error distributions with unconstrained layer thickness are shown in Fig. 4.12a for the compensation-only method and in Fig. 4.12b for the optimization method. Since the layer size is not bounded, the resulting layer error distribution should be uniform, and as validated by the results in Fig. 4.12, this is the case: the results agree to the fifth decimal place. The constant in the compensation equation is increased from 10 to 1000 by an order of magnitude per step, and the optimization tolerance is set to 10^-7. Furthermore, as seen in columns three and four of Tab.
4.1, the calculation time necessary for the compensation-only method is much lower than that of the optimization method, proving the compensation method to be very capable for the unconstrained case.

Figure 4.12  Layer error distribution of case 1: (a) compensation only; and (b) optimization.

Table 4.1  Maximum layer error and corresponding computation time for case 1

                       Compensation   Optimized     Compensation    Optimized
                       Constrained    Constrained   Unconstrained   Unconstrained
Maximum Error (mm)     0.01026        0.01172       0.01298         0.01298
Calculation Time (s)   162.324        987.289       324.647         733.507

Looking at the constrained columns in Tab. 4.1, the compensation method is also shown to be capable of finding a good answer when layers are constrained; in this case, a smaller maximum error is found in much less time. Thus, for the constrained case, the compensation-only method achieves a better minimized maximum layer error in much less calculation time than the optimization method.

4.4.2 Case 2: non-axis symmetric object with elliptical contour

The second case study moves completely into 3D space, as this is the main challenge and goal of this part of the research. This case study is designed to test all aspects of the proposed non-axis-symmetric method and to validate its capability. The object being sliced is shown in Fig. 4.13. First, the model is converted to a curve-based model, with the initial layer setup estimated from the combined maximum curvature. With the initial layer setup found, the curves are mapped into 2D space. Fig. 4.14 shows one of the original curves in blue and the corresponding mapped 2D curve in red; the mapped curve is validated to be planar. The layer thickness is also mapped.

Figure 4.13  Non-axis symmetric test object.

Figure 4.14  Original and mapped curves for one single vertical profile.

The emulated polished model is then determined and the layer error calculated.
The maximum error for each layer is calculated and used for layer size compensation. Tab. 4.2 shows the final result necessary to satisfy the tolerance of 0.015 mm. The number of layers is much reduced for both the constrained and unconstrained cases compared to the current industry uniform method; build time is thus much reduced and efficiency increased.

Table 4.2  Maximum layer error and corresponding number of layers for case 2

                      Uniform   Compensation   Compensation
                                Constrained    Unconstrained
Maximum Error (mm)    0.01109   0.01371        0.01387
Number of layers      38        20             18

However, the results shown in Tab. 4.2 are those acquired by the compensation-only method, because the optimization was incapable of finding the appropriate answer. As mentioned previously, for the unconstrained case the resulting layer error distribution should be uniform, yet the optimization method used, Active Set [43], a gradient-based method, was not able to converge. Given the same number of layers, the resulting layer error distributions for the compensation-only method and the optimization method are shown in Fig. 4.15a and Fig. 4.15b, respectively. Because the optimization is unable to converge for the non-axis-symmetric case, it is quite possible that, due to the mapping and the interaction of curves, the shape of the objective function is irregular; when the objective function is not continuous or differentiable, a gradient method cannot find the minimum. After the gradient optimization method was found incapable, an optimization algorithm that does not rely on gradients, Pattern Search [44], was used. However, the results, shown in Fig. 4.15c, are still quite scattered, indicating that the objective function is irregular throughout. The maximum layer error found through these two optimization methods for both the constrained and the unconstrained cases is compared with the proposed compensation method in Tab. 4.3.
This indicates that the objective function for optimization does not possess ideal behavior, which leads to difficulty in minimization. The compensation technique is immune to undesirably shaped objective functions, since it completely avoids the need for an objective function; a much more reliable and consistent answer can thus be found for the non-axis-symmetric case.

Figure 4.15  Unconstrained layer error distribution of: (a) compensation only method; (b) optimized method (Active Set); and (c) optimized method (Pattern Search).

Table 4.3  Maximum layer error and corresponding computation time for case 2

                       Compensation    Active Set      Pattern Search
                       Unconstrained   Unconstrained   Unconstrained
Maximum Error (mm)     0.01387         0.01523         0.01509
Calculation Time (s)   972.136         1055.743        3514.33

                       Compensation    Active Set      Pattern Search
                       Constrained     Constrained     Constrained
Maximum Error (mm)     0.01371         0.01380         0.01491
Calculation Time (s)   486.068         569.675         2423.57

4.4.3 Case 3: non-axis symmetric object with concave contour

To further probe the shortcomings of the optimization method, and to verify that the proposed method ensures a monotonic relationship between layer error and number of layers much like the axis-symmetric case, case 3 is implemented with an object exhibiting a dominant concave feature. Fig. 4.16 shows the curve model, with the number of curves reduced for illustrative purposes. The number of layers is varied, and the resulting maximum layer error is found.

Figure 4.16  Curve-based model of non-axis symmetric test case.

First, the layer error to number of layers relationship is verified. As in the axis-symmetric case, the maximum part error of LM parts for objects with concave regions is dominated by these regions, since extraction of defining points there is the most difficult due to the CAD model not intersecting the layer model. Depending on the position of the layer relative to the geometry, large error can occur in the concave regions.
As shown in Fig. 4.17, the uniform method results in an oscillatory effect similar to that of the axis-symmetric case, which validates that for non-axis-symmetric objects, uniform layer setup still results in an undesirable layer error to number of layers trend, rendering the process very inefficient. A simple increase in the number of layers does not necessarily decrease the layer error; a large increase in the number of layers is necessary to guarantee a decrease. Conversely, the compensation unconstrained case shows a completely monotonic relationship between layer error and number of layers. This relationship allows very efficient layer setups to be solved for: a single solution exists for a user-specified part error tolerance.

Figure 4.17  Layer error with respect to number of layers (uniform, compensated, and compensated constrained).

For the constrained case, however, the result is only monotonic up to a certain number of layers, after which oscillation begins. This oscillatory trend, in both the uniform layer setup and the constrained layer setup, is caused by the layers at the concave region not being able to sit at the optimum location. As the number of layers increases, layers are situated differently with respect to the geometry. In some instances, an appropriately sized layer is situated in a closer-to-ideal region, resulting in reduced layer error. In other instances, the appropriately sized layer is not situated at the location that yields minimum layer error, being constrained by its own size or by neighboring layers, and large error occurs. Furthermore, an inappropriately sized layer can also be situated at the optimal position, but due to the undesirable layer size, larger than desired error occurs.
Therefore, as long as constraints are imposed on layer sizes, an oscillatory effect occurs in the layer error to number of layers relationship, and larger than desirable layer error results. The wider the constraints, the later the oscillatory effect occurs with respect to the number of layers. Consequently, the belief that picking the minimum allowed layer size to build a part results in minimum part error is not accurate, as shown in Fig. 4.17. Even though the oscillatory effect is still present, the proposed method (constrained) determines much more reliable and desirable results when the effect of the layer constraint is minor. The relationship before oscillation is monotonic, and the minimum layer error achieved is smaller than that achievable by the uniform layer setup for any given number of layers. For this case study, an average of 25% layer saving is achieved, resulting in above 50% part construction time saving. To test the optimization method, the layer error to number of layers relationship using optimization for the constrained case is calculated; the result is shown in Fig. 4.18. Depending on the number of layers, the optimization method can sometimes find an answer similar to that of compensation, but at times calculates much larger layer error. This indicates that optimization is not robust for the proposed non-axis-symmetric method and cannot be depended on to give appropriate results.

Figure 4.18  Layer error vs. number of layers for constrained case with optimization method (optimized constrained, compensated unconstrained, and compensated constrained).

5. CONCLUSION

This research presents novel schemes to improve both the build efficiency and the part accuracy of layered manufacturing methods on a computational geometry level. To achieve this goal, artifacts created during layer generation are first removed.
An optimal layer setup determination method based on the final polished part geometry is then applied. The presented optimal layer setup scheme is first developed for simple axis-symmetric objects in order to simplify the problem at hand and to better validate the proposed idea; it is then modified for more complex geometries and again validated. The following subsections outline the contributions in the order each section is presented. Limitations are also summarized, along with future work.

5.1 Research contribution

Removal of the systematic distortion caused by improper layer geometry generation is accomplished by solving for the minimum circumscribed extruded layer geometry. This method of layer generation utilizes a volume-based approach where all features within a given layer are considered during layer contour generation. The biasing experienced with top-down and bottom-up slicing does not occur; thus, systematic distortion is eliminated. However, because methods to reliably extract the minimum circumscribed contour of a specific segment of a CAD geometry are not readily available, a new marching algorithm, based on the assumption of minor curvature variation, is presented to generate such a boundary contour. The new boundary generation method is adaptive in nature and generates boundary contours reliably. Once systematic distortion is removed, a tolerance-based method to effectively generate optimum layer geometry is presented for axis-symmetric objects. Since all end-user LM parts experience surface smoothing during the post-processing phase, the resulting part error is the deviation between the final smoothed geometry and the initial computer design geometry. Thus, in order to facilitate error determination, a scheme to emulate the final smooth geometry is proposed, together with a consistent scheme to extract model defining points from the layer model.
Such a final model emulation scheme allows not only the prediction of the resulting part error but also the analysis of finished model features and geometric elements. With the ability to predict the finished model, the tolerance-based layer setup generation method, based on the deviation between the original CAD model and the polished final geometry, is presented for axis-symmetric parts. Because of the simultaneous existence of the layer model and the polished model, an iterative marching method with a nested compensation and optimization technique is presented. Such a scheme takes a more global approach to finding optimal layer thickness and position, thus finding the more effective, time-saving layer setup necessary to build a part that satisfies a user-specified geometric tolerance constraint. Furthermore, the proposed method is capable of positioning layers at optimum positions relative to the geometry for a given error, allowing the layer model to better describe the CAD geometry being built and thus reducing geometric error. It is clearly shown that employing the minimum allowed layer thickness does not result in minimum part deviation, due to layer positions, and vice versa; a straightforward reduction in layer thickness does not necessarily reduce the error. For non-axis-symmetric objects, one planar profile no longer allows full geometric information extraction; multiple vertical profiles are necessary to fully capture the model geometry. A scheme to determine the curve-based model relying on vertical point matching is presented. A new one-to-one mapping method utilizing surface normals is implemented to transform the curve-based model into a set of planar curves to better facilitate the emulation of the final polished geometry. The layer setup is also mapped. The final smoothed model is then approximated with the set of mapped planar curves. A method to predict the polished geometry for the non-axis-symmetric case is thus presented.
Once mapped to planar curves, the optimal layer setup is determined using the method proposed for the axis-symmetric case: the compensation method uses the maximum layer error to adjust the layer setup, and optimization then fine-tunes the result. However, during validation of the proposed method on non-axis-symmetric objects, the optimization method often does not converge: undesirably large error is found and a monotone relationship between number of layers and layer error cannot be established. Conversely, the presented compensation method proves more capable than initially believed. A result similar to that calculated by optimization is found if the amount of adjustment imposed on the layers is reduced by an order of magnitude after a given number of iterations. Computation time is greatly reduced, but more importantly, the desired result is achieved. This is validated by testing the compensation method on axis-symmetric objects, where the optimization method is able to acquire the desired answer robustly. The compensation method is capable in the non-axis-symmetric case because it does not rely on an objective function. The initial optimization algorithm implemented, Active Set [43], uses gradient information to find the minimum; the inability of such a scheme to converge for the non-axis-symmetric case implies that the objective function is noisy, non-differentiable, or discontinuous. An optimization method not dependent on gradients, Pattern Search [44], was implemented with similarly poor results. This implies the objective function does not behave in a manner where optimization can easily be applied, an issue most likely caused by the interaction of the mapped planar curves: the layer error values are no longer calculated from one single design curve and its corresponding polished curve as in the axis-symmetric case. Compensation is thus a method to reliably determine the optimal layer setup for both non-axis-symmetric and axis-symmetric objects.
Time is saved and precision is preserved, while robustness is much better guaranteed. With optimal layer setup schemes now available, a quick comparison illustrates the significance of eliminating systematic distortion. The error caused by systematic distortion is of the same order of magnitude as the layer thickness used, whereas, depending on the constraint, the error caused by the proposed smoothing method or final-part prediction method ranges from roughly one to two orders of magnitude smaller than the layer size. Without the removal of systematic distortion, the layer setup would therefore be largely dictated by distortion error, and the resulting number of layers would be very high, increasing by up to an order of magnitude or more. Due to the exponential nature of LM build time with respect to the number of layers, this translates into an increase of two to three orders of magnitude in build time. Conversely, the proposed scheme shortens LM build time very significantly.

5.2 Limitations and future work

The proposed methods can only be accurately applied to smooth objects, i.e., objects that exhibit no tangential discontinuity; when applied to non-smooth objects, unwanted smoothing occurs at the locations of tangential discontinuity. For the contour generation method, this is caused by the assumption of minor curvature variation; for the layer setup determination method, it is caused by emulating the final model with splines. Furthermore, objects that produce multiple contours per layer, or internal contours, when sliced cannot be processed, as contour generation and vertical profile extraction are currently not possible for such objects. Error caused by sampling is also not examined; a very high sampling rate is used to minimize sampling error, which makes it possible to proceed with the contour generation method and the proposed mapping scheme.
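As a rough screen for such tangential discontinuities in a sampled profile, one could threshold the turning angle between successive segments. The sketch below is illustrative only — the function name and angle tolerance are assumptions, and the thesis itself leaves this detection to future work.

```python
import numpy as np

def find_tangent_discontinuities(pts, angle_tol_deg=20.0):
    """Return indices of sample points where the turning angle between
    the incoming and outgoing segments exceeds the tolerance, flagging
    likely tangential discontinuities (corners) in the profile."""
    pts = np.asarray(pts, dtype=float)
    v = np.diff(pts, axis=0)                          # segment vectors
    v /= np.linalg.norm(v, axis=1, keepdims=True)     # unit tangents
    cosang = np.sum(v[:-1] * v[1:], axis=1)           # successive dot products
    ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return [int(i) + 1 for i in np.nonzero(ang > angle_tol_deg)[0]]
```

A finely sampled smooth arc yields no flags, while a right-angle corner is flagged at its vertex; the flagged points could then seed the segmentation discussed below.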
Specifically, the proposed mapping method can suffer from large stacking error if the sampling density is insufficient, since the operation is performed sequentially. A method to detect points of tangential discontinuity is therefore needed to better construct minimum circumscribing contour profiles, and segmentation is needed to allow the extraction of multiple or internal contours per layer. In the build direction, polished-model emulation needs to be improved with better information extraction from the layer model; feature detection based on the specific layer setup may be necessary to expand the class of objects to which the proposed method applies. The appropriate sampling density for both contour and vertical-profile generation should be determined to improve computational efficiency as well as contour generation and mapping accuracy; the relationship between mapping and contour generation error and sampling density should be quantified, and the effect of stacking error on mapping should also be examined.

Through the proposed schemes, systematic distortion in LM parts is eliminated, and the optimal layer setup with respect to the final smoothed geometry is found for a given user-specified tolerance. The number of layers is thus minimized and the build time significantly reduced, while accuracy is better guaranteed and controlled.

REFERENCES

[1] Chua, C. K., Leong, K. F., and Lim, C. S., 2003, Rapid Prototyping: Principles and Applications, 2nd ed., World Scientific, Singapore.
[2] Pandey, P. M., Reddy, N. V., and Dhande, S. G., 2003, “Slicing Procedures in Layered Manufacturing: A Review,” Rapid Prototyping Journal, 9(5), pp. 274-288.
[3] Dutta, D., Prinz, F. B., Rosen, D., and Weiss, L., 2001, “Layered Manufacturing: Current Status and Future Trends,” ASME Journal of Computing and Information Science in Engineering, 1(1), pp. 60-71.
[4] Yan, X., and Gu, P., 1996, “A Review of Rapid Prototyping Technologies and Systems,” Computer-Aided Design, 28(4), pp. 307-318.
[5] Kulkarni, P., and Dutta, D., 1996, “An Accurate Slicing Procedure for Layered Manufacturing,” Computer-Aided Design, 28(9), pp. 683-697.
[6] Chiu, Y. Y., and Liao, Y. S., 2001, “A New Slicing Procedure for Rapid Prototyping Systems,” International Journal of Advanced Manufacturing Technology, 18(8), pp. 579-585.
[7] Chiu, Y. Y., Liao, Y. S., and Lee, S. C., 2004, “Slicing Strategies to Obtain Accuracy of Feature Relation in Rapidly Prototyped Parts,” International Journal of Machine Tools and Manufacture, 44(7-8), pp. 797-806.
[8] Karapatis, N. P., Van Griethuysen, J. P. S., and Glardon, R., 1998, “Direct Rapid Tooling: A Review of Current Research,” Rapid Prototyping Journal, 4(2), pp. 77-89.
[9] Kruth, J.-P., Van der Schueren, B., Bonse, J. E., and Morren, B., 1996, “Basic Powder Metallurgical Aspects in Selective Metal Powder Sintering,” CIRP Annals – Manufacturing Technologies, 45(1), pp. 183-186.
[10] Lee, K. W., 1999, Principles of CAD/CAM/CAE Systems, Addison Wesley, Massachusetts, pp. 378-431.
[11] Kumar, S., 2003, “Selective Laser Sintering: A Qualitative and Objective Approach,” JOM Journal of the Minerals, Metals and Materials Society, 55(10), pp. 43-47.
[12] Sachs, E., Cima, M., Cornie, J., and Suh, N. P., 1990, “Three Dimensional Printing: Rapid Tooling and Prototypes Directly From A CAD Model,” CIRP Annals – Manufacturing Technologies, 39(1), pp. 201-204.
[13] Galantucci, L. M., Lavecchia, F., and Percoco, G., 2009, “Experimental Study Aiming to Enhance the Surface Finish of Fused Deposition Modeled Parts,” Manufacturing Technology, 58(1), pp. 189-192.
[14] Ahn, D., Kim, H., and Lee, S., 2009, “Surface Roughness Prediction Using Measured Data and Interpolation in Layered Manufacturing,” Journal of Materials Processing Technology, 209(2), pp. 664-671.
[15] Ippolito, R., Iuliano, L.,
and Gatto, A., 1995, “Benchmarking of Rapid Prototyping Techniques in Terms of Dimensional Accuracy and Surface Finish,” Annals of the CIRP, 44(1), pp. 157-160.
[16] Ahn, D. K., Kim, H. C., and Lee, S. H., 2007, “Fabrication Direction Optimization to Minimize Post-Machining in Layered Manufacturing,” International Journal of Machine Tools and Manufacture, 47(3-4), pp. 593-606.
[17] Kulkarni, P., and Dutta, D., 2000, “On the Integration of Layered Manufacturing and Material Removal Process,” International Journal of Machine Science and Engineering, 122(1), pp. 100-108.
[18] Tukuru, N., Gowda, K. P. S., Ahmed, S. M., and Badami, S., 2008, “Rapid Prototype Technique in Medical Field,” Research Journal of Pharmacy and Technology, 1(4), pp. 341-344.
[19] Kechagias, J., Maropoulos, S., and Karagiannis, S., 2004, “Process Build-time Estimator Algorithm for Laminated Object Manufacturing,” 10(5), pp. 297-304.
[20] Wang, T., Xi, J., and Jin, Y., 2006, “A Model Research for Prototype Warp Deformation in the FDM Process,” International Journal of Advanced Manufacturing Technology, 33(11-12), pp. 1087-1096.
[21] Tong, K., Joshi, S., and Lehtihet, E. A., 2008, “Error Compensation for Fused Deposition Modeling (FDM) Machine by Correcting Slice Files,” Rapid Prototyping Journal, 14(1), pp. 4-14.
[22] Dolenc, A., and Makela, I., 1994, “Slicing Procedure for Layered Manufacturing Techniques,” Computer-Aided Design, 1(2), pp. 4-12.
[23] Sabourin, E., Houser, S. A., and Bohn, J. H., 1996, “Adaptive Slicing Using Stepwise Uniform Refinement,” Rapid Prototyping Journal, 2(4), pp. 20-26.
[24] Tyberg, J., and Bohn, J. H., 1998, “Local Adaptive Slicing,” Rapid Prototyping Journal, 4(3), pp. 118-127.
[25] Pandey, P. M., Reddy, N. V., and Dhande, S. G., 2003, “A Real Time Adaptive Slicing for Fused Deposition Modeling,” International Journal of Machine Tools and Manufacture, 43(1), pp. 61-71.
[26] Alexander, P., Allen, S., and Dutta, D., 1998, “Part Orientation and Build Cost Determination in Layered Manufacturing,” Computer-Aided Design, 30(5), pp. 343-356.
[27] Rattanawong, W., Masood, S. H., and Iovenitti, P., 2001, “A Volumetric Approach to Part-Build Orientations in Rapid Prototyping,” Journal of Materials Processing Technology, 119(1-3), pp. 348-353.
[28] Pandey, P. M., Thrimurthulu, K., and Reddy, N. V., 2004, “Optimal Part Deposition Orientation in FDM by Using a Multicriteria Genetic Algorithm,” International Journal of Production Research, 42(19), pp. 4069-4089.
[29] Qin, Z., Jia, J., Li, T. T., and Lu, J., 2007, “Extracting 2D Projection Contour from 3D Model Using Ring-Relationship-Based Method,” Information Technology Journal, 6(6), pp. 914-918.
[30] Edelsbrunner, H., Kirkpatrick, D. G., and Seidel, R., 1983, “On the Shape of A Set of Points in the Plane,” IEEE Transactions on Information Theory, 29(4), pp. 551-559.
[31] Delaunay, B., 1934, “Sur la sphère vide,” Bulletin of the Academy of Sciences of the USSR: Classe des Sciences Mathématiques et Naturelle, (7), pp. 793-800.
[32] Flory, S., 2009, “Fitting Curves and Surfaces to Point Clouds in the Presence of Obstacles,” Computer Aided Geometric Design, 26(2), pp. 192-202.
[33] Yang, H., Wang, W., and Sun, J., 2004, “Control point adjustment for B-spline curve approximation,” Computer-Aided Design, 36(7), pp. 639-652.
[34] Percoco, G., and Galantucci, L. M., 2008, “Local-Genetic Slicing of Point Clouds for Rapid Prototyping,” Rapid Prototyping Journal, 14(3), pp. 161-166.
[35] Preparata, F. P., and Hong, S. J., 1977, “Convex Hulls of Finite Sets of Points in Two and Three Dimensions,” Communications of the ACM, 20(2), pp. 87-93.
[36] Veltkamp, R. C., 1992, “2D and 3D Object Reconstruction with the γ-neighborhood Graph,” Technical Report CS-R9116, CWI Centre for Mathematics and Computer Science, Amsterdam.
[37] Arya, S., Mount, D. M., Netanyahu, N. S., Silverman, R., and Wu, A.
Y., 1998, “An optimal algorithm for approximate nearest neighbor searching in fixed dimensions,” Journal of the ACM, 45(6), pp. 891-923.
[38] Espalin, D., Medina, F., and Wicker, R., 2009, “Vapor Smoothing, A Method for Improving FDM-Manufactured Part Surface Finish,” Int. Rep. of the W. M. Keck Center for 3D Innovation, Univ. of Texas at El Paso.
[39] Hope, R. L., Roth, R. N., and Jacobs, P. A., 1997, “Adaptive Slicing with Sloping Layer Surfaces,” Rapid Prototyping Journal, 3(3), pp. 89-98.
[40] Lee, K. H., and Woo, H., 2000, “Direct Integration of Reverse Engineering and Rapid Prototyping,” Computers & Industrial Engineering, 38(1), pp. 21-38.
[41] Fritsch, F. N., and Carlson, R. E., 1980, “Monotone Piecewise Cubic Interpolation,” SIAM Journal on Numerical Analysis, 17(2), pp. 238-246.
[42] Kobbelt, L. P., Bischoff, S., Botsch, M., Kahler, K., Rossel, C., Schneider, R., and Vorsatz, J., 2000, “Geometric modeling based on polygonal meshes,” Proceeding of Eurographics, Switzerland.
[43] Murty, K. G., 1988, Linear Complementarity, Linear and Nonlinear Programming, Helderman-Verlag, Berlin.
[44] Sherif, Y. S., and Boice, B. A., 1994, “Optimization by Pattern Search,” European Journal of Operational Research, 78(3), pp. 277-303.