3-D DIGITAL IMAGE CORRELATION USING A SINGLE COLOR-CAMERA

by

Wade Gubbels

B.S., New Mexico State University, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Mechanical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

September 2014

© Wade Gubbels, 2014

ABSTRACT

Digital Image Correlation (DIC) is an optical and numerical method capable of accurately providing full-field, two-dimensional (2-D) and three-dimensional (3-D) surface displacements and strains. 3-D DIC is typically done using two cameras that view the measured object from differing oblique directions. The measured images are independent and must be spatially connected using a detailed calibration procedure. This places a large demand on the practitioner, the optical equipment and the computational method. A novel approach is presented here where a single color-camera is used in place of multiple monochrome cameras. The color-camera measures three independent Red-Green-Blue (RGB) color-coded images. This feature greatly reduces the scale of the required system calibrations and spatial computations because the color images are physically aligned on the camera sensor. The in-plane surface displacements are obtained by performing traditional 2-D DIC in a single color. The out-of-plane information is obtained by a second 2-D DIC analysis and triangulation using oblique illumination from a differently colored light source. Further, the camera perspective errors associated with out-of-plane displacements can be independently measured during this second DIC analysis of the oblique illumination pattern. The 3-D Digital Image Correlation is completed by combining the 2-D correlations for each color. The design and creation of an example apparatus is described here. Experimental results show that the single-camera method can measure 3-D displacements to within 1% error, with precision of the in-plane and out-of-plane measurements being consistently less than 0.04 and 0.12 pixels, respectively.

PREFACE

All of the work presented henceforth was conducted in the Renewable Resources Laboratory at the University of British Columbia, Point Grey campus. A version of Chapter 4, covering the single-camera concept discussed in this work, has been published in Advancement of Optical Methods in Experimental Mechanics, Volume 3: Proceedings of the 2014 Annual Conference on Experimental and Applied Mechanics [Gubbels, W. and Schajer, G.S. Three-Dimensional Digital Image Correlation Using a Single Color-Camera. ISBN 978-3-319-06985-2]. Dr. Gary Schajer was responsible for the color separation algorithm explained in Chapter 4.2.1 and assisted in concept and experimental design. I was the lead investigator for all the remaining work including: conceptual creation, apparatus design and fabrication, data collection and analysis, as well as the majority of manuscript composition.

TABLE OF CONTENTS

ABSTRACT
PREFACE
TABLE OF CONTENTS
LIST OF FIGURES
NOMENCLATURE
ACKNOWLEDGEMENTS
DEDICATION
1 INTRODUCTION
1.1 Digital Image Correlation
1.2 Limitations in DIC
1.3 Proposed Single Color-Camera Approach
1.4 Research Goals
2 DIGITAL IMAGE CORRELATION
2.1 Fundamentals of DIC
2.1.1 Traditional 2-D DIC Arrangement
2.1.2 2-D DIC Analysis Principles
2.1.3 Correlation Criterion and Sub-Pixel Algorithms
2.2 Stereovision 3-D DIC
2.2.1 Fundamentals of Stereovision 3-D DIC
2.2.2 Stereo-Camera Calibration
3 SHADOW/SPECKLE PROJECTION
3.1 Speckle Projection Concept
3.2 Limitations in Speckle Projection
4 SINGLE-CAMERA SYSTEM DESIGN
4.1 Recent Single-Camera Concepts
4.2 Color Camera Sensor
4.2.1 Color Separation
4.3 Multiple Color Pattern Concept
4.4 Telecentric Speckle Projection
4.4.1 Advantages of Using a Linear Diffraction Grating
4.4.2 Removing Laser Speckle from Projected Patterns
4.5 Adapted Applied Pattern
4.5.1 Retro-Reflective Paint
5 MULTIPLE PATTERN DIC ANALYSIS
5.1 Combined 3-D Measurements
5.2 Perspective Errors
5.2.1 Extracting Perspective Errors
5.2.2 Correcting Perspective Errors
6 EXPERIMENTAL VALIDATION
6.1 Experimental Setup
6.2 Rigid Body Displacements
6.2.1 Planar Specimen
6.2.2 Non-Planar Specimen
6.3 3-D Deformation
6.3.1 Experimental Setup
6.3.2 Results
7 CONCLUSIONS AND FUTURE WORK
7.1 Conclusions
7.2 Future Work
REFERENCES

LIST OF FIGURES

Figure 1.1. Digital representation of surface speckle
Figure 2.1. Traditional single-camera 2-D DIC arrangement (adapted from [15])
Figure 2.2. DIC subset matching (adapted from [15])
Figure 2.3. Surface plot showing single peak of correlation coefficient matrix (adapted from [15])
Figure 2.4. Measurable subset deformations using first-order shape functions (adapted from [40])
Figure 2.5. Diagram of two-camera point identification in 3-D (adapted from [10])
Figure 2.6. Example of perspective views as seen from two different camera locations (adapted from [12])
Figure 3.1. Typical shadow projection arrangement (adapted from [20])
Figure 3.2. Determination of out-of-plane displacement using speckle projection (adapted from [22])
Figure 4.1. Typical Bayer style RGB sensor (adapted from [35])
Figure 4.2. Prosilica GC 2450C color spectral response (adapted from [36])
Figure 4.3. Telecentric projection diagram
Figure 4.4. Creating required projector illumination angle
Figure 4.5. Laser speckle in projected pattern images
Figure 4.6. Multiple color patterns
Figure 5.1. Separate color DIC analysis
Figure 5.2. Measured in-plane camera perspective errors (30x) due to 4 mm out-of-plane displacement
Figure 6.1. Experimental setup
Figure 6.2. Planar in-plane displacement results
Figure 6.3. Planar out-of-plane displacement results
Figure 6.4. Measured perspective errors of planar specimen during out-of-plane displacement
Figure 6.5. Perspective corrected out-of-plane results for planar specimen
Figure 6.6. Cylindrical specimen
Figure 6.7. Cylindrical in-plane displacement results using zero-order shape functions
Figure 6.8. Cylindrical in-plane displacement results using first-order shape functions
Figure 6.9. Comparison of cylindrical shape measurement
Figure 6.10. Cylindrical specimen out-of-plane displacement results
Figure 6.11. Average shape measurement accuracy of cylindrical surface
Figure 6.12. 3-D deformation experiment
Figure 6.13. Separated color images of 3-D deformation experiment
Figure 6.14. Measured 3-D surface displacement
Figure 6.15. Contour plot of measured surface height at final increment

NOMENCLATURE

$C_{CC}$   Cross-correlation coefficient
$C_{SSD}$   Sum of squared differences correlation coefficient
$f$, $g$   Image functions for reference and deformed images
$x_i$, $y_j$   Sensor coordinates in the reference image
$x_i'$, $y_j'$   Sensor coordinates in the deformed image
$C_{ZNSSD}$   Zero-normalized sum of squared differences correlation coefficient
$C_{ZNCC}$   Zero-normalized cross-correlation coefficient
$f_m$, $g_m$   Mean subset intensities for reference and deformed images
$\Delta f$, $\Delta g$   Subset norms for reference and deformed images
$\xi$, $\eta$   Shape functions for the x- and y-directions
$u$, $v$   Components of displacement vector in x- and y-directions
$\Delta x$, $\Delta y$   Change in location between images in x- and y-directions
$u_x$, $u_y$   Gradients of the x displacements with respect to the x- and y-directions
$v_x$, $v_y$   Gradients of the y displacements with respect to the x- and y-directions
$p$   Parameter mapping vector
$dx$, $dz$   Change in location in x- and z-directions
$\alpha$   Projection illumination angle
$\beta$   Camera viewing angle
$L$   Distance between specimen and camera
$D$   Distance between projector and camera
$R_1$, $G_1$, $B_1$   Average measured intensity in red, green, and blue
$A$   Calibration coefficient matrix
$a$   Normalized coefficient matrix
$I$   Illumination intensity
$m$   Measured intensity response
$\lambda$   Wavelength
$d$   Diffraction grating line spacing

ACKNOWLEDGEMENTS

First and foremost I want to thank Dr. Gary Schajer for his unwavering commitment, dedication, and support. Dr. Schajer is one of the kindest, most genuine individuals I have ever met, and without his guidance this work would not have been possible. I would also like to thank everyone else at UBC who has helped me along the way, especially my lab mates: Guillaume Richoz, Ted Angus, Josh Harrington, and Darren Sutton. I also want to thank my family for their unconditional love and support. Finally, I want to thank my girlfriend Allison for putting up with all of the long nights I spent in the lab and for keeping me on track.

DEDICATION

To my parents. Thank you for all of your sacrifices, hard work, and support. I love you.
1 INTRODUCTION

1.1 Digital Image Correlation

Understanding the surface deformations of an object under loading is an important problem in many engineering and industrial applications, including characterizing crack tip propagation [1], determining the mechanical properties of traditional [2] and composite materials [3], and evaluating component and structural performance in such things as airbags, protective equipment [4], and even walls [5]. Digital Image Correlation (DIC) is a non-contacting optical and numerical method capable of accurately providing full-field, two-dimensional (2-D) and three-dimensional (3-D) displacements and strains. This deformation information is obtained by mathematically tracking similar local features between images of an object's surface taken before and after deformation. Images are commonly captured in a natural or white-light environment, making the method suitable for both laboratory and in-field applications. The method is also applicable over a wide range of length scales, from macro- to nano-scale [1-5].

Typically, a high-contrast random pattern is applied to the surface of a specimen to ensure the highest accuracy in the tracking algorithms. This pattern can be easily created by spraying speckles of black and white paint on the specimen surface. Commonly, images are taken with black and white cameras or converted to greyscale [6]. This ensures that the applied surface pattern is digitally represented by numerical intensity values at each pixel, as shown in figure 1.1. Numerous mathematical techniques have been developed to accurately track the deformations in these digital images to sub-pixel accuracy for both 2-D and 3-D deformations [7].

Figure 1.1. Digital representation of surface speckle.

In general, the reference or un-deformed image is divided into smaller portions, or subsets, of neighboring pixels. Typically, these subsets range in size from 15 to 75 pixels square [8]. Various searching schemes are then used to compare these reference subsets using a predefined correlation criterion to calculate the new location of the reference subset within the deformed image. Current DIC algorithms are capable of accurately calculating high-order displacement fields [9]. Deformation results can be obtained accurately for both 2-D and 3-D deformations, with the latter requiring substantially more involved correlation algorithms, arising from the general requirement that images from at least two cameras are needed for 3-D analysis [10].

While it is not the aim of this work to improve upon the algorithms used in 2-D or 3-D DIC, a general understanding of the differences between the two is required, and a more detailed explanation of the DIC fundamentals is covered in Chapter 2. While much work has been done recently to improve the optimization and accuracy of the DIC algorithms for both the 2-D and 3-D case, the optical arrangements commonly used have remained relatively unchanged. Traditionally, there are two different arrangements, each having specific advantages and limitations.

1.2 Limitations in DIC

A traditional single-camera DIC arrangement consists of a single camera fixed such that the viewing axis is perpendicular to the surface of the specimen. The 2-D nature of the images captured limits the sensitivity of this setup to in-plane motions only (that is, motions in the plane perpendicular to the viewing axis of the camera).
Any out-of-plane motion will cause a change in the magnification of the object surface within the image and thus be calculated as erroneous in-plane displacements during image analysis, commonly referred to as camera perspective error. Furthermore, because of the non-constant image magnifications produced by curved surfaces, the 2-D single-camera technique is typically limited to planar specimens.

To overcome the 2-D limitations of single-camera (or 2-D DIC) systems, two-camera (or stereovision) systems were developed [11]. In the stereovision arrangement, two cameras are positioned to image the specimen from different viewing angles. From these image pairs it is possible to compute the real 3-D coordinates of a point using triangulation. For accurate triangulation, though, strict requirements must be met. The relative position and the viewing parameters associated with each camera must be explicitly known to solve the stereo-matching problem. Stereo-matching is the process of identifying the locations, within each image, that correspond to the same physical point on an imaged specimen. The tasks of understanding relative camera locations and matching image points are commonly handled through a detailed calibration procedure. This calibration procedure must be carried out any time the position or viewing parameters of either camera are changed. Calibration of the stereo cameras is critical to obtaining reliable 3-D results, and recently work has been done to characterize the errors associated with different calibration approaches [12]. In addition, the increased optical and calibration complexity requires that the optical quality of the two cameras be high, thus substantially increasing the overall hardware cost. Furthermore, the 3-D arrangement also requires substantially more involved computational analysis than does the 2-D approach. Chapter 2 presents a more detailed explanation of the calibration procedure and of 3-D DIC.

In summary, traditional single-camera DIC is straightforward in implementation and analysis, but limited to 2-D measurements of planar specimens. In contrast, two-camera 3-D DIC is rich in information and widely applicable, but limited by a complex optical setup, detailed calibration procedure, and computationally demanding analysis. Therefore, it would be advantageous to have a single-camera system with the advantages of a traditional 2-D arrangement and the capability to make 3-D measurements. In general, creating such a system requires extracting extra information from a single digital image. Here, an approach is taken where the required additional information is attained by use of the separate color information available from a color-camera.

1.3 Proposed Single Color-Camera Approach

A typical color-camera sensor is constructed with an interlaced pattern of Red, Green and Blue (RGB) pixels. The most commonly used colored pixel arrangement was introduced by Bayer [13]. The three color signals are recorded individually on the sensor and, in conventional use, are combined to create full-color images. However, the three color signals are independent, and just one color, say green, is sufficient to do traditional 2-D DIC measurements. This leaves the remaining two color signals, say red and blue, available to measure the out-of-plane displacements. Such measurements can be done using triangulation by projecting a random speckle pattern of either red or blue onto the surface of the object.
A further 2-D DIC evaluation of the projected pattern, combined with triangulation, can then identify the out-of-plane surface displacements. This procedure has the advantage that all of the various colored pixels have precisely known locations relative to each other, so it is not necessary to do extensive calibrations to cross-reference the pixels in two separate cameras, as in the conventional 3-D technique. Stereo-matching is therefore avoided because both color images are registered on a single sensor. Also, straightforward 2-D correlation algorithms can be used on each color image, as opposed to the more computationally demanding 3-D algorithms required in traditional stereovision arrangements. The proposed system thus reduces both the optical and computational complexity of a traditional 3-D DIC system.

This thesis describes the development and demonstration of an optical system using a single color-camera with structured color illumination to measure 3-D displacements of both planar and curved specimens. Chapter 3 explains the traditional approach to speckle projection. Chapter 4 gives a detailed explanation of the system concept and design. Chapter 5 describes the mathematical approach to separate color image DIC and combined 3-D measurements. Following on from this, Chapter 6 presents a functional validation and gives results for both planar and curved specimens.

1.4 Research Goals

3-D DIC provides a very powerful tool to measure the surface deformations of objects during different loading situations. The two-camera arrangements used to obtain this 3-D information currently require the use of very high quality and costly camera equipment, detailed calibration techniques, and demanding experimental and computational procedures. By comparison, a single-camera 2-D approach can be expected to be significantly less demanding in every respect. It is the goal of this research to create a single-camera system that is capable of measuring 3-D displacements of both planar and curved specimens. The intention is to create a 3-D DIC method that has the practical advantages of traditional 2-D DIC arrangements. This work is not intended to replace traditional two-camera 3-D DIC, but rather to provide a simplified approach that may be more procedurally attractive in many cases. It is sincerely hoped that this work will allow new users to access the data richness of 3-D DIC, which was previously not accessible because of limitations in hardware, software, or budget. The proposed methodology is intended to be both repeatable and adaptable. It is hoped that it will provide a basis for others to expand upon and also to explore new and interesting ways to apply and adapt Digital Image Correlation.

2 DIGITAL IMAGE CORRELATION

2.1 Fundamentals of DIC

The bulk of recent work being done in digital image correlation is in the development of more optimized and accurate correlation algorithms and stereovision calibration techniques. Since the initial development of 2-D DIC in the early 1980s, the mathematical approach to DIC pattern matching has advanced greatly in sophistication, and new approaches are continually being developed. In this chapter, the general requirements and basic principles and concepts of 2-D image correlation, with respect to measuring displacements, are presented. An emphasis is placed on 2-D iterative spatial cross-correlation techniques because they are the analysis methods used in the validation of the single color-camera system, presented in Chapter 6.
The limitations of traditional 2-D DIC are further explained, and the extension of DIC into 3-D using stereovision is discussed. Finally, the increased complexities of 3-D DIC and the required calibration procedures are presented.

DIC generally comprises three steps: specimen preparation and experimental setup, image acquisition, and image analysis. Specimen preparation is relatively simple. DIC requires the presence of a random pattern on the specimen's surface. This pattern can be naturally occurring on the specimen surface or applied, typically using a thin layer of black and white paint. Several features of this pattern are needed to produce the most accurate analysis results. First, the random pattern, if applied, must deform with the specimen surface, as it is the carrier of the displacement information. The pattern should also be high in contrast, because the recorded differences in light intensities reflecting from it are the basis of DIC analysis. The speckles, or dots, within the pattern should also be appropriately sized relative to the pixel resolution of the camera so that the pattern can be resolved effectively. After proper specimen preparation, the camera(s) must be properly arranged to obtain the most accurate results.

2.1.1 Traditional 2-D DIC Arrangement

Figure 2.1 shows a typical single-camera DIC arrangement. A single camera is aligned to be approximately perpendicular to the specimen surface, which is sufficiently illuminated with white light. The image taken from a single camera creates a 2-D perspective view of 3-D objects without direct information about the third dimension. Apart from perspective effects, a single camera (or view) is not sensitive to object motions in the direction of the viewing axis. It is, however, sensitive to motions in the two transverse dimensions. This allows for very accurate measurement of 2-D surface displacements as long as a few requirements are met.

Figure 2.1. Traditional single-camera 2-D DIC arrangement (adapted from [15]).

To convert the 2-D motion of an image point into the real-world scale, the magnification of the camera must be known, in mm/pixel for example. This magnification ratio is a function of the lens used to image the specimen and the distance from object to lens. To ensure that this magnification ratio is the same for each point on the surface, the entire object must be perpendicular to the camera sensor. In general, perfect alignment is not practical, but small non-perpendicularities can be handled in post-processing. Also, to maintain a constant magnification ratio from image to image, the out-of-plane displacements should be negligible. For example, a specimen that moves closer to the camera will appear slightly larger in the camera image. A similar image will appear if the specimen undergoes a uniform bi-axial tension, as the surface will stretch equally in both in-plane directions, thus appearing larger in the acquired image. Without extra information, the DIC algorithms cannot differentiate between a real in-plane deformation and a change in the camera perspective. These camera perspective errors can be reduced if a telecentric lens is used to view the specimen [14]. However, traditional single-camera systems are still only sensitive to in-plane displacements. Therefore, the traditional single-camera arrangement is limited to measuring approximately in-plane deformations of nearly planar specimens.
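Before moving to the analysis principles, it is worth noting that correlation codes are commonly checked against synthetic speckle images whose motion is known exactly. The short sketch below is a minimal illustration rather than the procedure used in this work (the function name and all parameter values are assumptions); it generates a pattern with the properties described above: random, high-contrast, with dots a few pixels across.

```python
import numpy as np

def speckle_pattern(h, w, n_dots=1500, radius=2.0, seed=0):
    """Synthesize a random speckle image: dark Gaussian dots
    (a few pixels across) on a white background."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:h, 0:w]
    img = np.ones((h, w))
    for cx, cy in zip(rng.uniform(0, w, n_dots), rng.uniform(0, h, n_dots)):
        img -= np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * radius ** 2))
    return np.clip(img, 0.0, 1.0)   # 0 = black speckle, 1 = white paint
```

Shifting such an image by a known sub-pixel amount and correlating it against the original provides a direct check of the accuracy of a DIC implementation.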
2.1.2 2-D DIC Analysis Principles

Once the images of the specimen deformation have been acquired, computer algorithms are used to compare the two digital images. While there are numerous DIC analysis techniques, only the fundamentals of 2-D DIC analysis are presented here. A complete review of other 2-D DIC analysis techniques is presented by Pan [15]. The basic objective of DIC analysis is shown in figure 2.2. A subset of $(2M+1) \times (2M+1)$ pixels centered at the point $P(x_0, y_0)$ in the reference image is selected and used to find the corresponding location $P'(x_0', y_0')$ in a deformed image. As shown, this provides adequate information to calculate the displacement vector of the point P. A surrounding point, such as $Q(x_i, y_j)$, can also be mapped to its corresponding location, $Q'(x_i', y_j')$, using so-called shape functions, discussed in the next section.

Figure 2.2. DIC subset matching (adapted from [15]).

Generally, the reference image is divided into many subsets; the spacing between these subsets is determined based on the required spatial resolution of the results. Once the corresponding locations of all subsets in a region of interest (ROI) are found, a full-field displacement map is obtained. To calculate these displacements precisely, a predefined correlation criterion and sub-pixel algorithm must be implemented.

2.1.3 Correlation Criterion and Sub-Pixel Algorithms

The most commonly used criteria are the cross-correlation and sum of squared differences:

Cross-Correlation:
$$C_{CC} = \sum_i \sum_j \left[ f(x_i, y_j)\, g(x_i', y_j') \right] \qquad (2.1)$$

Sum of Squared Differences:
$$C_{SSD} = \sum_i \sum_j \left[ f(x_i, y_j) - g(x_i', y_j') \right]^2 \qquad (2.2)$$

where $f$ and $g$ are the reference and deformed image functions, respectively, and give a grayscale intensity value for each specific $(x, y)$ point. For example, $f(x_0, y_0)$ gives the grayscale intensity of the centrally located pixel in the reference subset. More robust forms of the CC and SSD criteria are the Zero-Normalized Sum of Squared Differences (ZNSSD) and the Zero-Normalized Cross-Correlation (ZNCC):

$$C_{ZNSSD} = \sum_i \sum_j \left[ \frac{f(x_i, y_j) - f_m}{\Delta f} - \frac{g(x_i', y_j') - g_m}{\Delta g} \right]^2 \qquad (2.3)$$

and

$$C_{ZNCC} = \sum_i \sum_j \left\{ \frac{\left[ f(x_i, y_j) - f_m \right] \left[ g(x_i', y_j') - g_m \right]}{\Delta f\, \Delta g} \right\} \qquad (2.4)$$

where

$$f_m = \frac{1}{(2M+1)^2} \sum_i \sum_j f(x_i, y_j) \qquad (2.5)$$

and

$$g_m = \frac{1}{(2M+1)^2} \sum_i \sum_j g(x_i', y_j') \qquad (2.6)$$

are the mean intensities of the respective subsets, and

$$\Delta f = \sqrt{\sum_i \sum_j \left[ f(x_i, y_j) - f_m \right]^2} \qquad (2.7)$$

and

$$\Delta g = \sqrt{\sum_i \sum_j \left[ g(x_i', y_j') - g_m \right]^2} \qquad (2.8)$$

are the respective subset norms. By normalizing the correlation equation and subtracting the mean intensity value, the ZNSSD and ZNCC become more robust to noise and are also insensitive to offsets in illumination intensity from image to image [15]. In the above equations:

$$x_i' = x_i + \xi(x_i, y_j) \qquad (2.9)$$

and

$$y_j' = y_j + \eta(x_i, y_j) \qquad (2.10)$$

$\xi$ and $\eta$ are commonly referred to as shape functions.
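As an illustration of how a criterion such as the ZNCC of equation 2.4 is evaluated in practice, the following minimal Python/NumPy sketch compares a reference subset against a candidate deformed subset. It is an interpretation of the equations above, not code from this work.

```python
import numpy as np

def zncc(ref_subset, def_subset):
    """Zero-normalized cross-correlation (Eq. 2.4) between two
    equally sized grayscale subsets, e.g. 21 x 21 pixel arrays."""
    f = ref_subset.astype(float)
    g = def_subset.astype(float)
    f_zm = f - f.mean()                 # subtract mean intensity (Eq. 2.5)
    g_zm = g - g.mean()                 # subtract mean intensity (Eq. 2.6)
    df = np.sqrt((f_zm ** 2).sum())     # subset norm (Eq. 2.7)
    dg = np.sqrt((g_zm ** 2).sum())     # subset norm (Eq. 2.8)
    return (f_zm * g_zm).sum() / (df * dg)
```

A value of 1 indicates a perfect match; the zero-mean, normalized form makes the coefficient insensitive to offset and scale changes in illumination, as noted above.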
For a zero-order shape function:

$$\xi(x_i, y_j) = u \qquad (2.11)$$

and

$$\eta(x_i, y_j) = v \qquad (2.12)$$

where $u$ and $v$ are the respective x and y components of the displacement vector. The correlation criteria are therefore a function of the parameter mapping vector $p$, which is composed of the terms of each shape function. In the case where a zero-order shape function is used, the correlation coefficient is a function of only the two parameters $u$ and $v$. Therefore, a zero-order shape function can only accurately calculate the displacement vector for each subset center when little to no relative subset deformation or rotation occurs. If this is an adequate assumption, subset matching is straightforward.

To begin, an integer search scheme is implemented. A single reference subset is selected and a search region of the deformed image is defined. Typically, the first deformed subset is selected to be centered at the top-left pixel of this search region. One of the previously mentioned correlation criteria is then used to compare the two subsets and assign a correlation coefficient for that specific location. The deformed subset is then shifted by one pixel and the process is repeated until the entire search region of the deformed image has been iteratively compared to the single reference subset. This procedure results in a matrix of correlation coefficients that can be depicted as a surface, as shown in figure 2.3.

Figure 2.3. Surface plot showing single peak of correlation coefficient matrix (adapted from [15]).

Due to the discrete nature of digital images, the location of the surface peak corresponds to the integer pixel location of the deformed subset with the highest similarity to the reference subset. Generally, more precise measurements are required. Therefore, a sub-pixel algorithm is needed to better approximate the real final position. One method of obtaining the sub-pixel location is to use the 8 correlation coefficients surrounding the maximum value to create a two-dimensional quadratic surface fit. The location of the maximum of this bi-quadratic surface then defines the sub-pixel coordinates of the reference subset's corresponding central point. The process of integer searching and approximating the sub-pixel location is then repeated for each reference subset, resulting in a full-field displacement map. Again, this approach is only appropriate in the presence of very small relative subset deformations or rotations. More capable first-order shape functions are needed to measure displacements in the presence of more complex 2-D surface deformations. These shape functions are also critical in the analysis of the projected speckle method explained in Chapters 3 and 5, as they allow for the accurate determination of complex surface shapes.

The first-order shape functions are shown in equations 2.13 and 2.14:

$$\xi(x_i, y_j) = u + u_x \Delta x + u_y \Delta y \qquad (2.13)$$

and

$$\eta(x_i, y_j) = v + v_x \Delta x + v_y \Delta y \qquad (2.14)$$

where $u_x = \partial u / \partial x$, $u_y = \partial u / \partial y$, $v_x = \partial v / \partial x$, and $v_y = \partial v / \partial y$ are the first-order displacement gradients of the reference subset depicted in figure 2.4, and $\Delta x$ and $\Delta y$ are the distances from the subset center. This allows for the more accurate matching of relative subset deformations, including the six shown or any linear combination. More complex deformations can also be found using even higher-order shape functions.

Figure 2.4. Measurable subset deformations using first-order shape functions (adapted from [40]).
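As a concrete sketch of how equations 2.13 and 2.14 act on a subset, the fragment below (illustrative only; the parameter ordering and names are assumptions) maps the reference subset pixel coordinates into the deformed image. The resulting coordinates are generally non-integer, so the deformed intensities g(x′, y′) must be sampled by grayscale interpolation.

```python
import numpy as np

def warp_subset_coords(x0, y0, p, half=10):
    """Map reference subset coordinates to the deformed image using
    the first-order shape functions of Eqs. 2.13-2.14.
    p = (u, v, ux, uy, vx, vy) is the parameter mapping vector."""
    u, v, ux, uy, vx, vy = p
    dy, dx = np.mgrid[-half:half + 1, -half:half + 1]   # offsets from center
    xp = x0 + dx + u + ux * dx + uy * dy   # x' = x + xi(x, y)
    yp = y0 + dy + v + vx * dx + vy * dy   # y' = y + eta(x, y)
    return xp, yp
```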
If a first-order shape function is used, the correlation criteria become a non-linear function of the parameter mapping vector $p = (u, v, u_x, u_y, v_x, v_y)$. Therefore, a non-linear correlation algorithm is required to calculate the optimal parameter mapping vector for each subset. The Newton-Raphson (NR) method is a classic non-linear DIC iterative solver, originally presented by Bruck [16]. Vendroux and Knauss developed a simplified approximation of the NR method [17]. When this approximation is used, the NR method is sometimes referred to as the improved NR or Gauss-Newton method. The non-linear solver iteratively transforms the reference subset until an appropriately close match is found, so long as an adequately close starting point is provided. This initial guess can be determined using a search scheme similar to the pixel-by-pixel technique discussed previously.

To summarize, images acquired using a traditional single-camera arrangement can be used to measure complex 2-D surface deformations. Measurements can be made accurately so long as a random pattern is visible on the specimen surface and any changes in camera perspective are minimal. Once images are obtained, they are divided into smaller subsets, and a correlation equation is used to compare these subsets. When zero-order shape functions are used, the calculation of in-plane displacements can be handled with a simple integer pixel search scheme and a straightforward sub-pixel approximation. To account for more complex 2-D deformations, higher-order shape functions are used in conjunction with a non-linear solver. Each method is relatively easy to implement, but both are still limited to calculating 2-D (in-plane) displacements when used to analyze images taken from a traditional single-camera arrangement. However, when used to analyze the images obtained from the single color-camera arrangement, these 2-D techniques can be used to obtain 3-D deformations.

2.2 Stereovision 3-D DIC

2.2.1 Fundamentals of Stereovision 3-D DIC

The basic concepts of stereovision 3-D DIC are similar to single-camera 2-D DIC. In general, the requirements in regard to the surface pattern are the same. The surface pattern is still imaged during different loading situations and compared using predefined correlation criteria. The fundamental differences arise from the fact that two cameras, or views, are required in traditional 3-D DIC. The need for a second camera is illustrated in figure 2.5. Using the well-known pin-hole camera model, a single camera located at C cannot distinguish the difference between points Q and R, or any other point lying on the same projection ray. Including a second camera located at C′, though, provides enough information to uniquely distinguish the two points' real 3-D locations in space. Therefore, combining DIC analysis techniques with a two-camera arrangement allows for accurate measurements of 3-D surface displacements and strains, as well as accurate specimen shape measurements. However, to obtain these measurements, more advanced correlation algorithms are required, and the two independent camera sensors must be precisely calibrated so that they may be used together.

Figure 2.5. Diagram of two-camera point identification in 3-D (adapted from [10]).
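To make the triangulation idea of figure 2.5 concrete, the following sketch (a simplified illustration, not the formulation used in stereovision DIC codes) recovers a 3-D point as the closest approach of two viewing rays; in practice, the camera centers and ray directions come from the calibrated stereo model discussed below.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Least-squares intersection of two viewing rays (figure 2.5).
    c1, c2: 3-D camera centers C and C'; d1, d2: unit ray directions.
    Returns the 3-D point midway between the rays at closest approach."""
    A = np.column_stack((d1, -d2))
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)  # t1*d1 - t2*d2 = c2 - c1
    return 0.5 * ((c1 + t[0] * d1) + (c2 + t[1] * d2))
```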
There are different mathematical approaches to 3-D DIC, just as in the 2-D case. One of the objectives of this work is to avoid the computational complexities associated with traditional 3-D DIC; while the various 3-D algorithms are not covered in this thesis, explanations of the different techniques and their associated performances can be found in the literature [18].

In stereovision DIC, the deformation of a subset from reference to deformed image must be found in both the left and right cameras and then matched in image pairs. Figure 2.6 highlights the different perspectives recorded using a stereovision arrangement. Stereovision DIC is generally not as simple as just performing 2-D DIC on each image separately. For example, an initially square subset in the reference image from one camera will not remain square even after a simple translation of the specimen, due to the change in camera perspective. The same surface location will appear to undergo an entirely different deformation as seen by the second camera. This, in general, requires the use of second-order (or higher) shape functions to measure even simple displacements or deformations, which substantially increases the computational complexity of traditional 3-D DIC.

Figure 2.6. Example of perspective views as seen from two different camera locations (adapted from [12]).

Once the location of a subset is found in the deformed image from one camera, it must then be matched to the corresponding location in the image from the other camera to triangulate a unique 3-D location. However, to obtain accurate correspondence between the two independent camera sensors, the locations and viewing parameters associated with each camera must be known. This requires a detailed calibration procedure.

2.2.2 Stereo-Camera Calibration

Traditionally, stereo calibration is accomplished by simultaneously (or independently) capturing images of a calibration target placed in different orientations. Typically, a calibration target has distinct points and/or lines which are viewed by both cameras and used to mathematically determine the needed camera parameters [12]. Certain calibration techniques demand that the characteristics of this calibration target be very precisely known, or that the translations (or changes in orientation) of this target be very precise. This places a demand on the precision of the target or on the displacement method, making these techniques expensive or impractical. The accuracy of the calibration procedure directly affects the measurement results, and even with adequate calibration, the relative locations of the cameras must not change during image acquisition or measurement errors will occur.

It should be mentioned that there are several different approaches to stereo-camera calibration, and the process is continually becoming more optimized and easier to implement [19]. Still, no calibration technique is free from possible error, and even the most advanced 3-D correlation algorithms cannot overcome errors in calibration.

In summary, traditional 3-D DIC requires two cameras to simultaneously image a specimen. The combination of these two 2-D perspectives provides enough information to accurately depict real 3-D scenarios and measure surface deformations. However, these two different perspectives can only be used together if a detailed calibration procedure is performed. Also, more involved correlation algorithms must be implemented to accurately measure 3-D surface deformations.
Therefore, traditional stereovision 3-D DIC has a greater optical and computational complexity compared with traditional single-camera 2-D DIC. This places a greater demand on the optics and mathematical approaches used and, in general, means a substantial increase in the overall system cost, as well as a greater demand placed on the practitioner.

3 SHADOW/SPECKLE PROJECTION

3.1 Speckle Projection Concept

Shadow (or speckle) projection is an adaptation of traditional DIC. 2-D image correlation techniques are used to track the apparent in-plane displacements of a pattern that is projected onto a specimen from an oblique angle. These measured in-plane displacements are then used to triangulate the approximate out-of-plane displacements. Figure 3.1 shows a typical speckle projection arrangement. Commonly, the technique is used to do non-contact profilometry by comparing an image taken on a reference plane to one taken on the given non-flat specimen [20]. However, it has also been used to measure out-of-plane surface displacements with modest accuracy [21]. The technique was initially developed to help overcome the limitations associated with the classic shadow moiré method [22]. Originally, a traditional slide projector was used to illuminate the surface of a specimen with a random speckle pattern. Today, digital projectors are used to provide the computer-generated speckle image, making the setup and implementation of the method relatively easy [20-24].

Figure 3.1. Typical shadow projection arrangement (adapted from [20]).

Figure 3.2 illustrates the basic concept of speckle projection. A random speckle pattern is projected onto an initially flat specimen (or reference plane), AB, from a source located at S. If the specimen subsequently undergoes an out-of-plane deformation such that it now lies on A′B′, the projected pattern will appear to shift in the x direction. This point-wise lateral shift, dx, can then be measured using the previously described 2-D DIC analysis techniques. If the angle of the projection illumination is well known, these measured in-plane image displacements can then be used to triangulate the associated real out-of-plane displacements.

Figure 3.2. Determination of out-of-plane displacement using speckle projection (adapted from [22]).

Consider, for example, a point initially registered by the camera at b. It will appear to shift to point d due to the out-of-plane displacement, dz. From the figure above:

$$dx = dz\,(\tan\alpha + \tan\beta) \qquad (3.1)$$

where dx equals the distance between b and d, and α and β represent the angles, relative to the z axis, of the projection ray and the camera viewing ray, respectively, that meet at point f. From the geometry,

$$dz = \frac{L \cdot dx}{D + dx} \qquad (3.2)$$

By placing the camera and projector sufficiently far from the specimen, it is assumed that D becomes much larger than dx. Also, by noticing that

$$\tan\alpha + \tan\beta \approx \frac{D}{L} \qquad (3.3)$$

Equation 3.2 can be simplified to:

$$dz = \frac{dx}{\tan\alpha + \tan\beta} \qquad (3.4)$$

Commonly, the camera is arranged to be similar to a traditional 2-D DIC arrangement, where the camera viewing angle, β, equals zero. Therefore, if the projection illumination angle is known and DIC techniques are used to measure the apparent in-plane image shifts dx, Equation 3.4 reduces to dz = dx / tan α and can be used to calculate the out-of-plane displacements at each point across the specimen surface. Finally, if the magnification ratio of the camera is known, each dz value is multiplied by this ratio, thereby converting pixel displacements into the appropriate real-world scale.
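In code, this triangulation is a single arithmetic step applied to the dx field returned by the 2-D DIC analysis. A minimal sketch, assuming β = 0 and using purely illustrative values:

```python
import numpy as np

def out_of_plane(dx_px, alpha_deg, mm_per_px):
    """Convert apparent in-plane shifts of the projected pattern
    (dx, in pixels) to out-of-plane displacement dz in mm,
    using Eq. 3.4 with camera viewing angle beta = 0."""
    dz_px = dx_px / np.tan(np.radians(alpha_deg))   # dz = dx / tan(alpha)
    return dz_px * mm_per_px                        # pixels -> real-world scale

# e.g. a 2.1-pixel shift at a 45 degree projection angle and
# 0.02 mm/pixel magnification corresponds to dz = 0.042 mm
```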
3.2 Limitations in Speckle Projection

The simplifications and assumptions discussed in the previous section imply that the viewing camera is telecentric and that the projected image is created using collimated, or parallel, illumination. While a telecentric lens is often used to view the specimen in both speckle projection and traditional 2-D DIC, digital projectors create inherently diverging images. This means that the angle of illumination of the projected pattern varies from point to point across the specimen surface. Disregarding these variations in the projection angle leads to measurement error. Also, the measured out-of-plane displacement, dz, corresponds to a real in-plane image coordinate equal to x, as shown, while the location corresponding to the registered point d lies at an image coordinate equal to x′. Consequently, the amount of out-of-plane displacement calculated for a single location will not be the real value corresponding to that location, but rather to a nearby point, due to the triangulation. This situation adds a distortion effect to the out-of-plane results and leads to measurement errors in places with significant surface slope.

In general, the accuracy of 2-D DIC analysis techniques decreases when used in speckle projection. The correlation techniques are identical in terms of the mathematics, but the physical differences in the projected speckle make accurate correlation more difficult. The combination of complex specimen surface shapes and camera and projector perspective errors can lead to complex relative subset deformations. This requires the use of high-order shape functions to match reference and deformed image subsets accurately [20].

In addition, the triangulation method requires that the projection illumination angle be precisely known. Different techniques can be used to physically measure this angle during optical setup, but these methods typically involve high-precision stages or other costly measurement devices [25]. McNeill developed a procedure using a precisely gridded calibration plate to determine the required angle [24]. Pan presented a method in which a precisely dimensioned gauge object is measured to determine the illumination angle and also to calculate a coefficient used to better characterize the gradient of the illumination angle across the image [20]. Both processes, however, much like stereovision calibration, increase the required experimental complexity.

Finally, when making out-of-plane displacement measurements using the projected speckle method, there is a general requirement that the in-plane displacements be negligible. That is, the apparent in-plane shifts of the projected pattern should arise only from purely out-of-plane surface displacements. This is required because the method has no way of creating any correspondence between the actual surface points.
For example, in figure 3.2, if the specimen surface initially lies along A′B′ and then undergoes a rigid-body translation in the x-direction, the relative surface height at each location will change and the projected pattern will shift accordingly. Thus, the purely in-plane specimen displacement will be inaccurately registered as an out-of-plane deformation. The inability of shadow projection measurements to be matched with the real surface locations greatly limits the practical applications of the method. Without knowledge of the in-plane surface deformations, accurate determination of the out-of-plane deformations is difficult.

In summary, the speckle projection method uses a single camera and straightforward 2-D DIC analysis techniques to measure the apparent in-plane displacements of a projected speckle pattern. By triangulation, these measured in-plane displacements are used to calculate the approximate out-of-plane surface locations. The method is relatively easy to implement, but it does require that the illumination angle be precisely determined. Simplifying assumptions, perspective errors, and complex surface shapes can lead to difficulties in image correlation and/or reduced measurement accuracy. Also, traditional speckle projection does not create any correspondence between actual surface points. Therefore, only the out-of-plane position of surface points can be measured. Thus, the established speckle projection method is limited to measuring specimen shape or out-of-plane surface deformations when in-plane deformations are either known or disregarded. In the subsequent chapters, a different speckle projection method will be presented that overcomes the limitations of the previously established methodology.

4 SINGLE-CAMERA SYSTEM DESIGN

4.1 Recent Single-Camera Concepts

A single-camera system capable of 3-D displacement measurements has many advantages compared to traditional stereovision DIC. The previously mentioned reductions in cost and complexity are attractive to many users. Single-camera systems also have an advantage in high-speed applications, as a single camera does not require synchronization. Also, a single-camera system can help to avoid the issues related to relative camera motion in stereovision measurements. These advantages have recently motivated researchers to develop different single-camera 3-D DIC approaches.

Several authors describe similar methods by which the camera perspective errors associated with a traditional single-camera system are extracted from the in-plane results to provide information about out-of-plane displacements [26-29]. In general, this method is able to give accurate results for 3-D rigid-body displacements. However, it requires a calibration procedure to determine the perspective errors associated with known out-of-plane displacements, and it is typically limited to planar specimens. Another approach was developed in which mirrors are used to direct two different views of the specimen onto a single camera sensor [30]. This approach has shown accuracy similar to stereovision arrangements. However, a calibration of the mirror locations is required, and the more involved 3-D algorithms are still needed.

To avoid the difficulties associated with calibrating dual-camera or mirror arrangements, Xia developed a novel diffraction-assisted technique in which monochromatic illumination and a linear diffraction grating are used to create multiple views of a specimen in a single image [31].
In this method, the out-of-plane displacements are extracted from 2-D in-plane measurements by taking advantage of the different diffraction modes created by the grating. However, due to issues inherent to the method, the reported accuracy of the out-of-plane results is relatively modest. The single color-camera approach described in this thesis also uses a linear diffraction grating, but for a different purpose: it is used in the color projection arrangement, as described in section 4.4.2.

The fringe projection method, which is similar to speckle projection, has become a popular tool in 3-D shape measurement. Recently, the fringe projection method was combined with 2-D DIC to provide 3-D displacement measurements using a single camera [32]. Quan and Tay recorded the two independent patterns using a monochrome camera and separated the fringes from the speckled background using a Fourier transform technique [33]. This technique shows relatively good accuracy, but requires that the speckle be slowly varying for the fringe pattern to be separated from the speckle properly. That is, the 3-D measurements can only be made simultaneously when the in-plane displacements are small. In general, the fringe projection technique provides measurements in the form of wrapped phase maps. This means that an additional phase-unwrapping procedure is required and that the out-of-plane measurements are not absolute unless boundary conditions are reliably known. The combined fringe projection and DIC method was further advanced by Felipe-Sese by encoding the fringe and speckle patterns in different color signals and using a color-camera sensor as a filter [34]. While this approach can handle both small and large in-plane displacements, the overlap in color signals caused an increase in noise for both the DIC and fringe projection measurements.

These recent 3-D single-camera approaches were developed to overcome the limitations associated with traditional stereovision 3-D DIC. The single-camera approach described in this thesis further develops different aspects of some of these previously mentioned methods. The single color-camera method was designed to require minimal calibration and to be capable of measuring small and large displacements for both flat and curved specimens. An improved speckle projection method is used in combination with an applied surface pattern, such that 2-D DIC algorithms can be used to analyze both patterns. The two independent patterns provide the information needed to measure both the in-plane and out-of-plane displacements, while also correcting for camera perspective errors. These patterns are simultaneously recorded using a single color-camera.

4.2 Color Camera Sensor

A typical color-camera sensor is constructed with an interlaced pattern of Red, Green and Blue (RGB) pixels. The most commonly used colored pixel arrangement was introduced by Bayer [13]. Figure 4.1 illustrates a typical Bayer sensor. Commonly, a Bayer-style mask consists of a row of alternating green and red pixels followed by a row of alternating green and blue pixels. This repeated pattern results in twice as many green pixels as blue or red. The sensor is created this way to mimic the color sensitivity of the human eye [35]. This means that the effective resolution of the red and blue images is half that of the green. However, this decrease in resolution can simply be overcome by incorporating a higher-resolution sensor.
Finally, to allow realistic rendition of color scenes where a wide range of wavelengths is present, the filters are designed to have moderately wide bandwidths that overlap each other. Figure 4.2 shows the color sensitivity of the camera used here (Prosilica GC 2450, Allied Vision Technologies, Burnaby, Canada). However, these overlaps are undesirable for the application here because they cause crosstalk between the separately colored signals and consequently impair measurement accuracy. Therefore, a digital filtering technique is employed to separate independent color images from a single full-color image.

Figure 4.2. Prosilica GC 2450C color spectral response (adapted from [36]).

4.2.1 Color Separation

The effect of the color overlaps can be removed by mathematical calibration. For a beam of color 1 (red) only, with intensity I_1, the measured average responses R_1, G_1, B_1 are:

R_1 = A_{11} I_1        G_1 = A_{21} I_1        B_1 = A_{31} I_1        (4.1)

where A_{ij} are calibration coefficients relating the response of pixel color i and illumination color j. In this measurement, the numerical value of the light intensity I_1 is not explicitly known. However, it is sufficient to work in terms of scaled quantities and to normalize the calibration coefficients A_{ij}:

a_{i1} = A_{i1} / \max(A_{11}, A_{21}, A_{31}),    i = 1, 2, 3        (4.2)

Similarly, with beams of colors 2 (green) or 3 (blue) only, subscript “2” or “3” replaces subscript “1” in the j position above. When all three beams are present simultaneously, the measured responses m_i are:

\begin{pmatrix} m_1 \\ m_2 \\ m_3 \end{pmatrix} = \begin{pmatrix} r_1 + g_1 + b_1 \\ r_2 + g_2 + b_2 \\ r_3 + g_3 + b_3 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} I_1 \\ I_2 \\ I_3 \end{pmatrix}        (4.3)

where r_i, g_i and b_i are the contributions of the red, green and blue beams to the pixels of color i. In matrix form, this becomes:

a I = m        (4.4)

The individual beam intensities I can be recovered in scaled form from the measured responses m by matrix solution. The normalization in Eq. 4.2 ensures that the scaled results have a similar range as the original measurements. This calibration procedure need only be performed once for given illumination sources.
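A minimal sketch of this matrix solution is shown below. It is illustrative only: the coefficient values and file name are hypothetical, and it is not the custom MatLab program used for the thesis measurements.

    % Recover the scaled beam intensities I from the measured responses m
    % at every pixel by solving a*I = m (Eq. 4.4).
    a = [1.00 0.18 0.05;                      % hypothetical normalized
         0.12 1.00 0.15;                      % crosstalk coefficients a_ij
         0.03 0.22 1.00];
    img = double(imread('full_color.tif'));   % hypothetical full-color image
    m = reshape(img, [], 3)';                 % 3 x N matrix of RGB responses
    I = a \ m;                                % solve for all pixels at once
    sep = reshape(I', size(img));             % separated red/green/blue images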
4.3  Multiple Color Pattern Concept

The color sensor combined with the color separation technique allows three separately colored (RGB) signals to be registered simultaneously using a single camera. If a random surface pattern is illuminated in one color and a second random pattern is projected in a different color, traditional 2-D DIC can be combined with an improved speckle projection method to provide 3-D displacement information. This approach requires that the two colored patterns be created such that each provides the information needed while also causing minimal interference with the other pattern. Here, the applied surface pattern, which is used to measure the in-plane displacements, is illuminated using a green light source. Thus, the higher resolution green pixels on the sensor are used to measure two of the three directional displacements. The projected pattern can then be created using either a red or blue illumination source. While this projected image will have half the resolution of the green image, it also only provides half as much information. Therefore, the two color signals are matched, based on the amount of information provided, to the corresponding color resolutions of the sensor. In this work, the third color is left unused.

The basic approach to calculating the out-of-plane displacements using a projected speckle is similar to the method described in Chapter 3. By projecting the color speckle pattern at some oblique angle to the camera viewing axis, the apparent in-plane shifts of the projected pattern can be used to triangulate the out-of-plane surface displacements. However, to overcome the limitations associated with the established speckle projection method, a custom optical arrangement was developed.

4.4  Telecentric Speckle Projection

A custom telecentric projector was constructed to overcome the varying in-plane image size created by a conventional projector. The telecentric arrangement creates an essentially parallel projection. Figure 4.3 shows a ray diagram for the telecentric projector. A point source illumination is directed onto a diffusion plate that expands the light into a conical beam. This beam is collimated by the first lens, which directs the light through a transparent film on which a random speckle pattern had been printed. This speckled transparency forms the object of the double telecentric lens formed by the following two lenses and intermediate aperture. The near parallel rays emerging from the final imaging lens project an image that remains constant in size and focus over a significant image distance range known as the telecentric depth. The proper alignment and locations of these components are important, as misalignments can reduce the telecentric depth as well as the image quality. The designed projector creates a telecentric depth (= depth of field) of approximately 6.5 cm, allowing significant z displacements to be measured using triangulation.

Figure 4.3. Telecentric projection diagram.

For triangulation to be used, the projection system must illuminate the surface at an angle relative to the camera viewing (z) axis. One possibility is to orient the projector as shown in Figure 4.4 (a), but this arrangement creates a greatly varying distance between imaging lens and object. This makes focusing the entire projected image difficult. The available telecentric depth-of-field may or may not be sufficient for the task. Even if the telecentric depth-of-field is sufficient, likely not much field depth will remain to accommodate object motion. However, a linear diffraction grating can provide a simple solution, as shown in Figure 4.4 (b).

Figure 4.4. Creating required projector illumination angle. (a) oblique alignment, (b) parallel alignment with linear diffraction grating.

4.4.1   Advantages of Using a Linear Diffraction Grating

Variable distance between imaging lens and specimen can be eliminated, while still providing the required illumination angle, by using a linear diffraction grating, as shown in Figure 4.4 (b).
Light passing through the diffraction grating diffracts through an angle

θ = \sin^{-1}(λ / d)        (4.5)

where λ is the wavelength of the light used and d is the line spacing of the diffraction grating [37]. The outgoing diffracted light creates a parallelogram projection. Therefore, aligning the camera and projection arrangement parallel to one another ensures that the optical path lengths of the projection remain equal across the specimen surface. This arrangement fixes the incidence angle θ to be the diffraction angle used in the triangulation, defined in equation 4.5.

An efficient point source of nearly monochromatic light can be provided using a laser. Here, a λ = 405 nm blue laser and a diffraction grating (Edmund Optics, Barrington, NJ, USA) with a line spacing d = 1000 nm are used in the projection arrangement. Using these values in equation 4.5, the projection angle is 23.9°. This particular arrangement therefore has the capability of measuring z displacements within a range of approximately ±3 cm around the center of the telecentric depth. One disadvantage of using coherent light, though, is the occurrence of laser speckle, which acts as a source of noise in the DIC analysis. Therefore, the laser speckle must be sufficiently removed from each image to ensure accurate measurements of the out-of-plane displacements.
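For concreteness, the sketch below evaluates these relations numerically. It is illustrative only: the triangulation relation w = u / tan θ follows from the parallel-projection geometry described above, and the 60 micron/pixel magnification is the approximate value reported later for the validation experiments in Chapter 6.

    % Diffraction angle of the projection and the resulting out-of-plane
    % sensitivity (illustrative calculation using values from this thesis).
    lambda = 405e-9;                  % laser wavelength [m]
    d      = 1000e-9;                 % grating line spacing [m]
    theta  = asind(lambda / d)        % first-order angle = 23.9 degrees
    % An apparent x-shift u of the projected pattern corresponds to a
    % surface height change w = u / tan(theta).
    mag = 60e-6;                      % camera magnification [m/pixel]
    w_per_pixel = mag / tand(theta)   % about 135 microns of height change
                                      % per pixel of apparent pattern shift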
4.4.2  Removing Laser Speckle from Projected Patterns

When imaged, laser speckle appears as a random distribution of varying intensities. These varying intensities are a function of the surface height off which the light rays reflect [38]. This means that the laser speckle will appear to move with the specimen surface during small in-plane displacements. However, the projected speckle pattern should be insensitive to in-plane surface displacements. This creates a conflict in which two competing random patterns are apparent in the telecentric projection, as seen in Figure 4.5 (a). Subsequently, the accuracy of the DIC matching algorithms will be reduced unless the laser speckle is removed from the projected pattern images.

Figure 4.5. Laser speckle in projected pattern images. (a) before laser speckle removal, (b) after laser speckle removal.

Fortunately, laser speckles can be removed by continuously rotating the diffusion plate that initially expands the point source laser [39]. This rotation continuously varies the location of the random intensities due to the laser speckle while keeping the location of the projected pattern constant. By rotating the diffusion plate at an appropriate speed relative to the exposure time of the camera, the varying intensities created by the laser are effectively averaged out. This results in a projected random pattern with uniform background intensity, as shown in Figure 4.5 (b).

4.5 Adapted Applied Pattern

A further source of cross-talk exists between the applied (painted-on) and projected speckle patterns. Of necessity, the speckles in the painted-on pattern must have a distinctly different image intensity compared with that of the background surface so as to create sufficient contrast for effective functioning of the DIC. However, the large contrast difference arising from a traditionally painted-on black-and-white pattern imprints the painted speckle pattern onto the projected speckle pattern image. Similar to laser speckle, this significantly impedes the functioning of the DIC on the projected speckle pattern. Mathematical techniques were attempted to remove the common portions of the painted-on (green) and projected (blue) images from the measured blue image, but the procedure was not adequately effective. Therefore, an alternative approach was pursued.

4.5.1 Retro-Reflective Paint

The chosen approach uses the very small spherical glass beads that are commonly used in retro-reflective paint. These glass beads have the property of reflecting incident light back towards its source. If the green light source that illuminates the surface is placed nearly in-line with the camera viewing axis, the light that strikes the beads reflects back to the camera sensor and appears as a green speckle. However, the blue light, which is projected onto the surface at an oblique angle to the viewing axis, reflects back in the blue illumination direction and therefore is invisible to the camera. Since the off-axis reflectivity of the beads is similar to the diffuse reflection from the matte white painted surface, the bead pattern does not significantly appear within the projected speckle pattern observed by the camera.

Application of the retro-reflective beads can be done very conveniently. The glass beads, which range in diameter from 50-100 microns, are simply sprinkled onto the surface while a newly applied thin layer of white paint is still wet. The sprinkled beads naturally form a random pattern that is very suitable for DIC work. The high reflectivity of the beads in the illumination direction causes a large fraction of the incident light to return to the camera, thus allowing the use of a light source of only modest power. Here, light from a 1 watt LED is passed through a green filter to provide the appropriate illumination. Figure 4.6 shows the projected (blue image) and applied pattern (green image) after being separated from a single full-color image. These images highlight the effectiveness of the described approach in creating truly independent color signals.

Figure 4.6. Multiple color patterns. (a) full-color image, (b) separated green image of applied surface beads, (c) separated blue image of telecentrically projected pattern.

To summarize, some single-camera 3-D DIC systems have recently been developed to overcome the limitations of traditional two-camera arrangements. In this work, a single color-camera is used to simultaneously record differently colored patterns. A color calibration is included to ensure the independent separation of the two images from a single full-color image. A green surface speckle is created using small glass beads to avoid the interference caused by a painted-on pattern. By performing 2-D DIC on the green image, the in-plane displacements are measured. A second pattern in blue is projected onto the surface using a telecentric arrangement and linear diffraction grating. This arrangement overcomes the limitations associated with a traditional digital projector by providing an image that has constant illumination angle, size, and focus throughout a certain depth. A final 2-D DIC of the projected pattern with triangulation provides the out-of-plane surface displacements. Together, with the proper analysis, these two color patterns can provide full-field 3-D displacement information.

5 MULTIPLE PATTERN DIC ANALYSIS

5.1 Combined 3-D Measurements

After the color images of specimen deformation have been obtained, DIC analysis is performed on each separately colored image.
2-D DIC is used to directly acquire the in-plane (x and y) displacements from the applied surface pattern (green) images. 2-D DIC is then used on the projected pattern (blue) images, in combination with triangulation, to obtain the out-of-plane (z) displacements. However, some extra steps are needed to match the out-of-plane displacements to the corresponding in-plane locations.

As mentioned in Chapter 3, the speckle projection method requires either an initially planar specimen or the inclusion of a reference plane image. Imaging the projected pattern on a flat reference plane that is perpendicular to the viewing axis of the camera provides a fixed location in space, which represents zero surface height. By comparing images of the projected pattern taken on the object surface with the image of the pattern taken on the reference plane, it becomes possible to measure the specimen surface shape. The difference between these measured surface shapes then provides the information on out-of-plane displacements and deformations of the object surface. Another advantage of the constant size and focus of the projected pattern, created by the telecentric arrangement, is that the reference plane image need only be taken a single time, provided that subsequent images are taken with the specimen lying within the telecentric depth of the projector. This reference plane image can then be saved in memory and used in the DIC analysis of any object. This is a great advantage because the precise alignment of a perpendicular reference plane must only be carried out a single time, rather than before every measurement. Conceptually, the reference plane image can be considered as a projector calibration; it remains valid for all ongoing measurements providing that the projector internal geometry is kept fixed. Thus, it becomes a “manufacturer’s calibration”.

Finally, to obtain full-field 3-D surface displacements, the out-of-plane triangulation results must be connected to the corresponding in-plane surface locations. This can be handled by slightly modifying the “direction” and subset starting locations of the 2-D DIC process in the projected (blue) images. The flow chart in Figure 5.1 and the following example help illustrate the process. For simplicity, the projected pattern images and applied surface pattern images will be referred to as the blue and green images, respectively.

Figure 5.1. Separate color DIC analysis flow chart.

First, the blue image of the reference plane is acquired, labeled as “Blue 0” in Fig. 5.1. Then the specimen is prepared and placed within the telecentric depth. Next, “Blue 1” and “Green 1” images of the object surface before deformation are simultaneously measured. Then, after the deformation of interest has occurred, “Blue 2” and “Green 2” images of the object surface are acquired. The DIC analysis sequence of the various images is as follows:

1.  Choose an ROI within the Blue 1 image, say a subset centered at pixel (200,500). Use 2-D DIC to find the corresponding position within the Blue 0 image, say it is (400,500), which is a shift of 200 pixels in the x direction. After triangulation, this measured shift indicates the initial surface height of the specimen at the in-plane location corresponding to pixel (200,500).
The DIC is done “in reverse”, starting with the Blue 1 image, so that the surface height results refer to the pixel locations on the specimen rather than on the reference surface.

2.  Starting at the initially chosen pixel location in the Green 1 image, (200,500) in this example, use 2-D DIC to find the corresponding position within the Green 2 image, say it is at (300,700), which is a shift of 100 pixels in the x direction and 200 in the y direction. This result gives the x and y displacements of the initially chosen pixel, (200,500) in this example.

3.  Starting now in the Blue 2 image at the shifted pixel location from the Green 2 image, (300,700) in this example, use 2-D DIC to find the corresponding position within the Blue 0 image. Say it is at (600,700), a shift of 300 pixels in the x direction. After triangulation, this measured shift indicates the final surface height of the point on the specimen that was originally located at pixel (200,500). The difference of this shift and that found in step 1, 300-200 = 100 pixels here, gives the z displacement of the initially chosen pixel.

This is a general example to show the process and order of the separately colored DIC analysis. It should be mentioned that in practice, the displacements measured will not typically be an integer number of pixels. Therefore, when non-integer in-plane displacements are passed to the deformed blue images to assign the appropriate starting locations of subsets, the four nearest integer subset locations are all correlated back to the reference plane image (Blue 0). A simple bi-linear interpolation of these four results then provides an appropriately close answer for the non-integer location. However, unless very large deformations are present in the projected images, this interpolation is typically not needed. Generally, the difference between results found using the interpolation process and results obtained by simply selecting the nearest single integer location is negligible. Regardless, all experimental results were analyzed using the interpolation procedure.
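The bookkeeping of this three-step sequence can be condensed into a short sketch, shown below for a single subset. It is illustrative only: dic2d is a hypothetical helper that returns the position in image B best matching the subset centered at p in image A, and the conversion of projected-pattern shifts into heights uses w = u / tan θ, following the parallel-projection triangulation geometry of Chapter 4.

    % Separate-color analysis of Fig. 5.1 for one subset (illustrative).
    theta = 23.9;                        % projection angle [degrees]
    p0 = [200 500];                      % chosen subset center (x, y)
    q  = dic2d(blue1, blue0, p0);        % step 1: correlate "in reverse"
    z1 = (q(1) - p0(1)) / tand(theta);   % initial surface height [pixels]
    p1 = dic2d(green1, green2, p0);      % step 2: in-plane matching
    uxy = p1 - p0;                       % x and y displacements [pixels]
    q  = dic2d(blue2, blue0, p1);        % step 3: start at shifted location
    z2 = (q(1) - p1(1)) / tand(theta);   % final surface height [pixels]
    uz = z2 - z1;                        % out-of-plane displacement
    % For a non-integer p1, the four nearest integer locations are each
    % correlated to Blue 0 and the results bi-linearly interpolated.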
5.2 Perspective Errors

A change in camera perspective occurs when the object-to-lens distance changes (out-of-plane object displacement). A change in perspective between images appears as a compression or expansion of the specimen surface within the images. These changes are then calculated as inaccurate in-plane displacements during DIC analysis. The magnitude of the resulting perspective error is a function of the viewing lens, the change in object-to-lens distance, and the radial distance of each pixel from the central pixel of the sensor.

Figure 5.2 shows the in-plane displacements of a surface pattern taken of a planar specimen after a purely out-of-plane displacement (towards the camera) of 4 mm. Here, the surface pattern (green) images provide the in-plane displacement information. Ideally, a purely out-of-plane displacement should give zero measured in-plane displacements. However, the change in camera perspective produces small shifts in the in-plane measurements. Figure 5.2 shows a vector map of the observed in-plane displacements produced by the 4 mm out-of-plane movement. The vector magnitudes have been multiplied by 30 to illustrate the behavior of the perspective error more clearly. The region of interest for this measurement was purposely chosen to straddle the central pixel of the camera sensor.

Figure 5.2. Measured in-plane camera perspective errors (30x) due to 4 mm out-of-plane displacement.

Figure 5.2 indicates that the camera perspective error causes an optical effect similar to that of a uniform in-plane strain of the measured surface. Apparent surface displacements are observed that radiate from the optical center of the image. Therefore, to properly correct for the camera perspective effect, the perspective errors and real in-plane displacements must be measured independently and the central pixel of the camera must be known.

5.2.1 Extracting Perspective Errors

A major limitation of traditional single-camera systems is the inability to distinguish between actual in-plane displacements and inaccurately perceived displacements due to camera perspective error. Fortunately, the single color-camera system provides enough information to independently determine the amount of perspective error encountered and therefore allows these errors to be corrected.

The orientation of the telecentric projector causes only x-displacements of the projected (blue) speckle pattern to occur when the measured object moves in the z-direction. Any x-, y- or z-movements should not cause any speckle pattern displacements in the y-direction. However, the image stretch or compression arising from changes in camera perspective will result in a displacement of the projected image in the y-direction. Therefore, any y component measured in the projected (blue) images directly indicates changes in camera perspective.

The in-plane displacements, measured by analyzing the applied surface pattern (green) images, may contain real surface displacements and perspective errors. This means extracting the camera perspective errors from the applied pattern image is difficult unless geometrical constraints are enforced. However, because the source of perspective error is the single camera viewing lens, the amount of perspective error in both the projected pattern and applied pattern images should be equal. Therefore, the y-component of displacement obtained from the projected pattern analysis can be used to correct the results for the x-component of the projected images as well as the x- and y-components obtained from the applied pattern images, and thus remove any perspective errors from all three measured displacements.

5.2.2 Correcting Perspective Errors

After the perspective errors have been extracted from the projected images, this information can be used to correct the errors in the remaining three signals. To properly subtract these errors from each directional component of the other signals, the central pixel of the sensor must be known. Ideally, the optical center of the camera occurs at the image center. However, small discrepancies in practical lens and sensor alignment cause the optical center to deviate from the image center. Calibration is straightforward and again need only be performed once for a specific camera arrangement. Just as in the measurement depicted in Figure 5.2, to perform this calibration an object simply needs to be displaced in a purely out-of-plane direction and analyzed using 2-D DIC. The pixel location within the image that registers zero in-plane displacement corresponds to the optical center within the camera images. The x and y location of the central pixel can then be saved in memory and used in the perspective error correction routine.
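Since the perspective field radiates linearly from the optical center, this calibration can be reduced to a small least-squares fit. The sketch below is one possible realization, not the thesis routine; it assumes the uniform-scale model u = k(x - x0), v = k(y - y0) for a purely out-of-plane motion, with x, y, u, v being vectors of subset centers and their measured displacements.

    % Locate the optical center (x0, y0) from DIC results of a purely
    % out-of-plane displacement (illustrative sketch).
    px = polyfit(x, u, 1);            % fit u = px(1)*x + px(2) = k*(x - x0)
    x0 = -px(2) / px(1);              % zero crossing gives center x
    py = polyfit(y, v, 1);            % fit v = py(1)*y + py(2) = k*(y - y0)
    y0 = -py(2) / py(1);              % zero crossing gives center y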
Unless the ROI is symmetric about the optical center, there will be discrepancies in the amount of perspective error recorded in the x- and y-directions. This is due to the subset centers having different distances in the x- and y-directions relative to the central pixel. Therefore, before the perspective errors are subtracted from the measured x displacements, they must be properly adjusted for each subset's relative distance in the x-direction from the central pixel. This is a simple but crucial step in ensuring that perspective errors are removed correctly.

Finally, the results obtained from the blue images do contain a slightly higher amount of noise than the results in the green images, mostly due to the lower resolution of blue pixels in the color sensor. Therefore, so as not to propagate these small errors from the blue results to the green when correcting for perspective errors, the y-displacement results from the projected image should be smoothed in some way before being used to correct the other three measurements. In this work, the perspective data are fit to a bi-quadratic surface. This information is then used to correct the other three measurements. This error correction method works for both planar and complex shaped specimens. Also, because y displacements of the projected image are only measured in the presence of camera perspective changes, the correction algorithm does not add any error to results when perspective errors are not present. The capability and accuracy of this camera perspective correction procedure are presented in the results section of the next chapter.
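This correction can also be condensed into a short sketch. The version below is illustrative only, assuming the uniform-scale perspective model described above rather than reproducing the thesis code: xs and ys are subset-center coordinates relative to the optical center, vB is the y-displacement field of the blue images, and uG, vG, uB are the measured green x, green y, and blue x displacement fields.

    % Perspective error correction (illustrative sketch, not thesis code).
    A = [ones(numel(xs),1) xs(:) ys(:) xs(:).^2 xs(:).*ys(:) ys(:).^2];
    vFit = A * (A \ vB(:));          % bi-quadratic smoothing of blue y
    k = ys(:) \ vFit;                % scale of the radial perspective field
    uG = uG - k*xs;                  % correct green x displacements
    vG = vG - k*ys;                  % correct green y displacements
    uB = uB - k*xs;                  % correct blue x before triangulation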
In summary, the green and blue images separated from single full-color images can provide 3-D displacement information. However, care must be taken in how these displacement results are obtained using DIC analysis techniques. Traditional 2-D DIC is used on the green images to obtain in-plane surface displacements. This information is then used to assign subset locations in the blue images and ensure that out-of-plane displacement measurements correspond to the proper in-plane locations. By doing so and reversing the direction of the 2-D DIC in the blue images, the initial and final surface heights (z location) of the corresponding in-plane locations are obtained. The projected pattern analysis also provides the required information to correct for any camera perspective errors encountered during out-of-plane displacements. Together, the two separate patterns and proper 2-D DIC analysis can provide accurate full-field 3-D displacements.

6 EXPERIMENTAL VALIDATION

6.1 Experimental Setup

Figure 6.1 shows the arrangement used to validate the single color-camera concept. A custom telecentric projector is aligned parallel to the viewing axis of a single color-camera (Prosilica GC 2450, Allied Vision Technologies, Burnaby, Canada), placed approximately 65 cm from the specimen surface. The telecentric projector is arranged such that the specimen surface initially lies in the center of the projected telecentric depth. The chosen coordinate system for all measurements is as shown, with the z axis being collinear with the viewing axis of the camera. The system need not be attached to an optical table as shown. However, in the validation experiments the table provided a simple way of aligning the components as well as the specimen and reference plane.

Figure 6.1. Experimental setup (labeled components: blue laser, collimating lens, speckle transparency, rotating window, imaging lenses 1 and 2, aperture, diffraction grating, green LED, camera, and specimen).

6.2 Rigid Body Displacements

A series of experiments was carried out to show the overall functionality of the system. The basic capabilities of the single color-camera system were first demonstrated by measuring the rigid body displacements of a planar and then a non-planar specimen. For each specimen two measurements were made: a series of purely in-plane rigid body displacements, and then a series of purely out-of-plane rigid body displacements. The specimens were displaced using a precision linear actuator (CMA-25CCL, Newport) attached to a 3-axis stage.

The DIC analysis parameters were the same for each rigid body displacement measurement. A single reference plane image was taken and all projected (blue) images were correlated to this common reference image. For each experiment, a single object reference image was taken as well. All applied pattern (green) images were correlated to this object reference image. Therefore, total displacement, as opposed to incremental displacement, was measured. All images were acquired, color separated, and analyzed with a custom MatLab program. The program uses zero order shape functions in the 2-D DIC analysis. The region of interest was selected to be 400x400 pixels. Each subset was 50x50 pixels and subset centers were spaced 10 pixels apart, combining to produce a total of 1600 subsets. Each specimen was initially placed at approximately the same distance from the camera. At this camera-to-object distance, the magnification of the camera was measured to be approximately 60 microns per pixel. For certain measurements, the same images were also analyzed using a second 2-D DIC analysis program (Ncorr) that utilizes first order shape functions in the image correlation, explained further in section 6.3.2 [40].
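The core correlation step can be illustrated with a brief sketch. The version below matches a single subset at integer-pixel resolution using the zero-normalized cross-correlation (ZNCC) criterion; it is a sketch only, not the thesis MatLab program, which additionally refines each match to subpixel precision.

    % Integer-pixel ZNCC matching of one subset (illustrative sketch).
    % ref, def: grayscale reference and deformed images; (r0, c0): subset
    % center; h = 25 gives a 51x51 subset; search covers +/- s pixels.
    h = 25;  s = 20;
    f = double(ref(r0-h:r0+h, c0-h:c0+h));
    f = f - mean(f(:));  f = f / norm(f(:));
    best = -Inf;
    for dr = -s:s
        for dc = -s:s
            g = double(def(r0+dr-h:r0+dr+h, c0+dc-h:c0+dc+h));
            g = g - mean(g(:));  g = g / norm(g(:));
            zncc = f(:)' * g(:);             % correlation coefficient
            if zncc > best
                best = zncc;  uv = [dc dr];  % displacement [x y]
            end
        end
    end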
6.2.1 Planar Specimen

The first specimen was a flat aluminum plate. A single reference plane image was taken prior to any experiments being made and saved into memory. After attaching the plate to the 3-axis stage, a single object reference image was taken, and then a single deformed image was taken after each of 10 incremental displacements of 200 microns. The first series of these motions were in-plane displacements in the positive x-direction, as shown by the coordinate axes in figure 6.1.

6.2.1.1  In-Plane

The average displacements of the object surface, per 200 micron increment, are plotted in figure 6.2 (a). Because the motion of the specimen is a rigid body displacement, it is expected that each point on the surface will displace by the same amount. Therefore, the average displacements should provide adequate information on the accuracy of the system. To show the precision of the measurements, the standard deviations of the measured displacement for all 1600 subsets, per increment, are shown in figure 6.2 (b).

Figure 6.2. Planar in-plane displacement results. (a) mean measured displacements, (b) standard deviations.

The results clearly show that each signal has very good agreement with the theoretical values. The errors in the measured x displacements are all less than one percent. The maximum mean absolute error for all 10 increments in the measured y and z displacements is less than 1.5 microns and 9 microns, respectively. While the experiment is a simple rigid body displacement of a flat specimen, the results highlight some important points. The individual accuracies of the three signals prove that the green and blue signals are independently separated from the single full-color image, with minimal cross-talk. Had cross-talk between images been present, the error in the 3-D measured displacements would have been substantially higher. This simple in-plane experiment also demonstrates that the applied surface beads are an effective substitute for traditional painted-on speckle.

It is also important to point out that the standard deviations of each increment are quite low. This shows a high level of precision throughout the entire region of interest. The standard deviations in the green images for both the x and y directions are consistently between 0.02 and 0.04 pixels, corresponding to 1.2 and 2.4 microns respectively. The standard deviations of the measured z displacements, obtained from the blue images, are consistently twice those of the x and y measurements taken from the green images. This is somewhat expected, though. The blue images are acquired with half as many pixels as the green images. Also, the sensitivity of z displacement measurements is directly related to the angle of illumination, which in this case is approximately 24 degrees. This means the sensitivity of the out-of-plane measurements is nearly half that of the in-plane measurements.

6.2.1.2  Out-of-Plane

The in-plane measurement of the planar specimen showed that the two separately colored signals can be independently extracted from a full-color image, but the previous measurement could just as easily have been made using a traditional 2-D DIC single-camera arrangement. To demonstrate the out-of-plane capabilities of the system, the same planar specimen was displaced incrementally in the positive z direction (towards the camera). Again, 10 incremental displacements of 200 microns were applied to the specimen. A single deformed image was taken after each increment. Similar to the previous measurement, the mean displacements and standard deviations of the ten out-of-plane increments are plotted in figure 6.3 (a) and (b), respectively.

Figure 6.3. Planar out-of-plane displacement results. (a) mean measured displacements, (b) standard deviations.

The measurements again are shown to be quite accurate. After the entire 2 mm out-of-plane displacement, the mean measured z displacement is 1.978 mm, an error of just over one percent. This is the largest percent error in the measured z displacements. The mean absolute errors for the x and y displacements increase with each increment, and the maxima are nearly 17 and 50 microns, respectively. The standard deviations for each measurement increment also increase with increasing out-of-plane displacement. The difference between the blue and green standard deviations is again nearly a factor of two. However, for the out-of-plane measurement, all three signals' standard deviations seem to increase at a nearly constant and similar rate, with the standard deviations at the final increment being above 0.25 pixels in all three signals. These relatively high standard deviations suggest that a similar apparent deformation is occurring in all three dimensions.

These small errors in the measured mean displacements and the high standard deviations are mainly due to camera perspective effects. Figure 6.4 shows the same mean displacement results as in Figure 6.3 (a). However, the measured z displacements, acquired from the x displacements in the projected images, have been replaced by the measured displacements acquired from the y component of the projected images. This information should indicate the amount of measured perspective error. The scale of this plot more accurately depicts the small errors in the measured in-plane displacements.
This plot also shows the similarities between these displacements and the measured perspective errors.

Figure 6.4. Measured perspective errors of planar specimen during out-of-plane displacement.

The plot in Figure 6.4 shows that the measured y displacements and the measured perspective errors are very similar. This makes sense, as the measured perspective errors are obtained from the y displacements of the projected image. Thus, if the in-plane errors are due entirely to changes in camera perspective, these two measurements should be identical. However, the measured x displacements are significantly lower. This difference in the amount of perspective error is due to the location of the ROI relative to the central pixel of the sensor. In this case the ROI was closer to the vertical axis of the camera sensor than the horizontal axis, thus the perspective errors in the x direction are lower. Fortunately, knowing the amount of the perspective error in a single direction is sufficient for error correction, so long as the central pixel is known and the relative location of each subset center is recorded (which they are, as this is precisely the DIC displacement information).

Figure 6.5 shows the mean displacements and standard deviations of the same ten out-of-plane displacements after perspective error correction. The new final mean displacement in the z direction is now 2.009 mm, an error of less than one half of a percent. All ten increments in the z direction show this level of accuracy, if not better, after correction. The mean absolute error in the measured z displacements is less than 10 microns for each increment. The maximum mean absolute errors in the measured x and y displacements are now approximately 1 and 3 microns, respectively. This is a reduction of over 90% in both cases.

Figure 6.5. Perspective corrected out-of-plane results for planar specimen. (a) corrected mean displacements, (b) corrected standard deviations.

The standard deviations of the corrected results show that after the perspective error has been removed, the 1600 measurements at each increment become very consistent. The standard deviations from the blue images are now all approximately 0.10 pixels (= 6 microns) and the deviations in the green images are consistently less than half that, at 0.05 pixels (= 3 microns). These results agree well with the previous in-plane measurement, which suggests that the correction method effectively removes nearly all associated camera perspective errors.

The corrected out-of-plane results highlight the ability of the projected image to accurately measure the third dimension of displacement and also to provide the information required to correct for any associated camera perspective errors, two qualities a traditional single-camera 2-D DIC arrangement cannot offer. All further measurement results have been adjusted using the perspective error correction method.

6.2.2 Non-Planar Specimen

The previous two measurements show the capabilities of the system to accurately measure the displacements of a planar specimen in both the in-plane and out-of-plane directions. The next set of measurements was conducted to show that the system can maintain similar accuracy and precision when the object of interest is not a flat specimen, a case that presents difficulties for other single-camera 3-D DIC approaches.

Figure 6.6. Cylindrical specimen.

The measurements are conducted in exactly the same manner as the previous experiments, with the only difference being the specimen used.
The flat plate was replaced with a segment of cylindrical rod (mean radius = 1.5 inches = 38.1 mm), as shown in figure 6.6. All other measurement parameters were unchanged. The reference plane image used in the previous experiment was used for the following measurements as well. Again, a single full-color reference image was taken before any displacements, and then a single full-color deformed image was taken after each incremental displacement of 200 microns. The blue and green images were then separated from the full-color images.

6.2.2.1 In-Plane

Figure 6.7 (a) and (b) show the average displacement and standard deviation results for the 10 incremental displacements of the cylindrical specimen in the positive x direction. The largest error in the 10 measured x displacements occurs at the 8th increment and equals 20.0 microns, an error of 1.25 percent. The largest measured average displacements for any increment in the y and z directions are 0.02 microns and 0.10 microns, respectively, which are relatively small deviations from the expected values of zero in each direction. This shows that the non-constant surface heights associated with the non-planar specimen can be accurately accounted for during in-plane displacements using the methods outlined in Chapter 5.

The standard deviations of the in-plane information, obtained from the green images, are again consistent at approximately 0.04 to 0.05 pixels. However, the standard deviations of the ten measurements obtained from the blue images are relatively high. The deviations plateau just above 0.50 pixels, corresponding to 30 microns, an unacceptably high standard deviation for rigid body displacements. As the applied displacements are purely in-plane, this cannot be attributed to any issue such as camera perspective error. It is instead attributed to the DIC analysis technique used.

Figure 6.7. Cylindrical in-plane displacement results using zero-order shape functions. (a) mean measured displacements, (b) standard deviations.

These results were obtained using zero-order shape functions in the DIC analysis of the image sets, as explained in Chapter 2. This analysis approach is not ideally suited to accurately characterize the deformation of the projected pattern when measuring non-planar specimens. The deformed images of the projected pattern are all referenced back to the image of the projected pattern as it appears on a reference plane. Therefore, when the projection falls on a non-flat specimen, the image will appear to have undergone relative strains. In the case of a vertical cylinder, subsets of the projected pattern will appear to encounter compression and tension, both in the x direction, depending on the relative surface slope of the specimen. Thus, higher order shape functions are preferable to calculate the displacement of each subset in the projected pattern images.
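The distinction can be written out explicitly. In the standard first-order formulation (textbook notation, restated here for reference), a point at offset (\Delta x, \Delta y) within a subset centered at (x, y) maps to

x' = x + \Delta x + u + \frac{\partial u}{\partial x}\,\Delta x + \frac{\partial u}{\partial y}\,\Delta y

y' = y + \Delta y + v + \frac{\partial v}{\partial x}\,\Delta x + \frac{\partial v}{\partial y}\,\Delta y

whereas zero-order shape functions retain only the translations u and v, so that every subset is assumed to move as an undeformed rigid square. The gradient terms are what allow a subset to stretch or shear to follow the apparent straining of the projected pattern on a sloped surface.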
To show that the errors are due to the analysis method as opposed to the projected pattern approach, the same images were analyzed using Ncorr, an open-source 2-D DIC software package that utilizes first order shape functions [40]. Comparative tests were conducted first to ensure that the two different analysis methods gave similar results when no subset deformation was present. The results obtained from the Ncorr software using the same images, subset sizes, and ROI are plotted in Figure 6.8.

Figure 6.8. Cylindrical in-plane displacement results using first-order shape functions. (a) mean measured displacements, (b) standard deviations.

The mean displacements as measured using the first order shape functions are very similar to those measured using zero order shape functions, and show a very high degree of accuracy. The mean displacements are similar using both techniques because the symmetry of the part results in the small errors arising from zero order shape functions being averaged out in the measured mean displacement values. Using the more appropriate first order shape functions, though, the standard deviations of the measured z displacements are greatly reduced. The first order shape functions more accurately calculate the final locations of the subsets toward the edges of the cylinder, which encounter relatively large deformation. To help illustrate this point, figure 6.9 shows the measured surface heights (z displacements) at the final position of the cylinder using both analysis techniques. The figure clearly shows that the first-order shape functions can more accurately determine the apparent deformations of the projected image and thereby more accurately measure specimen shape as well as out-of-plane displacements and deformations.

Figure 6.9. Comparison of cylindrical shape measurement. (a) zero order shape function analysis, (b) first order shape function analysis.

6.2.2.2 Out-of-Plane

The final rigid body displacement experiment is that of the same cylindrical surface, now displaced 10 times in the direction of the camera (+z axis). The experimental and analysis parameters are left unchanged. Only the results obtained using first-order shape functions are presented. The measurement was conducted to show the ability of the system to measure the out-of-plane displacements of a non-planar specimen, while still accurately compensating for the camera perspective errors, which are not constant across the entire image because of the 3-D geometry of the specimen.

Figure 6.10. Cylindrical specimen out-of-plane displacement results. (a) mean measured displacements, (b) standard deviations.

The results presented in figure 6.10 show that the system can accurately measure the out-of-plane displacement of a non-planar specimen. The largest error for the entire measurement occurs at the final increment in the measured z results. The result is 16.2 microns smaller than the expected 2000 microns, an error of less than one percent. After perspective correction, the mean absolute errors for the x and y measurements are less than 4 microns for all ten increments, and the standard deviations are very similar to those found in the previous measurements.

Figure 6.11. Average shape measurement accuracy of cylindrical surface.

Finally, the measured surface shapes determined from the projected image are plotted against the theoretical values of the surface shape and location. The mean surface shape was calculated, per increment, by taking the average of the 40 columns of subsets, because the cylinder was aligned vertically. This allows the mean surface shape to be plotted as a single line. The results from the first and last increments are shown in Figure 6.11. The results show that by correlating the projected images back to the image of the projected pattern on a reference plane, the system can very accurately determine specimen shape. The maximum error between the mean measured surface shapes and the theoretical values is less than one percent for all increments.
The level of shape measurement accuracy shown at the final increment further indicates that the camera perspective correction is effective.

A more thorough assessment of the shape measurement accuracy was also made by comparing all subsets to the theoretical values of a cylindrical surface, as opposed to mean shape values. These results showed slightly lower agreement, but it was found that after rotating the theoretical cylinder surface about the x axis by approximately one degree, the error reduced to less than one percent for the entire measurement. This indicates that the initial alignment of the cylindrical specimen was not perfect and suggests that the method measures specimen shape quite accurately.

6.3 3-D Deformation

The final experiment conducted to validate the single color-camera system was that of true 3-D deformation. Images were acquired during the compression of a small rubber block. It was not possible with the limited facilities available to make an independent measurement of the specimen deformation, nor to make a theoretical determination of the highly non-linear behavior. Therefore, while the previous measurements were intended to show system accuracy and precision, this measurement was intended to highlight the more general ability of the system to measure complex in-plane and out-of-plane deformations simultaneously.

6.3.1 Experimental Setup

The images in figure 6.12 illustrate the experimental arrangement and procedure. A small rectangular rubber block, 21.5x21.5x11.5 mm, was axially compressed in a small machine-vise so as to cause the block to buckle in the out-of-plane direction. The resulting deformation combined axial compression and out-of-plane bending, the latter forming an anticlastic surface due to the effect of Poisson's ratio. The specimen was compressed in a sequence of 10 increments, taking a single image after each added compression. After the 10th incremental compression the specimen split and dislodged from the vise, thus only 9 increments provided useful images. Figure 6.12 (b) and (c) show the overhead view of a similar specimen before any applied compression and just before failure.

Figure 6.12. 3-D deformation experiment. (a) experimental setup, (b) specimen before deformation, (c) specimen after deformation.

The fixed and moving arms of the vise were also speckled and analyzed using DIC to quantify the amount of applied compression. Some of the acquired images are shown in figure 6.13. The images in the top row of the figure are the separated green images of the applied surface pattern from the first, fifth, and ninth increments. The bottom row images are the corresponding blue images of the projected speckle pattern. For this measurement a 300x300 pixel ROI was used, representing a real surface area of approximately 1.8 cm square, as illustrated by the red box in figure 6.13 (a). Subsets were spaced 6 pixels apart, thus a total of 2,500 subsets were analyzed per increment.

Figure 6.13. Separated color images of 3-D deformation experiment. (a) green image 1, (b) green image 5, (c) green image 9, (d) blue image 1, (e) blue image 5, (f) blue image 9.

The images in Figure 6.13 provide some insight into the deformation encountered by the rubber specimen. The surface pattern images (top row) show that the specimen surface takes on an hourglass shape, suggesting that the specimen surface is bending in the out-of-plane direction.
However, it is difficult to tell from these 2-D images how large this out-of-plane deformation may be. In general, this is the major limitation of a traditional 2-D DIC arrangement. However, while visual assessment of the projected pattern images is less intuitive, the extra out-of-plane information provided after DIC analysis is clear.

6.3.2 Results

The surface plots in figure 6.14 represent the 3-D surface displacements of the rubber specimen. The independently measured x, y, and z displacements have been combined to show the measured 3-D locations of each subset, for all 9 increments. The origin was chosen to be located at the bottom left corner of the ROI.

Figure 6.14. Measured 3-D surface displacements. (a) orthogonal view, (b) bottom view, (c) side view.

The measured amount of applied compression at the 9th increment was 5.5 mm. However, the maximum average reduction in size with respect to the x dimension on the surface was only 1.4 mm. This effect is caused by the bending of the specimen. The vise compressed only the back part of the specimen, causing bending to occur. This bending caused the front part of the specimen to bend outward and expand relative to the back portion. This relative expansion reduced the length change of the front surface from 5.5 mm to the observed 1.4 mm. This effect can be seen in figure 6.14 (b). Had the specimen not been placed partially in front of the vise, the left-hand side of all nine surface measurements would lie in the same location with regard to the x axis. The surface plots also show that the specimen is expanding slightly in the y direction. This is expected due to the compression in the x direction and the associated Poisson's effect.

Figure 6.15. Contour plot of measured surface height at final increment.

Finally, the results show that the system was able to characterize the bi-axial deformation of the surface due to bending, clearly seen in figure 6.14 (c) and in the contour plot shown in figure 6.15. This indicates that the system can capably detect complex specimen shapes and out-of-plane deformations using first-order shape functions and 2-D DIC analysis. The locations of the largest out-of-plane displacements also point to the areas of highest stress concentration. At these positions, a total out-of-plane displacement of nearly 4 mm is measured. This is a large displacement that, without the projected pattern images, would go undetected. Also, without perspective error correction, the in-plane results would have been significantly influenced by this relatively large displacement in the direction of the camera. The experimental results show that the system and analysis methods used can effectively and simultaneously measure complex 3-D deformations.

In summary, the system shows good accuracy and precision for displacements and deformations in all three dimensions. The surface pattern created using the small glass beads proves to be an effective replacement for a traditional painted-on surface pattern. This adapted applied pattern also effectively removes any cross-talk between the applied and projected patterns and seems to introduce little to no measurement error, even in relatively large displacements. The results also show that the other steps taken to ensure the two patterns are independent, separable, and low in noise are effective.
The improved telecentric speckle projection approach provides precise specimen shape and out-of-plane displacement measurements, as well as the information to effectively remove any camera perspective errors from all three signals. The approach is consistently accurate and precise in measuring the 3-D displacements of both planar and non-planar specimens. The system also proved effective in simultaneously measuring complex 3-D deformations. The experimental results validate the ability of the approach to accurately and simultaneously measure 3-D displacements and deformations using a single color-camera, structured light, and straightforward 2-D DIC analysis techniques.

7 CONCLUSIONS AND FUTURE WORK

7.1 Conclusions

A system was developed here to make 3-D DIC measurements using a single color-camera. The approach combines speckle projection and 2-D DIC by creating the two patterns in different colors. The two patterns are then separated using a color camera sensor and used to measure specimen shape, in-plane and out-of-plane displacements, as well as changes in camera perspective. The prototype apparatus was validated using known displacements of planar and non-planar specimens in both the in-plane and out-of-plane directions. The displacement results obtained were in good agreement with the applied displacements in all cases. Further, a rubber specimen was forced to undergo complex 3-D deformations, and the results obtained show the system was successful in simultaneously measuring these deformations.

A modified surface speckle pattern was developed. Here, the glass beads commonly found in retro-reflective paint were used to create the required random surface pattern. The beads proved to be an excellent replacement for a traditionally painted-on speckle pattern, providing substantial independence between the applied and projected patterns. The amount of time needed to apply the beads to the specimen surface was comparable to the time needed to paint a specimen. Also, retro-reflective beads are available in various sizes, which allows the technique to be scaled. The reflective surface pattern illuminated with green light provided measurements of in-plane displacements found to be consistently accurate to within 0.04 pixels.

An improved speckle projection technique was developed. By designing and creating a custom telecentric projector, the projected speckle image remains constant in image size and focus throughout a relatively large depth of field. The combination of the telecentric projector and a linear diffraction grating created a configuration in which the illumination angle was constant for the entire projected image and the telecentric depth was maximized. These qualities help to overcome the limitations of the previously established speckle projection method and ensure efficient and accurate DIC subset matching. By correlating the acquired projected images back to an image of the pattern on a reference plane, the projected images were used to provide accurate specimen shape measurements as well as out-of-plane surface displacements. Further, a technique was developed to match the measured out-of-plane displacements with the corresponding in-plane locations, allowing the system to measure complex 3-D deformations.

The use of a telecentric projector introduces some limitations. The maximum image size that may be projected telecentrically equals the diameter of the final imaging lens in the projection arrangement.
Consequently, the single color-camera approach developed here may not be practical for measurements of large areas, as it would require an equally large projection lens. However, there may be a suitable alternative which would allow for a large yet inexpensive projection lens. This possible alternative is discussed in the future work section. Also, for very large out-of-plane displacements or large variations in specimen surface height, portions of the incoming projection may be blocked. This creates shadows where no out-of-plane information will be available. This effect can be recognized in the projection images shown in Figure 6.13. In that measurement the shadow did not encroach on the ROI and thus was not an issue. For certain specimens or measurements, though, the shadow effect may cause problems. A similar limitation is found in stereovision arrangements as well: 3-D measurements can only be obtained for locations on the specimen surface that both cameras can see, and the cameras must be arranged accordingly. Avoiding shadows in the projection arrangement can be handled similarly. By selecting an appropriate wavelength of laser light or line spacing in the diffraction grating, the angle of illumination can be adjusted to suit measurement requirements.

The projected pattern was also used to create a method for obtaining and correcting camera perspective errors. By arranging the diffraction grating such that the angle of illumination was only relative to the y-z plane, the apparent displacements of the projected image were limited to the x direction. This meant any measured displacement of the projected image in the y direction was due entirely to changes in camera perspective. This information was then used to remove the associated errors from the measured x, y, and z displacements. Results showed that the perspective correction method removed almost all of the errors associated with out-of-plane displacements for both planar and non-planar specimens. In general, the information from the projected pattern was consistently accurate to within 0.10 pixels. This slightly lower accuracy is mainly due to the lower spatial resolution of blue pixels in the camera sensor as compared to the resolution of the green pixels. The reduced accuracy of the blue images is a limitation of the approach when a Bayer sensor is used to acquire images. However, recently developed sensors offer color sensitivity at every pixel at a comparable price to Bayer style filters. If such a sensor were used, it is expected that the accuracy of the green and blue images would improve by factors of 2 and 4, respectively.

7.2 Future Work

The major limitation of the single color-camera approach presented here is the practical difficulty and increased cost associated with measuring relatively large specimens, due to the size limitation of telecentrically projected images. Therefore, it would be advantageous to investigate a way to overcome this issue. One possible solution is to create a telecentric projector using Fresnel lenses. Fresnel lenses would provide an inexpensive and practical way to create very large telecentrically projected patterns, but at the expense of significantly inferior image quality. The other components in the telecentric projector could essentially remain the same, apart from the need for a larger diffraction grating.
7.2 Future Work

The major limitation of the single color-camera approach presented here is the practical difficulty and increased cost associated with measuring relatively large specimens, due to the size limitation of telecentrically projected images. It would therefore be advantageous to investigate a way to overcome this issue. One possible solution is to build a telecentric projector using Fresnel lenses. Fresnel lenses would provide an inexpensive and practical way to create very large telecentrically projected patterns, though at the expense of significantly inferior image quality. The other components of the telecentric projector could remain essentially the same, apart from the need for a larger diffraction grating. If Fresnel lenses can adequately create the required projected images, the single-camera system could be used to measure quite large specimens, greatly increasing its potential applications.

For the angle of illumination to be exactly equal to the angle of diffraction, the camera and telecentric projector must be parallel to one another. While misalignments between the projector and camera can easily be handled in post-processing, it would be advantageous if the camera and projector were in fixed locations relative to one another. A combined camera-projector arrangement would also allow the system to become portable. If a portable unit were constructed, a reference plane image would first need to be taken. After that, only two requirements would need to be met before the system could make accurate measurements. First, the unit would need to be placed within a certain range of the specimen surface, such that the specimen lies somewhere within the telecentric depth of the projection. Second, the camera would need to be focused and the magnification ratio found. A portable unit would further reduce the already minimal calibration required by the system, make 3-D measurements time- and cost-effective, and could open the door to many new applications for the single color-camera system.
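As a note on the magnification requirement, the ratio can be found from a single image of a target of known size placed on the reference plane. A minimal sketch, assuming a hypothetical one-point calibration helper:

    def magnification_mm_per_px(known_length_mm, measured_length_px):
        # One-point calibration: physical length of a scale feature on the
        # reference plane divided by its measured image length in pixels.
        return known_length_mm / measured_length_px

    # Example (illustrative values): a 50 mm scale bar spanning 1250 pixels
    # gives a magnification ratio of 0.04 mm per pixel.
    mag = magnification_mm_per_px(50.0, 1250.0)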
