DEFOCUSED SPECKLE IMAGING FOR REMOTE SURFACE MOTION MEASUREMENTS

by

Juuso Heikkinen

B.Sc. (Tech.), Tampere University of Technology, 2013
M.Sc. (Tech.), Tampere University of Technology, 2016

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Mechanical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

February 2021

© Juuso Heikkinen, 2021

The following Examining Committee certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled Defocused Speckle Imaging for Remote Surface Motion Measurements, submitted by Juuso Heikkinen in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mechanical Engineering.

Examining Committee:

Gary S. Schajer, Mechanical Engineering, UBC (Supervisor)
Robert Rohling, Electrical and Computer Engineering & Mechanical Engineering, UBC (Supervisory Committee Member)
Kirk W. Madison, Physics & Astronomy, UBC (Supervisory Committee Member)
James Little, Computer Science, UBC (University Examiner)
Peter A. Cripton, Mechanical Engineering & Medical and Biomedical Engineering, UBC (University Examiner)

Additional Supervisory Committee Members:

Boris Stoeber, Electrical and Computer Engineering & Mechanical Engineering, UBC (Supervisory Committee Member)

Abstract

Defocused Speckle Imaging (DSI) is an optical method where a laser source illuminates a rough object surface, and a defocused camera records the scattered interference speckle pattern that characterizes the surface. The speckle pattern appears to move if the object displaces or rotates. Speckle motion tracking thus enables non-contact surface motion measurements. The observed speckle motion magnitude increases with distance, which makes DSI particularly attractive for remote measurements. As the camera focal plane position controls the effective sampling distance, measurement sensitivity can be tuned by simple camera defocus adjustment. However, despite its great potential, DSI has not been previously utilized for measurements at large distances. This is because the observed speckle motions are influenced by both surface displacements and rotations, and because the measurement sensitivity depends on geometric parameters that are challenging to extract in field conditions.

This thesis first presents a geometric Speckle Hemisphere Model to allow easy visualization of the speckle phenomenon. The thesis next proposes an optimum approach to separate linear and rotational speckle motion components using a simple combination of two cameras focused at different distances. Finally, the thesis presents a measurement self-calibration principle by combining multi-wavelength laser illumination with speckle pattern diffraction analysis to determine geometric distance and angle parameters directly from the captured speckle patterns.

A set of experimental measurements validates the Speckle Hemisphere Model and illustrates the general sensitivity characteristics of DSI; at low sampling distances, measurement is mostly sensitive to in-plane displacements, whereas large sampling distances have much higher relative tilt sensitivity. Multiaxial motion experiments performed at 4–16 meters demonstrate the method's suitability for large distances. The self-calibration principle validation shows capability to determine sampling distances and oblique surface angles up to 45° at high accuracy (1.7% and 0.7°).
The final study presents self-calibrated surface motion measurements performed at a 30.7-meter distance, with surface angles of 2.5–7.4°. The dual-camera configuration can effectively determine the sampling distances (6.4%) and the surface angles (0.2°). The speckle motions resulting from microscopic in-plane displacements (400 µm) and very fine tilt motions (0.003°) are tracked robustly at high accuracy (6.0%).

Lay Summary

Defocused Speckle Imaging (DSI) is an optical method where a laser source illuminates a rough object surface, and a defocused camera records the scattered light. The resulting image contains a speckle pattern that appears to move if the object displaces or rotates. Surface motion can thus be determined by tracking speckle movements. DSI is attractive for remote measurements because its sensitivity increases with measurement distance. The effective recording distance can be changed simply by defocusing the camera on purpose. This thesis presents a simple geometric model to describe DSI characteristics and proposes an arrangement that uses a combination of two cameras focused at different distances. This setup can measure surface displacements and rotations simultaneously, and it can also extract important calibration parameters without additional sensors. The experimental demonstration shows that the method can measure microscopic surface movements, object distances and surface angles from tens of meters away at high accuracy and repeatability.

Preface

The research presented in this PhD thesis is original work carried out by the author, Juuso Heikkinen, under the supervision of Professor Gary Schajer. The author was responsible for major areas of concept formation, conducted all experiments and drafted all manuscripts and presentations. Professor Schajer supervised the research and provided comprehensive feedback on the manuscripts and presentations. All research was conducted in the Renewable Resources Laboratory at The University of British Columbia, Vancouver, Canada.

This project began with an accidental discovery in the laboratory. When a laser-illuminated surface was imaged by a camera that happened to be defocused, the resulting image was not blurred but instead contained a strong intensity pattern with distinctive speckles of varying brightness. Furthermore, when the illuminated object was displaced or rotated, the speckles appeared to move in the acquired images much faster than the physical object itself. This puzzling behavior motivated the author to investigate the phenomenon in more detail. The research has resulted in the following publications:

• The theory in Chapter 2, Section 2.3, along with the experiments of Chapter 5, has been published in Optics and Lasers in Engineering as: Heikkinen J, Schajer G. A Geometric Model of Surface Motion Measurement by Objective Speckle Imaging. Optics and Lasers in Engineering 2020;124:105850.

• The theory in Chapter 3, Sections 3.4–3.6, along with the experiments of Chapter 6, has been published in Optics and Lasers in Engineering as: Heikkinen J, Schajer G. Remote Surface Motion Measurements Using Defocused Speckle Imaging. Optics and Lasers in Engineering 2020;130:106091.
• The theory in Chapter 4, Section 4.3, along with the experiments of Chapter 7, was presented by the author at the Society for Experimental Mechanics 2020 Annual Conference and Exposition on Experimental and Applied Mechanics under the title "Remote Surface Motion Measurements using Multi-Wavelength Defocused Speckle Imaging." The presentation was awarded 1st place in the 30th annual Michael Sutton International Student Paper Competition.

• The contents of Chapter 8 are in preparation for submission for publication.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Symbols
List of Abbreviations
Acknowledgements

Chapter 1: Introduction
1.1 Importance of Motion Measurements
1.2 Need for Remote Measurements
1.3 Basics of Optical Methods
1.3.1 Feature Tracking Methods
1.3.2 Interferometric Motion Measurements
1.4 Basics of Speckle Imaging
1.5 Geometric Aspects of Speckle Imaging
1.6 Defocused Speckle Imaging
1.7 Internal Properties of Speckle Patterns
1.8 Speckle Imaging Applications
1.9 Limitations of Speckle Imaging
1.10 Thesis Motivation and Objectives
1.11 Summary
1.12 Thesis Outline

Chapter 2: Geometric Representation of Speckle Imaging – Speckle Hemisphere Model
2.1 Overview of Key Literature
2.1.1 First Observations on Speckle Phenomenon
2.1.2 From Speckle Photography to Speckle Imaging
2.1.3 Existing Speckle Imaging Models
2.1.4 Limitations of Existing Models
2.1.5 Object Motion vs. Surface Motion
2.2 Motivation for an Improved Speckle Imaging Model
2.2.1 Ideal Model Characteristics
2.2.2 Proposed Approach
2.3 Speckle Hemisphere Model
2.3.1 Model Assumptions
2.3.2 Geometrical Arrangement
2.3.3 In-plane Displacement dx
2.3.4 Phase Correction
2.3.5 In-plane Displacement dy
2.3.6 Out-of-plane Displacement dz
2.3.7 Out-of-plane Rotation ωy
2.3.8 Out-of-plane Rotation ωx
2.3.9 In-plane Rotation ωz
2.3.10 Combined Object Motions
2.3.11 Speckle Decorrelation
2.4 Conclusion

Chapter 3: Remote Surface Motion Measurements Based on Defocused Speckle Imaging
3.1 Basics of Image Formation
3.1.1 Thin Lens Model
3.1.2 Image Scale
3.2 Defocus
3.2.1 Cause of Defocus
3.2.2 Blur in the Object Space
3.2.3 Blur in the Image Space
3.3 Phase Aspects of Image Formation
3.4 Interpretation and Characteristics of Defocused Speckle Imaging
3.5 Defocused Speckle Imaging Sensitivity Equations
3.6 Complex Object Motion with Multiple Degrees of Freedom
3.7 Conclusion

Chapter 4: Statistical Speckle Pattern Analysis
4.1 Background
4.2 Speckle Size
4.2.1 Interferometric Interpretation of Objective Speckle Size
4.2.2 Diffraction-Limited Spot Size
4.2.3 Speckle Size in Subjective Speckle Imaging
4.2.4 Speckle Size in Defocused Speckle Imaging
4.2.5 Speckle Size and Shape vs. Geometry in Defocused Speckle Imaging
4.2.6 Challenges
4.3 Diffraction View of Speckle Imaging
4.3.1 Operating Principle of a Reflection Diffraction Grating
4.3.2 Speckle Pattern as a Diffraction Pattern
4.3.3 Speckle Pattern Wavelength Dependency
4.3.4 Diffraction-Based Measurement Calibration
4.3.5 Speckle Offset vs. Speckle Size as a Range Metric
4.3.6 Further Comments
4.4 Conclusion

Chapter 5: Sensitivity Characteristics of Objective Speckle Imaging
5.1 Experimental Measurements
5.1.1 Measurement Setup
5.1.2 Measurement Procedure
5.1.3 In-plane Displacement Measurements
5.1.4 Out-of-plane Tilt Measurements
5.1.5 In-plane Rotation Measurements
5.1.6 Visualization of Rotating Speckle Field
5.2 Discussion
5.3 Conclusion

Chapter 6: Sensitivity Characteristics of Defocused Speckle Imaging
6.1 Uniaxial Object Motion Measurements
6.1.1 Uniaxial Motion Measurement Procedure
6.1.2 Uniaxial Motion Measurement Parameters
6.1.3 Connection Between Objective and Defocused Speckle Patterns
6.1.4 Defocused Speckle Pattern Characteristics
6.1.5 Defocused Speckle Size vs. Sampling Distance
6.1.6 Measurement Sensitivity Characteristics
6.2 Complex Object Motion Measurements
6.2.1 Complex Motion Measurement Procedure
6.2.2 Complex Motion Measurement Parameters
6.2.3 Separating In-plane Displacements from Out-of-plane Tilts
6.3 Discussion
6.4 Conclusion

Chapter 7: Geometric Calibration Principle Based on Speckle Pattern Diffraction Analysis
7.1 Laser Characterization Procedure
7.2 Characterization Results
7.3 Speckle Offset Measurement Principle
7.4 Speckle Offset Measurement Results
7.5 Determining Sampling Distance and Relative Surface Angle
7.6 Discussion
7.7 Conclusion
Chapter 8: Self-calibrated Remote Surface Motion Measurements
8.1 Experimental Arrangement
8.2 Experimental Parameters
8.3 Laser Beam Waist Adjustment Procedure
8.4 Motion Measurement Procedure
8.5 Speckle Motion Tracking Results
8.6 Speckle Motion Measurement Accuracy
8.7 Diffraction Analysis Procedure
8.8 Diffraction Analysis Results and Accuracy
8.9 Geometric Calibration
8.10 Estimated Surface Motions
8.11 Measurement Accuracy vs. Increment Size
8.12 Macroscopic Object Tilt Measurements
8.13 Discussion
8.14 Conclusion

Chapter 9: Conclusion
9.1 Thesis Summary and Impact
9.2 Future Work
9.2.1 Modeling Aspects
9.2.2 Technical Aspects
9.2.3 Full-field Aspects
9.3 Final Words

Bibliography
Appendix: Interferometric Laser Characterization Principle

List of Tables

Table 2.1 Objective Speckle Imaging sensitivity equations.
Table 3.1 Defocused Speckle Imaging sensitivity equations.
Table 5.1 Applied total and incremental object motion magnitudes.
Table 5.2 Studied source and imaging distances and angles.
Table 6.1 Imaging system parameters for the uniaxial measurements.
Table 6.2 Geometric parameters for uniaxial motion measurements.
Table 6.3 Geometric parameters for complex motion measurements.
Table 6.4 Applied surface motions, observed speckle displacements and computed …
Table 7.1 Details of the studied laser sources, along with the analysis results.
Table 7.2 Geometric calibration test results.
Table 8.1 Illumination hardware parameters.
Table 8.2 Imaging hardware parameters.
Table 8.3 Motion parameters for the main analysis.
Table 8.4 Speckle motion tracking accuracy and repeatability.
Table 8.5 Autocorrelation outer side-peak separation accuracy and repeatability.
Table 8.6 Sampling distance estimation accuracy and repeatability.
Table 8.7 Illumination distance estimation accuracy and repeatability.
Table 8.8 Accuracy and repeatability of the estimated sampling and illumination angles.
Table 8.9 Accuracy and repeatability of the estimated surface motions using the estimated …
Table 8.10 Accuracy and repeatability of the estimated surface motions using the actual …

List of Figures

Figure 1.1 Motion analysis examples based on attached optical markers.
Figure 1.2 Spray-painted random dot pattern applied on an object surface.
Figure 1.3 Digital Image Correlation (DIC) tracking principle.
Figure 1.4 Interferometric motion measurement principle.
Figure 1.5 Speckle formation principle.
Figure 1.6 Laser speckle pattern captured by a digital camera sensor.
Figure 1.7 (Left) Sunlight reflected from disco ball surface mirrors. (Right) Illustration of disco ball reflection pattern movements in response to surface rotation.
Figure 1.8 Image formation in a defocused camera.
Figure 1.9 Defocused speckle pattern with duplicated speckles generated under multi-mode laser illumination.
Figure 2.1 Diffuse object surface is modeled as a collection of randomly oriented mirrors.
Figure 2.2 Speckle hemisphere formation.
Figure 2.3 Speckle Imaging sensitivity on surface in-plane dx-displacements.
Figure 2.4 Illumination and observation path length variations across the illuminated spot.
Figure 2.5 Speckle Imaging sensitivity on surface in-plane dy-displacements.
Figure 2.6 Speckle Imaging sensitivity on surface out-of-plane dz-displacements.
Figure 2.7 Speckle Imaging sensitivity on surface out-of-plane rotations about the y-axis ωy.
Figure 2.8 Speckle Imaging sensitivity on surface out-of-plane rotations about the x-axis ωx.
Figure 2.9 Speckle motion field resulting from object in-plane rotation about the z-axis ωz.
Figure 2.10 Speckle hemisphere center of rotation dependence on illumination offset and angle.
Figure 3.1 Image formation through a thin lens.
Figure 3.2 Defocused camera blur characteristics. (a) Image space blur, (b) object space blur.
Figure 3.3 Phase aspects of image formation.
Figure 3.4 Speckle formation in a defocused camera.
Figure 3.5 (a) Objective Speckle Imaging geometry vs. (b) Defocused Speckle Imaging geometry with equal sampling distances (∆L = L_C).
Figure 3.6 Comparison of objective vs. defocused speckle pattern dependency on geometry.
Figure 4.1 Principle of oblique interference.
Figure 4.2 Examples of aperture functions and their corresponding Point Spread Functions.
Figure 4.3 (a) Subjective speckle formation in a focused camera. (b) Speckle formation in a highly defocused camera.
Figure 4.4 Speckle shape vs. observation angle.
Figure 4.5 Vignetting causing nonuniform speckle size in Defocused Speckle Imaging.
Figure 4.6 Operating principle of a reflection type diffraction grating.
Figure 4.7 Speckle formation based on modeling the diffuse surface as a collection of randomly oriented diffraction gratings with various groove spacings.
Figure 4.8 Speckle formation under single-mode vs. multi-mode laser illumination.
Figure 4.9 Defocused speckle pattern displaying multiple horizontally offset duplicated speckles.
Figure 4.10 Sampling distance determination based on speckle offset extrapolation.
Figure 5.1 Schematic of the measurement geometry.
Figure 5.2 Initial in-plane displacement sensitivity.
Figure 5.3 In-plane displacement sensitivity after laser waist offset correction.
Figure 5.4 Observed speckle displacements resulting from object out-of-plane rotation.
Figure 5.5 In-plane rotation sensitivity.
Figure 5.6 Visualization of rotating speckle field caused by object in-plane rotation.
Figure 6.1 Uniaxial object motion instrumentation.
Figure 6.2 Comparison of objective and defocused speckle patterns recorded at the same effective sampling distance.
Figure 6.3 Speckle size dependence on sampling distance and imaging magnification ratio.
Figure 6.4 Statistical average speckle diameter as a function of the sampling distance.
Figure 6.5 Estimated vs. actual sampling distances in the uniaxial motion setup.
Figure 6.6 Observed in-plane displacement sensitivity as a function of the sampling/illumination distance ratio for different levels of magnification.
Figure 6.7 Observed tilt sensitivity as a function of the sampling distance.
Figure 6.8 Complex object motion instrumentation.
Figure 6.9 Estimated object surface displacements and tilts at different object distances.
Figure 6.10 Estimated vs. actual sampling distances in the complex motion setup.
Figure 7.1 Michelson interferometer setup used to measure interference fringe visibility.
Figure 7.2 Interference fringe visibility computation principle.
Figure 7.3 Comparison of fringe visibility vs. mirror separation for different laser sources.
Figure 7.4 Measurement setup used for studying speckle pattern wavelength dependency.
Figure 7.5 Comparison of speckle patterns generated by different laser sources.
Figure 7.6 Horizontal midline AC plots vs. camera defocus distance.
Figure 8.1 A schematic layout of the experimental setup.
Figure 8.2 The overall view of the experimental setup.
Figure 8.3 A close-up view of the laser source, the cameras and the object-actuator assembly.
Figure 8.4 The object-actuator assembly.
Figure 8.5 View from the cameras towards the 1st surface mirrors.
Figure 8.6 A close-up view of the mirrors.
Figure 8.7 Illustration of pixel correction used for CAM1.
Figure 8.8 Waist adjustment principle.
Figure 8.9 Speckle pattern images and cropped ROIs, small surface angle.
Figure 8.10 Speckle pattern images and cropped ROIs, large surface angle.
Figure 8.11 Speckle pattern images and cropped ROIs, large surface angle and shifted laser waist position.
Figure 8.12 Motion tracking accuracy and repeatability, small surface angle.
Figure 8.13 Motion tracking accuracy and repeatability, large surface angle.
Figure 8.14 Motion tracking accuracy and repeatability, large surface angle and shifted laser waist position.
Figure 8.15 DX- vs. DY-displacement magnitudes for applied dx-displacements, large surface angle.
Figure 8.16 DX- vs. DY-displacements for applied ωy-displacements, large surface angle.
Figure 8.17 Example autocorrelation 2D maps.
Figure 8.18 Extracted AC horizontal midlines.
Figure 8.19 Incremental AC midline plots and detected side-peaks.
Figure 8.20 Autocorrelation side-peak offset repeatability and accuracy.
Figure 8.21 Motion tracking accuracy for varying increment sizes.
Figure 8.22 Autocorrelation midline side-peak separation vs. relative surface angle.
Figure 8.23 Autocorrelation side-peak offset accuracy vs. relative surface angle.
Figure A.1 Michelson interferometer setup for determining laser mode spacings.
Figure A.2 (Top) Interference of two waves propagating with different wavelengths. (Middle) Interference of the two waves. (Bottom) The resulting interferometric fringe visibility.

List of Symbols

Latin Alphabet

A    Displacement sensitivity scaling factor
a    Diffraction grating groove spacing
B    Tilt sensitivity scaling factor
C    Camera sensor
D_blur    Blur diameter on the sensor surface
d_blur    Blur diameter on the object surface
d_cone    Blur diameter on the focal plane
d_FWHM    Full Width at Half Maximum diameter
d_Gaussian    Speckle diameter under Gaussian intensity profile illumination
d_i    Image distance
d_lens    Lens diameter
d_o    Object distance
d_sensor    Sensor diameter
d_speckle    Speckle diameter
d_spot    Illumination spot diameter
DX    Speckle displacement along the sensor X-axis
dx    Object surface in-plane displacement along the x-axis
DY    Speckle displacement along the sensor Y-axis
dy    Object surface in-plane displacement along the y-axis
dz    Object surface out-of-plane displacement along the surface normal, z-axis
f    Lens focal length
f_s    Spatial frequency
f_#    Numerical aperture
h    Vertical fringe spacing
h_i    Image height
h_o    Object height
L_C    Sensor/observation distance
L_S    Source/illumination distance
M    In-focus magnification ratio
m    Diffraction order
Mr    Mirror
n    Refractive index
O    Object
P    Point on an object surface
pxl    Pixel
q    Intermediate help variable
S    Laser source
t    Intermediate help variable
V    Fringe visibility
w    Horizontal fringe spacing
XY    Camera sensor coordinate system
xyz    Object surface coordinate system
X_CoR    Speckle hemisphere center of rotation X-coordinate on the sensor plane
x_offset    Illumination spot offset along the x-axis from the surface center of rotation

Greek Alphabet

α    Half angle among the overlapping light rays
β    Ratio between the observation distance and the illumination distance
∆L    Defocus/sampling distance
∆L_12    The separation between sampling planes 1 and 2
∆Mr    Mirror separation
∆X    Speckle offset, i.e., the separation between partially overlapping speckle patterns
∆z    Mirror displacement
∆λ    Mode separation, i.e., the wavelength difference between adjacent laser modes
ε    Surface in-plane strain
θ    Illumination angle
λ    Light wavelength
ψ    Observation/sampling angle
ω_x    Object surface out-of-plane rotation (tilt) about the x-axis
ω_y    Object surface out-of-plane rotation (tilt) about the y-axis
ω_z    Object surface in-plane rotation about the surface normal, z-axis

List of Abbreviations

AC    Autocorrelation
AVG    Average
BFP    Back Focal Point
CAM    Camera
CoR    Center of Rotation
DFT    Discrete Fourier Transform
DIC    Digital Image Correlation
DOF    Degree-of-Freedom
DPSS    Diode-Pumped Solid-State
DSI    Defocused Speckle Imaging
ESPI    Electronic Speckle Pattern Interferometry
FFP    Front Focal Point
FP    Focal Plane
FWHM    Full Width at Half Maximum
MDF    Medium Density Fiberboard
OPD    Optical Path Difference
PSF    Point Spread Function
SAR    Synthetic Aperture Radar
SD    Standard Deviation
SHM    Speckle Hemisphere Model
SP    Speckle Pattern

Acknowledgements

I would like to express my deepest gratitude to my supervisor, Prof. Gary Schajer, for all his guidance, support and advice over the years. Dr. Schajer has been a wonderful mentor, teaching me a lot about life in general, beyond academics. He has encouraged me to find my own path and follow the research where it leads, and he has helped me to overcome the challenges I have faced.

I am grateful to my External Examiner, Prof. Cosme Furlong-Vazquez, my Supervisory Committee Members, Prof. Robert Rohling and Prof. Kirk Madison, and my University Examiners, Prof. James Little and Prof. Peter Cripton, for all their valuable feedback and useful suggestions. I greatly appreciate the important financial support from the Jenny and Antti Wihuri Foundation, Helsinki, Finland, and Stresstech Oy, Vaajakoski, Finland. In particular, I want to thank Lasse Suominen from Stresstech for giving me this opportunity to learn more.
I also want to thank the Department of Mechanical Engineering for the granted awards. I want to thank my labmates for their company and interesting discussions, and all my teachers over the years for their hard work towards my education. This long journey would not have been possible without the support of my family. I want to thank my parents Juha and Anne, my sister Roosa, my brother Eetu and my grandparents for all their love and care. They have taught me the importance of hard work, integrity and doing things properly. I also want to thank my relatives, godparents and friends for their support. To my fiancée Alondra: I feel very lucky to have you in my life and by my side on this journey. Thank you for your patience and encouragement to push forward, and for listening to my never-ending stories about speckles.

Chapter 1: Introduction

1.1 Importance of Motion Measurements

Motion is ubiquitous. People move to travel and to interact with one another, and they use motion to convey information and to study the surrounding world. Motion measurements have endless applications, ranging from vehicle velocimetry and hand-gesture studies to computer mice and machine vibration analysis. In experimental mechanics, a common goal of motion measurements is to ensure the safe and optimal performance of products and machines, either by monitoring their motion directly or by characterizing the mechanical properties of their components.

1.2 Need for Remote Measurements

A common way to measure motion is to attach a measurement sensor, such as an accelerometer, onto the object surface. However, contact measurements may sometimes be impractical with large objects, e.g., a bridge, or if the object is already in motion, e.g., part of factory machinery. If the measurement application involves repeating the same procedure on many specimens or at many separate locations on the same specimen, as in production quality control, installation of contact sensors like strain gages may be too time-consuming. In the case of delicate objects, a contact sensor may also change the object's response and thus distort the measurement, for example by changing the thermal mass of a thin plate or the mechanical response of a lightweight audio speaker drum.

Human access to the measurement specimen may be limited for several reasons. Environmental hazards like radiation or high magnetic fields may occur, for example, in nuclear research, power plants or in the vicinity of medical instrumentation. Other adverse conditions like extreme temperatures and pressures are present, e.g., in space and deep-sea exploration, but may also occur in ordinary factory process environments. In addition, factories often have fast-moving objects and heavy machinery, which gives an extra incentive to limit human access to reduce risks wherever possible. Measurement instrumentation can also be affected by the environment: electromagnetic interference may cause noise in electronic sensors, and pressure or temperature variations may affect the sensor calibration, as with thermal drifts in strain gages.

The presence of these challenges calls for remote, non-contact measurement techniques. Remote measurements ensure that both the measurement technicians and the instrumentation can remain at a safe distance from the potential hazards and error sources. Non-contact techniques can also be easily scaled for objects of various sizes and are better able to measure moving objects with no prior access to them.
Furthermore, no time-consuming installation of contact sensors is needed, nor is there a risk of the instrumentation interfering with the object.

1.3 Basics of Optical Methods

Optical methods are attractive for remote measurements because of their non-contact character. Data capture is quick and simple, typically by taking a digital image or a set of images of the object of interest and analyzing the images to extract the object surface motion. Image-based methods are ideal for full-field measurements because they record two-dimensional data with up to millions of individual measurement pixels. Modern camera sensors, image processing algorithms and computational resources are very powerful and relatively low-cost, enabling real-time computations at high accuracy even for consumer applications. Optical motion measurements can be divided into two main types: feature tracking methods and interferometric methods.

1.3.1 Feature Tracking Methods

In feature tracking methods, the captured image frames are analyzed to identify and locate characteristic object surface features or attached optical markers, and to track how their locations change from frame to frame within the camera view or relative to one another. Optical markers have easily detectable patterns, like corners, and are used in various applications ranging from biomechanical human motion analysis to motion capture for animation and video games, as well as car crash test studies, as illustrated in Figure 1.1.

Figure 1.1 Motion analysis examples based on attached optical markers. (Left) Biomechanical motion analysis; (right) car crash test study. The cross-markers are easy to identify and track by computer algorithms.

Digital Image Correlation (DIC) [1] is a feature tracking method in which object surface motions are measured by following the movements of the surface texture. The tracked features can be natural texture like wood or ground metal, or applied patterns like spray-painted random dot speckles (Figure 1.2). Figure 1.3 shows the DIC tracking principle. A portion, or subset, of the first image is selected, and the location of the same subset is determined in the subsequent image captured after surface movement. The relative change in subset location indicates the shift within the camera view, which can be related to the corresponding surface motion. If the same procedure is repeated for different areas in the image, it is possible to determine the surface displacement field. DIC is widely used for surface motion and deformation analysis in experimental mechanics [2].

Figure 1.2 Spray-painted random dot pattern applied on an object surface.

Figure 1.3 Digital Image Correlation (DIC) tracking principle. The recorded image is divided into subsets, and the apparent motion of each subset is determined by tracking how their locations change.
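To make the subset-tracking principle of Figure 1.3 concrete, the sketch below locates one subset of a reference image inside a subsequent image by an exhaustive zero-normalized cross-correlation (ZNCC) search. This is a minimal illustration under stated assumptions, not the software used in this work: the function name and the integer-pixel search are illustrative choices, the subset and search window are assumed to stay inside both images, and practical DIC implementations additionally refine the match to sub-pixel resolution.

```python
import numpy as np

def track_subset(ref, cur, top, left, size, search=10):
    """Locate one reference-image subset in a later frame.

    ref, cur : 2D grayscale images (NumPy arrays)
    top, left: corner of the subset in the reference image
    size     : subset side length in pixels
    search   : half-width of the search window in pixels
    Returns the integer-pixel displacement (dy, dx) that maximizes
    the zero-normalized cross-correlation (ZNCC) score.
    """
    sub = ref[top:top + size, left:left + size].astype(float)
    sub = (sub - sub.mean()) / sub.std()        # zero-normalize: robust to lighting changes
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[top + dy:top + dy + size,
                      left + dx:left + dx + size].astype(float)
            win = (win - win.mean()) / win.std()
            score = np.mean(sub * win)          # ZNCC score in [-1, 1]
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift
```

Repeating the search over a grid of subsets yields the displacement field described above; converting the pixel shifts to physical units then requires the camera magnification, which is where the perspective limitations discussed next come into play.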
Feature tracking can be performed with very simple instrumentation, and the technique is very robust against environmental disturbances. However, as a photographic method it is prone to, and limited by, perspective effects. Camera magnification and sensitivity, i.e., the motion observed in the image per unit of actual motion applied to the object, are inversely proportional to the object distance. Therefore, if the studied object is far away, it appears to move by a smaller extent than if the same motion were observed on a more nearby object. Correspondingly, measurement resolution is inversely proportional to distance, which limits the maximum measurement range. Furthermore, if the object moves towards or away from the camera, the effective magnification changes, causing perspective distortions that make the object appear to enlarge or shrink. Since a single camera cannot distinguish between actual surface deformation and scale change, such measurements are typically limited to cases where the object moves only in the in-plane direction. In addition, camera-based methods have very low sensitivity to object out-of-plane rotations, i.e., surface tilts. Even if tilts are sufficiently large to be tracked, they must be extracted from apparent image strains, involving noise-sensitive numerical differentiation of the measured displacement data.

A successful measurement requires strong, trackable surface texture. If the object does not naturally have the necessary surface texture, it must be painted or equipped with trackers, greatly increasing measurement preparation time. If the applied texture is not firmly attached, it may peel off or move during the measurement, leading to errors. The apparent contrast of the texture also depends on camera focus. If the object surface is not normal to the camera, has a curved shape or moves in the out-of-plane direction, part of it may become blurred, washing out the texture and making the analysis more challenging. For example, the right edge of the surface shown in Figure 1.2 is clearly blurred, making the smallest dots difficult to distinguish.

1.3.2 Interferometric Motion Measurements

In interferometric motion measurements, the object surface is illuminated by a coherent laser beam. Light reflected from the surface is combined with a reference beam from the same laser source, generating an interference pattern that is captured by a camera. As an example, Figure 1.4 shows the concept of an Electronic Speckle Pattern Interferometry (ESPI) setup designed to measure surface in-plane deformations [3]. The intensity of the interfered light depends on the optical path length difference between the measurement beam and the reference beam. Therefore, object surface motions can be analyzed by monitoring intensity changes in the recorded interference patterns [4,5]. Interferometry utilizes the light wavelength as an extremely accurate ruler and can thus reach very high sensitivity, in the nanometer range. The instrumentation can be configured to measure in-plane or out-of-plane displacements, or surface tilts [3,4]. However, the measurement is sensitive to only one motion component at a time, and the high sensitivity limits the maximum measurable motion range.

Figure 1.4 Interferometric motion measurement principle. The displayed example is an Electronic Speckle Pattern Interferometry setup configured to measure surface in-plane deformations.

Interferometric measurements require rather complex instrumentation, typically with a mechanical actuator and a costly high-quality single-wavelength laser source. Monochromatic light is needed to obtain high-contrast interference patterns, and a mechanical actuator is used to modulate the relative path length difference between the two beams (Figure 1.4) in order to quantify the light phase changes that carry information about the surface motions and deformations [4]. Moreover, because any change in the relative path lengths between the two beams alters the measured intensity signal, interferometric methods are generally very prone to noise from the environment.
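The noise sensitivity noted above follows directly from the standard two-beam interference relation, quoted here in generic textbook form using the OPD (Optical Path Difference) notation of the nomenclature:

\[ I = I_m + I_r + 2\sqrt{I_m I_r}\,\cos\!\left(\frac{2\pi \cdot \mathrm{OPD}}{\lambda}\right) \]

where I_m and I_r are the measurement- and reference-beam intensities. A change in OPD of only λ/2, roughly 0.3 µm for visible light, swings the cosine term from fully constructive to fully destructive interference, which explains both the nanometer-scale sensitivity and the vulnerability to environmental path-length disturbances described next.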
The effective path length can change due to vibrations and is also affected by convective air currents and moisture changes that alter the refractive index of air [4]. Therefore, interferometric measurements are generally not well suited for field use outside a well-controlled laboratory space.

1.4 Basics of Speckle Imaging

Feature tracking and interferometric methods have complementary characteristics. While both have useful attributes, neither is ideal for remote measurements on its own. However, there exists a variant measurement method, Speckle Imaging, that combines useful characteristics of both feature tracking and interferometry [6-9]. In Speckle Imaging, an object with a rough surface is illuminated by a laser beam. The laser light is scattered in all directions from the surface, and individual light rays interfere with one another, creating a speckle pattern of bright and dark dots, corresponding to various levels of constructive and destructive interference, respectively. Figure 1.5 illustrates the speckle pattern formation on a nearby sensor surface; every point on the sensor receives light from across the entire illuminated object. At some points on the sensor, the phases of the interfering light rays align, leading to high observed brightness. Conversely, other points appear darker due to destructive interference. Figure 1.6 shows an example of an experimental speckle pattern captured by a digital camera sensor. The light intensity appears to vary completely randomly across the speckle pattern. While the speckles look random, they depend directly on the local surface roughness within the illuminated spot. The speckle pattern therefore acts as a virtual fingerprint of the surface; if the surface displaces or rotates, the speckle pattern changes correspondingly. This enables remote surface motion measurements by following the movements of the recorded speckle patterns, just as physical surface features are tracked in DIC [6]. Furthermore, in contrast to perspective camera effects, speckle motion sensitivity increases with distance, making the method particularly attractive for remote measurements at large distances.

Figure 1.5 Speckle formation principle. A laser source illuminates a portion of a rough object surface, and a digital sensor records a portion of the scattered light. At some points on the sensor the overlapping light rays interfere constructively (bright spot), and at some points destructively (dark spot).

Figure 1.6 Laser speckle pattern captured by a digital camera sensor. The resulting image contains a random arrangement of dots with varying brightness.

Speckle Imaging requires no surface preparation, provided that the object surface is rough, like paper, wood or ground metal. A surface is sufficiently rough if it has height variations exceeding the wavelength of light, i.e., ~0.5 µm or more. Even smooth surfaces can be measured by coating them with a thin layer of matte paint. Speckles can also be formed by scattering from biological tissues, such as blood vessels through skin [10,11], or from certain retro-reflective surfaces [12] used in high-visibility clothing and roadside markers. Speckle patterns can be recorded using a bare, lensless camera sensor placed anywhere adjacent to the illuminated object. It is important to note that speckles fill the entire space adjacent to the illuminated object.
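The random-interference picture of Figures 1.5 and 1.6 is straightforward to reproduce numerically. The sketch below sums randomly phased wavelets from a line of surface scatterers at a line of sensor points; a 1D cross-section is used for brevity, and all counts and dimensions are arbitrary illustrative assumptions rather than any geometry used in this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.5e-6                           # wavelength, ~0.5 um (visible light)

# Rough surface: scatterers spread across a 1 mm illuminated spot, each with
# a random phase (height variations exceeding the wavelength scramble phases).
xs = (rng.random(400) - 0.5) * 1e-3    # scatterer positions on the surface [m]
phase0 = rng.random(400) * 2 * np.pi   # random phase from surface roughness

# Lensless sensor: a line of points 0.5 m away from the surface.
L = 0.5                                # sensor distance [m]
px = np.linspace(-2e-3, 2e-3, 1000)    # sensor point positions [m]

# Sum the complex contributions of every scatterer at every sensor point.
r = np.sqrt(L**2 + (px[:, None] - xs[None, :])**2)   # path lengths [m]
field = np.exp(1j * (2 * np.pi * r / lam + phase0)).sum(axis=1)
intensity = np.abs(field)**2           # bright/dark speckles along the sensor line
```

Where the summed phases happen to align, the intensity is high (a bright speckle); where they cancel, it is low (a dark speckle). Rerunning the sum with displaced scatterer positions shifts the resulting pattern, illustrating the speckle motion exploited for tracking.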
The sensor will thus record a speckle pattern wherever it is placed, but the sensor location defines the measurement distance. The resulting interference speckle pattern has dense, high-contrast texture that can be tracked with superior accuracy using the same algorithms that are used for DIC analysis. Speckle Imaging is sensitive to linear displacements, rotations and even surface strains [6,7]. The different motion components couple together, so that the total observed speckle motion is the sum of the elementary motions. Therefore, in the case of multiaxial object motion, the individual contributions must be extracted from the recorded speckle motion. While this is a challenge on the one hand, the capability to measure rotations and strains directly from the measured displacement data is also an advantage, since there is no need for a noise-sensitive numerical differentiation step as with feature tracking methods.

1.5 Geometric Aspects of Speckle Imaging

Many characteristics of Speckle Imaging resemble light behavior at a macroscopic scale. To illustrate this, it is useful to consider the light reflections from a disco ball. A disco ball is a sphere with small mirror pieces arranged on the surface in a mosaic pattern (Figure 1.7, left). When a point source like the sun or a spotlight illuminates the disco ball surface, light rays are reflected from the individual mirrors according to the law of reflection, i.e., the angle of reflection equals the angle of incidence. Because of the spherical surface, each mirror normal points in a slightly different direction, so light is correspondingly reflected in various directions. When the disco ball rotates, all the mirrors rotate with it. This changes the light incidence angles, which correspondingly causes all the reflected rays to rotate by the same amount. As a consequence, the light pattern reflected onto the nearby walls rotates as a rigid body. Furthermore, for a fixed angular velocity, the tangential motion of the pattern scales with distance (Figure 1.7, right). Therefore, the pattern moves very rapidly on remote walls, while the motion is slower on nearby walls.

Figure 1.7 (Left) Sunlight reflected from disco ball surface mirrors. (Right) Illustration of disco ball reflection pattern movements in response to surface rotation. The observed motion magnitude scales proportionally to the distance from the disco ball.

In the case of Speckle Imaging, the spotlight is replaced by a coherent laser source, and the rough object surface acts as a highly irregular, miniature disco ball consisting of numerous randomly placed microscopic surface mirrors. The similarity with the disco ball motivates modeling the interference speckle field as a 3D object that moves as a rigid body. In this view, the recorded speckle pattern is a two-dimensional cross-section of the 3D speckle field at the sensor location. Some differences arise from the use of coherent light and microscopic scatterers, as coherent light is subject to interference, and coherent light incident on microscopic features leads to diffraction effects. However, the main aspects are very similar to those of the disco ball. Most importantly, the motion of the speckle pattern also increases linearly with distance in response to object rotations.
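The disco-ball analogy can be quantified with the law of reflection: tilting a mirror by an angle ω deflects its reflected ray by 2ω, so at a distance L the reflected spot, and by analogy the speckle pattern, shifts tangentially by approximately

\[ \Delta x \approx 2\,\omega L \]

This small-angle relation is offered here only as an order-of-magnitude guide; the exact sensitivity factors, including the effects of illumination geometry and surface angle, are derived in Chapters 2 and 3. As an illustration, a tilt of ω = 0.003° (5.2 × 10⁻⁵ rad) sampled at L = 30 m gives Δx ≈ 2 × 5.2 × 10⁻⁵ × 30 m ≈ 3.1 mm, a readily trackable motion even though the surface rotation itself is minute.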
This linear scaling is a crucial feature for remote measurements: it means that the measurement sensitivity increases with distance from the object, contrary to the perspective limitations of image-based feature-tracking methods. Furthermore, because speckle patterns are formed by self-interfering light, Speckle Imaging has characteristics similar to a common-path interferometer, making it much more robust against noise than many interferometric methods used for motion measurements.

1.6 Defocused Speckle Imaging

A camera (sensor + lens) measures an image of the object by transforming the light rays incident on the camera focal plane onto the sensor plane. If the focal plane does not coincide with the object, i.e., the camera is defocused, the resulting image appears blurred. This happens because each pixel on the sensor of a defocused camera receives light from an extended area on the object, as indicated by the shaded green area in Figure 1.8. A camera is actually always accurately focused at some plane in space; in the "defocused" case it happens that the focal plane is away from the object surface. In Figure 1.8, the length of the dashed blue arrow indicates the defocus distance between the surface and the focal plane. A small amount of defocus reduces image sharpness, but some surface details can still be detected in the resulting image. However, if the object is placed sufficiently far away from the focal plane, the surface details become completely diffused, as every point on the sensor receives light from across the entire object. This description of diffused imaging resembles the principle of speckle formation. In fact, if a laser-illuminated object is imaged using a highly defocused camera, the resulting speckle pattern corresponds to the interference pattern that would be recorded by a lensless sensor placed at the camera focal plane [13]. Therefore, adjusting the camera (de-)focus distance is a practical way to control the speckle pattern sampling location. Furthermore, the camera in-focus magnification ratio determines the recording scale, which offers an extra level of sensitivity control. Moreover, since the speckle pattern is formed by interfering light rays, it retains sharp texture independent of camera defocus, opposite to physical surface features that fade away due to blur.

Figure 1.8 Image formation in a defocused camera. When a physical object is shifted away from the camera focal plane, the resulting defocused image is blurred, and fine surface details cannot be resolved. If the object is moved far from focus, the resulting image becomes completely diffused, with no detectable surface texture.

At large sampling distances, Defocused Speckle Imaging has the characteristics of a pointwise measurement, since an increase in defocus reduces spatial resolution. While the speckle motion sensitivity to surface tilts scales linearly with the sampling distance, the sensitivity to linear displacements varies much less. Therefore, if the sampling plane is close to the surface, the observed motions are mostly due to linear displacements, while a highly defocused camera is mostly sensitive to rotations. Such sensitivity variation makes camera focus adjustment a practical tool for tuning the measurement content and extracting the desired motion component.
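To make the link between the focus setting and the sampling location concrete, the following minimal sketch applies the thin-lens relation reviewed in Chapter 3 (Equation 3.1) to find where in object space a given lens setting places the focal plane. The function name and numerical values are illustrative assumptions, not settings used in the thesis experiments.

```python
def focal_plane_distance(f, d_i):
    """Object-space focus distance of a thin lens with focal length f
    when the sensor sits a distance d_i behind the lens (cf. Eq. 3.1)."""
    return 1.0 / (1.0 / f - 1.0 / d_i)

# a 50 mm lens with the sensor 50.5 mm behind it focuses ~5.05 m away;
# an object at 10 m is then sampled ~4.95 m in front of the focal plane
f, d_i = 0.050, 0.0505
d_focus = focal_plane_distance(f, d_i)
print(d_focus)          # ~5.05 m: location of the conceptual sampling plane
print(10.0 - d_focus)   # defocus (sampling) distance for an object at 10 m
```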
More specifically, if the same object motion is simultaneously analyzed by two cameras focused at different distances, it is possible to extract the linear and rotational motion components.

1.7 Internal Properties of Speckle Patterns

Since speckles are formed by complex interference of many light rays propagating in different directions, the measurement geometry defines the speckle pattern appearance. The average speckle size is inversely proportional to the maximum angle between the scattered light rays that reach the sensor [8,9,14]. Consequently, the average speckle size scales linearly with the sampling distance when measured sufficiently far away from the illuminated surface. Therefore, the sampling distance could potentially be determined directly from the captured speckle pattern, provided that the speckle size can be accurately determined (see the autocorrelation sketch below).

In a diffraction grating, the direction of the diffracted beam depends on the light incidence angle and wavelength. Because the speckle pattern is also a diffraction pattern, speckle locations are thus sensitive to the laser spectrum. The output of an ordinary laser diode is not strictly monochromatic but contains several wavelengths, known as longitudinal modes. Each mode creates a diffraction speckle pattern at a slightly different angle. Consequently, the observed speckle pattern contains multiple spatially shifted copies of the same speckles, as shown in Figure 1.9. The angular spacing between the speckles depends on the relative surface angle, while the absolute spacing scales linearly with the sampling distance [15]. Therefore, additional information could be coded into the speckle pattern by controlling the laser spectrum, and this information could be read out by measuring the shifts between the superimposed speckle patterns. Speckle pattern diffraction analysis could thus provide the important range and angle information needed for calibrating the Speckle Imaging measurement. Interestingly, a low-coherence "bad-quality" laser can thus be beneficial for Speckle Imaging, whereas such a laser would pose severe limitations in traditional interferometric measurements.

Figure 1.9 Defocused speckle pattern with duplicated speckles generated under multi-mode laser illumination.

1.8 Speckle Imaging Applications

Speckle Imaging is already used commercially for contact measurements in the laser optical computer mouse [16]. Laser speckle tracking works better on shiny, low-feature surfaces than a traditional LED mouse that tracks physical surface texture; even a smooth glass table contains impurities that scatter light sufficiently for Speckle Imaging analysis. Speckle patterns have also been used for contrast imaging to visualize blood flow [10] and to measure heart rate [11]. Some examples of non-contact applications include two-dimensional object speed and position tracking [17,18], surface rotation and angular velocity measurements [19,20], and surface roughness estimation [21]. Speckle Imaging has also been demonstrated for a gesture-controlled human-computer interface [22], as well as a 6 Degree-of-Freedom (DOF) motion sensor [23] and an angular orientation sensor for robotic applications [15]. The existing well-established applications are either for contact measurements or for very short range, while the proposed applications are non-contact but only for relatively short range. Since Speckle Imaging has attractive characteristics for remote measurements at large distances, this raises the question of why the method is not better utilized.
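As a brief illustration of the statistical analysis anticipated in Section 1.7, the sketch below estimates the mean speckle size from the full width at half maximum of the intensity autocorrelation peak, assuming NumPy and a grayscale speckle image. Converting this width into a sampling distance requires the geometric relationships developed in Chapter 4; the sketch only shows the measurable quantity.

```python
import numpy as np

def mean_speckle_size(img):
    """Estimate the mean speckle diameter (in pixels) from the full
    width at half maximum of the intensity autocorrelation peak."""
    I = img - img.mean()
    # Wiener-Khinchin: autocorrelation from the power spectrum
    acf = np.fft.ifft2(np.abs(np.fft.fft2(I)) ** 2).real
    acf = np.fft.fftshift(acf) / acf.max()
    row = acf[acf.shape[0] // 2]          # horizontal cut through the peak
    center = acf.shape[1] // 2
    half = np.argmax(row[center:] < 0.5)  # first sample below half maximum
    return 2 * half                       # FWHM in pixels
```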
1.9 Limitations of Speckle Imaging

Because Speckle Imaging is sensitive to various motion types, the desired components must be carefully extracted. Moreover, since the instrumentation geometry affects sensitivity, the illumination and sampling distances and angles must be known so that the observed speckle motions can be scaled appropriately. In field measurements the test conditions are not well defined, so these parameters must be separately measured, which may not be straightforward.

Historically, speckle formation has been considered an unwanted effect. For example, the first lasers were anticipated to provide the purest monochromatic light possible, but laser-illuminated objects were surprisingly covered by strong granular speckle patterns. Speckles are not limited to visible light; they are formed whenever radiation from any coherent source is randomly scattered from a rough surface. Apart from visual aspects, speckles are still a problematic source of noise in some applications, including ultrasound, Synthetic-Aperture Radar (SAR) imaging, and projection display technologies [9].

Since many people are familiar with the photographic framework and use it as a point of comparison, certain aspects of Speckle Imaging can seem very confusing. For example, defocus in photography leads to blur, washing out texture and reducing contrast. However, laser speckle patterns captured by a defocused camera maintain sharp contrast independent of defocus. Similarly, perspective limits the magnification and sensitivity in photographic motion measurements, while the same cameras can capture speckle motions at high sensitivity.

Perhaps the biggest reason for the limited utilization of Speckle Imaging is that the relevant literature is very scattered. Most information on Speckle Imaging applications is contained in scientific articles, each with a narrow focus and specific target audience. Apart from the books by Goodman [9] and Dainty [24], few textbooks are available, especially ones tailored for newcomers or a general engineering audience. Moreover, the existing theoretical models are very complex and mathematically heavy. This makes it very challenging for people without a science background to grasp and visualize the speckle phenomenon and its subtleties.

1.10 Thesis Motivation and Objectives

The main goal of this research is to develop a non-contact inspection tool that can remotely measure object distance, relative surface angles, and microscopic surface motions from tens of meters away.

The first objective is to form a strategy for making remote surface motion measurements using Defocused Speckle Imaging. The key feature is to realize that the camera focal plane controls the location where the speckle field is sampled. Because rotation and displacement sensitivities vary differently with sampling distance, the different motion components can be extracted by recording the speckle patterns with a pair of cameras focused at different distances.

To realize the potential of Speckle Imaging optimally, its operating principles must be well understood. Therefore, the second objective is to construct a simple geometric model of Speckle Imaging that explains speckle formation and measurement sensitivity. The model is named the Speckle Hemisphere Model (SHM), and it aims to explain the speckle phenomenon without the need for complex mathematical analysis or multivariable calculus, while nevertheless being just as accurate as the existing theoretical models.
This will provide an alternative source of information for users who are new to the world of Speckle Imaging. The key insight is to model the interference speckle field as a 3D object that moves as a rigid body in response to surface movements. The geometric representation is straightforward to visualize and complements the existing mathematical formulations.

Since Speckle Imaging sensitivity depends on the illumination and sampling distances and angles, these parameters must be accurately known in order to scale the recorded speckle motions appropriately. Because the speckle pattern appearance is affected by the same parameters, the third objective is to develop a method to extract the scaling parameters directly from the speckle pattern internal structure through statistical analysis, without the need for separate measurements.

The fourth and final objective is to demonstrate remote self-calibrated surface motion measurements to show the method's capability for diverse practical applications.

1.11 Summary

In summary, the thesis objectives are:
(1) Develop a practical method for making remote surface motion measurements using Defocused Speckle Imaging
(2) Construct a simple geometric model of Speckle Imaging
(3) Develop a practical method to extract scaling parameters by statistical speckle pattern analysis
(4) Demonstrate remote self-calibrated surface motion measurements

The key aspects of Speckle Imaging studied in this thesis are:
• Coherent laser light scattered from a diffuse surface creates a 3D speckle field that moves as a rigid body in response to surface motion
  o This is the basis of the Speckle Hemisphere Model
  o To a first approximation, the surface is modeled as a collection of randomly oriented mirrors
  o Remote motion measurements are possible by tracking speckle movements
• The speckle field consists of long needle-shaped speckles that radiate outwards from the illuminated area
  o Speckle Imaging motion sensitivity increases with measurement distance
  o Remote measurements at large distances are feasible
  o Possible to overcome perspective limitations
• The speckle pattern is a diffraction pattern
  o Speckle Imaging illumination and observation angles are subject to the laws of diffraction
  o Speckle motions are not exactly the same as reflections from moving macroscopic objects
  o The Speckle Hemisphere is not strictly a rigid body, although the deviation is often small
  o The diffraction nature makes speckles wavelength dependent, so a multi-mode laser source creates many spatially shifted copies of the same speckle pattern
• A defocused camera records a 2D cross-section of the 3D speckle field
  o The 2D speckle pattern cross-section is always "in focus"
  o The lens focal plane defines the sampling location
  o The sampling location can be changed simply by adjusting the lens focal plane
  o The lens in-focus magnification sets the sampling scale
  o A bare sensor at the sampling location would measure the same speckle pattern
• Object rigid-body rotation produces speckle displacements that are directly proportional to distance from the illuminated surface.
  Speckle motions produced by object displacements also depend on the measurement distance, but to a much lesser extent
  o Sampling plane choice by focus adjustment is a practical way to control sensitivity
  o It is possible to separate rotational and displacement components by using a pair of cameras focused at different distances (see the sketch at the end of this chapter)
• Statistical speckle pattern analysis reveals important calibration parameters
  o Object distance is indicated by the average speckle size (linear relationship)
  o Surface orientation relative to the instrumentation is indicated by the speckle shape
  o These parameters are also indicated by the shift between the superimposed speckle diffraction patterns created by multi-mode laser illumination

1.12 Thesis Outline

The chapter contents are introduced below. Chapters 2-4 present the theory, and Chapters 5-8 report the related experimental work.

Chapter 1 – Introduction
Chapter 2 – Geometric Representation of Speckle Imaging – Speckle Hemisphere Model
• Provides the geometric representation of Speckle Imaging, explains the sensitivity characteristics of different motion types, and considers interferometric and diffraction aspects.
Chapter 3 – Remote Surface Motion Measurements Based on Defocused Speckle Imaging
• Studies the characteristics of Defocused Speckle Imaging and proposes the optimal arrangement for remote measurements under multiaxial object motion.
Chapter 4 – Statistical Speckle Pattern Analysis
• Investigates how the crucial geometric calibration parameters can be extracted by analyzing the speckle patterns using statistical methods, introduces a diffraction-based view of speckle formation, and proposes a measurement self-calibration principle based on a combination of multi-mode laser illumination and speckle pattern diffraction analysis.
Chapter 5 – Sensitivity Characteristics of Objective Speckle Imaging
• Presents a series of experiments to validate the Speckle Hemisphere concept and explores the sensitivity characteristics of Objective Speckle Imaging for various motion types: in-plane displacements, out-of-plane rotations, as well as in-plane rotations.
Chapter 6 – Sensitivity Characteristics of Defocused Speckle Imaging
• Experimentally reveals the connection between Defocused and Objective Speckle Imaging, investigates Defocused Speckle Imaging sensitivity characteristics, and demonstrates the method's potential to measure multiaxial motions of a remote object at high accuracy and sensitivity.
Chapter 7 – Geometric Calibration Principle Based on Speckle Pattern Diffraction Analysis
• Demonstrates the speckle pattern appearance dependence on the laser source spectrum and the diffraction-based calibration principle where range and orientation information are extracted from the acquired speckle patterns.
Chapter 8 – Self-calibrated Remote Surface Motion Measurements
• Demonstrates the method's performance in a practical measurement situation. The chapter presents a set of uniaxial and multiaxial surface displacement and tilt measurements recorded more than 30 meters away, performed on an object coated with retroreflective tape. The resulting speckle motions are scaled using the proposed diffraction-based calibration principle.
Chapter 9 – Conclusion
• Summarizes the thesis findings and discusses their overall impact, considers limitations, and outlines ideas for future work.
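To close the introduction, the sketch below illustrates the two-camera separation idea referenced in the summary list above: two speckle-shift readings taken at different sampling distances are inverted for the displacement and tilt components. The coefficient forms and all numerical values are illustrative assumptions; the actual geometry-dependent sensitivities are derived in Chapters 2 and 3.

```python
import numpy as np

# Hypothetical sensitivity coefficients for two cameras sampling the
# speckle field at distances L1 and L2 (displacement sensitivity is
# dimensionless; tilt sensitivity is metres per radian and scales
# linearly with sampling distance).
L1, L2 = 0.5, 5.0          # sampling distances in metres (assumed)
A1, A2 = 1.2, 1.2          # displacement sensitivity (weak distance dependence)
B1, B2 = 2 * L1, 2 * L2    # tilt sensitivity grows with distance

# measured speckle shifts on the two sensors, in metres (assumed values)
DX1, DX2 = 8.0e-6, 1.1e-4

M = np.array([[A1, B1],
              [A2, B2]])
dx, wy = np.linalg.solve(M, [DX1, DX2])   # invert the 2x2 linear system
print(f"displacement dx = {dx:.2e} m, tilt wy = {wy:.2e} rad")
```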
Chapter 2: Geometric Representation of Speckle Imaging – Speckle Hemisphere Model

The chapter reviews some key historical findings about the speckle phenomenon, the development of quantitative speckle motion measurement methods, and various existing Speckle Imaging models. It continues by identifying gaps in the existing models and recognizing the desired characteristics of an ideal model. Finally, it presents the concept of the proposed Speckle Hemisphere Model and gives a derivation of the theoretical sensitivity equations. Section 2.3 has been published in Optics and Lasers in Engineering under the title "A Geometric Model of Surface Motion Measurement by Objective Speckle Imaging" [25].

2.1 Overview of Key Literature

2.1.1 First Observations on the Speckle Phenomenon

The first publications on the speckle phenomenon followed soon after lasers became commercially available in the early 1960s. Langmuir [26] and Oliver [27] were among the first to report the curious sparkles that resulted when laser light was scattered from a diffuse surface. Langmuir observed how the generated spots moved in response to object motion. He also discovered that speckles retained sharp contrast even if the eyes were not focused at the illuminated object, and how the number of speckles was reduced if the pattern was viewed through a pinhole. Langmuir suspected that such behavior had to be related to the coherence and monochromaticity of the laser light, and even suggested that the minimum speckle size is likely related to the finite resolving power of the eye [26].

Concurrently with Langmuir, Oliver reported how the apparent speckle motion varied with distance from the object, and how the perceived motion varied among observers. To explain the findings, he hypothesized that a diffuse reflection of coherent light produces a complex and random diffraction pattern. Using the diffraction view, Oliver explained that the appearance of the speckle pattern must be affected by the focusing of the eyes. He also noticed that speckles have needle-like shapes, and that the lateral speckle dimensions increased linearly with distance and inversely proportionally to the illuminated spot size. Furthermore, the diffraction nature makes speckles wavelength dependent; if the laser light contains multiple wavelengths, equally many separate diffraction speckle patterns would be formed. Correspondingly, speckles could not be observed with a white-light source, as the individual diffraction patterns would be averaged out due to the wavelength continuum [27].

2.1.2 From Speckle Photography to Speckle Imaging

While much early research effort was aimed at reducing laser speckling, many researchers were also motivated to study the speckle phenomenon for remote motion measurements. In 1970, Archbold, Burch and Ennos developed a Speckle Photography method where speckle motions are extracted from double-exposed photographs [28]. The speckle pattern scattered from the object surface is recorded on the same film before and after surface movements. When a small portion of the developed double-exposed film is later illuminated by a convergent laser beam, the transmitted, scattered light forms a diffraction halo [28]. If the surface is shifted or tilted between the exposures, the resulting diffraction halo contains fringes whose density and orientation indicate the amplitude and direction, respectively, of the speckle movements that occurred.
Archbold and Ennos used Speckle Photography to investigate the characteristics of speckle motions resulting from linear surface displacements and rotations [6]. In 1972, Tiziani conducted detailed experiments on surface out-of-plane rotations (tilts) [29], and in 1976, Gregory continued Tiziani's work and gave a detailed analysis of speckle motion characteristics when imaged under varying degrees of defocus [30]. However, while Speckle Photography enabled quantitative speckle motion measurements, the necessary data readout step made the technique complex and time-consuming. Furthermore, the minimum resolvable motion threshold was strictly limited by the average speckle diameter [6].

In the 1980s, the arrival of digital imaging sensors greatly simplified data capture and enabled digital image processing, where the lateral speckle shifts are determined directly by computing the cross-correlation maximum between the digital speckle pattern images recorded before and after deformation [7]. To distinguish the novel analysis technique from Speckle Photography, the new digital approach is called Speckle Imaging. Speckle Imaging closely resembles the DIC method [1]. However, while DIC tracks the motion of the physical surface points directly, Speckle Imaging follows the apparent movements of the light interference pattern that is generated by the moving surface. Digital recording and analysis made speckle motion measurements much more practical, enabling real-time measurements and full-field analysis, like surface strain mapping [31,32]. In the last decade, significant advances in imaging sensors and computing hardware have allowed the development of compact and portable instruments that are increasingly applied to consumer applications [22].

2.1.3 Existing Speckle Imaging Models

Many theoretical models have been developed to describe the speckle phenomenon and to study the speckle motion characteristics. Most authors explain speckle formation using diffraction theory [6,7,13,27,29,33-38]. Some diffraction-based models consider the object surface as a collection of randomly located secondary point sources according to Huygens' principle [13,35,37], while others view the surface as a deforming diffraction grating [7] or a collection of randomly oriented diffraction gratings with varying pitch [8,33]. In contrast, Gregory models the surface as a collection of microscopic, randomly oriented surface mirrors [30].

Most Speckle Imaging models are mathematically very complex. The approaches by Yamaguchi and Hrabovský require solving the multivariable diffraction integral to find the maximum of the cross-correlation function [7,37]. Some of the alternative approaches are based on the general diffraction equation [33] or a related requirement to maintain equal path length differences among the light rays that form the speckles [6,34,35]. With such boundary conditions, the sensitivity equations can be determined using matrix algebra. In contrast, Gregory's reflection model is based on simple 2D geometry, so the corresponding sensitivity equations are very straightforward to derive using basic trigonometry. Some of the models include only specific motion components, whereas others provide a full analysis of all linear displacements, rotations, and surface strains. The first complete analysis of Speckle Imaging was published by Yamaguchi in 1981 [7].
Yamaguchi's model has become so widely known that it has gained benchmark status – the characteristics of the more recently published models are routinely compared with Yamaguchi's results [13,35-37,38].

2.1.4 Limitations of Existing Models

Despite its completeness, Yamaguchi's model is based on one central assumption that limits its universal applicability. Yamaguchi assumed that the object surface is located along the optical axis of the imaging sensor, parallel with the sensor surface [7]. In general, this condition is not fulfilled. In 1992, Světlík showed that the observed speckle motions deviated from Yamaguchi's model predictions for off-axis arrangements [35]. Furthermore, he showed that if the ray path length differences are required to remain equal, as previously postulated by Archbold and Ennos [6], and later by Jacquot and Rastogi [34], then the resulting speckle motion equations can be made applicable to various geometric arrangements.

Later, in 1999, Hrabovský et al. developed a very fundamental model that concurred with Yamaguchi's findings but was also applicable to arbitrary sensor-object positions [13,19,37,39]. The drawback of this approach is that the analytical derivation of the equations involves heavy mathematical treatment, including integration over 16 variables [13,37]. Such heavy formulation can make the analysis appear rather distant from the fundamental physical characteristics of the speckle phenomenon. On the other hand, Gregory's older mirror-based model [30] is straightforward to understand and visualize, but its predictions do not fully agree with the findings of Yamaguchi and other later models. Gregory's model works well when the object is illuminated and observed at normal incidence but deviates from the other models for oblique geometries. This is because the mirror treatment essentially considers light behavior at a macroscopic scale but does not consider the interference effects caused by diffraction that inevitably arise in the presence of microscopic surface roughness.

2.1.5 Object Motion vs. Surface Motion

It is important to note that Speckle Imaging ultimately measures surface movements. However, the local surface motion within the illuminated spot may differ from the object rigid-body motion if the object has a sloped surface [12,38,40]. Therefore, Speckle Imaging requires a reasonably flat surface portion for successful analysis. Typically, this is not a severe restriction, particularly when assessing microscopic surface movements where motion magnitudes are a small fraction of the illuminated surface spot. Moreover, the curvature-induced effects appear only with very small illumination spot diameters [12].

2.2 Motivation for an Improved Speckle Imaging Model

2.2.1 Ideal Model Characteristics

An ideal theoretical model should be both simple and accurate. However, as discussed above, these two qualities are often in opposition. For example, Gregory's model [30] is straightforward to understand but not exact, while Hrabovský's representation [37] is more accurate but difficult to visualize. From a conceptual viewpoint, a simpler model is better suited for newcomers to learn a new phenomenon, while demanding measurement applications require the highest possible accuracy. However, this creates a challenging knowledge gap.
While the derived sensitivity equations contain the relationships among the different physical variables, the underlying physical foundation for those relationships can become obscured by the complicated mathematical derivation process, as well as by the systematic use of abbreviated expressions to list, e.g., trigonometric quantities in a more compact form. Therefore, if the complex model and the underlying theory are not well understood, it can become challenging to implement the method well.

2.2.2 Proposed Approach

Based on the reviewed literature, it appears that the main reason for the complexity of many existing Speckle Imaging models lies in the exact mathematical description of the diffraction effects. In comparison, Gregory's mirror-based model is significantly simpler because it 1) uses a geometric approach instead of analytical treatment, and 2) excludes the diffraction effects. While inclusion of the diffraction effects seems crucial for achieving high accuracy, the geometric approach remains attractive due to its visual nature. Therefore, the strategy for the conceptual model introduced in this thesis is to explain the speckle formation phenomenon using the simple mirror reflection concept as a starting point, and subsequently refine the model by introducing a geometric phase correction term that includes the diffraction effects. The new model is called the Speckle Hemisphere Model (SHM). Because the proposed model is based on an entirely geometric treatment, it complements the existing analytical representations. Furthermore, as the diffraction effects are separated from the reflection effects, it is easy to demonstrate how much the speckle motions truly differ from macroscopic light behavior, and to determine under which conditions the mirror-based model alone would be sufficiently accurate.

2.3 Speckle Hemisphere Model

2.3.1 Model Assumptions

The main emphasis is to help form an intuitive understanding of the speckle pattern movements caused by the various object rigid-body motion components. Therefore, the analysis is limited to non-deforming bodies; thus, strain components are excluded from the following derivation. Furthermore, specific attention is given to in-plane rotation sensitivity because the resulting speckle motions are substantially different from those caused by the other rigid-body motions.

A rough, diffuse object surface is modelled as a collection of tiny, randomly oriented mirrors [30]. Figure 2.1(a) shows how each individual mirror reflects light specularly, i.e., the angles of the incident and the reflected rays are symmetric with respect to the specific mirror surface normal. Because the surface is composed of many mirrors with different orientations, the mirrors collectively scatter light in all directions, as shown in Figure 2.1(b). Consequently, every point in space adjacent to the illuminated object receives light from across the entire illuminated area. When a monochromatic and coherent laser source is used, the overlapping light rays reflected from the various different mirrors interfere. This generates an objective speckle pattern, consisting of random spots of varying brightness, corresponding to different levels of constructive or destructive interference [8,41]. It is important to emphasize that the interference speckle field and individual speckles are three-dimensional; speckles are ellipsoidal, with their long axes oriented in the direction of light propagation [8].
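This formation process is straightforward to reproduce numerically. The sketch below is only an illustration of the principle, not the model derivation itself: it treats each illuminated micro-mirror as a point scatterer carrying a random phase (akin to the Huygens view cited in Section 2.1.3) and coherently sums the resulting spherical wavelets over a small sensor patch, producing a random pattern of bright and dark spots. All dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelength = 0.633e-6            # metres (assumed HeNe-like source)
k = 2 * np.pi / wavelength

# random point scatterers within a ~1 mm illuminated spot, each
# carrying a random phase set by the local surface roughness
n = 500
sx, sy = rng.uniform(-0.5e-3, 0.5e-3, (2, n))
phi0 = rng.uniform(0, 2 * np.pi, n)

# small sensor patch 0.5 m from the surface
L_C = 0.5
u = np.linspace(-1e-3, 1e-3, 128)
U, V = np.meshgrid(u, u)

field = np.zeros_like(U, dtype=complex)
for x0, y0, p0 in zip(sx, sy, phi0):
    r = np.sqrt((U - x0) ** 2 + (V - y0) ** 2 + L_C ** 2)
    field += np.exp(1j * (k * r + p0))   # coherent sum of spherical wavelets
speckle = np.abs(field) ** 2             # random bright/dark interference spots
```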
The three-dimensional objective speckle field can be observed by placing a recording medium, i.e., a screen, film, or a bare, lensless camera sensor, within the space adjacent to the object so that it receives a portion of the scattered light (Figure 2.1(b)). For small motions, the distance between the illuminated surface area and the camera sensor remains approximately constant. Therefore, the locus of all possible parts of the speckle field that may be sampled by the camera forms a hemispherical surface. The sensor location and size directly determine the speckle hemisphere radius and the size of the portion that is being sampled, respectively.

Figure 2.1 Diffuse object surface is modeled as a collection of randomly oriented mirrors. (a) Each surface mirror reflects incident light rays (solid green arrows) through specular reflection, so that the reflected rays (dashed green arrows) are oriented symmetrically with respect to the mirror surface normals (dashed black lines). (b) Speckle pattern formation. The multiple illuminated surface mirrors collectively scatter light in all directions. Each point adjacent to the object receives light from across the entire illuminated spot.

Any rigid-body surface displacement or rotation moves all the surface mirrors, which leads to shifting of the resulting speckle pattern that reaches the sensor location. Because the relative position and illumination angle changes are systematic for all the surface mirrors, all points within the resulting speckle hemisphere are similarly affected. Therefore, the speckle hemisphere appears to move in space as if it were a rigid body. Consequently, it is sufficient to image and monitor only small portions of the speckle hemisphere and use the observed local speckle motions to evaluate the corresponding motions of the illuminated surface.

2.3.2 Geometrical Arrangement

Figure 2.2 introduces a geometric description of the speckle hemisphere concept. A laser source S with a diverging beam illuminates a spot P on a flat object surface O. The distance from the beam focal point, the waist [42], to the object is L_S. A camera sensor C records a two-dimensional cross-section, i.e., a speckle pattern SP, of the resulting three-dimensional speckle field. The sensor distance L_C defines the radius of the conceptual speckle hemisphere that is displayed in pale green. The speckle patterns evolve with increasing propagation/sampling distance, as the relative angles and path lengths among the interfering light rays change. Therefore, the exact content of the recorded pattern depends on the sampling distance.

Figure 2.2 Speckle hemisphere formation. A laser source (S) illuminates a portion (P) of a diffuse test object (O) at a distance L_S. A portion of the scattered light is recorded by a camera sensor (C). The resulting image contains a speckle pattern (SP). The sensor distance L_C determines the radius of the conceptual speckle hemisphere displayed in pale green.

For the rest of the analysis, it is assumed that the observation distance, i.e., the hemisphere radius, is large in comparison to the sensor dimensions, so that the sampled portion can be considered effectively flat. For clarity, the following analysis is conducted with the further assumption that the object surface normal and the illumination and observation vectors all lie in the same xz-plane. This is easier to illustrate and simplifies the geometric expressions.
The source and sensor distances are measured along the illumination and observation directions, respectively, and the camera sensor is aligned normal to the observation direction.

2.3.3 In-plane Displacement dx

Figure 2.3 introduces the Speckle Imaging geometric configuration with the relevant parameters. The initial object surface location defines the xyz-coordinate system, while the sensor plane coordinate system is XY. The illumination and observation angles are θ and ψ, respectively. The angles are defined about the positive y-axis and with respect to the z-axis, i.e., the counterclockwise angles in Figure 2.3 are positive. Therefore, for the illustrated configuration, θ < 0 and ψ > 0.

A surface point P displaces along the x-axis by a small amount dx to a new location P'. Because speckles are generated by the local surface roughness, the displacement must remain small relative to the illuminated spot diameter d_spot, so that the same speckles can be seen before and after the surface motion. Furthermore, the illumination and observation distances (L_S and L_C) are assumed to be much larger than the illuminated spot. In Figure 2.3, the spot size and the displacement magnitude are greatly exaggerated for illustration purposes.

Since the speckles depend on the local details of the surface roughness, the scattered speckle hemisphere moves along with the surface when it is displaced, as indicated by the dashed green line in Figure 2.3. Furthermore, the light rays that illuminate the surface at P' are rotated clockwise by a small angle dθ about the y-axis in comparison to the rays that illuminated the original location P. Using the small angle approximation, dθ ≈ tan(dθ), and the geometry and sign conventions defined in Figure 2.3, the following expressions are obtained:

$$ t = dx\,\cos\theta, \qquad d\theta \approx -\frac{t}{L_S} \tag{2.1} $$

$$ d\theta \approx -\frac{dx\,\cos\theta}{L_S} \tag{2.2} $$

Figure 2.3 Speckle Imaging sensitivity to surface in-plane dx-displacements. A laser source (S) illuminates a portion of a test object (O). A sensor (C) records a portion of the scattered light. When a surface point P displaces to a new location P' by a distance dx, it generates a speckle motion component DX_dx. The corresponding analytical expressions are given by Equations (2.3 & 2.13). L_S: illumination distance, L_C: observation distance, θ: illumination angle, dθ: change in illumination angle, ψ: observation angle, dψ: speckle rotation angle, d_spot: diameter of the illuminated area, xyz: object surface coordinate system, XY: sensor plane coordinate system.

According to Equation (2.2), the change in the illumination angle is linearly proportional to the surface displacement and inversely proportional to the source distance. Since the rays illuminating the specific surface point are rotated by dθ, the corresponding light rays reflected from the surface mirrors through specular reflection are rotated by the same magnitude but symmetrically with respect to the surface normals of the tiny mirrors, i.e., dψ = −dθ. This means that the overall speckle shift observed at the sensor is the summation of two effects: 1) the shift of the surface, and 2) the rotation of the light rays that give rise to the recorded speckles.
With the reflection considerations alone, the observed speckle shift due to a surface x-displacement, DX_dx,r, is:

$$ DX_{dx,r} = dx\,\cos\psi + L_C\,d\psi = dx\,\cos\psi + L_C\,\frac{dx\,\cos\theta}{L_S} = dx\left(\cos\psi + \beta\,\cos\theta\right) \tag{2.3} $$

The factor cos(ψ) is included because the sensor is sensitive only to motions parallel to its surface plane. If the sensor normal is tilted with respect to the z-axis, the recorded speckle shift equals the hemisphere motion component along the sensor surface plane. The factor β is the ratio of the observation distance L_C and the illumination distance L_S:

$$ \beta = \frac{L_C}{L_S} \tag{2.4} $$

The speckle motion sensitivity increases when the observation distance is increased with respect to the illumination distance. On the other hand, if the illumination beam is collimated, its focal point is effectively at infinity. In such a case, the related β-value becomes zero, and the measurement sensitivity is independent of the observation distance.

2.3.4 Phase Correction

Equation (2.3) is identical to the expression derived by Gregory [30]. However, the above analysis considers the motion of only a single illuminated surface point and the orientations of one incident and a corresponding reflected light ray. It does not consider the important key aspect of speckle formation, namely that speckles are formed by a complex interference of multiple light rays that have varying path lengths and directions. Therefore, speckles do not necessarily strictly follow the movements of the reflected light rays, but instead shift to a location that preserves the original phase distribution. The initial phase distribution is maintained if the ray optical path length differences (OPD) across the illuminated spot [6] are the same before and after surface displacement.

Figure 2.4 illustrates the geometric path length characteristics. The laser source S illuminates a circular area (diameter d_spot) on the object surface O. The illumination path length varies across the illuminated spot because of the oblique incidence angle. The maximum illumination length difference occurs for the rays on the adjacent sides of the beam. Since the illumination distance is large in comparison to the illuminated spot diameter, the variance of the illumination angle across the spot is small. Thus, the illumination path length difference can be approximated as:

$$ OPD_S = L_{S,B} - L_{S,A} \approx d_{spot}\,\sin\theta \tag{2.5} $$

Note that for the specific configuration displayed in Figure 2.4, the illumination angle is negative (θ < 0), so the upper side of the beam travels a longer distance (L_S,A) than the lower side of the beam (L_S,B). Consequently, the path length difference with the above definition is negative, which complies with the angle sign convention, as sin(θ < 0) < 0.

Figure 2.4 Illumination and observation path length variations across the illuminated spot. The corresponding analytical expression is given by Equation (2.7). A laser source (S) illuminates a portion of a test object (O). A sensor (C) records a portion of the scattered light. L_S: illumination distance, L_C: observation distance, θ: illumination angle, ψ: observation angle, d_spot: illuminated spot diameter, OPD_S: illumination path length difference between rays L_S,A and L_S,B, OPD_C: observation path length difference between rays L_C,A and L_C,B.

A specific location on the imaging sensor C receives light reflected from various surface mirrors across the illuminated region d_spot.
The ray propagation distances from the object to the sensor vary analogously to the illumination distances:

$$ OPD_C = L_{C,B} - L_{C,A} \approx d_{spot}\,\sin\psi \tag{2.6} $$

Thus, the total range of ray path lengths from the source to the sensor via the object surface is:

$$ OPD_{tot} = OPD_S + OPD_C = d_{spot}\left(\sin\theta + \sin\psi\right) \tag{2.7} $$

If the surface displaces in the +x-direction by a small amount dx (dx ≪ d_spot), the magnitude of the illumination angle (for the geometry shown in Figure 2.4) is slightly increased according to Equation (2.2). This increases the magnitude of the illumination path length difference OPD_S. However, in order to observe the same speckles after the displacement, the overall path length difference OPD_tot should remain unchanged. This condition can be fulfilled only if the range of observation path lengths OPD_C correspondingly increases, i.e., the observed speckles must rotate so that the observation angle becomes larger. Analytically, the requirement for the overall path length difference to remain unchanged means that its derivative must be zero:

$$ d(OPD_{tot}) = 0 \tag{2.8} $$

$$ d_{spot}\left[d(\sin\theta) + d(\sin\psi)\right] = 0 \tag{2.9} $$

$$ d_{spot}\left[d\theta\,\cos\theta + d\psi\,\cos\psi\right] = 0 \tag{2.10} $$

$$ d\psi = -\frac{\cos\theta}{\cos\psi}\,d\theta \tag{2.11} $$

The phase constancy condition (2.11) can alternatively be derived from the general diffraction equation by modeling speckle formation as a diffraction phenomenon. This analysis is presented in Chapter 4, Section 4.3. Using Equation (2.11), the phase-corrected speckle motion due to a surface x-displacement is:

$$ DX_{dx} = dx\,\cos\psi + L_C\,d\psi = dx\,\cos\psi - L_C\,\frac{\cos\theta}{\cos\psi}\,d\theta = dx\,\cos\psi + \frac{L_C\cos^2\theta}{\cos\psi}\,\frac{dx}{L_S} \tag{2.12} $$

$$ DX_{dx} = dx\left(\cos\psi + \beta\,\frac{\cos^2\theta}{\cos\psi}\right) \tag{2.13} $$

According to Equation (2.11), the speckle rotation angles deviate from the ray rotation angles (dψ = −dθ) unless the illumination and observation angles have equal magnitudes. Therefore, the entire speckle hemisphere does not move completely rigidly; instead, the recorded movement on the sensor depends on the local observation angle. However, the speckle motions within the small sampled sensor area are approximately uniform. Moreover, if the laser and the sensor are in the same direction, arranged symmetrically with respect to the surface normal, or both close to normal incidence, the observed speckles move similarly to macroscopic light, as observed in the reflections from a disco ball.

2.3.5 In-plane Displacement dy

When the laser and the sensor are located in the xz-plane, the geometry in the yz-plane corresponds to a case of normal illumination and observation, with the sensor vertical Y-axis parallel to the y-axis and the object surface, as illustrated in Figure 2.5. Similar to the x-displacement, a surface y-displacement dy shifts the speckle hemisphere by the same amount along the y-axis, as indicated by the dashed green line. Consequently, the light rays that illuminate the shifted point are rotated by an amount dθ:

$$ d\theta \approx \frac{dy}{L_S} \tag{2.14} $$

Under normal illumination and observation conditions, the illumination and observation angles are equal and zero, so the speckles rotate by the same amount as the illuminating rays, i.e., dψ = −dθ. Therefore, the total speckle motion due to a y-displacement, DY_dy, is:

$$ DY_{dy} = dy + L_C(-d\psi) = dy + L_C\,d\theta = dy + L_C\,\frac{dy}{L_S} = dy\,(1 + \beta) \tag{2.15} $$

where the minus sign accounts for the negative angle direction.

Figure 2.5 Speckle Imaging sensitivity to surface in-plane dy-displacements. When the illuminated surface displaces by a distance dy, it generates a speckle motion component DY_dy.
The corresponding analytical expression is given by Equation (2.15). A laser source (S) illuminates a portion of a test object (O). A sensor (C) records a portion of the scattered light. L_S: illumination distance, L_C: observation distance, dθ: change in illumination angle, dψ: speckle rotation angle.

2.3.6 Out-of-plane Displacement dz

Since speckles are three-dimensional, the speckle hemisphere moves in the z-direction in response to a surface z-displacement dz. If the sensor is located away from the object z-axis, the speckle motion has a component parallel to the sensor plane X-axis. The shifted speckle location is indicated by the dashed green line in Figure 2.6.

Figure 2.6 Speckle Imaging sensitivity to surface out-of-plane dz-displacements. When the illuminated surface displaces by a distance dz, it generates a speckle motion component DX_dz. The corresponding analytical expression is given by Equation (2.19). A laser source (S) illuminates a portion of a test object (O). A sensor (C) records a portion of the scattered light. L_S: illumination distance, L_C: observation distance, θ: illumination angle, dθ: change in illumination angle, ψ: observation angle, dψ: speckle rotation angle.

In addition, if the illumination angle is nonzero, the rays illuminating the dz-shifted surface are rotated with respect to the rays that illuminated the initial position. Analogously to the x-displacements, and following the geometry shown in Figure 2.6, the illumination angle rotation dθ depends on the dz-displacement according to:

$$ q \approx dz\,\sin\theta, \qquad d\theta \approx \frac{q}{L_S} \tag{2.16} $$

$$ d\theta \approx \frac{dz\,\sin\theta}{L_S} \tag{2.17} $$

Similar to the x-displacements, the rotation of the speckles dψ depends on the geometry so that the initial phase distribution is maintained according to Equation (2.11). Consequently, the total speckle motion along the sensor X-axis, DX_dz, is:

$$ DX_{dz} = -dz\,\sin\psi + L_C\,d\psi = -dz\,\sin\psi - L_C\,\frac{\cos\theta}{\cos\psi}\,\frac{dz\,\sin\theta}{L_S} \tag{2.18} $$

$$ DX_{dz} = -dz\left(\sin\psi + \beta\,\frac{\cos\theta\,\sin\theta}{\cos\psi}\right) \tag{2.19} $$

The Speckle Imaging out-of-plane sensitivity has triangulation characteristics; the sensitivity is generally maximized by placing the laser and the sensor on the same side of the surface normal and maximizing the illumination and observation angles. This way, the object and the generated speckles move laterally, parallel to the sensor plane. On the other hand, if the laser and the sensor were located on opposite sides of the surface normal, then the speckle motion component due to the rotation of the illumination angle would be in the opposite direction to the speckle motion due to the surface shift. Consequently, the two components would partially cancel out. If the laser and the sensor are arranged symmetrically, the observed speckle motions would be zero for small surface displacements. For larger surface motions (not governed by these equations), the scale of the speckle hemisphere would change, and the observed speckle patterns would appear to expand or shrink, with the speckle motion characterized by a radial vector field.

2.3.7 Out-of-plane Rotation ωy

If the illuminated surface is tilted about the y-axis by an angle ω_y (in radians), the entire speckle hemisphere rotates by the same amount, dψ_1 = ω_y, since speckles are generated by the surface roughness. In Figure 2.7, the new location of the rotated speckle is indicated by the dashed green line. However, the surface tilt also changes the illumination angle by an amount dθ = −ω_y, as shown in Figure 2.7.
This leads to an additional speckle rotation according to Equation (2.11):

$$ d\psi_2 = -\frac{\cos\theta}{\cos\psi}\,d\theta = \frac{\cos\theta}{\cos\psi}\,\omega_y \tag{2.20} $$

The total speckle rotation angle is thus:

$$ d\psi_{tot} = d\psi_1 + d\psi_2 = \omega_y + \frac{\cos\theta}{\cos\psi}\,\omega_y = \omega_y\left(1 + \frac{\cos\theta}{\cos\psi}\right) \tag{2.21} $$

The speckle rotation leads to a locally linear speckle motion along the sensor X-axis. With the sensor normal pointed towards the illuminated surface spot, the observed movement corresponds to the tangential motion at an observation radius L_C. When the tilt angle is small, the observed motion is simply:

$$ DX_{\omega_y} = L_C\,d\psi_{tot} = L_C\,\omega_y\left(1 + \frac{\cos\theta}{\cos\psi}\right) \tag{2.22} $$

The observed speckle motion scales linearly with the sampling distance and is affected by both the illumination and observation angles. However, unlike linear displacements, speckle rotations are insensitive to the source distance.

Figure 2.7 Speckle Imaging sensitivity to surface out-of-plane rotations about the y-axis, ωy. When the illuminated surface rotates by an angle ω_y (in radians), it generates a speckle motion component DX_ωy. The corresponding analytical expression is given by Equation (2.22). A laser source (S) illuminates a portion of a test object (O). A sensor (C) records a portion of the scattered light. L_S: illumination distance, L_C: observation distance, θ: illumination angle, dθ: change in illumination angle, ψ: observation angle, dψ: speckle rotation angle.

2.3.8 Out-of-plane Rotation ωx

Surface tilts about the x-axis have similar characteristics to the tilts about the y-axis. However, with the illumination and sampling directions restricted to the xz-plane, the geometry in the yz-plane corresponds to normal incidence (Figure 2.8). Therefore, in contrast to Equations (2.20-2.21), dψ_2 = −dθ, and dψ_tot = −2ω_x. Consequently, the observed speckle motions along the sensor Y-axis are:

$$ DY_{\omega_x} = -2\,L_C\,\omega_x \tag{2.23} $$

The minus sign in the applied angle −ω_x follows from the geometric layout of the coordinate system that is used.

Figure 2.8 Speckle Imaging sensitivity to surface out-of-plane rotations about the x-axis, ωx. When the illuminated surface rotates by an angle ω_x (in radians), it generates a speckle motion component DY_ωx. The corresponding analytical expression is given by Equation (2.23). A laser source (S) illuminates a portion of a test object (O). A sensor (C) records a portion of the scattered light. L_S: illumination distance, L_C: observation distance, dθ: change in illumination angle, dψ: speckle rotation angle.

2.3.9 In-plane Rotation ωz

For all the preceding rigid-body motion components, a small surface displacement (dx, dy, dz) or tilt (ω_x, ω_y) leads to uniform, linear speckle motions in the sensor plane. This occurs when the sampling distance L_C is sufficiently large so that the sampled portion of the speckle hemisphere is effectively flat. However, the situation is different for surface in-plane rotations (ω_z) about the object surface normal. Because all of the tiny surface mirrors rotate, the entire speckle hemisphere rotates by the same angle, as illustrated in Figure 2.9. Due to the orientation of the rotation axis, the speckle motions observed in the sensor plane strongly depend on the sensor distance and angle. If the sensor is located on the speckle hemisphere rotation axis, the recorded speckle motion is pure rotation.
However, even moderate offsets from this location make the observed motions mainly linear, because the sensor captures only the local tangential motion of the rotating hemisphere. The amplitude and direction of these tangential motions vary with the sensor offset from the rotation center, as shown by the green arrows in Figure 2.9.

Figure 2.9 Speckle motion field resulting from object in-plane rotation about the z-axis, ωz. A laser source (S) illuminates a portion of a test object (O). A sensor (C) records a portion of the scattered light. When the illuminated surface rotates by an angle ω_z (in radians), it makes the recorded speckle pattern rotate by an equal amount. L_S: illumination distance, L_C: observation distance, xyz: object surface coordinate system, XY: sensor plane coordinate system.

The detailed characteristics of the resulting speckle motions depend on the exact illumination geometry. Four different cases are introduced below. To simplify the conceptual comparison of the different cases, in this section the imaging sensor is assumed to be fixed on the z-axis, independent of the location of the illuminated spot. This is contrary to the general assumption where the sensor plane normal is set to point towards the illuminated spot.

(1) Surface center of rotation illuminated at normal incidence
When the surface center of rotation (CoR) is illuminated at normal incidence, as shown by source S1 in Figure 2.10, the resulting speckle hemisphere rotates about the illumination axis.

Figure 2.10 Speckle hemisphere center of rotation dependence on illumination offset and angle. When the laser source position and orientation are changed (S1-S4), the resulting speckle pattern center of rotation is correspondingly shifted. ω_z: surface in-plane rotation, O: object, C: sensor, L_S: illumination distance, L_C: observation distance, xyz: object surface coordinate system, XY: sensor plane coordinate system.

(2) Surface center of rotation illuminated at an oblique angle
If the rotation center is illuminated at an oblique angle by source S2, the speckle hemisphere rotates about an axis that is oriented at a symmetrically offset angle with respect to the surface normal. In this case, the speckle hemisphere rotation axis corresponds to the direction of specular reflection. The tiny surface mirrors that reflect light into this angle must lie parallel to the surface plane, so that the surface in-plane rotation affects neither their orientation nor the reflection angles. If the sensor is at normal incidence, i.e., the sensor center is on the z-axis, the speckle hemisphere rotation center coordinate along the sensor X-axis depends on the sampling distance. Using the geometry shown in Figure 2.10:

$$ X_{CoR,tilted} = L_C\,\tan(-\theta) = -\beta\,L_S\,\tan\theta \tag{2.24} $$

(3) A point offset from the surface center of rotation illuminated at normal incidence
If the illuminated spot is offset from the surface rotation axis, as illustrated by source S3 in Figure 2.10, the corresponding surface motion is a combination of in-plane rotation and in-plane displacement. The in-plane rotation alone would make the speckle hemisphere rotate, while the in-plane displacement would cause the speckle hemisphere to shift and tilt according to Equations (2.13 & 2.15). The overall observed speckle motion is a vector summation of these two motion components.
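The character of this vector summation can be verified numerically. The following sketch (an illustrative check, not part of the thesis derivation) adds a small rigid-rotation motion field to a uniform translation field and locates the point where the combined field vanishes, confirming that the sum is again a rotation about a shifted center:

```python
import numpy as np

w = 1e-3                       # small in-plane rotation angle, radians
X, Y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))

rot = np.stack([-w * Y, w * X])                                   # rotation about the origin
shift = np.stack([np.zeros_like(X), 0.5 * w * np.ones_like(X)])   # uniform translation field

total = rot + shift
# the combined field vanishes where -w*Y = 0 and w*X + 0.5*w = 0,
# i.e. at (X, Y) = (-0.5, 0): a pure rotation about a shifted centre
i = np.unravel_index(np.argmin(np.hypot(*total)), X.shape)
print(X[i], Y[i])              # -> -0.5 0.0
```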
When a rotating vector field is combined with a constant unidirectional vector field, the result is a rotating vector field that is shifted with respect to the initial vector field [43,44]. In the case of normal illumination at a small offset x_offset from the surface CoR, the speckle hemisphere CoR on the sensor plane would be shifted in the same direction by the corresponding amount X_CoR,R = x_offset due to pure rotation alone. Along the sensor X-axis, the local tangential motion of the rotating speckle hemisphere is purely in the Y-direction, as illustrated in Figure 2.9. The observed speckle motion magnitude scales linearly with distance from the speckle hemisphere CoR, so the observed Y-displacement due to in-plane rotation (R) is:

$$ DY_{\omega_z,R} = \tan(\omega_z)\left(X - X_{CoR,R}\right) = \tan(\omega_z)\left(X - x_{offset}\right) \approx \omega_z\left(X - x_{offset}\right) \tag{2.25} $$

where X is the sensor X-coordinate. The last approximation holds for small in-plane rotations.

Conversely, the x-offset illumination spot has an in-plane translation component (T) along the y-axis. Its magnitude depends linearly on the offset from the object rotation axis and the applied rotation angle:

$$ dy_T = x_{offset}\,\tan(\omega_z) \approx x_{offset}\,\omega_z \tag{2.26} $$

According to Equation (2.15), the corresponding speckle Y-displacement on the sensor is:

$$ DY_{\omega_z,T} = (1 + \beta)\,dy_T = (1 + \beta)\,x_{offset}\,\omega_z \tag{2.27} $$

The overall speckle hemisphere center of rotation occurs at the sensor X-coordinate X_CoR,offset where the speckle motion components due to pure in-plane rotation and pure y-displacement sum to zero:

$$ DY_{\omega_z,R} + DY_{\omega_z,T} = 0 \tag{2.28} $$

This leads to the condition:

$$ X_{CoR,offset} = -\beta\,x_{offset} \tag{2.29} $$

(4) A point offset from the surface center of rotation illuminated at an oblique angle
When the illumination occurs at an oblique angle and the illuminated spot is offset from the object CoR, as shown by source S4 in Figure 2.10, the two effects are simply combined. The general equation that describes the speckle hemisphere rotation center X-coordinate on the sensor is:

$$ X_{CoR} = -\beta\left(x_{offset} + L_S\,\tan\theta\right) \tag{2.30} $$

Equations (2.24-2.30) apply when the illumination offset is much smaller than the illumination and observation distances.

2.3.10 Combined Object Motions

Table 2.1 lists the Objective Speckle Imaging sensitivity equations for each object motion component. The equations fully correspond to the model developed by Hrabovský et al. [13,37], using the trigonometric notations presented in [19,39].

Table 2.1 Objective Speckle Imaging sensitivity equations.

In-plane displacement dx:
$$ DX_{dx} = dx\left(\cos\psi + \frac{L_C}{L_S}\,\frac{\cos^2\theta}{\cos\psi}\right) \tag{2.31} $$
In-plane displacement dy:
$$ DY_{dy} = dy\left(1 + \frac{L_C}{L_S}\right) \tag{2.32} $$
Out-of-plane displacement dz:
$$ DX_{dz} = -dz\left(\sin\psi + \frac{L_C}{L_S}\,\frac{\cos\theta\,\sin\theta}{\cos\psi}\right) \tag{2.33} $$
Out-of-plane rotation (tilt) ωx:
$$ DY_{\omega_x} = -2\,L_C\,\omega_x \tag{2.34} $$
Out-of-plane rotation (tilt) ωy:
$$ DX_{\omega_y} = L_C\,\omega_y\left(1 + \frac{\cos\theta}{\cos\psi}\right) \tag{2.35} $$
In-plane rotation ωz: vector field (Section 2.3.9)

2.3.11 Speckle Decorrelation

Since speckles are defined by the path length differences of the interfering light rays, the initial phase distribution should be perfectly reproduced (only laterally shifted) on the sensor plane even after surface movement in order to observe the same speckles. In reality, this is impossible, as object motion changes the illuminated portion of the surface. Some of the initially illuminated surface mirrors no longer receive light after the surface motion, while some initially inactive mirror facets have been introduced. Moreover, while the speckle field is a three-dimensional arrangement of radial speckle ellipsoids, it is sampled using a flat sensor that is fixed in space.
2.3.11 Speckle Decorrelation
Since speckles are defined by the path length differences of the interfering light rays, the initial phase distribution should be perfectly reproduced (only laterally shifted) on the sensor plane even after surface movement in order to observe the same speckles. In reality, this is impossible, as object motion changes the portion of the surface that is illuminated. Some of the initially illuminated surface mirrors no longer receive light after surface motion, while some initially inactive mirror facets are newly introduced. Moreover, while the speckle field is a three-dimensional arrangement of radial speckle ellipsoids, it is sampled using a flat sensor that is fixed in space. Therefore, the speckle pattern recorded after surface motion does not necessarily represent the exact same cross-section, or part of the speckle hemisphere, as the reference speckle pattern. Consequently, the observed speckles change their appearance in response to surface motion. If the motion is too large, the final speckles no longer bear any resemblance to the initial reference pattern. This is known as speckle decorrelation [7]. Decorrelation limits the magnitude of motion increments that can be measured using Speckle Imaging. Typically, surface displacements must be significantly smaller than the diameter of the illuminated spot.

2.4 Conclusion
The presented geometric Speckle Hemisphere Model provides a simple alternative to the existing analytical models for understanding and visualizing the speckle motions resulting from surface movements. The three-dimensional speckle field generally behaves similarly to the reflections from a disco ball, although some differences arise from interference and diffraction effects. The derived sensitivity equations correspond to those predicted by the existing, more complex models.

Chapter 3: Remote Surface Motion Measurements Based on Defocused Speckle Imaging

In the basic configuration of Speckle Imaging, the interference field scattered from the object is recorded using a lensless sensor. If, however, the recording is instead done using a camera, i.e., a lens is placed in front of the sensor, the resulting speckle pattern changes. This change occurs because the lens refracts and redirects the incident light rays, so the distribution of light rays that reaches the sensor is altered. Manipulating light propagation by camera focus adjustments is found to enable substantial control of Speckle Imaging sensitivity. To understand how this is possible, this chapter begins with a brief geometric optics introduction to optical image formation. This is followed by a discussion of the role of defocus and the phase aspects of imaging, and how they affect camera-based Speckle Imaging measurements. Finally, the sensitivity equations for Defocused Speckle Imaging are derived, and an optimal arrangement is proposed for remote surface motion measurements under multiaxial object motion. Sections 3.4-3.6 have been published in Optics and Lasers in Engineering under the title "Remote Surface Motion Measurements using Defocused Speckle Imaging" [45].

3.1 Basics of Image Formation
3.1.1 Thin Lens Model
A simple camera consists of a lens and a sensor. When the camera is focused at the object, the lens forms a sharp image on the sensor plane by focusing every point on the object surface onto a separate, distinct point on the sensor. A point on the object can be considered as a source that radiates light rays in various directions. The lens captures some of the emitted rays and refracts them so that the rays overlap on a single point on the sensor. A simple way to express image formation analytically is to use the geometric thin lens model, which approximates a lens as an infinitely thin structure. Figure 3.1 illustrates image formation through a thin lens using three special rays: 1) the ray that goes through the lens center is undeviated; 2) the ray that initially travels parallel to the optical axis is refracted to propagate through the lens back focal point (BFP); 3) the ray that goes through the front focal point (FFP) is refracted to propagate parallel to the optical axis. The image is formed at the point where the three example rays converge.
It is thus apparent that the image location depends on the object distance and the lens focal length. If the object is located at a distance $d_o$ from a lens that has a focal length $f$, then the corresponding image is formed on the other side of the lens at a distance $d_i$ according to the relation:

$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}$ (3.1)

The thin lens model works in the paraxial regime, where the light rays propagate at small angles with respect to the optical axis. In other words, the lens diameter $d_{lens}$ must be small in comparison to the object and image distances. In Figure 3.1, the lens diameter and the ray angles are exaggerated for illustration purposes. According to Equation (3.1), an increase in object distance reduces the image distance and vice versa; the image of a remote object (large $d_o$) is formed close to the lens, whereas the image of a nearby object is located far from the lens. In the extreme case where the object is at infinity, the image is located at the lens back focal plane. Correspondingly, if the object is placed at the lens front focal plane, the image appears at infinity.

Figure 3.1 Image formation through a thin lens. $f$: lens focal length, $d_o$: object distance, $d_i$: image distance, $h_o$: object height, $h_i$: image height, FFP: front focal point, BFP: back focal point.

The arrangement in Figure 3.1 is reversible, such that accurate focus is retained when the locations of the object and the image are interchanged. To clarify descriptions, the object side of the lens is referred to as the object space, and the image side as the image space.

3.1.2 Image Scale
Figure 3.1 shows that the image size is in proportion to its distance from the lens. The imaging magnification ratio $M$ is defined as the image size $h_i$ relative to the object size $h_o$. By similar triangles:

$\frac{h_i}{d_i} = \frac{h_o}{d_o}$ (3.2)

Therefore, magnification can be expressed simply as the ratio of the image distance and the object distance:

$M = \frac{h_i}{h_o} = \frac{d_i}{d_o}$ (3.3)

Equation (3.3) is commonly expressed with a minus sign because the resulting image appears upside down. In digital cameras, however, the image is electronically rotated by 180° to make the final image appear upright. Therefore, for simplicity, the minus sign is omitted here and in the following equations. In photography, high magnifications with an image size greater than or equal to the object size ($M \geq 1$) are known as macro configurations. According to Equations (3.1) and (3.3), unitary magnification is achieved when $d_o = d_i = 2f$. Since typical camera lens focal lengths are in the range of 10-100 mm, this means that the object must be very close to the camera to be imaged at high magnification. By combining Equations (3.1) and (3.3), magnification can be expressed as:

$M = \frac{d_i}{f} - 1$ (3.4)

Consequently, to increase the magnification in a camera, the lens must move away from the sensor. This correspondingly reduces the focus distance in the object space. If magnification is alternatively expressed in terms of the object distance, it becomes:

$M = \frac{f}{d_o - f} \approx \frac{f}{d_o} \quad \text{for } d_o \gg f$ (3.5)

Therefore, for remote objects, the magnification is inversely proportional to the object distance, and remote objects appear smaller than nearby objects. This is the fundamental characteristic and limitation of perspective imaging, shared by all conventional cameras and by human vision. Since camera sensors have limited resolution, object distance determines the size of details that can be detected. In motion measurements, object distance controls the magnitude of the minimum motion that can be resolved.
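As a quick numerical illustration of Equations (3.1), (3.3) and (3.5), the following Matlab sketch uses arbitrary example values:

% Thin lens relations, Eqs. (3.1), (3.3) and (3.5); example values only.
f  = 0.05;                 % lens focal length [m]
do = 10;                   % object distance [m]
di = 1/(1/f - 1/do);       % image distance, Eq. (3.1): 1/do + 1/di = 1/f
M  = di/do;                % magnification, Eq. (3.3)
fprintf('di = %.4f m, M = %.5f, f/do = %.5f\n', di, M, f/do);  % M ~ f/do, Eq. (3.5)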
3.2 Defocus
3.2.1 Cause of Defocus
The focal plane in the object space is the surface at which the camera is focused. It corresponds to the object distance that fulfills the thin lens equation for a specific image distance. The object must lie at the focal plane for its image to appear sharp. Figure 3.2(a) shows what happens when the object is offset from the focal plane; the light rays that are emitted from an object point do not perfectly converge at the sensor plane but are instead spread over a finite area. The resulting image appears blurred, with details washed away; the camera is said to be defocused. Figure 3.2(b) shows the same image formation example from the viewpoint of the sensor plane: the light rays that converge onto a single point on the image plane originate from a finite area on the object surface. If the camera focus distance is shorter than the physical object distance, as in the example in Figure 3.2, the camera is said to be near-focused. Conversely, if the object is located between the focal plane and the lens, the camera is far-focused.

Figure 3.2 Defocused camera blur characteristics. (a) Image space blur, (b) object space blur. When the camera is defocused, light emitted from an object point and imaged through the lens is spread across a diameter $D_{blur}$ on the sensor plane. Correspondingly, a point on the sensor plane receives light from the object across an extended area of diameter $d_{blur}$.

When the camera is defocused, Figure 3.2 shows that the lens diameter directly affects the extent of the blur; the light rays transmitted through a large-diameter lens span a greater range of propagation angles than those through a similar small-diameter lens. In a camera, the effective lens diameter is controlled with an adjustable aperture element that blocks the outermost portion of the lens. The aperture size is characterized by the "f-number" $f_\#$ that relates the effective lens diameter $d_{lens}$ to the lens focal length $f$:

$d_{lens} = \frac{f}{f_\#}$ (3.6)

Therefore, the amount of blur can be limited using a high f-number, but this concurrently reduces image brightness, since a large portion of the light incident on the lens is blocked.

3.2.2 Blur in the Object Space
The severity of defocus is characterized by the diameter of the blurred spot. Using the geometry shown in Figure 3.2(b), the blur diameter in the object space (on the object surface) $d_{blur}$ can be expressed using similar triangles:

$\frac{d_{blur}}{\Delta L} = \frac{d_{lens}}{d_o}$ (3.7)

$d_{blur} = \frac{\Delta L}{d_o} d_{lens}$ (3.8)

where $\Delta L$ is the defocus distance that describes how much the object is offset from the focal plane. Here, positive defocus values describe near-focus, and negative values describe far-focus. According to Equation (3.8), the blur diameter increases linearly with defocus distance and lens aperture diameter but is inversely proportional to the focus distance $d_o$. If the defocus distance $\Delta L$ is sufficiently increased, the blur diameter reaches the size of the object, which means that every point on the sensor receives light from across the entire object. Consequently, the resulting image is completely diffused and contains no spatial information about the object.
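A minimal numerical check of Equation (3.8), with illustrative values, shows how quickly the object-space blur grows with defocus:

% Object-space blur diameter, Eqs. (3.6) and (3.8); example values only.
f     = 0.05;              % focal length [m]
fnum  = 4;                 % lens f-number
do    = 4;                 % camera focus distance [m]
dL    = 0:0.5:8;           % defocus distances [m] (positive = near-focus)
dlens = f/fnum;            % effective aperture diameter, Eq. (3.6)
dblur = (dL/do)*dlens;     % blur diameter on the object, Eq. (3.8)
plot(dL, 1e3*dblur); xlabel('\DeltaL [m]'); ylabel('d_{blur} [mm]');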
3.2.3 Blur in the Image Space
The blur diameter in the image space (on the sensor surface) $D_{blur}$ can be calculated using the lens imaging property. Referring to Figure 3.2(a), the rays that originate from a point on the object and pass through the lens aperture are spread into a cone that has a diameter $d_{cone}$ on the focal plane. Using similar triangles:

$\frac{d_{cone}}{\Delta L} = \frac{d_{lens}}{\Delta L + d_o}$ (3.9)

$d_{cone} = \frac{\Delta L}{\Delta L + d_o} d_{lens}$ (3.10)

Since there is a one-to-one correspondence between the points on the focal plane and the points on the image plane, the cross-section of the cone of rays on the focal plane is reproduced on the image plane at a modified scale $M$, so that:

$D_{blur} = M d_{cone} = M \frac{\Delta L}{\Delta L + d_o} d_{lens} \approx M d_{lens} \quad \text{when } \Delta L \gg d_o$ (3.11)

The amount of image blur increases with defocus. At very large defocus distances ($\Delta L \gg d_o$), the blur diameter in the image space approaches a constant value. This can be understood by looking at the ray propagation angles; if the object is at infinity, the rays reaching the focal plane and the lens aperture are parallel. Therefore, the bundle of rays that reaches the lens has a cross-sectional area and shape equal to those of the aperture, and the corresponding bundle of rays on the image plane has the same dimensions but scaled by the magnification ratio.

3.3 Phase Aspects of Image Formation
All the preceding analysis was based on studying the divergence, propagation, and convergence of light rays. In the context of Speckle Imaging, however, it is necessary to also consider how the overall phase distribution of the speckle field evolves as the interfering light rays transmit through the imaging system. The rays passing through the outside part of the lens have greater propagation angles than the central rays near the optical axis, so their physical path lengths are greater. Therefore, it would seem logical that propagation through a lens would modify the phase distribution of the interfering light rays, and thus change the appearance of the recorded speckles in comparison to the lensless imaging configuration. However, the central rays pass through a greater thickness of lens. Because the lens is made of glass, which has a higher refractive index than air, light rays propagate through the lens more slowly and thus have a shorter wavelength. Therefore, there are more wavelengths contained within the lens than within a similar distance in air. This compensates for the varying physical path distance, causing the total number of wavelengths covered by an off-axis ray between the object and the image to be the same as the total number of wavelengths covered by a central ray. Consequently, the accumulated phases of all rays converging at any point within the image are the same, and the initial phase distribution is thus conserved. The above explanation conforms to Fermat's principle, which describes a lens as a phase function that connects all rays from a point on the object to a corresponding point in the image with an equal phase [46]. Figure 3.3 illustrates this graphically. An object point can be considered as a point source that emits spherical waves with diverging wavefronts (light propagates normal to the wavefronts). The lens captures a portion of the emitted light, and the wavelength is reduced inside the lens. The refraction at the lens interfaces changes the wavefront curvature, so that the transmitted waves have converging wavefronts, causing light to converge into a single point on the image plane. Since all points along a wavefront have equal phase, the total accumulated phase from the object point through the lens onto the corresponding image point must be the same for all rays.
Figure 3.3 Phase aspects of image formation. The lens changes wavefront curvature: initially diverging wavefronts emitted by a single object point are refracted at the lens interfaces so that light converges into a single point on the image plane. The light wavelength is smaller inside the lens.

3.4 Interpretation and Characteristics of Defocused Speckle Imaging
Since the accumulated phase is the same for all rays, independent of their propagation angles, imaging through a lens preserves the phase distribution that is incident on the focal plane. For Speckle Imaging, this means that the recorded speckle pattern reproduces the light interference pattern that exists on the focal plane. This works even for a defocused camera; if the focal plane is offset from the object surface, the recorded image reproduces the pattern that would be observed by placing a lensless sensor on the focal plane (Figure 3.4). However, the lens aperture limits which rays can reach the sensor surface, and the recorded defocused speckle pattern is linearly scaled by the camera magnification ratio.

Figure 3.4 Speckle formation in a defocused camera. When the camera is sufficiently defocused, the light phase distribution present on the focal plane is reproduced on the image plane.

Provided that the defocus distance and lens aperture are sufficiently large that the blur diameter on the object surface exceeds the diameter of the illuminated spot $d_{spot}$, the resulting speckles are identical to the objective speckles recorded at the focal plane (apart from the scale factor $M$). This condition can be expressed by the simple requirement $d_{blur} > d_{spot}$. By combining Equations (3.6) and (3.8) and rearranging:

$\Delta L > \frac{f_\#}{f}\, d_o\, d_{spot}$ (3.12)

When Equation (3.12) holds, the defocus distance becomes the effective speckle field sampling distance. The defocus distance can be changed by a simple camera focus adjustment involving shifting the lens relative to the sensor. This is much more convenient than having to move the sensor physically, as in lensless imaging. A defocused camera can thus reach into the three-dimensional speckle field and record any desired speckle hemisphere cross-section. This offers an extremely practical way to tune Speckle Imaging measurement sensitivity. Furthermore, lens magnification adjustment gives an additional sensitivity control parameter. In the case where the blur diameter on the object is smaller than the laser spot diameter, a point on the image receives light from only a limited subset of the illuminated surface. Because the imaging is not completely diffused, the measurement retains some spatial resolution, and the resulting speckle pattern differs from the objective speckle pattern recorded at the same sampling distance. Even when Equation (3.12) is fulfilled, the camera lens structure may still block some of the outermost light rays, leading to a loss of the outer parts of the recorded speckle pattern (vignetting) in comparison to the lensless imaging case. Figure 3.5 shows a comparison of Objective and Defocused Speckle Imaging geometries with equal sampling distances. If the defocused camera lens aperture diameter were reduced from the displayed configuration, the resulting speckle pattern would not fill the entire sensor.
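The fully diffused imaging condition of Equation (3.12) is easy to check for a candidate setup; a short Matlab sketch with assumed example values:

% Fully diffused imaging check, Eq. (3.12); example values only.
f = 0.05; fnum = 4;          % lens focal length [m] and f-number
do = 10;  dspot = 0.005;     % focus distance and illuminated spot diameter [m]
dLmin = (fnum/f)*do*dspot;   % minimum defocus distance, Eq. (3.12)
fprintf('Defocus must exceed %.1f m for fully diffused imaging.\n', dLmin);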
For a remote object, the spot diameter is small in comparison to the physical distance between the object and the lens aperture, so the illuminated area can be approximated as a single point. In such a case, vignetting occurs if the camera sensor diameter $d_{sensor}$ is greater than the blur diameter in the image space $D_{blur}$. Therefore, to avoid vignetting, the blur diameter must exceed the sensor dimensions, i.e., $D_{blur} > d_{sensor}$. By combining Equations (3.6) and (3.11) and rearranging, the following condition is obtained:

$M \frac{\Delta L}{\Delta L + d_o} \frac{f}{f_\#} > d_{sensor}$ (3.13)

Figure 3.5 (a) Objective Speckle Imaging geometry vs. (b) Defocused Speckle Imaging geometry with equal sampling distances ($\Delta L = L_C$).

Figure 3.6 illustrates the objective vs. defocused speckle pattern appearance for different geometric configurations. For a high defocus distance and wide lens aperture, Equations (3.12) and (3.13) are both valid, and the resulting defocused speckle pattern SP3 is identical to the objective speckle pattern SP1. If the lens aperture diameter is reduced, vignetting occurs, but the speckles retain their shape close to the image center (SP2). For a small defocus distance, on the other hand, the object blur diameter is smaller than the spot size, and the resulting defocused speckle pattern SP5 looks different from a comparable objective speckle pattern SP1. When the object is close to the focal plane, the rays reaching the lens surface have a wide range of propagation angles. Therefore, the entire sensor receives light even if the aperture is small.

Figure 3.6 Comparison of objective vs. defocused speckle pattern dependency on geometry. SP1: Objective speckles observed at a large sampling distance; SP2: Defocused speckles observed at a large sampling distance with a small lens aperture diameter; SP3: Defocused speckles observed at a large sampling distance with a large lens aperture diameter; SP4: Objective speckles observed at a small sampling distance; SP5: Defocused speckles observed at a small sampling distance.

3.5 Defocused Speckle Imaging Sensitivity Equations
So far, the discussion in this chapter has considered the effects of defocus on the properties of still speckle pattern images. However, speckle-based motion measurements are similarly affected, since motion analysis is based on tracking the speckle locations in the captured images. Therefore, if Speckle Imaging is performed with a defocused camera, the measurement sensitivity scales directly proportionally to the imaging system in-focus magnification ratio. Moreover, the observed speckle motions depend on the defocus distance, since the camera focal plane sets the effective speckle pattern sampling location [13]. Hence, it is possible to transform the sensitivity equations derived in Chapter 2 (Table 2.1) for a lensless sensor to be compatible with the defocused imaging configuration. The only two required changes are to 1) replace the sensor distance $L_C$ by the defocus distance $\Delta L$, and 2) scale all resulting speckle motions by the in-focus magnification ratio $M = d_i/d_o$. The updated sensitivity equations are collected in Table 3.1. The magnification ratios have been moved to the left-hand side for better readability.
Motion Type | Motion Component | Observed Speckle Motion
In-plane Displacement | $dx$ | $\frac{DX_{dx}}{M} = dx\left(\cos\psi + \frac{\Delta L}{L_S}\frac{\cos^2\theta}{\cos\psi}\right)$ (3.14)
In-plane Displacement | $dy$ | $\frac{DY_{dy}}{M} = dy\left(1 + \frac{\Delta L}{L_S}\right)$ (3.15)
Out-of-plane Displacement | $dz$ | $\frac{DX_{dz}}{M} = -dz\left(\sin\psi + \frac{\Delta L}{L_S}\frac{\cos\theta\sin\theta}{\cos\psi}\right)$ (3.16)
Out-of-plane Rotation (tilt) | $\omega_x$ | $\frac{DY_{\omega_x}}{M} = -2\Delta L\,\omega_x$ (3.17)
Out-of-plane Rotation (tilt) | $\omega_y$ | $\frac{DX_{\omega_y}}{M} = \Delta L\,\omega_y\left(1 + \frac{\cos\theta}{\cos\psi}\right)$ (3.18)
In-plane Rotation | $\omega_z$ | Vector field

Table 3.1 Defocused Speckle Imaging sensitivity equations.

3.6 Complex Object Motion with Multiple Degrees of Freedom
If the illuminated object has more than one motional degree of freedom, the resulting speckle motion components combine, so that the total observed speckle motion is the vector sum of the elementary speckle movements. For example, if the object is displaced in-plane in the x-direction ($dx$) while simultaneously tilted about the y-axis ($\omega_y$), both resulting speckle motions are along the sensor horizontal X-axis. The observed total speckle motion in this case is:

$\frac{DX_{TOT}}{M} = \frac{DX_{dx}}{M} + \frac{DX_{\omega_y}}{M}$ (3.19)

$\frac{DX_{TOT}}{M} = dx\left(\cos\psi + \frac{\Delta L}{L_S}\frac{\cos^2\theta}{\cos\psi}\right) + \omega_y\,\Delta L\left(1 + \frac{\cos\theta}{\cos\psi}\right)$ (3.20)

With simplified geometry ($\theta = \psi = 0$˚, $\cos\theta = \cos\psi = 1$), this reduces to:

$\frac{DX_{TOT}}{M} = dx\left(1 + \frac{\Delta L}{L_S}\right) + \omega_y(2\Delta L)$ (3.21)

If all geometric parameters are known, there still remain two unknown variables in Equation (3.20): the applied displacement $dx$ and rotation $\omega_y$. Consequently, at least two independent measurements are required to separate the relative speckle motion contributions caused by the linear displacement and the surface tilt. The reduced Equation (3.21) reveals that the displacement sensitivity has a constant factor plus a second term that depends on the ratio of the sampling distance and the illumination distance $\Delta L/L_S$. On the other hand, the rotation sensitivity scales linearly with the sampling distance $\Delta L$ but is independent of the source position. Therefore, the displacement and rotation sensitivities have different slopes as a function of $\Delta L$. Hence, it is possible to separate the different motions by using two differently focused cameras that have unequal sampling distances $\Delta L_1 \neq \Delta L_2$. The camera focused near the object has higher relative displacement vs. rotation sensitivity than the camera focused far away from the object. When the same object motion is simultaneously measured with the two cameras, the resulting speckle motions $DX_1$ and $DX_2$ are characterized by two independent equations:

$\frac{DX_1}{M_1} = dx\left(\cos\psi_1 + \frac{\Delta L_1}{L_S}\frac{\cos^2\theta}{\cos\psi_1}\right) + \omega_y\,\Delta L_1\left(1 + \frac{\cos\theta}{\cos\psi_1}\right) = dx\,A_1 + \omega_y B_1$
$\frac{DX_2}{M_2} = dx\left(\cos\psi_2 + \frac{\Delta L_2}{L_S}\frac{\cos^2\theta}{\cos\psi_2}\right) + \omega_y\,\Delta L_2\left(1 + \frac{\cos\theta}{\cos\psi_2}\right) = dx\,A_2 + \omega_y B_2$ (3.22)

where

$A_i = \cos\psi_i + \frac{\Delta L_i}{L_S}\frac{\cos^2\theta}{\cos\psi_i}, \qquad B_i = \Delta L_i\left(1 + \frac{\cos\theta}{\cos\psi_i}\right)$ (3.23)

The object displacement and rotation can be solved algebraically:

$\omega_y = \frac{A_2\frac{DX_1}{M_1} - A_1\frac{DX_2}{M_2}}{A_2 B_1 - A_1 B_2}, \qquad dx = \frac{1}{A_1}\left(\frac{DX_1}{M_1} - \omega_y B_1\right)$ (3.24)

In-plane displacements $dy$ and out-of-plane rotations $\omega_x$ cause speckle motions that are purely orthogonal, along the sensor Y-axis, and thus separated from the X-directional speckle motions. Therefore, the same two-camera combination can simultaneously extract the $dy$-displacement and the $\omega_x$-tilt from the recorded speckle motions $DY_1$ and $DY_2$. Consequently, up to four rigid-body motion components can be simultaneously measured using the two-camera arrangement and simple two-dimensional speckle bulk motion analysis.
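The two-camera separation of Equations (3.22-3.24) amounts to solving a 2x2 linear system. The following Matlab sketch simulates the forward Equation (3.22) for an assumed motion and then recovers it with Equation (3.24); all numerical values are illustrative:

% Two-camera separation of dx and wy, Eqs. (3.22)-(3.24); example values.
LS    = 8;  theta = 0;                 % illumination distance [m] and angle [rad]
dL    = [0.5 6];                       % camera sampling (defocus) distances [m]
psi   = [0.02 0.02];                   % observation angles [rad]
A = cos(psi) + (dL/LS).*cos(theta)^2./cos(psi);   % Eq. (3.23)
B = dL.*(1 + cos(theta)./cos(psi));               % Eq. (3.23)
% Forward model for a known motion: dx = 10 um, wy = 50 urad, Eq. (3.22):
DXoverM = 10e-6*A + 50e-6*B;           % simulated DX_i / M_i
% Inversion, Eq. (3.24):
wy = (A(2)*DXoverM(1) - A(1)*DXoverM(2)) / (A(2)*B(1) - A(1)*B(2));
dx = (DXoverM(1) - wy*B(1)) / A(1);    % recovers 10 um and 50 urad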
The out-of-plane sensitivity is negligible if the illumination and observation angles remain small. However, the presence of in-plane rotations would cause additional speckle motions, and a third camera would be required to solve for the contributions of the individual motion components. The above analysis assumes that the measured motions are small, so that the geometry does not change significantly. If the measured motions, particularly rotations, had larger magnitudes, they should be measured in small increments and the geometric parameters dynamically updated over the course of the measurement, so that the related sensitivity equations would follow the evolution of the geometry.

3.7 Conclusion
An image taken by a defocused camera corresponds to the light phase distribution that exists on the camera focal plane. This makes it possible to choose virtually any desired speckle field sampling position by a simple camera defocus adjustment. Since a near-focused camera has both a high sampling distance and high magnification, Defocused Speckle Imaging can reach very high sensitivity. A pair of cameras focused at different distances can simultaneously track surface in-plane displacements and out-of-plane tilts.

Chapter 4: Statistical Speckle Pattern Analysis

Various geometric parameters influence the observed speckle pattern movements. In addition, the specific geometric arrangement also affects the internal structure and overall appearance of the speckle patterns. This motivates studying the possibility of extracting the important range and orientation information directly from the captured speckle patterns. In addition, it is important to know how to adjust the speckle pattern content, particularly the average speckle size, because texture density greatly affects how well the speckle motions can be tracked. This chapter explains speckle size and shape dependence using two concepts, oblique interference and the optical resolution limit. This is followed by the introduction of a diffraction-based view of speckle formation, an alternative to the Speckle Hemisphere Model, along with a description of speckle pattern wavelength dependency. Finally, a diffraction-based calibration procedure is proposed for Defocused Speckle Imaging. The calibration principle was previously presented at the Society for Experimental Mechanics 2020 Annual Conference and Exposition on Experimental and Applied Mechanics [47].

4.1 Background
Accurate knowledge of the measurement geometry is necessary for correct scaling of the measured speckle pattern movements. In the case of large-magnitude motions, it is also crucial to keep track of changes in geometry over the course of the measurement, because speckle motion sensitivity is directly dependent on the object distance and orientation relative to the measurement instrument. While the determination of illumination and observation distances and angles can be relatively straightforward in a laboratory environment at small distances, the same task can be significantly more challenging for remote measurements in field conditions. The object may be too far for manual ruler-based measurements, and access to the object may be limited by various environmental hazards, or the object may already be moving. Remote non-contact distance and angle measurements face similar challenges as motion measurements. Camera-based methods, like photogrammetry [48], suffer from perspective sensitivity reduction, whereas interferometric methods are typically suited for measuring only relative motions, not absolute values, at very small scales.
While laser-based active rangefinders are widely available, remote sensors for large absolute angle measurements are not common [15,49]. Although autocollimator-based angle measurements can reach very high resolution, they typically cannot measure macroscopic angles larger than a couple of degrees. They are not well suited for remote measurements and require rather complicated instrumentation [50,51]. Furthermore, quick data acquisition with simple instrumentation is one of the key features of Speckle Imaging; time-consuming calibration measurements or the need for additional range sensors would compromise this aspect. Since the appearance of a speckle pattern depends on the specific geometry, it would be attractive to extract the crucial calibration parameters from the same speckle pattern images that are captured for the motion analysis. Although it is widely known that the average speckle size in objective speckle patterns scales linearly with the sampling distance [8,14,27], this characteristic has not been effectively utilized for distance measurement. Similarly, the diffraction nature of speckle patterns and the related wavelength dependency have been known for a long time [27], but this aspect was not applied to practical angle measurements until very recently [15]. Therefore, statistical speckle pattern analysis has substantial unrealized potential for making Speckle Imaging measurements practical and feasible for a wider range of applications.

4.2 Speckle Size
4.2.1 Interferometric Interpretation of Objective Speckle Size
The average laser speckle size can be derived using the concept of oblique interference of two monochromatic plane waves (Figure 4.1) [8,14]. When a region (diameter $d_{spot}$) on a diffuse object is illuminated by a laser, light is scattered in all directions. A point on an adjacent lensless sensor at a sampling distance $L_C$ receives light from across the illuminated area. The maximum half angle among the overlapping light rays is:

$\alpha \approx \tan\alpha = \frac{d_{spot}}{2 L_C}$ (4.1)

provided that the sampling distance is much larger than the spot diameter. The overlapping light rays form an interference pattern. If only the two extreme rays are considered, the geometry is equivalent to the oblique interference of two collimated beams. The overlapped volume consists of a periodic intensity fringe pattern where the intensity modulates along the horizontal and vertical axes. Bright spots occur where the phases of the two beams are equal, and dark spots where the two beams are out of phase. According to Equation (4.1) and Figure 4.1, the horizontal spacing $w$ of two adjacent bright spots follows from:

$\frac{\lambda/2}{w} = \cos\alpha$ (4.2)

$w = \frac{\lambda/2}{\cos\alpha}$ (4.3)

The horizontal and vertical spacings ($h$) are related according to:

$\frac{w/2}{h/2} = \tan\alpha$ (4.4)

Finally, the vertical spacing is:

$h = \frac{w}{\tan\alpha} = \frac{\lambda/2}{\cos\alpha}\frac{1}{\tan\alpha} = \frac{\lambda}{2\sin\alpha} \approx \frac{\lambda}{2\alpha} = \frac{\lambda L_C}{d_{spot}}$ (4.5)

As seen from Equation (4.5), the highest relative angle among the interfering rays leads to the minimum spacing of the bright spots. Considering all the light scattered across the illuminated area, the overlapping light rays have a wide range of propagation angles. However, according to Cloud [14], the minimum spacing dominates the resulting interference pattern, and the larger fringes are modulated or broken by the smallest fringes. In the case of diffuse reflection, the phases of the interfering light rays are random, so the interference pattern does not contain a regular periodic structure but has a random appearance.
However, the average feature size, or speckle size, is governed by the same relationship:

$d_{speckle,1D} = h = \frac{\lambda L_C}{d_{spot}}$ (4.6)

While there is no lens in this example, the illuminated region can be considered as the imaging aperture, and the sampling distance as the effective image distance. Therefore, the oblique interference geometry has an effective f-number:

$f_{\#eff} = \frac{L_C}{d_{spot}} \approx \frac{1}{2\alpha}$ (4.7)

Figure 4.1 Principle of oblique interference. The corresponding analytical derivation is shown by Equations (4.1-4.5). Light rays propagating in different directions interfere, forming a systematic arrangement of intensity maxima and minima in the volume adjacent to the object.

With this substitution, the average speckle size can be represented as:

$d_{speckle,1D} = \lambda f_{\#eff}$ (4.8)

Equation (4.8) is equivalent to the diffraction-limited resolution of an optical system with a one-dimensional slit aperture [52]. Since the geometry of Figure 4.1 with only the two extreme rays is effectively a one-dimensional slit, the equivalence suggests that the average speckle size is directly determined by the resolution limit of the optical system that is used to record the speckle pattern [53]. Considering a 2D circular illuminated spot, the expression is slightly modified by a constant factor 1.22 [8,14]:

$d_{speckle,circular\ spot} = 1.22\,\lambda f_{\#eff} = 1.22\,\lambda\frac{L_C}{d_{spot}}$ (4.9)

Equation (4.9) shows that the speckle size scales linearly with wavelength and sampling distance but is inversely proportional to the diameter of the illuminated spot.

4.2.2 Diffraction-Limited Spot Size
The optical system resolution depends on the shape and size of the cone of light that reaches a point on the image (sensor) plane. If the aperture shape can be expressed analytically, the corresponding diffraction-limited spot size is obtained by calculating the squared modulus of the Fourier transform of the aperture shape function [46]. This yields the so-called Point Spread Function (PSF), which directly characterizes how the light from a point on the object is transmitted onto the image plane. Figure 4.2 shows examples of different 2D apertures and their PSFs simulated in Matlab. The first column shows the aperture shape, and the second column displays the corresponding horizontal and vertical cross-sections. The third column shows the PSF shape, and the fourth column shows the PSF cross-sections. All apertures except the Gaussian are binary apertures, meaning that the entire aperture cross-section transmits light uniformly. The Gaussian aperture has a spatially varying Gaussian transmittivity profile. For ease of comparison, all apertures have equal maximum horizontal diameter (full width at half maximum, FWHM, diameter for the Gaussian aperture).

Figure 4.2 Examples of aperture functions and their corresponding Point Spread Functions.

The PSF of a circular aperture is a Bessel function of the first kind, whereas the PSF of a 1D slit is a Sinc function, and the PSF of a Gaussian aperture is a Gaussian function. The width of the PSF central peak defines the diffraction-limited spot size. A common way to determine the spot size is to measure the FWHM diameter of the PSF peak.
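A simulation in the spirit of Figure 4.2 can be reproduced in a few lines; this minimal Matlab sketch computes the PSF of a binary circular aperture as the squared modulus of its Fourier transform (grid size and aperture radius are arbitrary example values):

% PSF of a binary circular aperture (cf. Figure 4.2); a minimal sketch.
N = 512;                                      % simulation grid [pixels]
[x, y]   = meshgrid(-N/2:N/2-1);
aperture = double(sqrt(x.^2 + y.^2) <= 32);   % circular aperture, radius 32 px
psf = abs(fftshift(fft2(aperture))).^2;       % Point Spread Function
psf = psf/max(psf(:));                        % normalize central peak to 1
fwhm = sum(psf(N/2+1, :) >= 0.5);             % FWHM along the central row [px]
fprintf('Central-lobe FWHM: %d px\n', fwhm);
imagesc(psf.^0.25); axis image; colormap gray % compressed scale shows the rings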
Increasing the aperture diameter reduces the size of its PSF; a wider aperture is able to focus light better. This is fundamentally related to the conservation of energy and Heisenberg's uncertainty principle [52]. A wider aperture allows a higher level of uncertainty in the angular direction (momentum) of the light that is captured by the optical system. Correspondingly, the uncertainty of the focused spot size (position) is reduced. This becomes evident by looking at the asymmetric aperture PSFs. The slit, semicircle and ellipse apertures have larger horizontal than vertical dimensions, so the corresponding spots have greater height than width. These findings are directly applicable to laser speckle size in Objective Speckle Imaging: the observed speckle size and shape depend on the size and shape of the illuminated surface spot, respectively.

4.2.3 Speckle Size in Subjective Speckle Imaging
If a laser-illuminated rough surface is recorded with a focused camera (sensor + lens), the resulting image is known as a subjective speckle pattern [8]. In comparison to the lensless case, refraction at the lens changes the angles at which the light rays reach the sensor (Figure 4.3(a)). This affects the resulting speckle size. In a focused camera, the maximum angle depends on the lens aperture diameter $d_{lens}$ and the image distance $d_i$. These two parameters define the lens effective f-number:

$f_{\#eff} = \frac{d_i}{d_{lens}}$ (4.10)

Using the thin lens model Equations (3.1, 3.3, 3.6) from Chapter 3, it is possible to represent the effective f-number in terms of the lens in-focus magnification $M$ and the lens f-number $f_\#$:

Figure 4.3 (a) Subjective speckle formation in a focused camera. The angles of the light rays that reach the sensor depend only on the imaging system geometry. (b) Speckle formation in a highly defocused camera. The spot diameter limits the propagation angles of the light rays that reach the defocused camera sensor.

$d_i = f(1 + M)$ (4.11)

$d_{lens} = \frac{f}{f_\#}$ (4.12)

$f_{\#eff} = f_\#(1 + M)$ (4.13)

The average subjective speckle size is thus:

$d_{speckle,subjective} = 1.22\,\lambda f_\#(1 + M)$ (4.14)

Therefore, if the camera is focused at the object surface, the observed speckle size is independent of the spot diameter and object distance and depends solely on the parameters of the imaging system.

4.2.4 Speckle Size in Defocused Speckle Imaging
If the camera is moved away from the focused position by a small amount, the maximum angle among the light rays reaching the sensor remains unchanged and still depends only on the camera internal geometry:

$d_{speckle,low\ defocus} = 1.22\,\lambda f_\#(1 + M)$ (4.15)

However, if the camera is moved far away from the surface, the light rays that overlap on the shifted focal plane have a limited range of angles, as shown in Figure 4.3(b). Consequently, these rays cover only a portion of the lens aperture, and the refracted rays that finally reach the sensor have a smaller range of angles in comparison to the focused case. This means that the diameter of the illuminated spot becomes the limiting lens aperture at large defocus distances.
The resulting speckles follow the same relationship as the objective speckles but are scaled by the lens magnification ratio because of the additional refraction caused by the lens:

$d_{speckle,high\ defocus} = 1.22\,M\lambda\frac{\Delta L}{d_{spot}}$ (4.16)

According to Equation (4.16), the defocused speckle size can be tuned by adjusting the lens magnification, laser wavelength, defocus distance and illumination spot diameter. Equations (4.15-4.16) together describe the speckle size behavior in a defocused imaging system. At small sampling distances the speckle size remains constant, but once the sampling distance is sufficiently increased, the speckle size starts to increase linearly. Looking at Figure 4.3(b), it is apparent that at the boundary, the blur diameter in the object space $d_{blur}$ is equal to the spot size $d_{spot}$. Therefore, Equation (4.16) holds when the imaging is completely diffused, i.e., when the condition in Equation (3.12) is valid.

4.2.5 Speckle Size and Shape vs. Geometry in Defocused Speckle Imaging
Given the established linear relationship, it would be attractive to use the speckle size to estimate the sampling distance in order to scale the recorded speckle motions correctly. The average speckle size can be determined by computing the 2-dimensional autocorrelation (AC) of the captured speckle pattern image [54]. Provided that the image contains a large number of speckles, the resulting 2D autocorrelation map will have a sharp central self-correlation peak similar to the PSF cross-sections in Figure 4.2. The peak FWHM diameter is a statistical measure of the speckle size [55]. As demonstrated by the spot size simulations, speckles may not always be circular. For example, if the illumination spot is elliptical, the resulting speckles are also elliptical, but with the opposite axis orientation; i.e., the speckles along the longer illumination axis will appear shorter. On the other hand, if the illuminated spot is circular but is imaged at an oblique angle, the spot appears elliptical when viewed from the sensor direction, again leading to elliptical speckles (Figure 4.4). Therefore, the speckle aspect ratio could potentially indicate the relative surface orientation, the second important parameter for measurement calibration. With this improved understanding, it is now possible to make further remarks about the detailed structure of the 3D Speckle Hemisphere proposed in Chapter 2. If the illumination is at a normal incidence, then the speckles close to the illumination direction have circular cross-sections (Figure 4.4 (Top)). However, the cross-sections become gradually more elliptical with increasing deviations from the normal direction. In the extreme case where the observation direction approaches the surface plane, the projection of the illuminated spot has close to zero width, resembling a one-dimensional slit. Therefore, the speckles become streaked and their widths approach infinity (Figure 4.4 (Bottom)).

Figure 4.4 Speckle shape vs. observation angle. (Top) A circular illumination spot observed at close to normal incidence results in circular speckles. (Bottom) The same spot observed at a highly oblique angle (≈90˚), close to the surface plane, leads to highly stretched speckles.
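The autocorrelation-based speckle size and aspect ratio estimates described above can be sketched as follows in Matlab; the file name is a hypothetical placeholder (a grayscale speckle image is assumed), and the FWHM estimate is deliberately coarse:

% Speckle size from the FWHM of the 2D autocorrelation peak (Section 4.2.5).
img = double(imread('speckle_pattern.png'));  % hypothetical grayscale image
img = img - mean(img(:));                     % remove the mean (DC) level
F   = fft2(img);
ac  = fftshift(real(ifft2(F.*conj(F))));      % circular (FFT-based) autocorrelation
ac  = ac/max(ac(:));                          % normalize self-correlation peak to 1
[r0, c0] = find(ac == 1, 1);                  % peak location
fwhmX = sum(ac(r0, :) >= 0.5);                % horizontal FWHM [px]
fwhmY = sum(ac(:, c0) >= 0.5);                % vertical FWHM [px]
aspect = fwhmX/fwhmY;                         % speckle aspect ratio (cf. Figure 4.4)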
4.2.6 Challenges
While speckle size and aspect ratio seem attractive for extracting the geometric parameters, they have certain limitations. For a diverging laser source, the illuminated spot diameter increases linearly with illumination distance. Therefore, if the illumination and sampling distances are close to equal, the numerator and denominator in Equation (4.16) would change at the same rate, making the speckle size rather insensitive to object distance variations. A direct way to overcome this could be to fix the spot size by collimating the laser beam. The observed speckle size is further affected by variations in surface reflectance and laser beam intensity profile. For example, if one side of the illuminated surface has very low reflectance, it has a negligible contribution to the scattered interference pattern. Consequently, the effective spot size is reduced, which enlarges the size of the observed speckles. Furthermore, the speckle size equations derived in this section assumed that the laser intensity profile is uniform. In reality, however, lasers typically have a Gaussian intensity distribution unless it has been modified by additional optics. Therefore, the central portion of the beam has the highest intensity, and the beam has long tails with no distinct edges. The speckle size resulting from Gaussian illumination is [56]:

$d_{Gaussian} = \frac{1.22}{1.699}\,M\lambda\frac{\Delta L}{d_{spot}}$ (4.17)

where a FWHM value is used for the representative Gaussian beam spot diameter. If the object material is not metallic, it may be prone to volume scattering effects [9]. Light that penetrates deeper into the material may scatter back to the surface and glow beyond the illuminated region, thus increasing the effective spot diameter. Furthermore, the speckle size may also be affected by vignetting effects. A long cylindrical lens may obstruct part of the scattered light from reaching the sensor edges, as illustrated in Figure 4.5 (center). Consequently, the effective aperture may be elliptical for points close to the sensor boundaries, leading to radially stretched speckles. Finally, it would be practical to have an all-in-one measurement instrument where the laser and the cameras are integrated into the same housing. However, this would require arranging the observation in the same direction as the illumination. Unfortunately, the illuminated spot back-projected towards such an instrument would always have the same cross-section as the original illumination beam, independent of the relative surface angle.

Figure 4.5 Vignetting causing nonuniform speckle size in Defocused Speckle Imaging. (Left) Speckle pattern imaged through a short lens, (Center) Speckle pattern imaged through a long lens, (Right) Example defocused speckle pattern exhibiting spatially varying speckle size. Vignetting also reduces light intensity towards the sensor edges.

The above considerations highlight that the observed speckle size and shape are influenced by many different experimental factors. Since range estimation through speckle size has the characteristics of an analog measurement, even small unaccounted deviations may be detrimental and make the geometric calibration unreliable. While speckle size analysis could work well in a controlled environment, it is far from an ideal range metric for challenging field measurements. Instead, it would be useful to find a parameter that is insensitive to variations in laser beam profile and surface reflectance and works predictably with different types of imaging hardware in various environments. Apart from range analysis, speckle size is still a very important control parameter, as it controls the strength of the image texture.
If the speckles are much smaller than individual sensor pixels, the resulting averaging effects reduce the contrast in the recorded speckle patterns [9]. Similarly, if the speckles are much larger than the pixels, the pixel-to-pixel intensity variations are small, which again reduces the texture strength. Therefore, the speckle size should always be optimized for the specific measurement geometry.

4.3 Diffraction View of Speckle Imaging
4.3.1 Operating Principle of a Reflection Diffraction Grating
A reflection diffraction grating is an object with a reflective surface that consists of a regular arrangement of longitudinal grooves with a uniform microscopic spacing $a$. Figure 4.6 shows how a reflection diffraction grating operates under monochromatic laser illumination (wavelength $\lambda$). Light incident on the grating surface is reflected and diffracted into certain directions where the light interferes constructively. If the light incidence angle is $\theta$, there is a relative path length difference $OPD_S = a\sin\theta$ between the light rays incident on adjacent surface grooves. Similarly, if the diffraction angle is $\psi$, there is a relative path length difference $OPD_C = a\sin\psi$ between the light rays originating from the adjacent grooves. The total path length difference is thus:

$OPD_{TOT} = OPD_S + OPD_C = a\sin\theta + a\sin\psi$ (4.18)

Light rays interfere constructively if they have equal phases. This condition is achieved if the relative path length difference between the rays is an integer number of wavelengths, $m\lambda$. This requirement leads to the following expression:

$a(\sin\theta + \sin\psi_m) = m\lambda$ (4.19)

Equation (4.19) is known as the general diffraction equation [57]. The light incident on the grating surface is diffracted into various angles $\psi_m$ corresponding to the different diffraction orders $m$. The 0th diffraction order corresponds to a mirror-like specular reflection, so its direction is independent of wavelength.

Figure 4.6 Operating principle of a reflection type diffraction grating. Light reflected from the adjacent grating grooves interferes constructively at certain angles governed by Equation (4.19).
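Equation (4.19) can be solved directly for the diffraction angles of each order; a small Matlab sketch with illustrative grating parameters:

% Diffraction angles from the general diffraction equation (4.19).
lambda = 532e-9;             % illumination wavelength [m]
a      = 2e-6;               % grating groove spacing [m]
theta  = deg2rad(10);        % incidence angle
for m = -3:3                 % candidate diffraction orders
    s = m*lambda/a - sin(theta);      % sin(psi_m) from Eq. (4.19)
    if abs(s) <= 1                    % order propagates only if |sin| <= 1
        fprintf('m = %+d: psi = %6.2f deg\n', m, rad2deg(asin(s)));
    end
end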
4.3.2 Speckle Pattern as a Diffraction Pattern
Diffraction occurs when light is incident on any small slit, hole or object. Therefore, a rough object surface can be considered as a collection of random slits, or randomly oriented diffraction gratings of varying groove spacings [8,33], as illustrated in Figure 4.7. Each grating diffracts light into multiple diffraction orders. Collectively, the surface gratings scatter light in all directions, forming a three-dimensional speckle field in the space adjacent to the object. If a cross-section of the speckle field is sampled by placing a screen next to the object, then any point on the sensor receives light from multiple surface gratings. The resulting speckle pattern is thus a random diffraction pattern that is characteristic of the illuminated surface. Figure 4.7 shows speckle formation at two example locations on the sensor.

Figure 4.7 Speckle formation based on modeling the diffuse surface as a collection of randomly oriented diffraction gratings with various groove spacings. The solid green lines show the formation of a dark speckle (destructive interference), and the dashed blue lines show the formation of a bright speckle (constructive interference). The combined interaction of all surface gratings fills the entire screen with a speckle pattern.

The speckle intensity at a specific sensor point depends on the relative phases of the diffracted light rays that overlap there. While the diffraction orders $m$ of the overlapping light rays vary, and the rays are formed by $i$ separate gratings of different grating line spacings $a_i$, the overlapping rays all share the same incidence angle $\theta$ and diffraction angle $\psi$, provided that the illumination and observation distances are large in comparison to the diameter of the illuminated spot. Therefore, the speckle formation can be represented as a combined action of all diffraction gratings over the different diffraction orders that fulfill the following condition:

$\sum_i \sum_m a_i(\sin\theta + \sin\psi) = \sum_i \sum_m m\lambda$ (4.20)

If the surface is displaced in-plane or rotated out-of-plane, the incident light rays that illuminate the gratings after the surface motion are rotated with respect to the rays that illuminated the initial object position. If the surface motion changes the illumination angle by $d\theta$, then the diffraction angles must correspondingly change by an amount $d\psi$ so that the general diffraction equation (4.19) remains fulfilled. If the wavelength remains constant and the individual gratings do not deform, then the illumination and diffraction angles are the only parameters that change:

$\sum_i \sum_m a_i(\sin(\theta + d\theta) + \sin(\psi + d\psi)) = \sum_i \sum_m m\lambda$ (4.21)

For each grating and diffraction order pair ($a_i$, $m$), the following pair of equations holds:

$a_i(\sin\theta + \sin\psi) = m\lambda$
$a_i(\sin(\theta + d\theta) + \sin(\psi + d\psi)) = m\lambda$ (4.22)

Since the right-hand sides are equal, equating the left-hand sides of Equations (4.22) and dividing by the grating line spacing $a_i$ yields:

$\sin(\theta + d\theta) + \sin(\psi + d\psi) = \sin\theta + \sin\psi$ (4.23)

The left side can be expanded using the sine summation identity:

$\sin(A + B) = \sin A\cos B + \cos A\sin B$ (4.24)

This yields:

$\sin\theta\cos d\theta + \cos\theta\sin d\theta + \sin\psi\cos d\psi + \cos\psi\sin d\psi = \sin\theta + \sin\psi$ (4.25)

Noting that for small angles $\cos A \approx 1$ and $\sin A \approx A$, Equation (4.25) can be approximated as:

$\sin\theta + d\theta\cos\theta + \sin\psi + d\psi\cos\psi = \sin\theta + \sin\psi$ (4.26)

Finally, the change in the diffraction angle can be expressed as:

$d\psi = -\frac{\cos\theta}{\cos\psi}\,d\theta$ (4.27)

This expression is identical to Equation (2.11), which represents the phase correction term derived using the constancy of the path length differences across the illuminated spot. The equivalence proves that the diffraction view of speckle formation is fully compatible with the phase-corrected Speckle Hemisphere Model.

4.3.3 Speckle Pattern Wavelength Dependency
The diffraction nature of speckle formation means that speckle locations are wavelength dependent. If the laser wavelength shifts, then the diffraction angles correspondingly change, and speckles originally seen at specific angles drift into new, different positions. This means that the observed speckle pattern may appear to move on the sensor if the laser is unstable or warming up. This feature has been utilized in a speckle-based spectrometer to characterize laser wavelength changes in response to laser temperature variations [58]. It is also possible that the laser source is not purely monochromatic. For example, consider a laser that has two closely spaced wavelength peaks (longitudinal modes) $\lambda_1$ and $\lambda_2$, so that $\lambda_2 = \lambda_1 + \Delta\lambda$ with $\Delta\lambda \ll \lambda_1, \lambda_2$. Because of the diffraction wavelength dependency, two separate speckle fields will be formed. If a specific speckle formed by $\lambda_1$ is observed at an angle $\psi_1$, then the corresponding speckle formed by $\lambda_2$ must be observed at an angle $\psi_2$.
Provided that the wavelength difference $\Delta\lambda$ is small, it is reasonable to expect that the related diffraction angles are also close to one another, i.e., $\psi_2 = \psi_1 + \Delta\psi$ where $\Delta\psi$ is small. Thus, $\sin\Delta\psi \approx \Delta\psi$ and $\cos\Delta\psi \approx 1$. For each grating and diffraction order pair ($a_i$, $m$), the following pair of equations holds:

$a_i(\sin\theta + \sin\psi_1) = m\lambda_1$
$a_i(\sin\theta + \sin\psi_2) = m\lambda_2$ (4.28a & 4.28b)

Dividing Equation (4.28b) by Equation (4.28a) yields:

$\frac{\sin\theta + \sin\psi_2}{\sin\theta + \sin\psi_1} = \frac{\lambda_2}{\lambda_1}$ (4.29)

$\frac{\sin\theta + \sin(\psi_1 + \Delta\psi)}{\sin\theta + \sin\psi_1} = \frac{\lambda_1 + \Delta\lambda}{\lambda_1}$ (4.30)

Using the sine summation identity (4.24):

$\frac{\sin\theta + \sin\psi_1\cos\Delta\psi + \sin\Delta\psi\cos\psi_1}{\sin\theta + \sin\psi_1} = 1 + \frac{\Delta\lambda}{\lambda_1}$ (4.31)

Using the small angle approximation for $\Delta\psi$:

$\frac{\sin\theta + \sin\psi_1 + \Delta\psi\cos\psi_1}{\sin\theta + \sin\psi_1} = 1 + \frac{\Delta\lambda}{\lambda_1}$ (4.32)

$1 + \frac{\Delta\psi\cos\psi_1}{\sin\theta + \sin\psi_1} = 1 + \frac{\Delta\lambda}{\lambda_1}$ (4.33)

$\frac{\Delta\psi\cos\psi_1}{\sin\theta + \sin\psi_1} = \frac{\Delta\lambda}{\lambda_1}$ (4.34)

Therefore, if a rough object surface is illuminated by the dual-wavelength laser, and the resulting speckle field is sampled by a lensless sensor or a defocused camera, then the recorded images will contain two partially overlapping duplicated speckle patterns that have a fixed angular offset $\Delta\psi$ between them according to:

$\Delta\psi = \frac{\Delta\lambda}{\lambda_1}\,\frac{\sin\theta + \sin\psi_1}{\cos\psi_1}$ (4.35)

Given the fixed angular offset, the spatial speckle offset $\Delta X$ observed on the sensor plane scales linearly with the sampling distance $\Delta L$:

$\Delta\psi \approx \tan\Delta\psi = \frac{\Delta X}{\Delta L}$ (4.36)

$\Delta X = \Delta\psi\,\Delta L$ (4.37)

$\Delta X = \frac{\Delta\lambda}{\lambda_1}\,\Delta L\,\frac{\sin\theta + \sin\psi_1}{\cos\psi_1}$ (4.38)

$\Delta X = \frac{\Delta\lambda}{\lambda_1}\,\Delta L\left(\tan\psi_1 + \frac{\sin\theta}{\cos\psi_1}\right)$ (4.39)

The illumination angle can alternatively be expressed in terms of the observation angle as $\theta = \psi_1 + \Delta\theta$. Therefore:

$\sin\theta = \sin(\psi_1 + \Delta\theta) = \sin\psi_1\cos\Delta\theta + \cos\psi_1\sin\Delta\theta$ (4.40)

Equation (4.39) can thus be expressed in an alternative form:

$\Delta X = \frac{\Delta\lambda}{\lambda_1}\,\Delta L\left[(1 + \cos\Delta\theta)\tan\psi_1 + \sin\Delta\theta\right]$ (4.41)

Equation (4.41) is equivalent to that in the recently published article by Gibson et al. [15]. If the laser beam has more than two wavelength components, then the resulting image has equally many shifted copies of the same speckles, where the relative offsets correspond to the relative laser mode separations. Figure 4.8 illustrates speckle formation under single-mode and multi-mode laser illumination. Figures 1.9 and 4.9 show examples of actual multi-mode laser speckle patterns.

Figure 4.8 Speckle formation under single-mode vs. multi-mode laser illumination. Each wavelength component creates an independent speckle pattern in a slightly different direction. At a sampling distance $\Delta L$, the speckle patterns are spatially offset by $\Delta X$ according to Equation (4.41).

Figure 4.9 Defocused speckle pattern displaying multiple horizontally offset duplicated speckles.
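For orientation, Equation (4.41) predicts duplicate-speckle offsets of convenient sub-millimeter magnitude for typical geometries; a quick Matlab evaluation with assumed example values:

% Duplicate-speckle offset for a dual-mode laser, Eq. (4.41); example values.
lambda1 = 500e-9;  dlambda = 0.05e-9;  % wavelength and mode spacing [m]
dL      = 5;                           % sampling (defocus) distance [m]
psi1    = deg2rad(20);                 % observation angle
dtheta  = 0;                           % illumination-observation separation
dX = (dlambda/lambda1)*dL*((1 + cos(dtheta))*tan(psi1) + sin(dtheta));
fprintf('Expected speckle offset: %.2f mm\n', 1e3*dX);   % ~0.36 mm here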
4.3.4 Diffraction-Based Measurement Calibration
The speckle offset can be determined from the captured speckle patterns using an autocorrelation procedure similar to that described for speckle size estimation [15]. When the image contains duplicated features, the resulting autocorrelation map has distinct side-peaks that are offset from the central self-correlation peak. The distance between the central and side-peaks is a direct measure of the speckle offset. According to Equation (4.39), the observed shift between the overlapping speckle patterns depends on the laser wavelength, the wavelength separation, the sampling distance, and the illumination and observation angles. Of these, the spectral properties can be determined by characterizing the laser source. If the illumination and observation are in the same direction or have a known angular difference $\Delta\theta$, then the relative surface angle and the sampling distance are the two remaining unknowns. Solving for two unknowns requires two independent measurements. Looking at Equation (4.41) and Figure 4.8, the observed offset between the overlapping speckle patterns scales linearly with the sampling distance. Therefore, if the speckle field is sampled at two different sampling planes at close to the same angle, the two resulting speckle patterns will have different amounts of speckle offset, $\Delta X_2 \neq \Delta X_1$. While the sampling distance is unknown, the separation between the two sampling planes $\Delta L_{12} = \Delta L_2 - \Delta L_1$ can be determined from camera calibration. Therefore, it is possible to determine the slope of the speckle offset and find the distance to the object surface by simple extrapolation, as illustrated in Figure 4.10. The same can be expressed analytically using similar triangles:

$\frac{\Delta X_1}{\Delta L_1} = \frac{\Delta X_2}{\Delta L_2} = \frac{\Delta X_2}{\Delta L_1 + \Delta L_{12}}$ (4.42)

$\Delta L_1 = \frac{\Delta X_1}{\Delta X_2 - \Delta X_1}\,\Delta L_{12}$ (4.43)

Once the sampling distance is determined, the relative surface angle is the only remaining unknown and can be easily calculated using Equation (4.39) or Equation (4.41).

Figure 4.10 Sampling distance determination based on speckle offset extrapolation.

The experimental arrangement proposed in Chapter 3 for measuring multiaxial object motions is based on two cameras focused at different distances. It is therefore directly applicable to speckle diffraction analysis for determining the sampling distance and the relative surface angle, provided that a multi-mode illumination source with appropriate mode spacing is used. In motion measurements, it is important to use two sampling planes that are at greatly different distances in order to separate the in-plane displacements from the tilt contributions. This is equally useful for the calibration procedure, as it makes the extrapolation-based range estimation more robust against errors. The geometric parameters can be extracted from the same speckle patterns that are used for the speckle motion analysis. Such self-calibration minimizes measurement setup time. Moreover, since the two cameras can capture the required images simultaneously, the range and angle monitoring can be done in real time, and the recorded speckle motions dynamically scaled.

4.3.5 Speckle Offset vs. Speckle Size as a Range Metric
The fundamental requirement for speckle offset analysis is to have a laser source that operates in multi-mode. Fortunately, this is not a severe demand, because a laser diode spectrum can be easily adjusted by current and temperature control. The laser source should also be well stabilized, so that its wavelength and mode characteristics are stable over the course of the measurement. Thankfully, most laser sources are equipped with active control circuitry, and the mode spacing is a generally steady quantity, as it is related to the physical length of the laser resonator cavity [42]. While speckle size and shape are affected by many factors, the separation between the two speckle patterns is not prone to similar issues. This makes the speckle offset a favorable choice over the speckle size for diverse measurement situations.
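The calibration chain of Sections 4.3.3-4.3.5 can be summarized in a few lines. This Matlab sketch takes two measured speckle offsets (illustrative numbers), extrapolates the sampling distance with Equation (4.43), and then solves Equation (4.41) for the surface angle in the coaxial case ($\Delta\theta = 0$):

% Self-calibration sketch, Eqs. (4.41) and (4.43); all values illustrative.
lambda1 = 500e-9;  dlambda = 0.05e-9;  % laser wavelength and mode spacing [m]
dL12 = 2.0;                            % known sampling plane separation [m]
dX1  = 0.20e-3;  dX2 = 0.30e-3;        % measured speckle offsets [m]
dL1  = dX1/(dX2 - dX1)*dL12;           % first sampling distance, Eq. (4.43)
% For coaxial illumination and observation (dtheta = 0), Eq. (4.41)
% reduces to dX = (dlambda/lambda1)*dL*2*tan(psi1); solve for psi1:
psi1 = atan(dX1/((dlambda/lambda1)*dL1*2));
fprintf('dL1 = %.2f m, surface angle = %.1f deg\n', dL1, rad2deg(psi1));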
4.3.6 Further Comments

While the diffraction nature of speckle patterns has been known since the discovery of speckles [27], the recent work by Gibson et al. [15] in 2019 is the only application known to the author that utilizes the wavelength-dependent diffraction effects for practical measurements to determine geometric parameters. The relative simplicity of the method raises the question of why speckle diffraction aspects have not been used more effectively. One reason may be that high-quality single-mode laser sources have traditionally been favored over multi-mode lasers. Moreover, the speckle offset is apparent only under significant defocus and/or relative surface angles; wavelength-dependent speckle effects cannot be seen with a focused camera. Finally, speckle patterns are random patterns, and the partial overlap of two random patterns creates another random pattern. Therefore, the presence of duplicated patterns may not be obvious by visual inspection alone, but may instead require computer-based autocorrelation analysis.

Wavelength is not the only way to encode geometric information into speckle patterns. An alternative method previously demonstrated by Jakobsen and Hanson [59] utilizes mutually tilted illumination beams. The different illumination angles create independent diffraction speckles at different angles, leading to two overlapping speckle patterns. However, such an approach requires aiming the two beams to illuminate the same object spot, so the instrumentation alignment has to be carefully tuned for the specific object distance. The primary application of this technique is short-range distance measurement. In contrast, the wavelength-based approach needs only one beam, so it is a practical choice for remote measurements at various distances.

Finally, the diffraction-based view of speckle formation allows an easy explanation of Speckle Imaging strain sensitivity. If the illuminated surface stretches uniformly, all of the surface gratings are similarly affected. An axial strain component ε normal to the grating grooves changes the grating spacing a by an amount aε. This forces the diffraction angles to change in order to maintain the validity of the general diffraction equation (4.19), which gives rise to a speckle motion signal on the camera sampling plane. Therefore, surface strain can be determined directly by monitoring speckle motions, as opposed to the numerical differentiation of displacement fields used in DIC analysis [32,48].

4.4 Conclusion

In a highly defocused camera, speckle size generally scales linearly with sampling distance, but it is also affected by many other geometric factors. These must be considered when designing the instrumentation setup, as speckle size affects the image texture and, in turn, the robustness of motion tracking. The diffraction-based view of speckle formation matches the Speckle Hemisphere Model and reveals the wavelength dependency of speckle patterns. If the laser source has multiple wavelength modes, partially overlapping copies of the same speckles are formed. Because the relative speckle offset depends on the sampling distance and the relative surface angle, it is possible to extract the important calibration parameters directly from the captured speckle patterns with no additional sensors.

Chapter 5: Sensitivity Characteristics of Objective Speckle Imaging

This chapter presents a series of experiments conducted to validate the Speckle Hemisphere Model and to explore the characteristics of Objective Speckle Imaging when applied to practical measurements.
Particular emphasis is given to studying object in-plane rotations, which have received only limited attention in the existing studies. The contents of this chapter are adapted from an article “A Geometric Model of Surface Motion Measurements by Objective Speckle Imaging” published in Optics and Lasers in Engineering [25].

5.1 Experimental Measurements

The studied object motions included x-directional surface in-plane displacements dx, out-of-plane rotations (tilts) about the surface y-axis ω_y, and in-plane rotations about the surface normal z-axis ω_z. The resulting speckle displacements were recorded and compared with theoretical expectations based on the setup geometry and the known applied motion magnitudes.

5.1.1 Measurement Setup

The measurement object was a flat plate made of medium density fiberboard (MDF). This material was chosen as it is easy to manipulate and has an optically rough surface that scatters light diffusively. The object was illuminated by a green diode-pumped solid-state (DPSS) laser (CrystaLaser GCL-100-S, λ = 532nm) with a narrow, slightly diverging beam (waist diameter 0.36mm and divergence angle 2mrad). A portion of the resulting speckle field was sampled using a Matlab-controlled monochrome machine vision camera (AVT ProSilica GC1290, resolution 960x1280 pixels, pixel size 3.75x3.75 µm2). The camera was used without a lens, exposing the sensor to directly capture the objective speckle pattern. A 50ms exposure time was used in every measurement.

Figure 5.1 shows a schematic of the measurement geometry. The object and the laser were fixed on the same linear rail. The illumination distance was adjusted by sliding the laser source along the rail, while the object position was kept fixed throughout the measurements. The laser was aligned to illuminate a portion of the object surface at normal incidence (θ = 0˚). The camera was placed onto a second sliding rail, parallel to the laser rail. The imaging axis was offset by 160mm from the illumination axis. Consequently, the imaging angle ψ was non-zero and varied depending on the sensor distance. In these experiments, the sensor plane was kept parallel to the object surface, i.e., the sensor normal was always parallel to the rail. Therefore, the camera alignment was not strictly perpendicular to the observation direction, as intended for the actual measurement applications. While the parallel observation alignment helped to keep the geometry consistent throughout the validation measurements, it caused the horizontal speckle motion component to be inflated by a small factor 1/cos(ψ), ranging between 1.003 and 1.050. This modest deviation was mathematically compensated for in the subsequent data analysis.

A Matlab-interfaced servo-controlled precision linear actuator (Newport CMA-12CCCL, with Newport ESP100 driver) was used to displace, tilt or rotate the object, depending on the studied motion type. The actuator body was attached to a linear translation stage, so that the rotating actuator axis pushed the stage just like a conventional micrometer head would. For the in-plane displacement study, the specimen was fixed directly on top of the translation stage to move it in the +x-direction. For the tilt experiments, on the other hand, the object was attached at the end of a pivoted aluminum rod so that the object surface plane was located on the rod rotation axis. The object was tilted about the y-axis in the positive direction by pushing the opposite end of the rod with the actuator-driven linear stage.
The illuminated spot was located on the rotation axis to ensure that the applied motion was purely rotational. Finally, for the in-plane rotation experiments, the object was attached to a precision bearing assembly at its center. The linear stage pushed the object in a direction normal to the rotation axis at a specific distance away from the rotation center, as illustrated in Figure 5.1.

Figure 5.1 Schematic of the measurement geometry. S: Laser source, O: Object, A: Actuator, C: Camera sensor. The displayed configuration is for the case of in-plane rotation. (A) top view, (B) side view. [25].

5.1.2 Measurement Procedure

For each experiment, the actuator was programmed to move the object at a constant rate. This caused the speckle hemisphere to correspondingly shift and/or rotate in continuous motion. The camera captured speckle pattern images at frequent intervals to track the moving speckle hemisphere. The incremental approach was needed to maintain partial overlap between the speckle patterns captured in successive frames; otherwise the tracking would not be possible. It is also important to remember that the speckles are defined by the local surface roughness within the illuminated spot. Because the illuminated portion changes with object motion, the individual speckles gradually change. Thus, the incremental method also ensured that the speckle patterns remained well correlated [7]. Table 5.1 summarizes the total applied movement magnitudes, along with typical total numbers of image frames and motion increments in each measurement.

Experiment | In-plane Displacement dx | Out-of-plane Tilt ω_y | In-plane Rotation ω_z
Applied Movement | 5.00mm | 2.39˚ | 7.04˚
Movement per Step | 0.066mm | 0.016˚ | 0.092˚
Number of Increments | 75 | 150 | 75

Table 5.1 Applied total and incremental object motion magnitudes.

The speckle pattern bulk motion components (DX, DY) were evaluated using a cross-correlation based technique commonly used for Digital Image Correlation (DIC) analysis [1]. The incremental speckle displacements were computed and then summed (integrated) to yield the total displacements of the speckle hemisphere at the sensor location. This is conceptually similar to the method a computer mouse uses to track its position with a tiny sensor, even when the mouse is moved by distances greatly exceeding the sensor dimensions. The speckle displacement analysis was done using a custom Matlab algorithm based on cross-correlation and a Discrete Fourier Transform (DFT) [60].

Measurement sensitivity was investigated using a range of different source vs. sensor distance combinations. The tested geometries included all combinations of the distances shown in Table 5.2. The in-plane displacement and the out-of-plane tilt measurements were performed directly for each unique source and sensor distance combination. The in-plane rotation experiment, however, required a more detailed procedure because of the spatially varying vectorial speckle displacement fields. The imaging sensor was carefully placed at the same height as the laser beam, and the laser beam was aligned to propagate parallel to the rails of the optical table. The resulting speckle displacements observed at the sensor location were within 1˚ of the vertical camera Y-axis and were linear.
The applied rotation angle was known, so it was possible to use the recorded vertical speckle displacement DY to determine the horizontal distance between the sensor and the speckle hemisphere center of rotation CoR (X_CoR):

X_CoR = DY / tan(ω_z) (5.1)

Source Distances L_S [mm] | Sensor Normal Distances [mm] | Sensor Diagonal Distances L_C [mm] | Imaging Angle ψ [˚] | 1/cos ψ [-]
1000 | 500 | 525 | 17.74 | 1.050
1500 | 1500 | 1509 | 6.09 | 1.006
1900 | 2217 | 2223 | 4.13 | 1.003

Table 5.2 Studied source and imaging distances and angles.

In principle, only one measurement is required to determine the speckle hemisphere CoR. However, because the sampled sensor area is a very small portion of the speckle hemisphere, the data contained in one image are too local to provide a reliable estimation of the global speckle rotation. Therefore, the following more detailed two-step method was used instead. The first measurement was done with the illumination close to the object center of rotation, and the resulting speckle displacement was determined. The laser source was then laterally offset by 17.0mm in the +x-direction, and the measurement was repeated. Referring back to Equation (2.29), the observed change in the speckle pattern CoR equals the applied laser offset multiplied by the geometric β factor:

∆X_CoR = −β ∆x_offset (5.2)

where

∆X_CoR = X_CoR,2 − X_CoR,1 = (DY₂ − DY₁) / tan(ω_z) (5.3)

Therefore:

β = −∆X_CoR/∆x_offset = −(DY₂ − DY₁) / (∆x_offset tan(ω_z)) (5.4)

The two-step approach factors out uncertainties related to potential misalignments. The method is also insensitive to illumination deviations from normal incidence, as long as the laser source is moved purely in the x-direction between the measurements. To ensure this, the laser was shifted using a precision linear translation stage.

5.1.3 In-plane Displacement Measurements

Figure 5.2 shows the initial in-plane displacement study results. The recorded and summed total speckle displacements were converted to corresponding sensitivity values by dividing by the applied surface displacement:

S_dx = DX_dx/dx = [cos(ψ) + β cos²(θ)/cos(ψ)] ≈ 1 + β for θ, ψ ≈ 0˚ (5.5)

The resulting sensitivity values were plotted as a function of the geometric β value. The result plots are grouped according to the source distance; for each line, the lowest β value corresponds to the shortest sensor distance, while the highest β value corresponds to the longest sensor distance. For comparison, the expected theoretical sensitivity is plotted for the case of normal incidence observation. The measurements show moderately close agreement with the theoretical expectations for the two longest source distances. However, the observed sensitivities were systematically higher than expected, particularly for the shortest source distance. Such behavior suggests that the effective source distance could be less than the physical distance measured from the laser housing.

Figure 5.2 Initial in-plane displacement sensitivity S_dx vs. observation/illumination distance ratio β [25].

Laser light propagates as a Gaussian beam. Therefore, its wavefront radius of curvature in the far field equals the distance from the waist, i.e., the laser waist is the effective focal point from which the laser light rays appear to diverge [42]. Thus, the effective source distance must be measured with respect to the waist. A thorough examination of the laser output revealed that the beam waist was located outside the laser exit aperture. Based on a set of beam diameter measurements, the waist was estimated to be about 150mm away from the laser source exit lens.
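The magnitude of this waist-offset effect on Equation (5.5) can be illustrated with a few Matlab lines for the shortest measured geometry; the 150mm correction is the estimate quoted above, and the small-angle form of Equation (5.5) is used.

% Sketch: effect of the laser waist offset on in-plane displacement sensitivity.
LC      = 525;                % diagonal sensor distance [mm] (Table 5.2)
LS_body = 1000;               % source distance measured from the laser housing [mm]
LS_eff  = LS_body - 150;      % effective distance measured from the beam waist [mm]

S_uncorrected = 1 + LC / LS_body;   % Equation (5.5) with theta, psi ~ 0
S_corrected   = 1 + LC / LS_eff;    % sensitivity rises from ~1.53 to ~1.62

For this geometry the uncorrected model underestimates the sensitivity by roughly 6%, which is of the same order as the systematic deviations seen in Figure 5.2.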
Figure 5.3 shows the in-plane displacement study results with the source distances reduced by 150mm to correct for the waist offset. The new plots show considerably improved agreement with the theory, with some residual error for the lowest source distance.

Figure 5.3 In-plane displacement sensitivity after laser waist offset correction [25].

5.1.4 Out-of-plane Tilt Measurements

Figure 5.4 shows the corresponding measurement results for the object out-of-plane tilt experiments. The recorded speckle displacements are displayed as a function of the sensor distance, along with the theoretical expectations computed according to Equation (2.22). The tilt measurement was not affected by the laser waist offset, as the results are independent of the source distance. The experimental results show very close agreement with the theoretical values, demonstrating proportional sensitivity to sensor distance and insensitivity to source distance.

Figure 5.4 Observed speckle displacements DX_ωy vs. sensor distance L_C resulting from object out-of-plane rotation [25].

In general, the surface tilt measurements were observed to have very high sensitivity in comparison to the displacement measurements. For the applied tilt angle of 2.39˚, the total speckle pattern displacement was multiple times larger than the sensor dimensions. This motion could not be analyzed in a single step, which highlights the importance and practicality of the incremental method. Even with incremental imaging, the speckle displacements were quite large and thus challenging for the correlation calculations. The camera framerate was limited by the time required to compute the incremental speckle motions before acquiring a subsequent image. Therefore, to improve the tracking robustness, the average speckle size was reduced by increasing the diameter of the illuminated spot. This was done by expanding the laser beam using a diverging lens. The finer texture with higher speckle feature density was easier for the algorithm to track. The larger illumination spot was used only for the tilt measurements. An additional study confirmed that the larger spot size did not change the measurement sensitivity but improved tracking stability, particularly at the highest sensor distances where both the observed displacements and the speckle size were the largest.

5.1.5 In-plane Rotation Measurements

The in-plane rotation study was performed using the previously described two-step method for the same source-sensor distance combinations used to study in-plane displacements and out-of-plane tilts. Figure 5.5 shows the corresponding results. The displayed sensitivities S_ωz = β were computed using the observed sensor DY-displacements according to Equation (5.4) and compared to the theoretical expectations. The surface points offset from the rotation axis have an in-plane displacement component, so the resulting speckle motions are affected by the effective source distance. Therefore, the waist-offset correction was again applied here. The results show very good agreement at all source distances.

A second experiment was done where the object rotation center CoR was illuminated first at normal incidence θ₁ = 0˚ and later at an oblique angle θ₂ = 0.57˚ (10mRad). In both measurements, the illumination distance was L_S = 1500mm and the sensor distance L_C = 1509mm. The expected shift in the speckle pattern CoR was 15.09mm according to Equation (2.24).
The observed shift was 14.91mm, which is very close, only 1.2% below the theoretical expectation.

Figure 5.5 In-plane rotation sensitivity S_ωz vs. observation/illumination distance ratio β [25].

5.1.6 Visualization of Rotating Speckle Field

The rotating speckle field resulting from object in-plane rotation was additionally visualized using an indirect approach [30] where the laser light scattered from the rotating object was projected onto a flat white cardboard screen placed behind the laser source. This created a pattern of visible speckles that could be observed directly by eye or by using a camera to take focused images of the screen surface. This method was not limited by the camera sensor size, so it allowed observing a much greater portion of the speckle hemisphere, clearly seeing the rotation of the entire pattern, and locating the rotation center. Figure 5.6 shows an example speckle pattern captured by this method, overlaid with the vector displacement field (indicated by the red arrows) computed from incrementally captured speckle patterns. While a large portion of the speckle field could be simultaneously visualized, projecting a radially diverging speckle hemisphere onto a flat screen created projection errors. Furthermore, the observed vectorial displacement magnitudes deviated from the theoretical expectations. The discrepancies may be attributed to the secondary speckles that were generated when the laser light scattered for the second time from the screen surface. Therefore, the method is not well suited for quantitative motion measurements. However, it may provide a valuable educational tool to teach about speckle formation and movements, perhaps in combination with the disco ball analogy.

Figure 5.6 Visualization of the rotating speckle field caused by object in-plane rotation. The laser illuminated the object at normal incidence. Scattered light was projected onto a screen placed behind the laser. A camera placed above the object was focused on the screen with the projected speckle pattern [25].

5.2 Discussion

It is important to know the effective source distance accurately to scale the observed speckle motions correctly. However, as was observed in the presented experiments, the laser waist may be challenging to locate accurately. The related uncertainty is a particular concern when working with small source distances. Furthermore, because of Gaussian beam propagation, the laser beam effective curvature behaves nonlinearly in the vicinity of the beam focal point [42]. Therefore, the observed in-plane displacement sensitivity may behave in an unexpected manner if a diverging laser source is placed very close to the object surface. Hence, larger source distances are recommended for practical measurements in order to minimize the effect of source focal point uncertainty.

Speckle Imaging sensitivity can be simplified by tuning certain geometric parameters. For example, if the laser source is collimated, the effective source distance approaches infinity and the sensor/source distance ratio β becomes zero. Consequently, the in-plane displacement measurement becomes insensitive to source distance and the in-plane rotation measurement insensitive to illumination offsets. Accurate beam collimation can be particularly useful for close-range measurements to minimize the errors resulting from the uncertainty in the effective waist position.
Regarding sensor position, out-of-plane tilt sensitivity always scales linearly with the sensor distance, while in-plane displacement sensitivity has a lower slope and an additional constant term. Therefore, a remotely placed sensor measures primarily tilt motions, whereas a nearby sensor is mostly sensitive to in-plane displacements. Thus, if the primary objective is to measure tilts, it is advantageous to maximize the sensor distance, while a small sensor distance is optimal for displacement measurements. However, if the goal is to capture many different motion types simultaneously, a medium sensor distance may be most appropriate. A detailed understanding of the various geometric parameters greatly assists effective implementation of the Speckle Imaging method for new engineering applications.

Provided that the setup geometry (β) is known, it is possible to use the two-step method to determine the object in-plane rotation angle according to Equation (5.4). This feature has been previously investigated by Briers and Angus [44], and in-plane rotations have also been studied by Hrabovský and Horvath [19]. In general, however, in-plane rotations have received only very limited attention, and the primary emphasis has been on measuring the object rotation magnitude. On the other hand, the vectorial nature of the resulting speckle hemisphere motions has the potential to provide further information about the rotating object. With an appropriate camera arrangement, it could be possible to extract, for example, the location of the rotation axis and monitor its straightness as the object rotates.

5.3 Conclusion

The presented experiments demonstrated the various characteristics of Objective Speckle Imaging and showed the method's potential to track a continuously moving object surface that displaces, tilts, or rotates in-plane. The observed speckle displacements were in close agreement with the theoretical expectations predicted by the proposed Speckle Hemisphere Model. However, the measurement geometry, particularly the illumination distance, must be well known so that the observed speckle displacements can be scaled properly. In general, the method has high sensitivity, so it is ideal for measuring small surface motions. However, with the chosen incremental imaging approach, the method can be extended to measure even macroscopic surface motions of flat objects with virtually no upper range limitations, provided that the camera framerate can be appropriately adjusted. Finally, in addition to quantitative measurements, the demonstrated speckle hemisphere visualization can be used as an educational tool to illustrate the Speckle Imaging method and related optical phenomena.

Chapter 6: Sensitivity Characteristics of Defocused Speckle Imaging

This chapter experimentally demonstrates that a speckle pattern recorded by a defocused camera corresponds to the image that would be captured by a lensless sensor located at the camera's focal plane. This feature enables the effective speckle field sampling position to be chosen freely by controlling the camera focus distance, rather than by physically moving the sensor. In addition, the defocused speckle patterns are scaled by the imaging system in-focus magnification ratio, which enables further control of measurement sensitivity by adjusting the lens focal length.
The relationship between the speckle size and the sampling distance is studied, and a series of displacement and tilt measurements is presented to investigate the sensitivity characteristics of Defocused Speckle Imaging. The test measurements made at different object distances up to 16 meters reveal the method's suitability for high-sensitivity remote measurement applications. Finally, the effectiveness of the proposed approach for separating linear and rotational components under multiaxial object motion is investigated. The contents of this chapter are adapted from an article “Remote Surface Motion Measurements using Defocused Speckle Imaging” published in Optics and Lasers in Engineering [45].

6.1 Uniaxial Object Motion Measurements

A set of uniaxial measurements was conducted to study the characteristics of Defocused Speckle Imaging. First, the connection between the objective and defocused speckle patterns was investigated, particularly the relationship between the sampling scale and the imaging magnification ratio. This was followed by investigating the speckle size dependence on the setup geometry. Finally, Defocused Speckle Imaging sensitivity characteristics were studied under different types of object motion and varying defocus levels.

6.1.1 Uniaxial Motion Measurement Procedure

The chosen test object was a flat rectangular MDF plate. The illumination source was a green DPSS laser (CrystaLaser GCL-100-S, λ = 532nm) with a narrow beam, a waist diameter of 0.36mm and a divergence angle of 2mrad. Samples of the resulting speckle field were recorded using a Matlab-controlled machine vision camera (AVT ProSilica GC1280, resolution 1024x1280 pixels, pixel size 6.7x6.7 µm2). The camera sensor was used without a lens to capture the objective speckle patterns, while the defocused patterns were recorded using a telephoto lens (Navitar f=75mm, f#=1.3, C-mount). The camera focus distance was changed by placing extension rings of varying thicknesses between the sensor and the lens. The lens was used with a fully open aperture throughout the measurements, and the camera exposure time was adjusted to maintain consistent speckle pattern brightness. The test object was moved using a servo-controlled linear actuator (Newport CMA-25CCCL) connected to a Matlab-interfaced driver (Newport ESP100). For the in-plane displacement experiments, the object was mounted onto a translation stage and moved in the +X-direction, as indicated in Figure 6.1. For the tilt experiments, on the other hand, the object was fixed onto a rotation stage (the rotation axis was within the object surface plane) and tilted about its Y-axis in the positive direction. The laser was carefully aligned to illuminate a spot on the rotation axis, so that the applied motion was purely rotational.

Figure 6.1 Uniaxial object motion instrumentation. (Top) Schematic diagram. (Bottom) Photo of the actual setup configured for in-plane displacement dx measurement. S: Laser source, O: Object, FP: Focal plane, C: Camera sensor. The optical table hole spacing is 1”.

For the uniaxial measurements, the linear actuator was set to move the object at a constant rate, causing the resulting speckle hemisphere to be in steady, continuous motion. The camera captured speckle pattern samples at frequent, regular time intervals during the object motion. The horizontal and vertical bulk speckle motion components were tracked in Matlab using a custom algorithm based on cross-correlation and DFT [1, 60]. The incremental speckle displacements were determined between successive frames and summed (integrated) to yield the total speckle motions.
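The essence of one tracking increment can be illustrated with a minimal, self-contained Matlab sketch; it recovers an integer-pixel shift between two frames from the cross-correlation peak, whereas the actual algorithm [60] additionally refines the peak location to sub-pixel precision.

% Minimal sketch of one tracking increment (integer-pixel precision only).
A  = conv2(rand(256), ones(3)/9, 'same');    % synthetic speckle-like frame
B  = circshift(A, [2, 5]);                   % next frame, shifted by a known amount
A  = A - mean(A(:));  B = B - mean(B(:));    % remove the mean intensity
xc = fftshift(real(ifft2(fft2(B) .* conj(fft2(A)))));   % cross-correlation map
[~, k]   = max(xc(:));                       % correlation peak location
[py, px] = ind2sub(size(xc), k);
dy = py - (floor(size(xc,1)/2) + 1);         % recovered row shift (2 here)
dx = px - (floor(size(xc,2)/2) + 1);         % recovered column shift (5 here)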
6.1.2 Uniaxial Motion Measurement Parameters

Table 6.1 lists the imaging system parameters. Increasing the separation between the lens and the sensor reduces the focus distance, which correspondingly reduces the diameter of the imaged field of view (FOV) and thus increases the imaging in-focus magnification ratio. The largest lens separation (69mm) was chosen to obtain an approximately unitary magnification value, thus allowing easy comparisons between the defocused and the objective speckle patterns.

Lens Focal Length [mm] | 75 | 75 | 75 | 75 | 75 | Lensless, Objective
Lens Separation [mm] | 10 | 20 | 30 | 40 | 69 | –
In-focus Magnification M [-] | 0.22 | 0.36 | 0.49 | 0.62 | 1.00 | 1
Focus Distance [mm] | 476 | 356 | 310 | 287 | 272 | 0

Table 6.1 Imaging system parameters for the uniaxial measurements.

The in-focus magnification ratios were measured by placing a mm-scale ruler in front of the camera at a distance that maximized the image sharpness. This marked the location of the focal plane, while the distance between the focused ruler and the camera sensor defined the focus distance. The short focus distances and wide aperture made the lens depth of focus very shallow. Consequently, it was very easy to locate the focal plane accurately, because shifting the ruler away from the maximum sharpness position quickly introduced a large amount of blur. The magnification ratios were determined by taking images of the focused ruler, measuring the ruler length in the image in pixels, multiplying the result by the known pixel diameter, and lastly dividing by the physical ruler length.

6.1.3 Connection Between Objective and Defocused Speckle Patterns

The object surface was illuminated at a distance L_S = 1000mm and at an angle θ = 6.5˚. The laser beam had a Gaussian intensity profile, and the illuminated spot FWHM diameter was d_spot = 2.2mm. The spot diameter was determined from an upscaled digital image of the illuminated surface by measuring the size of the beam area that had an intensity of at least 50% of the maximum brightness. It was important to adjust the exposure time carefully to avoid saturation in order to estimate the spot size accurately.

An objective speckle pattern was first captured by a lensless sensor located at L_C = 700mm distance from the surface (ψ = 0˚). The 75mm focal length lens was then placed in front of the sensor with a 69mm separation to achieve a unitary in-focus magnification ratio. The camera-lens combination was shifted away from the surface so that the focal plane was located at the initial position of the lensless sensor, leading to an effective sampling distance ∆L = 700mm. Figure 6.2 shows the captured speckle patterns. The objective speckle image was rotated in software by 180˚ to compensate for the missing through-lens image inversion. The two speckle pattern images look very similar, with visually matching speckle features occurring at the same scale.

The speckle pattern similarity was further studied using the open source Ncorr Digital Image Correlation software [61]. The objective speckle pattern image was correlated with the defocused speckle pattern image using circular image patches (subsets) of 50-pixel radius. The adjacent patch centers were 5 pixels apart, forming a rectangular grid that covered the whole image. The median correlation coefficient was 0.90 with a standard deviation of 0.05.
The high, consistent correlation coefficients complement the visual inspection, confirming the equivalence of the speckle features. The matched speckle locations were used to compute apparent strains that correspond to image stretching, thus indicating relative magnification differences. The median strains and corresponding standard deviations were ε_xx = 0.0131 ± 0.0075 and ε_yy = 0.0129 ± 0.0084 in the x- and y-directions, respectively. The low strain magnitudes indicate that the two imaging scales were very close, within 1.3% of each other. The analysis thus proves experimentally that the focal plane defines the effective sampling location, thereby validating the interpretation of Defocused Speckle Imaging as previously proposed by Horvath [13].

Figure 6.2 Comparison of objective and defocused speckle patterns recorded at the same effective sampling distance. The solid red rounded rectangle shows an example subset in the objective speckle pattern, and the dashed blue rectangle shows a matching subset in the defocused speckle pattern [45]. The scale bar indicates the physical size of the speckle pattern at the sampling plane.

6.1.4 Defocused Speckle Pattern Characteristics

Defocused speckle patterns were investigated further by recording the speckle hemisphere at different sampling distances and using various imaging magnification ratios. Figure 6.3 shows the recorded speckle patterns arranged into a grid of increasing magnification ratio and sampling distance. Speckle size clearly increases as a function of both magnification and sampling distance. Speckle pattern brightness decreases with increasing sampling distance and magnification ratio, as a smaller fraction (solid angle) of the speckle hemisphere reaches the sensor. For this reason, exposure times had to be increased when sampling far away from the surface or at high magnification. Objective Speckle Imaging is also particularly prone to ambient light because a lensless sensor has no limiting aperture and can thus receive light from all directions. This is apparent for the recorded objective speckle patterns in Figure 6.3, particularly at the largest sampling distance (bottom right). On the contrary, a defocused camera collects light coming only from the direction of the object and is thus more effective in bright measurement environments. The central bright dot present in the M=0.36 column is an artifact resulting from internal reflection in the camera lens.

Figure 6.3 Speckle size dependence on sampling distance and imaging magnification ratio. Sampling distance increases row-wise from top to bottom (∆L = 400, 700, 1000, 1300mm), and magnification increases column-wise from left to right (M = 0.22, 0.36, 0.49, 0.62, 1.00). The right-most column shows objective speckle patterns with corresponding effective sampling distances [45]. The scale bar indicates the physical size of the speckle pattern at the sampling plane.

The speckle patterns captured with the smallest sampling magnifications (left column) also show the vignetting effect caused by the lens entrance pupil. Vignetting emerges as a subtle brightness reduction towards the image edges and causes obvious shadowing that completely blocks light from reaching the outermost sensor areas. The diameter of the captured speckle pattern, i.e., the FOV, is given by the blur diameter in the object space according to Equation (3.11).
Vignetting effects are less evident at higher magnifications, where only the central un-vignetted portion of the speckle pattern falls onto the sensor, and also at longer sampling distances, where the sampled light rays are more parallel, as can be understood from Figure 3.5 (b). Vignetting can thus be reduced by increasing the magnification or the sampling distance, although this unavoidably leads to a larger speckle size. An alternative would be to use a lens with a larger aperture diameter (lower f-number).

6.1.5 Defocused Speckle Size vs. Sampling Distance

To allow quantitative comparison, the average speckle sizes were evaluated from the recorded speckle pattern images by determining the FWHM diameter of the two-dimensional normalized autocorrelation peak for each speckle pattern. The correlation analysis was performed using the Matlab function ‘normxcorr2’ for a 401x401 pxl2 image window extracted from the speckle pattern center. Figure 6.4 shows the analysis results, along with the theoretical expectations according to Equations (4.16-4.17). The observed speckle size range was 21…239µm (3…36pxl). The left plot reveals the linear dependence between the speckle size and the sampling distance, while the right plot verifies that the speckle size scales linearly with the magnification ratio. Lens focus adjustment thus allows great control for tuning the speckle size optimally.

Figure 6.4 (Left) Statistical average speckle diameter as a function of the sampling distance for different levels of magnification. (Right) The same data displayed per unit magnification [45].

The observed objective speckle size matched well with the theoretical expectation for a Gaussian intensity distribution. Similarly, the defocused imaging configurations with high magnification had speckle sizes close to the theoretical values. However, the speckle sizes of the low-magnification configurations were systematically higher than expected, and the deviations appeared to scale inversely with magnification. All sampling distances were sufficiently high to ensure the diffuse imaging condition according to Equation (3.12). Therefore, the illumination spot size was the limiting aperture, so the defocused speckle sizes should scale linearly with the sampling distance. However, since the correlation window covered about 40% of the image height, the window boundaries were likely affected by vignetting. Thus, boundary pixels received only a fraction of the scattered light, so they had an effectively smaller aperture, leading to an increased overall statistical average speckle size.

Figure 6.5 shows the sampling distances estimated according to Equation (4.17) using the extracted speckle sizes and the measured illumination spot diameter. Since speckle size scales linearly with the distance from the surface, any unaccounted deviations in the speckle size cause equal relative errors in the estimated sampling distance. On one hand, vignetting errors could be reduced by using a smaller correlation window. However, this would reduce the total number of speckles falling within the correlation window, making the computed statistical speckle size more prone to random variations, as the sizes of individual speckles vary depending on the particular surface roughness.

Figure 6.5 Estimated vs. actual sampling distances in the uniaxial motion setup.
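The speckle-size analysis can likewise be condensed into a short Matlab sketch. The FWHM extraction below is a simplified one-dimensional version of the two-dimensional analysis, the input image is hypothetical, and the final line uses the far-field scaling d_sp ≈ M λ ∆L / d_spot, i.e. the form of Equation (4.17) without its exact Gaussian-spot constant.

% Sketch: average speckle diameter from the autocorrelation FWHM and the
% corresponding sampling distance estimate (illustrative only).
img = double(imread('speckle.png'));     % hypothetical speckle pattern image
win = img(1:401, 1:401);                 % 401x401 pxl analysis window
win = win - mean(win(:));
F   = fft2(win);
ac  = fftshift(real(ifft2(F .* conj(F))));
ac  = ac / max(ac(:));                   % normalize the correlation peak to 1
row = ac(201, :);                        % central horizontal profile
fwhmPx = sum(row >= 0.5);                % full width at half maximum [pxl]
dSp    = fwhmPx * 6.7e-3;                % speckle size [mm] for 6.7 µm pixels

lambda = 532e-6;  dSpot = 2.2;  M = 1.0; % wavelength and spot diameter [mm]
dL_est = dSp * dSpot / (M * lambda);     % estimated sampling distance [mm]

For example, a 169µm speckle size measured at unitary magnification corresponds to an estimated sampling distance of about 700mm.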
6.1.6 Measurement Sensitivity Characteristics

The Defocused Speckle Imaging sensitivity characteristics were studied through a set of uniaxial in-plane displacement and tilt motion measurements. Table 6.1 lists the imaging parameters, while the geometric parameters are shown in Table 6.2. The illumination distance was constant throughout the measurements (L_S = 1000mm). The speckle motions were recorded for different combinations of sampling distances (∆L = 400mm…1300mm) and magnification ratios (M = 0.22…1.00).

Figure 6.6 shows the results for the in-plane object motion experiments. The left graph shows the observed sensitivity values S_dx = DX/dx (observed speckle motion per applied surface in-plane displacement) for different ratios of sampling distance over illumination distance, β = ∆L/L_S. The measured results are shown by scatter plots, while the theoretical expectations (according to Equation (3.14)) are displayed by solid lines.

Measurement Type | In-plane Displacement | Tilt
Illumination Distance L_S [mm] | 1000 | 1000
Sampling Distances ∆L [mm] | 400, 700, 1000, 1300 | 400, 700, 1000, 1300
Illumination Angle θ [°] | 6.5 | 9.9
Imaging Angle ψ [°] | 0 | 3.4
Total Applied Motion | 3.00mm | 25.6mrad (1.47°)
Motion Rate | 0.10mm/s | 0.049°/s
Typical Number of Increments | 40 | 40
Typical Incremental Motion | 0.075mm | 0.64mrad (0.037°)

Table 6.2 Geometric parameters for the uniaxial motion measurements.

Figure 6.6 (Left) Observed in-plane displacement sensitivity as a function of the sampling/illumination distance ratio for different levels of magnification. The vertical axis indicates the observed motion at the sensor over the applied displacement. The experimental values are shown as scatter plots, and the solid lines represent the theoretical expectations. (Right) The same data displayed per unit magnification. The dashed line shows the theoretical expectation [45].

The results agree well with the theoretical expectations. The observed sensitivity increases linearly as a function of the β-ratio, and also with increasing magnification ratio. The defocused imaging sensitivity at unitary magnification was equal to that of the objective imaging configuration. The right graph shows the same results normalized by the magnification ratio. The overlap of the datapoints indicates good agreement with the theoretical expectations. The highest overall relative errors occurred for the configuration with the smallest magnification ratio. This measurement uncertainty may have been partially caused by the vignetting effect; the speckle pattern did not fully cover the recorded image, which possibly reduced the tracking accuracy. The observed sensitivities were slightly higher than expected, which is likely due to residual uncertainty about the exact position of the laser focal point. The accuracy is expected to improve with a larger illumination distance, as this would reduce the relative error in the laser waist position.

Figure 6.7 shows the corresponding results for the object tilt measurements. The sensitivity values S_ωy = DX/ωy (the ratio of the observed speckle motion over the applied tilt angle) are displayed as a function of the sampling distance ∆L. The experimental results are very close to the theoretical expectations from Equation (3.18). The measured speckle motions have much higher relative sensitivity to tilts than to in-plane displacements.
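This comparison can be made quantitative with the small-angle forms of the two sensitivity expressions (θ, ψ ≈ 0); the following Matlab lines are a simplified restatement of Equations (3.14) and (3.18) that omits the cos θ and cos ψ factors of the full equations.

% Small-angle sensitivity estimates for Defocused Speckle Imaging.
LS = 1000;                   % illumination distance [mm]
dL = [400 700 1000 1300];    % sampling distances [mm]
M  = 1.0;                    % in-focus magnification ratio [-]

S_dx = M * (1 + dL / LS);    % speckle motion per unit in-plane displacement [mm/mm]
S_wy = 2 * M * dL;           % speckle motion per unit tilt [mm/rad]
% At dL = 1300 mm, a 1 mrad tilt moves the speckles by about 2.6 mm,
% while a 1 mm in-plane displacement moves them by only 2.3 mm.

This order-of-magnitude difference is what makes large sampling distances attractive for tilt measurements.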
Moreover, the tilt sensitivity is independent of the laser focal point location, which explains why the tilt measurements have generally higher accuracy than the in-plane displacement measurements.

Figure 6.7 (Left) Observed tilt sensitivity as a function of the sampling distance for different levels of magnification. The vertical axis indicates the observed motion at the sensor divided by the applied rotation angle. The experimental values are shown as scatter plots, whereas the solid lines represent the theoretical expectations. (Right) The same data displayed per unit magnification. The dashed line shows the theoretical expectation [45].

6.2 Complex Object Motion Measurements

The objective of the complex object motion experiments was to investigate how accurately the individual motion components can be extracted if the object simultaneously displaces and rotates. Various object distances were tested to determine the method's potential for long-range measurements.

6.2.1 Complex Motion Measurement Procedure

The complex object motion was applied using a combination of two stepper motor linear actuators, #1 and #2 (Nippon Pulse NPM PF35-24C1, actuation step size 1/30 mm). A rigid aluminum rod was pivoted about one end by pushing its other end with linear actuator #1 at a 93mm distance from the rotation axis. The second linear actuator #2 was fixed on top of the aluminum rod at its rotating end. A flat object with an MDF surface was attached onto linear actuator #2 so that the object surface plane was located on the rotation axis. The stepper motors were controlled by a combination of an Arduino Uno and an Adafruit Motor Shield (v2.3) interfaced with Matlab.

In contrast to the uniaxial measurements, a stepwise approach was used to study complex object motion. The object was first displaced by a small increment, then tilted by a small increment, and the resulting speckle field was sampled by a pair of cameras that were focused at different distances from the object while the object was stationary. These steps were repeated for a specified number of increments until the desired motion path was completed. The purpose of the stepwise method was to ensure that the stepper motors and the cameras remained appropriately synchronized for each increment.

Both actuators were equipped with limit switches that were used to reset the actuator positions before each new measurement. This ensured that the same location on the specimen surface was illuminated and measured in each case. Moreover, all studied geometric and motion configurations were measured three times to monitor the repeatability and robustness of the method. The average speckle displacements were calculated and used for the analysis.

A dual-camera system was used to simultaneously record the speckle patterns at two distinct sampling locations. This was required to separate the linear displacement contribution from the tilt signal, as described in Section 3.6. Camera 1 (CAM1, AVT ProSilica GC2450C, resolution 2448x2050 pixels, pixel size 3.45 x 3.45 µm2) was used with a long focal length telephoto lens (Opteka f=500mm, f#=6.3 mirror lens, used with a 2x teleconverter) to enable focusing far away while simultaneously maintaining a high sampling magnification.
The second camera (CAM2, AVT ProSilica GC1290, resolution 960x1280 pixels, pixel size 3.75 x 3.75 µm2) was equipped with a conventional telephoto lens (Navitar f=75mm, f#=1.3, C-mount, with a 5mm extension ring) that was near-focused to maximize the sampling distance from the specimen.

Figure 6.8 shows the experimental setup. A pair of mirrors was used to fold the setup geometry to enable measurements at large illumination and imaging distances in a limited laboratory space. The laser source was aimed at mirror M1, which reflected the beam towards the object and illuminated a circular spot on its surface. The FWHM spot diameter was 5.6mm, and the illumination distance L_S = 3000mm (distance from the laser beam waist to M1 and further to the object). Mirror M2 reflected a portion of the scattered speckle field, and cameras CAM1 and CAM2 recorded the corresponding speckle patterns at sampling distances ∆L₁ and ∆L₂. The object distance was changed by moving mirror M2, while the physical illumination distance was kept fixed throughout the measurements. The CAM1 focus distance was adjusted so that its focal plane FP1 was located approximately halfway between the camera body and the object surface. On the other hand, CAM2 was always set for near-focus to maximize the sampling distance.

Figure 6.8 Complex object motion instrumentation. (Top) Schematic diagram. (Bottom) Photo of the actual setup. S: Laser source, O: Object, M: Mirror, FP: Focal plane, C: Camera sensor. The optical table hole spacing is 1” [45].

6.2.2 Complex Motion Measurement Parameters

Three measurement configurations with different object distances were investigated. The object distances ranged between 4 and 16 meters. The object distance was measured from the object surface to mirror M2 and further to the front edge of CAM1. Table 6.3 lists the geometric parameters, while the applied motions are shown in Table 6.4. Both the displacement and the tilt were applied in 15 equal-sized increments. The incremental displacement was 0.10mm and the tilt increment 0.36mrad (0.02˚).

Configuration | Object Distance [mm] | Illumination Distance L_S [mm] | Illumination Angle θ [°] | Sampling Distance ∆L, CAM1 [mm] | ∆L, CAM2 [mm] | Magnification M, CAM1 [-] | M, CAM2 [-] | Imaging Angle ψ, CAM1 [°] | ψ, CAM2 [°]
1 | 3938 | 3000 | 1.5 | 1600 | 3097 | 0.3994 | 0.1543 | -2.1 | -3.9
2 | 9966 | 3000 | 1.5 | 5395 | 9125 | 0.1876 | 0.1543 | -0.9 | -1.6
3 | 15994 | 3000 | 1.5 | 8330 | 15153 | 0.1083 | 0.1543 | -0.6 | -1.0

Table 6.3 Geometric parameters for the complex motion measurements.

6.2.3 Separating In-plane Displacements from Out-of-plane Tilts

To validate the accuracy and repeatability of the experimental setup, a set of uniaxial displacement and tilt measurements was first conducted. This was followed by a set of complex multiaxial motions where the object was both displaced and tilted simultaneously. The same test sequence was repeated for all three measurement configurations with different object distances. Table 6.4 lists the observed average speckle movements and the related standard deviations for each camera, along with the estimated object motions. Figure 6.9 shows the estimated vs. applied object motions as a function of the object distance. For the uniaxial validation measurements, the surface motion estimates are computed independently for CAM1 and CAM2 using Equations (3.14 & 3.18). However, both CAM1 and CAM2 speckle motions are jointly needed to characterize the multiaxial object motion according to Equation (3.24).
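In practice, the joint characterization amounts to solving a small linear system in which each camera contributes one equation relating its observed speckle motion to the two unknowns. The Matlab sketch below illustrates this for Configuration 1, using the small-angle sensitivity forms rather than the full Equation (3.24), so the results differ slightly from the tabulated values.

% Sketch: separating displacement and tilt from the two-camera speckle motions.
LS = 3000;                      % illumination distance [mm]
dL = [1600; 3097];              % CAM1 and CAM2 sampling distances [mm] (Table 6.3)
M  = [0.3994; 0.1543];          % CAM1 and CAM2 magnification ratios [-]
DX = [7.926; 5.716];            % observed multiaxial speckle motions [mm] (Table 6.4)

A = [M .* (1 + dL / LS), 2 * M .* dL];   % per-camera [S_dx, S_wy] coefficients
m = A \ DX;                              % solve the 2x2 system
dx_est = m(1);                           % ~1.46 mm with these coefficients
wy_est = m(2) * 1e3;                     % ~5.50 mrad

The small differences from the Table 6.4 estimates (1.491mm and 5.486mrad) come from the neglected cos θ and cos ψ factors.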
Configuration | Camera | Applied Motion Type | True Applied Motion | Expected Speckle Motion [mm] | Observed Speckle Motion [mm] | Estimated Applied Motion | Relative Error [%]
1 | CAM1 | dx | 1.50mm | 0.918 | 0.908 ±0.010 | 1.482mm | -1.2%
1 | CAM1 | ω_y | 5.37mrad | 6.858 | 7.026 ±0.011 | 5.497mrad | +2.5%
1 | CAM1 | dx + ω_y | 1.50mm + 5.37mrad | 7.776 | 7.926 ±0.002 | 1.491mm + 5.486mrad | -0.6% / +2.3%
1 | CAM2 | dx | 1.50mm | 0.470 | 0.464 ±0.006 | 1.480mm | -1.4%
1 | CAM2 | ω_y | 5.37mrad | 5.132 | 5.259 ±0.011 | 5.497mrad | +2.6%
1 | CAM2 | dx + ω_y | 1.50mm + 5.37mrad | 5.603 | 5.716 ±0.004 | 1.491mm + 5.486mrad | -0.6% / +2.3%
2 | CAM1 | dx | 1.50mm | 0.787 | 0.744 ±0.002 | 1.418mm | -5.5%
2 | CAM1 | ω_y | 5.37mrad | 10.859 | 10.724 ±0.004 | 5.298mrad | -1.2%
2 | CAM1 | dx + ω_y | 1.50mm + 5.37mrad | 11.646 | 11.435 ±0.058 | 1.747mm + 5.196mrad | +16.5% / -3.1%
2 | CAM2 | dx | 1.50mm | 0.935 | 0.879 ±0.002 | 1.410mm | -6.4%
2 | CAM2 | ω_y | 5.37mrad | 15.107 | 14.900 ±0.001 | 5.291mrad | -1.4%
2 | CAM2 | dx + ω_y | 1.50mm + 5.37mrad | 16.043 | 15.722 ±0.081 | 1.747mm + 5.196mrad | +16.5% / -3.1%
3 | CAM1 | dx | 1.50mm | 0.613 | 0.581 ±0.002 | 1.421mm | -5.2%
3 | CAM1 | ω_y | 5.37mrad | 9.678 | 9.781 ±0.011 | 5.421mrad | +1.1%
3 | CAM1 | dx + ω_y | 1.50mm + 5.37mrad | 10.291 | 10.315 ±0.039 | 1.570mm + 5.362mrad | +4.7% / -0.1%
3 | CAM2 | dx | 1.50mm | 1.400 | 1.312 ±0.004 | 1.405mm | -6.3%
3 | CAM2 | ω_y | 5.37mrad | 25.084 | 25.315 ±0.026 | 5.414mrad | +0.9%
3 | CAM2 | dx + ω_y | 1.50mm + 5.37mrad | 26.484 | 26.537 ±0.090 | 1.570mm + 5.362mrad | +4.7% / -0.1%

Table 6.4 Applied surface motions, observed speckle displacements and computed estimated surface motions.

Figure 6.9 Estimated object surface displacements and tilts at different object distances [45].

The uniaxial measurements show very good agreement between the estimated and applied object motions. The relative error of the uniaxial displacement measurement is larger for the larger sampling distance, while the tilt measurement accuracy improves with increasing sampling distance. In the complex motion experiments, the tilt measurement accuracy remains comparable to the uniaxial measurements, while the estimated surface displacements deviate much more from the actual values at large sampling distances.

The higher errors in the displacement measurements can be understood by comparing the relative displacement vs. tilt sensitivities. The speckle motions resulting from object tilts are roughly 7–18 times as high as the displacement signals. Consequently, even a small unintended surface tilt during a displacement measurement (that is expected to be purely uniaxial) may lead to a substantial error in the estimated surface motion. Similarly, a small error in the applied surface tilt during a complex object motion can induce a large error in the estimated surface displacement due to the sensitivity difference.

The uniaxial motion estimates from CAM1 and CAM2 are very well correlated, which indicates that the method is robust and that the observed results are systematic. Given the experimental uncertainties attributed to the home-built construction of the mechanical actuator assembly, the overall measurement accuracy can be considered very good. The actuator response, its linearity and repeatability are studied in more detail in Chapter 8. Furthermore, it is important to remember that the experimental geometry includes two greatly different scales; the studied incremental motions are at sub-millimeter scale, while the cameras are up to 16 meters away from the object. In other words, the applied incremental displacements are up to 160,000 times smaller than the object distance.

Finally, the CAM1 and CAM2 sampling distances were estimated similarly to the uniaxial case, using the average speckle sizes extracted from the captured speckle pattern images.
Figure 6.10 shows the corresponding results. The estimated distances are close to the expected values. Despite the challenging experimental conditions, the deviations from the theoretical values are lower than in the uniaxial motion setup (Figure 6.5). The better performance may be explained by the use of much greater sampling distances, which help to mitigate vignetting effects.

Figure 6.10 Estimated vs. actual sampling distances in the complex motion setup.

6.3 Discussion

Out-of-plane tilt sensitivity is directly proportional to the sampling distance, whereas in-plane displacement sensitivity has a lower slope and is non-zero even at zero sampling distance. Consequently, tilt sensitivity is high in comparison to in-plane displacement sensitivity at large sampling distances, whereas the reverse is true when the sampling plane approaches the object surface. Therefore, it is crucial to configure the imaging equipment geometry to target the specific measurement goals. For example, the complex object motion study demonstrated how a combination of two significantly different sampling distances can be used to separate linear and rotational speckle motion components.

The uniaxial motion experiments revealed that Defocused Speckle Imaging can reach very high sensitivities. Furthermore, the resulting speckle pattern movements can be tracked at very high accuracy using modern image correlation algorithms [48]. Using the maximum sensitivity configurations shown in Figures 6.6 and 6.7, and assuming a conservative speckle tracking accuracy of 1/10 pixels, the estimated maximum accuracy is 0.3µm for in-plane displacements and 0.3µrad for tilts. These values are very small and approach the regime of interferometric methods. On the other hand, the complex surface motion experiment showed that Defocused Speckle Imaging can be applied for remote measurements of small surface motions at significant distances away from the object. This is encouraging regarding the interest to apply the method to monitor objects that are difficult to access, e.g., due to their large size or because of environmental hazards.

The maximum practical measurement distance depends on various factors. Since the object surface scatters light in all directions, the light intensity reaching the camera sensor scales inversely with the square of the object distance. This means that extremely remote measurements at distances of, e.g., hundreds of meters would require, e.g., increasing the laser power or the camera exposure time to obtain sufficiently bright speckle patterns. However, the maximum laser power may be limited due to safety considerations, and exposure times should be kept low to avoid motion blur resulting from the high speckle motion sensitivity. On the other hand, there is a novel way to dramatically increase the method's light efficiency. If the object is coated with retroreflective tape, the scattered light intensity is concentrated into a narrow cone instead of a full hemisphere. This technique is studied later in Chapter 8. If, however, the object cannot be coated or painted to increase its reflectance, another way is to capture a larger portion of the speckle hemisphere by using a large-diameter lens. In addition to capturing more light, such an approach has the advantage of capturing more speckles by covering a wider FOV. This becomes very important at large distances, as the speckle size and speckle motion magnitudes scale proportionally to the sampling distance.
Otherwise, the speckle pattern may contain too few speckles for robust tracking and move too fast to maintain the required partial speckle pattern overlap between motion increments.

A large sampling distance is the key to obtaining high sensitivity at high magnification for remote measurements. However, this requires defocusing the camera to the point where all pixels on the sensor receive light from across the entire illuminated area. On one hand, the resulting fully diffused, objective-like speckle pattern is easy to track since it moves as a rigid body, but it does not contain any spatial information about the object surface. Therefore, the proposed technique is primarily intended for single-point measurements, but it could be applied to an extended area by scanning the illuminated spot across the object surface. At smaller sampling distances, the measurement sensitivity is not as high, but the imaging may retain some spatial resolution. In such a case, the Speckle Imaging concept could be extended to analyze non-uniform full-field surface motion fields.

While high overall sensitivity is advantageous for many precision applications, it also requires the use of a sufficiently high image acquisition framerate to maintain partial speckle pattern overlap in successive image frames and to avoid speckle decorrelation effects [7]. Therefore, the incremental motions must be significantly smaller than the sensor dimensions. Some laser speckle computer mouse cameras have dynamic framerates that actively adapt according to the mouse speed. Such an approach could be adapted also for Defocused Speckle Imaging measurements.

Speckle size was found not to have a strong effect on measurement accuracy in the conducted studies. However, the observed incremental speckle motions were substantial in all measurements, spanning numerous pixels even for the lowest sensitivity configuration. Thus, there was no need to detect very small, fractional-pixel motions. On the other hand, speckle size would likely play a more important role in setups with smaller sensitivities and/or smaller sub-pixel scale incremental motions, as finer patterns provide better pixel-to-pixel contrast. Conversely, finer speckles may also be advantageous in situations where the incremental motions are very high and the overlap between successive frames is small. In such a case, the higher speckle density could help to improve the tracking robustness by enabling reliable matching across small surface patches. If needed and feasible, the speckle size could be reduced by lowering the imaging system magnification ratio, reducing the sampling distance or increasing the diameter of the illuminated spot.

Finally, the recorded speckle motions were scaled using the pre-determined, known illumination and sampling distances and angles. In a practical measurement situation, these parameters may not be known, so they must be measured by some means. In the uniaxial measurements, the average speckle size correlated reasonably well with the sampling distance. However, it was affected by the illumination beam intensity distribution, and it was prone to vignetting effects, particularly in small magnification configurations at low sampling distances. In the remote measurement study, the speckle sizes predicted the sampling distance very well, but the analysis relied on the carefully measured illumination spot diameter. Moreover, the illumination distance was kept fixed throughout the measurements.
In a practical application, both the illumination and sampling distances are affected by the object distance. As discussed in Chapter 4, the spot diameter changes together with the object distance when the surface is illuminated by a diverging beam. Therefore, spot size is not an ideal calibration parameter for an arbitrary measurement situation. Chapter 7 studies how the sampling distance and the relative surface angle can instead be extracted using the alternative diffraction-based calibration principle.

6.4 Conclusion

The conducted experiments revealed many important characteristics of Defocused Speckle Imaging and demonstrated the method's capability to remotely monitor multiaxial object surface motions from a large distance. Since various geometric parameters affect the appearance of defocused speckle patterns, speckle size and measurement sensitivity can be easily adjusted. This makes Defocused Speckle Imaging attractive for diverse applications, such as structural monitoring of large objects. However, successful implementation of the method requires knowing the object distance and surface orientation accurately. While speckle size scales linearly with sampling distance, it is also affected by many other factors. Therefore, an alternative calibration method is needed. This is discussed in the next chapter.

Chapter 7: Geometric Calibration Principle Based on Speckle Pattern Diffraction Analysis

This chapter experimentally investigates how speckle pattern appearance depends on the laser source spectrum and demonstrates the diffraction-based calibration principle. The calibration results were previously presented at the Society for Experimental Mechanics 2020 Annual Conference and Exposition on Experimental and Applied Mechanics [47].

7.1 Laser Characterization Procedure

The wavelength mode spacing parameters of the studied lasers were extracted using an interferometric approach based on a Michelson interferometer [62]. The basic idea of this method is to divide the laser output into two paths, vary the relative optical path length difference (OPD) between the two interferometer arms, and monitor the cyclically varying quality of the interference patterns that result when the beams are recombined. The quality of interference is characterized by the interferometric fringe visibility V [52]:

$V = \dfrac{I_{max} - I_{min}}{I_{max} + I_{min}}$   (7.1)

The path length difference corresponding to the interference cycle length is directly related to the laser wavelength mode spacing according to [63]:

$\Delta\lambda = \dfrac{\lambda^{2}}{OPD}$   (7.2)

The details of the method are described in Appendix A.

Figure 7.1 shows the experimental arrangement. The diverging laser beam was collimated with a plano-convex lens and directed to a 50/50 beam splitter cube. Half of the light was transmitted onto a fixed 1st surface mirror, while the remaining light was reflected onto another 1st surface mirror attached to a moving linear stage. The light was reflected back from each mirror, entered the beam splitter again and eventually propagated towards a lensless camera sensor. The two overlapping beams created an interference pattern that was recorded by the sensor. The fixed mirror was tilted by a fraction of a degree in the horizontal direction to create vertical interference fringes on the imaging sensor. The laser power was adjusted by rotating a linear polarizer placed in front of the laser source.

Figure 7.1 Michelson interferometer setup used to measure interference fringe visibility vs. optical path difference for determining laser longitudinal mode spacings.
The optical table hole spacing is 1”.

The relative path length difference between the two arms was varied by moving one of the mirrors in small increments using a precision linear actuator (Newport CMA-25CCCL with Newport ESP100 controller) and recording the resulting interference patterns. When the mirror was displaced by an amount Δz, the roundtrip optical path length changed by an amount:

$OPD = 2\,n_{air}\,\Delta z \approx 2\,\Delta z$   (7.3)

where n_air ≈ 1 is the refractive index of air. The interference patterns were recorded by a lensless machine vision camera sensor (AVT ProSilica GC1290, resolution 960x1280 pixels, pixel size 3.75 x 3.75 µm²).

Figure 7.2 shows an example interference fringe pattern along with the extracted intensity parameters. A horizontal band (120x1280 pxl²) of the fringe pattern was extracted at the image center. From this band, column-wise average intensities were computed, resulting in a horizontal line vector of brightness values. From this vector, the moving maximum and minimum intensities were extracted using a 500-pixel bin size. The resulting vectors were used to compute the corresponding fringe visibility values, whose average served as the single representative visibility number for the specific mirror position. This procedure was repeated for each mirror position.

Figure 7.2 Interference fringe visibility computation principle.
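A minimal Matlab sketch of this visibility computation, and of the subsequent conversion of a visibility-cycle mirror separation into a mode spacing, might look as follows; the file name is hypothetical, and the band and bin sizes are those quoted above:

% Fringe visibility for one mirror position (Figure 7.2 principle).
img  = double(imread('fringes.png'));          % recorded fringe image (grayscale)
r    = round(size(img,1)/2);                   % image center row
band = img(r-59:r+60, :);                      % 120-row horizontal band
line = mean(band, 1);                          % column-wise average intensities
Imax = movmax(line, 500);                      % moving maximum, 500-pixel bin
Imin = movmin(line, 500);                      % moving minimum, 500-pixel bin
V    = mean((Imax - Imin)./(Imax + Imin));     % average visibility, Equation (7.1)

% Mode spacing from a measured visibility-cycle mirror separation dz,
% Equations (7.2) & (7.3):
lambda = 532e-9;  dz = 2.505e-3;               % example: CrystaLaser envelope cycle
dlam   = lambda^2/(2*dz);                      % = 5.65e-11 m, i.e., 0.0565 nm

Repeating the first block over all mirror positions produces the visibility-vs.-position curves discussed next.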
7.2 Characterization Results

Three different laser sources were characterized. Table 7.1 lists the parameters of the studied lasers, and Figure 7.3 displays a comparison of the computed fringe visibilities vs. mirror position. The green DPSS laser JDS UniPhase (λ = 532nm) had consistently high fringe visibility, independent of the applied path length difference. This behavior is as expected for a high-coherence single-wavelength laser source. In contrast, the two other lasers both showed strong cyclical visibility characteristics, indicating a short coherence length and multi-mode operation. The green DPSS laser CrystaLaser (λ = 532nm) had a slowly varying visibility envelope that appeared to be modulated by a high-frequency component. Furthermore, every second peak in the envelope had a slightly higher maximum visibility. This indicates the presence of at least three distinct mode spacings. The blue Osram laser diode (λ = 450nm), on the other hand, had very distinct visibility peaks, along with systematic high-frequency modulation.

The multi-mode laser cycle lengths were determined by computing the power spectrum of each fringe visibility plot and identifying the dominating spatial frequencies f_S = 1/Δz. The resulting values are shown in Table 7.1. The green CrystaLaser had three distinct frequency peaks, corresponding to mirror separations of 2.50mm, 1.25mm and 0.24mm. The 1.25mm separation is the envelope cycle, whereas the 2.50mm separation is the distance between every second envelope peak. The smallest, 0.24mm separation corresponds to the high-frequency modulation. These mirror separations correspond to longitudinal mode spacings of 0.0565nm, 0.113nm and 0.593nm, computed according to Equations (7.2 & 7.3). The first two differ by exactly a factor of two, which is typical of laser longitudinal modes. The third, in contrast, differs by a much greater, non-integer factor. The blue Osram laser power spectrum had a dominant high frequency, along with several harmonics. The dominant frequency corresponds to a mode spacing of 0.055nm, and the harmonics were integer multiples of this value, indicating the presence of several evenly spaced longitudinal modes.

Parameter                                | JDS UniPhase 4611-050-1001 | CrystaLaser GCL-100-S          | Osram PL 450B
Laser Type                               | DPSS, single-mode          | DPSS, multi-mode               | Laser diode, multi-mode
Wavelength λ [nm]                        | 532                        | 532                            | 450
Power Spectrum Peak f_S [mm⁻¹]           | –                          | 0.3992 / 0.7984 / 4.192        | 0.5453
Mirror Separation Δz [mm]                | –                          | 2.5050 / 1.2525 / 0.2385       | 1.8339
Longitudinal Mode Spacing Δλ [nm]        | –                          | 0.05649 / 0.11298 / 0.59334    | 0.05521
Side-peak Order [-]                      | –                          | 1 / 2 / “Highest”              | 1 / 6
Side-peak Sampling Distance ΔL [mm]      | –                          | 1000 / 1000 / 600              | 1000 / 1000
Side-peak Offset ΔX/ΔL [-], Theoretical  | –                          | 7.508e-5 / 1.502e-4 / 7.886e-4 | 8.675e-5 / 5.205e-4
Side-peak Offset ΔX/ΔL [-], Measured     | –                          | 7.921e-5 / 1.573e-4 / 7.736e-4 | 8.567e-5 / 5.133e-4
Error [%]                                | –                          | 5.6 / 4.8 / -2.0               | -1.3 / -1.4

Table 7.1 Details of the studied laser sources, along with the analysis results.

Figure 7.3 Comparison of fringe visibility vs. mirror separation for different laser sources. (Panels: single-mode green laser, step size 100µm; multi-mode green laser, step size 20µm; multi-mode blue laser diode, step size 10µm. Axes: micrometer position [mm] vs. fringe visibility [-].)

7.3 Speckle Offset Measurement Principle

A set of defocused speckle patterns was captured using each characterized illumination source to study the influence of the laser source spectrum on speckle pattern appearance. Figure 7.4 shows the experimental setup. A laser source illuminated a nearly circular spot (width 12mm, height 11mm) on a rough aluminum object surface at an oblique angle θ = −45˚. The laser source distance was approximately Ls ≈ 300mm. The spot size and shape were controlled by a plastic aperture plate placed between the laser and the object. A defocused DSLR camera (Canon EOS 100D, pixel size 4.31x4.31 µm², with a Canon EF 50mm f#1.8 lens used at f#5.6 in combination with a 52mm extension tube to reduce the focus distance and to increase the imaging magnification to M=1.176) recorded a portion of the scattered speckle hemisphere at various sampling distances ΔL. The observation was at normal incidence (ψ = 0˚). Both the illumination and observation axes were parallel to the optical table. The camera exposure time was adjusted so that the average image brightness was approximately 40%. The sampling distance was changed by moving the camera along an optical rail. The sampled distances ranged from ΔL = −20mm (far-focus, focal plane behind the object) to ΔL = 1000mm (near-focus, focal plane before the object) in 20mm increments.

Figure 7.4 Measurement setup used for studying speckle pattern wavelength dependency. The displayed configuration is for the blue Osram laser. A defocused DSLR camera captures a portion of the laser light scattered from the rough surface of a ground aluminum plate. The optical table hole spacing is 1”.

The recorded speckle pattern RGB color images were processed in Matlab in the following way: only the color channel corresponding to the laser wavelength was extracted, and a small region was cropped at the image center. The cropped single-color image was analyzed by computing its autocorrelation using the ‘normxcorr2’ function. The autocorrelation template was 401x401 pxl² and the search window 701x701 pxl², so that the template could be shifted by 150 pixels in all directions over the search window while maintaining full area overlap.
The resulting autocorrelation map had a size of 301x301 pxl². Example speckle patterns and the corresponding autocorrelation maps are shown in Figure 7.5. The single-mode green laser generated a speckle pattern with distinct black areas between speckles, whereas the multi-mode green laser, and particularly the blue laser diode, produced speckle patterns with much lower contrast. This is a good demonstration of why speckle imaging requires an illumination source with a narrow spectrum; speckles could not be observed under white-light illumination, as the wavelength continuum would produce a continuum of shifted speckle patterns whose intensities would average out.

All autocorrelation maps show a high-intensity central self-correlation peak, and the multi-mode lasers also have horizontally offset side-peaks arranged symmetrically about the central peak. The separation between a side-peak and the central peak corresponds to the offset between the partially overlapping duplicated speckle patterns. Since the illumination and observation directions were arranged in the same horizontal plane, any duplicated speckle patterns were offset in that same plane. Under such controlled geometry it is therefore sufficient to extract only the horizontal midline of the autocorrelation plot, as shown in the bottom row of Figure 7.5. To assess the dependence of the speckle pattern offset on the sampling distance, the autocorrelation line plots were extracted for each camera position ΔL and fused together to form a 2D matrix whose horizontal axis is the sampling distance and whose vertical axis is the autocorrelation pixel shift. Finally, this map was upscaled by a factor of 10 along the vertical axis to assess the side-peak offsets at sub-pixel accuracy.

Figure 7.5 Comparison of speckle patterns generated by different laser sources. (Top) Examples of the cropped speckle patterns captured at a sampling distance ΔL = 600mm. (Center) The corresponding 2D autocorrelation maps. Brightness ranges from zero correlation (black) to a correlation coefficient of 0.4 (white); any correlation value higher than 0.4 is shown in white. (Bottom) Horizontal mid-line plots extracted from the 2D autocorrelation (AC) maps, plotted as AC offset [pxl] vs. correlation coefficient [-]. SM: single-mode, MM: multi-mode. Sensor pixel size is 4.31µm. The scale bars (1.0mm) indicate the physical sizes of the cropped captured speckle patterns and the AC maps at the sampling plane.
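The core of this processing pipeline can be sketched in a few lines of Matlab; 'g' is assumed to hold one cropped single-color speckle image as a double array, and the template and window sizes are those stated above:

% Autocorrelation of a speckle pattern and its horizontal midline.
c   = floor(size(g)/2);                            % image center
tpl = g(c(1)-200:c(1)+200, c(2)-200:c(2)+200);     % 401x401 pxl template
win = g(c(1)-350:c(1)+350, c(2)-350:c(2)+350);     % 701x701 pxl search window
ac  = normxcorr2(tpl, win);                        % normalized cross-correlation
ac  = ac(401:701, 401:701);                        % full-overlap part: 301x301 map
mid = ac((end+1)/2, :);                            % horizontal midline
% 10x sub-pixel upscaling of the midline by spline interpolation:
midUp = interp1(1:numel(mid), mid, 1:0.1:numel(mid), 'spline');

Repeating this for every camera position ΔL and stacking the midlines column by column produces the fused 2D maps of Figure 7.6.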
7.4 Speckle Offset Measurement Results

Figure 7.6 shows a comparison of the fused horizontal midline autocorrelation plots as a function of camera defocus distance for the three characterized laser sources. The single-mode green laser (JDS) had no side-peaks, indicating that the laser had only one active wavelength component. The high-correlation area of the central peak widened as a function of sampling distance, indicating that the average speckle size increased linearly with distance, as expected from Equation (4.9). The multi-mode green laser (CrystaLaser), however, had four distinct side-peaks whose separation increased linearly with defocus distance, indicating that it had at least five active wavelength components. Two of the side-peaks were evenly spaced at low slopes, whereas the two others had significantly steeper slopes (higher offsets). Finally, the blue laser diode (Osram) had multiple evenly spaced side-peaks, which confirms the multi-mode operation typical of a laser diode, characterized by even mode spacing.

The lower part of Table 7.1 lists the side-peak spacings per unit sampling distance corresponding to the observed autocorrelation pixel offsets (one image pixel corresponds to pxl/M = 3.662µm at the sampling plane). The low-slope modes of the CrystaLaser were sampled at the highest distance, ΔL = 1000mm, whereas the mode with the steepest slope was assessed at ΔL = 600mm, as indicated in Figure 7.6. In addition to these three side-peaks, there was a fourth side-peak that also had a steep slope. This peak, however, was by far the weakest, and it was not observed during the laser characterization, indicating that the related optical mode carried only a small fraction of the total laser power. For the Osram laser diode, the first and the sixth side-peak offsets were measured, both sampled at the highest measured distance ΔL = 1000mm.

Figure 7.6 Horizontal midline AC plots vs. camera defocus distance. (Top) Single-mode green laser JDS UniPhase. (Center) Multi-mode green laser CrystaLaser. (Bottom) Multi-mode blue laser diode Osram. Plot color indicates the correlation coefficients. The red markers show the locations of selected side-peaks determined from upscaled autocorrelation plots [47]. Sensor pixel size is 4.31µm.

Table 7.1 also lists the theoretical expectations based on the characterized laser mode spacings, calculated according to Equation (4.39). For Osram, both measured side-peak offsets were very close to the theoretical expectations. For CrystaLaser, on the other hand, the relative errors associated with the low-slope modes were slightly higher than that of the steep-slope mode. The errors were most likely caused by uncertainty in the estimated illumination angle. The lasers were aligned at a 45˚ angle with the help of the optical table hole grid, but some deviation is possible, particularly at such a small illumination distance: at 45˚, an error of 1˚ in the actual illumination angle causes an almost 2% deviation in the sine of the angle. Considering the experimental uncertainties, the overall measurement performance is therefore very good. This study thus demonstrates the dependence of the side-peak offset on the laser spectrum. The results also verify the anticipated linear relationship between the side-peak spacing and the sampling distance, which forms the basis for the proposed geometric calibration principle.

7.5 Determining Sampling Distance and Relative Surface Angle

A further experiment was conducted to study the feasibility of extracting the sampling distance and the relative surface angle from a pair of defocused speckle patterns recorded at different sampling distances. The Osram laser diode was chosen for the experiment. The laser was moved further away from the object (Ls ≈ 1314mm) and placed close to the camera optical axis. This was done to simulate a practical measurement instrument in which the illumination and observation share the same instrument housing. The resulting illumination angle was θ = −4.2˚, while the observation remained at normal incidence (ψ = 0˚). The illumination spot was circular (d_spot ≈ 35mm). Otherwise, the setup was similar to the one shown in Figure 7.4. The two chosen sampling distances were ΔL₁ = 500mm and ΔL₂ = 1000mm. The sampling distance was changed by moving the camera on the optical rail.
Four different relative surface angles Δψ were studied: Δψ = 0˚, 15˚, 30˚, 45˚. The surface angle was changed by tilting the object, which correspondingly changed both the illumination and observation angles. For each configuration, a pair of defocused speckle patterns was recorded, and the related side-peak separations were extracted using the same autocorrelation approach as above. Table 7.2 shows the corresponding results. The first-order side-peaks were used for the computations. At the lowest, 0˚ relative surface angle, the different-order side-peaks partially overlapped with the central self-correlation peak and with one another, so the side-peak separations could not be measured. However, the calibration was successful for the three non-zero surface angles. The resulting sampling distances were within 2% of the actual values, and the corresponding surface angles within 0.7˚ of the actual angles. Considering the experimental uncertainties related to ruler-based angle determination, the obtained results can be considered very good.

Relative Surface Angle Δψ [˚]            | 0    | 15     | 30     | 45
Illumination Angle θ [˚]                 | -4.2 | 10.8   | 25.8   | 41.8
Observation Angle ψ [˚]                  | 0    | 15     | 30     | 45
Side-peak Offset DX at ΔL₁ [µm]          | –    | 26.70  | 65.82  | 116.60
Side-peak Offset DX at ΔL₂ [µm]          | –    | 53.44  | 130.57 | 234.89
Estimated Sampling Distance ΔL₁ [mm]     | –    | 499.49 | 508.22 | 492.89
Relative Error [%]                       | –    | -0.2   | 1.7    | -1.5
Estimated Relative Surface Angle Δψ [˚]  | –    | 14.31  | 29.49  | 45.07
Error [˚]                                | –    | -0.69  | -0.51  | 0.07

Table 7.2 Geometric calibration test results.
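To make the distance estimation concrete: because the side-peak offset grows linearly with sampling distance and extrapolates to zero at the object surface, two offsets measured with the same camera at two focal-plane positions a known distance apart locate the surface by similar triangles. Using the Δψ = 30˚ data of Table 7.2,

$\Delta L_1 = \dfrac{\Delta X_1}{\Delta X_2 - \Delta X_1}\,(\Delta L_2 - \Delta L_1) = \dfrac{65.82}{130.57 - 65.82}\times 500\,\mathrm{mm} \approx 508\,\mathrm{mm}$,

which reproduces the 508.22mm estimate listed in the table (the magnification scaling of Equation (8.2) cancels here because both patterns come from the same camera).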
7.6 Discussion

The successful demonstration of the diffraction-based geometric calibration principle is an important milestone towards the goal of applying Defocused Speckle Imaging to remote surface motion measurements. The final remaining step is to integrate the calibration principle into the actual motion measurements, so that the observed speckle motions can be scaled correctly.

While the demonstrated surface angle measurements were strictly one-dimensional, the same approach can be extended to arbitrary surface orientations. The only required extra step is to locate the autocorrelation side-peaks in two dimensions and extract the corresponding horizontal and vertical offsets [15]. The side-peak overlap issue at the lowest relative surface angle could be alleviated by using a larger spot size to reduce the speckle size. Moreover, the overlap issue is less severe in remote measurements at larger sampling distances, where the side-peak offsets are greater. This is evident in Figure 7.6, where the laser diode side-peaks blend together at low sampling distances but are easily separable at larger ones. Alternatively, the side-peak overlap could be avoided altogether by tilting the illumination beam further from the observation direction or by arranging the illumination in the normal yz-plane [15]. The yz-illumination would shift the side-peaks in the vertical direction, allowing measurements even at normal incidence.

In contrast to the previous demonstration by Gibson et al. [15], the lasers studied here had more than two wavelength modes. The existence of several side-peaks improves the accuracy of the side-peak analysis. At a low sampling distance, a higher-order side-peak with a larger speckle offset magnitude can be selected in order to utilize the available sensor area more effectively and to reduce the relative error in the measured side-peak offset. Alternatively, all detected side-peaks could be measured and their offsets used for a best-fit type estimation.

If the sampling distance and the surface orientation are known, the laser mode spacings can be extracted from a single defocused speckle pattern through the 2D autocorrelation analysis. Speckle pattern diffraction analysis therefore provides similar information about the laser source spectrum as the interferometric approach based on the Michelson interferometer. The advantage of the speckle-based method is that it requires no moving parts, and all information can be extracted from a single snapshot. It is therefore well suited for studying dynamic laser behavior, such as mode-hopping, in situations where a commercial optical spectrum analyzer is not available. Dedicated spectrum analyzers, by contrast, are based on scanning, so a measurement with increased wavelength resolution takes a longer time. Furthermore, unlike speckle pattern analysis, traditional spectrum analyzers require coupling the studied light source into a single-mode fiber, an extra step that may take considerable time. The side-peak characterization could provide a useful tool for, e.g., tuning laser diode current and temperature parameters to hit the “sweet spot” where the laser operates in a single longitudinal mode. This may allow the use of simple and affordable laser diodes in applications that traditionally require more costly single-frequency lasers. Moreover, the side-peak analysis can be performed with very simple instrumentation, as it requires only a scattering object and an imaging sensor. It thus provides an affordable tool for basic laser characterization for hobbyists and smaller non-optics laboratories that work with laser-based applications. For such a simple method, speckle pattern autocorrelation analysis performs very well: the experimental demonstration shows that mode separations below 0.06nm can easily be detected at sampling distances below 1m. The method's wavelength resolution could be further improved by increasing the illumination angle and the sampling distance to maximize the resulting speckle offset. In addition, the speckle size could be reduced by illuminating a larger surface area in order to resolve overlapping speckle patterns that have only a small relative shift. With such adjustments, speckle-based spectrum analysis could rival or even surpass some of the existing methods.

7.7 Conclusion

The presented study illustrated the relationship between the laser source spectrum and the speckle pattern internal structure and showed how speckle pattern diffraction analysis can be used to extract the important geometric calibration parameters. Even a regular multi-mode laser diode with several wavelength components can be used, as long as the laser mode spacing is appropriate. Since the calibration can use the same images as the actual motion analysis, there is potential to perform self-calibrated Defocused Speckle Imaging measurements at remote distances with very simple instrumentation and no additional sensors.

Chapter 8: Self-calibrated Remote Surface Motion Measurements

This chapter presents a series of Defocused Speckle Imaging experiments conducted to demonstrate the method's potential to measure surface motions with high sensitivity and accuracy at extended measurement distances. The object was located at a significant distance (>30 meters) from the measurement instrumentation, and the recorded speckle motions were scaled using the proposed self-calibration procedure based on speckle pattern diffraction analysis.
A further novel aspect was that the test object was coated with retroreflective tape to maximize the scattered light intensity in the direction of the measurement instrumentation. This feature also raises the interesting possibility of performing Speckle Imaging motion, ranging and angle measurements on engineered retroreflective surfaces and markers that are abundant in the built environment, particularly within traffic infrastructure.

8.1 Experimental Arrangement

The experimental setup was closely similar to the one used for the complex object motion study presented in Chapter 6. A folded path geometry with a very sharp V-shape was used to maximize the effective object distance in a limited laboratory space. This time, however, the full laboratory length was utilized to double the physical distance from the previous study. Furthermore, the illumination beam now travelled across the whole laboratory, in contrast to the previous measurements where the laser distances were significantly smaller. Figure 8.1 shows a schematic of the updated setup, and Figures 8.2-8.6 include photos of the actual test configuration.

Figure 8.1 A schematic layout of the experimental setup. The angle magnitudes are exaggerated for illustration purposes. S: laser source, Mr: mirror, O: object, A: actuator, FP: focal plane, CAM: camera, Ls: illumination distance, ΔL: sampling distance.

Figure 8.2 The overall view of the experimental setup. The two mirrors (not shown) were located more than 15 meters to the right of the cameras. The optical table hole spacing is 1”.

Figure 8.3 A close-up view of the laser source, the cameras and the object-actuator assembly.

Figure 8.4 The object-actuator assembly. The object was mounted on a linear rail that was displaced towards the left by a stepper motor (at the bottom right). The linear rail was mounted onto an aluminum rod that was pivoted about the object surface axis. The object was rotated clockwise by pushing the rod with a precision linear actuator (at the top left). The rubber band helped to maintain contact between the actuator and the rod while resetting the motion. The second stepper motor (at the top right) was not used in this study.

Figure 8.5 View from the cameras towards the 1st surface mirrors that folded the setup geometry.

Figure 8.6 A close-up view of the mirrors. The illumination laser beam was reflected from the left mirror, while the scattered light was imaged via the right mirror. Due to surface wear, the laser folding mirror scattered some light under high-intensity illumination.

The illumination laser source and the two cameras, i.e., the sensor assembly, were placed side-by-side on an optical table close to the laboratory back wall. The laser illumination axis and the optical axes of the cameras were close to parallel to each other, and all axes were aligned to be horizontal, parallel to the optical table surface (the angles in the Figure 8.1 schematic have been exaggerated for pictorial clarity). The sensor assembly was aimed towards the far end of the laboratory, where two first surface mirrors were placed side by side on top of a fixed cart. The first, left mirror (Mr1) reflected the laser light back towards the optical table and illuminated the surface of a test object located next to the sensor assembly. The test object was a flat rectangular aluminum plate (width 55mm, height 50mm) that was fully coated with retroreflective tape (3M 03456C Silver ScotchliteTM).
The tape consisted of retroreflective beads glued onto a flat substrate. Each bead created an imperfect retroreflection in which light was reflected into a narrow cone centered about the illumination direction. Because of the random bead arrangement and the imperfect retroreflection, the light reflected from the individual beads overlapped. The beaded surface thus acted similarly to a conventional laser-illuminated rough surface, but with most of the scattered light intensity concentrated within a narrow cone centered about the illumination direction. This can be described as “directional scattering”. Consequently, the observed speckle patterns behaved and looked just like those formed by a conventional rough surface but had much higher intensities. This enabled the use of moderate camera exposure times while maintaining a safe laser power level in the laboratory. In comparison to an MDF surface, the required camera exposure times were a factor of 100 smaller, a remarkable improvement.

The second, right mirror (Mr2) received a portion of the scattered laser light and reflected it towards the two cameras that captured and recorded the resulting speckle patterns. Here, one camera (CAM1, AVT ProSilica GC1280, resolution 1024x1280 pixels, pixel size 6.7 x 6.7 µm²) was near-focused, so that its focal plane was located close to the camera body and far away from the object surface. The other camera (CAM2, AVT ProSilica GC1290, resolution 960x1280 pixels, pixel size 3.75 x 3.75 µm²) was focused further away, so that its focal plane was located close to mirror Mr2, approximately midway between the camera body and the object surface. It is important to remember that having two distinct, well separated sampling planes was crucial for: 1) separating the object surface tilt motions from the linear displacements, and 2) accurately determining the object surface distance and surface angle relative to the sensor assembly for the geometric calibration of the measurement. CAM1 was fitted with a conventional telephoto lens (Navitar f=75mm, f#=1.3, C-mount, with a 10mm extension ring) that was near-focused to maximize the sampling distance, i.e., the distance between the object surface and the camera focal plane. CAM2 was equipped with a long focal length telephoto lens (Opteka f=500mm, f#=6.3 mirror lens with a ring-shaped aperture, used with a 2x teleconverter) to enable focusing far away from the camera while simultaneously maintaining a sufficiently high sampling magnification.

The illumination source was a green DPSS laser (CrystaLaser GCL-100-S, λ=532nm) operating in multi-mode. The laser mode characteristics were analyzed in Chapter 7. The low-power green laser was chosen for these remote measurements over the blue laser diode for safety reasons, as human vision is much more sensitive to green than to blue wavelengths. For the majority of the experiments, the laser was used without any additional beam shaping optics. In this configuration, the laser beam waist was located approximately 150 mm in front of the laser housing, and the beam diverged after the waist (divergence angle 2mrad). By the object surface, the beam had diverged enough to illuminate the whole object, so that the entire retroreflective bead-covered surface contributed to the creation of the speckle pattern.

The object was mounted onto an actuator assembly that was slightly modified from the one used in Chapter 6 in order to study smaller tilt increments.
The object was fixed onto a stepper motor linear actuator A1 (Nippon Pulse NPM PF35-24C1, actuation step increment 1/30 mm per step), and the linear actuator was fixed onto one end of an aluminum rod. The rod was pivoted about the object end by pushing the rod's other end with a servo-controlled precision linear actuator A2 (Newport CMA-25CCCL, one-directional repeatability 1µm, minimum incremental motion 0.2µm) at a 93mm distance from the rotation axis. The object surface was located on the rod rotation axis. Both actuators were controlled via Matlab using a custom script. The stepper motor was interfaced through a combination of an Arduino Uno and an Adafruit Motor Shield (v2.3), while the servo-controlled actuator was connected to a Newport ESP100 driver. The stepper motor displacement made the object surface move in-plane by the stepped distance dx, as indicated in Figure 8.1. The movement of the servo-controlled actuator, on the other hand, pushed the rod, which made the object surface undergo a very fine, low-magnitude out-of-plane tilt rotation ωy due to the long moment arm of the aluminum rod.

At the time of the experiments, CAM1 had a minor data processing issue that altered the brightness of every even-numbered image column, as shown in Figure 8.7 (left). This was corrected by applying a 4-pixel moving average in the horizontal direction (Figure 8.7 (right)).

Figure 8.7 Illustration of the pixel correction used for CAM1. The data processing errors present in the original image were corrected by applying a 4-pixel moving average in the horizontal direction.
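In Matlab, this correction is essentially a one-liner; the file name below is hypothetical:

imRaw  = imread('cam1_frame.png');        % raw CAM1 frame with the column artifact
imCorr = movmean(double(imRaw), 4, 2);    % 4-pixel moving average along dim 2 (horizontal)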
8.2 Experimental Parameters

Tables 8.1-8.2 list the important geometric parameters. The illumination distance was measured from the object surface to the beam folding mirror and further to the location of the laser beam waist. The physical object distance was measured from the object surface to mirror Mr2 and further to the nearest front edge of the CAM1 body. The reference illumination angle (at the tilt actuator zero position) was determined by placing an aperture block in front of the beam, so that only a narrow portion of the beam illuminated the object surface. A planar first surface mirror was then temporarily attached onto the object surface, so that the narrow beam was specularly reflected from the mirror. When the illumination was at an oblique angle, the reflected beam deviated from the illumination direction. The deviation angle was determined trigonometrically by measuring the lateral offset between the incident and reflected beams at a known distance from the surface and applying an arctangent function. The deviation angle of the reflected beam was double the illumination angle, as the reflection and incidence angles are always symmetric with respect to the surface normal.

Laser Model              | CrystaLaser GCL-100-S
Output Power [mW]        | 22
Wavelength λ [nm]        | 532
Mode Separation Δλ [nm]  | 0.05649 (1st) / 0.11298 (2nd)
Source Distance Ls [mm]  | 30590 (remote waist) / 8462 (adjusted waist)
Illumination Angle θ [˚] | 3.48 (small object tilt angle) / 8.39 (large object tilt angle)

Table 8.1 Illumination hardware parameters.

Parameter                                        | CAM1                 | CAM2
Camera Model                                     | AVT ProSilica GC1280 | AVT ProSilica GC1290
Lens                                             | Navitar f=75mm f#1.3 | Opteka f=500mm f#6.3 Mirror Lens
Additional Optics                                | 10mm extension ring  | 2x teleconverter
Focus                                            | Near-focused         | Focused close to the folding mirror Mr2
Exposure Time [ms]                               | 35-55                | 0.5-1.5
Resolution HxW [pxl²]                            | 1024x1280            | 960x1280
Pixel Size [µm]                                  | 6.7                  | 3.75
M [-]                                            | 0.2195               | 0.0539
Sampling Distance ΔL [mm]                        | 30234                | 15799
Sampling Distance Separation ΔL₁₂ [mm]           | 14435 (common)
Physical Object Distance [mm]                    | 30705 (common)
Imaging Angle ψ [˚]                              | 2.45 (small object tilt angle) / 7.36 (large object tilt angle)
Relative Tilt from Illumination Direction Δθ [˚] | 1.03

Table 8.2 Imaging hardware parameters.

Two separate mirrors were used to direct the illuminating and scattered light in order to offer greater control of the setup alignment and to prevent the laser light scattered from the worn mirror surface from reaching the cameras (Figure 8.6). As a consequence, however, the imaging angle deviated from the illumination angle by Δθ = θ − ψ. The deviation angle Δθ could be determined using simple trigonometry by dividing the distance between the mirrors, ΔMr = 280mm, by the object-mirror distance (≈ ΔL₂):

$\Delta\theta = \arctan\!\left(\dfrac{\Delta Mr}{\Delta L_2}\right)$   (8.1)

For each camera, the focal plane location was determined by placing a ruler in front of the camera and moving the ruler to the location where its image sharpness was maximized. The sampling distance is the separation from the object surface to the speckle folding mirror and further from the mirror to the focal plane. The camera in-focus magnification ratio M, in turn, was determined by taking an image of the focused ruler, counting the number of pixels that the ruler covered in the image, multiplying the count by the known camera pixel size, and dividing the result by the physical ruler length.

Table 8.3 lists the parameters of the studied applied motions. Since the tilt rotation sensitivity scales linearly with the sampling distance, while the in-plane displacement sensitivity depends on the ratio between the sampling and illumination distances, remote measurements are mostly sensitive to tilts, unless the laser is focused close to the object using additional optics. Therefore, the chosen applied displacement increments (0.40mm) had far greater magnitudes than the applied tilts (0.054mrad / 0.0031˚). In the specific actuator design, the chosen object tilt increment was achieved by displacing the servo-controlled actuator A2 by only 5 microns at a 93mm distance from the rotation axis. Frictional effects were minimized by smoothing the metal contact surfaces in the actuator assembly with very fine sandpaper. The actuator was run several times before the actual experiments to ensure seamless motion.

Motion Type          | Displacement dx | Tilt ωy             | Combined dx + ωy
Increment Size       | 0.40mm          | 0.054mrad (0.0031˚) | 0.40mm + 0.054mrad
Number of Increments | 15 (all motion types)

Table 8.3 Motion parameters for the main analysis.

Two different object orientations were investigated. The relative object surface angle was adjusted from the reference position by displacing the servo-controlled actuator A2 to introduce an initial surface angle offset before initiating the actual motion sequence. This changed both the illumination and imaging angles by the same amount, similar to Section 7.5.
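As a numerical check of Equation (8.1) with the Table 8.2 values: Δθ = arctan(280mm / 15799mm) ≈ 1.02˚, consistent with the listed relative tilt of 1.03˚; the small difference presumably reflects rounding of the measured distances.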
8.3 Laser Beam Waist Adjustment Procedure

Monitoring fine displacements from very remote distances may be challenging because of the much higher relative tilt sensitivity. While the displacement sensitivity could be increased by reducing the illumination distance, placing the laser source physically closer to the object may not be practical, or even possible, in some cases. However, there is a virtual way to reduce the source distance by careful focus adjustment of the laser beam. If the laser beam is first diverged and subsequently focused so that the beam focal point is located between the physical laser source and the object surface, i.e., the beam goes through focus, then the focal point becomes the effective source location. This was done in practice by first diverging the laser beam with a lens objective (f=2.8-12mm, f#1.4) and then converging the beam with a plano-convex focusing lens (f=400mm) placed at a distance of 355 mm from the laser housing. The laser focal point location could be adjusted by moving the focusing lens. As the waist location was rather sensitive to the lens movement and very challenging to estimate accurately through direct visual inspection alone, an alternative method was used for setting the desired waist location [64]. A diffuse plastic film was placed in the beam path at the desired waist location (Figure 8.8, left). The laser light transmitted through the diffuser created a speckle pattern that was projected onto a screen placed on the optical table (Figure 8.8, middle). The distance between the diffuser and the screen was approximately 7660mm. Since a laser speckle pattern formed by transmission through a thin diffuser behaves similarly to a reflection speckle pattern [9], the average size of the speckles seen on the screen scaled inversely with the diameter of the illuminated surface area, according to Equation (4.9). Consequently, the laser beam could be focused on the diffuser by moving the focusing lens to the position where the speckle size on the adjacent screen was maximized (Figure 8.8, right). With the shifted waist, the illumination spot diameter on the object was approximately 5mm. The effectiveness of the waist adjustment procedure was investigated by performing a set of motion measurements with vs. without waist adjustment and comparing the resulting speckle motion magnitudes.

Figure 8.8 Waist adjustment principle based on maximizing the average speckle size in the speckle pattern formed by a laser-illuminated diffuser. (Left) Laser light reflected back from the mirror illuminated a circular spot on the diffuser. (Middle) Speckles were observed on a screen placed 7660mm away from the diffuser. (Right) When the laser waist coincided with the diffuser surface, the spot diameter was minimized, which maximized the size of the projected speckles.

8.4 Motion Measurement Procedure

Each measurement consisted of 15 equal motion increments. The studied motions included in-plane displacement only, out-of-plane tilt only, and combined motion with both displacement and tilt movements. Each camera captured the resulting speckle pattern between each motion increment. Figures 8.9-8.11 show example speckle patterns captured by CAM1 and CAM2 in the three different geometric configurations studied. The scale bars indicate the physical extent of the captured speckle hemisphere portions at the camera sampling planes. For the combined motion increment, the object was first displaced and then tilted before taking the incremental image.
The incremental speckle motions were computed from the cropped speckle pattern images (marked by the red squares in Figures 8.9-8.11) using a custom Matlab algorithm based on cross-correlation and DFT [1,60]. The algorithm estimated the horizontal and vertical rigid body shifts of the speckle pattern at 1/100 sub-pixel accuracy. The incremental motions were summed (integrated) to keep track of the evolution of the total speckle displacements. Each motion experiment was repeated three times in order to monitor measurement repeatability.
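A simplified stand-in for this tracking step is sketched below; it finds the integer shift from an FFT-based cross-correlation and refines it by spline interpolation of the correlation peak neighbourhood. The thesis algorithm [1,60] instead refines the peak with an upsampled DFT to reach the 1/100-pixel accuracy quoted above, so this sketch only illustrates the overall structure:

% Estimate the rigid-body shift of speckle pattern 'cur' relative to 'ref'
% (equal-sized double arrays; peak assumed to lie away from the map edges).
function [dy, dx] = trackShift(ref, cur)
    xc = fftshift(real(ifft2(fft2(ref).*conj(fft2(cur)))));  % cross-correlation map
    [~, k]   = max(xc(:));
    [py, px] = ind2sub(size(xc), k);                 % integer-pixel peak location
    [Y,  X]  = ndgrid(py-2:py+2,      px-2:px+2);    % 5x5 peak neighbourhood
    [Yq, Xq] = ndgrid(py-2:0.01:py+2, px-2:0.01:px+2);
    patch    = interpn(Y, X, xc(py-2:py+2, px-2:px+2), Yq, Xq, 'spline');
    [~, kq]  = max(patch(:));
    [qy, qx] = ind2sub(size(patch), kq);             % refined peak on a 0.01-pxl grid
    ctr = floor(size(xc)/2) + 1;                     % zero-shift position
    dy  = (py - ctr(1)) - 2 + (qy - 1)*0.01;         % sub-pixel vertical shift
    dx  = (px - ctr(2)) - 2 + (qx - 1)*0.01;         % sub-pixel horizontal shift
end

The per-increment shifts returned by such a function are then simply accumulated, e.g., DX = cumsum(dxIncrements), to give the total speckle displacement curves presented next.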
8.5 Speckle Motion Tracking Results

Figure 8.12 displays the incrementally summed speckle motions DX for both cameras for the case of the small object surface angle, and Figure 8.13 shows similar plots for the case of the large object angle. Figure 8.14 shows the results for the case of the large object angle measured with the laser waist position adjustment. For each camera, the in-plane displacement data set had lower magnitudes than the tilt motion data set. For each of the three motion types, the three repeated measurements are indicated by red, green and blue markers and lines, while the theoretical expectations from Equations (3.14 & 3.18) are displayed by black lines.

Figure 8.9 Speckle pattern images and cropped ROIs, small surface angle. Scale bars: 30.0mm.

Figure 8.10 Speckle pattern images and cropped ROIs, large surface angle. Scale bars: 30.0mm.

Figure 8.11 Speckle pattern images and cropped ROIs, large surface angle and shifted laser waist position. Scale bars: 30.0mm.

Figure 8.12 Motion tracking accuracy and repeatability, small surface angle. Black lines: theoretical expectations; color markers: experimental results.

Figure 8.13 Motion tracking accuracy and repeatability, large surface angle. Black lines: theoretical expectations; color markers: experimental results.

Figure 8.14 Motion tracking accuracy and repeatability, large surface angle and shifted laser waist position. Black lines: theoretical expectations; color markers: experimental results.

CAM1 and CAM2 had different speckle motion magnitudes due to their different sampling distances and imaging magnification ratios. For easier comparison between the cameras, the plot windows were scaled with respect to the maximum expected combined speckle motions. Presented this way, the plots reveal that the observed total speckle motions in CAM1 had a greater relative contribution from the tilt signal than those in CAM2. Conversely, the speckle motions due to surface in-plane displacement had a greater relative contribution in CAM2 than in CAM1. This agrees with theory, as CAM2 had a lower sampling distance than CAM1. Changing the object tilt angle by a small amount did not significantly change the motion sensitivities. On the other hand, reducing the effective illumination distance by adjusting the laser beam waist location greatly increased the speckle motion sensitivity to surface in-plane displacements but had no effect on the tilt sensitivity, as expected from theory. This reduced the sensitivity imbalance between the in-plane displacements and tilts.

8.6 Speckle Motion Measurement Accuracy

The measurement repeatability was very high throughout the experiments. The incremental motions in CAM1 and CAM2 were very well correlated and followed similar trends. The applied object tilt led to a very linear speckle motion response, following the theoretical expectations closely. The speckle motions caused by object displacement, however, contained some systematic nonlinearities, although the motions still followed the expected trends well. The nonlinearities seemed slightly larger in CAM1 than in CAM2, and the waist adjustment reduced these effects.

Since the measurement equipment was carefully aligned to be parallel with the optical table surface, both of the applied motions were expected to introduce purely horizontal speckle motions. However, significant y-directional speckle shifts DY were observed during the displacement tests. Figure 8.15 shows a comparison of the tracked DX and DY speckle motion magnitudes for the large object angle configuration without and with the waist adjustment applied. CAM1 had higher relative vertical speckle shifts than CAM2. Interestingly, the laser waist adjustment had no effect on the DY magnitudes, while the DX magnitudes increased significantly. This strongly indicates that both the DY speckle motions and the nonlinearities in DX likely resulted from unintended surface tilts caused by a slightly curved motion path of the displacement actuator. Because of the much greater relative tilt sensitivity, even a small tilt would cause large erroneous speckle motion signals. For example, the 2mm DY shift recorded by CAM1 in Figure 8.15 has approximately the same magnitude as that caused by three purposely applied tilt increments in Figure 8.12, i.e., 0.16mrad or 0.0092˚. Such minute fluctuations are very plausible with the chosen actuator design. By contrast, the DY magnitudes remained very low during the tilt measurements (Figure 8.16), indicating good alignment and consistent performance of the tilt actuator assembly.

In order to further strengthen the above reasoning, a quick additional experiment was performed in which the flat object was replaced by a vertical cylinder (diameter 60mm) covered with retroreflective tape. When the cylindrical object was displaced in-plane, the resulting speckle motions had magnitudes and nonlinearities matching those of the flat surface. Such similarity indicates that the measurements are robust against small variations in surface curvature and do not necessarily require a perfectly flat surface. It further supports the conclusion that the observed nonlinearities were indeed caused by small unintended tilts of the displacement actuator.

Figure 8.15 DX- vs. DY-displacement magnitudes for applied dx-displacements, large surface angle. (Top) Without waist adjustment. (Bottom) With waist adjustment. Black lines: theoretical expectations; color markers: experimental results.

Figure 8.16 DX- vs. DY-displacements for applied ωy-displacements, large surface angle. (Top) Without waist adjustment. (Bottom) With waist adjustment. Black lines: theoretical expectations; color markers: experimental results.

Table 8.4 lists the numerical values of the expected and observed displacements, and the corresponding relative errors. The observed values show the average (AVG) of the three repeated measurements, along with the related standard deviation (SD). Despite the different sensitivities, CAM1 and CAM2 had comparable accuracies. The maximum relative motion error was 6%. The overall measurement performance is thus very good considering the observed nonlinearities of the displacement actuator, along with the general experimental uncertainties.
Looking at Table 8.4 and Figures 8.12-8.14, the observed speckle motions due to in-plane displacements were marginally lower than the theoretical expectations, whereas surface tilts generated speckle motions that were slightly higher than expected. It is thus possible that the surface motions applied by the actuators deviated by a small amount from the values anticipated based on the actuator specifications and the measured actuator A2 moment arm length. Therefore, the actual speckle motion uncertainties may in fact be lower than the errors listed in Table 8.4.

Orientation / Applied Motion      | DX₁ exp. [mm] | DX₂ exp. [mm] | DX₁ meas. (AVG±SD) [mm] | DX₂ meas. (AVG±SD) [mm] | DX₁ err. [%] | DX₂ err. [%]
Small tilt, remote waist / dx     | 2.592  | 0.490 | 2.587±0.070  | 0.488±0.010 | -0.2 | -0.4
Small tilt, remote waist / ωy     | 10.608 | 1.373 | 10.799±0.150 | 1.391±0.022 | 1.8  | 1.4
Small tilt, remote waist / dx+ωy  | 13.201 | 1.862 | 13.983±0.114 | 1.951±0.016 | 6.0  | 4.8
Large tilt, remote waist / dx     | 2.569  | 0.486 | 2.520±0.012  | 0.481±0.001 | -2.0 | -1.0
Large tilt, remote waist / ωy     | 10.541 | 1.364 | 10.944±0.270 | 1.411±0.033 | 3.9  | 3.5
Large tilt, remote waist / dx+ωy  | 13.110 | 1.849 | 13.682±0.084 | 1.915±0.011 | 4.4  | 3.6
Large tilt, shifted waist / dx    | 5.901  | 0.917 | 5.645±0.022  | 0.936±0.009 | -4.4 | 2.1
Large tilt, shifted waist / ωy    | 10.541 | 1.364 | 10.754±0.106 | 1.412±0.014 | 2.1  | 3.6
Large tilt, shifted waist / dx+ωy | 16.442 | 2.280 | 16.847±0.201 | 2.262±0.108 | 2.5  | -0.8

Table 8.4 Speckle motion tracking accuracy and repeatability.

8.7 Diffraction Analysis Procedure

2D autocorrelation maps were calculated for each incremental speckle pattern image in each camera using an approach similar to that of Chapter 7. The CAM1 autocorrelation template was 551x551 pxl² and the search window 751x751 pxl², yielding a 201x201 pxl² autocorrelation map. For CAM2, the corresponding values were 451x451 pxl², 501x501 pxl² and 51x51 pxl², respectively. Figure 8.17 shows samples of the resulting autocorrelation images for CAM1 and CAM2 for the two different object surface angles. These correspond to the speckle patterns shown in Figures 8.9-8.10; they are the first frames of the in-plane displacement experiments (dx #1) shown in Figures 8.12-8.13. Each autocorrelation map shows a distinct central self-correlation peak and one or two pairs of side-peaks oriented symmetrically about the central peak. The distance between a side-peak and the central peak indicates the separation between the overlapping wavelength-dependent speckle patterns.

Figure 8.17 Example autocorrelation 2D maps. (Top) Small vs. (Bottom) large relative surface angle. Image brightness scale indicates the correlation coefficient value from zero (black) to one (white). The scale bars (1.0mm) indicate the physical extent of the speckle offsets at the sampling planes.

As the measurement setup had a horizontal, close-to-parallel geometry, the wavelength-dependent speckle separations occurred in the horizontal direction. Therefore, the one-dimensional search procedure presented in Chapter 7 was again used here. The horizontal midline was extracted from the 2D autocorrelation plot, and the resulting plot was upscaled by a factor of 100 using sub-pixel interpolation. Figure 8.18 shows the horizontal autocorrelation midlines and the interpolated plots corresponding to the autocorrelation maps displayed in Figure 8.17. The side-peak locations were determined from the upscaled data by searching for the local maxima with the Matlab function ‘islocalmax’ and thresholding the detected peaks appropriately to pick only the highest matches.
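This peak-picking step might look as follows in Matlab, with 'mid' holding a midline as in the Chapter 7 sketch; the prominence threshold is an assumed illustrative value, not the one used in the thesis:

% Locate side-peaks on an upscaled autocorrelation midline.
up  = interp1(1:numel(mid), mid, 1:0.01:numel(mid), 'spline');  % 100x upscaling
pk  = islocalmax(up, 'MinProminence', 0.02);   % keep only prominent local maxima
loc = find(pk);                                % candidate peak indices (0.01-pxl grid)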
Finally, the side-peak separations were determined by taking the average separation between the central peak and either the left or the right side-peak, and multiplying the result by the known camera pixel size. The CAM1 side-peaks were slightly tilted with respect to the horizontal direction. This may be explained by a minor tilt in the sensor orientation, or by the fact that CAM1 was mounted about 10mm higher than CAM2, which introduced an additional minor speckle shift in the vertical direction. However, the shifts in the horizontal and vertical directions are de-coupled [15], so the horizontal spacings were not affected.

Figure 8.18 Extracted AC horizontal midlines. (Top) Small vs. (Bottom) large surface angle. CAM1 pixel size: 6.7µm, CAM2 pixel size: 3.75µm.

8.8 Diffraction Analysis Results and Accuracy

Figure 8.19 shows the incremental autocorrelation midline plots for each camera for the two different object orientations. The data correspond to the first in-plane displacement tests (dx #1) shown in Figures 8.12-8.13. The incremental plots were fused into a 2D map, where the horizontal axis indicates the increment and the vertical axis displays the autocorrelation pixel shift from the image center. The detected central and side-peaks are displayed on top of the map. At the small object surface angle, only one pair of side-peaks was detected, while two pairs were observed at the large object surface angle. This is understandable, since increasing the relative object surface angle increased the related speckle offset. At the small tilt angle, the observed side-peaks were actually the second maxima; the first maxima could not be detected, as they were close to and overlapping with the central peak.

Figure 8.19 Incremental AC midline plots and detected side-peaks. (Top) Small vs. (Bottom) large surface angle. CAM1 pixel size: 6.7µm, CAM2 pixel size: 3.75µm.

The side-peak locations remained constant for each motion increment. Figure 8.20 shows the detected side-peak locations for the three repeated in-plane displacement measurements, along with the theoretical expectations (Equation (4.41)). The theoretical expectations were calculated using the reported laser wavelength and the mode spacings previously characterized in Chapter 7. The results show that the side-peak detection was very consistent and highly repeatable, and that the detected side-peaks were close to the expected values. Table 8.5 lists the numerical results for all three motion types in the two object surface angle configurations. The side-peak separations remained consistent throughout the experiments, and the measurement accuracy was high, within 6% of the expected values.

Figure 8.20 Autocorrelation side-peak offset repeatability and accuracy. (Top) Small vs. (Bottom) large surface angle. Black lines: theoretical expectations; color markers: experimental results.

Orientation / Applied Motion     | ΔX exp. CAM1 [µm] | ΔX exp. CAM2 [µm] | ΔX meas. CAM1 (AVG±SD) [µm] | ΔX meas. CAM2 (AVG±SD) [µm] | Err. CAM1 [%] | Err. CAM2 [%]
Small tilt, remote waist / dx    | 144.9 | 18.7 | 138.5±0.3 | 17.9±0.0 | -4.5 | -4.4
Small tilt, remote waist / ωy    | 144.9 | 18.7 | 138.4±0.4 | 18.1±0.0 | -4.5 | -3.4
Small tilt, remote waist / dx+ωy | 144.9 | 18.7 | 138.0±0.2 | 18.0±0.0 | -4.8 | -4.0
Large tilt, remote waist / dx    | 386.1 | 49.9 | 396.4±0.5 | 52.4±0.1 | 2.7  | 5.1
Large tilt, remote waist / ωy    | 386.1 | 49.9 | 394.7±0.3 | 52.6±0.0 | 2.3  | 5.4
Large tilt, remote waist / dx+ωy | 386.1 | 49.9 | 395.8±0.3 | 52.6±0.0 | 2.6  | 5.5

Table 8.5 Autocorrelation outer side-peak separation accuracy and repeatability.
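For orientation, the expected offsets in Table 8.5 can be reproduced to first order from a linear offset relation (the exact form is Equation (4.41)): at the sampling plane the duplicated patterns separate by approximately ΔL·(Δλ/λ)·(sinθ + sinψ), which the camera scales by M. For CAM2 in the small tilt angle configuration, using the second-order CrystaLaser mode spacing,

$\Delta X \approx M\,\Delta L\,\dfrac{\Delta\lambda}{\lambda}\,(\sin\theta + \sin\psi) = 0.0539 \times 15799\,\mathrm{mm} \times \dfrac{0.11298}{532} \times (\sin 3.48^{\circ} + \sin 2.45^{\circ}) \approx 18.7\,\mu\mathrm{m}$,

matching the tabulated expectation.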
8.9 Geometric Calibration

The observed side-peak separations were used to determine the measurement geometry. The sampling distances ΔL₁ and ΔL₂ were estimated using the extrapolation procedure introduced in Chapter 4. Because CAM1 and CAM2 had different magnification ratios, the measured side-peak separations first had to be scaled per unit M. Therefore, Equation (4.43) was used in a modified form:

$\Delta L_1 = \dfrac{\Delta X_1/M_1}{\Delta X_2/M_2 - \Delta X_1/M_1}\,\Delta L_{12}$   (8.2)

The estimated sampling distances are listed in Table 8.6. The illumination distance Ls could then be estimated, as the separation between the camera focal planes and the laser waist was known. Table 8.7 lists the resulting values for the different measurements and object tilt configurations.

Orientation / Applied Motion     | ΔL actual CAM1 [mm] | ΔL actual CAM2 [mm] | ΔL est. CAM1 (AVG±SD) [mm] | ΔL est. CAM2 (AVG±SD) [mm] | Err. CAM1 [%] | Err. CAM2 [%]
Small tilt, remote waist / dx    | 30234 | 15799 | 30172±72 | 15737±72 | -0.3 | -0.4
Small tilt, remote waist / ωy    | 30234 | 15799 | 30555±34 | 16120±34 | 1.1  | 2.1
Small tilt, remote waist / dx+ωy | 30234 | 15799 | 30435±41 | 16000±41 | 0.7  | 1.3
Large tilt, remote waist / dx    | 30234 | 15799 | 30981±68 | 16546±68 | 2.5  | 4.8
Large tilt, remote waist / ωy    | 30234 | 15799 | 31236±33 | 16801±33 | 3.4  | 6.4
Large tilt, remote waist / dx+ωy | 30234 | 15799 | 31189±41 | 16754±41 | 3.2  | 6.1

Table 8.6 Sampling distance estimation accuracy and repeatability.

Orientation / Applied Motion     | Ls actual [mm] | Ls est. (AVG±SD) [mm] | Error [%]
Small tilt, remote waist / dx    | 30590 | 30529±72 | -0.3
Small tilt, remote waist / ωy    | 30590 | 30911±34 | 1.1
Small tilt, remote waist / dx+ωy | 30590 | 30792±41 | 0.7
Large tilt, remote waist / dx    | 30590 | 31337±68 | 2.5
Large tilt, remote waist / ωy    | 30590 | 31592±33 | 3.3
Large tilt, remote waist / dx+ωy | 30590 | 31545±41 | 3.2

Table 8.7 Illumination distance estimation accuracy and repeatability.

The sampling and illumination distance estimation was consistent across the different motion types. The errors were higher for the configuration with the larger object angle, but the accuracy was still within 7% of the expected values. Since the extrapolation-based calibration procedure seeks to find the location of the object surface, CAM1 and CAM2 both have the same absolute error. Consequently, the relative errors scale inversely with the sampling distance, which explains the higher accuracy of CAM1.

The estimated CAM2 sampling distance was next used to estimate the deviation angle Δθ according to Equation (8.1), using the known mirror separation ΔMr. Finally, the sampling angle ψ was determined according to Equation (4.41), and the illumination angle followed simply as θ = ψ + Δθ. Table 8.8 lists the estimated angles. The computed values were very close to the actual, measured angles.

Orientation / Applied Motion     | θ actual [˚] | ψ actual [˚] | θ est. [˚] | ψ est. [˚] | θ err. [˚] | ψ err. [˚]
Small tilt, remote waist / dx    | 3.48 | 2.45 | 3.35±0.02 | 2.33±0.01 | -0.13 | -0.12
Small tilt, remote waist / ωy    | 3.48 | 2.45 | 3.30±0.01 | 2.31±0.01 | -0.18 | -0.15
Small tilt, remote waist / dx+ωy | 3.48 | 2.45 | 3.31±0.00 | 2.31±0.00 | -0.18 | -0.15
Large tilt, remote waist / dx    | 8.39 | 7.36 | 8.37±0.03 | 7.40±0.02 | -0.02 | 0.05
Large tilt, remote waist / ωy    | 8.39 | 7.36 | 8.27±0.01 | 7.31±0.01 | -0.12 | -0.05
Large tilt, remote waist / dx+ωy | 8.39 | 7.36 | 8.30±0.02 | 7.34±0.01 | -0.09 | -0.02

Table 8.8 Accuracy and repeatability of the estimated sampling and illumination angles.

8.10 Estimated Surface Motions

Finally, the applied surface motions were determined using the measured speckle motions and the estimated geometric distance and angle parameters. Table 8.9 lists the surface motions for the different applied surface movements in the two object surface angle configurations, computed using Equations (3.23 & 3.24) and the estimated geometric parameters. The results indicate high accuracy for the uniaxial motions, while the errors were considerably higher under multiaxial object motion.
8.10 Estimated Surface Motions

Finally, the applied surface motions were determined using the measured speckle motions and the estimated geometric distance and angle parameters. Table 8.9 lists the surface motions for the different applied surface movements in the two object surface angle configurations, computed using Equations (3.23 & 3.24) and the estimated geometric parameters. The results indicate high accuracy for the uniaxial motions, while the errors were considerably higher under multiaxial object motion. Furthermore, the uniaxial object tilt induced a spurious surface displacement signal, whereas the uniaxial displacement did not cause any significant apparent tilt motion. The multiaxial object motion reflected similar behavior: the estimated displacements had higher errors than the tilts. The observed behavior can be attributed to the unequal tilt vs. displacement sensitivities; a small relative error in the tilt signal therefore led to an amplified error in the estimated displacement.

Object Orientation             | Applied Motion | Applied dx [mm] / ωy [mrad] | Estimated dx [mm] / ωy [mrad] | Error dx / ωy [%]
Small Tilt Angle, Remote Waist | dx             | 6.00 / 0                    | 5.97±0.12 / 0.000±0.007       | -0.6 / –
Small Tilt Angle, Remote Waist | ωy             | 0 / 0.806                   | -0.78±0.15 / 0.837±0.009      | – / 3.8
Small Tilt Angle, Remote Waist | dx + ωy        | 6.00 / 0.806                | 5.12±0.02 / 0.888±0.008       | -14.7 / 10.2
Large Tilt Angle, Remote Waist | dx             | 6.00 / 0                    | 5.93±0.04 / -0.003±0.002      | -1.3 / –
Large Tilt Angle, Remote Waist | ωy             | 0 / 0.806                   | -1.87±0.10 / 0.861±0.023      | – / 7.4
Large Tilt Angle, Remote Waist | dx + ωy        | 6.00 / 0.806                | 3.84±0.14 / 0.882±0.008       | -36.0 / 10.0

Table 8.9 Accuracy and repeatability of the estimated surface motions using the estimated geometry.

Table 8.10 shows comparable results computed using the same measured speckle motions but with the actual, manually measured geometric parameters. The uniaxial results have slightly higher accuracy, while the multiaxial motion uncertainties are greatly reduced.

Object Orientation             | Applied Motion | Applied dx [mm] / ωy [mrad] | Estimated dx [mm] / ωy [mrad] | Error dx / ωy [%]
Small Tilt Angle, Remote Waist | dx             | 6.00 / 0                    | 5.96±0.11 / 0.001±0.006       | -0.8 / –
Small Tilt Angle, Remote Waist | ωy             | 0 / 0.806                   | -0.25±0.11 / 0.828±0.009      | – / 2.8
Small Tilt Angle, Remote Waist | dx + ωy        | 6.00 / 0.806                | 5.51±0.08 / 0.881±0.008       | -8.2 / 9.3
Large Tilt Angle, Remote Waist | dx             | 6.00 / 0                    | 6.07±0.03 / -0.008±0.002      | 1.1 / –
Large Tilt Angle, Remote Waist | ωy             | 0 / 0.806                   | -0.18±0.08 / 0.834±0.023      | – / 4.1
Large Tilt Angle, Remote Waist | dx + ωy        | 6.00 / 0.806                | 5.66±0.06 / 0.850±0.007       | -5.7 / 6.0

Table 8.10 Accuracy and repeatability of the estimated surface motions using the actual geometry.

8.11 Measurement Accuracy vs. Increment Size

The measurement accuracy was further investigated by performing a series of in-plane displacement and tilt experiments with different motion increment sizes. Unlike the previous measurements, only one measurement per configuration was performed here. Figure 8.21 displays the resulting speckle motions. The speckle motions resulting from the large 0.8mm displacement increments closely followed the trend and the datapoints of the 0.4mm increment motions, and showed significant nonlinearities at high displacement magnitudes. Given the proven nonlinearity of the displacement actuator, no further step sizes were investigated. On the other hand, the linear response and the extremely high sensitivity motivated studying the tilt characteristics in more detail. Therefore, various tilt increments, ranging from doubled 0.1075mrad (0.0062˚) increments all the way down to 0.0108mrad (0.0006˚), were investigated. The resulting observed speckle motions remained close to the theoretical expectations even at the smaller increment sizes, although some minor nonlinearities can be seen at the smallest tilt increment. These slight deviations are understandable, as applying the lowest increment required running the actuator at its reported repeatability limit (1µm actuator shaft displacement per increment). Nevertheless, the measurement accuracy and sensitivity are impressive given the hobbyist-style actuator assembly.

Figure 8.21 Motion tracking accuracy for varying increment sizes, small relative surface angle. Black lines: theoretical expectations; color markers: experimental results.
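The displacement/tilt separation underlying Sections 8.10 and 8.11 amounts to solving a small linear system per motion increment. The sketch below shows only that structure; Equations (3.23 & 3.24) are not reproduced here, and the sensitivity coefficients and speckle shifts are placeholder values chosen so that the solution matches the nominal applied motion.

```python
# Schematic of recovering (dx, wy) from the two cameras' speckle shifts.
# Each camera's shift is modeled as a_i*dx + b_i*wy -- the linear structure
# of Equations (3.23 & 3.24). The coefficients and shifts are placeholders;
# the tilt sensitivities b_i grow with sampling distance, so CAM1 (larger
# sampling distance) gets the larger b.
import numpy as np

A = np.array([[1.0, 240.0],      # CAM1 row: (displacement, tilt) sensitivity
              [1.0,  60.0]])     # CAM2 row
s = np.array([199.44, 54.36])    # measured speckle shifts (placeholder units)

dx, wy = np.linalg.solve(A, s)   # -> dx = 6.0, wy = 0.806 with these numbers

# Since |b_i| >> |a_i|, a small relative error in the recovered tilt leaks
# into a large absolute error in dx -- the amplification behind the higher
# multiaxial displacement errors in Tables 8.9-8.10.
```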
8.12 Macroscopic Object Tilt Measurements

Finally, the possibility of using the side-peak diffraction analysis to study macroscopic surface tilts was investigated. The surface was rotated using the same actuator A2, but applying much greater 0.62˚ tilt increments. The studied relative surface angles, or imaging angles, spanned 3.3˚-11.0˚. The tilt increments were produced by pushing the actuator 1mm at a time at a 93mm distance from the rotation axis. Figure 8.22 shows the autocorrelation midlines extracted from the recorded speckle patterns, along with the detected side-peaks. Figure 8.23 shows the detected side-peaks in comparison to the theoretical expectations for the two different laser mode spacings. The observed side-peak offsets were close to the theoretical values, although the observations appeared to have slightly elevated slopes. At low object angles, the first side-peaks could not be robustly detected due to their proximity to the central self-correlation peak. The second side-peaks, on the other hand, could be detected at lower angles, but even they blended with the central peak at the starting angle. CAM1, however, detected a pair of side-peaks far away, approximately 50 pixels from the self-correlation peak. These peaks were likely caused by the weaker wavelength mode associated with the highest mode spacing, as observed in Chapter 7. This is why having a laser with more than two modes can be useful: the different mode spacings allow side-peaks to be observed over a wide range of object surface angles, which helps to increase the dynamic range of the angle measurement.

Figure 8.22 Autocorrelation midline side-peak separation vs. relative surface angle. CAM1 pixel size: 6.7µm, CAM2 pixel size: 3.75µm.

Figure 8.23 Autocorrelation side-peak offset accuracy vs. relative surface angle. Black lines: theoretical expectations; red markers: experimental results.

8.13 Discussion

The demonstrated tilt measurements revealed the method's capability to detect extremely small rotations of a remotely located object. To put the scale in perspective, it is useful to consider a car 5 meters long and imagine lifting its front end by 50 microns, i.e., the diameter of a human hair. This would tilt (pitch) the car by 0.0006˚, which equals the smallest tilt increment studied in Section 8.11. Such a rotation would rotate the car's headlight beams by the same angle. At 30 meters, the beams would shift upward by only 300 microns (0.3mm), which would be very challenging to notice. In Defocused Speckle Imaging, on the other hand, an equivalent motion magnitude would be easy to track thanks to the strong texture of the interference speckle patterns. Moreover, despite the extremely high tilt sensitivity, it is possible to simultaneously keep track of macroscopic relative surface angles using the side-peak diffraction analysis. A multi-wavelength speckle imaging measurement thus effectively contains two very different tilt sensitivity scales.

Because of the much higher relative tilt sensitivity, the in-plane displacement measurements contained nonlinearities caused by small unintended surface tilts. This makes the method attractive for high-sensitivity straightness measurements to assess, e.g., the flatness of machined surfaces or the linearity of machine motion paths. In multiaxial motion measurements, on the other hand, the sensitivity imbalance may be a significant challenge.
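As a quick check of the scales quoted in this discussion, both the applied tilt increment and the car illustration follow from simple small-angle arithmetic:

```latex
% Tilt increment from a 1 mm actuator push at a 93 mm lever arm (Section 8.12):
\omega = \arctan\!\left(\frac{1\ \mathrm{mm}}{93\ \mathrm{mm}}\right) \approx 0.62^{\circ}
% Car example: a 50 um lift over a 5 m length,
\omega = \frac{0.05\ \mathrm{mm}}{5000\ \mathrm{mm}} = 10\ \mu\mathrm{rad} \approx 0.0006^{\circ},
% giving a lateral headlight-beam shift at 30 m of
30\ \mathrm{m} \times 10\ \mu\mathrm{rad} = 0.3\ \mathrm{mm}.
```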
Because of this sensitivity imbalance, the demonstrated waist location adjustment may be necessary in order to improve the overall measurement accuracy and robustness. The waist adjustment greatly reduced the diameter of the illumination spot. This correspondingly increased the average speckle size, which made the speckle tracking and diffraction analysis challenging. It would therefore be useful to improve the laser beam control optics so that, in addition to shifting the focal point, the divergence angle would also be increased. This way, the laser could be focused far away while simultaneously illuminating a sufficiently large surface region. Alternatively, the apparent speckle size could be reduced by changing the imaging system parameters, as studied in Chapter 6.

The mirror-based lens attached to CAM2 had a ring-shaped aperture, which explains the ring-shaped speckle patterns. The central speckle pattern obstruction was more severe with the adjusted waist, where the illuminated surface spot was smaller. The missing center limited the speckle pattern area that could be used for motion tracking and autocorrelation analysis. Consequently, CAM2 was more prone to tracking errors, as seen in the combined object motion measurements with the adjusted waist (Figure 8.14, dx + ωy #1 & #3). Because of the speckle pattern obstructions and the large speckle size, the autocorrelation analysis was not robust in the waist-adjusted configuration. These issues would not arise if a lens with a conventional circular aperture were used. Alternatively, a lens with a higher numerical aperture would help to record speckle patterns that cover a greater portion of the CAM2 sensor.

The experimental instrumentation was found to be mechanically robust. Unlike the interferometric ESPI instrumentation used previously in the same laboratory space, the Defocused Speckle Imaging setup was not affected by convective currents caused by the heavy air-conditioning present, nor by occasional mechanical vibrations resulting from activities in the surrounding laboratories. This is thanks to the common-path nature of Speckle Imaging: since speckles are formed by self-interfering light, all light travels the same optical path. While most components were rigidly fixed onto the optical table, the two geometry-folding mirrors were free-standing on top of a movable cart. Nevertheless, the overall measurement repeatability was very high, as the repeated measurements yielded matching speckle displacements and the incremental side-peak offsets had consistent magnitudes. This indicates that the entire instrumentation was mechanically stable, the motion actuator system was reliable, and the data acquisition and measurement computations were well implemented. The resulting measurement errors were therefore systematic and are expected to result from uncertainties in the geometric parameters.

The chosen V-shaped geometry enabled simulating remote measurements in the limited laboratory space. However, the use of the folded paths increased the setup complexity and made the illumination and sampling distances and angles rather challenging to measure by manual methods. In a real measurement situation, on the other hand, the setup geometry would be I-shaped, with the object at one end and all other instrumentation at the other. Such a geometry would be easier to characterize for validation purposes.
The CAM1 focal plane was easy to locate accurately, because the macro-configured camera was strongly blurred when the calibration target deviated even slightly from the maximum-sharpness position. The far-focused CAM2, on the other hand, had a much greater depth of field, so the calibration target remained reasonably sharp even under moderate defocus. The CAM2 sampling distance and magnification ratio were therefore more difficult to assess. The camera calibration could be improved by utilizing a computer-based blur estimation algorithm (similar to camera autofocus) to locate the focal plane more accurately.

While CAM1 and CAM2 were placed side-by-side close to one another, their effective sampling angles were marginally different. This likely caused some minor relative variations in the observed side-peak offsets between the two cameras, potentially affecting the calibration accuracy. This may explain why the side-peak offsets observed in Figure 8.23 had slightly higher slopes than expected. These effects would be reduced at larger sampling distances, so the calibration accuracy should be better for more remote objects. The angle differences could also be completely eliminated by directing the scattered light into a beam splitter and directing the split beams into different cameras.

Despite the various experimental challenges, the recorded speckle motions and the observed side-peak offsets were close to the theoretical expectations. However, some of the estimated surface motions had much higher relative errors. This can be explained by the high number of experimental variables needed for the computations: the many small errors accumulated, increasing the overall uncertainty in the estimated surface motions. Because of this characteristic, small improvements in the geometric alignment and characterization of the individual instrumentation components could greatly improve the overall measurement accuracy. In particular, it would be important to calibrate the motion actuators against a known reference in order to minimize uncertainties in the applied motions.

The speckle tracking was found to be robust and consistent throughout the measurements, except in the case of the shifted laser waist, where the speckle size was excessively large (Figure 8.11). Considering the small tilt angle configuration (Figure 8.12) and a conservative 0.1-pixel image correlation accuracy, the estimated speckle tracking accuracy for in-plane displacements is 1.6µm for CAM1 and 4.6µm for CAM2. The corresponding values for tilt measurements are 5.2e-5 mrad for CAM1 and 2.3e-4 mrad for CAM2. This indicates that the correlation-based approach is suitable for measuring extremely small motions. Furthermore, the actual performance may exceed the estimated values, as the correlation accuracy can reach 0.01 pixels in ideal conditions [48].

One option for improving measurement accuracy would be to add a third camera, so that each camera would have a different sampling distance. While this would slightly increase the setup complexity, it would provide redundant data for an overdetermined fit and thus make the sensitivity equations more robust against experimental uncertainties. Consequently, this could reduce the spurious displacement signals driven by uncertainties in the tilt measurements. In addition, it would stabilize and improve the sampling distance calibration: instead of a simple two-point extrapolation, Equation (8.2) would be replaced by a three-point regression analysis.
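As a concrete sketch of that idea: with three focal planes, the object-plane location follows from a least-squares line fit rather than the two-point form of Equation (8.2). The focal-plane positions and scaled offsets below are hypothetical placeholder values.

```python
# Sketch of the suggested three-camera regression. Each camera's side-peak
# offset, scaled per unit magnification, is proportional to its sampling
# distance: s_i = k * (z_obj - z_fp_i). With three focal planes, the object
# location follows from a least-squares line fit instead of Equation (8.2).
# The focal-plane positions and offsets below are hypothetical placeholders.
import numpy as np

z_fp = np.array([0.0, 14435.0, 20000.0])   # focal-plane positions [mm]
s = np.array([302.3, 158.0, 102.3])        # scaled offsets dX_i/M_i [um]

slope, intercept = np.polyfit(z_fp, s, 1)  # s = -k*z_fp + k*z_obj
z_obj = -intercept / slope                 # estimated object-plane position
dL = z_obj - z_fp                          # sampling distance of each camera
print(f"object plane at {z_obj:.0f} mm")   # ~30234 mm with these numbers
```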
In addition to the added stability, the third camera would also enable tracking the in-plane rotation signals that were not included in this analysis.

All experiments presented in this chapter were conducted on a retroreflective tape surface. While retroreflective markers have previously been used for signal strengthening in other laser-based methods, they have not been widely applied in Speckle Imaging. The reason is that retroreflection commonly creates speckle patterns with strong spatial intensity variations that make speckle tracking challenging [12]. In the presented experiments, however, these effects were mitigated by using a large illumination spot and large sampling distances. Consequently, the resulting speckle patterns, speckle motions and side-peak offsets behaved just as expected for, and as previously observed with, conventional diffuse surfaces, such as the ground aluminum used in Chapter 7 or the medium-density fiberboard used in Chapters 5-6. The successful demonstration is encouraging for wider utilization of retroreflective surfaces.

Despite the great light efficiency obtained with the retroreflective surface, field measurements may still be prone to ambient light, like direct or reflected sunlight, that may saturate the camera sensor. A practical and effective way to deal with ambient light issues would be to equip the camera lens with a narrow bandpass filter that transmits only a narrow spectrum centered at the laser wavelength and blocks other wavelengths.

The proposed calibration principle is crucial for scaling the recorded speckle displacements in surface motion measurements. However, the possibility of simultaneously measuring distance and relative surface angles with a pair of defocused cameras could gain interest even as a standalone method in situations exceeding the range of autocollimators [50,51]. Combined with the possibility of using the technique on retroreflective surfaces like traffic signs, number plates and high-visibility clothing, it could be useful in robotics for monitoring the location of an object, e.g., a self-driving car, with respect to the environment and other moving objects, e.g., the road, other cars and pedestrians.

8.14 Conclusion

The previous chapters proposed and studied the various aspects of Defocused Speckle Imaging. This chapter brought the different pieces together and presented a complete set of experiments to demonstrate the method's suitability for remote surface motion measurements. The measurements performed at more than 30 meters revealed the possibility of monitoring extremely small object movements at high accuracy, and the geometric calibration showed promise for scaling the observed speckle motions with no additional sensors. The method thus has potential for monitoring large objects, as well as objects located in hazardous environments. In addition, the observations pave the way for interesting new applications, like high-sensitivity straightness measurements, as well as monitoring the relative distances and surface angles between the sensor and surrounding retroreflective surfaces.

Chapter 9: Conclusion

9.1 Thesis Summary and Impact

Surface motion measurements are important for evaluating the performance and safety of mechanical structures and components. While distance-dependent magnification limits the measurement range of traditional camera-based methods, the sensitivity of Defocused Speckle Imaging increases with distance. This makes it an attractive choice for tracking remote objects.
Although Defocused Speckle Imaging has existed for a long time, it has not previously been applied to remote measurements at large distances. A major reason for this is that the relevant literature is scattered and that the speckle phenomenon is explained using challenging analytical treatment. This thesis seeks to overcome the barrier of mathematical theory by presenting a physical Speckle Hemisphere Model based on geometric treatment. This physical approach is anticipated to make the technique more accessible for newcomers (Objective 2). The three-dimensional speckle field behaves generally like the reflections from a disco ball, with some minor differences that arise from interference and diffraction effects. The derived sensitivity equations are identical to those of the existing, more complex models. The anticipated speckle motion characteristics have been confirmed through a set of in-plane displacement, out-of-plane tilt and in-plane rotation measurements.

The few existing Speckle Imaging applications are intended only for contact or close-range measurements. Contact measurements are convenient because speckle motions at zero sampling distance are sensitive only to linear surface displacements, simplifying the related computations. Remote measurements are more complicated because the resulting speckle motions are affected by both linear displacements and surface tilts. Remote measurement analysis thus requires separating the relative tilt and displacement contributions. This thesis has overcome the problem with a simple combination of two cameras focused at different distances (Objective 1). Such an arrangement is effective because camera defocus and magnification adjustments offer great control over measurement sensitivity and content. At low sampling distances, Defocused Speckle Imaging is mostly sensitive to linear in-plane displacements, whereas large sampling distances are characterized by much higher relative tilt sensitivity. Moreover, the observed speckle motion magnitude scales linearly with the camera in-focus magnification ratio. A set of multiaxial surface motion experiments performed at various distances between 4–16 meters has illustrated the dual-camera arrangement's suitability for remote measurement applications.

Defocused Speckle Imaging sensitivity depends on the illumination and sampling distances and angles. In field conditions, these parameters are generally not known, and manual range measurements may be impractical due to large distances or potentially hazardous conditions. This thesis has proposed measurement self-calibration by utilizing multi-mode laser illumination in combination with speckle pattern diffraction analysis (Objective 3). Multi-mode illumination creates multiple partially overlapping speckle patterns, and the relative speckle offset encodes information about the important geometric parameters. The self-calibration principle has been successfully demonstrated using the dual-camera arrangement. The diffraction analysis was able to extract sampling distances of 500–1000mm at a 1.7% accuracy and oblique surface angles of 15–45˚ to within 0.7˚.

The final self-calibrated remote surface motion measurements, performed at a 30.7-meter distance, have extended the range of Defocused Speckle Imaging (Objective 4).
The experiments have revealed the method's potential for extremely high tilt sensitivity, standalone remote angle measurements and applicability to diverse objects, like retroreflective surfaces. The dual-camera configuration could monitor sampling distances of 15–30 meters at a 6.4% accuracy and determine relative surface angles of 2.5–7.4˚ to within 0.2˚. The setup could robustly track the speckle motions resulting from microscopic in-plane displacements (400µm) and very fine tilt motions (0.003˚) at high accuracy, with a maximum uncertainty of 6.0%. The estimated surface in-plane displacements were more prone to errors because of the much higher relative tilt sensitivity. The sensitivity imbalance could be reduced by adjusting the laser source focus location, and the overall measurement performance is expected to further improve with additional advancements in geometric alignment and characterization.

This thesis is characterized by the utilization of several phenomena that have traditionally been considered unproductive, limiting or undesired. The outcome is a novel combination of unconventional features. The possibility of measuring small motions at high sensitivity from far away is extraordinary and thought-provoking, because human vision is fundamentally bound by perspective effects; if a person wants to see the motion of a distant object in more detail, they have to move closer, not further away. While speckles have commonly been seen as a source of noise, they have been used here to convey information about surface movements. Similarly, defocus is usually associated with loss of detail, yet in the context of Speckle Imaging, defocus enables selective extraction of the desired information, and thereby adjustment of the measurement sensitivity. Finally, although low-coherence multi-mode laser sources limit the performance of interferometric measurements, multi-mode illumination actually provides additional geometric information for Speckle Imaging.

9.2 Future Work

9.2.1 Modeling Aspects

The Speckle Hemisphere Model was developed for a geometry where the illumination and observation vectors were confined to the xz-plane. In reality, however, these vectors may also have y-components, which would break the normal-incidence conditions for y-directional surface in-plane displacements dy and tilts about the x-axis ωx, influencing the resulting speckle motion magnitudes. In the future, the sensitivity equation derivation could be extended to a general 3D geometry by considering three-dimensional angles.

9.2.2 Technical Aspects

The studied applied motions were highly controlled: either slow continuous or quasistatic. In the future, it would be important to study less restricted surface movements and investigate what frame rates are required to maintain partial speckle overlap in successive frames in practical measurement applications. In addition, the studied multiaxial motions were limited to two degrees of freedom, one displacement and one tilt component. This was done for practical reasons, to keep the actuator assembly reasonably simple and robust. However, it would be interesting to study the dual-camera performance in the presence of the two additional orthogonal motion components, and also to see how effectively the proposed three-camera arrangement could resolve the object motion state in the presence of additional in-plane rotations or out-of-plane displacements. Such measurements would require an appropriate multiaxial high-precision motion actuator.

While the observed speckle motions and side-peak offsets were systematically close to the expected values, some of the estimated surface movements had higher uncertainties.
This was caused by an accumulation of many small errors due to the high number of experimental parameters. The overall measurement accuracy is expected to improve with more thorough characterization of the geometric parameters, for example the locations of the focal planes, and by simplifying the geometry by minimizing the angular offset between the cameras. Some of the measurement uncertainties were caused by the folded illumination and observation geometries that had to be used in the limited laboratory space. In the future, it would be interesting to conduct measurements outdoors in true field conditions, using unfolded light paths and with object distances extending beyond the studied 30 meters.

9.2.3 Full-field Aspects

Imaging at large sampling distances is diffuse, which gives Defocused Speckle Imaging point-wise characteristics. However, it could still be possible to gather information from a larger area by, for example, sequentially illuminating and recording different points across the object surface, or by simultaneously illuminating separate surface points with different colored laser beams and recording the resulting wavelength-dependent speckle patterns into different channels of a color camera sensor.

The illuminated surface area is assumed to be flat. In many cases, this is a reasonable approximation and remains valid when assessing microscopic surface motions where the motion magnitudes are a small fraction of the illuminated surface spot. However, if an object with a sloped surface displaces by a great distance, the local surface motion within the illuminated spot may differ from the object's rigid-body motion. For example, if an object with a tilted surface is shifted in-plane, the illuminated spot appears to move out-of-plane in addition to the in-plane displacement, thereby inducing an extra speckle motion component. On the other hand, if the surface is cylindrical, then the local surface angle changes as the object is displaced. Therefore, the illuminated surface portion effectively tilts, creating an additional speckle motion component.

The speckle motion dependence on surface curvature raises interesting possibilities. For example, if the laser-camera sensor assembly were attached to a linear stage and accurately displaced at a known rate to sweep the laser across the object surface, it could be possible to determine the surface curvature and shape profile along the motion path from the nonlinearities in the resulting speckle motions. While this approach would require scanning, it has the potential of measuring from several tens of meters away using the demonstrated dual-camera arrangement. Moreover, because of the much higher relative tilt sensitivity, this technique could reach very high curvature resolution. Alternatively, the speckle motion analysis could be complemented by the side-peak diffraction analysis to keep track of macroscopic surface curvature.

9.3 Final Words

With an ever-increasing amount of automation and machinery present in the modern world, the importance of motion measurements will continue to increase to ensure high performance and safe operation. It is hoped that the topics presented in this thesis will lower the threshold for adopting Defocused Speckle Imaging in new application areas and generate more opportunities for remote surface motion measurements.

Bibliography

[1] Sutton M, Wolters W, Peters W, Ranson W, McNeill S. Determination of Displacements Using an Improved Digital Correlation Method.
Image and Vision Computing 1983;1(3):133-139.
[2] Wang Z, Kieu H, Nguyen H, Le M. Digital Image Correlation in Experimental Mechanics and Image Registration in Computer Vision: Similarities, Differences and Complements. Optics and Lasers in Engineering 2015;65:18-27.
[3] Leendertz J. Interferometric Displacement Measurement on Scattering Surfaces Utilizing Speckle Effect. Journal of Physics E: Scientific Instruments 1970;3(3):214-218.
[4] Steinchen W, Yang L. Digital Shearography: Theory and Application of Digital Speckle Pattern Shearing Interferometry. Bellingham, WA: SPIE; 2003.
[5] Cloud G. Optical Methods in Experimental Mechanics Part 45: Measuring Phase Difference – Part I: The Problem. Experimental Techniques 2011;35(1):3-7.
[6] Archbold E, Ennos A. Displacement Measurement from Double-exposure Laser Photographs. Optica Acta: International Journal of Optics 1972;19(4):253-271.
[7] Yamaguchi I. Speckle Displacement and Decorrelation in the Diffraction and Image Fields for Small Object Deformation. Optica Acta: International Journal of Optics 1981;28(10):1359-1371.
[8] Ennos A. Speckle Interferometry. In: Dainty J, editor. Topics in Applied Physics Volume 9: Laser Speckle and Related Phenomena. London: Springer-Verlag Berlin Heidelberg; 1975. p. 203-253.
[9] Goodman J. Speckle Phenomena in Optics: Theory and Applications, Second Edition. Greenwood Village, CO: Roberts and Company; 2007.
[10] Li P, Ni S, Zhang L, Zeng S, Luo Q. Imaging Cerebral Blood Flow Through the Intact Rat Skull with Temporal Laser Speckle Imaging. Optics Letters 2006;31(12):1824-1826.
[11] Beiderman Y, Talyosef R, Yeori D, Garcia J, Mico V, Zalevsky Z. Use of PC Mouse Components for Continuous Measuring of Human Heartbeat. Applied Optics 2012;51(16):3323-3328.
[12] Martin P, Rothberg S. Laser Vibrometry and the Secret Life of Speckle Patterns. Eighth International Conference on Vibration Measurements by Laser Techniques: Advances and Applications 2008;7098:709812.
[13] Horváth P, Hrabovský M, Šmíd P. Full Theory of Speckle Displacement and Decorrelation in the Image Field by Wave and Geometrical Descriptions and its Application in Mechanics. Journal of Modern Optics 2004;51(5):725-742.
[14] Cloud G. Optical Methods in Experimental Mechanics Part 27: Speckle Size Estimates. Experimental Techniques 2007;31(3):19-22.
[15] Gibson S, Charrett T, Tatam R. Absolute Angle Measurement Using Dual-wavelength Laser Speckle for Robotic Manufacturing. In: Optical Measurement Systems for Industrial Inspection; 2019. p. 110560K.
[16] Bachratý M, Žalman M. 2D Position Measurement with Optical Laser Mouse Sensor. In: International Scientific Conference on New Trends in Signal Processing; 2010. p. 20-23.
[17] Šmíd P, Horváth P, Hrabovský M. Speckle Correlation Method Used to Measure Object's In-plane Velocity. Applied Optics 2007;46(18):3709-3715.
[18] Francis D, Charrett T, Waugh L, Tatam R. Objective Speckle Velocimetry for Autonomous Vehicle Odometry. Applied Optics 2012;51(16):3478-3490.
[19] Hrabovský M, Bača Z, Horváth P. Measurement of an Object Rotation Using the Theory of Speckle Pattern Decorrelation. Optik 2000;111(8):359-366.
[20] Narayanamurthy C. Measurement of Angular Velocity Using Speckle Photography. Applied Optics 1991;30(22):3197-3199.
[21] Yamaguchi I, Kobayashi K, Yaroslavsky L. Measurement of Surface Roughness by Speckle Correlation. Optical Engineering 2004;43(11):2753-2762.
[22] Zizka J, Olwal A, Raskar R. SpeckleSense: Fast, Precise, Low-cost and Compact Motion Sensing Using Laser Speckle.
Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology 2011:489-498.
[23] Jo K, Gupta M, Nayar S. SpeDo: 6 DOF Ego-Motion Sensor Using Speckle Defocus Imaging. In: Proceedings of the IEEE International Conference on Computer Vision; 2015. p. 4319-4327.
[24] Dainty J, editor. Topics in Applied Physics Volume 9: Laser Speckle and Related Phenomena. London: Springer-Verlag Berlin Heidelberg; 1975.
[25] Heikkinen J, Schajer G. A Geometric Model of Surface Motion Measurement by Objective Speckle Imaging. Optics and Lasers in Engineering 2020;124:105850.
[26] Langmuir R. Scattering of Laser Light. Applied Physics Letters 1963;2(2):29-30.
[27] Oliver B. Sparkling Spots and Random Diffraction. Proceedings of the IEEE 1963;51(1):220-221.
[28] Archbold E, Burch J, Ennos A. Recording of In-plane Surface Displacement by Double-exposure Speckle Photography. Optica Acta: International Journal of Optics 1970;17(12):883-898.
[29] Tiziani H. Analysis of Mechanical Oscillations by Speckling. Applied Optics 1972;11(12):2911-2917.
[30] Gregory D. Basic Physical Principles of Defocused Speckle Photography: A Tilt Topology Inspection Technique. Optics & Laser Technology 1976;8(5):201-213.
[31] Yamaguchi I. A Laser-speckle Strain Gauge. Journal of Physics E: Scientific Instruments 1981;14(11):1270-1273.
[32] Sjödahl M. Electronic Speckle Photography: Measurement of In-plane Strain Fields Through the Use of Defocused Laser Speckle. Applied Optics 1995;34(25):5799-5808.
[33] Stetson K. Problem of Defocusing in Speckle Photography, its Connection to Hologram Interferometry, and its Solutions. Journal of the Optical Society of America 1976;66(11):1267-1271.
[34] Jacquot P, Rastogi P. Speckle Motions Induced by Rigid-body Movements in Free-space Geometry: An Explicit Investigation and Extension to New Cases. Applied Optics 1979;18(22):2022-2032.
[35] Světlík J. Speckle Displacement: Two Related Approaches. Journal of Modern Optics 1992;39(1):149-157.
[36] Sjödahl M. Calculation of Speckle Displacement, Decorrelation, and Object-Point Location in Imaging Systems. Applied Optics 1995;34(34):7998-8010.
[37] Hrabovský M, Bača Z, Horváth P. Theory of Speckle Displacement and Decorrelation and its Application in Mechanics. Optics and Lasers in Engineering 1999;32(4):395-403.
[38] Charrett T, Tatam R. Objective Speckle Displacement: An Extended Theory for the Small Deformation of Shaped Objects. Optics Express 2014;22(21):25466-25480.
[39] Hrabovský M, Horváth P. Application of Speckle Decorrelation Method for Small Translation Measurements. Optica Applicata 2004;34(2):203-218.
[40] Charrett T, Tatam R. Objective Speckle Displacement Resulting from the Deformation of Shaped Objects. In: Optical Measurement Systems for Industrial Inspection IX; 2015. p. 95251N.
[41] Cloud G. Optical Methods in Experimental Mechanics Part 25: Objective Speckle. Experimental Techniques 2007;31(1):15-17.
[42] Svelto O. Principles of Lasers, 5th Edition. Milano: Springer; 2010.
[43] Gregory D. Speckle Photography in Engineering Applications. In: Robertson E, editor. The Engineering Uses of Coherent Optics. Glasgow: Cambridge University Press; 1976. p. 263-282.
[44] Hrabovský M, Bača Z, Horváth P. Measurement of an Object Rotation Using the Theory of Speckle Pattern Decorrelation. Optik 2000;111(8):359-366.
[45] Heikkinen J, Schajer G. Remote Surface Motion Measurements Using Defocused Speckle Imaging. Optics and Lasers in Engineering 2020;130:106091.
[46] Greivenkamp J.
Field Guide to Geometrical Optics. Bellingham, WA: SPIE; 2004.
[47] Heikkinen J, Schajer G. Remote Surface Motion Measurements Using Multi-Wavelength Defocused Speckle Imaging. In: SEM Annual Conference and Exposition on Experimental and Applied Mechanics; September 14-17, 2020.
[48] Sutton M, Orteu J, Schreier H. Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications. New York: Springer Science & Business Media; 2009.
[49] Hahn I, Weilert M, Wang X, Goullioud R. A Heterodyne Interferometer for Angle Metrology. Review of Scientific Instruments 2010;81(4):045103.
[50] Shimizu Y, Matsukuma H, Gao W. Optical Sensors for Multi-axis Angle and Displacement Measurement Using Grating Reflectors. Sensors 2019;19(23):5289.
[51] Li R, Konyakhin I, Zhang Q, Cui W, Wen D, Zou X, Guo J, Liu Y. Error Compensation for Long-distance Measurements with a Photoelectric Autocollimator. Optical Engineering 2019;58(10):104112.
[52] Singer W, Totzeck M, Gross H. Handbook of Optical Systems, Volume 2, Physical Image Formation. Weinheim: Wiley-VCH Verlag GmbH & Co. KGaA; 2005.
[53] Cloud G. Optical Methods in Experimental Mechanics Part 26: Subjective Speckle. Experimental Techniques 2007;31(2):17-19.
[54] Kim J. Range and Accuracy of Speckle Displacement Measurement in Double-Exposure Speckle Photography. Journal of the Optical Society of America A 1989;6(5):675-681.
[55] Parigi V, Perros E, Binard G, Bourdillon C, Maitre A, Carminati R, Krachmalnicoff V, De Wilde Y. Near-field to Far-field Characterization of Speckle Patterns Generated by Disordered Nanomaterials. Optics Express 2016;4(7):7019-7027.
[56] Hu X-B, Dong M, Zhu Z, Gao W. Does the Structure of Light Influence the Speckle Size? Scientific Reports 2020;10(199):1-11.
[57] Palmer C. Diffraction Grating Handbook, 5th Edition. Rochester, NY: Thermo RGL; 2002.
[58] Chakrabarti M, Jakobsen M, Hanson S. Speckle-based Spectrometer. Optics Letters 2015;40(14):3264-3267.
[59] Jakobsen M, Hanson S. Distance Measurements by Speckle Correlation of Objective Speckle Patterns, Structured by the Illumination. Applied Optics 2012;51(19):4316-4324.
[60] Guizar-Sicairos M, Thurman S, Fienup J. Efficient Subpixel Image Registration Algorithms. Optics Letters 2008;33(2):156-158.
[61] Blaber J, Adair B, Antoniou A. Ncorr: Open-Source 2D Digital Image Correlation Matlab Software. Experimental Mechanics 2015;55(6):1105-1122.
[62] Illaramendi M, Zubia J, Arrue J, Ayesta I. Adaptation of the Michelson Interferometer for a Better Understanding of the Temporal Coherence in Lasers. 14th Conference on Education and Training in Optics and Photonics 2017:1045249.
[63] Bass M, Van Stryland E, Wolfe W, Williams D. Handbook of Optics: Fundamentals, Techniques, and Design. United States of America: McGraw-Hill Professional Publishing; 1995.
[64] Alexeev I, Wu J, Karg M, Zalevsky Z, Schmidt M. Determination of Laser Beam Focus Position Based on Secondary Speckles Pattern Analysis. Applied Optics 2017;56(26):7413-7418.

Appendix: Interferometric Laser Characterization Principle

A laser spectrum is typically determined using an optical spectrum analyzer, which measures the laser output power as a function of wavelength. However, even if a dedicated spectrum analyzer is not available, there is a related approach, based on a Michelson interferometer, that can be used to extract the mode spacings [62]. In a Michelson interferometer, the laser beam is divided into two paths by a beam splitter (Figure A.1).
The transmitted and the reflected beams are reflected back from a pair of first-surface mirrors. The two beams are recombined in the beam splitter and directed towards a screen or a digital sensor. The two overlapping beams create an interference pattern that is recorded. The reference arm has a fixed path length, while the other, measurement arm has an adjustable path length to introduce a relative optical path length difference (OPD). The fixed mirror is tilted by a very small angle (a fraction of a degree) to introduce a subtle path length gradient in the horizontal direction. Consequently, the interference pattern formed on the imaging sensor consists of vertical fringes of sinusoidally varying intensity, corresponding to different levels of constructive or destructive interference.

The quality of interference can be quantified by measuring the contrast of the interference fringes. The fringe contrast is also known as the visibility V, and it can be expressed as [52]:

V = (I_max − I_min) / (I_max + I_min)    (A.1)

Figure A.1 Michelson interferometer setup for determining laser mode spacings. Components: diverging laser source, collimating lens, beam splitter cube, fixed mirror, moving mirror (displacement Δz), lensless camera sensor.

The visibility depends on the maximum and minimum fringe intensities, I_max and I_min, respectively. A single-wavelength laser produces interference patterns that have high contrast, and the visibility of an ideal, perfectly coherent, monochromatic laser is unity. However, if the laser spectrum contains multiple wavelength components, then the phase of each mode propagates at a slightly different rate (Figure A.2). Therefore, the different wavelength modes eventually move out of phase. If the total path lengths of the two arms of the Michelson interferometer differ sufficiently, the resulting fringe visibility drops. However, if the path length is further increased, the different wavelengths eventually return in phase, correspondingly lifting the fringe visibility back up. This cyclical behavior is similar to the beat signal in acoustics.

Consider a laser that has two modes, λ₁ = λ and λ₂ = λ + Δλ. If the mode with the longer wavelength λ₂ oscillates p times over one fringe visibility cycle, then the mode with the shorter wavelength λ₁ must oscillate p + 1 times. If the cycle length corresponds to an optical path difference OPD, then the following pair of equations holds:

OPD = (p + 1)λ    (A.2a)
OPD = p(λ + Δλ)    (A.2b)

Equating the right-hand sides yields:

Δλ = λ/p    (A.3)

Since the laser mode spacing is much smaller than the wavelength (Δλ ≪ λ), one visibility cycle spans a high number of wavelengths, i.e., p is large. Therefore, it is appropriate to approximate Equation (A.2a) as:

OPD ≈ pλ    (A.4)

Finally, combining Equations (A.3 & A.4) yields [63]:

Δλ = λ²/OPD    (A.5)

Therefore, the laser mode spacing can simply be estimated by measuring the visibility cycle length, provided that the laser wavelength is accurately reported. If the laser has more than two wavelength modes, then the visibility graph is modulated by additional cyclical components. In that case, the different cycle periods can be determined by computing the power spectrum of the visibility graph and identifying the dominant frequencies.

Figure A.2 (Top) Two waves propagating with slightly different wavelengths. (Middle) Interference of the two waves. (Bottom) The resulting interferometric fringe visibility.
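To make the final step concrete, the following Python fragment sketches how a mode spacing can be extracted from a measured visibility curve via the power spectrum and Equation (A.5). The wavelength, the OPD grid, and the synthetic visibility curve are placeholders standing in for real fringe recordings (where OPD = 2Δz per mirror displacement).

```python
# Sketch of extracting a mode spacing from a visibility curve, per
# Equations (A.1) and (A.5). The wavelength, OPD grid, and synthetic
# visibility below are hypothetical placeholders, not measured data.
import numpy as np

lam = 685e-9                     # reported laser wavelength [m] (placeholder)
opd = np.linspace(0, 0.6, 601)   # optical path difference sweep [m]
V = 0.5 * (1 + np.cos(2 * np.pi * opd / 0.2))   # synthetic 0.2 m cycle

# Dominant visibility-cycle frequency from the power spectrum (the approach
# suggested above for lasers with more than two modes)
spec = np.abs(np.fft.rfft(V - V.mean())) ** 2
freqs = np.fft.rfftfreq(opd.size, d=opd[1] - opd[0])  # cycles per meter of OPD
idx = 1 + np.argmax(spec[1:])    # skip the DC bin
opd_cycle = 1.0 / freqs[idx]     # visibility cycle length [m]

dlam = lam ** 2 / opd_cycle      # Equation (A.5): mode spacing [m]
print(f"mode spacing = {dlam * 1e12:.1f} pm")   # ~2.3 pm with these numbers
```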
