UBC Theses and Dissertations


Position-weighted template matching for measuring in-plane dynamics of microdevices. Peng, Cheng (2015)


Full Text

Position-weighted Template Matching for Measuring In-plane Dynamics of Microdevices

by

Cheng Peng

B.Eng., University of Electronic Science and Technology of China, 2013

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Electrical and Computer Engineering)

The University of British Columbia (Vancouver)

November 2015

© Cheng Peng, 2015

Abstract

The measurement of the in-plane dynamics of microdevices is crucial to analyzing their dynamic characteristics under given excitations. As the designs of moving micro-electro-mechanical systems (MEMS) rapidly grow more complex, precise measurement and visual characterization of dynamic microstructures have become increasingly important. The visualization and measurement of the dynamics of MEMS structures are of considerable significance to the development of more effective and advanced microdevices.

This thesis investigates the problem of visualizing, measuring and analyzing the in-plane dynamics of microdevices. We propose a novel object position tracking algorithm, called position-weighted template matching, which improves the traditional template matching technique. The proposed algorithm effectively addresses the position "jump" problem that typically occurs when tracking objects in planar microdevices, where similar sub-patterns may exist within a single structure. We have incorporated parabola fitting interpolation into the algorithm to achieve sub-pixel resolution. We have implemented the proposed methods in a software module with a LabVIEW Graphical User Interface (GUI). Several comparative experiments were carried out to demonstrate the effectiveness of the algorithm. In addition, the procedure was used to perform system identification on a fabricated MEMS resonator.
Our implemented LabVIEW GUI can potentially be interfaced with low-cost hardware to enable visualization and measurement of the in-plane motion of microdevices.

Preface

This dissertation is an original intellectual work of the author, Cheng Peng. None of the text of the thesis is taken directly from previously published or collaborative articles. The ideas of the innovative algorithm in Chapter 3 and the work on system identification in Chapter 4 originated from discussions between my supervisor Edmond Cretu and myself. The details of the algorithm were designed and implemented by myself independently, as was the system identification work in Chapter 4. The experiments for acquiring the video data used in Chapter 3 and Chapter 4 were conducted in the Adaptive MEMS Lab at the University of British Columbia (UBC) by myself, with the help of my lab colleague Manav Bharti. The microdevices used in the experiments were previously designed by Manav Bharti. During the research process, I was responsible for developing the concepts, deriving the mathematical formulations, implementing the methods, conducting the experiments and analyzing the results, as well as writing the draft of the thesis manuscript. My research supervisor, Edmond Cretu, provided supervisory advice and manuscript editing throughout the entire process of conducting the research and writing the thesis.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
1 Introduction
  1.1 Background
  1.2 Related Work
  1.3 Motivation
  1.4 Thesis Organization
2 Monitoring In-plane Dynamics at Micro-scale
  2.1 Dynamic Behavior of Microdevices
  2.2 State-of-the-art Techniques in Monitoring In-plane Dynamics at Micro-scale
    2.2.1 Micro System Analyzer
    2.2.2 Stroboscopic Video Microscopy for In-plane Motion Detection
    2.2.3 Planar Motion Analyzer
  2.3 Microsystems Integration Platform FPGA Imaging System
  2.4 Software Techniques for Object Tracking
  2.5 Summary
3 Object Tracking from Optical Recordings of Micro-displacements
  3.1 System Architecture
  3.2 Additional Functions
    3.2.1 Multiple OOIs and Relative Movement
    3.2.2 Rotation Angle Detection
  3.3 Algorithms
    3.3.1 Mean Shift
    3.3.2 Template Matching
    3.3.3 Novel Template Matching Based on Position History
    3.3.4 Position-weighted Template Matching with Parabola Fitting Interpolation
  3.4 Summary
4 System Identification
  4.1 Basic Concepts and General Procedures
    4.1.1 Data Acquisition and Preprocessing
    4.1.2 Nonparametric and Parametric Model Estimation Methods
    4.1.3 Model Validation
  4.2 Transfer Function Estimation
    4.2.1 Transfer Function Deduction
    4.2.2 Data Fitting in MATLAB
    4.2.3 Transfer Function Estimation from Experimental Data
  4.3 Summary
5 Experimental Results and Discussions
  5.1 Comparison of Algorithms for Object Tracking
    5.1.1 Experimental Video Data
    5.1.2 Mean Shift
    5.1.3 Template Matching
    5.1.4 Position-based Template Matching
    5.1.5 Position-based Template Matching with Parabola Fitting
    5.1.6 Summary
  5.2 Transfer Function Estimation from Experimental Data
    5.2.1 Polytec PMA
    5.2.2 LabVIEW
    5.2.3 Comparison of the Results from Two Processing Methods
    5.2.4 Summary
6 Conclusions and Future Work
  6.1 Thesis Overview
  6.2 Future Work
    6.2.1 Future Outlook on Hardware Equipment
    6.2.2 Expansion on Other Issues with Software Algorithms
Bibliography
A The LabVIEW GUI Design and Implementation
  A.1 Basic LabVIEW GUI Implementation for Object Tracking
  A.2 LabVIEW GUI Implementation for Additional Functions
    A.2.1 Multiple OOIs and Relative Movement
    A.2.2 Rotation Angle Detection
  A.3 LabVIEW GUI Implementation for Object Tracking with Mean Shift
B Programming Codes
  B.1 C++ Code for Position-weighted Template Matching with Parabola Fitting Interpolation
  B.2 MATLAB Code for Data Fitting

List of Tables

Table 4.1 Parameters of the micro-resonator for system identification.
Table 5.1 Polytec PMA data.
Table 5.2 LabVIEW data.
Table 5.3 Comparison of results from two processing methods.

List of Figures

Figure 2.1 A part of the micro-resonator.
Figure 2.2 Polytec MSA-500.
Figure 2.3 Timing diagram of the PMA synchronization signal, adapted from [32].
Figure 2.4 The stroboscopic technique.
Figure 2.5 Measurement procedure using the PMA, adapted from [30].
Figure 2.6 Measuring in-plane motion of a microdevice using Polytec PMA.
Figure 2.7 MIP FPGA-based imaging system and its components.
Figure 2.8 Front panel of the reference design.
Figure 2.9 MEMS device under the imaging system.
Figure 2.10 Taxonomy of tracking methods, adapted from [41].
Figure 3.1 The front panel of the LabVIEW GUI for object position tracking.
Figure 3.2 Object position tracking procedure in the designed LabVIEW GUI.
Figure 3.3 An example of sliding the template image against the source image.
Figure 3.4 An example of the result matrix of sliding the template image.
Figure 3.5 A part of the structure of a micro-resonator.
Figure 3.6 Y displacement with the template matching method.
Figure 3.7 Frames of the video stroboscopy movie.
Figure 3.8 The color mappings of the matrices for the algorithm.
Figure 3.9 The simplified model of parabola fitting.
Figure 3.10 The simplified model of parabola fitting.
Figure 4.1 The micro-resonator for system identification.
Figure 5.1 The microstructure for the object tracking experiment.
Figure 5.2 The ROI and OOI selected for the object tracking experiment.
Figure 5.3 The results of Polytec PMA (for one period).
Figure 5.4 The results of mean shift.
Figure 5.5 The results of the traditional template matching.
Figure 5.6 The results of position-weighted template matching.
Figure 5.7 The results of the position-weighted template matching with parabola fitting.
Figure 5.8 The microstructure for the system identification experiment.
Figure 5.9 The ROI and OOI selected for the system identification experiment.
Figure 5.10 The result of curve fitting from the Polytec PMA data.
Figure 5.11 The result of curve fitting from the LabVIEW data.
Figure 6.1 In-plane deformation of a lateral beam, adapted from [18].
Figure 6.2 MEMS gear-wheel drive (courtesy of Sandia National Laboratories).
Figure A.1 The LabVIEW GUI front panel for object position tracking.
Figure A.2 The block diagram implementation for selecting an AVI and an ROI.
Figure A.3 The block diagram implementation for reference points.
Figure A.4 The block diagram implementation for object tracking.
Figure A.5 The block diagram implementation for object tracking of multiple OOIs.
Figure A.6 The front panel for rotation angle detection.
Figure A.7 The block diagram implementation for rotation angle detection.
Figure A.8 The block diagram implementation for object tracking with mean shift.

Acknowledgments

I would like to offer my sincere gratitude to all the faculty, staff and fellow students at UBC who have provided me with support and advice throughout my work. In particular, I owe my deepest thanks and greatest appreciation to my supervisor Edmond Cretu, who has supported me academically, financially and emotionally along the way. I am also very grateful to our lab manager, Dr. Alina Kulpa, for her assistance and guidance during my work in the lab.

I could not have accomplished this work without the great help and invaluable support of particular lab colleagues and friends in our department: Manav Bharti, Maan Almarghalani, Carlos Gerardo, Dr. Elie Sarraf, Miguel Torres and Lingyi Liu.

Gratitude is also owed to all of my friends here in Vancouver and around the world for their genuine support throughout my education.
I would like to express particular thanks to my best friend Xining Yang, who has always been by my side no matter what.

I owe special thanks to UBC and Mitacs for their generous financial support throughout my studies here.

Finally, I can never express enough gratitude to my beloved parents, who have always supported me throughout my whole life in every aspect.

Dedication

To my beloved family and friends

Chapter 1
Introduction

This chapter introduces the topic of measuring the in-plane dynamics of microdevices. It provides the background and related work on techniques and facilities for measuring in-plane dynamics, clearly states the motivation and goal of the thesis work, and describes the structure of the thesis.

1.1 Background

One of the most challenging aspects of MEMS devices is properly characterizing their mechanical behavior under electrical actuation. Advanced testing methods for the dynamics of microstructures are necessary in order to develop reliable, commercial MEMS devices [21]. The main purpose of MEMS testing is to provide feedback to the design and simulation process in an engineering development effort; this feedback should include device behavior, system parameters and material properties [34]. The visualization and measurement of the dynamics of MEMS structures are of considerable significance to the development of more effective and advanced microdevices.

1.2 Related Work

Several methods have been proposed in the past, and expensive measurement equipment has been used to measure and characterize microstructures.

Optical characterization of mechanical micro-dynamics has become a preferred method for non-contact testing of MEMS structures, providing detailed information including vibration modes. It is common to measure the out-of-plane dynamics of microstructures using laser vibrometry techniques (Doppler frequency/phase shift) [22].
On the other hand, in-plane mechanical dynamics can be measured using video stroboscopy techniques, which rely on a combination of hardware and software image processing.

In [7], optical techniques developed for the characterization of static displacements, motions, vibrations and internal strains of MEMS and MOEMS (Micro-Opto-Electro-Mechanical Systems) devices are critically reviewed. The article presents many optical techniques suitable for out-of-plane measurements, along with their performance, limitations and proposed improvements. It also discusses a few techniques for in-plane motion, vibration and mechanical strain measurements, and points out that, compared to out-of-plane measurements, the choice of techniques for in-plane measurements is more restricted.

There are existing systems capable of performing combined out-of-plane and in-plane micro-dynamics characterization, such as the Polytec MSA-500, or, on the research side, the system proposed in [34], containing a Stroboscopic Interferometer System, a Computer-Micro-Vision System, and a Laser-Doppler Vibrometer (Polytec PI). The Stroboscopic Microscopic Interferometer System developed at BSAC can measure both in-plane and out-of-plane motions in a single experiment; stroboscopy and digital image processing are used to determine in-plane motions with sub-pixel resolution. The Computer-Micro-Vision System employs stroboscopy and image processing techniques to track the motions of moving rigid-body structures, and can measure both out-of-plane and in-plane motions through focus variation. The Laser-Doppler Vibrometer (Polytec PI) is a commercial setup that measures out-of-plane motion at the location monitored by the laser beam. The results of experimental studies on various MEMS devices are also presented in that paper.
The authors are also investigating the possibility of providing high-speed Internet (Supernet) access for remote use of these facilities.

The capabilities of in-plane measurement using the Polytec Planar Motion Analyzer (PMA) have been demonstrated in [30]. Both static and dynamic characterizations are demonstrated. The optical measurement system utilizes the laser vibration measurement technique for out-of-plane motion recording, and then measures the lateral resonant frequency and sensor displacements. The PMA analyzes the in-plane vibrations of a microdevice under a microscope, based on the video stroboscopy principle. Characterization results in both the time and frequency domains can be precisely generated and analyzed. The Polytec PMA is regarded as one of the most successful state-of-the-art technologies for visualizing and measuring the in-plane motion of microdevices. A more detailed introduction to the Polytec PMA is presented in Chapter 2.

1.3 Motivation

One of the most efficient and advanced characterization facilities for in-plane microdevices is the Polytec PMA. Even though it is a state-of-the-art commercial tool for the characterization and measurement of the in-plane motions of MEMS structures, several of its limitations have inspired this thesis work. The Polytec PMA is an expensive commercial device, with proprietary, non-open hardware and software modules. This makes it impossible to extend the system, for instance, to specific excitation types (the PMA provides only a limited set of electrical actuation signals: sine waves, chirp signals, and harmonic sweeping). Only a single region of interest (ROI) can be selected in the field of view, making it impossible to correlate the motions of different parts of the microstructure. Finally, the measurement resolution of the PMA is set by its internal algorithms and cannot be altered by the user.
These problems and limitations motivate the development of a system for measuring the in-plane dynamics of microdevices using low-cost programmable hardware, with more flexibility and enhanced functionality.

The main goal of the thesis work is to develop a low-cost software module with an easy-to-use LabVIEW GUI that implements our new image processing algorithm, in order to measure the in-plane dynamics of microdevices. The algorithm processes the video stroboscopy movie filmed under the microscope, extracting and measuring the in-plane dynamics in several regions of interest by applying a position-weighted template matching algorithm. The software module can potentially be integrated with more affordable and accessible hardware equipment capable of acquiring image data, such as the PXI-based MIP (Microsystems Integration Platform) Imaging System [40]. The main focus of this work lies in the software implementation, especially the image processing algorithm used to track the object of interest. The main task of the algorithm is to track the moving object of interest and to measure and display its position in the X and Y directions at each frame.

The main contribution of this thesis is a new algorithm, called position-weighted template matching, based on the existing template matching procedure. The method is useful for characterizing the in-plane motion of microdevices containing many similar structural patterns. The second contribution of the thesis is the implementation of parabola fitting interpolation, which enables the algorithm to achieve sub-pixel precision.

1.4 Thesis Organization

This thesis is divided into six chapters. Chapter 1 introduces the topic of measuring the in-plane dynamics of microdevices, as well as the background and related work on techniques and facilities for measuring in-plane dynamics.
The motivation and the goal of the thesis work are also stated in that chapter. Chapter 2 explains the concept of the dynamic behavior of microdevices and introduces the state-of-the-art technologies for monitoring in-plane dynamics at the micro-scale; software techniques for object tracking in video sequences are also reviewed. Chapter 3 presents the general system architecture for object position tracking at the micro-scale, briefly introduces some additional functions implemented in the software module, and then describes several algorithms for object tracking. Chapter 4 introduces the basic concepts and procedures of system identification and explains how the transfer function is estimated from our experimental data. Results, comparisons and discussions related to the object position tracking algorithms and the system identification are presented in Chapter 5. Finally, Chapter 6 draws conclusions and offers an outlook on potential ideas for improving and extending the work.

Chapter 2
Monitoring In-plane Dynamics at Micro-scale

This chapter explains the concept of the dynamic behavior of microdevices and introduces the hardware technologies for monitoring in-plane dynamics at the micro-scale. The Polytec PMA and the Microsystems Integration Platform (MIP) FPGA Imaging System are described in terms of their setup components, working principles and applications. Finally, software techniques for object tracking in video sequences are reviewed. In our thesis work, the measurement of the in-plane motion of microdevices requires both hardware and software modules. Our research mainly focuses on the design and implementation of the software module, while the Polytec PMA was used to acquire recorded video sequences of the in-plane motion of microstructures for the experiments.
In the long term, the MIP FPGA Imaging System can potentially be coupled with our software module in order to measure the in-plane motion of MEMS structures.

2.1 Dynamic Behavior of Microdevices

Dynamic behavior is a description of how a system or an individual unit functions with respect to time.

Let us take a micro-resonator used in this thesis work as an example. A micro-resonator is a microdevice that exhibits resonant behavior: it naturally oscillates at certain frequencies, called its resonant frequencies, with greater amplitude than at other frequencies. The operation of the micro-resonator can be described by a simplified second-order dynamics equation:

    m (d²x/dt²) + b (dx/dt) + k x = m a(t)        (2.1)

where m is the proof mass of the device, k is the spring constant, b is the mechanical loss coefficient, x denotes the displacement of the microstructure, and a(t) represents the acceleration corresponding to the electrostatic force generated by the excitation signal (in the case of electrical actuation only). A part of the structure of the micro-resonator is shown in Figure 2.1.

Figure 2.1: A part of the micro-resonator.

The structure contains some parts that are rigidly anchored to the substrate, as well as other parts that can move in a particular pattern under the excitation of an electrical signal. For instance, under the excitation of a sinusoidal AC signal with appropriate settings, the movable parts of the microstructure vibrate back and forth sinusoidally in the horizontal direction. The resonant frequency of this microstructure is 7.2 kHz, meaning the vibration has its greatest amplitude when the excitation signal has a frequency of 7.2 kHz.

2.2 State-of-the-art Techniques in Monitoring In-plane Dynamics at Micro-scale

One of the state-of-the-art techniques for measuring in-plane dynamics at the micro-scale is the Polytec PMA.
The Polytec PMA is one of three techniques integrated into a more sophisticated all-in-one instrument called the Micro System Analyzer (MSA).

2.2.1 Micro System Analyzer

The Micro System Analyzer [1] is the premier measurement tool for the analysis and visualization of structural vibrations and surface topography in microstructures such as MEMS. The MSA-500 is shown in Figure 2.2. The MSA integrates three highly sophisticated measuring techniques in one instrument: static characterization of topography by scanning white-light interferometry; real-time dynamic characterization of out-of-plane vibrations by laser-Doppler vibrometry; and dynamic characterization of in-plane movements by stroboscopic video microscopy. The combination of all three techniques makes the MSA-500 a universal tool for obtaining 3-D profiles and determining surface and form parameters; for the identification, visualization and measurement of system resonances and transient responses; and for obtaining detailed amplitude and phase information [31].

As designed, the MSA-500 has significant benefits for R&D and production applications, and the integration of the three measurement principles has made the complete characterization of microstructures faster and more accurate.

2.2.2 Stroboscopic Video Microscopy for In-plane Motion Detection

The basic working principle of the Polytec PMA is stroboscopic video microscopy, which is explained in detail in [31].

In the Polytec PMA, a normal-speed camera records the motion observed under the microscope, and is thus not fast enough to capture high-frequency movements directly. Therefore, a stroboscopic technique is applied to accurately measure the high-frequency in-plane motion of the microdevices under test.
Figure 2.2: Polytec MSA-500.

With stroboscopic illumination and digital imaging, the motion of fast, periodically moving objects can be sharply frozen in time to capture the exact position of a region of interest (ROI) on the specimen. Images are collected only during selected phases of the movement, when the LED flashlight is switched on, and movements can be recorded over a time window even shorter than the shortest possible exposure time of the camera. The specimen excitation signal, the LED strobe flashes, and the camera exposure have to be exactly synchronized. The timing diagram of the PMA synchronization, using an example of two camera shots taken at two different phases of the periodic specimen excitation, is shown in Figure 2.3; in this example, two LED flashes are used within the exposure time of the digital camera. The video stroboscopy technique subsamples a high-frequency periodic motion, reducing it to a slow motion that can be captured by a normal camera.

Figure 2.3: Timing diagram of the PMA synchronization signal, adapted from [32].

Stroboscopic video microscopy for in-plane motion detection utilizes a special type of stroboscopic technique: short light pulses, synchronized with the object motion, capture its position at precise phase angles. The duration of the pulses is chosen to be as small as possible (limited by the camera recording performance), so that the microstructure is quasi-stationary during a light pulse. By shifting the timing of these pulses in phase-angle increments, the motion of a moving object can be subsampled over several high-frequency periods and reconstructed at a much lower frequency. This technique is illustrated in Figure 2.4: the solid line represents the original signal, the black points indicate the sampling points obtained by shifting the pulse timing in phase-angle increments, and the dashed line shows the reconstructed signal.
The reconstructed signal is a lower-frequency signal with the same shape as the original. In other words, the video sequence recorded by the stroboscopic technique is a slower-motion video sequence with the same moving pattern as the original movement of the device under test.

Figure 2.4: The stroboscopic technique.

This technique is superior to common high-speed video stroboscopy systems because the flash duration is adapted to the actual vibration frequency; the image quality is therefore independent of the frame rate of the camera. This technology guarantees a high degree of measurement accuracy and visual real-time analysis in live mode. More details about the Stroboscopic Machine Vision of the Polytec PMA can be found in [32].

2.2.3 Planar Motion Analyzer

The Polytec Planar Motion Analyzer (PMA) is one of the techniques integrated into the MSA, and is used for measuring the in-plane vibrations of microstructures. A brief introduction to the working principles of the PMA is presented in [30].

The PMA uses laser vibration measurement for imaging, and also measures the lateral resonant frequency and sensor displacements.
Its working principle is based on video stroboscopy under the microscope, as explained previously.

The major parts of the PMA system include a strobe controller, a digital camera, a high-performance imaging system, a stroboscopic illumination source, a generator/data-acquisition board, an external illumination source for the microscope, two probe tips, a digital oscilloscope, a synchronized signal generator, and a host computer running the PMA operating software and an illumination control software. The system provides a high resolution of 2.0 nm for in-plane vibration amplitudes, a frequency range from 0.1 Hz to 1 MHz, and a high resolution of the recorded video frames (1030 x 1300 pixels).

Figure 2.5: Measurement procedure using the PMA, adapted from [30].

The measurement procedure using the PMA is presented in Figure 2.5. The video frames of the motion are analyzed using a proprietary built-in measurement algorithm that extracts displacements from the recorded succession of frames. Measurements are based on monitoring the relative in-plane position (X and Y coordinates) of a selected feature from one recorded frame to another. Finally, the system displays the tracked X and Y displacements versus time or frequency. The image processing techniques involve the definition of the search pattern and search region, the attainment of sub-pixel resolution, and image correlation; the details are illustrated in [32].

Figure 2.6 shows an example of measuring the in-plane motion of a microdevice using the Polytec PMA. A region of interest (ROI) and an object of interest (OOI) are each selected with a rectangle, and the in-plane position of the OOI is then tracked during the movement of the microdevice. Figure 2.6a displays the X and Y displacements versus time at a particular excitation frequency, and Figure 2.6b presents the X and Y displacement magnitudes versus frequency over the entire excitation frequency range.
Based on the capability of monitoring the microdevice and tracking the selected OOI, the Polytec system can perform a series of analyses on the in-plane motion of the microdevice.

Figure 2.6: Measuring in-plane motion of a microdevice using the Polytec PMA. (a) The X and Y displacements versus time. (b) The X and Y magnitude displacements versus frequency.

2.3 Microsystems Integration Platform FPGA Imaging System

The Microsystems Integration Platform (MIP) is a platform that bridges the gap between algorithmic/architectural exploration and the stimulus and measurement of sensors and actuators. A general introduction to MIP [40] is given in this section.

MIP is a modular system that includes a PXI base module (PXI controller and a low-noise chassis), the MEMS and microfluidics imaging module, and related PXI boards for signal acquisition and processing. The imaging module includes a color camera, an optical system for microscopic visualization, motorized zoom, illuminator, microscope stand, camera cables, zoom and lighting interface card, and control software. The imaging system can be configured for FPGA-based image processing, and additionally includes an FPGA video adapter connected to an NI FlexRIO FPGA card on the PXI chassis, plus an FPGA image processing library and reference design.

The MIP imaging system enables several types of research in the microfluidics and MEMS fields, such as a MEMS strobe test system based on real-time video/imaging processing.

The software resources of the MIP FPGA-based imaging system environment consist of: NI LabVIEW with the FPGA module, CMC CADpass to connect to CMC's LabVIEW license server, and the Adsys FPGA imaging processing library and reference design. The hardware components include: the MIP PXI chassis with a high-speed embedded controller and NI FlexRIO FPGA card; the Adsys imaging system including camera, microscopic lens system, motorized zoom, illuminator, microscope stand, FPGA video adapter, and camera cables; and the motor drive device, lighting supply device, and cables.
The FPGA-based imaging system and its components are shown in Figure 2.7. The front panel of the reference design is shown in Figure 2.8.

Figure 2.7: MIP FPGA-based imaging system and its components.

Figure 2.8: Front panel of the reference design.

After the MIP FPGA-based imaging system is assembled and properly set up, a MEMS device can be observed under the lens of the system. An example is shown in Figure 2.9 for demonstration purposes.

Figure 2.9: MEMS device under the imaging system.

Using the reference design with the MIP FPGA-based imaging system as a starting point, imaging processing in microfluidics and MEMS applications can be conducted, including but not limited to: bio-particle tracking, real-time bio-cell analysis, and MEMS strobe testing based on real-time video/imaging processing.

In our thesis work, a software module with an easy-to-use LabVIEW GUI is designed and implemented for the measurement and analysis of in-plane dynamics of microdevices. Our LabVIEW GUI can potentially be combined with the reference design as well as the MIP FPGA-based imaging system in future work. In this way, we can obtain a cost-effective system for the visualization, measurement and analysis of in-plane dynamics of microdevices.

2.4 Software Techniques for Object Tracking

Object tracking is the task of reconstructing the trajectory of an object over time, from its position in successive frames of a video.

A taxonomy of tracking methods is illustrated in Figure 2.10, adapted from [41]. Here we briefly introduce the main tracking categories.

Figure 2.10: Taxonomy of tracking methods, adapted from [41].

(1) Point Tracking. Detected objects are represented by points, and the association of the points is based on the previous object state, which can include object position and motion. This approach requires an external mechanism to detect the objects in every frame.
Point tracking includes deterministic methods such as the Greedy Optimal Assignment (GOA) tracker [39], and statistical methods such as the Kalman filter [9] and Probabilistic Multiple Hypothesis Tracking (PMHT) [38].

(2) Kernel Tracking. Kernel refers to the object shape and appearance. Objects are tracked by determining the motion of the kernel in consecutive frames. This motion is usually in the form of a parametric transformation such as translation, rotation, or an affine transformation. Kernel tracking includes template and density based appearance models such as Mean-shift [11] and Kanade-Lucas-Tomasi (KLT) [37], and multi-view appearance models such as Eigentracking [6] and Support Vector Machine (SVM) tracking [3].

(3) Silhouette Tracking. Tracking is performed by estimating the object region in each frame. These methods use the information encoded inside the object region. This information can be in the form of appearance density and shape models, which are usually in the form of edge maps. Given the object models, tracking is performed by either shape matching or contour evolution. Shape matching methods can be based on the Hausdorff distance [16] and the Hough transform [36]. Contour evolution methods include state space models [17] and heuristic methods [35].

In [27], tracking methods are categorized into the following: kernel-based tracking, active curve density tracking, tracking by region-based matching, and tracking in the spatio-temporal domain. Kernel-based tracking uses a feature description of the target, an objective function serving the purpose of positioning the target from one instant of observation to the next, and the minimization of this objective function. In active curve density tracking, the target region is no longer constrained to have a fixed shape, and tracking is done by an active curve which moves to coincide with the target boundary.
These two methods use a model feature distribution to pursue the target from one instant of observation to the next. The region-based matching scheme [24] and its extensions [25], [5] do not use a target model learned a priori, but instead take the image of the target in the previous frame as the reference to match in the frame of the current instant. Tracking can also be viewed as motion detection in the spatio-temporal domain: instead of following the target from one instant of observation to the next, the observations in a time interval are all taken together, and the spatio-temporal region corresponding to the foreground of moving objects is detected.

In [20], there are discussions about methods that represent the objects by geometric shapes and estimate their motion between consecutive frames, i.e. the so-called frame-to-frame tracking. Template tracking is the most straightforward approach in this case. The object is described by a target template, and the motion is defined as a transformation that minimizes the mismatch between the target template and the candidate patch. Template tracking can be realized as either static [12] (when the target template does not change) or adaptive [23], [37] (when the target template is extracted from the previous frame). Methods that combine static and adaptive template tracking have been proposed in [26], [14], [33], and methods that recognize "reliable" parts of the template have been proposed in [19], [2].

The methods reviewed above are proposed for general object tracking in video sequences. Our thesis focuses on monitoring in-plane dynamics at micro-scale. Therefore, some of these techniques can be modified and implemented for the purpose of object tracking at micro-scale, and more details are presented in Chapter 3.

2.5 Summary

The concept of the dynamic behavior of microdevices was introduced first.
Then the hardware facilities for monitoring in-plane dynamics of microdevices were presented: the Polytec PMA and the MIP FPGA-based imaging system, with respect to their setup components, basic working principles, research applications and so on. The software module with a LabVIEW GUI implemented in our thesis work can potentially be interfaced with the MIP FPGA-based imaging system in future work, to achieve a new and cost-effective system with similar functionalities to the Polytec PMA system. Finally, we have reviewed some software techniques for object tracking in consecutive images, and have proposed to make use of them for object tracking at micro-scale.

Chapter 3

Object Tracking from Optical Recordings of Micro-displacements

This chapter presents the general system architecture for object position tracking at micro-scale, and also gives a brief introduction to some additional functions implemented for the system. Then several algorithms for object tracking at micro-scale are introduced. Among these algorithms, a new technique, called position-weighted template matching with parabola fitting interpolation, is proposed, explained and implemented.

3.1 System Architecture

A software module with a LabVIEW GUI has been designed and implemented in order to perform object position tracking, given a video of the in-plane motion of microstructures. The recorded video sequences in our thesis work were collected from the Polytec PMA. The front panel of the basic LabVIEW GUI for object position tracking is shown in Figure 3.1. The general system architecture is shown in Figure 3.2.

Figure 3.1: The front panel of the LabVIEW GUI for object position tracking.

Firstly, a recorded video of the in-plane motion of a MEMS structure is chosen and loaded into the software module. Secondly, the user needs to define a region of interest (ROI). Thirdly, two reference points are selected for dimensional calibration purposes.
Fourthly, relevant data are requested from the user in order to determine the length and time scaling factors, thus enabling the display in real-life units. Fifthly, we select an object of interest (OOI), whose position will be tracked in the next step. Finally, the tracking process begins and the results are displayed. More details about the LabVIEW GUI design can be found in Appendix A.1.

Figure 3.2: Object position tracking procedure in the designed LabVIEW GUI.

3.2 Additional Functions

In addition to the basic system architecture described in Section 3.1, several additional functions have also been implemented to enhance the functionality and the flexibility of the software module.

3.2.1 Multiple OOIs and Relative Movement

Some in-plane microstructures contain several parts that can move in correlated patterns. Therefore, it is necessary to monitor multiple OOIs and to calculate the relative movement between different OOIs (a feature not present in the commercial Polytec PMA module). The basic procedure is as follows: firstly, two OOIs are selected in the previously determined ROI; secondly, the object position tracking algorithm is performed on both OOIs simultaneously; finally, the difference of the positions of the two OOIs in each frame is calculated and displayed, in both the X and Y directions. The implementation details can be found in Appendix A.

3.2.2 Rotation Angle Detection

Planar microstructures may have parts that rotate during the movement. We have implemented a supplementary function for determining not only the rigid body translation, but also the in-plane rotation of the selected OOI from one frame to another. The OOI in the first frame is regarded as the reference, and the rotation angles of the OOI in the subsequent frames with respect to the reference are detected and displayed versus time in a graph. More details of the LabVIEW implementation can be found in Appendix A.
3.3 Algorithms

The core of implementing object position tracking in our software module is the choice of the algorithm. Here we introduce some common algorithms for object tracking, and also propose a newly designed technique based on an existing method. This new technique is called position-weighted template matching.

3.3.1 Mean Shift

Mean shift is a non-parametric feature-space analysis technique for locating the maxima of a density function given discrete data sampled from that function; it is a so-called mode-seeking algorithm [10]. It was originally presented in 1975 by Fukunaga and Hostetler [15].

General Procedures

Mean shift is an iterative technique, and the general procedure is as follows:

(1) Start with an initial estimate x.

(2) Let a kernel function K(xi − x) be given, which determines the weight of nearby points for re-estimation of the mean.

(3) The weighted mean of the density in the window determined by K can be expressed as

m(x) = Σ_{xi ∈ N(x)} K(xi − x) · xi / Σ_{xi ∈ N(x)} K(xi − x)   (3.1)

where N(x) is the neighborhood of x, a set of points for which K(xi − x) ≠ 0.

(4) The mean shift algorithm now sets x ← m(x), and repeats the estimation process until m(x) converges.

In other words, the steps of the mean shift algorithm can be summarized as follows:

(1) Fix a window around each data point.

(2) Compute the mean of the data within the window.

(3) Shift the window to the mean and repeat until convergence.

Applications

Mean shift is used in application domains in computer vision and image processing. Common applications include clustering, tracking and smoothing.

The mean shift algorithm can be utilized for visual tracking. The most basic such method creates a confidence map for the new image, based on the color histogram of the object of interest in the previous image, and uses the mean shift algorithm to locate the peak of the confidence map near the object's old position.
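As an illustration of the iteration above, a minimal one-dimensional sketch with a flat (uniform) kernel is shown below. The function name and sample data are hypothetical, written in pure Python for clarity; the tracking in our software module relies on a LabVIEW built-in VI rather than hand-rolled code.

```python
# Minimal 1-D mean-shift sketch: a flat kernel of fixed radius plays the
# role of K, so the weighted mean reduces to a plain average over the window.

def mean_shift_1d(data, x0, radius=1.0, tol=1e-6, max_iter=100):
    """Iterate x <- mean of samples within `radius` of x until convergence."""
    x = x0
    for _ in range(max_iter):
        window = [p for p in data if abs(p - x) <= radius]
        if not window:           # empty neighborhood: stop
            break
        m = sum(window) / len(window)
        if abs(m - x) < tol:     # converged to a local density mode
            return m
        x = m
    return x

# Samples clustered around 2.0, plus a second distant cluster.
samples = [1.8, 1.9, 2.0, 2.1, 2.2, 7.0, 7.1]
mode = mean_shift_1d(samples, x0=1.5)
```

Starting from x0 = 1.5, the window repeatedly shifts to the local mean and settles on the mode of the nearby cluster at 2.0; the distant cluster never enters the window.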
The confidence map is a probability density function on the new image, assigning each pixel of the new image a probability, which is the probability of the pixel color occurring in the object in the previous image.

Implementation

A LabVIEW built-in Virtual Instrument (VI) named "IMAQ Track Objects" uses the mean shift algorithm to track a particular object of interest. This VI is used as a key element to implement the LabVIEW GUI for object position tracking with the mean shift algorithm. The design and implementation details can be found in Appendix A.

3.3.2 Template Matching

Template matching is a kernel tracking approach that locates regions of an image that match a predetermined template.

Overview

Template matching locates regions of an image that match a predetermined template. A template/model is first created to represent the pattern of interest. Then the algorithm searches for the model in the frames, calculating a score for each match. The score indicates how closely the model matches the pattern found. Template matching is used in the following three general applications [28]:

(1) Alignment: determine the position and orientation of a known object.

(2) Gauging: measure lengths, diameters, angles, and other critical dimensions. If the measurements fall outside set tolerance levels, the component is rejected.

(3) Inspection: detect simple flaws, such as missing parts or unreadable printing.

Template matching methods can be classified into feature-based and template-based approaches. If the template image has strong features, a feature-based approach can be considered. Since this kind of approach does not consider the entirety of the template image but extracts the salient features for matching, it can be more computationally efficient. For templates without strong features, or when the bulk of the template image constitutes the matching image, a template-based approach may be effective.
Since template-based approaches may require sampling of a large number of points, it might be efficient to consider some ways to reduce the number of sampling points.

Traditional template matching techniques include normalized cross correlation, pyramidal matching, and scale-invariant matching, which are explained in [28]:

(1) Cross correlation. Normalized cross correlation is the most common way to find a template in an image. Because the underlying mechanism for correlation is based on a series of repeated multiplication and addition operations, the correlation process is time consuming.

(2) Pyramidal matching. The computation speed can be improved by reducing the size of the image and the template. One such technique is called pyramidal matching. In this method, both the image and the template are sub-sampled to smaller spatial resolutions. Matching is performed first on the reduced images. When the matching is complete, only areas with a high match need to be considered as matching areas in the original image.

(3) Scale-invariant matching. Normalized cross correlation is a good technique for finding patterns in an image when the patterns in the image are not scaled or rotated. For scale-invariant matching, we must repeat the process of scaling or resizing the template and then perform the correlation operation. This adds a significant amount of computation to the matching process.

Template Matching in OpenCV

OpenCV [8] is an open source computer vision and machine learning software library. It is very useful and powerful in image processing and computer vision tasks. Therefore, it is important to clearly explain the template matching methods in OpenCV here.

There are two primary components necessary for template matching:

(1) Source image/frame (I): the image in which we expect to find a match to the template image. The size of the source image is W × H pixels.

(2) Template image (T): the patch image which will be compared against the source image.
The size of the template image is w × h pixels.

Our goal is to detect the best matching area and its location. To identify the matching area, we have to compare the template image against the source image by sliding it. By sliding, we mean moving the template image one pixel at a time (left to right, top down), as indicated in Figure 3.3. At each location during the sliding process, a score is calculated that represents how good or bad the match at that location is, or in other words, how similar the template image is to that particular area of the source image. For each location of T over I, the score is stored in the result matrix R, the map of comparison results. Each location (x, y) in R contains the match score at that location. The size of the result R is (W − w + 1) × (H − h + 1). For instance, Figure 3.4 is a color mapping of the R matrix obtained by using a metric called TM_SQDIFF_NORMED (Normalized Squared Difference Template Matching). The darkest locations indicate the highest matches. As shown in the figure, the location marked by the circle is the one with the highest match, so that location represents the top-left corner of the sub-image matching the given template.

There are six metrics for template matching in OpenCV, illustrated by the following equations.
Figure 3.3: An example of sliding the template image against the source image.

Figure 3.4: An example of the result matrix of sliding the template image.

Note that x′ = 0, 1, ..., w − 1, and y′ = 0, 1, ..., h − 1.

(1) TM_SQDIFF (Squared Difference Template Matching):

R(x,y) = Σ_{x′,y′} (T(x′,y′) − I(x+x′, y+y′))²   (3.2)

(2) TM_SQDIFF_NORMED (Normalized Squared Difference Template Matching):

R(x,y) = Σ_{x′,y′} (T(x′,y′) − I(x+x′, y+y′))² / √( Σ_{x′,y′} T(x′,y′)² · Σ_{x′,y′} I(x+x′, y+y′)² )   (3.3)

(3) TM_CCORR (Cross Correlation Template Matching):

R(x,y) = Σ_{x′,y′} (T(x′,y′) · I(x+x′, y+y′))   (3.4)

(4) TM_CCORR_NORMED (Normalized Cross Correlation Template Matching):

R(x,y) = Σ_{x′,y′} (T(x′,y′) · I(x+x′, y+y′)) / √( Σ_{x′,y′} T(x′,y′)² · Σ_{x′,y′} I(x+x′, y+y′)² )   (3.5)

(5) TM_CCOEFF (Cross Coefficient Template Matching):

R(x,y) = Σ_{x′,y′} (T′(x′,y′) · I′(x+x′, y+y′))   (3.6)

where

T′(x′,y′) = T(x′,y′) − (1/(w·h)) · Σ_{x′′,y′′} T(x′′,y′′)   (3.7)

I′(x+x′, y+y′) = I(x+x′, y+y′) − (1/(w·h)) · Σ_{x′′,y′′} I(x+x′′, y+y′′)   (3.8)

(6) TM_CCOEFF_NORMED (Normalized Cross Coefficient Template Matching):

R(x,y) = Σ_{x′,y′} (T′(x′,y′) · I′(x+x′, y+y′)) / √( Σ_{x′,y′} T′(x′,y′)² · Σ_{x′,y′} I′(x+x′, y+y′)² )   (3.9)

The steps of the template matching methods in OpenCV can be summarized as follows:

1) Load an input image and an image template.

2) Perform a template matching procedure using any of the six matching metrics implemented in OpenCV.

3) Normalize the output of the matching procedure.

4) Determine the location with the best match. For the Squared Difference Template Matching metric and its normalized version, the lowest value in R indicates the best match. However, for the other four metrics, the higher the value, the better the match.

5) Draw a rectangle around the area corresponding to the highest match.

3.3.3 Novel Template Matching Based on Position History

For microdevices, several similar sub-patterns may exist in a single device, which can lead to multiple high matching areas. For instance, Figure 3.5 shows a part of the structure of a micro-resonator.
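Before turning to the position-history extension, the sliding-window scoring of Equation (3.2) can be sketched in a few lines. This is a pure-Python illustration with hypothetical helper names (the thesis implementation instead calls OpenCV's matchTemplate from C++); for the TM_SQDIFF metric, the lowest score in R marks the best match.

```python
# Sliding-window TM_SQDIFF scoring: for each placement of the w x h template
# T over the W x H source I, accumulate the sum of squared differences.
# The result map R has size (W - w + 1) x (H - h + 1).

def tm_sqdiff(I, T):
    H, W = len(I), len(I[0])
    h, w = len(T), len(T[0])
    R = [[0.0] * (W - w + 1) for _ in range(H - h + 1)]
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            R[y][x] = sum((T[yy][xx] - I[y + yy][x + xx]) ** 2
                          for yy in range(h) for xx in range(w))
    return R

def best_match(R):
    """Location (x, y) of the minimum score in the result map."""
    best = min((R[y][x], x, y)
               for y in range(len(R)) for x in range(len(R[0])))
    return best[1], best[2]

source = [[0, 0, 0, 0],
          [0, 9, 8, 0],
          [0, 7, 6, 0],
          [0, 0, 0, 0]]
template = [[9, 8],
            [7, 6]]
loc = best_match(tm_sqdiff(source, template))   # the exact copy sits at (1, 1)
```

A perfect match gives a score of exactly zero, which is why this metric pairs naturally with the distance map introduced in the next section, where the lowest combined score is also the winner.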
The several parts marked by the rectangles have almost the same patterns. If one of them is selected as the template image, it is very probable that any other part will be identified with an identical matching score. This will generate an R matrix with multiple template matching positions, leading to potential discontinuous "jumps" in the trajectory of the tracked object. The selected match in such a case is very sensitive to the noise present in the image/frame. For instance, Figure 3.6 shows the vastly fluctuating Y displacement of the OOI when the traditional template matching method is used, while the OOI is actually stationary in the Y direction during the whole process.

Figure 3.5: A part of the structure of a micro-resonator.

Figure 3.6: Y displacement with the template matching method.

In order to overcome this typical issue in microdevices, we propose a new tracking algorithm, called position-weighted template matching. This method is based on the traditional template matching algorithm, but uses the extra information of the position of the object in the previous frame(s).

As explained earlier, if we have a source image I of size W × H and a template image T of size w × h, then we obtain a result matrix R of size (W − w + 1) × (H − h + 1). The result matrix R is the map of comparison results. Each location (x, y) in R contains the match score for that location, representing how good or bad the match at that location is.

In order to make use of the position information of the matched object of interest in the previous frame, we introduce another matrix D called the distance map, indicating the distance from the object position in the previous frame to the potential object position in the present frame. The size of D is the same as that of R, i.e. (W − w + 1) × (H − h + 1). Assume that in the previous frame, the matched object of interest is located at the position (m, n).
For the current frame, the distance matrix D can be determined using the following equation:

D(x,y) = √((x − m)² + (y − n)²)   (3.10)

where x = 0, 1, ..., W − w, and y = 0, 1, ..., H − h. Each location (x, y) in D contains the distance score at that location, representing how close that location is to the object of interest in the previous frame. The lower the score, the closer that location is to the previous-frame matched location of the object.

Once we compute the R and D score matrices, we can make use of both types of information to decide the matching position. For this, we combine the normalized versions of R and D, potentially with different weights, into a global score matrix M:

M = w1 · R̄ + w2 · D̄   (3.11)

where w1 and w2 are two weighting factors satisfying

w1 + w2 = 1   (3.12)

and R̄ and D̄ are the two normalized score matrices with coefficients ranging from 0 to 1.

Because in the D matrix the lowest score corresponds to the matching position, we choose a metric for the R matrix having similar characteristics. This suggests that we use either the Squared Difference Template Matching or its normalized version as the metric function, depending on the illumination conditions of the recorded video frames. If the illumination conditions are the same, the Squared Difference Template Matching should be used as the metric for a lower computational cost. However, the normalized version must be chosen as the metric when the illumination conditions differ between the video frames.

The weighting factors w1 and w2 decide the relative importance of the time history compared to the position in the present frame. In our case, we have experimentally observed that a w1 value in the range of 0.40 - 0.95 is enough to provide a smooth trajectory. If the image noise increases, it is expected that w2 should increase in value.

Another issue worth mentioning is that, as indicated in Equation 3.11, the matrix D̄ always adds a bias to the normalized matching result matrix R̄, unless w2 equals 0.
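Putting Equations (3.10)-(3.12) together, the position-weighted combination can be sketched as follows. This is a pure-Python illustration with hypothetical names; the thesis version is implemented in C++ with OpenCV. The toy R matrix below has two tied best scores, and the previous-frame location breaks the tie in favour of the nearby candidate, avoiding a trajectory "jump".

```python
import math

def normalize(M):
    """Rescale all entries of a matrix to the range [0, 1]."""
    lo = min(min(row) for row in M)
    hi = max(max(row) for row in M)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in M]

def position_weighted(R, prev, w1=0.7):
    m, n = prev                  # previous-frame match location (m, n)
    w2 = 1.0 - w1                # weights satisfy w1 + w2 = 1
    # Distance map of Equation (3.10), same size as R.
    D = [[math.hypot(x - m, y - n) for x in range(len(R[0]))]
         for y in range(len(R))]
    Rb, Db = normalize(R), normalize(D)
    # Global score of Equation (3.11); lowest score wins (SQDIFF-style metric).
    M = [[w1 * Rb[y][x] + w2 * Db[y][x] for x in range(len(R[0]))]
         for y in range(len(R))]
    return min(((M[y][x], (x, y)) for y in range(len(M))
                for x in range(len(M[0]))))[1]

# Tied template scores at (0, 0) and (3, 0); the previous location (3, 1)
# pulls the decision toward the nearby candidate at (3, 0).
R = [[0.0, 5.0, 5.0, 0.0],
     [5.0, 5.0, 5.0, 5.0]]
loc = position_weighted(R, prev=(3, 1))
```

With plain template matching the two zero-score candidates are indistinguishable and image noise decides between them; the distance term makes the choice deterministic and smooth.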
One possible solution to reduce the influence of this bias is to make a location prediction based on several previous frames, and then determine the matrix D̄ based on the predicted location.

In order to better illustrate the procedure of the position-weighted template matching algorithm, one frame of the video stroboscopy movie is shown in Figure 3.7a, as the current frame. Figure 3.7b displays the previous frame, with the best match located at the upper-left corner of the rectangle. The color mapping of the R̄ matrix, obtained from the current frame shown in Figure 3.7a by using the Normalized Squared Difference Template Matching metric, is shown in Figure 3.8a, in which there are several identical scores for the matching positions. Figure 3.8b is the normalized distance map D̄, based on the previous-frame matching position in Figure 3.7b. Figure 3.8c is the color mapping of the global score matrix M, with the unique best match marked by the rectangle.

To summarize, the matrix M is the global score matrix of the position-weighted template matching method, and can be used to locate the match that is both similar to the template and close in position to the previous-frame match. Basically, the lowest value indicates the desired location of the matching area. In this way, the typical trajectory "jump" issue in object tracking for in-plane microdevices can be effectively resolved by this new technique.

Figure 3.7: Frames of the video stroboscopy movie. (a) The current frame. (b) The previous frame.

3.3.4 Position-weighted Template Matching with Parabola Fitting Interpolation

The algorithm introduced earlier allows us to estimate the trajectory of a selected object only in a quantized space determined by the size of the pixels. In order to obtain a smooth trajectory, closer to the physical motion, we need to be able to reduce this quantization noise. This provides the motivation for estimating the position with sub-pixel resolution.
Parabola fitting is a common technique for sub-pixel estimation of local extrema [4].

Motivation

As mentioned earlier in the discussion of the template matching methods in OpenCV, to identify the matched area we have to compare the template image against the source image by sliding it one pixel at a time (left to right, top down). Therefore, the template matching algorithms in OpenCV achieve only pixel-level precision.

Figure 3.8: The color mappings of the matrices for the algorithm. (a) The color mapping of the R̄ matrix. (b) The normalized distance map D̄. (c) The color mapping of the global score matrix M.

As obtained from the position-weighted template matching method, the matrix M also achieves only pixel-level precision. This means that when we locate the best match at the position (x, y), x and y are both integers. In this case, when we characterize the position of the tracked object versus time, the results may not be accurate or smooth enough.

Therefore, we consider applying an interpolation technique to the results obtained from the position-weighted template matching. The position-weighted template matching method finds the pixel-level location of the global minimum, while the goal of interpolation is to get the sub-pixel location of the local extremum. One common and effective technique for this is parabola fitting. We have enhanced the position-weighted template matching method with sub-pixel resolution by implementing the parabola fitting procedure.

Concepts

The basic idea of parabola fitting is illustrated in Figure 3.9. Figure 3.9a contains the pixel-level values indicating the position of the extremum; the goal is to find the sub-pixel values representing the more precise position of the extremum by modeling a smooth surface, as indicated in Figure 3.9b.
In parabola fitting, the shape of the surface is generated by two independent orthogonal parabolic curves; the enhanced location of the peak is computed by independently fitting two one-dimensional quadratic functions and computing the location of their peaks [13].

Figure 3.9: The simplified model of parabola fitting. (a) Pixel-level values indicating the position of the extremum. (b) Sub-pixel values indicating the precise position of the extremum.

The theory of parabola fitting [4] is explained here in detail. The simplified model is illustrated in Figure 3.10. A pixel-level extremum with a value of p0 is found at position 0. The pixel at 1 has a value of p+, and that at −1 has a value of p−. The "actual" extremum is located at position x. The interpolation of the quadratic curve through p−, p0, and p+ results in a more accurate estimate of the local extremum at x, with sub-pixel resolution.

Figure 3.10: The simplified model of parabola fitting.

The parabola function is expressed as

y = ax² + bx + c   (3.13)

This parameterized curve is then solved to find the position of the local extremum by differentiating and setting the derivative to zero:

dy/dx = 2ax + b = 0   (3.14)

which leads to

x = −b / (2a)   (3.15)

Substituting the pixel values for each of the three known data points into Equation (3.13) gives

p− = a − b + c   (3.16)

p0 = c   (3.17)

p+ = a + b + c   (3.18)

which, when solved and substituted into Equation (3.15), results in the estimate of the location of the "actual" extreme point:

x = (p+ − p−) / (4·p0 − 2·(p+ + p−))   (3.19)

In the analysis above, no assumption is made as to whether the center point is a local minimum or maximum.
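The estimate of Equation (3.19) reduces to a one-line function. In the pure-Python sketch below (function and variable names are hypothetical), sampling a known parabola at −1, 0 and 1 recovers its vertex exactly:

```python
def parabola_offset(p_minus, p0, p_plus):
    # Equation (3.19): sub-pixel offset of the vertex from the center sample.
    return (p_plus - p_minus) / (4.0 * p0 - 2.0 * (p_plus + p_minus))

# y = (x - 0.3)^2 sampled at x = -1, 0, 1; the true vertex sits at x = 0.3.
f = lambda x: (x - 0.3) ** 2
offset = parabola_offset(f(-1), f(0), f(1))
```

In practice, p0 is the pixel-level extremum found in the M matrix and p− and p+ are its immediate neighbours along one axis; the same one-dimensional fit is applied once per direction to obtain the sub-pixel (x, y) location.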
Therefore the analysis can be applied in either situation.

After applying the procedure above to the X and Y directions independently, the estimate of the location of the "actual" extreme point (x, y) can be determined.

Implementation

LabVIEW is chosen as the platform to implement our software module with an easy-to-use GUI, to be potentially interfaced with a low-cost hardware module for image/video acquisition in the future. Our proposed new algorithm is implemented in Microsoft Visual Studio and incorporated into LabVIEW through a dynamic-link library (DLL).

The Call Library Function Node in LabVIEW can call a DLL, which is an effective way to incorporate OpenCV and C/C++ programming into LabVIEW. The Call Library Function Node serves as a bridge between Microsoft Visual Studio and LabVIEW. We first implemented the proposed new algorithm, position-weighted template matching with parabola fitting interpolation, using C++ with OpenCV in Microsoft Visual Studio; the detailed code can be found in Appendix B.1. We then created a DLL for the algorithm using Microsoft Visual Studio. Finally, the DLL was imported into LabVIEW through the Call Library Function Node.

The Call Library Function Node is the core of the implementation of the GUI, since it contains the novel algorithm that we have proposed. However, the design of the GUI also includes other parts that are integral to performing the tracking task. The basic implementation of the LabVIEW GUI can be found in Appendix A.1.

It is worth mentioning that the generated DLL is not limited to being used in LabVIEW. A DLL is a file that can be utilized in other software systems such as MATLAB. This means that the DLL we have created can have a broader range of applications beyond what has been discussed in this thesis.

3.4 Summary

First, the basic system architecture for object position tracking at micro-scale is given.
Additional functions such as tracking of multiple OOIs and rotation angle detection are also briefly explained. Then several algorithms for object tracking are presented and explained in detail.

The mean shift algorithm is introduced, including its procedure, applications and implementation. The template matching algorithm is then introduced, along with its basic concepts. Based on traditional template matching, we have proposed a new algorithm called position-weighted template matching. The motivation and concepts of this new algorithm are explained in detail. This algorithm is specifically useful for the task of object tracking in microdevices, where several similar sub-patterns may exist in a single structure. We have also improved the precision of this algorithm by using the parabola fitting interpolation technique. The newly proposed algorithm has been realized as a software module with a LabVIEW GUI, with the incorporation of OpenCV and C++ programming.

Chapter 4

System Identification

This chapter introduces the basic concepts of system identification. Using this methodology, we compute the electro-mechanical transfer function of a fabricated MEMS resonator, using MATLAB numeric curve fitting capabilities.

4.1 Basic Concepts and General Procedures

System identification is an iterative process that involves building mathematical models of a dynamic system based on a set of measured stimulus and response data samples.

The system identification procedure requires collecting experimental data regarding the system behavior. In the case of parametric system identification, a system model is also used; the values of the parameters in the model are chosen so that it provides the best match with the measured data.

The basic concepts and general procedures of system identification are covered in [29]. Raw data are first acquired from a real-world system. Then the data are formatted and processed as necessary.
Finally, a mathematical algorithm is used to identify a model of the system, which can then be used to analyze the dynamic characteristics of the system and to simulate its response over time. Verification and validation are sometimes necessary to demonstrate the correctness and effectiveness of the model.

4.1.1 Data Acquisition and Preprocessing

The first step in system identification is data acquisition. Data can be acquired from data acquisition hardware and software or from a pre-stored file. After acquisition, the raw data can be preprocessed if necessary. Common preprocessing methods include removing outliers, removing offsets and trends, filtering, and downsampling.

During data acquisition, the choice of stimulus signal plays an important role in the observed system behavior and the accuracy of the estimated model. Even though the system under test often limits the choice of stimulus signals, the input signal should ideally exhibit certain characteristics so that the measured data provide the information needed for developing an accurate model. Common stimulus signals include filtered Gaussian white noise, random binary signals, pseudo-random binary sequences, and chirp waveforms or swept sine waves.

4.1.2 Nonparametric and Parametric Model Estimation Methods

The two most common techniques for estimating models of linear time-invariant systems are nonparametric estimation and parametric estimation. Nonparametric model estimation methods are simpler and more efficient, but often less accurate, than parametric methods. Compared to nonparametric models, parametric models provide more insight into the physics of the system and offer a more compact model structure.

The two common ways of estimating a nonparametric model are the impulse response and the frequency response.
The impulse response reveals the time-domain properties of the system, such as time delay and damping, while the frequency response reveals the frequency-domain properties, such as the natural frequency of a dynamic system.

Parametric models describe systems in terms of differential equations and transfer functions. There are two categories of parametric models: polynomial and state-space. The polynomial models include AR, ARX, ARMAX, output-error, Box-Jenkins, and general-linear models. State-space models provide a more complete representation of the system and are often preferable to polynomial models.

Table 4.1: Parameters of the micro-resonator for system identification.

Parameter    Value
m            5.0551×10−9 kg
k            8.99 N/m
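The values in Table 4.1 imply a theoretical undamped natural frequency f0 = √(k/m)/(2π), which is worth computing as a plausibility check before the identification. A minimal sketch (the function name here is ours, not from the thesis code):

```cpp
#include <cmath>

// Undamped natural frequency in Hz from proof mass m [kg] and
// spring constant k [N/m]: f0 = sqrt(k/m) / (2*pi).
double natural_frequency_hz(double m, double k) {
    const double PI = 3.14159265358979323846;
    return std::sqrt(k / m) / (2.0 * PI);
}

// With the Table 4.1 values, natural_frequency_hz(5.0551e-9, 8.99)
// evaluates to roughly 6.7 kHz.
```

The result of roughly 6.7 kHz is in the same range as the resonant frequency of about 7 kHz identified experimentally in Chapter 5, which suggests the tabulated design values are reasonable.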
We can first develop the basic transferfunction with undetermined parameters based on the working principles of the sys-tem, and then determine the unknown parameters by substituting the experimentaldata points.4.2.1 Transfer Function DeductionThe microstructure used for system identification in this thesis work is shown inFigure 4.1, with its theoretically computed parameters m and k listed in Table 4.1.The operation of the micro-resonator can be described by a simplified 2nd-orderdynamics equation:md2xdt2+bdxdt+ kx = ma(t) (4.1)40Figure 4.1: The micro-resonator for system identification.where m is the proof mass of the device, k is the spring constant, b is the mechanicalloss coefficient, x denotes the displacement of the microstructure, and a(t) repre-sents the acceleration corresponding to the electrostatic force that is generated bythe excitation signal. In our thesis work, the MEMS structure was actuated byan excitation signal provided by Polytec with different frequencies but a constantamplitude.Dividing by m on both sides, we getd2xdt2+bmdxdt+kmx = a(t) (4.2)41which is equivalent tod2xdt2+ω0Qdxdt+ω20 x = a(t) (4.3)where ω0 =√km is the natural resonance frequency, and Q =√kmb is the qualityfactor of the system. Transformed into the Laplace domain, the following equationis obtained:s2X(s)+ω0QsX(s)+ω20 X(s) = A(s) (4.4)By properly arranging the equation, we can finally obtain the transfer function:H(s) =X(s)A(s)=1s2 + ω0Q s+ω20(4.5)In practice, we do not measure the exact value of the electrostatic actuation,but only its shape, so our transfer function is augmented by a scaling factor β :H(s) =X(s)A(s)=βs2 + ω0Q s+ω20(4.6)Through mathematical derivations, two important deductions can be obtainedfrom the transfer function:(1) |X(s)A(s) | reaches the maximum when ω =ωpeak =ω0√1− ς2 (ς is the damp-ing ratio and 0 ≤ ς<1), with the exact location determined by the value of thequality factor. 
In the case that the electrostatic actuation has a constant amplitude and ς is close to 0, |X(s)| reaches its maximum when ω is close to ω0.

(2) When ω = ω0, |X(s)| = (βQ/ω0²)|A(s)|. When the electrostatic actuation has a constant amplitude, |A(s)| can be combined with the scaling factor β. In this case, when ω = ω0, |X(s)| = αQ/ω0², where the scaling factor α = β|A(s)|.

4.2.2 Data Fitting in MATLAB

Based on the transfer function obtained in Equation 4.6, we can build a 2nd-order system model:

y = 1 / (a x² + b x + c)    (4.7)

where y denotes the displacement of the moving object of interest in the MEMS structure, x represents the frequency of the harmonic electrostatic input force, and a, b, c are fitting parameters.

In our case, we have used the dynamics of the structure along the horizontal direction, as determined by the position-weighted template matching algorithm, correlated with the variation in time of the electrical input. The results of the optical measurement procedure were used in MATLAB, where data fitting allowed the determination of the a, b, c parameters.

4.2.3 Transfer Function Estimation from Experimental Data

Now that we have the mathematical model with unknown parameters for the system, experimental data are required to determine these parameters. There are two processing methods for obtaining the data:

(1) Polytec PMA. Polytec PMA records the displacement of the OOI corresponding to a range of excitation frequencies. We can export the data directly for data fitting in MATLAB.

(2) LabVIEW GUI. Given a video characterizing the in-plane dynamics of a microdevice, the software module with the LabVIEW GUI that we have designed and implemented previously can track the OOI and display the displacement of the OOI versus time.
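Either way, the displacement-versus-frequency data gathered for the fit should trace out the magnitude of the second-order model in Equation 4.6. A minimal sketch that evaluates this magnitude on the jω axis (the parameter values used below are illustrative, not the identified ones):

```cpp
#include <cmath>

// Magnitude of the model in Equation 4.6 evaluated at s = j*w:
// |H(jw)| = beta / sqrt((w0^2 - w^2)^2 + (w0*w/Q)^2).
double transfer_magnitude(double w, double w0, double Q, double beta) {
    double re = w0 * w0 - w * w;   // real part of the denominator
    double im = w0 * w / Q;        // imaginary part of the denominator
    return beta / std::sqrt(re * re + im * im);
}

// At w = w0 the expression reduces to beta*Q/w0^2, in agreement with
// result (2) of Section 4.2.1; away from w0 the magnitude falls off.
```

Sweeping this function over the excitation frequency range reproduces the resonance-shaped curves fitted in Chapter 5.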
Since we know the excitation frequency during each period of time, we can obtain the X displacement amplitude of the OOI corresponding to each excitation frequency.

Once we have the mathematical model and the experimental numeric data, we can perform data fitting in MATLAB and obtain the curve that shows the relationship between y and x. The details of the MATLAB data fitting code can be found in Appendix B.2. The unknown parameters in the transfer function can then be determined, and in this way the final transfer function of the system is obtained.

4.3 Summary

In this chapter, the basic concepts of system identification are briefly explained first. Then the general procedures of system identification, including data acquisition and preprocessing, model estimation and validation, are presented in detail. The second part of this chapter focuses on transfer function estimation for the in-plane dynamics of a micro-resonator. We derive the transfer function model step by step, and explain how to perform data fitting in MATLAB using the experimental data in order to finally determine the transfer function.

Chapter 5

Experimental Results and Discussions

This chapter first shows the results of object position tracking using the different algorithms explained in Chapter 3, and compares and discusses these results. Then the results of the system identification of a micro-resonator, as presented in Chapter 4, are given. The transfer function of the micro-resonator system is estimated using two different processing methods, and these results are also compared and discussed.

5.1 Comparison of Algorithms for Object Tracking

As mentioned earlier in Chapter 3, several algorithms for object tracking were introduced and implemented. These methods include: mean shift, template matching, position-weighted template matching, and position-weighted template matching with parabola fitting.
We have implemented an easy-to-use step-by-step procedure with a user-friendly LabVIEW GUI, allowing an inexperienced user to characterize a general MEMS structure.

5.1.1 Experimental Video Data

The main focus of this thesis work lies in the software implementation. The designed software module has not yet been interfaced with any image acquisition hardware equipment. Therefore, a video of the in-plane motion of a microdevice is required to validate and evaluate the performance of our algorithms. Polytec PMA has the capability to extract a video sequence in AVI format as a result of the video stroboscopy analysis.

Figure 5.1: The microstructure for the object tracking experiment.

The experimental data acquired here is a video sequence of the microstructure shown in Figure 5.1. The MEMS structure was actuated by a combined DC (10 V) and AC (2 V amplitude, 9.57 kHz frequency) voltage. Under this excitation, some components of the micro-resonator remained fixed, while other parts moved back and forth in a sinusoidal pattern in the X direction. The actual duration of the recorded in-plane motion of the micro-resonator in this video is 350 µs.

We would like to track an OOI using the techniques that have been discussed, and to compare the results to demonstrate the strength of our newly proposed algorithm.

In this experiment, the area shown in Figure 5.2 is selected as the ROI, and the region marked by a rectangular bounding box is chosen as the OOI. The OOI is tracked, and its position versus time is displayed in graphs for both the X and Y directions.
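As a quick consistency check on this setup, the 350 µs recording spans only a few cycles of the 9.57 kHz excitation. A small sketch of the arithmetic (the function name is illustrative):

```cpp
// Number of excitation periods contained in a recording:
// duration [s] multiplied by excitation frequency [Hz].
double periods_recorded(double duration_s, double freq_hz) {
    return duration_s * freq_hz;
}

// periods_recorded(350e-6, 9570.0) is about 3.35, i.e. the 350 us clip
// spans a little over three cycles of the 9.57 kHz AC excitation.
```

A little over three full periods is enough to display the sinusoidal X motion, and in particular the single period analyzed in Figure 5.3.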
The Polytec PMA is used to measure and analyze the in-plane motion of this micro-resonator; the results for one period of movement are displayed in Figure 5.3, which can be regarded as the gold standard.

Figure 5.2: The ROI and OOI selected for the object tracking experiment.

Figure 5.3: The results of Polytec PMA (for one period).

5.1.2 Mean Shift

The results of the mean shift algorithm are shown in Figure 5.4. The plot of the X position resembles a sinusoidal pattern, but it is not smooth because this method achieves only pixel-level precision.

Figure 5.4: The results of mean shift.

Figure 5.5: The results of the traditional template matching.

Figure 5.6: The results of position-weighted template matching.

5.1.3 Template Matching

The results of the traditional template matching method are shown in Figure 5.5. The plot of the X position again resembles a sinusoidal pattern, but it is not smooth because this method also achieves only pixel-level precision. The plot of the Y position shows distinct "jump" discontinuities, caused by the limitation of the template matching algorithm when using only the R score matrix. These "jumps" arise because several similar patterns exist in the micro-structure.

5.1.4 Position-weighted Template Matching

The results of the position-weighted template matching method are shown in Figure 5.6. The plot of the X position is the same as that obtained from the traditional template matching method. However, the "jump" problem in the Y direction is clearly solved by the position-weighted template matching method.

Figure 5.7: The results of the position-weighted template matching with parabola fitting.

5.1.5 Position-weighted Template Matching with Parabola Fitting

The results of the position-weighted template matching with parabola fitting are shown in Figure 5.7. The X motion dynamics now shows a smooth trajectory, enabled by the sub-pixel interpolation.
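For reference, the sub-pixel step that produces this smoothness is the three-point parabola fit of Chapter 3: a parabola is fit through the similarity scores at the best integer position and its two neighbours, and the vertex of the parabola gives the fractional offset. A minimal sketch of the vertex formula, applied independently in X and Y (the score values below are illustrative, not measured):

```cpp
// Sub-pixel offset of the extremum of a parabola through three equally
// spaced samples: scores at x-1, x and x+1 around the best integer match.
// The returned offset lies in (-0.5, 0.5) relative to the centre sample.
double parabola_offset(double s_left, double s_centre, double s_right) {
    double denom = s_left - 2.0 * s_centre + s_right;
    if (denom == 0.0) return 0.0;  // flat neighbourhood: keep integer position
    return 0.5 * (s_left - s_right) / denom;
}

// Example: sampling (x - 0.3)^2 at x = -1, 0, 1 gives 1.69, 0.09, 0.49,
// and the fit recovers the true minimum offset of 0.3.
```

The same formula applies whether the extremum is a minimum (as with the SQDIFF score used in our implementation) or a maximum.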
The periodic displacement in the Y direction might indicate a small cross-axis sensitivity of the micro-resonator.

5.1.6 Summary

When running the LabVIEW GUI to perform object position tracking, we can observe the whole tracking process in the ROI panel of the GUI. With the traditional template matching method, the rectangular bounding box does not always indicate the correct OOI and may identify another similar object as the OOI. With the mean shift algorithm or the position-weighted template matching algorithm, however, the bounding box always correctly keeps track of the desired OOI and never "jumps" to any other similar region. In terms of precision, the mean shift algorithm and traditional template matching achieve only pixel-level resolution, while sub-pixel resolution can be reached with the parabola fitting interpolation technique.

We have also selected several other sets of ROIs and OOIs and repeated the experimental process, obtaining similar results and conclusions consistent with the experiment described above.

Figure 5.8: The microstructure for the system identification experiment.

5.2 Transfer Function Estimation from Experimental Data

As elaborated in Chapter 4, the transfer function is given by Equation 4.6. The data fitting procedure in MATLAB was used to identify the model parameters a, b, c from Equation 4.7.

In this experiment, we characterize the microstructure shown in Figure 5.8, which is part of the microdevice presented in Figure 4.1. The micro-resonator is excited by a stepped sine wave signal with a frequency range from 3.2 kHz to 11.2 kHz. The DC offset is 10 volts and the AC amplitude is 3 volts. The area shown in Figure 5.9 is selected as the ROI, and the region marked by a rectangular bounding box is chosen as the OOI.

5.2.1 Polytec PMA

Polytec PMA records the displacement of the OOI corresponding to the range of excitation frequencies.
We can export the data directly for data fitting in MATLAB. The experimental numeric data exported from Polytec PMA are presented in Table 5.1. We perform the curve fitting using the method elaborated in Section 4.2.2 of Chapter 4, and the result is shown in Figure 5.10.

Figure 5.9: The ROI and OOI selected for the system identification experiment.

Table 5.1: Polytec PMA Data.

Frequency [kHz]    X Displacement Amplitude [10−7 m]
3.2                1.32
4.2                1.58
5.2                2.11
6.2                3.48
7.2                4.56
8.2                2.84
9.2                1.57
10.2               1.02
11.2               0.717

Figure 5.10: The result of curve fitting from the Polytec PMA data.

From Figure 5.10, we can determine the unknown parameters in the transfer function described in Equation 4.6. The curve fitting indicates the resonant frequency:

f0 = 6.96 kHz    (5.1)

The bandwidth, i.e., the width of the range of frequencies over which the energy is at least half its peak value, is given by

∆f|−3dB = (8.128 − 5.784) kHz = 2.344 kHz    (5.2)

The quality factor is calculated as

Q = f0 / ∆f|−3dB = 2.969    (5.3)

Based on the results derived in Section 4.2.1 of Chapter 4, we have

|X(s)|max = αQ/ω0² = 4.365×10−7    (5.4)

In this way, the scaling factor can be determined as

α = 4.365×10−7 × ω0²/Q = 281.159    (5.5)

5.2.2 LabVIEW

We have used LabVIEW to perform the visual analysis of the video stroboscopy movie, and extracted the horizontal displacement versus time. The videos with excitation frequencies of 3.2, 4.2, 5.2, 6.2, 7.2, 8.2, 9.2, 10.2 and 11.2 kHz were exported from Polytec PMA and concatenated into one single video to be analyzed in LabVIEW. The X displacements corresponding to the frequency range from 3.2 kHz to 11.2 kHz are displayed in a graph in the LabVIEW GUI, and these results are rearranged in Table 5.2. The result of curve fitting in MATLAB is shown in Figure 5.11.
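The parameter arithmetic in Equations 5.3 to 5.5 (and likewise Equations 5.8 to 5.10 below) is straightforward to reproduce. A short check using the Polytec PMA values (the function names are ours, not from the thesis code):

```cpp
// Quality factor from resonant frequency and the -3 dB band edges,
// as in Equation 5.3: Q = f0 / (f_hi - f_lo).
double quality_factor(double f0_hz, double f_lo_hz, double f_hi_hz) {
    return f0_hz / (f_hi_hz - f_lo_hz);
}

// Scaling factor from |X|max = alpha*Q/w0^2 (Equations 5.4 and 5.5),
// with w0 = 2*pi*f0 in rad/s.
double scaling_factor(double x_max, double f0_hz, double Q) {
    const double PI = 3.14159265358979323846;
    double w0 = 2.0 * PI * f0_hz;
    return x_max * w0 * w0 / Q;
}

// With f0 = 6.96 kHz, band edges 5.784 and 8.128 kHz, and
// |X|max = 4.365e-7 m, this gives Q close to 2.969 and alpha close
// to 281, matching Equations 5.3 and 5.5.
```

The same two functions, applied to the LabVIEW fit (f0 = 6.984 kHz, band edges 5.88 and 8.08 kHz), reproduce the values in Equations 5.8 and 5.10.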
Table 5.2: LabVIEW Data.

Frequency [kHz]    X Displacement Amplitude (normalized scale)    X Displacement Amplitude [10−7 m]
3.2                0.33                                           1.029
4.2                0.425                                          1.325
5.2                0.645                                          2.011
6.2                1.05                                           3.274
7.2                1.4                                            4.365
8.2                0.86                                           2.681
9.2                0.46                                           1.434
10.2               0.3                                            0.935
11.2               0.225                                          0.702

The curve fitting indicates the resonant frequency:

f0 = 6.984 kHz    (5.6)

The bandwidth is given as

∆f|−3dB = (8.08 − 5.88) kHz = 2.2 kHz    (5.7)

The quality factor is determined as

Q = f0 / ∆f|−3dB = 3.175    (5.8)

Based on the results derived in Section 4.2.1 of Chapter 4, we get

|X(s)|max = αQ/ω0² = 4.365×10−7    (5.9)

In this way, the scaling factor can be calculated as

α = 4.365×10−7 × ω0²/Q = 264.733    (5.10)

Figure 5.11: The result of curve fitting from the LabVIEW data.

Table 5.3: Comparison of results from the two processing methods.

Parameter    Polytec PMA    LabVIEW
f0 (kHz)     6.96           6.984
Q            2.969          3.175
α            281.159        264.733

5.2.3 Comparison of the Results from the Two Processing Methods

The parameters estimated by the two processing methods are listed in Table 5.3. The values of the transfer function parameters estimated from LabVIEW are quite close to those estimated from Polytec PMA. This demonstrates that our software module performs object position tracking effectively for in-plane microdevices.

5.2.4 Summary

We have characterized the microstructure in a micro-resonator and estimated the parameters of its transfer function using two different processing methods. The values of the parameters estimated with the LabVIEW processing method are very close to those estimated with the Polytec PMA processing procedure, which demonstrates the effectiveness of our software module for object position tracking in in-plane microdevices.

Chapter 6

Conclusions and Future Work

This chapter gives a thesis overview, as well as a future outlook and potential ideas to improve and extend this work.

6.1 Thesis Overview

In this thesis, we have investigated software algorithms for measuring the in-plane dynamics of microdevices.
Based on the traditional template matching method, we have introduced a new technique called the position-weighted template matching algorithm. The motivation and implementation of this enhanced algorithm were explained in detail. This algorithm is useful for the task of object tracking in planar microdevices, where several similar sub-patterns may exist in a single structure. We have also improved the precision of this algorithm by using the parabola fitting interpolation technique. We have used the new software module for estimating the motion of a voltage-actuated micro-resonator, and estimated its transfer function in MATLAB through a data fitting procedure.

The proposed algorithm has been implemented in C++ and compiled into a DLL, interfaced with a custom-designed LabVIEW GUI, which can potentially be connected to low-cost hardware that monitors the in-plane motion of microdevices. Video sequences of the in-plane motion of a micro-resonator were acquired, and several experiments were conducted. The results have demonstrated the effectiveness and strength of our proposed algorithm. The transfer function of the micro-resonator system was identified by data fitting from the LabVIEW tracking results, and the obtained transfer function proved to be quite close to that estimated from the Polytec PMA tracking data.

6.2 Future Work

Some problems have not been explored thoroughly in this thesis and can be further investigated in the future. Future research directions will focus both on improving the software algorithms and on their hardware implementation.

6.2.1 Future Outlook on Hardware Equipment

Our final goal is to obtain a low-cost and modular system for estimating the in-plane dynamics of microstructures. A potential high-performance hardware implementation can use the PXI-based modules introduced in Chapter 2, where the algorithms can be mapped into an FPGA-based hardware implementation. By enhancing the computational efficiency of the system, we may even enable real-time visualization and analysis of the in-plane dynamics of MEMS structures.

6.2.2 Expansion on Other Issues with Software Algorithms

On the software side, the main improvements will target reducing computational complexity, and dealing with more complex motions and deformations of the tracked objects.

Computational Complexity

The position-weighted template matching is based on traditional template matching, which is essentially a detection algorithm. Therefore, our tracking algorithm is essentially detecting the object in each frame, which is computationally expensive. One possible solution to this problem is to combine a tracker with our detector. This would not only reduce the amount of computation, but also improve the effectiveness of tracking. Another potential solution is to use methods such as pyramidal matching and non-uniform sampling [28] to reduce the computational complexity.

Figure 6.1: In-plane deformation of a lateral beam, adapted from [18].

Figure 6.2: MEMS gear-wheel drive (Courtesy of Sandia National Laboratories).

Tracking Objects with Elastic Deformations

Our proposed algorithm focuses on rigid object tracking in microstructures, but it could meaningfully be extended to monitor objects with elastic deformations. For instance, Figure 6.1 shows the in-plane deformation of a lateral beam. It would be worthwhile to monitor and measure the in-plane deformation of such a microdevice, since the measurement could help determine important characteristics of the device, such as, in this case, the in-plane stiffness of the beam.

Rotation Angle Detection

The proposed technique tracks the object of interest and measures the planar translations of the selected object. However, some microdevices contain structures that rotate during the motion, such as the MEMS gear-wheel drive shown in Figure 6.2.
Therefore, it would also be worthwhile to improve the algorithm to enable rotation angle detection during the measurement process.

Bibliography

[1] MSA-500 Micro System Analyzer, 2015. URL → pages 7
[2] A. Adam, E. Rivlin, and I. Shimshoni. Robust fragments-based tracking using the integral histogram. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 1, pages 798–805. IEEE, 2006. → pages 17
[3] S. Avidan. Support vector tracking. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(8):1064–1072, 2004. → pages 16
[4] D. G. Bailey. Sub-pixel estimation of local extrema. In Proceedings of Image and Vision Computing New Zealand, pages 414–419, 2003. → pages 31, 33
[5] M. Ben Salah and A. Mitiche. Model-free, occlusion accommodating active contour tracking. ISRN Artificial Intelligence, 2012, 2012. → pages 17
[6] M. J. Black and A. D. Jepson. EigenTracking: Robust matching and tracking of articulated objects using a view-based representation. International Journal of Computer Vision, 26(1):63–84, 1998. → pages 16
[7] A. Bosseboeuf and S. Petitgrand. Characterization of the static and dynamic behaviour of M(O)EMS by optical techniques: status and trends. Journal of Micromechanics and Microengineering, 13(4):S23, 2003. → pages 2
[8] G. Bradski. The OpenCV library. Dr. Dobb's Journal, 25(11):120–126, 2000. → pages 25
[9] T. J. Broida and R. Chellappa. Estimation of object motion parameters from noisy images. Pattern Analysis and Machine Intelligence, IEEE Transactions on, (1):90–99, 1986. → pages 16
[10] Y. Cheng. Mean shift, mode seeking, and clustering. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 17(8):790–799, 1995. → pages 22
[11] D. Comaniciu and V. Ramesh. Real-time tracking of non-rigid objects using mean shift, July 8 2003. US Patent 6,590,999. → pages 16
[12] D. Comaniciu, V. Ramesh, and P. Meer. Kernel-based object tracking. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 25(5):564–577, 2003.
→ pages 17
[13] M. Debella-Gilo and A. Kääb. Sub-pixel precision image matching for measuring surface displacements on mass movements using normalized cross-correlation. Remote Sensing of Environment, 115(1):130–142, 2011. → pages 33
[14] N. D. Dowson and R. Bowden. Simultaneous modeling and tracking (SMAT) of feature sets. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 2, pages 99–105. IEEE, 2005. → pages 17
[15] K. Fukunaga and L. Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. Information Theory, IEEE Transactions on, 21(1):32–40, 1975. → pages 22
[16] D. P. Huttenlocher, G. Klanderman, W. J. Rucklidge, et al. Comparing images using the Hausdorff distance. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 15(9):850–863, 1993. → pages 16
[17] M. Isard and A. Blake. Condensation: conditional density propagation for visual tracking. International Journal of Computer Vision, 29(1):5–28, 1998. → pages 16
[18] E. Iwase, P.-C. Hui, D. Woolf, A. W. Rodriguez, S. G. Johnson, F. Capasso, and M. Lončar. Control of buckling in large micromembranes using engineered support structures. Journal of Micromechanics and Microengineering, 22(6):065028, 2012. → pages ix, 58
[19] A. D. Jepson, D. J. Fleet, and T. F. El-Maraghi. Robust online appearance models for visual tracking. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 25(10):1296–1311, 2003. → pages 17
[20] Z. Kalal, K. Mikolajczyk, and J. Matas. Tracking-learning-detection. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 34(7):1409–1422, 2012. → pages 17
[21] P. Krehl, S. Engemann, C. Rembe, and E. Hofer. High-speed visualization, a powerful diagnostic tool for microactuators - retrospect and prospect. Microsystem Technologies, 5(3):113–132, 1999. → pages 1
[22] E. M. Lawrence, K. Speller, and D. Yu. Laser Doppler vibrometry for optical MEMS.
In Fifth International Conference on Vibration Measurements by Laser Techniques, pages 80–87. International Society for Optics and Photonics, 2002. → pages 2
[23] B. D. Lucas, T. Kanade, et al. An iterative image registration technique with an application to stereo vision. In IJCAI, volume 81, pages 674–679, 1981. → pages 17
[24] A.-R. Mansouri. Region tracking via level set PDEs without motion computation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 24(7):947–961, 2002. → pages 17
[25] A.-R. Mansouri and A. Mitiche. Region tracking via local statistics and level set PDEs. In Image Processing. 2002. Proceedings. 2002 International Conference on, volume 3, pages 605–608. IEEE, 2002. → pages 17
[26] I. Matthews, T. Ishikawa, and S. Baker. The template update problem. IEEE Transactions on Pattern Analysis & Machine Intelligence, (6):810–815, 2004. → pages 17
[27] A. Mitiche and J. K. Aggarwal. Computer Vision Analysis of Image Motion by Variational Methods. Springer, 2014. → pages 16
[28] IMAQ Vision Concepts Manual. National Instruments, 11500 North Mopac Expressway, Austin, Texas 78759-3504 USA, June 2003. → pages 23, 24, 57
[29] LabVIEW System Identification Toolkit User Manual. National Instruments, 11500 North Mopac Expressway, Austin, Texas 78759-3504 USA, Sept. 2004. → pages 38, 40
[30] A. Ongkodjojo and F. E. Tay. Characterizations of micromachined devices using planar motion analyzer (PMA). In Sensors, 2005 IEEE, pages 4–pp. IEEE, 2005. → pages viii, 2, 10, 11
[31] MSA-500 Micro System Analyzer. Polytec. → pages 7
[32] Software Manual - Planar Motion Analyzer - Software 2.6. Polytec. → pages viii, 9, 10, 11
[33] A. Rahimi, L.-P. Morency, and T. Darrell. Reducing drift in differential tracking. Computer Vision and Image Understanding, 109(2):97–111, 2008. → pages 17
[34] C. Rembe, R. Kant, and R. S. Muller. Optical measurement methods to study dynamic behavior in MEMS. In Lasers in Metrology and Art Conservation, pages 127–137.
International Society for Optics and Photonics, 2001. → pages 1, 2
[35] R. Ronfard. Region-based strategies for active contour models. International Journal of Computer Vision, 13(2):229–251, 1994. → pages 16
[36] K. Sato and J. K. Aggarwal. Temporal spatio-velocity transform and its application to tracking and interaction. Computer Vision and Image Understanding, 96(2):100–128, 2004. → pages 16
[37] J. Shi and C. Tomasi. Good features to track. In Computer Vision and Pattern Recognition, 1994. Proceedings CVPR'94., 1994 IEEE Computer Society Conference on, pages 593–600. IEEE, 1994. → pages 16, 17
[38] R. L. Streit and T. E. Luginbuhl. Maximum likelihood method for probabilistic multihypothesis tracking. In SPIE's International Symposium on Optical Engineering and Photonics in Aerospace Sensing, pages 394–405. International Society for Optics and Photonics, 1994. → pages 16
[39] C. J. Veenman, M. J. Reinders, and E. Backer. Resolving motion correspondence for densely moving points. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(1):54–72, 2001. → pages 16
[40] S. Xu. User Guide: Setting up Microsystems Integration Platform FPGA Imaging System Environment. CMC Microsystems, Building 50, Innovation Park at Queen's University, 945 Princess St., Kingston, Ontario, Canada K7L 3N6, 1 edition, Mar. 2013. → pages 4, 13
[41] A. Yilmaz, O. Javed, and M. Shah. Object tracking: A survey. ACM Computing Surveys (CSUR), 38(4):13, 2006. → pages viii, 15, 16

Appendix A

The LabVIEW GUI Design and Implementation

A.1 Basic LabVIEW GUI Implementation for Object Tracking

The front panel of the basic LabVIEW GUI for object position tracking is shown in Figure A.1.

Figure A.2 shows the block diagram implementation for selecting a video and an ROI. When the button "Select AVI" in the front panel is clicked, the LabVIEW GUI requires the user to choose a video in AVI format.
Then, if the button "Select ROI" is clicked, the GUI displays the first frame of the chosen video and requires the user to choose an ROI that contains an OOI for observation. The tracking is performed only on the ROI instead of the whole area, and therefore computational costs can be considerably reduced.

Figure A.1: The LabVIEW GUI front panel for object position tracking.

Figure A.2: The block diagram implementation for selecting an AVI and an ROI.

Figure A.3 illustrates the block diagram for selecting reference points and determining scaling factors. When the button "Select Two Reference Points" is clicked, the GUI displays the ROI of the first frame of the chosen video and requires the user to select two reference points. The user is then required to enter the real-length distances between the two selected reference points, in both the X and Y directions. Based on the pixel distances and the real-length distances between the two reference points in both directions, the distance scaling factors in both the horizontal and vertical directions can be determined. In this way, the results can be displayed in real-length units.

The total time duration of the chosen video must also be entered. The time scaling factor can be calculated from the total time duration and the total frame count of the video. The video is processed frame by frame, but the results are to be displayed versus time; therefore, this conversion is also necessary.

Figure A.3: The block diagram implementation for reference points.

Figure A.4 presents the block diagram for selecting an OOI, performing the object position tracking, and displaying the tracking process and the tracking results. The GUI displays the tracking process, during which the tracked object is marked by a rectangle. When the tracking process is over, the coordinates of the OOI, in both the X and Y directions, are displayed versus time in graphs.
Note that the algorithm used in Figure A.4 is position-weighted template matching with parabola fitting interpolation.

A.2 LabVIEW GUI Implementation for Additional Functions

A.2.1 Multiple OOIs and Relative Movement

The front panel is basically the same as that shown in Appendix A.1, except that the X and Y displacements are replaced by the relative displacements between two OOIs. The key part of the block diagram implementation is given in Figure A.5.

A.2.2 Rotation Angle Detection

The complete front panel is presented in Figure A.6. The block diagram implementation for the tracking and rotation angle detection part is shown in Figure A.7. The key element of the implementation is the LabVIEW built-in VI called "IMAQ Rotation Detect VI", which determines the rotation angle between a source image and the reference image.

A.3 LabVIEW GUI Implementation for Object Tracking with Mean Shift

The implementation of the panels for selecting an AVI, an ROI and reference points is basically the same as that presented in Appendix A.1. However, the implementation for object tracking is different, due to the use of a different algorithm. The implementation for object tracking with mean shift is shown in Figure A.8.
A LabVIEW built-in VI named “IMAQ Track Objects” is used as a key element to implement the GUI for object position tracking with the mean shift algorithm.

Figure A.4: The block diagram implementation for object tracking. (a) The block diagram for the first frame. (b) The block diagram for the subsequent frames.

Figure A.5: The block diagram implementation for object tracking of multiple OOIs. (a) The block diagram for the first frame. (b) The block diagram for the subsequent frames.

Figure A.6: The front panel for rotation angle detection.

Figure A.7: The block diagram implementation for rotation angle detection.

Figure A.8: The block diagram implementation for object tracking with mean shift. (a) The block diagram for the first frame. (b) The block diagram for the subsequent frames.

Appendix B

Programming Codes

B.1 C++ Code for Position-weighted Template Matching with Parabola Fitting Interpolation

The code listed below was compiled in Microsoft Visual Studio 2012.

// TMK.cpp : Defines the exported functions for the DLL application.
#include "stdafx.h"
#include "opencv2/imgproc/imgproc.hpp"
#include "CV_SubPix.h"

using namespace std;
using namespace cv;

#ifndef FLT_EPSILON
#define FLT_EPSILON 1.192092896e-07F /* smallest such that 1.0+FLT_EPSILON != 1.0 */
#endif

#ifndef ASSERT
#define ASSERT
#endif

extern "C" {
__declspec(dllexport) int MyPatternMatch(uchar *img, uchar *templ, float or, float oc,
    int img_cols, int img_rows, int templ_cols, int templ_rows,
    float *xL, float *xR, float *yT, float *yB);
}

__declspec(dllexport) int MyPatternMatch(uchar *img, uchar *templ, float or, float oc,
    int img_cols, int img_rows, int templ_cols, int templ_rows,
    float *xL, float *xR, float *yT, float *yB)
{
    // store images in Mat type
    Mat src_image(img_rows, img_cols, CV_8U, &img[0]);
    Mat templ_image(templ_rows, templ_cols, CV_8U, &templ[0]);

    // create result matrix
    Mat result;
    int result_cols = src_image.cols - templ_image.cols + 1;
    int result_rows = src_image.rows - templ_image.rows + 1;
    result.create(result_rows, result_cols, CV_32FC1);

    // match and normalize
    matchTemplate(src_image, templ_image, result, CV_TM_SQDIFF_NORMED);
    normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());

    // create distance matrix and normalize
    Mat d;
    d.create(result_rows, result_cols, CV_32FC1);
    for (int i = 0; i < d.rows; i++) {
        for (int j = 0; j < d.cols; j++) {
            float temp = pow((float)i - or, 2) + pow((float)j - oc, 2);
            d.at<float>(i, j) = sqrt((float)temp);
        }
    }
    normalize(d, d, 0, 1, NORM_MINMAX, -1, Mat());

    // create final weighted matrix and normalize
    Mat final;
    addWeighted(result, 0.6, d, 0.4, 0, final, -1);
    normalize(final, final, 0, 1, NORM_MINMAX, -1, Mat());

    // locate the best match
    double minVal, maxVal;
    Point minLoc, maxLoc;
    minMaxLoc(final, &minVal, &maxVal, &minLoc, &maxLoc, Mat());

    // sub-pixel refinement
    Point2d minLocsub;
    minMaxLocSubPix(&minLocsub, final, &minLoc, 0);

    // pass the pointer(s) to the bounding box values and match score
    (*xL) = minLocsub.x;
    (*xR) = minLocsub.x + float(templ_image.cols);
    (*yT) = minLocsub.y;
    (*yB) = minLocsub.y + float(templ_image.rows);
    // (*score) = maxVal;
    return 0;
}

// sub-pixel localization
void SubPixFitParabola(cv::Point2d* Result, cv::Point2d& P1, cv::Point2d& P2, cv::Point2d& P3);

double minMaxLocSubPix(CV_OUT cv::Point2d* SubPixLoc,
    cv::InputArray src,
    CV_IN_OUT cv::Point* LocIn,
    CV_IN_OUT const int Method)
{
    // set default result in case we bail
    SubPixLoc->x = (float)LocIn->x;
    SubPixLoc->y = (float)LocIn->y;

    if (Method == -1) { // disable any changes for testing
        return 0;
    };

    { // parabola math
        Mat HeatMap = src.getMat();

        // Pick some limiting values. It's not expected that the template peak
        // values will span more than 1/16 of the source size. This also limits
        // the processing time used when looking at a blank image.
        Size MaxScan; MaxScan.width = HeatMap.cols >> 4; MaxScan.height = HeatMap.rows >> 4;

        // Use two Points instead of a Rect to prevent having Rect compute
        // right/left values in each loop below.
        Point ScanRectMin;
        Point ScanRectMax;
        ScanRectMin.x = LocIn->x - MaxScan.width;  if (ScanRectMin.x < 0) ScanRectMin.x = 0;
        ScanRectMin.y = LocIn->y - MaxScan.height; if (ScanRectMin.y < 0) ScanRectMin.y = 0;
        ScanRectMax.x = LocIn->x + MaxScan.width;  if (ScanRectMax.x >= HeatMap.cols) ScanRectMax.x = HeatMap.cols - 1;
        ScanRectMax.y = LocIn->y + MaxScan.height; if (ScanRectMax.y >= HeatMap.rows) ScanRectMax.y = HeatMap.rows - 1;

        // smallest change that we can do math on with some meaningful result
        const float FloatValueChange = FLT_EPSILON * 10.0f;

        // Scan to find the area to use. This can get complicated since we may
        // be given a point near any of the edges of the blob we want to use.
        float SrcStartingPoint = HeatMap.at<float>(LocIn->y, LocIn->x);
        Point Center = *LocIn;

        // results
        Point ScanRight;
        Point ScanLeft;
        Point ScanUp;
        Point ScanDown;

        ScanRight = Center;
        while (true) {
            ++ScanRight.x; // no point checking the passed location, so increment first
            if (ScanRight.x > ScanRectMax.x) {
                return 1; // ran out of room to scan
            };
            float Val = HeatMap.at<float>(ScanRight.y, ScanRight.x);
            if (abs(Val - SrcStartingPoint) > FloatValueChange) {
                break;
            };
        };

        ScanLeft = Center;
        while (true) {
            --ScanLeft.x;
            if (ScanLeft.x < ScanRectMin.x) {
                return 1; // ran out of room to scan
            };
            if (abs(HeatMap.at<float>(ScanLeft.y, ScanLeft.x) - SrcStartingPoint) > FloatValueChange) {
                break;
            };
        };

        ScanUp = Center;
        while (true) {
            // assume G cords; the actual direction of up in the image is not
            // important since the math is symmetrical
            ++ScanUp.y;
            if (ScanUp.y > ScanRectMax.y) {
                return 1; // ran out of room to scan
            };
            if (abs(HeatMap.at<float>(ScanUp.y, ScanUp.x) - SrcStartingPoint) > FloatValueChange) {
                break;
            };
        };

        ScanDown = Center;
        while (true) {
            --ScanDown.y;
            if (ScanDown.y < ScanRectMin.y) {
                return 1; // ran out of room to scan
            };
            if (abs(HeatMap.at<float>(ScanDown.y, ScanDown.x) - SrcStartingPoint) > FloatValueChange) {
                break;
            };
        };

        double ErrorVal = 0;

        { // X axis
            Point2d A;
            A.x = ScanLeft.x;                                // the pixel cords
            A.y = HeatMap.at<float>(ScanLeft.y, ScanLeft.x); // the heat map value
            Point2d B; // center
            B.x = Center.x;
            B.y = HeatMap.at<float>(Center.y, Center.x);
            Point2d C;
            C.x = ScanRight.x;
            C.y = HeatMap.at<float>(ScanRight.y, ScanRight.x);

            Point2d Result;
            SubPixFitParabola(&Result, A, B, C);
            // we throw away the y and use the x

            // clip and set error
            if (Result.x < ScanLeft.x) {
                ASSERT(0);
                Result.x = ScanLeft.x;
                ErrorVal = 1;
            };
            if (Result.x > ScanRight.x) {
                ASSERT(0);
                Result.x = ScanRight.x;
                ErrorVal = 1;
            };
            SubPixLoc->x = Result.x;
        }; // X axis

        { // Y axis
            // this time we swap x and y since the parabola is always found in the x
            Point2d A;
            A.x = ScanDown.y;
            A.y = HeatMap.at<float>(ScanDown.y, ScanDown.x);
            Point2d B; // center
            B.x = Center.y;
            B.y = HeatMap.at<float>(Center.y, Center.x);
            Point2d C;
            C.x = ScanUp.y;
            C.y = HeatMap.at<float>(ScanUp.y, ScanUp.x);

            Point2d Result;
            SubPixFitParabola(&Result, A, B, C);
            // we throw away the y and use the x
            Result.y = Result.x;

            // clip and set error
            if (Result.y < ScanDown.y) {
                ASSERT(0);
                Result.y = ScanDown.y;
                ErrorVal = 1;
            };
            if (Result.y > ScanUp.y) {
                ASSERT(0);
                Result.y = ScanUp.y;
                ErrorVal = 1;
            };
            SubPixLoc->y = Result.y;
        }; // Y axis

        return ErrorVal;
    };

    return 0;
};

// parabolic fit
void SubPixFitParabola(cv::Point2d* Result, cv::Point2d& P1, cv::Point2d& P2, cv::Point2d& P3)
{
    Result->x = P2.x; // default in case of an error
    Result->y = P2.y;

    double denom = (P1.x - P2.x) * (P1.x - P3.x) * (P2.x - P3.x); // can't be zero since x is from pixel locations
    double A = (P3.x * (P2.y - P1.y) + P2.x * (P1.y - P3.y) + P1.x * (P3.y - P2.y)) / denom;
    double B = ((P3.x * P3.x) * (P1.y - P2.y) + (P2.x * P2.x) * (P3.y - P1.y) + (P1.x * P1.x) * (P2.y - P3.y)) / denom;
    double C = (P2.x * P3.x * (P2.x - P3.x) * P1.y + P3.x * P1.x * (P3.x - P1.x) * P2.y + P1.x * P2.x * (P1.x - P2.x) * P3.y) / denom;

    // now find the vertex
    double xv = -B / (2 * A);
    double yv = C - (B * B) / (4 * A);
    Result->x = xv;
    Result->y = yv;
};

B.2 MATLAB Code for Data Fitting

%% Data fitting for Polytec PMA numeric data
% enter 9 sample points
x = [3.2, 4.2, 5.2, 6.2, 7.2, 8.2, 9.2, 10.2, 11.2];
y = [1.32, 1.58, 2.11, 3.48, 4.56, 2.84, 1.57, 1.02, 0.717];

% choose the curve fitting model
myfittype = fittype('1/(a*x*x+b*x+c)', ...
    'dependent', {'y'}, 'independent', {'x'}, ...
    'coefficients', {'a', 'b', 'c'})

% data fitting and plot the results
myfit = fit(x', y', myfittype)
figure, plot(myfit, x, y)
xlabel('Frequency [kHz]')
ylabel('X Displacement Amplitude [10^{-7}m]')

%% Data fitting for numeric data from LabVIEW tracking results
% enter 9 sample points
x = [3.2, 4.2, 5.2, 6.2, 7.2, 8.2, 9.2, 10.2, 11.2];
y = [0.33, 0.425, 0.645, 1.05, 1.4, 0.86, 0.46, 0.3, 0.225];

% choose the curve fitting model
myfittype = fittype('1/(a*x*x+b*x+c)', ...
    'dependent', {'y'}, 'independent', {'x'}, ...
    'coefficients', {'a', 'b', 'c'})

% data fitting and plot the results
myfit = fit(x', y', myfittype)
figure, plot(myfit, x, y)
xlabel('Frequency [kHz]')
ylabel('X Displacement Amplitude [normalized scale]')
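The closed-form vertex formula used by SubPixFitParabola in Appendix B.1 can be checked in isolation. The sketch below strips the OpenCV types and fits a parabola through three (x, y) samples; the sample values in the usage note are made up for illustration.

```cpp
#include <cassert>
#include <cmath>

// Vertex of the parabola through three points: the same closed form as
// SubPixFitParabola in Appendix B.1, written with plain doubles.
void fitParabolaVertex(double x1, double y1, double x2, double y2,
                       double x3, double y3, double* xv, double* yv) {
    double denom = (x1 - x2) * (x1 - x3) * (x2 - x3); // nonzero for distinct x
    double A = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom;
    double B = (x3 * x3 * (y1 - y2) + x2 * x2 * (y3 - y1) + x1 * x1 * (y2 - y3)) / denom;
    double C = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2 + x1 * x2 * (x1 - x2) * y3) / denom;
    *xv = -B / (2 * A);          // vertex x: sub-pixel peak location
    *yv = C - B * B / (4 * A);   // vertex y: interpolated peak value
}
```

For example, sampling y = -(x - 1.5)^2 + 4 at x = 1, 2, 3 gives y = 3.75, 3.75, 1.75, and the recovered vertex is (1.5, 4): the peak lies between the two equal highest samples, which is exactly the situation the sub-pixel step handles.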

