A System for the Efficient Automated Analysis of Reconstructed Double-Pulsed Holograms. Zhijun Zhao, 1993.

A SYSTEM FOR THE EFFICIENT AUTOMATED ANALYSIS OF RECONSTRUCTED DOUBLE-PULSED HOLOGRAMS

By Zhijun Zhao
B. Sc. (Automatic Instrumentation), Tianjin University, China
M. Sc. (Precision Instrumentation), Tianjin University, China

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, MECHANICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1993
© Zhijun Zhao, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Mechanical Engineering
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada
V6T 1Z1
Date:

ABSTRACT

Double-pulsed holograms of particles in a flow form the basis of Holographic Particle Image Velocimetry (HPIV), which is becoming widely used for the measurement of three-dimensional velocity fields. The major deficiency of HPIV is that the double-pulsed holograms must be surveyed by a human operator, at a considerable cost in time and effort, to determine the location of objects within the holographic volume.

A system that automatically and efficiently analyzes double-pulsed holograms of microbubbles in flowing water has been developed. The system employs a three-part algorithm:

1. Holograms of microbubbles are scanned in the holographic (x-y) plane using a computer-controlled positioning table. Microbubbles within the scanned region are detected by a CCD-based fast object detection system. On detection of a probable bubble signal, the CCD board signals the computer to stop scanning.

2. Bubble images thus detected are focussed (in the z-direction) using a software-based autofocussing technique.

3. In-focus bubble images are analyzed by a simple set of image analysis techniques, including edge detection and patch correlation.

The output of this algorithm is the size, and displacement between the two laser pulses, of microbubbles in the water flow.

Large numbers of experiments were conducted using this algorithm. On the basis of these experiments it is concluded that scanning the large empty regions of space in a hologram to identify the widely separated microbubbles (step 1 above) can be done at almost the human rate. In contrast, the second part of the algorithm (autofocussing of bubble images with the present hardware) requires more time than does a human operator.
The third part of the algorithm, involving automated bubble image analysis, can be carried out at very nearly the human rate.

Table of Contents

ABSTRACT
Table of Contents
List of Tables
List of Figures
ACKNOWLEDGEMENTS

1 INTRODUCTION
  1.1 Preliminary Remarks
  1.2 Photography and Holography
  1.3 In-line holography
  1.4 Hologram Particle Image Velocimetry
  1.5 Purpose and Scope of the Present Study

2 LITERATURE REVIEW
  2.1 Autofocussing of Microbubble Images
    2.1.1 Haussmann and Lauterborn
    2.1.2 Stanton, Caulfield and Stewart
  2.2 Microbubble Image Analysis
    2.2.1 Zarschizky Method
    2.2.2 Payne Method
  2.3 Summary

3 EXPERIMENTAL APPARATUS
  3.1 Overall Automated Hologram Reconstruction System
  3.2 Experimental Procedures
    3.2.1 Adjustments of Optical System
    3.2.2 Adjustments of Electronic System

4 FAST OBJECT DETECTION SYSTEM
  4.1 Rationale for the System Design
  4.2 Hardware of the Fast Object Detection System
    4.2.1 The Optical System
    4.2.2 Electronic System
    4.2.3 Software Design
  4.3 The Effectiveness of the Detection System
    4.3.1 Factors which Impact on the Effectiveness
    4.3.2 Improvement of Bubble Detection System Effectiveness

5 AUTOFOCUSSING OF BUBBLE IMAGES
  5.1 Basic Concepts of Autofocussing
    5.1.1 Principle of Bubble Image Focussing
    5.1.2 Extraction of a Focussing Parameter
    5.1.3 An Approach to Autofocussing of Bubble Image
  5.2 Implementation of Autofocussing Algorithm
    5.2.1 Identification of Spurious "Bubbles"
    5.2.2 Selection of Bubble for Focussing
    5.2.3 Autofocussing of the Bubble Image
  5.3 Comparison with Human Focussed Images
    5.3.1 Definition of Errors
    5.3.2 Analysis of Error

6 BUBBLE IMAGE ANALYSIS
  6.1 Introduction
  6.2 Image Analysis
    6.2.1 Preprocessing of Bubble Image
    6.2.2 Further Image Analysis Procedures
    6.2.3 Bubble Displacement and Bubble Diameter
  6.3 Results and Comparison with the Human Analysis

7 Concluding Remarks
  7.1 Summary
    7.1.1 Fast Object Detection
    7.1.2 Bubble Autofocussing Algorithm
    7.1.3 Bubble Image Analysis
  7.2 Recommendations for Future Work

Bibliography
A Nonlinear Data Fitting
B Shrink and Blow Technique
C Object Edge Tracking Technique

List of Tables

5.1 Table of absolute error
5.2 Table of relative error

List of Figures

1.1 Hologram recording process
1.2 Hologram reconstruction process
1.3 In-line hologram recording process
1.4 In-line hologram reconstruction process
2.1 A schematic of out-of-plane positions
3.1 Schematic of hologram reconstruction facility
3.2 Photograph of hologram reconstruction facility
4.1 Schematic of the fast object detection system
4.2 Video signal output of the linear CCD
4.3 Inverted signal of Figure 4.2
4.4 A schematic of the signal inverter
4.5 Schematic of the low-pass filter circuit
4.6 Low-pass filtered signal of the inverted signal
4.7 Schematic of integrator
4.8 Schematic of mean value extraction
4.9 Schematic of a positive integrator
4.10 Schematic of signal detection circuit
4.11 Schematic of the detection circuit
4.12 Synchronization of the different signals
5.1 Schematic of bubble image focussing
5.2 Photographs of a microbubble in and out-of-focus
5.3 Focussing parameter using the image size of bubble
5.4 One dimensional box filter applied to radial lines from centroid
5.5 Intensity variation along a radial line prior to filtering
5.6 Intensity changes after the filtering
5.7 Focussing parameter for a typical microbubble
5.8 A schematic of a large step scanning procedure
5.9 A schematic of a small step scanning
5.10 Focussing parameter fitting using a Gaussian function
5.11 An image with a spurious bubble (bright region at the center of frame)
5.12 Schematic of the flow around a vortex
5.13 A schematic of the relationship between Vθ and r
5.14 Bubble focussing depth error
5.15 Fluctuation with time of the focussing parameter
6.1 Histogram of bubble image
6.2 Photographs of preprocessing steps applied to a typical hologram image
6.3 Typical bubble edge points identified during one-dimensional box filtering
6.4 Radial distance of bubble edge points versus angular coordinate
6.5 Typical bubble edge after crude noise suppression
6.6 Radial distance of edge points after crude noise suppression
6.7 Large noise portion of edge
6.8 Scanning sequence
6.9 Typical bubble after link procedure
6.10 Radial distance of edge points after the link procedure
6.11 Typical bubble edge after smoothing
6.12 Radial distance of edge points after edge smoothing
6.13 Photographs of different stages in the preliminary image analysis
6.14 Photographs of different stages in later image analysis
6.15 Schematic of patch correlation
6.16 Bubble displacement error
6.17 Bubble diameter error
C.1 Schematic of edge-tracking technique
C.2 Scanning sequence and matrix

ACKNOWLEDGEMENTS

I would like to express my gratitude to my supervisor Dr. S.I. Green for his supervision and encouragement throughout the course of this study.

G. Wright and M. Slessor are also thanked for their assistance.

Thanks are also due to the personnel of the Mechanical Engineering Workshop who assisted in producing some mechanical components.

Thanks must also be extended to the personnel of the Mechanical Engineering Instrumentation Lab, who gave me helpful suggestions and helped to provide some electronic components.

In addition I would like to express my appreciation to my friends whose encouragement, discussions, and comments at the various stages of the work were very helpful.

Financial support for this work by the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged.

Finally, I would like to express special thanks to my wife, Dongmei. She has always been there to aid me in many ways, and she has endured the worst of me over the past few years.

Chapter 1
INTRODUCTION

1.1 Preliminary Remarks

Holography is a very powerful tool for helping to visualize and measure the complex phenomena encountered in single and multiphase flows (Lee, 1986). Single-pulsed holography is often used to record the instantaneous location of particles or bubbles in a flow.
For example, the combustion of a black powder pellet has been visualized using single-pulsed holography (Trolinger and Field, 1980). Using single-pulsed holography, the particle size and concentration were measured, giving insight into the physical process of combustion. Cavitation nuclei are also studied actively using single-pulsed holographic techniques (Gates and Bacon, 1978).

Double-pulsed holographic techniques are much more widely used in quantitative measurements of a flow field than single-pulsed holography. Ewan (1979) produced double-pulsed holograms of moving particles embedded in a one-dimensional velocity field. Lee (1973) measured the instantaneous velocity field of droplets using this technique.

Triple-pulsed holography, and holography involving more than three laser pulses, are useful for measuring acceleration and particle morphology changes with time. Liquid and liquid metal breakup phenomena were observed in great detail using this kind of holography (Craig, 1984).

Like many other optical methods, holography provides a non-intrusive measurement of flow fields, but this is not its distinguishing advantage over conventional photography. One important feature of holography is that it is the only method by which high resolution measurements of individual particles in a dynamic field can be made. The other important advantage of holography over photography is its ability to record objects in three dimensions (Hobson, 1988). This latter feature is the principal reason that particulate holography has become an increasingly common technique for the measurement of the local pressure and particle velocity field in three-dimensional space (Green and Lin, 1991).

Double-pulsed holography, the extension of conventional holography to record particle velocity within the holographic volume, is analogous to strobe photography. In double-pulsed holography (which forms the basis of Holographic Particle Image Velocimetry, HPIV) two holograms of a dynamical particle field are recorded on the same holographic film with a short time separation between the two recordings. On reconstruction of the hologram, the quasi-instantaneous velocities of all the particles in the flow field will be visible as the displacement between the two successive images of each particle. This displacement is the basis of Particle Displacement Velocimetry (PDV or PIV).

Such a useful flow measurement technique has not become a mature and common experimental technique, unlike Laser Doppler Velocimetry (LDV) and hot wire anemometry, because there are two shortcomings of holography which hinder its application. One shortcoming is the inability of holography to provide on-line velocity measurements; that is, holography requires film development and hologram reconstruction steps before the velocity field can be determined. The other shortcoming is that the volume of data that can be extracted from a double-pulsed hologram is enormous, and this holographic analysis is traditionally done by humans. Such analysis is rather time-consuming and tedious.

The first limitation is a characteristic of classical holography, over which we have no control. The second shortcoming, however, can be minimized using advanced computer-based hologram analysis techniques (Green and Lin, 1991). Efficient reconstructed image processing and analysis is the topic of this investigation.
1.2 Photography and Holography

Light, as a form of electromagnetic radiation propagating as waves, has four identifying characteristics: the wave propagation direction, the wavelength, the wave phase, and the wave amplitude. Conventional photography, which records only the light wave amplitude information, obviously loses some important information about the light. In particular, by not recording the light phase, the depth-of-field information about the light is lost. Consequently, the three-dimensional information present in the incoming light waves is "compressed" onto the two-dimensional film medium. Stereoscopic or multi-camera techniques, which are commonly used in photogrammetry, enable a three-dimensional reconstruction of original objects to be made, but they involve multi-perspective photography, not one-directional photography. These stereoscopic techniques have found application in robotic vision systems (Harris, 1988).

Holography is different from conventional photography. Instead of light of random wavelength and phase being used to illuminate an object, as is done in conventional photography, holography uses coherent light to illuminate the object. The light reflected from or scattered off an object is projected onto the holographic film. This light is referred to as the "information beam". Reference light waves, which come from the same coherent light source, are projected onto the holographic film at the same time. Because the reflected light and reference light will have travelled very different paths, the two beams will be out of phase. Consequently, the two beams of coherent light will interfere with each other on the surface of the holographic film. With the help of a sensitive emulsion, the interference fringes are recorded (Figure 1.1). The exposed holographic film is developed and fixed, and a hologram results. This process is called the "recording process."

Figure 1.1: Hologram recording process.

When the "recording process" is finished, both intensity and phase information of the light reflected off the object is frozen on the hologram. If you look at the hologram in room light, nothing is visible except for some complicated winding interference fringes; there are few signs of the object image recorded on it.

In order to look at the image of objects recorded on the hologram, you must put the hologram back in its original position at the time of hologram recording, remove the object that was recorded, and illuminate the hologram using only the reference light at the original recording angle. In the location of the original object, a "virtual image" of the object appears. The form of the electromagnetic field of the virtual image is identical to that of the original object. If we observe the virtual image facing the light, we shall see an "object" identical to the original object. This virtual image is formed by extending, in the opposite direction, the light waves of the reference light diffracted by the hologram. On the other side of the hologram one can see a three-dimensional real image of the object, which is formed by the converged diffraction light. The real image and virtual image are conjugated in the phase of the light wave (Figure 1.2).
This step in holography is referred to as the "reconstruction process."

Figure 1.2: Hologram reconstruction process.

1.3 In-line holography

The holographic recording process can be divided into two categories, depending on whether reflected or diffracted light from the object is used as the information beam. Holograms formed using the diffracted light as the information beam can be further divided into in-line or off-axis holograms, depending on the selection of the reference beam. The holographic technique illustrated in Figure 1.1 and Figure 1.2 belongs to the class of off-axis holographic recording with diffracted light as the information beam.

In in-line holography, the illuminating light and reference light are the same. A schematic diagram of the in-line holographic recording technique is shown in Figure 1.3. An object is illuminated by a coherent and collimated light source which is collinear with the object and the holographic film. The transmitted light consists of two parts. The first part is a strong uniform plane wave corresponding to the directly transmitted light. This constitutes a reference beam whose amplitude and phase do not vary across the holographic film. The second part is a weak scattered wave due to the object. The holographic film records the complex sum of these two waves, i.e. the interference pattern. The exposed holographic film is developed and fixed, and an in-line hologram is thus formed.

Figure 1.3: In-line hologram recording process.

Reconstruction of an in-line hologram is accomplished by simply illuminating the hologram with a collimated beam similar to the original recording beam. The coaxial real and virtual images separate. A virtual image of the particle field is formed at the original location of the particles, and a real image is formed on the opposite side of the holographic plate, at the same distance from the holographic plate (Figure 1.4).

Figure 1.4: In-line hologram reconstruction process.

There is a limit to the size of object that can be recorded using the in-line holographic technique, imposed by the requirement of collinearity of the illuminating light, object and holographic film (Belz, 1972). The limitation is that the distance, z, from the object to the holographic film must satisfy z > a²/λ, where a is the maximal dimension of the object (the diameter of a circular object) and λ is the wavelength of the illuminating light. This requirement guarantees that the film records the Fraunhofer (far-field) diffraction pattern, and therefore that the interference of real and virtual images is minimal. This restriction limits the use of in-line holography to small particles, and hence its main application is the recording of volumes of solid particles, gas bubbles or aerosols. In Hologram Particle Image Velocimetry, however, the limitation is almost a blessing, because the tiny particles, gas bubbles and aerosols can be used as tracers to visualize and measure the flow field, as explained in the next section (Trolinger, 1980).

The advantage of in-line holography relative to off-axis holography is its ease of setup. The recording and reconstructing processes are relatively simple, because there is no need to provide for a separate reference beam and axis. Furthermore, there are less stringent constraints on film resolution than those imposed by the off-axis method. Objects in reconstructed in-line holograms appear bright relative to a dark background.
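As a rough illustration of the far-field condition z > a²/λ, the following sketch (my own illustration with example bubble sizes, not calculations from the thesis) evaluates the minimum recording distance at the ruby laser wavelength used here:

```python
# Minimum recording distance for the Fraunhofer (far-field) condition
# z > a^2 / lambda.  Bubble sizes below are illustrative.
RUBY_WAVELENGTH = 694e-9  # recording wavelength (m)

for a in (100e-6, 200e-6, 300e-6):  # maximal object dimension (m)
    z_min = a**2 / RUBY_WAVELENGTH
    print(f"a = {a*1e6:5.0f} um -> z_min = {z_min*1e3:6.1f} mm")
```

Even a 300 µm bubble requires only about 130 mm of standoff, which is easily satisfied by a 30 cm wide test section.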
Objects inreconstructed in-line holograms appear bright relative to a dark background.1.4 Hologram Particle Image VelocimetryBy recording two holograms of a dynamic particle or bubble field on the same holo-graphic film with a known time interval between the two light exposures, on reconstruc-tion the hologram will show the displacements of all particles or bubbles in the flowfield. The instantaneous velocity of each particle or bubble can then be deduced usingChapter 1. INTRODUCTION^ 8image analysis techniques. This method is called Hologram Particle Image Velocimetry.Trolinger et al. (1969) were among the first to describe this technique, and they usedmanual scanning of the hologram to determine the velocity field. There are, in fact, twokind of HPIV according to the size and concentration of particles in the field.One kind of HPIV involves the tracking of individual particles. In this case the parti-cles or bubbles are relatively large (10tan to 500p,m), and the concentration of particles isvery low. If the in-line holographic technique is used, a significant amount of light (about80 percent) must pass through the field without modification to serve as an effective ref-erence beam (Trolinger, 1975); this percentage may not always be attainable. If theconcentration of particles is not low enough to satisfy the light transmission requirement,an off-axis holographic technique should be considered. Independent of the use of in-lineor off-axis holography, this kind of HPIV can be used to generate an enormous volumeof data. Consequently, the analysis of a hologram, which involves the image analysis ofeach particle within it, is complex. The analysis has been done traditionally by humans,but unfortunately requires a long time and has difficulties (Dimotakis et al., 1981; Marko& Rimai, 1985).The problem of extracting the enormous amount of image data generated by the HPIVtracking technique of HPIV is being ameliorated by the rapid development of computerand electronic techniques. Payne et al. (1984) discussed image analysis techniques for acomputer-controlled HPIV. Malyak and Thompson (1984) have used Fourier transformsto determine the particle displacements and velocities.The other kind of HPIV technique does not require the tracking of individual particles,but rather is based on image cross-correlations (Adrian, 1986, and Meynart, 1983). Inthis kind of HPIV the particles are very small, and their concentration is very high.Dust and aerosols are often used. Although it benefits from a much higher processingspeed and involves simpler measurement procedures than the particle tracking technique,Chapter 1. INTRODUCTION^ 9this technique suffers from comparatively poor spatial resolution, because the particlesare observed as a spatial gray particle group in a small area of the image pixel matrixdimension ( Uemura et al., 1989, Utami, et al., 1991)1.5 Purpose and Scope of the Present StudyThis thesis descibes an efficient method for the analysis of reconstructed, low particledensity, double-pulsed holograms. The holograms studied here, are recordings of themicrobubbles (typically 100/im — 300,um in diameter) in a water flow field. Double-pulsed in-line holography was used to generate the holograms (Green and Acosta, 1991).To the author's best knowledge, two groups have published results dealing with thecomplete problem of automated holographic image reconstruction. Those groups areHaussmann and Lauterborn (1984) and Stanton et al. 
1.5 Purpose and Scope of the Present Study

This thesis describes an efficient method for the analysis of reconstructed, low particle density, double-pulsed holograms. The holograms studied here are recordings of microbubbles (typically 100 µm to 300 µm in diameter) in a water flow field. Double-pulsed in-line holography was used to generate the holograms (Green and Acosta, 1991).

To the author's best knowledge, two groups have published results dealing with the complete problem of automated holographic image reconstruction. Those groups are Haussmann and Lauterborn (1984) and Stanton et al. (1984), and they represent two different ways of implementing automated holographic image reconstruction. A new approach was developed during this thesis which incorporates some of the ideas of the aforementioned two groups.

Before an attempt was made to automate the HPIV process, a considerable number of holograms were studied first. The motivation for these studies was a belief that by understanding how a human analyzes holograms, we could gain some insights into reasonable schemes for computer-automated hologram analysis. During these studies it was discovered that, whether a microbubble is in focus or not, its image was always evident as a bright region in planes within a certain range (at least +20 mm to -20 mm) of the focal plane. That observation provides a strong basis for the 2-dimensional (holographic plane) scanning of the hologram that was used for preliminary bubble identification. The very poor signal-to-noise ratio of a reconstructed hologram imposed fairly rigorous constraints on the design of the optical and electronic detection system developed for this preliminary bubble identification.

The toughest task for humans in hologram analysis is focussing of the microbubble image after it has been found. As we shall see, focussing is also the main obstacle to totally automated holographic image reconstruction. The reason focussing is so difficult is that there is no clear location where a bubble is in focus. Microbubbles pass smoothly from out-of-focus to in-focus to out-of-focus again. As far as we could determine, there is no single parameter (e.g. edge sharpness, image brightness, edge smoothness, etc.) which provides a firm clue to the human observer that microbubble images are precisely in focus.

Once a microbubble image is identified and focussed, classical two-dimensional image processing techniques can be brought to bear on its analysis. Because image processing speed was a major factor in the design of the automated HPIV system, several classical image processing algorithms were modified and optimized for the restricted task at hand.

In summary, the analysis of a double-pulsed holographic recording of microbubbles in a flow field may thus be broken down into two tasks: (1) finding all of the microbubbles and (2) characterizing them. "Finding the microbubbles" describes the task of scanning the hologram in the holographic (two-dimensional) plane to find each microbubble, whether it is in-focus or out-of-focus, and locating the plane of best focus for each microbubble. The second task, "characterizing the bubbles", requires that the in-focus microbubble image be analyzed to extract information related to the size and displacement of each microbubble.
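Viewed as software, this two-task breakdown amounts to a simple outer loop. The sketch below is my own paraphrase of the algorithm in Python-style pseudocode; all the interface names are hypothetical stand-ins for the real hardware and routines described in Chapters 4 to 6:

```python
def analyze_hologram(table, detector, camera):
    """Top-level loop of the automated analysis, as described in the text.
    `table` is the x-y-z positioning table, `detector` the CCD-based fast
    object detection board, and `camera` the video camera / frame grabber.
    All three interfaces are hypothetical."""
    results = []
    while table.scan_xy_until(detector.bubble_flag):  # task 1a: find (Ch. 4)
        z_best = autofocus(table, camera)             # task 1b: focus (Ch. 5)
        image = camera.grab(z_best)
        size, displacement = characterize(image)      # task 2: analyze (Ch. 6)
        results.append((table.position(), size, displacement))
    return results
```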
Chapter 2
LITERATURE REVIEW

2.1 Autofocussing of Microbubble Images

The focussing of the microbubble image is the most time-consuming and difficult aspect of automated holographic image reconstruction. Traditionally this task has been done by a human operator. The original HPIV investigators used a human operator to visually observe the focussing and defocussing of magnified images on a closed circuit monitor or TV (McKee, 1975), and thus demonstrated the possibility of measuring individual particle velocities. However, a feasible method of extracting large quantities of data in a practical engineering application was not available. There are two major difficulties with this human-operator-based analysis, one obvious and one slightly more subtle. The obvious difficulty is that human-based analysis is notoriously slow, non-repeatable, and therefore costly and inaccurate (Forbes et al., 1991). The less obvious difficulty is that human-based analysis of focussing of holograms may be inherently less accurate than focussing using computer vision.

To assist in the automated hologram image analysis, it is widely accepted that some form of data processing, either digital or combined optical/digital, is needed. Several investigators have developed schemes for particle image autofocussing. There are, in general, two methods. One method, which mimics a human observer, requires digitization and processing of many successive 2-D particle images in the real image volume to identify the image plane with the "sharpest" bubble image. The other method is based on the analysis of two or more image planes in which a bubble is defocussed. The two group studies described below are representative of the two methods.

2.1.1 Haussmann and Lauterborn

Haussmann and Lauterborn (1981) have developed a methodology for bubble image autofocussing based on the conventional human method. Their implementation of autofocussing involves stepping in the z-direction through the image volume searching for the "in-focus" location; the sharpness of the edge contour of a particle was used as a criterion for the particle z position.

In their autofocussing algorithm, differential operators like the Roberts cross-operator or the Sobel operator (Gabiati, 1990) were used to determine the sharpness of an object's edge. However, differential operators utilize only locally limited information and are therefore very sensitive to noise. In general, noise is rather high in the reconstructed hologram image due to the effect of the Airy disk (Schmidt-Harms, 1984) and background. Consequently, before the differential operators were used, noise suppression with pixel averaging was first applied to alleviate the effect of background noise. The effectiveness of the differential operator for edge detection was enhanced by using a weighting function. Weighting functions work by amplifying the values at the points that belong to the particle contour, and suppressing the values at points elsewhere. A simple mathematical weighting function WFC(I) describes the probability that a picture element with intensity I belongs to the object contour. This weighting function was defined using a priori knowledge about the bubble intensity distribution in the image. Several different weighting functions (linear ramp, triangle, rectangular, Gaussian and trapezoidal) were applied to the unfiltered input image to determine which function yielded the most noise suppression. The Gaussian weighting function yielded the maximal edge contrast enhancement.

The noise-suppressed image was then filtered by a differential operator, which is denoted by GRAD. At an arbitrary point A, the processes described above can be expressed by equation 2.1:

GRAD_iw(A) = GRAD(A) · WFC[AVG(A)]    (2.1)

A focussing parameter of sufficient selectivity, which detects the occurrence of sharp edges, is calculated by averaging GRAD_iw(A) over the entire image. This process is repeated at each step in the z dimension. As a function of the depth coordinate, this averaged focussing parameter shows a local maximum at the z location corresponding to the in-focus position of the object. The reconstruction system drives the positioning table, on which the hologram is mounted, to the focussed position. Consequently, the video camera can grab the in-focus bubble images.
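A minimal NumPy rendering of equation 2.1 might look as follows. This is my own sketch, assuming a 3x3 box average for AVG, central differences for GRAD, and a Gaussian WFC centred on an assumed contour intensity; Haussmann and Lauterborn's actual operators may differ.

```python
import numpy as np

def focus_parameter(img, contour_intensity, sigma=20.0):
    """Average of GRAD(A) * WFC[AVG(A)] over the image (cf. equation 2.1)."""
    # AVG: 3x3 box average for noise suppression (vectorized via shifts;
    # np.roll wraps at the borders, acceptable for a sketch).
    avg = sum(np.roll(img, (dy, dx), axis=(0, 1))
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    # GRAD: gradient magnitude from central differences.
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    # WFC: Gaussian weighting, largest near the assumed contour intensity.
    wfc = np.exp(-((avg - contour_intensity) ** 2) / (2.0 * sigma ** 2))
    return (grad * wfc).mean()

# Evaluate at each z step and keep the plane maximizing the parameter, e.g.
# z_best = max(z_planes, key=lambda z: focus_parameter(grab(z), 180.0))
```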
2.1.2 Stanton, Caulfield and Stewart

Stanton et al. (1984) have developed a more efficient way to autofocus bubble images. The size of the first Fraunhofer diffraction ring of the particle in each non-focussed image plane is used as an indicator of object focus. The in-focus position can be calculated from the size of these rings.

The theory is based on a paper by Vikram and Billet (1984). They pointed out that the size of particles in the out-of-focus image is much larger than that in the in-focus image. By measuring the out-of-focus spot size, s, and the distance out of focus, Δz, one can infer the particle diameter p and the in-focus position.

Just like the conventional focussing method, the size of the particle in the different measurement image planes is used as a focussing parameter. At focus, the image size of the particle is a minimum, that is, s = p. Consequently, at focus,

ds/dz = 0    (2.2)

and |ds/dz| will be positive away from focus.

The difference between this approach and conventional focussing methods is that the focus position can be determined using only two out-of-focus images satisfying several constraints. These constraints are:

1. the two out-of-focus particle images are spaced by

Δz >> Δp    (2.3)

where Δp ≈ p²/λ, and λ is the recording and reconstruction wavelength;

2. the focussed particle image lies between the two measurement planes, so that the corresponding distances from the focal plane (Δz₁ and Δz₂) satisfy

Δz₁ + Δz₂ = Δz    (2.4)

3. based on Vikram and Billet's development, the spot sizes (s₁ and s₂) of the particle images are proportional to their distances from the focal plane:

s₁ ∝ Δz₁  and  s₂ ∝ Δz₂    (2.5)

One may then find the unique Δz₁ and Δz₂ satisfying these constraints, thus determining the plane in which the image is located (Figure 2.1).

Figure 2.1: A schematic of out-of-plane positions.

The third constraint can be explained in detail. For a round object, according to the analysis of Vikram et al. (1984), the intensity field, which represents the Fraunhofer diffraction of the round particle at a distance Δz with a uniform background, is given by

I(R) ∝ (1 − K_B²)² + [ K_B (πa²/(λΔz)) · 2J₁(2πaR/(λΔz)) / (2πaR/(λΔz)) ]²    (2.6)

where R is the radial distance in the intensity pattern and a is the radius of the object. The minima in this field occur at the zeros of the Bessel function J₁(x). So at the mth minimum ring of the pattern, from equation 2.6 one recognizes that

x_m = 2πaR₁/(λΔz₁) = 2πaR₂/(λΔz₂)    (2.7)

where x_m is the mth zero of J₁(x), and R₁ and R₂ are the radial distances of the mth minimum ring of the Fraunhofer pattern at distances Δz₁ and Δz₂ from the focal position. Taking s₁ = 2R₁ and s₂ = 2R₂, one finds that this is just the third constraint (equation 2.5). One can see at the same time that this approach applies to spherical particles; extending this method to particles of arbitrary shape is more difficult.

From equation 2.7, one gets

R₁Δz₂ = R₂Δz₁    (2.8)

Combining equation 2.4 and equation 2.8, one can obtain Δz₁ and Δz₂ in terms of the known quantity Δz and the radii R₁ and R₂:

Δz₁ = (R₁/(R₁ + R₂)) Δz,  Δz₂ = (R₂/(R₁ + R₂)) Δz    (2.9)

If either Δz₁ or Δz₂ is known, the focal position of the particle image is known immediately. If the spot sizes (s₁ and s₂) are used, equation 2.9 becomes

Δz₁ = (s₁/(s₁ + s₂)) Δz,  Δz₂ = (s₂/(s₁ + s₂)) Δz    (2.10)
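Equation 2.10 reduces the focus search to one arithmetic step once the two spot sizes are measured. A small sketch (with illustrative values of my own choosing):

```python
def focal_plane(z1, z2, s1, s2):
    """Locate the in-focus z position from two out-of-focus planes with
    z1 < z_focus < z2, using the spot sizes s1, s2 there (equation 2.10)."""
    dz = z2 - z1                  # plane separation (equation 2.4)
    dz1 = dz * s1 / (s1 + s2)     # distance from plane 1 to the focal plane
    return z1 + dz1

# Example: spots of 420 um and 180 um measured in planes 6 mm apart
print(focal_plane(0.0, 6.0, 420.0, 180.0))  # -> 4.2 (mm from plane 1)
```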
The method of Haussmann et al. is robust, especially for situations in which microbubbles are used as tracers. Unfortunately, Haussmann's algorithm is very computationally intensive. Compared with the method of Haussmann, the method of Stanton et al. is fairly sophisticated: the number of computations involved is reduced sharply because only a few images are processed. Regrettably, this technique is limited to round tracer particles.

2.2 Microbubble Image Analysis

The image analysis of in-focus particle images is the final stage of Hologram Particle Image Velocimetry. The aim of this analysis is to extract information pertaining to the particle size, shape, and displacement between the two holographic laser pulses. When the concentration of particles is rather low and the reconstruction and analysis method involves just an isolated particle, the image analysis is relatively simple (Stanton et al., 1984; Caulfield, 1985). For high particle concentrations, several particles may appear in one image, and the image analysis is more complicated (Haussmann et al., 1980).

The first step of image analysis is often edge detection, which is used to highlight the boundaries of objects (McKee, 1975). Marr and Hildreth (1980) derived three physical considerations (constraints) for an edge detector. These considerations are: (1) the constraint of spatial localization, that is, the edge detector one seeks should be smooth and localized in the spatial domain, and in particular its spatial variance should be small; (2) the constraint of frequency localization, that is, the detector's spectrum should also be smooth and roughly band-limited in the frequency domain; and (3) independence of edge orientation, that is, the detector is required to have no preference for edges in a particular direction. These three constraints have become three basic criteria for edge detectors.

Some edge detection schemes begin by applying small differential operators to an image, followed by a detection operation to locate small edge segments. The performance of local differential operators, however, deteriorates rapidly in the presence of blurred and noisy edges, because the noise is enhanced by differentiation. This has led to the development of some methods specialized for detecting edges in noisy images. Modestino and Fries (1977) introduced an approach to edge detection in noisy images using two-dimensional recursive digital filtering. In general, a smoothing function that is Gaussian, or close to Gaussian, is often used. One generic approach to edge detection is to apply a first derivative operator with a Gaussian smoothing procedure, look for extrema, and compare these with a threshold value (Canny, 1986, and Deriche, 1987).

Unfortunately, it is rather difficult to apply the above edge detection algorithms directly to a reconstructed hologram image because of the substantial noise present. Some algorithms combining the basic edge detection theory with various specific noise suppression techniques have been published. In the approach of Haussmann et al. (1980) a weighting function was used to amplify the edge pixels and suppress the pixels elsewhere (this kind of image processing was shown above in the discussion of autofocussing). Several directional edge filters used to suppress the background noise can also be used to good effect (Green and Lin, 1991).
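The generic smoothed-derivative approach cited above can be sketched in a few lines. This is my own illustration (one-dimensional for brevity, using NumPy), not the detector used in the thesis:

```python
import numpy as np

def edge_points_1d(signal, sigma=2.0, threshold=5.0):
    """Canny-style 1-D edge detection: smooth with a Gaussian, take the
    first derivative, and keep extrema that exceed a threshold."""
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                                  # normalized Gaussian kernel
    deriv = np.gradient(np.convolve(signal, g, mode="same"))
    mag = np.abs(deriv)
    # Local extrema of the derivative that are strong enough to be edges.
    is_peak = (mag[1:-1] >= mag[:-2]) & (mag[1:-1] >= mag[2:]) \
              & (mag[1:-1] > threshold)
    return np.where(is_peak)[0] + 1               # indices of edge points
```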
2.2.1 Zarschizky Method

Zarschizky et al. (1980) have used a direct way to process the focussed particle images. Their approach consists of four parts.

2.2.1.1 Obtain a reliable segmentation threshold value

Segmentation of an image implies the finding of an intensity threshold value which allows association of all image points below this value with object points, and all other points with background points. This intensity threshold value is called the segmentation threshold value. A partially integrated field, which includes no particles, is used to estimate the statistical properties of the image. Those statistical properties are taken as the properties associated with the pure speckle background of the hologram. By studying those properties, one can get a reliable segmentation value with which to threshold the original image. All points below the threshold value are defined to be object points, and all other points are defined to be background points.

2.2.1.2 Smoothing procedure

If the image is thresholded directly with the segmentation threshold value, the reconstructed images of objects are interlaced with speckle noise in the binary image. Segmentation may thus lead to an inhomogeneous object that will be recognized as smaller sub-particles. Therefore, a smoothing procedure to connect these sub-particles is necessary as a further preprocessing step.

The image is smoothed locally with a 3 x 3 averaging filter. The smoothing algorithm, however, pays attention to pixel connections that probably belong to a particle. The center of the 3 x 3 neighborhood gets the mean intensity value of all neighborhood pixels if no critical pixel connection is detected. If at least three pixels of a neighborhood with intensities less than the segmentation value are directly connected, these points are excluded from the averaging; only the remaining points outside such an object chain contribute to the mean value that is assigned to the center. An isolated bright pixel within a closed object chain receives 1/9 of its original intensity, being averaged with eight pixels whose intensities are less than the segmentation value (each counted as "0"); it will then be incorporated into the object. Then the segmentation threshold is applied: all image points below the segmentation threshold value are assigned "1", and all others are assigned "0". The result is a binary image in which connected bright regions stand against a dark background.

2.2.1.3 Shrink and grow procedure

The smoothing procedure described above leaves many noise pixels incorrectly identified as objects. The noise is quite small in spatial extent, and may be removed by a "shrink and grow" or "erosion-dilation" procedure. Pixels at the exterior of an object are "eroded" away like an onion being peeled one layer at a time. Small objects are "eroded" out of existence, while large objects are shrunk in area. The grow procedure reverses the process, blowing the remaining pixel clusters back to nearly their original size.
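A compact sketch of the shrink-and-grow idea on a boolean image follows (my own NumPy illustration; the thesis's own variant is described in Appendix B):

```python
import numpy as np

def neighbours(mask):
    """Stack of the four 4-connected neighbours of each pixel
    (np.roll wraps at the image borders, acceptable for a sketch)."""
    return np.stack([np.roll(mask, s, axis)
                     for axis in (0, 1) for s in (-1, 1)])

def erode(mask):
    # A pixel survives only if it and all 4 neighbours are object pixels.
    return mask & np.all(neighbours(mask), axis=0)

def dilate(mask):
    # A pixel becomes object if it or any 4-neighbour is an object pixel.
    return mask | np.any(neighbours(mask), axis=0)

def shrink_and_grow(mask, n=2):
    """Erode n times (small noise clusters vanish), then dilate n times
    (surviving objects return to roughly their original size)."""
    for _ in range(n):
        mask = erode(mask)
    for _ in range(n):
        mask = dilate(mask)
    return mask
```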
2.2.1.4 Final evaluation

The final evaluation of the preprocessed image sector gives the coordinates of the particle centers, the perimeter of each particle (the number of boundary points), and the area of each object as the number of all object pixels.

2.2.2 Payne Method

Payne et al. (1984) have developed an algorithm which is less involved computationally than those of Haussmann and Zarschizky. However, a human operator was needed to focus the particle images.

The image of each particle was successively focussed on the real projection screen by moving the holographic frame along the z axis. The projected scene was viewed with a TV camera and converted into a digitized video signal. The holographic frame was then moved in the x and y axes to bring the focussed image of each particle into the center of the TV monitor. All of the above was controlled by the human operator with a joystick.

The detection of the pixels which represent the edges of particles is based on a simplified single-pass edge detection process. An average intensity between the particle and background is selected as a threshold value, and the pixel intensity is compared with this threshold value. Pixels at each transition across the threshold value are selected as edge points.

Using the pixels detected at this stage, the particle area and perimeter are calculated. This method is effective at detecting the edges of particles. However, it does not lend itself to complete automation, because a human-controlled cursor must be used.
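Payne's single-pass idea can be rendered as a short sketch (my own illustration): scan each image row once and record the pixels where the intensity crosses the threshold.

```python
import numpy as np

def threshold_crossings(image, threshold):
    """Single-pass edge detection: return (row, col) of every pixel at
    which a scan line crosses the intensity threshold."""
    above = image > threshold
    # A crossing occurs wherever consecutive pixels in a row disagree.
    crossings = above[:, 1:] != above[:, :-1]
    rows, cols = np.nonzero(crossings)
    return rows, cols + 1

# e.g. edges = threshold_crossings(img, (bubble_mean + background_mean) / 2)
```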
2.3 Summary

Basically, autofocussing of bubble images and bubble image analysis are the main tasks of automated hologram analysis. There are some similarities between conventional methods of particle image focussing and the Stanton et al. (1984) method. Both methods use a focussing parameter that varies with the depth coordinate. The difference between the methods is that there is a linear relationship between the focussing parameter and the depth coordinate in Stanton's method; the correct focal plane can be determined using only a few out-of-focus particle images. In conventional methods, the correct focal plane is determined by finding the maximum of a focussing parameter extracted from a large number of successive images.

Analysis of focussed hologram images lies within the domain of standard 2-D image processing. Edge detection and noise suppression are the principal aspects of this image analysis. Different edge detectors and noise suppression techniques will result in very different image analysis efficiencies.

Chapter 3
EXPERIMENTAL APPARATUS

3.1 Overall Automated Hologram Reconstruction System

The holograms analyzed here were produced using an in-line ruby laser. The ruby laser holograms, recorded on high resolution Agfa-Gevaert 10E75 holofilm, were taken through the optical grade windows and 30 cm wide test section of a recirculating water tunnel. Microbubbles of 100 µm to 250 µm in diameter, injected into the flow, were recorded in the holograms.

Each double-pulsed hologram to be analyzed was mounted on a computer-controlled x-y-z positioning table. The hologram was reconstructed by shining He-Ne laser light through the film. The beam of laser light was first filtered by a spatial filter and collimated by a collimating lens. Two coaxial images, a real and a virtual image, were thus produced. Due to the difference in wavelength of the recording laser light (ruby rod laser, 694 nm in wavelength) and reconstructing laser light (He-Ne laser, 633 nm in wavelength), contraction of the reconstructed image compared with the original flow field dimension occurred out of the plane of the hologram. The image is compressed by a factor of about 0.91 (633 nm / 694 nm ≈ 0.91), an effect that was allowed for in subsequent data manipulation.

The resultant reconstructed real image was magnified using a microscope objective lens, and split into two image volumes by a beam splitter. One image volume, compressed in one dimension by a cylindrical lens, was detected by a linear CCD array, and the other was recorded by a video camera (Figure 3.1).

Figure 3.1: Schematic of hologram reconstruction facility.
Figure 3.2: Photograph of hologram reconstruction facility.

Microbubbles within the real image volume appear as bright objects (due to negative film developing) surrounded by a highly noisy background that is, on average, dimmer than the bubbles. A linear CCD array, driven by an electronic circuit, was used to detect the optical signal of the bubbles, and the video signal of the CCD was processed by a separate circuit. After processing of the video signal, the "bubble" and "hologram edge" signals (refer to Chapter 4) are derived. These signals are used by the host computer that controls the x and y axes of the x-y-z positioning table to implement scanning in the x-y plane.

The second image produced by the beamsplitter was recorded by the video camera. The video camera output was digitized by a frame grabber, and the digitized image transferred to a computer for processing. During the autofocussing phase of image processing the z (focussing) axis of the positioning table was moved to maximize a focussing parameter derived from the video camera image.
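Since the 633/694 wavelength mismatch contracts the image only along the optical axis, z positions read off the positioning table have to be rescaled. A small sketch of that correction (my own illustration; the ~0.91 factor is from the text):

```python
RUBY_NM, HENE_NM = 694.0, 633.0
AXIAL_SCALE = HENE_NM / RUBY_NM   # ~0.912 axial image contraction

def true_axial_position(z_measured_mm):
    """Convert a z position measured in the reconstructed (He-Ne) image
    back to flow-field coordinates; x and y need no such correction."""
    return z_measured_mm / AXIAL_SCALE

print(round(true_axial_position(10.0), 2))  # 10 mm measured -> ~10.96 mm
```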
3.2 Experimental Procedures

Before using the system to process a hologram of microbubbles, the following sequence of experimental procedures must be carried out; otherwise the system will not work properly, and in some cases will not work at all.

3.2.1 Adjustments of Optical System

The beam expanding system consists of a beam expanding lens, a spatial filter, and a collimating lens. The He-Ne laser beam must be collimated after these optical processing components. The collimated laser beam is obtained by adjusting the four screws and the position of the spatial filter, so there are five degrees of freedom. It is somewhat difficult and time-consuming to adjust the collimated laser beam.

The position of the video camera is adjusted so that the size of the bubble images shown on a dual monitor corresponds to the size of the whole image on the screen of the monitor. A magnification ratio is then recorded for later use. In general, once the bubble image is satisfactory, the position of the video camera is fixed.

A cylindrical lens is located at a position such that the laterally compressed bubble image is slightly wider than the width of the linear CCD array.

3.2.2 Adjustments of Electronic System

First of all, the orientation of the linear CCD array is adjusted to match the laterally compressed bubble image. A test program has been designed for this purpose, and a typical bubble image can be used. The best orientation of the linear CCD array will result in the highest signal-to-noise ratio of the output for the bubble image.

The voltage offset, which is added to the mean value of the bubble image intensity, can be adjusted based on the holograms, the optical system, and the light source. The integration time of the linear CCD array is adjusted by the software used in the object detection system. A test program has been designed to test the result of bubble detection.

Chapter 4
FAST OBJECT DETECTION SYSTEM

4.1 Rationale for the System Design

The reconstruction of the in-line hologram forms a real 3-D image field of the original 3-D flow field. The vast majority of this 3-dimensional image is empty space. Therefore, the first task in holographic analysis is to rapidly find a microbubble image, whether the bubble is in focus or not. The explanation of the automatic fast object detection system developed for holograms is the focus of this chapter.

It should be noted that the fast object detection system described here is only warranted when the particle density of the hologram is low. Fortunately, in-line holography itself is, in general, significantly constrained in the particle densities it can record. To produce a good in-line hologram, the microbubbles must be sparsely injected so that a significant amount of light (about 80 percent) passes through the flow field without modulation, in order to serve as an effective reference beam. That is why the holograms studied here have a very low bubble density.

A fast object detection system has been designed to take advantage of those in-line holography limitations. The detection system is based on the fact that the microbubbles are observable over a depth range which is much larger than their original dimension (Hobson, 1980). The image of a microbubble is apparent within a certain range of its focal position. For example, a 10 µm radius bubble can be observed over a depth of 60 mm. This effect is similar to holding a bright yellow tennis ball against a black background: the tennis ball is quite apparent even if our eyes are focussed well ahead of or behind the ball.

A schematic of the fast object detection system is shown in Figure 4.1.

Figure 4.1: Schematic of the fast object detection system.

4.2 Hardware of the Fast Object Detection System

The fast object detection system, designed based on the rationale described above, consists of both an optical system and associated electronics. Figure 4.1 is a schematic of the whole system.

4.2.1 The Optical System

The optical system is very simple. It consists of just a beam splitter, a cylindrical lens and a linear CCD array. The beam splitter splits the real image volume into two image volumes, only one of which is directed onto the cylindrical lens.

The beam splitter is a 12.0 mm cubic beam splitter with nearly equal transmittance and reflectance. The cylindrical lens is 15.0 mm across by 60 mm along the lens axis, with a focal length of 40 mm. The purpose of the cylindrical lens is to compress the image in the widthwise direction, and focus it onto the CCD array.

The linear CCD array used here is the Fairchild CCD 123, which is an integrated circuit with 1728 imaging elements. Each pixel of the array is 10 µm by 13 µm in size, spaced on 10 µm centers. The Fairchild chip contains some internal clock-driving circuitry that allows it to be driven with three external clocks (Fairchild, 1989).

4.2.2 Electronic System

4.2.2.1 CCD Driving Circuit

A custom circuit was used to drive the Fairchild CCD123DC. That circuit was designed by Slessor and Green (1992). By varying the spacing of pulses sent to the CCD driving circuit, one can control the exposure of the CCD array as appropriate for the incident light intensity.
4.2.2.2 Signal Processing and Logical Circuit

Video Signal Inversion

The unprocessed CCD video signal output is a voltage train in which the negative (lower) envelope represents the analog video signal, and the positive (upper) envelope is the result of the positive-going reset clock. The upper envelope is generally uniform and is representative of the zero or "dark" signal (Texas, 1990). In our case, it is about 7.5 volts, and the signal fluctuates over a range of about 1.5 volts (Figure 4.2) down from the dark signal voltage. The first step in CCD array signal processing was signal inversion. The inverted signal was offset so that the signal range is from 0 to 1.5 volts (Figure 4.3). This partially processed signal is more intuitive, as brighter regions on the CCD array correspond to a more positive signal, and "dark" regions of the array show little signal.

Figure 4.2: Video signal output of the linear CCD.
Figure 4.3: Inverted signal of Figure 4.2.

The inverter circuit is very simple (Figure 4.4). If R₁ = R₂ = R, then V_o = V₁ − V₂, where V₂ is the CCD video signal. When V₁ = V₂, V_o = 0. The potentiometer (R₂) is used here for V_o zero adjustment.

Figure 4.4: A schematic of the signal inverter.

Low-pass Filtering

The inverted (and the original) video signal has a large high-frequency noise component, which is thought to be due to both diffraction of the He-Ne laser light on optical surfaces in reconstruction, and holographic noise. A simple low-pass filter operates on the inverted signal to reduce this high-frequency noise.

A standard low-pass filter (Figure 4.5) dramatically reduces this high-frequency noise. Using Laplace transforms, one can demonstrate that this circuit (in the transform plane) has the transfer function

T(s) = V_o(s)/V_i(s) = 1/(1 + s/ω₀)    (4.1)

where ω₀ = 1/(RC).

Figure 4.5: Schematic of the low-pass filter circuit.

Since 2πf₀ = ω₀ = 1/(RC), one can deduce

f₀ = 1/(2πRC)    (4.2)

where f₀ is the cut-off frequency.

Theoretically, one can change the cut-off frequency by adjusting R or C. In practice, the adjustment of R is implemented more easily. The low-pass filtered version of the signal of Figure 4.3 is shown in Figure 4.6.

Figure 4.6: Low-pass filtered signal of the inverted signal.
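Picking component values for a desired cut-off is a one-line application of equation 4.2. The sketch below (component values are illustrative, not the circuit's actual values) also shows a discrete-time equivalent, useful for simulating the filter's effect on a digitized CCD line:

```python
import math

C = 10e-9                 # 10 nF (illustrative capacitor value)
f0 = 20e3                 # desired cut-off frequency (illustrative)
R = 1.0 / (2 * math.pi * f0 * C)      # equation 4.2 solved for R
print(f"R = {R:.0f} Ohm")             # -> ~796 Ohm

def rc_lowpass(samples, dt, rc):
    """First-order discrete approximation of the RC low-pass filter."""
    alpha = dt / (rc + dt)
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)          # y' = (x - y)/RC, discretized
        out.append(y)
    return out
```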
Mean Value Extraction

Extracting the mean value of the CCD video signal turns out to be an important part of the signal processing circuit. This mean value extraction is important because the mean intensity of the hologram varies significantly from one region of the hologram to another, and consequently the intensity of light incident on the CCD array also varies significantly. Because the determination of the presence of a bubble within the image field is based on a simple threshold of the low-pass filtered signal, the variation of the background light intensity is quite significant.

A simple integrator circuit was first considered to perform this mean value operation, as shown below (Figure 4.7).

Figure 4.7: Schematic of integrator.

For this circuit,

V_i/R = −C dV_o/dt    (4.3)

so that

V_o(s) = −V_i(s)/(sRC)    (4.4)

and finally

V_o = −(1/RC) ∫₀ᵀ V_i(t) dt    (4.5)

Figure 4.8: Schematic of mean value extraction.

We want V_mean to represent the mean value of the given signal V_i(t) (Figure 4.8), so

∫₀ᵀ V_i(t) dt = V_mean × T    (4.6)

If we let V_o represent the mean value V_mean, that is, V_o = −V_mean, then

T = RC    (4.7)

The duration of the video signal output of the CCD array is T ≈ 2 ms, which allows R and C to be selected according to equation 4.7. The above is the basic mean value extraction method using an integrator. The output of the aforementioned simple integrator is negative compared with the input signal, so a more elaborate non-inverting integrator (Sedra, 1982) (Figure 4.9) was used instead.

Figure 4.9: Schematic of a positive integrator.

In Figure 4.9, the capacitor C is used as a load, and is supplied by a current I = V_i/R; thus its voltage V₂ is given by

V₂ = I/(jωC) = V_i/(jωRC)    (4.8)

That is,

V₂/V_i = 1/(jωRC)    (4.9)

which is the transfer function of an integrator,

V₂ = (1/RC) ∫₀ᵀ V_i(t) dt + V_c    (4.10)

where V_c is the voltage across the capacitor at t = 0.

It is clear that V_o = 2V₂ (because of the equality of the input voltages of an op-amp and the zero current draw of the op-amp). Assuming V_c = 0 when t = 0:

V_o = (2/RC) ∫₀ᵀ V_i(t) dt    (4.11)

Now,

∫₀ᵀ V_i(t) dt = V_mean × T    (4.12)

Assuming V_o = V_mean, we obtain finally

T = RC/2    (4.13)

The appropriate values of R and C are therefore specified by equation 4.13.

Bubble and Hologram Edge Detection

We are now ready to discuss the actual bubble detection hardware. The first step in bubble detection involves adding an offset voltage to the CCD mean value. This defines a threshold voltage. The low-pass filtered CCD array signal is compared to this threshold value. If the low-pass filtered signal exceeds the threshold, a bubble is assumed to be present. Thresholding is done in hardware by a comparator. The result of the bubble signal comparison is held by a flip-flop (Figure 4.10).

Figure 4.10: Schematic of signal detection circuit.

The circuits described above can be readily modified to detect the edge of the hologram. Hologram edge detection is accomplished by establishing a threshold voltage which is slightly less than the CCD saturation signal. The low-pass filtered signal is compared with this threshold voltage. In this way hologram edges were detected with a 100% success rate.

Figure 4.11 shows the entire signal processing and logical control circuit.

Figure 4.11: Schematic of the detection circuit.
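In software terms, the comparator logic amounts to the following sketch. This is my own digital analogue of the analog circuit, with an illustrative saturation margin; the real decisions are made in hardware, as described above:

```python
def detect_line(filtered, v_offset, v_saturation):
    """Digital analogue of the comparator logic of Figures 4.10 and 4.11:
    flag a bubble if the filtered signal rises above (mean + offset), and
    a hologram edge if it approaches the CCD saturation level."""
    v_mean = sum(filtered) / len(filtered)        # the integrator's job
    bubble = any(v > v_mean + v_offset for v in filtered)
    edge = any(v > 0.95 * v_saturation for v in filtered)  # "slightly less
    return bubble, edge                                    # than saturation"
```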
4.2.3 Software Design

The fast object detection electronics described above cannot operate in isolation; they must be interfaced with a computer. The purpose of the computer is to provide software control of the hardware. This software control consists of three aspects: initializing the hardware signals, providing the control signals for the hardware, and receiving input data from the detection board.

The detection board is connected to the computer via an 8255 I/O port device. The 8255 is a programmable peripheral interface device. Like many other I/O components, the function of the 8255 is to interface the peripheral equipment to the microcomputer system bus. There are 24 I/O pins on the 8255 which may be individually programmed in 2 groups of 12, or 3 groups of 8, and each group can be programmed to function as input or output.

In our case, we set up the 8255 as three ports: port A, port B and port C. Port A is set up as output, and ports B and C are set up as input.

Control signals are sent out via port A. Each bit can be used separately. These control signals are the pulses necessary to control the CCD exposure (optical integration time), a clear pulse to clear the bubble signal holder, a pulse to clear the edge signal holder, a clear pulse to clear the set-up holder for the LF 398, a reset pulse for the 4066B analog switch, and a control pulse for generating a hold signal for the LF 398 (refer to Figure 4.11).

The bubble signal and edge signal are input via port B, occupying two bits, B0 and B1. B0 represents the bubble signal and B1 represents the edge signal of the hologram. There are four situations: (1) B1B0 = 00, (2) B1B0 = 01, (3) B1B0 = 10, and (4) B1B0 = 11. Situation (1) means that no bubble and no hologram edge have been detected. Situation (2) means only a bubble has been detected. In situations (3) and (4), the edge of the hologram has been detected.

Synchronization of the hardware-software interface is an important aspect of the software design. The signal timing diagram is shown in Figure 4.12. The period between pulses (1-1) and (1-2) is the integration time (refer to Figure 4.12). After the second pulse the CCD video signal is driven out. The video signal covers 1700 µs. After that the LF 398 is signaled to hold the signal that corresponds with the mean value of the video signal. The first cycle of the video signal is used to calculate the mean value. Because the second cycle is used for bubble and hologram edge detection, all the signal holders must be cleared in the first cycle. After the second cycle of the CCD exposure, the bubble and edge signals can be read into the computer.

Figure 4.12: Synchronization of the different signals (CCD video signal, integration, sample/hold, clear/reset, and bubble/edge signals).
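The two-bit status read from port B can be decoded with a few lines of code. The following Python sketch illustrates the decoding of the four situations above; the byte is assumed to come from the 8255 driver, and the helper name is hypothetical.

```python
def read_status(port_b_byte):
    """Decode the detection-board status bits from a port B read.

    Bit 0 (B0) holds the bubble signal and bit 1 (B1) the hologram
    edge signal, matching situations (1)-(4) in the text."""
    bubble = bool(port_b_byte & 0x01)
    edge = bool(port_b_byte & 0x02)
    if edge:
        return "hologram edge detected"      # situations (3) and (4)
    if bubble:
        return "bubble detected"             # situation (2)
    return "nothing detected"                # situation (1)
```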
The CCD integration time is adjustable, but is largely determined by the intensity of the bubble image; without sufficient integrated image intensity, the CCD video signal will be poor due to the relative magnitude of the dark signal.

The bubble detection success rate is influenced by several factors. Due to the severe background noise, the object detection system occasionally mistakes some bright regions of background noise for bubbles, particularly when there is good contrast between the bright noise region and the surrounding dimmer noise. The scanning speed has an effect on the success rate of bubble detection too. If the scanning speed is too fast and the bubble size is much smaller than that of a typical one, the bubble will pass through the CCD image field before it can be detected. The selection of an appropriate offset threshold in the comparator circuit is also salient. It is rather difficult to select this offset value: if the threshold value is too small, the detection system will mistake noise for a bubble, and conversely, if too high a threshold is chosen, the system will mistake some bubbles for noise.

If the laterally compressed image of the bubbles and the linear CCD array are not precisely colinear, the non-colinearity can seriously degrade the effectiveness of the bubble detection system, too.

4.3.2 Improvement of Bubble Detection System Effectiveness

In order to raise the hologram scanning speed, we must correspondingly decrease the exposure time of the CCD array. In order to maintain an acceptable integrated light intensity at the CCD, it is necessary to change the optical design slightly. If the CCD sensor is set closer to the focal point of the cylindrical lens, the image intensity will be much stronger. The lateral width of the bubble decreases at the same time. However, the geometric size of the image varies linearly with the distance from the focal point, whereas the intensity of the image varies inversely with the third power of this distance:

$$W_{bubble} \propto d, \qquad I_{bubble} \propto d^{-3} \qquad (4.14)$$

Through a trial and error process, in which scanning speed and the CCD distance were varied while keeping track of the bubble detection failures, the scanning rate of the hologram was increased to 700 µm/s. False detections of bubbles (i.e. noise mistaken for bubbles) were simple to identify during the autofocussing stage of the bubble analysis, described in Chapter 5.

The colinearity of the laterally-compressed bubble image and the linear CCD array was adjusted carefully. A typical bubble image was used for this adjustment. The amplitude of the bubble image video signal was recorded as a function of time, as the CCD position was adjusted. The position of maximum signal amplitude was identified as the colinear position. For this optimum configuration, the video signal has the best achievable signal-to-noise ratio.

In conclusion, after appropriate selection of CCD integration time, scanning speed, threshold value, etc., a very low failure rate of 1% was achieved.
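As a quick worked example of this trade-off (a sketch assuming only the scalings in equation 4.14, with an arbitrary illustrative distance ratio):

```python
# Illustrative only: halving the CCD's distance d from the cylindrical-lens
# focal line halves the compressed bubble width but, per equation 4.14,
# increases the image intensity eightfold -- a net win for shorter exposures.
d_ratio = 0.5
width_ratio = d_ratio            # W_bubble ~ d
intensity_ratio = d_ratio ** -3  # I_bubble ~ d^-3
print(width_ratio, intensity_ratio)   # 0.5, 8.0
```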
An objective lens is used to magnify the real image volume, and the real image is projected onto the video camera (Figure 3.1 in Chapter 3). Movement of the hologram along the z (laser beam) axis alters the distance, u, between each bubble and the imaging lens, thus allowing each microbubble to be individually focussed. The focal position of a bubble is the position which satisfies the geometric optics constraint:

$$\frac{1}{u} + \frac{1}{v} = \frac{1}{f} \qquad (5.1)$$

where v is the distance of the video camera from the imaging lens (the bubble image volume is taken as an object), and f is the focal length of the imaging lens (Figure 5.1). Both upstream and downstream of this focal point the bubble image is said to be "defocussed" or "out-of-focus."

In the experimental configuration described in Chapter 3, the distance v is known and is equal to the distance between the fixed objective lens and the fixed video camera. The focal length, f, of the objective lens is known too. Therefore, the distance u between the bubble and the imaging lens, which is linearly related to the out-of-holographic-plane distance of the bubble in the original flow, is known, provided the image at v is in focus (Figure 5.1). This analysis leaves unaddressed the issue of what determines when an image on the video camera is focussed; that issue is discussed below.

Figure 5.1: Schematic of bubble image focussing.

Photographs of a typical reconstructed hologram microbubble, both in focus and out of focus, are shown below (Figure 5.2).

Figure 5.2: Photographs of a microbubble in and out-of-focus. (a) In-focus image of microbubble; (b) microbubble 2 mm out of the focal plane; (c) microbubble 10 mm out-of-focus.
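Since v and f are fixed, equation 5.1 can be inverted for u whenever a bubble is brought into focus. A minimal sketch follows; the focal length and camera distance are arbitrary illustrative numbers, not the actual bench values.

```python
def bubble_distance(f_mm, v_mm):
    """Solve the thin-lens relation 1/u + 1/v = 1/f for the
    bubble-to-lens distance u at the in-focus condition."""
    return 1.0 / (1.0 / f_mm - 1.0 / v_mm)

# e.g. a 50 mm objective with the camera fixed 500 mm away:
u = bubble_distance(f_mm=50.0, v_mm=500.0)   # ~55.6 mm
```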
5.1.2 Extraction of a Focussing Parameter

After questioning different human hologram readers, it was established that at least three criteria are used to differentiate an in-focus bubble location from an out-of-focus position:

1. the contrast between the bubble interior and exterior, i.e. the sharpness of the bubble edge (focussed bubbles were sharpest).

2. the size of the bubble (generally a minimum at the focussed location).

3. the smoothness of the bubble edge (focussed bubbles generally had the smoothest edges).

Stanton et al. (1984) have used the second of the above criteria for image focussing. Their approach relies on the fact that the size of the first Fraunhofer diffraction ring of a bubble in a hologram is linearly related to the distance of the bubble from its correct focal plane. Unfortunately, no Fraunhofer diffraction rings were visible in our reconstructed holograms. The rings are thought not to be visible because double-pulsed holograms of multiple non-spherical objects contained in a large volume of less than perfectly clear water are much noisier than the pristine holograms of perfect polystyrene spheres examined by Stanton. Consequently, it was not possible to employ their method on our holograms.

Following the failure of the Stanton method, an attempt was made to correlate the bubble image size with the distance from the bubble focal plane. As Figure 5.3 (of a typical bubble) illustrates, the bubble size is indeed a minimum at its focal point. However, due to the severe hologram speckle noise of the background, the convergence of the bubble image size along the z direction is far from uniform.

Figure 5.3: Focussing parameter using the image size of bubble.

The smoothness of the bubble image edge was also studied. The bubble images were first processed to extract the edge of the bubble, using the edge detection technique described later. The edge of the bubble thus determined was fit with a (smooth) ellipse. The rms error between the bubble edge data and the smooth, ellipse-fitted bubble edge data was calculated for each image along the depth direction. This error was taken as a focussing parameter. The basic idea here is that large portions of any silhouette of a real bubble should be almost perfectly represented by a fitted ellipse; deviations from an elliptical shape are indicators of a noisy bubble edge. At the focussed position the bubble image should have the smoothest edge contour, and thus the rms error should be a minimum. The extraction of this focussing parameter is, however, rather computationally intensive. It was also found to be less effective than the bubble focussing described below.

The gradient of intensity between the interior and exterior of the bubble was the final criterion studied. This gradient has been used as a focussing parameter by other researchers (e.g. Haussmann et al., 1980). Haussmann used a 2D differential operator to extract a focussing parameter. Instead of two-dimensional differentiation using a differential operator, we examined the use of a more efficient one-dimensional filter to extract the sharpness of the bubble edge.

First the approximate centroid of a microbubble was determined using a simple thresholding method. Then radial lines were drawn from the centroid at a large number of different angles. A box filter was used to filter the bubble image along each such radial line (Figure 5.4). The intensity variation along a particular radial line is shown in Figure 5.5, and the corresponding intensity gradient determined using the box filter is shown in Figure 5.6. Even though there are many peaks in Figure 5.6 due to the noise, the peak of maximal intensity should appear at the location of the bubble edge. If the angle of the radial line is fixed, when the depth (z coordinate) is changed, the edge-filtered intensity changes as well, and the maximum radial gradient should occur when the bubble is focussed.

Figure 5.4: One dimensional box filter applied to radial lines from centroid.

Figure 5.5: Intensity variation along a radial line prior to filtering.
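A sketch of this one-dimensional edge-sharpness measure in Python follows, assuming the image is a 2-D numpy array and the approximate centroid is already known; the box half-width and sampling are illustrative choices, not the thesis's exact filter.

```python
import numpy as np

def radial_gradient(image, cx, cy, theta, r_max, half_width=3):
    """Sample the image along one radial line from the centroid and
    return the largest box-filtered intensity gradient on that line."""
    r = np.arange(1, int(r_max))
    xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    profile = image[ys, xs].astype(float)
    # Box-filter gradient: mean of the leading window minus the trailing one.
    w = half_width
    grads = [profile[i:i + w].mean() - profile[i - w:i].mean()
             for i in range(w, len(profile) - w)]
    return max(np.abs(grads)) if grads else 0.0
```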
Owing to the substantial background noise on a hologram image, this maximum radial intensity gradient may not even occur at the radial location of the bubble edge, let alone correspond with the appropriate focal plane. To accommodate the image noise, the portion of the bubble edge with the greatest mean value of intensity was selected. Many radial lines were drawn through this portion, and a mean edge-filtered intensity gradient was calculated across this part of the bubble edge.

The process of selecting the "best" part of the bubble edge, i.e. the portion of greatest mean intensity gradient, was carried out as follows. Equispaced radial lines were first drawn from the centroid, then the bubble image was filtered along those radial lines using the box filter. For each filtered line, the bubble edge was assumed to be identified by the location of maximal radial intensity gradient. In this way a series of intensity gradients $I_0, I_1, \ldots, I_{M-1}$ (M is the total number of radial lines) was determined. The "best" (i.e. most contrasty) portion of the bubble image was selected by finding the value of i in Equation 5.2 which yielded the maximum mean intensity gradient, $\bar{I}_i$, along a sector of the bubble. N was typically chosen to be M/3, with M = 180. This value of N represents a compromise between larger values that give better accuracy, and smaller values that require fewer computations.

$$\bar{I}_i = \frac{1}{N} \sum_{n=i}^{i+N} I_n \qquad (5.2)$$

In Equation 5.2, if $n > M$, then $n = n - M$ (i.e. the index wraps around the bubble). The maximal mean intensity, $I_{max}$, was thus found, and was taken to be the autofocussing parameter. When the image plane changes in depth (z coordinate), this parameter changes (Figure 5.7). There is a small region near z = 0 in which the focussing parameter is maximal. This region, called the "near-focus" region, corresponds with the in-focus position of the bubble image. In this region the focussing parameter is not very sensitive to z direction changes, and any maximum of the focussing parameter in this region does not necessarily correspond with the true bubble focus location. Region (2) is called the "sensitive" region. In this region the focussing parameter is rather sensitive to z direction changes; the focussing parameter changes sharply. Region (3) is called the "far-focus" region; the focussing parameter is insensitive to z direction changes in this region. Noise dominates the signal in this region, making it impossible even to determine the direction of improving focus.

Figure 5.7: Focussing parameter for a typical microbubble.
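A compact Python rendering of the sector search in equation 5.2, assuming the per-ray gradients $I_0 \ldots I_{M-1}$ have already been computed (for example with the radial_gradient sketch above):

```python
import numpy as np

def focusing_parameter(gradients, sector_fraction=3):
    """Equation 5.2: the focussing parameter is the largest mean gradient
    over any contiguous sector of N = M/sector_fraction radial lines,
    with indices wrapping around the bubble (n > M maps to n - M)."""
    I = np.asarray(gradients, dtype=float)
    M = len(I)
    N = M // sector_fraction
    wrapped = np.concatenate([I, I[:N]])          # handle the wrap-around
    sector_means = [wrapped[i:i + N].mean() for i in range(M)]
    return max(sector_means)
```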
5.1.3 An Approach to Autofocussing of Bubble Image

The characteristic shape of the focussing parameter shown in Figure 5.7 virtually imposes an autofocussing algorithm. A rough scanning procedure was developed that guarantees that at least one scanning position is located within the "near-focus" region. Use of such a rough scanning procedure saves much time, due to the large reduction in the number of required calculations relative to a traditional step-by-step method (Haussmann et al., 1980).

Due to the arbitrariness of the first focal plane examined, rough scanning using an ever-widening scanning sweep was employed (Figure 5.8).

Figure 5.8: A schematic of a large step scanning procedure.

Position (1) is the beginning position of the large step scanning. Position (2) is the second scanning position, position (3) is the third scanning position, and position (4) is the fourth scanning point; scanning proceeds outwards in this fashion, expanding the scope of the sweep alternately to the left and right. Even though the length of each jump is variable, the distance between adjacent scanning positions is the same. The scanning does not stop until the difference between the maximal and minimal focussing parameters is greater than a defined threshold value, or the number of jump scans exceeds a limiting value. For example, suppose position (2) is the position of maximal focussing parameter, and position (4) is the position of minimal focussing parameter. If $|G_2 - G_4| >$ threshold value, where $G_2$ and $G_4$ represent the values of the focussing parameter, the large step scanning will stop. Otherwise, scanning continues until the difference is greater than the threshold value or a fixed large number of scans have been made. If the maximum number of scans has been reached, scanning stops, and it is assumed that no bubble exists within the autofocussing depth of field.

If the difference between the maximum and minimum focussing parameters is greater than the threshold value, a small step scanning procedure is begun to scan a small distance around the position of the maximal focussing parameter (Figure 5.9).

Figure 5.9: A schematic of a small step scanning.

The region of small step scanning is limited by the distance r (r = 5 mm). Scanning occurs at numerous locations between position (i) and position (ii). The focussing parameters are calculated and held for each such position. When small step scanning is finished, the focussing parameter data are saved. The precise location of the in-focus position is still not readily discernible: within the "near-focus" region there are several peaks, but the in-focus position of the bubble image is often not associated with any of these peaks. We therefore turned to data fitting to locate the in-focus position.

Because the shape of the parameter function resembles a Gaussian curve, a Gaussian function was selected as the fitting model (Equation 5.3):

$$G(z) = B \exp\left[-\left(\frac{z - E}{W}\right)^2\right] \qquad (5.3)$$

where B, E, and W represent the amplitude, center, and width of the Gaussian function respectively, G is the focussing parameter, and z is the focal plane distance.

The basic approach is as follows. A merit function that measures the agreement between the data and the fitting model is selected. Such a function is $\chi^2$, given by Equation 5.4. This function is arranged so that small values represent close agreement. The parameters B, E and W are then adjusted to achieve a minimum in the merit function, yielding best-fit parameters. In our case, only the center parameter is of interest: this best-fit center parameter is used to represent the in-focus position of the bubble image.

$$\chi^2(B, E, W) = \sum_{i=1}^{N} \left[\frac{G_i - G(z_i; B, E, W)}{\sigma_i}\right]^2 \qquad (5.4)$$

Figure 5.10 shows the excellent fit to the focussing parameter that results from Gaussian fitting.

Figure 5.10: Focussing parameter fitting using a Gaussian function.

Further details about the Gaussian (non-linear) data fitting are described in Appendix A.
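A minimal sketch of this fit using scipy follows; the synthetic data stand in for measured focussing parameters, and the thesis's own implementation uses the Marquardt method of Appendix A rather than this library call.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, B, E, W):
    """Equation 5.3: Gaussian model of the focussing parameter."""
    return B * np.exp(-((z - E) / W) ** 2)

# Synthetic focussing-parameter data, for illustration only.
z = np.linspace(-5.0, 5.0, 41)                      # mm
g = gaussian(z, B=175.0, E=0.3, W=2.0) + 5.0 * np.random.randn(z.size)

(B, E, W), _ = curve_fit(gaussian, z, g, p0=[g.max(), 0.0, 1.0])
print(f"in-focus plane at z = {E:.2f} mm")          # best-fit center E
```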
By performing "erosion" and"dilation" on the thresholded image, the small patches of noise are removed, and only alarge bubble, if present, is left. If no bubble is present after these steps, then the "bubble"detected by FODS is assumed to have been spurious.Figure 5.11: An image with a spurious bubble (bright region at the center of frame)5.2.2 Selection of Bubble for FocussingAfter the above procedures, we are confident at least one bubble exists in the aut-ofocussing window. There may, however, be more than one bubble in the autofocussingwindow. Only one bubble at a tim,e can be autofocussed.The first step in selecting the bubble for autofocussing is to evaluate the mean inten-sity within the autofocussing window area. Then an offset value is added to the meanChapter 5. AUTOFOCUSSING OF BUBBLE IMAGES^ 56intensity value to form a threshold value. The image is thresholded by this thresholdvalue; the image is changed to a binary image (grey level is 0 or 255, i.e. black or white).Image erosion and dilation are then applied to this thresholded image. Each bubblecentroid and mean radius is calculated. The bubble of greatest mean radius is selectedas the first autofocussing bubble. The maximum radius of this bubble is calculatedsimultaneously for subsequent use.5.2.3 Autofocussing of the Bubble ImageAs described in §5.1.3, the autofocussing procedure begins with large z-step scan-ning. During the large step scanning, whenever the difference between the maximaland minimal focussing parameter is greater than a preset threshold value, the large stepscanning will stops immediately. Otherwise this scanning continues until the limits of z-direction scanning have been reached. If the z-direction limits are reached, the differencebetween the maximum and minimum focussing parameters is compared with the secondthreshold value. It is assumed that the identified "bubble" can not be focussed, if thedifference is less than the second threshold.If the bubble can be focussed, the x — y — z position table motor moves the hologram tojust upstream of the position of maximum focussing parameter, as identified in the largestep scanning procedure. Small step scanning begins. The motor brings the hologramstep by step from upstream of the maximum parameter to downstream thereof. Duringthis procedure the focussing parameter as a function of z location are stored for furtherprocessing. The Gaussian fitting algorithm then evaluates the optimum fit parameters ofthe focussing parameter versus position, and the bubble focal plane is thus determined.Chapter 5. AUTOFOCUSSING OF BUBBLE IMAGES^57Bubble Number 1 2 3 4 5 6 7Absolute Error (pm) 89.50 19.55 394.55 94.55 14.85 -201.20 729.25Bubble Number 8 9 10 11 12 13 14Absolute Error (Am) 533.40 800.05 72.65 -105.00 132.70 52.40 155.45Bubble Number^15 16 17 18 19^20Absolute Error (Am)^269.70 -203.70 -112.55 -705.21 255.01^91.35Table 5.1: Table of absolute error5.3 Comparision with Human Focussed Images5.3.1 Definition of ErrorsThe focussed bubble z position determined by the autofocussing algorithm describedabove is not perfect—there is some error. Determining this error is non-trivial becausethe exact location of a bubble within a hologram is not known. Consequently, in orderto determine the error in the autofocussing-parameter-determined z location, we assumethe in focus position identified by skilled human operators looking at the same bubbleis indeed the correct bubble focus location. 
5.2.3 Autofocussing of the Bubble Image

As described in §5.1.3, the autofocussing procedure begins with large z-step scanning. During the large step scanning, whenever the difference between the maximal and minimal focussing parameter is greater than a preset threshold value, the large step scanning stops immediately. Otherwise this scanning continues until the limits of z-direction scanning have been reached. If the z-direction limits are reached, the difference between the maximum and minimum focussing parameters is compared with a second threshold value. It is assumed that the identified "bubble" cannot be focussed if the difference is less than this second threshold.

If the bubble can be focussed, the x-y-z position table motor moves the hologram to just upstream of the position of maximum focussing parameter, as identified in the large step scanning procedure. Small step scanning then begins. The motor brings the hologram step by step from upstream of the maximum parameter to downstream thereof. During this procedure the focussing parameter as a function of z location is stored for further processing. The Gaussian fitting algorithm then evaluates the optimum fit parameters of the focussing parameter versus position, and the bubble focal plane is thus determined.

5.3 Comparison with Human Focussed Images

5.3.1 Definition of Errors

The focussed bubble z position determined by the autofocussing algorithm described above is not perfect; there is some error. Determining this error is non-trivial because the exact location of a bubble within a hologram is not known. Consequently, in order to determine the error in the autofocussing-parameter-determined z location, we assume the in-focus position identified by skilled human operators looking at the same bubble is indeed the correct bubble focus location. Table 5.1 shows the absolute error between the focal locations of twenty microbubbles as determined by a human and again by the autofocussing algorithm.

Bubble Number        1       2      3       4      5      6        7
Absolute Error (µm)  89.50   19.55  394.55  94.55  14.85  -201.20  729.25

Bubble Number        8       9      10      11      12     13     14
Absolute Error (µm)  533.40  800.05 72.65   -105.00 132.70 52.40  155.45

Bubble Number        15      16      17      18      19     20
Absolute Error (µm)  269.70  -203.70 -112.55 -705.21 255.01 91.35

Table 5.1: Table of absolute error.

A more useful measure than this absolute error is a relative error, which may be defined in one of two ways. One may define a relative error in terms of the particular physical phenomena being studied using holography. In this study, the holograms studied were taken in the flow field generated by a wing tip vortex (Figure 5.12). The relationship that exists in a vortex between the tangential velocity, $V_\theta$, and the radial coordinate, r, is shown in Figure 5.13.

Figure 5.12: Schematic of the flow around a vortex.

Figure 5.13: A schematic of the relationship between $V_\theta$ and r.

The relevant dimension in this flow is the vortex core radius, $r_{core}$, which can be used to calculate the relative error. The relative error is then defined by:

$$Error_{relative} = \frac{Z_{auto} - Z_{human}}{r_{core}} \qquad (5.5)$$

For the flow studied, $r_{core} = 3.5\,\mathrm{mm}$. The relative errors were calculated using Equation 5.5; Table 5.2 shows the results.

Bubble Number       1      2      3      4      5      6      7
Relative Error (%)  2.56   0.56   11.26  2.70   0.43   -5.74  20.83

Bubble Number       8      9      10     11     12     13     14
Relative Error (%)  15.24  22.87  2.07   -3.00  3.79   1.50   4.44

Bubble Number       15     16     17     18      19     20
Relative Error (%)  7.70   -5.81  -3.21  -21.43  7.29   2.61

Table 5.2: Table of relative error.

A second way to define the relative error is with reference to the only viable alternative to computer autofocussing: the error involved in human focussing of bubble images. To determine the error involved in human focussing, three competent hologram readers were shown the same hologram, and were asked to measure the plane of focus of each of 20 different bubbles in the hologram. For each bubble the average of the human readings, $\bar{Z}_{human}$, was deemed to be the 'correct' value, and the discrepancy between this average and an individual measurement was an indicator of the 'human error'. It is then straightforward to define a relative focussing error as:

$$E_{depth} = \frac{Z_{auto} - \bar{Z}_{human}}{\sigma_{human}} \qquad (5.6)$$

where $Z_{auto}$ represents the bubble position determined by the autofocussing algorithm, and $\sigma_{human}$ represents the standard deviation of the human measurements ($\sigma_{human} \approx 300\,\mu\mathrm{m}$).

This bubble focussing depth error is shown in Figure 5.14. 75% of the computer-analyzed focal planes were identified correctly within the range of human error, and none of the focal plane errors was as much as three times the human error.

Figure 5.14: Bubble focussing depth error.
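Both error measures reduce to one-line computations; a sketch using the values quoted above (distances in micrometres):

```python
import numpy as np

def depth_errors(z_auto, z_human_mean, r_core=3.5e3, sigma_human=300.0):
    """Equations 5.5 and 5.6: relative error against the vortex core
    radius, and the error normalized by the human standard deviation."""
    abs_err = np.asarray(z_auto, dtype=float) - np.asarray(z_human_mean, dtype=float)
    return abs_err / r_core, abs_err / sigma_human

rel, norm = depth_errors(z_auto=[89.5], z_human_mean=[0.0])
# bubble 1: rel ~ 0.026 (the 2.56% of Table 5.2), norm ~ 0.30 human errors
```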
5.3.2 Analysis of Error

With the discussion of the autofocussing error behind us, we may proceed to explain the sources of this error and explore possibilities for reducing it.

The background noise is the major contributor to the focussing error. The background noise, which results from laser speckle and tiny particles in the water, makes interpretation of bubble images (for humans and computers alike) more difficult, because bright noise located near a bubble is easily misinterpreted as part of the bubble. Noise cannot be focussed, and hence its presence complicates autofocussing.

A second significant source of error is the limited resolution of the video camera. Because the video camera/frame grabber produces an image of only 488 x 512 pixels, and a typical bubble occupies approximately 1/10 of a video frame, typical bubbles are only about 50 pixels in diameter. The pixelation error incurred on recording the image of a bubble is significant.

A third significant source of error arises from a measured unsteadiness in the output of the video camera. The hologram was held fixed while the data displayed in Figure 5.15 were acquired at different times. The fluctuations in the bubble focussing parameter signal are indicative of variations in the contrast between the bubble and the surroundings, caused by a deficiency in the video camera signal.

Figure 5.15: Fluctuation with time of the focussing parameter.

Improving the autofocussing (i.e. reducing its error) will likely be achieved by:

1. Using a higher resolution, more stable video camera

2. Using various optical techniques to reduce the hologram noise

3. Possibly, by the development of a better autofocussing algorithm

Even with these improvements, it is doubtful that autofocussing to an accuracy better than ±70 µm (the limit on human focussing with a near perfect optical setup) will ever be attained.
Chapter 6

BUBBLE IMAGE ANALYSIS

6.1 Introduction

Bubble image analysis is the final stage of automated hologram analysis. Once a particular bubble has been focussed, the next stage of holographic analysis is comparatively straightforward, involving only the manipulation of two-dimensional focussed images of a small number of simple objects. The only confounding factor in the analysis is the highly noisy background behind each object within the image, which makes object recognition difficult.

The information we wish to extract through bubble image analysis includes:

1. the diameters of bubbles within the image (related to the pressure field around the bubble that existed when the hologram was taken).

2. the displacement of each bubble between the two laser pulses used to create the double-pulsed hologram (proportional to the instantaneous velocity of the bubble).

6.2 Image Analysis

6.2.1 Preprocessing of Bubble Image

The first step in bubble image analysis is pre-processing of the image to obtain a crude estimate of the bubble centroid location and bubble mean radius. Subsequent processing was facilitated by knowledge of these bubble parameters.

Because only crude estimates of bubble parameters are required at this stage in the image processing, a compressed video image (with 4x fewer bits of information) was used in processing.

In order to estimate the bubble centroid and radius, it is necessary to distinguish between "bubble" and "background" pixels in the image. Due to the lack of a clear separation between the intensity of bubble pixels and background pixels (Figure 6.1), it was not possible to use the segmentation threshold value method of Zarschizky et al. (1983) for this purpose.

Figure 6.1: Histogram of bubble image.

An alternative method of bubble pixel identification was developed. The area of the video camera field of view used for autofocussing was scanned to find the mean pixel intensity. This mean intensity varies substantially from one region of the hologram to another (as a result of non-uniform hologram lasing intensity). An experimentally-determined constant offset was added to the mean value, yielding a threshold value. The autofocussing window image was then thresholded using this value.

Following thresholding, the bright bubble and considerable background noise remained. Noise suppression using common erosion-dilation (also called "shrink and blow"; see Appendix B) techniques eliminated most of this noise. The remaining, comparatively larger regions of bright noise were removed by comparing their areas to that of the smallest bubble likely to appear on a hologram. Because even large regions of noise are small compared to a bubble, all noise regions of the background were successfully removed by simply nulling small regions after erosion-dilation.

The next step in image pre-processing involves outlining the boundary of each bubble using an edge tracking technique (Appendix C). Then the centroid, and the maximal and mean radius, of each bubble were estimated based on the bubble edge so identified. Photographs illustrating the different preprocessing procedures are shown in Figure 6.2.

Figure 6.2: Photographs of preprocessing steps applied to a typical hologram image. (a) A typical image with many bubbles; (b) image preprocessing steps (compression, thresholding, erosion, dilation) applied to the above image.

6.2.2 Further Image Analysis Procedures

Once the crude estimates of bubble centroids and maximum radii were obtained using the pre-processing procedure, all dimensions thus obtained were multiplied by the compression factor to find their size in the original image. The original image was used for subsequent processing, the purpose of which was to perform more accurate bubble edge detection.

6.2.2.1 Box-Filtering

The first step in this phase of image processing was to apply the one-dimensional box filter (Figure 5.4) along rays directed radially outward from the centroidal location found in the pre-processing step. The portion of each ray filtered in this step ranged between $0.4R_{max}$ and $1.5R_{max}$, corresponding with the region of the image in which the bubble edge could be found. For one such ray the variation of intensity gradient with radius is shown in Figure 5.6. The first large peak in this curve is assumed to mark a bubble edge point.

6.2.2.2 Crude noise suppression

Following box filtering, most of the bubble edge is clearly defined (Figure 6.3), but some portions of the edge are still ill-defined because of the background noise.

Figure 6.3: Typical bubble edge points identified during one-dimensional box filtering.

Figure 6.4: Radial distance of bubble edge points versus angular coordinate.

Preliminary bubble edge noise suppression consists of deleting "edge" pixels located far from the main contour of the bubble. The central concept of crude noise suppression is to consider all the pixels in a (2N+1) by (2N+1) square centered on the bubble edge pixel. For example, if the pixel (i, j) is identified as an edge pixel, then the square area bounded by pixels (i-N, j-N), (i+N, j-N), (i-N, j+N) and (i+N, j+N) is examined. N controls the size of the square.

$$N_p = \sum_{x=i-N}^{i+N} \sum_{y=j-N}^{j+N} I_{x,y}, \qquad I_{x,y} = \begin{cases} 1 & \text{edge point} \\ 0 & \text{otherwise} \end{cases} \qquad (6.1)$$

If the sum $N_p$ is greater than a threshold value, $N_t$, the pixel is accepted as an edge pixel; otherwise it is deemed a noise pixel.
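Equation 6.1 is a simple neighbourhood count; a sketch over a binary edge map follows (the values of N and $N_t$ are illustrative only):

```python
import numpy as np

def crude_noise_suppression(edge_map, N=3, Nt=5):
    """Keep an edge pixel only if its (2N+1) x (2N+1) neighbourhood
    contains more than Nt edge pixels (equation 6.1)."""
    kept = np.zeros_like(edge_map, dtype=bool)
    ys, xs = np.nonzero(edge_map)
    for y, x in zip(ys, xs):
        window = edge_map[max(0, y - N):y + N + 1, max(0, x - N):x + N + 1]
        if window.sum() > Nt:
            kept[y, x] = True
    return kept
```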
Figure 6.3 shows the bubble edge before this crude noise suppression. After the noise suppression, some pixels which are far away from the main contour of the bubble are removed, and the result shows an obvious improvement (Figure 6.5). This improvement is also evident from the angular variation of the edge pixel radius (Figure 6.6).

Figure 6.5: Typical bubble edge after crude noise suppression.

Figure 6.6: Radial distance of edge points after crude noise suppression.

6.2.2.3 Link procedure

As Figure 6.5 shows, the bubble edge contour, following crude noise suppression, may still contain large regions of pixels which are substantially detached from most of the bubble edge pixels (e.g. the pixels near x = 195, y = 15 in Figure 6.5). The link procedure works by connecting together, in sequence, edge pixels that are close together in space.

Figure 6.7: Large noise portion of edge.

The edge pixels were linked in succession, according to the angular coordinate, θ, of each edge pixel. If the radial difference between two edge pixels which are closest in θ is greater than a threshold value, a large region of edge noise is deemed to be present. A scanning procedure was developed to scan the space around the discontinuity in radial distance to identify the most probable next true edge pixel; that is, the procedure for linking edge point (1) with edge point (2) in Figure 6.7.

Imagine the bubble edge has been identified up to edge pixel (1) in Figure 6.7, and let edge pixel (1) correspond with point 'a' in Figure 6.8a. First, the eight pixels around 'a' are scanned in the order shown in Figure 6.8b. If another edge pixel is found, it is linked with the pixel at 'a'; otherwise a somewhat broader region is examined. The next such region to be examined is centred on node point 'b', one Δθ away from 'a' at the same radial distance as 'a'. The same pixel scanning sequence shown in Figure 6.8b is repeated. If another edge pixel is still not identified, the regions around nodes 'c', 'd', 'e', 'f', 'g', 'h', 'i', and 'j' are scanned in sequence, until another edge pixel is found. This scanning procedure biases the edge linking so as to create a smoother bubble (i.e. more uniform radius of curvature) rather than a more contiguous bubble edge.

Figure 6.8: Scanning sequence. (a) Node point sequence; (b) pixel scanning sequence.

Bubble edge linking actually serves two functions: one is noise suppression, and the other is to link bubble edge pixels to generate a closed curve. The former purpose is clearly accomplished by our algorithm (Figures 6.9 and 6.10), and the latter purpose is necessitated by the requirements of the centroid evaluation algorithm, described in the following sections.

Figure 6.9: Typical bubble after link procedure.

Figure 6.10: Radial distance of edge points after the link procedure.
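A much-simplified sketch of the linking idea follows: it orders edge points by angle about the centroid and flags the radial discontinuities that the full node-scanning search of Figure 6.8 must bridge (the jump threshold is an illustrative value).

```python
import numpy as np

def angular_order_and_gaps(points, centroid, jump_threshold=8.0):
    """Order edge points by angle about the centroid and flag pairs of
    theta-adjacent points whose radial difference exceeds a threshold,
    i.e. the discontinuities the link procedure must bridge."""
    cx, cy = centroid
    pts = np.asarray(points, dtype=float)
    theta = np.arctan2(pts[:, 1] - cy, pts[:, 0] - cx)
    radius = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    order = np.argsort(theta)
    r = radius[order]
    gaps = np.abs(np.diff(np.append(r, r[0]))) > jump_threshold   # closed contour
    return pts[order], np.nonzero(gaps)[0]
```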
6.2.2.4 Edge smoothing

After the crude noise suppression and link procedures, the bubble edge is fairly well defined, but there remains some high spatial frequency noise along the edge contour. An edge smoothing procedure was developed to suppress these high frequency components. Edge smoothing is accomplished by representing the bubble edge contour by a function, and using a low-pass filter to reduce the high frequency noise on this function.

The edge of a bubble to be smoothed is represented as two coordinate functions of a curve length parameter, s:

$$x = x(s) \qquad \text{and} \qquad y = y(s) \qquad (6.2)$$

In order to filter out high frequency components in this curve, we first tried convolving these functions with a one-dimensional low-pass (averaging) filter, and we observed an improvement in the edge shape. We then tried convolving this curve with a one-dimensional Gaussian filter. The result was better still.

Consider a Gaussian function, $F_G$, with a standard deviation $\sigma$:

$$F_G(s) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-s^2/2\sigma^2} \qquad (6.3)$$

Then, let X(s) and Y(s) represent the convolution of this Gaussian with x(s) and y(s) respectively:

$$X(s) = F_G(s) \otimes x(s) \qquad \text{and} \qquad Y(s) = F_G(s) \otimes y(s) \qquad (6.4)$$

X(s) and Y(s) will be smoothed versions of x(s) and y(s); the degree of smoothing will depend on the selected value of $\sigma$. A shortcoming of all averaging functions, and a Gaussian filter is no exception, is that shrinkage of the bubble results from the averaging. In fact, it can be shown that edge pixels migrate toward the center on application of the convolution. The extent of this migration increases as the radius of curvature decreases and as $\sigma$ increases.

However, since this shrinkage depends only on the amount of smoothing and the local curvature, it can be corrected using a special compensating technique (Lowe, 1989). The basic idea of this technique is to predict the degree of shrinkage for each point of the smoothed curve X(s) and Y(s) as a function of the degree of smoothing $\sigma$ and the measured local curvature related to X''(s). The shrinkage at each edge point can be represented as

$$\Delta X(s) = r\left(1 - e^{-\sigma^2/2r^2}\right) \qquad (6.5)$$

where r is the local radius of curvature of x(s).

In fact, the original curvature r is unknown, but the second derivative of the smoothed curve can be used to calculate it. The measured second derivative of the smoothed curve is

$$X''(s) = \frac{e^{-\sigma^2/2r^2}}{r} \qquad (6.6)$$

For small values of $\sigma$, equation 6.6 shows that $r \approx 1/X''$. We used a Gaussian filter with a small standard deviation ($\sigma = 5$), for which this approximation is quite good.

The elimination of the shrinkage effect was carried out by subtracting the error value (equation 6.5) from the original smoothed value. As Figure 6.11 demonstrates, the result following Gaussian filtering with the shrinkage compensation was excellent. The improvement is also evident in Figure 6.12.

Figure 6.11: Typical bubble edge after smoothing.

Figure 6.12: Radial distance of edge points after edge smoothing.
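A sketch of the smoothing-plus-compensation step follows, using scipy's wrap-around Gaussian convolution for the closed edge. As a simplification, the radius of curvature in equation 6.5 is approximated by the distance from the centroid, which is adequate only for near-circular contours; σ is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_closed_edge(x, y, sigma=5.0):
    """Gaussian-smooth a closed edge curve x(s), y(s) and add back the
    shrinkage predicted by equation 6.5 (Lowe, 1989)."""
    X = gaussian_filter1d(np.asarray(x, float), sigma, mode="wrap")
    Y = gaussian_filter1d(np.asarray(y, float), sigma, mode="wrap")
    # Outward normal direction, taken from the centroid of the contour.
    cx, cy = X.mean(), Y.mean()
    nx, ny = X - cx, Y - cy
    norm = np.hypot(nx, ny)
    # Approximate the local radius of curvature r by the centroid distance
    # (the thesis uses r ~ 1/X'' from equation 6.6 instead).
    r = norm
    shrink = r * (1.0 - np.exp(-sigma**2 / (2.0 * r**2)))   # equation 6.5
    return X + shrink * nx / norm, Y + shrink * ny / norm
```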
The maximal and mean radius ofeach bubble is evaluated at the same time for the purpose of subsequent bubble pairing.Photographs illustrating the different procedures of image analysis and noise suppres-sion are shown in Figure 6.13 and Figure 6.14.6.2.3 Bubble Displacement and Bubble DiameterAfter centroid evaluation, twice the mean radius of each bubble is our best esti-mate of the diameter of each bubble. The displacement of each bubble between the twolaser pulses must be evaluated by a further processing technique. We chose the patchcorrelation technique to find the 'twin' of each bubble within the video camera frame.The methodology we adopted for patch correlation is as follows. First, a knowledgeof the physics of the flow in which the bubbles were convected allowed us to put an upperbound on the displacement of any bubble between the two laser pulses. Assume a bubbleto be the original one. Only bubbles appearing with the distance away from the originalbubble equal to or less than this maximum displacement of a particular bubble werecandidate 'twins' of the first bubble. The field of potential 'twins' was further thinnedby requiring that the mean and maximum radius of all potential twins was fairly closeto that of the original bubble.After this preliminary comparison phase, there were just a few (typically one ortwo) candidate bubble 'twins' remaining, and patch correlation was begun. A patch ofthe video image was taken centred on the centroid of the original bubble, and differentpatches were similarly taken around the centroids of each of the candidate twins. Allimage patches were normalized (Fua, 1991).Each candidate image patch was then correlated with the original bubble patch, withthe centroids of each bubble matched. The centroids of the two images were then offset byamounts dx and dy, and the correlation repeated. dx and dy were allowed to range fromChapter 6. BUBBLE IMAGE ANALYSIS^ 77a. Bubble edge image after edge detection procedureb. Bubble edge image after crude noise suppressionFigure 6.13: Photographs of different stages in the preliminary image analysis.Chapter 6. BUBBLE IMAGE ANALYSIS^ 78a. Bubble edge image after edge link procedureb. Bubble edge image after edge smoothing procedureFigure 6.14: Photographs of different stages in later image analysis.Chapter 6. BUBBLE IMAGE ANALYSIS^ 79-2 to +2 pixels. The particular values of dx and dy yielding the maximum correlationscore S (refer to Equation 6.8) were the best fit of the image data (a maximum inthe image cross-correlation function); the pairs of bubbles with maximum correlationscore were deemed to be 'twins.' Bubble displacements could then be calculated asDx^Xcoardinatebubbledisplacement = Xcentraid(ariginalbubble)^Xcentroid(twin), and likewise for D.Figure 6.15: Schematic of patch correlationThe correlation score Rxy was evaluated as:Ei j) - h)(h(i + dx, j dy) — 12))= ^2Rxy 0 ^- 102)(E•(I2(z + dx, j + clY) — 12)2)(6.8)where /1 and 12 are the original bubble patch image and the 'twin' bubble patch imageintensities, 11 and /2 are their average values over the correlation window, and dx anddy represent the shifts around the bubble centroids.Chapter 6. BUBBLE IMAGE ANALYSIS^ 806.3 Results and Comparison with the Human AnalysisWith the discussion of the image analysis procedures completed, we may turn to adiscussion of the effectiveness of the analysis techniques. 
6.3 Results and Comparison with the Human Analysis

With the discussion of the image analysis procedures completed, we may turn to a discussion of the effectiveness of the analysis techniques. The results of the bubble diameter and displacement evaluations by computer were compared with the results of human operator analysis because, although human operator image analysis is notoriously tedious and slow, it is reliable and relatively accurate.

An experiment to measure the effectiveness of the image analysis was conducted as follows. Three human operators were shown the same bubble images, and were asked to measure the bubble diameters and bubble displacements. A total of 20 bubble images were measured by human operators and subsequently by the computer-based image analysis techniques. If we take the average of the human operator readings as the 'correct' value, the relative error between the human operators' measurements and the computer analysis can be computed from equation 6.9:

$$E_{relative} = \frac{V_{computer} - V_{human}}{V_{human}} \qquad (6.9)$$

where $V_{human}$ is the 'correct' value, and $V_{computer}$ is the value measured using the image analysis algorithm.

An alternative approach to assessing the accuracy of the image analysis procedures is to compare the relative error described above with a typical 'human error.' The average of the readings by human operators was deemed to be the 'correct' value, and the discrepancy between this average and individual human measurements was an indicator of the 'human error.' The relative error was normalized by the 'human error,' as indicated by equation 6.10:

$$E = \frac{V_{computer} - V_{human}}{E_{human}} \qquad (6.10)$$

where $E_{human}$ is the 'human error.'

The differences between human-measured and computer-measured bubble displacements and diameters, normalized by the 'human error', are plotted in Figure 6.16 and Figure 6.17.

Figure 6.16: Bubble displacement error.

Figure 6.17: Bubble diameter error.

The computer-based analysis (Figure 6.16) is most effective at determining the bubble displacement. 75% of the bubble displacements measured by computer were within the human error bounds, and all displacements were within 1.5 times the human error. Computerized image analysis measurements of bubble displacements are therefore essentially as accurate as the corresponding human operator measurements. There was, however, a greater discrepancy between human analysis and computer-based analysis measurements of bubble diameter (Figure 6.17). 50% of the measured bubble diameters were within the range of human error, and 95% were within two times this range: an acceptable level of error for most purposes. The comparatively large normalized bubble diameter error can be attributed to two factors. Small errors in bubble diameter result from small bright patches of noise located near a bubble being mistaken for part of the bubble. Large errors in the bubble diameter determination can result when two separate bubble images overlap, and are treated together as one bubble.

Chapter 7

Concluding Remarks

7.1 Summary

This thesis described, and demonstrated the effectiveness of, a particular strategy for the automated analysis of double-pulsed holograms of microbubbles. The strategy is based on our intention to develop simple but effective algorithms, methods, and facilities to improve the performance of the hologram reconstruction and image analysis system. The strategy for automated analysis consists of three steps: Fast Object Detection, Autofocussing, and Bubble Image Analysis.
This three-step procedure results in a remarkably efficient, effective, and automated bubble analysis system, which operates at nearly the rate of a trained human operator.

7.1.1 Fast Object Detection

Fast object detection was accomplished by using a cylindrical lens to compress the image volume laterally, and a CCD with dedicated electronics to 'read' the compressed image. Bright large objects (bubbles), both in and out of focus, could be detected reliably using this system. The whole CCD-based system operated at the human rate.

7.1.2 Bubble Autofocussing Algorithm

Bubble autofocussing is based on the following simple observation: an object displays a sharper edge when it is focussed than does its defocussed image. A simple computer-based autofocussing algorithm using the image generated by a video camera connected to a frame grabber was developed. This algorithm used one-dimensional box-filtering, applied to rays directed radially outward from the approximate bubble centroid, to identify the bubble edges. The sharpness of the bubble edge along a particular ray is related to the magnitude of the box-filtering output at the edge; the average of such sharpness around the entire bubble was defined to be a focussing parameter. This focussing parameter, when plotted as a function of distance away from the correct bubble focal plane, is well approximated by a noisy Gaussian function centred at the focal plane. The focussing parameter versus out-of-plane distance was statistically fit with a Gaussian function, and the correct bubble focal plane was thus identified very accurately.

7.1.3 Bubble Image Analysis

Following bubble focussing, the task of bubble image analysis is fairly straightforward. The first step in the analysis is the identification of the bubble mean radius, which is accomplished by identifying the bubble edges as described above, and then using various noise suppression techniques to smooth and link the bubble edge pixels. Once the edge pixels were linked, the bubble centroid and mean radius were readily evaluated. Knowledge of these crucial bubble characteristics allowed for the application of a patch correlation procedure to determine the bubble's displacement during the time between the two double-pulsed hologram laser pulses.

7.2 Recommendations for Future Work

The following improvements to the automated hologram analysis system are suggested for future workers:

• Decrease the exposure time of the linear CCD array through changes to the optical and electronic design, so that the CCD scanning speed can be increased.

• A microprocessor-based CCD scanning control board is needed. Such a board would work independently from the host computer, speeding up the Fast Object Detection System substantially.

• Another control board is needed to manage step scanning in the autofocussing process. This board would also improve the overall system efficiency.

Bibliography

[1] Adrian, R. J., 1986, "Multi-Point Optical Measurements of Simultaneous Vectors in Unsteady Flow - A Review," Int. J. Heat and Fluid Flow, Vol. 7, pp. 127-145.

[2] Belz, R. A., and Shofner, F. M., 1972, "Characteristics and Measurements of an Aperture-Limited In-Line Hologram Image," Applied Optics, Vol. 11, No. 10, pp. 2215-2222.

[3] Canny, J., 1986, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6, pp. 679-698.

[4] Caulfield, H. J., 1985, "Automated Analysis of Particle Holograms," Optical Engineering, Vol. 24, No. 3, pp. 462-463.
[5] Craig, J. E., 1984, "Conventional and Liquid Metal Droplet Breakup in Aerodynamic Nozzle Contractions," AIAA 22nd Aerospace Science Meeting, Paper No. 84-0201, Reno, NV.

[6] Deriche, R., 1987, "Using Canny's Criteria to Derive a Recursively Implemented Optimal Edge Detector," International Journal of Computer Vision, pp. 167-187.

[7] Dimotakis, P. E., Debussy, F. D., and Koochesfahani, M. M., 1981, "Particle Streak Velocity Field Measurements in A Three-dimensional Mixing Layer," Phys. Fluids, Vol. 24, pp. 995-999.

[8] Ewan, B. C. R., 1979, "Holographic Particle Velocity Measurement in the Fraunhofer Plane," Applied Optics, Vol. 18, No. 5, pp. 623-626.

[9] Fairchild, 1989, "CCD Sensor System and Developmental Technology," 1989 Fairchild Weston CCD Imaging Databook, Milpitas, CA, pp. 25-32.

[10] Forbes, S. J., and Kuehn, T. H., 1991, "Fraunhofer Holography of Small Particles," Applied Physics B, Vol. 52, pp. 305-310.

[11] Fua, P., 1991, "A Parallel Stereo Algorithm that Produces Dense Depth Maps and Preserves Image Features," INRIA Technical Report No. 1369.

[12] Galbiati, L. J., 1990, "Machine Vision and Digital Image Processing," Prentice-Hall, New York.

[13] Gates, E. M., and Bacon, J., 1978, "Determination of Cavitation Nuclei Distribution by Holography," Journal of Ship Research, Vol. 22, No. 1, pp. 29-31.

[14] Green, S. I., 1991, "Correlating Single-Phase Flow Measurements with Observations of Trailing Vortex Cavitation," Journal of Fluids Engineering, Vol. 113, pp. 125-129.

[15] Green, S. I., and Lin, G., 1991, "Computer-Aided Analysis of Reconstructed Holographic Images," ASME Cavitation and Multiphase Flow Forum, Portland, OR.

[16] Green, S. I., and Acosta, A. J., 1991, "Unsteady Flow in Trailing Vortices," J. Fluid Mech., Vol. 227, pp. 107-134.

[17] Harris, C. G., and Pike, J. M., 1988, "3D Positional Integration from Image Sequences," Image and Vision Computing, Vol. 6, No. 2, pp. 87-90.

[18] Hobson, P. R., 1988, "Precision Coordinate Measurements using Holographic Recording," Journal of Physics E, Vol. 21, No. 2, pp. 139-145.

[19] Horn, B. K. P., 1986, "Robot Vision," McGraw-Hill, New York, U.S.A.

[20] Haussmann, G., and Lauterborn, W., 1980, "Determination of Size and Position of Fast Moving Gas Bubbles in Liquids by Digital 3-D Image Processing of Hologram Reconstructions," Applied Optics, Vol. 19, No. 20, pp. 3529-3535.

[21] Lee, Y. J., 1973, "An Application of Holography to the Study of Air-Water Two-phase Critical Flow," Ph.D. thesis, University of Washington, Seattle, WA.

[22] Lee, Y. J., and Kim, J. H., 1986, "A Review of Holography Applications in Multiphase Flow Visualization Study," Journal of Fluids Engineering, Vol. 108, pp. 279-288.

[23] Lowe, D. G., 1989, "Organization of Smooth Image Curves at Multiple Scales," International Journal of Computer Vision, Vol. 3, pp. 119-130.

[24] Malyak, P. K., and Thompson, B. J., 1984, "Particle Displacement and Velocity Measurement Using Holography," Optical Engineering, Vol. 23, pp. 567-576.

[25] Marko, K. A., and Rimai, L., 1985, "Video Recording and Quantitative Analysis of Seed Particle Track Images in Unsteady Flow," Applied Optics, Vol. 24, pp. 3666-3672.

[26] Marquardt, D. W., 1963, J. Soc. Ind. Appl. Math., Vol. 11, pp. 431-441.

[27] Marr, D., and Hildreth, E., 1980, "Theory of Edge Detection," Proc. R. Soc. Lond. B, Vol. 207, pp. 187-217.

[28] McKee, J. W., and Aggarwal, J. K., 1975, "Finding the Edges of the Surfaces of Three-Dimensional Curved Objects by Computer," Pattern Recognition, Vol. 7, pp. 25-52.
[29] Meynart, R., 1983, "Instantaneous Velocity Field Measurements in Unsteady Gas Flow by Speckle Velocimetry," Applied Optics, Vol. 22, No. 4, pp. 535-540.

[30] Modestino, J. W., and Fries, R. W., 1977, "Edge Detection in Noisy Images Using Recursive Digital Filtering," Comput. Graphics Image Processing, Vol. 6, pp. 409-433.

[31] Payne, P. R., Carder, K. L., and Steward, R. G., 1984, "Image Analysis Techniques for Holograms of Dynamic Oceanic Particles," Applied Optics, Vol. 23, No. 2, pp. 204-210.

[32] Pratt, W. K., 1991, "Digital Image Processing," John Wiley & Sons, Inc., New York.

[33] Press, W. H., Flannery, B. P., Teukolsky, S. A., and Vetterling, W. T., 1989, "Numerical Recipes: The Art of Scientific Computing," Cambridge University Press, Cambridge, Great Britain.

[34] Prikryl, I., and Vest, C. M., 1982, "Holographic Imaging of Semitransparent Droplets or Particles," Applied Optics, Vol. 21, No. 14, pp. 2541-2547.

[35] 1991, "Optics for Industry," Rolyn Optics Company, Covina, CA, pp. 34-35.

[36] Schmidt-Harms, C. A., 1984, "Velocimetry of Moving Phase Plate Using Laser Speckle Patterns," Applied Optics, Vol. 23, No. 14, pp. 2353-2358.

[37] Sedra, A. S., and Smith, K. C., 1982, "Microelectronic Circuits," CBS College Publishing, New York, pp. 71-87.

[38] Slessor, M. D., and Green, S. I., 1992, "A Simple and Low-Cost CCD-Based Imaging System," Meas. Sci. Technol., Vol. 3, pp. 421-423.

[39] Stanton, A. C., Caulfield, H. J., and Stewart, G. W., 1984, "An Approach for Automated Analysis of Particle Holograms," Optical Engineering, Vol. 23, No. 5, pp. 577-582.

[40] 1990, "Optoelectronics and Image Sensors," Texas Instruments, pp. 7-33 to 7-48.

[41] Trolinger, J., Belz, R. A., and Farmer, W. M., 1969, "Holographic Techniques for the Study of Dynamic Particle Fields," Applied Optics, Vol. 8, No. 5, pp. 957-961.

[42] Trolinger, J., 1975, "Particle Field Holography," Optical Engineering, Vol. 14, No. 5, pp. 383-392.

[43] Trolinger, J., and Field, R., 1980, "Coal Particle Combustion Studied by Holography," AIAA 18th Aerospace Sciences Meeting, Paper No. 80-0018, Pasadena, CA.

[44] Uemura, T., Yamamoto, F., and Ohmi, K., 1989, "A High Speed Algorithm of Image Analysis for Real Time Measurement of Two-dimensional Velocity Distribution," Proceedings, Flow Visualization - 1989, Winter Annual Meeting ASME, California, pp. 129-133.

[45] Utami, T., Blackwelder, R. E., and Ueno, T., 1991, "A Cross-correlation Technique for Velocity Field Extraction from Particulate Visualization," Exp. Fluids, Vol. 10, pp. 213-223.

[46] Vikram, C. S., and Billet, M. L., 1984, "Far-Field Holography at Non-Image Planes for Size Analysis of Small Particles," Applied Physics B, Vol. 33, pp. 149-153.

[47] Weingartner, I., 1983, "Holography - Techniques and Applications," J. Physics E, Vol. 16, pp. 16-23.

[48] Zarschizky, H., and Lauterborn, W., 1983, "Digital Picture Processing on High Speed Holograms," IEEE Publ. 83, CH1954-7, pp. 49-56.

Appendix A

Nonlinear Data Fitting

Let us assume we have some data, y, that we wish to fit as a non-linear function of another variable, x. Let the non-linear function contain M unknown parameters $a_k$, $k = 1, 2, \ldots, M$.
Then, the model to be fitted is represented as

$$ y = y(x; \mathbf{a}) \qquad (A.1) $$

A function that measures the merit of the fitted function is $\chi^2$:

$$ \chi^2(\mathbf{a}) = \sum_{i=1}^{N} \left[ \frac{y_i - y(x_i; \mathbf{a})}{\sigma_i} \right]^2 \qquad (A.2) $$

The gradient of $\chi^2$ with respect to the parameters $\mathbf{a}$, which will be zero at the $\chi^2$ minimum, has components

$$ \frac{\partial \chi^2}{\partial a_k} = -2 \sum_{i=1}^{N} \frac{y_i - y(x_i; \mathbf{a})}{\sigma_i^2} \, \frac{\partial y(x_i; \mathbf{a})}{\partial a_k}, \qquad k = 1, 2, \ldots, M \qquad (A.3) $$

Taking an additional partial derivative gives

$$ \frac{\partial^2 \chi^2}{\partial a_k \, \partial a_l} = 2 \sum_{i=1}^{N} \frac{1}{\sigma_i^2} \left[ \frac{\partial y(x_i; \mathbf{a})}{\partial a_k} \, \frac{\partial y(x_i; \mathbf{a})}{\partial a_l} - \left[ y_i - y(x_i; \mathbf{a}) \right] \frac{\partial^2 y(x_i; \mathbf{a})}{\partial a_k \, \partial a_l} \right] \qquad (A.4) $$

It is conventional to remove the factors of 2 by defining

$$ \beta_k = -\frac{1}{2} \frac{\partial \chi^2}{\partial a_k}, \qquad \alpha_{kl} = \frac{1}{2} \frac{\partial^2 \chi^2}{\partial a_k \, \partial a_l} \qquad (A.5) $$

We expect the $\chi^2$ function to be well approximated by a quadratic form, which we can write as

$$ \chi^2(\mathbf{a}) \approx \gamma - \mathbf{d} \cdot \mathbf{a} + \frac{1}{2} \, \mathbf{a} \cdot \mathbf{D} \cdot \mathbf{a} \qquad (A.6) $$

where $\mathbf{d}$ is an M-vector and $\mathbf{D}$ is an $M \times M$ matrix. If the quadratic approximation is a good one, we know how to jump from the current trial parameters $\mathbf{a}_{cur}$ to the minimizing ones $\mathbf{a}_{min}$ in a single leap,

$$ \mathbf{a}_{min} = \mathbf{a}_{cur} + \mathbf{D}^{-1} \cdot \left[ -\nabla \chi^2(\mathbf{a}_{cur}) \right] \qquad (A.7) $$

On the other hand, equation A.6 might be a poor local approximation to the shape of the function that we are trying to minimize at $\mathbf{a}_{cur}$. In that case, about all we can do is take a step down the gradient, as in the steepest descent method. In other words,

$$ \mathbf{a}_{next} = \mathbf{a}_{cur} - \text{constant} \times \nabla \chi^2(\mathbf{a}_{cur}) \qquad (A.8) $$

To use equation A.7 or equation A.8, we must be able to compute the gradient of the $\chi^2$ function at any set of parameters $\mathbf{a}$. To use equation A.7 we also need the matrix $\mathbf{D}$, the second derivative (Hessian) matrix of the $\chi^2$ merit function, at any $\mathbf{a}$.

Making $[\alpha] = \frac{1}{2} \mathbf{D}$, equation A.7 can be rewritten, with the definitions of equation A.5, as the set of linear equations

$$ \sum_{l=1}^{M} \alpha_{kl} \, \delta a_l = \beta_k \qquad (A.9) $$

So equation A.8, the steepest descent formula, translates to

$$ \delta a_l = \text{constant} \times \beta_l \qquad (A.10) $$

Marquardt (1963) has put forth an elegant method for varying smoothly between the extremes of the inverse-Hessian method (equation A.9) and the steepest descent method (equation A.10). The first insight of Marquardt is that we can divide the constant in equation A.10 by some fudge factor $\lambda$, with the possibility of setting $\lambda \gg 1$ to reduce the step size. In other words, replace equation A.10 by

$$ \delta a_l = \frac{1}{\lambda \alpha_{ll}} \, \beta_l \qquad \text{or} \qquad \lambda \alpha_{ll} \, \delta a_l = \beta_l \qquad (A.11) $$

Marquardt's second insight is that equations (A.11) and (A.9) can be combined if we define a new matrix $\alpha'$ using the following prescription

$$ \alpha'_{jj} = \alpha_{jj} (1 + \lambda), \qquad \alpha'_{jk} = \alpha_{jk} \quad (j \neq k) \qquad (A.12) $$

and then replacing both (A.11) and (A.9) by

$$ \sum_{l=1}^{M} \alpha'_{kl} \, \delta a_l = \beta_k \qquad (A.13) $$

When $\lambda$ is very large, the matrix $\alpha'$ becomes diagonally dominant, so equation (A.13) is identical to (A.11). On the other hand, as $\lambda$ approaches zero, equation (A.13) becomes (A.9).

Given an initial guess for the set of fitted parameters $\mathbf{a}$, Marquardt recommends the following algorithm:

• Compute $\chi^2(\mathbf{a})$.
• Pick a modest value for $\lambda$, say $\lambda = 0.001$.
• (†) Solve the linear equations (A.13) for $\delta \mathbf{a}$ and evaluate $\chi^2(\mathbf{a} + \delta \mathbf{a})$.
• If $\chi^2(\mathbf{a} + \delta \mathbf{a}) \geq \chi^2(\mathbf{a})$, increase $\lambda$ by a factor of 10 (or any other substantial factor) and go back to (†).
• If $\chi^2(\mathbf{a} + \delta \mathbf{a}) < \chi^2(\mathbf{a})$, decrease $\lambda$ by a factor of 10, update the trial solution $\mathbf{a} \leftarrow \mathbf{a} + \delta \mathbf{a}$, and go back to (†).

In practice, Marquardt suggests that one stop iterating on the first or second occasion that $\chi^2$ decreases by a negligible amount, say either less than 0.1 absolutely or (in case roundoff prevents that from being reached) some fractional amount like $10^{-3}$.

Once the acceptable minimum has been found, one can set $\lambda = 0$ and compute the matrix

$$ [C] = [\alpha]^{-1} \qquad (A.14) $$

which, as before, is the estimated covariance matrix of the standard errors in the fitted parameters $\mathbf{a}$. In this way, we obtain the best-fit parameters. A compact illustration of this recipe follows.
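The sketch below implements the loop above in Python with NumPy. It is an illustration only, not the software used in this thesis: the routine names (`chi2`, `alpha_beta`, `marquardt`), the straight-line test model, and the unit measurement errors are all assumptions made for the example, and the second-derivative term of equation A.4 is dropped, as is customary.

```python
import numpy as np

# Illustrative data and model: fit y = a0*x + a1 with sigma_i = 1.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0 + 0.05 * np.random.randn(x.size)

def model(a):
    # y(x; a) of equation A.1
    return a[0] * x + a[1]

def chi2(a):
    # Merit function of equation A.2 (sigma_i = 1)
    r = y - model(a)
    return float(np.sum(r * r))

def alpha_beta(a):
    # [alpha] and beta of equation A.5; the second-derivative term of
    # equation A.4 is neglected, since it is weighted by the residuals.
    J = np.stack([x, np.ones_like(x)], axis=1)   # dy/da_k at each x_i
    r = y - model(a)
    return J.T @ J, J.T @ r

def marquardt(a, lam=1e-3, tol=0.1, max_iter=100):
    # The algorithm recommended above, starting from the guess a.
    chi2_cur = chi2(a)
    for _ in range(max_iter):
        alpha, beta = alpha_beta(a)
        alpha_prime = alpha + lam * np.diag(np.diag(alpha))  # equation A.12
        da = np.linalg.solve(alpha_prime, beta)              # equation A.13
        chi2_new = chi2(a + da)
        if chi2_new >= chi2_cur:
            lam *= 10.0      # failed step: lean toward steepest descent
        else:
            converged = chi2_cur - chi2_new < tol
            a, chi2_cur = a + da, chi2_new
            lam /= 10.0      # successful step: lean toward inverse-Hessian
            if converged:
                break
    return a

print(marquardt(np.array([0.0, 0.0])))   # should approach [2.0, 1.0]
```

Neglecting the second-derivative term in equation A.4 is conventional: that term is multiplied by the residuals $[y_i - y(x_i; \mathbf{a})]$, which are small and of either sign near a good fit, so omitting it does not change the minimum to which the iteration converges.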
Appendix B

Shrink and Blow Technique

The shrink and blow technique is commonly used to remove background noise in image processing. It is typically applied to a binary image ('white' or 'black', that is, '1' or '0'). Assume in the following discussion that 'white' represents the background, and 'black' represents objects and noise.

The shrink procedure is as follows. If a pixel is '0', that is, a 'black' pixel, the area surrounding that pixel, located at (i, j), is examined. The area to be considered is (2N + 1) by (2N + 1) pixels, centred on (i, j). The number of 'black' pixels, $N_p$, is counted as

$$ N_p = \sum_{x=i-N}^{i+N} \; \sum_{y=j-N}^{j+N} S_{x,y}, \qquad S_{x,y} = \begin{cases} 1 & \text{if } I_{x,y} = 0 \\ 0 & \text{otherwise} \end{cases} \qquad (B.1) $$

If $N_p$ is greater than a threshold value $N_t$, which means there are enough surrounding object pixels, the pixel retains the value '0'. Otherwise the pixel is taken to be noise and is set to '1'.

By adjusting N, we can change the convolution window area. We can also control the degree of noise suppression by adjusting the relative values of $N_t$ and N. If $N_t$ is selected too large, some small objects may be mistaken for noise. The selection of the window size (N) and the threshold value ($N_t$) must therefore be based on the characteristics of the images.

Shrinkage of an object image occurs automatically during this phase, because the pixels near the edges of objects are eroded due to a shortage of adjacent object pixels. Consequently, a corresponding procedure is needed to dilate the objects after the shrink procedure.

A dilation or blow procedure is the inverse of a shrink procedure: if, following shrinking, a pixel is still '0', a square of (2N + 1) by (2N + 1) pixels centred on that pixel is also set to zero.

Following the blow procedure, the object image erosion caused by the shrink procedure is compensated for exactly, provided that the blow window is the same size as the shrink window. A short sketch of the two passes is given below.
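The following is a minimal sketch of the shrink and blow passes, assuming a NumPy binary image in which 0 marks object and noise pixels and 1 marks background, as in the discussion above; the function names and the unoptimized pixel loops are illustrative, not the thesis implementation.

```python
import numpy as np

def shrink(img, N, Nt):
    """One shrink pass (equation B.1): a '0' (object) pixel survives only
    if its (2N+1) x (2N+1) neighbourhood contains more than Nt object
    pixels; otherwise it is treated as noise and set to background."""
    out = np.ones_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            if img[i, j] == 0:
                # Window clipped at the image borders; slicing past the
                # upper bound is handled by Python automatically.
                win = img[max(i - N, 0):i + N + 1, max(j - N, 0):j + N + 1]
                Np = np.count_nonzero(win == 0)   # N_p of equation B.1
                if Np > Nt:
                    out[i, j] = 0                 # enough support: keep
    return out

def blow(img, N):
    """One blow (dilation) pass: every surviving '0' pixel zeroes the
    (2N+1) x (2N+1) square centred on it, restoring the eroded edges."""
    out = np.ones_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            if img[i, j] == 0:
                out[max(i - N, 0):i + N + 1, max(j - N, 0):j + N + 1] = 0
    return out

# Noise suppression = shrink followed by blow with the same window size N,
# e.g. clean = blow(shrink(binary_image, N=2, Nt=15), N=2)
```

For example, with N = 2 the window holds 25 pixels, so a threshold of, say, $N_t$ = 15 keeps only pixels whose neighbourhood is mostly object; isolated noise specks fall well short of that count and are erased.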
Appendix C

Object Edge Tracking Technique

Object edge tracking is used to highlight the edges of objects, especially in a binary image. The basic idea of edge tracking is shown in Figure C.1. A starting point is selected first. Assuming point (1) is the starting point, the edge-tracking procedure finds the remaining edge points, (2), (3), ..., in sequence.

[Figure C.1: Schematic of edge-tracking technique. A: schematic of the edge-tracking technique; B: enlarged edge of an object.]

Let the coordinates of point (1) be (x, y). Then, an area around this point, given by equation C.1, is scanned in a fixed sequence. The sequence is controlled by a pointer, dir (0-7), with the associated direction arrays and scan pattern shown in Figure C.2.

$$ xpos = x + xdir[dir], \qquad ypos = y + ydir[dir] \qquad (C.1) $$

    xdir[0] =  0    ydir[0] = -1             5  4  3
    xdir[1] =  1    ydir[1] = -1             6  *  2
    xdir[2] =  1    ydir[2] =  0             7  0  1
    xdir[3] =  1    ydir[3] =  1
    xdir[4] =  0    ydir[4] =  1    (* marks the current point (x, y))
    xdir[5] = -1    ydir[5] =  1
    xdir[6] = -1    ydir[6] =  0
    xdir[7] = -1    ydir[7] = -1

Figure C.2: Scanning sequence and direction matrix

The methodology for implementing object edge tracking is as follows. The status of the previously scanned pixel is held, and the status of the currently scanned pixel (given by equation C.1) is enquired. If the previous pixel is an object pixel and the current pixel is background, the previous pixel is identified as an edge point, giving point (2). The same procedure is then used to find the next point, (3). This process continues until the scanning returns to the neighbourhood of the starting point. A minimal sketch of this loop is given below.
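The sketch below is written as a standard Moore-neighbour trace built on the direction arrays of Figure C.2 (an equivalent formulation of the object/background transition test described above), and it is an illustration under stated assumptions rather than the thesis code: the image is a NumPy array indexed [y, x] with 0 = object and 1 = background, the caller supplies a known starting edge pixel together with an adjacent background ('backtrack') pixel, the object is assumed not to touch the image border, and the stopping test is the simple return-to-start criterion.

```python
import numpy as np

# Scanning sequence of Figure C.2: directions 0-7 step around the
# 8-neighbourhood of (x, y) in ring order.
XDIR = [0, 1, 1, 1, 0, -1, -1, -1]
YDIR = [-1, -1, 0, 1, 1, 1, 0, -1]

def ring(p):
    """The 8 neighbours of p = (x, y), in the scan order of equation C.1."""
    x, y = p
    return [(x + XDIR[d], y + YDIR[d]) for d in range(8)]

def track_edge(img, start, backtrack, max_steps=100000):
    """Trace an object boundary in a binary image (0 = object, 1 = background).

    start     -- a known edge pixel (x, y), e.g. point (1) in Figure C.1
    backtrack -- a background neighbour of start from which the scan resumes
    """
    boundary = [start]
    p, b = start, backtrack
    for _ in range(max_steps):
        nbrs = ring(p)
        i = nbrs.index(b)            # resume the scan just past the backtrack
        nxt = None
        for k in range(1, 9):
            q = nbrs[(i + k) % 8]
            if img[q[1], q[0]] == 0: # background-to-object transition:
                nxt = q              # q is the next edge point
                break
            b = q                    # remember last background pixel scanned
        if nxt is None or nxt == start:
            break                    # isolated pixel, or back at the start
        boundary.append(nxt)
        p = nxt
    return boundary
```

Because each step examines at most eight neighbours, the cost of the trace is proportional to the object's perimeter rather than its area, which is what makes edge tracking attractive for highlighting object outlines.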
